Journal contacts

Acta IMEKO, ISSN: 2221-870X, September 2022, Volume 11, Number 3

About the journal
Acta IMEKO is an e-journal reporting on contributions on the state and progress of the science and technology of measurement. The articles are mainly based on presentations given at IMEKO workshops, symposia and congresses. The journal is published by IMEKO, the International Measurement Confederation. The ISSN, the international identifier for serials, is 2221-870X.

About IMEKO
The International Measurement Confederation, IMEKO, is an international federation of currently 42 national member organisations individually concerned with the advancement of measurement technology. Its fundamental objectives are the promotion of the international interchange of scientific and technical information in the field of measurement, and the enhancement of international co-operation among scientists and engineers from research and industry.

Addresses
Principal contact: Prof. Francesco Lamonaca, University of Calabria, Department of Computer Science, Modelling, Electronic and System Science, Via P. Bucci, 41C, VI floor, Arcavacata di Rende, 87036 (CS), Italy. E-mail: editorinchief.actaimeko@hunmeko.org
Support contact: Dr. Dirk Röske, Physikalisch-Technische Bundesanstalt (PTB), Bundesallee 100, 38116 Braunschweig, Germany. E-mail: dirk.roeske@ptb.de

Editor-in-Chief: Francesco Lamonaca, Italy
Founding Editor-in-Chief: Paul P. L. Regtien, Netherlands
Associate Editor: Dirk Röske, Germany
Copy Editors: Egidio De Benedetto, Italy; Silvia Sangiovanni, Italy
Layout Editors: Dirk Röske, Germany; Leonardo Iannucci, Italy; Domenico Luca Carnì, Italy

Editorial Board: Leopoldo Angrisani, Italy; Filippo Attivissimo, Italy; Eulalia Balestieri, Italy; Eric Benoit, France; Paolo Carbone, Italy; Lorenzo Ciani, Italy; Catalin Damian, Romania; Pasquale Daponte, Italy; Luca De Vito, Italy; Sascha Eichstaedt, Germany; Ravi Fernandez, Germany; Luigi Ferrigno, Italy; Edoardo Fiorucci, Italy; Alistair Forbes, United Kingdom; Helena Geirinhas Ramos, Portugal; Sabrina Grassini, Italy; Leonardo Iannucci, Italy; Fernando Janeiro, Portugal; Konrad Jedrzejewski, Poland; Andy Knott, United Kingdom; Yasuharu Koike, Japan; Dan Kytyr, Czechia; Francesco Lamonaca, Italy; Aimé Lay-Ekuakille, Italy; Massimo Lazzaroni, Italy; Fabio Leccese, Italy; Rosario Morello, Italy; Michele Norgia, Italy; Franco Pavese, Italy; Pedro Miguel Pinto Ramos, Portugal; Nicola Pompeo, Italy; Sergio Rapuano, Italy; Renato Reis Machado, Brazil; Álvaro Ribeiro, Portugal; Gustavo Ripper, Brazil; Dirk Röske, Germany; Maik Rosenberger, Germany; Alexandru Salceanu, Romania; Constantin Sarmasanu, Romania; Lorenzo Scalise, Italy; Emiliano Schena, Italy; Michela Sega, Italy; Enrico Silva, Italy; Pier Giorgio Spazzini, Italy; Krzysztof Stepien, Poland; Ronald Summers, UK; Marco Tarabini, Italy; Tatjana Tomić, Croatia; Joris Van Loco, Belgium; Zsolt Viharos, Hungary; Bernhard Zagar, Austria; Davor Zvizdic, Croatia

Section Editors (vol. 7 - 11):
Yvan Baudoin, Belgium; Piotr Bilski, Poland; Francesco Bonavolontà, Italy; Giuseppe Caravello, Italy; Carlo Carobbi, Italy; Marcantonio Catelani, Italy; Mauro D'Arco, Italy; Egidio De Benedetto, Italy; Alessandro Depari, Italy; Alessandro Germak, Italy; István Harmati, Hungary; Min-Seok Kim, Korea; Bálint Kiss, Hungary; Momoko Kojima, Japan; Koji Ogushi, Japan; Vilmos Palfi, Hungary; Jeerasak Pitakarnnop, Thailand; Md Zia Ur Rahman, India; Fabio Santaniello, Italy; Jan Saliga, Slovakia; Emiliano Sisinni, Italy; Ciro Spataro, Italy; Oscar Tamburis, Italy; Zafar Taqvi, USA; Jorge C. Torres-Guzman, Mexico; Ioan Tudosa, Italy; Ian Veldman, South Africa; Rugkanawan Wongpithayadisai, Thailand; Claudia Zoani, Italy

Application of Butterworth high-pass filter as an approximation of Wood-Anderson seismometer frequency response to earthquake signal recording

Acta IMEKO, ISSN: 2221-870X, December 2020, Volume 9, Number 5, 379-382

Hamidatul Husna Matondang (1), Endra Joelianto (2), Sri Widiyantoro (3)
1 BMKG, Jakarta, Indonesia, hamidatul.husna@bmkg.go.id
2 ITB, Bandung, Indonesia, ejoel@tf.itb.ac.id
3 ITB, Bandung, Indonesia, sriwid@geoph.itb.ac.id

Abstract: A method for generating maximum amplitude and signal-to-noise ratio values by applying a second-order high-pass Butterworth filter in local seismic magnitude calculations is proposed. The test data are signals from a local earthquake that occurred in the Sunda Strait on 8 April 2012. Based on the experimental results, a second-order Butterworth high-pass filter with an 8 Hz cut-off frequency and a gain of 2200, used as an approximation of the Wood-Anderson seismometer frequency response, provides better maximum amplitude, SNR and magnitude values than the simulated Wood-Anderson frequency response.

Keywords: high-pass Butterworth filter; Wood-Anderson seismometer; frequency response simulation; instrument correction

1. Introduction

Initially, the local seismic magnitude scale was based on magnitudes obtained from Wood-Anderson seismometer recordings. However, the Wood-Anderson seismometer is a short-period instrument with analog recording, in which the recording speed on paper is limited. As a result, earthquake recordings at stations close to the earthquake location experienced clipping. Therefore, earthquake recordings from the Wood-Anderson seismometer are no longer used in the calculation of local magnitude scales, and the study of local magnitude has shifted to the use of digital recording. In order to produce digital recordings as if they came from a Wood-Anderson seismometer, the digital recordings are processed to simulate the Wood-Anderson frequency response. A recording from the simulated Wood-Anderson frequency response provides more accurate data than the original instrument, because there is no clipping of the earthquake recording [1]. In practice, a Butterworth filter is additionally applied to eliminate microseismic noise [2].

Considerable research has been done on simulating seismometer responses. Among these studies, the design of a simulation of the Wood-Anderson frequency response using recursive filters with differential equations was considered in [3].
The seismometer frequency response is considered as a second-order high-pass filter; if the filter is applied to an earthquake signal, a second-order Butterworth high-pass filter with a cut-off frequency of 2 Hz can approximate the local magnitude to within 0.1 magnitude units [1]. The seismometer frequency response can also be approximated by a 5th-order Butterworth low-pass filter cascaded with a 3rd-order Butterworth high-pass filter [4]; these filters are used to correct short-period and broadband seismometer responses. Pole and zero determination for recursive filters used for seismometer instrument correction was provided in [5].

Unlike previous studies, this paper tests earthquake signals using a second-order Butterworth high-pass filter with cut-off frequencies from 0.1 to 12 Hz. The test data are local earthquake signals and noise signals recorded by four recording stations: Cigelis Jawa Indonesia (CGJI), Lembang (LEM), Cisompet (CISI) and Karang Pucung Jawa Indonesia (KPJI). The local earthquake signals are those of the earthquake that occurred in the Sunda Strait on 8 April 2012 at 08:06:47.1 AM UTC+07:00, with a magnitude of 4.6, at longitude 105.859998 and latitude -6.94, with a depth of 100 km. The noise signals were taken in the morning at 03:00 AM UTC+07:00, at noon at 12:00 PM UTC+07:00 and at night at 10:00 PM UTC+07:00 [6].

From the test results, the gain and cut-off frequency of the filter are configured considering the characteristics of the recorded data, and the filter is then applied to the signal. The filter is expected to produce higher amplitude and signal-to-noise ratio (SNR) values than the simulated Wood-Anderson frequency response. The Butterworth-filtered results are used as an approximation of the earthquake recordings that would be obtained from the typical frequency response of a Wood-Anderson seismometer.

2. Proposed methodology

The design is based on the similarity between the responses of the Butterworth high-pass filter and of the Wood-Anderson seismometer, which is a second-order system [7]. The filter is applied to earthquake signals used for local seismic magnitude calculations. From the filtered signal, the maximum amplitude in the S phase and the signal-to-noise ratio (SNR) are obtained. For comparison, the maximum amplitude and SNR of the filtered earthquake signals are compared with those of the earthquake signal on a simulated Wood-Anderson seismogram. The research flow (collection of BMKG catalogue and waveform data, conversion from SEED to SAC, resampling, determination of the Butterworth high-pass filter coefficients, application of the filter, component rotation, and readings of maximum amplitude and SNR, against the simulated reference) is shown in Figure 1.

Figure 1: Flowchart of the method.
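Where the methodology above determines the Butterworth high-pass filter coefficients and applies the filter, the following minimal sketch illustrates the idea in Python. It assumes numpy and scipy are available and that a displacement waveform has already been converted and resampled; the 8 Hz cut-off and the gain of 2200 are the values arrived at in Section 3, while the sampling rate and the synthetic input trace are placeholders, not the BMKG data.

```python
import numpy as np
from scipy import signal

def wood_anderson_approx(x, fs, fc=8.0, gain=2200.0, order=2):
    """Approximate the Wood-Anderson response with a gained
    second-order Butterworth high-pass filter, as proposed here."""
    b, a = signal.butter(order, fc, btype="highpass", fs=fs)
    return gain * signal.lfilter(b, a, x)

# Placeholder trace standing in for a resampled displacement record.
fs = 100.0                            # assumed sampling rate in Hz
t = np.arange(0.0, 60.0, 1.0 / fs)
x = np.random.randn(t.size)
y = wood_anderson_approx(x, fs)
print("maximum amplitude:", np.abs(y).max())
```

In the actual procedure, the maximum amplitude would be read in the S-phase window of the filtered trace and compared against the simulated Wood-Anderson seismogram.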
3. Analysis

The results of the Butterworth filter test applied to BMKG data show almost similar amplitude values for each earthquake recording station. The average maximum amplitude for the four recording stations is 24,000,000 nm, produced by the Butterworth filter with a gain of 3000 and a cut-off frequency of 0.1 Hz, while the minimum amplitude is obtained with a gain of 1000 and a cut-off frequency around 0.1 Hz (see Figure 2).

Figure 2: Cut-off frequency, gain and maximum amplitude curves.

The frequency responses of second-order Butterworth filters with gains of 2000, 2200 and 2300 are all similar to the frequency response of the Wood-Anderson seismometer; however, it is the second-order Butterworth response with a gain of 2200 and an 8 Hz cut-off frequency that comes closest to the maximum amplitude of the Wood-Anderson seismometer. With a gain of 1000 the filter yields a magnitude smaller than that generated by the Wood-Anderson seismometer, whereas with a gain of 3000 it yields a greater magnitude (see Figure 3).

Figure 3: Comparison of the Wood-Anderson seismometer frequency response with Butterworth filter responses for cut-off frequencies of 7 to 9 Hz.

Table 1: Signal-to-noise ratio (%) of the simulated Wood-Anderson seismometer and of the Butterworth-filtered signal.

Station | Simulated Wood-Anderson         | Butterworth-filtered
        | 3:00 AM | 12:00 PM | 10:00 PM  | 3:00 AM | 12:00 PM | 10:00 PM
CGJI    |   6.7   |   3.9    |   1.8     |   5.1   |   6.2    |   6.6
LEM     |   4.9   |   1.5    |   1.1     |  -1.3   |  -1.3    |   1.3
CISI    |   2.1   |   1.4    |   4.1     |   4.6   |   3.7    |   5.2
KPJI    |  -9.8   |  -7.8    |   2.9     |   3.0   |   2.8    |   3.1

The largest maximum amplitude at each recording station is obtained from the Butterworth filter simulation, while the results from the simulated Wood-Anderson frequency response have lower values. In calculating the signal-to-noise ratio, the Butterworth filter simulation gives an increase in SNR at each recording station, although at the LEM and KPJI stations the SNR decreases in the morning; this is caused by noise due to human activities. At the KPJI station the SNR percentage decreased for the signal simulated with the Wood-Anderson frequency response, while it increased for the signal simulated with the Butterworth filter. However, this does not affect the local magnitude results: the Butterworth filter simulation produces local magnitude values close to the magnitude of 4.6 released by BMKG [8], while the simulated Wood-Anderson frequency response gives a local magnitude of 4.5. The local magnitude obtained from the Butterworth filter simulation is also more consistent.
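For reference, the local magnitude scale of Hutton and Boore [7], against which Wood-Anderson-type amplitudes are conventionally evaluated, can be sketched as below. The amplitude and distance values are purely illustrative assumptions; the attenuation terms actually used by BMKG for the Sunda Strait event may differ.

```python
import math

def local_magnitude(amp_mm, r_km):
    """ML after Hutton & Boore (1987) [7]: amp_mm is the maximum
    trace amplitude in mm on a (simulated) Wood-Anderson record,
    r_km the hypocentral distance in km."""
    return (math.log10(amp_mm)
            + 1.110 * math.log10(r_km / 100.0)
            + 0.00189 * (r_km - 100.0) + 3.0)

# Illustrative only: 24,000,000 nm = 24 mm at a hypothetical 150 km.
print(round(local_magnitude(24.0, 150.0), 1))
```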
4. Summary

In this paper, an earthquake signal processing algorithm was designed so that the signal is presented in physical displacement units. The second-order Butterworth filter with a gain of 2200 and a cut-off frequency of 8 Hz gave a higher maximum amplitude and SNR than the simulated Wood-Anderson seismometer; hence, the Butterworth filter can be used as an approximation of the Wood-Anderson seismometer. This research can be further developed for application to earthquake magnitude calculation; to this end, the local magnitude needs to be calculated more accurately, and a greater number of observation stations is necessary.

5. References
[1] J. Havskov, L. Ottemöller, "Routine Data Processing in Earthquake Seismology", Springer, 2010, pp. 14-160.
[2] L. Ottemöller, S. Sargeant, "A local magnitude scale ML for the United Kingdom", Bulletin of the Seismological Society of America, vol. 103, 2013, pp. 2884-2893.
[3] H. Kanamori, P. Maechling, E. Hauksson, "Continuous monitoring of ground-motion parameters", Bulletin of the Seismological Society of America, vol. 89, 1999, pp. 311-316.
[4] M. M. Haney, J. Power, M. West, P. Michaels, "Causal instrument corrections for short-period and broadband seismometers", Seismological Research Letters, vol. 83, 2012, pp. 834-845.
[5] J. F. Anderson, J. M. Lees, "Instrument corrections by time-domain deconvolution", Seismological Research Letters, vol. 85, 2014, pp. 197-201.
[6] http://202.90.198.92/arclink/query?sesskey
[7] L. K. Hutton, D. M. Boore, "The ML scale in Southern California", Bulletin of the Seismological Society of America, vol. 77, 1987, pp. 2074-2094.
[8] http://172.19.3.51

Announcement of Acta IMEKO second issue 2022

Acta IMEKO, ISSN: 2221-870X, May 2022, Volume 11, Number 2, 1

Francesco Lamonaca (1)
1 Department of Computer Science, Modeling, Electronics and Systems Engineering (DIMES), University of Calabria, Ponte P. Bucci, 87036 Arcavacata di Rende, Italy

Section: Editorial
Citation: Francesco Lamonaca, Announcement of Acta IMEKO second issue 2022, Acta IMEKO, vol. 11, no. 2, article 1, May 2022, identifier: IMEKO-ACTA-11 (2022)-02-01
Received May 4, 2022; in final form May 4, 2022; published May 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Francesco Lamonaca, e-mail: editorinchief.actaimeko@hunmeko.org

Dear Readers,
the second issue of Volume 11 has started. For Acta IMEKO this is the first editorial announcing the start of an issue, since from now on submitted papers will be published as soon as they are ready. This new publication policy is in line with the actions aimed at speeding up publication and attracting high-value papers. This issue includes the papers of the general issue together with a special issue segment of selected papers from the two events organized by TC17, the IMEKO Technical Committee on Robotic Measurement, managed by Prof. Zafar Taqvi, research fellow at the University of Houston Clear Lake in Texas. Annually, TC17 organizes the International Symposium on Measurements and Control in Robotics (ISMCR), a full-fledged event focusing on various aspects of international research, applications and trends in robotic innovations for the benefit of humanity, advanced human-robot systems and applied technologies, e.g.
in the allied fields of telerobotics, telexistence, simulation platforms and environments, and mobile work machines, as well as virtual reality (VR), augmented reality (AR), and 3D modelling and simulation. During IMEKO Congress years, TC17 organizes only topical events. In 2021, TC17 organized two virtual topical events, both following the COVID-19 restrictions: ISMCR 2021, with the theme "Virtual media technologies for the post-COVID-19 era", and TC17-VRISE, a jointly organized event with the theme "Robotics for risky interventions and environmental surveillance". These symposia are forums for the exchange of recent research results and futuristic ideas in robotics technologies and applications. They are of interest to a wide range of participants from government agencies, relevant international institutions, universities and research organizations working on futuristic applications of automated vehicles; the presentations are also of interest to the media as well as the general public. The papers in the special issue segment were specially selected from these two events.

We are sure that Acta IMEKO readers will find this special issue a further source of ideas for their specific research fields. Further information about the published papers will be given in June, as usual, in the introductory note. We hope that you will enjoy your reading and that Acta IMEKO will remain your main source of new solutions and ideas and a valuable resource for spreading your results.

Zafar Taqvi, Chairperson of IMEKO TC17
Francesco Lamonaca, Editor-in-Chief

Development of a method for the determination of the pressure balance piston fall rate

Acta IMEKO, June 2014, Volume 3, Number 2, 44-47

Lovorka Grgec Bermanec (1), Davor Zvizdic (1), Vedran Simunovic (2)
1 Faculty of Mechanical Engineering and Naval Architecture, Laboratory for Process Measurement, 10000 Zagreb, Croatia
2 Faculty of Mechanical Engineering and Naval Architecture, Laboratory for Length Measurement, 10000 Zagreb, Croatia

Section: Research paper
Keywords: pressure balance; fall rate measurement; Sobel filter
Citation: Lovorka Grgec Bermanec, Marin Martinaga, Davor Zvizdic, Vedran Simunovic, Development of a method for the determination of the pressure balance piston fall rate, Acta IMEKO, vol. 3, no. 2, article 11, June 2014, identifier: IMEKO-ACTA-03 (2014)-02-11
Editor: Paolo Carbone, University of Perugia
Received April 15th, 2013; in final form June 22nd, 2013; published June 2014
Copyright: © 2014 IMEKO. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: (none reported)
Corresponding author: Lovorka G. Bermanec, e-mail: lovorka.grgec@fsb.hr

Abstract: This paper describes a laboratory method for the determination of the pressure balance piston fall rate using a simple camera-based optical system with internally developed software. Measurements were carried out on three standard piston/cylinder units in the Croatian national pressure laboratory (LPM) using gas and oil as transmitting medium. The measurement equipment, procedure and fall rate results for three sets of measurements are given, as well as an evaluation of the measurement uncertainty. The results were compared with other relevant measurements.

1. Introduction

The determination of the pressure balance piston fall rate is important for several reasons. As an internal measure for quality assurance, it indicates deformation or changes in the effective area [1]. It is also used in the "cross-float" calibration of other pressure balances, where the fall rate obtained when the two balances are connected is compared with the natural fall rate.
If the fall rates differ, small masses can be added to or subtracted from one of the pressure balances, and the measurements are repeated until the fall rates agree [2]. For the periodical determination of the fall rates of the LPM standard units and for internal quality assurance, it was necessary to develop a simple, efficient, repeatable and sufficiently precise method. Since there is no standard procedure for this measurement, there was no limitation in selecting the equipment. The piston rate of fall is usually determined with laser sensors or expensive optics; further equipment taken into consideration in this work comprises eddy-current sensors and different cameras. A simple camera was chosen, and its measurement possibilities were analysed regarding accuracy, accessibility and price.

2. Measurement method and calculation procedure

Measurements were performed with an amateur camera (Nikon digital camera) equipped with appropriate lenses. A plane parallel gauge block of 1.5 mm thickness was used to relate the relative motion to real displacement in millimetres. Before the measurement, while the piston was in the stand-up position, a snapshot of the standard gauge block was taken. The pictures were analysed using MATLAB software, which has built-in, predefined functions for various filters. In this measurement a Sobel filter was used; this filter is often employed for edge detection, which makes it possible to follow the relative movement of the pressure balance edges through consecutive pictures. After applying the Sobel filter, a simple method for transforming real thickness into pixel numbers was used; from the number of pixels, the movement between pictures can be calculated and converted into millimetres.

Two different results were obtained: for two consecutive measurements on the same gauge, the thickness of the standard gauge block was found to be 16 pixels and 15 pixels, respectively. This shows that a resolution of at least 0.1 mm was achieved with the described equipment. The recorded interval spanned somewhat more than one minute, so an adequate number of pictures could be acquired to achieve good accuracy. For the analysis of each measurement result, picture intervals of 3 seconds were used, i.e. 20 measurement points per minute. The initial concept was to take 60 measurement points per minute in order to follow the movement of the pressure balance piston precisely, but due to computer and camera limitations the smallest possible interval between pictures with this equipment was 3 seconds. To avoid any contact with the camera, all adjustment parameters and the start of the photographing process were controlled by a computer connected to the camera; every setting was adjusted by software that allowed camera control via a cable. In this way a fixed position of the camera was assured, which is critical for the applied measurement method.
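The paper implements the edge detection in MATLAB; as an illustration, the sketch below reproduces the same two steps in Python with scipy: a Sobel edge-magnitude image, then a pixel-to-millimetre scale derived from the 1.5 mm gauge block imaged over 16 pixels (the value reported above). The file path is hypothetical, the image-loading library is an assumption, and the step that locates the piston edge row in the filtered image is left out.

```python
import numpy as np
from scipy import ndimage
import imageio.v3 as iio  # assumed image-loading library

def sobel_edge_magnitude(path):
    """Edge-magnitude image of one camera frame via the Sobel operator."""
    img = iio.imread(path).astype(float)
    if img.ndim == 3:
        img = img.mean(axis=2)            # collapse RGB to grayscale
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    return np.hypot(gx, gy)

# Scale factor: the 1.5 mm gauge block spans about 16 pixels.
MM_PER_PX = 1.5 / 16                       # ~0.094 mm per pixel

def fall_rate_mm_per_min(edge_row_first, edge_row_last, dt_s):
    """Fall rate from the piston edge row index in two frames dt_s apart."""
    return (edge_row_last - edge_row_first) * MM_PER_PX / dt_s * 60.0
```

With the 3 s picture interval used here, twenty such readings per minute are available for averaging the fall rate.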
After all the photographs were taken and had passed through the Sobel filter (Figure 1), a series of 20 pictures for each measurement was obtained, carrying the information about the relative movement in pixels. To guard against accidental movement of the camera or imperfections of the edges visible in the pictures, the x-axis was kept constant; in this way, possible distortions of the pictures are constant over the whole y-axis movement, avoiding errors. The relative motion in pixels along the y-axis was converted into mm for every two consecutive pictures.

Measurements were performed on three different effective areas of piston/cylinder units, including oil and gas pressure balances (Figure 2). The oil pressure balance was a Budenberg design with a double piston for 600 bar and 60 bar loads (Figure 3 and Figure 4). The gas pressure balance used in this work was a DHI PG7601 with 3 bar load. The DHI standard pressure balance is equipped with an internal fall rate sensor, so there was an opportunity to compare the results obtained with the proposed method against those from the DHI pressure balance. The fall rate of the oil unit was compared with the last calibration certificates from the Physikalisch-Technische Bundesanstalt (PTB).

Figure 1: Computer image of the pressure balance and the plane parallel gauge block with applied Sobel filter.
Figure 2: Determination of the oil piston fall rate at 600 bar load.
Figure 3: Start of measurement (first photograph) of the Budenberg pressure balance with 600 bar load.
Figure 4: Start of measurement (computer image) of the Budenberg pressure balance with 600 bar load after application of the Sobel filter.

3. Fall rate results and measurement uncertainty evaluation

In this section the results for the three standard piston/cylinder units as well as the measurement uncertainty evaluation are presented. The fall rate measurement uncertainty $u_f$ was evaluated as a type B uncertainty [3], taking into account the gauge block uncertainty, the camera resolution and the time measurement as the major influence quantities:

$u_f = \sqrt{u_g^2 + u_r^2 + u_t^2}$   (1)

where $u_g$ is the uncertainty of the plane parallel gauge block, $u_r$ the uncertainty due to resolution, and $u_t$ the uncertainty due to the time measurement.
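As a check, eq. (1) with the rectangular-distribution divisor √3 and the sensitivity coefficients tabulated in Section 3.1 below can be evaluated directly; this minimal sketch reproduces the tabulated contributions.

```python
import math

SQRT3 = math.sqrt(3)                       # rectangular distributions

u_g = 0.1e-3 / SQRT3 * 1.0                 # gauge block: 0.1 um, in mm
u_r = 0.1 / SQRT3 * 1.0                    # camera resolution: 0.1 mm
u_t = 0.5 / SQRT3 * 0.06                   # timing: 0.5 s x 0.06 mm/s

u_f = math.sqrt(u_g**2 + u_r**2 + u_t**2)  # eq. (1)
print(f"u_f = {u_f * 1000:.0f} um")        # -> 60 um
print(f"U_f (k=2) = {2 * u_f:.2f} mm")     # -> 0.12 mm
```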
3.1. Oil operated system up to 600 bar

The measurement performed on the Budenberg standard pressure balance with 600 bar load showed a high rate of fall, as expected; this was clearly visible without any equipment. The results can be compared with those given in the calibration certificate obtained from PTB, which were (3.0 ± 0.5) mm/min. The results for the first LPM unit are given in Table 1. From these results it can be seen that the fall rate is too large for a pressure balance classified in the 0.02 accuracy class: the maximum piston fall rate defined in [1] is 1.5 mm/min. The uncertainty estimation is given in Table 2 for the first piston/cylinder unit only, although it was calculated for each unit separately with a different sensitivity coefficient for the time measurement.

Table 1: Determination of the oil piston fall rate at 600 bar load.

Table 2: Fall rate uncertainty evaluation.

Influence quantity         | Uncertainty | Factor | Sensitivity coefficient | Contribution to the standard uncertainty
Plane parallel gauge block | 0.1 µm      | √3     | 1                       | 0.06 µm
Resolution                 | 0.1 mm      | √3     | 1                       | 57.8 µm
Time                       | 0.5 s       | √3     | 0.06 mm/s               | 17.3 µm
Fall rate uncertainty u_f                                                   | 60 µm
Expanded fall rate measurement uncertainty (k = 2), U_f = 2·u_f             | 0.12 mm

3.2. Oil operated system up to 60 bar

The second measurement was performed on the same Budenberg oil unit, but using the low pressure range up to a 60 bar maximum load. The PTB results for this unit were (0.26 ± 0.10) mm/min. The results obtained from the picture analysis are shown in Table 3; good agreement between the PTB and LPM results can be observed.

Table 3: Determination of the oil piston fall rate at 60 bar load.

3.3. Gas operated standard system up to 3 bar

The third set of measurements was performed on the gas-operated DHI pressure balance with a maximum load of 3 bar. This unit is equipped with an internal fall rate sensor, so all results could be compared directly. The results obtained after the picture analysis are shown in Table 4. In this measurement the relative movement was 0.094 mm for one pixel, while the internal fall rate sensor has a precision of 0.1 mm; this prevented a direct comparison. As can be seen from the results in Table 4, the pressure balance fall rate is 0.56 mm/min, and the internal fall rate sensor started changing its value from 0.5 mm to 0.6 mm after one minute and three seconds. The maximum piston fall rate for gas-operated systems according to the OIML document is 1 mm/min. Comparing the results in all three cases with the calibration certificates, as well as with the internal fall rate sensor of the DHI pressure balance, it can be concluded that the new method is sufficiently accurate for further development.

Table 4: Determination of the DHI PG7601 gas piston fall rate at 3 bar load.

4. Conclusions

An internal laboratory method for the determination of the fall rate was developed at LPM, with a target uncertainty of 0.1 mm/min, using a camera-based optical system. The advantages of the proposed method are its simple and cheap measurement equipment; the measurement results obtained with it show good agreement with other relevant measurements. Its disadvantages lie in the choice of lenses. Further development of the method focuses on the automation of the measurements.

References
[1] OIML Regulation R110, edition 1994 (E), Pressure balances, Paris: Organisation Internationale de Métrologie Légale, 1994.
[2] R. S. Dadson, S. L. Lewis, G. N. Peggs, The Pressure Balance: Theory and Practice, 1st ed., HMSO, London, 1982.
[3] ISO Guide to the Expression of Uncertainty in Measurement, Geneva: ISO, 1995.

Laser-Doppler-vibrometer calibration by laser stimulation

Acta IMEKO, ISSN: 2221-870X, December 2020, Volume 9, Number 5, 357-360

H. Volkers (1), Th. Bruns (1)
1 Physikalisch-Technische Bundesanstalt, Braunschweig und Berlin, Germany, henrik.volkers@ptb.de, thomas.bruns@ptb.de

Abstract: A new set-up for primary laser vibrometer calibration was developed and tested at the acceleration laboratory of PTB. Contrary to existing set-ups, this configuration makes use of electro-optical excitation. While avoiding the limitations imposed by mechanical motion generators in classic set-ups, the new method still encompasses all components of commercial laser vibrometers in the calibration and thus goes beyond the current capabilities of purely electrical excitation schemes.
Keywords: primary calibration, laser Doppler vibrometer, LDV calibration

1. Introduction

Laser Doppler vibrometers (LDVs) are great tools for all kinds of vibration measurements, especially for use as a primary reference for the calibration of accelerometers as described in the standards [1, 2].

Figure 1: Schematic signal chain of a laser Doppler vibrometer.

Figure 1 shows a schematic diagram of the signal flow in an LDV. At the measurement point, the motion quantity x(t) is measured via a laser beam that passes through an interferometer (IF) including an acousto-optical modulator (AOM) and finally illuminates a photodetector (PD) with an intensity modulated by interference according to

$I(t) = I_0 \sin\!\left(2\pi f_{AOM}\, t + \frac{4\pi x(t)}{\lambda} + \tau_0\right) + B + e_{noise}$   (1)

with $f_{AOM}$ the frequency of the AOM, $\lambda$ the wavelength of the laser, $\tau_0$ a constant delay due to the time of flight of the laser light, $B$ a constant bias intensity typical for interference, and $e_{noise}$ a noise component, e.g. from stray ambient light. The voltage output $u_{PD,raw}$ of the photodetector follows the intensity with a certain additional delay and additional noise from embedded amplifiers, and feeds into the demodulation stage of the LDV controller. The internal processing of a commercial LDV is generally unknown to the user, but probably follows an arctangent demodulation scheme as described in [3]. The demodulated analog (voltage) output of the instrument is then characterized by its complex transfer function $S$ in the frequency domain:

$S_{ux}(f) = \frac{U_x(f)}{X(f)}$   (2)

where the signal delay is characterized by the phase of this complex quantity.
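To make eq. (1) and the arctangent demodulation concrete, the following numpy sketch synthesises an ideal photodetector signal for a virtual displacement and recovers the motion by quadrature demodulation. The 40 MHz carrier and the 200 MS/s sampling rate are the values used later in this paper; the He-Ne wavelength, the motion amplitude and the crude moving-average low-pass are illustrative assumptions, and the bias B and noise term of eq. (1) are omitted.

```python
import numpy as np

fs = 200e6                 # sampling rate (200 MS/s, as in Section 4)
f_aom = 40e6               # AOM carrier frequency
lam = 633e-9               # He-Ne wavelength (assumed)
t = np.arange(0, 200e-6, 1 / fs)

x = 50e-9 * np.sin(2 * np.pi * 1e3 * t)     # virtual 1 kHz motion

# Ideal photodetector intensity following eq. (1), without B and noise.
i_pd = np.sin(2 * np.pi * f_aom * t + 4 * np.pi * x / lam)

# Arctangent (quadrature) demodulation: mix to baseband, low-pass,
# then unwrap the phase and rescale to displacement.
iq = i_pd * np.exp(-2j * np.pi * f_aom * t)
k = 50                                       # crude moving-average LP
iq_lp = np.convolve(iq, np.ones(k) / k, mode="same")
phase = np.unwrap(np.angle(iq_lp))
x_rec = phase * lam / (4 * np.pi)  # recovered motion, up to a constant offset
```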
this method, however, requires good knowledge of the working principle of the hardware in order to provide adequate signal levels and carrier frequencies to the controller. in addition, it is not able to account for any signal preconditioning performed within the original laser head. on the other hand it suffers from far less limitations by avoiding any mechanical components[7]. 3 stimulation by amplitude modulated laser source the basic idea of the new set-up evolves from the eq. (1) and figure 1 and is shown in figure 2. the photo diode, being the central sensing part of the ldv, cannot distinguish the cause of an intensity variation. whether it is the result of optical interference or simply is an intensity variation caused by a modulated external light source makes no difference. if an appropriate light source with a suitable amplitude modulation following eq. (1) is targeted at its sensing element, the ldv response will be identical to the respective interference caused by real motion. this approach combines the benefits of those existing set-ups described above, while avoiding the disadvantages. while ommiting any mechanical moving components, that may limit the scope or accuracy, it still includes the potential preprocessing of the laser head in the calibration. all requested excitations for the calibration of ldvs can be provided by electrooptical means. in the new calibration set-up at ptb (c.f. figure 2), the light source is a common 10 mw laser diode (ld) of a wavelength of 635 nm, well matching the ldv’s he-ne wavelength. the bias current is adjusted such that the mean beam power entering the ldv is less than 1 mw, matching approximately the typical output power of the ldv’s laser. an non-polarising beam splitter (bs) separates about 50 % of the light which is directed to a reference photo diode (femto hca-s-400m) of a bandwidth of 400 mhz and a known time delay [8]. an rf generator with phase modulation capability provides the modulation current for the laser diode. the modulation depth is adjusted in a range of 30% to 50% and monitored by the reference photo diode. a polarization filter (pol) and a λ /4 plate between ld and ldv ensure that circular polarized laser light enters the interferometer. the majority of the emitted light from the ld is already linear polarized, however, for the first set-up, the orientation of the polarity is unknown, hence, a polarization filter eases the initial orientation of the λ /4 plate. not shown in figure 2 is the collimator lens of the laser diode and two aperture plates used to ease the alignment of the stimulating laser diode unit with the ldv’s beam line. 4 signal generation and processing the data acquisition system is based on a pxi system controlled by a labview program and provides two synchronized adc channels for the acquisition of the reference signal and the dut output. while the final stage of the set-up is supposed to include an arbitrary waveform generator and the ability to phase lock the generator or the whole pxi system to the ldv carrier frequency, the preliminary results given here were acquired by utilizing an rf generator (type agilent e4400b). the system is similar to the acquisition systems already used and validated for the primary acceleration calibration facilities at ptb. the synchronous data acquisition was performed at 200 ms/s to account for the high carrier frequency. the demodulation of the reference followed the validated arctangents demodulation of ptb’s national primary calibration standards. 
4.1. Determination of time delays

Figure 3 shows the time delays involved in the calibration set-up depicted in Figure 2. All components except $H_{LDV}$ are assumed to have a true time delay, i.e. the time delay is constant within the frequency ranges observed. The time delay

$\tau_{cor} = \frac{\varphi_{cor}}{\omega}$, with $\omega = 2\pi f$,   (3)

used to correct the measured transfer function $H_{meas}$ to obtain $H_{LDV}$,

$H_{LDV}(\omega) = |H_{meas}(\omega)| \cdot e^{\,j(\varphi_{meas} - \varphi_{cor})}$,   (4)

is calculated as

$\tau_{cor} = \tau_{r1} - \tau_{m1} + \tau_{r2} + \tau_{r3} - \tau_{m3} + \Delta\tau_{ADC}$   (5)

with the values

$\tau_{r1} - \tau_{m1} = -545(5)\,\mathrm{mm} / c_{air} = -1.82(3)\,\mathrm{ns}$, with $c_{air} = 2.9971 \times 10^8\,\mathrm{m/s}$,   (6)

being the difference of the laser beam lengths measured from the reference surface of the LDV head stated in the manual; its uncertainty covers the unknown delays of the λ/4 plate and the polarization filter. The photodiode delay $\tau_{r2}$ was measured in [8] as

$\tau_{r2} = 3.10(8)\,\mathrm{ns}$.   (7)

The time delay difference between the two simultaneously sampled channels, each with an RG58 cable of 2 m length, representing the term $\tau_{r3} - \tau_{m3} + \Delta\tau_{ADC}$, was measured by feeding both cable ends with a common signal from the generator via a Mini-Circuits ZFSC-2-4 power splitter. By performing multiple measurements with the cables swapped at the power splitter and ADC inputs, we found

$\tau_{r3} - \tau_{m3} + \Delta\tau_{ADC} = 0.15(5)\,\mathrm{ns}$,   (8)

including variances due to remounting. Putting the results into (5) gives

$\tau_{cor} = 1.43(10)\,\mathrm{ns}$.   (9)

Figure 3: Time delays of the signal chains.

The measurement of a time delay was performed in a two-step process. In the first step, a signal with a carrier frequency of 40 MHz and an FM sine modulation of 500 Hz with a modulation depth of 3.15 MHz was applied, and a coarse time delay was determined from the maxima of a computed cross-correlation (see Figure 4). In the final step the modulation was turned off, and the phase difference of the 40 MHz carrier signal was measured by applying a three-parameter sine approximation. This process was chosen in order to be able to measure time delays of heterodyne signal outputs of LDV controllers greater than one period of the carrier signal (25 ns in this case), caused by internal signal processing such as amplitude stabilisation or filtering.

Figure 4: Cross-correlation of the two ADC channels at a sample rate of 200 MHz, both fed with a 40 MHz FM signal modulated with a 500 Hz sine of 3.15 MHz modulation depth, shown on different time scales, with one of the two connection cables about 8 m longer, resulting in a delay of 41.90(5) ns.
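The two-step delay determination described above can be sketched as follows: a coarse lag from the cross-correlation maximum resolves the integer number of 25 ns carrier periods, and the carrier-phase difference from the sine approximation refines it. The sign conventions and helper names are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

def coarse_delay(ref, dut, fs):
    """Step 1: coarse delay of dut relative to ref from the maximum
    of their cross-correlation (positive = dut lags ref)."""
    xc = np.correlate(dut - dut.mean(), ref - ref.mean(), mode="full")
    lag = np.argmax(xc) - (len(ref) - 1)
    return lag / fs

def refined_delay(phi_ref, phi_dut, f_carrier, tau_coarse):
    """Step 2: refine with the unmodulated-carrier phase difference
    (e.g. from the three-parameter sine fit); the coarse estimate
    fixes the integer number of carrier periods."""
    tau_frac = (phi_ref - phi_dut) / (2 * np.pi * f_carrier)
    n = round((tau_coarse - tau_frac) * f_carrier)
    return tau_frac + n / f_carrier
```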
5. Results

First tests with a Polytec OFV5000-KU LDV controller and an OFV 353 laser head, connected via a 5 m cable, were performed, and Figure 5 shows a measured frequency response obtained with the new set-up. The LDV is based on a carrier frequency $f_{AOM}$ of 40 MHz. The LDV controller settings were:
- velocity decoder: VD-01
- range: 1 m/s/V
- max. frequency: 50 kHz
- tracking filter: off
- low-pass filter: 100 kHz
- high-pass filter: off

The frequency range in terms of the simulated vibration was 100 Hz to 30 kHz with an amplitude of 1 m/s, leading to a modulation depth of 3.159 MHz. The plots show the relative deviation in magnitude and the absolute phase of the analog velocity output in relation to the demodulated intensity stimulus on a linear frequency scale. The nearly linear phase corresponds to a delay of about 7.52 µs.

Figure 5: Transfer function (relative magnitude deviation and phase) of an LDV analog velocity output at 1 V/(m/s).

6. Outlook

Instead of a photodiode with known delay, a known laser source would obviate the beam splitter and photodiode, simplifying the set-up. It is a classical chicken-or-egg dilemma; in our case we had the known PD first, to relate an optical signal to an electrical signal. The measurement uncertainty budget is still under investigation; the absence of mechanical excitation is expected to significantly reduce the uncertainties, leaving the absolute AC voltage measurement as the main contributor to the magnitude uncertainty, while the phase uncertainties are expected to be better than 0.01° for frequencies up to 100 kHz.

Acknowledgment
The authors would like to thank Dr. Siegmund of Polytec for some inspiring walks and talks.

7. References
[1] ISO 16063-11:1999, Methods for the calibration of vibration and shock transducers - Part 11: Primary vibration calibration by laser interferometry, ISO, Geneva, Switzerland, 1999.
[2] ISO 16063-13:2001, Methods for the calibration of vibration and shock transducers - Part 13: Primary shock calibration using laser interferometry.
[3] ISO 16063-41:2011, Methods for the calibration of vibration and shock transducers - Part 41: Calibration of laser vibrometers.
[4] U. Buehn et al., "Calibration of laser vibrometer standards according to ISO 16063-41", XVIII IMEKO World Congress 2006, Rio de Janeiro, Brazil, September 2006. https://www.imeko.org/publications/wc-2006/pwc-2006-tc22-007u.pdf
[5] Th. Bruns, F. Blume, A. Täubner, "Laser vibrometer calibration at high frequencies using conventional calibration equipment", XIX IMEKO World Congress, September 6-11, 2009, Lisbon, Portugal. http://www.imeko2009.it.pt/papers/fp_495.pdf
[6] F. Blume, A. Täubner, U. Göbel, Th. Bruns, "Primary phase calibration of laser-vibrometers with a single laser source", Metrologia, 2009, vol. 46, no. 5. https://dx.doi.org/10.1088/0026-1394/46/5/013
[7] M. Winter, H. Füser, M. Bieler, G. Siegmund, C. Rembe, "The problem of calibrating laser-Doppler vibrometers at high frequencies", AIP Conference Proceedings 1457, 165 (2012). https://doi.org/10.1063/1.4730555
[8] Th. Bruns, F. Blume, K. Baaske, M. Bieler, H. Volkers, "Optoelectronic phase delay measurement for a modified Michelson interferometer", Measurement, 2013, vol. 46, no. 5. https://doi.org/10.1016/j.measurement.2012.11.044

Comment to: L. Mari, "Is our understanding of measurement evolving?"

Acta IMEKO, ISSN: 2221-870X, June 2022, Volume 11, Number 2, 1-3
Franco Pavese (1)
1 Independent scientist, research director in metrology (formerly at CNR, then INRiM, Torino), 10139 Torino, Italy

Section: Technical note
Keywords: metrology; measurement science; measurement process; informational component
Citation: Franco Pavese, Comment to: L. Mari "Is our understanding of measurement evolving?", Acta IMEKO, vol. 11, no. 2, article 19, June 2022, identifier: IMEKO-ACTA-11 (2022)-02-19
Section Editor: Francesco Lamonaca, University of Calabria, Italy
Received February 7, 2022; in final form June 1, 2022; published June 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Franco Pavese, e-mail: frpavese@gmail.com

Abstract: The present contribution is a comment which addresses the paper published in this journal "Is our understanding of measurement evolving?" and authored by Luca Mari. This technical note concerns specific parts of that paper, namely the statements: "Doubt: isn't metrology a 'real' science? … metrology is a social body of knowledge", "Measurements are aimed at attributing values to properties: since values are information entities, any measurement must then include an informational component" and "What sufficient conditions characterise measurement as a specific kind of property evaluation?", and discusses alternatives.

1. Introduction

After having read the recently published paper [1], I feel a compelling need, dictated perhaps by my being a long-time metrologist, to write the comments below, more from a sense of surprise than from any lesser appreciation of its author. The comments concern the following quoted parts (where the added section numbers are the ones used below):

(2.) "Doubt: isn't metrology a 'real' science? … metrology is a social body of knowledge" (original italics);

(3.) "Measurements are aimed at attributing values to properties: since values are information entities, any measurement must then include an informational component"; and "Listing some necessary conditions that characterize measurement, and that plausibly are generally accepted, is not a hard task: measurement is … (iv) that produces information in the form of values of that property. Indeed, (iv) characterises measurement as an information process";

(4.) "However, not any such process is a measurement, thus acknowledging that not any data acquisition is a measurement. We may call 'property evaluation' a process fulfilling (i)-(iv). What sufficient conditions characterize measurement as a specific kind of property evaluation? The answer does not seem as easy" … "The possible evolutionary perspectives of measurement can be considered along four main complementary, though somewhat mutually connected, dimensions:
- measurable entities as quantitative or non-quantitative properties …;
- measurable entities as physical or non-physical properties;
- measuring instruments as technological devices or human beings;
- measurement as an empirical or an informational process, and therefore the relation between measurement and computation …".

2. "Isn't metrology a 'real' science?"

My position is that metrology is a part of measurement science, a process intended to share common ways of transmitting knowledge to a community that is not limited to a single generation of scientists and practitioners, and to obtain the necessary consensus. Metrology is the part that defines the meaning of the terms "precision" and "accuracy" by introducing the concept of uncertainty, otherwise not necessarily embedded in the meaning of measurement, as happened, e.g., in times before modern science.
Another mandatory requirement of metrology is the need for a multiplicity of measurements in order to obtain data comparable with each other, so requiring that they are traceable (to a common denominator) to each other, in order to ensure that all scientists are "on the same page". The reason for this is that data, numerical or not, are certainly to be considered facts (as opposed to inferences), but are not necessarily, and not usually, as unequivocal as one would wish. Restricting the case to numerical data (rarely are the published ones the "raw" instrumental indications, as recognized, e.g., by the VIM [2]), metrology is the science dealing with correctly setting rules for their elaboration. Metrological competence is also necessary to implement the increase in the precision of measurements, the latter being a normal goal of science as one of the ways of increasing knowledge. Yet, in order to allow results to be shared, "good-practice" rules must be installed, eventually ordered into protocols and conventions, only apparently a non-scientific stage of the measurement process. The latter requires instead scientific competence and shared meanings and, consequently, a language (also across local ones). At the end of this process, language is stored into written rules ("scripta manent, verba volant"), notation being a specific symbolic language, a feature common to every frame of shared knowledge, called consensus ("pacta sunt servanda"), in the absence of any human possibility of knowing truth. On the other hand, consensus cannot replace or contrast current knowledge: its function is merely a notarial one (an issue becoming today more and more critically important and subject to misuse).

In the above respect, my opinion is that measurement science may be compared to the DNA of observational science, i.e. the body of knowledge, with metrology as its RNA as presently interpreted in biology, i.e. the tool for the implementation of its basic principles and rules.

3. "Any measurement must then include an informational component"

I agree that a measurement result, numerical or not, is additional information, assumed to be useful for increasing the level of knowledge; otherwise there would be no reason to perform measurements. However, after Shannon extended the scientific meaning of the term "measurement", and especially in the present times dominated by informatics, I suggest that assigning a specific further meaning to the term "information" becomes necessary, as I no longer consider it a generic or unequivocal one.
I consider it sufficient here to point out that one meaning of "information" is that carried by a value measured according to the scientific definition of measurement, i.e. a "material" information concerning the "external world" as perceived by humans and their built apparatuses, made to supplement the limited (standard) human sensorial capabilities. Instead, I consider as a different meaning of "information" what scientists (and anybody else) elaborate or communicate: the information born from the thoughts of their mind. The difference is the usual one between becoming informed by our senses about a feature of a world phenomenon and having a personal thought, or one based on an inter-subjective thought, basically without difference as far as this issue is concerned. In other words, in all cases "information" is a concept of the human mind, while a "measured value" is for a human an external fact, then assimilated through subsequent mental inference. This difference is substantial and does not produce, in my opinion, any "evolutionary perspective" (italics added) in measurement science, at least when the perspective does not concern the human mind. In this sense and meaning, I respectfully disagree with the above clause (3.) (iv), because measurement is not an information process in the current sense, and especially not according to the procedures used today in information science.

Let me introduce here a bit of humour by citing an extreme case that recently occurred to me in this subject matter. It is popular in this period for an author to claim to have found an information method to check whether the current scientific evaluation of measurement uncertainty leads to correct estimates, e.g. for the uncertainty associated with the values of the universal constants of physics (for the Planck constant see [3]). His method is said to be based on the information content. I had a short correspondence with him to understand the way he implements the information process and obtains his results, until I discovered that he considers as the information content of a given physical constant value the number of times that the value is cited in the reference document of the SI, that being the "firm" basis of the rest of his computations …

4. "What sufficient conditions characterize measurement as a specific kind of property evaluation?"

The author indicates as an "evolutionary perspective" basically an extension of the term "measurement" to a wider meaning, namely to categories of observations that historically were not comprised in the current definition of measurement, i.e. the non-quantitative and the informational ones, a goal that has attracted more attention in recent times. It seems to me that, for that purpose, it would be simpler to use a term different from "measurement", e.g. the one used by the author himself, "evaluation", for the non-quantitative case. This does not seem to me a diminutio, as it is used simply for indicating a previous stage of the process, even in the quantitative case [4], or "representation", if one prefers to avoid any misunderstanding about the existence of a possible quantification.

Concerning instead the issue of measurement vs. computation, I think that "computation" has always been, in modern science, part of the elaboration of the numerical (or logical) data obtained from observations (the theoretical case is not considered here), and that recently the elaboration has been done prevalently via automatic computing.
This fact has induced the development of a new discipline in science: informatics. Arising from its nature, its most important influence in science has been an exponential increase over time in the development of new (machine) languages, obviously having their roots in human ones, and thus also concerning the organization of numerical knowledge for its elaboration and use. In that sense, I see informatics as a marginal follow-up of the measurement process, not an integral stage of it, as "simulation" and "extrapolation" also are, both being based on models and so actually pertaining to the theoretical frame.

5. Conclusions

I conclude by saying that I am aware, and have long had direct experience, that there are gaps between disciplines; thirty years ago I contributed to starting a conference series with precisely the main goal, at that time, of "increasing the extent of cooperation by calling scientists from both the mathematical and the metrological fields to meet and exchange experiences" [5], later extended to computational science. These gaps include language: metrology has developed its own idiom that, in my opinion, is satisfactorily summarized in [2] (for a more extended discussion of this issue see [6]). Different meanings may be assigned to terms in other disciplines' idioms, namely those of the philosophy of science, or different terms may be used. Consequently, it may easily happen that basic misunderstandings become hard to overcome in inter-disciplinary conversations. On the other hand, this diversity is a richness of science [7].

References
[1] L. Mari, Is our understanding of measurement evolving?, Acta IMEKO, vol. 10 (2021), no. 4, pp. 209-213. DOI: 10.21014/acta_imeko.v10i4.1169
[2] JCGM 200:2012, International Vocabulary of Metrology - Basic and General Concepts and Associated Terms (VIM), 3rd edition, JCGM (2012), 108 pp.
[3] B. Menin, The Boltzmann constant: evaluation of measurement relative uncertainty using the information approach, Journal of Applied Mathematics and Physics, vol. 7 (2019), pp. 486-504. DOI: 10.4236/jamp.2019.73035
[4] F. Pavese, Measurement in science: between evaluation and prediction, in Advanced Mathematical and Computational Tools in Metrology and Testing XII (F. Pavese, A. B. Forbes, N. F. Zhang, A. G. Chunovkina, eds.), Series on Advances in Mathematics for Applied Sciences, vol. 90 (2021), World Scientific Publishing Co, Singapore, pp. 346-363. ISBN 978-981-124-237-3 (hardcover), 978-981-124-239-7 (ebook)
[5] Advanced Mathematical Tools in Metrology, workshop (P. Ciarlini, M. G. Cox, R. Monaco, F. Pavese, eds., 1993, Turin, Italy), Series on Advances in Mathematics for Applied Sciences, vol. 16 (1994), World Scientific Publishing Co, Singapore. ISBN 981-02-1758-7. See the present denomination in ref. [4] above.
[6] F. Pavese, From VIM3 toward the next edition of the International Dictionary of Metrology, Accreditation and Quality Assurance, special issue in memory of Paul De Bièvre, 2022 (in press).
[7] F. Pavese, P. De Bièvre, Fostering diversity of thought in measurement, in Advanced Mathematical and Computational Tools in Metrology and Testing X (F. Pavese, W. Bremser, A. G. Chunovkina, N. Fischer, A. B. Forbes, eds.), Series on Advances in Mathematics for Applied Sciences, vol. 86, World Scientific, Singapore, 2015, pp. 1-8.
introductory notes for the acta imeko special issue on the xxix italian national congress on mechanical and thermal measurements acta imeko issn: 2221-870x december 2021, volume 10, number 4, 6-7 alfredo cigada1, roberto montanini2 1 dipartimento di meccanica, politecnico di milano, via la masa 1, 20156 milano, italy 2 dipartimento di ingegneria, università degli studi di messina, c.da di dio, 98166 villaggio sant'agata, messina, italy section: editorial citation: alfredo cigada, roberto montanini, introductory notes for the acta imeko special issue on the xxix italian national congress on mechanical and thermal measurements, acta imeko, vol. 10, no. 4, article 4, december 2021, identifier: imeko-acta-10 (2021)-04-04 received december 14, 2021; in final form december 14, 2021; published december 2021 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: alfredo cigada, e-mail: alfredo.cigada@polimi.it dear readers, there is no doubt that measurements are playing a fundamental role in our everyday life. hot topics like the internet of things, industry 4.0, measurements for health or smart structures cannot exist without a massive and pervasive presence of sensors and, more generally speaking, of measurement systems; this means that data management and analysis, driven both by interpretation through numerical models and by data-driven approaches, are to be considered research trends of paramount interest for measurements. the reason why mechanical measurements are considered a science in its own right has strong roots, not always fully understood: data quality assessment, a preliminary step to any model, requires skills in a broad range of engineering topics, from sensors to mechanics, to materials and data science, just to mention a few. this wide knowledge makes the cultural background of the expert in mechanical measurements one of the largest in engineering. all these aspects make the yearly italian meeting of the "forum nazionale delle misure" a milestone for the discussion of, and updates on, the new trends in measurements, a science running very fast. it is also the occasion to relate to the colleagues mainly dealing with electric and electronic measurements, widening horizons and cultural exchange, and making the occasion very rich, both as a review of present activities and as a stimulus for the future. the pandemic has created a discontinuity in the meeting's long-lasting tradition, which started in 1986: in 2020 the forum took place online, and the in-person meeting was organized again in september 2021 at giardini naxos, a wonderful location close to messina.
as usual, part of the workshop consisted of joint meetings between scientists from the mechanical and thermal measurements and the electrical and electronic measurements academic groups, in an interesting cultural discussion on the shared topics, faced with different viewpoints, and then on other topics, more peculiar to each group, in which the common language of metrology plays a fundamental role in defining measurement quality. this special issue collects a selection of 12 papers presented during the three days of the "forum": the authors were asked to review their work and prepare an extended version, fit for publication in acta imeko, a reference journal for measurements. the remarkable transversality of the mechanical measurements field is witnessed by the significant heterogeneity of the topics covered in the twelve selected works. the paper entitled 'skin potential response for stress recognition in simulated urban driving' by zontone et al. addresses the problem of stress conditions arising in car drivers using machine learning techniques based on skin potential response (spr) signals recorded from each hand of the test subjects. results showed that, in a situation without traffic, the test individuals are less stressed, confirming the effectiveness of the proposed minimally invasive system for the detection of stress in drivers. in the paper 'human identification and tracking using ultrawideband-vision data fusion in unstructured environments', the research group of the university of trento, led by prof. mariolino de cecco, faced the problem of the cooperation between automated guided vehicles (agvs) and the operator, addressing two crucial functions of autonomy: operator identification and tracking. using sensor fusion, the authors were able to improve the accuracy and quality of the final tracking, reducing uncertainty. the third paper, authored by giulietti et al., deals with the continuous monitoring of cement-based structures and infrastructures, to optimize their service life and reduce maintenance costs. the proposed approach is based on electrical impedance measurements. data can be made available on a cloud through a wi-fi network or an lte modem; hence, they can be accessed remotely via a user-friendly multi-platform interface. the paper 'validation of a measurement procedure for the assessment of the safety of buildings in urgent technical rescue operations' saw the collaboration between the research group of the university of l'aquila and the fire, public rescue and civil defense department of the italian ministry of the interior. the work provides a preliminary contribution to the draft of standard procedures for the adoption of total stations by rescuers in emergency situations, so as to offer reliable and effective support to their assessment activities. in the paper 'a comparison between aeroacoustic source mapping techniques for characterization of wind turbine blade models with microphone arrays', g. battista et al. deal with the problem of characterizing the aeroacoustic noise sources generated by a rotating wind turbine blade, in order to provide useful information for tackling noise reduction.
this paper discusses a series of acoustic mapping strategies that can be exploited in this kind of application, based on laboratory tests carried out in a semi-anechoic room on a single-blade rotor. the research group of the university of padua, led by prof. stefano debei, presented a paper entitled 'occupancy grid mapping for rover navigation based on semantic segmentation', dealing with obstacle perception in a planetary environment by means of occupancy grid mapping. to evaluate the metrological performances of the proposed method, the esa katwijk beach planetary rover dataset has been used. the paper 'characterization of glue behaviour under thermal and mechanical stress conditions' by caposciutti et al. explores the behaviour of the glued interface commonly used to fix the electronics to the box housing, as it undergoes daily or seasonal thermal cycles combined with mechanical stress. to carry out the study, the authors prepared some parallel-plate capacitors using glue as a dielectric material. the non-linear behaviour of the capacitance vs. temperature, as well as the effects of thermal cycles on the glue geometry, were investigated. the research group of the university of perugia, led by prof. gianluca rossi, presented an optical-flow-based motion compensation algorithm to be used in thermoelastic stress analysis to account for rigid displacements that can occur during loading. the proposed approach is based on measuring the displacement field of the specimen directly from the thermal video. the blurring and edge effects produced by the motion were almost completely eliminated, making it possible to accurately measure the stress field, especially in areas around geometrical discontinuities. thermoelasticity and aruco markers were also employed by l. capponi et al. to validate a numerical model of the inspection robot mounted on the new san giorgio bridge on the polcevera river in genova. an infrared thermoelasticity-based approach was used to measure stress-concentration factors, while aruco fiducial markers were exploited to assess the natural frequencies of the robot inspection structure. a completely different field of application concerns the paper 'doppler flow phantom failure detection by combining empirical mode decomposition and independent component analysis with short time fourier transform', which reports some of the most recent results obtained by the research group led by prof. sciuto at the university of roma tre. the paper aims at providing an improvement of a previously proposed method for doppler flow phantom failure detection, combining the application of empirical mode decomposition (emd), independent component analysis (ica) and short time fourier transform (stft) techniques on pulsed wave (pw) doppler spectrograms. the paper 'comparison between 3d-reconstruction optical methods applied to bulge-tests through a feed-forward neural network' originated from the collaboration between the research groups of mechanical and thermal measurements of the university of messina and of the university of catania. the aim of the work was to compare two different 3d reconstruction techniques, epipolar geometry and digital image correlation, to measure the deformation field of hyperelastic membranes under plane and equibiaxial stress states. a feed-forward neural network (ffnn) was then used to assess the accuracy of the two experimental approaches, using a laser sensor as a reference.
finally, the paper 'development and characterization of a self-powered measurement buoy prototype by means of piezoelectric energy harvester for monitoring activities in a marine environment', written by the research unit of the university of messina, led by prof. roberto montanini, addresses a series of interesting topics, among others measurements for the sea and energy harvesting from sea waves; the paper focuses on the latter aspect with an innovative approach. we gratefully acknowledge all the authors who have contributed to this special issue, as well as all the reviewers. a special thanks goes to prof. francesco lamonaca, editor in chief of acta imeko, for his tireless and patient help, which has made this special issue possible. we are proud to have served as guest editors for this issue, hoping that this will help spread the culture of measurements. alfredo cigada and roberto montanini guest editors accuracy – review of the concept and proposal for a revised definition acta imeko issn: 2221-870x december 2020, volume 9, number 5, 414-418 c. müller-schöll1 1 mettler-toledo international inc., greifensee, switzerland, christian.mueller-schoell@mt.com abstract: accuracy as a concept is widely used in metrology. it has undergone a historical development and is defined and used differently in different documents, communities and usage situations. presently existing definitions are sometimes unclear, ambiguous and sometimes even not useful. this paper explains the difficulties regarding present definitions (sections 2 and 3), throws a spotlight on present-day use of the concept (section 4), clarifies the question of whether accuracy is a qualitative concept (section 5) and finally presents conceptual ideas and a proposal for a new wording of a sustainable definition for accuracy (section 6). this paper is intended to support the highly esteemed work of jcgm wg2. keywords: accuracy; terminology; metrology 1. introduction, motivation language is subject to ongoing change, due to its usage and due to its users. the term "accuracy" in the context of metrology has undergone changes in its definition, in its use and in its understanding and application in the past. there is still today uncertainty about its proper use and its proper meaning, and there is more than one normative document defining accuracy, using different words. it might be time to reflect on the language, and it might be time for a clarification resulting in a clear and (for the field of metrology) universally applicable definition. it turns out that two distinguishable concepts are in use: one is taken from the international vocabulary of metrological terms and concepts [1], henceforth referred to as "the vim"; the other one is more prevalent in standard documents published by the international organization for standardization (iso) (footnote: the vim is not considered an "iso document" here, although it is also published as "iso guide 99" under an iso name). 2. review of the vim definition (in the following paragraphs, some definitions of the term accuracy are investigated as examples; other definitions might exist) 2.1. text review the definition of accuracy according to [1] reads: "2.13 (3.5) measurement accuracy; accuracy of measurement; accuracy closeness of agreement between a measured quantity value and a true quantity value of a measurand note 1 the concept 'measurement accuracy' is not a quantity and is not given a numerical quantity value.
a measurement is said to be more accurate when it offers a smaller measurement error. note 2 the term "measurement accuracy" should not be used for measurement trueness and the term "measurement precision" should not be used for 'measurement accuracy', which, however, is related to both these concepts. note 3 'measurement accuracy' is sometimes understood as closeness of agreement between measured quantity values that are being attributed to the measurand." additionally, an "annotation" can be found in the internet version of the vim, reading: "annotation (informative) [9 june 2016] historically, the term "measurement accuracy" has been used in related but slightly different ways. sometimes a single measured value is considered to be accurate (as in the vim3 definition), when the measurement error is assumed to be small. in other cases, a set of measured values is considered to be accurate when both the measurement trueness and the measurement precision are assumed to be good. sometimes a measuring instrument or measuring system is considered to be accurate, in the sense that it provides accurate indications. care must therefore be taken in explaining in which sense the term "measurement accuracy" is being used. in no case is there an established methodology for assigning a numerical value to measurement accuracy." 2.2. text analysis the vim definition uses the concept "closeness of agreement"; however, the term "closeness" is not defined and is therefore subject to interpretation. the definition speaks of the closeness of agreement of "a" measured quantity value, so it applies to a single measured value. note 2 mentions a "relation" between accuracy and precision and between accuracy and trueness, but leaves open what kind of relation that is. although accuracy is "not a quantity" [1], a comparative statement like "more accurate" is possible according to note 1. it is left open how two things can be judged "more" and "less" when they are not quantities. a relation between "measurement accuracy" and "measurement error" is mentioned in note 1. "measurement error" in turn is given a relation to "quantity value" in its vim definition (2.16), meaning that both "measurement error" and "quantity value" are of the same kind. "quantity value" in turn is used as the first fraction/component of a "measurement result" (vim definition 2.9), where a measurement result consists of a first fraction/component, the "value", and a second fraction/component, the "uncertainty associated with it". consequently, measurement error is clearly a one-dimensional property (it is a one-dimensional quantity value) and not a two-dimensional vector like "measurement result" (with its two fractions/dimensions/components "value" and "uncertainty"). the "annotation" (2016) that can be found in the internet version of the vim clearly points at ambiguities in the interpretation of the definition, calling them "historical". but in the text of the annotation no hint is given that either of the two interpretations given there is considered outdated, so they both still exist.
the two interpretations a) either relate "a" measured value to its error (note: "a" value, not "values"), so "smaller error = more accurate", b) or relate "a set of measured values" to both its trueness and its precision (where both the trueness and the precision need to be "good" for the set to be considered "accurate"). but this is not consistent in itself, because even a single value can be attributed both a trueness and also a precision (!) if the distribution is known from which this single value is drawn (type a uncertainty of a single measured value). so the relation of "a" measured value to its trueness and its precision is missing here. the npl has published interpretations referring to the vim definition in which accuracy is independent of precision [2] and in which a set of measurements (!) is said to have "high accuracy" while having "low precision" at the same time. the consistency with case b) mentioned above appears unclear or absent. 2.3. conclusion regarding the vim with regard to the vim definition, it remains unclear how accuracy is defined and how accuracy can be expressed. it is said to be "not a quantity", but the vim clearly does not state that it is "qualitative". accuracy can be judged "more accurate [than]", but accuracy cannot be assigned a value; it is unclear how this goes together. accuracy is stated to relate to "error" (smaller error = more accurate); however, an additional property like dispersion or variance or uncertainty is not an explicitly mentioned part of this definition (it appears only in the notes). on the other hand, accuracy is said to be "related to both" trueness and precision (note 2), where "precision" (vim 2.15) is clearly not related to error, but to dispersion. this is a contradiction in the vim in the view of the author. since the idea of accuracy being applied to more than one value and the idea of dispersion of values are not visible in the vim definition text (not considering the non-normative annotation), the commonly used dartboard model with dispersing hits cannot be considered an appropriate illustration of the definition of accuracy according to the vim. accuracy and precision are on the same hierarchical level in this concept (they are like apples and pears). within this concept it is possible that a result can be attributed, e.g., "accurate but not precise", which can be found in publications (e.g. [2]). 3. review of the "iso definition" 3.1. text review iso (iso 5725-1:1994) the title of the standard iso 5725-1 [3] is: "accuracy (trueness and precision) of measurement methods and results". this standard gives definitional phrases in two different locations: one rather explanatory (yet definitional in character) in section 0.1, and a definition in section 3.6 (in chapter 3, named "definitions"). section 0.1 of iso 5725-1 uses the two terms "trueness" and "precision" to describe the accuracy of a measurement method. "trueness" refers to the "closeness of agreement between the arithmetic mean of a large number of test results and the true or accepted reference value". "precision" refers to the "closeness of agreement between test results". additionally, we find a so-declared "definition" in the same standard: 3.6 accuracy: the closeness of agreement between a test result and the accepted reference value. note 2 the term accuracy, when applied to a set of test results, involves a combination of random components and a common systematic error or bias component [emphasis by the author]. 3.2.
text analysis – iso 5725-1:1994 section 0.1 of iso 5725-1:1994 according to section 0.1 of [3], it is both "trueness" and "precision" that are used to form accuracy. the text of 0.1 refers to accuracy as a property of a measurement method (and not of a single result and not of a set of results). this is a clear, yet understandable, difference from the vim concept, since the focus of [3] is methods. both trueness and precision are explained in [3] using the words "a large number of" results or at least "results" (in plural form). according to this, accuracy, as explained here, cannot be applied to a single measurement result, since the language does not cover a single result (again in contradiction to the vim). precision is not the same as uncertainty. so, accuracy – according to this concept – is not related to what is usually perceived as being the quality of a "measurement result" (vim 2.9), which consists (vim 2.9 note 2) of "a measured quantity value and a measurement uncertainty" [emphasis by author] and is thus a two-dimensional statement. the two dimensions of "measurement result" introduced in this definition are: • the first dimension: either the measured quantity value or the measurement error (with reference to a reference value) calculated from it, and • a second dimension characterizing the dispersion, which is the uncertainty associated with the first component. the wording of 0.1 of [3] is in contradiction with the title of the standard, which announces the accuracy of both "methods and results". the accuracy of results is neither explained nor even mentioned here. section 3.6 of iso 5725-1:1994 in section 3.6 of the same standard, a definition is given for accuracy. this definition is essentially the same as the vim definition, except that it refers to "a" test result (singular form) and to an accepted reference value. methods are not mentioned in the definition. however, a note (note 2) has been added, broadening the scope to "a set of … results". this note is not in alignment with the definition, and a definition that can be applied to the accuracy of a method is missing from the whole standard, despite its title. additionally, the restriction "when applied to a set of results" is misleading (or maybe even wrong), since even a single value can be assigned a dispersion originating from the population from which this single value was drawn (as explained above). according to 3.6 of [3], accuracy embeds both trueness and precision. this defines a hierarchy: trueness and precision are on the same level, and accuracy combines both, defining a parent hierarchical level. (accuracy is like fruit, and trueness and precision are like apples and pears.) a two-dimensional dartboard model is frequently used to visualize the concept, depicting trueness and precision with a set of hits (by means of the centering of the mean value and the spread of the hits). 3.3. conclusion regarding iso 5725-1:1994 it seems that in the definition of accuracy given in [3], the authors intended to take both a "deviation component" and a "spread component" into account. however, instead of measurement uncertainty, precision was chosen to describe what was missing when accuracy "at one time" (quoted from section 0.6) contained only what we today call trueness. nevertheless, the different statements within the standard are hard, if not impossible, to reconcile.
note 2 broadens the scope of the definition to "a set of results", which is not consistent with the mere text of the definition. 3.4. text review iso iwa 15 iso iwa 15 [4] reproduces the vim definition for accuracy. however, in a specific definition of "uncertainty" (section 3.1.3), we find the following phrase in a note: "uncertainty is inversely related to accuracy, and is a quantity value." this is one of the rare occasions in the defining literature on which uncertainty explicitly comes into play in relation to the concept of accuracy. nevertheless, it is neither detailed what this "relation" looks like, nor how a relation would lead from a non-quantitative accuracy to a quantitative uncertainty, nor whether and how "inversely" is to be understood mathematically. chapter 5.1 of [4] reads: "accuracy may be improved by improving precision and trueness." this hints again at the concept of [3], where accuracy is a combination (of whatever kind) of trueness and precision, and accuracy is on a higher hierarchical level than trueness and precision (please note that at this point no connection to "uncertainty" is made). however, immediately afterwards, we read: "accuracy, precision and trueness are conceptual terms. quantitative expressions of these concepts are given in terms of uncertainty, random error and systematic error, respectively." this clarifies accuracy: accuracy is said to be quantified by uncertainty, which in turn is said to be a combination of systematic and random errors. the latter, of course, is wrong, since uncertainty according to the concept of the gum [5] is not just a combination of random and systematic "errors". (in order to clearly separate these conceptual areas, the gum has introduced "type a" and "type b" uncertainties and does not support the wording quoted from [4] above.) later in [4], accuracy is even expressed as a quantity value (e.g. in table 2, column name: "typical accuracy", where the column content consists of values). this is in clear contradiction to the definition given in the same document. also later in the standard (section b.6.1.4), "accuracy" and "precision" are mentioned on the same hierarchical level, which again is an internal contradiction in the same document; and in section b.7.1, systematic error is equated with accuracy, yet another internal contradiction within the document. 3.5. conclusion regarding iso iwa 15 [4] likewise does not clarify the matter. it adds ideas of uncertainty to the concept of accuracy; however, it also appears not fully internally consistent. 4. current use of the term accuracy a recent high-level metrology publication adds an interesting view to the discussion: it is the paper titled "evaluation of the accuracy, consistency, and stability of measurements of the planck constant…" by a. possolo, s. schlamminger et al. [6]. in section 3 of the paper (a section named "accuracy"!), the authors elaborate on the "accuracy requirements" of the ccm regarding the re-definition of the kilogram. in fact, the requirements made by the ccm are not given this specific name ("accuracy requirements") in the original ccm document [7]. in the publication, these requirements and the values that are compared to them are expressed solely in terms of uncertainties. (it is obvious that "errors" are not a subject in this field of work.) the only accuracy measure (quantification) the authors take into consideration (and relate to the title of the publication) is uncertainty.
this is an encouraging indicator that, according to presently recognized metrologists, uncertainty must be at least part of the concept of accuracy. 5. qualitative or quantitative? note 1 to the vim definition of accuracy states: "the concept 'measurement accuracy' is not a quantity and is not given a numerical quantity value." this statement (being "not a quantity") obviously tempts a number of authors to state that accuracy is "qualitative", e.g., [2] and [8]. however, this is not what is said in the vim text referred to, and it is probably also not the intention of the wording of the vim. there is a fine yet significant difference between being "not a quantity" and being "qualitative": accuracy is said to be "not a quantity" [1] and not to be given "a quantity value". this, according to the understanding of the author, is meant to express that – in simple words – there is no globally accepted number scale and no unit for expressing accuracy that would make accuracy a metrologically traceable or metrologically comparable property (= a quantity). this note statement is probably meant to separate the concept of accuracy from concepts like "quantity" or "uncertainty", for which there is a common, global understanding of how to quantify them. however, there is clearly no statement and no hint anywhere in the vim definition that accuracy is qualitative! common usage of "accuracy" calls for comparative statements like "more" or "less" accurate, which indicates the necessity of giving accuracy "some kind of number" (a value), at least in a given situation or for a given purpose. this gives room for a user to apply a purpose-oriented algorithm to obtain and assign an indicative accuracy number that allows comparative statements in a given situation. 6. synthesis, proposal for a solution it appears that the concept "accuracy" as we find it today in the vim and in iso documents is a leftover of an ongoing, not yet completed, development. it is stated that "at one time" ([3], 0.6) the concept of accuracy, being perceived as one-dimensional (only related to measurement error), was amended by the concept of precision as additional information to take account of the possible dispersion of values as a second dimension. probably, the concept of uncertainty was not yet fully established at that time. today, however, according to the vim, it is exactly measurement uncertainty that is "characterizing the dispersion of the quantity values being attributed to the measurand" ([1] 2.26), which is exactly what should be used if a "dispersion dimension" is to be considered in the concept of accuracy in addition to a "trueness dimension". in addition, it is historically obvious and intentional that there is not one metrologically traceable and metrologically comparable ([1] 2.46) way of quantifying accuracy. yet, it is necessary in practice that accuracy can be quantified in order to compare or rank. these quantifications may be done using an algorithm (e.g., a mathematical equation) which may follow a purpose given by the specific situation, as illustrated by the sketch below.
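purely as an illustration of such a purpose-oriented algorithm (not a definition taken from the vim or from iso documents), the following sketch combines a trueness component (the error of the mean with respect to a reference value) and a dispersion component (the type a standard uncertainty of the mean) in quadrature. the function name, the choice of quadrature and the example data are assumptions made here for illustration:

```python
import math
from statistics import mean, stdev

def accuracy_score(measured, reference):
    # trueness component: error of the mean w.r.t. the reference value
    error = mean(measured) - reference
    # dispersion component: type a standard uncertainty of the mean
    u = stdev(measured) / math.sqrt(len(measured))
    # one possible purpose-oriented combination: quadrature sum;
    # a smaller score means "more accurate" under this particular algorithm
    return math.sqrt(error ** 2 + u ** 2)

# two illustrative sets of results against a reference value of 100.0
set_a = [100.2, 99.9, 100.1, 100.0]   # small error, small spread
set_b = [101.0, 99.0, 102.0, 98.0]    # zero error of the mean, large spread
print(accuracy_score(set_a, 100.0) < accuracy_score(set_b, 100.0))  # True
```

note that, consistently with the proposal below, the two scores may be compared ("more accurate"/"less accurate") only because they originate from the same algorithm; the numbers themselves are neither metrologically traceable nor comparable across algorithms.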
therefore, we propose the introduction of a new definition for "accuracy" which should take into account: • the modern understanding of a measurement result, consisting of a "measured quantity value and a measurement uncertainty" (vim 2.9, note 2), • backwards compatibility, still allowing accuracy to be combined from a combination of trueness and precision, • the possibility of applying the concept of accuracy to a single value, to a set of values, to a method and to a procedure, • the clarification that "not a quantity" does not mean "being qualitative", • the fact that there may be (various possible) ways of assigning values to accuracy for making comparative statements. this could be realized with the following wording, which is submitted to further discussion and consideration: accuracy (measurement accuracy, accuracy of measurement) term to describe the closeness of agreement between one or several measurement results and a true quantity value. accuracy consists of a combination of a trueness property and a dispersion property. any algorithm combining these two to yield a quantification may follow its intended purpose. quantifications of accuracy which originate from the same algorithm may offer comparability (statements like "more accurate" or "less accurate" are then possible). note 1: the trueness property is preferably the measurement error (vim 2.16) and the dispersion property is preferably the measurement uncertainty (vim 2.26). note 2: accuracy according to this definition can be applied, if the necessary information is available, to a set of measurement results, to a single measurement result, and to a measurement method and a measurement procedure. note 3: possible algorithms can be, e.g., summations of measurement error and measurement uncertainty in quadrature, or combinations of the absolute value of the measurement error and the quadrature of the measurement uncertainty, etc. a two-dimensional dartboard model can also be used to illustrate the concept, see figure 1. figure 1: accuracy dartboard model; x-axis: improving dispersion, y-axis: improving trueness. the x-axis is labelled "improving dispersion", the y-axis is labelled "improving trueness". it is left to the user whether "dispersion" is substituted by "precision" or by "uncertainty". the definition encompasses both concepts, which is no problem due to the fact that, according to the definition proposed above, accuracy delivers neither mathematically unambiguous nor metrologically traceable or comparable figures. it is admitted that the above proposed definition is rather lengthy, and it would be desirable for definitions to be brief and clear, without "notes". however, given the history of this concept and the ambiguities explained in this paper, it appears to be necessary to use more words to connect the new wording to its history and to clarify misconceptions. 7. summary in this paper, we have analysed the current use of the concept "accuracy". by way of example, we investigated three normative documents in which accuracy is defined. we have detected various incompatibilities, not only within the same document but also in the community using this concept. we tried to understand the historical development that might have affected the understanding of this concept, and we conclude with a synthesis proposal for a new definition that encompasses historical as well as modern concepts and thus offers full backwards compatibility.
8. references [1] jcgm 200:2012, international vocabulary of metrology – basic and general concepts and associated terms (vim), 3rd edition, 2008 version with minor corrections (including "vim definitions with informative annotations", last updated 29 april 2017), accessed 06/2020 [2] npl, the difference between accuracy & precision, https://www.npl.co.uk/skills-learning/outreach/school-posters/npl-schools-poster-_accuracy-precision-v7-hr-nc.pdf (accessed 06/2020) [3] international organization for standardization, iso 5725-1:1994, accuracy (trueness and precision) of measurement methods and results, geneva, switzerland [4] international organization for standardization, iso iwa 15:2015, specification and method for the determination of performance of automated liquid handling systems, geneva, switzerland [5] jcgm 100:2008, evaluation of measurement data – guide to the expression of uncertainty in measurement (gum 1995 with minor corrections) [6] a. possolo, s. schlamminger et al., evaluation of the accuracy, consistency, and stability of measurements of the planck constant…, metrologia 55 (2018) 29 [7] recommendation of the consultative committee for mass and related quantities submitted to the international committee for weights and measures; recommendation g 1 (2013) on a new definition of the kilogram, https://www.bipm.org/cc/ccm/allowed/14/31a_recommendation_ccm_g1(2013).pdf (accessed 06/2020) [8] nist engineering statistics handbook, https://www.itl.nist.gov/div898/handbook/mpc/section1/mpc113.htm (accessed 01/2020) introductory notes for the acta imeko special issue on the 24th imeko technical committee 4 international symposium and the 22nd international workshop on analogue-to-digital conversion and digital-to-analogue conversion modelling and testing acta imeko issn: 2221-870x june 2021, volume 10, number 2, 1-3 giuseppe caravello1, ciro spataro1 1 università degli studi di palermo, viale delle scienze, 90128 palermo, italy section: editorial citation: giuseppe caravello, ciro spataro, introductory notes for the acta imeko special issue on the 24th imeko technical committee 4 international symposium and the 22nd international workshop on analogue-to-digital conversion and digital-to-analogue conversion modelling and testing, acta imeko, vol. 10, no.
2, article 1, june 2021, identifier: imeko-acta-10 (2021)-02-01 editor: francesco lamonaca, university of calabria, italy received may 25, 2021; in final form may 25, 2021; published june 2021 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and source are credited. corresponding authors: giuseppe caravello, e-mail: giuseppe.caravello02@unipa.it ciro spataro, e-mail: ciro.spataro@unipa.it dear readers, measurement has always been a tool by which we can observe the world around us. this concept was once again confirmed during the 24th imeko technical committee 4 (tc4) international symposium, which showed how topics related to the world of measurement range across many fields of knowledge. the imeko tc4 international symposium is one of the most important events in the fields concerned with the theoretical and practical aspects of the measurement of electrical quantities and related instrumentation. it involves institutions and academia in a discussion of the state of the art and issues that require a joint approach by engineers, academics and other experts of measurement, instrumentation, testing and metrology. the 24th edition of the symposium was originally planned to be held in palermo, italy; however, due to the covid-19 emergency, the committee was forced to organise the event as a virtual conference. we do hope that, soon, there will be another chance to host you all in palermo. the virtual symposium was organised to make an online conference not so different from a live event. it was challenging to set up a web platform to enable live presentations, and we thank the organising team, who addressed this issue professionally. it was also challenging to pursue imeko tc4’s standard mission of creating an international platform where experts from academia and industry can consider the measurement of electrical quantities, emphasising both theoretical and practical aspects of research in the field. as in the last editions, the 2020 symposium covered a large number of engineering fields, from digitalisation to renewable energy and from acoustic and mechanical measurements to biomedical and chemical fields. the large space devoted to quantum metrology was a novelty this year. thanks to the help of many physicist colleagues, it was, in fact, possible to propose both a special session and a plenary talk on the subject. in this issue, quantum metrology is represented by martina marzano et al., who, in the paper ‘design and development of a coaxial cryogenic probe for precision measurements of the quantum hall effect in the ac regime’, describe and characterise a cryogenic probe able to perform very accurate measurements in the alternating current (ac) regime with impedance bridges. the characterisation results show that the probe can be usefully employed to reach the quantisation condition in hall devices, performing sensitive direct current (dc) measurements. three papers in the issue concern metrological characterisation. a first contribution, ‘metrological characterisation of current transformers calibration unit for accurate measurement’ by valentyn isaiev et al., proposes an approach to simulate the errors generated by current transformers with the aim of characterising the performances of the ac comparators commonly used to calibrate the transformers. 
the results show that the proposed approach can achieve several tenths of µa/a when calibrating a commercial transformer calibration unit under ordinary laboratory conditions. the paper by stefano sorti et al., 'metrological characterisation of rotating-coil magnetometer systems', characterises a rotating-coil magnetometer for the measurement of integral magnetic-field harmonics in accelerator magnets. the authors focus their attention on modelling the mechanical components of the device to predict the transducer response in both static and dynamic conditions. sioma baltianski, in the paper 'bias-induced impedance effect of the current-carrying conductors', presents previously unstudied properties of current-carrying conductors utilising impedance spectroscopy. the methodology is based on the superposition of test signals and bias affecting the objects under study. the work shows that the studied objects have an additional low-frequency impedance that can be either capacitive, inductive or both, depending on the current density and the properties of the material. as in the last editions, a high number of papers in the symposium concerned sensors and actuators. a special session was dedicated to the topic, which, in this special issue, is represented by five papers. a first contribution, by giovanni gugliandolo et al., 'on the design and characterisation of a microwave microstrip resonator for gas sensing applications', deals with relative humidity monitoring. the proposed solution is based on a one-port microwave gas transducer built around a microstrip resonator for electromagnetic wave propagation. the developed transducer can be applied to detect different target gases by selecting an appropriate sensing material tailored to the specific sensing application. the paper by federica vurchio et al., 'comparative evaluation of three image analysis methods for angular displacement measurement in a mems microgripper prototype: a preliminary study', compares measurements performed using different methods for the angular displacement of a comb drive in a microelectromechanical system (mems) gripper prototype for biomedical applications. the angular displacement was measured by means of two novel automatic procedures based on an image analysis method, and the performances of the proposed procedures were compared with those of a semi-automatic method. the contribution by antonino quattrocchi et al., 'pmma-coated fiber bragg grating sensor for measurement of ethanol in liquid solution: manufacturing and metrological evaluation', explores the possibility of measuring the concentration of ethanol in aqueous solutions by using polymethyl methacrylate (pmma) as a coating material for a single-mode fibre bragg grating sensor. a prototype of this sensor was developed and compared to traditional sensors. the paper by lorenzo ciani et al., 'design optimisation of a wireless sensor node using a temperature-based test plan', deals with the design optimisation of a sensor node in a wireless mesh network under temperature stress. since there is no specific standard for this kind of system, a customised test plan was developed in this work.
tommaso addabbo et al., in the paper 'solar energy harvesting for lorawan-based pervasive environmental monitoring', propose the architecture of a self-powered, low-power wide-area network sensor node for the pervasive measurement of particulate matter concentrations in urban areas. to validate the effectiveness of the proposed solution, various field tests were carried out with the integrated environmental monitoring device. three contributions in the issue deal with biomedical measurement. imran ahmed et al., in the paper 'iomt-based biomedical measurement systems for healthcare monitoring: a review', present an extended overview of recent activities towards the development of internet of medical things (iomt)-based biomedical measurement systems for various healthcare applications. several approaches that are used in the development of these systems are presented and discussed, and metrological aspects related to accuracy, reliability and calibration requirements are considered. a second contribution, by giorgia fiori et al., 'a preliminary study on an image analysis based method for lowest detectable signal measurements in pulsed wave doppler ultrasounds', proposes and validates a novel image-analysis-based method for the estimation of the lowest detectable signal in the spectrogram of blood flow velocity. the paper by marius-vasile ursachianu et al., 'experimental study on sar reduction from cell phones', deals with the measurement of the specific absorption rate (sar) relative to the emissions generated by three different generations of mobile phones. the test results quantify the dependence of the measured values on the positioning of the antenna, the size of the device, the position relative to the human head and the presence of protective cases. in the palermo symposium, the topic of renewable energy was highlighted, and it is represented here by three papers. the first contribution, by marco balato et al., 'buck-based dmppt emulator: a helpful experimental demonstration unit', deals with the troublesome question of the efficiency reduction in mismatched photovoltaic systems. in particular, the paper describes the realisation and use of a buck-based distributed maximum power point tracking (dmppt) emulator that makes it possible to fully understand the advantages offered by the dmppt approach in various operating conditions. alessio carullo et al., in the paper 'an innovative correction method of wind speed for efficiency evaluation of wind turbines', propose an innovative statistical method to evaluate the average efficiency of wind turbines by correcting the wind speed at the entrance of the rotor as assessed by a nacelle anemometer; the measured values, in fact, were systematically lower than the actual ones. the proposed statistical approach, unlike those already presented in the literature, does not need data measured from a meteorological station but is based only on the power curve declared by the turbine manufacturer. the aim of the paper by davide aloisio et al., 'comparison of machine learning techniques for soc and soh evaluation from impedance data of an aged lithium ion battery', is the determination of the state of charge (soc) and the state of health (soh) of batteries. the proposed approach, which was applied to an aged lithium-ion battery, was based on different machine learning techniques, and their performances were compared. the topic of electromagnetic compatibility is treated by andrea mariscotti et al.
their contribution, 'review of models and measurement methods for compliance of electromagnetic emissions of electric machines and drives', discusses the problem of electromagnetic compliance for electric machinery and power drives by introducing and reviewing the normative references for electromagnetic emissions, the available modelling approaches and their accuracy, and the measurement methods. the influence of the setup, the environment and the behaviour of various types of machines is also considered. energy monitoring is the subject of the paper by barbara cannas et al., 'nilm techniques applied to a real-time monitoring system of the electricity consumption'. the work presents a low-frequency non-intrusive load monitoring (nilm) system suitable for a typical domestic user. the system is able to disaggregate and keep track of a device's consumption by analysing low-frequency aggregate data. andrea mariscotti, in the paper 'power quality metrics for dc grids with pulsed power loads', analyses the interaction between pulsed power loads and a dc grid, and then focuses on metrics to quantitatively describe such interactions and on the impact on dc grid operation and power quality. the contribution by jakub svatos et al., 'system for an acoustic detection, localisation and classification', presents a system able to detect, localise and classify gunshots. the system consists of sensor units that continuously monitor acoustic events and an advanced digital signal processing technique that analyses the acoustic data. lastly, among the papers of the 22nd imeko international workshop on analogue-to-digital conversion (adc) and digital-to-analogue conversion (dac) modelling and testing, the paper by tomasz kowalski et al., 'design, characterisation, and digital linearisation of an adc analogue front-end for gamma spectroscopy measurements', deals with the design, characterisation and performance assessment of an analogue front end for use in the polish national centre for nuclear research's gamma spectrometry system. in-field experiments involving actual spectrometry measurements validated the designed front end. we would like to conclude these brief introductory notes by thanking, first, the authors for their interesting and high-quality papers and the reviewers for their indispensable and professional contribution. moreover, we would like to thank our past editor in chief, dušan agrež, who started the production of this special issue with us, our current editor in chief, francesco lamonaca, who has undertaken his new assignment with the same professionalism and competence as his predecessors, and the imeko tc4 chairperson, alexandru sălceanu, for his invaluable support and advice. it was a great honour for us to act as guest editors, and we hope that the readers will find this issue of acta imeko useful and inspiring.
giuseppe caravello and ciro spataro guest editors towards the development of a cyber-physical measurement system (cpms): case study of a bioinspired soft growing robot for remote measurement and monitoring applications acta imeko issn: 2221-870x june 2021, volume 10, number 2, 104-110 stanislao grazioso1, annarita tedesco2, mario selvaggio3, stefano debei4, sebastiano chiodini4 1 department of industrial engineering, university of naples federico ii, naples, italy 2 ims laboratory, university of bordeaux, bordeaux, france 3 department of electrical engineering and information technology, university of naples federico ii, naples, italy 4 department of industrial engineering, university of padova, padova, italy section: research paper keywords: 4.0 era; soft growing robots; remote monitoring; monitoring systems; remote sensing citation: stanislao grazioso, annarita tedesco, mario selvaggio, stefano debei, sebastiano chiodini, towards the development of a cyber-physical measurement system (cpms): case study of a bioinspired soft growing robot for remote measurement and monitoring applications, acta imeko, vol. 10, no. 2, article 15, june 2021, identifier: imeko-acta-10 (2021)-02-15 section editor: francesco lamonaca, university of calabria, italy received may 4, 2021; in final form may 11, 2021; published june 2021 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: stanislao grazioso, e-mail: stanislao.grazioso@unina.it abstract: the most effective expression of the 4.0 era is represented by cyber-physical systems (cpss). historically, measurement and monitoring systems (mmss) have been an essential part of cpss; however, by introducing the 4.0 enabling technologies into mmss, a mms can evolve into a cyber-physical measurement system (cpms). starting from this consideration, this work reports a preliminary case study of a cpms, namely an innovative bioinspired robotic platform that can be used for measurement and monitoring applications in confined and constrained environments. the innovative system is a "soft growing" robot that can access a remote site through controlled lengthening and steering of its body via a pneumatic actuation mechanism. the system can be endowed with different sensors at the tip, or along its body, to enable remote measurement and monitoring tasks; as a result, the robot can be employed to effectively deploy sensors in remote locations. in this work, a digital twin of the system is developed for the simulation of a practical measurement scenario. the ultimate goal is to achieve a self-adapting, fully/partially autonomous system for remote monitoring operations to be used reliably and safely for the inspection of unknown and/or constrained environments. 1. introduction the 4.0 era is characterized by an innovative multidisciplinary approach which addresses technical challenges by seeking transverse solutions to both technological and methodological problems [1]-[8]. the most effective expression of the 4.0 paradigm is represented by cyber-physical systems (cpss), i.e., smart systems that include engineered interacting networks of physical and computational components, able to monitor and control the physical environment [9]-[14]. from this definition, it appears that measurement and monitoring systems (mmss) are essential for the implementation of cpss. however, mmss are generally considered as subordinate elements of a cps: they provide a source of information for the cps (i.e., a connection to the physical world) but do not participate in any higher-level actions of the cps (i.e., conversion, cyber, cognition, configuration) [15]. mmss are historically seen as responsible for sensing the conditions of the physical environment, rather than as providers of higher-level intelligent information.
however, the suitable adoption of the enabling technologies can reshape the role of mmss in the 4.0 era. this requires a fundamental change of perspective, in which the enabling technologies stop being employed as external superstructures for mmss and become embedded solutions, intrinsically present in the architecture of the mms and fully effective through an adequate metrological configuration. this approach emphasizes the holistic nature of monitoring and paves the way for 4.0 transition-driven monitoring systems. through the wise adoption of the 4.0 enabling technologies, mmss can turn into self-aware, self-conscious, self-maintained entities, able to generate highly valued insights, just like cpss. this means that mmss can evolve into cyber-physical measurement systems (cpmss), thus becoming a proactive expression of the 4.0 era, strengthening not only the role of measurement but also the performance of the overall 4.0 ecosystem. starting from these considerations, this work introduces a definition for cpmss and presents a preliminary case study, i.e., a "soft growing" robot for measurement and monitoring applications in constrained and confined remote environments. this system is a concrete example of a cpms for its ability to sense itself within the environment (i.e., self-awareness), to adapt itself to the surrounding environment thanks to its softness and its growth and steering capabilities (i.e., self-configure), and to autonomously predict navigation paths towards the inspection target (i.e., self-predict). embedding multiple 4.0 enabling technologies in one measurement and monitoring system represents a promising solution for achieving the cpms capabilities. this work is an extension of our previous conference paper [16], where the general idea was preliminarily introduced. this paper is organized as follows. in section 2, the state of the art of robotic technologies is presented, with a focus on remote measurement and monitoring applications in difficult-to-reach environments. then, in section 3, the definition of the cpms is introduced.
section 4 relates to the design and implementation of the soft growing robot and shows a simulated measurement and monitoring scenario of the cpms in a remote location. finally, in section 5, conclusions are drawn and future work is outlined.

2. background

as is well known, robotic technologies are largely used for carrying out remote measurement tasks, especially in environments that are hazardous, dangerous or difficult to reach for humans [17], [18]. nevertheless, there are several applications in which traditional rigid-bodied robot technologies cannot be used. this occurs, for example, when measurements must be carried out in confined, constrained, or unknown environments (e.g., inspection of difficult-to-reach industrial environments, exploration of archaeological sites, which are often inaccessible and fragile). one technological solution suitable for performing this kind of task is represented by soft continuum robots [19], i.e., robots with a continuously deformable mechanical structure, whose design is inspired by the principles of shaping, movement, sensing and control of soft biological systems [20]. in the literature, there are several examples of soft continuum robots for remote measurement applications in different fields: space, airlines, nuclear, marine (inspection and maintenance), medical (minimally invasive surgery), and so on [21].

one limitation of soft continuum robots is their limited workspace, as they usually have a fixed base and a pre-established length; this can be a problem for tasks that require inspection and exploration of large environments. to overcome this issue, soft continuum robots can be endowed with locomotion capabilities, by using tethered/untethered fluidic or cable-driven actuators, taking inspiration from animal movements (snakes, earthworms, caterpillars) [22]-[24]. however, this solution involves a relative movement between the robot and the environment, which can lead to low energy efficiency when high sliding friction is present.

a recent design solution of soft continuum robots achieves enhanced mobility through growth rather than locomotion, taking inspiration from the growing process of plants and vines [25]. these robots, referred to as soft growing robots, achieve mobility by everting new material at the tip: this enables lengthening without relative movement between the robot's body and the environment. with soft growing robots, the inspection and exploration length in remote environments is therefore limited only by the amount of the robot's body material that can be transported to the field. although different mechanisms have been used to enable this form of apical extension [26], pneumatically-driven solutions have recently shown great potential [27]. in these systems, the growing process is implemented by pressurizing a fluid (typically, air) inside a chamber created by a self-folded cylindrical body, which unfolds by everting new material from the tip; this enables the forward growth of the robot's body through lengthening. while growing, the robot's body can be curved/steered by additional actuators distributed along its body (e.g., pneumatic artificial muscles [28], artificial tendons, etc.). the contraction of these additional actuators on one side causes bending (or kinking) along that direction. an example of the growing and steering processes of soft growing robots is shown in figure 1.
3. cyber-physical measurement system

the concept of cpms is built on the 5c architecture [10] used for cpss, as shown in figure 2. it consists of the following five levels:
1. connection layer: connected with the physical world, where the measurements are collected and the sensing is performed.
2. conversion layer: responsible for the very first processing for endowing the system with self-awareness capabilities (i.e., reconstruction of its internal state).
3. cyber layer: responsible for the development of the digital twin model of the measurement system and for endowing the system with self-comparison capabilities (i.e., self-awareness within the network).
4. cognition layer: responsible for cognition and reasoning, i.e., high-level models and algorithms to endow the measurement system with decision support capabilities.
5. configuration layer: starting from the knowledge generated by the cognition layer, this layer generates corrective actions, such as adaptation and reactiveness to the environment.

figure 1. example of the concept of a soft growing robot that achieves enhanced mobility through growth (top image); concept of growth by eversion of material rolled onto a spool (central image); and concept of curving by pressurization, with contraction of soft actuators placed and sealed on the main body (bottom image). in the example, a camera is placed on the tip of the robot.

a cpms can be defined as a novel form of mms which, in addition to collecting data from the physical world, is able to provide higher-level information thanks to the use of suitable models and 4.0 enabling technologies. similarly to a cps, a cpms has knowledge of its state in time and space (self-awareness) and with respect to other systems in the network (self-comparison); it is capable of enforcing actions for its own maintenance (self-maintain), predicting its own evolution in time and space (self-predict), and adapting to the environment (self-configure). drawing a comparison with the master-slave architecture, mmss have historically represented the slaves (mostly dedicated to data collection) of a master (i.e., the cps itself). by evolving into cpmss, instead, mmss become cpss among cpss; eventually, the current master-slave relationship between mmss and cpss turns into a peer-to-peer cooperation.

4. case study: the soft growing robot

this section addresses the design and implementation of the soft growing robot, as a preliminary example of a cpms to be used for applications within confined and constrained remote environments. first, the design requirements of the system are presented. successively, its major components (i.e., the robotic platform and the electronic control unit) are described in detail. finally, an example of a practical measurement scenario through a simulated digital twin of the system is presented. the architecture of the soft growing robot is shown in figure 3.

4.1. requirements

the requirements of the soft growing robot are the following:
• access within environments with small cross sections (with minimum dimension equal to 100 mm);
• high inspection/exploration length (up to 10 m) while maintaining portability;
• controllable growth;
• steering/curving capability; and
• human situation awareness.

the long-term goal is to develop the first soft growing robot endowed with model-based strategies [29] for planning [30], control and navigation to accomplish remote measurement tasks. a minimal sketch of how the 5c layers of section 3 could map onto such a platform is given below.
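to make the mapping between the 5c architecture and the robotic platform more concrete, the following minimal python sketch outlines how the five layers could be organized in software for the soft growing robot. it is purely illustrative: all class, method and parameter names (and the spool radius value) are assumptions of this sketch, not code or design details of the actual system.

```python
# illustrative sketch only: maps the 5c architecture onto the soft growing
# robot case study. all names and values here are hypothetical.
from dataclasses import dataclass


@dataclass
class SensorFrame:
    pressure_pa: float      # vessel pressure (connection-layer input)
    spool_angle_rad: float  # encoder reading used to infer body length
    image: bytes            # tip-camera frame


class GrowingRobotCPMS:
    SPOOL_RADIUS_M = 0.05   # assumed spool radius, for illustration only

    def connect(self) -> SensorFrame:
        """connection layer: acquire raw measurements from the robot."""
        raise NotImplementedError  # hardware-specific

    def convert(self, frame: SensorFrame) -> dict:
        """conversion layer: first processing for self-awareness, e.g.
        reconstructing the body length from the spool encoder."""
        return {"length_m": self.SPOOL_RADIUS_M * frame.spool_angle_rad}

    def cyber(self, state: dict) -> dict:
        """cyber layer: update the digital twin with the internal state."""
        return {"twin_state": state}

    def cognition(self, twin: dict) -> str:
        """cognition layer: high-level reasoning / decision support."""
        return "grow" if twin["twin_state"]["length_m"] < 10.0 else "hold"

    def configure(self, decision: str) -> None:
        """configuration layer: enact corrective/adaptive actions."""
        print(f"actuation command: {decision}")
```

in such a scheme, the conversion layer already illustrates a metrologically meaningful step: the body length is not sensed directly but reconstructed from the spool encoder reading.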
the requirements above make it possible to develop a general-purpose platform for inspection and exploration usable in a wide range of scenarios.

figure 2. description of a cpms according to the 5c architecture.

figure 3. architecture of the proposed soft growing robotic system (dimensions not to scale).

4.2. robotic platform

the robotic platform is mainly constituted by the robot base (where the body material is stored) and the robot body (which grows and accesses the remote environment). the cad model of the robotic platform is shown in figure 4. the robot base is the container of the unfolded robot body and represents the pressurized vessel when the robot is in operation. it is formed by an acrylic cylinder with two end caps (qc-108 qwik cap, fernco inc., davison, mi). the spool for rolling out the robot material is driven by a dc motor (6408k27, mcmaster-carr inc., douglasville, ga) with a magnetic encoder, which allows growing/retracting the robot body and measuring its current length. the vessel of the robot base is a cylinder with diameter d equal to 25 cm and height h equal to 50 cm.

the main body of the soft growing robot is made of an airtight tube which is flexible but not stretchable. during the eversion process, the material should slide relative to itself with negligible friction and should guarantee high durability for field applications. to this end, a double-side silicon-coated ripstop nylon (rockywoods fabric llc, loveland, co) is used as the material for the main body of the robot. the soft robot body is rolled up and fixed at one end around a spool inside the base vessel: when pressurized, the material is pushed outside the robot base through an opening and everts from the tip of the robot. when fully extended, the robot body reaches a diameter p of 10 cm and a maximum length l of 10 m. the forward growth is controlled by finding a suitable balance between the desired air pressure inside the vessel and the desired spool rotation, and thus the motor angular velocity about the axis of the spool. to guarantee reversible steering/curving of the robot body, soft pneumatic actuators (made of the same material as the robot's body) are placed along the entire length of the robot [28]: the steering/curving control is obtained by a suitable pressurization of these additional actuators, considering models of the curvature/deformation of the robot's shape [31]. during the retraction process of the robot's body, to avoid structural kinking, suitable mechanisms to drive the retraction should be foreseen [32].

4.3. electronic control unit

the electronic control unit is composed of two sub-systems: one for generating the desired air pressure for the pressurization of the vessel, and one for generating the desired voltage for the dc motor for growth/retraction of the robot body. the pneumatic circuit regulates the air pressure by pulse-width modulation (pwm), which involves the controlled timing of the opening and closing of solenoid valves (sy114-5lou, smc) through a mosfet board (based on irf540 mosfets, stmicroelectronics), with pressure sensors (asdxavx100pgaa5, honeywell) providing feedback. the pneumatic circuit is an essential component of the robot, as it is responsible for the growth process (one pneumatic tube for the main tube of the robot body) and for the steering of the robot body (one pneumatic tube for each of the serial soft actuators placed along the robot body). a minimal sketch of such a pwm pressure-regulation loop is given below.
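as a rough illustration of the pwm-based regulation described above, the following python sketch closes a proportional loop between a pressure reading and the valve duty cycle. the plant model, gains and timing values are invented for the example; they do not reproduce the parameters of the actual circuit (smc valves, irf540 mosfet board, honeywell sensors), which are not reported in the paper.

```python
# minimal sketch of pwm-based pressure regulation with a solenoid valve and
# pressure feedback, in the spirit of the circuit described above.
# all numerical values are illustrative assumptions.
PWM_PERIOD_S = 0.05          # assumed pwm period
KP = 0.002                   # assumed proportional gain (duty cycle per pa)


class SimulatedVessel:
    """toy first-order plant standing in for the real vessel + valve."""
    def __init__(self):
        self.pressure_pa = 0.0

    def step(self, valve_open: bool, dt: float):
        inflow = 20000.0 if valve_open else 0.0   # pa/s, illustrative
        leak = 0.2 * self.pressure_pa             # pa/s, illustrative
        self.pressure_pa += (inflow - leak) * dt


def regulate(vessel: SimulatedVessel, target_pa: float, cycles: int = 200):
    """proportional pwm regulation: the valve opening time per period grows
    with the pressure error, mimicking closed-loop pressure control."""
    for _ in range(cycles):
        error = target_pa - vessel.pressure_pa
        duty = min(max(KP * error, 0.0), 1.0)     # clamp duty to [0, 1]
        vessel.step(valve_open=True, dt=duty * PWM_PERIOD_S)
        vessel.step(valve_open=False, dt=(1.0 - duty) * PWM_PERIOD_S)
    return vessel.pressure_pa


if __name__ == "__main__":
    print(f"settled pressure: {regulate(SimulatedVessel(), 5000.0):.0f} pa")
```

in the real system, this pressure set-point would additionally be coordinated with the spool motor, since the tip advances at a speed of roughly the spool radius times its angular velocity; the balance between the two determines the controllable growth mentioned in section 4.2.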
this pwm-based, home-made pneumatic circuit substitutes more expensive solutions based on professional closed-loop pressure regulators. the prototype of the developed electronic control unit (related to the compressed-air circuit) is shown in figure 5.

4.4. example of a practical measurement scenario

the practical measurement scenario consists of a human operator performing visual inspection of a remote site through the proposed soft growing robot, endowed with a tip-mounted camera. an input device is used by the human operator to impart growing/steering commands. a digital twin of the measurement system is built in v-rep, including the model of the robot (modelled as a growing constant-curvature robot [33]), the model of the cameras, as well as the model of the environment. the constant-curvature assumption is reasonable when artificial pneumatic muscles are used to steer the robot, as shown in [30]. snapshots of the measurement scenario are shown in figure 6, where the remotely operated soft growing robot can be seen approaching and inspecting a target (red box) within the simulated remote site.

the soft growing robotic platform represents a novel technology for the inspection of confined and constrained remote environments that are not accessible by current technologies. furthermore, it represents a suitable platform for enabling measurement applications in large, gps-denied environments. in addition to industrial inspections, this robotic platform may be used for exploration purposes or even for search and rescue applications after accidents, earthquakes, and collapses of buildings. finally, an additional application is sensor delivery in difficult-to-reach remote sites.

figure 4. cad representation of the robotic platform.

5. conclusion and future work

in this work, a definition for cpms was introduced and a soft growing robot was presented as a case study. a cpms can be considered as a 4.0-oriented evolution of the traditional mms; as a matter of fact, thanks to the adoption of 4.0 enabling technologies, a mms is seen not only as a system for collecting data, but also for data processing and interpretation. the soft growing robot as a cpms is intended to be used for remote measurement and monitoring applications in constrained and confined environments. the proposed system consists of a robot base, to be placed outside the remote site, and a soft body that accesses the site through growth, with the possibility of controlling length and steering. at the tip of the system, a sensor can be placed to enable remote measurement tasks, or a sensor can be deployed through the robot body when the target is reached. in this work, we have considered a tip-mounted camera. the benefits of using soft growing robots for remote measurement and monitoring applications are: access within small-scaled sections; high inspection/exploration length; transportability; controllable growth and steering; safe interaction with the environment; and the ability to perform sensor deployment.

to get closer to a full definition of cpms, future work will be dedicated to embedding additional 4.0 enabling technologies (e.g., artificial intelligence algorithms) in the cpms, to endow the monitoring system with autonomous planning/navigation capabilities. also, effort will be dedicated to motion analysis and control in highly constrained situations.
additionally, a set of practical application cases will be identified to test the system and assess its metrological performance. finally, suitable sensing technologies and processing strategies will be developed to enhance the metrological performance of the system (e.g., in terms of resolution, reliability and accuracy of interaction with the environment). the ultimate goal is to achieve a self-adapting, fully autonomous system for remote monitoring operations to be used reliably and safely for the inspection of unknown and/or constrained and confined environments.

figure 5. picture of the electronic control unit (pneumatic board).

figure 6. example scenario for remote inspection through the proposed soft growing robot. by using a suitable input device, the human operator drives the growth and steering of the robotic system in the remote environment, accessed through an access section which is slightly larger than the diameter of the robot's body. the human operator sees the remote environment through the tip-mounted camera.

references
[1] d. seneviratne, l. ciani, m. catelani, d. galar, smart maintenance and inspection of linear assets: an industry 4.0 approach, acta imeko 7(1) (2018), pp. 50-56. doi: 10.21014/acta_imeko.v7i1.519
[2] t. i. erdei, z. molnár, n. c. obinna, g. husi, a novel design of an augmented reality based navigation system & its industrial applications, acta imeko 7(1) (2018), pp. 57-62. doi: 10.21014/acta_imeko.v7i1.528
[3] p. arpaia, e. de benedetto, l. duraccio, design, implementation, and metrological characterization of a wearable, integrated ar-bci hands-free system for health 4.0 monitoring, measurement 177 (2021), art. no. 109280. doi: 10.1016/j.measurement.2021.109280
[4] p. arpaia, e. de benedetto, c. a. dodaro, l. duraccio, g. servillo, metrology-based design of a wearable augmented reality system for monitoring patient's vitals in real time, ieee sensors journal 21(9) (2021), pp. 11176-11183. doi: 10.1109/jsen.2021.3059636
[5] l. angrisani, u. cesaro, m. d'arco, o. tamburis, measurement applications in industry 4.0: the case of an iot-oriented platform for remote programming of automatic test equipment, acta imeko 8(2) (2019), pp. 62-69. doi: 10.21014/acta_imeko.v8i2.643
[6] r. schiavoni, g. monti, e. piuzzi, l. tarricone, a. tedesco, e. de benedetto, a. cataldo, feasibility of a wearable reflectometric system for sensing skin hydration, sensors 20(10) (2020), art. no. 2833. doi: 10.3390/s20102833
[7] m. prist, a. monteriù, e. pallotta, p. cicconi, a. freddi, f. giuggioloni, e. caizer, c. verdini, s. longhi, cyber-physical manufacturing systems: an architecture for sensor integration, production line simulation and cloud services, acta imeko 9(4) (2020), pp. 39-52. doi: 10.21014/acta_imeko.v9i4.731
[8] a. cataldo, e. de benedetto, l. angrisani, g. cannazza, e. piuzzi, a microwave measuring system for detecting and localizing anomalies in metallic pipelines, ieee transactions on instrumentation and measurement 70 (2021), art. no. 8001711. doi: 10.1109/tim.2020.3038491
[9] cyber-physical systems public working group, framework for cyber-physical systems: volume 1, overview, version 1.0, nist special publication 1500-201, 2017. online [accessed 09 june 2021] https://pages.nist.gov/cpspwg/
[10] a. ahmadi, c. cherifi, v. cheutet, y. ouzrout, a review of cps 5 components architecture for manufacturing based on standards, in proc. of ieee 2017 11th international conference on software, knowledge, information management and applications (skima), sri lanka, 06-08 december 2017, pp. 1-6. doi: 10.1109/skima.2017.8294091
[11] a. tedesco, m. gallo, a. tufano, a preliminary discussion of measurement and networking issues in cyber-physical systems for industrial manufacturing, in proc. of 2017 ieee international workshop on measurement and networking, m&n 2017, naples, italy, 27-29 sept. 2017. doi: 10.1109/iwmn.2017.8078384
[12] a. drago, s. marrone, n. mazzocca, r. nardone, a. tedesco, v. vittorini, a model-driven approach for vulnerability evaluation of modern physical protection systems, software and systems modeling 18 (2019), pp. 523-556. doi: 10.1007/s10270-016-0572-7
[13] a. sforza, c. sterle, p. d'amore, a. tedesco, f. de cillis, r. setola, optimization models in a smart tool for the railway infrastructure protection, in: critis 2013: critical information infrastructures security, lecture notes in computer science, 8328, amsterdam, the netherlands, 16-18 sept. 2013, pp. 191-196. doi: 10.1007/978-3-319-03964-0_17
[14] p. d'amore, a. tedesco, technologies for the implementation of a security system on rail transportation infrastructures, in topics in safety, risk, reliability and quality 27 (2015), pp. 123-141. doi: 10.1007/978-3-319-04426-2_7
[15] d. yin, x. ming, x. zhang, understanding data-driven cyber-physical-social system (d-cpss) using a 7c framework in social manufacturing context, sensors 20(18) (2020), art. no. 5319. doi: 10.3390/s20185319
[16] s. grazioso, a. tedesco, m. selvaggio, s. debei, s. chiodini, e. de benedetto, g. di gironimo, a. lanzotti, design of a soft growing robot as a practical example of cyber-physical measurement systems, in proc. of 2021 ieee international workshop on metrology for industry 4.0 and iot, rome, italy, 7-9 june 2021.
[17] m. friedrich, g. dobie, c. chan, s. pierce, w. galbraith, s. marshall, g. hayward, miniature mobile sensor platforms for condition monitoring of structures, ieee sensors journal 9(11) (2009), pp. 1439-1448. doi: 10.1109/jsen.2009.2027405
[18] m. y. moemen, h. elghamrawy, s. n. givigi, a. noureldin, 3-d reconstruction and measurement system based on multi mobile robot machine vision, ieee transactions on instrumentation and measurement 70 (2021), pp. 1-9. doi: 10.1109/tim.2020.3026719
[19] c. della santina, m. g. catalano, a. bicchi, soft robots, berlin, heidelberg: springer, 2020. doi: 10.1007/978-3-642-41610-1_146-2
[20] s. kim, c. laschi, b. trimmer, soft robotics: a bioinspired evolution in robotics, trends in biotechnology 31(5) (2013), pp. 287-294. doi: 10.1016/j.tibtech.2013.03.002
[21] l. angrisani, s. grazioso, g. di gironimo, d. panariello, a. tedesco, on the use of soft continuum robots for remote measurement tasks in constrained environments: a brief overview of applications, in proc. of 2019 ieee international symposium on measurements & networking (m&n), catania, italy, 8-10 july 2019, pp. 1-5. doi: 10.1109/iwmn.2019.8805050
[22] w. hu, g. z. lum, m. mastrangeli, m. sitti, small-scale soft-bodied robot with multimodal locomotion, nature 554 (2018), pp. 81-85. doi: 10.1038/nature25443
[23] i. h. han, h. yi, c.-w. song, h. e. jeong, s.-y. lee, a miniaturized wall-climbing segment robot inspired by caterpillar locomotion, bioinspiration & biomimetics 12(4) (2017), 13 pages. doi: 10.1088/1748-3190/aa728c
[24] g. gu, j. zou, r. zhao, x. zhao, x. zhu, soft wall-climbing robots, science robotics 3(25) (2018), 12 pages.
doi: 10.1126/scirobotics.aat2874
[25] e. w. hawkes, l. h. blumenschein, j. d. greer, a. m. okamura, a soft robot that navigates its environment through growth, science robotics 2(8) (2017), 8 pages. doi: 10.1126/scirobotics.aan3028
[26] a. sadeghi, a. mondini, b. mazzolai, toward self-growing soft robots inspired by plant roots and based on additive manufacturing technologies, soft robotics 4(3) (2017), pp. 211-223. doi: 10.1089/soro.2016.0080
[27] j. d. greer, t. k. morimoto, a. m. okamura, e. w. hawkes, a soft, steerable continuum robot that grows via tip extension, soft robotics 6 (2019), pp. 95-108. doi: 10.1089/soro.2018.0034
[28] n. d. naclerio, e. w. hawkes, simple, low-hysteresis, foldable, fabric pneumatic artificial muscle, ieee robotics and automation letters 5(2) (2020), pp. 3406-3413. doi: 10.1109/lra.2020.2976309
[29] s. grazioso, g. di gironimo, b. siciliano, a geometrically exact model for soft continuum robots: the finite element deformation space formulation, soft robotics 6 (2019), pp. 790-811. doi: 10.1089/soro.2018.0047
[30] m. selvaggio, l. ramirez, n. naclerio, b. siciliano, e. hawkes, an obstacle-interaction planning method for navigation of actuated vine robots, in proc. of 2020 ieee international conference on robotics and automation (icra), paris, france, 31 may - 31 aug. 2020, pp. 3227-3233. doi: 10.1109/icra40945.2020.9196587
[31] s. grazioso, g. di gironimo, b. siciliano, from differential geometry of curves to helical kinematics of continuum robots using exponential mapping, in proc. of international symposium on advances in robot kinematics, bologna, italy, 01-05 july 2018, pp. 319-326. doi: 10.1007/978-3-319-93188-3_37
[32] m. m. coad, r. p. thomasson, l. h. blumenschein, n. s. usevitch, e. w. hawkes, a. m. okamura, retraction of soft growing robots without buckling, ieee robotics and automation letters 5(2) (2020), pp. 2115-2122. doi: 10.1109/lra.2020.2970629
[33] s. grazioso, g. di gironimo, b. siciliano, analytic solutions for the static equilibrium configurations of externally loaded cantilever soft robotic arms, in proc. of 2018 ieee international conference on soft robotics (robosoft), livorno, italy, 24-28 april 2018, pp. 140-145. doi: 10.1109/robosoft.2018.8404910
validation of a measurement procedure for the assessment of the safety of buildings in urgent technical rescue operations

acta imeko, issn: 2221-870x, december 2021, volume 10, number 4, 140-146

maria alicandro1, giulio d'emilia2, donatella dominici1, antonella gaspari3, stefano marsella4, marcello marzoli4, emanuela natale2, sara zollini1
1 diceaa, università dell'aquila, via g. gronchi 18, 67100 l'aquila, italy
2 diiie, università dell'aquila, via g. gronchi 18, 67100 l'aquila, italy
3 dmmm, politecnico di bari, via orabona 4, 70125 bari, italy
4 dipartimento dei vigili del fuoco, del soccorso pubblico e della difesa civile, ministero dell'interno, piazzale viminale 1, 00184 roma, italy

section: research paper
keywords: validation; measurement uncertainty; calibration; total station; building monitoring; technical rescue
citation: maria alicandro, giulio d'emilia, donatella dominici, antonella gaspari, stefano marsella, marcello marzoli, emanuela natale, sara zollini, validation of a measurement procedure for the assessment of the safety of buildings in urgent technical rescue operations, acta imeko, vol. 10, no. 4, article 23, december 2021, identifier: imeko-acta-10 (2021)-04-23
section editor: roberto montanini, università di messina and alfredo cigada, politecnico di milano, italy
received july 26, 2021; in final form december 6, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: emanuela natale, e-mail: emanuela.natale@univaq.it

abstract: this work would like to provide a preliminary contribution to the draft of standard procedures for the adoption of total stations by rescuers in emergency situations, so as to offer a reliable and effective support to their assessment activities. in particular, some considerations will be made regarding the effect of the number and positioning of monitoring points on the tilt determination of a building façade, in order to set up simplified procedures, which are quick and easy to implement in emergency situations, while at the same time guaranteeing the reliability of the results. two types of building will be taken into account as test cases, which have different characteristics in terms of height, distance and angle with respect to the total station. some considerations will be made about the aspects to be explored in future work, for the calibration of the method as a whole and the definition of all the steps of a procedure for the evaluation of the safety of a building.

1. introduction

seismic-damage prevention is one of the main goals of researchers in the management of historical buildings, and several authors have dealt with the appraisal and the inventory of the building heritage [1]-[7]. a particular application concerns the evaluations required during urgent technical rescue operations. besides the primary target of searching for and rescuing survivors, soon after an earthquake most resources are spent to evaluate the level of damage suffered by buildings and infrastructures, both in the immediate aftermath of the event, to support the logistics of the rescue itself and to assess the level of safety of the rescue operations and the road network, and in the following phases, to implement provisional measures able to secure buildings and, in particular, the cultural heritage [8]-[12]. such assessment is quite challenging and critical, both from the technical and the logistic point of view, due to the high number of buildings (up to thousands), a good part of which could be listed as cultural heritage. moreover, recent earthquakes were followed by 1-3 months of aftershocks of similar intensity, which wasted most of the spent effort and imposed repeating the assessments anew.
up to now, in italy as well as worldwide, this task has been carried out thanks to the expertise of fire fighters or other technicians, who had to assess the residual safety of buildings on the sole basis of a visual inspection. such an assessment is obviously subjective, even if carried out by applying strict operational procedures, which foresee the analysis of the damage against well-defined schemas.

to improve the efficiency and reduce the subjectivity of such assessments, the authors propose to employ a dedicated survey system designed to support rescue operations and the implementation of provisional measures. such a system is now possible thanks to the recent availability of dedicated, user-friendly human-machine interfaces, able to hide the complexity of the employed tools while maintaining the scientific value of the retrieved data. in fact, such user-friendly interfaces make the tools accessible to untrained first responders, who are employed on the field in the first phases of the rescue operations [8]-[12]. systems and tools make the assessment of building residual safety less subjective and, in particular, could decisively improve the following:
• fast execution of accurate surveys of damaged buildings, so as to reduce the exposure risk of rescuers in the first phase of the emergency;
• detailed design of provisional measures;
• quantitative monitoring of building damage evolution over time;
• quantitative monitoring of provisional measures over time, to assess their residual efficacy, in particular following aftershocks.

available technologies offer several instruments able to reach this aim. for example, the most common methods to monitor and assess cultural heritage building damage are based on satellite systems, photogrammetry, laser scanning, and infrared thermography [13]-[17]. 3d reconstruction techniques of buildings, based on unmanned aerial vehicle (uav) tilt photography, have the advantage of multi-angle, three-dimensional coverage, but they are very time-consuming and high levels of accuracy are not easy to achieve [18]-[21]. approaches have been proposed for identifying intact and collapsed buildings via multi-scale morphological profiles with multi-structuring elements from post-earthquake satellite imagery, or using synthetic aperture radar (sar) techniques; however, these methodologies do not examine in detail the structural characteristics of individual buildings [22]-[24].
total stations are remote sensing tools which can be easily used both indoors and outdoors, thanks to their easy installation, real-time output, rain and wind resistance, wide range of operating temperatures and insensitivity to light conditions. when total stations are used without reflective targets, point distances are measured remotely, at a safe distance from the target building, so that surveys are completely non-invasive and safe for the operators. as such, they allow a good balance between precision, data quality, ease of use, cost and real-time processing. there are studies in the literature that refer to the indoor calibration of total stations [25].

to obtain reliable outcomes, it is crucial to standardise procedures which take into proper account the constraints imposed by deployment on the field in emergency situations. in fact, such an environment implies numerous operational issues, due to fast-evolving scenarios, with unpredictable intrusions by rescuers and vehicles. in particular:
• the impossibility of measuring some target points, especially the lowest, due to the interposition of obstacles (rescuers and rescue vehicles) between instrument and target;
• the need to deploy as fast as possible, and to survey the minimal number of target points needed to obtain outcomes with sufficient accuracy;
• the possibility of accidental or voluntary movements of the instruments (e.g., when caterpillars have to move in between), for which it is necessary to define stable reference points, in order to correctly register series of data acquired from different positions.

in the process of simplifying and speeding up procedures for using instrumentation in specific applications, validation techniques are needed to ensure the reliability of the results [26]-[29]. in this paper the authors, as a preliminary contribution, report their considerations regarding the impact of the number and positioning of reference and monitoring points on the tilt determination of a building façade. this work would like to contribute to the draft of standard procedures for the adoption of total stations by rescuers in emergency situations, so as to offer reliable and effective support to their assessment activities.

section 2 describes the instrumentation used for these tests and the layout of the site; the position of the measuring points is also discussed, together with the measurement set-up, which makes it possible to reduce step by step the monitoring points on which data processing is carried out. in section 3, the results are presented and analysed with reference to procedure simplification purposes. conclusions and future work end the paper.

2. materials and methods

the test area is located in fossa, a small village next to l'aquila, in central italy (figure 1), hit by the 2009 earthquake [15]. the area is a square surrounded by three buildings and a tower. reference targets have been materialised on two buildings, while monitoring points have been located on the other buildings, the "house" and the "tower" in figure 2. these buildings have been chosen as test cases for the analyses, as they have different characteristics in terms of height, which implies different distances and inclination angles of the highest points with respect to the position of the measurement system, placed roughly halfway between the two buildings.

the safer system has been used to perform the measurements. the safer system has been designed to estimate and monitor critical movements of structures and civil works.
the operating modes and functions have been developed to make the system as suitable as possible for the emergency activities of the fire fighters. it is composed of (figure 3):
• a total station leica geosystems (tps), model ts16 imaging, a tool used extensively in monitoring activities, which measures azimuthal and zenithal angles and distances from the instrumental centre to the measured point with high precision; these measures (polar coordinates) are transformed by the system into cartesian coordinates (x, y, z) (a sketch of this conversion is given below);
• the safer software, which manages the total station;
• a tablet pc, with the software installed on it, able to communicate with the leica total station (tps) both wirelessly and with dedicated cabling.

figure 1. test area location [image credit: google earth].

by measuring the 3d coordinates, it is possible to carry out monitoring and control of slow or relatively fast movements over time, both in external and internal environments, during day and night. all measurements are carried out automatically in order to exclude any errors by the operators. the measurement results are immediately processed in the field and displayed in graphical and tabular form, helping the operator to quickly interpret the phenomenon in progress and make immediate decisions.

the safer system is designed to provide spatial information (3d coordinates) of selected points, and the operational scene is schematised in figure 4. the points on the building considered to be stable are called reference points, while the ones on the building to be monitored are called monitoring points. the safer system is positioned at the centre in order to have maximum visibility. all the preliminary operations required by the standard procedures for using this kind of equipment (fixing the tripod to the ground, levelling) have been correctly carried out.

the measurement by the total station can be carried out in two different ways, by using:
• an infrared ray;
• a laser beam.
surveying by infrared ray needs a reflective target, the prism. using the prism allows more accurate measurements and guarantees the achievement of greater distances. the survey by laser beam is adopted to detect points that are difficult to access for the operator, who must physically place the prisms; in this case, in fact, it is not necessary to adopt any external reflecting system.

reference points have to be defined to provide a stable reference, and they must be fixed according to an optimal geometry (not too close together or aligned) and on fixed points, not susceptible to movement: these aspects will be taken into account in the procedure definition. seven reference points have been placed in positions independent of the monitored buildings. at the reference points, prisms are used; at the monitoring points, white/black targets (figure 5) are positioned using epoxy resins, to have easily identifiable reference positions on which to impose, in perspective, known displacements for the evaluation of the metrological characteristics of the instrument and to obtain useful information for the optimization of the procedure. the latter targets have reflectivity characteristics similar to those of a common building wall.
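as anticipated in the component list above, the total station internally converts the measured polar quantities into cartesian coordinates. a minimal python sketch of this conversion is given here; the angle convention (azimuth measured from the y axis, zenith angle from the vertical) is a common surveying convention assumed for the example, since the internal convention of the safer software is not specified in the paper.

```python
# minimal sketch of the polar-to-cartesian conversion performed by a total
# station: alpha = azimuthal angle, zeta = zenith angle, d = slope distance.
import math


def polar_to_cartesian(d: float, alpha_deg: float, zeta_deg: float):
    alpha = math.radians(alpha_deg)
    zeta = math.radians(zeta_deg)
    horizontal = d * math.sin(zeta)   # projection on the horizontal plane
    x = horizontal * math.sin(alpha)
    y = horizontal * math.cos(alpha)
    z = d * math.cos(zeta)            # origin at the instrument height
    return x, y, z


# example: a point 25 m away, 10 degrees above the horizon (zeta = 80 deg)
print(polar_to_cartesian(25.0, 30.0, 80.0))
```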
a coordinate system has been defined with reference to the first building, the house, in such a way that the x-axis direction is obtained by projecting on the horizontal plane the points taken on the façade, the z-axis is vertical with its origin on the horizontal plane at the height of the instrument, and the y-axis enters the wall of the house. figure 6 shows the point clouds acquired on the façades of the two buildings and the defined reference systems. for each monitoring point, 20 repeated measurements have been carried out. the monitoring points have been named as indicated in figure 7.

figure 2. monitored buildings: a) house; b) tower.

figure 3. safer system used for the analysis.

figure 4. safer system operational scene [30].

figure 5. targets used for monitoring and reference points, respectively: a) black/white target; b) prism.

on the basis of the acquired measurements, after elimination of outliers, the following analyses are carried out using the matlab software:
1. standard deviations of the measured values of the coordinates of the monitoring points are calculated, on the basis of the 20 repeated measurements.
2. the selected points (ml 1-12 for the house, ml 14-19 for the tower) are processed by means of a least squares regression (first-degree polynomial model), and the inclination angles of the façades are determined with respect to the horizontal plane. these values are considered as a reference, with respect to which other configurations of the monitoring points are evaluated.
3. the angle of inclination is calculated with reference to the configurations described in figure 8 and figure 9, that is:
• excluding points along vertical lines from the analysis (configurations b, c, d and e for the house; configurations b' and c' for the tower);
• excluding the points along the lowest line, only in the case of the house (configuration f);
• considering only extreme points (configurations g and h for the house; configuration d' for the tower).

3. results

figure 10 and figure 11 show the calculated standard deviations for the monitoring points on the house and the tower: these values do not exceed 5 mm. the selected points (ml 1-12 for the house, ml 14-19 for the tower) are processed by means of a least squares regression, according to a first-degree polynomial model, and the inclination angles of the façades are determined with respect to the horizontal plane. the obtained results, in terms of inclination angles and signs of the direction cosines (cos(ry) and cos(rz)) of the normal to the plane, are summarized in table 1. the sign of the direction cosines indicates whether the façade is inclined towards the outside or the inside of the building: taking into account figure 6, in the case of the house, if the signs of cos(rz) and cos(ry) are concordant, the façade is inclined toward the outside, if discordant, toward the inside; in the case of the tower, on the contrary, if the signs are concordant, the façade is inclined toward the inside. the least squares regression has also been performed for the configurations of figure 8 and figure 9, and the inclination differences with respect to the reference case have been calculated (figure 12 and figure 13). furthermore, the displacement in the horizontal direction of the highest point of both buildings has been evaluated (figure 14 and figure 15). a numerical sketch of this plane-fit analysis is given below.
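the following numpy sketch reproduces the essence of analysis steps 2 and 3: a first-degree polynomial (a plane) is fitted to monitoring-point coordinates by least squares, the façade inclination with respect to the horizontal plane is derived from the plane normal, and a horizontal displacement is estimated from an inclination change. the point coordinates, the noise level and the 6.4 m height are synthetic assumptions, chosen only so that the numbers are of the same order as those reported (e.g., a 0.18° variation mapping to about 20 mm); they are not the actual survey data.

```python
# illustrative plane-fit analysis: fit y = a*x + b*z + c to facade points,
# derive the inclination from the plane normal, estimate displacement.
import numpy as np


def facade_inclination(points: np.ndarray):
    """points: (n, 3) array of (x, y, z) monitoring-point coordinates."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, z, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, y, rcond=None)
    normal = np.array([a, -1.0, b]) / np.linalg.norm([a, -1.0, b])
    incl_deg = np.degrees(np.arccos(abs(normal[2])))  # 90 deg = vertical facade
    return incl_deg, normal


# synthetic facade, 6 m high, tilted 0.1 deg out of vertical, 2 mm noise
rng = np.random.default_rng(0)
zs = np.tile(np.linspace(0.0, 6.0, 4), 3)
xs = np.repeat(np.linspace(0.0, 8.0, 3), 4)
ys = np.tan(np.radians(0.1)) * zs + rng.normal(0.0, 0.002, zs.size)
incl, n = facade_inclination(np.column_stack([xs, ys, zs]))
print(f"inclination: {incl:.3f} deg")

# horizontal displacement of the highest point for an inclination change:
# dx ~ h * tan(delta_theta); an assumed height of 6.4 m makes a 0.18 deg
# variation correspond to about 20 mm, consistent with the reported values.
print(f"displacement: {6.4 * np.tan(np.radians(0.18)) * 1000:.0f} mm")
```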
the following observations can be made:
• for the house, variations in the angle of inclination of up to 0.18° with respect to the reference case may result from changing the number of points chosen for processing; the displacements evaluated in the horizontal direction, at the height of the highest point, can reach about 20 mm.
• for the tower, variations in the angle of inclination of up to 0.062° with respect to the reference case may result from changing the number of points chosen for processing; the displacements evaluated in the horizontal direction, at the height of the highest point, can reach about 25 mm.
• the results show, for both buildings, that the acquisition of only the extreme monitoring points allows results comparable to those of the reference case to be obtained. these results appear promising from the point of view of the possibility of simplification, also considering that the standard deviation of the measurements is lower than the calculated displacements.
• it must be noticed that the standard deviation of the measurements is greater in some areas of the façade of the house, and this does not seem to be directly related to parameters such as the distance and positioning angle of the total station with respect to the building. this aspect will need to be explored in future work.

figure 6. point clouds acquired on the façades of the buildings: in red, the points on the house; in blue, the points on the tower.

figure 7. identification codes of the monitoring points for: a) house; b) tower.

table 1. results of the least squares regression.
                               house        tower
inclination angle              89.99 °      89.93 °
signs of cos(ry) and cos(rz)   concordant   concordant

figure 8. configurations of measuring points for the house: a) "ref" (reference); b) "config 2"; c) "config 3"; d) "config 4"; e) "config 5"; f) "config 6"; g) "config 7"; h) "config 8".

figure 9. configurations of measuring points for the tower: a') "ref" (reference); b') "config 2'"; c') "config 3'"; d') "config 4'".

figure 10. standard deviation of the coordinates of the monitoring points for the house.

figure 11. standard deviation of the coordinates of the monitoring points for the tower.

figure 12. inclination difference with respect to the reference for the house.

figure 13. inclination difference with respect to the reference for the tower.
acknowledgments work carried out with the co-financing of the storm research and development project, funded by the european commission for the horizon 2020 program. thanks are due to angelo celano and luca macerola, of leica geosystems s.p.a., for the technical support. references [1] r. s. olivito, s. porzio, c. scuro, d. l.carnì, f. lamonaca, inventory and monitoring of historical cultural heritage buildings on a territorial scale: a preliminary study of structural health monitoring based on the cartis approach, acta imeko, 10 (2021) 1, pp. 57-69. doi: 10.21014/acta_imeko.v10i1.820 [2] v. sangiorgio, martiradonna, f. fatiguso, g. uva, historical masonry churches diagnosis supported by an analytic-hierarchyprocess-based decision support system, acta imeko, 10 (2021) 1, pp. 6-14. doi: 10.21014/acta_imeko.v10i1.793 [3] r. spallone, g. bertola, f. ronco, digital strategies for the valorisation of archival heritage, acta imeko, 10 (2021) 1, pp. 224-233. doi: 10.21014/acta_imeko.v10i1.883 [4] v. croce, g. caroti, a. piemonte, m.g. bevilacqua, from survey to semantic representation for cultural heritage: the 3d modeling of recurring architectural elements, acta imeko, 10 (2021) 1, pp. 98-108. doi: 10.21014/acta_imeko.v10i1.842 [5] i. roselli, a. tatì, v. fioriti, i. bellagamba, m. mongelli, r. romano, g. de canio, m. barbera, m. m. cianetti, integrated approach to structural diagnosis by non-destructive techniques: the case of the temple of minerva medica, acta imeko, 7 (2018) 3, pp. 13-19. doi: 10.21014/acta_imeko.v7i3.558 [6] a. malagnino, g. mangialardi, g. zavarise, a. corallo, process modeling for historical buildings restoration: an innovation in the management of cultural heritage, acta imeko, 7 (2018) 3, pp. 95-103. doi: 10.21014/acta_imeko.v7i3.602 [7] f. lamonaca, c. scuro, p.f. sciammarella, r.s. olivito, d. grimaldi, d.l. carnì, a layered iot-based architecture for a distributed structural health monitoring system, acta imeko, 8 (2019) 2, pp. 45-52. doi: 10.21014/acta_imeko.v8i2.640 [8] y. hashem, j. ambani, a story of change. published by the international centre for the study of the preservation and restoration of cultural property (iccrom), rome, italy, 2021. [9] s. marsella, m. marzoli, l‘uso di tecnologie innovative nella valutazione speditiva del rischio, convegno internazionale di studi “monitoraggio e manutenzione nelle aree archeologiche. cambiamenti climatici, dissesto idrogeologico, degrado chimicoambientale”, l'erma di bretschneider, bibliotheca archaeologica, 2020, pp. 165-170, isbn: 9788891322029. [10] a. utkin, l. argyriou, v. bexiga, f. boldrighini, g. cantoro, p. chaves, f. cakır, advanced sensing and information technologies for timely artefact diagnosis, pisa university press, 2019, isbn 978-88-3339-240-0. [11] methodologies for quick assessment of building stability in the eu storm project (2018, september). proceedings of the xxi international nkf conference, reykjavík, iceland. [12] s. marsella, m. marzoli, l. palestini, results of h2020 storm project in the assessment of damage to cultural heritage buildings following seismic events, (2020). [13] s. zollini, m. alicandro, d. dominici, r. quaresima, m. giallonardo, uav photogrammetry for concrete bridge inspection using object-based image analysis (obia), remote sens., 12 (2020) 3180. doi: 10.13140/rg.2.2.35754.85441 04/07/2020 [14] d. dominici, e. rosciano, m. alicandro, m. elaiopoulos, s. trigliozzi, v. 
[14] d. dominici, e. rosciano, m. alicandro, m. elaiopoulos, s. trigliozzi, v. massimi, cultural heritage documentation using geomatic techniques: case study: san basilio's monastery, l'aquila, in 2013 digital heritage international congress (digitalheritage), 1 (2013), pp. 211-214. doi: 10.1109/digitalheritage.2013.6743735
[15] d. dominici, d. galeota, a. gregori, e. rosciano, m. alicandro, m. elaiopoulos, integrating geomatics and structural investigation in post-earthquake monitoring of ancient monumental buildings, j. appl. geod., 8 (2014), pp. 141-154. doi: 10.1515/jag-2012-0008
[16] f. yang, x. wen, x. wang, x. li, z. li, a model study of building seismic damage information extraction and analysis on ground-based lidar data, adv. civ. eng., 2021 (2021), 5542012. doi: 10.1155/2021/5542012
[17] i. m. e. zaragoza, g. caroti, a. piemonte, the use of image and laser scanner survey archives for cultural heritage 3d modelling and change analysis, acta imeko, 10 (2021) 1, pp. 114-121. doi: 10.21014/acta_imeko.v10i1.847
[18] x. chen, j. lin, x. zhao, s. xiao, research on uav oblique photography route planning in the investigation of building damage after the earthquake, iop conference series: earth and environmental science, 783 (2021), 012081. doi: 10.1088/1755-1315/783/1/012081
[19] r. zhan, h. li, k. duan, s. you, k. liu, f. wang, y. hu, automatic detection of earthquake-damaged buildings by integrating uav oblique photography and infrared thermal imaging, remote sens., 12 (2020), 2621. doi: 10.3390/rs12162621
[20] m. g. d'urso, v. manzari, s. lucidi, f. cuzzocrea, rescue management and assessment of structural damage by uav in post-seismic, isprs ann. photogramm. remote sens. spat. inf. sci., 5 (2020), pp. 61-70. doi: 10.5194/isprs-annals-v-5-2020-61-2020
[21] c. c. chuang, j. y. rau, m. k. lai, c. l. shi, combining unmanned aerial vehicles, and internet protocol cameras to reconstruct 3d disaster scenes during rescue operations, prehosp. emerg. care, 23 (2019), pp. 479-484. doi: 10.1080/10903127.2018.1528323
[22] r. zhang, k. duan, s. you, f. wang, s. tan, a novel remote sensing detection method for buildings damaged by earthquake based on multiscale adaptive multiple feature fusion, geomat. nat. hazards risk, 11 (2020), pp. 1912-1938. doi: 10.1080/19475705.2020.1818637
[23] b. wang, x. tan, d. song, l. zhang, rapid identification of post-earthquake collapsed buildings via multi-scale morphological profiles with multi-structuring elements, ieee access, 8 (2020), pp. 122036-122056. doi: 10.1109/access.2020.3007255
[24] x. xiao, w. zhai, z. liu, building damage information extraction from fully polarimetric sar images based on variogram texture features, in proc. of acrs 2020, 41st asian conference on remote sensing, 9-11 november 2020. doi: 10.5194/isprs-archives-xliii-b1-2020-587-2020
[25] l. siaudinyte, modelling of linear test bench for short distance measurements, acta imeko, 4 (2015) 2, pp. 68-71. doi: 10.21014/acta_imeko.v4i2.229
[26] g. d'emilia, s. lucci, e. natale, f. pizzicannella, validation of a method for composition measurement of a non-standard liquid fuel for emission factor evaluation, measurement, 44 (2011), pp. 18-23. doi: 10.1016/j.measurement.2010.08.016
[27] g. d'emilia, a. gaspari, e. natale, how simplifying a condition monitoring procedure affects its performances, in 2021 ieee international instrumentation and measurement technology conference (i2mtc) (2021), pp. 1-5. doi: 10.1109/i2mtc50364.2021.9459924
[28] g. d'emilia, d. di gasbarro, a. gaspari, e. natale, managing the uncertainty of conformity assessment in environmental testing by machine learning, measurement, 124 (2018), pp. 560-567. doi: 10.1016/j.measurement.2017.12.034
[29] g. d'emilia, a. gaspari, e. natale, dynamic calibration uncertainty of three-axis low frequency accelerometers, acta imeko, 4 (2015) 4, pp. 75-81. doi: 10.21014/acta_imeko.v4i4.239
[30] leica geosystems ag, part of hexagon. online [accessed 14 december 2021] https://leica-geosystems.com/

image analysis for the sorting of brick and masonry waste using machine learning methods

acta imeko, issn: 2221-870x, june 2023, volume 12, number 2, 1-5

elske linß1, jurij walz1, carsten könke1
1 materialforschungs- und -prüfanstalt at the bauhaus-university of weimar (mfpa), coudraystraße 9, 99423 weimar, germany

section: research paper
keywords: optical sorting of building material; masonry waste; image analysis; classification; machine learning
citation: elske linß, jurij walz, carsten könke, image analysis for the sorting of brick and masonry waste using machine learning methods, acta imeko, vol. 12, no. 2, article 15, june 2023, identifier: imeko-acta-12 (2023)-02-15
section editor: eric benoit, université savoie mont blanc, france
received july 8, 2022; in final form february 27, 2023; published june 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work is supported by the tmwwdg of the free state of thuringia, germany.
corresponding author: elske linß, e-mail: elske.linss@mfpa.de

abstract: this paper describes different machine learning methods for recognizing and distinguishing brick types in masonry debris. certain types of bricks, such as roof tiles, facing bricks and vertically perforated bricks, can be reused and recycled in different ways if it is possible to separate them by optical sorting. the aim of the research was to test different classification methods from machine learning for this task based on high-resolution images. for this purpose, images of different bricks were captured with an image acquisition system, the data were pre-processed and segmented, significant features were selected and different ai methods were applied. a support vector machine (svm), a multilayer perceptron (mlp) and a k-nearest neighbour (k-nn) classifier were used to classify the images. as a result, a recognition rate of 98 % and higher was achieved for the classification into the three investigated brick classes.

1. introduction and state of the art

currently, 20-23 million tons of masonry building materials (mortar and plaster, lightweight concrete, aerated concrete, sand-lime bricks, and bricks) are produced annually in germany. the quantity of bricks produced, including roof tiles, is approx. 10-15 million tons [1].
current guiding strategies and ambitious environmental policy goals increasingly call on manufacturers of mineral building products to introduce material cycles [2], [3]. pure brick aggregates are currently used in sports field construction, in vegetation applications, in road construction as a proportionate component of frost protection and gravel base courses, and in building construction as recycled aggregate for concrete production [1]. pure brick recycled aggregates (figure 1), which are made from low-dense waste and can be obtained from masonry waste, can be returned to brick production as recycled material after being ground again. a prerequisite for this, however, is that no mortar adhesions or other impurities may be present [4]-[6]. based on the investigations in the projects [5], [7], a leaflet on the use of recycled bricks in the brick industry was drafted. it recommends pure hard-fired material, pure/hard material, low-fired material, and masonry waste as suitable feed material for the production of roof tiles, facing bricks, and vertically perforated bricks. depending on the type of brick, up to 25 wt.-% can be reused [6], [7]. after grinding, masonry waste can also be used as a cement composite material in the cement industry [8]. here it is necessary to be able to distinguish and separate low-fired and high-fired brick types, mortar, concrete, and other components from each other.

figure 1. masonry waste.

optical single-grain sorting methods offer a chance to differentiate even very similar materials [10]-[12]. optical analysis and sorting methods are not yet used on a large scale in the sorting of construction and demolition waste. the optical sorting methods are essentially based on innovative detection routines that can be integrated into corresponding software (figure 2). the aim of the investigations is the development of a recognition routine for the differentiation of brick types in single-variety brick and masonry rubble. features from the high-resolution image and spectrum, together with evaluation algorithms from machine learning, are used for the recognition task. the main focus is, in a first step, on particles with a size of 8/16 mm; later it will be extended to particle sizes 2/4 and 4/8 mm. of particular importance is the sorting out of impurities, such as mortar, gypsum, concrete and so on. the fundamentals are being created in order to develop more effective sorting processes in the recycling of residual brick masses in the future.
This paper investigates the recognition of brick types using RGB images, and innovative machine learning methods are described. This provides the basis for the development of optical sorting procedures. In the investigations, recognition routines and algorithms are tested for the correct differentiation and separation of various brick types based on image features.

2. Investigations
Figure 3 gives an overview of the test procedure.

Figure 3. Overview of the experimental programme.

2.1. Materials and categories
At the MFPA, initial preliminary investigations were carried out to distinguish between brick types on the basis of visual image information. In the investigations, different brick products were distinguished, which can be assigned to the following brick categories:
• category II: roof tiles and facing bricks,
• category III: vertically perforated bricks and
• category V: other.
First, a sample collection representing a nationwide cross-section of brick varieties and adhering mineral building materials was assembled. The brick samples in the different categories are composed of different new (unused) and recycled (used) bricks without adherents. Furthermore, other relevant building materials such as aerated concrete, concrete, mortar, sand-lime bricks, etc. (new and recycled) have also been included as impurities. The measured material parameters, water absorption after 24 h and bulk density according to DIN 4226-101, for all samples included in the investigations are shown in figure 4.

Figure 4. Water absorption versus bulk density for all samples.

A total of 7,765 images of samples were recorded. Category I (high-fired material) is still missing from these investigations, as too little material was available for examination. Category IV (brick waste from a recycling plant) was also excluded at this point, as those samples were not homogeneous and can therefore belong to different categories. In future work, both categories will be added to the data set. The particle size is 8-16 mm. The brick samples listed in table 1 were divided into three classes, which differ in bulk density.

Table 1. Numbers of objects for the three investigated brick categories (the example images of the original table are not reproduced here).
category II (roof tiles and facing bricks): 3,203 objects
category III (vertically perforated bricks): 2,314 objects
category V (others): 2,248 objects

2.2. Image acquisition and used software
A data set of high-resolution images of the brick particles was created. The brick samples were examined under different conditions: on the one hand, three different types of lighting and, on the other hand, different combinations of features were used. This allows the influence of the illumination on the analysis results to be investigated. Figure 5 shows the image acquisition system "Qualileo" that was used. It consists of an RGB matrix camera with a 12 MP sensor and a step-less adjustable lighting system (figure 6).

Figure 5. The image acquisition system Qualileo.
Figure 6. Step-less adjustable lighting system on the Qualileo.

2.3. Investigations and results
The provided algorithms are analysed using the presented data set with individual parameters, and the average recognition rate (RR) is captured. The achieved RR and standard deviation (STDEV) are used to compare all of the results. All of the investigations were carried out in the HALCON programming environment. An illustrative sketch of the kind of colour-feature computation evaluated in the following subsections is given below.
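The following is a minimal, illustrative sketch of colour-feature extraction from a particle image; it is not the authors' HALCON feature set, and the Otsu-based segmentation and the choice of statistics are assumptions made only for the example.

```python
# Illustrative sketch (not the authors' HALCON routines): extracting simple
# colour features from a single-particle image for later classification.
# Assumes each image shows one particle on a dark background.
import cv2
import numpy as np

def colour_features(image_path: str) -> np.ndarray:
    """Return a small colour-feature vector for one particle image."""
    bgr = cv2.imread(image_path)                      # OpenCV loads as BGR
    if bgr is None:
        raise FileNotFoundError(image_path)
    # Crude segmentation: Otsu threshold on brightness isolates the particle.
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    pixels = bgr[mask > 0].astype(np.float64)         # particle pixels, shape (N, 3)
    feats = []
    for channel in range(3):                          # per-channel mean and spread
        values = pixels[:, channel]
        feats += [values.mean(), values.std()]
    # HSV means are often more robust to illumination changes.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[mask > 0].astype(np.float64)
    feats += list(hsv.mean(axis=0))
    return np.array(feats)                            # 9-dimensional feature vector
```

In a real sorting pipeline such vectors would be computed per particle and stacked into the feature matrix used by the classifiers described next.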
The investigated classifiers have different setting parameters. In the k-NN classifier, the distance-based number of nearest neighbours k was set to 5. With the SVM, the RBF kernel was used and the γ parameter was set to 0.02; the one-versus-all classification mode was also used, with which a multi-class problem is reduced to binary decisions in which each class is compared to all the other classes. For the MLP, a softmax activation function is used, which performs best in classification tasks with multiple independent classification outputs; in the hidden layer, the number of hidden units is set to 15. In this study, the data set is split into two parts of 80 % and 20 %: the classifiers are trained with the major part of the data set and tested with the smaller fraction. A scikit-learn sketch of these settings follows.
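A minimal scikit-learn sketch of the classifier settings reported above (the study itself used HALCON, so this is an equivalent reconstruction, not the original code); the feature matrix X and labels y below are random placeholders standing in for the per-particle features and brick categories.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 18))        # placeholder for 18 colour features
y = rng.integers(0, 3, size=500)      # placeholder for categories II, III, V

# 80 % of the data for training, 20 % for testing, as in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

classifiers = {
    # RBF-kernel SVM with gamma = 0.02 in one-versus-all mode.
    "svm": OneVsRestClassifier(SVC(kernel="rbf", gamma=0.02)),
    # MLP with one hidden layer of 15 units; softmax output for multi-class.
    "mlp": MLPClassifier(hidden_layer_sizes=(15,), max_iter=2000),
    # Distance-based k-NN with k = 5.
    "knn": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in classifiers.items():
    model = make_pipeline(StandardScaler(), clf)     # scale, then classify
    model.fit(X_tr, y_tr)
    print(f"{name}: recognition rate = {100 * model.score(X_te, y_te):.2f} %")
```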
2.3.1. Results before selection and combination of features
At the beginning, the typical feature groups were analysed; table 2 summarises these results. A first finding was that lighting 2 (half of a ring light, simulating light from the side) always produced significantly worse results, which is why only the results for lighting L1 (full ring light) and L3 (incident light) are presented in the following. On this data set, the colour features alone are the most important features for the recognition task: the recognition rates achieved with them are always above 95 % for all data sets and classifiers, as shown in table 2. The region features have only little impact on recognition. The texture features prove more effective, with over 80 % recognition rates for SVM and MLP. The k-NN classifier had the lowest recognition results; aside from the colour features, its accuracy is insufficient, because the k-NN algorithm is more vulnerable to redundant features than robust classifiers such as SVM and MLP.

Table 2. Recognition rates (RR) for different classifiers trained with various numbers and kinds of features for different lightings (L1 = lighting 1, L3 = lighting 3).

features        SVM (RR in %)   MLP (RR in %)   k-NN (RR in %)
                L1      L3      L1      L3      L1      L3
 18 colour      96.27   97.04   98.30   98.23   95.91   95.24
 32 region      36.62   37.71   44.98   46.33   35.26   37.52
195 texture     86.16   83.78   84.94   81.92   65.83   64.74
382 gray        78.38   77.48   74.13   85.14   42.92   44.08
432 all         80.05   79.99   93.89   94.02   58.42   60.01

This research also found that using the largest possible number of features does not always result in better accuracy, or that the improvement is minor, because many of the features are redundant. Besides that, with a large number of features, the classifier's training time increases. A further improvement is to be achieved with feature selection, in which only the most important features for the problem at hand are computed.

2.3.2. Results after selection and combination of features
The best results obtained by feature selection are shown in figure 7 and figure 8.

Figure 7. Recognition rates reached for the different brick types (category II roof tiles and facing bricks, category III vertically perforated bricks, category V others) using different classifiers on the L1 data set.
Figure 8. Recognition rates reached for the same brick categories using different classifiers on the L3 data set.

The very high recognition rates for all classifiers are remarkable. Also, there is no significant difference in the detection rate between the different lighting settings: ring-light images and incident-light images yield very similar results. The best result was achieved by the MLP with 98.51 % for lighting 1. With the SVM on lighting L3, an average recognition rate of 97.92 % was reached. With an RR of 96.40 %, a slightly lower result is achieved by the k-NN classifier; the difference, however, is marginal. A look at the individual categories also shows that category III is recognised with an over 99 % recognition rate by both SVM and MLP, with k-NN just below this result. Category II is recognised equally well by SVM and MLP (close to 98 %); here k-NN is clearly worse (about 96 %). Category V shows a lower recognition rate, between 96 % and 97 %, for all classifiers, and again k-NN shows the lowest results of all. Compared to the results without feature selection and combination, no large increase in the recognition rates was observed: with SVM and k-NN the detection rate increased by only about 1 %, while with MLP it remained at about the same level.

Finally, figure 9 shows the importance of the feature selection procedure for classical classifiers: it is not useful to use as many features as possible for classification, but only the most important ones. This can be seen in the score progression: the RR increases with the number of features until it reaches its maximum value; after that, the RR decreases as more features are added, because many of them are redundant and noisy.

Figure 9. Mean recognition rate (RR) of the MLP classifier (on the lighting 1 data set) in dependence of the number of features.

As a feature selection method, HALCON provides a greedy algorithm: the currently most promising feature is added to the feature vector, and after each addition it is evaluated whether one of the added features has become unnecessary (a sketch of a greedy forward selection of this kind is given after table 3).

In summary, it is important to note that the high recognition rates after feature selection were achieved with a relatively small number of features (table 3).

Table 3. Mean value of recognition rates (RR) and standard deviation (SD) for all classifiers and different lightings (L1 = lighting 1, L3 = lighting 3).

classifier   L1 (ring light)               L3 (incident light)
             RR in %  SD in %  features    RR in %  SD in %  features
SVM          97.85    0.42     23          97.92    0.42     23
MLP          98.51    0.16     20          98.37    0.24     19
k-NN         96.40    0.54     18          96.05    0.31     13

The average RR is over 95 % for all classifiers and both data sets L1 and L3; hence the different lighting settings had only a small effect on the result of the optimised classifiers. Additionally, the standard deviation is low, indicating a stable classification process. This simplifies the future implementation of the application and reduces the required computing effort and time.
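A sketch of greedy forward feature selection in the spirit described above; HALCON's own routine is not reproduced here, and the cross-validated k-NN scoring is an assumption chosen only to make the example self-contained.

```python
# Greedy forward selection: at each step, add the feature that most improves
# cross-validated accuracy; stop when the most promising feature no longer helps.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def greedy_forward_selection(X, y, max_features=20):
    selected, best_scores = [], []
    remaining = list(range(X.shape[1]))
    clf = KNeighborsClassifier(n_neighbors=5)        # scoring model (assumption)
    while remaining and len(selected) < max_features:
        trials = [
            (cross_val_score(clf, X[:, selected + [f]], y, cv=5).mean(), f)
            for f in remaining
        ]
        score, best = max(trials)                    # most promising candidate
        if best_scores and score <= best_scores[-1]:
            break                                    # adding more no longer helps
        selected.append(best)
        remaining.remove(best)
        best_scores.append(score)
    return selected, best_scores
```

The stopping rule mirrors the behaviour visible in figure 9: the score rises while informative features are added and flattens or drops once only redundant, noisy features remain.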
3. Conclusion and future work
In this work, a method for recognising different kinds of brick using image processing and machine learning was investigated. A very good differentiation of the selected brick categories, roof tiles and facing bricks and vertically perforated bricks, could be demonstrated: with an overall recognition rate of about 98 %, the three categories are separated. In the future, the data set will be extended by additional categories, and further classifier methods (including deep learning) will be tested. The algorithms obtained are to be used further for the development of optical sorting methods.

Acknowledgement
The results were developed within the framework of the research group "Sensor technology for products and processes" at MFPA Weimar, which is funded by the Free State of Thuringia. The investigations will be continued in an AiF-IGF research project. We would like to express our sincere thanks for the funding from the Thuringian Ministry of Economics, Science and Digital Society, and the Federal Ministry of Economics and Climate Protection. The responsibility for the research content lies with the authors.

References
[1] A. Müller, I. Martins, Recycling of building materials: generation - processing - utilization, Springer Vieweg (2022). doi: 10.1007/978-3-658-34609-6
[2] Bundesverband der Deutschen Ziegelindustrie e. V., Roadmap for a greenhouse gas neutral brick and roof tile industry in Germany, FutureCamp Climate GmbH (2021). Online [accessed 26 May 2023] https://cerameunie.eu/topics/cerame-unie-sectors/sectors/roadmap-for-a-greenhouse-gas-neutral-brick-roof-tile-industry-in-germany/
[3] Mehr Recycling von Bau- und Abbruchabfällen in Europa notwendig. Online [accessed 3 January 2020] [in German] https://www.recyclingmagazin.de/2016/10/02/352716
[4] M. Landmann, A. Müller, U. Palzer, B. Leydolph, Leistungsfähigkeit von Aufbereitungsverfahren zur Rückgewinnung sortenreiner Materialfraktionen aus Mauerwerk - Teil 1 und 2, AT Mineral Processing, Heft 03 und Heft 04, ISSN 1434-9302, 55. Jahrgang (2014). [in German]
[5] AiF-IGF Vorhaben 18889 BG: Charakterisierung sortierter Ziegel-Recycling-Materialien anhand physikalischer und chemisch-mineralogischer Eigenschaften für die Generierung neuer Stoffströme, Schlussbericht (2019). [in German]
[6] S. Petereit, Ressourceneffizienz - Ziegel aus alternativen Rohstoffen, Vortrag zum IZF-Seminar, Essen, Germany, 19-20 September 2019. [in German]
[7] S. Sabath, Charakterisierung sortierter Ziegel-RC-Materialien, Vortrag zum IZF-Seminar, Essen, Germany, 19-20 September 2019. [in German]
[8] Verein Deutscher Zementwerke e.V., Brechsand als Zementhauptbestandteil - Leitlinien künftiger Anwendung im Zement und Beton: die Potenziale der Recyclingbrechsande: von der Aufbereitung mineralischer Bauabfälle bis zur Herstellung ressourcenschonender Betone, Information Betontechnik, 11 (2019). [in German]
[9] MVTec Software GmbH, HALCON. Online [accessed 26 May 2023] https://www.mvtec.com/doc/halcon/1712/de/toc_regions_features.html
[10] E. Linß, A. Karrasch, M. Landmann, Sorting of mineral construction and demolition waste by near-infrared technology, HISER Int. Conference, 21-23 June 2017, Delft, The Netherlands, ISBN/EAN: 978-94-6186-826-8, pp. 29-32.
[11] AiF-ZIM Vorhaben FKZ ZF 4144903GR6, Analyseverfahren zur automatisierten Qualitätssicherung für rezyklierte Gesteinskörnungen auf Basis hyperspektraler Bildinformationen im VIS und NIR, Schlussbericht (2018). [in German]
[12] E. Linß, D. Garten, A. Karrasch, K. Anding, P. Kuritcyn, Automatisierte Sortieranalyse für rezyklierte Gesteinskörnungen, Tagungsbeitrag, Fachtagung Recycling R'19, Weimar, Germany, 25-26 September 2019. [in German]
[13] F. Rosenblatt, The perceptron: a probabilistic model for information storage and organization in the brain, Psychological Review, 6 (1958), pp. 386-408. doi: 10.1037/h0042519
The contribution of colour measurements to the archaeometric study of pottery assemblages from the archaeological site of Adulis, Eritrea
ACTA IMEKO, ISSN: 2221-870X, March 2022, Volume 11, Number 1, 1-8

Abraham Zerai Gebremariam1,2, Patrizia Davit3, Monica Gulmini3, Lara Maritan4, Alessandro Re1,2, Roberto Giustetto2,5, Serena Massa6, Chiara Mandelli6, Yohannes Gebreyesus7, Alessandro Lo Giudice1,2
1 Department of Physics, University of Turin, Via Pietro Giuria 1, 10125 Turin, Italy
2 National Institute of Nuclear Physics, Turin Section, Via Pietro Giuria 1, 10125 Turin, Italy
3 Department of Chemistry, University of Turin, Via Pietro Giuria 7, 10125 Turin, Italy
4 Department of Geosciences, University of Padua, Via Giovanni Gradenigo 6, 35131 Padua, Italy
5 Department of Earth Sciences, University of Turin, Via Valperga Caluso 35, 10125 Turin, Italy
6 Department of Archaeology, Catholic University of the Sacred Heart of Milan, Largo Gemelli 1, 20123 Milan, Italy
7 Northern Red Sea Regional Museum of Massawa, P.O. Box 33, Massawa, Eritrea

Section: Research Paper
Keywords: colorimetry; pottery; fabric; Adulis
Citation: Abraham Zerai Gebremariam, Patrizia Davit, Monica Gulmini, Lara Maritan, Alessandro Re, Roberto Giustetto, Serena Massa, Chiara Mandelli, Yohannes Gebreyesus, Alessandro Lo Giudice, The contribution of colour measurements to the archaeometric study of pottery assemblages from the archaeological site of Adulis, Eritrea, Acta IMEKO, vol. 11, no. 1, article 17, March 2022, identifier: IMEKO-ACTA-11 (2022)-01-17
Section editor: Fabio Santaniello, University of Trento, Italy
Received March 7, 2021; in final form March 15, 2022; published March 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 754511 (PhD Technologies Driven Sciences: Technologies for Cultural Heritage – T4C)
Corresponding author: Abraham Zerai Gebremariam, e-mail: abraham.zeraigebremariam@unito.it

Abstract
Colorimetric evaluation was applied to archaeological pottery from the ancient port city of Adulis on the Red Sea coast of Eritrea. The pottery samples belong to the Ayla-Aksum typology, Late Roman Amphora 1 and dolia classes, which had never been analysed by means of this approach. The survey consisted of colorimetric measurements on different parts of the ceramic bodies, to understand how these data relate to the overall fabric classification. Differences in the colorimetric parameters provided helpful information on both the technological manufacturing processes and the fabric classification. Subtle variations in the colour coordinates were detected and interpreted, so as to ascribe the related differences. This approach proved that the information provided by colour measurements can be partially correlated to observations from stereomicroscopy and optical microscopy, allowing a more in-depth description of the fabrics in the study of archaeological pottery.

1. Introduction
Colour is an important characteristic of archaeological materials, and quantitative, reproducible measurements are needed for a meaningful analysis and classification of artefacts. A review of works on the colorimetric study of ancient ceramics points to the versatility of this method over the traditional, yet solely qualitative, description of colour using the Munsell colour charts [1].
The International Commission on Illumination L*, a*, b* (CIELAB) colour space has been mentioned as a suitable system for standardising and comparing colorimetric features, and its utility has been demonstrated in many studies of different ceramic objects. Colorimetric surveys on pottery objects, ranging from the evaluation of the colour of the exposed interior and exterior surfaces as well as of the core, to colour measurements on powdered samples and to the collection of CIELAB colour data from digital photographs, have been extensively reported in the literature [2], [3].

In general, the archaeometric study of ancient pottery mainly aims at locating the production centres, identifying the specific technology involved in pottery production and understanding the distribution patterns. Provenance is often tackled by determining the chemical and/or mineralogical compositions and by comparing them with the compositions of reference groups. Mirti and Davit (2004) demonstrated that colour measurements may help in assessing provenance and/or production technologies. The general agreement is that sherds made from different clays, or even from the same clay processed in different ways, show different colour curves, while ceramic materials obtained from a single clay and processed following a similar procedure should display a similar curve [4]. Colour measurements on archaeological pottery have been applied particularly for the evaluation of the original firing conditions [4]-[16]. On the other hand, the colour of ceramic bodies is not only affected by temperature, but also by firing duration and atmosphere, as well as by the final porosity, the mineralogical composition of the raw materials used and the nature of the inclusions. Experimental analyses, aimed at evaluating the colour change with temperature by re-firing, allowed the detection of colour alterations in a specific clay body, thus enabling an evaluation of the possible behaviour of different sherds [3], [4]. It is also noted that colorimetry, when coupled to other characterisation techniques, can provide important results for examining raw material and artefact variability.
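For reference, the mapping from tristimulus values to the CIELAB coordinates discussed here can be written in a few lines. This is a sketch of the standard CIE formulas; the D65/10° white-point values used below are the usual tabulated ones and are an assumption of the example, not a detail given in the paper.

```python
# XYZ -> CIELAB conversion (standard CIE formulas).
import numpy as np

WHITE_D65_10 = (94.811, 100.0, 107.304)   # CIE D65 illuminant, 10-degree observer

def xyz_to_lab(X, Y, Z, white=WHITE_D65_10):
    def f(t):
        delta = 6 / 29
        # Cube root above the linearity threshold, linear segment below it.
        return np.cbrt(t) if t > delta**3 else t / (3 * delta**2) + 4 / 29
    fx, fy, fz = (f(v / w) for v, w in zip((X, Y, Z), white))
    L = 116 * fy - 16          # lightness
    a = 500 * (fx - fy)        # green-red axis
    b = 200 * (fy - fz)        # blue-yellow axis
    return L, a, b
```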
The nucleation, growth and grain size of hematite during firing, the redox reactions inherent to the firing process and the influence of the mineralogical composition are all parameters that contribute to colour in ceramics, the consequences of which can be inferred by using mineralogical, micro-structural and chemical approaches [7]-[10], [13]-[16]. The complexity of the mineralogical and physical-chemical factors influencing the colour of ceramics therefore requires that colorimetry be coupled to other archaeometric techniques. In this respect, while the dynamics of firing and the changes of the microstructures affecting colour variations in ceramics can be understood through detailed analytical approaches, colorimetry can be useful to determine the colour parameters of the paste, allowing a preliminary fabric determination.

In this study, pottery assemblages from the archaeological site of Adulis, the primary port of the Aksumite empire in late antiquity on the Red Sea coast of Eritrea, were investigated by means of colorimetry. The site of Adulis was principally involved in the major developments in the history of the northern Horn of Africa from the first millennium CE [17]-[19]. The long-standing trade relations and the cultural interaction with the Romans and the Mediterranean through the Red Sea are attested in the later periods of the first millennium CE. Comparative analysis of architectural and ceramic typological sequences attests that the site was continuously inhabited from the first-second up to the sixth and early seventh century CE, and intensely occupied in the 5th-6th century CE [18], [19]. The Ayla-Aksum, Late Roman Amphora 1 and dolia samples considered in this colorimetric survey indicate imports from the Mediterranean world in the later phases of the occupation of the site. Studies on these pottery assemblages from the northern Horn of Africa are rare [20]-[22], and colorimetric studies have never been applied previously.

In this work, we highlight the evaluation of colorimetric parameters on different parts of the ceramic bodies and on powdered samples, to assess the usefulness of colorimetry in pottery studies. The aim of this survey is to establish a preliminary fabric classification, possibly to be used for provenance determination. The results and limits of the colorimetric evaluation are also reported. The following sections provide a look into the colorimetric assessment (considering different sampling procedures) and the use of such data to define the variability existing among the different classes.

2. Materials and methods
The colorimetric survey adopted for this study included different sampling procedures to evaluate the colorimetric parameters on archaeological pottery representing the Ayla-Aksum, Late Roman Amphora 1 and dolia classes. All the samples considered in this survey were collected during the 2019 fieldwork of the ongoing Italian-Eritrean excavations at Adulis [23]-[25]. On the one hand, a set of samples was selected among a collection of 49 small sherds under archaeometric investigation; part of them were in fact not represented in this survey, being too small or showing colour variations possibly due to post-depositional alterations. For the examined samples, the colorimetric evaluation was done both on the interior and on the exterior surfaces of the ceramic bodies. The analysed surface was about 8 mm in diameter; thus, the detected coordinates relate to an average area of about 50 mm².
Figure 1. General view and stereo-microscope images of some samples: (a), (e) Ayla-Aksum sample 1.3(2); (b), (f) Ayla-Aksum sample 2.0; (c), (g) Late Roman Amphora 1 sample 3.9; (d), (h) dolia sample 4.9.

Table 1. All colorimetric values (L*; a*; b*) for powders, exterior and interior surfaces and cross-sections of the Ayla-Aksum and Late Roman 1 typologies. Repeated triplets separated by '/' are repeated measurements; '--' marks parts that were not measured.

sample | powder | exterior | interior | section

Ayla-Aksum:
1.1 | 64.16; 10.84; 17.28 | -- | -- | --
1.2 | 70.06; 1.88; 14.99 | 62.20; 3.14; 18.12 / 61.11; 4.11; 19.23 | 45.80; 3.36; 11.95 / 47.64; 2.93; 11.11 | --
1.3(1) | 66.96; 6.21; 17.60 | 55.27; 7.77; 21.53 / 55.30; 7.82; 21.71 | 56.87; 6.03; 18.54 / 53.37; 6.15; 17.69 | --
1.3(2) | 65.21; 8.41; 17.25 | 54.62; 6.10; 17.61 / 54.11; 5.89; 16.94 / 53.10; 5.70; 17.61 | 52.77; 7.90; 18.43 / 52.85; 7.95; 19.14 / 54.44; 8.26; 19.51 | 62.05; 10.65; 19.22 / 60.25; 10.65; 18.94 / 59.39; 10.52; 18.78
1.4(1) | 68.95; 5.47; 16.53 | -- | -- | --
1.4(2) | 75.10; 1.86; 15.09 | 63.34; 5.67; 17.02 / 61.71; 5.39; 17.98 / 63.21; 5.16; 18.82 | 66.23; 4.32; 19.75 / 66.45; 4.74; 20.23 | 71.91; 1.52; 15.70 / 67.71; 1.59; 17.04
1.5 | 76.53; 2.74; 20.06 | -- | -- | --
1.6 | 61.39; 15.35; 20.43 | -- | -- | --
1.7(1) | 63.13; 12.57; 18.44 | -- | -- | --
1.7(2) | 68.47; 3.92; 15.35 | -- | -- | --
1.8 | 60.49; 14.40; 19.61 | -- | -- | --
2.0 | 69.92; 6.89; 17.29 | 66.61; 5.15; 19.63 / 69.84; 3.98; 19.98 | 68.91; 3.83; 19.42 / 70.78; 3.36; 19.16 / 68.65; 3.52; 19.06 | --
3.3 | 71.59; 1.04; 11.49 | -- | -- | --
3.4 | 67.73; 1.27; 14.16 | -- | -- | --
3.5 | 69.19; 9.18; 17.87 | -- | -- | --
3.6 | 68.51; 10.71; 18.45 | 66.54; 4.69; 19.69 / 61.90; 4.18; 18.99 / 63.35; 4.95; 22.04 | 61.92; 5.96; 20.94 / 60.44; 5.48; 20.37 | --
3.7 | 58.55; 19.19; 22.41 | -- | -- | --
3.8 | 57.24; 16.01; 19.36 | -- | -- | --
c01 | 65.68; 8.50; 18.00 | -- | -- | --
c04 | 67.38; 5.97; 17.24 | -- | -- | --
c05 | 67.81; 9.43; 18.40 | -- | -- | --

Late Roman 1:
1.9 | 60.50; 9.70; 16.72 | 44.77; 5.97; 17.34 / 43.13; 8.32; 17.84 / 47.34; 8.39; 18.62 | 51.60; 7.82; 20.66 / 49.25; 7.93; 16.37 / 49.69; 8.13; 17.91 | --
2.1 | 65.58; 6.11; 15.48 | 53.48; 7.91; 18.10 / 52.82; 8.30; 17.85 / 50.37; 8.96; 18.32 | 45.14; 7.22; 15.89 / 48.91; 7.64; 17.82 | --
2.2 | 67.28; 4.56; 13.36 | -- | -- | --
2.3 | 65.58; 6.11; 15.48 | 54.16; 10.87; 21.16 / 55.82; 11.88; 22.28 / 55.49; 12.91; 23.16 | 51.55; 13.11; 24.90 / 52.26; 13.83; 24.82 / 53.19; 13.54; 24.78 | 58.75; 9.16; 18.02 / 58.38; 9.58; 18.26 / 58.22; 9.08; 17.70
2.4 | 63.69; 12.86; 20.75 | -- | -- | --
3.0 | 67.49; 9.55; 18.06 | 58.75; 5.69; 19.91 / 58.39; 6.38; 18.95 | 64.28; 6.87; 21.27 / 63.50; 6.67; 20.81 | --
3.1 | 67.97; 8.27; 17.38 | 64.58; 7.63; 21.78 / 61.31; 8.23; 21.86 | 63.10; 6.39; 21.61 / 61.15; 6.68; 21.58 | --
3.2 | 67.91; 10.44; 18.81 | -- | -- | --
3.9 | 64.34; 3.64; 12.06 | -- | -- | --
4.1 | 68.64; 6.54; 16.65 | 58.44; 7.15; 17.93 / 58.69; 7.08; 17.20 / 58.67; 7.19; 19.44 | 56.15; 7.33; 18.59 / 58.98; 7.68; 19.15 | 62.56; 8.25; 20.00 / 62.01; 8.44; 21.01 / 59.95; 8.66; 20.50
c02 | 67.28; 9.89; 18.40 | -- | -- | --
c03 | 67.02; 5.86; 15.75 | -- | -- | --
c06 | 62.50; 6.36; 14.56 | -- | -- | 58.96; 8.96; 20.01 / 57.92; 9.61; 19.84 / 58.22; 9.58; 20.59
c07 | 65.99; 7.09; 15.06 | -- | -- | 46.31; 9.32; 17.89 / 47.21; 10.74; 18.50 / 47.38; 9.91; 18.43
c08 | 65.92; 6.97; 14.86 | -- | -- | 46.79; 9.10; 17.94 / 49.51; 8.56; 18.10 / 48.05; 9.62; 19.02
c09 | 71.15; 4.76; 13.40 | -- | -- | 46.62; 9.37; 16.14 / 44.70; 9.26; 15.02 / 45.52; 9.55; 15.66

For some sherds the thickness was also adequate to evaluate the colour parameters on their cross-sections. The cleanest areas on the surfaces were selected, to estimate at best the true colour of the ceramic body.
In all cases, depending on the dimension of the fragment and on the suitability of the analysed surfaces, 2 or 3 measurements were carried out on different areas of the surfaces to estimate the spread of the data for every considered sample. Finally, colorimetric measurements were performed on the powders of all samples representing the Ayla-Aksum, Late Roman Amphora 1 and dolia classes. A fragment was cut from each sample, polished to avoid contamination, and crushed using an agate mortar and pestle to obtain 100 mg of powder. By doing this, the resulting powders represent a mix of the components of the exterior, interior and cross-section parts of the ceramic bodies, including temper grains, which can also affect the colour measurements due to compositional variability and differences in particle size [10]. The measurements were performed by inserting these powders in specific cylindrical cells of optical fused silica with transmittance > 95 % and no spectral features in the whole visible range.

A Minolta CM-508i portable spectrophotometer was used, equipped with a pulsed xenon arc lamp and an integrating sphere to diffusely illuminate the specimen surface, which was viewed at an angle of 8° to the normal (d/8 geometry). The light reflected by the sample surface (specular component included) was detected by a silicon photodiode array, which allowed obtaining the reflectance spectrum in the 400 nm - 700 nm range, with wavelength pitches of 20 nm. The spectrophotometer was calibrated to provide the mean values of three consecutive measurements. Colour coordinates were expressed in the CIE L*a*b* system, using the illuminant D65 (average solar light) and a 10° viewing angle. In this system, the L* coordinate is related to colour lightness, while a* and b* are determined by hue and saturation [4], [6]. In figure 1, representative Ayla-Aksum, Late Roman Amphora 1 and dolia samples (scale in cm) and the stereo-microscopic images of their fresh cuts are shown.

It is worth noting that many factors could contribute to the occurrence of inconsistencies in the data, both within the same sample and between different samples, when untreated archaeological materials are analysed. Phenomena such as imperfect geometries of the analysed surfaces, the presence of contaminants (not easily detectable to the naked eye), alterations due to post-depositional processes, and porosity can contribute to this variability in the obtained measurements (see figure 1).

3. Results and discussion
3.1. Colorimetric measurements on exterior, interior and cross-sections
Table 1 and table 2 report the colorimetric coordinates for all measured samples, while table 3 indicates the largest values of ΔE*ab (ΔE*ab,max) for each sample, computed according to the following equation:

$$ \Delta E_{ab,\max} = \sqrt{\left(a_1^* - a_2^*\right)^2 + \left(b_1^* - b_2^*\right)^2} \,, \qquad (1) $$

where the differences between a1* and a2*, and between b1* and b2*, represent the colorimetric coordinates showing the maximum differences among the set of measurements on the exterior and interior surfaces of each sample. Lightness (L*) is reputed to be a less suitable parameter than hue and saturation (a* and b*) and, for this reason, it was not considered in this formulation [3].
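To make the computation of equation (1) concrete, a minimal Python sketch follows. The example data are the three exterior readings of sample 1.3(2) from table 1, and the result reproduces the value of 0.70 reported for that sample in table 3.

```python
# Delta-E_ab,max of equation (1): the largest chromatic distance (a*, b* only,
# lightness excluded) over all pairs of repeated measurements on one surface.
from itertools import combinations
from math import hypot

def delta_e_ab_max(measurements):
    """measurements: list of (L*, a*, b*) triplets for one surface of one sample."""
    return max(
        hypot(a1 - a2, b1 - b2)
        for (_, a1, b1), (_, a2, b2) in combinations(measurements, 2)
    )

# Exterior readings of Ayla-Aksum sample 1.3(2) from table 1:
exterior = [(54.62, 6.10, 17.61), (54.11, 5.89, 16.94), (53.10, 5.70, 17.61)]
print(round(delta_e_ab_max(exterior), 2))   # -> 0.7, matching table 3
```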
Table 2. All colorimetric values (L*; a*; b*) for powders, exterior and interior surfaces and cross-sections of the dolia typology. Repeated triplets separated by '/' are repeated measurements; '--' marks parts that were not measured.

sample | powder | exterior | interior | section
1.0 | 54.33; 11.66; 13.16 | 47.46; 12.55; 20.16 / 47.42; 12.70; 20.20 / 48.80; 13.01; 20.75 | 49.76; 12.11; 17.82 / 49.24; 12.29; 17.32 / 48.76; 12.61; 17.56 | 46.20; 15.06; 18.40 / 47.51; 15.54; 18.76 / 47.64; 14.99; 17.93
4.0 | 59.74; 19.19; 22.41 | 57.39; 9.53; 22.41 / 54.48; 9.01; 20.38 / 55.54; 8.82; 20.30 | 49.81; 15.14; 19.52 / 50.40; 15.34; 20.34 / 50.70; 15.34; 21.16 | --
4.8 | 66.75; 4.66; 12.72 | 59.01; 6.75; 16.66 / 55.63; 7.02; 17.79 / 56.02; 7.76; 18.99 | 58.46; 6.53; 17.86 / 52.99; 6.68; 18.90 / 55.65; 5.74; 16.75 | 59.10; 6.11; 15.35 / 59.11; 5.79; 15.32 / 59.12; 6.26; 15.24
4.9 | 61.15; 6.96; 14.28 | 43.06; 7.87; 18.66 / 48.06; 8.65; 19.25 | 47.06; 8.24; 16.31 / 50.67; 8.75; 17.63 / 47.06; 8.24; 17.63 | 52.44; 12.23; 19.43 / 55.23; 11.18; 20.83

Table 3. Association to the fabric groups identified by petrography (fg) and maximum values of ΔE*ab,max from colorimetry for each sample; the average per fabric group is also shown when there is more than one measurement.

Ayla-Aksum:
fg | sample | exterior | interior | in-out
a  | 1.2    | 1.47     | 0.94     | 8.21
a  | 1.3(1) | 0.19     | 0.86     | 3.34
a  | 1.4(2) | 1.87     | 0.64     | 1.87
b  | 1.3(2) | 0.70     | 1.14     | 4.35
b  | 2.0    | 1.22     | 0.54     | 1.85
b  | 3.6    | 3.15     | 0.75     | 3.15
a, average   | 1.2      | 0.8      | 4.5
b, average   | 1.7      | 0.8      | 3.1

Late Roman:
fg | sample | exterior | interior | in-out
a  | 1.9    | 2.74     | 4.29     | 4.29
b  | 2.1    | 1.07     | 1.98     | 2.99
b  | 2.3    | 2.86     | 0.72     | 4.71
b  | 3.0    | 1.18     | 0.50     | 2.37
b  | 3.1    | 0.61     | 0.29     | 1.86
c  | 4.1    | 2.24     | 0.66     | 2.24
b, average   | 1.4      | 0.9      | 3.0

Dolia:
fg | sample | exterior | interior | in-out
a  | 1.0    | 0.75     | 0.56     | 3.50
a  | 4.0    | 2.23     | 1.65     | 6.58
a1 | 4.8    | 2.54     | 2.35     | 3.02
b  | 4.9    | 0.98     | 1.42     | 2.97
a, average   | 1.49     | 1.11     | 5.04

The reported values of ΔE*ab,max are quite low in each data set, indicating that the surface colours of both the interior and the exterior surfaces of the pottery samples are quite homogeneous. Besides, the data from the interior surfaces seem less spread than those from the exterior ones: for example, the average values of ΔE*ab,max are 1.2-1.7 for the exterior surfaces of the Ayla-Aksum samples (depending on the fabric) and 0.8 for their interior ones. To highlight this behaviour, the bivariate plots of the L*, a* and b* parameters for all measurements are shown in figure 2 for a selected number of samples (one for each fabric). It is quite evident that differences can arise in the colour coordinates depending on which side of the ceramic body surface is measured (table 3). For example, in the Ayla-Aksum samples ΔE*ab,max is lower than 3.15 for the exterior and lower than 1.14 for the interior surface. A similar behaviour is observed in the Late Roman Amphora 1 samples (with the sole exception of an anomalous interior value for sample 1.9) and in the dolia samples. The measurements on the cross-sections (not reported in table 3 but shown in figure 2) are particularly homogeneous for all classes; this is probably due to favourable experimental conditions, related to the uncontaminated and flat surfaces of the sections. When comparisons are made between the values obtained for the exterior and interior surfaces, the cross-sections and the powdered samples, the colorimetric values for the cross-sections seem in many instances to be closer to those obtained for the powders.
However, some discrepancies in the colorimetric measurements on the cross-sections could also be related to irregular geometries.

Figure 2. Colorimetric values for selected samples (left); all colorimetric values for exterior and interior surfaces as well as cross-sections and powders (centre and right).

Moreover, in almost all cases it clearly appears that the colours of the exterior and interior surfaces of the ceramic bodies are darker, i.e. the values of L* are lower than those obtained for the corresponding powders. Such a survey confirms that discrepancies in the measurements on both the exterior and interior surfaces, and in some cases on the cross-sections, are quite common where untreated archaeological samples are concerned. This sampling problem can be linked to different phenomena. Further treatment of the samples, either by re-firing them at different temperatures to evaluate colour changes or by extracting powders from their fragments, can however sometimes compensate for measurement inconsistencies.

3.2. Colorimetric measurements on powdered samples
For the colorimetric measurements on powders, 21 Ayla-Aksum, 15 Late Roman Amphora 1 and 4 dolia samples were considered. In particular, the dispersion of the colorimetric values within a specific class of pottery was examined. The dispersion for the Ayla-Aksum samples (figure 3) clearly indicates that colorimetric information can be useful for discriminating different fabrics within a given class. The data dispersion in these samples indicates a colour variation from creamy (lower a* values) to a reddish hue (higher a* values): the former is typical of samples belonging to fabric a of the Ayla-Aksum amphorae, the latter pertains to fabric b. Such a differentiation might be related to different technologies of production (morphological and/or microstructural changes) and perhaps to compositional differences too.

Figure 3. Colorimetric values for powdered samples; colours refer to the fabric classification reported in table 3.

The distinction between these samples enabled by the colorimetric parameters further complements the observation of the different fabrics identified for the Ayla-Aksum amphorae by petrography. In this respect, colorimetry can be useful for a preliminary fabric classification when coupled to typological classification, petrography and stereomicroscopy. It should be noted, however, that the colour parameters defined for a homogenised paste (powders) only allow a preliminary fabric determination for each sample, rather than providing information about its composition or provenance.

In some cases, however, the fabric classification extrapolated from petrography does not match the distinction made by the colorimetric evaluation. This was observed in the trends discerned from the colorimetric evaluation of the powdered samples of the Late Roman Amphora 1 and dolia classes (figure 4). Such evidence might indicate that, although the original clay used for manufacturing should be similar, the addition of one or more tempers might account for the fabric variability defined by the petrographic analyses; in this respect, the mix design in the production of the ceramics can contribute to colour changes [10]. Therefore, in order to make reliable deductions, it is strictly necessary to link the colorimetric observations to textural, micro-structural and chemical studies.

Figure 4. Colorimetric values for powdered samples (Ayla-Aksum, Late Roman and dolia classes).

The results of this study prove that, while colorimetry can assist in defining the fabrics and the typological classifications, it certainly needs to be complemented with other approaches in order for more detailed information to be achieved.
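The kind of grouping check discussed above can be illustrated with a few lines of code: the snippet below computes the mean powder colour coordinates per fabric for the Ayla-Aksum class, using the a* and b* powder values from table 1 and the fabric labels from table 3 (only the samples with a fabric attribution are used; the computation itself is an illustration, not a procedure from the paper).

```python
import numpy as np

# (a*, b*) of the powder measurement per sample (from table 1).
powder_ab = {
    "1.2": (1.88, 14.99), "1.3(1)": (6.21, 17.60), "1.4(2)": (1.86, 15.09),  # fabric a
    "1.3(2)": (8.41, 17.25), "2.0": (6.89, 17.29), "3.6": (10.71, 18.45),    # fabric b
}
fabrics = {"a": ["1.2", "1.3(1)", "1.4(2)"], "b": ["1.3(2)", "2.0", "3.6"]}  # table 3

for fabric, samples in fabrics.items():
    ab = np.array([powder_ab[s] for s in samples])
    print(f"fabric {fabric}: mean a* = {ab[:, 0].mean():.2f}, "
          f"mean b* = {ab[:, 1].mean():.2f}")
# fabric a (creamy) yields the lower mean a* (about 3.3), fabric b (reddish)
# the higher one (about 8.7), in line with the separation described above.
```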
Furthermore, colorimetry can be significantly useful to make deductions about the inter-fabric variability of samples belonging to specific classes. The mere collection of colorimetric parameters for each sample considered in this study could not, if considered per se, support sharp distinctions when the comparison is made among the fabrics defined for the Ayla-Aksum, Late Roman Amphora 1 and dolia classes. This limitation is characteristic of colorimetric evaluations, as can be seen from figure 4, where the overlapping of the colorimetric data for different samples belonging to the three classes of pottery considered in this study prevents any definite conclusions. Nevertheless, the importance of this survey remains, as it allows, in many cases, a preliminary distinction of the fabric variability within a specific typological class of pottery.

4. Conclusions
Colorimetric observations were performed on Ayla-Aksum, Late Roman Amphora 1 and dolia pottery samples collected from the excavations at Adulis, in order to check potential groupings with respect to the fabric description obtained by means of stereomicroscopy and petrography. In this respect, this study showed that it is possible to correlate the colorimetric values to specific fabric attributions achieved by means of traditional microscopy approaches. Different sampling procedures were adopted to collect the colorimetric data, in order to thoroughly understand the limitations and strengths of such an approach. The exterior and interior surfaces of the ceramic bodies, as well as their cross-sections, were considered for the untreated samples, while powders were extracted from many samples in order to obtain homogeneous specimens for the survey. Subtle differences in the colour measurements due to these different sampling conditions were interpreted as potential discriminating parameters for establishing fabrics.

The variability of the data collected on the exterior or interior surfaces of the ceramic bodies, as well as on their cross-sections or powders, is an indication that various phenomena should be considered while interpreting these data, particularly for untreated samples. The ΔE*ab,max computation allowed us to understand subtle variations and/or inconsistencies in the colorimetric measurements on the exterior and interior surfaces, as well as on the cross-sections. Marked differences in these values have been used to make extrapolations, while feeble differences between samples belonging to the same fabric can hardly be used to postulate a differentiation between them. Yet, the limited number of analysed samples, coupled to the detected measurement discrepancies (due to several factors, such as imperfect geometries of the surfaces, post-depositional alterations and perhaps also porosity), makes it necessary to increase the statistical reliability of the colorimetric approach applied to the classes considered in this study. However, in many cases the attribution of samples to distinct groups was possible based on the colorimetric evaluation, and proved to be consistent with the classification previously determined by petrography. This observation pinpoints that colorimetric measurements can be useful to complement petrographic studies for an in-depth fabric description.
The correspondence of the colorimetric groupings with the petrographic observations is a further indicator that the information from colorimetry can be useful to support provenance and/or technological studies on archaeological pottery. On the other hand, when the colorimetric information cannot parallel the petrographic information (as seen in a few cases in this study), a detailed textural and chemical study, as well as a micro-structural understanding of the ceramic body, becomes necessary. In conclusion, the different parts of the ceramic body offer a variety of sampling choices for colorimetric evaluation with non-invasive procedures, and the objective comparison of colour through a quantitative analysis can thus overcome issues related to typological classification. Moreover, this survey showed that colorimetric measurements can be useful, at least in some cases, for a preliminary fabric determination when coupled to complementary techniques such as optical microscopy. Such information could be used to understand, at least partly, the technological processes, as well as to help in tracing the provenance of a given artefact (assuming that objectively attained colour measurements can be correlated to the information obtained from these complementary techniques). The inherent limitations of this approach have also been highlighted, particularly in deducing colour variations due to mineralogical, chemical and micro-structural differences, a subject that needs to be dealt with in the future directions of this research.

Acknowledgments
The authors wish to warmly thank the Commission of Culture and Sports of the State of Eritrea, the Northern Red Sea Museum of Massawa, and the Centro Ricerche sul Deserto Orientale (Ce.R.D.O.) for supporting this research. We also acknowledge the funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 754511 (PhD Technologies Driven Sciences: Technologies for Cultural Heritage – T4C).

References
[1] M. Giardino, R. Miller, R. Kuzio, D. Muirhead, Analysis of ceramic colour by spectral reflectance, American Antiquity, 63(3) (1998), pp. 477-483.
[2] J. R. McGrath, M. Beck, M. E. Hill, Jr., Replicating red: analysis of ceramic slip colour with CIELAB colour data, Journal of Archaeological Science: Reports 14 (2017), pp. 432-438. doi: 10.1016/j.jasrep.2017.06.020
[3] P. Mirti, P. Davit, New developments in the study of ancient pottery by colour measurement, Journal of Archaeological Science 31 (2004), pp. 741-751. doi: 10.1016/j.jas.2003.11.006
[4] P. Mirti, On the use of colour coordinates to evaluate firing temperatures of ancient pottery, Archaeometry 40 (1998), pp. 45-57. doi: 10.1111/j.1475-4754.1998.tb00823.x
[5] M. Daszkiewicz, L. Maritan, Experimental firing and re-firing, in: The Oxford Handbook of Archaeological Ceramic Analysis, A. M. W. Hunt (ed.), Oxford Handbooks in Archaeology, 2016, ISBN: 9780199681532.
[6] M. Bayazit, I. Işık, A. Issi, E. Genç, Spectroscopic and thermal techniques for the characterization of the first millennium AD potteries from Kuriki, Turkey, Ceramics International 40(9) (2014), pp. 14769-14779. doi: 10.1016/j.ceramint.2014.06.068
[7] M. J. Feliu, M. C. Edreira, J. Martin, Application of physical-chemical analytical techniques in the study of ancient ceramics, Analytica Chimica Acta 502 (2004), pp. 241-250. doi: 10.1016/j.aca.2003.10.023
[8] L. Nodari, E. Marcuz, L. Maritan, C. Mazzoli, U. Russo, Hematite nucleation and growth in the firing of carbonate-rich clay for pottery production, Journal of the European Ceramic Society 27 (2007), pp. 4665-4673. doi: 10.1016/j.jeurceramsoc.2007.03.031
[9] C. Germinario, G. Cultrone, A. De Bonis, F. Izzo, A. Langella, M. Mercurio, V. Morra, A. Santoriello, S. Siano, C. Grifa, The combined use of spectroscopic techniques for the characterization of Late Roman common wares from Benevento (Italy), Measurement 114 (2018), pp. 515-525. doi: 10.1016/j.measurement.2016.08.005
[10] A. De Bonis, G. Cultrone, C. Grifa, A. Langella, A. Leone, M. Mercurio, V. Morra, Different shades of red: the complexity of mineralogical and physico-chemical factors influencing the colour of ceramics, Ceramics International 43 (2017), pp. 8065-8074. doi: 10.1016/j.ceramint.2017.03.127
[11] Y. Yang, M. Feng, X. Ling, Z. Mao, C. Wang, X. Sun, M. Guo, Microstructural analysis of the colour-generating mechanism in Ru ware, modern copies, and its differentiation with Jun ware, Journal of Archaeological Science 32 (2005), pp. 301-310. doi: 10.1016/j.jas.2004.09.007
[12] J. Molera, T. Pradell, M. Vendrell-Saz, The colours of Ca-rich ceramic pastes: origin and characterization, Applied Clay Science 13 (1998), pp. 187-202. doi: 10.1016/s0169-1317(98)00024-6
[13] L. Nodari, L. Maritan, C. Mazzoli, U. Russo, Sandwich structures in the Etruscan-Padan type pottery, Applied Clay Science 27 (2004), pp. 119-128. doi: 10.1016/j.clay.2004.03.003
[14] V. Valanciene, R. Siauciunas, J. Baltusnikaite, The influence of mineralogical composition on the colour of clay body, Journal of the European Ceramic Society 30 (2010), pp. 1609-1617. doi: 10.1016/j.jeurceramsoc.2010.01.017
[15] R. Mentesana, V. Kilikoglou, S. Todaro, P. M. Day, Reconstructing change in firing technology during the Final Neolithic-Early Bronze Age transition in Phaistos, Crete. Just the tip of the iceberg?, Journal of Archaeological and Anthropological Sciences 11 (2019), pp. 871-894. doi: 10.1007/s12520-017-0572-8
[16] Y. Maniatis, The emergence of ceramic technology and its evolution as revealed with the use of scientific techniques, in: From Mine to Microscope: Advances in the Study of Ancient Technology, A. Shortland, I. Freestone, T. Rehren (editors), Oxbow Books, 2009, eISBN: 978-1-78297-279-2, pp. 11-28.
[17] C. Zazzaro, E. Cocca, A. Manzo, Towards a chronology of the Eritrean Red Sea port of Adulis (1st - early 7th century AD), Journal of African Archaeology 12(1) (2014), pp. 43-73. doi: 10.3213/2191-5784-10253
[18] D. Peacock, L. Blue (eds.), The Ancient Red Sea Port of Adulis, Eritro-British Expedition, 2004-5, Oxbow Books, Oxford, 2007, ISBN: 9781842173084.
[19] C. Zazzaro, The Ancient Red Sea Port of Adulis and the Eritrean Coastal Region, BAR International Series, vol. 2569, Oxford, 2013, ISBN: 978 1 4073 1190 6.
[20] R. K. Pedersen, The Byzantine-Aksumite period shipwreck at Black Assarca Island, Eritrea, Azania XLIII (2008), pp. 77-94. doi: 10.1080/00672700809480460
[21] M. M. Raith, R. Hoffbauer, H. Euler, P. A. Yule, K. Damgaard, The view from Zafar: an archaeometric study of the Aqaba pottery complex and its distribution in the 1st millennium CE, ZORA 6 (2013), pp. 320-350.
[22] S. Massa, A. De Bonis, V. Morra, V. Guarino, in: S. Massa (ed.), Adulis Project 2015 Report (2015), pp. 85-90 (unpublished).
[23] Adulis Project 2018 Report (unpublished), S. Massa (ed.).
[24] Adulis Project 2019 Report (unpublished), S. Massa (ed.).
[25] Adulis Project 2020 Report (unpublished), S. Massa (ed.).

A strategy to control industrial plants in the spirit of Industry 4.0 tested on a fluidic system
ACTA IMEKO, ISSN: 2221-870X, June 2022, Volume 11, Number 2, 1-7

Laura Fabbiano1, Paolo Oresta1, Rosario Morello2, Gaetano Vacca1
1 DMMM, Politecnico di Bari University, Bari, Italy
2 DIIES, University Mediterranea of Reggio Calabria, Italy

Section: Research Paper
Keywords: Industry 4.0; predictive maintenance; prognostic approach; plant operation simulator; fluidic thrust plant
Citation: Laura Fabbiano, Paolo Oresta, Rosario Morello, Gaetano Vacca, A strategy to control industrial plants in the spirit of Industry 4.0 tested on a fluidic system, Acta IMEKO, vol. 11, no. 2, article 31, June 2022, identifier: IMEKO-ACTA-11 (2022)-02-31
Section editor: Francesco Lamonaca, University of Calabria, Italy
Received November 9, 2021; in final form February 21, 2022; published June 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Laura Fabbiano, e-mail: laura.fabbiano@poliba.it

Abstract
The goal of the paper is to propose a strategy for automating the control of a wide spectrum of industrial process plants in the spirit of Industry 4.0. The strategy is based on the creation of a virtual simulator of the operation of the plants involved in the process. Through the digitisation of the operational data sheets of the various components, the simulator can provide the reference values of the process control parameters to be compared with their actual values, in order to decide on the direct inspection and/or operational intervention on critical components before a possible failure. As an example, a simple fluidic thrust plant has been considered, for which a mathematical model (simulator) of its optimal operating conditions has been formulated by using the digitised real operational data sheets of its components. The simple thrust system considered consists of a centrifugal pump driven by a three-phase electric motor, an inverter to regulate the rotation of the motor and a proportional valve that simulates the external load acting on the pump. As a result, the operational data sheets and principal characteristics of the pump have been reproduced by means of the simulator developed here, showing very good agreement.

1. Introduction
The costs of maintenance are a considerable share of the total operating costs in industrial production. Moreover, several extra costs come from the profit loss due to undesired failures of the plant. These costs can be reduced by a failure prediction that preserves the devices through just-on-time maintenance. These two classes of problems can be solved by the implementation of preventive maintenance in the perspective of the smart industry [1]-[3], in which all the elements of a factory work in a completely collaborative and integrated way and deal in real time with timely changes of the workflow. Concerning the cost reduction enabled by just-on-time failure detection, the goal of the smart industry is to implement all the fundamental measurement procedures for continuous real-time condition monitoring, coordinated across all plant elements [4]-[6]. The keywords of the fourth industrial revolution are, therefore, "preventive maintenance", "intelligent and real-time orchestration" and "synchronization of physical and digital processes". In the literature, there are crucial new insights into the continuous screening of devices, which has made possible the detection of incipient failures [7]-[9]. Among others, the prognostic approach is mandatory to obtain an accurate detection and coding of incipient machine failures [10]-[12].
The detection of the degradation and damage of a plant component is the goal of the predictive approach. The condition-based maintenance (CBM) specifications [13], [14] inspired by the prognostic approach have been applied by the authors to the case of a simple fluidic thrust system by using a mathematical approach. The system consists of a pump-motor block with inverter and a control valve, which represents the load acting on the P-M block and is attributable to the network supplied by it. The approach uses a numerical simulator to manage and analyse in real time all the characteristic parameters (sensed or mathematically predicted) of each system component, in order to detect and give warning of likely incipient anomalies of the monitored components.

The virtual simulator is based on the digital reconstruction of the data sheets of all the more sensitive components of the system, to allow the digital prediction of the optimal operating point of the system. The data provided by the model are compared with the real-time instantaneous acquisitions of the corresponding main operating parameters. Any discrepancy between the virtual and actual values of a monitored parameter identifies which of the components of the system may be in critical condition, thus reducing its start and stop times, which are responsible for the maintenance and service interruption costs. The proposed procedure, based on the digitisation of the technical data sheets of the plant devices, can easily be extended to most of the systems operating in industrial processes, thus allowing the entire system to be controlled in real time and dangerous failure scenarios to be detected.

2. Plant specifications
The system simulated here is a simple fluidic thrust plant and consists of a centrifugal pump (Calpeda NM4 25/12A/A [15]) driven by an asynchronous electric motor whose rotation speed is regulated by an inverter, and of a control valve simulating the supplied network. The operating point of such a system is determined by the coupling of the pump with the valve. The technical and geometric characteristics of each component are known.
Those of the pump are reported here:
• n = 1450 rpm: reference rotational speed
• P = 0.25 kW: absorbed power at 1450 rpm
• Q = 1-6 m³/h: flow rate working range
• H_u = 6.1-3.3 m: head range
• D_2 = 131.5 mm: external diameter of the impeller
• β_2 = 157.5°: blade exit angle
• l_2 = 4 mm: blade height
• z = 7: number of blades
• ζ = 1 − z·s/(π·D_2) = 0.95: blade bulk coefficient

A user interface has been set up to manage the procedure of analysis and acquisition of data. Specifically, the interface has been created on the LabVIEW platform and is divided into several sections, each of which manages and analyses a different part of the plant (figure 1).

Figure 1. User interface on the LabVIEW platform.

In the control section one finds the commands to change the opening degree of the valve and the inverter frequency. The target section sets the desired number of revolutions of the motor: once the frequency is set from the control panel, it automatically varies until the system reaches a stable operating point where the number of revolutions of the pump coincides with the value imposed in the target panel. The transient panel shows the indicators for the instantaneous monitoring of the plant; in particular, the quantities acquired by the transducers (number of revolutions, mechanical torque, flow rate, head) and some derived quantities, such as the instantaneous absorbed power, are represented. The results panel collects in graphic form (three different plots) the trends of the pump head (characteristic curve), efficiency and absorbed power as functions of the flow rate.

3. Virtual simulator
The mathematical model of the plant consists of:
1. the valve model;
2. the pump-motor block model.
Most of the geometric and operating data have been inferred from the technical specifications of the components.

3.1. Valve model
The control valve (here a proportional solenoid valve) allows the modulation of the circuit external characteristic (valve characteristic) [16]. That is, by acting on the closing/opening control device, the relationship between the pressure drop and the flow rate through the valve changes, so determining a new operating point as the intersection with the internal characteristic (pump operating curve). From Bernoulli's equation, the relationship at the basis of the valve operation comes out as

$$ Q = \left[ c_{v,\max}\, h + c_{v,\min}\,(1-h) \right] \sqrt{\frac{H_v}{H_v^*}} \,, \qquad (1) $$

where:
• 0 < h < 1 is the degree of opening of the valve (or shutter stroke);
• c_{v,max} = 0.005 m³/h and c_{v,min} = 10⁻⁶ m³/h are the maximum and minimum flow rates (or efflux coefficients), estimated from the technical data sheet;
• H_v is the static pressure drop through the valve (m);
• H_v* is the static pressure drop of 1 psi through the valve, expressed in m; it represents the linear operational limit of the valve itself.

From the previous relationship we get

$$ H_v = \frac{Q^2\, H_v^*}{\left[ c_{v,\max}\, h + c_{v,\min}\,(1-h) \right]^2} \,. \qquad (2) $$

If a linear valve is considered, its characteristic coefficient K_v(h) reads

$$ K_v(h) = \frac{H_v^*}{\left[ c_{v,\max}\, h + c_{v,\min}\,(1-h) \right]^2} \,, \qquad (3) $$

so making it possible to rewrite the previous relation as

$$ H_v = Q^2\, K_v(h) \,. \qquad (4) $$

The efflux coefficient of the valve is a monotonically increasing function of the shutter stroke, c_v(h), and it can be expressed in dimensionless form if the following quantities are considered:
• the relative efflux coefficient: φ = c_v / c_{v,max};
• the intrinsic rangeability, the ratio between the maximum and the minimum values of the efflux coefficient: r = c_{v,max} / c_{v,min}.
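The valve model of equations (1)-(6) can be sketched in a few lines of code. The linear characteristic φ(h) of equation (5) is assumed, and the numeric value of H_v* (1 psi expressed in metres of water column) is an assumption of this sketch, since the paper does not state it; units follow the text (flow rates in m³/h).

```python
# Sketch of the valve model, equations (1)-(6); linear characteristic assumed.
CV_MAX = 0.005     # maximum efflux coefficient, m^3/h (from the text)
CV_MIN = 1e-6      # minimum efflux coefficient, m^3/h (from the text)
H_V_STAR = 0.703   # 1 psi expressed in metres of water column (assumption)

R = CV_MAX / CV_MIN                      # intrinsic rangeability r

def phi_linear(h: float) -> float:
    """Relative efflux coefficient of a linear valve, equation (5)."""
    return h + (1 - h) / R

def k_v(h: float) -> float:
    """Characteristic valve coefficient K_v(h), equation (6)."""
    return H_V_STAR / (CV_MAX * phi_linear(h)) ** 2

def head_loss(q: float, h: float) -> float:
    """Static pressure drop across the valve, equation (4): H_v = Q^2 K_v(h)."""
    return q**2 * k_v(h)
```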
Examples of valve characteristics expressed in terms of $h$ are shown in Figure 2 for linear, equal-percentage, fast-opening and quadratic valves. For the linear characteristic, the following relation holds:

$\varphi(h) = h + \dfrac{1}{r}\,(1-h)$ . (5)

In the plant operating model (simulator), the characteristic coefficient of the valve can be rewritten, from (4), as

$K_v(h) = \dfrac{H_v^*}{c_v(h)^2} = \dfrac{H_v^*}{c_{v\max}^2\, \varphi(h)^2}$ . (6)

This relation for $K_v(h)$ keeps holding for other kinds of valves as well, provided the right formula for $\varphi(h)$ is introduced in it.

3.2. Motor-pump block

The 'pump-motor' block (Figure 3) represents the virtual simulator of a centrifugal pump and its electric control motor. This block has 2 inputs and 4 outputs. The 2 inputs are, respectively:
1. the coefficient of the valve, whose operating characteristic is enclosed in the 'valve' model and is provided by it; its regulation will result in a variation of the number of revolutions of the pump, $n$;
2. the magnetic field frequency, a parameter recalled by the control unit, which allows acting on it in feedback in order to restore the number of revolutions to a constant target value during the process of digital reconstruction of the characteristic curves of the pump.

The 4 outputs represent the virtual sensors of the P-M block, suitable for the detection of:
1. the flow rate $Q$ (m³/h);
2. the head provided by the pump $H_u$ (m);
3. the number of engine revolutions $n$ (rpm);
4. the drive torque $C_m$ (N m).

They are recalled by the supervisor function (see Figure 1) and made visible in the transient panel, where the instant monitoring indicators of the system are reported; in particular, the quantities acquired by the transducers (number of revolutions, mechanical torque, flow rate, head) and some derived quantities, such as the instantaneous absorbed power, are shown.

In the model there are some geometric parameters directly available from the technical characteristics of the pump; other necessary parameters are inferred from the characteristic curves and the power absorbed by the pump at a fixed pump revolution. The internal characteristic of the pump at the revolution $n_0 = 1450$ rpm can be expressed in polynomial form as

$H_u = -0.0893\, Q^2 + 0.0706\, Q + 6.104$ , (7)

obtained by least-squares polynomial regression of the pump data reported in Table 1 and shown in Figure 4. It is possible to generalize this relation and make it valid for any number of revolutions as

$H_u = a\, Q^2 + b\, \dfrac{n}{n_0}\, Q + c \left( \dfrac{n}{n_0} \right)^2$ , (8)

where $a$, $b$ and $c$ are the coefficients of the specific previous relation. Substituting the expression of the valve characteristic (4) in (8), in the steady-state operating condition of the system, we obtain

$\left[ a - K_v(h) \right] Q^2 + b\, \dfrac{n}{n_0}\, Q + c \left( \dfrac{n}{n_0} \right)^2 = 0$ . (9)

By then expressing $Q$ as a function of the geometrical and operating parameters of the pump as

$Q = K_Q\, \phi\, n$ , (10)

where $K_Q = \eta_v\, \zeta\, \pi^2\, l_2\, D_2^2$ and $\phi$ is the flow parameter, and substituting it into the equation regulating the operation of the plant (9), we get the new relation

$K_{\phi 0}\, \phi^2 + K_{\phi 1}\, \phi + K_{\phi 2} = 0$ , (11)

where $K_{\phi 0} = K_Q^2\, n_0^2 \left[ a - K_v(h) \right]$, $K_{\phi 1} = b\, K_Q\, n_0$ and $K_{\phi 2} = c$.

The equality of the head provided by the pump with the pressure drop introduced by the control valve is not sufficient to describe the operation of the experimental system: we also need the power conservation law, for which the power supplied by the motor equals the power absorbed by the pump.

Figure 2. Valve characteristic curves in terms of the flow-rate coefficient $\phi$ as a function of the opening position $h$. The curves show the cases of linear, parabolic, equal-percentage and fast-opening design.
Figure 3. Inputs and outputs of the pump-motor block.

Figure 4. Pump performance curve at $n = 1450$ rpm: comparison between simulation (best fit) and actual data-sheet operational points.

Table 1. Pump data.
Q in m³/h: 1 | 1.2 | 1.5 | 1.8 | 2.4 | 3 | 3.6 | 4.2 | 4.8 | 5.4 | 6
H in m: 6 | 6.05 | 6 | 5.9 | 5.8 | 5.5 | 5.2 | 4.8 | 4.4 | 3.9 | 3.3

The power absorbed by the pump is expressed by

$P_{ap} = \dfrac{\gamma\, Q\, H_u}{\eta_y\, \eta_v\, \eta_m} = \dfrac{\rho\, Q\, L_i}{\eta_v\, \eta_m} = \dfrac{\rho\, K_Q\, \phi\, \psi\, \pi\, D_2^2\, n^2}{2\, \eta_v\, \eta_m}$ , (12)

in which $\psi = 2\,(1 - \phi\, \mathrm{cotg}\, \beta_2) - (2\, \pi \sin \beta_2)/z$, and $\eta_v$ and $\eta_m$ are the volumetric and mechanical efficiencies, to be discussed in the results section, where the methods to evaluate their values during the actual operation of the plant for its monitoring will be described. Both could turn out to be indices of possible anomalies.

With reference to the expression of the power supplied by the motor, $P_m$, the mechanical torque $C_m$ can be expressed. The mechanical torque $C_m$ of the electric motor can be written as a function of the slip $s$ according to the relation

$C_m = \dfrac{2\, C_{m\max}\, s_{\max}\, s}{s_{\max}^2 + s^2}$ , (13)

in which $C_{m\max}$ is the maximum torque, available at the slip value $s = s_{\max}$ and taken from the relative data sheet: $C_{m\max} = 50$ N m at $s_{\max} = 0.2$. The slip represents the difference between the speed of rotation of the rotor (i.e., the shaft) and the speed of rotation of the magnetic field, $n_s$, that is

$s = \dfrac{n_s - n}{n_s}$ , (14)

where

$n = \dfrac{60\, f}{p}\,(1 - s)$ , (15)

with $p = 2$ motor polar couples. The mechanical characteristic of the motor, $C_m$, is represented as a function of the shaft revolution in Figure 5 for three different values of frequency. This characteristic shows that the torque reaches its maximum value at $s = s_{\max}$ and falls off as the slip approaches zero, where the motor speed is close to the synchronism speed $n_s$.

The efficiency $\eta$ of the three-phase asynchronous motor can be calculated with the well-known formula

$\eta = \dfrac{P_r}{P_a}$ , (16)

where $P_r$ is the mechanical power supplied to the rotor and $P_a$ is the electrical power supplied to the stator. Equating the power absorbed by the pump to the power supplied by the motor, we obtain

$\dfrac{\rho\, K_Q\, \phi\, \psi\, \pi\, D_2^2\, n^2}{2\, \eta_v\, \eta_m} = \left( \dfrac{2\, C_{m\max}\, s_{\max}\, s}{s_{\max}^2 + s^2} \right) 2\, \pi\, n$ (17)

and, plugging into it the expression of the slip $s$, we get

$\rho K_Q \phi \psi \pi D_2^2 \left( \dfrac{p}{f} \right)^2 n^4 - 2\, \rho K_Q \phi \psi \pi D_2^2\, \dfrac{p}{f}\, n^3 + \rho K_Q \phi \psi \pi D_2^2\, (s_{\max} + 1)\, n^2 + 8\, C_{m\max} s_{\max} \eta_v \eta_m\, \dfrac{p}{f}\, n - 8\, C_{m\max} s_{\max} \eta_v \eta_m = 0$ . (18)

The previous equation can be rewritten in the following easier-to-read way:

$K_{n0}\, n^4 + K_{n1}\, n^3 + K_{n2}\, n^2 + K_{n3}\, n + K_{n4} = 0$ , (19)

with obvious meaning of the symbols. This equation represents the main equation of the plant operating model. It can be solved iteratively, for example by the Newton-Raphson method, to provide the revolution speed of the system for a fixed working condition. In the present case, the iterations have been stopped when the percentage error on $n$ was less than $10^{-5}$.

The described set of model equations of the thrust system components constitutes the simulator of its working operation.

4. Results

The volumetric efficiency of the pump changes according to the operating regime. It is negligible for null flow rates, i.e., in the case of a fully closed control valve. As the flow rate increases, the volumetric efficiency increases up to a plateau value that remains almost constant over a wide range of the pump flow rate in steady-state operation.
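Two numerical steps of the simulator described above can be illustrated directly in MATLAB: the least-squares regression that yields (7) from the data-sheet points of Table 1, and the Newton-Raphson iteration that solves (19). In the sketch below, the coefficients K(1)...K(5) are hypothetical placeholders standing for the expressions collected in (18), not the actual plant values.

% 1) Internal characteristic of the pump, eq. (7): second-order
%    polynomial fit of the data-sheet points of Table 1.
Q = [1 1.2 1.5 1.8 2.4 3 3.6 4.2 4.8 5.4 6];     % flow rate (m^3/h)
H = [6 6.05 6 5.9 5.8 5.5 5.2 4.8 4.4 3.9 3.3];  % head (m)
abc = polyfit(Q, H, 2);  % expected close to [-0.0893 0.0706 6.104]

% 2) Newton-Raphson solution of the plant equation (19):
%    K(1)*n^4 + K(2)*n^3 + K(3)*n^2 + K(4)*n + K(5) = 0.
K  = [1e-10 -2e-7 1.5e-4 -0.02 -5];   % hypothetical K_n0 ... K_n4
f  = @(n) polyval(K, n);
df = @(n) polyval(polyder(K), n);
n  = 1450;                            % initial guess (rpm)
for it = 1:100
    dn = f(n) / df(n);
    n  = n - dn;
    if abs(dn/n) < 1e-5, break; end   % stop at 1e-5 relative error, as in the text
end
fprintf('operating speed n = %.1f rpm after %d iterations\n', n, it);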
The data-sheet values of the volumetric efficiency are shown in Figure 6 below.

Figure 5. Characteristics of the asynchronous motor in terms of torque as a function of revolution, for different frequency values.

Figure 6. Pump volumetric efficiency experimental data.

For high flow rates, the following procedure allows calculating the values of the mechanical efficiency $\eta_m$ and of the volumetric efficiency $\eta_{v\infty}$ (asymptotic value), respectively. The pressure parameter $\psi$ can be computed in the following two ways:

$\psi = \dfrac{2\, g\, H_u}{u_2^2\, \eta_p}\, \eta_m\, \eta_v$ (20)

or

$\psi = 2\,(1 - \phi\, \mathrm{cotg}\, \beta_2) - \dfrac{2\, \pi \sin \beta_2}{z}$ . (21)

Equating the above equations and rewriting the flow parameter as a function of the flow rate, $\phi = \phi(Q)$, we get

$\dfrac{2\, g\, H_u}{u_2^2\, \eta_p} = -\dfrac{2\, \mathrm{cotg}\, \beta_2}{\eta_v\, \zeta\, \pi^2\, l_2\, D_2^2\, n}\, \dfrac{1}{\eta_m\, \eta_v}\, Q + \dfrac{2}{\eta_m\, \eta_v} \left( 1 - \dfrac{\pi \sin \beta_2}{z} \right)$ . (22)

From the experimental data of the pump Calpeda NM4 25/12A/A, it is observed that for flow rates greater than a threshold value (0.001 m³/s) the term $2 g H_u / (u_2^2 \eta_p)$ is a linear function of $Q$ (Figure 7). In the linearity range, the coefficients of the previous equation can be calculated by best fitting the discrete values of the pump data sheet (Figure 8), and it is therefore possible to estimate the unknown values of the mechanical efficiency $\eta_m$ and of the volumetric efficiency $\eta_v$. Specifically, the mechanical efficiency is equal to $\eta_m = 0.955$ and is assumed constant over the whole operating regime, whereas the volumetric efficiency obtained from this procedure represents the asymptotic limit valid only in the range of high flow rates and is equal to $\eta_{v\infty} = 0.719$.

Once the value of the mechanical efficiency is known, $\eta_v$ can be made explicit from (22) as a function of the flow rate only, $\eta_v = \eta_v(Q)$. This relationship is well described by

$\eta_v = \eta_{v\infty} \left( 1 - \mathrm{e}^{-Q/\tau} \right)$ , (23)

where the value $\tau = 3.83 \cdot 10^{-4}$ m³/s is obtained from the best fit of the available operating data of the pump with $\eta_{v\infty} = 0.719$. Figure 9 shows the good agreement between the theoretical prediction and the pump data over the whole range of flow rates.

The simulator function uses this model of the volumetric efficiency together with the prediction of the mechanical efficiency. It provides good agreement of the control parameters with the characteristic data of the pump in terms of absorbed power $P_a$, efficiency $\eta_p$ and pressure head $H_u$. Through the simulator described in the previous section, it is possible to digitally reconstruct the pump characteristic curves (Figure 10, Figure 11 and Figure 12). By using the digitalized technical and operating data of the system components and setting a value of the inverter frequency regulating the number of revolutions of the pump-motor block, this speed can be determined iteratively from (19); the outputs of the pump-motor block are then calculated from its model, i.e. the flow rate, the head, the torque $C_m$ and the efficiency of the pump $\eta_p$. These constitute the virtual values with which to compare the physical ones acquired by the sensors prepared for that purpose in the system (and the calculated ones, such as the mechanical and volumetric efficiencies). The entire performance curves of the pump can thus be reconstructed and digitalized by simply varying the opening degree of the valve for each of the desired discrete values of the frequency (revolution speed of the pump) and by reiterating the use of the simulator equations as reported in the previous sections.
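The best fit leading to (23) can be sketched in the same environment. The data points below are hypothetical placeholders consistent with the values reported above ($\eta_{v\infty} = 0.719$, $\tau = 3.83 \cdot 10^{-4}$ m³/s), not the actual data-sheet values of Figure 6.

% Least-squares fit of the volumetric-efficiency model, eq. (23):
% eta_v(Q) = eta_inf * (1 - exp(-Q/tau)).
Q_data   = [0.5 1 2 4 8 12 16] * 1e-4;                   % flow rate (m^3/s)
eta_data = [0.088 0.165 0.292 0.466 0.630 0.687 0.708];  % placeholder points
model = @(p, Q) p(1) * (1 - exp(-Q / p(2)));  % p = [eta_inf, tau]
p = lsqcurvefit(model, [0.7 4e-4], Q_data, eta_data);
fprintf('eta_v,inf = %.3f, tau = %.2e m^3/s\n', p(1), p(2));
% With the paper's data the fit gives eta_v,inf = 0.719, tau = 3.83e-4 m^3/s.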
Figure 7. Experimental values of the left-hand-side term of (22) as a function of the flow rate.

Figure 8. Linear regression of the experimental data for the left-hand-side term of (22) as a function of the flow rate.

Figure 9. Least-squares regression of the $\eta_v$ experimental data.

Figure 10. Expected pump efficiency from the simulator compared with actual operational points.

Figure 13 shows, as an example of application, the pump performance curves evaluated by the simulator for three revolution speed values. The operating parameter values predicted by the models, including the mechanical and volumetric efficiencies, are the reference targets for eventual anomaly detection when compared with the corresponding acquired values. A significant discrepancy between the predicted and sensed values of a characteristic quantity can indicate an incipient criticality of the component to which it refers. In these conditions it is possible to report the irregularity of operation and to provide for it with an active control of the system. In addition, it is even possible to understand the origin of the disagreement between the real and expected values of the mechanical or volumetric efficiency; for example, with reference to their possible discrepancy, a critical condition of the bearings or seals of the pump-motor block may be suspected.

5. Discussion and conclusion

The intent of the authors was to propose a strategy for the automation of the monitoring of wide-ranging industrial processes, in the spirit of the "smart industry", in order to reduce the risks of sudden stoppage of the activities caused by a possible criticality of the most delicate components, to reduce the time needed to identify the criticality, and to reduce the maintenance time necessary to restore normal operating conditions, thus reducing the associated costs.

The strategy is based on the creation of a virtual simulator of the operation of the various plants involved in the process which, through the digitization of the data sheets of the various components, can provide the reference values of the process control parameters. Those values are then compared with the values acquired by the measuring chains predisposed for the purpose, allowing the operator to report any suspected anomaly in place and to intervene quickly to restore optimal operating conditions through targeted maintenance. To this end, and by way of example, we have proposed the formulation of a simulator of a simple fluidic thrust system, consisting of a pump-motor block with inverter and a regulation valve, which represents the load acting on the P-M block and is attributable to the network it supplies. The simulator allows managing and analysing in real time all the characteristic parameters, acquired and/or calculated, of each of the monitored system components during normal operation, to determine whether conditions of possible incipient anomalies exist on the components under observation. This operation allows comparing, through real-time acquisitions, the instantaneous values of the characteristic operating parameters with those provided by the simulator, corresponding to those that should hold in the optimal operating conditions of the system.
In this way it is possible to identify which characteristic parameter of the various monitored components of the system shows the most discrepant value with respect to the optimal one, thus revealing a possible critical condition of the component to which it refers. The operator can then decide to intervene on the component in time, minimizing the intervention times and therefore the maintenance and restoration costs of normal system operation. It is the opinion of the authors that the identified procedure, based on the digitization of the technical data sheets of the plant components, is extendable to most of the operating plants of an industrial process, thus allowing the entire process to be controlled in real time.

Figure 11. Expected pump manometric head from the simulator compared with actual operational points.

Figure 12. Expected pump absorbed power from the simulator compared with actual operational points.

Figure 13. Expected pump performance curves from the simulator for three revolution speeds (red lines) and four $\eta_p$ iso-lines.

References
[1] R. A. Luchian, G. Stamatescu, I. Stamatescu, I. Fagarasan, D. Popescu, IIoT decentralized system monitoring for smart industry applications, 2021 29th Mediterranean Conference on Control and Automation (MED), 2021, pp. 1161-1166. DOI: 10.1109/MED51440.2021.9480341
[2] L. Fabbiano, G. Vacca, G. Dinardo, Smart water grid: a smart methodology to detect leaks in water distribution networks, Measurement, vol. 151, (2020). DOI: 10.1016/j.measurement.2019.107260
[3] L. Ardito, A. Messeni Petruzzelli, U. Panniello, A. Garavelli, Towards Industry 4.0: mapping digital technologies for supply chain management-marketing integration, Business Process Management Journal, vol. 25, (2019), no. 2, pp. 323-346. DOI: 10.1108/BPMJ-04-2017-0088
[4] A. J. Isaksson, I. Harjunkoski, G. Sand, The impact of digitalization on the future of control and operations, Comput. and Chem. Eng., 114, (2018), pp. 122-129. DOI: 10.1016/j.compchemeng.2017.10.037
[5] G. Dinardo, L. Fabbiano, G. Vacca, A smart and intuitive machine condition monitoring in the Industry 4.0 scenario, Measurement, 126, (2018), pp. 1-12. DOI: 10.1016/j.measurement.2018.05.041
[6] M. Short, J. Twiddle, An industrial digitalization platform for condition monitoring and predictive maintenance of pumping equipment, Sensors, 19, (2019), 3781. DOI: 10.3390/s19173781
[7] P. Girdhar, C. Scheffer, Predictive maintenance techniques: part 1 - predictive maintenance basics, in Practical Machinery Vibration Analysis and Predictive Maintenance, P. Girdhar and C. Scheffer, Eds., Oxford, Newnes, (2004), pp. 1-10.
[8] P. Girdhar, C. Scheffer, Predictive maintenance techniques: part 2 - vibration basics, in Practical Machinery Vibration Analysis and Predictive Maintenance, P. Girdhar and C. Scheffer, Eds., Oxford, Newnes, (2004), pp. 11-28.
[9] M. Caciotta, V. Cerqua, F. Leccese, S. Giarnetti, E. De Francesco, E. De Francesco, N. Scaldarella, A first study on prognostic system for electric engines based on envelope analysis, in IEEE Metrology for Aerospace (2014), Benevento, Italy, 29-30 May 2014, pp. 362-366. DOI: 10.1109/MetroAeroSpace.2014.6865950
[10] T. Van Tung, Y. Bo-Suk, Machine fault diagnosis and prognosis: the state of the art, International Journal of Fluid Machinery and Systems 2.1, (2009), pp. 61-71.
DOI: 10.5293/IJFMS.2009.2.1.061
[11] Li Zhe, Yi Wang, Ke-Sheng Wang, Intelligent predictive maintenance for fault diagnosis and prognosis in machine centers: Industry 4.0 scenario, Advances in Manufacturing 5.4, (2017), pp. 377-387. DOI: 10.1007/s40436-017-0203-8
[12] E. Petritoli, F. Leccese, G. Schirripa Spagnolo, New reliability for Industry 4.0: a case study in COTS-based equipment, IEEE International Workshop on Metrology for Industry 4.0 & IoT (MetroInd4.0&IoT), Rome, Italy, 7-9 June 2021, pp. 27-31. DOI: 10.1109/MetroInd4.0IoT51437.2021.9488555
[13] E. Quatrini, F. Costantino, G. Di Gravio, R. Patriarca, Condition-based maintenance - an extensive literature review, Machines, (2020), 8, 31, pp. 1-28. DOI: 10.3390/machines8020031
[14] A. K. S. Jardine, D. Lin, D. Banjevic, A review on machinery diagnostics and prognostics implementing condition-based maintenance, Mechanical Systems and Signal Processing, vol. 20, (2006), pp. 1483-1510. DOI: 10.1016/j.ymssp.2005.09.012
[15] Calpeda website. Online [accessed 03 November 2021]: https://pump-selector.calpeda.com/pump/23
[16] F. Fornarelli, A. Lippolis, P. Oresta, A. Posa, Computational investigation of a pressure compensated vane pump, Energy Procedia, volume 148, 73rd Conference of the Italian Thermal Machines Engineering Association, Pisa, Italy, 12 September 2018, pp. 194-201. DOI: 10.1016/j.egypro.2018.08.068

Dynamic parameter identification method for robotic arms with static friction modelling

ACTA IMEKO, ISSN: 2221-870X, September 2021, Volume 10, Number 3, 44-50

Dániel Szabó¹, Emese Gincsainé Szádeczky-Kardoss¹
¹ Department of Control Engineering and Information Technology, Budapest University of Technology and Economics, Budapest, Hungary

Section: Research paper
Keywords: identification; dynamics; manipulator
Citation: Dániel Szabó, Emese Gincsainé Szádeczky-Kardoss, Dynamic parameter identification method for robotic arms with static friction modelling, Acta IMEKO, vol. 10, no. 3, article 8, September 2021, identifier: IMEKO-ACTA-10 (2021)-03-08
Section Editor: Bálint Kiss, Budapest University of Technology and Economics, Hungary
Received January 15, 2021; in final form September 6, 2021; published September 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Dániel Szabó, e-mail: szabo.daniel@iit.bme.hu

1. Introduction

Automation is playing an increasingly important role in industry, where manipulators are used to solve various types of problems.
Precise control algorithms are required to meet the need for speed and accuracy, and an accurate dynamic model is needed to use these control algorithms, such as the computed torque method [1]. Although the robot manufacturer could provide the necessary values, they are usually either inaccurate or unavailable; thus, the dynamics of the manipulator have to be determined through an identification process. An independently identifiable parameter vector and a regression matrix are required to create an identifiable model; the barycentric parameters [2] or the modified recursive Newton-Euler formula [3] can be used to solve this problem.

In general, the friction effect of the joints is not negligible; hence the dynamic model of the robot, determined according to the structure of the manipulator, has to be extended with a friction model. There are plenty of existing friction models that can be used in the identification of the dynamic model, but there has to be a trade-off between model accuracy and computational complexity. In the simplest case, only the Coulomb and viscous friction effects need to be considered in the dynamic model [4]. Although this can be sufficient in some cases, the stiction effect in the joint remains unmodelled, and the nonlinearity has to be compensated for by the controller. A dynamic friction model can be applied if a high level of precision is needed, but in this case the model cannot be converted into a linear-in-parameters form [5]; this method can be used only if the dynamic model is known and only friction compensation is needed. The discontinuity of the sign function can be eliminated by using the arctangent function to approximate the step where the velocity changes its sign; the continuity of this function is necessary when using the identified model in the controller, to obtain a smooth control signal when the velocity reaches zero. In [6], both static and dynamic friction identification were presented, but only for cases where the other dynamic parameters were known. It is possible to convert the static friction model into a linear-in-parameters form.

The optimisation of the excitation trajectory is important for generating a well-conditioned regression matrix. Finite Fourier series [7] and polynomial functions [8] are used frequently as excitation trajectories.

Abstract - This paper presents an identification method for robotic manipulators. It demonstrates how a dynamic model can be constructed with the help of the modified Newton-Euler formula. To model the friction of the joints, static friction modelling is used, in which the friction behaviour depends only on the actual velocity of the given joint. With these techniques, the model can be converted into a linear-in-parameters form, which can make the identification process easier. Two estimators are introduced to solve the identification problem, the least-squares and the weighted least-squares estimators, and the determination of the independently identifiable parameter vector making the regression matrix maximal column rank is presented. The Frobenius norm is used as the condition of the regression matrix to optimise the excitation trajectories, and the form of the trajectories has been selected from the finite Fourier series. The method is tested in a simulated environment on a three-degrees-of-freedom manipulator.
The proper estimator for a given problem can be selected by investigating the rate of the noise corrupting the measurements [7]. In this study, the least-squares (LS) and weighted-LS (WLS) estimation methods were assessed. As was shown in [9], maximum likelihood estimation gives better results in simulations when the joint value measurements are corrupted, but usually, in real-life scenarios, this noise is negligible compared with the noise of the torque measurements. In [10], the dynamics of the payload were determined by an identification process measuring actuator currents; in that method, the dynamic parameters of the manipulator were known.

The present study compares two methods to estimate the parameters of a robot dynamic model with static friction in each joint. It demonstrates how the reduced row echelon form of the regression matrix can be used to find the independently identifiable variables. The results in [9] are extended in this paper by presenting a method incorporating the friction model into the identification process.

The paper is organised as follows. In Section 2, the form of the dynamic model is introduced, and this section describes how the modified Newton-Euler formula and the static friction model are used to obtain its linear-in-parameters form. Section 3 describes the configuration of the measurements, and the method to determine the independently identifiable parameters is explained in Section 4.1. The LS and the WLS estimators are then described. After that, Section 5 introduces the optimisation of the trajectory parameters and the criterion for this optimisation method. The results of the simulation are in Section 6, and finally, the conclusions are set out in Section 7.

2. Model evaluation

2.1. Dynamics of the manipulators

A manipulator can be modelled as an open kinematic chain composed of links connected by joints. The Denavit-Hartenberg parameters can be used to define the kinematics of a given manipulator [11], and the homogeneous transformation matrices can be defined to calculate the actual position and orientation of the end effector.

The dynamic model of a manipulator is given by the nonlinear multiple-input and multiple-output function that clarifies the relationship between the joint positions, their first- and second-order derivatives and the joint torques/forces. The following equation shows this relationship:

$\boldsymbol{\tau} = \mathbf{H}(\mathbf{q})\, \ddot{\mathbf{q}} + \mathbf{h}(\mathbf{q}, \dot{\mathbf{q}}) + \mathbf{c}(\mathbf{q})$ , (1)

where $\mathbf{q}$ denotes the $n$-by-1 vector of the joint variables (for an $n$ degree-of-freedom manipulator), $\boldsymbol{\tau}$ is the vector of joint torques (or forces), $\mathbf{H}$ is the $n$-by-$n$ symmetric, positive-definite joint-space inertia matrix, the Coriolis and centrifugal forces are modelled by the vector $\mathbf{h}$, and the gravitational effects are given by the vector $\mathbf{c}$.

Although the model in (1) provides a good physical interpretation of the system, it cannot be used effectively during the identification process. To solve this problem, the linear-in-parameters form of the model can be used:

$\boldsymbol{\tau} = \Phi(\mathbf{q}, \dot{\mathbf{q}}, \ddot{\mathbf{q}})\, \Theta$ , (2)

where $\Phi$ denotes the regression matrix and $\Theta$ is the parameter vector.
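Returning for a moment to the kinematic description mentioned above, a minimal MATLAB sketch of the homogeneous transformation associated with one row of a Denavit-Hartenberg table is given below; the standard DH convention is assumed here, and the commented usage example anticipates the parameters of the three-degrees-of-freedom arm given later in Table 1 of Section 6.

% Homogeneous transformation of one joint in the standard
% Denavit-Hartenberg convention: rotation theta about z, translation d
% along z, translation a along x, rotation alpha about x.
% (To be saved as dh_transform.m.)
function A = dh_transform(theta, d, a, alpha)
    A = [cos(theta) -sin(theta)*cos(alpha)  sin(theta)*sin(alpha) a*cos(theta);
         sin(theta)  cos(theta)*cos(alpha) -cos(theta)*sin(alpha) a*sin(theta);
         0           sin(alpha)             cos(alpha)            d;
         0           0                      0                     1];
end
% Usage: the end-effector pose is the product of the joint transforms, e.g.
% T = dh_transform(q1, 0.3, 0, pi/2) * ...
%     dh_transform(q2 + pi/2, 0, 0.5, 0) * dh_transform(q3, 0, 0.5, 0);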
The values of the inertia matrices, the masses of the links ($m_j$) and the first moments of the links are contained in $\Theta$:

$\Theta = \left[ \Theta_1^T\; \Theta_2^T\; \dots\; \Theta_n^T \right]^T$,
$\Theta_j = \left[ I_{xx}^j\; I_{yy}^j\; I_{zz}^j\; I_{xy}^j\; I_{xz}^j\; I_{yz}^j\; m_j\; m_j s_x^j\; m_j s_y^j\; m_j s_z^j \right]^T$ , (3)

where the $I_{kl}^j$ parameters are the entries of the symmetric inertia matrix and $\mathbf{s}_j = \left[ s_x^j\; s_y^j\; s_z^j \right]^T$ denotes the position of the centre of mass of the $j$-th link measured in the frame belonging to the $j$-th joint.

To solve the identification problem, it is necessary to calculate the regression matrix. This can be achieved, for example, with the Lagrangian formula, which provides a good physical view of the system but is computationally expensive; a better solution is the Newton-Euler formula, which has a computational complexity of $\mathcal{O}(n)$ [3]. First, using the Newton-Euler formula, the kinematics of each joint frame are calculated and transformed into the frame of the end effector. This is done with a forward recursion, and the velocities, angular velocities, accelerations and angular accelerations are determined for each frame. Then, the forces and moments are transformed with a backward recursion from the end-effector frame to the base frame. The applied torque/force of the given joint is determined in a given step of the backward recursion by

$\tau_j = \begin{cases} \mathbf{n}_j^T (\mathbf{A}_j^T \mathbf{z}_0) & \text{rotational} \\ \mathbf{f}_j^T (\mathbf{A}_j^T \mathbf{z}_0) & \text{translational} \end{cases}$ , (4)

where $\mathbf{n}_j$ and $\mathbf{f}_j$ are the moment and the force exerted on link $j$ by link $j-1$, $\mathbf{A}_j$ is the orthonormal rotation matrix that describes the rotation between the $j$-th and the $(j-1)$-th frame, and $\mathbf{z}_0$ is the base vector of the axis of rotation (or translation), selected by convention as $[0\; 0\; 1]^T$.

This results in terms that are linear in the inertias and the masses, while the centre-of-mass vector is represented in either linear or quadratic terms. The quadratic terms can be eliminated by transforming the inertia tensor. Let $\mathcal{C}_i$ denote the coordinate system of the $i$-th link and $\mathcal{C}_i^*$ the frame of the centre of mass of the $i$-th link, and let $\mathbf{A}_{i,i^*}$ be the orthonormal rotation matrix between $\mathcal{C}_i$ and $\mathcal{C}_i^*$. If $\mathbf{I}_i$ is the inertia tensor around the centre of mass, then the inertia tensor $\mathbf{I}'_i$ in $\mathcal{C}_i$ is

$\mathbf{I}'_i = \mathbf{A}_{i,i^*}\, \mathbf{I}_i\, \mathbf{A}_{i,i^*}^T + m_i \left( \mathbf{s}_i^T \mathbf{s}_i\, \mathbf{I}_{(3 \times 3)} - \mathbf{s}_i \mathbf{s}_i^T \right)$ , (5)

where $\mathbf{I}_{(3 \times 3)}$ is the 3-by-3 identity matrix.

2.2. Friction modelling

The model in (1) must be extended with the torque vector $\boldsymbol{\tau}_f$ to model the effect of friction:

$\boldsymbol{\tau} - \boldsymbol{\tau}_f = \mathbf{H}(\mathbf{q})\, \ddot{\mathbf{q}} + \mathbf{h}(\mathbf{q}, \dot{\mathbf{q}}) + \mathbf{c}(\mathbf{q})$ . (6)

There are several methods to model the vector of friction torques. In this study, static friction modelling was used, including the stiction, Coulomb and viscous friction effects, and the Stribeck effect, represented using the arctangent function [5]:

$\boldsymbol{\tau}_f = \boldsymbol{\tau}_s\, S_0(\mathbf{v}) + \boldsymbol{\tau}_{sc}\, \dfrac{2}{\pi} \arctan(\mathbf{v} \delta) + \boldsymbol{\tau}_v\, \mathbf{v}$ , (7)

where $\mathbf{v}$ denotes the vector of velocities (i.e., the first-order derivatives of the joint variables), $\boldsymbol{\tau}_s$ represents the stiction, $\boldsymbol{\tau}_{sc}$ is the difference between the Coulomb friction and the stiction, $\delta$ determines the shape of the Stribeck effect, $\boldsymbol{\tau}_v$ denotes the coefficient of the viscous friction and the function $S_0$ is used as an approximation of the $\mathrm{sign}(\mathbf{v})$ function:

$S_0(\mathbf{v}) = \dfrac{2}{\pi} \arctan(\mathbf{v} K_v)$ , (8)

where $K_v$ defines the shape of the function. In this case, the friction model can be transformed into a linear-in-parameters form, where the regression matrix and the parameters belonging to one joint are

$\Phi_{\mathrm{fric}}^j = \left[ \dfrac{2}{\pi} \arctan(v K_v)\;\; \dfrac{2}{\pi} \arctan(v \delta)\;\; v \right]$,
$\Theta_{\mathrm{fric}}^j = \left[ \tau_s^j\; \tau_{sc}^j\; \tau_v^j \right]^T$ . (9)
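The friction model (7)-(8) for a single joint is straightforward to evaluate. In the sketch below, τ_s, τ_sc and τ_v take the joint-1 values of Table 2 in Section 6, while δ and K_v are hypothetical shape parameters chosen only for illustration.

% Static friction torque of one joint, eqs. (7)-(8): stiction, the
% Coulomb-stiction difference shaped by the Stribeck term, and viscous
% friction, with sign(v) smoothed by the arctangent approximation S0.
tau_s  =  2.0;    % stiction torque (N m), joint 1 of Table 2
tau_sc = -0.3;    % Coulomb minus stiction (N m), joint 1 of Table 2
tau_v  =  0.5;    % viscous coefficient (N m s/rad), joint 1 of Table 2
delta  = 50;      % Stribeck shape parameter (hypothetical)
K_v    = 1000;    % sign-approximation shape parameter (hypothetical)
S0    = @(v) (2/pi) * atan(K_v * v);                                % eq. (8)
tau_f = @(v) tau_s*S0(v) + tau_sc*(2/pi)*atan(delta*v) + tau_v*v;   % eq. (7)
v = linspace(-2, 2, 401);                  % joint velocity (rad/s)
plot(v, tau_f(v)), xlabel('v (rad/s)'), ylabel('\tau_f (N m)')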
3. Measurement setup

Simulations were used to perform the measurements in a MATLAB Simulink environment, but the proposed method could also be used with real measurements. To model the dynamics and the friction effect, the Simscape toolbox was applied. At this stage, the control of the manipulator was performed by computing the torques automatically in the simulation. The joint torques were corrupted with independent zero-mean Gaussian noise, with a different variance for each joint, and the measurements were repeated for the different variance settings. The joint angles were also corrupted with noise, but this was negligible compared with the noise of the torque measurements [9].

4. Identification process

4.1. Determination of the independent variables

According to (2) and (9), the dynamics of the manipulator can be written in a linear-in-parameters form, but to use it in the identification process some changes have to be applied because of the parameter vector described in (3): with these parameter vectors, the column rank of the regression matrix $\Phi$ is not maximal. This means that there exist parameters that are unidentifiable or not independently identifiable. Three parameter types can be defined [12]:
1. If the $i$-th column of $\Phi$ is a null vector, then the $i$-th parameter is unidentifiable and the dynamics are unaffected; in this case, this parameter and the $i$-th column of $\Phi$ can be removed.
2. If the $i$-th column of $\Phi$ is not null and can be expressed as a linear combination of the other columns, then the $i$-th parameter can be identified only in a linear combination with other parameters.
3. If the $i$-th column of $\Phi$ is not null and cannot be expressed as a linear combination of the other columns, then the $i$-th parameter is independently identifiable.

First, all the columns of $\Phi$ containing only zeros should be eliminated; the number of independent parameters is then equal to the rank of $\Phi$. These parameters can be expressed as linear combinations of the original parameters. To obtain the coefficients of the linear combinations, the reduced row echelon form of $\Phi$ is required:

$[\Phi_{rref},\, \mathbf{j_L}] = \mathrm{rref}(\Phi)$ , (10)

where $\mathbf{j_L}$ denotes the indexes of the columns with the leading 1 values in the reduced row echelon form. These indexes give a basis in the columns of $\Phi$: indexing the regression matrix with $\mathbf{j_L}$ selects the basis columns, and $\mathbf{j_K}$ is the vector of the indexes not contained in $\mathbf{j_L}$. Let $\mathbf{L}$ and $\mathbf{K}$ be the matrices with the columns of $\Phi$ indexed by $\mathbf{j_L}$ and $\mathbf{j_K}$, respectively. In this way, a linear combination can be defined to express each column of $\mathbf{K}$ with the columns of $\mathbf{L}$, and the coefficients are given by the columns of $\Phi_{rref}$ indexed by $\mathbf{j_K}$:

$\mathbf{K}(:, i) = \Phi_{rref}(1, \mathbf{j_K}(i))\, \mathbf{L}(:, 1) + \dots + \Phi_{rref}(m, \mathbf{j_K}(i))\, \mathbf{L}(:, m)$ , (11)

where $m$ is the rank of $\Phi$, i.e. the number of independent parameters. The matrix $\mathbf{K}$ can therefore be expressed as $\mathbf{K} = \mathbf{L}\mathbf{B}$, with $\mathbf{B}$ defined as

$\mathbf{B} = \Phi_{rref}(1\!:\!m,\, \mathbf{j_K})$ . (12)

Two vectors of parameters are defined by indexing with $\mathbf{j_L}$ and $\mathbf{j_K}$:

$\Theta_L = \Theta(\mathbf{j_L})$ , $\Theta_K = \Theta(\mathbf{j_K})$ . (13)

Thus, a new form of the model can be introduced:

$\boldsymbol{\tau} = \Phi \Theta = [\mathbf{L}\;\; \mathbf{K}] \begin{bmatrix} \Theta_L \\ \Theta_K \end{bmatrix} = \mathbf{L}\, [\mathbf{I}_{(m \times m)}\;\; \mathbf{B}] \begin{bmatrix} \Theta_L \\ \Theta_K \end{bmatrix} = \mathbf{L}\, \Theta^*$ , (14)

where $\Theta^* = \Theta_L + \mathbf{B}\, \Theta_K$. Consequently, $\Theta_L(i)$ is an independently identifiable parameter if, and only if, the $i$-th row of $\mathbf{B}$ contains only zeros.
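Since (10) is exactly the two-output form of MATLAB's rref, the reduction (10)-(14) can be sketched as follows; the small matrix below is a hypothetical stand-in for Φ with its zero columns already removed.

% Base-parameter reduction, eqs. (10)-(14), on a synthetic example:
% column 3 of Phi is a linear combination of columns 1 and 2.
rng(1);
Phi   = randn(6, 2);
Phi   = [Phi, Phi(:,1) + 2*Phi(:,2), randn(6,1)];
Theta = [1; 2; 3; 4];                  % hypothetical parameter vector

[Phi_rref, jL] = rref(Phi);            % eq. (10): jL = pivot (basis) columns
jK = setdiff(1:size(Phi,2), jL);       % columns dependent on the basis
L  = Phi(:, jL);                       % independent columns of Phi
m  = numel(jL);                        % rank of Phi
B  = Phi_rref(1:m, jK);                % eq. (12)
Theta_star = Theta(jL) + B*Theta(jK);  % eqs. (13)-(14): base parameters
% Theta(jL(i)) is independently identifiable iff row i of B is all zeros.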
4.2. LS estimation

If the torques of the actuators are corrupted with zero-mean Gaussian noise ($\mathbf{n}_\tau$) with the same standard deviation, the following measurement model can be defined:

$\boldsymbol{\tau}_m(k) = \boldsymbol{\tau}(k) + \mathbf{n}_\tau(k)$,
$e_{\tau i}(\Theta^*, k) = \tau_{mi}(k) - \tau_i(\Theta^*, k) = \tau_{mi}(k) - \mathbf{L}(k,:)\, \Theta^*$ , (15)

where $\boldsymbol{\tau}_m$ is the measured value of the torque vector, $e_{\tau i}$ denotes the error of the given joint and $k$ denotes the index of the measurement. With this measurement model, the standard LS estimator can be used. The minimisation criterion of the estimator is

$V_{LS} = \sum_{k=1}^{N} \sum_{i=1}^{n} \left( e_{\tau i}(\Theta^*, k) \right)^2$ , (16)

where $N$ is the number of measurements. The solution of the problem can be given in closed form with the help of the Moore-Penrose pseudoinverse of $\mathbf{L}$:

$\hat{\Theta}_{LS} = \mathbf{L}^{+}\, \boldsymbol{\tau}_m = (\mathbf{L}^T \mathbf{L})^{-1} \mathbf{L}^T \boldsymbol{\tau}_m$ . (17)

4.3. WLS estimation

If noises with different standard deviations ($\sigma_{\tau i}$) corrupt the measurements, the WLS estimator has to be applied. Here, the measurement errors in the optimisation criterion are weighted with the reciprocal of the noise variance of the given joint:

$V_{WLS} = \sum_{k=1}^{N} \sum_{i=1}^{n} \dfrac{\left( e_{\tau i}(\Theta^*, k) \right)^2}{\sigma_{\tau i}^2}$ . (18)

Consequently, the problem can be solved as

$\hat{\Theta}_{WLS} = (\mathbf{L}^T \boldsymbol{\Sigma}^{-1} \mathbf{L})^{-1}\, \mathbf{L}^T \boldsymbol{\Sigma}^{-1} \boldsymbol{\tau}_m$ , (19)

where $\boldsymbol{\Sigma}$ is the diagonal covariance matrix of the noise.

5. Trajectory optimisation

The quality of the identification depends on the condition of the $\mathbf{L}$ matrix, which can be optimised by using proper excitation trajectories. There are several criteria to perform this.

5.1. Optimisation criteria

In several papers, the optimisation criterion used to find the optimal excitation trajectories is the condition number of $\mathbf{L}^T\mathbf{L}$ [13], while in [14] the maximisation of the minimum singular value of $\mathbf{L}^T\mathbf{L}$ is used as the optimisation criterion. However, faster convergence can be achieved if the Frobenius norm is used as the criterion [8]:

$\mathrm{cond}_F(\mathbf{A}) = \lVert \mathbf{A} \rVert_F\, \lVert \mathbf{A}^{-1} \rVert_F$ , with $\lVert \mathbf{A} \rVert_F = \left( \sum_{i=1}^{p} \sum_{j=1}^{p} \mathbf{A}_{ij}^2 \right)^{1/2}$ , (20)

where $\mathbf{A}$ is a $p$-by-$p$ nonsingular matrix. Hence, the optimisation problem can be written in the following form:

$\min_{\boldsymbol{\delta}}\; \mathrm{cond}_F(\mathbf{L}^T \mathbf{L})$ subject to
$\mathbf{q}_{\min} < \mathbf{q} < \mathbf{q}_{\max}$ , $\dot{\mathbf{q}}_{\min} < \dot{\mathbf{q}} < \dot{\mathbf{q}}_{\max}$ , $\ddot{\mathbf{q}}_{\min} < \ddot{\mathbf{q}} < \ddot{\mathbf{q}}_{\max}$ ,
$\mathbf{q}(0) = \mathbf{q}_0$ , $\mathbf{q}(t_f) = \mathbf{q}_f$ , $\dot{\mathbf{q}}(0) = 0$ , $\dot{\mathbf{q}}(t_f) = 0$ , $\ddot{\mathbf{q}}(0) = 0$ , $\ddot{\mathbf{q}}(t_f) = 0$ , (21)

where the vector $\boldsymbol{\delta}$ contains the parameters of the trajectory, and the vectors $\mathbf{q}_{\min}$, $\mathbf{q}_{\max}$, etc. are the minimal and maximal joint values, velocities and accelerations, respectively. The equality constraints allow the initial ($\mathbf{q}_0$) and final ($\mathbf{q}_f$) conditions to be taken into consideration; the initial and final velocities and accelerations are selected as zero. The optimisation problem in (21) was solved in MATLAB using the fmincon function with the interior-point method as the solving algorithm.

5.2. Trajectory parametrisation

The applied trajectories can be expressed as finite Fourier series. The parametrisation of the joint variables and of their first- and second-order derivatives is defined as follows [15]:

$q_i(t) = \sum_{l=1}^{M} \dfrac{a_l^i}{\omega_f\, l} \sin(\omega_f\, l\, t) - \dfrac{b_l^i}{\omega_f\, l} \cos(\omega_f\, l\, t)$,
$\dot{q}_i(t) = \sum_{l=1}^{M} a_l^i \cos(\omega_f\, l\, t) + b_l^i \sin(\omega_f\, l\, t)$,
$\ddot{q}_i(t) = \sum_{l=1}^{M} -a_l^i\, \omega_f\, l \sin(\omega_f\, l\, t) + b_l^i\, \omega_f\, l \cos(\omega_f\, l\, t)$ , (22)

where $\omega_f$ is the fundamental frequency of the Fourier series. The trajectory parameters can therefore be collected in the set $\left[ a_1^1 \dots a_M^n\;\; b_1^1 \dots b_M^n \right]$, where $n$ is the number of joints of the manipulator and $M$ denotes the number of harmonics in the Fourier series.
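The pieces of Sections 4 and 5 fit together in a few lines of MATLAB. The sketch below generates one Fourier-series joint trajectory according to (22) with hypothetical coefficients, and applies the estimators (17) and (19) to a synthetic regressor standing in for L; all numerical values are assumptions of the sketch.

% Fourier-series excitation for one joint, eq. (22), with hypothetical
% coefficients a_l, b_l and fundamental frequency w_f.
wf = 2*pi*0.1; M = 5;
a = 0.1*randn(M, 1); b = 0.1*randn(M, 1);
t = linspace(0, 10, 500);
q = zeros(size(t)); qd = q; qdd = q;
for l = 1:M
    q   = q   + (a(l)/(wf*l))*sin(wf*l*t) - (b(l)/(wf*l))*cos(wf*l*t);
    qd  = qd  + a(l)*cos(wf*l*t) + b(l)*sin(wf*l*t);
    qdd = qdd - a(l)*wf*l*sin(wf*l*t) + b(l)*wf*l*cos(wf*l*t);
end

% LS and WLS estimates, eqs. (17) and (19), on synthetic data: L is the
% stacked reduced regressor, tau_m the stacked measured torques, and
% sigma2 the per-row noise variances (per-joint values repeated).
N = 200; m = 4;
L = randn(N, m); theta_true = [1; -0.5; 2; 0.3];
sigma2 = repmat([0.25; 0.09; 0.49; 0.25], N/4, 1);  % hypothetical variances
tau_m  = L*theta_true + sqrt(sigma2).*randn(N, 1);
theta_ls  = L \ tau_m;                              % eq. (17), via QR
W = spdiags(1./sigma2, 0, N, N);                    % Sigma^-1 as a weight
theta_wls = (L'*W*L) \ (L'*W*tau_m);                % eq. (19)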
6. Experimental results

The implementation of the identification process was performed in MATLAB, while the dynamics and the friction modelling were evaluated in MATLAB Simulink (Section 3). The structure of the three-degrees-of-freedom manipulator used in this study is depicted in Figure 1, and its Denavit-Hartenberg parameters are given in Table 1. The MATLAB Symbolic Toolbox was used to determine the dynamics of the manipulator.

The simulated trajectories are shown in Figure 2. These trajectories were designed with the method described in Section 5. The results of the identification and the symbolic expressions of the final $\Theta^*$ parameter vector are shown in Table 2; it can be seen that 24 identifiable variables were determined.

Figure 1. Model of the three-degrees-of-freedom manipulator, defined by the Denavit-Hartenberg parameters in Table 1.

Table 1. The Denavit-Hartenberg parameters of the modelled three-degrees-of-freedom manipulator.
i | $\theta_i$ in rad | $d_i$ in m | $a_i$ in m | $\alpha_i$ in rad
1 | $q_1$ | 0.3 | 0 | $\pi/2$
2 | $q_2 + \pi/2$ | 0 | 0.5 | 0
3 | $q_3$ | 0 | 0.5 | 0

Figure 2. Measurements with optimised excitation trajectories, $\sigma_\tau = [0.5, 0.3, 0.7]$ N·m.

Table 2. Results of the estimations ($\mu$: mean, $\sigma^2$: variance of the estimators).
Symbolic expression | Exact value | LS $\mu$ | LS $\sigma^2$ | WLS $\mu$ | WLS $\sigma^2$
$I_{xx}^{(2)} - I_{yy}^{(2)} - m^{(2)} s_x^{(2)\,2}$ | 0.059 | 0.059 | 8.27e-04 | 0.059 | 6.25e-04
$I_{xx}^{(3)} - I_{yy}^{(3)} - m^{(3)} s_x^{(3)\,2}$ | 0.059 | 0.057 | 8.09e-04 | 0.058 | 6.18e-04
$I_{xy}^{(2)}$ | 0.000 | 0.002 | 3.23e-04 | 0.001 | 2.27e-04
$I_{xy}^{(3)}$ | 0.000 | -0.000 | 2.04e-04 | -0.000 | 1.47e-04
$I_{yy}^{(1)} + I_{yy}^{(2)} + I_{yy}^{(3)} + m^{(2)} s_x^{(2)\,2} + m^{(3)} s_x^{(3)\,2}$ | -0.117 | -0.115 | 5.25e-04 | -0.115 | 4.30e-04
$I_{xz}^{(2)} - m^{(2)} s_z^{(2)\,2} - m^{(3)} s_z^{(3)\,2}$ | 0.000 | 0.001 | 2.06e-04 | 0.000 | 1.50e-04
$I_{xz}^{(3)} - m^{(3)} s_z^{(3)\,2}$ | 0.000 | -0.001 | 1.60e-04 | -0.000 | 1.14e-04
$I_{yz}^{(2)}$ | 0.000 | -0.001 | 1.32e-04 | -0.000 | 1.06e-04
$I_{yz}^{(3)}$ | 0.000 | 0.001 | 7.56e-05 | 0.001 | 5.49e-05
$I_{zz}^{(2)} + m^{(2)} s_x^{(2)\,2}$ | -0.059 | -0.060 | 3.68e-04 | -0.060 | 2.68e-04
$I_{zz}^{(3)} + m^{(3)} s_x^{(3)\,2}$ | -0.059 | -0.057 | 3.30e-04 | -0.058 | 2.21e-04
$m^{(2)} + 2 m^{(2)} s_x^{(2)} - 2 m^{(3)} s_x^{(3)}$ | 1.414 | 1.415 | 1.50e-04 | 1.414 | 6.01e-05
$m^{(3)} + 2 m^{(3)} s_x^{(3)}$ | 0.707 | 0.706 | 3.82e-05 | 0.707 | 1.42e-05
$\tau_s^{(1)}$ | 2.000 | 1.998 | 7.97e-04 | 1.998 | 7.80e-04
$\tau_s^{(2)}$ | 1.000 | 0.999 | 1.42e-04 | 0.999 | 1.41e-04
$\tau_s^{(3)}$ | 2.000 | 1.998 | 2.36e-03 | 1.998 | 2.34e-03
$\tau_v^{(1)}$ | 0.500 | 0.505 | 8.26e-03 | 0.506 | 7.89e-03
$\tau_v^{(2)}$ | 0.277 | 0.243 | 9.26e-04 | 0.244 | 8.08e-04
$\tau_v^{(3)}$ | 0.030 | 0.034 | 1.60e-02 | 0.035 | 1.54e-02
$\tau_{sc}^{(1)}$ | -0.300 | -0.302 | 1.45e-02 | -0.303 | 1.39e-02
$\tau_{sc}^{(2)}$ | -0.200 | -0.176 | 1.67e-03 | -0.177 | 1.65e-03
$\tau_{sc}^{(3)}$ | -0.300 | -0.306 | 3.56e-02 | -0.307 | 3.45e-02
$m^{(2)} s_y^{(2)}$ | 0.000 | 0.000 | 3.82e-06 | 0.000 | 2.24e-06
$m^{(3)} s_y^{(3)}$ | 0.000 | 0.000 | 4.67e-06 | 0.000 | 2.37e-06

The dynamics have some independently identifiable parameters ($I_{xy}^{(2)}$, $I_{xy}^{(3)}$, $I_{yz}^{(2)}$, $I_{yz}^{(3)}$, $m^{(2)} s_y^{(2)}$, $m^{(3)} s_y^{(3)}$) and some unidentifiable variables (e.g. the parameters of the inertia matrix of the first segment), which do not influence the robot dynamics. All the friction parameters are independently identifiable, and the other parameters can be identified only in linear combinations involving several parameters. The results show that both the LS and the WLS estimators performed well. With this method, the friction parameters were also identified with only one measurement configuration, which simplifies the measurement process.
Figure 3 depicts the efficiency of the WLS estimator in the case of non-optimised trajectories, using a finite Fourier series as excitation. The difference between the approximated and the nominal value of the torque vector is negligible.

Figure 3. Comparison between the measured torques ($\boldsymbol{\tau}_m$), the value of the torque vector without noise ($\boldsymbol{\tau}_0$) and the estimated torque vector calculated from the identified model ($\hat{\boldsymbol{\tau}}$) using the WLS estimator. The measurement noise of $\boldsymbol{\tau}_m$ is the same as in Figure 2 ($\sigma_\tau = [0.5, 0.3, 0.7]$ N·m). The bottom graph shows the absolute value of the difference between $\boldsymbol{\tau}_0$ and $\hat{\boldsymbol{\tau}}$.

7. Conclusion

This paper presented two methods to solve the identification problem of the dynamic model of robotic manipulators. By modelling only the static friction behaviour, the whole dynamic model can be expressed in linear-in-parameters form. In this form, both the LS and the WLS estimators can be used effectively to approximate the unknown parameters of the robotic arm. As can be seen in Table 2, all the friction parameters can be identified independently; hence the method can also be used when the dynamic parameters are known but only friction compensation is needed. The advantage of this method is that only one measurement configuration is needed to obtain all the desired parameters, and it is not required to move only one joint at a time. These results can be used in advanced control algorithms.

8. Acknowledgement

The research reported in this paper, carried out at the Budapest University of Technology and Economics, was supported by the 'TKP2020, Institutional Excellence Programme' of the National Research Development and Innovation Office in the field of artificial intelligence (BME IEMI-SC TKP2020).

References
[1] R. Middleton, G. Goodwin, Adaptive computed torque control for rigid link manipulators, Proc. of the 25th IEEE Conference on Decision and Control, Athens, Greece, 10-12 December 1986, pp. 68-73. DOI: 10.1016/0167-6911(88)90033-3
[2] B. Raucent, G. Campion, G. Bastin, J.-C. Samin, P. Y. Willems, Identification of the barycentric parameters of robot manipulators from external measurements, Automatica 28 (1992), pp. 1011-1016. DOI: 10.1016/0005-1098(92)90155-9
[3] P. Khosla, T. Kanade, An algorithm to estimate manipulator dynamics parameters, Int. J. Robotics and Automation, 2(3) (1987), pp. 127-135.
[4] M. M. Olsen, H. G. Petersen, A new method for estimating parameters of a dynamic robot model, IEEE Transactions on Robotics and Automation 17 (2001), pp. 95-100. DOI: 10.1109/70.917088
[5] M. R. Kermani, R. V. Patel, M. Moallem, Friction identification and compensation in robotic manipulators, IEEE Transactions on Instrumentation and Measurement 56 (2007), pp. 2346-2353. DOI: 10.1109/TIM.2007.907957
[6] M. Indri, S. Trapani, A framework for static and dynamic friction identification for industrial manipulators, IEEE/ASME Transactions on Mechatronics (2020), vol. 25, no. 3, pp. 1589-1599. DOI: 10.1109/TMECH.2020.2980435
[7] J. Swevers, C. Ganseman, D. B. Tukel, J. De Schutter, H. Van Brussel, Optimal robot excitation and identification, IEEE Transactions on Robotics and Automation 13 (1997), pp. 730-740. DOI: 10.1109/70.631234
[8] M. Gautier, W. Khalil, Exciting trajectories for the identification of base inertial parameters of robots, International Journal of Robotics Research 11 (1992), pp. 362-375. DOI: 10.1109/CDC.1991.261353
[9] D. Szabó, E. G. Szádeczky-Kardoss, Parameter estimation process for the dynamic model of robotic manipulators, Proc. of the 23rd
International Symposium on Measurement and Control in Robotics (ISMCR), Budapest, Hungary, 15-17 October 2020, pp. 1-6.
[10] Y. Dong, T. Ren, K. Chen, D. Wu, An efficient robot payload identification method for industrial application, Industrial Robot: An International Journal (2018), vol. 45, no. 4, pp. 505-515. DOI: 10.1108/IR-03-2018-0037
[11] M. W. Spong, S. Hutchinson, M. Vidyasagar, Robot Modeling and Control, John Wiley & Sons, Inc., 2006.
[12] L. Zollo, E. Lopez, L. Spedaliere, N. Garcia Aracil, E. Guglielmelli, Identification of dynamic parameters for robots with elastic joints, Advances in Mechanical Engineering 7 (2015), p. 843186. DOI: 10.1155/2014/843186
[13] F. Pfeiffer, J. Holzl, Parameter identification for industrial robots, Proceedings of 1995 IEEE International Conference on Robotics and Automation, 1995, vol. 2, pp. 1468-1476. DOI: 10.1109/ROBOT.1995.525483
[14] J. Swevers, B. De Moor, H. Van Brussel, Stepped sine system identification, errors-in-variables and the quotient singular value decomposition, Mechanical Systems and Signal Processing 6 (1992), pp. 121-134. DOI: 10.1016/0888-3270(92)90060-V
[15] K.-J. Park, Fourier-based optimal excitation trajectories for the dynamic identification of robots, Robotica 24 (2006), p. 625. DOI: 10.1017/S0263574706002712

X-rays investigations for the characterization of two 17th century brass instruments from Nuremberg

ACTA IMEKO, ISSN: 2221-870X, September 2022, Volume 11, Number 3, 1-7

Michela Albano¹,²,³, Giacomo Fiocco¹,⁴, Daniela Comelli², Maurizio Licchelli⁵, Claudio Canevari³, Francesca Tasso⁶, Valentina Ricetti⁶, Pacifico Cofrancesco⁵, Marco Malagodi¹,³
¹ Arvedi Laboratory of Non-Invasive Diagnostics, CISRiC, University of Pavia, via Bell'Aspa 3, 26100 Cremona, Italy
² Department of Physics, Polytechnic of Milan, piazza Leonardo da Vinci 32, 20133 Milano, Italy
³ Department of Musicology and Cultural Heritage, Università degli Studi di Pavia, corso Garibaldi 178, 26100 Cremona, Italy
⁴ Department of Chemistry, University of Turin, via Pietro Giuria 5, 10125 Torino, Italy
⁵ Department of Chemistry, University of Pavia, via Torquato Taramelli 12, 27100 Pavia, Italy
⁶ Museum of Musical Instruments, Castello Sforzesco, Milano, Italy

Section: Research paper
Keywords: XRF spectroscopy; X-ray radiography; brass; musical instrument; Nuremberg
Citation: Michela Albano, Giacomo Fiocco, Daniela Comelli, Maurizio Licchelli, Claudio Canevari, Francesca Tasso, Valentina Ricetti, Pacifico Cofrancesco, Marco Malagodi, X-rays investigations for the characterization of two 17th century brass instruments from Nuremberg, Acta IMEKO, vol. 11, no.
3, article 19, September 2022, identifier: IMEKO-ACTA-11 (2022)-03-19
Section Editor: Francesco Lamonaca, University of Calabria, Italy
Received March 3, 2021; in final form August 25, 2022; published September 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Marco Malagodi, e-mail: marco.malagodi@unipv.it

Abstract - A recent finding at the Castello Sforzesco in Milan of two brass natural horns from the end of the 17th century, assigned to the Haas family from Nuremberg, brought to light new information about this class of objects. The instruments were heavily damaged, but their historical value was great. In this study, a multidisciplinary approach mainly based on non-invasive analytical techniques and including X-rays investigations (X-ray radiography, X-ray fluorescence and X-ray diffraction) was used. The present study was aimed at: i) pointing out the executive techniques for archaeometric purposes; ii) characterizing the morphological and the chemical features of the materials; and iii) identifying and mapping the damages of the structure and the alterations of the surface.

1. Introduction

In this work, two brass natural horns held in the storage of the Museum of Castello Sforzesco in Milan were investigated. One of the reasons for the extraordinary relevance of these objects is related to their makers: two of the most important members of the Haas family, among the most influential ones in Nuremberg (Germany) between the 17th and the 18th century. During this period the city was the capital of brass instrument making, and the families, organized into guilds, exported their instruments throughout Europe, dominating the field for several generations [1], [2]. Here the craftsmanship was refined so much as to become a reference for future and contemporary makers.

Brass, an alloy of copper (Cu) and zinc (Zn), has always been considered the most suitable material for the construction of brass-wind musical instruments. The selection of the chemical composition of such an alloy, as well as the manufacturing process used, are extremely important in determining the properties of the finished instrument. In particular, the amount of Zn can affect the physical and chemical properties of the alloy, such as malleability, durability and resistance to corrosion [3], as well as the mechanical properties and the timbre of the finished instrument [4]. And if these are all crucial aspects for the makers of a new instrument, they are also essential for the modern makers who attempt to reproduce early brass-wind musical instruments with the aim of recovering their original sound and style [5]-[8].

Due to the peculiarity of the historical and technological context, and to the scarce information on the chemical composition of brass alloys before the 17th century, interest has been growing around ancient Nuremberg brass since the last century. The reference composition of the alloy was defined by a study, performed through X-ray fluorescence spectroscopy, of 273 Nuremberg jetons produced in the period from 1475 to
1888: considering the Nuremberg trade laws, which permitted the use of local raw supplies only, the material used to forge the jetons must have been the same used by brass-wind instrument makers [9]. The features of Nuremberg brass include great purity, notwithstanding that such a rare and precious alloy was commonly obtained by recycling any scraps believed to contain copper or zinc. Further studies confirmed its high degree of purity and its great compositional difference from similar alloys coming from other geographical areas [4], [10].

The present study aims to contribute to the characterization of the materials used in the brass instruments' craftsmanship. Moreover, mainly non-invasive techniques have been used for documenting the executive and decorative details (e.g., diameters of tubes, depths of joints or jointing methods) and for evaluating the conservation state of instruments held in museum collections. This approach represents essential support for defining the best preservation and maintenance practices [11]-[14]. In particular, the diagnostic campaign was accomplished through photographic [15] and stereoscopic documentation of the surface details, X-ray radiography (RX) and X-ray fluorescence spectroscopy (XRF). Small amounts of powder were detached from selected areas and analyzed by X-ray diffraction (XRPD). Moreover, Fourier-transform infrared (FTIR) spectra were collected in reflection mode where the surface was suitable to permit the correct acquisition geometry [16]. Among these techniques, XRF generally affords the chemical characterization of the alloy, disclosing plenty of information and even suggesting clues about original parts, replacements and conservation issues [17]-[19]. The realization of two 3D models acquired by a laser scanner and the prototyping of an augmented-reality application for the museum exposition, detailed in a previous work [20], complemented the investigation by fulfilling the collaboration with the museum.

The main objectives of this investigation were: i) pointing out the executive techniques; ii) characterizing the morphological and the chemical features; and iii) identifying and mapping the structural damages and the alterations of the surface.

2. Materials and methods

The investigated instruments are shown in Figure 1. The oldest one (Figure 1a), cataloguing number inv.878, was made by Johann Wilhelm Haas (1649-1723), who represents the most important member of the family; as far as the model is concerned, it attests the early construction of this horn type in Nuremberg and probably dates back to the late 17th century. The second one (Figure 1b), cataloguing number inv.877, is datable to the 1720s and was made by Wolf Wilhelm Haas (1681-1760), son of Johann.

The photographic documentation under visible light was acquired with a Nikon D4 full-frame digital camera equipped with a 50 mm f/1.4 Nikkor objective, using a softbox LED lamp. The magnification of the surface details was performed with an Olympus stereomicroscope equipped with an Olympus HD DP73 camera; the images were recorded through the Stream Essentials software. The X-ray radiographic investigation was carried out by means of an X-ray generating Industrial Control Machines CP 120B unit (settings: 110 kV, 1 μA, 100 s exposure time) and a photosensitive radiographic plate (Dürr NDT GmbH & Co.) scanned with a CR35NDT Dürr NDT scanner.
Figure 1. Visible photographic images of a) inv.878 by Johann Wilhelm Haas (1649-1723) and b) inv.877 by Wolf Wilhelm Haas (1681-1760); X-ray radiographic images of the inv.878 c) and inv.877 d) instruments.

The XRF analysis was carried out with the Elio portable X-ray fluorescence spectrometer by XGLab (Milan, Italy), with an analytical spot diameter of 1.3 mm [21]. The X-ray source worked with a Rh anode. X-ray fluorescence spectra were collected on 2048 channels with the following parameters: 40 kV, 60 μA and 120 s. Data were processed by the Elio 1.6.0.29 software. With the aim of minimizing the errors relative to the net area, the dead time of the detector was also kept constant by varying the current intensity around the declared value.

A small number of samples were analyzed by XRPD to identify the crystalline phases. X-ray diffraction patterns were collected on powder samples taken from areas chosen by the conservators. X-ray powder diffraction analyses were performed with a D5005 instrument by Bruker (Germany). The measuring conditions were CuKα radiation, λ = 1.54060 Å, 2θ angle ranging from 3° to 80° with a scan rate of 0.012°/min. A Bruker silicon zero-background sample holder was used to allow measurements with a very small amount of sample. The diffraction patterns were analyzed with the Eva and Match software for the identification of the phases.

FTIR spectroscopic analyses were performed using the Alpha portable spectrometer (Bruker) equipped with the R-Alpha module. The working distance of about 15 mm, with a beam diameter of 5 mm, enabled non-contact examination. Spectra were collected between 7500 and 375 cm⁻¹ at a resolution of 4 cm⁻¹, with an acquisition time of 1 min. For the investigation of the absorption bands, due to the occurrence of transflection, the reflection infrared spectra are shown hereby in pseudo-absorbance. Data were processed using the Opus 7.2 software.

3. Results and discussion

The horn made by Wolf bears the inscription "Macht / Johann / Haas / Nurnberg" and the initials "IWH" above a leaping hare turning its head backwards, while the one by the father Johann bears the hare keeping its face forwards [22]. The bells and the tubes were detached, some parts were lacking, and the surface suffered from different kinds of alteration. In both instruments, numerous deformations of the lamina, macroscopic cracks and areas of soldering and repair were highlighted, and a magnification of some of their visible traces on the external surface was documented with the stereomicroscope. The study of the structural features gave access to the manufacturing processes and allowed identifying the damages and the presence of repairs or restorations; even the alterations commonly found on the inner surface of the tubes, caused by playing, were put into evidence [23].

The RX investigation suggested that the inv.878 (Figure 1c) was characterized by a lamina of uniform thickness joined by a nipped tooth joint between the edges (Figure 2a). Two important elements referring to the peculiar executive technique of this instrument were highlighted through RX: (i) the presence of the "gusset" (Figure 1c), a "V"-shaped lamina inserted with nipped tooth joints on both sides, added to prevent excessive stretching of the edge during the hammering needed to obtain a larger bell; (ii) the difference in optical density around the ring that divides the tube into two parts and that seems to join laminas of different thickness.
3. Results and discussion

The horn made by Wolf bears the inscription "macht / johann / haas / nurnberg" and the initials "IWH" above a leaping hare turning its head backwards, while the other one, by the father Johann, bears the hare keeping its face forwards [22]. The bells and the tubes were detached, some parts were lacking, and the surface suffered from different kinds of alteration. In both instruments, numerous deformations of the lamina, macroscopic cracks, and areas of soldering and reparation were highlighted, and magnifications of some of their visible traces on the external surface were documented by stereomicroscope. The study of the structural features gave access to the manufacturing processes and to the identification of the damages and of the reparations or restorations. Even the alterations commonly found on the inner surface of tubes, caused by playing, were put into evidence [23].

The RX investigation suggested that inv.878 (Figure 1c) was characterized by a lamina of uniform thickness joined by a nipped tooth joint between the edges (Figure 2a). Two important elements referring to the peculiar executive technique were highlighted for this instrument through RX: (i) the presence of the "gusset" (Figure 1c), a "V"-shaped lamina inserted with nipped tooth joints on both sides, added to prevent excessive stretching of the edge during the hammering needed to obtain a larger bell; (ii) the difference in optical density around the ring that divides the tube into two parts and that seems to join laminas of different thickness.

Figure 2. Details magnified by stereomicroscope: a) nipped tooth joint (inv.878); b) metal thread in the garland (inv.877); c) corrosion morphology with pustules (inv.878); d) reddish surface inside the bell (inv.878); e) reparation with patch and soldering (inv.878); f) white dusty leftovers inside punching (inv.877); g) translucent green matter along the tubes (inv.877); h) possible residue of surface treatment (inv.878).

Otherwise, the bell of inv.877, which showed a single metal joint (Figure 1d), was made of a uniform lamina, probably thinner, and characterized by numerous cracks. Moreover, the presence of the metal thread detailed in Figure 2b confirmed a traditional making technique of the garland, which was the standard bell-finishing method of early brasses before the industrial revolution: sheets of brass were cut in a "Y" shape, folded, brazed, and hammered over a mandrel; the garland was then fitted over the edge without soldering [24]. On the outside of the bell, the position of the original support between the bell and the tube (the stay) is proven by a leftover. For this instrument, even the radiography (Figure 1d) suggested the use of the traditional executive technique employed up to the first part of the 19th century [24]. The tube was a single folded element, probably obtained by flattening and hammering a brass lamina (cold-worked), followed by soft annealing. With this procedure, the required shapes (bells and tubes) were produced [25].

The XRF results relating to the alloys (Table 1) revealed that both horns were manufactured from a yellow brass, a kind of brass that generally contains 23-41 % of zinc as the major alloying element and may contain up to 3 % of lead and up to 1.5 % of tin as additional alloying elements [13]. In the studied objects, a slightly variable Cu:Zn ratio was detected for the tubes and the bells, respectively. The quantity of Cu ranges between 68 % and 75 %, Zn between 21 % and 28 %, and variable additional alloying elements, such as iron (Fe), nickel (Ni), and lead (Pb), in amounts below 1 %, were also distinguished. Generally, yellow brasses, and in particular α-brass (with up to about 32.5 wt% zinc), are ductile, easily cold-worked, can be rolled or hammered into thin sheets, and have good corrosion resistance in a salt-water atmosphere [26]; for these reasons, this alloy was extensively employed in the making of historical brasses [10], [27].
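Purely as a numerical restatement of the classification just given, a minimal Python sketch; the ~32.5 wt% Zn boundary of the single-phase α field is the approximate literature value quoted above, and the example Zn contents are taken from the ranges reported for the studied areas:

ALPHA_BRASS_ZN_LIMIT = 32.5  # approximate upper Zn content (wt%) of single-phase alpha brass

def classify_brass(zn_wt_pct):
    # Below the limit the alloy stays in the ductile, cold-workable alpha field
    return "alpha brass" if zn_wt_pct <= ALPHA_BRASS_ZN_LIMIT else "alpha + beta brass"

for area, zn in [("tube", 27.9), ("bell, inside", 21.1)]:
    print(area, "->", classify_brass(zn))

Both studied horns fall well inside the α field, consistent with the thin hammered laminas described above.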
The presence of several trace elements was proof of the traditional cementation method employed by the early craftsmen to obtain brass, which included the use of calamine (a high-grade zinc carbonate ore). This process involved many different raw materials, instead of melting the two principal elements, copper and zinc, directly into each other as occurs in modern processes [10], [28], [29]. Among the trace elements commonly found in historical brass, Pb is the most critical, since it can affect the mechanical properties of the material, reducing the brass shear strength and making it prone to crack [30], [31]. In this case, the Pb amount is limited, and the observed embrittlement and cracking of the laminas could be imputed to mechanical stress, probably also promoted by the selective corrosion and/or dezincification processes that commonly affect ancient brass objects [31]-[35]. Conversely, the presence of calcium (Ca), detected widespread on the surface in variable and apparently random amounts, must be considered independent of the brass composition and was probably introduced by the environment or by cleaning procedures. In fact, a paste of precipitated chalk and water was often used with a soft cloth to remove the residue of corrosion products after the mechanical removal of dust and dirt [13], [27].

Among the trace elements, sulphur (S) was detected almost everywhere. Although it is not reported in the XRF results tables (Table 1 and Table 2), owing to the low sensitivity of the technique to this element and the impossibility of providing reliable percentage amounts, more intense S signals were detected in correspondence with the brown and black pustules of corrosion spread all over the surface (Figure 2c). The elemental composition of this kind of corrosion did not differ from that of the sound brass lamina. A low amount of powder sampled and examined through XRPD oddly revealed an unexpected amorphous nature. Nevertheless, the presence of sulphates as corrosion products was occasionally confirmed by the FTIR analysis, elsewhere possibly in combination with acetates and carbonates.

Accordingly, in Figure 3, two selected spectra acquired on the bells of the two horns (black and grey spectra) show the presence of a mixture of the inorganic compounds mentioned above. With regard to the spectral features at higher wavenumbers, the signal in the range between 3700 and 3300 cm-1 could be assigned to the OH stretching vibration. Observing the fingerprint region, instead, the main features of three groups of compounds were recognized. The presence of a copper carbonate-based compound (i.e., malachite, CuCO3·Cu(OH)2) was identified through the absorption bands centred around 1575, 1495, 1420, and 1395 cm-1, all caused by the CO3 antisymmetric stretching vibrations, and those around 1095, 820, 750, and 715 cm-1, generated by the CO3 symmetric stretching [36]. Moreover, the bands peaking around 1050 cm-1 and 880 cm-1, ascribed to the OH bending, support this attribution. The presence of a copper sulphate (i.e., brochantite, CuSO4·3Cu(OH)2) could be hypothesized from the bands centred around 1120, 980, and 620 cm-1, all of them reasonably assigned to the stretching vibrations of the SO42- ion; finally, the signals around 450 cm-1 could be ascribed to the Cu-O vibrations [36]. The characteristic bands of acetates (i.e., Cu(CH3COO)2·H2O, Zn(CH3COO)2·2H2O) were also identified: the main bands around 1625 and 1425 cm-1 are related to the copper carboxylates and their antisymmetric and symmetric stretching vibrations, while those around 1455, 1355, and 1030 cm-1 are possibly ascribable to CH3 deformations [36], [37]. All these materials were supposed to be alteration products due to the interaction with the environment. In addition, the presence of an organic fraction (i.e., natural resins and oils) could be tentatively recognized through the strong, sharp carbonyl band around 1740 cm-1 [38] and the quite strong hydrocarbon stretches of the methylene groups, around 2925 (asymmetric) and 2850 (symmetric) cm-1, and of the methyl groups, around 2962 and 2872 cm-1.

Figure 3. Reflection FTIR spectra in pseudo-absorbance collected from inv.878 and inv.877. Marker bands of copper carbonate-based compounds (*), acetates (◆), sulphates (▽), organic matter (■), and zeolites (+) are highlighted.
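The band assignment described above is, in essence, a tolerance match of observed peak positions against reference band lists. A minimal Python sketch of that bookkeeping follows; the reference positions are the ones discussed in the text, while the tolerance and the example peaks are illustrative choices, not values from this study:

# Band positions (cm^-1) summarized from the assignments discussed above
REFERENCE_BANDS = {
    "malachite (copper carbonate)": [1575, 1495, 1420, 1395, 1095, 1050, 880, 820, 750, 715],
    "brochantite (copper sulphate)": [1120, 980, 620, 450],
    "copper/zinc acetates": [1625, 1455, 1425, 1355, 1030],
}

def assign_peaks(observed, tol=15.0):
    """List the candidate compounds whose reference bands fall within tol of each peak."""
    result = {}
    for peak in observed:
        hits = [name for name, bands in REFERENCE_BANDS.items()
                if any(abs(peak - band) <= tol for band in bands)]
        result[peak] = hits or ["unassigned"]
    return result

print(assign_peaks([1578, 1122, 977]))  # hypothetical observed peaks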
An outstanding result must be reported for the composition of both instruments' bells: while the alloys of the tube and of the outside of the bells are comparable (identical in inv.877), the inside of the bells was characterized by a higher amount of Cu, corresponding to a more reddish surface with respect to the other areas (Figure 2d). Generally, the Cu enrichment of ancient brass surfaces is due to the selective corrosion of the Zn or to a dezincification process triggered by handling or by contact with acids [13]. In this case, the homogeneity of the surface prompted the hypothesis of a treatment done on purpose to decorate the bell differently: finishing it with a dull coating or a fine decoration was a common practice [3].

The solderings and the patches (Figure 2e) were characterized by a coarse aspect that could be correlated to later restorations. The XRF investigation showed that the nipped tooth joint lacked brazing, as expected; probably only the stay of inv.878 turned out to be brazed, with a composition comparable to the brass alloy. The other solder joints are soft solders assembled with a PbSn alloy containing a variable ratio of the two elements (Table 2). The use of this kind of solder is largely documented in ancient copper-based objects, and the proportions of the elements can vary from 30 % Pb (and 70 % Sn) to as much as 98 % Pb (and only 2 % Sn) [26], [39]-[42]. The results about the soldering, and the similarity between the alloys of the mouthpipe joints in both instruments, could prove later restorations, supporting the hypothesis of the conservator according to which the mouthpipes underwent replacement. Concerning the patches, each of them showed a different chemical composition: Sn-based (tube of inv.878), or Cu-Sn-Pb, or Pb-Sn alloys (Table 2), probably denoting different periods and restorers. As for the ring-shaped decoration in the middle of the tubes, the composition of the inv.878 one turned out to be an alloy comparable to that of the tube. Differently, in the latter instrument (inv.877) the Cu/Zn alloy was enriched with Pb and Sn (more than 1 %); for this reason, it could be considered a later supplement, processed differently from the laminas the instrument was made of. In fact, Pb is known to be added to facilitate shape modelling, the pouring of the alloy into the mould, or the soldering of the element.

Table 1. Chemical composition of the constitutive alloy of the tube and of the outside and the inside of the bell for both instruments, namely inv.877 and inv.878. Three points were acquired for each area; the mean and the standard deviation of the wt% values are shown for the detected elements.

            Area       Ca (wt%)    Cu (wt%)     Fe (wt%)    Ni (wt%)    Pb (wt%)    Zn (wt%)
  Inv.878   tube       2.0 ± 0.4   68.9 ± 1.1   0.6 ± 0.0   0.4 ± 0.0   0.3 ± 0.0   27.9 ± 0.6
            bell out   1.1 ± 0.3   70.7 ± 0.6   0.6 ± 0.0   0.3 ± 0.0   0.4 ± 0.1   27.0 ± 0.8
            bell in    2.7 ± 0.7   74.8 ± 0.6   0.6 ± 0.1   0.3 ± 0.0   0.4 ± 0.4   21.1 ± 1.7
  Inv.877   tube       2.7 ± 2.7   68.5 ± 2.5   0.6 ± 0.2   0.3 ± 0.0   0.2 ± 0.2   27.7 ± 0.6
            bell out   2.6 ± 0.4   68.5 ± 0.8   0.4 ± 0.1   0.3 ± 0.0   0.3 ± 0.1   27.8 ± 0.6
            bell in    3.4 ± 2.3   72.2 ± 0.5   0.4 ± 0.0   0.4 ± 0.0   0.4 ± 0.1   23.2 ± 2.0
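A small computation on the Table 1 means makes the bell-interior enrichment explicit; a Python sketch using the inv.878 values (the interpretation of the ratio is the one already given in the text):

# Mean wt% values for inv.878 from Table 1
areas = {
    "tube":     {"Cu": 68.9, "Zn": 27.9},
    "bell out": {"Cu": 70.7, "Zn": 27.0},
    "bell in":  {"Cu": 74.8, "Zn": 21.1},
}

for name, comp in areas.items():
    print(f"{name}: Cu/Zn = {comp['Cu'] / comp['Zn']:.2f}")

# tube ~2.47 and bell out ~2.62, against ~3.55 for the inside of the bell:
# the Cu-enriched (reddish) inner surface discussed above.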
Furthermore, a white, dusty yet adherent deposit was observed as a patina agglomerated inside the engravings or punches (Figure 2f), often in correspondence with the reparations. The XRF analysis detected a certain amount of Ca (ranging from 6 % up to 26 %), probably a cleaning residue; the XRPD examination even confirmed the presence of pure quartz, often found among the ingredients of modern polishes for metal surfaces. In other areas of the tubes, spots of white ink with a titanium (Ti)-based composition were also identified: an accidental stain of wall paint could not be excluded. With a similar distribution, a green translucent matter was found in correspondence with small areas where the surface was more worn along the tubes (Figure 2g). While XRF could not highlight significant information, except for a more intense peak of silicon (Si), the XRPD examination clearly identified siliceous faujasite, (Na2,Ca,Mg)3.5[Al7Si17O48]·32(H2O), and, more generally, other zeolites, probably ascribable to a polish supposed to be a synthetic mixture of a lubricant and small dispersed particles (zeolites). This hypothesis is also endorsed by the FTIR results reported in Figure 3 (green spectrum). Here, some of the most characteristic bands of zeolites were identified [43], [44]: in particular, the bands ranging from 3750 to 3450 cm-1 are ascribable to Si-OH, Si-OH-Al and -OH hydroxyl groups, whereas the bands between 1200 and 450 cm-1 could be assigned to Si-O-Al, Si-O-Si, Si-O, and Si-Al species. In particular, the bands ranging between 800 and 600 cm-1 in the green spectrum of Figure 3 are probably due to Si-O-M, where M is the metal ion species (Na, Ca, Mg). Moreover, concerning the hypothesized synthetic fraction, in the same spectrum the bands of the acetates and carbonates described above, and ascribed to the alteration products of the alloy, were accompanied by sharper bands in correspondence with the CH stretching vibrations in the range 3200-2800 cm-1, by bands ascribed to the COO- vibrations in the range 1590-1520 cm-1, and by the bending vibration of the CH2 around 720 cm-1, which could suggest the presence of a synthetic mixture or of metal soaps, originated from degradation or used as lubricants, probably for maintenance or restoration practice [45].

Finally, the documentation of a superimposed layer characterized by red particles suggested the presence of surface treatments (Figure 2h). In fact, brass tends to oxidize quickly when exposed to air and, as a common practice, a varnish was applied to give shine and protection to the metal alloy [3], [39]. Unexpectedly, neither the XRF nor the FTIR analysis, even though confirming the presence of some organic matter, could clearly prove the presence of such a coating.

Table 2. Chemical composition of selected areas of interest in both instruments, namely inv.878 and inv.877. A brief description of the measuring area and the corresponding results, expressed in wt%, are reported (n.d.: not detected; element assignment of the flattened source rows follows the elements reported for each area in the text).

  Inv.878                     Ca     Cu     Fe     Ni     Pb     Sn     Ti     Zn
  Soldering and reparations
  mouthpipe soldering         12.1   14.5   1.0    0.1    50.7   15.9   n.d.   5.7
  stay soldering              5.7    66.8   0.7    0.4    0.3    n.d.   n.d.   26.1
  tube ring                   1.5    70.6   0.5    0.4    0.5    n.d.   n.d.   26.5
  tube patch                  18.4   4.9    1.7    n.d.   2.5    71.8   n.d.   0.8
  bell out soldering          13.6   2.0    1.0    n.d.   21.9   61.0   n.d.   0.5
  Others
  tube, green matter          1.8    69.5   0.6    0.4    0.2    n.d.   n.d.   27.7
  tube, white powdery         6.3    59.6   5.3    0.3    0.7    n.d.   n.d.   27.8
  bell out, black corrosion   2.4    69.5   0.7    0.3    0.5    n.d.   n.d.   26.7
  bell in, black corrosion    1.7    75.5   0.6    0.3    1.0    n.d.   n.d.   20.9

  Inv.877                     Ca     Cu     Fe     Ni     Pb     Sn     Ti     Zn
  Soldering and reparations
  mouthpipe soldering         15.7   0.9    3.2    n.d.   50.5   29.3   n.d.   0.4
  mouthpipe soldering         11.6   11.8   1.1    n.d.   55.1   15.8   n.d.   4.7
  tube patch                  3.3    2.9    0.1    n.d.   30.9   62.4   n.d.   0.5
  tube ring                   n.d.   57.3   0.9    0.3    7.4    9.9    n.d.   24.2
  bell out patch              10.7   48.7   0.5    0.3    13.5   6.6    n.d.   19.8
  bell out patch, patina      14.0   22.8   6.1    0.1    32.1   12.5   n.d.   12.4
  bell out joint              3.7    60.4   0.5    0.4    0.4    1.8    n.d.   32.9
  bell in joint               7.0    62.0   0.7    0.4    0.6    1.0    n.d.   28.3
  Others
  tube, green matter          6.4    63.2   0.5    0.3    0.2    n.d.   n.d.   29.4
  white ink/matter            47.2   25.8   0.7    0.2    n.d.   n.d.   15.0   11.1
  white powdery               26.0   52.3   0.4    0.3    0.2    n.d.   n.d.   20.8
  black corrosion             8.8    65.9   1.2    0.3    n.d.   n.d.   n.d.   23.8
4. Conclusions

The non-invasive approach presented in this paper, mainly based on X-ray techniques, allowed us to obtain a map of the damages and of the fragile areas of the two investigated brass horns. Besides, it provided the conservator with some tools to better define the conservation state of the objects: complete documentation was produced of the inner structure, the morphology, and the surface aspect, with high magnification of the most significant details. Moreover, information about the craftsmanship of brass wind instruments in Nuremberg during the 17th century was disclosed, with a hint at the evolution of the technique over a period of about a century in which the traditional procedures seem to have been retained. The in-depth chemical characterization of the instruments highlighted the original and the superimposed parts. In many cases, the superimposed matter and the reparations seem to have the same nature, supporting the idea that the instruments were kept together in the last part of their history. Furthermore, discriminating the original parts offered the opportunity to produce a replica of the objects, to hopefully retrieve their original aspect and timbre. Unfortunately, further knowledge of the cold working and annealing of the laminas could only come from a microstructural investigation of the alloy. The availability of material for further investigations will make it possible to answer the questions left open by this study about the main corrosion products and about the surface treatments that turned the inside of the bells red, as useful and precious support for conservators and restorers in the planning of preliminary preservation strategies.

Acknowledgement

The authors would like to thank Prof. Renato Meucci for the valuable collaboration and for his worthy studies on early wind instruments, which guided this research.

References

[1] D. Smithers, The trumpets of J. W. Haas: a survey of four generations of Nuremberg brass instrument makers, The Galpin Society Journal 18 (1965), pp. 23-41. doi: 10.2307/841975
[2] R. Meucci, Strumentaio. Il costruttore di strumenti musicali nella tradizione occidentale, Fondazione Cologni, 2008, ISBN 8831795902.
[3] V. Bucur, Handbook of materials for wind musical instruments, Springer Nature, Switzerland, 2019, ISBN 978-3-030-19175-7.
[4] A. L. Bacon, A technical study of the alloy compositions of 'brass' wind musical instruments (1651-1857) utilising non-destructive X-ray fluorescence, PhD thesis, University of London, 2004.
[5] A. Baines, Gli ottoni, I Manuali EDT/SIdM, vol. 7, EDT srl, 1991, ISBN 8870630919.
[6] C. Pelosi, G. Agresti, P. Holmes, A. Ervas, S. De Angeli, U. Santamaria, An X-ray fluorescence investigation of ancient Roman musical instruments and replica production under the aegis of the European Music Archaeological Project, Int. J. Conserv. Sci. 7(2) (2016), pp. 847-856.
[7] G. Festa, G. Tardino, L. Pontecorvo, D. C. Mannes, R. Senesi, G. Gorini, C. Andreani, Neutrons and music: imaging investigation of ancient wind musical instruments, Nucl. Instrum. Methods Phys. Res. B 336 (2014), pp. 63-69. doi: 10.1016/j.nimb.2014.06.020
[8] B. Elsener, M. Alter, T. Lombardo, M. Ledergerber, M. Wörle, F. Cocco, M. Fantuzzi, S. Palomba, A. Rossi, A non-destructive in-situ approach to monitor corrosion inside historical brass wind instruments, Microchem. J. 124 (2016), pp. 757-764. doi: 10.1016/j.microc.2015.10.027
[9] M. B. Mitchiner, C. Mortimer, M. Pollard, Nuremberg and its jetons, c. 1475 to 1888: chemical compositions of the alloys, Numismatic Chronicle 147 (1987), pp. 114-155. Online [accessed 24 August 2022]: https://www.jstor.org/stable/42667501
[10] H. W. Vereecke, B. Frühmann, M. Schreiner, The chemical composition of brass in Nuremberg trombones of the sixteenth century, Historic Brass Society Journal 24 (2012), pp. 61-77. doi: 10.2153/0120120011004
[11] R. L. Barclay, Ethics in the conservation and restoration of early brass instruments, Historic Brass Society Journal (1989), pp. 75-81.
[12] R. L. Barclay, The care of historic musical instruments, Edinburgh, The Museums and Galleries Commission, 1997, ISBN 0-660-17116-3.
[13] S. Pollens, The manual of musical instrument conservation, Cambridge University Press, 2015, ISBN 9781107077805.
[14] G. Fichera, M. Albano, G. Fiocco, C. Invernizzi, M. Licchelli, M. Malagodi, T. Rovetta, Innovative monitoring plan for the preventive conservation of historical musical instruments, Stud. Conserv. 63(sup1) (2018), pp. 351-354. doi: 10.1080/00393630.2018.1499853
[15] P. Dondi, L. Lombardi, I. Rocca, M. Malagodi, M. Licchelli, Multimodal workflow for the creation of interactive presentations of 360 spin images of historical violins, Multimedia Tools and Applications 77(21) (2018). doi: 10.1007/s11042-018-6046-x
[16] C. Invernizzi, G. Fiocco, M. Iwanicka, M. Kowalska, P. Targowski, B. Blümich, C. Rehorn, V. Gabrielli, D. Bersani, M. Licchelli, M. Malagodi, Non-invasive mobile technology to study the stratigraphy of ancient Cremonese violins: OCT, NMR-MOUSE, XRF and reflection FT-IR spectroscopy, Microchem. J. 155 (2020). doi: 10.1016/j.microc.2020.104754
[17] T. Rovetta, C. Invernizzi, G. Fiocco, M. Albano, M. Licchelli, M. Gulmini, G. Alff, D. Fabbri, A. G. Rombolà, M. Malagodi, The case of Antonio Stradivari 1718 ex-San Lorenzo violin: history, restorations and conservation perspectives, J. Archaeol. Sci. 23 (2019), pp. 443-450. doi: 10.1016/j.jasrep.2018.11.010
[18] C. Invernizzi, G. Fiocco, M. Iwanicka, P. Targowski, A. Piccirillo, M. Vagnini, M. Licchelli, M. Malagodi, D. Bersani, Surface and interface treatments on wooden artefacts: potentialities and limits of a non-invasive multi-technique study, Coatings 11(1) (2021), art. no. 29, pp. 1-24. doi: 10.3390/coatings11010029
[19] M. Albano, S. Grassi, G. Fiocco, C. Invernizzi, T. Rovetta, M. Licchelli, R. Marotti, C. Merlo, D. Comelli, M. Malagodi, A preliminary spectroscopic approach to evaluate the effectiveness of water- and silicone-based cleaning methods on historical varnished brass, Appl. Sci. 10 (2020), art. no. 3982, pp. 1-14. doi: 10.3390/app10113982
[20] M. Albano, G. Fiocco, P. Dondi, F. Tasso, V. Ricetti, D. Comelli, M. Licchelli, C. Canevari, M. Malagodi, Preliminary study for the preservation of two natural horns from the end of the 17th century, Proc. of the 2020 IMEKO TC-4 International Conference on Metrology for Archaeology and Cultural Heritage, Trento, Italy, October 22-24, 2020. Online [accessed 24 August 2022]: https://www.imeko.org/publications/tc4-archaeo-2020/imeko-tc4-metroarchaeo2020-087.pdf
[21] M. Albano, G. V. Fichera, T. Rovetta, G. Guida, M. Licchelli, C. Merlo, P. Cofrancesco, C. Milanese, M. Malagodi, Microstructural investigations on historical organ pipes, J. Mater. Sci. 52(16) (2017), pp. 9859-9871. doi: 10.1007/s10853-017-1134-2
[22] W. Waterhouse, L. G. Langwill, The new Langwill index: a dictionary of musical wind instrument makers and inventors, Tony Bingham, London, 1993, ISBN 0-9461130-4-1.
[23] M. Fantauzzi, B. Elsener, F. Cocco, C. Passiu, A. Rossi, Model protective films on Cu-Zn alloys simulating the inner surfaces of historical brass wind instruments by EIS and XPS, Front. Chem. 8 (2020), p. 272. doi: 10.3389/fchem.2020.00272
[24] R. L. Barclay, The art of the trumpet-maker: the materials, tools, and techniques of the seventeenth and eighteenth centuries in Nuremberg, Clarendon Press; Oxford University Press, 1992, ISBN 0198166052.
[25] G. J. van der Heide, Brass instrument metalworking techniques: the Bronze Age to the Industrial Revolution, Historic Brass Society Journal 3 (1991), pp. 122-150. Online [accessed 24 August 2022]: https://www.historicbrass.org/edocman/hbj-1991/hbsj_1991_jl01_012_vanderheide.pdf
[26] D. Ashkenazi, D. Cvikel, A. Stern, A. Pasternak, O. Barkai, A. Aronson, Y. Kahanov, Archaeometallurgical investigation of joining processes of metal objects from shipwrecks: three test cases, Metallogr. Microstruct. Anal. 3 (2014), pp. 349-362. doi: 10.1007/s13632-014-0153-5
[27] M. Megahed, M. M. Abdelbar, Characterization and treatment study of a handcraft brass trumpet from Dhamar museum, Yemen, Archeomatica 9(3) (2018), pp. 36-41. doi: 10.48258/arc.v9i3.1481
[28] D. Bourgarit, F. Bauchau, The ancient brass cementation processes revisited by extensive experimental simulation, JOM 62(3) (2010), pp. 27-33. doi: 10.1007/s11837-010-0045-3
[29] D. A. Scott, R. Schwab, The metallurgy of pre-industrial metals and alloys, in: Metallography in Archaeology and Art, Cultural Heritage Science, Springer, Cham, 2019, pp. 133-206. doi: 10.1007/978-3-030-11265-3_5
[30] C. Vilarinho, J. P. Davim, D. Soares, F. Castro, J. Barbosa, Influence of the chemical composition on the machinability of brasses, J. Mater. Process. Tech. 170(1-2) (2005), pp. 441-447. doi: 10.1016/j.jmatprotec.2005.05.035
[31] F. Cocco, Sustainability in cultural heritage: from diagnosis to the development of innovative systems for monitoring and understanding corrosion inside ancient brass wind instruments, PhD thesis, University of Cagliari, 2017.
[32] C. Breckon, P. T. Gilbert, Dezincification of alpha brasses, Chemistry and Industry (1964), pp. 35-36.
[33] H. Sugawara, H. Ebiko, Dezincification of brass, Corros. Sci. 7(8) (1967), pp. 513-518. doi: 10.1016/S0010-938X(67)80090-8
[34] S. Selvaraj, S. Ponmariappan, M. Natesan, N. Palaniswamy, Dezincification of brass and its control - an overview, Corros. Rev. 21(1) (2003), pp. 41-74. doi: 10.1515/corrrev.2003.21.1.41
[35] L. Campanella, O. C. Alessandri, M. Ferretti, S. H. Plattner, The effect of tin on dezincification of archaeological copper alloys, Corros. Sci. 51(9) (2009), pp. 2183-2191. doi: 10.1016/j.corsci.2009.05.047
[36] C. Zaffino, V. Guglielmi, S. Faraone, A. Vinaccia, S. Bruni, Exploiting external reflection FTIR spectroscopy for the in-situ identification of pigments and binders in illuminated manuscripts. Brochantite and posnjakite as a case study, Spectrochim. Acta A Mol. Biomol. Spectrosc. 136 (2015), pp. 1076-1085. doi: 10.1016/j.saa.2014.09.132
[37] T. Ishioka, Y. Shibata, M. Takahashi, I. Kanesaka, Y. Kitagawa, K. T. Nakamura, Vibrational spectra and structures of zinc carboxylates I. Zinc acetate dihydrate, Spectrochim. Acta A Mol. Biomol. Spectrosc. 54(12) (1998), pp. 1827-1835. doi: 10.1016/S1386-1425(98)00063-8
[38] M. R. Derrick, D. Stulik, J. M. Landry, Infrared spectroscopy in conservation science, Getty Publications, 2000, ISBN 0-89236-469-6.
[39] H. Maryon, Metal work and enamelling, 5th ed., USA, 1971, p. 35, ISBN 0-486-22702-2.
[40] H. Maryon, Archaeology and metallurgy I. Welding and soldering, Man 41 (1941), pp. 118-124. doi: 10.2307/2791583
[41] Z. Goffer, Archaeological chemistry, 2nd ed., John Wiley & Sons, Inc., Hoboken, USA, 2007, p. 205, ISBN 978-0-471-25288-7.
[42] C. Deck, The care and preservation of historical brass and bronze, Benson Ford Research Center, 2016. Online [accessed 24 August 2022]: https://www.thehenryford.org/docs/default-source/default-document-library/the-henry-ford-brass-amp-bronze-conservation.pdf/?sfvrsn=2
[43] D. W. Breck, Zeolite molecular sieves - structure, chemistry and use, Wiley Interscience, New York, 1973, ISBN 0471099856.
[44] D. Nibou, S. Amokrane, H. Mekatel, N. J. P. P. Lebaili, Elaboration and characterization of solid materials of types zeolite NaA and faujasite NaY exchanged by zinc metallic ions Zn2+, Phys. Procedia 2(3) (2009), pp. 1433-1440. doi: 10.1016/j.phpro.2009.11.113
[45] F. Rosi, L. Cartechini, L. Monico, F. Gabrieli, M. Vagnini, D. Buti, B. Doherty, A. Anselmi, B. G. Brunetti, C. Miliani, Tracking metal oxalates and carboxylates on painting surfaces by non-invasive reflection mid-FTIR spectroscopy, in: Metal Soaps in Art, F. Casadio, K. Keune, P. Noble, A. van Loon, E. Hendriks (editors), Springer, Cham, Switzerland, 2019, pp. 173-193. doi: 10.1007/978-3-319-90617-1_10
AREL - Augmented reality-based enriched learning experience

Acta IMEKO, ISSN: 2221-870X, September 2022, Volume 11, Number 3, 1-5

A V Geetha1, T Mala2
1 Research Scholar, Department of Information Science and Technology, College of Engineering, Anna University, India
2 Associate Professor, Department of Information Science and Technology, College of Engineering, Anna University, India

Section: Research paper
Keywords: augmented reality; learning technologies; education; Vuforia
Citation: A V Geetha, T Mala, AREL - augmented reality-based enriched learning experience, Acta IMEKO, vol. 11, no. 3, article 12, September 2022, identifier: IMEKO-ACTA-11 (2022)-03-12
Section editor: Zafar Taqvi, USA
Received March 30, 2022; in final form July 30, 2022; published September 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: A V Geetha, e-mail: geethu15@gmail.com

Abstract: In the present era, teaching occurs either on a chalkboard or through a PowerPoint presentation projected on the wall. Traditional teaching methods such as blackboards and PowerPoint presentations are being phased out in favor of enriched learning experiences provided by emerging edtech. With the closure of schools due to COVID-19, the demand for online educational platforms has also increased. Furthermore, some of the recent trends in edtech include personalized learning, gamification, and immersive learning with extended reality (XR) technologies. Due to its immersive experience, XR is a pioneering technology in education, with multiple benefits including greater motivation, a positive attitude toward learning, concrete learning of abstract concepts, and so on. Existing augmented reality (AR) based education applications often rely on unimodal input, such as a marker-based trigger, to launch the educational content. Hence, this work proposes a multi-modal interface that enables content delivery through both marker recognition and speech recognition. Additionally, the proposed work is designed as a mobile-based AR platform with regional language support, to increase the ubiquitous accessibility of the AR content. Thus, the proposed mobile AR-based enriched learning (AREL) platform provides a multimodal, mobile-based educational AR platform for primary students. Based on the feedback received after usage, it is observed that AREL improves the learning experience of the students.

1. Introduction

Within a few months, the pandemic of coronavirus illness 2019 (COVID-19), caused by the novel virus SARS-CoV-2, forced enormous changes in the way businesses and other sectors operate. According to the World Economic Forum, 1.2 billion children in 186 countries were affected by school closures as of March 2021 [1]. Moreover, the new waves of cases in several regions of the world impact the return towards normalcy, and herd immunity and vaccines provide only temporary relief to regions affected by the new virus strains. Thus, online learning has evolved into a viable alternative to traditional classroom-based learning, with instruction delivered remotely using digital platforms. Even before the COVID era, there was a steady increase in the growth rate and adoption of technology in education. According to GlobeNewswire, it is estimated that the online education market will reach $350 billion by 2025. In addition, in response to COVID-19, several online education platforms, such as DingTalk, have scaled their cloud services by more than 100,000 servers [2].
Augmented and virtual reality (AR/VR), an emerging technology trend, can improve the online learning experience by increasing engagement and retention. VR headsets such as the Google Cardboard (GC) have made the technology accessible to most of the world's population. VR- and AR-based online learning platforms offer experiential learning, where students learn through experience rather than through traditional methods such as rote learning. Some of the benefits of experiential learning are accelerated learning, engagement, and easier understanding of complex concepts. Traditional educational methods are progressively becoming digital due to technological advancements. Mobile augmented reality (AR) is the superimposition of virtual objects over reality. AR is widely used in many fields, such as manufacturing, robotics, behavioral treatment, aircraft engineering design, and so on [3]. These real-virtual environments form the spectrum of extended reality (XR). XR has gradually evolved in the realm of education, revolutionizing pedagogical practices as well as students' educational experiences by facilitating the understanding of complicated aspects of education through the visual depiction of images based on real-world data [4]. Specifically, AR and VR have been extensively used in many educational applications. For example, in [5], a solar system mobile app was designed to test the knowledge retention of college students. In [6], a collaborative augmented reality system was used to deliver geometrical concepts in mathematics, which are abstract and can be easily illustrated using an AR platform. As a result, AR aids in the comprehension of challenging concepts of the learning module. In [7], the work presents a gesture-based AR system to aid in understanding the anatomical structures of the human body.
[8] focuses on the concept of comic-based education delivered through markerless AR for improving the metacognitive abilities of students. Thus, there is a variety of approaches for designing educational apps based on AR and VR, such as head-mounted displays, markerless systems, and gesture-based systems. In addition, mobile AR systems are effective because they allow portability and easy access. Therefore, the proposed AREL system is a mobile AR-based learning platform in which students can scan the contents of their books to discover videos that appear magically over the pages, transforming a plain textbook into a book with dynamic information. Furthermore, AREL delivers the contents in the regional language of the students to improve engagement with the app. AREL is made up of a collection of modules, such as a speech recognition system and an image tracking and registration module, that take advantage of mobile sensors and computational power. The application is developed using the Unity engine and the Vuforia SDK. The mobile application interfaces with the Vuforia cloud target recognition system via a client-server architecture. To grab students' attention and increase their learning experience, the content of the book is enriched with augmented graphics, animations, and other edutainment features. The mobile camera is linked to the scene's virtual camera as soon as the program is launched. Once the target image is recognized, with the help of the Vuforia image database and pattern recognition algorithms, the corresponding output view is rendered on the display unit. As a result, the output view consists of virtual objects laid out over the real-time objects. The triggered audio output explains the concepts that are scanned, which in turn improves the learning experience. The learning module also includes multiple-choice questions on the contents taught, to assess the understanding of the learned material. Based on the feedback received after usage, it is observed that AREL positively improves the learning experience.

2. Methodology

AREL is designed as a multi-modal AR interface that delivers mobile-based learning to students. Based on a survey of the literature related to AR-based educational platform design, it is observed that AR-based learning platforms improve engagement and the comprehension of difficult concepts. AR also provides an experiential learning experience, rather than traditional methods such as rote learning or instructor-led learning. AREL is designed as a mobile-based AR system to increase the accessibility of AR-based learning for students. Thus, AREL complements and improves the online learning solutions and protocols developed during the COVID era.

2.1. Development model

AREL is developed using the rapid application development (RAD) model, as it accelerates system development. The RAD model is appropriate when the product development time is short and the project requires high component reusability and modularity. Figure 1 illustrates the RAD model followed in this work. The RAD model involves four development phases, briefly explained as follows:
• Requirement gathering: in this phase, the objectives and requirements of the product are gathered based on a technical review. This phase thus aids in the understanding of the project goals and expectations.
• User description: to design a component, the developer collects the description of the component design from its user.
Based on the design from the user, a prototype of the application is developed, which is further reviewed, and the design is updated.
• Implementation: in this phase, the developer implements the requirements and performs testing on the product. For a typical AR application, this phase involves UI interface design, creation of 3D objects, and coding and testing of the product.
• Evaluation: once the product is completely developed, it is tested to check whether the user expectations are met. Once the product is successfully evaluated, it reaches the users.

Figure 1. RAD development model.

2.2. Software used
• Unity: Unity is a game development engine used to create games for 3D environments such as VR and AR. Unity supports scripting through C# [9]. Applications developed using Unity can be exported to platforms such as iOS, Android, or desktop platforms like Windows. Unity provides a comprehensive framework for adding interactive animations, audio, and physics-based logical simulations for natural, close-to-real interactions. Therefore, AREL is designed using the Unity engine.
• Vuforia software development kit (SDK): Vuforia is an SDK supported by Unity that enables the creation of AR applications for mobile [10]. It uses computer vision-based technologies to track image targets, object targets, or area targets for marker-based AR applications. Upon recognition, the virtual object is placed relative to the marker and the virtual camera position. Vuforia is integrated into the Unity engine for developing the AR concepts of AREL.
• Google Speech-to-Text API: the Google Speech-to-Text API [11] enables the integration of speech recognition into a variety of applications. Upon receiving a voice audio sample, the service returns a transcript of the audio. It uses sophisticated deep learning models, ranging from long short-term memory recurrent neural networks to advanced speech recognition algorithms, to perform accurate recognition.

3. Augmented reality based enriched learning

The objectives of the AREL system are as follows:
• to design a mobile-based AR system that increases the accessibility of AR-based learning for students;
• to design an AR platform that can act as a teaching aid for students;
• to complement and improve the online experience through AR;
• to provide a multi-modal interface and regional language support.

The objectives are achieved in AREL through its multi-modal content delivery methods. AREL provides two modes of content delivery: a) image target-based content delivery and b) speech-to-text based content delivery. This is illustrated in Figure 2, where the application receives input from the camera and the microphone to deliver the content via AR.

Figure 2. Architecture of the AREL system.

3.1. Image target-based content delivery

The AR interface of the system is developed using the Vuforia SDK and uses its image target database system for processing the image targets. The image targets from the children's textbook are created and processed in the target database system of Vuforia, which is then integrated with the application through the Unity gaming engine. Upon scanning these image targets, the virtual camera performs image recognition based on the features available in the target database. Once a matching image target is found, the relevant content is displayed as AR content. Figure 3 illustrates the image target-based content delivery.

Figure 3. Image target-based content delivery.
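Vuforia's target recognition runs inside the Unity app and its algorithms are proprietary; purely as a conceptual stand-in for the recognition step described above, the Python/OpenCV sketch below matches a camera frame against a stored image target using local features. The library choice, the feature detector, and all thresholds are illustrative assumptions, not part of AREL:

import cv2

def frame_matches_target(frame_gray, target_gray, min_good_matches=40, ratio=0.75):
    """Return True when enough ORB features of the image target reappear in the frame."""
    orb = cv2.ORB_create(nfeatures=1000)
    _, target_desc = orb.detectAndCompute(target_gray, None)
    _, frame_desc = orb.detectAndCompute(frame_gray, None)
    if target_desc is None or frame_desc is None:
        return False  # no texture to match
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(target_desc, frame_desc, k=2)
    # Lowe's ratio test keeps only distinctive matches
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) >= min_good_matches

A production tracker would additionally estimate the target pose (e.g., via a homography) so that the virtual content can be anchored to the page, which is what Vuforia does internally.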
The output view consists of video content or 3D objects (with an audio description) laid out over the real-time objects.

3.2. Speech-to-text based content delivery

AREL also supports speech-to-text based content delivery. The audio samples received from the microphone are pre-processed using noise cancellation, and the speech input is sent to the speech-to-text service. Speech processing involves several steps, including analysis, feature extraction, modelling, and testing. The feature extraction process extracts unique features of the audio using the mel frequency cepstral coefficients (MFCC) technique. Upon recognizing the sample, the speech input is converted to text. If the text matches any of the speech commands, the appropriate AR content is displayed. The detailed steps of this content delivery mode are as follows:
1) record a short audio clip from the user's microphone;
2) convert the audio into WAV format;
3) upload the file to the Google server;
4) once the uploaded file is processed, receive the output as a JSON file;
5) process the JSON file: if the transcript contains a text command, the corresponding number is displayed.

Figure 4 depicts the working of the speech-to-text based content delivery, and Algorithm 1 the process involved.

Figure 4. Speech-to-text based content delivery.

Algorithm 1: speech-to-text based content delivery
Input: microphone audio (a)
Output: AR content based on speech
1: C := command_words
2: w := convert_audio_to_wav(a)
3: text_from_speech := speech_to_text_api(w)
4: if text_from_speech ∈ C then
5:   command := text_from_speech ∩ C
6:   load_content(command)
7: end if
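As a minimal illustration of Algorithm 1 outside the Unity/C# app, the Python sketch below wires the same steps together. The hooks convert_to_wav, speech_to_text, and load_content are placeholders for the microphone recording, the WAV conversion, the Google Speech-to-Text call, and the Unity-side content loader, and the command table is hypothetical:

COMMANDS = {"one": "content_number_1", "apple": "content_apple"}  # hypothetical command words

def deliver_speech_content(audio, convert_to_wav, speech_to_text, load_content):
    """Algorithm 1: transcribe the utterance and trigger AR content on a command match."""
    wav = convert_to_wav(audio)                 # step 2
    text = speech_to_text(wav).strip().lower()  # step 3: transcript from the STT service
    for word in text.split():                   # step 4: is any word in the command set?
        if word in COMMANDS:
            load_content(COMMANDS[word])        # steps 5-6: load the matched content
            return COMMANDS[word]
    return None                                 # no command recognized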
according to the results of the experiment, the multi-modal user interface with image-target and voice-based interactions, as well as the augmented reality display integrating real and virtual items, functions as a natural immersive experience for children. as the children try out the various interactions of the same learnable content, the learning retention got improved, as indicated by the tabulated score in figure 7. the children expressed a strong willingness to explore the application, indicating that it might be used as a fun and engaging learning tool. this is also indicated by an average learning engagement score of 8.33 from the survey. the overall experience of the mobile ar platform is at 8, which indicates the learning experience and usability experience of the children is positive and improved. 5. conclusions the development and evaluation of a mobile ar based enriched learning experience for learning language and math are reported in this work. one of the benefits of an ar learning experience over a standard book is that other intriguing aspects like animation, virtual objects, sound, and video may be included while the physical book is still present. the findings of the study suggest that the existence of such aspects during the learning process generate excitement, learning engagement, and enjoyment. the findings are supported by the answers to our survey questions to the parents of the children. the findings also show that the multi-modal interface of real and virtual things provides a natural immersive experience as well as an engaging and exciting learning tool for this age range. however, after repeated usage of the same book, children may become bored with arel if they can guess what things will appear. therefore, as part of future work, including a surprise aspect in the application could make it more enjoyable and engaging. while each image target-based marker has multiple types of visual content that could be presented, randomising the presentation of such content could surprise the child and can be included as future enhancement. furthermore, learning analytics of user engagement and learning retention can be utilised to evaluate the user experience, and personalised content for each student will be applied in future work. references [1] the rise of online learning during the covid-19 pandemic |world economic forum. onlinbe [accessed 16 august 2022] https://www.weforum.org/agenda/2020/04/coronaviruseducation-global-covid19-online-digital-learning/ [2] online education market study 2019 | world market projected. online [accessed 16 august 2022] https://www.globenewswire.com/newsrelease/2019/12/17/1961785/0/en/online-education-market figure 4. speech-to-text based content delivery. figure 5. screenshots from image target-based content delivery. figure 6. screenshots from image target-based content delivery. figure 7. feedback for arel. 
References

[1] The rise of online learning during the COVID-19 pandemic | World Economic Forum. Online [accessed 16 August 2022]: https://www.weforum.org/agenda/2020/04/coronavirus-education-global-covid19-online-digital-learning/
[2] Online education market study 2019 | World market projected. Online [accessed 16 August 2022]: https://www.globenewswire.com/news-release/2019/12/17/1961785/0/en/online-education-market-study-2019-world-market-projected-to-reach-350-billion-by-2025-dominated-by-the-united-states-and-china.html
[3] R. Azuma, Y. Baillot, R. Behringer, S. Feiner, S. Julier, B. MacIntyre, Recent advances in augmented reality, IEEE Comput. Graph. Appl. 21(6) (2001), pp. 34-47. doi: 10.1109/38.963459
[4] S. Alvarado, W. Gonzalez, T. Guarda, Augmented reality 'another level of education', Iber. Conf. Inf. Syst. Technol. (CISTI), June 2018, pp. 1-5. doi: 10.23919/cisti.2018.8399331
[5] K. T. Huang, C. Ball, J. Francis, R. Ratan, J. Boumis, J. Fordham, Augmented versus virtual reality in education: an exploratory study examining science knowledge retention when using augmented reality/virtual reality mobile applications, Cyberpsychology, Behav. Soc. Netw. 22(2) (2019), pp. 105-110. doi: 10.1089/cyber.2018.0150
[6] H. Kaufmann, D. Schmalstieg, Mathematics and geometry education with collaborative augmented reality, Computers & Graphics 27(3) (2003), pp. 339-345. doi: 10.1016/s0097-8493(03)00028-1
[7] F. Bernard, C. Gallet, H. D. Fournier, L. Laccoureye, P. H. Roche, L. Troude, Toward the development of 3-dimensional virtual reality video tutorials in the French neurosurgical residency program. Example of the combined petrosal approach in the French College of Neurosurgery, Neurochirurgie 65(4) (2019), pp. 152-157. doi: 10.1016/j.neuchi.2019.04.004
[8] A. M. Nidhom, A. A. Smaragdina, K. N. Gres Dyah, B. N. R. P. Andika, C. P. Setiadi, J. M. Yunos, Markerless augmented reality (MAR) through learning comics to improve student metacognitive ability, ICEEIE 2019 Int. Conf. Electr. Electron. Inf. Eng., October 2019, pp. 201-205. doi: 10.1109/iceeie47180.2019.8981411
[9] Unity real-time development platform | 3D, 2D VR & AR engine. Online [accessed 16 August 2022]: https://unity.com/
[10] Vuforia developer portal. Online [accessed 16 August 2022]: https://developer.vuforia.com/
[11] Quickstart: transcribe speech to text by using client libraries | Cloud Speech-to-Text documentation | Google Cloud. Online [accessed 16 August 2022]: https://cloud.google.com/speech-to-text/docs/transcribe-client-libraries#before-you-begin
Measurement of the structural behaviour of a 3D airless wheel prototype by means of optical non-contact techniques

Acta IMEKO, ISSN: 2221-870X, September 2022, Volume 11, Number 3, 1-8

Antonino Quattrocchi1, Damiano Alizzio1, Lorenzo Capponi2, Tommaso Tocci3, Roberto Marsili3, Gianluca Rossi3, Simone Pasinetti4, Paolo Chiariotti5, Alessandro Annessi6, Paolo Castellini6, Milena Martarelli6, Fabrizio Freni1, Annamaria Di Giacomo1, Roberto Montanini1
1 Department of Engineering, University of Messina, Messina, Italy
2 Aerospace Department, University of Illinois Urbana-Champaign, Urbana, Illinois, USA
3 Department of Engineering, University of Perugia, Perugia, Italy
4 Department of Mechanical and Industrial Engineering, University of Brescia, Brescia, Italy
5 Department of Mechanical Engineering, Politecnico di Milano, Milan, Italy
6 Department of Industrial Engineering and Mathematical Sciences, Polytechnic University of Marche, Ancona, Italy

Section: Research paper
Keywords: additive manufacturing; digital image correlation; thermoelastic stress analysis; finite element modelling; contact pressure mapping
Citation: Antonino Quattrocchi, Damiano Alizzio, Lorenzo Capponi, Tommaso Tocci, Roberto Marsili, Gianluca Rossi, Simone Pasinetti, Paolo Chiariotti, Alessandro Annessi, Paolo Castellini, Milena Martarelli, Fabrizio Freni, Annamaria Di Giacomo, Roberto Montanini, Measurement of the structural behaviour of a 3D airless wheel prototype by means of optical non-contact techniques, Acta IMEKO, vol. 11, no. 3, article 13, September 2022, identifier: IMEKO-ACTA-11 (2022)-03-13
Section editor: Francesco Lamonaca, University of Calabria, Italy
Received March 17, 2022; in final form June 23, 2022; published September 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by the Ministry of Education, University and Research in Italy (MIUR) under the Research Project of National Interest (PRIN2015) "Experimental techniques for the characterization of the effective performances of trabecular morphology structures realized in additive manufacturing".
Corresponding author: Antonino Quattrocchi, e-mail: antonino.quattrocchi@unime.it

Abstract: Additive manufacturing (AM) is becoming a widely employed technique also in mass production. In this field, compliance with geometry and mechanical performance standards represents a crucial constraint. Since 3D printed products exhibit a mechanical behaviour that is difficult to predict and investigate, due to their complex shape and the inaccuracy in reproducing nominal sizes, optical non-contact techniques are an appropriate candidate to solve these issues. In this paper, 2D digital image correlation and thermoelastic stress analysis are combined to map the stress and strain performance of an airless wheel prototype. The innovative airless wheel samples are 3D-printed by fused deposition modelling and stereolithography in poly-lactic acid and photopolymer resin, respectively. The static mechanical behaviour for different wheel-ground contact configurations is analysed using the aforementioned non-contact techniques. Moreover, the wheel-ground contact pressure is mapped, and a parametric finite element model is developed. The results presented in the paper demonstrate that several factors have great influence on 3D printed airless wheels: a) the type of material used for manufacturing the specimen, b) the correct transfer of the force line (i.e., the loading system), and c) the geometric complexity of the lattice structure of the airless wheel. The work confirms the effectiveness of the proposed non-contact measurement procedures for characterizing complex-shaped prototypes manufactured using AM.
1. Introduction

Additive manufacturing (AM) or, more commonly, 3D printing is an emerging manufacturing technique applied in several fields of industry [1], [2]. It allows some important goals to be achieved, such as weight reduction, the development of complex shapes, and the use of a wide variety of materials [3]. Currently, AM is applied to exploit the design flexibility of numerical topology optimization tools, leading to the creation of innovative samples [4]. Moreover, AM is not only adopted to produce prototypes, but it can also target mass production. The latter purpose implies compliance with many restrictions, due to the need to satisfy the production standards [5]. Functional quality is generally associated with the structural response during the application of static and dynamic loads. However, in 3D printed lattice structures, the mechanical response can be significantly different than expected [6]. The causes of this anomaly can be identified in the geometric complexity and in the inaccurate reproduction of the nominal shape by the AM process [7]. Although finite element numerical analyses have been performed [8], [9], the experimental measurement of the actual structural response of lattice components is of relevant interest for AM technology [10], [11]. Indeed, traditional techniques [12] are not suitable for investigating 3D printed lattice parts, for example because of their complex structures, non-conventional materials, and small size. Non-contact methods represent powerful alternatives, providing remarkable results despite the demanding technological requirements [13], [14]. The state of the art on this topic collects only a few works, which often do not investigate in depth the mechanical behaviour of the obtained product.
Although numerical finite element analyses provide a good overview [15], the experimental mechanical characterization has generally been limited to the estimation of conventional stress-strain curves, carried out by static and dynamic compression tests [16], [17]. Other ancillary analyses have been performed for the evaluation of the porosity of the material by scanning electron microscopy [18] and for the investigation of the composition of the crystalline structure of the produced materials by means of backscattering electron diffraction [19].

One of the first examples of experimental mechanical characterization by non-contact methods was discussed by Brenne et al. [20]. They mapped the strain behaviour of a lattice, heat-treated titanium alloy specimen, subjected to both uniaxial and bending loads, using 2D digital image correlation (DIC) [21], [22] and electron backscatter diffraction. The results clarified the effect of the heat treatment and the weaknesses of the structure but, unfortunately, the quality of the 2D DIC images [23], [24] did not allow the performance of the individual beams to be investigated or quantitative results to be drawn. A more detailed inspection was provided by Vanderesse et al. [25]. They investigated the strain behaviour of porous lattice materials with body-centred cubic reinforced and diamond mesostructures, subjected to quasi-static compression up to failure. The strain maps, obtained using DIC, showed the localization of the most strained areas before and after the sample failure, highlighting a diffuse distribution strongly depending on the analyzed reticular structure. In fact, it was generally noted that some struts exhibited a critical behavior that quickly led to their collapse, while other ones were only slightly strained. Allevi et al. [26] performed a feasibility study of thermoelastic stress analysis (TSA) [27], [28] on a titanium-based-alloy space bracket made by electron beam melting. Their results showed the same load trends that can be identified at larger scales, but also small and unexpected peaks in the TSA output and in the theoretical outcomes calculated by finite element analysis. Quattrocchi et al. [29] evaluated the mechanical behavior of a lumbar transforaminal interbody fusion cage implant, made by a 3D printing process adopting medical-grade titanium. Although these devices have a trabecular structure, useful for bone development, TSA allowed identifying that, at the small scale, the complex geometry of the specimen determines local differences in the stress distribution, with intensification of the loads at the trabecular knots. Finally, Allevi et al. [30], [31] developed experimental protocols based on advanced non-contact measurement techniques to qualify the full mechanical behaviour of lattice structures. They employed different techniques, such as 2D vision systems, 2D DIC, TSA and laser Doppler vibrometry (LDV), to investigate morphological characteristics, to map local stress-strain fields, and to analyse the modal behaviour of simple lattice structures.
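For readers less familiar with the TSA measurements that recur in the cited works: under adiabatic cyclic loading, the surface temperature of a component oscillates in proportion to the change of the first stress invariant, ΔT = -K · T0 · Δ(σ1 + σ2), where K is the thermoelastic coefficient of the material and T0 the absolute surface temperature; the stress-related signal is typically extracted by lock-in demodulation of the infrared frames at the loading frequency. The Python sketch below is a generic single-pixel lock-in, offered only as an illustration of the principle and not as the processing chain of the cited studies:

import numpy as np

def lock_in(signal, fs, f_load):
    """Amplitude and phase of a pixel's temperature signal at the loading frequency."""
    t = np.arange(signal.size) / fs
    i = 2.0 * np.mean(signal * np.cos(2 * np.pi * f_load * t))  # in-phase component
    q = 2.0 * np.mean(signal * np.sin(2 * np.pi * f_load * t))  # quadrature component
    return np.hypot(i, q), np.arctan2(q, i)

# Synthetic check: a 0.05 K oscillation at 10 Hz sampled at 200 Hz is recovered exactly
fs, f = 200.0, 10.0
sig = 0.05 * np.cos(2 * np.pi * f * np.arange(2000) / fs)
amp, _ = lock_in(sig, fs, f)
print(f"recovered amplitude: {amp:.3f} K")  # ~0.050

Applied pixel by pixel over the IR image stack, this demodulation yields the full-field stress-related amplitude map that TSA delivers.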
indeed, unlike traditional methods, which are inefficient or even inapplicable in such conditions, non-contact optical techniques are an appropriate candidate for obtaining full-field information without altering the object of study. two airless wheel samples, manufactured using different printing technologies (fused deposition modelling, fdm, and stereolithography, sla) and materials (poly-lactic acid, pla, and photopolymer resin, ppr), have been investigated. a measurement procedure, based on dic and tsa, has been applied to achieve combined full-field strain and stress measurements for the different wheel-ground contact configurations. furthermore, the wheel-ground contact pressure (cp) has been mapped, estimating the load transfer. finally, a parametric finite element model has been developed and compared with the results obtained from the experimental approaches.

2. materials and methods
2.1. airless wheel prototypes
the airless wheel prototype (figure 1) was specifically manufactured according to [4]. the geometry is designed following a regular pattern of fixed angular amplitude (36°). the pattern is then extruded in the axial direction of the wheel. the lattice structure is obtained by connecting the intersection points of four circular crowns of equal width along the diameter and ten circular sectors of the wheel through a "zig-zag" criterion. the thickness of each trabecula is always kept the same.

figure 1. frontal view of the airless wheel prototype with lattice morphology.

while fdm obtains a model by the subsequent layering of fused material, sla adopts a 3d printing method based on a photochemical process. a laser beam is focused on a liquid ppr to enable the curing and solidification of the monomers on a building platform. the raw structure is washed in isopropyl alcohol; the supports needed for the realization of the product are removed and the grafting points of the supports are smoothed. finally, a post-cure process is performed to complete the solidification and to improve the mechanical properties of the printed structure. the sla sample (figure 2 a) was realized by a form 2 printer (formlabs inc., usa) with a standard ppr, while the post-cure was performed by exposing the raw sample to uv light (wavelength of 405 nm) for 30 min at 60 °c (15 min for each side). instead, the fdm sample (figure 2 b) was obtained by an ultimaker 2+ printer (ultimaker b.v., utrecht, nl) with pla. in this case, the post-cure is not necessary. table 1 reports the parameters adopted for the 3d printing processes.
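as a purely illustrative companion to the geometry just described, the following sketch generates nodal coordinates and a "zig-zag" beam connectivity for a wheel with four crowns and ten 36° sectors. the radii, the units and the alternating-shift connectivity rule are assumptions of this sketch, not the authors' actual cad procedure.

```python
import numpy as np

# illustrative parameters (not the authors' actual cad values)
r_hub, r_ring = 20.0, 60.0   # hypothetical hub and outer-ring radii in mm
n_crowns, n_sectors = 4, 10  # four circular crowns, ten 36-degree sectors

# radial positions of the crown boundaries (equal width along the diameter)
radii = np.linspace(r_hub, r_ring, n_crowns + 1)
theta = np.linspace(0.0, 2.0 * np.pi, n_sectors + 1)[:-1]  # sector boundaries

# lattice nodes at the crown/sector intersections
nodes = [(r * np.cos(t), r * np.sin(t)) for r in radii for t in theta]

# hypothetical "zig-zag" beams: connect each node to the node one crown
# outward, alternately shifted by one sector, so the trabeculae form a v pattern
beams = []
for i in range(n_crowns):
    for j in range(n_sectors):
        a = i * n_sectors + j
        b = (i + 1) * n_sectors + (j + (i % 2)) % n_sectors
        beams.append((a, b))
```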
2.2. experimental setup
the mechanical behaviour was studied using an experimental setup consisting of a loading system, a digital camera with lighting projectors for dic measurements, an infrared (ir) camera suited to tsa measurements and an acquisition system with a piezoresistive sensor for the wheel-ground cp estimation (figure 3). a rubber tread was not considered for the tests performed and discussed in this paper. the stress and strain fields were mapped employing specific load stages on the wheel. 2d dic was performed applying a static load, while tsa was implemented with a dynamic load with harmonic trend. the load was transferred to the wheel by a dedicated apparatus (figure 4), i.e., a shaft and a loading frame specifically designed and manufactured. the airless wheel was locked by a bolted connection. this arrangement prevented the rotation of the wheel around its axis. the load was then applied in the vertical direction (y-axis). the loading condition was driven by an actuator and the effective load provided was measured through a load cell. more in detail, two slightly different loading systems were used: an electromechanical material testing machine (mod. electroplus e3000, instron company, norwood, ma, usa) equipped with a calibrated 5 kn load cell was employed for the tests on the sla wheel, while an electrodynamic shaker (mod. lds v650, brüel & kjær, nærum, denmark) with a dedicated power amplification unit was used for the tests on the fdm wheel. the morphology of the lattice structure of the airless wheel does not guarantee the same stress and strain distributions at the wheel hub, because these depend on the topology of the angular portion of the wheel corresponding to the wheel contact patch. consequently, three different contact configurations have been analysed: mixed, rhombic and trapezoidal; this labelling refers to the frontal geometric topology of the lattice region (figure 5).

figure 2. 3d printed samples manufactured by means of: a) sla and b) fdm technologies.
figure 3. schematic representation of the experimental setup used for dic and tsa measurements; cps = contact pressure sensor.
figure 4. loading apparatus for dic and tsa measurements.
figure 5. frontal view of the wheel lattice structure for different wheel-ground contact configurations: a) rhombic, b) mixed and c) trapezoidal.

table 1. 3d printing parameters used for the airless wheel prototypes.
parameter                        sla        fdm
material                         black v4   pla red
support points size in mm        0.6        -
layer thickness in µm            50         100
printing time in h               9.15       23.33
number of layers                 1003       258
material volume or weight        72 ml      56 g
orientation in °                 15         0

2d dic measurements required a preliminary preparation of the target surface (frontal view) of the wheel in order to create a suitable random speckle pattern [35]. this was done by spraying a matt white paint, hence obtaining a high-contrast surface given the natural black colour of the wheel material. the images were taken using a nikkor af micro 200 mm lens mounted on a nikon d3100 digital camera (for the tests on the sla-printed wheel) and a canon ef 200 mm lens with a canon eos 7d digital camera (for the tests on the fdm-printed wheel). the two imaging systems ensure approximately the same field of view (fov) but with a different spatial resolution. diffuse illumination was obtained by exploiting led light. this was adopted to improve image quality and further increase contrast. particular attention was paid to checking the image quality and to verifying that no saturation phenomena occurred in the images. dic tests were carried out at different load levels, synchronizing the frame rate to the load increase (i.e., 1 frame recorded every 1 n load increase step). the post-processing of the dic tests was performed off-line, using the gom correlate pro software (gom gmbh, braunschweig, germany). table 2 reports the details of the two imaging systems and the load configuration for the 2d dic measurements. the wheel-ground cp was acquired during the dic tests using a flexible, thin, piezoresistive sensor (mod. 5040n), wired to a specific acquisition system (evolution™ handle, tekscan, south boston, ma, usa) and opting for a sampling frequency of 1 hz. the characteristics of the cp system are described in table 3.
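the dic post-processing relies on the proprietary gom correlate pro software; the subset-matching step at the core of any 2d dic pipeline can nevertheless be sketched as a zero-normalized cross-correlation search over a small displacement range, as below. the function names and the integer-pixel search are illustrative assumptions of this sketch, not the gom implementation.

```python
import numpy as np

def zncc(a, b):
    """zero-normalized cross-correlation between two equally sized subsets."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_subset(ref, cur, x, y, half=10, search=5):
    """find the integer-pixel displacement of the subset centred at (x, y)."""
    sub = ref[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    best, best_uv = -2.0, (0, 0)
    for v in range(-search, search + 1):
        for u in range(-search, search + 1):
            cand = cur[y + v - half:y + v + half + 1,
                       x + u - half:x + u + half + 1].astype(float)
            score = zncc(sub, cand)
            if score > best:
                best, best_uv = score, (u, v)
    return best_uv  # (u, v) displacement in pixels; strains follow from its gradients
```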
tsa was carried out working in lock-in mode, i.e. by synchronizing the ir system frame rate to the harmonic load applied. a preliminary study was conducted to determine the correct excitation parameters. this made it possible to obtain quasi-adiabatic conditions and a thermoelastic signal with a high signal-to-noise ratio (snr). these parameters are highly related to the thermal and mechanical properties of the printing material, more specifically to its heat capacity and stiffness. consequently, the working parameters are slightly different for the sla and the fdm prototypes. tests were performed with specific pre-load and overload levels but employing the same sampling frequency of 100 hz. the ir images were taken using a flir a6751sc camera (sla-printed wheel) and a flir sc7600 one (fdm-printed wheel). both ir cameras have a spatial resolution of 640 × 512 pixel and an insb detector with a thermal resolution of 20 mk at room temperature. table 4 summarizes the characteristics of the two imaging systems and the load configuration used for the tsa measurements.

table 2. details of the systems configuration used for 2d dic measurements.
parameter                            sla            fdm
camera                               nikon d3100    canon eos 7d
focus distance in mm                 610            550
f-stop                               f/7.1          f/4.5
exposure time in s                   1/40           1/30
iso* sensitivity                     2000           6400
focal distance in mm                 200            100
images resolution in pixel × pixel   4608 × 3072    1920 × 1080
loads in n                           0 to 200       0 to 250
* international organization for standardization (iso)

table 3. details of the cp measurement system.
parameter                                sla
software                                 i-scan
5040n sensing area (l × w × d) in mm     44.0 × 44.0 × 0.1
5040n sensing resolution in sensel/cm²   100.0
image scale (8 bit) in dl                0-255
images resolution in pixel × pixel       430 × 430
loads in n                               0 to 200

table 4. details of the systems configuration used for tsa measurements.
parameter                            sla                    fdm
thermal camera                       flir titanium sc7200   flir a6751sc
software                             altair li              flir research ir + matlab
images resolution in pixel × pixel   640 × 512              640 × 512
filter                               low-pass               median
load frequency in hz                 5, 10, 15              5, 10, 15
preload in n                         20, 50, 80             50, 75, 100
peak-peak load in n                  5, 10, 15              20, 40, 60
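a minimal sketch of the single-frequency lock-in demodulation underlying tsa is given below, assuming the ir frames are available as a numpy stack; this illustrates the principle only and is not the altair li or research ir processing pipeline.

```python
import numpy as np

def tsa_lockin(frames, f_load, fs):
    """demodulate a stack of ir frames (n_frames, h, w) at the load frequency.

    returns the per-pixel amplitude and phase of the thermoelastic signal."""
    n = frames.shape[0]
    t = np.arange(n) / fs
    ref_i = np.sin(2.0 * np.pi * f_load * t)   # in-phase reference
    ref_q = np.cos(2.0 * np.pi * f_load * t)   # quadrature reference
    # correlate every pixel history with the two references
    i_comp = np.tensordot(ref_i, frames, axes=(0, 0)) * 2.0 / n
    q_comp = np.tensordot(ref_q, frames, axes=(0, 0)) * 2.0 / n
    amplitude = np.hypot(i_comp, q_comp)       # delta-t, linked to the stress sum
    phase = np.degrees(np.arctan2(q_comp, i_comp))
    return amplitude, phase

# e.g., with the parameters of table 4: tsa_lockin(frames, f_load=5.0, fs=100.0)
```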
3. results and discussion
3.1. 2d dic measurements
as an example, figure 6 displays the full-field maps of the displacements of the airless wheels along the vertical direction (y-axis). for the ppr sample (sla-printed), the displacement is uniformly high on the beams and nodes of the angular portion of the wheel when the wheel-ground contact takes place in the rhombic and mixed configurations (figure 6 a and figure 6 b). the trapezoidal configuration (figure 6 c) shows a displacement mainly concentrated on the external ring of the wheel rather than on the beams (i.e., the weakest region), considering the unloaded stage. furthermore, for the rhombic and trapezoidal configurations (figure 6 a and figure 6 c), the displacements appear to be symmetrical when comparing the loading line with the other sectors, which seem to be relatively unloaded. finally, taking into account the trapezoidal configurations of the two types of samples (figure 6 c and figure 6 d), they exhibit the same trend, even if that of the pla sample (fdm-printed) shows a significant reduction. indeed, even though the pla sample was subjected to a greater load (25 % more with respect to the ppr sample), its maximum displacement is an order of magnitude smaller than that of the ppr wheel.

figure 6. displacement along the vertical direction (y-axis) for the airless wheel in different contact configurations: a) rhombic, b) mixed and c) trapezoidal for the ppr sample at 200 n and d) trapezoidal for the pla one at 250 n; specifically, c) and d) have two distinct scales, different by an order of magnitude.

figure 7 reports the full-field maps of the measured strain of the airless wheels along the vertical direction (y-axis). in each contact configuration investigated, the strain is well distributed over the lattice structure but concentrated at the nodes. the strain analysis demonstrates that flexural effects prevail, with maximum positive values (i.e. tensile strain) at the vertexes of the beams and maximum negative values (i.e. compressive strain) along the interconnecting segments. a marked symmetry can be highlighted in the trapezoidal and rhombic configurations (figure 7 a and figure 7 c). only for the trapezoidal configuration does the most stressed region correspond to the sector along the loading line. specifically, for the ppr sample (figure 7 c), the external ring of the wheel shows a high strain value in the contact area. the elastic behaviour of the connecting beams of the lattice structure produced by relatively high loads is also very clear. conversely, although the trend is similar to the one measured in the former case, the pla sample (figure 7 d) shows a maximum strain greater than that of the ppr sample. this could be a consequence of the specific material used for manufacturing the wheel and of the loading system. indeed, the shaker appeared to be less efficient in transferring the load with respect to the electromechanical material testing machine, which was more rigid. the lower level of effective load transferred to the wheel is reflected in a higher noise level, which sometimes prevented an accurate identification of the strains.

figure 7. strain along the vertical direction (y-axis) for the airless wheel in different contact configurations: a) rhombic, b) mixed and c) trapezoidal for the ppr sample at 200 n and d) trapezoidal for the pla one at 250 n; specifically, c) and d) have two distinct scales, different by a factor three.

figure 8 exhibits the trend of the y-strain along the trabeculae, from the external ring to the wheel hub, in the trapezoidal configuration for the ppr sample. as already discussed for figure 7, figure 8 highlights the flexural effects on the trabeculae. indeed, the maximum positive values correspond to the intersections of the trabeculae (points b, d, f and h), while the maximum negative values (close to points c, e, g and i) correspond to the centreline of the individual trabeculae. furthermore, the considered trabeculae, two in the trapezoidal sector (b-d and d-f) and two in the rhombic sector (f-h and h-i), present approximately similar trends. this can reasonably be considered the same for all the trabeculae of the wheel.

figure 8. y-strain along the trabeculae ("white line" in the strain map) for the ppr sample in trapezoidal configuration at 200 n.
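the profile of figure 8 amounts to sampling the strain map along a polyline that follows the trabeculae. a sketch of such sampling is given below, where the strain map is assumed to be a 2d array and the polyline a list of pixel vertices (both hypothetical inputs, not the gom export format).

```python
import numpy as np

def sample_along_polyline(strain_map, vertices, step=1.0):
    """sample a strain map along a polyline given as (x, y) pixel vertices."""
    profile, distance = [], []
    travelled = 0.0
    for (x0, y0), (x1, y1) in zip(vertices[:-1], vertices[1:]):
        seg_len = np.hypot(x1 - x0, y1 - y0)
        n = max(int(seg_len / step), 1)
        for k in range(n):
            x = x0 + (x1 - x0) * k / n
            y = y0 + (y1 - y0) * k / n
            # nearest-neighbour lookup; bilinear interpolation would be smoother
            profile.append(strain_map[int(round(y)), int(round(x))])
            distance.append(travelled + seg_len * k / n)
        travelled += seg_len
    return np.asarray(distance), np.asarray(profile)
```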
3.2. tsa measurements
in tsa measurements, the lock-in process allows the entire sequence of recorded ir images to be condensed into only two images, which represent the amplitude (figure 9) and the phase (figure 10) of the thermoelastic signal. as is well known, the first one shows differential temperatures δt that are linked to the sum of the principal components of the stress tensor. also in this case, as already highlighted for the strain measurements, the loading apparatus used for transferring the load to the airless wheel hub plays a key role in determining the effective stress level to which the structure is subjected. however, for both the ppr and the pla samples, tsa guarantees the correct identification of the most stressed regions, which are found to be located at the vertex connections of the beams and to be confined to the lower sectors of the wheel. tsa measurements were performed out of the frontal view of the wheel, i.e. by tilting the optical axis of the ir camera with respect to the wheel axis. more in detail, camera-wheel relative angles of 35° and 45° were used for the ppr and pla samples, respectively. this configuration was introduced because the frontal view was subject to large edge effects due to the thin shaped beams. specifically, for the ppr sample, the rhombic and mixed configurations (figure 9 a and figure 9 b) exhibit a wide load distribution, marked at the vertex connections of the beams and at the sectors crossed by the loading line. conversely, in the trapezoidal configuration (figure 9 c) the most stressed areas are reduced and concentrated in the lower sectors of the wheel. comparing the trapezoidal configuration of the ppr with that of the pla, the latter (figure 9 d) presents a more limited load amplitude. the phase distribution (figure 10) is in line with the expected distributions for all the wheel-test configurations.

figure 9. amplitude of the tsa signal for the airless wheel in different contact configurations: a) rhombic, b) mixed and c) trapezoidal for the ppr samples and d) trapezoidal for the pla ones (harmonic excitation at 5 hz, with 15 n peak-to-peak load and 80 n preload).

figure 10. phase of the tsa signal for the airless wheel in different contact configurations: a) rhombic, b) mixed and c) trapezoidal for the ppr samples and d) trapezoidal for the pla ones (harmonic excitation at 5 hz, with 15 n peak-to-peak load and 80 n preload).

3.3. cp mapping
the wheel-ground contact influences both the motion transfer and the wear of the rubber tread. therefore, the evaluation of the cps is useful for estimating both the functionality and the stability of a vehicle. as an example, figure 11 shows the increment of the cps for the trapezoidal configuration of the ppr wheel. the cp maps show quite a good symmetry: the small dissimilarities are due to the connection of the airless wheel to the loading apparatus and to the relative geometric tolerances. it is evident that the lattice structure has a decisive influence on the redistribution of the load. indeed, the beams, as previously emphasized, are the most rigid part of the airless wheel, hence the greatest cps are estimated in correspondence with their mark on the ground. this matches well the results obtained in the strain analysis when the wheel is tested in the same topology arrangement (figure 7 c). figure 12 compares the cp maps of the three different configurations at the maximum load (i.e. 200 n) on the ppr wheel. also in this case, it is possible to identify a good symmetry, except for the mixed configuration. the largest contact area is that of the trapezoidal configuration, while the average cp is greater for the mixed one. finally, the rhombic configuration shows the smallest contact area, but also the best symmetry. the maximum value of the cp is highlighted in correspondence with the beam of the mixed configuration. in both the mixed and rhombic configurations, the buckling effect would seem to be negligible due to the short arc of the external ring of the wheel between two successive beams.

figure 11. cp map of the trapezoidal configuration for the ppr wheel.
figure 12. comparison between the cp maps for the configurations: a) rhombic, b) mixed and c) trapezoidal of the ppr wheel at 200 n.
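given the 8-bit sensel maps of table 3, the contact area and the average cp follow directly once the digital levels are calibrated. the sketch below assumes a linear, purely hypothetical calibration factor together with the 100 sensel/cm² resolution of the sensor.

```python
import numpy as np

def cp_statistics(raw_map, dl_to_pressure=1.0, sensel_area_cm2=0.01):
    """contact statistics from an 8-bit piezoresistive sensel map (0-255 dl).

    dl_to_pressure is a hypothetical linear calibration factor;
    100 sensel/cm2 (table 3) gives a sensel area of 0.01 cm2."""
    pressure = raw_map.astype(float) * dl_to_pressure
    loaded = pressure > 0.0
    contact_area = float(loaded.sum()) * sensel_area_cm2        # in cm^2
    mean_cp = float(pressure[loaded].mean()) if loaded.any() else 0.0
    return contact_area, mean_cp, float(pressure.max())
```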
3.4. finite element analysis
as a further analysis, a finite element model (figure 13) of the airless wheel was developed. this model was created to calculate the displacements and the strains of the wheel under ideal conditions (in terms of geometry, loads and constraints). the structure was meshed with about 2.5 million solid tetrahedral elements, considering a fixed support at the wheel hub and a vertical static force at the wheel-ground interface. furthermore, to improve the computation resolution, the mesh was refined in those areas where stress and strain were expected to be more relevant (figure 14). as an example, figure 15 displays the maps relative to the computation of the y-displacement and the y-strain for the pla airless wheel printed by fdm. the numerical simulation highlights the same trends observed in the experimental results. the computed strain closely resembles the measured one, showing that the interconnection segments of the lattice structure are mainly subjected to bending, exhibiting both tensile and compressive stresses. hence, the presented model might be used to analyse alternative configurations of the lattice geometry, laying the foundations for a structural optimization of the topology of the whole airless wheel.

figure 13. finite element model of the airless wheel.
figure 14. details of the mesh quality of the airless wheel in a) frontal view and in b) 45° view.
figure 15. finite element analysis of the airless wheel: a) y-displacement and b) y-strain at a load of 250 n on the pla sample.

4. conclusions
this paper addresses the applicability of non-contact techniques, such as dic and tsa, to measure the actual stress-strain field of complex lattice structures printed by am. the chosen structure is based on the innovative airless wheel concept. this type of design could be widely applied to different vehicle models, taking advantage of its specific features. globally, a puncture-proof wheel improves safety and reliability by reducing replacements due to critical events and preventing a rapid decrease in vehicle stability. furthermore, the mechanical response of such a structure could be enhanced by increasing the adhesion to the road surface and limiting the tread wear. specifically, the possible uses of an airless wheel are heterogeneous. for example, it could be a good solution for skateboards, exploiting the high deformability of the lattice structure, or for aircraft and aerospace vehicles, thanks to the possibility of reducing weight while maintaining the same performance. in summary, the airless wheel should be of great interest for the next generation of full-electric cars. an airless wheel prototype, manufactured by different 3d printing technologies (fdm and sla) and made with different polymeric materials (pla and ppr), has been tested by employing the dic and tsa techniques. the wheel-ground interaction has been studied by mapping the cps. a parametric finite element model of the same wheel has been developed.
furthermore, the experimental tests have been performed in two separate laboratories, also using specific loading systems and instrumentation. the results achieved have demonstrated some critical aspects that should be considered in the characterization of such a system: the type of material used for manufacturing the sample, the loading system and the topology of the lattice structure. in this sense, considering a numerical model previously validated, at least in terms of contact pressure distribution, can help in dealing with these issues. indeed, the proposed model might be used to analyse alternative configurations of the lattice geometry, laying the foundations for a structural optimization of the lattice topology with specific objective functions (e.g., uniform stress distribution or minimum weight). according to these outcomes, the present study has confirmed the effectiveness of the non-contact techniques (dic and tsa) for measuring the spatial distribution of both strain and stress fields in functional and complex structures obtained from am. specifically, these investigation techniques have shown that the mechanical response of a lattice structure exhibits considerable complexity. in fact, although strain and stress are well distributed over almost all regions, concentrations have been identified in correspondence of the nodes of the trabeculae and in the lower sectors of the wheel. for example, this suggests moving towards a topological optimization that involves increasing the thickness of the trabeculae at the nodes and along the external ring, thinning the section of the trabeculae at their centreline and reducing the excess of material closer to the wheel hub.

references
[1] t. d. ngo, a. kashani, g. imbalzano, k. t. nguyen, d. hui, additive manufacturing (3d printing): a review of materials, methods, applications and challenges, compos. b. eng. 143 (2018), pp. 172-196. doi: 10.1016/j.compositesb.2018.02.012
[2] g. luchao, w. wenwang, s. lijuan, f. daining, damage characterizations and simulation of selective laser melting fabricated 3d re-entrant lattices based on in-situ ct testing and geometric reconstruction, int. j. mech. sci. 157-158 (2019), pp. 231-242. doi: 10.1016/j.ijmecsci.2019.04.054
[3] t. pereira, j. v. kennedy, j. potgieter, a comparison of traditional manufacturing vs additive manufacturing, the best method for the job, procedia manuf. 30 (2019), pp. 11-18. doi: 10.1016/j.promfg.2019.02.003
[4] v. asnani, d. delap, c. creager, the development of wheels for the lunar roving vehicle, j. terramechanics 46 (2009), pp. 89-103. doi: 10.1016/j.jterra.2009.02.005
[5] m. siebold, additive manufacturing for serial production of high-performance metal parts, mech. eng. 141 (2019), pp. 49-50. doi: 10.1115/1.2019-may5
[6] f. caiazzo, v. alfieri, b. d. bujazha, additive manufacturing of biomorphic scaffolds for bone tissue engineering, int. j. adv. manuf. syst. 113 (2021), pp. 2909-2923. doi: 10.1115/1.2019-may5
[7] e. umaras, m. s. tsuzuki, additive manufacturing - considerations on geometric accuracy and factors of influence, ifac-papersonline 50 (2017), pp. 14940-14945. doi: 10.1016/j.ifacol.2017.08.2545
[8] a. sutradhar, j. park, d. carrau, m. j. miller, experimental validation of 3d printed patient-specific implants using digital image correlation and finite element analysis, comput. biol. med. 52 (2014), pp. 8-17. doi: 10.1016/j.compbiomed.2014.06.002
[9] l. s. dimas, m. j. buehler, modeling and additive manufacturing of bio-inspired composites with tunable fracture mechanical properties, soft matter 10 (2014), pp. 4436-4442. doi: 10.1039/c3sm52890a
[10] i. gibson, d. rosen, b. stucker, m. khorasani, design for additive manufacturing, addit. manuf. tech. (2021), pp. 555-607. doi: 10.1007/978-3-030-56127-7_19
[11] g. liu, x. zhang, x. chen, y. he, l. cheng, m. huo, et al., additive manufacturing of structural materials, mater. sci. eng. rep. 145 (2021), 100596. doi: 10.1016/j.mser.2020.100596
[12] a. visco, c. scolaro, t. terracciano, r. montanini, a. quattrocchi, l. torrisi, n. restuccia, static and dynamic characterization of biomedical polyethylene laser welding using biocompatible nano-particles, epj web of conferences 167 (2018), 05009. doi: 10.1051/epjconf/201816705009
[13] v. m. r. santos, a. thompson, d. sims-waterhouse, i. maskery, p. woolliams, r. leach, design and characterisation of an additive manufacturing benchmarking artefact following a design-for-metrology approach, addit. manuf. 32 (2020), 100964. doi: 10.1016/j.addma.2019.100964
[14] c. camposeco-negrete, j. varela-soriano, j. j. rojas-carreón, the effects of printing parameters on quality, strength, mass, and processing time of polylactic acid specimens produced by additive manufacturing, prog. addit. manuf. (2021), pp. 1-20. doi: 10.1007/s40964-021-00198-y
[15] n. narra, j. valášek, m. hannula, p. marcián, g. k. sándor, j. hyttinen, j. wolff, finite element analysis of customized reconstruction plates for mandibular continuity defect therapy, j. biomech. 47 (2014), pp. 264-268. doi: 10.1007/s40964-021-00198-y
[16] c. ling, a. cernicchi, m. d. gilchrist, p. cardiff, mechanical behaviour of additively-manufactured polymeric octet-truss lattice structures under quasi-static and dynamic compressive loading, mater. des. 162 (2019), pp. 106-118. doi: 10.1016/j.matdes.2018.11.035
[17] e. alabort, d. barba, r. c. reed, design of metallic bone by additive manufacturing, scr. mater. 164 (2019), pp. 110-114. doi: 10.1016/j.scriptamat.2019.01.022
[18] f. li, j. li, g. xu, g. liu, h. kou, l. zhou, fabrication, pore structure and compressive behavior of anisotropic porous titanium for human trabecular bone implant applications, j. mech. behav. biomed. mater. 46 (2015), pp. 104-114. doi: 10.1016/j.jmbbm.2015.02.023
[19] g. bi, c. n. sun, h. c. chen, f. l. ng, c. c. k. ma, microstructure and tensile properties of superalloy in100 fabricated by micro-laser aided additive manufacturing, mater. des. 60 (2014), pp. 401-408.
doi: 10.1016/j.matdes.2014.04.020 [20] f. brenne, t. niendorf, h.j. maier, additively manufactured cellular structures: impact of microstructure and local strains on the monotonic and cyclic behavior under uniaxial and bending load, j. mater. process. technol. 213 (2013), pp. 1558-1564. doi: 10.1016/j.jmatprotec.2013.03.013 [21] b. pan, k. qian, h. xie, a. asundi, two-dimensional digital image correlation for in-plane displacement and strain measurement: a review, mst 20 (2009), 062001. doi: 10.1088/0957-0233/20/6/062001 [22] f. lo savio, m. bonfanti, g.m. grasso, d. alizzio, an experimental apparatus to evaluate the non-linearity of the acoustoelastic effect in rubber-like materials, polym. test. 80 (2019), 106133. doi: 10.1016/j.polymertesting.2019.106133 [23] d. de domenico, a. quattrocchi, d. alizzio, r. montanini, s. urso, g. ricciardi, a. recupero, experimental characterization of the frcm-concrete interface bond behavior assisted by digital image correlation, sensors 21 (2021), 1154. doi: 10.3390/s21041154 [24] d. de domenico, a. quattrocchi, s. urso, d. alizzio, r. montanini, g. ricciardi, a. recupero, experimental investigation on the bond behavior of frcm-concrete interface via digital image correlation, proc. of european workshop on structural health monitoring, july 2020 pp. 493-502. doi: 10.1007/978-3-030-64908-1_46 [25] n. vanderesse, a. richter, n. nuño, p. bocher, measurement of deformation heterogeneities in additive manufactured lattice materials by digital image correlation: strain maps analysis and reliability assessment, j. mech. behav. biomed. mater. 86 (2018), pp. 397-408. doi: 10.1016/j.jmbbm.2018.07.010 [26] g. allevi, m. cibeca, r. fioretti, r. marsili, r. montanini, g. rossi, qualification of additively manufactured aerospace brackets: a comparison between thermoelastic stress analysis and theoretical results, measurement 126 (2018), pp. 252-258. doi: 10.1016/j.measurement.2018.05.068 [27] l. capponi, j. slavič, g. rossi, m. boltežar, thermoelasticitybased modal damage identification, int. j. fatigue 137 (2020), 105661. doi: 10.1016/j.ijfatigue.2020.105661 [28] d. palumbo, u. galietti, data correction for thermoelastic stress analysis on titanium components, exp. mech. 56 (2016), pp. 451462. doi: 10.1007/s11340-015-0115-0 [29] a. quattrocchi, d. palumbo, d. alizzio, u. galietti, r. montanini, thermoelastic stress analysis of titanium biomedical spinal cages printed in 3d, proc. of qirt2020, porto, portugal, 10 july 2020, pp. 1-7. doi: 10.21611/qirt.2020.098 [30] g. allevi, p. castellini, p. chiariotti, f. docchio, r. marsili, r. montanini, a. quattrocchi, r. rossetti, g. rossi, g. sansoni, e.p. tomasini, qualification of additive manufactured trabecular structures using a multi-instrumental approach. proc. i2mtc 2019, auckland, new zealand, 20-23 may 2019, pp. 1-6. doi: 10.1109/i2mtc.2019.8826969 [31] g. allevi, l. capponi, p. castellini, p. chiariotti, f. docchio, f. freni, r. marsili, m. nartarelli, r. montanini, s. pasinetti, a. quattrocchi, r. rossetti, g. rossi, s. sansoni, e.p. tomasini, investigating additive manufactured lattice structures: a multiinstrument approach, ieee tim 69 (2020), pp. 2459-2467. doi: 10.1109/tim.2019.2959293 [32] r. montanini, g. rossi, a. quattrocchi, d. alizzio, l. capponi, r. marsili, a. di giacomo, t. tocci, structural characterization of complex lattice parts by means of optical non-contact measurements, proc. of i2mtc 2020, dubrovnik, croatia, 25-28 may 2020, pp. 1-6. 
doi: 10.1109/i2mtc43012.2020.9128771
[33] bridgestone airless tires. online [accessed 28 july 2022] https://www.bridgestonetire.com/tread-and-trend/tire-talk/airless-concept-tires
[34] michelin, gm take the air out of tires for passenger vehicles. online [accessed 28 july 2022] https://www.michelin.com/en/press-releases/michelin-gm-take-the-air-out-of-tires-for-passenger-vehicles/
[35] h. wang, h. xie, y. li, j. zhu, fabrication of micro-scale speckle pattern and its applications for deformation measurement, meas. sci. technol. 23 (2012), 035402. doi: 10.1088/0957-0233/23/3/035402

spectrum sensing using energy measurement in wireless telemetry networks using logarithmic adaptive learning

acta imeko issn: 2221-870x march 2022, volume 11, number 1, 1-7

spectrum sensing using energy measurement in wireless telemetry networks using logarithmic adaptive learning
nagesh mantravadi1, md zia ur rahman1, sala surekha1, navarun gupta2
1 department of electronics and communication engineering, koneru lakshmaiah education foundation, k l university, vaddeswaram, guntur, andhra pradesh, 522502, india
2 department of electrical engineering, university of bridgeport, bridgeport, ct 06604, usa

section: research paper
keywords: adaptive algorithm; cognitive radio; energy measurement; noise uncertainty; threshold point
citation: nagesh mantravadi, md zia ur rahman, sala surekha, navarun gupta, spectrum sensing using energy measurement in wireless telemetry networks using logarithmic adaptive learning, acta imeko, vol. 11, no.
1, article 34, march 2022, identifier: imeko-acta-11 (2022)-01-34
section editor: md zia ur rahman, koneru lakshmaiah education foundation, guntur, india
received december 28, 2021; in final form february 18, 2022; published march 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: sala surekha, e-mail: surekhasala@gmail.com

abstract
the spectrum sensing method is used to identify primary user signals in cognitive radios. due to statistical variations in the received signal, noise is present in the primary user signals; the noise powers vary because of the random nature of the noise signals, which leads to a noise uncertainty problem in the performance of energy detection. measuring the energy and then detecting the unused frequency spectrum is a key task in cognitive radio applications. to avoid these problems, the least logarithmic absolute difference (llad) algorithm is proposed, in which the noise powers are adjusted at the sensing point of the licensed users. with the help of the proposed method, the estimated noise signals are eliminated. the sign regressor version of the llad algorithm is also considered, because it reduces the computational complexity and improves the convergence rate. further, the probability of detection (pod) and the probability of false alarm (pofa) are estimated to determine the threshold value. the results show good performance in terms of pofa versus pod in the low signal-to-noise-ratio range with multiple nodes. therefore, the proposed energy-measurement-based spectrum sensing method is useful in remote health care monitoring and medical telemetry applications, by sharing the unused spectrum.

1. introduction
radio frequencies are limited natural resources and are controlled by government authorities. in a particular band, the primary users can exclusively use the licensed spectrum even when it is unoccupied, as the unlicensed secondary users are prevented from using it. due to the enormous growth in wireless communication applications, the usage of radio frequencies is increasing rapidly. to overcome these spectrum utilization issues, cognitive radio systems have emerged as a new technology [1]-[3] in wireless communications. considerable effort has been devoted to enhancing the usage efficiency of cognitive radio systems; there is a large variation in the usage of the frequency, time and space domains, and secondary users are then allowed in without creating any interference to the primary users. to overcome such interference [4], [5] in wireless communications and to increase spectrum utilization, the ieee 802.22 wireless frequency band can be used. with the help of these frequency bands, data can be transmitted without causing interference in health care monitoring wireless applications using zigbee, wi-fi and ad hoc networks with operating frequency less than 3 mhz. orthogonal frequency division multiplexing (ofdm) is an eminent method used in wireless communication systems. channel estimation issues raised in ofdm-based cognitive radio systems are discussed in [6]-[8], in terms of the mean square error (mse) of the channel estimates; the unknown noise components affecting the spectrum sensing have also been studied. the performance of channel estimation techniques for imperfect channels has been analysed for correlated channels of a cognitive radio system; the mutual information at the input and output is then considered to provide better communication sensing under channel uncertainty. the receiver's minimum mean square error is then computed to obtain the fading coefficients of the fading channel, as well as the anticipated attainable rate for gaussian signalling and linear modulation schemes, assuming that
interference and channel estimation errors occur at the primary users [9], [10]. spectrum sensing is used to avoid difficulties with spectrum underutilization and interference. the most commonly used detection techniques are wavelet detection, cyclostationary detection, energy detection, and covariance detection. the first three techniques require prior information on the primary user signal, frequency components, interference, and noise variance, whereas the fourth approach requires no prior data and is hence the most often used spectrum sensing method. in addition, for energy sensing, the circuit implementation and computational complexity are minimal. spectrum sensing [11]-[17] performance is assessed in terms of the false alarm and detection probabilities; it is assumed that the primary user is idle or active during the spectrum sensing period, and spectrum holes are then detected in the spectrum frequency range on this basis. there is a compromise between detection probability and false alarm; however, by utilizing these factors, we can see whether there was any interference between the primary and secondary users. in [18] the sensing trade-off in cognitive radios is studied when numerous primary users arrive randomly. the trade-off is also caused by the performance of cognitive radio spectrum utilization and spectrum sensing, which is based mostly on primary user activity. a numerical technique was used to investigate this trade-off utilizing cooperative spectrum sensing. the spectrum potential for successful communication between a transmitter and a receiver of cognitive networks has been investigated. it has also been observed that when the secondary user sensing time increases, the false alarm probability is reduced; this means that secondary users have a high chance of accessing an idle channel, but less chance to transmit, because the transmission time is limited. the main objective is to use the spectrum efficiently, with optimum sensing, to improve the spectrum opportunities of cognitive radio networks. in practice, cognitive radios suffer channel sensing errors because of missed detections and false alarms; these problems affect the channel estimation [19] design and cause channel estimation error problems, and the receiver operating characteristics have also been analysed for these parameters. further, cooperative cognitive radio has been considered for the analysis of fading scenarios with an improved energy detection method. for this, nakagami multipath fading is considered with power-2-based energy detection spectrum sensing [20], [21]; the performance is then improved in cognitive radio networks in terms of the detection probability parameters, and their operating curves are also analysed. in spectrum sensing, and thereafter in spectrum allocation, one of the most promising methods is energy detection. in this work a new energy measurement methodology is proposed, based on the least logarithmic absolute difference algorithm. this methodology of energy detection and measurement is a key task in measurement technology as well as in cognitive radio-based communication systems. to overcome noise uncertainty problems, double-threshold-based spectrum sensing has been studied to improve energy detection. the cognitive radio concept is widely used in health care applications, but the interference caused by wireless devices affects the performance. to avoid such interference in health care, a novel cognitive radio method based on a modified normalized least mean square (mnlms) algorithm has been proposed for hospital environment applications, so that errors/interferences occurring with medical devices are removed; its performance was evaluated using matlab simulations.
2. system model
spectrum sensing is the most used method for detecting the spectrum holes of a cognitive radio network. by using this method, we can decide whether the primary user is absent or present. the energy detection block diagram is shown in figure 1.

figure 1. energy detector block diagram.

using hypothesis testing, the detection problem is formulated as

$$T_0:\; z(t) = w(t), \qquad T_1:\; z(t) = w(t) + s(t), \qquad (1)$$

where $z(t)$ is the received sample signal, $w(t)$ is the noise affecting the transmitted signal, and $s(t)$ is the primary transmitted signal, with $t = 1, \dots, T$ the record length used for identifying the spectrum. the energy detection method [22] is considered for detecting spectrum holes: the energy level is measured and the noise variance is estimated by placing a detection threshold value. the secondary user then decides using the energy detection statistic

$$D = \sum_{t=1}^{T} [z(t)]^2 . \qquad (2)$$

the decision statistic has a central chi-square distribution with $T$ degrees of freedom when the primary user signal is absent, and a non-central chi-square distribution when the primary user is present. the central limit theorem with a gaussian approximation can be used if the number of detection samples is greater than 250; the mean and variance of the decision statistic are then given by

$$D \sim \mathcal{N}\!\left(T\sigma_{wt}^2,\; 2T\sigma_{wt}^4\right) \text{ under } T_0, \qquad D \sim \mathcal{N}\!\left(T(\sigma_{st}^2+\sigma_{wt}^2),\; 2T(\sigma_{st}^2+\sigma_{wt}^2)^2\right) \text{ under } T_1 . \qquad (3)$$

in testing $T_0$ and $T_1$, false alarm and misdetection errors occur due to the false identification of $T_0$ and $T_1$. the energy detector performance is measured by the probabilities of these two errors. a false alarm corresponds to wrongly declaring the spectrum band occupied; a missed detection corresponds to declaring the primary user absent when it is actually present, and its complement is the probability of detection. the false alarm and detection probabilities are evaluated as

$$P_{\mathrm{ofa}} = Q\!\left(\frac{\delta_d - T\sigma_{wt}^2}{\sqrt{2T\sigma_{wt}^4}}\right) \qquad (4)$$

$$P_{\mathrm{od}} = Q\!\left(\frac{\delta_d - T(\sigma_{st}^2+\sigma_{wt}^2)}{\sqrt{2T(\sigma_{st}^2+\sigma_{wt}^2)^2}}\right) , \qquad (5)$$

where $\delta_d = \sigma_{wt}^2 \left(Q^{-1}(P_{\mathrm{ofa}})\sqrt{2T} + T\right)$ is the threshold.
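equations (4) and (5) translate directly into code; below is a sketch using scipy, where the gaussian q-function is norm.sf and its inverse is norm.isf (the numeric values in the example are illustrative, not the paper's settings).

```python
import numpy as np
from scipy.stats import norm

def threshold(p_fa, noise_var, t):
    """detection threshold delta_d from the target false-alarm probability."""
    return noise_var * (norm.isf(p_fa) * np.sqrt(2.0 * t) + t)

def detection_probability(delta, noise_var, signal_var, t):
    """probability of detection for a given threshold, eq. (5)."""
    mean = t * (signal_var + noise_var)
    std = np.sqrt(2.0 * t) * (signal_var + noise_var)
    return norm.sf((delta - mean) / std)

# illustrative example: t = 5000 samples, unit noise power, snr = -10 db
t, noise_var = 5000, 1.0
signal_var = noise_var * 10 ** (-10 / 10)
delta = threshold(p_fa=0.1, noise_var=noise_var, t=t)
p_d = detection_probability(delta, noise_var, signal_var, t)
```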
2.1. least logarithmic absolute difference algorithm
the least logarithmic absolute difference (llad) technique elegantly and gradually adapts the conventional cost function depending on the amount of error. in impulse-free noise environments, the lms and llad algorithms exhibit similar convergence behaviour, while the llad algorithm is robust against impulsive interference and outperforms the sign algorithm [23], [24]. the flowchart of the llad algorithm is shown in figure 2.

figure 2. spectrum sensing using llad.

for the mathematical modelling, let $T$ be the filter length and $\mu$ the step size parameter, with $w(0) = 0$ as the initial condition. the $T$-by-1 tap-input vector of the filter at time $n$ is

$$x(n) = [x(n), x(n-1), \dots, x(n-T+1)]^{\mathrm{T}} , \qquad (6)$$

where $w(n)$ is the tap-weight vector, $d(n)$ is the desired response at time $n$, $\omega_o$ is an unknown vector and $(\cdot)^{\mathrm{T}}$ denotes transposition. the quantity to be computed is the updated tap-weight vector $w(n+1)$ at time $n+1$. the unknown vector $\omega_o$ is represented with the linear model

$$d(n) = \omega_o^{\mathrm{T}} x(n) + n_t . \qquad (7)$$

the instantaneous estimate of the gradient vector is written as

$$\hat{\nabla} J(n) = -2\, x(n)\, d^{*}(n) + 2\, x(n)\, x^{\mathrm{T}}(n)\, w(n) , \qquad (8)$$

where $x(n)$ is the input tap vector and $w(n)$ is the tap-weight vector. $w(n)$ is a random vector that depends on $x(n)$, with its taps stored as

$$w(n) = [w_0(n), w_1(n), \dots, w_{T-1}(n)]^{\mathrm{T}} . \qquad (9)$$

the output of the filter is

$$y(n) = w_0(n)\, x(n) + \dots + w_{T-1}(n)\, x(n-T+1) = w^{\mathrm{T}}(n)\, x(n) . \qquad (10)$$

the expression for the estimation error is

$$e(n) = d(n) - w^{\mathrm{T}}(n)\, x(n) , \qquad (11)$$

where the term $w^{\mathrm{T}}(n)\, x(n)$ is the inner product of $w(n)$ and $x(n)$. the normalized error cost function introduced through the logarithmic function is

$$J(e(n)) = F(e(n)) - \frac{1}{\alpha} \ln\!\left(1 + \alpha F(e(n))\right) . \qquad (12)$$

based on the steepest descent method, the general weight update recursion is

$$w(n+1) = w(n) - \mu\, \nabla J(n) . \qquad (13)$$

the recursive relation for $\nabla J(n)$ is written as

$$\nabla J(n) = E\{\nabla |e(n)|^2\} = E\{e(n)\, \nabla e^{*}(n)\}, \qquad \nabla e^{*}(n) = -x^{*}(n) , \qquad (14)$$

so that the resulting expression for the gradient vector is

$$\nabla J(n) = -E\{e(n)\, x^{*}(n)\} . \qquad (15)$$

the signum function is defined as

$$\mathrm{sign}\{x(n)\} = \begin{cases} 1, & x(n) > 0 \\ 0, & x(n) = 0 \\ -1, & x(n) < 0 \end{cases} \qquad (16)$$

the step size bound for mean-square convergence of the lms algorithm is

$$0 < \mu < \frac{2}{x^{\mathrm{T}}(n)\, x(n)} . \qquad (17)$$

by substituting the estimate $\hat{\nabla} J(n)$ into the steepest descent algorithm, the recursive relation for updating the tap-weight vector becomes

$$w(n+1) = w(n) + \mu\, x(n) \left[ d^{*}(n) - x^{\mathrm{T}}(n)\, w(n) \right] . \qquad (18)$$

to provide robustness against impulsive interferences, a cost function with a normalized error is introduced through the logarithmic function, stated as

$$F(e(n)) = E[|e(n)|] . \qquad (19)$$

the stochastic gradient update is then

$$w(n+1) = w(n) + \mu\, x(n)\, \frac{\partial F(e(n))}{\partial e(n)} \left[ \frac{\alpha F(e(n))}{1 + \alpha F(e(n))} \right] , \qquad (20)$$

where $\alpha > 0$ is a design parameter and $F(e(n))$ is the conventional cost function of the error signal $e(n)$. for $|\alpha F(e(n))| \le 1$, applying the maclaurin series of the natural logarithm to (12) gives

$$J(e(n)) = F(e(n)) - \frac{1}{\alpha}\left( \alpha F(e(n)) - \frac{\alpha^2}{2} F^2(e(n)) + \dots \right) . \qquad (21)$$

for low values of $F(e(n))$, this is an infinite combination of conventional cost functions. for larger error values, the cost function $J(e(n))$ approaches $F(e(n))$, since

$$F(e(n)) - \frac{1}{\alpha}\ln\!\left(1 + \alpha F(e(n))\right) \to F(e(n)) . \qquad (22)$$

thus the general stochastic gradient update is stated as

$$w(n+1) = w(n) + \mu\, x(n)\, \frac{\partial F(e(n))}{\partial e(n)} \left[ \frac{\alpha F(e(n))}{1 + \alpha F(e(n))} \right] . \qquad (23)$$

at a convex cost function whose norm is a power of the least probable error, the sign algorithm delivers a slow rate of convergence. for $F(e(n)) = E[|e(n)|]$ in (23), the resulting expression is

$$w(n+1) = w(n) + \mu\, x(n)\, \mathrm{sign}(e(n)) \left[ \frac{\alpha |e(n)|}{1 + \alpha |e(n)|} \right] . \qquad (24)$$

the weight update relation of the llad algorithm then becomes

$$w(n+1) = w(n) + \mu \left[ \frac{\alpha\, x(n)\, e(n)}{1 + \alpha |e(n)|} \right] . \qquad (25)$$

to reduce the computational complexity of lms-type recursions, signed variants are preferable. among all signed variants, the sign regressor version offers the lowest computational complexity, with a smaller number of multiplications; the sign regressor llad (srllad) algorithm is obtained by applying the sign function to each element of the input tap vector in the lms-type recursion.
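before turning to the signed variants, a direct implementation of the llad recursion (25) on illustrative signals is sketched below; the filter length, step size and design parameter α are placeholders, not the values used in the paper.

```python
import numpy as np

def llad_filter(x, d, taps=10, mu=0.01, alpha=10.0):
    """least logarithmic absolute difference adaptive filter, recursion (25).

    x: input signal, d: desired response; taps, mu, alpha are placeholders."""
    x = np.asarray(x, dtype=float)
    d = np.asarray(d, dtype=float)
    w = np.zeros(taps)                          # initial condition w(0) = 0
    e = np.zeros_like(d)
    x_pad = np.concatenate([np.zeros(taps - 1), x])
    for n in range(len(d)):
        u = x_pad[n + taps - 1::-1][:taps]      # tap-input vector x(n), eq. (6)
        e[n] = d[n] - w @ u                     # estimation error, eq. (11)
        w = w + mu * alpha * u * e[n] / (1.0 + alpha * abs(e[n]))  # eq. (25)
    return w, e
```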
2.2. sign-based least logarithmic absolute difference (llad) algorithms
the llad algorithm is a generalized version of a higher-order adaptive filter. combining the llad recursion in (25) with the three types of sign variants results in the srllad, sllad and ssllad algorithms, respectively. the weight update relations of the signed llad variants are as follows:

$$w(n+1) = w(n) + \mu\, \mathrm{sign}\{x(n)\} \left[ \frac{\alpha\, e(n)}{1 + \alpha |e(n)|} \right] \qquad (26)$$

$$w(n+1) = w(n) + \mu\, x(n)\, \mathrm{sign}\!\left\{ e(n) \left[ \frac{\alpha |e(n)|}{1 + \alpha |e(n)|} \right] \right\} \qquad (27)$$

$$w(n+1) = w(n) + \mu\, \mathrm{sign}\{x(n)\}\, \mathrm{sign}\!\left\{ e(n) \left[ \frac{\alpha |e(n)|}{1 + \alpha |e(n)|} \right] \right\} . \qquad (28)$$

in (26) the sign function acts on the regressor only (srllad), in (27) on the error term only (sllad), and in (28) on both (ssllad).
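the signed variants (26)-(28) differ from (25) only in the update line; the following sketch collects the four updates in one illustrative helper, with the same notation as the previous sketch.

```python
import numpy as np

def llad_update(w, u, e, mu=0.01, alpha=10.0, variant="llad"):
    """one weight update of llad and its signed variants, eqs. (25)-(28)."""
    if variant == "llad":     # eq. (25): full regressor, full error factor
        return w + mu * alpha * u * e / (1.0 + alpha * abs(e))
    if variant == "srllad":   # eq. (26): sign of the regressor only
        return w + mu * alpha * np.sign(u) * e / (1.0 + alpha * abs(e))
    if variant == "sllad":    # eq. (27): sign of the whole error term
        return w + mu * u * np.sign(e)
    if variant == "ssllad":   # eq. (28): sign of regressor and error term
        return w + mu * np.sign(u) * np.sign(e)
    raise ValueError(f"unknown variant: {variant}")
```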
3. results and discussion
the proposed llad-based method for spectrum sensing is used to assess performance. the spectrum sensing of the primary user transmitter and the secondary user receiver is evaluated. spectrum sensing simulations are run for 5000 samples, the filter length t is chosen as 10, and distinct signals are acquired for each noise sample to process. for improved results, a received-signal attenuation factor was introduced, and the performance of energy detection was then evaluated under various snr situations. the main goal of developing an adaptive filter within the spectrum sensing approach is to cope with the noise levels in the sensed spectrum. due to incorrect detection of the test data, this spectrum sensing may produce errors in the detection probability and in the missed detection probability. the proposed technique adjusts the threshold value for each sensing event and then adjusts the noise power accordingly. when the size of the receiving antenna grows, the averaging of the eigenvalues increases, resulting in a reduction of the estimation error. the performance of the noise power estimation is measured in terms of the mean square error (mse). when the antenna size and the number of samples are increased, the steady-state error reduces. the sensitivity of adaptive-filter-based energy detection is measured in terms of the detection probability, which is a function of the snr at a constant false alarm probability. the proposed llad algorithm performs better in terms of detection probability, and noise uncertainty is not a problem for the proposed strategy. this detection capability successfully identifies spectrum gaps, enabling spectrum reuse opportunities that would otherwise be prevented. to obtain the probability of detection (pod) and the false alarm probability (pofa) concurrently, minimal snr values are considered and the relationship between the number of samples and the noise uncertainty is adopted. to obtain a high detection probability without an adaptive method for spectrum sensing, the noise uncertainty increases at minimal snr levels. even though the number of received signal samples examined for sensing the spectrum has no influence, there are some restrictions on the snr for the detection probability performance; beyond these limits, false alarm values are sacrificed for noise uncertainty factors greater than zero. for noise uncertainty levels greater than zero, for every increasing snr value a larger number of samples is required for an improved detection probability. it is evident that raising the snr values improves the detection sensitivity under the noise circumstances of the proposed technique. theoretical values of the detection probability at different snr levels, for the basic energy detection technique and for the suggested energy detection using llad, are determined using (4) and (5). the computational complexity of various lms-based adaptive algorithms is shown in table 1. with the suggested strategy, the detection probability produces superior results, as shown in table 2. simulation curves of pofa versus pod for various snr values are provided in figure 3. for low snr levels, the detection probability performance is better, as shown in table 2 and figure 3.

table 1. computational complexities of various sign-lms-based adaptive algorithms.
s.no   algorithm   multiplications   additions   divisions
1      lms         t+1               t+1         nil
2      llad        t+4               t+2         1
3      srllad      4                 t+2         1
4      sllad       t+3               t+2         1
5      ssllad      t+2               t+2         1

table 2. performance comparison of eigenvalue-based spectrum sensing, the proposed llad and its sign variants (pfa / pd at each snr).
snr (db)                             0                -5               -10              -15              -20
eigenvalue-based spectrum sensing    0.4 / 0.6789     0.3 / 0.6989     0.3 / 0.7125     0.3 / 0.7859     0.3 / 0.8752
llad                                 0.2520 / 0.7824  0.3 / 0.8997     0.3 / 0.9581     0.3 / 0.9785     0.3 / 0.9969
sr-llad                              0.26 / 0.6241    0.2 / 0.6805     0.2 / 0.7825     0.2 / 0.8628     0.2 / 0.8853
s-llad                               0.3821 / 0.5645  0.4 / 0.7192     0.4 / 0.7403     0.4 / 0.7893     0.4 / 0.7990
ss-llad                              0.42 / 0.4561    0.5 / 0.4985     0.5 / 0.5782     0.5 / 0.5981     0.5 / 0.6782

figure 3. pofa versus pod for different snr values.
figure 4. convergence curves for sign-based llad adaptive algorithms.

signal correlations create propagation fading in wireless communications, and their influence on the receiving antenna generates correlation losses in the received signal. the performance of the energy detector is initially unaffected by signal correlations, and the noise power is evaluated using eigenvalues. the antenna correlation effect is avoided while calculating the noise power estimate by using one primary user signal, since the antenna correlation affects the larger eigenvalues more than the small ones. the main goal of the proposed llad is to improve the energy detection accuracy in low-snr and noise-uncertainty situations. the detection probability is then improved at constant false alarm probability and low snr values. threshold values are evaluated so that the energy detector can perceive changes in the noise power, in order to minimize the difficulties due to noise uncertainties; the threshold values are therefore adapted to compute the noise power estimate. compared to spectrum sensing by energy detection alone, the llad energy detection provides improved results. the proposed approach for energy detection overcomes noise uncertainty difficulties, but it is also necessary to assess the signal correlation, because the approach is based on noise power estimates obtained from eigenvalues. if signal correlation occurs, the performance of the energy detection and of the noise power estimates has an impact on the spectrum sensing. convergence becomes slower when the signum function is applied; according to the illustrations, each signed variant converges slightly more slowly than its non-signed counterpart. figure 4 shows that the llad algorithm and its sign variants have a higher convergence rate than lms; hence the srllad algorithm is preferred over the llad and lms algorithms. when the proposed llad algorithm exceeds the lower-bound stability constraint [25], the normalized mean square deviation of the noise variance diverges. as a result, the normalized mean square deviation provides higher stability for the proposed strategy. normalized mean square deviation analysis yields higher convergence for the modified normalized median lms than for nlms and lms with small-step-size noisy inputs. because of instabilities, the standard modified normalized lms method diverged at large step size values.
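the pofa-versus-pod behaviour of figure 3 can be reproduced in outline by a monte carlo simulation of the energy detector; below is a sketch with white gaussian noise and signal, using 5000 samples as in the paper (all other values are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
t, trials, snr_db = 5000, 500, -10.0
sigma_s = 10 ** (snr_db / 20)  # signal std for unit-variance noise

# energy statistics under both hypotheses, eq. (2)
d0 = np.array([(rng.normal(0, 1, t) ** 2).sum() for _ in range(trials)])
d1 = np.array([((rng.normal(0, 1, t) + rng.normal(0, sigma_s, t)) ** 2).sum()
               for _ in range(trials)])

# sweep the threshold to trace the roc curve
for delta in np.linspace(d0.min(), d1.max(), 20):
    p_fa = (d0 > delta).mean()
    p_d = (d1 > delta).mean()
    print(f"threshold {delta:10.1f}: pofa = {p_fa:.3f}, pod = {p_d:.3f}")
```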
in health care monitoring applications for remote patients, spectrum sensing with the proposed llad algorithm is utilized to reduce the noisy inputs and interferences caused by wireless networks. the information on the patients is transmitted to doctors through cognitive systems by arranging wearable devices on the patient's body. the information is transmitted and received concurrently on both channels. in this case the primary user of the cognitive system takes higher priority over the secondary user. the data is accessible to secondary users, but not at the expense of primary users. the proposed llad algorithm eliminates the interference and disturbances that arise in this situation. medical equipment is more sensitive than normal electric devices in the health care context. here the primary users are telemetry applications and the secondary users are hospital information applications. by predicting spectrum holes the cognitive radio controller optimizes the performance of the system; the cognitive radio controller regulates channel access, and the probabilities are determined. for many data channels the loss and delay probabilities of the cognitive radio system are enhanced.

4. conclusions
the main objective of this paper is to reduce the noise uncertainties during spectrum sensing using the proposed llad algorithm. detection probability and false alarm probability are the errors that arise as a result of spectrum sensing at the receiver, and their expressions are derived. after that, the impact of the noise uncertainty on the threshold parameter selection is investigated. the mean and mean square deviation analysis ensures stability for noisy primary user inputs. we considered a variable step size to improve the stability of the proposed spectrum sensing technique. the llad algorithm's performance is enhanced by a lower steady-state error rate and better convergence. in medical telemetry, this cognitive radio idea is utilized to minimize echo cancellations and signal correlations with noisy inputs at the primary user. as a consequence, we achieve improved outcomes in terms of noise uncertainty stability, convergence rate, and steady-state error rate. the false alarm probability and the probability of detection were both computed as estimation parameters.

references
[1] t. düzenli, o. akay, a new spectrum sensing strategy for dynamic primary users in cognitive radio, ieee communications letters, vol. 20, no. 4, april 2016, pp. 752-755. doi: 10.1109/lcomm.2016.2527640
[2] a. ali, w. hamouda, advances on spectrum sensing for cognitive radio networks: theory and applications, ieee communications surveys & tutorials, vol. 19, no. 2, second quarter 2017, pp. 1277-1304. doi: 10.1109/comst.2016.2631080
[3] a. hajihoseini, s. a. ghorashi, distributed spectrum sensing for cognitive radio sensor networks using diffusion adaptation, ieee sensors letters, vol. 1, no. 5, oct. 2017, pp. 1-4. doi: 10.1109/lsens.2017.2734561
[4] l. gahane, p. k. sharma, n. varshney, t. a. tsiftsis, p. kumar, an improved energy detector for mobile cognitive users over generalized fading channels, ieee transactions on communications, vol. 66, no. 2, feb. 2018, pp. 534-545. doi: 10.1109/tcomm.2017.2754250
[5] d. sun, t. song, b. gu, x. li, j. hu, m. liu, spectrum sensing and the utilization of spectrum opportunity tradeoff in cognitive radio network, ieee communications letters, vol. 20, no. 12, dec. 2016, pp. 2442-2445. doi: 10.1109/lcomm.2016.2605674
[6] w. chin, on the noise uncertainty for the energy detection of ofdm signals, ieee transactions on vehicular technology, vol.
68, no. 8, aug. 2019, pp. 7593-7602. doi: 10.1109/tvt.2019.2920142
[7] s. yasmin fathima, m. zia ur rahman, k. m. krishna, s. bhanu, m. s. shahsavar, side lobe suppression in nc-ofdm systems using variable cancellation basis function, ieee access, vol. 5, 2017, pp. 9415-9421. doi: 10.1109/access.2017.2705351
[8] j. yao, m. jin, q. guo, y. li, j. xi, effective energy detection for iot systems against noise uncertainty at low snr, ieee internet of things journal, vol. 6, no. 4, aug. 2019, pp. 6165-6176. doi: 10.1109/jiot.2018.2877698
[9] s. k. gottapu, v. appalaraju, cognitive radio wireless sensor network localization in an open field, 2018 conference on signal processing and communication engineering systems (spaces), vijayawada, 2018, pp. 45-48. doi: 10.1109/spaces.2018.8316313
[10] j. kim, j. p. choi, sensing coverage-based cooperative spectrum detection in cognitive radio networks, ieee sensors journal, vol. 19, no. 13, 1 july 2019, pp. 5325-5332. doi: 10.1109/jsen.2019.2903408
[11] s. macdonald, d. c. popescu, o. popescu, analyzing the performance of spectrum sensing in cognitive radio systems with dynamic pu activity, ieee communications letters, vol. 21, no. 9, sept. 2017, pp. 2037-2040. doi: 10.1109/lcomm.2017.2705126
[12] l. ciani, a. bartolini, g. guidi, g. patrizi, a hybrid tree sensor network for a condition monitoring system to optimise maintenance policy, acta imeko, vol. 9, 2020, no. 1, pp. 3-9. doi: 10.21014/acta_imeko.v9i1.732
[13] d. li, j. cheng, v. c. m. leung, adaptive spectrum sharing for half-duplex and full-duplex cognitive radios: from the energy efficiency perspective, ieee transactions on communications, vol. 66, no. 11, nov. 2018, pp. 5067-5080. doi: 10.1109/tcomm.2018.2843768
[14] m. tavana, a. rahmati, v. shah-mansouri, b. maham, cooperative sensing with joint energy and correlation detection in cognitive radio networks, ieee communications letters, vol. 21, no. 1, jan. 2017, pp. 132-135. doi: 10.1109/lcomm.2016.2613858
[15] g. yang, j. wang, j. luo, o. y. wen, h. li, q. li, s. li, cooperative spectrum sensing in heterogeneous cognitive radio networks based on normalized energy detection, ieee transactions on vehicular technology, vol. 65, no. 3, march 2016, pp. 1452-1463. doi: 10.1109/tvt.2015.2413787
[16] l. d'alvia, e. palermo, s. rossi, z. del prete, validation of a low-cost wireless sensor's node for museum environmental monitoring, acta imeko, vol. 6, no. 3, september 2017, pp. 45-51. doi: 10.21014/acta_imeko.v6i3.454
[17] s. cheerla, d. venkata ratnam, k. s. teja sri, p. s. sahithi, g. sowdamini, neural network based indoor localization using wi-fi received signal strength, journal of advanced research in dynamical and control systems, vol. 10, no. 4, 2018, pp. 374-379.
[18] i. srivani, g. siva vara prasad, d. venkata ratnam, a deep learning-based approach to forecast ionospheric delays for gps signals, ieee geoscience and remote sensing letters, vol. 16, no. 8, 2019, pp. 1180-1184. doi: 10.1109/lgrs.2019.2895112
[19] n. b. gayathri, g. thumbur, p. rajesh kumar, m. z. u. rahman, p. v. reddy, a. lay-ekuakille, efficient and secure pairing-free certificateless aggregate signature scheme for healthcare wireless medical sensor networks, ieee internet of things journal, vol. 6, no. 5, 2019, pp. 9064-9075. doi: 10.1109/jiot.2019.2927089
[20] g. thumbur, n. b. gayathri, p. vasudeva reddy, m. z. u. rahman, a. lay-ekuakille, efficient pairing-free identity-based ads-b authentication scheme with batch verification, ieee transactions on aerospace and electronic systems, vol. 55, no. 5, 2019, pp. 2473-2486. doi: 10.1109/taes.2018.2890354
[21] s. atapattu, c. tellambura, h. jiang, n. rajatheva, unified analysis of low-snr energy detection and threshold selection, ieee transactions on vehicular technology, vol. 64, no. 11, nov. 2015, pp. 5006-5019. doi: 10.1109/tvt.2014.2381648
[22] s. surekha, m. z. ur rahman, a. lay-ekuakille, a. pietrosanto, m. a. ugwiri, energy detection for spectrum sensing in medical telemetry networks using modified nlms algorithm, 2020 ieee international instrumentation and measurement technology conference (i2mtc), dubrovnik, croatia, 25-28 may 2020, pp. 1-5. doi: 10.1109/i2mtc43012.2020.9129107
[23] a. sulthana, m. z. u. rahman, s. s. mirza, an efficient kalman noise canceller for cardiac signal analysis in modern telecardiology systems, ieee access, vol. 6, 2018, pp. 34616-34630. doi: 10.1109/access.2018.2848201
[24] m. n. salman, p. trinatha rao, m. z. u. rahman, novel logarithmic reference free adaptive signal enhancers for ecg analysis of wireless cardiac care monitoring systems, ieee access, vol. 6, 2018, pp. 46382-46395. doi: 10.1109/access.2018.2866303
[25] s. m. jung, p. park, stabilization of a bias-compensated normalized least-mean-square algorithm for noisy inputs, ieee transactions on signal processing, vol. 65, no. 11, 1 june 2017, pp. 2949-2961. doi: 10.1109/tsp.2017.2675865

solar energy harvesting for lorawan-based pervasive environmental monitoring

acta imeko issn: 2221-870x june 2021, volume 10, number 2, 111-118

tommaso addabbo1, ada fort1, matteo intravaia1, marco mugnaini1, lorenzo parri1, alessandro pozzebon1, valerio vignoli1
1 department of information engineering and mathematics, university of siena, via roma 56, 53100 siena, italy

section: research paper
keywords: solar; energy harvesting; lorawan; environmental monitoring; particulate matter
citation: tommaso addabbo, ada fort, matteo intravaia, marco mugnaini, lorenzo parri, alessandro pozzebon, valerio vignoli, solar energy harvesting for lorawan-based pervasive environmental monitoring, acta imeko, vol. 10, no.
2, article 16, june 2021, identifier: imeko-acta-10 (2021)-02-16
section editor: giuseppe caravello, università degli studi di palermo, italy
received january 18, 2021; in final form may 5, 2021; published june 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: alessandro pozzebon, e-mail: alessandro.pozzebon@unisi.it

abstract
the aim of this paper is to discuss the characterisation of a solar energy harvesting system to be integrated in a wireless sensor node, to be deployed on means of transport to pervasively collect measurements of particulate matter (pm) concentration in urban areas. the sensor node is based on the use of low-cost pm sensors and exploits lorawan connectivity to remotely transfer the collected data. the node also integrates gps localisation features that allow the measured values to be associated with the geographical coordinates of the sampling site. in particular, the system is provided with an innovative, small-scale, solar-based powering solution that makes it energy self-sufficient and hence able to function without a connection to the power grid. tests concerning the energy production of the solar cell were performed in order to optimise the functioning of the sensor node, and satisfactory results were achieved in terms of the number of samplings per hour. finally, field tests were carried out with the integrated environmental monitoring device, proving its effectiveness.

1. introduction

energy self-sufficiency is one of the crucial requirements for the realisation of efficient real-time distributed monitoring infrastructures in a wide range of application fields, from environmental [1], [2] and cultural heritage monitoring [3], [4] to the aerospace [5] and smart industry [6], [7] domains. indeed, when deploying a large number of wirelessly connected sensing devices, energy self-sufficiency makes them usable as deploy-and-forget items, where any kind of manual intervention is reduced to a minimum. besides reducing power consumption, energy self-sufficiency mainly requires the presence of a continuous or semi-continuous source of energy which, when the devices are expected to be employed in motion, cannot be the power grid. for this reason, the only way to ensure the continuous availability of the amount of energy needed for the sensing device to function is so-called energy harvesting, i.e. the presence of a source of renewable energy on board. among the various possible sources of energy, the most exploited is solar: indeed, several monitoring platforms have been provided with solar cells that are used to recharge on-board batteries or super-capacitors. nevertheless, such a solution in several cases faces limitations due to the large dimensions of the solar cells, the limited amount of achievable power or the inadequate exposure of the sensing device. another factor influencing the performance of the energy harvesting system is the complexity of the sensing platform: the more power-hungry the components, the more difficult the design of the harvesting solution. the aim of this paper is to propose the characterisation of a small-scale solar-based energy harvesting system, designed to achieve an efficient trade-off between dimensions and power efficiency. in order to demonstrate the validity of the proposed solution, it has been embedded in a wireless sensing device intended for distributed real-time environmental monitoring: in particular, the sensing device is provided with sensors for the measurement of particulate matter (pm), with a gps module for localisation and tracking purposes and with long range wide area network (lorawan) connectivity. such a device is expected to be employed within a smart city context. while several city-scale air quality wireless monitoring infrastructures can be found in the literature [8], [9], [10], this paper focuses on the realisation of a different typology of data acquisition architecture. indeed, in the proposed solution, the sensor nodes are expected to be provided with localisation features [11] and then to be deployed on means of public transport. this approach is especially relevant since it allows data to be acquired in a more pervasive way, bringing the measurement instrumentation to almost every spot of a city. at the same time, the scope of the paper is to propose a device to be employed in a "plug-and-play" fashion: the use of the photovoltaic source for powering the system goes specifically in this direction. indeed, while several sources of energy may be available on means of public transportation, the connection of a new device may require structural modifications to the vehicle itself to wire the sensor node to the power source. such modifications may be even more cumbersome if the sensor node is to be deployed outside the vehicle, as in the case of the acquisition of environmental parameters. conversely, the design of a totally autonomous system allows its deployment without any kind of intervention on the vehicle: in its final configuration, the sensor node may be attached, for example, with a magnet to the vehicle chassis. the choice of long range (lora) as the transmission technology comes from its ability to provide possibly the best compromise between performance and cost within the smart city scenario [12], [13]. indeed, the long transmission range allows a large area to be covered with a relatively small number of gateways. at the same time, thanks to the lorawan protocol, a large number of end devices can be managed simultaneously thanks to multi-channel and gateway redundancy. on the other side, costs are kept very low since no fee is required for the transmitting devices: this aspect may be crucial when the number of devices to be deployed is expected to grow. at the same time, the same lorawan network may be exploited also for other activities, thus further reducing the costs. a comparison with competing technologies underlines the benefits of adopting lorawan. starting from local area technologies like zigbee, bluetooth or wifi, their short transmission range prevents their use for monitoring at a city scale, since too many gateways would be required. moving to wide area technologies, cellular ones are of course more reliable than lora; however, they require a subscription for each device, and this cost may be unsustainable with a growing number of devices. conversely, lorawan may easily scale since no cost is required for connection, and the price of lora modules is in the order of a few euros, notably lower than that of the competing cellular technology, i.e. nb-iot.
the same limitation applies to the other well-known sub-ghz technology, sigfox, which also requires the payment of a subscription for each device. at the same time, the limitations that may come from the usage of lorawan are not crucial for the proposed application scenario: the 1% duty-cycle limitation does not affect the acquisition of pm values, which can be performed every 10-15 minutes, while the limited reliability of the connection may lead to the loss of some packets, which is likewise not critical for the purpose of the proposed system. the rest of the paper is structured as follows: in section 2 some details related to the monitoring of pm concentrations are provided, while section 3 focuses on the state of the art of solar-based energy harvesting solutions. section 4 provides a description of the overall sensor node architecture, while section 5 is devoted to the design of the solar harvesting system. section 6 provides some field test results, and in section 7 some concluding remarks are presented.

2. particulate matter monitoring

the term "particulate matter" (pm) encompasses a wide range of solid, organic, and inorganic particles and liquid droplets that are commonly found in air. in general, pm is composed of a wide range of different elements that change according to the specific environmental features [14], [15], but include sulphate, nitrates, ammonia, sodium chloride, black carbon, mineral dust, and water. pm is classified according to the dimensions of the single particles: one speaks of pm10 when the diameter of the particles is lower than 10 micron (d_pm10 < 10 µm) and of pm2.5 when the diameter is lower than 2.5 micron (d_pm2.5 < 2.5 µm). both typologies of pm can easily be inhaled by human beings, and chronic exposure to this kind of pollutant can lead to the emergence of cardiovascular and respiratory diseases. in particular [16], pm10 can penetrate into the lungs, while pm2.5 can cross the lung barrier and enter the blood system, with even more harmful effects. for this reason, the world health organisation has defined two thresholds for each type of particulate, reported in table 1 [16], that should not be exceeded in order to safeguard citizens' health.

table 1. particulate matter thresholds [16].

 | 24-hour mean | annual mean
pm2.5 | 25 µg/m³ | 10 µg/m³
pm10 | 50 µg/m³ | 20 µg/m³

pm levels are usually measured by public bodies, which collect the data by means of fixed monitoring stations deployed in a limited number of spots: in general, only one or a few monitoring stations are present in medium to large-sized cities. moreover, the data collected by these stations refer only to the area of the city where they are deployed, and they cannot provide pervasive feedback on the pm levels in other parts of the city. this is mainly due to the high cost of these monitoring stations, which prevents them from being deployed in large numbers across a large territory. nevertheless, some low-cost pm sensors are currently available on the market: while their accuracy is not comparable with that of the fixed monitoring stations, they can still provide useful feedback on the level of pm, in particular concerning the exceeding of the daily and yearly thresholds. moreover, these devices are characterised by small dimensions and can therefore be integrated on portable data acquisition platforms provided with adequate connectivity to transfer the acquired data in real time to a remote data management centre.
by deploying a large quantity of these devices, a pervasive monitoring infrastructure can be set up across a whole urban centre, thus fully fulfilling the smart city paradigm [17], [18].

3. solar energy harvesting

in the last decades, due to the steadily growing number of power-requiring devices and the consequent technology sustainability issues, light energy harvesting has attracted tremendous interest and a great research effort in the scientific community, resulting in a plethora of solar cell typologies [19], [20], [21], [22], each with different optical and mechanical properties, performance and cost. some studies [23], [24], [25], [26], [27], [28], [29] aim to enhance the performance under particular light spectrum conditions, mainly low-intensity indoor lighting, by choosing materials with suitable absorption spectra. other studies focus on maximising the efficiency by realising multi-junction structures capable of absorbing energy in a wide frequency range [30], [31]. crystalline (monocrystalline and polycrystalline) silicon [32] is surely the dominant technology for solar cells, representing a good compromise between performance and cost [20]. excellent efficiencies over 25% are achieved by monocrystalline silicon technology [19], thanks to efficiency-improving strategies such as carrier recombination reduction through contact passivation [33], [34], [35], [36]. even though these latest efficiency-enhancing techniques are not yet commonly available on the market, monocrystalline silicon solar cells remain the preferable solution for powering a small outdoor electronic device, like the application presented in this paper, with minimum encumbrance and at a very reasonable cost.

4. sensor node structure

the purpose of the sensor node is to periodically sample the amount of pm in the air and transmit this information over a lora radio channel. in addition, the gps position of the node has to be acquired every time a sample is collected. the sensor is powered by a battery that is recharged by means of a crystalline silicon solar cell. the structure of the system is shown in figure 1. the main blocks that compose the node are the communication and control unit (ccu), the particle sensor, the gps module, a battery and a step-up dc-dc converter to manage the energy coming from the solar cells. the ccu has been developed ad hoc (see figure 2) and hosts a low-power stm32l073 microcontroller (mcu) by stmicroelectronics, a lora transceiver (rfm95 by hoperf) and the power management electronics to supply the internal devices and charge a li-ion battery. the power from the solar cells is elevated and stabilised by a step-up dc-dc converter (ltc3105 on an evaluation board) that hosts a start-up controller (from 250 mv) and a maximum power point controller (mppc) that enables operation directly from low-voltage power sources such as photovoltaic cells. the mppc set point can be selected depending on the solar cells used. if energy from the solar cell is available, the battery charger (stc4054 by stmicroelectronics) recharges the battery. the mcu and the radio module are supplied by a 2.5 v ldo regulator; the voltage level of the battery is monitored by an adc channel on the mcu through a voltage divider.
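Since the battery-monitoring path just described is fully determined by the divider and the ADC, a tiny sketch of the conversion may help. The reference voltage and divider resistors below are illustrative assumptions (the paper only states that a 2.5 V LDO, a 12-bit-class STM32L0 ADC channel and a voltage divider are used).

```python
ADC_BITS = 12                 # STM32L0-class ADC resolution (assumed)
V_REF = 2.5                   # ADC reference: the 2.5 V LDO rail (assumed)
R_TOP, R_BOT = 100e3, 100e3   # hypothetical divider resistors (2:1 ratio)

def battery_voltage(adc_code: int) -> float:
    """Convert a raw ADC code to the battery voltage in volts."""
    v_pin = adc_code / (2 ** ADC_BITS - 1) * V_REF   # voltage at the ADC pin
    return v_pin * (R_TOP + R_BOT) / R_BOT           # undo the divider

print(round(battery_voltage(3100), 2))  # e.g. ~3.78 V for a Li-ion cell
```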
the particle sensor (hpma115s0 by honeywell), shown in figure 3, requires 5 v to operate: this supply is generated by another ltc3105 module directly from the battery, and it can be powered off by a dedicated shutdown line from the mcu. the gps module (mtk3339 by adafruit) requires a supply voltage of 3.3 v, which is available as an output of the particle sensor. since the sensor node is expected to operate continuously without a connection to the power grid, a power strategy based on strict duty-cycling was adopted: the system performs data sampling and transmission and is then put in sleep mode, according to an adaptive duty-cycling policy described in detail in section 5.

figure 1. sensor node internal structure.
figure 2. communication and control unit.
figure 3. honeywell hpma115s0 particulate matter sensor.

5. energy harvesting and power management

the sensor node is powered by a battery charged by a small solar harvester. the aim of this section is to determine the maximum feasible duty cycle of the sensor node operations that avoids draining the battery completely, given the energy collected by the harvester. this problem can be formulated as the following condition:

W_H ≥ W_δ(δ) ,     (1)

where W_H is the energy harvested over a day, while W_δ(δ) is the sensor node energy consumption over a day, which depends on the duty cycle δ. respecting condition (1) guarantees energy self-sufficiency for the sensor device, with no need for battery replacement. therefore, the first step is to assess the sensor node energy requirements, i.e. the quantity W_δ(δ). let t_0 denote the time (in seconds) necessary for a complete operating sequence, made up of mcu acquisition, data transmission via lora and gps localisation. the maximum number of operating sessions in an hour is given by:

N = ⌊3600 / t_0⌋ .     (2)

in our case t_0 = 10 s and thus N = 360. let n be the desired number of operating sessions in an hour. the duty cycle is then simply:

δ = n / N .     (3)

now, let W_0 be the energy required for a single operating sequence. considering the sensor and analog front-end powering, the microcontroller consumption in run mode, the lora module consumption in stand-by and transmission modes, and the gps consumption, we obtained W_0 ≈ 0.29 mwh. using equation (3), the daily energy consumption for a fixed duty cycle is given by:

W_δ(δ) = 24 · W_0 · n = 24 · W_0 · δ · N .     (4)

substituting W_H for W_δ(δ) in equation (4) yields the maximum feasible duty cycle:

δ_max = W_H / (24 · W_0 · N) .     (5)

a numerical sketch of this budget is given below. the harvested energy W_H varies dramatically during the year and is obviously strongly dependent on the weather conditions. in our previous work [37], we proposed a theoretical evaluation of the quantity W_H over the year, calculating approximately the energy produced by a reference monocrystalline silicon solar cell [34]. in this paper, we present some recent experimental results on the energy harvested by a commercial monocrystalline silicon solar cell for outdoor use, produced by seeed studio (figure 4). the producer states that this solar module is capable of operating at a voltage of 5.5 v and a current of 100 ma, resulting in a maximum power point (mpp) of 0.55 w; the working conditions under which the module was characterised are not explicitly mentioned (usually, solar cell performance is evaluated under standard test conditions, i.e. 25 °c temperature, 1000 w/m² solar irradiance, 1.5 air mass). the cell surface is 70 × 55 mm².
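The duty-cycle budget of equations (1)-(5) is easy to reproduce numerically with the figures quoted in the text (t_0 = 10 s, W_0 ≈ 0.29 mWh). A minimal sketch:

```python
import math

T0 = 10.0    # s, one MCU acquisition + LoRa TX + GPS fix
W0 = 0.29    # mWh, energy of a single operating sequence
N = math.floor(3600 / T0)        # eq. (2): max sessions per hour -> 360

def daily_consumption_mwh(n_per_hour: float) -> float:
    """Eq. (4): daily energy consumption in mWh for n sessions per hour."""
    return 24 * W0 * n_per_hour

def max_duty_cycle(w_h_mwh: float) -> float:
    """Eq. (5): largest duty cycle satisfying W_H >= W_delta(delta)."""
    return w_h_mwh / daily_consumption_mwh(N)

d_max = max_duty_cycle(700.0)    # W_H ≈ 700 mWh on an average winter day
print(f"delta_max ≈ {d_max:.0%}, i.e. about {d_max * N:.0f} sessions/hour")
# -> delta_max ≈ 28 %, about 100 sessions per hour, consistent with eq. (8)
```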
the measurements were performed in siena, italy, during sunny or partially cloudy days in the first half of january 2021 (a complete characterisation of the solar cell behaviour throughout the whole year would require a long measuring campaign, and such data are not available to date). the next section describes the measurement system employed for the solar cell characterisation; section 5.2 shows the results of the measurements. note that the term "characterisation" might be misunderstood: in this context we are only interested in evaluating the maximum power deliverable by the solar cell, and we do not want to determine its parameters (open-circuit voltage, short-circuit current, fill factor) from the current-voltage curve. in the following, the term "characterisation" therefore refers to the evaluation of the cell's maximum deliverable power.

5.1. solar cell characterisation method

the circuitry employed for characterising the performance of the selected solar module is based on a solution proposed by analog devices. figure 5 shows the circuit schematic. referring to figure 5, the solar module to be characterised is connected to the ports labelled pv+ and pv- on the left. the lower branch is the voltage-sensing part, made up of a simple voltage divider followed by an operational amplifier (oa) in non-inverting configuration. the overall voltage gain G_V is given by:

G_V = R_2 / (R_1 + R_2) · (1 + R_4 / R_3) .     (6)

the resistor R_4 is actually short-circuited, because the solar module already outputs a voltage in the order of some volts. the upper branch is the current-sensing part: the current is converted into a voltage through the 1 Ω resistor and then amplified by another non-inverting amplifier. thus, the overall current gain G_I is:

G_I = 1 + R_11 / R_10 .     (7)

this circuit scans the current-voltage (iv) curve of the connected solar cell when the mosfets q1 and q2 are conveniently driven. in detail, before starting the acquisition (idle state), q1 and q2 are both on, so the solar cell is in short-circuit conditions and the oa outputs are both zero. the measurement starts when q1 and q2 are switched off: when this happens, the current instantaneously flows through the capacitor c and the current-sensing 1 Ω resistor. the voltage across the solar cell is still zero (short-circuit conditions), but the short-circuit current now appears, amplified, at the output of the current-sensing oa. the capacitor then starts to charge towards the cell open-circuit voltage, which is actually never reached as an effect of the presence of the voltage divider. in this way the solar cell iv characteristic is scanned and, in particular, the mpp is certainly touched at some point. after the iv transient, q2, which has a dissipative power resistor on its drain to limit the current, is switched on again; finally, q1 is also switched on to return to the initial idle state.

figure 4. selected monocrystalline solar module.
figure 5. solar cell characterisation circuit. for the varying elements, the value used for the experiments presented in this work is reported in brackets.

the two oa outputs are sampled by an stm32l432kc microcontroller.
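To make the use of gains (6) and (7) concrete, the following sketch shows how one sampled scan could be mapped back to cell voltage, current and maximum power. The gain and resistor values are placeholders (the actual component values appear only in the schematic of figure 5), and the ADC parameters are those quoted in the next paragraph (12 bit, 3.3 V full scale).

```python
import numpy as np

ADC_FS, ADC_BITS = 3.3, 12   # 3.3 V full scale, 12-bit converter
G_V = 0.5                    # eq. (6) with R4 shorted: R2/(R1+R2), assumed ratio
G_I = 11.0                   # eq. (7): 1 + R11/R10, assumed ratio
R_SENSE = 1.0                # ohm, current-sensing resistor

def mpp_from_scan(v_codes, i_codes):
    """Return (V, I, P) at the maximum power point of one scanned IV curve."""
    v_out = np.asarray(v_codes) / (2 ** ADC_BITS - 1) * ADC_FS
    i_out = np.asarray(i_codes) / (2 ** ADC_BITS - 1) * ADC_FS
    v_cell = v_out / G_V                 # undo the voltage-branch gain
    i_cell = i_out / (G_I * R_SENSE)     # undo the current-branch gain, amps
    p = v_cell * i_cell
    k = int(np.argmax(p))                # the MPP is touched during the transient
    return v_cell[k], i_cell[k], p[k]

# Placeholder 250-sample records standing in for the 2 ms transient:
v_codes = np.linspace(0, 2500, 250)      # voltage ramps up from short circuit
i_codes = np.linspace(2200, 0, 250)      # current decays towards open circuit
print(mpp_from_scan(v_codes, i_codes))
```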
the microcontroller acquires 500 samples (250 for the voltage and 250 for the current, on two different adc channels) at 125 ksps per channel. the covered time interval is therefore 2 ms, which is sufficient for sampling the signals of interest with a satisfactory time resolution. the adc has 12 bits and a 3.3 v full scale. the stm32l432kc also powers the two oas through an on-board 5 v output and drives q1 and q2 through 3.3 v tolerant general purpose input/output (gpio) pins. a raspberry pi powers the stm32l432kc and collects the data from it in javascript object notation (json) format; the data are available remotely on a web database. this solution for solar cell characterisation presents some problems (as already mentioned, the open-circuit voltage is unreachable), but it is more than sufficient for our application, since we are not interested in drawing the entire iv curve, but only in evaluating the maximum power deliverable by the cell. furthermore, this solution exhibits two fundamental advantages: first, it is extremely low-cost; second, it is portable, as it exploits the response of the solar cell itself to a load impedance variation, so there is no need for the precision voltmeter or signal generator (or the more complex and power-consuming current generator circuit) usually required for setting the cell current and measuring the voltage in more common solar cell characterisation methods.

5.2. solar cell characterisation results

figure 6 shows the current and voltage variations on the cell during an acquisition cycle (i.e. the iv transient provoked by switching off q1 and q2, as explained in the previous section). as can be seen, the cell passes from being short-circuited to an open-circuit condition. the resulting power curve (figure 7) assumes the bell shape typical of a solar cell. the specific curves in figure 6 and figure 7 were obtained in the laboratory by testing the circuit under a white led (3500 k colour temperature) and are only meant to demonstrate the circuit functionality qualitatively. the measurements were performed in siena, italy, on sunny or partially cloudy days in january 2021. the characterisation system was placed in a realistic position, exposed to the sun during most of the day but with some trees and other obstacles, as likely present in the actual sensor usage. an acquisition was performed every minute. as an example, figure 8 shows the cell maximum achievable power measured on 27th january 2021; the wells are due to the presence of obstacles (e.g. around midday some trees covered the solar module). the total energy collected during the examined days (that is, W_H, see the introduction of section 5) oscillates between a minimum of 400 mwh on partially cloudy days and a maximum of 1 wh. considering on average W_H ≈ 700 mwh, the corresponding maximum feasible duty cycle, calculated through equation (5), is:

δ_max ≈ 30% .     (8)

put into equation (3), this result corresponds to about 100 sensor node acquisitions per hour, which is more than decent, considering that december and january are the worst months of the year for solar energy harvesting. however, this performance assessment is clearly not valid for heavily cloudy or rainy days, in which the energy production falls sharply (the energy produced on 9th and 10th january 2021, which were completely sunless, was 20 mwh in total). for this reason, we are driving the future development of the powering of the sensor node towards a multi-source energy harvesting approach, adding, along with the solar harvester, a piezoelectric harvester for collecting energy also from mechanical vibrations and even from rainfall [38].
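With one maximum-power acquisition per minute, the daily harvested energy W_H of equation (1) is just the time integral of the MPP trace. A minimal sketch, with a synthetic power profile standing in for a measured day such as the one in figure 8:

```python
import numpy as np

def daily_energy_mwh(p_max_watts):
    """Energy of a uniformly sampled one-day MPP trace, in mWh.

    With samples evenly spread over 24 h, the integral reduces to the
    mean power (W) times 24 h, times 1000 to convert Wh to mWh.
    """
    return float(np.mean(p_max_watts)) * 24.0 * 1000.0

# Synthetic winter day: bell-shaped MPP profile peaking at 0.4 W around noon,
# sampled once per minute (1440 samples), loosely mimicking figure 8.
t = np.linspace(0.0, 24.0, 1440)
p = 0.4 * np.exp(-((t - 12.0) / 1.5) ** 2)

w_h = daily_energy_mwh(p)            # ~1 Wh: the order of the sunny-day figure
delta_max = w_h / (24 * 0.29 * 360)  # eq. (5) with W0 = 0.29 mWh, N = 360
print(round(w_h), f"{delta_max:.0%}")
```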
figure 6. solar cell current and voltage variations during an iv acquisition cycle: laboratory test under white led.
figure 7. power curve corresponding to the current and voltage reported in figure 6.

6. tests and measurements

the performance of the system was tested in a real environment: the sensor node was placed for one week on the facade of the department of information engineering and mathematics of the university of siena, italy (see figure 9), acquiring pm10 and pm2.5 values every 15 minutes. during each sampling, 10 values were acquired and their mean value was then transmitted by means of the lorawan protocol to a lorawan gateway positioned inside the building. the pm sensor was characterised only in static conditions (no significant vibration present), since the idea in the real scenario is to acquire the measurement only when the vehicle is still: in its final configuration, the node is expected to integrate an accelerometer that will be exploited to detect whether the vehicle is moving or not. the lorawan transmission is also performed in this phase, since the whole data acquisition and transmission task requires less than 2 s, thus avoiding possible additional issues due to setting up a radio channel in non-stationary conditions. in order to verify the operation of the system, the sampled values were compared with the ones available on the website of the regional environmental protection agency of tuscany (arpat), which owns a set of fixed monitoring stations deployed across the whole territory of tuscany. in particular, daily average values are available on the arpat website: for this reason, the daily mean value was calculated for the acquired values. the values were compared with the ones acquired by the fixed station positioned in viale bracci, siena, italy, which is the closest one to the university building, at a distance of 2.5 km (a deployment close to this fixed station was not possible for security reasons). figure 10 shows the daily mean values of pm10 concentrations measured at the fixed station by arpat and by the system described in this work, positioned on the university building. the two series are notably different, but this is due to the deployment site: while the arpat fixed station is positioned close to the very busy road that leads to the siena hospital, the university building is located in a limited-traffic area in a peripheral part of the historic centre of siena. nevertheless, the effectiveness of the system can be observed, since the trends of the two series over the whole week are very similar: in particular, the values measured by the system are consistently about half of the ones provided by arpat. an important comment has to be made: a low-cost sensor has been used in the realisation of the system, and its accuracy cannot be compared with that of the professional, and thus very expensive, measurement platforms used by arpat. nevertheless, looking at the values measured by the system, it is evident that the proposed solution can still be useful to collect data about pm in a more pervasive way, even if with a lower level of accuracy. in this sense, the proposed solution is not expected to replace the existing fixed measurement stations, but rather to serve as a system to enrich the knowledge about the different levels of pm that may be recorded under different environmental conditions.
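Since ARPAT publishes daily averages, the comparison of figure 10 requires aggregating the node's 15-minute records into daily means. A small sketch of this step with synthetic data (pandas is assumed here; the paper does not name the tooling used):

```python
import numpy as np
import pandas as pd

# One synthetic week of 15-minute PM10 readings from the node (96 per day).
rng = np.random.default_rng(1)
idx = pd.date_range("2021-02-01", periods=7 * 96, freq="15min")
node = pd.Series(12 + 4 * rng.standard_normal(len(idx)), index=idx, name="pm10")

daily_node = node.resample("1D").mean()   # daily means in µg/m³, as plotted
print(daily_node.round(1))                # in figure 10 against the ARPAT values
```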
figure 8. solar cell maximum power production on 27th january 2021 in siena, italy.
figure 9. sensor node testing setup.
figure 10. comparison between daily mean pm10 concentrations provided by arpat and measured by the system.
figure 11. geographical data visualisation.

following the system characterisation performed in a controlled environment (i.e., the department facade), a geographical data acquisition campaign was carried out, measuring pm2.5 and pm10 concentrations along the roads of a wide area within the historic centre of siena. for this purpose, a dragino lorawan gateway was placed on the front facade of the department building, in the same spot that was used for the deployment of the sensor node in the previous experimentation. pm measurements were associated with the latitude and longitude values acquired by the gps module: these values were then used to set up a data visualisation tool by means of google maps services. the measured values show an increase in the narrower alleys, where vehicular traffic was more consistent. a screenshot of the data visualisation tool, with the measurements related to one of the positions, is shown in figure 11: blue markers represent the spots where measurements were acquired, while the red star shows the gateway position.

7. conclusions

the aim of this paper was to propose the architecture of a self-powered lorawan sensor node for the pervasive measurement of pm concentrations in urban areas. according to the presented results, the system is able to operate autonomously by exploiting an energy harvesting system based on a small low-cost monocrystalline solar cell. in particular, the experimentation carried out demonstrated that the energy provided by the solar harvester is sufficient to guarantee around a hundred samplings per hour during winter, when solar energy production is at its minimum. moreover, the addition of a mechanical vibration energy harvester is under evaluation as a future development, to enable a multi-source energy harvesting approach which would improve the energy production during sparsely lit days. at the same time, the system can sample the pm concentrations by means of a low-cost sensor, transmitting them to a lorawan gateway together with the geographic coordinates of the sampling location. by positioning the measurement system on means of public transport and combining these two data streams, pm levels may be measured across a large area, and the level differences between different areas of an urban centre may be identified. moreover, the energy self-sufficiency feature allows an easy deployment of the device on the vehicles, without the need for setting up wires to connect the node to external power sources such as the vehicle batteries. together with the energy harvesting system, a prototype of the measurement system was also tested: preliminary tests were carried out in a controlled environment, and the acquired values were compared with the certified ones provided by a public body, proving the consistency of the measured parameters.
following this preliminary step, the whole platform was tested for distributed data acquisition along city roads in siena, thus in a real application scenario. the acquired results showed the effectiveness of the proposed solution.

references

[1] f. leccese, m. cagnetti, a. calogero, d. trinca, s. di pasquale, s. giarnetti, i. cozzella, a new acquisition and imaging system for environmental measurements: an experience on the italian cultural heritage, sensors, vol. 14, 2014, no. 5, pp. 9290-9312. doi: 10.3390/s140509290
[2] a. pozzebon, i. cappelli, a. mecocci, d. bertoni, g. sarti, f. alquini, a wireless sensor network for the real-time remote measurement of aeolian sand transport on sandy beaches and dunes, sensors, vol. 18, 2018, no. 3, p. 820. doi: 10.3390/s18030820
[3] f. leccese, m. cagnetti, s. tuti, p. gabriele, e. de francesco, r. ðurovic-pejcev, a. pecora, modified leach for necropolis scenario, imeko international conference on metrology for archaeology and cultural heritage, lecce, italy, 23-25 october 2017, pp. 442-447. online [accessed 18 june 2021] https://www.imeko.org/publications/tc4-archaeo-2017/imeko-tc4-archaeo-2017-088.pdf
[4] f. lamonaca, c. scuro, p. f. sciammarella, r. s. olivito, d. grimaldi, d. l. carnì, a layered iot-based architecture for a distributed structural health monitoring system, acta imeko, vol. 8 (2019), no. 2, pp. 45-52. doi: 10.21014/acta_imeko.v8i2.640
[5] f. leccese, m. cagnetti, s. sciuto, a. scorza, k. torokhtii, e. silva, analysis, design, realization and test of a sensor network for aerospace applications, i2mtc 2017 - 2017 ieee international instrumentation and measurement technology conference, turin, italy, 22-25 may 2017, pp. 1-6. doi: 10.1109/i2mtc.2017.7969946
[6] t. addabbo, a. fort, m. mugnaini, l. parri, s. parrino, a. pozzebon, v. vignoli, a low power iot architecture for the monitoring of chemical emissions, acta imeko, vol. 8 (2019), no. 2, pp. 53-61. doi: 10.21014/acta_imeko.v8i2.642
[7] l. angrisani, u. cesaro, m. d'arco, o. tamburis, measurement applications in industry 4.0: the case of an iot-oriented platform for remote programming of automatic test equipment, acta imeko, vol. 8 (2019), no. 2, pp. 62-69. doi: 10.21014/acta_imeko.v8i2.643
[8] k. zheng, s. zhao, z. yang, x. xiong, w. w. xiang, design and implementation of lpwa-based air quality monitoring system, ieee access, 2016, no. 4, pp. 3238-3245. doi: 10.1109/access.2016.2582153
[9] g. b. fioccola, r. sommese, i. tufano, r. canonico, g. ventre, polluino: an efficient cloud-based management of iot devices for air quality monitoring, ieee 2nd international forum on research and technologies for society and industry leveraging a better tomorrow (rtsi), bologna, italy, 7-9 september 2016, pp. 1-6. doi: 10.1109/rtsi.2016.7740617
[10] a. candia, s. n. represa, d. giuliani, m. ç. luengo, a. a. porta, l. a. marrone, solutions for smartcities: proposal of a monitoring system of air quality based on a lorawan network with low-cost sensors, congreso argentino de ciencias de la informática y desarrollos de investigación (cacidi), buenos aires, argentina, 28-30 november 2018, pp. 1-6. doi: 10.1109/cacidi.2018.8584183
[11] t. addabbo, a. fort, m. mugnaini, l. parri, a. pozzebon, v. vignoli, smart sensing in mobility: a lorawan architecture for pervasive environmental monitoring, ieee 5th international forum on research and technology for society and industry (rtsi), firenze, italy, 9-12 september 2019, pp. 421-426. doi: 10.1109/rtsi.2019.8895563
[12] d. magrin, m. centenaro, l.
vangelista, performance evaluation of lora networks in a smart city scenario, 2017 ieee international conference on communications (icc), paris, france, 21-25 may 2017, pp. 1-7. doi: 10.1109/icc.2017.7996384
[13] p. j. basford, f. m. bulot, m. apetroaie-cristea, s. j. cox, s. j. ossont, lorawan for smart city iot deployments: a long term evaluation, sensors, vol. 20, 2020, no. 3. doi: 10.3390/s20030648
[14] c. perrino, f. marcovecchio, l. tofful, s. canepari, particulate matter concentration and chemical composition in the metro system of rome, italy, environmental science and pollution research, 2015, pp. 9204-9214. doi: 10.1007/s11356-014-4019-9
[15] b. zeb, k. alam, a. sorooshian, t. blaschke, i. ahmad, i. shahid, on the morphology and composition of particulate matter in an urban environment, aerosol and air quality research, 2018, p. 1431. doi: 10.4209/aaqr.2017.09.0340
[16] world health organization, ambient (outdoor) air pollution - key facts. online [accessed 18 june 2021] https://www.who.int/news-room/fact-sheets/detail/ambient-(outdoor)-air-quality-and-health
[17] t. nam, t. a. pardo, conceptualizing smart city with dimensions of technology, people, and institutions, proc. of the 12th annual international digital government research conference: digital government innovation in challenging times, college park, maryland, usa, 12-15 june 2011, pp. 282-291. doi: 10.1145/2037556.2037602
[18] m. cerchecci, f. luti, a. mecocci, s. parrino, g. peruzzi, a. pozzebon, a low power iot sensor node architecture for waste management within smart cities context, sensors, 2018, p. 1282. doi: 10.3390/s18041282
[19] m. green, e. dunlop, j. hohl-ebinger, m. yoshita, n. kopidakis, x. hao, solar cell efficiency tables (version 57), progress in photovoltaics: research and applications, vol. 29, 2021, pp. 3-15. doi: 10.1002/pip.3371
[20] m. h. shubbak, advances in solar photovoltaics: technology review and patent trends, renewable and sustainable energy reviews, vol. 115, 2019, p. 109383. doi: 10.1016/j.rser.2019.109383
[21] s. biswas, h. kim, solar cells for indoor applications: progress and development, polymers, vol. 12, 2020, p. 1338. doi: 10.3390/polym12061338
[22] s. chowdhury, m. kumar, s. dutta, j. park, j. kim, s. kim, m. ju, y. kim, y. cho, e. cho, j. yi, high-efficiency crystalline silicon solar cells: a review, new & renewable energy, vol. 15, 2019, pp. 36-45. doi: 10.1039/c5ee03380b
[23] p. vincent, s.-c. shin, j. s. goo, y.-j. you, b. cho, s. lee, d.-w. lee, s. r. kwon, k.-b. chung, j.-j. lee, j.-h. bae, j. w. shim, h. kim, indoor-type photovoltaics with organic solar cells through optimal design, dyes and pigments, vol. 159, 2018, pp. 306-313. doi: 10.1016/j.dyepig.2018.06.025
[24] s. kim, m. a. saeed, s. h. kim, j. w.
shim, enhanced hole selecting behavior of wo3 interlayers for efficient indoor organic photovoltaics with high fill-factor, applied surface science, vol. 527, 2020, p. 146840. doi: 10.1016/j.apsusc.2020.146840
[25] s. biswas, y.-j. you, y. lee, j. w. shim, h. kim, efficiency improvement of indoor organic solar cell by optimization of the doping level of the hole extraction layer, dyes and pigments, vol. 183, 2020, p. 108719. doi: 10.1016/j.dyepig.2020.108719
[26] a. s. teran, e. moon, w. lim, g. kim, i. lee, d. blaauw, j. d. phillips, energy harvesting for gaas photovoltaics under low-flux indoor lighting conditions, ieee transactions on electron devices, vol. 63, no. 7, 2016, pp. 2820-2825. doi: 10.1109/ted.2016.2569079
[27] i. mathews, p. j. king, f. stafford, r. frizzell, performance of iii-v solar cells as indoor light energy harvesters, ieee journal of photovoltaics, vol. 6, no. 1, 2016, pp. 230-235. doi: 10.1109/jphotov.2015.2487825
[28] c.-y. chen et al. (40 authors), performance characterization of dye-sensitized photovoltaics under indoor lighting, the journal of physical chemistry letters, vol. 8, 2017, pp. 1824-1830. doi: 10.1021/acs.jpclett.7b00515
[29] m.-c. tsai, c.-l. wang, c.-w. chang, c.-w. hsu, y.-h. hsiao, c.-l. liu, c.-c. wang, s.-y. lin, c.-y. lin, a large, ultra-black, efficient and cost-effective dye-sensitized solar module approaching 12% overall efficiency under 1000 lux indoor light, journal of materials chemistry a, vol. 6, 2018, pp. 1995-2003. doi: 10.1039/c7ta09322e
[30] p. colter, b. hagar, s. bedair, tunnel junctions for iii-v multijunction solar cells review, crystals, 2018, p. 445. doi: 10.3390/cryst8120445
[31] j. geisz, m. steiner, n. jain, k. schulte, r. france, w. mcmahon, e. perl, d. friedman, building a six-junction inverted metamorphic concentrator solar cell, ieee journal of photovoltaics, 2017, pp. 1-7. doi: 10.1109/jphotov.2017.2778567
[32] s. chowdhury, m. kumar, s. dutta, j. park, j. kim, s. kim, m. ju, y. kim, y. cho, e.-c. cho, j. yi, high-efficiency crystalline silicon solar cells: a review, new & renewable energy, vol. 15, 2019, pp. 36-45. doi: 10.7849/ksnre.2019.3.15.3.036
[33] a. morisset, r. cabal, v. giglia, a. boulineau, e. de vito, a. chabli, s. dubois, j. alvarez, j.-p. kleider, evolution of the surface passivation mechanism during the fabrication of ex-situ doped poly-si(b)/siox passivating contacts for high-efficiency c-si solar cells, solar energy materials and solar cells, vol. 221, 2021, p. 110899. doi: 10.1016/j.solmat.2020.110899
[34] a. richter, j. benick, f. feldmann, a. fell, m. hermle, s. glunz, n-type si solar cells with passivating electron contact: identifying sources for efficiency limitations by wafer thickness and resistivity variation, solar energy materials and solar cells, vol. 173, 2017, pp. 96-105. doi: 10.1016/j.solmat.2017.05.042
[35] d. attafi, a. meftah, r. boumaraf, m. labed, n. sengouga, enhancement of silicon solar cell performance by introducing selected defects in the sio2 passivation layer, optik, vol. 229, 2021, p. 166206. doi: 10.1016/j.ijleo.2020.166206
[36] f. meyer, a. savoy, j. j. d. leon, m. persoz, x. niquille, c. allebé, s. nicolay, f.-j. haug, a. ingenito, c. ballif, optimization of front sinx/ito stacks for high-efficiency two-side contacted c-si solar cells with co-annealed front and rear passivating contacts, solar energy materials and solar cells, vol. 219, 2021, p. 110815. doi: 10.1016/j.solmat.2020.110815
[37] t. addabbo, a. fort, m. intravaia, m. mugnaini, l.
parri, a. pozzebon, v. vignoli, pervasive environmental monitoring by means of self-powered particulate matter lorawan sensor nodes, 24th imeko tc4 international symposium and 22nd international workshop on adc and dac modelling and testing, palermo, italy, 14-16 september 2020. online [accessed 18 june 2021] https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-20.pdf
[38] g. acciari, m. caruso, m. fricano, a. imburgia, r. miceli, p. romano, g. schettino, f. viola, experimental investigation on different rainfall energy harvesting structures, thirteenth international conference on ecological vehicles and renewable energies (ever), monte-carlo, monaco, 10-12 april 2018, pp. 1-5. doi: 10.1109/ever.2018.8362346
[39] p. löper, d. pysch, a. richter, m. hermle, s. janz, m. zacharias, s. glunz, analysis of the temperature dependence of the open-circuit voltage, energy procedia, vol. 27, 2012, pp. 135-142. doi: 10.1016/j.egypro.2012.07.041
[40] p. peumans, a. yakimov, s. r. forrest, small molecular weight organic thin-film photodetectors and solar cells, journal of applied physics, vol. 93, no. 7, 2003, pp. 3693-3723. doi: 10.1063/1.1534621

video-based emotion sensing and recognition using convolutional neural network based kinetic gas molecule optimization

acta imeko issn: 2221-870x june 2022, volume 11, number 2, 1-7

kasani pranathi1, naga padmaja jagini2, satish kumar ramaraj3, deepa jeyaraman4
1 dept of information technology, vr siddhartha engineering college, kanuru, vijayawada-520007, andhra pradesh, india
2 dept of computer science and engineering, vardhaman college of engineering, hyderabad-501218, telangana, india
3 sengunthar college of engineering, tiruchengode, tamilnadu, india
4 dept of electronics and communication engineering, k.
ramakrishnan college of technology, tiruchirappalli-621112, tamil nadu, india

section: research paper
keywords: artificial intelligence; convolutional neural network; kinetic gas molecule optimization; images; video-based emotion recognition
citation: kasani pranathi, naga padmaja jagini, satish kumar ramaraj, deepa jeyaraman, video-based emotion sensing and recognition using convolutional neural network based kinetic gas molecule optimization, acta imeko, vol. 11, no. 2, article 13, june 2022, identifier: imeko-acta-11 (2022)-02-13
section editor: francesco lamonaca, university of calabria, italy
received october 2, 2021; in final form june 3, 2022; published june 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: kasani pranathi, e-mail: pranathivrs@gmail.com

abstract
human facial expressions are thought to be important in interpreting one's emotions, and emotion recognition plays a very important part in the more exact inspection of human feelings and interior thoughts. over the last several years, emotion identification using pictures, videos, or voice as input has been a popular research topic. recently, most emotion recognition research has focused on the extraction of representative modality characteristics and the definition of dynamic interactions between multiple modalities. deep learning methods have opened the way for the development of artificial intelligence products, and the suggested system employs a convolutional neural network (cnn) for identifying real-time human feelings. the aim of the research study is to create a real-time emotion detection application by utilizing an improved cnn. this research offers information on identifying emotions in videos using deep learning techniques; kinetic gas molecule optimization is used to optimize the fine-tuning and weights of the cnn. this article describes the technique of the recognition process as well as its experimental validation. two datasets, a video-based and an image-based one, which are employed in many scholarly publications, are also investigated. the results of several emotion recognition simulations are provided, along with their performance factors.

1. introduction

emotion recognition has been considered a major theme in machine learning and artificial intelligence (ai) [1] in recent years. the huge upsurge in the creation of advanced interaction technologies between humans and computers has further encouraged progress in this area [2]. facial actions convey emotions which, in turn, transmit the character, the mood, and the intentions of a person. emotions and moods swiftly lead to the identification of the state of the human mind; psychologists state that emotions are mostly short-lived, while a mood is milder than an emotion [3]. human emotions can be detected in different ways, such as verbal or voice responses, physical reactions or body language, autonomous responses, and so on [4]. the basic types of emotions in a person are happiness, neutrality, surprise, fear, anger, disgust and sadness. while other expressions like dislike, amusement, pride or honesty are very difficult to identify from the facial expression [5], [6], emotions like happiness, neutrality, disgust, and fear are easy to detect. as we know, people identify emotion by combining complex multimodal information and tend to pay attention only to significant information in various ways: for example, some people always keep a smile while they talk, while others talk loudly but are not angry [7]. consequently, we consider that human beings do not detect emotions based on modal alignment. the objective of emotion awareness can generally be achieved by means of visual or sound techniques. the field of human-computer interaction has been changed by artificial intelligence, which provides many machine learning techniques to achieve this goal [7]. the extraction of representative modal features using deep learning technology has become easier. for example, the fine-tuning process of advanced convolutional neural networks (cnns such as alexnet, vgg, resnet, senets, etc.) [8] is extremely useful for capturing fine-grained facial expression features, and long short-term memory (lstm) units are another deep learning technology that stores information with short-term, time-step memory interactions [9]. on the basis of these learning technologies, much work can be done to define dynamic multimodal interactions by matching the relevance of each lstm memory unit. video-based emotion recognition is multidisciplinary, covering areas such as psychology, affective computing and human-computer interaction. the main element of the message is the expression of the face, which makes up 55% of the overall impression. in order to create an appropriate model for the recognition of video emotions, proper feature frames of the facial expression must be provided within the scope. compared with standard techniques, deep learning offers advantages in terms of accuracy, learning rate and forecasting. among the deep learning methodologies, cnns have offered support and a platform for the analysis of visual imagery. convolution is the basic application of a filter to an input, producing an activation; the repeated application of the same filter to an input creates a map of activations known as a feature map, which shows the locations and the strength of, for instance, a detected element in an image. the strength of convolutional neural networks is thus the ability to learn a large number of filters tuned explicitly to a training dataset according to the needs of, for example, an image classification task. the result is highly specific features that can be recognised anywhere in the input images. deep learning has accomplished major achievements in the recognition of emotions, and cnn is the most renowned deep learning method, with exceptional image processing performance. this work aims to develop video-based emotion recognition through an optimal cnn, as detailed in the next sections. the remaining part of the paper is arranged as follows: the related work on video emotion recognition is presented in section 2; section 3 provides an explanation of the suggested optimized cnn; section 4 discusses the validation of the suggested methodology against current techniques; finally, section 5 presents the conclusion of this study and its future work.

2. literature review

a number of tasks, such as spam filtering, audio recognition, facial identification, document classification and natural language processing, are addressed by machine learning algorithms. classification is one of the most frequently used domains of machine learning.
video-based face-emotion research has recently attracted notice in the computer vision community. different kinds of input data, including facial expressions, voice, physiological indicators, and body motions, are utilised in emotion recognition. the work of healy et al. [10], which describes a real-time video feed system and uses a support vector machine for fast and reliable classification, offers several approaches to the detection of emotion from facial expressions. the 68-point facial landmarks used in [10] serve as features. the application was taught to detect six emotions by monitoring changes in the expressions of the face. the work of maier [11] uses neural networks via tensorflow to train image features and then achieves classification through fully connected neural layers. the advantage of image features over facial landmarks is the larger information space, whereas the spatial formation of landmarks gives a viable method for analysing facial expressions. however, this is also accompanied by a higher computing power requirement. the structure provides for an outsourced classification service that runs on a server with a gpu. images of faces are brought to the service in real time, which can perform a classification within a few milliseconds. in the future, this approach will be extended to include text and audio features and conversation context to boost accuracy. another approach uses cnns with tensorflow; an example of using tensorflow.js with the sense-care km-ep, deploying a web browser and node server, is discussed in [10]. in terms of human emotional understanding, 93 % relies on non-verbal cues (facial expressions: 55 %, sound: 38 %) and 7 % relies on verbal language. that is why various efforts have been carried out on facial expression recognition (fer) and acoustic emotion recognition (aer) tasks. most of these works use deep learning (dl) techniques to extract features and achieve high emotion recognition. pramerdorfer et al. [12] used and confirmed contemporary dnn architectures (vgg, resnet, inception) to extract facial expression features to enhance fer performance. on the other hand, as far as aer tasks are concerned, the most typically employed features include pitch, log-mel filters, and mel-frequency cepstral coefficients (mfccs). huang et al. [13] used four kinds of convolution to extract more complete emotional characteristics from log-mel features. a multiple spatio-temporal fusion feature framework (msff) was proposed by lu et al. [14]. they improved a pretrained model for photos of facial expression to draw on facial expression characteristics and applied the vgg-19 and blstm models to extract audio emotional aspects. however, the interactions between different modes were not taken into account. zadeh et al. [15], on the other hand, considered consistency and complementary attributes of the diverse modal information, proposing a memory fusion network which models modal and multimodal interactions through time to capture more effective emotional characteristics in the cmu-mosi dataset. liang et al. [16] have presented the dynamic fusion graph neural model to shape multimodal interactions, to capture one-, two- and three-modal interactions, and, based on their importance, dynamically adjust the multimodal dynamics of the individual fusions. although [16] is able to dynamically collect interactions in several modalities, different modalities must be aligned with the word utterance time interval through the average of their modalities.
the word-based alignment technique can nonetheless miss the chance to capture more active relationships between modes.

3. proposed methodology

this section provides a description of the overall architecture for the development of a deep learning algorithm as a video-based emotion recognition model. in addition, architectural diagrams are briefly described together with the various pre- and post-processing operations. the system overview with cnn displays the suggested training and testing method in figure 1. the video input must pass through a number of procedures before cnn takes action.

3.1. pre-processing

this is the first procedure applied to the video sample input. emotions are typically classified as happiness, sadness, anger, pride, fear and surprise. frames must therefore be extracted from the video input. the number of frames depends on the complexity and computational time requirements of different researchers. the pictures are transformed to greyscale. after grey scaling the frame is monochrome rather than pure black and white: low-intensity pixels appear dark and high-intensity pixels appear light. this step is followed by histogram equalization of the frames. histogram equalization is an image processing strategy to improve photographic contrast. it is achieved by spreading out the most frequent intensity values, i.e., by stretching the intensity range of the image. put simply, the histogram shows the intensity distribution of a picture: the number of pixels for each intensity value considered.

3.2. face detection

emotions are usually expressed by the face. it is therefore important to detect the face for processing and recognition. many face detection algorithms are used by investigators, such as opencv, dlib, eigenfaces, local binary pattern histograms (lbph) and viola-jones (vj). conventional procedures included face recognition work in which facial features are distinguished from the face image by extracting landmarks. the algorithm may, for example, survey the shape and size of the eyes, the size of the nose and its position relative to the eyes in order to extract facial features. the cheekbones and jaw may also be analysed. these extracted features are then used to search other images with matching features. over the years, the industry has moved towards deep learning. cnn was recently used to improve the accuracy of facial recognition algorithms. such networks accept an image as input and extract a very complex arrangement of features. these features include facial width, facial height, nose width, lips, eyes, width proportion, skin colour, and texture. basically, a cnn extracts a huge number of features from a picture, which are then matched against the features in the database.

3.3. image cropping and resizing

during this phase, the face returned by the facial detection procedure is cropped so that the facial image is broader and clearer. cropping is the removal of unwanted outer parts from a photographic or graphical image. the technique often consists of removing a section of the outermost regions of a picture, removing incidental clutter, improving the framing, changing the aspect ratio, or highlighting and separating the subject. the sizes of the images vary after the frames have been cropped. those photographs are therefore resized, say to 80 × 80 pixels for instance, in order to achieve homogeneity.
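a minimal opencv sketch of the pre-processing chain described above, assuming a haar-cascade face detector and the 80 × 80 target size used as an example in the text; the file names are illustrative:

```python
# pre-processing sketch: grey scaling, histogram equalization,
# face detection, cropping and resizing (sections 3.1-3.3).
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess_frame(frame, size=(80, 80)):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)    # grey scaling
    gray = cv2.equalizeHist(gray)                     # histogram equalization
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1,
                                           minNeighbors=5)
    crops = []
    for (x, y, w, h) in faces:
        face = gray[y:y + h, x:x + w]                 # crop the detected face
        crops.append(cv2.resize(face, size))          # resize for homogeneity
    return crops

# usage: extract one frame from a video and keep the pre-processed face crops
cap = cv2.VideoCapture("input_video.mp4")             # illustrative file name
ok, frame = cap.read()
if ok:
    faces = preprocess_frame(frame)
cap.release()
```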
a digital picture is only a quantity of information displaying a variety of red, green and blue pixels at certain locations. we usually perceive these pixels as tiny dots wedged together on the pc screen. the frame size determines how long processing takes. resizing is therefore very important if processing time is to be reduced. in addition, good resizing techniques should be used to maintain the image attributes after resizing. whether the features represent the expression well or not determines the accuracy of the classification. the optimization of the selected features therefore automatically improves classification precision.

3.4. classification

in this section, the classification, which includes learning rate optimization for the cnn using kinetic gas molecule optimization (kgmo), is described briefly. initially, the cnn is explained as follows: cnn is a feed-forward neural network with several layers, comprising several sorts of layers, including convolution and relu layers, pooling layers and fully connected output layers. figure 2 shows the architecture of cnn, which is intended to recognize visual characteristics such as borders and forms.

3.4.1. cnn

cnn employs the vector $x$ of the trained samples as the input for the associated target group $y$ to support the back propagation technique for training. learning is performed by comparing the desired target with each cnn output, the learning error being the difference between the two. the cost function of the cnn is

$E(\omega) = \frac{1}{2} \sum_{p=1}^{P} \sum_{j=1}^{N_L} \left( o_{j,p}^{L} - y_{j,p} \right)^2 . \quad (1)$

our goal is the minimization of the cost function $E(\omega)$, finding a minimizer $\tilde{\omega} = (\tilde{\omega}_1, \tilde{\omega}_2, \ldots, \tilde{\omega}_V) \in \mathbb{R}^V$, where $V = \sum_{k=1}^{L} \mathrm{weightnum}(k)$, indicating that the dimension of the weight space $\mathbb{R}^V$ equals the total number of weights ($\mathrm{weightnum}(\cdot)$) of the cnn network over all $L$ layers, $k$ denoting the layer index. the gradient and the weight update are

$\nabla E_i(\omega_i) = \left( \frac{\partial E_i}{\partial \omega_i^1}, \ldots, \frac{\partial E_i}{\partial \omega_i^V} \right) \quad (2)$

$\omega_{i+1} = \omega_i - n \, \nabla E_i(\omega_i) , \quad (3)$

where $n$ is the learning rate (step) value. the cnn is adapted to video-based emotion detection, but how fast it adapts can be controlled by this learning rate. smaller learning rates require more training epochs, since they make only small changes to the weights during each update; on the other hand, fewer training epochs are required for larger learning rates. specifically, the learning rate is a configurable hyper-parameter used in the training of neural networks that has a small positive value, often in the range between 0.0 and 1.0. to find the optimized learning rate value, this work uses the kgmo algorithm. here, $n$ has been designated with the assistance of the kgmo technique, which is explained in section 3.4.2.

figure 1. proposed workflow (top) and proposed training and testing processes (bottom).

figure 2. typical architecture of some of the pre-trained deep cnn networks used in the study.
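a minimal pytorch sketch of the update rule in equations (1)-(3), assuming a toy cnn over 80 × 80 greyscale frames and one-hot targets; the network shape is illustrative, and `n` plays the role of the learning rate that kgmo later tunes:

```python
import torch
import torch.nn as nn

model = nn.Sequential(                          # toy cnn, 7 emotion classes
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(8 * 40 * 40, 7))
loss_fn = nn.MSELoss()                          # squared-error cost E(w)

def train_step(x, y, n):
    """one gradient-descent update: w <- w - n * grad E(w)."""
    model.zero_grad()
    loss = 0.5 * loss_fn(model(x), y)           # equation (1)
    loss.backward()                             # equation (2): grad E(w)
    with torch.no_grad():
        for w in model.parameters():
            w -= n * w.grad                     # equation (3)
    return loss.item()

# usage with random data and a candidate learning rate n = 0.05
x = torch.randn(4, 1, 80, 80)
y = torch.eye(7)[torch.randint(0, 7, (4,))]     # one-hot targets
train_step(x, y, n=0.05)
```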
3.4.2. kgmo

kgmo is modelled on the kinetic theory of gases, which uses idealized descriptions to explain the macroscopic properties of a gas. the motion of the gas molecules is characterized by quantities such as pressure, temperature and gas volume. the following postulates of the kinetic molecular theory of ideal gases describe the behaviour of the gas molecules:

• a gas consists of tiny molecules moving in straight lines; the motion follows newton's laws.
• the gas molecules occupy negligible volume; each is treated as a point mass.
• no repulsive or attractive force exists between molecules.
• the average molecular kinetic energy is $3kT/2$, where $T$ is the absolute temperature and $k$ is the boltzmann constant, $k = 1.38 \times 10^{-23}\,\mathrm{m^2\,kg\,s^{-2}\,K^{-1}}$.

assume that the scheme contains $M$ particles. the location of the $j$th agent is given by (4):

$Y_j = (y_j^1, \ldots, y_j^d, \ldots, y_j^m) \ \text{for} \ j = 1, 2, \ldots, M , \quad (4)$

where $y_j^d$ defines the position of the $j$th agent in the $d$th dimension. the velocity of the $j$th agent is stated in (5):

$V_j = (v_j^1, \ldots, v_j^d, \ldots, v_j^m) \ \text{for} \ j = 1, 2, \ldots, M , \quad (5)$

where $v_j^d$ represents the velocity of the $j$th agent in the $d$th dimension. the agents are updated following the boltzmann distribution, and the random speed yielded is related to the kinetic energy of the agent. equation (6) gives the kinetic energy of the molecule:

$k_j^d(u) = \frac{3}{2} M_b T_j^d(u) , \quad K_j = (k_j^1, \ldots, k_j^d, \ldots, k_j^m) \ \text{for} \ j = 1, 2, \ldots, M , \quad (6)$

where $M_b$ is the boltzmann constant and $T_j^d(u)$ is the temperature of the $j$th agent in the $d$th dimension at time $u$. the molecule velocity is updated by (7):

$v_j^d(u+1) = T_j^d(u)\, w\, v_j^d(u) + D_1\, \mathrm{rand}_j(u) \big(gbest^d - y_j^d(u)\big) + D_2\, \mathrm{rand}_j(u) \big(pbest_j^d(u) - y_j^d(u)\big) , \quad (7)$

where the temperature $T_j^d(u)$ of the moving molecules decreases exponentially over time, as given in (8):

$T_j^d(u) = 0.95 \times T_j^d(u-1) . \quad (8)$

the mass $m$ of each particle is a random number in the range $0 < m \le 1$. once the mass is known, it remains unchanged during the whole algorithm because only one type of gas is taken into consideration at any time. a random mass is drawn in different runs to produce various executions of the procedure. based on the motion equation, the particle position is expressed as (9):

$y_j^{u+1} = \frac{1}{2} a_j^d(u+1)\, u^2 + v_j^d(u+1)\, u + y_j^d(u) , \quad (9)$

where $a_j^d$ is the acceleration of the $j$th agent in the $d$th dimension. the acceleration is obtained from (10):

$a_j^d = \frac{\mathrm{d} v_j^d}{\mathrm{d} u} . \quad (10)$

based on the gas particle law, (11) gives

$\mathrm{d} k_j^d = \frac{1}{2} m \big( \mathrm{d} v_j^d \big)^2 \;\Rightarrow\; \mathrm{d} v_j^d = \sqrt{\frac{2\, \mathrm{d} k_j^d}{m}} . \quad (11)$

from (10) and (11), the acceleration is given in (12):

$a_j^d = \frac{1}{\mathrm{d} u} \sqrt{\frac{2\, \mathrm{d} k_j^d}{m}} . \quad (12)$

the acceleration is rewritten in terms of the duration interval $\Delta u$, as shown in (13):

$a_j^d = \frac{1}{\Delta u} \sqrt{\frac{2\, \Delta k_j^d}{m}} . \quad (13)$

in a unit time interval, the acceleration becomes (14):

$a_j^d = \sqrt{\frac{2\, \Delta k_j^d}{m}} . \quad (14)$

from (9) and (14), the position of the particle is computed as expressed by (15):

$y_j^{u+1} = \frac{1}{2} a_j^d(u+1)\, \Delta u^2 + v_j^d(u+1)\, \Delta u + y_j^d(u) \;\Rightarrow\; y_j^{u+1} = \frac{1}{2} \sqrt{\frac{2\, \Delta k_j^d}{m}}\, (u+1)\, \Delta u^2 + v_j^d(u+1)\, \Delta u + y_j^d(u) . \quad (15)$

as stated above, the molecular mass is assumed to be a random element, identical for all particles within one run. the position update is periodically simplified to (16) to make the approach easier:

$y_j^{u+1} = \sqrt{\frac{2\, \Delta k_j^d}{m}}\, (u+1) + v_j^d(u+1) + y_j^d(u) . \quad (16)$

the lowest fitness value is determined using (17):

$pbest_j = f(y_j) \ \text{if} \ f(y_j) < f(pbest_j) ; \quad gbest_j = f(y_j) \ \text{if} \ f(y_j) < f(gbest_j) . \quad (17)$

the position $x_j^d$ of each element is then updated using the distance between its current position and $gbest_j$. the next section presents the validation of the proposed methodology against existing techniques.
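a minimal python sketch of a kgmo loop for tuning the learning rate $n$, assuming a user-supplied `fitness(n)` that trains the cnn briefly with rate `n` and returns its validation loss; the agent count, iteration budget and search bounds are illustrative, and the updates mirror equations (6)-(8), (16) and (17):

```python
import random
import math

def kgmo(fitness, n_agents=10, iters=20, lo=1e-4, hi=1.0):
    w, d1, d2 = 0.9, 0.5, 0.5                   # inertia and attraction weights
    y = [random.uniform(lo, hi) for _ in range(n_agents)]   # positions (rates)
    v = [0.0] * n_agents                        # velocities
    t = [0.95] * n_agents                       # agent temperatures
    m = random.uniform(1e-3, 1.0)               # random molecular mass (fixed per run)
    pbest = y[:]                                # per-agent best positions
    pcost = [fitness(p) for p in pbest]
    g = min(range(n_agents), key=lambda j: pcost[j])
    gbest, gcost = pbest[g], pcost[g]
    for _ in range(iters):
        for j in range(n_agents):
            t[j] *= 0.95                        # equation (8): cooling
            r1, r2 = random.random(), random.random()
            v[j] = (t[j] * w * v[j]
                    + d1 * r1 * (gbest - y[j])
                    + d2 * r2 * (pbest[j] - y[j]))      # equation (7)
            dk = 1.5 * 1.38e-23 * t[j]          # equation (6): kinetic energy
            y[j] += math.sqrt(2 * dk / m) + v[j]        # equation (16)
            y[j] = min(max(y[j], lo), hi)       # keep rate in a valid range
            c = fitness(y[j])
            if c < pcost[j]:                    # equation (17): pbest / gbest
                pbest[j], pcost[j] = y[j], c
                if c < gcost:
                    gbest, gcost = y[j], c
    return gbest

# usage: best_n = kgmo(lambda n: short_train_and_eval(n))
# where short_train_and_eval is an illustrative helper returning validation loss
```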
4. results and discussion

all of our results were created on a single nvidia geforce rtx 2080 ti gpu. all the code was implemented using pytorch.

4.1. datasets description

4.1.1. image-based emotion recognition in the wild

in this work, we have selected appropriate datasets to train the face extraction model. the datasets must address in-the-wild environments, in which numerous factors, such as occlusion, poses, lighting, etc., are uncontrolled. affectnet [17] and raf-db [18] are by far the most extensive datasets that meet these criteria. the photographs in the datasets are acquired using emotional keywords on the internet. experts annotate the emotion labels to ensure trustworthiness. affectnet has two kinds of data, manually and automatically annotated, with over 1,000,000 photos marked with 10 categories of emotions and dimensional emotions (valence and arousal). we only utilized photos in the manually annotated group of seven fundamental categories of emotions; specifically, we used 283,901 training photos and 3,500 validation images. the raf-db dataset comprises approximately 30,000 facial photographs in basic emotional categories, covering lighting variations, arbitrary poses, and occlusion under in-the-wild situations. in this study, we selected 12,271 training images and 3,068 evaluation images, all of them from the basic set of emotions.

4.1.2. video-based emotion recognition in the wild

we used the afew dataset [19] to assess our work on determining facial emotions in video clips. the video samples in the collection are obtained from films and tv shows with uncontrolled occlusion, lighting and head positions. each video clip was selected based on its label including emotional keywords that reflect the emotion shown by the main subject. this helped us deal with the challenge of temporality in the wild. from the afew dataset we used 773 training video clips and 383 validation video clips with labels for the seven basic types of emotion (anger, happiness, neutrality, disgust, fear, sadness, and surprise).

4.2. evaluation parameters

as quantitative measurements in this investigation, we employed accuracy (acc.) and the f1 score. for the performance results to be evaluated as in [18], we also employed the mean accuracy $Mean_{Acc}$ based on the main diagonal of the normalized confusion matrix $M_{\mathrm{norm}}$. these measurements are derived as follows:

$Accuracy = \frac{TP + TN}{TP + FP + TN + FN} \quad (18)$

$F1 = \frac{2 \cdot Precision \cdot Recall}{Precision + Recall} \quad (19)$

$Mean_{Acc} = \frac{\sum_{i=1}^{n} g_{i,i}}{n} \quad (20)$

$Std_{Acc} = \sqrt{\frac{\sum_{i=1}^{n} \left( g_{i,i} - Mean_{Acc} \right)^2}{n}} , \quad (21)$

where $g_{i,i} \in \mathrm{diag}(M_{\mathrm{norm}})$ is the $i$th diagonal value of the normalized confusion matrix $M_{\mathrm{norm}}$, $n$ is the size of $M_{\mathrm{norm}}$, and tp, tn, fp, and fn are, respectively, the true positives, true negatives, false positives, and false negatives. table 1 shows the confusion matrix for the proposed methodology.

table 1. confusion matrix of the proposed model with kgmo on the image-based emotion recognition evaluation (overall classification with manual id; values in %; columns: ground truth, rows: predicted).

predicted \ ground truth | anger | happiness | neutrality/disgust | fear  | sadness | surprise
anger                    | 93.30 |  0.93     |  4.19              |  2.33 |  1.40   |  1.86
happiness                |  2.79 | 83.93     |  5.89              |  2.65 |  2.33   |  2.33
neutrality/disgust       |  6.51 |  8.35     | 87.37              |  2.65 |  7.44   |  6.05
fear                     |  3.26 |  2.72     |  0.93              | 88.19 |  4.19   |  3.72
sadness                  |  2.79 |  3.14     |  4.36              |  1.86 | 86.28   |  7.91
surprise                 |  1.26 |  1.69     | 72.56              |  2.65 | 11.16   | 84.70
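a minimal numpy sketch of equations (18)-(21), assuming a normalized confusion matrix with ground truth on one axis and predictions on the other; the function names are illustrative:

```python
import numpy as np

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + fp + tn + fn)          # equation (18)

def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)   # equation (19)

def mean_std_acc(m_norm):
    diag = np.diag(m_norm)                          # g_ii values
    return diag.mean(), diag.std()                  # equations (20) and (21)

# usage: mean and standard deviation of the class-wise accuracies in table 1
m_norm = np.array([[0.933, 0.009], [0.028, 0.839]]) # 2x2 excerpt for brevity
mean_acc, std_acc = mean_std_acc(m_norm)
```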
4.3. evaluation of proposed model with kgmo for two different datasets

in this section, the different cnn models, convnet, densenet and resnet, are compared with and without the kgmo technique in terms of accuracy, f1-score and mean accuracy with standard deviation, on two different datasets: an image-based and a video-based dataset. table 2 and figure 3 show the results of the proposed model with kgmo on the image-based dataset. in the accuracy experiments, convnet, densenet and resnet without kgmo achieved 56.26 %, 61.51 % and 61.57 %, whereas the same techniques implemented with the kgmo technique achieved 81.23 %, 83.64 % and 87.22 %. these results proved that resnet with kgmo achieved better accuracy than the other models. in the f1-score analysis, convnet, densenet and resnet without kgmo achieved 56.38 %, 61.50 % and 61.46 %, whereas with the kgmo technique they achieved 81.79 %, 83.81 % and 87.38 %. the mean accuracy of each technique without kgmo was nearly 56 % to 61 % with a standard deviation around 11; when these techniques were implemented with kgmo, they achieved nearly 77 % to 82 % mean accuracy, with a standard deviation of 9.20 for the proposed model with kgmo.

table 2. validation of the proposed model with kgmo on the image-based emotion recognition evaluation.

cnn-model     | acc (%) | f1 (%) | mean_acc ± std
convnet       | 56.26   | 56.38  | 56.23 ± 11.18
densenet      | 61.51   | 61.50  | 61.51 ± 10.40
resnet        | 61.57   | 61.46  | 61.57 ± 10.79
convnet-kgmo  | 81.23   | 81.79  | 77.08 ± 08.10
densenet-kgmo | 83.64   | 83.81  | 76.96 ± 11.12
resnet-kgmo   | 87.22   | 87.38  | 82.45 ± 09.20

figure 3. graphical representation of the proposed model with kgmo in terms of accuracy and f1-score on the image-based dataset.

table 3 and figure 4 show the results of the proposed model and existing techniques, implemented with and without kgmo, on the video-based dataset. in the accuracy experiments, convnet, densenet and resnet without kgmo achieved 51.70 %, 52.22 % and 54.05 %, whereas implemented with the kgmo technique they achieved 55.87 %, 56.14 % and 58.66 %. these results proved that resnet with kgmo achieved better accuracy than the other models; however, the proposed technique achieved lower performance than on the image-based dataset. in the f1-score analysis, convnet, densenet and resnet without kgmo achieved 46.17 %, 48.26 % and 50.78 %, whereas with the kgmo technique they achieved 52.76 %, 54.61 % and 58.50 %. the mean accuracy of each technique without kgmo was nearly 46 % to 48 % with a standard deviation around 32; with kgmo, they achieved nearly 52 % to 56 % mean accuracy, with a standard deviation of 15.63 for the proposed model with kgmo. the reason for the lower performance is that it is difficult to identify the proper emotion while the video is continuously playing.

table 3. validation of the proposed model with kgmo on the video-based emotion recognition evaluation.

cnn-model     | acc (%) | f1 (%) | mean_acc ± std
convnet       | 51.70   | 46.17  | 46.51 ± 34.38
densenet      | 52.22   | 48.26  | 47.33 ± 31.73
resnet        | 54.05   | 50.78  | 48.98 ± 32.28
convnet-kgmo  | 55.87   | 52.76  | 51.21 ± 29.87
densenet-kgmo | 56.14   | 54.61  | 52.35 ± 25.53
resnet-kgmo   | 58.66   | 58.50  | 56.25 ± 15.63

figure 4. graphical representation of the proposed model with kgmo in terms of accuracy and f1-score on the video-based dataset.

4.4. comparative evaluation of proposed technique with existing techniques

table 4 shows the comparative analysis of the proposed technique with various existing techniques in terms of accuracy, and figure 5 shows the graphical representation of the proposed model on the video-based dataset in terms of accuracy. the existing techniques dsn [21] and cnn features with lstm [22] achieved nearly 48 % to 49 % accuracy. fan [24] and caer-net [25] achieved only about 51 % accuracy, while the other existing techniques achieved nearly 52 % to 54 % accuracy. noisy student training with multi-level attention [26] achieved 55.17 % accuracy, whereas the proposed model achieved 58.66 % accuracy. the reason is that the cnn is implemented with the optimization technique kgmo for fine-tuning and weight optimization.

table 4. comparative analysis of the proposed model with existing techniques on the video-based dataset.

author                      | technique                                           | accuracy (%)
vielzeuf et al. [20] (2018) | max score selection with temporal pooling           | 52.20
fan et al. [21] (2018)      | deeply-supervised cnn (dsn) weighted average fusion | 48.04
duong et al. [22] (2019)    | cnn features with lstm                              | 49.30
li et al. [23] (2019)       | vgg-face features with bi-lstm                      | 53.91
meng et al. [24] (2019)     | frame attention networks (fan)                      | 51.18
lee et al. [25] (2019)      | caer-net                                            | 51.68
kumar et al. [26] (2019)    | noisy student training with multi-level attention   | 55.17
proposed (2021)             | cnn-kgmo                                            | 58.66

figure 5. graphical representation of the proposed model with kgmo in terms of accuracy on the video-based dataset.
5. conclusion

computer vision research on facial expression analysis has been studied extensively in the past decades, and the success of emotion recognition methods has improved greatly over time. our work has shown the general architectural model for developing a deep learning recognition system. the objective was to examine pre- and post-processing methods. using the kgmo algorithm, the fine-tuning and weights of the cnn are optimized. this report also surveyed the image and video datasets available to academics in this subject. in different studies, advancement in this area is measured by different performance metrics. the testing was performed on various datasets, in which resnet-kgmo achieved an accuracy of 58.66 % on the video dataset and an accuracy of 87.22 % on the image-based dataset. there is a very attractive scope for future developments in this area. various multimodal deep learning approaches and various architectures can be employed to increase the performance parameters. besides recognizing the feelings alone, the intensity level can be addressed further; this can contribute to forecasting the intensity of the feeling. future work may also use multiple media; for example, a model built on multiple datasets can be used with both video and audio.

references

[1] d. l. carnì, e. balestrieri, i. tudosa, f. lamonaca, application of machine learning techniques and empirical mode decomposition for the classification of analog modulated signals, acta imeko, vol. 9, 2020, no. 2, pp. 66-74. doi: 10.21014/acta_imeko.v9i2.800
[2] p. c. vasanth, k. r. nataraj, facial expression recognition using svm classifier, indonesian journal of electrical engineering and informatics (ijeei), vol. 3, no. 1, pp. 16-20, 2015. doi: 10.11591/ijeei.v3i1.126
[3] anurag de, ashim saha, a comparative study on different approaches of real time human emotion recognition based on facial expression detection, 2015 international conference on advances in computer engineering and applications, icacea, ghaziabad, india, 19-20 march 2015, pp. 483-487. doi: 10.1109/icacea.2015.7164792
[4] m. a. ozdemir, b. elagoz, a. alaybeyoglu, r. sadighzadeh, a.
akan, real time emotion recognition from facial expressions using cnn architecture, tiptekno 2019 (medical technologies congress), izmir, turkey, 3-5 october 2019, pp. 1-4. doi: 10.1109/tiptekno.2019.8895215
[5] d. sokolov, m. patkin, real-time emotion recognition on mobile devices, proc. 13th ieee int. conf. on automatic face and gesture recognition, 2018. doi: 10.1109/fg.2018.00124
[6] h. kaya, f. gürpınar, a. a. salah, video-based emotion recognition in the wild using deep transfer learning and score fusion, image and vision computing, vol. 65, 2017, pp. 66-75. doi: 10.1016/j.imavis.2017.01.012
[7] g. betta, d. capriglione, m. corvino, a. lavatelli, c. liguori, p. sommella, e. zappa, metrological characterization of 3d biometric face recognition systems in actual operating conditions, acta imeko, vol. 6, 2017, no. 1, pp. 33-42. doi: 10.21014/acta_imeko.v6i1.392
[8] a. s. volosnikov, a. l. shestakov, neural network approach to reduce dynamic measurement errors, acta imeko, vol. 5, 2016, no. 3, pp. 24-31. doi: 10.21014/acta_imeko.v5i3.294
[9] y. xie et al., deception detection with spectral features based on deep belief network, acta acustica, vol. 2, 2019, pp. 214-220.
[10] m. healy, r. donovan, p. walsh, h. zheng, a machine learning emotion detection platform to support affective well being, 2018 ieee international conference on bioinformatics and biomedicine (bibm), madrid, spain, 3-6 december 2018, pp. 2694-2700. doi: 10.1109/bibm.2018.8621562
[11] d. maier, analysis of technical drawings by using deep learning, m.sc. thesis, department of computer science, hochschule mannheim, germany, 2019.
[12] c. pramerdorfer, m. kampel, facial expression recognition using convolutional neural networks: state of the art, arxiv preprint arxiv:1612.02903, 2016. doi: 10.48550/arxiv.1612.02903
[13] c. huang, s. narayanan, characterizing types of convolutions in deep convolutional recurrent neural networks for robust speech emotion recognition, arxiv preprint arxiv:1706.02901, 2017. doi: 10.48550/arxiv.1706.02901
[14] c. lu, w. zheng, c. li, c. tang, s. liu, s. yan, y. zong, multiple spatio-temporal feature learning for video-based emotion recognition in the wild, proceedings of the international conference on multimodal interaction, acm, boulder, co, usa, 16-20 october 2018, pp. 646-652. doi: 10.1145/3242969.3264992
[15] a. zadeh, p. pu liang, n. mazumder, s. poria, e. cambria, l.-p. morency, memory fusion network for multi-view sequential learning, thirty-second aaai conference on artificial intelligence, new orleans, la, usa, 2-7 february 2018, 9 pp. doi: 10.48550/arxiv.1802.00927
[16] p. liang, r. salakhutdinov, l. p. morency, computational modeling of human multimodal language: the mosei dataset and interpretable dynamic fusion, 2018.
[17] a. mollahosseini, b. hasani, m. h. mahoor, affectnet: a database for facial expression, valence, and arousal computing in the wild, ieee trans. affect. comput., vol. 10, 2019, pp. 18-31. doi: 10.48550/arxiv.1708.03985
[18] s. li, w. deng, j. du, reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild, ieee conference on computer vision and pattern recognition (cvpr), honolulu, hi, usa, 21-26 july 2017, pp. 2584-2593. doi: 10.1109/cvpr.2017.277
[19] a. dhall, r. goecke, s. lucey, t. gedeon, collecting large, richly annotated facial-expression databases from movies, ieee multimed., vol. 19, 2012, pp. 34-41. doi: 10.1109/mmul.2012.26
[20] v. vielzeuf, c. kervadec, s. pateux, a. lechervy, f.
jurie, an occam's razor view on learning audiovisual emotion recognition with small training sets, 20th acm international conference on multimodal interaction, boulder, co, usa, 16-20 october 2018, pp. 589-593. doi: 10.48550/arxiv.1808.02668
[21] y. fan, j. c. k. lam, v. o. k. li, video-based emotion recognition using deeply-supervised neural networks, 20th acm international conference on multimodal interaction, boulder, co, usa, 16-20 october 2018, pp. 584-588. doi: 10.1145/3242969.3264978
[22] d. h. nguyen, s. kim, g. s. lee, h. j. yang, i. s. na, s. h. kim, facial expression recognition using a temporal ensemble of multi-level convolutional neural networks, ieee trans. affect. comput., 2019, 33, 1. doi: 10.1109/taffc.2019.2946540
[23] s. li, w. zheng, y. zong, c. lu, c. tang, bi-modality fusion for emotion recognition in the wild, international conference on multimodal interaction, jiangsu, china, 14-18 october 2019, pp. 589-594. doi: 10.1145/3340555.3355719
[24] d. meng, d. peng, y. wang, y. qiao, frame attention networks for facial expression recognition in videos, ieee international conference on image processing (icip), taipei, taiwan, 22-25 september 2019, pp. 3866-3870. doi: 10.48550/arxiv.1907.00193
[25] j. lee, s. kim, s. kim, j. park, k. sohn, context-aware emotion recognition networks, ieee international conference on computer vision, seoul, korea, 27 october-2 november 2019, pp. 10142-10151. doi: 10.48550/arxiv.1908.05913
[26] v. kumar, s. rao, l. yu, noisy student training using body language dataset improves facial expression recognition, in: computer vision - eccv 2020 workshops, a. bartoli, a. fusiello (eds.), springer international publishing, cham, switzerland, 2020, pp. 756-773. doi: 10.48550/arxiv.2008.02655
[27] f. vurchio, g. fiori, a. scorza, s. a. sciuto, comparative evaluation of three image analysis methods for angular displacement measurement in a mems microgripper prototype: a preliminary study, acta imeko, vol. 10, 2021, no. 2, pp. 119-125. doi: 10.21014/acta_imeko.v10i2.1047
[28] h. ingerslev, s. andresen, j. holm winther, digital signal processing functions for ultra-low frequency calibrations, acta imeko, vol. 9, 2020, no. 5, pp. 374-378. doi: 10.21014/acta_imeko.v9i5.1004
[29] m. florkowski, imaging and simulations of positive surface and airborne streamers adjacent to dielectric material, measurement, vol. 186, 2021, pp. 1-14. doi: 10.1016/j.measurement.2021.110170
[30] g. ke, h. wang, s. zhou, h. zhang, encryption of medical image with most significant bit and high capacity in piecewise linear chaos graphics, measurement, vol. 135, 2021, pp. 385-391.
doi: 10.1016/j.measurement.2018.11.074

digital calibration certificate in an industrial application

acta imeko issn: 2221-870x march 2023, volume 12, number 1, 1-6

juho nummiluikki1, sari saxholm2, anu kärkkäinen2, sami koskinen1

1 beamex oy ab, ristisuonraitti 10, 68600 pietarsaari, finland
2 vtt mikes, tekniikantie 1, 02150 espoo, finland

section: research paper

keywords: calibration; digital calibration certificate; traceability; metrology; digitalization; quality infrastructure

citation: juho nummiluikki, sari saxholm, anu kärkkäinen, sami koskinen, digital calibration certificate in an industrial application, acta imeko, vol. 12, no. 1, article 6, march 2023, identifier: imeko-acta-12 (2023)-01-06

section editor: daniel hutzschenreuter, ptb, germany

received november 17, 2022; in final form march 22, 2023; published march 2023

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

funding: this work was supported by business finland [grant number 1811/31/2019].

corresponding author: sami koskinen, e-mail: sami.koskinen@beamex.com

abstract: rapid growth of automation in combination with digitalization generates an increasing need for measurement data, provided by sensors, the interface between the real and the digital world. in the industry 4.0 scheme the measurement data, including the calibration information, flows through the whole production chain in digital format. the present study demonstrated a fully digitalized environment for calibration data generation, transfer, and usage in a proof of concept (poc) project. the various stages of the poc and the different information systems used by the partners aalto university, vtt mikes, beamex, vaisala and orion are discussed. the developed and tested digital calibration certificate (dcc) solution and its components are described. the major findings of the project include the further need for dcc standardization and good practice subschemas. development of the dcc is ongoing worldwide, and the big picture goes even beyond the dcc: it is not only a question of transferring calibration certificates and related data to a digital and machine-readable format, but also of how this data can be used effectively.

1. introduction

western europe was the third largest region in the measuring and control instruments market, worth $137.9 billion in 2020 and accounting for 19.6 % of the global measuring and control instruments market, preceded by north america at 27.7 % and followed by eastern europe at 5.6 %. the western europe measuring and control instruments market is expected to grow to $188.59 billion by 2025 [1].

1.1. need and benefits of dcc

since only reliable measurement data enables reliable decision making, regular calibration of measuring instruments and the metrological traceability of the results are essential. today, calibration certificates are mostly paper documents or pdf files which require human operations in the calibration processes. with the help of the digital calibration certificate (dcc) [2], manual work with calibration certificates can be eliminated.
this makes calibration processes faster and allows human operators to concentrate on more valuable work. data transfer and uncertainty calculations related to calibration results can be performed automatically. effective processes, calculations without human errors, automated reporting procedures and use of calibration data have indirect economic effects on industry through better product quality. in industrial production, the quality of products and processes is verified by measurements. efficient quality inspection is essential to avoid defect propagation in the manufacturing chain. the target is to minimize waste in production as well as to ensure a long lifetime to minimize waste due to premature end-product failure [3]. fully digitalized cyber-physical manufacturing requires autonomous quality inspection and control processes as well as integration with digital design and digital quality certificates, including digital calibration certificates. these principles are also emphasized by sustainable development, where the production chain from raw material to recycling (or waste management) of the end-product can be digitally controlled with a digital twin. key challenges and a potential future role of metrology in the digital age are discussed by eichstädt et al. [4]. the publication investigates algorithms, cyber-physical systems, fair data and metrology, and the role of metrology in the digital transformation of the quality infrastructure (dqi). the digital transformation of metrology is
the dcc information can be directly transferred into other digital processes to optimize process control and hence get the benefits of the calibration data online. thus, it can be utilized for digital workflows in measurement science, calibration, and industry. new digital metrology innovation can be found e.g., from sensor networks (iot) and in-situ selfor co-calibration techniques. although already widely used and rapidly increasing, sensor networks have not been addressed in a metrologically sound and systemic manner. empir project met4fof [6] developed an initial set of methods for the propagation of uncertainty through elementary pre-processing steps in ml for sensor network data. the project provided an agent-based approach to ml implementation and demonstrated the use of the d-si, dcc and semantic information for the automated metrological treatment of sensor networks. however, in realworld cases, and particularly in very large-scale sensor networks, practical methods, software frameworks, and guidelines for the automated uncertainty and data quality evaluation, and semantic descriptions are not available yet. the recently established funsnm project, financed by european partnership on metrology, will address on these issues. 1.3. background the international system of units (si) [7] provides a coherent foundation for the representation and exchange of measurement data. it also enables interoperability and reproducibility in all fields. the empir smartcom project [8] has defined digital si (d-si) [9], where a data format for reporting measurement results in terms of a numerical value, associated unit, and uncertainty statement is described. however, the correct interpretation of this information is only possible by understanding the content of the si system, and additionally the gum [10] about the uncertainty, the vim [11] about the metrological terms, and standards such as iso/iec 17025 [12] about the concept of metrological traceability and calibration. thus, these terms and concepts together with their meanings need to be represented in a machine-interpretable way, in order to make a dcc machine-readable in a wider sense, and thus enable the new services and possibilities based on digital calibration data. this publication presents results of a proof of concept (poc) study of fully digitalized environment for calibration data. this work originates from the empir smartcom project [8]. the doctoral thesis of mustapää [13] was partly done as a part of the project. however, the thesis conducted a broader investigation of metrology digitalization in industrial iot systems. the thesis also discussed methods for secure data exchange and provided means for establishing trust between organizations in a fully digital environment. furthermore, the master thesis of riska [14] provided insight into possibilities to establish a new dcc ecosystem and how it should be realized. it also investigated the benefits and disadvantages of either a closed or open dcc ecosystem. similar kinds of initiatives are currently studied globally. the most relevant to this work is the gemimeg ii [15] project. the project aims at safe and robustly calibrated metrological systems for the digital transformation enabling secure, consistent, legally compliant, and legally acceptable end-to-end availability of information for the implementation of reliable, networked measurement systems in germany [15]. the number of publications related to dcc has increased rapidly during the last years [16]. 
gadelrab and abouhogail [17] reviewed 17 research publications published in the period from 2018 to 2020 covering the topics “digital transformation in metrology” and “digital calibration certificates”. the publications report dcc requirements, conceptual studies, specific application implementations, security, etc., all relevant to this work as well. however, the authors are not aware of earlier publications that report poc results focusing on the interoperability of dcc data over the whole value chain.

1.4. contributions of the present study

many of the previous studies about the dcc and its applications have focused on implementing the dcc on a small scale. in the present study, we describe the results of a poc project that tested a fully digitalized environment for calibration data generation, transfer, and usage. the goal of this project was to test and identify barriers to large-scale implementation of the dcc in a whole calibration value chain. the members of the project consortium are organizations with a high number of calibration customers or calibration providers. implementing a dcc in this kind of environment brings new requirements. for example, receiving dccs through email from a small number of calibration providers might not be a problem, but receiving dccs from hundreds of different calibration providers needs to be highly organized and automated. one of the core benefits of the dcc is that it mitigates manual processing of data between organizations. to realize the full potential of the dcc, the sending and receiving process between the organizations should also be automated. in this project, a data transfer platform was developed and tested in order to automate dcc transfer between the calibration service provider and the calibration customer. another important aspect of the project was to test the dcc in an environment with several different information systems through the whole calibration chain, as shown in figure 1. implementing a dcc in a multi-operator environment, where each operator has their own system, requires a well-standardized protocol in order to be interoperable between different systems. in this project, the interoperability of the data transfer protocol, i.e., the dcc, was tested by importing dccs generated by different systems into one calibration management system. the aim of this proof of concept (poc) was to implement the concept presented in [18] to demonstrate and investigate its feasibility. a poc is an effective way to test the dcc in practice and gain experience. in addition, special attention was paid to the digital authentication of the data, data integrity and the fulfilment of metrological traceability.

2. methods

in the project, two linked calibrations of a metrological traceability chain were chosen for the poc. the calibrated device
the group consists of research institutes, instrument manufacturers, calibration laboratories and end users, i.e., the whole value chain is represented. the common goal is efficient and extensive utilization of measurement and calibration data as well as increasing digitalization in this context. the pharmaceutical industry has been especially active in this field, and the current activities around the dcc are linked together through this sector. in the poc reported in the present study, the partners were aalto university, vtt mikes, beamex oy, vaisala oyj and orion oyj. aalto university was the project coordinator and developed the data transfer platform and cloud components for dcc management. vtt mikes is the national metrology institute (nmi) in finland and provided the expertise of calibration certificates and traceability to the project. beamex, an integrated calibration solution provider, was the calibration management system specialist and developer in the present project. vaisala manufactures innovative measurement solutions for weather, environment, and industrial processes. the instrument containing humidity and temperature sensors was a product of vaisala. they operated in the project as an accredited calibration service provider and a dcc receiving party. pharmaceutical company orion was the other dcc receiving party and the endpoint of the calibration chain. 2.2. implementation and test setup dcc management and exchange was tested by simulating a traceability chain which included three partners: vtt mikes, vaisala and orion. the poc concentrated on a humidity and temperature probe used by the calibration end user orion. the simulation concentrated on the temperature traceability chain. each partner in the traceability chain had a different calibration management system (cms). calibration certificate generation in vtt mikes is based on their proprietary in-house software together with microsoft office tools for data management and certificate templates. for their system environment, a manual dcc creation tool was estimated to be the easiest to implement in the project. in the future, the dcc creation can be implemented as a part of their current information management systems to increase automation in dcc generation, but for the poc this was decided to be out of scope. vaisala had a dual role in the poc as they both received a dcc from vtt mikes and sent a dcc to orion. vaisala calibration services use a proprietary in-house calibration management platform. on the dcc receiving end of vaisala, it was decided by the project consortium to test the manual receiving process. receiving of the dcc was based on a human readable format of the dcc as described in section 3.2. as the certificate creation process is already highly digitalized within vaisala’s calibration management platform, automated dcc generation was possible to test due to fully machine-readable data. automated dcc generation was tested as a part of the calibration management platform as explained in section 3.1. the dcc created by vaisala was received by the calibration end user orion. orion uses a commercial off-the-shelf calibration management system, beamex cmx. dcc import to the cms was tested as a part of the beamex cmx environment to enable automated import of calibration results from the dcc. the structure of the dcc import is presented in section 3.2. as each organization had their own cms, the data transfer needed to be arranged through the organization boarders. 
in the poc, the data transfer was tested through a data transfer platform that was integrated into the systems used by the partners in the traceability chain. the data transfer platform enabled system-to-system integration of the different cms systems, eliminating the need for, e.g., emails as a transfer method.

2.3. test procedure

testing of the poc solution was carried out in three phases. in the first phase, generating and reading the dccs was tested with a peer-to-peer approach. the goal of the first testing phase was to demonstrate the capability of creating and reading a dcc with two separate approaches. first, an example dcc was manually created using existing pdf certificates as a template. once the first version of the dcc had been created utilizing notepad++, both the calibration service provider and the calibration customer reviewed and accepted the example. the example dcc's compatibility with the dcc schema was also validated at this point. next, the dcc example was frozen, and a copy was distributed to both parties to develop an implementation for dcc creation or reading. after the development, a dcc was generated by the calibration service provider and sent to the receiving party of the calibration. at this point, the dccs were delivered manually through email and without digital signatures. next, the receiving party used the received dcc to import calibration results into their cms. in the calibration between vtt mikes and vaisala, the calibration results were not automatically imported into the cms; instead, the dcc was tested with the human-readable format as described in section 3.2. the next testing phase in the project was end-to-end testing. in the context of this project, it meant that both calibrations were executed following the required industry procedures and standards. at this time, the full data flow and all developed components were tested. the goal of the test phase was to establish a fully digital calibration process from generating a dcc in the calibration service provider's cms to reviewing the calibration results in the calibration customer's cms. the process included the following steps:

1. the calibration service provider generates a dcc from the calibration management system.
2. the calibration service provider validates the dcc against the dcc xml schema (see the sketch after this list).
3. the calibration service provider adds a digital signature to the dcc.
4. the calibration service provider uploads the dcc to the data transfer platform.
5. the calibration service provider shares the dcc with the calibration customer.
6. the calibration customer validates the digital signature in the dcc.
7. the calibration customer imports the dcc into their calibration management system.
8. the calibration customer reviews and accepts the calibration in their calibration management system.
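a minimal python sketch of the schema validation in step 2, using lxml; the file names are illustrative, and the xsd would be the release matching the dcc version in use (2.4.0 in this poc):

```python
# validate a dcc xml document against the dcc schema (step 2)
from lxml import etree

schema = etree.XMLSchema(etree.parse("dcc_v2.4.0.xsd"))
dcc = etree.parse("certificate.xml")

if schema.validate(dcc):
    print("dcc is valid against the schema")
else:
    for error in schema.error_log:        # report what failed and where
        print(error.line, error.message)
```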
to ensure these objectives, the poc solution was designed with a focus on digital authentication of data and data integrity. digital authentication of data and data integrity were achieved with the combination of digital signature and user management. data integrity was ensured in the project with an xml advanced electronic signature (xades) [20] based signature solution. xades signatures are compliant with european eidas regulation [21] which defines requirements for legally binding digital signatures. digital signatures allow the calibration customer to validate that the received dcc was not modified in any way after it was accepted and authorized by the calibration service provider. in this project, digital signatures were tested as an organization level digital seals as described in section 3.1. in a production-ready implementation, the demo public key infrastructure (pki) should be replaced with another pki. requirements for such pki are described by e.g., pekka nikander et al [22]. some practical approaches for the pki are presented by e.g., tuukka mustapää [13]. in the poc solution, signing authorization was controlled through each organization’s existing user management. in this project, microsoft azure active directory was used as the user management solution as all participants used it. modern authentication solutions also have built-in audit trails that can record when and by whom a dcc was signed by. the project consortium considered data integrity as the most important aspect of data security in the calibration process. data confidentiality was also recognized as an important topic in the project. still many consortium members did not consider calibration data confidentiality critical to the business operations, since it’s difficult to misuse calibration data without the meta information involved in it. data confidentiality was ensured by using transport layer security (tls) encryption when uploading and downloading data from the data transfer platform. data was also stored encrypted at the data transfer platform. 3. results the tested solution created a fully digitalized environment for the calibration data generation, transfer, and usage. the different components and organizations in the traceability chain were integrated to each other through the data transfer platform. as presented in figure 1, the dcc generation tool in vaisala and the dcc receiving in orion’s cms were tested as a part of the organizations’ existing information technology environment. other components of the infrastructure were implemented as a part of the cloud-based data transfer platform. the calibration traceability chain was created through the dccs. each dcc includes all necessary information for traceability e.g., measurement data, deviation, uncertainty, and measurement references. 3.1. components for creation manual dcc creation: global adoption of the dcc format requires easy-to-use tools with low entry barrier. in the poc, a web form was developed, based on the good practice format of the dcc, for manual creation of dccs. the web form was developed using json gui technology and it was integrated to the data transfer platform. the web form enabled easy creation of dccs without any changes to the existing information systems. automated dcc creation: another approach to dcc creation was a fully automated dcc creation from the cms. the component was tested as a part of a laboratory calibration management platform. 
3. results

the tested solution created a fully digitalized environment for calibration data generation, transfer, and usage. the different components and organizations in the traceability chain were integrated with each other through the data transfer platform. as presented in figure 1, the dcc generation tool at vaisala and the dcc receiving in orion's cms were tested as a part of the organizations' existing information technology environments. the other components of the infrastructure were implemented as a part of the cloud-based data transfer platform. the calibration traceability chain was created through the dccs. each dcc includes all the information necessary for traceability, e.g., measurement data, deviation, uncertainty, and measurement references.

3.1. components for creation

manual dcc creation: global adoption of the dcc format requires easy-to-use tools with a low entry barrier. in the poc, a web form was developed, based on the good practice format of the dcc, for the manual creation of dccs. the web form was developed using json gui technology and was integrated into the data transfer platform. the web form enabled easy creation of dccs without any changes to the existing information systems.

automated dcc creation: another approach to dcc creation was fully automated dcc creation from the cms. the component was tested as a part of a laboratory calibration management platform. the component enabled creating a dcc automatically from the calibration management system database. fully automated dcc creation mitigates human errors in dcc generation and removes unnecessary work.

format validation: the dcc offers a framework for the calibration data that needs to be specified for the use case in question. seamless interoperability requires that the use of the dcc schema and the good practice examples are commonly agreed and executed. to ensure this, dccs need to be verified to make sure that the data format follows the commonly agreed guidelines. format validation checks the data in the dcc against an xml schema, defined by the dcc schema and the use case specific good practice dcc, and highlights possible differences. in the poc, the format validation was implemented as a part of the signature service.

digital signature: calibration results need to be authorized by the calibration provider and secured during transit. a cryptographic digital signature allows the calibration customer to trace the signature to the calibration provider and to ensure that the data integrity of the calibration results is intact. in the poc, xades-based cryptographic digital seals were used. digital seals are digital signatures that identify a group of people instead of one individual person. in the poc, digital seals were used at the organization level. each organization had their own private keys for signature creation, traceable through the demo public key infrastructure (pki).

figure 1. data flow and components of the developed and investigated solution for digital calibration certificates (dcc). the color of a component indicates the organization that was responsible for its development: yellow and dark grey components were developed by aalto university, blue components by vaisala and green components by beamex.

3.2. components for receiving

signature validation: the calibration customer needs to ensure that the data included in the dcc has been authenticated by the expected organization and has not been manipulated by third parties. this can be ensured by validating the digital signature in the dcc. in the poc, the signature validation service was implemented as a part of the data transfer platform.

human readable format: the dccs need to be visualized in a human-readable format to be used in manual processes. these can be, e.g., manual transfer of data to a calibration management system or a manual review of calibration results. in the project, a human-readable format was developed using xslt and javascript technologies. the human-readable format was used to manually review and transfer calibration results to the calibration management system of the accredited calibration laboratory. the human-readable format also included a user interface for validating the digital signature.

dcc cms import: utilizing the calibration data in calibration management systems (cms) requires data extraction from the dcc xml document to the cms database. in the poc, a dcc data parser was developed to enable a fully automated and error-resistant data transfer process for the calibration customer. the dcc data parser was integrated into the customer's cms (beamex cmx) as a separate driver that enabled import from local data storage or from the dcc data transfer platform. after the data import, the calibration result could be reviewed and accepted according to current calibration processes. finally, the calibration result data can be utilized in the cms as part of instrument analytics and asset management processes.
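a minimal sketch of a dcc data parser of the kind described above, assuming the ptb dcc and d-si xml namespaces and a good-practice layout in which each measurement result carries an si:value / si:unit pair; the element paths and file name are illustrative, not the poc's actual driver:

```python
# extract measurement results from a dcc xml document for cms import
from lxml import etree

NS = {"dcc": "https://ptb.de/dcc", "si": "https://ptb.de/si"}

def read_results(path):
    tree = etree.parse(path)
    results = []
    # walk every measurement result and collect its d-si real quantities
    for res in tree.iter("{https://ptb.de/dcc}measurementResult"):
        for real in res.iter("{https://ptb.de/si}real"):
            value = real.findtext("si:value", namespaces=NS)
            unit = real.findtext("si:unit", namespaces=NS)
            if value is not None:
                results.append((float(value), unit))
    return results

# usage: list of (value, unit) pairs ready for insertion into the cms database
print(read_results("certificate.xml"))
```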
finally, the calibration result data can be utilized in the cms as part of instrument analytics and asset management processes.

3.3. general components in the data transfer platform

dcc sharing: the data transfer was implemented through a data transfer platform used by all the partners in the project. the data privacy of each organization was achieved through multitenancy. the data transit between organizations took place in the platform through transferring the dccs between tenants. the platform user management also allowed user and visibility rights to specific dccs to be granted to other organizations in the platform.

dcc storage: the long-term storage of dccs was outside the scope of this poc, but the platform approach enables several options. if both the calibration provider and the calibration customer use the platform for long-term dcc storage, there is no need for unnecessary data replication, as the same document can serve both parties. either party can also take control of storing its own dccs. in case both parties have local storage for their dccs, the platform can be used only for data transfer, and no data will be stored in the platform.

user management: the user management in the tested solution was based on microsoft azure active directory (aad). in the poc, all the organizations used microsoft aad for user management. the data transfer platform was developed to support single sign-on (sso) through microsoft aad. this allowed all organizations to use their existing authentication methods to authorize users to sign, transfer, and manage dccs in the data transfer platform. microsoft aad integration also allows each organization to have full control of the users that can access the organization's data. as the platform's user authentication is integrated as a part of the organization's normal user authentication, user rights to the platform can be added and removed automatically through normal employee onboarding and offboarding processes.

4. discussion

the poc solution presented in the present study was designed for testing and demonstrating an example implementation of the dcc in an industrial application. the investigated solution and its components worked as expected in general, and the test was considered successful by all consortium members. the poc solution was considered effective and user friendly for executing calibration data exchange. the cloud-based platform offered a low entry barrier for organizations with less software development capability. developing the necessary components, excluding the custom components for generating and reading dccs from specific calibration management systems, as a part of the calibration data transfer platform lowered the effort to implement the dcc. all necessary integrations between the information systems were implemented through standardized application programming interfaces (apis). the necessary integrations were the user management integration with the data transfer platform through aad and the cms integrations with the data transfer platform. the presented solution gives guidelines for the development of a production-applicable solution. room for improvement was identified especially in the solution interfaces and in the general robustness of the system. furthermore, a production-ready solution will require further development of the solution infrastructure and data security, as was discussed in section 2.4.

4.1. challenges in the dcc implementation

interoperability between different information systems requires a well-standardized and universal data format. the universal dcc format has been designed to provide a framework for representing calibration data. it enables various ways to insert measurement data and recursive data structures, which challenge machine-readability at the receiving end. the need for interoperable machine-readability sets tight requirements not only on the dcc structure but also on the overall harmonization of the calibration certificates' content, including common and fixed vocabularies, glossaries, and semantic content. as use-case-specific needs and data formats can vary significantly, there needs to be a way to standardize the data at a more detailed level. to achieve use-case-level standardization, several good practice examples are currently being developed. the goal of the good practice examples is to standardize the representation of data, e.g. for a specific measurement quantity. the poc was executed before widely adopted good practice examples were published, and there were several challenges in achieving an adequate level of interoperability. in the first tests, only the universal dcc schema was used to validate the created dccs. as a result, the cms data import failed to read both the dccs created with the manual web form and those from the automated dcc creator. the reason was identified to be different data structures in the two dccs, despite both being compatible with the universal dcc schema. the finding highlights the importance of good practice examples and related good practice subschemas. the development of good practice subschemas should restrict the use of the universal dcc schema to the level where use-case-specific requirements are fulfilled and interoperability is ensured.

in addition to the need for more precise standardization, other technical findings were also identified. the current version of the digital si (d-si) does not support resolution for numerical values. for example, the measurement result 1.00 will be saved as 1 in the xml, which has implications, e.g., for uncertainty calculations. furthermore, the calibration date and the calibration due date of the reference instrument used in the calibration do not have dedicated fields in the dcc. these findings were considered critical for the implementation of the dcc by the project consortium members.

4.2. remaining challenges and future work

the development of good practice examples should be intensively coordinated between different working groups to avoid the risk of divergence. also, the total number of different good practice examples should be minimized, and the same data structures applied to as many different use cases as possible, to ease the implementation of the dcc. every additional good practice format will increase the difficulty of implementing the dcc, especially for organizations that operate with several types of measurements. the authors see finding the balance between the number of good practices developed and ensuring standardized data structures in the dcc as the core challenge for the dcc. a wide reformation, like digital transformation, in a quite traditional calibration field will not happen in a second. and unfortunately, not all stakeholders have the resources to be involved in the development work.
this leads to a situation where stakeholders do not have easy access to the latest development work, knowledge, and capabilities needed for dcc implementation. dccs are currently developed mainly in a few large european metrology institutions and some forerunner industrial companies. support for the digital transformation and the implementation of dccs is needed. especially, the needs of end users in different industrial operations have to be considered.

our future work will include executing a new dcc poc, exploring new possibilities to exploit dcc data, and validating business opportunities. the partners in the new poc are vtt mikes, beamex, lahti precision, orion, vaisala, and bayer. we will demonstrate mass and weighing instrument calibration cases. the poc will examine a complete metrological traceability chain from the si to the end user process. more information on the reported poc and the future work is available from the authors.

references

[1] the business research company, measuring and control instruments market covering other electrical equipment, electronic products and components; navigational, measuring, electro medical and control instruments; global summary 2021: covid 19 impact and recovery, 2021
[2] s. hackel, f. härtig, j. hornig, t. wiedenhöfer, the digital calibration certificate, ptb-mitteilungen, vol. 127, no. 4, 2017, pp. 75–81. doi: 10.7795/310.20170403
[3] j. lindström, p. kyösti, w. birk, e. lejon, an initial model for zero defect manufacturing, applied sciences, vol. 10, 2020, no. 13, 4570. doi: 10.3390/app10134570
[4] s. eichstädt, a. keidel, j. tesch, metrology for the digital age, measurement: sensors, vol. 18, 2021, 100232. doi: 10.1016/j.measen.2021.100232
[5] european commission, european commission digital strategy: a digitally transformed, user-focused and data-driven commission, 2018. online [accessed september 2020] https://ec.europa.eu/digital-single-market/en/content/european-digital-strategy
[6] project met4fof: metrology for factory of the future. online [accessed january 2023] https://www.euramet.org/research-innovation/search-research-projects/details/project/metrology-for-the-factory-of-the-future
[7] bureau international des poids et mesures (bipm), si brochure, the international system of units, 2019
[8] project smartcom: communication and validation of smart data in iot-networks. online [accessed april 2022] https://www.ptb.de/empir2018/smartcom/project/
[9] d. hutzschenreuter, r. klobučar, p. nikander, t. elo, t. mustapää, p. kuosmanen, o. maennel, k. hovhannisyan, b. müller, l. heindorf, b. ačko, j. sýkora, f. härtig, w. heeren, t. wiedenhöfer, a. forbes, c. brown, i. smith, s. rhodes, i. linkeová, j. sýkora, v. paciello, digital system of units (d-si), zenodo, 2019. doi: 10.5281/zenodo.3522631
[10] jcgm 100:2008, evaluation of measurement data guide to the expression of uncertainty in measurement (gum), 2008
[11] jcgm 200:2012, international vocabulary of metrology – basic and general concepts and associated terms (vim), 2012
[12] iso/iec 17025:2017, general requirements for the competence of testing and calibration laboratories, 2017
[13] t. mustapää, digitalisation of metrology for improving trustworthiness and management of measurement data in industrial iot systems, doctoral thesis 139/2022, aalto university, 2022, isbn 978-952-64-0959-7
[14] k. riska, digital calibration certificate as part of an ecosystem, master of engineering thesis, novia uas, 2022.
online [accessed 22 march 2023] https://urn.fi/urn:nbn:fi:amk-2022052211073
[15] gemimeg ii. online [accessed january 2023] https://www.digitale-technologien.de/dt/navigation/en/programmeprojekte/aktuellestrategischeeinzelprojekte/gemimeg2/gemimeg2.html
[16] c. r. h. barbosa, m. c. sousa, m. f. l. almeida, r. f. calili, smart manufacturing and digitalization of metrology: a systematic literature review and a research agenda, sensors, vol. 22, 2022, no. 16, 6114. doi: 10.3390/s22166114
[17] m. s. gadelrab, r. a. abouhogail, towards a new generation of digital calibration certificate: analysis and survey, measurement, vol. 181, 2021, 109611. doi: 10.1016/j.measurement.2021.109611
[18] j. nummiluikki, t. mustapää, k. hietala, r. viitala, benefits of network effects and interoperability for the digital calibration certificate management, 2021 ieee international workshop on metrology for industry 4.0 & iot, rome, italy, 2021, pp. 352–357. doi: 10.1109/metroind4.0iot51437.2021.9488562
[19] physikalisch-technische bundesanstalt, digital calibration certificate version 2.4.0. online [accessed july 2022] https://www.ptb.de/dcc/v2.4.0/
[20] xml advanced electronic signatures (xades). online [accessed 9 january 2023] https://www.w3.org/tr/xades/
[21] regulation (eu) no 910/2014 of the european parliament and of the council of 23 july 2014 on electronic identification and trust services for electronic transactions in the internal market and repealing directive 1999/93/ec. online [accessed 9 january 2023] https://eur-lex.europa.eu/legal-content/en/txt/?uri=uriserv:oj.l_.2014.257.01.0073.01.eng
[22] p. nikander, t. elo, t. mustapää, p. kuosmanen, k. hovhannisyan, o. maennel, c. brown, j. dawkins, s. rhodes, i. smith, d. hutzschenreuter, h. weber, w. heeren, s. schönhals, t. wiedenhöfer, document specifying rules for the secure use of dcc covering legal aspects of metrology, zenodo, 2020.
doi: 10.5281/zenodo.3664211

characterization of laser doppler vibrometers using acousto-optic modulators

acta imeko issn: 2221-870x december 2020, volume 9, number 5, 361 – 364

michael gaitan1, jon geist2, benjamin j. reschovsky3, ako chijioke4
1 nist, gaithersburg md, usa, michael.gaitan@nist.gov
2 nist, gaithersburg md, usa, jon.geist@nist.gov
3 nist, gaithersburg md, usa, benjamin.reschovsky@nist.gov
4 nist, gaithersburg md, usa, akobuije.chijioke@nist.gov

abstract: we report on a new approach to characterize the performance of a laser doppler vibrometer (ldv). the method uses two acousto-optic modulators (aoms) to frequency shift the light from an ldv by a known quantity to create a synthetic velocity shift that is traceable to a frequency reference. results are presented for discrete velocity shifts and for sinusoidal velocity shifts that would be equivalent to what would be observed in an ideal accelerometer vibration calibration. the method also enables the user to sweep the synthetic vibration excitation frequency to characterize the bandwidth of an ldv together with its associated electronics.

keywords: laser vibrometer; vibration calibration; acousto-optic modulator

1. introduction

following the iso standard 16063-41, method for the calibration of vibration and shock transducers – part 41: calibration of laser vibrometers [1], laser doppler vibrometers (ldvs) are calibrated by a comparison-type measurement against a laser homodyne interferometer that is defined as the primary standard. not covered by iso 16063-41, but applicable for the case where all the components of the ldv system and their associated uncertainties are known, methods can be employed for the direct determination of the measurement uncertainty of the ldv [2], [3] or by using a combination of a heterodyne with a homodyne-quadrature configuration [4].
the technology for manufacturing commercial ldv systems has matured, as has their use in commercially available primary vibration calibration systems. these systems require calibration by the manufacturer over a periodic time interval that is typically one year and are traceable to the système international d'unités (si) through the manufacturer. from the end user perspective, the commercially manufactured ldv system is like a "black box", meaning that the design and internal components of the ldv are not known in detail by the user. therefore, following an uncertainty determination approach described in [2] or [3] is not possible for such commercial black-box systems, especially if their internal workings are considered proprietary by the manufacturer. the only possibility provided by the standard for calibrating such commercial ldv systems is therefore to follow iso 16063-41 and compare them to a primary heterodyne system, resulting in the ldv system being considered a secondary system. a challenge therefore remains in the adoption of cost-effective commercial ldv systems by national measurement institutes (nmis), who are responsible for the direct determination of uncertainty. towards this end, we have recently reported on the calibration of laser heterodyne velocimeters using shock excitations and total distance travelled [5], [6]. one advantage of that method is that it characterizes the entire measurement system under the same conditions that would be used in an accelerometer shock calibration and could as well be included as part of the accelerometer shock calibration. however, a drawback of the method is that it does not characterize the frequency response and bandwidth of the ldv. the bandwidth of the excitation must not exceed the bandwidth of the ldv in order to produce accurate accelerometer shock calibrations. this drawback motivated us to develop a new method to characterize the bandwidth of the ldv system and resulted in the method that we present in this report.

2. experimental design

figure 1 shows a diagram and photograph of the acousto-optic modulator (aom)-based ldv characterization system that we have developed. the laser light is first collimated using 300 mm and 30 mm lenses to create a beam diameter that is compatible with the aperture of the aoms. the beam passes through the 1st aom, where it is downshifted in frequency by f. next it passes through the 2nd aom, where it is upshifted by a frequency f + δ. the beam is then reflected back along its path with a mirror, doubling the effect of the aoms and delivering light to the ldv that is shifted in frequency by

2δ = 2 ((f + δ) − f) . (1)

the reason why we use two aoms to produce the frequency shift is that a single aom cannot generate frequency shifts of the order of 1 mhz or less, as required. the velocity v reported by the ldv is related to the frequency shift 2δ and the wavelength of the laser by the doppler equation

v = ½ λ (2δ) = λ δ , (2)

where the laser wavelength λ = 632.81 nm [7].

3. results

the experimental results that we report were obtained using [8] a commercially available ldv system that includes a polytec ofv-503 sensor head, an ofv-5000 vibrometer controller, and a polytec data management system that were interfaced with the design shown in figure 1.
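before turning to the measured data, equations (1) and (2) lend themselves to a quick numerical check. the short sketch below is our illustration, not part of the experiment's software; the trial frequency shifts δ are arbitrary assumptions.

```python
# illustration only (not part of the experiment's software): a numeric check
# of equations (1) and (2). the trial frequency shifts below are assumptions.
WAVELENGTH = 632.81e-9  # laser wavelength lambda in m [7]

def synthetic_velocity(delta_hz: float) -> float:
    """velocity an ideal ldv reports for a net aom frequency shift delta."""
    # double pass through the two aoms: total shift 2*((f + delta) - f) = 2*delta;
    # the doppler relation v = (lambda/2)*(2*delta) then reduces to lambda*delta.
    return WAVELENGTH * delta_hz

for delta in (100e3, 500e3, 1.0e6):  # trial shifts in hz
    print(f"delta = {delta/1e3:8.1f} khz  ->  v = {synthetic_velocity(delta):.5f} m/s")
```

for example, a 1 mhz shift corresponds to a synthetic velocity of about 0.633 m/s, which is the scale of the velocities discussed below.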
the aoms (brimrose tef-110-50-633) were driven by two national instruments pxie-5650 sinusoidal radio frequency (rf) signal generators. an agilent 3458a digital multimeter was also used to measure root mean squared (rms) voltage for sinusoidal excitations. the signals from the rf signal generators were amplified using mini-circuits zhl-2010+ rf amplifiers connected to each of the aoms. the base frequency f for our results was selected to be 110 mhz, corresponding to the center frequency of the aoms. the zero frequency (dc) and transient velocity readings that we report were obtained using the polytec data management system. the rms readings that we report were obtained using the rms multimeter.

figure 1: diagram (upper) and photograph (lower) of the acousto-optic modulator (aom)-based ldv characterization system.

3.1. results for fixed frequency shift

figure 2 shows the relationship between the frequency shift δ, the reported velocity from the ldv, and the calculated velocity using the doppler equation (2). these results were obtained with the vibrometer controller set to vd-09, with a corresponding amplification factor of 0.5 m/s/v. the resulting dc voltage was sampled 2048 times at 204800 samples/s with no filtering using the polytec data management system. the data were averaged, and the standard deviation was determined. the ldv exhibited a 0.0012 m/s offset when directed onto a non-moving surface without the aoms in the beam path, as well as when the frequency shift δ was set to zero. this offset was subtracted from the measured results depicted in the figure.

figure 2: plot of the velocity reported by the ldv as a function of frequency shift δ in blue and the corresponding calculated values using the doppler equation (2).

the data in figure 2 are replotted in figure 3 in terms of the percent difference between the velocity reported by the ldv (with the offset subtracted) and the calculated velocity from the doppler shift equation (2). the data show a maximum percent difference of ± 0.04 % over the frequency range that was tested. the manufacturer of the ldv reports a 1 % uncertainty in their instrument specifications, while the frequency shift that we can produce with the signal generators is orders of magnitude smaller in uncertainty.

figure 3: percent difference between the velocity reported by the ldv (with the offset subtracted) and the calculated velocity using the doppler shift equation (2).

3.2. results for sinusoidal excitation

in this experiment the 2nd aom was excited using the national instruments pxie-5650 rf signal generator with a sinusoidal frequency modulation to create a synthesized vibration measurement. the goal of this experiment was to characterize the bandwidth of the ldv system. in this experiment, the root mean squared (rms) voltage from the analog output of the polytec ofv-5000 vibrometer controller was measured using the rms multimeter and converted to rms velocity using the vd-09 gain factor of 0.5 m/s/v. the sinusoidal modulation at the 110 mhz base frequency was swept from 100 hz to 3 mhz. figure 4 shows that the ldv vibrometer controller has a uniform response up to 1 mhz and drops off beyond that frequency.

figure 4: frequency response of the ldv using sinusoidal frequency modulation of the 2nd aom to create a synthesized vibration measurement condition.

3.3. results for velocity step function excitation

in this experiment the 2nd aom was excited using an hp 83650b 10 mhz to 50 ghz rf swept signal generator to provide a capability to frequency modulate an arbitrary analog signal. an agilent 33250a arbitrary waveform generator was used to produce a 1 hz square wave alternating from 0 mv to 300 mv for frequency modulation to simulate a step function for synthesized velocity. the analog velocity signal from the vibrometer controller was digitized using the polytec data management system set at its maximum sampling rate of 204800 samples/s. the resulting response shown in figure 5 includes the effects of the ldv as well as the digital acquisition system, which would be expected to have a maximum bandwidth of 102400 hz.

4. summary

our results show that this approach is immediately useful as a tool for characterizing the dc, sinusoidal steady-state, and transient response of an ldv as a whole system, together with its data acquisition and control electronics and amplifiers. the dc response we reported exhibited a maximum of ± 0.04 % difference between the measured value and the calculated value based on the doppler equation, which is in good agreement with what we had reported earlier using shock excitations and total distance travelled [5], and is well within the 1 % accuracy specified by the manufacturer. the ldv system bandwidth of 1 mhz determined by sinusoidal excitations is in good agreement with what the manufacturer specifies. lastly, the velocity step function experiment serves as an example that it is possible to create complex velocity profiles to test the response of the ldv together with its data acquisition, control electronics, and amplifiers. our future work is focused on further improvements of the system design and on carrying out a full uncertainty analysis. one improvement that we envision in the design is to measure the frequency shift δ with a photodiode rather than reading it from the signal generators driving the aoms. this could capture any offsets or fluctuations between the signal generators and the light entering the ldv, e.g. due to refractive index fluctuation. we anticipate that after further investigation this method can be used as a tool for primary calibration of laser doppler vibrometers.

5. references

[1] iso standard 16063-41, method for the calibration of vibration and shock transducers – part 41: calibration of laser vibrometers, iso 16063-41:2011(e).
[2] g. siegmund, "sources of measurement error in laser doppler vibrometers and proposal for unified specifications", proc. of spie vol. 7098, 70980y-1, 2008.
[3] n. vlajic, a. chijioke, "traceable dynamic calibration of force transducers by primary means," metrologia 53, s136–s148, 2016.
[4] t. bruns, f. blume, a. täubner, "laser vibrometer calibration at high frequencies using conventional calibration equipment", xix imeko world congress, lisbon, portugal, september 6-11, 2009.
[5] m. gaitan, m. afridi, j. geist, "on the calibration of laser heterodyne velocimeters using shock excitations and total distance travelled", imeko xxii world congress, belfast, 2018.
[6] m. afridi, j. geist, m. gaitan, "primary calibration of the low frequency response of a laser heterodyne velocimeter used in a pendulum-based shock excitation system by si traceable distance measurement", j res natl inst stan, in press.
[7] iso standard 16063-11, methods for the calibration of vibration and shock transducers – part 11: primary vibration calibration by laser interferometry.
[8] certain commercial equipment and instruments are identified in this article in order to describe the experimental procedure adequately. such identification is not intended to imply recommendation or endorsement by the national institute of standards and technology, nor is it intended to imply that the materials or equipment identified are necessarily the best available for the purpose.

figure 5: response of the ldv system to a synthesized velocity step produced by frequency modulation of a 1 hz square wave on the 2nd aom. the time depicted on the x-axis has been magnified to observe the transient of the velocity step.

simulation study of the photovoltaic panel under different operation conditions

acta imeko issn: 2221-870x december 2021, volume 10, number 4, 62 – 66

mohammed alktranee1, péter bencs1
1 department of fluid and heat engineering, faculty of mechanical engineering and informatics, university of miskolc, miskolc, hungary

section: research paper
keywords: photovoltaic; solar radiation; simulation; temperature
citation: mohammed alktranee, péter bencs, simulation study of the photovoltaic panel under different operation conditions, acta imeko, vol. 10, no. 4, article 12, december 2021, identifier: imeko-acta-10 (2021)-04-12
section editor: francesco lamonaca, university of calabria, italy
received march 28, 2021; in final form september 25, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: mohammed alktranee, e-mail: mohammed84alktranee@gmail.com

abstract: an increase in the temperature of the photovoltaic (pv) cells is a significant issue in most pv panel applications. about 15 % – 20 % of the solar radiation is converted to electricity by pv panels, and the rest converts to heat that affects their efficiency. this paper studies the temperature distribution on the pv panel at different solar radiation values and temperatures under different operation conditions in january and july. a 3d model of the pv panel was simulated with ansys software, depending on the various values of temperature and solar radiation obtained using mathematical equations. the simulation results indicate that the pv panel temperature was lower at the lower solar radiation values of january, and the temperature was homogeneous on the pv panel surface. the increase in solar radiation value and temperature in july causes heating of the pv panel, with an observed convergence of the maximum and average temperature of the panel. thus, the pv panel temperature increase is directly proportional to the solar radiation increase, which causes lower performance. cooling the pv panel by passive or active cooling represents the optimum option to enhance performance and avoid an increase of the pv cells' temperature as the ambient temperature rises.

1. introduction

solar energy is the essential branch of renewable energy used in different applications as an alternative to conventional systems, such as solar electrical generation, solar cooling systems, solar heating, etc. [1]. photovoltaic (pv) panels are one solar energy technology used for electrical generation by pv cells made of semiconductor materials. the pv cells convert the sunlight (photon energy) into electrical energy, where about 15 % – 20 % of the sunlight converts to electricity, and the rest converts to heat, which influences their performance [2].
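the efficiency penalty of that waste heat can be sketched with the usual linear efficiency-temperature relation. the snippet below is our illustration: the 0.45 %/°c loss per degree above 25 °c is the figure cited later in this paper [10], while the 17 % reference efficiency is an assumed typical value, not a number from this study.

```python
# illustration only: linear efficiency-temperature relation for a pv module.
# the 0.45 %/degc coefficient is the figure this paper cites [10]; the 17 %
# reference efficiency is an assumed typical value.
ETA_REF = 0.17   # assumed module efficiency at 25 degc
BETA = 0.0045    # relative efficiency loss per degc above 25 degc [10]
T_REF = 25.0     # reference cell temperature in degc

def efficiency(cell_temp_c: float) -> float:
    """module efficiency at a given cell temperature (linear model)."""
    return ETA_REF * (1.0 - BETA * (cell_temp_c - T_REF))

for t in (25.0, 45.0, 65.0, 84.6):  # 84.6 degc is the july maximum reported later
    print(f"{t:5.1f} degc -> efficiency {efficiency(t)*100:.2f} %")
```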
the pv cells' performance and productivity depend mainly on the solar radiation values and the ambient temperature. therefore, at an average temperature of 25 °c and solar radiation of 1000 w/m², the pv cells give a maximum yield, which is taken as the reference power produced by the pv cells [9]. standard test conditions (stc) is a concept defined as the performance of pv cells at a temperature of 25 °c and solar radiation of 1000 w/m². increasing the temperature of the pv panel by one degree above 25 °c reduces the pv cells' efficiency by 0.45 % [10]. therefore, various pv models have been developed to predict the power output of pv modules under different operation conditions [11]. a thermal model was developed to simulate the thermal and electrical performance of pv panels under different operation conditions with and without cooling. the developed model was sequentially linked with electrical and radiation models to evaluate the pv panels' performance. the simulation results show lower pv panel performance with absorbed solar radiation of 800 w/m² and ambient temperatures of 0 °c – 50 °c without cooling. meanwhile, with cooling, the pv panels show a slight performance increase at an ambient temperature of 25 °c and absorbed radiation from 200 w/m² to 1000 w/m² [11]. the ambient temperature and solar radiation are important parameters that influence the pv cells' conversion efficiency and the power produced [12]. another 3d model was simulated with ansys software to evaluate the temperature distribution on the pv panel at different climate conditions and thus analyse the pv panel's thermal behaviour under operating conditions. the results indicate that an increase in the pv panel temperature is associated with increased solar radiation intensity and ambient temperature [13]. another simulation of the pv panel was conducted at a constant temperature with various solar radiation values, and vice versa, to predict the pv model performance and compare it with the pv panel performance under stc. the simulation revealed that a reduction of the solar radiation values, even at constant temperature, affects the voltage and current of the pv panel.
on the other hand, the pv panel performance degraded at high temperatures even with different solar radiation values. thus, lowering the temperature of the pv panel contributes to an increase in output power [14]. the present work aims to calculate the temperature distribution on the pv panel at different solar radiation values and ambient temperatures and then determine the optimum operating condition range of the pv panel. the simulation depended on the layer properties of the pv panel and on the values of solar radiation, temperature, and the convective heat transfer coefficient of the model to evaluate the temperature distribution and identify the appropriate range of operating conditions of the pv panel. the second section discusses the characteristics of the pv panel, the mathematical equations used to predict the solar radiation values, and the important results obtained. the third section discusses the simulation results and compares them with other studies [15], [16] simulated under similar conditions, to investigate the effect of ambient temperature and solar radiation on the pv panel performance.

2. methodology

the pv panel used is of type as-6p30, polycrystalline, consisting of a glass cover, pv cells between two layers of ethylene vinyl acetate (eva), an aluminium frame, and a tedlar (pvf) layer, as shown in figure 1. table 1 shows the material properties of the pv components inserted in the ansys software engineering data. the pv panel datasheet was adopted as a reference for comparing the simulation results with the pv panel values under stc.

figure 1. the layers of the pv panel.

table 1. the layer properties of the pv panel [17].

material (layer) | ρ in kg/m³ | k in w/(m·k) | cp in j/(kg·k)
glass | 3000 | 1.8 | 500
eva | 960 | 0.35 | 2090
pv | 2330 | 148 | 677
pvf | 1200 | 0.2 | 1250
aluminium frame | 2707 | 204 | 996

2.1. geometry simulation

according to the manufacturer datasheet, the pv model's geometry was built in solidworks with model dimensions of 1640 × 992 × 35 mm³. the materials of all the pv panel layers are defined from the ansys library data, as shown in figure 1. the pv model was imported into ansys to analyse the pv panel's temperature distribution; the boundary conditions were inserted in the software as variable values depending on the different values of beam solar radiation obtained using the mathematical equations below. three solar radiation values have been used: two of them for january and july, with an estimation of the clear-sky radiation, and the third under stc. the heat flux changes with time, which causes a change in the temperature that influences the pv panel's performance. the temperature was fixed at 4 °c in january, 35 °c in july, and 25 °c under stc. the value of the convective heat transfer coefficient on the panel is 14.8 w/(m² k), calculated using equation (1) [18]

h = 5.7 + 3.8 Vm , (1)

where Vm is the wind speed, taken as 2.4 m/s according to data for the city of miskolc [19]. the mesh element size of the pv model in the ansys simulation software was 0.002 m, and the number of mesh elements was 278964 with high smoothing, as shown in figure 2. the simulated time was 43.1 min in all initial conditions. figure 3 shows the simulation steps of the pv panel for the january, july, and stc values.

figure 2. the pv panel meshing.
figure 3. simulation steps of the pv panel.

2.2. governing mathematical equations

the mathematical equations were used to predict and estimate the direct radiation transmittance (beam radiation) for the city of miskolc with a clear atmosphere on 7 january at solar time 11:30 am and on 13 july at solar time 12:30 pm. to achieve that, some parameters need to be found, such as the angular position of the sun: the solar declination δ represents the angle between the line from the centre of the sun to the centre of the earth and the plane of the equator. the solar declination value changes because of the rotation of the earth around the sun and the tilt of the earth on its axis of rotation, and is found by using cooper's equation [20]

δ = 23.45° sin(360 (284 + n) / 365) , (2)

where n is the day of the year of the selected date (n = 7 for 7 january and n = 194 for 13 july). the zenith angle is the angle of incidence of beam solar radiation between the vertical and the line to the sun, used with horizontal surfaces, and is determined by the following equation

cos θz = cos φ cos δ cos ω + sin φ sin δ , (3)

with latitude φ between −90° ≤ φ ≤ 90° (for miskolc, hungary, φ is 48.1°) and ω the hour angle, which is negative in the morning and positive in the afternoon. solar time depends on the sun's angular motion in the sky, which may not synchronize with local time [21]. the extraterrestrial radiation incident on the normal plane, Gon, is the quantity of solar energy received per unit of time at the mean sun-earth distance; it can be calculated by equation (4)

Gon = Gsc (1 + 0.033 cos(360 n / 365)) , (4)

where Gsc is the solar constant. calculating the daily and hourly solar radiation received on a horizontal surface is useful under standard conditions. to calculate the beam radiation transmitted from the sun without scattering by a clear atmosphere, the zenith angle and the altitude of the atmosphere are considered for four climate types, from equation (5) [21]

τb = a0 + a1 e^(−k / cos θz) , (5)

a0* = 0.4237 − 0.00821 (6 − A)² , (6)

a1* = 0.5055 + 0.00595 (6.5 − A)² , (7)

k* = 0.2711 + 0.01858 (2.5 − A)² , (8)

where a0*, a1*, and k* are the constants for the standard atmosphere with 23 km visibility, found by using equations (6), (7), and (8), in which A represents the altitude of the observer in kilometres. hence, multiplying these constants by the correction factors in table 2, a0 = r0 a0*, a1 = r1 a1*, and k = rk k*, gives the transmittance of the beam radiation. thus, the beam radiation can be determined for any zenith angle and any altitude up to 2.5 km. the normal beam radiation in the clear sky can be found by multiplying the beam radiation transmittance of the clear atmosphere τb by the extraterrestrial radiation incident Gon. the result is multiplied by the cosine of the zenith angle to obtain the solar radiation value on the horizontal plane of the panel, Gcb [22].

table 2. correction factors for climate types [20].

climate type | r0 | r1 | rk
tropical | 0.95 | 0.85 | 1.02
mid latitude winter | 1.03 | 1.01 | 1
subarctic summer | 0.99 | 0.99 | 1.01
mid latitude summer | 0.97 | 0.99 | 1.02
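the clear-sky beam radiation estimate built from equations (2)-(8) can be sketched as a short calculation. the code below is our illustration, not the paper's software: the solar constant of 1367 w/m², the assumed altitude of about 0.12 km for miskolc, and the hour angles of ±7.5° (solar times 11:30 and 12:30) are assumptions, and the correction factors are taken from the mid-latitude winter and summer rows of table 2. with these inputs it reproduces values close to the 176 w/m² and 735 w/m² used in this paper.

```python
# illustration only: clear-sky beam radiation from equations (2)-(8)
# (cooper's declination, zenith angle, hottel's transmittance).
import math

GSC = 1367.0  # solar constant in W/m2 (assumed value)

def clear_sky_beam(n: int, lat_deg: float, hour_angle_deg: float,
                   altitude_km: float, r0: float, r1: float, rk: float) -> float:
    decl = 23.45 * math.sin(math.radians(360.0 * (284 + n) / 365.0))       # eq. (2)
    phi, delta, omega = map(math.radians, (lat_deg, decl, hour_angle_deg))
    cos_tz = (math.cos(phi) * math.cos(delta) * math.cos(omega)
              + math.sin(phi) * math.sin(delta))                           # eq. (3)
    gon = GSC * (1.0 + 0.033 * math.cos(math.radians(360.0 * n / 365.0)))  # eq. (4)
    a0 = r0 * (0.4237 - 0.00821 * (6.0 - altitude_km) ** 2)                # eq. (6)
    a1 = r1 * (0.5055 + 0.00595 * (6.5 - altitude_km) ** 2)                # eq. (7)
    k = rk * (0.2711 + 0.01858 * (2.5 - altitude_km) ** 2)                 # eq. (8)
    tau_b = a0 + a1 * math.exp(-k / cos_tz)                                # eq. (5)
    return gon * tau_b * cos_tz  # beam radiation on the horizontal, W/m2

# miskolc (phi = 48.1 deg): 7 january at 11:30 and 13 july at 12:30 solar time
print(clear_sky_beam(n=7, lat_deg=48.1, hour_angle_deg=-7.5,
                     altitude_km=0.12, r0=1.03, r1=1.01, rk=1.0))   # winter row
print(clear_sky_beam(n=194, lat_deg=48.1, hour_angle_deg=7.5,
                     altitude_km=0.12, r0=0.97, r1=0.99, rk=1.02))  # summer row
```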
3. results and discussion

the pv panel has been simulated under different solar radiation values, estimated by the mathematical equations, and various temperatures. for january the solar radiation value was 176 w/m², for july it was 735 w/m², and under stc it was 1000 w/m². january's highest temperature was 4 °c, july's highest temperature reaches 35 °c, and the pv panel temperature under stc was 25 °c. the results indicate that the temperature distribution on the pv panel surface was high at the top of the pv panel, while the aluminium frame's temperature was lower. the dark-blue colour refers to the minimum temperature on the panel, the bright-red colour refers to the maximum temperature on the panel, and the other colours represent temperature variations.

in january, when the temperature is low, the beam solar radiation is 176 w/m². the temperature distribution on the pv panel is between a maximum temperature of 15.4 °c and a minimum of 11.9 °c, as shown in figure 4. thus, this solar radiation range influences the pv panel performance, represented by the power output, without causing any damage or overheating, because of the lower pv panel temperature.

applying the july solar radiation of 735 w/m², when the temperature reaches 35 °c, the simulation shows that the pv panel is exposed to a maximum temperature of 84.6 °c, while the minimum temperature was 68.4 °c, as shown in figure 5. increasing the pv panel temperature by 10 °c above the stc value causes a decrease in its efficiency. this temperature range causes other problems for the pv panel, such as overheating, which leads to the burning of some cells or reduced voltage, power, and output current of the pv panel; thus, the pv panel becomes inoperative [23].

under the stc conditions of the pv panel, 1000 w/m² and 25 °c, the pv panel temperature was observed to rise up to 92.5 °c, while the lowest temperature was 67.7 °c, as shown in figure 6. therefore, this temperature range is not suitable for pv panel operation due to the rise of the pv cells' temperature, and it causes the same problems as the july values, such as overheating of the pv cells or reduced conversion efficiency and production of the pv panel. the temperature increase of the pv panel is directly proportional to the increasing solar radiation. due to the different properties of the pv layers, the temperatures of these layers are different, as shown in the figures; the temperature of the pv panel surface is different from that of the aluminium frame. the maximum and average temperatures on the pv panel converge somewhat with increasing solar radiation, as shown in table 3, for the july and stc values. cooling of the pv panel to remove the excessive heat is important for it to continue working under high temperatures, even with stc values.

table 3. temperatures on the pv panel at different solar radiation.

solar radiation in w/m² | max temperature in °c | min temperature in °c | average temperature in °c
176 | 15.8 | 11.4 | 15.2
735 | 84.6 | 66.3 | 82.1
1000 | 92.5 | 67.7 | 89.1

figure 4. at solar radiation 176 w/m² and temperature 4 °c.
figure 5. at solar radiation 735 w/m² and temperature 35 °c.
figure 6. at solar radiation 1000 w/m² and temperature 25 °c.

4. conclusion

the pv panel was simulated with ansys software according to the material properties of the pv layers, such as the thermal conductivity, density, and specific heat of each layer. the climatic conditions of the city of miskolc, solar radiation, temperatures, and wind speed, were used as parameters in the simulation. the solar radiation values have been estimated using the mathematical equations for january and july, together with the stc standard value. the pv panel showed a variable thermal behaviour with the change in solar radiation value, which became evident in the pv panel's temperature distribution. thus, a low solar radiation value and temperature influence the pv panel's performance to an extent that does not cause damage to the pv panel but reduces production. on the other hand, increasing solar radiation causes an increase in the pv panel's temperature, which causes a reduction in power output together with damage to the pv panel. a convergence of the maximum and average temperature was observed with increasing solar radiation, which causes a rise of the pv cells' temperature and requires cooling of the pv panel to remove the excessive heat.

references

[1] v. v. tyagi, n. a. a. rahim, n. a. rahim, j. selvaraj, progress in solar pv technology: research and achievement, renew sustain energy rev, 20 (2013), pp. 443-61.
doi: 10.1016/j.rser.2012.09.028
[2] m. c. browne, b. norton, s. j. mccormack, heat retention of photovoltaic/thermal collector with pcm, sol energy, 133 (2016), pp. 533–48. doi: 10.1016/j.solener.2016.04.024
[3] loredana cristaldi, mohamed khalil, payam soulatiantork, a root cause analysis and a risk evaluation of pv balance of systems failures, acta imeko 6 (2017) 4, pp. 113-120. doi: 10.21014/acta_imeko.v6i4.425
[4] loredana cristaldi, mohamed khalil, marco faifer, markov process reliability model for photovoltaic module failures, acta imeko 6 (2017), pp. 121-130. doi: 10.21014/acta_imeko.v6i4.428
[5] aarti kane, vishal verma, bhim singh, optimization of thermoelectric cooling technology for active cooling of a photovoltaic panel, renewable and sustainable energy reviews 75 (2017), pp. 1295-1305. doi: 10.1016/j.rser.2016.11.114
[6] fesharaki vj, dehghani m, fesharaki jj, the effect of temperature on photovoltaic cell efficiency, proceedings of the 1st international conference on etec, tehran, iran, 2011.
[7] borkar ds, prayagi sv, gotmare j, performance evaluation of photovoltaic solar panel using thermoelectric cooling, international journal of engineering research 9 (2014), pp. 536-539.
[8] fontenault b, active forced convection photovoltaic/thermal panel efficiency optimization analysis, rensselaer polytechnic institute hartford, 2012.
[9] skoplaki e, palyvos ja, on the temperature dependence of photovoltaic module electrical performance: a review of efficiency/power correlations, solar energy 83 (2009), pp. 614–624.
[10] solar facts: photovoltaic efficiency inherent and system. online [accessed 8 december 2021] https://www.solar-facts.com/panels/panel-efficiency.php
[11] g. m. tina, s. scrofani, electrical and thermal model for pv module temperature evaluation, ieee, 2008. doi: 10.1109/melcon.2008.4618498
[12] cătălin george popovici, sebastian valeriu hudișteanu, theodor dorin mateescu, nelu-cristian cherecheș, efficiency improvement of photovoltaic panels by using air cooled heat sinks, energy procedia 85 (2016), pp. 425–432. doi: 10.1016/j.egypro.2015.12.223
[13] leow w z, irwan y m, safwati i, irwanto m, amelia a r, syafiqah z, fahmi m i and rosle n, simulation study on photovoltaic panel temperature under different solar radiation using a computational fluid dynamic method, journal of physics: conference series, iop 1432 (2020). doi: 10.1088/1742-6596/1432/1/012052
[14] mohammed alktranee and péter bencs, test the mathematical of the photovoltaic model under different conditions by use matlab–simulink, journal of mechanical engineering research and developments vol 43 (2020), pp. 514-521.
[15] marek jaszczur, qusay hassan, janusz teneta, ewelina majewska, marcin zych, an analysis of temperature distribution in solar photovoltaic module under various environmental conditions, matec web of conferences 240, 04004 (2018). doi: 10.1051/matecconf/201824004004
[16] n. pandiarajan and ranganath muthu, mathematical modeling of photovoltaic module with simulink, international conference on electrical energy systems (icees 2011) (2011), pp. 3-5. doi: 10.1109/icees.2011.5725339
[17] s. armstrong, w. g. hurley, a thermal model for photovoltaic panels under varying atmospheric conditions, applied thermal engineering 30 (2010), pp. 1488-1495. doi: 10.1016/j.applthermaleng.2010.03.012
[18] j. vlachopoulos, d. strutt, heat transfer, in spe plastics technicians toolbox 2 (2002), pp. 21-33.
[19] weather and climate, miskolc, hungary. online [accessed 5 march 2021] https://weather-and-climate.com/average-monthly-rainfall-temperature-sunshine,miskolc,hungary
[20] cooper, p. i, the absorption of solar radiation in solar stills, solar energy 3 (1969), pp. 333-346. doi: 10.1016/0038-092x(69)90047-4
[21] john a. duffie, william a. beckman, solar engineering of thermal processes, fourth edition. isbn 978-0-470-87366-3 (2013).
[22] hottel, h. c, a simple model for estimating the transmittance of direct solar radiation through clear atmospheres, solar energy 2 (1976), pp. 129-134. doi: 10.1016/0038-092x(76)90045-1
[23] bodnár, i., iski, p., koós, d., skribanek, á, examination of electricity production loss of a solar panel in case of different types and concentration of dust, advances and trends in engineering sciences and technologies iii (2019), pp. 313-318. doi: 10.1201/9780429021596-49

introductory notes for the acta imeko special issue on the 40th measurement day jointly organised by the italian associations gmee and gmtt

acta imeko issn: 2221-870x december 2021, volume 10, number 4, 8 – 9

carlo carobbi1, nicola giaquinto2, gian marco revel3
1 department of information engineering, università degli studi di firenze, via s. marta 3, 50139 firenze, italy
2 department of electrical and information engineering, politecnico di bari, via e. orabona 4, 70125 bari, italy
3 department of industrial engineering and mathematical sciences, università politecnica delle marche, via brecce bianche 12, 60131 ancona, italy

section: editorial
citation: carlo carobbi, nicola giaquinto, gian marco revel, introductory notes for the acta imeko special issue on the 40th measurement day jointly organised by the italian associations gmee and gmtt, acta imeko, vol. 10, no.
4, article 5, december 2021, identifier: imeko-acta-10 (2021)-04-05
received december 13, 2021; in final form december 13, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: carlo carobbi, e-mail: carlo.carobbi@unifi.it

dear readers,
the measurement day is an annual italian event, founded in 1982 by the late prof. mariano cunietti. with inspired intuition, cunietti conceived the event as a place where engineers, scientists, logicians, and epistemologists could talk to each other, investigating the very foundations of the act of measuring. at that time the event lasted two days and was a beloved appointment for many italian people passionate about measurements, including scholars, professionals, and simple amateurs, with any cultural background. in its long history, the format of the measurement day has evolved, adapting to the times, but always remaining a much-appreciated annual appointment for people interested in measurement. nowadays, it involves the public discussion of invited presentations for half a day, and is mainly centred on advancements and updates in measurement technology and standards, including the activities of international metrological bodies, contributions from calibration laboratories, discussions on historical perspectives and modern trends in the measurement world, etc. we can affirm that the measurement day has kept intact its vocation of bringing together people from different backgrounds, i.e. academia, industry, laboratories, accreditation bodies, etc., and is still a much appreciated event among "measurers". even if it is an event of national dimension, the measurement day involves high-level presentations of international standing by recognized experts. therefore, we welcomed with pleasure, on its fortieth anniversary, the invitation of prof. lamonaca to devote a special section of acta imeko to the event. the 40th edition of the measurement day, organized in 2021 by the italian associations of electrical and electronic measurements (gmee) and of mechanical and thermal measurements (gmmt), was titled "comfort measurements between research, foundations, and industrial applications". invited contributions have come from both young and expert researchers, in fields ranging from standardized measurements to psychology. below, we explain the rationale of the chosen theme for the 40th measurement day and give a brief account of the papers included in the special section. we chose the theme of the event starting from the consideration that the environment affects the well-being, health, and emotional state of people, and it is therefore of great importance – and even more in times of pandemics – to guarantee the quality of the indoor life experience. for this purpose, it is essential to measure comfort accurately. but what exactly does it mean to measure comfort accurately? the question has no univocal answer, because we are not dealing with a physical quantity that can be defined regardless of the subject. a key recent trend is to integrate the measurement of actual physiological data, for example with wearable sensory devices, with environmental measurements. the standard predictive models need to be updated, as well as the data analysis methodologies. artificial intelligence has been successfully used in this area.
comfort measurements do not only involve an update of hardware and software technologies: they require attention to the psychological aspects. how is the problem of measuring comfort (and, in general, the problem of measurement) viewed in psychology and the humanities? how does the problem fit from a psychological perspective? the bipm has very recently circulated the "committee draft" of the new international vocabulary of metrology, the vim4. one of the purposes of vim4 is precisely to bring order to the measurement of "things" fundamentally different from the familiar physical quantities encoded in the international system, for example by introducing the definitions of measurement scale, ordinal scale, and nominal scale. on what scale do the comfort measurements fit? how are they different from measurements of physical quantities, and what do these differences entail? the study of comfort measurement thus offers the opportunity to present some of the main innovations of the new vocabulary.

as regards the papers of this acta imeko special section, there are five manuscripts that expand the scientific concepts and applications presented during the event, held online on the 30th of march 2021.

in the manuscript 'is our understanding of measurement evolving?', authored by mari, an analysis is proposed from an evolutionary perspective, trying to answer some fundamental questions, such as: what kind of knowledge do we obtain from a measurement? what is the source of the acknowledged special efficacy of measurement? the measurement process is traditionally understood as a quantitative empirical process, but in the last decades measurement has been reconsidered in its aims, scope, and structure. comfort measurements are an application showing the importance and relevance of a re-examination of measurement fundamentals, as suggested in this paper.

the paper 'application of wearable eeg sensors for indoor thermal comfort measurements', by mansi et al., presents a measurement protocol and signal processing approach to use wearable eeg (electroencephalography) sensors for human thermal comfort assessment. results reported from the experimental campaign confirm that thermal sensation can be detected by measuring brain activity with low-cost and wearable eeg devices.

the paper entitled 'impact of the measurement uncertainty on the monitoring of thermal comfort through ai predictive algorithms', authored by morresi et al., proposes an approach to assess uncertainty in the measurement of human thermal comfort by using an innovative method that exploits a heterogeneous set of data, made of physiological and environmental quantities, and artificial intelligence (ai) algorithms. uncertainty estimation has been developed by applying the monte carlo method (mcm), given the complexity of the measurement method.

the paper entitled 'an iot measurement solution for continuous indoor environmental quality monitoring for buildings renovation', by serroni et al., proposes an innovative iot sensing solution, the "comfort eye", specifically applied for continuous and real-time indoor environmental quality (ieq) measurement in occupied buildings during the renovation process. ieq monitoring allows investigating the building's performance to improve energy efficiency and the occupants' well-being at the same time.
finally, considering the correlation between stress and discomfort, in the article 'continuous measurement of stress levels in naturalistic settings using heart rate variability: an experience-sampling study driving a machine learning approach' cipresso and colleagues repeatedly measured physiological signals in 15 subjects in order to find a model of stress level in daily scenarios. using the experience sampling method, cipresso and colleagues were able to collect stressful moments reported by the 15 subjects through questionnaires. using a machine learning approach, the authors built a model to predict stressful situations based on physiological indexes that rely on cardiovascular measurements, i.e. based only on the electrocardiogram or on similar measures such as blood volume pulse extracted with a photoplethysmograph.

carlo carobbi, nicola giaquinto and gian marco revel
guest editors

standards and affordances of 21st-century digital learning: using the experience application programming interface and the augmented reality learning experience model to track engagement in extended reality

acta imeko issn: 2221-870x september 2022, volume 11, number 3, 1 – 6

jennifer wolf rogers1, karen alexander2
1 people accelerator, the woodlands, texas, usa
2 xrconnected, pittsburgh, pennsylvania, usa

section: technical note
keywords: xapi; arlem; augmented reality; virtual reality; data transformation; capability development; training; learning; education
citation: jennifer wolf rogers, karen alexander, standards and affordances of 21st-century digital learning: using the experience application programming interface and the augmented reality learning experience model to track engagement in extended reality, acta imeko, vol. 11, no. 3, article 7, september 2022, identifier: imeko-acta-11 (2022)-03-07
section editor: zafar taqvi, usa
received march 1, 2022; in final form september 16, 2022; published september 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: karen alexander, phd, karen@xrconnected.com

abstract: the development of new extended reality (xr) technologies for learning enables the capture of a richer set of data than has previously been possible. to derive a benefit from this wealth of data requires new structures appropriate to the new learning activities these immersive learning tools afford. the experience application programming interface (xapi) and the augmented reality learning experience model (arlem) standards have been developed in response, and their adoption will help ensure interoperability as xr learning is more widely deployed. this paper briefly describes what is different about xr for learning and provides an account of xapi and its structures as well as design principles for its use. the relationship between environmental context and arlem is explained, and a case study of a vr experience using xapi is examined. the paper ends with an account of some of the promises for collecting data from early childhood learning experiences and an unsuccessful attempt at a study using augmented reality with young children.

1. introduction

as the delivery of learning has been increasingly digitized over the past few decades, standards such as scorm, the sharable content object reference model, have helped to ensure interoperability across learning management systems (lms). but the 21st century has brought new, powerful tools for learning that enable educational and training experiences far richer than those available via a desktop or laptop computer. virtual, augmented, and mixed reality (vr, ar, mr) bring affordances to learning that far surpass what was previously available. immersive, embodied learning in 3d environments, with interactive 3d objects and collaborative engagements with teachers or other learners, will revolutionize education and training as we know it. for these reasons, new standards that can aid in the capture of data about a learner's experience have arisen, notably the augmented reality learning experience model (arlem) and the experience application programming interface (xapi). with xapi and arlem, specific learner behaviour can be directly tracked and measured as it is shaped and/or changes in a specific interaction, thus permitting predictions of transfer from knowledge to demonstrable skill. adoption of these standards is key to avoiding silos of information and data around associated learner development and behaviour change encoded in different systems and formats that make communication across them difficult. in section 2 we discuss how new technologies for learning demand new means of assessment. section 3 introduces xapi, its structure, and design principles for interoperable data structures. in section 4, we describe the augmented reality learning experience model, or arlem. a case study illustrates xapi in virtual reality in section 5. in section 6, the challenges and opportunities in using ar and vr and xapi in early childhood education are outlined. in conclusion, we point toward possibilities for future developments in the fields of extended reality (xr) for training and interoperable standards for data collection.

2. new ways of learning, new ways of measuring

among the affordances of virtual reality that make it such an effective tool for learning is the way that it engages the senses of proprioception and presence while permitting movement and gesture as the learner interacts with the virtual world and content within it. research has shown that large gestures and bodily movements enhance the acquisition and retention of knowledge, and vr is able to deliver experiences that allow such movement, unlike the small, hands-only interactions we have when using a computer keyboard interface for digital learning [1]. ar and mr deliver digital content within a real-world environment, and such content appears to the learner as present in space alongside them. the learner can modify the size of the virtual objects with which they are interacting or rotate them for a different view, walk around them, and experience the content in a fully embodied, present manner. xr technologies engage the body as it is inhabited in space and tap into embodied cognition.
as researcher mina johnson-glenberg notes, "human cognition is deeply rooted in the body's interactions with the world and our systems of perception." but johnson-glenberg also acknowledges that "for several decades, the primary interface in educational technology has been the mouse and keyboard; however, these are not highly embodied interface tools" [1]. xr permits the collection of data from bodily movements and interactions. one of the benefits of this can be seen in recent research showing that the combination of physical movement and a gaming element in learning experiences for children not only enhances their ability to learn, but in fact also enhances cognitive function [2], [3]. with such new tools that can engage the body, "we need better learning analytics for real-time and real-world interaction" [4].

3. the experience api

3.1. experience (xapi) data

the experience api (xapi) is a modern data standard that is uniquely positioned to measure and track human behaviour in learning scenarios in a manner that more closely approximates behaviour in everyday situations and experiences. xapi differs from traditional, compliance-based, linear forced-choice methods of measuring learning (such as scoring responses to a finite series of multiple-choice questions and distractors). instead, it takes a more dynamic and analytical approach, acknowledging the often less-than-predictable nature of human response, and allowing learners to immerse themselves in a variety of contexts and respond organically (and, in some cases, automatically, without an intentional initiation of cognitive processing) to a variety of stressors and/or motivators in a manner that more closely approximates their true everyday behaviour in the real world. additionally, xapi provides a standard javascript object notation (json) syntax that facilitates the collection and subsequent analysis of these learner behaviours across learning scenarios and experiences, providing assurances around the degree of predictability of a learner response over time, as well as the opportunity to correlate behaviour in practice/simulated environments with behaviour from the same individual/group of individuals, and potential real-world outcomes arising from that behaviour, in the physical world.
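as an illustration of the json syntax just described, the following minimal python sketch builds one xapi-style statement following the actor/verb/object/context/results pattern (see table 1 in the next subsection). the uris, names, and extension keys are invented placeholders for this sketch, not identifiers from any published xapi profile.

import json

# one xapi-style statement mirroring the adult-learner example of table 1.
# all identifiers below are illustrative assumptions, not normative values.
statement = {
    "actor": {"name": "jane", "mbox": "mailto:jane@example.com"},
    "verb": {"id": "https://example.com/verbs/released",
             "display": {"en-US": "released"}},
    "object": {"id": "https://example.com/objects/pressure-valve",
               "definition": {"name": {"en-US": "the pressure valve"}}},
    "context": {"extensions": {
        "https://example.com/context/operating-parameters":
            "began to trend out of limits"}},
    "result": {"extensions": {
        "https://example.com/results/outcome": "pressure stabilization"}},
}
print(json.dumps(statement, indent=2))

a statement built this way can be posted to any learning record store that accepts xapi-formatted json, which is what makes the syntax portable across scenarios and platforms.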
3.2. key considerations for structuring data collection using xapi

though xapi is a broad standard and syntax aimed at increasing the efficacy of all modalities associated with learning experience, immersive environments are particularly well-suited to this type of measurement. in xr experiences, there is a unique opportunity for learners to embody specific roles and interact with specific environments and objects, as well as demonstrate behaviours that are highly analogous to the physical world. when structuring xapi data collection in these immersive scenarios, it is important to consider the ways in which people, objects, and actions/behaviours might interact with one another, the outcomes and/or consequences that may arise from these interactions, and how these interactions and interdependencies might be captured. ideally, data collection will be structured using the base syntax (see table 1) to answer the following types of questions:
● what is the environmental context?
● who/what is present with the learner in this environment?
● how does the environment (and/or people or objects within it) trigger a learner behaviour?
● how may learner behaviour be described in such a way that it may be generalized across other immersive environments? is this behaviour tracked in the physical world in the form of metrics and/or key performance indicators (kpis)? how is it described/reported?
● does the environmental context change as a result of the learner's behaviour(s)? is there a natural resolution to a specific problem/tension? if so, how is this described?

table 1. xapi json syntax fields, with adult and child learner examples.

json syntax field | field description | adult learner example | child learner example
subject/actor | identifiers and/or descriptors for individual learners/groups | jane | jill
verb | action that learner takes in the scenario | released | donned
object | object and/or person that learner interacts with | the pressure valve | a coat
context | optional extension information to describe the context of the behaviour (e.g. location coordinates, physical barriers, emotional factors, etc.) | …when operating parameters [began to trend out of limits] | …when snow fell from the sky
results | optional extension information to describe results and/or resolution of the initial context, based upon learner action | …which resulted in pressure stabilization | …which resulted in health stats increasing

3.3. key design principles for human metrics in xr

beyond the decisions driving specific ontologies and/or taxonomies to describe json statement syntax within specific interactions/scenarios, there are also key design principles to consider when designing an overall interoperable data structure meant to support and analyse this syntax:

1. human metrics design should be scalable. when constructing data schemas and frameworks to support xapi, it is imperative that human-centred design is employed to ensure that the resulting metrics include verbs, results, and contexts that may be shared across a wide variety of xr scenarios/experiences for the same user(s) over time. in this case, the human/individual is the constant as they "travel" through a variety of xr simulations/scenarios, and their behaviour in these different contexts must be described in a manner that lends itself to trend analysis and/or the detection of situational anomalies. additionally, as humans are not truly "compartmentalized" within a particular function/domain of their life, frameworks should be constructed in a way that supports long-lived descriptions of behaviour across the broad ecosystems an individual may traverse, in both the physical and the virtual world (e.g. familial networks, social networks, academic networks, professional networks, etc.).

2. actions/verb selection should minimize bias and/or interpretation, wherever possible. as the range of human behaviour is vast, and "appropriate" responses may vary across simulations/scenarios, every effort should be made to avoid introducing into an ontology/taxonomy and/or individual interaction verbs that imply some sense of judgement and/or right and wrong responses. instead, the verb portion of the syntax should merely describe the user behaviour in the statement itself, using the results and context extensions where appropriate to convey how appropriate the response was in light of the conditions at a particular point in an interaction.

3. human metrics data should be interoperable and machine-readable.
deviation from the json format and xapi-specific context should be avoided, as both are open standards that ensure that data recorded during immersive experiences/scenarios may be created, managed, and analysed broadly across existing and future platforms and technology, particularly those that pertain to the measurement of human development, capabilities, and performance (e.g. student/workforce management systems, capability management and workforce planning systems, learning management systems, assessment management systems, performance management systems, etc.). additionally, metrics should be constructed in a manner that lends them to analysis by machine learning techniques such as natural language processing and, in some cases, to being actioned further by artificial intelligence in subsequent interactions, as described below.

4. individual actors should be differentiated. appropriate and differentiated identifiers should be utilized in actor fields that, though anonymized where necessary, may be correlated back to a specific learner. this standardized practice ensures that interaction data is cumulative across a variety of immersive scenarios/simulations and that the resulting data may be analysed for specific behavioural trends in a variety of contexts. additionally, this convention provides an affordance whereby the user experience and/or nuances of a specific scenario's context in future trials may be personalized to target specific growth/performance targets of the specific individual.

5. overall data sets, and the ontologies and/or taxonomies inherent within them, should be human-readable. to enable the proper analysis of human behaviour within specific contexts and the subsequent growth and performance of people over time, particularly as it pertains to informed practice for educational and human resources professionals responsible for guiding academic and career growth, it is imperative that virtual interaction descriptors are chosen that most closely approximate observable human behaviours in the physical world.

4. augmented reality learning experience model

one recent iteration of the xapi standard as it pertains to xr specifically lies in the augmented reality learning experience model, otherwise known as arlem [5]. in this formulation, as described by secretan, wild, & guest, performance = action + competence. "an action is the smallest possible step that can be taken by a user in a learning activity," they say, and "competence becomes visible when learners perform" [4]. accordingly, arlem aims to solve many of the challenges faced by colleagues in industrial and/or operational environments, whereby a worker's ability to take the right action at the right time is paramount. traditional measurement protocols were previously oriented toward providing flat, two-dimensional "performance support" to these colleagues, in the form of viewing a schematic or standard operating procedure, watching a video of a colleague performing a process step, etc., and were centred around the passive "completion" of learning, which implied only that an individual had accessed content and, in some cases, progressed through to the end. in some cases, additional measurement in the form of more formal assessment required learners to answer knowledge-based questions around the operational process. these assessments, often based on rote memorization, are incapable of measuring what actions a worker will take when faced with a particular situation.
with the arlem and xapi standards, actions performed may now be prioritized over memorization of facts.

4.1. environmental triggers

a hallmark of the arlem standard itself is its unique application of environmental "triggers" prompting human behaviour or experience; in this case, in the workplace specifically. by monitoring the environmental context and its operational parameters, arlem provides an association between situational context and appropriate supports provided to the end user to assist them in taking the appropriate action.

4.2. operational excellence key performance indicators

utilizing the distinction between environmental triggers and human activity, and further applying the results extension of the xapi syntax, allows a unique opportunity to assess the effectiveness of augmented reality in the flow of work. in some instances, iot data associated with operational processes and/or equipment that is trending out of limits/acceptable parameters may "trigger" an augmented reality support layer, composed of appropriate schematics, videos, and even remote operations assistance from a geographically distant subject matter expert, overlaid on the real world and deployed to the correct individual in proximity to prompt human intervention to normalize the operational process. the specific iot parameters may be recorded in the "context" extensions of the xapi syntax so that the resulting human action can be understood in relation to operational kpis. furthermore, resolution of the iot data associated with the operational process, where available, may be recorded.
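a minimal sketch of the trigger pattern described above, assuming a hypothetical pressure sensor and invented extension keys: when a reading drifts outside an illustrative operating window, the code emits an xapi-style context payload that could accompany the triggered ar support layer. the limits, field names, and urls are assumptions for this sketch only.

import json
from datetime import datetime, timezone

PRESSURE_LIMITS = (2.0, 6.0)  # bar; an illustrative operating window

def check_trigger(pressure_bar: float):
    """return an xapi-style context payload if the reading is out of limits."""
    low, high = PRESSURE_LIMITS
    if low <= pressure_bar <= high:
        return None  # in limits: no ar support layer is triggered
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "context": {"extensions": {
            "https://example.com/context/iot-pressure-bar": pressure_bar,
            "https://example.com/context/trigger": "pressure out of limits"}},
    }

payload = check_trigger(7.3)
if payload is not None:
    # in a real deployment this payload would be attached to the statement
    # recorded alongside the worker's subsequent action
    print(json.dumps(payload, indent=2))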
5. xapi in virtual reality: an operational case study

in light of the success with xapi in the arlem standard, this paper's authors hypothesized that utilization of the xapi base syntax, associated extensions, and the concept of operational triggers would also lend itself quite well to the measurement of learning experiences and other immersive experiences, such as virtual reality. to test this theory, an existing 360-video-based virtual reality interaction, centred around operational safety and risk in a heavy-manufacturing environment, and for which existing learning measurement data was available, was selected. in the selected vr scenario, an operator at a plant was given a job task to complete, and, in so doing, a set of dangers/risks in the environment were layered throughout the scenes (see figure 1). the scenario itself had been created to simulate the work environment and to increase plant operators' ability to recognize and mitigate risks present in their working environment without overt prompts to do so.

figure 1. selected vr scenario, designed to measure learner's ability to identify and mitigate operational risk in a simulated environment.

existing dashboards associated with the scenario were then examined to ascertain the likelihood of correlation with actual plant operator behaviour in the workplace. analysis of this information demonstrated that, though the immersive environment lent itself quite well to application of the xapi syntax, more traditional learning measurement methods, such as scoring of responses as right/wrong, were employed, along with a scoring mechanism that subtracted points for each incorrect response (or lack thereof) within the simulation. elapsed time in the interaction was also reported (see figure 2).

figure 2. measurement of non-xapi-compliant learner behaviour in a vr scenario.

because the original measurement design did not approximate the type of behaviours observed, collected, and reported in the workplace, the effectiveness of the immersive learning experience, as compared to other traditional learning modalities, was difficult to determine across various learners and/or similar scenarios. as expected, the nature of the existing data did not give a great deal of information that would allow inference as to the immersive scenario's ability to shape and/or change human behaviour in a manner that one could reasonably assume would make learners safer and/or more productive in their actual jobs in the real world.

5.1. enhanced data set: vr safety/risk

in order to enhance the measurement of human interaction within the simulation in a manner that could be more closely correlated to human behaviour in the workplace, we obtained a sample of the historical data and, along with a careful analysis of the operational risks present in the design of the vr scenario, completed a data transform that allowed for xapi syntax to be expressed in a manner that identified environmental context, as well as subsequent user action/behaviour, and the results of this behaviour in relation to the original "triggering" context itself (see figure 3).

figure 3. measurement of learner behaviour in the same vr scenario, utilizing the xapi standard, in conditions of medium risk (context).

xapi-formatted json statements were then created, using existing available data fields, as well as additional data available and yet not previously recorded (see figure 4).

figure 4. measurement of predicted results based upon learner behaviour in the same vr scenario, mirroring operational consequences in the workplace.

these statements were subsequently loaded into a learning record store (lrs), and dashboards were created to demonstrate the relationships between environment and corresponding user behaviour, as well as relationships with existing competency frameworks related to health and safety designed by the u.s. department of labor [6] (see figure 5).

figure 5. number of user actions in vr scenario showing evidence of competency in u.s. department of labor standard: "assessing material, equipment, and fixtures for hazards".

as compared to the original, more traditional measurement design, these dashboards contextualized human behaviour in conditions of varying risk in a manner similar to the way health and safety behaviours are measured in real operational environments, and gave a clearer indication of the human behaviour that might be demonstrated in these operational environments, based upon actions measured and tracked in the immersive simulation.

5.2. key findings

the results of this case study, as well as the reference implementations of the arlem standard previously described by wild et al., demonstrate that it is possible to map additional information to existing xr datasets to infer "shaping of human behaviour", as well as potentially correlate actions in simulated and/or augmented environments to those observed and recorded in the real world. furthermore, the original "triggering condition/context" metaphor suggested in the arlem standard was found to extend more broadly to all experiential-based learning experiences and to work in harmony with the xapi data syntax associated with learning experiences. to enable effective measurement across scenarios and users, we created the following taxonomies for use in the xapi syntax, designed to be utilized broadly across xr scenarios measuring hazard identification and mitigation of operational risk (a code sketch of these taxonomies appears at the end of this subsection):

1. stimulus (context) taxonomy to describe starting condition(s) within a scenario.
2. trigger (context) taxonomy to describe expected levels of perceived risk, competing human priorities (dilemmas), and/or intentional environmental distractor(s).

3. response (results) taxonomy to describe user roles and levels within the scenario, ending condition(s), correlation to open competency frameworks, haptics, eye gaze, and biometrics.

while these taxonomies were created to facilitate historical data transformation from an existing vr scenario into an xapi-compliant dataset, it is acknowledged here that actual ontologies are more desirable. in order to promote interoperability and the broad reach of xapi as a measurement standard in xr learning scenarios, it is recommended that representative ontologies and/or taxonomies be developed that can inform widespread historical data transformation in the xr space, as well as serve as guides for measurement designs for additional immersive scenarios designed in the future. these advancements, we believe, will provide a powerful measurement tool designed to effectively measure, describe, and perhaps even predict human behaviour in context in a manner that speaks not only to the knowledge that individuals possess, but also to their performed skill/competency level across a variety of contexts. this has major implications for the ways in which we measure and assess specific and generalized learnings in individuals over time.
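as promised above, a small python sketch of how such stimulus/trigger/response taxonomies might be encoded; the member names are illustrative placeholders, not the actual taxonomy entries used in the case study.

from enum import Enum

# illustrative (not normative) taxonomies mirroring section 5.2
class Stimulus(Enum):
    ROUTINE_TASK = "routine job task"
    DEGRADED_EQUIPMENT = "equipment trending out of limits"

class Trigger(Enum):
    PERCEIVED_RISK = "expected level of perceived risk"
    COMPETING_PRIORITY = "competing human priorities (dilemma)"
    DISTRACTOR = "intentional environmental distractor"

class Response(Enum):
    HAZARD_IDENTIFIED = "hazard identified"
    HAZARD_MITIGATED = "hazard mitigated"
    HAZARD_MISSED = "hazard not addressed"

def tag_event(stimulus: Stimulus, trigger: Trigger, response: Response) -> dict:
    """bundle one observation into context/results extensions for a statement."""
    return {"context": {"stimulus": stimulus.value, "trigger": trigger.value},
            "result": {"response": response.value}}

print(tag_event(Stimulus.DEGRADED_EQUIPMENT, Trigger.DISTRACTOR,
                Response.HAZARD_MITIGATED))

keeping the taxonomy in a single shared module is one way to guarantee that every scenario emits the same controlled vocabulary, which is the interoperability property the recommendation above is after.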
6. designing augmented reality learning experiences for early childhood education: opportunities and challenges

in a literature review of vr, ar, and mr in k-12 education, maas and hughes [7] noted that studies found improvements in attitude, motivation, engagement, performance/learning outcomes, and 21st-century skills and competencies with the use of these xr technologies. but they also emphasized that it was challenging to find studies on xr in k-12 education in comparison to such research in higher education. the reasons they theorized such research was difficult to find were the rapid development of the technology, which means that it has only recently become available; the lack of xr content for this age group; and a lack of resources, making xr unevenly available within countries and around the globe. with early childhood education, these challenges are even more daunting. when the current paper was first proposed, the intention was to discuss research on an early childhood ar application built on a platform compatible with xapi. that research was to be informed by work indicating that bodily engagement enhances learning outcomes and improves executive function. research on children's learning by eng et al. using a modified vr game combined with a cognitive task suggests that the combination of "exergames" with a learning task not only improves performance on that task, but can actually improve cognitive ability and executive function [2], [3]. while eng et al. studied children using vr along with measurements in fmri as well as teacher assessments, in the case of the proposed new research, ar was to be used. augmented reality is much more accessible than virtual reality for early childhood and k-12 education, which increases the opportunities to study it. for example, tablets were already in use at the lab school where the study is expected to take place. more ar platforms and experiences are becoming compatible with xapi. thus, we still believe there is great promise for the study in development and for other such studies. in addition, ar-compatible smartphones and tablets are also quite common in households in 2022, which means that many children would be able to participate in ar learning experiences from home. ironically, such ar learning experiences could greatly benefit the many children who have not been able to attend school in person in the time of covid and who have missed precious learning time at a key moment in their young lives. conducting the study, however, proved challenging on many fronts. the challenges included ensuring that the design principle around differentiation of individual actors be followed. in practical terms, this would involve ensuring that each child be supplied with a device assigned to and associated with that particular child, to avoid relying on young children to log into the device with the appropriate credentials; this would also alleviate pressures on teachers in the classroom context. for young children who are just beginning to learn to read, audio prompts may need to be incorporated into the experience, or each student may need to be guided by the teacher, making the process resource intensive. the continuing effects of the pandemic also had a significant impact on the project, because the early childhood education lab where the research was to be conducted experienced repeated closures and disruptions. as a result, the study has not yet formally begun.

7. conclusions and next steps

some of the early childhood learning outcomes that we would like to measure, and that are appropriate for structuring with xapi, include self-concept skills such as proprioception and awareness of others in space, and geometry and spatial-sense skills such as recognition of 3d shapes and their persistent form when rotated. learning of appropriate behaviours for self-care and social responses to others provides situational triggers that are also targets for studies of the use of xapi with young children. in general, future steps include continuing to explore specific elements of xapi profiles appropriate for xr with regard to experiential learning, considering the relationship between knowledge-based concepts and experiential skills, and examining xapi profiles for xr across the talent pipeline (from pre-k to adult) in order to hypothesize about implications for the future of education.

acknowledgement

special thanks to colleagues at warp vr, namely guido helmerhorst and menno van der sman, for contributing scenarios, historical data subsets, and transformation support. this effort would not have been possible without your spirit of innovation and commitment to open-source initiatives and practices.

references

[1] m. johnson-glenberg, immersive vr and education: embodied design principles that include gesture and hand controls, frontiers in robotics and ai, july 2018. doi: 10.3389/frobt.2018.00081

[2] c. m. eng, m. pocsai, f. a. fishburn, d. m. calkosz, e. d. thiessen, a. v. fisher, adaptations of executive function and prefrontal cortex connectivity following exergame play in 4- to 5-year-old children.
[3] c. m. eng, d. m. calkosz, s. y. yang, n. c. williams, e. d. thiessen, a. v. fisher, doctoral colloquium: enhancing brain plasticity and cognition utilizing immersive technology and virtual reality contexts for gameplay, 6th international conference of the immersive learning research network (ilrn), san luis obispo, ca, usa, 21-25 june 2020, pp. 395-398. doi: 10.23919/ilrn47897.2020.9155120

[4] j. secretan, f. wild, w. guest, learning analytics in augmented reality: blueprint for an ar / xapi framework, in tale 2019, yogyakarta, indonesia, 10-13 december 2019, pp. 1-6. doi: 10.1109/tale48000.2019.9225931

[5] ieee standard for augmented reality learning experience model, ieee std 1589-2020, april 2020, pp. 1-48. doi: 10.1109/ieeestd.2020.9069498

[6] credential finder. online [accessed 15 september 2022] https://credentialfinder.org/competencyframework/1928/engineering_competency_model_u_s__department_of_labor_(dol)

[7] m. j. maas, j. m. hughes, virtual, augmented, and mixed reality in k-12 education: a review of the literature, technology, pedagogy, and education 29 (6) 2020. doi: 10.1080/1475939x.2020.1737210

development of nanoparticle sizing system using fluorescence polarization

acta imeko december 2013, volume 2, number 2, 67 – 72 www.imeko.org

terutake hayashi, masaki michihata, yasuhiro takaya, kok foong lee
osaka university, 2-1 yamadaoka, suita, osaka 565-0871, japan

section: research paper
keywords: nanoparticle; fluorescence polarization; brownian motion; rotational diffusion coefficient; particle sizing
citation: terutake hayashi, masaki michihata, yasuhiro takaya, kok foong lee, development of nanoparticle sizing system using fluorescence polarization, acta imeko, vol. 2, no. 2, article 12, december 2013, identifier: imeko-acta-02 (2013)-02-12
editor: paolo carbone, university of perugia
received april 15th, 2013; in final form november 17th, 2013; published december 2013
copyright: © 2013 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was supported in part by a grant-in-aid for scientific research from the ministry of education, culture, sports, science and technology of japan (grant-in-aid for exploratory research 21686015).
corresponding author: terutake hayashi, e-mail: hayashi@mech.eng.osaka-u.ac.jp

abstract: to measure the sizes of nanoparticles with a wide size distribution in a solvent, we developed an optical microscopy system that enables fluorescence polarization (fp) measurement and optical observation. this system allows the evaluation of nanoparticle sizes over a wide range because the fluorescence signal intensity is independent of changes in the nanoparticle sizes. in this paper, we describe a fundamental experiment to verify the feasibility of using this system for different sizes of nanoparticles.

1. introduction

the integration of nanoparticles into devices produces unique electronic, photonic, and catalytic properties. it offers the prospect of both new fundamental scientific advances and useful technological applications. many of the fundamental properties of materials (e.g., optical, electrical, and mechanical) can be expressed as functions of the size, shape, composition, and structural order [1, 2]. it is important to evaluate nanoparticle sizes and maintain a constant state (size distribution, average size, shape uniformity). the size distribution of monodispersed nanoparticles in a solvent can be measured using dynamic light scattering [3, 4]. however, the scattering method is inappropriate for detecting the sizes of particles in a mixture, which includes both small monodispersed particles and large particles such as agglomerates. the signal intensity depends on the sixth power of the particle diameter; therefore, the scattering signal from monodispersed particles, compared to the signal from their agglomerates, is too small to detect. the size distribution of nanoparticles with a wide size distribution can be measured using transmission electron microscopy [5]. this requires a dried sample. however, it is then difficult to evaluate both the average diameter and the size distribution of the nanoparticles because of the instability of their diameters in a solvent. to measure the sizes of nanoparticles with a wide size distribution in a solvent, we developed an optical microscopy system that enables fluorescence polarization (fp) measurement and optical observation. the fp method can trace the rotational diffusion constant of the brownian motion of a fluorescent molecule. when a fluorophore is used to label nanoparticles, the rotational diffusion coefficient corresponds to the size of the nanoparticles. the system makes it possible to evaluate nanoparticle sizes over a wide range because the fluorescence signal intensity is independent of changes in the nanoparticle sizes. in this paper, we describe a fundamental experiment to verify the feasibility of using this method for different sizes of nanoparticles.

2. measurement of rotational diffusion coefficient and particle sizing

2.1. measurement of rotational diffusion coefficient

we have developed an fp method [6, 7, 8] to measure nanoparticle sizes by measuring the rotational diffusion coefficient. when linearly polarized light is irradiated on a nanoparticle labeled with a fluorophore, the fluorescence signal initially maintains the polarization state of the irradiation light. as shown in figure 1, small particles will have a high degree of rotation due to brownian motion, and the polarization plane of the light emitted will vary greatly from that of the excitation light. in contrast, larger particles will have a low degree of rotation, so light is emitted in a polarized plane similar to that of the excitation light. the fluorescence signal then depolarizes as time passes. the fluorescence anisotropy depends on the angle of the particle rotation. in addition, depolarization is rapid for particles with small volumes, whereas the polarization state is maintained longer in particles with large volumes. the fluorescence anisotropy r(t) is related to the rotational diffusion coefficient d_r of a nanoparticle.
the particle size can be calculated from d_r on the basis of an analysis using the debye–stokes–einstein (dse) relationship [8, 9] when the particle shape is approximated to that of a sphere. here r(t) is defined by

r(t) = \frac{i_{\parallel} - i_{\perp}}{i_{\parallel} + 2 i_{\perp}}   (1)

where the first and second subscripts refer to the orientation of the excitation and that of the emission, respectively. i∥ and i⊥ are the horizontally polarized (p-polarized) and vertically polarized (s-polarized) components of the fluorescence, respectively, when horizontally polarized light is irradiated. we can also express r(t) as a function of time t as

r(t) = (r_0 - r_{\infty})\, e^{-t/\theta} + r_{\infty}, \qquad i(t) = \frac{e^{-t/\tau}}{\tau}   (2)

where i(t) is the instantaneous emission intensity normalized to a unit-integrated value. the parameter r_∞ reflects the extent of the rotational reorientation of the fluorophore. in this case, the steady-state anisotropy \bar{r} is obtained by integrating the intensity-weighted r(t), as shown in eq. (3). the steady-state anisotropy \bar{r} of a fluorophore undergoing isotropic rotational diffusion is related to the fluorescence lifetime τ and the rotational correlation time θ [4]:

\bar{r} = \int_0^{\infty} r(t)\, i(t)\, \mathrm{d}t = r_{\infty} + \frac{r_0 - r_{\infty}}{1 + \sigma}   (3)

where r_0 is a limiting value in the absence of rotation given by the relative orientation of the absorption and emission transition moments, and σ is the ratio τ/θ. measurements of the anisotropy decay can reveal a multiplicity of rotational correlation times reflecting heterogeneity in the size, shape, and internal motions of the fluorophore–nanoparticle conjugate. the rotational correlation times are determined in either the time domain or the frequency domain. the rotational diffusion coefficient can be precisely calculated from the fluorescence anisotropy in the frequency domain. when an excitation pulse is used as a forcing function, eq. (3) is transformed into a decay process in which the fluorescence lifetime τ is no longer linked to the correlation time. the rotational correlation times are determined by measuring three parameters: the phase decay Δφ, y_ac, and y_dc. to define them, we measure the sinusoidal waveforms of both i∥ and i⊥ in the frequency domain. a series of i∥ and i⊥ are measured while the relative phase between the excitation light and the detector gain is adjusted, as the excitation light and detector gain are modulated using the same radial frequency. after the sinusoidal signals of i∥ and i⊥ are acquired, each data set is processed to yield the frequency-dependent amplitudes and the phase shift between the excitation and emission light. the resulting polarized emission components are modulated at the same frequency but phase shifted with phase decay Δφ relative to each other [10]. i∥,ac and i⊥,ac are the amplitudes of i∥ and i⊥ in the frequency domain, respectively. i∥,dc and i⊥,dc are the average values of i∥ and i⊥, respectively. these signals are characterized by the ratios

y_{ac} = i_{\parallel,ac} / i_{\perp,ac}, \qquad y_{dc} = i_{\parallel,dc} / i_{\perp,dc}   (4)

furthermore, the fluorescence lifetime of the emission light can be measured in the frequency domain.
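before turning to the frequency-domain relations, a minimal numerical sketch of eqs. (1) and (3) in python; all input values here (intensities, r_0, r_∞, τ, θ) are illustrative assumptions, not measurements reported in this paper.

def anisotropy(i_par: float, i_perp: float) -> float:
    """eq. (1): r = (i_par - i_perp) / (i_par + 2 * i_perp)."""
    return (i_par - i_perp) / (i_par + 2.0 * i_perp)

def steady_state_anisotropy(r0: float, r_inf: float,
                            tau_s: float, theta_s: float) -> float:
    """eq. (3): r_bar = r_inf + (r0 - r_inf) / (1 + sigma), sigma = tau/theta."""
    sigma = tau_s / theta_s
    return r_inf + (r0 - r_inf) / (1.0 + sigma)

# hypothetical raw intensities from the two polarized channels
print(anisotropy(i_par=1300.0, i_perp=1000.0))              # ~0.091

# hypothetical free fluorophore: 4.1 ns lifetime, ~0.17 ns correlation time
print(steady_state_anisotropy(0.4, 0.0, 4.1e-9, 0.17e-9))   # ~0.016, strongly depolarized

the second result illustrates the physics in section 2.1: a small, fast-rotating fluorophore depolarizes almost completely within its fluorescence lifetime, so its steady-state anisotropy is far below the limiting value r_0.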
the fluorescence lifetime τ is determined from a series of data sets for the excitation light intensity and the total intensity of the emission light in the frequency domain. the parameters Δφ, y_ac, and y_dc are then related to the parameters \bar{r}, r_0, r_∞ and ωθ by

[equations (5), (6) and (7): closed-form expressions for \tan\Delta\varphi, y_{ac}^2 and y_{dc} in terms of r_0, r_{\infty}, \bar{r} and \omega\theta]

\bar{r} = \frac{y_{dc} - 1}{y_{dc} + 2}   (8)

the rotational correlation time θ is given by

[equation (9): closed-form expression for \theta in terms of \Delta\varphi, y_{ac}, y_{dc} and \omega]

to define the rotational correlation time θ, the fluorescence lifetime τ is required. the fluorescence lifetime τ is determined by the phase decay Δφ and the modulation m of the fluorescence, which is excited using light whose intensity i(t) varies sinusoidally with time [10]:

i(t) = a + b \sin(\omega t)   (10)

where ω is the angular velocity of the modulated excitation light [10]. as a consequence of the finite duration of the excited state, the modulated fluorescence emission is delayed in phase by an angle Δφ relative to the excitation. in addition, the modulation of the fluorescence decreases. the intensity of the fluorescence is given by

i(t) = a' + b' \sin(\omega t - \Delta\varphi)   (11)

we acquire the phase decay Δφ between the sinusoidal curve of the excitation light and the emitted fluorescence, without the emission polarizer, in the frequency domain by

\tan \Delta\varphi = \omega \tau   (12)
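as a small worked example of eq. (12), the sketch below recovers the lifetime from a measured phase decay at the 60 mhz modulation frequency used later in section 4.1; the phase value itself is an illustrative assumption.

import math

def lifetime_from_phase(delta_phi_rad: float, f_mod_hz: float) -> float:
    """eq. (12) rearranged: tau = tan(delta_phi) / omega, omega = 2*pi*f."""
    omega = 2.0 * math.pi * f_mod_hz
    return math.tan(delta_phi_rad) / omega

# a phase decay of about 57.1 deg at 60 mhz corresponds to tau of about 4.1 ns
print(lifetime_from_phase(delta_phi_rad=math.radians(57.1), f_mod_hz=60e6))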
before coupling to the aom, the polarization direction is oriented using a half-wave plate (1/2 wp) to improve the diffraction efficiency of the aom. high-speed light amplitude modulation (up to 80 mhz) can be achieved in the unit, which consists of the aom, lens (l), and iris. the polarization direction of the input signal is then oriented by a half-wave plate (1/2 wp) just as with the polarization direction of i//. the light is incident on the sample via a half mirror (hm) and objective lens. the sample is excited using linearly polarized light. the emission is relayed through the beam displacer, which divides the fluorescence polarization signals oriented parallel (i//) and perpendicular (i⊥) to the excitation beam polarizer. the fluorescence signals are finally relayed to the image intensifier. the image of the orthogonal components of the fluorescence signal is enhanced on the image intensifier and then relayed to a ccd. images of both the horizontally (i//) and vertically (i⊥) polarized components are analyzed to acquire the fluorescence anisotropy. the modulation of the gain of the image intensifier corresponds to the modulation of the input signal amplitude. the phase decay , is the phase difference between the incident light and the gain of the image intensifier. acquisition proceeds with a series of phase shifts to acquire a first-order photo bleaching compensation. a data series is processed to yield the frequencydependent amplitudes, along with the phase shift between the excitation light and the emitted lights in the two orthogonal polarization directions. the polarized emission components are modulated at the same frequency but phase shifted relative to each other. 4. experimental results 4.1. rotational correlation time for fluorophore to measure the particle sizes, precise measurement of the rotational diffusion coefficient, which corresponds to the rotational correlation time, is required, as shown in eq. (14). we performed fundamental experiments to verify the feasibility of ar-laser 488nm ½ wp l l aom iris m ½ wp p hm function generator function generator phase-locked  objective lens sample temperature controller emission filter tube lens beam displacer image intensifier relay lens ccd to computer figure 2. experimental setup. figure 3. ccd images of fluorescence signals. acta imeko | www.imeko.org december 2013 | volume 2 | number 2 | 70 precise measurement of the rotational correlation time for the fluorophore [11] without labeling gold nanoparticles. in the experiment, a beam displacer is used to divide the fluorescence light into the parallel and perpendicular components. as shown in figure 3, both components of the signal are observed in the ccd camera after the fluorescence light passes through the beam displacer. the upper component is polarized in the horizontal (h) direction, and the lower component is polarized in the vertical (v) direction. to check the efficiency of both sides, two measurements are required. first, the polarization angle of the excitation light is adjusted to the direction parallel to the horizontal direction, and an image is recorded, as shown in figure 3(a). the intensities of the horizontal and vertical signals are denoted by ihh and ihv, respectively. next, the polarization angle of the excitation light is adjusted to the direction parallel to the vertical direction, and another image is recorded. 
3. experimental setup

we developed a rotational diffusion coefficient measurement system using an fp method, as shown in figure 2. an ar+ laser (wavelength: 488 nm) is the polarized light source, and it is coupled to an acousto-optic modulator (aom). before coupling to the aom, the polarization direction is oriented using a half-wave plate (1/2 wp) to improve the diffraction efficiency of the aom. high-speed light amplitude modulation (up to 80 mhz) can be achieved in the unit, which consists of the aom, lens (l), and iris. the polarization direction of the input signal is then oriented by a half-wave plate (1/2 wp), just as with the polarization direction of i∥. the light is incident on the sample via a half mirror (hm) and objective lens. the sample is excited using linearly polarized light. the emission is relayed through the beam displacer, which divides the fluorescence polarization signals oriented parallel (i∥) and perpendicular (i⊥) to the excitation beam polarizer. the fluorescence signals are finally relayed to the image intensifier. the image of the orthogonal components of the fluorescence signal is enhanced on the image intensifier and then relayed to a ccd. images of both the horizontally (i∥) and vertically (i⊥) polarized components are analyzed to acquire the fluorescence anisotropy. the modulation of the gain of the image intensifier corresponds to the modulation of the input signal amplitude. the phase decay Δφ is the phase difference between the incident light and the gain of the image intensifier. acquisition proceeds with a series of phase shifts to acquire a first-order photobleaching compensation. a data series is processed to yield the frequency-dependent amplitudes, along with the phase shift between the excitation light and the emitted light in the two orthogonal polarization directions. the polarized emission components are modulated at the same frequency but phase shifted relative to each other.

figure 2. experimental setup.

4. experimental results

4.1. rotational correlation time for fluorophore

to measure the particle sizes, precise measurement of the rotational diffusion coefficient, which corresponds to the rotational correlation time, is required, as shown in eq. (14). we performed fundamental experiments to verify the feasibility of precise measurement of the rotational correlation time for the fluorophore [11] without labeling gold nanoparticles. in the experiment, a beam displacer is used to divide the fluorescence light into the parallel and perpendicular components. as shown in figure 3, both components of the signal are observed in the ccd camera after the fluorescence light passes through the beam displacer. the upper component is polarized in the horizontal (h) direction, and the lower component is polarized in the vertical (v) direction.

figure 3. ccd images of fluorescence signals.

to check the efficiency of both sides, two measurements are required. first, the polarization angle of the excitation light is adjusted to the direction parallel to the horizontal direction, and an image is recorded, as shown in figure 3(a). the intensities of the horizontal and vertical signals are denoted by i_hh and i_hv, respectively. next, the polarization angle of the excitation light is adjusted to the direction parallel to the vertical direction, and another image is recorded. the intensities of the horizontal and vertical signals, against the orthogonal direction of polarization, are denoted by i_vh and i_vv, respectively, as shown in figure 3(b). fifteen images are taken for each case. depending on the direction of polarization, the efficiency of the light passing through may differ. therefore, a calibration factor g is needed to measure i∥ and i⊥ from the two signal components. we calculated the g factor for i_vh; thus, when excitation light in the v direction is used, i_vv is used directly in our calculation as i∥, and i_vh times g is used for the h-direction component i⊥. the g factor for i_vh is obtained as follows:

g_{ivh} = \sqrt{\frac{i_{vv}\, i_{hv}}{i_{vh}\, i_{hh}}}   (16)

the modulation frequency of the aom was set to 60 mhz for the following experiment. the ccd exposure time was 400 ms, and the microchannel plate voltage was 760 v. the modulated fluorescence signals obtained by varying the phase Δφ are shown in figure 4. Δφ, y_ac, and y_dc are determined from the fitting curves of both fluorescence signals.

figure 4. parallel and perpendicular polarized components of modulated fluorescence signal.

first, the fluorescence anisotropy as a function of the time t is evaluated using the developed system. figure 5 shows the linear variation in the rotational correlation time versus the viscosity of the solution. three solutions were prepared for measuring the rotational correlation time of a fluorophore by mixing water with glycerin at 30 wt%, 50 wt%, and 60 wt%. the resulting viscosities were 2.5 mpa·s, 6.0 mpa·s, and 10.8 mpa·s at 293 k, respectively. we also used water, which has a viscosity of 1.0 mpa·s at 293 k, as a solution. the fluorophore was alexa fluor 488 (invitrogen corp.), which is the same size as fluorescein. the fluorescence lifetime τ varies, with values of 4.1 ns in water, 3.8 ns in 30 wt% glycerin, 3.6 ns in 50 wt% glycerin, and 3.1 ns in 60 wt% glycerin. if we apply the sphere approximation for the fluorophore, the theoretical value of the rotational correlation time, which depends on the particle size, can be calculated from eqs. (14) and (15) as follows:

\theta = \frac{\pi \eta d^3}{6 k_B t}   (17)

the theoretical value is plotted according to the solution temperature t (293 k) and nanoparticle diameter d (1.1 nm). according to eq. (17), the rotational correlation time of the fluorophore agrees well with the value for a nanoparticle with a diameter of 1.1 nm under the non-slip boundary condition. this value is close to that of fluorescein, whose size is estimated to be 1.0 nm [12, 13]. the size difference is considered to be an effect of hydration of the fluorophore in the solution.

figure 5. rotational correlation time of fluorophore (ns) versus solution viscosity (mpa·s), with the theoretical value for a 1.1 nm sphere.

from the above results, the rotational correlation time can be precisely measured using the developed system.
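a small numerical sketch of eq. (16) as reconstructed above, together with the corrected anisotropy under v excitation; the four intensities are invented ccd readings, not data from this experiment.

import math

def g_factor(i_vv: float, i_hv: float, i_vh: float, i_hh: float) -> float:
    """eq. (16): g = sqrt((i_vv * i_hv) / (i_vh * i_hh))."""
    return math.sqrt((i_vv * i_hv) / (i_vh * i_hh))

def corrected_anisotropy(i_vv: float, i_vh: float, g: float) -> float:
    """under v excitation: i_par = i_vv, i_perp = g * i_vh, then eq. (1)."""
    i_par, i_perp = i_vv, g * i_vh
    return (i_par - i_perp) / (i_par + 2.0 * i_perp)

g = g_factor(i_vv=1200.0, i_hv=950.0, i_vh=900.0, i_hh=1250.0)
print(g)                                              # ~1.007: channels nearly balanced
print(corrected_anisotropy(i_vv=1200.0, i_vh=900.0, g=g))  # ~0.098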
                              2 22 0 2 2 22 22 2 0 0 0222 2 0 0 1 1 2 2 2 5 2 4 4 1 2 1 2 1 1 9 1 2 1 2 5 4 1 2 1 1 ac ac ac ac y r r r r r r r r r y r r y r r r r r r r y r r                                                           (18)             2 22 22 2 2 2 2 0 0 0 0 0 0 0 2 1 2 1 9 2 2 5 1 1 2 6 1 ac ac ac r ac r y r y r r r y r r d y                   (19) acta imeko | www.imeko.org december 2013 | volume 2 | number 2 | 71 fluorescent dna probe to evaluate the diameter of a gold nanoparticle, the rotational diffusion coefficient of the nanoparticle with a fluorescent dna probe is evaluated using the developed system. gold nanoparticles with average diameters of 8.2 nm were prepared as the standard sample for measurement of the rotational diffusion coefficient. we evaluated the rotational diffusion coefficient of the gold nanoparticles directly with the fluorescent dna probe. the fluorescent probe was connected to the gold nanoparticles via double-strand dna consisting of adenine with a length of 23 bases. the length of the double-strand dna is estimated to be 9.6 nm, as shown in figure 6. the dna acts as a spacer to avoid quenching of the fluorophore [14, 15, 16], which is located near the gold nanoparticle. the 23-base double-strand dna is strong enough to avoid bending [17, 18] and to maintain the distance between the gold nanoparticle and fluorophore. figure 7 shows the rotational diffusion coefficient of the fluorophore and gold nanoparticle with the fluorescent dna probes versus the solution parameters, represented by t/. the rotational correlation time  is calculated using eq. (18) and the data for the parameters , yac, and ydc. the rotational diffusion coefficient can be calculated using the reciprocal of , as shown in eq. (19). table 1 shows the measurement parameters used to evaluate the rotational diffusion coefficient. as shown in figure 7, the rotational diffusion coefficient, which indicates the speed of rotational motion, increases linearly with t/. this relation agrees with eq. (17), which shows a linear relation to /t. the inclination of the graph for the rotational diffusion coefficient of the fluorescent dna probe is four times higher than that of the gold nanoparticle (average diameter, 8.2 nm) with the fluorescent dna probe. a large difference in the rotational correlation coefficient appears between the fluorescent dna probe and the gold nanoparticle with the probe. this enables us to estimate the size of a gold nanoparticle quantitatively using the inclination of the rotational diffusion coefficient against the viscosity of the temperature. to evaluate the resolution of the particle sizing obtained using the proposed method, further experiments for various diameters of gold nanoparticles are needed. we are now preparing samples of gold nanoparticles with average diameters of 5 nm, 10 nm, 15 nm, and 20 nm. 5. conclusions we developed a nanoparticle sizing system using an fp method. this system can precisely determine the rotational correlation time of nanoparticles with a fluorophore. the results indicate that we can determine the size of a nanoparticle using the dse relation when the particle shape approximates a sphere. we also investigated the rotational correlation time of a fluorophore with gold nanoparticles that were smaller than 10 nm. 
this indicates that the measurement results for the rotational correlation time of a fluorophore-labeled gold particle can be used to estimate the size of gold nanoparticles smaller than 10 nm. acknowledgment this research was supported in part by a grant-in-aid for scientific research from the ministry of education, culture, sports, science and technology of japan (grant-in-aid for exploratory research 21686015). references [1] t. gao, q. li, and t. wang, “sonochemical synthesis, optical properties, and electrical properties of core/shell-type zno nanorod/cds nanoparticle composites,” chem. mater., no. 17, pp. 887–892, 2005. [2] r. g. freeman, k. c. grabar, k. j. allison, r. m. bright, j. a. davis, a. p. guthrie, m. b. hommer, m. a. jackson, p. c. smith, d. g. walter, and m. j. natan, “self-assembled metal colloid monolayers: an approach to sers substrates,” science, vol. 267, no. 5204, pp. 1629–1632, 1995. [3] s. sun, c. b. murray, d. weller, l. folks, and a. moser, “monodisperse fept nanoparticles and ferromagnetic fept nanocrystal superlattices,” science, vol. 287, no. 5460, pp. 1989-1992, 2000. figure 6. distance between gold nanoparticle and fluorescent probe. table 1. measurement parameters. temperature [k] viscosity [mpas] probe probe + gold nanoparticle τ [ns] yac τ [ns] yac 293 1.002 2.0 1.225 1.0 1.457 298 0.890 1.199 1.435 303 0.797 1.172 1.427 308 0.719 1.151 1.402 313 0.653 1.135 1.396 figure 7. rotational diffusion coefficients for fluorophore and gold nanoparticle with fluorescent dna probe. acta imeko | www.imeko.org december 2013 | volume 2 | number 2 | 72 [4] r. pecora, “dynamic light scattering measurement of nanometer particles in liquids,” j. nanopart. res., vol. 2, issue 2, pp. 123-131, 2000. [5] l. c. gontard, d. ozkaya, and r. e. dunin-borkowski, “a simple algorithm for measuring particle size distributions on an uneven background from tem images,” ultramicroscopy, vol. 111, issue 2, pp. 101–106, 2011. [6] k. kinosita jr., s. kawato, and a. ikegami, “a theory of fluorescence polarization decay in membranes,” biophys. j., vol. 20, issue 3, pp. 289–305, 1977. [7] b. s. fujimoto and j. m. schurr, “an analysis of steady-state fluorescence polarization anisotropy measurements on dyes intercalated in dna,” j. phys. chem., vol. 91, no. 7, pp. 1947-1951, 1987. [8] f. stickel, e. w. fischer, and r. richert, “dynamics of glassforming liquids. ii. detailed comparison of dielectric relaxation, dc-conductivity, and viscosity data,” j. chem. phys., vol. 5, no. 104, pp. 2043–2055, 1996. [9] p. p. jose, d. chakrabarti, and b. bagchi, “complete breakdown of the debye model of rotational relaxation near the isotropicnematic phase boundary: effects of intermolecular correlations in orientational dynamics,” phys. rev. e, no. 73, 031705, 2006. [10] r. f. steiner, “fluorescence anisotropy: theory and applications,” top. fluoresc. spectrosc., vol. 2, pp. 1–52, 1991. [11] m. b. mustafa, d. l. tipton, m. d. barkley, and p. s. russo, “dye diffusion in isotropic and liquid crystalline aqueous (hydroxypropyl) cellulose,” macromolecules, vol. 26, no. 2, pp. 370–378, 1993. [12] a. h. a. clayton, q. s. hanley, d. j. arndt-jovin, v. subramaniam, and t. m. jovin, “dynamic fluorescence anisotropy imaging microscopy in the frequency domain (rflim),” biophys. j., vol. 83, pp. 1631–1649, 2002. [13] r. d. spencer and g. weber, “measurement of subnanosecond fluorescence lifetimes with a cross-correlation phase fluorometer,” ann. n. y. acad. sci., vol. 158, no. 1, pp. 361–376, 1969. [14] c. s. yun, a. 
[14] c. s. yun, a. javier, t. jennings, m. fisher, s. hira, s. peterson, b. hopkins, n. o. reich, g. f. strouse, "nanometal surface energy transfer in optical rulers, breaking the fret barrier," j. am. chem. soc., vol. 127, no. 9, pp. 3115-3119, 2005.
[15] j. seelig, k. leslie, a. renn, s. kühn, v. jacobsen, m. van de corput, c. wyman, v. sandoghdar, "nanoparticle-induced fluorescence lifetime modification as nanoscopic ruler: demonstration at the single molecule level," nano lett., vol. 7, no. 3, pp. 685-689, 2007.
[16] s. mayilo, m. a. kloster, m. wunderlich, a. lutich, t. a. klar, a. nichtl, k. kürzinger, f. d. stefani, j. feldmann, "long-range fluorescence quenching by gold nanoparticles in a sandwich immunoassay for cardiac troponin t," nano lett., vol. 9, no. 12, pp. 4558-4563, 2009.
[17] d. porschke, "persistence length and bending dynamics of dna from electrooptical measurements at high salt concentrations," biophys. chem., vol. 40, issue 2, pp. 169-179, 1991.
[18] g. s. manning, "the persistence length of dna is reached from the persistence length of its null isomer through an internal electrostatic stretching force," biophys. j., vol. 91, no. 10, pp. 3607-3616, 2006.

measuring with confidence: leveraging expressive type systems for correct-by-construction software
acta imeko, issn: 2221-870x, march 2023, volume 12, number 1, 1-5
conor mcbride1, georgi nakov1, fredrik nordvall forsberg1
1 department of computer and information sciences, university of strathclyde, glasgow, uk
section: research paper
keywords: type systems; correctness; metrology; programming languages
citation: conor mcbride, georgi nakov, fredrik nordvall forsberg, measuring with confidence: leveraging expressive type systems for correct-by-construction software, acta imeko, vol. 12, no. 1, article 15, march 2023, identifier: imeko-acta-12 (2023)-01-15
section editor: daniel hutzschenreuter, ptb, germany
received november 19, 2022; in final form february 28, 2023; published march 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: supported by the uk national physical laboratory measurement fellowship project "dependent types for trustworthy tools".
corresponding author: georgi nakov, e-mail: georgi.nakov@strath.ac.uk

abstract
modern programming language type systems help programmers write correct software, and furthermore help them write the software they actually intended to write. we show how expressive types can be used to encode dimension and units-of-measure information, which can be used to avoid dimensional mistakes and guide software construction, and how types can even help to generate code automatically, which eliminates a whole class of bugs.

1. introduction
the digital transformation of metrology offers both challenges and opportunities. with increased software usage and complexity, there is also a need to increase trust in the computations performed: how do we know that the software is doing what is expected of it? computer science offers a wide range of formal methods and verification techniques to tackle this challenge. as always, there is a balance to be struck between how much additional effort is required from the user and how useful the verification procedure can be. our main thesis is that a lot can be achieved by simply "bridging the semantic gap" between human and machine: current common practice is for computers to mindlessly execute instructions, without understanding for what purpose. this means that any verification must happen after the fact, by a separate process.
what if, instead, there were some way of communicating our intent as we write our software? then the machine could help us write it, rather than just telling us off for getting it wrong. we advocate the use of type systems as a lightweight method to communicate intent. dependently typed programming languages are a new breed of languages whose type systems are expressive and strong enough to encode the meaning of programs to whatever degree of precision is needed. these types can be automatically checked at compile time, and so they provide machine certification of the program's behaviour at low cost. concretely, we therefore get both lightweight and machine-certified trust in the correctness of software. in the metrology domain, in particular, we can make good use of tacit domain knowledge such as dimensional correctness to help the computer help us. this is work currently done by humans, but there is no reason why it could not be done automatically by a machine instead. as a small case study, we show that by turning informal comments about the expected input format of data into machine-readable form, we can not only check that given data conforms to the format, but automatically generate code for reading it from disk and converting it to appropriate units, thus eliminating a source of bugs and increasing trust in the software. our goals are similar to other software projects for calculation with physical quantities [1], [2], but we put additional emphasis on the use of types as a convenient method of communication between human and machine. thus, our focus is on how dependently typed programming languages can provide a sound basis for developing software for metrology, rather than on describing any such software in detail.

2. type systems as lightweight formal methods
in programming languages such as fortran or c, types such as floating-point numbers or integers are used to help the compiler with memory layout. a pleasant side effect is that basic errors such as trying to divide an integer by a string can be detected and reported at compile time, rather than at run time. while useful for avoiding disastrous results, this is quite a negative view of types: they are against errors, but are they also for something? types can be used to make an active contribution by offering guidance during the program construction process, not just criticism afterwards. we pay upfront by stating the type of the program we want to write, but are then paid back by the machine being able to use those types to offer suggestions for functions to call, or even to generate code for the boring parts of the program. the space within which we search for programs is correspondingly smaller and better structured.
for such help to be meaningful, the types available need to be sufficiently expressive; it is usually not very instructive to be told that we need to, for example, supply an integer, nor is it going to be very helpful for the compiler to generate an arbitrary floating-point number for us. as another example, consider a type whose elements are matrices. as given, this is again not very helpful: a matrix can, after all, be seen as a (structured) collection of numbers, and we just said that numbers in themselves do not carry much meaning. but we can refine our type of matrices to a type matrix(n, m) which keeps track of the size n × m of the matrix: e.g., the type of matrix multiplication can be usefully expressed as

matrix(n, m) × matrix(m, k) → matrix(n, k)

i.e., insisting that the sizes of the input matrices are compatible, and determining the size of the output matrix. furthermore, if we were trying to write a program to implement matrix multiplication, the above type would give us helpful hints on what we need to produce. for another example, consider implementing a program that creates a block matrix by putting two given matrices next to each other. it is natural to give it the type

matrix(n, m) × matrix(n, k) → matrix(n, m + k)

i.e., we insist that both input matrices have the same number of rows, and the number of columns in the output matrix is the sum of the numbers of columns in the input matrices. we see that computations such as m + k naturally arise in types; to do a proper job classifying such programs as meaningful, our systems must thus allow values and computations to occur in types. such type systems are called dependent type systems [3], as types can depend on values. they give us enough expressive power to meaningfully communicate our intentions to the compiler.
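as an illustration of how such a type reads in practice, here is the block-matrix operation in lean 4, a dependently typed language; this is our own sketch, not code from the paper. the size arithmetic m + k happens inside the type, and the index bound for the right block is discharged by the omega decision procedure:

    -- sizes are part of the type: a Matrix n m is indexed by Fin n and Fin m
    def Matrix (n m : Nat) : Type := Fin n → Fin m → Float

    -- placing two matrices side by side: the checker insists both share the
    -- row count n, and forces the result to have m + k columns
    def beside {n m k : Nat} (a : Matrix n m) (b : Matrix n k) :
        Matrix n (m + k) := fun i j =>
      if h : j.val < m then
        a i ⟨j.val, h⟩                      -- column falls in the left block
      else
        have hj : j.val < m + k := j.isLt   -- bound carried by j's type
        b i ⟨j.val - m, by omega⟩           -- omega proves j.val - m < k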
3. units of measure using types
the metrology domain is perhaps especially well-suited for the use of types to guarantee correctness, because the prevalent use of dimensions (such as length and time) and units of measure (such as metres and seconds) in many respects plays the same role as types: it is not dimensionally correct to add a metre and a second, just like it is not data-type correct to add an integer and a string. it thus seems natural to use type systems to reduce dimension checking to type checking. indeed, many mainstream languages have support for units, implemented using a wide range of techniques, from static types to dynamic run-time checks (see bennich-björkman and mckeever's survey [4] for more):
• microsoft's f# [5] has units of measure built in to static type checking;
• c++'s boost units library [6] uses templates to check units statically;
• java has a proposed api adding classes for dimensioned quantities [7], but run-time casts are inevitable;
• haskell's type system can now encode basic units of measure as a library [8] or a typechecker plugin [9];
• python libraries such as pint [10] cannot do static checking of dimensional correctness, but implement run-time checks instead; similarly, matlab has support for dynamic unit checking using the symbolic math toolbox [11].
many of these solutions however provide no static guarantees, or rely on rather ad-hoc extensions of the type system, often with unintelligible error messages as a result. encoding dimensions using dependent types is a more principled way to include dimension checking in a programming language. in the rest of this section, we briefly describe how we implemented a typechecker including dimensions [12].
first, how are we to represent dimensions themselves? following kennedy [13], we fix a set $D$ of fundamental dimensions (such as length l, time t and mass m). we may multiply or divide dimensions (for example forming mass per time squared, m/t²), and the order of dimensions does not matter (mass times length, m·l, is the same as length times mass, l·m). these considerations lead us to model dimensions as elements of the free abelian group over the set of fundamental dimensions $D$ [14]. for type checking, we need to be able to decide whether two given dimensions are equal or not. this is made easier by a normal form for elements of the free group: we first (arbitrarily) impose a total order on the fundamental dimensions $D$ (for example mass before length before time, m < l < t). any dimension may be given as a finite product of distinct fundamental dimensions, in the chosen order, raised to nonzero integral powers. hence to check equality of dimensions $d \stackrel{?}{=} d'$, we can reduce $d$ and $d'$ to normal forms $d = \mathrm{m}^{n_0} \cdot \mathrm{l}^{n_1} \cdot \mathrm{t}^{n_2}$ and $d' = \mathrm{m}^{n'_0} \cdot \mathrm{l}^{n'_1} \cdot \mathrm{t}^{n'_2}$ and then straightforwardly check equality of the exponents $n_i \stackrel{?}{=} n'_i$, rather than applying the group axioms directly (a small sketch of this normal-form check follows this section). with equality of dimensions in place, the crucial step in making dimension checking part of type checking is to also allow abstract dimensions [15]: addition is not length-specific, but works in one arbitrary dimension, which can stand for any dimension in particular. similarly, multiplication and division of quantities multiply and divide arbitrary dimensions respectively. by giving addition and multiplication these types and taking our refined notion of equality of dimensions into account, dimension checking simply becomes type checking. gundry [16] has shown that the property of programs still having most general types is retained in this setting.
as discussed by e.g. hall [17], dimension checking seems to be more fundamental than "unit checking". when dimensions are encoded in types, units can be introduced as "smart constructors" such as watt, _w : r → q(m l² t⁻³); the type of this function says that for any real number r, we get a quantity r w of dimension m l² t⁻³. if this is the only way to introduce quantities, we can ensure that only meaningful expressions enter the system. similarly, by only allowing the extraction of an actual number at dimensionless types (which can for example be achieved by dividing a quantity by a unit constant), only physically meaningful information can flow out of the system. the described approach is agnostic in the concrete choice of fundamental dimensions; indeed, if one wants to distinguish between torque and energy, for example, which both have the same dimension m l² t⁻² in the international system of units (si) [18], then one could introduce separate dimensions for them. a more principled approach would be to extend our system utilising aspects from the m-layer representation of quantities [19], but we use a conceptually simple representation here, as our aim is to demonstrate how dimension checking can be internalised as type checking.
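to make the normal-form comparison concrete, a small python sketch (ours, not the typechecker of [12]) that represents a dimension as a map from fundamental dimensions to integer exponents; multiplication adds exponents, and equality is exponent-wise comparison after dropping zeros:

    from collections import Counter

    # a dimension is a map from fundamental dimensions to nonzero integer
    # exponents, i.e. an element of the free abelian group over {"M","L","T"}
    def normalise(d):
        """drop zero exponents so equal dimensions get equal representations."""
        return {base: exp for base, exp in sorted(d.items()) if exp != 0}

    def multiply(d1, d2):
        """multiplying quantities adds exponents (the group operation)."""
        out = Counter(d1)
        out.update(d2)
        return normalise(out)

    def same_dimension(d1, d2):
        return normalise(d1) == normalise(d2)

    acceleration = {"L": 1, "T": -2}
    force = multiply({"M": 1}, acceleration)   # M L T^-2
    energy = multiply(force, {"L": 1})         # M L^2 T^-2
    torque = multiply(force, {"L": 1})
    print(same_dimension(energy, torque))      # True: as noted above, plain si
                                               # dimensions cannot tell these apart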
4. using types to automatically generate code
types are not just a stick to be beaten with when one makes a mistake; they can also act as a carrot, for example by enabling code generation. as a simple demonstration of this principle, we have developed a program that automatically generates code for reading and validating input data based on type declarations. the implementation is available at https://github.com/g-nakov/mgen.
many metrology software packages come with careful descriptions of input formats in their documentation, usually describing what input is required (e.g., "thermal conductivities"), in what form (e.g., "an array with an entry for each layer"), and in what unit (e.g., "w m^-2 k^-1"). however, these are written for humans, not machines, and consequently the code to read the inputs and convert them to the internal units used, if applicable, is also written by humans. this is typically fiddly code, with perhaps nested loops, and many opportunities for off-by-one errors to slip in. our approach is instead to make the description of the input format formal, so that it can be understood by a machine, which can then write the code for reading the inputs. in practice, this requires minimal changes to the description, mostly ensuring that the required data is actually present. an example input description is displayed in figure 1. an input is declared with its name (for example ivals and nlayer), followed by a colon ':', followed by its description, which is for the benefit of humans. an input is either a composite object (such as ivals), a scalar field (such as nlayer), or an array (such as kappas). details about inputs which are important for the machine are tagged with an @ symbol, such as whether an input is a number (for example, nlayer is tagged as a @number), or an array of a certain length (for example, lams is tagged as an array of size @nlayer). later inputs can refer to earlier inputs in their descriptions (for example, the description of lams refers to nlayer); we are making full use of dependent types by allowing later entries to depend on earlier ones. each non-number field entry has a unit attached to it, again indicated by an @ symbol. these can either be attached to individual fields of an array (such as for the array kappas), or uniformly to the whole array (such as for the array rs). also note that we allow si derived units such as watt w; we convert these to their standard form in terms of si base units internally. given an input description, we first validate that it is sensible: that array lengths are numbers, that field names are not repeated, and that each scalar field has a unit. this way, we can catch simple mistakes in the input description such as typos or undeclared input fields. after validating the input description, we can generate code for reading input data following it. we currently generate matlab and python code, but there is nothing particular about the choice of languages; it would be possible to cover most programming languages. for the input description from figure 1, we generate the following matlab code; the corresponding python code can be found in figure 2.

    function ivals = getinputsfromfile(fname)
      f1 = fopen(fname);
      c1 = textscan(f1, '%f');
      src = c1{1};
      fclose(f1);
      rptr = 1;
      ivals.nlayer = src(rptr);
      rptr = rptr + 1;
      for i = 1:ivals.nlayer
        ivals.lams(i) = 1e3 * src(rptr+i);
      end
      rptr = rptr + ivals.nlayer;
      ivals.kappas(1) = 1e-2 * src(rptr+1);
      for i = 2:3
        ivals.kappas(i) = 1e-3 * src(rptr+i);
      end
      rptr = rptr + 3;
      for i = 1:3
        ivals.rs(i) = 1e3 * src(rptr+i);
        ivals.cps(i+3) = src(rptr+i+3);
      end
      rptr = rptr + 6;
      ivals.tflash = 1e-3 * src(rptr);
    ivals :
      nlayer : contains the @number of layers (2 or 3) in the sample
      lams : array of thermal conductivities of layer @nlayer (in @w m^-2 k^-1).
      kappas : contains
        radius of sample (in @cm)
        laser (in @mm)
        measuring (in @mm)
      rs : heat transfer coefficient for losses in @w m^-2 k^-1
        from front face
        from rear face
        curved side face
      cps : specific heat capacities in @j kg^-1 k^-1
        of the front face
        of the rear face
        of the curved side face
      tflash : duration of laser flash in @ms

figure 1. formal input data description.

    class ivals():
        __slots__ = ("lams", "kappas", "rs", "cps", "tflash", "nlayer")
        def __init__(self):
            self.lams = {}
            self.kappas = {}
            self.rs = {}
            self.cps = {}

    ivals = ivals()
    with open(fname, 'r') as f1:
        src = f1.readlines()
    rptr = 0
    ivals.nlayer = int(src[rptr])
    rptr = rptr + 1
    ivals.tflash = 1e-3 * float(src[rptr])
    rptr = rptr + 1
    for i in range(0, 3):
        ivals.cps[i] = float(src[rptr + i])
        ivals.rs[i+3] = 1e3 * float(src[rptr + i + 3])
    rptr = rptr + 6
    ivals.kappas[0] = 1e-2 * float(src[rptr + 0])
    for i in range(1, 3):
        ivals.kappas[i] = 1e-3 * float(src[rptr+i])
    rptr = rptr + 3
    for i in range(0, ivals.nlayer):
        ivals.lams[i] = 1e3 * float(src[rptr + i])

figure 2. generated python code from the data description in figure 1.

we make sure to generate fresh variable names for the read pointer rptr, the file handle f1 and the file contents c1 and src. the rest of the names are guaranteed to be non-clashing, since we have validated the description. we then sequentially read the data, advancing the read pointer as we go along. we use the unit information to scale the data into the units used internally in the program. note also that we have taken the opportunity to merge the loops for ivals.kappas and ivals.rs into a single loop. these are exactly the kind of code transformations that are easy to get wrong if done manually; in contrast, we can reason generically that this transformation will always be correct. as a result, the generated code looks similar to code that one would write by hand, but without the risk of making, for example, an off-by-one error somewhere. another guiding principle for our code generation tool is convenience of use of the input data afterwards: the whole exercise would be pointless if the user had to convert the data manually into a different data type after reading. hence, we make sure to store the data in a data type that is "natural" for the chosen language, and which allows for direct access to the fields following the input data description. in matlab, structure arrays readily meet these requirements, but we have to take extra care to explicitly generate the needed classes in python.
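the essence of the generated readers can also be sketched directly: drive a single reading loop from a small declarative table (field name, length, scale factor to internal units) instead of hand-writing nested loops. a simplified python sketch of that idea (ours; the field names and scale factors follow figure 1, but the format table and its uniform per-field scaling are assumptions):

    # each entry: (field name, number of values, scale factor to internal units);
    # a length given as a string refers to an earlier field (a dependency)
    FORMAT = [
        ("nlayer", 1, 1),
        ("lams", "nlayer", 1e3),   # w m^-2 k^-1 -> internal units
        ("kappas", 3, 1e-3),       # simplification: figure 1 scales entry 0 by 1e-2
        ("rs", 3, 1e3),
        ("cps", 3, 1),
        ("tflash", 1, 1e-3),       # ms -> s
    ]

    def read_inputs(fname):
        with open(fname) as f:
            src = [float(tok) for tok in f.read().split()]
        vals, rptr = {}, 0
        for name, length, scale in FORMAT:
            n = int(vals[length][0]) if isinstance(length, str) else length
            vals[name] = [scale * x for x in src[rptr:rptr + n]]
            rptr += n   # one read pointer: no hand-maintained offsets
        return vals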
5. conclusions and future work
type systems could be a powerful tool in the digitalisation of metrology. by exploiting advances in dependent type systems, we have shown that we can strengthen our ability to reason about dimensional correctness, and also bridge the gap between human-readable semantic specifications of data and the actual code representing them in a specific programming environment. crucially, we were able to reap these benefits with minimal additional costs: we put to good use already existing typecheckers, without having to rewrite the infrastructure in place from scratch. we have chosen a straightforward treatment of dimensions as elements of a free group, and units as constants; this choice does not accurately disambiguate, for example, radians rad = m m⁻¹ and steradians sr = m² m⁻², even though they are of a very different nature. however, we stress that this is not an inherent limitation in the methodology of using types for dimensions: dimensionless quantity ratios can if necessary be tracked separately in types, using the same principles as presented here, which we hope to do in the future. overall, the work reported here is part of a larger project to incorporate dependent types in matlab programs for correctness checking, including dimensional correctness.

acknowledgements
thanks to alistair forbes, keith lines and ian smith for discussions about this work. the authors would also like to thank the attendants of the first imeko tc6 international conference on metrology and digital transformation for the useful suggestions and comments on a presentation of this work, in particular the suggestion to extend code generation also to python.
funding: supported by the uk national physical laboratory measurement fellowship project "dependent types for trustworthy tools".

6. references
[1] m. p. foster, quantities, units and computing, computer standards & interfaces 35 (2013), pp. 529-535. doi: 10.1016/j.csi.2013.02.001
[2] b. d. hall, software for calculation with physical quantities, 2020 ieee international workshop on metrology for industry 4.0 & iot, rome, italy, 2020, pp. 458-463. doi: 10.1109/metroind4.0iot48571.2020.9138281
[3] a. bove, p. dybjer, dependent types at work, lernet alfa summer school 2008, revised tutorial lectures (2009), pp. 57-99. doi: 10.1007/978-3-642-03153-3_2
[4] o. bennich-björkman, s. mckeever, the next 700 unit of measurement checkers, 11th acm sigplan int. conference on software language engineering, boston, usa, 2018, pp. 121-132. doi: 10.1145/3276604.3276613
[5] a. kennedy, types for units-of-measure: theory and practice, central european functional programming school 2009, revised selected lectures (2010), pp. 268-305. doi: 10.1007/978-3-642-17685-2_8
[6] m. c. schabel, s. watanabe, boost c++ libraries, chapter 42 (boost.units 1.1.0), 2010. online [accessed 13 january 2023] https://www.boost.org/doc/libs/1_81_0/doc/html/boost_units.html
[7] j.-m. dautelle, w. keil, o. santana, jsr 385: units of measurement, 2021. online [accessed 13 january 2023] https://unitsofmeasurement.github.io/pages/about.html
[8] t. muranushi, r. eisenberg, experience report: type-checking polymorphic units for astrophysics research in haskell, 2014 acm sigplan symposium on haskell, gothenburg, sweden, 2014, pp. 31-38. doi: 10.1145/2633357.2633362
[9] a. gundry, a typechecker plugin for units of measure: domain-specific constraint solving in ghc haskell, 2015 acm sigplan symposium on haskell, vancouver, canada, 2015, pp. 11-22. doi: 10.1145/2804302.2804305
[10] pint developers, pint: makes units easy, 2022. online [accessed 13 january 2023] https://pint.readthedocs.io/
[11] mathworks, matlab units of measurement, 2022. online [accessed 13 january 2023] https://www.mathworks.com/help/symbolic/units-of-measurement.html
[12] c. mcbride, f. nordvall forsberg, type systems for programs respecting dimensions, in: advanced mathematical and computational tools in metrology and testing xii, f. pavese, a. b. forbes, n. f. zhang, a. g. chunovkina (editors), world scientific, singapore, 2022, isbn 978-981-124-237-3, pp. 331-345. doi: 10.1142/9789811242380_0020
[13] a. kennedy, programming languages and dimensions, ph.d. dissertation, university of cambridge, 1995.
[14] c. c. sims, computation with finitely presented groups, cambridge university press, cambridge, 1994, isbn 978-0-511-57470-2. doi: 10.1017/cbo9780511574702
[15] m. wand, p. o'keefe, automatic dimensional inference, in: computational logic: essays in honor of alan robinson, j.-l. lassez, g. plotkin (editors), mit press, cambridge, massachusetts, 1991, isbn 978-0-262-12156-9, pp. 479-486.
[16] a. gundry, type inference, haskell and dependent types, ph.d. dissertation, university of strathclyde, 2013.
[17] b. d. hall, software representation of measured physical quantities, in: advanced mathematical and computational tools in metrology and testing xii, f. pavese, a. b. forbes, n. f. zhang, a. g. chunovkina (editors), world scientific, singapore, 2022, isbn 978-981-124-237-3, pp. 273-284. doi: 10.1142/9789811242380_0016
[18] bipm, the international system of units ('the si brochure'), ninth ed., bureau international des poids et mesures, 2019. online [accessed 13 january 2023] http://www.bipm.org/en/si/si_brochure/
[19] b. d. hall, m. kuster, representing quantities and units in digital systems, measurement: sensors 23 (2022), 6 pp. doi: 10.1016/j.measen.2022.100387

primary shock calibration with fast linear motor drive
acta imeko, issn: 2221-870x, december 2020, volume 9, number 5, 383-387
h. volkers1, h. c. schoenekess1, th. bruns1
1 physikalisch-technische bundesanstalt, braunschweig, germany, henrik.volkers@ptb.de
abstract: this paper describes the implementation of a new, fast and precise linear motor drive for ptb's primary shock calibration device. this device is used for monopole shock calibrations of accelerometers using the "hammer-anvil" principle according to iso 16063-13:2001 [1] and operates in a peak acceleration range from 50 m/s² to 5000 m/s². the main challenge in implementing this kind of shock generator is accelerating a hammer to velocities of up to 5 m/s within distances of less than 70 mm. in this paper, a few helpful improvements are described which lead to an enhanced repeatability of pulse generation over the full shock intensity range as well as a substantial decrease of harmonic disturbing signals.
keywords: primary shock calibration; hammer-anvil principle; half-sine pulse; pulse transmission; ldv interferometry; linear motion drive; magnetic actuator

1 introduction
ptb's original shock exciter, which was designed for low to medium acceleration intensities, mainly consists of a mechanical spring unit, an air-borne transmission element (the "hammer") and an air-borne measuring unit (the "anvil") to which the accelerometer to be calibrated is attached. the released spring pushes the hammer, which subsequently strikes the anvil and accelerates it. exchangeable elastic pads (pulse shapers) between hammer and anvil allow certain shock shapes to be attained and disturbing oscillations to be reduced. using the spring drive unit, shock peak values can be varied between 100 m/s² and 5000 m/s², whereas the shock duration is between 8 ms and 1 ms. different sets of the modules (hammer, anvil and spring unit) are available, allowing different combinations of peak values and shock durations to be excited. the acceleration is measured by applying two laser doppler vibrometers (ldv). after signal conditioning, both photodetector signals and the sensor measuring chain signal are captured simultaneously. following the signal demodulation, the primary measured acceleration peak and the sensor peak are used to determine the shock sensitivity according to

$s_{sh} = \dfrac{u_{peak}}{a_{peak}}$   (1)

by definition [1], the shock sensitivity $s_{sh}$ is calculated as the quotient of the output charge peak value of the accelerometer and the peak value of the interferometrically measured shock acceleration (evaluation in the time domain). it is dependent on the impact spectrum, the duration of impact and the peak acceleration value.
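as a toy illustration of eq. (1) (our own sketch with a synthetic sin² pulse, not measured ptb data), the shock sensitivity is simply the ratio of the two peak values extracted from the simultaneously captured records:

    import numpy as np

    def shock_sensitivity(u, a):
        """eq. (1): s_sh = u_peak / a_peak, both peaks taken in the time domain."""
        return np.max(np.abs(u)) / np.max(np.abs(a))

    # synthetic sin^2 pulse: 2 ms duration, 1000 m/s^2 peak, ideal 1 pc/(m/s^2) sensor
    t = np.linspace(0.0, 2e-3, 2001)
    a = 1000.0 * np.sin(np.pi * t / 2e-3) ** 2   # reference acceleration [m/s^2]
    u = 1.0 * a                                  # accelerometer output [pc]
    print(shock_sensitivity(u, a))               # -> 1.0 pc/(m/s^2)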
2 the initial configuration
the original driving mechanism to accelerate the hammer was realized by a spring-driven shaft with a free running length of up to 30 mm (c.f. figure 1). by manually setting the spring preload, the resulting hammer velocity was adjusted to generate the desired peak acceleration of the anvil. repeated loading and releasing of the spring was performed by motor-driven mechanics. a set of three spring modules with different stiffnesses provides the peak acceleration ranges needed. all of these spring units suffered from substantial drawbacks, such as:
• the highest force acts at the start of motion, which results in a strong jerk. this introduces high-frequency vibrations and excites resonances in the hammer and the anvil.
• the mechanical stop of the spring unit induces additional vibrations in the system.
• the force's set-point has to be mechanically adjusted by hand.
• the repeatability of adjusted shock intensities is of only moderate quality.
• three different spring units have to be employed to cover the 100 m/s² to 5 km/s² range.
• shock intensities below 100 m/s² could not be achieved with the old spring units.
• mechanical wear of the springs, locking knife edges and the release mechanism required frequent maintenance and readjustment. these were the main causes of poor repeatability.
• the calibration process involved a lot of manual interaction.
the design and realization of this unit was the first of its type and was done in the 1990s [2, 3]. other nmis have set up similar devices in the meantime, sometimes with different drive methods [e.g. 4, 5]. to address the issues listed above, a project was started to update the driving unit specifically.
the complete mechanical set-up of the calibration device is shown in figure 1. the manually adjustable spring unit is at the bottom. the air-borne hammer (the projectile) in the centre has a rectilinear motion distance of roughly 50 mm and is stopped after the impact with the anvil by mechanical dampers. the air-borne anvil (the target) at the top of the photograph has a free-motion distance of about 8 mm before it is halted by a pneumatic brake.

figure 1: mechanics of the shock calibration device

the impulse hammer element with a mass of approx. 0.25 kg hits the 0.25 kg anvil element with the mounted accelerometer (0.038 kg). the shock pulse is transmitted within a time that is shorter than 8 milliseconds. when the shortest duration (roughly 1 ms) is used, high accelerations can be achieved. the strongest spring with a maximum force of 400 n achieved more than 5 km/s² at the sensor's reference surface, while the softest spring, with a minimum preload force of about 10 n, reached 100 m/s². via a suitably elastic pulse shaper at the tip of the hammer, a sin² acceleration time curve is generated and sensed by the accelerometer (c.f. fig. 2). several different pulse shapers made of rubber could be attached to the hammer to obtain the desired pulse duration and shape.

figure 2: hammer and anvil heads with dut

as discussed above, the mechanical spring unit proved to be a weak point in the old design. during the relaxation of the spring, strong vibrations were transmitted into the entire mechanical system, which led to disturbances. in addition, the knife-edge mechanics of the release mechanism suffered from wear and tear and caused poor repeatability during calibration runs. the goal of this project was to overcome these issues using modern linear drive technology.

3 the new drive unit
3.1 requirements and design
in order to ensure the full operation of the customer calibration service, it was necessary to have a backstop conversion capability during the development and testing period. this resulted in dimensional design constraints. by applying the laws of basic mechanics, the required mechanical specifications were easily derived from the involved moving masses, impact speeds and dimensions. subsequent market research covering mechanic, pneumatic, hydraulic and electro-magnetic actuators led to the decision to use an electric linear motor drive. the selected solution is a type ps01-23x160hhp-r linear motor with a pl01-12x350/310-hp magnetic slider as well as a type c1100 standard closed-loop controller and a linmot® nti ag two-phase power supply¹ [6]. the package also includes software for the controller. this kind of drive was originally designed for so-called "fast pick and place" operation in industrial production lines. the use of such industrial equipment means that only moderate hardware costs are involved. in order to implement the drive, some modifications had to be applied to the calibration device, such as:
• construction of a damped stop for the runner of the linear motor (pusher).
• a prolongation of the hammer and an increase of the distance between the hammer and anvil air bearings.
• an increase of the hammer's mass (c.f. figure 3).
the last two actions permitted the working length of the drive to be enlarged by about 72 mm. in the original set-up, similar hammer and anvil dimensions were chosen for an optimal momentum transfer during impact.
¹ commercial components are merely identified in this paper to adequately specify the experimental set-up. naming these products does not imply a recommendation by ptb, nor does it imply that the equipment identified is necessarily the best available for the purpose.
this, however, also resulted in similar first modal resonances around 10 khz, which in turn favoured the propagation of disturbing oscillations during impact. the new hammer design reduces this effect by means of a (two times) heavier weight. table 1 summarizes the modified parameters.

table 1: specifications of both impact generators
change                    old state     new state
hammer weight             0.296 kg      0.595 kg
anvil weight              0.296 kg      0.296 kg
max. travel hammer        50 mm         72 mm
max. travel anvil         4 to 7 mm     7 to 8 mm
max. velocity hammer      5 m/s         5 m/s
max. velocity anvil       5 m/s         6 to 7 m/s
shock duration range      1 to 8 ms     1 to 8 ms
max. drive force          400 n         138 n
max. shock acceleration   5000 m/s²     5000+ m/s²

these system modifications are a consequence of changing the hammer weight and travel parameters. the main improvements are found in the higher anvil velocity and, at the same time, the reduction of the drive force to less than half. the latter greatly reduced the jerk and thus the issue of motion disturbances.

3.2 quasi-elastic central impact calculation
the selected linear motor with a maximum propulsive force of $f_{mot}$ = 138 n serves as the new mechanical shock generator. the stroke $s$ has been extended to 72 mm. the total mass to be accelerated consists of the runner's 0.280 kg mass $m_{sl}$ and the mass of the new air-borne hammer ($m_{ha}$ = 0.595 kg). the mass of the anvil to be pushed by the hammer is $m_{an}$ = 0.296 kg. the initial velocity reached by the accelerated hammer before impact is

$v_{ha} = \sqrt{\dfrac{2\,f_{mot}\,s}{m_{sl} + m_{ha}}}$   (2)

it attains up to 5 m/s. the moving hammer $m_{ha}$ pushes the resting anvil $m_{an}$ and transfers a large amount of its momentum to it. the mass ratio is $m_{ha}/m_{an} = 2$ and the anvil is initially at rest. applying the basic equations for an elastic central impact (conservation of momentum and kinetic energy), the speeds of the hammer and anvil after the impact become

$v'_{ha} = \dfrac{m_{ha} - m_{an}}{m_{ha} + m_{an}}\, v_{ha}$   (3)

and

$v'_{an} = \dfrac{2\,m_{ha}}{m_{ha} + m_{an}}\, v_{ha}$   (4)

or, with the given mass ratio:

$v'_{ha} = \tfrac{1}{3}\, v_{ha}$ and $v'_{an} = \tfrac{4}{3}\, v_{ha}$   (5)

this transformation moves the anvil to a one-third higher speed than the hammer's initial velocity. for an optimal hammer mass, one could insert (2) into the right-hand side of (4) and solve for the maximum value. this gives an optimum hammer mass of 0.581 kg when all boundary conditions are considered, which is pretty close to the realized new hammer. figure 3 in the next section shows the new shock generator set-up.
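a quick numerical check of eqs. (2) to (5) with the values from table 1 (our own sketch):

    import math

    F_MOT = 138.0   # max. drive force [N]
    S = 0.072       # usable stroke [m]
    M_SL = 0.280    # runner (slider) mass [kg]
    M_HA = 0.595    # new hammer mass [kg]
    M_AN = 0.296    # anvil mass [kg]

    # eq. (2): work-energy theorem for the accelerated runner + hammer
    v_ha = math.sqrt(2.0 * F_MOT * S / (M_SL + M_HA))

    # eqs. (3)-(4): elastic central impact onto the resting anvil
    v_ha_after = (M_HA - M_AN) / (M_HA + M_AN) * v_ha
    v_an_after = 2.0 * M_HA / (M_HA + M_AN) * v_ha

    print(f"hammer before impact: {v_ha:.2f} m/s")        # ~4.8 m/s
    print(f"hammer after impact:  {v_ha_after:.2f} m/s")  # ~1/3 of v_ha
    print(f"anvil after impact:   {v_an_after:.2f} m/s")  # ~4/3 of v_ha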
3.3 repeatability
an additional advantage of this linear motor drive set-up is the option of preselecting an accurate target value by setting a potentiometer or using the software interface for digital adjustment. the controller of the linear motor can be parametrized and controlled via two serial interfaces by the supplied vendor software or a freely available labview driver, which can be adapted to individual needs. for any given setting, the spread of the repeatedly realized acceleration peak values was drastically reduced compared to the original spring drive module. a comparison is depicted in figure 4. for both drives, the absolute range around the nominal value increases linearly with the shock intensity. however, for the new linear drive, the spread is more than an order of magnitude lower than before.

figure 3: shock generator for 5000 m/s² with finely controllable linear motor drive
figure 4: comparison of the statistical spread of acceleration intensity of the spring drive vs. the linear motor drive

3.4 signal quality
the disturbances due to modal resonances of the anvil are reduced by the redesign of the hammer. while the resonance frequency of both the hammer and the anvil was about 11.5 khz in the old set-up, the first longitudinal resonance of the new hammer is now at 13 khz. because the design and materials were identical, this difference can be attributed to the elongated geometry and the solid design (no axial holes). this disparity greatly reduces the transmission of ringing vibrations from the hammer to the reference surface of the accelerometer. in combination with the largely reduced jerk in the driving motion itself, ringing is no longer an issue in the signal shape. as an example, figure 5 shows an accelerometer output signal (filtered at 100 khz) for an acceleration peak value of 1.97 km/s².

figure 5: raw signal of a half-sine shock measurement by an endevco 2270 accelerometer

4 summary
the primary monopole shock acceleration calibration facility at ptb has been successfully upgraded by the implementation of an industrial electric linear motor as the driving unit for the hammer. the change in the driving mechanics and geometry involved adaptations to the hammer and anvil configuration, too. after successful implementation, the following improvements were noticeable:
1. a strong reduction in disturbing vibrations due to a lower, but constant, force level generated by the new drive.
2. a very strong reduction in the transfer of mechanical ringing from the hammer to the anvil due to the weight disparity between the two.
3. a great improvement in the repeatability of the shock intensity.
4. analog electrical or digital control of the set-point for the generated shock acceleration.
5. a greatly reduced need for maintenance, as the new drive is practically wear-free and has very little friction.

5 outlook
based on the newly implemented drive, a shift from manual operation to automated performance of the low-intensity shock calibration seems feasible in the near future. the control options via a digital interface (rs-485) using labview or other programming languages, together with the excellent repeatability, allow, in principle, unsupervised operation. however, in the current set-up the pulse shaper remains a component with unknown reproducibility. in order to achieve autonomous operation, some type of self-adjusting algorithm is necessary. first steps on the path towards such methods have been taken here and will be the topic of future publications.

6 references
[1] iso 16063-13, https://www.iso.org/standard/27075.html
[2] h.-j. von martens, a. taeubner, w. wabinski, a. link, h.-j. schlaak, "laser interferometry as tool and object in vibration and shock calibrations," proc. spie 3411, third international conference on vibration measurements by laser techniques: advances and applications, 1 june 1998. doi: 10.1117/12.307700
[3] h.-j. von martens, "ptb vibration and shock calibration," royal swedish academy of engineering sciences (iva), 8 pages, 15-16 september 1993, stenungsund, sweden.
[4] h. nozato, w. kokuyama, a. ota, "improvement and validity of shock measurements using heterodyne laser interferometer," measurement, vol. 77, pp. 67-72, january 2016.
[5] h. nozato, t. usuda, a. ota, t. ishigami, k. kudo, "development of shock acceleration calibration machine in nmij," imeko 20th tc3, 3rd tc16 and 1st tc22 international conference, cultivating metrological knowledge, 27-30 november 2007, merida, mexico.
[6] linmot webpage: https://linmot.com/

extended buffer zone algorithm to reduce rerouting time in biotelemetry systems using sensing
acta imeko, issn: 2221-870x, march 2022, volume 11, number 1, 1-7
bachina surendra babu1, satish kumar ramaraj2, karuganti phani rama krishna3, pinjerla swetha4
1 department of ece, bapatla engineering college, bapatla, guntur district, andhra pradesh, 522102, india
2 department of medical electronics, sengunthar college of engineering, tiruchengode, 637205, tamilnadu, india
3 department of ece, pvp siddhartha institute of technology, vijayawada, 520007, andhra pradesh, india
4 department of ece, malla reddy college of engineering and technology, hyderabad, 500100, telangana, india
section: research paper
keywords: routing; buffer zone; measurement; rerouting time; virtual zone
citation: bachina surendra babu, satish kumar ramaraj, karuganti phani rama krishna, pinjerla swetha, extended buffer zone algorithm to reduce rerouting time in biotelemetry systems using sensing, acta imeko, vol. 11, no. 1, article 26, march 2022, identifier: imeko-acta-11 (2022)-01-26
section editor: md zia ur rahman, koneru lakshmaiah education foundation, guntur, india
received november 29, 2021; in final form march 4, 2022; published march 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: b. surendra babu, e-mail: surendrabachina@gmail.com

abstract
mobile adhoc network (manet) routing methods must deal with connection breakdowns caused by frequent node movement, measurement and a dynamic network topology. in these cases, the protocol must discover alternate routes, and the delay that occurs during this rediscovery is the rerouting time. researchers have proposed many ways to reduce the rerouting time. one such technique is buffer zone routing (bzr), which divides a node's transmission region into a safe zone adjacent to the node and a hazardous zone towards the edge of the broadcast range. this technique, however, has a few gaps and restrictions, such as the ideal dimensions of the buffer zone, an increase in hop count, network load, and so on. this study offers a method to improve and extend buffer zone communication by grouping nodes inside the buffer zone into virtual zones based on their energy level. as the routing decisions are made quickly, the energy consumption of the nodes is minimized. in the safe area of the extended bzr, transfer time is reduced and routing efficiency is increased. it solves issues in the present algorithm and fills holes in it, decreasing the time required for rerouting in a manet.

1. introduction
a mobile adhoc network (manet) is a dynamic network made up of several nodes. an ad hoc network is a self-contained system that operates without the assistance of a centralized authority. routing is a difficult operation owing to the mobility of the nodes, and the changing architecture of an ad hoc network causes frequent route breaks. route failure has an impact on network connectivity. furthermore, the nodes rely on limited battery power, and a lack of power in any node can lead to network partitioning [1].
routing is the main function guiding communication across large networks, and the basic duty of every routing protocol is to find and preserve routes to the required network destinations [6], [7]. ad hoc network routing protocols are divided into two classes: proactive and reactive [2], [8]. with a proactive protocol, a node may send data to a certain destination as soon as it needs to, since routes are maintained in advance; a reactive routing protocol, on the other hand, determines a route as and when it is requested by a network node. this article is about mobile adhoc networks that employ a proactive routing system [3]. because of their dynamic topology, link breaks are a typical feature of manets, and in such circumstances the routing protocol must seek alternate routes. the rerouting interval is the period before new paths are identified, and the rerouting time is the duration of this interval [4], [9]. stale routes exist across the broken connection during the rerouting interval. rerouting can take place only when the routing protocol detects that the connection is damaged; in fact, detecting the connection break accounts for a considerable portion of the rerouting time [5], [10]. in summary, the rerouting time due to connection breakdowns is determined by the time required to complete the following processes:
• link break detection;
• network-wide link-state notification of the new pathways;
• draining of all stale packets from the output queue.
the first of these, link break detection, typically dominates, as sketched below.
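to put numbers on the detection component, a small sketch (ours) of the link-break detection delay under hello-based polling, using the olsr defaults quoted in section 3.1 (2 s hello interval, 6 s hold time):

    def detection_delay(hello_interval, hold_time, break_after_last_hello):
        """a neighbour is declared lost when no hello has arrived for hold_time.
        the link physically breaks break_after_last_hello seconds after the last
        successfully received hello; detection then happens at hold_time."""
        return hold_time - break_after_last_hello

    HELLO, HOLD = 2.0, 6.0  # olsr suggested values [s]

    # best case: the break happens just before the next hello was due
    print(detection_delay(HELLO, HOLD, HELLO))  # 4.0 s
    # worst case: the break happens right after a hello was received
    print(detection_delay(HELLO, HOLD, 0.0))    # 6.0 s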
in this work, nodes running bzr interact and update their routing tables once they are live. in the extended bzr (ebzr), the virtual zone information of a node is also maintained in the routing table, in addition to the neighbour information. this data is then used when making routing decisions. finally, routing can be refined by continuously measuring the available zones, interference levels, etc. the remainder of the paper is structured as follows. section ii surveys related work. section iii describes the background of optimized link state routing (olsr), the zone routing algorithm (zra), and link breakage and rerouting time. the proposed method is presented in section iv, along with a comprehensive discussion of its benefits. simulation results are provided in section v, with performance comparison charts for standard olsr and the buffer and virtual zone algorithms. lastly, section vi outlines the conclusion and the scope for future work.

2. related work
this article is a continuation of [11], which examined mobile adhoc network routing protocols and metrics and found that rerouting time is a significant performance measure. the authors of [12] offer an adaptive retry limit technique to decrease the rerouting time; they also identify queueing as one of the key variables having a significant influence on rerouting time, and the recommended remedy was implemented and tested. g. jisha and s. dhanya [13] offer a review of different zone-based routing enhancements in mobile adhoc networks. beyond the fundamental protocol implementation, their article surveys ten other enhancements offered to the zone routing protocol (zrp); it also analyses these enhancements and recommends the applications best suited to them [14]. this article implements a virtual zone-based routing enhancement technique that is described in this study. the authors present a novel zone-based routing algorithm for manets in [15]: to discover numerous stable routes between the source and destination nodes, the proposed technique integrates the concept of a buffer zone with the olsr protocol. a trust-based computation method for improving the security of zone-based routing has also been offered, in which the performance of a vulnerable zrp is tested when many network nodes misbehave and drop data packets. olsr is extended in [16] to preserve node energy; in the qualnet simulator, the performance of the proposed system is evaluated in terms of control overheads, energy consumption, end-to-end latency, and packet delivery ratio (pdr). the author of [17] investigates the queueing problem and proposes methods to decrease queue stagnation. the author of [18] investigates the adaptive retry limit approach to eliminating the queueing problem in mobile adhoc networks, and also proposes a solution of asynchronous invocation that addresses the gaps in the adaptive retry limit approach; as a result, queueing is eliminated and the rerouting time is reduced in mobile adhoc network routing. the simulation findings show that when buffer zone communication is implemented in olsr, the rerouting time is decreased. compared to regular olsr, the addition of a transmission buffer zone improves throughput, and the use of a buffer zone is advantageous in both low- and high-traffic situations [19].

3. background
3.1. olsr
olsr is proactive in nature, and routes to all network destinations are available immediately at each node; it is an optimisation of the pure link-state routing protocol. the method is built around the multipoint relay (mpr) concept: a multipoint relay minimizes the size of the control messages [20], and the usage of mprs also reduces control traffic flooding, since only multipoint relays forward broadcast control messages, which lowers the number of transmitted broadcast control messages. olsr has two main features, neighbour detection and topology distribution; with these two, each node constructs routes to all known destinations. the hello and topology control (tc) messages are the two most significant messages in olsr:
the use of a buffer zone is advantageous in both traffic situations of low and high [19]. 3. background 3.1. olsr the olsr is active in nature and routes for all network destinations are accessible immediately at each node. the open shortest path first is an optimisation of the pure link formal routing protocol (ospf). this method is linked to the multipoint relay idea (mpr). a multi-point relay minimizes the control messages' size [20]. the usage of mprs also reduces control traffic flooding. multipoint transmits forward control memos that give the benefit of lowering the sum of transmitted broadcast control memos. the olsr has two main features: neighbor detection and topology distribution. each node constructs routes with these two attributes for all known destinations. the hello and (topology control) tc are the two most significant messages in olsr: 1) hello messages: each node transmits hello messages on a regular basis to allow connection sensing, neighbour discovery and mpr selection signalling. the suggested hello messages emission intermission is 2 seconds and neighbour info time are 6 secs. a neighbour is deemed lost 6 secs after the neighbour received the last hello note. 2) tc messages: the link state (tc) messages are generated and transmitted via the system by every mpr, based on the information collected via hello messages. for tc communications, the optimum emission interval is 5 seconds and the hold duration is 15 seconds. 3.2. zone routing algorithm this method relies on the definition of nodes as safe or insecure, and whichever use as relay nodes if they are benign or avoid as relay nodes if they are insecure. in addition, if possible, traffic to hazardous nodes within the transmission region of the transmitting node should be tried through secure nodes. the signal power of the hello packets may be used as a criterion to identify in which nodes and what zones with different mobility speeds are regarded acceptable and unsafe. to support surrounding nodes in the routing of their unsafe neighbours, each link admission in the hello packets needs to be included in the zone status and communicated to other neighbours. a packet must not be routed to a relay node that has an unsafe neighbour as its destination. the routing table for every node is chief designed on the basis of the safety zone nodes. if this leads to dividing, it is included in the routing table to cross nodes in the dangerous zone. the buffer zone routing theory applies exclusively to the nodes in the harmless zone as realized in figure 1. the nodes in the dangerous bzr must only be utilized to forward if complete connectivity without them is impossible. as the neighbour defines the two-hop neighbour and topology set and no route modifications are allowed on the already specified routes. if a node is already shown as a acta imeko | www.imeko.org march 2022 | volume 11 | number 1 | 3 destination in the routing database, the afresh identified route to the similar destination will be ignored, even if it has fewer hops than its first route. in figure 2 the phases of the buffer zone routing method are depicted. 3.3. rerouting time one typical feature of ad hoc systems is that connections may break because of variations in radio circumstances, node mobility and other dynamic network. in these cases, the routing is meant to locate other paths. the period before novel paths is discovered is termed the recirculating time, and the recirculating time is called the recirculating time. 
3.3. rerouting time
one typical feature of ad hoc networks is that links may break because of changes in radio conditions, node mobility and other network dynamics. in these cases, the routing protocol is meant to locate other paths. the interval before new paths are discovered is the rerouting interval, and its duration is the rerouting time. take the example represented in figure 3, i.e., the period from the disruption of the link between a and c (1) to the re-establishment of the connection via the intermediate node b (2).

figure 1. communication area zones of a node: (a) zones, (b) safe node, (c) unsafe node.
figure 2. the zone routing algorithm (flowchart: clear the routing table; add routes to all neighbours in the safe zone; on the first pass, add routes to all topology tuples with increasing hop count, safe zone only; add two-hop neighbours when both are in the safe zone).
figure 3. rerouting time.
figure 4. linux protocol stack.

3.4. queueing and rerouting time
different conditions affect the rerouting time in a manet, and queueing is one such scenario. when the transmission rate increases, packets pile up in queues at each layer; the packets are then processed sequentially, which increases the rerouting time. this problem has a layered solution, offering an adaptive retry limit at the mac layer: the retry limit is reduced by 1 for every packet to the same mac destination that is lost by reaching the retry limit, until every packet has been transmitted once. once a packet is transferred successfully, the retry limit is reverted to its original, standard value. two parameters significantly determine the latency associated with queueing, namely the size of the queue and the retry limit. a large transmission queue can hold too many waste packets with stale routing information, and too many waste retransmission attempts result in a high retry count for these packets; the combination of these factors can significantly extend the rerouting time. figure 4 presents an overview of the protocol stack and the queues at each layer.
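the adaptive retry limit described above can be sketched in a few lines (ours; a simplification of the mechanism studied in [12] and [18]):

    DEFAULT_RETRY_LIMIT = 7   # typical 802.11 default, used here for illustration

    class AdaptiveRetry:
        """lower the retry limit while consecutive packets to the same mac
        destination keep failing; restore it after the first success."""
        def __init__(self):
            self.limit = DEFAULT_RETRY_LIMIT

        def on_drop(self, dest):
            # packet to dest was dropped after exhausting the retry limit:
            # spend fewer retries on the (likely broken) link next time
            self.limit = max(1, self.limit - 1)

        def on_success(self, dest):
            # link works again: revert to the original limit
            self.limit = DEFAULT_RETRY_LIMIT

    mac = AdaptiveRetry()
    for _ in range(4):
        mac.on_drop("c")
    print(mac.limit)      # 3 -- stale packets drain much faster
    mac.on_success("c")
    print(mac.limit)      # 7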
3.5. factors affecting rerouting time

although many elements affect the rerouting time, such as node energy levels, transmission ranges and network structure, node speed and traffic load have the greatest impact on this characteristic of mobile ad hoc networks. node speed: reducing the node speed reduces the number of link breaks, and the rerouting time grows as the node speed increases. at lower node speeds, the risk of a link break caused by a neighbour moving out of the communication zone is smaller. this makes it possible to set the threshold range higher to gain the same advantage while reducing the drawback of longer path lengths. traffic load: if the whole network is overloaded by a large number of unneeded broadcasts and increased packet loss, successful transmissions are left with a reduced share of the overall network capacity. in the event of link breaks, the combination of partitioning and reduced retransmissions increases the throughput at the lower layers, but the partitioning means that packets will probably only travel a few hops, exacerbating the unfairness between short-path and long-path traffic [20]. in short, the rerouting time is directly proportional to the node speed and the traffic load: it increases as the speed and traffic load of the nodes increase.

3.6. link breaks and rerouting time

link breaks are the cause of the queueing scenario. when a node loses the link to its neighbour, the routing protocol searches for the shortest alternative path available. to avoid such failures, link breaks have to be detected considerably sooner; the details of such a preventive mechanism are, however, beyond the scope of this paper. the standard technique for a routing protocol to detect link breaks is through missed polling packets [21]. the olsr hello packets are sent among one-hop neighbours at a set frequency to provide information about neighbourhood links and thereby a method for detecting link breaks. when no hello packet is received from a neighbour within a specific time interval (for example, within 6 seconds, the suggested olsr interval), the neighbour is considered unreachable, and the link to the neighbour is judged broken and invalid. another technique to identify link failures is to delegate the detection to a mechanism in the underlying link layer; the routing protocol must then be expressly notified of a link break by the link layer. the drawback of this link layer notification (lln) method is the extra implementation complexity; the benefit is that the link layer usually detects link breaks earlier. the buffer zone algorithm (bza) uses missed hello packets to detect link breaks. for the overall performance it is vital to recognize a link break in a timely manner, because two negative effects occur between the physical link break and its detection by the routing protocol. first, packets in the interface queue are marked with the now unreachable next-hop address; these packets never reach their destination and are lost. second, these packets are retransmitted numerous times at the mac layer until they are discarded, which steals valuable air time from packets of other nodes addressed to a valid next hop.
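the hello-based detection just described can be sketched in a few lines, assuming the olsr defaults quoted above (2 s emission interval, 6 s neighbour hold time). timestamps are plain floats and all names are illustrative.

```python
HELLO_INTERVAL = 2.0      # seconds, suggested olsr hello emission interval
NEIGHB_HOLD_TIME = 6.0    # neighbour considered lost after this much silence

class NeighbourTable:
    def __init__(self):
        self.last_heard = {}                  # neighbour -> time of last hello

    def on_hello(self, neighbour, now):
        self.last_heard[neighbour] = now

    def broken_links(self, now):
        """Neighbours whose links are judged broken and invalid."""
        return [n for n, t in self.last_heard.items()
                if now - t > NEIGHB_HOLD_TIME]
```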
4. proposed solution

4.1. analysis

the buffer zone solution and regular olsr differ in the average number of hops between a source and a destination. with the buffer zone solution, the number of hops per path increases, as nodes in the safe zone are preferred as relay nodes. this hop-count increase is the biggest drawback of the buffer zone approach. first, the longer paths increase the number of transmissions required for the same end-to-end streams and so reduce the overall capacity available per traffic stream. secondly, the longer the paths become, the greater the danger that the topology information held by the transmitting nodes is incorrect: there is a higher likelihood that the ideal path (both the actual one and the one presented in the tables) changes while the packet travels between the communicating nodes, which increases the danger of a packet loop or a significant detour. an increased average path length, an increased packet detour risk and an increased packet loop risk all add to the likelihood of time-to-live (ttl) depletion. moreover, too large a buffer zone leads to an unnecessarily high mean number of hops between the manet node pairs and a greater risk of network partitions. the bza also lacks a way to predict the ideal size of the buffer zone according to criteria such as network load and node mobility. finally, the buffer zone technique may be enhanced and extended to categorize a neighbour as safe or unsafe using criteria other than distance [22]. these shortcomings and gaps allow the buffer zone algorithm to be enhanced or extended.

4.2. extending the buffer zone algorithm

this section describes the key characteristics of the bzr mechanism. once the nodes are up, they interact and update their routing tables; every node then knows its one-hop, two-hop and multi-hop neighbours. in addition to the neighbour information, the virtual zone information of the node is also updated in the routing table; this data is then used in the routing decision. virtual zones are dynamically constructed based on node similarities, and virtual zone generation can be carried out whenever the topology changes or periodically. the arrangement of the nodes within the safe zone into virtual zones is presented in figure 5. whenever a node leaves a virtual zone, it announces this via its hello packets to its neighbours, which then update this information in their routing tables. as a result, the nodes know their neighbours' virtual zones at any given time.

4.3. exploring the benefits of the virtual zone algorithm

as all nodes hold the complete virtual zone information in their routing tables, routing is optimized within the virtual zone, which reduces the overall number of hops and keeps the rerouting time to a minimum. this solution combines the benefits of the zone routing algorithm with the benefits of virtual zone induction. apart from the node distance employed in the zone routing algorithm alone, this approach also assesses the energy level of the nodes. the energy consumption of the nodes is also reduced, as routing decisions are made quickly. this method increases routing efficiency and minimizes the transfer time in the safe zone. the most favourable scenario is when the source and destination nodes are inside the same virtual zone and both communicating nodes are in the safe zone; when only a neighbour or one of the multi-hop neighbours is in the virtual zone, the case is average.

figure 5. nodes in a virtual safe zone.
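a sketch of the hello-based virtual-zone advertisement of section 4.2 follows. the packet format and the zone-assignment rule are assumptions, since the paper leaves both abstract.

```python
class VirtualZoneNode:
    def __init__(self, name, zone_id):
        self.name, self.zone_id = name, zone_id
        self.neighbour_zones = {}             # neighbour -> last known zone

    def make_hello(self):
        # each hello carries the sender's current virtual-zone id
        return {"sender": self.name, "zone": self.zone_id}

    def on_hello(self, hello):
        # neighbours update their routing data with the advertised zone
        self.neighbour_zones[hello["sender"]] = hello["zone"]

    def leave_zone(self, new_zone_id):
        # a zone change is announced in the next hello, so all neighbours
        # know the node's virtual zone at any given time
        self.zone_id = new_zone_id
```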
5. discussion

having understood the improvement to the present buffer zone, virtual zones could also be extended to the unsafe zone, which would ultimately cut the rerouting time further. there are, however, a few reasons for caution. first, although the virtual zone reduces the rerouting time, maintaining the routing table and the virtual zone(s) is an overhead. secondly, introducing virtual zones in the buffer zone increases the network traffic: the nodes communicate their virtual zone membership using hello packets, so the number of packets delivered increases in order to keep the virtual zone information current, and the packet size also grows because it carries the virtual zone information. given the node energy levels, the virtual zones are currently limited to the safe zone; extending them into the dangerous zone without hurting performance is left open for future work. another direction is that the creation of virtual zones eventually leads towards the zrp, which seeks to overcome the constraints of proactive and reactive protocols by incorporating the best features of both and can therefore be characterised as a hybrid proactive/reactive routing protocol. in a manet, the majority of the communication between nodes can be anticipated; within a zone, packets are transmitted proactively. the proposed method, by contrast, is simply an extension of the existing zone routing algorithm and is limited to shortest-path, proactive routing; its improvement targets the rerouting time only. this differs from the zrp, which employs both reactive and proactive methods.

6. simulation results

this section presents the simulation setup and then documents the findings. comparison charts for standard olsr, the buffer zone and the virtual zone are presented. the parameter settings used throughout the simulations are provided in table 1. the simulations were conducted using version 2.34 of the ns-2 network simulator. the optimized link state routing protocol (olsr) [9] was used for multi-hop routing, with the ieee 802.11 scheme as the mac layer. the nodes were split into two equally large groups, and each group node sent packets to the nodes of the other group. the traffic type was udp at a constant bit rate, with 64-byte packets.

table 1. simulation parameter settings.
radio propagation: tworayground
interface queue: fifo with droptail and priqueue
interface queue size: 300 packets
antenna: ideal omniantenna
olsr hello interval: 2 s
olsr hello timeout: 6 s
olsr tc interval: 5 s
olsr tc timeout: 15 s
maximum mac retries: 7
traffic ttl: 32
nominal transmission rate: 2 mbps
basic rate: 1 mbps
simulation period: 500 s

figure 11 shows the goodput results at a relatively high node speed of 10 m/s and a low overall traffic load of 50 kbps. comparing the virtual zone approach at thresholds below 250 m with standard olsr, which equals the bza with a threshold of 250 m, the gain of the buffer-zone algorithm at 220 m is 84 % − 75 % = 9 %. figure 6 shows that the main benefit of the virtual zone algorithm is that the retransmission loss (ret loss) is significantly decreased compared to the ret loss of the buffer zone and regular olsr; the packets lost because of a missing route (nrte loss) are shown in figure 7. most link breaks are avoided because packets can still be received by nodes beyond the discard zone, which reply with an acknowledgement before the neighbour moves beyond the transmission radius. thus, the ret loss is reduced for the virtual zone approach as well, as seen in figure 6. this advantage outweighs the disadvantage of a higher probability of partitioning, which results in a clearly higher goodput than with standard olsr and the buffer zone algorithm, as shown in figure 11. since the reduction in ret loss has been identified as the key advantage of the virtual zone method, it is worth looking at the cost of this approach. figure 7 indicates that, in terms of packets lost owing to a lack of a route, there is no difference among the virtual zone, buffer zone and normal olsr solutions. thus, the odds of network partitioning are not increased by the virtual zone algorithm compared to normal olsr and the buffer zone. this is expected because, if necessary, the buffer zone algorithm builds connections with neighbours in the buffer zone. the buffer zone solution and the virtual zone differ, however, in the average number of hops between two nodes. as illustrated in figure 9, the number of hops per path is increased with the buffer zone approach, as the safe-zone nodes are favoured as relay nodes. the longer hop count is the biggest drawback of the buffer zone solution; it was lowered once the virtual zone was introduced into the safe zone.

figure 6. ret loss (loss caused by reaching the maximum number of mac retransmissions).
figure 7. nrte loss (loss caused by a missing route).
figure 8. ttl loss (loss caused by an exhausted time to live).
a higher average path length, a higher packet detour risk and an increased packet loop risk add to the likelihood of time-to-live (ttl) exhaustion. indeed, because of packets exhausting their time to live (ttl loss), the buffer zone technique loses more than standard olsr, as shown in figure 8. figure 12 indicates that the virtual zone method boosts the goodput, even under heavy traffic loads, compared with the standard olsr and buffer zone results; the virtual and buffer zone methods, however, are more or less comparable with each other. for such large traffic loads, the total gain of the virtual zone solution is inevitably lower. the reason for this is that the whole network suffers from a greatly increased packet loss, which leaves the successfully transmitted traffic, whether standard olsr traffic or traffic using a bza, with a smaller share of the total network capacity. in addition to a longer average path, the cost of the virtual zone solution also includes a higher routing load owing to a higher payload of hello messages. this is because the virtual zone solution depends on the zone status of the neighbouring nodes being published in hello messages. as shown in figure 10, the increase in routing load for 40 nodes at 10 m/s is about 3 kbps at a threshold of 250 m, which is rather low compared with the whole network capacity.

7. conclusion

introducing virtual zones into the buffer zone in olsr improves the performance compared with the buffer zone alone and with olsr. when transmission takes place within virtual zones, the rerouting time is also significantly decreased. this paper discusses the rationale for confining the virtual zone to the safe zone, as well as the difference between the virtual zone and the zone routing protocol. the proposed approach is simulated using ns 2.34, and experiments are undertaken to demonstrate that the performance with virtual zones is increased. the comparison charts indicate that the rerouting time is reduced when virtual zones are inducted. extending virtual zones to the dangerous zone would be a stimulating piece of future work. in addition to the node criterion used here to construct a virtual zone, further criteria for the creation of virtual zones could be introduced in future; by improving both the control traffic performance and the delay of the proposed zrp, the techniques can be applied to single or multiple channels of manets. it could also be shown that such additional criteria improve the routing performance of the manet. although the routing speed with virtual zones is increased, maintaining the routing table is an overhead owing to the high mobility of the nodes, which makes the method complex. reducing this overhead in the current technique is also left as future work. the current approach limits the construction of the virtual zone to the safe zone of the bza.

figure 9. average number of hops.
figure 10. routing load.
figure 11. goodput for 1 and 10 m/s at 50 kbps load.
figure 12. goodput for 1 and 10 m/s at 500 kbps load.

references

[1] a. mohammed munawar, p. sheik abdul khader, a comprehensive analysis on mobile adhoc network routing metrics, protocols and simulators, proceedings of the national conference on advances in computer applications, 28 (2012), isbn: 978-93-80769-16-5.
[2] a. mohammed munawar, p. sheik abdul khader, an enhanced approach to reduce rerouting time in mobile adhoc networks, proceedings of the ieee international conference on emerging trends in computing, communication and nano technology (ice-ccn'13), 25-26 march 2013.
[3] a. mohammed munawar, p. sheik abdul khader, elimination of queue stagnation in mobile adhoc networks, proceedings of the national conference on recent trends in web technologies (rtwt'13), 4-5 october 2013.
[4] e. larsen, l. landmark, v. pham, ø. kure, p. e. engelstad, routing with transmission buffer zones in mobile adhoc networks, proceedings of the ieee international symposium on a world of wireless, mobile and multimedia networks (wowmom), kos, greece, 15-18 june 2009, isbn: 978-1-4244-4440-3. doi: 10.1109/wowmom.2009.5282463
[5] v. pham, e. larsen, k. øvsthus, p. engelstad, ø. kure, rerouting time and queueing in proactive ad hoc networks, proceedings of the performance, computing, and communications conference (ipccc), new orleans, usa, 11-13 april 2007, pp. 160-169. doi: 10.1109/pccc.2007.358891
[6] dhanya sudarsan, g. jisha, a survey on various improvements of hybrid zone routing protocol in manet, proceedings of the international conference on advances in computing, communications and informatics (icacci '12), pp. 1261-1265. doi: 10.1145/2345396.2345599
[7] y. elrefaie, l. nassef, i. a. saroit, enhancing security of zone-based routing protocol using trust, proceedings of infos, giza, egypt, 14-16 may 2012.
[8] mayur tokekar, radhika d. joshi, enhancement of optimized linked state routing protocol for energy conservation, ccsea 2011, cs & it 02, 2011, pp. 305-319. doi: 10.5121/csit.2011.1228
[9] t. clausen, p. jacquet, optimized link state routing protocol (olsr), rfc 3626, october 2003.
[10] c. siva ram murthy, b. s. manoj, ad hoc wireless networks – architectures and protocols, pearson education, 2007.
[11] s. corson, j. macker, mobile ad hoc networking (manet): routing protocol performance issues and evaluation considerations, rfc 2501, 1999.
[12] d. maltz, d. johnson, ad hoc networking, addison-wesley, 2001.
[13] v. o. k. li, z. lu, ad hoc network routing, ieee international conference on networking, sensing and control, vol. 1, 2004, pp. 100-105.
[14] a. mohammed munawar, p. sheik abdul khader, asynchronous invocation method to eliminate queueing in mobile adhoc networks, proceedings of the second international conference on design and applications of structures, drives, communicational and computing systems (icdasdc'13), 29-30 november 2013, isbn: 978-93-80686-92-9.
[15] network simulator ns-2. online [accessed 17 march 2022]: http://www.isi.edu/nsnam/ns/
[16] yi huang, clemens gühmann, wireless sensor network for temperatures estimation in an asynchronous machine using a kalman filter, acta imeko, 7(1) (2018), pp. 5-12. doi: 10.21014/acta_imeko.v7i1.509
[17] mariorosario prist, andrea monteriù, emanuele pallotta, paolo cicconi, alessandro freddi, federico giuggioloni, eduard caizer, carlo verdini, sauro longhi, cyber-physical manufacturing systems: an architecture for sensors integration, production line simulation and cloud services, acta imeko, 9(4) (2020), pp. 39-52. doi: 10.21014/acta_imeko.v9i4.731
[18] lorenzo ciani, alessandro bartolini, giulia guidi, gabriele patrizi, a hybrid tree sensor network for a condition monitoring system to optimize maintenance policy, acta imeko, 9(1) (2020), pp. 3-9.
doi: 10.21014/acta_imeko.v9i1.732
[19] kavita pandey, abhishek swaroop, a comprehensive performance analysis of proactive, reactive and hybrid manets routing protocols, international journal of computer science issues, 8(3) (2011), pp. 432-441.
[20] livio d'alvia, eduardo palermo, stefano rossi, zaccaria del prete, validation of a low-cost wireless sensors node for museum environmental monitoring, acta imeko, 6(3) (2017), pp. 45-51. doi: 10.21014/acta_imeko.v6i3.454
[21] jiayu luo, xiangyu kong, changhua hu, hongzeng li, key performance-indicators-related fault subspace extraction for the reconstruction-based fault diagnosis, measurement, 186 (2021), pp. 1-12. doi: 10.1016/j.measurement.2021.110119
[22] manohar yadav, a multi-constraint combined method for road extraction from airborne laser scanning data, measurement, 186 (2021), pp. 1-13. doi: 10.1016/j.measurement.2021.110077

introductory notes for the acta imeko third issue 2022

acta imeko
issn: 2221-870x
september 2022, volume 11, number 3, 1-3

francesco lamonaca1

1 department of computer science, modeling, electronics and systems engineering (dimes), university of calabria, ponte p. bucci, 87036, arcavacata di rende, italy

section: editorial
citation: francesco lamonaca, introductory notes for the acta imeko third issue 2022, acta imeko, vol. 11, no. 3, article 1, september 2022, identifier: imeko-acta-11 (2022)-03-01
received september 28, 2022; in final form september 28, 2022; published september 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: francesco lamonaca, e-mail: editorinchief.actaimeko@hunmeko.org

dear readers,
the third issue 2022 of acta imeko collects contributions related to two events organized by imeko tc17, the imeko technical committee on robotic measurement [1]-[9], and, as usual, papers that do not relate to a specific event and are collected in the general track [10]-[18]. annually, tc17 organizes the "international symposium on measurements and control in robotics" (ismcr), a full-fledged event focusing on various aspects of international research, applications, and trends related to robotic innovations for the benefit of humanity, advanced human-robot systems, and applied technologies, e.g.
in the allied fields of telerobotics, telexistence, simulation platforms and environments, and mobile work machines, as well as virtual reality (vr), augmented reality (ar) and 3d modelling and simulation. the introduction to the papers related to this event is given in the editorial authored by prof. zafar taqvi, organizer of this special issue. as editor in chief, it is my pleasure to give readers an overview of the general track papers, with the aim of encouraging potential authors to consider sharing their research through acta imeko. additive manufacturing (am) is becoming a widely employed technique also in mass production. in this field, compliance with geometry and mechanical performance standards represents a crucial constraint. since 3d printed products exhibit a mechanical behaviour that is difficult to predict and investigate due to their complex shape and the inaccuracy in reproducing nominal sizes, optical non-contact techniques are an appropriate candidate to solve these issues. in the paper "measurement of the structural behaviour of a 3d airless wheel prototype by means of optical non-contact techniques" [10] by antonino quattrocchi et al., 2d digital image correlation and thermoelastic stress analysis are combined to map the stress and strain performance of an airless wheel prototype. the innovative airless wheel samples are 3d-printed by fused deposition modelling and stereolithography in poly-lactic acid and photopolymer resin, respectively. the static mechanical behaviour for different wheel-ground contact configurations is analysed using the aforementioned non-contact techniques. moreover, the wheel-ground contact pressure is mapped, and a parametric finite element model is developed. the results presented in the paper demonstrate that several factors have a great influence on 3d printed airless wheels: a) the material used for manufacturing the specimen, b) the correct transfer of the force line (i.e., the loading system), c) the geometric complexity of the lattice structure of the airless wheel. the work confirms the effectiveness of the proposed non-contact measurement procedures for characterising complex-shaped prototypes manufactured using am. body impedance analysis (bia) is used to evaluate the human body composition by measuring the resistance and reactance of human tissues with a high-frequency, low-intensity electric current. nonetheless, the estimation of the body composition is influenced by many factors: body status, environmental conditions, instrumentation, and measurement procedure. valerio marcotuli et al., in "metrological characterization of instruments for body impedance analysis" [11], present the results of a study about the effect of the connection cables, conductive electrodes, adhesive gel, and bia device characteristics on the measurement uncertainty. tests were initially performed on electric circuits with passive elements and on a jelly phantom simulating the body characteristics. results showed that the cables mainly contribute to increasing the error on the resistance measurement, while the electrodes and the adhesive introduce a negligible disturbance on the measurement chain. the authors also propose a calibration procedure based on a multivariate linear regression to compensate for the systematic error effect of bia devices. sergio moltò et al.,
in the paper "uncertainty in mechanical deformation of a fabry-perot cavity due to pressure: towards best mechanical configuration" [12], present a study about the deformation of a refractometer used to achieve a quantum realization of the pascal. first, the propagation of the uncertainty in the pressure measurement due to mechanical deformation was assessed. then, deformation simulations were carried out with a cavity designed by the cnam (conservatoire national des arts et métiers). this step aims to corroborate the methodology used in the simulations. the assessment of modal components is a fundamental step in structural dynamics. while experimental investigations are generally performed through full-contact techniques, using accelerometers or modal hammers, the research proposed in the paper entitled "frequency response function identification using fused filament fabrication-3d-printed embedded aruco markers" [13], by lorenzo capponi et al., presents a non-contact frequency response function identification measurement technique based on aruco square fiducial marker displacement detection. a video of the phenomenon to be analysed is acquired, and the displacement is measured through the markers, using a dedicated tracking algorithm. the proposed method is presented using a harmonically excited fff 3d-printed flexible structure, equipped with multiple embedded-printed markers, whose displacement is measured by an industrial camera. a comparison with a numerical simulation and an established experimental approach is finally provided for the validation of the results. human movement modelling, also referred to as motion capture, is a rapidly expanding field of interest for medical rehabilitation, sports training, and entertainment. motion capture devices are used to provide a virtual 3-dimensional reconstruction of human physical activities employing either optical or inertial sensors. using inertial measurement units and digital signal processing techniques offers a better alternative in terms of portability and immunity to visual perturbations when compared to conventional optical solutions. in the paper "low-cost real-time motion capturing system using inertial measurement units" [14], simona salicone et al. propose a cable-free, low-cost motion-capture solution based on inertial measurement units with a novel approach for calibration. the goal of the proposed solution is to apply motion capture to the fields that, because of cost problems, did not take enough benefit of such technology (e.g., fitness training centers). according to this goal, the necessary requirement for the proposed system is to be low-cost; therefore, all the considerations and all the solutions provided in this work have been made according to this main requirement. maximum-power extrapolation (mpe) techniques adopted for 4g and 5g signals are applied to systems using dynamic spectrum sharing (dss) signals generated by a base station and transferred to the measurement instruments through an air interface adapter to obtain a controlled environment. this made it possible to focus the analysis on the effect of the frame structure on the mpe procedure, excluding the random effects associated with fading phenomena affecting signals received in real environments.
the analysis presented by sara adda et al. in the paper "experimental investigation in controlled conditions of the impact of dynamic spectrum sharing on maximum-power extrapolation techniques for the assessment of human exposure to electromagnetic fields generated by 5g gnodeb" [15] confirms that both the 4g mpe and the proposed 5g mpe procedure can be used for dss signals, provided that the correct number of subcarriers in the dss frame is considered. michela albano et al., in the paper entitled "x-rays investigations for the characterization of two 17th century brass instruments from nuremberg" [16], propose a multidisciplinary approach mainly based on non-invasive analytical techniques, including x-ray investigations (x-ray radiography, x-ray fluorescence and x-ray diffraction), for the study of two brass natural horns from the end of the 17th century recently found in the castello sforzesco in milan (italy). these findings brought new information about this class of objects; indeed, even though the instruments were heavily damaged, their historical value is great. the study proposed in the paper was aimed at: i) pointing out the executive techniques for archaeometric purposes; ii) characterizing the morphological and chemical features of the materials; iii) identifying and mapping the damage to the structure and the alterations of the surface. in the paper "non-destructive investigation of the kyathos (6th-4th centuries bce) from the necropolis volna 1 on the taman peninsula by neutron resonance capture and x-ray fluorescence analysis" [17], nina simbirtseva et al. propose the method of neutron resonance capture analysis (nrca) to determine the elemental and isotope compositions of objects non-destructively, which makes it a suitable measurement tool for analysing artefacts without sampling. the method is currently being developed at the frank laboratory of neutron physics. nrca is based on the registration of neutron resonances in radiative capture and on the measurement of the yield of reaction products in these resonances. the potential of nrca at the intense resonance neutron source facility is demonstrated on the investigation of a kyathos from the necropolis volna 1 (6th-4th centuries bce) on the taman peninsula. in addition, x-ray fluorescence (xrf) analysis was applied to the same object; the elemental composition determined by nrca is in agreement with the xrf data. a power system in which the generation units, such as renewable energy sources and other types of generation equipment, are located near loads, thereby reducing operation costs and losses and improving voltage, is referred to as 'distributed generation' (dg), and these generation units are named 'distributed energy resources'. however, dgs must be located appropriately to improve the power quality and minimize the power loss of the system. the objective of the paper entitled "performance enhancement of a low-voltage microgrid by measuring the optimal size and location of distributed generation" [18] by ahmed jassim ahmed et al. is to propose an approach for measuring the optimal size and location of dgs in a low-voltage microgrid using the autoadd algorithm. the algorithm is validated by testing it on the ieee 33-bus standard system and, compared with previous studies, proved its efficiency and superiority over the other techniques. a significant improvement in voltage and a reduction in losses were observed when the dgs were placed at the sites selected by the algorithm.
therefore, autoadd was used to find the optimal size and location of dgs in the distribution system; then, the possibility of isolating the low-voltage microgrid by integrating distributed generation units is discussed, and the results showed that this scenario is feasible during fault times and periods of intermittent energy supply. also in this issue, high-quality and heterogeneous papers are presented, confirming acta imeko as the natural platform for disseminating measurement information and stimulating collaboration among researchers from many different fields. in particular, the technical note shows how acta imeko is the right place where different opinions and points of view can meet and compare, stimulating a fruitful and constructive debate in the scientific community of measurement science. i hope you will enjoy your reading.
francesco lamonaca, editor in chief

references

[1] a. alsereidi, y. iwasaki, j. oh, v. vimolmongkolporn, f. kato, h. iwata, "experiment assisting system with local augmented body (easy-lab) in dual presence environment", acta imeko, vol. 11, no. 3, pp. 1-6.
[2] s. olasz-szabó, i. harmati, "path planning for data collection robot in sensing field with obstacles", acta imeko, vol. 11, no. 3, pp. 1-6.
[3] p. singh matharu, a. ashok ghadge, y. almubarak, y. tadesse, "jelly-z: twisted and coiled polymer muscle actuated jellyfish robot for environmental monitoring", acta imeko, vol. 11, no. 3, pp. 1-7.
[4] j. wolf rogers, k. alexander, "standards and affordances of 21st-century digital learning: using the experience application programming interface and the augmented reality learning experience model to track engagement in extended reality", acta imeko, vol. 11, no. 3, pp. 1-6.
[5] e. e. cepolina, a. parmiggiani, c. canali, f. cannella, "disarmadillo: an open source, sustainable, robotic platform for humanitarian demining", acta imeko, vol. 11, no. 3, pp. 1-8.
[6] i. zobayed, d. miles, y. tadesse, "a 3d-printed soft orthotic hand actuated with twisted and coiled polymer muscles triggered by electromyography signals", acta imeko, vol. 11, no. 3, pp. 1-8.
[7] r. singh, s. mohapatra, p. singh matharu, y. tadesse, "twisted and coiled polymer muscle actuated soft 3d printed robotic hand with peltier cooler for drug delivery in medical management", acta imeko, vol. 11, no. 3, pp. 1-6.
[8] z. kovarikova, f. duchon, a. babinec, d. labat, "digital tools as part of a robotic system for adaptive manipulation and welding of objects", acta imeko, vol. 11, no. 3, pp. 1-8.
[9] a. v. geetha, t. mala, "arel – augmented reality-based enriched learning experience", acta imeko, vol. 11, no. 3, pp. 1-5.
[10] a. quattrocchi, d. alizzio, l. capponi, t. tocci, r. marsili, g. rossi, s. pasinetti, p. chiariotti, a. annessi, p. castellini, m. martarelli, f. freni, a. di giacomo, r. montanini, "measurement of the structural behaviour of a 3d airless wheel prototype by means of optical non-contact techniques", acta imeko, vol. 11, no. 3, pp. 1-8.
[11] v. marcotuli, m. zago, a. p. moorhead, m. vespasiani, g. vespasiani, m. tarabini, "metrological characterization of instruments for body impedance analysis", acta imeko, vol. 11, no. 3, pp. 1-7.
[12] s. moltò, m. a. sáenz-nuño, e. bernabeu, m. n. medina, "uncertainty in mechanical deformation of a fabry-perot cavity due to pressure: towards best mechanical configuration", acta imeko, vol. 11, no. 3, pp. 1-8.
[13] l. capponi, t. tocci, g. tribbiani, m. palmieri, g. rossi, "frequency response function identification using fused filament fabrication-3d-printed embedded aruco markers", acta imeko, vol. 11, no. 3, pp. 1-6.
[14] s. salicone, s. corbellini, h. v. jetti, s. ronaghi, "low-cost real-time motion capturing system using inertial measurement units", acta imeko, vol. 11, no. 3, pp. 1-9.
[15] s. adda, t. aureli, t. cassano, d. franci, m. d. migliore, n. pasquino, s. pavoncello, f. schettino, m. schirone, "experimental investigation in controlled conditions of the impact of dynamic spectrum sharing on maximum-power extrapolation techniques for the assessment of human exposure to electromagnetic fields generated by 5g gnodeb", acta imeko, vol. 11, no. 3, pp. 1-7.
[16] m. albano, g. fiocco, d. comelli, m. licchelli, c. canevari, f. tasso, v. ricetti, p. cofrancesco, m. malagodi, "x-rays investigations for the characterization of two 17th century brass instruments from nuremberg", acta imeko, vol. 11, no. 3, pp. 1-7.
[17] n. simbirtseva, p. v. sedyshev, s. mazhen, a. yergashov, a. yu. dmitriev, i. a. saprykina, r. a. mimokhod, "non-destructive investigation of the kyathos (6th-4th centuries bce) from the necropolis volna 1 on the taman peninsula by neutron resonance capture and x-ray fluorescence analysis", acta imeko, vol. 11, no. 3, pp. 1-6.
[18] a. j. ahmed, m. h. alkhafaji, a. j. mahdi, "performance enhancement of a low-voltage microgrid by measuring the optimal size and location of distributed generation", acta imeko, vol. 11, no. 3, pp. 1-8.

integrating maintenance strategies in autonomous production control using a cost-based model

acta imeko
issn: 2221-870x
september 2021, volume 10, number 3, 156-166

robert glawar1, fazel ansari1,2, zsolt jános viharos3,4, kurt matyas2, wilfried sihn1,2

1 fraunhofer austria research gmbh, theresianumgasse 7, a-1040 vienna, austria
2 tu wien, institute of management science, theresianumgasse 27, a-1040 vienna, austria
3 institute for computer science & control (sztaki), kende str. 13-17, h-1111 budapest, hungary
4 john von neumann university, izsáki u. 10, h-6000 kecskemét, hungary

section: research paper
keywords: maintenance; autonomous production control; production planning; cyber physical systems; industry 4.0
citation: robert glawar, fazel ansari, zsolt jános viharos, kurt matyas, wilfried sihn, integrating maintenance strategies in autonomous production control using a cost-based model, acta imeko, vol. 10, no. 3, article 22, september 2021, identifier: imeko-acta-10 (2021)-03-22
section editor: lorenzo ciani, university of florence, italy
received february 9, 2021; in final form may 2, 2021; published september 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work has been supported by the european commission through the h2020 project epic (grant no. 739592)
corresponding author: robert glawar, e-mail: robert.glawar@fraunhofer.at

abstract: autonomous production control (apc) is able to deal with challenges such as high delivery accuracy, shorter planning horizons, increasing product and process complexity, and frequent changes. however, several state-of-the-art approaches do not consider maintenance factors contributing to operational and tactical decisions in production planning and control. the incompleteness of the decision models and related decision support tools causes inefficiency in production planning and thus leads to low acceptance in manufacturing enterprises. to overcome this challenge, this paper presents a conceptual cost-based model for integrating different maintenance strategies in autonomous production control. the model provides relevant decision aspects and a cost function for different maintenance strategies using a market-based approach. the present work thus makes a positive contribution to coping with the high demands on flexibility and response times in planning while at the same time ensuring high plant productivity.

1. introduction
in today's competitive market, manufacturing enterprises are faced with the challenge of achieving high productivity, short delivery times and a high level of delivery capability despite ever-shorter planning horizons, a large number of external planning changes and increasing planning complexity [1], [2]. this high degree of planning complexity is no longer effectively and affordably manageable for humans [3]. on the one hand, there are high demands on flexibility and reaction times in planning and, on the other hand, high requirements regarding the availability of production facilities, equipment and machines [4]. however, current systems for production planning and control (ppc) incorporate neither technical innovations nor social requirements and are therefore not able to meet the current challenges [5]. likewise, current maintenance processes and strategies are not sufficiently prepared for these challenges [6]. considering the advancement towards industry 4.0, new opportunities arise from innovative technologies and approaches such as industrial internet of things (iiot) applications [7], horizontal and vertical communication within a production system by means of open platform communications unified architecture (opc ua) [8], or the use of artificial intelligence (ai) methods for data analysis, forecasting, optimization and planning [9]. the degree of autonomy of such a cyber physical production system (cpps) describes its ability to plan, control and initiate actions autonomously [10]. approaches to autonomous production control (apc) represent suitable ways to increase the degree of autonomy of a cpps [11], [12]; therefore, apc represents a suitable possibility to deal with the aforementioned requirements [13]. however, these approaches are currently limited to lab research and are not ready for industrial applications [14]. most of the current approaches are based on idealised assumptions such as maximum availability (i.e. 95-98 %) or do not take many decisive factors, such as maintenance strategies, into account. for example, the question of how the current state of a production plant or machine can affect production control is not taken into consideration [15]. exactly these factors, as exemplified, are decisive for the acceptance and implementation maturity of autonomous approaches in industrial companies.
hence, the aim of the present work is to take a further step towards implementation maturity by integrating different maintenance strategies in apc.

2. maintenance in apc

autonomous production control (apc) has the potential to deliver optimal and resource-efficient processes as well as higher quality and more product variants than conventional, centralized decision-making systems [16]. adaptive, decentralised production control can reduce planning efforts [17], enable shorter reaction times in planning [18] and create greater planning flexibility [19]. since in most cases not all decisions in a production system are made autonomously, a cpps typically includes a combination of hierarchical and heterarchical control mechanisms [20]. since approaches to autonomous production control are able to deal quickly and flexibly with unplanned changes within the production system, they are used in the context of cpps to implement decision-making processes that require a high degree of responsiveness [21]. to ensure a high level of acceptance among operational planning staff, it is particularly important that the underlying models comprehensively take the relevant factors of the production system into account and thus make robust decisions [22]. current studies show that a large number of research activities are concerned with the development of approaches to autonomous production control [23]. current approaches focus on the description of the interactions between different parts of a production system from different perspectives. a typical task is to assign a waiting workpiece, which is to be processed in the course of a production job, to a machine or a workstation, taking into account available resources, logistical parameters and the smoothing of the job load. existing apc approaches usually perform either event-driven sequencing [18] or agent-based sequencing [24]. first examples showing that agent-based simulation is a suitable way to realize apc by using multi-agent systems are given by pantförder et al. (2017) [25]. the integration of such an approach into a production system on the basis of the opc ua standard is shown by hoffmann et al. (2016) [26]. a key success factor for autonomous interaction in this context is the design of a robust system [27]. to reach such a design, different algorithms may be used to schedule the orders. in particular, genetic and evolutionary algorithms [28], swarm-based algorithms [29], [30], and market models [14] have been successfully used for apc. in addition, many current approaches to ppc rely on the application of artificial intelligence methods. machine learning is often applied, for example, to predict lead times and to optimize resource utilization [31]. in addition, reinforcement learning is used to enable, for example, autonomous order scheduling [32]. however, as shown in table 1, few approaches deal with the integration of maintenance strategies in apc systems. for instance, erol and sihn (2017) presented a cloud-based architecture for intelligent production planning and control considering maintenance [33]. vallhagen et al. (2017) also presented a system and information infrastructure to enable optimized adaptive production control [34]. nevertheless, neither of these approaches explains which aspects of maintenance should be considered and how they should be integrated. in the approach presented by wang et al.
(2018), the condition of production plants is automatically evaluated and the production sequence is intelligently adapted accordingly. system performance is improved by automatically evaluating the state of production systems and dynamically configuring processing paths for intelligent products and parts. while an implementation as decentralized production control is proposed, the work deals in detail with a three-machine problem and neglects the dependencies on higher-level production planning [35].

table 1. overview of maintenance in autonomous production control.

in summary, it can be concluded that none of the identified approaches includes a systematic integration of different maintenance strategies into autonomous production control. however, if this aspect is not taken into account, these approaches remain largely unsuitable for industrial application, as no valid decisions can be made in the case of unplanned outages or planned maintenance, and thus ultimately the acceptance of such approaches by operational planning staff is not given. against this background, a comprehensive methodology for integrating maintenance strategies in autonomous production control has been presented by glawar et al. (2019) [20] and glawar et al. (2020) [36]. the core of this conceptual model, which is presented in detail in section 5, is a cost-based model for integrated planning. this model is laid out in section 4, based on the relevant aspects for the integration of maintenance in apc introduced in section 3.

3. relevant aspects for the integration of maintenance in apc

for the integration of maintenance into apc, an important step is to clarify which maintenance aspects are relevant for the integration decision. for this purpose, an expert survey has been conducted including professionals from several industrial sectors, namely semiconductor production, the metal processing industry, condition monitoring and the automotive industry, as well as national and international academic experts. the aim of this survey was to discuss the following question with the experts: "how do you evaluate the individual aspects of maintenance with regard to their relevance for integration into production planning and control (ppc)?" the first step was to discuss which aspects of plant maintenance (aka industrial maintenance) are generally important for ppc and for which area of ppc a specific aspect is relevant. using the pair-wise comparison method, it was finally determined how relevant the individual aspects are for integration into the ppc. the results of this expert survey are presented in table 2. the essential aspects of maintenance are listed in the first column and evaluated with regard to their relevance for decision-making; in the second column, the relevance for integration into the different dimensions of ppc is shown. a significant finding is that the relevance of considering the individual aspects in the ppc strongly depends on the general operational conditions, especially the degree of automation, the production type and the flexibility in case of a plant failure. a closer look at the results shows that some aspects are particularly relevant for integration into apc, while other aspects may have a positive influence on the quality of decisions but are not absolutely necessary for integration purposes.
in addition, there are other aspects of plant maintenance which are particularly important for integration into medium- and long-term production planning as well as production controlling. these decision factors have not been further addressed in the course of the present work.

table 2. aspects and their importance for integration into the ppc system, evaluated by various domain experts [36].

3.1. downtime & costs

the downtime in the case of an occurring machine failure can either be estimated on the basis of empirical knowledge or calculated on the basis of historical shutdowns. it is important to note that the downtime usually differs significantly depending on whether it is a planned or an unplanned shutdown. since downtime costs correlate with the order situation, the lost contribution margin in the event of a downtime is usually used to calculate the downtime costs. penalties for delayed order completion are also taken into account, if applicable. together with the probability of failure, the downtime costs represent an important basis for decision-making in production control.

3.2. maintenance time & costs

the time for a repair can either be estimated on the basis of manufacturer information and empirical knowledge or calculated on the basis of historical data (e.g., mean time to repair, mttr). usually, when calculating repair costs, a distinction is made between internal and external repair costs, the latter usually caused by external services. the underlying share of external services largely determines the repair time and costs. in the case of internal repair costs, the repair time is usually considered together with the hourly rates of the personnel required for the repair, depending on their qualifications, and supplemented by the material costs for the necessary spare parts. this applies analogously to the occurring maintenance times & costs. the repair and maintenance costs calculated in this way are important factors in production control for deciding whether maintenance should be carried out or even brought forward, or whether the risk of a breakdown with subsequent repair should be taken.

3.3. spare parts availability

the information on whether the spare parts required for repair and maintenance are available is decisive for the decision within the framework of production control as to whether maintenance is triggered or whether an order is produced on a system with a certain risk of failure. depending on the type and complexity of a machine as well as the organizational form of maintenance, spare parts availability represents a more or less important decision aspect. in the case that mechanical spare parts can be produced independently with relatively little effort or are outsourced to a service provider via a service contract, it may not be necessary to integrate this decision aspect into production control.

3.4. availability of maintenance capacity

the information as to whether maintenance capacity is available for repair or maintenance is essential in the context of production control in order to decide whether maintenance should be triggered. capacities can comprise both internal personnel resources and external third-party services, which are usually not available in unlimited quantities. depending on the type of maintenance organization, this decision aspect is also more or less important.
while a capacity check can be very important in a decentralized maintenance organization that has to manage with a narrowly limited capacity, it is less important in an organization that provides sufficient resources centrally.

3.5. availability of qualifications

the qualifications required to perform a particular repair or maintenance task can also play a relevant role in the decision within the framework of production control. however, the significance of this decision aspect depends to a large extent on the complexity of the equipment as well as the available qualifications of the internal personnel resources. while only a small number of qualified personnel resources are generally available for highly complex plant components such as bionic components, the significance decreases for simple mechanical plant components, for which a large proportion of the available personnel resources are qualified.

3.6. planned maintenance orders and planned maintenance intervals

orders that are scheduled for the execution of maintenance are very relevant for the ppc as they tie up capacities. while internal maintenance tasks only reduce the capacity within a period but allow a certain flexibility with regard to sequence planning, externally performed tasks often represent a hard restriction for production control. the basis for these planned maintenance orders is often the defined intervals for (periodic) preventive maintenance. maintenance interval management thus represents a key success factor for medium-term production planning. it is crucial that this maintenance planning is coordinated with the expected fluctuation in production volumes in order to prevent equipment from being unavailable in a phase of particularly high order levels when it could instead be maintained in a phase of low order levels.

3.7. probability of failure

the probability of failure significantly determines the risk of a plant or machine failure during production and thus influences production control decisions. depending on the maintenance strategy applied, different approaches exist to calculate the probability of failure. in the simplest case, information from the manufacturer or internal empirical values (e.g. mean time between failures, mtbf) are used to calculate the probability of failure. often, historical data combined with statistical methods, such as the weibull distribution, can also be used. ideally, the probability of failure is determined based on the actual condition of the plant and a corresponding forecast of the next failure.

3.8. condition of the plant or machine components

if the condition of a plant or machine can be reliably measured, estimated or calculated, it represents an essential decision-making factor for the ppc. in the context of medium-term production planning and spare parts planning, it is possible to react as soon as a component exhibits a critical condition, for example by ensuring that the corresponding spare parts are available or by initiating planned maintenance. in the context of production control, the risk of failure can be taken into account based on the change in the condition of the machine, for example in the context of sequence planning or machine assignment.

3.9. technical plant availability

technical plant availability, which describes what proportion of the available operating time a plant is technically available, is a key aspect of medium-term production planning.
depending on their availability, the plants are scheduled to a greater or lesser extent. technical plant availability also represents a hard restriction for the maximum possible production quantity.

4. development of a cost function for integrated planning

different algorithms can be used to determine the sequence of the orders within autonomous production control. many of these algorithms use a cost function to prioritize the orders or determine the production sequence. for example, when applying the market principle for apc, orders are allocated to individual production units based on a cost function. in this paper, a cost function for autonomous production control is developed using the market principle as an example. a possible formulation of the costs of a production order, as shown in (1), is presented by rötzer and schwaiger [37]:

$K_{pa} = \sum K_s + \sum K_{pf} + \sum K_{pv} \times X_p + \sum K_{tv} \times X_t$ (1)

in this description, the costs of a production order ($K_{pa}$) consist of individual location costs ($K_s$), fixed process costs ($K_{pf}$), variable process costs ($K_{pv}$) and variable transport costs ($K_{tv}$), as well as the production quantity ($X_p$) and the number of transports ($X_t$). transport costs ($K_{tv}$) represent the expenses for the transports ($X_t$) necessary to move the workload to be produced within the production system. they include, for example, costs for material supply and provision, transports between different workplaces and production facilities, as well as expenses for intermediate, inward and outward storage of the produced worklist. fixed production costs ($K_{pf}$) represent the portion of the production costs that is independent of the amount of work in process ($X_p$), i.e. fixed production costs are constant for each production order. this includes, for example, the expenses for setup between the production orders but also administrative costs for order processing. in comparison, variable production costs ($K_{pv}$) depend on the amount of work produced. typical variable production costs are, for example, costs for material, auxiliary and operating supplies, expenses for the actual production depending on the processing time, as well as expenses for the necessary energy input during production. the variable process costs of a production order thus consist of costs due to production backlog, production time, setup times, and the inventory necessary for production, as well as maintenance costs [38]. in this context, the availability of the machines and the delay of the order completion are particularly relevant for the evaluation of a production order [39]. for this reason, it is necessary to explicitly add the maintenance-relevant factors to a cost function describing the costs of a production order. since different maintenance strategies place different demands on production control, but also provide different information, it is advisable to use different cost functions for the different maintenance strategies. the cost functions are successively designed to build on each other, so that it is possible to use them in a production system that applies different maintenance strategies to its different assets and their components.
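to make the structure of (1) concrete, the following python sketch evaluates the cost of a production order; the function and argument names simply mirror the symbols of the equation, and the numbers in the usage example are illustrative.

```python
def production_order_cost(K_s, K_pf, K_pv, K_tv, X_p, X_t):
    """K_pa = sum(K_s) + sum(K_pf) + sum(K_pv)*X_p + sum(K_tv)*X_t, eq. (1)."""
    return sum(K_s) + sum(K_pf) + sum(K_pv) * X_p + sum(K_tv) * X_t

# e.g. one location, one process step, 100 pieces, 4 transports:
K_pa = production_order_cost([120.0], [80.0], [2.5], [15.0], X_p=100, X_t=4)
print(K_pa)   # 510.0
```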
4.1. reactive maintenance strategy

the reactive maintenance strategy is characterized by the fact that the system components are operated until failure, so that the failure probability and the associated costs are not relevant for decision-making. this is also reflected in the cost function of a production order under consideration of reactive maintenance, $K_{rm}$, cf. (2). the cost function takes into account not only the sum of the fixed production costs $K_{f}$, the variable production costs $K_{v}$, the transport costs $K_{t}$, the current order load $X_{p}$ and the number of transports $X_{t}$, but also the maintenance cost ratio $K_{m}$. the maintenance cost ratio describes the maintenance costs per production quantity produced. the maintenance costs under consideration include the costs for maintenance and repair of the various elements of the production system, the costs for spare parts stocking, and external service costs.

$K_{rm} = \sum K_{f} + \sum K_{v} \cdot X_{p} + \sum K_{t} \cdot X_{t} + \sum K_{m} \cdot X_{p}$ (2)

4.2. periodic preventive maintenance strategy

in periodic preventive maintenance, measures are planned preventively either time-dependently, for example weekly, quarterly or annually, or load-dependently, for example after a certain number of operating hours or switching operations. hence, the cost function of a production order taking into account preventive maintenance, $K_{pm}$, cf. (3), additionally takes into account the risk of an unplanned production downtime $R_{dtp}$, the costs in the event of a downtime event $K_{dt}$, as well as any costs for contractual penalties due to schedule variances $K_{p}$. the costs in the case of a downtime event are, for example, the lost contribution margin of the planned worklist in the case of an unplanned downtime, as well as costs for repairs. these downtime costs depend on the current order load $X_{p}$, as shown in (4). the downtime risk in the case of periodic preventive maintenance, $R_{dtp}$, can be calculated based on historical failures. for this purpose, the probability density function $f_{p}(T_{SLF})$ is integrated, where $T_{SLF}$ denotes the time since the last failure, cf. (5). typically, a normal distribution is assumed, cf. (6); its standard deviation is calculated from the historical failures, while the mtbf is taken as the expected value.

figure 1. costs of a production order under consideration of multiple maintenance strategies [36].

$K_{pm} = \sum K_{f} + \sum K_{v} \cdot X_{p} + \sum K_{t} \cdot X_{t} + \sum K_{m} \cdot X_{p} + \sum K_{dt} \cdot R_{dtp} + \sum K_{p} \cdot R_{dtp}$ (3)

$K_{dt} = f(X_{p})$ (4)

$R_{dtp} = \int_{0}^{T_{SLF}} f_{p}(T_{SLF}) \, \mathrm{d}T_{SLF}$ (5)

$f_{p}(T_{SLF}) = \frac{1}{\sigma \sqrt{2\pi}} \, \mathrm{e}^{-\frac{1}{2} \left( \frac{T_{SLF} - MTBF}{\sigma} \right)^{2}}$ (6)
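a minimal numerical sketch of (5) and (6): since the integrand is a normal density with the mtbf as mean, the downtime risk is simply the normal cdf evaluated at the time since the last failure. the parameter values are illustrative.

```python
from math import erf, sqrt

def downtime_risk_preventive(t_slf, mtbf, sigma):
    """R_dtp per (5)-(6): integral of the normal pdf N(mtbf, sigma^2)
    from 0 to the time since the last failure, i.e. the normal cdf
    (the truncation at 0 is negligible for sigma << mtbf)."""
    return 0.5 * (1.0 + erf((t_slf - mtbf) / (sigma * sqrt(2.0))))

# illustrative: 900 h since the last failure, mtbf 1000 h, sigma 150 h
print(downtime_risk_preventive(900.0, 1000.0, 150.0))  # ~0.25
```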
4.3. condition-based maintenance strategy

the cost function of a production order under consideration of condition-based maintenance, $K_{cm}$, cf. (7), corresponds largely to the cost function of preventive maintenance and also includes the risk of an unplanned production downtime, $R_{dtc}$, when condition-based maintenance strategies are applied. in this case, in which a maintenance task is planned depending on the actual condition of a component, $R_{dtc}$ is calculated by a condition-based function $f_{c}$ evaluated at the respective time of the condition determination $t_{c}$ and at the condition $C$ determined at this time, cf. (8). the determination of this function is usually based on empirical studies or on already known equations or manufacturer data. in many cases, especially if a complex empirical determination is not economical, it is sufficient to assign a fixed downtime risk $R_{dtc}$ to defined conditions $C$ based on empirical knowledge.

$K_{cm} = \sum K_{f} + \sum K_{v} \cdot X_{p} + \sum K_{t} \cdot X_{t} + \sum K_{m} \cdot X_{p} + \sum K_{dt} \cdot R_{dtc} + \sum K_{p} \cdot R_{dtc}$ (7)

$R_{dtc} = f_{c}(t_{c}; C)$ (8)

4.4. predictive maintenance strategy

in predictive maintenance (pdm), maintenance tasks are planned depending on a prognosis of the remaining useful life (rul). the cost function of a production order, $K_{pdm}$, cf. (9), therefore takes into account the risk of an unplanned production downtime $R_{dtpdm}$. the failure risk is calculated analogously to the condition-based case by a function $f_{p}$ that is determined by the rul, i.e. by the remaining degree of wear of the machine component, cf. (10). this function must also be known or empirically determined. in (11), a determination of the rul using a weibull function is shown; here, $T$ represents the characteristic life, $\beta$ the shape parameter and $w_{i}$ an influence factor that accounts for changing operating conditions. in summary, figure 1 shows the composition of the developed cost function depending on the applied maintenance strategy and visualizes the relationships between the individual cost factors.

$K_{pdm} = \sum K_{f} + \sum K_{v} \cdot X_{p} + \sum K_{t} \cdot X_{t} + \sum K_{m} \cdot X_{p} + \sum K_{dt} \cdot R_{dtpdm} + \sum K_{p} \cdot R_{dtpdm}$ (9)

$R_{dtpdm} = f_{p}(RUL)$ (10)

$RUL = \mathrm{e}^{-\left( \frac{t}{T} \, w_{i} \right)^{\beta}}$ (11)
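a compact python sketch of (10) and (11); since the paper leaves the mapping $f_{p}$ from rul to downtime risk to be determined empirically, a simple complement of the normalized rul is assumed here purely for illustration.

```python
from math import exp

def remaining_useful_life(t, char_life, shape, w_i=1.0):
    """RUL per (11): weibull-type expression with characteristic life T,
    shape parameter beta and operating-condition influence factor w_i."""
    return exp(-((t / char_life) * w_i) ** shape)

def downtime_risk_predictive(rul):
    """R_dtpdm per (10); f_p assumed to be 1 - RUL for illustration."""
    return 1.0 - rul

rul = remaining_useful_life(t=600.0, char_life=1000.0, shape=2.0)
print(rul, downtime_risk_predictive(rul))  # ~0.70, ~0.30
```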
5. conceptual model for integrating maintenance strategies in apc

the model for the integration of different maintenance strategies in apc is designed using three subsystems: i) a maintenance system, ii) a system for autonomous production control and iii) a system for production planning. in figure 2, these subsystems and their interrelations are shown in detail.

figure 2. process model for the integrated planning of maintenance and apc [36].

the system for autonomous production control maps the machine-to-machine (m2m) communication level of the apc model. it regulates the real-time communication of the different elements of a production system, with the aim of autonomously determining a production sequence based on the requirements of the production control system (the production orders) and the current framework conditions of the production system. to achieve this goal, real-time communication between different machine agents (ma), work piece agents (wpa) and resource agents (ra) is necessary (information flow a). an ma represents the different machines and plants of a production system. wpas represent the open worklist within a production system; depending on the production environment, an open worklist can be a concrete workpiece, a production lot or any clearly identifiable portion of the production quantity. an ra represents further elements of a production system which are of interest for the task of production control; depending on the production environment, these can be, for example, tools, workstations, measuring equipment, transport equipment and all other resources which have a significant influence on the determination of the production sequence. the m2m communication between ma, wpa and ra takes place via a message transport system (mts), which communicates between the elements of the production system and an order agent (oa) via an agent management system (ams) and a directory facilitator (df). the ams manages the specific addresses of the individual agents (information flow b). in comparison, the df manages the specific attributes and properties of each individual agent (information flow c). examples of these attributes are the probability of failure, downtime costs, and repair and maintenance costs, which are communicated directly from the maintenance system to the df (information flow d). further attributes describe, for example, which production steps can be carried out at the respective ma or which processing times result from them. the mts distributes messages between the different agents and between the agents and the oa (information flow e). the mts transports information about the attributes and properties of the respective agents and production orders, which it receives from the df and the oa, from a specific address that it receives from the ams to another specific address that is also provided by the ams. the oa also receives information about spare parts availability, maintenance capacity availability and available qualifications from the maintenance system (information flow f). with this information, and taking into account the current production sequence, planned maintenance orders can be defined and confirmed to the maintenance system (information flow g). the operational control of these maintenance orders, as well as the control of the production orders, takes place via communication between the various agents and the df and ams using the mts.

the central task of the system for apc is to determine the production sequence based on real-time m2m communication. different scheduling models can be used to fulfil this task and to determine a sequential order for each of the different production orders provided by the oa. the production orders to be scheduled are typically created and managed by an enterprise resource planning (erp) or manufacturing execution system (mes). in the present work, a "marketplace-based" model is used to illustrate the integration of the system for apc. in this case, the oa receives a demand in the form of a production order from an erp or mes system. this demand is matched by a supply of capacities of the mas and ras representing the production capacities of the production system, such as machine resources, work centre resources, tool resources or transport resources. the information necessary to describe the supply is provided to the oa by means of the mts via the attributes of the production resources relevant for the production order in question, which are managed in the df. using the information on supply and demand, the oa is able to determine the priority of each production order, cf. (12):

$P = \frac{(t_{i} - t_{e})}{(t_{c} - t_{i})} \cdot K_{min}$ (12)

the priority $P$ is calculated by taking into account the desired completion date $t_{c}$, the possible order start time $t_{i}$, the order receipt time $t_{e}$ and a priority factor $K_{min}$. the oa can calculate the priority factor necessary to determine the priority using the information received via the mts, based on the information managed in the df. to determine the priority factor, the cost function presented in this paper is used.
$p_{min} = \min \left[ p(x) \right]; \quad 0 < x < n_{MA}$ (13)

due to the structure of the underlying cost function, the priority of a production order increases with the costs of the order and therefore with the priority factor. similarly, the greater the difference between the current time and the order receipt, the higher the priority. the longer the desired production duration of the worklist to be produced, the lower the calculated priority of the underlying production order. since it is usually assumed that both the variable production costs and the risk of an unplanned downtime differ between the different machines and plants of a production system, it is necessary to calculate the priority factor for each of the $n_{MA}$ possible mas and then to determine the minimum of the possible priority factors, $p_{min}$, cf. (13). based on this minimum, the final step is to determine the sequence rank $N$ of the production order on the assigned ma, cf. (14). for this purpose, the priority rank $p(i)$ of the individual available production orders $PA_{n}$ is determined in order of the minimum priority:

$N = \mathrm{rank}\left( p(i) \right), \quad i \in PA_{n}$ . (14)

based on the sequence rank $N$, the lead time $t_{pt}$, which the oa can determine using the information it receives from the df via the mts, and the current time $t_{act}$, the oa can determine the estimated time of completion $t_{N}$, cf. (15), and communicate it, together with the defined production sequence, to the production planning system (information flow j). for example, the oa provides this information to a mes via the mts or an alternative interface.

$t_{N} = t_{act} + \sum_{0}^{N} t_{pt}$ (15)
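pulling (12)-(15) together, a small python sketch of the order-agent logic follows. it assumes that the per-machine priority factor is the corresponding cost-function value, that the highest priority is produced first, and that all numbers (times in hours, flat unit costs) are invented for illustration.

```python
def priority(t_i, t_e, t_c, k_min):
    """P per (12): grows with the waiting time (t_i - t_e) and with the
    cost-based priority factor K_min, shrinks with the remaining
    time window (t_c - t_i)."""
    return (t_i - t_e) / (t_c - t_i) * k_min

def schedule(orders, cost_per_machine, t_act):
    """Order-agent sketch: per order, take the minimal cost-based
    priority factor over all candidate machines, cf. (13); rank the
    orders by priority, cf. (14); accumulate lead times into
    estimated completion times, cf. (15)."""
    ranked = sorted(
        orders,
        key=lambda o: priority(o["t_i"], o["t_e"], o["t_c"],
                               min(cost_per_machine(o, m) for m in o["machines"])),
        reverse=True,  # assumption: highest priority is produced first
    )
    t, plan = t_act, []
    for o in ranked:
        t += o["t_pt"]             # lead time t_pt of this order
        plan.append((o["id"], t))  # estimated completion time t_N
    return plan

# illustrative demand: two orders, flat cost of 1.0 on every machine
orders = [
    {"id": "pa1", "t_e": 0.0, "t_i": 8.0, "t_c": 24.0, "t_pt": 4.0,
     "machines": ["ma1", "ma2"]},
    {"id": "pa2", "t_e": 2.0, "t_i": 8.0, "t_c": 16.0, "t_pt": 3.0,
     "machines": ["ma1"]},
]
print(schedule(orders, lambda o, m: 1.0, t_act=8.0))
# [('pa2', 11.0), ('pa1', 15.0)]
```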
6. outlook and further research

6.1. evaluation of economic plausibility

in further research by glawar et al. (2021), the presented model has been implemented and evaluated [40]. since an implementation in a real-time environment is still hard to realize, the model was implemented using an agent-based simulation approach based on a real industrial use case in the automotive industry. on this basis, the conditions for a successful and cost-effective implementation of the model in industry were derived. in this use case, the benefits of integrating maintenance strategies in apc can be described as follows [40]:
a. an increase in on-time delivery of more than 9% by reducing schedule deviations due to backlogs of production jobs; since the condition of a machine is already taken into consideration during production control, even a simultaneous failure of several machines has little impact on the adherence to delivery dates.
b. a reduction in the cost of manual rescheduling of approximately € 29,500 per year, since in the event of an unplanned machine failure the sequence and machine assignment can be adjusted autonomously.
c. an increase of the uptime by approx. 4% to over 96%, by exploiting the potential of modern maintenance strategies, and thus a reduction of maintenance costs of approx. € 52,000 per year.
d. an increase in productivity, defined as parts produced per hour, of over 5.6%.

6.2. integration into maintenance cost controlling

the cost function developed in this work aims at integrating the relevant aspects of maintenance into autonomous production control. in a further step, this cost consideration can also be used as a basis for integration into maintenance cost controlling. such an analysis enables the formalization of the relationship between key figures, such as the proportion of external costs or the maintenance ratio, and the operational logistical targets, such as lead time and adherence to delivery dates, as well as the productivity of the production system. the maintenance ratio describes the maintenance costs incurred in relation to a period under consideration. it is therefore an essential component of the production costs and can already be used in rough-cut planning and in sequencing. this makes it possible to consider the resulting effects at the tactical level and to derive measures for achieving an overall optimum, independent of a fixed defined maintenance budget. existing models for maintenance cost controlling, such as the cost prove model [41], model planned and unplanned maintenance costs and attempt to derive optimization measures based on any deviation from a defined budget, in order to achieve the ideal operating point between planned and unplanned maintenance measures. in comparison, by taking into account the developed cost function depending on the current and expected future production program, as well as the current risk of failure of the equipment required for this production program, the maintenance costs are adjusted dynamically. this creates transparency regarding the performance of maintenance by quantifying, in costs, the benefit of concrete measures on the operating result, and thus justifying, for example, the exceeding of a target budget while ensuring adherence to schedules and productivity. an example of the integration into maintenance cost controlling is shown in figure 3: maintenance cost controlling supplies relevant cost variables to production control, which pursues the goal of minimizing the costs for a production order while taking maintenance into account. if a deviation from the original maintenance budget occurs, the effect on productivity and on-time delivery is used in a mathematical reference model to optimize maintenance cost controlling. this results in new target costs for maintenance, which influence the initiation of planned measures depending on the risk of failure and the respective machine condition.

figure 3. integration into maintenance cost controlling.

in order to create such a mathematical reference model, it is expedient to differentiate between planned and unplanned cost factors, as explained by ansari [38], and to model these in order to achieve an overall optimum.

7. conclusions

in the present paper, a novel model for integrating maintenance strategies in autonomous production control has been presented. relevant decision aspects have been discussed, and a cost function for an integrated planning using a market-based approach has been laid out. the model builds on the key elements of a cpps and their relations in order to establish a complete, fully and efficiently integrated component in ppc. essential findings are the identified and evaluated aspects of maintenance which are decision-relevant for the integration into production control. the most relevant aspects are taken into account in the developed cost function for integrated planning, thus providing a robust basis for the implementation of apc in industrial practice. only through clear guidelines on how autonomous control behaves in the event of a failure, and on how the case of an increased risk of failure is taken into account, can acceptance for the implementation of apc be achieved.
against this background, the developed cost-based model contributes to bringing approaches to apc a further step towards implementation maturity and thus provides an innovative approach to the communication between production and maintenance planning, both from a practical and a scientific point of view. however, further research questions remain open:
1) the implementation of the present process model in a real production environment and a corresponding evaluation of the benefits in industrial practice represent the logical next step. this requires a well-thought-out roadmap, since such an implementation deeply affects different areas of a production system. in addition, there is the challenge of preparing personnel for the new way of working with autonomous production control and providing appropriate qualification measures in good time.
2) the present model is limited to the mapping of a manufacturing area and neglects at this point the dependencies with respect to the higher-level planning and to the preceding or subsequent areas of the production system. in particular, the challenges of integrating autonomous and human agents have to be addressed.
3) similarly, the impact of short-term production control at the operational level on the tactical and strategic levels, such as production controlling, poses exciting challenges for further research activities. in particular, the integration into maintenance cost controlling, as outlined schematically in section 6.2, can offer a significant contribution to quantifying the contribution of maintenance to the achievement of operational targets.
4) approaches that make use of artificial intelligence methods represent interesting alternatives to the relatively simple market-based model used in the present model. in this context, reinforcement learning approaches in particular represent an alternative which, in the view of the authors, should be explored in the future.
5) in order to be able to apply this approach easily and quickly to further use cases in the future, research should be conducted in the direction of automated parameter optimization, for example by means of simulation studies.

acknowledgement

this work has been supported by the european commission through the h2020 project epic (grant no. 739592).

references
[1] t. bauernhansl, r. miehe, industrielle produktion – historie, treiber und ausblick. fabrikbetriebslehre 1, springer vieweg, berlin, heidelberg, 2020, pp. 1-33 [in german].
[2] d. spath, e. westkämper, h.-j. bullinger, h.-j. warnecke, neue entwicklungen in der unternehmensorganisation, springer vieweg, 2017 [in german].
[3] e. rauch, p. dallasega, d. t. matt, complexity reduction in engineer-to-order industry through real-time capable production planning and control, production engineering, 12(3-4), 2018, pp. 341–352. doi: 10.1007/s11740-018-0809-0
[4] s. luke, c. cioffi, l. panait, k. m. sullivan, g. c. balan, mason: a multiagent simulation environment, simulation, 81(7), 2005, pp. 517–527. doi: 10.1177/0037549705058073
[5] v. gallina, l. lingitz, m. karner, a new perspective of the cyber-physical production planning system, 16th imeko tc10 conference, berlin, germany, 3-4 september 2019. online [accessed 8 september 2021] https://www.imeko.org/publications/tc10-2019/imeko-tc10-2019-008.pdf
[6] a. kinz, r. bernerstaetter, h.
biedermann, lean smart maintenance – efficient and effective asset management for smart factories, motsp 2016, porec, istria, croatia, 1-3 june 2016, 8 pp. [7] o. schmiedbauer, h. t. maier, h. biedermann, evolution of a lean smart maintenance maturity model towards the new age of industry 4.0, 2020. doi: 10.15488/9649 [8] f. pauker, t. frühwirth, b. kittl, w. kastner, a systematic approach to opc ua information model design, procedia cirp, 2016, 57:321–326. doi: 10.1016/j.procir.2016.11.056 [9] m. ulrich, d. bachlechner, wirtschaftliche bewertung von ki in der praxis, hmd praxis der wirtschaftsinformatik, 2020, 57(1):46–59 [in german]. doi: 10.1365/s40702-019-00576-9 [10] f. ansari, r. glawar, t. nemeth, prima: a prescriptive maintenance model for cyber-physical production systems, international journal of computer integrated manufacturing, vol. 32, issue 4-5, 2019, pp. 482-503. doi: 10.1080/0951192x.2019.1571236 [11] m. henke, t. heller, smart maintenance – der weg vom status quo zur zielvision, acatech studie, 2019, 68 pp. [in german]. [12] e. uhlmann, e. hohwieler, m. kraft, selbstorganisierende produktion, agenten intelligenter objekte koordinieren und steuern den produktionsablauf, fraunhofer ipk berlin, gito verlag, 2013, berlin, pp. 57-61 [in german]. [13] l. monostori, b. kádár, t. bauernhansl, s. kondoh, s. kumara, g. reinhart, o. sauer, g. schuh, w. sihn, k. ueda, cyber-physical systems in manufacturing, cirp annals, 65(2), 2016, pp. 621–641. doi: 10.1016/j.cirp.2016.06.005 [14] b. vogel-heuser, d. schütz, t. schöler, s. pröll, s. jeschke, d. ewert, o. niggemann, s. windmann, u. berger, c. lehmann, agentenbasierte cyber-physische produktionssysteme. anwendungen für die industrie 4.0, atp magazin, 57(09), 2016, pp. 36-45 [in german]. [15] f. förster, a. schier, m. henke, m. ten hompel, dynamische risikoorientierung durch predictive analytics am beispiel der instandhaltungsplanung, logistics journal. proceedings, 2019(12), pp. 1-9 [in german]. doi: 10.2195/lj_proc_foerster_de_201912_01 [16] n. o. fernandes, t. martins, s. carmo-silva, improving materials flow through autonomous production control, journal of industrial and production engineering, 35(5), 2018, pp. 319–327. doi: 10.1080/21681015.2018.1479895 [17] j. zhang, multi-agent-based production planning and control, john wiley & sons, 2017, isbn 9781118890080 (pdf). [18] g. kasakow, n. menck, j. c. aurich, event-driven production planning and control based on individual customer orders, procedia cirp, 57, 2016, pp. 434-438. doi: 10.1016/j.procir.2016.11.075 [19] r. cupek, a. ziebinski, l. huczala, h. erdogan, agent-based manufacturing execution systems for short-series production scheduling, computers in industry, 82, 2016, pp. 245-258. doi: 10.1016/j.compind.2016.07.009 [20] r. glawar, f. ansari, c. kardos, k. matyas, w. sihn, conceptual design of an integrated autonomous production control model in association with a prescriptive maintenance model (prima), procedia cirp, 80, 2019, pp. 482–487. doi: 10.1016/j.procir.2019.01.047 [21] h. meissner, r. ilsen, j. c. aurich, analysis of control architectures in the context of industry 4.0, procedia cirp, 2017, 62:165–169. doi: 10.1016/j.procir.2016.06.113 [22] s. grundstein, s. schukraft, m. görges, b. scholz-reiter, an approach for applying autonomous production control methods with central production planning, int j syst appl eng dev, 7(4), 2013, pp. 167-174. online [accessed 8 september 2021] https://www.naun.org/main/upress/saed/d012014-130.pdf [23] l. martins, n. o. fernandes, m.
l. r. varela, autonomous production control: a literature review, in international conference on innovation, engineering and entrepreneurship, 2018, pp. 425–431. doi: 10.1007/978-3-319-91334-6_58 [24] s. mantravadi, c. li, c. møller, multi-agent manufacturing execution system (mes): concept, architecture & ml algorithm for a smart factory case, proceedings of the 21st international conference on enterprise information systems, scitepress science and technology publications, 2019, pp. 477–482. doi: 10.5220/0007768904770482 [25] d. pantförder, f. mayer, c. diedrich, p. göhner, m. weyrich, b. vogel-heuser, agentenbasierte dynamische rekonfiguration von vernetzten intelligenten produktionsanlagen, in handbuch industrie 4.0 bd. 2, 2017, pp. 31–44 [in german]. doi: 10.1007/978-3-658-04682-8_7 [26] m. hoffmann, j. aro, c. büscher, t. meisen, intelligente produktionssteuerung und automatisierung, productivity, gito berlin, 2016, pp. 17-20 [in german]. [27] i. graessler, a. poehler, integration of a digital twin as human representation in a scheduling procedure of a cyber-physical production system, ieee international conference on industrial engineering and engineering management (ieem), singapore, 10-13 december 2017, pp. 289-293. doi: 10.1109/ieem.2017.8289898 [28] s. mayer, c. endisch, adaptive production control in a modular assembly system based on partial look-ahead scheduling, ieee international conference on mechatronics (icm), ilmenau, germany, 18-20 march 2019, vol. 1, pp. 293–300. doi: 10.1109/icmech.2019.8722904 [29] j. zou, q. chang, x. ou, j. arinez, g. xiao, resilient adaptive control based on renewal particle swarm optimization to improve production system energy efficiency, journal of manufacturing systems, 2019, vol. 50, pp. 135–145. doi: 10.1016/j.jmsy.2018.12.007 [30] t. jamrus, c.-f. chien, m. gen, k. sethanan, hybrid particle swarm optimization combined with genetic operators for flexible job-shop scheduling under uncertain processing time for semiconductor manufacturing, ieee transactions on semiconductor manufacturing, 31(1), 2018, pp. 32–41. doi: 10.1109/tsm.2017.2758380 [31] d. gyulai, a. pfeiffer, b. kádár, l. monostori, simulation-based production planning and execution control for reconfigurable assembly cells, procedia cirp, 57, 2016, pp. 445–450. doi: 10.1016/j.procir.2016.11.077 [32] a. kuhnle, n. röhrig, g. lanza, autonomous order dispatching in the semiconductor industry using reinforcement learning, procedia cirp, 79, 2019, pp. 391–396. doi: 10.1016/j.procir.2019.02.101 [33] s. erol, w. sihn, intelligent production planning and control in the cloud – towards a scalable software architecture, procedia cirp, 62, 2017, pp. 571–576. doi: 10.1016/j.procir.2017.01.003 [34] j. vallhagen, t. almgren, k. thörnblad, advanced use of data as an enabler for adaptive production control using mathematical optimization – an application of industry 4.0 principles,
procedia manufacturing, 11, 2017, pp. 663–670. doi: 10.1016/j.promfg.2017.07.165 [35] f. wang, y. lu, f. ju, condition-based real-time production control for smart manufacturing systems, 2018 ieee 14th international conference on automation science and engineering (case), munich, germany, 20-24 august 2018, pp. 1052–1057. doi: 10.1109/coase.2018.8560389 [36] r. glawar, f. ansari, z. j. viharos, k. matyas, w. sihn, a cost-based model for integrating maintenance strategies in autonomous production control, 17th imeko tc10 virtual conference, 20-22 october 2020, pp. 258-264. online [accessed 8 september 2021] https://www.imeko.org/publications/tc10-2020/imeko-tc10-2020-037.pdf [37] s. rötzer, w. schwaiger, forschungsbericht zum projekt: „kosten und co2-emissionen im produktionsnetzwerk von magna europe", in: h. biedermann, industrial engineering und management. technoökonomische forschung und praxis, wiesbaden: springer gabler, 2016, pp. 237–246 [in german]. [38] m. haoues, m. dahane, k. n. mouss, n. rezg, production planning in integrated maintenance context for multi-period multi-product failure-prone single-machine, ieee 18th conference on emerging technologies & factory automation (etfa), cagliari, italy, 10-13 september 2013, pp. 1–8. doi: 10.1109/etfa.2013.6647980 [39] a. berrichi, f. yalaoui, bi-objective artificial immune algorithms to the joint production scheduling and maintenance planning, ieee international conference on control, decision and information technologies (codit), hammamet, tunisia, 6-8 may 2013, pp. 810–814. doi: 10.1109/codit.2013.6689647 [40] r. glawar, f. ansari, k. matyas, evaluation of economic plausibility of integrating maintenance strategies in autonomous production control: a case study, automotive industry, 2021, 7th ifac symposium on information control problems in manufacturing (in print). [41] f. ansari, meta-analysis of knowledge assets for continuous improvement of maintenance cost controlling, faculty of science and technology, thesis, university of siegen, 2014, 169 pp.
frequency response function identification using fused filament fabrication-3d-printed embedded aruco markers

acta imeko issn: 2221-870x september 2022, volume 11, number 3, 1-6

lorenzo capponi1, tommaso tocci2, giulio tribbiani2, massimiliano palmieri2, gianluca rossi2
1 aerospace department, university of illinois at urbana-champaign, 61801 urbana, illinois
2 engineering department, university of perugia, 06122 perugia, italy

section: research paper
keywords: aruco; marker detection; non-contact measurement; structural dynamics
citation: lorenzo capponi, tommaso tocci, giulio tribbiani, massimiliano palmieri, gianluca rossi, frequency response function identification using fused filament fabrication-3d-printed embedded aruco markers, acta imeko, vol. 11, no. 3, article 16, september 2022, identifier: imeko-acta-11 (2022)-03-16
section editor: francesco lamonaca, university of calabria, italy
received july 24, 2022; in final form september 15, 2022; published september 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: lorenzo capponi, e-mail: lcapponi@illinois.edu

abstract: the assessment of modal components is a fundamental step in structural dynamics. while experimental investigations are generally performed through full-contact techniques, using accelerometers or modal hammers, this research proposes a non-contact frequency response function identification measurement technique based on aruco square fiducial marker displacement detection. a video of the phenomenon to be analyzed is acquired, and the displacement is measured through the markers, using a dedicated tracking algorithm. the proposed method is presented using a harmonically excited fused filament fabrication-3d-printed flexible structure, equipped with multiple embedded printed markers, whose displacement is measured with an industrial camera. a comparison with a numerical simulation and an established experimental approach is finally provided for the validation of the results.

1. introduction

when a flexible structure is excited at or close to one of its natural frequencies, the resonance phenomenon occurs [1], [2]. in resonance operating conditions, most of the energy is released and the response vibration amplitudes increase significantly. this process generally leads to an increase in vibration fatigue damage [3]-[5]. for this reason, the determination of the modal components (i.e., natural frequencies, mode shapes and damping) is fundamental in any structural dynamics approach, either numerical or experimental, in order to avoid potentially critical conditions [2]. with this perspective, many experimental approaches that allow the determination of the modal components have been developed over the years [6]-[8]. impact excitation using modal hammers and shaker excitation are the two most used full-contact experimental approaches [2], [9]. in recent years, many image-based analyses have been introduced for displacement measurement [10], [11] and, consequently, for structural dynamics, due to several operating advantages, e.g. high spatial density, full-field information, and no sensors to be placed on the structure [12]. javh et al. [13] proved the hybrid modal-parameter identification of full-field mode shapes using a dslr camera for responses far above the camera's frame rate, employing the lucas-kanade optical flow algorithm [14]. gorjup et al. [15] researched full-field 3d operating-deflection-shape (ods) identification using frequency-domain triangulation in the visible spectrum. capponi et al. [16] proposed a methodology based on the thermoelastic principle for visual modal strain determination, which allowed fatigue modal damage identification.
one of the most promising approaches for deformation, displacement and motion detection involves markers, either physical or virtual [17]-[19]. virtual markers are often employed as they allow tracking objects in subsequently acquired frames without introducing physical targets [20], [21]. when virtual markers are not available, physical markers are employed [22], [23]; among them, the aruco marker library (aruco: augmented reality university of cordoba) was found to be one of the most effective and robust to detection errors and occlusion [24]-[26]. elangovan et al. [27] used them for decoding contact forces exerted by adaptive hands, while sani and karimian [28] and lebedev et al. [29] employed them for drone quadrotor and uav autonomous navigation and landing, respectively. in relation to the use of fiducial markers for vibration measurement, abdelbarr et al. [30] researched structural 3d displacement measurement using aruco markers, while the study of kalybek et al. [31] provides one of the first pieces of evidence of the capability of optical vibration monitoring systems in modal identification. recently, tocci et al. [32] presented an aruco marker-based vibration displacement measurement technique, provided with an uncertainty analysis based on the investigation of the influence of the acquisition parameters.

in this research, aruco markers are employed for the determination of the frequency response function (frf) of a flexible structure using image-based analysis. the growing potential of 3d printing is exploited: the tested structure is realised in polylactic acid using the fused filament fabrication 3d printing methodology. the employed markers are also realised using the 3d printing methodology, and they are generated as embedded in the structure during a single printing job. an established experimental frf assessment technique and a numerical model are also provided for the validation of the results. the manuscript is organized as follows: in sec. 2, the theoretical background of structural dynamics and marker detection is given; in sec. 3, the proposed approach is presented; in sec. 4, the experimental campaign and the numerical model are described; sec. 5 gives the results, and sec. 6 draws the conclusions.

2. theoretical background

2.1. structural dynamics

flexible structures can be represented by n-degrees-of-freedom (dofs) systems using [8]:

$M \ddot{x}(t) + D \dot{x}(t) + K x(t) = D \dot{y}(t) + K y(t)$ , (1)

where $M$, $D$ and $K$ are the mass, damping and stiffness matrices, while $y(t)$ and $x(t)$ are the excitation and the response displacements of the dofs, respectively.
assuming a harmonic excitation $y(t) = Y e^{i \omega t}$ and a response $x(t) = X e^{i \omega t}$, eq. (1) can be written as [8]:

$\left( -\omega^{2} + i \omega D M^{-1} + K M^{-1} \right) X(\omega) = \left( i \omega D M^{-1} + K M^{-1} \right) Y(\omega)$ . (2)

from eq. (2), the displacement-response amplitude is obtained as [8]:

$\alpha(\omega) = \frac{X(\omega)}{Y(\omega)} = \frac{i \omega D M^{-1} + K M^{-1}}{-\omega^{2} + i \omega D M^{-1} + K M^{-1}}$ (3)

where $\alpha(\omega)$ defines the receptance matrix, which is also known as the frequency-response function (frf) from displacement to displacement [2], [8]. using the eigenvalue notation, the element of $\alpha(\omega)$ which relates the j-th response to the k-th excitation can be written as [2], [8]:

$\alpha_{jk}(\omega) = \sum_{r=1}^{N} \left( \frac{{}_{r}R_{jk}}{i \omega - \lambda_{r}} + \frac{{}_{r}R_{jk}^{*}}{i \omega - \lambda_{r}^{*}} \right)$ , (4)

where $r$ is the eigenvalue index (i.e., the mode index), * stands for the complex conjugate, ${}_{r}R_{jk}$ is the modal constant and $\lambda_{r}$ is the r-th eigenvalue [2]. in experimental modal analysis, several approaches for the excitation and the measurement of the structural dynamics can be employed [2]. the frequency response function of a system can be experimentally determined using frf estimators [8]. when there is no input noise and the output noise is uncorrelated, the estimator $\hat{\alpha}(\omega)$ is used:

$\hat{\alpha}(\omega) = \frac{\hat{S}_{yx}(\omega)}{\hat{S}_{yy}(\omega)}$ (5)

where $\hat{S}_{yx}(\omega)$ is the cross-spectrum between the input excitation y and the output response x, and $\hat{S}_{yy}(\omega)$ is the auto-spectrum of the input excitation y (the ^ symbol denotes the estimation from measurements), defined as:

$\hat{S}_{yx}(\omega) = \frac{1}{T} \left[ \hat{Y}^{*}(\omega) \, \hat{X}(\omega) \right]$ , (6)

$\hat{S}_{yy}(\omega) = \frac{1}{T} \left[ \hat{Y}^{*}(\omega) \, \hat{Y}(\omega) \right]$ (7)

where $T$ is the measurement time length, and $\hat{Y}(\omega)$ and $\hat{X}(\omega)$ are the spectra of the input excitation and of the response, respectively. for the experimental frf reconstruction, the identification of the modal parameters is required and, for this purpose, different methods can be used [33], [34]. the preferred procedure in experimental modal analysis consists of using the least-squares frequency domain (lsfd) approach for the identification of the modal constants from the eigenvalues $\hat{\lambda}_{r}$ obtained through the least-squares complex frequency (lscf) method and the stabilisation chart [35].
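the estimator in (5)-(7) lends itself to a short python sketch using scipy's welch-averaged spectral routines; the excitation and response signals below are synthetic placeholders (a broadband input filtered by a crude single resonance), so the numbers are purely illustrative.

```python
import numpy as np
from scipy import signal

def frf_h1(y, x, fs, nperseg=1024):
    """Receptance estimate per (5): cross-spectrum S_yx over
    auto-spectrum S_yy, both Welch-averaged, cf. (6)-(7)."""
    f, s_yx = signal.csd(y, x, fs=fs, nperseg=nperseg)
    _, s_yy = signal.welch(y, fs=fs, nperseg=nperseg)
    return f, s_yx / s_yy

fs = 1000
rng = np.random.default_rng(0)
y = rng.standard_normal(20 * fs)        # 20 s of broadband excitation
b, a = signal.iirpeak(17, Q=30, fs=fs)  # stand-in resonance near 17 Hz
x = signal.lfilter(b, a, y)             # synthetic structural response
f, alpha = frf_h1(y, x, fs)
print(f[np.argmax(np.abs(alpha))])      # ~17 Hz
```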
2.2. aruco marker detection

an aruco marker is a square marker composed of a wide black border, which facilitates its detection in the image, and an inner binary matrix, which determines its identification number [24], [25]. an example of an aruco marker is presented in figure 1.

figure 1. example of an aruco marker from the original dictionary: corners and reference system.

the identification of an aruco marker in a captured frame requires several computational steps [32], which, as well as the generation of the marker itself, are provided by the dedicated opencv python library. however, properly designed image processing (e.g., filters and thresholds) can facilitate the pattern recognition. the marker detection is based on the identification of its 4 corners in each captured frame (see figure 1). from the corners, the spatial coordinates of the centre of the marker $(x_{c}, y_{c})$ are evaluated frame-by-frame during the acquisition [32]:

$C = (x_{c}, y_{c}) = \vec{k} \cdot \left( \frac{1}{4} \sum_{r=1}^{4} |x_{r}| ,\; \frac{1}{4} \sum_{r=1}^{4} |y_{r}| \right)$ (8)

where $(x_{r}, y_{r})$ are the coordinates of the r-th vertex and $\vec{k}$ is the calibration factor from pixel units to si units, defined as the ratio between the side length of the physical marker in si units $d_{SI}$ and the average of the four side lengths (in pixels) of the captured marker in the fov, $\overline{d}_{px}$:

$\vec{k} = d_{SI} / \overline{d}_{px} \;\; \mathrm{[m/pixel]}$ . (9)

the calibration factor is evaluated at each newly acquired frame. in this way, if the marker is subjected to non-planar displacements or deformations during the measurement, the calibration factor is re-estimated. the time-history of the centre of the marker, $C(t)$, is obtained by tracking it during the acquisition.

3. aruco marker-based frequency-response function identification

with this study, a method for the experimental frequency response function identification is proposed. as discussed in sec. 2.1, the receptance matrix $\hat{\alpha}(\omega)$, estimated from experiments, can be determined using eq. (5). in eq. (6), the spectra of the excitation input $\hat{Y}(\omega)$ and of the structure response $\hat{X}(\omega)$ are determined from the displacement time histories of the aruco marker centres, $C_{y}(t)$ and $C_{x}(t)$ (see eq. (8)):

$\hat{Y}_{C}(\omega) = \int_{-\infty}^{\infty} C_{y}(t) \, e^{-i \omega t} \, \mathrm{d}t$ (10)

$\hat{X}_{C}(\omega) = \int_{-\infty}^{\infty} C_{x}(t) \, e^{-i \omega t} \, \mathrm{d}t$ . (11)

then, the receptance $\hat{\alpha}(\omega)$ is estimated using:

$\hat{\alpha}(\omega) = \frac{\hat{Y}_{C}^{*}(\omega) \, \hat{X}_{C}(\omega)}{\hat{Y}_{C}^{*}(\omega) \, \hat{Y}_{C}(\omega)}$ . (12)

finally, the frf reconstruction using the lsfd and lscf approaches is performed.

4. experimental research

4.1. setup

in this research, a y-shaped specimen, shown in figure 2, was used [4], [12]. this geometry was chosen due to its structural dynamic properties: by using two steel weights (each of 360 g), one fixed to each of the arms, the structural dynamics were adjusted to the needs of the research.

figure 2. y-shaped specimen with installed sensors.

the y-shaped sample was realised in white pla, using an ultimaker3 3d printer (100% infill and 0.1 mm layer height); default values were used for the other printing parameters. in the printing process, three 8x8 mm2 aruco markers were embedded in the last four layers of the y-sample geometry and printed using black pla in the same printing job (see figure 2). the sample was mounted on an electro-dynamical shaker (sentek l1024 with pa115 power amplifier), as shown in figure 3.

figure 3. experimental setup.

on the shaker fixation, a fourth aruco marker (printed in b&w on standard 80 g/m2 paper, 8x8 mm2) was rigidly glued for the input excitation measurement. the marker detection was performed using a flir blackfly s 5 mp monochrome camera with a sony imx250 sensor and a fujinon 12 mm optic mounted. the resolution of the camera was set at 1000 × 850 pixels and the frame rate at 160 fps. the setup also includes a pcb-352c34 accelerometer, bonded on the shaker fixation for controlling and measuring the input excitation, and a pcb-352c23/nc, fixed on one arm of the y-sample for the response measurement. the main specifications of the accelerometers used are shown in table 1. for the excitation, a sine sweep of 0.5 g constant amplitude from 5 hz to 80 hz was given to the shaker (closed-loop control), with a sweep rate of 16 oct/min (i.e., approximately 4 sweeps in 68 seconds). the sweep rate was carefully chosen in order to excite the natural frequencies of the sample. however, the measurement with the camera was limited to approximately 45 seconds due to hardware and memory limitations.

4.2. data acquisition

the markers used in this research are from the aruco original library, identified as shown in figure 4. in particular, the markers with id1 and id7 are considered as input reference, while the markers on the two arms (i.e., id2 and id5) provide the output displacement. the displacement of the centre points of the four markers, captured during the experiment and evaluated using eq. (8) and eq. (9), is shown in figure 5.

figure 4. aruco markers employed.
figure 5. measured displacement of the detected markers.
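a condensed python sketch of the tracking described by (8) and (9) follows, using the opencv aruco module (the api shown is the one of opencv-contrib releases up to 4.6; newer releases wrap the same functionality in an arucodetector class). the video file name is a hypothetical placeholder, and the marker side length matches the 8 mm markers of this setup.

```python
import cv2
import numpy as np

MARKER_SIDE_SI = 0.008  # printed marker side length d_SI in metres (8 mm)
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_ARUCO_ORIGINAL)

def marker_centres(frame):
    """Detect ArUco markers and return {id: centre in SI units}, cf. (8)-(9)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
    centres = {}
    if ids is None:
        return centres
    for marker_id, quad in zip(ids.flatten(), corners):
        pts = quad.reshape(4, 2)                    # 4 corner coordinates (px)
        side_px = np.mean([np.linalg.norm(pts[i] - pts[(i + 1) % 4])
                           for i in range(4)])      # mean side length (px)
        k = MARKER_SIDE_SI / side_px                # calibration factor, (9)
        centres[int(marker_id)] = k * pts.mean(axis=0)  # marker centre, (8)
    return centres

# track the centres frame-by-frame over an illustrative recording
cap = cv2.VideoCapture("sweep_test.avi")            # hypothetical file name
history = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    history.append(marker_centres(frame))
cap.release()
```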
table 1. technical specifications of the accelerometers used.

specification            | pcb-352c34        | pcb-352c23/nc
sensitivity              | 100 mv/g (±10 %)  | 5 mv/g (±20 %)
measurement range        | ±490 m/s² pk      | ±9810 m/s² pk
frequency range (±5 %)   | 0.5 to 10000 hz   | 2 to 10000 hz
resonant frequency       | ≥ 50 khz          | ≥ 70 khz
broadband resolution     | 0.0015 m/s² rms   | 0.03 m/s² rms
non-linearity            | ≤ 1 %             | ≤ 1 %
transverse sensitivity   | ≤ 5 %             | ≤ 5 %

once the displacement time-histories are obtained, the frfs can be evaluated through eq. (12) and, finally, reconstructed using the lsfd and lscf approaches. similarly, the reference and response acceleration time-histories are measured during the excitation (see figure 6), and the accelerance frf is evaluated [2].

figure 6. reference and response accelerations.

4.3. finite element model

a finite element model of the y-shaped specimen was prepared using commercial software; figure 7 shows the realized numerical model. the structure was meshed using solid elements with ten degrees of freedom per node, for a total of 89962 elements and 131580 nodes. to model the external masses attached to the free ends of the structure, two point masses of 0.36 kg each are rigidly connected to the holes on the arms of the structure. moreover, to accurately replicate the experimental test, an additional mass of 1000 kg (namely the large mass in figure 7) was connected to the constrained zone of the structure. the displacement along the y axis was not constrained, while all the other degrees of freedom were fixed. in this way, it was possible to use the large-mass method to evaluate the frequency response both in terms of displacement and in terms of acceleration.

figure 7. fe model of the y-shaped specimen.

to calculate the numerical frequency response functions (shown in sec. 5), the modal approach was used. for this reason, a modal analysis was necessary to obtain the natural frequencies of the system and the mode shapes at the points where the responses are to be evaluated. all the frequency response functions shown in sec. 5 were obtained considering a percentage damping equal to 1%, constant for each vibrating mode.

5. results

four different frfs are obtained from the combination of the two markers as input and the two as output. however, as expected, the displacement measured with the id1 and id7 markers is totally comparable, and the same consideration holds for the id2 and id5 markers, due to geometrical considerations. for the sake of clarity, therefore, only the id1-id2 markers frf will be shown and considered in the further discussion. from the results in figure 8, the goodness of the numerical model is verified by comparison with the frf obtained from the accelerometer-based experiments.

figure 8. experimental and numerical acceleration frequency response functions comparison.

in the considered frequency range, three predominant natural frequencies are clearly identified at approximately 17 hz, 48 hz and 69 hz from both the experiments and the numerical model. finally, the comparison between the verified numerical model and the proposed approach is performed in terms of displacement frf (see figure 9). in the same frequency range, the same natural frequencies are identified using the aruco markers with high accuracy. the obtained natural frequencies are presented in table 2.

figure 9. experimental and numerical displacement frequency response functions comparison.

table 2. natural frequencies obtained for each technique used. the standard deviation on each value is ±1.28 hz.

technique       | 1st mode / hz | 2nd mode / hz | 3rd mode / hz
aruco markers   | 17.2          | 48.9          | 64.0
accelerometers  | 17.2          | 49.0          | 63.9
numerical model | 17.2          | 48.9          | 69.5
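as a simple stand-in for the lsfd/lscf identification used in the paper, natural frequencies can be approximated by peak-picking on the frf magnitude. the short python sketch below does this on a synthetic three-mode receptance with resonances chosen near the values of table 2; it is an illustration, not the paper's identification procedure.

```python
import numpy as np
from scipy.signal import find_peaks

def natural_frequencies(freqs, frf, prominence=0.05):
    """Approximate natural frequencies as prominent peaks of |FRF|;
    a crude substitute for LSFD/LSCF modal identification."""
    mag = np.abs(frf) / np.max(np.abs(frf))      # normalized magnitude
    peaks, _ = find_peaks(mag, prominence=prominence)
    return freqs[peaks]

# synthetic three-mode receptance, modes near 17, 49 and 64 Hz, 1% damping
f = np.linspace(1, 80, 2000)
modes, zeta = [17.2, 48.9, 64.0], 0.01
alpha = sum(1.0 / (fn**2 - f**2 + 2j * zeta * fn * f) for fn in modes)
print(natural_frequencies(f, alpha))             # ≈ [17.2, 48.9, 64.0]
```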
a slight decrease of the third natural frequency is detected with respect to the numerical model. however, the two experimental approaches give similar results, and this deviation can therefore be attributed to the settings of the numerical model.

6. conclusions

this study investigates modal component identification using a non-contact measurement approach based on aruco marker displacement detection. even though the established full-contact methods are widely used in several research applications, the required instrumentation is expensive and delicate, and the experimental procedures are time consuming when a full-field comprehension of the dynamics of a structure is required. on the other hand, the proposed method proved highly accurate in the assessment of the natural frequencies of a structure, with a relatively low computational effort and far less expensive instrumentation: each aruco marker can be considered as a sensor, and if multiple markers are placed and detected in the field of view of the camera, more information on the dynamics of the structure can easily be provided. moreover, using 3d printing technology, embedded sensors are demonstrated to be effective and reliable. further employment of aruco markers in structural dynamics will be investigated.

references
[1] d. benasciutti, fatigue analysis of random loadings. a frequency-domain approach, phd thesis, university of ferrara, department of engineering, 2004.
[2] j. slavič, m. boltežar, m. mršnik, m. česnik, j. javh, vibration fatigue by spectral methods, elsevier, 2021. doi: 10.1016/c2019-0-04580-3
[3] d. benasciutti, r. tovo, spectral methods for lifetime prediction under wide-band stationary random processes, int j fatigue, vol. 27, no. 8, aug. 2005, pp. 867–877. doi: 10.1016/j.ijfatigue.2004.10.007
[4] l. capponi, m. česnik, j. slavič, f. cianetti, m. boltežar, non-stationarity index in vibration fatigue: theoretical and experimental research, int j fatigue, vol. 104, nov. 2017, pp. 221–230. doi: 10.1016/j.ijfatigue.2017.07.020
[5] m. mršnik, j. slavič, m. boltežar, vibration fatigue using modal decomposition, mech syst signal process, vol. 98, jan. 2018, pp. 548–556. doi: 10.1016/j.ymssp.2017.03.052
[6] d. j. ewins, modal testing: theory and practice, vol. 15, letchworth: research studies press, 1984.
[7] z.-f. fu, j. he, modal analysis, elsevier, 2001.
[8] n. m. m. maia, j. m. m. e silva, theoretical and experimental modal analysis, research studies press, 1997.
[9] w. heylen, s. lammens, p. sas, modal analysis theory and testing, vol. 200, no. 7, katholieke universiteit leuven, leuven, belgium, 1997.
[10] t. tocci, l. capponi, r. marsili, g. rossi, optical-flow-based motion compensation algorithm in thermoelastic stress analysis using single-infrared video, acta imeko, vol. 10, no. 4, dec. 2021, p. 169. doi: 10.21014/acta_imeko.v10i4.1147
[11] f. vurchio, g. fiori, a. scorza, s. a. sciuto, comparative evaluation of three image analysis methods for angular displacement measurement in a mems microgripper prototype: a preliminary study, acta imeko, vol. 10, no. 2, jun. 2021, p. 119. doi: 10.21014/acta_imeko.v10i2.1047
[12] l. capponi, j. slavič, g. rossi, m. boltežar, thermoelasticity-based modal damage identification, int j fatigue, vol. 137, aug. 2020, p. 105661. doi: 10.1016/j.ijfatigue.2020.105661
[13] j. javh, j. slavič, m. boltežar, experimental modal analysis on full-field dslr camera footage using spectral optical flow imaging, j sound vib, vol. 434, 2018, pp. 213–220.
[14] b. d. lucas, t.
kanade, an iterative image registration technique with an application to stereo vision, proceedings darpa image understanding workshop, 1981, pp. 121–130. [15] d. gorjup, j. slavič, m. boltežar, frequency domain triangulation for full-field 3d operating-deflection-shape identification, mech syst signal process, vol. 133, nov. 2019, p. 106287. doi: 10.1016/j.ymssp.2019.106287 [16] l. capponi, thermoelasticity-based analysis: collection of python packages, 2020. doi: 10.5281/zenodo.4043102 [17] d. g. lowe, object recognition from local scale-invariant features, in proceedings of the seventh ieee international conference on computer vision, 1999, vol. 2, pp. 1150–1157. [18] g. allevi, l. casacanditella, l. capponi, r. marsili, g. rossi, census transform based optical flow for motion detection during different sinusoidal brightness variations, j phys conf ser, vol. 1149, no. 1, dec. 2018, p. 012032. doi: 10.1088/1742-6596/1149/1/012032 [19] t. tocci, l. capponi, r. marsili, g. rossi, j. pirisinu, suction system vapour velocity map estimation through sift-based algorithm, j phys conf ser, vol. 1589, no. 1, jul. 2020, p. 012004. doi: 10.1088/1742-6596/1589/1/012004 [20] t. khuc, f. n. catbas, computer vision-based displacement and vibration monitoring without using physical target on structures, in bridge design, assessment and monitoring, routledge, 2018, pp. 89–100. doi: 10.1201/9781351208796-8 [21] c.-z. dong, o. celik, f. n. catbas, marker-free monitoring of the grandstand structures and modal identification using computer vision methods, struct health monit, vol. 18, no. 5–6, nov. 2019, pp. 1491–1509. doi: 10.1177/1475921718806895 [22] f. lunghi, a. pavese, s. peloso, i. lanese, d. silvestri, computer vision system for monitoring in dynamic structural testing, in role of seismic testing facilities in performance-based earthquake engineering, vol. 22, m. n. fardis and z. t. rakicevic, eds., dordrecht: springer netherlands, 2012, pp. 159–176. doi: 10.1007/978-94-007-1977-4 [23] s. w. park, h. s. park, j. h. kim, h. adeli, 3d displacement measurement model for health monitoring of structures using a motion capture system, measurement, vol. 59, jan. 2015, pp. 352–362. doi: 10.1016/j.measurement.2014.09.063 [24] f. j. romero-ramirez, r. muñoz-salinas, r. medina-carnicer, speeded up detection of squared fiducial markers, image vis comput, vol. 76, aug. 2018, pp. 38–47. doi: 10.1016/j.imavis.2018.05.004 [25] s. garrido-jurado, r. muñoz-salinas, f. j. madrid-cuevas, m. j.
marín-jiménez, automatic generation and detection of highly reliable fiducial markers under occlusion, pattern recognit, vol. 47, no. 6, jun. 2014, pp. 2280–2292. doi: 10.1016/j.patcog.2014.01.005 [26] l. capponi, t. tocci, m. d'imperio, s. h. jawad abidi, m. scaccia, f. cannella, r. marsili, g. rossi, thermoelasticity and aruco marker-based model validation of polymer structure: application to the san giorgio's bridge inspection robot, acta imeko, vol. 10, no. 4, dec. 2021, p. 177. doi: 10.21014/acta_imeko.v10i4.1148 [27] n. elangovan, a. dwivedi, l. gerez, c.-m. chang, m. liarokapis, employing imu and aruco marker based tracking to decode the contact forces exerted by adaptive hands, in 2019 ieee-ras 19th international conference on humanoid robots (humanoids), oct. 2019, pp. 525–530. doi: 10.1109/humanoids43949.2019.9035051 [28] m. f. sani, g. karimian, automatic navigation and landing of an indoor ar.drone quadrotor using aruco marker and inertial sensors, in 2017 international conference on computer and drone applications (iconda), nov. 2017, pp. 102–107. doi: 10.1109/iconda.2017.8270408 [29] i. lebedev, a. erashov, a. shabanova, accurate autonomous uav landing using vision-based detection of aruco-marker, in international conference on interactive collaborative robotics, springer, 2020, pp. 179–188. doi: 10.1007/978-3-030-60337-3_18 [30] m. abdelbarr, y. l. chen, m. r. jahanshahi, s. f. masri, w.-m. shen, u. a. qidwai, 3d dynamic displacement-field measurement for structural health monitoring using inexpensive rgb-d based sensor, smart mater struct, vol. 26, no. 12, dec. 2017, p. 125016. doi: 10.1088/1361-665x/aa9450 [31] m. kalybek, m. bocian, n. nikitas, performance of optical structural vibration monitoring systems in experimental modal analysis, sensors, vol. 21, no. 4, feb. 2021, p. 1239. doi: 10.3390/s21041239 [32] t. tocci, l. capponi, g. rossi, aruco marker-based displacement measurement technique: uncertainty analysis, engineering research express, vol. 3, no. 3, sep. 2021, p. 035032. doi: 10.1088/2631-8695/ac1fc7 [33] p. guillaume, b. peeters, b. cauberghe, p. verboven, identification of highly damped systems and its application to vibro-acoustic modeling, 2004. [34] p. guillaume, p. verboven, b. cauberghe, s. vanlanduit, e. parloo, g. de sitter, frequency-domain system identification techniques for experimental and operational modal analysis, ifac proceedings volumes, vol. 36, no. 16, sep. 2003, pp. 1609–1614. doi: 10.1016/s1474-6670(17)34990-x [35] b. peeters, h. van der auweraer, p. guillaume, j. leuridan, the polymax frequency-domain method: a new standard for modal parameter estimation?, shock and vibration, vol. 11, no. 3–4, 2004, pp. 395–409.
doi: 10.1155/2004/523692

introductory notes for the acta imeko second issue 2021, general track
acta imeko, issn: 2221-870x, june 2021, volume 10, number 2, 4-5
francesco lamonaca1
1 università della calabria, ponte p. bucci, 87036, arcavacata di rende, italy
section: editorial
citation: francesco lamonaca, introductory notes for the acta imeko second issue 2021 general track, acta imeko, vol. 10, no. 2, article 2, june 2021, identifier: imeko-acta-10 (2021)-02-02
received may 25, 2021; in final form may 25, 2021; published june 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: francesco lamonaca, e-mail: f.lamonaca@dimes.unical.it

1. introductory notes for the acta imeko general track
this issue includes a general track aimed at collecting contributions that do not relate to a specific event. as editor-in-chief, it is my pleasure to give readers an overview of these papers, with the aim of encouraging potential authors to consider sharing their research through acta imeko. elena fitkov-norris et al., in 'are learning preferences really a myth? exploring the mapping between study approaches and mode of learning preferences', present an interesting study on the presence of the conversion effect in the mapping between the strength of students' preferences for receiving information in a visual, auditory, reading/writing, or kinaesthetic modality and the study approaches they adopt when taking notes in class, learning new concepts, and revising for exams. this paper opens up new measurement frontiers, as it stimulates research on the definition of new measurement methods and instruments for assessing and describing the approach taken by students to their studies. in 'a colour-based image segmentation method for the measurement of masticatory performance in older adults', lorenzo scalise et al. present a specific measurement method based on the automatic segmentation of two-coloured chewing gum and colour features using the k-means clustering algorithm. the proposed solution aims to quantify the mixed and unmixed areas of colour, separated from any background colour, in order to evaluate masticatory performance among older people with different dental conditions. this innovative measurement method will improve people's quality of life, especially that of the elderly.
using the example of measurements of ion activity, oleksandr vasilevskyi, in 'assessing the level of confidence for expressing extended uncertainty: a model based on control errors in the measurement of ion activity', proposes a method for estimating the level of confidence when determining the coverage factor based on control errors. based on information on tolerances and uncertainty, it is possible to establish a reasonable interval around the measurement result, within which most of the values that can be justified are assigned to the measured value. a novel design that changes the accelerometer mounting support of a commercial pneumatic shock exciter is described in 'investigating the transverse motion of a pneumatic shock exciter using two different anvil mounting configurations' by christiaan s. veldman. the aim is to reduce the transverse motion to which the accelerometer is subjected during shock excitation. the author describes the mounting support supplied by the manufacturer, the design changes made, and the measurement data used to compare the transverse motions recorded using the two different mounting designs. roberto de fazio et al., in 'sensor-based mobile robot for harsh environments: functionalities, energy consumption analysis and characterisation', illustrate the design of a semi-custom wheeled mobile robot with an integrated high-efficiency mono- or polycrystalline photovoltaic panel on the roof that supports the lithium-ion batteries during specific tasks (e.g. navigating rough terrain, obstacles, or steep paths) in order to extend the robot's autonomy. a new e-textile-based system for the remote monitoring of biomedical signals, named sweet shirt, is presented by armando coccia et al. in their paper 'design and validation of an e-textile-based wearable system for remote health monitoring'. the system includes a textile sensing shirt, an electronic unit for data transmission, a custom-made android application for real-time signal visualisation, and desktop software for advanced digital signal processing. the device allows the acquisition of electrocardiographic, biceps electromyographic, and trunk acceleration signals. the study's results show that the information contained in the signals recorded by the novel system is comparable with that obtainable from a standard medical device used in a clinical environment. valery mazin, in 'measurements and geometry', demonstrates the points of contact between measurements and geometry by modelling the main elements of the measurement process with the elements of geometry. it is shown in the study that the basic equation for measurements can be established based on the expression of a projective metric and represents a particular case of it. commonly occurring groups of functional transformations of the measured value are listed. in 'towards the development of a cyber-physical measurement system (cpms): case study of a bioinspired soft growing robot for remote measurement and monitoring applications', stanislao grazioso et al. report a preliminary case study of a cpms, namely an innovative bioinspired robotic platform that can be used for measurement and monitoring applications in confined and constrained environments. the innovative system is a 'soft growing' robot that can access a remote site through controlled lengthening and steering of its body via a pneumatic actuation mechanism.
the system can be endowed with different sensors at the tip or along its body to enable remote measurement and monitoring tasks; as a result, the robot can be employed to effectively deploy sensors in remote locations. the heterogeneous topics of the papers submitted to the general track confirm that acta imeko is the natural platform for disseminating measurement information and stimulating collaboration among researchers from many different fields, united by their common interest in measurement science and technologies.
francesco lamonaca, editor-in-chief

comparison between 3d-reconstruction optical methods applied to bulge-tests through a feed-forward neural network
acta imeko, issn: 2221-870x, december 2021, volume 10, number 4, 194-200
damiano alizzio1, marco bonfanti2, guido garozzo3, fabio lo savio4, roberto montanini1, antonino quattrocchi1
1 dept. of engineering, university of messina, cont.da di dio, 98166 messina, italy
2 dept. of electric, electronic, informatics engineering, univ. of catania, via s. sofia, 54, 95100 catania, italy
3 zerodivision systems s.r.l., piazza s. francesco n. 1, 56127 pisa, italy
4 dept. of civil engineering and architecture, university of catania, via s. sofia, 54, 95100 catania, italy
section: research paper
keywords: bulge-test; creep; 3d-dic; epipolar geometry; neural network
citation: damiano alizzio, marco bonfanti, guido garozzo, fabio lo savio, roberto montanini, antonino quattrocchi, comparison between 3d-reconstruction optical methods applied to bulge-tests through a feed-forward neural network, acta imeko, vol. 10, no. 4, article 30, december 2021, identifier: imeko-acta-10 (2021)-04-30
section editor: roberto montanini, università di messina and alfredo cigada, politecnico di milano, italy
received august 2, 2021; in final form december 4, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: fabio lo savio, e-mail: flosavio@diim.unict.it
abstract: the mechanical behaviour of rubber-like materials can be investigated through numerous techniques that differ from each other in costs, execution times and parameters described. the bulge test method proved helpful for hyperelastic membranes under a plane and equibiaxial stress state. in the present study, bulge tests in force control were carried out on sbr 20% cb-filled specimens. 3d reconstructions of the dome were achieved through two different stereoscopic techniques, epipolar geometry and digital image correlation. through a feed-forward neural network (ffnn), these reconstructions were compared with the measurements by a laser triangulation sensor taken as reference. the 3d-dic reconstruction was found to be more accurate: indeed, the bias errors of the 3d-dic and epipolar techniques with respect to their relative reference values, under creep condition, were 0.53 mm and 0.87 mm, respectively.

1. introduction
the mechanical characterization of elastomers requires the knowledge of numerous hyperelastic constants describing their highly non-linear mechanical behaviour [1]-[6]. nowadays, many different techniques are used to determine the stress-strain curve, from the most traditional uniaxial tensile-compressive tests, through indentation and equibiaxial tests, up to the bulge tests. among these, the bulge test is a consolidated technique for the investigation of membranes subjected to an equibiaxial tension state [7], [8]. this method avoids the edge damage that commonly occurs when the specimen is stressed during other types of tests [9]. in bulge tests, a thin sheet of material with uniform thickness is clamped between two circular flanges with central cavities; by insufflating a fluid into the test chamber, the sheet is made to deform plastically, assuming a semi-spherical shape. since in isotropic materials the stress state is equibiaxial, the relation between the pressure-displacement and stress-strain curves is unique on the whole dome [10].
in light of the above, this technique, originally used for metallic materials, can also be extended to rubber-like materials, which typically show isotropic behaviour in the absence of any previous calendering process, which could generate a slight transverse isotropy along the calendering direction [11], [12]. under inflation, pressure and displacements are the testing parameters to be constantly monitored, these being the parameters necessary to determine the stress-strain curve. to achieve a faithful reconstruction of the dome, epipolar geometry [13]-[15] and 3d-digital image correlation (dic) [16]-[18] techniques were implemented. some of the authors have previously published a device with a mobile crosshead in order to subject an elastomeric membrane to the bulge test in force control (creep) [14], [19]. in the first of the two cited works [14], the authors presented the 3d reconstruction technique of a hyperelastic specimen subjected to the bulge test, based on epipolar geometry. in the second one [19], the authors trained a feed-forward neural network (ffnn) on the data acquired using the previous technique, returning a predictive model that provides as output the height of the dome apex formed by the insufflated specimen. in the present paper, 5 specimens were bulge-tested in creep condition and, differently from previous works, a comparison between the stereoscopic reconstructions of the specimen dome based on epipolar geometry and on 3d-dic was carried out. this comparison was performed through a ffnn independently trained on the results from both reconstruction techniques. the values of the dome apex height provided by the two ffnn models were then compared with the value obtained by using a laser triangulation measurement technique as reference.

2. theoretical background
in the bulge test technique, some restrictive assumptions are required both on the tested material, such as isotropy and incompressibility, and on the geometry of the inflated sample, such as hemispherical shape and small thickness compared to the curvature radius. under these assumptions, the stress state can be considered equibiaxial and plane and, according to the boyle-mariotte law (valid for thin-walled vessels), is defined by:

σ_c = σ_a = p · r / (2 s) ,   (1)

where σ_c and σ_a are, respectively, the circumferential and axial stress, p is the working inflation pressure, r is the curvature radius of the dome and s is the thickness at the dome apex.
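as a minimal illustration of equation (1), the following python sketch (an illustrative example, not part of the authors' matlab code; function names, variable names and the geometry values are hypothetical) computes the equibiaxial stress from the test parameters:

def equibiaxial_stress(p_pa: float, r_m: float, s_m: float) -> float:
    """thin-wall (boyle-mariotte) stress at the dome apex, eq. (1).

    p_pa: inflation pressure in pa; r_m: dome curvature radius in m;
    s_m: thickness at the dome apex in m. returns stress in pa.
    """
    return p_pa * r_m / (2.0 * s_m)

# example with the test pressure used in the paper (0.55 bar = 55 kpa)
# and assumed geometry values:
if __name__ == "__main__":
    sigma = equibiaxial_stress(p_pa=55e3, r_m=0.080, s_m=0.003)
    print(f"sigma_c = sigma_a = {sigma / 1e6:.3f} MPa")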
at the dome apex, each meridian represents a principal direction and, therefore, on its surface all the principal strains are equal to each other (ε1 = ε2 = ε_eq). from the knowledge of the undeformed length l_0 and the deformed length l_d of a membrane finite element, these strains are given by:

ε_1 = ε_2 = ln(l_d / l_0) .   (2)

from the assumption of incompressibility of the material and in agreement with the well-known von mises equation [13]:

ε_3 = ln(s / s_0) ,   (3)

where s_0 is the undeformed specimen thickness.

3. 3d-reconstruction methods used
the stereoscopic technique based on epipolar geometry is a 3d-reconstruction method that allows determining the whole dome by identifying, via two cameras, the shifting of a grid printed on the surface of the sample as the dome inflates under creep bulge testing [20]. the images acquired by the two cameras are processed through appropriate geometrical algorithms and different filters. the stereoscopic technique based on 3d-dic is an optical method adopted to measure full-field displacements and strains by applying cross-correlation to measure shifts in digital images [21]. this method is effective at mapping displacements and deformations thanks to the contrast, necessary to correlate images, generated by the application of an airbrushed speckle pattern on the surface of the sample. the multi-camera system uses a single global axis system. dedicated software, developed in the matlab® environment, integrates a 2d-dic subset-based software with different camera calibration algorithms to reconstruct 3d surfaces from several acquisitions of stereo image pairs. furthermore, this software contains algorithms for computing and visualizing 3d displacements and strains.

4. experimental setup
bulge tests in force control were performed with a home-made experimental setup already published in previous works of some of the authors [14], [19], [22]. a pneumatic circuit inflates a thin sample clamped between two flanges with adjustable flow rate (figure 1d). the device is equipped with a sliding crossbar, whose movements are proportional to the membrane inflation. the creep phenomenon was obtained by keeping the pressure constant within the bulge chamber by means of a pressure regulator (0.1 bar resolution). in the creep test, after a transient inflation lasting 1 s, the sample was subjected to a constant pressure of 0.55 bar long enough to achieve complete relaxation, ensuring at the same time that the strain falls within the linear and elastic field. the core of the stereoscopic acquisition setup (as shown in figure 1a) is a sliding crossbar, employed to translate two fixed-focus cameras (imaging source dmk23g445, monochromatic, equipped with fujifilm hf35ha-1b lenses, having a frame rate up to 30 fps) in the upward/downward direction so as to follow the dome apex displacement due to the inflation and, thus, detect the equibiaxial strain of the specimen. in this way, the dimensions of the captured images depend exclusively on the inflation of the dome and not on its approaching the camera lenses. the crossbar shifting is made possible by converting the rotary motion of a stepper motor into a vertical translation of the crossbar through a driven shaft and a timing belt (figure 1b). the linear motion is controlled by an optical system rigidly connected to the crossbar and consisting of a laser diode (working as emitter) and a photodetector (hamamatsu si s5973-01 photodiode) placed laterally to the dome (figure 1c).
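complementing the stress example above, equations (2) and (3) from section 2 can be evaluated in the same way; this hedged python sketch (hypothetical names, not the authors' code) computes the logarithmic strains:

import math

def principal_strains(l0: float, ld: float, s0: float, s: float):
    """logarithmic strains at the dome apex, eqs. (2)-(3).

    l0/ld: undeformed/deformed length of a membrane finite element;
    s0/s: undeformed/deformed thickness at the apex (same units).
    returns (eps1, eps2, eps3); incompressibility implies
    eps1 + eps2 + eps3 ≈ 0.
    """
    eps_eq = math.log(ld / l0)   # eq. (2): eps1 = eps2
    eps3 = math.log(s / s0)      # eq. (3)
    return eps_eq, eps_eq, eps3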
when the laser beam is interrupted by the inflated dome, the system starts the crossbar shifting via a signal sent to a double h-bridge circuit. the shifting is stopped once the photodetector is hit again by the laser beam. the crossbar shifting takes place in steps of 0.125 mm, committing a focal length error of 0.03 % and a negligible focusing error. the system accuracy matches the size of the captured pixels (37.7 μm). since the photodiode diameter is greater than that of the laser spot (2 mm vs 1 mm), the laser beam does not affect the spatial resolution of the vertical motion (0.2 mm). a laser triangulation sensor (optoncdt 1302-200, having a resolution of 0.2 % fso) was placed on the top of the device frame (figure 1a) and a target-lamina was fixed to the crossbar. these additions were made with the dual purpose of evaluating the uncertainty of the crossbar vertical shifting and indirectly obtaining the measurement of the height of the dome (as shown in figure 2), to be taken as the reference value for both the epipolar geometry and 3d-dic techniques. in other terms, the bias errors of the stereoscopic reconstructions will be calculated by comparing the laser measurement with the measurements from the epipolar geometry and 3d-dic apparatuses. particular care was paid to synchronizing, in pseudo real-time, the acquired images to the data from the pressure transducer and displacement sensor. for this purpose, a ni pxie-1073 chassis containing acquisition boards (pxie-6341 and pxi-8252), coupled to a pc in the ni labview® environment, was used. this software also controls the compressed air supply and regulation system. additional software (camera calibration matlab® toolbox), using a 3d calibration target based on the pinhole model [23], was adopted to stereoscopically calibrate the cameras. to achieve the 3d reconstruction of the dome based on the dic technique, an appropriate gom aramis 2m lt optical system (figure 3), also consisting of a double camera but mounted on a tripod, replaced the device with cameras/sliding crossbar used for the epipolar geometry acquisition. in this case, particular attention was paid to the calibration and focusing of the cameras.

5. test samples
the material tested in this paper was sbr 20% carbon black-filled, an artificial elastomer widely used for several applications: from tires, through seals, up to shoe soles [24]-[26]. a set of 5 square specimens (180 × 180 mm²) was cut from a single 3 mm thick sheet of elastomer. each specimen was used for both stereoscopic techniques and, thus, presented two different patterns, one on each side, as shown in figure 4. on the first side (figure 4a), adopted for the epipolar reconstruction, a grid consisting of five concentric circles (hence named parallels), a small central circle whose centre corresponded to the top of the dome, and 73 equidistant rays (hence named meridians) radiating from the centre was silk-screened. on the second side (figure 4b), used to carry out the 3d-dic reconstruction, a pattern of random spots of white paint was airbrushed (with nozzle diameter ϕ = 0.18 mm). in this way, the mean value of the diameter of the spots was found to be ϕ = 0.23 mm, with their mean surface area equal to 0.042 mm².
figure 1. (a) bulge test setup for epipolar geometry technique; (b) timing belt system; (c) optical system to regulate the vertical shifting; (d) cad of the bulge chamber cross-section (courtesy of authors [24]).
figure 2. sketch of the laser sensor measurement (courtesy of authors [24]).
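the crossbar tracking logic described at the beginning of this section is essentially a bang-bang control loop; the python sketch below is a hedged summary of it (the three hardware hooks are hypothetical driver functions, not the authors' labview or firmware implementation):

STEP_MM = 0.125  # crossbar step size reported in the text

def track_dome_apex(beam_blocked, step_crossbar_up, stop_crossbar):
    """raise the crossbar while the inflating dome blocks the laser beam.

    beam_blocked(): returns True while the photodetector is shadowed;
    step_crossbar_up(mm): commands one step via the double h-bridge;
    stop_crossbar(): halts the motion. all three callables are
    hypothetical hardware-driver hooks.
    """
    while beam_blocked():
        step_crossbar_up(STEP_MM)
    stop_crossbar()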
figure 3. bulge test setup for 3d-dic technique (courtesy of authors [22]).

6. stereoscopic reconstructions and experimental tests
in the epipolar technique, in order to acquire the whole grid, the camera separation must not exceed 10 % of the working distance. thus, adopting a distance of 40 mm between the two cameras, the distance from the top of the sample (u) was set at 400 mm. moreover, a focal length of 10,610 pixels, corresponding to 400 mm, and a minimum focusing range from 250 mm to ∞ were set. for accurate displacement measurements, 33 samples at a sample rate of 1 khz were acquired for each image. the acquisition time required is 1/30 s, corresponding to the camera frame rate. for each acquisition, the average of the 33 values, taken as the best value, and the standard deviation of the series were computed. since the phase shift of a single frame was 1/30 s, the resulting synchronization uncertainty could be reasonably estimated as half of it. due to the slowness of the creep of the specific material tested in the experiments, one acquisition per minute was considered sufficient to capture the relevant information of the phenomenon. some pre-processing stages were needed to improve the image sharpness, as in the example shown in figure 5: the image was first converted to grayscale (figure 5a) and then filtered with the convolution (figure 5b) and median (figure 5c) filters. the convolution filter was intended to refine the grid edges by recovering the light intensity lost due to the specimen strain. next, a smoothing filter (median filter) was applied in order to both reduce the noise introduced by the convolution filter and balance the image. in particular, the size of the median filter was 3 × 3 (kernel size) and the type of convolution filter chosen was the "sharpen filter", in order to obtain a sharper image allowing a more accurate edge detection. for the reconstruction, the origin of the reference system was set at the centre of the clamping flange, with the vertical z-axis passing through the midpoint between the focal centres of the cameras, the x-axis oriented to the right of the testing machine on the median plane and the y-axis perpendicular to the x- and z-axes (figure 6) [19]. the sample was oriented so that the first meridian was along the x-axis. anticlockwise numbering was chosen for the 73 meridians. the dome 3d reconstruction was performed using a customized matlab® algorithm able to fix the markers (figure 7) [14], corresponding to the intersections between the 73 meridians and the 5 parallels of the screen-printed grid on the specimen. moreover, this algorithm identifies with red and blue markers the light transitions from dark to light and vice versa, respectively (figure 7a), selects the homothetic markers (figure 7b) and compares the distance between two homothetic markers at two consecutive stress states (figure 7c). the 3d reconstruction was also implemented through 3d-dic. the ccd sensors of the two cameras were of 1624 × 1236 pixels, allowing a spatial resolution of 0.092 mm/pixel. before acquiring, the optical setup was calibrated with respect to a measuring volume equal to 150 mm × 110 mm × 110 mm, taken as reference and vertically set on the centre of the flange (z-axis), in order to ensure that the space gradually occupied by the dome during creep expansion could be enclosed.
figure 4. bulge test sample: (a) side for epipolar reconstruction; (b) side for 3d-dic reconstruction. in this sample, holes for clamping are already made (courtesy of authors [22]).
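the grayscale–sharpen–median pipeline of section 6 can be sketched in a few lines; the snippet below is an illustrative python/opencv re-implementation (the authors' code is in matlab®, and the exact sharpen-kernel values are assumptions, since the paper does not give them):

import cv2
import numpy as np

def preprocess(image_bgr: np.ndarray) -> np.ndarray:
    """grayscale -> sharpen (convolution) -> 3x3 median, as in figure 5."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)          # figure 5a
    # a common 3x3 "sharpen" kernel (assumed values):
    sharpen_kernel = np.array([[ 0, -1,  0],
                               [-1,  5, -1],
                               [ 0, -1,  0]], dtype=np.float32)
    sharpened = cv2.filter2D(gray, -1, sharpen_kernel)          # figure 5b
    return cv2.medianBlur(sharpened, 3)                         # figure 5c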
to enclose this volume, a cp20 calibration table (175 × 140 mm²) was used, while the optics were adjusted to a depth of field equal to 5.6 mm. for post-processing needs, 35 stages in steps of 300 s were acquired during the test. a facet size of 13 × 13 pixels with a relative distance of 8 pixels was chosen for the computer processing of the dic meshes. each single creep bulge test lasted 180 minutes in order to ensure the complete relaxation of the material within the linear and elastic strain field. due to the temperature dependence of the tested elastomer, the temperature was constantly monitored and maintained at 20 °c ± 0.5 °c for the entire duration of the test. to use both stereoscopic methods, it was necessary to analyse the two sides of each specimen individually. to avoid permanent deformation of the specimen between one test and another, the inflation pressure was kept constant at 0.55 bar, a value well below the limit pressure. however, the nature of filled hyperelastic materials is such that even specimens from the same sheet can have different mechanical characteristics, especially when analysed on opposite sides. therefore, based on the results obtained, it is legitimate to consider that the bias errors may be attributable to any differences in mechanical behaviour between one side and the other of the specimen.
figure 5. image pre-processing stages: (a) grayscale image; (b) convolution-filtered image; (c) median-filtered image for recovering the attenuated-light areas encircled in red in (b) (courtesy of authors [14]).
figure 6. cartesian reference system (courtesy of authors [19]).
figure 7. (a) light transitions: dark to light (red point) and light to dark (blue point); (b) homothetic markers; (c) comparison of the distance between two homothetic markers at two successive stress states (courtesy of authors [14]).

7. neural network architecture
the neural network architecture used in this work was a ffnn (feed-forward neural network) composed of 4 fully connected layers [19], [27]. the model had 2n + 1 inputs (the cartesian coordinates (x, y) of the n points resulting from the intersection of meridians with parallels of the 3d image, plus the inflation pressure value) and n outputs: the height z of each point. the structural simplicity of this architecture (figure 8) allowed achieving good-quality outputs without a large computational cost and within a relatively short training period. every layer had a sigmoid activation function, and the hidden layers were composed of 30 neurons each; only the last layer, the output layer, had a linear activation function. the model was trained on the dataset built using the cartesian points (x, y, z) from the 3d reconstruction of the dome and the inflation pressure during the creep bulge test, adopting the levenberg-marquardt algorithm as optimizer [28]. an mse (mean square error) loss function and an early-stopping technique were used in order to prevent over-fitting of the model.

8. analysis of the results and conclusions
figure 9 and figure 10 show the typical reconstruction of the dome at the end of the test, computed from the epipolar geometry and from the 3d-dic algorithm, respectively.
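to make the architecture of section 7 concrete, here is a minimal python/keras sketch of a comparable network: it mirrors the 2n + 1 inputs, sigmoid hidden layers of 30 neurons, n linear outputs, mse loss and early stopping described above, but it substitutes the adam optimizer for levenberg-marquardt, which standard keras does not provide; all names are hypothetical and this is not the authors' implementation:

import tensorflow as tf

def build_ffnn(n_points: int) -> tf.keras.Model:
    """fully connected ffnn: 2n+1 inputs (x, y of n grid points plus
    pressure), sigmoid hidden layers of 30 neurons, n linear outputs."""
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(30, activation="sigmoid",
                              input_shape=(2 * n_points + 1,)),
        tf.keras.layers.Dense(30, activation="sigmoid"),
        tf.keras.layers.Dense(n_points, activation="linear"),
    ])
    # adam + mse as a stand-in for the paper's levenberg-marquardt:
    model.compile(optimizer="adam", loss="mse")
    return model

# early stopping, as in the paper, to limit over-fitting:
early_stop = tf.keras.callbacks.EarlyStopping(patience=20,
                                              restore_best_weights=True)
# model.fit(x_train, z_train, validation_split=0.2,
#           epochs=500, callbacks=[early_stop])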
figure 11a shows the plot of the inflating pressure as a function of time, highlighting how this parameter, except for the instants in which the pressure regulator acts, remains constant during the whole test, as desired. from the comparison between the trends of the strain as a function of time resulting from the two stereoscopic techniques (figure 11b), as previously stated, it can be noted that the epipolar reconstruction can exploit a greater number of acquisitions with respect to the dic approach (180 vs 35). therefore, the temporal resolution of the epipolar technique is better than that of the dic one. however, the spatial resolution is better in the latter, minimizing the error on the measurement of the dome apex height. in effect, while the 3d-dic spatial resolution is based on a point cloud, the epipolar spatial resolution is based on a limited set of homothetic markers. a way to confirm the better quality of the 3d-dic reconstruction compared to that obtained with the epipolar geometry is based on the use of ffnn models. thus, two independent models were obtained by training the ffnn on the acquired datasets: the first dataset consists of the acquisitions on the 5 specimens analysed with the epipolar technique, the second one of those acquired on the same specimens with the 3d-dic technique. these models take the datasets as input, i.e. the set of (x, y) coordinates of the acquired points, and give as output the height of the dome apex (z) for each acquisition. the output was used to estimate the bias error with respect to the reference laser value (h_laser) for the corresponding acquisition.
figure 8. ffnn architecture.
figure 9. epipolar geometric reconstruction of the dome (courtesy of authors [22]).
figure 10. 3d-dic reconstruction of the dome.
our results showed that the 3d-dic technique leads to smaller bias errors (table 1). with the notations of figure 2, from the direct laser measurement (e), the working distance (u) being fixed at 400 mm (section 6), the indirect measurement of the dome apex height (h_laser) is geometrically given by:

h_laser = v − m − u − e ,   (4)

where m = 75 mm and v = 680 mm. as previously stated, the measurements provided by equation (4) were taken as references both for those obtained with the epipolar technique on one side of the specimen, and for those obtained with the 3d-dic technique on the other side. table 1 shows the best values of the dome apex height and the standard deviations (sd) of the measurements from the laser sensor (one for each sample side) and from the two ffnn models after creep, where the coverage factor k for the uncertainty of each measurement was considered equal to 1. table 1 also shows the bias errors of each of the models with respect to its own reference value. each bias error is given by the difference, in absolute value, between the ffnn model value and the relative laser value. figure 12 plots the gaussian distributions summarised in table 1. the small deviation observed between the values from the ffnn models relative to the dic and epipolar reconstructions, as shown in figure 12a and figure 12b, is likely due to variations in the hyperelastic behaviour of the specimens depending on the side analysed, and not to the inaccuracy of the experimental setup. this is confirmed by the corresponding variation in the reference values provided by the laser sensor.
figure 11. (a) inflating pressure plot. (b) trends of the strains from the 3d-dic and epipolar geometry techniques.
figure 12. comparison between gaussian distributions for the height of the dome apex after creep from (a) laser measurements and the epipolar ffnn model and (b) laser measurements and the 3d-dic ffnn model.

table 1. comparison between the epipolar and dic ffnn models with respect to their own laser measurements. measurement units in mm.

                    h_laser (epip. side) ± sd   h_laser (dic side) ± sd   h_epip. ± sd   h_dic ± sd   bias_epip.   bias_dic
initial inflation   27.1 ± 0.8                  25.9 ± 0.8                28.5 ± 1.4     26.9 ± 1.6   1.4          1.0
creep               29.2 ± 0.9                  28.0 ± 0.8                30.1 ± 1.7     28.5 ± 1.4   0.9          0.5
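as a worked illustration of equation (4) and of the bias computation, the short python sketch below (hypothetical names; the laser reading e is an example value, not a figure from the paper) reproduces the geometry with v = 680 mm, m = 75 mm and u = 400 mm:

V_MM, M_MM, U_MM = 680.0, 75.0, 400.0  # fixed setup geometry

def h_laser(e_mm: float) -> float:
    """indirect dome-apex height from the direct laser reading e, eq. (4)."""
    return V_MM - M_MM - U_MM - e_mm

def bias(h_model_mm: float, h_ref_mm: float) -> float:
    """absolute deviation of an ffnn height from its laser reference."""
    return abs(h_model_mm - h_ref_mm)

# creep values from table 1: epipolar side 30.1 vs 29.2 mm -> 0.9 mm,
# dic side 28.5 vs 28.0 mm -> 0.5 mm
print(bias(30.1, 29.2), bias(28.5, 28.0))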
references
[1] m. mooney, a theory of large elastic deformation, j appl phys, 11, 1940, 582-592. doi: 10.1063/1.1712836
[2] r. s. rivlin, large elastic deformations of isotropic materials iv. further developments of the general theory, phil trans r soc lond a, 241, 1948, 379-397. doi: 10.1098/rsta.1948.0024
[3] l. r. g. treloar, the physics of rubber elasticity, clarendon press, oxford, 1975.
[4] e. m. arruda, m. c. boyce, a three-dimensional constitutive model for the large stretch behavior of rubber elastic materials, j mech phys solids, 41, 1993, 389-412. doi: 10.1016/0022-5096(93)90013-6
[5] o. h. yeoh, some forms of the strain energy function for rubber, rubber chem technol, 66, 1993, 754-771. doi: 10.5254/1.3538343
[6] r. w. ogden, non-linear elastic deformations, dover publications, mineola, n. y., 1997, isbn 978-0-486-69648-5; z. chenbing, w. xinpeng, l. xiyao, z. suoping, w. haitao, a small buoy for flux measurement in air-sea boundary layer, in proceedings of the icemi 2017. doi: 10.1109/icemi.2017.8265999
[7] j. y. sheng, l. y. zhang, b. li, g. f. wang, x. q. feng, bulge test method for measuring the hyperelastic parameters of soft membranes, acta mech 228 (2017) 4187–4197. doi: 10.1007/s00707-017-1945-x
[8] m. koç, e. billur, o. n. cora, an experimental study on the comparative assessment of hydraulic bulge test analysis methods, mater. des. 32 (2011) 272–281. doi: 10.1016/j.matdes.2010.05.057
[9] m. k. small, w. d. nix, analysis of the accuracy of the bulge test in determining the mechanical properties of thin films, j mater res, 7, 1992, 1553-1563. doi: 10.1557/jmr.1992.1553
[10] dimarn, c. thanadngarn, v. buakaew, y. neamsup, mechanical properties testing of sheet metal by hydraulic bulge test, proceedings vol. 9234, international conference on experimental mechanics 2013 and twelfth asian conference on experimental mechanics, 2014. doi: 10.1117/12.2054257
[11] t. tsakalakos, the bulge test: a comparison of theory and experiment for isotropic and anisotropic films, thin solid films, 75, 1981, 293-305. doi: 10.1016/0040-6090(81)90407-7
[12] j. diani, m. brieu, j. m. vacherand, a. rezgui, directional model for isotropic and anisotropic hyperelastic rubber-like materials, mech mater, 36, 2004, 313-321. doi: 10.1016/s0167-6636(03)00025-5
[13] m. sasso, g. palmieri, g. chiappini, d. amodio, characterization of hyperelastic rubber-like materials by biaxial and uniaxial stretching tests based on optical methods, polym test, 27, 2008, 995-1004. doi: 10.1016/j.polymertesting.2008.09.001
[14] m. calì, f.
lo savio, accurate 3d reconstruction of a rubber membrane inflated during a bulge test to evaluate anisotropy, advances on mechanics, design engineering and manufacturing, lecture notes in mechanical engineering, springer, 2017, 1221-1231. doi: 10.1007/978-3-319-45781-9_122
[15] m. sigvant, k. mattiasson, h. vegter, p. thilderkvist, a viscous pressure bulge test for the determination of a plastic hardening curve and equibiaxial material data, int j mater form, 2009, 2, 235-242. doi: 10.1007/s12289-009-0407-y
[16] l. vitu, n. laforge, p. malécot, n. boudeau, s. manov, m. milesi, characterization of zinc alloy by sheet bulging test with analytical models and digital image correlation, aip conference proceedings, 2018, proceedings of the 21st international esaform conference on material forming, palermo, italy. doi: 10.1063/1.5035022
[17] c. feichter, z. major, r. w. lang, methods for measuring biaxial deformation on rubber and polypropylene specimens, in: gdoutos e. e. (eds), experimental analysis of nano and engineering materials and structures, 2007, 273-274. doi: 10.1007/978-1-4020-6239-1_135
[18] j. neggers, j. p. m. hoefnagels, f. hild, s. roux, m. g. d. geers, global digital image correlation for pressure-deflected membranes, in g. a. shaw, b. c. prorok, & l-va. starman (eds.), proceedings of the 2012 annual conference on experimental and applied mechanics, 6, 2012, 135-140. new york: springer. doi: 10.1007/978-1-4614-4436-7_20
[19] f. lo savio, g. capizzi, g. la rosa, g. lo sciuto, creep assessment in hyperelastic material by a 3d neural network reconstructor using bulge testing, polymer testing 63 (2017) 65-72. doi: 10.1016/j.polymertesting.2017.08.009
[20] t. moons, l. van gool, m. vergauwen, 3d reconstruction from multiple images, part 1: principles, foundations and trends® in computer graphics and vision, vol. 4, no. 4 (2008) 287–398. doi: 10.1561/0600000007
[21] j. li, z. miao, x. liu, y. wan, 3d reconstruction based on stereovision and texture mapping, in paparoditis n., pierrot-deseilligny m., mallet c., tournaire o. (eds), iaprs, vol. xxxviii, part 3b – saint-mandé, france, 2010.
[22] d. alizzio, f. lo savio, m. bonfanti, g. garozzo, a new approach based on neural network for a 3d reconstruction of the dome of a bulge tested specimen, ceur workshop proceedings icyrime 2020, 2768 (2020), pp. 54-58. online [accessed 22 december 2021] http://hdl.handle.net/20.500.11769/496378
[23] z. zhang, a flexible new technique for camera calibration, ieee t pattern anal, 22, 2000, pp. 1330-1334. doi: 10.1109/34.888718
[24] f. lo savio, g. la rosa, m. bonfanti, a new theoretical-experimental model deriving from the contactless measurement of the thickness of bulge-tested elastomeric samples, polymer testing 87 (2020), 106548. doi: 10.1016/j.polymertesting.2020.106548
[25] f. lo savio, m. bonfanti, g. m. grasso, d. alizzio, an experimental apparatus to evaluate the non-linearity of the acoustoelastic effect in rubber-like materials, polymer testing 80 (2019), 106133. doi: 10.1016/j.polymertesting.2019.106133
[26] f. lo savio, m. bonfanti, a novel device for measuring the ultrasonic wave velocity and the thickness of hyperelastic materials under quasi-static deformations, polymer testing 74 (2019), pp. 235-244. doi: 10.1016/j.polymertesting.2019.01.005
[27] l. w. peng, s. m.
shamsuddin, 3d object reconstruction and representation using neural networks, proceedings of the 2nd international conference on computer graphics and interactive techniques in australasia and southeast asia 2004, singapore, pp. 139-147. doi: 10.1145/988834.988859
[28] j. j. moré, the levenberg-marquardt algorithm: implementation and theory, numerical analysis, lnm 630 (2006), pp. 105-116. doi: 10.1007/bfb0067700

from electronic navigational chart data to sea-bottom models: kriging approaches for the bay of pozzuoli
acta imeko, issn: 2221-870x, december 2021, volume 10, number 4, 36-45
emanuele alcaras1, claudio parente2, andrea vallario2
1 international phd programme "environment, resources and sustainable development", department of science and technology, parthenope university of naples, centro direzionale, isola c4, (80143) naples, italy
2 dist department of science and technology, parthenope university of naples, centro direzionale, isola c4, (80143) naples, italy
section: research paper
keywords: enc data; 3d modelling; ordinary and universal kriging interpolation; bathymetry; geospatial analysis
citation: emanuele alcaras, claudio parente, andrea vallario, from electronic navigational chart data to sea-bottom models: kriging approaches for the bay of pozzuoli, acta imeko, vol. 10, no. 4, article 9, december 2021, identifier: imeko-acta-10 (2021)-04-09
section editor: silvio del pizzo, university of naples 'parthenope', italy
received march 8, 2021; in final form december 10, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: emanuele alcaras, e-mail: emanuele.alcaras@uniparthenope.it
abstract: electronic navigational charts (encs), official databases created by a national hydrographic office and included in electronic chart display and information systems (ecdiss), supply, among essential indications for safe navigation, data about sea-bottom morphology in terms of depth points and isolines. those data are very useful for building bathymetric 3d models: applying interpolation methods, it is possible to produce a continuous representation of the seafloor to support studies concerning different aspects of a marine area, such as directions and intensity of currents, sensitivity of habitats and species, etc. many interpolation methods are available in the literature for bathymetric data modelling: among them, kriging methods are extremely performing, but they require deep analysis to define input parameters, i.e. semi-variogram models. this paper aims to analyse kriging approaches for depth data concerning the bay of pozzuoli. the attention is focused on the role of semi-variogram models for ordinary and universal kriging. depth data included in two encs, namely it400129 and it400130, are processed using geostatistical analyst, an extension of arcgis 10.3.1 (esri). the results testify to the relevance of the choice of the mathematical function of the semi-variogram: the stable model supplies, for this case study, the best performance in terms of depth accuracy for both ordinary and universal kriging.

1. introduction
the knowledge of the seabed conformation allows not only safe navigation [1], but also the development of new frontiers for other kinds of studies, such as those on directions and intensity of currents [2] and sensitivity of habitats and species [3], [4], and other activities such as maintenance dredging [5], coastal infrastructure protection [6], etc. it is therefore obvious that knowing the seabed and its variability is a fundamental requirement in marine studies.
the hydrographic institute of the italian navy (istituto idrografico della marina militare, iimm) represents the primary public authority in the study and analysis of the sea depths surrounding the italian peninsula [7]. it is, in fact, the only institution that can produce certified cartography for navigation in italy. the iimm conducts hydrographic surveys that permit the acquisition of water depth measurements. hydrographic surveys can be carried out using different techniques. single beam sonar (sbs) and multi beam sonar (mbs) determine the depth of any waterbody by using sound beams. particularly, they measure the time lag between transmitting and receiving a signal that travels through the water, springs back from the seafloor, and returns to the sounder; the time lag is converted into a range using the known speed of sound [8]. sbs is a less expensive system than mbs but provides much lower spatial resolution [9]. a good level of information about seabed morphology can also be extracted from multispectral satellite images, even if only in shallow water (depths less than 20 meters) [10], [11]. the results of bathymetric surveys are used for nautical charts that provide seabed morphology through depth points and contours [12]. available in digital form (raster or vector), nautical charts are legible and manageable by information systems supporting ship navigation, i.e. electronic charting systems (ecss) and electronic chart display and information systems (ecdiss) [13]. when they are in vector format and comply with the specific standard established by the international hydrographic organization (iho), digital nautical charts are named electronic navigational charts (encs). ecdis and enc are the two fundamental components of electronic navigation: the first performs the functions of the hardware, the second supplies the relevant information for a voyage, including bathymetry in the form of depth points and contour lines.
regardless of the technique with which they are obtained and the source from which they are extracted, data concerning seabed morphology can be used to produce a bathymetric model that, according to the iho, can be defined as "a digital representation of the topography (bathymetry) of the seafloor by coordinates and depths" [14]. when a point cloud dataset is available, i.e. single beam data or depth data derived from a nautical chart, a spatial interpolation process is necessary to generate a 3d model. starting from irregularly spaced measured points, the depths in unsampled areas must be calculated, using an appropriate grid spacing related to the accuracy of the input data [15]. the number of nodes which compose the grid is a variable which impacts the accuracy of the model: the more nodes there are, the better the accuracy will be [16]. spatial interpolation is a concept strongly linked with digital terrain models (dtms), introduced by miller & laflamme [17], or more generally with digital elevation models (dems). dems can be defined as the digital representation of the earth surface elevation referred to any given geodetic reference system [18], [19]: a 3-dimensional representation of a terrain surface consisting of three-dimensional coordinates (i.e. e, n, h or λ, ϕ, h) stored in digital form [20]. they can be produced in different ways, i.e. using 3d stereoscopic viewing photogrammetric methods [21], [22] or interpolating height points and/or contour lines [23], [24]. interpolation methods that support dem generation can be used not only for terrain modelling, but for seabed modelling too. the result can be indicated as a digital depth model (ddm), because it describes the variability of the distance between the sea surface and the sea bottom [25]. several interpolation methods are offered by gis software to interpolate depth values, each of which has its own advantages and disadvantages [26], but in many cases the most performing ones are the kriging interpolators [24]. they cannot be applied in an automatic way but need to be tested, due to the specificity of the analysed surface morphology that may influence the results [27], and to be supervised by the user to set specific parameters. one of these parameters is the semi-variogram model, that is, a graphical representation of the spatial correlation between the measurement points. an approximating function is used instead of the experimental semi-variogram [28], [29]. the aim of this article is to analyse the kriging approach, specifically the ordinary kriging and universal kriging methods, and to demonstrate that the level of accuracy that can be achieved depends crucially on the choice of the mathematical model of the semi-variogram. this paper is organized as follows. section 2 focuses on the study area and dataset. section 3 describes, firstly, the main characteristics of the kriging approach based on the semi-variance concept, with particular attention to the methodological aspects of ordinary and universal kriging, and then the analysis process for ddm accuracy evaluation. section 4 introduces and discusses the results obtained in dependence of the choice of the mathematical function to fit the experimental semi-variogram (11 different models are compared). finally, section 5 presents our conclusions.

2. study area and dataset
the area considered for this study concerns the bay of pozzuoli, located north-west of naples, in the campania region (italy), as shown in figure 1.
in the bay of pozzuoli, the inner continental shelf, which extends between 0 and 40 m below sea level (b.s.l.), varies significantly, from a few hundred meters at its western side (baia) to 1.6 km at its eastern side (between bagnoli and nisida), reaching 1.8 km west of pozzuoli [30]. in the seabed morphology, gentle slopes prevail, and several terraced surfaces mostly oriented n130°e occur: those terraced areas present widths up to 1.5 km in the easternmost side of the bay and as small as 0.5 km in the west [31].
figure 1. geo-localization of the bay of pozzuoli in the national context (top) and an rgb composition from sentinel-2 imagery of the bay of pozzuoli (bottom).
the area is of great interest for many purposes: as a consequence of the overall subsidence starting at the end of the roman period, the main part of the ancient coastal strip, including all the buildings and maritime structures, is nowadays submerged below the marine surface [32], giving life to the "underwater park of baia", which is the subject of numerous studies, including those based on underwater acoustics techniques [33], [34]. moreover, the pozzuoli inland zone and pozzuoli bay are included in the active volcanic sector named "campi flegrei", corresponding to a densely populated region with a complex geological context [35]. as usually happens in the case of volcanic areas, in addition to the calderas, several fault systems surround the craters in the gulf of pozzuoli [36]. particularly, it is characterized by at least one large caldera collapse structure, which extends over an area of 8 km in diameter in the central sector and is associated with the eruption of the neapolitan yellow tuff (nyt), an ignimbrite deposit dated 15 ka b.p. [37]. this exceptional environment, severely sacked over the years, has been included in a marine protected area since 2001 [38]. the seafloor of the gulf of pozzuoli has also been largely studied for sedimentological analysis [39], biological research [40], gravimetric measurements [41], sea-level changes [42], marine circulation [43], etc. furthermore, several bathymetric surveys have been conducted in the gulf of pozzuoli [31], [44]; therefore, the area presents excellent conditions for 3d modelling. depth data for this work are extracted from two encs produced by the iimm, at a scale of 1:30,000, identified as n° 129 and n° 130. the two sources are necessary as the area falls half in one and half in the other nautical chart. the original files are formed in accordance with the official standards established by the international hydrographic organization (s-57 iho) [14]. they were transformed into shapefiles for use in arcgis version 10.3.1 (esri) [45]. the encs are georeferenced in the wgs84 geodetic datum. the depths are referred to the mean lower low water (mllw), which is the average height of the lowest tide recorded at a tide station each day during a recording period [46]. encs are classified into six categories or "zones of confidence", ranging from high accuracy to poor accuracy at the other end of the spectrum, plus one category of "unassessed" [47]. the considered encs are classified d: this means that soundings are similarly sourced from historic surveys, but in this case those conducted with large distances between adjacent survey lines, or simply soundings collected on an opportunity basis by ships undertaking routine passage [48].
in the absence of specific indications, we assume that this dataset is characterized by nominal scale accuracy. this assumption is not in contradiction with our purpose, that is, to establish the effectiveness of the kriging interpolators for bathymetric data in consideration of the map scale. firstly, we projected the dataset in the universal transverse mercator (utm)/wgs84 zone 33 n (epsg code: 32633) and grouped the vertices of the contour lines and the depth points in one shapefile; then we selected from them only those that fall in the area shown in figure 2. this area extends within the following utm/wgs84 plane coordinates (zone 33t): e1 = 422,600 m, e2 = 430,400 m, n1 = 4,514,200 m, n2 = 4,520,500 m. depth values range between 0 m and -142 m. however, from the same encs a less extended area was considered in a previous step of our research and a more limited dataset was extracted for bathymetric modelling applications; the first results were published in [49]. in this study, the resulting 3181 points are used as the dataset for the application of the ordinary and universal kriging interpolation methods available in geostatistical analyst [50], an extension included in the arcgis software.

3. methodology
kriging is founded on the first law of geography introduced by waldo r. tobler in 1969: "everything is related to everything else, but near things are more related than distant things" [51]. in other words, things closer together are more similar than things further away, so observations made on nearby points actually show less variability than observations made between distant points [52]. unlike deterministic methods, kriging applies a geo-statistical model which includes the spatial correlation between sampled points and uses it to estimate the value at an unknown point [53]. the kriging interpolation method assumes that the spatial variation of any continuous attribute is often too irregular to be modelled by a simple mathematical function. the variation can instead be better described by a stochastic surface [54]. in fact, kriging belongs to the stochastic interpolation approach, which is founded on the assumption that the link between neighbouring points can be expressed by a statistical relationship, which may have no physical significance [55]. as a result, the kriging approach can supply models that better represent and describe the territory variability: the method usually ensures a more reliable prediction of the values in the non-sampled points. in order to evaluate the variability of the points with increasing distances, the semi-variogram is adopted. the semi-variogram is the diagram resulting from the representation of the semi-variance as a function of the distance between two points [56]. mathematically, the semi-variance is given by [57]:

γ(h) = (1 / (2n)) · Σ_{i=1}^{n} [z(x_i) − z(x_i + h)]² ,   (1)

where γ(h) is the value of the semi-variance at the distance h; n is the number of pairs of points separated by h; z is the value of the depth; x_i and x_i + h indicate the positions of each pair of points. to facilitate the procedure and make it faster, the pairs are grouped into lag bins, and a good lag size has to be determined for grouping semi-variogram values. in this way, the size of the distance class into which pairs of locations are grouped makes it possible to reduce the large number of possible combinations [58]. consequently, the semi-variance is calculated for all pairs of points whose separation distance falls within a specific range (e.g. between 10 and 20 meters).
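as an illustration of equation (1) and of the lag binning just described, the following python sketch (a simplified toy version with hypothetical names, not the geostatistical analyst implementation) computes an empirical semi-variogram from scattered depth points:

import numpy as np

def empirical_semivariogram(xy, z, lag_size=290.0, n_lags=12):
    """empirical semi-variance per eq. (1), binned into lags.

    xy: (n, 2) point coordinates in meters; z: (n,) depths;
    lag_size/n_lags: binning, here set to the ordinary-kriging values
    adopted later in the paper (290 m, 12 lags).
    """
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    sq = (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)          # count each pair once
    d, sq = d[iu], sq[iu]
    gamma = np.empty(n_lags)
    for k in range(n_lags):
        mask = (d >= k * lag_size) & (d < (k + 1) * lag_size)
        gamma[k] = 0.5 * sq[mask].mean() if mask.any() else np.nan
    return gamma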
mathematical models can be used to substitute the empirical semi-variogram, fitting the data in the best way: the standard model that best approximates the empirical one has to be selected, in order to obtain a law that describes the trend of the random variable over the whole area covered by the samples [59].
figure 2. the selected point dataset (in the green rectangle), extracted from the enc depth information of the bay of pozzuoli, submitted to kriging interpolations.
this substitution makes it possible to introduce into the kriging process semi-variogram values for lag distances that are not used in the empirical semi-variogram [60]. in the kriging interpolation process, different weights are attributed to the measured values and chosen in such a way as to optimize the interpolating function [61]. to determine the weights, various approaches are adopted: this is a peculiar aspect that distinguishes the different methods of implementing kriging interpolation, as remarked in section 3.1 and section 3.2 for the considered ordinary kriging and universal kriging. the application of these methods requires some parameters to be defined, as illustrated in section 3.3. because this article is aimed at analysing the role of the semi-variogram model in ordinary kriging as well as in universal kriging, in section 3.4 the adopted approach to evaluate the accuracy of the resulting models is illustrated.

3.1. ordinary kriging
ordinary kriging is the most widely used kriging method. it assumes the model [62]:

z(x_0) = Σ_{i=1}^{n} λ_i · z(x_i) ,   (2)

where λ_i are the kriging weights, computed from a normal system of equations derived by minimization of the error variance. the function z(x_i) is composed of a deterministic component μ and a random function ε(x_i) [63]:

z(x_i) = μ + ε(x_i) .   (3)

the deterministic component is a constant value for each x_i location in each search area.

3.2. universal kriging
the universal kriging model assumes that the deterministic component can be expressed locally as a linear combination Σ_l a_l · f^l(x) of k known basis functions f^l(x) (generally polynomials) with unknown coefficients a_l [64]. equation (3) can be rewritten for this method as follows [65]:

z(x_i) = μ(x_i) + ε(x_i) .   (4)

compared with ordinary kriging, this model is in general more difficult to implement [66].

3.3. parameter settings
to apply ordinary kriging as well as universal kriging, the user must define some parameters. first of all, the points involved in the estimation of the depth value at each prediction location have to be established. according to the first law of geography, the user can take into account that the correlation of the measured values with the prediction value depends on the distance that separates the dataset points from the grid node and decreases as the distance increases. consequently, a search neighbourhood is necessary to exclude far points from the interpolation process to predict the depth at a specific location. the user defines the shape and dimensions of the neighbourhood: in our experiments, an equal influence on the grid node is attributed to the surrounding points, so the same search dimension is fixed for both semi-axes (isotropic model). the fixed value for the search radius defines the number of points included in the neighbourhood. the search radius is not the only parameter to define.
in addition, the user can divide the search area into sectors and set a minimum and a maximum number of surrounding points to be included in the interpolation process; however, the range of the surrounding point number can also be defined in the case of a unique neighbourhood without sector division [67]. in our study, four sectors with an offset of 45° are used. an example of the searching neighbourhood step for an ordinary kriging application is shown in figure 3. the dialog box of the software for the kriging application usually also permits the setting of the number and size of the lags used to group the semi-variogram values.

3.4. cross-validation

in order to evaluate the accuracy of each method, cross-validation is adopted. it permits the definition of the level of accuracy of the predicted values by distinguishing between a training set and a validation set, the first used for model generation, the second for model evaluation [68]. there are several cross-validation approaches, among which one of the most adopted is the leave-one-out method. this method is based on the removal of a point from the data to be interpolated, the use of the other points to estimate a value at the location of the removed point, and a performance test by means of the removed data [69]. to evaluate the performance of the selected interpolation method, the difference between the known value and the estimated value at each removed point is calculated [70]. for the experiments carried out in this study, the residuals are treated with a statistical approach, obtaining minimum, maximum, mean, standard deviation and root mean square error (rmse).

4. results and discussion

in this study, ordinary and universal kriging are applied to the chosen dataset by varying all the mathematical semi-variogram models available in geostatistical analyst. specifically, the following models are adopted:

• gaussian model (gam),
• circular model (cim),
• exponential model (exm),
• spherical model (spm),
• tetraspherical model (tem),
• pentaspherical model (pem),
• stable model (stm),
• j-bessel model (jbm),
• k-bessel model (kbm),
• rational quadratic model (rqm), and
• hole effect model (hem).

figure 3. searching neighbourhood step for ordinary kriging application.

those models are described in the literature, and some of them are very recurrent in kriging applications, so readers can refer to specific papers on this matter, e.g. [71]-[74]. to define the cell size of each model, the enc scale is considered; several suggestions on this aspect are available in the literature. on the question of cell size definition for a raster map, waldo tobler [75] advised the following rule: divide the denominator of the map scale by 1,000 to get the detectable size in meters; the resolution is one half of this amount. valenzuela and baumgardner recommended cell sizes ranging from 0.5 mm × 0.5 mm to 3 mm × 3 mm on the map when dealing with thematic maps in a raster-based gis [76]. relating grid resolution to cartographic concepts, tomislav hengl [77] proposed the following formulas for finding the right pixel size p (in meters) for a dtm:

$$p \le S_N \cdot 0.0025 \; \mathrm{m} \quad (5)$$

$$p \ge S_N \cdot 0.0001 \; \mathrm{m} , \quad (6)$$

where $S_N$ is the scale factor, i.e. the denominator of the map scale. he also suggested the following formula as a good compromise:

$$p = S_N \cdot 0.0005 \; \mathrm{m} . \quad (7)$$

in this application, considering also the poor accuracy of the enc (zoc = d), we fixed the cell size at 30 m for the generated ddms.
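the residual statistics reported below (minimum, maximum, mean, standard deviation and rmse, see tables 1 and 2) can in principle be reproduced with a simple leave-one-out loop around any interpolator; a minimal sketch, assuming a generic predict(xy, z, x0) function such as the ordinary kriging sketch given earlier:

```python
import numpy as np

def loo_statistics(xy, z, predict):
    """leave-one-out residuals and the statistics used in tables 1-2."""
    residuals = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i               # remove one point
        z_hat = predict(xy[mask], z[mask], xy[i])   # estimate it from the rest
        residuals.append(z[i] - z_hat)              # known minus estimated
    r = np.asarray(residuals)
    return {"min": r.min(), "max": r.max(), "mean": r.mean(),
            "st.dev": r.std(ddof=1), "rmse": np.sqrt((r ** 2).mean())}

# usage: stats = loo_statistics(xy, z, ordinary_kriging_predict)
```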
using all the semi-variogram models available, eleven grids are generated for each kriging method. in the case of ordinary kriging, a number of lags equal to 12 is chosen, with a lag size of 290 m [78] and an isotropic search radius of 2300 m [67]. in the case of universal kriging, a number of lags equal to 12 is chosen, with a lag size of 72 m and an isotropic search radius of 570 m. in figure 4, 2d representations of two bathymetric models georeferenced in utm-wgs84 plane coordinates, concerning the ordinary kriging applications, are reported: the upper one concerns the most performing model, resulting from the stm application; the lower one concerns the worst performing model, resulting from the hem application. in the same way, figure 5 reports 2d representations of two bathymetric models georeferenced in utm-wgs84 plane coordinates concerning the universal kriging applications; also in this case, the upper one concerns the most performing model (stm) and the lower one the worst performing model (hem). in figure 6 to figure 9, four examples of the semi-variograms generated for the ordinary kriging applications are shown. significant statistical parameters (minimum, maximum, mean, standard deviation and root mean square error) of all the residuals for each semi-variogram mathematical function are shown in table 1 for the ordinary kriging applications and in table 2 for the universal kriging applications.

figure 4. 2d bathymetric models resulting for the ordinary kriging approach from the application of the stable semi-variogram model (top) and the hole effect semi-variogram model (bottom).

figure 5. 2d bathymetric models resulting for the universal kriging approach from the application of the stable semi-variogram model (top) and the hole effect semi-variogram model (bottom).

the 3d visualization of the most performing bathymetric model, generated by the ordinary kriging interpolator with stm, is shown in figure 10. note that the depth values have been multiplied by an amplifying factor equal to 3, so as to enhance the visualization of the seabed morphology. the results of the elaborations demonstrate the different levels of accuracy that can be achieved depending on the choice of the semi-variogram model, for both kriging approaches. for ordinary kriging, the range of minimum values goes from -11.916 m, obtained for kbm, to -16.703 m, resulting from exm. the range of maximum values goes from 12.000 m, obtained for kbm, to 15.612 m, resulting from exm. the range of mean values goes from -0.152 m, obtained for hem, to 0.074 m, resulting from exm. the standard deviation ranges from 1.280 m for stm to 1.781 m for hem, and the rmse from 1.280 m for stm to 1.787 m for hem. by analyzing the rmse values, stm seems to be the most performing semi-variogram model, while hem supplies the worst results.

figure 6. example of semi-variogram generated for ordinary kriging application based on the exponential model.

figure 7. example of semi-variogram generated for ordinary kriging application based on the exponential model.

figure 8. example of semi-variogram generated for ordinary kriging application based on the stable model.

figure 9. example of semi-variogram generated for ordinary kriging application based on the hole effect model.
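as a complement to figures 6 to 9, a mathematical model can be fitted to the empirical semi-variogram computed earlier; the sketch below fits a stable model using one common parameterization, $\gamma(h) = c_0 + c\,(1 - \exp(-(h/r)^s))$ with $0 < s \le 2$, which is an assumption for illustration, since the exact parameterization used by geostatistical analyst may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

def stable_model(h, nugget, sill, rng, shape):
    """one common 'stable' semi-variogram parameterization (assumed form)."""
    return nugget + sill * (1.0 - np.exp(-(h / rng) ** shape))

# h, gamma: empirical semi-variogram, e.g. from the earlier sketch;
# the values below are purely illustrative
h = np.array([145., 435., 725., 1015., 1305., 1595., 1885., 2175.])
gamma = np.array([0.8, 2.1, 3.4, 4.2, 4.8, 5.1, 5.2, 5.25])

p0 = [0.1, gamma.max(), h.mean(), 1.5]                       # initial guess
bounds = ([0, 0, 1e-6, 0.1], [np.inf, np.inf, np.inf, 2.0])  # 0 < s <= 2
params, _ = curve_fit(stable_model, h, gamma, p0=p0, bounds=bounds)
nugget, sill, rng_, shape = params
```

the fitted curve can then be evaluated at any lag distance, which is precisely the role of the model substitution described in section 3.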
table 1. statistical terms of the residuals supplied by cross-validation for the ordinary kriging.

model   min (m)    max (m)   mean (m)   st.dev (m)   rmse (m)
gam     -13.973    12.039    -0.142     1.699        1.705
cim     -15.830    15.158     0.064     1.450        1.451
exm     -16.703    15.612     0.074     1.482        1.484
spm     -15.814    15.168     0.064     1.448        1.449
tem     -15.800    15.186     0.063     1.447        1.448
pem     -15.790    15.212     0.063     1.446        1.447
stm     -13.440    12.717     0.008     1.280        1.280
jbm     -13.226    13.254    -0.145     1.717        1.723
kbm     -11.916    12.000    -0.085     1.463        1.465
rqm     -14.456    14.627    -0.060     1.456        1.457
hem     -15.051    14.746    -0.152     1.781        1.787

table 2. statistical terms of the residuals supplied by cross-validation for the universal kriging.

model   min (m)    max (m)   mean (m)   st.dev (m)   rmse (m)
gam     -16.351    16.652     0.067     1.524        1.525
cim     -13.995    15.630     0.066     1.438        1.440
exm     -14.371    15.395     0.082     1.476        1.478
spm     -13.827    15.283     0.066     1.435        1.437
tem     -13.739    15.460     0.069     1.444        1.446
pem     -14.093    15.560     0.072     1.452        1.454
stm     -14.158    15.232     0.074     1.426        1.428
jbm     -16.361    17.083     0.065     1.535        1.536
kbm     -14.353    15.284     0.079     1.430        1.432
rqm     -15.523    15.716     0.075     1.488        1.490
hem     -15.435    19.163     0.062     1.578        1.579

for universal kriging, the range of minimum values goes from -13.739 m, obtained for tem, to -16.361 m, resulting from jbm. the range of maximum values goes from 15.232 m, obtained for stm, to 19.163 m, resulting from hem. the range of mean values goes from 0.062 m, obtained for hem, to 0.082 m, resulting from exm. the standard deviation ranges from 1.426 m for stm to 1.578 m for hem, and the rmse from 1.428 m for stm to 1.579 m for hem. by analyzing the rmse values, also in this case stm seems to be the most performing semi-variogram model, while hem supplies the worst results. in every case, the rmse values are acceptable in consideration of the accuracy of the encs from which the depth data are derived. the best performance of stm obtained in this case study cannot be generalized: in other words, the comparison makes it possible to establish that, in this specific case, stm offers the best performance, but it cannot be asserted that it will always be so.

5. conclusions

the particular relevance of the study area, the bay of pozzuoli, in many fields, e.g. geology, archaeology and natural science, makes clear that accurate bathymetric models are fundamental to support studies and applications. a first source of information about seabed morphology is included in encs: the depth information contained in isolines and depth points is useful for realizing 3d models of the sea bottom. to achieve this result, particular attention must be reserved for the interpolation approach used to derive a continuous model from the point dataset. the present research remarks the high performance of both kriging approaches for this purpose and demonstrates the relevance of the choice of the mathematical model used to build the semi-variogram, for ordinary kriging as well as for universal kriging. as tested by using leave-one-out cross-validation, different levels of accuracy can be achieved depending on the function used to substitute the empirical semi-variogram, fitting the depth data in the best way. by analysing the residuals between measured and interpolated values of bathymetric depths, it is possible to identify the best performing 3d model of the seabed in the study area. the approach adopted for the bay of pozzuoli can be used whenever bathymetric data are available and usable for a 3d model of the seabed.
in this way, the choice of the most suitable semi-variogram model supports the user in achieving a more performing 3d bathymetric model. when applying kriging methods, the specificity of the considered area, as well as the distribution of the dataset points, influences the quality of the results, so it is impossible to define in an absolute way the most effective semi-variogram model to be adopted. rmse analysis carried out on the residuals resulting from leave-one-out cross-validation remains the best approach to compare different mathematical functions for semi-variogram construction.

acknowledgement

this work synthesizes the results of experiments executed within a research project performed in the laboratory of geomatics, remote sensing and gis of the "parthenope" university of naples. we would like to thank the technical staff for their support.

references

[1] j. duan, x. wan, j. luo, research on the hydrographic survey cycle for updating navigational charts, the journal of navigation 74(4) (2021), pp. 1-13. doi: 10.1017/s0373463320000776
[2] k. v. sanilkumar, t. v. kuruvilla, d. jogendranath, r. r. rao, observations of the western boundary current of the bay of bengal from a hydrographic survey during march 1993, deep sea research part i: oceanographic research papers 44(1) (1997), pp. 135-145. doi: 10.1016/s0967-0637(96)00036-2
[3] g. d'onghia, f. capezzuto, r. carlucci, a. carluccio, p. maiorano, m. panza, p. ricci, l. sion, a. tursi, using a benthic lander to explore and monitor vulnerable ecosystems in the mediterranean sea, acta imeko 7(2) (2018), pp. 45-49. doi: 10.21014/acta_imeko.v7i2.544
[4] g. kågesten, d. fiorentino, f. baumgartner, l. zillén, how do continuous high-resolution models of patchy seabed habitats enhance classification schemes? geosciences 9(5) (2019), p. 237. doi: 10.3390/geosciences9050237
[5] a. i. el-hattab, single beam bathymetric data modelling techniques for accurate maintenance dredging, the egyptian journal of remote sensing and space science 17(2) (2014), pp. 189-195. doi: 10.1016/j.ejrs.2014.05.003
[6] m. specht, c. specht, j. mindykowski, p. dabrowski, r. masnicki, a. makar, geospatial modeling of the tombolo phenomenon in sopot using integrated geodetic and hydrographic measurement methods, remote sensing 12(4) (2020), p. 737. doi: 10.3390/rs12040737
[7] l. sinapi, l. lamberti, n. pizzeghello, r. ivaldi, italian hydrographic institute (iim), genoa (italy), international hydrographic review (2016).
[8] a. ceylan, h. karabork, i. ekozoglu, an analysis of bathymetric changes in altinapa reservoir, carpathian journal of earth and environmental sciences 6(2) (2011), pp. 15-24. online [accessed 31 august 2021] http://www.cjees.ro/actions/actiondownload.php?fileid=176
[9] i. parnum, j. siwabessy, a. gavrilov, m. parsons, a comparison of single beam and multibeam sonar systems in seafloor habitat mapping, proc. 3rd int. conf. and exhibition of underwater acoustic measurements: technologies & results, nafplion, greece, june 2009, pp. 155-162.
[10] c. parente, m. pepe, bathymetry from worldview-3 satellite data using radiometric band ratio, acta polytechnica 58(2) (2018), pp. 109-117. doi: 10.14311/ap.2018.58.0109

figure 10. 3d visualization of the most performing bathymetric model generated by the ordinary kriging stm.
[11] m. alicandro, v. baiocchi, r. brigante, f. radicioni, automatic shoreline detection from eight-band vhr satellite imagery, journal of marine science and engineering 7(12) (2019), p. 459. doi: 10.3390/jmse7120459
[12] n. chukwu fidelis, o. t. badejo, bathymetric survey investigation for lagos lagoon seabed topographical changes, journal of geosciences 3(2) (2015), pp. 37-43. doi: 10.12691/jgg-3-2-2
[13] d. brčić, s. kos, s. žuškin, navigation with ecdis: choosing the proper secondary positioning source, transnav: international journal on marine navigation and safety of sea transportation 9 (2015). doi: 10.12716/1001.09.03.03
[14] iho, iho transfer standard for digital hydrographic data, edition 3.1, november 2000, special publication no. 57 (2000), international hydrographic bureau, monaco.
[15] j. w. chung, j. d. rogers, interpolations of groundwater table elevation in dissected uplands, groundwater 50(4) (2012), pp. 598-607. doi: 10.1111/j.1745-6584.2011.00889.x
[16] r. sassais, a. makar, methods to generate numerical models of terrain for spatial enc presentation, annual of navigation 18 (2011), pp. 69-81. online [accessed 31 august 2021] http://yadda.icm.edu.pl/baztech/element/bwmeta1.element.baztech-352ef1e4-201c-4b6b-9f37-500ee2c9e4e9/c/sassais.pdf
[17] c. l. miller, r. a. laflamme, the digital terrain model: theory & application, mit photogrammetry laboratory (1958).
[18] a. balasubramanian, digital elevation model (dem) in gis, university of mysore (2017).
[19] d. c. guastella, l. cantelli, d. longo, c. d. melita, g. muscato, coverage path planning for a flock of aerial vehicles to support autonomous rovers through traversability analysis, acta imeko 8(4) (2019), pp. 9-12. doi: 10.21014/acta_imeko.v8i4.680
[20] e. alcaras, u. falchi, c. parente, digital terrain model generalization for multiscale use, international review of civil engineering (irece) 11(2) (2020), pp. 52-59. doi: 10.15866/irece.v11i2.17815
[21] r. amato, g. dardanelli, d. emmolo, v. franco, m. lo brutto, p. midulla, p. orlando, b. villa, digital orthophotos at a scale of 1:5000 from high resolution satellite images, isprs journal of photogrammetry & remote sensing, istanbul, turkey, 12-23 july 2004, pp. 593-598.
[22] n. cenni, s. fiaschi, m. fabris, integrated use of archival aerial photogrammetry, gnss, and insar data for the monitoring of the patigno landslide (northern apennines, italy), landslides (2021), pp. 1-17. doi: 10.1007/s10346-021-01635-3
[23] a. carrara, g. bitelli, r. carla, comparison of techniques for generating digital terrain models from contour lines, international journal of geographical information science 11(5) (1997), pp. 451-473. doi: 10.1080/136588197242257
[24] e. alcaras, c. parente, a. vallario, a comparison of different interpolation methods for dem production, international journal of advanced trends in computer science and engineering 6 (2019), pp. 1654-1659. doi: 10.30534/ijatcse/2019/91842019
[25] c. parente, a. vallario, interpolation of single beam echo sounder data for 3d bathymetric model, international journal of advanced computer science and applications 10(10) (2019), pp. 6-13. doi: 10.14569/ijacsa.2019.0101002
[26] r. dupuy, a. makar, analysis of digital sea bottom models generated using enc data, annual of navigation 18 (2011), pp. 27-36. online [accessed 31 august 2021] https://www.infona.pl/resource/bwmeta1.element.baztech-6fdd414b-7436-4dc6-82e1-2c8a866f0993
[27] e. alcaras, l. carnevale, c. parente, interpolating single-beam data for sea bottom gis modelling, int. j. emerg. trends eng. res. 8 (2020), pp. 591-597. doi: 10.30534/ijeter/2020/50822020
[28] g. matheron, traité de géostatistique appliquée: mémoires du bureau de recherches géologiques et minières, vol. 14, éditions technip, paris (1962), 333 pp.
[29] r. a. olea, fundamentals of semivariogram estimation, modeling, and usage, in: stochastic modeling and geostatistics (1994), eds. jeffrey m. yarus and richard l. chambers.
[30] m. sacchi, f. pepe, m. corradino, d. d. insinga, f. molisso, c. lubritto, the neapolitan yellow tuff caldera offshore the campi flegrei: stratal architecture and kinematic reconstruction during the last 15 ky, marine geology 354 (2014), pp. 15-33. doi: 10.1016/j.margeo.2014.04.012
[31] r. somma, s. iuliano, f. matano, f. molisso, s. passaro, m. sacchi, c. troise, g. de natale, high-resolution morpho-bathymetry of pozzuoli bay, southern italy, journal of maps 12(2) (2016), pp. 222-230. doi: 10.1080/17445647.2014.1001800
[32] p. p. aucelli, g. mattei, c. caporizzo, a. cinque, s. troisi, f. peluso, m. stefanile, g. pappone, ancient coastal changes due to ground movements and human interventions in the roman portus julius (pozzuoli gulf, italy): results from photogrammetric and direct surveys, water 12(3) (2020), p. 658. doi: 10.3390/w12030658
[33] b. d. petriaggi, g. g. de ayala, laser scanner reliefs of selected archeological structures in the submerged baiae (naples), the international archives of photogrammetry, remote sensing and spatial information sciences 40(5) (2015), p. 79. doi: 10.5194/isprsarchives-xl-5-w5-79-2015
[34] f. bruno, a. lagudi, a. gallo, m. muzzupappa, b. d. petriaggi, s. passaro, 3d documentation of archeological remains in the underwater park of baiae, the international archives of photogrammetry, remote sensing and spatial information sciences 40(5) (2015), p. 41. doi: 10.5194/isprsarchives-xl-5-w5-41-2015
[35] m. sacchi, s. passaro, f. molisso, f. matano, l. steinmann, v. spiess, ... g. ventura, the holocene marine record of unrest, volcanism, and hydrothermal activity of campi flegrei and somma-vesuvius, in: vesuvius, campi flegrei, and campanian volcanism (2020), pp. 435-469. doi: 10.1016/b978-0-12-816454-9.00016-x
[36] l. lirer, g. luongo, r. scandone, on the volcanological evolution of campi flegrei, eos, transactions american geophysical union 68(16) (1987), pp. 226-234.
[37] s. carlino, r. somma, eruptive versus non-eruptive behaviour of large calderas: the example of campi flegrei caldera (southern italy), bulletin of volcanology 72(7) (2010), pp. 871-886. doi: 10.1007/s00445-010-0370-y
[38] m. stefanile, underwater cultural heritage, tourism and diving centers. the case of pozzuoli and baiae (italy), in: ikuwa v. congreso internacional de arqueología subacuática. un patrimonio para la humanidad, cartagena, 15-18 de octubre de 2014 (2016), pp. 213-224.
[39] s. innangi, r. tonielli, g. di martino, a. guarino, f. molisso, m. sacchi, high-resolution seafloor sedimentological mapping: the case study of bagnoli-coroglio site, gulf of pozzuoli (napoli), italy, chemistry and ecology 36(6) (2020), pp. 511-528. doi: 10.1080/02757540.2020.1732942
[40] s. ricci, c. s. perasso, f. antonelli, b. d. petriaggi, marine bivalves colonizing roman artefacts recovered in the gulf of pozzuoli and in the blue grotto in capri (naples, italy): boring and nestling species, international biodeterioration & biodegradation 98 (2015), pp. 89-100. doi: 10.1016/j.ibiod.2014.12.001
[41] g. berrino, g. corrado, g. luongo, b. toro, ground deformation and gravity changes accompanying the 1982 pozzuoli uplift, bulletin volcanologique 47(2) (1984), pp. 187-200. doi: 10.1007/bf01961548
[42] a. d'argenio, t. pescatore, m. r. senatore, sea-level change and volcano-tectonic interplay, the gulf of pozzuoli (campi flegrei, eastern tyrrhenian sea) during the last 39 ka, journal of volcanology and geothermal research 133(1-4) (2004), pp. 105-121. doi: 10.1016/s0377-0273(03)00393-7
[43] p. de ruggiero, g. esposito, e. napolitano, r. iacono, s. pierini, e. zambianchi, modelling the marine circulation of the campania coastal system (tyrrhenian sea) for the year 2016: analysis of the dynamics, journal of marine systems 210 (2020), art. 103388. doi: 10.1016/j.jmarsys.2020.103388
[44] s. passaro, m. barra, r. saggiomo, s. di giacomo, a. leotta, h. uhlen, s. mazzola, multi-resolution morpho-bathymetric survey results at the pozzuoli-baia underwater archaeological site (naples, italy), journal of archaeological science 40(2) (2013), pp. 1268-1278. doi: 10.1016/j.jas.2012.09.035
[45] esri, arcgis 10.3, redlands, ca, usa. online [accessed 30 august 2021] www.esri.com/software/arcgis
[46] l. surace, la georeferenziazione delle informazioni territoriali, bollettino di geodesia e scienze affini 57(2) (1998), pp. 181-234 (in italian).
[47] iho, mariners' guide to accuracy of depth information in electronic navigational charts (enc), edition 1.0.0, september 2020. online [accessed 31 august 2021] https://iho.int/uploads/user/services%20and%20standards/hssc/hssc12/s-67%20ed%20100%20mariners%20guide%20to%20accuracy%20of%20depth%20information%20in%20enc.pdf
[48] iho, mariners' guide to accuracy of electronic navigational charts (enc), edition 0.5, july 2017. online [accessed 31 august 2021] https://iho.int/uploads/user/services%20and%20standards/dqwg/letters/s-67%20mariners%20guide%20to%20accuracy%20of%20enc%20v0.5.pdf
[49] e. alcaras, c. parente, a. vallario, kriging interpolation of bathymetric data for 3d model of the bay of pozzuoli (italy), 2020 imeko tc-19 international workshop on metrology for the sea, naples, italy, october 5-7, 2020, pp. 218-223. online [accessed 31 august 2021] https://www.imeko.org/publications/tc19-metrosea-2020/imeko-tc19-metrosea-2020-41.pdf
[50] esri, geostatistical analyst, arcgis 10.3 help, redlands, ca, usa. online [accessed 30 august 2021] https://desktop.arcgis.com/en/arcmap/latest/extensions/geostatistical-analyst/what-is-arcgis-geostatistical-analyst-.htm
[51] w. r. tobler, a computer movie simulating urban growth in the detroit region, economic geography 46(1) (1970), pp. 234-240.
[52] j. p. chiles, p. delfiner, geostatistics: modeling spatial uncertainty, vol. 497 (2009), john wiley & sons.
[53] k. krivoruchko, empirical bayesian kriging, arcuser fall 6(10) (2012). online [accessed 31 august 2021] https://www.esri.com/news/arcuser/1012/files/ebk.pdf
[54] m. a. oliver, r. webster, kriging: a method of interpolation for geographical information systems, international journal of geographical information systems 4(3) (1990), pp. 313-332. doi: 10.1080/02693799008941549
[55] s. salekin, j. h. burgess, j. morgenroth, e. g. mason, d. f. meason, a comparative study of three non-geostatistical methods for optimising digital elevation model interpolation, isprs international journal of geo-information 7(8) (2018), p. 300. doi: 10.3390/ijgi7080300
[56] x. jian, r. a. olea, y. s. yu, semivariogram modeling by weighted least squares, computers & geosciences 22(4) (1996), pp. 387-397. doi: 10.1016/0098-3004(95)00095-x
[57] c. a. rishikeshan, s. k. katiyar, v. n. vishnu mahesh, detailed evaluation of dem interpolation methods in gis using dgps data, international conference on computational intelligence and communication networks, bhopal, india, 14-16 nov. 2014, pp. 666-671. doi: 10.1109/cicn.2014.148
[58] j. a. shine, g. i. wakefield, a comparison of supervised imagery classification using analyst-chosen and geostatistically-chosen training sets, us army topographic engineering centre (1999), alexandria, va, usa. online [accessed 31 august 2021] http://www.geocomputation.org/1999/044/gc_044.htm
[59] esri, adjust the lag size of the semivariogram model, arcgis 10.3 help, redlands, ca, usa. online [accessed 31 august 2021] http://webhelp.esri.com/arcgisdesktop/9.3/tutorials/geostat/geostat_3_2.htm
[60] m. armstrong, basic linear geostatistics, springer science & business media (1998), isbn: 978-3-642-58727-6.
[61] g. bohling, introduction to geostatistics and variogram analysis, kansas geological survey 1 (2005), pp. 1-20. online [accessed 31 august 2021] https://discoverspatial.in/wp-content/uploads/2018/04/variograms.pdf
[62] j. k. yamamoto, an alternative measure of the reliability of ordinary kriging estimates, mathematical geology 32(4) (2000), pp. 489-509. doi: 10.1023/a:1007577916868
[63] esri, understanding ordinary kriging, arcgis 10.3 help, redlands, ca, usa. online [accessed 31 august 2021] http://webhelp.esri.com/arcgisdesktop/9.2/index.cfm?topicname=understanding_ordinary_kriging
[64] m. armstrong, problems with universal kriging, journal of the international association for mathematical geology 16(1) (1984), pp. 101-108. doi: 10.1007/bf01036241
[65] esri, understanding universal kriging, arcgis 10.3 help, redlands, ca, usa. online [accessed 31 august 2021] https://desktop.arcgis.com/en/arcmap/latest/extensions/geostatistical-analyst/understanding-universal-kriging.htm
[66] h. wackernagel, universal kriging, in: multivariate geostatistics (2003), pp. 300-307, springer, berlin, heidelberg.
[67] d. berrar, cross-validation, encyclopedia of bioinformatics and computational biology 1 (2019), pp. 542-545. doi: 10.1016/b978-0-12-809633-8.20349-x
[68] u. falchi, c. parente, g. prezioso, global geoid adjustment on local area for gis applications using gnss permanent station coordinates, geodesy and cartography 44(3) (2018), pp. 80-88. doi: 10.3846/gac.2018.4356
[69] g. e. fasshauer, j. g. zhang, on choosing "optimal" shape parameters for rbf approximation, numerical algorithms 45(1-4) (2007), pp. 345-368. doi: 10.1007/s11075-007-9072-8
[70] m. p. batistella pasini, a. dal'col lúcio, a. cargnelutti filho, semivariogram models for estimating fig fly population density throughout the year, pesquisa agropecuária brasileira 49(7) (2014), pp. 493-505. doi: 10.1590/s0100-204x2014000700001
[71] c. c. g. correa, p. e. teodoro, e. d. cunha, j. d. oliveira-júnior, g. gois, v. m. bacani, f. e. torres, spatial interpolation of annual rainfall in the state mato grosso do sul (brazil) using different transitive theoretical mathematical models, international journal of innovative research in science, engineering and technology 3(10) (2014), pp. 16618-16625. doi: 10.15680/ijirset.2014.0310006
[72] x. zhang, l. jiang, x. qiu, j. qiu, j. wang, y. zhu, an improved method of delineating rectangular management zones using a semivariogram-based technique, computers and electronics in agriculture 121 (2016), pp. 74-83. doi: 10.1016/j.compag.2015.11.016
[73] i. m. kiš, comparison of ordinary and universal kriging interpolation techniques on a depth variable (a case of linear spatial trend), case study of the šandrovac field, rudarsko-geološko-naftni zbornik 31(2) (2016), pp. 41-58. doi: 10.17794/rgn%20zbornik.v31i2.3862
[74] esri, choosing a lag size, arcgis 10.3 help, redlands, ca, usa. online [accessed 31 august 2021] https://desktop.arcgis.com/en/arcmap/latest/extensions/geostatistical-analyst/choosing-a-lag-size.htm
[75] w. tobler, measuring spatial resolution, proceedings, land resources information systems conference, beijing (1987), pp. 12-16.
[76] c. valenzuela, m. baumgardner, selection of appropriate cell sizes for thematic maps, itc journal 3 (1990), pp. 219-224.
[77] t. hengl, finding the right pixel size, computers & geosciences 32(9) (2006), pp. 1283-1298. doi: 10.1016/j.cageo.2005.11.008
[78] esri, search neighborhoods, arcgis 10.3 help, redlands, ca, usa. online https://desktop.arcgis.com/en/arcmap/latest/extensions/geostatistical-analyst/search-neighborhoods.htm

digital twins based on augmented reality of measurement instruments for the implementation of a cyber-physical system

acta imeko, issn: 2221-870x, december 2022, volume 11, number 4, pp. 1-8

annalisa liccardo1, francesco bonavolontà1, rosario schiano lo moriello2, francesco lamonaca3, luca de vito4, antonio gloria2, enzo caputo2, giorgio de alteriis2
1 dipartimento di ingegneria elettrica, università di napoli federico ii, naples, italy
2 dipartimento di ingegneria industriale, università di napoli federico ii, naples, italy
3 dipartimento di ingegneria informatica, modellistica, elettronica e sistemistica, università della calabria, italy
4 dipartimento di ingegneria, università degli studi del sannio, italy

section: research paper
keywords: cyber-physical systems; digital twin; remote control; augmented reality; measurement instrumentation
citation: annalisa liccardo, francesco bonavolontà, rosario schiano lo moriello, francesco lamonaca, luca de vito, antonio gloria, enzo caputo, giorgio de alteriis, digital twins based on augmented reality of measurement instruments for the implementation of a cyber-physical system, acta imeko, vol. 11, no. 4, article 15, december 2022, identifier: imeko-acta-11 (2022)-04-15
section editor: leonardo iannucci, politecnico di torino, italy
received november 14, 2022; in final form november 26, 2022; published december 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: rosario schiano lo moriello, e-mail: rschiano@unina.it

abstract: the recent growth of the internet of things and industry 4.0 has led research to investigate innovative technologies that could support these emerging topics in different areas of application. in particular, the current trend is to close the gap between the physical and digital worlds, thus originating the so-called cyber-physical system (cps). a relevant feature of the cps is the digital twin, i.e., a digital replica of a process/product with which the user can interact to operate on the real world. in this paper, the authors propose an innovative approach that exploits an augmented reality solution as a digital twin of a measurement instrument, obtaining a tight connection between measurements as the physical world and the internet of things as digital applications. in fact, by means of a 3d scanning strategy, augmented reality software, and the development of a suitable connection between the instrument and the digital world, a cyber-physical system has been realized as an iot platform that collects data from and controls the real measurement instrument and makes it available in augmented reality. an application example involving a digital storage oscilloscope is finally presented to highlight the efficacy of the proposed approach.
1. introduction

the concept of cyber-physical systems (cps) was first presented in 2006, introduced by helen gill of the national science foundation [1], in the united states, to denote a plane of "local sensation and manipulation of the physical world" correlated with a virtual plane of "real-time control and observability". the concept is presented as an evolution of embedded systems, in which the computational capability and its effects descend deep into every physical component of the system and even within the materials [2]. from this initial vision, the concept of cps has taken on a breadth over time that does not facilitate its unambiguous and definitive conceptualization or representation. the most common definition of cps, as the integration of computational resources and physical processes, now seems too simplistic, as other elements, particularly large-scale network connectivity, have rightfully entered the perimeter of cps. in its current most common definition, a cps is considered as an integration of systems of different natures whose main purpose is the control of a physical process and, through feedback, its adaptation in real time to new operating conditions. this is achieved by the fusion of physical objects and processes, computational platforms, and telecommunications networks [3], [4]. the term physical refers to actual objects as they are "perceivable" by human senses, while the term cyber refers to the abstract virtual image in which actual objects are represented and enriched with one or more additional layers of information. the relationship between physical objects and their virtual "interpretation" has been referred to by some authors with the effective term "social network of objects" [5]. cpss, therefore, are based on related objects that, through sensors, actuators, and network connections, generate and acquire data of various kinds, thus reducing distances and information asymmetries between all elements of the system [6]. with the help of widespread sensors, the cps can autonomously determine its current operational state within its environment and the distance between its component objects.
actuators perform planned actions and execute corrective decisions, optimizing a process or solving a problem [7]. decisions are made by an intelligence that evaluates information internal to the cps and, in some scenarios, also information from other cpss [8]-[10]. in this context, a very important aspect is the interaction with measurement systems, in terms of both sensors or sensor networks and actual measurement instruments. in this case, the aim is to make the measurement system an integral and priority part of the cps; realizing a digital twin of measurement systems that possible users can "touch with their hands" is, therefore, a desirable condition [11]. remote control of instruments is a research activity that saw its first examples in the 1990s; however, such activities were only aimed at enabling measurements to be made by remote programming of the instruments. the point of view, both educational and industrial, has since changed, requiring a faithful replication of the system to be controlled, with direct interaction of the operator (whether student or worker) on the instrument. for this reason, the authors propose an augmented reality-based approach to create a digital twin of the measuring instrument that can be controlled and operated remotely; for this purpose, several enabling technologies inherent in the industry 4.0 and internet of things paradigms are used, such as augmented reality and the mqtt communication protocol. the goal is to enable users to operate the remote instruments as if they were in their presence [12]. the paper is organized as follows: in section 2, a literature review on cps and ar-based applications is presented, while the proposed method is described in detail in section 3. an application example is given in section 4 before drawing the conclusions in section 5.

2. related work

providing an exhaustive review of the exploitation of ar in cyber-physical systems is a difficult challenge, due to the wide variety of configurations and application fields they are interested in, such as additive manufacturing [13]-[16], industry 4.0 [17]-[22] and autonomous vehicles [23]-[25]. as an example, an application of cps in the manufacturing environment is proposed in [7], where real-time data are sent into cyberspace through different types of networks to build a digital twin of the machine tool; ar is then exploited as an interface between the human and the cps, mainly to retrieve information about the ongoing processes. in [26], the authors present a closed-loop cooperative human cps for the mining industry that exploits the information obtained from ar and virtual reality (vr) systems. the main goal of this research is to allow human interaction with the mining operation by means of a deep integration of ar and vr; this integration makes visual information available to the operator, who is supported during the operations in making the correct decisions, conducting inspections and interacting with the equipment. a compelling cps for autonomous vehicle applications has been proposed in [9]. the authors have developed an ar indoor environment for testing and debugging autonomous vehicles that realizes dynamic missions in laboratory environments for planned mission testing and validation; the proposed solution exploits ground and aerial systems, motion cameras, ground projectors, and network communications to obtain a standard testing and prototyping environment for cpss.
finally, the real-time performance of perception, planning, and learning algorithms has been evaluated in a field test. furthermore, in [27], context-aware guidance of cyber-physical processes has been proposed for the press line maintenance process. in particular, by means of a suitable context graph, it was possible to manage and structure the cps sensor data. in addition, an ar application is adopted to support the interaction of users with the cps processes, thanks to the integration of position and marker sensors in the proposed solution; object detection is improved by means of digital data that ensure improved guidance in the process execution. the adoption of ar with cps makes it possible to support the end user in manual tasks, where he/she is guided and monitored during the operations. an interesting application is evaluated in [28], where a suitable ar navigation system for industrial applications has been proposed. the authors have developed a prototype of an autonomous vehicle that interacts with a robot arm. in particular, the robot arm interacts with the vehicle when the correct position is reached, and the vehicle navigation is obtained by exploiting an ar solution based on marker recognition; these markers are used as system reference position points. finally, the research carried out in [29] has highlighted that the integration of iot platforms and ar software as a digital twin is the most suitable technology to close the gap between the physical and digital worlds, by means of the definition of a digital twin architecture model, the introduction of a digital twin service and the investigation of the key elements of industry 4.0 required for the realization of a digital twin. differently from the solutions mentioned so far, which are mainly intended to exploit ar as a fundamental information layer for the interaction with a cps, the authors want to propose a method to realize a cps for measurement instruments that acts as an interface between the actual world and the ar world. instrument control and the related measurements are not carried out as simulations: the ar interaction between the user and the rendered instruments corresponds to real physical state changes in the real world. to accomplish the considered target, a general framework for the definition, implementation, and assessment of an example of digital twin application will be presented in the following, by referring to a typical measuring instrument.

3. proposed method

the method adopted to create a digital twin of the measurement instrument exploits an augmented reality approach based on the framework shown in figure 1. the first step is dedicated to generating the 3d geometric model of the desired instrument; operating approaches typical of reverse engineering can be applied to the purpose. in particular, the reverse engineering approach uses (i) suitable tools to acquire 3d spatial data of objects and (ii) dedicated software environments to manipulate and convert them into useful information. faro's non-contact laser scanarm [30] (figure 2) was chosen to acquire the image and the main dimensional information about the instruments; the output of the scan operation consists of clouds of points in 3d space. thanks to computer-aided design (cad) systems, it has been possible to define mathematical representations allowing the reconstruction of the instrument geometry, thus obtaining a 3d cad model of the desired object.
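as a rough illustration of this reverse engineering step (point cloud acquisition followed by surface reconstruction), the following sketch relies on the open-source open3d library; this is an assumption for illustration only, since the authors used geomagic wrap, as described below, and the file names are hypothetical:

```python
import open3d as o3d

# load the point cloud acquired by the 3d scanner (hypothetical file name)
pcd = o3d.io.read_point_cloud("oscilloscope_scan.ply")

# estimate normals, required by the surface reconstruction step
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=5.0, max_nn=30))

# reconstruct a polygonal mesh from the point cloud (poisson reconstruction)
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

# export for the ar pipeline (obj format, as in the workflow described here)
o3d.io.write_triangle_mesh("oscilloscope_model.obj", mesh)
```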
in order to move from point clouds to the extraction of both surface and geometrical characteristics, suitable software as well as reverse engineering techniques were combined to provide a very efficient and robust solution. in particular, the point clouds were first handled using the geomagic wrap software [31] to transform those data into polygonal meshes, and then the 3d image reconstruction was performed. from the reconstruction of the case, front panel, back panel and support systems, the 3d model of the instrument has been obtained; buttons and knobs have been separately reconstructed and placed one by one in their well-defined positions. the scanning phase produces a file with an obj extension containing an empty container, a simple 3d view of the instrument (figure 3).

figure 1. proposed solution workflow: from the physical world to the digital twin.

figure 2. 3d scanning strategy of the instrument exploiting a non-contact laser scanarm by faro.

the successive step aimed at transforming the obtained 3d object into an augmented interactive object; the 3d graphics development platform unity has been used for the purpose. every interaction that the user performs on the augmented reality object is translated into the corresponding operation executed on the actual instrument. to this aim, a typical iot protocol called message queue telemetry transport (mqtt) [32] is used to communicate with the laboratory where the instruments are placed. in particular, the lab is equipped with a personal computer connected on one side to the ethernet network and on the other side to the instruments. the pc converts the received mqtt messages into messages compliant with the protocol used by the instruments (ieee 488), in order to forward to the instruments the commands and requests corresponding to the ar user's operations. a suitable software module on the pc implements an mqtt client, which has the role of receiving all messages from the augmented instrument and sending them to the actual instrument, and vice versa. communication among different mqtt clients is assured by a third entity, the so-called broker, mandated to dispatch messages to the clients subscribed to specific information arguments, referred to as topics [33]. to assure reliable operations and continuous service, as well as to maintain complete control of the exchanged data, it was decided to exploit a private broker. to realize an augmented reality application, unity exploits a software development plug-in, namely vuforia, which allows the recognition of images chosen as targets. in particular, vuforia is capable of assessing the quality of an image target, thus providing feedback regarding the possible usage of that image as a target. images with a more complex pattern proved to be the best candidates as targets in terms of 3d reconstruction location and stability; the one adopted for the realised application is shown in figure 4 and produced a satisfying evaluation from the vuforia tool. in an augmented reality application, the image target has an important role, since it allows a 3d object to be located in a scene; when an ar application starts and the camera mounted on the device frames the considered image target, digital replicas of the objects, in this case the instruments, are superimposed on the image itself.
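returning to the communication chain described earlier in this section, a minimal sketch of the pc-side bridge follows: an mqtt client that forwards the received commands to the instrument over its ieee 488 link and publishes the corresponding responses. the broker address and topic names are hypothetical, and the paho-mqtt (v1.x api) and pyvisa libraries are assumed, since the paper does not detail the implementation:

```python
import paho.mqtt.client as mqtt
import pyvisa

# connect to the instrument over its remote-control bus (address hypothetical)
rm = pyvisa.ResourceManager()
scope = rm.open_resource("GPIB0::7::INSTR")   # ieee 488 / gpib link to scope

def on_message(client, userdata, msg):
    """forward each mqtt command to the instrument; publish query replies."""
    command = msg.payload.decode()
    if command.endswith("?"):                  # queries expect a response
        reply = scope.query(command)
        client.publish("lab/oscilloscope/response", reply)
    else:
        scope.write(command)                   # plain command, no response

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.org", 1883)     # the private broker (hypothetical)
client.subscribe("lab/oscilloscope/command")   # topic used by the ar application
client.loop_forever()
```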
as said before, the obj file contains a 3d replica of the instrument; therefore, it was necessary to import the obj file into the unity environment and make the appearance of this object similar to the actual one. regarding the oscilloscope shown in figure 3, it has been necessary to add the labels above each key, the model of the instrument in the top left-hand corner, as well as additional markings and symbols that are present on the real instrument. in addition, compared to the real instrument, two '+' and '-' symbols were added on each knob, to allow the user to rotate them with step values similar to those of the actual instrument. the result of this operation is shown in figure 5. the final step regards the communication between the ar and the actual instrument; to this aim, a software module in the c# language has been implemented to recognize button pressure on the virtual object and perform the corresponding operation by sending the command to the actual instrumentation (figure 6). as said before, the protocol used to communicate with the real instrumentation is mqtt, so an mqtt client was created within the application running on the user device (e.g. smartphone, tablet or smart glasses) to send commands to the actual instrumentation and to receive the corresponding responses, if necessary, as in the case of the so-called queries [34].

figure 3. oscilloscope 3d model obtained from the 3d scanning strategy.

figure 4. example of image target exploited for rendering and spatially locating ar instruments.

figure 5. comparison between the actual instrument and its digital twin: instrument view in the ar software environment (a), actual instrument (b).
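on the user-device side, the module described in the paper is written in c#; purely for illustration, the same logic is sketched below in python, with hypothetical topic names, button-to-command bindings and a stub in place of the ar display update:

```python
import paho.mqtt.client as mqtt

# hypothetical bindings between virtual front-panel buttons and commands
BUTTON_COMMANDS = {"ch1_menu": "CH1?", "autoset": "AUTOSET"}

def update_virtual_display(text):
    """stub for the ar rendering call that refreshes the twin's display."""
    print("render on twin display:", text)

client = mqtt.Client()

def on_button_pressed(button_id):
    """called by the ar layer when a virtual button is pressed."""
    client.publish("lab/oscilloscope/command", BUTTON_COMMANDS[button_id])

def on_message(cl, userdata, msg):
    """instrument responses are shown on the display of the 3d twin."""
    update_virtual_display(msg.payload.decode())

client.on_message = on_message
client.connect("broker.example.org", 1883)     # same hypothetical broker
client.subscribe("lab/oscilloscope/response")  # responses from the lab pc
client.loop_start()                            # background network loop
```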
this way, the student can gain awareness and knowledge about the signal currently acquired and displayed on the actual oscilloscope and, consequently, change the parameters, depending on the operations to be performed, by acting on the corresponding knobs of the v/div and s/div. in addition, the student can have information about the signal coupling. as on the actual instrument, the “ch1 menu” button must be clicked on, and the information appears on the righthand side of the display (figure 8). in the example case shown in the figure 8 the waveform coupling is “dc”. figure 6. example of implemented c# module for interaction between user, ar reconstruction and actual instrumentation. figure 7. waveform and resolution information on the ar display after turning on. figure 8. ch1 menu selection on the ar instrument. figure 9. signal level. acta imeko | www.imeko.org december 2022 | volume 11 | number 4 | 6 by turning the specific knobs located on the “ch1 menu” button it is possible to change the signal level (figure 9), so that the signal is moved up or down according to the axis origin. this is important if the student has both signals on the instrument display and he doesn’t want to display them as superimposed. moreover, it is possible to change the trigger level as well as on the actual instrument and perform operations with the cursors to measure period and amplitude of the signal. figure 10.a shows how the cursors were used to evaluate the peak-to-peak amplitude of the signal, while in figure 10.b the period was measured. it is worth noting the relevant concurrence between the display of the instrument rendered within the ar application and that characterizing the actual instrument in the laboratory. as a further case study, a typical students’ exercise is presented, mandated to measure the variation in amplitude and phase of the output signal referred to an rc filter circuit; in particular, an rc filter with a cut-off frequency of 1.7 khz was used. figure 11 shows the measurement setup present in the actual laboratory, where in this case the device under test (dut) stands for the rc filter. as can appreciated in the figure 11, the input signal of the filter is connected to channel 1 of the oscilloscope while channel 2 will show the output signal of the filter; figure 12 shows the corresponding results as the frequency of the input signal rises. in particular, figure 12.a presents the acquired (and almost superimposed) waveforms of the two channels when an input sine wave with a frequency equal to 200 hz has been applied. when the frequency of the generated signal increases up to 1 khz, (figure 12.b), the output waveform begins to be attenuated and delayed with respect to the input one. as the frequency is increased to 5 khz (figure 12.c), the filter effect of the considered dut proves to be evident with a significant attenuation and displacement of the output waveform. as for the actual lab experience, gain and phase shift of the filter can be measured by means of cursors as in figure 10; for the sake of the brevity, the procedure is not shown in the paper. 5. conclusions the purpose of the current research study was to evaluate the capabilities of cps in the measurement framework. in particular, a suitable method to develop a digital twin has been proposed, and a real implementation referred to an oscilloscope has been presented. 
5. conclusions

the purpose of the current research study was to evaluate the capabilities of cps in the measurement framework. in particular, a suitable method to develop a digital twin has been proposed, and a real implementation referred to an oscilloscope has been presented. starting from a 3d scanning strategy, the actual instrument is reconstructed in augmented reality, each front panel element is associated with an instruction of the actual instrument, and an ar application is then developed by means of the vuforia and unity environments. moreover, a suitable communication, based on the mqtt protocol, is adopted between the actual instrument and its 3d reconstruction. to this aim, a personal computer is exploited to realize (i) the physical connection with the actual instrument, (ii) the internet or local connection with the mqtt client, and (iii) an interpreter for the commands sent to the instrument and the responses from the instrument outputs to the ar application. as an example, a typical application is evaluated in which a filtered signal has been correctly displayed and measured in terms of frequency in the ar application. it was thus proved that the oscilloscope can be used in an ar framework where it is remotely controlled and sends the actual measurements (acquired in the physical world) to the augmented-reality world. it has been demonstrated that a cps for measuring instruments can be realized, highlighting that the instrument's digital twin acts in the same way as its counterpart in the real world.

figure 10. evaluation of a) peak-peak amplitude and b) period of the signal.

figure 11. laboratory setup in experiments with the rc filter.

figure 12. evolution of the output signal of the rc filter: a) input signal frequency equal to 200 hz, b) input signal frequency equal to 1 khz, c) input signal frequency equal to 5 khz.

references
[1] r. bahati, h. gill, cyber-physical systems. the impact of control technology, open j. soc. sci. sci. res. publ., vol. 5, pp. 161–166, 2011.
[2] m. bashendy, a. tantawy, a. erradi, intrusion response systems for cyber-physical systems: a comprehensive survey, comput secur, p. 102984, 2022. doi: 10.1016/j.cose.2022.102984
[3] s. zanero, cyber-physical systems, computer, vol. 50, no. 4, pp. 14–16, 2017. doi: 10.1109/mc.2017.105
[4] w. wolf, cyber-physical systems, computer, vol. 42, no. 03, pp. 88–89, 2009.
[5] f. cicirelli, a. guerrieri, g. spezzano, a. vinci, edge computing and social internet of things for large-scale smart environments development, ieee internet things j, vol. 5, no. 4, pp. 2557–2571, 2017. doi: 10.1109/jiot.2017.2775739
[6] m. prist, a. monteriù, e. pallotta, p. cicconi, a. freddi, f. giuggioloni, e. caizer, c. verdini, s. longhi, cyber-physical manufacturing systems: an architecture for sensor integration, production line simulation and cloud services, acta imeko, vol. 9, no. 4, 2020, pp. 39–52. doi: 10.21014/acta_imeko.v9i4.731
[7] f. hu, cyber-physical systems. taylor & francis group llc, 2014.
[8] j. shi, j. wan, h. yan, h. suo, a survey of cyber-physical systems, int. conference on wireless communications and signal processing (wcsp), nanjing, china, 9-11 november 2011, pp. 1–6. doi: 10.1109/wcsp.2011.6096958
[9] b. scherer, hardware-in-the-loop test based non-intrusive diagnostics of cyber-physical systems using microcontroller debug ports, acta imeko, vol. 7, no. 1, 2018, pp. 27–35. doi: 10.21014/acta_imeko.v7i1.513
[10] p. s. matharu, a. a. ghadge, y. almubarak, y. tadesse, jelly-z: twisted and coiled polymer muscle actuated jellyfish robot for environmental monitoring, acta imeko, vol. 11, no. 3, 2022, pp. 1-7. doi: 10.21014/acta_imeko.v11i3.1255
[11] f. gabellone, digital twin: a new perspective for cultural heritage management and fruition, acta imeko, vol. 11, no. 1, 2022, 7 pp. doi: 10.21014/acta_imeko.v11i1.1085
[12] m. y. serebryakov, i. s. moiseev, current trends in the development of cyber-physical interfaces linking virtual reality and physical systems, conference of russian young researchers in electrical and electronic engineering (elconrus), saint petersburg, russian federation, 25-28 january 2022, pp. 419–424. doi: 10.1109/elconrus54750.2022.9755545
[13] h. lhachemi, a. malik, r. shorten, augmented reality, cyber-physical systems, and feedback control for additive manufacturing: a review, ieee access, vol. 7, 2019, pp. 50119–50135. doi: 10.1109/access.2019.2907287
[14] a. t. silvestri, v. bottino, e. caputo, f. bonavolontà, r. schiano lo moriello, a. squillace, d. accardo, innovative fusion strategy for mems redundant-imu exploiting custom 3d components, ieee 9th int. workshop on metrology for aerospace (metroaerospace), 2022, pp. 644–648. doi: 10.1109/metroaerospace54187.2022.9856222
[15] a. t. silvestri, m. perini, p. bosetti, a. squillace, exploring potentialities of direct laser deposition: thin-walled structures, key engineering materials, vol. 926, 2022, pp. 206–212. doi: 10.4028/p-82vyug
[16] a. t. silvestri, i. papa, f. rubino, a. squillace, on the critical technological issues of cff: enhancing the bearing strength, materials and manufacturing processes, 2021. doi: 10.1080/10426914.2021.1954195
[17] y. lu, cyber physical system (cps)-based industry 4.0: a survey, journal of industrial integration and management, vol. 2, no. 03, 2017, p. 1750014. doi: 10.1142/s2424862217500142
[18] b. dafflon, n. moalla, y. ouzrout, the challenges, approaches, and used techniques of cps for manufacturing in industry 4.0: a literature review, the international journal of advanced manufacturing technology, vol. 113, no. 7, 2021, pp. 2395–2412. doi: 10.1007/s00170-020-06572-4
[19] g. de alteriis, v. bottino, c. conte, g. rufino, r. s. lo moriello, accurate attitude initialization procedure based on mems imu and magnetometer integration, ieee 8th int. workshop on metrology for aerospace (metroaerospace), 2021, pp. 1–6. doi: 10.1109/metroaerospace51421.2021.9511679
[20] s. surdo, a. zunino, a. diaspro, m. duocastella, acoustically shaped laser: a machining tool for industry 4.0, acta imeko, vol. 9, no. 4, 2020, pp. 60-66. doi: 10.21014/acta_imeko.v9i4.740
[21] g. mariniello, t. pastore, a. bilotta, d. asprone, e. cosenza, seismic pre-dimensioning of irregular concrete frame structures: mathematical formulation and implementation of a learn-heuristic algorithm, journal of building engineering, vol. 46, 2022. doi: 10.1016/j.jobe.2021.103733
[22] g. mariniello, t. pastore, d. asprone, e. cosenza, layout-aware extreme learning machine to detect tendon malfunctions in prestressed concrete bridges using stress data, autom constr, vol. 132, 2021. doi: 10.1016/j.autcon.2021.103976
[23] g. raja, s. senthilkumar, s. ganesan, r. edhayachandran, g. vijayaraghavan, a. k. bashir, av-cps: audio visual cognitive processing system for critical intervention in autonomous vehicles, ieee int. conference on communications workshops (icc workshops), 2021, pp. 1–6. doi: 10.1109/iccworkshops50388.2021.9473647
[24] j. wang, z. cai, j. yu, achieving personalized k-anonymity-based content privacy for autonomous vehicles in cps, ieee trans industr inform, vol. 16, no. 6, 2020, pp. 4242–4251. doi: 10.1109/tii.2019.2950057
[25] c. conte, g. de alteriis, g. rufino, d. accardo, an innovative process-based mission management system for unmanned vehicles, ieee int. workshop on metrology for aerospace (metroaerospace), 2020, pp. 377–381. doi: 10.1109/metroaerospace48742.2020.9160121
[26] j. xie, s. liu, x. wang, framework for a closed-loop cooperative human cyber-physical system for the mining industry driven by vr and ar: mhcps, comput ind eng, vol. 168, 2022, p. 108050.
[27] k. kammerer, r. pryss, k. sommer, m. reichert, towards context-aware process guidance in cyber-physical systems with augmented reality, 4th int. workshop on requirements engineering for self-adaptive, collaborative, and cyber physical systems (resacs), 2018, pp. 44–51.
[28] t. i. erdei, z. molnár, n. c. obinna, g. husi, a novel design of an augmented reality based navigation system & its industrial applications, acta imeko, vol. 7, no. 1, 2018, pp. 57–62. doi: 10.21014/acta_imeko.v7i1.528
[29] s. aheleroff, x. xu, r. y. zhong, y. lu, digital twin as a service (dtaas) in industry 4.0: an architecture reference model, advanced engineering informatics, vol. 47, 2021, p. 101225. doi: 10.1016/j.aei.2020.101225
[30] faro, technical specification sheet. online [accessed 28 november 2022] https://it-knowledge.faro.com/hardware/faroarm_and_scanarm/faroarm_and_scanarm/technical_specification_sheet_for_the_edge_faroarm_and_scanarm
[31] artec3d, geomagic wrap. online [accessed 28 november 2022] https://www.artec3d.com/3d-software/geomagic-wrap
[32] iso/iec 20922:2016, information technology – message queuing telemetry transport (mqtt) v3.1.1, 2016. online [accessed 28 november 2022] https://www.iso.org/standard/69466.html
[33] oasis, mqtt version 3.1.1, 2015. online [accessed 28 november 2022] http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/
[34] scpi consortium, standard commands for programmable instruments (scpi), volume 1: syntax and style, usa, may 1999. online [accessed 28 november 2022] https://www.ivifoundation.org/docs/scpi-99.pdf

monte carlo-based 3d surface point cloud volume estimation by exploding local cubes faces

acta imeko issn: 2221-870x june 2022, volume 11, number 2, 1-9

nicola covre1, alessandro luchetti1, matteo lancini2, simone pasinetti2, enrico bertolazzi1, mariolino de cecco1
1 department of industrial engineering, university of trento, via sommarive 9, 38123 trento, italy
2 department of mechanical and industrial engineering, university of brescia, via branze 38, 25121 brescia, italy

section: research paper
keywords: monte carlo; volume estimation; affiliation criterion; cube explosion; point cloud
citation: nicola covre, alessandro luchetti, matteo lancini, simone pasinetti, enrico bertolazzi, mariolino de cecco, monte carlo-based 3d surface point cloud volume estimation by exploding local cubes faces, acta imeko, vol. 11, no. 2, article 32, june 2022, identifier: imeko-acta-11 (2022)-02-32
section editor: francesco lamonaca, university of calabria, italy
received november 22, 2021; in final form february 25, 2022; published june 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this project has received funding from the european union's horizon 2020 research and innovation program, via an open call issued and executed under project eurobench (grant agreement n° 779963).
corresponding author: nicola covre, e-mail: nicola.covre@unitn.it

abstract: this article proposes a state-of-the-art algorithm for estimating the 3d volume enclosed in a surface point cloud via a modified extension of the monte carlo integration approach. the algorithm consists of a pre-processing of the surface point cloud, a sequential generation of points managed by an affiliation criterion, and the final computation of the volume. the pre-processing phase allows a spatial reorientation of the original point cloud, the evaluation of the homogeneity of its point distribution, and its enclosure inside a rectangular parallelepiped of known volume. the affiliation criterion using the explosion of cube faces is the core of the algorithm: it handles the sequential generation of points and provides an effective extension of the traditional monte carlo method by introducing its applicability to discrete domains. finally, the volume is estimated as a function of the total number of generated points, the portion enclosed within the surface point cloud, and the parallelepiped volume. the developed method proves to be accurate with surface point clouds of both convex and concave solids, reporting an average percentage error of less than 7 %. it also shows considerable versatility in handling clouds with sparse, homogeneous, and sometimes even missing point distributions. a performance analysis is presented by testing the algorithm on surface point clouds obtained from meshes of virtual objects as well as from real objects reconstructed using reverse engineering techniques.

1. introduction

estimating the volume enclosed in a three-dimensional (3d) surface point cloud is a widely explored topic in several scientific fields. with the increase of technologies for the virtual reconstruction of 3d environments and objects, many devices, such as kinect, lidar, and realsense, make it possible to acquire depth images with increasing accuracy and resolution [1]-[4]. several fields, such as mobile robotics [5], reverse prototyping [6], industrial automation, and land management, require accurate and efficient data processing to extract geometrical features from the real environment, such as distances, areas, and volumes. among the different geometric features, volume estimation has been presented as a challenging issue and widely studied with different approaches in the literature. among the literature contributions on object volume estimation based on 3d point clouds, chang et al. [7] used the slice method and a least-squares approach, achieving high accuracy but investigating mainly known and homogeneous solids. the same slice-based 3d point cloud volume calculation was applied by zhi et al. [8]. in general, the main limitations of the slice method for volume estimation are its dependence on the quality of the point cloud and its inability to handle complex shapes. bi et al. [10] and xu et al. [11] estimated canopy volume by using only the simple convex hull algorithm [12], with the problem of volume overestimation in the case of concave surfaces. lin et al. [13] improved the convex hull algorithm to handle concave polygons for the estimation of the
tree crown volume, but their approach is still limited to providing a gross volume estimation that cannot be applied to complex objects with fine details. lee et al. [9] proposed a waste volume calculation using the triangular meshing method starting from the acquired point cloud. on the one hand, this method is only as accurate as the acquired point cloud; on the other hand, it relies entirely on mesh processing tools, such as meshlab [14], and it cannot work if the 3d reconstructed object is an open domain. we propose an innovative and competitive method to compute the volume of an object based on 3d point clouds via a modified extension of the monte carlo integration approach: without interpolation or mesh reconstruction of the surface point cloud, it can handle homogeneous and non-homogeneous point cloud surfaces, complex and simple shapes, as well as open and closed domains.

1.1. the monte carlo approach

traditional methods for numerical integration on a volume, such as the riemann integral [15] or cavalieri-simpson [16], partition the space into a dense grid, approximate the distribution in each cell with elements of known geometry, and compute the overall volume by summing up all the contributions. in contrast, within a previously defined interval containing the distribution of interest, the monte carlo method generates random points with uniform distributions along the different dimensions to estimate the integral [17]. as shown in figure 1, in the case of a 2d example we wish to calculate the area $A_{\text{target}}$ of a closed surface. the geometric element under consideration (green line, $S_{\text{2D,target}}$) is enclosed within a 2d box ($S_{\text{2D,box}}$) of known area $A_{\text{box}}$. the total amount $N$ of randomly generated points ($P_{\text{generated}}$) falls inside $S_{\text{2D,box}}$: some of them fall outside $S_{\text{2D,target}}$ (blue points), while the others fall inside (red points, counted as $n_{\text{inside}}$). in the 2d case, each $P_{\text{generated}}$ is identified by its 2d cartesian coordinates. an affiliation criterion (most often expressed by a simple mathematical equation) allows the identification and counting of the points dropped inside and outside $S_{\text{2D,target}}$. in particular, these quantities are related by the following expression:

$$ A_{\text{target}} = \lim_{N \to \infty} \frac{n_{\text{inside}}}{N} \cdot A_{\text{box}} \qquad (1) $$

1.2. 3d extension of the monte carlo approach

as reported by newman et al. [18], this computational approach is particularly suitable for high-dimensional integrals. for this reason, we extended the 2d monte carlo method described in the previous subsection to the calculation of 3d volumes starting from their discrete surface representation. each point cloud can have, within a certain range, variable resolution and spatial distribution homogeneity. in this case, $N$ points are generated with uniform distribution along the three dimensions to estimate the volume of the unknown object ($V_{\text{target}}$) inside the prismatic element ($S_{\text{3D,box}}$). in the 3d case, each $P_{\text{generated}}$ is identified by its 3d cartesian coordinates. $V_{\text{target}}$ is then calculated by counting the number of generated points that fall inside it ($n_{\text{inside}}$).
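a minimal numerical sketch of equation (1) in python, estimating the area of a unit circle (for which the affiliation criterion is simply x² + y² ≤ 1) from uniform points in a 2 × 2 bounding box; the point count is arbitrary.

import random

random.seed(0)
N = 100_000
A_box = 4.0  # area of the [-1, 1] x [-1, 1] bounding box
n_inside = sum(
    1 for _ in range(N)
    if random.uniform(-1, 1) ** 2 + random.uniform(-1, 1) ** 2 <= 1.0
)
print("estimated area:", n_inside / N * A_box)  # converges to pi = 3.1415...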
equation (1) becomes:

$$ V_{\text{target}} = \lim_{N \to \infty} \frac{n_{\text{inside}}}{N} \cdot V_{\text{box}} \qquad (2) $$

usually, when the target object is represented by a continuous surface described by a mathematical equation, as in the 2d case, the affiliation criterion is expressed by a continuous mathematical model. it is, therefore, easy to determine when a point falls inside or outside the target surface. however, the problem becomes more difficult when the target is represented by the discrete distribution of points lying on its surface ($S_{\text{3D,pointcloud}}$), as shown in figure 2. in this case, it is difficult to determine whether a point falls within $S_{\text{3D,pointcloud}}$ or not. moreover, when $S_{\text{3D,pointcloud}}$ comes from a real acquisition, noise must also be taken into account: due to acquisition errors, not all the points of the cloud lie on the target surface.

this paper is organised as follows:
• in the introduction we presented the problem of volume computation from point clouds, the state of the art, and our approach, describing the traditional monte carlo method, its 3d extension, and its limitations.
• in the following section, we describe our algorithm for volume estimation of point clouds based on the monte carlo approach.
• in the third section, we present the results obtained for the validation of the algorithm, testing it on both virtual and real objects.
• in the final section, we draw the conclusions.

figure 1. 2d example of the monte carlo integral approach: in green the geometric element under consideration ($S_{\text{2D,target}}$), in black the 2d box ($S_{\text{2D,box}}$) of known area, in red the inside points, and in blue the outside points.

figure 2. extension of the monte carlo integral approach to the point cloud of a 3d sphere ($S_{\text{3D,pointcloud}}$) enclosed in a 3d box ($S_{\text{3D,box}}$).

2. developed algorithm

the algorithm, reported as pseudocode in appendix a and explained below, takes the following user-defined parameters as input:
• the point cloud acquired from the surface of the object whose volume is computed;
• the number of points $N$ with which the monte carlo volume estimation must be performed.
the algorithm is composed of a pre-processing function and a classification function. the pre-processing of the point cloud checks its orientation, encloses the cloud surface in a box of known volume ($S_{\text{3D,box}}$), and performs a preliminary analysis of its distribution. the classification function is based on the "cube explosion" affiliation criterion, described in the following paragraphs.

2.1. pre-processing

given the total amount of points $N$, an efficient monte carlo approach should provide the smallest box enclosing the volume taken for the measurement. in fact, for a fixed $N$, the bigger $S_{\text{3D,box}}$ is, the lower the 3d point resolution and the worse the accuracy of the monte carlo method. hence $S_{\text{3D,box}}$ is defined by taking the minimum and maximum points in each of the three principal directions x, y, z of $S_{\text{3D,pointcloud}}$. to further minimize the box dimensions, a prior re-orientation of $S_{\text{3D,pointcloud}}$ is performed by applying principal component analysis (pca). once $S_{\text{3D,box}}$ is defined, its volume is computed and used at the end of the affiliation criterion as $V_{\text{box}}$.
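a minimal numpy sketch of this pre-processing step, under the assumption that the pca re-orientation is performed via a singular value decomposition of the centred cloud; variable names are illustrative.

import numpy as np

def preprocess(points):
    """re-orient an (n, 3) cloud with pca and return it with V_box."""
    centred = points - points.mean(axis=0)
    # rows of vt are the principal axes (eigenvectors of the covariance)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    aligned = centred @ vt.T          # cloud expressed on its principal axes
    extent = aligned.max(axis=0) - aligned.min(axis=0)
    return aligned, float(np.prod(extent))  # re-oriented cloud, V_box

cloud = np.random.rand(1000, 3) * [2.0, 1.0, 0.5]  # toy elongated cloud
aligned, v_box = preprocess(cloud)
print("V_box =", v_box)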
in the case of real objects, $S_{\text{3D,pointcloud}}$ is collected by using depth cameras or other 3d scanners: the higher the resolution of the tool, the denser the point cloud. in the case of virtual objects, the point cloud is obtained by collecting the mesh nodes: the denser the mesh, the more homogeneous the point cloud distribution. however, the homogeneity of the point distribution cannot be taken for granted. for this reason, a preliminary statistical analysis of $S_{\text{3D,pointcloud}}$ is carried out to obtain the parameters needed for the affiliation criterion, the core of the monte carlo method. in particular, the parameters are obtained from the quantile distribution $Q$ of the distances between each point and its closest neighbours.

2.2. affiliation criterion using the "explosion" of cube faces

the proposed affiliation criterion iteratively defines whether each point generated inside $S_{\text{3D,box}}$ by the monte carlo method belongs to the external or internal domain of $S_{\text{3D,pointcloud}}$. the criterion is based on the concept of "explosion of cube faces": a cube of known edge $l_{\text{cube}}$ is generated around each $P_{\text{generated}}$, and each of its faces is iteratively extruded to determine how, and how many times, it encounters $S_{\text{3D,pointcloud}}$ (figure 3). each of the 6 face extrusions corresponds to a specific direction η along one of the 3 main directions x, y, z and returns a binary judgment $j_\eta$: $j_\eta$ is 0 or 1 if the point is judged to be outside or inside $S_{\text{3D,pointcloud}}$, respectively. eventually, by taking the mode of all $j_\eta$, the final judgment $j$ is assessed and the point affiliation (internal or external) is defined. each cube is oriented using the same reference system as $S_{\text{3D,pointcloud}}$. at each iteration, the procedure selects the direction η and extrudes the relative faces of the cube along their outgoing normal. initially, $l_{\text{cube}}$ is determined by the following empirical equation:

$$ l_{\text{cube}} = Q_{0.5} \cdot 3.5^{\left(\frac{Q_{0.85}}{Q_{0.5}} - 1\right)} \qquad (3) $$

where $Q_{0.5}$ and $Q_{0.85}$ are the quantiles at 50 % and 85 % of the distribution of the distances between each point and its closest neighbours, respectively. equation (3) and the chosen quantiles result from an empirical validation of the performances. in particular, the choice of $Q_{0.5}$ considers the median distance between two consecutive points, while $Q_{0.85}$ highlights a sparse distribution of $S_{\text{3D,pointcloud}}$ and avoids initializing a small cube whose extruded faces would pass through $S_{\text{3D,pointcloud}}$ without touching its points. histograms of the distribution of the relative distances between each point and its closest neighbours for two different point clouds are shown in figure 4. the first histogram refers to a non-homogeneous point cloud, namely the pokémon (mew), while the second refers to the homogeneous cloud of the geometric solid sphere, both reported in table 1. in the first distribution the ratio between $Q_{0.85}$ and $Q_{0.5}$ is 2.74, while the quantile ratio of the second distribution is 1.01. the quantile ratio shows the proportion of the sparse portions of $S_{\text{3D,pointcloud}}$ with respect to the distribution of the average distances. in mew's point cloud the ratio is high because the distribution is strongly non-homogeneous, with a dense clustering of points on the eyes and sparse points on the belly. in the sphere's point cloud the ratio is close to one, as the difference between $Q_{0.85}$ and $Q_{0.5}$ is almost null due to the homogeneity of the point cloud distribution. each face extrusion may intercept a sub-portion of the points of $S_{\text{3D,pointcloud}}$ ($P_{\text{intercepted}}$).
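the cube-edge initialisation can be sketched in a few lines of python, assuming the exponential reading of equation (3), which yields $l_{\text{cube}} \approx Q_{0.5}$ for homogeneous clouds and a larger cube for sparse ones, consistent with the discussion above; the brute-force nearest-neighbour search is for brevity only.

import numpy as np

def cube_edge(points):
    """edge length l_cube of equation (3) from nearest-neighbour quantiles."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)          # ignore self-distances
    nearest = dist.min(axis=1)              # distance to closest neighbour
    q50, q85 = np.quantile(nearest, [0.5, 0.85])
    return q50 * 3.5 ** (q85 / q50 - 1.0)

pts = np.random.rand(500, 3)  # toy cloud
print("l_cube =", cube_edge(pts))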
if the total number of intercepted points exceeds the threshold value ($th_{\text{intercepted}}$) of 3 points, $l_{\text{cube}}$ is reduced by $l_{\text{reduction}}$ and the extrusion is repeated. the criterion for choosing $th_{\text{intercepted}}$ equal to 3 points depends on the clusterization checks introduced to strengthen the algorithm, as explained at the end of this section. the value of $l_{\text{reduction}}$ has been empirically set equal to 10 % of the current value of $l_{\text{cube}}$ as a compromise between final accuracy and computational time: the smaller $l_{\text{reduction}}$ is, the higher the final accuracy but the longer the computational time.

figure 3. example of explosion in the z-direction (η = z) of one cube face from the position of a generated point; interception of 2 clusters of points (in green, $P_{\text{intercepted}}$).

next, a clustering algorithm subdivides $P_{\text{intercepted}}$ into different clusters of points ($P_{\text{cluster}}$) along η. $j_\eta$ assumes the value 1 or 0 if the total number of clusters is, respectively, odd or even. the clusterization is performed by re-ordering $P_{\text{intercepted}}$ along η. by computing the coordinate differences along η between each point and the next, a sequence of distances is obtained. ideally, considering clusters orthogonal to η, this difference within each $P_{\text{cluster}}$ should be null, as all the cluster points lie on the same orthogonal plane. however, in real applications several issues may occur, such as a slight inclination of $P_{\text{cluster}}$ with respect to η or random noise affecting the cluster distribution along η. to overcome these problems, a threshold ($th_{\text{cluster}}$) is set, empirically equal to $Q_{0.5}$. therefore, considering $P_{\text{intercepted}}$ re-ordered along η, whenever the distance between two consecutive points is less than $th_{\text{cluster}}$, the two points belong to the same $P_{\text{cluster}}$; otherwise, a new $P_{\text{cluster}}$ begins.

table 1. algorithm outputs with virtual objects: original virtual model, original point cloud ($S_{\text{3D,target}}$), meshlab reconstruction, and our output for the cube, sphere, arm, hand, and pokémon mew.

a further issue affecting the clusterization occurs when the extrusion intercepts $S_{\text{3D,pointcloud}}$ tangentially. in this case, a misleading clusterization is performed and further checks are needed to make the affiliation criterion more robust. in particular, this can be detected by performing an analysis of variance over each $P_{\text{cluster}}$: if the variance of a single cluster is greater than $th_{\text{cluster}}$, the entire analysis along η is compromised and $j_\eta$ must be discarded. to make the affiliation criterion more robust, a limitation on the maximum number of clusters encountered along each extrusion is also introduced: only those directions η with a total number of clusters less than or equal to 1 are selected. this reduces the probability of encountering misleading clusters, as in the case of noisy point clouds acquired from real objects; the noise mostly appears as isolated points outside $S_{\text{3D,pointcloud}}$, and rarely inside it. furthermore, this justifies the choice of $th_{\text{intercepted}}$, which refers to the minimum number of points that define a plane.
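the 1d clustering step can be sketched as follows: the intercepted points are sorted by their coordinate along the extrusion direction η and split wherever the gap between consecutive coordinates exceeds $th_{\text{cluster}}$; an even cluster count maps to $j_\eta = 0$ (outside), an odd count to $j_\eta = 1$ (inside). variable names are illustrative.

import numpy as np

def count_clusters(eta_coords, th_cluster):
    """number of clusters among intercepted points ordered along eta."""
    if eta_coords.size == 0:
        return 0
    gaps = np.diff(np.sort(eta_coords))
    return 1 + int((gaps > th_cluster).sum())

# two well-separated groups of intercepted points along eta
coords = np.array([0.00, 0.01, 0.02, 5.00, 5.01])
n = count_clusters(coords, th_cluster=0.05)
print(n, "-> j_eta =", n % 2)  # 2 clusters -> even -> j_eta = 0 (outside)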
3. results

this section reports the validation of the extended monte carlo algorithm on real and virtual objects of different shapes. a total of nine objects was considered for this discussion, including regular geometric solids, such as spheres and prisms, as well as more complex shapes, such as human hands, arms, the pokémon (mew), and 3d scans of ancient bronze statuettes of mythological figures. point clouds of real objects were acquired with an azure kinect tof camera and a konica minolta vivid vi-9i 3d scanner for reverse engineering. the volumes used as reference for the validation of the measurements on virtual and real objects were respectively computed using the virtual mesh and volume estimation by immersion in water [19]. the accuracy of the monte carlo algorithm increases with the total number of generated points, as can be observed in figure 5 and figure 6. in particular, figure 6 reports the mean and variance, with a box plot, of the relative error distribution as a function of the number of generated points. as can be seen, the error is particularly high when few points are generated and gradually decreases as their number increases. for both virtual and real solids, depending on the resolution of the point cloud and the reported details, the asymptotic percentage error, computed with respect to the reference volume, is below 7 % when the total number of generated points is greater than 42 875. on the contrary, with a low number of generated points, the accuracy is low. it is also worth noting that the variance of the measurements decreases as the number of generated points increases. this indicates that, for all the objects considered, even for convex, non-uniform, and folded point clouds, the relative error decreases and converges with the same trend.

figure 4. distribution of the distances between each point and its closest neighbours for (a) the pokémon mew and (b) the sphere point clouds.

figure 5. error distribution of the pokémon mew volume estimation in % with respect to the increasing number of points generated with the explosion cubes criterion.

figure 6. box plot of the error distribution considering all nine objects along with the increasing number of points generated with the explosion cubes criterion.

as a general rule, on the one hand, the performance of the algorithm can be driven by selecting the number of generated points; on the other hand, the more points are generated, the longer the computational time. the average times spent by the proposed algorithm are shown in figure 7. the tests were performed on a macbook pro, 2 ghz intel core i5 quad-core, 16 gb of ram, in the matlab_r2020b environment. the average time grows non-linearly as the number of points increases, as shown in figure 7. however, for the same accuracy, the average computational time is less than or comparable to the time taken by the volume estimation methods reported in the literature. therefore, the user can choose the total number of generated points with which the monte carlo estimation has to be executed as a trade-off between the desired accuracy (figure 6) and the computational time (figure 7). in addition, due to the asymptotic behaviour of the error, the increase in performance becomes negligible beyond a threshold. for these reasons, it is convenient to choose a total number of generated points just above the value at which the estimated volume stabilizes (in our case 42 875 points). moreover, the greater the total number of generated points, the higher the resolution of the point cloud generated by the monte carlo algorithm.
this can be used as visual feedback to evaluate the goodness of the affiliation criterion and to compare it with algorithms that reconstruct the mesh. as shown in the last column of both table 1 and table 2, the representation of the inner points with the developed method returns a good representation of the original point cloud $S_{\text{3D,pointcloud}}$. on the contrary, mesh reconstructions are not always reliable: a mesh reconstruction using the default parameters of poisson's reconstruction method [20] in the meshlab environment is shown in the fourth column of the same tables. as can be observed, good results are returned only for uniformly distributed point clouds and regular shapes, while the same cannot be said for complex shapes or non-uniform point clouds, such as the pokémon mew and the hand, with consequent negative repercussions on the volume calculation.

table 2. algorithm outputs with real objects: real object, original point cloud ($S_{\text{3D,target}}$), meshlab reconstruction, and our output for cerbero, ballerina, head, and mercurio.

figure 7. computational times of the proposed algorithm changing the number of generated points.

another important aspect to consider when the volume is computed concerns the possibility of working with discontinuous and partially open surfaces, as in figure 8(a), where the acquired point cloud has large discontinuities around the elbow. most of the volume estimation algorithms based on mesh reconstruction need manual fitting and adjustments to manage the missing clusters of points; otherwise, the measurement cannot be pursued. on the contrary, the proposed affiliation criterion for monte carlo volume integration is robust to discontinuities thanks to the democratic judgment of the 6 cube faces: even if the point cloud is open along a few directions and those face extrusions return $j_\eta = 0$, the final judgment $j$ will still be equal to 1. table 3 shows the actual volume of the objects compared with that obtained with meshlab, when possible, and with our proposed method, including its percentage error on the measurement. nevertheless, given the choice of the external box and maintaining its ratio to the calculated volume, for any object the percentage error does not change when scaling its size. this means that the uncertainty on the measurement is proportional to the percentage error multiplied by the calculated volume.

4. conclusions

this paper proposes an extension of the 3d monte carlo method for calculating the volumes of objects starting from their surface point clouds. the overall algorithm includes a pre-processing analysis that re-orients and evaluates the point cloud, an affiliation criterion based on the explosion of cube faces to discern the inner from the outer points of the monte carlo method, and a final volume estimate as described in equation (2). as the reported results also include convex, complex, and folded surfaces, such as the pokémon mew or the cerbero point cloud, it was possible to show that the cube explosion affiliation criterion is stable and reliable, returning consistent and repeatable measurements compared with gold-standard software for volume measurement, such as meshlab. the algorithm proves to be accurate with point clouds of different objects, both in terms of shape and distribution of points.
the performance was then tested with the surface point clouds of 9 virtual and real objects, reporting an average percentage error on the tested samples lower than 7 % with a computational time of a few minutes, depending on the desired accuracy.

figure 8. (a) open point cloud from a real acquisition of a human arm, (b) monte carlo representation of inside generated points, (c) monte carlo representation of outside generated points.

table 3. actual volume of the objects and its estimation with meshlab and our method.

3d object     actual volume (dm³)   meshlab volume (dm³)   monte carlo volume (dm³)   monte carlo error (%)
cube          1.00                  1.00                   0.99                       0.10
sphere        4.19                  4.19                   4.10                       2.15
arm           1.33                  na                     1.39                       4.51
hand          0.412                 na                     0.409                      0.73
pokémon mew   0.539                 na                     0.548                      1.67
cerbero       4.08                  na                     4.18                       2.45
ballerina     1.24                  na                     1.18                       4.84
head          4.54                  4.52                   4.50                       0.88
mercurio      4.53                  4.57                   4.66                       2.82

references
[1] k. khoshelham, s. elberink, accuracy and resolution of kinect depth data for indoor mapping applications, sensors, vol. 12, no. 2, 2012, pp. 1437-1454. doi: 10.3390/s120201437
[2] k. khoshelham, accuracy analysis of kinect depth data, isprs workshop laser scanning, calgary, canada, 29-31 august 2011, pp. 133-138. doi: 10.5194/isprsarchives-xxxviii-5-w12-133-2011
[3] j. vaze, j. teng, g. spencer, impact of dem accuracy and resolution on topographic indices, environmental modelling & software, vol. 25, no. 10, 2010, pp. 1086-1098. doi: 10.1016/j.envsoft.2010.03.014
[4] l. keselman, j. woodfill, a. jepsen, a. bhowmik, intel realsense stereoscopic depth cameras, proc. of the ieee conference on computer vision and pattern recognition workshops, honolulu, hawaii, 21-26 july 2017, pp. 1267-1276. doi: 10.1109/cvprw.2017.167
[5] d. borrmann, a. nüchter, m. ðakulović, i. maurović, i. petrović, d. osmanković, j. velagić, a mobile robot based system for fully automated thermal 3d mapping, advanced engineering informatics, vol. 28, no. 4, 2014, pp. 425-440. doi: 10.1016/j.aei.2014.06.002
[6] d. li, x. feng, p. liao, h. ni, y. zhou, m. huang, z. li, y. zhu, 3d reverse modeling and rapid prototyping of complete denture, in frontier and future development of information technology in medicine and education, springer, dordrecht, 2014, pp. 1919-1927. doi: 10.1007/978-94-007-7618-0_226
[7] w. chang, c. wu, y. tsai, w. chiu, object volume estimation based on 3d point cloud, 2017 international automatic control conference (cacs), pingtung, taiwan, 12-15 november 2017, pp. 1-5. doi: 10.1109/cacs.2017.8284244
[8] y. zhi, y. zhang, h. chen, k. yang, h. xia, a method of 3d point cloud volume calculation based on slice method, international conference on intelligent control and computer application (icca 2016), atlantis press, zhengzhou, china, january 2016, pp. 155-158. doi: 10.2991/icca-16.2016.35
[9] y. lee, s. cho, j. kang, a study on the waste volume calculation for efficient monitoring of the landfill facility, in computer applications for database, education, and ubiquitous computing, springer, berlin, heidelberg, 2012, isbn 978-3-642-35602-5, pp. 158-169.
[10] y. bi, l. qi, s. chen, l. li, s. liu, canopy volume measurement method based on point cloud data, science & technology review, beijing, china, vol. 31, no. 27, 2013, pp. 31-36. [in chinese] doi: 10.3981/j.issn.1000-7857.2013.27.004
[11] w. xu, z. feng, z. su, h. xu, y. jiao, o. deng, an automatic extraction algorithm for individual tree crown projection area and volume based on 3d point cloud data, spectroscopy and spectral analysis, vol. 34, no. 2, 2014, pp. 465–471. doi: 10.3964/j.issn.1000-0593(2014)02-0465-07
[12] g. klette, a recursive algorithm for calculating the relative convex hull, 25th international conference of image and vision computing, ieee, new zealand, 8-9 november 2010, pp. 1–7. doi: 10.1109/ivcnz.2010.6148857
[13] w. lin, y. meng, z. qiu, s. zhang, j. wu, measurement and calculation of crown projection area and crown volume of individual trees based on 3d laser scanned point-cloud data, international journal of remote sensing, vol. 38, no. 4, 2017, pp. 1083–1100. doi: 10.1080/01431161.2016.1265690
[14] p. cignoni, m. callieri, m. corsini, m. dellepiane, f. ganovelli, g. ranzuglia, meshlab: an open-source mesh processing tool, in eurographics italian chapter conference, salerno, italy, 2008, pp. 129-136. doi: 10.2312/localchapterevents/italchap/italianchapconf2008/129-136
[15] r. mcleod, the generalized riemann integral, vol. 20, american mathematical soc., 1980.
[16] m. kiderlen, k. petersen, the cavalieri estimator with unequal section spacing revisited, image analysis & stereology, vol. 36, no. 2, 2017, pp. 133–139. doi: 10.5566/ias.1723
[17] w. press, s. teukolsky, w. vetterling, b. flannery, numerical recipes: the art of scientific computing, cambridge university press, 1992.
[18] m. newman, g. barkema, monte carlo methods in statistical physics, chapters 1-4, vol. 24, oxford university press, new york, usa, 1999.
[19] d. m. k. s. kaulesar sukul, p. t. den hoed, e. j. johannes, r. van dolder, e. benda, direct and indirect methods for the quantification of leg volume: comparison between water displacement volumetry, the disk model method and the frustum sign model method, using the correlation coefficient and the limits of agreement, journal of biomedical engineering, vol. 15, no. 6, 1993, pp. 477-480. doi: 10.1016/0141-5425(93)90062-4
[20] m. kazhdan, m. bolitho, h. hoppe, poisson surface reconstruction, proc. of the fourth eurographics symposium on geometry processing, vol. 7, 2006, pp. 61-70. online [accessed 21 april 2022] https://hhoppe.com/poissonrecon.pdf

appendix a

a tool for curating and searching databases proving traceable analysis of data and workflows

acta imeko issn: 2221-870x march 2023, volume 12, number 1, 1-6

frederic brochu1, michael chrubasik1, spencer a. thomas1
1 data science, national physical laboratory, hampton road, teddington, middlesex, tw11 0lw, united kingdom

section: research paper
keywords: searchable metadata; reproducibility; data curation; data traceability; fair
citation: frederic brochu, michael chrubasik, spencer a. thomas, a tool for curating and searching databases proving traceable analysis of data and workflows, acta imeko, vol. 12, no. 1, article 12, march 2023, identifier: imeko-acta-12 (2023)-01-12
section editor: daniel hutzschenreuter, ptb, germany
received november 18, 2022; in final form february 16, 2023; published march 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was funded by the department for business, energy & industrial strategy through the uk's national measurement system.
corresponding author: frederic brochu, e-mail: frederic.brochu@npl.co.uk

abstract: we present a framework for easily annotating, archiving, retrieving, and searching measurement data from a large-scale data archival system. our tool extends and simplifies the interaction with the database and is implemented in popular scientific applications used for data analysis, namely matlab and python. this allows scientists to execute complex interactions with the database for data curation and retrieval tasks in a few simple lines of accessible templated code. scientists can now ensure their measurement data is well curated and fair (findable, accessible, interoperable, and reusable) compliant without requiring specific data skills or knowledge. our tools allow users to perform sql-type (structured query language) queries on the data from simple templated scripts, allowing data retrieval from long-term storage systems.

1. introduction

technological and scientific advances over the last 20+ years have led to the ability to generate and store vast amounts of data. furthermore, the emphasis on reducing acquisition times has significantly increased the throughput of data from experiments. in parallel with the developments in measurement technologies, there have been significant advancements in data storage, allowing this data to be efficiently captured and stored. however, a lack of systems or standards to organise or curate data leads to ad-hoc file structures, inconsistent conventions in recording metadata, and a loss of data provenance. consequently, the vast amounts of data being recorded are not findable, accessible, interoperable, and reusable (fair) [1]. without a well-curated database [2] that includes rich metadata, data will not be findable. for research institutions that generate or collect large volumes of data, this is highly problematic, as it significantly restricts the data's reusability and therefore its value. this is compounded by potentially high financial costs associated with repeated acquisitions and double storage if the data are not discoverable. the inability to retrieve data may have significant repercussions for the reproducibility of results, traceability, or adherence to funder requirements. the concept of measurement traceability, where any instrument's measurement can be linked to a known standard through an unbroken chain of comparisons, is well established. data traceability extends this concept to data and analysis pipelines, where a given output (processed data, figures, statistical tests, etc.) is linked back to the data at the point of measurement through an unbroken chain of steps in a data workflow. these steps include data conversion, data processing (for example, noise reduction), and data analysis steps (e.g., statistical tests, machine learning, etc.) [2]. throughout the rest of this paper, we use the term traceability to refer to data traceability. previously, at the national physical laboratory (npl), we have developed methods for curating data at the point of measurement which can be used to establish a fair and traceable database [2], [3]. a curated database with relevant well-structured metadata tags permits searches using structured query language (sql) or similar, where investigators can return a list of datasets within the database matching given criteria. the metadata itself can be analysed to reveal insights into the data.
for example, in radiology, analysis of the sensitivity and radiation exposure over time, at different sites, uncovered inter-site and temporal differences [4]. however, establishing such a system requires a high level of computational skills and often requires bespoke software, creating significant barriers for measurement scientists. we use our internal database for long-term curated storage of measurement data along with the experimental conditions that form the basis of the metadata and are vital for traceability and reproducibility. in this work, we introduce a tool that combines and extends the functionalities of the application programming interface (api) provided with npl's archive for file transfer, annotation, and metadata queries into a single, convenient interface tool accessible from the popular scientific applications matlab and python. this tool not only simplifies interactions with the archive, it also makes the entire data management process more accessible for scientists. the data archive we use in this work is an 'objectstore', a database storing data as objects that have their own attributes comprising system metadata (size, creation date, etc.) and custom metadata (user-definable fields). in this work we exploit the archive's use of data objects to store any number of multimodal data files, in any format, as a single hierarchical data format (hdf5) file that we use as a 'container' for data files. a single hdf5 container file, consisting of any number of data files, corresponds to a data object uploaded to the data archive. we refer to the data objects as hdf5 container files throughout the manuscript. we can further exploit the objectstore functionality by utilising the custom metadata to define domain-specific attributes that are used to 'tag' the hdf5 file in the objectstore, enabling highly specialised and domain-specific searching of the data. for large organisations this provides a flexible approach to automatically curate large and diverse databases without domain-specific infrastructure. for users, this enables multiple data files to be wrapped in a single container that is stored in a data archive for long-term storage alongside relevant metadata for searching and retrieval. our tool provides a user-friendly interface for users to perform all the necessary steps (generating the container, tagging with metadata, uploading to the database, performing searches, and retrieving the data) without any expertise in these areas.
we provide a case study using large and complex multimodal cohort data from experiments conducted at multiple institutions and across multiple instruments.

2. background

data curation is the process of storing data in an organised, structured way with rich machine-actionable information about the data and its provenance. analogous to finding a book in a library, data curation enables users to locate specific datasets based on a defined list of attributes, for example, a dataset consisting of an image of a cat in winter captured with a mobile phone camera in the countryside. although many databases enable searching for the criteria 'images', 'cat', 'camera phone', 'winter', and 'countryside', there is no strict matching of attributes that these keywords map to. in a curated database we can perform such searches as type = 'images', subject = 'cat', device = 'camera phone', season = 'winter', and location = 'countryside'. this search provides exact matches rather than any-attribute matches to the keywords in the former case. the demand for long-term data curation arises from the researchers themselves, as it enables them to utilise the data in future studies; from funding bodies, through data retention requirements; and from the community, which promotes open science with fair data. well-curated data has the additional benefit of enabling meta-analysis across the database, which can provide more precise estimates compared to individual studies, as well as an assessment of variability [5] and the development of computational models [6]. for example, meta-analysis has identified a higher proportion of positive covid-19 tests in low and low-middle income countries compared to higher income countries [7], evaluated treatment effects [8], and assessed the impact of missing data on outcomes [9]. there is currently no data curation tool or platform suited to managing experimental data due to its inherent complexities. this is particularly problematic for research, which will typically be subject to funding bodies' data storage and retention policies. due to the high cost (financial, expertise, and time) involved in some experimental studies, researchers want to maximise the future utility of the data in other studies. for example, healthcare or pharmaceutical studies involving tissue imaging have very complex data collection pipelines, with different centres responsible for collecting the samples (e.g. biopsies), sample preparation (e.g. embedding and sectioning), and the measurement data (e.g. imaging). in this case, one experiment can involve multiple institutions, and the provenance of the sample is highly complex. data quality controls and future meta-analysis require this information to be captured in a machine-actionable way. current solutions range from individual-level record keeping to universal data repositories; we argue these are insufficient for curating measurement data. record keeping, such as spreadsheets or a database (sql, access, etc.) capturing information such as data storage locations, does not constitute a data archive, as it simply lists the locations and possibly some metadata. the fact that this information is unstandardized, prone to error, not machine-actionable, and not searchable (in a database environment) is even more problematic, as it prevents the data from being fair.
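returning to the cat-photo example above, the structured search of a curated database can be illustrated with a few lines of python and an in-memory sqlite database; the schema and the record are invented purely for illustration.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE datasets (name TEXT, type TEXT, subject TEXT, "
           "device TEXT, season TEXT, location TEXT)")
db.execute("INSERT INTO datasets VALUES ('img_001', 'images', 'cat', "
           "'camera phone', 'winter', 'countryside')")

# exact attribute matching, unlike a free-text keyword search
rows = db.execute(
    "SELECT name FROM datasets WHERE type = 'images' AND subject = 'cat' "
    "AND device = 'camera phone' AND season = 'winter' "
    "AND location = 'countryside'"
).fetchall()
print(rows)  # [('img_001',)]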
universal data repositories such as zenodo, figshare, scientific data, dropbox, re3data, etc., offer storage of data and user flexibility with regard to the files and formats stored, as well as providing some scope for metadata. however, capturing the metadata is far from trivial [2], [3], [10], and there may be many terms to include. although many of these platforms offer a search functionality, it is a basic implementation that prevents highly specialised searches such as structured database queries. furthermore, there are no specific checks on the entered fields, hence information may be missed, incorrectly added, or exist in several forms (e.g., acronyms, capitalisation).

3. method

the developed tool is a set of matlab and python scripts handling interactions with the archive data storage and the encapsulation of experimental data with metadata. this enables "behind the scenes" operations at the command of scientists wishing to archive their data, without requiring the programming skills otherwise necessary to do so. the interface tool is invoked through matlab, which is used as a user interface for python code operating as a two-layer program. the first layer is a matlab master class describing a connection "object" with different callback functions. in the second layer, the functions are mapped into a python layer handling "representational state transfer" (rest) calls [11] to the objectstore apis for file handling and metadata queries. this configuration not only provides access to our tool for matlab and python users, but also provides a simple interface (in matlab) for users with little programming expertise. the python libraries are installed as dependencies of the code following our documented installation procedure, which also covers the straightforward matlab installation. this tool is designed to work within an organisation's digital infrastructure and thus can be preloaded onto institutional machines such as laptops and lab machines. file transfers are performed with the aws s3 protocol [11], with all data stored on our institution's internal data storage infrastructure and access permissions fully controllable by our it team. our code is available on a private gitlab repository as a pypi package, and a public release may be possible in the future. to be an effective solution, our tool is required: to store all relevant and related datasets as a single instance (see section 3.1); to link the data to the metadata (see section 3.2); to archive all data in a common location (see section 3.3); and to make the metadata searchable for data retrieval (see section 3.4). the layout of the tool and its different components are presented in figure 1.

3.1. hdf5 container

we store the data in a hierarchical data format (hdf5) container file that can hold an arbitrary number of data files and formats. this allows the storage of experimental data with associated datasets, such as calibration data or processing scripts, as a complete and unbroken data pipeline [3]. containerising the data and the associated steps in a workflow ensures reproducibility through the data pipeline. the data parsed into the container are stored in binary form, as this allows any data format to be supported. they are also compressed at the creation of the container to optimise data replication to the archive system.
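as an illustration of the container step, the short h5py sketch below packs every file in a directory into a single hdf5 file as gzip-compressed binary blobs; the function name, paths, and layout are hypothetical and not the tool's actual api.

import h5py
import numpy as np
from pathlib import Path

def pack(container_path, data_dir):
    """wrap all files under data_dir into one hdf5 container."""
    with h5py.File(container_path, "w") as h5:
        for f in Path(data_dir).rglob("*"):
            if f.is_file() and f.stat().st_size:
                blob = np.frombuffer(f.read_bytes(), dtype=np.uint8)
                # each file becomes a compressed byte dataset named by its
                # path relative to the data directory
                name = f.relative_to(data_dir).as_posix()
                h5.create_dataset(name, data=blob, compression="gzip")

pack("experiment.h5", "measurement_data")  # hypothetical paths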
3.2. annotation

annotation refers to the process of attributing information to a data file such that it becomes the metadata for that file, providing terms that the file can be searched with. the metadata for complex experiments is multi-source [10] and takes different formats. we collect and aggregate it all in a single 'well-formed' xml file [12]. xml is a metadata format that is both human- and machine-readable, and is the only format supported by the npl archive described in section 3.3. the metadata flow is duplicated: a copy is embedded in the hdf5 container, and another is used to link the hdf5 container with the associated metadata in the archive annotation database. linking the hdf5 with its associated metadata ensures the data are fair compliant prior to uploading the file to the database. although aiding reproducibility and interoperability for individual files, linking the data with metadata is not sufficient to provide a curated database that can be easily searched or mined for meta-analysis. by using standardised and well-formed metadata to link to the hdf5 files, we can automatically establish a curated database. that is, all hdf5 containers have the same metadata structure and are therefore well organised and can be viewed and searched in a systematic way. further details are given in section 3.4.

3.3. data archive (objectstore)

the annotated hdf5 container files are stored in our "objectstore" database. the objectstore is npl's large-scale data archival system, an instance of the hitachi content platform (hcp), which in its most basic form is a flexible database for storing and annotating data. file annotation tags the data with associated metadata for curation and database searching. the metadata can be defined by the user, known as 'custom metadata', and can be used for curating complex experimental data [2]. when tagged with metadata, data are stored in an internal database with a dedicated api allowing simple sql-type queries for searching the data. the objectstore consists of "tenants", organisational-level divisions of the system (e.g., departments), and "namespaces", logical groupings of objects (e.g., projects).

this archival system supports file versioning, where the history of any changes to the data is recorded, as well as annotation. both the data and the metadata can be updated for new versions. a database with versioning enabled can enhance traceability when the data are tagged with the associated metadata [3]. the ability to trace software, file, and documentation changes through an unbroken chain (i.e., the version history) can help identify bugs/errors and track system evolution. the ordered nature of versioning enables the user to return to a point in the evolution of the data and create a new branching point. this is particularly useful if errors have been identified or new methods have been developed, such as improved data processing.

the tool provides specific functions for: connecting to the database, uploading data after archiving it as a hdf5 container file, downloading datasets, and performing metadata queries (see section 3 for more details). an example of the matlab script to establish the database connection is given in figure 2.

figure 1. interface layout and functionality described in section 3. user functions are represented by orange arrows. the desired datasets are each converted to binary and added to a hdf5 container file that is annotated (or tagged) with relevant metadata. the hdf5 container file is uploaded to the curated database where the metadata can be queried. any desired data can be easily downloaded and automatically converted back to its native format.

figure 2. matlab script to set up the database connection. comments are given in lines beginning with % and coloured green. here users only need to specify the namespace and tenant they wish to connect to, which will be fixed for each project.

once a connection is established, the data can be uploaded simply by specifying the location of the data to be uploaded (local_dir) and the target storage location on the database (database_dir) in a function call.
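since file transfers use the aws s3 protocol, the same connect-and-upload pattern can be sketched in python with the standard boto3 client. the endpoint url, credentials, and bucket/namespace names below are placeholders, not the actual npl configuration.

```python
import boto3

# connect to an s3-compatible objectstore endpoint (placeholder
# url and credentials; in practice these are site-specific).
client = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.org",
    aws_access_key_id="MY_KEY",
    aws_secret_access_key="MY_SECRET",
)

# upload a previously packed and annotated hdf5 container to a
# "namespace" (modelled here as a bucket) under database_dir.
client.upload_file(
    "experiment_001.h5",                 # local container file
    "my-namespace",                      # namespace / bucket
    "database_dir/experiment_001.h5",    # target key on the archive
)
```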
the function first creates an hdf5 container file that is populated with all the data in the directory specified by the user. note that this directory can contain any number or format of data files and also supports the use of shortcuts/links to other directories. the latter is vital when dealing with very large files that may be stored on multiple disparate devices, such as different laboratory instruments, and avoids the need to transfer or duplicate data prior to using our tool. next, the tool tags the container with the (well-formed xml) metadata as outlined in section 3.2, enabling the search functions in the database. finally, this tagged hdf5 file is uploaded to the database. the script for this is given in figure 3, which highlights the simplicity of our tool's interface, vital for non-expert users.

similarly, data can be easily downloaded from the database. in this case "local_dir" is the folder location where the hdf5 container will be downloaded to and then unpacked to the same folder structure and data formats as the originally uploaded data. directories that were originally shortcuts are unpacked as subdirectories within the data parent directory, i.e., there are no shortcuts when data are downloaded and unpacked. the script for downloading the data is given in figure 4.

3.4. list contents and sql queries

the curated database consists of hdf5 files tagged with associated metadata that allows the entire database to be searchable. the contents of the entire namespace, or of a specific directory, can be listed as shown in figure 5. we can also search the contents of our database, filtering on any of the tagged metadata attributes using sql-type queries. the queries return a list of hdf5 files that match the criteria of the query. in addition to attributes of the data from the metadata, the queries can also include the filename or a unique identifier, making all the data findable. using sql-type queries, which can be standardised through template queries for metadata attributes in a specific domain (see the results section), ensures that the data are accessible to all users in line with the go fair principles [13]. database permissions can be set to restrict the visibility of data for different users as required, though this is outside the scope of this work.
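the download step reverses the packing: the container is fetched from the archive and each binary dataset is written back to disk in its native format. a minimal sketch under the same assumptions as above (h5py, boto3, and placeholder names such as unpack_container are ours, not the tool's):

```python
import os
import h5py

def unpack_container(container_path, local_dir):
    """recreate the original folder structure and files from an
    hdf5 container (sketch of the download/unpack step)."""
    with h5py.File(container_path, "r") as container:
        def write_item(key, item):
            if isinstance(item, h5py.Dataset):
                target = os.path.join(local_dir, key)
                os.makedirs(os.path.dirname(target), exist_ok=True)
                with open(target, "wb") as fh:
                    fh.write(item[...].tobytes())  # back to native bytes
        container.visititems(write_item)

# fetch the container with the s3 client from the previous sketch,
# then unpack it locally:
# client.download_file("my-namespace",
#                      "database_dir/experiment_001.h5",
#                      "experiment_001.h5")
unpack_container("experiment_001.h5", "restored_data")
```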
4. results

we demonstrate our tool with a case study using a database from the cancer research uk rosetta grand challenge (a24034) project led by npl's nice-msi group [14]. previously we have established a curated database of complex experimental data [2] that can be used for traceable data processing workflows that are fully reproducible [3]. this database consists of experimental data acquired from several different instruments (vendors and models), across multiple sites, with a large number of experimental parameters and operating procedures that are dependent on the sample. the project also conducts cohort, longitudinal, and inter-laboratory studies, and aims to conduct some meta-analysis on the data once complete. some instruments write data in a proprietary format that is accessible internally, but this access is lost when archiving the data, as the proprietary format requires the instrument vendor's software to open. we convert these data to an open community standard format called imzml [15], making the data accessible.

opening the database connection as shown in figure 2, the database can be searched with simple sql queries of the form conn.metadata_query('key','value'). this provides a simple, user-friendly means for experimentalists to search the curated database for any subset of files, making the data findable. we provide standard and simplified code for common queries, allowing non-programmers to utilise this functionality. some examples of domain-specific queries that users may want to perform are:

• data from a specific measurement technique, for example desi and maldi:
conn.metadata_query('technique','desi')
conn.metadata_query('technique','maldi')
• data from a particular experimental study:
conn.metadata_query('study','slc7a5')
• data acquired from samples from a particular collaborator:
conn.metadata_query('samplesource','astrazeneca')
• data from a specific vendor instrument model, for example a 'synapt g2-si' model:
conn.metadata_query('instrument','synaptg2-si')
• data from a particular sample (a unique barcode from a separate sample management database; the data is integrated into our curated database prior to upload, see [2]):
conn.metadata_query('barcode','1000202')
• datasets with the same experimental parameters, e.g., data of a particular instrument polarity:
conn.metadata_query('polarity','negative')
• size of the acquisition area for each pixel:
conn.metadata_query('pixelsize','100 microns')
• acquisition time for each pixel:
conn.metadata_query('scantime','0.485 sec')
• data acquired over the same measurement range:
conn.metadata_query('massrange','m/z 50-1200')

note the last three examples can be executed with or without the units. example results for the conn.metadata_query('massrange','m/z 50-1200') query are given in table 1.

figure 3. matlab script to upload local data to the database. comments are given in lines beginning with % and coloured green. users specify the location of the data they wish to upload (any number or format of data) in local_dir, which also supports shortcuts, and the destination folder on the database in database_dir. the structure of folders on the database can resemble folder structures on computers, which will not impact the searchability of the database.

figure 4. matlab script for downloading data from the database to the local computer. comments are given in lines beginning with % and coloured green. here database_dir is the data users wish to retrieve from the database and local_dir is the target directory on the local machine, to which the data are downloaded and automatically converted back into the original format they had prior to upload to the database.

figure 5. matlab script to list the objects contained within the objectstore namespace. comments are given in lines beginning with % and coloured green.
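behind a call such as conn.metadata_query('key','value') sits a single parameterised rest request against the archive's query api. the sketch below illustrates the idea with the generic python requests library; the endpoint path, query syntax, and response schema are placeholders, as the actual hcp query interface is not reproduced here.

```python
import requests

def metadata_query(key, value,
                   base_url="https://objectstore.example.org",
                   namespace="my-namespace"):
    """return the list of hdf5 objects whose custom metadata field
    `key` equals `value` (sketch; endpoint and schema are
    placeholders for the archive's real query api)."""
    response = requests.get(
        f"{base_url}/query/{namespace}",
        params={"query": f"{key} = '{value}'"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["objects"]

# usage, mirroring the matlab examples above:
for obj in metadata_query("massrange", "m/z 50-1200"):
    print(obj)
```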
as the data are contained in hdf5 containers with well-structured metadata, all relevant associated datasets, including calibration files, processing scripts, and the data, are interoperable and re-usable, meaning external entities can more easily access, exchange, and make use of the information contained within the hdf5 file.

5. conclusions

we have introduced and demonstrated our tool that allows non-experts to interact with a well-curated database using the popular programming languages matlab and python. by encapsulating multiple difficult apis into a single, user-friendly application, the tool allows measurement scientists to easily ensure that their data is saved in a well-curated, fair-compliant, and traceable database without the need for specialised computational skills. the ability to collate all relevant data, tag the collated data with relevant machine-actionable metadata, and upload it to a database with a single function call significantly reduces the computational barrier. a single line of code is used to perform sql-type searches on the database, ensuring all data are findable and accessible. an integrated function for downloading and extracting the data in its original format allows the utilisation of the hdf5 container without requiring the user to interact with it. one benefit of the container file is the storage of data with associated metadata, processing, and analysis codes, providing interoperable and reusable data that is also traceable. we demonstrate this through a case study of data collected from a large-scale multi-site imaging project with large volumes of highly complex measurement data. template scripts further reduce this barrier and enable the capture of metadata at the point of measurement, as well as at any stage throughout the data processing pipeline. this enables experimentalists to easily retrieve data and maximise the usefulness of a fair and curated database without requiring any knowledge of these principles.

acknowledgement

this work was funded by the department for business, energy & industrial strategy through the uk's national measurement system.

references

[1] m. d. wilkinson, m. dumontier, i. j. aalbersberg, g. appleton, m. axton (+48 more authors), the fair guiding principles for scientific data management and stewardship, scientific data 3 (2016), art. no. 160018. doi: 10.1038/sdata.2016.18
[2] s. a. thomas, f. brochu, a framework for traceable storage and curation of measurement data, measurement: sensors 18 (2021), art. no. 100201, 5 pp. doi: 10.1016/j.measen.2021.100201
[3] s. a. thomas, f. brochu, curation at the point of measurement and traceability of measurement workflows, measurement: sensors 23 (2022), art. no. 100399, 7 pp. doi: 10.1016/j.measen.2022.100399
[4] m. santos, p. sá-couto, a. silva, n. rocha, dicom metadata-mining in pacs for computed radiography x-ray exposure analysis: a mammography multisite study, european congress of radiology (ecr 2014), vienna, austria, 6-10 march 2014, 7 pp. doi: 10.1594/ecr2014/b-0276
[5] a. b. haidich, meta-analysis in medical research, hippokratia 14(suppl 1) (2010), pp. 29–37. online [accessed 17 march 2023] https://www.ncbi.nlm.nih.gov/pmc/articles/pmc3049418/
[6] n. mikolajewicz, s. v. komarova, meta-analytic methodology for basic research: a practical guide, front. physiol., sec. computational physiology and medicine, 2019, 20 pp. doi: 10.3389/fphys.2019.00203
[7] i. bergeri, m. g. whelan, h. ware, l. subissi, a.
nardone (+25 more authors), global sars-cov-2 seroprevalence from january 2020 to april 2022: a systematic review and meta-analysis of standardized population-based studies, plos medicine 19(11) (2022), art. no. e1004107, 24 pp. doi: 10.1371/journal.pmed.1004107
[8] c. b. joy, c. e. adams, s. lawrie, haloperidol versus placebo for schizophrenia, cochrane database of systematic reviews, john wiley & sons, ltd, 2001. doi: 10.1002/14651858.cd003082
[9] j. p. higgins, i. r. white, a. m. wood, imputation methods for missing outcome data in meta-analysis of clinical trials, clinical trials 5(3) (2008), pp. 225-239. doi: 10.1177/1740774508091600
[10] n. smith, d. sinden, s. a. thomas, m. romanchikova, j. e. talbott, m. adeogun, building confidence in digital health through metrology, the british journal of radiology 93(1109) (2020), 3 pp. doi: 10.1259/bjr.20190574
[11] aws, aws s3 rest api protocol. online [accessed 17 march 2023] https://docs.aws.amazon.com/amazons3/latest/api/s3-api.pdf#welcome
[12] w3resource.com, well-formed xml. online [accessed 17 march 2023] https://www.w3resource.com/xml/well-formed.php
[13] go fair int. support and coordination office (gfisco), go fair initiative. online [accessed 17 march 2023] https://www.go-fair.org/fair-principles/

table 1. example output from the query conn.metadata_query('massrange','m/z 50-1200'), with some additional fields with categorised results for clarity of the general reader. here we can see several objects from the same measurement site or of the same modality, or both, allowing inter-lab and multimodal studies and analysis. the file version also gives an indication of the provenance of the files and analysis.

id | file version | measurement site | modality
object a | 1st | a | b
object b | 2nd | a | a
object c | 1st | c | a
object d | 4th | b | c
object e | 2nd | c | c
object f | 2nd | a | b
object g | 1st | c | c
… | … | … | …

[14] cancer research uk, rosetta project. online [accessed 17 march 2023] https://cancergrandchallenges.org/teams/rosetta
[15] a. römpp, th. schramm, a. hester, i. klinkert, j.-p. both, r. m. a. heeren, m. stoeckli, b. spengler, imzml: imaging mass spectrometry markup language: a common data format for mass spectrometry imaging, methods mol biol. 696 (2011), pp. 205–224.
doi: 10.1007/978-1-60761-987-1_12

a principal component analysis to detect cancer cell line aggressiveness

acta imeko, issn: 2221-870x, june 2023, volume 12, number 2, 1–7

livio d'alvia1, serena carraro1, barbara peruzzi2, enrica urciuoli2, ludovica apa1, emanuele rizzuto1
1 department of mechanical and aerospace engineering, sapienza university of rome, 00184 rome, italy
2 bone physiopathology research unit, bambino gesù children's hospital, irccs, 00146 rome, italy

section: research paper

keywords: measurement of dielectric properties; biosensor; non-invasive measurements; cancer cell lines; cancer aggressiveness; osteosarcoma; breast cancer; pca; principal component

citation: livio d'alvia, serena carraro, barbara peruzzi, enrica urciuoli, ludovica apa, emanuele rizzuto, a principal component analysis to detect cancer cell line aggressiveness, acta imeko, vol. 12, no. 2, article 22, june 2023, identifier: imeko-acta-12 (2023)-02-22

section editors: alfredo cigada, politecnico di milano, italy; andrea scorza, università degli studi roma tre, italy

received october 12, 2022; in final form february 27, 2023; published june 2023

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

funding: this work was supported by "progetti di ricerca medi 2021", sapienza university of rome

corresponding author: livio d'alvia, e-mail: livio.dalvia@uniroma1.it

abstract: in this paper, we propose the use of principal component analysis (pca) as a new post-processing method for the detection of breast and bone cancer cell lines cultured in vitro using a microwave biosensor. mda-mb-231 and mcf-7 breast cancer cell lines and saos-2 and 143b osteosarcoma cell lines were characterized using a circular patch resonator in the 1 mhz – 3 ghz frequency range. the return loss of each cancer cell line was analyzed, and the differences among them were determined through principal component analysis, according to a protocol previously proposed mainly for electrocardiogram processing and x-ray photoelectron spectroscopy. our results showed that the four cancer cell lines analyzed exhibited peculiar dielectric properties when compared to each other and to the growth medium, confirming that pca could be employed as an alternative methodology to analyze the microwave characterization of cancer cell lines, which, in turn, may be exploited as a tool for the detection of cancer cells in healthy tissues.

1. introduction

one defining feature of malignant tumors is the quick creation of abnormal cells that grow beyond their usual boundaries. moreover, these cells have an uncontrollable reproduction and division rate, up to the point of constituting cancerous tissues, since they do not respond to the standard signaling system of the body [1]–[3]. as stated by the world health organization, in 2020 cancer was the primary cause of approximately 10 million deaths worldwide; by way of example, breast cancer accounted for 2.26 million cases and 685 thousand deaths, and lung cancer for 2.21 million cases and 1.8 million deaths, without forgetting the hundreds of thousands of children who develop malignant tumors each year [4]. early diagnosis and screening can therefore contribute to reducing mortality and aid in more effective treatment. however, there is a lack of research on cancer behavior due to the various and complex molecular pathways involved in the genesis of tumors [5]. in most cases, the tumor grade – established from the cancer cells' characteristics throughout the growth of the tumor lesions – is used to make the diagnosis. a series of cancer screening methods, such as biopsy, computed axial tomography (cat), or scintigraphy, exist but are costly and intrusive.

biosensors in the microwave field may serve as a complementary or replacement method for early-stage non-invasive prognosis of a variety of illnesses, including malignancies. in this context, the measurement of the dielectric properties of biological tissues has achieved significant benefits in biomedicine and healthcare due to its high sensitivity, versatility, and reduced invasiveness [6]–[9]. indeed, this technology has consolidated its use in various fields.
for example, gugliandolo et al. [10] developed a microwave microstrip resonator to measure water vapor for industrial pipeline applications. likewise, majcher et al. [11] investigated the possibility of using a dagger-shaped probe to measure soil moisture in agrifood applications. finally, d'alvia et al. [12]–[14] and cataldo et al. [15] proposed several applications in the cultural heritage field. on these bases, microwave-based sensors are now gaining more and more interest in the biomedical field. as highlighted in the literature [16], microwave probes offer the possibility of analyzing living tissue properties through a non-invasive measurement of scattering parameters or complex permittivity [17]–[19], identifying possible pathological conditions as a variation in the dielectric properties.

concerning cancer cell and tissue characterization, maenhout et al. [20] evaluated the dielectric properties (dielectric loss, dielectric constant, and conductivity) of a healthy non-tumorigenic cell line, namely mcf-10a, and of four breast cancer cell lines (hs578t, mda-mb-231, mcf7, and t47d) using an open-ended coaxial probe in the 200 mhz to 13.6 ghz range. again, zhang et al. [21] proposed a microwave biosensor capable of identifying the grade of colon cancer cell aggressiveness in the 4–12 ghz range. finally, in previous work [22], we proposed a circular patch resonator for the measurement of cancer cell line aggressiveness (saos-2, 143b, mcf7, and mda-mb-231) through the use of a lorentzian fit model for the return loss signal processing and a weighted manova (multivariate analysis of variance) to investigate the differences in the three main parameters of interest, namely return loss, resonance frequency, and full width at half maximum (fwhm).

this paper proposes a novel methodology to analyze a microwave sensor's return loss, based on an optimized savitzky-golay filter, generally adopted for electrocardiogram processing or x-ray photoelectron spectroscopy [23], [24], and on principal component analysis (pca), to extract meaningful information from the data and present a final classification based on possible similarities between the analyzed materials.
2. materials and methods

2.1. cell culture and experimental procedure

as previously described [22], we had the opportunity to test two pediatric human osteosarcoma cell lines, saos-2 and 143b [25]–[28], and two human breast adenocarcinoma cell lines, mcf7 and mda-mb-231 [29], [30], for their dielectric response. in particular, saos-2 and mcf7 are a low-aggressive osteoblast-like osteosarcoma and a low-aggressive breast cancer cell line, while 143b and mda-mb-231 are a high-aggressive lung-tropic metastatic osteosarcoma and a high-aggressive bone-tropic breast cancer cell line, respectively. cells were seeded in a standard 60 mm petri dish at an average density of 8 × 10⁵ cells/plate and placed in an incubator at 37 °c with 5 % co2 for 24 hours to allow the cells to form a homogeneous confluent monolayer. during the measurements, all cell types were maintained in 1.5 ml of dulbecco's modified eagle medium (dmem) culture medium [31], and eight different dishes were prepared for each cell line. moreover, eight samples of 1.5 ml pure dmem were prepared as controls.

a circular patch resonator with a radius of 20.00 mm [22] and a subminiature version a (sma) connector placed on the conductive edge was employed to determine the dielectric properties of the cell line samples. the key component of the measuring setup is the low-cost portable vector network analyzer minivna-tiny [32], used for measuring the return loss |s11(f)| in the operating frequency range of 1.9 – 2.6 ghz. the 700 mhz frequency span was previously evaluated to maximize the resolution of the acquired data (0.5 mhz) [22]. as a result, the return loss |s11(f)| was acquired for the eight samples of the five different "materials under test", i.e. the different media and cell lines.

2.2. data elaboration process

principal component analysis (pca) is a multivariate analysis that permits the identification and extraction of meaningful information from the data and a final classification based on a multiparametric similarity test and variable reduction [33]. figure 1 shows a scheme of the applied pre-processing algorithm. all data processing was performed with originlab 2017 software.

figure 1. data processing workflow involved in pca.

in particular, pca is a useful tool to reduce the dimension of a dataset, maintaining only those variables with the highest variance. as a result, all the vectors used to represent the acquired return loss are transposed into a new space with a dimension equal to the number of significant components determined by pca, and the acquired data may be represented as:

$X_{(i,j)} = S_{(i,k)} \times L_{(k,j)}^{\mathrm{T}} + E_{(i,j)} = \hat{X}_{(i,j)} + E_{(i,j)}$ , (1)

where x is the original data matrix containing the return loss data, l is the loading matrix, s is the score matrix based on the eigenvalues derived from the x matrix decomposition, and e is the error matrix, which contains the variance load not explained by the pca model. the matrix dimension i is the number of acquired samples, j is the signal length, and k is the number of significant components.

before performing the pca on the acquired return loss data, we applied a pre-processing algorithm, as proposed by es sebar et al. [34] for raman spectroscopy applications:

1) baseline removal through an interactive endpoint weighted (epw) algorithm for each column vector of x;

2) application of a savitzky-golay filter (sgf) using a window length of 14 points and a second-order polynomial fit, since the sgf flattens peaks less than a moving-average smoothing with the same window width [35];

3) data normalization by subtracting its average value from each x column and scaling by the standard deviation [36]. for the i-th column of x, equation (2) holds:

$X_{i}^{*} = \dfrac{X_{(i,\mathrm{raw})} - \overline{X}_{(i,\mathrm{raw})}}{\sigma\!\left(X_{(i,\mathrm{raw})} - \overline{X}_{(i,\mathrm{raw})}\right)} = \dfrac{X_{(i,c)}}{\sigma\!\left(X_{(i,c)}\right)}$ , (2)

with x* the normalized matrix, x(i,c) the i-th centered vector, and σ the standard deviation of the x(i,c) vector. this normalization is also known as the standard normal variate (snv) transformation.

the pca was then performed by applying equation (2) in equation (1):

$X_{(i,j)}^{*} = \hat{X}_{(i,j)}^{*} + E_{(i,j)}$ . (3)

the discriminant analysis based on a cross-validation test was performed as the final analysis, using as many k components as those with an eigenvalue greater than or equal to 3 [37].
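the pre-processing chain and the decomposition of equation (1) can be summarised in a short python sketch (the published analysis was performed in originlab, so this is a hedged re-implementation rather than the authors' code). note that scipy's savgol_filter requires an odd window length, so 15 points is used in place of the 14 quoted above, and the simple endpoint-based linear baseline only approximates the interactive epw step:

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess(X):
    """X: (n_samples, n_points) return-loss spectra.
    steps: linear endpoint baseline removal, savitzky-golay
    smoothing, snv normalization (sketch of section 2.2)."""
    n, m = X.shape
    t = np.linspace(0.0, 1.0, m)
    # 1) subtract the line through the first and last points
    baseline = X[:, [0]] + (X[:, [-1]] - X[:, [0]]) * t
    Xb = X - baseline
    # 2) savitzky-golay filter, 2nd-order polynomial, odd window
    Xs = savgol_filter(Xb, window_length=15, polyorder=2, axis=1)
    # 3) snv: centre each spectrum and scale by its std
    #    (per-spectrum, the usual snv convention)
    return (Xs - Xs.mean(axis=1, keepdims=True)) / Xs.std(axis=1, keepdims=True)

def pca_scores(Xstar, k=3):
    """scores S and loadings L of equation (1) via svd."""
    Xc = Xstar - Xstar.mean(axis=0)          # column-centred data
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :k] * s[:k]                # S(i,k)
    loadings = Vt[:k].T                      # L(j,k)
    explained = s**2 / np.sum(s**2)          # variance fractions
    return scores, loadings, explained[:k]

# usage on simulated data: 40 spectra of 1401 points
X = np.random.default_rng(0).normal(size=(40, 1401))
S, L, ev = pca_scores(preprocess(X), k=3)
```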
3. results and discussions

figure 2 presents an example of the data processing for dmem, reporting the initial raw data (figure 2a) and the three signal-processing steps (figure 2 b, c, and d): baseline removal, filtering, and normalization, respectively. in detail, the epw algorithm translates and rotates the signal so that the tails lie at zero, while the sg filter evaluates a polynomial regression around each point, creating a new smoothed value for each data point. finally, the snv transformation permits centering and scaling the data without altering their overall interpretation: indeed, if two variables were strongly correlated before pre-processing, they are still strongly correlated after it. therefore, for each of the forty acquired signals, the background is removed, the spectrum is filtered to improve the signal-to-noise ratio, the normalization is completed, and the pca is performed.

figure 2. a) example of the raw return loss for the dmem and the computed baseline, b) return loss for the dmem after baseline removal through the epw algorithm, c) return loss for the dmem filtered with sg, and d) return loss for the dmem normalized with the snv transformation, i.e. the final output.

the cumulative variance trend is shown in figure 3. it is possible to observe that the first three components represent an overall variance of about 92.3 %, given by contributions equal to 76.6 %, 11.9 %, and 3.8 % for components 1, 2, and 3, respectively. according to the literature [37], this can be considered a satisfactory value, as a balance between cumulative variance and the complexity of the system to be analyzed in the subsequent analysis, also taking into account that the fourth component contributes only 2.5 % of the total variance, while the remaining thirty-six components account for 5.2 % of the whole.

figure 4 shows the result of the pca, reporting the scores of the three principal components in pairwise combinations. as can be seen in figures 4 (a) and (b), the measurements group together into two macro clusters: one containing the pure medium, highlighted by a dotted rectangle, and a second containing all the tested cell types, highlighted by a dashed rectangle. nonetheless, in both figures it is also possible to distinguish five sub-clusters, highlighted by the ellipses enclosing similar spectra with a 95 % confidence level. interesting results can be obtained by focusing on the inclination of these clusters. indeed, the pure medium revealed a different inclination than those obtained when testing all the cell lines.
on the other hand, the two less aggressive cell lines (saos-2 and mcf7) have the same inclination as each other, as do the two aggressive cell lines (mda-mb-231 and 143b). more in detail, the pure medium cluster and the cluster representing the highly aggressive cell lines (143b and mda-mb-231) have the same inclination (100° and 90°, respectively) both when focusing on pc2 vs. pc1 and on pc3 vs. pc1, while the inclination of the cluster representing the low-aggressive cell lines (saos-2 and mcf7) is 105° when representing pc2 vs. pc1 and 84° when computing pc3 vs. pc1. as a matter of fact, the inclination of the 95 % confidence interval cluster may be a parameter that can give helpful information on tumor aggressiveness.

figure 4 c) shows that the cell lines and the pure dmem did not show proper clusterization. however, the absence of clusters in the pc2 vs. pc3 plot can be explained by considering the low variance captured by the third component (3.8 %). indeed, this component plays a crucial role in the model in linear combination with the main two components, allowing a cumulative variance higher than 90.0 % (as discussed above) and thus improving the fitting of the essential peaks found in the two main components.

figure 3. cumulative percentage variance for the first seven components, as obtained from the pca, with the third component highlighted in green.

figure 4. cumulative score plots of the first three components: a) pc1-pc2, b) pc1-pc3, and c) pc2-pc3. the percent variance obtained for each component is in the axis legend. the colored ellipses highlight the five clusters representing the 95 % confidence interval. the dotted and dashed rectangles highlight the medium and "cell" clusters.

subsequently, we evaluated the cross-validation of the pca loadings for the first three components, and the results are reported in table 1. this test highlighted that pure dmem was detected with a prediction accuracy of 100.0 %, saos-2 and mcf7 with a prediction accuracy of 87.5 %, and mda-mb-231 with a prediction accuracy of 75.0 %. finally, the 143b cells have a prediction accuracy of 50.0 %. it is worth noting that these discretized prediction accuracy results are strictly related to the number of tested samples: indeed, when testing 8 samples, every prediction accounts for 12.5 % accuracy.

figure 5 reports the cumulative results, allowing a better interpretation of the pca predictions. the figure clearly shows that all 8 tested dmem samples have been appropriately predicted, while, for example, among the 8 tested saos-2 samples (the red bar), seven have been appropriately recognized, while 1 was interpreted as mda-mb-231 cells. similarly, among the 8 tested 143b samples, 4 have been interpreted as 143b, 2 as mcf7, and 2 as mda-mb-231; of the 8 tested mcf-7 samples, 7 have been appropriately recognized and 1 as 143b; and among the 8 tested mda-mb-231 samples, 6 have been appropriately interpreted and 2 as 143b cells. interestingly, the final average prediction error for the entire data set is 20.0 %.
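the per-class accuracies and the 20.0 % average error quoted above follow directly from the confusion matrix of table 1; the short numpy check below (our illustration) reproduces them:

```python
import numpy as np

# confusion matrix from table 1 (rows: true group, columns:
# predicted group), order: dmem, saos-2, mda-mb-231, mcf-7, 143b
confusion = np.array([
    [8, 0, 0, 0, 0],
    [0, 7, 1, 0, 0],
    [0, 0, 6, 0, 2],
    [0, 0, 0, 7, 1],
    [0, 0, 2, 2, 4],
])
per_class_accuracy = np.diag(confusion) / confusion.sum(axis=1)
average_error = 1.0 - np.diag(confusion).sum() / confusion.sum()
print(per_class_accuracy)   # [1.0, 0.875, 0.75, 0.875, 0.5]
print(average_error)        # 0.2 -> the 20.0 % quoted above
```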
moreover, these results are in high agreement with those reported in [22], in which the different cell lines were studied with reference to the main lorentzian fit parameters (return loss, resonance frequency, and fwhm) through a manova test. in particular, in [22] we reported a statistically significant difference between dmem and all tested cell lines (p < 0.0001), and in this work we obtained a cross-validation accuracy of 100.0 %. similarly, manova reported a significant (p < 0.05) difference between 143b and mcf7 and no significant difference between 143b and mda-mb-231, and the corresponding pca cross-prediction rates were 12.5 % and 25.0 %, respectively. therefore, the procedure reported in this work represents an alternative methodology to distinguish tumor aggressiveness without using any fitting procedure, hence based only on the raw data; its present limit consists of the limited number of measurements for each group.

4. conclusions

this paper proposes an alternative methodology to analyze the return loss of tumor cell lines. the method allows discriminating between groups of different tumor cells by analyzing the appropriately filtered and normalized acquired signal, leading to results in agreement with those obtained with traditional methods, such as the lorentzian fit. this methodology is based on a pre-processing algorithm – background removal associated with a savitzky-golay filter and a normalization procedure with respect to the signal variation – and a subsequent principal component analysis. results showed a good average accuracy of the prediction methodology, confirming the feasibility of pca also for this kind of signal, whereas it has consolidated applications in processing more complex and multi-peak signals. as a future development, we expect to realize a "split ring resonator" sensor, inducing more peaks in the instrument's uniformity band, to better evaluate the reliability of the methodology proposed here.

figure 5. prediction rate for each group.

table 1. cross-validation summary for training data and error rate.

predicted group | dmem | saos-2 | mda-mb-231 | mcf-7 | 143b | total
dmem | 8 (100.0 %) | 0 (0.0 %) | 0 (0.0 %) | 0 (0.0 %) | 0 (0.0 %) | 8 (100.0 %)
saos-2 | 0 (0.0 %) | 7 (87.5 %) | 1 (12.5 %) | 0 (0.0 %) | 0 (0.0 %) | 8 (100.0 %)
mda-mb-231 | 0 (0.0 %) | 0 (0.0 %) | 6 (75.0 %) | 0 (0.0 %) | 2 (25.0 %) | 8 (100.0 %)
mcf-7 | 0 (0.0 %) | 0 (0.0 %) | 0 (0.0 %) | 7 (87.5 %) | 1 (12.5 %) | 8 (100.0 %)
143b | 0 (0.0 %) | 0 (0.0 %) | 2 (25.0 %) | 2 (25.0 %) | 4 (50.0 %) | 8 (100.0 %)
total | 8 (20.0 %) | 7 (17.5 %) | 9 (22.5 %) | 9 (22.5 %) | 7 (17.5 %) | 40 (100.0 %)

error rate | dmem | saos-2 | mda-mb-231 | mcf-7 | 143b | total
prior | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 |
rate | 0.0 % | 12.5 % | 25.0 % | 12.5 % | 50.0 % | 20.0 %

references

[1] r. reilly, breast cancer, xpharm: the comprehensive pharmacology reference (2007), pp. 1–9. doi: 10.1016/b978-008055232-3.60809-8
[2] a. n. bishop, r. jewell, skin cancer, reference module in biomedical sciences, 2014. doi: 10.1016/b978-0-12-801238-3.05621-x
[3] e. f. mccarthy, bone tumors, rheumatology: sixth edition, vol. 2–2, 2015, pp. 1734–1743. doi: 10.1016/b978-0-323-09138-1.00212-6
[4] f. bray, j. ferlay, i. soerjomataram, r. l. siegel, l. a. torre, a. jemal, global cancer statistics 2018: globocan estimates of incidence and mortality worldwide for 36 cancers in 185 countries, ca: a cancer journal for clinicians, nov. 2018.
[5] hongdan he, xiaoni shao, yanan li, ribu gihu, haochen xie, junfu zhou, hengxiu yan, targeting signaling pathway networks in several malignant tumors: progresses and challenges, front pharmacol 12 (2021), art. no. 1373. doi: 10.3389/fphar.2021.675675
[6] s. r. mohd shah, n. b. asan, j. velander, j. ebrahimizadeh, m. d. perez, v. mattsson, t. blokhuis, r. augustine, analysis of thickness variation in biological tissues using microwave sensors for health monitoring applications, ieee access 7 (2019), pp. 156033–156043. doi: 10.1109/access.2019.2949179
[7] f. deshours, g. alquié, h. kokabi, k. rachedi, m. tlili, s. hardinata, f.
koskas, improved microwave biosensor for noninvasive dielectric characterization of biological tissues, microelectronics j 88 (2019), pp. 137–144. doi: 10.1016/j.mejo.2018.01.027
[8] g. gugliandolo, g. vermiglio, g. cutroneo, g. campobello, g. crupi, n. donato, inkjet-printed capacitive coupled ring resonators aimed at the characterization of cell cultures, 2022 ieee international symposium on medical measurements and applications (memea), messina, italy, 22-24 june 2022, pp. 1–5. doi: 10.1109/memea54994.2022.9856582
[9] a. martellosio, m. pasian, m. bozzi, l. perregrini, a. mazzanti, f. svelto, p. e. summers, g. renne, m. bellomi, 0.5–50 ghz dielectric characterisation of breast cancer tissues, electron lett 51(13) (2015), pp. 974–975. doi: 10.1049/el.2015.1199
[10] g. gugliandolo, d. aloisio, g. campobello, g. crupi, n. donato, on the design and characterisation of a microwave microstrip resonator for gas sensing applications, acta imeko 10(2) (2021), pp. 54–61. doi: 10.21014/acta_imeko.v10i2.1039
[11] j. majcher, m. kafarski, a. wilczek, a. szypłowska, a. lewandowski, a. woszczyk, w. skierucha, application of a dagger probe for soil dielectric permittivity measurement by tdr, measurement 178 (2021), art. no. 109368. doi: 10.1016/j.measurement.2021.109368
[12] l. d'alvia, e. piuzzi, a. cataldo, z. del prete, permittivity-based water content calibration measurement in wood-based cultural heritage: a preliminary study, sensors 22(6) (2022), art. no. 2148. doi: 10.3390/s22062148
[13] l. d'alvia, e. palermo, z. del prete, e. pittella, s. pisa, e. piuzzi, a comparative evaluation of patch resonators layouts for moisture measurement in historic masonry units, 2019 imeko tc4 international conference on metrology for archaeology and cultural heritage. online [accessed 22 april 2023] https://www.imeko.org/publications/tc4-archaeo-2019/imeko-tc4-metroarchaeo-2019-28.pdf
[14] l. d'alvia, e. pittella, e. rizzuto, e. piuzzi, z. del prete, a portable low-cost reflectometric setup for moisture measurement in cultural heritage masonry unit, measurement 189 (2022), art. no. 110438. doi: 10.1016/j.measurement.2021.110438
[15] a. cataldo, e. de benedetto, r. schiavoni, a. tedesco, a. masciullo, g. cannazza, microwave reflectometric systems and monitoring apparatus for diffused-sensing applications, acta imeko 10(3) (2021), pp. 202–208. doi: 10.21014/acta_imeko.v10i3.1143
[16] m. hussein, f. awwad, d. jithin, h. el hasasna, k. athamneh, r. iratni, breast cancer cells exhibits specific dielectric signature in vitro using the open-ended coaxial probe technique from 200 mhz to 13.6 ghz, sci rep 9(1) (2019), 8 pp. doi: 10.1038/s41598-019-41124-1
[17] c. gabriel, s. gabriel, e. corthout, the dielectric properties of biological tissues: i. literature survey, phys med biol 41(11) (1996), p. 2231. doi: 10.1088/0031-9155/41/11/001
[18] s. gabriel, r. w. lau, c. gabriel, the dielectric properties of biological tissues: ii. measurements in the frequency range 10 hz to 20 ghz, phys med biol 41(11) (1996), pp. 2251–2269. doi: 10.1088/0031-9155/41/11/002
[19] s. gabriel, r. w. lau, c. gabriel, the dielectric properties of biological tissues: iii. parametric models for the dielectric spectrum of tissues, phys med biol 41(11) (1996), art. no. 2271. doi: 10.1088/0031-9155/41/11/003
[20] g. maenhout, t. markovic, b. nauwelaers, flexible, segmented tubular design with embedded complementary split-ring resonators for tissue identification, ieee sens j 21(14) (2021), pp. 16024–16032.
doi: 10.1109/jsen.2021.3075570 [21] ling yan zhang, c. b. m. du puch, a. lacroix, c. dalmay, a. pothier, c. lautrette, s. battu, f. lalloué, m.-o. jauberteau, p. blondy, microwave biosensors for identifying cancer cell aggressiveness grade, in ieee mtt-s international microwave symposium digest, 2012 doi: 10.1109/mwsym.2012.6259539 [22] l. d’alvia, s. carraro, b. peruzzi, e. urciuoli, l. palla, z. del prete, e. rizzuto, a novel microwave resonant sensor for measuring cancer cell line aggressiveness, sensors 22(12) (2022) art. no. 4383. doi: 10.3390/s22124383 [23] l. d. sharma, r. k. sunkaria, a robust qrs detection using novel pre-processing techniques and kurtosis based enhanced efficiency, measurement 87 (2016), pp. 194–204. doi: 10.1016/j.measurement.2016.03.015 [24] b. moeini, m. r. linford, n. fairley, a. barlow, p. cumpson, d. morgan, v. fernandez, j. baltrusaitis, definition of a new (doniach-sunjic-shirley) peak shape for fitting asymmetric signals applied to reduced graphene oxide/graphene oxide xps spectra, surface and interface analysis 54(1) (2022), pp. 67–77. doi: 10.1002/sia.7021 [25] m. longo, b. peruzzi, d. fortunati, v. de luca, s. denger, g. caselli, s. migliaccio, a. teti, modulation of human estrogen receptor alpha f promoter by a protein kinase c/c-src-dependent mechanism in osteoblast-like cells, j mol endocrinol 37(3) (2006), pp. 489–502. doi: 10.1677/jme.1.02055 [26] ling ren, a. mendoza, j. zhu, j. w. briggs, ch. halsey, e. s. hong, s. s. burkett, j. j. morrow, m. m. lizardo, t. osborne, s. q. li, h. h. luu, p. meltzer, ch. khanna, characterization of the metastatic phenotype of a panel of established osteosarcoma cells, oncotarget 6(30) (2015), pp. 29469–29481. doi: 10.18632/oncotarget.5177 [27] e. urciuoli, s. petrini, v. d’oria, m. leopizzi, c. della rocca, b. peruzzi, nuclear lamins and emerin are differentially expressed in osteosarcoma cells and scale with tumor aggressiveness, cancers (basel) 12(2) (2020). doi: 10.3390/cancers12020443 [28] e. urciuoli, v. d’oria, s. petrini, b. peruzzi, lamin a/c mechanosensor drives tumor cell aggressiveness and adhesion on substrates with tissue-specific elasticity, front cell dev biol 9 (2021). doi: 10.3389/fcell.2021.712377 [29] d. trivanović, s. nikolić, j. krstić, a. jauković, s. mojsilović, v. ilić, i. okić-djordjević, j. f. santibanez, g. jovčić, d. 
bugarski, characteristics of human adipose mesenchymal stem cells isolated from healthy and cancer affected people and their interactions with human breast cancer cell line mcf-7 in vitro, cell biol int 38(2) (2014), pp. 254–265. doi: 10.1002/cbin.10198
[30] a. j. minn, y. kang, i. serganova, g. p. gupta, d. d. giri, m. doubrovin, v. ponomarev, w. l. gerald, r. blasberg, j. massagué, distinct organ-specific metastatic potential of individual breast cancer cells and primary tumors, j clin invest 115(1) (2005), pp. 44–55. doi: 10.1172/jci22320
[31] thermofisher, dmem description. online [accessed 22 april 2023] https://www.thermofisher.com/it/en/home/life-science/cell-culture/mammalian-cell-culture/classical-media/dmem.html?sid=fr-dmem-main
[32] hardware manual for minivna tiny. online [accessed 22 april 2023] https://www.wimo.com/media/manuals/mrs/minivna_tiny_antennenanalysator_antenna-analyzer_hardware-manual_en.pdf
[33] c. syms, principal components analysis, encyclopedia of ecology, five-volume set, jan. 2008, pp. 2940–2949. doi: 10.1016/b978-008045405-4.00538-3
[34] l. e. sebar, l. iannucci, y. goren, p. fabian, e. angelini, s. grassini, raman investigation of corrosion products on roman copper-based artefacts, acta imeko 10(1) (2021), pp. 129-135. doi: 10.21014/acta_imeko.v10i1.858
[35] a. savitzky, m. j. e. golay, smoothing and differentiation of data by simplified least squares procedures, anal chem 36(8) (1964), pp. 1627–1639. doi: 10.1021/ac60214a047
[36] m. zeaiter, d. rutledge, preprocessing methods, comprehensive chemometrics 3 (2009), pp. 121–231. doi: 10.1016/b978-044452701-1.00074-0
[37] s. wold, cross-validatory estimation of the number of components in factor and principal components models, technometrics 20(4) (1978), pp. 397-405.
doi: 10.2307/1267639

experimental evaluation of the air trapped during the water entry of flexible structures

acta imeko, september 2014, volume 3, number 3, 63–67, www.imeko.org

riccardo panciroli 1, giangiacomo minak 2
1 università degli studi niccolò cusano, via don carlo gnocchi, 3, 00166 roma, italy
2 din – alma mater studiorum, viale del risorgimento, 2, 40136 bologna, italy

section: research paper

keywords: hull slamming; hydro-elasticity; air trapping; flexible structures

citation: riccardo panciroli, giangiacomo minak, experimental evaluation of the air trapped during the water entry of flexible structures, acta imeko, vol. 3, no. 3, article 13, september 2014, identifier: imeko-acta-03 (2014)-03-13

editor: paolo carbone, university of perugia

received june 25th, 2013; in final form july 13th, 2014; published september 2014

copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

funding: this work was supported by the office of naval research through the grant n00014-12-1-0260.

corresponding author: giangiacomo minak, e-mail: giangiacomo.minak@unibo.it

abstract: deformable structures entering the water might experience several fluid-structure interaction (fsi) phenomena; air trapping is one of these. according to its definition, it consists of air bubbles trapped between the structure and the fluid during the initial stage of the impact. these bubbles might reduce the peak impact force. this phenomenon is characteristic of the water entry of flat-bottom structures; above a deadrise angle of 10°, air trapping is negligible. in this work, we propose a methodology to evaluate the amount of air trapped in the fluid during the water entry. experiments are performed on wedges with varying stiffness, entry velocity, and deadrise angle. a digital image post-processing technique is developed and utilized to track the air trapping mechanism and its evolution in time. interesting results are found on the effect of the impact velocity and the structural deformation on the amount of air trapped during the slamming event.

1. introduction

predicting the impact-induced stresses during the water entry of flexible structures is of major interest for the design of marine structures. during the water entry of flexible structures, several fluid-structure interaction (fsi) phenomena might appear [1]–[9]; the most important are cavitation, air trapping, and the repetition of impact and separation between the fluid and the structure. the occurrence of such fsi phenomena might strongly influence the impact dynamics. although air trapping is a well-known phenomenon [10], to the authors' knowledge none of the previous works in the literature presented a methodology to quantify it and to evaluate the effect of the structural deformation on it. the present work faces this challenge, proposing the use of an optical technique to achieve this aim. optical techniques have recently been utilized to measure the structural deformation of compliant wedges entering the water in [11]. in this work, we first develop a digital imaging technique for the post-processing of high-speed images to isolate the regions of the fluid where air is trapped.
this methodology is later used to study the evolution of the air trapping in time and to dissect the roles of impact velocity and structural deformation. although there are no previous results in the literature against which to validate the proposed method, the present results are found in good agreement with the expectations.

2. experimental setup

experiments are conducted on a drop-weight machine for water impacts with a maximum impact height of 4 m. wedges are composed of two panels joined together and to the falling sledge along one edge, to assume a cantilever boundary condition, where the boundary corresponds to the keel of the wedge. panels of various materials and thicknesses can be mounted on the sledge at any deadrise angle (β in figure 1), ranging smoothly from 0° to 50°, where 0° and 90° are the extreme cases of a flat panel and a vertical blade, respectively. teflon inserts minimize friction between the sledge and the prismatic rails. the sledge holds wedges 300 mm long and 250 mm wide. the falling body hits the fluid at the centre of a tank 1.2 m wide, 1.8 m long, and 1.1 m deep. the tank was filled with water only up to 0.6 m to prevent the water waves generated during the impact from overflowing. the drop height, defined as the distance between the keel and the water surface, ranged from 0.5 m to 3 m in 0.25 m increments.

impact acceleration is measured by a v-link microstrain wireless accelerometer (±100 g) located at the tip of the wedge. all reported accelerations are referenced to 0 g for the free-falling phase. the sampling frequency is set to its maximum of 4 khz. the entering velocity is recorded by a laser sensor (με ils 1402) capturing the sledge position over 350 mm of ride at a frequency of 1.5 khz with a resolution of 0.2 mm. the entry velocity is obtained by numerical differentiation of the position. a high-speed camera is utilized to capture the images during the water entry. the camera is located to view the wedge from the side, as shown in figure 2. the capturing frequency is set to 1.5 khz with a definition of 1200×1024 pixels. a vertical transparent screen is located inside the water tank just before the wedge (clearance ≈2 mm) to prevent fluid spray in the y-direction, which would have made it impossible to see the evolution of the fluid jet (figure 3) generated during the water entry.
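as a side note on the velocity measurement described above, numerically differentiating a sampled position signal amplifies measurement noise, so some smoothing is typically applied. a minimal sketch of how the entry velocity could be recovered from the laser position data (our illustration on synthetic free-fall data; the authors' actual processing is not detailed):

```python
import numpy as np
from scipy.signal import savgol_filter

fs = 1500.0                      # laser sampling frequency in hz
# position samples in m (placeholder data standing in for the
# recorded 350 mm ride of the sledge)
t = np.arange(0, 0.25, 1 / fs)
position = -0.5 * 9.81 * t**2    # free fall as synthetic input

# smooth the position, then differentiate numerically
position_smooth = savgol_filter(position, window_length=11, polyorder=2)
velocity = np.gradient(position_smooth, 1 / fs)

print(f"entry velocity at t = 0.25 s: {velocity[-1]:.2f} m/s")
```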
as the water on the front side of the screen remains still during the impact, the pictures show both the still water surface (on the front side of the screen) and the fluid jet (on the back side of the screen), as shown in figure 3.

aluminium (a), e-glass (mat) / vinyl ester (v), and e-glass (woven) / epoxy (w) panels 2 mm thick were used. the composite panels were produced by vartm through infusion of vinyl ester resin on e-glass fibre mat, while the e-glass (woven 0°/90°) / epoxy panels were produced in an autoclave. the assumed material properties and the measured fundamental frequency of the panels are listed in table 1. for a given material and panel thickness, the impact variables are the deadrise angle β (ranging from 4° to 35°) and the falling height (ranging from 0.25 to 2.5 m). during the experiments, structural deformations are recorded by strain gauges located at various positions, while an accelerometer and a laser position sensor record the impact dynamics.

the wedges are built as an open structure, as the sides of the panels are open and the water is free to flow in from the sides during impact (this setup is necessary to allow higher flexibility). this leads the entire structure to be theoretically negatively buoyant. however, the impact time is very short, as it typically lasts less than 40 ms (the structure enters the water with an entry velocity in the range of 4 to 6 m/s). in such a short duration, the water does not have enough time to flow into the wedge from the sides, so the wedge behaves like a closed-shape wedge, with a positively buoyant behaviour (the acceleration range is approximately 20 g to 100 g, increasing with the entry speed). wedges are manually lifted to the desired height and released. the laser sensor, strain gauges, and accelerometer signals are triggered together in a single manual start. since the position of the sledge relative to the free surface is known, the initial impact time is calibrated during the data post-processing on the basis of the position recorded by the laser sensor.

3. preliminary experimental results

in the following, we display some images captured during the water entry of wedges with deadrise angles higher than 10°. the examples evaluate the air trapped for variable (high to low) deadrise angles, and two impact velocities are shown for each deadrise angle. figure 4 shows the water entry of a wedge with a deadrise angle of 30° entering the water at 4.2 m/s and 6 m/s. in both cases no air is trapped in the fluid, as indicated by the smooth uniform colour in the fluid region.

figure 1. conceptual scheme of the wedge used for experiments. l = panel length. β = deadrise angle. solid line: undeformed panels; dashed line: expected deformation during impact.

figure 2. sketch of the experimental set-up. the wedge is hinged to the sledge that enters the water with pure vertical velocity. the high-speed camera is located on the side of the water tank.

figure 3. sample of an image captured by the high-speed camera. the still water above the transparent screen is clearly visible. the lighter dots visible in the fluid are air bubbles that are used as tracers in the piv analysis.

table 1. collection of the estimated material properties.
material | e1=e2 [gpa] | ν | ρ [kg/m³] | fn [hz]
6068 t6 | 68 | 0.32 | 2700 | 18.01
e-glass/vinylester | 20.4 | 0.28 | 2050 | 9.77
e-glass/epoxy | 30.3 | 0.28 | 2015 | 19.69

figure 5 shows the water entry of a wedge with a deadrise angle of 20° entering the water at 4.2 m/s and 6 m/s. while in the first image there is no evidence of air trapped in the fluid, the wedge impacting at the higher speed (picture on the right) traps some air in the form of small bubbles that appear at the middle of the wedge. although present, air trapping is still negligible for this deadrise angle. wedges with a deadrise angle of 15° (shown in figure 6) show results similar to the last example: even for this deadrise angle, some negligible air bubbles are trapped in the fluid. air bubbles are definitely spread over a wider region in the case of a deadrise angle of 12°, as shown in figure 7. down to a deadrise angle of 12°, no air cushions are formed: air is trapped in the form of small bubbles dispersed in the fluid and its effect can be neglected. instead, air cushions are formed for deadrise angles lower than 12°, indicating that our results are in line with the literature. in the following sections, the research will focus on the water entry of wedges with deadrise angles lower than 12°, with particular effort devoted to evaluating the amount of air trapped in the fluid, its evolution in time, and the effect of the structural deformation on it. as a first step, a digital image post-processing methodology capable of evaluating the amount of air trapped in the fluid has been developed and is presented in the next section.

4. the use of an optical method to account for the air trapped during the water entry

to track the evolution of the air trapped during the water entry, a digital image technique has been developed to post-process the high-speed images. this technique relies on the property of the water surface to diffract light: if air bubbles entrapped in the fluid are lit by a light source, their surface diffracts the light, making them brighter than the surrounding fluid. images with a definition of 1200×1024 pixels are captured by the camera at a rate of 1.5 khz. an example of a typical image where air is trapped in the fluid is shown in figure 8. the images (originally in colour) are converted to greyscale; that is, each image is represented by a matrix whose cells assume a value between 0 and 255, where 0 corresponds to a fully black pixel and 255 to a fully white pixel, and all the values in between define the grey level. the intensity of the grey levels vs. the pixel counts is plotted as a histogram (figure 9). we take the number of pixels exceeding a certain grey level as a measure of the magnitude of the air trapped below the structure. each pixel corresponds to an area of 0.23×0.23 mm², since the calibration gave that 1305 pixels correspond to 300 mm in length. following an independent study on the role of the threshold level on the computed amount of trapped air, we chose 200 as the threshold level above which pixels are considered air. such a threshold level was found to be strongly affected by the lighting parameters used during the image acquisition: diaphragm aperture and exposure time (inversely proportional to the capturing frequency).

figure 4. image of a wedge with a deadrise angle of 30° entering the water at 4.2 m/s (left) and 6 m/s (right). the fluid shows a uniform colour, meaning that no air has been trapped during the water entry.

figure 5. image of a wedge with a deadrise angle of 20° entering the water at 4.2 m/s (left) and 6 m/s (right). there is no air trapped in the case of the lower entry velocity, while some very small air bubbles appear in the case of the larger velocity.

figure 6. image of a wedge with a deadrise angle of 15° entering the water at 4.2 m/s (left) and 6.7 m/s (right). in both cases there is some air trapped in the fluid in the form of small air bubbles.

figure 7. image of a wedge with a deadrise angle of 12° entering the water at 5.2 m/s (left) and 6.7 m/s (right). air trapping is much more visible than in the previous cases since a concentrated light is used to highlight the air bubbles.

figure 8. example of an image where some air is trapped in the fluid during the impact. air appears as a bright region due to the light that is diffracted by the surface of the air bubbles.

figure 9. example of a histogram of the grey level (0 to 255) vs. pixel count of the greyscale image.
However, we note that, for a given diaphragm aperture and exposure time, variations of the threshold level by 20 % around the reference value have negligible effects on the results.

A new binary image is then built: pixels below the threshold are set to black (0), while pixels exceeding the threshold are set to white (255), yielding a black-and-white picture. To smooth the images and remove isolated black pixels inside wide white regions, an algorithm based on morphological reconstruction, described in [12], is applied. The general procedure outlined in [13] is then applied to identify the white regions: as output, the white regions are listed together with their perimeter and area.

Small air bubbles are dispersed in the water even before the impact. A threshold on the minimum region area is therefore used to exclude these small bubbles from the evaluation of the total amount of air trapped during the water entry: white regions with an area smaller than 12 pixels are discarded, as this value was found to correspond to the area of the air bubbles already present in the fluid before the impact. The total area of air trapped below the structure is then evaluated, as it is assumed to be proportional to the number of pixels counted with the proposed method.
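A minimal sketch of the described frame-processing chain is given below, assuming NumPy/SciPy. The paper's cleanup step uses morphological reconstruction [12]; here it is replaced by a simple binary opening for brevity, so the sketch illustrates the approach rather than reproducing the exact algorithm.

```python
import numpy as np
from scipy import ndimage

MM_PER_PX = 300.0 / 1305.0      # calibration: 1305 pixels correspond to 300 mm
PX_AREA_MM2 = MM_PER_PX ** 2    # about 0.23 x 0.23 mm^2 per pixel
THRESHOLD = 200                 # grey level above which a pixel is counted as air
MIN_REGION_PX = 12              # smaller regions are bubbles present before impact

def trapped_air_area_mm2(grey_frame):
    """Estimate the trapped-air area in one greyscale frame (values 0..255)."""
    binary = grey_frame >= THRESHOLD            # bright pixels = diffracting air
    binary = ndimage.binary_opening(binary)     # smooth out isolated pixels
    labels, n = ndimage.label(binary)           # connected white regions
    if n == 0:
        return 0.0
    areas = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
    areas = areas[areas >= MIN_REGION_PX]       # drop pre-existing small bubbles
    return float(areas.sum()) * PX_AREA_MM2
```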
5. On the effect of the entry velocity on the trapped air

This section investigates the effect of the entry velocity on the amount of air trapped during the initial stage of the impact. Using the technique presented in the previous section, it is possible to observe the time trace of the trapped air. The analysis is performed on wedges with a deadrise angle of 4° entering the water in free fall from several impact heights: 50, 100, 150 and 200 cm. The wedges are all 2 mm thick and are made of three different materials: aluminium, woven glass/epoxy and mat e-glass/vinyl ester. This way the impact conditions were similar, but the bodies presented different flexibility due to the different elastic moduli of the three materials (namely 68 GPa, 30.3 GPa and 20.2 GPa). A detailed characterization of the specimens can be found in [14]. It was thus possible to study the effect of the structural deformation on the air trapped during the impact.

Figures 10 to 12 show the evaluated trapped air versus the entry depth v0·t, where v0 is the velocity at the beginning of the impact.

Figure 10. Experimental evaluation of the air trapped in time. Aluminium wedge (A), 2 mm thick, deadrise angle β = 4°, for variable impact height. The impact heights in the legend are in cm; the product v0·t is in mm.

Figure 11. Experimental evaluation of the air trapped in time. Composite wedge (W), 2 mm thick, deadrise angle β = 4°, for variable impact height. The impact heights in the legend are in cm; the product v0·t is in mm.

Figure 12. Experimental evaluation of the air trapped in time. Composite wedge (V), 2 mm thick, deadrise angle β = 4°, for variable impact heights. The impact heights in the legend are in cm; the product v0·t is in mm.
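For illustration, the per-frame areas returned by a routine like the one sketched in Section 4 can be paired with the nominal penetration depth v0·t used as the abscissa of Figures 10 to 12. The helper below is hypothetical (its name and arguments are not from the original work); it only assumes the 1.5 kHz capture rate stated above, with v0 passed in as the measured entry velocity.

```python
import numpy as np

def air_trace(areas_mm2, v0_m_s, frame_rate_hz=1500.0):
    """Pair each frame's trapped-air area with the entry depth v0*t (in mm),
    as used for the abscissa of Figures 10 to 12."""
    areas = np.asarray(areas_mm2, dtype=float)
    t = np.arange(len(areas)) / frame_rate_hz   # time from the first impact frame
    depth_mm = v0_m_s * 1000.0 * t              # v0*t expressed in mm
    return depth_mm, areas
```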
6. Effect of the structural deformation on the air trapped during the water entry

In the case of the water entry of flexible structures, the structural deformation may alter the air-trapping mechanism. In particular, considering simple wedges, the deadrise angle is locally modified during the impact with the water; this could lead air bubbles to coalesce and form a cushion or, on the contrary, let the air escape from an already formed cushion. An example of the evolution of the wedge deflection during the water impact can be found in [8].

A collection of the experimental results for the different wedges impacting from the same heights is presented in Figure 13.

Figure 13. Evolution of the air entrapped in time for the water entry of wedges with various flexural stiffness (A = aluminium, W = woven glass/epoxy, V = mat glass/epoxy) impacting from two different impact heights (100 cm, top, and 150 cm, bottom). The product v0·t is in mm.

The experimental findings suggest that the flexibility of the wedge has negligible effects on the air trapping, as the wedges undergo large deformations only after the air has already been entrapped in the fluid. Even if the amount of trapped air is quite similar in all three cases, it may be noted that, at the beginning of the water entry and for high impact energies, stiffer wedges show a sharper peak of entrapped air than the more flexible ones. The more flexible wedges, instead, apparently lose some air from the cushions in the final part of the entry. Further investigation is needed to explore and confirm these findings.

7. Conclusions

In this work we propose a technique to quantify the amount and the time evolution of the air trapped during the water entry of flexible structures. First, a methodology based on the analysis of high-speed images is proposed and discussed. The results agree with expectations, although only a qualitative comparison is possible, as there are no other experimental or numerical results in the literature to compare with.

Based on the experimental findings, air trapping seems to attain its maximum at the beginning of the impact, when the velocities are highest. Bodies, however, need time to deform: by the time the wedge deformations are large enough to modify the deadrise angle, all of the entrapped air is already in the fluid. On the basis of these preliminary results, there is no remarkable experimental evidence of an influence of the structural deformation on the amount of air trapped during the impact. Further studies are needed to confirm these observations.

The analysis of the images showed that wedges with deadrise angles greater than 10° entrap air in the form of small bubbles spread over a region that shrinks as the deadrise angle increases. In the cases investigated, the structural deformation was not capable of lowering the deadrise angle enough to switch the air-trapping mechanism from small air bubbles (with negligible effect on the hydrodynamic pressure) to an air cushion, which might have a strong effect on the hydrodynamic pressure. Indeed, further investigations in this direction are needed to fully understand this phenomenon.

The experimental results show a saturating effect of the impact energy on the amount of entrapped air. Furthermore, the role of the stiffness is very limited, as air trapping is found to relate mainly to the initial deadrise angle. Further investigations are needed for cases with very low deadrise angles and different geometries. We note that the recently developed methodologies to reconstruct the hydrodynamic pressure in water-entry problems from the flow kinematics [15-17] can be adopted in the future to quantify the influence of air trapping on the hydrodynamic pressure.

Acknowledgement

Support from the Office of Naval Research (grant N00014-12-1-0260) and the advice of Dr. Y. Rajapakse are gratefully acknowledged.

References

[1] S. Abrate, Hull slamming, Appl. Mech. Rev. 64 (2013) p. 060803.
[2] A. Korobkin, E. I. Părău, J. M. Vanden-Broeck, The mathematical challenges and modelling of hydroelasticity, Philos. Trans. A Math. Phys. Eng. Sci. 369 (2011) pp. 2803-2812.
[3] S. E. Hirdaris, P. Temarel, Hydroelasticity of ships: recent advances and future trends, Proc. Inst. Mech. Eng. Part M J. Eng. Marit. Environ. 223 (2009) pp. 305-330.
[4] X. Chen, Y. Wu, W. Cui, J. J. Jensen, Review of hydroelasticity theories for global response of marine structures, Ocean Eng. 33 (2006) pp. 439-457.
[5] O. M. Faltinsen, The effect of hydroelasticity on ship slamming, Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 355 (1997) pp. 575-591.
[6] O. M. Faltinsen, Hydroelastic slamming, J. Mar. Sci. Technol. 5 (2000) pp. 49-65.
[7] R. Panciroli, Water entry of flexible wedges: some issues on the FSI phenomena, Appl. Ocean Res. 39 (2012) pp. 72-74.
[8] R. Panciroli, S. Abrate, G. Minak, A. Zucchelli, Hydroelasticity in water-entry problems: comparison between experimental and SPH results, Compos. Struct. 94 (2012) pp. 532-539.
[9] R. Panciroli, S. Abrate, G. Minak, Dynamic response of flexible wedges entering the water, Compos. Struct. 99 (2013) pp. 163-171.
[10] A. Korobkin, A. S. Ellis, F. T. Smith, Trapping of air in impact between a body and shallow water, J. Fluid Mech. 611 (2008) pp. 365-394.
[11] M. Cooper, L. McCue, Experimental study on deformation of flexible wedge upon water entry, in: 9th Symposium on High Speed Marine Vehicles.
[12] P. Soille, Morphological Image Analysis: Principles and Applications, Springer-Verlag, 1999, pp. 173-174.
[13] R. M. Haralick, L. G. Shapiro, Computer and Robot Vision, 1st ed., Addison-Wesley Longman, Boston, MA, USA, 1992.
[14] R. Panciroli, Dynamic Failure of Composite and Sandwich Structures, vol. 192, Springer Netherlands, Dordrecht, 2013.
[15] A. Nila, S. Vanlanduit, S. Vepa, W. Van Paepegem, A PIV-based method for estimating slamming loads during water entry of rigid bodies, Meas. Sci. Technol. 24 (2013) p. 045303.
[16] B. W. van Oudheusden, PIV-based pressure measurement, Meas. Sci. Technol. 24 (2013) p. 032001.
[17] R. Panciroli, M. Porfiri, Evaluation of the pressure field on a rigid body entering a quiescent fluid through particle image velocimetry, Exp. Fluids 54 (2013) p. 1630.
Power quality metrics for DC grids with pulsed power loads

ACTA IMEKO, ISSN: 2221-870X, June 2021, Volume 10, Number 2, 153-161

Andrea Mariscotti¹
¹ DITEN, University of Genova, Via Opera Pia 11A, 16145 Genova, Italy

Abstract
DC grids can effectively feed pulsed power loads (PPLs), integrating local energy storage to minimize the impact on other connected loads and implementing buffered sub-grids to isolate susceptible loads even more. The identification of regulatory limits for PPL integration and DC grid response, and the assessment of the PPL impact, necessitate suitable power quality (PQ) metrics. Existing indexes and limits (e.g. ripple and distortion, voltage fluctuation) are compared to other metrics, derived from control methods and from knowledge of the peculiar characteristics of PPLs and of their interaction with DC grids in some examples. The objective is a unified approach to PQ metrics suitable for a wide range of DC grids and in particular for the behaviour of PPLs.

Section: Research paper

Keywords: DC grid; power quality; pulsed power load

Citation: Andrea Mariscotti, Power quality metrics for DC grids with pulsed power loads, Acta IMEKO, vol. 10, no. 2, article 22, June 2021, identifier: IMEKO-ACTA-10 (2021)-02-22

Section editor: Giuseppe Caravello, Università degli Studi di Palermo, Italy

Received February 17, 2021; in final form April 24, 2021; published June 2021

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Corresponding author: Andrea Mariscotti, e-mail: andrea.mariscotti@unige.it

1. Introduction

The term "DC grid" may be considered a catch-all for various types of distribution networks that are being extensively used for a wide range of applications. One of the advantages of DC distribution is the ease of integration of sources and loads, without the complication of phase-angle instability and coordination typical of AC applications. Such networks are operated at both medium voltage (MV) and low voltage (LV), with some standardized nominal voltage values. Compared to AC grids, in several cases DC grids can interface with pulse loads more effectively, that is, with a faster response and without violating power quality (PQ) constraints [1]. The physical extension is variable: from some tens of metres within a large room, a smart house or an electrified vehicle; hundreds of metres between buildings, at a campus or onboard ships; up to some km for distribution within smart residential districts, technological parks, etc. [2], [3]. Some representative examples of such networks are as follows.

Smart data centres pursue a DC distribution perspective, in particular to ensure a high level of availability, using capacitors, batteries and autonomous renewable energy sources (photovoltaic and fuel cells, to cite the most common ones operating at DC), covering a wide time scale of events (fluctuations, dips, short and long interruptions) [4].

MV power distribution onboard ships features various types of loads with different dynamics and power absorption levels [1], [5], [6]. Various arrangements may be adopted: split DC and zonal bus types, different sub-bus solutions for generators, propulsion loads and PPLs, etc. [6]. The extension of the network is of course limited by the physical size of the ship (up to about 300 m), but cabling and routing may be quite complex.

Another example of LV/MV application characterized by dynamic moving loads is the wide set of railways, metros, tramways and trolley buses, all supplied from a catenary (or third-rail) system fed by rectifier substations [7], [8]. The line voltage is generally standardized to a set of nominal values of 600, 750, 1500 and 3000 V.
These networks feature the largest extension, as they cover entire cities, regions and countries. They are, however, sectioned into smaller portions, mainly for exigencies of maintenance and operation, besides control of the supply voltage. The network frequency response is nevertheless peculiar, with significant variations of the network impedance and resonances appearing already at low frequency [9]. Loads with large nominal power and significant dynamics have a direct impact on the line voltage (subject to appreciable variations consequent to load cycles) and can compromise network stability [10].

As in all distribution networks, the primary quantity is the voltage available at the equipment terminals, but for a complete assessment of the network-load interaction, current and power should also be evaluated. In particular, the current waveform is specifically involved in phenomena such as inrush, identification of faults and, in general, control of the interface converters by feed-forward methods attempting dynamic impedance control. The power profile, instead, is the primary quantity for the power and energy budget, as well as for prediction and control [11].

In general, the impact of pulsed power loads (PPLs) is reduced by a range of techniques suitable for DC grids, whose applicability may depend on the specific grid characteristics:
• first of all, the load dynamics may be limited at the origin, e.g. by limiting the rate of rise of the power demand, compatibly with the load mission and characteristics; a wide range of techniques has been proposed, all falling under a unique category that we may call "profile-based control";
• additional passive energy storage devices (e.g. batteries, supercapacitors, etc.) may be installed, capable of feeding the supplementary power with the required rapidity and, at the same time, of damping network swells (consequent to load release) and recovering energy in case of regenerative behaviour;
• the role of passive energy storage devices may be backed up by the so-called "DC springs" and other active compensation devices.

The work is thus structured starting from the PPL characteristics in terms of electrical behaviour and interaction with the DC grid (Section 2). Section 3 then focuses on metrics suitable to describe such interaction quantitatively and on the impact on DC grid operation and PQ. These metrics are derived from previously proposed PQ indexes, as well as from considerations regarding current and power absorption profiles. Section 4 summarizes the normative PQ requirements for DC grids in various applications that are characterized by the presence of PPLs, such as avionics, shipboard and railways to various extents.
Section 5 then evaluates the performance of the proposed metrics, using simulated and measured data.

2. Pulsed power loads and network interaction

Pulsed power loads (PPLs) with repeated large variations of the load current represent an issue for DC grid voltage stability and regulation [12]. For complex systems with non-essential and essential loads, the adoption of a DC zonal distribution system is commonplace, dividing the onboard DC grid into zones separated by interface converters. Each sub-grid adopts specific solutions where necessary for critical loads, de facto confining and limiting the propagation of transients and disturbance. Since power is often fed by intrinsically AC sources (such as gas-turbine and diesel-engine alternators), heavy loads may instead be fed directly from a primary AC grid, from which the other zoned DC sub-grids are derived, one or more dedicated to specific PPLs and local energy storage. PPLs are usually interfaced with a DC/DC front-end converter, based on a variety of topologies, depending also on the power rating, voltage level and required dynamics.

PPLs for a wide range of applications may be exemplified as follows (a sketch of the resulting power absorption profile is given after this list).
• Devices and apparatus that are part of scientific experiments in nuclear physics, such as the magnetic circuits of acceleration and bending at synchrotrons, the Large Hadron Collider, etc. In several cases the choices are a compromise between the desirable performance for new types of experiments, the reutilization of existing equipment (thus undergoing gradual modernization), and exigencies of power absorption, feasibility of protections, and continuity of service. DC distribution facilitates the direct or indirect connection of energy storage, ensuring both fast dynamic response and higher continuity of service, even supporting reconfiguration on the fly ("hot swap") to account for some amount of DC/DC supply going out of service [13].
• Radar systems connected to the aircraft DC grid, with power absorption following the transmission pulses. The DC grid onboard aircraft is characterized by a limited power deployment, with flight-control loads, constant-power loads (such as the fuel pump), the said radar load, as well as modern radios. The load profile described in [14] may be taken as an exemplifying case: current pulses of 4 ms duration and a peak power of 33.6 kW with respect to a steady power absorption of one third (11.2 kW); the peaks are arranged in pulse bursts with a spacing of 10 ms (5 peaks) and a repetition of 200 ms. Modern aircraft may be equipped with more than one radar device, for which coordination for interleaved operation may be a strategic option. Modern radio systems with advanced characteristics of spectrum exploitation, long range and traffic acceptance flexibility (especially for military applications) may share a power absorption pattern similar to radars.
• Rail guns and electromagnetic weapons, such as high-power laser and microwave beams, typically located onboard ships [15]. All these weapon systems deliver peak power levels in the order of 1 GW or more through a pulse forming network, whose charging from the ship DC grid absorbs power levels in the order of 10 MW to 30 MW for a duration of some seconds. During intensive use the DC grid loading is almost continuous, with the delivery of fast energy pulses that may be easily decoupled thanks to their much higher characteristic frequency.
• Electrified transports featuring high-performance units with a dynamic power profile.
Two quite different modern transportation means fall into this category: urban high-speed, high-pace electrified guideway systems, and electric vehicles with dynamic wireless power charging, featuring pulsed power absorption with fast charging times when passing charging points. Both can be assumed to be fed by a DC grid, although the physical extension is quite different: dynamic wireless power transfer requires local DC distribution buses up to about some hundreds of metres [16], whereas guideway systems, tramlines and railways of various kinds feature supply sections of several km [17]. For guideway systems, such as metros, it must be pointed out that PPLs, at load release or when implementing regenerative braking, may cause a significant increase of the line voltage, with consequences for electrical safety, especially for passengers and people standing at the platform and near platform screen doors [18].
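As referenced before the list above, the following sketch builds a trapezoidal approximation of the radar-like power trajectory of [14] (33.6 kW pulses over an 11.2 kW base, 5 pulses spaced 10 ms apart, the burst repeating every 200 ms). The rise time is an assumption, since the cited profile does not specify it, and the function name and parameters are illustrative only.

```python
import numpy as np

def ppl_power_trajectory(t, P_peak=33.6e3, P_steady=11.2e3,
                         t_top=4e-3, t_rise=0.2e-3,
                         T_pulse=10e-3, n_pulses=5, T_train=200e-3):
    """Trapezoidal approximation of the radar-like PPL profile of [14].

    The 4 ms figure is taken here as the top-value duration and the
    0.2 ms rise/fall time is assumed (not specified in the source)."""
    p = np.full_like(t, P_steady)
    for burst_start in np.arange(0.0, t[-1], T_train):
        for k in range(n_pulses):
            t0 = burst_start + k * T_pulse
            # piecewise-linear trapezoid: rise, flat top, fall
            up = np.clip((t - t0) / t_rise, 0.0, 1.0)
            down = np.clip((t0 + 2 * t_rise + t_top - t) / t_rise, 0.0, 1.0)
            p += (P_peak - P_steady) * np.minimum(up, down)
    return p

t = np.arange(0.0, 0.4, 1.0 / 50e3)   # 0.4 s at a 50 kHz sample rate
p = ppl_power_trajectory(t)           # two bursts of five pulses each
```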
3. Disturbance metrics

For both the implementation of the control policies and the assessment of performance, suitable metrics of the disturbance are necessary. Such metrics can be applied to voltage, current, power, or a combination thereof (multi-criteria metrics). They were preliminarily discussed and compared to the relevant standards in the IMEKO TC4 conference paper [19], of which this work is the continuation, with focus on dynamic PPLs and network stability.

From the normative viewpoint [20]-[25], transient electrical phenomena may be classified as:
• macroscopic voltage variations lasting for tens of ms or longer;
• shorter voltage variations, related to switching transients of variable amplitude;
• periodic variations, often described in terms of harmonic distortion or ripple, covering a wider frequency range.

It is commonplace to evaluate all these phenomena by the instantaneous difference with respect to the steady-state value Vdc [1], [12], applying the concept of ripple to all voltage variations, both transient and repetitive. The MIL-STD-1399-300 standard [26] has recently extended the concept of distortion and ripple to the active power absorption profile itself for AC grids onboard US Navy ships. The concept is interesting for DC grids, where the concept of absorbed power is more straightforward and does not involve reactive power. In the following, voltage fluctuations and transient variations are considered first, discussing alternative metrics other than ripple; periodic variations and ripple are then considered, including the extension to cover aperiodic variations and transients.

3.1. Voltage fluctuation

As anticipated, the network response and the quality of the delivered energy (power quality in a wide perspective) are usually measured by indexes applied to the line voltage, measuring the voltage spread during fluctuations to compare with maximum values and time-amplitude limit curves. Line transients may thus be evaluated by a measure of their pure amplitude a (e.g. peak or average value), equivalent time duration tx (e.g. half-amplitude duration t50), and combined amplitude-duration. The first index, with a strong relationship to AC networks, is the rms value, which, when defined for aperiodic phenomena, corresponds in reality to a measure of the area of the signal. Thus, focusing on aperiodic phenomena for generality, the combination of amplitude and duration gives two measures of the intensity of the phenomenon: area and energy.

Taking a time interval [t1, t2] and considering the AC portion x(t) of the network quantity (voltage or current), having first subtracted an estimate of the steady value X, we obtain:

S = \int_{t_1}^{t_2} x(t)\,dt, \qquad E = \int_{t_1}^{t_2} |x(t)|^2\,dt, \qquad P = \frac{E}{t_2 - t_1}.   (1)

Energy is calculated and considered in a signal-processing perspective, as the square of the signal x over the time interval [t1, t2]; the power P is then obtained as the energy E divided by the duration t2 - t1 of the interval. It is easy to see that P corresponds to the squared rms value. The mean-square time duration may also be defined, measuring the interval where the energy is concentrated:

\tau^2 = \int_{t_1}^{t_2} t^2 |x(t)|^2\,dt \Big/ \int_{t_1}^{t_2} |x(t)|^2\,dt.   (2)

Area (or impulse strength) and energy can be used to evaluate the impact on loads: the missing area or energy of a negative transient (a temporary reduction of the line voltage) will cause a reduction of the relevant quantities at the load side. The consequence is a diminished performance that may vary in duration and severity depending on the load type. Such impact is partially compensated by filters (mainly capacitors), storage devices (e.g. batteries and supercapacitors) and DC springs over different time scales, covering fast and slow variations. In addition, control at the source with a reserve of energy, driven by a suitable index of the network quality of power, can support cooperative control.
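A discrete-time sketch of the indexes (1)-(2) could read as follows, assuming uniformly sampled data from which the steady value has already been subtracted; the choice of the time origin for τ² (here the start of the observation window) is an assumption, as the definition leaves it open.

```python
import numpy as np

def transient_indexes(x, fs):
    """Indexes of eq. (1) and (2) for the sampled AC part x(t) of a network
    quantity (steady value already subtracted); fs is the sample rate in Hz."""
    x = np.asarray(x, dtype=float)
    t = np.arange(len(x)) / fs                 # time origin: start of the window
    S = np.trapz(x, t)                         # area, eq. (1)
    E = np.trapz(np.abs(x) ** 2, t)            # energy, eq. (1)
    P = E / (t[-1] - t[0])                     # power, eq. (1)
    tau2 = np.trapz(t ** 2 * np.abs(x) ** 2, t) / E   # mean-square duration, eq. (2)
    return S, E, P, tau2
```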
3.2. Voltage ripple

DC grids are generally characterized by means of "ripple", which EN 61000-4-17 [27] describes as composed of the "power frequency or its multiple 2, 3 or 6", focusing with a limited view on the mechanism of production by classic AC/DC conversion (based e.g. on diode and thyristor rectifiers). This scheme is well represented in DC electrified transports, where the ripple of the substation rectifiers is clearly visible: its relevance is in terms of line voltage fluctuation, reflected also in the track voltage, where a significant amount of ripple may be relevant for the assessment of touch voltage and body current [18]. Ripple is thus a repetitive phenomenon superimposed on the DC steady value.

Modern AC/DC and DC/DC converters and poly-phase machines for renewable energy sources are used extensively to improve PQ and for ease of interfacing in modern micro-grids and smart grids. This necessarily leads to a reformulation of the concept of ripple into a more general definition, accounting for non-harmonically related components, possibly non-stationary. The time-domain definition of ripple is the maximum instantaneous voltage variation over a given time interval [28], [29], or directly the difference between the observed maximum and minimum voltage values [24], both weighted by the steady DC voltage value Vdc. In this perspective, ripple and voltage fluctuation are synonymous, provided that ripple is defined as the difference with respect to Vdc, thus taking positive and negative values (above or below Vdc) separately, which allows asymmetric limits for the voltage variation to be considered.

It is observed that DC networks have a much lower harmonic content, thanks to the large deployed capacitance and, in general, the lower impedance in the harmonic frequency range, as was demonstrated for railways in [30] by comparing harmonic power terms in AC and DC systems. The lower network impedance keeps harmonic voltage components low, while amplifying current distortion at higher frequency, as sources see a quasi-short-circuit condition.

Basically speaking, ripple may be calculated as the maximum of the peak-to-peak or peak excursion of the network voltage [12], [28], but other measures (rms, percentiles, ...) were proposed in the past [31]-[33]. Ripple addresses two objectives at once, quantifying the spread of the instantaneous values and the network distortion (ripple was defined both in the time and in the frequency domain [28], [29]). The explicit connection between ripple and the DFT (including harmonics as such and other components) was given in [28] with the index DLFSD and in [31] with the RDF. Ripple can describe the quality of the delivered voltage in terms of fluctuations and excursion, as well as assess significant load steps and inrush phenomena when applied to the current waveform.

3.3. Power trajectory

PPL relevance is caused by the sudden power absorption and the line voltage drop consequent to the flow of the absorbed current. The grid voltage will change by tens of % when the current increases by orders of magnitude, so that, to a first approximation, metrics based on power may be replicated for current with similar results. The PPL input power as a function of time, PP(t), is commonly named "power trajectory". It has an approximately trapezoidal shape with amplitude P, comparable rise and fall times tr and tf, and top-value duration tp; pulses of overall duration Tp repeat in bursts of duration Tb, which may occur more than once, forming a train of bursts, one burst every Tt seconds, as sketched in Figure 1. The radar load depicted in Section 2 fits this description with tp = 4 ms, Tp = 10 ms, Tb = 50 ms and Tt = 200 ms.

Figure 1. Ideal power trajectory PP(t) of a PPL, absorbing power in bursts of individual, approximately trapezoidal pulses.

For our purpose, the following elements are particularly important and characterize the power trajectory: the top value P, the rate of rise P/tr, the duration of the power pulse tp and the repetition intervals Tb and Tt. The peak power P establishes the loading of the source and the grid voltage reduction, taking one pulse alone in almost steady conditions. The rate of rise instead indicates the dynamics of the power demand and the rapidity with which the DC grid and its energy storage must feed the load to sustain the network voltage. A metric that weights the rate of rise is the time derivative of the power trajectory. The pulse duration tp, together with the absorbed power P, is relevant from an energy viewpoint, indicating the level of depletion of the energy storage during one power absorption event. The two parameters may be combined by calculating the area Ap = P tp of the power trajectory pulse, which as a general metric was introduced in Section 3.1. Tb and Tt indicate instead the periodicity and the amount of time available to recharge the energy storage devices between two pulses or two bursts of pulses. The number of pulses in one burst and of bursts in one train come directly as np = Tb/Tp and nb = Tt/Tb. Combined with the area Ap, which has the units of energy, we obtain the total depletion in terms of number of pulses, and the rate of depletion of the energy storage, which must be matched by the source capability.

The time rate of change of the power trajectory was proposed as the reference quantity of a disturbance metric to apply to PPLs [11]:

d_P = \sqrt{ \frac{1}{T_P} \int_0^{T_P} \left( \frac{\mathrm{d}P_P(t)}{\mathrm{d}t} \right)^2 \mathrm{d}t }.   (3)

The expression for dP returns the average squared derivative of the power trajectory PP(t); in other words, this is the rms value of the derivative of the power trajectory.
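A minimal discrete-time evaluation of (3) is sketched below; the differentiation scheme is not prescribed by the definition, and Section 5.3 shows that this implementation choice matters.

```python
import numpy as np

def power_rate_dP(p, fs):
    """RMS of the time derivative of the power trajectory p(t), eq. (3)."""
    p = np.asarray(p, dtype=float)
    dpdt = np.gradient(p) * fs      # central-difference derivative, in W/s
    return float(np.sqrt(np.mean(dpdt ** 2)))
```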
Considering the sinusoidal components composing it, the time derivative corresponds to a multiplication of each component by ω; the structure of this penalization is then similar to the one adopted for the high-voltage stress of capacitor insulation [34] (the reference quantity being there the electric field or the terminal voltage). By expressing the power trajectory as PP(t) = v(t) i(t), the derivative and the square of the product may be further developed. The derivative of a product is equal to the sum of the two mixed products of one of the quantities with the derivative of the other. Each derivative is the multiplication by ω of the quantity expressed as a Fourier series:

d_P' = \sqrt{ \frac{1}{T_P} \int_0^{T_P} \left( \sum_m \tilde{v}_m(t) \sum_n \omega_n \tilde{i}_n(t) + \sum_m \omega_m \tilde{v}_m(t) \sum_n \tilde{i}_n(t) \right)^2 \mathrm{d}t }

having identified for brevity with ṽm(t) and ĩn(t) the Fourier series terms, each consisting of an exponential term and the peak value of the voltage or current component. It is also observed that, for the purpose of calculating energy, only the terms with a non-null integral over the longest period are retained; that is, the mixed products of ṽm(t) and ĩn(t) with m ≠ n give a null contribution and only those with the same harmonic index are retained. The peak values in ṽm(t) and ĩn(t), once passed through the integral over TP, are transformed into the rms values Vn and In with a scaling of 1/√2 each, which together compensate the factor of 2 resulting from the sum of identical left and right terms:

d_P' = \sqrt{ \left( 2 \sum_n \omega_n \tfrac{1}{2} V_n I_n \right)^2 } = \sum_n \omega_n V_n I_n = \sum_n \omega_n P_n.   (4)

The resulting alternative power rate metric is thus a harmonic active power term multiplied by the pulsation, which acts as a sort of weighting function. Harmonic active power was analysed for DC and AC railway systems in [30], finding that the relevant terms are located at low frequency, with a relevance of a fraction of a % compared to the total exchanged energy. Such terms are larger for AC than for DC systems, which feature in general a larger deployed capacitance (and may also be backed up by various energy storage systems).

3.4. Power distortion

Treating the absorbed power profile (i.e. the power trajectory PP(t)) as a signal, the equivalent of the total harmonic distortion may be calculated, as well as the intensity of the spectral components of PP(t) over specific frequency intervals. This is the approach recently proposed in the MIL-STD-1399-300 interface standard for AC grids onboard US Navy electric ships. Power distortion (PD) is calculated for the active power component only (called "real power"):

PD = \sqrt{ \sum_n \left( P_n / (\sqrt{2}\, P_{av}) \right)^2 }.   (5)

Such a metric considers the peak value of the active power in a given frequency interval (or bin n) and extracts its rms value by dividing by √2. This metric is preliminarily considered here for DC systems by simply analysing the spectral behaviour of PP(t), compared to the expression derived for d'P in (4), where the active power terms are arithmetically summed with the pulsation as relative weight. Considering the typical behaviour and response of the elements of a DC grid (energy storage devices, filters, converter controls), we briefly observe that an increased weight, linear with frequency, does not express a practical scenario, where high frequency is in reality filtered by capacitors, whereas medium-frequency components may fall beyond the control bandwidth.
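As an illustration, (5) may be evaluated from the one-sided DFT of a sampled power trajectory. The sketch below applies the MIL-STD-1399-300 definition (given there for AC real power) to a DC-grid PP(t), as preliminarily proposed above; the doubling of the Nyquist bin for even-length records is neglected for simplicity.

```python
import numpy as np

def power_distortion(p):
    """Power distortion PD of eq. (5) from a sampled power trajectory p(t):
    peak spectral components P_n relative to the average power P_av."""
    p = np.asarray(p, dtype=float)
    P_av = p.mean()
    X = np.fft.rfft(p) / len(p)     # complex amplitudes of the components
    P_n = 2.0 * np.abs(X[1:])       # one-sided peak values, bins n >= 1
    return float(np.sqrt(np.sum((P_n / (np.sqrt(2.0) * P_av)) ** 2)))
```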
4. Normative requirements

4.1. Voltage variations, swells and sags (dips), and interruptions

Overvoltages and undervoltages may be named swells and sags (or dips), by analogy with AC networks. The standards specifying requirements for the PQ, performance and reliability of DC distribution divide among the areas of application: MIL-STD-704F [23] for aircraft; IACS Reg. E5 [22] and IEEE Std. 45.1 [21] for ships; EN 50155 [24] onboard rolling stock; ITU-T L.1200 [25] for telecommunication and data centres. In general, a distinction is needed from other long-term transients (long interruptions and fluctuations), as well as from very short-term phenomena, usually classified as spikes or surges. The latter, in DC grids with no overhead lines, are in general of modest amplitude, thanks to the large deployed capacitance (batteries and other energy storage devices, output filters of interfacing converters, etc.) and the consequently low transient impedance.

The definitions of IEEE Std. 1159 [35] and EN 61000-4-30 [36] for AC systems indicate a voltage dip or swell when the rms crosses a threshold, its duration quantified by measuring the time interval between two consecutive crossings [35], [37]. This definition based on the rms value is possible because swells and dips are defined for durations longer than one cycle. The analogy with DC systems is not perfect, and various time scales for the estimate of the transient and of the related steady value may be used; this will be shown with an example in Section 5.

Table 1 shows limits and reference values for voltage fluctuations, swells, sags and interruptions taken from the most relevant standards for the environments of application of PPLs reviewed in Section 2. Figure 2 gives insight, in graphical form, into the time-amplitude limits of MIL-STD-704F and EN 50155 for aircraft and railway applications, respectively.

Faster transients (with shorter duration and rise and fall times) are more application specific and may depend e.g. on switching operations. A peculiar source of repetitive fast transients in electrified transportation systems are electric arcs, the known by-product of the current collection mechanism, occurring at small discontinuities of the catenary line, during pantograph detachment in dynamic conditions and with adverse weather (e.g. ice) [38], [39]. This phenomenon is also relevant from an energy consumption viewpoint, in principle directly (as a dissipative phenomenon in the arc itself and in the involved components), but most of all indirectly (by interfering with the operation of the braking chopper of DC rolling stock) [40].

4.2. Ripple, harmonics and periodic variations

EN 61000-4-7 [41] is a well-structured and complete standard covering methods and algorithms for quantifying spectral harmonic components, including interharmonics. However, the underlying assumption is always the existence of a fundamental, even for interharmonics, which e.g. combine mains and motor frequency in variable-frequency drives, or two mains frequencies in interface converters. As observed in Section 3.2, for DC grids it is always assumed that the DC supply comes from AC mains upstream, although technology advancement is characterized also by autonomous DC grids, supplied by sources that are intrinsically DC, such as fuel cells and photovoltaic panels.
It is thus evident that concepts such as distortion and spectrum components, and the relevant limits, should be independent from a purely harmonic perspective:
• distortion becomes the amount of AC ripple superposed on the DC steady value Vdc, rather than the composition of the amplitudes of the harmonic components; this definition goes back to ripple, which presents itself as a flexible and all-comprehensive metric;
• spectrum components are considered over a continuous frequency axis, although the limitation of the frequency resolution is still relevant; this aspect is mentioned because MIL-STD-704F poses limits not only on distortion, but also on the spectral components [19].

Table 1. Limits and reference values for transient events (voltage variations, dips and interruptions) (E = emission, I = immunity, A = ambient specification).

Standard        Phenomenon                  Type   Nom. volt. Un in V   Ref. values
MIL-STD-704F    voltage var.                A      28, 270              see Figure 2
EN 61000-4-29   voltage var.                I      24-110               85-120 %, 0.1-10 s
EN 61000-4-29   voltage dip                 I      24-110               40, 70 %, 0.01-1 s
EN 61000-4-29   voltage interr. HiZ & LoZ   I      24-110               0 %, 0.001-1 s
IACS            voltage var.                I      ≤ 1 kV               95-105 %
EN 50155        voltage var.                A      24-110               see Figure 2
EN 50155        voltage interr.             A      24-110               0 %, 0.01-0.03 s
L.1200          voltage var.                A      300, 380             Un-400 V 1 min; Un-260 V 1 min; Un-410 V 1 s; Un-420 V 10 ms
L.1200          voltage dip                 I      300, 380             40 %, 0.01 s
L.1200          voltage interr.             I      300, 380             0 %, 0.01 s (LoZ), 1 s (HiZ)

Figure 2. Profile of allowed transient overvoltage/undervoltage for (a) avionic (MIL-STD-704F) and (b) railway (EN 50155) applications.

Table 2 summarizes the normative limits and reference values for ripple and distortion. A few quantities can be identified in the various standards to weight distortion and voltage variations:
• the distortion D is the rms value of the AC components;
• the distortion factor DF = D/Vdc;
• the ripple R is the maximum absolute difference between an instantaneous value and the steady value Vdc [23]; an alternative definition is the difference between the maximum and minimum of the line voltage divided by their half sum [24]; in some cases ripple is expressed as an rms quantity, so that, for clarity, the ripple R properly said is considered a "peak quantity".

Table 2. Limits and reference values for ripple and distortion (E = emission, I = immunity, A = ambient specification).

Standard        Phenomenon   Type   Nom. volt. Un in V   Quantity   Ref. values
MIL-STD-704F    distortion   A      28                   D          3.5 %
MIL-STD-704F    ripple       A      28                   R          1.5 V
MIL-STD-704F    distortion   A      270                  D          1.5 %
MIL-STD-704F    ripple       A      270                  R          6 V
IACS UR E5      ripple       I                           Rrms       10 %
EN 50155        ripple       A      24-110               R          5 %
EN 61000-4-17   ripple       I      24-110               R          1, 2.5, 5, 7.5 %

Distortion may evidently accompany PPLs that are interfaced through static converters, but their most relevant feature is the impulsive and possibly repetitive power absorption profile. This profile, for the considered dynamics, may give rise to voltage and current components that fall under the spectrum limitations at low frequency. For aircraft, in fact, MIL-STD-704F establishes limits on distortion components starting at 10 Hz, unrelated to a concept of harmonics of a fundamental component.

5. Results and discussion

Before going into the various cases, it is recalled that, in general, the voltage and current profiles characterizing loads and PPLs have exponential profiles at the rising and falling edges, simply due to the most common behaviour of the controls and of the electric network, featuring a dominant pole or two damped complex poles. Shorter spikes may be present, but their origin is exogenous, as they are caused by lightning- and fault-induced phenomena.

5.1. PQ indexes for typical PPL waveforms

The behaviour of the metrics for transients (S, E and τ²) and of the ripple index, discussed in Sections 3.1 and 3.2 respectively, is analysed with respect to two typical PPL transient waveforms.
Case 1, shown in Figure 3, includes a voltage reduction due to a significant current absorption and a fast transient (swell) following the release of the load, which returns to a negligible current absorption. The transient response is stable and characterized by a first-order exponential. As is evident, the voltage reduction is slow and characterized by local changes of slope, visible in the moving-average profile superposed on the original waveform. As anticipated, a simple criterion based on a crossing threshold may face difficulties when the rate of change is very slow and the slope is not uniform; accurate and extensive noise removal is thus a minimum requisite.

Figure 3. Network voltage following a significant current absorption by the PPL local storage, with a sudden release at the end causing an exponential transient increase (swell).

Case 2, shown in Figure 4, shows the voltage and current waveforms of a periodic PPL, with the initial transient followed by the oscillatory response of the generator control and a variable amount of high-frequency ripple, which may be caused by the interface converters or may be the symptom of an incipient resonance. This waveform is quite similar to that of the electric arc phenomena recognized in DC electric railways [39], [42], where electric arcs trigger an impulsive reduction or increase of the line voltage (depending on whether the traction converter is absorbing power in a traction condition or injecting power into the network during regenerative braking). The waveform, in fact, besides a first pulse, features the oscillation of the rolling stock onboard filter (usually between 10 and 20 Hz). Substation ripple at 300 and 600 Hz, besides a more or less intense component at 100 Hz, is always present and appears as soon as the major transient response components have vanished.

Figure 4. PPL profile with periodic pulsation and superposed ripple following the initial power absorption: voltage (black) and current (grey) use the left axis, power (red) uses the right axis.

For case 1, the values calculated for the two portions of the waveform (voltage reduction with negative slope and subsequent swell), and for the waveform altogether, are reported in Table 3. The swell, although not as rapid, has a larger τ than the five times longer voltage reduction that precedes it. Case 2 is instead evaluated both for the three indexes S, E, τ² and for ripple, with a band-limited approach as well, in order to separate the observed oscillations at lower and higher frequency.

Since case 2 shows four absorption pulses, the index values are commented for equivalence between apparently similar events. The results are shown in Table 3, where the mean duration, and not its square, is reported for ease of comparison with the signal duration and the time intervals.

Table 3. Values of area S, energy E, and mean duration τ for the voltage waveforms of case 1 and case 2.

Interval        S in V·s   E in V²·s   τ in s
Case 1-intv 1   202.07     2200        8.80
Case 1-intv 2   55.51      1725        10.25
Case 2-intv 1   3.61       203.56      0.0308
Case 2-intv 2   3.56       193.58      0.0343
Case 2-intv 3   3.58       193.24      0.0353

The first voltage pulse has visibly fewer high-frequency oscillations and a slightly larger maximum value (the difference is about 3 V, so less than 1 %), but no other apparent differences.
Instead, the calculation of the energy points out a value 8 % larger than for the remaining pulses; analogously, the mean duration is shorter by about 10 %, which means that the energy concentration is higher. The physical explanation may be a deeper charging of the internal storage, which is then less depleted during the successive pulses. Indeed, the current peak in the first pulse is almost 4 % higher.

Case 2 is then further analysed for what concerns the ripple components, which are visibly located at two distinct frequency intervals, around about 25 Hz and 200 Hz, and whose intensity changes between periods of the pulsating voltage and current waveforms of Figure 4. The band-limited ripple rBP [29] is thus calculated over different frequency intervals, using a windowed DFT (Hann window). From a preliminary evaluation of the band occupation, the frequency intervals should be separated at about 15 Hz and 50-100 Hz. With a sample frequency fs = 2.048 kHz (matched by applying resampling), the shown frequency resolution is 8 Hz, to match the visible main oscillation preliminarily estimated at 24-25 Hz. The spectra shown in Figure 5 are cut at 400 Hz.

Figure 5. DFT of (a) v(t) and (b) PP(t), for the first three pulses of Figure 4 (assigned colours black, blue and red). The main oscillation is clearly visible at about 24 Hz, more stable in the v(t) profile; the high-frequency ripple is more recognizable in the PP(t) spectrum as a general increase, rather than isolated and visible in the 250 Hz channel.

As commented in the figure caption, the DFT of PP(t) highlights more effectively the spectrum pollution that may be a symptom of instability and that, in a practical case, should be further investigated.
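A simplified band-limited ripple estimate can be sketched by masking the DFT of the voltage and taking the peak-to-peak excursion of the reconstructed band. Note that [29] uses a windowed (Hann) DFT formulation, so the version below is only an illustrative approximation under that caveat.

```python
import numpy as np

def band_limited_ripple(v, fs, f_lo, f_hi):
    """Peak-to-peak excursion of the [f_lo, f_hi] band of the voltage v(t),
    isolated with a simple DFT mask (illustrative; [29] is more refined)."""
    v = np.asarray(v, dtype=float)
    v = v - v.mean()                            # remove the steady value Vdc
    X = np.fft.rfft(v)
    f = np.fft.rfftfreq(len(v), d=1.0 / fs)
    X[(f < f_lo) | (f > f_hi)] = 0.0            # keep only the band of interest
    band = np.fft.irfft(X, len(v))              # band-limited waveform
    return float(band.max() - band.min())

# Example: separate the two observed oscillation bands of case 2
# r_low  = band_limited_ripple(v, fs=2048.0, f_lo=15.0, f_hi=50.0)
# r_high = band_limited_ripple(v, fs=2048.0, f_lo=100.0, f_hi=400.0)
```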
5.2. Spectrum of the PPL pattern

The radar load profile [14], exemplified in Section 2, is theoretically analysed for its frequency occupation, starting from the general PPL waveform of Figure 1. The characteristics of the waveform are summarized as: current pulses of tp = 4 ms duration and a peak power of 33.6 kW with respect to a steady power absorption of one third (11.2 kW); the peaks are arranged in pulse bursts with a spacing of Tp = 10 ms (5 peaks) and a train repetition interval of Tt = 200 ms.

The coefficients of the Fourier series of a symmetrical trapezoidal pulse are reported in (6) [43]:

c_0 = \frac{P}{p}, \quad q = \frac{T_p}{t_r}, \quad p = \frac{T_p}{t_p + t_r}, \quad c_k = \frac{2P}{p} \cdot \frac{\sin(k\pi/q)}{k\pi/q} \cdot \frac{\sin(k\pi/p)}{k\pi/p},   (6)

having defined p and q as the ratios of the single-pulse duration Tp to the half-amplitude pulse duration tp + tr and to the rise (or fall) time tr, respectively. The resulting spectrum is a line spectrum with components repeating as per the pulse spacing and modulated by the ck values. A more complex expression may be derived without the simplifying assumption of equal rise and fall times. However, as evident from the curve shape of Figure 4, the trapezoidal shape is only an approximation and should not be assumed for more accurate assessments. The spectrum reported in Figure 5, instead, gives a more reliable indication of the average behaviour of the components over one Tp interval. A more refined assessment of the peak power oscillation and of its damping could be achieved more effectively by curve fitting in the time domain. In general, when the pulse duration is shorter and the oscillations are characterized by a variable instantaneous frequency, the precautions and verifications discussed in [45] should be considered.
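The line spectrum of (6) is straightforward to evaluate numerically; the sketch below uses np.sinc, which implements sin(πx)/(πx). The pulse amplitude P (taken here as the excursion above the steady base) and the rise time are assumptions, and the burst/train repetitions (Tb, Tt) would add finer line structure not modelled by the single-pulse-train expression.

```python
import numpy as np

def trapezoid_line_spectrum(P, t_top, t_rise, T_pulse, k_max=50):
    """Fourier-series magnitudes |c_k| of a symmetric trapezoidal pulse
    train, eq. (6), with p = T_pulse/(t_top + t_rise) and q = T_pulse/t_rise."""
    p = T_pulse / (t_top + t_rise)
    q = T_pulse / t_rise
    k = np.arange(1, k_max + 1)
    ck = 2.0 * P / p * np.sinc(k / q) * np.sinc(k / p)  # np.sinc(x) = sin(pi x)/(pi x)
    return k / T_pulse, np.abs(ck), P / p               # line freqs in Hz, |c_k|, c0

# Radar-like profile: 4 ms pulses every 10 ms; 22.4 kW = 33.6 - 11.2 kW excursion,
# 0.2 ms rise time assumed (not specified in the source)
f, mag, c0 = trapezoid_line_spectrum(P=22.4e3, t_top=4e-3, t_rise=0.2e-3, T_pulse=10e-3)
```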
5.3. Power rate metric dP

The power rate metric dP is evaluated with respect to a variable amount of harmonic distortion, using a realistic PP(t) trajectory with a pulsed profile. The first three periods of the red power curve of Figure 4, reconstructed from the measured voltage and current waveforms, are considered as cases of an increasing amount of distortion due to the visible superposed ripple. These cases are indicated as case 2-intv1, -intv2 and -intv3 for the first three pulse periods.

Synthetic results are shown in Table 4, reporting both the value of dP and the values of the previously defined indexes S, E and τ for the selected intervals. It must be underlined that the two metrics S and E applied to PP(t) agree and weight more the larger peak value of the second and third pulse, but they ignore the increasing high-frequency ripple.

Table 4. Values of the metric dP compared to S and E for case 2, applied to the power trajectory. S and E must here be interpreted as "area" and "energy" of PP(t) considered as a signal, and they do not have the same physical meaning. The derivative operator is insensitive to the steady PP(t) value, so that, to calculate S and E, PP(t) has been corrected by subtracting the steady value.

Case            dP diff in W·Hz   dP grad in W·Hz   S0 in W·s   E0 in W²·s
Case 2-intv 1   1.84·10³          1.28·10³          457.8       5.55·10⁶
Case 2-intv 2   1.58·10³          1.25·10³          470.2       5.65·10⁶
Case 2-intv 3   1.86·10³          1.39·10³          497.8       6.89·10⁶

The dP metric is based on the derivative of the fed signal (the power trajectory) and has a significant variability depending on the implementation. Table 4 reports two versions of the calculation, using "diff" (the difference between adjacent vector components) and "gradient" (the central difference between components whose indexes differ by 2): the difference is significant, especially because the two results do not have the same behaviour with respect to the interval number. As a note, since the derivative operator is insensitive to the steady value of the fed signal, PP(t) was also fed to S and E as an "AC signal", having subtracted the estimated steady value for each interval; for clarity, S and E are thus indicated as S0 and E0.
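The two discretizations compared in Table 4 can be reproduced as follows (a sketch assuming NumPy's diff and gradient). The central difference attenuates components near the Nyquist frequency more strongly than the forward difference, which is consistent with the smaller "gradient" values in Table 4.

```python
import numpy as np

def dP_variants(p, fs):
    """The two discretizations of eq. (3) compared in Table 4."""
    p = np.asarray(p, dtype=float)
    rms = lambda d: float(np.sqrt(np.mean(d ** 2)))
    d_diff = np.diff(p) * fs        # forward difference between adjacent samples
    d_grad = np.gradient(p) * fs    # central difference over two samples
    return rms(d_diff), rms(d_grad)
```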
6. Conclusions

This work has considered the presence of pulsed power loads (PPLs) in DC grids, with a significant impact on network voltage variations and possibly on network stability. For the assessment of the impact and the rating of various design solutions and implementations, suitable power quality metrics should be identified and applied. Such metrics have been discussed, focusing in particular on the quantification of network transients and accounting for the typical profile of the absorbed power, named power trajectory. These metrics may be divided into those weighting the characteristics of pulses and transients (area, energy, mean duration) and those dealing with characteristics that have historically been assigned to steady phenomena (ripple and distortion). The metrics have been applied to voltage and current waveforms, but also to the power trajectory itself, as recently proposed in MIL-STD-1399-300 [26] for AC grids onboard US Navy ships. It is interesting to note how, in one case of a pulsed waveform with increasing distortion, the power curve anticipates and allows the detection of this phenomenon more effectively than processing the voltage or current waveforms alone. The use and processing of the power trajectory is particularly suited for PPLs and opens new possibilities of revisiting "traditional" PQ metrics with interesting performances and results. The better informative content of power with respect to voltage and current taken separately was pointed out in [46], when investigating internal and external sources of harmonic emissions in AC railways.

References

[1] Y. Luo, S. Srivastava, M. Andrus, D. Cartes, Application of disturbance metrics for reducing impacts of energy storage charging in an MVDC based IPS, IEEE Electric Ship Technologies Symposium (ESTS), Arlington, VA, USA, 22-24 April 2013, pp. 287-291. DOI: 10.1109/ESTS.2013.6523748
[2] S. Whaite, B. Grainger, A. Kwasinski, Power quality in DC power distribution systems and microgrids, Energies 8 (2015), pp. 4378-4399. DOI: 10.3390/en8054378
[3] V. A. Prabhala, B. P. Baddipadiga, P. Fajri, M. Ferdowsi, An overview of direct current distribution system architectures and benefits, Energies 11 (2018), no. 2463, 20 pp. DOI: 10.3390/en11092463
[4] B. R. Shrestha, U. Tamrakar, T. M. Hansen, B. P. Bhattarai, S. James, R. Tonkoski, Efficiency and reliability analyses of AC and 380 V DC distribution in data centers, IEEE Access 6 (2018), pp. 63305-63315. DOI: 10.1109/ACCESS.2018.2877354
[5] S. G. Jayasinghe, L. Meegahapola, N. Fernando, Z. Jin, J. M. Guerrero, Review of ship microgrids: system architectures, storage technologies and power quality aspects, Inventions 2 (2017), pp. 1-4. DOI: 10.3390/inventions2010004
[6] K. Kim, K. Park, G. Roh, K. Chun, DC-grid system for ships: a study of benefits and technical considerations, Journal of International Maritime Safety, Environmental Affairs, and Shipping 2 (2018), pp. 1-12. DOI: 10.1080/25725084.2018.1490239
[7] X. J. Shen, Y. Zhang, S. Chen, Investigation of grid-connected photovoltaic generation system applied for urban rail transit energy-savings, IEEE Industry Applications Society Annual Meeting, Las Vegas, NV, USA, 7-11 October 2012, pp. 1-4. DOI: 10.1109/IAS.2012.6373995
[8] A. Hinz, M. Stieneker, R. W. De Doncker, Impact and opportunities of medium-voltage DC grids in urban railway systems, 18th European Conf. on Power Electronics and Applications (EPE Europe), Karlsruhe, Germany, 5-9 September 2016, pp. 1-10. DOI: 10.1109/EPE.2016.7695410
[9] P. Ferrari, A. Mariscotti, P. Pozzobon, Reference curves of the pantograph impedance in DC railway systems, IEEE Intern. Conf. on Circuits and Systems, Geneva, Switzerland, 28-31 May 2000, pp. 555-558. DOI: 10.1109/ISCAS.2000.857155
[10] P. Shamsi, B. Fahimi, Stability assessment of a DC distribution network in a hybrid micro-grid application, IEEE Trans. Smart Grid 5 (2014), pp. 2527-2534. DOI: 10.1109/TSG.2014.2302804
[11] J. M. Crider, S. D. Sudhoff, Reducing impact of pulsed power loads on microgrid power systems, IEEE Trans. Smart Grid 1 (2010), pp. 270-277. DOI: 10.1109/TSG.2010.2080329
[12] M. Steurer, M. Andrus, J. Langston, L. Qi, S. Suryanarayanan, S. Woodruff, P. F. Ribeiro, Investigating the impact of pulsed power charging demands on shipboard power quality, Proc. of IEEE Electric Ship Technologies Symposium, Arlington, VA, USA, 21-23 May 2007, pp. 315-321. DOI: 10.1109/ESTS.2007.372104
[13] European Synchrotron, EBS storage ring technical report, September 2018. Online: https://www.esrf.eu/files/live/sites/www/files/about/upgrade/documentation/design%20report-reduced-jan19.pdf
[14] H. Ebrahimi, H. El-Kishky, M. Biswass, M. Robinson, Impact of pulsed power loads on advanced aircraft electric power systems with hybrid APU, IEEE Intern. Power Modulator and High Voltage Conf. (IPMHVC), San Francisco, CA, USA, 6-9 July 2016, pp. 434-437. DOI: 10.1109/IPMHVC.2016.8012857
[15] J. J. A. van der Burgt, P. van Gelder, E. van Dijk, Pulsed power requirements for future naval ships, Proc. of 12th IEEE Intern. Pulsed Power Conf., Monterey, CA, USA, 27-30 June 1999, pp. 1357-1360, vol. 2. DOI: 10.1109/PPC.1999.823779
[16] G. Guidi, S. D'Arco, J. A. Suul, A modular and distributed grid interface for transformer-less power supply to road-side coil sections of dynamic inductive charging systems, IEEE PELS Workshop on Emerging Technologies: Wireless Power Transfer, London, UK, 18-21 June 2019, pp. 318-323. DOI: 10.1109/WoW45936.2019.9030614
[17] S. Jung, H. Lee, C. S. Song, J.-H. Han, W.-K. Han, G. Jang, Optimal operation plan of the online electric vehicle system through establishment of a DC distribution system, IEEE Trans. Power Electron. 28 (2013), pp. 5878-5889. DOI: 10.1109/TPEL.2013.2251667
[18] A. Mariscotti, Electrical safety and stray current protection with platform screen doors in DC rapid transit, IEEE Trans. Transp. Electrif. (2021) (in print). DOI: 10.1109/TTE.2021.3051102
[19] A. Mariscotti, Overview of the requisites to define steady and transient power quality indexes for DC grids, Proc. of 24th IMEKO TC4 Intern. Symp., Palermo, Italy, 14-16 September 2020. Online: https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-23.pdf
[20] G. Van den Broeck, J. Stuyts, J. Driesen, A critical review of power quality standards and definitions applied to DC microgrids, Applied Energy 229 (2018), pp. 281-288. DOI: 10.1016/j.apenergy.2018.07.058
[21] IEEE Std. 45.1, IEEE Recommended Practice for Electrical Installations on Shipboard - Design, 2017.
[22] IACS, Electrical and Electronic Installations - E5: Voltage and Frequency Variations, 2019.
[23] MIL-STD-704F, Aircraft Electric Power Characteristics, w/Change 1, 2016.
[24] EN 50155, Railway Applications - Rolling Stock - Electronic Equipment, 2017.
[25] ITU-T Std. L.1200, Direct Current Power Feeding Interface up to 400 V at the Input to Telecommunication and ICT Equipment, 2012.
[26] MIL-STD-1399-300, Interface Standard Section 300, Part 1: Low Voltage Electric Power, Alternating Current, 2018.
[27] EN 61000-4-17, Electromagnetic Compatibility - Part 4-17: Testing and Measurement Techniques - Ripple on D.C. Input Power Port Immunity Test, 2009.
how the covid-19 changed the hands-on laboratory classes of electrical measurement

acta imeko issn: 2221-870x december 2022, volume 11, number 4, 1-7

jakub svatos1, jan holub1
1 czech technical university in prague, faculty of electrical engineering, department of measurement, technicka 2, 166 27, prague 6, czechia

section: research paper
keywords: covid-19; laboratory class; measurement; distance teaching
citation: jakub svatos, jan holub, how the covid-19 changed the hands-on laboratory classes of electrical measurement, acta imeko, vol. 11, no. 4, article 5, december 2022, identifier: imeko-acta-11 (2022)-04-05
section editor: eric benoit, université savoie mont blanc, france
received june 28, 2022; in final form november 24, 2022; published december 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: j. svatos, e-mail: svatoja1@fel.cvut.cz

abstract: this article depicts how covid-19 affected practical teaching at the czech technical university in prague, faculty of electrical engineering, department of measurement. it introduces the subject of electrical measurement and its hands-on laboratory classes; the course applies to more than 100 undergraduate students per semester. since covid-19 mainly affected the teaching of the hands-on laboratory courses, the article narrates how the covid-19 lockdown changed these classes. the experience with the rapid transformation of lectures during the lockdown, focusing on the hands-on laboratory classes, is discussed, together with the improvements made to preserve the course objectives while also adapting the classes to another possible lockdown and the need to switch to distance teaching. the changes have been inspired by the results of a survey carried out among more than 250 students. the outline of the laboratory task transformation, with an emphasis on the possibility of switching real face-to-face measurement to distance-teaching remote lectures, is described. the change includes reducing the number of laboratory tasks taught while preserving most of the objectives. all 11 new laboratory tasks are described in detail, and it is discussed how the possible change from hands-on to remote classes will affect every task.

1. introduction

the covid-19 pandemic suddenly paralyzed universities all around the world. it affected face-to-face education and dramatically changed the way of teaching. during the covid-19 lockdown, universities switched to distance teaching and cancelled face-to-face education.
face-to-face education has its specifics: it supports student-student and student-lecturer interaction to promote better engagement, motivation, and study results [1] – [4]. moreover, practically oriented universities (e.g., engineering, health care, etc.) lost their capability to educate students hands-on. all hands-on classes were required to move exclusively to the distance form and fully employ online education, which challenged educators to adapt and transform all the courses. the impact of switching to distance teaching is perceptible to both students and teachers, and many questions have arisen about keeping the education quality [5] – [9]. covid-19 struck hard at the czech technical university in prague, faculty of electrical engineering, department of measurement, where the switch to the distance form had to be made practically from one day to the next. one of the subjects affected most by distance learning was electrical measurement and its laboratory classes. electrical measurement is a compulsory subject in electronics and communications and in biomedical engineering and is available to all the faculty students in both czech and english. the course is taught to undergraduate students and applies to about 100 students per semester. the electrical measurement subject introduces different measurement methods for electrical and physical quantities like voltage, current, power, frequency, resistance, capacitance, or inductance, explained together with the principles of their correct application and accuracy estimation. the subject also introduces the basic principles of electronic measuring instruments like multimeters, oscilloscopes, or spectral analyzers and explains the fundamentals of magnetic measurements and essential information concerning measurement systems. since the laboratory class is primarily a hands-on activity, switching to the distance form was different from conventional classroom-based teaching: the lecturers had to react quickly to provide uninterrupted lectures while maintaining the quality.
according to [10], most students appreciated flexibility and quick reaction. laboratory exercises, on the contrary, often required a complete redesign of the measuring tasks to enable remote access and control of the measuring process. practically oriented courses are most often conducted in laboratories, with the assistance of lecturers. delivering the hands-on laboratory classes' learning process requires much more than standard online learning [11]. since specific laboratory skills (e.g., the connection of an electronic circuit or handling the equipment) are harder to deliver in online teaching, a partial redesign of the laboratory classes must be done [12]. a paper from the times before the pandemic describes remote laboratories explicitly: in [13], laboratory classes are categorized into three types of laboratory exercises:
a) development lab: students address specific design questions and verify that the design works as intended.
b) research lab: an addition to theoretical knowledge.
c) educational lab: students apply theoretical knowledge in order to gain practical experience.
at the same time, the authors admit that remote laboratories cannot replace all of the experience gained in face-to-face laboratories. another paper [14] performed a literature review of "hands-on, simulated, and remote laboratories," where the authors concluded, for remote laboratories, that the more significant the computer-student interface is, the easier the task is to perform online. this supports articles [15] and [16], which describe a methodology based on simulation software, supplemented, if possible, with remote hands-on experiments. the advantage was that the students could compare the simulated and hands-on results. the main disadvantage was that only one student at a time could be connected, and the remote laboratory had to be booked in advance. a different approach is used in [17] or [18]. both articles describe the teaching of electronics using pre-prepared kits. these kits were physically sent directly to students so that they could perform all experiments at home. every kit comprises a microcontroller, the necessary electrical components, and a breadboard with assembly guidance. the most complex approach, which can be applied to the class of electrical measurement, is described in [19]. the authors divided the learning process into five steps:
1. preparation for online labs: the explanation of the topic basics and a short, simple multiple-choice test to pass.
2. pre-lab: developing an understanding of the critical points from theory.
3. in-lab: measurement, processing, presentation, and uncertainties.
4. post-lab: performing calculations, making conclusions, and understanding the concepts in a science and engineering context.
5. marking: grade distribution, analyzing the comparison with face-to-face lectures.
this article describes the challenges associated with distance teaching of lectures on electrical measurement and hands-on laboratory classes and indicates changes in the post-covid-19 era. the paper is organized as follows. section 2 introduces the class of electrical measurement and its lectures. section 3 describes its laboratory class during the covid-19 lockdown and shows the students' feedback results. section 4 focuses on how covid-19 changed the laboratory class and its tasks.

2. lectures of electrical measurement class

as covid-19 struck suddenly, all the given lectures had to be quickly transformed into a distance form. the universities offered online courses and teaching classes before the pandemic.
still, it had never happened that the whole year had to be taught in a distance form without personal contact or consultations for all students [20], [21]. when the lockdown occurred, the lecturers could not even get to the classroom to provide the lecture. therefore, rapid actions had to be taken to preserve the classes. the best and, in practice, the only tolerable way to continue the lectures almost without interruption proved to be recording the lessons in advance. the lecture can be offered, if there is no possibility to attend the university (i.e., lockdown or quarantine), as a recorded presentation in suitable software (e.g., ms powerpoint or similar), where tools like a pointer, marker, or drawing, and other features are available. in this case, recording the lecture in advance and allowing the students to watch the video with the lecture before the originally scheduled time proved to be the best solution. in this way, the students can prepare their questions or topics for a discussion, and at the originally scheduled time, the lecturer is available for remote consultation with all participants. the survey pointed out the value of the choice to watch a pre-recorded lecture and use it as study material. the anonymous survey was conducted among 269 bachelor program students (approx. 75 % men; the average age of the respondents was about 21 years) in the second year of study (third semester). the results for the first survey question, "was the opportunity for you to watch a pre-recorded lecture a benefit?", are shown in figure 1. an essential aspect of recorded lectures was the availability of the lecturer 'online' at the originally scheduled time to offer the lecture again and to discuss all the problems raised through the pre-watched video. the survey results for the second question, "was the opportunity for you to discuss a lecture with the lecturer online at the scheduled time a benefit?", are shown in figure 2. the student survey showed an important finding, which has to be considered to improve lectures, not only during the lockdown.

figure 1. survey results for "was the opportunity for you to watch a pre-recorded lecture a benefit?" (definitely yes 45 %, rather yes 38 %, no 3 %, don't know 14 %).

the recorded lectures can also be used during face-to-face classes to support students who need to repeat the problem or cannot attend the lecture. this was also verified during the first "face-to-face" semester after the covid-19 lockdown, as an excellent complement to face-to-face lectures. on the other hand, the possibility of watching a recorded lecture could dramatically decrease the number of students who attend the face-to-face lecture. therefore, all the lectures of the electrical measurement class are available only during the examination period on the youtube channel of the department of measurement. moreover, to help the students orient themselves in the lectures and topics, all the lectures were divided into sub-topics to decrease the length of the individual videos.

3. laboratory class of electrical measurement during the covid-19 lockdown

the laboratory class of the subject is focused on hands-on measurement tasks for various electrical quantities and their evaluation. it shows the students several measurement methods for the given topics and clarifies the most suitable techniques for specific measurements.
moreover, it also draws attention to possible systematic measurement errors that may be unknowingly made. before covid-19, students had to perform real hands-on measurements for 17 laboratory tasks and complete reports on all the measurements. every task had multiple parts, and a student had to connect several schematics to get the measurement results. when covid-19 struck, and the university was closed almost immediately, the question arose of how to maintain laboratory teaching even in a distance form. the crucial requirement was to continue with the hands-on laboratories without interruption. as the quickest solution that kept the learning tolerable, tasks pre-recorded by the teacher and offered to students to watch were considered a compromise between continuing the teaching and, at the same time, providing at least part of the practical experience. pre-recorded laboratory tasks cannot substitute for hands-on experience. on the other hand, a quick response and some compromise are crucial for preserving even hands-on-type classes. this way, students can watch the pre-recorded tasks anytime, every week, prepare the reports, and discuss and present the achieved results during the originally scheduled class. during the online meeting, the main idea of the task was explained again, discussed, and theoretically clarified. there are different ways to handle hands-on distance teaching, for example, securing remote access to a laboratory where the tasks are already connected [22] – [25] or using batch-mode remote-controlled laboratories, like the virtual instrument systems in reality [26], [27]. however, in classes with significant numbers of students and many tasks to be carried out, this demands many measurement instruments. it also places demands on the personnel, who must prepare new sets of tasks every second week. it is worth mentioning that synchronizing the remote access so that all the students can perform the tasks can be challenging in such a short time. moreover, in some of the tasks, there is a need to reconnect the schematics during the measurement. all these requisites were taken into account. how covid-19 affected the electrical measurement subject is shown by the statistics of the students' success rate. in the times before covid-19, the subject's average success rate was slightly more than 85 %. when covid-19 struck and the class had to change from face-to-face hands-on laboratories to a distance form in the first fifth of the semester, the success rate of the class was over 92 %; this is explained by the fact that the assessment requirements were decreased. nevertheless, the success rate during the covid-19 lockdown (at ctu, it lasted the following two semesters) dropped to 84 % and 83 %, respectively, for the classes that studied the subject for the whole semester in a distance form. moreover, the average student grade of the class varied from 2.04 (average over the last three semesters before covid-19) to 2.54 during covid-19 (three semesters), where 1.00 is equal to the a grade and 3.0 is equivalent to the e grade according to the european credit transfer and accumulation system (ects). clearly, the subject had to be changed for the future, both in case of another lockdown and to reflect the student survey. according to the feedback, the main issue was the number of tasks and the repetitiveness of the reports on the tasks, which can demotivate the students. the results from the survey are in the following figures.
one of the survey questions was: "was the number of tasks optimal for you?"; the results are shown in figure 3. figure 4 presents the results for the question "were the recorded videos an adequate substitution for hands-on classes?". surprisingly, these results were not conclusive. another question, "were the recorded videos beneficial, or would you prefer a different method?", revealed the students' satisfaction with the chosen method. according to almost 80 % of all subject students, the pre-recorded videos of all the tasks were an adequate substitution considering the covid-19 lockdown situation. the results are shown in figure 5.

figure 2. survey results for "was the opportunity for you to discuss a lecture with the lecturer online at the scheduled time a benefit?" (definitely yes 32 %, rather yes 50 %, no 7 %, don't know 11 %).

figure 3. survey results for "was the number of tasks optimal for you?" (yes 5 %, too much 93 %, too little 0 %, don't know 2 %).

figure 4. survey results for "were the recorded videos an adequate substitution for hands-on classes?" (yes 30 %, no 56 %, don't know 14 %).

figure 5. survey results for "were the recorded videos beneficial, or would you prefer a different method?" (yes 79 %, other method 10 %, don't know 11 %).

the other part of the survey, where the students had space to express what they liked and did not like, indicated the direction in which the subject should be improved. the majority of the students addressed the repetitive reports on the tasks. the survey results and the overall perception of the subject led to the need to reduce the number of tasks and compress the content into a smaller number of tasks while preserving the subject's scope. the survey also showed that the teaching during the covid-19 lockdown was appropriate and substituted the hands-on laboratory class well under the given circumstances.

4. laboratory class of electrical measurement after the covid-19 lockdown

as stated in the previous section, the laboratory classes had to be redesigned so that they preserve the content taught in the subject and can be effectively conducted even during a lockdown period. repetitiveness was a factor often mentioned by the students. the laboratory supervisors accepted that a thorough understanding of the principles demonstrated by the measurement experiment is far more important than gaining skill through excessive repetition of the same drill steps. moreover, most repetitive measurement tasks are fully automated today in professional environments. therefore, the number of tasks has been reduced from the original 17 real measurement tasks to 11 new real measurement tasks. this step should increase the motivation of the students to complete the course successfully. moreover, some new tasks have been designed to minimize the number of instruments used, reduce the wiring diagrams, and fulfil the needs of modern measurement applications in industry [28] and [29]. in this way, in case of pandemics and lockdowns, the newly designed tasks can easily be modified for remote laboratories. article [30] stresses the lack of coherent learning objectives for laboratory classes and how this limits their effectiveness. the authors proposed learning goals for instructional laboratories and suggested future research. the learning objectives of all tasks have been considered and worked on to motivate the students and to meet all intended educational objectives.
the new tasks are:
• measurement of non-sinusoidal voltages with multimeters and a digital oscilloscope with a probe – a task that shows students the principles of voltage measurement using multimeters based on different principles and an oscilloscope with an oscilloscope probe. the effective values of non-harmonic voltages are explained for true-rms and rectified-mean principle voltmeters (a numerical illustration of this difference is sketched after the task list below). moreover, the compensation of the oscilloscope probe is shown. through the measurement, students reveal the limitations of "cheap" multimeters in the measurement of non-harmonic voltages and compare the measurement error of such instruments with the true-rms instrument. since all the multimeters can be set before the measurement and only the measured signal is changed, this part of the task can be done remotely in the case of distance teaching. in the case of the scope probe compensation, in the distance form, students skip this part and only theoretically derive the condition of a correctly compensated probe.
• measurement of dc currents – a task that explains various methods to correctly measure small and large currents. it introduces a current-to-voltage convertor with an operational amplifier (oa) and shows the methodological error of measurement with multimeters with non-zero inner resistance. in the case of small-current measurement, the experiment points out the importance of knowing the ammeters' real inner resistance and, by calculation, shows the possibility of a large measurement error compared with the "ideal" current-to-voltage convertor with an oa. it also explains the contactless measurement of large currents using a clamp meter without breaking the circuit during the measurement. the task has been re-designed to minimize the reconnection of the circuit. therefore, in the case of distance teaching, the small current is measured on two ranges of one ammeter to show the increasing inner resistance on the lower ranges.
• measurement of small dc voltages – a task that explains measuring amplifiers with oas and their real features like input offset voltage, input resistance, etc. the measurement uncertainties are also explained. the students have to measure the output voltage of a thermocouple with a small inner resistance. the difference between inverting and non-inverting amplifiers with the oa is shown. students calculate and experimentally verify how the output voltage is affected by the loading caused by the amplifier's input resistance. moreover, it is demonstrated by measurement how the input offset voltage of the oa can affect the resulting uncertainty. in the case of distance teaching, students have to measure only the inverting amplifier, and for the uncertainty calculation, the catalogue value of the input offset voltage is used.
• frequency dependence of multimeters for sinusoidal and rectangular waveforms – a task that explains the frequency dependency of multimeters' input circuits, their cut-off frequency and bandwidth, and the effects of higher harmonics for rectangular waveforms.
for instance, article [31] compares simulations (virtual lab) and remote experiments with straightforward experiments dealing with the frequency characteristics of real instruments. in this task, students have to measure the frequency bandwidth of four different voltmeters: one laboratory voltmeter and three "cheaper" voltmeters, a hand-held one included. the resulting frequency characteristics are then discussed. in the distance form, the task is unchanged, since most instruments can be controlled remotely while the camera captures the hand-held one.
• frequency and period measurement by a counter – introduces the principles of time and period measurement, showing the possibility of inaccurate measurement caused by an incorrectly set trigger level, and the principle of averaging. in this task, a counter developed in the late 80s is used, since it can best demonstrate the principle of time and period measurement: it allows manually setting the time the gate is open in the case of the frequency measurement, and it allows manually setting the averaging for the period measurement. on the other hand, in the case of distance teaching, it cannot be used, since it cannot be controlled remotely. in other words, this task is the one most affected by distance teaching. as modern counters work fully automatically, they cannot substitute for the counter used. therefore, the task is skipped in the case of distance teaching, and only a recorded video is provided.
• single-phase load power measurement – a power measurement task involving an electrodynamic wattmeter, a power analyser, and a clamp meter. it also explains the measurement current transformer. students experimentally check that the current transformer is not overloaded and use it in the measurement circuit to measure the power. the electrodynamic wattmeter measures power consumption, and the constant of the wattmeter is explained. moreover, the power is simultaneously measured by the power analyser and a clamp meter. students have to measure the thd and power factor. in the case of distance teaching, the task is reduced by the clamp meter measurement.
• sampling principle, digital oscilloscope, and programmable waveform generator – a task with advanced measurement with the oscilloscope, verification of the sampling theorem, the principle of an arbitrary generator, and work with the oscilloscope trigger. it is a purely hands-on task focused on the skill of handling the instruments. similarly to the task dealing with frequency and period measurement by a counter, this task cannot be handled remotely, and only a recorded video is offered.
• resistance measurement – a complex task explaining the problems of measuring resistances, focusing on small resistances, the 4-wire connection, thermoelectric voltages, and series comparison methods, and introducing a resistance-to-voltage convertor with an oa. students must experimentally verify the different techniques used for resistance measurement, their limitations (with respect to the resistance magnitude), and their advantages. the task can be measured remotely unchanged.
• unbalanced wheatstone bridge evaluation of the resistance change of a pt1000 temperature sensor – a task showing the principles of wheatstone bridges, analogue processing of signals from resistance sensors, the linearity of voltage- and current-supplied bridges, and a linearized bridge with an oa.
similarly to the resistance measurement task, students experimentally verify the principle of the bridges and, through data analysis, calculate their linearity. this task, too, can be measured remotely unchanged.
• digital impedance and admittance meter – a laboratory measurement explaining the principle of a vector voltmeter, the measurement of the real and imaginary parts of a complex voltage, the calculation of the rectified mean voltage and the effective value of the voltage, and measurement using a digital impedance meter. by completing this task, the students understand the principles of capacitance and inductance measurement and the calculation of their serial and parallel equivalent schemes. the task can be measured, if needed, completely remotely.
• magnetic measurements – a task that introduces the basics of magnetism, the measurement of the leakage magnetic field of a transformer, hysteresis loop measurement, and the calculation of amplitude permeability. in the first part, students have to experimentally determine the constant of the measuring coil using the helmholtz coils and then use the measuring coil to measure the leakage magnetic field of a transformer. in the second part, students measure the amplitude permeability of a ring specimen as a dependency on the magnetic field strength. in the case of the distance form, the first part has to be skipped, and only the second part is measured remotely.
all the tasks are taught as real measurements during the face-to-face semester, but in the case of distance teaching, most of the tasks are designed to be transformed into remote laboratory classes. in addition, every task will be complemented by an instructional video to help students understand the main objectives better and to demonstrate the connection/wiring of the whole measurement circuit. apart from reducing the number of laboratory tasks, the repetitiveness of certain measurement types has been significantly reduced by, e.g., decreasing the number of points at which the examined characteristics are measured or evaluating the measurement uncertainty on a subset of the measured points only. on the other hand, preserving the number of measuring method principles for a given topic (e.g., in resistance measurement: the series comparison method, the ohm method, the r → u converter method) has been crucial. part of each task is home preparation containing several questions that prepare students to understand the measurement's issues better. these questions are awarded extra points added to the final grading to motivate the students. as mentioned at the beginning of section 4, most laboratory class tasks have been redesigned to prepare them, in case of need, for a remote laboratory class. for this reason, as a supplement to basic instruments like, e.g., multimeters or arbitrary generators, a set of smart bench instruments with software to connect, control, and capture data remotely has been purchased. each smart bench consists of a smart triple power supply (edu36311a) and a two-channel oscilloscope with a digital multimeter and waveform generator (edux1052g). all the instruments used allow remote access and can be controlled remotely. therefore, the objective of instrumentation is, in a limited way, preserved even in the case of distance teaching. although the number of tasks has been significantly reduced, the demands on the measuring instruments are still considerable. therefore, the tasks will be prepared gradually in groups of two or three, and students will have two or three weeks to complete them.
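as promised in the first task above, the difference between a true-rms voltmeter and a rectified-mean instrument (scaled to read rms correctly for a sine wave) can be illustrated numerically. the sketch below compares the two readings for a square wave; the unit-amplitude waveform and the sine-wave form-factor scaling are textbook assumptions rather than the actual laboratory signals.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100_000, endpoint=False)  # one period of the test signal
square = np.sign(np.sin(2 * np.pi * t))              # unit-amplitude square wave

true_rms = np.sqrt(np.mean(square ** 2))             # what a true-rms meter reads
# a rectified-mean meter measures mean(|u|) and multiplies by the sine form
# factor pi/(2*sqrt(2)) ~ 1.111, so that it reads rms correctly for sine waves
rectified_reading = np.mean(np.abs(square)) * np.pi / (2 * np.sqrt(2))

error_pct = 100.0 * (rectified_reading - true_rms) / true_rms
print(f"true rms: {true_rms:.4f}, rectified-mean reading: {rectified_reading:.4f}")
print(f"reading error on a square wave: {error_pct:+.1f} %")  # about +11.1 %
```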
moreover, in the case of online teaching, the remotely measured tasks will also be captured by a camera to help the students better understand their principles, schematics, and connections. on the other hand, in the case of pre-recorded videos, the fundamental objectives of teamwork, ethics in the laboratory, and communication are essentially limited in all tasks. an example of the measuring tasks, "digital impedance and admittance meter," connected to the smart bench is shown in figure 6.

figure 6. measurement task: digital impedance and admittance meter prepared for remote measurement.

a camera continuously monitors the task to help the students orient themselves. by this, students can compare the connection with the circuit diagram and see the instruments used. in the following step, students have to connect to the instruments remotely and set the power supply, generator, digital multimeter, and scope. in this case, the measurement is performed in two steps. firstly, the real part of the output voltage is measured, and the corresponding waveform is displayed. in the second step, a phase difference of 90° has to be set to measure the imaginary part of the output voltage and the corresponding waveform. an integral part of the task, performed and controlled remotely, is a suitable method of remote access to the controlling computer or, if needed, directly to the controlled instruments. an overview of software technologies is presented in article [32]; it introduces the main characteristics of remote laboratories and their implementation on the client and server sides. on the other hand, the selected method must comply with the institution's it security policies and not only allow remote access (usually for several students at a time) to the task but also restrict access privileges to only those students who need to work on the task. based on this, the proposed solution uses the apache guacamole open-source remote desktop gateway, a platform primarily used by the it department of ctu in prague. this is supplemented with an interactive calendar, implemented in moodle, an open-source learning management system, where students register in advance and thus reserve access to a specific task at a time that is convenient for them and complete the task at any time. all the proposed changes have been incorporated since the winter semester of 2022, and the program board has received them positively. moreover, informal feedback from the students during the first semester taught in the changed form (winter 2022) indicates a step in the right direction. the home preparation helped the students better understand the essence of the experiment, and the reduced number of tasks lowered the students' workload and increased interest in a deep understanding of the discussed topics. therefore, it is believed this will motivate students to pass the subject successfully.
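as a sketch of how such a remotely performed task can be scripted, the snippet below uses the pyvisa library to connect to the bench oscilloscope, read the in-phase and quadrature voltage components of the two-step measurement described above, and convert them into an impedance with its series equivalent. the visa address, the measuring current, the frequency, and the scpi command strings are assumptions (the commands follow the generic keysight pattern and should be checked against the edu36311a/edux1052g manuals).

```python
import math
import pyvisa

SCOPE_ADDR = "TCPIP0::192.168.1.21::INSTR"  # hypothetical lan address of the scope
I_RMS = 1.0e-3                              # assumed rms measuring current in A
F_MEAS = 1.0e3                              # assumed measuring frequency in Hz

rm = pyvisa.ResourceManager()
scope = rm.open_resource(SCOPE_ADDR)
print(scope.query("*IDN?"))                 # identify the instrument first

# step 1: rms of the in-phase (real) component of the output voltage
v_re = float(scope.query(":MEASURE:VRMS? DISPLAY,AC,CHANNEL1"))
# step 2: after the 90 deg phase difference has been set, the quadrature part
input("set the 90 deg phase difference on the generator, then press enter...")
v_im = float(scope.query(":MEASURE:VRMS? DISPLAY,AC,CHANNEL1"))

z = complex(v_re, v_im) / I_RMS             # complex impedance in ohm
w = 2 * math.pi * F_MEAS
if z.imag >= 0:                             # inductive: series R-L equivalent
    print(f"|Z| = {abs(z):.1f} ohm, R_s = {z.real:.1f} ohm, L_s = {z.imag / w:.4e} H")
else:                                       # capacitive: series R-C equivalent
    print(f"|Z| = {abs(z):.1f} ohm, R_s = {z.real:.1f} ohm, C_s = {-1 / (w * z.imag):.4e} F")
scope.close()
```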
5. conclusion

the article describes how covid-19 changed the electrical measurement subject taught at ctu in prague, faculty of electrical engineering, department of measurement. the electrical measurement subject is hands-on laboratory-class-oriented, and the impact of covid-19 was radical. it has been shown how the classes were taught in the pre-covid-19 era and how the covid-19 lockdown struck them. the fast response and recorded laboratory tasks helped to overcome the lockdown acceptably. the student survey revealed the subject's shortcomings and showed how the laboratories had to change. all the old tasks have been redesigned and reduced to preserve the fundamental objectives. moreover, the reworked tasks have been designed with an emphasis on the possibility of switching to distance teaching. still, in the case of a distance form, the fundamental objectives of teamwork, ethics in the laboratory, and communication are essentially limited. all the adjustments have been made to maintain the quality, content, and objectives of the tasks even in the case of a possible future lockdown.

acknowledgement

the authors would like to thank all members of the department of measurement of fee ctu in prague for their patience and support during the lockdown, when much effort was spent to ensure high-quality teaching.

references
[1] t. wang, analysis of network teaching during covid-19, 2020 international conference on modern education and information management (icmeim), dalian, china, 25-27 september 2020, pp. 47-50. doi: 10.1109/icmeim51375.2020.00018
[2] h. wan, l. tang, z. zhong, q. cao, transit traditional face-to-face teaching to online teaching during the outbreak of covid-2019, 2020 ieee international conference on teaching, assessment, and learning for engineering (tale), takamatsu, japan, 8-11 december 2020, pp. 355-362. doi: 10.1109/tale48869.2020.9368330
[3] t. xu, insufficiency and improvement of online teaching in the context of the covid-19 pandemic, 2020 international conference on modern education and information management (icmeim), dalian, china, 25-27 september 2020, pp. 279-282. doi: 10.1109/icmeim51375.2020.00071
[4] s. arhun, a. hnatov, h. hnatova, a. patlins, n. kunicina, problems that have arisen in universities in connection with covid-19 on the example of the double degree master's program, electric vehicles, and energy-saving technologies, 2020 ieee 61st international scientific conference on power and electrical engineering of riga technical university, riga, latvia, 2020, pp. 1-6.
[5] i. chiyón, a. v. quevedo, s. vegas, j. c. mosquera, an evaluation method of the impact of an online teaching system on engineering students' satisfaction during the covid-19 lockdown, 2021 international symposium on accreditation of engineering and computing education (icacit), lima, peru, 4-5 november 2021, pp. 1-4. doi: 10.1109/icacit53544.2021.9612504
[6] t. ma, online teaching exploration and practice of intelligent control courses in the context of covid-19 epidemic, 2020 international conference on modern education and information management (icmeim), dalian, china, 25-27 september 2020, pp. 300-303. doi: 10.1109/icmeim51375.2020.00076
[7] m. konecki, impact of distance learning on motivation and success rate of students during the covid-19 pandemic, 2020 43rd international convention on information, communication and electronic technology (mipro), opatija, croatia, 28 september - 2 october 2020, pp. 813-817. doi: 10.23919/mipro48935.2020.9245406
[8] z. ping, l. fudong, s. zheng, thinking and practice of online teaching under covid-19 epidemic, 2020 ieee 2nd international conference on computer science and educational informatization (csei), xinxiang, china, 12-14 june 2020, pp. 165-167. doi: 10.1109/csei50228.2020.9142533
[9] f. j. garcía-peñalvo, r. r.-o. rector, m. j. rodríguez-conde, n. rodríguez-garcía, the institutional decisions to support remote learning and teaching during the covid-19 pandemic, 2020 x international conference on virtual campus (jicv), salamanca, spain, 3-5 december 2020, pp. 1-5. doi: 10.1109/jicv51605.2020.9375683
[10] j. roda-segarra, virtual laboratories during the covid-19 pandemic: a systematic review, xi international conference on virtual campus (jicv), salamanca, spain, 30 september - 1 october 2021, pp. 1-4. doi: 10.1109/jicv53222.2021.9600344
[11] s. m. mambo, f. makatia omusilibwa, effects of coronavirus pandemic spread on science, technology, engineering and mathematics education in higher learning institutions, 2020 ifees world engineering education forum - global engineering deans council (weef-gedc), madrid, spain, 16-19 november 2020, pp. 1-4. doi: 10.1109/weef-gedc49885.2020.9293679
[12] t. m. sonbuchner, e. c. mundorff, j. lee, s. wei, p. a. novick, triage and recovery of stem laboratory skills, journal of microbiology and biology education 22(2) (2021). doi: 10.1128/jmbe.v22i1.2565
[13] b. balamuralithara, p. woods, virtual laboratories in engineering education: the simulation lab and remote lab, comput. appl. eng. educ. 17 (2009), pp. 108-118. doi: 10.1002/cae.20186
[14] m. jing, j. nickerson, hands-on, simulated, and remote laboratories: a comparative literature review, acm computing surveys 38(3) (2006). doi: 10.1145/1132960.1132961
[15] n. kapilan, p. vidhya, x. gao, virtual laboratory: a boon to the mechanical engineering education during covid-19 pandemic, higher education for the future 8(1) (2021), pp. 31-46. doi: 10.1177/2347631120970757
[16] m. deepa, p. reba, g. santhanamari, n. susithra, enriched blended learning through virtual experience in microprocessors and microcontrollers course, journal of engineering education transformations 34 (2021), pp. 642-650. doi: 10.16920/jeet/2021/v34i0/157236
[17] j. svatos, j. holub, j. fischer, j. sobotka, online teaching of practical classes under the covid-19 restrictions, measurement: sensors 22 (2022), 100378. doi: 10.1016/j.measen.2022.100378
[18] o. g. mcgrath, learning on and at the edge: enabling remote instructional activities with microcontroller and microprocessor devices, siguccs '21, association for computing machinery, new york, usa, 14 march - 30 april 2021, pp. 16-22. doi: 10.1145/3419944.3440730
[19] k. bangert, j. bates, s. beck, et al., remote practicals in the time of coronavirus, a multidisciplinary approach, international journal of mechanical engineering education 50(2) (2022), pp. 219-239. doi: 10.1177/0306419020958100
[20] p. xu, research on an evaluation method for home-based network teaching quality based on analytic hierarchy process, 2020 international conference on modern education and information management (icmeim), dalian, china, 25-27 september 2020, pp. 110-114. doi: 10.1109/icmeim51375.2020.00032
[21] i. t. sanusi, s. a. olaleye, o. a. dada, teaching experiences during covid-19 pandemic: narratives from researchgate, 2020 xv conferencia latinoamericana de tecnologias de aprendizaje (laclo), loja, ecuador, 19-23 october 2020, pp. 1-6. doi: 10.1109/laclo50806.2020.9381133
[22] v. djalic, p. maric, d. kosic, d. samuelsen, b. thyberg, o. graven, remote laboratory for robotics and automation as a tool for remote access to learning content, 2012 15th international conference on interactive collaborative learning (icl), villach, austria, 26-28 september 2012, pp. 1-3. doi: 10.1109/icl.2012.6402174
[23] i. grout, remote laboratories to support electrical and information engineering (eie) laboratory access for students with disabilities, 2014 25th eaeeie annual conference (eaeeie), cesme, turkey, 30 may - 1 june 2014, pp. 21-24. doi: 10.1109/eaeeie.2014.6879377
[24] j. m. mio, d. c. espinoza, j. solis-lastra, implementation of remote access to a radiofrequency laboratory: a case of study, 2021 ieee engineering international research conference (eircon), lima, peru, 27-29 october 2021, pp. 1-4. doi: 10.1109/eircon52903.2021.9613440
[25] i. pavel, m. brânzilă, c. sărmășanu, c. donose, labview based control and monitoring of a remote test-bench experiment for teaching laboratories, 2021 international conference on electromechanical and energy systems (sielmen), iasi, romania, 6-8 october 2021, pp. 398-402. doi: 10.1109/sielmen53755.2021.9600348
[26] m. a. marques, m. c. viegas, m. c. costa-lobo, a. v. fidalgo, g. r. alves, j. s. rocha, i. gustavsson, how remote labs impact on course outcomes: various practices using visir, ieee transactions on education 57(3) (2014), pp. 151-159. doi: 10.1109/te.2013.2284156
[27] d. may, b. reeves, m. trudgen, a. alweshah, the remote laboratory visir - introducing online laboratory equipment in electrical engineering classes, 2020 ieee frontiers in education conference (fie), uppsala, sweden, 21-24 october 2020, pp. 1-9. doi: 10.1109/fie44824.2020.9274121
[28] l. angrisani, u. cesaro, m. d'arco, o. tamburis, measurement applications in industry 4.0: the case of an iot-oriented platform for remote programming of automatic test equipment, acta imeko 8 (2019) 2, pp. 62-69. doi: 10.21014/acta_imeko.v8i2.643
[29] l. mari, is our understanding of measurement evolving?, acta imeko 10 (2021) 4, pp. 209-213. doi: 10.21014/acta_imeko.v10i4.1169
[30] l. d. feisel, a. j. rosa, the role of the laboratory in undergraduate engineering education, journal of engineering education 94(1) (2005), pp. 121-130. doi: 10.1002/j.2168-9830.2005.tb00833.x
[31] m. v. branco, l. a. coelho, l. schlichting, i. gustavsson, m. a. marques, g. r. alves, differentiating simulations and real (remote) experiments, proceedings of the 5th international conference on technological ecosystems for enhancing multiculturality, association for computing machinery, new york, usa, 18-20 october 2017, pp. 1-7. doi: 10.1145/3144826.3145363
[32] j. garcia-zubia, p. orduna, d. lopez-de-ipina, g. r. alves, addressing software impact in the design of remote laboratories, ieee transactions on industrial electronics 56(12) (2009), pp. 4757-4767.
doi: 10.1109/tie.2009.2026368

uncertainty of factor z in the gravimetric volume measurement

acta imeko issn: 2221-870x september 2021, volume 10, number 3, 198-201

mar lar win1
1 national institute of metrology (myanmar) nimm, yangon, myanmar

section: research paper
keywords: density of air; density of water; value of factor z; uncertainty of factor z; gravimetric volume measurement
citation: mar lar win, uncertainty of factor z in the gravimetric volume measurement, acta imeko, vol. 10, no. 3, article 27, september 2021, identifier: imeko-acta-10 (2021)-03-27
section editor: andy knott, national physical laboratory, united kingdom
received: may 7, 2021; in final form: august 13, 2021; published: september 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: mar lar win, e-mail: marlarwin99@gmail.com

abstract: in the gravimetric volume measurement method, the factor z is generally used to facilitate an easy conversion from the apparent mass obtained using a balance to the liquid volume. the uncertainty of the measurement used for the liquid volume can be divided into two specific contributions: one from the components related to the mass measurements and one from those related to the mass-to-volume conversion. however, some iso standards and calibration guides have suggested that the uncertainty due to the factor z is generally neglected in the uncertainty calculation pertaining to gravimetric volume measurement. this paper describes the combined effects of the density of the water, the density of the reference weights, and the air buoyancy on the uncertainty of factor z in terms of how they subsequently affect the uncertainty of the measurement results.

1. introduction

the iso 4787 standard describes the general equation for the calculation of the volume at the reference temperature of 20 °c, $V_{20}$, from the apparent mass of the water, be it contained or delivered, as follows from [1], b.1:

$V_{20} = (I_\mathrm{L} - I_\mathrm{E}) \left( \dfrac{1}{\rho_\mathrm{w} - \rho_\mathrm{a}} \right) \left( 1 - \dfrac{\rho_\mathrm{a}}{\rho_\mathrm{b}} \right) \left[ 1 - \gamma (t - 20) \right] , \quad (1)$

where $I_\mathrm{L}$ is the balance reading for the vessel containing water (g), $I_\mathrm{E}$ is the balance reading for the empty vessel (g), $\rho_\mathrm{w}$ is the density (g/ml) of the water at temperature $t$, $\rho_\mathrm{a}$ is the density of air (g/ml), $\rho_\mathrm{b}$ is the actual density of the reference weights (8 g/ml), $\gamma$ is the coefficient of cubic thermal expansion of the vessel material (°c⁻¹), and $t$ is the temperature of the water used in the testing (°c). since this equation is somewhat complicated to work with, the factor $Z$ is provided to simplify the volume calculation:

$V_{20} = m\, Z \left[ 1 - \gamma (t - 20) \right] , \quad (2)$

where $m$ is the net weighed value $I_\mathrm{L} - I_\mathrm{E}$ and $Z$ is the conversion factor from the mass of the water to volume at the measurement temperature.
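as a quick illustration of (2), the snippet below converts a net balance reading into the volume at 20 °c. the numbers are example values only: the net mass is invented, z is taken from table 4 further below, and the cubic expansion coefficient is a typical order-of-magnitude figure for borosilicate glass; in a real calibration, z must be evaluated at the actual water temperature and pressure.

```python
def volume_20(m_g, z_ml_per_g, gamma_per_degc, t_degc):
    """volume at 20 degC from net mass m via eq. (2): V20 = m * Z * [1 - gamma*(t - 20)]."""
    return m_g * z_ml_per_g * (1.0 - gamma_per_degc * (t_degc - 20.0))

# example: 99.85 g of water at 21.5 degC in a borosilicate vessel
# z from table 4 (20 degC, 1013.25 hPa); gamma ~ 1e-5 /degC is an assumed typical value
v20 = volume_20(99.85, 1.002858, 1.0e-5, 21.5)
print(f"V20 = {v20:.4f} ml")
```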
2. determination of factor z

the factor $Z$ depends on the air density, the water density, the water temperature, and the density of the reference weights. the factor can be derived from (1) as follows:

$Z = \left( \dfrac{1}{\rho_\mathrm{w} - \rho_\mathrm{a}} \right) \left( 1 - \dfrac{\rho_\mathrm{a}}{\rho_\mathrm{b}} \right) . \quad (3)$

to determine the value of factor $Z$, the following parameters must be taken into account:
• density of air $\rho_\mathrm{a}$ in relation to atmospheric pressure, temperature and a relative humidity of 40 % - 90 %;
• density of water $\rho_\mathrm{w}$ in relation to temperature;
• density of the reference weights $\rho_\mathrm{b}$ of the balance used.

2.1. determining the air density

the air density $\rho_\mathrm{a}$ in kg m⁻³ can be determined with sufficient uncertainty from the following cipm approximation formula for air density [2], e.3-1:

$\rho_\mathrm{a} = \dfrac{0.348\,48\, P - 0.009\, h_r \, e^{0.061\, t}}{273.15 + t} , \quad (4)$

where $P$ is the atmospheric pressure (hpa), $h_r$ is the relative humidity (%), and $t$ is the atmospheric temperature (°c). table 1 presents the air density to temperature relationship at 1,013.25 hpa and a 50 % relative humidity.

2.2 determining the water density

the density of air-free water $\rho_{\mathrm{w}'}$ at a pressure of 1,013.25 hpa and a temperature $t$ between 0 °c and 40 °c expressed in terms of the its-90 is given by tanaka et al. [3] as follows:

$\rho_{\mathrm{w}'} = a_5 \left[ 1 - \dfrac{(t + a_1)^2 (t + a_2)}{a_3 (t + a_4)} \right] , \quad (5)$

where
$a_1 = (-3.983\,035 \pm 0.000\,67)$ °c
$a_2 = 301.797$ °c
$a_3 = 522\,528.9$ °c²
$a_4 = 69.348\,81$ °c
$a_5 = (0.999\,974\,950 \pm 0.000\,000\,84)$ g/ml
$t$ = water temperature in °c.

since the pure water that is used in the laboratory is generally air-saturated, the density must be corrected. to adjust the air-free water density as described in (5) to air-saturated water between 0 °c and 25 °c (the standard laboratory condition), we can use the following [3], 5.2:

$\Delta\rho = s_0 + s_1\, t , \quad (6)$

where $s_0 = -4.612 \times 10^{-3}$ kg m⁻³ and $s_1 = 0.106 \times 10^{-3}$ kg m⁻³ °c⁻¹. moreover, since water is slightly compressible, a small correction may be required under typical laboratory conditions. the compressibility factor $F_\mathrm{c}$ [3], 5.3, for the water density used at different pressures is as follows:

$F_\mathrm{c} = 1 + (k_0 + k_1\, t + k_2\, t^2)\, \Delta P , \quad (7)$

where $\Delta P = P - 101\,325$ pa, $k_0 = 50.74 \times 10^{-11}$ pa⁻¹, $k_1 = -0.326 \times 10^{-11}$ pa⁻¹ °c⁻¹, and $k_2 = 0.004\,16 \times 10^{-11}$ pa⁻¹ °c⁻². the density of air-saturated water is then

$\rho_\mathrm{w} = \rho_{\mathrm{w}'} \, F_\mathrm{c} + \Delta\rho , \quad (8)$

with $\rho_\mathrm{w}$ the density of air-saturated water and $\rho_{\mathrm{w}'}$ the density of air-free water. table 2 presents examples of water density as a function of water temperature for distilled water at 1,013.25 hpa after correction for any dissolved gases in the water. meanwhile, table 3 presents examples of calculated air-saturated water density as a function of water temperature and pressure after applying corrections for the dissolved gases in the water.
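formulas (3)-(8) are straightforward to script. the minimal sketch below implements the cipm air density approximation, the tanaka water density with the air-saturation and compressibility corrections, and the resulting factor z, reproducing the tabulated values to within rounding; validity ranges and the correction uncertainties are deliberately ignored.

```python
import math

def air_density(p_hpa, t_c, hr_pct):
    """cipm approximation (4); returns g/ml."""
    rho_kg_m3 = (0.34848 * p_hpa - 0.009 * hr_pct * math.exp(0.061 * t_c)) / (273.15 + t_c)
    return rho_kg_m3 / 1000.0

def water_density(t_c, p_hpa=1013.25):
    """air-saturated water density via (5)-(8); returns g/ml."""
    a1, a2, a3, a4, a5 = -3.983035, 301.797, 522528.9, 69.34881, 0.999974950
    rho_af = a5 * (1.0 - (t_c + a1) ** 2 * (t_c + a2) / (a3 * (t_c + a4)))  # air-free, (5)
    d_rho = (-4.612e-3 + 0.106e-3 * t_c) / 1000.0       # air-saturation correction (6), g/ml
    dp_pa = p_hpa * 100.0 - 101325.0
    fc = 1.0 + (50.74e-11 - 0.326e-11 * t_c + 0.00416e-11 * t_c ** 2) * dp_pa  # (7)
    return rho_af * fc + d_rho                          # (8)

def factor_z(rho_w, rho_a, rho_b=8.0):
    """conversion factor (3) in ml/g."""
    return (1.0 / (rho_w - rho_a)) * (1.0 - rho_a / rho_b)

rho_w = water_density(20.0)      # ~0.998204 g/ml, cf. table 2
rho_a = 0.001204                 # g/ml, air density value used in the paper's tables
print(f"rho_a via (4) = {air_density(1013.25, 20.0, 50.0):.7f} g/ml")
print(f"rho_w = {rho_w:.8f} g/ml")
print(f"Z(20 degC, 1013.25 hPa) = {factor_z(rho_w, rho_a):.6f} ml/g")  # ~1.002858, cf. table 4
```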
2.3 density of the reference weights

the density of the reference weights/mass pieces $\rho_\mathrm{b}$ that are used for the balance calibration is normally presented in the calibration certificate of the weights. alternatively, the uncertainties corresponding to the weight class used, according to oiml r 111-1 [2], can be applied. if the density of the reference weights is not known, 8.0 g/ml is generally used.

2.4 determining the factor z values

the values of the conversion factor $Z$ (ml/g) as a function of temperature and pressure given in the existing literature mainly pertain to distilled water. when liquids other than distilled water are used, the correction factors (factor $Z$) for the specific liquid must be determined. table 4 presents the factor z values in relation to temperature and pressure; the calculations were based on (3) and (8).

table 1. air density values as a function of temperature at 1,013.25 hpa and a 50 % relative humidity.
temperature in °c   density of air in g/ml
15                  0.001 225
20                  0.001 204
25                  0.001 184

table 2. water density values after correction for air-saturated distilled water.
water temperature (°c)   air-free water density (g/ml)   correction for air saturation (g/ml)   air-saturated water density (g/ml)
15                       0.999 102 57                    0.000 003 02                           0.999 099 55
20                       0.998 206 75                    0.000 002 49                           0.998 204 25
25                       0.997 047 02                    0.000 001 96                           0.997 045 06

table 3. air-saturated water density (g/ml) as a function of water temperature and pressure.
water temperature (°c)   at 900 hpa     at 1,013.25 hpa   at 1,050 hpa
15                       0.999 094 26   0.999 099 55      0.999 101 27
20                       0.998 199 07   0.998 204 25      0.998 205 94
25                       0.997 039 96   0.997 045 06      0.997 046 72

table 4. factor z values (ml/g) as a function of water temperature and pressure.
water temperature (°c)   at 900 hpa   at 1,013.25 hpa   at 1,050 hpa
15                       1.001 845    1.001 958         1.001 995
20                       1.002 745    1.002 858         1.002 895
25                       1.003 912    1.004 026         1.004 062
27                       1.004 448    1.004 562         1.004 599

3. the uncertainty of factor z

3.1 sources of uncertainty in factor z

having identified the input quantities of factor z using (3), it is possible to determine the sources of uncertainty pertaining to the different input quantities, which are:
• water density
• air density
• density of the reference weights.

the uncertainty of factor z can be determined using the gum law of propagation of uncertainty [4] as follows:

$u(Z) = \sqrt{ \left( \dfrac{\partial Z}{\partial \rho_\mathrm{a}}\, u(\rho_\mathrm{a}) \right)^2 + \left( \dfrac{\partial Z}{\partial \rho_\mathrm{w}}\, u(\rho_\mathrm{w}) \right)^2 + \left( \dfrac{\partial Z}{\partial \rho_\mathrm{b}}\, u(\rho_\mathrm{b}) \right)^2 } , \quad (9)$

where $u(\rho_\mathrm{a})$ is the standard uncertainty of the air density (g/ml), $u(\rho_\mathrm{w})$ is the standard uncertainty of the water density (g/ml), and $u(\rho_\mathrm{b})$ is the standard uncertainty of the density of the reference weights (g/ml).

3.2 uncertainty of air density

at a relative humidity of $h_r$ = 0.5 (50 %), a temperature of 20 °c and a pressure of 1,013.25 hpa, the uncertainty of the air density can be estimated according to oiml r 111 [2], c.6.3-3, as follows:

$\dfrac{u(\rho_\mathrm{a})}{\rho_\mathrm{a}} = \sqrt{ (1 \times 10^{-4})^2 + \left( -3.4 \times 10^{-3}\, u(t) \right)^2 + \left( 1 \times 10^{-3}\, u(P) \right)^2 + \left( -10^{-2}\, u(h_r) \right)^2 } , \quad (10)$

where $u(t)$ is the uncertainty of the air temperature $t$ (°c), $u(P)$ is the uncertainty of the air pressure $P$ (hpa), and $u(h_r)$ is the uncertainty of the relative humidity $h_r$ (%). from the calibration of the pressure, temperature and humidity sensors, the expanded uncertainties (k = 2) are 0.1 °c for the air temperature, 0.1 hpa for the barometric pressure, and 1 % for the relative humidity. this yields the following:

$u(\rho_\mathrm{a}) = 0.0012\ \mathrm{g/ml} \times \sqrt{ (1 \times 10^{-4})^2 + \left( -3.4 \times 10^{-3} \times \tfrac{0.1}{2} \right)^2 + \left( 1 \times 10^{-3} \times \tfrac{0.1}{2} \right)^2 + \left( -1 \times 10^{-2} \times \tfrac{0.01}{2} \right)^2 } = 2.52 \times 10^{-7}\ \mathrm{g/ml} .$
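the numeric evaluation of (10) above can be checked with a few lines; as in the text, the k = 2 sensor uncertainties are halved to standard (k = 1) uncertainties before being combined.

```python
import math

rho_a = 0.0012    # g/ml, air density at the stated conditions
u_t = 0.1 / 2     # degC, k=2 -> k=1
u_p = 0.1 / 2     # hPa
u_hr = 0.01 / 2   # relative humidity as a fraction (1 % at k=2)

u_rho_a = rho_a * math.sqrt((1e-4) ** 2 + (3.4e-3 * u_t) ** 2
                            + (1e-3 * u_p) ** 2 + (1e-2 * u_hr) ** 2)
print(f"u(rho_a) = {u_rho_a:.3e} g/ml")   # ~2.5e-7 g/ml, as in section 3.2
```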
3.3. uncertainty of water density

the uncertainty of the water density can be evaluated from the formulation given by euramet cg-19 [5], 7.3.3, as follows:

$u(\rho_w) = \sqrt{u^2(\rho_{w,f}) + u^2(\rho_{w,t}) + u^2(\delta\rho_w)}$ , (11)

where $u(\rho_{w,t}) = u(t_w)\,\beta\,\rho_w(t_w)$ and the uncertainty of the water temperature combines the thermometer calibration uncertainty and its resolution:

$u(t_w) = \sqrt{\left(\dfrac{U(\mathrm{ther})}{k}\right)^2 + u(\mathrm{res})^2} = \sqrt{\left(\dfrac{0.01\ \mathrm{°c}}{2}\right)^2 + \left(\dfrac{0.01\ \mathrm{°c}}{2\sqrt{3}}\right)^2} = 0.006\ \mathrm{°c}$ .

the volumetric expansion coefficient can be estimated as

$\beta = (-0.117\,6\,t^2 + 15.846\,t - 62.677) \times 10^{-6}\ \mathrm{°c^{-1}}$ ,

which at $t$ = 20 °c gives $\beta$ = 0.000 21 °c⁻¹. with $\rho_w(t_w)$ = 0.998 204 25 g/ml (from table 3), this yields $u(\rho_{w,t})$ = 0.006 × 0.000 21 × 0.998 204 25 ≈ 1.26 × 10⁻⁶ g/ml ≈ 1 × 10⁻⁶ g/ml.

the standard uncertainty $u(\delta\rho_w)$ could range from a few ppm for highly pure water (measured by means of a high-resolution density meter) to 10 ppm for distilled or de-ionised water, provided that the conductivity is less than 5 µs/cm; here it is assumed to be 5 × 10⁻⁶ g/ml. meanwhile, $u(\rho_{w,f})$ is the estimated standard uncertainty of the formulation (4.5 × 10⁻⁷ g/ml), $u(\rho_{w,t})$ is the uncertainty due to the temperature of the water (1 × 10⁻⁶ g/ml), and $u(\delta\rho_w)$ is the uncertainty due to the purity of the water (5 × 10⁻⁶ g/ml) [5], 8.2.3. this yields the following:

$u(\rho_w) = \sqrt{(4.5 \times 10^{-7})^2 + (1 \times 10^{-6})^2 + (5 \times 10^{-6})^2}\ \mathrm{g/ml} = 5.12 \times 10^{-6}\ \mathrm{g/ml}$ . (12)

3.4. uncertainty of the reference weights

the uncertainty of the density of the weights/mass pieces is given in the calibration certificate of the weights; alternatively, according to oiml r 111-1, the uncertainty of the density of stainless-steel weights of class e2 is 140 kg/m³ (k = 2). for example, for the value presented in the calibration certificate of the set of weights used, 0.06 g/ml (k = 2):

$u(\rho_b) = \dfrac{U(\rho_b)}{k} = \dfrac{0.06}{2}\ \mathrm{g/ml} = 0.03\ \mathrm{g/ml}$ . (13)

3.5. calculating the uncertainty of factor z

table 5 presents example data for the calculation of the uncertainty of factor z for air-saturated distilled water at a temperature of 20 °c and a barometric pressure of 1,013.25 hpa.

table 5. example data.
parameter        value                  uncertainty (k = 1)
air density      0.001 204 g/ml         2.52 × 10⁻⁷ g/ml
water density    0.998 204 g/ml         5.12 × 10⁻⁶ g/ml
mass density     8.0 g/ml               0.03 g/ml
factor z         1.002 858 ml/g (refer to the conditions in table 4)

the sensitivity coefficients are:

$\dfrac{\partial Z}{\partial \rho_a} = \dfrac{1}{\rho_w - \rho_a}\left(-\dfrac{1}{\rho_b}\right) + \left(1 - \dfrac{\rho_a}{\rho_b}\right)\dfrac{1}{(\rho_w - \rho_a)^2} = 0.880\,500\ \mathrm{ml^2/g^2}$

$\dfrac{\partial Z}{\partial \rho_w} = \left(1 - \dfrac{\rho_a}{\rho_b}\right)\dfrac{-1}{(\rho_w - \rho_a)^2} = -1.005\,876\ \mathrm{ml^2/g^2}$

$\dfrac{\partial Z}{\partial \rho_b} = \dfrac{1}{\rho_w - \rho_a} \cdot \dfrac{\rho_a}{\rho_b^2} = 1.887\,6 \times 10^{-5}\ \mathrm{ml^2/g^2}$

and the uncertainty of factor z (k = 1), from (9), is

$u(Z) = \sqrt{(0.880\,500 \times 2.52 \times 10^{-7})^2 + (-1.005\,876 \times 5.12 \times 10^{-6})^2 + (1.887\,6 \times 10^{-5} \times 0.03)^2}\ \mathrm{ml/g} = 5.2 \times 10^{-6}\ \mathrm{ml/g}$ .
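the whole propagation of eq. (9) with the table 5 inputs fits in a few lines; a minimal sketch using the analytic sensitivity coefficients of section 3.5:

import math

rho_a, rho_w, rho_b = 0.001204, 0.998204, 8.0   # g/ml (table 5)
u_a, u_w, u_b = 2.52e-7, 5.12e-6, 0.03          # standard uncertainties, g/ml

d = rho_w - rho_a
dZ_da = (1 / d) * (-1 / rho_b) + (1 - rho_a / rho_b) / d ** 2   # dZ/d(rho_a)
dZ_dw = -(1 - rho_a / rho_b) / d ** 2                           # dZ/d(rho_w)
dZ_db = (1 / d) * (rho_a / rho_b ** 2)                          # dZ/d(rho_b)

u_Z = math.sqrt((dZ_da * u_a) ** 2 + (dZ_dw * u_w) ** 2 + (dZ_db * u_b) ** 2)
print(dZ_da, dZ_dw, dZ_db)   # ~0.8805, ~-1.00588, ~1.888e-05 ml^2/g^2
print(u_Z)                   # ~5.2e-06 ml/g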
4. discussion

as shown in table 4, the variation of the factor z for distilled water due to the water temperature is approximately 2.0 × 10⁻³ ml/g, or 2.0 × 10⁻⁴ ml/(g °c), between 15 °c and 25 °c. this implies that when the uncertainty of the temperature measurement of distilled water is 0.1 °c, it will contribute an uncertainty of approximately 0.002 % after the conversion to volume. meanwhile, the influence of the barometric pressure on the factor z is approximately 1 × 10⁻⁴ ml/g, or 0.01 %, per pressure change of 100 hpa between 900 hpa and 1,050 hpa. the uncertainty of the pressure measurement in the laboratory is typically 5 hpa, which contributes an uncertainty of less than 0.000 5 % to the conversion to volume.

5. conclusions

the factor z depends not only on the density of the liquid, compensated for water temperature and pressure, but also on the air density and on the density of the weights used for the calibration of the balance. however, the contribution of the factor z to the uncertainty of gravimetric volume measurement is extremely small compared to that of the operator process (i.e., meniscus reading) [5], 8.2.7, and of the determination of the mass of water, provided that the measuring instruments (balance, barometer, thermometer, etc.) are used in accordance with the specifications given in the standard operating procedure (sop). thus, it is good practice to neglect the uncertainty of the factor z in the evaluation of uncertainty and to give only the components related to the mass measurement process [6].

acknowledgement

this paper would not have been possible without the exceptional support of bunjob suktat, my former expert from physikalisch-technische bundesanstalt (ptb) on the 'ptb-nimm strengthening myanmar quality infrastructure project'. his enthusiasm, knowledge, and minute attention to detail were an inspiration and ensured my work remained on track. as my teacher and mentor, he has taught me more than i could ever give him credit for here.

references

[1] iso 4787:2010, laboratory glassware – volumetric instruments – methods for testing of capacity and for use.
[2] oiml r 111-1, weights of classes e1, e2, f1, f2, m1, m2, m3 – metrological and technical requirements, 2004.
[3] m. tanaka, g. girard, r. davis, a. peuto, n. bignell, recommended table for the density of water between 0 °c and 40 °c based on recent experimental reports, metrologia 38 (2001), pp. 301-309. doi: 10.1088/0026-1394/38/4/3
[4] jcgm 100:2008 (gum), evaluation of measurement data – guide to the expression of uncertainty in measurement.
[5] euramet cg-19, version 3.0 (09/2018), guidelines on the determination of uncertainty in gravimetric volume calibration.
[6] iso 8655-6:2002(e), piston-operated volumetric apparatus – part 6: gravimetric methods for the determination of measurement error.

nilm techniques applied to a real-time monitoring system of the electricity consumption

acta imeko, issn: 2221-870x, june 2021, volume 10, number 2, 139-146

b. cannas1, s. carcangiu1, d. carta2, a. fanni1, c. muscas1, g. sias1, b. canetto3, l. fresi3, p. porcu3
1 diee, university of cagliari, via marengo 2, 09123 cagliari, italy
2 iek-10, forschungszentrum jülich, 52425 jülich, germany
3 bithiatech technologies, elmas (ca), italy

section: research paper

keywords: non-intrusive load monitoring (nilm); electricity consumption; energy disaggregation; features extraction; smart metering; blued dataset; proprietary dataset

citation: barbara cannas, sara carcangiu, daniele carta, alessandra fanni, carlo muscas, giuliana sias, beatrice canetto, luca fresi, paolo porcu, nilm techniques applied to a real-time monitoring system of the electricity consumption, acta imeko, vol. 10, no. 2, article 20, june 2021, identifier: imeko-acta-10 (2021)-02-20

section editor: giuseppe caravello, università degli studi di palermo, italy

received january 21, 2021; in final form march 16, 2021; published june 2021

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

funding: this work was partially funded by sardinian region under project "rt-nilm (real-time non-intrusive load monitoring for intelligent management of electrical loads)", call for basic research projects, year 2017, fsc 2014-2020.

corresponding author: daniele carta, e-mail: d.carta@fz-juelich.de

abstract: non-intrusive load monitoring (nilm) allows providing appliance-level electricity consumption information and decomposing the overall power consumption by using simple hardware (one sensor) with suitable software. this paper presents a low-frequency nilm-based monitoring system suitable for a typical house. the proposed solution is a hybrid event-detection approach including an event-detection algorithm for devices with a finite number of states and an auxiliary algorithm for appliances characterized by complex patterns. the system was developed using data collected at households in italy and also tested with data from blued, a widely used dataset of real-world power consumption data. results show that the proposed approach works well in detecting and classifying which appliance is working, and its consumption, in a complex household load dataset.

1. introduction

knowing how electric appliances are used and how different appliances contribute to the aggregate total consumption could help users better understand how their energy is consumed, possibly leading to a more efficient management of their loads. non-intrusive load monitoring (nilm) is an area of computational sustainability research, and it presently identifies a set of techniques that can disaggregate the power usage into the individual appliances that are functioning and identify the consumption of electricity for each of them [1]. in residential buildings, where it is impractical to monitor single appliances, or even groups of appliances, through specific meters, nilm techniques are a low-cost and non-invasive option for electric consumption monitoring, requiring a single monitoring point where a smart meter is installed. the literature reports several methods applied to this problem throughout the years. a first classification of nilm techniques distinguishes supervised and unsupervised methods [2]. supervised methods require labelled consumption data to train the model, whereas unsupervised methods extract the information needed to operate directly from the measured aggregate consumption profiles. due to their better performance, most approaches are based on supervised algorithms, and they require appliance data for model training in order to estimate the number, type and power of the loads by analysing the aggregate consumption signal. solutions based on machine learning range from classic supervised machine-learning algorithms (e.g., support vector machines and artificial neural networks) to supervised statistical learning methods (e.g., k-nearest neighbours and bayes classifiers) and unsupervised methods (e.g., hidden markov models and their variants). a review of these methods is reported in [2].
recently, deep learning (dl) methods have also been employed and seem promising for the most challenging problem posed by the consumption profiles of multi-state appliances [3]-[6]. the frequency of energy data monitoring drives the choice of analysis techniques and tools: although the higher the monitoring frequency, the higher the achievable accuracy of nilm disaggregation algorithms, commercial smart meters for homes supply low-frequency sampling (less than 60 hz) of the electric power quantities. in this field, the majority of the research efforts have focused on event-based techniques that identify significant variations in the power signals as switching events of appliances. these events must then be classified as a state transition of a specific appliance. for this purpose, electric signal characteristics extracted from measurements in proximity of the events (i.e., signatures) are used as distinctive features and then labelled with classification procedures. in this paper, a monitoring system is proposed using a smart meter that supplies low-frequency (1 hz) samples of power consumption. the system is able to disaggregate and keep track of the power consumption of the devices existing in a typical italian house. the households should follow the proposed procedure to customize the system for their homes, choosing the appliances of interest and collecting the corresponding measurements. the disaggregation algorithm is an improvement of the one proposed in [7]-[8]. the load disaggregation is performed by applying a hybrid approach to power data, i.e. event-based techniques plus pattern recognition techniques for large household appliances. moreover, the procedure of the first set-up phase has been simplified: the event detection for type-i and type-ii appliances, which was previously performed separately for each device, is now carried out on the aggregate power signals, strongly reducing the user effort. in order to validate the method, the procedure is also applied to the building-level fully-labeled (blued) dataset for electricity disaggregation [9]. blued is a publicly available big dataset consisting of real voltage and current measurements for a single-family residence in the united states, sampled at 12 khz for a whole week. blued has been used as a benchmark dataset in several recent papers on nilm [5], [10], [11], [12]. however, few contributions have been proposed with low-frequency samples; among them, [13] and [14] will be considered in this work as comparison.
2. the home energy management system

the home energy management system (hems) is used to provide a comfortable life for consumers as well as to save energy. this can be obtained by using a home's smart meter to monitor the electricity consumption of the devices existing in a household, applying nilm techniques. the hems should identify the appliances that are active at any time, disaggregate the energy and estimate the consumption of each single device. in this work, the chosen low-frequency smart meter is a sensor belonging to a pre-commercial prototype [7] which provides steady-state signatures, such as real and reactive power time series. it is important that the hems set-up is easy to understand and interact with; it should have features, like auto-configuration, which make the set-up process very easy. the non-intrusive technique, resorting to only one smart meter for the household, requires some effort in the first set-up phase, but it can sometimes be the only possible choice, because installing a specific monitoring infrastructure, including new devices and cables, may result in a high implementation cost for the user. in this work the appliances are categorized into four types based on their operation states [15]: 1) type-i, appliances with on/off states (binary power states); 2) type-ii, finite state machines with a finite number of operation states (multiple power states); 3) type-iii, continuously varying devices with variable power consumption, such as a washing machine or a light dimmer (infinitely many intermediate power states); 4) type-iv, appliances that stay on for days or weeks with a constant power consumption level. in the case of type-ii appliances, data for all the transitions between the possible states should be acquired and labelled (manual set-up). in the case of type-iii devices, the data from each cycle are characterized by complex patterns; in this paper, a technique to deal with such devices is proposed, considering the washing machine as a case study. figure 1 and figure 2 show the filtered apparent power consumption typical of different models of washing machines with different washing cycles. as can be noted, the power consumption fluctuates while heating/washing or rinsing/drying the laundry; thus, the events do not correspond to simple steps in the power consumption, but characteristic complex patterns appear in the power time series. heating the water accounts for about 90 % of the energy needed to run a washer, and in both figures the washing machine has electrical components which turn on and off in sequence. the proposed technique facilitates the training process by pre-populating the training data set with signatures of some type-iii devices showing typical patterns (automatic set-up).

figure 1. apparent power for a washing machine (hot water).
figure 2. apparent power for a washing machine (cold water).

3. the procedure

in the following, the implemented procedure is presented, showing how the variations of apparent (s), real (p) and reactive (q) power, the oscillation frequencies of the signals and the varying patterns of type-iii appliances are used in the training phase for the creation of the signatures database, whereas during a recall phase the appliances are recognized inside an aggregated signal. in order to better understand the whole procedure, figure 3 shows its flow chart with the main phases of data collection and extraction of the signatures.
3.1. data collection

the automatic procedure to collect the aggregate data of electric consumption consists of two steps: one for type-i and type-ii appliances, and one for type-iii appliances. in the first step, the type-iii appliances (such as the washing machine or dishwasher) are switched off, whereas type-i and type-ii appliances may work regularly. the user must switch the type-i and type-ii devices on and off several times for each event of interest. this increases the robustness of the data to noise, e.g., small fluctuations in appliance consumption, electronics constantly on, and appliances turning on/off with consumption levels too small to be detected. multi-state devices, such as kitchen ovens, stoves, clothes dryers, etc., go through several states where heating elements and fans can be switched in various combinations; thus, to collect all the events, the user must test all the possible transitions from one state to another. for instance, for an electric stove with three power levels (states) plus the off state, the user should trigger the twelve possible switching events between states. in the second step, the large type-iii appliances work alone and their consumption data are recorded. note that, for this work, data from type-i and type-ii appliances working alone have also been collected, in order to combine them and create several synthetic aggregate data series to test the system.

3.2. extraction of the signatures

once the aggregate data have been collected, the database of the signatures of each switching event is built. the data are normalized with respect to the constant voltage of 230 v, in such a way that the voltage drops due to the load insertion do not influence the result. moreover, a causal filtering is applied to the apparent, real and reactive power signals; in this way, possible spikes and outliers can be discarded or smoothed. with such a low sampling frequency, fast transients should be removed, since they may sometimes be recorded and other times missed during the acquisition. figure 4 shows the changes in electricity consumption due to the switching on and off of a fan, before and after the filtering. for the type-i and type-ii devices, an edge detector finds the switching events in the apparent power data when the absolute value of the difference δs between two consecutive values is larger than 20 va; the sign of δs identifies the start-up or the shutdown of the appliance. then, the real (δp) and reactive (δq) power variations at each edge must be determined as a candidate signature of the individual load. within the time interval between two consecutive events there is an almost constant power consumption level; hence, to find a candidate signature of an appliance-switching event, the difference between the mean values of the real and reactive power measurements before and after each edge is evaluated, as sketched in the code below. among all the candidate signatures, the k-medoids clustering method [16] is applied to partition the set of switching events into a set of clusters, whose number k depends on the possible states of the single appliances and must be set by the user. this clustering method is robust to noise and outliers, and it chooses actual data points as centers of the clusters. at the end of the clustering procedure, these centers form the signatures associated with each transition.

figure 3. flow chart of the procedure.
figure 4. original and filtered fan power consumption.
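a rough illustration of the edge detector and signature extraction just described (a sketch under the stated thresholds; the authors' implementation is in matlab, and the helper name and window length w are assumptions of this sketch):

import numpy as np

def extract_signatures(s, p, q, thr_va=20.0, w=5):
    # s, p, q: 1-hz numpy arrays of apparent, real and reactive power
    edges = np.where(np.abs(np.diff(s)) > thr_va)[0]   # |delta s| > 20 va
    signatures = []
    for i in edges:
        if i < w or i + 1 + w > len(p):
            continue  # not enough steady samples around the edge
        # mean levels before and after the edge (steady intervals assumed)
        dp = p[i + 1:i + 1 + w].mean() - p[i - w:i].mean()
        dq = q[i + 1:i + 1 + w].mean() - q[i - w:i].mean()
        signatures.append((i, dp, dq))    # candidate (delta p, delta q) signature
    return signatures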
note that, as appliances with small power consumption are neither interesting from the point of view of energy savings nor easily distinguished, loads with δp < 20 w are discarded and loads with 20 w < δp < 50 w are associated to a unique "low consumption" cluster. for the same reason, switching events lasting less than 5 s are not taken into account. the threshold of 50 w is implemented by default, but it can be modified by the user in case only consumptions greater than a predetermined power are of interest. at the end of the training phase, all the collected data are given as input to the monitoring system; in case of errors in the classification, new common clusters are created for those devices characterized by close consumptions of real and reactive power. for large type-iii appliances, an ad hoc procedure has been implemented. for the washing machine, the start and the end of the cycle and the motor-spin events are detected. to this aim, the peak values of the real power oscillations that identify the heating and washing phases are located. in order to avoid peaks due to noise or to other events not characterizing this device, only maximum-relevance peaks are selected, i.e., those that drop at least 30 w on either side before the signal attains a higher value. a statistical analysis of the time distance of such peaks shows that the typical distance between the peaks is close to 20 s; thus, the oscillations in the active power signal show a frequency greater than or equal to 0.05 hz. moreover, in order to avoid spurious detections, the motor-spin pattern shown in figure 5 is isolated in the individual appliance signals, extracted and included in the database of the system.

3.3. the recall phase

during the recall phase, first of all, a check is carried out to verify whether a washing machine is running: as described in section 3.2, the switching on (off) of the washing machine is identified when the oscillations, characterized by a frequency greater than or equal to 0.05 hz, start (end). when no washing machine cycle is detected, the following procedure is applied. when an edge in the aggregated signal is detected, the corresponding point in the δp-δq plane is evaluated; then, a nearest-neighbour search in the δp-δq plane is performed, and the event is classified and associated to the appliance event with the nearest signature vector. moreover, a check on the sign of δq is considered as further information, in addition to the cluster-center distance, to identify the proper cluster and increase the discrimination capability.
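a minimal sketch of this recall-phase classification (illustrative only; the cluster centers are the k-medoids signatures from the training step, and the sign check is the simple form suggested by the text):

import numpy as np

def classify_event(dp, dq, centers, labels, unid="unidentified"):
    best, best_d = unid, np.inf
    for (cp, cq), label in zip(centers, labels):
        if cq * dq < 0:
            continue  # reject clusters with inconsistent reactive-power sign
        d = np.hypot(dp - cp, dq - cq)   # distance in the delta-p/delta-q plane
        if d < best_d:
            best, best_d = label, d
    return best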
see as an example the signatures of the hairdryer and the stove in figure 6: the signatures are very close, but the former, unlike the latter, is characterized by zero reactive power. if no association is performed, the event is labelled as unidentified. detecting the events of type-i and type-ii appliances switched on and off during a washing machine cycle becomes quite challenging at such a low frequency, and many false switchings could be triggered by directly applying the described procedure. thus, if the washing machine is running in its heating phase, the lower envelope of q is extracted. since the washing-machine q lower envelope during the heating phases is equal to zero, the presence of intervals of non-null constant values indicates the switching of another appliance; in this case a threshold of 100 var is considered. when an edge is detected in a flat-top interval of the q lower envelope, the nearest-neighbour search is then applied in the δp-δq plane in order to associate the event to the appliance with the nearest signature vector. this procedure improves the detection of reactive loads switched on and off during the washing machine heating phases. finally, using a similarity-search algorithm, it is possible to identify, in the aggregate signal, the operating phase that best matches the reference associated with the spin-motor functioning. this procedure could be applied to other devices with characteristic patterns, e.g. microwave ovens, to distinguish their operating conditions.

4. case study

in this section the experiments carried out to verify the validity of the proposed methodology are reported. the algorithms are implemented in matlab. performance has been evaluated both on a custom dataset collected by the authors in some italian houses and on a public dataset, blued [9], which has already been extensively analysed in the literature.

4.1. custom dataset

in order to create our dataset, several domestic consumption data have been acquired by installing an energy meter between the investigated appliances and the domestic network. the implemented acquisition system of the electricity consumption data consists of:
• one eastron sdm220 single-phase energy meter [17], for residential and industrial applications at a rated voltage of 230 v (range 176 v to 276 v) and current of 10 a (range 0.5 a to 100 a); the accuracy requirements of the meter are reported in table 1;
• a pc on which the measurement software and the load disaggregation algorithm are implemented;
• a modbus/rs485 serial interface, including a usb-rs485 serial port converter adapter cable, allowing the remote communication between the energy meter and the pc.

figure 5. motor-spin pattern.
figure 6. custom dataset: appliance signatures in the δp-δq plane. the overlapping signatures of hairdryer and stove are highlighted with black rectangles [7].

the dataset has been created with an acquisition frequency of 1 hz, which is the maximum value allowed by the meter used. the dataset consists of three parts: individual loads, aggregate loads and synthetic aggregate loads. the data were acquired in single and aggregate manner during the actual operation of the devices, or were generated by simulating conditions corresponding to actual user behaviour. the electricity consumption of the individual loads reported in table 2 has been acquired by connecting the meter directly to the device plugs.
the aggregate dataset was obtained by acquiring multiple appliances simultaneously via a multi-socket. in order to increase the amount of data corresponding to aggregate consumption, a synthetic aggregate dataset was obtained by combining the consumption data of the individual appliances, summing the measurements of p and q and averaging the values of v.

4.2. blued dataset

this dataset was built in 2011 by monitoring a whole house located in pennsylvania (us) for 8 days. in the us there are three electricity feed lines for ordinary houses: two live wires and one neutral line. the two live wires have a voltage amplitude of 120 v and are named phase a and phase b. usually, small 120 v-rated appliances are connected between one live wire and the neutral, while larger 240 v-rated appliances, such as heaters and air conditioners, are connected between the two live wires. in this work only phase a data have been used; the appliances connected to this phase are reported in table 3. the blued dataset contains high-frequency (12 khz) aggregated data of raw current and voltage. during the creation of the dataset, every single switching on/off of any appliance was recorded as an event; in particular, all the changes in the state of power consumption higher than 30 w have been considered. every appliance event in the house was then labelled and time-stamped, providing the event-labels database necessary for the evaluation of the proposed procedure. in total, 904 events were registered in the considered phase a. in this work, to take into account the technical specifications of the pre-commercial smart meter used in the experiments discussed in section 4.1, the power signals evaluated from the raw data are down-sampled at 1 hz. then, the events identified in blued but unknown, and those with a duration less than or equal to 5 s, have been discarded, obtaining a final database of 662 ground-truth events.

5. results

figure 6 and figure 7 show the signatures of the custom and blued databases in the δp-δq plane, obtained by applying the "extraction of the signatures" phase of the procedure described in section 3.2. in this section the performances of the recall phase (event detection and appliance identification) over the two datasets are presented.

5.1. performance measures

in binary classification problems (such as the event detection) there are only two classes, called positive (on or off event) and negative (non-event). when a positive sample is incorrectly classified as negative, it is called a false negative (fn); when a negative sample is incorrectly classified as positive, it is called a false positive (fp); when a positive sample is correctly classified as positive, it is called a true positive (tp).

table 1. accuracy requirements of eastron sdm220 [17].
parameter                              accuracy
voltage                                0.5 % of range maximum
current                                0.5 % of nominal
power (active, reactive, apparent)     1 % of range maximum
energy (active)                        class 1 iec 62053-21, class b en 50470-3
energy (reactive)                      1 % of range maximum

table 2. list of appliances in the custom dataset.
appliance           average power consumption (w)    type
fridge              180                              ii
kettle              1900                             i
lamp                40                               i
notebook            60                               i
stereo              30                               i
toaster             500                              i
tv                  40                               i
electric oven       2000                             ii
hairdryer           300-900                          ii
fan                 30-40                            ii
induction cooker    400-2500                         ii
microwave oven      1000-1200                        ii
stove               900-1800                         ii
water heater        600-1200                         ii
washing machine     130-1700                         iii

table 3. list of appliances in the blued dataset (phase a).
appliance                   average power consumption (w)    type
kitchen aid chopper         1500                             ii
fridge                      120                              ii
air compressor              1130                             i
hair dryer                  1600                             ii
backyard lights             60                               i
washroom light              110                              i
bathroom upstairs lights    65                               i
bedroom lights              190                              i

figure 7. blued dataset (phase a): appliance signatures in the δp-δq plane. the magnification of the part enclosed by the black rectangle shows in more detail the signatures with |δp| < 250 w.
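as a small illustration of the synthetic aggregation described in section 4.1 (the record layout is an assumption of this sketch; the paper only states that p and q are summed and v is averaged):

import numpy as np

def synthetic_aggregate(records):
    # records: list of dicts with aligned 1-hz arrays 'p', 'q', 'v' of equal length
    p = np.sum([r["p"] for r in records], axis=0)
    q = np.sum([r["q"] for r in records], axis=0)
    v = np.mean([r["v"] for r in records], axis=0)
    s = np.hypot(p, q)   # apparent power, used by the edge detector
    return {"p": p, "q": q, "v": v, "s": s}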
the precision (pr) represents the proportion of predicted positives that are truly positive; it is the ratio between the number of correctly classified positives and the total number of samples predicted as positive:

$Pr = \dfrac{TP}{TP + FP}$ . (1)

the recall (re) represents the proportion of actual positives that are correctly classified; it is the ratio between the number of correctly classified positives and the total number of positives in the dataset:

$Re = \dfrac{TP}{TP + FN}$ . (2)

the f1-score combines precision and recall into a single measure; mathematically, it is the harmonic mean of precision and recall:

$F_1 = \dfrac{2\,Pr\,Re}{Pr + Re}$ . (3)

in a multiclass classification problem (such as the appliance identification) there are no positive or negative classes, but tp, fp, fn and the other performance measures can be evaluated for each individual class (each appliance event). summing up the single-class measures, the total tp, fp and fn of the model can be obtained; then the global metrics of precision, recall and f1-score can be evaluated. note that in this case all the global metrics become equal, i.e. $Pr = Re = F_1$.

5.2. performance with the custom dataset

the recall phase has been tested on aggregate signals composed of type-i and type-ii appliances, with and without the washing machine. using the pattern shown in figure 5, the operation phases of the motor spin are identified even when considering washing machines of different brands and with different washing programs. figure 8 shows the power demand of a household over a 14-hour period, between 08:00 and 22:00. as can be observed, the aggregate power demand is generated by the fridge, which shows a periodic power consumption behaviour, and by other appliances. the on and off events are shown as markers at the levels +δp and −δp respectively, whereas the motor-spin functioning is indicated with red segments. the results are very satisfactory, especially regarding the identification of the activation status of the appliances that consume more energy, such as the washing machine. as an example, figure 9 shows the event detection during a washing machine cycle: the flat-top intervals of the q lower envelope, denoted by the green segments, identify the switching on/off of the fridge. in order to show the effectiveness of the improvements proposed in this paper, table 4 reports the performance of the edge detection, while table 5 compares the event classifier performance with that reported in [7]. as can be noted, although the performance index obtained in [7] for the synthetically generated aggregate signals was already very high, a slight improvement has been achieved; a more significant improvement can be observed on the experimental data after the introduction of the described changes. the pie charts in figure 10 compare the estimated decomposition results with the ground truth of energy consumption. as can be observed, the proposed method is capable of disaggregating the energy consumption of the appliances with good accuracy.

figure 8. household power demand over a 14-hour period.
figure 9. detection of on/off events for the fridge during a washing machine cycle.

table 4. performance metrics for the edge detector tested on the synthetic and experimental custom dataset.
test set             # ground-truth events    tp     fp    fn    pr      re      f1-score
synthetic data       140                      139    1     1     0.99    0.99    0.99
experimental data    137                      135    2     2     0.99    0.99    0.99

table 5. performance metrics for appliance classification tested on the synthetic and experimental custom dataset.
test set                 # detected events    tp     fp    fn    pr      re      f1-score
synthetic data [7]       139                  133    6     6     0.96    0.96    0.96
synthetic data           139                  133    3     3     0.97    0.97    0.97
experimental data [7]    135                  121    14    14    0.90    0.90    0.90
experimental data        135                  129    6     6     0.96    0.96    0.96
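as a quick check of eqs. (1)-(3) against the tables above:

def precision_recall_f1(tp, fp, fn):
    pr = tp / (tp + fp)          # eq. (1)
    re = tp / (tp + fn)          # eq. (2)
    f1 = 2 * pr * re / (pr + re) # eq. (3)
    return pr, re, f1

print(precision_recall_f1(135, 2, 2))  # edge detector, experimental data (table 4): ~0.99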
5.3. performance with the blued dataset

in the blued dataset the aggregate signals are composed of type-i and type-ii appliances. the event detector applied to the blued time series identifies 647 events characterized by δs > 20 va and lasting more than 5 s. among them, 9 events refer to appliances characterized by an active power consumption of less than 30 w; since these events are not labeled in blued, they have been discarded. table 6 and table 7 report the performance obtained by the algorithm for the event detection and the event classification, respectively. all the performance indexes are very high, confirming the suitability of the approach for detecting sudden state changes. table 8 reports the confusion matrix showing the per-class accuracy (in percent); in the confusion matrix, all the different lights connected to phase a have been grouped into a single class. we can observe that the only classification errors are for the fridge class, which is confused with that of the lights, and vice versa. the literature reports few contributions for nilm algorithms applied to blued data at the same frequency of 1 hz [13], [14], [18]. in [13] a panel of different machine learning algorithms is applied to appliance classification for a subset of the blued dataset (air compressor, basement, computer, garage door, iron, kitchen light, laptop, lcd monitor, monitor, refrigerator, tv); both precision and recall are equal to 79 %. in [14], clustering in the δp-δq plane is solved through a hierarchical approach executed with some manual supervision; the paper reports, for phase-a blued data, an f1-score for the event detection and for the appliance classification of about 92 % and 88 %, respectively. in [18] an event detection algorithm is used to identify the time instant when a sudden increase of active power occurs, indicating a possible turn-on event, and a convolutional neural network (cnn) classifier is applied for event classification; the results reported for blued are limited to three appliances (washing machine, fridge, microwave). table 9 reports the recall (also called true positive rate, tpr) and accuracy metrics for event detection and classification, respectively, for [13], [14] and [18], and also for other solutions applied to higher-frequency data [19]-[21].

figure 10. energy disaggregation results.

table 6. performance metrics for the edge detector tested on the blued dataset.
test set             # ground-truth events    tp     fp    fn    pr      re      f1-score
experimental data    662                      638    9     13    0.99    0.96    0.97

table 7. performance metrics for appliance classification tested on the blued dataset.
test set             # detected events    tp     fp    fn    pr      re      f1-score
experimental data    638                  625    13    13    0.98    0.98    0.98
table 8. confusion matrix showing per-class accuracy (in percent) for the appliances connected to phase a of blued; rows: actual device, columns: classified device; diagonal: correct classifications (tp), off-diagonal: false positives.
class                  fridge    air compressor    lights    kitchen aid chopper    hair dryer
fridge                 0.99      0                 0.01      0                      0
air compressor         0         1                 0         0                      0
lights                 0.05      0                 0.95      0                      0
kitchen aid chopper    0         0                 0         1                      0
hair dryer             0         0                 0         0                      1

table 9. comparison of event detection and classification performance.
reference    method                                sampling rate    recall (tpr)    accuracy
proposed     hybrid event-based procedure          1 hz             0.96            0.98
[13]         active machine learning strategy      1 hz             0.79            0.92
[14]         hierarchical approach                 1 hz             0.89            0.88
[18]         first and second differences + cnn    1 hz             0.94            0.98
[19]         finite-precision analysis             4 khz            0.91            n/a
[20]         extremely randomized trees            12 khz           n/a             0.94
[21]         hybrid approach                       60 hz            0.94            n/a

it can be seen that the proposed algorithm achieves good results while being simple and computationally efficient. in fact, despite the low frequency of the data analysed, the approach shows competitive performance even when compared to other, more complex methodologies applied to high-sampling-rate signals, from 60 hz to 12 khz [19]-[21].

6. conclusions

in this paper, a monitoring system has been proposed that is able to disaggregate and keep track of the power consumption of the devices existing in a typical house by analysing low-frequency aggregate data. by applying a hybrid approach to the power data, i.e. event-based techniques plus pattern recognition techniques for large household appliances, the load disaggregation is performed with good performance, even when complex type-iii devices, such as the washing machine, are working. finally, using the more populated blued dataset, we showed that the proposed procedure is able to achieve high performance in two key tasks in energy disaggregation: distinguishing events from non-events, and identifying which appliance is associated with a detected event.

references

[1] s. s. hosseini, k. agbossou, s. kelouwani, a. cardenas, non-intrusive load monitoring through home energy management systems: a comprehensive review, renewable and sustainable energy reviews, vol. 79, 2017, pp. 1266-1274. doi: 10.1016/j.rser.2017.05.096
[2] a. ruano, a. hernandez, j. ureña, m. ruano, j. garcia, nilm techniques for intelligent home energy management and ambient assisted living: a review, energies, vol. 12, june 2019, p. 2203. doi: 10.3390/en12112203
[3] w. kong, z. y. dong, b. wang, j. zhao, j. huang, a practical solution for non-intrusive type ii load monitoring based on deep learning and post-processing, ieee trans. smart grid, vol. 11, no. 1, jan. 2020, pp. 148-160. doi: 10.1109/tsg.2019.2918330
[4] a. harell, s. makonin, i. v. bajić, wavenilm: a causal neural network for power disaggregation from the complex power signal, proc. of icassp 2019 ieee international conference on acoustics, speech and signal processing, brighton, uk, 12-17 may 2019, pp. 8335-8339. doi: 10.1109/icassp.2019.8682543
[5] q. wu, f. wang, concatenate convolutional neural networks for non-intrusive load monitoring across complex background, energies, vol. 12, no. 8, apr. 2019, p. 1572. doi: 10.3390/en12081572
[6] p. davies, j. dennis, j. hansom, w. martin, a. stankevicius, l. ward, deep neural networks for appliance transient classification, proc. of icassp 2019 ieee international conference on acoustics, speech and signal processing, brighton, uk, 12-17 may 2019, pp. 8320-8324. doi: 10.1109/icassp.2019.8682658
[7] b.
cannas, b. canetto, s. carcangiu, a. fanni, l. fresi, m. marceddu, c. muscas, p. porcu, g. sias, non-intrusive loads monitoring techniques for house energy management, proc. of the 1st int. conf. on energy transition in the mediterranean area (synergy med), cagliari, italy, 28-30 may 2019, pp. 1-6. doi: 10.1109/synergy-med.2019.8764104
[8] b. cannas, s. carcangiu, d. carta, a. fanni, c. muscas, g. sias, b. canetto, l. fresi, p. porcu, real-time monitoring system of the electricity consumption in a household using nilm techniques, proc. of the 24th imeko tc4 international symposium and 22nd international workshop on adc and dac modelling and testing, palermo, virtual, italy, 14-16 september 2020, pp. 90-95. online [accessed 18 june 2021] https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-18.pdf
[9] k. anderson, a. ocneanu, d. benitez, d. carlson, a. rowe, m. berges, blued: a fully labeled public dataset for event-based non-intrusive load monitoring research, proc. of the 2nd kdd workshop on data mining applications in sustainability (sustkdd), acm, beijing, china, 2012, pp. 1-5.
[10] h. liu, q. zou, z. zhang, energy disaggregation of appliances consumptions using ham approach, ieee access 7 (2019), pp. 185977-185990. doi: 10.1109/access.2019.2960465
[11] p. ricardo, p. r. z. taveira, c. h. v. d. moraes, g. lambert-torres, non-intrusive identification of loads by random forest and fireworks optimization, ieee access 8 (2020), pp. 75060-75072. doi: 10.1109/access.2020.2988366
[12] b. cannas, s. carcangiu, d. carta, a. fanni, c. muscas, selection of features based on electric power quantities for non-intrusive load monitoring, applied sciences 11 (2021), 533. doi: 10.3390/app11020533
[13] f. rossier, ph. lang, j. hennebert, near real-time appliance recognition using low frequency monitoring and active learning methods, energy procedia 122 (2017), pp. 691-696. doi: 10.1016/j.egypro.2017.07.371
[14] t. khandelwal, k. rajwanshi, p. bharadwaj, s. srinivasa garani, r. sundaresan, exploiting appliance state constraints to improve appliance state detection, proc. of e-energy '17, shatin, hong kong, 16-19 may 2017, pp. 111-120. doi: 10.1145/3077839.3077859
[15] g. w. hart, nonintrusive appliance load monitoring, proc. ieee 80 (1992), pp. 1870-1891. doi: 10.1109/5.192069
[16] mathworks: k-medoids clustering. online [accessed 18 june 2021] https://it.mathworks.com/help/stats/kmedoids.html
[17] eastrongroup: energy meters. online [accessed 18 june 2021] http://www.eastrongroup.com
[18] c. athanasiadis, d. doukas, t. papadopoulos, a. chrysopoulos, a scalable real-time non-intrusive load monitoring system for the estimation of household appliance power consumption, energies, vol. 14, no. 3, feb. 2021, p. 767. doi: 10.3390/en14030767
[19] r. nieto, l. de diego-otón, á. hernández, j. ureña, finite precision analysis for an fpga-based nilm event-detector, proc. of the 5th international workshop on non-intrusive load monitoring, online, 18 november 2020, pp. 30-33. doi: 10.1145/3427771.3427849
[20] a. k. jain, s. s. ahmed, p. sundaramoorthy, r. thiruvengadam, v. vijayaraghavan, current peak based device classification in nilm on a low-cost embedded platform using extra-trees, proc. of the 2017 ieee mit undergraduate research technology conference (urtc), cambridge, ma, 3-5 november 2017, 4 pp. doi: 10.1109/urtc.2017.8284200
[21] m. lu, z. li, a hybrid event detection approach for non-intrusive load monitoring, ieee transactions on smart grid, vol. 11, no. 1, jan. 2020, pp. 528-540.
doi: 10.1109/tsg.2019.2924862

optical-flow-based motion compensation algorithm in thermoelastic stress analysis using single-infrared video

acta imeko, issn: 2221-870x, december 2021, volume 10, number 4, 169-176

tommaso tocci1, lorenzo capponi1, roberto marsili1, gianluca rossi1

1 department of engineering, university of perugia, via g. duranti 93, 06125 perugia, italy

section: research paper

keywords: thermoelastic stress analysis; optical flow; motion compensation; mechanical stress; experimental mechanics

citation: tommaso tocci, lorenzo capponi, roberto marsili, gianluca rossi, optical-flow-based motion compensation algorithm in thermoelastic stress analysis using single-infrared video, acta imeko, vol. 10, no. 4, article 27, december 2021, identifier: imeko-acta-10 (2021)-04-27

section editor: roberto montanini, università di messina and alfredo cigada, politecnico di milano, italy

received july 30, 2021; in final form december 9, 2021; published december 2021

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: tommaso tocci, e-mail: tommaso.tocci@outlook.it

abstract: thermoelastic stress analysis (tsa) is a non-contact measurement technique for stress distribution evaluation. a common issue related to this technique is the rigid displacement of the specimen during the test phase, which can compromise the reliability of the measurement. for this purpose, several motion compensation techniques have been implemented over the years, but none of them works with a single measurement and a single conditioning of the sample surface. for this reason, a motion compensation technique based on optical flow has been implemented, which greatly increases the robustness and the effectiveness of the methodology using a single measurement and a single specimen preparation. the proposed approach is based on measuring the displacement field of the specimen directly from the thermal video, through optical flow. this displacement field is then used to compensate for the specimen's displacement in the infrared video, which is then used for thermoelastic stress analysis. firstly, the algorithm was validated by comparison with synthetic videos created ad hoc, and the quality of the motion compensation approach was evaluated on video acquired in the visible range. the research then moved to infrared acquisitions, where the application of tsa gave reliable and accurate results. finally, the quality of the stress map obtained was verified by comparison with a numerical model.

1. introduction

the thermoelastic stress analysis (tsa) [1]-[4] is an infrared image-based technique for the non-contact measurement of stress fields. the technique is based on the detection and analysis of the amplitude of the temperature fluctuations on the surface of a dynamically loaded mechanical component, performed with a thermal camera. since the temperature fluctuations produced by the load are very small, lock-in [5], [6] data processing is normally applied to the thermal video [7]; this makes it possible to emphasise phenomena at particular loading frequencies, reducing the effect of noise, which is normally higher than the thermal fluctuation induced by stress. tsa is widely applied to mechanical stress analysis, fem validation and design comparison, and is very useful for high-resolution analysis of stress concentrations and for non-contact stress detection in applications where classical sensors have problems or limitations [8]-[11]. for an ideal application of this technique, the specimen surface should not move, or should make only small movements, throughout the test [2].
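for reference, a minimal sketch of the lock-in processing mentioned above, written as a generic single-frequency correlation of the pixel-wise thermal signal with the load frequency (the paper does not detail its own lock-in implementation, so this form is only an illustrative assumption):

import numpy as np

def lockin_amplitude(frames, fs, f_load):
    # frames: (n, h, w) thermal video; fs: frame rate; f_load: load frequency
    n = frames.shape[0]
    t = np.arange(n) / fs
    ref_c = np.cos(2 * np.pi * f_load * t)[:, None, None]
    ref_s = np.sin(2 * np.pi * f_load * t)[:, None, None]
    x = frames - frames.mean(axis=0)           # remove the dc component
    i = 2 / n * np.sum(x * ref_c, axis=0)      # in-phase component
    q = 2 / n * np.sum(x * ref_s, axis=0)      # quadrature component
    return np.hypot(i, q)                      # per-pixel amplitude map at f_load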
clearly, this ideal condition is not always achievable: for example, in the case of a specimen with low stiffness, the deformations and displacements are very large [3]. this phenomenon is emphasised by the fact that tsa requires the application of dynamic loads with sufficiently high stress levels, in order to generate, through the thermoelastic principle, sufficiently high temperature fluctuations. the main problem caused by the specimen movement is that, in a sequence of consecutive frames, the same pixel does not always correspond to the same part of the specimen. this affects the lock-in operation, causing alterations in the measured stress field. especially at the border of the mechanical component, a pixel may alternately correspond to a surface point or a background point: this phenomenon is commonly known as the edge effect [1]. therefore, it is necessary to compensate these movements and deformations in the thermal video by means of algorithms commonly called motion compensation [2]. the state of the art relies on sakagami et al. [12], where the compensation of the motion using displacement fields is obtained with digital image correlation (dic) [13], [14] on a second video recorded simultaneously in the visible range; the motion compensation is then applied to the thermal video. this technique requires the employment of two different cameras and a double surface conditioning of the specimen: the application of a speckle pattern for the dic and of a high-emissivity black paint for the tsa. silva et al. [15] simplified the compensation procedure by limiting the acquisition to a single camera for both the dic and the tsa videos. in this case, however, in order to make displacement measurements with the dic, the speckle must be made with a paint whose emissivity can be detected by an infrared camera. nevertheless, two surface-conditioning steps of the specimen are still required.
compared with sakagami et al. [12], since one camera is employed for the two acquisitions, the alignment problems, which are very evident when using two cameras that should ideally be positioned in the same place, are greatly reduced. this research proposes an algorithm that allows the compensation of the rigid motion using a single thermal video and a single-step specimen preparation, from which it is possible to perform both the compensation and the tsa. therefore, compared to the state of the art, it is no longer necessary to first prepare the specimen for the measurement with dic, then remove the speckle and finally prepare the surface for the tsa; it is sufficient to carry out only the preparation for the tsa, significantly reducing the time required for the preparation phase. in addition, from a single thermal video it is possible to obtain both the motion compensation field and the stress field, thus also reducing the time of the experimental phase. this makes it possible to carry out tests in which the measurement cannot be repeated, such as structural failure or random tests. in the motion compensation phase, the dic is replaced by a computer vision technique called optical flow [16]-[18], which is based on the motion of visual features, such as corners, edges, ridges and textures, in two consecutive frames of a video scene [19], [20]. within optical-flow methods, the differentiation between the sparse and the dense approach is fundamental [18]. there are many different types of algorithms [21] based on different correlation techniques, such as gradient-based methods, relying on the brightness constancy equation [22], or region-based approaches, relying on the correlation of different features like normalized cross-correlation and laplacian correlation [23], [24]. optical flow has been widely applied to measure displacement fields in solid bodies [25]-[27]. in this work, the dense farneback optical-flow algorithm was implemented [28], [29]. a nylon specimen, realized by additive manufacturing, was tested by applying a sinusoidal load along its y-axis, so that the movement of the specimen is exclusively along the vertical direction. in a preliminary step, the algorithm was validated by analysing synthetically generated videos with imposed motion; the coincidence between the displacement measured by the implemented algorithm and the displacement imposed on the synthetic video was then evaluated. the algorithm was then tested on visible video to evaluate the efficiency of the motion compensation, and finally applied to a thermal video for the tsa. the manuscript is organized as follows: in sec. 2, the implemented algorithm, the test bench used and the experimental setup are presented; in sec. 3 the algorithm is validated and the results discussed; sec. 4 draws the conclusions.

2. materials and method

2.1. algorithm implementation

all the algorithms of this work have been implemented in a python environment. below is the description of the two main algorithms: one for measuring the displacement field and one for motion-compensating the thermal video.

2.1.1. algorithm for displacement field measurement

the data processing for the determination of the displacement field is not carried out on the original video matrix but on a copy of it. in particular, for thermal video, the copy is normalised from the 14-bit depth of the original video to 8-bit; the compensation field detected on the 8-bit video is then applied to the original 14-bit video.
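a possible sketch of this 14-bit to 8-bit normalisation of the working copy (a simple min-max rescaling; the authors do not specify the exact mapping, so this is an assumption):

import numpy as np

def to_8bit(frame14):
    f = frame14.astype(np.float64)
    f = (f - f.min()) / max(f.max() - f.min(), 1e-12)   # normalise to [0, 1]
    return (255 * f).astype(np.uint8)                   # quantise to 8 bit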
each pair of frames, before being processed by the optical flow, is subjected to a pre-processing phase, designed to reduce the noise (very evident especially in infrared images) and to create a mask of the specimen, which will be used later. each acquired frame is filtered, in sequence, through a gaussian filter [30], [31] with a 5×5 kernel and a morphological transformation of dilation and erosion [30]-[32] with a 3×3 kernel. this produces a good noise reduction, especially in the thermal image, as shown in figure 1. the result is then processed through an image binarization algorithm [30], which returns a black-and-white binary field indicating the non-presence/presence of the specimen in a given pixel. examples of the specimen mask are shown in figure 2; as can be seen, the mask quality of the visible image is higher than that of the infrared image, but a good result is obtained in both cases. the farneback optical-flow algorithm [28], [29] is then applied to each pair of frames: each frame of the video is compared with a reference frame, which is usually the first frame of the video. the displacement field measured in this way indicates the displacement of the i-th frame with respect to the reference one, i.e. the displacement undergone by each pixel between the two frames. the field is displayed in the hsv colour code, where the hue indicates the direction of the displacement and the colour intensity indicates its magnitude. an example of a rough displacement field is given in figure 3: it can be seen that the displacement is mainly traced along the edges of the specimen and that the inner areas are detected as stationary. this requires a phase of improvement of the displacement field. we start with an initial masking, which eliminates all the displacement vectors detected outside the specimen due to noise; operationally, the displacement matrix is multiplied by the mask matrix, as shown in figure 4. since it was decided to compensate for motion only along the vertical axis, the maximum displacement detected for each row is determined, which is likely to be located along the edges of the specimen. at this point, each row of the matrix has a constant value over the length of the frame, ideally including areas outside the specimen, as shown in figure 5. a second masking is then performed (figure 6). to better understand this, it can be said that the first masking is intended to prevent the algorithm from taking as the maximum a displacement detected at a point outside the specimen, while the second is applied to eliminate, row by row, the per-pixel displacement values outside the geometry of the specimen. finally, the field of compensated displacements is filtered with a blur filter, in order to make it uniform across pixels, as shown in figure 7; compared to the field shown in figure 3, a greater uniformity and regularity can be observed.

figure 1. image pre-processing: (a) original frame, (b) filtered frame.
figure 2. specimen mask: (a) visible video, (b) infrared video.
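a condensed sketch of this pipeline using opencv; the paper states a python implementation with gaussian filtering, dilation/erosion, binarization and farneback flow, but the specific parameter values and the otsu thresholding below are illustrative assumptions:

import cv2
import numpy as np

def preprocess(frame8):
    f = cv2.GaussianBlur(frame8, (5, 5), 0)            # 5x5 gaussian filter
    kernel = np.ones((3, 3), np.uint8)
    f = cv2.dilate(f, kernel)                          # morphological dilation...
    f = cv2.erode(f, kernel)                           # ...and erosion
    _, mask = cv2.threshold(f, 0, 1, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return f, mask                                     # filtered frame and binary mask

def vertical_displacement(ref8, frame8):
    ref, _ = preprocess(ref8)
    cur, mask = preprocess(frame8)
    flow = cv2.calcOpticalFlowFarneback(ref, cur, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    dy = flow[..., 1] * mask                           # first masking
    cols = np.abs(dy).argmax(axis=1)                   # per-row extreme displacement
    row_max = dy[np.arange(dy.shape[0]), cols]
    field = np.repeat(row_max[:, None], dy.shape[1], axis=1) * mask  # second masking
    return cv2.blur(field, (5, 5))                     # final smoothing blur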
2.1.2. algorithm for motion compensation

motion compensation takes place on the original visible video or on the original 14-bit infrared video, which has not been altered in any way during the previous step. a column vector containing the displacement of each row is first extracted from the displacement field. this displacement vector, once inverted, represents the motion to be applied to each row of the original video for motion compensation. in order to improve the compensation and make it smoother and more consistent, this vector, called the compensation vector, is processed with a kalman filter [33], as shown in figure 8.

figure 3. example of rough displacement field.
figure 4. first masking operation: incorrect displacement vectors are masked out.
figure 5. calculation of maximum displacement line by line.
figure 6. second masking operation: compensation field is created.
figure 7. example of smooth and masked displacement field.
figure 8. extraction and smoothing of the compensation vector.

we then move on to the actual compensation by applying a shift to the rows of the video to be compensated, using the information contained in the compensation vector. the i-th row of the original video is moved to position i+k, where k is the value of the compensation shift, as shown in figure 9. a check is also made to verify whether this shift would place the row outside the maximum frame size, in which case no compensation is performed for that row. the output is a compensated matrix obtained from the original video, which contains the thermal and stress information but is not manipulated in any way: it is only compensated by moving the position of the rows, without modifying their content.
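a minimal sketch of the row-shifting compensation described above is given below; the kalman smoothing of the compensation vector is assumed to have already been applied, and the bounds check mirrors the one in the text.

```python
import numpy as np

def compensate_frame(frame: np.ndarray, comp_vec: np.ndarray) -> np.ndarray:
    """move row i of `frame` to row i+k, where k = comp_vec[i] is the
    (inverted, smoothed) compensation shift. rows that would fall
    outside the frame are left uncompensated, as in the text; pixel
    values are only repositioned, never modified."""
    out = frame.copy()
    n_rows = frame.shape[0]
    for i in range(n_rows):
        k = int(round(comp_vec[i]))
        if 0 <= i + k < n_rows:   # bounds check
            out[i + k, :] = frame[i, :]
    return out
```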
2.2. test bench and acquisition system

the test bench used during the experimental campaign consists of a sentek l1024m shaker, over which a structure for tensile testing was built, as shown in figure 10. a specimen with geometry identical to that used by sakagami et al. [12] was adopted. the specimen was produced by fdm printing in nylon 6-6 with a 100 % infill ratio and a layer height of 0.1 mm. the material has a tensile strength of 80 mpa and an elastic modulus of 2.2 gpa. two pairs of clamps, one connected to the head of the shaker and the other to the fixed part of the structure, were used to fix the specimen, as shown in figure 11. a load cell is positioned at the top of the specimen in order to monitor the loads exchanged during testing. two cameras were used: one for the visible video and one for the infrared video. a canon eos 7d reflex camera with a 24 mm – 70 mm lens, capable of acquiring video at 1920x1080 pixels resolution and 30 fps, was used for the visible acquisition, while a flir a6751sc thermal camera with a cooled sensor was used for the infrared acquisition. this thermal camera captures video at 640x512 pixels resolution with a framerate of 100 fps and has a thermal sensitivity of less than 20 mk. both cameras were positioned in front of the specimen at the height of the specimen hole.

2.3. experimental campaign

the experimental campaign was divided into two steps: first, videos were acquired in the visible range at different frequencies in order to test and validate the compensation algorithm; subsequently, infrared videos were acquired at different frequencies and different load levels in order to test the effectiveness of the motion compensation algorithm for the thermoelastic application. a summary is given in table 1.

3. results

3.1. algorithm validation

since a validation phase of the motion compensation algorithm described above is required, it was decided to generate synthetic videos in which a body is present that is clearly identifiable by the optical-flow and has a known imposed displacement. the motion of the body in the synthetic video is imposed in a sinusoidal regime with a given amplitude and frequency. several videos have been generated under different operating conditions, as illustrated in table 2.

figure 9. motion compensation algorithm.
figure 10. test bench.
figure 11. specimen geometry and grasping.

table 1. summary of the experimental campaign.
frequency in hz   sinusoidal load in n   visible   infrared
4                 1500                   x
4                 3000                   x         x
5                 1500                   x
5                 3000                             x
8                 1500                   x
8                 3000                   x         x

table 2. summary of synthetic videos.
#synthetic video   frequency in hz   amplitude in pixel
1                  1                 5
2                  1                 10
3                  1                 20
4                  4                 5
5                  4                 10
6                  4                 20
7                  8                 5
8                  8                 10
9                  8                 20

in order to validate the displacement fields obtained by optical-flow, a comparison was made between the displacement detected along different survey lines on the object placed in the video and the displacement imposed during the generation of the synthetic video. in each video, three horizontal survey lines are defined: an upper red line, a central green line and a lower blue line. note that the red line and the blue line run alternately from the marker area. some results obtained at 4 hz and 8 hz are shown below; however, validation was carried out for each synthetic video generated, obtaining superimposable results between detected and imposed displacement. the hsv displacement field is also reported, where it is possible to see that the detected displacement occupies the zone with a colour gradation going from green to violet, which corresponds to the colour associated with displacements perfectly along the y-axis; this gives a further confirmation of the correct application of the optical-flow. in figure 12 it can be seen that the green line in the vicinity of the marker areas gives a maximum displacement of about 5 pixels, which coincides exactly with the imposed one. the same result is obtained in figure 13, where a displacement of 20 pixels is found, again coinciding with the imposed one. also, from the point of view of the frequency of the load, it can be said that this factor does not have a negative influence on the measurement of the displacement. the compensation algorithm is then validated and tested on visible videos, which are easier to process due to their lower amount of noise compared with infrared videos. for the videos in the visible range, the quality of the motion compensation is verified by comparing the positions of the lower end of the hole in the specimen. a line is placed at the lower end of the hole, as shown in figure 14. it can be seen from the model that in the uncompensated frame, because of the rigid motion, the position of the hole varies. thanks to the motion compensation, instead, the position of the centre of the hole in the i-th frame is brought back to the position of the reference frame. to illustrate the result obtained by means of compensation, the superimposed image of the 3 frames is shown in figure 15: reference, uncompensated frame and compensated frame; it is clearly evident that in the compensated case the position of the hole is in line with that of the reference frame. the overlapped frames between the reference image and the uncompensated one (figure 16-a) and between the reference image and the compensated one (figure 16-b) are also reported. also in this case it is clear that in the compensated image the overlap is almost perfect, because the edges of the reference image are not visible.
on the contrary, in the uncompensated one the displacement is clearly evident. subsequently, the effect of motion compensation was evaluated by measuring the displacement of the centre of the hole with respect to the reference position. it can be seen that, for a load frequency of 4 hz, a displacement attenuation of approximately 92 % was obtained. if, on the other hand, 8 hz is taken as the load frequency, a compensation of approximately 80 % is achieved, since smaller amplitudes occur as the frequency increases. in both cases, this is a good result. the results are shown in table 3.

figure 12. synthetic video #7: (a) survey lines monitored (b) field of measured displacement (c) dynamic plot of measured displacement along survey lines.
figure 13. synthetic video #6: (a) survey lines monitored (b) field of measured displacement (c) dynamic plot of measured displacement along survey lines.
figure 14. diagram of the comparison model between the reference hole and the uncompensated/compensated one.
figure 15. comparison of the position of the 3 holes in the reference, uncompensated and compensated frame.
figure 16. frame comparisons: (a) reference – non-compensated (b) reference – compensated.

table 3. measurement of hole displacement in the uncompensated and compensated configurations.
test               not compensated displacement in pixel   compensated displacement in pixel
4 hz – frame 11    36                                      3
4 hz – frame 118   35                                      3
4 hz – frame 226   33                                      3
8 hz – frame 16    9                                       2
8 hz – frame 124   11                                      2
8 hz – frame 228   10                                      2

3.2. thermoelasticity application

the following graphs (figure 17, figure 18 and figure 19) compare the stress fields obtained from uncompensated and compensated infrared videos. the stress distribution field is expressed in terms of temperature variation in °c. the analysis of the fields clearly shows the improvement in quality produced by the motion compensation. in the uncompensated image, a significant edge-effect component is visible, due to the rigid motion of the specimen. this phenomenon makes it difficult to detect stress zones, especially in the areas close to the hole, where a higher stress concentration is expected. after compensation, instead, it is much easier to identify the typical stress distribution around the hole. it is also noticeable that the field is much sharper than the uncompensated one.

figure 17. 4 hz infrared video: comparison of stress fields of uncompensated and compensated video.
figure 18. 5 hz infrared video: comparison of stress fields of uncompensated and compensated video.
figure 19. 8 hz infrared video: comparison of stress fields of uncompensated and compensated video.

in order to validate the result, the tsa stress fields are compared with those obtained from the fem model. a coherent trend of the experimental fields with respect to the numerical ones can be seen. figure 20 shows the graph comparing the stress concentration factor at the hole in the uncompensated video, in the compensated video and in the fem model along the two check lines passing through its centre.

figure 20. comparison of stress concentration in uncompensated, compensated and fem video along the two check lines passing through the centre of the hole.

in order to compare the experimental stress profiles with the fem ones, the normalised stress concentration was evaluated, i.e., each series of values was divided by the maximum of the series.
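the normalisation just described, and the displacement attenuation quoted in table 3, amount to the following short sketch (variable names are illustrative):

```python
import numpy as np

def normalised_profile(stress_line) -> np.ndarray:
    """divide a series of stress values by its maximum, so that tsa and
    fem check lines can be compared on the same scale."""
    stress_line = np.asarray(stress_line, dtype=float)
    return stress_line / np.max(stress_line)

def attenuation(uncompensated_px: float, compensated_px: float) -> float:
    """displacement attenuation in percent, e.g. (36 - 3)/36 ~ 92 % at 4 hz."""
    return 100.0 * (1.0 - compensated_px / uncompensated_px)
```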
it can be seen that, along the vertical direction, in the uncompensated video there is an inversion of the stress concentration at the bottom of the hole (red circle in figure 20), which in this case corresponds to the area between 80 and 100 along the y-direction, due to the edge effect. on the contrary, this inversion of the stress concentration is not present in the compensated video, which shows a trend similar to that of the fem model.

4. conclusions

in this research, the implementation of a new motion compensation algorithm for thermoelastic applications was proposed with respect to the state of the art. the algorithm was first validated by means of a series of synthetic videos generated in such a way as to have a known imposed displacement. then, tests were carried out on videos in the visible range, where the performance of the motion compensation was evaluated: it was around 92 % for 4 hz displacements and 80 % for 8 hz displacements. stress distribution fields produced from the same video, compensated and uncompensated, were compared. the blurring and edge effects produced by the motion were almost completely eliminated, making it possible to correctly measure the stress field, especially in the area around the hole. this result was compared with stress fields obtained from finite element analysis. videos were tested at different frequencies in order to verify the robustness of the algorithm. finally, the normalised stress concentration was compared along two perpendicular survey lines passing through the centre of the hole. it can be seen that, along the y compensation line, the inversion of tension due to the effect of motion was significantly reduced by the motion compensation algorithm proposed here.

references
[1] w. thomson, on the dynamical theory of heat, earth environ. sci. trans. r. soc. edinburgh, vol. 20, no. 2, pp. 261–288, 1853.
[2] w. weber, über die specifische wärme fester körper, insbesondere der metalle, ann. phys., vol. 96, no. 10, pp. 177–213, 1830.
[3] j. m. dulieu-barton, s. quinn, c. eyre, p. r. cunningham, development of a temperature calibration device for thermoelastic stress analysis, in applied mechanics and materials, 2004, vol. 1, pp. 197–204. doi: 10.4028/www.scientific.net/amm.1-2.197
[4] j. m. dulieu-barton, thermoelastic stress analysis, opt. methods solid mech. a full-f. approach, pp. 345–366, 2012.
[5] n. harwood, w. m. cummings, applications of thermoelastic stress analysis, strain, vol. 22, no. 1, pp. 7–12, 1986. doi: 10.1111/j.1475-1305.1986.tb00014.x
[6] n. harwood, w. m. cummings, calibration of the thermoelastic stress analysis technique under sinusoidal and random loading conditions, strain, vol. 25, no. 3, pp. 101–108, 1989. doi: 10.1111/j.1475-1305.1989.tb00701.x
[7] l. capponi, lollocappo/pylia: digital lock-in analysis, zenodo, 2020. doi: 10.5281/zenodo.4043175
[8] g. allevi, l. capponi, p. castellini, p. chiariotti, f. docchio, f. freni, r. marsili, m. martarelli, r. montanini, s. pasinetti, a. quattrocchi, r. rossetti, g. rossi, g. sansoni, e. p. tomasini, investigating additive manufactured lattice structures: a multi-instrument approach, ieee trans. instrum. meas., 2019, pp. 2459–2467. doi: 10.1109/tim.2019.2959293
[9] r. montanini, g. rossi, a. quattrocchi, d. alizzio, l. capponi, r. marsili, a. d. giacomo, t. tocci, structural characterization of complex lattice parts by means of optical non-contact measurements, in 2020 ieee international instrumentation and measurement technology conference (i2mtc), 2020, pp. 1–6.
doi: 10.1109/i2mtc43012.2020.9128771
[10] l. capponi, j. slavič, g. rossi, m. boltežar, thermoelasticity-based modal damage identification, int. j. fatigue, vol. 137, aug. 2020, p. 105661. doi: 10.1016/j.ijfatigue.2020.105661
[11] l. capponi, r. marsili, g. rossi, t. zara, thermoelastic stress analysis on rotating and oscillating mechanical components, int. j. comput. eng. res., vol. 10, no. 6, 2020, pp. 2250–3005.
[12] t. sakagami, n. yamaguchi, s. kubo, t. nishimura, a new full-field motion compensation technique for infrared stress measurement using digital image correlation, j. strain anal. eng. des., vol. 43, no. 6, 2008, pp. 539–549. doi: 10.1243/03093247jsa360
[13] b. pan, k. qian, h. xie, a. asundi, two-dimensional digital image correlation for in-plane displacement and strain measurement: a review, meas. sci. technol., vol. 20, no. 6, 2009, p. 62001. doi: 10.1088/0957-0233/20/6/062001
[14] j. cantrell, s. rohde, d. damiani, r. gurnani, l. disandro, j. anton, a. young, a. jerez, d. steinbach, c. kroese, p. ifju, experimental characterization of the mechanical properties of 3d printed abs and polycarbonate parts, conf. proc. soc. exp. mech. ser., vol. 3, 2017, pp. 89–105. doi: 10.1007/978-3-319-41600-7_11
[15] m. l. silva, g. ravichandran, combined thermoelastic stress analysis and digital image correlation with a single infrared camera, j. strain anal. eng. des., vol. 46, no. 8, 2011, pp. 783–793. doi: 10.1177/0309324711418286
[16] j. l. barron, d. j. fleet, s. s. beauchemin, performance of optical-flow techniques, int. j. comput. vis., vol. 12, no. 1, 1994, pp. 43–77. doi: 10.1007/bf01420984
[17] b. d. lucas, t. kanade, an iterative image registration technique with an application to stereo vision, 1981.
[18] b. d. lucas, generalized image matching by the method of differences, phd thesis, carnegie mellon university, 1984.
[19] p. turaga, r. chellappa, a. veeraraghavan, advances in video-based human activity analysis: challenges and approaches, in advances in computers, vol. 80, elsevier, 2010, pp. 237–290. doi: 10.1016/s0065-2458(10)80007-5
[20] s. akpinar, f. n. alpaslan, video action recognition using an optical-flow based representation, in proceedings of the international conference on image processing, computer vision, and pattern recognition (ipcv), 2014, p. 1.
[21] t. fuse, e. shimizu, m. tsutsumi, a comparative study on gradient-based approaches for optical-flow estimation, int. arch. photogramm. remote sens., vol. 33, no. b5/1, part 5, 2000, pp. 269–276.
[22] s. baker, i. matthews, lucas-kanade 20 years on: a unifying framework, int. j. comput. vis., vol. 56, no. 3, 2004, pp. 221–255. doi: 10.1023/b:visi.0000011205.11775.fd
[23] w. k. pratt, correlation techniques of image registration, ieee trans. aerosp. electron. syst., no. 3, 1974, pp. 353–358.
[24] p. j. burt, local correlation measures for motion analysis: a comparative study, 1982.
[25] g. allevi, l. casacanditella, l. capponi, r. marsili, g. rossi, census transform based optical-flow for motion detection during different sinusoidal brightness variations, in journal of physics: conference series, 2018, vol. 1149, no. 1, p. 12032. doi: 10.1088/1742-6596/1149/1/012032
[26] d. gorjup, j. slavič, a. babnik, m. boltežar, still-camera multiview spectral optical-flow imaging for 3d operating-deflection-shape identification, mech. syst. signal process., vol. 152, 2021, p. 107456. doi: 10.1016/j.ymssp.2020.107456
[27] j. javh, j. slavič, m. boltežar, experimental modal analysis on full-field dslr camera footage using spectral optical-flow imaging, j. sound vib., vol. 434, 2018, pp. 213–220. doi: 10.1016/j.jsv.2018.07.046
[28] g. farnebäck, polynomial expansion for orientation and motion estimation, linköping university electronic press, 2002.
[29] g. farnebäck, two-frame motion estimation based on polynomial expansion, in scandinavian conference on image analysis, 2003, pp. 363–370. doi: 10.1007/3-540-45103-x_50
[30] r. szeliski, computer vision: algorithms and applications, springer science & business media, 2010.
[31] g. bradski, a. kaehler, learning opencv: computer vision with the opencv library, o'reilly media, inc., 2008.
[32] a. eleftheriadis, a. jacquin, image and video segmentation, in advances in image communication, 1999, pp. 1–68.
[33] g. welch, g. bishop, an introduction to the kalman filter, 1995.

on the trade-off between compression efficiency and distortion of a new compression algorithm for multichannel eeg signals based on singular value decomposition

acta imeko
issn: 2221-870x
june 2022, volume 11, number 2, 1–7

giuseppe campobello1, giovanni gugliandolo1, angelica quercia2, elisa tatti3, maria felice ghilardi3, giovanni crupi2, angelo quartarone2, nicola donato1
1 department of engineering, university of messina, contrada di dio, s. agata, 98166 messina, italy
2 biomorf department, university of messina, aou "g. martino", via c. valeria 1, 98125, messina, italy
3 cuny school of medicine, cuny, 160 convent avenue, new york, ny 10031, usa

section: research paper
keywords: biomedical signal processing; electroencephalograph (eeg); eeg measurements; near-lossless compression; singular value decomposition (svd)
citation: giuseppe campobello, giovanni gugliandolo, angelica quercia, elisa tatti, maria felice ghilardi, giovanni crupi, angelo quartarone, nicola donato, on the trade-off between compression efficiency and distortion of a new compression algorithm for multichannel eeg signals based on singular value decomposition, acta imeko, vol. 11, no. 2, article 30, june 2022, identifier: imeko-acta-11 (2022)-02-30
section editor: francesco lamonaca, university of calabria, italy
received october 24, 2021; in final form february 22, 2022; published june 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: giuseppe campobello, e-mail: gcampobello@unime.it

abstract
in this article we investigate the trade-off between the compression ratio and the distortion of a recently published compression technique specifically devised for multichannel electroencephalograph (eeg) signals. in our previous paper, we proved that, when singular value decomposition (svd) is already performed for denoising or for removing unwanted artifacts, it is possible to exploit the same svd for compression purposes, achieving a compression ratio in the order of 10 and a percentage root-mean-square distortion in the order of 0.01 %. in this article, we successfully demonstrate how, with a negligible increase in the computational cost of the algorithm, it is possible to further improve the compression ratio by about 10 % while maintaining the same distortion level or, alternatively, to improve the compression ratio by about 50 % while still maintaining the distortion level below 0.1 %.

1. introduction

since its invention by the german psychiatrist hans berger almost a century ago [1], electroencephalography (eeg) has continuously evolved, becoming a powerful and extensively used method that allows the spatiotemporal dynamics of brain activity to be measured safely and noninvasively with a high temporal resolution, in the range of milliseconds, which enables rapid changes in the brain rhythms to be detected [2]. the brain rhythms are the periodic fluctuations of the human eeg, which are associated with cognitive processes, physiological states, and neurological disorders [3]. hence, the use of eeg can range from basic research to clinical applications [4]. among the various applications of eeg, it is worth highlighting its recent application in brain-computer interface (bci) research [5], [6]. eeg consists of a neurophysiological measurement of the electrical activity generated by the brain through multiple electrodes placed on the scalp surface. eeg data are measured as the electrical potential difference between two electrodes: an active and a reference electrode. at the neurophysiological level, the electrical potential differences are mostly generated by the summation of both excitatory and inhibitory post-synaptic potentials in tens of thousands of cortical pyramidal neurons that are synchronously activated [7]. hence, the brain sources of the electrical potentials recorded by eeg may correspond to an infinite number of configurations, thereby limiting the spatial resolution of scalp eeg. to overcome this drawback, several source localization methods have been proposed, and their application with high-density eeg (hd-eeg) systems, e.g., with 64–256 electrodes, can lead to a remarkable improvement in eeg spatial resolution [3], [8]. many applications require eeg systems to record continuously for several days or even weeks, and this might easily yield several gigabytes (gb) of data, which makes compression algorithms necessary for efficient data handling.
as an illustrative example, about 2.6 gb of data per day are generated by an eeg system recording data from 64 electrodes with a sampling rate of 250 hz and 16-bit resolution. it should be mentioned that intracranial eeg recordings can generate even terabytes (tb) of data per day [9]. therefore, eeg data need to be largely compressed to efficiently manage their storage. furthermore, data compression is also necessary to reduce both the transmission rate and the power consumption when telemonitoring eeg via wireless links [10], [11]. for instance, wireless wearable eeg systems for long-term recordings should operate under a low-power budget, due to limitations on battery lifetime, and the power consumption therefore needs to be significantly reduced by compressing the data before transmission [12]. various eeg compression algorithms have been developed to minimize the number of bits needed to represent eeg data by exploiting inter- and/or intra-channel correlations of eeg signals. eeg compression algorithms can be classified into two main categories: lossless and lossy compression [13], [14]. as the main goal of compression algorithms is to reduce the size of the data, their performance is typically evaluated using the compression ratio (cr), which is calculated as the ratio between the number of bits required to represent the original and the compressed eeg data. generally, lossy compression enables superior compression performance compared to its lossless counterpart, but it cannot guarantee reconstruction of the exact original data from the compressed version. in such a case, the percent root-mean-square distortion (prd) is used as an indicator for assessing the quality of the reconstructed signal, which is affected by the distortion introduced by the lossy compression. typically, lossless compression algorithms are preferred in clinical practice to avoid diagnostic errors, since important medical information may be disregarded using lossy compression and, in addition, there is a lack of legislation and/or approved standards on lossy compression, making exact eeg reconstruction a more critical requirement than compression performance. on the other hand, the lossless compression approach has a limited impact on storage requirements for eeg applications: as a matter of fact, the use of state-of-the-art lossless compression algorithms allows achieving typical compression ratios in the order of 2 or 3 [15]-[21]. at the same time, eeg signals are of very small amplitude, typically in the order of microvolts (µv), and thus they can be easily contaminated by noise and artifacts, which should be filtered out to highlight and/or extract the actual clinical information [22], [23]. to accomplish this task, digital filters and denoising procedures based on wavelets, principal component analysis (pca) and/or independent component analysis (ica) are often used [24]-[28]. this enables the development of near-lossless pca/ica-based compression algorithms that can achieve much higher compression ratios than those obtained with lossless compression algorithms, with a tolerable reconstruction distortion for the application of interest. different near-lossless eeg compression schemes based on parallel factor decomposition (parafac) and singular value decomposition (svd) have been investigated and compared with wavelet-based compression techniques [29].
in most cases, parafac leads to better compression performance, but the maximum cr obtained with a prd lower than 2 % was 4.96 [29]. a near-lossless algorithm able to obtain a cr of 4.58 with a prd in the range between 0.27 % and 7.28 %, depending on the specific dataset under study, has been proposed in [30]. more recently, an svd-based compression scheme able to obtain 80 % data compression (i.e., cr = 5) with a prd of 5 % has been reported in [31]. most recently, in [32] we proposed a near-lossless compression algorithm for eeg signals able to achieve a compression ratio in the order of 10 with a prd < 0.01 %. in particular, the algorithm has been specifically devised for achieving a very low distortion in comparison to other state-of-the-art solutions. in this paper, we present an improved version of our previous algorithm, with particular attention given to achieving a good trade-off between compression efficiency and distortion. the rest of this paper is organized as follows. in section 2, we briefly review svd and describe our original algorithm proposed in [32]. in section 3, we illustrate the proposed algorithm. in section 4, we present the experimental results obtained on a real-world eeg dataset. finally, future works and our conclusions are drawn in section 5.

2. singular value decomposition

eeg signals are easily contaminated with artifacts and noise and, therefore, they need to be filtered before extracting the actual clinical information. for this purpose, svd-based pca and ica techniques are commonly used. in order to briefly review how svd is exploited in this context, let us consider a high-density $N$-channel eeg system whose signals are sampled for a time interval $T$ at a rate of $f_s$ samples per second (sps). in this case, we have $M = T \cdot f_s$ samples per channel and thus an overall number of samples equal to $N \cdot M$. we assume that such samples are represented by an $N \times M$ matrix $A$. it is known from svd theory that it is possible to decompose a matrix $A$ into three matrices $U$, $\Sigma$, and $V$, such that $A = U \Sigma V^T$. in particular, $\Sigma$ is a diagonal matrix whose diagonal elements, i.e., $\sigma_i$ with $i \in [1, \ldots, N]$, are named singular values. moreover, a rank-$k$ approximation of $A$, i.e., $A_k = U_k \Sigma_k V_k^T$, exists which minimizes the norm $\|A - A_k\|$ and which can be obtained by considering the submatrices $U_k$ and $V_k$, given by the first $k$ columns of $U$ and $V$, respectively, and the leading principal minor of order $k$ of $\Sigma$, i.e., $\Sigma_k$, containing the first $k < N$ singular values. in the specific context of eeg, the desired rank $k$, and thus the number of singular values exploited for the approximation, is chosen by clinicians, or other eeg experts, in order to reduce the effect of undesired artifacts and noise while keeping the clinical information unaltered. in this case, the actual clinical information is contained in $A_k$ and, with the aim of reducing the storage resources needed for eeg samples, it is mandatory to encode the matrix $A_k$ in the most efficient manner. in [32], a solution for the above problem was proposed by deriving the near-lossless compression algorithm reported in figure 1. the basic idea of the algorithm is to decompose the matrix $A_k$ into two matrices, $X_k$ and $Y_k$, such that $A_k = X_k Y_k$. in particular, the matrices $X_k$ and $Y_k$ can be obtained, as shown in step 2, by first evaluating the matrix $S = \Sigma^{1/2}$ and then considering the first $k$ columns of the matrix $US$ and the first $k$ rows of the matrix $SV^T$, i.e., $X_k = (US)[:, 1{:}k]$ and $Y_k = (SV^T)[1{:}k, :]$ in matlab-like notation.
in particular, the matrices 𝑋𝑘 and 𝑌𝑘 can be obtained, as shown in step 2, by first evaluating the matrix 𝑆 = 𝛴1/2 and then considering the first 𝑘 columns of the matrix 𝑈𝑆 and the first 𝑘 rows of the matrix 𝑆𝑉 𝑇 , i.e., 𝑋𝑘 = (𝑈𝑆)[: , 1 ∶ 𝑘] and 𝑌𝑘 = acta imeko | www.imeko.org june 2022 | volume 11 | number 2 | 3 (𝑆𝑉 𝑇 )[1 ∶ 𝑘, ∶] in matlab-like notation. successively (see step 3), maximum absolute values of the matrices 𝑋𝑘 and 𝑌𝑘 , i.e., 𝑚𝑋 = max(|𝑋𝑘 |) and 𝑚𝑌 = max(|𝑌𝑘 |), are evaluated. such values are used in the last step, i.e., step 4, to transform the floating-point matrices 𝑋𝑘 and 𝑌𝑘 into two integer matrices, �̃�𝑘 and �̃�𝑘 , on the basis of the following equations: �̃�𝑘 = round(𝑚𝑌 ⋅ 𝑋𝑘 ) �̃�𝑘 = round(𝑚𝑋 ⋅ 𝑌𝑘 ) . (1) note that the round() operator in above equations is the usual rounding operator, i.e., it rounds a floating point number to the nearest integer number. it is worth observing that actual dimensions of the matrices 𝐴𝑘, �̃�𝑘 , and �̃�𝑘 are, 𝑁 × 𝑀, 𝑁 × 𝑘, and 𝑘 × 𝑀, respectively. thus, the number of elements in �̃�𝑘 and �̃�𝑘 is lower than the number of eeg samples in the matrix 𝐴𝑘. therefore, the matrices �̃�𝑘 and �̃�𝑘 can be considered as an alternative but compressed representation of the matrix 𝐴𝑘. in particular, the expected compression ratio can be derived as follows. let us indicate with 𝑤 the number of bits used to represent each eeg sample in 𝐴𝑘. considering the actual dimensions of the matrices 𝐴𝑘, the overall number of bits needed for representing the matrix 𝐴𝑘 is 𝐵𝑜 = 𝑤 ⋅ 𝑁 ⋅ 𝑀. in the same way, if we suppose that 𝑤 + 𝑎 is the maximum number of bits needed to represent the elements of �̃�𝑘 and �̃�𝑘 , the overall number of bits needed for representing the compressed matrices �̃�𝑘 and �̃�𝑘 is at most 𝐵𝑐 = (𝑤 + 𝑎) ⋅ (𝑁 + 𝑀) ⋅ 𝑘 and therefore the compression ratio can be evaluated as 𝐶𝑅 = 𝐵𝑜 𝐵𝑐 = 𝑤 ⋅ 𝑀 ⋅ 𝑁 (𝑤 + 𝑎) ⋅ (𝑁 + 𝑀) ⋅ 𝑘 . (2) in particular, when 𝑤 + 𝑎 ≈ 𝑤 and 𝑀 >> 𝑁, the expected compression ratio of the proposed algorithm can be approximated as 𝐶𝑅 ≈ 𝑁/𝑘. therefore, a considerable compression can be achieved when 𝑁 >> 𝑘, i.e., in the case of high-density eeg systems with correlated signals. for instance, in the case 𝑁 = 256 and 𝑘 = 15 we have 𝐶𝑅 ≈ 17 so that each gb of eeg data can be compressed and thus stored in less than 60 mb. in their paper, authors proved that, given the matrices �̃�𝑘 and �̃�𝑘 and the scale factor 𝑠 = 𝑚𝑋 ⋅ 𝑚𝑌 , an effective approximation �̃�𝑘 of the matrix 𝐴𝑘 is given by the following equation �̃�𝑘 = round ( �̃�𝑘 �̃�𝑘 𝑠 ) . (3) basically, the above relation provides the reconstruction equation needed for decompression. experimental results reported in [32] have shown that the maximum absolute error 𝑀𝐴𝐸 = |𝐴𝑘 − �̃�𝑘 | introduced by the above approximation is bounded by 𝑀𝐴𝐸 ≤ 2, that is a negligible error in comparison to the actual range of the original eeg samples, i.e. [−2𝑤−1, +2𝑤−1 − 1]. 3. proposed algorithm in this section, we slightly modify the previous algorithm with the aim of: 1) improving the compression ratio; 2) parameterizing the algorithm. in particular, we derived a new version of the algorithm able to achieve different trade-offs between compression efficiency and distortion. basically, the new algorithm exploits the fact that consecutive values in the matrix �̃�𝑘 are highly correlated. therefore, a further reduction in the number of bits, and thus an increasing in the compression ratio, can be obtained by encoding the differences between consecutive values in �̃�𝑘 instead of the matrix �̃�𝑘 itself. 
3. proposed algorithm

in this section, we slightly modify the previous algorithm with the aim of: 1) improving the compression ratio; 2) parameterizing the algorithm. in particular, we derived a new version of the algorithm able to achieve different trade-offs between compression efficiency and distortion. basically, the new algorithm exploits the fact that consecutive values in the matrix $\tilde{Y}_k$ are highly correlated. therefore, a further reduction in the number of bits, and thus an increase in the compression ratio, can be obtained by encoding the differences between consecutive values in $\tilde{Y}_k$ instead of the matrix $\tilde{Y}_k$ itself. more precisely, let us introduce the matrix

$\tilde{D}Y_k = \left[(\tilde{Y}_k[1,:])^T ;\ \mathrm{diff}(\tilde{Y}_k^T)\right]$ , (4)

where diff() returns the matrix of differences along the first dimension. it is worth observing that the matrix $\tilde{Y}_k$ can be exactly recovered from $\tilde{D}Y_k$ as

$\tilde{Y}_k = \mathrm{cumsum}(\tilde{D}Y_k)^T$ , (5)

where cumsum() is the cumulative sum of the elements along the first dimension. therefore, no further losses are introduced if, instead of the matrix $\tilde{Y}_k$, the matrix of differences $\tilde{D}Y_k$ is stored or transmitted. on the basis of the previous observation, a new compression algorithm for eeg has been derived; it can be summarized as shown in figure 2. note that, in comparison to the previous algorithm, we introduced a new step (see step 5), highlighted in bold for the sake of readability. reconstruction, i.e., decompression, can be easily achieved by obtaining $\tilde{Y}_k$ with (5) and then using again (3) to recover $\tilde{A}_k$.

figure 1. illustration of the compression algorithm proposed in [32].
• inputs: an integer number $k < N$ and a matrix $A$, formed by $N \times M$ eeg samples;
• outputs: integer matrices $\tilde{X}_k$ and $\tilde{Y}_k$ and the scale factor $s = m_X \cdot m_Y$.
• algorithm:
1) use svd to decompose $A$ as $A = U \Sigma V^T$
2) obtain the matrices $S = \Sigma^{1/2}$, $X_k = (US)[:, 1{:}k]$ and $Y_k = (SV^T)[1{:}k, :]$
3) evaluate $m_X = \max(|X_k|)$, $m_Y = \max(|Y_k|)$
4) calculate $s = m_X \cdot m_Y$, $\tilde{X}_k = \mathrm{round}(m_Y \cdot X_k)$ and $\tilde{Y}_k = \mathrm{round}(m_X \cdot Y_k)$

figure 2. illustration of the proposed compression algorithm.
• inputs: an integer number $k < N$, the matrix $A$ formed by $N \times M$ eeg samples and a scale factor $F$;
• outputs: integer matrices $\tilde{X}_k$ and $\tilde{D}Y_k$ and the scale factor $s = m_X \cdot m_Y$.
• algorithm:
1) use svd to decompose $A$ as $A = U \Sigma V^T$
2) obtain the matrices $S = \Sigma^{1/2}$, $X_k = (US)[:, 1{:}k]$ and $Y_k = (SV^T)[1{:}k, :]$
3) evaluate $m_X = \max(|X_k|)/F$, $m_Y = \max(|Y_k|)/F$
4) calculate $s = m_X \cdot m_Y$, $\tilde{X}_k = \mathrm{round}(m_Y \cdot X_k)$ and $\tilde{Y}_k = \mathrm{round}(m_X \cdot Y_k)$
5) calculate the matrix $\tilde{D}Y_k = \left[(\tilde{Y}_k[1,:])^T ;\ \mathrm{diff}(\tilde{Y}_k^T)\right]$

note that in the new algorithm we introduced a new input parameter, i.e., the scale factor $F$, which can be used to achieve different trade-offs between compression efficiency and distortion. in particular, the factor $F$ is exploited for reducing $m_X$ and $m_Y$ (see step 3 in figure 2). it is worth noting that $m_X$ and $m_Y$ in the new and previous algorithms assume the same values when $F = 1$. therefore, intuitively and as confirmed by the experimental results reported in the next section, there is no difference in the distortion achieved by the two algorithms when $F = 1$. instead, by choosing a value of $F$ greater than 1, it is possible to achieve a greater compression ratio. this can be easily justified by observing that, according to (1), by reducing $m_X$ and $m_Y$ we further reduce the dynamic range of the elements in the matrices $\tilde{X}_k$ and $\tilde{Y}_k$ and thus the number of bits needed for their representation. obviously, a greater compression ratio is obtained at the cost of a greater distortion. nevertheless, the experimental results reported in the next section show that the proposed algorithm improves the compression ratio by about 10 % while achieving the same distortion level as our previous algorithm, i.e., 0.01 %. moreover, a substantial increase in the compression ratio, up to 50 %, can be achieved while still maintaining the distortion level below 0.1 %.
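a sketch of the proposed variant is given below. note that, for the first row of $\tilde{D}Y_k$, a dimensionally consistent reading of eq. (4) is adopted (the first time sample of every component); this interpretation, like the variable names, is our own.

```python
import numpy as np

def compress_v2(A: np.ndarray, k: int, F: float = 1.0):
    """proposed variant: scale factor F (step 3) and difference encoding (step 5)."""
    U, sigma, Vt = np.linalg.svd(A.astype(np.float64), full_matrices=False)
    S = np.sqrt(sigma)
    Xk = U[:, :k] * S[:k]
    Yk = S[:k, None] * Vt[:k, :]
    mX = np.abs(Xk).max() / F          # step 3: maxima reduced by F
    mY = np.abs(Yk).max() / F
    s = mX * mY
    Xk_int = np.round(mY * Xk).astype(np.int64)
    Yk_int = np.round(mX * Yk).astype(np.int64)
    # step 5, eq. (4): first time sample followed by differences along time
    DYk = np.vstack([Yk_int.T[:1, :], np.diff(Yk_int.T, axis=0)])
    return Xk_int, DYk, s

def decompress_v2(Xk_int, DYk, s):
    Yk_int = np.cumsum(DYk, axis=0).T  # eq. (5): exact recovery of Yk
    return np.round((Xk_int @ Yk_int) / s).astype(np.int64)
```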
4. measurement-based results

the proposed compression algorithm is applied to a dataset containing real eeg signals, which have been preprocessed by eeg experts to denoise them and remove artifacts. the eeg dataset under study has been provided by the cuny school of medicine (new york, ny, usa). this dataset refers to awake eeg recordings from the research work published in [25]. in that study, tatti et al. investigated the role of beta oscillations (13.5–25 hz) in the sensorimotor system in a group of healthy individuals. in the experiment, participants were asked to perform planar reaching movements (mov test). the mov test required the participants to reach a target, located at different distances and directions, that appeared on a screen in non-repeating and unpredictable order at 3-second intervals. participants were asked to make reaching movements by moving a cursor on a digitizing tablet with their right hand towards targets appearing on the screen. the total testing time was approximately five to six minutes for each eeg recording (96 targets). each mov test was measured with a 256-channel high-density eeg system (hydrocel geodesic sensor net, hcgsn, produced by electrical geodesic inc., eugene, or, usa), amplified using a net amp 300 amplifier, and sampled at 250 hz with 16-bit resolution using the net station software (version 5.0). eeg was noninvasively recorded using scalp electrodes, and electrode-skin impedances were kept lower than 50 kΩ. the eeglab toolbox (v13.6.5b) for matlab (v.2016b) was used for off-line preprocessing of the gathered eeg data [33], [34]. the signal of each recording was first filtered using a finite impulse response (fir) bandpass filter with a passband extending from 1 hz to 80 hz and notch filtered at 60 hz. then, each recording was segmented in 4-second epochs and visually examined to remove sporadic artifacts and channels with poor signal quality. moreover, ica with pca-based dimension reduction (max 108 components) was employed to identify stereotypical artifacts (e.g., ocular, muscle, and electrocardiographic artifacts). only ica components with specific activity patterns and component maps characteristic of artefactual activity were removed. electrodes with poor signal quality were reconstructed with spherical spline interpolation procedures, whereas those located on the cheeks and neck were excluded, resulting in 180 signals. after the preprocessing, all signals were re-referenced to their initial average values and the processed eeg data were exported in the european data format (edf) [35] by means of the eeglab toolbox. in particular, with the aim of evaluating the performance of the proposed algorithm, six edf files, related to three subjects (labelled with the subject numbers sn_m2, sn_m4 and sn_m5) and two sets of mov tests for each subject ("alltrials_1" and "alltrials_4"), have been tested. the data range, number of samples, and a few other details of the above-mentioned edf files are reported in table 1. it is worth observing that samples are represented with 16-bit integer numbers, i.e., $w = 16$, and that the overall number of samples exploited for the tests is more than $80 \cdot 10^6$. in order to apply the proposed algorithm, each edf file has been read and the related data have been processed in blocks of $N \times M$ samples, where $N$ has been chosen equal to 180, i.e., $N$ coincides with the number of eeg channels remaining after the preprocessing phase, and $M$ has been fixed equal to 1,000, so that each block of samples represents 4 seconds of data recorded by the multichannel eeg system.
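the block segmentation just described might look as follows; reading the edf file itself (e.g., with the pyedflib package) is left out, and the tail shorter than M samples is simply dropped here for brevity.

```python
import numpy as np

def iter_blocks(samples: np.ndarray, M: int = 1000):
    """yield n x m blocks from an n-channel recording (channels as rows),
    i.e., 4 s of data per block at 250 sps with m = 1000."""
    n_channels, n_samples = samples.shape
    for start in range(0, n_samples - M + 1, M):
        yield samples[:, start:start + M]
```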
the proposed compression algorithm, i.e., the algorithm in figure 2, has been applied to each block and the average compression ratio has been evaluated according to the relation

$CR_F = \dfrac{1}{L} \sum_{i=1}^{L} \dfrac{B_{o,i}}{B_{c,i}}$ , (6)

where $L$ represents the number of blocks processed, $B_{o,i}$ is the number of bits needed to represent the $i$-th block before compression, and $B_{c,i}$ is the number of bits needed to represent the same block after compression. note that we use the subscript $F$ to highlight the scale factor used for compression, e.g., $CR_2$ is the compression ratio achieved when $F = 2$. when the scale factor is not expressly stated, we assume $F = 1$. subsequently, each compressed block has been reconstructed according to (5) and (3). finally, the distortion metrics, i.e., $PRD$ and $MAE$, have been evaluated on the whole edf file using the following equations:

$PRD = 100 \cdot \sqrt{\dfrac{\sum_{i=1}^{N} \sum_{j=1}^{M \cdot L} (a_{ij} - \tilde{a}_{ij})^2}{\sum_{i=1}^{N} \sum_{j=1}^{M \cdot L} a_{ij}^2}}$ (7)

$MAE = \max_{i,j} |a_{ij} - \tilde{a}_{ij}|$ , (8)

where $\tilde{a}_{ij}$ are the integer values obtained after reconstruction and $a_{ij}$ are the original samples. in our experiments, we further evaluated the compression efficiency of the proposed algorithm with respect to the near-lossless compression algorithm proposed in [32]. in particular, the compression efficiency ($CE$) is here defined as

$CE = 100 \cdot \dfrac{CR_F - CR_0}{CR_0}$ , (9)

where $CR_0$ is the compression ratio obtained with the algorithm proposed in [32], i.e., the algorithm reported in figure 1. similarly, we use $PRD_0$ and $MAE_0$ to refer to the related distortion metrics. compression results achieved with the proposed compression algorithm by setting the scale factor $F = 1$ are shown in table 2. more precisely, for each compressed file, we report the number of singular values exploited for compression ($k$), the compression ratio ($CR_1$), the values of the distortion metrics ($PRD$ and $MAE$), and the compression efficiency ($CE$) of the proposed algorithm and, in brackets, the corresponding compression results obtained with the algorithm proposed in [32], evaluated on the same files and considering the same number of singular values. as can be observed, the compression ratio achieved by the proposed algorithm when $F = 1$ is near $N/k$, which confirms our analytical results reported in section 3. note that the prd is less than 0.01 % for all the edf files tested. in particular, the prd values obtained with $F = 1$ are the same as those obtained in [32]. the same consideration can be extended to the mae. this confirms that the two algorithms in figure 2 and figure 1 have the same performance in terms of distortion when $F = 1$. however, by observing the results on compression efficiency (see the last column of table 2), it is possible to conclude that, in comparison to the algorithm proposed in [32], the new one proposed in this paper is able to improve the compression ratio in a range between 7 % and 9 %. moreover, the scale factor $F$ introduced in the new algorithm provides the possibility of achieving even higher compression ratios, obviously at the cost of a greater distortion. we investigated the trade-off between compression efficiency and distortion of the proposed algorithm by considering different values of the scale factor $F$ within the range [1, 16]. in particular, table 3 reports the compression ratios ($CR_F$), the distortion metrics ($PRD$ and $MAE$), and the compression efficiency ($CE$) corresponding to $F \in \{1, 2, 4, 8, 16\}$.
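equations (6)-(8) translate directly into numpy, as sketched below (names are ours):

```python
import numpy as np

def prd(a: np.ndarray, a_rec: np.ndarray) -> float:
    """percent root-mean-square distortion, eq. (7)."""
    a = a.astype(np.float64)
    return 100.0 * np.sqrt(np.sum((a - a_rec) ** 2) / np.sum(a ** 2))

def mae(a: np.ndarray, a_rec: np.ndarray) -> int:
    """maximum absolute error, eq. (8)."""
    return int(np.max(np.abs(a.astype(np.int64) - a_rec)))

def average_cr(bits_original, bits_compressed) -> float:
    """average compression ratio over l blocks, eq. (6)."""
    return float(np.mean(np.asarray(bits_original) / np.asarray(bits_compressed)))
```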
as can be observed in table 3, by increasing $F$ it is possible to improve the compression ratio and thus the compression efficiency. in particular, by fixing $F = 16$, the proposed algorithm is able to improve the compression ratio by about 50 % while maintaining the $PRD$ below the 0.1 % threshold (in fact, the $PRD$ is at most equal to 0.081 % for all the files tested). it is also worth noting that the mae obtained in our experimental results is approximately equal to $2F$. finally, we evaluated the distribution of the absolute errors in the recovered signals. in figure 3 we report the cumulative distribution function (cdf) of the absolute errors, i.e., the probability $P(|err| \le x)$ that the absolute error $|err|$ is lower than a threshold $x$, achieved for different values of $F$. as can be observed in figure 3, the percentage of samples with an absolute error lower than $F$ after reconstruction is close to 100 % for all the edf files in the dataset. note that the vertical lines in figure 3 represent the condition $P(|err| \le F)$. therefore, we can state that the scale factor $F$, which is needed as input in the proposed algorithm, can be fixed according to the desired mae, i.e., for a given value of $F$, the mae obtained after reconstruction will be, with high probability, within the range $[F, 2F]$.

figure 3. cumulative distribution function (cdf) of the absolute errors obtained after reconstruction with the proposed near-lossless algorithm for different scale factors (f). vertical lines represent the condition p(|err| ≤ f).

table 1. edf files used as dataset.
file name           channels   duration (s)   number of samples   physical range (μv)   data range
sn_m2_alltrials_1   180        344            15 480 000          [-35.500, +30.314]    [-32768, 32767]
sn_m2_alltrials_4   180        340            15 300 000          [-48.550, +38.539]    [-32768, 32767]
sn_m4_alltrials_1   180        328            14 760 000          [-34.532, +34.756]    [-32768, 32767]
sn_m4_alltrials_4   180        264            11 880 000          [-41.100, +40.673]    [-32768, 32767]
sn_m5_alltrials_1   180        324            14 580 000          [-41.463, +38.867]    [-32768, 32767]
sn_m5_alltrials_4   180        308            13 860 000          [-41.347, +46.929]    [-32768, 32767]

table 2. compression ratio (cr), percent root-mean-square distortion (prd), maximum absolute error (mae), and compression efficiency (ce) of the proposed algorithm when f = 1 (values of cr0 and prd0 are reported in brackets).
file name           k    cr1           prd (%)           mae     ce (%)
sn_m2_alltrials_1   20   8.6 (7.9)     0.0065 (0.0065)   2 (2)   8.6
sn_m2_alltrials_4   20   8.6 (7.9)     0.0063 (0.0063)   2 (2)   8.9
sn_m4_alltrials_1   12   14.6 (13.6)   0.0075 (0.0075)   2 (2)   7.4
sn_m4_alltrials_4   12   14.4 (13.4)   0.0069 (0.0069)   2 (2)   7.5
sn_m5_alltrials_1   13   13.4 (12.5)   0.0071 (0.0071)   2 (2)   7.2
sn_m5_alltrials_4   13   13.2 (12.3)   0.0069 (0.0069)   2 (2)   7.3

table 3. compression results achieved with the proposed algorithm for different values of the scale factor f.
file name           f    crf     prd (%)   mae   ce (%)
sn_m2_alltrials_1   1    8.58    0.006     2     8.6
                    2    9.23    0.010     4     16.8
                    4    9.98    0.018     8     26.3
                    8    10.87   0.035     16    37.6
                    16   11.94   0.069     32    51.1
sn_m2_alltrials_4   1    8.60    0.006     2     8.9
                    2    9.25    0.010     4     17.1
                    4    10.01   0.017     8     26.7
                    8    10.91   0.033     17    38.1
                    16   11.98   0.066     31    51.6
sn_m4_alltrials_1   1    14.55   0.007     2     7.4
                    2    15.68   0.012     5     15.7
                    4    16.99   0.021     7     25.4
                    8    18.53   0.041     15    36.8
                    16   20.39   0.081     34    50.5
sn_m4_alltrials_4   1    14.43   0.007     2     7.5
                    2    15.53   0.011     4     15.7
                    4    16.81   0.019     8     25.3
                    8    18.33   0.036     16    36.6
                    16   20.14   0.072     31    50.1
sn_m5_alltrials_1   1    13.42   0.007     2     7.2
                    2    14.45   0.011     5     15.4
                    4    15.66   0.020     8     25.1
                    8    17.08   0.039     20    36.4
                    16   18.79   0.077     34    50.1
sn_m5_alltrials_4   1    13.16   0.007     2     7.3
                    2    14.16   0.010     4     15.5
                    4    15.31   0.019     9     24.9
                    8    16.67   0.037     16    36.0
                    16   18.30   0.073     35    49.3
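the empirical cdf plotted in figure 3 might be computed along these lines (a sketch; names are illustrative):

```python
import numpy as np

def error_cdf(a: np.ndarray, a_rec: np.ndarray, thresholds) -> np.ndarray:
    """empirical p(|err| <= x) over all reconstructed samples, as in figure 3."""
    abs_err = np.abs(a.astype(np.int64) - a_rec).ravel()
    return np.array([(abs_err <= x).mean() for x in thresholds])
```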
5. conclusions

in this paper, we developed and validated an improved version of a recently proposed near-lossless compression algorithm for multichannel eeg signals. the algorithm exploits the fact that svd is usually performed on eeg signals for artifact removal or denoising tasks. the experimental results reported in this paper show that the developed algorithm is able to achieve a compression ratio proportional to the number of eeg channels with a root-mean-square distortion of less than 0.01 %. moreover, with proper settings of the input parameters, the compression ratio can be further improved by up to 50 % while maintaining the distortion level below 0.1 %. in addition, the algorithm allows the desired maximum absolute error to be fixed a priori. it should be highlighted that, although an eeg dataset has been considered as a case study, the proposed compression algorithm can be applied quite straightforwardly to different types of datasets. in a future work, we will further investigate the performance of the proposed algorithm considering more extended datasets and other types of signals.

references
[1] h. berger, über das elektrenkephalogramm des menschen, arch. psychiat. nervenkr. (1929), pp. 527–570. doi: 10.1007/bf01797193
[2] j. t. koyazo, m. a. ugwiri, a. lay-ekuakille, m. fazio, m. villari, c. liguori, collaborative systems for telemedicine diagnosis accuracy, acta imeko 10 (2021) 3, pp. 192–197. doi: 10.21014/acta_imeko.v10i3.1133
[3] d. a. pizzagalli, electroencephalography and high-density electrophysiological source localization (handbook of psychophysiology, 3rd ed.), cambridge university press, 2007. doi: 10.1017/cbo9780511546396
[4] d. l. schomer, f. l. da silva, niedermeyer's electroencephalography: basic principles, clinical applications, and related fields, lippincott williams & wilkins, 2012.
[5] n. yoshimura, o. koga, y. katsui, y. ogata, h. kambara, y. koike, decoding of emotional responses to user-unfriendly computer interfaces via electroencephalography signals, acta imeko 6 (2017). doi: 10.21014/acta_imeko.v6i2.383
[6] r. abiri, s. borhani, e. sellers, y. jiang, x. zhao, a comprehensive review of eeg-based brain-computer interface paradigms, j. neural eng. 16 (2018), p. 011001. doi: 10.1088/1741-2552/aaf12e
[7] p. l. nunez, r. srinivasan, electric fields of the brain: the neurophysics of eeg, oxford university press, usa, 2006, isbn: 9780195050387.
[8] c. lustenberger, r. huber, high density electroencephalography in sleep research: potential, problems, future perspective, front. neurol. 3 (2012), p. 77. doi: 10.3389/fneur.2012.00077
[9] b. h. brinkmann, m. r. bower, k. a. stengel, g. a. worrell, m. stead, large-scale electrophysiology: acquisition, compression, encryption, and storage of big data, j. neurosc. meth. 180 (2009), pp. 185–192. doi: 10.1016/j.jneumeth.2009.03.022
[10] g. gugliandolo, g. campobello, p. p. capra, s. marino, a. bramanti, g. d. lorenzo, n. donato, a movement-tremors recorder for patients of neurodegenerative diseases, ieee trans. instrum. meas. 68 (2019), pp. 1451–1457.
doi: 10.1109/tim.2019.2900141 [11] g. campobello, a. segreto, s. zanafi, s. serrano, an efficient lossless compression algorithm for electrocardiogram signals. in proceedings of the 26th european signal processing conference, eusipco 2018, roma, italy, september 3-7, 2018, pp. 777–781. doi: 10.23919/eusipco.2018.8553597 [12] a. casson, d. yates, s. smith, j. duncan, e. rodriguez-villegas, wearable electroencephalography, ieee embs mag. 29 (2010), pp. 44–56. doi: 10.1109/memb.2010.936545 [13] g. campobello, o. giordano, a. segreto, s. serrano, comparison of local lossless compression algorithms for wireless sensor networks, j netw. comput. appl 47 (2015), pp. 23–31. doi: 10.1016/j.jnca.2014.09.013 [14] a. nait-ali, c. cavaro-menard, compression of biomedical images and signals, john wiley & sons, 2008, isbn: 978-1-84821028-8. [15] n. sriraam, correlation dimension based lossless compression of eeg signals, biomed. signal process. control 7 (2012), pp. 379–388. doi: 10.1016/j.bspc.2011.06.007 [16] n. sriraam, c. eswaran, lossless compression algorithms for eeg signals: a quantitative evaluation, in proceedings of the ieee/embs 5th international workshop on biosignal interpretation, tokyo japan, september 6-8, 2005, pp. 125–130. [17] y. wongsawat, s. oraintara, t. tanaka, k. r. rao, lossless multichannel eeg compression, in proceedings of the 2006 ieee international symposium on circuits and systems, island of kos, greece, may 21-24, 2006, p. 4 pp. – 1614. doi: 10.1109/iscas.2006.1692909 [18] g. antoniol, p. tonella, eeg data compression techniques, ieee trans biomed. eng. 44 (1997), pp. 105–114. doi: 10.1109/10.552239 [19] n. sriraam, c. eswaran, performance evaluation of neural network and linear predictors for near-lossless compression of eeg signals, ieee trans. inf. technol. biomed. 12 (2008), pp. 87–93. doi: 10.1109/titb.2007.899497 [20] g. campobello, a. segreto, s. zanafi, s. serrano, rake: a simple and efficient lossless compression algorithm for the internet of things. in proceedings of the 2017 25th european signal processing conference (eusipco), kos island, greece, 28 august 2 september, 2017, pp. 2581–2585. doi: 10.23919/eusipco.2017.8081677 [21] k. srinivasan, j. dauwels, m. r. reddy, a two-dimensional approach for lossless eeg compression, biomed. signal process. control 6 (2011), pp. 387–394. doi: 10.1016/j.bspc.2011.01.004 [22] n. ille, p. berg, m. scherg, artifact correction of the ongoing eeg using spatial filters based on artifact and brain signal topographies, j. clin. neurophysiol 19 (2002), pp. 113–124. doi: 10.1097/00004691-200203000-00002 [23] r. j. davidson, d. c. jackson, c. l. larson, human electroencephalography (handbook of psychophysiology, 2nd ed.), cambridge university press, 2000. [24] n. mammone, d. labate, a. lay-ekuakille, f. c. morabito, analysis of absence seizure generation using eeg spatialtemporal regularity measures, int j neural syst 22 (2012). doi: 10.1142/s0129065712500244 [25] e. tatti, s. ricci, a. b. nelson, d. mathew, h. chen, a. quartarone, c. cirelli, g. tononi, m. f. ghilardi, prior practice affects movement-related beta modulation and quiet wake restores it to baseline, front. syst. neurosci 14 (2020), pp. 61. doi: 10.3389/fnsys.2020.00061 [26] m. k. islam, a. rastegarnia, z. yang, methods for artifact detection and removal from scalp eeg: a review. neurophysiol. clin. 46 (2016), pp. 287–305. doi: 10.1016/j.neucli.2016.07.002 [27] s. casarotto, a. m. bianchi, s. cerutti, g. a. 
chiarenza, principal component analysis for reduction of ocular artefacts in event-related potentials of normal and dyslexic children, clin. neurophysiol. 115 (2004), pp. 609–619. doi: 10.1016/j.clinph.2003.10.018
[28] z. anusha, j. jinu, t. geevarghese, automatic eeg artifact removal by independent component analysis using critical eeg rhythms, in proceedings of the 2013 ieee international conference on control communication and computing (iccc), trivandrum, kerala, india, december 13-15, 2013, pp. 364–367. doi: 10.1109/iccc.2013.6731680
[29] j. dauwels, k. srinivasan, m. r. reddy, a. cichocki, near-lossless multichannel eeg compression based on matrix and tensor decompositions, ieee j. biomed. health inform. 17 (2013), pp. 708–714. doi: 10.1109/titb.2012.2230012
[30] l. lin, y. meng, j. chen, z. li, multichannel eeg compression based on ica and spiht, biomed. signal process. control 20 (2015), pp. 45–51. doi: 10.1016/j.bspc.2015.04.001
[31] m. k. alam, a. a. aziz, s. a. latif, a. awang, eeg data compression using truncated singular value decomposition for remote driver status monitoring, in proceedings of the 2019 ieee student conference on research and development (scored), universiti teknologi petronas (utp), malaysia, 15-17 october 2019, pp. 323–327. doi: 10.1109/scored.2019.8896252
[32] g. campobello, a. quercia, g. gugliandolo, a. segreto, e. tatti, m. f. ghilardi, g. crupi, a. quartarone, n. donato, an efficient near-lossless compression algorithm for multichannel eeg signals, 2021 ieee international symposium on medical measurements and applications (memea), neuchâtel, switzerland, june 23-25, 2021. doi: 10.1109/memea52024.2021.9478756
[33] a. delorme, s. makeig, eeglab: an open source toolbox for analysis of single-trial eeg dynamics including independent component analysis, j. neurosci. methods 134 (2004), pp. 9–21. doi: 10.1016/j.jneumeth.2003.10.009
[34] s. makeig, s. debener, j. onton, a. delorme, mining event-related brain dynamics, trends cogn. sci. 8 (2004), pp. 204–210. doi: 10.1016/j.tics.2004.03.008
[35] b. kemp, a. värri, a. c. rosa, k. d. nielsen, j. gade, a simple format for exchange of digitized polygraphic recordings, electroencephalogr. clin. neurophysiol. 82 (1992), pp. 391–393.
On the Design and Characterisation of a Microwave Microstrip Resonator for Gas Sensing Applications

ACTA IMEKO, ISSN: 2221-870X, June 2021, Volume 10, Number 2, 54-61

Giovanni Gugliandolo1, Davide Aloisio2, Giuseppe Campobello1, Giovanni Crupi3, Nicola Donato1
1 Department of Engineering, University of Messina, Italy
2 CNR ITAE, Messina, Italy
3 BIOMORF Department, University of Messina, Italy

Section: Research Paper
Keywords: microwaves; resonators; gas sensors; metrological evaluation; humidity
Citation: Giovanni Gugliandolo, Davide Aloisio, Giuseppe Campobello, Giovanni Crupi, Nicola Donato, On the design and characterisation of a microwave microstrip resonator for gas sensing applications, Acta IMEKO, vol. 10, no. 2, article 9, June 2021, identifier: IMEKO-ACTA-10 (2021)-02-09
Section Editor: Ciro Spataro, University of Palermo, Italy
Received January 17, 2021; in final form May 4, 2021; published June 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Giovanni Gugliandolo, e-mail: giovanni.gugliandolo@unime.it

Abstract
This study focuses on the microwave characterisation of a microstrip resonator intended for gas sensing applications. The developed one-port microstrip resonator, consisting of three concentric rings with a central disk, is coupled to a 50-Ω microstrip feedline through a small gap. A humidity sensing layer is deposited on this gap by drop-coating an aqueous solution of Ag@α-Fe2O3 nanocomposite. The operating principle of the developed humidity sensor is based on the change of the dielectric properties of the Ag@α-Fe2O3 nanocomposite when the relative humidity is varied; depending on the choice of the sensing material, however, different target gases can be detected with the proposed structure. The frequency-dependent response of the sensor is obtained from the reflection coefficient measured from 3.5 GHz to 5.6 GHz, with relative humidity ranging from 0 %RH to 83 %RH. The variation of the humidity concentration strongly impacts the two resonances detected in the measured reflection coefficient. In particular, an increase of the humidity level lowers both resonant frequencies, which can therefore be used as sensing parameters for humidity monitoring purposes. An exponential function has been used to accurately model the two resonant frequencies as a function of the humidity.

1. Introduction
Nowadays, research interest in the development of sensors with extremely low power consumption is growing because of the increasingly strict energy-saving requirements of an expanding market. This can be seen in the recent high demand for portable, battery-powered devices often used in wireless sensor networks (WSNs) for industrial (e.g., harmful gas detection) [1], [2], healthcare (e.g., wearable or implantable devices) [3]-[6], and environmental (e.g., weather forecasting) [7]-[10] monitoring applications. Several sensor typologies have been investigated in order to achieve the best trade-off between performance and power consumption, with a focus on size, weight, and production costs. In this context, microwave devices are considered an attractive solution thanks to their interesting features in terms of cost, power consumption, and response time.
They have been employed for materials characterisation [11]-[13] as well as for gas sensing applications [14]. Microwave gas sensors are able to operate at room temperature without the need for a heater [15], [16]. Moreover, they are fully compatible with wireless technology, so they can be easily integrated into wireless smart nodes [17]-[19]. In particular, planar microstrip technology is widely employed in the fabrication of microwave components such as antennas, filters, and resonators. Such devices are often used in sensing applications because of their low cost, easy fabrication, and good performance [20]-[24]. Microwave microstrip sensors are attractive especially for gas sensing applications, where the frequency-dependent dielectric properties of the sensing material are related to the adsorption of the target gas on the sensing layer deposited on the microstrip propagative structure. Progress in nanotechnologies has enabled advances in gas sensors using nanostructured materials as sensing layers [14], [25]-[27].

Following on from the results of our previous study [28], we present here a thorough investigation of a one-port gas transducer based on a microwave microstrip resonator, which is validated as a humidity sensor by using an Ag@α-Fe2O3 nanocomposite as sensing material. The experimental investigation focuses on both magnitude and phase of the reflection coefficient (Γ) and its corresponding impedance (Z). In particular, we monitored the relative humidity over the broad range from 0 % to 83 % at room temperature, assessing the sensing performance of the developed gas transducer in terms of the variations of the frequency-dependent behaviour of Γ. As shown later in this paper, two dips are clearly visible in the magnitude of Γ for the proposed sensor, at approximately 3.7 GHz and 5.4 GHz, and they shift towards lower frequencies when the humidity level is increased.
Hence, the resonant frequencies (f_r1 and f_r2) associated with the two dips observed in Γ can be directly used as humidity sensing parameters. To this end, a sensitivity-based investigation was carried out to assess the performance of the proposed microwave sensor for humidity monitoring applications. The humidity-dependent variations of the two resonant frequencies are accurately modelled by an exponential function.

This article is structured as follows. Section 2 is dedicated to the design of the microstrip resonator, which is based on three concentric rings with a central disk. This choice was made after a careful analysis of the performance of different resonator topologies through computer simulations. Compared to the traditional ring configuration [29], [30], the proposed topology improves the quality (Q-) factor and, thus, the detection process. Section 3 is devoted to the development of the humidity sensor, which is based on an Ag@α-Fe2O3 nanocomposite as sensing material. It is worth noting that the high porosity of the nanostructure enhances the interaction with water vapour, thereby leading to an improved humidity sensitivity. Section 4 describes the fitting of the measurements locally around the two observed resonances by using a Lorentzian function. Section 5 presents the setup for the frequency- and humidity-dependent characterisation and the experimental results. Finally, the conclusions are drawn in the last section.

2. Resonator Design and Simulation
The proposed gas transducer is based on a concentric-rings microstrip (CRM) resonator acting as a propagative structure for the electromagnetic waves. This novel topology is composed of three concentric copper rings with a 6-mm copper central disk and a 50-Ω microstrip feedline coupled to the resonator through a 0.2-mm gap. The MATLAB Antenna Toolbox was used for the design process. As illustrated in Figure 1, four different resonator topologies were considered during the design step, based on computer simulations over the frequency range from 3 GHz to 6 GHz: the classic ring resonator, two concentric rings, three concentric rings, and three concentric rings with a disk in the middle. Starting from the traditional configuration, the coupling gap and the ring thickness were optimised in terms of Q-factor; additional rings were then included in the design with constant spacing. Figure 2 shows the frequency-dependent behaviour of the magnitude of the simulated Γ for the four studied topologies. As can be observed, the computer simulations show that all investigated topologies exhibit two resonances in Γ, detected as two marked dips occurring at about 3.7 GHz (dip 1) and 5.4 GHz (dip 2). The two dips are more clearly visible in Figure 3, where the observation of Γ is limited to the two narrow frequency bands around f_r1 and f_r2.

Figure 1. Illustration of the four studied resonator topologies: (a) traditional ring, (b) two concentric rings, (c) three concentric rings, and (d) three concentric copper rings with a central disk.

Figure 2. Behaviour of the magnitude of the simulated reflection coefficient versus frequency from 3.0 GHz to 6.0 GHz for the four studied resonator topologies.
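As a rough sanity check on where such resonances land, the fundamental resonance of a classic microstrip ring resonator can be estimated from its mean circumference. The sketch below (Python) applies the standard condition that the mean circumference equals an integer number of guided wavelengths; the 7 mm radius and the effective permittivity are hypothetical values chosen for illustration, since the actual ring dimensions are not given here.

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum (m/s)

def ring_resonance(mean_radius_m: float, eps_eff: float, n: int = 1) -> float:
    """Estimate the n-th resonance of a microstrip ring resonator.

    Resonance condition: the mean circumference equals n guided
    wavelengths, 2*pi*r = n * lambda_g, with lambda_g = c / (f * sqrt(eps_eff)),
    hence f_n = n * c / (2 * pi * r * sqrt(eps_eff)).
    """
    return n * C0 / (2.0 * math.pi * mean_radius_m * math.sqrt(eps_eff))

# Hypothetical numbers: a 7 mm mean radius and eps_eff ~ 3.2 (a plausible
# effective permittivity for an FR4 substrate with eps_r = 4.3).
print(f"f1 ≈ {ring_resonance(7.0e-3, 3.2) / 1e9:.2f} GHz")
```

With these assumed values the estimate falls near 3.8 GHz, i.e. in the region of the first observed dip; the second dip belongs to a different ring of the structure.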
Figure 3. Illustration of the two dips appearing in the magnitude of the simulated reflection coefficient for the four studied resonator topologies.

To assess the microwave performance of the studied resonator topologies, the quality-factor improvement was evaluated using the single-ring configuration as a reference. Figure 4 shows the Q-factor improvement for both resonances as a function of the number of concentric rings. It is worth noting that the selected topology (three rings with a central disk) achieves a Q-factor improvement of 6 % at f_r1 and 44 % at f_r2. The CRM resonator was fabricated on a 3.2-mm FR4 substrate [31], with copper as conductor for both top and ground layers, using the ProtoMat S103 PCB milling machine. The dielectric constant (εr) and the loss tangent (tan δ) of the substrate are 4.3 and 0.025, respectively. An SMA connector was soldered at the end of the 50-Ω microstrip feedline to connect the resonator to a vector network analyser (VNA) for measuring Γ.

3. Sensor Development
To obtain the gas sensor, a sensing material was deposited on the surface of the propagative structure. In particular, an aqueous solution of Ag@α-Fe2O3 nanocomposite was deposited by drop coating on the gap between the external ring and the microstrip feedline. The description and synthesis of this humidity sensing material is reported in [32]. The effect of the sensing material deposition on the frequency-dependent behaviour of Γ was measured from 3.5 GHz to 5.6 GHz using the Agilent 8753ES VNA with a one-port calibration (short-open-load, Agilent 85052 mechanical calibration kit). As shown in Figure 5, both dips in Γ become much more pronounced after deposition, improving the quality factor of both resonances. For the sake of completeness, the real and imaginary parts of the resonator input impedance for the selected frequency ranges are reported in Figure 6.

4. Resonator Parameters Evaluation
Estimating the resonant frequency (f_R), quality factor (Q), and dip amplitude (A_R) from a discrete frequency response is not a trivial task. A simple linear interpolation of the available discrete data can lead to an inaccurate estimation of these quantities, especially when the data are affected by noise. A better fitting approach consists in using a Lorentzian function [33], [34], which allows a good estimation of the resonant parameters f_R, Q, and A_R.

Figure 4. Analysis of the quality-factor improvement of the two resonances observed in the simulated reflection coefficient as a function of the number of rings of the resonator structure.
Figure 5. Behaviour of the (a) magnitude and (b) phase of the measured reflection coefficient as a function of frequency, from 3.5 GHz to 5.6 GHz, for the studied resonator before (red lines) and after (blue lines) deposition of the sensing material.

Figure 6. Behaviour of the (a) real and (b) imaginary parts of the impedance as a function of frequency, from 3.5 GHz to 5.6 GHz, for the studied resonator before (red lines) and after (blue lines) deposition of the sensing material.

A more accurate result can be achieved by using a complex function to fit both real and imaginary parts of the spectrum [35], [36]. This technique can be useful in applications in which the calibration procedure is impracticable (e.g., in cryogenic measurement systems) [36]. The frequency-dependent behaviour of the magnitude of Γ of the microwave resonator was modelled as a Lorentzian function:

$$|\Gamma(f)| = c_0 - \frac{a_0}{\pi}\,\frac{\tfrac{1}{2}G}{\left(f - f_R\right)^2 + \left(\tfrac{1}{2}G\right)^2}, \quad (1)$$

where f is the frequency, c_0 and a_0 are two real coefficients, and G is the full width at half maximum. From equation (1), A_R and Q can be calculated respectively as

$$A_R = c_0 - a_0\,\frac{2}{\pi G}, \quad (2)$$

$$Q = \frac{f_R}{\Delta f} = \frac{f_R}{G\sqrt{\sqrt{2}-1}}, \quad (3)$$

where Δf is the resonator half-power bandwidth. The Levenberg-Marquardt algorithm was used for fitting the measured data points with the Lorentzian function. The Lorentzian curve fits the two observed resonant dips very well, so a smooth behaviour of the magnitude of Γ over a continuous spectrum of frequencies can be obtained for the estimation of the resonant parameters. As an illustrative example, Figure 7 reports the Lorentzian fitting applied to the magnitude of the measured Γ over a narrow frequency band around the second resonance. By using the fitting process, the parameters f_R, Q, and A_R can be accurately estimated over the whole considered humidity range.

Figure 7. Illustration of the Lorentzian fitting (red line) of the magnitude of the measured (black line) reflection coefficient over a narrow frequency band around the second resonance for the studied resonator.
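A minimal sketch of this fitting step in Python (not the authors' code: the frequency sweep below is synthetic and the starting values are rough guesses), using SciPy's Levenberg-Marquardt implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian_mag(f, c0, a0, G, fR):
    """Equation (1): Lorentzian model of the magnitude of Gamma (in dB)."""
    return c0 - (a0 / np.pi) * (0.5 * G) / ((f - fR) ** 2 + (0.5 * G) ** 2)

# Synthetic sweep around the second dip (~5.4687 GHz), standing in
# for a measured VNA trace.
rng = np.random.default_rng(0)
f = np.linspace(5.460e9, 5.478e9, 361)
mag_db = lorentzian_mag(f, -50.0, 1.6e7, 2.0e6, 5.4687e9)
mag_db = mag_db + rng.normal(0.0, 0.05, f.size)  # measurement noise

# Levenberg-Marquardt fit (method="lm"), starting from rough guesses.
p0 = [mag_db.max(), 1.0e7, 1.0e6, f[np.argmin(mag_db)]]
(c0, a0, G, fR), _ = curve_fit(lorentzian_mag, f, mag_db, p0=p0, method="lm")

# Resonant parameters from equations (2) and (3).
AR = c0 - a0 * 2.0 / (np.pi * G)
Q = fR / (G * np.sqrt(np.sqrt(2.0) - 1.0))
print(f"fR = {fR / 1e9:.6f} GHz, Q = {Q:.0f}, AR = {AR:.1f} dB")
```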
5. Experimental Results
The sensor was placed in a test chamber filled with a controlled atmosphere, where the electrical signal was supplied via an RF feed-through connected to the Agilent 8753ES VNA (see Figure 8). The test chamber consists of a modified Petri dish made of polystyrene, able to provide both a controlled atmosphere and good microwave propagation without signal perturbations. The developed sensor was characterised at seven different values of relative humidity, ranging from 0 %RH to 83 %RH, at room temperature. The 0 %RH nominal value was set by means of the certification of the gas bottles (0.5 %). The test gas mixture was set by a fully automated gas control system comprising a certified gas bottle and a bubbler inside a thermostatic bath. The system is equipped with an array of Bronkhorst® mass flow controllers able to set a flux of 100 cm3/min in the test chamber, providing fast set and purge for each test value of the humidity concentration. The diagram of the gas apparatus is shown in Figure 9.

Figure 8. Illustration of (a) the sensor prototype placed in the test chamber and (b) the frequency- and humidity-dependent characterisation procedure.

Figure 9. Illustration of the automated gas control and measurement system.

After performing a one-port calibration, the reflection coefficient was measured at each humidity condition. Figure 10 and Figure 11 illustrate the impact of the relative humidity on the measured behaviour of the complex reflection coefficient over the two narrow frequency bands around the two observed dips, detected at approximately 3.7 GHz and 5.4 GHz. It can be seen that the size and the shape of the dips change significantly with the humidity values. Humidity-dependent variations are observed in all three parameters f_R, Q, and A_R for both resonances. Nevertheless, Q and A_R do not follow a clear monotonic trend (see Figure 12 and Figure 13). On the other hand, both resonant frequencies decrease with increasing humidity level (see Figure 14), thereby enabling the use of the two resonant frequencies as humidity sensing parameters.

Figure 10. Behaviour of the (a) magnitude and (b) phase of the measured reflection coefficient over a narrow frequency band around the first resonance for the studied resonator, for seven relative humidity values.

Figure 11. Behaviour of the (a) magnitude and (b) phase of the measured reflection coefficient over a narrow frequency band around the second resonance for the studied resonator, for seven relative humidity values.

To evaluate the humidity sensing performance of the developed gas transducer over the whole investigated humidity range, we used an exponential function to fit the two resonant frequencies as a function of humidity:

$$f_R = A \cdot \mathrm{e}^{-\frac{RH}{B}} + C, \quad (4)$$

where f_R represents the considered resonant frequency, RH is the relative humidity value, and A, B, and C are the fitting parameters. The calibration curves for both dip 1 and dip 2 are depicted in Figure 14(a); the fitting parameters are reported in Table 1, while the calibration fit residuals are shown in Figure 14(b).
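A small sketch of this calibration fit (again Python/SciPy; the humidity values are the seven nominal test points from the paper, while the resonant-frequency readings are illustrative placeholders, not the measured data):

```python
import numpy as np
from scipy.optimize import curve_fit

def calibration(rh, A, B, C):
    """Equation (4): f_R = A * exp(-RH / B) + C (here in MHz)."""
    return A * np.exp(-rh / B) + C

rh = np.array([0.0, 22.0, 28.0, 39.0, 54.0, 74.0, 83.0])                 # %RH
fr = np.array([3702.2, 3701.0, 3700.7, 3700.3, 3700.0, 3699.8, 3699.7])  # MHz

(A, B, C), _ = curve_fit(calibration, rh, fr, p0=[3.0, 30.0, 3699.0])
residuals_khz = (fr - calibration(rh, A, B, C)) * 1e3

# The local sensitivity is the slope of equation (4):
# df_R/dRH = -(A / B) * exp(-RH / B).
sensitivity_khz = -(A / B) * np.exp(-rh / B) * 1e3
print(f"A = {A:.2f} MHz, B = {B:.2f} %RH, C = {C:.2f} MHz")
print("residuals (kHz):", np.round(residuals_khz, 1))
print("sensitivity (kHz/%RH):", np.round(sensitivity_khz, 1))
```

The single absolute sensitivity figures quoted below (26.4 and 29.3 kHz/%RH) can then be read as representative slopes of this curve over the tested humidity range.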
Figure 12. Analysis of the quality factor of the two resonances observed in the measured reflection coefficient of the resonator as a function of the humidity.

Figure 13. Magnitude of the measured reflection coefficient of the resonator at the first (black) and second (blue) resonances as a function of the humidity.

For dip 1, residuals are almost within ±200 kHz, which, considering an absolute sensitivity of 26.4 kHz/%RH, corresponds to ±7.6 %RH. On the other hand, dip 2 exhibits a higher sensitivity (29.3 kHz/%RH) with lower calibration fit residuals than dip 1: ±100 kHz, or ±3.4 %RH. As an alternative, it is possible to use both dips for humidity detection, thereby reducing the measurement error and increasing accuracy [37]. For the sake of completeness, the impact of the humidity variations is reported also for the impedance associated with the measured Γ, focusing on the two narrow frequency bands around the two dips. Figure 15 and Figure 16 show that a higher humidity decreases the real part close to dip 1 and increases it close to dip 2, whereas the imaginary part is shifted towards higher values in both frequency bands.

Figure 14. Calibration curve for both dips (a) and calibration fit residuals (b).

Table 1. Fitting parameter values with standard errors for the two observed dips.

  Parameter   Dip 1: value (std. error)   Dip 2: value (std. error)
  A (MHz)     2.68 (0.302)                2.92 (0.104)
  B (%RH)     37.21 (10.349)              28.18 (2.485)
  C (MHz)     3699.51 (0.3043)            5467.92 (0.088)
  R²          0.994                       0.956

Figure 15. Behaviour of the (a) real and (b) imaginary parts of the measured impedance over a narrow frequency band around the first resonance for the studied resonator, for seven relative humidity values.

Figure 16. Behaviour of the (a) real and (b) imaginary parts of the measured impedance over a narrow frequency band around the second resonance for the studied resonator, for seven relative humidity values.

6. Conclusions
A one-port microwave gas transducer was developed by coupling a microstrip resonator for electromagnetic wave propagation with an Ag@α-Fe2O3 nanocomposite for humidity monitoring purposes.
The sensing performance of this prototype was established by monitoring relative humidity from 0 %RH to 83 %RH at room temperature. To this end, the sensor was placed in a test chamber consisting of a modified Petri dish made of polystyrene. By using a VNA, the reflection coefficient was measured over the 3.5 GHz to 5.6 GHz frequency range under seven different relative humidity conditions. It was observed that the frequency-dependent behaviour of the reflection coefficient exhibits two marked dips that change in intensity, broadness, and location when the relative humidity is varied. In particular, the two detected resonant frequencies progressively shift towards lower values with increasing humidity, enabling their use as effective sensing parameters. The humidity-dependent behaviour of the two resonant frequencies was accurately reproduced by an exponential function. The sensitivity-based analysis showed that the higher resonant frequency is the parameter most sensitive to changes in relative humidity. Finally, it should be highlighted that, although the reported analysis was limited to the humidity sensing application, the developed transducer can also be applied to the detection of different target gases by selecting an appropriate sensing material tailored to the specific application.

References
[1] P. C. Jain, R. Kushwaha, Wireless gas sensor network for detection and monitoring of harmful gases in utility areas and industries, 2012 Sixth International Conference on Sensing Technology (ICST), Kolkata, India, 18-21 Dec. 2012, pp. 642-646. DOI: 10.1109/ICSensT.2012.6461759
[2] G. Campobello, M. Castano, A. Fucile, A. Segreto, WEVA: a complete solution for industrial Internet of Things, Lecture Notes in Computer Science 10517 (2017), pp. 231-238. DOI: 10.1007/978-3-319-67910-5_19
[3] G. Campobello, A. Segreto, S. Zanafi, S. Serrano, An efficient lossless compression algorithm for electrocardiogram signals, 2018 26th European Signal Processing Conference (EUSIPCO), Rome, Italy, 3-7 Sept. 2018, pp. 777-781. DOI: 10.23919/EUSIPCO.2018.8553597
[4] A. Darwish, A. E. Hassanien, Wearable and implantable wireless sensor network solutions for healthcare monitoring, Sensors 11(6) (2011), pp. 5561-5595. DOI: 10.3390/s110605561
[5] X. Fu, W. Chen, S. Ye, Y. Tu, Y. Tang, D. Li, H. Chen, K. Jiang, A wireless implantable sensor network system for in vivo monitoring of physiological signals, IEEE Transactions on Information Technology in Biomedicine 15(4) (2011), pp. 577-584. DOI: 10.1109/TITB.2011.2149536
[6] G. Gugliandolo, G. Campobello, P. Capra, S. Marino, A. Bramanti, G. Lorenzo, N. Donato, A movement-tremors recorder for patients of neurodegenerative diseases, IEEE Transactions on Instrumentation and Measurement 68(5) (2019), pp. 1451-1457. DOI: 10.1109/TIM.2019.2900141
[7] F. Kiani, A. Seyyedabbasi, Wireless sensor network and Internet of Things in precision agriculture, International Journal of Advanced Computer Science and Applications 9(6) (2018). DOI: 10.14569/IJACSA.2018.090614
[8] G. Campobello, A. Segreto, S. Zanafi, S. Serrano, RAKE: a simple and efficient lossless compression algorithm for the Internet of Things, 2017 25th European Signal Processing Conference (EUSIPCO), Kos, Greece, 28 Aug.-2 Sept. 2017, pp. 2581-2585. DOI: 10.23919/EUSIPCO.2017.8081677
[9] S. Ullo, M. Gallo, G. Palmieri, P. Amenta, M. Russo, G. Romano, M. Ferrucci, A. Ferrara, M. De Angelis, Application of wireless sensor networks to environmental monitoring for sustainable mobility, 2018 IEEE International Conference on Environmental Engineering (EE), Milan, Italy, 12-14 March 2018, pp. 1-7. DOI: 10.1109/EE1.2018.8385263
[10] G. Borrello, E. Salvato, G. Gugliandolo, Z. Marinkovic, N. Donato, UDOO-based environmental monitoring system, in: A. De Gloria (ed.), Applications in Electronics Pervading Industry, Environment and Society (APPLEPIES), Lecture Notes in Electrical Engineering 409, Springer, Cham, 2017. DOI: 10.1007/978-3-319-47913-2_21
[11] P. Österberg, M. Heinonen, M. Ojanen-Saloranta, A. Mäkynen, Comparison of the performance of a microwave-based and an NMR-based biomaterial moisture measurement instrument, Acta IMEKO 5(4) (2016), pp. 88-99. DOI: 10.21014/acta_imeko.v5i4.391
[12] M. Scheffler, M. M. Felger, M. Thiemann, D. Hafner, K. Schlegel, M. Dressel, K. Ilin, M. Siegel, S. Seiro, C. Geibel, F. Steglich, Broadband Corbino spectroscopy and stripline resonators to study the microwave properties of superconductors, Acta IMEKO 4(3) (2015), pp. 47-52. DOI: 10.21014/acta_imeko.v4i3.247
[13] A. Alimenti, K. Torokhtii, N. Pompeo, E. Piuzzi, E. Silva, Characterisation of dielectric 3D-printing materials at microwave frequencies, Acta IMEKO 9(3) (2020), pp. 26-32. DOI: 10.21014/acta_imeko.v9i3.778
[14] G. Gugliandolo, D. Aloisio, S. G. Leonardi, G. Campobello, N. Donato, Resonant devices and gas sensing: from low frequencies to microwave range, Proc. of 14th Int. Conf. TELSIKS 2019, Nis, Serbia, 23-25 October 2019, article 9002368, pp. 21-28. DOI: 10.1109/TELSIKS46999.2019.9002368
[15] T. Guo, T. Zhou, Q. Tan, Q. Guo, F. Lu, J. Xiong, A room-temperature CNT/Fe3O4 based passive wireless gas sensor, Sensors 18(10) (2018), art. 3542. DOI: 10.3390/s18103542
[16] K. Staszek, A. Szkudlarek, M. Kawa, A. Rydosz, Microwave system with sensor utilizing GO-based gas-sensitive layer and its application to acetone detection, Sensors and Actuators B: Chemical 297 (2019), art. 126699. DOI: 10.1016/j.snb.2019.126699
[17] B. Wu, X. Zhang, B. Huang, Y. Zhao, C. Cheng, H. Chen, High-performance wireless ammonia gas sensors based on reduced graphene oxide and nano-silver ink hybrid material loaded on a patch antenna, Sensors 17(9) (2017), art. 2070. DOI: 10.3390/s17092070
[18] G. Gugliandolo, K. Naishadham, N. Donato, G. Neri, V. Fernicola, Sensor-integrated aperture coupled patch antenna, 2019 IEEE International Symposium on Measurements & Networking (M&N), Catania, Italy, 8-10 July 2019, pp. 1-5. DOI: 10.1109/IWMN.2019.8805023
[19] G. Gugliandolo, K. Naishadham, G. Neri, V. C. Fernicola, N. Donato, A novel sensor-integrated aperture coupled microwave patch resonator for humidity detection, IEEE Transactions on Instrumentation and Measurement 70 (2021), pp. 1-11. DOI: 10.1109/TIM.2021.3062191
[20] G. Barochi, J. Rossignol, M. Bouvet, Development of microwave gas sensors, Sensors and Actuators B 157(2) (2011), pp. 374-379. DOI: 10.1016/j.snb.2011.04.059
[21] M. H. Zarifi, T. Thundat, M. Daneshmand, High resolution microwave microstrip resonator for sensing applications, Sensors and Actuators A: Physical 233 (2015), pp. 224-230. DOI: 10.1016/j.sna.2015.06.031
[22] D. Aloisio, N. Donato, Development of gas sensors on microstrip disk resonators, Procedia Engineering 87 (2014), pp. 1083-1086. DOI: 10.1016/j.proeng.2014.11.351
[23] Z. Marinković, G. Gugliandolo, M. Latino, G. Campobello, G. Crupi, N. Donato, Characterization and neural modeling of a microwave gas sensor for oxygen detection aimed at healthcare applications, Sensors 20(24) (2020), art. 7150. DOI: 10.3390/s20247150
[24] G. Gugliandolo, M. Latino, G. Campobello, Z. Marinkovic, G. Crupi, N. Donato, On the gas sensing properties of microwave transducers, 2020 55th International Scientific Conference on Information, Communication and Energy Systems and Technologies (ICEST), Niš, Serbia, 10-12 Sept. 2020, pp. 161-194. DOI: 10.1109/ICEST49890.2020.9232765
[25] S. B. Tooski, Sense toxins/sewage gases by chemically and biologically functionalized single-walled carbon nanotube sensor based microwave resonator, J. Appl. Phys. 107 (2010), art. 014702. DOI: 10.1063/1.3277020
[26] J. Park, T. Kang, B. Kim, et al., Real-time humidity sensor based on microwave resonator coupled with PEDOT:PSS conducting polymer film, Scientific Reports 8 (2018), art. 439. DOI: 10.1038/s41598-017-18979-3
[27] A. Bogner, C. Steiner, S. Walter, J. Kita, G. Hagen, R. Moos, Planar microstrip ring resonators for microwave-based gas sensing: design aspects and initial transducers for humidity and ammonia sensing, Sensors 17(10) (2017), art. 2422. DOI: 10.3390/s17102422
[28] G. Gugliandolo, D. Aloisio, G. Campobello, G. Crupi, N. Donato, Development and metrological evaluation of a microstrip resonator for gas sensing applications, Proceedings of the 24th IMEKO TC4 International Symposium and 22nd International Workshop on ADC and DAC Modelling and Testing, 2020, pp. 80-84. Online [accessed 14 June 2021]: https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-16.pdf
[29] M. T. Jilani, W. P. Wen, L. Y. Cheong, M. A. Zakariya, M. Z. U. Rehman, Equivalent circuit modeling of the dielectric loaded microwave biosensor, Radioengineering 23(4) (2014), pp. 1038-1047. Online [accessed 14 June 2021]: https://www.radioeng.cz/fulltexts/2014/14_04_1038_1047.pdf
[30] D. L. K. Eng, B. C. Olbricht, S. Shi, D. W. Prather, Dielectric characterization of thin films using microstrip ring resonators, Microwave and Optical Technology Letters 57(10) (2015), pp. 2306-2310. DOI: 10.1002/mop.29321
[31] J. R. Aguilar, M. Beadle, P. T. Thompson, M. W. Shelley, The microwave and RF characteristics of FR4 substrates, IEE Colloquium on Low Cost Antenna Technology (Ref. No. 1998/206), 1998, pp. 2/1-2/6. DOI: 10.1049/ic:19980078
[32] A. Mirzaei, K. Janghorban, B. Hashemi, A. Bonavita, M. Bonyani, S. G. Leonardi, G. Neri, Synthesis, characterization and gas sensing properties of Ag@α-Fe2O3 core-shell nanocomposites, Nanomaterials 5(2) (2015), pp. 737-749. DOI: 10.3390/nano5020737
[33] P. J. Petersan, S. M. Anlage, Measurement of resonant frequency and quality factor of microwave resonators: comparison of methods, Journal of Applied Physics 84(6) (1998). DOI: 10.1063/1.368498
[34] M. P. Robinson, J. Clegg, Improved determination of Q-factor and resonant frequency by a quadratic curve-fitting method, IEEE Trans. on Electromagnetic Compatibility 47(2) (2005), pp. 399-402. DOI: 10.1109/TEMC.2005.847411
[35] G. Gugliandolo, S. Tabandeh, L. Rosso, D. Smorgon, V. Fernicola, Whispering gallery mode resonators for precision temperature metrology applications, Sensors 21(8) (2021), art. 2844. DOI: 10.3390/s21082844
[36] K. Torokhtii, A. Alimenti, N. Pompeo, E. Silva, Estimation of microwave resonant measurement uncertainty from uncalibrated data, Acta IMEKO 9(3) (2020), pp. 47-52. DOI: 10.21014/acta_imeko.v9i3.782
[37] S. Kiani, P. Rezaei, M. Navaei, Dual-sensing and dual-frequency microwave SRR sensor for liquid samples permittivity detection, Measurement 160 (2020), art. 107805. DOI: 10.1016/j.measurement.2020.107805
Experiment Assisting System with Local Augmented Body (EASY-LAB) in Dual Presence Environment

ACTA IMEKO, ISSN: 2221-870X, September 2022, Volume 11, Number 3, 1-6

Ahmed Alsereidi1, Yukiko Iwasaki1, Joi Oh2, Vitvasin Vimolmongkolporn2, Fumihiro Kato2, Hiroyasu Iwata3
1 Waseda University, Graduate School of Creative Science and Engineering, Tokyo, Japan
2 Waseda University, Global Robot Academic Institute, Tokyo, Japan
3 Waseda University, Faculty of Science and Engineering, Tokyo, Japan

Section: Research Paper
Keywords: VR/AR; hands-free interface; telecommunication; teleoperation
Citation: Ahmed Alsereidi, Yukiko Iwasaki, Joi Oh, Vitvasin Vimolmongkolporn, Fumihiro Kato, Hiroyasu Iwata, Experiment assisting system with local augmented body (EASY-LAB) in dual presence environment, Acta IMEKO, vol. 11, no. 3, article 3, September 2022, identifier: IMEKO-ACTA-11 (2022)-03-03
Section Editor: Zafar Taqvi, USA
Received February 26, 2022; in final form July 21, 2022; published September 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Ahmed Alsereidi, e-mail: ahmed@akane.waseda.jp

Abstract
This paper introduces a system designed to support subject experiments when the situation does not allow the experimenter and the subject to be in the same place, such as the COVID-19 pandemic, during which everyone relied on video conferencing applications with their inherent limitations: it is difficult to give directions using only video and voice. The system we developed allows an experimenter to actively watch and interact with the subject, so experiments can still be conducted when operating from a distant area. Another important aspect this study focuses on is the case where several subjects are required and the experimenter must be able to guide all of them equally well. The proposed system uses a 6-DoF robotic arm with a camera and a laser pointer attached to it on the subject side. The experimenter controls it with a head-mounted display, and it moves according to the experimenter's head movement, allowing easy instruction and intervention on the subject side. A comparison with other similar research is also covered. The study focuses mainly on which viewing method is easiest for the experimenter to use, and on whether teaching one subject at a time gives better results than teaching two subjects simultaneously.
1. Introduction
There has been a notable amount of research and development in telepresence robotics, and with the occurrence of COVID, telepresence robotics has been used to improve subject experiments. In previous studies, researchers and engineers frequently ask individuals to complete a task, watch their behaviour, or evaluate the usability of new technologies. As researchers, we often need to observe how a subject acts in an experiment and provide guidance and instructions while it runs. The EASY-LAB system was designed to allow such experiments to be carried out from a remote location. The system uses a 6-DoF robotic head that utilises differential gears to imitate human head-waist motion; the neck, waist, and detachable mechanism make up the mechanical design of the detachable robotic head. The maximum latency between the user and the robot is 25 ms, which is low enough for human perception [1]. This was verified by comparing the human head motion and the robot head motion side by side: both moved identically, as shown in Figure 1.

Figure 1. Detachable head robot used for testing.

The system can also intervene using a laser pointer to point at the object being worked on. The joint angles of the robot are calculated by inverse kinematics (IK) from the acquired head movements of the experimenter and reflected in the real robot. Photon Unity Networking was used to synchronise the motion of the experimenter and the robot remotely [2]; it is a package usually used for multiplayer games thanks to its flexible matchmaking, where objects can be synced over the network. Figure 2 shows the system diagram of how the system sends and receives data. Another point to consider is that the system must be user-friendly for most people. Tele-operated robots have been making advances in this field, as well as full-body immersion systems that imitate human motion [3], [4]. Those systems focus on immersion and are lacking in terms of usability, as they are heavy and difficult to manoeuvre.
There are cases where multiple subjects are required for an experiment; then multiple robots are needed, at least one per subject. The system has several requirements to fulfil. First, instructions given to the subjects should be transmitted correctly in as little time as possible. Second, the experimenter must have clear visibility of both subjects' surroundings. Third, the experimenter must be able to be present in several locations at the same time (dual presence), which calls for a system that allows users to freely switch and reallocate their attention. In telepresence systems, to fully immerse a user in a remote environment, it is preferable that the user devotes his or her undivided attention to it; teleoperation work efficiency improves when the system delivers a greater sensation of immersion and presence [5]. In this study, we focus on creating a dual-presence system, so the experimenter needs to pay attention to two remote environments simultaneously and be able to shift attention between them as needed. We aim to develop a system that allows experimenters to achieve dual presence and monitor both subjects. We propose two types of visual environment presentation, evaluate them in a set of experiments, and then compare and discuss the results. The methods used are as follows:
a) Split screen: the experimenter's head-mounted display (HMD) screen is split horizontally in the middle and shows subject A's environment on the top and subject B's environment on the bottom.
b) Superimposed screen: the experimenter's HMD shows a single image, either environment A or environment B alone, or both overlaid with 50 % transparency.

The subject's feelings towards the robot must also be taken into consideration when conducting this experiment. Therefore, the system was also designed to operate in three different modes:
a) Mode 0: the robots in both environments always follow the experimenter's motion.
b) Mode 1: the robot in environment A moves according to the experimenter's motion and the robot in environment B is static.
c) Mode 2: the robot in environment B moves according to the experimenter's motion and the robot in environment A is static.

The goal of this paper is to study the idea of dual presence, how well it can be implemented, and means of improving it. Experiments were conducted to evaluate the system, compare the methods used, and finally suggest a way to improve remote experimenting and advance multi-presence research.

2. Proposed Method
When using EASY-LAB, all participants improved their task performance and recorded higher scores in the subjective evaluation; this result suggests EASY-LAB's effectiveness, at least in tasks that require pointing and observation from multiple sides [2]. There are three main objectives in this experiment: first, fulfil the given task successfully; second, switch between the two robots with ease and be able to exist in two different places at the same time; finally, give the subjects the feeling that the robot, or at least the person operating it, is human. As for the experiment itself, the same task is used for all conditions. The robot head has a laser pointer attached to the camera, and it follows the head movement of the user as shown in Figure 3. The local operator's head motion is measured by a Vive Pro HMD (HTC). The local software is built on the Unity (version 2019.4.17f1, Unity Technologies) VR simulator, and the ROS# library was used for TCP/IP communication between the local PC and the remote PC.
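As an illustration of this kind of operator-to-robot link (a hedged sketch only: the actual system uses Unity with ROS# and Photon Unity Networking, and the message format, address, and function names below are invented for the example), a minimal Python sender could stream the HMD's head orientation to the robot PC over TCP as newline-delimited JSON:

```python
import json
import socket
import time

ROBOT_PC = ("192.168.0.10", 9000)  # hypothetical address of the remote PC

def read_head_pose():
    """Placeholder for the HMD tracker; returns yaw/pitch/roll in degrees."""
    return {"yaw": 12.5, "pitch": -4.0, "roll": 0.8}

def stream_head_pose(rate_hz: float = 60.0) -> None:
    """Send the operator's head pose at a fixed rate; the remote side
    would run IK on these angles and drive the robot head joints.
    (Running this requires a listener at ROBOT_PC.)"""
    with socket.create_connection(ROBOT_PC) as sock:
        period = 1.0 / rate_hz
        while True:
            msg = json.dumps(read_head_pose()) + "\n"
            sock.sendall(msg.encode("utf-8"))
            time.sleep(period)

if __name__ == "__main__":
    stream_head_pose()
```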
For this research, the system was tested over a single network in one building; in the future, we would like to broaden the scale, operate it from a different town or country, and measure the difference in delay. Technology-wise, however, it is possible to operate from any distance with internet access. There are two subjects in this experiment, and the second robot head is the same as the one shown in Figure 3, placed in front of the second subject. The input required from the experimenter is kept simple to improve usability: the Vive controller's trigger button is used to choose which robot to control (a detailed explanation follows in the next section). On the subject's side, each subject is given a piece of paper with holes in it, as shown in Figure 4, while on the experimenter's side there is an answer sheet that can be used as a reference. The pattern of the answer sheet is generated randomly every time, so the same pattern is never repeated. The experimenter points the laser pointer at hole #1 and waits for 3 seconds; after 3 seconds have passed, the subject inserts a thread into that hole, and so on until hole #6. The answer sheet is colour-coded to make it easier for the experimenter to read. Once all the holes have been connected, the subject stops working on the task, and the experimenter must notice through the camera that all holes are connected properly, completing the task. A timer is started as soon as the task starts; when the experimenter confirms task completion visually, he presses a button to stop the timer.

Figure 2. EASY-LAB system diagram.

Figure 3. EASY-LAB system and experimenter.

3. System Configuration
3.1. Experimenter system configuration
The outcome we study on the experimenter side is the effect of changing the way the environment visuals are shown, and the usability of the proposed interface in each mode. The basic requirement in presenting the detachable head's vision is to familiarise the experimenter with the remote environments and keep him aware of the state of the subject and the task being performed. Another important requirement is that the user can allocate attention to the two environments at will while using this interface to perform the co-presence tasks. In this section, we design both the concurrent vision presentation system that relays environment information and the modes for control. Regarding the environment information, the easiest way for most humans to become familiar with an environment is to see it, so we designed a system that presents images of the two environments simultaneously from a first-person view, since this provides a high level of immersion [6]. Recent studies have proposed several presentation systems for visuals, such as a split screen arranging several images on one screen [7] and a screen-superimposition type overlaying half-transparent images [8]. In this research, we use both methods: the screen-superimposition viewing method provides two first-person view images simultaneously, while the second viewing method splits the two environments onto two screens. The other requirement of this system is to allow users to easily switch or reallocate their attention.
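The 50 % superimposition can be reproduced with a simple alpha blend of the two camera frames. A minimal sketch (Python with Pillow; the file names are placeholders for frames grabbed from the two robot cameras):

```python
from PIL import Image

# Hypothetical camera frames from the two remote environments.
frame_a = Image.open("environment_a.png").convert("RGB")
frame_b = Image.open("environment_b.png").convert("RGB")

# Image.blend(im1, im2, alpha) computes (1 - alpha) * im1 + alpha * im2,
# so alpha = 0.5 gives the 50 % transparency used for the superimposed view.
superimposed = Image.blend(frame_a, frame_b.resize(frame_a.size), 0.5)
superimposed.save("superimposed_view.png")
```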
Researchers have proposed several methods for switching attention easily between two transparent images, such as changing the transmission ratio of the two images via a foot pedal [9] or according to the user's gazing point [10]. For this research we aimed for a simpler method: pressing a button on the Vive controller (the HMD used is also a Vive). The mode sequence advances with every button click: mode 0 -> mode 1 -> mode 2.

4. Experimenter Viewing Method
The first viewing method uses the two superimposed screens, realised by setting two depth-separated planes in different places, as shown in Figure 5 a). The experimenter can see both subjects at the same time; a transparency of 50 % is used for both images. Moreover, since we want to test the difference between superimposed and split screens, the 50 % transparency effect is used for all viewing methods. The second viewing method shows the experimenter both images in a split screen (Figure 5 b).

4.1. Experimenter operation modes
Ideally, the system should be evaluated with several tasks to verify its usability and avoid dependence on the task itself as much as possible, but this one task should be sufficient for evaluating the dual-presence viewing and control methods. The evaluation is based on meeting the following conditions:
a) Point at the correct holes as shown in the answer sheet.
b) Instruct the subjects as quickly as possible; depending on the view and control method, the time is expected to vary.
c) Make sure the subjects perform the task correctly and, in addition, notice as soon as possible when a subject makes a mistake.

For every experiment, the experimenter was given time to test each mode and train for about 3 minutes to familiarise with the system. The task was performed 4 times, with conditions changing every time, as follows (a state-machine sketch of these modes is given after this list):
a) Superimposed screens / mode 0: the user sees both environments at the same time and both robots move all the time. When changing from mode 0 to 1 in this case, mode 1 only shows the answer sheet, and the user can go back to operating the robots by pressing the same button again to return to mode 0.
b) Superimposed screens / modes 1 & 2: the user sees only environment A when in mode 1 and controls the robot in that environment, while the robot in environment B stops; vice versa in mode 2. In this case, mode 0 shows the answer sheet.
c) Split screen / mode 0: the user sees both environments at the same time and both robots move all the time. When changing from mode 0 to 1 in this case, mode 1 only shows the answer sheet, and the user can go back to operating the robots by pressing the same button again to return to mode 0.
d) Split screen / modes 1 & 2: the user sees both environments when in mode 1 but controls only the robot in environment A, while the robot in environment B stops; vice versa in mode 2. In this case, mode 0 shows the answer sheet.

Figure 4. Paper used to connect the dots in the experiment.

Figure 5. Overview of the viewing methods: a) superimposed, b) split screen.
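A minimal sketch of the button-driven mode cycling described above (Python; the mode-to-behaviour mapping follows conditions (b)/(d) of the paper, where mode 0 shows the answer sheet, while the class and its method names are invented for illustration):

```python
from dataclasses import dataclass

# Which view is shown and which robot follows the experimenter's head
# motion in each mode (per conditions (b)/(d); in conditions (a)/(c),
# mode 0 instead shows both environments with both robots moving).
MODE_TABLE = {
    0: {"view": "answer_sheet", "robot_a_follows": False, "robot_b_follows": False},
    1: {"view": "environment_a", "robot_a_follows": True, "robot_b_follows": False},
    2: {"view": "environment_b", "robot_a_follows": False, "robot_b_follows": True},
}

@dataclass
class ModeSwitcher:
    mode: int = 0

    def on_button_click(self) -> dict:
        """Advance mode 0 -> 1 -> 2 -> 0 on each Vive controller click."""
        self.mode = (self.mode + 1) % 3
        return MODE_TABLE[self.mode]

switcher = ModeSwitcher()
for _ in range(4):
    print(switcher.mode, "->", switcher.on_button_click())
```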
4.2. Results and discussion on experimenter operation
In this chapter, we verify the usability of the developed system through a user study. For this experiment, we asked the cooperation of six robotics researchers (mean age 24, SD 1.26; 5 male, 1 female) who had previously been involved in subject experiments. Each experimenter instructed 2 subjects at the same time; the 12 participants who acted as subjects are also researchers familiar with robotics (mean age 23, SD 1.31; 10 male, 2 female). The experimenter and the two subjects were in locations where they could not see each other; the only communication method was the laser pointer attached to each robot, used to guide the subjects through the task. The experimenter points at the holes in the correct order as described in the answer sheet provided when the task starts. Since there are two subjects, the experimenters all chose to instruct subject A to connect the first hole, then moved to guide subject B immediately, and repeated this until both finished the task (6 dots connected). Depending on the method used, the average number of errors changed as shown in Table 1; an error was counted whenever a thread was inserted into the wrong hole. The total number of tries, 12 per method, is also the maximum number of possible errors. Table 2 shows the average instruction time in seconds for each method. The outcome for each method is as follows:
a) 3 errors were made in total, the highest of any operation method. This result was expected: when the two screens are superimposed, the view is confusing at times. Some users reported that a subject connected a hole without instruction; this is because both robots move at the same time, so while the experimenter was instructing subject B, subject A's robot was also moving. Another experimenter reported that it was difficult to distinguish between the two laser pointers. As for the time, this method had the longest operation time: it took the users extra time to distinguish between the two environments.
b) No errors were made in this method, because the user can focus entirely on one environment and ignore the other until the instruction is finished, as one of the users reported. The average time was 179 seconds, 19 seconds faster than method (a): even though the viewing method was the same, instructing one person at a time was quicker despite the time spent switching between environments.
c) The total number of errors is 2; again, the reason is that when guiding subject B, subject A's robot also moves and sometimes gives wrong pointing to the subject. The average time taken is 187 seconds, faster than method (a) but slower than method (b).
d) This method resulted in one error, most likely due to human error. The average time is 157 seconds, the fastest of all methods. One user reported that the operation was very smooth: checking the answer sheet in mode 0, instructing subject A in mode 1, instructing subject B in mode 2, and repeating.

From the results above, the best method for the experimenter is method (d) if the task's main concern is time, as it gave the best instruction time and only one error. However, method (b) is a better candidate if the task allows no errors to be made, as it is the easiest to focus on. After the experiment was over, a questionnaire (Table 3) was given to the experimenters; the answers were based on a linear scale from 1 to 7 for all questions.

Table 1. Average instruction error.

  Method                                 Instructional error (times)
  (a) Superimposed screens / mode 0      3
  (b) Superimposed screens / modes 1&2   0
  (c) Split screen / mode 0              2
  (d) Split screen / modes 1&2           1

Table 2. Average instruction time.

  Method                                 Instructional time (s)
  (a) Superimposed screens / mode 0      198.7
  (b) Superimposed screens / modes 1&2   179.3
  (c) Split screen / mode 0              187.4
  (d) Split screen / modes 1&2           157.1
the results of the questionnaire for each method are as follows: a) in this method, users reported that it was hard to see most of the time. they also felt that the two environments existed in the same place. the average answer was in the middle of the scale, similar to question 4. they also felt that the time taken to instruct was too long. b) for this method, most users reported that it is easy to see, and they felt as if they existed in two different locations at the same time. it was easy to instruct both subjects in this method, and almost no one got confused when instructing in this setting. users reported the time taken to be a little lower than in method (a), but it is still considered a long time. c) in this method, users reported that it was easier to see the environments. some users felt that they existed in 2 different places while others felt that they existed in the same place, with most answers towards the middle of the scale. most users were able to instruct very well. question 4 was also in the middle of the scale, while all users reported that the instructing time was short. d) all the results are exactly the same as (c) except for question 4; in this method, no one got confused. overall, the results of the questionnaire show that the users had the best experience when operating with method (d).
table 1. average instruction error.
method | instructional error (times)
(a) superimposed screens / mode 0 | 3
(b) superimposed screens / mode 1&2 | 0
(c) split screen / mode 0 | 2
(d) split screen / mode 1&2 | 1
table 2. average instruction time.
method | instructional time (s)
(a) superimposed screens / mode 0 | 198.7
(b) superimposed screens / mode 1&2 | 179.3
(c) split screen / mode 0 | 187.4
(d) split screen / mode 1&2 | 157.1
4.3. results and discussion on subject operation the focus of the research on the subject side is to study the effect of the experimenter having to instruct two people at once and how the subject reacts to it, especially when the operation method changes. the requirements on this part are as follows: a) fulfil the given task successfully, with minimum errors and quickly. b) be able to feel the presence of the experimenter instructing them. before starting the task, only the experimenter knows which type of operation mode is used. however, depending on the mode used, the subjects reacted differently to the task, so a survey of four questions was conducted after each task to further investigate. the questionnaire (table 4) used a linear scale from 1-7, similar to the experimenter survey. the methods used are the same as in table 2, and the results are as follows: a) in this method, some users did not complete the task successfully and had a harder time following the instructions. most users also felt that the instructor was always watching, and they felt his presence most of the time. b) most users had no issues completing the task in this mode. on the other hand, they felt the presence of the instructor less than in method (a). c) the results of this method are the same as (a). d) the results of this method are the same as (b). from the answers to the survey, it can be seen that the viewing method of the experimenter has no effect on the subject's performance. although the operation modes differ, users felt more at ease when the robots were moving all the time, which made them feel the presence of the instructor more.
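the next paragraph verifies this observation statistically with a wilcoxon signed-rank test. as a rough sketch of how such a paired comparison can be computed — the ratings below are made-up placeholders, not the data collected in this study — scipy's implementation can be used:

```python
# paired wilcoxon signed-rank comparison of 1-7 "hard to follow" ratings
# under mode 0 vs. modes 1&2. the numbers are hypothetical placeholders.
from scipy.stats import wilcoxon

mode0_ratings = [5, 6, 4, 5, 6, 5, 4, 6, 5, 5, 6, 4]   # hypothetical
mode12_ratings = [3, 2, 3, 4, 2, 3, 3, 2, 4, 3, 2, 3]  # hypothetical

stat, p = wilcoxon(mode0_ratings, mode12_ratings)
# the paper compares its test statistic against a table of critical
# values (reporting 5 < 8) rather than a p-value; scipy reports both
# the statistic and a p-value.
print(f"w = {stat}, p = {p:.4f}")
```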
to further prove this, the wilcoxon signed-rank test shown in figure 6 was used to verify which robot was easier to follow, with 7 being hard to follow and 1 being easy to follow. the score improved significantly when modes 1&2 were used compared to mode 0: as the test statistic is lower than the critical value (5 < 8), we reject h0. this is sufficient evidence that there is a difference between the two modes in terms of which one is easier to follow. 5. discussion on the practical application of easy-lab in this section, the practical application of the proposed method presented in this study is discussed. the advantages of using easy-lab in dual presence settings, and of the methods used, can be clarified by comparing this method with other manipulation methods. following the comparison, concerns about using this interface in real life are discussed. 5.1. comparison with other similar methods based on the results of the previous section, the proposed method was compared with other similar methods: a. gesturecam: a video communication system for sympathetic remote collaboration. in the sharedview system, the operator wears the sharedview camera, whose image is sent to a display within the instructor's sight, and the instructor makes gestures in front of the display. the display and the gestures are captured by a camera and sent back to the operator's hmd. in this way, the instructor can give instructions with gestures [11]. easy-lab provides a more modern use case and adds the ability to increase the number of robots as needed; in this research, two robots were required. b. use of gaze and hand pointers in mixed reality remote collaboration. this mixed reality remote collaboration system supports the use of hand gesture, sketch, hand pointer and gaze pointer visual cues for communication during the collaboration. the system tracks the remote expert's hands to use the visual cues and employs a 360-degree camera to share the task space [12]. this system requires more practice to get used to its operation and takes more time than easy-lab to instruct someone. c. telesar vi: telexistence surrogate anthropomorphic robot vi. telesar vi is a newly developed telexistence platform for the accel embodied media project. it was designed and implemented with a mechanically unconstrained full-body master cockpit and a 67 degrees-of-freedom (dof) anthropomorphic avatar robot. the avatar robot can operate in a sitting position, since the main area of operation is intended to be manipulation and gestural interaction. the system provides a full-body experience of our extended "body schema", which allows users to maintain an up-to-date representation in space of the positions of their different body parts, including their head, torso, arms, hands, and legs [13]. while this system provides more precise movements, it is too expensive for most researchers to use and requires experience to operate; its heavy and large size also makes it hard to move. 5.2. limitations in this study, only a laser pointer was used to give instructions; the purpose was to ensure that the instruction method does not interfere with verifying which viewing method or operation mode is better, so it was kept simple. a disadvantage of this is that it is hard to transmit
information other than the pointing; a button to stop the laser pointer in one environment should be added to reduce confusion as much as possible. another useful feature would be a method of transmitting the experimenter's audio to the subjects being instructed. one more restriction faced was the small number of experimenters; to provide more concrete results and findings, the number of participants should be increased.
table 3. experimenter evaluation questionnaire.
qe1 | can you see both environments a & b easily?
qe2 | do you feel that you exist in 2 different places at the same time, or do you feel that both environments exist in the same place?
qe3 | were you able to instruct the 2 subjects equally well?
qe4 | did you get confused with the instructing when you switched environments?
qe5 | did you feel that the time taken to instruct the subjects was too long?
table 4. subject evaluation questionnaire.
qs1 | were you able to fulfil the task successfully?
qs2 | was the instruction of the robot easy to follow?
qs3 | were you able to feel the presence of the person instructing you?
qs4 | did you feel that the instructor is always watching you and not someone else?
figure 6. wilcoxon signed-rank test result: ease of following the robot in mode 0 compared with modes 1&2.
6. conclusions in this research, we proposed using easy-lab to perform dual presence operation control. the task performed had 4 different settings. first, two different viewing methods were selected and tested: superimposed screens and split screens. it was found that the split screen viewing method provided better results; the time taken to complete the task was shorter, with a minimal number of errors. two operation modes were used in this system: one mode moved all robots at the same time, while the other allowed the experimenter to choose one robot to control at a time. the best control method in terms of ease of use and the lowest number of errors is to control one robot at a time. furthermore, on the subject side, users had an easier time following the instructions of the robot when one robot at a time was being controlled. in the future, an intuitive posture instruction method will be developed that allows more information to be transmitted and provides a greater sense of being present in multiple places at the same time. beyond the control method, the next step is to increase the number of robots and subjects to evolve the system from dual presence to multi-presence. we have yet to test the limit of how many subjects a single person can instruct using easy-lab. as the number of subjects increases, the control method must also be revised to accommodate such a system. finally, this system has the potential to be used widely in education, conferences, etc.; further testing is therefore needed in these environments, as this study might suggest a new method of working with other humans from remote places. acknowledgement this research is supported by waseda university global robot academic institute, waseda university green computing systems research organization and by jst erato grant number jpmjer1701, japan. references [1] v. vimolmongkolporn, f. kato, t. handa, y. iwasaki, h. iwata, design and development of 6 dofs detachable robotic head utilizing differential gear mechanism to imitate human head-waist motion, 2022 ieee/sice international symposium on system integration (sii), narvik, norway, 9-12 january 2022, pp. 467-472. doi: 10.1109/sii52469.2022.9708793 [2] y.
iwasaki, j. oh, t. handa, a. a. sereidi, v. vimolmongkolporn, f. kato, h. iwata, experiment assisting system with local augmented body (easy-lab) for subject experiments under the covid-19 pandemic, acm siggraph 2021 emerging technologies, virtual, 9-13 august 2021, pp. 1-4. doi: 10.1145/3450550.3465345 [3] i. yamano, t. maeno, five-fingered robot hand using ultrasonic motors and elastic elements, proc. of the 2005 ieee international conference on robotics and automation, barcelona, spain, 18-22 april 2005, pp. 2673-2678. doi: 10.1109/robot.2005.1570517 [4] j. butterfass, m. grebenstein, h. liu, g. hirzinger, dlr-hand ii: next generation of a dextrous robot hand, proc. of the 2001 icra, ieee int. conference on robotics and automation (cat. no.01ch37164), seoul, korea (south), 21-26 may 2001, vol. 1, pp. 109-114. doi: 10.1109/robot.2001.932538 [5] w. zhou, j. zhu, y. chen, j. yang, e. dong, h. zhang, x. tang, visual perception design and evaluation of electric working robots, ieee int. conference on mechatronics and automation, tianjin, china, 4-7 august 2019, pp. 886-891. doi: 10.1109/icma.2019.8816366 [6] h. debarba, e. molla, b. herbelin, r. boulic, characterizing embodied interaction in first and third person perspective viewpoints, ieee symposium on 3d user interfaces (3dui), arles, france, 23-24 march 2015, pp. 67-72. doi: 10.1109/3dui.2015.7131728 [7] r. sato, m. kamezaki, j. yang, s. sugano, visual attention to appropriate monitors and parts using augmented reality for decreasing cognitive load in unmanned construction, proc. of the 6th int. conference on advanced mechatronics, no. 15-210, december 2015, p. 45. doi: 10.1299/jsmeicam.2015.6.45 [8] t. miura, behavioral and visual attention, kazama shobo, chiyoda, japan, 1996, isbn 978-4-7599-1936-3. [9] s. iizuka, y. iwasaki, h. iwata, research on the detachable body - validation of transparency ratio of displays for the co-presence dual task, the robotics and mechatronics conference, hiroshima, japan, 5-8 june 2019, paper no. 2a2-l04 (in japanese). [10] m. y. saraiji, s. sugimoto, c. l. fernando, k. minamizawa, s. tachi, layered telepresence: simultaneous multi presence experience using eye gaze based perceptual awareness blending, acm siggraph 2016 posters, anaheim, usa, 24-28 july 2016, pp. 1-2. doi: 10.1145/2945078.2945098 [11] h. kuzuoka, t. kosuge, m. tanaka, gesturecam: a video communication system for sympathetic remote collaboration, proc. of the 1994 acm conference on computer supported cooperative work (cscw '94), chapel hill, north carolina, usa, 22-26 october 1994, pp. 35-43. doi: 10.1145/192844.192866 [12] s. kim, a. jing, h. park, s. h. kim, g. lee, m. billinghurst, use of gaze and hand pointers in mixed reality remote collaboration, 9th int. conference on smart media and applications (sma), jeju, republic of korea, 17-19 september 2020, pp. 1-6. [13] s. tachi, y. inoue, f. kato, telesar vi: telexistence surrogate anthropomorphic robot vi, int. journal of humanoid robotics 17, 05 (2020), 2050019. doi: 10.1142/s021984362050019x [14] htc vive, 2011. online [accessed 26 february 2022] https://www.vive.com/eu/product/vive/ [15] arduino. online [accessed 26 february 2022] https://www.arduino.cc/ [16] c. zaiontz, wilcoxon signed-ranks table, 2020. online [accessed 26 february 2022] http://www.real-statistics.com/statistics-tables/wilcoxon-signed-ranks-table/ [17] unity technologies japan/ucl, unity-chan!, 2014.
online [accessed 26 february 2022] https://unity-chan.com/ using coverage path planning methods for car park exploration acta imeko issn: 2221-870x september 2021, volume 10, number 3, 15-27 using coverage path planning methods for car park exploration anna barbara ádám1, lászló kocsány1, emese gincsainé szádeczky-kardoss1 1 department of control engineering and information technology, budapest university of technology and economics, budapest, hungary section: research paper keywords: car park exploration; coverage path planning; parking assistant system citation: anna barbara ádám, lászló kocsány, emese gincsainé szádeczky-kardoss, using coverage path planning methods for car park exploration, acta imeko, vol. 10, no. 3, article 5, september 2021, identifier: imeko-acta-10 (2021)-03-05 section editor: bálint kiss, budapest university of technology and economics, hungary received january 13, 2021; in final form september 17, 2021; published september 2021 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: anna barbara ádám, e-mail: annadam97@gmail.com abstract: with the increasing number of vehicles on the roads, finding a free parking space has become a time-consuming problem. traditional car parks are not equipped with occupancy sensors, so planning a systematic traversal of a car park can ease and shorten the search. since car park exploration is similar to coverage path planning (cpp) problems, the core concepts of cpp algorithms can be used. this paper presents a method that divides maps into smaller cells using trapezoidal cell decomposition and then plans the traversal using wavefront algorithm core concepts. this method can be used for multi-storey car parks by planning the traversal of each floor separately and then the path from one floor to the next. several alternative exploration paths can be generated by taking different personal preferences into account, such as the length of the driven route and the proximity to preferred locations. the planned traversals are compared by step number, the cell visitedness ratio, the number of visits to each cell and the cost function. the comparison of the methods is based on simulation results. 1. introduction with an increasing number of vehicles on the roads, it is becoming difficult to find a free parking space. several sensor-based parking assistant systems have been developed in the past decade to make it easier to find a free parking space in busy areas, such as city centres and shopping malls. these systems mainly include sensors installed in each parking space that detect the presence of a car: pressure sensors can measure its weight, magnetic sensors can sense the car body, and infrared or ultrasonic sensors can determine if something is in the examined area.
the main problem with these parking systems is the need for extra infrastructure and sensors. as most car parks are not equipped with sensors that indicate the occupancy of the parking spaces, a vehicle must drive past a parking space to be able to detect if it is free [1]. there are also internet-of-things (iot) systems, which involve not only the signals of the sensors but also mobile applications [2]. these systems can navigate the driver to the free parking spaces in the shortest possible time. the main purpose of this paper is to present a car park exploration method that requires no installed sensors and navigates the vehicle to all the possible free parking spaces. autonomous vehicles are able to detect the free parking spaces with sensors installed in the vehicle (e.g. lidar [3]). as the coverage path planning (cpp) problem is similar to the car park exploration problem, the core concepts of cpp algorithms can be used. this paper is organised as follows. section 2 presents the most commonly used parking systems and provides examples, while the formulation of the car park exploration problem can be found in section 3. section 4 presents some cell decomposition and grid-based cpp methods and explains how they are used for car park exploration. an improved version of trapezoidal cell decomposition can also be found in this section. different traversal methods are presented in section 5, while section 6 introduces a cost function to grade the free parking spaces. the presented exploration methods are compared in section 7, and the method for using them in multi-storey car parks is presented in section 8. section 9 derives the conclusions from the different traversals and presents possible avenues for future work. 2. parking assistant systems the literature proposes various solutions for autonomous parking with sensors installed in car parks. these systems require a centralised parking system in which a server stores the occupancy of the parking spaces based on the sensor signals. when a vehicle requests a parking space, the server reserves one for the vehicle. cctv systems are widely available in car parks; consequently, image processing algorithms are able to detect the occupancy of a parking space based on camera signals. for example, athira et al.
[4] present an optical character recognition system that detects occupied parking spaces based on cctv signals. another solution is the availability of iot-enabled cities, often referred to as smart cities. smart cities can provide information about the availability of parking spaces. if a centralised parking system server is available, the vehicle is able to request information about the occupancy of parking spaces from the server. al-turjman et al. [2] present a survey of iot-enabled cities. the use-case practices are also presented, including the smart payment system, the parking reservation system and the e-parking system. there are several commercially available solutions for smart parking. in hungary, two such solutions can be found in budapest. the first is parkl [5], which is a smartphone application providing information about the location of car parks and making cashless payment possible. parkl does not provide the exact location of the parking spaces but the location of parking zones in the city where possible parking spaces can be found. in comparison, parker [6] (developed by smart lynx ltd.) provides information about the exact occupancy of about 1,500 parking spaces in the city centre. this solution uses preinstalled sensors to indicate the occupancy of a given parking space to the driver. parker also provides a cashless payment service for its users. the algorithm presented in this paper provides a car park exploration method using cpp. car park exploration is required because no signals from preinstalled sensors in the car park are used. a complete system is therefore required to perform the car park search, from the creation of the exploration path to detecting appropriate parking spaces and performing the parking manoeuvre. this system is called the autonomous parking system and consists of four subsystems, as the parking task can be broken down into four subtasks. the first task of the autonomous parking system is to provide an exploration path for the vehicle. the focus of this paper is to propose a solution for this subsystem by applying cpp algorithms. this subsystem provides multiple goal configurations for the vehicle. these goal configurations are required because, without preinstalled sensors, no single goal configuration is known in advance. the second subsystem is a detector subsystem. its main task is to scan the environment and detect parking spaces for the vehicle while the vehicle is driving along the exploration path [3]. in order to be able to detect parking spaces, a sensor must be mounted on the vehicle. the first subsystem should consider the sensor parameters during the planning of the exploration path. when a parking space is found, the third and fourth subsystems plan and execute the parking manoeuvre [7], [8]. 3. car park exploration the goal of car park exploration is to plan a path leading to all the possible free parking spaces. the binary map (figure 2) of the car park (figure 1) is known, and the parking spaces are treated as obstacles. while following the planned path, a sensor (e.g. lidar) searches for a suitable free parking space. in this formulation, $C \subset \mathbb{R}^2$ defines the workspace of car park exploration and $\mathcal{A}$ denotes the vehicle. the state of the vehicle is $q = [x\; y]$, where $\mathcal{A}(q) \subset C$, while $[x\; y]$ denotes the position of the vehicle in a fixed frame (the orientation of the vehicle is not taken into account).
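to make the formulation concrete, here is a small numerical sketch (python/numpy; all names are illustrative, not from the paper's implementation) of the coverage condition that equations (1)–(4) below formalise: a point that must be visited counts as covered once some sampled path position comes within the sensor range δ of it.

```python
import numpy as np

def covered(path_xy: np.ndarray, visit_xy: np.ndarray, delta: float) -> np.ndarray:
    """boolean mask: which must-visit points fall within sensor range
    delta of at least one sampled path position (euclidean distance)."""
    # pairwise distances between path samples (n,2) and visit points (m,2)
    d = np.linalg.norm(path_xy[:, None, :] - visit_xy[None, :, :], axis=-1)
    return (d <= delta).any(axis=0)

# illustrative data: a short straight path and three target points
path = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
targets = np.array([[1.0, 0.5], [2.0, 2.0], [0.5, -0.3]])
mask = covered(path, targets, delta=1.0)
print(mask.all())  # true only when every must-visit point is seen, i.e. eq. (4) holds
```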
the workspace consists of obstacles ($C_{\mathrm{obs}} \subset C$) and free spaces ($C_{\mathrm{free}} = C \setminus C_{\mathrm{obs}}$), some of which need to be visited ($C_{\mathrm{vis}} \subseteq C_{\mathrm{free}}$). the vehicle can move only in free space ($\forall q \in C_{\mathrm{free}}$). the vehicle moves on a collision-free path $\tau$, where $s \in \mathbb{R}$ is a scalar path parameter ($s \in [0, T]$, $T$ is the length of the whole path): \[ \tau: s \mapsto q, \quad \forall s \in [0, T]: \tau(s) \in C_{\mathrm{free}} \,. \tag{1} \] $L(q) \subset C$ denotes the points that are inside the range $\delta$ of the sensor, which detects the free parking spaces: \[ L(q) = \{ z \in C_{\mathrm{free}} \mid \lVert q - z \rVert \le \delta \} \,, \tag{2} \] where $\lVert q - z \rVert$ is the euclidean distance between points $q$ and $z$. the points seen while traversing the path are \[ L(\tau(t)) = \bigcup_{s \in [0, t]} L(\tau(s)) \,. \tag{3} \] the aim is to reach every position that should be visited during the exploration: \[ C_{\mathrm{vis}} \subseteq L(\tau(T)) \,. \tag{4} \] other constraints that should be considered are as follows: • the start position is $\tau(0) = q_{\mathrm{init}}$. • a cost can be assigned to the path as $w_1(\tau)$ (e.g. the length of the path). • preferred positions can be provided, which are considered first. a cost $w_2(q)$ can be assigned to the position $q \in C_{\mathrm{free}}$ (e.g. the distance from the target position). • the traversal can be interrupted when a condition is met (a parking space is detected with the lidar). • when stopping while driving (at some point with path parameter $s_1$), the cost of the traversal is the personal-preference weighted ($\alpha$) sum of $w_1$ and $w_2$: $w_1(\tau) + \alpha\, w_2(\tau(s_1))$ (the cost of the path up to that point plus the cost of the position). • there might be constraints on the order of the configurations in the path $\tau$ (e.g. one-way streets). figure 1. map of a car park. figure 2. binary map of a car park. 4. using coverage path planning methods the purpose of cpp methods is to plan an obstacle-free path that reaches every free point of the given configuration. this section presents computational problems related to car park traversal. it also explains the idea behind using cpp algorithms for the traversal and provides an overview of the most commonly used cell decomposition and grid-based methods. finally, the methods chosen for implementation are presented. 4.1. related computational problems cpp is similar to the travelling salesman problem [10], in which the salesman has to visit all the cities via the shortest route possible and return to the beginning. the cities are represented as nodes on a graph and the routes between them are the edges. the travelling salesman problem is np-hard, which means it is at least as hard as any problem in np, so no polynomial-time algorithm is known for it. during path planning, an important factor in terms of the path is the exploration of the environment. two computational geometric problems are related to this: the museum problem [11] asks for the fewest guards required to observe the whole museum, while in the watchman problem [12], only the map of the region is given, and the main purpose is to plan the shortest possible path between the obstacles so that the watchman can guard the entire area. 4.2. cell decomposition-based path planning there are several cell decomposition-based path planning methods that divide a map into cells. exact cellular decomposition methods [13] divide the free space into various-sized nonoverlapping cells. the union of the cells covers the whole free configuration. trapezoidal cell decomposition [14] can only be used in polygonal environments, as it extends rays from the vertices of the obstacles.
these rays and the edges of the obstacles form the cell borders, dividing a map into trapezoidal or triangular cells. an example of this method can be seen in figure 3. boustrophedon (greek: 'ox turning') decomposition [15] extends rays from the entry and exit points of the obstacles, so fewer rays are extended than in trapezoidal cell decomposition. consequently, the cells have a larger area. figure 4 shows an example of this method. when applying morse decomposition [13], the critical points of a smooth function, called the morse function, indicate the boundaries of the cells (see figure 5). greedy convex polygon decomposition [18] can be used when there are polygonal obstacles. this method consists of two types of cuts: • a single cut: a cut from a nonconvex vertex to an existing edge or another cut, • a matching cut: a cut connecting two nonconvex vertices. first, all the matching cuts are made on the nonconvex vertices, then the single cuts are made for the unmatched vertices. in the example in figure 6, the matching cuts are green and the single cuts are red. the cell boundaries are the set of cuts and the edges of the obstacles. after dividing the map, the adjacency matrix of the cells can be created. two cells are adjacent if they have a common boundary and the boundary has a given number of common points. the purpose of cell decomposition-based methods is to visit every cell once, although this is not always possible. if the adjacency matrix of the cells is known, a traversal can be planned, as it is known which cell can be visited from the current one. after creating the traversal, a path can be planned that leads through the cells in the given order, reaching every free point of the configuration. figure 3. example of trapezoidal cell decomposition [14]. figure 4. example of boustrophedon decomposition [16]. figure 5. example of morse decomposition [17]. 4.3. grid-based path planning in contrast to the cell decomposition-based methods, grid-based methods divide the map into same-sized cells, called a grid. the cells are classified into two groups: those that have obstacle points inside the cell and those without obstacle points. the wavefront algorithm [9] assigns distance values to every cell in the map, with each neighbouring cell being assigned a distance value one larger, starting from the initial cell, which has a distance value of 0. the traversal of the cells is based on the distance values, and the neighbouring unvisited cell with the largest distance value is the following cell. if a number of unvisited neighbouring cells have the same distance value, the following cell is selected randomly. an example of this method can be seen in figure 7. the spiral spanning-tree coverage method [21] builds up a graph with nodes that represent the centres of the obstacle-free cells and edges that represent the lines between the neighbouring cell centres. the cells are divided into four subcells, which are classified into two groups: those that have four unvisited subcells (called new cells) and those that have at least one visited subcell (called old cells). first, every cell is unvisited, and the algorithm starts from the initial cell and marks it as an old cell. the initial cell is the root of the spanning tree. in the subsequent steps, every cell neighbouring the current cell is tested to see if it is a new cell. if the current cell has a neighbouring new cell, a spanning-tree edge is added from the current cell to the neighbouring cell, and the algorithm then moves to a subcell of the neighbouring cell and adds the centre of the neighbouring cell to the tree as a node. if there is a backwards move from the current cell to a subcell of the parent cell along the right-hand edge of the spanning tree, the algorithm terminates. figure 8 provides an example of this method.
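as an illustration of the wavefront assignment and traversal rule described above (a sketch only — the grid encoding, names and breadth-first formulation are mine, not taken from [9]):

```python
from collections import deque

def wavefront(grid, start):
    """grid: 2-d list of booleans, True = free cell; start: (row, col).
    returns a dict mapping each reachable free cell to its distance
    value: the start cell gets 0, each neighbour a value one larger."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def next_cell(cell, dist, visited):
    """neighbouring unvisited cell with the largest distance value.
    the paper breaks ties randomly; max() here breaks them arbitrarily."""
    r, c = cell
    options = [n for n in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1))
               if n in dist and n not in visited]
    return max(options, key=dist.get, default=None)
```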
4.4. using coverage path planning methods for car park exploration in order to plan the traversal of a car park, the map of the car park should be divided into smaller regions. as most car parks consist of polygonal obstacles, trapezoidal cell decomposition can be used to divide the map. the cells can be treated as a grid, so wavefront algorithm core concepts can be used to plan the traversal of the cells. the main advantage of the wavefront algorithm is that it can take personal preferences into account by modifying the distance values of the cells. 4.5. rectangular cell decomposition as rectangular decomposition is based on trapezoidal cell decomposition, it can only be used in polygonal environments. if the map contains nonpolygonal obstacles, the bounding boxes of the obstacles should be taken into account when decomposing the map. trapezoidal cell decomposition decomposes the map using only one axis (x or y), so the decomposed map may contain cells covering large areas. the main disadvantage of this method is that the traversal of the cells is ambiguous, so a path should be planned inside each cell from which all of the free parking spaces can be seen. rectangular cell decomposition decomposes the map using both the x and y axes. the final cells are the intersections of the cells decomposed by these axes. the final cells are smaller and rectangular, which is the main advantage of this method; every cell has only one neighbouring cell in each direction. an example of a decomposed map can be seen in figure 9, where the coloured rectangles indicate the cells. the adjacency matrix of the cells can be created after dividing the map. this matrix is an n × 4 matrix, where n is the number of cells. the rows of the matrix store the indices of the adjacent cells of every cell in each direction. if two cells have a common boundary and the vehicle can pass from one cell to the other, these cells are adjacent. cells neighbouring obstacles and cells on the edges of the map do not have four neighbouring cells, so the neighbouring obstacles and edges are considered as their neighbours (an implementational solution might be to store these false neighbours as dummy indices, e.g. −1). if there are one-way road sections, moving from one cell to the other is permitted, but moving in the opposite direction is prohibited, so the i-th cell is adjacent to the j-th cell but not vice versa. this means that the i-th row of the adjacency matrix contains the index j in the appropriate column, but the j-th row contains a dummy index instead of the index i. more details of this algorithm can be found in [20]. figure 6. example of greedy convex decomposition [18]. figure 7. example of a wavefront algorithm [19]. figure 8. example of spiral spanning-tree coverage [22].
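as a sketch of the n × 4 adjacency representation just described (illustrative code, not the authors' implementation; −1 marks the obstacle/edge dummies, and one-way sections simply leave the reverse entry as a dummy):

```python
import numpy as np

N, E, S, W = 0, 1, 2, 3   # one column per direction
DUMMY = -1                # obstacle, map edge, or forbidden direction

def make_adjacency(n_cells, links, one_way=()):
    """links: iterable of (i, j, direction) meaning cell j lies in the
    given direction from cell i. one_way: set of (i, j) pairs where
    passing i -> j is allowed but j -> i is not."""
    adj = np.full((n_cells, 4), DUMMY, dtype=int)
    opposite = {N: S, S: N, E: W, W: E}
    for i, j, d in links:
        adj[i, d] = j
        if (i, j) not in one_way:          # two-way: also store j -> i
            adj[j, opposite[d]] = i
    return adj

# tiny example: three cells in a row, the middle-to-right link one-way
adj = make_adjacency(3, [(0, 1, E), (1, 2, E)], one_way={(1, 2)})
# row 2 keeps a dummy west entry, so the traversal cannot go 2 -> 1
```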
5. traversal of the cells several different traversal methods are presented in this section. the map presented in section 5.1 is used as an illustration. 5.1. map of the car park the car park consists of only one floor, with the black areas representing the obstacles (the possible free parking spaces are also considered as obstacles). the red x represents the preferred location. the map of this car park can be seen in figure 10, the decomposed map can be seen in figure 9 (rectangular decomposition was used) and the distance values assigned by the wavefront algorithm can be seen in figure 11. all the algorithms stop when the step number exceeds a given number, which depends on the number of cells. the maximal step number of the following simulations is 57. 5.2. visitedness- and preference-based traversal this method first visits the unvisited neighbouring cells (see figure 12). if a number of neighbouring cells are unvisited, the cell with the highest preference value is the following cell (see figure 13). in the map in section 5.1, one cell remains unvisited. this results from the low preference values in that area. the algorithm visits the neighbouring cells of the unvisited cell only once, and the other cells have higher preference values, so they are the next cells of the traversal. a traversal designed by this method can be seen in figure 14. figure 9. the decomposed map of the car park; the initial cell is cell number 10. figure 10. map of the car park; the red x represents the preferred location. figure 11. the distance values of the cells. figure 12. the visitedness of the cells determines the following cell, which is in red. figure 13. the preference value determines the following cell. figure 14. the visitedness- and preference-based traversal of the car park. 5.3. visitedness- and preference-based traversal with dynamic preference this method differs from the previous one by applying dynamic preference. this means that the preference value of the cell visited last is decreased during the traversal. in this case, the preference values of the cells were decreased by 2 every time they were visited. as a result, every cell could be visited; the preference values of the cells visited first were decreased enough that the algorithm would select the cells with lower preference values as the next cells. figure 15 provides an example of this method. 5.4. visitedness- and preference-based traversal with dynamic preference while avoiding unnecessary cells a map may contain cells that are not adjacent to parking spaces, as they do not contain any points $c \in C_{\mathrm{vis}}$, so it is not necessary to visit them. in this case, the preference values of these cells can be changed to 0, and they can be marked as visited at the beginning. the algorithm will then only visit these cells in order to reach other cells. in figure 16, the white cells represent cells that do not need to be visited. 5.5. preference- and visitedness-based traversal this method selects the next cell based on its preference value (see figure 17). if a number of neighbouring cells have the same preference value, the algorithm selects the following one based on the visitedness of the cell (see figure 18). as can be seen in figure 19, the cells with the highest preference values are visited multiple times, while cells with lower preference values may remain unvisited.
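a compact sketch of the selection rules in sections 5.2–5.5 (illustrative only; the adjacency rows follow the representation sketched in section 4.5, and the tie-breaking details are my assumptions — the code assumes the current cell has at least one passable neighbour):

```python
def next_cell_visitedness_first(cell, adj, pref, visits):
    """sections 5.2-5.4: prefer unvisited neighbours; among candidates,
    take the one with the highest preference value."""
    neigh = [j for j in adj[cell] if j != -1]
    unvisited = [j for j in neigh if visits[j] == 0]
    pool = unvisited if unvisited else neigh
    return max(pool, key=lambda j: pref[j])

def next_cell_preference_first(cell, adj, pref, visits):
    """section 5.5: highest preference first; visitedness breaks ties."""
    neigh = [j for j in adj[cell] if j != -1]
    best = max(pref[j] for j in neigh)
    pool = [j for j in neigh if pref[j] == best]
    unvisited = [j for j in pool if visits[j] == 0]
    return (unvisited or pool)[0]

def step(cell, adj, pref, visits, rule, dynamic=False, decrement=2):
    """advance one step; with dynamic preference (sections 5.3/5.7),
    the visited cell's preference value is decreased by 2."""
    nxt = rule(cell, adj, pref, visits)
    visits[nxt] += 1
    if dynamic:
        pref[nxt] -= decrement
    return nxt
```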
5.6. preference- and visitedness-based traversal with additional preference values as preference- and visitedness-based traversal mainly selects the next cell based on its preference value, this method should be used in cases where the driver wants to park at a given location. additional preference values can also be applied to attract the traversal to the desired location. figure 20 provides an example of the applied additional preference values. the highest additional preference value (5) was added to the cell that contains the preferred location (see figure 10), and each neighbouring cell gets an additional preference value one smaller, up to the fourth neighbouring cell (the range of the preference is 5). it can be seen in figure 21 that the algorithm navigates to the highly preferred area via the shortest possible path, and subsequently, the traversal only moves around one obstacle by traversing the cells with the highest preference values. figure 15. the visitedness- and preference-based traversal of the car park with dynamic preference. figure 16. the visitedness- and preference-based traversal of the car park with dynamic preference while avoiding unnecessary cells. figure 17. the preference value determines the following cell. figure 18. the visitedness of the cells determines the following cell. figure 19. preference- and visitedness-based traversal. 5.7. preference- and visitedness-based traversal with additional preference values and dynamic preference the previous algorithm can also be applied when using dynamic preference values. in this case, the preference values of the visited cells are decreased by 2 every time they are visited. it can be seen in figure 22 that the algorithm visits the cells with high preference values, but then also visits the neighbouring cells. in this case, every cell is visited. this method might be the closest to human thinking: first, it searches for free parking spaces in the highly preferred area, then it goes a bit further and returns to the preferred location, and finally, it also visits the cells that are furthest from the preferred cells. 5.8. making the traversal repeatable the presented traversals do not lead back to the initial position. it is possible that no free parking space was found during the first traversal of the car park, so the traversal should be repeated in order to find a suitable free parking space. there are two possible solutions to this problem: regenerate the traversal with the current cell as the initial cell, or plan a path back to the initial cell from the current cell so that the original traversal can be repeated. 6. cost of the traversal the quality of the detected free parking space can be measured with a cost function. there are two main factors that should be considered: the time spent searching for a free parking space (which is proportional to the driven-route length, $w_1$ in section 3) and the distance from the preferred location ($w_2$ in section 3). there is a trade-off between these two factors, but the quality of a parking space also depends on personal preferences: the importance of the proximity to a preferred location or of a short driven-route length.
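the cost terms formalised by equations (5)–(7) in the next paragraph translate directly into code; a minimal sketch (python; the function and variable names are mine):

```python
import math

def route_length(route):
    """eq. (5): sum of distances between consecutive cell centres."""
    return sum(math.dist(a, b) for a, b in zip(route, route[1:]))

def nearness(position, pref_locs, pref_vals):
    """eq. (6): preference-weighted sum of euclidean distances from
    the preferred locations."""
    total = sum(pref_vals)
    return sum(v / total * math.dist(position, p)
               for p, v in zip(pref_locs, pref_vals))

def cost(route, position, pref_locs, pref_vals, alpha):
    """eq. (7): personal-preference (alpha) weighted sum of the terms."""
    return (alpha * route_length(route)
            + (1 - alpha) * nearness(position, pref_locs, pref_vals))
```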
the $RouteLength$ ($w_1$ in section 3), given by equation (5), is the sum of the distances between the cell centres during the traversal; the $Nearness$ ($w_2$ in section 3) to the preferred location, given by equation (6), is the preference ($prefVal$) weighted sum of the euclidean distances measured from the preferred location; and the cost function ($cost$), which is calculated for each cell, is a personal-preference ($\alpha$) weighted sum of the $RouteLength$ and the $Nearness$, as given by equation (7). the smaller the $\alpha$ value, the more weight is given to parking close to the preferred location. \[ RouteLength = \sum_{i=1}^{N} \operatorname{length}(Route_i) \,, \tag{5} \] \[ Nearness = \sum_{i=1}^{P} \frac{prefVal_i}{\sum_{j=1}^{P} prefVal_j} \cdot \operatorname{dist}(position, prefLoc_i) \,, \tag{6} \] \[ cost = \alpha \cdot RouteLength + (1 - \alpha) \cdot Nearness \,, \tag{7} \] where $N$ is the number of route sections, $P$ is the number of preferred locations and $\alpha$ is the weighting factor. the quality of a free parking space can be determined by the value of the cost function. the driver should establish a threshold value, and free parking spaces with lower costs are adequate. the better a parking space is, the lower the value of the cost function. 7. comparison of the traversals in this section, the previously presented methods are compared based on the step number (section 7.1), the cell visitedness ratio (section 7.2), the number of visits to each cell (section 7.3) and the cost function (section 7.4). 7.1. step number the step number gives the number of steps needed to visit every cell at least once. traversing from one cell to another is considered as one step. as the algorithm stops when a given number of steps is exceeded, the maximum number of steps in this map is 57. table 1 shows how many steps are needed when the different methods are applied. it can be seen that when the visitedness- and preference-based methods (sections 5.2–5.4) were applied, fewer unvisited cells remained. when there were cells marked as unnecessary to visit (section 5.4), only 30 steps were sufficient to visit all the other cells, and in the end, two out of three marked cells remained unvisited. when dynamic preference (sections 5.3, 5.4 and 5.7) was applied, fewer steps were needed and fewer cells remained unvisited. figure 20. the additional preference values. figure 21. preference- and visitedness-based traversal with additional preference values. figure 22. preference- and visitedness-based traversal with additional preference values and dynamic preference. 7.2. cell visitedness ratio the cell visitedness ratio represents the number of visited cells relative to the number of cells at each step. figure 23 shows the visitedness ratio for the different methods using different colours. cells marked as unnecessary in section 5.4 are also included in the number of cells. the legend gives the section number in which each traversal method is presented. at first, each step visits a new cell, so the ratio grows at every step. when applying the preference- and visitedness-based methods without dynamic preference (sections 5.5 and 5.6), the final value of the ratio is lower than in the other cases. these methods visit only the cells with high preference values (60 %–75 % of the cells). the other methods visit nearly all the cells (more than 80 % of the cells). applying dynamic preference (section 5.7) makes the visiting of the remaining unvisited cells more probable. when the visitedness- and preference-based traversals (sections 5.2–5.4) were used, the algorithm visited the unvisited neighbouring cells first, so fewer cells remained unvisited.
7.3. the number of visits to each cell the following diagrams show the number of visits to each cell. the numbers on the horizontal axis represent the indices of the cells shown in figure 9. visitedness- and preference-based traversal the traversal of this method (section 5.2) visits the unvisited cells first. cells with high preference values are visited more frequently (5–6 times), but there is only one cell (cell 14) that remains unvisited. for example, the preference value of cell 13 is 9 (see the orange bar in figure 24), and this cell was visited 6 times (see the blue bar in figure 24). the diagram showing the number of visits to each cell can be seen in figure 24. visitedness- and preference-based traversal with dynamic preference by applying dynamic preference (section 5.3), all the cells become visited and the maximum number of visits to a cell is four. most of the cells are visited once or twice, so the application of dynamic preference decreases the number of visits (figure 25). visitedness- and preference-based traversal with dynamic preference while avoiding unnecessary cells there can be cells in a map that do not need to be visited because there are no parking spaces around them. in this case, they are only visited if the traversal goes through them in order to reach another cell (section 5.4). in figure 26, the only unvisited cells are unnecessary ones (unnecessary cell indices: 4, 5, 19). due to dynamic preference, the other cells are visited only once or twice. preference- and visitedness-based traversal this method (section 5.5) visits the cells with a high preference value first and more frequently because the algorithm selects the next cell based on the preference values of the neighbouring cells. in figure 27, it can be seen that most of the visited cells are visited 5–6 times, but a large number of cells remain unvisited. preference- and visitedness-based traversal with additional preference values in figure 28, it can be seen that, due to the application of additional preference values, fewer cells are visited, but those cells are visited 6 times. this algorithm (section 5.6) navigates to the most preferred location (an additional preference with a range of 5 and a value of 5 was applied in the cell with index 12) via the shortest possible route (these cells are only visited once), and then it only moves around the preferred area. preference- and visitedness-based traversal with additional preference values and dynamic preference when applying dynamic preference (section 5.7), all the cells are visited, and each cell is visited a maximum of 3 times instead of 5–6 times (figure 29). figure 23. cell visitedness ratio of the presented methods. figure 24. number of visits to each cell (method presented in section 5.2). figure 25. number of visits to each cell (method presented in section 5.3); the preference values shown in this figure are the original values, not the decreased values.
table 1. the number of steps needed when applying different methods.
method (section number) | 5.2 | 5.3 | 5.4 | 5.5 | 5.6 | 5.7
number of steps | 57 | 56 | 30 | 57 | 57 | 42
number of unvisited cells | 1 | 0 | 2 | 5 | 8 | 0
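the per-cell visit counts plotted in figures 24–29 are straightforward to derive from a traversal log; a small sketch (illustrative, using python's standard library — the log below is invented, not the paper's data):

```python
from collections import Counter

def visits_per_cell(traversal, n_cells):
    """traversal: sequence of visited cell indices, in order.
    returns a list where entry i is the number of visits to cell i
    (0 for cells the traversal never reached)."""
    counts = Counter(traversal)
    return [counts.get(i, 0) for i in range(n_cells)]

# illustrative log over 15 cells: cell 14 is never visited
log = [10, 9, 13, 12, 13, 9, 10, 11]
print(visits_per_cell(log, 15))
```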
7.4. cost function the algorithm can decide whether a free parking space is suitable or not based on a cost function. the defined cost function is based on two factors: the driven-route length and the distance from the preferred location. the estimated route length of a method depends on the step number; the higher the step number, the longer the route length (see table 2). the route length is measured in pixels (the real distance depends on the graphic scale of the map). the other aspect of the cost function is the distance from the preferred location. the distance is calculated at every step, and it depends strongly on the traversal method. the minimum distance from the preferred location is 50 pixels in this example. visitedness- and preference-based traversal this method (section 5.2) visits the unvisited cells first, reaching the possible minimum distance only 3 times during the traversal. it can be seen in figure 30 that the distance is between 180 and 330 pixels most of the time. visitedness- and preference-based traversal with dynamic preference figure 31 shows that the shape of the distance function is similar to the function in figure 30. this function also has three minimum points due to dynamic preference (section 5.3); only the visiting order of the cells is different, and the previously unvisited cells are also visited. figure 26. number of visits to each cell (method presented in section 5.4); the preference values shown in this figure are the original values, not the decreased values. figure 27. number of visits to each cell (method presented in section 5.5). figure 28. number of visits to each cell (method presented in section 5.6). figure 29. number of visits to each cell (method presented in section 5.7); the preference values shown in this figure are the original values, not the decreased values.
table 2. the route length of each method.
method (section number) | 5.2 | 5.3 | 5.4 | 5.5 | 5.6 | 5.7
number of steps | 57 | 56 | 30 | 57 | 57 | 42
route length | 6752 | 6608 | 3184 | 6555 | 6671 | 4379
visitedness- and preference-based traversal with dynamic preference while avoiding unnecessary cells this method (section 5.4) avoids the cells that have no parking spaces next to them, so the step number is lower than in the previous methods. it can be seen in figure 32 that the traversal visits the cell nearest to the preferred location only once, but the length of this traversal is much shorter due to the smaller number of steps. preference- and visitedness-based traversal this method (section 5.5) visits cells with a high preference value more often. the preferred location is far from the initial cell, so it has a high distance value. even without an additional preference value, the algorithm visits the preferred location 5 times, as can be seen in figure 33. preference- and visitedness-based traversal with additional preference values by applying additional preference values (section 5.6), the traversal reaches the minimal distance from the preferred location 6 times (figure 34). it can also be seen that the traversal repeats the same path, as one part of the function is repeated 5 times. preference- and visitedness-based traversal with additional preference values and dynamic preference as a result of dynamic preference (section 5.7), the cells that are further from the preferred location are also visited. the traversal has only three minimum points (figure 35) because when a cell becomes visited, its preference value is decreased. figure 30. distance from preferred locations (method presented in section 5.2). figure 31. distance from preferred locations (method presented in section 5.3). figure 32. distance from preferred locations (method presented in section 5.4). figure 33. distance from preferred locations (method presented in section 5.5). figure 34. distance from preferred locations (method presented in section 5.6).
figure 35. distance from preferred locations (method presented in section 5.7). 8. handling multi-storey car parks as there is a considerable lack of free parking spaces in city centres, an increasing number of multi-storey car parks have been constructed to ensure there are enough free parking spaces to meet demand. the traversal of these car parks is similar to that of the presented methods. the storeys of a multi-storey car park can be handled individually. this means that the traversal of each floor should be planned independently from the other floors, with the only extra requirement being to minimise the transitions between levels. to plan the traversal of each floor, the map of the floor must be known. some floors are preferred over others, so these floors should be traversed first. if there is no free parking space on the preferred floor, the driver must either go around again or go to another floor. the traversal to another floor can be forced by setting the preference values of all the cells to zero except for the cell representing the ramp to the other floor, which is given a high preference value with a large range. figure 36 shows the map of the first floor of a car park, and figure 37 shows its second floor. the car park entrance and the ramp to the next floor are also indicated on the maps. the maps of the floors are the same; only the entrance and exit locations differ. the planned traversals for the whole car park can be seen in figure 38 to figure 40. there are no additional preference values during the traversal, and the traversal is planned based on visitedness and preference, with dynamic preference values. the traversal of the first floor (figure 38) is much longer than the traversal of the second floor (figure 40). the first floor can be traversed in 56 steps, but the second floor needs only 33 steps. the traversal method of the floors is the same; only the entrance locations are different. this example shows how the traversal depends on the location of the entrance point. figure 36. map of the first floor. figure 37. map of the second floor. figure 38. traversal of the first floor. figure 39. traversal of the first floor to the exit ramp. figure 40. traversal of the second floor. 9. conclusion as searching for a free parking space is a time-consuming task, the aim of this paper is to design different car park exploration strategies. the implemented algorithms use the core concepts of cpp algorithms, which is possible because the car park exploration problem is similar to cpp problems. cpp algorithms are used to plan the paths of vacuum-cleaner robots, lawnmower robots and robots for other purposes, which are designed to reach every free point of a configuration in the shortest possible time while avoiding obstacles. during car park exploration, the vehicle does not have to reach every free point of the map; it only has to drive by all the possible free parking spaces. the car park map is decomposed by using trapezoidal cell decomposition. this method leads to cells with large areas, and the planned traversal contains reversals. if the map is decomposed using both the x and y axes, the created cells are smaller, and every cell has only one neighbouring cell in each direction. in this case, the traversal can take personal preferences into account by using the wavefront algorithm.
the distance values can be modified in order to attract the traversal to a given location (e.g. entrances, lifts, etc.). the traversal can be planned by taking the preference values and the visitedness of the cells into account. the first method presented only takes the preference values into account, so the traversal does not visit every cell on the map, only the ones with high preference values. another method chooses the next cell based on the preference value, but in the case of equal preference values, the next cell is an unvisited one. the third wavefront algorithm-based traversal is based on visitedness and preference, and it visits the unvisited neighbouring cells first. this method is more likely to visit all the cells on the map. in order to compare the presented methods, quality characteristics were defined: the step number, the cell visitedness ratio at each step, the number of visits to each cell and a cost function. the step number shows how many steps are needed in order to visit every cell; the algorithm stops if the step number exceeds a given number. the cell visitedness ratio shows the ratio of the visited cells at each step. the cost function is based on two factors: the driven-route length and the weighted sum of the distances from the preferred locations. if a free parking space is found, the decision as to whether it is suitable is based on the cost function. the implemented algorithms were tested in simulations, the results of which are detailed in section 5. the simulation results demonstrate that the different methods can be used in different situations. the visitedness- and preference-based traversals visit nearly every cell on the map. if dynamic preference is applied, there is a higher chance that every cell becomes visited. it is also possible that there are cells that do not need to be visited. in this case, their preference value can be changed to 0, and they are marked as visited at the beginning. these cells only become visited when the traversal passes through them to reach other cells. visitedness- and preference-based methods should be used when it is important to find a free parking space as soon as possible. these methods visit the cells with high preference values more frequently. if additional preference values are applied, the traversal moves around the preferred area. if dynamic preference is applied, the traversal visits the preferred cells first, then goes further away from the preferred area. preference- and visitedness-based methods should be used if it is important to park near the preferred location. future work will include testing the implemented methods in a real environment and handling situations in which multiple vehicles are searching for free parking spaces at the same time. acknowledgement the research reported in this paper and carried out at the budapest university of technology and economics has been supported by the national research development and innovation fund (tkp2020 institution excellence subprogram, grant no. bme-ie-mifm) based on the charter issued by the national research development and innovation office under the auspices of the ministry for innovation and technology. references [1] f. zafari, s. a. mahmud, g. m. khan, m. rahman, h. zafar, a survey of intelligent car parking system, j. of applied research and technology 11 (2013) pp. 714-726. doi: 10.1016/s1665-6423(13)71580-3 [2] f. al-turjman, a. malekloo, smart parking in iot-enabled cities: a survey, sustainable cities and society, volume 49 (2019), article 101608.
doi: 10.1016/j.scs.2019.101608
[3] a. b. ádám, l. kocsány, e. g. szádeczky-kardoss, v. tihanyi, parking lot exploration strategy, proc. of the 19th ieee int. symp. on computational intelligence and informatics and the 7th ieee int. conf. on recent achievements in mechatronics, automation, computer sciences and robotics (cinti-macro), szeged, hungary, 14-16 november 2019, pp. 000169-000174. doi: 10.1109/cinti-macro49179.2019.9105160
[4] a. athira, s. lekshmi, p. vijayan, b. kurian, smart parking system based on optical character recognition, proc. of the 3rd int. conf. on trends in electronics and informatics (icoei), tirunelveli, india, 23-25 april 2019, pp. 1184-1188. doi: 10.1109/icoei.2019.8862517
[5] parkl digital technologies kft, parkl innovative parking, 2020. online [accessed 15 september 2021] https://parkl.net/hu/
[6] smart lynx kft, parker, 2020. online [accessed 15 september 2021] https://smartlynx.hu/
[7] e. szádeczky-kardoss, b. kiss, designing a tracking controller for passenger cars with steering input, period. polytech. elec. eng. 52 (2008) pp. 137-144. doi: 10.3311/pp.ee.2008-3-4.02
[8] e. szádeczky-kardoss, b. kiss, continuous-curvature paths for mobile robots, period. polytech. elec. eng. 53 (2009) pp. 63-72. doi: 10.3311/pp.ee.2009-1-2.08
[9] e. galceran, m. carreras, a survey on coverage path planning for robotics, robotics and autonomous systems 61 (2013) pp. 1258-1276. doi: 10.1016/j.robot.2013.09.004
[10] r. bellman, dynamic programming treatment of the travelling salesman problem, j. acm 9 (1962) pp. 61-63. doi: 10.1145/321105.321111
[11] s. kumar ghosh, approximation algorithms for art gallery problems in polygons and terrains, in: walcom: algorithms and computation. m. s. rahman, s. fujita (editors). springer, berlin, heidelberg, 2010, isbn 0302-9743, pp. 21-34.
[12] w. p. chin, s. ntafos, optimum watchman routes, information processing letters 28 (1988) pp. 39-44. doi: 10.1016/0020-0190(88)90141-x
[13] h. choset, e. acar, a. a. rizzi, j. luntz, exact cellular decompositions in terms of critical points of morse functions, proc. of the ieee int. conf. on robotics and automation, san francisco, ca, usa, 24-28 april 2000, pp. 2270-2277. doi: 10.1109/robot.2000.846365
[14] m. a. akkus, trapezoidal cell decomposition and coverage, middle east technical university, department of computer engineering. online [accessed 14 september 2021] https://user.ceng.metu.edu.tr/~akifakkus/courses/ceng786/hw3.html
[15] h. choset, coverage of known spaces: the boustrophedon cellular decomposition, auton. robots 9 (2000) pp. 247-253. doi: 10.1023/a:1008958800904
[16] s. raghavan, distributed algorithms for hierarchical area coverage using teams of homogeneous robots, master's thesis, indian institute of technology madras, november 2020.
[17] j. park, cell decomposition course: introduction to autonomous mobile robotics, intelligent systems and robotics lab, division of electronic engineering, chonbuk national university. online [accessed 14 september 2021] https://cupdf.com/document/cell-decomposition-course-introduction-to-autonomous-mobile-robotics-prof.html
[18] a. das, m. diu, n. mathew, c. scharfenberger, j. servos, a. wong, j. zelek, d. clausi, s. waslander, mapping, planning, and sample detection strategies for autonomous exploration, j. of field robotics 31 (2014) pp. 75-106.
doi: 10.1002/rob.21490
[19] m. mcnally, walking the grid, robotics 52 (2006), pp. 151-155. online [accessed 20 september 2021] https://dl.acm.org/doi/pdf/10.5555/1151869.1151889
[20] a. b. ádám, l. kocsány, e. g. szádeczky-kardoss, cell decomposition based parking lot exploration, proc. of the workshop on the advances of information technology, budapest, hungary, 30 january 2020, pp. 5-12.
[21] y. gabriely, e. rimon, spiral-stc: an on-line coverage algorithm of grid environments by a mobile robot, proc. of the ieee int. conf. on robotics and automation, washington, dc, usa, 11-15 may 2002, pp. 954-960. doi: 10.1109/robot.2002.1013479
[22] y. gabriely, e. rimon, competitive on-line coverage of grid environments by a mobile robot, computational geometry 24 (2003) pp. 197-224. doi: 10.1016/s0925-7721(02)00110-4

acta imeko
december 2013, volume 2, number 2, 78 - 85
capacitive facial activity measurement
ville rantanen1, pekka kumpulainen1, hanna venesvirta2, jarmo verho1, oleg špakov2, jani lylykangas2, akos vetek3, veikko surakka2, jukka lekkala1
1 sensor technology and biomeasurements, department of automation science and engineering, tampere university of technology, korkeakoulunkatu 3, fi-33720 tampere, finland
2 research group for emotions, sociality, and computing, tampere unit for computer-human interaction, school of information sciences, university of tampere, kanslerinrinne 1, fi-33014 tampere, finland
3 media technologies laboratory, nokia research center, otaniementie 19, fi-02150 espoo, finland
section: research paper
keywords: capacitive measurement, distance measurement, facial activity measurement, facial movement detection, hierarchical clustering, principal component analysis
citation: ville rantanen, pekka kumpulainen, hanna venesvirta, jarmo verho, oleg špakov, jani lylykangas, akos vetek, veikko surakka, jukka lekkala, capacitive facial activity measurement, acta imeko, vol. 2, no. 2, article 14, december 2013, identifier: imeko-acta-02 (2013)-02-14
editor: paolo carbone, university of perugia
received may 31st, 2013; in final form november 20th, 2013; published december 2013
copyright: © 2013 imeko.
this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was funded by nokia research center, the finnish funding agency for technology and innovation, and the finnish cultural foundation.
corresponding author: ville rantanen, e-mail: ville.rantanen@tut.fi

abstract
a wide range of applications can benefit from the measurement of facial activity. the current study presents a method that can be used to detect and classify the movements of different parts of the face and the expressions the movements form. the method is based on capacitive measurement of facial movements. it uses principal component analysis on the measured data to identify active areas of the face in offline analysis, and hierarchical clustering as a basis for classifying the movements offline and in real-time. experiments involving a set of voluntary facial movements were carried out with 10 participants. the results show that the principal component analysis of the measured data could be applied with almost perfect performance to offline mapping of the vertical location of the facial activity of movements such as raising and lowering eyebrows, opening mouth, raising mouth corners, and lowering mouth corners. the presented classification method also performed very well in classifying the same movements both with the offline and the real-time implementations.

1. introduction
measuring facial movements has many possible applications. human-computer and human-technology interaction (hci and hti) can use information on voluntary facial movements for the interaction [1]-[7]. other applications, for example in behavioural science and medicine, can also benefit from the automated analysis of human facial movements and expressions [8]-[15]. in the context of hci, the use of facial movements has been studied for a decade already. the first implementations were based on measuring electromyographic (emg) signals that reflect the electrical activity of the muscles [1]. the measurement system by barreto et al. [1] utilised only bioelectric signals for pointing and selecting objects, but later on the emg measurement was adopted as a method to indicate selections in hci when gaze is used for pointing [2], [3], [4], [6]. recently, a capacitive detection method has been introduced as an alternative to the facial emg measurement [5]. it provides a contactless alternative that measures facial movement instead of the electrical activity of the muscles that the emg measures. studies about pointing and selecting with the capacitive method in combination with head-mounted, video-based gaze tracking have also been published [7], [16], [17], [18]. the facial action coding system (facs) is a vision-based method that characterises facial actions based on the activity of different facial muscles [19], [20]. each facial expression has certain activated muscles that can have different levels of contraction. facs and the detection of active muscles have been used as a basis for automatically analysing facial expressions, for example, for use in behavioural science and medicine [9], [10], [11], [13], [14], [15]. these studies describe automated implementations of facs that use vision-based methods in the analysis.
however, facial emg can also register facial actions and provide information that is highly similar to that provided by facs [11]. emg has also been shown to be suitable for measuring emotional reactions from the face [8]; this was done long before emg was first applied in the hci context to detect voluntary facial movements. the presented method applies capacitive measurement principles to measure the activity of the face. it has several advantages over the other methods that can be used for the task. compared to emg measurements, the presented method allows the measurement of more channels simultaneously. it is contactless, and it does not require the attachment of electrodes to the face. attached electrodes significantly limit the maximum number of measurable channels, and they may also affect the facial movements that are being targeted with the measurement [11], [21]. when compared to vision-based detection of facial activity, the capacitive method allows easier integration of the measurement into mobile, head-worn equipment, is unaffected by environmental lighting conditions, and can be carried out with computationally less intensive signal processing. for the current study, a wireless, wearable prototype device was constructed, and analysis of data from controlled experiments was done to identify the location of facial activity and to classify it during different voluntary movements. voluntary facial movements have previously been detected by identifying transient peaks in the signals [5]. the presented method provides a more robust way to analyse the facial activity based on multichannel measurements.

2. methods
2.1. capacitive facial movement detection
the method for measuring facial activity is based on the capacitive detection of facial movements that was introduced in [5]. it applies the same measurement principle as capacitive push buttons and touchpads, and a single measurement channel requires only a single electrode that produces an electric field. the produced field can be used to detect conducting objects in its proximity by measuring the capacitance, because the capacitive coupling between the electrode and the object changes as the object moves. in principle, the distance between the target and the electrode is measured.

2.2. prototype device
the wearable measurement prototype device is shown in figure 1. the device was constructed as a headset that should fit most adults. the earmuffs of the headset house the necessary electronics, and the extensions seen in front of the face include the electrodes for the capacitive measurement channels. the device contains 22 electrodes in total, 11 for each side of the face. the top extensions have 4 electrodes each, the middle ones have 3 each, and the lowest ones have 4. the electrodes are printed circuit board pieces with a size of 12 x 20 mm. they are connected to the measurement electronics with thin coaxial cables that shield the signals. the capacitive measurements are carried out with a programmable controller for capacitance touch sensors (ad7147 by analog devices). the sampling frequency was dictated by technical limitations and was set to the maximum possible, 29 hz. the device is battery-powered, and a bluetooth module (rn-41 by roving networks) provides the connectivity to the computer. the device includes the possibility for additional measurements, such as inertial measurements via a 3d gyroscope and a 3d accelerometer. the operation of the device is controlled by atmel's atmega168p microcontroller.

figure 1. the wearable measurement device. the numbers represent the extension pieces that house the measurement electrodes. the actual electrode locations are on the pieces facing the face.

2.3. experiments
ten participants (five male and five female, ages 22-33, mean age 27) were briefly trained to perform a set of voluntary facial movements. the participants were chosen to be inexperienced in carrying out the movements, to avoid the more easily measured, overly expressive movements that experienced participants might perform. the movements were: lowering the eyebrows, raising the eyebrows, closing the eyes, opening the mouth, raising the mouth corners, lowering the mouth corners, and relaxation of the face. the relaxation was included to help the participants relax during the experiments while doing the other movements. the movements were instructed to be performed according to the guidelines of facs [20]. the participants were instructed to activate only the needed muscles during each of the movements. after a brief practice period and verification that the participant made the movements correctly, the device was worn by the participant as shown in figure 1: the top extensions targeted the eyebrows, the middle ones the cheek areas, and the bottom ones the jaw and mouth area. the distance of each of the measurement electrodes from the facial tissue was adjusted to be as small as possible without the electrodes touching the face during the movements. this way the distance was approximately 1 cm for all electrodes. in the experiments, synthesised speech was used to instruct the participants to perform each individual movement. after putting on the device, two repetitions of each of the movements were carried out in a controlled practice session to familiarise the participants with the experimental procedure. the actual procedure consisted of ten repetitions of each movement carried out in randomised order. participants were given 10 seconds to complete each repetition. a mirror was used throughout the experiments to provide visual feedback of the facial movements to the participants.

2.4. data processing
2.4.1. signal processing principle
figure 2 shows a diagram of the pre-processing that was applied to the signals prior to further data processing. first, the capacitive signals were converted to signals proportional to the physical distance between the facial tissue and the measurement electrode. the conversion normalises the sensitivity of the measurement to the distance. the capacitance measurement was modelled with the equation of a parallel plate capacitor:

$c = \frac{\varepsilon a}{d}$ ,   (1)

where ε is the permittivity of the substance between the plates, a the plate area, and d the distance between the plates. one plate is formed by the measurement electrode and the other by the targeted facial tissue. while the surface profile of the facial tissue is often not a plate, each unique profile can be considered to have an equivalent plate such that equation (1) can be applied. since the relationship between the capacitance and the distance is inversely proportional, the sensitivity of the capacitance to the distance depends on the distance itself. the absolute distance is not of interest, and a measure proportional to the distance can be calculated as

$d_p = \frac{1}{c} = \frac{1}{c_s - c_b}$ ,   (2)

where $c_s$ is the capacitance value and $c_b$ is the base level of the capacitance channel. each channel has a unique base level that is affected by the length of the electrode cable and by the surroundings of the electrode, determined by its position on the extension. for the conversion, the base levels of all the capacitance channels were measured while the measurement electrodes were directed away from conducting objects.
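the conversion of equation (2) amounts to one line per channel. the sketch below is a minimal illustration, not the authors' code; the function names, the use of numpy, and the per-channel base-level argument are assumptions made for the example.

```python
import numpy as np

def to_distance(c_s, c_b):
    """eq. (2): convert a capacitance signal c_s into a measure proportional
    to the electrode-tissue distance, using the channel base level c_b."""
    return 1.0 / (np.asarray(c_s, dtype=float) - c_b)
```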
smoothing and baseline removal were applied to the distance signals computed with equation (2). these two steps were different when locating the facial activity and when classifying it; the differences are explained below in the corresponding sections. after processing the signals, only the first 4.5 seconds of the signals during each repetition of the movements were considered when calculating the results. the remaining 5.5 seconds of each 10-second repetition were excluded from further analysis because all the participants had already finished the instructed movements by then, and they sometimes carried out other movements to relax during the remaining time.

2.4.2. locating facial activity
the smoothing applied to the distance signals when locating the facial activity was done with a moving median filter with a length of 35 samples (approximately 1.2 seconds). this was done to remove noise. further, the baselines of the signals were removed by subtracting the signal means during each repetition of the instructed movements. the baseline removal normalises the signal sequences so that they represent the relative changes in the physical distance. principal component analysis (pca) was carried out on the processed signal sequences to find the locations of the facial activity during the instructed facial movements. pca is a linear method that transforms a data matrix with multiple variables, measurement channels in this case, into a set of uncorrelated components [22]. these principal components are linear combinations of the original variables, and they describe the major variations in the data. pca decomposes the data matrix x, which has m measurements and n channels, into the sum of the outer products of vectors $t_i$ and $p_i$ and a residual matrix e:

$x = t_1 p_1^t + t_2 p_2^t + \dots + t_k p_k^t + e$ ,   (3)

where k is the number of principal components used. if all possible components are used, the residual reduces to zero. the vectors $t_i$ are called scores, and the $p_i$ are eigenvectors of the covariance matrix of x and are called loadings. the principal components in equation (3) are ordered according to the corresponding eigenvalues. to localise facial activity, the first principal component and its loadings were considered. the first principal component describes the major data variations, and, thus, the location of the most significant facial activity can be identified by analysing the loadings of the corresponding measurement channels. for the analysis, the loadings were normalised by dividing their absolute values by the sum of the absolute values of all channels. as a result, the sum of the normalised values is equal to 1. to present the results, the vertical location of each repetition of the movements was mapped to the part of the face that introduced two of the three most significant relative loadings of the first principal component (m-out-of-n detector).
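the locating step (first-component loadings, normalisation, 2-out-of-3 mapping) can be sketched in a few lines of python. this is an illustration of the technique as described above, not the original analysis code; the eigendecomposition route and the channel-to-face-part mapping argument are assumptions of the example.

```python
import numpy as np

def first_pc_loadings(x):
    """normalised loadings of the first principal component of the
    (m measurements x n channels) data matrix x; they sum to 1."""
    xc = x - x.mean(axis=0)
    cov = np.cov(xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    p1 = eigvecs[:, -1]                      # eigenvector of the largest one
    w = np.abs(p1)
    return w / w.sum()

def locate(loadings, channel_part):
    """channel_part maps channel index -> 'top' | 'middle' | 'bottom'.
    the repetition is mapped to the face part holding at least two of
    the three most significant loadings (2-out-of-3 detector)."""
    top3 = np.argsort(loadings)[-3:]
    parts = [channel_part[i] for i in top3]
    for part in set(parts):
        if parts.count(part) >= 2:
            return part
    return None
```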
for calculating percentages of successful mappings, the correct source of activity was considered to be the top extension channels for the lowering eyebrows, raising eyebrows and closing eyes movements, the bottom extension channels for the opening mouth and lowering mouth corners movements, and the middle or bottom extension channels for the raising mouth corners movement. median loadings of the 10 repetitions of each movement were calculated for each participant and channel separately to verify the decisions about the correct sources of activity.

figure 2. a block diagram of the signal processing: the raw signal is converted to a distance signal, smoothed, and its baseline removed to obtain the processed signal.

2.4.3. classifying facial activity
smoothing causes a delay; therefore, distance signals without smoothing were used when classifying the facial activity, and baseline removal was carried out on the distance signals directly. figure 3 presents the algorithm used for solving the baseline for its removal. the baseline calculation was based on a median filter. the median can perform well in this task because the signals were expected to have longer baseline sequences than the sequences resulting from facial activity. the median filter applies a logic that selects only part of the samples as baseline points for the median calculation. the selection is based on a constant false alarm rate (cfar) processor that calculates an adaptive threshold based on the noise characteristics of the processed signal [23], [24]. the distance signal was first pre-processed with a filter that implements a differentiator, a single-pole low-pass filter with a time constant of 20 ms, and a full-wave rectifier. this makes the input suitable for the cfar processor. the current sample is used as the test sample for the processor. the implemented version of the processor uses samples before the test sample, referred to as reference samples, for calculating the threshold. the processor also leaves out the samples closest to the test sample, as guard samples, to reduce the information overlap between the test and reference samples. samples closer than 1 second to the test sample were considered guard samples, and the 14 seconds preceding them were considered the reference samples. the mean of the reference samples was then calculated and multiplied by a sensitivity parameter to obtain the adaptive threshold. the sensitivity parameter was chosen to be 0.5 in this case. the threshold was then fed to a comparator together with the pre-processed test sample to find out whether the test sample exceeded it. if it did not, the respective sample of the input signal was included in the median calculation by the selective median filter, which had a length of 15 seconds. finally, the baseline is calculated from the median-filtered signal with a 2-second moving average filter to smooth step-wise transitions in the baseline level.
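the cfar-gated baseline estimator just described can be sketched as follows. this is a minimal sketch under the stated parameters (29 hz sampling, 1 s guard, 14 s reference, sensitivity 0.5, 15 s selective median, 2 s average); the low-pass discretisation and the edge handling are assumptions, since the paper does not specify them.

```python
import numpy as np

FS = 29  # sampling rate of the prototype device (hz)

def preprocess(d):
    """differentiator, single-pole low-pass (20 ms time constant,
    one assumed discretisation of the pole), full-wave rectifier."""
    diff = np.diff(d, prepend=d[0])
    alpha = (1.0 / FS) / (0.020 + 1.0 / FS)
    lp = np.zeros_like(diff)
    for i in range(1, len(diff)):
        lp[i] = lp[i - 1] + alpha * (diff[i] - lp[i - 1])
    return np.abs(lp)

def cfar_baseline(d, guard_s=1, ref_s=14, median_s=15, avg_s=2, sensitivity=0.5):
    d = np.asarray(d, dtype=float)
    p = preprocess(d)
    guard, ref = int(guard_s * FS), int(ref_s * FS)
    med_len, avg_len = int(median_s * FS), int(avg_s * FS)
    accepted = []                      # input samples selected as baseline points
    baseline = np.zeros_like(d)
    for i in range(len(d)):
        lo, hi = max(0, i - guard - ref), max(0, i - guard)
        # adaptive threshold: mean of the reference window times the sensitivity
        threshold = sensitivity * p[lo:hi].mean() if hi > lo else np.inf
        if p[i] <= threshold:          # comparator: keep quiet samples only
            accepted.append(d[i])
            accepted = accepted[-med_len:]   # selective median window (~15 s)
        baseline[i] = np.median(accepted) if accepted else d[i]
    # 2-second moving average smooths step-wise baseline transitions
    kernel = np.ones(avg_len) / avg_len
    return np.convolve(baseline, kernel, mode="same")
```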
a method to classify facial movements based on the processed multichannel data was implemented. the classification method was based on hierarchical clustering. it used ward's linkage, which forms clusters by minimising the increase of the total within-cluster variance about the cluster centre [25], [26]. a fixed number of 14 clusters was chosen for the clustering based on the different events that the data represent (6 movements and the baseline). this selection allows 2 clusters for each event on average, which accommodates some deviation of the data when performing repetitions of the same movement and some elongation of the data points during a movement, because ward's method is known not to handle elongated clusters and outliers well [26]. the work-flow of the classification is presented in figure 4. the clustering first takes as input the multichannel data with the signal baselines removed and the labels of the data (information about the instructed movements). the data are first clustered and then cross-tabulated against their 6 possible labels. based on the tabulation, the clusters are identified: first, the clusters that represent the baseline data are identified. a cluster is identified as a baseline cluster if it contains data points with at least 5 different labels out of the 6 possible (m-out-of-n detector). the other clusters are identified based on the label that has the largest number of samples in the cluster. in the offline classification, the data are then classified as representing the movement their cluster was identified with. a real-time classification can further be made based on the previously identified clusters. first, the cluster centre points are calculated. then each new data sample is classified as representing the movement that the nearest cluster was identified with. thus, the real-time classification only requires the calculation of euclidean distances to the cluster centres for each new data sample. all the collected data were included in the offline classification. the real-time implementation of the classification was evaluated so that a randomly chosen repetition of each movement was included in the identification of the clusters, and the remaining 9 repetitions were used as test data to evaluate the performance of the method. to present the results of the classification, the percentages of the data points that were classified as baseline were calculated. a high percentage would indicate problems in separating the movements from the baseline. of the data points that were not classified as baseline, the percentages of correctly classified ones were calculated. data points were considered to be correctly classified if they were classified as the movement that the participant was at that time instructed to perform.

figure 3. a block diagram of the baseline calculation for the baseline removal when classifying facial activity (pre-processor filter, cfar processor and comparator, selective median filter, smoothing filter).
figure 4. a block diagram of the classification of the facial movements: the labelled multichannel data are hierarchically clustered and cross-tabulated to identify the baseline and movement clusters, whose centre points then support both offline and real-time classification.
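the cluster identification and the real-time nearest-centre rule can be sketched with scipy. this is an illustration of the workflow described above, not the original implementation; the dictionary-based bookkeeping and the string label "baseline" are assumptions of the example (data and labels are numpy arrays).

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def identify_clusters(data, labels, n_clusters=14, m_out_of_n=5):
    """offline step: ward clustering, then naming of each cluster.
    a cluster holding samples of at least 5 of the 6 movement labels is
    treated as baseline; the others take their majority label."""
    assignment = fcluster(linkage(data, method="ward"),
                          n_clusters, criterion="maxclust")
    names, centres = {}, {}
    for k in np.unique(assignment):
        members = labels[assignment == k]
        distinct = np.unique(members)
        if len(distinct) >= m_out_of_n:
            names[k] = "baseline"
        else:
            names[k] = max(distinct, key=lambda lab: np.sum(members == lab))
        centres[k] = data[assignment == k].mean(axis=0)
    return names, centres

def classify_sample(x, names, centres):
    """real-time step: assign a new sample to the nearest cluster centre
    (euclidean distance only, so no additional delay is introduced)."""
    k = min(centres, key=lambda k: np.linalg.norm(x - centres[k]))
    return names[k]
```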
3. results
figures 5-8 show examples of the signals the measurement channels registered during the experiments and how the conversion from capacitance signals to ones that are proportional to the distance normalises the signals.

3.1. locating facial activity
examples of the detected facial activity, presented as the loadings of the first principal component, are shown in figures 9 and 10. the performance of locating the different movements based on principal component analysis is presented in table 1. out of the 6 included voluntary movements, opening mouth and raising mouth corners are located correctly in all the repetitions with all participants. lowering eyebrows and raising eyebrows are located correctly in almost all the repetitions with all participants; only a single repetition of each is incorrectly located. lowering mouth corners is correctly located except for 4 repetitions with a single participant. closing eyes has a limited success rate in the mapping with 7 out of 10 participants. the locations of the three measurement channels that registered the most significant activity during the experiments, determined from the median loadings, verified that the decisions regarding the correct sources of activity were justified according to the used order statistic, the median. the three most significant channels included incorrect locations only with one participant during the raising eyebrows movement and with 4 participants during the closing eyes movement.

figure 5. raw capacitance signals from the 10 repetitions of the raising eyebrows movement with one participant. the different sides of the face are represented on the left and on the right. the top, middle, and bottom graphs represent the measurements from the corresponding extensions. the colours represent the different channels as shown in figure 1: red, green, blue, and grey starting from the centre of the face. signal baselines are aligned for the illustration.
figure 6. signals after the conversion to distance signals from the 10 repetitions of the raising eyebrows movement with one participant. signal baselines are aligned for the illustration.
figure 7. raw capacitance signals from the 10 repetitions of the opening mouth movement with one participant.
figure 8. signals after the conversion to distance signals from the 10 repetitions of the opening mouth movement with one participant.
figure 9. the facial activity as represented by the loadings of the first principal component during the raising eyebrows movement with one participant. each graph represents the loadings of the 10 repetitions from the measurement channel of the corresponding physical location.
figure 10. the facial activity as represented by the loadings of the first principal component during the opening mouth movement with one participant.

table 1. the percentages of successful mapping of the vertical location of the movements. the last row shows the means and standard deviations for the movements.

participant | lowering eyebrows | raising eyebrows | closing eyes | opening mouth | raising mouth corners | lowering mouth corners
1    | 100 | 90  | 50  | 100 | 100 | 100
2    | 100 | 100 | 60  | 100 | 100 | 100
3    | 100 | 100 | 90  | 100 | 100 | 100
4    | 100 | 100 | 100 | 100 | 100 | 100
5    | 100 | 100 | 30  | 100 | 100 | 100
6    | 100 | 100 | 100 | 100 | 100 | 100
7    | 100 | 100 | 40  | 100 | 100 | 100
8    | 90  | 100 | 80  | 100 | 100 | 100
9    | 100 | 100 | 100 | 100 | 100 | 60
10   | 100 | 100 | 80  | 100 | 100 | 100
mean | 99 ± 3 | 99 ± 3 | 73 ± 26 | 100 ± 0 | 100 ± 0 | 96 ± 13

3.2. classifying facial activity
examples of classified data are shown in figures 11 and 12. table 2 shows the percentages of samples that were classified as baseline.
a paired t-test (significance level 0.05) did not reveal statistically significant differences between the percentages of the offline and the real-time classification. in the case of closing eyes, the percentages show that the movement could not be classified as a movement but was classified as the baseline. the percentages for the other movements reflect the durations of the movements, because the participants were not given any instructions about how long to hold them. the results of the offline and real-time classification methods are shown in tables 3 and 4, and there are no statistically significant differences between the two methods according to a paired t-test (significance level 0.05).

4. discussion
the facial activity was mostly correctly located, but in a limited number of cases the locating gave incorrect results. this can be a result of several factors. firstly, the participants could not always carry out the movements exactly as instructed, and some unintentional activity of other muscles was included. secondly, the measurement and the applied data processing are both slightly sensitive to the movement of the prototype device on the head. this may result in false detection of activity when the device moves instead of the facial tissue. thirdly, including only one principal component may limit the performance when locating the activity. the amount of variance explained by the first principal component was not analysed, but if it were, it could be used to provide an estimate of the certainty in locating the activity. more principal components could be added to the analysis to reduce the uncertainty. finally, the mentioned error sources are also affected by the noise in the measurement. the noise is dependent on the distance of the measurement electrodes from the target. the current implementation normalises the signal levels, but the normalisation also scales the noise, so that measurements with the facial tissue further away from the measurement electrode include more noise than when the tissue is closer.

table 2. the average percentages and standard deviations of data points that were classified as baseline ones in the offline and real-time implementations of the classification. the number of samples is 1310 and 1179 for the two implementations, respectively.

          | lowering eyebrows | raising eyebrows | closing eyes | opening mouth | raising mouth corners | lowering mouth corners
offline   | 48 ± 18 | 50 ± 19 | 99 ± 2 | 34 ± 15 | 41 ± 17 | 34 ± 12
real-time | 53 ± 14 | 58 ± 14 | 99 ± 1 | 39 ± 15 | 49 ± 21 | 41 ± 12

table 3. the percentages and standard deviations of correctly classified data points in the offline classification. the dashes mean that all the samples were classified as the baseline.

participant | lowering eyebrows | raising eyebrows | closing eyes | opening mouth | raising mouth corners | lowering mouth corners
1    | 98  | 100 | -   | 79  | 99  | 95
2    | 72  | 100 | 0   | 70  | 100 | 98
3    | 85  | 100 | -   | 97  | 36  | 100
4    | 100 | 100 | -   | 91  | 100 | 90
5    | 100 | 100 | -   | 100 | 100 | 100
6    | 96  | 100 | 0   | 81  | 95  | 100
7    | 93  | 100 | 0   | 88  | 96  | 100
8    | 100 | 75  | -   | 100 | 100 | 100
9    | 100 | 100 | -   | 100 | 90  | 100
10   | 100 | 100 | -   | 100 | 100 | 100
mean | 94 ± 9 | 98 ± 8 | 0 ± 0 | 91 ± 11 | 91 ± 20 | 98 ± 3

table 4. the percentages and standard deviations of correctly classified data points in the real-time implementation of the classification.
participant | lowering eyebrows | raising eyebrows | closing eyes | opening mouth | raising mouth corners | lowering mouth corners
1    | 95  | 100 | -   | 84  | 94  | 98
2    | 100 | 100 | -   | 100 | 100 | 100
3    | 87  | 100 | -   | 60  | 83  | 100
4    | 100 | 100 | -   | 99  | 100 | 87
5    | 100 | 100 | 0   | 100 | 98  | 100
6    | 94  | 100 | 0   | 92  | 78  | 100
7    | 94  | 100 | -   | 78  | 98  | 100
8    | 96  | 83  | -   | 90  | 100 | 100
9    | 98  | 100 | 0   | 98  | 89  | 100
10   | 100 | 100 | -   | 100 | 100 | 100
mean | 96 ± 4 | 98 ± 5 | 0 ± 0 | 90 ± 13 | 94 ± 8 | 99 ± 4

figure 11. classified data points after the baseline removal from the 10 repetitions of the raising eyebrows movement with one participant. the data points that were classified as the baseline are black, and the correctly classified data points are shown in colour.
figure 12. classified data points after the baseline removal from the 10 repetitions of the opening mouth movement with one participant.

the smoothing could be more carefully considered to find the most suitable method for noise removal in this case. while the discussed factors may affect the performance, the reason for the limited performance with the closing eyes movement can be considered to be the small movement that it causes in the facial tissue at the measured locations. it should be noted that the presented method for locating the activity only implements a rough mapping of the simple movements. since the exact locations of the facial movements when a certain muscle activation occurs vary between individuals, determining the precise location of the movements may not even provide additional value without first characterising the individual's facial behaviour. thus, the classification was introduced to differentiate between the movements, and it could be applied also to more complex expressions. the classification was based on using hierarchical clustering to identify clusters formed from the measured data. applying principal component analysis in real-time for the task was also considered. however, as a statistical method, it requires numerous samples to compute the principal components reliably. this causes delays dependent on the chosen window length. the processing of the implemented classification, however, does not impose additional delays, since it only requires the calculation of distances between points after the clusters have been identified offline. the percentages of data points that were classified as baseline show that the closing eyes movement is problematic also in the classification. the data points during this movement can be expected to be close to the baseline, if at all visible in the data. the example graphs of the classified data points (figures 11 and 12) show the changes from the baseline that are required for the classification to identify a data point as something else. the graphs also show that the delay for this is acceptable, even if the absolute delay cannot be calculated because no information about the true onset of the movements was extracted in this study. the performances in classifying the data points correctly during the different movements show that the offline and the real-time versions both perform very well. this is a good result, as the real-time version only used data from a single repetition of each movement for identifying the clusters, compared to all 10 repetitions in the offline one.
incorrectly performed movements, movement of the device on the head, and noise are possible sources of error also in the classification. in addition, the transition phases at the beginnings and ends of the movements, when the data points are close to the baseline, can be expected to be more susceptible to incorrect classification. the number of clusters chosen for the classification obviously affects how many movements and variations of the movements can be distinguished from one another. in this study, the number was chosen to be relatively small, and the selection was based on the number of the included movements. the identification of the clusters used the information about the movement that the participant was instructed to perform to label each data point. selecting a larger number of clusters would make it possible to identify variations of the movements, but it would also require more information for the labelling. one alternative would be to visually inspect video recordings to provide the labels. this could be done after the clustering, to label each cluster rather than providing a label for each data point one by one. this study only considered simple voluntary facial movements. since complex facial expressions, even the spontaneous ones related to emotions, are formed by combinations of simple movements, they can be expected to be classified in the same way and as easily as the simple movements; they will just span a different volume in the multidimensional space of the measured data points. however, the movement ranges of facial tissue during spontaneous expressions are often more limited than in the simple movements of this study. this may introduce challenges in classifying some of the expressions.

5. conclusions
a new method for mobile, head-worn facial activity measurement and classification was presented. the capacitive method and the prototype constructed for studying it were shown to perform well both in locating different voluntary facial movements to the correct areas on the face and in classifying the movements. locating the movements with principal component analysis does not require a calibration of the measurement for the user, and the presented classification only required one repetition of each movement for identifying the movements before the classification could be carried out in real-time. the presented facial activity measurement method has clear benefits when compared to the computationally more intensive vision-based methods and to emg, which requires the attachment of electrodes to the face. future research on the method should include verifying that the classification works with more complex expressions, i.e. with combinations of activity at different locations on the face. furthermore, determining the intensity level of the activity of different facial areas could provide additional information. it could be studied how different activation levels can be distinguished from one another with the presented method, and whether even the smallest facial muscle activations can be distinguished.

acknowledgement
the authors would like to thank nokia research center and the finnish funding agency for technology and innovation (tekes) for funding the research. ville rantanen would like to thank the finnish cultural foundation for the personal funding of his work, the nokia foundation for its support, and imeko for the györgy striker junior paper award received at the xx imeko world congress.

references
[1] a. b. barreto, s. d. scargle, and m. adjouadi, "a practical emg-based human-computer interface for users with motor disabilities", journal of rehabilitation research & development 37 (2000), pp. 53-64.
[2] t. partala, a. aula, and v. surakka, "combined voluntary gaze direction and facial muscle activity as a new pointing technique", proc. of ifip interact'01, tokyo, japan, july 2001, pp. 100-107.
[3] v. surakka, m. illi, and p. isokoski, "gazing and frowning as a new human-computer interaction technique", acm transactions on applied perception 1 (2004), pp. 40-56.
[4] c. a. chin, a. barreto, j. g. cremades, and m. adjouadi, "integrated electromyogram and eye-gaze tracking cursor control system for computer users with motor disabilities", journal of rehabilitation research & development 45 (1) (2009), pp. 161-174.
[5] v. rantanen, p.-h. niemenlehto, j. verho, and j. lekkala, "capacitive facial movement detection for human-computer interaction to click by frowning and lifting eyebrows", medical and biological engineering and computing 48 (2010), pp. 39-47.
[6] j. navallas, m. ariz, a. villanueva, j. san agustin, r. cabeza, "optimizing interoperability between video-oculographic and electromyographic systems", journal of rehabilitation research & development 48 (3) (2011), pp. 253-266.
[7] o. tuisku, v. surakka, t. vanhala, v. rantanen, and j. lekkala, "wireless face interface: using voluntary gaze direction and facial muscle activations for human-computer interaction", interacting with computers 24 (2012), pp. 1-9.
[8] u. dimberg, "facial electromyography and emotional reactions", psychophysiology 27 (5) (1990), pp. 481-494.
[9] m. pantic and l. j. rothkrantz, "automatic analysis of facial expressions: the state of the art", ieee transactions on pattern analysis and machine intelligence 22 (2000), pp. 1424-1445.
[10] b. fasel, j. luettin, "automatic facial expression analysis: a survey", pattern recognition 36 (1) (2003), pp. 259-275.
[11] j. f. cohn, p. ekman, "measuring facial action", in: j. a. harrigan, r. rosenthal, k. r. scherer (eds.), the new handbook of methods in nonverbal behavior research, oxford university press, oxford, uk, 2005, ch. 2, pp. 9-64.
[12] e. l. rosenberg, "introduction", in: p. ekman, e. l. rosenberg (eds.), what the face reveals: basic and applied studies of spontaneous expression using the facial action coding system (facs), 2nd edition, oxford university press, new york, ny, usa, 2005, pp. 3-18.
[13] j. f. cohn, a. j. zlochower, j. lien, t. kanade, "automated face analysis by feature point tracking has high concurrent validity with manual facs coding", in: p. ekman, e. l. rosenberg (eds.), what the face reveals: basic and applied studies of spontaneous expression using the facial action coding system (facs), 2nd edition, oxford university press, new york, ny, usa, 2005, ch. 17, pp. 371-392.
[14] m. pantic, m. s. bartlett, "machine analysis of facial expressions", in: k. delac, m. grgic (eds.), face recognition, i-tech education and publishing, vienna, austria, 2007, pp. 377-416.
[15] j. hamm, c. g. kohler, r. c. gur, and r. verma, "automated facial action coding system for dynamic analysis of facial expressions in neuropsychiatric disorders", journal of neuroscience methods 200 (2011), pp. 237-256.
[16] v. rantanen, t. vanhala, o. tuisku, p.-h. niemenlehto, j. verho, v. surakka, m. juhola, and j. lekkala, "a wearable, wireless gaze tracker with integrated selection command source for human-computer interaction", ieee transactions on information technology in biomedicine 15 (2011), pp. 795-801.
[17] o. tuisku, v. surakka, y. gizatdinova, t. vanhala, v. rantanen, j. verho, and j. lekkala, "gazing and frowning to computers can be enjoyable", proc. of the third international conference on knowledge and systems engineering (kse), hanoi, vietnam, oct. 2011, pp. 211-218.
[18] v. rantanen, o. tuisku, j. verho, t. vanhala, v. surakka, and j. lekkala, "the effect of clicking by smiling on the accuracy of head-mounted gaze tracking", proc. of the symposium on eye-tracking research & applications (etra '12), santa barbara, ca, usa, march 2012, pp. 345-348.
[19] p. ekman and w. v. friesen, facial action coding system: a technique for the measurement of facial movement, consulting psychologists press, palo alto, ca, usa, 1978.
[20] p. ekman, w. v. friesen, and j. c. hager, facial action coding system: the manual. a human face, salt lake city, ut, usa, 2002.
[21] a. j. fridlund and j. t. cacioppo, "guidelines for human electromyographic research", psychophysiology 23 (5) (1986), pp. 567-589.
[22] j. e. jackson, a user's guide to principal components, wiley series in probability and mathematical statistics, john wiley & sons, new york, ny, usa, 1991.
[23] m. i. skolnik, introduction to radar systems, 3rd edition, mcgraw-hill, new york, ny, usa, 2001.
[24] p.-h. niemenlehto, "constant false alarm rate detection of saccadic eye movements in electro-oculography", computer methods and programs in biomedicine 96 (2) (2009), pp. 158-171.
[25] j. h. ward, "hierarchical grouping to optimize an objective function", journal of the american statistical association 58 (301) (1963), pp. 236-244.
[26] e. rasmussen, "clustering algorithms", in: w. b. frakes, r. baeza-yates (eds.), information retrieval: data structures and algorithms, 1st edition, prentice hall, upper saddle river, nj, usa, 1992.

acta imeko
issn: 2221-870x
june 2022, volume 11, number 2, 1 - 14
acquisition and integration of spatial and acoustic features: a workflow tailored to small-scale heritage architecture
jean-yves blaise1, iwona dudek1, anthony pamart1, laurent bergerot1, adrien vidal2, simon fargeot2, mitsuko aramaki2, sølvi ystad2, richard kronland-martinet2
1 umr cnrs/mc 3495 map, 31 chemin joseph aiguier, 13402 marseille, france
2 aix marseille univ, cnrs, mc, prism umr 7061, 31 chemin joseph aiguier, 13402 marseille, france
section: research paper
keywords: heritage architecture; interdisciplinary data acquisition; panoramic-based photogrammetry; 3d sound; visualisation
citation: jean-yves blaise, iwona dudek, anthony pamart, laurent bergerot, adrien vidal, simon fargeot, mitsuko aramaki, sølvi ystad, richard kronland-martinet, acquisition and integration of spatial and acoustic features: a workflow tailored to small-scale heritage architecture, acta imeko, vol. 11, no. 2, article 22, june 2022, identifier: imeko-acta-11 (2022)-02-22
section editor: fabio santaniello, university of trento, italy
received march 5, 2021; in final form june 14, 2022; published june 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was supported by the anr (project-based funding agency for research in france).
corresponding author: jean-yves blaise, e-mail: jean-yves.blaise@map.cnrs.fr

abstract
this paper reports on an interdisciplinary data acquisition and processing chain, the novelty of which is primarily to be found in a close integration of acoustic and spatial data. it provides a detailed description of the technological and methodological choices that were made in order to adapt to the particularities of the corpus studied (interiors of small-scale rural architectural artefacts), keeping in mind the backbone objective of the research: to facilitate comparisons (among buildings, among spatial and acoustic features). the research outputs pave the way for proportion-as-ratio analyses, as well as for the study of perceptual aspects from an acoustic point of view. ultimately, "perceptual" acoustic data characterised by acoustic descriptors will be related to "objective" spatial data such as architectural metrics. the experiment is carried out on a set of fifteen "small-scale" rural chapels, a corpus intended to foster cross-examinations in the context of an architectural programme acting as a constant. the specificity of this corpus, in terms of architectural layout, usage, and economic or access constraints, will be shown to have had a significant impact on choices made during the acquisition and processing chains.

1. introduction
architecture is perceived not only through vision but also through audition and other senses; hence, characterising it is likely to require more than studying its physical envelope. this fact is increasingly acknowledged, including in heritage studies, as illustrated by initiatives focusing on "places" as such [1], [2] or on their use [3]. however, in the specific context of small-scale architectural heritage, often left aside from large, well-funded heritage programmes, scientists and local communities face a specific challenge. indeed, in that context, studying, documenting, and enhancing such buildings requires new methods minimising as much as possible the complexity and cost of workflows. the above statements correspond to the two sides of the equation this research aims to solve: acquiring and integrating spatial and acoustic characteristics, while maintaining a level of simplicity suited to buildings without prestige, often neglected or at risk. our global objective is to support a multidimensional and interdisciplinary characterisation of small-scale architectural heritage. this contribution is centred on the programme's initial milestone: a data acquisition and processing chain integrating visual and auditory data. it is (above all) about methodology: as will be shown, we mainly combine and extend pre-existing technologies and tools in a novel way. the photogrammetric survey is based on a 360° panoramic camera (a technology discussed in [4]), and 3d point clouds are exploited inside the potree renderer (well known in the application field) [5]. on the other hand, the effects of a room's configuration on the sound rendering have been studied for decades, with for instance the seminal works on reverberation made by sabine [6].
recent improvements in the field of 3d sound now make it possible to accurately reproduce previously recorded sound fields thanks to an array of loudspeakers. this allows for an experimental analysis of the induced perception, a key issue as far as this research is concerned. the originality of the research lies in a combination of technologies and methods, with a twofold ambition:
- to develop an interdisciplinary approach that is maintained all along the data acquisition, processing, and analysis chain (the word interdisciplinary should be understood as defined by [7]: mutual integration of concepts, methodology and procedures),
- to single out a grid of metrics (space + sound) aiming at helping analysts cross-examine data on and between buildings.
the study is carried out on fifteen interiors of rural chapels in south-eastern france. this collection has shed light on a significant set of feedbacks in terms of methods and open issues. we opted for such a corpus because of its consistency in terms of the buildings' initial intended function, and its variability in terms of actual architectural layout. this collection opens an opportunity to analyse and compare the acoustic responses of buildings that initially share a common usage scenario (the christian ritual) although they differ physically in many ways, as shown in figure 1 (type of covering, materials, vicinity, etc.). this paper focuses on the survey step (section 3), the data processing step (section 4), and comments on obstacles and limitations of the protocol (section 5). the former and the latter steps hinge on a set of critical choices in terms of corpus, practical constraints, and surveying technologies for architectural interiors. they also revolve around analytical needs (perception analysis, extraction of proportions, etc.). these aspects are debated in sections 2 and 6 so as to put the experiment in perspective.

2. research context and requirements
the data acquisition and processing chain presented herein builds on a series of constraints and choices that ensue from the corpus under scrutiny (small-scale rural architecture) and from the overall objective of the research (interdisciplinarity, reproducibility, comparability). this section comments on these specific constraints and then positions the choices made with regard to the state of the art.

2.1. the corpus, the analytical needs
the setup and protocol were designed to address a set of constraints that are key to list when assessing the approach's reproducibility and relevance. briefly speaking, the initial priorities were the following: collect a consistent corpus (comparability issue), tailor the acquisition phase to that corpus (small-scale, poorly funded heritage), and design a multimodal protocol that would open on repurposable outputs. those outputs should be valuable for local actors in their effort to favour a better recognition of their heritage assets. they should also be relevant for scientists and scholars in their analytical tasks (on the architectural analysis side as well as on the acoustics analysis side).
the first step of the programme was an interdisciplinary debate about the survey step, including feasibility tests in laboratory conditions. an architectural interior relatively consistent with the corpus was picked and named the "fake chapel". it served as a substitute for "real" chapels during the early stages of the survey protocol's development (see section 2.2). architects, surveyors and acousticians confronted their views on their specific technical requirements in terms of survey, but also on how the data would be exploited in subsequent phases. that first step ended with the co-design of strategic, organisational and technical specifications:
1) a unique setup and protocol that would be reproduced identically across the whole collection,
2) a selection of architectural layouts adapted to the analysis of compositional patterns,
3) a protocol respectful of a chapel's original function – hence a spatial distribution of speakers and microphones tailored to specific usage modalities (celebrant vs. listener opposition). the interior spaces need to be analysed in the light of the way they are or were used. some of the chapels were indeed repurposed (modification of function), but their reason to be (and that is the basis on which comparisons can be made) lies in the service they initially offered,

figure 1. a "parallel coordinates" diagram illustrating the diversity of the corpus – the legend (top) followed by three examples: n.d de la salette (tourves), saint-roch (la verdière) and saint-roch (les mées). each line running from left to right corresponds to one of the fifteen selected chapels. each column corresponds to one of the "diversity factors" (partial view). the first five bars (on the left) correspond to quantitative data. other factors are related to categorical data, e.g. covering types (barrel vault, cross vault, roof frame, other vaulting), apse shape (semi-circular, polygonal, rectangular, lack of apse, other), volume complexity (single regular nave vs. buildings with transept or other ruptures in the continuity of the nave), empty spaces vs. buildings with furniture, vicinity factor (isolated buildings vs. buildings with adjoining structures), plan symmetry, etc. blue circles represent the number of chapels corresponding to a given value (the diameter of a circle represents a number, min = 1, max = 11).
this is basically a misconception [8], and we accordingly chose a collection of fifteen buildings that reflects the diversity of the corpus (in terms of their architectural layouts and dimensions – volumes ranging from 171 m3 to 981 m3 (cf. figure 2). the actual acquisitions introduced yet more technical constraints. the whole setup had to be chosen so that it could be carried in backpacks (remote sites). it needed to be autonomous in terms of energy (no power supply in situ), and it had to be adapted to interiors that in some cases could be congested – hence in some cases made it difficult to maintain the geometry of the grid of the instruments. finally, adaptation to lighting conditions proved to be recurring problem. conditions varied from sunny summer days in well exposed buildings with large openings, to rainy winter days in chapels with very small windows located in a shady area. four led panels were used when needed, but their correct positioning can be time-consuming if one wants to avoid too strong contrasts during the photogrammetric survey. the acquisition process is now mature, although still improvable, and can be considered as reproducible. however, it has to be stressed, that in situ there will always be a series of “expert” choices to make (lighting, number and distribution of stations from 14 to 93 stations in our experiments, positioning of the rangefinder, analysis of the interior envelope, etc. ). although this contribution insists on reproducibility, the experiment is tailored to a quite specific set of conditions and constraints. in no way do we claim that our approach is an “all purposes all situations” one. on the contrary, we consider that one of the key outcomes of such an experimental study is to act as food for thinking, especially at a time when technologies often affect methodological choices. it clearly illustrates that attention should be driven towards the “what for” question before addressing the “what with” question, in particular when targeting a lightweight approach, and minor heritage architecture assets. 2.2. the survey protocol: technological and methodological constraints the metric data acquisition protocol proposed follows a twofold objective: a fast and versatile geometric survey of small scale indoor spaces, combined with the data integration need of acoustic measurements (from on-site positioning to combined visualisation). the choice to rely on panoramic-based photogrammetry converge to a single solution (by sharing 360° capture) to address the purpose and the challenge of the multimodal survey. the use of low-cost spherical cameras for photogrammetric reconstruction has been discussed and evaluated in the literature [9], [10], [11], [4]. a critical review of those works provides useful insight on such cameras’ potential efficiency and relevance with regards to our case-studies. most of those works relate possible failings or limitations to the technological limitation of low-cost spherical cameras. and indeed, this technique’s potential weakness is mainly related to the low-resolution of the sensor, and to the low quality of optical components for the hardware part. on the software side, spherical stitching and projection does increase the uncertainty above the standard of pin-holebased photogrammetric reconstruction. 
with knowledge of these limitations in terms of accuracy we nevertheless chose to build and optimize an experimental setup upon this technique, in the light of this research’s priority n°2: a simplicity suited to unprestigious buildings. the “fake-chapel” (see section 2.1) acted as a calibration space to test, evaluate and improve the data acquisition strategy before the actual acquisition campaigns conducted on the fifteen chapels. because the capture of panoramic images and some photogrammetric rules are contradictory (e.g. parallax condition), panoramic photogrammetry is often used in very specific contexts [12]. nowadays, spherical photogrammetry can be performed in three different ways: the raw images of a constrained multi-camera rig [13] are used separately as a single frame picture usually with highly distorted fisheye lenses, an image set is derived to composite picture using stitching algorithm to get a spherical map (usually equirectangular projection), figure 2. diagrams illustrating the variety of spatial arrangements of the selected corpus (top dark grey elements represent chancels) and the variation of the chapels’ volume (bottom triangles above the axis correspond to buildings located in remote areas, at a distance from villages; triangles below the axis correspond to buildings inside or in the vicinity of villages. acta imeko | www.imeko.org june 2022 | volume 11 | number 2 | 4 a stitched panorama is converted to cubic projection from which six cameras corresponding to each face are extracted to be processed as pin-holes. the above-mentioned solutions remain sub-optimal in terms of photogrammetric processing, leaving the optimization of data acquisition as a relevant solution. panoramic-based photogrammetry usually translates the terrestrial laser scanning acquisition canvas with a straightforward linear sequence assuming that omni directional images behave like range imaging. while constant and uniform quality is expected with lasers both raw images and projective maps generated are imperfect for ibm (image based modelling) purpose. the tissot’s indicatrix used in cartography to describe map distortion shows that any projection system, including spherical or cubic projection is non-destructive at pixel level (cf. figure 3a). therefore, panoramic-based photogrammetry is altered from distortion and aberration at many levels, from the optical system of the camera, during stitching step to the projection mapping. we assumed that only a small part of a panoramic capture is optimal for 3d reconstruction. at the raw level (cf. figure 3d), only on the central part of the fisheye picture (focal axis shown in red in figure 6a) is performing [14]. at stitching level (cf. figure 3b), the azimuthal plane has artefacts due to low overlapping of raw images. at the mapping level (cf. figure 3c), in the case of the equirectangular projection, poles on zenith axis are inconsistent, and only the horizontal plane is exempt of strong deformation. based on this observation spherical photogrammetry cannot be considered as omnidirectional (from a qualitative point of view) because the orientation of the camera system and its sensor is not insignificant. in other words, a simple, linear sequence of images, taken from a camera that would be systematically oriented the same way, impacts in a negative way the quality of the geometrical reconstruction. 
in response to that issue, an original data acquisition sequence was developed, in order to compensate the fact that only the longitudinal axis and the horizontal plane of each shot provide optimal features for photogrammetric reconstruction. instead of walking along the space with a linear sequence, the protocol is based on a dense network of pyramidal sequences (detailed in section 3) composed of translated and rotated camera positions. this protocol basically reintroduces elements recognized to improve photogrammetric reconstruction like: short-baseline and high overlapping stereo-pairs, roto-translation variance in a dense camera network or feature redundancy to reinforce the panorama bundle adjustment. in addition, because the principle of spherical capture is shared between 360° cameras and the 3d sound-field microphone used in this study (the mh-acoustics eigenmike 32), we anticipated a seamless data integration all along the operative chain (i.e. capture, registration, fusion and visualisation stages). the technological analogy of our tools indeed provide similarities to ease data fusion steps and is intended to further facilitate the perceptual analysis, providing immersive environments for both reality-based image and sound captures. 3. a multimodal survey protocol the protocol’s key components are in fact two low-cost 3d cross line self-leveling laser levels (instruments often in use in the building activity). these levels project green laser beams on surfaces. the laser beams are combined so as to mark four planes constituting a reference system. they are also exploited to position sound measuring instruments. intersections of beams on the walls, ceiling and floor are called “named reference points” and act as markers in the scaling of the photogrammetric model. their relative positions are measured using a leica s910 rangefinder equipped with its so-called “smart base” and its integrated tilt sensor. sound recording instruments (microphones and loudspeakers) form a grid allowing for a systematic relative positioning of the instruments with regards to one another (cf. figure 4.). the grid’s positioning in the reference system is also done using the laser rangefinder (except for two microphones, mg and md, positioned thanks to the photogrammetric model). microphones and loudspeakers are mounted on tripods positioned relatively to the named reference points (seven positions, cf. figure 4). these tripods are reused (once the emitting / recording tasks are over) to install the 360 camera and to capture photogrammetric data in these specific points. it hence allows for a double checking of the acoustic (sound recording) instruments’ positions. these shots are also, later on in the process, repurposed as the visual components of online immersive panoramas inside which sound tracks corresponding to the named point ma mc md and mg are displayed according to the corresponding panoramas (see section 6). 3.1. metric and visual data acquisition the photogrammetric data acquisition protocol for indoor space still suffers from several obstacles related the architectural context (narrow and dark spaces, occlusions) of the chapels. in order to gain in velocity, reproducibility and overall efficiency 360 cameras (dual sensors with fisheye lenses) were chosen as the expected accuracy is centimetre-sized. the pyramidal data acquisition protocol has been conceived, evaluated and optimized to face with issues discussed in section 2.2. 
a flexible acquisition layout has then been framed so as to couple up the photogrammetric acquisition to telemetric surveys of significant points on the building’s surface (using the dxf feature of leica s910), and to the acoustic instruments positions. figure 3. synthetic image illustrating quality variance of an equirectangular projection composed of : tissot’s indicatrix (a), stitching artifact (b), gradient of equirectangular distortion (c) and gradient of double-fisheye distortion. acta imeko | www.imeko.org june 2022 | volume 11 | number 2 | 5 the dxf built-in feature is used to extract precise measurements (used as ground control point). it allows to orient all the 3d models in a consistent and constant absolute cartesian coordinates. the technical setting of the metric survey protocol is bounded by economic constraints on one hand (preferably low-tech, lowcost), and compactness on the other hand (compatibility with remote sites) – main components are (cf. figure 5): two huepar 3d cross line self-leveling laser levels, a yi 360 vr panoramic camera 5.7k hi resolution, dual-lens each lens is 220° with an aperture of f/2.0 (360° coverage , produces two unstitched hemispherical photograph for each shooting position), a laser rangefinder leica disto s910 (this instrument outputs dxf files, it is used to survey named reference points on one hand and significant points on the surface of the edifice), a manfrotto tripod (055 series) allowing for horizontal/vertical shootings. the rotational mechanism of the centre column is used to perform a pyramidalbased capture combining the benefits of faster survey (5 positions for a single tripod station) and better reconstruction (from a short baseline cameras network with variable orientations, cf. section 2.2). the main steps of the protocol are as follows: 1) positioning of the laser levels, starting by the one located at the entrance of the chancel. 2) positioning of the grid of instruments (7 tripods), aligned with the levels vertically, and relatively to one another horizontally (the reference point being the theoretical position of a celebrant behind the altar). 3) positioning of the rangefinder so that each and every intersection of laser beams is visible, can be pointed at and surveyed. 4) survey, using the rangefinder, of the grid of instruments – outputs a polyline connecting tripods to 5 points on the building. 5) scaling protocol, using the rangefinder: a polyline that connects all the laser beam intersections. 6) dimensioning protocol, using the rangefinder: a polyline that connects laser beam intersections to elements of the envelope considered as significant (a keystone, the entrance level, a cornice, etc.). 7) photogrammetric survey, using the panoramic camera positioned on each tripod forming the grid, and then on its own tripod, moved in different positions decided in situ. for each position, the pyramidal sequence (cf. figure 6) is repeated and modulated according to the architectural morphology and environmental constraints. 8) acoustic survey: instruments – loudspeakers and microphones are positioned on the grid of tripods, a sine sweep is emitted from each speaker and recorded on the microphones (several times iteratively in order to spot and eliminate “outliers”). figure 4. top, laser beams (green lines) and their intersections form “named reference points” (brown circles) that are visible on the building’s surfaces and surveyed using the rangefinder. 
the light grey parallelogram is the chapel’s nave, the dark grey parallelogram is its chancel. bottom, auditory instruments positioned relatively to the horizontal plane marked by laser beams. three loudspeakers eg, ec, ed are located right behind the altar, in the chancel (dark downward triangles), at a given distance from one another, with ec aligned on the main longitudinal axis. a fourth loudspeaker, eb, is positioned right under the first microphone, and tilted so as to face the covering of the building. microphone ma is then positioned relatively to loudspeakers eg, and ed at a systematic distance (ma, eg, ed form an equilateral triangle). microphone mc is aligned with ma, on the main longitudinal axis, and positioned at a fixed and systematic distance from ma. microphones mg and md are positioned at the very beginning of the nave, at a fixed distance (one meter) from the walls. figure 5. a sample setup: a the 360 camera; oriented horizontally, b a 3 axes laser level, c tripods on which acoustic devices will be mounted, d intersection of laser beams on the interior’s enveloppe. (st pancrace’s chapel, pyloubier). acta imeko | www.imeko.org june 2022 | volume 11 | number 2 | 6 9) live recordings: a given sentence is pronounced systematically by the same person, positioned behind the altar and facing either the chancel or the nave, and footsteps of a person walking from the entrance to the altar and back are recorded. steps 1, 2, 4, 5 are systematic, steps 3, 6 and 7 require an adaptation to conditions found in situ. steps 4 to 7 are conducted before or after the acoustic measurements, steps 8 and 9, depending on the lighting conditions. the pyramidal protocol consists of a complete series of 6 pictures, 5 at the summits of a square-based pyramid and one in the centre. the roto-translation between each camera position is fast and easy to reproduce, from a single tripod position, thanks to the rotating arm of the tripod and to a rotative head (see figure 6, b). opposed positions are generally arranged by common orientations to create stereo-pairs for which: longitudinal capture (camera positioned vertically or horizontally and oriented on the long axis of the chapel shown by the red axis in figure 6a) helps the alignment along the sequence and correlation of elements perpendicular to side walls, zenithal capture (camera positioned horizontally and oriented up and down with blue axis in figure 6b) improves the cover of floor and ceiling, transversal capture (camera positioned horizontally or vertically and oriented on side walls with green axis in figure 6b) is globally used for close-range reconstructions. on top of the general improvement for 3d reconstruction this pyramidal protocol turned out fully versatile against multiple in-situ constraints. from our experience on the 15 chapels that were surveyed, several benefits can be noted in terms of : completeness: minimization of occluded areas concerning architectural, structural or ornamental elements, velocity: gain in acquisition time by minimizing the number of tripod stations, accuracy: increase in redundancy for pointing, scaling and extracting coordinate positions or measurements, security: enabling sufficient back-up data in case of a mistake or error, practicality: avoidance of obstacles in the sequence (including our own equipment that must remain fixed during data acquisition). the overall protocol is relatively fluid, and has been systematically reproduced, with as a result, a good feedback now on stop points, i.e. 
key aspects or moments that can result in failures (cf. figure 7). figure 6. schema of the world camera system (a) and an example of a complete pyramidal sequence composed of 6 different positions, (b). bottom, real case application: each yellow sphere corresponds to a camera position. figure 7. a decision diagram positioning key steps of the survey process (steps 1 to 7, grid installation and metric survey), with possible pitfalls and success factors. acta imeko | www.imeko.org june 2022 | volume 11 | number 2 | 7 3.2. acoustic measurements from an acoustical point of view, the main goal of this research is to study the influence of rooms on sound perception: in that context getting consistent results requires the same listeners to perform the same perceptual assessment tasks for each chapel. however, since human immediate auditory memory is short, it is not possible to compare a collection of remote chapels in situ. as a work around, the 3d acoustics of the collection of chapels can be measured and rendered in laboratory conditions. for this purpose, a 3d sound technology (microphone and loudspeaker arrays) based on 3d recordings (eigenmike 32) and higher order ambisonics (hoa) restitution [15], [16] was used. measurements consisted in characterising the so-called spatial room impulse responses (srir). an impulse response corresponds to the sound transformation between a sound source (generated by a loudspeaker) and the sound measured at the microphone level. the srir enables, in a second step, to proceed to the so-called “auralization”. thanks to the convolution operation of an arbitrary sound stimulus with the srir, one can play any stimulus as if it were played in situ. in this paper, srirs were derived from the measurement of sine sweeps, as proposed by [17]. emitted sounds were logarithmic sine sweeps from 20 hz to 20 khz with a duration of 10 s followed by 10 s of silence. this method has many advantages, including fast measurements, good signal to noise ratios (snr) and immunity to source distortions [18]. it is well suited for quiet closed spaces such as rural chapels. the main drawback is that this technique is sensitive to impulsive noise. to overcome this problem, each measurement is repeated three times in a row and the srir is derived from take with the best signal to noise ratio. since the aim was to investigate the acoustics related to the sites’ initial use, i.e. a celebrant near the altar speaking to the audience in the nave, we placed a loudspeaker in the middle of the chancel (point ec, cf. figure 4). two lateral loudspeakers (eg, ed) were then aligned with ec at a distance of 1.25 m (epistle side vs. gospel side in terms of initial use, or if thinking about contemporary reuses of chapels simulation of the rendering of a musical trio). we systematically placed the microphone at point mc at a distance of 5.5 m from ec, and at the same height (cf. figure 8). this distance was constrained by the smallest chapel’s dimensions and corresponded to the largest source-tomicrophone distance that can be obtained in the configuration presented in figure 4. at this distance, the angular spacing between the lateral loudspeakers and the frontal loudspeaker is only 13°. we therefore repeated the same measurements at a closer distance (point ma, apex of an equilateral triangle eg ed – ma, cf. figure 4). an “invariable” placement has been chosen instead of a “proportional” placement since the source-receiver distance plays a major impact on the room acoustics rendering. 
indeed, listening at a fixed distance allow to assess only the sound field in the room independently of the measuring distance. finally, we placed a fourth loudspeaker 40 cm below the microphones in ma and mc. this specific measurement aims at recording the soundfield as if both the transmitter and the receiver were the same person. it can be used in psycho-physical experiments requiring real-time auralization of autophonic stimuli. as an example, we plan to study the influence of room acoustics on musicians’ gestures. as far as the equipment is concerned, the main measurement was recorded with a 3d microphone released on the consumer market, the mh-acoustics eigenmike 32 (em32). this spherical array of 32 microphones has already been used for sound field analysis and for perceptual studies [19], [20]. it allows precise spatial recording of the sound-field that can further be converted for restitution purposes to hoa format up to the 4th order. the loudspeakers used were genelec 8020c. this loudspeaker is not omni-directional (as required for measurement of acoustics parameters [21]), but as mentioned earlier the main goal of these measurements was to proceed to auralization. for this purpose, omni-directionality was not required since the sources to be auralized have their own directivity pattern (e.g. voice, guitar, etc.). moreover, this loudspeaker was chosen here for its compactness, and its relatively low-cost (compared to dodecahedron omni-directional sources) while having fair frequency characteristics (+/2.5 db, 66 hz to 20 khz). this protocol was deployed on the fifteen chapels of the corpus. additionally, a reference measurement based on the same layout was performed in an anechoic chamber (cf. figure 9). this reference measurement allowed to characterise the setup in free field (i.e. without any room effects). while the main measurement consisted in recording the srir, we also recorded the voice of a person positioned behind figure 8. vertical alignment of the eigenmike 32 microphone, once positioned on the tripod in point mc, using the laser beams (n.d. de la salette, tourves). figure 9. the eigenmike 32 and speakers positioned for reference measurement in an anechoic room. acta imeko | www.imeko.org june 2022 | volume 11 | number 2 | 8 the altar and pronouncing a given sentence while facing the chancel and then facing the nave. the idea was to open up on a qualitative measurement of the impact of the vatican ii council’s reforms on the way the celebrant’s discourse is perceived when he faces the people (current ritual) and when he faces the altar (the way it used to be prior to the reform). additionally, footsteps of a person wearing systematically the same shoes and clothes and walking from the chapel’s entrance to the altar were also recorded. finally, 5 min of the soundscape inside and outside the chapel were recorded using a zoom h3vr. the soundscape is related to the recording moment (day vs night, summer vs winter, etc.), but for practical reasons we could not record at the same time in the different places. to quantify the temporal influence, for one specific chapel we recorded the soundscape during 2-min each 30-min during 10 days and repeated the process twice (in february and in july). for these measurements we used the labmaker audiomoth, a low-cost acoustic monitoring device used for monitoring wildlife. we repurposed it here to monitor the soundscape with a twofold prospective aim, i.e. 
characterising (qualitatively) the variations of the soundscape over time and characterising the way the building acts as an acoustic “filter” (both the exterior and the interior soundscapes were recorded at the same time). 4. initial data processing results of the acquisition step act as inputs processed independently at first and then pulled together again as combined outputs in a twofold way: end-user products and analytical overlays to the potree 3d pointcloud renderer. this section focuses on describing the way raw data is processed, and a discussion on how the data is repurposed with regards to this or that objective is proposed in section 6. 4.1. metric and visual data processing the first step is to produce the 3d point clouds from the yi360 panoramas. the photogrammetric processing is done using the agisoft metashape suite, with micmac as a prospective alternative solution. to do so named points acquired with the rangefinder are transferred into a csv-formatted list. they are then used as control points in order to scale the photogrammetric model (cf. figure 10). the resulting point cloud is then exported and integrated in the potree renderer, a free open-source webgl based point cloud renderer developed at tuwien [5]. one of its most valuable aspects is that it allows for the development of “overlays“additional functionalities that can be tailored to specific user needs. the first add-ons introduced focus on viewing the input data resulting from the survey protocol itself, for each chapel, i.e. on one hand the dxf input extracted from the rangefinder and on the other hand the panoramas extracted from the 360 camera. concerning the former, the leica s910 rangefinder allows the surveying of a maximum of thirty points in a row, outputted as one single dxf file. this is why three different survey protocols had to be conducted (grid of auditory devices, scaling and direct measures), with from nine to seventeen points surveyed for each. as a result a step of realignment of the dxf outputs was necessary. they are automatically realigned geometrically in the same frame, when loading each chapel inside the viewer. this is done from a manually created text file that identifies the first two reference points for each of the three dxf files associated with each chapel. from these two points, all the other dxf points are readjusted by translations, then rotations, and finallly displayed in the renderer. this adjustment also makes it possible to display, in their correct positions, each thumbnail image associated with each dxf point. these images are recorded by the leica distortion camera all along the protocol. concerning the panoramas extracted from the yi 360 camera, they are materialised in the renderer by spheres (cf. figure 6), textured with the stitched 360 panoramic photos. spheres are positioned from a text file associating each image file name with its xyz position extracted during the photogrammetric processing. they give access to the corresponding panorama (viewed using the panolens js library, cf. section 6). finally, other specific add-ons allow on-the-fly measurement on dxf points, the naming of these points, the visualization of the laser levels, and the representation, based on an on-the-fly computation of the approaching volume of chapels by voxelbased segmentation of the dense point cloud. the renderer is used to display a complete point cloud, but also allows for user-monitored selections of sub-clouds (sections corresponding to the laser beams, segmented upstream – cf. 
figure 10). 4.2. acoustic data processing for all measurements using the eigenmike a few operations were applied to the recorded signals. first, the 32 input channels were encoded in the spherical harmonics domain using a vst plugin provided by mh acoustics. the eigenmike allows an encoding up to the 4th order on the spherical harmonics basis, corresponding to a 25-channels signal. this 25-channels signal is then decoded in two ways: (i) for a restitution through a 42loudspeakers; and (ii) for a restitution through headphones with a binaural conversion [22]. for measurements using other microphones (neumann and zoom), no specific operation was required after the acquisition. sounds reproduced were either directly the ones recorded on site (speech and footsteps) or the characterised srir convolved with monophonic stimuli (as mentioned in section 3.2). in the metrology field in general, measurements are subject of uncertainties. in this work in particular, the instruments’ positions could differ, and the variability of positions is quantified in the following. tripod positions of points mc, eg, ec and ed were measured using the leica disto. the distance mc-ec over the 15 chapels was 557 +/5 cm. the standard deviation was 5 cm over the 15 chapels, corresponding to the centimeter-sized accuracy targeted. the mean distance was 557, figure 10. the polyline that corresponds to the scaling protocol (dxf outputted by the leica disto, twelve control points), represented inside the potree point cloud renderer. here only a sub-coud correspoding to the horizontal laser beam is shown. acta imeko | www.imeko.org june 2022 | volume 11 | number 2 | 9 slightly higher than the 550 cm targeted, but this disto measurement corresponded to the top of the tripods, while the instruments placed on top of it (loudspeaker and microphone) have a centimetre-thickness explaining this difference. 5. obstacles and limitations experimenting the multimodal acquisition and processing the chain on fifteen different chapels shows that the overall method is sound, with some clear strong points. it is fast, reproducible, lightweight enough to be applied in remote sites. ultimately it does correspond to what were our analytical needs: easy extraction of architectural features, close integration of acoustic and metric data, and a ground for comparisons and correlations. briefly said the protocol allows for a quick multimodal acquisition, with the scaling of the photogrammetric model facilitated by the combined use of self-levelling levels, of a rangefinder and of a low-cost 360 camera. yet the experimentation also highlighted some difficulties that one may have to overcome at different steps of the protocol when applying it to different use cases. first and foremost the method is tailored to a quite specific set of requirements (and particularly in terms of corpus) – a limitation per se (see section 2.1). with a closer look at the acquisition time, the method does requires a step of adaptation to in situ conditions such as lighting or congested spaces, factors that affect the photogrammetric acquisition. having to cope with local conditions is natural and not surprising, but the quantity of potential factors of failure to take into consideration is significant. typically pointing at intersections of the laser beams with the rangefinder may be seen as a very straightforward task, but in practice potential occlusions, surface conditions or angles of incidence have to be dealt with. 
furthermore, one should keep in mind limitations due to the photogrammetric process itself in conditions where contrasts lack. thanks to the global efficiency and robustness of our protocol, the complementary use of terrestrial laser scanner (faro focus 3d) was required in only two of the fifteen sites, exemplifying the limitation of the geometric survey method. one chapel was textureless, mostly covered with white painting which is prohibitive for ibm (cf. figure 11). for the second chapel, the low resolution of the camera was suspected not to cover efficiently the most difficult chapel of the corpus (in terms of dimensions, volume and architectural complexity). at processing time, the quality of the 3d point clouds produced varies noticeably due to the above mentioned factors. it can be seen as good enough in a research programme that targets services like extracting dimensions or positioning instruments in space relatively to a systematic reference system, but it obviously is not good enough if targeting a fine-grain 3d mesh reconstruction. so at the end of the day comes the question of how to rate the “quality” and “reproducibility” of the protocol, and the corollary issue of “in how does the choice of a low cost 360 camera impact the final results”. as a provisional answer an experiment has been conducted in the “fake chapel” to evaluate the potential gain of upgrading our protocol with professional vr camera developing up to 12k panoramas instead of low-cost devices. aware of the main limitation of the method, the aim was to discuss the scalability of the setup to complex case-studies, regarding resolution vs. accuracy aspect. briefly said, the insight of this qualitative comparison confirms the hypothesis made in section 2.2, that the acquisition strategy can improve results in a more significant way than the technological component itself. our preliminary tests show that the density but also the uncertainty actually increases proportionally to the resolution. therefore a better resolution of image sources doesn’t improve intrinsically the quality and the range of the reconstruction without an efficient data acquisition protocol. as shown in figure 12, the acquisition canvas (i.e. camera position) seems to be more effective on the result than the resolution of the panorama itself. concerning acoustic measurements, there are also some limitations. the placement of the eigenmike microphone is tricky, in particular because it is a spherical object. because its correct placement is important if wanting to ensure comparability, its positioning might be time consuming. this is why, due to the time limitation on site (two sites per day constraint), the number of microphone positions was limited to two (ma and mc, cf. figure 4). a higher number of measurement positions would have allowed a dynamic auralization, so as to reproduce an exploration through the chapel. reproducing such an exploration requires a numerical simulation of the room, and necessitates precise information on acoustic properties of the building materials [23]. in addition, it has to be said that a number of factors related to conditions found in situ (typically congestion of spaces, or simply time of the day) do impact the direct, raw comparability of the data (and in particular of live recordings). 
these factors should act as a reminder that such data sets should not be overinterpreted, but rather be considered as means to reveal a qualitative acoustic identity of the sites, and to uncover general trends and patterns within the collection of sites. 6. data exploration and reuse the processing chains presented above lead to the production of a series of heterogeneous data sets: figure 11. a case found to be critical for ibm (saint-roch chapel, les mées). top, a stitch showing the predominance of white surface in the edifice. bottom, the reconstruction, combining the photogrammetric model (brown points) and the laser scanner’s output (white points). acta imeko | www.imeko.org june 2022 | volume 11 | number 2 | 10 raw photographic material (unstitched hemispherical photograph – raw outputs of the 360 camera), panoramas (stretched, little planet, and round views), 3d point clouds, including localisations of the grid of the acoustic devices, raw quantitative data (dimensions, volumes, xyz coordinates of cameras and acoustic devices), room impulse responses, auralizations in 4 points (ma, mc, md, mg). live recordings, quantitative acoustic indicators (e.g. reverberation time). these outputs are integrated in various ways, in order to allow further exploration and cross-examination of the data sets, and the extraction of goal-bounded interpretations. said differently, the above data sets are repurposed and combined with regards to usage scenarios that range from pattern analyses (e.g. proportions, reverberation, etc.) to the production of dissemination material. the following sub-sections illustrate the two main lines of development that are being followed, one targeting fine grain analysis and quantitative data correlation, one targeting perceptual analyses in contexts ranging from experimental setups to dissemination or edutainment activities. 6.1. acoustic indicators as mentioned before, the spatial room impulse response (srir) measures were primarily needed in order to process auralizations (see section 3.2). but they were also used to compute acoustic indicators [21], [24], [25], [26], listed hereafter. acoustic indicators are a set of quantitative values that are used to characterise and differentiate spaces, roughly said on three aspects (time-related indicators such as reverberation time, tonerelated indicators such as bass ratio, and space-related indicators such as lateral strength). most of the indicators were computed using the 0-order component of the spherical harmonics (omnidirectional component). on the overall, 13 such indicators have been computed, among which for instance the reverberation time (rt20 and edt), the central time, the c50 clarity (see section 6.2), the acoustic strength, the schroeder frequency, the spectral centroid, the bass ratio, the treble ratio, and an approximation of the speech transmission index (without considering the background noise). some indicators were computed using all spherical harmonic components to take into account the spatial: the interaural cross-correlation (based on a binaural reduction), the lateral strength and the lateral energy fraction. in the next steps of the research programme, these indicators will be used, in correlation with quantitative and qualitative architectural features, to characterise the particularities of each chapel, and to analyse the collection as such. 6.2. 
2d/3d visualisation of acoustic indicators two of the quantitative acoustic indicators produced are calculated for each position of the eigenmike microphone, and provide values that correspond to a specific angle in space. this gives an opportunity to try and spot differences in the way the sound hits the microphone depending on its origin (emission point) and on reflectance patterns inside the building. these indicators correspond to transmission-reception pairs (four speaker positions and two microphone positions), and correspond to two different methods. the c50 clarity indicator (relation of the early ir – 50 first milliseconds – to the late ir – after 50 milliseconds) is calculated on the 32 channels of the eigenmike microphone: one quantitative value for each capsule, and for each of the seven frequencies (cf. figure 13). it is important to mention that according to [21], c50 is calculated from an omnidirectional room impulse response. in our case the 32 microphones of the eigenmike (em32) are not omnidirectional, especially for high frequency bands. we therefore chose to calculate the c50 on each capsule of the em32 to highlight the early energy differences with respect to the microphone directions. figure 12. qualitative comparison tests conducted inside the “fake chapel” (see section 2.1): result of cloud2cloud (c2c) distance between laser scanner reference and (top) point cloud generated from a 12k panorama without pyramidal protocol and (bottom) compared to a pointcloud generated from the 5.7k panoramas with pyramidal sequence. figure 13. a 2d visualisation of the clarity values for one recording emitting tuple. each symbol corresponds to one of the 32 capsules of the eigenmike microphone, projected on a 2d plane. sectors correspond to the seven frequencies, and colours to a quantitative value (in db, distributed in a 16 values colour scale). note here for instance a dissimilarity for the 8k frequency between values for angles 45 and 69 (top left) and for values 291 and 315, right) that cannot be explained by the layout – particularly simple and regular – of the edifice (saint-roch chapel, les mées). acta imeko | www.imeko.org june 2022 | volume 11 | number 2 | 11 another space related indicator is a spatialized energy map (unrelated to the 32 channels) calculated using the pwd method (plane wave decomposition) [27]: one value every 5 degrees of rotation, 2592 values for one microphone position. in this case the energy map evolves with time: twenty time-frames are considered for each microphone position. visualisation of such data sets raises two major and tricky challenges that go beyond the scope of this paper: having the analyst understand the relation of the data with space, and handling temporal aspects (sound evolving with time)an ongoing research issue in the infovis (information visualisation) community [28]. what can be said at this stage is that several exploratory visual formalisms have been developed, both in 2d and 3d. in brief 2d solutions offer a better global and synthetic view (no occultation), but 3d solutions are more efficient in helping analysts spot the potential relation of a value or of a pattern to an architectural specificity in the chapels’ interiors. concerning 3d solutions data is represented in the potreebased renderer (see section 4.1) by spherical heat maps (cf. figure 14), at the two microphone positions (ma and mc). this visualization enables to display the distribution of the reception data over 360 degrees, for both methods. 
it is interactive thanks to the two tools allowing to choose the transmission-reception pairs or to change the frequencies or the temporality according to the chosen methods. obviously what will come out of this part of the research is the main scientific added-value of the whole approach, but we are here still in an exploratory phase, with challenges that concern infovis methods rather that metrology per se. 6.3. panoramas in the four recording points, with soundtracks as mentioned before, two types of soundtracks are produced: basic live recordings (speech, footsteps and sound scape) and auralizations (simulations of how the same soundtrack would be perceived if played in the different recording positions of the various chapels). these outputs are used (and combined) inside online immersive panoramas corresponding to the four recording positions ma mc mg md (cf. figure 15). the panorama itself is viewed using the panolens.js javascript library, in which users move from one position to another a bit like in 3d bubble worlds. on each position soundtracks are available and can be played, hence allowing listeners to spot differences as they would be perceived in situ. another somehow resembling set of outputs is a collection of interactive pdf flyers (cf. figure 16) on which “little planet" views of interiors are combined with textual triggers that launch the soundtracks (6 audio tracks illustrating the acoustic identity of the building clap, guitar, piano, steps, voice, exterior). these served as a basis for various dissemination initiatives or edutainment-like presentations of the research, typically in sound/space association games in which the audience must associate a building with its acoustics. at the end of the day what can be said about the overall method (a combined acquisition procedure, parallel processing chains, and common data reuses or explorations) is that it does promote reproducibility and repurposability. in no way do we ignore or minor weaknesses such as metric accuracy – but on the other hand it has never been the core goal of our research knowing the severe economic constraints that one has to deal with in the context of minor heritage, and the impact on the objects themselves of multiple and undocumented transformation phases. instead, we consider that minor heritage items can gain visibility and support when they are envisioned as part of a wider asset: a collection. hence experimenting and better understanding how actors concerned could tailor the data acquisition, processing and reuse steps to collections of smallscale, minor heritage assets has been and remains the core result of the approach. figure 14. 2d and 3d visualisations of the pwd energy map (n.d de bethléem, bras): twenty time frames available in the 3d version, 4 time frames in the 2d version. note for the latter differences between lines 2 and 3 (emission point ec or ed). figure 15. an online panorama in point mc, with (bottom right) symbols used to give access to the various soundtracks. an exemple for notre-dame-durevest chapel (esparron-de-pallières). figure 16. an example of an interactive sound + image pdf flyer with (on the left) text triggers that launch soundtracks for notre-dame chapel in brueauriac. acta imeko | www.imeko.org june 2022 | volume 11 | number 2 | 12 7. 
perspectives at this stage there are still improvements that could be brought about in the data acquisition phase itself: typically a better monitoring of the lighting conditions in situ, or maybe an automatization of the rangefinder’s movements or of the rotating arm and rotative head of the tripod used during the photogrammetric survey. the latter improvement would speed up the protogrammetric survey itself significantly. combining an acquisition using a 360 panoramic camera and a more detailed photogrammetric acquisition for this or that architectural detail could also be a sound perspective. indeed fine geometric details such as engravings on memorial stones, mouldings on altars, or sculptures cannot be acquired with enough resolution and quality using a low-cost 360 panoramic camera. surveying such details could be done using a more “traditional” photographic captor. the resulting high density cloud of points, focused on one specific area, could then be integrated to the general 3d reference system and offer a sort-of eagle eye view on parts of the building’s decor. later on in the processing chain, in terms of data fusion there is a very promising lead with the use of spherical depth map [29] combined with acoustical descriptors to enlarge cross-correlation analysis performed through image processing or signal processing algorithms [30]. more generally next steps are bounded by a backbone objective: better understanding, characterising such buildings and the way they are perceived, though vision and audition. this implies building on the interdisciplinary nature of the research, including in the analysis steps. accordingly we currently launch a series of experiments aimed at exploiting 3d representations to position and analyse acoustic data, and at using sound to represent dimensions and geometric features. as far as metric and visual data is concerned our approach, at first, can be summed up as a “feature extraction” effort: dimensions, ratiosas-proportion [31], [32], etc. as opposed to approaches where a 3d point cloud is analysed as such (segmentation, classification, etc.) [33]. features can then be compared, trends spotted, exceptions raised and analysed, based on methods and practices from the infovis community. in that future effort, data extracted from traditional manual surveys (quantitative or qualitative) will complement dimensions and geometric features in order to widen the scope of differentiating factor when comparing chapels. concerning sound data, the next steps of this work is to use the collected data, to categorize and to distinguish the chapels in terms of acoustic descriptors and perceptual criteria. in particular, several listening tests will be conducted. the 3d sound field perception is a complex process, leading to several specific experimentations. for instance, a recent sound source localisation protocol [34] will be experienced as well as “sound coloration” evaluation. furthermore, we intend at visually and acoustically immersing the participants, in order to check for the coherency between vision and acoustics. to take the acoustic simulations further, the integration of metric data acquisition, 3d point-cloud model estimation and acoustical measurements are of great interest. indeed, acoustical simulation tools such as catt or odeon allow 12-dof auralization of rooms based on 3d geometric models and impulse response measurements [23]. however, these tools are restricted to simple geometric models with a limited set of walls. 
further work aims to derive simple geometric models, compatible with such tools, from complex 3d-point cloud models as suggested in [35]. 8. conclusion this contribution reports on a research programme anchored in two prime concerns: considering interdisciplinarity as a mandatory requirement in the surveying of architectural interiors (during the codesign of the survey protocols, and in all subsequent phases of the operations, competences stemming from both architectural and acoustic studies were associated), tailoring the research’s technological and methodological choices to the specific context of small-scale architecture (a collection of buildings with little prestige, often neglected or at risk). the specificity of the corpus under scrutiny undoubtedly shaped the overall survey and data processing strategy. in that sense, one of this research’s originality is the effort to overcome the operational limits of a set of low-cost technologies. the overall protocol intends to help actors to characterise and correlate acoustic and morphological features of heritage figure 17. top: comparative analysis of data indicators (reverberation time vs. volume) across the collection (top): two items (n° 1 and 14) in the collection stand out significantly due to their high reverberation time and small volume. bottom: the corresponding point clouds are bordered in colour in the bottom graphics: n° 1, red n.d de bethléem (bras), n° 14, green saint-roch chapel (les mées). acta imeko | www.imeko.org june 2022 | volume 11 | number 2 | 13 architecture in a consistent way. therefore it aims at opening up new analytical biases, built on the principles and potential of comparative analyses. as an example, figure 17 shows cross modal representation of the studied collection, by representing the individuals as a function of their volumes v and reverberation times rt20. we make no claim this second ambition has yet been reached: what has actually been done is tailoring the data acquisition and processing chain to an interdisciplinary list of requirements, in order to allow for a series of analytical tasks that are now being carried out. the workflow has been applied to a collection of fifteen small-scale buildings (rural, often isolated chapels), with keeping the constraints linked to that type of heritage asset. the approach does open up new research trails, typically in terms of perceptual experiences combining sound and space, or in the 3d visualisation of acoustic data and the sonification of dimensional data. acknowledgement the project is funded by the anr, the project-based funding agency for research in france, under the id anr-18-ce38-000901. (http://anr-sesames.map.cnrs.fr/) references [1] l. alvarez-morales, t. zamarreño, s. girón, m. galindo, a methodology for the study of the acoustic environment of catholic cathedrals: application to the cathedral of malaga, building and environment vol 72, 2014, pp. 102-115. doi: 10.1016/j.buildenv.2013.10.015 [2] z. karabiber, the conservation of acoustical heritage. in cultural heritage research: a pan-european challenge. proc. of the 5th ec conference, cracow, poland, 2002, pp 286-290. isbn 92-8944412-6 [3] boren, b. b, acoustic simulation of j s bach’s thomaskirche in 1723 and 1539, acta acustica vol 5, no. 14, 2021. doi: 10.1051/aacus/2021006 [4] l. barazzetti, m. previtali, f. roncoroni, can we use low-cost 360 degree cameras to create accurate 3d models? isprs tc ii midterm symposium “towards photogrammetry 2020”, riva del garda, italy, 4–7 06 2018. 
isprs archives vol xlii-2, pp 69-75. doi: 10.5194/isprs-archives-xlii-2-69-2018 [5] m. schütz, markus, potree: rendering large point clouds in web browsers. ph d thesis vienna university of technology, 2016. [6] w. c. sabine, collected papers on acoustics. harvard univ. press. reprinted by peninsula publishing, acoustical society of america, newport beach, 1993, isbn 9780932146601. [7] council of arts accrediting associations 2009 disciplines in combination: interdisciplinary, multidisciplinary, and other collaborative programs of study caaa briefing paper. online [accessed 17 june 2022 ] https://nast.arts-accredit.org [8] j. y. blaise, i. dudek, g. saygi, analysing citizen-birthed data on minor heritage assets: models, promises and challenges, international journal of data science and analytics, springer verlag, 2019, pp. 1-19. doi: 10.1007/s41060-019-00194-0 [9] l. t. losè, f. chiabrando, a. spanò, preliminary evaluation of a commercial 360 multi-camera rig for photogrammetric purposes. international archives of the photogrammetry, remote sensing & spatial information sciences vol. xlii-2, 2018, pp. 1113-1120. doi: 10.5194/isprs-archives-xlii-2-1113-2018 [10] c. gottardi, f. guerra, spherical images for cultural heritage: survey and documentation with the nikon km360. isprs international archives of the photogrammetry, remote sensing and spatial information sciences vol. xlii–2, 2018, pp. 385–390. doi: 10.5194/isprs-archives-xlii-2-385-2018 [11] g. fangi, r. pierdicca, m. sturari, e. s. malinverni, improving spherical photogrammetry using 360° omni-cameras: use cases and new applications. international archives of the photogrammetry, remote sensing & spatial information sciences vol. xlii-2, 2018, pp. 331-337. doi: 10.5194/isprs-archives-xlii-2-331-2018 [12] t. luhmann, a historical review on panorama photogrammetry, international archives of the photogrammetry, remote sensing and spatial information sciences, vol. 34, 2008. [13] l. perfetti, c. polari, f. fassi. fisheye multi-camera system calibration for surveying narrow and complex architectures, isprs-international archives of the photogrammetry, remote sensing and spatial information sciences, vol. xlii-2, 2018, pp. 877-883. doi: 10.5194/isprs-archives-xlii-2-877-2018 [14] l. perfetti, c. polari, f. fassi, s. troisi, v. baiocchi, s. del pizzo, f. roncoroni, fisheye photogrammetry to survey narrow spaces in architecture and a hypogea environment, latest developments in reality-based 3d surveying and modelling; mdpi: basel, switzerland, 2018, pp. 3-28. doi: 10.3390/books978-3-03842-685-1-1 [15] m. a. gerzon, ambisonics in multichannel broadcasting and video, journal of the audio engineering society, vol. 33, no. 11, 1985, pp. 859-871. [16] j. daniel, représentation de champs acoustiques, application à la transmission et à la reproduction de scènes sonores complexes dans un contexte multimédia. ph. d. thesis, university of paris vi, france, 2000. [17] a. farina, simultaneous measurement of impulse response and distortion with a swept-sine technique, 108th aes convention, paris, france, 2000. [18] s. müller, p. massarani, transfer-function measurement with sweeps, journal of the audio engineering society, vol. 49, no. 6, 2001, pp. 443-471. [19] a. farina, a. amendola, a. capra, c. varani, spatial analysis of room impulse responses captured with a 32-capsule microphone array, 130th audio engineering society convention, london, 2011. [20] d. a. dick, m. c. 
vigeant, an investigation of listener envelopment utilizing a spherical microphone array and thirdorder ambisonics reproduction, the journal of the acoustical society of america, vol. 145, no. 4, 2019, pp. 2795-2809. doi: 10.1121/1.5096161 [21] iso 3382-1 (2009). acoustics – measurements of room acoustic parameters – part 1: performance spaces [22] h. moller, fundamentals of binaural technology, applied acoustics, vol. 36, no. 3/4, 1992, pp. 171–218. doi: 10.1016/0003-682x(92)90046-u [23] b. n. j postma, b. f. g katz, perceptive and objective evaluation of calibrated room acoustic simulation auralizations, the journal of the acoustical society of america, vol. 140, no. 6, 2016. doi: 10.1121/1.4971422 [24] a. gade, acoustics in hall for speech and music, springer handbook of acoustics, isbn 978-0-387-30446-5, 2007, p. 301. doi: 10.1007/978-0-387-30425-0_9 [25] f. a. everest, k. c. pohlman, master handbook of acoustics, 5th edition, mcgraw hilll, new york, 2009, isbn 9780071603331. [26] g. peeters, a large set of audio features for sound description (similarity and classification) in the cuidado project (cuidado ist project report), 2004. [27] e. g. williams, fourier acoustics: sound radiation and nearfield acoustical holography, academic press, new york, 1999. doi: 10.1016/b978-0-12-753960-7.x5000-1 [28] w. aigner, s. miksch, h. schumann, c. tominski, visualization of time-oriented data, human-computer interaction series, springer-verlag, london, 2011, isbn: 978-0-85729-079-3. [29] n. zioulis, a. karakottas, d. zarpalas, p. daras, omnidepth: dense depth estimation for indoors spherical panoramas, proc. of http://anr-sesames.map.cnrs.fr/ https://doi.org/10.1016/j.buildenv.2013.10.015 https://doi.org/10.1051/aacus/2021006 https://doi.org/10.5194/isprs-archives-xlii-2-69-2018 https://nast.arts-accredit.org/ https://doi.org/10.1007/s41060-019-00194-0 https://doi.org/10.5194/isprs-archives-xlii-2-1113-2018 https://doi.org/10.5194/isprs-archives-xlii-2-385-2018 https://doi.org/10.5194/isprs-archives-xlii-2-331-2018 https://doi.org/10.5194/isprs-archives-xlii-2-877-2018 https://doi.org/10.3390/books978-3-03842-685-1-1 https://doi.org/10.1121/1.5096161 https://doi.org/10.1016/0003-682x(92)90046-u https://doi.org/10.1121/1.4971422 https://doi.org/10.1007/978-0-387-30425-0_9 https://doi.org/10.1016/b978-0-12-753960-7.x5000-1 acta imeko | www.imeko.org june 2022 | volume 11 | number 2 | 14 the european conference on computer vision eccv, munich, germany, 2018, pp. 448-465. doi: 10.1007/978-3-030-01231-1_28 [30] a. pamart, d. lo buglio, l. de luca, morphological analysis of shape semantics from curvature-based signatures, digital heritage conference, granada, spain, 2015, pp. 105-108. doi: 10.1109/digitalheritage.2015.7419463 [31] j. y. blaise, i. dudek, identifying and visualizing universal features for architectural mouldings, ijcisim, vol. 4, 2012, issn 2150-7988, pp. 130-143. [32] m. a. cohen, conclusion: ten principles for the study of proportional systems in the history of architecture, architectural histories, vol. 2, no. 1, 2014. doi: 10.5334/ah.bw [33] e. grilli, f. menna, f. remondino, a review of point clouds segmentation and classification algorithms, isprs archives volume xlii-2/w3, 2017, pp. 339-344. doi: 10.5194/isprs-archives-xlii-2-w3-339-2017 [34] s. fargeot, o. derrien, g. parseihian, m. aramaki, r. 
artificial neural network-based detection of gas hydrate formation

acta imeko
issn: 2221-870x
september 2021, volume 10, number 3, 117 - 124

ildikó bölkény1

1 research institute of electronics and information technology, university of miskolc, h-3515 miskolc-egyetemváros, hungary

section: research paper
keywords: gas hydrate; neural network; hydrate detection; injection system; modelling equipment
citation: ildikó bölkény, artificial neural network-based detection of gas hydrate formation, acta imeko, vol. 10, no. 3, article 17, september 2021, identifier: imeko-acta-10 (2021)-03-17
section editor: lorenzo ciani, university of florence, italy
received january 29, 2021; in final form september 17, 2021; published september 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was supported by the university of miskolc, hungary.
corresponding author: ildikó bölkény, e-mail: bolkeny@eiki.hu

abstract: in the production process of natural gas, one of the major problems is the formation of hydrate crystals, which create hydrate plugs in the pipeline. hydrate plugs increase production losses, because their removal is a costly, time-consuming procedure. one of the solutions used to prevent hydrate formation is the injection of modern compositions into the gas flow, helping to dehydrate the gas; dehydration means that the size of the hydrate crystals does not increase. these substances, used in low concentrations, have to be injected locally at the gas well sites, and the inhibitor dosage depends on the amount of gas hydrate present. in this article two artificial neural network (ann)-based predictive detection solutions are presented; in both cases the goal is to predict hydrate formation. the data come from two systems. in the first, measurements were performed by self-developed and self-produced equipment, and differential pressure was used as the input. in the second, data come from the measurement system of a motorised chemical-injector device, and pressure, temperature, and the quantity and type of inhibitor were used as inputs. both systems are presented in the article.

1. introduction

natural gas hydrates are crystalline solids composed of water (host) and gas (guest). the guest molecules are trapped inside ice cavities, which are composed of hydrogen-bonded water molecules. typical guest molecules in natural gas include methane, ethane, propane and carbon dioxide. hydrate particles can form ice-like hydrate plugs that completely block the pipeline and can be up to several metres long. the number of hydrate molecules can increase to a level where the molecular agglomeration process begins, which can cause plug formation in a given section of the pipeline. in the worst cases the hydrate plugs result in production outages [1], [2].

in the mid-1930s hammerschmidt found that natural gas hydrates can block gas transmission, especially at low temperatures. this discovery was pivotal and shortly thereafter led to the regulation of the water content in natural gas pipelines. the detection of hydrates in pipelines is a milestone marking the importance of hydrates to industry [3].

gas wells are prone to developing serious hydrate problems because of the water content of the production. the cold zones of the ground can shift the temperature of the pipe and its contents into the hydrate-formation region. hydrates start forming from layers of water on the pipe walls, and crystallisation can result in hydrate plugs tens or hundreds of metres long [1], [4].

multiple techniques exist to prevent the formation of hydrates. in the gas industry one of the most popular solutions is the prolonged use of thermodynamic inhibitors (thi).
the injection of thi shifts the hydrate curve to a region where the conditions are not adequate for stable hydrate formation [2]. these compounds (methanol, ethylene glycol) have to be injected into the gas in high volume to be effective against hydrate formation. this is not a modern solution, because it has several disadvantages, such as the cost of the additional pipelines that have to be led to the gas wells [5] and the cost of methanol regeneration, which also contaminates the environment. one of the newer alternatives is the injection of low-dosage hydrate inhibitors, such as kinetic hydrate inhibitors, which can prevent the growth of hydrate molecules [6]. anti-agglomerants also belong to this group; they allow the formation of gas hydrates but keep the hydrate crystals small and dispersed [7]. these modern, low-dosage inhibitors enable the use of locally installed injection systems in the field, at the site of the gas wells [8]. as can be seen, hydrate detection is key to administering the appropriate amount of inhibitor.

1.1. objective and methodology

the paper compares two approaches. in the first, the formation of gas hydrate was studied under laboratory conditions; the gas hydrate formation can be determined from the pressure curve. using the measurement results, a single-input solution based on an artificial neural network (ann) was created, where the input is the differential pressure. in the second project, test measurements were performed with a field hydrate dosing and monitoring system. using the measurement results, a multi-input ann-based solution was developed, where the inputs are pressure, temperature, and the quantity and quality of the inhibitor, as these also influence hydrate formation.

in the first method, measurements were performed by self-developed and self-produced equipment. the modelling equipment is suitable for simulating the gas flow in the pipeline. its conditions are as follows: the temperature is in the range of -20 °c to +30 °c, the pressure is in the typical gas pipeline range, and the flow rate can be set in the 1-10 nl/min range.
during the measurements, different inhibitor materials and gases from all over hungary were used, and the values of the differential pressure, the inlet pressure, the gas temperature and the flow rate of the pipeline were recorded, but only the differential pressure was used to train the neural networks.

in the second approach, data come from the measurement system of a motorised chemical-injector device placed in the area of a well. this model was installed to test the equipment at the site of scada ltd., near hajdúszoboszló in hungary. the following parameters were monitored there: well siphon pressure, drill pipe pressure, injection pipe pressure, well pipe pressure, well pipe temperature, soil temperature, temperature of chemicals, controller temperature, inverter temperature, chemical tank liquid level, inverter current, voltage and frequency. only the well pipe pressure (pressure), the well pipe temperature (temperature) and the inverter frequency (quantity of inhibitor) were used to train the neural networks. after the successful test of the technology model, the equipment was transported to a real gas well in szeghalom (hungary). in this research, data generated over 29 test weeks were used. the gas well was monitored online (one sample per minute) during the 29-week testing period, during which several hydrate plugs formed due to the weather conditions. the most important parameters of both approaches (equipment, inputs, outputs, ann) are shown in figure 1 and figure 2.

the goal was to develop an accurate, stable and reliable ann-based structure. several architectures have been studied. finally, the neural network auto-regressive model with exogenous input (nnarx) [9] and the neural network output error (nnoe) model [10] are presented. several independent data sets were needed for training the networks. previously selected raw data were scaled and normalised, and the resulting data were used to generate training, validation and test datasets for the networks.

1.2. results

the final versions of the ann-based predictive detection solutions were selected after extended comparison processes. in the first approach nnarx and nnoe were used; in the second approach only nnarx was used. in both cases several networks were trained using different datasets. for the first neural-network-based predictive detection solution twelve networks were compared, while for the second six networks were compared, and the best one was selected. in both cases relatively small and simple networks gave the best performance. finally, the predictive solutions were compared.

2. related results in the literature

even though the injection of methanol into natural gas is not advised due to environmental concerns, such experiments can be found in the scientific literature. for example, in [11] french and english researchers reported that methanol was injected into the pipeline, in an environmentally not-so-friendly manner, to prevent the formation of hydrates for gas extraction in the north sea. the injection was controlled using the karl fischer method, which is not the most appropriate approach, because it does not take the salt content into account. as a result, a new method was developed, by which the electrical conductivity and the sound propagation velocity can be measured in addition to the temperature and the pressure. using these four parameters and the devised method, the methanol injection can be kept at an optimum. the paper published in 2013 in [12] also deals with optimising the methanol injection for the inhibition of hydrate formation in industrial processes.
the authors stress the importance of the vapour-state methanol, because it does not participate in the inhibition of hydrate formation. to determine the quantity of inhibitor, two methods were introduced: the first is a mathematical correlation derived from real data sets, the second is based on an ann.

figure 1. the two compared projects – first approach.
figure 2. the two compared projects – second approach.

naturally, the scientific literature does not only deal with methanol injection but also with remedial methods that utilise mono-ethylene glycol or some other inhibitor. one example is by kamaria et al. [13], who realised machine learning by using a least-squares support-vector machine. the hydrate formation in the pipeline can be predicted, and the mono-ethylene glycol amount necessary for the hydrate inhibition can also be estimated [14], based on the method developed by suykens et al. another non-methanol-based hydrate formation inhibition method was used by elgibaly et al. this work deals with the use and development of neural networks for the optimisation of hydrate formation inhibition. to validate their model, experimental data were used, which contain hydrate formation environmental information, gas composition, hydrate inhibitor composition, system pressure and density. the model takes the evaporation of the inhibitor into consideration. the devised method suggests inhibitor injection ratios for gases of various compositions [15].

numerous works in the scientific literature deal with the hydrate formation temperature. the method devised by mesbah et al. [16] uses a least-squares support vector machine algorithm to predict the hydrate formation temperature. the authors used data available in the scientific literature for multiple gas compositions and created a data set for machine learning. the model is more accurate for gases with h2s content. an empirical correlation between temperature, gas pressure and density, by which the hydrate formation temperature can be determined, was shown by khamehchi et al. [17]. this method was further refined using measurement data and an ann, and it gives accurate results. zahedi et al. [18] published two methods for the assessment of the hydrate formation temperature. the first method uses two correlations, with eleven and eighteen parameters; the parameters were obtained from measurement data and the scientific literature. the second method uses an ann and the data from the first method. the problem of the accurate assessment of hydrate formation is discussed in [19], where the authors use the katz gas-gravity method with the ghiasi correlation [20]. the same model was used with the imperialist competitive algorithm [21]. an ann was used to determine a kinetic model for the prediction of methane gas hydrate formation; the authors tried to determine the correct number of hidden neurons and layers. this ann-based model takes the temperature and pressure as inputs, and the output is the hydrate growth speed. in [22] a comparison was made between two methods for the inhibition of gas hydrate development; both use an ann, and in the second it is optimised with the imperialist competitive algorithm [23]. the outcome met expectations and proved that the normal neural network provides better results than the optimised one [23], [24].

3. description of the proposed methods

in this section, the two systems providing the measurement data are presented, and the predictive hydrate detection methods are introduced.
3.1. hydrate forming test equipment

in the first analysis, measurements were performed by a hydrate forming test machine developed for mol plc. by the department of research instrumentation and informatics at the research institute of applied earth sciences. the development of the control system was carried out by the author (figure 3). the modelling equipment is suitable for the simulation of gas pipeline flow. the equipment creates field conditions within the -20 °c to +30 °c temperature range and the original gas pipeline pressure range, which is typically 60 bar. the flow rate can be set, in accordance with modelling principles, between 1-10 nl/min. the hydrate forms inside a capillary cell which is placed in a thermostat.

figure 4 shows the piping and instrumentation (p&i) diagram of the equipment, where pt is the pressure transmitter, tt is the temperature transmitter, ft is the flow transmitter, gt is the gas tank, pg is the pressure gauge, tc is the temperature control, te is the temperature element, va is the valve, sp is the pressure generator unit, pc is the personal computer, hc is the buffer cell, c is the glass cell, and dc and dr are the separator cells. the operation of the equipment is as follows: the dehydrated natural gas is discharged from the gas tank into the pipeline. the pressure gauges are used to set the system pressure, and the flow transmitter is used to adjust the flow rate. the pressure generators mix the formation water and the inhibitor with the natural gas. the pipeline goes through the low-temperature thermostat (te), which cools the natural gas so that hydrate formation can begin. the formation of the hydrate plug can be detected from the measured differential pressures (pt2, pt3).

natural gas and interfacial water from a szeghalom gas well (hungary, near füzesgyarmat) were used in the tests. different inhibitor mixtures were also added. the gas hydrate formation time was examined under gas well conditions (60 bar pressure, low temperature), with or without the addition of different inhibitors. the following parameters were recorded: pressure, differential pressure, temperature and flow rate [25].

figure 3. hydrate forming test equipment.
figure 4. p&i diagram of the hydrate forming test equipment.

3.2. control and chemical dosing equipment

the well area control and chemical injector equipment was installed on the szeghalom-29 well in füzesgyarmat (figure 5). the injection system is optimised mainly for hungarian gas wells; thus, the temperature requirement of the system was the -40 °c to +60 °c range. the system must be capable of working in an ex (explosive atmosphere) environment with high efficiency. the power source of the actuator is solar energy, to reach almost zero emission for the system [25]. figure 6 shows the p&i diagram of the equipment, where pt is the pressure transmitter, tt is the temperature transmitter, lt is the level transmitter and pi is the pressure indicator. the operation of the equipment is as follows: the natural gas, which contains natural interfacial water, enters the pipeline from the gas well. the inhibitor is located in a chemical tank and is delivered to the pipeline by the dosing pump. the dosing rate is provided by a plc control, which is not shown in the figure. the formation of the hydrate plug can be detected from the value measured by the pressure transmitter (pt12).
the following parameters were recorded on a minute basis: well siphon pressure, drill pipe pressure, injection pipe pressure, well pipe pressure, well pipe temperature, soil temperature, temperature of chemicals, controller temperature, inverter temperature, chemical tank liquid level, inverter current, voltage and frequency [25]. the output of the system is the inverter frequency, which is proportional to the amount of administered inhibitor.

figure 5. control and chemical dosing equipment.
figure 6. p&i diagram of the control and chemical dosing equipment.

3.3. neural networks

in the black-box identification of nonlinear dynamic systems, the selection of the model structure becomes a more difficult task. the multilayer perceptron network is the most popular for learning nonlinear relationships from a set of data. for the identification, the nnarx and nnoe structures were used [26], as these models are the most widespread. the nnarx network creates a nonlinear model using its inputs and the required (measured) outputs. the applied regression machine complies with the following relation:

$y_{est}(t) = f[x(t-1), x(t-2), \dots, x(t-n_i), y_{req}(t-1), \dots, y_{req}(t-n_{ro})]$ (1)

where $y_{est}(t)$ is the network output at the t-th time instant; $x(t-1)$ is the input of the network at the (t-1)-th time instant; $y_{req}(t-1)$ is the required output of the network at the (t-1)-th time instant; $n_i$ is the size of the tapped delay line of the inputs; and $n_{ro}$ is the size of the tapped delay line of the required outputs. figure 7 shows the typical structure of the nnarx neural network.

the nnoe network creates a nonlinear model using its own earlier outputs as inputs. the applied regression machine complies with the following relation:

$y_{est}(t) = f[x(t-1), x(t-2), \dots, x(t-n_i), y_{est}(t-1), \dots, y_{est}(t-n_o)]$ (2)

where $y_{est}(t)$ is the network output at the t-th time instant; $x(t-1)$ is the input of the network at the (t-1)-th time instant; $y_{est}(t-1)$ is the network output at the (t-1)-th time instant; $n_i$ is the size of the tapped delay line of the inputs; and $n_o$ is the size of the tapped delay line of the outputs. figure 8 shows the typical structure of the nnoe neural network.

figure 7. typical structure of the nnarx [27].
figure 8. typical structure of the nnoe.

during the model selection, the size of the regressor and the number of hidden neurons in the hidden layers were changed. based on previous practical experience, the number of regressors was 1 or 2, while the number of hidden neurons was between 10 and 12. the selected raw data were pre-processed using the scilab software. according to [28], pre-processing can consist of a simple transformation or a complex operation. the raw data were first filtered by a low-pass filter, then normalised. when normalising the input data, the minimum and maximum values of each component are selected so as to cover the set of values and the interpretation range of the neural networks; this interval is typically [0; 1] or [-1; 1]. in the presented case, the [0; 1] interval was selected for normalisation.
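as an illustration of this pre-processing chain and of the regressor of equation (1), a minimal python sketch is given below; the original work used scilab and matlab, so the function names and the filter cutoff are assumptions, not the author's code:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(raw, fs, cutoff_hz):
    """low-pass filter the raw record, then min-max normalise it to [0; 1]."""
    b, a = butter(2, cutoff_hz / (fs / 2))   # 2nd-order low-pass, normalised cutoff
    filtered = filtfilt(b, a, raw)           # zero-phase filtering
    lo, hi = filtered.min(), filtered.max()
    return (filtered - lo) / (hi - lo)

def nnarx_regressor(x, y_req, n_i=1, n_ro=1):
    """assemble the regressor matrix [x(t-1)..x(t-n_i), y_req(t-1)..y_req(t-n_ro)]
    and the target vector y_req(t) of equation (1)."""
    start = max(n_i, n_ro)
    rows = [[x[t - k] for k in range(1, n_i + 1)] +
            [y_req[t - k] for k in range(1, n_ro + 1)]
            for t in range(start, len(x))]
    targets = np.asarray(y_req)[start:]
    return np.asarray(rows), targets
```

the nnoe regressor would be assembled analogously, except that the past network outputs $y_{est}$ replace the past required outputs during simulation.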
three datasets were generated for the detection systems. the training set was needed to configure the weights of the network. one of the most important parameters of the training process is the stopping criterion: if the training process stops too early, the network is not able to learn the data and gives a poor estimation when an unknown dataset is used. to optimise the network, the validation set is used: when the mean squared error (mse) is the lowest, it is best to stop the training process of the network. the mean squared error complies with the following relation:

$MSE = \frac{1}{n} \sum_i [y_{req}(i) - y_{est}(i)]^2$ (3)

the third, test dataset is independent from the training and validation sets; it is used to compare the results of the different networks. the neural networks were trained using the generated datasets. to avoid overfitting, the training process was stopped at the minimum mse value. the levenberg-marquardt algorithm was used to optimise the ann in matlab. figure 9 shows the workflow of the model development.

figure 9. the workflow of model development [29].

3.4. single input neural network based detection

a large number of measurements was performed with the previously detailed hydrate forming test equipment, using different inhibitor materials and gases from the szeghalom gas well. from this large database, 50 measurement records were selected and used for the investigation. during the measurements, mainly the values of the differential pressure, the inlet pressure and the gas temperature were saved for later investigation. the differential pressure measured by the pt2 or pt3 sensor was used as the input in the first method, depending on which section of the pipe the hydrate formed in. after the appearance of gas hydrate molecules in the gas flow, the pressure in the pipe section increases, because the agglomerated hydrate reduces the cross-section area of the pipeline; therefore, fast gas hydrate detection is very important. from a practical perspective, the differential pressure gives the most valuable information about the processes in the tube; thus, this parameter was used as the input value of the alarm system.

as previously stated, three independent datasets were created. table 1 shows the number of performed measurements and the number of data points included in the different datasets.

table 1. main parameters of the datasets.
dataset | number of performed measurements | number of data points
training dataset | 26 | 2576
validation dataset | 10 | 1077
test dataset | 10 | 1698

the scaled, normalised differential pressure value was used in the datasets as the input. the required output was an artificially generated alarm signal, created from the differential pressure values; the threshold corresponds to 75 percent of the maximum value, see figure 10. while the actual differential pressure value is under the limit, the alarm signal is zero; when it reaches the limit, the signal changes to one.

figure 10. alarm signal (75%).

the single input nnarx network is shown in figure 11, with the used regressor and the mapping function. here, y(t) is the network output at the t-th time instant; yreq(t-1) is the required output of the network at the (t-1)-th time instant; x(t) is the network input at the t-th time instant; x(t-1) is the network input at the (t-1)-th time instant; tdl is the tapped delay line; b is the neuron bias; and w is the weight matrix.

figure 11. single neural network arx.

the single input nnoe network is shown in figure 12, with the used regressor and the mapping function. in figure 12, y(t) is the network output at the t-th time instant; y(t-1) is the output of the network at the (t-1)-th time instant; x(t) is the network input at the t-th time instant; x(t-1) is the network input at the (t-1)-th time instant; tdl is the tapped delay line; b is the neuron bias; and w is the weight matrix.

figure 12. single neural network oe.
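before moving to the multi-input case, the alarm labelling described above can be made concrete with a short sketch; the 75 % threshold follows the text, while the function name and the synthetic example are illustrative assumptions:

```python
import numpy as np

def alarm_signal(dp, ratio=0.75):
    """required output of the network: 0 while the differential pressure dp
    stays below 75 % of its maximum, 1 once the limit is reached."""
    limit = ratio * np.max(dp)
    return (dp >= limit).astype(float)

# synthetic example: a rising differential pressure trips the alarm near its peak
dp = np.linspace(0.0, 1.0, 200) ** 2
y_req = alarm_signal(dp)   # the first "1" appears where dp crosses 0.75 * max(dp)
```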
3.5. multi input neural network based detection

the previously detailed control and chemical injection system was operated in test mode for 29 weeks under continuous monitoring. several parameters were monitored, but only three of them (well pipe pressure, well pipe temperature, and quantity of inhibitor, i.e. inverter frequency) influenced the formation of hydrate. the pressure measured by the pt12 sensor, the temperature measured by the tt12 sensor, and the quantity of inhibitor derived from the inverter frequency of the dosing pump were used as inputs in the second method. the fourth parameter is the type of the applied inhibitor, which was recorded when the inhibitor was placed in the container. the effectiveness of each chemical in inhibiting hydrate formation was demonstrated with the previously described equipment. depending on their inhibition ability, the inhibitors were graded on a scale, see table 2.

table 2. inhibitor efficiency.
hydrate formation time in s | grade | numerical grade
0-2500 | won't inhibit | 1
2501-4000 | weakly inhibits | 2
4001-5500 | inhibits | 3
5501-6500 | strongly inhibits | 4

as previously mentioned, three independent datasets were created: training, validation and test datasets. the main parameters of the datasets are shown in table 3.

table 3. main parameters of the datasets.
dataset | number of performed measurements | number of data points
training dataset | 22 | 2178
validation dataset | 12 | 1068
test dataset | 10 | 1080

the neural network has four inputs and one output: the four inputs are the four parameters listed above, and the output is an alarm signal. the multi-input nnarx network is shown in figure 13, with the used regressor and the mapping function. in figure 13, y(t) is the network output at the t-th time instant; yreq(t-1) is the required output of the network at the (t-1)-th time instant; x1..4(t) are the network inputs at the t-th time instant; x1..4(t-1) are the network inputs at the (t-1)-th time instant; tdl is the tapped delay line; b is the neuron bias; and w is the weight matrix.

figure 13. multi neural network.

4. results and discussions

the performance of the network is adequate if the required output (blue graph in figure 14) and the network output (red graph in figure 14) match each other. the mse alone gives no satisfactory information about the performance; therefore, the numbers of edges in the sample sets were determined by the rising edge (re) method and then compared. if the edges match each other, it can be said that the alarm occurred at the proper time moment. a percentage value (re%) can be calculated from the ratio of the number of alarms that occurred at the proper time to the total number of alarms [30]. there are several methods that can be used to find edges in one dimension; in this research the canny edge detection method gave the best results, in which the first gaussian derivative is used to approximate the optimal finite-length filter [31].

figure 14. output match using test set.
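a minimal sketch of the re% metric is given below; the paper used canny's first-gaussian-derivative edge detector, whereas this sketch finds rising edges with a plain threshold-and-difference scheme, and the matching tolerance is an assumed parameter:

```python
import numpy as np

def rising_edges(signal, level=0.5):
    """indices where a (roughly binary) signal crosses the level upwards."""
    s = (signal > level).astype(int)
    return np.flatnonzero(np.diff(s) == 1)

def re_percent(y_req, y_est, tolerance=5):
    """share of required alarms matched by an estimated rising edge
    within +/- tolerance samples (tolerance is an assumption)."""
    req, est = rising_edges(y_req), rising_edges(y_est)
    if len(req) == 0:
        return 100.0
    hits = sum(any(abs(r - e) <= tolerance for e in est) for r in req)
    return 100.0 * hits / len(req)
```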
the results of both networks were compared using the relative error of the detected rising edges in the simulated output of the network and in the required alarm signal. the comparison of the single input single output (siso) networks is summarised in table 4. the table shows that the networks detected possible hydrate formation with more than 90 % efficiency in both cases. the best performance in the case of nnarx was provided by the smallest network, and in the case of nnoe the result was the same. it should be noted that the nnoe gave slightly better results than the nnarx.

table 4. the results of the siso nnarx and nnoe networks (mse and relative error of found rising edges, re%, for the training, validation and test datasets).
type of network | regressor | hidden neurons | training mse | training re% | validation mse | validation re% | test mse | test re%
nnarx | ni = 1; nro = 1 | 10 | 0.0083 | 96.2 | 0.0065 | 100.0 | 0.0146 | 90.0
nnarx | ni = 1; nro = 1 | 12 | 0.0081 | 96.0 | 0.0064 | 100.0 | 0.0146 | 90.0
nnarx | ni = 1; nro = 2 | 10 | 0.0209 | 73.1 | 0.0184 | 70.0 | 0.0227 | 70.0
nnarx | ni = 1; nro = 2 | 12 | 0.0202 | 73.1 | 0.0177 | 80.0 | 0.0203 | 90.0
nnarx | ni = 2; nro = 2 | 10 | 0.0223 | 73.1 | 0.0186 | 90.0 | 0.0259 | 70.0
nnarx | ni = 2; nro = 2 | 12 | 0.0388 | 69.2 | 0.0350 | 50.0 | 0.0316 | 60.0
nnoe | ni = 1; no = 1 | 10 | 0.0086 | 100.0 | 0.0062 | 100.0 | 0.0157 | 90.0
nnoe | ni = 1; no = 1 | 12 | 0.0058 | 96.2 | 0.0048 | 100.0 | 0.0127 | 90.0
nnoe | ni = 1; no = 2 | 10 | 0.0284 | 76.9 | 0.0253 | 60.0 | 0.0326 | 60.0
nnoe | ni = 1; no = 2 | 12 | 0.0265 | 65.4 | 0.0236 | 70.0 | 0.0278 | 50.0
nnoe | ni = 2; no = 2 | 10 | 0.0269 | 69.2 | 0.0252 | 70.0 | 0.0183 | 60.0
nnoe | ni = 2; no = 2 | 12 | 0.0347 | 53.8 | 0.0325 | 50.0 | 0.0321 | 30.0

the comparison of the multi input single output (miso) networks can be found in table 5. it shows that the network recognised the possible hydrate formation with more than 92 % efficiency.

table 5. the results of the miso nnarx networks (mse and relative error of found rising edges, re%, for the training, validation and test datasets).
type of network | regressor | hidden neurons | training mse | training re% | validation mse | validation re% | test mse | test re%
nnarx | ni = 1; nro = 1 | 10 | 0.0204 | 72.2 | 0.0179 | 80.0 | 0.0201 | 90.0
nnarx | ni = 1; nro = 1 | 12 | 0.0088 | 95.2 | 0.0131 | 90.0 | 0.0148 | 90.0
nnarx | ni = 1; nro = 2 | 10 | 0.0084 | 99.8 | 0.0072 | 100.0 | 0.0158 | 92.2
nnarx | ni = 1; nro = 2 | 12 | 0.0087 | 95.2 | 0.0074 | 100.0 | 0.0167 | 91.2
nnarx | ni = 2; nro = 2 | 10 | 0.0091 | 82.4 | 0.0141 | 91.4 | 0.0211 | 81.2
nnarx | ni = 2; nro = 2 | 12 | 0.0107 | 81.2 | 0.0202 | 90.7 | 0.0214 | 80.2

table 6 shows the most important parameters of the highlighted, best performing networks.

table 6. the algorithms' main parameters.
parameter | siso nnarx | siso nnoe | miso nnarx
structure | 1-10-1 | 1-10-1 | 4-10-1
hidden layers | 1 | 1 | 1
neurons | 10 | 10 | 10
input regressors | 1 | 1 | 1
output regressors | 1 | 1 | 2
datasets | 46 | 46 | 44
data points per dataset | ~115 | ~115 | ~100
iterations | 1000 | 1000 | 1000

5. conclusions

there is no publication so far in the scientific literature which gives a solution for hydrate formation prediction for industry exclusively from either the differential pressure or the inhibitor's quality and injected quantity. the most effective results of the two presented projects are highlighted in tables 4 and 5. for the single input single output neural network, the smallest network provided the highest reliability in edge detection, both in the case of nnarx and in the case of nnoe. it should be noted that the nnoe network performed better than the nnarx. in the case of the multi input single output neural network, a larger regressor was the best. the nnarx model has a predictor without real feedback. the nnoe model has feedback through the choice of regressors, which in neural network terminology means that the network becomes recurrent: future network inputs will depend on present and past network outputs. this might lead to instability in certain areas of the network's operating range, and it can be very difficult to determine whether or not the predictor is stable [32]. both nnarx networks performed well; the difference between the two results is not significant. however, in the first solution, nnoe proved more effective than nnarx. in light of the above, the way forward is to first use the nnoe network for the multi input single output system as well. although the nnoe clearly performed better in the first case, further studies are needed to assess which of the two methods is better.

acknowledgement

the described study was carried out as part of the efop-3.6.1-16-2016-00011 "younger and renewing university – innovative knowledge city – institutional development of the university of miskolc aiming at intelligent specialisation" project implemented in the framework of the szechenyi 2020 program. the realisation of this project is supported by the european union, co-financed by the european social fund.

references

[1] e. d. sloan, c. a. koh, clathrate hydrates of natural gases, 3rd ed., isbn 978-084-939-078-4 (2007), pp. 5-20.
[2] g. ersland, a. graue, natural gas hydrates, in: natural gas, isbn 978-953-307-112-1 (2010), pp. 147-162.
[3] e. g. hammerschmidt, formation of gas hydrates in natural gas transmission lines, ind. eng. chem., vol. 26, no. 8 (1934), pp. 851-855. doi: 10.1021/ie50296a010
[4] j. f. gabittol, c. tsouri, physical properties of gas hydrates: a review, advances in gas hydrate thermodynamics and transport properties, vol. 2010, art. id 271291 (2010), pp. 1-12. doi: 10.1155/2010/271291
[5] m. wu, s. wang, h. liu, a study on inhibitors for the prevention of hydrate formation in gas transmission pipeline, journal of natural gas chemistry, vol. 16, no. 1 (2007), pp. 81-85. doi: 10.1016/s1003-9953(07)60031-0
[6] n. daraboina, c. malmos, synergistic kinetic inhibition of natural gas hydrate formation, fuel, vol. 108 (2013), pp. 749-757. doi: 10.1016/j.fuel.2013.02.018
[7] s. q. gao, hydrate risk management at high watercuts with anti-agglomerant hydrate inhibitors, energy fuels, vol. 23 (2009), pp. 2118-2121. doi: 10.1021/ef8009876
[8] m. a. kelland, history of the development of low dosage hydrate inhibitors, energy fuels, vol. 20 (2006), pp. 825-847. doi: 10.1021/ef050427x
[9] m. norgaard, o. ravn, l. k. hansen, n. k. poulsen, the nnsysid toolbox – a matlab toolbox for system identification with neural networks, proceedings of the 1996 ieee international symposium on computer-aided control system design, dearborn, mi, 15-18 september 1996, pp. 374-379. doi: 10.1109/cacsd.1996.555321
[10] v. rankovic, j. radulović, n. grujović, d. divac, neural network model predictive control of nonlinear systems using genetic algorithms, int. j. comput. commun., vol. 7, no. 3 (2012), pp. 540-549.
[11] c. macpherson, p. glenat, s. mazloum, i. young, successful deployment of a novel hydrate inhibition monitoring system in a north sea gas field, 23rd international oil field chemistry symposium, geilo, norway, 18-21 march 2012, pp. 18-21.
[12] m. m. ghiasia, a. bahadorib, s. zendehboudic, a. jamilid, s. rezaei-gomari, novel methods predict equilibrium vapor methanol content during gas hydrate inhibition, journal of natural gas science and engineering, vol. 15, november 2013, pp. 69-75. doi: 10.1016/j.jngse.2013.09.006
[13] a. kamaria, a. bahadorib, a. h. mohammadiac, s. zendehboudi, new tools predict monoethylene glycol injection rate for natural gas hydrate inhibition, journal of loss prevention in the process industries, vol. 33, january 2015, pp. 222-231. doi: 10.1016/j.jlp.2014.12.013
[14] j. a. k. suykens, j. vandewalle, least squares support vector machine classifiers, neural processing letters, vol. 9, no. 3, june 1999, pp. 293-300. doi: 10.1023/a:1018628609742
[15] a. elgibaly, a. elkamel, optimal hydrate inhibition policies with the aid of neural networks, energy fuels, vol. 13, no. 1 (1999), pp. 105-113. doi: 10.1021/ef980129i
[16] m. mesbah, e. soroush, m. rezakazemi, development of a least squares support vector machine model for prediction of natural gas hydrate formation temperature, chinese journal of chemical engineering, vol. 25, no. 9, september 2017, pp. 1238-1248. doi: 10.1016/j.cjche.2016.09.007
[17] s. h. yousefi, e. shamohammadi, e. khamehchi, predicting the hydrate formation temperature by a new correlation and neural network, gas processing journal, vol. 1, no. 1, january 2013, pp. 41-50. doi: 10.22108/gpj.2013.20158
[18] g. zahedi, z. karami, h. yaghoobi, prediction of hydrate formation temperature by both statistical models and artificial neural network approaches, energy conversion and management, vol. 50, no. 8, august 2009, pp. 2052-2059. doi: 10.1016/j.enconman.2009.04.005
[19] j. s. amin, s. k. b. nejad, m. veiskarami, a. bahadori, prediction of hydrate formation temperature based on an improved empirical correlation by imperialist competitive algorithm, petroleum science and technology, vol. 34, no. 2 (2016), pp. 162-169. doi: 10.1080/10916466.2015.1118501
[20] h. saghafi, h. yarveicy, gas hydrate stability conditions: modeling on the basis of gas gravity approach, petroleum science and technology, vol. 37, no. 17 (2019), pp. 1938-1945. doi: 10.1080/10916466.2018.1463261
[21] m. n. b. m. rodzep, application of artificial neural network in prediction of methane gas hydrate formation rate, universiti teknologi petronas, doctoral thesis (2015), pp. 7-24.
[22] s. m. hesami, m. dehghani, z. kamali, a. e. bakyani, developing a simple-to-use predictive model for prediction of hydrate formation temperature, international journal of ambient energy, vol. 38, no. 4 (2017), pp. 380-388. doi: 10.1080/01430750.2015.1100678
[23] a. kolchin, analyze of hydrate formation with the use of neural network technology, spe annual technical conference and exhibition, new orleans, louisiana, usa, 30 september - 2 october 2013, pp. 5342-5353. doi: 10.2118/167618-stu
[24] s. m. vajari, development of hydrate inhibition monitoring and initial formation detection techniques, heriot-watt university, institute of petroleum engineering, doctoral thesis (2012), pp. 159-239. online [accessed 2 september 2021] http://www.ros.hw.ac.uk/bitstream/handle/10399/2539/vajarism_0312_pe.pdf?sequence=1&isallowed=y
[25] i. bölkény, k. jónap, cs. vörös, hydrate inhibition technologies, results and future possibilities based on measurements and projects of the last 15 years, technical publications of earth sciences, vol. 85 (2015), pp. 30-40.
[26] g. mustafaraj, j. chen, g. lowry, thermal behaviour prediction utilizing artificial neural networks for an open office, applied mathematical modelling, vol. 34 (2010), pp. 3216-3230. doi: 10.1016/j.apm.2010.02.014
[27] w. xian, z. qiancheng, y. xuebing, z. bing zeng, condition monitoring of wind turbines based on analysis of temperature-related parameters in supervisory control and data acquisition data, measurement and control, sage publications, london, december 2019, pp. 1-17. doi: 10.1177/0020294019888239
[28] m. altrichter, g. horváth, b. pataki, gy. strausz, g. takács, j. valyon, neural networks, hungarian edition, panem könyvkiadó kft., budapest, isbn 9-635454-64-3 (2006), pp. 355-375.
[29] k. a. galih, h. kenji, m. tetsuo, modeling the dynamic response of plant growth to root zone temperature in hydroponic chili pepper plant using neural networks, agriculture 2020, 10, 234, 17 june 2020, pp. 1-14. online [accessed 2 september 2021] https://www.mdpi.com/2077-0472/10/6/234/pdf
[30] v. füvesi, e. kovács, separation of faults of electromechanical drive chain using artificial intelligence methods, 18th building services, mechanical and building industry days international conference, debrecen, hungary (2012), pp. 19-27.
[31] j. canny, a computational approach to edge detection, ieee transactions on pattern analysis and machine intelligence, vol. 8, no. 6, nov. 1986, pp. 679-698.
[32] m. rajalakshmi, s. jeyadevi, c. karthik, recurrent neural network identification: comparative study on nonlinear process, international journal of innovative research in science, engineering and technology, vol. 3, special issue 3, march 2014, pp. 156-161.

development and characterisation of a self-powered measurement buoy prototype by means of piezoelectric energy harvester for monitoring activities in a marine environment

acta imeko
issn: 2221-870x
december 2021, volume 10, number 4, 201 - 208

damiano alizzio1, antonino quattrocchi1, roberto montanini1

1 department of engineering, university of messina, c.da di dio, vill. s. agata, 98166, messina, italy

section: research paper
keywords: piezoelectric patch; ripples waves; uncertainty estimation; motion frequency transformer
citation: damiano alizzio, antonino quattrocchi, roberto montanini, development and characterization of a self-powered measurement buoy prototype by means of piezoelectric energy harvester for monitoring activities in a marine environment, acta imeko, vol. 10, no. 4, article 31, december 2021, identifier: imeko-acta-10 (2021)-04-31
section editor: roberto montanini, università di messina and alfredo cigada, politecnico di milano, italy
received september 10, 2021; in final form december 11, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: damiano alizzio, e-mail: damiano.alizzio@unime.it

abstract: in the interest of our society, for example in smart cities but also in other specific contexts, environmental monitoring is an essential activity to measure the quality of different ecosystems. in fact, the need to obtain accurate and extended measurements in space and time has become considerably relevant. in very large environments, such as marine ones, technological solutions are required for the use of smart, automatic and self-powered devices in order to reduce human maintenance service. this work presents a simple and innovative layout for a small self-powered floating buoy, with the aim of measuring and transmitting the detected data for visualisation, storage and/or elaboration. the power supply was obtained using a cantilever harvester, based on piezoelectric patches, converting the motion of ripple waves. such waves are characterised by frequencies between 1.50 hz and 2.50 hz, with oscillations between 5.0° and 7.0°. specifically, a dedicated experimental setup was created to simulate the motion of ripple waves and to evaluate the suitability of the proposed design and the performance of the used harvester. furthermore, a dynamic analytical model for the harvester has been defined, and the uncertainty correlated to the harvested power has been evaluated. finally, the harvested voltage and power have shown that the presented buoy behaves like a frequency transformer: although the used cantilever harvester does not work at its resonant frequency, the harvested electricity undergoes a significant increase.
1. introduction

the purposes of monitoring procedures for the marine environment are multiple and useful, both for measuring the quality of the specific ecosystem and for estimating the global anthropogenic impact, also in the field of smart cities [1], [2]. the most widespread activities range from the detection of pollutants to meteorological or climatic measurements [3]. however, other aims concern the use of marine sensors for safety evaluation in coastal areas and protection from seismic events or biological hazards [4]. the scientific literature already presents technological solutions in the field of floating measurement devices with specialised sensors and data communication systems for different applications [5]. measurement buoys provide a reading service for chemical and/or physical quantities through sensors and low-energy electronics and guarantee stable communication to the ground through suitable data transfer channels (wi-fi, bluetooth, gsm) [6], [7]. although only a modest power input is foreseen for supplying the on-board instrumentation, the need to make these devices energy self-sufficient remains open. for this purpose, technologies are employed to ensure the buoys' independence from human presence and a constant source of electricity [8], [9].

traditionally, measurement buoys are powered by photovoltaic systems and by wind turbines. alippi et al. [10], albaladejo et al. [11] and hormann et al. [12] employed different-size photovoltaic cells to feed sensor buoy systems for sea trials. in these studies, the main limitations were found in the dependence of the electricity production on the sun, which is lacking in bad weather conditions and absent during the night, and in the need to equip the measurement buoys with bulky and heavy systems and batteries. for wind turbines, instead, the meteorological conditions are not optimal on the sea surface.
in fact, the wind speed at 1 m above the sea level is a third of that at 80 m, which results in a 73 % reduction in wind power [13]. recently, new energy sources, such as tidal currents and sea or ocean waves, have been investigated. trevathan et al. [14] proposed tidal currents to supply sensors for marine measurements and compared the performances of different types of turbines. however, they concluded that such devices may not be cost-effective and are risk-prone due to biofouling and entanglement from drifting algae and sea grass wrack. sea or ocean waves represent an attractive renewable source, indirectly related to both the sun and the wind, and with a high energy density [15]. pelc et al. [16] stated that wave energy is vast and more reliable than most renewable sources. furthermore, they highlighted that such energy at a given site is available up to 90 % of the time, while photovoltaic and wind energy tend to be available only 20-30 % of the time.

the adoption of an energy harvester based on piezoelectric transducers (peh) is an advantageous solution to convert wave motion into electric energy. pehs are a well-known technique in the literature. for example, bolzea et al. [17] used such devices in a cantilever configuration, demonstrating that the maximum power output is generated when mechanical resonance is reached. toyabur et al. [18] designed a multimode peh, consisting of four elements connected in parallel and in a cantilever configuration, to achieve different low-frequency resonance modes (10-20 hz). the authors showed that their system generates about four times more power than a single peh. pradeesh et al. [19] analysed, both experimentally and numerically, the effect of a proof mass fixed to a peh in a cantilever configuration. the researchers obtained the best results when the proof mass was glued close to the clamped end. these works highlight the importance of a correct design for a cantilever-type peh, but they do not discuss the effect of the proof mass (i.e. the variation of the resonant frequency of the peh) on the harvested power. recently, montanini et al. [20] developed a peh using a glass-fibre-reinforced beam support. the authors studied the correlation between the mechanical working frequency and the harvested electrical power, analysing the deflection shape of the peh under operating conditions by means of a scanning laser-doppler vibrometer. after that, they investigated the conversion efficiency of this peh [21] and the applicability of a low-power single-stage converter, able to automatically follow the changes in the resistive component of the output impedance, in order to maximise the energy yield [22], [23].

in the field of power supply for measurement buoys, pehs have been applied only sporadically due to the low frequency of sea and ocean waves. wu et al. [24] developed a peh fixed to a floating buoy that was anchored to the ocean floor. this device consisted of several cantilevers, on which many piezoelectric patches (pcs) were attached. the authors analysed the size effect of the float and derived a numerical model to calculate the harvested energy. the research findings show that up to 24 w of electric power can be generated with piezoelectric cantilevers 1 m long and a buoy 20 m long. nabavi et al. [25] proposed the design of a beam-to-column piezoelectric system, able to power a large floating and instrumented ocean buoy.
they derived and experimentally verified the equations of the electromechanical behaviour of the device, demonstrating that the height amplitude and the low frequency of the wave guarantee the best performance. additionally, using a baffled water tank, they developed a self-tuning buoy, which works based on the frequency of ocean waves. recently, alizzio et al. [26], [27] proposed an instrumented spar and fixed-point buoy, equipped with a peh, to convert the energy of the wave motion into electricity through pcs glued on deformable and floating bands. the buoy was designed, numerically simulated and experimentally verified, obtaining a light structure able to self-power its on-board sensors and carry out data transmission.

in this paper, the performance of a simple and innovative layout for a small measurement buoy, supplied by a cantilever peh, has been studied. the structure has been designed to convert the motion of the ripple waves into a cyclic oscillation with harmonics of higher frequency, to which the peh is subjected thanks to a suitable proof mass. in this context, a dedicated experimental setup has been implemented to simulate the motion of the ripple waves and to evaluate its effect on the electric response of the peh. such motion has been discretised into amplitude and frequency configurations, characterising the dynamics of the proposed buoy with an analytical model of the peh. finally, the harvested power has been estimated and the related uncertainty has been evaluated.

2. materials and methods

2.1. prototype of the measurement floating buoy

the presented buoy prototype (figure 1) was made of a floating structure, manufactured with a 3d printer using a highly durable photo-polymeric resin. it was divided into two pieces: the bottom one was designed with a hemispherical shape for an appropriate matching with the sea waves, while the upper one had a cylindrical shape, where the peh was set by a fixed joint. the two parts were connected by means of a thread, to hermetically contain a dedicated measurement instrumentation for monitoring the marine environment with a data transmission system. this peh allowed the conversion of the alternative rotational (rolling) motion of the buoy, while subject to ripple waves, into electrical energy, in order to provide adequate power to supply all the electronic devices within the buoy.

figure 1. concept design of the measurement floating buoy prototype. 1. = hemispherical part, 2. = cylindrical part, 3. = pc and 4. = proof mass.
experimental setup and procedure the experimental setup (figure 2a) included the buoy with its peh and an electrodynamic shaker (mod. s 513, tira), driven by a power amplifier (mod. baa 120, tira) and a function generator (mod. 33220 a, agilent). in order to simulate the motion of the ripple waves in amplitude and frequency, a conversion system (figure 2b) of the linear motion of the shaker was implemented. it consisted of a frame with two flanges, able to rotate the buoy around a fixed horizontal axis by means of two bearings. the imposed rolling motion was applied by connecting the stinger of the shaker to the bottom of the buoy (i. e. the vertex of the hemisphere) using a ball joint. the geometry of the conversion system is reported in table 1. the imposed rolling motion to the buoy was monitored by a rotational transducer (mod. 0600-0000, trans-tek), set in the fixed horizontal axis of the two flanges. an oscilloscope (mod. tds 5054b, tektronix) was employed to measure the previous oscillation signal and the voltage response of the peh on a resistive load of 100 kω. specifically, this resistive load was chosen in accordance with the maximum power transfer theorem, knowing the internal impedance of the pc [20]. the behaviour of the buoy was studied by varying the working frequency, the amplitude of the imposed rolling motion and the proof mass glued at the end of the cantilever support. the frequencies and amplitudes of the ripple waves were appropriately following the results of [27]. therefore, the sinusoidal functions of the imposed rolling motion to the buoy were chosen with the characteristics shown in table 2. the acquisition frequency of the signals was set at 2 khz and each test was repeated 5 times for every combination of working frequency fw and angular displacement θ. 2.4. simplified model of the mechanical behaviour of the peh the mechanical behavior of the analyzed peh (i.e. a clamped-free beam with a proof mass) can be explained by a single degree of freedom (sdof) model (figure 3), according to [30]-[32]. for a peh without a proof mass, the equation of motion of euler-bernoulli beam for undamped free vibrations can be considered: 𝐸𝐼 𝜕4𝑤(𝑥,𝑡) 𝜕𝑥4 + 𝑚 𝜕2𝑤(𝑥,𝑡) 𝜕𝑡 2 = 0, (1) where m, e and i indicate the mass, the young’s modulus and the inertia momentum of the peh, respectively. 𝑤(𝑥, 𝑡) is the absolute motion of the peh along its axis expressed as: 𝑤(𝑥, 𝑡) = 𝑤𝑟𝑒𝑙 (𝑥, 𝑡) + 𝑤𝑏 (𝑥, 𝑡), (2) table 1. geometric characteristics of the conversion system for the linear motion of the shaker in the imposed rolling motion to the buoy. geometric parameters r1 in mm r2 in mm l in mm d in mm 70 70 100 90 a) b) figure 2. a) image of the experimental setup and b) schema of the conversion system for the linear motion of the shaker in the imposed rolling motion to the buoy. 1. = stinger of the shaker, 2. = frame, 3. = body of the buoy, 4. = peh, 5. = proof mass, r1 = height of the hemispherical part of the buoy, r2 = height of the cylindrical part of the buoy, l = arm of the peh, d = diameter of the buoy and θ = angular displacement. table 2. characteristics of the imposed rolling motion to the buoy. case 1 case 2 case 3 case 4 case 5 working frequency fw in hz 1.50 1.75 2.00 2.25 2.50 angular displacement θ in ° 5.0 5.3 5.9 6.6 7.0 acta imeko | www.imeko.org december 2021 | volume 10 | number 4 | 204 where 𝑤𝑟𝑒𝑙 (𝑥, 𝑡) is the displacement, relative to the clamped end, and 𝑤𝑏 (𝑥, 𝑡) is the absolute displacement of the buoy. 
by introducing a proof mass and according to equation (2), equation (1) takes the form reported as follows: 𝐸𝐼 𝜕4 𝜕𝑥 4 𝑤𝑟𝑒𝑙 (𝑥, 𝑡) + 𝑚 𝜕2 𝜕𝑡 2 𝑤𝑟𝑒𝑙 (𝑥, 𝑡) = −[𝑚 + (𝑥 − 𝑙)𝑀𝑡 ] 𝜕2 𝜕𝑡 2 𝑤𝑏 (𝑥, 𝑡) , (3) where mt denotes the proof mass and produces a new contribution. 𝑤𝑏 (𝑥, 𝑡) consists of the composition of an orthogonal translation 𝑔(𝑡) and a rotation ℎ(𝑡) of the peh clamped end: 𝑤𝑏 (𝑥, 𝑡) = 𝛿1(𝑥)𝑔(𝑡) + 𝛿2(𝑥)ℎ(𝑡) . (4) in our case (i.e. clamped-free beam and figure 2b), 𝛿1(𝑥) = 1, 𝛿2(𝑥) = 𝑟2 + 𝑥, 𝑔(𝑡) = 𝑟2 sin 𝜃(𝑡) and ℎ(𝑡) = 𝜃(𝑡), and equation (5) can be reported as: 𝑤𝑏 (𝑥, 𝑡) = 𝑟2 sin 𝜃(𝑡) + (𝑟2 + 𝑥)𝜃(𝑡) . (5) 2.5. estimation of the harvested power by the peh the harvested power by the peh was evaluated according to the model proposed by shu et al. [33], using equation (6): 𝑃 = π 𝜔 𝑉 2 8 𝑅 , (6) where 𝑉 and 𝜔 are respectively the amplitude and the pulsation of the harvested voltage and 𝑅 is the resistive load wired to the harvester. considering the complex trend of the harvested voltage 𝑉𝑃𝐸𝐻 of the peh, 𝑉 was estimated as a specific average voltage, indicated with �̅�. it is the accumulated voltage inside the excitation period by the integral average of the rectified signal in equation (7): �̅� = 1 𝑇 ∫|𝑉𝑃𝐸𝐻 (𝑡)| 𝑑𝑡 𝑇 0 , (7) where t = 1 / fw is the excitation period and fw is the working frequency of the imposed rolling motion of the buoy. for discrete signals, the equation (7) becomes: �̅� = 1 𝑇 ∑ |𝑉𝑃𝐸𝐻 (𝑡)| 𝑇 𝑡=0 . (8) hence, equation (6) takes the form of equation (9): �̅� = 1 16𝑅𝑇 (∑ |𝑉𝑃𝐸𝐻 (𝑡)| 𝑇 𝑡=0 ) 2 = = 𝑓𝑤 16𝑅 ( ∑ |𝑉𝑃𝐸𝐻 (𝑡)| 1/𝑓𝑤 𝑡=0 ) 2 , (9) where �̅� is the specific power, harvested by the peh. it is here mentioned as specific power, hence it is a complex function of the working frequency due to its dependence on �̅�. 3. results 3.1. effects of the mechanical frequency figure 4 shows the typical signals concerning the angular displacement θ of the buoy, measured on the fixed horizontal axis, and the harvested voltage vpeh by the peh, following the imposed rolling motion. in this application the mechanical operating conditions of the peh are quite different from those reported in the literature [24]-[27]. indeed, the piezoelectric component is mechanically stressed by a non-inertial force field in which the motion is alternative. the working frequencies fw of the presented peh are significantly lower than the typical ones of this devices, while the amplitude of the angular displacement θ does not allow the hypothesis of small oscillations. in figure 4, although a certain period can be identifiable, the two acquired signals do not have the same dynamics. the angular displacement θ has a sufficiently sinusoidal behavior, according to the imposed rolling motion, while the harvested voltage vpeh is oscillating with a variable amplitude. figure 5 reports the magnitude of the discrete fourier transform (dft) of the signals of figure 4, computed using a resolution of 0.10 hz. the imposed rolling motion is characterized by a single frequency at 2.00 hz, vice versa the harvested voltage vpeh has some harmonic components with higher order and with maximum amplitude at 10.00 hz. the identified phenomenon occurs for all the conducted tests and denotes a multimodal enhancement of the peh. it must be pointed out that, although the employed shaker ensures a linear trend in the frequency range from 2 hz to 7000 hz, its operation at the lower frequency limit remains optimal. 
3. results

3.1. effects of the mechanical frequency

figure 4 shows the typical signals of the angular displacement θ of the buoy, measured on the fixed horizontal axis, and of the voltage vpeh harvested by the peh, following the imposed rolling motion. in this application the mechanical operating conditions of the peh are quite different from those reported in the literature [24]-[27]. indeed, the piezoelectric component is mechanically stressed by a non-inertial force field in which the motion is alternating. the working frequencies fw of the presented peh are significantly lower than those typical of these devices, while the amplitude of the angular displacement θ does not allow the hypothesis of small oscillations. in figure 4, although a certain periodicity is identifiable, the two acquired signals do not have the same dynamics: the angular displacement θ has a substantially sinusoidal behaviour, according to the imposed rolling motion, while the harvested voltage vpeh oscillates with a variable amplitude.

figure 5 reports the magnitude of the discrete fourier transform (dft) of the signals of figure 4, computed using a resolution of 0.10 hz. the imposed rolling motion is characterized by a single frequency at 2.00 hz; vice versa, the harvested voltage vpeh has some higher-order harmonic components, with maximum amplitude at 10.00 hz. the identified phenomenon occurs for all the conducted tests and denotes a multimodal enhancement of the peh. it must be pointed out that, although the employed shaker ensures a linear trend in the frequency range from 2 hz to 7000 hz, its operation at the lower frequency limit remains adequate: in fact, there are no distortions, since the amplitudes of the dft of the angular displacement θ, reported in figure 5, do not show significant higher-order harmonics.

figure 6 illustrates the comparison between the magnitudes of the dfts of the harvested voltage vpeh obtained by varying the proof mass glued to the free end of the peh, following an angular displacement θ of the buoy of 6.6° amplitude at a working frequency of 2.00 hz. a different applied proof mass (i.e. a different resonant frequency of the peh [20], [21]) does not cause a change in the frequencies of the dft, but it acts on the amplitude of vpeh: in fact, a greater proof mass induces a consequent increase in the dft magnitude and a modification of the modal eigenvalues associated with the motion. moreover, in these cases, the accelerations due to the proof masses assumed values between 0.77 m/s² and 2.99 m/s², calculated considering the parameters from table 1 and table 2.

figure 7 presents the effect of the frequency of the imposed rolling motion on the harvested voltage vpeh, with an angular displacement θ of 6.6° and a proof mass of 24.1 g. the frequency variation of the imposed rolling motion involves a shift of the frequency components, but not an alteration of the magnitude of the harvested voltage. specifically, the higher-order components are characterized by a greater frequency difference than the lower-order ones.

figure 8 compares the frequency amplification factors of the buoy-peh system as the angular displacement θ and the working frequency fw of the imposed alternating rotational motion vary, using a proof mass of 24.1 g. the amplification factors were obtained as the ratio between the frequency of the principal component of the harvested voltage and the working frequency fw of the imposed rolling motion. it was found that these factors are not very sensitive to either the frequency or the amplitude of the imposed rolling motion.

figure 4. typical signal of the angular displacement θ of the buoy (top) and of the harvested voltage vpeh of the peh (bottom) at 2.00 hz.
figure 5. magnitude of the dft of the angular displacement θ (top) and of the harvested voltage vpeh (bottom) with a working frequency fw of 2.00 hz.
figure 6. comparison between the magnitudes of the dfts of the harvested voltage vpeh by varying the proof mass, at an angular displacement θ of the buoy of 6.6° and a working frequency fw of 2.00 hz.
figure 7. comparison between the magnitudes of the dfts of the harvested voltage vpeh by varying the imposed rolling motion, at an angular displacement θ of the buoy of 6.6° and using a proof mass of 24.1 g.
figure 8. comparison of the frequency magnification factors at different angular displacements θ of the imposed rolling motion, using a proof mass of 24.1 g.
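the dft magnitudes of figures 5-7 can be reproduced with a sketch like the one below; the 0.10 hz resolution is obtained by evaluating the transform on n = fs/Δf points (zero-padding the record if it is shorter), under the assumption of the 2 khz sampling rate stated above.

```python
import numpy as np

def dft_magnitude(signal, fs=2_000.0, df=0.10):
    """one-sided dft magnitude with frequency resolution df (0.10 hz -> n = 20,000 points)."""
    n = int(round(fs / df))
    spectrum = np.fft.rfft(signal, n=n) / len(signal)  # zero-padded if len(signal) < n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, 2.0 * np.abs(spectrum)               # single-sided amplitude scaling

# the fundamental of the imposed motion should appear at 2.00 hz, while v_peh
# shows higher-order components with the maximum amplitude at 10.00 hz:
# freqs, mag = dft_magnitude(v_peh_record)
```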
3.2. power estimation of the peh

figure 9 shows the area that collects the values of the specific average voltage v̄, calculated according to equation (8), as the working frequency fw varies. this area was obtained by changing the angular displacement θ of the buoy and the proof masses glued to the peh. the results are in agreement with the literature [17]-[23]: in fact, it can be noted that as the proof mass increases (i.e. as the resonant frequency of the peh decreases), a greater oscillation of the peh is obtained, with a consequent increment of the harvested voltage. a similar observation can be made by considering the increase of the working frequency fw of the imposed rolling motion with the same proof mass.

figure 10 shows the trend of the specific power p̄, harvested by the peh and calculated according to equation (9), as the working frequency fw and the amplitude of the angular displacement θ vary, using a fixed proof mass of 24.1 g. a relative maximum is obtained at the working frequency of 2.25 hz, and its amplitude rises as the angular displacement θ of the buoy increases. this result also conforms to the data already present in the literature [17]-[23]. in the optimal conditions of a working frequency of 2.25 hz, an angular displacement of 7.0° and a proof mass of 24.1 g, the harvested energy in 1 h for a specific power of 0.08 mw is 274.6 mJ.

figure 11 reports a comparison of the specific power p̄ made with different proof masses at the same angular displacement θ. consistently with what is found in figure 6, a different proof mass influences the harvested voltage vpeh and, consequently, the specific power p̄, the latter being proportional to the square of the voltage, as visible in equation (9). as an example, figure 12 exhibits the specific power p̄ evaluated for the acquisitions with a proof mass equal to 28.8 g: p̄ assumes two relative maxima corresponding to the working frequencies fw equal to 2 hz and 2.5 hz. generally, harvesters using piezoelectric patches in cantilever configuration produce a power peak at their resonant frequencies. it has already been observed that the first mode of vibration, usually at low frequencies, is responsible for the greater energetic contribution, as it provides the components with the highest rate of deformation compared to the other modes of vibration [16]-[20].

figure 9. specific average voltages v̄ of the peh obtained with respect to the working frequency fw.
figure 10. specific power p̄, harvested by the peh, as a function of the working frequency fw and the angular displacement θ, with a proof mass of 24.1 g.
figure 11. specific power p̄, harvested by the peh, as a function of the working frequency fw and the proof mass, at different angular displacements θ.
figure 12. specific power p̄, harvested by the peh, for angular displacement θ and working frequency fw with a proof mass of 28.8 g.

3.3. uncertainty evaluation of the harvested power by the peh

the uncertainty of the specific power p̄ of the peh is estimated by analysing the relative weight of each of the quantities in equation (9). the estimation of the combined uncertainty was based on the iso/iec guide 98-3:2008 [34], by applying the following propagation law:

$u(\bar{P}) = \sqrt{ \sum_i u^2(x_i) \left( \frac{\partial \bar{P}}{\partial x_i} \right)^2 }$ ,  (10)

which, in the present case, is explicated as:

$u(\bar{P}) = \left[ u^2(f_w) \left( \frac{\partial \bar{P}}{\partial f_w} \right)^2 + u^2(R) \left( \frac{\partial \bar{P}}{\partial R} \right)^2 + u^2(\bar{V}) \left( \frac{\partial \bar{P}}{\partial \bar{V}} \right)^2 \right]^{1/2}$ .  (11)

the expanded uncertainty uc(p̄) was then estimated by assuming a coverage factor k equal to 2.57, based on a t distribution with five degrees of freedom at a confidence level of 95 %. the detailed computation is shown in table 3; similar results were obtained for other acquisitions at different motion parameters. looking at the relative weight of the different quantities affecting the uncertainty, it can be highlighted that the main contribution derives from the specific average voltage v̄, whereas the other quantities have a much lower influence.

table 3. uncertainty evaluation at the working frequency fw equal to 2 hz, with an angular displacement θ of 6.6° and a proof mass of 24.1 g.
parameter   value    uncertainty type   u(xi)      u² ∙ (∂p̄/∂xi)²   u(p̄)      uc(p̄) (k = 2.57)
v̄ / v      0.3193   a                  2.85e-2    4.80e-5           9.49e-5    2.44e-4
fw / hz     2.00     a                  5.57e-14   4.34e-9
r / ω       1e+6     b                  1.50e+3    8.88e-21
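the propagation law of equations (10)-(11) can be sketched with numerical partial derivatives, as below; the model function and the input values are placeholders, to be swapped for the actual p̄ model and the estimates of table 3.

```python
import numpy as np

def p_bar(fw, r, v_bar):
    """placeholder specific-power model (eq. 6 with v = v_bar); replace with the model actually used."""
    return np.pi * (2.0 * np.pi * fw) * v_bar**2 / (8.0 * r)

def combined_uncertainty(f, x, u_x, rel_step=1e-6):
    """u(p_bar) per eq. (10), with central-difference partial derivatives."""
    x = np.asarray(x, dtype=float)
    total = 0.0
    for i, u_i in enumerate(u_x):
        h = rel_step * max(abs(x[i]), 1.0)
        x_hi, x_lo = x.copy(), x.copy()
        x_hi[i] += h
        x_lo[i] -= h
        dfdx = (f(*x_hi) - f(*x_lo)) / (2.0 * h)   # numerical sensitivity
        total += (u_i * dfdx) ** 2
    return float(np.sqrt(total))

# illustrative call with the estimates of table 3, (fw, r, v_bar) and their u(xi):
u_p = combined_uncertainty(p_bar, x=[2.00, 1e6, 0.3193], u_x=[0.0, 1.50e3, 2.85e-2])
u_c = 2.57 * u_p   # expanded uncertainty, k = 2.57 (t distribution, five dof)
```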
4. conclusions

for a measurement buoy, the power consumption mainly depends on the components that deal with data communication. in fact, data collection does not take place continuously over time; therefore, these devices do not need to be constantly active and can often be put into sleep mode and woken up at fixed time intervals or in response to some external event. typically, the power consumption for such routines is on the order of a few mw and can be ensured by rechargeable batteries and appropriate power electronics downstream of the harvesters [35].

in this work, a simple and innovative layout for a small self-powered floating buoy, employed for environmental monitoring activities, has been presented. this device makes use of a peh, consisting of a pc set in a cantilever configuration and excited by the rolling motion of the ripple waves, as supply source. the voltage and power response of the peh has shown a particularly advantageous behaviour, because the imposed motion has excited the pc with a multi-frequency combination of vibration modes. a multiplication of the oscillation frequencies of the peh has been found as the working frequency varies. in fact, although the frequencies of the harvested voltage and power do not match the resonant frequency of the used peh, so that the best deformation of the pc is not obtained [16]-[21], the calculated specific power has relative maxima with respect to the input parameters (i.e. the frequency and amplitude of the ripple waves). in this setup configuration, in the optimal conditions of a working frequency of 2.25 hz, an angular displacement of 7.0° and a proof mass of 24.1 g, the peh reached a harvested energy in 1 h of 274.6 mJ for a specific harvested power equal to 0.08 mw. in this way, the described floating buoy assumes the configuration of a frequency transformer.
for these reasons, the analysed layout allows a considerable increase of the power generated by a single peh, compared to that obtainable in typical conditions [26], [27], especially if such harvesters are coupled to a suitable impedance matching circuit [22], [23]. future work will aim to identify the optimal position of the pc on the beam for a more efficient conversion of the mechanical energy provided by the ripple waves, to evaluate the effect of the noise superimposed on the main motion of the buoy, to define the scalability of the buoy after an appropriate fluid-dynamic sizing and, finally, to estimate the buoy performance in real conditions.

references

[1] t. p. bean, n. greenwood, r. beckett, l. biermann, j. p. bignell, j. l. brant (+28 authors), a review of the tools used for marine monitoring in the uk: combining historic and contemporary methods with modeling and socioeconomics to fulfill legislative needs and scientific ambitions, frontiers in marine science 4 (2017) n. 263, pp. 1-29. doi: 10.3389/fmars.2017.00263
[2] h. kim, l. mokdad, j. ben-othman, designing uav surveillance frameworks for smart city and extensive ocean with differential perspectives, ieee communications magazine 56 (2018) pp. 98-104. doi: 10.1109/mcom.2018.1700444
[3] l. g. a. barboza, a. cózar, b. c. gimenez, t. l. barros, p. j. kershaw, l. guilhermino, macroplastics pollution in the marine environment, world seas: an environmental evaluation, academic press (2019) pp. 305-328. doi: 10.1016/b978-0-12-805052-1.00019-x
[4] a. l. sobisevich, d. a. presnov, m. v. agafonov, l. e. sobisevich, new-generation autonomous geohydroacoustic ice buoy, seismic instruments 54 (2018) pp. 677-681. doi: 10.3103/s0747923918060117
[5] s. savoca, g. capillo, m. mancuso, c. faggio, g. panarello, r. crupi, m. bonsignore, l. d'urso, g. compagnini, f. neri, e. fazio, t. romeo, t. bottari, n. spanò, detection of artificial cellulose microfibers in boops boops from the northern coasts of sicily (central mediterranean), science of the total environment 691 (2019) pp. 455-465. doi: 10.1016/j.scitotenv.2019.07.148
[6] z. chenbing, w. xinpeng, l. xiyao, z. suoping, w. haitao, a small buoy for flux measurement in air-sea boundary layer, proc. of the 13th ieee international conference on electronic measurement & instruments, icemi 2017, 20-22 october 2017, yangzhou, china. doi: 10.1109/icemi.2017.8265999
[7] x. roset, e. trullos, c. artero-delgado, j. prat, j. del rio, i. massana, m. carbonell, g. barco de la torre, d. mihai toma, real-time seismic data from the bottom sea, sensors 18 (2018) n. 1132. doi: 10.3390/s18041132
[8] l. m. tender, s. a. gray, e. groveman, d. a. lowy, p. kauffman, j. melhado, j. dobarro, the first demonstration of a microbial fuel cell as a viable power supply: powering a meteorological buoy, journal of power sources 179 (2008) pp. 571-575. doi: 10.1016/j.jpowsour.2007.12.123
[9] j. chen, y. li, x. zhang, y. ma, simulation and design of solar power system for ocean buoy, journal of physics: conference series, iop publishing, 1061 (2018) p. 012018. doi: 10.1088/1742-6596/1061/1/012018
[10] c. alippi, r. camplani, c. galperti, m. roveri, a robust, adaptive, solar-powered wsn framework for aquatic environmental monitoring, ieee sensors journal 11 (2011) pp. 45-55. doi: 10.1109/jsen.2010.2051539
[11] c. albaladejo, f. soto, r. torres, p. sanchez, juan a. lopez, a low-cost sensor buoy system for monitoring shallow marine environments, sensors 12 (2012) pp. 9613-9634. doi: 10.3390/s120709613
[12] l. b. hormann, p. m. glatz, c. steger, r. weiss, a wireless sensor node for river monitoring using msp430® and energy harvesting, proc. of the 4th education and research conference, ederc 2010, 1-2 december 2010, nice, france, pp. 140-144.
[13] j. f. manwell, j. g. mcgowan, a. l. rogers, wind energy explained: theory, design and application, john wiley & sons, 2010.
[14] j. trevathan, r. johnstone, t. chiffings, i. atkinson, n. bergmann, w. read, s. theiss, t. myers, t. stevens, semat - the next generation of inexpensive marine environmental monitoring and measurement systems, sensors 12 (2012) pp. 9711-9748. doi: 10.3390/s120709711
[15] j. falnes, a review of wave-energy extraction, marine structures 20 (2007) pp. 185-201. doi: 10.1016/j.marstruc.2007.09.001
[16] r. pelc, r. m. fujita, renewable energy from the ocean, marine policy 26 (2002) pp. 471-479. doi: 10.1016/s0308-597x(02)00045-3
[17] c. borzea, d. comeagă, a. stoicescu, c. nechifor, piezoelectric harvester performance analysis for vibrations harnessing, upb scientific bulletin, series c - electrical engineering and computer science 81 (2019) pp. 237-248.
[18] r. m. toyabur, m. salauddin, j. y. park, design and experiment of piezoelectric multimodal energy harvester for low frequency vibration, ceram. int. 43 (2017) pp. 675-681. doi: 10.1016/j.ceramint.2017.05.257
[19] e. l. pradeesh, s. udhayakumar, effect of placement of piezoelectric material and proof mass on the performance of piezoelectric energy harvester, mech. syst. signal process. 130 (2019) pp. 664-676. doi: 10.1016/j.ymssp.2019.05.044
[20] r. montanini, a. quattrocchi, experimental characterization of cantilever-type piezoelectric generator operating at resonance for vibration energy harvesting, aip conf. proc. 1740 (2016) n. 60003. doi: 10.1063/1.4952675
[21] a. quattrocchi, f. freni, r. montanini, power conversion efficiency of cantilever-type vibration energy harvesters based on piezoceramic films, ieee transactions on instrumentation and measurement 70 (2021) n. 1500109, pp. 1-9. doi: 10.1109/tim.2020.3026462
[22] s. de caro, r. montanini, s. panarello, a. quattrocchi, t. scimone, a. testa, a pzt-based energy harvester with working point optimization, proc. of the 6th international conference on clean electrical power, iccep 2017, 27-29 june 2017, santa margherita ligure, italy, pp. 699-704. doi: 10.1109/iccep.2017.8004767
[23] a. quattrocchi, r. montanini, s. de caro, s. panarello, s. scimone, s. foti, a. testa, a new approach for impedance tracking of piezoelectric vibration energy harvesters based on a zeta converter, sensors 20 (2020) n. 5862. doi: 10.3390/s20205862
[24] n. wu, q. wang, x. xie, ocean wave energy harvesting with a piezoelectric coupled buoy structure, applied ocean research 50 (2015) pp. 110-118. doi: 10.1016/j.apor.2015.01.004
[25] s. f. nabavi, a. farshidianfar, a. afsharfard, novel piezoelectric-based ocean wave energy harvesting from offshore buoys, applied ocean research 76 (2018) pp. 174-18. doi: 10.1016/j.apor.2018.05.005
[26] d. alizzio, m. bonfanti, n. donato, c. faraci, g. m. grasso, f. lo savio, r. montanini, a. quattrocchi, design and performance evaluation of a "fixed-point" spar buoy equipped with a piezoelectric energy harvesting unit for floating near-shore applications, sensors 21 (2021) n. 1912. doi: 10.3390/s21051912
[27] d. alizzio, m. bonfanti, n. donato, c. faraci, g. m. grasso, f. lo savio, r. montanini, a. quattrocchi, design and verification of a "fixed-point" spar buoy scale model for a "lab on sea" unit, proc. of the 2020 imeko tc19 international workshop on metrology for the sea, imeko tc19, 5-7 october 2020, naples, italy, pp. 27-32. online [accessed 15 december 2021] https://www.imeko.org/publications/tc19-metrosea-2020/imeko-tc19-metrosea-2020-11.pdf
[28] a. quattrocchi, f. freni, r. montanini, self-heat generation of embedded piezoceramic patches used for fabrication of smart materials, sens. actuators a: phys. 280 (2018) pp. 513-520. doi: 10.1016/j.sna.2018.08.022
[29] s. sternini, a. quattrocchi, r. montanini, a. pau, f. l. di scalea, a match coefficient approach for damage imaging in structural components by ultrasonic synthetic aperture focus, procedia eng. 199 (2017) pp. 1544-1549. doi: 10.1016/j.proeng.2017.09.503
[30] a. erturk, d. j. inman, piezoelectric energy harvesting, wiley, united states, 2011, isbn: 978-0-470-68254-8.
[31] a. erturk, d. j. inman, on mechanical modeling of cantilevered piezoelectric vibration energy harvesters, j. intell. mater. syst. struct. 19 (2008).
[32] a. amanci, f. giraud, c. giraud-audine, m. amberg, f. dawson, b. lemaire-semail, analysis of the energy harvesting performance of a piezoelectric bender outside its resonance, sens. actuators a: phys. 17 (2014) pp. 129-138.
[33] y. c. shu, i. c. lien, efficiency of energy conversion for a piezoelectric power harvesting system, j. micromech. microeng. 16 (2006) n. 11, pp. 2429-2438. doi: 10.1088/0960-1317/16/11/026
[34] uncertainty of measurement - part 3: guide to the expression of uncertainty in measurement, iso/iec guide 98-3:2008, 2008.
[35] j. m. gilbert, f. balouchi, comparison of energy harvesting systems for wireless sensor networks, int. j. autom. comput. 5 (2008) pp. 334-347. doi: 10.1007/s11633-008-0334-2
impact of the measurement uncertainty on the monitoring of thermal comfort through ai predictive algorithms

acta imeko
issn: 2221-870x
december 2021, volume 10, number 4, 221 - 229

nicole morresi1, sara casaccia1, marco arnesano2, gian marco revel1
1 dipartimento di ingegneria industriale e scienze matematiche, università politecnica delle marche, via brecce bianche 12, 60131 ancona, italy
2 università telematica ecampus, 22060 novedrate, co, italy

abstract: this paper presents an approach to assess the measurement uncertainty of human thermal comfort by using an innovative method that comprises a heterogeneous set of data, made of physiological and environmental quantities, and artificial intelligence algorithms, using the monte carlo method (mcm). the dataset is made up of heart rate variability (hrv) features, air temperature, air velocity and relative humidity. firstly, mcm is applied to compute the measurement uncertainty of the hrv features: results show that, among 13 participants, the uncertainty in the measurement of the hrv features ranges from ± 0.01 % to ± 0.7 %, suggesting that the uncertainty can be generalized among different subjects. secondly, mcm is applied by perturbing the input parameters of the random forest (rf) and convolutional neural network (cnn) algorithms trained to measure human thermal comfort. results show that the environmental quantities produce different uncertainties on the thermal comfort: rf has the highest uncertainty due to the air temperature (14 %), while cnn has the highest uncertainty when relative humidity is perturbed (10.5 %). a sensitivity analysis also shows that air velocity is the parameter that causes the highest deviation of thermal comfort.

section: research paper
keywords: monte carlo simulation; artificial intelligence; thermal comfort measurement; measurement uncertainty
citation: nicole morresi, sara casaccia, marco arnesano, gian marco revel, impact of the measurement uncertainty on the monitoring of thermal comfort through ai predictive algorithms, acta imeko, vol. 10, no. 4, article 34, december 2021, identifier: imeko-acta-10 (2021)-04-34
section editor: carlo carobbi, university of florence, gian marco revel, università politecnica delle marche and nicola giaquinto, politecnico di bari, italy
received october 10, 2021; in final form december 6, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was partially done in the framework of the project "renozeb - accelerating energy renovation solution for zero energy buildings and neighbourhoods", funded by the european union's horizon 2020 research and innovation programme under grant agreement no. 768718.
corresponding author: sara casaccia, e-mail: s.casaccia@staff.univpm.it

1. introduction

in these modern times, it is commonly accepted by the scientific community that the term thermal comfort refers to a very subjective concept that differs from one individual to another [1]. in fact, thermal comfort is affected by several factors that include personal, psychological, physical and environmental diversities; therefore, the personalization of thermal comfort measurement is increasingly required to provide more tailored and customized indoor environments, starting from the single individual and ending with a comfortable and satisfactory environment [2], [3], [4]. physiological, psychological and personal parameters are progressively included in the measurement setup of thermal comfort, to encourage a more customized environment, carefully tailored to the distinct preferences of the occupants that live in it. to put together all these specific parameters, personalized thermal comfort measurement typically requires an extensive range of sensing devices that make up a sensors network.
since each sensor is characterized by its own measurement uncertainty, it is important to include in the measurement process how this uncertainty, associated with each parameter, affects the final measurement of thermal comfort. this aspect is even more justified since, in all research fields, it is commonly accepted by the scientific community that the result of a measurement process loses its meaning if an uncertainty value is not associated with it [5]. the traditional concept of measurement uncertainty must therefore be associated with and applied to modern techniques, as in the case of artificial intelligence (ai) algorithms, which are finding more and more space in the measurement process. in support of this claim, the literature reports that when small variations are applied to data, and these variations are represented by the measurement uncertainty associated with the collected data, ai can provide completely misleading results [6].

the reason behind the use of ai for personalized thermal comfort measurement is that the measured output depends on several parameters, i.e., the physiological and environmental parameters that compose the measurement system. currently, the thermal sensation related to thermal comfort is expressed using psychological scales, the most prominent of which is the ashrae 7-point scale, used in many comfort studies [7]. this scale is used to rate the thermal sensation vote (tsv), with 7 verbal terms: "cold", "cool", "slightly cool", "neutral", "slightly warm", "warm" and "hot". each term is associated with a categorical number in the range from -3 ("cold") to +3 ("hot").
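for reference, the scale can be encoded as a simple lookup, e.g. to turn a model's continuous tsv prediction back into a verbal rating; a minimal sketch, with illustrative names:

```python
# ashrae 7-point thermal sensation scale: categorical codes and verbal anchors
TSV_SCALE = {
    -3: "cold", -2: "cool", -1: "slightly cool", 0: "neutral",
     1: "slightly warm", 2: "warm", 3: "hot",
}

def tsv_label(tsv):
    """map a (possibly continuous) tsv prediction to the nearest verbal term."""
    return TSV_SCALE[int(max(-3, min(3, round(tsv))))]
```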
the application of ai to the measurement of human thermal comfort in the built environment is becoming preferable because the adopted dataset is composed of two categories of parameters, the environmental and the physiological ones. a third category is also added to this dataset, made up of subjective parameters, expressed by the tsv. the union of these quantities generates a complex and heterogeneous dataset, making necessary the employment of ai, which comprises models comparable to a black box [8]. there are strong non-linear relationships among the parameters included in this heterogeneous dataset, which suggests that these relationships should be explored with more complex and high-level algorithms belonging to the ai field [9], [10].

ai measures its performance in terms of accuracy, expressed through different metrics depending on the type of algorithm adopted and the type of data being measured. for example, when the measured quantity is binary, or made of discrete values or classes, performance is measured using accuracy, recall or precision; when ai is used to measure continuous quantities, the most used metrics are the mean absolute error (mae), the mean absolute percentage error (mape) and the mean squared error (mse). of course, the accuracy of ai-based models is strictly linked to the selected algorithm as well as to the quality of the dataset, which is deeply connected to the uncertainty of the measured data [8], [11].
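for completeness, the regression metrics recalled above can be written compactly as below; the eps guard in the mape is an assumption added here, since tsv = 0 ("neutral") votes would otherwise cause a division by zero.

```python
import numpy as np

def mae(y_true, y_pred):
    """mean absolute error, in tsv units."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def mse(y_true, y_pred):
    """mean squared error."""
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def mape(y_true, y_pred, eps=1e-9):
    """mean absolute percentage error, in %."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(100.0 * np.mean(np.abs(y_true - y_pred) / (np.abs(y_true) + eps)))
```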
the method for the estimation of the uncertainty is described in the guide to the expression of uncertainty in measurement (gum) [12], which is based on the law of propagation of uncertainties (lpu). given the strong non-linearity of the relationship between the input and output quantities of ai algorithms, the uncertainty associated with the output quantity is hard to assess with the lpu. for non-linear relationships, the gum supplement [13] proposes the evaluation of the uncertainty by using the monte carlo method (mcm), which allows assessing the uncertainty even if the relationship among the quantities is not linear and the analytical equation relating them is not known, but can be assumed to be a black box, as happens in the context of ai models. the mcm represents a more practical alternative to the conventional lpu method when it is not possible to effectively verify the hypotheses assumed by the lpu [11]. for this reason, this work studies the impact of the measurement uncertainty of a sensors network used to measure thermal comfort through ai models.

to quantify and assess the factors that affect the outcome of a measurement process through mcm, two methodologies can provide this information: sensitivity analysis and measurement uncertainty analysis [14], [15], [16], [17]. both are here applied to ai models, the first to analyze how much each uncertainty weighs on the measurement of the model's outcome, and the second to identify and quantify the main sources of uncertainty in the measurement [18].

as mentioned before, thermal comfort personalization is based on the measurement of a heterogeneous set of data (i.e., environmental, physiological and personal parameters). regarding the physiological quantities, the object of this study is the addition of heart rate variability (hrv) into the process for the measurement of thermal comfort: in fact, the literature has repeatedly found a relationship between some measures (or indices) derived from hrv and the thermal discomfort of the participant [19]-[22]. in particular, hrv is used to extract additional features, such as a specific quantity named lf/hf, which is related to the thermal comfort of the user [9], [23]. more in detail, lf/hf is the ratio between two hrv-derived parameters, the low-frequency components (lf) and the high-frequency components (hf). the literature has found that these two quantities represent the activity of the autonomic nervous system, responsible for the management of thermoregulation, and therefore they are used to evaluate human thermal comfort.

according to previous studies [9], [23], [24], physiological features of hrv and environmental quantities can be used in combination with ai models to predict the tsv of users exposed to uncomfortable environmental conditions. results have shown that the tsv can be measured using two ai algorithms, the random forest (rf), which belongs to the machine learning (ml) class, and convolutional neural networks (cnn), which are part of the deep learning (dl) branch, with a mape of 20 % and 21 %, respectively [23]. the further step is therefore to know how much the measurement uncertainty of each parameter involved in the measurement of thermal comfort impacts the final measurement of the tsv. to this aim, the paper is structured as follows:
a) first of all, mcm is applied to the raw hrv signals coming from different participants, to see how the uncertainty in the measurement of hrv impacts the computation of the hrv features, since they are fed into the ai algorithms to extract the tsv. the standard uncertainty associated with the measurement of hrv chosen for this analysis is ± 4 ms, computed from a previous study in which the smartwatch was compared to a reference method while participants were sitting at rest [25]. this preliminary analysis can be useful to establish whether the measurement uncertainty of the smartwatch, used to assess hrv and derive its features, has the same impact on each participant's data, i.e., whether it yields a similar measurement uncertainty for each of them.
b) secondly, mcm is applied to both the physiological and environmental parameters used as input to the ai algorithms (rf and cnn), to evaluate how the measurement uncertainty propagates to the output in black-box methodologies such as ai algorithms. the uncertainty used for the environmental quantities is the standard uncertainty provided in the datasheet of each device, while hrv is perturbed with different values of uncertainty ranging from ± 4 ms to ± 100 ms. this range of values is chosen because the literature has highlighted that, depending on the activity level performed by the participants, the uncertainty associated with the measurement of hrv in resting conditions is ± 4 ms, while it is greater than or equal to 100 ms when the user is performing a motion test [25], [26]. the research described in [25] and [26] was necessary since commercial smartwatches are not provided with precise datasheets defining the measurement uncertainties of the measured quantities; this is one of the problems encountered in the literature when working with low-cost sensors, often not designed for research purposes.

finally, the results are used to perform a sensitivity analysis related to the ± 4 ms uncertainty, to examine the contribution of the uncertainties associated with the tsv measurement in relation to the uncertainties of the input parameters.
the paper is organized as follows: section 2 firstly describes how mcm is applied to compute the uncertainty in the measurement of hrv features using a wearable smartwatch; secondly, it explains the process to compute the measurement uncertainty of human thermal comfort, using a heterogeneous dataset and trained ai algorithms. section 3 provides the results associated with the two methodologies described in section 2 and the result of the sensitivity analysis. section 4 presents the discussion of the results and section 5 provides the conclusions and the innovative aspects of this work.

2. materials and methods

in this paper, a procedure to evaluate the impact of the uncertainty of the input data on the thermal sensation measurement output is applied. the aim is to introduce and combine a traditional technique, the mcm for the estimation of the measurement uncertainty, with ai models. the methodology described in figure 1 is adopted. the first assumption is that each data point coming from a measurement process is characterized by two quantities: the datum itself and the associated uncertainty. the data collected from the sensors network are merged to build the heterogeneous dataset, which comprises the set of data and the associated uncertainties; therefore, when ai is applied in the measurement process, the model also takes as input the uncertainty of the collected data. it is thus expected that the results of the ai model derive from two main aspects: the first one is the intrinsic structure of the algorithm, while the second one is the uncertainty associated with the input data [6]. the evaluation of the impact of the uncertainty associated with the input data on the measurement output, when ai is applied, is a paramount result. there is one intermediate aspect that should be considered, which is the impact of the uncertainty of the ai models themselves, which combines with the uncertainty of the measurement instruments.

the last part of figure 1 deals with two types of analysis, the sensitivity analysis (sa) and the uncertainty analysis, performed by mcm. the procedure of the mcm consists in using data acquired through real experiments and perturbing them by assigning different measurement uncertainties or perturbations. each input parameter is modified by adjusting it with a different perturbation, one at a time, while the other input variables are kept unchanged. based on the obtained results, the effect of different measurement uncertainties on the prediction of the tsv is observed through the simulation of perturbed data. the described methodology summarizes the steps needed to evaluate the measurement uncertainty of the different parameters used as input variables (or predictors) in ai algorithms applied for the measurement of personalized thermal comfort, expressed through the tsv.

figure 1. conceptual description of the procedure adopted to study the impact of the measurement uncertainty in the context of ai models.

the gum supplement provides the steps necessary to perform the mcm. from a general point of view, mcm provides a general approach to numerically approximate the cumulative distribution function (cdf) of the output of a certain quantity y = f(x). the main concept behind mcm is that every sample of the input quantity xi, chosen from a predetermined distribution, can be used: by taking a random sample of each input xi from its related probability density function (pdf), it is possible to estimate a possible result of the output y and the associated uncertainty.
to explain how mcm is adapted in the context of this research, the following steps were applied:
1) the number m of monte carlo trials is set to 200,000. m is the number of output quantity values that need to be generated; in this study, it is chosen a priori. usually, a number of trials equal to 10^6 is considered, which is the number of trials that should provide a coverage interval of 95 %, as reported in the gum supplement [13]. since it is commonly accepted that the higher the number of trials, the better the expected convergence of the results, the authors set the number of monte carlo trials to 200,000, as reported in the literature [14];
2) m vectors xi, i = 1, ..., m, were generated by sampling randomly from the probability density function (pdf) of each input quantity [x_hrv, x_ta, x_rh, x_va], in order to realize a set of possible inputs that can be associated with the input quantity. the random samples are obtained from gaussian distributions, with the uncertainties described in table 1;
3) for each vector generated in step 2, the corresponding output y (the tsv in this case) is computed, yielding m output quantity values;
4.1) step 3 is applied to estimate the uncertainty associated with the measurement of the hrv features, as described in section 2.1;
4.2) in addition, step 3 is applied by perturbing the features used as input in the ai models, as described in section 2.2;
5) the representation g of the distribution function for y is computed, starting from the set of m outputs of y;
6) g is used to compute an estimate ŷ of y and the covariance matrix (or standard uncertainty) associated with ŷ;
7) g is used to compute the appropriate coverage region for y, for a stipulated coverage probability p.

in this paper, we refer to a generic model with a number i of inputs xi and one output y, which is the measured quantity.
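steps 1)-7) can be condensed into a short routine such as the one below; the model is treated as a black box (e.g. a trained rf or cnn wrapped in a callable), the gaussian pdfs and the 200,000 trials follow the text, and the function and variable names are illustrative assumptions.

```python
import numpy as np

RNG = np.random.default_rng(seed=1)
M = 200_000   # number of monte carlo trials (step 1)

def mcm_black_box(model, x_best, u_x, coverage=0.95):
    """propagate gaussian input uncertainties through a black-box model (steps 2-7).

    model  : callable mapping an (m, n_inputs) array to m outputs (the tsv)
    x_best : best estimates of the inputs [hrv, ta, rh, va]
    u_x    : standard uncertainties associated with the inputs
    """
    x_best, u_x = np.asarray(x_best, float), np.asarray(u_x, float)
    samples = RNG.normal(x_best, u_x, size=(M, x_best.size))           # step 2
    y = np.asarray(model(samples))                                     # step 3
    y_hat, u_y = float(np.mean(y)), float(np.std(y, ddof=1))           # steps 5-6
    lo, hi = np.quantile(y, [(1 - coverage) / 2, (1 + coverage) / 2])  # step 7
    return y_hat, u_y, (float(lo), float(hi))
```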
the input parameters used in this study to measure the tsv were chosen on the basis of a previous study, explained below. 13 participants were recruited to carry out a test consisting in exposing each participant to a semi-controlled room, in which the environmental quantities were changed by varying the air temperature of the room. the sample population involved in the test is made of healthy people, not suffering from heart disease, in order not to have perturbations on the collected hrv; the age of the sample population is 31 ± 6 years and the participants were 7 females and 6 males. as reported in previous works [9], [23], the personal characteristics of the user (e.g., gender) are not included in this research, since the literature has shown that the analysis of environmental variables and personal characteristics in the prediction of user thermal comfort does not lead to statistically significant results. moreover, the authors did not consider differences in age: the literature reports that higher temperatures are preferred by elderly people, while lower temperatures are more suitable for the younger population; the sample population of this experiment does not include older people and therefore the authors do not deepen this aspect [9].

more in detail, the temperature profile of the test room was created as follows: the test room was previously set at 15 °c for 5 minutes, then the temperature set-point was set to 26 °c, and finally the temperature was set back to 15 °c. the total duration was lengthened up to 100 minutes. the experiment was performed both in january 2020 and january 2021; the outdoor daily average temperatures were 8 °c in january 2020 and 4 °c in january 2021. during the experiment, the participants were sitting at a workstation and could perform light office activities (e.g., reading, working on the laptop). during the test, different quantities were monitored and used to train and test the ai models for predicting the tsv of the users, which was collected during the test [23]. the quantities monitored for measuring the tsv, used to express human thermal comfort in this study, were: air temperature (ta), relative humidity (rh), air velocity (va) and heart rate variability (hrv). the list above contains both environmental quantities (ta, rh, va) and physiological quantities (hrv). the procedure with which the physiological and environmental data were acquired consisted in varying the environmental conditions of the semi-controlled room, while the sensors network continuously acquired these parameters. participants were informed of the purpose of the study and gave their informed consent; in addition, the study was approved by the ethical committee of università politecnica delle marche and was carried out in compliance with the principles laid down in the declaration of helsinki, in accordance with the guidelines for good clinical practice.

the following paragraphs explain the methodology used in this research: in particular, section 2.1 aims at defining the measurement uncertainty associated with each hrv feature, derived from hrv, using mcm, while section 2.2 explains how to implement mcm in combination with ai models to assess the tsv of the participants, using environmental and physiological data.

2.1. monte carlo approach on hrv features

in this first part of the analysis, mcm is used to estimate how the uncertainty of the device through which the hrv is collected influences the computation of the hrv features, which will later be used to evaluate human thermal comfort. the hrv signal, which is made of the time distances between two subsequent r peaks of the ecg trace, is divided into time frames from which it is possible to extract some indices (or features). each time frame is built as follows: the first time frame corresponds to 5 minutes of the hrv signal, which is considered the minimum duration recommended for computing the hrv spectral analysis. after the extraction of the first window, a new window is computed by appending a new hrv sample to the end of the window, while the oldest sample is removed from its beginning; the process is repeated until the end of the signal.

according to the literature, several hrv features to be extracted from the hrv signal were identified. time-domain hrv features f(hrvt) are a collection of statistical and geometrical indices for the measurement of the variability in the hrv sequence, which act as indices to interpret the oscillations of the cardiac cycles. the f(hrvt) statistical indices computed in this study are: the standard deviation of the rr intervals (sdann), the root mean square of the successive rr differences (rmssd), the mean value of the rr intervals (mean), the median of the rr intervals (median), the percentage of differences between adjacent rr intervals that differ by more than 50 ms (pnn50) and the percentage of differences between adjacent rr intervals that differ by more than 25 ms (pnn25).
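a minimal sketch of the time-domain indices, computed on a single 5-minute window of rr intervals expressed in ms; the "sdann" key follows the label used in the text for the standard deviation of the rr intervals, and the function name is illustrative.

```python
import numpy as np

def time_domain_features(rr_ms):
    """statistical time-domain hrv indices of one 5-minute rr window (values in ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)                       # successive rr differences
    return {
        "mean": float(np.mean(rr)),
        "median": float(np.median(rr)),
        "sdann": float(np.std(rr, ddof=1)),  # standard deviation of the rr intervals
        "rmssd": float(np.sqrt(np.mean(diff ** 2))),
        "pnn50": float(100.0 * np.mean(np.abs(diff) > 50.0)),
        "pnn25": float(100.0 * np.mean(np.abs(diff) > 25.0)),
    }
```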
in addition, hrv studies imply the use of frequency-domain features f(hrvf), which are useful for understanding the stationarity or stability of the hrv signal. to obtain the frequency-domain analysis, the power spectral density (psd) was first computed through the autoregressive modelling-based method, which has proven to provide a better resolution. each frequency band was then computed: lf (0.04 hz - 0.15 hz), hf (0.15 hz - 0.4 hz), lf/hf, hf/lf and the total power spectrum (tp).

non-linear features f(hrvnl) were also computed through the poincaré plot. the poincaré plot is a graphical representation of an hrv time series on the cartesian plane: the x-axis contains one hrv sample, while the y-axis contains the following hrv sample. the poincaré plot provides two additional features, obtained by fitting the point cloud of the resulting figure with an ellipse: the first index (sd1) represents the dispersion of the points perpendicular to the identity line, while the second one (sd2) is the dispersion of the points along the identity line of the poincaré plot.

in this section, a first analysis is made specifically on the individual features of the hrv. in practice, we want to assess the measurement uncertainty of each feature when the smartwatch is affected by a specific uncertainty in the measurement of hrv; the procedure is summarized in figure 2. first of all, one hrv window of 5 minutes is considered. this window is perturbed with random samples coming from a normal distribution with mean equal to 0 and standard deviation equal to u, for 200,000 iterations. the perturbed hrv segments are used to compute the hrv features, in order to establish the impact of the uncertainty. the final uncertainty is computed as the standard deviation of the resulting hrv features, expanded by a coverage factor k = 2. for the simulation study, 13 hrv segments of 5-minute duration were extracted from the set of experimental procedures described previously; the duration of 5 minutes is required because it is the minimum time needed to compute short-term hrv features.
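the perturbation loop summarized in figure 2 can be sketched as follows, reusing the time_domain_features helper sketched above; feature_fn stands for any of the feature extractors (time, frequency or non-linear domain), and the relative expanded uncertainty is returned in % of the feature estimate.

```python
import numpy as np

RNG = np.random.default_rng(seed=2)

def feature_uncertainty(rr_window_ms, feature_fn, u_device_ms=4.0, m=200_000, k=2):
    """monte carlo estimate of the uncertainty that a device uncertainty u_device_ms
    induces on one hrv feature, for a single 5-minute rr window."""
    rr = np.asarray(rr_window_ms, dtype=float)
    values = np.empty(m)
    for i in range(m):
        perturbed = rr + RNG.normal(0.0, u_device_ms, size=rr.size)  # n(0, u) noise
        values[i] = feature_fn(perturbed)
    estimate = values.mean()
    return estimate, 100.0 * k * values.std(ddof=1) / abs(estimate)  # k = 2

# e.g. the uncertainty of rmssd for one participant's window:
# est, u_pct = feature_uncertainty(rr_window, lambda rr: time_domain_features(rr)["rmssd"])
```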
2.2. monte carlo approach on ai models for measuring human thermal comfort

for this second part of the analysis, the set of hrv features explained in figure 3 was chosen [23]. figure 3 shows the approach used to apply mcm and ai: a set of input parameters was used to train an rf regression algorithm and a cnn to predict the tsv, the output of the algorithms. once the models are trained, the procedure consists in perturbing the input parameters (hrv, ta, rh, va) one at a time with different uncertainties, keeping the other quantities constant, and applying the trained model to each perturbed set of features to obtain the final measurement of the tsv. it is worth noting that the current analysis is conducted locally, by choosing one observation of the whole dataset; the final result is therefore associated with the chosen observation. the characteristics of the sensors used to collect the environmental parameters, and the measurement uncertainties employed to perturb the different parameters, are shown in table 1.

table 1. characteristics of the measurement instruments used in the test.
input quantity                 manufacturer        model                  standard uncertainty   distribution
air temperature (ta)           thermo sensor gmbh  pt100 (4-wire)         ± 0.1 °c               normal
relative humidity (rh)         ahlborn             fhad46c41a             ± 2 % of reading       normal
air velocity (va)              ahlborn             fvad05tok300           ± 3 % of reading       normal
heart rate variability (hrv)   samsung             samsung galaxy watch   ± 4 ms                 normal

figure 2. description of the mcm adopted to evaluate the impact of the smartwatch uncertainty on the hrv features, which will be used to measure and assess human thermal comfort.
figure 3. description of the procedure adopted to apply mcm to the measurement of thermal comfort, with a heterogeneous set of data.

sensitivity analysis

a sensitivity analysis (sa) is performed to identify and quantify the main sources of uncertainty in the measurement. ideally, sa is the methodology for studying how the uncertainty of the output provided by a model can be attributed, qualitatively or quantitatively, to the different uncertainties of the input parameters of the model [27], [28]. the gum supplement also provides the specifications for assessing the sensitivity coefficients; more in detail, it explains that mcm is not sufficient to fully compute the sensitivity coefficients, but it provides a methodology to compute the influence of each input quantity on the output quantity [13]. the procedure implies that one input quantity is perturbed while the remaining input quantities are kept at their best estimates, in order to obtain the pdf of the output quantity depending only on the perturbed variable. by using this procedure, the gum supplement proposes an approach that can be regarded as a generalization of the approximate partial-derivative formula; in particular, the sensitivity coefficients can be approximated as the ratio between the standard deviation of the resulting model values and the standard uncertainty associated with the best estimate of the relevant input quantity, as reported in equation (1):

$c_i = \frac{u_i(y)}{u(x_i)}$ ,  (1)

where $c_i$ is the sensitivity coefficient and $u(x_i)$ is the standard uncertainty associated with the i-th input estimate $x_i$, which contributes to the standard uncertainty $u_i(y)$.

3. results

3.1. impact of the measurement uncertainty on the hrv features

this section shows the results of the mcm used to analyse how the uncertainty in the measurement of hrv propagates to the computation of the hrv features. the simulated results are used to obtain the frequency histogram of each hrv feature computed in the simulation, and hence the related uncertainty. for each hrv feature, the values of the standard uncertainty computed with the mcm are shown; the frequency histograms resulting from one simulation for one participant are displayed in figure 4.

table 2 contains the uncertainty associated with the computation of each hrv feature, expressed in percentage, among the 13 participants. it can be seen that the minimum measurement uncertainties are obtained when computing the mean feature (± 0.01 % of reading), while the highest values are associated with the measurement of lf/hf and hf/lf (± 0.7 % of reading). physiological quantities, such as hrv, can vary among participants; it is therefore interesting to understand the impact of the uncertainty on each feature, divided according to the related domain, among the different users, to see whether it is possible to establish a generalized standard uncertainty associated with each hrv feature, in relation to the value of the uncertainty of the device which measures it. the uncertainty associated with the features in the frequency domain is higher than that of the features in the time domain or in the non-linear domain. a reason that can explain this result is that the frequency components are more sensitive to the uncertainty of the device (± 4 ms) because they physically reflect the oscillatory activity of the hrv; for this reason, a small variation of the hrv can lead to a greater variation of the features in the frequency domain.

table 2. measurement uncertainty (in %) computed for each hrv feature used to estimate the tsv, among the 13 participants.
id   mean   rmssd   median   lf     hf     lf/hf   hf/lf   sd1    sd2   sd1*sd2
1    0.01   0.10    0.09     0.34   0.30   0.4     0.4     0.10   0.1   0.2
2    0.01   0.08    0.09     0.31   0.25   0.4     0.4     0.08   0.1   0.1
3    0.01   0.29    0.07     1.30   0.81   1.5     1.5     0.29   0.3   0.5
4    0.01   0.11    0.08     0.53   0.55   0.6     0.6     0.11   0.1   0.2
5    0.01   0.25    0.06     0.93   0.58   1.1     1.1     0.25   0.2   0.4
6    0.01   0.10    0.08     0.30   0.28   0.4     0.4     0.10   0.1   0.2
7    0.01   0.16    0.10     0.57   0.59   0.6     0.6     0.16   0.1   0.2
8    0.01   0.04    0.13     0.26   0.12   0.3     0.3     0.04   0.0   0.1
9    0.01   0.12    0.08     0.31   0.32   0.5     0.5     0.12   0.1   0.2
10   0.01   0.24    0.06     0.92   0.58   1.1     1.1     0.24   0.2   0.4
11   0.01   0.09    0.07     0.31   0.25   0.4     0.4     0.09   0.1   0.1
12   0.01   0.30    0.08     0.66   0.79   1.0     1.0     0.30   0.2   0.4
13   0.01   0.06    0.10     0.26   0.19   0.3     0.3     0.06   0.1   0.1
µ    0.01   0.1     0.1      0.5    0.4    0.7     0.7     0.1    0.1   0.2

figure 4. histograms obtained from mcm with 200,000 iterations applied to the hrv features.
a reason that can explain this result is that the frequency components are more sensitive to the uncertainty of the device (± 4 ms) because they physically reflect table 2. measurement uncertainty computed for each hrv features used to estimate the tsv, among the 13 participants. id uncertainty (%) mean rmssd median lf hf lf/hf hf/lf sd1 sd2 sd1*sd2 1 0.01 0.10 0.09 0.34 0.30 0.4 0.4 0.10 0.1 0.2 2 0.01 0.08 0.09 0.31 0.25 0.4 0.4 0.08 0.1 0.1 3 0.01 0.29 0.07 1.30 0.81 1.5 1.5 0.29 0.3 0.5 4 0.01 0.11 0.08 0.53 0.55 0.6 0.6 0.11 0.1 0.2 5 0.01 0.25 0.06 0.93 0.58 1.1 1.1 0.25 0.2 0.4 6 0.01 0.10 0.08 0.30 0.28 0.4 0.4 0.10 0.1 0.2 7 0.01 0.16 0.10 0.57 0.59 0.6 0.6 0.16 0.1 0.2 8 0.01 0.04 0.13 0.26 0.12 0.3 0.3 0.04 0.0 0.1 9 0.01 0.12 0.08 0.31 0.32 0.5 0.5 0.12 0.1 0.2 10 0.01 0.24 0.06 0.92 0.58 1.1 1.1 0.24 0.2 0.4 11 0.01 0.09 0.07 0.31 0.25 0.4 0.4 0.09 0.1 0.1 12 0.01 0.30 0.08 0.66 0.79 1.0 1.0 0.30 0.2 0.4 13 0.01 0.06 0.10 0.26 0.19 0.3 0.3 0.06 0.1 0.1 µ 0.01 0.1 0.1 0.5 0.4 0.7 0.7 0.1 0.1 0.2 figure 4. histogram obtained from mcm with 200,000 iterations applied to the hrv features. acta imeko | www.imeko.org december 2021 | volume 10 | number 4 | 227 the oscillatory activity of the hrv; for this reason, it can be verified that a small variation of hrv can lead to a greater variation of features in the frequency domain. 3.2 impact of the uncertainty on ai models for measuring human thermal comfort this section reports the results used to compute the measurement uncertainty of the tsv, using mcm applied to ai models, which are rf and cnn. table 3 contains the results which represent the contribution of the uncertainty of each input quantity (hrv, ta, rh, va) into the ai model, to estimate the associated uncertainty in the output tsv. each pdf of the input quantity is used, one at a time, according to monte carlo simulation, to obtain the resulting pdf of the tsv. the following table contains the results of an mcm conducted by perturbing each parameter, one at a time, with the respective uncertainty 𝑢𝑖 (𝑦) , where 𝑖 are the perturbed parameters. the simulation regarding the impact of the hrv uncertainty of the tsv was made for ± 4 ms. since hrv measurement through smartwatches is particularly prone to motion artifacts, more simulations were conducted by simulating different hrv uncertainties, which are u = [4, 10, 20, 50, 100] ms. the results of the simulations are shown in figure 5: the contribution of hrv uncertainty increases after 50 ms, suggesting that up to this value, the uncertainty of the devices that measure hrv is still acceptable for assessing the tsv. on the other hand, environmental sensors that measure ta, rh and va, are less subjected to variations in the uncertainty of the measurement of tsv. environmental quantities produce different uncertainty on the tsv, which depends on different parameters. rf has highest uncertainty due to the ta (𝑢𝑡𝑎 (𝑦) = 14 %), while cnn has highest uncertainty when rh is perturbed (𝑢𝑅𝐻 (𝑦) = 10.5 %). this different result is explainable if we consider that the two algorithms perform with different rules and can be considered as black boxes. sensitivity analysis in table 4 the partial uncertainty budget due to each parameter is given according to gum uncertainty framework; the sensitivity coefficients were evaluated by monte carlo simulation. according to the table, sa highlights that air velocity and air temperature are the parameters that most affect the tsv prediction, respectively for rf and cnn. 
4. discussion

depending on the kind of algorithm trained, which can belong to either the ml or the dl class, there are different quantitative results for the uncertainty of the tsv, using an mcm with 200,000 iterations, in relation to the environmental parameters. rf provides a greater uncertainty when ta is perturbed with an uncertainty of ± 0.1 °c; on the contrary, cnn exhibits a higher uncertainty when rh is perturbed with ± 2 % of reading. the physiological parameter (the hrv signal), when perturbed with ± 4 ms of uncertainty, impacts the resulting tsv with an uncertainty of 2.8 % and < 0.001 % for cnn and rf, respectively. in addition, the mcm on the ai models for measuring human thermal comfort (section 2.2) is applied to one observation of the entire dataset, and therefore it is a local analysis that is strongly related to the trained model.

the overall methodology presented in this paper can be applied to brand new models that do not include an analytical equation, to compute the uncertainty associated with the output of the model. mcm is a powerful tool that can be used to simulate the impact of more than one measurement uncertainty, as in the case of the analysis presented in section 3.2, in which hrv is perturbed with a set of uncertainties ranging from ± 4 ms to ± 100 ms. the impact of this study is that mcm can be applied in a variety of circumstances in which the analytical model that represents the relationship between input and output is not known, as in the case of ai models, but the initial perturbation can be simulated. when the measurement uncertainty of the device is not available, the uncertainty can be hypothesized and simulated, or a calibration to determine it can be performed.

5. conclusion

this paper presents an approach to assess the measurement uncertainty of human thermal comfort, expressed in terms of tsv, by using a method that comprises a heterogeneous set of data, made of physiological and environmental quantities, and ai models. the objective is therefore to quantify the measurement uncertainty of the tsv, while the user is performing light-office activities, by following the gum guidelines and applying mcm.
                        hrv        ta        rh          va
standard uncertainty    ± 4 ms     ± 0.1 °c  ± 2 % rdg   ± 3 % rdg
sensitivity index rf    1.87e-14   1.8       4.61e-13    5.8
sensitivity index cnn   0.004      0.09      0.32        16.03

a preliminary analysis was conducted to assess the impact of the measurement uncertainty of the instrument used to collect the hrv, which is a commercial smartwatch. mcm was applied to compute the uncertainty associated with the features extracted from the hrv, which are later fed into the rf and cnn models. the results have shown that, among the 13 participants, the uncertainty in the measurement of the features ranges from ± 0.01 % to ± 0.7 %, suggesting that the uncertainty can be generalized among different users. then, mcm was applied by perturbing a set of parameters (hrv, ta, rh and va) to compute the uncertainty in the measurement of the tsv, using the rf model and the cnn. the results have shown that environmental quantities produce different uncertainties on the tsv: rf has the highest uncertainty due to the ta uncertainty (u = 14 %), while cnn has the highest uncertainty when rh is perturbed (u = 10.5 %). on the other hand, the sensitivity analysis that expresses the relationship between the tsv and the input parameters highlights that va is the parameter that causes the greatest variation of the tsv.

acknowledgement

the activity presented in this work was partially carried out within the renozeb project, funded by the european union's horizon 2020 research and innovation programme under grant agreement no. 768718.
low-cost, high-resolution and no-manning distributed sensing system for the continuous monitoring of fruit growth in precision farming

acta imeko
issn: 2221-870x
june 2023, volume 12, number 2, 1 - 11

lorenzo mistral peppi1, matteo zauli1, luigi manfrini2, luca corelli grappadelli2, luca de marchi1, pier andrea traverso1

1 dei - department of electrical, electronic and information engineering "guglielmo marconi", university of bologna, 40136 bologna, italy
2 distal - department of agricultural and food science, university of bologna, 40127 bologna, italy

section: research paper

keywords: smart farming technologies; smart agriculture; agricultural iot; autonomous sensor node; lora

citation: lorenzo mistral peppi, matteo zauli, luigi manfrini, luca corelli grappadelli, luca de marchi, pier andrea traverso, low-cost, high-resolution and no-manning distributed sensing system for the continuous monitoring of fruit growth in precision farming, acta imeko, vol. 12, no. 2, article 17, june 2023, identifier: imeko-acta-12 (2023)-02-17

section editor: francesco lamonaca, university of calabria, italy

received july 11, 2022; in final form february 24, 2023; published june 2023

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

funding: this work was funded by the "italian departments of excellence" initiative sponsored by the italian ministry of university (miur).

corresponding author: lorenzo mistral peppi, e-mail: lorenzomistral.pepp2@unibo.it

abstract
accurate, continuous and reliable gathering and recording of data about crop growth and state of health, by means of a network of autonomous sensor nodes requiring minimal management by the farmer, will be essential in future precision agriculture. in this paper, a low-cost multi-channel sensor-node architecture is proposed for the distributed monitoring of fruit growth throughout the entire ripening season. the prototype presented is equipped with five independent sensing elements, each of which can be attached to a sample fruit at the beginning of the season and is capable of estimating the fruit diameter from first formation up to harvest. the sensor-node is provided with a lora transceiver for wireless communication with the decision-making central unit, is energetically autonomous thanks to a dedicated energy harvester and a careful design of power consumption, and each measuring channel provides sub-mm, 9.0-enob effective resolution with a full-scale range of 12 cm. the accurate calibration procedure of the sensor-node and its elements, which allows for the compensation of temperature dispersion, noise and non-linearities, is described in the paper. the prototype was tested in the field in a real application, in the framework of the research activity for next-generation precision farming performed at the experimental farm of the department of agricultural and food science of the university of bologna, cadriano, italy.

1. introduction

the rise of new technologies makes it possible to approach classical issues with modern, smart solutions. agriculture is definitely benefiting from these innovations: rapid global development, population growth, climate change and a new awareness of food, its safety and the impact its production has on the environment have led to terms such as "precision farming", "digital farming" or "agriculture 4.0" becoming more and more widespread and being considered a real added value of the agricultural product. the term precision agriculture (pa) encompasses the many disciplines and technologies employed (artificial intelligence, autonomous control of agricultural equipment, automatic decision-making systems, quality control and application of treatments by drones, and so on) in order to increase productivity and environmental quality [1].
the key element, therefore, is the ability to acquire, transmit and process information in order to monitor crops, to make autonomous decisions through decision-support systems (dss) (see e.g. [2]) and/or to provide data about a particular situation, so as to allow for a decision that is as correct and as spatially and temporally customised as possible [1], [3]. the advent of iot networks [4] has further increased the pervasiveness of in-field sensing thanks to the possibility, particularly through lorawan networks, of covering large areas using extremely low-power, adequately reliable and low-cost hardware [5], [6]. in addition, smart devices allow farmers to be employed in those tasks for which human presence is essential, reducing the need for human intervention and therefore lowering overall farm operating costs. however, farmers often perceive new technologies more as a complication than as an advantage, mainly because of their inexperience, the lack of interoperability between the various technologies, the complexity, the often very high costs, and the inability to handle such large amounts of data [7], [8]. consequently, efforts must be made to make these technologies easy for the farmer to install, understand, successfully exploit and maintain. the monitoring of fruit growth does not only help to estimate the yield [9], [10], but also provides additional information, such as the water stress status of the plants [11], [12]. in addition, by comparing the data acquired in real time with predictive growth models, corrective actions can be taken in the orchard. the most common techniques for measuring fruit dimension employ lvdt (linear variable displacement transducer) sensors, strain gauges or potentiometers [13], [14] which, despite being highly accurate, are limited by a usually very small measurable range. therefore, in order to continue the measurement process, frequent re-positioning of the sensor on the fruit is required.
this time- and labour-consuming activity has a cost, and therefore an impact on the farmer's income. there are alternative solutions that do not require relocation over time, such as those employing optical sensors directly on the fruit, as in [15], but they are usually affected by low accuracy and by the inability to detect fruit shrinkage. systems capable of agricultural analysis have also been developed using computer vision [16] and artificial intelligence methodologies [17]; in particular, techniques to estimate the number of fruits and their size [18], [19], [20], [21], [22] have been introduced. however, the accuracy achievable by these devices is still low compared to traditional techniques. this work aims to present the design, implementation and experimental assessment of a low-cost, multi-channel distributed sensing system for the high-resolution, continuous estimation and recording of the diameter of fruits directly on the tree throughout the whole growth process. the system is non-invasive, does not damage or warp the fruit, is energetically autonomous and is largely insensitive to environmental conditions. relocation of the sensing elements during the ripening season, as required by the sensors used so far because of their small measurable range, is not needed either, thanks to a full-scale range greater than the size of the fruit (e.g., apple, orange) once it has reached its typical ripe dimension. in comparison with the preliminary work presented in [23], in which a first, single-channel basic prototype was shown to illustrate the concept, a complete multi-channel sensor-node solution is presented here, with the addition of an energy harvesting section able to guarantee season-long autonomous operation and a lora transceiver to allow data communication even at a far distance from the receiving gateway. the full calibration of the multi-channel system was refined and is described step by step in detail. two in-field measurement campaigns are also presented, carried out in an apple orchard at the experimental farm of the department of agricultural and food science of bologna, cadriano, italy. the paper is organized as follows. in section 2 the operating principle of the device is explained and the system architecture is shown. in section 3 all the main components of the multi-channel sensor-node are described in detail, with particular emphasis on the architectural/design solutions exploited to maximize the final reliability and accuracy. section 4 describes the refined full calibration procedure of the prototype. finally, in section 5 the results of in-field tests, conducted in an apple orchard, are shown.

2. system architecture and operating principle

unlike the conventional methods proposed in the literature, where the growth of the fruit determines a stimulus directly detected by a sensitive element (potentiometer, lvdt, etc.), in this application the i-th sensing element (i.e., the front-end of the i-th measuring channel) is made of a structure of known dimensions. the structure is composed of two solid arms bound together at one end with a bolt (figure 1 for a five-channel sensor-node) and is used to interface the object to be measured to the element itself.
the plier is kept in place by means of a spring, while a potentiometer supplied with a reference voltage (vref), placed at the fulcrum of the plier and rigidly connected to one of the two arms of the clamp, converts the opening angle α into a voltage acquired by the analog-to-digital converter (adc) integrated into the microcontroller unit (mcu) governing the overall node. the voltage at the output of each sensing element is proportional to the opening angle of the clamp, since the partition ratio of the potentiometer is directly proportional to α, and the linear width d that represents the measurand is indirectly obtained from the a/d-acquired voltage according to (1):

d = 2 R · cos(π/2 − α(V_out)/2) .   (1)

the task of the mcu is to trigger the readings, to perform all the on-site calibration and compensation processes on the acquired raw data, and to manage the storage and/or wireless transmission of the data about the fruit growth evolution to a remote decisional unit. in addition, the mcu manages the power supply timing of the potentiometers, ensuring minimum power consumption of the entire sensor-node.

figure 1. architecture of the sensor-node, with details of one of the five sensing elements.

3. sensor-node prototype

in this section, the main components exploited in the realization of a prototype of a complete five-element sensor-node are described, together with their characteristics and the related design strategies.

3.1 abs pliers

the plier in each sensing element consists of two arms, which are bolted together. a rotary potentiometer is located in a recess within the pivot, to sense the opening angle α between the two arms. the plier acts as an adapter that makes it easy to optimize the full scale for the width d during the design phase, while preserving the same angular full scale. more precisely, to vary the maximum measurable distance d_fs, given the same maximum opening angle, it is only necessary to re-scale the effective length r of the plier arms. after preliminary tests with different materials, the plier was manufactured in acrylonitrile butadiene styrene (abs) through a 3d printing process. the use of abs makes it possible to obtain a structure that is robust but at the same time lightweight: in fact, the warping of fruits due to an excessively heavy plier, which would lead to a distorted estimated growth profile, must be avoided at all costs. in order to avoid any interference with the growth of the fruit, a "half horseshoe" shape (shown in figure 2) was chosen for the arms, while the mechanical elements that rest on the fruit are cup-shaped to increase the contact surface, their edges being coated with silicone to prevent slipping. on the side of the cups, slits prevent the accumulation of moisture and the consequent possible development of fungal diseases. the dimensional parameters of the plier are necessarily customized according to the fruit species, in order to allow for the monitoring of the entire ripening process, optimize the nominal resolution and maximize the overall accuracy of each channel. the size of these prototype calipers allows for the monitoring of apple growth: they were designed to offer a full-scale range of d_fs = 12 cm, starting from the requirement of a maximum opening angle of α_fs = 60 deg (see below), which resulted in an effective length r = 12 cm.

figure 2. 3d rendering of the plier designed for the realization of each sensing element.
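a minimal sketch of the conversion chain of eq. (1), assuming an idealized ratiometric potentiometer in which the raw adc code maps linearly onto the 0-60 deg electrical angle (no calibration applied); with r = 12 cm it reproduces the 12 cm full scale and a per-lsb step consistent with the ~30 µm nominal resolution quoted in subsection 3.3.

```python
import numpy as np

R_MM = 120.0          # effective arm length r = 12 cm
ALPHA_FS_DEG = 60.0   # maximum opening / potentiometer electrical angle
N_CODES = 2**12       # 12-bit adc

def width_mm(code: int) -> float:
    """eq. (1): adc code -> opening angle -> width d (idealized, uncalibrated)."""
    alpha = np.deg2rad(ALPHA_FS_DEG) * code / (N_CODES - 1)
    return 2.0 * R_MM * np.cos(np.pi / 2.0 - alpha / 2.0)

print(f"full scale: {width_mm(N_CODES - 1):.1f} mm")    # 2R*sin(30 deg) = R = 120 mm
step_um = (width_mm(N_CODES - 1) - width_mm(N_CODES - 2)) * 1e3
print(f"1-lsb step near full scale: {step_um:.0f} um")  # ~27 um, close to the quoted 30 um
```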
3.2 angular sensor

as angular sensor, a single-turn potentiometer with an electrical angle of 60 deg was chosen. the potentiometer, connected as a voltage divider, is supplied by a reference voltage vref shared with the adc vref in order to minimize gain errors. in such a configuration, the output voltage vout of the sensing element is proportional to the opening angle, and thus algebraically related to the measurand d according to (1). the phs11-1dbr10ke60 [24], from tt electronics, was the commercial component adopted. although tests conducted over a period of months did not show any particular weathering issues for the potentiometer, it was coated with a layer of water-resistant insulating grease to protect it from dust, dirt and moisture.

3.3 mcu, adc and temperature sensor

for this application, a microcontroller with low power consumption and integrated a/d channels was required. at first, cmwx1zzabz-091 murata modules were selected. this device integrates, together with an stm32l072 mcu, a semtech sx1272 lora transceiver, freeing the designer from problems related to the management of rf circuits. however, the component supply shortage related to the 2020-22 pandemic forced the use of a stmicroelectronics nucleo64 demo board [25] equipped with an stm32l152re mcu, whose power supply circuit was exploited and whose pin headers were used as the interface to the five potentiometers and to the sd card for local data storage. the mcu can easily be integrated with a lora module (e.g., mbed sx1272mb2das) as well. the stm32l152re is a 32-bit microcontroller incorporating a 12-bit successive-approximation adc, spi and i2c peripherals, and featuring low-power modes [26]. as already mentioned, thanks to the presence of pmos devices acting as power switches, the same reference voltage used by the a/d stage was employed to supply each potentiometer: in this way it was possible to take advantage of the entire full-scale range of the adc and, at the same time, compensate for both short- and long-term dispersion effects of the reference voltage. by adopting vref = 3.3 v, the nominal quantization step of the adc is 806 µv, which is approximately equivalent to a nominal resolution of 30 µm for each sensing channel. the mcu makes a temperature sensor internally available, which was used for thermal compensation in the framework of the real-time overall calibration procedure (section 4) implemented for each channel. this sensor is calibrated during the production of the mcu, and the calibration coefficients are made available within read-only portions of the memory [26]. nonetheless, a comparison was made for performance assessment between this sensor and a reference one (namely, the hts221 from stmicroelectronics), characterised by an accuracy of 1.0 °c in the 0 °c to 60 °c range [27]. the maximum estimated deviation between the two sensors in the 24 °c to 72 °c range was 1.34 °c, which allowed the accuracy of the internal mcu sensor to be considered adequate for the thermal compensation cited above.

3.4 power supply

the voltage provided by the battery and the harvester (see the next subsections) had to be adjusted down to 3.3 v, a level compatible with the operation of the mcu and peripherals. to this aim, the ld39050pu33r ldo from stmicroelectronics, available on the nucleo board, was used, which allows for a stable, low-noise power supply.
this device is characterized by very good performance in terms of output voltage noise (30 µvrms over a 10 hz to 100 khz bandwidth) and thermal stability [28]. in order to switch off the supply to the potentiometers and to the sd card when not in use, pmos devices acting as switches are used, as described in subsection 3.7.

3.5 data storage and communication

the lora transmitter uses an mbed sx1272mb2das demo board [29], fitted with a semtech sx1272 transceiver [30]. all rf-related circuitry is relegated to this board, which is connected to the nucleo64 via a set of arduino-compatible connectors. the sx1272 transceiver is designed to operate in the 868 mhz frequency band, and it communicates with the microcontroller via an spi interface [30]. in addition, dio lines are used by the semtech loramac-node stack to interface with the transceiver. the power supply line of this board is connected directly to the nucleo board, to exploit the power supply stage of the latter. as an alternative to sending the data over the air, they can be saved locally on an sd card (figure 1).

3.6 energy harvester

to ensure maintenance-free operation of the sensor-node during the entire fruit growth period, it is essential to rely on a low-cost source of energy that does not run out in a short interval of time. by considering consumption estimates and planned duty cycles, it is possible to determine the most suitable energy source to ensure high system autonomy. a single acquisition cycle, repeated every 600 s and consisting of interrogating all five sensing elements, processing the data and storing them on the sd card (local storage is considered in the following discussion), requires a total of 1.63 j, this estimate being obtained by averaging data from different sd card manufacturers [31]. as an example, with a 500 mah battery the operating time of the prototype would be around 25 days, without taking self-discharge phenomena into account, while for many fruit species the growth and ripening times are much longer [9], [32]. therefore, such a local battery is not suitable for a virtually maintenance-free node. increasing the battery capacity is not practical either: at the beginning of the season all the batteries would have to be charged and, if the number of nodes is high, many chargers would be needed, which could be a significant cost in terms of both price and manpower, even if wireless power transfer were used, as in [33]. for this reason, the adoption of a photovoltaic harvesting sub-system, with a small rechargeable back-up battery, was considered the optimal solution. this avoids the use of external chargers or non-rechargeable batteries, which are by their nature also a major source of pollution. as will be described in the following, when the photovoltaic panel is exposed to full sunlight it is possible to keep the back-up battery fully charged. however, in the case of indirect exposure to light, such as in greenhouse applications, the recovery of the energy consumed by the sensor-node is not always guaranteed. for this reason, the battery voltage level is monitored and forwarded every time a data transmission takes place.
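a back-of-the-envelope check of the autonomy figures quoted above, assuming the 500 mah back-up battery is accounted at the 3.3 v system rail and neglecting sleep current and self-discharge:

```python
E_CYCLE_J = 1.63                  # energy per 5-channel acquisition + sd storage
T_CYCLE_S = 600.0                 # one cycle every 10 minutes
BATTERY_J = 0.5 * 3.3 * 3600.0    # 500 mah at the 3.3 v rail ≈ 5940 j

print(f"average power: {E_CYCLE_J / T_CYCLE_S * 1e3:.2f} mw")  # ≈ 2.7 mw
cycles = BATTERY_J / E_CYCLE_J
print(f"autonomy: {cycles * T_CYCLE_S / 86400:.0f} days")      # ≈ 25 days
```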
for the prototype, a steval-isv012v1 evaluation board from stmicroelectronics was used. the evaluation board includes a 400 mw-peak photovoltaic panel, namely the szgd60604p from nbszgd, together with an spv1040 step-up converter with mppt (maximum power point tracking) and an l6924d charge controller for li-ion batteries, both from stmicroelectronics [34]. the use of a step-up converter with an mppt algorithm allows the output voltage and current of the solar cell to be tracked continuously, and therefore allows its maximum power point to be maintained under changing lighting and load conditions, thus maximizing the harvested energy and increasing the overall efficiency of the stage. tests carried out in the open field, at the farm of the faculty of agriculture of the university of bologna (cadriano, bologna) between june and august 2021, have shown that this solution is able to ensure a sufficient power supply for the prototype when used outdoors: the harvester is able to keep the battery fully charged during the day, while during the night the discharge is minimal, as can be appreciated in figure 3. it is worth noting that the test was carried out in rows protected by rain-hail nets and with the photovoltaic panel oriented towards the east.

figure 3. battery voltage of the energy harvester during the open-field tests. it is worth noting that the battery is always fully charged, and even during bad weather days no significant battery discharge is noticeable.

3.7 power gating

when a measurement is performed, the potentiometers are powered by the reference voltage. without a power gating system, the energy wasted during quiescent operation would be much higher than that needed to keep the mcu in stand-by: a current of 330 µa would flow continuously in each of the 5 potentiometers, and the overall consumption would not be negligible. to implement the power gating function, pmos devices are used: a low logic value at the gate pin makes the pmos conduct, powering the potentiometers. by using a pmos, it is not necessary to force a high logic value at the gate to keep the load disconnected: with this configuration the i/o interfaces of the mcu can be disabled, thus saving energy and leaving the gpio pins floating while keeping the potentiometers unpowered. this is due to the fact that, by inserting a resistor rg (figure 4) of appropriate value between gate and source, the device always remains off, as vgs = vrg ≈ 0 v because igate ≈ 0 a. even a high value of rg keeps the device off and allows a negligible amount of energy to be dissipated over rg when vgs < 0 v or when the pmos is conducting. a key factor in the choice of the mos is its low rdson: the lower its value, the lower the influence of this parameter on the final measurement. this aspect will be discussed in more detail in sec. 4.2. the commercial device chosen is the dmp2200udw pmos, from diodes incorporated, which contains two extremely low-resistivity pmos [35] (spice simulations show a resistance of 207 mω at 27.5 °c in the actual operating point) within the same package. each mos is used to activate two potentiometers at a time: in this way, it is possible to power only those potentiometers whose output must be acquired, resulting in an overall lower power consumption.

figure 4. schematic of the pmos switch stage.
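the saving obtained by gating the potentiometers can be estimated from the figures above (330 µa per potentiometer at 3.3 v, five channels); the comparison with the duty-cycled acquisition budget of subsection 3.6 is only indicative:

```python
V_REF = 3.3      # reference rail also supplying the potentiometers
I_POT = 330e-6   # quiescent current per potentiometer
N_POT = 5

p_ungated_w = V_REF * I_POT * N_POT
print(f"ungated potentiometers: {p_ungated_w * 1e3:.2f} mw, "
      f"{p_ungated_w * 86400:.0f} j/day")                          # ≈ 5.4 mw, ≈ 470 j/day
print(f"duty-cycled acquisition: {1.63 * 86400 / 600:.0f} j/day")  # ≈ 235 j/day
```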
4. real-time calibration procedure

several sources of uncertainty, mainly related to the variations of the temperature in which the system operates, have been identified in the uncalibrated prototype. in an open field or greenhouse, at least a temperature range of a few tens of degrees has to be considered; furthermore, the system must be able to operate in the most varied conditions, in direct sunlight and during all months of the year. therefore, temperature stability is essential and temperature dispersion must be carefully compensated. in addition, the real-time calibration procedure described in subsections 4.2, 4.3 and 4.4 and implemented in the prototype also compensates for short-term instability (e.g., noise) and non-linearities affecting each sensing channel (figure 5).

figure 5. stages of the overall real-time calibration procedure implemented for each channel in the prototype.

4.1 a/d acquisitions and data averaging

for a given sensing channel, in order to obtain a single fruit-size reading, the adc is activated in continuous mode every 600 s to acquire m sequences of n samples each of the voltage vout at the output of the potentiometer in the channel. an estimator V̄_R is then computed as the average of the m · n samples V_R,n^(m), where (m, n) denote the m-th sequence and the n-th sample in the sequence, respectively:

V̄_R = (1 / (M · N)) · Σ_{m=1..M} Σ_{n=1..N} V_R,n^(m) .   (2)

the estimator in (2) could be used directly as the vout value in (1) to compensate for short-term instabilities (mainly noise) in the a/d conversion process. however, this procedure would not take into account thermal dispersion and non-linearities, thus additional correction steps were implemented, as discussed in the next subsections. a trade-off between accuracy in rejecting short-term instability and power consumption needs to be considered when choosing the values of n, m and the sampling period: the higher the number of samples and the longer the sampling time, the higher the adc consumption needed to obtain a single reading from the channel. to assess the quality of the conversion, the standard deviation of the estimator V̄_R was evaluated while varying the parameter m and the sampling period (maintaining n = 250, which is the lowest number of samples that can be read in a single-shot acquisition) in the range 16 · t_adcclock < t_sampling < 192 · t_adcclock, where f_adcclock = 16 mhz. the best trade-off between power consumption and quality of the averaged reading V̄_R was found for m = 10 and t_sampling = 96 · t_adcclock. with this configuration, the standard deviation of the estimator in (2) is equal to 0.02 lsb, thus negligible compared to the other sources of uncertainty in the channel.
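a small simulation of the averaging of eq. (2) with the chosen m = 10 and n = 250, using a hypothetical short-term noise level (the 0.6 lsb below is an assumed stand-in, not a measured value), shows the expected 1/sqrt(m·n) reduction, of the same order as the 0.02 lsb reported:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 10, 250   # sequences x samples per reading, as selected above

def averaged_reading(true_code=2048.0, noise_lsb=0.6):
    """eq. (2): mean of m*n noisy samples of a constant channel voltage."""
    return rng.normal(true_code, noise_lsb, size=(M, N)).mean()

reps = np.array([averaged_reading() for _ in range(2000)])
print(f"std of the averaged reading: {reps.std():.3f} lsb")  # ~0.6/sqrt(2500) = 0.012 lsb
```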
4.2 compensation for thermal dispersion

a climatic chamber was exploited to carry out an extensive experimental investigation on the dispersion of the prototype sensor-node channel response for temperature variations in the interval 20 °c - 70 °c. during each test, only some parts of the sensor-node were exposed to the temperature cycles, while all the other blocks were kept in constant environmental conditions, in order to separately characterize the sources of uncertainty due to thermal variations. these tests included thermal cycles on the nucleo64 board (i.e., affecting mainly the adc and the reference voltage generator), on the potentiometers, and on the abs pliers. final tests were then performed on the entire multi-channel system. as far as the potentiometers are concerned, no significant thermal dispersion was observed, except for some very small variations probably caused by thermal inertia. the adc and the reference voltage generator showed instead, as expected, a significant dispersion, even though the dispersion of the adc vref is inherently compensated, at least for the most part, by the adopted strategy of using it to also supply the potentiometers. for a given sensing channel, d̄_R is defined as the estimator obtained by inserting V̄_R as the adc averaged reading for vout into (1). the dispersion of d̄_R due to temperature variations affecting the adc, the reference voltage generator and, from a general standpoint, all the electronic hardware on the boards has been found through the tests to follow the model

d̄_R = d̄_R^(0) · (1 + C_th · ΔT) ,   (3)

where Δt is the variation of the temperature estimated by the mcu internal sensor with respect to a reference t^(0) = 25 °c, d̄_R^(0) is the value of the estimator at t^(0), and c_th is an experimentally estimated temperature coefficient characterizing the channel. thanks to the adoption of the already mentioned reference voltage supply strategy, the power gating method described in subsection 3.7, and an accurate optimization in the design of the entire system, c_th has been found to be adequately independent of temperature and aperture for a given channel. regarding the power gating, more precisely, an effective solution was to use low-rdson external pmos (207 mω at 27.5 °c) as switches. the rdson variation of these pmos, estimated by spice simulation, is equal to 11.9 mω over a temperature range between 15 °c and 50 °c. this variation has negligible effects on the final measurement: in the worst case, the maximum difference in the supply voltage at the potentiometers is 7.9 µv, while the quantization step of the adc is 806 µv. the experimental extraction of the coefficient c_th for each sensing channel was carried out by obtaining d̄_R on a set of aperture values, from a minimum up to the full scale, applying an entire temperature cycle for each aperture value. during a cycle, the clamp was kept mechanically blocked at the test aperture, so as not to involve the thermal expansion of the abs in the dispersion of d̄_R (see the next subsection). the data collected allowed an overdetermined system of linear equations to be written by means of (3), whose least-squares solution provided the temperature coefficient for the channel under test. an example of a temperature cycle applied to the five sensing channels for the test aperture value d ≅ 9 cm is reported in figure 6 and table 1. a weak thermal inertia effect was recorded in some tests, due to the need to speed up the cycle; however, this effect did not cause any major issue during the calibration phase, since the resulting pattern is, in almost all cases, symmetrical with respect to the interpolation line, and the points are almost superimposable on the line once thermal equilibrium is reached. it is worth noting that in on-field applications the temperature slope is never so high, thus avoiding thermal inertia. once the estimate of the coefficient c_th is available for every channel, (3) can be used to convert (according to the model and the assumptions discussed) the uncalibrated averaged reading d̄_R obtained at a temperature t_e (monitored by the mcu internal sensor) into the value that would be obtained at the reference t^(0). this thermally calibrated estimate is indicated as d̄_{R−T} in the following.

figure 6. temperature cycles imposed on the overall final prototype, equipped with pmos instead of the mic5365-3.3 ldo, for all the five sensing elements.
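the least-squares extraction of c_th from eq. (3) can be sketched as follows on synthetic climatic-chamber data (the aperture, noise and c_th values are illustrative only, not those of table 1), together with the correction that yields the thermally calibrated reading d̄_{R−T}:

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic climatic-chamber record at a blocked aperture (values illustrative)
dT = np.linspace(-5.0, 45.0, 60)          # temperature deviation from t0 = 25 °c
d0_true, c_th_true = 90.0, -3e-4          # mm, 1/°c
d_meas = d0_true * (1.0 + c_th_true * dT) + rng.normal(0.0, 0.02, dT.size)

# eq. (3) is linear in dT: d = d0 + (d0 * c_th) * dT -> least-squares fit
A = np.column_stack([np.ones_like(dT), dT])
(d0_hat, slope), *_ = np.linalg.lstsq(A, d_meas, rcond=None)
c_th_hat = slope / d0_hat
print(f"c_th estimate: {c_th_hat:.2e} 1/°c (true {c_th_true:.1e})")

def to_reference_temperature(d_reading, dT_now):
    """invert eq. (3): reading at t0 + dT_now -> thermally calibrated d_{r-t}."""
    return d_reading / (1.0 + c_th_hat * dT_now)
```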
table 1. c_th and r² values for sensing elements 1 to 5. the coefficients of the regression lines are computed by means of the least-squares method.

channel   c_th      r²
1         -0.29     0.776
2         -0.0362   0.641
3         -0.55     0.913
4         -0.51     0.925
5         -0.061    0.754

4.3 thermal expansion of the pliers

the material used to manufacture the pliers, namely abs, has a known coefficient of linear expansion λ. it is therefore possible to estimate, given a reference length r^(0) at a reference temperature t^(0), and knowing the temperature variation Δt from the latter, the variation Δr of the effective length of the clamp arm and, consequently, to rewrite (1) in the following form:

d = d^(0) + 2 · ΔR · cos(π/2 − α(V_out)/2) .   (4)

the calibration of all the electronic components with respect to thermal dispersion, discussed in the previous subsection, can thus be supplemented with the calibration for the thermal expansion of the pliers: it suffices to rewrite and estimate (4) in the form

d̂ = d̄_{R−T} + 2 · ΔR · cos(π/2 − α(V̄_R)/2)   (5)

to obtain an estimate d̂ fully compensated in temperature.

4.4 non-linearities calibration

up to now, the issues associated with the system non-linearities (adc non-linearity, potentiometer resistive-taper non-linearity, geometric dispersion in the plier printing process, etc.) that may affect the d̂ estimator have not been addressed. in order to complete the real-time calibration process, it is only required to extract a single calibration curve across the entire measuring range, at constant temperature (t^(0) = 25 °c in this specific example), using a precision reference caliber. by means of a cubic spline interpolation process, a set of coefficients is extracted from this curve and saved in the memory of the mcu. at each query on the diameter of the fruit, these coefficients are recalled and the linearized output d̂_L (i.e., the final reading of the channel) is computed from d̂ in real time. a special-purpose laboratory set-up was arranged to calibrate the opening of the pliers: all the pliers of a single node are installed on a structure constrained to the carriage of a cnc. as the carriage moves, it opens the pliers to user-defined apertures, simulating the presence of a fruit of known size. a specially developed matlab script sweeps the entire opening range of the pliers at known spatial intervals, while at the same time the opening values detected in the absence of calibration are acquired. the point couples detected in this way are automatically stored in the flash memory of the microcontroller, thus completing the non-linearity calibration process. in this way, each node can be programmed with the same firmware, without the need to modify the code according to the set of calibration coefficients needed for each set of pliers.

5. experimental results and in-field tests

5.1 static tests

a set of static tests was performed on the prototype sensing channels at room temperature, in order to estimate the effective resolution obtained. during the tests, estimates d̂_L on sets of aperture points randomly distributed within the full range were collected and compared with the readings taken with a precision caliper adopted as reference. the rms deviation was estimated by means of simple algorithms, leading to a standard uncertainty of 70 µm. based on this value, an average effective quantisation step dq ≅ 250 µm was derived, and enob = 9.0 bit estimated for a full-scale range of 12 cm.
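the link between the quoted figures can be verified with the uniform-quantisation model u = dq/√12, which is the assumption made here to go from the 70 µm rms deviation to the effective step and enob:

```python
import math

FS_MM = 120.0                        # full-scale range
u_mm = 0.070                         # 70 um rms deviation from the static tests
dq_mm = u_mm * math.sqrt(12.0)       # uniform quantisation: u = dq / sqrt(12)
print(f"dq ≈ {dq_mm * 1e3:.0f} um")  # ≈ 240 um, rounded to 250 um in the text
print(f"enob ≈ {math.log2(FS_MM / dq_mm):.1f} bit")  # ≈ 9.0 bit
```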
5.2 in-field test

a first trial was conducted between june and august 2021 in an apple orchard (malus × domestica borkh., cv "fuji-fujiko") of the experimental farm of the department of agricultural and food science of the university of bologna (cadriano, 44.5543 n, 11.41872 e, figure 7). the orchard, established in early 2019 in a deep silt-clay soil, was characterized by a row spacing of 2 m and a tree spacing of 3 m. the trees, grafted onto m9 rootstock and trained to a multi-axis system (10 axes/tree) with a n-s orientation, are watered by a drip irrigation system with a distance between emitters of 0.5 m and an emitter flow of 2.3 l/h. commercial practices, such as manual fruit thinning to achieve optimal crop load, winter pruning to maintain the desired training system, mineral fertilization, pest and disease control, herbicide application below the trees and mowing of the inter-canopy cover crop, are employed to manage the orchard. within the orchard, 18 fruits, randomly selected along a whole row at the beginning of the season, were monitored cyclically and their growth measured by hand using gauges. the five pliers of the prototype were instead placed simultaneously on a different set of five fruits of two adjacent trees, located at about one third of the length of the orchard, in an inner row. measurements d̂_L by means of the prototype were performed automatically every 10 minutes, while manual readings were carried out depending on the weather conditions and the activities to be done in the field, without any pre-scheduled periodicity, according to a traditional approach. such a short interval between two acquisitions was chosen because one of the purposes of the device is to evaluate the variation in fruit size between day and night. this interval can obviously have an impact in terms of the amount of data transmitted or stored; however, it would be straightforward to vary the data acquisition rate according to different, specific needs.

figure 7. position of the five pliers of the prototype within the orchard (red dot), and position of the 18 apples manually measured as reference (yellow box).

the in-field results of the five sensing elements can be seen in figure 8. as can be noted in the graph, three sensing elements (i.e., #1, #2 and #4, left) out of five correctly monitored and recorded (apart from recoverable perturbations) the fruit growth throughout the ripening season without the need for any manning (apart from a single event, detailed below), while sensing elements #3 and #5 (right) suffered from unrecoverable issues. these issues were not related to the performance of the prototype itself, but to the capability of the pliers to remain reliably fixed to the fruit without any shift in position: normal manual activities, such as pruning the trees or applying spray treatments, or particularly adverse weather conditions, can actually result in a shifting of the pliers. these two records will therefore be disregarded. in the case of sensing element #1, instead, the spikes observed were caused by non-disruptive, recoverable manual activities around the fruit, and can easily be filtered out by software post-processing, while in sensing element #2 the single discontinuity observed around the 30th of july was caused by strong winds, which detached the sensor from the fruit. in this case, a manual re-positioning was needed.
such a macroscopic issue can be successfully dealt with by a fast visual inspection of the fruits under test, and the exceptional nature of the event still allows it to be stated that the monitoring process required a practically negligible work overhead from the orchard staff during the whole ripening season. due to the difficulty of obtaining repeatable reference measurements on fruits still attached to the tree (the non-sphericity and non-rigidity of the apple make it practically impossible to obtain perfectly repeatable measurements with manual gauges), in order to perform a comparative analysis with the results from the prototype, the absolute growth rate (agr), expressed in mm/day and computed over intervals of several days, was used as an averaged estimator for the comparisons. the duration of the intervals over which the agr was estimated is not constant, but depends on the weather conditions and on the other activities required in the field according to common practices. firstly, the average agr from the gauge measurements was computed for each individual fruit in the row (see in figure 9 the statistical distribution obtained) over the total time of the trial (from 21/06/2021 to 06/08/2021). similarly, the average agr involving all fruits was calculated over each time interval considered. the same was done, for the same time intervals, by averaging the values obtained with sensing elements #1, #2 and #4 (table 2).

figure 8. data acquired by the five-sensing-element, fully calibrated prototype during the in-field test in the apple orchard at the experimental farm of the department of agricultural and food sciences of the university of bologna between june and august 2021. on the left (a), successful records that show a correct growth rate and are not characterized by unrecoverable spikes or disturbances. on the right (b), records that were not useful for this study, due to unrecoverable disturbances (i.e., the clamps suffered from shifts on the fruit due to either weathering or manual activities).

table 2. average agr values computed, over three time intervals, considering prototype sensing elements 1-4, 1-2-4 and all 18 hand-measured fruits, respectively.

time interval             agr 1-4 (mm/day)   agr 1-2-4 (mm/day)   agr gauges (mm/day)
21/06/2021 - 08/07/2021   0.56               0.55                 0.41
08/07/2021 - 23/07/2021   0.58               0.52                 0.41
23/07/2021 - 06/08/2021   0.43               0.44                 0.29

table 3. agr estimated over the whole duration of the test considering sensing elements 1-4, 1-2-4 and all hand-measured fruits using a gauge.

time interval                        agr (mm/day)   deviation (%)
21/06/21 - 06/08/21 | pliers 1-4     0.53           36
21/06/21 - 06/08/21 | pliers 1-2-4   0.51           31
21/06/21 - 06/08/21 | gauges         0.39

the values of table 2 are graphed in figure 10, where it is possible to appreciate that the trends of the data collected by the sensing elements and the hand-measured readings are very close.
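for reference, the agr of a channel can be obtained from the recorded diameter series as a slope over the chosen interval; the sketch below uses a least-squares slope on a synthetic 10-minute series (an end-point difference divided by the interval length, as implied by the manual procedure, would be equally simple):

```python
import numpy as np

def agr_mm_per_day(t_days, d_mm):
    """absolute growth rate over an interval as the least-squares slope
    of the diameter-vs-time record."""
    return float(np.polyfit(t_days, d_mm, 1)[0])

# synthetic channel: 10-minute sampling over 17 days, 0.55 mm/day growth
rng = np.random.default_rng(3)
t = np.arange(0.0, 17.0, 10.0 / (60.0 * 24.0))
d = 55.0 + 0.55 * t + rng.normal(0.0, 0.07, t.size)
print(f"agr ≈ {agr_mm_per_day(t, d):.2f} mm/day")
```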
however, looking at the histogram in figure 9, it can be appreciated that the agr from the sensing elements, which are positioned on the fruits of two adjacent trees and, thus, is representative of a local sampling of growth rate, is nonetheless within the statistics of the events detected on a larger scale in the orchard, i.e., an entire row. according to these results, and by considering the statistical distribution in figure 9, 4-5 equally-spaced sensor-nodes could suffice for an adequate sampling of fruit growth along an entire row with these characteristics. a further test was performed using 3 nodes, each consisting of 5 pliers, between august 2022 and october 2022. the orchard was the same of the first test and sensor nodes were randomly distributed over nonadjacent trees on the same row where previous year’s test was conducted. sensor-nodes, whose pliers were shared between adjacent trees, were programmed with same settings as the one used in the previous year's test. thanks to the experience gained with the tests of the previous year, this campaign was performed with a new version of the pliers. structural changes in the shape were introduced in order to lighten it, to improve its adhesion to the fruit and to simplify the 3d printing process while minimising production tolerances. (figure 11) the average of the agr factor estimated using these devices was compared to the average of the same factor computed on 22 hand measured fruits, randomly selected, of the same orchard. of 15 total pliers, only 3 revealed problems with slipping on the fruit. the values measured by these pliers will therefore not be considered for the calculation of the agr. the remaining 12 channels all show approximately the same growth trend (figure 12). thanks to the new design of the pliers, a more accurate positioning of them on the fruits made it possible to decrease the number of required relocations, making the measurements less perturbed than in the previous test. in fact, significant spikes are no longer present in the acquisitions (figure 12). for agr determination, manually acquired data were collected on different dates, selected according to weather conditions and field operations. therefore, two time intervals were obtained within which the agr was computed (table 4). these values were compared with the ones obtained from the data acquired with the multi-channel sensor-nodes. in this case, figure 9. agr distribution of all the 18 apples manually measured, computed over the total time of the trial. figure 10. comparison between averaged agr computed with data collected by the prototype sensing elements and reference data collected with manual caliper. figure 11. new plier rendering. it can be seen that the structure has been lightened and the cut-out for the potentiometer has been included into the right-hand body of the plier. the spring is not shown. figure 12. measured diameter of apples on 2022 test. only working pliers are shown. acta imeko | www.imeko.org june 2023 | volume 12 | number 2 | 10 it was not considered useful to perform an analysis of the statistical distribution of agr between fruits as this had already been done in the past experimental campaign, but rather to verify the validity of the measurement with a statistically representative number of samples representing the conditions of the entire orchard. results are shown in table 5. 
in this case, it was not considered useful to repeat the analysis of the statistical distribution of the agr between fruits, as this had already been done in the past experimental campaign, but rather to verify the validity of the measurement with a statistically representative number of samples representing the conditions of the entire orchard. the results are shown in table 5.

table 4. average agr values of hand-measured fruits computed, over two time intervals in the 2022 test, with a gauge.

time interval (manual acquisition)   agr (mm/day)
01/08/2022 - 30/08/2022              0.34
30/08/2022 - 03/10/2022              0.20

table 5. average agr values of fruits, estimated with the sensor-node, over two time intervals during the 2022 test, and deviation from the agr computed on hand-measured fruits.

time interval (prototype acquisition)   agr (mm/day)   deviation (%)
18/08/2022 - 30/08/2022                 0.33           -2
30/08/2022 - 04/10/2022                 0.17           -16

the time intervals considered for calculating the agr using the hand gauges are different from those considered for calculating the same parameter with our devices. this, however, is not a source of error, since the growth pattern of an apple follows different phenological stages, each characterised by a typical growth profile; therefore, within each phenological phase, the agr value is approximately the same. in particular, the period 01/08/2022 - 30/08/2022, considered for the manual acquisition, and the period 18/08/2022 - 30/08/2022, used for the estimation of the agr with our system, belong to the same phenological phase and can therefore be compared without appreciable errors. the delay in the start of the comparison is caused by the need for the pliers to adapt to mechanical clearances: in fact, looking at figure 12, it can be seen that the growth in the first few days seems extremely slow, which is not actually the case. in the two time periods the agr value is very similar, experimentally confirming the good performance of the device.

6. conclusions

in this paper, a multi-channel sensor-node architecture and the related calibration procedure for the continuous distributed monitoring of fruit growth were presented. the low-cost prototype implemented for validation purposes, realized by means of commercial boards and 3d-printed devices, nonetheless showed an adequately high effective resolution, with a full-scale range suitable for no-manning operation throughout the entire ripening season of apple-sized fruit species. by adopting a multi-step calibration procedure, it was possible to compensate each channel in real time for temperature dispersion, noise and non-linearity, which makes the system rugged against environmental conditions and suitable for all-season operation. thanks to the energy harvesting sub-system and the possibility of real-time data transmission over the air, the sensor-node and its sensing elements can be positioned at the beginning of the season and operate, without any maintenance required, until harvest time. the design of a custom board is planned, in order to integrate the harvesting stage, the mcu and the lora transceiver for performance and power consumption optimization.

references
[1] f. j. pierce, p. nowak, aspects of precision agriculture, ser. advances in agronomy, d. l. sparks, ed., academic press, 1999, vol. 67, pp. 1–85. doi: 10.1016/s0065-2113(08)60513-1
[2] r. khan, m. zakarya, v. balasubramanian, m. a. jan, v. g. menon, smart sensing-enabled decision support system for water scheduling in orange orchard, ieee sensors journal, vol. 21, no. 16, 2021, pp. 17492–17499. doi: 10.1109/jsen.2020.3012511
[3] n. zhang, m. wang, n. wang, precision agriculture - a worldwide overview, computers and electronics in agriculture, vol. 36, no. 2, 2002, pp. 113–132. doi: 10.1016/s0168-1699(02)00096-0
[4] b.-y. ooi, s. shirmohammadi, the potential of iot for instrumentation and measurement, ieee instrumentation & measurement magazine, vol. 23, no. 3, 2020, pp. 21–26. doi: 10.1109/mim.2020.9082794
[5] d. davcev, k. mitreski, s. trajkovic, v. nikolovski, n. koteli, iot agriculture system based on lorawan, 2018 14th ieee international workshop on factory communication systems (wfcs), imperia, italy, 13-15 june 2018, pp. 1–4. doi: 10.1109/wfcs.2018.8402368
[6] m. j. faber, k. m. van der zwaag, w. g. v. dos santos, h. r. d. o. rocha, m. e. v. segatto, j. a. l. silva, a theoretical and experimental evaluation on the performance of lora technology, ieee sensors journal, vol. 20, no.
a morphological and chemical classification of bronze corrosion features from an iron age hoard (tintignac, france): the effect of metallurgical factors
acta imeko issn: 2221-870x december 2022, volume 11, number 4, 1 - 10
giorgia ghiara1, christophe maniquet2, maria maddalena carnasciali1, paolo piccardo1
1 department of chemistry and industrial chemistry (dcci), university of genoa, via dodecaneso 31, 16146 genova, italy
2 inrap limoges, 18 allée des gravelles, 87280 limoges, france
section: research paper
keywords: corrosion morphology; sn bronze; microstructure; tentacle-like corrosion; mic
citation: giorgia ghiara, christophe maniquet, maria maddalena carnasciali, paolo piccardo, a morphological and chemical classification of bronze corrosion features from an iron age hoard (tintignac, france): the effect of metallurgical factors, acta imeko, vol. 11, no. 4, article 11, december 2022, identifier: imeko-acta-11 (2022)-04-11
section editor: tatjana tomic, industrija nafte zagreb, croatia
received april 14, 2022; in final form july 14, 2022; published december 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: giorgia ghiara, e-mail: giorgia.ghiara@gmail.com
abstract: a categorization of corrosion morphologies of archaeological sn bronzes was carried out on iron age objects. the objects come from a celtic deposit located in central france (tintignac, corrèze) and are dated between the 2nd and 3rd cent. bc. since the samples of corroded metal were taken from a single find spot, the parameters connected to the features of the alloy and known to influence the corrosion morphologies could be thoroughly considered. global processes were highlighted, and corrosion mechanisms were characterised with a multi-analytical protocol (sem-eds, micro-raman spectroscopy, image analyses) according to the detected morphology. elaboration of the results was carried out with a multicomponent approach. the results show the presence of 5 different morphologies correlated to the alloy characteristics of the objects. alloy composition, microstructure, degree of deformation and grain size were found to influence the corrosion products formed and the morphology of the attack. in particular, the 'tentacle-like' corrosion, associated with a microbial attack, was the most susceptible to the effect of metallurgical features: its occurrence is connected to a more massive presence of fe and pb in the alloy, a homogeneous deformation and a larger grain size.
1. introduction
archaeological objects constitute our main source of information to study the corrosion behaviour of copper-based alloys over long timespans. ancient metals buried in soil or left in waterlogged environments for centuries give the most reliable data on how to model, and thus predict, the corrosion behaviour of modern alloys [1]-[4]. when studying the overall corrosion process, it is clear that the environment itself must be considered. much research has already addressed the variety of environments and corrosion features occurring on sn-bronzes [5]-[10], pointing out the importance of water (the electrolyte in the corrosion process) and of the nature and concentration of the salts dissolved in the solution. the morphology, composition, and the physical and chemical features of the corroded layer derive from the interaction with these corrosive agents [11]-[13]. the rate and morphology of the attack are thus affected by i) the anodic or cathodic processes that promote the growth of salts or the formation of complexing ions; ii) the evolution of the moving equilibria in the anodic or cathodic direction; iii) the conductivity of the solution [14], [15]. corrosion in soil can be more challenging since, other than water, climatic parameters such as rainfall, wind, sunlight, temperature, and microbiological activity influence the composition and the properties of the system and allow different types of mechanism to occur, one after the other or even simultaneously [16]. nevertheless, the environment is not the only parameter that scientists must consider when dealing with corrosion processes of sn-bronzes. the composition and the microstructural features of the matrix become prominent when the alloys are exposed to the same environmental conditions. focusing on the composition, corrosion properties such as nobility, passivity or cathodic and anodic processes are related to the influence of the alloying elements and their quantity inside the matrix [14]-[16].
from a historical point of view, typical alloying elements are as, zn and sn, the latter providing the best corrosion resistance [17]-[20]. furthermore, the quantity of the alloying element inside the solid solution can be beneficial or detrimental to the corrosion resistance of the material. research was undertaken in the past on the performance of single-phase cu-sn alloys and on how the growth of a passive film on the surface is promoted by large additions of sn to pure cu [17]-[19]; likewise, it is well known that a high zn content in brasses promotes a dezincification mechanism [20]. however, corrosion can also be triggered by metallurgical features such as heterogeneities, impurities, secondary phases, defects, or grain orientation [14]. the presence of impurities or defects can cause higher corrosion rates by creating preferential areas of corrosion [16], while heterogeneities inside the solid solution or intermetallic phases can lead to a selective type of corrosion. the detrimental role of secondary phases was also described in recent studies [21]. generally, the action of each parameter is reflected in a specific corrosion morphology that is categorized accordingly [7], [10], [22]-[26]. various corrosion morphologies are presented in the literature and are generally correlated to the work of robbiola et al. [7]. in their work, the authors defined two types of corrosion features for archaeological tin-bronzes in soil, according to the preservation of the original surface that allows the original shape of the artefact to be defined. these models were obtained from a statistical study of sn-bronzes with a sn composition ranging from 4 to 23 wt. % and coming from the same excavation site. both corrosion morphologies derive from an electrochemical process of oxidation of the metal (anode) and reduction of oxygen (cathode). after the build-up of a corrosion layer, the mechanism develops further with the mobility of cationic and anionic species, which interact with each other within the metal-soil system [7]. however, the influence of metallographic features was less considered, and typical metallurgical parameters such as alloy composition, microstructure, and fabrication and/or finishing techniques were not taken into account. this study aims to fill that gap for archaeological materials by considering the metallurgical parameters to play an important role in the corrosion mechanism of artifacts buried in soil. the archaeological objects come from a recently excavated deposit (tintignac, corrèze) located in the central part of france, from which it was possible to obtain the information necessary to properly interpret the corrosion behavior without external contaminations.
2. materials and methods
2.1. archaeological context
a celtic deposit was discovered around tintignac (naves, department of corrèze) in the southwest of central france and dated between the 2nd and 3rd cent. bc (figure 1). different bronze and iron objects related to warfare were deposited in a pit in the north-western area of a ritual or cultic place (fanum). most of the objects were intentionally destroyed before the deposition. the square pit measured around 1 m² in area and 30 cm in depth, and around 60 metallic objects were discovered: defensive armour (shields and helmets), war trumpets (karnykes), and a wide variety of other artifacts. to minimize the number of variables in the study, only binary bronzes with no secondary phases are taken into consideration, and only composition and microstructural features are analysed.
figure 1. different layers of the celtic deposit discovered around tintignac (naves, department of corrèze, france) and dated between the 2nd and 3rd cent. bc. (a) upper level; (b) middle level; (c) lower level.
2.2. methodology
small fragments suitable for metallurgical investigations were sampled. from each fragment/object, one or more (1 to 3) samples were taken. after a preliminary visual observation to verify the surface characteristics and preservation, the samples were mounted in cold resin and subsequently polished with diamond suspensions of decreasing grain size down to 1 μm, in agreement with the astm e3-01 standard procedure. the protocol was designed to characterise the objects through metallurgical investigations of the metallic matrix and spectroscopic analyses of the corrosion layers. the objects were studied and categorized according to their morphology, composition, and microstructural features.
2.3. analytical techniques
a preliminary characterisation was performed prior to metallographic etching by light optical microscopy (lom; mef4 m; leica microsystems, buffalo grove, il, usa) using bright-field (bf) and dark-field (df) contrast methods. the latter is particularly suitable for detecting corrosion products and allows an exhaustive detection of the mineralized areas [27]. the microstructural features of the metallic matrix and the morphology of the corrosion process were documented by lom and scanning electron microscopy (sem; zeiss evo40; carl zeiss, oberkochen, germany). chemical analyses of the metallic substrate and the corroded layers were performed with energy-dispersive x-ray spectroscopy (edxs; cambridge inca 300 with pentafet edxs detector (oxford instruments, oxfordshire, u.k.), sensitive to light elements, z > 5) connected to the sem. the eds was previously calibrated on a cobalt standard. this procedure allowed reliable values to be obtained for all elements with atomic weight higher than 11 (z ≥ 11), while for lighter elements like oxygen the analysis was considered semi-quantitative. amounts below 0.2 wt.% were also treated as semi-quantitative measurements and were evaluated only when the identification peaks were clearly visible in the acquisition spectrum. the reported compositions of the alloys correspond to the average of at least five measurements on non-corroded areas. all results on the alloys were normalized and presented in average weight percent. afterwards, chemical etching was performed on the samples with a solution of fecl3 (5 g) diluted in hcl (50 ml) and ethanol (200 ml) to reveal their microstructural features.
to estimate the grain size, the grain boundary intersection count procedure was carried out according to the astm e-112 standard. the degree of deformation of the alloy was calculated following the equations from [28], [29]. the shape factor of the inclusions (sf) can be computed from their width/length ratio and correlated to the stress absorbed by the material. the deformation applied, resulting in a reduction of thickness (d%), is described as:

$$D\% = \frac{thick_0 - thick_{end}}{thick_0} \cdot 100\,\% \,, \quad (1)$$

where $thick_0$ and $thick_{end}$ are the initial and final thickness, respectively. the equation was then further developed by correlating the sf of the inclusions to the deformation applied:

$$D\% = \frac{sf - \sqrt[3]{sf}}{sf} \cdot 100\,\% \,, \quad (2)$$

$$thick_0 = thick_{end} \cdot sf^{2/3} \,. \quad (3)$$

(a short numerical sketch of equations (1) to (3) is given at the end of this section.) to determine the nature of the corrosion products, micro-raman spectroscopy (μrs, renishaw raman system 2000, renishaw, inc., hoffman estates, il, usa) was also employed. the instrument was equipped with a peltier-cooled charge-coupled device (ccd) as detector and excited using a 632.8 nm he-ne laser, with a resolution of 1 cm-1. to estimate the thickness of the corroded layers and the percentage of sound metal in each sample, image analysis software (fiji-imagej, version 1.49b) was used on 100x lom micrographs [30]. a statistical procedure based on principal component analysis (pca) was also performed to evidence correlations between the variables and possible clustering of the samples. the original data matrix was decomposed into two smaller matrices: the loadings, which can be interpreted as the weights of each original variable in the pc computation, and the scores, which contain the original data in the newly rotated coordinate system [31]. a biplot was then created with both the loadings (variables) and the scores (samples) positioned in the new coordinate system.
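as referenced above, a minimal numerical sketch of equations (1) to (3) follows. it assumes sf is expressed so that sf ≥ 1 for elongated inclusions, consistent with equation (3), which requires thick0 ≥ thickend; the numerical values are invented for illustration.

```python
# minimal sketch of equations (1)-(3): degree of deformation d% and
# initial thickness from the inclusion shape factor sf.
# sf is assumed >= 1 (elongation), so that thick0 >= thick_end in eq. (3).

def deformation_pct_from_sf(sf: float) -> float:
    """equation (2): d% = (sf - sf^(1/3)) / sf * 100."""
    return (sf - sf ** (1.0 / 3.0)) / sf * 100.0

def initial_thickness(thick_end: float, sf: float) -> float:
    """equation (3): thick0 = thick_end * sf^(2/3)."""
    return thick_end * sf ** (2.0 / 3.0)

sf = 8.0            # hypothetical inclusion shape factor
thick_end = 1.2     # final thickness, mm (hypothetical)

d_pct = deformation_pct_from_sf(sf)
thick0 = initial_thickness(thick_end, sf)

# cross-check with equation (1): d% = (thick0 - thick_end)/thick0 * 100,
# which is algebraically equivalent to equation (2)
d_pct_check = (thick0 - thick_end) / thick0 * 100.0
print(f"d% = {d_pct:.1f} (eq. 2), {d_pct_check:.1f} (eq. 1), "
      f"thick0 = {thick0:.2f} mm")
```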
3. results and discussion
3.1. metallurgical characterisation
all the objects are made of sn-bronze, with sn as the major alloying element and as, ni, co, fe and pb as minor elements. figure 2 describes the number of objects and the frequency distribution according to the element considered (sn, fe, as, pb, ni, co). sn ranges from 6 wt. % to 16 wt. %, with the highest occurrence (more than 80 % of the samples) found in the interval 10 - 14 wt. %, confirming the high technological skills of the celtic culture [32]. the concentrations of the minor elements as, ni, co, fe and pb are below 1 wt. % (often even below 0.2 wt. %), with variations in the shape of the distribution curve according to the element. fe, pb, co and ni show a very high frequency in the range below 0.2 wt. %, which is close to the limit of detection (lod) of the eds; this outcome should be considered semi-quantitative, and more sensitive analytical techniques are needed to confirm the distribution curves proposed in this study. the element as, on the other hand, showed experimental values modelled by a gaussian-like curve, with the distribution probability centred at 0.3 wt. %.
figure 2. frequency distribution of major and minor alloying elements detected on the whole number of samples: (a) sn; (b) as; (c) fe; (d) pb; (e) ni; (f) co.
a microstructural investigation was performed through lom and sem observations, and figure 3 displays the features observed. the samples are characterised by a homogeneous microstructure with the presence of an α phase (figure 3a). polygonal grains, typical of recrystallization annealing after deformation, are observed, with twinning and slip lines consistent with a last step of cold deformation (figure 3b). some residual heterogeneities of the solid solution appeared in some samples, suggesting that the annealing temperature (above 2/3 of the melting temperature of the alloy) was not high enough to remove residues of the as-cast heterogeneity. inclusions are made of fe and cu sulfides (cu2-xfexs2) and of pure pb islands, given the immiscibility of lead in the cu-sn solid solution (figure 3c and d).
figure 3. microstructures observed on the objects. lom micrographs of: (a) polygonal grains typical of recrystallization annealing; (b) twinning and slip lines; sem-bse micrographs highlighting the composition of inclusions: (c) pb; (d) cu2-xfexs2.
following equations (1) to (3), the d % was calculated. a deformation between 15 % and 90 % was statistically associated with the microstructure, and a frequency plot (not shown) evidences a distribution of the d % more pronounced towards higher values (peak at 80 %). this outcome must however be considered as a function of the grain size and of the percentage of sn inside the solid solution. the grain size varies between 6 µm and 50 µm (not shown), according to the percentage of sn in the alloy.
3.2. corrosion morphology
the corrosion features observed on the range of samples are typologically divided into 5 groups, denoted by the letters a, b, c, d and e (see figure 4 and table 1). they are classified according to i) the nature and features of the layers (e.g., composition, heterogeneity, thickness) and ii) the features of the penetration (e.g., degree, path). they are summarised in table 1. the complexity of each corrosion morphology is connected to a specific corrosion mechanism. corrosion is a slow process that follows a logarithmic trend [5] (not considering localized types of corrosion), and the presence of sn as the major alloying element helps create a mixed sn and cu passive oxide, with a degree of protectiveness correlated to the amount of sn in the alloy [17]-[19]. the classical archaeological corrosion (type a or ii, table 1), which has already been described by many authors [7], [33]-[36], is also found in this study. from the micro-raman analysis (figure 5a), corrosion products such as cuprite (cu2o), cassiterite (sno2), a mixed form of sn and cu oxides (black spectrum), malachite (cu2co3(oh)2, red spectrum), and azurite (cu3(co3)2(oh)2) were detected. impurities such as si and al are detected by sem analyses and are responsible for the small shifts of the peaks of some of the compounds [33]. the penetration is intergranular and follows the short-circuits of diffusion [15], where the grain boundaries are the anode and the matrix is the cathode. in those areas a dissolution of sn is promoted, according to its lower standard electrode potential [37], and sn precipitates in the form of hydroxides and oxides. the formed layer is porous and permeable, so oxygen can diffuse inward, allowing the oxidation of cu. a mixed layer of cu and sn oxides is therefore formed (figure 4a; figure 5, black spectrum; figure 6a, inner layer of corrosion products, il). basic cu carbonates form in the outer layer (ol) due to the presence in the soil of co2 or carbonate ions in quantities above 200 ppm [38]. a decuprification phenomenon is also observed from the eds analyses, as well as diffusion processes of minor elements through the corrosion layers [39]-[41]. figure 6a displays the line-scan profile performed on a representative sample, evidencing the interaction of the alloy/oxide/environment system.
calculations of the sn:cu atomic ratio along the profile indicate that the values increase in the il up to a maximum of 2.7, which implies that every 3 atoms of sn in the oxide are balanced by about 1 atom of cu. as, pb and fe enrich the cu-sn mixed oxide up to 3-4 times their relative value in the alloy (e.g., from 0.3 wt. % to 1 wt. % for as). this suggests that the diffusion coefficients of those elements are high enough for them to migrate inside the corroded layers. fe shows high concentrations inside the ol (up to more than 10 times its content in the alloy), probably related to its diffusion from the soil.
table 1. description of the corrosion morphologies identified in the study.
type a
description: the so-called classical archaeological bronze corrosion mechanism in the presence of water. intergranular corrosion; red, yellowish-orange, green and blue are the predominant colors.
penetration: short-circuits of diffusion, e.g., grain boundaries, slip bands, mechanical twins. interdiffusion of ions from both the metallic substrate (cu, pb, as, ni, fe) and the soil (si, al, fe).
corrosion products: cuprite (cu2o); cassiterite (sno2); malachite (cu2co3(oh)2); azurite (cu3(co3)2(oh)2); mixed forms of tin and copper oxides.
type b
description: archaeological corrosion mechanism in which intergranular corrosion is visible in the inner part of the corrosion layers or just on one side of the sample, while compact and thick layers of different colours are visible on the other side or over the entire surface. predominant colours, beside those of the classical corrosion morphology: brownish green, yellow, brown, grey.
penetration: corrosion penetrates as a compact and thick layer, or inside cracks or preferential areas of the sample, sometimes with a waterfall effect.
corrosion products: uncommon corrosion features identified as sno2·2h2o and sn(oh)4, as well as cuprite (cu2o); cassiterite (sno2); malachite (cu2co3(oh)2); azurite (cu3(co3)2(oh)2); mixed forms of tin and copper oxides.
type c
description: globular corrosion products inside compact layers of different thickness. predominant colours, beside those of the classical corrosion morphology: brownish green, yellow, brown, white, grey.
penetration: corrosion penetrates inside the matrix, similarly to a macro-pitting mechanism, over entire areas of the sample.
corrosion products: corrosion features in globular shapes identified as sno2·2h2o and sn(oh)4; cuprite (cu2o); cassiterite (sno2); malachite (cu2co3(oh)2); azurite (cu3(co3)2(oh)2); mixed forms of tin and copper oxides.
type d
description: 1-1.5 μm wide tunnel-shaped structures. the tunnels are never completely straight but bent, dividing and crossing each other, giving the picture of roots or tentacles. they do not follow any microstructural feature such as grain or twinning boundaries or slip bands due to cold deformation.
penetration: penetrates directly into the crystal without any regard for the short-circuits of diffusion.
corrosion products: mixed forms of tin and copper oxides, with a predominance of tin-based oxides; also sno2·2h2o and sn(oh)4.
type e
description: mixtures of the previously described corrosion morphologies, in which many parameters influenced the corrosion mechanism over relatively long timespans.
penetration: as types a, b, c, d.
corrosion products: as types a, b, c, d.
type b corrosion was also already detected elsewhere [7] (figure 4b, table 1). a compact layer is found at the interface with the metallic matrix.
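the sn:cu atomic ratios quoted above are derived from the eds profiles. as a hedged illustration of the conversion, the following sketch turns weight-percent readings into an atomic ratio; the wt. % values are invented, and only the atomic masses are factual.

```python
# illustrative conversion of eds weight percent to the sn:cu atomic ratio
# used in the line-scan discussion. input wt.% values are made up.
M_CU = 63.546   # g/mol
M_SN = 118.710  # g/mol

def sn_cu_atomic_ratio(wt_sn: float, wt_cu: float) -> float:
    """atomic ratio = (wt_sn / M_SN) / (wt_cu / M_CU)."""
    return (wt_sn / M_SN) / (wt_cu / M_CU)

# hypothetical point inside the inner layer (il) of the patina
print(f"sn:cu = {sn_cu_atomic_ratio(55.0, 11.0):.1f}")  # ~2.7
```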
figure 5b summarises some of the corrosion products found for this peculiar corrosion morphology: an il of cu2o and sn(oh)4 (green spectrum), followed by sno2·2h2o; malachite (cu2co3(oh)2) or azurite (cu3(co3)2(oh)2) form in the ol. the sn oxides, which are thermodynamically stable and have a low solubility limit as doping elements [42], [43], slow down the process. a more severe decuprification phenomenon is detected, with sn:cu values from the metal/oxide interface to the edge ranging from 0.2 to 2, respectively. however, no enrichment of minor elements was detected in the corrosion layers. a variation of the b type of corrosion, type c, shows preferential areas of penetration. the corrosion layer is present along cracks, sometimes with a "waterfall" effect (figure 4c, table 1). the corrosion products are the same as previously mentioned for type b. the sn:cu value from the metal/oxide interface to the edge increases, as displayed by the line-scans (figure 6b), up to 3, which could possibly be connected to a further evolution of the burial context (possibly a lack of oxygen) [15]. pitting can potentially explain the phenomenon, a localized mechanism in which the cathodes and anodes are different areas of the sample [14]. however, it is very difficult to detect what triggered the process: either local ruptures of the protective sn-rich passivation film due to foreign bodies, macro-heterogeneities of the film, or areas of accumulation of aggressive ions such as cl- [15]. the type d or "tentacle-like" corrosion [44] does not follow the above-described mechanism but penetrates directly into the crystal without any regard for the short-circuits of diffusion (figure 4d, table 1). the morphology shows a tunnel-like structure entering the crystal with apparently irregular directions. this type of corrosion and its features have led to the hypothesis of a microbial influence in the mechanism, and the attribution to microbiologically influenced corrosion (mic) was proven and discussed in more detail elsewhere [45]-[47].
figure 4. corrosion morphologies associated with different corrosion mechanisms and defined as: (a) type a; (b) type b; (c) type c; (d) type d; (e) type e.
figure 5. raman spectra collected with a 632.8 nm laser on (a) type a corrosion morphology; (b) type b corrosion morphology; (c) type d corrosion morphology. parameters of the analysis: acquisition time: 10 s; number of accumulations: from 1 to 4; power: 25 % transmittance. vibrational band attribution: 5a: mixture of cu2o and sno2; malachite; 5b: sn(oh)4; sno2·2h2o with shifts due to the soil.
a decuprification process did not occur, as displayed by the line-scan in figure 6c, since the kα value of cu does not decrease linearly but fluctuates along the corrosion layer. also, higher amounts of pb are found along the profile towards the ol. accordingly, the sn:cu ratio fluctuates from the metallic interface to the edge, with extrema of 0.3 at the metal/oxide interface and a peak of 2 at the il. this suggests that bacteria promote the formation of sn products at the il and of a mixed form of cu and sn oxides at the ol, influencing the overall corrosion mechanism [47].
when all types of corrosion morphology are observed on the same artefact, with amplitudes which may vary from one area to another, type e is defined (figure 4e, table 1). this type of corrosion was found only on a limited set of samples and is related to changes in the environmental context that could lead to the formation of a mixture of the corrosion morphologies. however, the information concerning this corrosion morphology is still under investigation, considering the lack of information on the evolution of the environmental conditions.
3.3. metallurgical parameters in the corrosion process
the correlations between the alloys' microstructural features and the occurrence of the typical corrosion morphologies are given by the pca biplot in figure 7 (a sketch of how such a biplot can be produced is given at the end of this section). as displayed in the graph, correlations between the variables are visible. the distribution of the a type of corrosion with respect to the variable sn is rather stochastic.
figure 6. line-scan profiles of different sites of interest based on the kα energies of cu, sn, fe, as and pb of the different morphologies: a) type a; b) type c; c) type d. il = inner layer of the oxidation patina; ol = outer layer of the oxidation patina.
figure 7. biplot of the variables influencing the corrosion morphologies considered, according to the position of the samples in the newly rotated space. roman numbers indicate the quadrant.
this outcome agrees with other archaeological studies where this type of corrosion was associated with objects in which the sn content varied enormously, from 5 to 20 wt. % [5]-[8], [10], [22], as in our case (figure 8). type b and type c corrosion morphologies show a shift in the biplot towards the iii quadrant, in the sn direction. this is consistent with the frequency distribution of sn for such morphologies (figure 8), which exhibits a narrow range of composition (from 11 to 15 wt. % sn). this could be related to the passivating properties of the element, which could shift the corrosion potential towards higher values. thus, the composition of the alloy is a rate-controlling factor for the electrochemical corrosion mechanism, as previously stated [17]-[19]. sn and cu are not variables that influence type d morphology, since the samples move along a line that connects cu and sn (obviously negatively correlated, since the increase of one depletes the alloy of the other), as confirmed by the data (figure 7 and figure 8). minor elements (as, co, fe, ni and pb) do not influence the occurrence of types a, b and c. a trend for corrosion morphology d is discernible from the pca analysis along a line from the ii to the iv quadrant. this morphology is affected by the presence of the minor elements fe and pb: the higher the concentration of these elements in the alloy (up to 1.2 wt. % of pb and 1 wt. % of fe), the higher the probability of finding 'tentacle-like' corrosion (mic) features. it is known that pb and fe in sufficient quantity (above 1 %, in our case) modify the corrosion mechanism towards a selective type, culminating in their dissolution [15]. however, it is still not clear how their presence can affect such a morphology, since it is influenced by bacterial colonization. we can only infer that their occurrence promotes the microbial attack.
we can hypothesize that: i) the dissolution of these elements is less harmful to the survival of bacteria compared to sn and cu [48], [49]; ii) bacteria can exploit these elements for their metabolism, using them as an alternative to carbon as an energy source [50], [51]. the degree of deformation and the grain size affect all types of microstructure to a different extent (figure 9 and figure 10). in particular, the degree of deformation seems to positively influence the occurrence of corrosion types a, b and c, while it is detrimental for type d. samples of the d type are negatively correlated to d % (as they are positioned in the ii and iii quadrants); this indicates that the higher the degree of deformation, the lower the occurrence of type d. figure 9 describes the degree of deformation according to the morphology observed. type d has a frequency distribution centred around 35 %, while for the other morphologies it shifts towards high values (> 65 %). however, the d % must be considered as a function of the grain size, and all results must be evaluated accordingly. this parameter affects the morphologies differently, as suggested by figure 11, coherently with the outcomes obtained from the d %. a bivariate plot of grain size vs number of samples is displayed in figure 10. it is evident that the smaller the grain size, the higher the occurrence of corrosion types a, b, and c: the higher the number of polygonal grains, the higher the possibility of finding an intergranular penetration, since defects and impurities are localized in the areas with higher gibbs free energy [15]. on the contrary, type d shows an opposite trend. figure 10d shows a normal distribution of grain size for type d with a peak at around 35-40 µm, which is twice as large as the average grain size found for all the samples. this outcome is very interesting but still under study since, to our knowledge, no information is available on the relationship between the grain size, the d %, and the occurrence of microbial corrosion on cu-based alloys. on stainless steels, it seems that the smaller the grain size, the more effective the penetration of the corrosion [52], [53], which is in contradiction with this study. it is indeed difficult to compare the mechanism on a different material with the one found for cu alloys, since the microstructural features are different (e.g., the presence of grain-boundary precipitates). it is only possible to suppose that, on a given surface, the number of impurities (localized at the grain boundaries) is lower for larger grains, and this could become a key parameter for the survival of the colonies and the occurrence of a bacteria-mediated corrosion mechanism. in this perspective, concomitant analyses on both archaeological bronzes with mic morphology and experimental alloys are needed to better understand the effect of the grain size.
figure 8. frequency distribution of the sn wt. % according to the corrosion type detected in the whole range of objects: (a) type a; (b) type b; (c) type c; (d) type d.
figure 9. frequency distribution of the degree of deformation according to the corrosion type detected in the whole range of objects: (a) type a; (b) type b; (c) type c; (d) type d.
figure 10. frequency distribution of the grain size according to the corrosion type detected in the whole range of objects: (a) type a; (b) type b; (c) type c; (d) type d.
figure 11. sem elemental mapping of type d corrosion morphology with the distribution of the chosen elements along the scanning area.
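as referenced in section 3.3, a biplot like the one in figure 7 can be produced with standard tools. the sketch below uses scikit-learn and matplotlib on invented data; the variable names simply mirror those discussed in the text, and this is not the authors' actual processing script.

```python
# minimal pca-biplot sketch: scores (samples) and loadings (variables)
# drawn in the plane of the first two principal components.
# the data matrix here is random and purely illustrative.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
variables = ["sn", "cu", "fe", "pb", "as", "d%", "grain size"]
X = rng.normal(size=(40, len(variables)))  # 40 fake samples

Xs = StandardScaler().fit_transform(X)     # autoscale before pca
pca = PCA(n_components=2).fit(Xs)
scores = pca.transform(Xs)
loadings = pca.components_.T               # one row per variable

fig, ax = plt.subplots()
ax.scatter(scores[:, 0], scores[:, 1], s=12, alpha=0.6, label="samples")
for (lx, ly), name in zip(loadings, variables):
    # arrows and labels for the loadings, scaled for readability
    ax.arrow(0, 0, 3 * lx, 3 * ly, head_width=0.05, length_includes_head=True)
    ax.annotate(name, (3 * lx, 3 * ly))
ax.axhline(0, lw=0.5)
ax.axvline(0, lw=0.5)
ax.set_xlabel("pc1")
ax.set_ylabel("pc2")
plt.show()
```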
4. conclusion
the main aim of this work was to isolate the typical metallurgical parameters that influence the occurrence of specific corrosion morphologies. a categorization of the objects according to composition, microstructure and corrosion type was proposed. the nature of the alloy and the microstructural features act as rate-controlling factors for the electrochemical corrosion mechanism. the results show that metallurgical factors do not play a major role in the corrosion morphology of type a, where neither the composition nor the thermomechanical treatments influence its occurrence. on the other hand, a high correlation of type b and type c with the alloy composition suggests a higher degree of protection due to the tendency of sn to naturally passivate. the most interesting results come from the type d corrosion morphology (tentacle-like or mic morphology). it is affected by: i) the presence of specific minor elements in the alloy (i.e., fe and pb in quantities up to 1 %); ii) the presence of a microstructure bearing a large grain size (40 µm, twice as large as for the other morphologies) and a low degree of deformation (peak at 35 %). the possible causes of this trend are still unclear, and more research will be carried out in the future to clarify this point.
acknowledgement
the authors would like to thank the scientific team in charge of the 'project tintignac', in the person of the mayor of the city of naves-en-corrèze, for allowing us to sample the objects; m. drieux-daguerre of the laboratory materia viva, toulouse (fr); b. armbruster from cnrs-umr 5608, toulouse (fr).
references
[1] m. kibblewhite, g. tóth, t. hermann, predicting the preservation of cultural artefacts and buried materials in soil, sci. tot. environm., 529 (2015), pp. 249-263. doi: 10.1016/j.scitotenv.2015.04.036
[2] j. f. d. stott, g. john, a. a. abdullahi, corrosion in soils, reference module in materials science and materials engineering, elsevier, 2018. doi: 10.1016/b978-0-12-803581-8.10524-7
[3] m. c. bernard, s. joiret, understanding corrosion of ancient metals for the conservation of cultural heritage, electrochim. acta, 54(22) (2009), pp. 5199-5205. doi: 10.1016/j.electacta.2009.01.036
[4] f. king, a natural analogue for the long-term corrosion of copper nuclear waste containers—reanalysis of a study of a bronze cannon, appl. geochem., 10(4) (1995), pp. 477-487. doi: 10.1016/0883-2927(95)00019-g
[5] l. robbiola, r. portier, a global approach to the authentication of ancient bronzes based on the characterization of the alloy–patina–environment system, j cult. herit., 7(1) (2006), pp. 1-12. doi: 10.1016/j.culher.2005.11.001
[6] r. f. tylecote, the effects of soil conditions on the long-term corrosion of buried tin-bronzes and copper, j. archaeol. sci., 6 (1979), pp. 345-368. doi: 10.1016/0305-4403(79)90018-9
[7] l. robbiola, j.-m. blengino, c. fiaud, morphology and mechanisms of formation of natural patinas on archaeological cu-sn alloys, corros. sci., 40 (1998), pp. 2083-2111. doi: 10.1016/s0010-938x(98)00096-1
[8] l. he, j. liang, x. zhao, b. jiang, corrosion behavior and morphological features of archeological bronze coins from ancient china, microchem. j, 99(2) (2011), pp. 203-212. doi: 10.1016/j.microc.2011.05.009
[9] j. redondo-marugán, j. piquero-cilla, m. t. doménech-carbó, b. ramírez-barat, w. al sekhaneh, s. capelo, a. doménech-carbó, characterizing archaeological bronze corrosion products intersecting electrochemical impedance measurements with voltammetry of immobilized particles, electrochim. acta, 246 (2017), pp. 269-279. doi: 10.1016/j.electacta.2017.05.190
[10] g. m. ingo, c. riccucci, c. giuliani, a. faustoferri, i. pierigè, g. fierro, m. pascucci, m. albini, g. di carlo, surface studies of patinas and metallurgical features of uncommon high-tin bronze artefacts from the italic necropolises of ancient abruzzo (central italy), appl. surf. sci., 470 (2019), pp. 74-83. doi: 10.1016/j.apsusc.2018.11.115
[11] n. souissi, e. sidot, l. bousselmi, e. triki, l. robbiola, corrosion behaviour of cu–10sn bronze in aerated nacl aqueous media – electrochemical investigation, corros. sci., 49 (2007), pp. 3333-3347. doi: 10.1016/j.corsci.2007.01.013
[12] f. ammeloot, c. fiaud, e. m. m. sutter, characterization of the oxide layers on a cu-13sn alloy in a nacl aqueous solution without and with 0.1 m benzotriazole. electrochemical and photoelectrochemical contributions, electrochim. acta, 44(15) (1999), pp. 2549-2558. doi: 10.1016/s0013-4686(98)00391-0
[13] e. sidot, n. souissi, l. bousselmi, e. triki, l. robbiola, study of the corrosion behaviour of cu–10sn bronze in aerated na2so4 aqueous solution, corros. sci., 48(8) (2006), pp. 2241-2257. doi: 10.1016/j.corsci.2005.08.020
[14] r. francis, the corrosion of copper and its alloys: a practical guide for engineers, nace international, houston, 2010.
[15] l. l. shreir, r. a. jarman, g. t. burstein (eds), corrosion, butterworth-heinemann, oxford, 1963.
[16] r. w. revie, h. h. uhlig (eds), corrosion and corrosion control. an introduction to corrosion science and engineering (4th edition), john wiley and sons, hoboken (nj), 2008, isbn: 978-0-471-73279-2.
[17] m. j. hutchison, j. r. scully, patina enrichment with sno2 and its effect on soluble cu cation release and passivity of high-purity cu-sn bronze in artificial perspiration, electrochim. acta, 283 (2018), pp. 806-817. doi: 10.1016/j.electacta.2018.06.125
[18] d. j. horton, h. ha, l. l. foster, h. j. bindig, j. r. scully, tarnishing and cu ion release in selected copper-base alloys: implications towards antimicrobial functionality, electrochim. acta, 169 (2015), pp. 351-366. doi: 10.1016/j.electacta.2015.04.001
[19] m. j. hutchison, p. zhou, k. ogle, j. r. scully, enhanced electrochemical cu release from commercial cu-sn alloys: fate of the alloying elements in artificial perspiration, electrochim. acta, 241 (2017), pp. 73-88. doi: 10.1016/j.electacta.2017.04.092
[20] e. sarver, y. zhang, m. edwards, review of brass dezincification corrosion in potable water systems, corr. rev., 28(3-4) (2010), pp. 155-196.
doi: 10.1515/corrrev.2010.28.3-4.155
[21] l. c. tsao, c. w. chen, corrosion characterization of cu–sn intermetallics in 3.5 wt.% nacl solution, corros. sci., 63 (2012), pp. 393-398. doi: 10.1016/j.corsci.2013.11.010
[22] g. m. ingo, c. riccucci, g. guida, m. pascucci, c. giuliani, e. messina, g. fierro, g. di carlo, micro-chemical investigation of corrosion products naturally grown on archaeological cu-based artefacts retrieved from the mediterranean sea, appl. surf. sci., 470 (2019), pp. 695-706. doi: 10.1016/j.apsusc.2018.11.144
[23] h. wei, w. kockelmann, e. godfrey, d. a. scott, the metallography and corrosion of an ancient chinese bimetallic bronze sword, j cult. herit., 37 (2019), pp. 259-265. doi: 10.1016/j.culher.2018.10.004
[24] o. oudbashi, s. m. emami, a note on the corrosion morphology of some middle elamite copper alloy artefacts from haft tappeh, south-west iran, stud. conserv., 55(1) (2010), pp. 20-25. doi: 10.1179/sic.2010.55.1.20
[25] d. a. scott, an examination of the patina and corrosion morphology of some roman bronzes, j am. inst. conserv., 33(1) (1994), pp. 1-23. doi: 10.1179/019713694806066419
[26] i. g. sandu, o. mircea, v. vasilache, i. sandu, influence of archaeological environment factors in alteration processes of copper alloy artifacts, microscopy res. techn., 75(12) (2012), pp. 1646-1652. doi: 10.1002/jemt.22110
[27] m. r. pinasco, m. g. ienco, p. piccardo, g. pellati, e. stagno, metallographic approach to the investigation of metallic archaeological objects, annali di chimica: j anal., environm. cult. herit. chem., 97(7) (2007), pp. 553-574. doi: 10.1002/adic.200790037
[28] p. piccardo, m. pernot, studio analitico strutturale di alcuni vasi celtici in bronzo, la metallurgia italiana, 11 (1997), pp. 43-52.
[29] m. mödlinger, p. piccardo, manufacture of eastern european decorative tin-bronze discs from twelfth century bc, archaeol. anthropol. sci., 5 (2013), pp. 299-309. doi: 10.1007/s12520-012-0111-6
[30] j. schindelin, i. arganda-carreras, e. frise, v. kaynig, m. longair, t. pietzsch, s. preibisch, c. rueden, s. saalfeld, b. schmid, j. y. tinevez, d. j. white, v. hartenstein, k. eliceiri, p. tomancak, a. cardona, fiji: an open-source platform for biological-image analysis, nat. methods, 9(7) (2012), pp. 676-682. doi: 10.1038/nmeth.2019
[31] y. mori, m. kuroda, n. makino, nonlinear principal component analysis and its applications, springer, singapore, 2016. doi: 10.1007/978-981-10-0159-8
[32] c. maniquet, b. armbruster, m. pernot, t. lejars, m. drieux-daguerre, l. espinasse, p. mora, aquitania, 27 (2011), pp. 63-150.
[33] p. piccardo, b. mille, l. robbiola, tin and copper oxides in corroded archaeological bronzes, in: dillmann, p., beranger, g., piccardo, p., matthiesen, h. (eds), corrosion of metallic heritage artefacts—investigation, conservation and prediction of long-term behaviour, woodhead, cambridge, 2007, pp. 239-262. doi: 10.1533/9781845693015.239
[34] k. tronner, a. g. nord, g. c. borg, corrosion of archaeological bronze artefacts in acidic soil, water, air, soil pollut., 85 (1995), pp. 2725-2730. doi: 10.1007/bf01186246
[35] g. m. ingo, t. de caro, c. riccucci, e. angelini, s. grassini, s. balbi et al., large scale investigation of chemical composition, structure and corrosion mechanism of bronze archeological artefacts from mediterranean basin, appl. phys. a, 83 (2006), pp. 513-520. doi: 10.1007/s00339-006-3550-z
[36] a. doménech-carbó, m. t. doménech-carbó, i. martínez-lázaro, electrochemical identification of bronze corrosion products in archaeological artefacts. a case study, microchim. acta, 162 (2008), pp. 351-359. doi: 10.1007/s00604-007-0839-3
[37] p. atkins, l. jones, chemical principles: the quest for insight (3rd ed.), w. h. freeman and company, new york, 2005, isbn 9781319154196.
[38] d. a. scott, copper and bronze in art: corrosion, colorants, and conservation, j. paul getty museum, new york, 2002.
[39] x. deng, q. zhang, e. zhou, c. ji, j. huang, m. shao, m. ding, x. xu, morphology transformation of cu2o sub-microstructures by sn doping for enhanced photocatalytic properties, j alloys comp., 649 (2015), pp. 1124-1129. doi: 10.1016/j.jallcom.2015.07.124
[40] y. du, n. zhang, c. wang, photo-catalytic degradation of trifluralin by sno2-doped cu2o crystals, catalysis commun., 11 (2010), pp. 670-674. doi: 10.1016/j.catcom.2010.01.021
[41] n. budhiraja, sapna, v. kumar, m. tomar, v. gupta, s. k. singh, investigation on physical properties of sn-modified cubic cu2o nanostructures, j supercond nov magn, 32 (2019), pp. 1671-1679. doi: 10.1007/s10948-018-4858-6
[42] c. b. fitzgerald, m. venkatesan, a. p. douvalis, s. huber, j. m. d. coey, sno2 doped with mn, fe or co: room temperature dilute magnetic semiconductors, j appl. phys., 95 (2004), pp. 7390-7392. doi: 10.1063/1.1676026
[43] z. junying, y. qu, w. qianghong, room temperature ferromagnetism of ni-doped sno2 system, modern appl. sci., 4(11) (2010). doi: 10.5539/mas.v4n11p124
[44] p. piccardo, m. mödlinger, g. ghiara, s. campodonico, v. bongiorno, investigation on a "tentacle-like" corrosion feature on bronze age tin-bronze objects, appl. phys. a, 113(4) (2013), pp. 1039-1047. doi: 10.1007/s00339-013-7732-1
[45] g. ghiara, c. grande, s. ferrando, p. piccardo, the influence of pseudomonas fluorescens on corrosion products of archaeological tin-bronze analogues, jom, 70(1) (2018), pp. 81-85. doi: 10.1007/s11837-017-2674-2
[46] g. ghiara, l. repetto, p. piccardo, the effect of pseudomonas fluorescens on the corrosion morphology of archaeological tin bronze analogues, jom, 71(2) (2019), pp. 779-783. doi: 10.1007/s11837-018-3138-z
[47] g. ghiara, r. spotorno, s. p. trasatti, p. cristiani, effect of pseudomonas fluorescens on the electrochemical behaviour of a single-phase cu-sn modern bronze, corros. sci., 139 (2018), pp. 227-234. doi: 10.1016/j.corsci.2018.05.009
[48] b. little, p. wagner, f. mansfeld, an overview of microbiologically influenced corrosion, electrochim. acta, 37(12) (1992), pp. 2185-2194.
doi: 10.1016/0013-4686(92)85110-7
[49] b. j. little, j. s. lee, microbiologically influenced corrosion, john wiley and sons, hoboken, 2007.
[50] d. enning, j. garrelfs, corrosion of iron by sulfate-reducing bacteria: new views of an old problem, appl. environm. microbiol., 80(4) (2014), pp. 1226-1236. doi: 10.1128/aem.02848-13
[51] s. m. tiquia-arashiro, lead absorption mechanisms in bacteria as strategies for lead bioremediation, appl. microbiol. biotechnol., 102(13) (2018), pp. 5437-5444. doi: 10.1007/s00253-018-8969-6
[52] j. r. ibars, d. a. moreno, c. ranninger, mic of stainless steels: a technical review on the influence of microstructure, intern. biodeter. biodegrad., 29 (1992), pp. 343-355. doi: 10.1016/0964-8305(92)90051-o
[53] x. shi, k. yang, m. yan, w. yan, y. shan, study on microbiologically influenced corrosion resistance of stainless steels with weld seams, front. mater., (2020). doi: 10.3389/fmats.2020.00083
biomechanics in crutch assisted walking
acta imeko issn: 2221-870x december 2022, volume 11, number 4, 1 - 5
francesco crenna1, matteo lancini2, marco ghidelli3, giovanni b. rossi1, marta berardengo1
1 measurement and biomechanics lab – dime – università degli studi di genova, via opera pia 15 a, 16145 genova, italy
2 dsmc, università degli studi di brescia, v.le europa 11, 25121 brescia, italy
3 dii, università degli studi di brescia, via branze 38, 25123 brescia, italy
section: research paper
keywords: biomechanical measurements; crutches; articular loads; force measurements
citation: francesco crenna, matteo lancini, marco ghidelli, giovanni b. rossi, marta berardengo, biomechanics in crutch assisted walking, acta imeko, vol. 11, no. 4, article 6, december 2022, identifier: imeko-acta-11 (2022)-04-06
section editor: eric benoit, université savoie mont blanc, france
received july 9, 2022; in final form december 11, 2022; published december 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this research was partially supported by the eu h2020 program, project eurobench, grant n° 779963 – sub-projects bullet, sledge and faet.
corresponding author: francesco crenna, e-mail: francesco.crenna@unige.it

abstract

crutch-assisted walking is very common among patients with a temporary or permanent impairment affecting lower limb biomechanics. correct crutch handling is the way to avoid undesired side effects on lower limb recovery or, in chronic users, diseases of the upper limb joints. active exoskeletons for spinal cord injured patients are commonly crutch assisted; in such cases, in which the upper limbs must be preserved, specific training in crutch use is mandatory. as a first step, a walking test setup was prepared to monitor healthy volunteers during crutch use. measurements were performed by using both a motion capture system and instrumented crutches measuring the load distribution. in this paper, we present preliminary test results based on subjects with a variety of anthropometric characteristics walking with parallel or alternate crutches, the so-called three-points and two-points strategies. the test results present inter- and intra-subject variabilities and, as a first goal, the influencing factors affecting crutch loads have been identified. in the future, we aim to address crutch use errors that could lead to delayed recovery or upper limb problems in patients, giving valuable information to physicians and therapists to improve user training.

1. introduction

different diseases may require patients to use crutches in their daily life. while upper limb fatigue is limited when a temporary impairment is considered, it may become an important issue for permanent impairments such as those due to stroke or multiple sclerosis. these situations are rather common and important in today's society: stroke is the leading cause of movement disability in the usa and europe [1]. people who suffer a stroke experience changes in strength, muscle tone, and neuromuscular coordination [2], the consequences of which are mobility, balance, and walking disabilities [3]. similar symptoms are present in multiple sclerosis, along with fatigue and cerebellar involvement. up to 10% of adults suffer from reduced mobility because of conditions such as a central nervous system lesion that affects balance and gait. on the other hand, walking is a fundamental human activity [4] and, if it is impaired, people prioritise it as a goal of treatment [5].

in europe, walking aids such as crutches are the most prescribed tools in case of central nervous system lesions [6] and, in a gait rehabilitation framework, physical therapists guide patients in using crutches to better support weight, by reducing the magnitude of the load on the legs, and to improve balance, by increasing the body's base of support [7]. moreover, crutch use is fundamental for people walking with the assistance of exoskeletons, for example after a spinal cord injury (sci). exoskeletons help in closing the gap toward a normal life for sci people. since, generally speaking, exoskeletons require the simultaneous use of a pair of crutches, their continuous and daily usage requires attention to possible consequences such as shoulder pain [8].

on this basis, a pair of instrumented crutches was developed to measure both crutch load and orientation [8], [9], and they were integrated with an optoelectronic motion capture system, an anthropometric volume scanner, and a biomechanical model in the bullet project [10], [11]. figure 1 depicts the main concept of bullet. the bullet biomechanical model is fed with kinematic data describing trajectories and accelerations [12] and with crutch force data describing the movement dynamics. eventually, the ground reaction forces under the feet can be included to operate the model in its complete, full-body version. to obtain the upper limb loads, the model also requires the subject's anthropometry. such segment-level information is generally obtained from anthropometric tables, where values relative to the overall subject mass and height are reported [13], [14]. to compensate for the considerable subject-to-subject differences that are common among exoskeleton users, bullet includes a volume scanner that measures segment volumes using a kinect azure camera [15], [16]. the scanner considers a set of subject images recorded in both rgb and tof frames, and a segmentation software based on the biomechanical model definition to obtain the segment volumes. segment masses and inertias are then determined considering the segment density values reported in tables and in the literature [14]; a minimal sketch of this step is given at the end of this section. the bullet biomechanical model processes the input data to obtain the limb loads, with special attention to the torques at the shoulders [17].

the focus, in this case, is related to exoskeleton-assisted walking in the eurobench project framework [11], but the approach is general [18] and, in this paper, some preliminary results are presented for healthy subjects walking with crutches without exoskeletons. the main goal is to investigate differences in crutch-assisted gait following the three-points strategy – parallel crutch use – or the two-points strategy – alternate crutch movement. therefore, some results regarding crutch kinematics and loads for the two strategies are presented here.
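the anthropometry step described above can be made concrete with a short sketch that converts scanned segment volumes into masses. the density values below are illustrative placeholders, not the tabulated values actually used in bullet [14]:

```python
# minimal sketch of the anthropometry step: segment masses from the scanned
# volumes and tabulated densities. density values are placeholders only.
SEGMENT_DENSITY = {       # kg/m^3, illustrative values
    "upper_arm": 1070.0,
    "forearm": 1130.0,
    "hand": 1160.0,
}

def segment_masses(volumes_m3):
    """volumes_m3: segment name -> volume (m^3) from the kinect azure scan."""
    return {name: SEGMENT_DENSITY[name] * v for name, v in volumes_m3.items()}

# example: made-up volumes for one subject
print(segment_masses({"upper_arm": 2.1e-3, "forearm": 1.4e-3, "hand": 0.5e-3}))
```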
2. experimental setup and protocol

the experimental setup includes an optoelectronic motion capture system, to reconstruct the full-body and crutch kinematics during gait; two force plates for measuring the ground reaction forces under the left and right foot – both from bts bioengineering, milan, italy; and a pair of instrumented crutches to measure the force load and orientation on each side, as described in [9]. calibration procedures were applied for both the optoelectronic and the force measurement systems [18] before each subject's test session. in the following, special attention will be given to the crutch kinematics obtained from the optoelectronic system and to the related crutch forces.

the experimental protocol requires placing a set of 39 markers on specific subject landmarks, plus three on each crutch. only healthy subjects are involved in this experimentation. since they have to use crutches simulating a spinal cord injury walking impairment, they trained for a short time before the test: subjects are asked to behave as if they had problems moving and loading both legs, so a pre-training is required to establish a proper crutch load and movement, according to the experimenter's instructions. moreover, during the training, subjects find the proper path and foot sequence in the corridor, so as to place the feet properly on the force plates without crutch interference. during the training and the following tests there is no real-time control on the crutch load, so, in the end, it depends on the subject's voluntary behaviour. then, after performing the calibration procedures for all the instruments, at least three repetitions for each walking condition were carried out: three-points gait with parallel crutches and two-points gait with alternate crutches, as shown in figure 2. a set of 14 subjects (2 of whom female; mean age 25.7 years, standard deviation 5.6 years) underwent the experimental protocol, after signing an informed consent agreement. finally, 124 valid tests were obtained, 65 of which in the two-points (alternate) gait condition.

3. experimental results

the biomechanical model can operate in complete – whole body – or partial – upper limbs only – mode. in the following we consider the whole-body version, which includes 18 segments to describe the subject and crutch movement, as shown in figure 3 for an alternate – two-points – gait.

figure 1. bullet conceptual scheme.
figure 2. alternate and parallel gait strategies.
figure 3. alternate and parallel gait strategies.

figure 4 presents an example of the three crutch force components according to the biomechanical reference system indicated in figure 2 – x anteroposterior, y vertical, z mediolateral – for a three-points gait, left and right sides. by using the force measurement it is possible to define the crutch contact on the ground (figure 3, lower graph) and consequently identify the initial and final contact angles on the two most important planes describing the movement: the sagittal (x, y) and the frontal (z, y) ones. besides that, it is possible to compute the angular crutch range of motion (rom) and the maximum and rms load force during crutch ground contact. since data recording was limited to the few gait cycles near and on the force plates, we have excluded from these computations the runs in which some data was missing: for example, when no initial or final crutch ground contact was recorded, so that it was not possible to evaluate the parameters on a complete contact phase. in the following, all the trained subjects are considered, independently of their ability to simulate the impairment. a minimal sketch of this contact-phase processing is given below.
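the following sketch illustrates one plausible way to extract the contact-phase load parameters from the vertical crutch force; the threshold criterion is an assumption of ours, not necessarily the one used in the bullet processing:

```python
import numpy as np

def contact_phase_parameters(f_y, fs, threshold=20.0):
    """extract load parameters of a crutch ground-contact phase.

    f_y: sampled vertical crutch force (N); fs: sampling rate (Hz);
    threshold: force level (N) assumed to define ground contact.
    returns (t_start, t_end, f_max, f_rms) for the recorded contact,
    or None when no contact is found (such runs are excluded, see text).
    """
    f_y = np.asarray(f_y, dtype=float)
    idx = np.flatnonzero(f_y > threshold)
    if idx.size == 0:
        return None
    segment = f_y[idx[0]:idx[-1] + 1]
    f_rms = float(np.sqrt(np.mean(segment ** 2)))
    return idx[0] / fs, idx[-1] / fs, float(segment.max()), f_rms
```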
results from the 124 valid tests are summed up in table 1 as regards the mean values and relative standard deviations of the maximum crutch forces normalised by the subjects' weight. note that the variability includes both inter-subject and intra-subject variability. the large relative standard deviation suggests that, besides subject behaviour, other differences are present: for example, the gait condition might affect crutch loading. moreover, even if the results are normalised by the subject's weight, other anthropometric differences might play an important role in the crutch load. on this basis, the biomechanical model can evaluate the shoulder torques. figure 5 and figure 6 present the l/r shoulder forces and torques in relation to the crutch contact for a parallel gait.

it is worth noting that the model we are considering is purely mechanical, so the muscle action is lumped into the torques at the shoulder joints, while the forces do not include the reactions due to muscle actions applied at the tendon insertion points. even if this is certainly an approximation, the proposed approach is free from any assumption regarding tendon anthropometry and muscle force behaviour, which in our specific case could be critical, since we are ultimately considering injured subjects walking with crutches.

figure 4. l/r crutch relative forces in the three directions: parallel gait.
figure 5. l/r shoulder forces for parallel gait.
figure 6. l/r shoulder torques for parallel gait.
figure 7. r crutch vertical maximum relative forces for alternate/parallel movement. boxes represent 25%-75% intervals, red lines median values.

table 1. maximum crutch vertical load: results summary
                    right                              left
             mean (% of     relative          mean (% of     relative
             sbj weight)    std. dev.         sbj weight)    std. dev.
parallel         30           32 %                34           30 %
alternate        21           43 %                24           40 %

4. discussion

as mentioned in the introduction, the focus here is on the two crutch use modalities and on the percentage of the subject weight that is moved from the lower limbs toward the crutches. as shown in figure 4, the main load behaviour is very similar for the left/right crutches, while the mediolateral component (z, orange in figure 4) is opposed, due to the opposed crutch contact angle in the mediolateral plane. the overall data set can be divided according to the two gait conditions – parallel/alternate – and, since we have several repetitions (about 3) for a rather consistent set of subjects (18), we can perform a multiple-way analysis of variance (anova). the analysed variable can be selected among the available measured gait parameters: we can consider a crutch-centred approach, considering both crutch kinematics and dynamics, or a subject-centred approach, considering the shoulder internal loads in relation to the gait behaviour.

as an example of the first approach, we consider the maximum and rms vertical load relative to the subject's overall weight. as factors for the analysis, besides the mentioned gait conditions, we consider kinematic parameters, such as the angular rom of the crutch around the mediolateral axis, and subject anthropometric characteristics such as weight, height, or the body mass index (bmi), defined as the ratio between mass and squared height. in table 2 we present an example of anova results, including f values. considering the maximum vertical load on the right and left crutches, the f values confirm that the gait condition is significant with a probability < 10⁻⁴. the box plot in figure 7 shows an evident gait strategy effect on the load levels. there is also evidence of a large variability, probably due to the protocol, which requires simulating an impairment (see section 2), and to the absence of an online verification of such imposed behaviour. moreover, there is evidence that, even when working with a load normalised on the subject weight, the subject's bmi is significant (p < 10⁻⁴), indicating that the way crutches are loaded is not simply related to the subject's mass. however, the box plot in figure 8 shows a less evident load dependency on the three bmi categories, defined as follows: bmi < 21.5 kg/m², 21.5 kg/m² ≤ bmi < 25 kg/m², and bmi ≥ 25 kg/m². this aspect deserves some attention in future data analysis to investigate the most significant subject anthropometric characteristic. a minimal sketch of this analysis is given below.
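a multi-way anova of this kind can be reproduced with standard tools; the file and column names in the following sketch are hypothetical, not the authors' actual data layout:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# one row per valid test; names are hypothetical:
# max_fy_rel = maximum vertical crutch load / subject weight,
# gait = "parallel" | "alternate", bmi_cat = the three bmi classes above
df = pd.read_csv("crutch_tests.csv")

model = ols("max_fy_rel ~ C(gait) + C(bmi_cat) + mass + height + rom",
            data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # f value and probability per factor
```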
these results affect the shoulder loads obtained from the biomechanical model. as shown in figure 6, the torque time behaviour is similar for the right/left limbs and presents its maximum value, as expected, for the torque around the mediolateral axis (z). on this basis, a good example is the analysis of variance of the maximum shoulder torque around z. the alternate/parallel walking condition and the bmi still have the main effect on this shoulder torque (p < 10⁻⁴), as shown in the box plot in figure 9 for the walking condition. while the subject's bmi is significant, as for the crutch loads, the other anthropometric characteristics are no longer significant at the 5% probability level of the fisher test, as confirmed by the box plot in figure 10 for the subject's height.

figure 8. r crutch vertical maximum relative forces for the three bmi categories. boxes represent 25%-75% intervals, red lines median values.

table 2. anova results for l/r crutch rms vertical load.
                    mean squares    f value    probability > f
moving condition        3.7           25.3           0
subj bmi                5.3           18.1           0
subj mass               1.9            6.5         2·10⁻³
subj height             0.2            0.7         0.49
crutch rom              2.1            2.9         1.5·10⁻²
residual error           31

figure 9. l and r shoulders maximum torque around the mediolateral axis z for the two walking conditions. boxes represent 25%-75% intervals, red lines median values.
figure 10. l and r shoulders maximum torque around the mediolateral axis z as a function of subject height categories. boxes represent 25%-75% intervals, red lines median values.

5. conclusions

the paper has presented the bullet project approach to the evaluation of shoulder loads when walking with crutches. a set of experimental data obtained on healthy subjects has been considered to demonstrate the potential of the proposed approach. a preliminary synthesis of these results has been obtained by applying the analysis of variance. the anova has shown that the parallel or alternate walking condition is very important as regards both the crutch forces, which can be measured directly, and the shoulder loads, which are determined through a biomechanical model. subjects' anthropometric characteristics affect the results even when these are normalised by subject mass or weight; moreover, the bmi is not the only significant parameter, since mass still gives a significant contribution. of course, such considerations have to be limited to this specific experimentation, since it was conducted on healthy subjects only, with a request to simulate a sci walking impairment, and the subject behaviour was not controlled. nevertheless, the results demonstrate the potential of the presented approach: in particular, when applied to injured subjects, it provides a set of information that will be useful to therapists and subjects to improve their training while preserving joint health.

acknowledgement

we thank s. lanino, m. pitto and f. risoldi for the fruitful collaboration.

references

[1] d. lloyd-jones, r. adams, m. carnethon, (+ 32 more authors), heart disease and stroke statistics 2009 update, circulation, 27 january 2009, 119(3):e21-181. doi: 10.1161/circulationaha.108.191261
[2] n. d. neckel, n. blonien, d. nichols, j. hidler, abnormal joint torque patterns exhibited by chronic stroke subjects while walking with a prescribed physiological gait pattern, j neuroengineering rehabil 5, 19 (2008). doi: 10.1186/1743-0003-5-19
[3] f. tamburella, j. c. moreno, d. s. herrera valenzuela, i. pisotta, m. iosa, f. cincotti, d. mattia, j. l. pons, m. molinari, influences of the biofeedback content on robotic post-stroke gait rehabilitation, j neuroengineering rehabil 16, 95 (2019). doi: 10.1186/s12984-019-0558-0
[4] p. r. culmer, p. c. brooks, d. n. strauss, d. h. ross, m. c. levesley, r. j. o'connor, b. b. bhakta, an instrumented walking aid to assess and retrain gait, ieee/asme transactions on mechatronics, vol. 19, no. 1, feb. 2014, pp. 141-148. doi: 10.1109/tmech.2012.2223227
[5] r. c. holliday, c. ballinger, e. d. playford, goal setting in neurological rehabilitation: patients' perspectives, disability and rehabilitation, 29:5, 2007, pp. 389-394. doi: 10.1080/09638280600841117
[6] f. rasouli, k. b. reed, walking assistance using crutches: a state of the art review, j biomech, vol. 98, 2 january 2020, 109489. doi: 10.1016/j.jbiomech.2019.109489
[7] h. bateni, b. e. maki, assistive devices for balance and mobility: benefits, demands, and adverse consequences, arch phys med rehabil, vol. 86, issue 1, january 2005, pp. 134-145. doi: 10.1016/j.apmr.2004.04.023
[8] e. sardini, m. serpelloni, m. lancini, wireless instrumented crutches for force and movement measurements for gait monitoring, ieee trans instrum. meas., vol. 64, no. 12, dec. 2015, pp. 3369-3379. doi: 10.1109/tim.2015.2465751
[9] m. lancini, m. serpelloni, s. pasinetti, e. guanziroli, healthcare sensor system exploiting instrumented crutches for force measurement during assisted gait of exoskeleton users, ieee sensors journal, vol. 16, no. 23, 1 dec. 2016, pp. 8228-8237. doi: 10.1109/jsen.2016.2579738
[10] benchmarking upper limbs loads on exoskeleton testbeds (bullet), eu h2020 program, project eurobench (grant n° 779963), sub-project bullet. online [accessed 17 december 2022] https://eurobench2020.eu/developing-the-framework/benchmarking-upper-limbs-loads-on-exoskeleton-testbeds-bullet/
[11] eurobench, european robotic framework for bipedal locomotion benchmarking. online [accessed 17 december 2022] https://eurobench2020.eu
[12] f. crenna, g. b. rossi, m. berardengo, filtering biomechanical signals in movement analysis, sensors, 21, 13, 2021, 4580, 17 pp. doi: 10.3390/s21134580
[13] r. dumas, l. cheze, j.-p. verriest, adjustments to mcconville et al. and young et al. body segment inertial parameters, journal of biomechanics, vol. 40, issue 3, 2007, pp. 543-553. doi: 10.1016/j.jbiomech.2006.02.013
[14] d. a. winter, biomechanics and motor control of human movement, john wiley & sons, new york, ny, usa, 2009, isbn: 978-0-470-39818-0
[15] g. kurillo, e. hemingway, mu-lin cheng, louis cheng, evaluating the accuracy of the azure kinect and kinect v2, sensors 22 (7), 2022, article n. 2469, 22 pp. doi: 10.3390/s22072469
[16] n. covre, a. luchetti, m. lancini, s. pasinetti, e. bertolazzi, m. de cecco, monte carlo-based 3d surface point cloud volume estimation by exploding local cubes faces, acta imeko 11 (2022) 2, pp. 1-9. doi: 10.21014/acta_imeko.v11i2.1206
[17] f. crenna, g. b. rossi, m. berardengo, a global approach to assessing uncertainty in biomechanical inverse dynamic analysis: mathematical model and experimental validation, ieee transactions on instrumentation and measurement, vol. 70, 2021, art. no. 1006809, pp. 1-9. doi: 10.1109/tim.2021.3072113
[18] f. crenna, g. b. rossi, a. palazzo, measurement of human movement under metrological controlled conditions, acta imeko 4 (2015) 4, pp. 48-56. doi: 10.21014/acta_imeko.v4i4.281
a combination of terrestrial laser-scanning point clouds and the thrust network analysis approach for structural modelling of masonry vaults

acta imeko, issn: 2221-870x, march 2021, volume 10, number 1, 257-264

maria grazia d'urso1, valerio manzari2, barbara marana3
1 department of engineering and applied sciences, university of bergamo, bergamo, italy
2 department of civil and mechanical engineering, university of cassino and southern lazio, cassino, italy
3 department of engineering and applied sciences, university of bergamo, bergamo, italy

section: research paper
keywords: masonry vault; laser-scanning; thrust network analysis; point cloud; geometric configuration; structural analysis; equilibrium of nodes
citation: maria grazia d'urso, valerio manzari, barbara marana, a combination of terrestrial laser-scanning point clouds and the thrust network analysis approach for structural modelling of masonry vaults, acta imeko, vol. 10, no. 1, article 34, march 2021, identifier: imeko-acta-10 (2021)-01-34
editor: ioan tudosa, university of sannio, italy
received january 2, 2021; in final form february 15, 2021; published march 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was supported by the italian ministry of education, university and research (miur)
corresponding author: maria grazia d'urso, e-mail: mariagrazia.durso@unibg.it

abstract

terrestrial laser-scanning (tls) is well suited to surveying the geometry of monumental complexes, often realised with highly irregular materials and forms. this paper addresses various issues related to the acquisition of point clouds via tls and their elaboration aimed at developing structural models of masonry vaults. this structural system, which exists in numerous artifacts and historical buildings, has the advantages of good static and functional behaviour, reduced weight, good insulation requisites, and aesthetic quality. specifically, using tls, we create a geometric model of the ancient masonry church of s. maria della libera in aquino, largely characterised by naves featuring cross vaults and previously used as a case study in the paper entitled 'terrestrial laser-scanning point-clouds for modeling masonry vaults', presented at the 2019 imeko tc-4 international conference on metrology for archaeology and cultural heritage. the results of the tls survey are used as input for a structural analysis based on thrust network analysis, a recent methodology for modelling masonry vaults as a discrete network of forces in equilibrium with the gravitational loads. it is demonstrated that the proposed approach is both effective and robust in assessing the safety conditions of existing masonry vaults, whose actual geometry significantly influences the safety level, as well as in designing new ones.

1. introduction

in this paper, we present a review on the integration of terrestrial laser scanning (tls) point clouds acquired via the most frequently employed survey techniques, as well as an innovative method for studying historical masonry vaults. the study of historical buildings continues to face significant difficulties related to computational effort, the scarcity of input data, and the limited realism of the attendant methods.

studies oriented toward the conservation and restoration of historical structures exploit structural analysis as a means of better understanding the genuine structural features of the building, in view of characterising its present condition and the causes of the existing damage, determining the actual structural safety with respect to a variety of factors (e.g. gravity, soil settlements, wind, and earthquakes), and determining the necessary remedial measures [1]-[3]. historical structures are often characterised by a highly complex geometry composed of various straight or curved members, combining curved 1d members (arches, flying arches) with both 2d (vaults, domes) and 3d members (fillings, etc.). in fact, the geometry is one of the most crucial aspects of the investigation, given the complex combination of comparatively slender members with far larger ones (e.g. massive piers, walls, buttresses, foundations). as such, the investigation of the geometry is perhaps one of the greatest challenges faced by analysts.

historical structures may have experienced (and continue to experience) various phenomena of a very different nature, including gravity forces, earthquakes, and environmental effects (thermal effects, chemical or physical attack), as well as various anthropogenic actions such as architectural alterations, intentional destruction, and inadequate restorations. many of these actions also need to be characterised in time, with some cyclic and repetitive (accumulating significant effects in the long term), others developing gradually over extremely long periods of time, and others still associated with long return periods. in many cases, they may be influenced by historical contingency and uncertain (or at least insufficiently known) historical facts. the existing general alterations may significantly affect the response of the structure to be modelled and, hence, the realism and accuracy of the prediction of the actual performance and capacity.
damage encompasses aspects such as mechanical cracking, material decay (due to chemical or physical attack), and a variety of other phenomena affecting the original capacity of the materials and structural members. the history of a building is an essential aspect of it, and this must be taken into account and integrated within the model. the following history-related effects may have had an impact on both the structural response and the existing damage: the construction process, any architectural alterations and additions, destruction due to conflicts (e.g. wars) or natural disasters (earthquakes, floods, fires), and various long-term decay or damage-inducing phenomena. in fact, the history as a whole constitutes a crucial source of knowledge. in numerous cases, the historical performance of the building can be analysed to reach conclusions on its structural performance and strength: for example, the performance exhibited during past earthquakes can be considered in order to improve the understanding of the anti-seismic capacity. in fact, the history of the building constitutes a unique experience occurring on a real scale of space and time. as such, the knowledge of the historical performance can compensate for the aforementioned data insufficiency [4].

the most frequently employed survey techniques for capturing the geometry of buildings, also applied in the field of cultural heritage preservation, include both terrestrial and remote photogrammetry as well as tls. approaches based on the acquisition of images aimed at 3d modelling have recently become the subject of significant studies and research in several different areas [5]-[7]. tls is a non-contact, non-intrusive technique that allows the shape of objects to be digitally acquired in a rapid and accurate way. the attendant research provides several examples of 3d point clouds acquired for detailed structural digital models, a particularly relevant issue in the case of masonry structures, where geometry plays a crucial role in the degree of safety [8].

nowadays, starting from an accurate 3d model derived from laser scanning measures, it is possible to implement a building information modelling (bim) approach that allows the entire project to be managed in a consistent and optimal manner. in fact, laser scan techniques such as tls, which encompass scan-to-bim among other processes, are a valuable prerequisite for bim modelling, since they can supply geometric and spatial data that can be acquired, organised, and managed to satisfy the required project scale. a single database ensures the quality of the results thanks to the direct link among the shapes, the information, and the project documentation [9], [10]. specifically, 3d modelling becomes important when these models are adopted for structural engineering purposes [11]. this is the case, for example, for masonry structures and vaults, whose structural safety assessment involves either meshing them as a collection of finite elements, as in the traditional finite element method, or as a set of nodes connected by branches, as in the more recent thrust network analysis (tna) approach [8]. given the recent emergence of this approach, we first briefly illustrate its theoretical background and basic assumptions, largely to emphasise its flexibility and how the geometric data required as input naturally match the output of tls surveys.
once the theoretical background and the state of the art of the tna method have been illustrated in section 2, the basics of tna are dealt with in section 3. section 4 describes the case study of the medieval church of s. maria della libera in aquino, for which the safety assessment of the cross vaults has been carried out. the related results are illustrated and commented on in section 5. finally, the conclusions are summarised in section 6.

2. theoretical background

nowadays, finite element (fem) analysis can be regarded as the most effective numerical technique for structural analysis since, unlike traditional static analysis, it allows for i) providing a 3d model of geometrically complex structures, ii) managing the characteristic parameters of the materials employed in the model, and iii) performing different analyses (linear, non-linear, dynamical, etc.) on the same geometry. however, in the case of historical and monumental masonry structures, fem analysis is not the best option, since it is difficult to ascertain the characteristics of the materials and the effects induced by the interventions previously performed on the structure. an effective alternative to fem is the tna approach, a fairly recent methodology that is briefly described in the sequel [12]-[17]. this method can be regarded as an automated and computerised variant of mery's method, which is used for hand calculations of masonry arches (figure 1). specifically, tna is used to model masonry vaults as a discrete network of branches, subjected only to compressive forces in equilibrium with the gravitational loads.

originally devised by o'dwyer [18] and later developed by block et al. [8], this method represents one of the first rational approaches to the stability of masonry buildings and stems from the analogy between the equilibrated shape of masonry arches and that of tensile suspended cables. this analogy, known as the 'catenary principle', pictures an arch as the mirror image of a long chain held at its ends and allowed to dangle (figure 2). heyman [19] combined this principle with the limit theorems of plasticity, specifically the static theorem, to evaluate the safety of masonry structures, predicting the ultimate mechanism of arches or 3d framed structures. an extension that includes domes and vaults was then proposed by o'dwyer [18], in which the structure is fictitiously deconstructed into discrete equilibrated arches, which entails seeking networks of forces inside the structure according to what has been denominated tna. a pictorial description of this idea is given in figure 3, with reference to a cross vault exhibiting a comparatively simple static behaviour, in which the two diagonal arcs provide the bearing structure that distributes the loads to the four pillars at the vertexes. the four columns support the four ribs of a barrel vault as a succession of increasingly smaller arcs from the external perimeter towards the centre; each arch transmits its thrust to the diagonal arcs, so that the diagonal arcs are loaded by the combination of the forces they sustain.

figure 1. mery's method: internal forces by means of a funicular polygon.
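the catenary principle can be stated compactly. the relations below are a classical textbook result, quoted here for illustration and not taken from the paper: for a chain of weight $q$ per unit length with horizontal force component $h$, the equilibrium of an element gives

$$h\, z''(x) = q\,\sqrt{1 + z'(x)^2}, \qquad z(x) = \frac{h}{q}\,\cosh\!\left(\frac{q\,x}{h}\right),$$

and reversing the sign of $z$ turns this purely tensile funicular shape into a purely compressive thrust line for an arch carrying the same load.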
3. basics of thrust network analysis

according to the so-called safe theorem of limit analysis, in the form established by heyman for masonry structures, the limit equilibrium of masonry vaults can be assessed by seeking a network of thrusts, i.e. purely compressive forces that are fully contained within the thickness of the vault and hence do not induce tensile stresses, which masonry is incapable of withstanding. as such, the problem of identifying the maximum loads that a masonry vault can safely sustain is reconducted to identifying a specific set of points, or nodes, within the vault such that the applied loads are in equilibrium with the internal forces, which are purely compressive and directed along the fictitious branches connecting pairs of nodes. the geometric position of these nodes is determined via an optimisation algorithm that enforces the condition whereby the initially unknown coordinates, as well as the branches connecting them, are contained within the vault thickness (figure 4). since the original paper [20] can be referred to for further details, in this paper we detail only the simplest case of vertical loads, supplementing the original formulation with a simple example that will help the reader to grasp the details of the procedure (see figure 5 and figure 6).

the set of nodes and the related branches define a specific network, from now on referred to as a thrust network, which is described by $n_n$ nodes and $n_b$ branches that connect specific pairs of nodes. the $n$-th node of the network is defined by its position $(x_n, y_n, z_n)$ in a 3d cartesian reference system, where $z$ is the vertical direction. the external force concentrated at each node can be described as

$$f^{(n)} = \left(f_x^{(n)},\, f_y^{(n)},\, f_z^{(n)}\right) \qquad (1)$$

while the thrust related to the generic branch can be denoted as

$$T^{(b)} = \left(t_x^{(b)},\, t_y^{(b)},\, t_z^{(b)}\right). \qquad (2)$$

the set of nodes is split into $n_i$ internal nodes and $n_r$ restrained (or external) nodes, at which only one external branch converges, such that $n_n = n_i + n_r$; the external branches thus model the support reactions.

the unknowns of the problem are represented by the coordinates of the nodes and the thrusts within each branch. in fact, only the vertical coordinates of the nodes are sought, since the horizontal coordinates are assigned by the designer by projecting the vault onto the plane and defining a regular grid of points; the grid spacing trades off computational effort against the quality of the analysis. the $z_n$ coordinates of the nodes and the branch thrusts $t^{(b)}$ are evaluated by enforcing equilibrium at the internal and external nodes via two different strategies, detailed separately here for the horizontal and vertical directions.

figure 2. the catenary principle.
figure 3. plant and axonometric views of a masonry vault.
figure 4. tna modelling of the vault.
figure 5. equilibrium of nodes.

3.1 horizontal equilibrium of nodes

denoting by $B_n$ the set of branches converging at node $n$, the horizontal equilibrium of the $n$-th node is expressed by

$$\sum_{b \in B_n} t_x^{(b)} + f_x^{(n)} = 0, \qquad \sum_{b \in B_n} t_y^{(b)} + f_y^{(n)} = 0 \qquad (3)$$

in terms of the horizontal components $f_x^{(n)}$, $f_y^{(n)}$ of the external loads and the thrust components $t_x^{(b)}$, $t_y^{(b)}$ of the branches $b$ connected to the node. denoting by $n$ and $m^{(b)}$ the indices of the nodes connected by the generic branch $b \in B_n$, we define

$$t_h^{(b)} = \sqrt{t_x^{(b)\,2} + t_y^{(b)\,2}} \qquad (4)$$

the horizontal component of the thrust, and

$$l_h^{(b)} = \sqrt{\left(x_n - x_{m^{(b)}}\right)^2 + \left(y_n - y_{m^{(b)}}\right)^2} \qquad (5)$$

the length of the generic branch projected onto the horizontal plane (see figure 5). hence

$$\frac{t_x^{(b)}}{t_h^{(b)}} = \frac{x_n - x_{m^{(b)}}}{l_h^{(b)}} \qquad (6)$$

and

$$\frac{t_y^{(b)}}{t_h^{(b)}} = \frac{y_n - y_{m^{(b)}}}{l_h^{(b)}} \qquad (7)$$

which can be incorporated into eq. (3) to get

$$\sum_{b \in B_n} \frac{x_n - x_{m^{(b)}}}{l_h^{(b)}}\, t_h^{(b)} + f_x^{(n)} = 0 \qquad (8)$$

$$\sum_{b \in B_n} \frac{y_n - y_{m^{(b)}}}{l_h^{(b)}}\, t_h^{(b)} + f_y^{(n)} = 0. \qquad (9)$$

there is a large number of networks that are in equilibrium with a given set of external forces and, at the same time, are contained within the vault thickness. to address all of them in a comprehensive way, it is convenient to express the horizontal thrust component $t_h^{(b)}$ as the product of a factor $\lambda$ and a reference thrust value $\hat t_h^{(b)}$, both of which are left unspecified for the moment. accordingly, for the generic $b$-th branch we set $t_h^{(b)} = \lambda\, \hat t_h^{(b)} = \frac{1}{r}\, \hat t_h^{(b)}$, and eqs. (8)-(9) become

$$\sum_{b \in B_n} \left[ \frac{\hat t_h^{(b)}}{l_h^{(b)}}\, x_n - \frac{\hat t_h^{(b)}}{l_h^{(b)}}\, x_{m^{(b)}} \right] + f_x^{(n)}\, r = 0 \qquad (10)$$

$$\sum_{b \in B_n} \left[ \frac{\hat t_h^{(b)}}{l_h^{(b)}}\, y_n - \frac{\hat t_h^{(b)}}{l_h^{(b)}}\, y_{m^{(b)}} \right] + f_y^{(n)}\, r = 0 \qquad (11)$$

where the ratios $\hat t_h^{(b)} / l_h^{(b)}$ represent the reference thrust densities of the network branches.

3.2 vertical equilibrium of nodes

recalling that $t_z^{(b)}$ and $f_z^{(n)}$ are, respectively, the vertical component of the thrust of the $b$-th branch converging at node $n$ and the nodal load, the vertical equilibrium of a generic node can be written as

$$\sum_{b \in B_n} t_z^{(b)} + f_z^{(n)} = 0. \qquad (12)$$

now, recalling that a compressive thrust $T^{(b)}$ is oriented towards node $n$, we have

$$t_z^{(b)} = \frac{t_h^{(b)}}{l_h^{(b)}}\left(z_n - z_{m^{(b)}}\right) = \lambda\, \frac{\hat t_h^{(b)}}{l_h^{(b)}}\left(z_n - z_{m^{(b)}}\right) = \frac{1}{r}\, \frac{\hat t_h^{(b)}}{l_h^{(b)}}\left(z_n - z_{m^{(b)}}\right) \qquad (13)$$

where $t_h^{(b)} = \lambda\, \hat t_h^{(b)} = \frac{1}{r}\, \hat t_h^{(b)}$ has been used. accordingly, substituting eq. (13) into eq. (12) yields

$$\sum_{b \in B_n} \frac{z_n - z_{m^{(b)}}}{l_h^{(b)}}\, \hat t_h^{(b)} + f_z^{(n)}\, r = 0 \qquad (14)$$

or, equivalently,

$$\sum_{b \in B_n} \left[ \frac{\hat t_h^{(b)}}{l_h^{(b)}}\, z_n - \frac{\hat t_h^{(b)}}{l_h^{(b)}}\, z_{m^{(b)}} \right] + f_z^{(n)}\, r = 0. \qquad (15)$$

this condition is used to evaluate the unknown nodal heights $z_n$, whose coefficients are expressed by means of the reference thrust densities. the physical meaning of the parameter $\lambda = \frac{1}{r}$ is exemplified by the three-hinged arch shown in figure 6, for which equilibrium needs to be written only for the central node. given that $z_{m^{(b)}} = 0$, eq. (15) simplifies to $\frac{\hat t_h^{(b)}}{l_h^{(b)}}\, z_n + f_z^{(n)}\, r = 0$ or, equivalently, given that $f_z^{(n)} < 0$ since it is directed downwards, to

$$\left| f_z^{(n)} \right| r = \frac{\hat t_h^{(b)}}{l_h^{(b)}}\, z_n. \qquad (16)$$

given that $\hat t_h^{(b)}$ and $l_h^{(b)}$ are both positive, a greater value of $r$ is associated with a greater value of $z_n$, and vice versa. this, in turn, implies that seeking the network with the uppermost vertical coordinates of all nodes amounts to seeking the minimum thrust. a minimal numerical sketch of how eq. (15) can be assembled and solved is given below.

figure 6. illustrative example of the vertical equilibrium of a node.
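the following sketch is our own illustration of eq. (15), not the authors' code: it assembles and solves the linear system for the free nodal heights given the reference thrust densities and the scaling parameter $r$, and, run on the three-hinged arch of figure 6, returns the height of the central node:

```python
import numpy as np

def solve_vertical_equilibrium(branches, t_hat, l_h, f_z, z_fixed, n_nodes, r):
    """solve eq. (15) for the free nodal heights z_n.

    branches: list of (n, m) node pairs; t_hat, l_h: reference horizontal
    thrust and projected length per branch; f_z: vertical nodal loads
    (negative downwards); z_fixed: {node: height} of restrained nodes;
    r: thrust scaling (larger r -> higher network, smaller thrust).
    """
    free = [n for n in range(n_nodes) if n not in z_fixed]
    col = {n: j for j, n in enumerate(free)}
    A = np.zeros((len(free), len(free)))
    b = np.array([-f_z[n] * r for n in free], dtype=float)
    for (n, m), th, lh in zip(branches, t_hat, l_h):
        d = th / lh                           # reference thrust density
        for a, c in ((n, m), (m, n)):         # branch enters both end nodes
            if a in col:
                A[col[a], col[a]] += d
                if c in col:
                    A[col[a], col[c]] -= d
                else:
                    b[col[a]] += d * z_fixed[c]
    z = np.linalg.solve(A, b)
    return {n: z[col[n]] for n in free}

# three-hinged arch of figure 6: two restrained nodes at z = 0, one free node
print(solve_vertical_equilibrium(
    branches=[(0, 1), (1, 2)], t_hat=[1.0, 1.0], l_h=[2.0, 2.0],
    f_z={0: 0.0, 1: -1.0, 2: 0.0}, z_fixed={0: 0.0, 2: 0.0},
    n_nodes=3, r=1.0))                        # -> {1: 1.0}
```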
4. case study

the present case study relates to the laser scanning survey of the santa maria della libera church in the municipality of aquino and the attendant structural analysis for the static verification of the church's cross vaults [20]. the church, which dates from 1000-1100 a.d. and is characterised by a pure romanic-benedictine style, was built with the typical 'local soft' travertine, fragmentary material from the remains of roman buildings surrounding the area where it was erected. the amazing and austere interior, with dimensions of 17 × 38 m, consists of three aisles divided by square pillars, with three semi-circular apses and an imposing triumphal arch, also resting on pillars culminating in fragments of roman cornice that act as capitals, which leads to the transept (figure 7, figure 8). the main altar, consisting of a roman marble sarcophagus, is placed in the centre; the centre aisle has a wooden roof, while the side aisles feature cross vaults.

the campaign of measurements focused on the interior and exterior of the masonry structure, whose geometry and external projections made the survey particularly complex and cumbersome. in fact, the vaults of the interior aisles of the church have been the subject of various detailed studies [21]-[27]. first, an accurate topographical survey of the historic artifact's site was carried out using the topcon gls-2000 laser scanner station (figure 9). during a four-hour period, 16 scans were performed at different station points in order to obtain an extremely high density of scan points: approximately five million points with coordinates measured with millimetre accuracy [20]. the survey was divided into several phases after careful planning of the campaign and the identification of the station points. the design of the survey included maps from google maps with on-site identification, cloud capture scans of specific points detected by a laser beam with a 360° horizontal and 270° vertical range of action, scan alignment in pairs, global alignment, filtering, modelling, and editing in the subroutine to which the tna code was connected (figure 10).

figure 7. image of the central nave.
figure 8. internal plan.
figure 9. image of the lateral left nave with a view of the masonry cross vaults.
figure 10. flowchart of the scan-to-bim process.

point cloud models obtained using a survey method incorporating high-precision laser scanning instruments have certain limitations from a computational point of view: it is not possible to directly associate the physical behaviour of the model derived from a point cloud using structural software. the transition from a point cloud to a polygonal surface model can be defined as a 're-topology' operation, since both the quantitative and the qualitative information of the point model are translated and adapted to obtain a triangular or quadrangular mesh that better represents the polygonal surface. meanwhile, the structural analysis software uses an algorithm that requires the geometric dimensions of the masonry object as input data. the laser scanning survey assigns a triplet of xyz coordinates to each point, with the coordinates initially relative before becoming absolute through a geo-referencing operation. in order to extrapolate the coordinates of the points, the cloudcompare software offered both the possibility of querying a single point and that of selecting a number of points and exporting the list in .txt format. once extrapolated and processed, the data were exported to an excel table. masonry elements are typically non-regular and non-continuous, do not have a homogeneous surface, and are made of ashlars that are not perfectly squared. accordingly, the geometric data obtained from the survey needed to be appropriately smoothed to achieve more regular surfaces, i.e. surfaces with no kinks, unrealistic holes, or superposition patches. in order to resolve this problem and to obtain coordinates providing correct geometric data for the structural algorithm, an interpolating function was built. specifically, the following polynomial was chosen as the best interpolation of the curve drawn through the coordinates of the surveyed points:

$$y = 0.0236\,x^6 + 0.023\,x^5 + 0.0677\,x^4 - 0.0597\,x^3 - 0.3024\,x^2 + 0.1172\,x + 12.871 \qquad (17)$$

the regression represented a good fit of the points, as demonstrated by the coefficient of determination $r^2 = 0.9974$, i.e. very close to one. a minimal sketch of such a fit is shown below.
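an interpolation of the kind used in eq. (17) can be reproduced with a standard least-squares polynomial fit; the point data below are synthetic placeholders, not the surveyed coordinates:

```python
import numpy as np

# placeholder points standing in for the intrados coordinates exported
# from cloudcompare (not the actual survey data)
x = np.linspace(-2.0, 2.0, 41)
y = 12.871 - 0.3 * x**2 + 0.01 * np.random.default_rng(0).normal(size=x.size)

coeffs = np.polyfit(x, y, deg=6)        # 6th-degree polynomial, as in eq. (17)
y_fit = np.polyval(coeffs, x)
ss_res = np.sum((y - y_fit) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print("coefficients:", np.round(coeffs, 4))
print("r^2 =", 1.0 - ss_res / ss_tot)
```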
accordingly, the geometric data obtained from the survey needed to be appropriately smoothed to achieve more regular surfaces, i.e. surfaces with no kinks, unrealistic holes, or superposition patches. in order to resolve this problem and to obtain coordinates that allowed us to provide correct geometric data for processing via the structural algorithm, an interpolating function was built. specifically, the following polynomial formula was chosen to interpolate the curve drawn with the coordinates of the surveyed points in the best way possible. the equation of the polynomial formula is: y = 0.0236 x6 + 0.023 x5 + 0.0677 x4 0.0597 x3 0.3024 x2 + 0.1172 x + 12.871 . (17) the regression line represented a good fit of the points as the index of variance of the line demonstrated, with a value of r2 = 0.9974, i.e. very close to one. 5. results the application of the tna method, illustrated here for a single cross vault of the roof, was extended to the static and structural verification of all masonry vaults existing in the s. maria della libera church in aquino. each of the church’s cross vaults in the aisles has a square base whose side length is equal to 4 meters, a height equal to 2 meters, a thickness of 0.45 meters and it is made of soft travertine with a specific weight of 2.72 t/m³ (figure 11). the application of the tna method to a single cross vault provided, as can be seen from figure 12 and figure 13, the minimum and maximum thrust values of each of the 389 branches of the cross vaults, as well as the minimum and maximum values of the height of each of the 222 nodes of the roof associated with the maximum and minimum thrust values. the maximum height of the network nodes that characterises the deepest limit configuration, i.e. that associated with the minimum thrust, was 2.48 m. conversely, the shallowest limit configuration, associated with the maximum thrust, had a value of 1.38 m as the minimum node height. figure 10. flowchart of the scan-to-bim process. figure 11. medium surface of the vault geometry obtained via the surveying of the intrados and discrete measurements of the vault thickness. figure 12. distribution of maximum and minimum thrusts within the vault. figure 13. thrust distribution in a rib. acta imeko | www.imeko.org march 2021 | volume 10 | number 1 | 263 6. conclusions the tls survey carried out on the monumental complex of the church of santa maria della libera in aquino provided the point cloud required to perform a structural modelling of the church’s masonry vaults using the tna approach. the geometric and geo-referenced 3d model obtained by processing the laser-scanning measurements presented a model built on a coherent geometric basis, which considers the methodological complexities of the detected object (figure 14 and figure 15). the paper demonstrated how the interdisciplinarity between a geometric model, built with the innovative techniques typical of the geomatic-type survey, and a structural model, can represent a useful support to the structural verification of the safety and conservation of complex structures, such as those typically pertaining to the field of monumental heritage. in future research within this reference context, hbim models will be addressed, which are part of a semi-automated method that allows for switching from point cloud to an advanced 3d model with the capacity to contain all the geometrical and mechanical characteristics of the built object [28]-[34]. 
6. conclusions

the tls survey carried out on the monumental complex of the church of santa maria della libera in aquino provided the point cloud required to perform a structural modelling of the church's masonry vaults using the tna approach. the geometric and geo-referenced 3d model obtained by processing the laser-scanning measurements is built on a coherent geometric basis, which accounts for the methodological complexities of the detected object (figure 14 and figure 15). the paper has demonstrated how the interdisciplinarity between a geometric model, built with the innovative techniques typical of geomatic surveying, and a structural model can represent a useful support for the structural verification of the safety and conservation of complex structures, such as those typically pertaining to the field of monumental heritage. in future research within this reference context, hbim models will be addressed, which are part of a semi-automated method that allows for switching from a point cloud to an advanced 3d model with the capacity to contain all the geometrical and mechanical characteristics of the built object [28]-[34]. moreover, fem analyses, based on recently developed strategies related to masonry modelling [35]-[37], will be investigated in order to assess the outcomes of the tna.

figure 14. 3d digital model of an external side of the s. maria della libera church.
figure 15. export sections from cloud.

acknowledgments

this work has been carried out under the gamher project: geomatics data acquisition and management for landscape and built heritage in a european perspective, prin: progetti di ricerca di rilevante interesse nazionale - bando 2015, prot. 2015hjls7e. gamher, url: https://site.unibo.it/gamher/en.

references

[1] r. quattrini, f. clementi, a. lucidi, s. giannetti, a. santoni, from tls to fe analysis: points cloud exploitation for structural behaviour definition. the san ciriaco's bell tower, the int. archives of the photogrammetry, remote sensing and spatial information sciences, vol. xlii-2/w15, 27th cipa int. symp. documenting the past for a better future, ávila, spain, 1-5 september 2019, pp. 957-964. doi: 10.5194/isprs-archives-xlii-2-w15-957-2019
[2] t. ramon herrero-tejedor, f. arques soler, s. lopez-cuervo medina, m. r. de la o cabrera, j. l. martìn romero, documenting a cultural landscape using point-cloud 3d models obtained with geomatic integration techniques. the case of the el encín atomic garden, madrid (spain), plos one 15(6), 24 june 2020, e0235169, 16 pp. doi: 10.1371/journal.pone.0235169
[3] p. roca, m. cervera, g. gariup, l. pela, structural analysis of masonry historical constructions. classical and advanced approaches, archives of computational methods in engineering, springer, cham, 2010, pp. 299-325. doi: 10.1007/s11831-010-9046-1
[4] g. roca, f. lopez-almansa, j. miquel, a. hanganu, limit analysis of reinforced masonry vaults, engineering structures 29 (2007) 3, pp. 431-439. doi: 10.1016/j.engstruc.2006.05.009
[5] g. bitelli, c. balletti, r. brumana, l. barazzetti, m. g. d'urso, f. rinaudo, g. tucci, the gamher research project for metric documentation of cultural heritage: current developments, the int. archives of the photogrammetry, remote sensing and spatial information sciences, vol. xlii-2/w11, proc. of 2019 geores and 2nd international conference of geomatics and restoration, 8-10 may 2019, milan, italy, pp. 239-246. doi: 10.5194/isprs-archives-xlii-2-w11-239-2019
[6] g. bitelli, c. balletti, r. brumana, l. barazzetti, m. g. d'urso, f. rinaudo, g. tucci, metric documentation of cultural heritage: research directions from the italian gamher project, the int. archives of the photogrammetry, remote sensing and spatial information sciences, vol. xlii-2/w5, 2017, pp. 83-89. doi: 10.5194/isprs-archives-xlii-2-w5-83-2017
[7] g. bitelli, g. castellazzi, a. m. d'altri, s. de miranda, a. lambertini, i. selvaggi, automated voxel model from point clouds for structural analysis of cultural heritage, the int. archives of the photogrammetry, remote sensing and spatial information sciences, vol. xli-b5, xxiii isprs congress, prague, czech republic, 12-19 july 2016. doi: 10.5194/isprsarchives-xli-b5-191-2016
[8] a. georgoupolos, ch. ioannidis, 3d visualization by integration of multi-source data for monument geometric recording, in: recording, modeling and visualization of cultural heritage, baltsavias et al. (editors), taylor & francis group, international workshop, ascona, 2005. isbn 0 415 39208 x
[9] c. brito, n. alves, l. magalhães, m. guevara, bim mixed reality tool for the inspection of heritage building, isprs ann. photogramm. remote sens. spatial inf. sci., iv-2/w6, pp. 25-29. doi: 10.5194/isprs-annals-iv-2-w6-25-2019
[10] s. logothetis, a. delinasiou, e. stylianidis, building information modelling for cultural heritage: a review, isprs annals of the photogrammetry, vol. ii-5/w3, 25th int. cipa symposium, taipei, taiwan, 31 august - 4 september 2015, pp. 177-183. doi: 10.5194/isprsannals-ii-5-w3-177-2015
[11] a. georgoupolos, d. delikaraoglou, ch. ioannidis, e. lambrou, g. pantazis, using geodetic and laser scanner measurements for measuring and monitoring the structural damage of a post-byzantine church, 8th int. symp. on conservation of monuments in the mediterranean basin, monument damage hazards and rehabilitation technologies, patras, greece, 31 may - 2 june 2010.
[12] f. marmo, l. rosati, reformulation and extension of the thrust network analysis, comp. & struct. 182 (2017), pp. 104-118. doi: 10.1016/j.compstruc.2016.11.016
[13] f. marmo, d. masi, l. rosati, thrust network analysis of masonry helical staircases, int. j. of arch. her. 12(5) (2018), pp. 828-848. doi: 10.1080/15583058.2017.1419313
[14] f. marmo, d. masi, d. mase, l. rosati, thrust network analysis of masonry vaults, int. j. of masonry res. and innov. 4 (2019), pp. 64-77. doi: 10.1504/ijmri.2019.096828
[15] f. marmo, n. ruggieri, f. toraldo, l. rosati, historical study and static assessment of an innovative vaulting technique of the 19th century, int. j. of arch. her. 13(6) (2019), pp. 799-819. doi: 10.1080/15583058.2018.1476607
[16] f. marmo, m. marmo, s. sessa, a. pagliano, l. rosati, thrust membrane analysis of the domes of the baia thermal baths, in: carcaterra a., paolone a., graziani g. (editors), proc. of xxiv aimeta conf., rome, italy, 15-19 september 2019, lect. notes mech. engrg., springer, 2019, isbn 978-3-030-41057-5. doi: 10.1007/978-3-030-41057-5_154
[17] f. marmo, d. masi, s. sessa, f. toraldo, l. rosati, thrust network analysis of masonry vaults subject to vertical and horizontal loads, proc. 6th int. conf. on comp. methods in structural dynamics and earthquake eng., compdyn 2017, rhodes island, greece, 15-17 june 2017, pp. 2227-2238. doi: 10.7712/120117.5562.17018
[18] d. o'dwyer, funicular analysis of masonry vaults, comp. & struct. 73 (1999), pp. 187-197.
[19] j. heyman, the masonry arch, ellis horwood, 1982, pp. 85-90.
[20] m. g. d'urso, v. manzari, b. marana, terrestrial laser-scanning point clouds for modeling masonry vaults, proc. of imeko tc4 int. conf. on metrology for archaeology and cultural heritage, 4-6 december 2019, florence, italy, pp. 282-286. online [accessed 22 march 2021] https://www.imeko.org/publications/tc4-archaeo-2019/imeko-tc4-metroarchaeo-2019-52.pdf
[21] m. g. d'urso, e. corsi, c. corsi, mapping of archaeological evidences and 3d models for the historical reconstruction of archaeological sites, metrology for archaeology and cultural heritage (metroarchaeo), cassino (fr), italy, 22-24 october 2018, pp. 437-442.
doi: 10.1109/metroarchaeo43810.2018.9089783
[22] m. g. d'urso, e. corsi, s. nemeti, m. germani, from excavations to web: a gis for archaeology, the int. archives of the photogrammetry, remote sensing and spatial information sciences, vol. xlii-5/w1, geomatics & restoration - conservation of cultural heritage in the digital era, florence, italy, 22-24 may 2017, pp. 219-226. doi: 10.5194/isprs-archives-xlii-5-w1-219-2017
[23] m. g. d'urso, c. l. marino, a. rotondi, on 3d dimension: study cases for archaeological sites, the int. archives of the photogrammetry, remote sensing and spatial information sciences, vol. xl-6 (2008), pp. 13-18, issn: 1682-1750. doi: 10.5194/isprsarchives-xl-6-13-18
[24] m. g. d'urso, g. russo, on the integrated use of laser-scanning and digital photogrammetry applied to an archaeological site, the int. archives of the photogrammetry, remote sensing and spatial information sciences, vol. xxxvii-b5-2/comm. v, issn 1682-1750, isprs, beijing, china, 3-11 july 2008, pp. 1107-1112.
[25] s. parrinello, r. de marco, integration and modelling of 3d data as strategy for structural diagnosis in endangered sites. the study case of church of the annunciation in pokcha (russia), imeko tc4 int. conf. on metrology for archaeology and cultural heritage, florence, italy, 4-6 december 2019, pp. 223-228. online [accessed 22 march 2021] https://www.imeko.org/publications/tc4-archaeo-2019/imeko-tc4-metroarchaeo-2019-41.pdf
[26] a. piemonte, g. caroti, i. martínez-espejo zaragoza, f. fantini, l. cipriani, a methodology for planar representation of frescoed oval domes: formulation and testing on pisa cathedral, isprs int. j. geo-information 7 (2018), p. 318. doi: 10.3390/ijgi7080318
[27] b. riveiro, b. conde-carnero, h. gonzález-jorge, p. arias, j. c. caamaño, automatic creation of structural models from point cloud data: the case of masonry structures, isprs annals of the photogrammetry, vol. ii-3/w5, geospatial week, 2015, pp. 3-9. doi: 10.5194/isprsannals-ii-3-w5-3-2015
[28] g. guidi, f. remondino, m. russo, f. menna, a. rizzi, s. ercoli, a multi-resolution methodology for the 3d modeling of large and complex archeological areas, international journal of architectural computing (ijac), special issue (2009), pp. 39-55.
[29] m. hess, v. petrovic, m. yeager, f. kuester, terrestrial laser scanning for the comprehensive structural health assessment of the baptistery di san giovanni in florence, italy: an integrative methodology for repeatable data acquisition, visualization and analysis, structure and infrastr. eng. 14(2) (2018), pp. 247-263. doi: 10.1080/15732479.2017.1349810
[30] m. c. l. howey, m. brouwer burg, assessing the state of archaeological gis research: unbinding analyses of past landscapes, journal of archaeological science 15(5) 2017, pp. 1-9. doi: 10.1016/j.jas.2017.05.002
[31] l. c. hung, w. xiangyu, j. yi, bim-enabled structural design: impacts and future developments in structural modelling, analysis and optimisation processes, arch. computat. methods 22 (2015), pp. 135-151. doi: 10.1007/s11831-014-9127-7
[32] m. llobera, building past landscape perception with gis: understanding topographic prominence, journal of archaeological science 28 (2001), pp. 1005-1014. doi: 10.1016/jasc.2001.0720
[33] a. mitropoulou, a. georgopoulos, an automated process to detect edges in unorganized point clouds, isprs ann. photogramm. remote sens. spatial inf. sci., vol. iv-2/w6 (2019), pp. 99-105. doi: 10.5194/isprs-annals-iv-2-w6-99-2019
reduction of gravity effect on the results of low-frequency accelerometer calibration

acta imeko, issn: 2221-870x, december 2020, volume 9, number 5, 365-368

g. p. ripper1, c. d. ferreira2, r. s. dias2, g. b. micheli2
1 division of acoustics and vibration metrology (diavi), inmetro, brazil, gpripper@inmetro.gov.br
2 vibration laboratory (lavib), inmetro, brazil, rsdias@inmetro.gov.br

abstract: this paper describes a study on the possible sources of systematic errors during the calibration of accelerometers at low frequencies. the study was carried out on a primary calibration system that uses an air-bearing vibration exciter aps dynamics 129 and applies the sine-approximation method. the tests performed and the actions taken to reduce the effect on the experimental results are presented.

keywords: calibration; vibration; low frequency; accelerometer

1. introduction

the warp of the linear guide can cause a tilt with a variable angle of the moving table when it is linearly translated.
this might cause a variable effect of local gravity on dc-responsive accelerometers, such as servo-accelerometers.

in the key comparison euramet.auv.v-k3, a larger dispersion among the results of the participants was evidenced at the lowest frequencies [1]. for the key comparison ccauv.v-k3 [2], inmetro reported results down to 0.2 hz. later on, some efforts were made to extend the lower limit of the range to 0.1 hz, but the systematic effect was high compared with the uncertainty desired for the service. this was confirmed during a measurement audit carried out during the peer review of lavib. on this occasion (june 2019) we could calibrate a servo-accelerometer from ptb and quantify the problem.

a mathematical correction of the systematic error caused by gravity had been proposed by t. bruns and s. gazioch in 2016 [3]. we preferred to attack the problem following a different approach: instead of dealing with the effect, we decided to try to identify and reduce the cause. therefore, we searched for the answers to two basic questions: can we clearly identify the cause and confirm that it is generated exclusively by the warp? can we minimize the cause of the problem so as to reduce or even exclude the need for a mathematical correction? some experiments were carried out to answer these questions, and the results obtained are presented in the following sections.

2. description of the work

the study started with the establishment of a reference condition, which was simply the sensitivity result obtained for a q-flex qa-3000 servo-accelerometer. the calibration of accelerometers is usually carried out using two mountings (0° and 180°), and the final sensitivity is the mean of the results obtained at the two mountings. this basic condition was used as the parameter against which the results of our subsequent tests were evaluated.

2.1 initial tests

the influence of gravity on the accelerometer was first tested using different orthogonal mounting positions (90° and 270°). although the q-flex servo-accelerometer is based on a cantilever beam design, no significant difference was observed.

the influence of the distance of the accelerometer from the centre of gravity of the moving table was also tested: a calibration was carried out placing the accelerometer below the mounting plate of the moving table. no significant difference was observed.

the influence of tilt on the measuring points used for calibration was evaluated by using different measuring distances between the laser measuring point and the centre axis of the accelerometer. tests were performed after loosening the screws at one end of the guide bar and, later, after loosening the screws at both ends. changes in the behaviour of the sensitivity magnitude at low frequencies were observed in these tests.

2.2 correction proposed by ptb

the correction procedure proposed by ptb has also been tested by inmetro. figure 1 presents corrected sensitivity results obtained according to the method proposed by bruns and gazioch using the same coefficient published by ptb in [3]; using the coefficient determined by inmetro for its system; the original certificate results obtained in 2018 and in 2019; and the results with 3 screws loosened at one side of the linear guide.

figure 1: experimental and corrected results according to the method proposed by bruns and gazioch.
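the correction in [3] can be understood from a simple model: if the table tilt grows roughly linearly with the displacement x, θ(x) ≈ k·x, a dc-responsive accelerometer senses the spurious component g·k·x on top of the drive acceleration -(2πf)²·x, so the relative sensitivity error scales as g·k/(2πf)² and grows rapidly towards low frequencies. the python sketch below illustrates a correction of this kind; the coefficient k, the sign convention and the frequency list are illustrative assumptions, not the values used by ptb or inmetro.

```python
import numpy as np

G = 9.80665  # m/s^2; a laboratory would use its locally measured value of g

def tilt_error_factor(freq_hz, k_rad_per_m):
    """relative spurious term g*k/(2*pi*f)^2 for a tilt angle that grows
    linearly with table displacement, theta(x) = k*x (assumed model)."""
    omega = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
    return G * k_rad_per_m / omega**2

def correct_sensitivity(s_meas, freq_hz, k_rad_per_m):
    """remove the tilt-induced gravity contribution from the measured
    sensitivity magnitude (one plausible sign convention; the actual sign
    depends on the mounting orientation of the accelerometer)."""
    return s_meas / (1.0 + tilt_error_factor(freq_hz, k_rad_per_m))

# hypothetical coefficient, chosen so that the error at 0.1 hz is of the
# order of the 0.85 % deviation reported in this paper
freqs = np.array([0.1, 0.2, 0.4, 1.0, 10.0])  # hz
print(tilt_error_factor(freqs, k_rad_per_m=3.4e-4) * 100)  # error in percent
```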
figure 1 demonstrates that the repeatability of the results from year to year is good, but that the systematic error is too high at 0.1 hz. the mathematical correction improves the final result, but this can depend on specific characteristics of the accelerometer under test. the simple test of loosening the three screws at one side of the linear guide showed that an improvement of the sensitivity response measured with the interferometric system can be achieved by mechanical means. therefore, it was decided to quantify the effect caused by this process and to take actions to improve the straightness of the guide.

2.3 straightness measurements

the procedure applied by inmetro to measure the straightness comprised the use of a taylor hobson autocollimator mounted on an independent seismic block, which is usually used to position a breadboard with the interferometer. the setup is shown in figure 2.

figure 2: setup used to measure the straightness of the linear guide of shaker aps 129.

a plane mirror stand was positioned on top of the moving table and the angle of inclination of this mirror was measured at different positions of the moving table. the zero position was taken as the static equilibrium point with no voltage input to the shaker; this is close to the mid-point of the linear translation guide. a dc voltage source was used to provide a voltage of approximately 200 mv to the power amplifier, and the gain knob of the amplifier was manually set to place the moving table at different x-positions. the measurement of the x-position was made with a laser distance meter placed on top of the table, pointed at a fixed reference stand on the same seismic block used to support the autocollimator.

initially, straightness measurements were made at the mid-point and close to the positive and negative limits of motion used for accelerometer calibrations. the absolute angles measured were -14, 12.6 and -11.5 seconds of arc. normalizing these values with respect to the mid-point angle gives a better view of the difference of the angle measured at different positions of the linear guide; this is shown in figure 3.

figure 3: difference of the angle measured at different positions of the moving table.

new straightness measurements were made while changing the mechanical system as follows:
1. original measurement conditions
2. measurements after release of the 3 screws that fix the linear guide to the base
3. measurements after release of the 6 screws that fix the linear guide to the base
4. measurements after release of the 6 screws that fix the linear guide to the base and release of the 4 screws that hold the base of the shaker to the seismic block
5. measurements after re-fixture of the 4 screws that hold the base of the shaker to the seismic block
6. measurements after re-fixture of the 6 screws that fix the linear guide to the base

after applying step 6 above, we considered that any stress due to the mounting and assembly of the shaker on the seismic block had been released. a more refined straightness evaluation was then made using a positioning resolution of approximately 10 mm. the results are presented in figure 4. this graph shows that the straightness of the system was highly improved: all measured values were within ±0.2 sec relative to the angle at the reference point.
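a back-of-the-envelope check shows why arcsecond-level angle changes matter at 0.1 hz: the gravity component g·sin(Δθ) ≈ g·Δθ must be compared with the drive acceleration amplitude (2πf)²·d. the sketch below does this for a hypothetical displacement amplitude of 0.05 m; the tilt values are only illustrative.

```python
import numpy as np

G = 9.80665                      # m/s^2
ARCSEC = np.pi / (180.0 * 3600)  # one second of arc in radians

def relative_gravity_error(tilt_arcsec, freq_hz, disp_amplitude_m):
    """ratio of the gravity term g*dtheta to the drive acceleration
    amplitude (2*pi*f)^2 * d at the given frequency."""
    a_drive = (2.0 * np.pi * freq_hz) ** 2 * disp_amplitude_m
    return G * tilt_arcsec * ARCSEC / a_drive

# assumed 0.05 m displacement amplitude; tilts of 0.2" (post-fix spread)
# and 12.6" (the order of the angles measured before the fix)
for tilt in (0.2, 12.6):
    print(tilt, relative_gravity_error(tilt, 0.1, 0.05))
```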
figure 4: difference of the angle measured at different positions of the moving table after reassembly of the shaker and linear guide.

2.4 dynamic calibration of servo-accelerometer

primary calibrations of a high-quality dc-response accelerometer were then carried out to determine the influence of the improvement obtained in the straightness of the linear guide. after the reassembly of the system, with all its screws retightened, a servo-accelerometer allied signal qa-3000 was calibrated in the frequency range 0.1 hz to 80 hz. the effect to be considered here is the deviation of the sensitivity magnitude response below 0.4 hz; the changes are caused by the combination of longer displacements and the change of the gravity effect due to the varying angle of the accelerometer as it moves along the linear guide.

the results obtained in the range 0.1 hz to 10 hz for the accelerometer qa-3000 show that the frequency response is now almost flat. figure 5 presents the magnitude and phase of the sensitivity.

figure 5: sensitivity results measured after reassembly of the shaker aps 129: (a) sensitivity magnitude, (b) phase shift.

the relative differences of the sensitivity magnitude results are now all within 0.1 %, taking as reference the value measured at 1 hz.

figure 6: relative difference of magnitude results with respect to the sensitivity value at 1 hz, obtained after reassembly of the shaker aps 129.

2.5 static calibration of servo-accelerometer

as a final check, the calibration of the accelerometer sensitivity at 0 hz was carried out by statically rotating the accelerometer in the gravity field. the main sensitivity axis of the accelerometer was positioned at different angles relative to the local gravity field and the electrical output was measured. the range of angles from 0° to 360° was covered using 20° steps. a sine fit was then applied to the measured data to obtain the sine amplitude and determine the accelerometer static sensitivity. this was carried out using both a voltmeter agilent 3458a and the daq board ni pci-6115 used in the actual low-frequency accelerometer calibrations applying the sine-approximation method. due to the difference in input impedance between these two measuring instruments (voltmeter: 10 gω; daq: 1 mω), the voltmeter measurement results were corrected by approximately 0.5 % to reflect the same impedance available at the daq input channel.

figure 7 shows that the results obtained statically are in close conformity with the ones obtained dynamically, all being within ±0.1 %. this demonstrates the improvement obtained in the low-frequency calibration system.

figure 7: relative deviation of magnitude results with respect to the sensitivity value at 0.1 hz: green squares, dc calibration with the daq; red squares, dc calibration with the hp 3458a (corrected); blue diamonds, dynamic calibration results obtained after reassembly of the shaker aps 129.
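a minimal sketch of the static determination described above: the output is modelled as an offset plus in-phase and quadrature components of the rotation angle, and the fitted amplitude divided by g gives the static sensitivity. the fit model, the synthetic data and the sensitivity value are illustrative assumptions, not inmetro's processing code.

```python
import numpy as np

G = 9.80665  # m/s^2; in practice the locally measured g is used

def static_sensitivity(angles_deg, outputs_v):
    """least-squares fit of u(a) = c0 + c1*cos(a) + c2*sin(a); the
    quadrature amplitude sqrt(c1^2 + c2^2) is the output swing for +/-1 g,
    so dividing it by g gives the static sensitivity in v/(m/s^2)."""
    a = np.radians(np.asarray(angles_deg, dtype=float))
    design = np.column_stack([np.ones_like(a), np.cos(a), np.sin(a)])
    coef, *_ = np.linalg.lstsq(design, np.asarray(outputs_v), rcond=None)
    return np.hypot(coef[1], coef[2]) / G

# synthetic data: angles from 0 deg to 340 deg in 20 deg steps, as in the paper
angles = np.arange(0, 360, 20)
true_s = 1.3e-4  # v/(m/s^2), hypothetical
u = 0.01 + true_s * G * np.cos(np.radians(angles))
u += np.random.default_rng(0).normal(0.0, 1e-7, u.size)  # measurement noise
print(static_sensitivity(angles, u))  # ~1.3e-4
```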
3. summary

small deviations from a purely straight linear motion can cause systematic errors in the calibration of dc-responsive accelerometers at low frequencies due to the effect of local gravity. this problem was studied at inmetro, where differences as large as 0.85 % were observed in the primary calibration of accelerometers at 0.1 hz. the cause identified was a warp of the linear guide, which caused a variable tilt angle of the moving table as it was translated.

a procedure was applied to release any possible mechanical stresses that could occur during the mounting of the shaker on the inertial block and thereby improve the straightness of the linear guide. this action was successful and allowed us to eliminate the main source of error in calibrations below 0.4 hz. a significant improvement of the calibration results was achieved, which eliminated the need for further data correction. the estimated expanded uncertainty of sensitivity is 0.3 % for magnitude and 0.3° for phase shift in the frequency range from 0.1 hz to 10 hz.

4. references

[1] bartoli et al., final report of euramet.auv.v-k3, metrologia 52, 09003, 2015.
[2] sun qiao et al., final report of ccauv.v-k3: key comparison in the field of acceleration on the complex charge sensitivity, metrologia 54, 09001, 2017.
[3] th. bruns, s. gazioch, correction of shaker flatness deviations in very low frequency primary accelerometer calibration, metrologia 53, 986, 2016.

digital nist: an examination of the obstacles and opportunities in the digital transformation of nist's reference materials

acta imeko, issn: 2221-870x, march 2023, volume 12, number 1, 1-4

william dinis camara1, steven choquette1, katya delak1, robert hanisch1, benjamin long1, melissa phillips1, jared m. ragland1, catherine rimmer1
1 national institute of standards and technology (nist), gaithersburg, md 20899, usa

section: research paper

keywords: digital transformation; digital reference materials; drmc; digital si; dcc

citation: william dinis camara, steven choquette, katya delak, robert hanisch, benjamin long, melissa phillips, jared m. ragland, catherine rimmer, digital nist: an examination of the obstacles and opportunities in the digital transformation of nist's reference materials, acta imeko, vol. 12, no. 1, article 7, march 2023, identifier: imeko-acta-12 (2023)-01-07

section editor: daniel hutzschenreuter, ptb, germany

received november 17, 2022; in final form march 24, 2023; published march 2023

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: william dinis camara, e-mail: dinis.camara@nist.gov

abstract: early in 2022, nist embarked on a pilot project to produce digital calibration reports and digital certificates of analysis for reference materials. the goal is to produce examples of digital reports and certificates to assess the scope and challenges of digital transformation in those particular measurement services. this paper focuses on the reference material certificate effort of the pilot project. our aims for this part of the pilot project are: to generate a digital reference material certificate from certification data, descriptive information about the material, and other data and metadata as needed; to generate a human-readable report from the digital reference material certificate; and to hold a workshop to gather stakeholder feedback. the challenges for nist include the diverse and complex information presently contained in nist certificates, converting values to non-si units to match the needs of stakeholders, and format updates to nist reference material certificates necessary to allow for machine generation. other practical challenges include the wide variety of reference materials offered by nist, as well as the needs of internal and external stakeholders. this paper reports on the progress of the nist effort and discusses some of the challenges and solutions to producing digital reference material certificates.

1. introduction

the digital transformation of metrology is a worldwide movement. for national metrology institutes (nmis), supporting digital technologies is now an urgent task. nmis want to support industry 4.0 and internet of things (iot) concepts, digital twin concepts, and many other digital technologies. although the activities of nmis vary, the majority provide measurement services that allow their stakeholders to obtain documented traceability to the international system of units (si). access to the documentation is typically delivered using methods that require a human to review and process the information provided; this may involve extracting pertinent data from paragraphs of text. with the proliferation of digital technologies and communication, a faster and more automated transfer of information is essential for technological growth. therefore, the digital delivery of measurement service data is a key and fundamental digital technology that nmis seek to support.
to that end, "a universal and flexible structure for digital calibration certificates (dcc)" was developed under the auspices of the european metrology program for innovation and research (empir) project smartcom 17ind02 [1]. practically, the dcc implements an xml schema for producing digital calibration certificates. the dcc continues to be supported and is presently on version 3.1.0; many nmis are now adapting the dcc for use in their own institutions. however, the dcc in its current version is not suitable for digital reference material certificates (drmc) due to the different information, underlying standards and dissemination requirements related to reference materials. a new schema must be created to support the digital delivery of reference materials.

nist has recognized expertise in many digital technologies, including iot, cybersecurity, and artificial intelligence (ai), but has not yet attempted to deliver measurement service data in a digital format. in 2022, nist embarked on a pilot project to produce examples of digital calibration reports and certificates of analysis for nist standard reference materials (srms). the main objective of the pilot project is to assess the resources and effort required to deliver fully digital measurement service data. like other nmis, nist is using the dcc as a starting point. the efforts required to digitize calibration reports and certificates of analysis are separate but related. for the srm effort, one of the main challenges is that the dcc has been constructed for calibrations, not reference materials. although many of the elements are similar, there are also significant differences, particularly in the context of metadata. attempting to put reference material data directly into the dcc as currently constructed, adding additional data fields, and modifying existing data fields would lead to a difficult-to-use patchwork collection of data fields. here, we focus on the reference material side of the pilot project, and we note that at the time of writing the pilot project is rapidly developing.
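for orientation, the fragment below sketches the general shape of an xml-based digital certificate of this kind. the element names and values are simplified illustrations invented for this example; they are not taken from the actual dcc 3.1.0 schema or from any nist drmc schema.

```xml
<!-- illustrative only: hypothetical element names and values -->
<digitalCertificate>
  <administrativeData>
    <issuer>example nmi</issuer>
    <itemIdentification>example reference material, lot 001</itemIdentification>
    <validity expires="2030-12-31"/>
  </administrativeData>
  <measurementResults>
    <quantity name="mass fraction of analyte x">
      <value unit="mg/kg">100</value>
      <expandedUncertainty unit="mg/kg" coverageFactor="2">5</expandedUncertainty>
    </quantity>
  </measurementResults>
</digitalCertificate>
```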
2. nist standard reference materials®

nist offers over 1100 standard reference materials in the following categories [2]: ferrous metals, nonferrous metals, microanalysis, high-purity materials, health and industrial hygiene, inorganics, primary gas mixtures, fossil and alternative fuels, organics, food and agriculture, geological materials and ores, ceramics and glasses, cement, engine wear materials, forensics, ion activity, polymeric properties, thermodynamic properties, optical properties, radioactivity, electrical properties, metrology, liquids and glasses, x-ray diffraction, sizing, surface finish, fire research, nanomaterials, and miscellaneous performance engineering materials. nist produces a significant number of srms in these technical categories, covering a wide array of needs. the distribution of the number of srms by category is shown in figure 1.

figure 1. nist srms shown with the percentage of materials from the catalogue in each category.

the large number and diversity of srms pose a complex challenge for the digital delivery of reference materials and documentation, because the types of data and values may vary greatly among the various srms, and the different technical areas have unique challenges in providing the measurements that stakeholders need. the quality management system for nist srms is, to the extent allowed by statute and regulation, in conformity with the international organization for standardization (iso) standard 17034. because the dcc was designed to conform to iso/iec 17025, a schema for reference materials must first be generated by evaluating the information required by iso 17034 and other sources, and then compared with the dcc to identify similar elements that can be reused between the two.

3. challenges

3.1. digital certificates

nist plans to use an internally developed tool, the configurable data curation system (cdcs) [3], to transform srm metadata and results into digital srm certificates based on the new schema being created. the data for the certificates will be stored in a digital repository that will enable easier and higher-quality generation of srm certificates. iso 17034 and iso guide 31 provide general requirements for reference material producers, including much of the information that needs to be provided in certificates. in addition, other sources such as nist policy and customer input provide further information to be contained in certificates. since the dcc is designed around a different standard, iso/iec 17025, which requires different information, it is not a suitable schema for digital reference materials. a new schema must be researched and created based on the requirements for reference material certificates, with input from nist stakeholders and in collaboration with other national metrology institutes. since similarities will exist between the new schema and the dcc, the two will need to be compared to determine where similar elements may be combined for more efficient digital delivery of measurement services information.

the materials used for nist srms cover a wide range of technical areas. due to the varying types of data in these materials, nist supports multiple approaches to assigning values to best fit the homogeneity of each material. materials with between-unit homogeneity have certificates that can apply to multiple units across a batch or lot of material; most of the certificates for these nist srms are publicly available. however, materials that have only within-unit homogeneity require a distinct certificate for each unit.
for these serialized materials, the certificate with data values is generally available only to the customer who purchased the unit, which will necessitate limitations and additional security measures on how each certificate is delivered and accessed. additionally, values may be reported in multiple ways in certificates, e.g., a single value with an uncertainty, multiple values and uncertainties, and dna sequences.

nist favours flexibility in delivering values in the units preferred by our stakeholders, since our user communities use different units, including both si and non-si units, in their daily measurements. for example, users measuring for ignition resistance testing may use values expressed as a percentage, while users measuring coating thickness may need measurements expressed in mils. providing flexibility to the srm stakeholders is fundamental to creating a digital reference material certificate that will satisfy customer needs in disparate technical areas. the sale of nist srms is not concentrated in any particular area of material, and the needs of each of these areas must be considered. the distribution of srm sales by category is shown in figure 2.

figure 2. nist srms shown with the percentage of sales of materials in each category.

another challenge present in certificates is the variety of special characters that must be accommodated: greek letters, superscripts, subscripts and other symbols are frequently included and must be supported by any digital solution.

additionally, reference material certificates are not static and may change over time. a certificate for a reference material may be updated to include new values that have been measured, to change expiration dates, or for numerous other reasons. while each version of a certificate contains unique information, the connection to historical versions of the documents needs to be maintained, and the users of reference materials must ensure that they are using the current version of the certificate for their material. each year nist updates between 150 and 200 certificates.

to populate a digital certificate, information must be collated from multiple sources. much of the metadata used to populate the digital certificates is stored electronically in databases; other information is manually curated and added. new code must be written to extract the electronically stored information for inclusion in certificates. for manually added information, new methods of storing the data must be created to automate inclusion.

3.2. human-readable certificates

the creation of digital reference material certificates to enable machine-to-machine communication is essential to the future of metrology. however, human-readable versions of the certificates will be necessary for at least the foreseeable future. an example of the first page of a current human-readable certificate is shown in figure 3.

figure 3. the first page of the certificate for srm 2454a hydrogen in titanium alloy (nominal mass fraction 215 mg/kg h) (pin form).

users of nist srms will utilize the human-readable versions for information about the material, including storage, usage instructions, and other required product information. these versions also provide information about the materials that aids stakeholders in determining whether the srm is the best fit for their needs. the human-readable versions of digital reference material certificates must therefore be formatted such that stakeholders can easily extract the information they need for their purposes.
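as a toy illustration of generating a human-readable view from machine-readable data, the python sketch below renders the hypothetical certificate fragment shown in section 1; it stands in for, and is unrelated to, the actual cdcs tooling.

```python
import xml.etree.ElementTree as ET

def render_human_readable(xml_text: str) -> str:
    """walk the hypothetical certificate fragment from section 1 and
    produce one human-readable line per certified quantity."""
    root = ET.fromstring(xml_text)
    lines = [root.findtext("administrativeData/itemIdentification", default="")]
    for q in root.iter("quantity"):
        value = q.find("value")
        unc = q.find("expandedUncertainty")
        lines.append(
            f"{q.get('name')}: {value.text} ± {unc.text} {value.get('unit')} "
            f"(coverage factor k = {unc.get('coverageFactor')})"
        )
    return "\n".join(lines)

example = ("<digitalCertificate><administrativeData>"
           "<itemIdentification>example material</itemIdentification>"
           "</administrativeData><quantity name='mass fraction'>"
           "<value unit='mg/kg'>100</value>"
           "<expandedUncertainty unit='mg/kg' coverageFactor='2'>5"
           "</expandedUncertainty></quantity></digitalCertificate>")
print(render_human_readable(example))
```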
some of the elements that currently exist in nist certificates of analysis are intended primarily for the human reader and should continue to be included in the documentation provided to customers. the certificates for nist srms have changed over the more than 100 years that nist has been producing them: early certificates contained only values and uncertainties, whereas modern certificates include more information about the source and preparation of the materials, more detailed information on storage, instructions for use, and other information required to educate customers and inform them of measurement concerns when using the materials. the schema for a drmc must take into account the wide range of information that has been included in the certificates for the benefit of the customers.

many nist reference material certificates include figures, photos, and plots of results that communicate pertinent information or help the user to visualize tabular data. the need for these types of items to be machine-readable must be investigated; in any case, they must be incorporated into the drmc schema so that they can be included in the human-readable certificate. the full benefit of a digital reference material certificate cannot be realized if human editing of a certificate after its generation is required.

3.3. tools for finding the appropriate srm

the creation of a full digital repository of information will have the added benefit of improving searching for srms. currently, customers utilize manual tools that can search only a limited set of information in the certificate to compare the multiple srms available from nist. customers access individual certificates and compare information about materials that do not share the same measurands, measurement units, or matrices. this approach does not accommodate complex searches and consumes significant time and resources from the customer. the digital repository will enable new, more automated tools to be created, perhaps even by private companies, that will allow for quicker comparison of available srms to identify the one that best suits the needs of the customer. a first step may be to provide searching tools that are usable through a web interface. additional tools that would allow two machines to interact directly may be possible in the future; perhaps there will be a day when an instrument in the laboratory searches the nist srm catalogue and informs the user of the appropriate material to acquire.

one of the greatest challenges to successful automated searches is the ability to compare measurements that use different units. due to the volume of possible conversions, they cannot be calculated ahead of time and stored; rather, the schema must allow for these types of calculations to be done in real time as the search is conducted.
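a minimal sketch of query-time conversion within a single kind of quantity, here mass fraction, where every unit is mapped to a dimensionless base fraction; the unit table is illustrative, and a production search tool would need a full dimensional-analysis layer covering all srm quantity kinds.

```python
# mass-fraction units expressed as dimensionless base fractions; a
# conversion is then a multiply/divide done at query time rather than a
# pre-computed, stored value. the table is illustrative, not a nist list.
MASS_FRACTION = {"kg/kg": 1.0, "g/kg": 1e-3, "mg/kg": 1e-6, "%": 1e-2}

def convert(value: float, from_unit: str, to_unit: str) -> float:
    return value * MASS_FRACTION[from_unit] / MASS_FRACTION[to_unit]

# e.g. matching a query expressed in percent against a certificate value
# stated in mg/kg
print(convert(215.0, "mg/kg", "%"))  # -> 0.0215
```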
4. nist workshop

nist hosted an international workshop on 28-29 september 2022 for stakeholders of its calibration services and reference materials. the workshop introduced stakeholders to the results of the pilot study, specifically beta versions of digital calibration reports and certificates of analysis and their corresponding human-readable versions, and sought stakeholder feedback on these beta versions. the workshop was an opportunity for nist to learn how stakeholders are responding to the changes towards more digital implementations of metrology and any solutions that they are implementing, and to better understand customer needs in this area. participants for the workshop were identified through a questionnaire that was sent to customers of nist measurement services through email and was posted on the nist website. the questionnaire gauged stakeholder interest in participating in the workshop and in topics for digital transformation, such as digital calibration reports, certificates of analysis, digital traceability and security, and middleware for digital reports and certificates of analysis. nist will report on the results of the workshop in a separate document. following the workshop, nist is developing a strategy for full digital transformation of its measurement services.

5. future directions

the current version of the dcc is not adequate for use in creating a drmc. continued development of a new model conforming to the specifications of reference materials is needed to obtain a model capable of handling the metadata associated with these materials. there are clear benefits, ranging from within an organization to worldwide, to the digital transformation of reference materials and other measurement services. coordination and development of a model for reference materials that can be used worldwide will be essential to a successful transformation. nist will continue to monitor the digital transformation work being developed in the next version of the dcc. for reference materials, the international community is watching the direction that national metrology institutes take in leading this effort. the work currently ongoing will provide the foundation for the transition to digital for reference materials.

acknowledgement

the authors acknowledge all participants in the digital nist pilot study, including those responsible for the portion of the study focused on calibrations. the authors also acknowledge the cooperation of nist staff who have assisted in assessing the variability of certificates and the effort required to digitize reference material certificates, and the dcc development group at the physikalisch-technische bundesanstalt (ptb) for their assistance. the nist associate director for laboratory programs provided funding for this pilot study.

references

[1] t. wiedenhöfer, d. hutzschenreuter, i. smith, c. brown, a universal and flexible structure for digital calibration certificates (dcc), 2019, pp. 1-3. doi: 10.5281/zenodo.3696567
[2] nist, standard reference materials® catalog, nist special publication (nist sp) 260-176, 2022 edition, u.s. government printing office, washington, dc. online [accessed 2 august 2022] https://www.nist.gov/srm
[3] nist, configurable data curation system, 2019. online [accessed 2 august 2022] https://www.nist.gov/itl/ssd/information-systems-group/configurable-data-curation-system-cdcs/about-cdcs
mitigation of spectrum sensing data falsification attack using multilayer perception in cognitive radio networks

acta imeko, issn: 2221-870x, march 2022, volume 11, number 1, 1-7

mahesh kumar nanjundaswamy1, ane ashok babu2, sathish shet3, nithya selvaraj4, jamal kovelakuntla5
1 department of electronics and communication engineering, dayananda sagar college of engineering, bengaluru, karnataka 560078, india
2 department of electronics and communication engineering, pvp siddhartha institute of technology, vijayawada, andhra pradesh 520007, india
3 department of electronics and communication engineering, jss academy of technical education, bengaluru, karnataka 560060, india
4 department of electronics and communication engineering, k. ramakrishnan college of technology, tiruchirappalli 621112, tamilnadu, india
5 department of electronics and communication engineering, gokaraju rangaraju institute of engineering and technology (griet), hyderabad, telangana 500090, india

section: research paper

keywords: cognitive radio network; cooperative spectrum sensing; energy statistic; machine learning model; spectrum sensing data falsification

citation: mahesh kumar nanjundaswamy, ane ashok babu, sathish shet, nithya selvaraj, jamal kovelakuntla, mitigation of spectrum sensing data falsification attack using multilayer perception in cognitive radio networks, acta imeko, vol. 11, no. 1, article 21, march 2022, identifier: imeko-acta-11 (2022)-01-21

section editor: md zia ur rahman, koneru lakshmaiah education foundation, guntur, india

received november 20, 2021; in final form march 1, 2022; published march 2022

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: mahesh kumar nanjundaswamy, e-mail: mkumar.n19@gmail.com

abstract: cognitive radio networks (crns) are used to solve spectrum scarcity and low spectrum utilization problems in wireless communication systems. spectrum sensing is a vital process in crns, which needs continuous measurement of energy; it enables the sensors to sense the primary signal.
cooperative spectrum sensing (css) has been recommended to sense the spectrum accurately and to enhance detection performance. however, a spectrum sensing data falsification (ssdf) attack launched by malicious users can lead to a wrong global decision on the availability of the spectrum, and alleviating the impact of an ssdf attack is an extremely challenging task. over the years, numerous strategies have been proposed to mitigate ssdf attacks, ranging from statistical to machine learning models. energy measurement through statistical models is based on predefined criteria; on the other hand, machine learning models have shown low sensing performance. therefore, it is necessary to develop an efficient method to mitigate the negative impact of ssdf attacks. this paper proposes a multilayer perceptron (mlp) classifier to identify falsified data in css and so prevent ssdf attacks. the statistical features of the received signals are measured and taken as feature vectors to train the mlp; in this manner, the measurement of these statistical features using the mlp becomes a key task in cognitive radio networks. the trained network is employed to differentiate the signals of malicious users from those of honest users. the network is trained with the levenberg-marquardt algorithm and then employed to eliminate the effect of ssdf attacks. the simulation results reveal that the proposed model can efficiently reduce the impact of malicious users in a crn.

1. introduction

over the past years, the world has witnessed a tremendous growth in the field of wireless communication technologies due to the popularity of telemedicine, smart homes, smartphones, autonomous vehicles, mobile television and smart cities. the increasing demand for wireless communications has brought the problem of spectrum scarcity. energy detection and measurement is a key task in spectrum sensing in cognitive radio networks; as a result, the development of hybrid machine learning and signal processing algorithms has become an intense research area both for measurement technology and for cognitive radio communications. the federal communications commission (fcc) [1], [2] reported that most of the allocated spectrum is rarely used by the primary users (pus) (fcc, 2002; fcc, 2003). in order to resolve the conflict between spectrum utilization and spectrum shortage, it has been recommended that opportunistic access to the licensed spectrum should be given to secondary users (sus). the cognitive radio network (crn) has been developed to solve the aforementioned issues by enabling dynamic spectrum access. it is a new paradigm that offers the potential to utilize the licensed spectrum in an opportunistic manner [3] (wan et al. 2019). a crn allows sus to sense and access free spectrum bands without interfering with pus; when sus can use vacant licensed spectrum, the spectrum scarcity problem is solved successfully. sus need to monitor the spectrum continuously [4], [5] to sense the pu status; therefore, spectrum sensing is an important process for a crn.

spectrum sensing is the process of identifying the status of the pu. an accurate detection of the spectrum can significantly enhance the performance of a crn [6]-[8] (ali et al., 2017). however, due to obstacles, shadowing and multipath fading, wrong detections can take place, resulting in inefficient usage of the licensed spectrum. to deal with this issue, cooperative spectrum sensing (css) has been considered a satisfactory candidate for spectrum sensing [9] (sharifi et al., 2016). css combines the sensing signals of all cognitive users (cus) and makes the final decision; it counteracts the effects of noise, path loss, shadowing and fading that may occur in wireless communication. however, css is vulnerable to many security threats. among these, the spectrum sensing data falsification (ssdf) attack can severely affect the detection performance of a crn: in an ssdf attack, malicious users (mus) send falsified reports about the spectrum band to the fusion center (fc). an ssdf attack misleads the global decision by sending falsified reports about the spectrum availability, hence degrading the crn performance.
therefore, it is essential to develop an efficient method to eliminate the impact of ssdf attacks. the core contributions of this research work are as follows. the main focus of this article is the design of an efficient model using an artificial neural network to suppress the negative impact of mus in a crn and to enhance the detection performance through the measurement of various statistical parameters. in this scheme, a set of features is extracted from the received signals to generate a large representative dataset. a multilayer perceptron (mlp) [10], [11] is designed with one input layer, two hidden layers and one output layer. the obtained feature vectors are grouped into two sets, viz. training and testing: the training set is used to develop and train the mlp, while the testing set is employed to validate the efficacy of the proposed model. the performance of the proposed model is evaluated by measuring some commonly used metrics.

spectrum sensing refers to the process of detecting the activity of pus in a licensed spectrum band. it plays a vital role in crns, and css [12]-[14] has been suggested to make accurate decisions about spectrum availability by exploiting spatial diversity via the observations of multiple users. but css has some limitations: ssdf, in which false sensing reports are sent by mus during css, can severely affect the detection performance of crns. to eliminate the ssdf attack, several methods have been reported in the literature; each method has its own characteristics, and none of them provides consistent results. thus, the main aim of this research article is the design of an efficient model using an artificial neural network to suppress the negative impact of mus in a crn and to enhance the detection performance.

the rest of the article is organized as follows: section 2 presents an exhaustive review of former methods; section 3 describes the proposed system model; section 4 provides the experimental outcomes; section 5 presents the concluding remarks.

2. related work

several research works have proved that css is a good candidate to detect the activity of a pu in a crn. however, css is affected by many attacks, such as the primary user emulation attack and ssdf. among these, ssdf is the most dangerous attack in a crn: an ssdf attack can reduce the detection performance by sending falsified reports to the fc. over the past years, several methods have been proposed to resist ssdf attacks. wan et al. (2019) [15], [16] presented a method to mitigate the influence of ssdf attacks using linear weighted combinations; an adaptive reputation method is also presented to differentiate mus from sus. feng et al. (2018) [17] used exclusive-or (xor) distance analysis to eliminate the influence of ssdf in a crn; in this approach, the xor distance together with the hypothesis-detection information is employed to calculate the equivalence between two sus, and based on the xor distance, mus are separated from sus. a soft-decision-based scheme to resist ssdf was developed by ahmadfard et al. (2017); the proposed method achieved better results than existing methods. in ahmed et al. (2014), the authors presented a method to combat the effect of mus in crns using bayesian strategies.
the authors used statistical features of the received samples to sense the existence and inexistence of the pu. li et al. (2014) investigated the potential of the fuzzy c-means algorithm in spectrum sensing; the proposed algorithm is capable of detecting the pu signal accurately, which in turn enhances the detection performance. a robust algorithm [18] to defend against ssdf was proposed by althunibat et al. (2014): in this approach, specific weights are assigned to the sensing nodes, and the results showed that the algorithm is capable of detecting mus. based on the mean and standard deviation of the received samples, mapunya and velempini (2018) developed an ssdf mitigation method for crns; the results proved that the proposed scheme could reduce the false alarm rate. sharifi et al. (2018) presented a defence strategy against the ssdf attack: the mean value of the received samples is computed, and two parameters α and β are obtained and then used in a likelihood ratio test to enhance the detection performance. li and peng (2016) used an unsupervised machine learning model to differentiate honest sus from mus; the proposed model utilizes past sensing reports as feature vectors to categorize users. nie et al. (2017) proposed a defence scheme based on a bayesian learning model, in which each user has a specific weight that reflects its trustworthiness. farmani et al. (2011) suggested the support vector data description method to detect the activity of the pu; the proposed method differentiates honest sus from mus based on the energy statistic of the signal, but it failed to decrease the false alarm rate. cheng et al. (2017) developed a self-organizing map to classify nodes into honest and malicious nodes; the proposed method uses an average suspicion degree to discriminate mus from honest users. taggu et al. (2021) [19] proposed a two-layer framework to classify ssdf attackers: the first layer, the computational layer, employs a hidden markov model (hmm) to establish a probabilistic relationship between the pu's states and the sensing reports of the sus, generating the data set needed for the next layer; the second layer, the decision layer, employs several ml algorithms to categorize sus as byzantine attackers or normal sus.

3. proposed system model

the fundamental focus of this research work is to develop a scheme for mitigating ssdf attacks in a crn using machine learning models. an essential property of machine learning is that the relation between the input and output variables is learned via a training process, which makes the scheme more robust. the developed model is simulated, and its performance is compared with earlier methods to prove its superiority.

figure 1 depicts the structure of a cognitive radio network with one pu and 5 sus. the sus use the communication channel whenever the pu signal is absent. each su performs local spectrum sensing (lss) to detect the absence or presence of the pu and reports the result to the fc. the fc then makes a final verdict on the spectrum availability relying on the information received from the respective users. in this context, because of the presence of mus, some secondary users will send falsified reports to the fc. let m mus be present among the sus.

figure 1. cognitive radio network model.
mus can send either "always yes" or "always no" reports to the fc. "always yes" corresponds to high energy (1) and increases the probability of false alarm, since it indicates an active status of the primary user when the pu is actually inactive. similarly, "always no" corresponds to low energy (0) and decreases the probability of detection, since it reports the absence of the primary user even when the pu is present. both the high and low falsified signals of mus degrade the performance of the crn. to deal with this issue, an mlp is developed.

spectrum sensing is mainly used for detecting the presence or absence of the pu, as shown in figure 1. when the primary user is inactive, each su receives only a noisy signal, and the energy calculated at the t-th time instant by the p-th su can be expressed by (1) as

$$S_p(t) = \frac{1}{N_1} \sum_{n=0}^{N_1-1} \left| \eta_p(t,n) \right|^2 . \qquad (1)$$

in (1), $S_p(t)$ denotes the energy of the noise received by the p-th su at time $t$, and $N_1$ represents the number of samples considered. when the pu is active, equation (1) can be written in the form of (2) as

$$S_p(t) = \frac{1}{N_1} \sum_{n=0}^{N_1-1} \left| H_p(t,n) \, S(t,n) + \eta_p(t,n) \right|^2 , \qquad (2)$$

where $H_p(t,n)$ denotes the channel gain between the pu and the p-th su, $S(t,n)$ denotes the pu signal and $\eta_p(t,n)$ denotes the additive gaussian noise with zero mean and given variance.

several spectrum sensing methods are reported in the literature, such as energy detection, matched filtering and cyclostationary feature detection. among these, the energy detection method is a good candidate for local spectrum sensing because it does not need any prior information about the primary user signal and its computational overhead is low. using the energy detection method, the received signal can be expressed as the binary hypothesis test between $H_0$ and $H_1$ given in (3):

$$r_p(t) = \begin{cases} \eta_p(t) & H_0 \\ H_p(t) \, S(t) + \eta_p(t) & H_1 \end{cases} . \qquad (3)$$

here, $r_p(t)$ represents the sensed signal, and $H_0$ and $H_1$ represent the absence and presence of the primary user signal, respectively. after the local spectrum sensing, the decision of each su is represented as a binary value, 0 or 1, for the inexistence or existence of the pu signal, with the mathematical model given by

$$SV_p(t) = \begin{cases} 0 & H_0 \\ 1 & H_1 \end{cases} , \qquad (4)$$

where $SV_p$ is the sensing value of the p-th su; 0 and 1 indicate an inactive and an active pu signal, respectively. every secondary user reports its verdict to the central unit, and the fc then makes the final verdict on the spectrum relying on all the data obtained from the sus. because of the presence of a few falsified users, some secondary users may send modified information to the fc, which ultimately affects the overall spectrum performance of the communication system.
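a minimal sketch of the local energy detector of (1)-(4): average the squared samples and compare against a threshold to produce the binary sensing value. the threshold and the toy waveform are assumptions; in practice the threshold is set from a target false-alarm probability.

```python
import numpy as np

def local_decision(samples: np.ndarray, threshold: float) -> int:
    """energy detector: mean squared magnitude over n1 samples, eq. (1),
    thresholded to the binary sensing value sv of eq. (4)."""
    energy = np.mean(np.abs(samples) ** 2)
    return int(energy > threshold)

rng = np.random.default_rng(1)
n1 = 1000
noise = rng.normal(0.0, 1.0, n1)         # h0: noise only, unit variance
pu = 0.8 * np.sign(rng.normal(size=n1))  # toy pu waveform (assumed)
print(local_decision(noise, 1.3), local_decision(noise + pu, 1.3))  # 0 1
```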
ssdf attacks of the "always yes" and "always no" types are considered. under such attacks, mus report contaminated data to the fc: an mu changes its local sensing report and falsifies the test outcome. for instance, an mu may send $H_0$ while its local decision is $H_1$, i.e., while the pu signal is actually present. let the q-th ssdf attacker report low energy with probability $P_{0,q}$ when its local decision is $H_1$, and report a high energy value to the fc with probability $P_{1,q}$ when its local decision is $H_0$; the resulting detection and false-alarm probabilities are given by (5) and (6) as

$$P_{d,q} = (1 - P_{0,q}) \, P_{d,q} + (1 - P_{d,q}) \, P_{1,q} \qquad (5)$$

$$P_{f,q} = (1 - P_{0,q}) \, P_{f,q} + (1 - P_{f,q}) \, P_{1,q} . \qquad (6)$$

to mitigate ssdf attacks, features such as the energy statistic, autocorrelation, squared mean, standard deviation and maximum-minimum eigenvalue are computed and fed as inputs to the mlp. the energy statistic of the signal can be represented using (7) as

$$E = \sum_{k=0}^{N_1-1} |r(k)|^2 . \qquad (7)$$

autocorrelation is a mathematical function that encodes the level of association between observations procured from the same source: it measures the similarity of a signal with a delayed version of itself. honest sus send actual reports to the fc, which vary depending on the existence or inexistence of the pu signal, whereas mus report either low or high energy repeatedly, so their autocorrelation value does not oscillate much. therefore, the autocorrelation value of the signal is considered as one of the feature vectors. the autocorrelation of the signal is given by (8) as

$$A(i) = \frac{1}{N_1} \sum_{k=0}^{N_1-1} r(k) \, r(k-i) . \qquad (8)$$

the squared mean of the received signal can be computed using (9) as

$$\mu = \frac{1}{N_1} \sum_{k=0}^{N_1-1} |r(k)|^2 . \qquad (9)$$

the features are labelled 0 for $H_0$ and 1 for $H_1$.
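a sketch of the five-feature extraction used as the mlp input: energy (7), lag-1 autocorrelation (8), squared mean (9), standard deviation and the maximum-to-minimum eigenvalue ratio. the covariance-matrix construction for the eigenvalue feature (a sliding window with an assumed smoothing factor) is not specified in the paper and is an assumption of this sketch.

```python
import numpy as np

def extract_features(r: np.ndarray, lag: int = 1, smooth: int = 8) -> np.ndarray:
    r = np.asarray(r, dtype=float)
    energy = np.sum(np.abs(r) ** 2)          # eq. (7)
    autocorr = np.mean(r[lag:] * r[:-lag])   # eq. (8), single lag i
    sq_mean = np.mean(np.abs(r) ** 2)        # eq. (9)
    std = np.std(r)
    # sample covariance matrix from sliding windows (assumed construction)
    x = np.lib.stride_tricks.sliding_window_view(r, smooth)
    eig = np.linalg.eigvalsh(x.T @ x / x.shape[0])  # ascending eigenvalues
    return np.array([energy, autocorr, sq_mean, std, eig[-1] / eig[0]])
```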
the mlp is a feed-forward, supervised artificial neural network (ann). the network has three kinds of layers, viz. an input layer, one or more hidden layers and an output layer. the input layer receives the input signal as an external stimulus, and the number of units in the input layer is determined by the number of features. each hidden layer between the input and the output consists of one or more hidden units (neurons); the number of hidden layers and their units are determined by experimentation. the output layer represents the final decision and has one neuron for binary classification, as shown in figure 2.

figure 2. multilayer perceptron with two hidden layers.

the mlp output at the output layer can be calculated using (10):

$$y_k = f\!\left[ \sum_{j,k=1}^{N} w'_{j,k} \, g\!\left( \sum_{i,j=1}^{N} x_i \, w_{i,j} + b \right) \right] , \qquad (10)$$

where $x_i$ is the i-th input, $b$ is the bias, $w_{i,j}$ denotes the weights between the input and hidden layers, $w'_{j,k}$ denotes the weights between the hidden and output layers, and $g$ and $f$ are the activation functions at the hidden and output layers, respectively.

after labelling, the feature vectors are categorized into two data sets, i.e., training data and testing data. the training data are employed to ensure that the modelled ann recognizes the data, and the test data are used to check the ability of the model to predict new cases based on its training. algorithm 3.1 explains the training procedure of the mlp; the performance of the trained network is validated with the test data.

3.1. algorithm: mlp training algorithm

• create the mlp network
• initialize the weights and bias randomly
• compute the feature vector x = [x1, x2, x3, ..., xn]
• label the target vector t = [0, 1], 0 = h0, 1 = h1
• for each training pair (x, t), present the input to the input layer and calculate the net input using (11)

$$h = \sum_{i,j=1}^{N} \left( x_i \, w_{i,j} + b \right) \qquad (11)$$

• apply the activation function to compute the hidden output using (12)

$$h = g(h) \qquad (12)$$

• calculate the net input at the output layer using (13)

$$y_k = \sum_{j,k=1}^{N} w'_{j,k} \, h \qquad (13)$$

• apply the activation function to compute the net outcome using (14)

$$y_k = f[y_k] \qquad (14)$$

• calculate the error using (15)

$$\text{error} = T - y \qquad (15)$$

• back-propagate the error and update the weights and biases using (16) and (17)

$$w_{\text{new}} = w_{\text{old}} + \Delta w_{i,j} , \quad b_{\text{new}} = b_{\text{old}} + \Delta b_{i,j} \quad \text{(hidden layer)} \qquad (16)$$

$$w_{\text{new}} = w_{\text{old}} + \Delta w_{j,k} , \quad b_{\text{new}} = b_{\text{old}} + \Delta b_{j,k} \quad \text{(output layer)} \qquad (17)$$

• test for the stopping condition
• end
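a compact sketch of the training and validation flow using scikit-learn; levenberg-marquardt is not available there, so the lbfgs solver is used as a stand-in, and the logistic output of mlpclassifier replaces the paper's linear output layer. the file names are placeholders, not artefacts of the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# rows of the five features from the sketch above; labels 0 (h0) / 1 (h1)
x = np.load("features.npy")  # placeholder path
y = np.load("labels.npy")    # placeholder path

split = int(0.8 * len(x))    # assumed 80/20 train/test split
clf = MLPClassifier(hidden_layer_sizes=(10, 10),  # two hidden layers of 10
                    activation="tanh",            # tan-sigmoid, as in the paper
                    solver="lbfgs",               # stand-in for levenberg-marquardt
                    max_iter=500)                 # 500 epochs, as in the paper
clf.fit(x[:split], y[:split])
print("test accuracy:", clf.score(x[split:], y[split:]))
```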
4. Simulation results

This section presents the experimental results that demonstrate the effectiveness of the model; the simulations were performed on the MATLAB 2018a platform. The CRN is designed with one FC, one PU and 30 secondary users, and Pf is set to 0.1 for all SUs. The percentage of MUs ranges from 10 % to 60 %. The PU signal is a quadrature phase shift keying (QPSK) signal, and the results are obtained from Monte-Carlo simulations with 10 000 runs. The signal-to-noise ratio (SNR) is varied from −20 dB to 0 dB.

In this work, the MLP is employed to mitigate the negative impact of MUs in the CRN. The input to the MLP is composed of the feature vectors obtained from the received signal. The MLP is designed with 5 input neurons, one per feature vector, 2 hidden layers with 10 neurons each, and a single output neuron whose value, 0 or 1, corresponds to the H0 and H1 hypotheses. Tan-sigmoid and linear activation functions are used at the hidden and output layers, respectively. The network is trained with the least-mean-square (LMS) algorithm, with the number of epochs set to 500.

The efficacy of the developed model is assessed in terms of the probability of detection and the probability of false alarm. Figure 3 plots the probability of detection (Pd) against the SNR at a probability of false alarm Pf = 0.1. The results in Figure 3 show that the proposed model outperforms the other methods taken for comparison: for instance, at SNR = −12.5 dB, the Pd of the proposed model is higher by 47.4 %, 46 % and 10 % than that of the energy detection (ED), generalized likelihood ratio test (GLRT) and Hadamard ratio (HR) sensing methods, respectively.

Figure 3. Probability of detection versus SNR at Pf = 0.1.

Figure 4 shows the probability of false alarm for the always-yes attack versus SNR, for varying percentages of malicious secondary users. It can be seen that the proposed model senses the always-yes attack correctly with up to 50 % falsified secondary users present, for SNR varying from −10 dB to 0 dB, which proves that the machine learning model can efficiently sense the SSDF attack launched by malicious SUs in the CRN.

Figure 4. Probability of false alarm versus SNR for varying percentages of falsified secondary users (always-yes attack).

The probability of false alarm for the always-no attack as a function of SNR, for varying percentages of malicious SUs, is plotted in Figure 5. The results show that the proposed model detects the always-no attack precisely with up to 50 % malicious SUs present, for SNR varying from −10 dB to 0 dB. This further confirms that the proposed machine learning model can efficiently detect the SSDF attack launched by malicious SUs in the CRN.

Figure 5. Probability of false alarm versus SNR for varying percentages of falsified secondary users (always-no attack).

To validate the performance of the proposed model experimentally, the probability of detection is plotted against the percentage of malicious users, varied from 10 % to 60 % at an SNR of −10 dB, as shown in Figure 6; the proposed scheme yields the highest Pd values.

Figure 6. Probability of detection versus percentage of malicious users.

From these empirical findings it can be concluded that the machine-learning-based strategy efficiently suppresses the impact of MUs in CRNs. The proposed model outperforms the other methods taken for comparison across the range of SNR values, as shown in Table 1.

Table 1. Probability of detection versus SNR.

Approaches vs. SNR    Proposed technique   ED     GLRT   HR
SNR = −20 dB          0.14                 0.10   0.10   0.10
SNR = −15 dB          0.32                 0.10   0.12   0.23
SNR = −12.5 dB        0.62                 0.14   0.16   0.52
SNR = −10 dB          0.88                 0.25   0.28   0.87
SNR = −4 dB           0.99                 0.93   0.96   0.99
SNR = −2 dB           0.99                 0.98   0.99   0.99

5. Conclusion

The performance of a CRN is severely affected by malicious SUs, which may launch SSDF attacks to mislead the global decision. This paper has presented a machine learning model using an MLP classifier to identify falsified data in CSS and thus prevent SSDF attacks in a CRN. A set of features is extracted from the received samples and labelled according to the absence or presence of the primary user; the obtained features are used as input to the MLP model. The network is trained with the Levenberg-Marquardt algorithm and then employed to eliminate the effect of SSDF attacks. The simulation results show that the proposed model efficiently reduces the impact of malicious users in the CRN; however, it requires considerable training time. In future work, meta-heuristic algorithms will be explored to optimize the network parameters and further enhance the detection performance.

Acknowledgement

The authors thank Dayananda Sagar College of Engineering, Bengaluru, and JSS Academy of Technical Education, Bengaluru, for all the support and encouragement given to them to take up this research work and publication.

References

[1] A. Ahmadfard, A. Jamshidi, A. Keshavarz-Haddad, Probabilistic spectrum sensing data falsification attack in cognitive radio networks, Signal Processing 137 (2017), pp. 1-9. DOI: 10.1016/j.sigpro.2017.01.033
[2] M. E. Ahmed, J. B. Song, Z. Han, Mitigating malicious attacks using Bayesian nonparametric clustering in collaborative cognitive radio networks, 2014 IEEE Global Communications Conference, Austin, TX, 2014, pp. 999-1004. DOI: 10.1109/GLOCOM.2014.7036939
[3] A. Ali, W. Hamouda, Advances on spectrum sensing for cognitive radio networks: theory and applications, IEEE Communications Surveys & Tutorials 19(2) (2017), pp. 1277-1304. DOI: 10.1109/COMST.2016.2631080
[4] S. Althunibat, M. Di Renzo, F. Granelli, Robust algorithm against spectrum sensing data falsification attack in cognitive radio networks, IEEE 79th Vehicular Technology Conference (VTC Spring), Seoul, 2014, pp. 1-5. DOI: 10.1109/VTCSpring.2014.7023078
[5] Z. Cheng, T. Song, J. Zhang, J. Hu, Y. Hu, L. Shen, X. Li, J. Wu, Self-organizing map-based scheme against probabilistic SSDF attack in cognitive radio networks, 9th Int. Conf. on Wireless Communications and Signal Processing (WCSP), Nanjing, 2017, pp. 1-6. DOI: 10.1109/WCSP.2017.8170994
[6] F. Farmani, M. Abbasi-Jannatabad, R. Berangi, Detection of SSDF attack using SVDD algorithm in cognitive radio networks, Third Int. Conf. on Computational Intelligence, Communication Systems and Networks, Bali, 2011, pp. 201-204. DOI: 10.1109/CICSyN.2011.51
[7] Federal Communications Commission, ET Docket No. 03-222, Notice of proposed rulemaking and order.
[8] Federal Communications Commission, Spectrum Policy Task Force, Rep. ET Docket No. 02-135, 2002. Online [accessed 16 March 2022]: https://transition.fcc.gov/sptf/files/sewgfinalreport_1.pdf
[9] J. Feng, M. Zhang, Y. Xiao, H. Yue, Securing cooperative spectrum sensing against collusive SSDF attack using XOR distance analysis in cognitive radio networks, Sensors 18(2) (2018), 370. DOI: 10.3390/s18020370
[10] L. Li, C. Chigan, Fuzzy c-means clustering based secure fusion strategy in collaborative spectrum sensing, 2014 IEEE Int. Conf. on Communications (ICC), Sydney, NSW, 2014, pp. 1355-1360. DOI: 10.1109/ICC.2014.6883510
[11] S. Mapunya, M. Velempini, Design of Byzantine attack mitigation scheme in cognitive radio ad-hoc networks, Int. Conf. on Intelligent and Innovative Computing Applications (ICONIC), Plaine Magnien, 2018, pp. 1-4. DOI: 10.1109/ICONIC.2018.8601087
[12] G. Nie, G. Ding, L. Zhang, Q. Wu, Byzantine defense in collaborative spectrum sensing via Bayesian learning, IEEE Access 5 (2017), pp. 20089-20098. DOI: 10.1109/ACCESS.2017.2756992
[13] A. Sharifi, M. Mofarreh-Bonab, Spectrum sensing data falsification attack in cognitive radio networks: an analytical model for evaluation and mitigation of performance degradation, AUT Journal of Electrical Engineering 50(1) (2018), pp. 43-50. DOI: 10.22060/eej.2017.12528.5094
[14] A. A. Sharifi, M. J. Musevi Niya, Defense against SSDF attack in cognitive radio networks: attack-aware collaborative spectrum sensing approach, IEEE Communications Letters 20(1) (2016), pp. 93-96. DOI: 10.1109/LCOMM.2015.2499286
[15] R. Wan, N. Xiong, L. Ding, X. Zhou, Mitigation strategy against spectrum-sensing data falsification attack in cognitive radio sensor networks, International Journal of Distributed Sensor Networks 15 (2019). DOI: 10.1177/1550147719870645
[16] Y. Li, Q. Peng, Achieving secure spectrum sensing in presence of malicious attacks utilizing unsupervised machine learning, MILCOM 2016 IEEE Military Communications Conference, Baltimore, MD, USA, 2016, pp. 174-179. DOI: 10.1109/MILCOM.2016.7795321
[17] I. Ahmed, E. Balestrieri, F. Lamonaca, IoMT-based biomedical measurement systems for healthcare monitoring: a review, Acta IMEKO 10(2) (2021), pp. 1-11. DOI: 10.21014/acta_imeko.v10i2.1080
[18] A. Coccia, F. Amitrano, L. Donisi, G. Cesarelli, G. Pagano, M. Cesarelli, G. D'Addio, Design and validation of an e-textile-based wearable system for remote health monitoring, Acta IMEKO 10(2) (2021), pp. 1-10. DOI: 10.21014/acta_imeko.v10i2.912
[19] A. Taggu, N. Marchang, Detecting Byzantine attacks in cognitive radio networks: a two-layered approach using hidden Markov model and machine learning, Pervasive and Mobile Computing 77 (2021). DOI: 10.1016/j.pmcj.2021.101461

Editorial to selected papers from the TC17 events "International Symposium on Measurements and Control in Robotics" (ISMCR2021) and VRISE2021 topical event on Robotics for Risky Interventions and Environmental Surveillance

ACTA IMEKO, ISSN: 2221-870X, September 2022, Volume 11, Number 3, 1-2

Zafar Taqvi1
1 Research Fellow, University of Houston Clear Lake, Houston, Texas, USA

Section: Editorial
Citation: Zafar Taqvi, Editorial to selected papers from the TC17 events "International Symposium on Measurements and Control in Robotics" (ISMCR2021) and VRISE2021 topical event on Robotics for Risky Interventions and Environmental Surveillance, Acta IMEKO, vol. 11, no. 3, article 2, September 2022, identifier: IMEKO-ACTA-11 (2022)-03-02
Received September 13, 2022; in final form September 13, 2022; published September 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Zafar Taqvi, e-mail: ztaqvi@gmail.com

Dear Readers,

This special issue includes selected papers from the two events organized by TC17, the IMEKO technical committee on robotic measurement.
Annually, TC17 organizes the "International Symposium on Measurements and Control in Robotics" (ISMCR), a full-fledged event focusing on various aspects of international research, applications, and trends of robotic innovations for the benefit of humanity, advanced human-robot systems, and applied technologies, e.g. in the allied fields of telerobotics, telexistence, simulation platforms and environments, and mobile work machines, as well as virtual reality (VR), augmented reality (AR) and 3D modelling and simulation. During IMEKO congress years, TC17 organizes only "topical events". In 2021, TC17 organized two virtual topical events, both following the COVID-19 restrictions: ISMCR2021, with the theme "Virtual media technologies for the post COVID-19 era", and TC17-VRISE, a jointly organized event with the theme "Robotics for risky interventions and environmental surveillance" (VRISE stands for Virtual Robotics for Risky Interventions and Environmental Surveillance, the same as the theme). The papers in this special issue segment were selected from these two events.

This special issue covers a variety of topics that relate to augmented reality/virtual reality (AR/VR), tools impacted by COVID-19, and 3D printing as they relate to robotics, including key applications of robotics technology. One AR/VR paper, by Karen Alexander and Jennifer Rogers, entitled "Standards and affordances of 21st-century digital learning: using ARLEM and xAPI to track bodily engagement and learning in XR (VR, AR, MR)", describes digital learning using new tools; the other paper, "AREL - augmented reality-based enriched learning experience" by A. V. Geetha and T. Mala, shows the use of AR in the learning process.

COVID-19 has impacted not only individual researchers but also experiments and their underlying tools and methodologies. Zuzana Kovarikova, Frantisek Duchon, Andrej Babinec and Dusan Labat, in their paper "Digital tools in the post COVID-19 age as a part of robotic system for adaptive joining of objects", describe the development of new tools, while Ahmed Alseraidi, Yukiko Iwasaki, Joi Oh, Takumi Handa, Vitvasin Vimolmongkolpom, Fumihiro Kato and Hiroyasu Iwata, in their paper "Experiment assisting system with local augmented body (EASY-LAB) for the post-COVID-19 era", present other COVID-related work.

3D printers have made equipment component inventories and procurement less problematic. Two papers, "Twisted and coiled polymer muscle actuated soft 3D printed robotic hand with Peltier cooler for drug delivery in medical management" by Pawandeep Singh Matharu et al. and "iGrab duo: novel 3D printed soft orthotic hand triggered by EMG signals" by Irfan Zobayed et al., discuss research on 3D-printing activities that relate to robotic components and applications. The paper "Jelly-Z: twisted and coiled polymer fishing line muscle actuated mini-jellyfish robot for environment surveillance and monitoring" by Pawandeep Singh Matharu et al., the paper "Disarmadillo: an open source remotely controlled platform for humanitarian demining" by Emanuela Cepolina, Alberto Parmiggiani, Carlo Canali and Ferdinando Cannella, and a third paper, "Path planning for data collection robots" by Sara Olasz-Szabo and Istvan Harmati, present further robotics application research.
These symposia are forums for the exchange of recent research results and futuristic ideas in robotics technologies and applications. They attract a wide range of participants from government agencies, relevant international institutions, universities and research organizations working on futuristic applications of automated vehicles, and the presentations are also of interest to the media as well as the general public. We are sure the readers will find these papers useful in their professional applications.

Dr. Zafar Taqvi
Editor, Special Issue

Electrolytic conductivity as a quality indicator for bioethanol

ACTA IMEKO, September 2014, Volume 3, Number 3, 38-42, www.imeko.org

Steffen Seitz1, Petra Spitzer1, Hans D. Jensen2, Elena Orrù3, Francesca Durbiano3
1 Physikalisch-Technische Bundesanstalt, Bundesallee 100, 38116 Braunschweig, Germany
2 Danish Fundamental Metrology Ltd., Matematiktorvet 307, DK-2800 Kgs. Lyngby, Denmark
3 Istituto Nazionale di Ricerca Metrologica, Strada delle Cacce 91, 10135 Turin, Italy

Section: Research paper
Keywords: bioethanol; electrolytic conductivity; measurement uncertainty
Citation: Steffen Seitz, P. Spitzer, F. Durbiano, H. Jensen, Electrolytic conductivity as a quality indicator for bioethanol, Acta IMEKO, vol. 3, no. 3, article 9, September 2014, identifier: IMEKO-ACTA-03 (2014)-03-09
Editor: Paolo Carbone, University of Perugia
Received June 12th, 2013; in final form May 27th, 2014; published September 2014
Copyright: © 2014 IMEKO. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by Measurement Science Consultancy, The Netherlands
Corresponding author: Steffen Seitz, e-mail: steffen.seitz@ptb.de

Abstract
We present results of the European Metrology Research Project on the SI traceability of electrolytic conductivity measurements in bioethanol. As a first step to this aim, secondary conductivity measurements have been performed to characterise reproducibility, stability, measurement uncertainty and the significance of the measurement results. The relative standard measurement uncertainty is of the order of 0.3 %, while inter-laboratory reproducibility is around 6.9 %. The measured conductivities of two samples from different sources show a relative difference of around 30 %. These results show that conductivity is an appropriate quality indicator for bioethanol; however, they also demonstrate that inter-laboratory reproducibility has to be improved, in particular with respect to SI traceability.

1. Introduction

Electrochemical characterisation of bioethanol is of interest for the identification of impurities at trace levels, to assess the risk of corrosion and potential damage to engines. High measurement accuracy and a strict application of metrological principles in establishing traceability for these measurements are mandatory to achieve meaningful results. In particular, electrolytic conductivity is a quality indicator for bioethanol that is needed as an easy-to-use tool to assess the amount of impurities. Substantial work is still required to underpin the traceability of this parameter in order to guarantee metrological comparability [1] of the results. Comparability is a prerequisite for the standardization of measurement procedures and essential for the reliability of measured material properties for engineering. Moreover, an assessment of the sensitivity, significance and uncertainty of these measurements is required. To establish comparability of measurement results, they must be traceable to an agreed common metrological reference, which, whenever possible, should be the International System of Units (SI). Nowadays, the result of an electrolytic conductivity measurement at the application level is linked to the conductivity value of a reference solution. Typically, the conductivity value of the reference solution is measured traceable to the SI by national metrology institutes by means of a primary reference measurement procedure [2].
The value indicated by a conductivity measuring system is usually adjusted by a calibration measurement, such that the actually measured resistance $R_{ref}$ is scaled by the so-called cell constant $K_{cell}$ to match the conductivity value $\kappa_{ref}$ of the reference solution:

$\kappa_{ref} = \dfrac{K_{cell}}{R_{ref}}$ . (1)

Cells whose cell constants are adjusted in this way are referred to as secondary cells, in contrast to primary cells, where the cell constant is determined by dimensional measurements [2]. The measured resistance is affected by the electric field distribution and the correlated spatial distribution of the current density within the measuring cell [3]; additionally, it is affected by electrode polarisation. Both effects depend on the design of the cell, the kind of solution and its ion concentration. Consequently, conductivity cells of different design can provide different conductivity results for an equivalent sample, even if their cell constants are adjusted with the same reference solution. Therefore, the comparability of conductivity measurement results becomes more questionable the further the properties of the solution under investigation deviate from those of the reference solution. It is practically not possible to provide a matrix-matched primary reference solution for every kind of solution; in any case, however, the measurement uncertainty must account for the effect of matrix-mismatch. Concerning bioethanol, reference solutions based on ethanol are inappropriate, mainly due to stability issues, and aqueous KCl solutions are typically used for cell calibration [4]. It must be emphasised that the nominal conductivity value of the lowest stable aqueous KCl reference solution recommended by OIML is 140.83 mS m⁻¹ [5], that of IUPAC is 140.82 mS m⁻¹ [6] at 25 °C, and that of ASTM solution D is 14.693 mS m⁻¹ [7], while the conductivity of bioethanol is in the order of 0.1 to 0.2 mS m⁻¹. Hence, the common calibration procedure makes use of a reference solution that differs significantly from bioethanol, both in matrix and in conductivity value. As a consequence, it must be investigated whether such solutions can nevertheless be used as reference solutions and to what extent the measurement uncertainty must be increased due to the matrix-mismatch. In particular, comparison measurements in which cells of different design are used could give more insight into the effect of the matrix-mismatch. Currently, there exist no conductivity measurements of bioethanol based on primary reference procedures that could be used as a basis for providing traceability of measurement results at the application level.
Therefore, a work package has been established within the European Metrology Research Project ENG09 [8, 9] that covers, among others, two main objectives related to the use of electrolytic conductivity as an important quality indicator for bioethanol: (i) research into the measurement of electrolytic conductivity from the primary level to the application level in order to establish SI traceability, and (ii) the provision of exemplary reference data for bioethanol. As a first step towards establishing traceability, the conductivities of two bioethanol samples of different origin, one from Brazil and one from a German producer, were measured with a secondary conductivity measurement cell. The cell constant was determined after calibration with a glycerol-based KCl solution whose conductivity was in the conductivity range of bioethanol. A method recently investigated by the authors [10] was used to determine the solution resistance from impedance spectroscopy measurements of the cell/solution system; this method has been developed specifically to minimise the effect of electrode polarisation on the derived solution resistance in the low-conductivity range. Additionally, the measurement uncertainties, which in particular include contributions from stability and reproducibility, have been determined from the derived solution resistances. Significant differences between the conductivity values of the two bioethanol samples have been observed.

2. Measurement procedure

Conductivity measurements were performed with a two-electrode Jones-type cell; a sketch of the setup is shown in Figure 1. The general design of the cell is similar to that described in [6], but without a removable centre section. Two round, flat electrodes (diameter 2 cm), made of blank platinum, are arranged opposite each other in a cylindrical body (inner diameter 2.2 cm) made of borosilicate glass; the distance between the electrodes is around 1 cm. Two glass pipes are connected to the main cylinder to fill and empty the cell. The cell constant was determined with a glycerol-based KCl solution at 25 °C. The conductivity value $\kappa_{ref}$ of the reference solution was determined with the primary conductivity measurement setup of PTB [2] to be (133.0 ± 0.17) µS m⁻¹; the resulting cell constant is 18.61 m⁻¹. If not mentioned otherwise, all stated uncertainties are standard uncertainties according to the "Guide to the expression of uncertainty in measurement" (GUM) [11]. The cell was placed in an air thermostat. The temperature of the solution was measured with a calibrated Pt-100 temperature sensor connected to an MKT50 measurement bridge from Anton Paar. The sensor is coated with PTFE and was placed in one of the filling tubes to measure the temperature directly in the solution. After the cell was filled, it took about 60 to 90 minutes until stable temperature conditions were achieved; the temperature variation was then less than 2 mK around the mean temperature. Two different bioethanol samples, one from Brazil (produced from sugar cane) and one from Germany (produced from sugar beet), were measured. 2 L of each sample were homogenised and finally bottled into 250 mL borosilicate bottles under an argon atmosphere that had previously been bubbled through ethanol in a gas washing bottle. The measurements were performed according to the following steps: 1) The conductivity measurement cell was cleaned several times with ultra-pure water and finally filled with ultra-pure water.
Then, a bottle with the sample and the cell were put into the air bath at 25 °C for at least 12 hours before the measurement.

Figure 1. Sketch of the measurement setup, using a two-electrode cell placed in a temperature-controlled air bath. Contact of the sample with ambient air is minimised by pumping it into the cell; argon is used to dry the cell after cleaning it with ultra-pure water.

2) The cell was emptied and flooded with argon for about half an hour until it was dry. Using a peristaltic pump and chemically inert Norprene tubes, the sample was pumped into the cell until it was filled almost up to the rim of the filling tubes; finally, the inlets were sealed with tape. Evaporation of ethanol within the cell was not completely prevented during this filling step; however, the surface of the solution exposed to air or argon was small and the filling time was less than 30 s. The corresponding measurement uncertainty has been considered in terms of measurement reproducibility. 3) An impedance spectrum between 20 Hz and 500 kHz, 5 steps per decade, was measured and the best frequency range (see below) was chosen for the measurement. 4) Afterwards, impedance spectra were recorded together with the temperature for more than 2 h of measuring time. At the end, the cell was emptied and cleaned several times with ultra-pure water.

3. Conductivity calculation

The determination of the resistance $R_{sol}$ of the solution between the electrodes is based on an analysis of impedance spectra of various low-conductivity solutions measured with different cell types. The basic concept was developed within the iMERA-Plus European Metrology Research Programme, TP2 JRP10 [10]. In brief, the determination of the solution resistance is based on the equivalent circuit shown in Figure 2. The corresponding impedance spectrum can be separated into two regions. The low-frequency part of the spectrum is dominated by electrode polarisation, which is represented by the CPE element and the polarisation resistance $R_p$; the latter accounts for a residual charge transfer across the electrodes. In a complex plane plot, this part of the spectrum is nearly a straight line, slightly curved due to the influence of $R_p$. In the high-frequency part of the spectrum, polarisation effects can be neglected and the complex plane plot is a semicircle. For the cell used in this investigation, the effect of electrode polarisation on the spectrum can be neglected above 10 kHz for highly resistive solutions like ethanol; in this region, the equivalent circuit simplifies to the parallel combination of $C_g$ and $R_{sol}$. We have chosen measurement frequencies between around 10 kHz and 400 kHz that result in fairly equidistant impedance values across the semicircle. At each given frequency, the mean impedance was calculated from at least 15 measurements. These mean values were used for the semicircle fit, and the solution resistance was derived from the corresponding radius r as $R_{sol} = 2r$. This procedure has turned out to be more robust than calculating the solution resistance analytically from the impedances by assuming the parallel combination of $R_{sol}$ and $C_g$: the latter approach usually shows a significant dependence of the resistance on frequency, resulting from small impedance measurement errors. Figure 3 shows the impedances of a typical measurement of bioethanol and the corresponding semicircle fit in a complex plane plot.
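As a purely illustrative sketch (not the authors' code), the semicircle fit used to extract $R_{sol}$ from the high-frequency impedances can be implemented as an algebraic least-squares circle fit in the complex plane. The Kasa-style formulation and the synthetic test data below are assumptions of this sketch.

```python
import numpy as np

def solution_resistance(Z):
    """Fit a circle to impedance points Z (complex array) with the
    algebraic (Kasa) least-squares method and return R_sol = 2r."""
    x, y = Z.real, Z.imag
    # Circle model: x^2 + y^2 + a*x + b*y + c = 0, solved in least squares
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    r = np.sqrt(a**2 / 4 + b**2 / 4 - c)   # circle radius
    return 2 * r

# Synthetic check: the parallel of R_sol and C_g traces a semicircle
# of diameter R_sol in the complex plane
R_sol, C_g = 150e3, 20e-12
f = np.logspace(4, np.log10(4e5), 15)            # 10 kHz ... 400 kHz
Z = R_sol / (1 + 1j * 2 * np.pi * f * R_sol * C_g)
print(solution_resistance(Z))                    # ~150000 ohm
```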
The average relative deviation of the measured data points from the fit is less than 0.1 %. The impedance measurements were performed with a high-precision commercial LCR meter (Agilent 4284A). The conductivity value $\kappa_{sol}(t)$ at the mean measurement temperature $t$, given in °C, is calculated from $R_{sol}$ and the calibrated cell constant $K_{cell}$ in analogy to equation (1). The impedances are typically not measured at the exact set temperature of 25 °C; the measurement temperature deviates by a few tens of mK. The conductivity value at the measurement temperature $t$ is therefore linearly corrected to the value $\kappa_{sol}(25\,°\mathrm{C})$ using

$\kappa_{sol}(25\,°\mathrm{C}) = \kappa_{sol}(t)\,/\,\left[\,1 + \alpha\,(t - 25\,°\mathrm{C})\,\right]$ . (2)

For bioethanol, a linear relative temperature coefficient $\alpha_{be}$ = (2.0 ± 0.15) % °C⁻¹ at 25 °C has been determined from conductivity measurements between 20 °C and 27 °C. The linear temperature coefficient $\alpha_{ref}$ of the reference solution is 5.09 % °C⁻¹ at 25 °C. Using equations (1) and (2), the final conductivity value $\kappa_{be}(25\,°\mathrm{C})$ of a bioethanol sample has been calculated from the input variables as

$\kappa_{be}(25\,°\mathrm{C}) = \kappa_{ref}(25\,°\mathrm{C})\; \dfrac{\left(1 + \alpha_{ref}\,(t_{ref} - 25\,°\mathrm{C})\right) R_{ref}}{\left(1 + \alpha_{be}\,(t_{be} - 25\,°\mathrm{C})\right) R_{be}}$ . (3)

In equation (3), the index "ref" refers to the reference solution and the index "be" to bioethanol.

Figure 3. Impedances of a bioethanol sample in a complex plane plot. The dots are the measured impedances Z, the solid line is a semicircle fit. The frequency range is from 10 kHz to 400 kHz.

Figure 2. Equivalent circuit used to model the cell/solution system to derive the solution resistance $R_{sol}$. Electrode polarisation is represented by the CPE element and the polarisation resistance $R_p$; $C_g$ is the geometric capacitance of the electrodes.
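The calibration-and-correction chain of equations (1)-(3) is simple enough to sketch. The following illustrative function (not from the paper) takes the calibration measurement and the sample measurement and returns the temperature-corrected conductivity; the conductivity and temperature-coefficient values are taken from the text, while the resistances and temperatures in the example call are made-up, plausible numbers (e.g., $R_{ref} \approx K_{cell}/\kappa_{ref}$).

```python
def conductivity_25C(kappa_ref_25, alpha_ref, R_ref, t_ref,
                     alpha_be, R_be, t_be):
    """Equations (1)-(3): cell constant from the calibration
    measurement, then temperature-corrected sample conductivity."""
    # Cell constant, eq. (1), with the reference conductivity moved
    # from 25 degC to the calibration temperature via eq. (2)
    K_cell = kappa_ref_25 * (1 + alpha_ref * (t_ref - 25.0)) * R_ref
    # Sample conductivity at its measurement temperature ...
    kappa_be_t = K_cell / R_be
    # ... corrected back to 25 degC, which together gives eq. (3)
    return kappa_be_t / (1 + alpha_be * (t_be - 25.0))

# kappa_ref = 133.0 uS/m, alpha_ref = 5.09 %/degC, alpha_be = 2.0 %/degC
# (values from the text); resistances and temperatures are illustrative
print(conductivity_25C(133.0e-6, 0.0509, 139.9e3, 25.02,
                       0.020, 130.4e3, 24.98))   # ~142.7e-6 S/m
```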
4. Results

The measured conductivity values of the two bioethanol samples are: Brazilian sample (108.37 ± 0.33) µS m⁻¹; German sample (142.67 ± 0.43) µS m⁻¹. The values are significantly larger than for pure synthetic ethanol, which has a conductivity of a few µS m⁻¹, and the difference between the two results is much larger than their uncertainties. Consequently, conductivity measurements can well serve to characterise bioethanol samples. There are no details available about the residual ion concentrations or the production conditions of the samples, so the difference is difficult to explain. Using ion chromatography, we performed a first analysis of one of the samples: the (uncalibrated) anionic ion chromatogram showed a significant and predominant amount of chloride compared to a measurement of pure synthetic ethanol. The measured difference of the conductivity values could therefore be the result of residual dissolved chloride salts; however, this assumption still needs to be verified quantitatively.

Figure 4 demonstrates the stability of the measurement results after temperature equilibrium has been reached; the error bars indicate the expanded measurement uncertainty (coverage factor 2). An unspecific, small drift can be seen, whose cause is not clear; the drift within the measurement period is, however, included in the stated measurement uncertainty.

Figure 4. Stability of the conductivity measurement results of bioethanol from Brazil (above) and Germany (below). The error bars indicate the expanded (k = 2) uncertainty.

Table 1 identifies the main sources of uncertainty (first column) and their estimated standard uncertainties (second column), exemplarily for the bioethanol sample from Germany. Note that the resistance and temperature uncertainties comprise systematic and statistical contributions. Inaccuracies of the measuring devices and, in the case of the resistances, of the method used to derive them enter the systematic contributions. The systematic uncertainties of the solution resistances were calculated with a Monte Carlo method [12], since it is practically impossible to use the analytical GUM framework to handle the complex-valued impedances and the fitting procedures involved in the resistance calculation. The statistical contributions reflect the measurement stability and were calculated from the standard deviation of the mean of the measured values. Uncertainty propagation was then calculated straightforwardly from equation (3) according to the general GUM uncertainty framework [11]. The last column of Table 1 shows the relative uncertainty contributions of the input variables to the uncertainty of the conductivity value; the main contributions result from the conductivity of the reference solution and the repeatability of the measurement results. The latter was determined from independent measurements of four samples that had been homogenised and afterwards bottled as described above. The observed variation of the values, within a relative standard deviation of 0.27 %, is probably due to the instability of the measurement shown in Figure 4.

Table 1. Contributions to the combined measurement uncertainty of the conductivity value $\kappa_{be}$, exemplarily for the bioethanol sample from Germany. $u_{x_i}(\kappa_{be})$ is the propagated uncertainty contribution of $x_i$ to the uncertainty of $\kappa_{be}$.

Source of uncertainty of input quantity x_i        u(x_i)         u_xi(k_be)/k_be (%)
Conductivity of reference solution                 0.17 µS m⁻¹    -
Temperature of reference solution (systematic)     10 mK          0.051
Temperature stability of reference solution        0.4 mK         0.002
Temperature of bioethanol (systematic)             10 mK          0.020
Temperature stability of bioethanol                1.6 mK         0.002
Resistance of reference solution (systematic)      124 Ω          0.099
Resistance stability of reference solution         2.3 Ω          0.002
Resistance of bioethanol (systematic)              149 Ω          0.114
Resistance stability of bioethanol                 2.9 Ω          0.022
Repeatability                                      0.27 %         0.27

The uncertainty calculation also accounted for correlations between the input quantities:
any systematic measurement error in (i) or (ii) due to an offset is therefore nearly equal in the measurement of the reference solution and the solution under investigation. scaling effects have been neglected, since the measurement results are of similar magnitude. as a consequence, although the relative uncertainties of the measured resistances are compatible to that of the conductivity reference value and to that attributed to repeatability, they barely contribute to the combined uncertainty of the conductivity value. for (iii) the correlation coefficient has been statistically calculated from temperature corrected resistance values and the corresponding temperatures, in order to account for correlations that are not covered by the linear temperature correction. here the correlation coefficient is typically around -0.5 to -0.7. the described measurements have been performed at the physikalisch-technische bundesanstalt (ptb) and reflect the characteristics of the setup used there. in order to estimate inter laboratory reproducibility a conductivity comparison measurement of bioethanol was performed, including the german (ptb), the danish (danish fundamental metrology) and the italian (istituto nazionale di ricerca metrologica) metrology institutes. all participants used secondary cells for the measurements. two institutes calibrated the cells with glycerol based kcl solutions, and one institute used a water based kcl solution. the relative standard deviation of the results was 6.9%, which is significantly larger than the individually reported standard measurement uncertainties (0.3% to 1%). this cannot be explained by inhomogeneity of the samples. it is more likely due to differences in sample handling, in cell design, in measurement and data evaluation procedure. the comparison will be repeated with more institutes and a more detailed sample handling and measurement instruction. however, the result of the comparison gives an upper limit for inter laboratory reproducibility of conductivity measurements of bioethanol, even though it is, for the time being, rather large compared to typical conductivity measurements. 5. conclusion  the results indicate that conductivity measurements can well serve to measure differences in the composition of bioethanol samples. under laboratory conditions the combined relative standard uncertainty of such measurements is around 0.3%. this particularly includes contributions from the stability of the solution during the measurement and the repeatability of the measurement results (at a single institute). however, the measured conductivities have been related to the conductivity value of the glycerol based kcl solution that has been used to adjust the cell constant. the measured cell constant of a secondary cell depends on the matrix of the reference solution. therefore the matrix-mismatch of bioethanol and the reference solution cast doubts on the comparability of the measured values, if these are measured using a different cell type. this assessment is also supported by the relatively bad inter laboratory reproducibility of 6.9%. in other words measurements of the same solutions using another cell type could provide different conductivity values, even if the cell constant is determined with the same reference solution. however, getting consistent, i.e. comparable, measurement results is a prerequisite for any standardisation work and a reliable data base for engineering. therefore further work is needed to achieve this aim. 
The next steps will be to investigate conductivity measurements of bioethanol on the primary level and to perform further comparison measurements to investigate the effect of different designs of secondary cells on the measured values.

Acknowledgement

The research leading to these results has received funding from the European Union on the basis of Decision No 912/2009/EC.

References

[1] International vocabulary of metrology, ISO/IEC, 2012.
[2] F. Brinkmann, N. E. Dam, E. Deák, F. Durbiano, E. Ferrara, J. Fükö, H. D. Jensen, M. Máriássy, R. H. Shreiner, P. Spitzer, U. Sudmeier, M. Surdu, L. Vyskocil, Primary methods for the measurement of electrolytic conductivity, Accred. Qual. Assur. 8 (2003), pp. 346-353.
[3] S. L. Schiefelbein, N. A. Fried, K. G. Rhoads, D. R. Sadoway, A high-accuracy, calibration-free technique for measuring the electrical conductivity of liquids, Rev. Sci. Instrum. 69 (1998), pp. 3308-3313.
[4] J. Barthel, F. Feuerlein, R. Neueder, R. Wachter, Calibration of conductance cells at various temperatures, Journal of Solution Chemistry 9 (1980), pp. 209-219.
[5] Standard solutions reproducing the conductivity of electrolytes, OIML, 1981.
[6] K. W. Pratt, W. F. Koch, Y. C. Wu, P. A. Berezansky, Molality-based primary standards of electrolytic conductivity, Pure Appl. Chem. 73 (2001), pp. 1783-1793.
[7] Standard test methods for the electrical conductivity and resistivity of water, ASTM International, 1995.
[8] European Metrology Research Programme. Available: http://www.emrponline.eu/
[9] Publishable JRP summary: EMRP JRP ENG09 (2009). Available: http://www.euramet.org/fileadmin/docs/emrp/jrp/jrp_summaries_2009/eng09_publishable_jrp_summary.pdf
[10] Publishable JRP summary: EMRP T2 JRP10 TraceBioactivity (2007). Available: http://www.euramet.org/fileadmin/docs/emrp/jrp/imera-plus_jrps_2010-06-22/t2.j10_tracebioavtivity_publishable_summary_april10_v1.pdf
[11] Guide to the expression of uncertainty in measurement, JCGM, 2008.
[12] Guide to the expression of uncertainty in measurement, Supplement 1: Propagation of distributions using a Monte Carlo method, JCGM, 2008.

A comparison between aeroacoustic source mapping techniques for the characterisation of wind turbine blade models with microphone arrays

ACTA IMEKO, ISSN: 2221-870X, December 2021, Volume 10, Number 4, 147-154

Gianmarco Battista1, Marcello Vanali2, Paolo Chiariotti3, Paolo Castellini1
1 Università Politecnica delle Marche, Via Brecce Bianche, 60121 Ancona, Italy
2 Università di Parma, Parco Area delle Scienze, 43124 Parma, Italy
3 Politecnico di Milano, Via Giuseppe La Masa, 20156 Milano, Italy

Section: Research paper
Keywords: aeroacoustic measurements; microphone array measurements; wind turbines; acoustic source identification
Citation: Gianmarco Battista, Marcello Vanali, Paolo Chiariotti, Paolo Castellini, A comparison between aeroacoustic source mapping techniques for characterisation of wind turbine blade models with microphone arrays, Acta IMEKO, vol. 10, no. 4, article 24, December 2021, identifier: IMEKO-ACTA-10 (2021)-04-24
Section editor: Francesco Lamonaca, University of Calabria, Italy
Received July 26, 2021; in final form September 30, 2021; published December 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Gianmarco Battista, e-mail: g.battista@staff.univpm.it

Abstract
Characterising the aeroacoustic noise sources generated by a rotating wind turbine blade provides useful information for tackling noise reduction of this mechanical system. In this context, microphone array measurements and acoustic source mapping techniques are powerful tools for the identification of aeroacoustic noise sources. This paper discusses a series of acoustic mapping strategies that can be exploited in this kind of application. A single-blade rotor was tested in a semi-anechoic chamber using a circular microphone array. The virtual rotating array (VRA) approach, which transforms the signals acquired by the physical static array into signals of virtual microphones synchronously rotating with the blade, hence ensuring noise-source stationarity, was used to enable the use of frequency domain acoustic mapping techniques. A comparison among three different acoustic mapping methods is presented: conventional beamforming, CLEAN-SC, and covariance matrix fitting based on iterative re-weighted least squares and a Bayesian approach. The latter proved to provide the best results for the application and made possible a detailed characterisation of the noise sources generated by the rotating blade at different operating conditions.

1. Introduction

The growing interest in renewable energy sources is demanding advances in several disciplines in order to reduce technological barriers and improve energy conversion efficiency. One of the mainstream technologies is wind power: the worldwide installed capacity of wind energy assets has been growing exponentially since the early 2000s. On the one hand, this sector is grabbing the attention of many industries and research groups; on the other hand, it is subject to increasing regulation. In fact, one of the critical aspects of wind turbines is noise pollution. In order to mitigate wind turbine blade noise, the identification of the location and strength of aeroacoustic noise sources is mandatory; this knowledge makes it possible to improve blade profiles and design effective aerodynamic appendages, such as trailing-edge serrations. In this work, acoustic imaging techniques based on microphone arrays [1] have been used to characterise a scale single-blade rotor, installed in a semi-anechoic chamber, in different operating conditions. The requirements of a mapping technique for this application are:
• the ability to deal with rotating sources;
• sufficient spatial resolution, with respect to the model size, to distinguish different sources on the blade;
• sufficient dynamic range to identify also weak sources with respect to the strongest one.
A first classification of acoustic imaging methods distinguishes time domain and frequency domain approaches. Time domain approaches [2] are typically used for selectively shaping and steering the directivity of the array (directive-microphone-like behaviour); these methods can be used when the sources of interest are time-variant, both in terms of position and emitted noise [3].
Frequency domain approaches are more often used for characterising acoustic sources in terms of location and strength. Most frequency domain techniques use the cross-spectral matrix (CSM) estimated from the microphone signals as input data. The effect of incoherent background noise on the acoustic maps can therefore be attenuated by the averaging process; moreover, some methods can handle the removal of the CSM main diagonal to neglect the contribution of noise that is incoherent across the array microphones (e.g., wind noise). Conventional beamforming (CB) [1] is the most widespread frequency domain mapping technique due to its robustness and low computational cost; however, it suffers from limitations in terms of dynamic range and spatial resolution. Several advanced frequency domain mapping techniques are available in the literature [1], [4] that go beyond the limitations of CB and also make the quantification of noise sources possible. Advanced frequency domain techniques are generally preferred to time domain approaches in aeroacoustic applications, since they generate acoustic maps with high dynamics and fine spatial resolution [4]. In the frequency domain, three categories of mapping techniques can be identified, depending on how the region of interest (ROI) is mapped using the pressure data at the microphones, i.e., how the inverse operator is defined: beamforming, deconvolution, and inverse methods. The basics of the different approaches for defining inverse operators are provided in the next section, while a detailed review is provided by Leclère et al. [5]. Among beamforming techniques, functional beamforming [6], a variant of CB, is worth noticing: this simple method enhances the performance and flexibility of CB in terms of resolution and dynamics of the maps, but it is not compatible with diagonal removal and is therefore not very effective in the presence of relevant background noise. One of the most recognised deconvolution methods is DAMAS (deconvolution approach for the mapping of acoustic sources) [7], which aims at retrieving the actual source distribution that generated the CB map by solving an inverse problem. The results achievable with DAMAS are generally suitable for all demanding applications; however, the computational effort is quite high, since the method requires the calculation of the array point spread function (PSF) for all candidate sources in the ROI and the solution of an inverse problem. The deconvolution method named CLEAN-SC (CLEAN based on source coherence) [8] is currently the state of the art as far as aeroacoustic applications are concerned, since it has a low computational cost (just slightly higher than CB) and is very effective in generating maps with high dynamics; its main drawback is the spatial resolution in separating sources close together, since it has the same limitations as CB. Lastly, inverse methods such as generalized inverse beamforming [9], the Bayesian approach to sound source reconstruction [21], the equivalent source method [10] and covariance matrix fitting (CMF) [11] aim at retrieving the complete source map at once, and are thus capable of accounting for the interaction between sources. However, since they deal with under-determined and ill-posed problems, they require reliable regularization techniques (e.g., empirical Bayesian regularization [12]).
The application of frequency domain approaches requires a stationary acoustic field, which is not the case for a rotating source viewed by a static array. The virtual rotating array (VRA) approach [13] has been adopted in this work to fulfil the requirement of a static source field and enable the application of any frequency domain mapping technique. Three methods have been chosen for this test case: CB as a baseline, CLEAN-SC, since it is a reference technique for aeroacoustic source mapping, and CMF based on iterative re-weighted least squares and a Bayesian approach (CMF-IRLS) [14]. A comparison between the results obtained with the frequency domain techniques is performed; finally, the characterisation of the wind turbine blade model at different operating speeds with CMF-IRLS is provided.

2. Theoretical background of acoustic source mapping

As far as frequency domain approaches are concerned, the direct and the inverse acoustic problem can be formulated as a linear problem. Consider a set of $N$ elementary sources (monopoles, dipoles, etc.) whose complex coefficients are collected in the vector $\boldsymbol{q}$, and a set of $M$ receiver locations where the acoustic pressure is evaluated and collected in the complex vector $\boldsymbol{p}$. The discrete acoustic propagator $\boldsymbol{G}$ is a complex $M$-by-$N$ matrix that encodes the amplitude and phase relationships between the sources and the receivers at a given frequency. The direct acoustic problem is the calculation of the pressures at the receiver locations (effects), given the source strengths (causes) and the acoustic propagator:

$\boldsymbol{G}\,\boldsymbol{q} = \boldsymbol{p}$ . (1)

This is a mathematically well-determined problem with a unique solution. Conversely, the calculation of the source strengths (causes) from the observed pressures (effects), for a given $\boldsymbol{G}$, represents the inverse acoustic problem. Solving this problem is not trivial, due to its ill-posed nature: the existence, the uniqueness and the stability of the solution are not guaranteed [15]. The solution of the inverse problem can be expressed as

$\hat{\boldsymbol{q}} = \boldsymbol{H}\,\boldsymbol{p}$ , (2)

where $\boldsymbol{H}$ is the inverse operator, which can assume different forms depending on the chosen approach. It is then clear that the source strengths $\hat{\boldsymbol{q}}(\boldsymbol{H})$ can only be estimated; moreover, this estimation strongly depends on the a priori assumptions made with respect to the acoustic propagator and the measured pressure data $\boldsymbol{p}$. Both the direct and the inverse problem can also be written in quadratic form:

$\boldsymbol{G}\,\boldsymbol{Q}\,\boldsymbol{G}^{H} = \boldsymbol{P}$ (3)

$\hat{\boldsymbol{Q}} = \boldsymbol{H}\,\boldsymbol{P}\,\boldsymbol{H}^{H}$ , (4)

where the superscript $^{H}$ denotes the conjugate transpose operator, and $\boldsymbol{Q}$ and $\boldsymbol{P}$ are the source and pressure cross-spectral matrices (CSM), respectively. Beamforming approaches solve a scalar inverse problem, i.e., each potential source strength in the region of interest is estimated independently from the others. The beamforming inverse operators $\boldsymbol{h}_n$, named steering vectors, are calculated as functions of the direct propagator columns $\boldsymbol{g}_n$ and act as spatial filters, whose properties depend on their formulation [16]. Beamforming techniques are widely used for their simplicity and robustness; however, sound source quantification is not possible unless dedicated integration techniques are applied [17].
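To make the notation above concrete, here is a small illustrative NumPy sketch (not from the paper): a free-field monopole propagator $\boldsymbol{G}$, a simulated rank-one CSM $\boldsymbol{P}$ for a single source, and a conventional beamforming map obtained by steering over a grid. The simple column-normalised steering vectors and the geometry values echoing the paper's setup are assumptions of this sketch.

```python
import numpy as np

c = 343.0            # speed of sound, m/s
f = 2000.0           # analysis frequency, Hz
k = 2 * np.pi * f / c

def propagator(grid, mics):
    """Free-field monopole transfer matrix G (M x N), eq. (1)."""
    d = np.linalg.norm(mics[:, None, :] - grid[None, :, :], axis=2)
    return np.exp(-1j * k * d) / (4 * np.pi * d)

# 40-microphone circle of 3 m diameter; source plane 3 m away
phi = np.linspace(0, 2 * np.pi, 40, endpoint=False)
mics = np.column_stack([1.5 * np.cos(phi), 1.5 * np.sin(phi), np.zeros(40)])
xs = np.linspace(-0.9, 0.9, 61)
grid = np.array([[x, y, 3.0] for y in xs for x in xs])

G = propagator(grid, mics)
q = np.zeros(len(grid), complex)
q[len(grid) // 2 + 5] = 1.0                  # one unit-strength monopole
P = np.outer(G @ q, (G @ q).conj())          # noise-free CSM, eq. (3)

# Conventional beamforming: eq. (4) with H built from normalised
# steering vectors, evaluated independently for each grid point
W = G / np.linalg.norm(G, axis=0)
bmap = np.real(np.einsum('mn,mk,kn->n', W.conj(), P, W))
print(grid[np.argmax(bmap)])                 # ~ true source position
```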
Deconvolution methods have been developed to overcome beamforming limitations, since the array spatial response, i.e., the point spread function (PSF), affects the acoustic map returned by beamformers; deconvolution methods aim at retrieving the actual source distribution that generated the beamforming map by removing the effect of the PSF. In contrast to the beamforming approach, inverse methods aim at estimating all potential sources together, hence also accounting for the interaction between sources. Frequently, this entails the solution of heavily under-determined problems, since the number of microphones (equations) is much lower than the number of potential sources in the region of interest (unknowns). Another issue with inverse methods is the adoption of a robust regularization mechanism capable of estimating the proper amount of regularization depending on the specific problem and the measured data. The choice of the ROI and its discretisation with elementary sources also plays a crucial role in retrieving an accurate solution. Despite the complexity of these methods, their advantages justify their application.

3. Materials and methods

The object of study of this paper is a scale blade model with a radius of 1.5 m. The design of this model targets small wind turbines, which are defined by the standard IEC 61400 as those having a rotor spanning up to 200 m² (about 16 m diameter). Figure 1 shows a simplified scheme of the experimental setup installed in the semi-anechoic chamber of Università Politecnica delle Marche.

Figure 1. Scheme of the set-up in the anechoic chamber.

The single-blade rotor must reach 650 rpm to operate at nominal conditions in terms of aerodynamic angle of attack and wind speed at the tip; the latter corresponds to 102 m/s (367.2 km/h), which is typical for small wind turbines. The single-blade rotor is placed at a distance of 3 m in front of the circular microphone array, which has 40 equally spaced 1/4'' microphones (B&K array microphone type 4951) installed on a circumference of D = 3 m diameter. The centre of the array is aligned with the rotation axis, and the microphone plane is parallel to the blade rotation plane. An asynchronous electric motor, controlled by an inverter, drives the single-blade rotor at the desired angular speed; the motor is also equipped with an incremental encoder for measuring the angular position of the blade. All sensor signals (microphones and encoder) were acquired synchronously at 102.4 kSamples/s per channel; each acquisition lasts 7.5 s and is performed at constant angular speed of the rotor. The realisation of acoustic maps for moving sources usually requires time domain beamforming, since the distances from the microphones to the focus points constantly change over time, thus requiring time-dependent delays for focusing the array. However, aeroacoustic applications require maps with high dynamic range and fine spatial resolution, which are achievable only with the more sophisticated frequency domain approaches. Most of these use the microphone CSM as input data to produce the acoustic map. When dealing with stochastic signals, such as aeroacoustic noise, the cross-spectra must be averaged over several time snapshots to obtain a meaningful spectral estimation and to reduce the effect of uncorrelated background noise. The averaging process requires the sources to be stationary in time and space with respect to the array. The pressure signals of the rotating blade acquired with the static circular array do not fulfil these conditions, since the source-microphone relative position changes over time.
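As an aside, the CSM estimate that these frequency domain methods start from is a straightforward average over windowed time snapshots. A minimal illustrative sketch follows; the FFT length, window and overlap are assumptions of this sketch, not values stated in the paper.

```python
import numpy as np

def csm(signals, fs, nfft=4096, overlap=0.5):
    """Cross-spectral matrix per frequency bin, averaged over
    windowed snapshots: P[f] = <X[f] X[f]^H>, shape (F, M, M)."""
    M, n = signals.shape
    hop = int(nfft * (1 - overlap))
    win = np.hanning(nfft)
    starts = range(0, n - nfft + 1, hop)
    P = np.zeros((nfft // 2 + 1, M, M), complex)
    for s in starts:
        X = np.fft.rfft(signals[:, s:s + nfft] * win, axis=1)  # M x F
        P += np.einsum('mf,kf->fmk', X, X.conj())
    return P / len(starts)

# e.g. for 40 channels acquired for 7.5 s at 102.4 kSamples/s:
# P = csm(vra_signals, fs=102.4e3)   # vra_signals: (40, 768000) array
```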
To face this aspect, the virtual rotating array (VRA) approach [13] has been adopted to turn the sound pressure signals recorded with the static physical array into the signals of a virtual array rotating synchronously with the blade. In this way, the blade appears at a fixed position with respect to the VRA, thus making it possible to adopt any frequency domain imaging technique. The simplest realisation of the VRA requires a circular array, parallel and co-axial with the rotor, and the knowledge of the instantaneous angular position of the blade. When a rotating array is used (whether physical or virtual), the medium does not rotate at the same speed and therefore appears to rotate from the perspective of the VRA; the acoustic propagator assumed for the calculation of the acoustic maps must consider the propagation of acoustic waves through a rotating flow field to obtain meaningful results.

3.1. Virtual rotating array

The working principle of the VRA relies on the transformation of the pressure signals $p_m(t)$ recorded by the physical array into pressure signals $p_{m_v}(t)$ as if they were recorded by virtually rotating microphones. The virtual array has the same layout as the physical one, thus having $M = 40$ microphones equally spaced along the circumference. The position of a virtual microphone, rotating on the same circumference as the physical array, does not correspond to the position of any physical microphone most of the time, but its signal can be estimated by spatial interpolation of the signals recorded on the array circumference. The instantaneous position of each virtual microphone is determined from the angular position of the rotor $\phi(t)$. The calculation of the pressure value for each sample of a virtual microphone signal requires the identification of the pair of adjacent physical sensors to be used for the interpolation. These are identified by the indices $m_l$ and $m_u$, which are functions of time $t$ and of the virtual microphone index $m_v$:

$m_l(m_v, t) = \left\lfloor m_v + \dfrac{\phi(t)}{\alpha} - 1 \right\rfloor \bmod M + 1$ , $\quad m_u(m_v, t) = \left\lfloor m_v + \dfrac{\phi(t)}{\alpha} \right\rfloor \bmod M + 1$ , (5)

where $\lfloor \cdot \rfloor$ is the floor function and $\alpha = 2\pi/M$ is the angular spacing between the sensors. Once the pair of microphones for the spatial interpolation is selected, for each $t$ and $m_v$, the value of the virtual signal sample $p_{m_v}(t)$ is calculated as the weighted sum of the samples of the physical microphones

$p_{m_v}(t) = p_{m_l}\, s_l + p_{m_u}\, s_u$ , (6)

with the weights determined as

$s_u(t) = \dfrac{\phi(t)}{\alpha} - \left\lfloor \dfrac{\phi(t)}{\alpha} \right\rfloor$ , $\quad s_l(t) = 1 - s_u(t)$ . (7)

The signals $p_{m_v}(t)$ obtained with this procedure can be used to estimate the microphone CSM.

3.2. Processing techniques applied to the case of study

The instantaneous angular position of the rotor $\phi(t)$ is retrieved from the encoder signal and used to calculate the VRA time histories. The region of interest has been defined as a rectangular area of 1.80 m × 0.90 m on the rotor plane, i.e., at 3 m from the array plane; from the VRA point of view, this area contains the blade and the hub of the rotor. The rectangular area has been discretised with a grid of monopoles with 0.02 m step, thus having 4186 potential sources. The acoustic propagator from the monopoles to the microphones is modelled with the pressure-to-pressure free-field propagator [14], in which the geometric straight-line distances are replaced by the actual propagation distances, calculated considering the rotating flow field.
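A compact illustrative implementation of the VRA interpolation in equations (5)-(7) (not the authors' code; zero-based array indexing replaces the paper's one-based microphone numbering) could read:

```python
import numpy as np

def virtual_rotating_array(p, phi):
    """Turn static-array signals p (M x T) into signals of an array
    rotating with the rotor angle phi (length-T array), eqs. (5)-(7)."""
    M, T = p.shape
    alpha = 2 * np.pi / M                 # angular spacing between sensors
    frac = phi / alpha
    s_u = frac - np.floor(frac)           # eq. (7)
    s_l = 1.0 - s_u
    base = np.floor(frac).astype(int)     # rotor angle in whole sensor steps
    t = np.arange(T)
    p_v = np.empty_like(p)
    for m_v in range(M):                  # eq. (5), zero-based indices
        m_l = (m_v + base) % M
        m_u = (m_v + base + 1) % M
        p_v[m_v] = s_l * p[m_l, t] + s_u * p[m_u, t]   # eq. (6)
    return p_v

# e.g. p_v = virtual_rotating_array(mic_signals, encoder_angle_rad)
```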
3.2. Processing techniques applied to the case of study

The instantaneous angular position of the rotor $\phi(t)$ is retrieved from the encoder signal and used to calculate the VRA time histories. The region of interest has been defined as a rectangular area of 1.80 m × 0.90 m, positioned on the rotor plane, i.e. at 3 m from the array plane. From the VRA point of view, this area contains the blade and the hub of the rotor. The rectangular area has been discretised with a grid of monopoles with 0.02 m step, thus yielding 4186 potential sources. The acoustic propagator from monopoles to microphones is modelled as a pressure-to-pressure free-field propagator [14]. Geometric straight-line distances are replaced by the actual propagation distances, calculated considering the rotating flow field. For this purpose, the angular velocity can be assumed constant during each measurement, since its fluctuations are negligible with respect to the mean value. Propagation distances have been calculated with the Acoular software [18].

As stated in the introduction, three frequency domain acoustic mapping techniques are applied to the case of study, exploiting the VRA signals: CB, CLEAN-SC and CMF-IRLS. The application of CB is intended here as the baseline performance of acoustic imaging techniques. The deconvolution with CLEAN-SC requires the choice of only a single parameter, the loop gain, which is set here to $\varphi = 0.6$. Lastly, CMF-IRLS, which belongs to the branch of inverse methods, is chosen since it is fully capable of dealing with spatially extended sources, a source configuration naturally expected in this application. CMF-IRLS is used to map the full CSM, without any decomposition, and the sparsity constraint on the solution is enforced by setting $p = 1$. A priori information is injected into the IRLS procedure in the form of spatial weighting (named "aperture function" in the Bayesian approach). The first weighting function is the CB map, which eases the localisation task, since it provides rough but reliable information on the source distribution. The second weighting strategy is adopted to avoid high-level peaks at the edges of the ROI, typical of inverse methods. Figure 2 depicts this weighting function, shaped with a cosine function near all the edges, thus resulting in a cosine-tapered spatial window (a sketch of such a window is given below). The point-by-point product of these two weighting functions is used as the total pre-weighting. For all mapping techniques, the diagonal of the CSM is removed, following the common practice adopted in aeroacoustic source imaging to minimise the effect of background noise.

Figure 2. Arbitrary weighting function for CMF-IRLS with respect to the blade geometry.
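A window of this kind can be generated as sketched here; the taper fraction and grid sizes are illustrative assumptions, since the paper only specifies that the weighting decays with a cosine law near all the edges of the ROI.

```python
import numpy as np

def cosine_tapered_window(nx, ny, taper=0.2):
    """2-D cosine-tapered (Tukey-like) spatial window over the ROI grid,
    used to down-weight potential sources near the map edges."""
    def taper_1d(n):
        w = np.ones(n)
        m = int(np.ceil(taper * n))            # tapered points per edge
        ramp = 0.5 * (1 - np.cos(np.pi * np.arange(m) / m))
        w[:m] = ramp                           # rise from 0 at the edge
        w[-m:] = ramp[::-1]                    # symmetric fall at the other edge
        return w
    return np.outer(taper_1d(nx), taper_1d(ny))

# Total pre-weighting would then be the point-by-point product with the CB map:
# w_total = cb_map * cosine_tapered_window(*cb_map.shape)
```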
3.3. Considerations on measurement uncertainty

A reference metrological analysis of acoustic beamforming measurements was conducted by Castellini and Martarelli in [19], where a type B approach based on an analytical model was adopted to assess how the uncertainty on the input quantities affects the localisation and quantification uncertainties. Merino-Martínez et al. [17], instead, investigated the accuracy of different mapping methods in aeroacoustic applications. In this paper, the focus is on the identification of the sources of noise rather than on their absolute level. Therefore, the acoustic maps must provide an estimate of the source locations and of their relative levels. From this consideration, the target accuracy of the measurement procedure should be sufficient to distinguish the different noise sources on the blade. The resolution of the mapping grid has been chosen to be compatible with the blade size and to guarantee at least 2.5 potential sources per wavelength, fulfilling the guideline provided in [20] for inverse problems (CMF-IRLS). The smallest wavelength of analysis is about 0.076 m (the exact value depends on the actual speed of sound). As regards the beamforming-based techniques, the chosen steering vector formulation provides the correct source location at the expense of the source level ("formulation IV" described in [16]).

In array measurements, one of the most important aspects is the uniformity of the frequency response across all microphones, rather than the absolute quality of the sensors. The array microphones B&K type 4951 are specifically designed for array applications, since they are phase-matched. The nominal sensitivity of this type of microphone is 6.3 mV/Pa (ref. 250 Hz). All microphones were calibrated with the pistonphone B&K type 4228, which provides a sine wave at 251.2 Hz ± 0.1 % and 124.0 dB ± 0.2 dB SPL; hence the individual sensitivity was measured for each sensor. This field calibration was performed before starting the measurement campaign. The technical specifications for the free-field frequency response (ref. 250 Hz) are ± 1 dB from 100 Hz to 3 kHz and ± 3 dB from 3 kHz to 10 kHz. As regards phase matching, the manufacturer guarantees the following specifications with respect to a factory reference: ± 3° from 100 Hz to 3 kHz and ± 5° from 3 kHz to 5 kHz.
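The per-sensor sensitivity measurement described above reduces to a simple ratio between the measured RMS voltage and the known pistonphone pressure; the snippet below sketches the calculation (the example voltage is invented, chosen only so that the result lands on the nominal 6.3 mV/Pa).

```python
P_REF = 20e-6                                    # reference pressure, Pa

def sensitivity_from_pistonphone(v_rms, spl_db=124.0):
    """Microphone sensitivity (V/Pa) from the RMS voltage measured while
    the pistonphone applies a tone of known sound pressure level."""
    p_rms = P_REF * 10 ** (spl_db / 20)          # 124 dB SPL -> ~31.7 Pa
    return v_rms / p_rms

# Example (made-up reading): 0.2 V RMS under the pistonphone tone
# sensitivity_from_pistonphone(0.2)  -> ~6.3e-3 V/Pa, i.e. 6.3 mV/Pa
```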
The relative positions of the microphones in the array represent another source of uncertainty, which affects the amplitude and phase of the measured pressure. A mismatch between nominal and actual sensor locations induces an error, for each microphone, in the spatial sampling of the acoustic field, in terms of amplitude and phase. This influences the quality of the maps, since the nominal layout is used for the calculations. However, the uncertainty on amplitude and phase caused by the frequency responses and by the sensor arrangement can be considered random across the sensors, hence it can be treated as spatial white noise. It is possible to regard this as a sort of "array noise", which is averaged across the microphones; its mean value tends to zero as the number of sensors increases. Therefore, the source levels in the map are not significantly affected by the array noise, whose level is instead quantified by its standard deviation. An accurate estimation of this parameter is rather difficult in practice, but the overall effects can be assessed by comparing the degradation of the array spatial response and of the dynamics of the CB map with respect to the ideal condition. An experimental test was conducted with a point source emitting a sine wave at 2 kHz, positioned on the rotor hub (aligned with the rotation axis). A sine wave is used to obtain a high signal-to-noise ratio with respect to the environmental acoustic background noise. The CB map obtained from this experiment is compared with the map obtained in ideal conditions with simulated data. A visual inspection of the maps reveals whether the degradation of performance, due to the microphone responses and to the sensor positioning uncertainty, is acceptable. The performance degradation occurring in this setup does not significantly affect the acoustic maps and is in line with the accuracy requirements.

Similar tests were conducted to assess the rotor-array relative position and alignment. The rotor-array positioning is important to fulfil the stationarity of the rotating sources with respect to the VRA. In addition to the test with the point source on the rotor hub, four more tests were done with the same source placed on the blade tip. In these tests, the rotor is placed at different angles (steps of 90°), still using a 2 kHz sine wave. The rotor-array alignment was done with typical distance sensors, then CB maps were used to verify and correct it. From the acoustic maps, the position of the test point sources is retrieved to estimate the offset and the angle between the rotor and array axes. From the test with the source placed on the hub, the offset is estimated, and it results to be of the same order of magnitude as the grid resolution, in both the horizontal and vertical directions. The other four tests were used to estimate the misalignment: the least-squares fitting plane is calculated from the four source positions, then the scalar product between the normal of the array plane and the normal of the rotor plane is used to estimate the angle between the two axes, which results to be less than 3°. A final test with the same point source, placed at a radius of 0.6 m with the blade rotating at 100 rpm, was conducted to assess the correctness of all the operations needed to map a rotating source via the VRA method. All mapping algorithms were able to correctly locate the point test source on the rotating blade using the VRA signals.

The last important aspect is the estimation of the actual speed of sound to be used for the calculation of the propagator and of the steering vectors. For this purpose, two measurements of the air temperature inside the chamber were taken during the microphone recordings, one at the beginning and one at the end of each acquisition. The model adopted for the indirect measurement of the speed of sound $c$ is

$$c = 331.3\,\sqrt{1 + \frac{T}{273.15\ \text{K}}}\ \ \text{m/s}, \tag{8}$$

where $T$ is the average of the initial and final air temperatures. The digital air temperature sensor has a resolution of 0.1 °C, which is sufficient for the accuracy requested in this case. Despite its importance for the quality of the maps, little attention is often paid to this parameter [19]. In the whole test campaign, $c$ is always about 342 m/s.
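Equation (8) translates directly into code; a minimal sketch, assuming temperature readings in °C, is the following (names are illustrative).

```python
import numpy as np

def speed_of_sound(t_begin_c, t_end_c):
    """Speed of sound in m/s from equation (8), using the average of the
    air temperatures measured at the beginning and end of an acquisition."""
    t_avg = 0.5 * (t_begin_c + t_end_c)      # average temperature in deg C
    return 331.3 * np.sqrt(1.0 + t_avg / 273.15)

# Example: 18.0 degC and 18.4 degC give c of about 342 m/s,
# in line with the value observed over the whole test campaign.
```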
4. Results

The analysis has been performed from 700 Hz to 4.5 kHz. This band approximately corresponds to the Helmholtz number range 6-40 ($He = D/\lambda$), in which the array provides adequate results; below this range the array has very poor performance. The benchmarking of the mapping techniques has been performed on the measurement acquired at the nominal operating condition of the blade, i.e. 650 rpm. For clarity of the acoustic maps, the frequency range of analysis is split into two bands, in which the noise generation is located in different parts of the blade. Figure 3 shows the CB maps, which are characterised by blurred sources and low dynamics. These effects, caused by the array PSF, make it difficult to identify weaker sources, since they are covered by the sidelobes of the main sources.

Figure 3. CB, 650 rpm. Left: 700-2500 Hz. Right: 2500-4500 Hz.

The maps obtained with CLEAN-SC and CMF-IRLS are shown in figure 4 and figure 5, respectively. Both methods achieve better localisation, and they also reveal weaker sources on the blade. As expected, these methods provide higher performance than CB in terms of spatial resolution and dynamics of the acoustic maps. In addition, they also provide quantitative information. The advantages of CLEAN-SC are the robustness of the results, the dynamics of the maps and the low computation time. However, due to the nature of the algorithm, CLEAN-SC does not make it possible to fully represent extended sources [10]. This drawback is partially mitigated by choosing a loop gain $\varphi < 1$, as in this case; nonetheless, the limitation still holds. CMF-IRLS, instead, is capable of revealing the correct spatial extension of the acoustic sources, but it is computationally more demanding. Despite being an inverse method, CMF-IRLS returns maps without any source located at the edges of the mapping plane, thanks to the pre-weighting strategy adopted. Another aspect to notice is the different source distribution returned in the low frequency range by CLEAN-SC and CMF-IRLS: figure 4 (left) depicts the strongest source in between the leading edge and the trailing edge, while figure 5 (left) shows well separated sources. This is caused by the localisation mechanism adopted by CLEAN-SC, which relies on the spatial resolution of CB. The CLEAN-SC algorithm detects the source position by picking the location of the maximum of the so-called "dirty map", i.e. the output of CB at the current iteration of the CLEAN-SC procedure. When two sources are closer than the mainlobe width, given the PSF of the array at the frequency of analysis, the maximum of the total map lies somewhere in between the real sources, depending on their relative strength. This problem does not occur with CMF-IRLS which, being an inverse method, considers all potential sources at once.

Figure 4. CLEAN-SC, 650 rpm. Left: 700-2500 Hz. Right: 2500-4500 Hz.
Figure 5. CMF-IRLS, 650 rpm. Left: 700-2500 Hz. Right: 2500-4500 Hz.

Since CMF-IRLS proved to be the best performing among the methods compared in this work, it has been used to characterise the noise emission of the blade at different operating conditions. Two additional rotation speeds of the rotor were tested: 500 and 350 rpm. Figure 6 and figure 7 show source distribution structures similar to those of the nominal working condition. In the lower frequency band, the noise sources are mainly in the last 0.5 m in the radial direction and are quite aligned with the leading and trailing edges. In the higher band, the noise is mostly located at the tip of the blade: one source is identified at the tip location and the other two are located almost symmetrically with respect to the tip. Some high-frequency noise from the hub is also visible. The extent and the level of the sources decrease proportionally to the angular velocity.

Figure 6. CMF-IRLS, 500 rpm. Left: 700-2500 Hz. Right: 2500-4500 Hz.
Figure 7. CMF-IRLS, 350 rpm. Left: 700-2500 Hz. Right: 2500-4500 Hz.

In order to obtain an overview of how the source distribution changes with respect to frequency, a synthetic visualisation is depicted in figure 8. This view results from the integration of the acoustic map along the chord-wise direction ($y$ direction); therefore, it shows how the reconstructed source distribution changes with respect to frequency and to the radial direction ($x$).

Figure 8. Chord-wise integrated maps versus frequency, CMF-IRLS. The tip and the root of the blade are represented by the horizontal black lines, while the vertical lines represent the centres of the 1/3 octave bands. Top left: 650 rpm. Top right: 500 rpm. Bottom: 350 rpm. All maps have the same colour scale.
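Such a view can be produced by summing each acoustic map along the chord-wise axis; a minimal sketch, assuming the maps are stored as a (frequency, x, y) array on the 0.02 m grid, is shown below (names and shapes are illustrative).

```python
import numpy as np

def chordwise_integrated_view(maps, dy=0.02):
    """Integrate acoustic maps along the chord-wise (y) direction.
    maps: (n_freq, nx, ny) array of source autopowers on the ROI grid.
    Returns an (n_freq, nx) array: level versus radial position and frequency."""
    return maps.sum(axis=2) * dy

# Plotting the result as an image (x on one axis, frequency on the other)
# reproduces the kind of synthetic visualisation of figure 8.
```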
5. Conclusions

A measurement campaign has been conducted in a semi-anechoic chamber on a single-blade rotor for its aeroacoustic characterisation with acoustic imaging techniques exploiting microphone arrays. The VRA strategy makes it possible to use the advanced frequency domain approaches for acoustic source mapping that are typically required in aeroacoustic applications. The benchmarking among three methods demonstrated the advantages of CMF-IRLS over CB and CLEAN-SC. The performance of CMF-IRLS is adequate for a detailed characterisation of the acoustic source distribution generated by the wind turbine blade model in different operating conditions. This measurement technique is therefore a powerful tool for improving the design of wind turbine blade models, since it is capable of identifying aeroacoustic noise sources with high dynamic range and spatial resolution. However, its applicability is restricted to models of limited size, since the VRA requires the array and the rotor to be aligned and co-axial.

Acknowledgement

The authors would like to thank Prof. Renato Ricci of Università Politecnica delle Marche for providing the wind turbine blade model used in the measurement campaign and for the useful discussions on the aeroacoustic results.

References

[1] P. Chiariotti, M. Martarelli, P. Castellini, Acoustic beamforming for noise source localization: reviews, methodology and applications, Mechanical Systems and Signal Processing, vol. 120, 2019, pp. 422-448. doi: 10.1016/j.ymssp.2018.09.019
[2] R. Dougherty, Advanced time-domain beamforming techniques, 10th AIAA/CEAS Aeroacoustics Conference, American Institute of Aeronautics and Astronautics, 2004. doi: 10.2514/6.2004-2955
[3] P. Sijtsma, S. Oerlemans, Location of rotating sources by phased array measurements, 7th AIAA/CEAS Aeroacoustics Conference and Exhibit, American Institute of Aeronautics and Astronautics, 2001. doi: 10.2514/6.2001-2167
[4] R. Merino-Martínez, P. Sijtsma, M. Snellen, T. Ahlefeldt, J. Antoni, C. J. Bahr, D. Blacodon, D. Ernst, A. Finez, S. Funke, T. F. Geyer, S. Haxter, G. Herold, X. Huang, W. M. Humphreys, Q. Leclère, A. Malgoezar, U. Michel, T. Padois, A. Pereira, C. Picard, E. Sarradj, H. Siller, D. G. Simons, C. Spehr, A review of acoustic imaging methods using phased microphone arrays, CEAS Aeronautical Journal, vol. 10, no. 1, Mar. 2019. doi: 10.1007/s13272-019-00383-4
[5] Q. Leclère, A. Pereira, C. Bailly, J. Antoni, C. Picard, A unified formalism for acoustic imaging based on microphone array measurements, International Journal of Aeroacoustics, vol. 16, no. 4-5, Jul. 2017. doi: 10.1177/1475472x17718883
[6] R. P. Dougherty, Functional beamforming, 5th Berlin Beamforming Conference, 19-20 February 2014, Berlin, Germany, GFaI e.V., Berlin (2014)
[7] T. F. Brooks, W. M. Humphreys, A deconvolution approach for the mapping of acoustic sources (DAMAS) determined from phased microphone arrays, Journal of Sound and Vibration, vol. 294, no. 4-5, July 2006, pp. 856-879. doi: 10.1016/j.jsv.2005.12.046
[8] P. Sijtsma, CLEAN based on spatial source coherence, International Journal of Aeroacoustics, vol. 6, no. 4, Dec. 2007. doi: 10.1260/147547207783359459
[9] T. Suzuki, L1 generalized inverse beam-forming algorithm resolving coherent/incoherent, distributed and multipole sources, Journal of Sound and Vibration, vol. 330, no. 24, November 2011, pp. 5835-5851. doi: 10.1016/j.jsv.2011.05.021
[10] G. Battista, P. Chiariotti, M. Martarelli, P. Castellini, Inverse methods in aeroacoustic three-dimensional volumetric noise source localization and quantification, Journal of Sound and Vibration, vol. 473, May 2020, art. 115208. doi: 10.1016/j.jsv.2020.115208
[11] T. Yardibi, J. Li, P. Stoica, L. N. Cattafesta, Sparsity constrained deconvolution approaches for acoustic source mapping, The Journal of the Acoustical Society of America, vol. 123, no. 5, 2008. doi: 10.1121/1.2896754
[12] A. Pereira, J. Antoni, Q. Leclère, Empirical Bayesian regularization of the inverse acoustic problem, Applied Acoustics, vol. 97, October 2015, pp. 11-29. doi: 10.1016/j.apacoust.2015.03.008
[13] G. Herold, E. Sarradj, Microphone array method for the characterization of rotating sound sources in axial fans, Noise Control Engineering Journal, vol. 63, no. 6, Nov. 2015. doi: 10.3397/1/376348
[14] G. Battista, G. Herold, E. Sarradj, P. Castellini, P. Chiariotti, IRLS based inverse methods tailored to volumetric acoustic source mapping, Applied Acoustics, vol. 172, Jan. 2021, art. 107599. doi: 10.1016/j.apacoust.2020.107599
[15] J. Hadamard, Sur les problèmes aux dérivées partielles et leur signification physique, Princeton University Bulletin, vol. 13, 1902, pp. 49-52.
[16] E. Sarradj, Three-dimensional acoustic source mapping with different beamforming steering vector formulations, Advances in Acoustics and Vibration, vol. 2012, 2012, art. 292695. doi: 10.1155/2012/292695
[17] R. Merino-Martínez, S. Luesutthiviboon, R. Zamponi, A. Rubio Carpio, D. Ragni, P. Sijtsma, M. Snellen, C. Schram, Assessment of the accuracy of microphone array methods for aeroacoustic measurements, Journal of Sound and Vibration, vol. 470, March 2020, art. 115176. doi: 10.1016/j.jsv.2020.115176
[18] E. Sarradj, Acoular - acoustic testing and source mapping software, 2018. Online [accessed 22 December 2021] http://www.acoular.org
[19] P. Castellini, M. Martarelli, Acoustic beamforming: analysis of uncertainty and metrological performances, Mechanical Systems and Signal Processing, vol. 22, no. 3, April 2008, pp. 672-692. doi: 10.1016/j.ymssp.2007.09.017
[20] K. R. Holland, P. A. Nelson, The application of inverse methods to spatially-distributed acoustic sources, Journal of Sound and Vibration, vol. 332, no. 22, October 2013, pp. 5727-5747. doi: 10.1016/j.jsv.2013.06.009
[21] J. Antoni, A Bayesian approach to sound source reconstruction: optimal basis, regularization, and focusing, The Journal of the Acoustical Society of America, vol. 131, no. 4, Apr. 2012, pp. 2873-2890. doi: 10.1121/1.3685484

Evaluation and correction of systematic effects in a simultaneous 3-axis vibration calibration system

ACTA IMEKO, ISSN: 2221-870X, December 2020, volume 9, number 5, 388-393

A. Prato¹, F. Mazzoleni¹, A. Schiavi¹
¹ INRiM - National Institute of Metrological Research, Torino, Italy, a.schiavi@inrim.it

Abstract: This paper presents a calibration method, recently realized at INRiM, suitable for the calibration of 3-axis accelerometers in the frequency domain. The procedure allows the main and transverse sensitivities on three axes to be evaluated simultaneously by means of a single-axis vibration excitation of inclined planes.
Nevertheless, the excitation system is subject to spurious motions, mainly due to the vibrational modes of the inclined planes and to the horizontal motions of the shaker. In order to provide the proper sensitivities of the 3-axis sensors, the evaluation of these systematic effects is experimentally carried out and the related correction is proposed.

Keywords: calibration; 3-axis accelerometer; systematic effects

1. Introduction

3-axis accelerometers, especially low-cost unconventionally shaped transducers such as MEMS sensors, are largely used in a wide range of advanced industrial, environmental, energy and medical applications, and in particular within extensive sensor and multi-sensor networks [e.g., 1-10]. For example, in the context of Industry 4.0, a huge number of sensors is needed for an effective implementation of smart factories, machine learning and intelligent manufacturing systems, as well as for traditional applications such as early failure detection and predictive maintenance; low-power devices and battery-operated systems are practical and useful in IoT applications, such as smart cities, accurate navigation/positioning systems and environmental monitoring and survey; moreover, accurate measurements are of paramount importance in medical applications, in remote surgery and in remote diagnosis. The availability of many accurate, low-power and low-cost sensors presents undoubted advantages, in terms of cost reduction and energy saving, in control processes, monitoring and measurement, while providing flexible, enhanced data collection, automation and operation. By way of example, the calibration of digital MEMS sensors, with the associated uncertainty budget, makes it possible to ensure the traceability and measurement accuracy of nodes in sensor networks, as well as in other innovative implementations. Moreover, the continuing improvement of the technical performance and reliability of MEMS sensors are emerging quality attributes of interest for manufacturers, customers and end-users. However, in the particular case of digital MEMS accelerometers, the sensitivity is generally provided by the manufacturer without traceable calibration methods and is obtained in static conditions, whereas the dynamic response, as a function of frequency, is often barely known or completely disregarded. Given this condition, a simultaneous 3-axis vibration calibration system with a single-axis excitation, exploitable by manufacturers, is proposed. This paper deals with the evaluation of the systematic effects of this system.

2. Description of the work

Traceable calibration methods for digital and smart sensors, in metrological terms [11]-[13], including the sensitivity parameter and an appropriate uncertainty evaluation, are necessary in order to consider low-cost and low-power accelerometers as actual measurement devices in the frequency domain [14]. The calibration system developed at INRiM allows the simultaneous evaluation of the main and transverse sensitivities, in the frequency domain, of 3-axis accelerometers by means of a single-axis vibration excitation, using inclined planes rigidly fixed to the vibrating table [14], [15]. The calibration procedure is based on the comparison to a reference transducer (in analogy with ISO 16063-21 [16]). A preliminary version of the system was previously investigated for the characterisation of analog MEMS accelerometer performance in operative conditions [17]-[23].
Measurements are performed at a nearly constant amplitude of 10 m·s⁻², from 5 Hz up to 3 kHz. The mechanical calibration system, composed of the shaker and the inclined planes (with tilt angles of 15°, 35°, 55° and 75°), is characterised in order to take the systematic effects into account. A similar procedure is also applied to the shaker at 0° and 90°. The measurement of the systematic effects is carried out by means of a laser-Doppler vibrometer. The detailed uncertainty budget is evaluated according to the GUM [24].

3. Calibration set-up

The calibration set-up proposed here, consisting of a single-axis vibrating table on which aluminum inclined planes are screwed, allows the generation of a projection of the reference acceleration along three axes simultaneously. A single vertical sinusoidal acceleration at nearly constant amplitude acts as the reference acceleration $a_{ref}$ along the vertical z'-axis of the system. In this way, accelerations of proportional amplitudes are simultaneously generated on the inclined surface plane, along the three axes. In figure 1, the geometrical principle of the proposed method, on which the 3-axis accelerometer is fixed during calibration, is schematically depicted.

Figure 1. Inclined plane: scheme.

From simple trigonometric laws, the reference accelerations detected by the sensor under calibration, along its three sensitive axes, are expected to be:

$$a_{x,theor} = |a_{ref} \sin(\alpha) \cos(\omega)| \tag{1}$$
$$a_{y,theor} = |a_{ref} \sin(\alpha) \sin(\omega)| \tag{2}$$
$$a_{z,theor} = |a_{ref} \cos(\alpha)| \tag{3}$$

where $\alpha$ is the tilt angle, $\omega$ is the angle of rotation, $a_{ref}$ is the root mean square (RMS) reference acceleration along the vertical z'-axis of the system, and $a_{x,theor}$, $a_{y,theor}$, $a_{z,theor}$ are the RMS reference accelerations spread along the x-, y- and z-axis of the MEMS accelerometer under calibration.
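Equations (1)-(3) can be evaluated directly, for instance as in the following sketch (function and variable names are illustrative):

```python
import numpy as np

def theoretical_accelerations(a_ref, alpha_deg, omega_deg):
    """RMS reference accelerations along the sensor x-, y-, z-axes,
    equations (1)-(3), for tilt angle alpha and rotation angle omega."""
    alpha = np.radians(alpha_deg)
    omega = np.radians(omega_deg)
    ax = abs(a_ref * np.sin(alpha) * np.cos(omega))
    ay = abs(a_ref * np.sin(alpha) * np.sin(omega))
    az = abs(a_ref * np.cos(alpha))
    return ax, ay, az

# Example: a_ref = 10 m/s^2 on the 35 deg plane with omega = 0 deg
# gives ax ~ 5.74, ay = 0, az ~ 8.19 m/s^2.
```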
In the experimental set-up, the inclined plane is screwed on the vertical vibrating table (PCB precision air bearing calibration shaker), and the 3-axis accelerometer is fixed to the inclined plane and located along the vertical axis of excitation. The experimental configuration is shown in figure 2. The acceleration along the vertical z'-axis, $a_{ref}$, is measured by a single-axis reference transducer (PCB model 080A199/482A23), calibrated according to ISO 16063-11:1999 [25] against the INRiM primary standard and located within the stroke of the shaker; it is acquired by an acquisition board NI 4431 (sampling rate of 50 kHz) integrated in the PC and processed through LabVIEW software to provide the RMS reference value in m·s⁻². The digital MEMS output is acquired by an external microcontroller at a maximum sampling rate of 6.660 kHz and saved as binary files.

Figure 2. The calibration set-up: the MEMS fixed to the inclined plane on the vibrating table.

4. Evaluation of systematic effects and correction

As described in the previous section, the reference accelerations along the MEMS accelerometer sensitivity axes are given by equations (1)-(3), obtained from trigonometric laws. However, in dynamic conditions, systematic effects caused by spurious oscillating components along the three axes of the reference system (x'-, y'- and z'-axis) at the MEMS position need to be taken into account. These spurious components affect the actual reference acceleration $a_{ref}$ split on the three sensitivity axes of the MEMS. In figure 3, a schematic representation of the occurring phenomenon is shown.

Figure 3. Representation of the combination of the spurious oscillating components along the three axes of the MEMS during the calibration.

Such components are mainly due to the vibrational modes of the inclined aluminum planes and to the small but not negligible horizontal motions of the shaker. Each spurious component along the x'-, y'- and z'-axis of the reference system has to be decomposed along the x-, y- and z-axis of the 3-axis accelerometer and summed to the reference accelerations $a_{x,theor}$, $a_{y,theor}$, $a_{z,theor}$ according to the laws of wave interference. The actual decomposition of the spurious components acting along the three axes of the reference system is schematically depicted in figure 4.

Figure 4. Representation of the decomposition of a spurious oscillating component along the three axes of the reference system, at a given frequency.

By way of example, consider the general case of four overlapping waves, $E_1 = E_{1,0}\,e^{i(2\pi f t)}$, $E_2 = E_{2,0}\,e^{i(2\pi f t + \varphi_2)}$, $E_3 = E_{3,0}\,e^{i(2\pi f t + \varphi_3)}$, $E_4 = E_{4,0}\,e^{i(2\pi f t + \varphi_4)}$, oscillating at the same frequency $f$ along a particular direction, with different amplitudes and with phase differences referred to the reference signal $E_1$. Their interference can be defined according to equation (4), where $E_2$, $E_3$ and $E_4$ are the amplitude- and phase-dependent spurious components along the x-, y- and z-axis of the MEMS:

$$E_{tot} = \left| E_{1,0} + E_{2,0}\,e^{i\varphi_2} + E_{3,0}\,e^{i\varphi_3} + E_{4,0}\,e^{i\varphi_4} \right|. \tag{4}$$
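The modulus of a sum of complex phasors, as in equation (4), can be computed as sketched below (illustrative names; the common time factor e^{i2πft} cancels out of the modulus):

```python
import numpy as np

def interference_amplitude(amplitudes, phases):
    """Amplitude of the superposition of same-frequency components,
    equation (4); phases are referred to the first (reference) component."""
    amp = np.asarray(amplitudes, dtype=float)
    ph = np.asarray(phases, dtype=float)
    return float(abs(np.sum(amp * np.exp(1j * ph))))

# Example: a unit reference plus a 10 % spurious component at 90 deg:
# interference_amplitude([1.0, 0.1], [0.0, np.pi / 2])  -> ~1.005
```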
In this way, it is possible to correct the reference theoretical accelerations along the MEMS axes in equations (1)-(3) into $a_x$, $a_y$, $a_z$ of equations (8)-(10), where $a_{x',syst}$, $a_{y',syst}$, $a_{z',syst}$ and $\varphi_{x',syst}$, $\varphi_{y',syst}$, $\varphi_{z',syst}$ are, respectively, the amplitudes and the phase differences, with respect to the reference signal $a_{ref}$, of the spurious components along the x'-, y'- and z'-axis, as they appear in equations (5)-(7). As will be shown in section 5, the amplitude of the spurious components varies as a function of frequency between 0.1 % and 10 % of the reference acceleration $a_{ref}$.

The experimental evaluation of the systematic effects due to the spurious components is carried out by means of a laser-Doppler velocimeter (Polytec OFV 505). Amplitude and phase measurements along the x'-, y'- and z'-axis of the reference system are performed for each inclined plane and for all frequencies, at a reference vertical amplitude of 10 m·s⁻². During the measurement of the spurious component amplitudes, the laser signal is acquired by an NI 4431 board (sampling rate of 50 kHz) integrated into the PC, while the phase differences between the reference acceleration and the spurious components are measured by means of a dynamic signal analyzer (Keysight 35670A). Since the digital MEMS accelerometer is too small to be used as a reflective surface, the beam spot of the laser directly hits a small aluminum triangular-based parallelepiped located at the MEMS position and fixed to the different inclined planes, as shown in figure 5. The volume of the triangular-based parallelepiped is around 0.5 cm³, which is 0.6 % of the total volume of the inclined plane, i.e. negligible with respect to the total mass.

The values of the measured acceleration amplitudes $a_{x'}$, $a_{y'}$ and $a_{z'}$ along the x'-, y'- and z'-axis are related to the actual systematic effects acting on the axes of the reference system, and are expressed, as a function of frequency and of the experimental phase shift, by the following equations:

$$a_{x'} = \left| a_{x',theor} + a_{x',syst}\, e^{i(2\pi f t + \varphi_{x',syst})} \right| \tag{5}$$
$$a_{y'} = \left| a_{y',theor} + a_{y',syst}\, e^{i(2\pi f t + \varphi_{y',syst})} \right| \tag{6}$$
$$a_{z'} = \left| a_{z',theor} + a_{z',syst}\, e^{i(2\pi f t + \varphi_{z',syst})} \right| \tag{7}$$

where $a_{x',theor}$ and $a_{y',theor}$ are 0 (i.e., no acceleration is generated in the horizontal x'-y' plane of the system) and the vertical component $a_{z',theor} \approx a_{ref}$. Figure 5 shows the experimental method used to quantify the spurious components from the accelerations $a_{x'}$, $a_{y'}$ and $a_{z'}$, and the related phase shifts $\varphi_{x'}$, $\varphi_{y'}$ and $\varphi_{z'}$, with respect to the reference acceleration $a_{ref}$ acting along the vertical axis z'. The corrected accelerations are:

$$a_x = \sqrt{ \left( a_{x,theor} + \left|a_{x',syst}\cos\alpha\cos\omega\right| \cos\varphi_{x',syst} + \left|a_{y',syst}\sin\omega\right| \cos\varphi_{y',syst} + \left|a_{z',syst}\sin\alpha\cos\omega\right| \cos\varphi_{z',syst} \right)^2 + \left( \left|a_{x',syst}\cos\alpha\cos\omega\right| \sin\varphi_{x',syst} + \left|a_{y',syst}\sin\omega\right| \sin\varphi_{y',syst} + \left|a_{z',syst}\sin\alpha\cos\omega\right| \sin\varphi_{z',syst} \right)^2 } \tag{8}$$

$$a_y = \sqrt{ \left( a_{y,theor} + \left|a_{x',syst}\cos\alpha\sin\omega\right| \cos\varphi_{x',syst} + \left|a_{y',syst}\cos\omega\right| \cos\varphi_{y',syst} + \left|a_{z',syst}\sin\alpha\sin\omega\right| \cos\varphi_{z',syst} \right)^2 + \left( \left|a_{x',syst}\cos\alpha\sin\omega\right| \sin\varphi_{x',syst} + \left|a_{y',syst}\cos\omega\right| \sin\varphi_{y',syst} + \left|a_{z',syst}\sin\alpha\sin\omega\right| \sin\varphi_{z',syst} \right)^2 } \tag{9}$$

$$a_z = \sqrt{ \left( a_{z,theor} + \left|a_{x',syst}\sin\alpha\right| \cos\varphi_{x',syst} + \left|a_{z',syst}\cos\alpha\right| \cos\varphi_{z',syst} \right)^2 + \left( \left|a_{x',syst}\sin\alpha\right| \sin\varphi_{x',syst} + \left|a_{z',syst}\cos\alpha\right| \sin\varphi_{z',syst} \right)^2 } \tag{10}$$

Figure 5. The laser beam hitting the aluminum triangular-based parallelepiped located at the MEMS position.
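Since each of equations (8)-(10) is the modulus of a sum of phasors, the correction can be coded compactly in complex form, as in the following sketch (all names are illustrative; the equivalence with the explicit square-root expressions follows from |z|² = Re(z)² + Im(z)²):

```python
import numpy as np

def corrected_accelerations(a_theor, a_syst, phi_syst, alpha, omega):
    """Corrected reference accelerations along the sensor axes, eqs. (8)-(10).
    a_theor:  (ax, ay, az) from eqs. (1)-(3)
    a_syst:   measured spurious amplitudes along x', y', z'
    phi_syst: measured phase shifts of the spurious components (rad)
    alpha, omega: tilt and rotation angles in radians"""
    ax_s, ay_s, az_s = a_syst
    px, py, pz = phi_syst
    ca, sa = np.cos(alpha), np.sin(alpha)
    cw, sw = np.cos(omega), np.sin(omega)
    # projections of the x'-, y'-, z'-spurious components on each sensor axis
    proj = {
        "x": (abs(ax_s * ca * cw), abs(ay_s * sw), abs(az_s * sa * cw)),
        "y": (abs(ax_s * ca * sw), abs(ay_s * cw), abs(az_s * sa * sw)),
        "z": (abs(ax_s * sa),      0.0,            abs(az_s * ca)),
    }
    out = []
    for a0, key in zip(a_theor, "xyz"):
        sx, sy, sz = proj[key]
        # phasor sum: the real/imaginary parts reproduce the two squared terms
        total = a0 + sx * np.exp(1j * px) + sy * np.exp(1j * py) + sz * np.exp(1j * pz)
        out.append(abs(total))
    return tuple(out)
```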
5. Experimental results

In the graphs of figures 6-9, the amplitudes of $a_{x'}$, $a_{y'}$ and $a_{z'}$ are normalized with respect to the amplitude of the reference acceleration $a_{ref}$. Measurements are performed from 5 Hz to 3 kHz.

Figure 6. Normalized accelerations along the x'-, y'- and z'-axis at 15° of tilt angle.
Figure 7. Normalized accelerations along the x'-, y'- and z'-axis at 35° of tilt angle.
Figure 8. Normalized accelerations along the x'-, y'- and z'-axis at 55° of tilt angle.
Figure 9. Normalized accelerations along the x'-, y'- and z'-axis at 75° of tilt angle.

The experimental values of the spurious component amplitudes $a_{x'}$, $a_{y'}$ and $a_{z'}$ along the axes of the system are combined in order to calculate the systematic effects due to the acceleration amplitudes $a_{x',syst}$, $a_{y',syst}$ and $a_{z',syst}$ along the MEMS sensitivity axes. In the graphs of figures 10-13, the experimental values of the phase shifts $\varphi_{x'}$, $\varphi_{y'}$ and $\varphi_{z'}$, with respect to the reference acceleration $a_{ref}$ acting along the vertical axis z', are shown. These phase-shift values allow the evaluation of the phase differences $\varphi_{x',syst}$, $\varphi_{y',syst}$ and $\varphi_{z',syst}$ with respect to the reference signal $a_{ref}$. Since $\varphi_{z',syst}$ is close to 0° in every configuration, it is not shown in the graphs.

Figure 10. Phase shifts on the x'-y' horizontal plane, with respect to the vertical axis z', at 15° of tilt angle.
Figure 11. Phase shifts on the x'-y' horizontal plane, with respect to the vertical axis z', at 35° of tilt angle.
Figure 12. Phase shifts on the x'-y' horizontal plane, with respect to the vertical axis z', at 55° of tilt angle.
Figure 13. Phase shifts on the x'-y' horizontal plane, with respect to the vertical axis z', at 75° of tilt angle.

Once the acceleration amplitudes $a_{x'}$, $a_{y'}$ and $a_{z'}$ and the phase shifts $\varphi_{x'}$, $\varphi_{y'}$ and $\varphi_{z'}$ have been quantified, the reference theoretical accelerations along the MEMS axes in equations (1)-(3) can be expressed as $a_x$, $a_y$, $a_z$ of equations (8)-(10), taking into account the systematic effects due to the vibrational modes of the inclined planes and to the horizontal motions of the shaker. In particular, an increase of the acceleration amplitudes as a function of frequency can be observed along the horizontal axes x' and y' and also along the vertical axis z', the latter mainly due to resonant modes. Moreover, the lateral motions of the shaker, occurring around 80 Hz, are presumably the cause of the large phase shifts observed at low frequencies, as depicted in figures 10-13; these motions are independent of the resonant modes of the inclined planes, but they nonetheless affect the whole behavior of the inclined plane, in terms of amplitude and phase differences. The analysis of the systematic effects is therefore an aggregation of both the spurious motions of the shaker and the resonant modes. The standard uncertainties associated with the amplitudes of the spurious components, $u(a_{x',syst})$, $u(a_{y',syst})$, $u(a_{z',syst})$, are considered as type B uncertainty contributions with an average error of 0.0025 m·s⁻², from three repeated measurements, and a uniform rectangular distribution. The standard uncertainties associated with the phase differences due to the spurious components, $u(\varphi_{x',syst})$, $u(\varphi_{y',syst})$, $u(\varphi_{z',syst})$, are considered as type A uncertainty contributions with a maximum standard deviation of 2° from five repeated measurements, as shown in [14]. This correction makes it possible to univocally define the actual projection of the reference acceleration $a_{ref}$ on the three axes; thus the "standard" calibration can be achieved, by comparison to the reference transducer within the stroke of the shaker, and finally related to the primary standard, as declared.

6. Summary

This paper presents a technical insight into the calibration system, recently realized at INRiM, suitable for the calibration of 3-axis accelerometers in the frequency domain. The procedure allows the main and transverse sensitivities on three axes to be evaluated simultaneously by means of a single-axis vibration excitation of inclined planes. The mechanical calibration system, composed of the shaker and the inclined planes, is characterised in order to take into account the systematic effects occurring during the dynamic excitation. The evaluation of the systematic effects, due to the vibrational modes of the inclined aluminum planes and to the small but not negligible horizontal motions of the shaker, is carried out from 5 Hz up to 3 kHz at an amplitude of 10 m·s⁻². The amplitudes of the acceleration spurious components $a_{x'}$, $a_{y'}$ and $a_{z'}$, and the phase shifts $\varphi_{x'}$, $\varphi_{y'}$ and $\varphi_{z'}$ with respect to the reference acceleration $a_{ref}$ acting along the vertical axis z', are accurately measured by means of a laser-Doppler vibrometer. This correction makes it possible to univocally define the actual projection of the reference acceleration $a_{ref}$ on the three axes; thus the "standard" calibration can be achieved and related to the primary standard.

7. Note

This work is to be considered as an addendum to the paper: "Prato, A., Mazzoleni, F., & Schiavi, A.
(2020). Traceability of digital 3-axis MEMS accelerometer: simultaneous determination of main and transverse sensitivities in the frequency domain. Metrologia, 57(3), 035013" [14], which presents in detail the determination of the systematic effects of the calibration system.

8. References

[1] Noel, A. B., Abdaoui, A., Elfouly, T., Ahmed, M. H., Badawy, A., & Shehata, M. S. (2017). Structural health monitoring using wireless sensor networks: a comprehensive survey. IEEE Communications Surveys & Tutorials, 19(3), 1403-1423.
[2] Dehkordi, S. A., Farajzadeh, K., Rezazadeh, J., Farahbakhsh, R., Sandrasegaran, K., & Dehkordi, M. A. (2020). A survey on data aggregation techniques in IoT sensor networks. Wireless Networks, 26(2), 1243-1263.
[3] Deng, X., Jiang, Y., Yang, L. T., Lin, M., Yi, L., & Wang, M. (2019). Data fusion based coverage optimization in heterogeneous sensor networks: a survey. Information Fusion, 52, 90-105.
[4] Adeel, A., Gogate, M., Farooq, S., Ieracitano, C., Dashtipour, K., Larijani, H., & Hussain, A. (2019). A survey on the role of wireless sensor networks and IoT in disaster management. In Geological Disaster Monitoring Based on Sensor Networks (pp. 57-66). Springer, Singapore.
[5] Ge, X., Han, Q. L., Zhang, X. M., Ding, L., & Yang, F. (2019). Distributed event-triggered estimation over sensor networks: a survey. IEEE Transactions on Cybernetics.
[6] Kumar, D. P., Amgoth, T., & Annavarapu, C. S. R. (2019). Machine learning algorithms for wireless sensor networks: a survey. Information Fusion, 49, 1-25.
[7] Kandris, D., Nakas, C., Vomvas, D., & Koulouras, G. (2020). Applications of wireless sensor networks: an up-to-date survey. Applied System Innovation, 3(1), 14.
[8] Priyadarshi, R., Gupta, B., & Anurag, A. (2020). Deployment techniques in wireless sensor networks: a survey, classification, challenges, and future research issues. The Journal of Supercomputing, 1-41.
[9] Zareei, M., Vargas-Rosales, C., Anisi, M. H., Musavian, L., Villalpando-Hernandez, R., Goudarzi, S., & Mohamed, E. M. (2019). Enhancing the performance of energy harvesting sensor networks for environmental monitoring applications. Energies, 12(14), 2794.
[10] Carminati, M., Kanoun, O., Ullo, S. L., & Marcuccio, S. (2019). Prospects of distributed wireless sensor networks for urban environmental monitoring. IEEE Aerospace and Electronic Systems Magazine, 34(6), 44-52.
[11] Bruns, T., & Eichstädt, S. (2018, August). A smart sensor concept for traceable dynamic measurements. In Journal of Physics: Conference Series (Vol. 1065, No. 21, p. 212011). IOP Publishing.
[12] Dorst, T., Ludwig, B., Eichstädt, S., Schneider, T., & Schütze, A. (2019, May). Metrology for the factory of the future: towards a case study in condition monitoring. In 2019 IEEE International Instrumentation and Measurement Technology Conference (I2MTC) (pp. 1-5). IEEE.
[13] Seeger, B., Bruns, T., & Eichstädt, S. (2019). Methods for dynamic calibration and augmentation of digital acceleration MEMS sensors. In 19th International Congress of Metrology (CIM2019) (p. 22003). EDP Sciences.
[14] Prato, A., Mazzoleni, F., & Schiavi, A. (2020). Traceability of digital 3-axis MEMS accelerometer: simultaneous determination of main and transverse sensitivities in the frequency domain. Metrologia, 57(3), 035013.
[15] Schiavi, A., Mazzoleni, F., & Germak, A. (2015). Simultaneous 3-axis MEMS accelerometer primary calibration: description of the test-rig and measurements. XXI IMEKO World Congress on Measurement in Research and Industry, 30, 2161-2164.
[16] ISO 16063-21:2003. Methods for the calibration of vibration and shock transducers - Part 21: Vibration calibration by comparison to a reference transducer. Geneva: International Organization for Standardization.
[17] D'Emilia, G., Gaspari, A., Natale, E., Mazzoleni, F., & Schiavi, A. (2018). Calibration of tri-axial MEMS accelerometers in the low-frequency range - Part 1: comparison among methods. Journal of Sensors and Sensor Systems, 7(1), 245-257.
[18] D'Emilia, G., Gaspari, A., Natale, E., Mazzoleni, F., & Schiavi, A. (2018). Calibration of tri-axial MEMS accelerometers in the low-frequency range - Part 2: uncertainty assessment. Journal of Sensors and Sensor Systems, 7(1), 403-410.
[19] D'Emilia, G., Gaspari, A., Mazzoleni, F., Natale, E., Prato, A., & Schiavi, A. (2020). Metrological characterization of MEMS accelerometers by a LDV. AIVELA.
[20] Prato, A., Mazzoleni, F., & Schiavi, A. (2019). Metrological traceability for digital sensors in smart manufacturing: calibration of MEMS accelerometers and microphones at INRiM. 2019 IEEE International Workshop on Metrology for Industry 4.0 and IoT, 371-375.
[21] Galetto, M., Schiavi, A., Genta, G., Prato, A., & Mazzoleni, F. (2019). Uncertainty evaluation in calibration of low-cost digital MEMS accelerometers for advanced manufacturing applications. CIRP Annals, 68, 535-538.
[22] Schiavi, A., Prato, A., Mazzoleni, F., D'Emilia, G., Gaspari, A., & Natale, E. (2020). Calibration of digital 3-axis MEMS accelerometers: a double-blind "multi-bilateral" comparison. 2020 IEEE International Workshop on Metrology for Industry 4.0 and IoT.
[23] Prato, A., Schiavi, A., Mazzoleni, F., Touré, A., Genta, G., & Galetto, M. (2020). A reliable sampling method to reduce large sets of measurements: a case study on the calibration of digital 3-axis MEMS accelerometers. 2020 IEEE International Workshop on Metrology for Industry 4.0 and IoT.
[24] JCGM 100:2008. Evaluation of measurement data - Guide to the expression of uncertainty in measurement (GUM). Joint Committee for Guides in Metrology, Sèvres, France.
[25] ISO 16063-11:1999. Methods for the calibration of vibration and shock transducers - Part 11: Primary vibration calibration by laser interferometry. Geneva: International Organization for Standardization.

ZrO2-doped ZnO-PDMS nanocomposites as protective coatings for the stone materials

ACTA IMEKO, ISSN: 2221-870X, March 2022, volume 11, number 1, 1-6

Maduka L. Weththimuni¹, Marwa Ben Chobba², Ilenia Tredici³, Maurizio Licchelli¹,³
¹ Department of Chemistry, University of Pavia, via T. Taramelli 12, I-27100 Pavia, Italy
² National School of Engineering, University of Sfax, Box 1173, 3038 Sfax, Tunisia
³ CISRiC, University of Pavia, via A. Ferrata 3, I-27100 Pavia, Italy

Section: Research paper

Keywords: ZrO2-doped ZnO; nanocomposites; protective coating; self-cleaning effect

Citation: Maduka L. Weththimuni, Marwa Ben Chobba, Ilenia Tredici, Maurizio Licchelli, ZrO2-doped ZnO-PDMS nanocomposites as protective coatings for the stone materials, Acta IMEKO, vol. 11, no.
1, article 4, March 2022, identifier: IMEKO-ACTA-11 (2022)-01-04

Section editor: Fabio Santaniello, University of Trento, Italy

Received March 3, 2021; in final form February 23, 2022; published March 2022

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Corresponding author: Maduka L. Weththimuni, e-mail: madukalankani.weththimuni@unipv.it

Abstract: ZnO is a semiconductor that has found wide application in the optics and electronics areas. Moreover, it is widely used in different technological areas due to its beneficial qualities (high chemical stability, non-toxicity, high photo-reactivity, and cheapness). Based on its antibacterial activity, it has recently found application in preventing the bio-deterioration of cultural heritage buildings. As many authors have suggested, doped ZnO nano-structures exhibit better antibacterial properties than their undoped analogues. In the present work, ZnO nanoparticles doped with ZrO2 have been prepared by a sol-gel method in order to enhance the photocatalytic properties as well as the antibacterial activity of ZnO. Then, the ZrO2-ZnO-PDMS nanocomposite (PDMS, polydimethylsiloxane, used as the binder) was synthesized by in-situ reaction. The resulting nanocomposite has been investigated as a possible protective material for cultural heritage building substrates. The performance of the newly prepared coating was evaluated on three different stones (Lecce stone, Carrara marble and brick) and compared with plain PDMS as a reference coating.

1. Introduction

Our artifacts provide the most important and essential evidence of the culture of all countries and represent the symbol of history. Therefore, the protection and conservation of this patrimony are essential tasks for the scientific community. Cultural heritage buildings undergo different kinds of damage, particularly because they are exposed outdoors and are subject to a number of phenomena, such as air pollution, water absorption, salt crystallization, photodegradation and colonization by microorganisms, which cause the transformation and deterioration of surfaces [1]-[3]. Moreover, the decay of stone materials is strongly related to their porosity, which implies that highly porous materials often undergo faster degradation than other stone substrates [4]-[6]. Several methodologies have been developed to clean preserved materials belonging to heritage sites as a first step of the conservation process. Cleaning procedures may include the use of solvents, chelating agents, and even acidic or basic agents [7]. Nevertheless, the irreversibility of these methods, the risk of altering the artwork, as well as the toxicity of certain products make these cleaning procedures scarcely suitable for application to historic buildings [7]. The bio-cleaning method, using selected microorganisms, has been suggested and studied as an alternative technique [8]. Despite the efficiency of this method under laboratory conditions, the specific conditions needed to ensure the viability of these microorganisms make its practical application very difficult and subject to further studies [7]. The laser method has been considered a friendly technique for cleaning heritage structures as well as for the environment [9]. However, the high cost of this method is still a barrier against its widespread use. A variety of products (biopolymers, ionic liquids, gels, microemulsions, etc.) have also been proposed and used in this field, although their application still has some drawbacks, such as high maintenance costs and toxicological risks [10], [11]. The conservation of monumental cultural heritage with innovative nanocomposites is at the vanguard of conservation science, and a plethora of research activities are dedicated to the
design and validation of compatible nanomaterials that may exhibit strengthening, hydrophobic and self-cleaning properties [12]-[14]. Moreover, scientists have focused their research on finding self-cleaning protective materials in order to reduce maintenance costs. In particular, titanium dioxide (TiO2) and zinc oxide (ZnO) nanoparticles (NPs) have been studied and tested for self-cleaning applications [15], [16]. The interest in developing and using these nanosized materials in combination with different binders has increased due to their excellent self-cleaning and antibacterial properties, in addition to their easy, non-toxic and inexpensive application procedures [17], [18]. However, their technological application has some important limitations, among which the easy recombination of charge carriers and the need for ultraviolet (UV) radiation as an excitation source, considered the most restrictive drawback, are due to the broad band gap of the two oxides (ZnO: 3.2 eV; TiO2: 3.3 eV for anatase) [19]. This limitation is particularly restrictive, as UV radiation corresponds to only 3 % of the solar irradiance at the surface of the Earth. Therefore, many research groups have focused their efforts on enhancing the photocatalytic as well as the antimicrobial activity of pure NPs by doping with different ions (transition, non-transition, and non-metal ions) [1]-[20]. The doping process can alter the surface reactivity, functionality and charge of the NPs, with possible improvement of properties such as durability, stability, and the dispersive ability of the core material [20]. Some authors have also reported that doped nanostructures exhibit better antibacterial properties than their undoped analogues [20]. Zirconium dioxide, or zirconia (ZrO2), is a wide band gap p-type semiconductor, which is used in many fields due to its excellent properties (e.g. good natural color, high strength, high toughness, high chemical stability, and chemical and microbial resistance) [21]. The main objective of this work was the preparation of ZnO nanoparticles doped with ZrO2 by a sol-gel method in order to enhance the photocatalytic properties as well as the antibacterial activity of plain ZnO. Moreover, the doped NPs were combined with a binder (polydimethylsiloxane, PDMS) to obtain a nanocomposite, which was tested as a protective material for cultural heritage. At first, the synthesised core-shell (ZrO2-ZnO) NPs were characterised by SEM-EDS. After that, the performance of the nanocomposite coating (ZrO2-ZnO-PDMS) as a protective material for stone substrates was evaluated by application on three different stone types: Lecce stone (LS), brick (B) and Carrara marble (M). Moreover, PDMS-treated stone specimens were used as the reference for each analysis.
The evaluation of the coatings was carried out by different techniques: contact angle and chromatic variation measurements, capillary absorption and water vapor permeability determinations, optical microscopy (visible and UV light), SEM-EDS, and self-cleaning and antibacterial tests.

2. Materials and methods

2.1. Materials

Analytical grade sodium hydroxide (NaOH), ethanol (absolute, 99.8 % EtOH), zinc acetate dihydrate (ZnC4H6O4·2H2O), 2-propanol, zirconium oxychloride octahydrate, orthophosphoric acid, hexamethyldisiloxane, and octamethylcyclotetrasiloxane (D4, utilized as the PDMS precursor) were purchased from Sigma-Aldrich. Cesium hydroxide (CsOH·H2O) was purchased from Alfa Aesar. All the chemicals were used without further purification. Water was purified using a Millipore Organex system (R ≥ 18 MΩ·cm). Lecce stone specimens (open porosity > 30 %) were provided by Tarantino and Lotriglia (Nardò, Lecce, Italy), while specimens of brick (open porosity ~24 %) and Carrara marble (open porosity ~0.5 %) were provided by Favret Mosaici s.a.s. (Pietrasanta, Lucca, Italy).

2.2. Preparation, application and testing methods of the nanocomposite

Before treatment, Lecce stone (LS), brick (B), and marble (M) specimens (squared, 5 × 5 × 1 and 5 × 5 × 2 cm³) were smoothed with abrasive carbide paper (no. 180 mesh), washed with deionized water, dried in an oven at 60 °C and stored in a desiccator to reach room temperature; then their dry weight was measured [2], [4]. First of all, ZrO2-ZnO core-shell NPs (molar ratio about 0.01:1 ZrO2/ZnO) were synthesised by the sol-gel method, as reported in the literature [20], [21]; then, the ZrO2-ZnO-PDMS nanocomposite was synthesized by in-situ reaction. For this purpose, the doped NPs (0.5 % (w/w)) were introduced into the reaction mixture containing octamethylcyclotetrasiloxane (D4, 25 g) and CsOH (0.15 g), used as a catalyst for the ring-opening polymerization of D4. After ultrasonication (20 minutes), the reaction was carried out at 120 ± 3 °C under vigorous stirring for 2.5 hours in an oil bath; then, hexamethyldisiloxane (0.03 g) was added and the reaction was continued at the same temperature for another 2.5 hours, as recommended in the literature [20]. All the samples (LS, B, and M) were saturated with ethanol by keeping the specimens for at least 6 hours in absolute EtOH, in order to prevent the penetration of the coating inside the pores and to ensure that it remains on the surface of the stone [1]. After saturation, all specimens were treated with the ZrO2-ZnO-PDMS nanocomposite as well as with plain PDMS (as the reference) by brushing (applied amount: 1.0 ± 0.02 g for each specimen). Specimens treated with the nanocomposite were named ZN-ZR-PDMS_LS, ZN-ZR-PDMS_B, and ZN-ZR-PDMS_M, while reference specimens were labeled PDMS_LS, PDMS_B, and PDMS_M.

Optical microscopy observations of the treated specimens were carried out using a polarized light microscope (Olympus BX51TF), equipped with the Olympus TH4-200 lamp (visible light) and the Olympus U-RFL-T (UV light). Scanning electron microscopy (SEM) images (backscattered electrons) and energy-dispersive X-ray spectra (EDS) were collected by using a Tescan FE-SEM, Mira XMU series, equipped with a Schottky field emission source, operating in both low and high vacuum, located at the Arvedi laboratory, CISRiC, University of Pavia. The amount of absorbed water as a function of time was determined in accordance with the UNI EN 15801 protocol [22]. Water vapor permeability was determined according to the UNI EN 15803:2010 protocol [23]. Color changes were measured by a Konica Minolta CM-2600d spectrophotometer, determining the L*, a*, and b* coordinates of the CIELAB space and the global chromatic variations, expressed as ΔE*, according to the UNI EN 15886 protocol [24]. The self-cleaning efficiency of the prepared coatings was assessed by using a Multirays photochemical reactor, composed of a UV chamber equipped with 8 UV lamps. The power of each lamp is 15 W, for a total power of 120 W. The reactor is equipped with a rotating disc in order to ensure homogeneous irradiation of all stained samples. The discoloration of methylene blue (MB) dye (0.1 % wt in ethanol solution), applied on the surface of treated stone specimens and
color changes were measured by a konica minolta cm-2600d spectrophotometer, determining the l*, a*, and b* coordinates of the cielab space, and the global chromatic variations, expressed as δe* according to the uni en 15886 protocol [24]. selfcleaning efficiency of prepared coatings was performed by using multirays, photochemical reactor, composed of uv chamber equipped with 8 uv lamps. the power of each lamp is 15w with a total power of 120 w. the reactor is equipped with a rotating disc in order to ensure a homogenized irradiation on all stained samples. the discoloration of methylene blue (mb) dye (0.1 % wt in ethanol solution), applied on the surface of treated stones specimens and acta imeko | www.imeko.org march 2022 | volume 11 | number 1 | 3 their untreated counterparts, was controlled by measuring chromatic variations (five control points for each sample surface) before and after application of mb dye, after 48 and 96 h of uv exposure. discoloration parameter 𝐷∗ was determined by using (1) [25]: 𝐷∗ = |𝑏∗(𝑡) − 𝑏∗(𝑀𝐵)| |𝑏∗(𝑀𝐵) − 𝑏∗(0)| ∙ 100 % (1) where 𝑏∗(0) is the value of chromatic coordinate 𝑏∗ before staining, while 𝑏∗(𝑀𝐵), and 𝑏∗(𝑡) are the mean values after the application of methylene blue over the surfaces and after t hours of uv-a light exposure, respectively. here, 𝑏∗ coordinate was considered, because this parameter is sensitive to blue colour. 3. results and discussions 3.1 characterisation of doped nps and treated stone specimens sem-eds analyses showed that zro2-doped zno nps are homogeneously dispersed in the pdms binder. most of them are spherical with a size in the 15-30 nm range. anyway, some particles displaying larger size and more irregular shape can be observed, which can be due to occasional aggregation (see figure 1a-b). eds analyses performed on single particles showed the presence of both zirconium and zinc, confirming the expected elemental compositions of the doped inorganic nps (figure 1a). optical microscopy observations suggested that nanocomposite material (zro2-zno-pdms) is homogeneously distributed on the treated stone surfaces (ls, b, and m). moreover, it seems that the coating covered pores on the surface and acting as a protective layer to the stone (figure 2). this observation was confirmed even by sem experiments (figure 3). quite acceptable chromatic variations (∆e* < 5) were observed (see table 1) after application of zro2-zno-pdms on any considered substrate, suggesting that the natural colour of the stones is not dramatically affected by the treatment. the corresponding chromatic coordinates are graphically resumed in figure 4. coordinate 𝐿∗ (related to the lightness) is considerably affected by all the treatments, regardless of the considered treatment. variations of 𝑏∗ (related to the blue to yellow change) are more relevant on lecce stone and brick specimens treated with pdms or nanocomposite material. hydrophobic properties of the treated stones are also summarized in table 1. results indicate that all the stone surfaces show hydrophobic behavior (contact angle measurements α ˃ 90°) after the treatments. it’s worth to highlight that nanocomposites-coated stones showed higher hydrophobic properties than polymer-coated stones. it may be due to the homogeneous distribution of nps in the polymer matrix which increase the hydrophobic nature of pdms. figure 1. sem images of doped nps. table 1. overall chromatic variations and contact angle measurements of treated stones. 
3. results and discussion

3.1 characterisation of doped nps and treated stone specimens

sem-eds analyses showed that the zro2-doped zno nps are homogeneously dispersed in the pdms binder. most of them are spherical, with a size in the 15-30 nm range. however, some particles displaying a larger size and a more irregular shape can be observed, which can be attributed to occasional aggregation (see figure 1a-b). eds analyses performed on single particles showed the presence of both zirconium and zinc, confirming the expected elemental composition of the doped inorganic nps (figure 1a). optical microscopy observations suggested that the nanocomposite material (zro2-zno-pdms) is homogeneously distributed on the treated stone surfaces (ls, b, and m). moreover, the coating appears to cover pores on the surface and act as a protective layer for the stone (figure 2). this observation was confirmed by sem experiments (figure 3).

quite acceptable chromatic variations (∆e* < 5) were observed (see table 1) after application of zro2-zno-pdms on every considered substrate, suggesting that the natural colour of the stones is not dramatically affected by the treatment. the corresponding chromatic coordinates are graphically summarised in figure 4. coordinate l* (related to lightness) is considerably affected by all the treatments, regardless of the substrate. variations of b* (related to the blue-to-yellow change) are more relevant on lecce stone and brick specimens treated with pdms or with the nanocomposite material. the hydrophobic properties of the treated stones are also summarized in table 1. the results indicate that all the stone surfaces show hydrophobic behavior (contact angle α > 90°) after the treatments. it is worth highlighting that nanocomposite-coated stones showed higher hydrophobicity than polymer-coated stones. this may be due to the homogeneous distribution of nps in the polymer matrix, which increases the hydrophobic nature of pdms.

figure 1. sem images of doped nps.

table 1. overall chromatic variations and contact angle measurements of treated stones.

samples          ∆e*          α (°)
pdms_ls          4.4 (±0.2)   111 (±1)
zn-zr-pdms_ls    3.9 (±0.1)   131 (±2)
pdms_b           4.4 (±0.5)   128 (±1)
zn-zr-pdms_b     4.8 (±0.2)   136 (±2)
pdms_m           4.9 (±0.1)   98 (±2)
zn-zr-pdms_m     4.8 (±0.1)   107 (±3)

figure 2. optical microscope images of treated stones: (a) zn-zr-pdms_ls, (b) zn-zr-pdms_b, and (c) zn-zr-pdms_m.

figure 3. sem images of treated stones: (a) zn-zr-pdms_ls, (b) zn-zr-pdms_b, and (c) zn-zr-pdms_m.

moreover, preliminary data concerning capillary absorption tests confirmed that the treated stones exhibit water-repellent behavior. as can be seen in table 2, the ca values (related to the first 30 minutes) are affected by both treatments (polymer as well as nanocomposite), while some reduction of the qf value compared to the untreated stone was noted at the end of the test (about 96 h). it therefore seems that the treatments, especially the nanocomposite coating, provide long-term water resistance. these results are in good agreement with the hydrophobic properties of the treated stones. in addition, vapour permeability was preserved at acceptable levels upon treatment with the zro2-zno-pdms nanocomposite. hence, all these results suggest that the newly prepared coating can be considered a promising protective material.

3.2 analysing the self-cleaning properties

as reported in the introduction, the evaluation of the self-cleaning effectiveness of the newly prepared coating is one of the main aspects of this research study. figure 5 shows (as an example) the behavior of lecce stone before and after the test. the different behavior of the stone treated with the nanocomposite and with plain pdms can be observed even by the naked eye. in order to evaluate the self-cleaning effect of the coatings, a discoloration test was performed on the treated stones. after the application of the mb solution on the treated surface, the specimens were exposed to uv irradiation. a quantitative evaluation of the self-cleaning behavior of zro2-zno-pdms on the different stones was obtained by calculating the discoloration parameter d*, which was determined at two different time intervals (see figure 6). it is related to the variation of the b* coordinate (cielab space) and corresponds to the amount of mb removed from the coated surfaces. this test provided quite similar behavior for ls and b: the coating containing doped nps showed a higher effectiveness than the plain pdms coating. for instance, the discoloration factor related to the new coating is about double that of pdms (at the end of the test: d*pdms ~ 20 %, d*nanoc ~ 40 %, both for ls and b). as reported in the literature, nps as well as doped nps combined with pdms coatings have been used as self-cleaning protective coatings due to their photocatalytic performance under uv light (in the presence of nps, the discoloration factor is always higher than for pdms alone) [1], [16], [25]. the new coating showed even better results when applied on the marble surface, as the discoloration factor was around 70 % after 96 hours of uv irradiation. these results tally with the reported literature [1]. nevertheless, it should be noted that even plain pdms displays better performance on marble specimens compared to the other considered stones. although it is difficult to reliably compare the effectiveness of a treatment on different substrates, due to the different original stone properties (e.g. absorbability of the products, porosity, etc.) that may affect the final performance, the experimental data discussed above indicate that the nanocomposite coating (zro2-zno-pdms) provides better results in terms of the self-cleaning effect when compared to plain pdms.

figure 4. chromatic coordinates of treated stones.
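as a minimal sketch of how the ca and qf values reported in table 2 below might be extracted from a uni en 15801-style absorption curve — assuming, as is common for this kind of test, that ca is the slope of the absorbed mass per unit area against the square root of time over the early linear stage (here the first 30 minutes) and qf is the maximum uptake at the end of the test — the data points in this fragment are invented for illustration:

```python
import numpy as np

# invented absorption curve: water absorbed per unit area q (mg/cm^2)
# at times t (s); real curves come from weighing per uni en 15801
t = np.array([60, 300, 600, 1200, 1800, 3600, 86400, 345600], dtype=float)
q = np.array([0.9, 2.1, 2.9, 4.1, 5.0, 6.8, 9.7, 10.2])

# ca: slope of q against sqrt(t) over the early linear stage
# (here the first 30 minutes), in mg/(cm^2 s^1/2)
early = t <= 1800
ca = np.polyfit(np.sqrt(t[early]), q[early], 1)[0]

# qf: maximum water absorbed per unit area at the end of the test (~96 h)
qf = q.max()
print(f"ca = {ca:.3f} mg/(cm^2 s^1/2), qf = {qf:.1f} mg/cm^2")
```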
table 2. maximum water absorbed per unit area (qf, mg/cm2), capillary water absorption coefficient (ca, mg/(cm2 s1/2)), and water vapor permeability of untreated and treated samples.

samples          qf in mg/cm2      ca in mg/(cm2 s1/2)   permeability in g/(m2 24 h)
ls               518.74 (±9.62)    8.73 (±0.57)          236 (±9)
pdms_ls          479.04 (±8.16)    4.63 (±0.50)          185 (±6)
zn-zr-pdms_ls    434.18 (±6.68)    2.22 (±0.54)          174 (±4)
b                431.57 (±12.34)   2.12 (±0.21)          159 (±4)
pdms_b           346.66 (±10.49)   1.53 (±0.44)          107 (±4)
zn-zr-pdms_b     263.19 (±13.49)   1.03 (±0.01)          95 (±5)

figure 5. images of ls before and after the self-cleaning test.

figure 6. the discoloration percentage (d* (%)) after uv exposure.

4. conclusions

in order to enhance the photocatalytic properties of zno, zro2-doped zno nps were synthesised in the laboratory. after that, the zro2-zno-pdms nanocomposite was synthesized, applied on different stone substrates (ls, b, and m), and its protective behaviour was compared to that of the well-known pdms. the morphological analysis showed that the prepared doped nps have a spherical shape and very small sizes (15-30 nm); this small particle size contributes substantially to the good performance of the prepared coating. the results of contact angle, chromatic variation, capillary absorption, and water vapour permeability measurements indicate satisfactory protective behaviour of the resulting coating. moreover, surface analyses suggested that the nanoparticles included in the binder matrix are homogeneously distributed on all the stone surfaces. furthermore, the new coating (zro2-zno-pdms) showed better results than pdms in terms of the self-cleaning effect under uv irradiation, which represents one of the most important results of this research work. further experiments are still in progress to better assess the nanocomposite properties.

references

[1] c. kapridaki, l. pinho, m. j. mosquera, p. maravelaki-kalaitzaki, producing photoactive, transparent and hydrophobic sio2-crystalline tio2 nanocomposites at ambient conditions with application as self-cleaning coatings, appl. catal. b: environ., vol. 156-157, 2014, pp. 416-427. doi: 10.1016/j.apcatb.2014.03.042
[2] m. licchelli, m. malagodi, m. weththimuni, c. zanchi, nanoparticles for conservation of bio-calcarenite stone, appl. phys. a, vol. 114, no. 3, 2014, pp. 673-683. doi: 10.1007/s00339-013-7973-z
[3] m. ricca, e. le pera, m. licchelli, a. macchia, m. malagodi, l. randazzo, n. rovella, s. a. rule, m. l. weththimuni, m. f. la russa, the crati project: new insights on the consolidation of salt weathered stone and the case study of san domenico church in cosenza (south calabria, italy), coatings, vol. 9, no. 5, 2019, pp. 330-345. doi: 10.3390/coatings9050330
[4] m. licchelli, m. malagodi, m. l. weththimuni, c. zanchi, water-repellent properties of fluoroelastomers on a very porous stone: effect of the application procedure, prog. org. coat., vol. 76, no. 2-3, 2013, pp. 495-503. doi: 10.1016/j.porgcoat.2012.11.005
[5] m. licchelli, s. j. marzolla, a. poggi, c. zanchi, crosslinked fluorinated polyurethanes for the protection of stone surfaces from graffiti, j. cult. herit., vol. 12, 2011, pp. 34-43. doi: 10.1016/j.culher.2010.07.002
[6] m. licchelli, m. malagodi, m. weththimuni, c.
zanchi, anti-graffiti nanocomposite materials for surface protection of a very porous stone, appl. phys. a, vol. 116, no. 4, 2014, pp. 1525-1539. doi: 10.1007/s00339-014-8356-9
[7] e. balliana, g. ricci, c. pesce, e. zendri, assessing the value of green conservation for cultural heritage: positive and critical aspects of already available methodologies, int. j. conserv. sci., vol. 7, no. 1, 2016, pp. 185-202. online [accessed 9 march 2022] http://www.ijcs.uaic.ro/public/ijcs-16-si01_balliana.pdf
[8] g. alfano, g. lustrato, c. belli, e. zanardini, f. cappitelli, e. mello, c. sorlini, g. ranalli, the bioremoval of nitrate and sulfate alterations on artistic stonework: the case-study of matera cathedral after six years from the treatment, int. biodeter. biodegr., vol. 65, no. 7, 2011, pp. 1004-1011. doi: 10.1016/j.ibiod.2011.07.010
[9] q. h. tang, d. zhou, y. l. wang, g. f. liu, laser cleaning of sulfide scale on compressor impeller blade, appl. surf. sci., vol. 355, 2015, pp. 334-340. doi: 10.1016/j.apsusc.2015.07.128
[10] p. baglioni, d. berti, m. bonini, e. carretti, l. dei, f. fratini, r. giorgi, micelle, microemulsions, and gels for the conservation of cultural heritage, adv. colloid interface sci., vol. 205, 2014, pp. 361-371. doi: 10.1016/j.cis.2013.09.008
[11] j. a. l. domingues, n. bonelli, r. giorgi, e. fratini, f. gorel, p. baglioni, innovative hydrogels based on semi-interpenetrating p(hema)/pvp networks for the cleaning of water-sensitive cultural heritage artifacts, langmuir, vol. 29, no. 8, 2013, pp. 2746-2755. doi: 10.1021/la3048664
[12] c. kapridaki, a. verganelaki, p. dimitriadou, p. maravelaki-kalaitzaki, conservation of monuments by a three-layered compatible treatment of teos-nano-calcium oxalate consolidant and teos-pdms-tio2 hydrophobic/photoactive hybrid nanomaterials, materials, vol. 11, no. 5, 2018, p. 684 (23 pages). doi: 10.3390/ma11050684
[13] f. gherardi, m. roveri, s. goidanich, l. toniolo, photocatalytic nanocomposites for the protection of european architectural heritage, materials, vol. 11, no. 1, 2018, p. 65 (15 pages). doi: 10.3390/ma11010065
[14] m. l. weththimuni, m. licchelli, m. malagodi, n. rovella, m. f. la russa, consolidation of bio-calcarenite stone by treatment based on diammonium hydrogenphosphate and calcium hydroxide nanoparticles, measurement, vol. 127, 2018, pp. 396-405. doi: 10.1016/j.measurement.2018.06.007
[15] p. munafò, g. b. goffredo, e. quagliarini, tio2-based nanocoatings for preserving architectural stone surfaces: an overview, constr. build. mater., vol. 84, 2015, pp. 201-218. doi: 10.1016/j.conbuildmat.2015.02.083
[16] v. crupi, b. fazio, a. gessini, z. kis, m. f. la russa, d. majolino, c. masciovecchio, m. ricca, b. rossi, s. a. ruffolo, v. venuti, tio2-sio2-pdms nanocomposite coating with self-cleaning effect for stone material: finding the optimal amount of tio2, constr. build. mater., vol. 166, 2018, pp. 464-471. doi: 10.1016/j.conbuildmat.2018.01.172
[17] m. a. aldoasri, s. s. darwish, m. a. adam, n. a. elmarzugi, s. m. ahmed, protecting of marble stone facades of historic buildings using multifunctional tio2 nanocoatings, sustainability, vol. 9, 2017, pp. 1-15. doi: 10.3390/su9112002
[18] m. l. weththimuni, d. capsoni, m. malagodi, m. licchelli, improving the protective properties of shellac-based varnishes by functionalized nanoparticles, coatings, vol. 11, 2021, pp. 419-437. doi: 10.3390/coatings11040419
[19] a. w. xu, y. gao, h. q.
liu, the preparation, characterization, and their photocatalytic activities of rare-earth-doped tio2 nanoparticles, j. catal., vol. 207, 2002, pp. 151-157. doi: 10.1006/jcat.2002.3539
[20] m. s. selim, m. a. shenashen, a. elmarakbi, n. a. fatthallah, s. i. hasegawa, s. a. el-safty, synthesis of ultrahydrophobic thermally stable inorganic-organic nanocomposites for self-cleaning foul release coatings, chem. eng. j., vol. 320, 2017, pp. 653-666. doi: 10.1016/j.cej.2017.03.067
[21] a. k. singh, u. t. nakate, microwave synthesis, characterization, and photoluminescence properties of nanocrystalline zirconia, sci. world j., vol. 2014, 7 pages. doi: 10.1155/2014/349457
[22] uni en 15801:2010, conservazione dei beni culturali, metodi di prova, determinazione dell'assorbimento dell'acqua per capillarità, 2010 [in italian].
[23] uni en 15803:2010, conservazione dei beni culturali, metodi di prova, determinazione della permeabilità al vapore d'acqua, 2010 [in italian].
[24] uni en 15886:2010, conservazione dei beni culturali, metodi di prova, misura del colore delle superfici, uni ente italiano di normazione, 2010 [in italian].
[25] m. b. chobba, m. l. weththimuni, m. messaoud, c. urzi, j. bouaziz, f. d. leo, m. licchelli, ag-tio2/pdms nanocomposite protective coatings: synthesis, characterization, and use as a self-cleaning and antimicrobial agent, prog. org. coat., vol. 158, 2021, pp. 106342-106359. doi: 10.1016/j.porgcoat.2021.106342
twisted and coiled polymer muscle actuated soft 3d printed robotic hand with peltier cooler for drug delivery in medical management

acta imeko, issn: 2221-870x, september 2022, volume 11, number 3, 1-6

rippudaman singh1, sanjana mohapatra1,2, pawandeep singh matharu1, yonas tadesse1,2,3,4

1 humanoid, biorobotics, and smart systems laboratory, mechanical engineering department, the university of texas at dallas, richardson, tx 78705 usa
2 biomedical engineering department, the university of texas at dallas, richardson, tx 78705 usa
3 electrical and computer engineering department, the university of texas at dallas, richardson, tx 78705 usa
4 alan g.
macdiarmid nanotech institute, the university of texas at dallas, richardson, tx 78705 usa

section: research paper

keywords: robotic hand; artificial muscle; tcp muscles; fishing line muscles; 3d printed hand; peltier cooling; biomimetic; grasping; drug delivery

citation: rippudaman singh, sanjana mohapatra, pawandeep singh matharu, yonas tadesse, twisted and coiled polymer muscle actuated soft 3d printed robotic hand with peltier cooler for drug delivery in medical management, acta imeko, vol. 11, no. 3, article 10, september 2022, identifier: imeko-acta-11 (2022)-03-10

section editor: zafar taqvi, usa

received march 25, 2022; in final form september 27, 2022; published september 2022

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

funding: this work is supported by an internal fund (research enhancement).

corresponding author: rippudaman singh, e-mail: rippudaman.singh@utdallas.edu

abstract
this paper presents experimental studies on a soft 3d-printed robotic hand whose fingers are actuated by twisted and coiled polymer (tcpfl) muscles, driven by resistive heating and cooled by water and a peltier mechanism (thermoelectric cooling) to increase the actuation frequency. the hand can be utilized for pick-and-place applications of drugs in clinical settings, tasks which may be repetitive for humans. a combination of abs plastic and thermoplastic polyurethane material is used to additively manufacture the robotic hand. the hand, along with a housing tank for the muscles and peltier coolers, has a length of 380 mm and weighs 560 g. the fabrication process of the tcpfl actuators coiled with 160 µm diameter nichrome wires is presented. the actuation frequency in air for tcpfl is around 0.01 hz. this study shows the effect of water and peltier cooling in improving the actuation frequency of the muscles to 0.056 hz. experiments have been performed with a flex sensor integrated at the back of each finger to calculate its bending extent while being actuated by the tcpfl muscles. all these experiments are also used to optimize the tcpfl actuation. overall, a low-cost and lightweight 3d printed robotic hand is presented, which significantly increases the actuation performance with the help of cooling methods and can be used in medical management applications.

1. introduction

soft actuators such as sma muscles and their composites are currently being widely studied. however, smas are expensive, are not easily manufactured in-house, and exhibit high hysteresis. new actuators such as tcpfl muscles have recently emerged as a new class of actuators. these muscles have a wide range of possibilities in robotics applications. in this paper, the focus is on the application of tcpfl muscles in a robotic hand and a method of improving their actuation frequency. the fabrication of tcpfl muscles has been discussed in a study by haines et al. [1], where different materials among polyethylene, nylon 6, and silver-plated nylon 6,6 were investigated for applications in soft actuators. twisted and coiled nylon 6,6 polymer artificial muscles were found to be reliable and inexpensive, with a high load-carrying capacity and a large stroke. another study [2] discusses the use of tcpfl muscles with nichrome wire, where the nichrome acts as a heater wire with quick heating capabilities. the incorporation of this novel soft actuator in robotic arms can help reduce the size and weight of existing motor-actuated prosthetic hand models while producing similar actuation.
also, as proved in a study by wu et al. [3], the temperature of the muscle when actuated in air can reach values in the range of 80 °c to 140 °c. while these high temperatures are optimal during the heat-induced contraction cycle, they do not allow a proper relaxation of the muscles. to obtain proper relaxation, the cooling cycle has to be configured for a long time, which increases the duration of each actuation cycle and reduces the actuation frequency. the use of cooling methods to manage the high temperatures of the tcpfl muscle brought about by joule heating, so as to obtain proper relaxation, is discussed in this paper. the main purpose of the paper is to shorten the cooling cycle of the tcpfl muscles, reducing the time gap between flexion and extension and increasing the overall actuation frequency.

water is the most readily available natural coolant. having a high specific heat capacity [4], water can extract heat from the tcpfl muscles at a fast pace. it has been used in other applications to optimize the actuation of artificial muscles, including shape memory alloys (smas) [5], [6], giant magnetostrictive materials (gmms) [7], and liquid crystal elastomers (lces) [8]. similar methods have been discussed with a robotic finger attached to tcpfl muscles in previous work by wu et al. [9], where the tcpfl muscles were actuated with the help of hot and cold water. the use of water was also inspired by previous studies where niti sma [10] and tcp [11] muscles were used in underwater jellyfish robots. it was noted that the actuation frequency drastically increased in a medium where heat was dissipated faster. this behaviour was mimicked here by submerging the muscles in a sealed container with water as the heat-dissipating medium. a study by astrain et al. [12] introduces a type of cooling system that uses a voltage difference to generate a temperature difference between the top and bottom ceramic plates of the device. it uses alternating semiconductor couples to leverage the peltier effect and produce thermoelectric cooling; this type of thermoelectric cooler is called a peltier cooler. another study on peltier coolers [13] incorporated one in a closed insulated container to lower the internal temperature to the freezing point of water. such peltier-based coolers have also been used with other artificial sma muscles [14] to optimize their thermomechanical properties. hence, they were considered here for optimizing the actuation of tcpfl muscles.

the hypothesis of this paper focuses on implementing these water-based and peltier cooling methods to optimize the flexion and extension of a prosthetic finger driven by tcpfl muscles. improving the actuation frequency of the tcpfl muscles for faster prosthetic hand grasping was the major objective of the experiments designed for this study. the study involved collecting prosthetic performance data using flex and temperature sensors. experiments were conducted using special housing tanks to incorporate the muscles, water, peltier coolers, and sensors.
substantial improvement in actuation frequency, from 0.01 hz in air to 0.056 hz due to the cooling methods, was observed through the sensor data obtained during the experiments. submerging the tcpfl muscles in water inside the container and further dissipating the heat from the container using peltier coolers proved to be very effective. this is especially helpful for improving the speed of a prosthetic hand in applications like the pick and place of medical drugs in clinical settings.

2. methodology

first, the fabrication of the core component of this experimental study, the tcpfl artificial muscle, is discussed. these muscles were made of nylon 6,6 fishing line wound with nichrome wire. the nichrome wire converts the supplied power into heat, and this heat is transferred by conduction to the fishing line, causing it to contract.

2.1. tcpfl fabrication process

an approximately 1.5 m length of nylon fishing line was cut from a spool. the ends of this cut length were tied to rings/washers to mount on the motors that rotated and coiled the muscles. as shown in figure 1, the nylon fishing line was attached to a motor at the top, and the other end was suspended with a weight of 500 g. this ensured enough tension to keep the coiling uniform; the load should not be heavy enough to break the fishing line while coiling. the top motor was run at a speed of 300 rpm while the rotation of the bottom end of the fishing line was restricted. the fishing line was allowed to coil upwards while twisting. the axial rotation was arrested only once coiling started, whether from the top, bottom, or middle. the motor was then stopped, and the rotary restriction was removed. at this stage, the winding of nichrome over the uncoiled fishing line was initiated. the nichrome was coiled over the fishing line at a speed of 125 rpm. once the nichrome was uniformly wound over the fishing line muscle, the rotary restriction was applied again so that the nylon coiling process could be carried out. at the end of the coiling process, the weight was removed. during this final step, the bottom end of the coiled muscle must be held firmly before and after removing the weight, slightly releasing the hand pressure once the weight is removed so that the muscle can ease some of its torsional tension; the muscle does not uncoil but only untwists a few rotations. after this, the muscle was removed from the coiling setup, and each of its ends was mounted on a small platform for heat treatment. a furnace was heated to 180 °c and the muscles were placed inside for 90 min. after heat treatment, the ends of the muscle were crimped firmly, with the nichrome in contact with gold crimps. the muscles were then placed under a 500 g load and trained under various power cycles supplied to their crimped ends. the various supplied currents brought about different contraction steps, as shown in table 1. this training step later supports similar contractions and relaxations when a current is provided across the muscle's ends.

figure 1. schematic of the fabrication process of the twisted and coiled polymer (fishing line) muscles. step 1: the fishing line twisting process. step 2: the nichrome winding process. step 3: the self-coiling process.
2.2. tpu hand setup

a single-piece robotic palm was 3d printed with thermoplastic polyurethane (tpu) material. tpu is a flexible material that provides strength at higher thicknesses and toughness and flexibility at lower thicknesses. this hand was previously utilized in combination with other artificial sma muscles [15], so it is adequate for testing the tcpfl muscles used in the design presented in this paper. the container was made of transparent acrylic material to ensure clear visibility of the movement of the muscles inside. dedicated slots on the bottom of the container accommodated the peltier plates. the container was assembled and sealed with m-seal and silicone; the m-seal provides strength to the container and the silicone ensures water sealability. holes were provided to accommodate ten tcpfl muscles, two for each finger: one for flexion and the other for extension. the openings were sealed with silicone to allow flexibility and ensure water sealability. one end of each muscle was fixed to the back of the container; the other end was connected to a finger of the single-piece tpu hand, as shown in figure 2.a, using fishing line strings. when power was applied across the muscles, they contracted, pulling the fishing line string connected to the finger.

2.3. peltier cooler and sensors

two peltier coolers were inserted into the dedicated slots inside the muscle housing container. these peltier coolers each consist of 127 couples of n-type and p-type semiconductor blocks that operate at 12 v, 2 a to create thermoelectric cooling across their plates. both were therefore connected in series to a 24 v, 2 a supply, as seen in figure 3.c. the effect of these peltier coolers in lowering the temperature of the water exposed to the cooling plate was observed using underwater temperature probes. the probe used was the ds18b20 digital single-bus intelligent temperature sensor [16], which has been used in previous studies involving underwater applications [17], [18]. this sensor, when connected in a circuit as shown in figure 3.a, provides a digital output of the probe temperature to an arduino microcontroller. the temperature data recorded by the probe, attached underwater inside the container during the experiments, helped in assessing the benefits of the cooling methods for tcpfl actuation.

there are many instances where standard flex sensors have been used in hand prosthetic applications, either to control the bending of the prosthetic fingers [19], [20] or to recognize hand gestures [21], [22], [23]. also known as stretch sensors, they can be used in wearables [24] to track the bending of a finger. a flex sensor is a resistive sensor that alters its resistance as it is bent along its length, and this resistive property was exploited here to characterize the bending action of the prosthetic hand. the data obtained from the flex sensor helped characterize the tcpfl muscles in terms of their actuation frequency and the optimization of the actuation with cooling methods. in this design, the resistance was converted to a voltage reading using an amplification circuit, as seen in figure 3.b, and read through the adc pins of the arduino. this voltage reading was recorded with respect to time and linearly interpolated against previously obtained angle-vs-voltage flex sensor calibration data. this calibration data helped in calculating the angular position of the finger with respect to the horizontal, as shown in figure 2.b.
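since the flex-sensor readout described above is essentially a linear interpolation of an amplified adc voltage against calibration pairs, a minimal python sketch of this conversion could look as follows; the calibration numbers and adc parameters are illustrative assumptions, not the values used in the experiments:

```python
import numpy as np

# previously recorded angle-vs-voltage calibration pairs for the flex
# sensor (illustrative values; the real calibration is device-specific)
cal_voltage = np.array([1.10, 1.45, 1.80, 2.20, 2.60])  # amplifier output, v
cal_angle   = np.array([0.0, 20.0, 40.0, 60.0, 80.0])   # finger angle, deg

def adc_to_angle(adc_count, vref=5.0, resolution=1023):
    # convert a 10-bit arduino-style adc reading of the amplified
    # flex-sensor signal to a finger angle by linear interpolation
    voltage = adc_count * vref / resolution
    return float(np.interp(voltage, cal_voltage, cal_angle))

print(adc_to_angle(410))  # adc count 410 -> ~2.0 v -> ~50 deg
```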
the angle-vs-time results obtained from the flex sensors helped in observing the actuation frequency of the muscles under different cooling conditions.

3. experimental methods and results

the experimentation conducted for the tcpfl muscles included a characterization setup utilized in other studies to understand various properties of niti sma [10], tcp [11], and tca [25] muscles using multiple sensors, as shown in the study by hamidi et al. [26]. the setup, shown in figure 4, included a keyence laser displacement sensor to measure the muscle's actuation strain, thermocouples to measure the muscle's temperature during actuation, and an ni daq 9221 to measure the output voltage due to the change of the muscle's resistance during actuation. using this setup, characterization experiments were conducted with the tcpfl muscles fabricated in this paper. the properties of the muscle obtained from these experiments are listed in table 2. they include an actuation strain ranging from 10 % to 27 %. the voltage across the muscle could rise to 115 v due to the change in resistance during actuation.

table 1. power supplied across both ends of the tcpfl muscles during the training procedure.

current (a)   deformation (%)
0.18          10
0.2           14
0.24          20
0.26          27

figure 2. (a) the schematic of the experimental setup. the fingers of a tpu prosthetic hand are attached to tcpfl muscles using fishing line strings. bidirectional flexion and extension actions of the finger are produced by the actuation of two tcpfl muscles. the muscles are housed in an acrylic tank filled with water; the tank has two peltier coolers integrated on its base. (b) the angular position of the finger with respect to the horizontal is calculated using data recorded from flex sensors during flexion or extension actions.

figure 3. schematic of the electronic circuits of the sensors and peltier coolers. (a) ds18b20 temperature probe used to monitor the heating and cooling of the water during tcpfl actuation and peltier cooling; data collected by the digital pins of the arduino microcontroller. (b) flex sensor used to monitor the finger bending movements during the finger's actuation; sensor integrated with an amplification circuit. (c) peltier coolers used to cool the water in the tcpfl housing tank; connected in series with a 24 v power supply.

the experimental setup of the tpu hand actuated by two tcpfl muscles, as described in the methodology and shown in figure 5, was used to obtain results to test the hypothesis of this paper. there was a separate power supply for each of the two tcpfl muscles. each power supply was configured with different heating and cooling cycles to respectively contract and relax its muscle. the lower tcpfl muscle, responsible for extension, usually actuated after a flexion by the upper tcpfl muscle; hence the lower muscle required more energy to contract, since it additionally had to loosen the upper muscle's contraction. the actuation cycles for flexion (by the upper tcpfl muscle) and extension (by the lower tcpfl muscle) were set separately on the two power supplies.

the temperature data obtained from the ds18b20 sensor showed an increase in the temperature of the water when tcpfl muscles were actuated in it. figure 6 portrays this behaviour when one muscle was actuated inside different volumes of water with a heating cycle of 5 seconds and a cooling cycle of 10 seconds.
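to see why the water volume matters, a rough energy-balance estimate of the per-pulse temperature rise can be written in a few lines, assuming (conservatively) that all joule heat from the muscle ends up in the water; the 20 w power figure is an illustrative assumption, not a measured value:

```python
# rough upper-bound estimate of the water temperature rise per actuation
# cycle, assuming all joule heat from the muscle ends up in the water
# (in reality the tank walls and peltier plates also absorb heat)
C_WATER = 4.186   # specific heat of water, j/(g k)
RHO     = 1.0     # density of water, g/ml

def temp_rise_per_cycle(power_w, heat_s, volume_ml):
    # energy delivered in one heating pulse divided by the bath's
    # heat capacity gives the per-cycle temperature increment
    return power_w * heat_s / (RHO * volume_ml * C_WATER)

# illustrative: ~20 w dissipated during a 5 s heating pulse
for v_ml in (50, 100, 150):
    print(f"{v_ml} ml: +{temp_rise_per_cycle(20, 5, v_ml):.2f} k per cycle")
```

the estimate shows the expected trend: doubling or tripling the water volume proportionally reduces the temperature rise per cycle, which is exactly the behaviour observed below.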
higher volumes of water had a higher capacity to dissipate heat from the muscles and showed a lower rise in temperature. figure 7 portrays the cooling ability of the peltier coolers with respect to the volume of water. these cooling rates show that peltier coolers can be instrumental in optimizing the actuation frequency of the muscles. a volume of 150 ml of water was finally chosen as optimal for the experiments of this study, as it is heated more slowly by the muscles compared to lower volumes; although its cooling rate is smaller than that of lower volumes, it is sufficient for this application.

figure 4. experimental setup for the characterization of tcpfl muscles.

table 2. informational data of the fabricated tcpfl muscle obtained through characterization experiments performed during similar studies on tcp muscles [10], [11].

property                              value
material                              nylon (6,6) fishing line
type of actuation                     electrothermal
type of resistance wire               nichrome (nickel, chromium)
resistance wire diameter              dw = 160 µm
precursor fibre diameter              d = 0.8 mm
length of precursor fibre             l = 1500 mm
weight for fabrication                mf = 500 g
annealing temperature/time            ta = 180 °c / (90 min)
diameter after cooling                d = 2.8 mm
length after cooling                  l = 120 mm
resistance                            r = 110 ohm
current (input)                       i = 0.16-0.26 a (during training, i = 0.26 a is provided to the muscle)
voltage (output)                      v = 57.6-115.2 v
actuation power                       p = 9.2-36.8 w
heating time                          th = 10 s, 15 s
cooling time                          tc = 90 s, 85 s
actuation frequency (air-cooled)      f = 0.16-0.1 hz
actuation strain, at 500 g load       ε = 27-10 %
life cycle                            2400 cycles in air at 9 mhz and 1 % duty cycle

figure 5. experimental setup of the tpu hand with the tcpfl muscles and the housing tank with installed peltier coolers. two power supplies are used to actuate each of the two muscles for the movement of one finger. the flex sensor and temperature probe circuits are utilized inside the setup.

figure 6. heating effect of tcpfl muscle actuation on the water. the muscle was actuated with a heating cycle of 5 seconds and a cooling cycle of 10 seconds. a steady rise in temperature was observed by the ds18b20 temperature probe over 50 cycles of actuation, for different volumes of water.

figure 7. cooling of the water in the housing tank with peltier coolers. the reduction of the water temperature was observed over time by the ds18b20 temperature probe for different volumes of water.

flex sensor data were obtained for different actuation cycles of the tcpfl muscles and different environmental conditions (in air, in water, and in peltier-cooled water). for actuating the finger in air at 0.26 a, the heating and cooling cycles were set to 10 seconds and 90 seconds respectively for the upper muscle (flexion) and 15 seconds and 85 seconds respectively for the lower muscle (extension). this setting resulted in an actuation frequency of 0.01 hz for the tcpfl muscles. when shorter cycles were selected, the upper muscle could not relax from its contracted state during its cooling cycle, which prevented the extension of the finger through the lower muscle's actuation. therefore, the longer cooling cycles of 90 seconds were selected for proper in-air actuation of the two muscles, as confirmed by the flex sensor results (figure 8). the actuation frequency of the tcpfl muscles improved by about five times, to 0.056 hz, when the muscles were completely submerged in 150 ml of water.
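the actuation frequencies quoted in this section follow directly from the heating and cooling times, since one full cycle is one heating phase plus one cooling phase; a one-line helper reproduces the paper's figures:

```python
def actuation_frequency(heat_s, cool_s):
    # one full cycle is one heating phase plus one cooling phase,
    # so the frequency is the reciprocal of their sum
    return 1.0 / (heat_s + cool_s)

print(actuation_frequency(10, 90))  # in air:   0.01 hz
print(actuation_frequency(5, 13))   # in water: ~0.056 hz
```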
the heating and cooling cycles, operating at 1 a, were set to 5 seconds and 13 seconds respectively for the upper muscle and 7 seconds and 11 seconds respectively for the lower muscle. as seen from the angles interpolated from the flex sensor results (figure 9.a), a very good actuation frequency was obtained, and the time to bring about these actuations was much shorter than in the previous in-air experiments. an even smoother actuation was observed during the extension of the finger when the peltier coolers were utilized to remove the heat accumulated in the water medium. the same actuation cycles were used as before with water. the interpolated flex sensor results in figure 9.b show a significantly faster extension: the extension angle-vs-time slope is much steeper in peltier-cooled water than in plain water. the peltier coolers help maintain the water at lower temperatures, which aids the cooling/relaxation phase of the muscles.

4. conclusions

the proposed cooling methods proved to have an impact in optimizing the actuation performance of the tcpfl muscles. the frequency of the muscles' actuation was increased by over five times, from 0.01 hz when actuated in air to 0.056 hz when actuated in water. the major cooling effect observed was due to the presence of water as a medium to dissipate heat from the actuators; the resulting finger movement was smooth and fast-paced. the peltier coolers ensured that the temperature of the water was maintained close to room temperature, as they dissipated the heat that accumulated in the water from the actuating muscles. thanks to this behaviour, the improved actuation frequency of 0.056 hz was maintained over multiple cycles. the actuation for the extension of the robotic finger by the lower tcpfl muscle took longer than the actuation for the flexion of the finger by the upper tcpfl muscle. this was because of the tension remaining in the upper muscle from its contraction during the flexion action, which kept the finger tightly flexed and did not allow it to extend in the other direction. the peltier coolers shortened the extension phase of the robotic finger, since this phase depends on how fast the upper tcpfl muscle cools down and relaxes; the faster cooling was achieved thanks to the lower water temperatures produced by the peltier coolers. further studies and experimentation are required to improve the performance of these muscles in an enclosed setup. new housing tank designs and materials are being explored to augment prosthetic actuation efficiency.

acknowledgement

the authors would like to thank akash ashok ghadge for his support in the experimentation and design, and eric wenliang deng for his support with the tpu palm used for the experimentation; both are affiliated with the humanoid biorobotics and smart systems lab, the university of texas at dallas.

figure 8. the angular position of each finger during bi-directional actuation by two tcpfl muscles. this flexion/extension angle of the finger was calculated from the data obtained from the flex sensor. the muscles were actuated in air with a heating cycle of 10 seconds (for the flexion action, shown in yellow) and a cooling cycle of 90 seconds for the upper muscle, and a heating cycle of 15 seconds (for the extension action, shown in green) and a cooling cycle of 85 seconds for the lower muscle.
both muscles complete each cycle in 100 seconds, so the actuation frequency is 1/100 = 0.01 hz.

figure 9. the angular position of each finger calculated from flex sensor data during bi-directional actuation by two tcpfl muscles. the muscles were actuated with a heating cycle of 5 seconds (for the flexion action, shown in yellow) and a cooling cycle of 13 seconds for the upper muscle, and a heating cycle of 7 seconds (for the extension action, shown in green) and a cooling cycle of 11 seconds for the lower muscle. both muscles complete each cycle in 18 seconds, so the actuation frequency is 1/18 = 0.056 hz. the tcpfl muscles were submerged inside (a) 150 ml of water, (b) 150 ml of water cooled by two peltier coolers.

references

[1] c. s. haines (+20 authors), artificial muscles from fishing line and sewing thread, science 343(6173) (2014), pp. 868-872. doi: 10.1126/science.1246906
[2] a. n. semochkin, a device for producing artificial muscles from nylon fishing line with a heater wire, 2016 ieee international symposium on assembly and manufacturing (isam), 21-22 august 2016, fort worth, tx, usa. doi: 10.1109/isam.2016.7750715
[3] l. wu, f. karami, a. hamidi, y. tadesse, biorobotic systems design and development using tcp muscles, in electroactive polymer actuators and devices (eapad) xx, international society for optics and photonics, 2018, denver, colorado, united states. doi: 10.1117/12.2300943
[4] f. g. keyes, the thermodynamic properties of water substance 0° to 150° c, part vi, the journal of chemical physics, 15(8) (1947), pp. 602-612.
[5] c. h. park, k. j. choi, y. s. son, shape memory alloy-based spring bundle actuator controlled by water temperature, ieee/asme transactions on mechatronics 24(4) (2019), pp. 1798-1807. doi: 10.1109/tmech.2019.2928881
[6] o. k. rediniotis, d. c. lagoudas, h. y. jun, r. d. allen, fuel-powered compact sma actuator, smart structures and materials 2001: industrial and commercial applications of smart structures technologies, proc. of the spie 4698 (2002), pp. 441-453. doi: 10.1117/12.475087
[7] z. zhao, x. sui, temperature compensation design and experiment for a giant magnetostrictive actuator, scientific reports 11(1) (2021), pp. 1-14. doi: 10.1038/s41598-020-80460-5
[8] q. he, z. wang, z. song, s. cai, bioinspired design of vascular artificial muscle, advanced materials technologies 4(1) (2019), art. no. 1800244. doi: 10.1002/admt.201800244
[9] l. wu, m. jung de andrade, r. s. rome, c. haines, m. d. lima, r. h. baughman, y. tadesse, nylon-muscle-actuated robotic finger, proceedings volume 9431, active and passive smart structures and integrated systems 2015, spie. doi: 10.1117/12.2084902
[10] y. almubarak, m. punnoose, n. xiu maly, a. hamidi, y. tadesse, kryptojelly: a jellyfish robot with confined, adjustable pre-stress, and easily replaceable shape memory alloy niti actuators, smart materials structures 29(7) (2020), art. no. 075011. doi: 10.1088/1361-665x/ab859d
[11] a. hamidi, y. almubarak, y. mahendra rupawat, j. warren, y. tadesse, poly-saora robotic jellyfish: swimming underwater by twisted and coiled polymer actuators, smart materials structures 29(4) (2020), art. no. 045039. doi: 10.1088/1361-665x/ab7738
[12] d. astrain, j. vián, j. albizua, computational model for refrigerators based on peltier effect application, applied thermal engineering 25(17-18) (2005), pp. 3149-3162. doi: 10.1016/j.applthermaleng.2005.04.003
[13] z.
slanina, m. uhlik, v. sladecek, cooling device with peltier element for medical applications, ifac-papersonline 51(6) (2018), pp. 54-59. doi: 10.1016/j.ifacol.2018.07.129
[14] y. luo, t. takagi, s. maruyama, m. yamada, a shape memory alloy actuator using peltier modules and r-phase transition, journal of intelligent material systems structures, 11(7) (2000), pp. 503-511. doi: 10.1106/92yh-9yu9-hvw4-rvkt
[15] e. deng, y. tadesse, a soft 3d-printed robotic hand actuated by coiled sma, actuators 10(1) (2021), pp. 1-24. doi: 10.3390/act10010006
[16] a. huang, m. huang, z. shao, x. zhang, d. wu, c. cao, a practical marine wireless sensor network monitoring system based on lora and mqtt, ieee 2nd intern. conf. on electronics technology (icet), 10-13 may 2019, chengdu, china. doi: 10.1109/eltech.2019.8839464
[17] y. ding, t. yan, q. yao, x. dong, x. wang, a new type of temperature-based sensor for monitoring of bridge scour, measurement 78 (2016), pp. 245-252. doi: 10.1016/j.measurement.2015.10.009
[18] k. gawas, s. khanolkar, e. pereira, m. rego, m. naaz, e. braz, development of a low cost remotely operated vehicle for monitoring underwater marine environment, global oceans 2020: singapore-us gulf coast, 05-30 october 2020, biloxi, ms, usa. doi: 10.1109/ieeeconf38699.2020.9389277
[19] t. mori, y. tanaka, m. mito, k. yoshikawa, d. katane, h. torishima, y. shimizu, y. hara, proposal of bioinstrumentation using flex sensor for amputated upper limb, 36th annual international conference of the ieee engineering in medicine and biology society, chicago, il, usa, 26-30 august 2014. doi: 10.1109/embc.2014.6943816
[20] m. i. rusydi, m. i. opera, a. rusydi, m. sasaki, combination of flex sensor and electromyography for hybrid control robot, telkomnika telecommunication computing electronics control, 16(5) (2018), pp. 2275-2286. doi: 10.12928/telkomnika.v16i5.7028
[21] j. jabin, md. ehtesham adnan, s. s. mahmud, a. m. chowdhury, m. r. islam, low cost 3d printed prosthetic for congenital amputation using flex sensor, 2019 5th international conference on advances in electrical engineering (icaee), 26-28 september 2019, dhaka, bangladesh. doi: 10.1109/icaee48663.2019.8975415
[22] s. harish, s. poonguzhali, design and development of hand gesture recognition system for speech impaired people, 2015 ieee international conference on industrial instrumentation and control (icic), 28-30 may 2015, pune, india. doi: 10.1109/iic.2015.7150917
[23] y. r. garda, w. caesarendra, t. tjahjowidodo, a. turnip, s. wahyudati, l. nurhasanah, d. sutopo, flex sensor based biofeedback monitoring for post-stroke fingers myopathy patients, journal of physics: conference series, 2018, iop publishing. doi: 10.1088/1742-6596/1007/1/012069
[24] m. borghetti, p. bellitti, n. f. lopomo, m. serpelloni, e. sardini, validation of a modular and wearable system for tracking fingers movements, acta imeko 9(4) (2020), pp. 157-164. doi: 10.21014/acta_imeko.v9i4.752
[25] t. luong, s. seo, j. jeon, c. park, m. doh, y. ha, j. c. koo, h. r. choi, h. moon, soft artificial muscle with proprioceptive feedback: design, modeling and control, ieee robotics automation letters 7(2) (2022), pp. 4797-4804. doi: 10.1109/lra.2022.3152326
[26] a. hamidi, y. almubarak, y. tadesse, multidirectional 3d-printed functionally graded modular joint actuated by tcpfl muscles for soft robots, bio-design manufacturing, 2(4) (2019), pp. 256-268.
doi: 10.1007/s42242-019-00055-6

introductory notes for the acta imeko third issue 2021 general track

acta imeko, issn: 2221-870x, september 2021, volume 10, number 3, 5-6

francesco lamonaca1

1 department of computer science, modeling, electronics and systems engineering (dimes), university of calabria, ponte p. bucci, 87036, arcavacata di rende, italy

section: editorial

citation: francesco lamonaca, introductory notes for the acta imeko third issue 2021 general track, acta imeko, vol. 10, no. 3, article 3, september 2021, identifier: imeko-acta-10 (2021)-03-03

received september 6, 2021; in final form september 9, 2021; published september 2021

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: francesco lamonaca, e-mail: f.lamonaca@dimes.unical.it

1. introductory notes for the acta imeko general track

the third issue 2021 of acta imeko includes a general track aimed at collecting contributions that do not relate to a specific event. as editor-in-chief, it is my pleasure to give readers an overview of these papers, with the aim of encouraging potential authors to consider sharing their research through acta imeko.

benedikt seeger and thomas bruns, in 'primary calibration of mechanical sensors with digital output for dynamic applications', tackle the challenge of the dynamic calibration of modern sensors with integrated data sampling and purely digital output for the measurement of mechanical quantities, such as acceleration, angular velocity, force, pressure or torque. the proposed method is an extension of the established methods and devices that yields primary calibration results. the proposal focuses on primary accelerometer calibrations but can easily be transferred to other mechanical quantities.

in 'analysis of the mathematical modelling of a static expansion system', carlos mauricio villamizar mora et al. analyse the suitability of different models to represent the final pressures in a static expansion system with two tanks.
it is concluded that the use of the ideal gas model is adequate in most simulated conditions, while the assumption that the residual pressure is zero before expansion presents problems under certain conditions. an uncertainty analysis of the process is carried out, which highlights the strong influence of the uncertainty of a first expansion on subsequent expansion processes. finally, an analysis of the expansion system based on uncertainty is carried out to estimate the effect of the metrological characteristics of the measurements of the input quantities. the proposed design process makes it possible to determine a set of restrictions on the uncertainties of the input quantities.

the transmission of medical data and the potential for distant healthcare structures to share experience about a given medical case raise several conceptual and technical questions. good remote healthcare monitoring involves more challenges with regard to personalised health data processing than the traditional methods currently used in hospitals throughout the world. the adoption of telemedicine in the healthcare sector has significantly changed medical collaboration. however, to provide good telemedicine services through new technologies such as cloud computing, cloud storage and so on, a suitable and adaptable framework should be designed. moreover, a secure and collaborative platform will enhance the decision-making process within the chain of medical information exchange, including between requesting agencies and physicians. in 'collaborative systems for telemedicine diagnosis accuracy', jacques tene et al. provide an in-depth literature review on interactions between telemedicine and cloud-based computing. the paper further proposes a framework that can allow various research organisations, healthcare sectors and government agencies to log data, develop collaborative analysis and support decision-making. case studies involving electrocardiograms and electroencephalograms demonstrate the benefit of the proposed approach in data reduction and high-fidelity signal processing at the local level, enabling the communication of extracted characteristic features to the cloud database.

in 'uncertainty of factor z in the gravimetric volume measurement', mar lar win presents an interesting problem in the gravimetric volume measurement method. in this measurement, z factors are generally used to convert the apparent mass to the liquid volume. the uncertainty assigned to the measurement of a liquid volume can be divided into two contributions: the components related to the mass measurements and the components related to the mass-to-volume conversion. however, the uncertainty due to the z factor is generally neglected in the uncertainty calculation of gravimetric volume measurement in some international organization for standardization standards and calibration guides. the paper describes the combined effects of the density of water, the density of reference weights and air buoyancy on the uncertainty of the z factor, and how this affects the uncertainty of the measurement result.

most sensing networks rely on punctual/local sensors, thus lacking the ability to spatially resolve the quantity to be monitored (e.g. a temperature or humidity profile) without relying on the deployment of numerous inline sensors.
most quasi-distributed or distributed sensing technologies rely on the use of optical fibre systems; however, these are generally expensive, which limits large-scale adoption. recently, elongated sensing elements have successfully been used with time-domain reflectometry (tdr) to implement diffused monitoring solutions. the advantage of tdr is that it is a relatively low-cost technology with adequate measurement accuracy and the potential to be customised to suit the specific needs of different application contexts of the 4.0 era. starting from these considerations, cataldo et al., in the paper 'microwave reflectometric systems and monitoring apparatus for diffused-sensing applications', address the design, implementation and experimental validation of a novel generation of elongated sensing element networks that can be permanently installed in the systems to be monitored and used to obtain a diffused profile of the quantity to be monitored. in particular, three applications are considered as case studies: irrigation process monitoring in agriculture, leak detection in underground pipes and building structure monitoring.

ding et al., in the paper 'the inhibition of biodegradation on building limestone by plasma etching', present an innovative technique that has recently been applied to the cleaning of soiled archaeological objects. the paper presents and discusses the use of low-pressure plasma etching in cleaning microbial contamination on an oolitic limestone from a unesco world heritage listed monument: the batalha monastery in central portugal. the cleaning effect was assessed by ftir, sem, optical microscopy, and cell viability index measurement. the experimental results demonstrate that plasma etching can be regarded as a fast and eco-friendly conservation tool for stone heritage architecture.

this issue confirms that acta imeko is the natural platform for disseminating measurement information and stimulating collaboration among researchers from many different fields who are united by their common interest in measurement science and technology.

francesco lamonaca
editor-in-chief

classification of brain tumours using artificial neural networks

acta imeko, issn: 2221-870x, march 2022, volume 11, number 1, 1-7

b. v. subba rao1, raja kondaveti2, r. v. v. s. v. prasad2, v. shanmukha rao3, k. b. s. sastry4, bh. dasaradharam5

1 department of information technology, pvp siddhartha institute of technology, vijayawada 520007, india
2 department of it, swarnandra college of engineering and technology, narasapuram, india
3 department of information technology, andhra loyola college of engineering and technology, vijayawada 520008, india
4 department of computer science, andhra loyola college of engineering and technology, vijayawada 520008, india
5 department of cse, nri institute of technology, agiripalli 521212, andhra pradesh, india

section: research paper

keywords: artificial neural networks; brain tumour; classification; magnetic resonance brain image; wavelet transform

citation: b. v. subba rao, raja kondaveti, r. v. v. s. v. prasad, v. shanmukha rao, k. b. s. sastry, bh. dasaradharam, classification of brain tumours using artificial neural networks, acta imeko, vol. 11, no.
1, article 35, march 2022, identifier: imeko-acta-11 (2022)-01-35
section editor: md zia ur rahman, koneru lakshmaiah education foundation, guntur, india
received december 28, 2021; in final form february 19, 2022; published march 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: b. v. subba rao, e-mail: bvsubbarao90@gmail.com

1. introduction
if a person has a brain tumour, the doctor may recommend a number of tests and procedures to identify whether the tumour is confined to the brain or has spread to other parts of the body. if a tumour is found in the brain, the doctor takes a biopsy, collecting a tissue sample and examining it. in certain situations, the person may suffer paralysis of the body; in this situation, before the biopsy test, magnetic resonance (mr) [1] brain images are taken to study whether the tumour is benign or malignant. two different types of tumour are mainly found in mr brain images: benign tumours and malignant tumours [2], [3]. the stages of our proposed work are magnetic resonance imaging (mri), feature extraction and classification. in the following subsections, we describe benign and malignant tumours.

abstract
magnetic resonance (mr) brain imaging is very important for medical analysis and diagnosis. these images are generally acquired in the radiology department to image the anatomy as well as the general physiological processes of the human body. in this process, magnetic resonance imaging uses a strong magnetic field and its gradients, along with radio waves, to produce pictures of human organs. mr brain images are also used to identify blood clots or damaged blood vessels in the brain. an artificial neural network is a nonlinear information processing model that has been used effectively for solving supervised pattern recognition tasks because of its ability to generalise to real-world problems. an artificial neural network (ann) is used here to classify a given mr brain image as containing a benign or a malignant tumour. benign tumours are generally not cancerous: they do not grow or spread in the human body (in very rare cases they may grow, very slowly), and once removed they do not generally come back. premalignant growths contain cells that are not yet cancerous but have the potential to become malignant. malignant tumours, on the other hand, are cancerous: their cells grow and easily spread to other parts of the human body. in our proposed framework, a wavelet transform is first used to extract the features from the image; the extracted features include tumour shape and intensity attributes as well as texture features. finally, an ann classifies the input feature set into benign or malignant tumour.
the main purpose and objective is to identify whether a tumour is benign or malignant.

1.1. benign tumour
a tumour is an abnormal growth of cells that serves no purpose. a benign tumour is not a cancerous tumour: it does not invade nearby tissue or spread to other parts of the body the way cancer can, and in general the outlook with benign tumours is excellent. however, benign tumours can be serious if they press on vital structures such as blood vessels or nerves; consequently, sometimes they require treatment and other times they do not. the exact cause of a benign tumour is frequently unknown. it develops when cells in the body divide and grow at an excessive rate. typically, the body is able to balance cell growth and division: when old or damaged cells die, they are automatically replaced with new, healthy cells. in the case of tumours, dead cells remain and form a growth known as a tumour. cancer cells grow in the same way; however, unlike the cells in benign tumours, malignant cells can invade nearby tissue and spread to other parts of the body.

1.2. malignant tumour
malignant tumours [4] are cancerous. they develop when cells grow uncontrollably; if the cells continue to grow and spread, the disease can become life-threatening. malignant tumours can grow quickly and spread to other parts of the body in a process called metastasis. the cancer cells that move to other parts of the body are the same as the original ones, but they can invade other organs: if lung cancer spreads to the liver, for instance, the cancer cells in the liver are still lung cancer cells. different types of malignant tumour start in different kinds of cell. the term malignant indicates that there is a moderate to high likelihood that the tumour will spread beyond the site where it initially developed; these cells can spread by travelling through the bloodstream or through lymph vessels. a malignant brain tumour is a cancerous growth in the brain. it is different from a benign brain tumour, which is not cancerous and tends to grow more slowly. malignant brain tumours contain cancer cells and often do not have clear borders; they are considered dangerous because they grow quickly and invade surrounding brain tissue. in the existing mechanism, mr brain images were taken and a biopsy test, known as the follicular dendritic cell tumour (fdct) pathology test, was conducted. the fdct procedure is performed after removal of noise, and the features are then extracted from the mr brain image.
after extracting the features from the image, a support vector machine (svm) classification algorithm is applied to classify the features and characteristics. however, the accuracy and speed of the svm are low, and the results may not be clear and accurate. recognising this problem, we propose a new classification approach based on artificial neural networks (ann). the ann is used to improve the accuracy of the classifier, and the classifier speed may be increased.

2. our proposed methodology and its discussion
the proposed ann extracts the features from the brain image and classifies the brain tumour. there are three stages in our proposed methodology to observe whether the tumour is benign or malignant: a) pre-processing, b) feature extraction, c) classification.

2.1. pre-processing
in this step the proposed framework uses the median filter. the median filter removes the noise from the mr brain image; noise here means undesirable data present in the image. the median filter is robust and has a strong edge-preserving capability: it reduces salt-and-pepper noise in the mr brain image and also reduces the blurring of an image by applying a smoothing technique. its main operation is to replace the current pixel value with the median of the brightness values of the nearby pixels, i.e. each neighbourhood value is replaced with the median estimate of the pixel. the median filter also eliminates impulse noise, so it is a suitable pre-processing method for our purposes (a minimal sketch of this filtering step is given at the end of this section).

2.2. feature extraction
after pre-processing is completed and the noiseless mr brain image has been generated by the median filter, features are extracted from that image. feature extraction is the process of identifying a set of features in an image; features are obtained from colour, shape and texture. good features are informative, distinctive, accurate, local, reliable, robust, efficient and of high quality, and all of these properties matter in the classification process. still, feature extraction remains a challenging task. many feature extraction techniques are available; in our proposed method we use the db4 (daubechies 4) wavelet transform to extract features such as the standard deviation and the minimum and maximum values of the wavelet coefficients.

2.3. classification
after extracting the features from the image by applying the db4 wavelet transform, the resulting feature values are used for classification. many classification algorithms/techniques are available; the existing system used the svm [5] classification technique, but its accuracy is not up to the mark and its processing is slow and time-consuming. to overcome these problems, we use an artificial neural network classifier for image classification. in this ann we use a back-propagation neural network, and the classification is done using a multilayer perceptron. after applying all these algorithms/techniques, the output is set as benign or malignant tumour for the mr brain image.
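as a minimal sketch of the pre-processing stage referenced in subsection 2.1, the snippet below applies a median filter to a grayscale mr image with scipy; the file name and the 3 × 3 window size are illustrative assumptions, not settings prescribed by the paper.

```python
import numpy as np
from scipy.ndimage import median_filter
from PIL import Image

# load a grayscale mr brain image (hypothetical file name)
img = np.asarray(Image.open('mr_brain.png').convert('L'), dtype=float)

# replace each pixel with the median of its 3x3 neighbourhood:
# this suppresses salt-and-pepper (impulse) noise while preserving edges
denoised = median_filter(img, size=3)

Image.fromarray(denoised.astype(np.uint8)).save('mr_brain_denoised.png')
```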
3. db4 wavelet transform
the daubechies wavelet transforms are defined in the same way as the haar wavelet transform, by computing running averages and differences via scalar products with scaling signals and wavelets; the only difference between them consists in how these scaling signals and wavelets are defined. for the daubechies wavelet [6] transforms, the scaling signals and wavelets have slightly longer supports, i.e. they produce averages and differences using just a few more values from the signal. the daubechies d4 transform has four wavelet and scaling function coefficients. the scaling function coefficients are

h0 = (1 + √3)/(4√2), h1 = (3 + √3)/(4√2), h2 = (3 − √3)/(4√2), h3 = (1 − √3)/(4√2). (1)

each step of the wavelet transform applies the scaling function to the data input: if the original data set has n values, the scaling function is applied in each wavelet transform step to calculate n/2 smoothed values, which in the ordered wavelet transform are stored in the lower half of the n-element input vector. the wavelet function coefficients are derived from the scaling coefficients as

g0 = h3; g1 = −h2; g2 = h1; g3 = −h0. (2)

the wavelet transform applies the wavelet function to the data input: if the original data set has n values, the wavelet function is applied to calculate n/2 differences. the scaling and wavelet values are calculated by taking the inner product of the coefficients with four data values:

daubechies d4 scaling function:
a[i] = h0 s[2i] + h1 s[2i + 1] + h2 s[2i + 2] + h3 s[2i + 3] (3)

daubechies d4 wavelet function:
c[i] = g0 s[2i] + g1 s[2i + 1] + g2 s[2i + 2] + g3 s[2i + 3] (4)

each iteration of the wavelet step calculates a scaling value and a wavelet function value.
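the following is a minimal sketch of one d4 decomposition step on a 1-d signal, following equations (1)-(4); periodic wrapping at the signal boundary is an assumption of the sketch (the paper applies the 2-d transform to images and then extracts the standard deviation, minimum and maximum of the resulting coefficients).

```python
import numpy as np

# daubechies d4 scaling coefficients (equation (1)) and wavelet coefficients (2)
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))
g = np.array([h[3], -h[2], h[1], -h[0]])

def d4_step(s):
    """one d4 decomposition step: n values -> n/2 smooth + n/2 detail values."""
    n = len(s)
    a = np.empty(n // 2)  # smoothed (scaling) values, equation (3)
    c = np.empty(n // 2)  # difference (wavelet) values, equation (4)
    for i in range(n // 2):
        window = s[np.arange(2 * i, 2 * i + 4) % n]  # periodic wrap at the end
        a[i] = np.dot(h, window)
        c[i] = np.dot(g, window)
    return a, c

smooth, detail = d4_step(np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]))
print(smooth, detail)
```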
4. artificial neural network
a neural network consists of formal neurons connected so that each neuron output serves as the input of other neurons, in the same way as the axon terminals of a biological neuron are connected via synaptic links with the dendrites of other neurons. the number of neurons and how they are interconnected determines the architecture of the neural network. artificial neural networks are one of the main tools used in artificial intelligence. as the 'neural' part of their name suggests, they are brain-inspired systems intended to replicate the way humans learn. neural networks consist of input and output layers, as well as hidden layers of units that transform the input into something the output layer can use. they are excellent tools for finding patterns that are far too complex or numerous for a human programmer to extract and teach the machine to recognise. the multi-layered neural network is the most widely applied neural network, and it has been used in many studies so far. a back-propagation algorithm can be used to train these multilayer feed-forward networks with differentiable transfer functions; it performs function approximation, pattern association and pattern classification. the term back-propagation refers to the process by which the derivatives of the network error, with respect to the network weights and biases, can be computed. the training of anns by back-propagation involves the following three phases: (i) the feed-forward of the input training pattern, (ii) the calculation and back-propagation of the associated error, and (iii) the adjustment of the weights. this cycle can be used with a number of different optimisation procedures. the artificial neural network [7]-[10] procedure takes the input data, processes it in the hidden layers and then produces the output. a neural network has at least three interconnected layers. the first layer consists of input neurons; those neurons send data on to the deeper layers, which in turn send the final output data to the last output layer. all the inner layers are hidden and are formed by units which adaptively transform the information received from layer to layer through a series of transformations. each layer acts both as an input and an output layer, which allows the ann to recognise more complex objects [11]-[15]. collectively, these inner layers are known as the neural layers. figure 1 depicts an artificial neural network.
figure 1. artificial neural network.
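to make this classification stage concrete, the sketch below trains a small back-propagation multilayer perceptron on wavelet-derived feature vectors (standard deviation, minimum, maximum of a sub-band); the feature values and labels are hypothetical, and scikit-learn's MLPClassifier stands in for the network used by the authors.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# hypothetical per-image feature vectors: [sd, min, max] of one wavelet sub-band
x = np.array([[11.5, -55.6, 210.2],
              [9.66, -96.2, 77.4],
              [14.8, -151.6, 131.1],
              [9.67, -118.3, 110.6],
              [11.1, -99.7, 94.7],
              [14.2, -141.7, 120.9]])
y = np.array(['m', 'm', 'm', 'b', 'b', 'b'])  # malignant / benign labels

# multilayer perceptron trained by back-propagation (one small hidden layer)
scaler = StandardScaler().fit(x)
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(scaler.transform(x), y)

new_features = np.array([[12.0, -140.0, 125.0]])  # features of an unseen image
print(clf.predict(scaler.transform(new_features)))  # -> 'm' or 'b'
```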
5. results
based on our proposed method and the discussion above, we took a series of benign and malignant tumour images as input mr images. a malignant tumour is a quickly developing cancer that spreads to other areas of the brain and spine. in general, brain tumours are graded from one to four according to their behaviour, for example how fast they grow and how likely they are to grow back after treatment. a malignant tumour is either grade three or four, whereas grade one or two tumours are usually classed as benign or non-cancerous. most malignant tumours are a secondary cancer, which means they started in another part of the body and spread to the brain; primary brain tumours are those that started in the brain. figure 2 shows the series of benign tumour images and figure 3 shows the series of malignant tumour images which are considered as input images to our method.
figure 2. series of benign tumours in brain mr images (input images).
figure 3. series of malignant tumours in brain mr images (input images).
after taking the input mr image, we applied our proposed pre-processing, feature extraction and classification techniques to find whether the resulting image contains a benign or a malignant tumour. initially we took the original grayscale image in the pre-processing step; the grayscale image is converted into a binary image, and the binary image is then cleaned by applying a smoothing technique. noise was reduced by applying the pre-processing method. subsequent to pre-processing, we used the db4 wavelet transform method to identify and extract the features in the given mr image; salt-and-pepper noise is also reduced by this technique. after identifying the features, the proposed ann classification technique was finally applied, observing the parameters standard deviation, maximum value and minimum value to classify the image as either benign or malignant. figure 4 shows the pre-processing working procedure and the feature extraction using the db4 wavelet transform.
figure 4. after applying db4 wavelet transform.
it shows how the original grayscale input image is converted into a binary image and then the cleaned binary image after applying the db4 wavelet transform. the histogram representation is shown in figure 5; it shows the highest and lowest pixel values of a binary image, together with the parameter values of standard deviation, maximum value and minimum value. table 1 reports the stratified non-validation summary, table 2 reports the detailed accuracy by class, and the confusion matrix is shown in table 3, which represents the classification of the given input images into benign or malignant tumour output by applying anns [16], [17]. the occurrence of brain tumours in india is steadily rising: more and more cases of brain tumours are reported every year in our country among people of varied age groups. brain tumours were ranked as the tenth most common type of tumour among indians; there are more than 32,000 cases of brain tumours reported in india every year, and more than 28,000 people reportedly die because of brain tumours annually. a brain tumour is a serious condition and can be deadly if not identified early and treated. in the results we have disclosed the category of tumour, either benign or malignant, using our methodology, so that we can predict the type of cancer in advance. table 4 shows the distinctive features obtained from the db4 wavelet transform, such as the standard deviation and the minimum and maximum values, for a series of mr images; based on these values the system easily classifies a benign or malignant tumour.

6. conclusion
our proposed method is used for identifying the tumour from the given mr brain images and classifying whether it is a benign (normal) tumour or a malignant (cancerous) tumour at the beginning stage. this framework/method plays a significant role in detecting brain cancer at a very early stage, which reduces the death rate. a future improvement of this work is to connect a database so that a huge number of images can be used in detecting the malignancy. the accuracy can be improved by using other artificial neural network architectures such as convolutional neural networks, support vector machines and others. this computer-aided classification system takes any mr brain scan image, analyses it, and gives the output 'cancer-infected brain' if the scan image contains a malignant tumour, or 'cancer-free brain' if the image contains a benign tumour. as a whole, we have succeeded in identifying the tumour in the given input mr images. we have successfully deployed our proposed methodology and are able to classify whether the tumour is benign or malignant using an artificial neural network. this methodology truly supports patients as well as doctors in identifying the tumour somewhat in advance, which might save lives.

figure 5. histogram representation.

table 1. stratified non-validation.
s. no  summary                           validation   % of validation
1      correctly classified instances    15           75
2      incorrectly classified instances  5            25
3      kappa statistic                   0.5
4      mean absolute error               0.3299
5      root mean square error            0.5034
6      relative absolute error           65.9703%
7      root relative squared error       100.6852%
8      total number of instances         20

table 2. detailed accuracy by class.
tp rate  fp rate  precision  recall  f-measure  mcc    roc area  prc area  class
0.800    0.300    0.727      0.800   0.762      0.503  0.700     0.735     m
0.700    0.200    0.728      0.700   0.737      0.503  0.700     0.724     b
0.750    0.250    0.753      0.750   0.749      0.503  0.700     0.729     ← weighted avg

table 3. confusion matrix.
a  b   ← classified as
8  2   a = m
3  7   b = b
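as a quick cross-check of the reported figures, the sketch below recomputes precision and recall for the malignant class from the confusion matrix in table 3; the values reproduce the 0.727 precision, 0.800 recall and 75% accuracy listed in tables 1 and 2.

```python
import numpy as np

# confusion matrix from table 3: rows = actual (m, b), columns = predicted (m, b)
cm = np.array([[8, 2],
               [3, 7]])

tp, fn = cm[0, 0], cm[0, 1]   # malignant cases correctly / wrongly classified
fp = cm[1, 0]                 # benign cases misclassified as malignant

precision_m = tp / (tp + fp)          # 8 / 11 = 0.727
recall_m = tp / (tp + fn)             # 8 / 10 = 0.800
accuracy = np.trace(cm) / cm.sum()    # 15 / 20 = 0.75

print(f"precision(m) = {precision_m:.3f}, recall(m) = {recall_m:.3f}, "
      f"accuracy = {accuracy:.2f}")
```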
table 4. distinctive features of a set of images.
sl. no    1       2       3       4       5        6       7       8       9
max 1h    210.2   77.36   131.1   206.8   137.1    110.6   93.98   120.9   129.4
min 1h    -55.6   -96.15  -151.6  -161.9  -141.78  -118.3  -120.2  -141.7  -105.9
sd1h      11.5    9.657   14.8    16.03   14.11    9.67    9.657   14.21   11.69
max 1v    169.9   76.33   115.4   218.5   131.9    79.18   94.74   132.6   114.5
min 1v    -239.4  -57.54  -115.6  -242.8  -126.9   -99.18  -99.68  -178    -110.7
sd1v      11.21   9.009   10.81   19.18   13.05    8.814   11.09   21.6    14.59
max 1d    112.8   24.22   54.88   110.9   52.09    53.31   44.47   68.54   36.22
min 1d    -0.597  -25.92  -48.83  -93.87  -56.78   -73.87  -42.41  -79.63  -47.09
sd1d      4.803   2.709   4.8148  7.208   5.539    4.262   4.145   6.334   4.186
max 2h    369.1   315.3   218.6   324.5   356.6    250.3   257.3   417.5   302.6
min 2h    -354.8  -240.6  -216.1  -465.8  -375.3   -446.5  -195.4  -284.2  -251.3
sd2h      38.02   37.87   39.66   47.23   51.1     30.99   29.67   43.83   41
max 2v    342.6   306.4   218.4   367.2   344      184.3   243.4   384.1   296.3
min 2v    -275.2  -244.6  -299.8  -390    -315.6   -187.9  -238.3  -313    -214.6
sd2v      46.6    35.39   35.95   53.32   47.48    30.15   33.09   63.09   54.28
max 2d    100.9   107.3   90.2    173.6   188.7    157.2   135.2   177.7   242.2
min 2d    -122.4  -126.5  -144.3  -190.6  -199.6   -117.8  -126.6  -167.7  -162.9
sd2d      16.55   14.9    19.01   25.31   23.67    17.62   17.75   27.46   33.58
max3      1066    1132    1107    1118    1114     1117    1128    1014    1082
min3      -28.66  -122.7  -91.2   8.373   -121.3   -105.9  -114.6  -148.6  -112.3
sdb       255.3   245.2   282.6   217.7   203.4    171.3   212.2   248.3   254
e         97.83   97.86   96.73   95.92   93.16    96.23   98.63   94.18   96.33
image?    m       m       m       m       m        b       b       b       b

references
[1] evangelia i. zacharaki, sumei wang, sanjeev chawla, dong soo yoo, ronald wolf, elias r. melhem, christos davatzikos, classification of brain tumor type and grade using mri texture and shape in a machine learning scheme, magn reson med, 62(6) (2009), pp. 32-39. doi: 10.1002/mrm.22147
[2] federica vurchio, giorgia fiori, andrea scorza, salvatore andrea sciuto, comparative evaluation of three image analysis methods for angular displacement measurement in a mems microgripper prototype: a preliminary study, acta imeko, 10(2) (2021), pp. 119-125. doi: 10.21014/acta_imeko.v10i2.1047
[3] g. çınarer, b. g. emiroğlu, classification of brain tumors by machine learning algorithms, 3rd international symposium on multidisciplinary studies and innovative technologies (ismsit), ieee, ankara, turkey, 11-13 october 2019, pp. 1-4. doi: 10.1109/ismsit.2019.8932878
[4] heba mohsen, el-sayed a. el-dahshan, el-sayed m. el-horbaty, abdel-badeeh m. salem, classification using deep learning neural networks for brain tumors, future computing and informatics journal, 3(1) (2018), pp. 68-71. doi: 10.1016/j.fcij.2017.12.001
[5] isabel martinez espejo zaragoza, gabriella caroti, andrea piemonte, the use of image and laser scanner survey archives for cultural heritage 3d modelling and change analysis, acta imeko, 10(1) (2021), pp. 114-121. doi: 10.21014/acta_imeko.v10i1.847
[6] v. gavini, g. r. jothi lakshmi, m. z. u. rahman, an efficient machine learning methodology for liver computerized tomography image analysis, international journal of engineering trends and technology, 69(7) (2021), pp. 80-85. doi: 10.14445/22315381/ijett-v69i7p212
[7] n. j. krishna kumar, r. balakrishna, eeg feature extraction using daubechies wavelet and classification using neural network, international journal of pure and applied mathematics, 118(18) (2018), pp. 3209-3223. doi: 10.26438/ijcse/v7i2.792799
[8] o. i. abiodun, a. jantan, a. e. omolara, k. v. dada, a. m. umar, o. u. linus, h. arshad, a. a. kazaure, u. gana, m. u. kiru, comprehensive review of artificial neural network applications to pattern recognition, ieee access, 7 (2019), pp. 158820-158846. doi: 10.1109/access.2019.2945545
[9] a. lay-ekuakille, c. chiffi, a. celesti, m. z. u. rahman, s. p. singh, infrared monitoring of oxygenation process generated by robotic verticalization in bedridden people, ieee sensors journal, 21(13) (2021), pp. 14426-14433. doi: 10.1109/jsen.2021.3068670
[10] m. z. u. rahman, s. surekha, k. p. satamraju, s. s. mirza, a. lay-ekuakille, a collateral sensor data sharing framework for decentralized healthcare systems, ieee sensors journal, 21(24) (2021), pp. 27848-27857. doi: 10.1109/jsen.2021.3125529
[11] a. lay-ekuakille, m. a. ugwiri, c. liguori, s. p. singh, m. z. u. rahman, d. veneziano, medical image measurement and characterization: extracting mechanical and thermal stresses for surgery, metrology and measurement systems, 28(1) (2021), pp. 3-21. doi: 10.24425/mms.2021.135998
[12] a. tarannum, m. z. u. rahman, l. k. rao, t. srinivasulu, a. lay-ekuakille, an efficient multi-modal biometric sensing and authentication framework for distributed applications, ieee
sensors journal, 20(24) (2020), pp. 15014-15025. doi: 10.1109/jsen.2020.3012536
[13] a. tarannum, m. z. u. rahman, t. srinivasulu, an efficient multimode three phase biometric data security framework for cloud computing-based servers, international journal of engineering trends and technology, 68(9) (2020), pp. 10-17. doi: 10.14445/22315381/ijett-v68i9p203
[14] a. tarannum, m. z. u. rahman, t. srinivasulu, a real time multimedia cloud security method using three phase multi user modal for palm feature extraction, journal of advanced research in dynamical and control systems, 12(7) (2020), pp. 707-713. doi: 10.5373/jardcs/v12i7/20202053
[15] m. egmont-petersen, d. de ridder, h. handels, image processing with neural networks - a review, pattern recognition, 35(10) (2002), pp. 2279-2301.
doi: 10.1016/s0031-3203(01)00178-9
[16] h. t. siegelmann, eduardo d. sontag, analog computation via neural networks, theoretical computer science, 131(2) (1994), pp. 331-360. doi: 10.1016/0304-3975(94)90178-3
[17] cancer types on cancer.net. online [accessed 25 march 2022] https://www.cancer.net/cancer-types

acta imeko
august 2013, volume 2, number 1, 5 - 6
www.imeko.org

introduction to the acta imeko issue devoted to selected papers presented in the 14th joint international imeko tc1 + tc7 + tc13 symposium
gerhard linß
ilmenau university of technology, department of quality assurance and industrial image processing, faculty of mechanical engineering, gustav-kirchhoff-platz 2, 98693 ilmenau, germany
keywords: imeko tc1 + tc7 + tc13 symposium, intelligent quality measurements, jena, germany
citation: gerhard linß, "introduction to the acta imeko issue devoted to selected papers presented in the 14th joint international imeko tc1 + tc7 + tc13 symposium", acta imeko, vol. 2, no. 1, article 4, august 2013, identifier: imeko-acta-02(2013)-01-04
editors: paolo carbone, university of perugia, italy; gerhard linß, ilmenau university of technology, germany
copyright: © 2013 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: gerhard linß, e-mail: gerhard.linss@tu-ilmenau.de

1. introduction
the 14th joint international imeko tc1 + tc7 + tc13 symposium, which took place in jena, germany, had the title 'intelligent quality measurements - theory, education and training'. it was intended to reflect innovative solutions for intelligent quality measurements in both theory and application. international researchers from 12 countries presented their exciting work in the fundamentals of measurement science, mathematical models in measurement, new education and training methods, and applications for intelligent quality measurements, for measurements in medicine and measurements in biology. the symposium aimed to bring researchers and developers from various fields together to share their new thoughts, findings and applications. the response from the academic community was great, with more than 70 submissions received. the authors have contributed towards new knowledge and understanding, and have provided research results and applications that will be of important value to researchers, students and industry alike. the involved competence network spectronet green vision connected specialists for visual quality control with digital image processing and spectral imaging in research and industry, nutrition and health, transportation, environment, security and administration (www.spectronet.de).
additionally, the 56th international scientific colloquium, which was held at the ilmenau university of technology from 12th to 16th september 2011, has an unbroken tradition of more than 50 years and is the "flagship" event of the university, having an excellent reputation. in 2011 the international scientific colloquium was again organised by the faculty of mechanical engineering; the title of the conference was "innovation in mechanical engineering - shaping the future" (www.iwk.tu-ilmenau.de). we are grateful to all the contributors who presented their valuable work to the research community in jena 2011. the journal papers presented in this acta imeko issue were chosen by the imeko tc1 + tc7 + tc13 board from the papers presented at the 14th imeko tc1 + tc7 + tc13 symposium. after the imeko symposium in jena, more than 17 authors were recommended to provide an updated and extended version of their jena papers for publication in a special issue of the journal measurement or an issue of acta imeko in february 2012. 12 authors accepted this invitation, and papers began arriving in spring of 2012. in the next step, the received extended papers underwent a normal reviewing process: 14 reviewers were involved and helped to optimize the final manuscripts with constructive references and recommendations.

abstract
this editorial article is a brief introduction to the acta imeko issue devoted to selected papers presented in the 14th joint international imeko tc1 + tc7 + tc13 symposium "intelligent quality measurements - theory, education and training". this symposium took place in jena, germany, from august 31st to september 2nd 2011, in conjunction with the 56th iwk ilmenau university of technology and the 11th spectronet collaboration forum.

as a result, we had seven extended and positively ranked papers, which show a significant update taking into account progress since the symposium submission and the discussions at the symposium in jena. four of the positively ranked papers discuss the role of mathematical models in measurement, and so these papers are published in the measurement journal (2013). the other three positively ranked papers discuss new education and training methods and applications for intelligent quality measurements, and on this thematic basis these papers are published in this issue of acta imeko.

2. about tc1, tc7 and tc13
tc1 is concerned with all matters of education and training of professional scientists and engineers for measurement and instrumentation, including curricula, syllabuses and methods of teaching, as well as the nature and scope of measurement and instrumentation as an academic discipline. tc1 of imeko was established in 1967 (www.imeko.org/tc1). tc7, the committee established in 1973 under the name measurement theory and in 1993 redesignated as measurement science, is concerned with the development of measurement science (www.imeko.org/tc7). tc13 is concerned with the measurement of whole body, organ and cellular function, medical imaging and medical information systems (www.imeko.org/tc13).

3. the journal papers
the three selected journal papers discuss several applications for intelligent quality measurements in highly topical industrial fields, new education and training methods, and the combination of image processing with classical quality assurance methods.
the focus of paper [1] lies in how to combine image processing with classical quality assurance methods. two industrial applications are used to describe the problem and to demonstrate the importance of this combination. very often the technical realization of sensor systems and data processing is completely separated from quality inspection tasks, so special trainings as well as special parts of the lectures were developed and structured to close this known gap. paper [2] discusses two analysis activities in the construction material industry which could be solved by intelligent image processing algorithms, saving time and costs. one of the tasks was the optical identification of recycled aggregates of construction and demolition waste (cdw) as the basis of an innovative sorting method in the field of cdw processing; another task was the optical analysis of samples from mineral aggregates. paper [3] discusses new problems of inspection planning arising from the improvement in measurement technology. the paper describes essential demands, ideas and conceptual approaches to multistructured quality inspections, against the background that the development and control of ever more complex and extensive technical systems leads to increasing measurement-technology requirements.

4. conclusions
we are grateful to all the contributors who provided their extended papers for this issue of acta imeko and the issue of measurement. it was a great pleasure to act as guest editor for this issue of acta imeko. particularly i must thank the authors and the reviewers for their contributions, evaluations and recommendations, and especially paul regtien for his support and help in the copyediting and publishing process.

references
[1] m. rosenberger, m. schellhorn, g. linß, "new education strategy in quality measurement technique with image processing technologies - chances, applications and realisation", acta imeko, vol. 2 (2013), no. 1, pp. 56-60.
[2] k. anding, d. garten, e. linß, "application of intelligent image processing in the construction material industry", acta imeko, vol. 2 (2013), no. 1, pp. 61-73.
[3] k. weissensee, "new demands on inspection planning and quality testing for micro- and nanostructured components", acta imeko, vol. 2 (2013), no. 1, pp. 74-78.

acta imeko
issn: 2221-870x
june 2021, volume 10, number 2, 119 - 125

comparative evaluation of three image analysis methods for angular displacement measurement in a mems microgripper prototype: a preliminary study
federica vurchio1, giorgia fiori1, andrea scorza1, salvatore a. sciuto1
1 engineering department, roma tre university, rome, italy
section: research paper
keywords: microgripper; mems; microactuators; displacement measurements; characterization
citation: federica vurchio, giorgia fiori, andrea scorza, salvatore andrea sciuto, comparative evaluation of three image analysis methods for angular displacement measurement in a mems microgripper prototype: a preliminary study, acta imeko, vol. 10, no.
2, article 17, june 2021, identifier: imeko-acta-10 (2021)-02-17
section editor: ciro spataro, university of palermo, italy
received january 18, 2021; in final form april 22, 2021; published june 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: federica vurchio, e-mail: federica.vurchio@uniroma3.it

1. introduction
mems devices (micro-electro-mechanical systems) represent a category of sensors and actuators widely used in the most varied fields of technology, from automotive to micro assembly for photonics and rf applications, microphones, microfluidic devices, gyroscopes, chemical sensors for microfluidic systems, lab-on-chip systems and complex actuation systems [1]. one of the most promising fields of application is undoubtedly the biomedical one, including biology [2], [3] and microsurgery [4]-[6]. microgrippers are a particular class of mems devices, able to handle objects, including cells and molecules, that have micrometric dimensions. nowadays, there are few works concerning the characterization of devices such as microgrippers, even though the study of their metrological and performance characteristics would be of great help for the optimization of the prototypes and the improvement of their performance. in this study, a set of images has been acquired by means of a trinocular optical microscope and processed by means of three different methods implemented ad hoc in the matlab environment: the semi-automatic method (sam), the surf-based method (angular displacement measurement based on speeded up robust features, admsurf) and the fft-based method (angular displacement measurement based on fast fourier transform, admfft). a comparison among the abovementioned methods has been made to estimate the angular displacement of a mems microgripper prototype comb-drive for biomedical applications. the semi-automatic method (sam), already widely described by the authors in [7]-[9], is based on template matching, and it is able to evaluate both the gripper rotation and the angular displacement of a microgripper. its main limitations are the high computational costs and the operator dependence.

abstract
the functional characterization of mems devices is relevant today since it aims at verifying the behavior of these devices, as well as improving their design. in this regard, this study focused on the functional characterization of a mems microgripper prototype suitable for biomedical applications: the measurement of the angular displacement of the microgripper comb-drive is carried out by means of two novel automatic procedures based on image analysis, the surf-based (angular displacement measurement based on speeded up robust features, admsurf) and the fft-based (angular displacement measurement based on fast fourier transform, admfft) method, respectively. moreover, the measurement results are compared with a semi-automatic method (sam), to evaluate which of them is the most suitable for the functional characterization of the device. the curve fitting of the outcomes from sam and admsurf showed a quadratic trend in agreement with the analytical model. moreover, the admsurf measurements below 1° are affected by an uncertainty of about 0.08° for voltages less than 14 v, confirming its suitability for microgripper characterization.
it was also evaluated that the admfft is more suitable for the measurement of rotations greater than 1° (up to 30°), with a measurement uncertainty of 0.02°, at a 95% confidence level.

the above issues have been deepened in this work, starting from the previous study presented in [10]. in section 2, the materials and methods are described, with particular reference to the experimental setup and the measurement protocol used for the digital image acquisition. due to the limitations encountered in the sam previously proposed [7]-[9], in subsection 2.1 the authors propose a new version of the sam, in which novel tests have been implemented to quantify the uncertainty contribution introduced by the operator in the angular displacement measurement of a microgripper comb-drive prototype; in subsections 2.2 and 2.3 the authors describe two novel automatic methods and their application to the measurement of the comb-drive angular displacement: the admsurf, based on the surf algorithm [11], and the admfft, which is an application of the 2d fast fourier transform (fft) to digital images [12]-[16]. in section 3, the procedure for estimating the sources of uncertainty of the three measurement methods is described, and a comparison and evaluation of the outcomes obtained through the three abovementioned methods is carried out and discussed, to identify which of the three implemented methods is the most suitable for the characterization of the mems device. finally, in sections 4 and 5, the results of our study are illustrated and the conclusions presented.

2. materials and methods
in this section the main components of the experimental setup are described, together with a detailed overview of the three implemented methods; in particular, the surf-based and the fft-based methods are proposed as alternatives to the semi-automatic one for the measurement of the angular displacement of the comb-drive. the device under examination is a microgripper prototype (figure 1), which is part of a project concerning the metrological and performance characterization of a new class of mems devices for biomedical applications [17]-[21]. these devices mainly consist of capacitive electrostatic actuators (i.e., the comb-drives shown in figure 2) and particular hinges called conjugate surfaces flexural hinges (csfh) [22], which allow the mechanical movement of the tips located at the end of the device. the images have been acquired through a nb50ts trinocular light microscope equipped with a 6 mp camera. the device has been positioned on an instrumented stage with micrometric screws and powered through a hp e3631a power supply. the latter is electrically connected to the device by means of a coaxial cable and tungsten needles put in contact with the electrical connections of the device; the voltage has been brought to the electrical connections by means of two micropositioners that allow the tungsten needle movement along the three axes, x, y and z. a set of 30 images has been collected for each applied voltage with a 2 v step (i.e., 0 v, 2 v, 4 v, ... 24 v).

2.1. semi-automatic based method (sam)
the first method used in this study has been the semi-automatic one, which for clarity we will call sama, widely described in [7], [8] and used in [9].
as illustrated in [7]-[9], the method introduces a measurement uncertainty contribution which corresponds to 0.02°, at a 95% confidence level, evaluated by means of a monte carlo simulation. moreover, the software requires high computational costs, and the uncertainty analysis of the preliminary results obtained with the sama was previously carried out only partially [7]-[10], assuming the uncertainty component introduced by the operator's subjectivity; for this reason, in this study further tests were carried out to better evaluate the above contribution. the test on the sama consists, in its first part, of a selection by the operator of four points and of a region of interest (roi) on the image; to evaluate the dispersion in the selection of these points, in the new version of the semi-automatic method, called samb, ten different observers were asked to identify both the four points and the roi in an image of the comb-drive, a number of times equal to 30. in particular, for the four points the x and y coordinates on the image were considered, and for the roi the coordinates of the top left vertex (x and y) and its length and width (each of them expressed in pixels), as can be seen in figure 3.
figure 1. microgripper prototype.
figure 2. the comb-drive.
figure 3. four points (red cross) and roi (yellow square) selection on the comb-drive image.

2.2. speeded up robust features based method (surf)
an automatic method based on speeded up robust features (surf) has been implemented to measure the angular displacement of the comb-drive (admsurf), as already described in [23]. it is an interest point detector and descriptor, used in many applications including image registration, object recognition and 3d reconstruction [24]. the main advantage of this method is mainly the computational cost reduction; in fact, as illustrated in [11], a significant reduction in image processing time has been observed thanks to the complexity reduction of the descriptor, without altering the performance in terms of repeatability, noise robustness, detection errors, and geometric and photometric deformation. the in-house method consists of three main steps:
1) finding interest points on the image; in particular, a roi0v is selected on the first image img0, which corresponds to a 0 v power supply, and it is important that this area is chosen by the operator in an image region where the movement of the comb-drive is visible; the coordinates of the selected roi are saved and used to select the rois (i.e. roi2v, roi4v, roi6v, ..., roi24v) of all the subsequent images img2v, img4v, img6v, ..., img24v. after that, the algorithm finds the interest points on each selected roi.
2) building a descriptor for the representation of the interest points; in this case they are the red circles for the first image and the green crosses for all the others (figure 4).
3) matching the various descriptors found on the images; by using a geometric transform, the object position on the images can be obtained, and it is therefore possible to go back to its relative rotation for each applied voltage.
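a minimal sketch of this three-step matching pipeline is given below; since the original method was implemented in matlab with surf features, opencv's freely available orb detector is used here as a stand-in, and the image file names are hypothetical.

```python
import cv2
import numpy as np

# hypothetical file names: reference image at 0 v and an actuated image
img0 = cv2.imread('comb_0v.png', cv2.IMREAD_GRAYSCALE)
img1 = cv2.imread('comb_12v.png', cv2.IMREAD_GRAYSCALE)

# steps 1 and 2: detect interest points and build descriptors on both images
orb = cv2.ORB_create(nfeatures=2000)
kp0, des0 = orb.detectAndCompute(img0, None)
kp1, des1 = orb.detectAndCompute(img1, None)

# step 3: match descriptors and estimate a similarity transform between views
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des0, des1)
src = np.float32([kp0[m.queryIdx].pt for m in matches])
dst = np.float32([kp1[m.trainIdx].pt for m in matches])
m, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)

# the rotation angle follows from the 2x2 linear part of the transform
angle_deg = np.degrees(np.arctan2(m[1, 0], m[0, 0]))
print(f"estimated relative rotation: {angle_deg:.3f} deg")
```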
2.3. fast fourier transform based method (fft)
this method is based on the application of the fourier transform to digital images. as shown in figure 5, the comb-drives of the microgripper have a particular periodic pattern; moreover, if an image consists of an array of uniformly spaced parallel straight lines, the fourier transform of the combination will be the convolution of their respective fourier transforms. the result will be a string of impulses (see figure 6), with a separation equal to the reciprocal of the spacing of the lines and in a direction perpendicular to these parallel lines [12], [13]. this feature has been used to estimate the angular aperture of the comb-drive: in fact, for each angular opening, the corresponding pattern of the comb-drive takes a different direction, and consequently the position of the impulses also changes, assuming a different direction each time. therefore, for each angular opening and for each image, there will be a series of points in different directions; subsequently, a least squares approximation has been used to find the linear polynomial function that best approximates these points, from which the angular coefficient of the straight line is obtained and therefore the opening angle of the comb-drive. as previously noted in [10], the major limitation associated with this procedure is the inability of the admfft to measure angular displacements of less than a tenth of a degree, typical of mems devices for biomedical applications such as microgrippers. however, some microgrippers actuated by rotary comb-drives, such as those studied in this work, are powered with voltages much higher than 30 v [25], [26], and it was therefore considered relevant to define whether this method could be used for the characterization of other mems devices. in order to evaluate the limit of applicability of the admfft, we proceeded in this way: once an image presenting a pattern like the one shown in figure 5 was identified, it was rotated by the quantities reported in table 1, where set1 and set2 correspond to two sets of rotations, the first consisting of rotations of less than one degree, the second of rotations higher than one degree; in particular, the rotation values of the first set correspond to the measurements obtained from the images acquired during the experimental campaign, using the sam.
figure 4. interest points descriptor of the first image (red circles) and of other images (green crosses).
figure 5. comb-drive pattern.
figure 6. example of fourier transform applied to images properly filtered, constituted by a string of impulses.
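the sketch below illustrates the idea under simplifying assumptions: it synthesises a striped test pattern, rotates it by a hypothetical angle, and recovers the rotation from the direction of the impulse string in the 2-d fft magnitude; the image size, stripe period and threshold are arbitrary choices, not the paper's settings.

```python
import numpy as np
from scipy import ndimage

# synthetic comb-like pattern: uniformly spaced parallel stripes (period 16 px)
n = 512
x = np.arange(n)
pattern = np.tile((np.sin(2 * np.pi * x / 16) > 0).astype(float), (n, 1))

true_angle = 7.5  # deg, hypothetical comb-drive opening
rotated = ndimage.rotate(pattern, true_angle, reshape=False, order=1)

# 2-d fft magnitude: the stripe pattern produces a string of impulses
# perpendicular to the stripes; a hanning window limits border leakage
win = np.hanning(n)[:, None] * np.hanning(n)[None, :]
spec = np.abs(np.fft.fftshift(np.fft.fft2(rotated * win)))
spec[n//2 - 4:n//2 + 5, n//2 - 4:n//2 + 5] = 0.0  # suppress the dc peak

# keep the strongest impulses and fit a line through the origin (least squares)
rows, cols = np.nonzero(spec > 0.2 * spec.max())
u, v = cols - n // 2, rows - n // 2
slope = np.sum(u * v) / np.sum(u * u)

# the direction of the impulse line gives the pattern rotation
# (the sign depends on the image axis convention)
measured = abs(np.degrees(np.arctan(slope)))
print(f"true rotation {true_angle:.2f} deg, measured {measured:.2f} deg")
```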
considering the sam, in order to simulate the uncertainty of the operator’s point selection and therefore evaluate the algorithm uncertainty, a monte carlo simulation with 104 iterations has been performed. in table 2, the variables x, y and roi with their assigned distributions and their standard deviations have been reported, in order to estimate the uncertainty introduced by the method. this contribution has been evaluated for each angular displacement of the comb-drive (i.e. 𝛿𝛼0−2v , 𝛿𝛼0−4v, 𝛿𝛼0−6v , … , 𝛿𝛼0−24v), and combined with the type a uncertainty, following the equation (1). on the other hand, to evaluate the uncertainty introduced by the fft based method in the measurement of the angular displacement of the comb-drive, the image in figure 5 has been subjected to different rotations. the contribution of the systematic uncertainty, considered in this procedure, has been evaluated by building a particular 4k (3840 px × 2160 px) image (figure 7), rotated by the same quantities reported in table 1. this contribution is mainly due to the uncertainty with which the software implements the rotation of the image and therefore into the error that it makes in measuring the angle . considering that at angles up to 15° the sine is only about 1% different and the tangent about 2% different from the measurement of the angle in radiant [30], the following approximation can be used (2): tan 𝛼 ≅ 𝛼 = 𝑎 𝑏 , (2) where a and b are the measurements of the segment reported in figure 7; therefore, for angles less than 15°, the angle measurement relative uncertainty  has been evaluated by the following equation (3): 𝛿𝛼 𝛼 = √( 𝛿𝑎 𝑎 ) 2 + ( 𝛿𝑏 𝑏 ) 2 , (3) where a and b are the measurement uncertainty of segments a and b, respectively, which are considered ±1 px, and on the other hand, if  is greater than 15°, it can be determined as follows (4): 𝛼 = arctg ( 𝑎 𝑏 ), (4) and its uncertainty  can be evaluated by equation (5), as 𝛿𝛼 = 𝑑[arctg(𝑐)] 𝑑𝑐 ∙ 𝛿𝑐 (5) where c is the ratio between the segments a and b, while c, is the corresponding uncertainty. once this uncertainty contribution has been evaluated, for each angular displacement, it is then combined following the equation (1). once uncertainties have been evaluated, a comparison among the three different set of results will be made, following the procedure adopted in [8] and reported in [31]. in practice, the different methods are able to measure the angular displacement of the comb-drive without significant differences if the following condition is verified (6): |�̅�1 − �̅�2| ≤ (𝛿𝑇1 + 𝛿𝑇2) , (6) where �̅�1 and �̅�2 are the mean values of the measurement results, while 𝛿𝑇1 and 𝛿𝑇2 are the total uncertainty estimate. in particular, if the difference |�̅�1 − �̅�2| has the same order of magnitude, or even less than, the sum (𝛿𝑇1 + 𝛿𝑇2 ), then measurements can be considered consistent, within the interval of the experimental uncertainties. 4. results and discussion in this section the outcomes from sam, admsurf and admfft are reported and commented. the graphs in figure 8 and in figure 9, show the results related to the comb-drive angular displacement, expressed as mean value, corresponding to sam, admsurf and admfft respectively. table 4 shows the measurement results expressed as the mean table 1. rotation values. set1 in ° 0.007 0.032 0.070 0.120 0.194 0.277 0.379 0.497 0.631 0.777 0.939 1.118 set2 in ° 1 2 3 4 5 7 10 13 16 20 25 30 table 2. variables settings in mcs to estimate the uncertainty introduced by the operator's subjectivity. 
4. results and discussion
in this section the outcomes from sam, admsurf and admfft are reported and commented upon. the graphs in figure 8 and figure 9 show the results related to the comb-drive angular displacement, expressed as mean values, corresponding to sam, admsurf and admfft respectively. table 4 shows the measurement results expressed as the mean value and the corresponding measurement uncertainties at a 95% confidence level. in particular, the sam introduces a measurement uncertainty contribution which corresponds to 0.8°, at a 95% confidence level, retrieved from the 2.5th and 97.5th percentiles of the monte carlo distribution. the analysis of the data showed that both the sam and the admsurf follow a quadratic trend, which is in good agreement with the results obtained through the analysis of the analytical model [32]. as reported in [31], if the difference |ȳ1 − ȳ2| has the same order of magnitude as, or is even less than, the sum (δT1 + δT2), then the measurements can be considered consistent within the interval of the experimental uncertainties. from the data reported in table 4, the differences between the mean values are smaller than the sum of the corresponding total uncertainties; therefore the measurements can be considered compatible, confirming that the admsurf is suitable for the measurement of the angular displacement of the comb-drive of the mems microgripper under test. moreover, it is important to underline that the computational complexity has been considerably reduced by using the admsurf: to process 390 images, as in our case, the sam requires about 2 hours, whereas the admsurf requires only about 4-5 minutes. as regards the data obtained with the admfft (figure 9), it emerged that the results do not follow a trend that can be closely related to the angular displacement of the comb-drive. from a first analysis, it can be deduced that the admfft cannot be considered suitable for the measurement of mems grippers whose angular displacement is below 1°, as there is no possibility of appreciating displacements of around a tenth of a degree. anyway, since some microgripper prototypes built with rotary comb-drives are powered with voltages higher than 30 v and can be moved with angular displacements above 1°, the admfft has also been evaluated for an object that rigidly rotates around its axis by quantities greater than 1°. table 3 shows the measurement of rotation, mor1, calculated by applying the admfft to an image (see figure 5) rotated by a quantity equal to set1, and the measurement of rotation, mor2, calculated by applying the admfft to the same image rotated by a quantity equal to set2, together with the angle measurement uncertainty δα, estimated from (3) for α < 15° and from (5) for α > 15°. test results confirm that angular displacements up to 30° can be measured with an angle measurement uncertainty δα lower than 0.02°, as can be seen in table 3.
figure 8. relationship between angular displacement and applied voltage considering sam (green dot line) and surf method (red dot line).
figure 9. relationship between angular displacement and applied voltage considering fft-based method.
table 3. measurement of rotation, mor1, calculated applying the admfft to an image (see figure 5) rotated by quantities equal to set1, and measurement of rotation, mor2, calculated applying the admfft to the same image rotated by quantities equal to set2, together with the angle measurement uncertainty $\delta\alpha$.
set1 in ° | mor1 in ° | δα in ° | set2 in ° | mor2 in ° | δα in °
0.007 | 0 | 0.03 | 1 | 1.193 | 0.015
0.032 | 0 | 0.015 | 2 | 2.767 | 0.015
0.070 | 0 | 0.016 | 3 | 3.918 | 0.015
0.120 | 0 | 0.015 | 4 | 4.029 | 0.015
0.194 | 0 | 0.016 | 5 | 4.963 | 0.015
0.277 | 0 | 0.015 | 7 | 5.078 | 0.015
0.379 | 0 | 0.015 | 10 | 8.857 | 0.015
0.497 | 0 | 0.015 | 13 | 8.127 | 0.016
0.631 | 0.735 | 0.015 | 16 | 14.697 | 0.015
0.777 | -0.262 | 0.015 | 20 | 16.571 | 0.015
0.939 | 1.193 | 0.015 | 25 | 19.759 | 0.015
1.118 | 1.193 | 0.015 | 30 | 29.237 | 0.015

figure 10. measurement of rotations less than 1° (above) and of rotations between 1° and 30° (below), applying the fft-based method.

the different behavior of the admfft depending on the angular range can be deduced from the results of the two rotation sets (table 1) in figure 10: for rotations below 1°, the least-squares regression line shows r² = 0.54, while r² = 0.97 is obtained for angles between 1° and 30°. in conclusion, it is possible to confirm that the admfft is not suitable for the measurement of rotations below 1°, but for greater rotations (higher than 1°) it has an almost linear behavior.

5. conclusions

this preliminary study has the purpose of comparing the measurements performed by different methods for the angular displacement of a comb-drive of a mems gripper prototype for biomedical applications. in particular, three in-house methods have been implemented in the matlab environment: the sam, the admsurf and the admfft. considering the sam, the contribution of uncertainty related to the subjectivity of the operator has been estimated and found to be 0.8° at a 95 % confidence level, as previously indicated. from the experimental results, it has been found that the sam and the admsurf are suitable to measure the small angular displacement of the comb-drive of the microgripper, showing quadratic curves consistent with the results obtained with the analytical model. conversely, from the results retrieved by means of the admfft, no good correlation has been found between the small angular displacement and the applied voltage that describes the real behavior of the device, and the data are consistent neither with the data obtained through the analytical method nor with those of the two abovementioned methods. anyway, it has also been assessed that this method is suitable for measuring rotations from 1° to 30°: a good correlation is observed between the admfft outcomes and the rotations applied by the operator, with an uncertainty of about 0.02°. a comparison between the sam and the admsurf has been proposed: the measurements can be considered compatible, confirming that the admsurf is suitable for the measurement of the angular displacement of the comb-drive of the mems microgripper under test.
in particular, the admsurf measurements of the comb-drive angular displacement are affected by an uncertainty lower than 8 % for voltages below 14 v, which is also smaller than that of the sam. in conclusion, it can be confirmed that the admsurf is the most suitable method among the three proposed for the characterization of the angular displacement of mems devices such as microgrippers, both for the results obtained and for the significant reduction of the computational costs.

references
[1] s. bhansali, a. vasudev (eds.), mems for biomedical applications, elsevier, 2012, isbn 978-0-85709-627-2.
[2] d. panescu, mems in medicine and biology, ieee eng. med. biol. mag. 25 (2006), pp. 19-28. doi: 10.1109/memb.2006.1705742
[3] k. keekyoung, x. liu, y. zhang, y. sun, nanonewton force-controlled manipulation of biological cells using a monolithic mems microgripper with two-axis force feedback, j. micromech. microeng. 18 (2008). doi: 10.1088/0960-1317/18/5/055013
[4] f. vurchio, p. ursi, a. buzzin, a. veroli, a. scorza, m. verotti, s. a. sciuto, n. p. belfiore, grasping and releasing agarose micro beads in water drops, micromachines 10 (2019). doi: 10.3390/mi10070436
[5] a. gosline, n. vasilyev, e. butler, c. folk, a. cohen, r. chen, n. lang, p. del nido, p. dupont, percutaneous intracardiac beating-heart surgery using metal mems tissue approximation tools, int. j. rob. res. 31 (2012), pp. 1081-1093. doi: 10.1177/0278364912443718
[6] d. benfield, s. yue, e. lou, w. moussa, design and calibration of a six-axis mems sensor array for use in scoliosis correction surgery, j. micromech. microeng. 24 (2014). doi: 10.1088/0960-1317/24/8/085008
[7] f. orsini, f. vurchio, a. scorza, r. crescenzi, s. a. sciuto, an image analysis approach to microgrippers displacement measurement and testing, actuators 7 (2018). doi: 10.3390/act7040064
[8] f. vurchio, p. ursi, f. orsini, a. scorza, r. crescenzi, s. a. sciuto, n. p. belfiore, toward operations in a surgical scenario: characterization of a microgripper via light microscopy approach, appl. sci. 9 (2019). doi: 10.3390/app9091901
[9] f. vurchio, f. orsini, a. scorza, s. a. sciuto, functional characterization of mems microgripper prototype for biomedical application: preliminary results, proc. of the 2019 ieee international symposium on medical measurements and applications (memea), istanbul, turkey, 26-28 june 2019. doi: 10.1109/memea.2019.8802178
[10] f. vurchio, g. fiori, a. scorza, s. a. sciuto, a comparison among three different image analysis methods for the displacement measurement in a novel mems device, proc. of the 24th imeko tc4 international symposium & 22nd international workshop on adc modelling and dac modelling and testing, palermo, italy, 14-16 september 2020. online [accessed 18 january 2021]. https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-61.pdf
[11] h. bay, a. ess, t. tuytelaars, l. van gool, speeded up robust features (surf), computer vision and image understanding 110 (2008), pp. 346-359. doi: 10.1016/j.cviu.2007.09.014
[12] r. bracewell, fourier analysis and imaging, springer science+business media, llc, 2003.
[13] k. j. r. liu, pattern recognition and image processing, marcel dekker, inc.
[14] g. dougherty, digital image processing for medical applications, cambridge university press, 2009.
[15] r. c. gonzalez, digital image processing using matlab, pearson prentice-hall, 2004.
[16] w. burger, m. j. burge, principles of digital image processing, springer.
[17] a. bagolini, s. ronchin, p. bellutti, m. chistè, m. verotti, n. p. belfiore, fabrication of novel mems microgrippers by deep reactive ion etching with metal hard mask, journal of microelectromechanical systems 26 (2017), pp. 926-934. doi: 10.1109/jmems.2017.2696033
[18] c. potrich, l. lunelli, a. bagolini, p. bellutti, c. pederzolli, m. verotti, n. p. belfiore, innovative silicon microgrippers for biomedical applications: design, mechanical simulation and evaluation of protein fouling, actuators 7 (2018). doi: 10.3390/act7020012
[19] m. verotti, a. dochshanov, n. p. belfiore, a comprehensive survey on microgrippers design: mechanical structure, j. mech. des. 139 (2017). doi: 10.1115/1.4036351
[20] r. cecchi, m. verotti, r. capata, a. dochshanov, g. b. broggiato, r. crescenzi, m. balucani, s. natali, g. razzano, f. lucchese, a. bagolini, p. bellutti, e. sciubba, n. p. belfiore, development of micro-grippers for tissue and cell manipulation with direct morphological comparison, micromachines 6 (2015), pp. 1710-1728. doi: 10.3390/mi6111451
[21] p. di giamberardino, a. bagolini, p. bellutti, i. j. rudas, m. verotti, f. botta, n. p. belfiore, new mems tweezers for the viscoelastic characterization of soft materials at the microscale, micromachines 9 (2018). doi: 10.3390/mi9010015
[22] m. verotti, a. dochshanov, n. p. belfiore, compliance synthesis of csfh mems-based microgrippers, j. mech. des. 139 (2017). doi: 10.1115/1.4035053
[23] f. vurchio, f. orsini, a. scorza, f. fuiano, s. a. sciuto, a preliminary study on a novel automatic method for angular displacement measurements in microgripper for biomedical applications, proc. of the 2020 ieee international symposium on medical measurements and applications (memea), bari, italy, 1 june - 1 july 2020. doi: 10.1109/memea49120.2020.9137249
[24] m. schaeferling, g. kiefer, object recognition on a chip: a complete surf-based system on a single fpga, proc. of the 2011 international conference on reconfigurable computing and fpgas, cancun, mexico, 30 november - 2 december 2011. doi: 10.1109/reconfig.2011.65
[25] q. xu, design, fabrication, and testing of an mems microgripper with dual-axis force sensor, ieee sensors journal 15 (2015), pp. 6017-6026. doi: 10.1109/jsen.2015.2453013
[26] m. verotti, a. bagolini, p. bellutti, n. p. belfiore, design and validation of a single-soi-wafer 4-dof crawling microgripper, micromachines (basel) 10 (2019). doi: 10.3390/mi10060376
[27] iso/iec guide 98-3:2008.
[28] g. fiori, f. fuiano, a. scorza, j. galo, s. conforto, s. a. sciuto, lowest detectable signal in medical pw doppler quality control by means of a commercial flow phantom: a case study, proc. of the 24th imeko tc4 international symposium & 22nd international workshop on adc modelling and dac modelling and testing, palermo, italy, 14-16 september 2020. online [accessed 14 june 2021]. https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-63.pdf
[29] g. fiori, f. fuiano, f. vurchio, a. scorza, m. schmid, s. conforto, s. a. sciuto, a preliminary study on a novel method for depth of penetration measurement in ultrasound quality assessment, proc. of the 24th imeko tc4 international symposium & 22nd international workshop on adc modelling and dac modelling and testing, palermo, italy, 14-16 september 2020. online [accessed 14 june 2021]. https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-62.pdf
[30] c. h. holbrow, j. n. lloyd, j. c. amato, e. galvez, m. e. parks, modern introductory physics, second edition, springer, 2010.
[31] j. r. taylor, an introduction to error analysis. uncertainty study in physical measurements, zanichelli, bologna, italy, 1986.
[32] r. crescenzi, m. balucani, n. p. belfiore, operational characterization of csfh mems technology based hinges, j. micromech. microeng. 28 (2018). doi: 10.1088/1361-6439/aaaf31

a cost-efficient reversible logic gates implementation based on measurable quantum-dot cellular automata
acta imeko issn: 2221-870x june 2022, volume 11, number 2, 1-9
mary swarna latha gade1, rooban sudanthiraveeran1
1 koneru lakshmaiah education foundation, green fields, vaddeswaram, india
section: research paper
keywords: majority gate; quantum-dot cellular automata; reversible logic; reversible gates
citation: mary swarna latha gade, sudanthiraveeran rooban, a cost-efficient reversible logic gates implementation based on measurable quantum-dot cellular automata, acta imeko, vol. 11, no. 2, article 35, june 2022, identifier: imeko-acta-11 (2022)-02-35
section editor: md zia ur rahman, koneru lakshmaiah education foundation, guntur, india
received january 14, 2022; in final form april 23, 2022; published june 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: sudanthiraveeran rooban, e-mail: sroban123@gmail.com

abstract: in order to improve the density on a chip, the scaling of cmos-based devices continues to shrink in accordance with moore's law. this scaling affects the performance of cmos devices due to specific limitations, such as energy dissipation and component alignment. quantum-dot cellular automata (qca) have been adopted to overcome the inadequacies of cmos technology. data loss is a major risk in irreversible digital logic computing; as a result, the demand for nano-scale digital operations with reduced heat dissipation is expanding. reversible logic structures are a strong competitor for the creation of efficient digital systems, and the reversible logic gate is an important part of reversible circuit design. the qca design of basic reversible logic gates is discussed in this study. these gates are built using a new qca design with xor gates with two and three inputs. qcadesigner tests simulation performance by simulating the specified reversible logic gate layouts. the measurement and optimization of design techniques at all stages is required to reduce power and area and to enhance speed. the work describes experimental and analytic approaches for measuring design metrics of reversible logic gates using qca, such as ancilla input, garbage output, quantum cost, cell count, and area, while accounting for the effects of energy dissipation and circuit complexity. the parameters of reversible gates with modified structures are measured and then compared with the existing designs. the designed f2g, frg, fg, rug and uppg reversible logic gates using qca technology show an improvement of 42 %, 23 %, 50 %, 39 % and 68 % in terms of cell count and 31 %, 20 %, 33 %, 20 % and 72 % in terms of area with respect to the best existing designs. the findings illustrate that the proposed architectures outperform previous designs in terms of complexity, size, and clock latency.

1. introduction

the building of digital logic circuits using reversible logic makes zero energy dissipation possible. for irreversible calculations, landauer [1] established that each logical bit of data loss results in $k_\mathrm{B} T \ln 2$ joules of heat energy. lent et al. [2] demonstrated that zero-energy dissipation in a digital logic circuit is only possible if the circuit is made up of reversible digital logic gates. the energy dissipation of quantum-dot cellular automata (qca) circuits can be significantly lower than $k_\mathrm{B} T \ln 2$ because qca circuits are clocked systems that maintain data. this feature encourages the use of qca technology in the construction of reversible digital circuits. reversibility, on the other hand, prevents bit loss but does not identify bit faults in the circuit. such issues can be corrected by using fault-tolerant gates with reversible logic: it becomes simpler and easier to detect and correct defects when the system is made up entirely of fault-tolerant elements. parity is used to achieve fault tolerance in communications and other systems; parity-preserving circuitry would thus be the forthcoming architecture principle for the creation of reversible fault-tolerant devices in nanotechnology. because of its very small size and low power consumption, qca is recommended for use in nanotechnology research [3]. basic reversible logic gates make up reversible circuits. these gates produce a one-to-one mapping of input and output pairs, making the set of inputs equal to the set of outputs. in [4]-[6], a substantial contribution to the research on the building of basic reversible logic gates was made. these topologies, on the other hand, are less efficient and necessitate greater inverter and majority gate reductions. this motivates the development of reversible logic gates with fault-tolerant architectures using qca technology. the basic reversible gates are constructed using upgraded xor gate designs.
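as a quick sense of scale for the landauer bound mentioned in this introduction, the following python lines (assuming room temperature, t = 300 k, which is our choice and not stated in the paper) evaluate $k_\mathrm{B} T \ln 2$:

```python
import math

kB = 1.380649e-23          # boltzmann constant in J/K (exact, si 2019)
T = 300.0                  # assumed room temperature in K
print(f"{kB * T * math.log(2):.3e} J per erased bit")   # ~2.87e-21 J
```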
because of the rapid advancement of qca technology, a great deal of research has been done in the field of qca-based reversible logic measurement and technology, with the goal of improving performance metrics in terms of measurement accuracy and complexity, and thus improving the efficiency and precision of circuits. each reversible logic gate in a reversible logic system architecture measures a specific parameter separately; the system then combines all the individual results into a comprehensive set of measurement data using updated gate structures. many parameter values in the existing designs are not measured when the circuits are tested. in individual cases, it might be necessary to make measurements to accurately analyze the behaviour of the reversible logic gates.

1.1. reversible logic

a logic function is reversible if each input state yields a different output state. a reversible gate is one that realizes a reversible function: the mapping between inputs and outputs is bijective. as a result, the numbers of outputs and inputs must be equal in order to satisfy the reversibility principle. in reversible logic gates, feedback is not acceptable and fan-out is not permitted. circuits implemented with reversible logic gates are measured with parameters like primitive gate count, constant inputs, logic calculation, unwanted (garbage) outputs and quantum cost. based on their features, these gates have been grouped into three types:
• basic reversible logic gates: these gates are the necessary parts of a reversible circuit. several reversible gates have been invented; the not gate and the buffer are the most basic.
• conservative reversible logic gates: these gates have the same number of zeros and ones at the inputs and at the outputs; consider, for example, the mcf gates.
• parity-preserving reversible logic gates: these gates have the same parity (either even or odd) at their inputs and outputs; in mathematical terms, the xor of all the inputs and of all the outputs is equal to zero. conservative gates are also parity-preserving, while the contrary is not true.

1.2. qca

in [2], lent et al. introduced qca technology, which is utilized to construct all components of a nanoscale qca circuit. each qca cell is made up of four quantum dots, which are nanostructures capable of trapping electric charges. because these electrons repel each other due to the coulomb interaction, they are positioned on opposite corners of the square. the alignment of the electron pairs defines two possible polarization states, -1.00 and +1.00. the majority gate and the inverter gate are the two most basic qca gates. many logic operations, governed by the clocking operation, are commonly executed through the coulombic interactions of electrons in neighbouring cells.
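the two defining properties above (bijectivity and parity preservation) can be checked exhaustively on a gate's truth table; a minimal python sketch, using the fredkin gate of section 2 as an example (the gate definition follows its standard controlled-swap behaviour):

```python
from itertools import product

def fredkin(a, b, c):
    """frg controlled swap: p = a; q, r exchange b and c when a = 1."""
    return (a, c if a else b, b if a else c)

def is_reversible(gate, n_inputs):
    """bijective mapping: all 2**n input combinations give distinct outputs."""
    outputs = {gate(*bits) for bits in product((0, 1), repeat=n_inputs)}
    return len(outputs) == 2 ** n_inputs

def preserves_parity(gate, n_inputs):
    """parity of the inputs equals parity of the outputs for every input."""
    return all(sum(bits) % 2 == sum(gate(*bits)) % 2
               for bits in product((0, 1), repeat=n_inputs))

assert is_reversible(fredkin, 3) and preserves_parity(fredkin, 3)
```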
1.3. related work

several studies using qca to generate reversible logic gates have been conducted during the previous decade [6]-[11]. qca architectures using majority gates as the foundation device for numerous reversible gates were shown by researchers in [6]. because the output lines are not strongly polarised, these kinds of gates are not considered sufficient. for the fredkin, feynman, and toffoli gates, mohammadi et al. [7] built qca-based reversible circuits integrating both rotated and normal cells. the architectures shown depend on the majority gate's arrangement; however, an effective qca architecture necessitates a new majority circuit design process, which adds to the complexity. peres and feynman gates based on qca-realisable majority gates were discussed in [8]-[10]. their topologies are less optimal, necessitating the reduction of both majority and inverter gates. the authors of [11] proposed employing majority gates in qca to create an efficient reversible logic gate arrangement. however, these systems have the usual restrictions, such as a higher number of constant input signals, and the topologies presented by singh et al. [11] cannot be achieved without employing wire crossing. the feynman gate [12] is the well-known "2x2" reversible gate; it can be used to increase fan-out. the famous "3x3" reversible gates are the toffoli, peres, and fredkin gates [13], [14]. trailokya nath sasamal et al. [15] designed reversible gates like the fg, frg, toffoli, peres gate and f2g in qca with 26, 68, 59, 88 and 53 qca cells and 0.03 µm², 0.06 µm², 0.034 µm², 0.097 µm² and 0.058 µm² area, respectively.

the main contributions of the paper are as follows:
1. new optimized layouts for the existing reversible logic gates are introduced to reduce the complexity of the circuits.
2. the performance of the proposed gates is then analyzed and compared with the existing gates using conventional parameters.

2. novel qca designs for reversible gates

the reversible gates are the essential building elements of reversible digital logic. such gates produce a one-to-one mapping between the input and output sets, allowing for the same number of inputs as outputs. qca designs of the xor, f2g (double feynman gate), frg (fredkin gate), fg (feynman gate), rug (reversible universal gate), and uppg (universal parity preserving gate) reversible gates are provided in the following section. all of the suggested approaches adhere to the qca design rules: all input cells arrive in the same clock zone; long qca wires are subdivided into four clock zones, respecting the maximum and minimum numbers of cells in the same zone, to avoid increased signal propagation and switching delays; and none of the suggested approaches uses crossover options. the prototypes of the reversible gates have been synthesized and measured using the qcadesigner tool. an xor gate is the basic element of the most elementary reversible gates. it has been observed that the robustness of a reversible logic architecture design is influenced not only by the complexity but also by the latency of its xor gate.
two inverters and three majority gates are required in the classic xor architecture. the suggested reversible layouts use an integrated architecture with only 12 qca cells for both the 2-input and the 3-input xor gates, as can be seen in figure 1. the block diagram and conceptual design of the proposed f2g [16], [18] are illustrated in figure 2a. it has three inputs, namely a, b, c, and three outputs p, q, r. the f2g concept uses the present xor gate to produce a design that is optimal in terms of area. figure 2b shows the corresponding qca architecture of the gate. note that the first input a is carried to p, whereas the realization of the two output bits q and r necessitates the use of two 2-input xor gates. the designed f2g gate qca layout utilizes only two clock zones. figure 3a and figure 3b illustrate the block diagram and the schematic diagram of the proposed fredkin gate. in general terms, the frg [19]-[24] is called a universal gate in the sense that it may be "trained" to operate as many basic components in almost any reversible digital logic circuit. it has three inputs, namely a, b, c, and three outputs p, q, r. for its deployment, there are six majority gates. figure 3c shows the qcadesigner layout of the built fredkin gate; the suggested modified frg gate requires only 75 qca cells. the output signal p is connected to the input signal a, while the output signals q and r are implemented using majority gates. four clock zones are used in the designed frg gate qca layout. figure 4 shows the feynman gate's block diagram and schematic diagram. it uses two inputs, namely a, b, and two outputs p, q. because of its widespread use in quantum computing, the fg is sometimes referred to as the controlled-not or quantum xor gate [25]-[28]. the measurement of the designed fg gate is done with a coherence vector simulation engine. figure 4b depicts the corresponding qca architecture, which has a single 2-input xor gate. it is important to note that output p is connected to input a, while output q necessitates the use of a simpler xor gate than the existing xor designs. only two clock zones are used in the developed fg gate qca layout [29]. a block diagram and conceptual design for the rug gate are shown in figure 5a. it has three inputs and three outputs. in figure 5b, the corresponding qcadesigner layout of the rug gate is demonstrated. there are four majority gates and one 2-input xor gate in this circuit. the output p is generated by a three-input majority gate, q is obtained from three majority gates, and r is obtained from a 2-input xor gate, as shown in the diagram. four clock zones are used in the designed rug gate qca layout. figure 6a and figure 6b illustrate the topology and the schematic design of the proposed qca universal parity preserving gate (uppg). it has four inputs and four outputs; two majority gates, two 2-input xor gates, and one 3-input xor gate make up the circuit. the measurement of the modified uppg gate shows a clear improvement over the existing designs. figure 6c depicts the suggested qcadesigner layout for the uppg gate; the proposed arrangement occupies an area of only 0.08 µm², and a total of two clock zones are used in the uppg gate qca layout.

figure 1. qca layout of the xor gate with 2 inputs and with 3 inputs.
figure 2. double feynman gate (f2g): a) block diagram and schematic diagram, b) qca layout.
figure 3. fredkin gate (frg): a) block diagram, b) schematic diagram, c) qca layout.
figure 4. feynman gate (fg): a) block diagram, b) schematic diagram, c) qca layout.
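the two-inverter/three-majority count mentioned above corresponds to a classic gate-level composition of xor from qca primitives; a small python check of one such construction is given below (this illustrates the logic identity only, not the proposed 12-cell integrated layout):

```python
from itertools import product

def maj(a, b, c):
    """three-input majority gate: ab + bc + ca."""
    return (a & b) | (b & c) | (c & a)

def xor_from_majority(a, b):
    # or(x, y) = maj(x, y, 1); and(x, y) = maj(x, y, 0)
    # xor(a, b) = and(or(a, b), or(not a, not b)): 3 majority gates + 2 inverters
    return maj(maj(a, b, 1), maj(1 - a, 1 - b, 1), 0)

assert all(xor_from_majority(a, b) == a ^ b
           for a, b in product((0, 1), repeat=2))
```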
3. results and discussion

the coherence (accuracy) vector simulation engine of qcadesigner has been used to analyze the qca layouts of the recommended designs with default parameters: the cell size is 18 nm × 18 nm; the dot diameter is 5 nm; the high and low clock levels are 9.8·10⁻²² j and 3.8·10⁻²² j; the clock shift is 0; the clock amplitude factor is 2; the relative permittivity is 12.9; the radius of effect is 85 nm; the convergence tolerance is 0.001; the maximum number of iterations per sample is 100 (the remaining engine settings, such as the layer separation distance, are left at their defaults). the f2g gate simulated waveform is shown in figure 7. this simulation waveform demonstrates that the circuit is functional: the correct output values are produced for all input data. its implementation requires 31 cells in an area of around 0.04 µm². figure 8 shows the simulation results of the fredkin gate. this gate is frequently employed as a multiplexer in a variety of electronic applications. it is made up of 75 qca cells within an area of 0.08 µm². the feynman gate was made using 13 qca cells with a total area of around 0.02 µm², as shown in figure 4, and the result of its simulation can be seen in figure 9. figure 10 depicts the simulation waveform of the rug; with a surface area of 0.08 µm², it utilizes 59 qca cells. the proper operation of the uppg gate is shown in figure 11. this gate is made with only 75 qca cells and a surface area of around 0.08 µm². a comparison of the characteristics of the recommended reversible gate configurations with the measurements of previous studies is shown in table 1 to table 5. table 1 describes the designed f2g reversible gate, which shows an improvement of 42 % and 31 % in terms of qca cell count and area with respect to the best existing design. when we compare the developed f2g to previously described designs, we find that the f2g designed with qca is the best in terms of the number of qca cells and the area it occupies, as shown in table 1. table 2 shows the planned frg reversible gate, which was built using 75 qca cells with a measured area of 0.08 µm² and a 0.5 clock cycle delay; the recommended architecture is the best of all existing designs. when compared to the optimal design, the suggested frg gate provides a 23 % and 20 % improvement in cell count and area, respectively. the fredkin gate presented has been found to be more acceptable for cascade design, and the output metrics are comparable to the same optimal technique. the fredkin gate has a delay of 0.5 clock cycles, which is less than that of conventional designs. table 3 presents the intended fg reversible gate, which was built using 13 qca cells with a 0.02 µm² area and a 0.5 clock cycle latency. the recommended fg circuit is compared to earlier circuits in table 3. when compared to the optimal design, the proposed fg gate improves cell count and area by 50 % and 33 %, respectively.

figure 5. reversible universal gate (rug): a) block diagram, b) schematic diagram, c) qca layout.
figure 6. universal parity preserving gate (uppg): a) block diagram, b) schematic diagram, c) qca layout.
the total number of qca cells employed, the total accessible area, the clock delay, and the crossover type all influence the comparison. table 3 demonstrates that the fg circuit designed in this article has fewer cells and a smaller system area than previous designs. table 4 and table 5 show the functional efficiency of the recommended structures when compared to typical rug and uppg designs; circuit parameters such as cell count, delay, and area are taken into account while evaluating performance. in comparison to the previous rug and uppg, the recommended rug and uppg designs have made significant improvements. table 4 shows that, as compared to the optimal design, the proposed rug gate improves cell count and area by 39 % and 20 %, respectively. table 5 shows that, when compared to the optimal design, the suggested uppg gate improves cell count and area by 68 % and 72 %, respectively. the comparison shows that the proposed structures for the existing reversible gates have a more compact architecture than the existing designs. the energy dissipation analysis of the proposed gate structures is shown in table 6. the results show that the proposed structures outperform previous reversible gate designs and are thus suitable for application in complex nano-scale architectures in qca.

figure 7. simulation result of f2g.
figure 8. simulation result of frg.
figure 9. simulation result of fg.
figure 10. simulation result of rug.
figure 11. simulation result of uppg.

table 1. comparison of the planned f2g configuration with presented designs.
double feynman gate | cell count | area (µm²) | delay (cycles) | wire crossing
f2g [6] | 93 | 0.19 | 0.75 | coplanar
f2g [14] | 51 | 0.06 | 0.5 | -
f2g [5] | 53 | 0.05 | 0.5 | -
proposed f2g | 31 | 0.04 | 0.5 | -

table 2. comparison of the proposed frg structure with existing designs.
fredkin gate | cell count | area (µm²) | delay (cycles) | wire crossing
frg [7] | 178 | 0.21 | 1 | coplanar
frg [14] | 100 | 0.092 | 0.75 | -
frg [6] | 97 | 0.10 | 0.75 | -
proposed frg | 75 | 0.08 | 0.5 | -

table 3. comparison of the proposed fg structure with existing designs.
feynman gate | cell count | area (µm²) | delay (cycles) | wire crossing
fg [7] | 78 | 0.09 | 1 | coplanar
fg [8] | 54 | 0.038 | 0.75 | multilayer
fg [6] | 53 | 0.07 | 0.75 | -
fg [9] | 37 | 0.023 | 0.75 | -
fg [11] | 32 | 0.03 | 0.75 | -
fg [14] | 34 | 0.036 | 0.5 | -
fg [15] | 26 | 0.03 | 0.5 | -
proposed fg | 13 | 0.02 | 0.5 | -

table 4. comparison of the proposed rug structure with existing designs.
rug gate | cell count | area (µm²) | delay (cycles) | wire crossing
rug [7] | 170 | 0.23 | 1 | coplanar
rug [16] | 187 | 0.22 | 1.75 | coplanar
rug [6] | 97 | 0.10 | 0.75 | -
proposed rug | 59 | 0.08 | 0.5 | -

table 5. comparison of the proposed uppg structure with existing designs.
uppg gate | cell count | area (µm²) | delay (cycles) | wire crossing
uppg [17] | 233 | 0.29 | 2.5 | coplanar
proposed uppg | 75 | 0.08 | 0.5 | -

table 6. energy dissipation of the proposed reversible logic gate structures.
gate | average energy dissipation per cycle (ev) | total energy dissipation (ev)
f2g | 2.66·10⁻³ | 2.92·10⁻²
frg | 3.20·10⁻³ | 3.52·10⁻²
fg | 1.69·10⁻³ | 1.86·10⁻²
rug | 3.55·10⁻³ | 3.90·10⁻²
uppg | 3.55·10⁻³ | 3.90·10⁻²
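the quoted percentage improvements follow directly from the comparison data as 100·(best existing − proposed)/best existing; a short python check is given below (reference values are taken from tables 1-5 and, for the f2g area, from the [15] figures quoted in the related-work paragraph):

```python
# (existing cells, proposed cells, existing area, proposed area) per gate
designs = {
    "f2g":  (53, 31, 0.058, 0.04),
    "frg":  (97, 75, 0.10, 0.08),
    "fg":   (26, 13, 0.03, 0.02),
    "rug":  (97, 59, 0.10, 0.08),
    "uppg": (233, 75, 0.29, 0.08),
}
for gate, (c0, c1, a0, a1) in designs.items():
    print(f"{gate}: cells -{100 * (c0 - c1) / c0:.0f} %, "
          f"area -{100 * (a0 - a1) / a0:.0f} %")
# prints 42/31, 23/20, 50/33, 39/20 and 68/72 %, matching the conclusions
```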
4. conclusion

in this study, reversible gates like the feynman gate (fg), the double feynman gate (f2g), the reversible universal gate (rug), the fredkin gate (frg), and the universal parity preserving gate (uppg) are designed using qca technology with optimum area. the designed layouts of the reversible gates have zero wire crossings. the simulation results have been verified using the qcadesigner software. the suggested f2g, frg, fg, rug and uppg qca layouts are designed with only 31, 75, 13, 59 and 75 qca cells and 0.04 µm², 0.08 µm², 0.02 µm², 0.08 µm², and 0.08 µm² area, respectively. then, we measured and compared the robustness of the recommended reversible gates against the existing gates using standard metrics. the simulation results show that the suggested reversible gates perform better in terms of cell count and area by 42 %, 23 %, 50 %, 39 %, and 68 % and by 31 %, 20 %, 33 %, 20 %, and 72 %, respectively. the suggested architectures outperform existing reversible gate designs, indicating that they are better suited for usage in qca in complicated nanoscale systems.

references
[1] r. landauer, irreversibility and heat generation in the computing process, ibm journal of research and development 5 (1961) no. 3, pp. 183-191. doi: 10.1147/rd.53.0183
[2] c. s. lent, m. liu, y. lu, bennett clocking of quantum-dot cellular automata and the limits to binary logic scaling, nanotechnology 17 (2006) no. 1, pmid: 21727566. doi: 10.1088/0957-4484/17/16/040
[3] m. balali, a. rezai, h. balali, f. rabiei, s. emadi, towards coplanar quantum-dot cellular automata adders based on efficient three-input xor gate, results phys. 7 (2017), pp. 1389-1395. doi: 10.1016/j.rinp.2017.04.005
[4] a. peres, reversible logic and quantum computers, phys. rev. a gen. phys. 32 (1985) no. 6, pp. 3266-3276. doi: 10.1103/physreva.32.3266
[5] b. parhami, fault-tolerant reversible circuits, proc. of the fortieth asilomar conference on signals, systems and computers, acssc'06, pacific grove, ca, usa, 29 october - 1 november 2006, pp. 1726-1729. doi: 10.1109/acssc.2006.355056
[6] m. abdullah-al-shafi, m. s. islam, a. n. bahar, a review on reversible logic gates and its qca implementation, int. j. comput. appl. 128 (2015) no. 2, pp. 27-34. doi: 10.5120/ijca2015906434
[7] z. mohammadi, m. mohammadi, implementing a one-bit reversible full adder using quantum-dot cellular automata, quantum inf. process. 13 (2014), pp. 2127-2147.
[8] j. c. das, d. de, reversible binary to grey and grey to binary code converter using qca, iete j. res. 61 (2015), pp. 223-229. doi: 10.1080/03772063.2015.1018845
[9] k. sabanci, s. balci, development of an expression for the output voltage ripple of the dc-dc boost converter circuits by using particle swarm optimization algorithm, measurement 158 (2020), pp. 1-9. doi: 10.1016/j.measurement.2020.107694
[10] j. c. das, d. de, novel low power reversible binary incrementer design using quantum-dot cellular automata, microprocess. microsyst. 42 (2016), pp. 10-23. doi: 10.1016/j.micpro.2015.12.004
[11] g. singh, r. k. sarin, b. raj, design and analysis of area efficient qca based reversible logic gates, microprocess. microsyst. 52 (2017), pp. 59-68. doi: 10.1016/j.micpro.2017.05.017
[12] a. m. chabi, a. roohi, h. khademolhosseini, s. sheikhfaal, towards ultra-efficient qca reversible circuits, microprocess. microsyst. 49 (2017), pp. 127-138. doi: 10.1016/j.micpro.2016.09.015
[13] k. walus, t. dysart, g. jullien, r.
budiman, qcadesigner: a rapid design and simulation tool for quantum-dot cellular automata, ieee trans. nanotechnol. 3 (2004), pp. 26-29. doi: 10.1109/tnano.2003.820815
[14] a. n. bahar, s. waheed, m. a. habib, a novel presentation of reversible logic gate in quantum-dot cellular automata (qca), international conference on electrical engineering and information & communication technology, dhaka, bangladesh, 10-12 april 2014, pp. 1-6. doi: 10.1109/iceeict.2014.6919121
[15] t. nath sasamal, a. kumar singh, a. mohan, quantum-dot cellular automata based digital logic circuits: a design perspective, studies in computational intelligence, vol. 879, 2020. doi: 10.1007/978-981-15-1823-2
[16] t. nath sasamal, a. mohan, a. kumar singh, efficient design of reversible logic alu using coplanar quantum-dot cellular automata, journal of circuits, systems and computers 27 (2018) no. 02. doi: 10.1142/s0218126618500214
[17] n. kumar misra, s. wairya, v. kumar singh, approach to design a high performance fault-tolerant reversible alu, int. j. circuits and architecture design 2 (2016) no. 1. doi: 10.1504/ijcad.2016.075913
[18] e. sardini, m. serpelloni, wireless measurement electronics for passive temperature sensor, ieee trans. instrum. meas. 61 (2012) no. 9, pp. 2354-2361. doi: 10.1109/tim.2012.2199189
[19] r. sargazi, a. akbari, p. werle, h. borsi, a novel wideband partial discharge measuring circuit under fast repetitive impulses of static converters, measurement 178 (2021), pp. 1-7. doi: 10.1016/j.measurement.2021.109353
[20] t. n. sasamal, a. k. singh, u. ghanekar, toward efficient design of reversible logic gates in quantum-dot cellular automata with power dissipation analysis, int. j. theor. phys. 57 (2018) no. 4, pp. 1167-1185. doi: 10.1007/s10773-017-3647-5
[21] m. abasi, a. saffarian, m. joorabian, s. ghodratollah seifossadat, fault classification and fault area detection in gupfc-compensated double-circuit transmission lines based on the analysis of active and reactive powers measured by pmus, measurement 169 (2021), pp. 1-34. doi: 10.1016/j.measurement.2020.108499
[22] m. s. latha gade, s.
rooban, an efficient design of fault tolerant reversible multiplexer using qca technology, proc. of the 2020 3rd international conference on intelligent sustainable systems (iciss), thoothukudi, india, 3-5 december 2020, pp. 1274-1280. doi: 10.1109/iciss49785.2020.9315867
[23] a. lay-ekuakille, a. massaro, s. p. singh, i. jablonski, m. z. u. rahman, f. spano, optoelectronic and nanosensors detection systems: a review, ieee sensors journal 21 (2021) no. 11, art. no. 9340342, pp. 12645-12653. doi: 10.1109/jsen.2021.3055750
[24] p. bilski, analysis of the ensemble of regression algorithms for the analog circuit parametric identification, measurement 160 (2021), pp. 1-9. doi: 10.1016/j.measurement.2020.107829
[25] m. m. abutaleb, robust and efficient qca cell-based nanostructures of elementary reversible logic gates, j. supercomput. 74 (2018) no. 11, pp. 6258-6274. doi: 10.1007/s11227-018-2550-z
[26] s. surekha, u. m. z. rahman, n. gupta, a low complex spectrum sensing technique for medical telemetry system, journal of scientific and industrial research 80 (2021) no. 5, pp. 449-456. online: http://nopr.niscair.res.in/handle/123456789/57696
[27] m. s. latha gade, s. rooban, run time fault tolerant mechanism for transient and hardware faults in alu for highly reliable embedded processor, proc. of the international conference on smart technologies in computing, electrical and electronics (icstcee), bengaluru, india, 9-10 october 2020, pp. 44-49. doi: 10.1109/icstcee49637.2020.9277288
[28] f. salimzadeh, s. r. heikalabad, design of a novel reversible structure for full adder/subtractor in quantum-dot cellular automata, phys. b, condens. matter 556 (2019), pp. 163-169. doi: 10.1016/j.physb.2018.12.028

automated calibration and dcc generation system with storage in private permissioned blockchain network
acta imeko issn: 2221-870x march 2023, volume 12, number 1, 1-7
cristian zet1, gabriel dumitriu2, cristian fosalau1, gabriel constantin sarbu3
1 gheorghe asachi
technical university of iasi, bd. d. mangeron 67, 700050 iasi, romania
2 individual enterprise gcd, str. hlincea nr. 27, 700714 iasi, romania
section: research paper
keywords: blockchain technology; digital calibration certificate (dcc); virtual instrument; automated test system; data acquisition; uncertainty
citation: cristian zet, gabriel dumitriu, cristian fosalau, gabriel constantin sarbu, automated calibration and dcc generation system with storage in private permissioned blockchain network, acta imeko, vol. 12, no. 1, article 16, march 2023, identifier: imeko-acta-12 (2023)-01-16
section editor: daniel hutzschenreuter, ptb, germany
received november 20, 2022; in final form february 14, 2023; published march 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was supported from project poc 351/390027/08.09.2021 european structural funds.
corresponding author: cristian zet, e-mail: czet@tuiasi.ro

abstract: digital technologies have proved their usefulness in instrumentation and measurement for many years, becoming a "must have". the use of microprocessors and microcontrollers in measuring instruments has become common practice, bringing the advantages of signal and information processing at the instrument level, digital interfaces, remote control, software update and calibration. thus, an instrument can be calibrated and verified while connected to an automated calibration system, which can carry out the whole process without operator interference and generate the calibration certificate at the end. the paper presents the possibility of joining the automated calibration system and the creation of a digital calibration certificate (dcc) with the blockchain technology for storage and validation. the benefits are the traceability of the dcc, the impossibility of altering the information and the preservation of the full history in the digital wallet.

1. introduction

calibration is defined by the vim (jcgm 200:2012) as the "operation that, under specified conditions, in a first step, establishes a relation between the quantity values with measurement uncertainties provided by measurement standards and corresponding indications with associated measurement uncertainties". it is usually accomplished after the instrument leaves the production line or at regular time intervals during the instrument's lifetime. calibration is often considered by companies to be the process of measuring the offset and gain errors [1], [2] and trimming the analog circuitry in order to correct these errors, being confused with the adjustment process. during factory adjustment, the correction values are stored in the nonvolatile memory of the instrument and are used during operation to directly correct the indication. later, the errors change in time due to aging and with temperature, making these values invalid after a time period and requiring a new adjustment. depending on the needs, the instrument must be checked at least once a year, but it can be even more often (every 6 or 3 months) if necessary. modern instruments are programmable via built-in interfaces like usb or gpib [3]. a software control loop can control several instruments in order to obtain an automated test system. if one or more of the instruments have no interface, the process can be only partially automated using video processing [4], on a single range or function, while switching ranges/functions is performed manually. the commercially available remote control software from the producer is in most cases not suitable for metrological purposes [1], and hence custom software must be developed for each instrument. in [3], an automated calibration system for electrical sources and measuring instruments like digital multimeters (dmms) has been described. it is intended for several well-known instruments like calibrators (fluke 5720a or wavetek 9100) or dmms (fluke 8506, fluke 8846, hp 3458). the software is built in labview and allows controlling the calibration system, eliminates human errors and performs statistical processing. in [2], an automated measuring station for the determination of calibration intervals for dmms is presented.
the authors use a standard device and a verified instrument, both equipped with communication interfaces. the test is performed every 2 months over an interval of up to 1 year. in [5], the authors present an automated calibration system for dmms without a communication interface. they use video processing to "read" the numbers shown on 7-segment displays. the process is partially automated, the operator's task being to change the ranges and functions. in [6] it is shown that calibrations can be performed on site based on travelling standards, without requiring the presence of specialized personnel from the metrology lab. such a situation needs an internet connection to automatically and securely send the calibration data to a server. there are several approaches described above, but for each of them the software is a custom design, fully or partially automated, depending on the instruments. in any case, an automated system can be endowed with the feature of automatically creating the dcc. as long as the data is recorded in the system in digital format, the software can automatically create the calibration certificate according to the issuer's regulations. in most cases the data can be saved in local files with various formats, human readable or machine readable. there are several formats and several approaches in the literature for creating a dcc [7]. there is various information to be recorded in the certificate and there are various users of it. the certificate must respect norms and regulations and maintain traceability. an analysis of digital formats reveals several advantages for blockchain-based dccs. in [8] the authors describe the possibility of generating the dcc from excel, taking benefit of the built-in programming environment. in [9], the authors present several variants of saving the dcc in pdf format for various fields like vna, electrical energy, or acoustics. the pdf file may have the digital calibration certificate attached. in [10], the authors describe how to make a dcc in xml format with 4 layers: administrative, results, individual information and an optional attachment (pdf), considering the case of sensor networks. the digital signature is also considered. for using dccs, an infrastructure is needed [11] that must be distributed [12]. security measures must be provided that prevent the information from being altered.
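the 4-layer xml structure described in [10] can be sketched programmatically; the following python fragment (the element names are illustrative and do not reproduce the actual dcc schema) builds a minimal machine-readable certificate skeleton with an administrative layer and a results layer:

```python
import xml.etree.ElementTree as ET

dcc = ET.Element("dcc")
admin = ET.SubElement(dcc, "administrativeData")          # layer 1
ET.SubElement(admin, "certificateNo").text = "2023-001"
ET.SubElement(admin, "instrument").text = "NI_USB-6008.SN1234"
results = ET.SubElement(dcc, "measurementResults")        # layer 2
point = ET.SubElement(results, "point", range="10 V", voltage="5 V")
ET.SubElement(point, "mean").text = "4.9987"
ET.SubElement(point, "expandedUncertainty", k="2").text = "0.0021"
# layers 3 and 4 (individual information, optional pdf attachment) omitted
xml_string = ET.tostring(dcc, encoding="unicode")
print(xml_string)
```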
an internal report from nist makes a detailed analysis of blockchain with respect to the metrology area, emphasizing the benefits and the weaknesses [13]. a blockchain system is not exactly immutable, requiring governance that must be trusted. the data recorded must correspond to the real world. transactions that are not yet included in a block are vulnerable to attacks and to malicious users. some applications of the blockchain technology have been emphasized in [14]: a decentralized audit trail (the instrument communicates that the calibration parameters are no longer proper), parameter updates with agreement from the user, public keys for manufacturers and nmis, or a billing system. in conclusion, according to this short analysis, calibration is one of the activities requiring traceability, allowing the connection with the primary standards; hence the need for dccs and the advantages of using the blockchain technology in metrology. the present paper brings as novelty the direct generation of the dcc during an automated calibration, together with saving it as a blockchain transaction. a short description of the blockchain network used is presented in section 2, followed by the description of the hardware and software application performing the automated calibration, dcc generation and its embedding in the blockchain in section 3, ending up with some results in section 4 and conclusions in section 5.

2. blockchain network architecture and specific transaction blocks

as per ibm's definition, blockchain is a shared, immutable ledger that facilitates the process of recording transactions and tracking assets in a business network. all these transactions are stored in blocks of data, using complex cryptographic functions. each block contains a cryptographic hash of the previous block, a timestamp, and the transaction data (generally represented as a merkle tree), as in figure 1. once a transaction is executed, all the data remains in the blockchain permanently; it cannot be altered, modified, copied or deleted, it can only be distributed. in order to migrate the digital calibration certificate to the blockchain, we built a private permissioned blockchain infrastructure with 3 private full nodes, an api and a web platform acting as a smart asset management tool, as in figure 2. a full node can be seen as a server in a decentralized network. it keeps the consensus with the other nodes and verifies the transactions. it also stores a copy of the blockchain, thus being able to securely enable custom functions such as instant send and private transactions. our blockchain is based on the x15 algorithm, which uses 15 hashing algorithms carried out consecutively one after another, with a hybrid between pow (proof of work) and pos (proof of stake) consensus mechanism [15]. thanks to its energy efficiency, it can even run on a raspberry pi 2. pos is a type of consensus mechanism used in blockchain networks for validating transactions and creating new blocks. when using pos, instead of miners competing to solve complex mathematical problems to create new blocks (as in pow-based networks), validators are chosen to create new blocks based on the amount of cryptocurrency they hold and are willing to "stake" (i.e. lock up) as collateral. when a validator wants to create a new block, they must first stake a certain amount of cryptocurrency, which acts as a form of collateral.

figure 1. blockchain block simplified.
figure 2. local blockchain node simplified.
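the hash-linked chain of figure 1 can be illustrated in a few lines of python (a deliberately simplified sketch: a real chain hashes a block header containing a merkle root of the transactions, and the network described here uses the x15 algorithm rather than a single sha-256 pass):

```python
import hashlib, json, time

def block_hash(block):
    """hash of the serialized block contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

genesis = {"prev_hash": "0" * 64, "timestamp": time.time(), "transactions": []}
block1 = {"prev_hash": block_hash(genesis),
          "timestamp": time.time(),
          "transactions": ["dcc metadata transaction"]}

# tamper evidence: changing anything in genesis changes its hash and
# breaks the prev_hash link stored in block1
assert block1["prev_hash"] == block_hash(genesis)
```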
the more cryptocurrency a validator stakes, the higher their chances of being chosen to create a new block. once a block is created, the validator receives a reward for their work, proportional to the amount they staked. if a validator is found to be acting maliciously or attempting to cheat the system, their stake can be taken away as a penalty and their work is excluded from that specific block. we also built the associated windows and linux wallets, from which we can natively monitor all the transactions. the infrastructure has been built to run on p2p port 10218 and to accept commands only on rpc port 20208. this specific blockchain feature also adds an extra layer of security. some apis were built using the open-source jsonrpcclient.php, a simple php class that implements a json rpc client over raw tcp. the asset management tool works in parallel with the labview dashboard and allows the user to create a blockchain identity for the measurement instrument. basically, it uses a specific function, "getnewaddress" from jsonrpcclient.php, in order to remotely connect to the main wallet and store the instrument's details (such as name and serial number) as metadata in a single transaction. the blockchain address attached to the instrument is converted to a qr code for better portability. this process is basically the creation of the smart asset on our infrastructure. then, we use the newly created blockchain identity to initiate transactions. each transaction generates a unique transaction id (figure 3). we use the "sendtoaddress" function and store the metadata in the "comment" argument, part of the function. the "comment" argument is kept in the main public wallet; it is not distributed over the network. this means that the only way to see the metadata is by typing the correct transaction id, which is a unique identifier. so, even though all transactions are public, the content of the dcc can be verified only with a valid transaction id. this type of architecture restricts any fraudulent transactions. the traceability aspect of the whole system is given by the fact that we can trace back in time every transaction and identify each measurement. this can be done directly from the wallet (as a native feature) or by accessing our smart asset management tool. using the "listtransactions" function, we developed a method which allows us to check the previous calibrations/measurements of the same instrument. this is achieved natively, from the wallet, and also from our web asset management tool. this functionality allows the system to keep a full history of calibrations/verifications of the same instrument, also providing a secure method of storing and sharing data, protecting it from unauthorized access or modifications. this specific aspect also helps to reduce the cost of creating, maintaining, and distributing physical certificates. the labview side of the project uses httpclient.lvlib to connect to the blockchain api. it also collects the measured values and sends them to a specific address as metadata. the data is formatted as .xml before embedding it in the blockchain, which offers an alternative for future verification. we designed the system in such a way that, once the labview application runs, it automatically correlates the serial number of the data acquisition card with its associated blockchain wallet address.
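the wallet calls named above ("getnewaddress", "sendtoaddress" with its "comment" argument, "listtransactions") follow the common bitcoin-style json-rpc convention; a hedged python sketch of the same workflow is shown below (the paper's api uses a php raw-tcp client; the http transport, the credentials and the transaction amount are all our assumptions):

```python
import requests

RPC_URL = "http://127.0.0.1:20208/"     # rpc port from the text
AUTH = ("rpcuser", "rpcpassword")       # placeholder credentials

def rpc(method, *params):
    payload = {"jsonrpc": "1.0", "id": "dcc", "method": method,
               "params": list(params)}
    r = requests.post(RPC_URL, json=payload, auth=AUTH, timeout=10)
    r.raise_for_status()
    return r.json()["result"]

addr = rpc("getnewaddress", "NI_USB-6008.SN1234")           # smart-asset identity
txid = rpc("sendtoaddress", addr, 0.001, "<dcc>...</dcc>")  # dcc xml as 'comment'
history = rpc("listtransactions", "NI_USB-6008.SN1234")     # calibration history
```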
We achieved this by using the "getaccountaddress" native function, which returns the wallet address of a previously created account. All these actions take place on the wallet and are stored locally in the main wallet. This particular approach offers an extra degree of anonymity and a lower transaction cost in the peer-to-peer (P2P) blockchain network compared to real-world alternatives.

3. Description of the automated calibration system – hardware and software overview

To prove that it is possible to store the DCC as a transaction using blockchain technology, an automated calibration system has been created that performs both the measurement and the DCC generation, as shown in Figure 4. It consists of a standard source and the DUT, both connected to a host PC running dedicated software developed in LabVIEW. The DUT is a multifunction NI USB 6008 data acquisition card. It has 8 analog inputs with two types of input connection: differential (DIFF) and single-ended (RSE). The technical data of the DAQ card are listed in Table 1: it provides a resolution of 12 bits on the differential input and 11 bits on the single-ended input, with relatively high maximum permissible errors. The DAQ is connected to the host PC over USB. The programmable voltage source, a Keithley 6487 (a picoammeter with built-in voltage source), is connected to the PC through the GPIB interface and is driven using SCPI commands.

Figure 3. Blockchain transaction details.
Figure 4. The schematic of the calibration system.

Table 1. NI USB 6008 absolute accuracy specifications (25 °C) in mV.
Input type | ±20 V | ±10 V | ±5 V | ±4 V | ±2.5 V | ±2 V | ±1.25 V | ±1 V
RSE | – | 14.7 | – | – | – | – | – | –
DIFF | 14.7 | 7.73 | 4.28 | 3.59 | 2.56 | 2.21 | 1.70 | 1.53
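A minimal sketch of driving the standard source over GPIB with SCPI is shown below, assuming the pyvisa package; the GPIB address and the exact Keithley 6487 command strings used here are assumptions for illustration and should be checked against the instrument manual, as the paper does not list them.

```python
# Sketch of programming the voltage source test points over GPIB/SCPI.
import pyvisa

rm = pyvisa.ResourceManager()
source = rm.open_resource("GPIB0::22::INSTR")   # hypothetical GPIB address

print(source.query("*IDN?"))                    # instrument ID for the DCC header

def set_test_voltage(volts: float) -> None:
    """Programme one test point on the built-in voltage source."""
    source.write(f"SOUR:VOLT {volts:.4f}")      # assumed 6487 source command
    source.write("SOUR:VOLT:STAT ON")           # assumed output-enable command

# Sweep 20 evenly spaced points over the ±10 V scale, as in the text.
for k in range(20):
    set_test_voltage(-10.0 + k * (20.0 / 19))
    # ... acquire n = 1000 samples from the DAQ card here ...
source.write("SOUR:VOLT:STAT OFF")
```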
In order to demonstrate the feasibility of an automated verification and DCC generation system, a virtual instrument (VI) has been developed in LabVIEW. It is designed to allow each instrument to be given its own identity in the specified wallet (setting one up if it does not already exist), to run the verification process and, at the end, to save the resulting data in the blockchain network as transactions. The data is also saved in readable format in local .txt files. The process is not fully automated, as the operator must change the input connection when the RSE connection is tested. The front panel of the VI is presented in Figure 5. If the instrument to be calibrated is not yet registered as a smart asset, or was not verified before using the system, the operator can choose the tab "Smart asset creation". In LabVIEW, each data acquisition device gets a device number, which can be found in the Measurement & Automation Explorer. After inserting the "device no", the program queries the device, reads its serial number and board name, concatenates the two strings, calls the blockchain platform to create a wallet address for the device using the "getnewaddress" API, and returns the wallet address as shown in Figure 5, left (e.g. mmzo6bksvwjhmjz2vvgmgyaa8fexgwme4c). The new address also appears in the wallet application. The user has to copy it and paste it into the corresponding field in the calibration tab. Moving to the "Calibration" tab (Figure 5, right), the operator must fill in the fields of the certificate header, for example: the certificate number (certificate no), the certificate issuer name (institution), the operator name (operator), the customer name (customer) and the comments field (comments). The header can be customized with various fields following the laboratory's wishes and regulations. Next, the operator must set the connection type (input terminal configuration), the scale limits (limits) for each desired scale and the GPIB address of the standard source. The environmental conditions can be specified in an input field (T&H); if the room is environmentally controlled, or if the conditions can be measured using a remote environment-monitoring station, the values can be inserted into the field automatically. Before starting, the operator has to fill in the wallet address of the instrument to be verified (wallet address), if it already has one, or paste it after it has been created in the "Smart asset creation" tab. Once the DUT and the standard source are connected and all the control fields are filled in, the operator can start the verification process by pressing the "start" button. The first task is to read the instrument IDs and serial numbers directly from their firmware; these are added to the certificate. After the instruments are initialized with the working parameters, the process starts. Depending on the number of points per scale and on the number of scales, the process takes a while; for the present approach we set 20 points per scale and 8 scales. The sampling frequency is set to 1 kHz with $n = 1000$ samples to acquire. The acquired values are processed for each test point in order to calculate the arithmetic mean and the standard deviation:

$U_{\mathrm{mean}} = \frac{1}{n}\sum_{k=1}^{n} U_k , \qquad s(U_k) = \sqrt{\frac{1}{n}\sum_{k=1}^{n} \left(U_k - U_{\mathrm{mean}}\right)^2}$ ,   (1)

where $U_k$ are the measured values for a single test voltage, while the type A uncertainty is calculated as the experimental standard deviation of the mean value:

$u_{\mathrm{A}} = s(U_{\mathrm{mean}}) = s(U_k)/\sqrt{n}$ .   (2)

A type B evaluation of the measurement uncertainty may also be characterized by standard deviations, evaluated from probability density functions based on experience or other information, as reported in the GUM (Evaluation of measurement data, Guide to the expression of uncertainty in measurement, JCGM 100:2008). The combined uncertainty and the expanded uncertainty are calculated as

$u_{\mathrm{c}} = \sqrt{u_{\mathrm{A}}^2 + u_{\mathrm{B}}^2} , \qquad u_{\mathrm{e}} = k \cdot u_{\mathrm{c}}$ ,   (3)

where $k$ is the coverage factor ($k = 2$ for a coverage probability of 95 %). In the current application, only the type A uncertainty resulting from the experimental verification is considered. The flow of the virtual instrument is presented in Figure 6. After the initial setting of the various parameters, it runs three nested loops: the inner one takes the 1000 measurements for each test point and calculates the uncertainty, the middle one runs over the test points, and the outer one runs over the ranges. The data is written to the local file after each test point, while the transaction containing the DCC is sent after each range. The header of the calibration certificate has to be flattened into an XML string. Because some special characters are not allowed in this blockchain format, they have to be removed from the instrument identification string and replaced with underscores or dots.

Figure 5. The front panel of the virtual instrument: left, the "Create smart asset" tab; right, the calibration tab.
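A minimal sketch of the per-test-point processing in equations (1)-(3) is given below, assuming NumPy; the sample data is synthetic, and the type B contribution is an illustrative placeholder, since only the type A uncertainty is evaluated experimentally in the paper.

```python
# Sketch of the uncertainty evaluation of equations (1)-(3).
import numpy as np

def test_point_uncertainty(samples: np.ndarray, u_b: float = 0.0, k: float = 2.0):
    n = samples.size
    u_mean = samples.mean()                            # eq. (1), arithmetic mean
    s_uk = np.sqrt(np.mean((samples - u_mean) ** 2))   # eq. (1), std. deviation
    u_a = s_uk / np.sqrt(n)                            # eq. (2), type A uncertainty
    u_c = np.hypot(u_a, u_b)                           # eq. (3), combined
    u_e = k * u_c                                      # eq. (3), expanded (95 %)
    return u_mean, u_a, u_c, u_e

# Example: n = 1000 samples around a 5 V test point with 1 mV of noise.
rng = np.random.default_rng(0)
samples = 5.0 + 1e-3 * rng.standard_normal(1000)
print(test_point_uncertainty(samples))
```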
4. Results

In our experiment, 10 DAQ cards were tested on all measuring ranges of the differential input and on the single range of the RSE input. Their DCCs were saved in the blockchain network as transactions; each tested scale has its own DCC embedded in one transaction. The results are presented in Table 2 for the differential input on the base scale ±10 V, for the lowest scale ±1 V, and for the RSE input. The maximum combined uncertainty over all 10 tested DUTs is reported for each test point. As can be seen, the values stay below 6.35 mV, which is lower than the absolute accuracy value of 7.73 mV given by the producer for the base scale ±10 V. For the lowest scale, ±1 V, the combined uncertainty reaches at most 1.15 mV, slightly under the absolute accuracy limit of 1.53 mV given by the producer. This may be caused by the standard source, whose lowest output scale is ±10 V and whose accuracy according to the datasheet is 0.1 % of the output voltage + 1 mV; for a 1 V output this corresponds to an absolute accuracy of at most 2 mV. For the RSE input, the values are under the limit of 14.7 mV. The behaviour of the maximum and minimum type A uncertainties is depicted in Figure 7: the DIFF series represents the differential input on the base scale ±10 V, while the RSE series represents the RSE input. The minimum is below 40 µV for both, while the maximum type A uncertainty for RSE (160 µV) is double that of DIFF. The type A uncertainty is thus much lower than the type B uncertainty.

5. Conclusions

National metrology institutes (NMIs) are usually structured as a central unit and a number of territorial subsidiaries. Each can possess its own wallet in the private permissioned blockchain infrastructure, with each wallet associated to a node located at a subsidiary or at the central entity; together they make up the blockchain network (Figure 8). The nodes in the network synchronize with each other continuously, as long as the internet connection is available. If the connection is temporarily broken for some nodes, then after its restoration each such node sends the transactions realized in the meantime to the network, and each node validates each transaction. As a closing takeaway, here are five benefits of using blockchain in metrological applications:

Improved security and authenticity: blockchain technology ensures that, once a certificate is added to the blockchain, it cannot be altered or tampered with, providing a tamper-proof and secure method of recording and sharing calibration data.

Traceability: blockchain allows the calibration certificate to be tracked throughout its lifecycle, providing the transparency and traceability useful for regulatory compliance and quality control.

Reduced cost and improved efficiency: blockchain-based digital certificates can help reduce the cost of creating, maintaining and distributing physical certificates while increasing efficiency.

Better accessibility: digital certificates stored on the blockchain can be easily accessed and shared, eliminating the need for physical certificates and reducing the risk of loss or damage.

Figure 6. The virtual instrument flow diagram.

Table 2. Maximum combined uncertainty of the tested DAQ cards (25 °C) in mV.
Scale factor | -1 | -0.8 | -0.6 | -0.4 | -0.2 | 0 | 0.2 | 0.4 | 0.6 | 0.8 | 1
DIFF ±10 V | 6.35 | 5.19 | 4.04 | 2.89 | 1.73 | 0.58 | 1.73 | 2.89 | 4.04 | 5.19 | 6.35
DIFF ±1 V | 1.15 | 1.04 | 0.92 | 0.81 | 0.69 | 0.58 | 0.69 | 0.81 | 0.92 | 1.04 | 1.15
RSE ±10 V | 6.35 | 5.19 | 4.04 | 2.89 | 1.73 | 0.58 | 1.73 | 2.89 | 4.04 | 5.19 | 6.35

Figure 7. Type A uncertainty for the base scale ±10 V.
Figure 8. The metrology blockchain network.
Greater standardization: the use of blockchain technology in creating DCCs can also help promote standardization and consistency in metrology, as data stored on the blockchain can easily be shared and compared across different organizations.

When a new instrument is calibrated by a territorial entity, it must have its own identity in the entity wallet. Its identity can be created as a unique label in the entity wallet, corresponding to an own address in the wallet for the instrument (Figure 9). Each transaction is sent inside the same wallet, but the transactions are visible in the whole network. If the instrument has been calibrated before, it already has a label and, respectively, an address. The calibration software checks for the existence of the label corresponding to the instrument's name and serial number: it interrogates the instrument, gets its name and serial number and then asks the wallet for its address. If there is no match, an error is returned and the operator has to create the address; if a match is found, that address is used for the subsequent transactions. Each transaction contains a DCC for a single range. This choice was made in the present approach because the comment field that carries the DCC information is limited to a certain number of characters. The DCC is structured in XML format and stored in the comment of the transaction. For the verification of an instrument with multiple functions and scales there will be several transactions (Figure 10); in our case, there are 9 transactions for each instrument (9 available scales). In conclusion, blockchain technology shows promising possibilities for metrological applications. It can securely store the DCC, keep all the certificates issued for an instrument and assure traceability, as long as the standard has its own address, which may itself have a DCC in the network. The DCC can be generated without the intervention of the operator, with custom software, guaranteeing the validity of the results and minimizing the possibility of faking them. For future upgrades we plan a full audit of the calibration process, meaning that every local and national authorized authority can release a DCC for any instrument/standard device; this way one can trace back in time how a specific instrument was authorized by a local authority and follow the traceability chain. A second aspect of future upgrades is a secure login feature for the web asset management tool that manages enrolment in our private permissioned infrastructure and access to read/write data; this translates into a simple background check of the provided identification data and the allocation of a wallet address. We also believe that data portability and authenticity are crucial in delivering a DCC, hence we plan a third module that facilitates a digital signature of the certificate exported as a PDF file. This feature stores the private key of the digital-signature certificate on the blockchain, eliminating the need for hardware security modules (HSMs). The working module requires a small adjustment in order to comply with the European Commission eIDAS regulation, https://digital-strategy.ec.europa.eu/en/policies/eidas-regulation.

Figure 9. How the instruments get their own labels/addresses.
Figure 10. The transaction list containing multiple transactions for one address.

Acknowledgement
This work was supported by project POC 351/390027/08.09.2021, European Structural Funds.

References
[1] National Instruments Corp., Calibration procedure, 322314B-01, February 2000.
[2] C. De Capua, D. Grillo, E.
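The label/address lookup described above can be sketched as follows, reusing the rpc() helper from the earlier sketch; "getaddressesbyaccount" is a legacy Bitcoin-style wallet call used here as a stand-in for the paper's wallet query, and the error path is an assumption based on the behaviour described in the text.

```python
# Sketch of matching an instrument to its wallet address before calibration.
def address_for_instrument(board_name: str, serial: str) -> str:
    label = f"{board_name}_{serial}".replace(" ", "_")  # special chars replaced
    addresses = rpc("getaddressesbyaccount", [label])   # legacy wallet call
    if not addresses:
        # No match: per the text, the operator must create the smart asset.
        raise RuntimeError(f"no address for '{label}'; create the smart asset first")
    return addresses[0]
```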
Romeo, The implementation of an automatic measurement station for the determination of the calibration intervals for a DMM, IEEE Symposium on Virtual Environments, Human-Computer Interfaces and Measurement Systems, La Coruna, Spain, 10-12 July 2006, pp. 58-62. DOI: 10.1109/VECIMS.2006.250790
[3] H. M. Abdel Mageed, A. M. El-Rifaie, Automatic calibration system for electrical sourcing and measuring instruments, 12th International Conference on Environment and Electrical Engineering, Wroclaw, Poland, 5-8 May 2013, pp. 30-34. DOI: 10.1109/EEEIC.2013.6549578
[4] F. Martín-Rodríguez, E. Vázquez-Fernández, Á. Dacal-Nieto, A. Formella, V. Álvarez-Valado, H. González-Jorge, Digital instrumentation calibration using computer vision, ICIAR 2010, Part II, LNCS 6112, Springer-Verlag Berlin Heidelberg, 2010, pp. 335-344, ISBN 978-3-642-13774-7.
[5] G. Grzeczka, M. Klebba, Automated calibration system for digital multimeters not equipped with a communication interface, MDPI Sensors 2020, 20, 3650. DOI: 10.3390/s20133650
[6] A. Carullo, M. Parvis, A. Vallan, Security issues for internet based calibration activities, IEEE Instrumentation and Measurement Technology Conference, Anchorage, Alaska, USA, 21-23 May 2002, pp. 817-822. DOI: 10.1109/IMTC.2002.1006947
[7] M. S. Gadelrab, R. A. Abouhogail, Towards a new generation of digital calibration certificate: analysis and survey, Measurement 181 (2021) 109611. DOI: 10.1016/j.measurement.2021.109611
[8] D. Röske, A visual tool for generating digital calibration certificates (DCCs) in Excel, Measurement: Sensors, Volume 18, December 2021, 100175. DOI: 10.1016/j.measen.2021.100175
[9] G. Boschung, M. Wollensack, M. Zeier, C. Blaser, Chr. Hof, M. Stathis, P. Blattner, F. Stuker, N. Basic, F. Grasso Toro, PDF/A-3 solution for digital calibration certificates, Measurement: Sensors, Vol. 18, December 2021, 100282, 4 pp. DOI: 10.1016/j.measen.2021.100282
[10] T. Mustapää, P. Nikander, D. Hutzschenreuter, R. Viitala, Metrological challenges in collaborative sensing: applicability of digital calibration certificates, Sensors 2020, 20, 4730. DOI: 10.3390/s20174730
[11] C. Brown, T. Elo, K. Hovhannisyan, D. Hutzschenreuter, P. Kuosmanen, O. Maennel, T. Mustapaa, P. Nikander, T. Wiedenhoefer, Infrastructure for digital calibration certificates, 2020 IEEE International Workshop on Metrology for Industry 4.0 & IoT, 3-5 June 2020, Rome, Italy, pp. 485-489. DOI: 10.1109/METROIND4.0IOT48571.2020.9138220
[12] J. Schaerer, T. Braun, A distributed calibration certificate infrastructure, 4th Conference on Blockchain Research & Applications for Innovative Networks and Services (BRAINS), Paris, France, 27-30 September 2022, pp. 1-4. DOI: 10.1109/BRAINS55737.2022.9909437
[13] D. Yaga, P. Mell, N. Roby, K. Scarfone, Blockchain technology overview, NISTIR 8202, October 2018, Internal Report 8202. DOI: 10.6028/NIST.IR.8202
[14] D. Peters, J. Wetzlich, F. Thiel, J. P. Seifert, Blockchain applications for legal metrology, IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Houston, TX, USA, 14-17 May 2018, pp. 1-6.
DOI: 10.1109/I2MTC.2018.8409668
[15] M. Sadek Ferdous, M. Jabed Morshed Chowdhury, M. A. Hoque, A. Colman, Blockchain consensus algorithms: a survey, arXiv: 2001.07091v2 [cs.DC], 7 February 2020. Online [Accessed 25 February 2023] https://arxiv.org/pdf/2001.07091.pdf

A low-cost table-top robot platform for measurement science education in robotics and artificial intelligence

ACTA IMEKO, ISSN: 2221-870X, June 2023, Volume 12, Number 2, 1-5

Hubert Zangl1,2, Narendiran Anandan1, Ahmed Kafrana1
1 Institute of Smart Systems Technologies, Sensors and Actuators, University of Klagenfurt, Klagenfurt, Austria
2 Silicon Austria Labs, AAU SAL USE Lab, Klagenfurt, Austria

Section: Research paper
Keywords: education; robot perception; introductory lab experiments
Citation: Hubert Zangl, Narendiran Anandan, Ahmed Kafrana, A low-cost table-top robot platform for measurement science education in robotics and artificial intelligence, Acta IMEKO, vol. 12, no. 2, article 23, June 2023, identifier: IMEKO-ACTA-12 (2023)-02-23
Section editor: Eric Benoit, Université Savoie Mont Blanc, France
Received August 9, 2022; in final form May 2, 2023; published June 2023
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work has received funding from the "European Regional Development Fund" (EFRE) and "REACT-EU" (as reaction of the EU to the COVID-19 pandemic) by the "Kärntner Wirtschaftsförderungs Fonds" (KWF) within the project Pattern-Skin 3520/34263749706.
Corresponding author: Hubert Zangl, e-mail: hubert.zangl@aau.at

Abstract: Robotics and artificial intelligence represent highly interdisciplinary fields, which, in particular at the bachelor level, makes providing a strong fundamental background in the education challenging. With respect to lab exercises in measurement, one approach to provide interdisciplinary hands-on experience is to embed experiments for measurement science in a robotic context. We present a low-cost robot platform that can be used to address several measurement science and sensor topics, as well as other aspects such as machine learning, actuators and mechanics. The 3D printed chassis can be equipped with different sensors for environment perception and adapted to different embedded PC platforms. In order to also introduce concepts of robot simulation and realization approaches in a hands-on fashion, the table-top robot is also available as a digital twin in the simulation environments Gazebo and CoppeliaSim®, where, e.g., limitations of simulations and the adaptation of models required to consider non-ideal effects of sensors can be studied.

1. Introduction

Robots are entering more and more into our daily lives and are attractive to young people interested in science and technology. Consequently, education at the bachelor and master level must adapt to the needs of this interdisciplinary subject. The importance of the link between different domains is also emphasized in a survey [1] stating that 57 % of the faculty believe that students in electrical engineering have deficiencies in kinematics and dynamics. Consequently, experiments for measurement science education that address kinematics and dynamics can help to overcome such issues and to cope with the rise of interdisciplinarity in engineering education [2], when aspects of kinematics and dynamics are considered in the design of the experiments. Electrical engineering laboratory courses at the bachelor level typically consist of experiments involving basic electronic circuits and the measurement of circuit parameters such as currents, voltages, power and waveforms. Advanced laboratory courses for higher-semester students utilize analog and digital semiconductor devices such as transistors, op-amps and microcontrollers.
Experiments involving sensors typically require dedicated interface circuitry, and the experimental activity consists of measuring the circuit output to determine the physical quantity. While these experiments are necessary for students to develop their basic knowledge in electrical engineering, they do not provide sufficient exposure to topics such as kinematics and sensor fusion algorithms. Many low-cost, simple robot platforms are available off the shelf. A drawback of such systems is that the mechanical integration of various sensors requires substantial modifications, often making the whole system fragile and delicate. Customized 3D printing allows the geometry to be adjusted to ideally host all sensor equipment and to obtain a rather robust setup. 3D printing has become an essential tool in robotics; therefore, low-cost printed platforms have been suggested for use in education [3]. However, most of the related educational programs focus on higher levels of robotics or mechanics, while more is needed in measurement science education. Aspects such as sensors and data acquisition and their practical use play an important role for engineers, and they should be addressed in the education of students in the field of robotics and artificial intelligence. Furthermore, it is also important to consider uncertainty and to raise awareness of the non-ideal effects present in real systems. Influences due to differences in various acquisition setups and measurement systems can be studied using simulations of the system [4]. Additionally, the relevance of uncertainty for so-called "sim2real" approaches (e.g. [5]), which use simulations as a basis for the generation of training data for learning algorithms, can thus be emphasized. The course in which the proposed robot platform is introduced is a laboratory course under the title "Measurement Science, Sensors and Actuators". This course will be part of a bachelor study program under the title Robotics and Artificial Intelligence, which will start in fall 2022. It is one of more than seven laboratory courses from which the students can choose freely. These laboratory courses are taken rather early in the study program and should give the students a practical context for their further studies.
Consequently, not all of the theory behind the experiments will be fully understood at this early stage of education, and this needs to be considered in the design of the exercises. The learning aims are thus hands-on experience with respect to a wide range of topics:
• torque, angular velocity, and acceleration
• force, linear velocity, and acceleration
• inertia and moment of inertia
• friction
• mechanical (angular, linear) power, electrical power, and determination of conversion efficiency
• data acquisition and recording
• sensors for proprioception and exteroception
• multi-body simulation and development of models from the real world
• comparison of results from simulation and real-world experiments
• influence of noise and other non-ideal effects of sensors and measurement systems, uncertainty propagation
• model-based signal processing
• machine learning-based signal processing
The design of the experiments aims to touch on all these aspects in order to give students visualization and a better understanding of the theory presented in the corresponding lectures. It also covers the often under-represented step of deriving simplified models from real-world scenarios [6] and understanding the resulting discrepancies.

2. Proposed platform

Figure 1 illustrates a lab scenario. Many data acquisition systems nowadays are PC based, and we also make use of tools such as LabVIEW® [7] and MATLAB® [8], allowing fast automation of measurement tasks even when students work with them for the first time. In addition, frameworks such as ROS [9] make it easy to take first steps in robotics, as many modules are readily available. However, since many students work in the same room simultaneously, small robots that can be used on the table are advantageous. Consequently, our design comprises small 3D printed wheeled robots. 3D printing not only allows for low-cost realization of robots, it also makes it easy to adapt the chassis, e.g. in order to mount sensors such as RGB cameras, depth cameras, ultrasound and time-of-flight sensors, wheel speed sensors etc., and to accommodate different actuator concepts. Figure 2 illustrates different realizations. These robots are controlled using a Raspberry Pi Zero running ROS on Raspberry Pi OS, but other platforms such as BeagleBone®, ODROID and Jetson Nano™ boards running ROS on Ubuntu can also be used with the hardware. The robot body frame is constructed using 3D printing according to the desired mounting configuration of the sensors and actuators. A block diagram of a typical robot with selected hardware components is shown in Figure 3. The robot's single-board computer, which interfaces with the onboard sensors and actuators, is housed within the 3D printed body. The measurement data is published on ROS for further processing. The robot can be connected wirelessly to a PC but can also execute programs for various tasks on the embedded PC. It is powered by commonly available low-cost portable power banks that can easily be swapped and recharged. Optionally, the robots can be fitted with optical tracking markers for use in motion tracking facilities. The proposed platform is a fully functional wheeled robot with various measurement and actuation capabilities.

Figure 1. Illustration of the table-top robot as used in a classroom.
Figure 2. Photographs of different 3D printed robots. In contrast to off-the-shelf robots, the geometry can quickly be adapted to ideally host sensors and actuators. While the left robot can only manipulate objects by shifting them, the robot on the right also includes a simple soft-robotics Fin Ray® type gripper [10], which can also be equipped with tactile sensors.
It thus provides students with complete exposure to a simple robot and all its internal components and working principles; the students will then be able to adapt and modify the existing platform for different requirements. The models for 3D printing, the boards and the software will be provided as open source, and students will be able to continue further experiments, as they can easily realize their own robot at low cost.

3. Initial experiments with the robots

Since one of the aims is also to enhance the understanding of kinematics and dynamics, the first experiments address measurements related to the motors of the robot. On a simple test bench, students can measure the current and voltage applied to the mounted DC motor. The DC motor measurement setup incorporates a disk brake and a piezoresistive sensor that can measure the force on the brake. By measuring the force on the support of the brake, the torque can be estimated. Additionally, the angular velocity is measured using magnetoresistive sensors counting marks on the shaft. Figure 4 shows a picture of the proposed motor characterization workbench. Consequently, students become familiar with the measurement of current, voltage and power in the electrical domain, as well as speed, force, torque and power in the mechanical domain. Furthermore, this allows the efficiency, and consequently the thermal losses and heating of the motors, to be determined. The magnetoresistive sensors can be used for odometry, and the drift due to inaccuracies when integrating the velocity can be observed in a follow-up experiment.

Figure 3. Hardware components in the proposed robotic platform. Depending on the requirements, additional components can be included.
Figure 4. The DC motor has a spinning wheel attached to its shaft. This spinning wheel has holes near its perimeter. The angular velocity of the wheel can be obtained by processing the output signal of the magnetoresistive sensor S1. A brake wheel that can spin freely is mounted next to the spinning wheel, and the applied force can be adjusted and measured. A small extrusion of the brake wheel applies force on the force sensor, whose output can be used to measure the torque.
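A minimal sketch of the quantities evaluated on this workbench is given below; all numerical values (voltage, current, brake force, lever radius, mark count) are invented sample data, not measurements from the paper.

```python
# Sketch of the electrical/mechanical power and efficiency evaluation.
import math

V, I = 6.0, 0.45            # measured motor voltage (V) and current (A)
F, r_brake = 1.2, 0.015     # brake force (N) and brake lever radius (m)
marks, window = 240, 1.0    # magnetoresistive pulse count per time window (s)
marks_per_rev = 12          # marks on the shaft (assumed)

omega = 2 * math.pi * marks / (marks_per_rev * window)  # angular velocity, rad/s
tau = F * r_brake                                       # torque from brake force
P_el = V * I                                            # electrical power, W
P_mech = tau * omega                                    # mechanical power, W
eta = P_mech / P_el                                     # conversion efficiency
print(f"omega = {omega:.1f} rad/s, tau = {tau*1e3:.1f} mNm, eta = {eta:.2f}")
```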
4. Introduction to robot simulation

Many different environments for robot simulation are available, including open-source frameworks such as Gazebo [11] and CoppeliaSim® [12]. As the actual time to work in the lab is rather short, a system that can be used through an easy-to-learn graphical user interface is preferred. CoppeliaSim® provides such an interface and was therefore chosen for the introductory course. Additionally, it allows different physics simulation engines to be selected, such as Bullet [13], Newton Dynamics [14], Open Dynamics Engine [15] and Vortex [16]. The initial simulation experiments thus aim to provide an understanding of the capabilities of current robot simulation environments, the required parameters and the limitations of the simulation approaches. This can be illustrated with the simple scenario of a bouncing ball. For this, a simple sphere is added using the GUI of CoppeliaSim® and placed at a height of one meter above the ground. When the simulation is started, the ball falls as expected but sticks to the surface without bouncing. The object parameters that control the simulation of the physical contact in the different simulators are then discussed, and the students should find appropriate parameters such that the ball shows a realistic bouncing behavior, including the attenuation over time. This illustrates that simulations are only approximations of reality and that results obtained with different simulation engines can be quite different for the same scenario. To obtain realistic behavior, parameter tuning is required, and the values are not necessarily directly obtainable from the material properties of the objects. This should raise the awareness that simulation results need to be validated in general. As a next step, the simulation of the simple robot is set up. A STEP file of the 3D printed chassis is provided, and students need to add joints, motors, and wheels. In the following simulations, a torque is assigned to the motors, and the behavior of the robot is analysed by the students. They should verify whether the linear acceleration of the robot correctly corresponds to the estimated torque.

5. Considering measurement uncertainty in robot simulation

In order to achieve realistic simulations that allow the development of signal processing algorithms or simulation-based machine learning approaches, realistic sensor models considering the non-ideal characteristics of sensors are also important. Typically, robot simulation tools such as CoppeliaSim® provide means for simulating sensors such as accelerometers and gyroscopes, but the non-ideal characteristics of these sensors are usually not included; consequently, they need to be added by the user. Figure 5 illustrates such a simple model as used in CoppeliaSim®. Based on the datasheets of acceleration sensors, students should include effects such as deviations in offset and sensitivity, noise, crosstalk and potentially nonlinearity, such that the simulation model generates realistic sensor data. A linear model could look like

$\begin{bmatrix} Y_1(t) \\ Y_2(t) \\ Y_3(t) \end{bmatrix} = \begin{bmatrix} S_{11} & S_{12} & S_{13} \\ S_{21} & S_{22} & S_{23} \\ S_{31} & S_{32} & S_{33} \end{bmatrix} \begin{bmatrix} X_1(t) \\ X_2(t) \\ X_3(t) \end{bmatrix} + \begin{bmatrix} O_1 \\ O_2 \\ O_3 \end{bmatrix} + \begin{bmatrix} N_1(t) \\ N_2(t) \\ N_3(t) \end{bmatrix} ,   (1)

where $Y_i$ are the sensor outputs, $X_i$ the sensor inputs, $S_{ij}$ the (cross-)sensitivities, $O_i$ the offset values and $N_i$ the noise contributions. For each sensor realization, the offsets and sensitivities will be different, and the noise contributions vary for each sample. The students should also include an explanation of how the parameters of the random variables are determined from the datasheet of an actual sensor. The model should be developed for one of the following:
• 3-axis acceleration sensor
• 3-axis gyroscope
• pressure sensor
The sensor model should be tested on a simulated robot, e.g. in order to assess the feasibility of determining the angle of a robot joint using two acceleration sensors.

Figure 5. Simple acceleration sensor model in CoppeliaSim®. It comprises an ideal force sensor and a seismic mass; the accelerations are determined by dividing the forces obtained from the physics simulation engine by the mass.
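A minimal sketch of the linear sensor model in equation (1) is given below, assuming NumPy; the offset, sensitivity and noise figures are illustrative values of the kind read from an accelerometer datasheet, not values from the paper.

```python
# Sketch of a non-ideal 3-axis accelerometer model, Y = S X + O + N.
import numpy as np

rng = np.random.default_rng()

# Per-device parameters: drawn once per simulated sensor realization.
S = np.eye(3) * rng.normal(1.0, 0.01, 3)                 # sensitivity, ~1 % tol.
S += rng.normal(0.0, 0.005, (3, 3)) * (1 - np.eye(3))    # cross-axis sensitivity
O = rng.normal(0.0, 0.05, 3)                             # offset in m/s^2 (assumed)

def sensor_output(x: np.ndarray) -> np.ndarray:
    """Apply equation (1) to the ideal acceleration vector X."""
    N = rng.normal(0.0, 0.02, 3)                         # per-sample noise (assumed)
    return S @ x + O + N

# Example: the ideal gravity vector as seen by the simulated non-ideal sensor.
print(sensor_output(np.array([0.0, 0.0, 9.81])))
```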
6. Sensor fusion algorithms

An important part of measurement science, especially in a robotic context, is the application of algorithms to improve the estimate of the measurands in the presence of noise and uncertainties. A popular method for robotic tracking applications is the Kalman filter [17]. The proposed wheeled robot is a suitable platform for students to learn about the estimation of the system state from the sensor readings. Even though the full theory behind the Kalman filter will be addressed later in the curriculum, the students should be able to implement the equations and to perform a first analysis in order to get an idea of the benefits of signal processing methods in measurement science. In a simple exercise, the students must estimate and track the velocity and acceleration (in one dimension) of the robot using the Kalman filter. The robot moves in a straight line on a flat, level surface, and the output of the angular velocity sensor ($\omega_\mathrm{m}$) and the acceleration ($a_\mathrm{m}$) from an inertial measurement unit (IMU) are available as measurements at regular time intervals ($\mathrm{d}t$). The diameter of the robot's wheel ($D$) is a known constant. From the measured angular velocity and the known wheel diameter, the measured linear velocity $v_\mathrm{m}$ of the robot is obtained as

$v_\mathrm{m} = \pi D \omega_\mathrm{m}$ .   (2)

The state of the system to be estimated and tracked is

$x = \begin{bmatrix} v \\ a \end{bmatrix}$ .   (3)

In this setup, the system has no control inputs; therefore, the Kalman filter state prediction equations are

$\hat{x}_{n+1} = F x_n$ , where $F = \begin{bmatrix} 1 & \mathrm{d}t \\ 0 & 1 \end{bmatrix}$   (4)

and

$P_{n+1} = F P F^\mathrm{T} + Q_n$ ,   (5)

where $P$ is the covariance matrix that represents the uncertainty of the estimate and $Q$ is the process noise. Given the measurements

$z = \begin{bmatrix} v_\mathrm{m} \\ a_\mathrm{m} \end{bmatrix}$ ,   (6)

the state update and the uncertainty update equations are given by

$\hat{x}_n = \hat{x}_{n-1} + K_n \left(z_n - H \hat{x}_{n-1}\right)$   (7)

and

$P_n = \left(I - K_n H\right) P_{n-1}$ ,   (8)

where $H$ is the identity matrix and $K_n$ is the Kalman gain

$K_n = P_{n-1} H^\mathrm{T} \left[H P_{n-1} H^\mathrm{T} + R_n\right]^{-1}$ ,   (9)

where $R$ is the measurement uncertainty matrix. An advanced version of this experiment tracks the motion (linear and angular position, velocity, and acceleration) of the robot; additionally, positional measurement inputs from the range sensor can be included, and motor voltage and current measurements can be used as control inputs to the system state. The estimated state can then be compared to the ground-truth positions obtained from an optical motion tracking system.
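A minimal NumPy sketch of the filter in equations (2)-(9) follows; the sample time, wheel diameter, noise covariances and simulated measurements are illustrative assumptions, not values from the paper.

```python
# Sketch of the two-state Kalman filter of equations (2)-(9).
import numpy as np

dt, D = 0.01, 0.06                       # sample time (s), wheel diameter (m)
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition, equation (4)
H = np.eye(2)                            # measurement matrix (identity)
Q = 1e-4 * np.eye(2)                     # process noise (assumed)
R = np.diag([1e-2, 1e-1])                # measurement uncertainty (assumed)

x = np.zeros(2)                          # state [v, a], equation (3)
P = np.eye(2)                            # initial estimate uncertainty

def kalman_step(x, P, omega_m, a_m):
    v_m = np.pi * D * omega_m            # equation (2), wheel odometry
    # Predict, equations (4)-(5)
    x = F @ x
    P = F @ P @ F.T + Q
    # Update, equations (6)-(9)
    z = np.array([v_m, a_m])
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain, equation (9)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

for _ in range(100):                     # feed simulated noisy measurements
    x, P = kalman_step(x, P, omega_m=2.0 + np.random.randn() * 0.05,
                       a_m=np.random.randn() * 0.2)
print("estimated velocity and acceleration:", x)
```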
7. Higher level aspects

The same robots can also be used with optical time-of-flight sensors or time-of-flight cameras. With these sensors, it is possible to develop obstacle avoidance approaches based on reinforcement learning in simulation. Here, the aim is to maneuver the robot towards a certain target position while avoiding collisions with obstacles. Again, the focus is on the influence of the sensor signal's quality on the training result. We use one or more 8 × 8 multizone time-of-flight ranging sensors [18]. Figure 6 shows a training setup that students can use.

Figure 6. Reinforcement learning with Gazebo and time-of-flight sensors. The aim is that the robot navigates to the red rectangle while avoiding collisions with the obstacles. The simulated sensor signals are illustrated in the top left image, where the distance is encoded in a gray-scale image.

8. Summary

This paper summarizes an approach to integrating robots into lab exercises on measurement science. The simple, low-cost table-top robot with a 3D printed chassis can be equipped with various sensors and can be used for a variety of experiments, starting from current and voltage measurements on a DC motor to measurements of force, speed and mechanical power, as well as conversion efficiency. Non-ideal effects of inertial sensors as used in robots can be reproduced in simulation frameworks, and their consequences can also be studied in the experiments. The sensors and measurement systems can also be used and studied in signal processing and signal fusion approaches, as well as in machine learning, allowing the influence of the signal quality on the training results to be studied.

Acknowledgement
This work has received funding from the "European Regional Development Fund" (EFRE) and "REACT-EU" (as reaction of the EU to the COVID-19 pandemic) by the "Kärntner Wirtschaftsförderungs-Fonds" (KWF) within the project Pattern-Skin 3520/34263749706.

References
[1] J. M. Esposito, The state of robotics education: proposed goals for positively transforming robotics education at postsecondary institutions, IEEE Robot. Automat. Mag., vol. 24, no. 3, Sep. 2017, pp. 157-164. DOI: 10.1109/MRA.2016.2636375
[2] M. Roy, A. Roy, The rise of interdisciplinarity in engineering education in the era of Industry 4.0: implications for management practice, IEEE Eng. Manag. Rev., vol. 49, no. 3, Sep. 2021, pp. 56-70. DOI: 10.1109/EMR.2021.3095426
[3] L. Armesto, P. Fuentes-Durá, D. Perry, Low-cost printable robots in education, J. Intell. Robot. Syst., vol. 81, no. 1, Jan. 2016, pp. 5-24. DOI: 10.1007/s10846-015-0199-x
[4] T. Mitterer, L. Faller, H. Müller, H. Zangl, A rocket experiment for measurement science education, Journal of Physics: Conference Series, 2018, vol. 1065, no. 2, p. 022005. DOI: 10.1088/1742-6596/1065/2/022005
[5] C. Doersch, A. Zisserman, Sim2real transfer learning for 3D human pose estimation: motion to the rescue, Advances in Neural Information Processing Systems, 2019, vol. 32, 14 pp. DOI: 10.48550/arXiv.1907.02499
[6] L. L. Bucciarelli, S. Kuhn, Engineering education and engineering practice: improving the fit, in: Between Craft and Science: Technical Work in the United States, edited by Stephen R. Barley and Julian E. Orr, Ithaca, NY: Cornell University Press, 1997, pp. 210-229. DOI: 10.7591/9781501720888-012
[7] National Instruments, LabVIEW. Online [Accessed 8 June 2023] https://www.ni.com/en-us/shop/labview.html
[8] MathWorks, MATLAB. Online [Accessed 8 June 2023] https://www.mathworks.com/products/matlab.html
[9] M. Quigley, K. Conley, B. P. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, A. Y. Ng, ROS: an open-source Robot Operating System, in ICRA Workshop on Open Source Software, 2009, vol. 3, no. 3.2, p. 5.
[10] W. Crooks, G. Vukasin, M. O'Sullivan, W. Messner, C. Rogers, Fin Ray® effect inspired soft robotic gripper: from the RoboSoft Grand Challenge toward optimization, Front. Robot. AI, vol. 3, Nov. 2016. DOI: 10.3389/frobt.2016.00070
[11] Open Robotics, Gazebo. Online [Accessed 7 August 2022] https://gazebosim.org/home
[12] Coppelia Robotics, Robot simulator CoppeliaSim: create, compose, simulate, any robot. Online [Accessed 7 August 2022] https://www.coppeliarobotics.com
[13] pybullet.org, Bullet real-time physics simulation. Online [Accessed 12 January 2023] https://pybullet.org/wordpress/
[14] Julio Jerez, Alain Suero and various other contributors, Newton Dynamics. Online [Accessed 12 January 2023] http://newtondynamics.com/forum/newton.php
[15] Russ Smith, Open Dynamics Engine. Online [Accessed 12 January 2023] http://www.ode.org/
[16] CM Labs, Vortex Studio. Online [Accessed 12 January 2023] https://www.cm-labs.com/vortex-studio/
[17] R. E. Kalman, A new approach to linear filtering and prediction problems, Journal of Basic Engineering (ASME), vol. 82, Mar.
1960, pp. 35-45. DOI: 10.1115/1.3662552
[18] STMicroelectronics, VL53L5CX time-of-flight 8x8 multizone ranging sensor with wide field of view. Online [Accessed 31 March 2022] https://www.st.com/content/st_com/en/campaigns/vl53l5cx-time-of-flight-sensor-multizone.html

Multi-input multi-output antenna measurements with super wide bandwidth for wireless applications using isolated T stub and defected ground structure

ACTA IMEKO, ISSN: 2221-870X, March 2022, Volume 11, Number 1, 1-6

Pradeep Vinaik Kodavanti1, P. V. Y. Jayasree2, Prabhakara Rao Bhima3
1 Department of Electronics and Communication Engineering, Jawaharlal Nehru Technological University Kakinada, Kakinada-533003, Andhra Pradesh, India
2 Department of Electrical, Electronics and Communication Engineering, GITAM, Visakhapatnam-530045, Andhra Pradesh, India
3 School of Nano Technology, Jawaharlal Nehru Technological University Kakinada, Kakinada-533003, Andhra Pradesh, India

Section: Research paper
Keywords: ultra-wide band; MIMO; T stub; defected ground structure
Citation: Pradeep Vinaik Kodavanti, P. V. Y. Jayasree, Prabhakara Rao Bhima, Multi-input multi-output antenna measurements with super wide bandwidth for wireless applications using isolated T stub and defected ground structure, Acta IMEKO, vol. 11, no. 1, article 27, March 2022, identifier: IMEKO-ACTA-11 (2022)-01-27
Section editor: Md Zia Ur Rahman, Koneru Lakshmaiah Education Foundation, Guntur, India
Received November 29, 2021; in final form March 3, 2022; published March 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Pradeep Vinaik Kodavanti, e-mail: pradeepvinaik.kodavanti@gmail.com

Abstract: The paper presents the idea of defected ground structures for the improvement of the radiation characteristics of an antenna, especially in the multi-input multi-output (MIMO) configuration. The proposed antenna model with a partially flared-out feed system is designed and analyzed with a defected ground in both single-element and array configurations. A T stub (a T-shaped stub) is included along with the defected ground to enhance the MIMO configuration features. The simulations are carried out in an electromagnetic modelling tool and analyzed by measuring parameters such as the reflection coefficient, voltage standing wave ratio (VSWR), gain, radiation patterns and current distribution plots; these measurements are very important for the fabrication of the proposed antenna. The antenna is fabricated and validated in terms of S-parameters and VSWR, and the measured results are in good agreement with the simulated results. The overall size of the antenna is 24 × 18 × 0.8 mm³.
1. Introduction

An antenna is the most significant part of a communication system, responsible for the transmission and interception of electromagnetic signals [1]-[10]. Several configurations, such as single-element, array and multiple-element radiating systems, exist for an efficient communication link [11]-[15]. Ultra-wide band (UWB) systems have become a most significant part of personal and commercial wireless systems, which has led to a huge number of applications like Wi-Fi, WLAN, Bluetooth etc. Hence, it is a challenging task for antenna engineers to design antennas whose radiation pattern and spectrum characteristics do not admit spatial or frequency interference; antennas with several notch-band features are the solution to such interference issues. Another significant aspect is the channel characteristics, which contribute to the power efficiency of the system; it is always a challenging task to enhance the capability of the channel without affecting its spectral and spatial response. Multi-input multi-output (MIMO) technology emerged as the best solution to meet the above requirements; it relies on weak coupling between the radiating systems. However, design challenges persist in achieving miniaturization and portability of the system. Several works have reported significant results pertaining to UWB MIMO-based designs. Parameters such as isolation and its improvement have been the most prominent design issue of MIMO UWB antennas. Isolation improvements can be achieved through decoupling processes or the intrinsic characteristics of the antennas, a process that is inherently complex [8]-[12]. A radiating system with excellent diversity is another solution for achieving the desired MIMO characteristics with good decoupling. Incorporating filters and other circuits is another good option for achieving this objective of isolation. Slots, stubs and split-ring resonators are some examples of structures proposed for embedding in the radiating system to realize notch bands and shape the interference patterns [13]-[15]. Recently, several other conformal structures have been considered as advancements in antenna design for 5G technology [16], [17]; this technique is projected as a most favourable feature for advanced communication.
Measurement of various parameters is very important in the design, fabrication and analysis of the antenna. Section 2 describes the proposed antenna geometry with and without the T stub; the simulation results are discussed in Section 3; the overall conclusion is included as Section 4.

2. Proposed antenna geometry

The proposed antenna starts from a typical patch-antenna geometry with two conducting plates sandwiching a dielectric substrate. The conducting plate on the top side serves as the radiation source; it is circular in shape and edge fed. The bottom plate serves as the ground plane, which is intentionally defective in shape to provide the defected ground features. The usual edge-feed geometry is a rectangular strip running from the edge of the radiating patch to the end of the substrate; in this work, however, the shape of the rectangular strip has been modified. From the edge of the substrate, the feed line takes the shape of a rectangle with length lf2 and width wf and flares out as it reaches the edge of the circular patch, where it assumes the width wf2, which is greater than wf. The length of the gradual flaring of the feed line is lf3. The circular patch has a radius r, while the substrate has length l and width w. The ground plane on the other side is defective in nature and does not have a specific shape: it can be considered as a rectangular strip of length lg, with width equal to that of the substrate, at the lower edge of the substrate plane. The remaining part of the ground plane is left uncoated. The strip along the width w has three V-shaped valleys: the two V-shaped valleys on either side of the central V shape are symmetric and have an arm length l1 smaller than l2, the arm length of the central, deeper V shape. The optimized dimensions of the proposed antenna geometry are given in Table 1; all dimensions are in mm. The geometry has been extended to an array configuration with two similar elements arranged side by side in the radiation region, as shown in Figure 1(c). The two-element array configuration is necessary to realize the MIMO characteristics of the antenna. In the first case, the ground characteristics and geometry are similar to those of Figure 1(b), but duplicated in order to serve as the defective ground of the second element in the array. Further, in the second case of interest, the T stub is included to enhance the MIMO characteristics. In this case, the T stub rises from the centre of the ground plane, specifically from the top edge of the strip of the defective ground configuration, and two rectangular strips arranged perpendicular to each other form the T shape.

Table 1. Antenna design parameters.
S. No | Parameter | Value in mm
1 | l | 18
2 | ws | 24
3 | wf | 1.5
4 | lf | 6
5 | r | 5.5
6 | lf1 | 6.5
7 | wf1 | 3
8 | wf2 | 2.5
9 | lf2 | 3
10 | lf3 | 3.39
11 | lg | 6
12 | l1 | 1
13 | l2 | 3.8
14 | w1 | 2.5
15 | w2 | 3
16 | w3 | 2
17 | g1 | 12
18 | d | 1
19 | d1 | 5
20 | sl1 | 11
21 | sl2 | 0.5
22 | sw1 | 0.5
23 | sw2 | 6
24 | d2 | 5

Figure 1. Proposed super wide band antenna: (a) front view, (b) ground plane; and MIMO antenna: (c) front view, (d) ground plane without T stub, (e) ground plane with T stub.
The vertical strip has length sl1 and width sw1, while the horizontal strip has dimensions sl2 and sw2.

3. Results and discussion

The radiation characteristics of the antenna simulated with the EM modelling tool are presented in this section. The results are organized into three cases: the first refers to the proposed antenna in its single-element form with defective ground, the second to the two-element array configuration, and the third to the MIMO antenna with the T stub in the defective ground.

3.1. Case 1: defective ground antenna

The frequency response and the resonance features can be inferred directly from the S11 plot presented in Figure 2(a) and verified using the corresponding voltage standing wave ratio (VSWR) plot given in Figure 2(b). Further, the frequency-dependent gain characteristics are studied using the gain plot in Figure 2(c). From the S11 and VSWR plots it is evident that the designed antenna has a very large band, starting from 6 GHz and extending to frequencies above 80 GHz, with the reflection coefficient consistently below -10 dB. Within this range, several dips are found in the response curve at which the antenna is strongly resonant. Similarly, the VSWR magnitude stays below 2 throughout the entire sweep mentioned above for S11, which confirms the radiation characteristics once again. From the gain plot, the maximum gain first decreases almost linearly from 5 GHz to 8 GHz and then increases gradually but slowly up to 80 GHz. Further, S11 and VSWR are frequency-response curves, so the consistency of the radiation characteristics must instead be inferred from the radiation pattern plots, which are presented for various frequencies in Figures 3(a) through 3(f). The frequencies of interest are the dips observed in the S11 plot, where the reflection coefficient of the antenna is at its minima. In this work, the polar radiation patterns are presented for 6.73 GHz, 13.92 GHz, 30.54 GHz, 37.01 GHz, 43.8 GHz and 52.2 GHz. The radiation characteristics are very close to the template patterns of a patch antenna and are mutually similar. Further, the radiation pattern depends strongly on the current distribution, which in turn is defined by the geometrical conditions of the antenna and the feed-point orientation and construction. To study this, the current distributions are given in Figures 4(a) through 4(f) for the same frequencies of interest for which the radiation patterns are plotted. In the first three figures, the upper portion of the patch geometry carries almost no current, and hence the null patterns are visible in the radiation characteristics presented in Figure 3.

Figure 2. Proposed SWB antenna simulation results: (a) S11, (b) VSWR and (c) maximum gain versus frequency.
Figure 3. Simulated radiation patterns of the proposed antenna at (a) 6.73 GHz, (b) 13.92 GHz, (c) 30.54 GHz, (d) 37.01 GHz, (e) 43.8 GHz and (f) 52.2 GHz.
Figure 4. Simulated current distributions of the proposed antenna at (a) 6.73 GHz, (b) 13.92 GHz, (c) 30.54 GHz, (d) 37.01 GHz, (e) 43.8 GHz and (f) 52.2 GHz.
Further, at higher frequencies the current distribution builds up, and hence the expansion of the beams becomes visible in the patterns, slowly dominating the null characteristics.

3.2. Case 2: two-element array configuration

The two-element array configuration of the defective ground antenna is studied in terms of its resonant features from the S11 and VSWR graphs plotted in Figure 5(a) and Figure 5(b). The array S11 plot is much like that of the single element, without any deviation in its resonant features. This is a favourable result for the array configuration, as the resonant characteristics of the antenna should not be disrupted in the array form. Similarly, the transmission characteristics from the first port to the second port can be studied from the S21 curve, which is plotted in the same figure; S21 exhibits the same features except for the variation in magnitude, as is evident in the graph. To corroborate the S11 characteristics, the corresponding VSWR is plotted in Figure 5(b), exhibiting the same resonances of the antenna, with the magnitude of the parameter staying below 2 over the entire frequency band from 5 GHz to 80 GHz. The gain, presented in Figure 5(c), is observed to be better than the maximum gain observed in Figure 2(c). The S11 of the proposed antenna for varying distance d between the two circular patches, with a step size of 0.5 mm, is represented in Figure 5(d). Further, the envelope correlation coefficient has been computed and is presented in Figure 5(e): the envelope correlation is degraded in the non-resonant band, while it is excellent within the resonant band of the antenna. Similarly, the current distributions for the array configuration without the T stub are presented in Figures 6(a) through 6(f), followed by the polar radiation plots of the same configuration in Figures 7(a) through 7(f). The current distributions and the polar radiation plots can be correlated with each other, as the current distribution greatly affects the radiation characteristics; the null patterns are completely defined by the zones with no current distribution on the radiating surface of the antenna.

Figure 5. Simulation results of the proposed MIMO antenna: (a) S11, (b) VSWR, (c) gain versus frequency, (d) S11 for varying distance d between the two circular patches with a step size of 0.5 mm, and (e) envelope correlation coefficient (ECC).
Figure 6. Simulated current distributions of the proposed MIMO antenna without T stub at (a) 6.73 GHz, (b) 13.92 GHz, (c) 30.54 GHz, (d) 37.01 GHz, (e) 43.8 GHz and (f) 52.2 GHz.
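A minimal sketch of computing the ECC from two-port S-parameters is given below; the formula used here is the standard expression of Blanch et al., which is a common choice and an assumption on our part, as the paper does not state which expression was applied, and the sample S-parameter values are placeholders.

```python
# Sketch of the envelope correlation coefficient from S-parameters.
import numpy as np

def ecc_from_s(s11: complex, s12: complex, s21: complex, s22: complex) -> float:
    """Two-port ECC from S-parameters (Blanch et al. formula)."""
    num = abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = (1 - abs(s11) ** 2 - abs(s21) ** 2) * (1 - abs(s22) ** 2 - abs(s12) ** 2)
    return num / den

# Example at one frequency point (placeholder values, linear scale):
print(ecc_from_s(0.1 + 0.05j, 0.02 - 0.01j, 0.02 - 0.01j, 0.1 + 0.04j))
```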
the current distribution and the radiation characteristics of the array model with the t stub are presented in figure 8 and figure 9, respectively. the two plots are highly correlated, with the radiation pattern defined by the corresponding current distribution. the plane in which the current distribution is maximum observes the main beam orientation in the radiation pattern. similarly, the no-current zone accommodates the wide nulls in the plane of radiation. at 52.2 ghz, the first element has the maximum current on its surface, and the corresponding pattern observes maximum radiation from an azimuth of 120° to 180°; the same holds for all the remaining frequencies.

the fabricated prototype of the proposed antenna is represented in figure 10. the comparison of the measured and simulated s-parameters and vswr of the proposed antenna is represented in figure 11 and figure 12. the measured results are in good agreement with the simulated results, and the antenna is applicable for real-time applications.

figure 7. simulated radiation patterns of the proposed mimo antenna without t stubs at (a) 6.73 ghz, (b) 13.92 ghz, (c) 30.54 ghz, (d) 37.01 ghz, (e) 43.8 ghz and (f) 52.2 ghz.
figure 8. simulated current distributions of the proposed mimo antenna with t stubs at (a) 6.73 ghz, (b) 13.92 ghz, (c) 30.54 ghz, (d) 37.01 ghz, (e) 43.8 ghz and (f) 52.2 ghz.
figure 9. simulated radiation patterns of the proposed mimo antenna with t stubs at (a) 6.73 ghz, (b) 13.92 ghz, (c) 30.54 ghz, (d) 37.01 ghz, (e) 43.8 ghz and (f) 52.2 ghz.
figure 10. fabricated prototype of the proposed antenna.
figure 11. simulated and measured s11 and s12 for the proposed antenna.
figure 12. simulated and measured vswr for the proposed antenna.

the proposed antenna is compared with the previous literature in terms of size, substrate, bandwidth, gain and ecc in table 2; the ecc is a dimensionless quantity.

4. conclusion

the planar microstrip antenna-based structure using a defective ground with a t stub has been successfully simulated and analysed for its swb and mimo characteristics. a clear improvement in the antenna radiation properties is evident from the single- and two-element array configurations of the mimo antenna. the presence of the t stub is relevant to the notch-band characteristics of the antenna. fabrication and prototyping of the antenna offer good scope for validating the results further in a practical environment.

references

[1] zeeshan ahmed, gul perwasha, ultrawide band antenna with wlan band-notch characteristic, int. conf. comput., control commun (2013), pp. 25-26. doi: 10.1109/ic4.2013.6653762
[2] noman waheed, abubakar saadat, ultra-wide band antenna with wlan and wimax band-notch characteristic, int. conf. commun., comput. digital syst. 8 (2017). doi: 10.1109/c-code.2017.7918910
[3] z. q. li, c. l. ruan, a small integrated bluetooth and uwb antenna with wlan band-notched characteristics, int. symp. signals, syst. electron. 1 (2010), pp. 1-4. doi: 10.1109/issse.2010.5638206
[4] bo-ren hsiao, yi-an chen, lte mimo antennas on variable sized tablet computers with comprehensive band coverage, ieee antennas wirel. propag. lett. 15 (2015), pp. 1152-1155. doi: 10.1109/lawp.2015.2496966
[5] saber soltani, parisa lotfi, a port and frequency reconfigurable mimo slot antenna for wlan applications, ieee trans. antennas propag. 64 (4) (2016), pp. 1209-1217. doi: 10.1109/tap.2016.2522470
[6] manoj k.
meshram, a novel quad-band diversity antenna for lte and wi-fi applications with high isolation, ieee trans. antennas propag. 60 (9) (2012), pp. 4360-4371. doi: 10.1109/tap.2012.2207044
[7] shrivishal tripathi, akhilesh mohan, sandeep yadav, a compact koch fractal uwb mimo antenna with wlan band-rejection, ieee antennas wirel. propag. lett. 14 (2015), pp. 1565-1568. doi: 10.1109/lawp.2015.2412659
[8] muhammad saeed khan, ultra-compact dual polarized uwb mimo antenna with meandered feeding lines, iet microw., antennas propag. 11 (7) (2017), pp. 997-1002. doi: 10.1049/iet-map.2016.1074
[9] mohammad abedian, compact ultrawide band mimo dielectric resonator antennas with wlan band rejection, iet microw., antennas propag. 11 (11) (2017), pp. 1524-1529. doi: 10.1049/iet-map.2016.0299
[10] mohammad mahdi farahani, mutual coupling reduction in millimeter-wave mimo antenna array using a metamaterial polarization-rotator wall, ieee antennas wirel. propag. lett. 16 (2017), pp. 2324-2327. doi: 10.1109/lawp.2017.2717404
[11] mohammad saeed khan, a compact csrr-enabled uwb diversity antenna, ieee antennas wirel. propag. lett. 16 (2017), pp. 808-812. doi: 10.1109/lawp.2016.2604843
[12] wing-xin chu, compact broad band slot mimo antenna with a stub loaded radiator, ieee conference on antenna measurements and applications (2014), pp. 1-4. doi: 10.1109/cama.2014.7003407
[13] v. jyothika, m. s. p. c. shekar, s. v. krishna, m. z. u. rahman, design of 16 element rectangular patch antenna array for 5g applications, journal of critical reviews 7 (9) (2020), pp. 53-58.
[14] v. s. chakravarthy, p. m. rao, amplitude-only null positioning in circular arrays using genetic algorithm, ieee international conference on electrical, computer and communication technologies (icecct), pp. 1-5. doi: 10.1109/icecct.2015.7226120
[15] k. aravind rao, k. sai raj, r. k. jain, m. z. u. rahman, implementation of adaptive beam steering for phased array antennas using enlms algorithm, journal of critical reviews 7 (9) (2020), pp. 59-63. doi: 10.31838/jcr.07.09.10
[16] s. blanch, j. romeu, exact representation of antenna system diversity performance from input parameter description, electron. lett. 39 (9) (2003), pp. 705-707. doi: 10.1049/el:20030495
[17] h. huang, x. li, y. liu, a low-profile, dual-polarized patch antenna for 5g mimo application, ieee trans. antennas propag. 67 (2) (2019), pp. 1275-1279. doi: 10.1109/tap.2018.2880098
[18] s. kesana, s. gatikanti, m. z. ur rahman, b. radhika, d. mounika, triple frequency g-shape mimo antenna for wireless applications, international journal of engineering and advanced technology 8 (5) (2019), pp. 942-947.
[19] armando coccia, federica amitrano, leandro donisi, giuseppe cesarelli, gaetano pagano, mario cesarelli, giovanni d'addio, design and validation of an e-textile-based wearable system for remote health monitoring, acta imeko 10 (2) (2021), pp. 1-10. doi: 10.21014/acta_imeko.v10i2.912
[20] h. t. chattha, f. latif, f. a. tahir, m. u. khan, x. yang, small-sized uwb mimo antenna with band rejection capability, ieee access 7 (2019), pp. 121816-121824. doi: 10.1109/access.2019.2937322
[21] s. v. devika, k. karki, s. k. kotamraju, k. c. sri kavya, m. z. u. rahman, a new computation method for pointing accuracy of cassegrain antenna in satellite communication, journal of theoretical and applied information technology 95 (13) (2017), pp. 3062-3074.
[22] r. hussain, m. s. sharawi, a.
shamim, an integrated four-element slot-based mimo and a uwb sensing antenna system for cr platforms, ieee trans. antennas propag. 66 (2) (2018), pp. 978-983. doi: 10.1109/tap.2017.2781220
[23] c. yu, s. yang, y. chen, w. wang, l. zhang, b. li, l. wang, a super-wideband and high isolation mimo antenna system using a windmill-shaped decoupling structure, ieee access 8 (2020), pp. 115767-115777. doi: 10.1109/access.2020.3004396

table 2. comparison of the proposed antenna with the previous literature.

ref | size (mm³) | substrate | bandwidth (ghz) | gain (db) | ecc
[18] | 48 × 48 × 0.8 | fr4 | 2.5 - 12 | na | < 0.005
[19] | 50 × 50 × 1.6 | ro-6035htc | 2.76 - 10.75 | 2.5 - 5 | < 0.025
[20] | 23 × 26 × 0.8 | fr4 | 3.1 - 10.6 | 2 - 4.5 | < 0.01
[21] | 50 × 39.8 × 1.524 | ro-tmm4 | 2.7 - 12 | 2.5 - 6.0 | < 0.01
[22] | 60 × 120 × 1.5 | ro-4350 | 0.75 - 7.65 | 3.2 | < 0.1
[23] | 58 × 58 × 1 | fr4 | 2.9 - 40 | 4.3 - 13.5 | < 0.04
proposed | 18 × 24 × 0.8 | rogers 5880 | 5.5 - 80 | 4.01 - 9.89 | < 0.05

uncertain estimation-based motion-planning algorithms for mobile robots

acta imeko, issn: 2221-870x, september 2021, volume 10, number 3, 51 - 60

zoltán gyenes1, emese gincsainé szádeczky-kardoss1
1 budapest university of technology and economics, magyar tudósok körútja 2, 1117 budapest, hungary

section: research paper
keywords: motion planning; mobile robots; cost function; uncertain estimations
citation: zoltán gyenes, emese gincsainé szádeczky-kardoss, uncertain estimation-based motion planning algorithms for mobile robots, acta imeko, vol. 10, no. 3, article 9, september 2021, identifier: imeko-acta-10 (2021)-03-09
section editor: bálint kiss, budapest university of technology and economics, hungary
received january 15, 2021; in final form august 9, 2021; published september 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: zoltán gyenes, e-mail: gyezo12@gmail.com

abstract
collision-free motion planning for mobile agents is a challenging task, especially when the robot has to move towards a target position in a dynamic environment. the main aim of this paper is to introduce motion-planning algorithms using the changing uncertainties of the sensor-based data of obstacles. two main algorithms are presented in this work. the first is based on the well-known velocity obstacle motion-planning method; in this method, collision-free motion must be achieved by the algorithm using a cost-function-based optimisation method. the second algorithm is an extension of the often-used artificial potential field. for this study, it is assumed that some of the obstacle data (e.g. the positions of static obstacles) are already known at the beginning of the algorithm (e.g. from a map of the environment), but other information (e.g. the velocity vectors of moving obstacles) must be measured using sensors. the algorithms are tested in simulations and compared in different situations.

1. introduction

autonomous driving is a highly active research area for mobile robots, cars and drones. robots have to generate a collision-free motion towards the target position while maintaining safety with respect to the obstacles that occur in the local environment. motion-planning methods generate both the velocity and the path profiles for the robot using measured information about the velocity vectors and the positions of the obstacles. motion-planning algorithms can be divided into two classes.
if all the data for the robot's environment are known and available at the start, then global motion-planning algorithms can be used to generate a collision-free path [1], [2]. however, if the robot can only use local, sensor-based information about its surrounding dense and dynamic environment, then reactive motion-planning algorithms can provide an acceptable solution for generating the robot's path and velocity [3], [4]. using a reactive motion-planning algorithm, generating optimal evasive manoeuvres that can ensure a safe motion for the agent and the environment is an np-hard problem [5]. the task is more difficult if the uncertainties of the measured data (velocity vectors and positions) are taken into account. in this paper, a novel reactive motion-planning algorithm is presented that can calculate the uncertainties of every obstacle using their velocity vectors and distance from the agent.

the paper is organized in the following way. section 2 outlines some often-used reactive motion-planning methods that have been introduced in recent decades; in some of these algorithms, the uncertainties of the measured data have also been considered. at the end of section 2, the basics of the velocity obstacle (vo) and artificial potential field (apf) methods are presented. in section 3, the novel concept for the calculation of obstacle uncertainties is set out. section 4 then presents the introduced motion-planning algorithms, which can generate a safe motion for the agent taking the uncertainties into account. in section 5, the simulation results are presented, and the introduced motion-planning methods are compared. in section 6, the coppeliasim simulation environment is discussed, and section 7 provides a conclusion and sets out plans for future research.

2. previous work

in this section, a few reactive motion-planning algorithms are presented.

the inevitable collision states (ics) method calculates all states of the robot from which no available control command can result in a collision-free motion between the robot and the environment. the main goal is to ensure that the agent never finds itself in an ics situation. the algorithm is appropriate not only for static but also for dynamic environments [6]-[8].

the main concept behind the dynamic window method [9], [10] is that the agent selects a velocity vector from the reachable and admissible set of the velocity space.
the robot executes a collision-free motion by selecting a velocity vector from the admissible velocity set, while the reachable velocities can be generated using the kinematic and dynamic constraints of the agent.

the admissible gap is a relatively new concept for motion-planning algorithms [11]. if the robot can move through a gap safely using a motion control that obeys the constraints of the agent, then the gap is admissible. this method is also usable in an unknown environment. the gap-based online motion-planning algorithm has also been used with a lidar sensor by introducing a binary sensing vector (the value of a vector element is equal to 1 if there is an obstacle in that direction) [12].

2.1. velocity obstacle method

the main concept behind our method is based on the vo method [13]. using the positions and the velocities of the obstacles and the agent, the vo method generates a collision-free motion for the robot. the vo concept has been used in different methods. the steps in the vo method are as follows: $B_i$ denotes the different obstacles ($i = 1 \ldots m$, where $m$ represents the number of obstacles), and the agent is $A$. for every obstacle, a $VO_i$ cone can be generated that constitutes every robot velocity vector that would result in a collision between the agent ($A$) and the obstacle ($B_i$) at a future time:

$VO_i = \{ \mathbf{v}_A \mid \exists t : (\mathbf{p}_A + \mathbf{v}_A t) \cap (\mathbf{p}_{B_i} + \mathbf{v}_{B_i} t) \neq \emptyset \} \,, \quad (1)$

where $\mathbf{p}_A$ and $\mathbf{p}_{B_i}$ are the positions and $\mathbf{v}_A$ and $\mathbf{v}_{B_i}$ are the velocity vectors of the robot and the obstacle. the velocities of the obstacles and the robot are assumed to be constant until $t$. if there are more obstacles, then the whole $VO$ set can be generated:

$VO = \bigcup_{i=1}^{m} VO_i \,. \quad (2)$

figure 1 provides an example in which a moving obstacle is in position $\mathbf{p}_{B_1}$ and has velocity $\mathbf{v}_{B_1}$ at the actual time step. there is also a static obstacle in the workspace of the agent ($\mathbf{p}_{B_2}$ represents its position). the two vo areas are depicted in blue.

reachable velocities (rv) can be defined as the velocity area that constitutes every velocity $\mathbf{v}_A$ of the agent that is reachable considering the previously selected velocity vector and the motion capabilities of the robot. reachable avoidance velocities (rav) can be obtained by subtracting the vo from the rv set. figure 2 represents the steps of the motion-planning algorithm. the main difference between the algorithms is the method for selecting the robot's velocity vector from the rav set.

the $\epsilon$cca is an extended version of the reciprocal velocity obstacle (rvo) algorithm [14] that uses the kinodynamic constraints of the robot. the method generates an appropriate solution for the multi-robot collision avoidance problem in a complex environment. the computational time plays an important role in this algorithm. the whole environment of the agent is divided into a grid-based map. the agent selects a collision-free velocity vector using both convex and non-convex optimisation algorithms [15].

figure 1. velocity obstacle method.
figure 2. steps of the whole vo algorithm.

the probabilistic velocity obstacle method is also an extended version of the rvo method [14], using the time-scaling method and bayesian decomposition. this method demonstrates better performance in terms of traversal times than the existing bound-based methods. the algorithm was tested using simulation results [16].
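to make the set-based definition in (1) concrete, the following python sketch tests whether a candidate velocity lies inside the vo cone of a single obstacle. it is a minimal illustration, not the authors' implementation: the obstacle is inflated to a disc of combined radius r (the definition in (1) treats the agent and the obstacle as points), and the velocities are assumed constant over a finite horizon.

```python
import numpy as np

def in_velocity_obstacle(p_a, v_a, p_b, v_b, r, horizon=10.0):
    """Return True if candidate velocity v_a lies in the VO cone of obstacle B.

    Implements the collision test behind (1): with constant velocities, the
    obstacle position relative to the agent is p_rel - v_rel * t; a collision
    occurs if this comes within the combined radius r for some t in (0, horizon].
    """
    p_rel = np.asarray(p_b, float) - np.asarray(p_a, float)   # obstacle relative to agent
    v_rel = np.asarray(v_a, float) - np.asarray(v_b, float)   # agent velocity relative to obstacle
    speed2 = float(v_rel @ v_rel)
    if speed2 == 0.0:                 # no relative motion: collision only if already inside
        return float(p_rel @ p_rel) <= r * r
    # time of closest approach, clipped to the considered horizon
    t_star = min(max(float(p_rel @ v_rel) / speed2, 0.0), horizon)
    closest = p_rel - v_rel * t_star
    return float(closest @ closest) <= r * r

# a velocity aimed straight at a static obstacle lies inside its VO cone
print(in_velocity_obstacle([0, 0], [1, 0], [3, 0], [0, 0], r=0.5))  # True
```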
the collision avoidance under bounded localisation uncertainty method [17] introduced convex hull peeling to bound the localisation error. this method achieves a tighter bound than the previously introduced multi-robot collision avoidance with localisation uncertainty method [18]. a particle filter is used for the robot localisation problem, and convex polygons are generated as the robot footprints. the algorithm ensures that the robot is inside this convex polygon with a probability of $1 - \varepsilon$. a time truncation is also used in the algorithm because it supports the velocity selection even in a crowded, complex environment.

the directive circle (dc) method is an extended version of the vo method [19], [20]. in this algorithm, the velocity of the robot is selected from the dc, whose radius is given by the maximum velocity of the agent. respecting the kinematic constraints of the agent, the best solution is selected from the dc in the optimal direction towards the target position. using the dc method, local minima situations are avoided.

all the presented reactive motion-planning algorithms assume a complete set of information on the position and the velocity vectors of the obstacles that occur in the robot's workspace. the main advantage of the method introduced here is that the uncertainty of the measured sensor information can be taken into consideration, and the novel motion-planning algorithm can generate collision-free target reaching for the agent even when the data is inaccurate.

2.2. artificial potential field method

the apf method is an often-used reactive motion-planning algorithm. the main concept is to calculate the summation of the attractive (between the robot and the target) and repelling (between the agent and the obstacles) forces [21], [22]. one weakness of this algorithm is that sometimes only a local optimum can be found. the algorithm has also been developed for unmanned aerial vehicles [23], while human-robot interaction was also simulated using the apf motion-planning method by using the motion characteristics of household animals [24].

the steps of the apf method are as follows. during motion planning, in every sampling time step, the force $\mathbf{A_r}$ influences the motion of the agent. this force depends on the repelling ($\mathbf{F}_{\mathrm{ar}_i}$) and the attractive ($\mathbf{F}_{\mathrm{rc}}$) forces. the closer the robot is to the obstacle, the larger the magnitude of the repelling force. the repelling force can be calculated by

$\mathbf{F}_{\mathrm{ar}_i} = \eta \sqrt{\dfrac{1}{D_{\mathrm{ra}_i}} + \dfrac{1}{D_{\mathrm{ra\,max}}}} \, \dfrac{\mathbf{AR}_i}{D_{\mathrm{ra}_i}^2} \,, \quad (3)$

where $D_{\mathrm{ra}_i}$ denotes the distance between the robot and the obstacle, $\mathbf{AR}_i$ is the vector between the obstacle and the agent, $\eta$ is a specific parameter that identifies the role of the repelling force in the motion-planning algorithm and $D_{\mathrm{ra\,max}}$ is the largest distance that should be considered in the motion-planning algorithm, which can be calculated as

$D_{\mathrm{ra\,max}} = v_{\max} \, T_s \,, \quad (4)$

where $v_{\max}$ is the maximum velocity of the robot and $T_s$ denotes the sampling time. the attractive force can be calculated as

$\mathbf{F}_{\mathrm{rc}} = \xi \, \mathbf{RC} \,, \quad (5)$

where $\mathbf{RC}$ is the vector between the robot and the target and $\xi$ is the parameter of the attractive force (depending on the usage of the algorithm). the force that influences the motion of the robot can be calculated as the sum of the repelling forces of the obstacles in the workspace and the attractive force:

$\mathbf{A_r} = \sum_{i=1}^{m} \mathbf{F}_{\mathrm{ar}_i} + \mathbf{F}_{\mathrm{rc}} \,. \quad (6)$

figure 3 illustrates the different forces presented.
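the force construction in (3)-(6) can be summarised in a few lines of python. the sketch below follows the notation above; the grouping of the square-root term in (3) is an interpretation of the typeset formula, and restricting the repelling force to obstacles closer than $D_{\mathrm{ra\,max}}$ is an assumption consistent with (4). the example values $\eta = 2$ and $\xi = 0.1$ are the ones used later in section 5.

```python
import numpy as np

def apf_total_force(p_a, p_goal, obstacle_positions, eta=2.0, xi=0.1,
                    v_max=1.0, t_s=0.1):
    """Total force A_r acting on the agent, following (3)-(6)."""
    p_a = np.asarray(p_a, float)
    d_ra_max = v_max * t_s                                  # (4): largest distance considered
    a_r = xi * (np.asarray(p_goal, float) - p_a)            # (5): attractive force along RC
    for p_b in obstacle_positions:
        ar = p_a - np.asarray(p_b, float)                   # vector from obstacle to agent
        d = float(np.linalg.norm(ar))
        if 0.0 < d < d_ra_max:                              # assumed influence radius, cf. (4)
            a_r += eta * np.sqrt(1.0 / d + 1.0 / d_ra_max) * ar / d ** 2   # (3)
    return a_r                                              # (6)

def apf_velocity_update(v_prev, a_r, mass, t_s):
    """Velocity update via Newton's second law, following (7)-(9) below."""
    return np.asarray(v_prev, float) + (a_r / mass) * t_s
```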
there is one obstacle (b1) in the workspace at position $\mathbf{p}_{B_1}$. the agent is at position $\mathbf{p}_A$ at this point, where the summation of the forces can be checked. if the mass of the agent is known, then the acceleration can be calculated using newton's second law:

$\mathbf{a} = \dfrac{\mathbf{A_r}}{m} \,, \quad (7)$

where $m$ is the mass of the robot. the change in the velocity vector can be calculated if the force and the sampling time are known:

$\Delta\mathbf{v} = \mathbf{a} \, T_s \,. \quad (8)$

so the actual velocity can be calculated using the previous velocity and the change in velocity:

$\mathbf{v}_{\mathrm{new}} = \mathbf{v}_{\mathrm{prev}} + \Delta\mathbf{v} \,. \quad (9)$

3. uncertainty calculation using measurement data

in previous studies, all the uncertainties of the obstacles were constant throughout the algorithm [25]. in the present study, they will be adjusted using the changes in the velocity vectors of the obstacles, the actual distances of the obstacles from the robot and the magnitudes of the obstacles' velocity vectors.

figure 3. apf method.

the uncertainties can be calculated from the probabilities of the previously introduced parameters. the main concept behind this method is that the measured information has lower reliability if the obstacles are far from the robot. first, the distance-based term is generated:

$P_{\mathrm{dist}_i} = \begin{cases} 1 - \dfrac{\mathrm{dist}_{OR_i}}{v_{\max} T_u} & \text{if } \mathrm{dist}_{OR_i} < v_{\max} T_u \\ 0 & \text{otherwise} \end{cases} \,, \quad (10)$

where $T_u$ is the uncertainty time parameter, $P_{\mathrm{dist}_i}$ is the distance-based probability term and $\mathrm{dist}_{OR_i}$ is the actual distance between the robot and the obstacle $B_i$. the magnitude of the velocity vector of the obstacle also plays a significant role in generating the uncertainties; the higher the velocity of the obstacle, the smaller the reliability of the available information on the obstacle:

$P_{\mathrm{mv}_i} = \begin{cases} 1 - \dfrac{\lVert \mathbf{v}_{B_i} \rVert}{v_{\max}} & \text{if } \lVert \mathbf{v}_{B_i} \rVert < v_{\max} \\ 0 & \text{otherwise} \end{cases} \,, \quad (11)$

where $\lVert \mathbf{v}_{B_i} \rVert$ refers to the actual magnitude of the velocity of the obstacle $B_i$ ($\lVert \cdot \rVert$ is the 2-norm, i.e. the euclidean norm) and $P_{\mathrm{mv}_i}$ is the velocity-based probability term. the change in the obstacle's velocity vector also influences the magnitude of the uncertainties. the change in velocity can be calculated for each obstacle:

$CV_i = \lVert \mathbf{v}_{B_i,\mathrm{new}} - \mathbf{v}_{B_i,\mathrm{old}} \rVert \,, \quad (12)$

where $\mathbf{v}_{B_i,\mathrm{new}}$ is the actual velocity of the obstacle, $\mathbf{v}_{B_i,\mathrm{old}}$ is the previous velocity of the obstacle and $CV_i$ denotes the change in the obstacle's velocity:

$P_{\mathrm{cv}_i} = \begin{cases} 1 - \dfrac{CV_i}{2 v_{\max}} & \text{if } CV_i < v_{\max} \\ 0 & \text{otherwise} \end{cases} \,, \quad (13)$

where $P_{\mathrm{cv}_i}$ is the probability term depending on the change in the velocity vector of the obstacle. the probability for obstacle $B_i$ can be generated as

$P_i = \dfrac{P_{\mathrm{dist}_i} + P_{\mathrm{mv}_i} + P_{\mathrm{cv}_i}}{3} \,. \quad (14)$

the uncertainty parameter can be calculated from this probability:

$\alpha_i = 1 - P_i \,, \quad (15)$

where the uncertainty parameter $\alpha_i$ must be calculated for every obstacle ($i = 1 \ldots m$ if there are $m$ obstacles in the environment; if $\alpha_i = 0$, then there is no measurement uncertainty). a minimal code sketch of this calculation is given after the list of excluded obstacle situations below.

4. velocity selection based on motion-planning algorithms

4.1. precheck algorithm

during motion planning, the agent has to consider only those obstacles that fulfil the precheck algorithm. two obstacle situations are excluded:
• obstacles that will cross the path of the agent only in the distant future,
• obstacles that are at a considerable distance from the agent.
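as anticipated at the end of section 3, the uncertainty parameter in (10)-(15) reduces to a few lines of code. the sketch below is a direct transcription of those formulas and is an illustration only, not the authors' implementation.

```python
import numpy as np

def uncertainty_alpha(dist_or, v_b_new, v_b_old, v_max, t_u):
    """Uncertainty parameter alpha_i of one obstacle, following (10)-(15)."""
    # (10): distance-based probability term
    p_dist = 1.0 - dist_or / (v_max * t_u) if dist_or < v_max * t_u else 0.0
    # (11): velocity-magnitude-based probability term
    speed = float(np.linalg.norm(v_b_new))
    p_mv = 1.0 - speed / v_max if speed < v_max else 0.0
    # (12)-(13): probability term based on the change of the velocity vector
    cv = float(np.linalg.norm(np.asarray(v_b_new, float) - np.asarray(v_b_old, float)))
    p_cv = 1.0 - cv / (2.0 * v_max) if cv < v_max else 0.0
    p_i = (p_dist + p_mv + p_cv) / 3.0   # (14)
    return 1.0 - p_i                     # (15)

# a static obstacle right next to the robot is fully reliable: alpha_i = 0
print(uncertainty_alpha(0.0, [0.0, 0.0], [0.0, 0.0], v_max=1.0, t_u=5.0))  # 0.0
```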
for all obstacles, the minimum distance and the time at which the agent and the obstacle are closest to each other during their motion must be calculated:

$t_{\min A,B_i} = -\dfrac{(\mathbf{p}_A - \mathbf{p}_{B_i}) \cdot (\mathbf{v}_A - \mathbf{v}_{B_i})}{\lVert \mathbf{v}_A - \mathbf{v}_{B_i} \rVert} \,, \quad (16)$

where $t_{\min A,B_i}$ denotes the time at which the agent and the obstacle are closest to each other; the nearest point lies in the past if the value of this parameter is negative. the minimal distance can be calculated as follows:

$d_{\min A,B_i} = \lVert (\mathbf{p}_A + \mathbf{v}_A t_{\min A,B_i}) - (\mathbf{p}_{B_i} + \mathbf{v}_{B_i} t_{\min A,B_i}) \rVert \,. \quad (17)$

so, only those obstacles that fulfil the following inequalities must be considered:

$0 < t_{\min A,B_i} < 2\,T_{\mathrm{precheck}} \quad \text{and} \quad d_{\min A,B_i} < v_{\max} T_{\mathrm{precheck}} \,, \quad (18)$

where $v_{\max}$ denotes the maximum velocity of the agent and $T_{\mathrm{precheck}}$ is a parameter of the algorithm that must be tuned. the experiments in this study demonstrate that if the value of the $T_{\mathrm{precheck}}$ parameter is too small, the generated path is not smooth enough. the precheck algorithm is illustrated in figure 4: when there is a moving obstacle in the robot's workspace, the minimal distance and the corresponding time point can be calculated for the moment when the obstacle and the agent are closest to each other.

4.2. cost-function-based velocity selection using the extended vo method

the safety velocity obstacle method was defined in a previous study [26]. in that method, a cost function was used in which different aspects (safety, speed) influenced the motion planning. this algorithm is extended here with a heading parameter, which provides information on the orientation of the agent in relation to the target position, and with the changing uncertainty parameter.

figure 4. precheck algorithm.

at every time step, the nearest distance is calculated between the vo cone and the investigated velocities:

$D_s(\mathbf{v}_A, VO_i) = \min \left\{ \min_{\mathbf{v}_{VO} \in VO} \lVert \mathbf{v}_A - \mathbf{v}_{VO} \rVert ,\; D_{\max} \right\} \,, \quad (19)$

where $D_{\max}$ is the maximum distance that should be considered and $\mathbf{v}_{VO}$ is the nearest point on the vo cone. the $D_s(\mathbf{v}_A)$ value must be normalised into the interval $[0,1]$. the term $C_s(\mathbf{v}_A)$, which will be used in the cost function later, can then be calculated:

$C_s(\mathbf{v}_A, VO_i) = 1 - \dfrac{D_s(\mathbf{v}_A, VO_i)}{D_{\max}} \,. \quad (20)$

$C_g(\mathbf{v}_A)$ will also form part of the cost function:

$C_g(\mathbf{v}_A) = \dfrac{\lVert \mathbf{p}_A + \mathbf{v}_A T_s - \mathbf{p}_{\mathrm{goal}} \rVert}{\lVert \mathbf{p}_A(0) - \mathbf{p}_{\mathrm{goal}} \rVert} \,, \quad (21)$

where $T_s$ is the sampling time, $\mathbf{p}_A(0)$ is the position of the robot at the beginning of the motion and $\mathbf{p}_{\mathrm{goal}}$ is the position of the target. $C_g(\mathbf{v}_A)$ denotes how far the robot will be from the target if it uses the selected velocity, normalised by the distance between the initial position and the desired position.

in this novel algorithm, the prior method is extended by the changing $\alpha_i(t)$ parameters (calculated at every time step) for the different obstacles with respect to the reliability of the obstacles' velocity and position information. the orientation of the agent can also play a role in the cost function. the heading parameter of the cost function can be calculated as follows:

$C_h(\mathbf{v}_A) = \dfrac{\left| angle_{RG} - angle_{IV}(\mathbf{v}_A) \right|}{\pi} \,, \quad (22)$

where $angle_{RG}$ refers to the angle of the vector from the robot position to the target position and $angle_{IV}(\mathbf{v}_A)$ denotes the angle of the investigated velocity vector of the agent. using the difference of these angles, the heading parameter can be calculated (angles are defined in the global coordinate system).
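putting the terms (19)-(22) together, the velocity selection reduces to evaluating, for each candidate in the rav set, the weighted sum defined in (23) below and keeping the minimiser. the python sketch that follows is a hypothetical illustration: the distance of a candidate velocity to each vo cone is assumed to be supplied precomputed by a geometric helper (here passed in as vo_dists), and the angle difference of (22) is wrapped to [-pi, pi].

```python
import numpy as np

def select_velocity(candidates, vo_dists, alphas, p_a, p_a0, p_goal,
                    t_s, beta_d, beta_h, d_max):
    """Pick the candidate velocity from the RAV set with the minimal cost (23).

    candidates: (k, 2) array of admissible velocities;
    vo_dists:   (k, m) array, distance from candidate k to VO cone i (cf. (19));
    alphas:     (m,) array of uncertainty parameters from (15).
    """
    p_a, p_a0, p_goal = (np.asarray(x, float) for x in (p_a, p_a0, p_goal))
    angle_rg = np.arctan2(*(p_goal - p_a)[::-1])        # angle of the robot-to-goal vector
    d0 = np.linalg.norm(p_a0 - p_goal)
    best_v, best_cost = None, np.inf
    for k, v in enumerate(np.asarray(candidates, float)):
        c_s = 1.0 - np.minimum(vo_dists[k], d_max) / d_max          # (19)-(20)
        c_g = np.linalg.norm(p_a + v * t_s - p_goal) / d0           # (21)
        diff = np.arctan2(v[1], v[0]) - angle_rg
        c_h = abs((diff + np.pi) % (2.0 * np.pi) - np.pi) / np.pi   # (22), wrapped
        cost = float(np.dot(alphas, c_s)) + beta_d * c_g + beta_h * c_h  # (23)
        if cost < best_cost:
            best_v, best_cost = v, cost
    return best_v
```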
the whole cost function can be determined using the different parameters:

$\mathrm{cost}(\mathbf{v}_A) = \sum_{i=1}^{m} \alpha_i(t) \, C_s(\mathbf{v}_A, VO_i) + \beta_d \, C_g(\mathbf{v}_A) + \beta_h \, C_h(\mathbf{v}_A) \,, \quad (23)$

where $\beta_d$ is the distance parameter, $\beta_h$ is the heading parameter and $\alpha_i(t)$ denotes the actual calculated uncertainty parameter of an obstacle. the velocity vector with the minimal cost value is selected for the agent. the different parameters of the cost function have a significant impact on the velocity selection, as will be presented in section 5.

4.3. velocity selection based on the extended apf method

the apf method can be extended using the $\alpha_i(t)$ and $\beta_d$ parameters, which were introduced in (15) and (23). the repelling forces must be calculated for every obstacle. the constant $\eta$ parameter is substituted with the changing uncertainty parameter, which has a value for every obstacle:

$\mathbf{F}_{\mathrm{ar}_i} = \alpha_i(t) \sqrt{\dfrac{1}{D_{\mathrm{ra}_i}} + \dfrac{1}{D_{\mathrm{ra\,max}}}} \, \dfrac{\mathbf{AR}_i}{D_{\mathrm{ra}_i}^2} \,, \quad (24)$

where the notations are the same as introduced in (3) (with the index $i$ referring to the $i$-th obstacle). the attractive force $\mathbf{F}_{\mathrm{rc}}$ can be calculated in the same way as in (5) by using the $\beta_d$ parameter instead of $\xi$:

$\mathbf{F}_{\mathrm{rc}} = \beta_d \, \mathbf{RC} \,. \quad (25)$

the final force that influences the movement of the agent can be calculated as the sum of the attractive force and the repelling forces, as presented in (6). after calculating this force, the selectable velocity vector can be obtained using (7), (8) and (9).

5. simulation results

in this section, the simulation results are discussed based on the changing uncertainties.

5.1. two static obstacles

in the first example, there are two static obstacles in the workspace of the agent. initially, using the introduced cost-function-based vo method, the velocity vector that lies exactly in the middle between the two obstacles is selected because the two obstacles have the same uncertainties. this situation is presented in figure 5, in which the vos are shown as grey areas, the blue circle is the selected velocity vector, the target is depicted by a black x, each of the agent's velocity vectors identified through the motion-planning algorithm is represented by a red x and the robot is the red circle.

figure 5. first example: velocity selection based on the extended vo method.

the changes and the magnitudes of the velocities of the obstacles do not influence the calculation of the uncertainties because the two obstacles are static. so, in this case, only the distances between the obstacles and the agent have an impact on the calculation. the velocity vector between the two obstacles is selected, and the distances between the agent and the two obstacles are the same during the motion, resulting in the same uncertainties for both obstacles, as presented in
figure 6 (at each time step, the uncertainties of the two obstacles can be seen next to each other).

this example was also tested using both the original apf method and the extended apf method. using the original apf method introduced in section 2, the agent cannot reach the target position: at the beginning of the motion, the apf method results in a velocity vector that causes a motion in the opposite direction from the target position, as can be seen in figure 7 (the values of the constant parameters were $\eta = 2$ and $\xi = 0.1$). $\mathbf{F}_{\mathrm{ar}_1}$ and $\mathbf{F}_{\mathrm{ar}_2}$ represent the repelling forces of the two obstacles, and $\mathbf{F}_{\mathrm{ar\,sum}}$ is the summation of the repelling forces (the other notations are the same as those introduced in the previous sections). eventually, the summation of the forces becomes a force in the direction of the target position, and the agent then moves towards the target. this sequence is repeated, resulting in an oscillation without reaching the target position.

the example was also tested with the extended apf method introduced in section 4.3. in this case, the reactive motion-planning method can generate a collision-free motion towards the target position. the different forces and the selected velocity vector can be seen in figure 8. using this method, the obstacles' uncertainties change in the same way as shown in figure 6, because the agent executes its motion along the same path between the two obstacles.

5.2. one moving and one static obstacle

in this example, the first obstacle is a moving obstacle and the second obstacle is a static one. while the agent is at a considerable distance, it can select a velocity vector in line with the target position. then, as it gets closer to the obstacles, it selects a velocity vector that results in a manoeuvre next to the static obstacle, because the corresponding probability is higher. the results of the velocity selection in this case are depicted in figure 9, where the path of the robot is presented as a black line.

in this example, the uncertainties of the obstacles are not the same as in the previous example, because the static obstacle has a smaller uncertainty throughout the motion, as presented in figure 10. it can be seen that in the first step the difference between the obstacles' uncertainties is not significant; this is because the moving obstacle has a small velocity magnitude and the distances between the obstacles and the robot are the same at the first step.

figure 6. first example: two static obstacles; changing uncertainties during motion. the uncertainties are the same for both obstacles.
figure 7. first example: original apf method.
figure 8. first example: extended apf method.
figure 9. second example: velocity selection based on the extended vo method.

this example was also simulated with the extended apf method. because the first obstacle has a nonzero velocity vector, this obstacle has a higher uncertainty when using the motion-planning algorithm. however, because the magnitude of the velocity vector is not large compared with the summation of the forces, the agent executes its motion almost directly in line with the target position, as presented in figure 11. so, in this case, there is a difference between the result of the extended vo method and that of the extended apf method, but both of them result in a collision-free motion between the agent and the environment, and using both methods the agent can reach the target position. the uncertainties of the obstacles can also be calculated during the motion of the agent with the extended apf method, although the result will be slightly different from that of the extended vo method.

5.3. three obstacles in front of each other

in the next example, there are three obstacles in front of each other with different velocities (the first obstacle is static, and the others are moving).
figure 13 shows the velocity selection of the vo method next to the first obstacle, and figure 14 presents the velocity selection next to the second obstacle. it can be seen that a velocity vector further away from the second obstacle is selected because that obstacle has a higher velocity. the higher the velocity of the obstacle, the bigger the uncertainty of the obstacle, as depicted in figure 15, in which the uncertainty parameters of the three obstacles are presented. after passing an obstacle, its uncertainty is reduced. this figure also shows that not all the obstacles need to be considered throughout the motion, only those that influence the motion of the robot and that fulfil the precheck algorithm at the sampling time.

the $\beta_h$ parameter plays a significant role in the cost-function-based velocity selection as a factor in the target-reaching strategy. in the previous examples, the value of $\beta_h$ was 0.3. if this parameter has a higher value, it has a larger impact on the motion than the uncertainties of the obstacles, as presented in figure 16. in this case ($\beta_h = 0.6$), the agent executes the motion as close to the obstacle as the collision-free motion-planning algorithm allows.

figure 10. second example: one moving (first) and one static (second) obstacle; changing uncertainties during motion using the extended vo method. the moving obstacle has a higher uncertainty parameter.
figure 11. second example: velocity selection based on the extended apf method.
figure 12. second example: one moving (first) and one static (second) obstacle; changing uncertainties during motion using the extended apf method. the moving obstacle has a higher uncertainty parameter.
figure 13. third example: velocity selection at the first obstacle using the extended vo method.

the values of the parameters depend on the usage of the algorithm; different parameter values generate different results with the collision-free motion-planning algorithm. however, it is not possible to find a solution that takes into account every aspect of the motion-planning problem, so a sub-optimal solution must always be calculated.

this example was also tested using the extended apf method. only the first obstacle has to be considered by the motion-planning algorithm, because only the first obstacle fulfils inequality (18) (the second and third obstacles are at a considerable distance from the agent). in this case, the algorithm selects a velocity vector for the agent that results in a movement in the exact direction of the first obstacle (because the summation of the forces is in line with the movement of the agent). so, after a few steps, the agent reaches and collides with the first obstacle. in this example, the extended apf method cannot guarantee that the robot reaches the target collision-free.

5.4. standard vo method and the novel motion-planning method

the introduced novel motion-planning algorithm was also compared with the original vo method, because the basic concept of the motion-planning algorithm is based on this algorithm. the comparison used an example with two moving obstacles in the robot's workspace, one of which has a changing velocity vector that results in a higher uncertainty in the motion of the robot. figure 17 shows the final path of the robot using the different motion-planning algorithms.
it can be seen that using the standard vo method, which provides the fastest target-reaching concept, the agent executes a tangential motion next to the first obstacle (this is also presented in a video [27]). however, if the uncertainties in the measurement data are also considered, the target-reaching task is solved by generating a path that stays relatively far from the first obstacle (the one with changing velocity). the motion of the robot is also presented in a video [28].

figure 18 represents the distances between the agent and the obstacles using the different motion-planning algorithms. as has already been mentioned, when using the standard vo method, there is a time point at which the distance between the robot and the obstacle is zero. in the case of the novel motion-planning algorithm, the uncertainties can be taken into account, so the agent can move safely towards the target position. with the tangential movement, even a tiny measurement or system noise in the process would immediately cause a collision. so, in such situations, it is better to use the novel motion-planning algorithm, which generates a collision-free motion for the agent in every situation.

figure 14. third example: velocity selection at the second obstacle using the extended vo method.
figure 15. third example: three obstacles in front of each other with different velocities; changing uncertainties during motion using the extended vo method.
figure 16. the resulting motion paths of the robot with different heading parameters; in the first example $\beta_h = 0.3$, in the second example $\beta_h = 0.6$.
figure 17. final paths of the robot using the standard vo method and the novel motion-planning algorithm.

6. coppeliasim simulation environment

coppeliasim (v-rep version) is suitable for testing robotic arms as well as holonomic and non-holonomic mobile robots using reactive motion-planning algorithms. different types of obstacles can occur in the workspace of the robot, and a wide range of obstacle models can be used in the simulation environment.

the results of the introduced methods were tested in the coppeliasim simulation environment, as presented in [29]. the agent is an omnidirectional robot (blue). this type of mobile robot is often used because it can execute its motion in any direction from its actual position. in the example in figure 19, there are two static obstacles (grey cylinders) in the workspace of the agent. the main goal of the robot is to reach the target position without colliding with the two obstacles, as presented in section 5.1.

7. conclusions

in this paper, novel motion-planning methods were introduced using the basics of the vo and apf methods. the mobile robot was able to execute collision-free motion planning after calculating the changing uncertainties of the obstacles. these uncertainties depend on the magnitudes of the velocity vectors of the obstacles, the distances between the obstacles and the robot, and the changes in the obstacles' velocities. the vo-based method can generate collision-free motion using a cost-function-based optimisation method. the basic apf method was also extended by using the uncertainty and distance parameters in the algorithm. the extended apf method can generate a better solution than the original apf method, but there are some situations in which it cannot provide a target-reaching solution.
in these cases, the cost-function-based vo method was able to guarantee that the target was reached. the parameters of the apf method could also be calculated in another way, thus solving the local minima problem [30], [31].

the introduced algorithm could be implemented in a real robotic system using an omnidirectional mobile robot. the state estimation of the obstacles that occur in the workspace of the robot could be solved using an extended particle filter algorithm; in this case, the position and the velocity vectors of the obstacles could be estimated at every sampling time [32]. to achieve this, a lidar sensor can be used.

acknowledgement

this paper was supported by the únkp-20-3 new national excellence programme of the ministry for innovation and technology from the national research, development and innovation fund. the research reported in this paper and carried out at the budapest university of technology and economics has been supported by the national research development and innovation fund (tkp2020 institution excellence sub-programme, grant no. bme-ie-mifm) based on the charter issued by the national research development and innovation office under the auspices of the ministry for innovation and technology.

references

[1] s. panov, s. koceski, metaheuristic global path planning algorithm for mobile robots, int. j. of reasoning-based intelligent systems 7 (2015), p. 35. doi: 10.1504/ijris.2015.070910
[2] p. m. hsu, c. l. lin, m. y. yang, on the complete coverage path planning for mobile robots, j. of intelligent and robotic systems: theory and applications 74 (2014), pp. 945-963. doi: 10.1007/s10846-013-9856-0
[3] e. masehian, y. katebi, sensor-based motion planning of wheeled mobile robots in unknown dynamic environments, j. of intelligent and robotic systems: theory and applications 74 (2014), pp. 893-914. doi: 10.1007/s10846-013-9837-3
[4] m. g. mohanan, a. salgoankar, a survey of robotic motion planning in dynamic environments, robotics and autonomous systems 100 (2018), pp. 171-185. doi: 10.1016/j.robot.2017.10.011
[5] p. raja, s. pugazhenthi, optimal path planning of mobile robots: a review, int. j. of physical sciences 7 (9), february 2012, pp. 1314-1320. doi: 10.5897/ijps11.1745
[6] s. petti, t. fraichard, safe motion planning in dynamic environments, proc. of the ieee rsj int. conf. intell. robot. syst., edmonton, canada, 2-6 august 2005, pp. 2210-2215.
[7] t. fraichard, h. asama, inevitable collision states: a step towards safer robots, advanced robotics 18 (2004), pp. 1001-1024. doi: 10.1163/1568553042674662
[8] l. martinez-gomez, t. fraichard, collision avoidance in dynamic environments: an ics-based solution and its comparative evaluation, proc. of the ieee int. conf. on robotics and automation, kobe, japan, 12-17 may 2009, pp. 100-105. doi: 10.1109/robot.2009.5152536
[9] d. fox, w. burgard, s. thrun, the dynamic window approach to collision avoidance, ieee robot. autom. mag. 4 (1997), pp. 23-33. doi: 10.1109/100.580977

figure 18. distances between the robot and the obstacles using the different motion-planning algorithms.
figure 19. coppeliasim simulation environment [29].
https://doi.org/10.1504/ijris.2015.070910 https://doi.org/10.1007/s10846-013-9856-0 https://doi.org/10.1007/s10846-013-9837-3 https://doi.org/10.1016/j.robot.2017.10.011 https://doi.org/10.5897/ijps11.1745 https://doi.org/10.1163/1568553042674662 https://doi.org/10.1109/robot.2009.5152536 https://doi.org/10.1109/100.580977 acta imeko | www.imeko.org september 2021 | volume 10 | number 3 | 60 [10] m. seder, i. petrovic, dynamic window based approach to mobile robot motion control in the presence of moving obstacles, proc. of the int. conf. on robotics and automation, rome, italy, 10-14 april 2007, pp. 1986-1991. doi: 10.1109/robot.2007.363613 [11] m. mujahed, d. fischer, b. mertsching, admissible gap navigation: a new collision avoidance approach, robotics and autonomous systems 103 (2018), pp. 93-110. doi: 10.1016/j.robot.2018.02.008 [12] n. hacene, b. mendil, autonomous navigation and obstacle avoidance for a wheeled mobile robots: a hybrid approach, int. j. of computer applications 81 (2013), pp. 34-37. online [accessed 20 september 2021] https://research.ijcaonline.org/volume81/number7/pxc3892285 .pdf [13] p. fiorini, z. shiller, motion planning in dynamic environments using velocity obstacles, int. j. of robotics research 17 (1998), pp. 760-772. doi: 10.1177/027836499801700706 [14] j. van den berg, m. lin, d. manocha, reciprocal velocity obstacles for real-time multi-agent navigation, proc. of the ieee int. conf. on robotics and automation, pasadena, usa, 19-23 may 2008, pp. 1928-1935. doi: 10.1109/robot.2008.4543489 [15] j. alonso-mora, p. beardsley, r.. siegwart, cooperative collision avoidance for nonholonomic robots, ieee transactions on robotics 34 (2018), pp. 404-420. doi: 10.1109/tro.2018.2793890 [16] b. gopalakrishnan, a. k. singh, m. kaushik, k. m. krishna, d. manocha, prvo: probabilistic reciprocal velocity obstacle for multi robot navigation under uncertainty, proc. of the ieee int. conf. on intelligent robots and systems, vancouver, canada, 24 – 28 september 2017, pp. 1089-1096. doi: 10.1109/iros.2017.8202279 [17] d. claes, d. hennes, k. tuyls, w. meeussen, collision avoidance under bounded localization uncertainty, proc. of the ieee int. conf. on intelligent robots and systems, vilamoura, portugal, 7 – 12 october 2012, pp. 1192-1198. doi: 10.1109/iros.2012.6386125 [18] d. hennes, d. claes, w. meeussen, k. tuyls, multi-robot collision avoidance with localization uncertainty, proc. of the 11th int. conf. on autonomous agents and multiagent systems, aamas 2012: innovative applications track, valencia, spain, 4-8 june 2012, pp. 672-679. [19] e. masehian, y. katebi, robot motion planning in dynamic environments with moving obstacles and target, int. j. of mechanical systems science and engineering 1 (2007), pp. 107112. online [accessed 20 september 2021] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.193. 6430&rep=rep1&type=pdf [20] e. masehian, y. katebi, sensor-based motion planning of wheeled mobile robots in unknown dynamic environments, j. of intelligent and robotic systems: theory and applications 74 (2014), pp. 893914. doi: 10.1007/s10846-013-9837-3 [21] a. masoud, a harmonic potential approach for simultaneous planning and control of a generic uav platform, j. intell. robot. syst. 65 (2012), pp. 153-173. doi: 10.1007/s10846-011-9570-8 [22] n. malone, h. t. chiang, k. lesser, m. oishi, l. tapia, hybrid dynamic moving obstacle avoidance using a stochastic reachable set-based potential field, ieee transactions on robotics 33 (2017), pp. 1124-1138. 
doi: 10.1109/tro.2017.2705034s
[23] h. chiang, n. malone, k. lesser, m. oishi, l. tapia, path-guided artificial potential fields with stochastic reachable sets for motion planning in highly dynamic environments, proc. of the ieee int. conf. on robotics and automation, seattle, usa, 26-30 may 2015, pp. 2347-2354. doi: 10.1109/icra.2015.7139511
[24] b. kovács, g. szayer, f. tajti, m. burdelis, p. korondi, a novel potential field method for path planning of mobile robots by adapting animal motion attributes, robotics and autonomous systems 82 (2016), pp. 24-34. doi: 10.1016/j.robot.2016.04.007
[25] z. gyenes, e. g. szadeckzy-kardoss, rule-based velocity selection for mobile robots under uncertainties, proc. of the 24th int. conf. on intelligent engineering systems, reykjavík, iceland, 8-10 july 2020, pp. 127-132. doi: 10.1109/ines49302.2020.9147191
[26] z. gyenes, e. g. szadeckzy-kardoss, motion planning for mobile robots using the safety velocity obstacles method, proc. of the 19th international carpathian control conference, szilvásvárad, hungary, 28-31 may 2018, pp. 389-394. doi: 10.1109/carpathiancc.2018.8473397
[27] acta imeko standard vo, online [accessed 17 september 2021] https://youtu.be/jp6m3ngwjpk
[28] acta imeko uncertainties, online [accessed 17 september 2021] https://youtu.be/mxz1om3bzys
[29] omnirobot moves between two static obstacles, online [accessed 17 september 2021] https://www.youtube.com/watch?v=hjcntdks6-o&feature=youtu.be
[30] m. g. park, m. c. lee, a new technique to escape local minimum in artificial potential field based path planning, ksme int. j. 17 (2003), pp. 1876-1885. doi: 10.1007/bf02982426
[31] g. guerra, d. efimov, g. zheng, w. perruquetti, avoiding local minima in the potential field method using input-to-state stability, control engineering practice 55 (2016), pp. 174-184. doi: 10.1016/j.conengprac.2016.07.008
[32] z. gyenes, e. g. szadeckzy-kardoss, particle filter-based perception method for obstacles in dynamic environment of a mobile robot, proc. of the 25th ieee international conference on methods and models in automation and robotics, miedzyzdroje, poland, 23-26 august 2021, p. 6 (in press).
https://doi.org/10.1109/robot.2007.363613 https://doi.org/10.1016/j.robot.2018.02.008 https://research.ijcaonline.org/volume81/number7/pxc3892285.pdf https://research.ijcaonline.org/volume81/number7/pxc3892285.pdf https://doi.org/10.1177/027836499801700706 https://doi.org/10.1109/robot.2008.4543489 https://doi.org/10.1109/tro.2018.2793890 http://doi.org/10.1109/iros.2017.8202279 http://dx.doi.org/10.1109/iros.2012.6386125 http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.193.6430&rep=rep1&type=pdf http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.193.6430&rep=rep1&type=pdf https://doi.org/10.1007/s10846-013-9837-3 https://doi.org/10.1007/s10846-011-9570-8 https://doi.org/10.1109/tro.2017.2705034s https://doi.org/10.1109/icra.2015.7139511 https://doi.org/10.1016/j.robot.2016.04.007 http://dx.doi.org/10.1109/ines49302.2020.9147191 https://doi.org/10.1109/carpathiancc.2018.8473397 https://youtu.be/jp6m3ngwjpk https://youtu.be/mxz1om3bzys https://www.youtube.com/watch?v=hjcntdks6-o&feature=youtu.be https://www.youtube.com/watch?v=hjcntdks6-o&feature=youtu.be https://doi.org/10.1007/bf02982426 https://doi.org/10.1016/j.conengprac.2016.07.008 building information modelling and digital fabrication for the valorization of archival heritage acta imeko issn: 2221-870x march 2022, volume 11, number 1, 1 8 acta imeko | www.imeko.org march 2022 | volume 11 | number 1 | 1 building information modelling and digital fabrication for the valorization of archival heritage giulia bertola1 1 politecnico di torino, dad, modlab arch, viale mattioli 39, 10125, torino, italy section: research paper keywords: architectural archives; archives; bim modelling; digital fabrication; rapid prototyping citation: giulia bertola, building information modelling and digital fabrication for the valorization of archival heritage, acta imeko, vol. 11, no. 1, article 18, march 2022, identifier: imeko-acta-11 (2022)-01-18 section editor: fabio santaniello, university of trento, italy received march 7, 2021; in final form march 29, 2022; published march 2022 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: giulia bertola, e-mail: giulia.bertola@polito.it 1. introduction through this contribution, the author intends to deepen some aspects and themes already addressed previously through two different papers both focused on the project of "due case a capri" by aldo morbelli (1942): the first one focused on the role of redesign and the use of traditional and manual techniques (figure 1) [1] for the understanding of the project, the second on the realization of a real model through the application of digital fabrication techniques from a digital reconstructive model using the bim revit® software [2]. archives of 20th century architecture represent today an important source for scholars to gain a profound understanding of architectural designs. the interpretation, use, and sharing of archive materials are activities aimed at deepening the knowledge of contemporary masters and the different architectural movements [3], [4]. building information modelling (bim) is applied here as a tool to rediscover, analyse, interpret and highlight architectural design, thus contributing to the recognition of architectural archives as cultural heritage. 
the author, after a careful analysis of the project drawings of the villas, obtained the 3d model by working directly with revit® on the archive drawings, and then focused the final attention on the different methods of rapid prototyping and on the realization of a real scale model. the realization of 3d digital models and real models of architectures that have never been built represents an important contribution to the study and knowledge of archival drawings and morphological aspects. if the virtual space facilitates and increases our perceptual knowledge of architecture, allowing a different way of understanding space, the physical model produced by a 3d printer allows an easier reading of architectural morphology [5].

2. drawing and architectural language of aldo morbelli and the project for the "due case a capri"

aldo morbelli (1903-1963) was an italian architect, born in orsara bormida in piedmont. after graduating from the faculty of architecture in rome, he founded his professional studio in turin in the 1930s. during his professional activity, aldo morbelli produced several architectural projects concerning single-family houses, social housing for the ina-casa plan, entertainment buildings, company representative offices and post-war reconstruction works. in addition to these projects, he also worked on interior design and furniture design studies.

aldo morbelli's archive is kept in the biblioteca di architettura "roberto gabetti" of politecnico di torino and contains several documents relating to his projects, both completed and unfinished. despite the fact that many of his projects obtained recognition in internationally renowned magazines and that critics dedicated a monographic issue of l'architettura italiana to his single-family houses, to date the figure of aldo morbelli is still little studied [6]. he affirms himself through a poetics that tends towards a process of formal simplification and a careful search for balance between the project and its insertion in the context (figure 2). this is particularly evident in the project of the "due case a capri". the buildings, never realized and called by morbelli the large house (located lower down) and the little house (higher up), were to be situated on a plot among the olive groves at the foot of mount tuoro, in the region "la cercola". the two buildings face west because of the steep slope of the land.
he breaks the compactness of the volumes through references to the local architecture: the insertion of segmental arches and the choice of colours and materials for walls and coatings, such as white. in the interior spaces, too, there is a clear intention to give the rooms a plastic sense: the walls and the distributive elements, such as the "s" staircase of the large house, adapt to the plan (figure 3). the little house is on three levels: a basement floor for servants and storage; a ground floor with living room and kitchen, accessed through a sloping wall with an arched entrance; and a first floor with two bedrooms. the large house, instead, is on two levels only: the lower level hosts the living area, with the double-height living and dining area occupying the whole sea-view front and the services developing towards the mountain, while the upper level hosts the sleeping area, with four bedrooms. looking at the drawings, it can be seen that the architect wanted to experiment with different types of roofing: horizontal flat and sloping flat for the little house, barrel and net vaults for the large house.

3. the bim model generated by archive drawings
this contribution aims to show a methodology for the management, preservation and communication of archival material by integrating data with 3d modelling techniques. this type of documentation, if inserted within a virtual database, can become an active component of the archive itself, contributing to the general knowledge of lost or never-built architectural artefacts. this theme opens a series of reflections on the philological interpretation of the unrepresented parts of an architectural project and on the translation of the drawings of an unrealized architecture into a three-dimensional model. digital modelling from archival sources involves investigative work: hypotheses for the reconstruction of the sketches, checks of the consistency of the scale drawings, and proposals for integrating missing data. the research source, in this case, is the analogue documentary heritage, consisting of graphic, iconographic and textual sources. for each reconstructive model it is necessary to identify the phase of the project to which it refers, according to the cognitive values to be emphasized through the research work [7].

figure 1. interpretative drawings for the project of the "due case a capri" (drawings by giulia bertola).
figure 2. a. morbelli, real-life drawings of mediterranean architecture (archives of the library of architecture "roberto gabetti", polytechnic of turin; in the following, archivi bca. fondo aldo morbelli).
figure 3. a. morbelli, study sketches for the two houses in capri (archivi bca. fondo aldo morbelli).

archives could therefore become a place in which to build, communicate and share knowledge within a complex system of relationships made up of different actors (institutions, curators, scholars, the public), types of heritage (material, immaterial, real three-dimensional artefacts) and digital technologies (interaction, immersion, virtual and augmented reality) [8]. the construction of the model makes it possible to produce a variety of outputs, including real three-dimensional models, to be made available to different users.
as underlined by the london charter (2008) [9], it is the responsibility of the scientific community to ensure the sustainability of digital heritage, for example by promoting the use of open formats and by favouring as much as possible the access to data by the community of users, experts or not. these operations are well suited to bim tools: they provide a good level of interactivity and account for the stages in the temporal evolution of the project. a bim information system can represent the "sustainable container" and the appropriate means for the transmission of knowledge (for example through advanced building information exchange systems). in an hbim environment, as in this case study, the operator must be able to manage complex and heterogeneous survey data (photographs, documents, sketches), master geometrical constructions, and know construction techniques. despite methodological and operational difficulties, historic building information modelling systems allow building a database capable of managing large amounts of multidisciplinary data [10], [11], [12]. bim also offers the possibility of managing the life cycle of architecture, from the first hypotheses to the design and construction phases, in a single model, and can facilitate the creation of models from the archival heritage. significant examples of the use of bim as a tool for interoperability and advanced information management have emerged in the studies by saygi and remondino (2013) [13], which emphasize the semantic enrichment of three-dimensional digital models through the integration of heterogeneous data sets, and in the arches (2018) [14] and inception (2016) [15] projects, which aim to make interoperable information available through multiple services such as websites, digital maps, applications and 3d models. the first is focused on creating an international community of computer scientists and heritage professionals to share experiences, knowledge and skills for the management of digital inventories; the second develops advanced 3d modelling for the access and understanding of european cultural heritage. in particular, the hbim modelling process starts with the documentation of user needs, the identification of a semantic ontology for cultural heritage buildings, and a data structure for an information catalogue integrated with the 3d geometric model. a final interesting study is the cult project (2018) [16], in which a software toolkit was developed to store data from architectural heritage research projects and share them with websites, tourism applications, and bim and gis interfaces.

for the "due case a capri" it was decided to adopt the building information modelling methodology and to build a three-dimensional model using revit 2021®. after scanning and digitizing the original drawings of the project, the documents were brought into the revit 2021® software. to optimize the modelling process, the files in .jpg format were not imported into the model but only linked to it. initially, the plans were linked at 1:200 and 1:500 scale; thanks to these, it was possible to correctly set the geolocation data, create a topographic surface and import contour data. after identifying the correct positioning of the buildings, the technical drawings at 1:100 scale, containing plans, elevations and sections of each villa as well as territorial sections, were inserted.
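calibrating a linked raster against a dimension noted on the sheet reduces to a simple proportion. as a minimal sketch of the arithmetic, independent of any bim software (the dpi, dimension and pixel values below are hypothetical, not taken from the paper):

```python
# estimate the scale factor for a linked raster drawing: a dimension
# noted on the sheet (in metres) is compared with the same distance
# measured on the scanned image (in pixels).

def raster_scale(known_length_m: float, measured_px: float, dpi: float = 300.0):
    """return metres per pixel and the factor by which the linked
    image must be resized so that model units match reality."""
    m_per_px = known_length_m / measured_px      # true metres per pixel
    native_m_per_px = 0.0254 / dpi               # physical pixel size at import
    resize_factor = m_per_px / native_m_per_px   # scale to apply to the link
    return m_per_px, resize_factor

# hypothetical example: a 6.40 m facade measures 512 px on the scan
m_per_px, factor = raster_scale(6.40, 512.0)
print(f"{m_per_px:.4f} m/px, resize the link by {factor:.1f}")
```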
each drawing was scaled with reference to the dimensions noted on it and used as a base on which to set the model. after scaling the images, the most significant work planes were identified and a grid was built for the alignment of plans, elevations and sections (figure 4). the axes of the grid, besides being a useful trace for the construction of all the elements composing the model, are also a tool that facilitates the reading and verification of the original drawings. to be carried out correctly, this phase requires prior knowledge of the main volumes making up the buildings. once the volumes had been identified, the types of walls and floors were defined, attributing to each a specific thickness: perimeter walls (60 cm), partition walls (10 cm, 20 cm) and floors (40 cm) (figure 5). where it was not viable to refer to a system family, in-place masses were used to create the roofs and the curved elements present in some portions of the external walls and on the railing of the internal staircase of the casa grande; all the masses were then converted into surfaces. during the modelling of these elements, some problems emerged related to the rigidity of the bim method when modelling unconventional shapes: the in-place masses were created with the swept blend command, which did not make it possible to faithfully reproduce some of the most characteristic architectural elements of the project, typical of aldo morbelli's poetics. during the final phase of elaboration of the bim model, reference was made to the original physical models, not preserved but documented photographically; to maintain formal coherence, monochrome greyscale rendered views were produced (figure 6).

figure 4. inserting archive images (archivi bca. fondo aldo morbelli) in revit 2021®.

in the elaboration of plans and sections, too, a level of detail corresponding roughly to a 1:200 scale was chosen, in order to avoid distorting the original project by providing incorrect interpretations and additional information. moreover, since this is a preliminary project, without detailed information about structures, stratigraphies and construction details, generic model families were used, avoiding the customization of the library of parametric objects. in the future, this work aims to use a philological approach based on the classification of archive documents: the different sources (sketches, final and executive drawings, documents, articles, photographs, physical models) can be linked to the digital model and referred to at different levels of analysis. each level corresponds to a quantity of information which, added to the previous level, increases the reliability of the reconstructive model with respect to the original drawings. the bim methodology provides for a gradual definition of the 3d model, in geometric accuracy and data content. in some countries these levels coincide with the levels of the 3d model, level of detail (lod), and in others with the levels of information, level of information (loi), used where graphic data are missing, thus suggesting a different relationship between model and real object [17]. these levels can be declined within the framework of historic bim (h-bim) even with partial documentary sources, such as those available for unbuilt architecture; a minimal sketch of such a source-to-level mapping follows.
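purely as an illustration of the classification idea (the categories and scores below are hypothetical, not the author's published scheme), the archive-to-model mapping could be encoded as a small lookup structure:

```python
# hypothetical mapping between archival source types and the reliability
# they lend to a reconstructive model element; values are illustrative.
SOURCE_RELIABILITY = {
    "study sketch":        1,  # conceptual intent only
    "final drawing":       2,  # consolidated geometry at project scale
    "executive drawing":   3,  # dimensions and construction indications
    "photograph of model": 2,  # volumetric confirmation, no measures
}

def element_reliability(sources: list[str]) -> int:
    """reliability of an element = best level among its linked sources."""
    return max((SOURCE_RELIABILITY.get(s, 0) for s in sources), default=0)

print(element_reliability(["study sketch", "final drawing"]))  # -> 2
```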
the concept of the lod development level applied to bim management is based on a linear process that progressively enriches both the model and the information through the different phases: lod 100 represents an a-dimensional conceptual model; lod 300 a three-dimensional model in the executive design phase; lod 350-400 the model implemented for the construction phase; lod 500 the as-built update after the construction phase. in this case, the lod corresponds to the design level at which the architectural design was interrupted [18], [19], [20].

4. digital fabrication and rapid prototyping technologies
this paper focuses on the effectiveness of digital fabrication processes and rapid prototyping techniques for the valorization of archival heritage. in this context, digital fabrication is considered a working method supporting the entire design process for the study of form and the built environment. digital fabrication is a process by which solid objects are created from digital drawings, exploiting different manufacturing techniques; the choice of a technique usually follows considerations of processing speed, cost, material and final aesthetic performance [21].

figure 5. plans, section and elevation of the "casa grande" created with revit 2021®.
figure 6. a. morbelli, photograph of the original model (archivi bca. fondo aldo morbelli) and view rendered with revit 2021®.

the two main categories of prototyping techniques are compared below: subtractive and additive methods of manufacturing. subtractive methods are based on the idea of reproducing an object by sculpting a block, removing material along a predetermined path. this operation is feasible with two types of machine: the computer numerical controlled (cnc) machine and the laser beam machine. whereas the cnc machine is a milling tool, the laser beam machine involves a thermal separation process: the laser beam hits the surface of the material and heats it up to the point where it melts or completely vaporizes; once the beam has completely penetrated the material at a given point, the cutting process begins, and the laser system follows the selected geometry while the material is separated. the additive process, usually called 3d printing, instead constructs solid shapes (usually of small size) by building one layer at a time. nowadays there are several additive manufacturing processes, differing in the materials that can be used and in how they are deposited to create the objects. all 3d printing processes involve the simultaneous collaboration of software, hardware and materials, and have the great advantage, compared to subtractive processes, of being independent of the geometric complexity of the digital model. in addition to the fused deposition modelling (fdm) printing technique, explored in the next paragraph, there are three other main types of 3d printing [22]. two prototyping techniques were used for this work: fdm for the buildings and laser beam machining (lbm) for the ground.

4.1. fused deposition modelling
fdm is the most common 3d printing technology. this method uses a filament (a string of solid material) which melts under the thrust of a heated nozzle. the printer continuously moves the nozzle, laying down the melted material at a precise location, where it instantly cools and solidifies, building up the model layer by layer.
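the slicing software translates each deposited line into a filament feed length through a simple volume balance. as a minimal sketch (standard slicer arithmetic, not taken from the paper; the example values are illustrative, with the 0.1 mm layer height matching the settings used later in table 2):

```python
import math

def filament_feed_length(line_width_mm: float, layer_height_mm: float,
                         path_length_mm: float, filament_dia_mm: float = 1.75) -> float:
    """feed length e such that the filament volume pushed through the
    nozzle equals the volume of the deposited line:
        w * h * l = (pi / 4) * d**2 * e
    """
    line_volume = line_width_mm * layer_height_mm * path_length_mm
    filament_area = math.pi * (filament_dia_mm / 2.0) ** 2
    return line_volume / filament_area

# e.g. a 100 mm line, 0.4 mm wide, at a 0.1 mm layer height
print(f"{filament_feed_length(0.4, 0.1, 100.0):.2f} mm of filament")  # ~1.66 mm
```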
during the construction of these solid shapes it is often necessary to use vertical supports to sustain overhanging parts. supports can be made with water-soluble filaments which, when immersed in water, are removed easily without leaving hard-to-remove residue. the filament used in the fdm process usually requires a specific diameter, strength and other properties; during the extrusion of a polymer, the diameter of the filament must be uniform. to achieve this, the machine must have adjustable screw speed, pressure and temperature; all these parameters are examined and adjusted until an optimal filament diameter is reached. for a smooth filament, a dedicated calibration nozzle is used: for low-temperature materials (polylactic acid, pla; polyethylene, pe) the calibration nozzle is made of copper and the thermal seal of polytetrafluoroethylene; for high-temperature materials (acrylonitrile butadiene styrene, abs; polyamide, pa) the calibration nozzle is made of aluminium [23]. following the printing process, sanding and polishing of the surfaces is often necessary to hide the layers. for this case study the buildings were made of pla and printed with the delta wasp 2040 industrial line 4.0® printer, whose characteristics are shown in table 1 [24].

table 1. general characteristics of delta wasp 2040 industrial line 4.0®.
- system: fused deposition modelling
- printer model: delta wasp 2040 industrial line 4.0®
- max. build size: 200 × 200 × 40 mm³
- accuracy: 0.1-0.2 mm
- materials: pla (polylactic acid)
- advantages: good accuracy; functional materials; office friendly
- disadvantages: medium range of materials; support structure needed

once the standard triangle language (stl) file was obtained, it was verified with cura®, an open-source slicing application for 3d printers, through which an analysis of the model was carried out: thickness, stability, positioning and orientation of the model on the build plate. the stl file was also automatically divided by the software into sections (slices), and the support structures were generated automatically. the plastic filament is fed from a reel, pushed, and melted through the extrusion nozzle; when the extruded filament comes into contact with the build plane it hardens, and the rest of the material is gradually released [25].

4.2. laser beam machine
for the realization of the ground, another rapid prototyping tool was chosen: lbm. the terrain was made of 2 mm thick cardboard using the trotec speedy 400® machine, a type of cnc machine. the user prepares the object in a design software and sends it to the laser cutting machine, which cuts it automatically: the device uses a laser beam to cut or engrave the material on the cutting plane. this type of processing allows wide versatility in materials, no need for subsequent machining, and high precision. from the 2d file generated by revit 2021® one can proceed with the print layout operations, defining the cutting power values with the job control® software.

5. from bim model to prototype
some studies on the relationship between bim objects and digital fabrication processes reflect on how such objects can incorporate the semantics of fabrication and then support the workflow between designers and fabricators; this reflection has been applied in particular to cnc fabrication, proposing specific process maps for the conventional workflow between design and fabrication disciplines in the domain of custom cabinetry [26]. bim also supports digital workflow design for all building disciplines, including the use of structural information models for the digital fabrication of steel structures [27]. another area of reference is the construction industry, explored in depth in the studies by sakin and kiroglu (2017) regarding new 3d printing technologies for sustainable buildings, which point to contour crafting as a promising technique that may revolutionize the construction industry in the near future [28].

for the current case study, after the choice of the prototyping techniques, attention was paid to the preparation of the files for printing. for the construction of the terrain, the 2d file generated by revit 2021® was used, proceeding with the print layout operations and defining the cutting power values with the job control® software (figure 7).

figure 7. organization of the model for the printing process through revit 2021® and rhinoceros®.

to proceed with the 3d printing, since an stl exporter for revit 2021® is not yet available, the file was exported at 1:100 scale in fbx format, imported into rhinoceros®, and then exported as stl. during the import phase in rhinoceros®, the 3d model was scaled to 1:200 and the unit of measurement was changed to millimetres. for dimensional reasons related to the printing size of the machine, the models were divided into parts: building blocks and outer walls. in addition, neutral colours were deliberately chosen for the volumes, so as to focus the viewer's attention on the three-dimensional geometries and the composition of the volumes. the level of complexity of the model determines the number of triangles needed and their size; in turn, the number of triangles determines the size of the file. as happened in this case, critical issues may emerge during the conversion of the revit file to stl, and the exported file may contain errors of various types: holes or gaps, inverted or intersecting triangles (figure 8).

figure 8. the external wall of the "casa grande": imperfections during the printing phase due to errors in the model after the export from revit®.

during the printing phases the goal is to obtain objects characterised by continuous, well-finished surfaces. for additive manufacturing, the surfaces of the 3d model are converted into a mesh composed of triangular faces and vertices connected to each other. during conversion, one may obtain a model with mismatched edges, holes, and triangles intersecting each other in incorrect positions. in particular, a mesh for 3d printing must have the following characteristics: the surfaces must be closed, and all triangles must connect with other triangles along the edges, without intersecting and with consistent orientation; a minimal check of the closed-surface rule is sketched below.
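in a watertight, consistently oriented triangle mesh every edge is shared by exactly two triangles, once in each direction. a minimal sketch of that check in pure python (illustrative, not part of the original workflow):

```python
from collections import Counter

def mesh_issues(triangles):
    """triangles: list of (i, j, k) vertex-index triples.
    returns the number of boundary edges (holes) and the number of
    duplicated directed edges (inconsistent orientation)."""
    directed = Counter()
    for i, j, k in triangles:
        for a, b in ((i, j), (j, k), (k, i)):
            directed[(a, b)] += 1
    boundary = sum(1 for (a, b) in directed
                   if directed.get((b, a), 0) == 0)   # unmatched edge -> hole
    misoriented = sum(n - 1 for n in directed.values() if n > 1)
    return boundary, misoriented

# a single triangle is all boundary: three open edges
print(mesh_issues([(0, 1, 2)]))  # -> (3, 0)

# a consistently oriented tetrahedron is watertight
tetra = [(0, 1, 2), (0, 2, 3), (0, 3, 1), (1, 3, 2)]
print(mesh_issues(tetra))        # -> (0, 0)
```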
in particular, the netfabb® software was used for the file repair operations: a tool for editing stl files that offers a rich set of features, optimizing workflows and minimizing build errors. the procedure used for the little house was as follows. the stl file is opened in netfabb® and the quality of the 3d model is verified by checking the number of triangles composing it (13316). the software then reports that the file contains errors, and automatic part repair is launched to repair the part automatically; this operation shows what is wrong with the model. this was followed by a check for incorrectly oriented triangles and by the commands prepare and repair part. after these operations, the table on the right of the screen shows the new values for edges (3588), triangles (2392), inverse orientation (615), holes (0) and border edges (0). at this point the repair process of the file can be started with the commands run repair script and extended repair; this procedure yielded a new model with new values for inverse orientation (0), holes (0), border edges (0), edges (4968) and triangles (3312) (figure 9).

figure 9. the stl file before and after the repair operation in netfabb®.
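netfabb® is gui-driven; a scripted equivalent of the same repair pass can be sketched with the open-source trimesh library (an editor's illustration under that assumption, not part of the original workflow; the file names are hypothetical):

```python
import trimesh

# load the stl exported from rhinoceros (hypothetical path)
mesh = trimesh.load("little_house.stl", force="mesh")
print(len(mesh.faces), "triangles, watertight:", mesh.is_watertight)

trimesh.repair.fix_normals(mesh)   # make winding and normals consistent
mesh.fill_holes()                  # close simple boundary loops (holes)

print("after repair, watertight:", mesh.is_watertight)
mesh.export("little_house_repaired.stl")
```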
at this point it was possible to export the stl file, open it with cura® and print it (figure 10). the final result has numerous imperfections: despite the high level of accuracy (table 2), the surfaces are not smooth and the material was not deposited evenly. this is due to the poor quality of the mesh, and such problems can hardly be solved completely with mesh repair software.

figure 10. 3d printing process and the final physical model.

table 2. values and print settings used in the slicer cura® for the "due case a capri" 3d printing project.
- quality, layer height: 0.1 mm
- infill, density: 20 %
- infill, pattern: grid
- support: enabled
- time: 12 h
- material: pla (polylactic acid), gray, 40 g

to obtain higher-quality printable 3d files that can still be managed in revit®, an alternative method could be adopted using rhino.inside®.revit. this would involve modelling the object in rhinoceros® (one of the most suitable software packages for transferring 3d geometries to rapid prototyping tools) [29], importing it into the revit® file, and assigning families and types to the surfaces of the mesh.

6. conclusion
in the scenario of the digitization of archives, the 3d modelling phase extends the consultation of archival material, placing drawings and photographs alongside three-dimensional models that can be explored through virtual and augmented reality experiences [30], with the application of different digital interfaces, machine learning techniques and computing supports. these technologies allow archives to catalogue and publicly show their content at any time through interactive supports, allowing curators and scholars to greatly enrich the narrative experience. archives can place visitors virtually within the experience, leaving them free to explore the content actively and helping to build a deeper connection between visitors and the archive [31]. the prototyping phase could, in addition to being an ideal context in which to experiment with the flexibility of the different printing techniques, give users the possibility, during a visit to the archive, of consulting not only the original drawings but also physical models (possibly decomposable) that allow a better understanding of the three-dimensionality of the artefact. this could be particularly useful and significant in the case of unrealized architectures, such as those treated in this case study, for which the model becomes the first and only physical representation of the artefact.
the aim is to demonstrate that conjectural reconstruction through digital models is an act of clarification of aspects of architecture often left only to the written word: through the construction of new representations, the words take shape in a new figurative corpus; the digital model is not only the virtual image of the building but becomes a possible image, its only existential reality [32]. given the problems that emerged during the preparation of the file for printing, and the medium-to-low quality of the final printed model, the application of bim technology to achieve high-quality 3d printing still needs to be further investigated.

references
[1] r. spallone, g. bertola, design drawings as cultural heritage. intertwining between drawing and architectural language in the work of aldo morbelli, in: the graphics of heritage, l. agustín (editor), springer, cham, 2020, pp. 73-85. doi: 10.1007/978-3-030-47983-1_7
[2] g. bertola, archives enhancement through design drawings survey, bim modeling and prototyping, proc. of the imeko tc4 international conference on metrology for archaeology and cultural heritage, trento, italy, 14-16 september 2020, pp. 67-71. online [accessed 20 march 2022]: https://www.imeko.org/publications/tc4-archaeo2020/imeko-tc4-metroarchaeo2020-013.pdf
[3] r. spallone, g. bertola, f. ronco, sfm and digital modelling for enhancing architectural archives heritage, proc. of the imeko tc4 international conference on metrology for archaeology and cultural heritage, firenze, italy, 4-6 december 2019, pp. 142-147. online [accessed 20 march 2022]: https://www.imeko.org/publications/tc4-archaeo2019/imeko-tc4-metroarchaeo-2019-27.pdf
[4] r. spallone, g. bertola, f. ronco, sfm and digital modelling for enhancing architectural archives heritage, acta imeko, vol. 10, no. 1 (2021), pp. 224-233. doi: 10.21014/acta_imeko.v10i1.883
[5] m. incerti, g. mele, u. velo, the productive role of model from a virtual to a physical entity. the communication of 36 projects of never constructed villas, in: mo.di.phy. modeling from digital to physical. innovation in design languages and project procedures, m. pignataro (editor), maggioli editore, santarcangelo di romagna, 2013, pp. 128-141, isbn 978-88-3876-274-1. doi: 10.1007/978-3-030-33570-0
[6] a. melis, architetti italiani. aldo morbelli, l'architettura italiana, 3 (1942), pp. 49-72. [in italian]
[7] r. spallone, f. natta, h-bim modelling for enhancing modernism architectural archives. reliability of reconstructive modelling for "on paper" architecture, in: digital modernism heritage lexicon, springer tracts in civil engineering, c. bartolomei et al. (editors), springer nature, cham, 2021, pp. 809-829. doi: 10.1007/978-3-030-76239-1_34
[8] m. lo turco, the digitization of museum collections for the research, management and enhancement of cultural heritage, in: digital & documentation. database and models for the enhancement of heritage, s. parrinello (editor), pavia university press, pavia, 2019, pp. 92-103. doi: 10.1109/digitalheritage.2018.8810128
[9] london charter (carta di londra). online [accessed 25 january 2021]: http://www.londoncharter.org/fileadmin/templates/main/docs/london_charter_2_1_it.pdf
[10] s. nicastro, l'applicazione del bim come sistema informativo localizzato nel processo di conoscenza del patrimonio culturale, in: 3d modeling & bim. applicazioni e possibili futuri sviluppi, t. empler (editor), dei tipografia del genio civile, roma, 2016, pp. 172-183, isbn 978-88-4961-931-7. [in italian]
[11] c. bianchini, survey, modeling, interpretation as multidisciplinary components of a knowledge system, scires-it - scientific research and information technology, 4 (2014), pp. 15-24. doi: 10.2423/i22394303v4n1p15
[12] a. r. m. cuperschmid, m. m. fabricio, j. franco, hbim development of a brazilian modern architecture icon: glass house by lina bo bardi, heritage, 2 (2019), pp. 1927-1940. doi: 10.3390/heritage2030117
[13] g. saygi, f. remondino, management of architectural heritage information in bim and gis: state-of-the-art and future perspectives, international journal of heritage in the digital era, 2 (2013), pp. 695-713. doi: 10.1260/2047-4970.2.4.695
[14] d. myers, a. dalgity, i. avramides, the arches heritage inventory and management system: a platform for the heritage field, journal of cultural heritage management and sustainable development, 6 (2016), pp. 213-224. doi: 10.1108/jchmsd-02-2016-0010
[15] f. maietti, r. di giulio, e. piaia, m. medici, f. ferrari, enhancing heritage fruition through 3d semantic modelling and digital tools: the inception project, iop conference series: materials science and engineering, 364 (2018), pp. 1-8. doi: 10.1088/1757-899x/364/1/012089
[16] cult project. online [accessed 25 february 2021]: http://cult.dicea.unipd.it
[17] v. croce, g. caroti, a. piemonte, m. g. bevilacqua, from survey to semantic representation for cultural heritage: the 3d modeling of recurring architectural elements, acta imeko, vol. 10, no. 1 (2021), pp. 98-108. doi: 10.21014/acta_imeko.v10i1.842
[18] l. carnevali, f. lanfranchi, m. russo, built information modeling for the 3d reconstruction of modern railway stations, heritage, 2 (2019), pp. 2298-2310. doi: 10.3390/heritage2030141
[19] r. brumana, s. della torre, m. previtali, l. barazzetti, l. cantini, d. oreni, f. banfi, generative hbim modelling to embody complexity (lod, log, loa, loi): surveying, preservation, site intervention - the basilica di collemaggio (l'aquila), applied geomatics, 10 (2018), pp. 545-567. doi: 10.1007/s12518-018-0233-3
[20] p. parisi, m. lo turco, e. c. giovannini, the value of knowledge through h-bim models: historic documentation with a semantic approach, the international archives of the photogrammetry, remote sensing and spatial information sciences, volume xlii-2/w9, 8th intl. workshop 3d-arch "3d virtual reconstruction and visualization of complex architectures", bergamo, italy, 6-8 february 2019, pp. 581-588. doi: 10.5194/isprs-archives-xlii-2-w9-581-2019
[21] l. sass, r. oxman, materializing design: the implications of rapid prototyping in digital design, design studies, 27(3) (2006), pp. 325-355. doi: 10.1016/j.destud.2005.11.009
[22] r. scopigno, p. cignoni, n. pietroni, m. callieri, m. dellepiane, digital fabrication techniques for cultural heritage: a survey, computer graphics forum, 36(1) (2017), pp. 6-21. doi: 10.1111/cgf.12781
[23] p. dudek, fdm 3d printing technology in manufacturing composite elements, archives of metallurgy and materials, 58(4) (2013), pp. 1415-1418. doi: 10.2478/amm-2013-0186
[24] g. ryder, b. ion, g. green, d. harrison, b. wood, rapid design and manufacture tools in architecture, automation in construction, 11 (2002), pp. 279-290. doi: 10.1016/s0926-5805(00)00111-4
[25] l. sass, r. oxman, materializing design: the implications of rapid prototyping in digital design, design studies, vol. 27, issue 3 (2006), pp. 325-355. doi: 10.1016/j.destud.2005.11.009
[26] m. hamid, o. tolba, a. el antably, bim semantics for digital fabrication: a knowledge-based approach, automation in construction, 91 (2018), pp. 62-82. doi: 10.1016/j.autcon.2018.02.031
[27] autodesk, bim and digital fabrication. online [accessed 28 february 2021]: https://images.autodesk.com/latin_am_main/files/revit_bim_and_digital_fabrication_mar08.pdf
[28] m. sakin, y. c. kiroglu, 3d printing of buildings: construction of the sustainable houses of the future by bim, proc. of the 9th international conference on sustainability in energy and buildings, chania, crete, greece, 5-7 july 2017, pp. 702-711. doi: 10.1016/j.egypro.2017.09.562
[29] m. stavrić, p. šiđanin, b. tepavčević, digital technology software used for architectural modelling, in: architectural scale models in the digital age, springer, vienna, 2013, pp. 161-183, isbn 978-3-7091-1447-6.
[30] m. lo turco, a. marotta, modellazione 3d, ambienti bim, modellazione solida per l'architettura e il design, in: uno (nessuno) centomila | prototipi in movimento. trasformazioni dinamiche del disegno e nuove tecnologie per il design, m. rossi, a. casale (editors), maggioli editore, sant'arcangelo di romagna, 2014, pp. 17-24, isbn 978-88-9160-449-1. [in italian]
[31] v. palma, tra spazio reale e realtà virtuale, in: progetto e data mining, l. siviero (editor), letteraventidue, siracusa, 2019, pp. 88-99, isbn 978-88-6242-390-8. [in italian]
[32] f. maggio, architetture nel cassetto, in: territori e frontiere della rappresentazione, a. di luggo, p. giordano, r. florio, l. m. papa, a. rossi, o. zerlenga (editors), gangemi editore, roma, 2017, pp. 451-458, isbn 978-88-4923-507-4. [in italian]
design of a non-invasive sensing system for diagnosing gastric disorders

acta imeko, issn: 2221-870x, december 2021, volume 10, number 4, 73-79

rosario morello1, laura fabbiano2, paolo oresta2, claudio de capua1
1 diies, university mediterranea of reggio calabria, italy
2 dmmm, politecnico di bari university, italy

section: research paper
keywords: gastric disorders; gastric slow wave; egg; myoelectrical measurements
citation: rosario morello, laura fabbiano, paolo oresta, claudio de capua, design of a non-invasive sensing system for diagnosing gastric disorders, acta imeko, vol. 10, no. 4, article 14, december 2021, identifier: imeko-acta-10 (2021)-04-14
section editor: francesco lamonaca, university of calabria, italy
received october 2, 2021; in final form october 24, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: rosario morello, e-mail: rosario.morello@unirc.it

abstract: gastric disorders are widely spread among the population of any age. at the moment, the diagnosis is made by using invasive systems that cause several side effects. this manuscript proposes an innovative non-invasive sensing system for diagnosing gastric dysfunctions. the electrogastrography (egg) technique is used to record myoelectrical signals of stomach activity. although the egg technique has been known for a long time, several issues concerning the signal processing and the definition of suitable diagnostic criteria are still unresolved, so egg is to this day a trial practice. the authors aim to overcome the current limitations of the technique and to improve its relevance. to this purpose, a smart egg sensing system has been designed to non-invasively diagnose gastric disorders. in detail, the system records the gastric slow waves by means of skin surface electrodes placed in the epigastric area; cutaneous myoelectrical signals are thus acquired from the body surface in proximity of the stomach, and the electrogastrographic record is then processed. according to the diagnostic model designed by the authors, the system estimates specific diagnostic parameters in the time and frequency domains. it uses the discrete wavelet transform to obtain power spectral density diagrams; the frequency and power of the egg waveform and the dominant frequency components are then analysed. the defined diagnostic parameters are compared with the reference values of a normal egg in order to estimate the presence of gastric pathologies through the analysis of arrhythmias (tachygastria, bradygastria and irregular rhythm). the paper describes the design of the system and of the arrhythmia detection algorithm; prototype development and experimental data will be presented in future works. preliminary results show an interesting relevance of the suggested technique, so that it can be considered a promising non-invasive tool for diagnosing gastrointestinal motility disorders.

1. introduction
the rapid advancement of information technologies now allows physicians to smartly follow and assist the patient in real time, even remotely, through simple dedicated applications [1]-[3]. in the same perspective, the case of gastrointestinal pathologies is addressed here. dyspepsia, stomach ulcer, gastritis and esophageal reflux are some examples of gastrointestinal motility disorders. such pathologies are widely spread among the population, and their symptoms can become strongly debilitating. gastric disorders include several dysfunctions of the stomach digestive activity. gastroesophageal scintigraphy and endoscopy (gastroscopy) are at present two invasive techniques extensively used in practice to diagnose gastric disorders. during the digestive function, the stomach muscles contract rhythmically to allow the digestive activity to be performed; this activity is regulated by myoelectrical waves.
such waves, in the presence of additional stimuli, induce the muscles to contract [4]-[8]. electromyographic measurements of these gastric slow waves can provide important information on the stomach activity [9]. electrogastrography (egg) is a technique, known for several years, based on recording stomach muscle contractions by means of skin electrodes [10], [11]. the technique has suffered from inappropriate data processing algorithms and interpretation errors, showing poor reliability when used as a diagnostic method for gastrointestinal motility disorders. nevertheless, recent medical trials have highlighted a clear correlation between abnormal gastric electrical activity and the onset of specific dysfunctions [12], so that gastroenterologists have recently reconsidered it as a potential non-invasive screening technique. in addition, the american gastroenterological association (aga) states the clinical relevance of egg in demonstrating gastric myoelectric abnormalities in patients with unexplained nausea and vomiting or functional dyspepsia [13], [14]. it represents a promising and interesting alternative method for gastric screening, since it has no side effects and is painless [15]. nevertheless, further advances in processing algorithms and in the design of more accurate measurement systems are required in order to improve the reliability and evidence of the method [16].
at present there are no standardized diagnostic criteria, and the state of the art shows little attention to this issue. therefore, several aspects must still be investigated, and additional studies are needed to assess the use of egg as an alternative to the currently used invasive techniques [17]-[19]. as stated above, an egg system records the stomach myoelectrical activity by means of cutaneous leads placed over the gastric area. in this way it is possible to estimate the patient's gastrointestinal condition by analysing the slow waves in the time and frequency domains. in the presence of gastric disorders, myoelectrical abnormalities can be revealed and characterized in the egg records, due to a decreased activity of the stomach muscles and nerves. in healthy individuals, a standard egg record is characterized by a regular electrical rhythm: it consists of periodic waveforms with a predominant frequency of 3 cycles per minute (cpm) at rest. during the digestive activity, the frequency and intensity of the gastric waves increase. in individuals suffering from gastrointestinal motility disorders, electrogastrographic measurements show instead an irregular rhythm; in addition, post meal, sometimes no increase of frequency and intensity of the waveform is observed. these and further features must be analysed in order to define suitable diagnostic criteria for characterizing the occurrence of gastric motility disorders. another interesting application of the electrogastrographic technique concerns the study of patients affected by vomiting, unexplained nausea, improper digestion of food and gastroparesis. medical trials have been carried out to obtain important information on the mechanism that regulates the activity of stomach muscles and nerves in the presence of those disorders [19]-[21]. for example, it can be a helpful technique for understanding the origin of the unexplained contractions which cause vomiting in patients affected by anorexia [22], [23]; that would allow gastroenterologists to schedule new therapies so as to reduce the vomiting stimulus. therefore, more and more physicians show renewed interest in this technique; nevertheless, careful studies still have to highlight its potentialities. in this light, the authors have focused their research activity on these aspects in order to overcome the limitations and gaps in the interpretation of egg waveforms. the authors proposed in [24] an innovative diagnostic model for characterizing gastric myoelectrical abnormalities due to disorders, and have gained long-standing experience in the recording of myoelectrical signals [25]-[27]. in the present manuscript, the authors describe the developments of the previously proposed model. in detail, a smart and automated egg sensing system has been designed, and its design is described in the following. through the embedded diagnostic criteria, the system is able to recognize abnormal gastric activity. the methodical approach starts with the study of the electrogastrographic technique: standard egg records of healthy persons have been analysed in order to define suitable diagnostic reference parameters. such parameters contribute to defining and recognizing the onset of gastric disorders. then, diagnostic criteria have been defined to optimize the analysis of the egg waves in patients affected by gastric disorders.
the diagnosis is based on a multifactorial analysis of the defined diagnostic parameters, which are compared with the respective reference values of a standard egg signal. the egg sensing system has been designed and developed according to the measurement system design model described in [28]. the system acquires and processes gastric myoelectrical waves in compliance with the diagnostic model presented in [24]; the system then decides among five alternative diagnoses. in the next section an overview of the electrogastrographic technique is reported, and the main gastric disorders and some applications of egg in medical practice are described. the third and fourth sections, respectively, analyse the phenomenon and describe the design of the smart egg sensing system and the embedded diagnostic algorithm. some results are then presented, and conclusions follow.

2. electrogastrography
the electrogastrographic technique has been known in the medical field for a long time. it has common features with the electrocardiogram, as both techniques are based on myoelectrical signal measurements. egg is a non-invasive technique based on recording the gastric myoelectrical activity. at present it cannot properly be considered a diagnostic tool because of its lack of standardization: inaccurate instrumentation, interpretation errors and the lack of approved diagnostic criteria are some of the reasons. the authors have therefore carefully examined the state of the art of the technique in order to understand its current use. the method has been used in medical practice to study patients affected by unexplained persistent or episodic symptoms related to gastric motility disorders. further studies have been carried out by analysing the gastric waveforms of patients with unexplained nausea and vomiting, showing interesting and promising results. the same analysis cannot be performed by means of invasive diagnostic tools, such as endoscopy, because of the artefacts introduced during the examination: endoscopy can indeed cause further vomiting stimuli which overlap with the patient's nausea. other clinical trials highlight that functional dyspepsia and gastroparesis can be characterized by analysing the gastric myoelectrical activity; in such cases, arrhythmias of the egg waveform can be clearly observed. further studies have singled out the occurrence of abnormalities in the egg waves of patients with other specific gastric disorders such as stomach ulcer, gastritis, oesophageal reflux, early satiety and anorexia. experimental results have shown an interesting correlation between gastric myoelectrical impulses and stomach diseases; for this reason, the technique can be considered a promising and practical screening tool for the evaluation of several gastrointestinal motility disorders. nevertheless, before considering egg a reliable diagnostic test, several aspects still need to be explained and highlighted; the authors have therefore focused their attention on these issues. the final aim is to propose the design of a non-invasive sensing system with embedded diagnostic criteria. in order to follow a methodical approach, the mechanism which regulates the gastric myoelectrical impulses and the stomach muscle contraction must be understood. therefore, the behaviour of the stomach during the gastric function (digestion) in healthy persons has been analysed.
once the myoelectrical activity of the stomach had been investigated, it was possible to characterize the standard egg waveform, so as to correctly interpret the occurrence of possible abnormalities in the myoelectrical activity [29]-[31]. in detail, stomach muscle contraction and nerve activity are regulated by myoelectrical impulses [30], [31]: such gastric waves control and coordinate the stomach activity. periodic waves, within specific frequency and amplitude ranges, are usual and allow the stomach to digest food. electrogastrographic signals have a relatively low amplitude, about 200 − 5000 μV; consequently, the acquired signal must be amplified before the processing stage. the frequency range is 0.016 − 0.25 hz, equal to 1 − 15 cpm (see figure 1 for reference). at rest, slow waves depolarize the gastric smooth muscles without causing contraction. the amplitude of the egg waves increases with the ingestion of food and when digestion starts, due to the increased activity of the stomach muscles. during digestion, indeed, the contraction of the muscles is caused by additional depolarization, so the gastric slow waves control the fundamental frequency and the direction of contraction. this behaviour describes the regular mechanism of the gastric function. differently, in the presence of gastrointestinal disorders, arrhythmias can be observed in the egg waveform due to an incorrect stomach activity. such abnormal myoelectrical activity causes changes of the fundamental frequency component and of its intensity. for instance, when a reduced contractile function of the stomach is observed, the egg waveform is characterized by lower frequency values of the fundamental component, known as bradygastria; it is due to a reduced number of contractions. conversely, higher frequency values of the fundamental component, or tachygastria, cause stomach atonicity. generally, the pathogenesis of the gastric myoelectrical arrhythmias lies in the delayed stomach emptying which occurs when the individual is affected by gastrointestinal motility disorders; consequently, it causes a reduced stomach activity.

3. standard egg waveform
clinicians suggest collecting egg recordings after overnight fasting and during food digestion, in order to analyse the gastric function at rest and during the digestive activity. the patient must be in a comfortable position to prevent movement artefacts and should remain motionless during the whole egg acquisition. a preliminary recording is performed with empty stomach, for 15 to 60 minutes; subsequently the patient must consume a caloric meal (300 kcal), and a further egg recording, 30 to 120 minutes long, is acquired. normally, physicians suggest a fasting recording of 30 minutes and a postprandial recording of 60 minutes. in this way it is possible to evaluate the gastric response during meal digestion. a set of egg signals, recorded during fasting and postprandial stages, has been analysed in the time and frequency domains by using the discrete wavelet transform; the frequency components and their amplitude (power) have been considered. by means of power/frequency spectral analysis, the postprandial and fasting records have been compared. commonly, it is assumed that a normal egg waveform is characterized by an averaged dominant frequency of about 3 cpm; during digestion, both the frequency and the associated amplitude increase.
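as a quick unit check (an editor's restatement of the figures given in the text), cycles per minute convert to hertz as

$$ f_{\mathrm{Hz}} = \frac{f_{\mathrm{cpm}}}{60}, \qquad 3\ \mathrm{cpm} = 0.05\ \mathrm{Hz}, \qquad 1\text{--}15\ \mathrm{cpm} \approx 0.016\text{--}0.25\ \mathrm{Hz}, $$

consistent with the signal band reported above.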
rhythm abnormalities include bradygastria (lower dominant frequency), tachygastria (higher dominant frequency) and irregular rhythm (dysrhythmia). nevertheless, such averaged values are not fully reliable, because they can change significantly from individual to individual (physical constitution, age, general health status, etc.). consequently, a careful study of the literature and further analyses of the egg signals of healthy individuals have allowed us to characterize a reference model. two quantities must be considered: frequency and amplitude. in a standard egg waveform, the fasting dominant frequency of the gastric waves has to belong to the interval 2-4 cpm. in the postprandial recording, the dominant frequency must belong to the normal 2-4 cpm range for at least 75 % of the time; this percentage depends on the type of meal consumed. if the dominant frequency belongs to that interval for only 25 % of the egg recording time, it can be considered an index of dysrhythmia; this occurs because of an altered gastric emptying. frequency values lower than or equal to 2 cpm are an index of bradygastria; frequency values higher than or equal to 4 cpm define tachygastria. in a regular recording segment, different zones with bradygastria and tachygastria may be characterized; dysrhythmia can be identified if the abnormal-frequency waveform lasts at least 5 minutes. the recognition of these patterns is simple, the only parameter to be considered being the dominant frequency value. further relevant parameters in the time domain can be used to characterize an irregular gastric activity: for example, an abnormal egg record can be characterized by the presence of bradygastria and/or tachygastria regions over more than 30 % of the recording time. as an alternative, the egg waveform can be considered irregular if the percentage of power distribution in the bradygastria or tachygastria regions is greater than 20 %. with reference to the signal intensity, the absolute amplitude or power of the egg waveform can be estimated by means of the weighted summation of the gastric waves. the percentage power distribution, instead, is obtained by summing the waveform power within each frequency band, dividing by the total signal power of the recording, and multiplying the result by 100. typically, a power ratio between postprandial and fasting signals lower than or equal to 1 may suggest a decreased gastric response to the meal; on the contrary, an increase in the myoelectrical activity of the stomach is expected during digestion. finally, nausea and early satiety are typically a cause of gastric dysrhythmia, but this occurrence does not necessarily imply an altered gastric emptying rate.

figure 1. egg waveform.
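in symbols (an editor's restatement of the definitions just given, with a generic band label $b$), the percentage power distribution of a band and the postprandial-to-fasting power ratio read

$$ \%P_b = 100 \cdot \frac{\sum_{f \in b} P(f)}{\sum_{f} P(f)}, \qquad R_{DF} = \frac{P_{\mathrm{postprandial}}}{P_{\mathrm{fasting}}}, $$

with $R_{DF} \le 1$ suggesting a decreased gastric response to the meal.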
4. the smart egg sensing system
in this section, the design of the egg sensing system is described. the system has been designed according to the measurement system design model in [28]. through an embedded algorithm, the smart system is able to extract information from the egg signal according to the criteria reported in table 2; it is a smart and patient-adaptive system which can detect gastrointestinal motility disorders. the system has a microcontroller architecture in order to manage the data flow and the data processing. three cutaneous ag/agcl electrodes are used to acquire and record the gastric myoelectrical signals [32]. the electrodes must be placed on the anterior abdominal wall over the stomach (epigastrium antrum), and it is preferable that specialized medical staff perform the electrode placement; however, a brief description of the procedure is provided here. since the stomach is located near the end of the rib section, it is helpful to divide it into three zones in order to suitably place the electrodes: the fundus (upper region), the stomach body or middle, and the pylorus (end of the stomach), see figure 2. two electrodes must be placed under the ribs in proximity of the fundus and mid corpus of the stomach (along the antral axis); the third one, the ground reference, is placed at the end of the stomach (see the black circles in figure 3). this configuration allows the signal-to-noise ratio to be maximized. since the electrogastrographic signal has a relatively low amplitude, it is amplified before the processing stage by means of an analog devices ad524 instrumentation amplifier. according to the frequency range (1 − 15 cpm), a band-pass filter with cutoff frequencies of 0.010 hz and 0.3 hz has been used to eliminate frequency components lower than 1 cpm and higher than 15 cpm. in this way it is possible to remove the baseline drift and to exclude signals from other sources: possible myoelectrical interferences due to the heart, colon and small intestine. further interferences or artefacts are due to breathing, movements or electrical noise; these artefacts commonly have frequency components lower than 1 cpm (motion artefacts) and higher than 9 cpm (respiratory artefacts). such overlapping signals could cause an erroneous estimation of the signal amplitude and dominant frequency. therefore, the filter has been carefully designed to reject both the myoelectrical contributions of other organs and the artefacts and noise. the signal is sampled with a sampling frequency of 2 hz and is subsequently processed by a discrete wavelet transform [33], [34]. by means of spectral analysis it is possible to estimate the power and amplitude of the signal frequency components [35], [36]. the waveform analysis in the time and frequency domains allows the system to obtain the power trends as a function of frequency and/or time; in this way it is possible to characterize the arrhythmias of the myoelectrical signal. specific memory devices store information concerning the metrological characteristics of the system and the patient's clinical history: information on measurement uncertainty and on the calibration curve is stored in a first memory device, in order to estimate the reliability of the measurement results, while a further writable and readable storage device stores private and medical data concerning the case history of the patient, in order to improve the diagnosis reliability. figure 4 shows the flow diagram of the egg signal processing: the amplification and filtering blocks perform a preliminary pre-processing of the input signal; the filtering stage rejects noise and artefact signals overlapped with the egg waveform; two amplification stages amplify the voltage levels. once the electrodes are properly placed, the system performs a fasting recording 30 minutes long and a postprandial recording lasting 60 minutes. subsequently, the acquired signals are processed according to the diagnostic model [24]. abnormalities in the egg record can be characterized by considering the power vs frequency trend.
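the pre-processing chain just described can be prototyped in a few lines. the sketch below uses scipy and pywavelets as stand-ins for the embedded implementation; the paper does not specify the filter order or the wavelet, so the 2nd-order butterworth and db4 wavelet here are assumptions:

```python
import numpy as np
from scipy import signal
import pywt

FS = 2.0  # sampling frequency in hz, as stated in the paper

def preprocess(egg: np.ndarray) -> np.ndarray:
    """band-pass 0.010-0.3 hz to remove baseline drift and artefacts."""
    sos = signal.butter(2, [0.010, 0.3], btype="bandpass", fs=FS, output="sos")
    return signal.sosfiltfilt(sos, egg)

def band_powers(egg: np.ndarray, wavelet: str = "db4", levels: int = 6):
    """dwt decomposition; at fs = 2 hz the level-5 detail spans roughly
    0.031-0.0625 hz, i.e. about 1.9-3.75 cpm (the normogastric band)."""
    coeffs = pywt.wavedec(egg, wavelet, level=levels)
    # coeffs[0] is the approximation; coeffs[1:] are the details d6..d1
    return {f"d{levels - i}": float(np.sum(c ** 2))
            for i, c in enumerate(coeffs[1:])}

# synthetic 30-minute fasting record: a 3 cpm (0.05 hz) slow wave plus noise
t = np.arange(0, 30 * 60, 1 / FS)
egg = 200e-6 * np.sin(2 * np.pi * 0.05 * t) + 20e-6 * np.random.randn(t.size)
print(band_powers(preprocess(egg)))  # power should concentrate in d5
```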
gastrointestinal motility disorders cause, in fact, arrhythmias: frequency components above the normal range indicate tachygastria; frequency components below the normal range indicate bradygastria; the simultaneous presence of several frequency contributions indicates dysrhythmia; in addition, the signal power may fail to increase during the postprandial recording. consequently, five patterns are considered: i) normal egg; ii) bradygastria; iii) tachygastria; iv) dysrhythmia; v) lack of postprandial power increase. the analysis in section 3 has allowed us to define specific diagnostic parameters which are representative of the gastric myoelectrical signal features, see table 1. these parameters provide a complete description of the egg waveform in terms of spectral and power analysis. a basic requirement is that tdf, f_tdf and p_tdf have a duration longer than 5 minutes.

figure 2. sections of the stomach.
figure 3. egg electrode placement.
figure 4. flow diagram of the egg signal path: sensing device, filtering, two-stage amplification, a/d converter, data processing, metrological status memory and patient case-history memory.

firstly, the embedded algorithm allows the system to estimate the dominant frequency (df), the recording time of the dominant frequency (tdf) and the associated amplitude and power distribution (pdf). in this way, it is possible to verify the rhythm of the egg waveform. in detail, the system estimates the fasting and postprandial dominant frequencies (f_df, p_df), their recording times (f_tdf, p_tdf, f_t, p_t) and their amplitude and power distributions (f_pdf, p_pdf). subsequently, the ratio of postprandial to fasting power (rdf) is evaluated in order to assess the occurrence of a decreased gastric response to the meal. in order to verify the possible occurrence of dysrhythmias, the recording times and power distributions of the egg frequencies in/above/below the 2-4 cpm interval are computed (t3f, p_t3f, f_t3f, p3f). the recording times and power distributions of the tachygastria and bradygastria ranges (ttf, ptf, tbf, pbf) are subsequently estimated. finally, the percentages of power distribution in the three frequency ranges (%p3f, %ptf, %pbf) can be obtained by estimating the weighted summation of the power contributions divided by the total power. these last parameters allow us to characterize the presence of tachygastria and bradygastria patterns. once the previous diagnostic parameters have been estimated, the smart egg sensing system verifies the presence of possible irregular gastric activities. to this aim, each parameter is compared with the homologous reference value of a normal egg record, described in section 3. five alternative diagnoses (normal egg, bradygastria, tachygastria, dysrhythmia and lack of postprandial power increase) are available. specific conditions on time, frequency and power must be satisfied simultaneously in order to make a specific diagnosis. table 2 summarizes the embedded diagnosis criteria.
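the estimation chain described above begins with the dominant frequency. the sketch below illustrates that first step; the system itself uses a discrete wavelet decomposition [33], [34], so the welch periodogram used here is a simplification for illustration, and the 0.5-9 cpm search band is an assumption.

```python
# sketch of the dominant-frequency (df) estimation step: df is taken as
# the frequency of the psd peak inside an assumed 0.5-9 cpm band.
import numpy as np
from scipy.signal import welch

def dominant_frequency(egg, fs=2.0):
    f, pxx = welch(egg, fs=fs, nperseg=256)
    band = (f >= 0.5 / 60.0) & (f <= 9.0 / 60.0)
    i = np.argmax(pxx[band])
    df_cpm = 60.0 * f[band][i]   # dominant frequency in cpm
    p_df = pxx[band][i]          # power at the dominant frequency
    return df_cpm, p_df
```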
table 1. defined diagnostic parameters.

parameter | description
df in cpm | dominant frequency
tdf in s | recording time of dominant frequency
pdf in dbm | power distribution of dominant frequency
f_df in cpm | fasting dominant frequency
f_tdf in s | recording time of fasting dominant frequency
f_pdf in dbm | power distribution of fasting dominant frequency
f_t in s | fasting recording time
p_df in cpm | postprandial dominant frequency
p_tdf in s | recording time of postprandial dominant frequency
p_pdf in dbm | power distribution of postprandial dominant frequency
p_t in s | postprandial recording time
rdf | ratio of postprandial to fasting power of df
t3f in s | recording time of the [2-4] cpm frequency range
p_t3f in s | recording time of the postprandial [2-4] cpm frequency range
f_t3f in s | recording time of the fasting [2-4] cpm frequency range
p3f in dbm | power distribution of the [2-4] cpm frequency range
ttf in s | total recording time of tachygastria frequency
ptf in dbm | power distribution of tachygastria frequency
tbf in s | total recording time of bradygastria frequency
pbf in dbm | power distribution of bradygastria frequency
%p3f | percentage of power distribution of the [2-4] cpm frequency range
%ptf | percentage of power distribution of tachygastria frequency
%pbf | percentage of power distribution of bradygastria frequency

table 2. diagnosis criteria.

diagnosis | time criterion | frequency criterion (cpm) | power criterion
normal | 100·p_t3f/p_t > 75 and 100·f_t3f/f_t > 75 | 2 < df < 4 and 2 < f_df < 4 and 2 < p_df < 4 | p_pdf > f_pdf; p3f > ptf; p3f > pbf and %p3f > 75 %
bradygastria | 100·tbf/(f_t + p_t) > 30 | df < 2 or f_df < 2 or p_df < 2 | %pbf > 20 %
tachygastria | 100·ttf/(f_t + p_t) > 30 | df > 4 or f_df > 4 or p_df > 4 | %ptf > 20 %
dysrhythmia | 100·p_t3f/p_t < 25 and (p_t - p_t3f) > 300 s | variable df | pdf > p3f
lack of postprandial power increase | p_tdf > 300 s | - | rdf ≤ 1
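the criteria of table 2 translate directly into a chain of threshold checks. the following sketch renders them as a decision function; the parameter names follow table 1 and the thresholds follow table 2, while the dictionary-based interface and the fall-through ordering are assumptions of this example, not part of the embedded algorithm.

```python
# compact rendering of the table 2 diagnosis criteria; p is a dict of
# the table 1 parameters (times in seconds, frequencies in cpm).
def diagnose(p):
    # normal: >75 % of fasting/postprandial time in 2-4 cpm, dominant
    # frequencies in 2-4 cpm, and a postprandial power increase
    if (100 * p["p_t3f"] / p["p_t"] > 75 and 100 * p["f_t3f"] / p["f_t"] > 75
            and 2 < p["df"] < 4 and 2 < p["f_df"] < 4 and 2 < p["p_df"] < 4
            and p["p_pdf"] > p["f_pdf"] and p["p3f"] > p["ptf"]
            and p["p3f"] > p["pbf"] and p["%p3f"] > 75):
        return "normal egg"
    if (100 * p["tbf"] / (p["f_t"] + p["p_t"]) > 30
            and (p["df"] < 2 or p["f_df"] < 2 or p["p_df"] < 2)
            and p["%pbf"] > 20):
        return "bradygastria"
    if (100 * p["ttf"] / (p["f_t"] + p["p_t"]) > 30
            and (p["df"] > 4 or p["f_df"] > 4 or p["p_df"] > 4)
            and p["%ptf"] > 20):
        return "tachygastria"
    if (100 * p["p_t3f"] / p["p_t"] < 25
            and p["p_t"] - p["p_t3f"] > 300      # seconds
            and p["pdf"] > p["p3f"]):
        return "dysrhythmia"
    if p["p_tdf"] > 300 and p["rdf"] <= 1:
        return "lack of postprandial power increase"
    return "inconclusive"
```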
5. discussion

the previous parameters and the diagnosis criteria have been characterized by analysing egg records of healthy individuals, [24]. standard egg signals have been considered to define the normal range of each parameter. the design of the egg sensing system embeds these diagnosis criteria, which have been validated by means of simulations. preliminary tests have been carried out by using standard egg records in order to verify the possible occurrence of false-positive diagnoses. egg records have been generated in the laboratory by means of an arbitrary waveform generator. twenty cases have been considered. in all cases, the system properly detected the absence of arrhythmias, in compliance with the expected behaviour. further tests have been performed to verify the system's capability to reject artefacts. motion and respiratory artefacts have been reproduced and added to a normal egg record. the noisy signal has been processed and the artefacts have been properly removed, recovering the initial egg signal. additional simulations have been performed in order to test the sensitivity of the system. egg records with gastric disorders have been generated in the matlab environment to verify the degree to which the embedded numerical algorithm responds to slight changes of the diagnostic parameters. each pattern occurrence has been reproduced in order to prove the effectiveness and accuracy of the diagnosis results. the signals have then been generated by using an arbitrary waveform generator; therefore, the experimental results do not regard specific patients but are the consequence of simulations. the system response has been observed. in detail, egg waveforms have been generated starting from standard waves. each diagnostic parameter has been modified with progressive percentage deviations. in this way, it has been possible to characterize the sensitivity of the system. the presence of gastric arrhythmias has been detected in the presence of deviations above 7 % of the reference values in table 2. the egg system has shown a good capability to detect each pattern. the total sensitivity was above 94 %.

6. conclusions

in this paper, the electro-gastrographic (egg) technique is proposed to record and process myoelectrical signals of the stomach activity. several studies in the literature show the relevance of electro-gastrography for diagnosing gastric disorders. although the egg technique is well known, several issues are still unresolved; thus, egg remains, to date, an experimental practice. the authors propose the design of a smart egg sensing system in order to overcome the current limitations of the technique and improve its relevance and evidence. the system has been designed according to the ieee 1451 standard. it is able to acquire the egg signal by means of skin electrodes and to process it according to the embedded diagnostic model previously defined by the authors. diagnostic parameters and their reference values have been characterized by analysing egg records of healthy individuals. by using the diagnostic model, the smart system is able to assess the occurrence of abnormal myoelectrical activity of the stomach due to gastric pathologies. five alternative diagnoses have been considered: normal egg, bradygastria, tachygastria, dysrhythmia and lack of postprandial power increase. preliminary simulations have shown interesting results. in detail, the proposed system has been tested by using normal egg records and simulated waveforms, so as to verify its sensitivity and selectivity. the experimental data have shown promising results. the proposed sensing system may be considered a non-invasive tool for diagnosing gastrointestinal motility disorders as an alternative to invasive techniques such as gastroscopy. at the moment, the research activity is awaiting funding in order to develop the system and carry out experimentation on real case studies. medical trials have been scheduled, and results will be reported in future works.

references
[1] h. m. shamim, g. muhammad, a. alamri, smart healthcare monitoring: a voice pathology detection paradigm for smart cities, multimedia systems, vol. 25, no. 5, 2019, pp. 565-575. doi: 10.1007/s00530-017-0561-x
[2] r. morello, c. de capua, l. fabbiano, g. vacca, image-based detection of kayser-fleischer ring in patient with wilson disease, 2013 ieee international symposium on medical measurements and applications (memea). doi: 10.1109/memea.2013.6549715
[3] g. muhammad, m. f. alhamid, x. long, computing and processing on the edge: smart pathology detection for connected healthcare, ieee network, vol. 33, no. 6, 2019, pp. 44-49. doi: 10.1109/mnet.001.1900045
[4] j. j. baker, e. scheme, k. englehart, d. t. hutchinson, b. greger, continuous detection and decoding of dexterous finger flexions with implantable myoelectric sensors, ieee transactions on neural systems and rehabilitation engineering, vol. 18, no. 4, 2010, pp. 424-432. doi: 10.1109/tnsre.2010.2047590
[5] byung woo lee, chungkeun lee, jinkwon kim, jong-ho lee, comparison of conductive fabric electrode with electromyography to evaluate knee joint movement, ieee sensors journal, vol. 12, no. 2, 2012, pp. 410-411. doi: 10.1109/jsen.2011.2161076
[6] g. imperatori, p.
cunzolo, d. cvetkov, d. barrettino, wireless surface electromyography probes with four high-speed channels, ieee sensors journal, vol. 13, no. 8, 2013, pp. 2954-2961. doi: 10.1109/jsen.2013.2260145
[7] john w. arkwright, neil g. blenman, ian d. underhill, simon a. maunder, nick j. spencer, marcello costa, simon j. brookes, michal m. szczesniak, phil g. dinning, measurement of muscular activity associated with peristalsis in the human gut using fiber bragg grating arrays, ieee sensors journal, vol. 12, no. 1, 2012, pp. 113-117. doi: 10.1109/jsen.2011.2123883
[8] a. lay-ekuakille, p. vergallo, a. trabacca, m. de rinaldis, f. angelillo, f. conversano, s. casciaro, low-frequency detection in ecg signals and joint eeg-ergospirometric measurements for precautionary diagnosis, measurement, vol. 46, no. 1, 2013, pp. 97-107. doi: 10.1016/j.measurement.2012.05.024
[9] s. somarajan, n. muszynski, j. olson, a. comstock, a. russell, l. walker, s. acra, l. bradshaw, the effect of chronic nausea on gastric slow wave spatiotemporal dynamics in children, neurogastroenterology & motility, vol. 33, no. 5, 2021, e14035. doi: 10.1111/nmo.14035
[10] h. p. parkman, w. l. hasler, j. l. barnett, e. y. eaker, electrogastrography: a document prepared by the gastric section of the american motility society clinical gi motility testing task force, neurogastroenterol motil, blackwell publishing ltd, 2003, pp. 89-102.
[11] b. pfaffenbach, r. adamek, k. kuhn, m. wegener, electrogastrography in healthy subjects, digestive diseases and sciences, springer, vol. 40, no. 7, 1995, pp. 1445-1450. doi: 10.1007/bf02285190
[12] c. varghese, d. a. carson, s. bhat, t. c. l. hayes, a. a. gharibans, c. n. andrews, g. o'grady, clinical associations of functional dyspepsia with gastric dysrhythmia on electrogastrography: a comprehensive systematic review and meta-analysis, neurogastroenterology & motility, 2021, e14151. doi: 10.1111/nmo.14151
[13] report of american gastroenterological association (aga), american gastroenterological association medical position statement: nausea and vomiting, gastroenterology, vol. 120, no. 1, 2001, pp. 261-263.
[14] a. ravelli, gastric motility and electrogastrography (egg), in: h. till, m. thomson, j. foker, g. holcomb iii, k. khan (eds), esophageal and gastric disorders in infancy and childhood, springer, berlin, heidelberg, isbn: 978-3-642-11201-0.
[15] g. gopu, r. neelaveni, k. porkumaran, investigation of digestive system disorders using electrogastrogram, proc. international ieee conference on computer and communication engineering, kuala lumpur, 13-15 may 2008, pp. 201-205. doi: 10.1109/iccce.2008.4580596
[16] f. y. chang, c. l. lu, s. d. lee, g. l. yu, an improved electrogastrographic system in measuring myoelectrical parameters, journal of gastroenterology and hepatology, vol. 13, no. 10, 1998, pp. 1027-1032. doi: 10.1111/j.1440-1746.1998.tb00565.x
[17] j. l. gonzalez-guillaumin, d. c. sadowski, o. yadid-pecht, k. v. i. s. kaler, m. p.
mintchev, multichannel pressure, bolus transit, and ph esophageal catheter, ieee sensors journal, vol. 6, no. 3, 2006, pp. 796-803. doi: 10.1109/jsen.2006.874437
[18] a. cysewska-sobusiak, p. skrzywanek, a. sowier, utilization of miniprobes in modern endoscopic ultrasonography, ieee sensors journal, vol. 6, no. 5, 2006, pp. 1323-1330. doi: 10.1109/jsen.2006.877985
[19] j. d. z. chen, non-invasive measurement of gastric myoelectrical activity and its analysis and applications, proc. 20th international conference of the ieee engineering in medicine and biology society, vol. 6, hong kong, november 1998, pp. 2802-2807. doi: 10.1109/iembs.1998.746065
[20] r. yoshida, k. takahashi, h. inoue, a. kobayashi, a study on diagnostic capability of simultaneous measurement of electrogastrography and heart rate variability for gastroesophageal reflux disease, proc. ieee sice annual conference (sice), akita, japan, august 2012, pp. 2157-2162.
[21] zhang-yong li, chao-shi ren, shu zhao, hong sha, juan deng, gastric motility functional study based on electrical bioimpedance measurements and simultaneous electrogastrography, journal of zhejiang university science b, springer, vol. 12, no. 12, 2011, pp. 983-989. doi: 10.1631/jzus.b1000436
[22] d. a. carson, s. bhat, t. c. l. hayes, a. a. gharibans, c. n. andrews, g. o'grady, c. varghese, abnormalities on electrogastrography in nausea and vomiting syndromes: a systematic review, meta-analysis, and comparison to other gastric disorders, digestive diseases and sciences, 2021. doi: 10.1007/s10620-021-07026-x
[23] a. panyko, m. vician, m. dubovský, massive acute gastric dilatation in a patient with anorexia nervosa, journal of gastrointestinal surgery, vol. 25, no. 3, 2021, pp. 856-858. doi: 10.1007/s11605-020-04715-2
[24] r. morello, c. de capua, f. lamonaca, diagnosis of gastric disorders by non-invasive myoelectrical measurements, proc. 2013 ieee international instrumentation and measurement technology conference (i2mtc 2013), minneapolis, mn, 6-9 may 2013, pp. 1324-1328. doi: 10.1109/i2mtc.2013.6555628
[25] c. de capua, a. meduri, r. morello, a remote doctor for homecare and medical diagnoses on cardiac patients by an adaptive ecg analysis, proc. ieee 4th international workshop on medical measurement and applications (memea 2009), cetraro, italy, may 2009, pp. 31-36. doi: 10.1109/memea.2009.5167949
[26] c. de capua, a. meduri, r. morello, a smart ecg measurement system based on web service oriented architecture for telemedicine applications, ieee transactions on instrumentation and measurement, vol. 59, no. 10, 2010, pp. 2530-2538. doi: 10.1109/tim.2010.2057652
[27] c. de capua, a. battaglia, a. meduri, r. morello, a patient-adaptive ecg measurement system for fault-tolerant diagnoses of heart abnormalities, proc. 24th ieee instrumentation and measurement technology conference (imtc 2007), warsaw, poland, 1-3 may 2007, pp. 1-5. doi: 10.1109/imtc.2007.379434
[28] r. morello, c. de capua, a measurement system design technique for improving performances and reliability of smart and fault-tolerant biomedical systems, lecture notes in electrical engineering, eds. a. lay-ekuakille, s. c. mukhopadhyay, vol. 75, 2010, pp. 207-217.
[29] m. inoue, s. iwamura, m. yoshida, egg measurement under various situations, proc. 23rd international conference of the ieee engineering in medicine and biology society, vol. 4, 2001, pp. 3356-3358. doi: 10.1109/iembs.2001.1019546
[30] b. o. familoni, t. l. abell, k. l.
bowes, a model of gastric electrical activity in health and disease, ieee transactions on biomedical engineering, vol. 42, no. 7, 1995, pp. 647-657. doi: 10.1109/10.391163
[31] wei ding, shujia qin, lei miao, ning xi, hongyi li, yuechao wang, processing and analysis of bio-signals from human stomach, proc. ieee international conference on robotics and biomimetics (robio), tianjin, china, december 2010, pp. 769-772. doi: 10.1109/robio.2010.5723423
[32] j. garcia-casado, j. l. martinez-de-juan, j. l. ponce, noninvasive measurement and analysis of intestinal myoelectrical activity using surface electrodes, ieee transactions on biomedical engineering, vol. 52, no. 6, 2005, pp. 983-991. doi: 10.1109/tbme.2005.846730
[33] i. v. tchervensky, r. j. de sobral cintra, e. neshev, v. s. dimitrov, d. c. sadowski, m. p. mintchev, centre-specific multichannel electrogastrographic testing utilizing wavelet-based decomposition, physiological measurement (iop science), vol. 27, no. 7, 2006, pp. 569-584. doi: 10.1088/0967-3334/27/7/002
[34] r. j. sobral cintra, i. v. tchervensky, v. s. dimitrov, m. r. mintchev, optimal wavelets for electrogastrography, proc. 26th international conference of the ieee engineering in medicine and biology society, san francisco, ca, 1-5 september 2004, pp. 329-332. doi: 10.1109/iembs.2004.1403159
[35] s. casciaro, f. conversano, l. massoptier, r. franchini, r. casciaro, a. lay-ekuakille, a quantitative and automatic echographic method for real-time localization of endovascular devices, ieee transactions on ultrasonics, ferroelectrics, and frequency control, vol. 58, no. 10, 2011, pp. 2107-2117. doi: 10.1109/tuffc.2011.2060
[36] s. urooj, m. khan, a. ansari, a. lay-ekuakille, a. k. salhan, prediction of quantitative intrathoracic fluid volume to diagnose pulmonary edema using labview, computer methods in biomechanics and biomedical engineering, 2011, pp. 1-6. doi: 10.1080/10255842.2011.565054
a 3d head pointer: a manipulation method that enables the spatial position and posture of supernumerary robotic limbs

acta imeko, issn: 2221-870x, september 2021, volume 10, number 3, pp. 81-90

joi oh1, fumihiro kato2, yukiko iwasaki1, hiroyasu iwata3
1 waseda university, graduate school of creative science and engineering, tokyo, japan
2 waseda university, global robot academic institute, tokyo, japan
3 waseda university, faculty of science and engineering, tokyo, japan

abstract: this paper introduces a novel interface, the '3d head pointer', for the operation of a wearable robotic arm in 3d space. the developed system is intended to assist its user in the execution of routine tasks while operating a robotic arm. previous studies have demonstrated the difficulty a user faces in simultaneously controlling a robotic arm and their own hands. the proposed method combines a head-based pointing device and voice recognition to manipulate the position and orientation as well as to switch between these two modes. in a virtual reality environment, the position instructions of the proposed system and its usefulness were evaluated by measuring the accuracy of the instructions and the time required, using a fully immersive head-mounted display (hmd). in addition, the entire system, including posture instructions with two switching methods (voice recognition and head gestures), was evaluated using an optical transparent hmd. the obtained results showed an accuracy of 1.25 cm and 3.56 °, with a time span of approximately 20 s necessary for communicating an instruction. these results demonstrate that voice recognition is a more effective switching method than head gestures.

section: research paper
keywords: vr/ar; hands-free interface; polar coordinate system; teleoperation; srl
citation: joi oh, fumihiro kato, yukiko iwasaki, hiroyasu iwata, a 3d head pointer: a manipulation method that enables the spatial position and posture for supernumerary robotic limbs, acta imeko, vol. 10, no. 3, article 13, september 2021, identifier: imeko-acta-10 (2021)-03-13
editor: bálint kiss, budapest university of technology and economics, hungary
received march 31, 2021; in final form september 6, 2021; published september 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: joi oh, e-mail: joy-oh0924@akane.waseda.jp

1. introduction

in recent years, there has been a considerable amount of research and development on the use of supernumerary robotic limbs (srls) for 'body augmentation'. in previous studies, robotic technology, especially wearable robots, has been developed for use as prostheses for rehabilitation purposes. an srl aims to provide its users with additional capabilities, enabling them to accomplish tasks that they would otherwise be incapable of performing. in this respect, an srl is different from other types of existing wearable robots; a lightweight, highly manoeuvrable srl with sufficient torque developed by veronneau et al. [1] is a classic example. these robots can be used in any context, from helping individuals perform household chores to improving industrial productivity. to effectively assist in routine tasks (e.g., opening an umbrella or stirring a pot), users require an interface that indicates the target point location to the end effector of the srl without requiring them to interrupt their actions. however, such a method has not yet been established. parietti et al.
[2],[3] developed a manipulation technique in which the operator's movements are monitored by the robot, which then performs the corresponding movements. iwasaki et al. [4] proposed an interface that allows the operator to actively control the srl by using the orientation of the face, while sasaki et al. [5] developed a manipulation method that enables more complicated operations of the robotic arm with the user's feet as controllers. previous studies have overlooked the balance between ensuring that the operator's limbs can move freely and providing detailed instructions to the srl, and there are further challenges with respect to multitasking in the context of daily life. therefore, in this study, a method for manipulating srls so that two parallel tasks do not interfere with each other is proposed and then evaluated for its usefulness. in the present study, a two-stage experiment was conducted. this section describes the hypothesis behind the method, and section 2 presents the method for position instruction along with the experimental results. in section 3, a manipulation method that includes posture instructions is proposed and the experimental results are presented, and the two experiments are then discussed. section 4 presents comparisons with other similar methods and discusses the limitations; finally, section 5 presents the conclusions. the following two elements are considered essential for achieving daily support for parallel tasks: 1) undisturbed movement of the operator's limbs; 2) an indication of spatial position and posture. to date, several hands-free interfaces have been proposed to satisfy requirement 1), with some operated by the tongue [6], eye movement [7] or voice [8] and used for screen control, robot manipulation or both. methods to control robotic limbs with brain waves [9] are also being investigated. however, this study focuses on requirement 2) and the construction of a more intuitive instruction method. when the operator provides directions related to a location in 3d space, they must accurately indicate the target point.
the field of view within which a person can perceive the shape and position of an object is as narrow as 15 ° from the gazing point [10]; hence, to compensate, it is necessary to direct the face and gaze into the instruction space when providing spatial position instructions. the interface proposed in this study takes advantage of this compensatory action and uses it as an instruction method. methods for using the head as a joystick have already been proposed. one method involves manipulating the head for instruction in a 2d plane, such as on-screen operations [11]. another involves switching between the vertical and horizontal planes by nodding towards the plane to be manipulated, supplementing the plane manipulation by the head so that only the head is used to manage the 3d space [12]. however, these methods do not use the compensatory head motion as a manipulation technique.

2. proposal for a positioning method using head bobbing

turning one's head can be used to indicate the radial direction of the target point in polar coordinates. in this section, we propose a pointing interface that combines head bobbing with head orientation in a polar coordinate system. head bobbing is a small back-and-forth motion of the head that does not interfere with the operator's movements. this research was based on the standard morphology of a japanese man, as recorded by kouchi et al. [13]. according to these data, the head-bobbing range was determined to be approximately 9.29 cm, which allows the operator to keep the zero-moment point in the torso and operate a robotic arm without losing balance. a doughnut-shaped area around the operator with an innermost radius of 30 cm and an outermost radius of 100 cm was defined as an example of the srl operating range [14]. the head-bobbing depth-change factor must therefore be 70/9.29 ≈ 7.53 or more. the range of motion that can be covered by head bobbing is considerably smaller than that of the arms. preliminary experiments demonstrated that, at high magnification, the instruction accuracy of head bobbing was lower than that of other comparable methods and the required instruction time was longer. an increase/decrease factor (idf) that gradually changes the depth gain of head bobbing based on head velocity was therefore introduced. the idf allows precise instructions while maintaining a high magnification. in this study, the idf was constructed using the mouse-cursor change factor shown in figure 1, as set by microsoft windows [15].

2.1. evaluation test with a fully immersive head-mounted display

this section examines the usefulness of the idf and of the 3d head pointer as a whole. this study was conducted with the previously developed robotic arm proposed by nakabayashi et al. [14] and amano et al. [16], shown in figure 2. the arm has a reach of up to 1 m, and its jamming hand, shown in figure 3, can be used as an end effector to grasp an object with an error of up to 3 cm [16]. therefore, the allowable indication error of the interface in this experiment was set to 3 cm. the validation was performed in a virtual reality (vr) environment. the indication of the radial direction by head orientation was measured from the front of the head-mounted display (hmd). the depth indicator was implemented by setting up a sphere with the operator at the centre, as shown in figure 4, and by changing the radius of the sphere controlled by head bobbing.

figure 1. microsoft's mouse-cursor speed-change settings [15].
figure 2. external view of the robotic arm proposed by nakabayashi et al. [14] and amano et al. [16].
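the pointing geometry and the idf described above can be summarized in a few lines. in the sketch below, the face orientation gives the radial direction in polar coordinates, head bobbing updates the radius within the 30-100 cm operating range [14], and a velocity-dependent gain stands in for the idf; the specific gain curve and its normalization constant are placeholders inspired by the windows pointer-ballistics idea [15], not the authors' parameters.

```python
# sketch of the 3d head pointer: polar direction from head orientation,
# depth from head bobbing, with an assumed velocity-dependent idf gain.
import math

def idf_gain(head_speed_cm_s, base_gain=7.53, max_gain=15.0):
    """slow head motion -> fine control; fast motion -> coarse,
    high-magnification control. the curve is an assumed placeholder."""
    t = min(head_speed_cm_s / 10.0, 1.0)   # assumed speed normalization
    return base_gain * (1.0 - t) + max_gain * t

def cursor_position(yaw, pitch, radius_cm):
    """convert the polar indication (face orientation + radius) to a
    cartesian target point in the operator-centred frame."""
    x = radius_cm * math.cos(pitch) * math.sin(yaw)
    y = radius_cm * math.sin(pitch)
    z = radius_cm * math.cos(pitch) * math.cos(yaw)
    return x, y, z

def update_radius(radius_cm, bob_displacement_cm, head_speed_cm_s):
    # head bobbing scaled by the idf, clamped to the srl range [14]
    r = radius_cm + idf_gain(head_speed_cm_s) * bob_displacement_cm
    return min(max(r, 30.0), 100.0)
```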
figure 3. external view of the jamming hand.

the htc vive [17] was used as the hmd in the proposed method. the experimental procedure is as follows: 1) the participant wears the vive headset and grasps a vive controller in each hand, holding them up in front of their chest, as shown on the right in figure 5. this is defined as the 'rest position'. the subject's avatar is displayed in the vr space, as shown on the left in figure 5. 2) the 3d head pointer's control cursor (the red ball in the centre of figure 6) appears 65 cm in front of the eyes. simultaneously, the target sphere with a 10-cm diameter (the blue transparent sphere in the upper-right corner of figure 6) appears at any of eight locations at a ± 30-cm height, ± 20-cm width and ± 20-cm depth, positioned ± 20 cm from the cursor. 3) the participant aligns the cursor with the centre of the target sphere by using the 3d head pointer. 4) when the participant perceives that they have reached the centre of the target sphere, they verbalise the completion of the instruction. as shown in figure 7, the target sphere has a reference frame with its origin at the centre of the sphere; the participant adjusts the position of the cursor accordingly. 5) steps 1)-4) are performed for all eight target sphere positions. in the present study, the above procedure was performed by two groups of six participants each. the experiments were performed once under different conditions for each group. table 1 shows the experimental conditions and group distribution. group 1 was asked to perform the tasks described above but with a predefined time limit for instruction execution, while group 2 was asked to perform the experiment either with or without the idf. figure 8 shows the relationship between head-bobbing speed and magnification. 'idf not available' is a condition in which the rate of change in depth due to head bobbing is fixed at 10 times, without using the idf. based on these experiments, the usefulness of the 3d head pointer was evaluated using the average indication error in condition (a) of table 1, the relationship between indication error and operation time in conditions (a)-(f), and the maximum arm sway of the subject, measured with the vive controllers, in condition (a). at the same time, the usefulness of the idf was tested by comparing the instruction error between conditions (a) and (g).

figure 4. 3d image of the head pointer operation.
figure 5. the experimental interface operation. left: instructional target spheres and participant within the vr; right: participant wearing the hmd and holding the controllers.
figure 6. subjective view of the user's experience.
figure 7. target sphere and cursor visibility.

2.2. results and discussion on the fully immersive hmd

in this study, the wilcoxon signed-rank test was used to verify the significant differences between two conditions. this is a nonparametric test used when the population does not follow a normal distribution. the difference $Z_i = Y_i - X_i$ between the experimental values $X_i$ and $Y_i$ obtained from the i-th participant under the two conditions was computed. next, the $Z_i$ were ordered by absolute value, and the rank $R_i$ was assigned starting from the smallest. the wilcoxon signed-rank test statistic $W$ was then calculated as

$W = \sum_{i=1}^{n} \phi_i R_i$ . (1)

however, in this case, $\phi_i$ was calculated as
$\phi_i = \begin{cases} 1 & (Z_i > 0) \\ 0 & (Z_i < 0) \end{cases}$ . (2)

significant differences were determined by comparing the test statistic $W$ with the wilcoxon signed-rank table [18]. in this experiment, instead of the table, the excel statistics function (microsoft inc.) was used to calculate significant differences.

table 1. the experimental conditions and group distribution.

condition | requirement | group
(a) | no requirements | 1, 2
(b) | 2-s time limit for instruction | 1
(c) | 3-s time limit for instruction | 1
(d) | 4-s time limit for instruction | 1
(e) | 6-s time limit for instruction | 1
(f) | 8-s time limit for instruction | 1
(g) | the rate of change in depth due to head bobbing is fixed at 10 times | 2

2.2.1. indication error

the instruction error, i.e. the distance from the centre of the target sphere to the control cursor, was measured upon completion of the instruction. this was done in vr by using the idf-based 3d head pointer for 12 people, divided equally into groups 1 and 2. the results are presented in table 2. in this study, a jamming hand [16] capable of grasping an object with an error of up to 3 cm in target point indication was used as the reference end effector. the average error of the instructions in this experiment was approximately 1.32 cm, with the highest instruction error being 2.5 cm. these results suggest that the indication error of the 3d head pointer is within the range of error that can be absorbed when grasping and manipulating an object with this end effector. the standard deviation of the indication error was 0.65 cm, and the error varied widely from person to person. this result may be related to each individual's familiarity with vr spaces; the results were validated by considering vr experience.

2.2.2. change in indication error at each indication time

the experiment was conducted under conditions (a)-(f) for the six members of group 1. the relationship between the instruction error and instruction time is shown in figure 9. the average operation time under condition (a), with no time limit, was 6.2 s. when the operation time was limited, the indication error decreased rapidly as the time limit increased from 2 to 3 s. beyond 4 s, this error remained almost constant regardless of the time taken. this suggests that the operation with the 3d head pointer itself had already been completed by 4 s.

2.2.3. maximum arm sway

the maximum arm sway of the six participants in group 1 was measured from the movement of the vive controllers while standing upright and compared to the maximum arm sway when the 3d head pointer was manipulated in condition (a). the results are presented in figure 10. the comparison showed that the maximum arm sway was greater with the 3d head pointer. however, the wilcoxon signed-rank test did not show any significant difference between the two conditions (n = 6, p < 0.1), suggesting that the proposed method allows a user to continue performing regular arm movements while giving instructions. because the proposed method requires visibility of the target space for performing tasks with the srl, multitasking is sometimes impossible, and interruption of the task being performed by the user is unavoidable. however, if the operator's hand position can be maintained while using the 3d head pointer, the interrupted task can be resumed quickly after instructing the srl; this is significantly more efficient than performing the two tasks separately.
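for reference, the paired comparisons reported in this section can be reproduced with an off-the-shelf implementation of the wilcoxon signed-rank test instead of the excel function. the values below are placeholders, not measured data.

```python
# the same paired, nonparametric comparison via scipy; arrays are
# hypothetical per-subject errors (cm) under two conditions.
from scipy.stats import wilcoxon

with_idf = [0.9, 1.1, 0.7, 1.3, 0.8, 1.0]
without_idf = [3.1, 2.6, 3.4, 2.9, 3.8, 2.7]

stat, p_value = wilcoxon(with_idf, without_idf)   # paired test, n = 6
print(f"w = {stat}, p = {p_value:.3f}")           # significant if p < 0.05
```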
table 2. average instruction error.

subject | instructional error (cm)
1 | 1.20
2 | 2.50
3 | 1.54
4 | 2.19
5 | 2.41
6 | 1.06
7 | 1.06
8 | 0.882
9 | 0.757
10 | 0.905
11 | 0.695
12 | 0.668
average | 1.32

figure 8. change in head-bobbing magnification with and without the idf.
figure 9. instruction error per operating time in the evaluation test.
figure 10. maximum arm sway when standing upright and operating the 3d head pointer.
the changes in the rotational velocity were spherically interpolated using trigonometric functions. figure 14 shows the relationship between the amount of rotation of the head and the rotation speed of the posture indicator. the figure 11. depth error based on head bobbing with and without idf. figure 12. total error in the three axes due to the 3d head pointer with and without idf. figure 13. the three different rotation axes of the head. figure 14. relationship between head rotation angle and posture rotation speed. acta imeko | www.imeko.org september 2021 | volume 10 | number 3 | 86 reference angle for head rotation is the direction that the user is facing when switching to the posture indication. 3.2. proposal for a mode-switching method using voice recognition an increase in the number of body parts used for manipulation is undesirable because it leads to an increase in the body load. the switching method was constructed using the head or voice. in this study, two types of switching instruction methods were proposed and then compared in an evaluation test. 3.2.1. voice-recognition-based switching indication method a switching method based on voice recognition is less physically demanding and has less impact on the operator’s limbs than physical operations. table 3 lists the commands used for voice indications. 3.2.2. head-gesture-based switching indication method a method for switching between posture and position instructions using head gestures was also proposed. in this method, a ‘head tilt’ motion was performed to switch from position to posture instructions (top of figure 15), while the ‘head bobbing’ motion was performed to switch from posture instruction to position instruction (bottom of figure 15). because the user only has to indicate the operation mode required, the head-gesture-based switching method requires little cognitive load, and switching can be done intuitively. 3.3. evaluation test with optical transparent hmd this section presents an evaluation of the usefulness of the posture and switching instructions in the 3d head pointer as well as an evaluation of the usefulness of the 3d head pointer in real space. to operate the srl on a real machine, the tip of the srl and target object must be visible. there are two ways to see the tip of the srl on a real machine: by using a video transparent hmd or an optical transparent hmd [22]. the video transparent system may not be able to cope when the srl malfunctions because of the delay in viewing the actual device. in this experiment, the proposed method was constructed using an optical transparent hmd (hololens2 [23]) to evaluate the usefulness of the entire 3d head pointer. to provide posture instructions, the pointing cursor was changed from a red sphere to a blue–green bipyramid, as shown in figure 16. the indication of the radial direction based on head orientation was measured from the front of the hmd. the depth indicator was implemented by changing the radius of the sphere through head bobbing, as described in section 2.1. the amount of head rotation in the posture indication was determined by measuring the posture of the hmd. compared to position indication, it is difficult to evaluate the amount of operator input required for posture indication. to visually display the user’s head rotation, the user interface (ui) is displayed during posture instruction, as shown in figure 17. 
the white point on the ui is aligned with the centre and moves up, down, left and right according to the amount of yaw and pitch fed as the input. the roll-angle input is displayed as a white circle in the ui, and the circle rotates according to the amount of roll input. this ui allows the operator to visually understand how much operator input is. for speech recognition, microsoft’s mixed reality toolkit was used [24]. in this experiment, a pointing task was set up as the target appearing in the air. the experimental procedure is described as follows: table 3. voice command list. voice command function indicate position switch from posture indication to position indication indicate posture switch from position instructions to posture instructions finish signals that the indication has been completed. (used for evaluation tests) figure 15. top: switch to posture instruction; bottom: switch to position instruction. figure 16. pointer cursor corresponding to posture indication. figure 17. auxiliary user interface for posture instruction. acta imeko | www.imeko.org september 2021 | volume 10 | number 3 | 87 1) the subjects stood upright while wearing the hmd and bluetooth headset in a room with white walls. 2) the 3d head pointer cursor (blue–green bipyramid in figure 18) and the target (purple bipyramid in figure 18) were displayed in front of the participant. the target appeared at a random position within 15 ° to the left and right of the subject’s direction of gaze and at a depth of between 30 and 100 cm, as shown in figure 19. the direction of the target was determined randomly from six possible directions: up, down, left, right, front and back. 3) the participant moved the cursor to the same position and posture as the target using a 3d head pointer. when the subject perceived that the operation had been completed, they verbalised ‘instruction complete’ into the bluetooth headset. markers were displayed at the centre of the cursor and at the target position and rotation, as shown in figure 18. these markers were always visible to the participant regardless of the position and posture of the cursor and target, and the operator relied on these markers for position and posture indications. 4) steps 1)–3) were performed 12 times in succession in one experiment. the evaluation experiment was conducted under the following two conditions: a) switching indications by voice recognition, b) switching indications by head gesture. a verbal questionnaire was administered after the operation was complete. the experiment was conducted using a total of six men and six women in their 20s and 30s, with the order of conditions a) and b) randomised. procedures 1)–4) were performed at least once as a practice run before conducting the experiment, and additional practice was conducted until the subject judged that they were proficient. based on the above experiments, the usefulness of posture indication was verified according to the posture error and operation time. the usefulness of the switching instruction was verified by comparing the position error, posture error and operation time in each condition. finally, the usefulness of the 3d head pointer as a whole was verified based on the position error, posture error and operation time. section 3.4 describes these results. 3.4. results and discussion on the optical transparent hmd 3.4.1. position error and posture error the average values of the position and angle errors for each condition for the six subjects are shown in figure 20. 
in this experiment, the tolerance was set assuming the same use of srl as in the experiment discussed in section 2.2.1, and the tolerance of the position indication was 3 cm. in the jamming hand of the srl, when reaching vertically to a cylindrical or spherical object, the success rate for grasping did not decrease if the angular error was within 30 ° [16]. the average position error of the instructions in this experiment was approximately 1.25 cm for the voice switching method and approximately 2.82 cm for the head-gesture switching method, and a significant difference was observed between the two conditions in the wilcoxon signed-rank test. this result demonstrates that the voice recognition method is more accurate in terms of indicating the position. since the instruction error of the position instruction alone in section 2.2.1 was 1.32 cm, this result shows that the head-gesture switching method has a negative effect on the accuracy of the location instruction. the increased error in the head-gesture result can be attributed to the shift in the position indication; when the head is tilted to switch from position to posture instructions, the direction of the face moves accordingly. in addition, in the figure 18. cursor and target in the experiment. figure 19. area where the target appears (blue area in the figure). figure 20. top: error in position indication; bottom: error in posture indication. acta imeko | www.imeko.org september 2021 | volume 10 | number 3 | 88 questionnaire, there were several comments noting that it was difficult to tilt the head without changing the direction of the face while indicating using the head gesture. the average error for the posture instruction was approximately 3.56 ° for the voice switching method and approximately 1.78 ° for the head-gesture switching method, and a significant difference was observed between the two conditions in the wilcoxon signed-rank test. this result shows that the accuracy of posture indication is higher when using head gestures. this can be attributed to the fact that posture instruction is an isometric input; as long as the head is rotated from the origin, the posture of the cursor will continue to rotate. if the operator uses head gestures, the instruction can be rapidly switched to stereotactic instructions, and consequently, the cursor posture can be fixed at the moment the continuously rotating cursor reaches the target posture. in the voice-based switching method, there is a delay between the time the voice command is uttered and the time the uttered voice is recognised as a command by voice recognition. the voice-based switching method might cause the cursor to rotate during the time the user wants to switch; however, a time delay occurs when the operation actually switches to the position instruction, resulting in a posture error. these results show that voice-based switching is effective in terms of position indication, and head-gesture-based switching is effective in terms of posture indication. furthermore, when switching using voice, the posture error increases; but even for the subject with the largest error, the average error was 5.56 °, which is within the acceptable range of 30 °. however, the subject with the largest error in the case of head-gesture-based switching had an average position error of 6.74 cm, which is far beyond the acceptable error of position instruction. 
thus, it can be concluded that the voice-based switching method is more useful in terms of instructional accuracy, as all its values are within the acceptable error range for the srl assumed in this experiment.

3.4.2. operation time

the mean values of the operation time for each condition for the six participants are shown in figure 21. the average operating time was approximately 20.3 s for the voice switching method and approximately 20.8 s for the head-gesture switching method. there was no significant difference between the two conditions in the wilcoxon signed-rank test, indicating that there is no significant difference between the two switching methods in terms of operation time. combined with the results on instructional accuracy, this suggests that voice switching is more practical. moreover, the average operation time for position instructions alone, as discussed in section 2.2.2, was 6.2 s; in this experiment, the operation time was three times longer, owing to the addition of posture and switching indications. in addition, the participant with the longest average operation time took three times as long as the participant with the shortest. when the subjects were asked in the questionnaire about the cause of the increase in operation time, some explained that the operation took longer when the posture indication did not perform well. the causes of the delay in posture indication were as follows: 1) when giving posture instructions, incorrect rotation was mistakenly fed as input; 2) compared to position instructions, it was difficult to correct errors when they occurred; 3) it was difficult to understand the posture of the cursor or target during rotation instructions. cause 1) can be explained by the fact that posture manipulation by intentionally moving the neck along three axes is not performed in daily life. the reason for cause 2) was the time needed to correct an error, because the error had to be corrected by indicating the amount of displacement in the posture indication; this is in contrast to position indication, which can directly specify the correct position when an error occurs. the reason for cause 3) is related to depth and size perception in peripheral vision. the permissible eccentricity for recognising the position and shape of an object in peripheral vision is 15 ° [10], but the perceptible eccentricity for depth is less than 12.5 °, and that for size is less than 5 ° [25]. in addition, the accuracy of both depth and size perception decreases with eccentricity from the gazing point. because posture indication requires recognising the posture of an object from changes in the size and depth of each side of the cursor or target, it requires more visual information than position indication. these factors made it difficult to recognise the posture of the object when the face was turned away by up to 15 ° during posture manipulation.

3.4.3. evaluation of the usefulness of the 3d head pointer as a whole

in the case of the voice switching method, the error in both position and posture indications was within the acceptable range, suggesting that the accuracy of the 3d head pointer is also sufficient for indications in real space through an optical transparent hmd.
in terms of operation time, there was large variation, and the indication time was not stable, indicating room for improvement. improving the posture instruction, which is the most significant factor in the increase in operation time, is considered the most effective step; from the results of the questionnaire, the improvements to be made are as follows: 1) construct the manipulation method using routine head movements; 2) use isotonic input; 3) do not take the operator's gaze away from the gazing point. of these, 1) and 2) can be addressed by using face orientation for posture indication, but there is a potential problem in how to provide posture instructions by rotating the head beyond its movable angle limit. as for a solution to 3), when the operator removes their gaze from the cursor and target object in the posture indication state, the situation can be improved by continuing to display the target object and cursor in front of the operator in augmented reality (ar). however, displaying real objects in ar in real time is a demanding process for ar devices; to do so, it is necessary to reduce the required processing power, for example by detecting the mesh of objects in real space and displaying that mesh.

figure 21. the mean values of the operation time.

4. discussion on the practical application of a 3d head pointer

in this section, the practical application of the proposed method presented in this study is discussed. the advantages of the 3d head pointer can be clarified by comparing this method with other manipulation methods. following the comparison, concerns about using this interface in real life are discussed.

4.1. comparison with other similar methods

based on the results of the previous section, the proposed method was compared with other similar methods.
a. physical controller: some srls, such as those made by veronneau [1], use a physical controller similar to a gamepad, with an analogue stick and buttons, as the method of operation. the advantage of the 3d head pointer is that its operation is more intuitive and easier to understand than that of a physical controller, and it can be operated hands-free.
b. srl manipulation method using the feet: the proposed method can operate the srl in any standing or seated position, unlike methods operated by the feet [5]. however, manipulation with the feet can indicate the position and posture of the srl simultaneously; a short operation time is the main advantage of foot operation.
c. head joystick and nodding to switch between the vertical and horizontal planes: because the 3d head pointer uses the compensatory motion of the head, it imposes a lower operational burden than methods that use the head as a joystick [11],[12]. in contrast, the nodding method [12] allows digital input from the head alone and may be used in conjunction with the 3d head pointer.

4.2. limitations

in this study, voice recognition was used to give instructions, such as for switching, but voice recognition has the disadvantage of not working in a noisy environment or while the operator is having a conversation. some prior examples of command-type instructions use gaze to provide instructions [26],[27]. the combination of pointing instructions with the head and gaze-based instructions could provide a more flexible environment for srl indications.
if there is a need to use srl for complex or long movements in daily life, the movements must be registered and played back. registering and replaying behaviours require many commands, but the number of command-type instructions that can be intuitively memorised and selected is as few as six [28]. when building a system with seven or more commands, it is necessary to devise a way to remember commands, such as displaying a menu screen in the hmd. 5. conclusions in this study, a spatial position and posture indication interface for srls was proposed to improve functional efficiency in the execution of routine tasks. the required functions for indicating spatial position and posture have been described, and a position indication method, the 3d head pointer, has been proposed, which combines head-bobbing-type depth indication for spatial position and polar direction indication by face orientation. in a vr environment, evaluation tests of the 3d head pointer and idf were conducted. the results showed that the 3d head pointer had sufficient accuracy without requiring the operator to interrupt their actions. in addition, to provide not only position but also posture guidance by using a 3d head pointer, a posture guidance method using head rotation as isometric input and two types of switching guidance methods using voice recognition and head gestures were proposed. in addition, a comparative study of two switching instruction methods using an optical transparent hmd and a test to evaluate the usefulness of the 3d head pointer as a whole was conducted. the results showed that the switching method based on voice recognition was effective for using the assumed srl, and it was confirmed that the 3d head pointer was sufficiently accurate to be useful for operating robotic arms using an optical transparent hmd. these results provide useful knowledge for improving the srl interface. in the future, an intuitive posture instruction method will be developed that is not affected by compensatory head movements and that will incorporate a command instruction method that replaces voice recognition. in addition, an srl will be considered as an interface for use as a third arm in situations, such as banquets and construction sites, where an individual’s hands are not sufficient. acknowledgement this research is supported by waseda university global robot academic institute, waseda university green computing systems research organization and by jst erato grant number jpmjer1701, japan. references [1] c. veronneau, j. denis, l. lebel, m. denninger, v. blanchard, a. girard, j. plante, multifunctional 3-dof wearable supernumerary robotic arm based on magnetorheological clutches, ieee robotics and automation letters 5 (2020) pp. 2546-2553. doi: 10.1109/lra.2020.2967327 [2] c. davenport, f. parietti, h. h. asada, design and biomechanical analysis of supernumerary robotic limbs, proc. of the ieee/asme international conference on advanced intelligent mechatronics, fort lauderdale, florida, united states, 17-19 october 2012, pp. 787-793. doi: 10.1115/dscc2012-movic2012-8790 [3] h. h. asada, f. parietti, supernumerary robotic limbs for aircraft fuselage assembly: body stabilization and guidance by bracing, proc. of ieee international conference on robotics and automation, hong kong, china, 2014, pp. 119-125. doi: 10.1109/icra.2014.6907002 [4] y. iwasaki, h. 
iwata, a face vector - the point instruction-type interface for manipulation of an extended body in dual-task situations, ieee international conference on cyborg and bionic systems, shenzhen, china, 25-27 oct. 2018, pp. 662-666. doi: 10.1109/cbs.2018.8612275
[5] t. sasaki, m. saraiji, k. minamizawa, m. inami, metaarms: body remapping using feet-controlled artificial arms, proc. of the 31st annual acm symposium on user interface software and technology, new york, united states, 14 october 2018, pp. 65-74. doi: 10.1145/3242587.3242665
[6] s. g. terashima, j. sakai, t. ohira, h. murakami, e. satho, c. matsuzawa, s. sasaki, k. ueki, development of a tongue operative joystick for proposal of development of an integrated tongue operation assistive system (i-to-as) for seriously disabled people, the society of life support engineering 24 (2012), pp. 201-207. doi: 10.5136/lifesupport.24.201
[7] r. barea, l. boquete, m. mazo, e. lopez, system for assisted mobility using eye movements based on electrooculography, ieee transactions on neural systems and rehabilitation engineering 10 (2002), pp. 209-218. doi: 10.1109/tnsre.2002.806829
[8] r. c. simpson, s. p. levine, voice control of a powered wheelchair, ieee transactions on neural systems and rehabilitation engineering 10 (2002), pp. 122-125. doi: 10.1109/tnsre.2002.1031981
[9] s. nishio, c. i. penaloza, bmi control of a third arm for multitasking, science robotics 3 (2018) 20. doi: 10.1126/scirobotics.aat1228
[10] t. miura, behavioral and visual attention, kazama shobo, chiyoda, japan, 1996, isbn 978-4-7599-1936-3.
[11] r. hasegawa, device for input via head motions, patent wo 2010/110411 a1, japan, 30 september 2010.
[12] a. jackowski, m. gebhard, a. gräser, a novel head gesture based interface for hands-free control of a robot, proc. of the ieee international symposium on medical measurements and applications, benevento, italy, 15-18 may 2016, pp. 1-6. doi: 10.1109/memea.2016.7533744
[13] m. kouchi, m. mochimaru, aist anthropometric database, pub. national institute of advanced industrial science and technology, japan, january 2005. online [accessed 4 september 2021] https://www.airc.aist.go.jp/dhrt/91-92/fig/91-92_anthrop_manual.pdf
[14] l. drohne, k. nakabayashi, y. iwasaki, h. iwata, design consideration for arm mechanics and attachment positions of a wearable robot arm, proc. of the ieee/sice international symposium on system integration, paris, france, 14-16 january 2019, pp. 645-650. doi: 10.1109/sii.2019.8700355
[15] windows dev center hardware, pointer ballistics for windows xp, 2002. online [accessed 4 september 2021] http://archive.is/20120907165307/msdn.microsoft.com/en-us/windows/hardware/gg463319.aspx#selection-165.0-165.33
[16] k. amano, y. iwasaki, k. nakabayashi, h. iwata, development of a three-fingered jamming gripper for corresponding to the position error and shape difference, robosoft 2019, ieee international conference on soft robotics, seoul, korea (south), 14-18 april 2019, pp. 137-142. doi: 10.1109/robosoft.2019.8722768
[17] htc vive, 2011. online [accessed 4 september 2021] https://www.vive.com/eu/product/vive/
[18] c. zaiontz, wilcoxon signed-ranks table, 2020.
online [accessed 4 september 2021] http://www.real-statistics.com/statistics-tables/wilcoxon-signed-ranks-table/
[19] unity technologies japan/ucl, unity-chan!, 2014. online [accessed 4 september 2021] https://unity-chan.com/
[20] committee on physical disability, japanese orthopaedic association, joint range of motion display and measurement methods, japanese journal of rehabilitation medicine 11 (1974), pp. 127-132. doi: 10.2490/jjrm1963.11.127
[21] s. a. douglas, a. k. mithal, the ergonomics of computer pointing devices, springer, london, 1997.
[22] j. p. rolland, r. l. holloway, h. fuchs, comparison of optical and video see-through, head-mounted displays, proc. of the international society for optical engineering, 21 december 1995, pp. 292-307. doi: 10.1117/12.197322
[23] hololens2. online [accessed 4 september 2021] https://www.microsoft.com/en-us/hololens/buy
[24] mixed reality toolkit. online [accessed 4 september 2021] https://hololabinc.github.io/mixedrealitytoolkit-unity/readme.html
[25] a. yasuoka, m. okura, binocular depth and size perception in the peripheral field, journal of the vision society of japan 23 (2011), pp. 103-114. doi: 10.24636/vision.23.2_103
[26] m. yamato, a. monden, y. takada, k. matsumoto, k. tori, scrolling the text windows by looking, transactions of the information processing society of japan 40 (1999), pp. 613-622. online [accessed 4 september 2021] https://ipsj.ixsq.nii.ac.jp/ej/?action=pages_view_main&active_action=repository_view_main_item_detail&item_id=12841&item_no=1&page_id=13&block_id=8
[27] t. ohno, quick menu selection task with eye mark, transactions of the information processing society of japan 40 (1999), pp. 602-612. online [accessed 4 september 2021] https://ipsj.ixsq.nii.ac.jp/ej/?action=pages_view_main&active_action=repository_view_main_item_detail&item_id=12840&item_no=1&page_id=13&block_id=8
[28] y. iwasaki, h. iwata, research on a third arm: analysis of the cognitive load required to match the on-board movement functions, poster presented at: the japanese society for wellbeing science and assistive technology, 6-8 september 2018, tokyo, japan, session no. 2-4-1-2.
a machine learning based sensing and measurement framework for timing of volcanic eruption and categorization of seismic data
acta imeko, issn: 2221-870x, march 2022, volume 11, number 1, 1-5
vijay souri maddila1, katady sai shirish1, m. v. s. ramprasad2
1 department of computer science engineering, gitam (deemed to be university), visakhapatnam-530045, andhra pradesh, india
2 department of electrical, electronics and communication engineering (eece), gitam (deemed to be university), visakhapatnam-530045, andhra pradesh, india
section: research paper
keywords: volcanic eruption; machine learning; measurement; seismic data; sensing
citation: vijay souri maddila, katady sai shirish, m. v. s. ramprasad, a machine learning based sensing and measurement framework for timing of volcanic eruption and categorization of seismic data, acta imeko, vol. 11, no. 1, article 24, march 2022, identifier: imeko-acta-11 (2022)-01-24
section editor: md zia ur rahman, koneru lakshmaiah education foundation, guntur, india
received november 29, 2021; in final form february 19, 2022; published march 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: vijay souri maddila, e-mail: vijaysouri.maddila123@gmail.com

abstract
the circumstances and factors which determine a volcanic explosive eruption are unknown, and currently there is no effective way to determine the end of a volcanic explosive eruption. at present, the end of an eruption is determined either by generalized standards or by measurements unique to the volcano. we investigate the use of supervised machine learning techniques such as support vector machine (svm), random forest (rf), logistic regression (lr) and gaussian process classifier (gpc), and create a decisiveness index d to assess the uniformity of the groups provided by these machine learning models. we find that the end-date obtained by seismic data categorization is two to four months later than the end-dates determined by the final instance of visible eruption for both volcanic systems. likewise, measurement systems and measurement technology become key elements in the seismic data analysis. the findings are consistent across models and correspond to previous, broad definitions of eruption. the obtained classifications demonstrate a more significant relationship between eruptive activity and visual observations than database records of eruption start and end times. our research presents a new measurement-based categorization technique for studying volcanic eruptions, which provides a reliable tool for determining whether or not an eruption has stopped without the need for visual confirmation.

1. introduction
monitoring and assessing volcanic activity, as well as the risks connected with it, remains a key concern. according to the strategy offered by the united nations (u.n.), it is evident that significant advancements in effective methods, inventions and instruments are necessary if society is to anticipate such problems [1]. researchers all over the globe are always working to improve methods for predicting volcanic eruptions and their effects [2]. the recorded eruption of the volcan de fuego volcano, with an index of 3 on the volcanic explosivity index (vei 3) scale, killed some 300 people. volcanic eruptions have been a hazard to all living organisms, including humans, from the beginning of time, and owing to their geographical positions, numerous cities and towns are still at high risk of volcanic eruption [3]. seismic sensors can be used to monitor and measure the seismic activity that occurs when magma interacts with its surroundings. even a small functional change in the seismic measurements allows us to forecast the likelihood of eruption. long-period events, tremors, explosions, volcano-tectonic and hybrid events are the most common volcano-seismic patterns [3]. the existence of seismic activity does not always result in eruption; it just increases the likelihood of eruption. like seismic activity, eruptions are inherently probabilistic [4]. it is critical to characterize the seismic signals associated with magma movement and eruption. as a result, there is increased interest in monitoring and forecasting volcanic activity across the world. a monitoring context can determine the end of an eruption in two ways: first, if there has been no sign for around three months [5] and, second, by an increase or reduction in seismic amplitude [6].
volcanic monitoring systems must make various activity and reaction decisions on varying timescales. finding the threshold value that signals heightened volcanic behaviour is a key question that pertains to the entire course of volcanic activity from beginning to end. creating appropriate models or techniques to process the seismic activity leads to a better understanding of large-scale volcanic processes [7].
machine learning (ml) is a branch of artificial intelligence that focuses on using data and algorithms to imitate the way humans learn. in data mining initiatives, algorithms are trained to build classifications or predictions, revealing significant insights. ml is a fundamental component of the rapidly expanding field of data science, and as big data continues to grow and evolve, so will market interest in ml [8]. volcanic systems have some relevant similarities with engineered systems: they may be described as "high-reliability" systems, in which failure (i.e., eruption) is unusual rather than routine, and in which the number of failure mechanisms is unknown or insufficiently described [9]. the use of ml techniques in seismology is a fairly new discipline. supervised classification algorithms were previously applied to volcano-seismic data, with an emphasis on detecting and distinguishing seismographic events in unprocessed waveform data [10]. the work in [11] utilized deep learning to detect ground deformation in sentinel-1 data, and article [12] uses logistic regression to predict volcanic eruptions from so2 measurements obtained with the ozone monitoring instrument. in this study, the timing of volcanic eruptions is predicted using ml techniques such as random forest, svm, logistic regression and gaussian process classifier. two volcano datasets, including the kaggle volcanic eruption dataset, were employed to do this work.

2. models implemented
2.1. support vector machine (svm)
svms (figure 1) are supervised ml models that analyse data for prediction purposes. the svm algorithm's objective is to find the separating hyperplane with the largest margin, i.e., the maximum separation between the variables of the two categories. increasing the margin gap provides a buffer, allowing future data points to be classified with more confidence [13].
2.2. random forest (rf)
a random forest (figure 2) is an ml method for tackling regression and classification tasks. as the number of trees increases, so does the accuracy of the result. the 'forest' of the rf approach is developed using bagging or boosting sampling. bagging is an ensemble meta-algorithm that enhances the performance of ml techniques. for classification problems, the random forest output is the class picked by the majority of trees; for regression tasks, the mean forecast of the individual trees is returned [13].
2.3. logistic regression
logistic regression (figure 3) is a statistical model that, in its most fundamental form, models a binary dependent variable using a logistic function. this may be broadened to describe a number of classes, such as determining whether an image contains a cat, dog, lion or other animal; each detected object in the image would be assigned a probability between 0 and 1, with the probabilities summing to one [14].
2.4. gaussian process classifier (gpc)
a gaussian process (figure 4) is a collection of (infinitely many) random variables such that every finite linear combination of them has a multivariate normal distribution. gaussian processes are useful in statistical modelling because they inherit properties of the normal distribution.
figure 1. support vector machine (svm). figure 2. random forest. figure 3. logistic regression.
2.5. implementation
for a day to be classified as eruptive, a rolling arithmetic mean of the categorization is utilized as a quantitative screening criterion. every particular day in the time series is categorized separately from the others. we chose a 7-day eruptive categorization filter, which requires 7 consecutive days of eruptive classification, to highlight the large-scale variations in classification. when this filter is applied to the model output, the categorization of data as eruptive is more conservative than when the results are left unfiltered [15]. training periods with both non-eruptive and eruptive data are chosen with care. the classifier is constructed on a fraction of the dataset and afterwards evaluated on the remainder; after the training has been completed, it is validated using substantial amounts of new data (figure 5). we chose periods that did not intersect with the start and end dates given by the global volcanism program (gvp), because we intended to treat the timeframe of the changeover between eruptive and non-eruptive activity independently. feature extraction is the process of finding variables that will be used as inputs into the ml models [16]-[20]. figure 6 depicts the process of extracting features from raw seismic information; the gathered data sets are fed into the ml algorithms. raw waveform data is used to detect events. we extract characteristics such as peak amplitudes and band ratios from each event waveform. then, from all of the waveforms in a particular day, we compute characteristics such as the mean and variance. the resultant time series are sent into an ml classifier as input.
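the per-day feature computation and the 7-day rolling filter described in section 2.5 can be sketched as follows; the event_features helper, the band arrays and the raw daily labels are hypothetical placeholders, not the authors' implementation.

```python
# hedged sketch of section 2.5: per-event features, then a 7-day
# rolling-mean filter over raw daily classifications (1 = eruptive day).
import numpy as np
import pandas as pd

def event_features(waveform, low_band, high_band):
    """peak amplitude and band-energy ratio for a single event waveform."""
    peak = np.max(np.abs(waveform))
    band_ratio = np.sum(np.square(low_band)) / np.sum(np.square(high_band))
    return peak, band_ratio

rng = np.random.default_rng(0)
daily = pd.Series(rng.integers(0, 2, size=120))  # hypothetical daily labels

rolling = daily.rolling(window=7).mean()  # rolling arithmetic mean
eruptive = rolling.eq(1.0)                # needs 7 consecutive eruptive days
print(int(eruptive.sum()), "days pass the conservative 7-day filter")
```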
3. results
we independently constructed four unique classification models for each volcanic region, with every modelling approach trained and validated on each volcano sequentially. training a model on a variety of earthquake recordings can aid the analysis; however, a more broadly applicable model would require datasets from a much greater variety of volcanic settings, to guarantee that the non-eruptive as well as eruptive distributions are well represented by the ml models. the investigation could thus be extended by training a model on several distinct seismic datasets, which would yield a fairly general classification model.
figure 4. gaussian process classifier. figure 5. the training and testing framework for supervised multi-class classification models. figure 6. the architecture for extracting features from raw seismic data. figure 7. dataset used for model implementations.
the first row in the dataset screenshot above (figure 7) provides the dataset column names, while the subsequent rows contain the values. the authors used this dataset to train all ml algorithms and then supplied test data in order to gauge classification performance: 80 % of the dataset records were used to train the ml algorithms and 20 % to determine classification accuracy. the dataset is imported into a purpose-built application that displays records from the dataset; string values need to be replaced with numeric values and missing values with 0, so a 'pre-process dataset feature extraction' step is used to turn the dataset into a normalized format. once all records have been converted to numeric values, there are a total of 23412 records, with 18729 used to train the ml algorithms and 4683 used to test them. with both train and test data available, the algorithms are run independently on the training set using the proposed application. after training, the svm model achieves 54 % accuracy, while the logistic regression, random forest and gaussian process classifier achieve 55 %, 99.74 % and 55 % accuracy, respectively. the x-axis in the accuracy graph (figure 8) indicates the algorithm name, while the y-axis reflects the accuracy of those algorithms; from the graph we can infer that random forest produces superior results. a test file is then submitted, and the program detects eruption activity based on the provided time data; the volcano test data and the predicted outcome are shown as 'no eruption identified' or 'eruption detected' following the square bracket. when the classifier sees a magnitude value of more than 6.5 (figure 9), it classifies that record time as 'eruption activity identified'.
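the 80/20 split and the four-classifier comparison reported above can be sketched as follows; scikit-learn is an assumption (the paper does not name its tooling), and the random feature matrix stands in for the authors' pre-processed dataset.

```python
# hedged sketch of the training/testing procedure in section 3; data,
# shapes and library choice are placeholders, not the authors' code.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.metrics import accuracy_score

x = np.random.rand(1000, 8)         # hypothetical daily seismic features
y = np.random.randint(0, 2, 1000)   # 1 = eruptive day, 0 = non-eruptive

x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=0.2, random_state=0)

models = {
    "svm": SVC(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=100),
    "gaussian process": GaussianProcessClassifier(),
}
for name, model in models.items():
    model.fit(x_tr, y_tr)
    print(f"{name}: {accuracy_score(y_te, model.predict(x_te)):.2%}")
```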
4. conclusions
ml computations on seismic time series can accurately categorize general patterns of both eruptive and non-eruptive behaviour. this is the first study to utilize ml techniques to categorize typical seismic situations as eruptive or non-eruptive using seismic data alone. we develop a decisiveness index d to assess the eruptive state classification based on grouping consistency, which is comparable across datasets. in terms of eruptive classification, our models demonstrate good agreement with visible evidence of eruption, such as ash discharges. the end date of the eruption is determined to be 60-120 days after the end date stated in the gvp. in the absence of distinct visual observations, a mix of eruptive and non-eruptive data can be utilized in conjunction with seismic signals to estimate when the eruption will stop. feature importance methods discovered minimal agreement among the major seismic features used as model data sources. more study is needed, utilizing a vast number and diversity of datasets, to determine whether these fundamental features are consistent across earthquakes, or across volcanoes with roughly identical eruption regimes or structural settings.
figure 8. accuracy chart for trained data with respect to algorithms. figure 9. eruption activity prediction result.
references
[1] m. malfante, m. dalla mura, j. p. métaxian, j. i. mars, o. macedo, a. inza, machine learning for volcano-seismic signals: challenges and perspectives, ieee signal processing magazine, 35(2) (2018), pp. 20-30. doi: 10.1109/msp.2017.2779166
[2] s. surekha, k. p. satamraju, s. s. mirza, a. lay-ekuakille, a collateral sensor data sharing framework for decentralized healthcare systems, ieee sensors journal, 21(24) (2021), pp. 27848-27857. doi: 10.1109/jsen.2021.3125529
[3] v. gavini, j. lakshmi, a robust ct scan application for prior stage liver disorder prediction with googlenet deep learning technique, arpn journal of engineering and applied sciences, 16(18) (2021), pp. 1850-1857.
[4] j. a. power, s. d. stihler, b. a. chouet, m. m. haney, d. m. ketner, seismic observations of redoubt volcano, alaska - 1989-2010 and a conceptual model of the redoubt magmatic system, journal of volcanology and geothermal research, 259 (2013), pp. 31-44. doi: 10.1016/j.jvolgeores.2012.09.014
[5] s. h. ahammad, m. z. u. rahman, l. k. rao, a. sulthana, n. gupta, a. lay-ekuakille, a multi-level sensor based spinal cord disorder classification model for patient wellness and remote monitoring, ieee sensors journal, 21(13) (2021), pp. 14253-14262. doi: 10.1109/jsen.2020.3012578
[6] v. gavini, g. r. jothi lakshmi, an efficient machine learning methodology for liver computerized tomography image analysis, international journal of engineering trends and technology, 69(7) (2021), pp. 80-85. doi: 10.14445/22315381/ijett-v69i7p212
[7] national academies of sciences, engineering, and medicine, volcanic eruptions and their repose, unrest, precursors, and timing, national academies press, 2017. doi: 10.17226/24650
[8] what is machine learning?, ibm india. online [accessed 17 march 2022] https://www.ibm.com/in-en/cloud/learn/machine-learning
[9] a. maggi, v. ferrazzini, c. hibert, f. beauducel, p. boissier, a. amemoutou, implementation of a multistation approach for automated event classification at piton de la fournaise volcano, seismological research letters, 88(3) (2017), pp. 878-891. doi: 10.1785/0220160189
[10] m. malfante, m. dalla mura, j. i. mars, j. p. métaxian, o. macedo, a. inza, automatic classification of volcano seismic signatures, journal of geophysical research: solid earth, 123(12) (2018), pp. 10-645. doi: 10.1029/2018jb015470
[11] n. anantrasirichai, j. biggs, f. albino, p. hill, d. bull, application of machine learning to classification of volcanic deformation in routinely generated insar data, journal of geophysical research: solid earth, 123(8) (2018), pp. 6592-6606. doi: 10.1029/2018jb015911
[12] v. j. flower, t. oommen, s. a. carn, improving global detection of volcanic eruptions using the ozone monitoring instrument (omi), atmospheric measurement techniques, 9(11) (2016), pp. 5487-5498. doi: 10.5194/amt-9-5487-2016
[13] a. tarannum, l. k. rao, t. srinivasulu, a. lay-ekuakille, an efficient multi-modal biometric sensing and authentication framework for distributed applications, ieee sensors journal, 20(24) (2020), pp. 15014-15025. doi: 10.1109/jsen.2020.3012536
[14] a. tarannum, t. srinivasulu, an efficient multi-mode three phase biometric data security framework for cloud computing-based servers, international journal of engineering trends and technology, 68(9) (2020), pp. 10-17. doi: 10.14445/22315381/ijett-v68i9p203
[15] g. f. manley, d. m. pyle, t. a. mather, m. rodgers, d. a. clifton, b. g. stokell, g. thompson, j. m. londoño, d. c. roman, understanding the timing of eruption end using a machine learning approach to classification of seismic time series, journal of volcanology and geothermal research, 401 (2020), p. 106917. doi: 10.1016/j.jvolgeores.2020.106917
[16] henrik ingerslev, soren andresen, jacob holm winther, digital signal processing functions for ultra-low frequency calibrations, acta imeko, 9(5) (2020), pp. 374-378. doi: 10.21014/acta_imeko.v9i5.1004
[17] lorenzo ciani, alessandro bartolini, giulia guidi, gabriele patrizi, a hybrid tree sensor network for a condition monitoring system to optimise maintenance policy, acta imeko, 9(1) (2020), pp. 3-9. doi: 10.21014/acta_imeko.v9i1.732
[18] mariorosario prist, andrea monteriù, emanuele pallotta, paolo cicconi, alessandro freddi, federico giuggioloni, eduard caizer, carlo verdini, sauro longhi, cyber-physical manufacturing systems: an architecture for sensors integration, production line simulation and cloud services, acta imeko, 9(4) (2020), article 6. doi: 10.21014/acta_imeko.v9i4.731
[19] jiayu luo, xiangyu kong, changhua hu, hongzeng li, key performance-indicators-related fault subspace extraction for the reconstruction-based fault diagnosis, measurement, 186 (2021), pp. 1-12. doi: 10.1016/j.measurement.2021.110119

general sensors network application approach
acta imeko, issn: 2221-870x, march 2023, volume 12, number 1, 1-5
martin koval1, marek havlíček1, jiří tesař1
1 czech metrology institute, okružní 31, 63800 brno, czech republic
section: research paper
keywords: sensor network; uncertainty measurement; artificial intelligence
citation: martin koval, marek havlíček, jiří tesař, general sensors network application approach, acta imeko, vol. 12, no. 1, article 13, march 2023, identifier: imeko-acta-12 (2023)-01-13
section editor: daniel hutzschenreuter, ptb, germany
received november 18, 2022; in final form march 2, 2023; published march 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: martin koval, e-mail: mkoval@cmi.cz

abstract
the paper describes the general approach for sensor networks and deals with the principal components of sensor networks and their architecture, as well as opportunities for the implementation of current and new technologies. the paper also illustrates an example of the application of the en 12830:2018 standard.

1. what is the sensor network?
we can imagine a sensor network (sn) as a group of sensors that are interconnected in different ways and create a system that can be understood as a backbone of the processes which need to be monitored and/or optimized. in a system where many different sensors provide a complex overview of the ongoing processes, a model representing such an ensemble can be created. we can refer to this model as a digital twin [1] of the system. the digital twin enables real-time system monitoring, process-history analysis and effective prediction of future events, which can be forecast with the use of advanced algorithms and ai. the result of such an interplay between the sensor network and effective feedback control is the minimization of negative events within the system.

2. sensor network structure
the basis of the sensor network comprises the input, signal processing and output. these three areas can be further extended according to their intended use in particular processes. the basic type of sn consists of sensors of the same type with a fixed network topology. these sensors usually send data to the data processing unit at fixed time intervals. the output may consist of the processed measured data with metadata which store additional information about the process history. an example of such a process could be a system for temperature monitoring of sensitive goods according to bs en 12830:2018. many more factors have to be taken into account in complex systems, e.g., dynamic topology of the network, big data, different sensor types, location of data processing, complex mathematical algorithms, prediction, security, etc. these factors will be discussed in more detail in the following sections.
2.1. sensor network inputs
the sensor network may consist of many devices/sensors which can widely differ in number, complexity, signal types and data formats. a handful of simple sensors as well as thousands of sophisticated devices can both form a structure which can be considered a sensor network. considering the amount of data collected and processed in such a network, we may face challenges with processing as the amount of data increases; in such a case we talk about big data. big data can be well described by the 5 v model: value, volume, veracity, variety and velocity. each "v" has its specific influence on the sn architecture [2]. the value represents the usefulness of the data. in the sn data hierarchy, the data priority is set from the most to the least important. there is an evident difference in value between the real measured data, which are used for calculations, and the metadata of the measurement data. the value provides very useful information which can help to interpret data more accurately. one important category of information comes from, e.g., alarms. the alarms also have their own hierarchy, which defines their roles in informing about the limits of sensors and their correct functioning.
the volume of the data generated in the sn depends on the recording frequency and the number of variables which are recorded. if just the measured data with the corresponding metadata are stored, the amount of such data can typically reach tb or pb levels. in the case that further data such as text and graphics are transferred, the volume of the data can exceed eb levels and can go even beyond that.
the veracity represents the quality of the information, its uncertainty or accuracy. the information can be inherently inconsistent, incomplete, ambiguous, or its reliability can be reduced. these facts form a set of requirements which have to be applied so that it can be decided which data can be used for further analysis.
the variety in the sn describes the different forms of information. the data coming from various types of sensors and appliances can be transferred in either structured or unstructured form. in the first case, the subsequent separation and analysis is relatively easy. the situation is diametrically different for unstructured data: the data mining, sorting and analysis is more complex, which may result in errors in the extracted data sets.
the velocity in the sn is the key parameter which describes the speed of the data transfer and processing. combinations of various sensors and data structures influence the final data transfer and processing velocity, which correlates with the computational power needed. in some applications, real-time process monitoring is necessary, such as in medical applications or nuclear power plants, where any delay may have fatal consequences.
one of the important aspects which play a crucial role in inputs is the configuration of the sensor network topology [3]. the different times of data delivery from distributed sensors also have to be considered. another important input factor is the dynamic change of participating sensors: sensors may be disabled, replaced, maintained, damaged or exposed to disturbances, which can possibly have a significant effect on the whole sensor network.
2.2. data processing
data processing can be considered the core of the sensor network. this task can be divided into separate fields which need an individual approach. the data processing can include the real measured data as well as predicted data with their corresponding uncertainty. data prediction has already become an integral part of state-of-the-art sensor networks.
data prediction
the effectiveness and reliability of the data prediction are crucial for the modelling of specific missing or corrupted data. reliable data prediction on different timescales (minutes, hours, days, etc.) and information about the corresponding uncertainties are necessary. another aspect that plays an important role is data missing for various reasons, such as service, calibration or the failure of sensors. historical data of sensor networks can also be used for the prediction of the network topology in the near future.
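as a minimal illustration of the missing-data prediction discussed above, the sketch below reconstructs a gap in a hypothetical temperature series with simple time-based interpolation; ai-based predictors would follow the same pattern of learning the fill-in from historical data.

```python
# hedged sketch: filling a gap (e.g. a sensor out for calibration) in a
# hypothetical hourly temperature series via time-based interpolation.
import numpy as np
import pandas as pd

t = pd.date_range("2023-01-01", periods=48, freq="h")
temp = 20 + 2 * np.sin(np.arange(48) * 2 * np.pi / 24)  # daily cycle
series = pd.Series(temp, index=t)
series.iloc[30:36] = np.nan                 # six missing hourly readings

filled = series.interpolate(method="time")  # simple model-free prediction
print(filled.iloc[28:38].round(2))          # gap is now reconstructed
```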
a careful data analysis may help to predict the situations which will occur during expected events and to prepare adequate measures to cope with them, such as maintenance, overloads, etc. these data can be modelled with the use of various algorithms based on artificial intelligence (ai). nowadays, progress in ai development is accelerating. it mainly focuses on three areas which can be characterized as learning, reasoning and self-correction; all these aspects can be directly applied in sns. machine learning focuses on data mining and on creating rules for the conversion of data into useful information. machine reasoning aims at searching for the most convenient algorithm from the family of available solutions and at its implementation in the particular process. automated self-correction mechanisms are employed in many processes in order to reach the best results. the ai can be divided into different categories based on its capabilities: artificial narrow intelligence (ani), artificial general intelligence (agi) and artificial super intelligence (asi). the ani is frequently used in different applications; the agi and the asi are still subjects of research. another categorization of ai into four classes is based on its functionality (see table 1) [4], [5].
environment
another important factor for the data processing is the environment where the data are physically processed. current technology enables the use of a variety of different virtual environments such as cloud computing, remote servers or special-purpose built-in computers. the real location of the data processing also influences the quality of the sensor network. in the case of virtual environments, insufficient computational power is not necessarily the limiting factor. one of the most important parts of the sn is the cyber security of the environment, communication channels and the sensors themselves. from the security point of view, the sn begins at the sensors. if data reliability shall be guaranteed, then all sensors have to be secured from the hw and sw points of view [6]. hw security is essential in order to prevent any unauthorized change of parts containing the sw, which could possibly compromise the measured data. hw security can be realized in a non-destructive or destructive way. in the case of a destructive solution, any unauthorized access to the sensor/device results in its destruction [7]; any fraudulent data manipulation using such a damaged sensor is either physically impossible or technically challenging. the non-destructive solutions typically involve different ways of sealing, which indicate unauthorized access to hw parts. it is worth noting that the availability of technologies which are capable of substituting hw parts is higher than in the past.
table 1. ai basic categories overview [4], [5].
- reactive ai (type ani): effective for simple classification and pattern recognition tasks; incapable of analysing scenarios that include imperfect information or require historical understanding. example: sorting machines.
- limited memory (type ani): can handle complex classification tasks and use historical data to make predictions; capable of completing complex tasks (e.g., autonomous driving); needs big amounts of training data to learn tasks; vulnerable to outliers or adversarial examples. example: self-driving cars.
- theory of mind (type agi): should be able to provide results based on an individual's motives and needs; the training process would need a lower number of examples than type ani. example: under research.
- self-aware ai (type asi): should be aware of the mental state of other entities and itself; it is expected to outperform human intelligence. example: under research.
in the case that sensors contain sw, it is necessary to deal with security from the sw point of view, which shall include a basic minimum of integrity checking, authenticity and alarms. in the case of more advanced sensors with bi-directional communication, where remote control of the sensors is possible, calibration parameters are available, etc., it is essential to secure access rights. if the system parameters can be changed, it is good practice to use an event logger in order to guarantee the traceability of changes. one of the important factors which help with the data analysis is the presence of metadata related to the particular data file. these metadata contain additional information which may be essential for a subsequent analysis. one of the most effective ways of protecting metadata is a blockchain list of records [8]. for the network itself, communication is an essential prerequisite. hundreds of different communication solutions, including protocols and interfaces, are now available on the market. criteria which shall be considered during the sn communication design include energy demand, network type, compatibility, security, open source, etc. the sn design shall also include a risk analysis [9].
uncertainty evaluation methods
uncertainties in the field of sensor networks represent a crucial aspect which should always be taken into account. each sensor should be considered as an independent device placed in a certain environment, and it should be treated as such. the uncertainty evaluation in the field of metrology is an integral part of all processes where measurement is realized. depending on the processes and the field of measurement, the models used can vary substantially. in the case of sns, the uncertainty evaluation can be challenging. relatively simple sns working with basic measurement models and consisting of a few types of sensors represent the case in which the standard procedures can be applied. in the case that the data are collected at regular intervals and only one process is monitored, it is possible to use the law of propagation of uncertainties (lpu) [10], monte carlo [11], etc. in the case of more sophisticated sns, it is necessary to use mathematical models which are suitable for the particular situation. in the case that some data are not available at the moment or were removed from the data set, models like bayesian statistical models [12], fuzzy theory, etc., should be used. an overview of commonly used methods for the uncertainty evaluation is shown in table 2.
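as a minimal illustration of the monte carlo method [11] mentioned above, the sketch below propagates the uncertainties of two hypothetical temperature sensors through a simple averaging measurement model; the means and standard uncertainties are placeholders.

```python
# hedged monte carlo sketch in the spirit of jcgm 101 [11]: propagating
# sensor uncertainties through a hypothetical model y = (t1 + t2) / 2.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000                              # number of monte carlo trials

t1 = rng.normal(21.30, 0.05, n)          # sensor 1: mean, std. uncertainty
t2 = rng.normal(21.42, 0.08, n)          # sensor 2: mean, std. uncertainty

y = (t1 + t2) / 2                        # measurement model
print(f"y = {y.mean():.3f}, u(y) = {y.std(ddof=1):.3f}")
low, high = np.percentile(y, [2.5, 97.5])
print(f"95 % coverage interval: [{low:.3f}, {high:.3f}]")
```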
table 2. example of using uncertainty evaluation methods [2], [9]-[12]:
- lpu: general uncertainty evaluation for complete datasets.
- monte carlo: uncertainty evaluation for asymmetric and inadequate datasets.
- shannon's entropy: determining the amount of missing information on average in a random source.
- fuzziness/fuzzy theory: processing of vague or ambiguous datasets for complex models.
- bayesian statistical models: particularly useful when there is information about the true value of the measurand prior to obtaining the results of a new measurement.
figure 1. an example of a simple sensor network according to en 12830:2018. figure 2. an example of a complex sensor network. figure 3. an example of a complex sensor network - inputs.
2.3. sensor network output
the output of the sn depends on the particular application. the examples shown in figure 1 to figure 5 imply that the output can consist not only of the measured data but can also contain metadata which enable more efficient data processing. the metadata can contain data directly recorded by the sensors, or they can be generated during the data processing (e.g., the actual topology of the sn), alarm analyses, event logger messages, sensor ids, etc. further utilization of the data depends on the particular application. the outputs can be used in different ways, such as process indicators, triggers (process breaks, notifications) or for analyses. alternatively, the output can be directly in a machine-readable format which can be used by another sn with minimal changes in the configuration.
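one hypothetical, minimal form of such a machine-readable output record is sketched below, bundling a measured value with the metadata mentioned in this section (sensor id, alarms, event-log entries, topology); all field names are illustrative and not prescribed by en 12830:2018.

```python
# hedged sketch of a machine-readable sn output record; every field name
# is illustrative, not taken from en 12830:2018 or a real system.
import json

record = {
    "sensor_id": "probe-07",
    "timestamp": "2023-03-02T10:15:00Z",
    "measurement": {"quantity": "temperature", "value": -18.4,
                    "unit": "degC", "uncertainty": 0.3},
    "metadata": {
        "alarms": [{"type": "limit", "threshold": -15.0, "active": False}],
        "event_log": ["2023-03-01T10:12:00Z calibration parameters updated"],
        "topology": {"base_station": "bs-01", "via": "gateway-02"},
    },
}
print(json.dumps(record, indent=2))  # ready for another sn to consume
```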
specific requirements focused on functions, data protection, and safety measures are listed for specific types and arrangements. the requirements are based on welemc guide 7.2 [14] and divided into the following blocks: g basic requirements l specific sw requirements for long-term storage, t transmission of relevant information via communication networks, s sw separation, d download of relevant sw. basic requirements have to be met for all types of p1 and p2. requirements relevant to block t have to be met for type p3. it is strongly recommended to fulfil the requirements of iso/iec 27001 and take into account the requirements for control of the user and communication interface. complete overview of requirements according to en 12830:2018 is shown in table 3. a complex sn design with requirements applications may look like the one shown in figure 7. figure 4. an example of the complex sensor network – data processing. figure 5. an example of the complex sensor network – final data. figure 6. an example of the complex sensor network according to according to en 12830:2018. table 3. list of requirements according to en 12830:2018. blocks requirements basic requirements sw identification, influence via user interface, influence via communication interface, protection against accidental, unintentional and intentional changes, parameter protection, sw authenticity and presentation of results. specific sw requirements for long-term storage completeness of measurement data stored, protection against accidental of unintentional changes, integrity of data, authenticity of measurement data stored, confidentiality of keys, retrieval of stored data, automatic storing, storage capacity and continuity. transmission of relevant information via communication networks completeness of transmitted data, protection against accidental or unintentional changes, integrity of data, authenticity of transmitted data, confidentiality of keys, handling of corrupted data, transmission delay, availability of transmission services. sw separation realization of sw separation, mixed indication, protective sw interface; download of relevant sw download mechanism, authentication of downloaded sw, integrity of downloaded sw, traceability of relevant sw download acta imeko | www.imeko.org march 2023 | volume 12 | number 1 | 5 the p1 requirements are applied to sensors, whether they are connected to the base station by hardwired or wireless communication technologies and can also be, for example, on a gateway. devices, where p1 requirements are applied, are mostly devices where complex computational power is not required because the tasks of the device are mostly single-purposed such as measurement, data transfer, or storage. the application of p2 requirements can be seen at the base station, where there may be a need for final data processing using database systems, or use of static processing where operating systems are used. the p3 application is only for the use of clouds, where partial responsibility for data security is assumed by the cloud operator, but the manufacturer must be careful how access rights are implemented in combination with the applied user and communication interfaces. the requirements of block l, are applied in cases where is a need to deal with data storage. it can be sensors or gateways, where it is mostly temporary storage. or it may be longer-term storage in the base station where data needs to be stored for the intended use. 
the requirements of block t are applied in cases where data transmission is involved. figure 7 is shown data transmission between digital probes, base station, gateway, and information cloud. the requirements of block s (sw separation) are applied in cases where it is necessary to distinguish between relevant and non-relevant sw, often it is ots (off-the-shelf) type sw, which was created for general purposes (e.g. various libraries, drivers, etc.). this type of sw can be in base station. the requirements of block d can be applied to almost any sw where an update is required, it can be sensors or even sw in the base station, but it should be ensured that the sw cannot be repaired or updated by an unauthorized person. 4. conclusion the sensor networks are becoming an inherent part of many upcoming technologies, including smart cities, smart grids, complex processes monitoring in industry, autonomous driving, medicine and many other applications. one of the many examples of sn is the application of en 12830:2018, for the purpose of process monitoring. when we compare the general approach with what the standard addresses, we can see shortcomings that do not address state of the art options, such as the use of ai, or issues related to network topology. while the standard provides a relatively appropriate approach, it should be pointed out that as new technologies evolve, a wider range of possible applications of technology in sn need to be addressed. together with other technologies such as the ai and with the utilization of the big data, the sn is becoming an important tool for effectivity optimisation of many processes. the sn has helped to push the limits in metrology towards new effective algorithms in the ai or in challenges in the uncertainty evaluation related to the utilization of the ai as well as in the implementation of solutions for digital transformation. acknowledgement this work was funded by the institutional subsidy for longterm conceptual development of a research organization granted to the czech metrology institute by the ministry of industry and trade. project number: utr 22e601104. references [1] a. deuter, f. pethig, the digital twin theory, project: asset administration shell (industry 4.0), munich, 2019. doi: 10.30844/i40m_19-1_s27-30 [2] r. h. hariri, e. m. fredericks, k. m. bowers, uncertainty in big data analytics: survey, opportunities, and challenges, journal of big data, 6 (2019), art. no. 44. doi: 10.1186/s40537-019-0206-3 [3] h. qi, s. sitharama iyengar, k. chakrabarty, distributed sensor networks a review of recent research, journal of the franklin institute, vol. 338, 2001, no. 6, pp. 655-668. doi: 10.1016/s0016-0032(01)00026-6 [4] h. khan, types of ai | different types of artificial intelligence systems, 2021. online [accessed 13 match 2023] https://www.researchgate.net/publication/355021812 [5] l. tucci, a guide to artificial intelligence in the enterprise, r. h. hariri, e. m. fredericks, k. m. bowers, uncertainty in big data analytics: survey, opportunities, and challenges, tech targetsearch enterprise ai: e-guide, 2021. online [accessed 13 match 2023] https://www.techtarget.com/. [6] m. m. r. monjur, j. heacock, j. calzadillas, m. d. s. mahmud, j. roth, k. mankodiya, e. sazonov, q. yu, hardware security in sensor and its networks, frontiers in sensors, 3 (2002). doi: 10.3389/fsens.2022.850056 [7] f. nekoogar, f. dowla, r. twogood, s. 
lefton, secure rfid tag or sensor with self-destruction mechanism upon tampering, united states patent: document id: us 20150339568 a1, 2016. [8] m. moni, w. melo, jr., d. peters, r. mochado, when measurements meet blockchain: on behalf of an inter-nmi network, national library of medicine, 2021, doi: 10.3390/s21051564 [9] iso/iec, iso/iec 27005:2018 information technology security techniques information security risk management, international organization for standardization, june 2011. [10] lum, bipm, iec, ifcc, ilac, iso, iupac, iupap, and oiml. guide to the expression of uncertainty in measurement, jcgm 100:2008, gum 1995 with minor corrections. bipm, 2008. [11] monte carlo, bipm, iec, ifcc, ilac, iso, iupac, iupap, and oiml. supplement 1 to the ‘guide to the expression of uncertainty in measurement’ – propagation of distributions using a monte carlo method, jcgm 101:2008. bipm, 2008. [12] bayesian statistical models, bipm, iec, ifcc, ilac, iso, iupac, iupap, and oiml. guide to the expression of uncertainty in measurement — part 6: developing and using measurement models, jcgm gum-6:2020. bipm, 2020. [13] en 12830:2018 temperature recorders for the transport, storage and distribution of temperature sensitive goods [14] welmec guide 7.2, online [accessed 13 march 2023] http://welmec.org. figure 7. an example of the complex sensor network according to according to en 12830:2018 with application of requirements. https://doi.org/10.30844/i40m_19-1_s27-30 https://doi.org/10.1186/s40537-019-0206-3 https://doi.org/10.1016/s0016-0032(01)00026-6 https://www.researchgate.net/publication/355021812 https://www.techtarget.com/ https://doi.org/10.3389/fsens.2022.850056 https://doi.org/10.3390/s21051564 http://welmec.org/ calculating vessel capacity from the neolithic sites of lugo di grezzana (vr) and riparo gaban (tn) through 3d graphics software acta imeko issn: 2221-870x march 2022, volume 11, number 1, 1 10 acta imeko | www.imeko.org march 2022 | volume 11 | number 1 | 1 calculating vessel capacity from the neolithic sites of lugo di grezzana (vr) and riparo gaban (tn) through 3d graphics software andrea tavella1, marika ciela1, paolo chistè1, annaluisa pedrotti1 1 labaaf, laboratorio bagolini archeologia archeometria fotografia, university of trento, via tommaso gar 14, 38122 trento, italy section: research paper keywords: vessel capacity; ceramics; 3d models; blender; neolithic citation: andrea tavella, marika ciela, paolo chistè, annaluisa pedrotti, calculating vessel capacity from the neolithic sites of lugo di grezzana (vr) and riparo gaban (tn) through 3d graphics software, acta imeko, vol. 11, no. 1, article 7, march 2022, identifier: imeko-acta-11 (2022)-01-07 section editor: fabio santaniello, university of trento, italy received march 7, 2021; in final form february 23, 2022; published march 2022 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: andrea tavella, e-mail: andrea.tavella@live.com 1. the ceramic record the study takes into account ceramic finds from the neolithic sites of lugo di grezzana and riparo gaban. both sites show a frequentation during early neolithic and play a key role for the understanding of neolithization process in the northern italy (figure 1). 
1.1. lugo di grezzana (vr)

the area to the south of the small town of lugo di grezzana, called locality campagne, is situated on a river terrace (300 m above sea level) along the valpantena, a short prealpine valley located in the lessini mountains [1]. the site was discovered in 1990 by fernando zanini and giorgio chelidonio. since the early nineties the area has been the object of systematic research undertaken by the archaeological heritage of the veneto region, in collaboration with the university of trento (b. bagolini laboratory – labaaf) from 1996 up until 2005 [2]. the first evidence is dated to the middle of the 6th millennium bc cal, while an intense occupation of the area is dated between 5300 and 5050 bc cal [3].

figure 1. the localization of lugo di grezzana (verona, italy) and riparo gaban (trento, italy).

based on the material culture [2]-[5], the site is mainly attributed to the fiorano culture, which is present in northern italy during the early neolithic and shows a typical homogeneity in vessel typology. the jug is possibly one of the most distinctive shapes of the fiorano culture (figure 2) and is often imported into contemporary cultures. nevertheless, numerous elements point to influences from other contexts, such as the vhò group, the adriatic impressed ware and the catignano culture [2], mainly connected with the supply of lessinian flint. the latter, thanks to its high quality, became the object of exchange par excellence and a sort of common denominator between the various groups of the early neolithic in northern italy between the middle of the 6th and the beginning of the 5th millennium bc [3], [6]. around 5000 bc cal, the occupation of the settlement seems to show a temporary interruption with the occurrence of colluvial episodes, while the last neolithic occupation, only scantily represented, is attested between 4900 and 4800 bc cal. during this period, the early geometric-linear style of the square mouthed pottery culture was already widespread in northern italy and is attested at the site contemporaneously with later aspects of the fiorano culture [3].

1.2. riparo gaban (tn)

the site of riparo gaban is located at piazzina di martignano, in a small hanging valley that runs parallel to the left side of the adige valley (270 m above sea level), a few kilometres north-west of trento [7].
the site, identified as a rock-shelter, was discovered in 1970 by a group of local amateurs as a part of the palaeoethnological activities of the museo tridentino di scienze naturali. the excavations were conducted under the technical direction of bernardino bagolini from 1972 to 1981, and by alberto broglio and stefan k. kozlowski from 1982 to 1985 for the mesolithic phases [8], [9]. the site is characterised by a complex stratigraphic evolution from the mesolithic to the middle bronze age, with a stratigraphic continuity between the castelnovian mesolithic and the local early neolithic deposits, the latter dated between the end of the 6th and the beginning of the 5th millennium bc. the site is one of the main pieces of evidence for the understanding of the first neolithic in trentino-alto adige and gives its name to the cultural group present in the adige valley during this period. unlike lugo di grezzana, and the fiorano culture in general, the main trait of the gaban group appears to be a strong mesolithic component, observed especially in the lithic and bone industries, together with extraordinary examples of mobiliary art that give the site not only the appearance of a simple rock-shelter but probably also a magical-religious connotation [8], [10]. from a typological point of view, the gaban group, despite a markedly autonomous framework, presents several connections with other cultural groups of the early neolithic, in particular with the isolino and vhò groups, and to a lesser extent with fiorano [11]. regarding the material culture identified at riparo gaban (figure 3), the stratigraphic evolution allows an older phase characterised by a strong presence of impressed ware to be distinguished from a more recent phase in which scratched pottery is better attested [11]. the neolithic occupation was interrupted from 4700 bc cal, documenting a possible phase of abandonment of the rock-shelter up to 2700 bc cal [8].

2. the calculation of volumetric capacity

the volumetric estimate of a vessel can be obtained mainly through three types of volume calculation: direct measurements, two-dimensional geometrical methods (manual calculation), and computer-assisted methods, the latter based on 3d models (automatic calculation). direct measurements are taken on the container itself and directly yield the volumetric capacity; these methods involve filling the vessel with a suitable material able to adapt to the internal profile. however, they cannot be applied to the entire ceramic record, both because usually only a limited percentage of vessels from archaeological excavations are complete or partially reconstructed, and for conservation issues [12]-[14]. the second method, manual calculation, is based on the decomposition of the vessel volume into basic forms (spheres, cylinders or truncated cones) calculated through mathematical formulae [15]-[22] (a minimal numerical sketch of this decomposition is given below). the archaeological drawing represents the starting point of this method and, unlike direct measurements, does not require the availability of the archaeological find. however, the degree of approximation represents a negative aspect, as the complex form of the vessel is transformed into a simplified form, although this depends on the geometric shapes used.
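as an illustration of the manual method just described, the inner profile of a vessel can be decomposed into a stack of truncated cones, one per pair of consecutive profile points. the following is a minimal sketch, assuming the inner profile has already been digitised as (height, radius) pairs in cm; the profile values below are hypothetical, not measured data from either site.

```python
import math

def frustum_volume(r1, r2, h):
    # volume of a truncated cone (frustum) with end radii r1, r2 and height h
    return math.pi * h * (r1 ** 2 + r1 * r2 + r2 ** 2) / 3.0

def profile_volume(profile):
    # approximate internal volume (cm^3) of a vessel from its inner profile,
    # given as (height, radius) pairs ordered from base to rim, by stacking
    # one frustum per pair of consecutive points
    return sum(
        frustum_volume(r1, r2, h2 - h1)
        for (h1, r1), (h2, r2) in zip(profile, profile[1:])
    )

# hypothetical inner profile of a small jug, in cm (illustrative only)
profile = [(0.0, 0.0), (0.5, 3.0), (6.0, 5.0), (10.0, 4.0), (11.0, 4.2)]
print(f"estimated capacity: {profile_volume(profile):.0f} cm^3 (= ml)")
```

the finer the profile is sampled, the closer this piecewise estimate gets to the volume of the true solid of revolution, which is the same principle exploited by the 3d approach described next.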
the last method, computer-assisted, is based on 3d models. here too, the volumetric capacity is obtained through measurements made directly on the archaeological drawings, exploiting the principle of symmetry. different software can be used, such as autocad®, rhinoceros™ and blender® [14], [23]. in addition, other suitable programs are available, such as kotyle© [24] and web applications like capacity [12], [25], [26]. in this study, the 3d graphics program of choice is blender® [27], since it is free and open source, which allows users to generate extensions in order to improve it. the estimate of the volumetric capacity relies on the 3d-print toolbox extension, although different add-ons are known to be effective as well [23]. regarding the study of the ceramic record, it is important to mention the 3d digitalization through photogrammetry of some of the pottery discussed in this paper. this work was carried out at tefalab (laboratorio di tecniche fotografiche avanzate, a unit of labaaf, university of trento) under the technical direction of paolo chistè.

figure 2. vessel from the neolithic site of lugo di grezzana (photo p. chistè – labaaf) [2].
figure 3. vessel from the neolithic site of riparo gaban (photo e. turco) [10].

regarding the study of volumetric capacity, only preliminary results are available for the site of lugo di grezzana [28], while for riparo gaban this aspect of the research has not been studied yet. at the present time, a systematic analysis evaluating the metric criteria of the ceramics of the fiorano culture has not yet been carried out [29]. however, for the neolithic of northern italy there is a typological classification of the vessels that distinguishes their morphology in relation to the profile, the diameter/height ratio (ø/h) and the size of the mouth [30]. nevertheless, this classification does not include the volumetric capacity parameter.

3. material and methods

the methodological protocol was applied to a selection of 48 archaeological drawings, of which 35 from lugo di grezzana and 13 from riparo gaban (figure 6 and figure 7). the analysed sample was chosen taking the typological classification into consideration. out of the total, 26 drawings illustrate whole artifacts, with a continuous profile from the rim to the bottom of the vessel (lugo di grezzana: 15 samples; riparo gaban: 11 samples), while the others are only partially preserved. for the latter, as in the previous study [28], it was therefore necessary to hypothesize the profile and the height of the vessel. to carry out this operation, the fragmented samples were integrated through the study of whole ceramic vessels belonging to the same typological class (figure 4). for both groups, and especially the latter, it is essential to keep in mind that a capacity estimate deduced from archaeological drawings has a different degree of accuracy: the manual drawing is based on one radial section and represents a two-dimensional shape of the vessel, not taking into account possible variations in the three-dimensional shape of the object [31].

the development of the operating methodology is based on three minimum requirements that the ceramics and drawings must satisfy:

• availability of the diameter and the internal profile;
• scale of representation;
• high-resolution drawing (d.p.i.).

the calculation of the volumetric capacity is carried out by importing each drawing into the 3d graphics program (blender version 2.92), providing the exact graphic resolution of the file (d.p.i.). this step is necessary in order to avoid any change in the original dimensions of the imported drawing, which would entail an incorrect estimate of the volume.
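for clarity, the relation between the pixel dimensions of the scanned drawing, its resolution and the real-world size of the vessel reduces to a one-line conversion; the d.p.i. value, the measured pixel count and the drawing scale below are illustrative assumptions, not values from the study.

```python
def pixels_to_cm(pixels, dpi, scale_denominator=1.0):
    # convert a pixel measurement on a scanned drawing into centimetres on the
    # real object: pixels / dpi gives inches on paper, * 2.54 gives cm on
    # paper, * scale_denominator undoes a reduction scale such as 1:6
    return pixels / dpi * 2.54 * scale_denominator

# hypothetical rim diameter measured on a 300 d.p.i. scan of a 1:6 drawing
print(round(pixels_to_cm(420, dpi=300, scale_denominator=6), 1), "cm")
```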
subsequently, a bézier curve is generated, modified along the x and y axes and divided into several segments, in order to trace the underlying drawing. after obtaining the 2d profile, it is necessary to generate a line (path) that corresponds to the rotation axis of the curve itself and to the midline of the archaeological drawing. once the rotation axis is fixed, the curve can be rotated 360 degrees. this procedure requires some options to be defined, namely: the cartesian axis to which the curve is oriented, the object around which the rotation takes place and, lastly, the number of segments the revolution is divided into (a greater number corresponds to a better graphic resolution and consequently a more accurate estimate of the volume).

figure 4. typological table with some samples used to hypothesize the profile and the height of the vessels: 1-3, 6, 8-11 from lugo di romagna (ra) [32]; 4-5, 7 from vhò di piadena-campo ceresole (cr) [33].

the essential step for obtaining the volume is the closure of the solid at the rim and at the base. once the solid is closed, the calculation of the volumetric capacity is performed automatically using the 3d-print toolbox add-on (available since version 2.67, released in may 2013); the volume is expressed in cm3 (figure 5).

the validity of the procedure was established during the formulation of the method, through the graphic reproduction and the volumetric calculation of a cylinder of known dimensions (r = 5 cm; h = 20 cm). this made it possible to calculate the absolute and relative error of the developed method, taking the tolerance into account. the latter arises from different causes, such as the inherent uncertainty regarding the measured object, the conservation status, the operator, the procedure and the measuring instrument used. taking these issues into account, a tolerance of about ± 1 mm was adopted:

absolute error (ea) = (volmax − volmin) / 2 = 70.6889 cm3
relative error (re) = ea / volavg = 0.0449
percentage error (pe) = re × 100 % = 4.49 %

the methodological approach was subsequently extended by considering two hypothetical types of content, a liquid and a solid one. for the liquid content, the capacity estimate was obtained by converting the measure from cm3 to ml (1 cm3 = 1 ml). for the solid contents, the weight (grams) of three types of cereals was calculated, namely whole barley, emmer and naked wheats, selected according to the data collected from the archaeobotanical analyses carried out for the site of lugo di grezzana [34]. the weights were estimated in relation to the bulk density of each kind of cereal (whole barley 0.61 ÷ 0.69 g/ml, emmer 0.47 g/ml and naked wheats 0.54 g/ml) [35], [36] and the volumes of the containers, according to the following formula:

weight = bulk density × volume .

lastly, metrical analyses were carried out by correlating the maximum volumetric capacity (cm3), the diameter/height ratio (ø/h) and the typology [30].

figure 5. summary scheme of the operating methodology performed with blender®.
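as a self-check of the figures above, the tolerance propagation and the weight conversion can be reproduced in a few lines. this is only a sketch, assuming the ± 1 mm tolerance applies to both the radius and the height of the test cylinder; the 545 ml sample volume corresponds to l.g. bw. 1* in table 1.

```python
import math

# reproduce the validation above: a cylinder of r = 5 cm and h = 20 cm,
# recomputed with the stated tolerance of +/- 1 mm on both dimensions
def cylinder_volume(r, h):
    return math.pi * r ** 2 * h

tol = 0.1  # cm
v_max = cylinder_volume(5 + tol, 20 + tol)
v_min = cylinder_volume(5 - tol, 20 - tol)
v_avg = (v_max + v_min) / 2

ea = (v_max - v_min) / 2      # absolute error -> ~70.69 cm^3
re = ea / v_avg               # relative error -> ~0.0449
pe = re * 100                 # percentage error -> ~4.49 %
print(round(ea, 4), round(re, 4), round(pe, 2))

# weight = bulk density x volume, with the bulk densities quoted above (g/ml)
densities = {"whole barley": (0.61, 0.69), "emmer": (0.47, 0.47), "naked wheats": (0.54, 0.54)}
volume_ml = 545  # sample l.g. bw. 1* in table 1
for cereal, (lo, hi) in densities.items():
    print(cereal, round(volume_ml * lo), "-", round(volume_ml * hi), "g")
```

running it returns ea ≈ 70.69 cm3 and pe ≈ 4.49 %, matching the values quoted above, and grain weights of 332 ÷ 376 g (whole barley), 256 g (emmer) and 294 g (naked wheats) for the 545 ml sample, matching the first row of table 1.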
table 1. summary of results from lugo di grezzana (l.g.) and riparo gaban (r.g.). the whole barley weight is given as a range corresponding to the bulk density interval 0.61 ÷ 0.69 g/ml. legend: bw. = bowl; cp. = cup; h.v. = handle vessel; jg. = jug; l.b. = large bowl; ld. = ladle; mn. = miniaturistic; n.v. = necked vessel; pt. = pot; t.v. = truncate cone-shaped vessel; * = partially preserved.

sample | estimate liquid content (ml) | whole barley (g) | emmer (g) | naked wheats (g) | ø/h ratio
l.g. bw. 1* | 545 | 332 ÷ 376 | 256 | 294 | 2.45
l.g. bw. 2* | 1014 | 619 ÷ 700 | 477 | 548 | 3.03
l.g. bw. 3* | 1680 | 1025 ÷ 1159 | 789 | 907 | 2.39
l.g. bw. 4* | 3755 | 2290 ÷ 2591 | 1765 | 2028 | 1.97
l.g. l.b. 1* | 5276 | 3218 ÷ 3640 | 2480 | 2849 | 3.23
l.g. l.b. 2 | 6959 | 4245 ÷ 4802 | 3271 | 3758 | 2.74
l.g. l.b. 3* | 4814 | 2936 ÷ 3322 | 2263 | 2599 | 2.42
l.g. l.b. 4* | 6763 | 4125 ÷ 4666 | 3178 | 3652 | 2.18
l.g. t.v. 1 | 5646 | 3444 ÷ 3895 | 2653 | 3049 | 0.92
l.g. t.v. 2* | 17307 | 10557 ÷ 11942 | 8134 | 9346 | 1.08
l.g. t.v. 3 | 2967 | 1810 ÷ 2047 | 1395 | 1602 | 0.97
l.g. t.v. 4* | 1329 | 811 ÷ 917 | 625 | 718 | 0.77
l.g. t.v. 5* | 5941 | 3624 ÷ 4099 | 2792 | 3208 | 0.82
l.g. t.v. 6 | 944 | 576 ÷ 652 | 444 | 510 | 1.00
l.g. t.v. 7 | 3961 | 2416 ÷ 2733 | 1862 | 2139 | 0.80
l.g. t.v. 8* | 5235 | 3193 ÷ 3612 | 2460 | 2827 | 0.87
l.g. t.v. 9* | 750 | 458 ÷ 518 | 353 | 405 | 1.08
l.g. t.v. 10* | 1521 | 928 ÷ 1049 | 715 | 821 | 0.84
l.g. t.v. 11* | 2112 | 1288 ÷ 1457 | 993 | 1140 | 0.96
l.g. h.v. 1 | 925 | 564 ÷ 638 | 435 | 499 | 1.16
l.g. jg. 1 | 413 | 252 ÷ 285 | 194 | 223 | 0.86
l.g. jg. 2 | 1825 | 1114 ÷ 1260 | 858 | 986 | 0.83
l.g. jg. 3 | 772 | 471 ÷ 533 | 363 | 417 | 0.83
l.g. jg. 4 | 839 | 512 ÷ 579 | 394 | 453 | 0.74
l.g. jg. 5* | 653 | 398 ÷ 451 | 307 | 353 | 0.88
l.g. jg. 6* | 889 | 542 ÷ 614 | 418 | 480 | 0.83
l.g. jg. 7* | 1508 | 920 ÷ 1040 | 709 | 814 | 0.87
l.g. jg. 8* | 2022 | 1234 ÷ 1395 | 951 | 1092 | 0.85
l.g. n.v. 1* | 6054 | 3693 ÷ 4177 | 2845 | 3269 | 0.47
l.g. n.v. 2* | 13378 | 8161 ÷ 9231 | 6288 | 7224 | 0.26
l.g. ld. 1 | 38 | 23 ÷ 27 | 18 | 21 | 1.84
l.g. mn. 1 | 53 | 33 ÷ 37 | 25 | 29 | 0.76
l.g. mn. 2 | 158 | 97 ÷ 109 | 74 | 86 | 2.24
l.g. mn. 3 | 59 | 36 ÷ 41 | 28 | 32 | 1.14
l.g. mn. 4 | 79 | 48 ÷ 54 | 37 | 42 | 2.50
r.g. bw. 1 | 2858 | 1743 ÷ 1972 | 1343 | 1543 | 1.52
r.g. t.v. 1* | 2894 | 1765 ÷ 1997 | 1360 | 1563 | 0.97
r.g. t.v. 2 | 17135 | 10452 ÷ 11823 | 8053 | 9253 | 1.11
r.g. t.v. 3 | 1589 | 969 ÷ 1097 | 747 | 858 | 1.18
r.g. t.v. 4* | 1290 | 787 ÷ 890 | 606 | 697 | 0.82
r.g. cp. 1 | 397 | 242 ÷ 274 | 186 | 214 | 1.19
r.g. cp. 2 | 403 | 246 ÷ 278 | 190 | 218 | 1.10
r.g. cp. 3 | 764 | 466 ÷ 527 | 359 | 413 | 1.38
r.g. cp. 4 | 1406 | 858 ÷ 970 | 661 | 759 | 0.96
r.g. cp. 5 | 2023 | 1234 ÷ 1396 | 951 | 1092 | 1.02
r.g. jg. 1 | 645 | 393 ÷ 445 | 303 | 348 | 0.82
r.g. pt. 1 | 1668 | 1017 ÷ 1151 | 784 | 901 | 0.52
r.g. mn. 1 | 60 | 36 ÷ 41 | 28 | 32 | 1.58

figure 6. typological table of the samples analysed during the study from lugo di grezzana (l.g.) and riparo gaban (r.g.). scale drawing 1:6. legend: bw. = bowl; l.b. = large bowl; t.v. = truncate cone-shaped vessel; * = partially preserved.

figure 7. typological table of the samples analysed during the study from lugo di grezzana (l.g.) and riparo gaban (r.g.). scale drawing 1:6. legend: cp. = cup; h.v. = handle vessel; jg. = jug; mn. = miniaturistic; n.v. = necked vessel; pt. = pot; t.v. = truncate cone-shaped vessel; * = partially preserved.

4. results

the methodological approach provided an estimate of the capacity (ml) and of the weight of different contents (g). at the same time, it was possible to correlate the values determined by the computer-assisted calculations with the ratio between diameter and height (table 1), distinguishing them on the basis of typology¹ (table 2). the data were elaborated through the compilation of a scatter plot, reporting the volumetric capacity on the x axis and the ø/h ratio on the y axis (figure 8). from a volumetric point of view, the same degree of variation is observed in either case, with a maximum of about 17,000 cm3 represented by two truncate cone-shaped vessels (l.g. t.v. 2*; r.g. t.v. 2). at the same time, however, the distribution of the samples is different.
in the case of riparo gaban, almost all samples (12 out of 13) have a volumetric capacity lower than 3000 cm3, while at lugo di grezzana this is the case for two-thirds of the samples (23 out of 35), which show a wider volumetric variability (figure 8). a similar distribution is observed for the ø/h ratio, which is wider for lugo di grezzana (0.26 ÷ 3.23) than for riparo gaban (0.52 ÷ 1.58). these dissimilarities are due to the absence of some ceramic forms (large bowls, necked vessels), or their presence in a smaller percentage (bowls, truncate cone-shaped vessels), in the dataset of riparo gaban.

¹ in this study, the distinction between jugs and mugs follows the typological classification defined in banchieri et al. 1999 [30].

for the better-represented typological classes, it was possible to compare the samples between the two investigated sites (table 2).

bowls: represented by 5 samples (4 of which are partially reconstructed). the ceramic samples are characterised by a volume between 545 and 3755 ml, containing between 256 and 2591 g of solid content, and by a ø/h ratio between 1.52 and 3.03. although only one sample comes from riparo gaban and most of the volumes are reconstructed, there is a distinction on the basis of the ø/h ratio, estimated between 1.97 and 3.03 for lugo di grezzana, compared to a lower value of 1.52 for riparo gaban.

jugs: represented by 9 samples (4 of which are partially reconstructed). the ceramic samples are characterised by a volume between 413 and 2022 ml, containing between 194 and 1395 g of solid content, and by a ø/h ratio between 0.74 and 0.88. although most of the samples come from lugo di grezzana, a limited variability is observed both in the volumetric data and, in particular, in the ø/h ratio. this narrow range of the ø/h ratio identifies jugs as the ceramic shape with the highest degree of homogeneity compared to the other typological classes.

truncate cone-shaped vessels: represented by 15 samples (9 of which are partially reconstructed). the ceramic samples are characterised by a volume between 750 and 17307 ml, containing between 353 and 11942 g of solid content, and by a ø/h ratio between 0.77 and 1.18. both datasets are characterised by homogeneity in volumetric capacity (lugo di grezzana: 750 ÷ 17307 ml, 353 ÷ 11942 g; riparo gaban: 1290 ÷ 17135 ml, 606 ÷ 11823 g) and in the ø/h ratio (lugo di grezzana: 0.77 ÷ 1.08; riparo gaban: 0.82 ÷ 1.18).
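per-class ranges such as those summarised in table 2 can be derived mechanically from the rows of table 1. the following sketch does this for a three-row subset of the jug data; the values are transcribed from table 1, the grouping logic being the only addition.

```python
from collections import defaultdict

# each record: (site, class, volume ml, min solid g, max solid g, ø/h ratio);
# only a three-row subset of the jug data is transcribed here for brevity
rows = [
    ("l.g.", "jg.", 413, 194, 285, 0.86),
    ("l.g.", "jg.", 1825, 858, 1260, 0.83),
    ("r.g.", "jg.", 645, 303, 445, 0.82),
]

ranges = defaultdict(lambda: {"ml": [], "g": [], "ratio": []})
for site, cls, ml, g_lo, g_hi, ratio in rows:
    ranges[cls]["ml"].append(ml)
    ranges[cls]["g"] += [g_lo, g_hi]
    ranges[cls]["ratio"].append(ratio)

for cls, v in ranges.items():
    print(cls,
          f"{min(v['ml'])} ÷ {max(v['ml'])} ml,",
          f"{min(v['g'])} ÷ {max(v['g'])} g,",
          f"ø/h {min(v['ratio'])} ÷ {max(v['ratio'])}")
```

run over all 48 rows, the same loop reproduces the per-class and per-site ranges reported in table 2.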
table 2. summary of results organised by typological class. for each class, the rows give the totals and the values for lugo di grezzana (l.g.) and riparo gaban (r.g.) separately.

typological class | dataset | samples | partially preserved | estimate liquid content (ml) | estimate solid content (g) | ø/h ratio
bowl | total | 5 | 4 | 545 ÷ 3755 | 256 ÷ 2591 | 1.52 ÷ 3.03
bowl | l.g. | 4 | 4 | 545 ÷ 3755 | 256 ÷ 2591 | 1.97 ÷ 3.03
bowl | r.g. | 1 | – | 2858 | 1343 ÷ 1972 | 1.52
large bowl | total | 4 | 3 | 4814 ÷ 6959 | 2263 ÷ 4802 | 2.18 ÷ 3.23
large bowl | l.g. | 4 | 3 | 4814 ÷ 6959 | 2263 ÷ 4802 | 2.18 ÷ 3.23
truncate cone-shaped vessel | total | 15 | 9 | 750 ÷ 17307 | 353 ÷ 11942 | 0.77 ÷ 1.18
truncate cone-shaped vessel | l.g. | 11 | 7 | 750 ÷ 17307 | 353 ÷ 11942 | 0.77 ÷ 1.08
truncate cone-shaped vessel | r.g. | 4 | 2 | 1290 ÷ 17135 | 606 ÷ 11823 | 0.82 ÷ 1.18
internal handles vessel | total | 1 | – | 925 | 435 ÷ 638 | 1.16
internal handles vessel | l.g. | 1 | – | 925 | 435 ÷ 638 | 1.16
mug | total | 5 | – | 397 ÷ 2023 | 186 ÷ 1396 | 0.96 ÷ 1.38
mug | r.g. | 5 | – | 397 ÷ 2023 | 186 ÷ 1396 | 0.96 ÷ 1.38
jug | total | 9 | 4 | 413 ÷ 2022 | 194 ÷ 1395 | 0.74 ÷ 0.88
jug | l.g. | 8 | 4 | 413 ÷ 2022 | 194 ÷ 1395 | 0.74 ÷ 0.88
jug | r.g. | 1 | – | 645 | 303 ÷ 445 | 0.82
pot | total | 1 | – | 1668 | 784 ÷ 1151 | 0.52
pot | r.g. | 1 | – | 1668 | 784 ÷ 1151 | 0.52
necked vessel | total | 2 | 2 | 6054 ÷ 13378 | 2845 ÷ 9231 | 0.26 ÷ 0.47
necked vessel | l.g. | 2 | 2 | 6054 ÷ 13378 | 2845 ÷ 9231 | 0.26 ÷ 0.47
ladle | total | 1 | – | 38 | 18 ÷ 27 | 1.84
ladle | l.g. | 1 | – | 38 | 18 ÷ 27 | 1.84
miniaturistic | total | 5 | – | 53 ÷ 158 | 25 ÷ 109 | 0.76 ÷ 2.5
miniaturistic | l.g. | 4 | – | 53 ÷ 158 | 25 ÷ 109 | 0.76 ÷ 2.5
miniaturistic | r.g. | 1 | – | 60 | 28 ÷ 41 | 1.58

miniaturistic forms: represented by 5 samples. the ceramic samples are characterised by a volume between 53 and 158 ml, containing between 25 and 109 g of solid content, and by a ø/h ratio between 0.76 and 2.5. the dataset comprises different ceramic shapes, with a wide variability of the ø/h ratio, associated only by their limited dimensions.

5. discussion

the methodological protocol provided an analysis of the volumetric capacity of a wide selection of samples, correlating the volumetric data with the diameter/height ratio and the typological class of each ceramic vessel. compared to the previous study [28], the increase in the number of samples provided further information on the individual typological classes, especially the best-attested ones, such as bowls, large bowls, truncate cone-shaped vessels, cups, jugs and miniaturistic forms. the different distribution of the ceramic assemblages (figure 8) highlighted a wider spread of the samples from lugo di grezzana, both in the volumetric estimates (mainly within 6000 cm3) and in the ø/h ratio (between 0.26 and 3.23), while in the case of riparo gaban both the volumetric range (mainly within 3000 cm3) and the ø/h ratio (between 0.52 and 1.58) show a lower variability. this different distribution could depend on several reasons: a poor conservation of some typological classes (e.g. necked vessels, bowls and large bowls), which does not allow an estimate of the volume, or a different connotation of riparo gaban compared to the settlement of lugo di grezzana, the latter characterised by numerous structural complexes [3], [37], [38] as well as by a larger number of ceramic finds.
regarding the distribution of each typological class in relation to the parameters examined, for most of them it was possible to highlight a specific distribution, according to four trends (a small classification sketch is given at the end of this section):

• limited volumetric range and limited ø/h ratio (jugs, mugs);
• limited volumetric range and wide ø/h ratio (miniaturistic forms);
• wide volumetric range and limited ø/h ratio (necked vessels, truncate cone-shaped vessels);
• wide volumetric range and wide ø/h ratio (bowls, large bowls).

for each typological class, the greater or lesser volumetric capacity and ø/h ratio could provide new information for the research. this is the case, for instance, of jugs, where a limited ø/h ratio reveals a higher degree of homogeneity compared to the other typological classes. this aspect could result from several factors, such as the attribution of the ceramic shape to a limited number of functions and/or the evidence of a model widely shared within the fiorano culture, where the jug is one of the most distinctive shapes of the material culture [39]. a similar case is that of the truncate cone-shaped vessels, typical of the vhò group, where the wide volumetric range could be the outcome of a plurality of technological aspects: vessels with an unrestricted orifice and thick walls and bases, in particular the large samples, whose thickness increases stability, could be assumed to be dry-storage vessels [20], but an evaluation of their functionality is not yet possible.

figure 8. scatter plot between volumetric capacity (x axis) and diameter/height ratio (y axis). lugo di grezzana (l.g. = green), riparo gaban (r.g. = red); legend: bw. = bowl; cp. = cup; h.v. = handle vessel; jg. = jug; l.b. = large bowl; ld. = ladle; mn. = miniaturistic; n.v. = necked vessel; pt. = pot; t.v. = truncate cone-shaped vessel.

regarding the functions of each ceramic class, although the volumetric capacity depends upon shape and size, it was not possible to formulate a direct relationship with the function. as has been seen in rice [20], pots are multifunctional, with primary or secondary uses before being abandoned. in other words, the relation between the use and the capacity of a vessel depends on several considerations, like the amount and kind of contents (liquid or solid), the duration of storage, the number of uses, microenvironmental factors or other necessities [40], [41]. to understand this complexity, volumetric and typological aspects must be related to other technological criteria, like petrographic analysis, surface treatment processes (smoothing, polishing, slip) [42], use-wear and organic residues [43]-[45]. ceramic paste and manufacturing analyses are under study and will be able to provide new interpretative ideas about the functionality of each typological class.
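the four trends listed above can be reproduced mechanically from the class ranges in table 2. in the sketch below, the cut-offs used to call a range "wide" (3000 ml of volumetric spread, 0.5 of ø/h spread) are illustrative assumptions, not thresholds defined in the study.

```python
def trend(volume_spread_ml, ratio_spread):
    # classify a typological class into one of the four trends; the cut-offs
    # (3000 ml and 0.5) are illustrative assumptions, not values from the study
    vol = "wide" if volume_spread_ml > 3000 else "limited"
    rat = "wide" if ratio_spread > 0.5 else "limited"
    return f"{vol} volumetric range, {rat} ø/h ratio"

# (max - min volume in ml, max - min ø/h ratio), taken from table 2
classes = {
    "jug": (2022 - 413, 0.88 - 0.74),
    "miniaturistic": (158 - 53, 2.5 - 0.76),
    "necked vessel": (13378 - 6054, 0.47 - 0.26),
    "bowl": (3755 - 545, 3.03 - 1.52),
}
for name, spreads in classes.items():
    print(f"{name}: {trend(*spreads)}")
```

with these assumed cut-offs, the four example classes fall exactly into the four trends above; other thresholds in the same neighbourhood give the same grouping.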
6. conclusion

this study aimed to provide new data about the estimation of the volumetric capacity of ceramic vessels from the neolithic sites of lugo di grezzana and riparo gaban, with the aid of 3d graphics software. the automatic calculation, based on a reconstruction of the vessel through blender®, represents an efficient method for the estimation of the volumetric capacity: it allows one to work directly on the available published drawings, and the volumetric calculation takes place in just a few steps, with results that are very reliable and sufficiently valid to be applied to an archaeological study. the results show a different distribution of the two ceramic datasets, which could depend on different reasons, such as the poor conservation of some typological classes, which does not allow the volume to be estimated, or a different connotation of the two sites. regarding the distribution of the individual typological classes, for most of them it was possible to highlight a specific distribution of the results according to four trends, each of which could be conditioned by functional uses and/or cultural models.

to date, numerous questions have therefore emerged and remain unresolved, especially regarding jugs and truncate cone-shaped vessels. could the limited volumetric range and ø/h ratio of the jugs be evidence of a restricted number of functions? at the same time, is this homogeneity shared by other sites belonging to the fiorano culture? how far is it diversified compared to other contemporary cultures of northern italy? conversely, in the case of the truncate cone-shaped vessels, could a wide volumetric range and a limited ø/h ratio represent a plurality of technological aspects compared to the other typological classes? in general terms, the study of vessel capacity is one of the parameters necessary for the functional understanding of the artifacts. however, only through a systematic application of this method to other contemporary sites, and through the evaluation of further investigation parameters (currently in progress), will it be possible to obtain more information about the functionality of ceramic vessels.

acknowledgement

the research project is conducted by the research laboratory labaaf (laboratorio bagolini archeologia archeometria fotografia), which belongs to ceasum (centro di alti studi umanistici) of the university of trento. the project has prof.ssa annaluisa pedrotti as scientific manager and paolo chistè as technical director of tefalab (laboratorio di tecniche fotografiche avanzate, a unit of labaaf). all the authors contributed equally to the research and writing.

references

[1] f. cavulli, d. e. angelucci, a. pedrotti, la successione stratigrafica di lugo di grezzana (verona), preistoria alpina 38.8 (2002), pp. 89-107.
[2] a. pedrotti, p. salzani, lugo di grezzana: un "emporio" di settemila anni fa sui monti lessini veronesi, la lessinia ieri oggi domani 33 (2010), pp. 87-103.
[3] a. pedrotti, p. salzani, f. cavulli, m. carotta, d. e. angelucci, l. salzani, l'insediamento di lugo di grezzana (vr) nel quadro del primo neolitico padano alpino, in: studi di preistoria e protostoria 2, preistoria e protostoria in veneto. g. leonardi, v. tinè (editors). iipp, firenze, 2015, isbn 9788860450562, pp. 95-107.
[4] l. salzani, grezzana, abitato neolitico in località campagne di lugo, in: quaderni di archeologia del veneto ix. giunta regionale del veneto (editor). canova, padova, 1993, isbn 9788886177191, pp. 83-87.
[5] l. moser, il sito neolitico di lugo di grezzana (verona). i materiali archeologici della campagna di scavo 1993, in: la neolitizzazione tra oriente ed occidente. a. pessina, g. muscio (editors). museo friulano di storia naturale, udine, 2000, pp. 125-150.
[6] f. santaniello, v. delladio, a. ferrazzi, s. grimaldi, a. pedrotti, nuovi dati sulla tecnologia litica del neolitico antico dell'area padano alpina: i rimontaggi di lugo di grezzana (verona), ipotesi di preistoria 13.1 (2020), pp. 53-66. doi: 10.6092/issn.1974-7985/11008
[7] a. pedrotti, il riparo gaban (trento) e la neolitizzazione della valle dell'adige, in: antenate di venere 27.000 – 4.000 a.c.. v. kruta, l. kruta poppi, m. lička, e. magni (editors). catalogo della mostra, skira, milano, 2009, isbn 9788857204765, pp. 39-47.
[8] b. bagolini, riparo gaban: preistoria ed evoluzione dell'ambiente, museo tridentino di scienze naturali, edizioni didattiche, 1980.
[9] b. bagolini, a. pedrotti, l'italie septentrionale, in: atlas du néolithique européen. j. guilaine (editor). université de liège, liège, 1998, pp. 233-341.
[10] a. pedrotti, il gruppo gaban e le manifestazioni d'arte del primo neolitico, in: settemila anni fa il primo pane. ambienti e culture delle società neolitiche. a. pessina, g. muscio (editors). catalogo mostra, museo friulano di storia naturale, udine, 1998, pp. 125-131.
[11] b. bagolini, p. biagi, le più antiche facies ceramiche dell'ambiente padano, rivista di scienze preistoriche 32 (1977), pp. 219-233.
[12] l. engels, l. bavay, a. tsingarida, calculating vessel capacities: a new web-based solution, in: proceedings of the symposium shapes and uses of greek vases (7th-4th centuries bc). a. tsingarida (editor). crea-patrimoine, bruxelles, 2009, isbn 9789077723852, pp. 129-133.
[13] e. c. rodriguez, c. a. hastorf, calculating ceramic vessel volume: an assessment of methods, antiquity 87 (2013), pp. 1182-1190. doi: 10.1017/s0003598x00049942
[14] c. velasco felipe, e. celdrán beltrán, towards an optimal method for estimating vessel capacity in large samples, journal of archaeological science 27 (2019), pp. 1-12. doi: 10.1016/j.jasrep.2019.101966
[15] a. o. shepard, ceramics for the archaeologist, carnegie institution, washington, 1956, isbn 9780872796201.
[16] n. castillo tejero, j. litvak, un sistema de estudio para formas de vasijas, departamento de prehistoria, instituto nacional de antropología e historia, mexico city, 1968.
[17] j. w. ericson, e. g. stickel, a proposed classification system for ceramics, world archaeology 4.3 (1973), pp. 357-367. doi: 10.1080/00438243.1973.9979545
[18] g. a. johnson, local exchange and early state development in southwestern iran, anthropological archaeology 51, university of michigan press, 1973. doi: 10.3998/mpub.11396443
[19] p. m. rice, pottery analysis: a sourcebook, university of chicago press, chicago, 1987, isbn 9780226711164.
[20] p. m. rice, pottery analysis: a sourcebook, university of chicago press, chicago, 2015, isbn 9780226923215.
[21] l. m. senior, d. p. birnie, accurately estimating vessel volume from profile illustrations, american antiquity 60.2 (1995), pp. 319-334. doi: 10.2307/282143
[22] j. p. thalmann, a seldom used parameter in pottery studies: the capacity of pottery vessels, in: the synchronization of civilizations in the eastern mediterranean in the second millennium b.c. iii. m. bietak, e. czerny (editors). österreichische akademie der wissenschaften, vienne, 2007, isbn 9783700135272, pp. 431-438.
[23] á. sánchez climent, m. l. cerdeño serrano, methodological proposal for the volumetric study of archaeological ceramics through 3d edition free-software programs: the case of the celtiberian cemeteries of the meseta, virtual archaeology review 5.11 (2014), pp. 20-33. doi: 10.4995/var.2014.4173
[24] kotyle. software program. online [accessed 11 march 2022] https://kotyle.readthedocs.io/en/latest/index.html
[25] n. karasik, p. u. smilansky, instructions for users of the module 'capacity', 2006. online [accessed 10 june 2012] http://archaeology.huji.ac.il/depart/computerized/files/instructions_capacity_6.pdf
[26] capacity – centre de recherches en archéologie et patrimoine, université libre de bruxelles. web application. online [accessed 11 march 2022] https://capacity.ulb.ac.be/index.php?langue=en
[27] blender foundation, blender – the free open source 3d content creation suite, available for all major operating systems under the gnu general public license. online [accessed 11 march 2022] https://www.blender.org/
[28] a. tavella, m. ciela, p. chistè, a. pedrotti, preliminary studies on the volumetric capacity of ceramic from the neolithic site of lugo di grezzana (vr) through 3d graphics software, 2020 imeko tc4 international conference on metrology for archaeology and cultural heritage, trento, italy, 22-24 october 2020, pp. 257-262. online: https://www.imeko.org/publications/tc4-archaeo-2020/imeko-tc4-metroarchaeo2020-049.pdf
[29] v. becker, studien zum altneolithikum in italien, vol. 3, lit verlag, münster, 2018, isbn 9783643142207.
[30] d. g. banchieri, e. montanari, g. odetti, a. pedrotti, il neolitico dell'italia settentrionale, in: criteri di nomenclatura e di terminologia inerente alla definizione delle forme vascolari del neolitico/eneolitico e del bronzo/ferro. d. cocchi genick (editor). atti del congresso di lido di camaiore, 26-29 marzo 1998, vol. 1, octavo, firenze, 1999, isbn 9788880301905, pp. 43-62.
[31] a. m. portillo, c. sanz, fourth order method to compute the volume of archaeological vessels using radial sections: pintia pottery (spain) as case study, international journal of computer mathematics 98 (2021), pp. 705-718. doi: 10.1080/00207160.2020.1777405
[32] n. dal santo, g. steffè, le industrie: ceramica, pietra scheggiata, altre pietre lavorate, in: il villaggio neolitico di lugo di romagna fornace gattelli. strutture ambiente culture. g. steffè, n. degasperi (editors). origines, istituto italiano di preistoria e protostoria, firenze, 2019, isbn 9788860450746, pp. 393-466.
[33] p. biagi, e. starnini, d. borić, n. mazzucco, early neolithic settlement of the po plain (northern italy): vhò and related sites, documenta praehistorica xlvii (2020), pp. 192-221. doi: 10.4312/dp.47.11
[34] m. rottoli, f. cavulli, a. pedrotti, l'agricoltura di lugo di grezzana (verona): considerazioni preliminari, in: studi di preistoria e protostoria 2, preistoria e protostoria in veneto. g. leonardi, v. tinè (editors). iipp, firenze, 2015, isbn 9788860450562, pp. 109-116.
[35] fao/infoods density database – version 2. online [accessed 11 march 2022] http://www.fao.org/infoods/infoods/tables-and-databases/faoinfoods-databases/en/
[36] ü. h. güran, some physical and nutritional properties of hulled wheat, journal of agricultural sciences 15.1 (2009), pp. 58-64. doi: 10.1501/tarimbil_0000001073
[37] f. cavulli, d. e. angelucci, a. pedrotti, nuovi dati sui complessi strutturali in elevato di lugo di grezzana (verona), in: studi di preistoria e protostoria 2, preistoria e protostoria in veneto. g. leonardi, v. tinè (editors). iipp, firenze, 2015, isbn 9788860450562, pp. 95-107.
[38] a. costa, f. cavulli, a. pedrotti, i focolari, forni e fosse di combustione di lugo di grezzana (vr), ipotesi di preistoria 12.1 (2019), pp. 27-48. doi: 10.6092/issn.1974-7985/10256
[39] a. pessina, v. tiné, archeologia del neolitico. l'italia tra vi e iv millennio a.c., carocci editore, roma, 2018, isbn 9788843092215.
[40] b. a. nelson, etnoarchaeology and paleodemography: a test of turner and lofgren's hypothesis, journal of anthropological research 37.2 (1981), pp. 107-129. doi: 10.1086/jar.37.2.3629704
[41] d. e. arnold, ceramic theory and cultural process, cambridge university press, cambridge, 1985, isbn 9780521252621.
[42] n. cuomo di caprio, ceramica in archeologia 2, l'erma di bretschneider, roma, 2007, isbn 9788882653972.
[43] j. vieugué, spécialisation fonctionnelle des premières productions céramiques dans les balkans (6100-5500 av. j.-c.), bulletin de la société préhistorique française 109.2 (2012), pp. 251-265. doi: 10.3406/bspf.2012.14106
[44] c. orton, m. hughes, pottery in archaeology, cambridge university press, cambridge, 2013, isbn 9780511920066. doi: 10.1017/cbo9780511920066
[45] j. vieugué, y. garfinkel, o. barzilai, e. c. m. van den brink, pottery function and culinary practices of yarmukian societies in the late 7th millennium cal. bc: first results, in: paléorient 42.2 – connections and disconnections between the northern and southern levant in the late prehistory and protohistory (12th – mid-2nd mill. bc). i. milevski, f. bocquentin, m. molist (editors). cnrs editions, paris, 2016, isbn 9782271094957, pp. 97-115. doi: 10.3406/paleo.2016.5722

overview of the modified magnetoelastic method applicability

acta imeko
issn: 2221-870x
september 2021, volume 10, number 3, pp. 167-176

tomáš klier1, tomáš míčka1, michal polák2, milan hedbávný3
1 pontex, bezová 1658, 147 14 prague 4, czech republic
2 faculty of civil engineering, czech technical university in prague, thákurova 7, 166 29 prague 6, czech republic
3 freyssinet cz, zápy 267, 250 01 brandýs nad labem, czech republic

section: research paper

keywords: tensile force; magnetoelastic method; prestressed strand; prestressed cable; sensor

citation: tomáš klier, tomáš míčka, michal polák, milan hedbávný, overview of the modified magnetoelastic method applicability, acta imeko, vol. 10, no. 3, article 23, september 2021, identifier: imeko-acta-10 (2021)-03-23
section editor: lorenzo ciani, university of florence, italy

received february 9, 2021; in final form july 30, 2021; published september 2021

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

funding: this work was supported by the ministry of industry and trade of the czech republic.

corresponding author: tomáš klier, e-mail: tkl@pontex.cz

abstract
a requirement for the determination of the axial force in important structural elements of a building or engineering structure, during its construction or in its operational state, is very frequent in technical practice. in civil engineering practice, five experimental techniques are usually used for the evaluation of axial tensile forces in these elements, each of them with its advantages and disadvantages. one of these methods is the magnetoelastic method, which can be used, for example, on engineering structures for the experimental determination of the axial forces in prestressed structural elements made of ferromagnetic materials, e.g. prestressed bars, wires and strands. the article presents the general principles of the magnetoelastic method and the magnetoelastic sensor layout, together with up-to-date information and knowledge about the practical application of the new approach based on the magnetoelastic principle to prestressed concrete structures. subsequently, recent results of the experimental verification and of the in-situ application of the method are described. the described experimental approach is usable not only for newly built structures but in particular for existing ones; furthermore, in many cases in technical practice it is the only experimental method effectively usable for the determination of the prestressing force on existing prestressed concrete structures.

1. introduction

five experimental techniques are usually applied in civil engineering practice for the evaluation and verification of the actual values of axial tensile forces in important structural elements of building structures. two of these methods (namely, the direct measurement of the force by a preinstalled load cell and the approach based on strain measurement with strain gauges) can determine the total value of the tensile force in the investigated structural elements only if the applied sensors were installed before the investigated elements were activated. compared to that, the other three methods (namely, the vibration frequency method [1], [2], the force determination in a flexible structural element based on the relation between the transverse force and the caused transverse displacement [3]-[5], and the magnetoelastic method [6]-[24]) can be used in newly started experiments on existing structures that have already been in service for some time. the basic advantage of these three methods is that the investigated structural elements remain activated all the time (namely, during the structure's service, the experiment preparation, the experiment realization and also after the experiment completion). however, the applicability of the method using the relation between the transverse force and the element transverse displacement is significantly limited in practice. it can be effectively applied only to flexible elements with a relatively small cross section, such as strands with a diameter smaller than approximately 25 mm, and with a relatively long free length, because in substantially short elements the measurable transverse displacement increases the observed tensile force considerably, in some special cases even by tens of percent.
similarly, the vibration frequency method is suitable for application on structural elements with a relatively long free vibrating length and a relatively low bending stiffness. if this method is used on a relatively short element, the uncertainty of the determined tensile force is significantly influenced by the element bending stiffness and by the element boundary conditions, especially if these are complicated and vague. in this case, the results obtained by the vibration frequency method can be improved by using not only the measured natural frequencies of the element but also the measured mode shapes in the process of results evaluation [1], [2]. however, the time necessary for the realization of such an experiment is significantly longer compared to a similar one in which only the natural frequencies of the elements are evaluated, especially if the tested elements are difficult to access for the installation of vibration sensors, as are, for example, cable stays on cable-stayed bridges; in that situation the time required for the experiment can be more than ten times longer. further advantages and disadvantages of all five above-mentioned experimental techniques are discussed in more detail in reference [11]. the selection of the most suitable experimental method for a particular practical application depends on the specific element parameters and the specific conditions under which the experiment is going to be performed. in the authors' opinion, the modified magnetoelastic method is the only one that can be applied effectively and expediently for the evaluation of the prestressing force on existing prestressed concrete structures where the prestressed reinforcement is embedded inside the concrete.
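to make the limitation of the vibration frequency method concrete: for an element whose bending stiffness really is negligible, the taut-string model gives f_n = (n / 2l) √(t/m), so the tensile force follows directly from the first natural frequency. the sketch below applies this textbook relation with hypothetical cable parameters; on short, stiff elements the model, and therefore the estimate, breaks down, which is exactly the case discussed above.

```python
# taut-string estimate behind the vibration frequency method: for an element
# with negligible bending stiffness, f_n = (n / (2 * L)) * sqrt(T / m),
# hence T = 4 * m * (L * f_1) ** 2 for the first natural frequency
def tension_from_frequency(f1_hz, length_m, mass_per_metre):
    return 4.0 * mass_per_metre * (length_m * f1_hz) ** 2

# hypothetical stay cable: 60 m free length, 80 kg/m, f_1 = 1.6 Hz
t_newton = tension_from_frequency(1.6, 60.0, 80.0)
print(round(t_newton / 1e3), "kN")  # ~2949 kN
```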
2. related results in the literature

in civil engineering practice, the utilization of the magnetoelastic method for the experimental evaluation of axial tensile forces in structural elements started about thirty-five years ago. the original inventors gradually developed the method theory, the magnetoelastic sensors (hereinafter the me sensors) and their practical utilization, and published the obtained knowledge regularly, see [12]-[21] for example. the me sensors and the appropriate equipment that have been standardly used in civil engineering practice in the recent past and at present [12]-[24] evaluate the measured prestressing forces in a relatively simple way. these standard me sensors are composed of only two basic parts, the primary and the secondary coil; this is the minimal possible configuration of the me sensor, as described below. the basic advantages of the application of the me sensor in its standard configuration are that the tensile force in the structural element is evaluated contactlessly, the observed element is not locally deformed and its anti-corrosive layer is not abraded, and the sensor body is robust, long-lasting and resistant to accidental mechanical damage. it is possible to evaluate the instantaneous magnitude of the tensile force with high accuracy.

however, an important requirement for the high accuracy of the obtained results is the sensitivity assessment of each particular standard me sensor under the concrete conditions of its practical application, using an independent force sensor. in the case of prestressed reinforcement (namely a strand or a cable), the force sensor is used as a part of a hydraulic jack, and it is therefore necessary to install the me sensor before the activation of the observed prestressed reinforcement. an additional installation of the standard me sensor on already activated prestressed reinforcement is, of course, technologically possible; however, it is time-consuming, and the sensor sensitivity assessment cannot then be realized under the concrete conditions of the magnetic surroundings of the location where the me sensor is installed. the modified magnetoelastic method, its physical principle, the fully equipped me sensor, an experiment on a real structure and its result evaluation are described in more detail in reference [9]. other supplementary information about the method, its practical applications and a removable me sensor can be found in references [6]-[8] and [10], [11].

3. description of the method

the method is based on an experimental estimation of the magnetic response of the tensile-stressed structural element to an external magnetic field. the magnetic field intensity h and the magnetic flux density b are two of the basic physical quantities describing the magnetic field arrangement. the relation between b and h, in the form of the so-called hysteresis loop, is given by the kind of material exposed to the effects of the magnetic field, by its properties and by its current conditions (e.g. its tensile stress and temperature). for the purposes of applications of the modified magnetoelastic method, differently arranged me sensors are used depending on the specific experiment and its concrete conditions. a diagram of a fully equipped cylindrical me sensor is shown in figure 1, adopted from reference [9]. the fundamental components of this me sensor variant are a controlled magnetic field source (for example, the primary coil drawn in figure 1), a sensor of the magnetic field intensity h in the measured cross section (the system of hall sensors and/or the secondary coil 2), a sensor of the magnetic flux, which is closely related to the magnetic induction b in the measured section of the strand (the secondary coil 1), and the me sensor protection against magnetic influences from its surroundings (the steel shield). the function and principle of hall sensors were explained in more detail in reference [10]. the fully featured me sensor offers the greatest possibilities to increase the accuracy and reduce the uncertainties in the evaluation of the tensile force in the observed prestressed element. on the other side, the fully equipped me sensor is spatially larger, which may restrict its applicability in some practical cases, and it is also more complicated.

figure 1. diagram of a fully equipped me sensor, published also in [9].

this complexity results in a longer production time, higher requirements for its application in civil engineering practice, as well as higher requirements on the used measuring system and its equipment. the above-described standard me sensors (see chapter 2) represent the minimalist variant of the me sensor, consisting of a primary coil and a secondary coil 1 only.
the intensity of the magnetic field h is determined, in this case, indirectly from a completely different physical quantity, the electric current flowing through the primary coil. however, there is a risk that the results of this approach with the minimalist variant of the me sensor can be affected: any change in the magnetic surroundings in the sensor vicinity (e.g. the removal of massive steel falsework after concrete hardening) causes, completely silently, substantial or even severe changes in the sensor parameters. the sensor's steel shielding reduces the impact of this effect on the obtained results; the quality of the me sensor shielding determines the level of this reduction, however, it is never one hundred percent. for example, the application of the minimal me sensor configuration on a prestressed prefabricated concrete structure reinforced by steel fibre is completely unusable.

4. experimental analysis of evaluating curves

for the purposes of the modified magnetoelastic method, it is appropriate and necessary to know the magnetic behaviour of the standard prestressed elements used both in the past and today. in november 2019, a laboratory experiment was carried out, concentrated on the systematic study of variations in the magnetic behaviour of two selected standard prestressed elements (namely, the patented wire p7, applied in the past, and the prestressed strand lp15.7 with right-handed threading currently used by the freyssinet company). the magnetic behaviour of both elements was investigated especially in its dependence on their immediate temperature and on the level of mechanical stress. the experiment was realized in the experimental centre of the klokner institute (the research institute of the czech technical university in prague). results of similar experiments realized for different standard prestressed elements are described in [10] (namely, for the patented wires p4.5, an unknown prestressed strand lp15.7 with left-handed threading and the prestressed bars 15/17 made by the dywidag company) and in [8] (namely, for the full locked cable pv 150 and the mukusol threadbar 15fs 0000).

the ideal condition for the precise evaluation of a particular experiment realized on an existing prestressed concrete structure is when the specific evaluating curve is available. this specific curve can be obtained only by the magnetoelastic analysis of the specific prestressed element removed directly from the investigated structure. the specimen should be at least 1.2 m long, and its extraction is generally very difficult and laborious; moreover, the load-bearing system of the observed structure is partially weakened. also, the realization and evaluation of the experimental analysis necessary for the determination of the evaluating curve is significantly time-consuming. alternatively, it is possible to use the general evaluating curves available in the material library gradually compiled by the authors; some examples are shown in figure 7. the results of the chemical composition and microscopic structure analysis realized for the material of the investigated prestressed reinforcement can be used for the selection of the appropriate general evaluating curve from the material library. a substantially shorter test specimen is needed for this purpose: usually a single wire 10 cm long is sufficient, even if it has to be removed from a strand. the structure weakening is generally acceptable on this scale, and the material sample can be removed from the opening used for the installation of the me sensor without additional partial damage to the structure.
4.1. the experiment realized on the previously used patent wire p7

the patent wire p7 used during the experiment described in this chapter was removed from the chamber of an existing prestressed concrete bridge in prague that was put into operation in 1974. in the course of the bridge inspection in 2019, fully equipped me sensors were installed on two selected prestressed cables assembled from twelve patent wires p7 (see figure 2). the opening created for the purpose of the me sensor installation was subsequently filled with a special grout (see figure 3). the installed me sensors are intended for long-term monitoring of the prestressing forces in the selected cables. the basic reason for the realization of the experiment described in this chapter was to determine accurate parameters of the magnetic behaviour of the prestressed reinforcement used in the inspected bridge, for a more precise evaluation of the results.

figure 2. the fully equipped me sensor installed on the prestressed cable in the inspected bridge.
figure 3. the fully equipped me sensor installed on the prestressed cable; subsequent filling of the created opening with the special grout.

the fully equipped laboratory me sensor used in the course of the experiment is shown in figure 4. during the experiment, the investigated patent wire p7 was placed in a climatic chamber (see figure 5) and loaded in a steel tensile testing machine. the magnetic properties of the studied wire were investigated at two temperature levels of the wire surface, namely around 0 °c and around +25 °c. the studied patent wire p7 was loaded in five force steps for each temperature level according to its design resistance, namely 10 kn, 20 kn, 30 kn, 40 kn and 47 kn; this corresponds roughly to 20 %, 40 %, 60 %, 80 % and 100 % of the design strength. the temperature of the observed wire cross-section was evaluated as a linear interpolation between two measured temperature values: the first observed on the element surface in the close vicinity of the me sensor, and the second being the air temperature measured inside the me sensor.

figure 4. the fully equipped laboratory me sensor intended for the experiment on the patent wire p7: the assembled sensor inside the steel shield (above) and a view of its disassembled basic parts (below).
figure 5. the exterior view of the climatic chamber and the steel tensile testing machine (on the left) and of the laboratory me sensor installed on the prestressed strand lp15.7 inside the chamber (on the right).

the specific hysteresis loop was measured and evaluated ten times for each particular temperature level and force step. the hysteresis loop, in general, characterizes the relation between the magnetic flux density b and the magnetic field intensity h, and it changes its shape depending on the actual force magnitude in the investigated prestressed element and also on its temperature. however, it is neither effective nor necessary to evaluate the measured hysteresis loops over their whole range. the dimensionless parameter p is used to convert the complex measured shape of the hysteresis loop, which depends on the actual force magnitude, into one simple numeric value, and this parameter is standardly evaluated for the purpose of practical applications of the modified magnetoelastic method; some examples are published in [6]-[11]. the particular parameter p is described as a fraction. the numerator is the most important value for the evaluation of the parameter p, and it describes the level of the magnetic field intensity h at the main node point.
the numerator value of the parameter p favours the portion of the hysteresis loop close to the remanence (the intersection with the vertical axis in the b–h curve); on the contrary, its denominator value favours the loop portion near the saturation. the exact definition of the parameter p is an industrial secret. in the course of the analysis of the experiment results, several parameters p with different definitions were evaluated, for example the parameters p 10/45, p 15/45 or p 20/45. the dimensionless parameter p 15/45 was finally chosen as the most suitable one for further analysis of the examined patent wire p7, and eventually for other similar wires, because of its maximal sensitivity to the prestressing force and its minimal disturbance by negative influences.
the resulting regression fitting curve was calculated by polynomial regression for the investigated patent wire p7 and the chosen resultant parameter p 15/45. the temperature effect on the parameter p was considered linear and the force effect was considered a 3rd degree polynomial. the calculated curve is one of several drawn in figure 7, where it is labelled "d7". the differences between the theoretical fitting curve and the input experimental results are small: the maximal deviation between them is 1.6 % of the design strength of the investigated patent wire, and the standard deviation of all particular results is 0.6 % of the design strength.

4.2. the experiment realized on the currently used prestressed strand lp 15.7
the prestressed strand lp 15.7, which was the subject of the experiment described in this chapter, is used as standard by the freyssinet company at the present time. the arrangement of the experiment, its procedure and the evaluation of the results were similar to those described in the previous chapter.
figure 4. the fully equipped laboratory me sensor intended for the experiment on the patent wire p7, the assembled sensor inside the steel shield (above) and a view of its disassembled basic parts (below).
figure 5. the exterior view of the climatic chamber and the steel tensile testing machine (on the left) and of the laboratory me sensor installed on the prestressed strand lp15.7 inside the chamber (on the right).
in the course of the experiment, the investigated prestressed strand was again placed in the climatic chamber (see figure 5) and loaded in the steel tensile testing machine. the magnetic properties of the studied strand were investigated for three temperature levels of the strand surface, namely around 0 °c, +20 °c and +35 °c. the strand was loaded in five force steps for each temperature level, namely 40 kn, 80 kn, 120 kn, 160 kn and 200 kn; these correspond roughly to 20 %, 40 %, 60 %, 80 % and 100 % of the strand design resistance. the temperature of the observed strand was evaluated in the same way as for the patent wire p7 in chapter 4.1. the specific hysteresis loop was again measured and determined multiple times for each particular temperature level and force step, as can be seen, for example, from figure 6. for the observed strand lp 15.7, an example of the evaluated dependence of the chosen resultant parameter p 15/60 on the strand temperature (at a constant prestressing force of 120 kn) is shown in figure 6. the resulting regression fitting curves for the several chosen parameters p were also calculated using the same methods of mathematical analysis and statistics as in chapter 4.1; a minimal sketch of such a fit is given below.
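the following sketch illustrates the regression just described, with the parameter p modelled as linear in temperature and cubic in force and fitted by ordinary least squares. the (force, temperature, p) triples are invented placeholders, not the experimental data of the paper, and the variable names are hypothetical.

```python
# minimal sketch: fit p(f, t) with a linear temperature term and a cubic
# force polynomial, as described for the evaluating curves. the data
# below are invented placeholders, not the measured values.
import numpy as np

# hypothetical measurements: force [kN], temperature [deg C], parameter p [-]
f = np.array([10, 20, 30, 40, 47, 10, 20, 30, 40, 47], dtype=float)
t = np.array([0, 0, 0, 0, 0, 25, 25, 25, 25, 25], dtype=float)
p = np.array([0.91, 0.87, 0.82, 0.78, 0.75,
              0.93, 0.89, 0.84, 0.80, 0.77])

# design matrix: constant term, linear in t, and f, f^2, f^3 terms
design = np.column_stack([np.ones_like(f), t, f, f**2, f**3])

# ordinary least-squares fit of the surface p(f, t)
coeffs, *_ = np.linalg.lstsq(design, p, rcond=None)

# residuals of the fitted surface against the input points
residuals = p - design @ coeffs
print("coefficients:", coeffs)
print("max abs residual:", np.abs(residuals).max())
```

in the paper, the deviations are expressed as percentages of the design strength, which requires inverting the fitted curve from p back to force; the sketch stops at the fit itself.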
the differences between the theoretical fitting curves and the input experimental results are even smaller than for the wire p7: the maximal deviation between them is 1.4 % of the design resistance of the observed prestressed strand, and the standard deviation of all particular results is 0.2 % of the design strength. an example of a calculated regression fitting curve, expressing the dependence of the chosen parameter p 15/45 on the stress in the observed strand at a strand temperature of 20 °c, is shown in figure 7, where the relevant curve is labelled "l 2019".

5. practical application of the method on different prestressed elements
in this chapter, typical uses of the modified magnetoelastic method are briefly summarized on the basis of some practical applications.

5.1. the experiment realized on the prestressed cables composed of twelve strands on an existing bridge
the actual tensile forces in prestressed cables on an existing prestressed concrete bridge were investigated during this experiment. in general, the specific position for installation of the me sensor on the load-bearing structure of an existing bridge made from prestressed concrete is usually chosen for three basic reasons. firstly, the opening created in the concrete (see figure 2, figure 3 or figure 8 for example) must not substantially weaken the bridge load-bearing structure. secondly, substantial degradation of the concrete or of some prestressed reinforcement is suspected at the selected location, for example where water with chlorides penetrates into the structure. thirdly, the position is chosen based on the interest of the structural designer.
the investigated bridge is about thirty years old and its prestressing system is composed of longitudinal prestressing cables. the typical prestressed cable on this bridge is a post-tensioned one composed of twelve strands lp 15.7. it is led through the web of the box girder made from monolithic concrete inside a thin-walled steel duct, and it was injected with cement mortar after its tensioning. the separate parts of one prestressed cable were interconnected by couplers in working joints.
figure 6. the prestressed strand lp 15.7 – the relation between the temperature of the observed strand and the chosen resultant dimensionless parameter p 15/60 at the specific force step (120 kn).
figure 7. the comparison of the relations between the stress in the six observed prestressed elements (three patent wires p4.5, one patent wire p7 and two strands lp15.7 / 1860 mpa) and the chosen resultant dimensionless parameter p 15/45 for the prestressed element temperature 20 °c.
figure 8. the post-installed me sensor wound on the prestressed cable composed of twelve strands lp 15.7.
in the course of the experiment, six fully equipped me sensors were additionally installed on selected existing prestressed cables. this means that each sensor construction included, among other parts, two hall sensors and both secondary coils no. 1 and no. 2. in general, the individual observed cross-sections of the investigated prestressed cables are unique, at least due to the variable arrangement of the strands in the prestressed cable (see figure 9 for example); therefore, the methodology developed for application of the modified magnetoelastic method on existing prestressed concrete structures, described in more detail in reference [9], had to be used during the evaluation of the results.
in brief, the evaluation of the prestressing force in the cable by this methodology consists of four main stages. the first is a measurement, as accurate as possible, of the real geometric arrangement of the strands in the observed cross-section of the prestressed cable. the second stage is theoretical modelling of the measured geometric shape of the cable in a 3d software for electromagnetic field analysis (see figure 9). the third is the in situ observation of the electromagnetic behaviour of the me sensor installed on the cable (see figure 8). the fourth stage is the application of the available calibration curve gained in a laboratory test realized for the strand type from which the analysed cable is assembled.
a short study of the result uncertainties caused by the partially variable mutual position of the sensor body and the cable was carried out during the described experiment. the prestressing forces in the observed cables were determined repeatedly three times by the modified magnetoelastic method and subsequently compared with each other. for example, the mutual geometric arrangements of the me sensors and the investigated prestressed cables were changed repeatedly: each individual me sensor was displaced several times in the plane of the observed cable cross-section, in different directions perpendicular to the longitudinal axis of the cable, within the range of the clearance gap between the sensor body and the cable surface. subsequently, the measurement of the prestressing force was realized for each adjusted mutual position of the sensor body and the cable. this analysis of the obtained results provided information about the partial uncertainties of the evaluated tensile forces related to the geometric arrangement between the sensor body and the cable and to the numerical modelling of the problem (see figure 9). the variance of the resulting values on each observed cable did not exceed 1 % of the prestressing force magnitudes; a minimal numerical illustration is given below.
the tensile forces evaluated by the modified magnetoelastic method in the individual observed cable locations were also compared with each other. the force magnitude in some investigated cables was only about 50 % of the prestressing forces evaluated in the other ones. these cables were subsequently examined in detail: significant weakening by corrosion was detected near their measured cross-sections, and the corrosion process was distinctly uneven in these cases.
the authors tried to reduce the uncertainties of the experiments as much as possible. the main sources of the uncertainties are as follows: the uncertainty related to the material of the investigated prestressed reinforcement, the uncertainty connected with the fem numerical modelling, the uncertainty caused by the actual temperature of the reinforcement at the time of the experiment, and the uncertainties of the hall sensors. the chemical compositions and the microscopic structures of the materials of the prestressed reinforcements differ more or less from element to element; in the ideal variant of the experiment, if possible, a reference sample of the investigated prestressed reinforcement should be taken from the observed existing structure and then analysed. the application of more hall sensors at different positions in the investigated cross-section of the prestressed cable should be used for more accurate verification of the numerical model, based on multiple comparisons of the experimental and theoretical results.
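to make the repeatability figure above concrete, the following minimal sketch computes the relative spread of repeated force determinations on one cable; the force values are invented placeholders, not measured data.

```python
# minimal sketch: relative spread of repeated force determinations on one
# cable, as in the repeatability study above. the values are invented
# placeholders.
import numpy as np

forces_kn = np.array([1480.0, 1472.0, 1485.0, 1478.0])  # hypothetical repeats

mean_force = forces_kn.mean()
rel_range = (forces_kn.max() - forces_kn.min()) / mean_force
rel_std = forces_kn.std(ddof=1) / mean_force

print(f"mean force:      {mean_force:.1f} kN")
print(f"relative range:  {100 * rel_range:.2f} %")  # compare with the 1 % bound
print(f"relative st.dev: {100 * rel_std:.2f} %")
```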
the evaluating curves (see figure 7 for example) for the particular material of a prestressed reinforcement, which characterize the temperature effect on the analysed results, should be determined using calibrated force and temperature sensors and devices. the sensitivity of the hall sensors applied in the experiments should be verified using a calibrated source of a reference magnetic field.
as can be seen from figure 8, the installation of the me sensors required laborious and precise local demolition of the concrete layer at the locations of the cables selected for the experiment. an additional benefit of this experiment is the fact that the openings created for the me sensor installation can be used for a proper inspection of the prestressed reinforcements and their potential corrosion. because the original grouting of the steel ducts containing the cables was not performed well, the me sensor can also identify weakening of the cable cross-section by corrosion located outside the investigated place and the created opening, since the corresponding loss of the cable tensile stress is not redistributed to other strands or cables.

5.2. the experiments realized on the prestressed bars
the experiments described in this chapter were carried out on samples of the currently used prestressed threadbar of type mukusol 15fs made by the dywidag company. the bar diameter is 15 mm, the thread diameter is 17 mm, the characteristic strength is 960 mpa and the corresponding characteristic breaking load is 170 kn.
figure 9. the important output of the numerical model necessary for the correct interpretation of experimental data (the numerical model of the observed cable cross-section with precisely located positions of the twelve strands lp15.7 and the two used hall sensors).
in past years, the authors realized similar experiments focused on threadbars [11]; however, simpler constructions of the me sensors with only one secondary coil were used during those experiments, and it was stated in the conclusion of the paper [11] that supervision of the prestressed bars using the modified magnetoelastic method does not seem appropriate. the supposed advantages of the newly used me sensor arrangement with both secondary coils were verified on the threadbars in the course of the recent experiments [8]. the secondary coils were applied as the detector of the magnetic field intensity "h" instead of the originally used hall sensors, which failed in the conditions of the swirling magnetic field caused by the threads of the prestressed bar [11]. the differential signal evaluated from the data observed by the pair of secondary coils is used to evaluate the same physical quantity that is also measured by hall sensors; however, the magnetic field intensity "h" is then not observed only locally, as in the case of hall sensors, but the differential signal corresponds to the value of "h" averaged over the volume of the annulus between the two secondary coils. both secondary coils used in the applied me sensor were made with the same length as the screw thread pitch of the investigated threadbar, in order to reduce the negative influence of the threads and to eliminate abnormalities of the magnetic field in their neighbourhood. see reference [8] for more details.
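the conversion of the differential coil signal into an averaged "h" value can be illustrated as follows. this is one plausible reading based on faraday's law, assuming equal turn counts and a purely air-filled annulus between the two coils; the turn count, annulus area and waveforms are hypothetical, and the authors' actual processing is not disclosed in the paper.

```python
# minimal sketch: estimate the field intensity h averaged over the air
# annulus between two concentric secondary coils from their differential
# signal. an illustration of the principle only, not the authors' algorithm.
import numpy as np
from scipy.integrate import cumulative_trapezoid

MU0 = 4e-7 * np.pi          # vacuum permeability [H/m]
N_TURNS = 200               # hypothetical turn count of each coil
A_ANNULUS = 3.0e-4          # hypothetical annulus cross-section [m^2]

# hypothetical sampled voltages of the outer and inner secondary coils [V]
t = np.linspace(0.0, 0.5, 5000)               # time axis [s]
v_outer = 0.020 * np.sin(2 * np.pi * 4 * t)   # placeholder waveforms
v_inner = 0.018 * np.sin(2 * np.pi * 4 * t)

# faraday's law: integrating the induced voltage gives the linked flux;
# the difference of the two fluxes is the flux through the air annulus
flux_diff = cumulative_trapezoid(v_outer - v_inner, t, initial=0.0) / N_TURNS

# in air, b = mu0 * h, so the annulus flux yields h averaged over it
h_avg = flux_diff / (MU0 * A_ANNULUS)         # [A/m]
print("peak averaged h: %.1f A/m" % np.max(np.abs(h_avg)))
```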
the phenomenon of an inverse sensitivity of the threadbar material to the applied axial load was discovered during the analysis of the experimental data. this phenomenon is probably caused by the disturbing effects of eddy currents, whose influence develops because of the compact massive cross-section of the bar in comparison with prestressed wires or strands; in terms of magnetic behaviour, a strand is close to a system of prestressed wires and not to a compact threadbar. more details of the phenomenon are described in reference [8]. the maximal difference between the actual force values measured by the standard force sensor used during the experiment and the forces determined by applying the evaluating curve to the experimental data measured by the me sensor was about 2.0 % of the characteristic strength of the investigated threadbar.
based on the new experience of the authors with the application of the me sensor containing both secondary coils, it can be stated that the modified me method enables determination of the actual force value in standard prestressed threadbars. however, during the design of an experiment, it is necessary to consider the lower sensitivity of the modified magnetoelastic method for prestressed threadbars compared, for example, to strands. this lower sensitivity is probably related to the lower strength of the material used for production of the threadbars.

5.3. the experiment realized on the standard full locked cable pv-150
the experiment described in this part was realized on the currently used standard full locked cable pv 150 made by the firm pfeifer seil- und hebetechnik gmbh. the arrangement of the experiment and its results are described in more detail in reference [8]. the basic objective of the experiment was to verify whether the modified me method can be used for determination of the axial tensile forces in the standardly produced full locked cables of type pv. the core of this cable type is composed of several layers of circular cross-section wires, and the locked outer layers are formed from z-shaped wires. the outer diameter of the investigated cable pv 150 is 40 mm, its characteristic strength is 1435 mpa and the corresponding characteristic breaking load is 1520 kn.
the magnetic behaviour of the investigated cable pv 150 was observed by the fully equipped me sensor (see figure 1). its construction involved the following parts: the sensor body made by a 3d printer (see figure 10); the primary coil used as the controlled magnetic field source; the secondary coil 1 (see figure 10) applied as a sensor of the magnetic flux that is closely related to the magnetic induction "b" in the measured cable cross-section; the system of two hall sensors (see figure 10) and the secondary coil 2 used as two independent sensors of the magnetic field intensity "h" in the measured cable cross-section; and the steel shield of the me sensor applied as protection against magnetic influences from the surroundings. in the course of the experiment, the cable was anchored in a stand and loaded twice in nine force steps by the tensile force produced by a calibrated tensional hydraulic jack. the experiment was primarily focused on verification of the usability of the modified magnetoelastic method for this type of structural element; it was not intended as a full experimental analysis of the evaluating curves, because it was realized only for one temperature of the cable.
however, several different evaluation procedures were examined based on the measured data. the obtained experimental results show that the modified magnetoelastic method can be applied meaningfully for experimental determination of the tensile force in the standard full locked cables of type pv. the magnetic behaviour of these cables is analogous to that of the standard prestressed strands of type lp.

6. alternative arrangement of the me sensor
in the course of the research project, a prototype of a removable me sensor was also developed. each of its parts can be simply mounted on the observed structural element and then easily removed, and no coil has to be produced in situ, so the time-consuming winding of coils is not required at the experiment location.
figure 10. the production of the me sensor on the cable pv 150, the process of winding the secondary coil 1, the placement of the hall sensor.
the basic part of the removable me sensor is the primary coil wound on a steel horseshoe-shaped core (see figure 12), which is used to develop a sufficiently intense variable electromagnetic field. the parameters of the magnetic field are measured using two secondary coils and two hall sensors. the secondary coils are wound around the orange frames that were made by a 3d printer and that are put on both ends of the steel core (see figure 12). they measure the magnetic field properties in the direction perpendicular to the element axis; more specifically, they observe the changes of the magnetic flux in the magnetic circuit, which correlate with the magnetic induction "b". the two hall sensors are located inside the special black holder (see figure 12), which was also made by the 3d printer and which is mounted on the studied structural element in the plane of symmetry of the removable me sensor. this means both hall sensors are situated in the central plane of the sensor perpendicular to the investigated element, and they measure the properties of the magnetic field in the direction parallel with the element axis; more specifically, they observe the magnetic field intensity "h" in the investigated cross-section of the element located in the plane of sensor symmetry.
as part of the development of the sensor prototype, the possibility of observing the magnetic flux was evaluated. the flux observations were made using the two measuring coils positioned on both ends of the steel horseshoe-shaped core of the electromagnet and connected in series in the measuring circuit. the mutual relation between the time courses of the measured signals from the secondary coils and the hall sensors was investigated and assessed. the question of interest was whether the magnetization of the compact massive core of the primary coil, onto which both secondary coils are threaded, influences the measurement of the hysteresis behaviour of the investigated prestressed element. based on the obtained experimental data, it can be stated that the hysteresis is affected by the compact core of the primary coil; however, this phenomenon occurs only during the rapid increase of the magnetic flux accompanying the rapid rise of the excitation electric pulse. it always disappears completely during the descending branch of the excitation pulse, because the process is then almost quasi-static from the viewpoint of the dynamics of the magnetic field development.
the magnetic behaviour of the assessed arrangement of the experiment, i.e., the system consisting of the removable me sensor and the investigated structural element, is purely linear in the zone of the descending branch of the excitation pulse, which is the part actually used for the analysis of the experimental data measured by the me sensor. this behaviour was also verified by the removable me sensor mounted on a non-ferromagnetic dummy of the investigated element.
the obtained results of the experiment, described in more detail in reference [7], confirmed the functionality of the designed arrangement of the removable me sensor. however, the precision of the obtained results is lower than for the cylindrical me sensor described in chapter 3, due to the less accurate definition of the magnetic field in the observed cross-section compared to the arrangement of the cylindrical me sensor. the removable me sensor can be used for a relatively quick preparative experimental analysis intended for evaluation of the total value of the prestressing force in structural elements activated before the start of the experiment. the intended purpose of the practical application of the removable me sensor is a quick selection of the prestressed elements that are the most important for further detailed analysis using the cylindrical fully equipped me sensors.
the comparison of the basic advantages and disadvantages of the standard cylindrical fully equipped me sensor, whose general scheme is shown in figure 1, and the removable me sensor, whose general diagram is drawn in figure 11, can be summarized as follows. the advantages of the standard cylindrical me sensor are that a relatively high measurement accuracy can be achieved and that the dimension of the cross-section of the observed prestressed reinforcement is practically unrestricted; its basic disadvantage is its relatively time-consuming production. the fundamental advantage of the removable me sensor is that the time necessary for preparation and realization of the experiment is substantially shorter; its disadvantages are the higher uncertainties of the evaluated prestressing forces and a considerable limitation of the dimension of the observed reinforcement cross-section.
figure 11. the arrangement of the removable me sensor.
figure 12. the application of the removable me sensor on the strand lp 15.7 in laboratory conditions.

7. results and discussions
so far, as part of the development of the modified magnetoelastic method, the experiments focused on the evaluating curves were realized for four specimens of patent wires, three types of prestressed bars and two prestressed strands lp 15.7 / 1860 mpa from different producers. the results obtained for six selected investigated prestressed elements (namely, three patent wires p4.5, one patent wire p7 and two prestressed strands lp15.7) were compared with each other in detail. the resulting regression fitting curves for the particular elements at the element temperature of 20 °c are drawn in figure 7. the analysis of the results shows that the standard deviations of all measured data from the specific calculated evaluating curves are usually significantly lower than 1.0 % of the design strength.
the comparison of the corresponding evaluating curves for two different samples of the prestressed strands lp 15.7 / 1860 mpa made by different producers is shown in figure 7 (specimens "l 2019" and "l 2016"). the specific producer of the strand "l 2016" is not known; the strand "l 2019" was supplied by freyssinet cz. the difference between the curves is roughly 5 % of the design strength of the strands, which means the evaluating curves of the particular samples of the strand lp 15.7 deviate roughly 2.5 % from the average evaluating curve of this type of prestressed strand.
very similar results were obtained from the comparison of the corresponding evaluating curves for three different test samples of the patent wire p4.5, which were taken during the demolitions of three different existing bridges built in the seventies and eighties in czechoslovakia. the three curves "d4 spec. 1", "d4 spec. 2" and "d4 spec. 3" for the wire temperature 20 °c are drawn in figure 7. the evaluating curves of the particular test samples of the patent wire p4.5 deviate roughly 2.5 % from the average evaluating curve of this type of patent wire.
in contrast to the previously mentioned results, a significant difference, roughly above 20 % of the design strength, was found between the evaluating curves for the patent wires p7 and p4.5, as can be seen from figure 7. the results of additional analyses indicated that the chemical compositions of the materials from which the studied wires p4.5 and p7 were made are almost identical. nevertheless, it was found that the microscopic structure of the steel of the wire p7 is fundamentally different from that of the three wires p4.5, and this fact seems to be the reason for the difference between the evaluating curves for the patent wire p7 and the wires p4.5. as expected, it was confirmed experimentally that the evaluation relationships are not universally applicable to the different materials used for various prestressed elements. it was further found that the evaluating curves are not mutually applicable even for patent wires with different diameters.

8. conclusions and outlook
the results stated above demonstrate that the modified magnetoelastic method can be used in experiments realized on existing structures for determination of the actual value of the tensile force in steel prestressed structural elements using the available general evaluating curves. the result uncertainties of these experiments are then similar to those of the alternative experimental methods, e.g., the vibration frequency method [1], [2]; it should be noted here that the frequency method cannot be used for prestressed elements embedded in concrete. in cases when it is possible to remove a test sample of the specific prestressed element from the particular existing bridge, the evaluating curves for this observed element can be evaluated according to the above-described procedure; the uncertainties of the evaluated prestressing forces are then relatively small and comparable with the precision of the method based on strain measurement with strain gauges. the possibility of using the modified magnetoelastic method for the prestressed bars was also verified during previous experiments [8] and [11].
based on the new experience of the authors with the application of the me sensor containing both secondary coils, it can be stated that the modified magnetoelastic method enables determination of the actual force value in standard prestressed threadbars. however, the me sensor sensitivity, when applied to threadbars, is lower than for prestressed wires and strands; the main reason for this is the significantly lower design strength of the threadbar materials.
in the authors' opinion, the modified magnetoelastic method is the only one that can be purposefully and effectively used for evaluation of the prestressing force in prestressed reinforcements embedded inside the concrete of existing prestressed concrete bridges or similar engineering structures.

acknowledgement
the results presented in this article are outputs of the research project fv 30457 "utilization of a magnetoelastic method for increasing the reliability and durability of existing and newly built prestressed concrete structures" supported by the ministry of industry and trade of the czech republic.

references
[1] m. polák, t. plachý, determination of forces in roof cables at administrative center amazon court, procedia engineering, vol. 48, 2012, pp. 578-582. doi: 10.1016/j.proeng.2012.09.556
[2] m. polák, t. plachý, experimental evaluation of tensile forces in short steel rods, applied mechanics and materials, vol. 732, 2015, pp. 333-336. doi: 10.4028/www.scientific.net/amm.732.333
[3] p. fajman, m. polák, measurement of structural cable of membranes, proceedings of the 50th annual conference on experimental stress analysis ean 2012, tábor, czech republic, 4-7 june 2012, pp. 61-64.
[4] j. máca, p. fajman, m. polák, measurements of forces in cable membrane structures, proceedings of the 13th international conference on civil, structural and environmental engineering computing cc 2011, chania – crete, greece, 6-9 september 2011, paper 190. doi: 10.4203/ccp.96.190
[5] p. fajman, m. polák, j. máca, t. plachý, the experimental observation of the prestress forces in the structural elements of a tension fabric structure, applied mechanics and materials, vol. 486, 2014, pp. 189-194. doi: 10.4028/www.scientific.net/amm.486.189
[6] t. klier, t. míčka, m. polák, m. hedbávný, application of the modified magnetoelastic method, proceedings of the 17th imeko tc 10 and eurolab virtual conference "global trends in testing, diagnostics & inspection for 2030", online event, 20-22 october 2020, pp. 344-349. online [accessed 8 september 2021] https://www.imeko.org/publications/tc10-2020/imeko-tc10-2020-051.pdf
[7] t. klier, t. míčka, m. polák, t. plachý, m. hedbávný, l. krejčíková, the modified elastomagnetic sensor intended for a quick application on an existing prestressed concrete structures, proceedings of the 58th conference on experimental stress analysis ean, sobotín, 19-22 october 2020, p. 9.
[8] t. klier, t. míčka, m. polák, t. plachý, m. hedbávný, l. krejčíková, new information about practical application of the modified magnetoelastic method, matec web of conferences, vol. 310, no. 00026, 2020, p. 10.
doi: 10.1051/matecconf/202031000026
[9] t. klier, t. míčka, m. polák, t. plachý, m. hedbávný, r. jelínek, f. bláha, application of the modified magnetoelastic method and an analysis of the magnetic field, acta polytechnica ctu proceedings, vol. 15, 2018, pp. 46-50. doi: 10.14311/app.2018.15.0046
[10] t. klier, t. míčka, t. plachý, m. polák, t. smeták, m. šimler, the verification of a new approach to the experimental estimation of tensile forces in prestressed structural elements by method based on the magnetoelastic principle, matec web of conferences, vol. 107, no. 00015, 2017, p. 8. doi: 10.1051/matecconf/201710700015
[11] t. klier, t. míčka, m. polák, t. plachý, m. šimler, t. smeták, the in situ application of a new approach to experimental estimation of tensile forces in prestressed structural elements by method based on the magnetoelastic principle, proceedings of the 55th conference on experimental stress analysis 2017, košice, czechia, 30 may – 1 june 2017, pp. 122-132.
[12] p. sumitro, n. miyamoto, a. jaroševič, corrosion monitoring by utilizing em technology, proceedings of the 5th international conference on structural health monitoring of intelligent infrastructure, shmii-5 2011, cancun, mexico, 11-15 december 2011, p. 11.
[13] m. chandoga, a. jaroševič, j. sedlák, e. sedlák, experimental and in situ study of bridge beams supported by bottom external tendons, proceedings of the 3rd international fib congress and exhibition, incorporating the pci annual convention and bridge conference 2010, washington, usa, 29 may – 2 june 2010, 9 pp.
[14] m. chandoga, a. jaroševič, measurement of force distribution along the external tendons, proceedings of the international conference analytical models and new concepts in concrete and masonry structures, lodz, poland, 9-11 june 2008, 6 pp.
[15] m. chandoga, p. fabo, a. jaroševič, measurement of forces in the cable stays of the apollo bridge, proceedings of the 2nd fib congress, naples, italy, 2006, pp. 674-675.
[16] p. fabo, m. chandoga, a. jarosevic, the smart tendons – a new approach to prestressing, proceedings of the fib symposium 2004 concrete structures: the challenge of creativity, avignon, france, 26-28 april 2004, pp. 286-287.
[17] p. fabo, a. jaroševič, m. chandoga, health monitoring of the steel cables using the elasto-magnetic method, proceedings of the asme international mechanical engineering congress and exposition, 2002, pp. 295-299. doi: 10.1115/imece2002-33943
[18] s. sumitro, a. jaroševič, m. l. wang, elasto-magnetic sensor utilization on steel cable stress measurement, proceedings of the 1st fib congress, concrete structures in the 21st century, 2002, pp. 79-86.
[19] m. chandoga, j. halvonik, a. jaroševič, d. w. begg, relationship between design, modelling and in-situ measurement of pretensioned bridge beams, proceedings of the 8th international conference on computational methods and experimental measurements, cmem 1997, rhodes, greece, 1 may 1997, pp. 623-632.
[20] a. jaroševič, magnetoelastic method of stress measurement in steel, nato science series, 3 (65), 1998, pp. 107-114.
[21] a. jaroševič, m. chandoga, force measuring of prestressing steel, inzinierske stavby, 42, 1994, pp. 56-62.
[22] a. m. sarmento, a. lage, e. caetano, j. figueiras, stress measurement and material defect detection in steel strands by magneto elastic effect.
comparison with other non-destructive measurement techniques, proceedings of the 6th international conference on bridge maintenance, safety and management iabmas 2012, stresa-lake maggiore, italy, 8-12 july 2012, pp. 914-921. doi: 10.1201/b12352-126
[23] c. chen, w. wu, d. wei, stress measurement of pre-stressed members using elasto-magnetic sensors, journal of the chinese institute of civil and hydraulic engineering, vol. 24, iss. 2, 2012, pp. 157-167.
[24] h. j. wichmann, a. holst, h. budelmann, magnetoelastic stress measurement and material defect detection in prestressed tendons using coil sensors, proceedings of the 7th international symposium on non-destructive testing in civil engineering ndtce'09, nantes, france, 30 june – 3 july 2009, 6 pp. online [accessed 8 september 2021] https://www.ndt.net/article/ndtce2009/papers/60.pdf
[25] h. feng, x. liu, b. wu, d. wu, x. zhang, c. he, temperature-insensitive cable tension monitoring during the construction of a cable-stayed bridge with a custom-developed pulse elasto-magnetic instrument, structural health monitoring, vol. 18, iss. 5-6, 2019, pp. 1982-1994. doi: 10.1177/1475921718814733

characterization of the santa maria del fiore cupola construction tools using x-ray fluorescence
acta imeko issn: 2221-870x march 2022, volume 11, number 1, 1-7
leila es sebar1, leonardo iannucci1, sabrina grassini1, emma angelini1, marco parvis2, andrea bernardoni3, alexander neuwahl4, rita filardi5
1 dipartimento di scienza applicata e tecnologia, politecnico di torino, turin, italy
2 dipartimento di elettronica e telecomunicazioni, politecnico di torino, turin, italy
3 università di siena e istituto di storia della scienza (museo galileo), florence, italy
4 artes mechanicae, florence, italy
5 museo dell'opera del duomo, florence, italy
section: research paper
keywords: x-ray fluorescence; pca; non-invasive measurements; cultural heritage; metals; archaeometry
citation: leila es sebar, leonardo iannucci, sabrina grassini, emma angelini, marco parvis, andrea bernardoni, alexander neuwahl, rita filardi, characterization of the santa maria del fiore cupola construction tools using x-ray fluorescence, acta imeko, vol. 11, no. 1, article 9, march 2022, identifier: imeko-acta-11 (2022)-01-09
section editor: fabio santaniello, university of trento, italy
received march 15, 2021; in final form december 13, 2021; published march 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: leonardo iannucci, e-mail: leonardo.iannucci@polito.it
abstract: this paper presents the characterization of different tools employed in the construction of the santa maria del fiore cathedral in florence; they are part of the opera di santa maria del fiore collection and are currently exhibited in the museo dell'opera del duomo. the analysed objects are turnbuckles, pulleys, three-legged lewises, and pincers; despite their uniqueness and their importance from the historical point of view, this study is the first one that investigates the composition of their alloys. this information can be of great interest for curators to find the best conservation strategies and to gain new insights into the production techniques typical of the renaissance. the study was performed using x-ray fluorescence (xrf) in order to identify the materials constituting the objects. the xrf spectra were then analysed using chemometric techniques, namely principal component analysis (pca), in order to investigate possible similarities among different alloys and thus provide new indications helping to collocate these tools in a specific historical period.

1. introduction
the santa maria del fiore cupola (dome), located in florence, is a unique renaissance creation. indeed, brunelleschi's cupola is an extraordinary masterpiece, still being the biggest masonry dome in the world.
its dimension, together with the innovative construction project that led to its realization, made the cupola one of the most famous buildings in history. the history of the santa maria del fiore cathedral spans a wide time frame and is a really complex one. in the 14th century florence was a flourishing city, with the reputation of being one of the most important in europe. the city wanted to increase its relevance, visibility and pride, in order to outshine all the nearby competing cities. to this aim, the florentine citizens decided to build a very large cathedral, similar to the ones in the other most important european cities [1]-[3].
the history of the santa maria del fiore cathedral started with the italian architect arnolfo di cambio, who developed the initial project and supervised the beginning of the construction in 1296. about 100 years later, just the main body was completed, without the facade and the dome. in 1418 an open competition started, to find a project for the construction of the biggest cupola in the world. the competition ended only in 1420, when filippo brunelleschi was entrusted with the work. however, his project was always surrounded by skepticism [2]-[4].
among the many challenges that brunelleschi had to face, the main one was the size of the dome. indeed, the regular techniques employed in the 14th century involved the use of internal support structures, which would not have been possible for the construction of the santa maria del fiore dome: due to their size, these frameworks would not have been capable of supporting even their own weight. brunelleschi, who had spent many years in rome studying the ancient roman architectural techniques, solved this major issue, together with many other challenges related to the project, by finding a way to build a self-supporting double-walled dome, thanks to the insertion of a particular pattern of bricks, called "a spina di pesce" (herringbone). the construction was carried out employing suspended platforms which were progressively moved up along with the construction [1]-[8]. another remarkable novelty introduced by brunelleschi was the design and development of specific machines and tools that were used on the construction site. the employment of these new tools, used to lift the building materials, allowed both time and money to be saved.
today, some of these unique tools, such as pulleys, turnbuckles, pincers, winches, and ropes, are part of the opera di santa maria del fiore collection, exhibited in the museo dell'opera del duomo, in florence. even if these tools were a great innovation for the duomo project, they have not attracted the attention of researchers so far. nevertheless, these unique objects and their history could provide important information regarding the production techniques and the materials employed during the brunelleschi era. for instance, a lot of information can be deduced from a visual inspection of the tools, by comparison with the historical sources: it is possible to find drawings of similar tools made by taccola, francesco di giorgio, bonaccorso ghiberti, giuliano da sangallo and even in the codex atlanticus by leonardo da vinci [1], [9]. on the other hand, the provenance, the materials and the production techniques of some of these tools are more complex to reconstruct. in such cases, conservation science and engineering can provide useful methods to reconstruct their history; indeed, tailored analytical strategies can be applied to investigate the constituent materials and to develop specific conservation approaches [10]-[12].
in this paper, the first characterization of these construction tools with a non-invasive and in situ approach is presented. a previous study examined the preliminary results from the x-ray fluorescence analyses performed on these objects [13]. in the present manuscript a complete survey of the constituent materials is provided, discussing the possible role of different elements in the alloys. the obtained data were then processed using multivariate analysis to find possible similarities among the spectra acquired on different tools and thus formulate hypotheses on the historical collocation of these objects.

2. materials and methods
this section presents the characterized historical tools and the analytical techniques employed in the study; moreover, the performed data analysis is described in detail.

2.1. historical tools under study
this study is focused on thirteen objects which are part of the opera di santa maria del fiore collection and are currently exhibited in the museo dell'opera del duomo, in florence. they are tools and equipment that were used in the construction of the santa maria del fiore cathedral. as previously discussed, the construction of this building took several centuries, so it is difficult to collocate each of the tools in a specific historical period. the investigated objects are two turnbuckles, eight pulleys, two three-legged lewises and a pincer. photographs of some of the characterized tools are shown in figure 1, where some of the points of analysis are marked.

2.2. x-ray fluorescence
all objects were characterized using x-ray fluorescence in order to investigate the composition of the constituent materials. measurements were performed using a bruker tracer 5i analyser, which allowed non-invasive measurements to be performed without moving the tools from their location in the museum. the instrument is equipped with a 20 mm2 silicon drift detector and a rhodium (rh) anode. the ti-al filter was used in order to reduce the intensity of the peaks related to rhodium and palladium (pd) [14]. analyses were carried out using a voltage of 40 kv and a current of 40 µa, with the 3 mm collimator. spectra processing and element identification were performed using the artax spectra (8.0.0.476) software; the principle of the element identification step is illustrated by the sketch below.
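the following minimal sketch matches detected peak energies against tabulated kα line energies (the values are those quoted in section 3 of this paper). it illustrates the principle only; it is not the algorithm implemented in the artax software, and the tolerance value is an arbitrary choice.

```python
# minimal sketch: match peak energies to characteristic k-alpha lines.
# energies [keV] as quoted in the text; not the artax algorithm.
KALPHA_LINES_KEV = {
    "k": 3.31, "ca": 3.69, "mn": 5.90, "fe": 6.40,
    "ni": 7.48, "cu": 8.05, "zn": 8.64, "as": 10.54,
}

def identify(peak_kev, tolerance_kev=0.05):
    """return the elements whose k-alpha line lies within the tolerance."""
    return [el for el, e in KALPHA_LINES_KEV.items()
            if abs(e - peak_kev) <= tolerance_kev]

# hypothetical peak positions picked from a measured spectrum; note that
# 10.55 keV also coincides with pb-lalpha, so a match on "as" alone is
# not conclusive there
for peak in (6.41, 8.06, 10.55):
    print(f"{peak:.2f} keV -> {identify(peak)}")

# the k-beta/k-alpha intensity ratio (about 0.14 for arsenic [19]) helps
# to resolve such overlaps: a strongly deviating ratio hints at a second
# overlapping line
def ratio_consistent(i_kbeta, i_kalpha, expected=0.14, rel_tol=0.5):
    return abs(i_kbeta / i_kalpha - expected) <= rel_tol * expected

print("ratio check:", ratio_consistent(14.0, 100.0))
```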
2.3. multivariate analysis
in order to investigate similarities among different alloys, the acquired spectra were processed by means of principal component analysis (pca). using this chemometric technique, it is possible to identify patterns in the acquired measurements and classify the spectra into different groups.
figure 1. historical tools employed for the construction of the santa maria del fiore cathedral in florence. the instruments are displayed in the museo dell'opera del duomo, florence: a), b) turnbuckles, c) three-legged lewises, d), e) pulleys, f) pincers. yellow circles indicate the analyzed areas.
pca was performed on the xrf spectra using a python script, as described in [15], by means of the scikit-learn library [16]. before computing the principal components (pcs), the spectra were pre-processed as follows:
1) the interval of interest was limited to the range from 1 kev to 12.2 kev for iron alloys, and from 1 kev to 15.5 kev plus from 24.5 kev to 30 kev for bronzes. in these energy ranges all significant peaks for the two materials are present, and thus only relevant parts of the spectra are included in the pca.
2) the spectra baseline was subtracted using the embedded function in the artax spectra software.
3) the signal-to-noise ratio was improved by applying the savitzky-golay filter [17]. a second order polynomial and a window length of 90 ev were used in order to avoid any over-smoothing.
4) the spectra were normalized using the standard normal variate transformation (snvt) [18].
after the pre-processing, the principal components were computed and the results were graphed as biplots, in which the scores for the different spectra are plotted. similarities among different spectra were evaluated using a gaussian mixture model probability distribution, and confidence ellipses were accordingly drawn using the sklearn.mixture.gaussianmixture class from the scikit-learn library. a minimal sketch of this pipeline follows.
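the sketch below reproduces the pipeline of this section under stated assumptions: the spectra are already baseline-subtracted and stored row-wise in a numpy array, and the channel width is hypothetically 20 ev, so the 90 ev smoothing window corresponds to 5 channels. the synthetic input data are placeholders.

```python
# minimal sketch of the section 2.3 pipeline: range restriction,
# savitzky-golay smoothing, snv normalization, pca, gaussian mixture.
# parameter values not given in the text are placeholders.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def preprocess(spectra, energies_kev, lo=1.0, hi=12.2):
    """restrict range, smooth, and apply snv normalization row-wise."""
    mask = (energies_kev >= lo) & (energies_kev <= hi)   # iron-alloy window
    x = spectra[:, mask]
    # 5-channel window ~ 90 eV at an assumed 20 eV channel width
    x = savgol_filter(x, window_length=5, polyorder=2, axis=1)
    # standard normal variate: centre and scale each spectrum separately
    return (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)

# hypothetical input: 20 spectra of 2048 channels, 20 eV per channel
rng = np.random.default_rng(0)
energies = np.arange(2048) * 0.020
spectra = rng.poisson(50, size=(20, 2048)).astype(float)

scores = PCA(n_components=2).fit_transform(preprocess(spectra, energies))

# gaussian mixture used to group the 2-d scores; its covariances are what
# the confidence ellipses in the score plots are drawn from
gmm = GaussianMixture(n_components=2, random_state=0).fit(scores)
print("cluster labels:", gmm.predict(scores))
```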
3. results and discussion
3.1. alloys identification
xrf analyses were performed on all investigated tools, choosing different points of interest on each object. the primary goal was to identify the alloys constituting the different parts of the tools. most of the analysed objects are pulleys, as these tools were commonly used on the cathedral construction site over the centuries and many of them are still preserved in the museum. it is interesting to notice that two kinds of pulleys can be identified among those present in the museum collection: one is characterized by both the main body and the wheel made of wood (shown in figure 1e), and the other is made completely of metal (shown in figure 1d). even if at first glance it could appear easy to collocate the two typologies of pulleys in different historical periods, the only information available in the literature dates the metallic pulleys to the renaissance era as well [3]. this simple example further highlights the need for an archaeometric approach to study these important tools.
figure 2 shows some representative spectra acquired on different pulleys. in figure 2a, the spectrum acquired on the metallic frame of a wooden pulley is reported; as can be seen in figure 1e, this typology of pulleys had a metallic frame needed to hold and anchor the tool. the material can be identified as an iron alloy, considering the two main peaks at 6.40 kev and 7.06 kev, which correspond to the characteristic kα and kβ lines of iron, respectively. the material is further characterized by the presence of manganese, copper, zinc, and nickel, demonstrated by the peaks at 5.90 kev (mn-kα), 8.05 kev (cu-kα), 8.64 kev (zn-kα), and 7.48 kev (ni-kα), respectively. finally, it is possible to identify calcium by the 3.69 kev and 4.01 kev peaks, corresponding to the kα and kβ emission lines, and potassium (kα 3.31 kev); these are present in all acquired spectra and can be related to environmental contamination. additional peaks are present at higher energies. in particular, a triplet of peaks can be attributed to iron sum peaks, i.e., 12.81 kev (kα + kα), 13.46 kev (kα + kβ), and 14.12 kev (kβ + kβ). moreover, the pair of peaks at 4.67 kev and 5.32 kev can be assigned to the escape peaks of the fe-k lines. furthermore, the peaks related to the rhodium anode can be identified by the kα and kβ lines at 20.22 kev and 22.72 kev, respectively. the broad peak at 19.06 kev can be attributed to the compton scattering of the rh characteristic photons. arsenic is present too, as can be seen from the peaks at 10.54 kev (kα) and 11.72 kev (kβ). for these peaks, the k-shell intensity ratio (kβ/kα) is respected, as it has an average value close to 0.14 [19]. arsenic was a common contaminant in iron ores and thus was often present in iron alloys [20], [21].
the spectrum acquired on the main body of a metallic pulley is then presented in figure 2b. in this case too, the material can be identified as an iron alloy, in which most of the peaks mentioned for the spectrum in figure 2a can be found; at the same time, the relative intensity of the peaks changes with respect to the ones in the spectrum shown in figure 2a.
figure 2. xrf spectra collected on pulleys: frame of one of the 'wooden' pulleys (a), main body of one of the 'metallic' pulleys (b), wheel of one of the 'metallic' pulleys (c). the black line is the acquired spectrum, while the computed baseline is in green.
it is worth noticing that for the three points of analysis acquired on one of the metallic pulleys (referred to in the next section as 'pulley 6'), a different spectrum was obtained. the material can be identified as an iron alloy characterized by the additional presence of lead. in figure 3a a detail of a spectrum obtained from this pulley is reported. lead can be identified considering different features. first, it is possible to notice that the arsenic kβ over kα ratio is not respected, from which it can be deduced that the peak at 10.55 kev is due to the overlapping of both the pb-lα and as-kα lines. furthermore, it is possible to notice the presence of a left shoulder on the sum peak of iron (kα + kα) at 12.61 kev, corresponding to the pb-lβ line. other characteristic pb lines are identified at 11.35 kev and 14.76 kev, being the pb-lη and pb-lγ1, respectively. a spectrum showing the same energy interval acquired on one of the pulleys without lead is reported in figure 3b; in this case, the left shoulder on the sum peak of iron at 12.61 kev is not present.
in figure 2c the spectrum acquired on the wheel of one of the metallic pulleys is shown. it is possible to describe the material as a lead bronze, i.e., an alloy containing copper, tin, and lead. most of the peaks previously described for figure 2a are present also in this spectrum, except for the escape and sum peaks of iron.
moreover, there is the additional presence of the characteristic peaks of lead (lα-10.55 kev, lη-11.35 kev, lβ6-12.14 kev, lβ4-12.31 kev, lβ1-12.61 kev, lβ5-13.01 kev, lγ1-14.76 kev, lγ6-15.18 kev), silver (kα-22.16 kev), tin (kα-25.27 kev, kβ1-28.48 kev and kβ2-29.11 kev), and antimony (kα1-26.36 kev). the remaining peaks can be identified as sum peaks, e.g., 16.06 kev (cu-kα + cu-kα), 16.67 kev (cu-kα + zn-kα), 16.97 kev (cu-kα + cu-kβ), and 18.61 kev (pb-lα + cu-kα).
an interesting case study is then represented by the two turnbuckles, which are of great importance from both the technical and the historical point of view. these tools constituted a great innovation that allowed heavy loads to be lifted in a smooth and controlled way. moreover, they substituted steel rods for the stone positioning, reducing the risk of chipping the stones during this operation. their historical importance is testified by their representation in different collections of drawings from the renaissance era, for example in the 'taccuino senese' by giuliano da sangallo [22]. as can be seen in figure 1a and figure 1b, turnbuckles are composed of different parts, namely the central screw, the nut, the hook, and two connecting rods. threaded components are particularly important in this investigation as they can give new insights into the production routes typical of the renaissance era and can provide hints to date the objects. during that period threaded parts were mainly realized in bronze, as this alloy has good machinability using steel tools; for this reason, threaded components made of iron should belong to a later period.
the most representative spectra collected on the two turnbuckles are shown in figure 4. the nut of the 'turnbuckle1' (figure 1a) is constituted by a bronze alloy. this was identified by the presence of the major elements: copper (kα-8.05 kev and kβ-8.90 kev), tin (kα-25.27 kev, kβ1-28.48 kev and kβ2-29.11 kev), zinc (kα-8.64 kev and kβ-9.57 kev), and lead. lead can be identified through several peaks in the spectrum: mα-2.34 kev, mγ-2.65 kev, lι-9.18 kev, lα-10.55 kev, lη-11.34 kev, lβ6-12.14 kev, lβ4-12.30 kev, lβ1-12.61 kev, lβ5-13.01 kev, lγ1-14.76 kev, lγ6-15.17 kev. other elements are present too, such as iron (kα-6.40 kev and kβ-7.06 kev), nickel (kα-7.48 kev), antimony (kα1-26.35 kev and kβ1-29.71 kev), arsenic (kα-10.54 kev and kβ-11.72 kev), manganese (kα-5.90 kev), calcium (kα-3.69 kev and kβ-4.01 kev), and potassium (kα-3.31 kev).
figure 3. comparison of two spectra acquired on different pulleys: on the left, the spectrum corresponding to 'pulley 6', where lead and arsenic are present; on the right, the spectrum acquired on one of the remaining pulleys, characterized by the presence of arsenic but not lead. only the energy interval from 10 kev to 16 kev is reported. the black line is the acquired spectrum, while the computed baseline is in green.
figure 4. xrf spectra collected on the two turnbuckles: nut of turnbuckle1 (a), one of the rods on turnbuckle1 (b), nut of turnbuckle2 (c). the black line is the acquired spectrum, while the computed baseline is in green.
additional peaks are present at higher energies; in particular, they can be identified as sum peaks, e.g., 16.06 kev (cu-kα + cu-kα), 16.97 kev (cu-kα + cu-kβ), and 18.61 kev (pb-lα + cu-kα). moreover, the material of the anode, rhodium, can be identified by the following peaks: kα-20.22 kev and kβ1-22.72 kev.
the broad peak at 19.06 kev can be attributed to the compton scattering of the rh characteristic photons. compared to the composition found for the pulley wheels (figure 2c), it is possible to notice a less intense peak of zinc and a higher relative concentration of antimony. it is not easy to compare these findings to previous literature, because most of the studies investigating the bronze composition of that period are related to statuary art. anyway, it is worth noticing that the presence of zinc, nickel, and iron in lead bronze was already reported in previous studies [23], [24]. moreover, the presence of antimony was found in different artifacts dating back to the renaissance period and also in the alloy used for the realization of the 'porta del paradiso' reliefs by lorenzo ghiberti, which were positioned in the east door of the baptistery of san giovanni battista, in front of the santa maria del fiore cathedral [25]-[27]. this last point is of particular importance because, like arsenic, these elements were not intentionally added by the founder, but are impurities due to the raw materials [26]. this can further confirm the dating of this turnbuckle to the brunelleschi era. apart from the nut, all other parts constituting the 'turnbuckle1' are instead made of an iron alloy containing copper, zinc, arsenic, and lead (see figure 4b). on the other hand, the 'turnbuckle2' (figure 1b) is entirely made of a similar iron alloy, with a higher content of manganese, zinc and arsenic (see figure 4c).
finally, the spectra acquired analysing the pincers and the lewises are reported in figure 5 and figure 6, respectively. they are both tools used to lift stones; the use of pincers is straightforward, while lewises were used by inserting the tool into a clearance realized on purpose in the stone. these objects were realized using an iron alloy, whose composition is mainly characterized by the presence of copper, zinc, nickel, and arsenic. most of the peaks previously described for figure 2a are present also in these spectra. the main difference between these last two objects is the presence of lead, which can be identified in the spectrum reported in figure 6 by the left shoulder on the sum peak of iron at 12.61 kev, while it is not present in the alloy constituting the pincers.

3.2. pca and spectra classification
after identifying the main alloys constituting the different tools, principal component analysis was used in order to discover possible similarities in the material compositions. finding analogies in the alloy composition of different objects cannot be considered univocal proof that two tools belong to the same historical period, but it can give additional useful indications to curators and scholars. three different processing runs were performed: in the first one, all pulleys were analysed in order to examine possible groups in this kind of object; in the second one, all spectra identified as bronzes were investigated; and in the third one, the spectra acquired on the two turnbuckles were taken into consideration.
pca was performed using the acquired xrf spectra as input data. when dealing with xrf measurements, two strategies are possible. the first one consists in the computation of the material composition after identification of the elements by means of their characteristic peaks; in this case, the weight percentage of each element is used as input data for the pca [28], [29].
the second approach, which is the one used in this study, directly uses the raw spectra as input data for the pca, without computing the elemental composition. the advantage of this second approach is that the result of the pca processing is not influenced by the element identification performed by the user, which could in some cases be questionable or at least not univocal [30], [31]. thus, using the xrf spectra for the computation of the principal components reduces possible sources of error related to spectra interpretation. at the same time, particular care should be taken to exclude from the processing those parts of the spectra that do not carry useful information, because they could bias the pca model without a significant compositional reason. because of this, the processed spectrum was limited to the range from 1 kev to 12.2 kev for iron alloys, and to the ranges from 1 kev to 15.5 kev and from 24.5 kev to 30 kev for bronzes. the regions of the spectrum not analysed by means of pca are characterized by the presence of sum peaks or of lines related to the anode material.

the results of the pca of the spectra acquired on the pulleys can be seen in figure 7. in this biplot it is not possible to highlight a well-defined clustering among the different objects, but a first interpretation can nevertheless be given. the spectra of 'pulley1' (one of the wooden pulleys) and 'pulley6' (one of the metallic pulleys) appear to be outliers, characterized by values of pc1 above 30. as can be seen from the loadings, this is due to a higher concentration of nickel, copper and lead (this last element was identified only in the 'pulley 6' spectra). all the other pulleys group in the left part of the graph where, even if it is not possible to draw confidence regions for two classes, it is possible to observe a separation related to the pc2 value. all wooden pulleys (as said before, the analysis was performed on the metallic frame) are in the upper part of the graph, characterized by higher values of arsenic and manganese. on the other hand, metallic pulleys are characterized by lower values of pc2, due to a higher concentration of zinc.

figure 5. xrf spectrum acquired on the pincers. the black line is the acquired spectrum, while the computed baseline is in green.

figure 6. xrf spectrum acquired on one of the three-legged lewises. the black line is the acquired spectrum, while the computed baseline is in green.

pca processing was then performed on the spectra acquired on the bronze components (see figure 8). as it is possible to observe, in this case a clear clustering was found, as highlighted by the confidence ellipses drawn according to the gaussian mixture model for the probability distribution. the spectra acquired on the wheels of the metallic pulleys are characterized by a higher amount of zinc, tin and nickel. on the contrary, the bronze used for the nut of 'turnbuckle1' has a higher concentration of lead, antimony and iron. this is an important finding because a drawing of a turnbuckle noticeably similar to 'turnbuckle1' is present in the 'taccuino senese' by giuliano da sangallo [22]. so, if we assume that the metallic pulleys presumably do not belong to the renaissance era, as can be argued considering their design, this clear difference in the bronze composition can be taken as a confirmation of the attribution of this turnbuckle to the brunelleschi era. a minimal sketch of the raw-spectra pca workflow used throughout this subsection is given below.
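as a purely illustrative aid, the following sketch shows how such a raw-spectra pca can be set up with scikit-learn, the library cited in [16]; the array names (`spectra`, `energy`) and the masking helper are assumptions for illustration, not code from the paper:

```python
# minimal sketch of pca on raw xrf spectra, assuming `spectra` is an
# (n_spectra, n_channels) array and `energy` the per-channel energy axis
# in kev; names are illustrative, scikit-learn [16] is the cited toolchain.
import numpy as np
from sklearn.decomposition import PCA

def pca_on_spectra(spectra, energy, keep_ranges, n_components=2):
    # keep only the energy intervals carrying compositional information,
    # e.g. [(1.0, 12.2)] for iron alloys, [(1.0, 15.5), (24.5, 30.0)] for
    # bronzes; the excluded regions contain sum peaks or anode lines
    mask = np.zeros(energy.shape, dtype=bool)
    for lo, hi in keep_ranges:
        mask |= (energy >= lo) & (energy <= hi)
    pca = PCA(n_components=n_components)          # data are mean-centered internally
    scores = pca.fit_transform(spectra[:, mask])  # points of the score plot
    loadings = pca.components_                    # channel contributions to each pc
    return scores, loadings, pca.explained_variance_ratio_

# e.g. for the bronze spectra:
# scores, loadings, evr = pca_on_spectra(spectra, energy, [(1.0, 15.5), (24.5, 30.0)])
```

the returned scores and loadings correspond to the biplots of figures 7 to 9, and the explained variance ratio to the percentages reported along the axes.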
finally, a last pca processing was performed on the iron alloy spectra collected on the two turnbuckles, in order to investigate possible similarities or differences among their constituent materials; the result is presented in figure 9. as can be seen, it is possible to draw two confidence regions. the first one, smaller, contains the spectra collected on 'turnbuckle1', except those acquired on the hook and on the arch where the two connecting rods are nailed. actually, these three spectra fall in the other confidence region, which includes all the points of analysis taken on 'turnbuckle2'. the alloy used for 'turnbuckle1' is richer in copper and manganese, while the other has a higher relative concentration of zinc, arsenic and nickel. this particular clustering can be explained considering that the investigated objects were tools used in everyday work, so it was not uncommon to perform repairs or to substitute broken parts. thus, it is possible to conclude that the two components (the hook and the arch) belonging to 'turnbuckle1' but falling in the 'turnbuckle2' confidence ellipse could have been substituted in a more recent period, using an alloy similar to the one constituting 'turnbuckle2'. analysing the remaining tools by means of pca, it was not possible to highlight any relevant clustering; the only observable grouping was related to spectra collected on the same object. so, no other analogies were found among the alloys.

4. conclusions

this study analysed some of the tools employed in the construction of the santa maria del fiore cupola. thanks to x-ray fluorescence measurements, it was possible to investigate, for the first time and with a totally non-invasive approach, the composition of the alloys constituting these objects, which have a primary importance both from the technical and the historical point of view. then, thanks to chemometric analysis, analogies and differences among the alloys were examined. it was possible to discriminate between the different iron alloys employed for the pulleys, which can be divided into two typologies belonging to different historical times. the use of pca also allowed highlighting the presence of two bronze alloys (one used for the threaded components and one for the wheels of the metallic pulleys) and of two iron alloys used for the turnbuckles. xrf analysis alone does not allow drawing univocal conclusions on the dating of these objects, as the technique is not specifically intended for this purpose. nevertheless, these findings, if supported also by historical sources and by the work of curators, can give new insights into the world of technology in the renaissance era.

acknowledgement

the authors would like to thank marcello del colle and samuele caciagli for the technical support during the measuring campaign.

figure 7. score and loading plot of the first two components (pc1-pc2) calculated from the xrf spectra acquired on the pulleys. pulleys labelled from 1 to 5 are 'wooden' pulleys (see figure 1e), while those from 6 to 8 are the 'metallic' ones (see figure 1d). the percent variance captured by each pc is reported in parentheses along each axis.

figure 8. score and loading plot of the first two components (pc1-pc2) calculated from the xrf spectra acquired on the bronze components. the percent variance captured by each pc is reported in parentheses along each axis.

figure 9. score and loading plot of the first two components (pc1-pc2) calculated from the xrf spectra acquired on the iron parts of the two turnbuckles. the percent variance captured by each pc is reported in parentheses along each axis.
references

[1] p. innocenzi, the innovators behind leonardo, the true story of the scientific and technological renaissance, springer international publishing ag, cham, switzerland, 2019.
[2] m. kozak-holland, c. procter, florence duomo project (1420-1436): learning best project management practice from history, international journal of project management, 32, 2014, pp. 242-255. doi: 10.1016/j.ijproman.2013.05.003
[3] p. sanpaolesi, la cupola di santa maria del fiore, il progetto, la costruzione, 2nd edition, editrice edam, firenze, italy, 1977.
[4] m. haines, myth and management in the construction of brunelleschi's cupola, i tatti studies in the italian renaissance, 14/15, 2011, pp. 47-101.
[5] h. saalman, filippo brunelleschi: the cupola of santa maria del fiore, rizzoli intl pubns, 1980.
[6] l. ippolito, c. peroni, la cupola di santa maria del fiore, florence: la nuova italia scientifica, 1997.
[7] r. king, brunelleschi's cupola, bloomsbury, usa, 2000.
[8] g. fanelli, m. fanelli, brunelleschi's cupola: past and present of an architectural masterpiece, mandragora, 2004.
[9] l. reti, tracce dei progetti perduti di filippo brunelleschi nel codice atlantico di leonardo da vinci, vol. 4 from lettura vinciana, barbera editore, vinci, italy, 1964.
[10] l. iannucci, j. f. ríos-rojas, e. angelini, m. parvis, s. grassini, electrochemical characterization of innovative hybrid coatings for metallic artefacts, epj plus, 133 (2018), pp. 1-7. doi: 10.1140/epjp/i2018-12368-3
[11] l. es sebar, l. iannucci, y. goren, p. fabian, e. angelini, s. grassini, raman investigation of corrosion products on roman copper-based artefacts, acta imeko 10 (2021) 1, pp. 15-22. doi: 10.21014/acta_imeko.v10i1.805
[12] l. es sebar, e. angelini, s. grassini, m. parvis, l. lombardo, a trustable 3d photogrammetry approach for cultural heritage, i2mtc 2020 international instrumentation and measurement technology conference, proceedings, 2020, art. no. 9129480.
[13] l. es sebar, l. iannucci, s. grassini, e. angelini, m. parvis, a. bernardoni, a. neuwahl, r. filardi, santa maria del fiore cupola construction tools: a non-invasive characterization using portable xrf, 2020 imeko tc4 international conference on metrology for archaeology and cultural heritage, pp. 517-521. online [accessed 18 march 2022] https://www.imeko.org/publications/tc4-archaeo-2020/imeko-tc4-metroarchaeo2020-098.pdf
[14] a. bezur, l. lee, m. loubser, k. trentelman, handheld xrf in cultural heritage: a practical workbook for conservators, getty conservation institute, 2020, isbn 978-1-937433-61-1.
[15] l. iannucci, chemometrics for data interpretation: application of principal components analysis (pca) to multivariate spectroscopic measurements, ieee instrumentation and measurement magazine, 24, 4 (2021). doi: 10.1109/mim.2021.9448250
[16] f. pedregosa, g. varoquaux, a. gramfort, v. michel, b. thirion, o. grisel, m. blondel, p. prettenhofer, r. weiss, v. dubourg, j. vanderplas, a. passos, d. cournapeau, m. brucher, m. perrot, é. duchesnay, scikit-learn: machine learning in python, journal of machine learning research, 12 (2011), pp. 2825-2830. online [accessed 14 march 2022] https://jmlr.csail.mit.edu/papers/volume12/pedregosa11a/pedregosa11a.pdf
[17] a. savitzky, m. j. e. golay, smoothing and differentiation of data by simplified least squares procedures, analytical chemistry, 36 (1964), pp. 1627-1639.
doi: 10.1021/ac60214a047
[18] r. j. barnes, m. s. dhanoa, s. j. lister, standard normal variate transformation and de-trending of near-infrared diffuse reflectance spectra, applied spectroscopy, 43 (1989), pp. 772-777. doi: 10.1366/0003702894202201
[19] r. yılmaz, kβ/kα x-ray intensity ratios for some elements in the atomic number range 28≤z≤39 at 16.896 kev, journal of radiation research and applied sciences, 10, 3 (2017), pp. 172-177. doi: 10.1016/j.jrras.2017.04.003
[20] r. f. tylecote, the composition of metal artifacts: a guide to provenance, antiquity, xliv (1970). doi: 10.1017/s0003598x00040941
[21] y.-z. zhu, z. zhu, j.-p. xu, grain boundary segregation of minor arsenic and nitrogen at elevated temperatures in a microalloyed steel, international journal of minerals, metallurgy and materials 19-5, (2012), pp. 399-403. doi: 10.1007/s12613-012-0570-x
[22] g. da sangallo, taccuino senese. online [accessed 24 february 2021] https://archive.org/details/gri_33125001082557
[23] f. g. bewer, studying the technology of renaissance bronzes, mrs online proceedings library (opl), volume 352: symposium – materials issues in art and archaeology iv (1995), 701. doi: 10.1557/proc-352-701
[24] l. bonizzoni, a. galli, g. poldi, in situ edxrf analyses on renaissance plaquettes and indoor bronzes: patina problems and provenance clues, x-ray spectrom., 37 (2008), pp. 388-394. doi: 10.1002/xrs.1015
[25] r. van langh, j. james, g. burca, w. kockelmann, s. y. zhang, e. lehmann, m. estermanne, a. pappot, new insights into alloy compositions: studying renaissance bronze statuettes by combined neutron imaging and neutron diffraction techniques, j. anal. at. spectrom., 36 (2011), pp. 949-958. doi: 10.1039/c0ja00243g
[26] m. ferretti, s. siano, the gilded bronze panels of the porta del paradiso by lorenzo ghiberti: non-destructive analyses using x-ray fluorescence, appl. phys. a, 90 (2008), pp. 97-100. doi: 10.1007/s00339-007-4231-2
[27] a. bernardoni, de re metallica, in il contributo italiano alla storia del pensiero – tecnica, pp. 47-49, enciclopedia italiana di scienze, lettere ed arti, roma: istituto della enciclopedia italiana fondata da giovanni treccani, 2013.
[28] l. es sebar, l. iannucci, c. gori, a. re, m. parvis, e. angelini, s. grassini, in-situ multi-analytical study of ongoing corrosion processes on bronze artworks exposed outdoors, acta imeko 10 (2021) 1, pp. 241-249. doi: 10.21014/acta_imeko.v10i1.894
[29] i. marcaida, m. maguregui, h. morillas, n. prieto-taboada, m. veneranda, s. fdez-ortiz de vallejuelo, a. martellone, b. de nigris, m. osanna, j. m. madariaga, in situ non-invasive multianalytical methodology to characterize mosaic tesserae from the house of gilded cupids, pompeii, heritage science, 7:13, 2019. doi: 10.1186/s40494-019-0246-1
[30] i. allegretta, b. marangoni, p. manzari, c. porfido, r. terzano, o. de pascale, g. s. senesi, macro-classification of meteorites by portable energy dispersive x-ray fluorescence spectroscopy (pedxrf), principal component analysis (pca) and machine learning algorithms, talanta, 212, (2020), pp. 1-9. doi: 10.1016/j.talanta.2020.120785
[31] v. panchuk, i. yaroshenko, a. legin, v. semenov, d. kirsanov, application of chemometric methods to xrf-data – a tutorial review, analytica chimica acta, 1040 (2018), pp. 19-32.
doi: 10.1016/j.aca.2018.05.023

introductory notes for the acta imeko second issue 2022

acta imeko, issn: 2221-870x, june 2022, volume 11, number 2, 1-3

francesco lamonaca1
1 department of computer science, modeling, electronics and systems engineering (dimes), university of calabria, ponte p. bucci, 87036, arcavacata di rende, italy

section: editorial
citation: francesco lamonaca, introductory notes for the acta imeko second issue 2022, acta imeko, vol. 11, no. 2, article 2, june 2022, identifier: imeko-acta-11 (2022)-02-02
received june 30, 2022; in final form june 30, 2022; published june 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: francesco lamonaca, e-mail: editorinchief.actaimeko@hunmeko.org

dear readers,
the second issue 2022 of acta imeko collects contributions that do not relate to a specific event. as editor-in-chief, it is my pleasure to give readers an overview of these papers, with the aim of encouraging potential authors to consider sharing their research through acta imeko.

modern applications in virtual reality require that the user can experience the environment as if it were real. in applications that deal with real scenarios, it is important to acquire both their three-dimensional (3d) structure and their details, to enable users to achieve a good immersive experience. in the paper entitled "omnidirectional camera pose estimation and projective texture mapping for photorealistic 3d virtual reality experiences", by a. luchetti et al., the authors propose a method to obtain a mesh with a high-quality texture by combining a raw 3d mesh model of the environment with 360° images. the main outcome is a mesh with a high level of photorealistic detail.

the paper entitled "the importance of physiological data variability in wearable devices for digital health applications", by g. cosoli et al., aims at characterizing the variability of physiological data collected through a wearable device (empatica e4), given that both intra- and inter-subject variability play a pivotal role in digital health applications, where artificial intelligence (ai) techniques have become popular.
inter-beat intervals (ibis), electrodermal activity (eda), and skin temperature (skt) signals have been considered, and their variability has been evaluated in terms of general statistics (mean and standard deviation) and of the coefficient of variation. the results show that both intra- and inter-subject variability values are significant, especially when considering the parameters describing how the signals vary over time. moreover, eda seems to be the signal characterized by the highest variability, followed by ibis, contrary to skt, which results more stable. the short sketch below illustrates the variability metrics involved.
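for readers less familiar with these statistics, the following minimal sketch, not taken from the paper, shows how the mean, the sample standard deviation and the coefficient of variation can be computed for a physiological time series; the signal values are illustrative:

```python
# minimal sketch (not from the paper): mean, standard deviation and
# coefficient of variation for a physiological time series; data are illustrative.
import numpy as np

ibi = np.array([0.81, 0.79, 0.84, 0.78, 0.82, 0.80])  # inter-beat intervals in s

mean = ibi.mean()
std = ibi.std(ddof=1)      # sample standard deviation
cv = std / mean * 100      # coefficient of variation in percent

print(f"mean = {mean:.3f} s, std = {std:.3f} s, cv = {cv:.1f} %")
```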
s. avdiaj et al., in the paper "measurements of virial coefficients of helium, argon and nitrogen for the needs of static expansion method", study the influence of virial coefficients on the realization of primary standards in vacuum metrology, especially in the realization of the static expansion method. in the paper they present the measured data for the virial coefficients of three gases, namely helium, argon, and nitrogen, measured at room temperature and in a pressure range from 3 kpa to 130 kpa.

in the new optical pressure standard, ultra-low expansion (ule) glass cavities were proposed to measure helium refractivity for a new realisation of the unit of pressure, the pascal. however, it was noticed that the use of this type of material causes some difficulties; one of the main problems of ule glass is the pumping effect for helium. therefore, instead of ule, zerodur glass was proposed as a material for the cavity. this proposal was put forward by the vacuum metrology team of the physikalisch-technische bundesanstalt (ptb) in the quantumpascal project. in order to calculate the flow of helium gas through zerodur glass, one has to know the permeation constant k. in the paper "measurements of helium permeation in zerodur glass used for the realisation of quantum pascal", a. kurtishaj et al. measured the permeation of helium gas in zerodur in the temperature range from 80 °c to 120 °c. the experimental results assess that zerodur has the potential to be used as cavity material for the new quantum standard of pressure.

s. ondera et al., in the paper entitled "dose reduction potential in dual-energy subtraction chest radiography based on the relationship between spatial-resolution property and segmentation accuracy of the tumor area", investigated the relationship between the spatial-resolution property of soft-tissue images and the lesion detection ability using u-net. the aim of the paper is to explore the possibility of dose reduction during energy-subtraction chest radiography.

an informed type a evaluation of standard uncertainty is derived in the paper entitled "an informed type a evaluation of standard uncertainty valid for any sample size greater than or equal to 1", authored by c. carobbi, based on bayesian analysis. the result is mathematically simple, easily interpretable, and applicable both in the theoretical framework of the guide to the expression of uncertainty in measurement (propagation of standard uncertainties) and in that of supplement 1 of the guide (propagation of distributions), valid for any size greater than or equal to 1 of the sample of present observations.

g. campobello et al., in the paper "on the trade-off between compression efficiency and distortion of a new compression algorithm for multichannel eeg signals based on singular value decomposition", investigate the trade-off between the compression ratio and the distortion of a recently published compression technique specifically devised for multichannel electroencephalograph (eeg) signals. this paper extends a previous one in which the authors proved that, when singular value decomposition (svd) is already performed for denoising or removing unwanted artifacts, it is possible to exploit the same svd for compression purposes, achieving a compression ratio in the order of 10 and a percentage root mean square distortion in the order of 0.01 %. in this article, the authors successfully demonstrate how, with a negligible increase in the computational cost of the algorithm, it is possible to further improve the compression ratio by about 10 % while maintaining the same distortion level or, alternatively, to improve the compression ratio by about 50 % while still keeping the distortion level below 0.1 %. a minimal sketch of the general svd-based compression idea is given below.
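the following sketch illustrates the general idea of truncated-svd compression together with the two figures of merit discussed above (compression ratio and percentage root-mean-square distortion); it is not the authors' algorithm, and the random input matrix only makes the sketch runnable (real eeg records are far more compressible):

```python
# minimal sketch (not the authors' algorithm): generic truncated-svd
# compression of a multichannel signal matrix, with compression ratio and
# percentage root-mean-square distortion (prd) as figures of merit.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 1000))      # 32 channels x 1000 samples (illustrative)

u, s, vt = np.linalg.svd(x, full_matrices=False)

k = 8                                    # number of singular values retained
x_hat = u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]

m, n = x.shape
cr = (m * n) / (k * (m + n + 1))         # stored: k columns of u and v, k singular values
prd = 100 * np.linalg.norm(x - x_hat) / np.linalg.norm(x)

print(f"compression ratio ~ {cr:.1f}, prd ~ {prd:.2f} %")
```

the trade-off studied by the authors amounts to choosing k (and refining how the retained factors are encoded) so as to push the compression ratio up while keeping the prd below a prescribed threshold.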
in the paper entitled "a strategy to control industrial plants in the spirit of industry 4.0 tested on a fluidic system", l. fabiano et al. propose a strategy for automating the control of a wide spectrum of industrial process plants in the spirit of industry 4.0. the strategy is based on the creation of a virtual simulator of the operation of the plants involved in the process. through the digitization of the operational data sheets of the various components, the simulator can provide the reference values of the process control parameters to be compared with their actual values, in order to decide on the direct inspection and/or the operational intervention on critical components before a possible failure.

n. covre et al., in "monte carlo-based 3d surface point cloud volume estimation by exploding local cubes faces", propose a state-of-the-art algorithm for estimating the 3d volume enclosed in a surface point cloud via a modified extension of the monte carlo integration approach. the algorithm consists of a pre-processing of the surface point cloud, a sequential generation of points managed by an affiliation criterion, and the final computation of the volume. the pre-processing phase allows a spatial re-orientation of the original point cloud, the evaluation of the homogeneity of its point distribution, and its enclosure inside a rectangular parallelepiped of known volume. the affiliation criterion using the explosion of cube faces is the core of the algorithm: it handles the sequential generation of points and provides an effective extension of the traditional monte carlo method by introducing its applicability to discrete domains.

in the paper "3d shape measurement techniques for human body reconstruction", i. xhimitiku et al. investigate and compare the performances of three different techniques for 3d scanning. in particular, two commercial tools (a smartphone camera and the ipad pro lidar) and a structured light scanner (go!scan 50) have been used for the analysis. first, two different subjects have been scanned with the three techniques, and the obtained 3d models were analysed in order to evaluate the respective reconstruction accuracy. a case study involving a child was then considered, with the main aim of providing useful information on the performances of scanning techniques for clinical applications, where boundary conditions are often challenging (e.g. a non-collaborative patient). finally, a full procedure for the 3d reconstruction of a human shape is proposed, in order to set up a helpful workflow for clinical applications.

high-resolution x-ray computed micro-tomography (ct) is a powerful technique for studying the processes of crack propagation in non-homogeneous quasi-brittle materials such as rocks. to obtain all the significant information about the deformation behaviour and fracture characteristics of the studied rocks, the use of a highly specialised loading device suitable for integration into existing tomographic setups is crucial. since no adequate commercial solution is currently available, a completely newly designed loading device with a four-point bending setup and vertically oriented scanned samples is proposed and used in the paper "study of fracture processes in sandstone subjected to four-point bending by means of 4d x-ray computed micro-tomography", authored by l. vavro et al. this design of the loading procedure, coupled with the high stiffness of the loading frame, allows the loading process to be interrupted at any time and ct scanning to be performed without the risk of the sudden destruction of the scanned sample.

m. s. latha gade et al., in the paper "a cost-efficient reversible logic gates implementation based on measurable quantum-dot cellular automata", describe experimental and analytic approaches for measuring the design metrics of reversible logic gates using quantum-dot cellular automata (qca), such as ancilla inputs, garbage outputs, quantum cost, cell count, and area, while accounting for the effects of energy dissipation and circuit complexity. the parameters of reversible gates with modified structures are measured and then compared with the existing designs.

human facial expressions are thought to be important in interpreting one's emotions, and emotion recognition plays a very important part in the more exact inspection of human feelings and interior thoughts. over the last several years, emotion identification using pictures, videos, or voice as input has been a popular research topic, and most recent emotion recognition research focuses on the extraction of representative modality characteristics and the definition of dynamic interactions between multiple modalities. deep learning methods have opened the way for the development of artificial intelligence products. the aim of the research study proposed by k. pranathi et al. in the paper "video-based emotion sensing and recognition using convolutional neural network based kinetic gas molecule optimization" is to create a real-time emotion detection application by utilizing an improved cnn, where kinetic gas molecule optimization is used to optimize the fine-tuning and the weights of the cnn. this research offers information on identifying emotions in videos using deep learning techniques.

in "development of a contactless operation system for radiographic consoles using an eye tracker for severe acute respiratory syndrome coronavirus 2 infection control: a feasibility study", m. sato et al. propose a noncontact operation system for radiographic consoles based on a common eye tracker, facilitating the noncontact operation of radiographic consoles for patients with covid-19 and reducing the need for frequent disinfection. experimental tests show that the proposal can be applied even if the operator uses a face shield. thus, its application could be important in preventing the transmission of infections.
j. y. blaise et al., in "acquisition and integration of spatial and acoustic features: a workflow tailored to small-scale heritage architecture", report on an interdisciplinary data acquisition and processing chain, the novelty of which is primarily to be found in a close integration of acoustic and spatial data. the paper provides a detailed description of the technological and methodological choices that were made in order to adapt to the particularities of the corpus studied (interiors of small-scale rural architectural artefacts). the research outputs pave the way for proportion-as-ratios analyses, as well as for the study of perceptual aspects from an acoustic point of view. ultimately, "perceptual" acoustic data characterised by acoustic descriptors will be related to "objective" spatial data such as architectural metrics.

multiplication has a substantial impact on metrics such as speed, circuit size and power dissipation. a modified approximate absolute unit is proposed by y. nagaratnam et al. in the paper "a modified truncation and rounding-based scalable approximate multiplier with minimum error measurement" to enhance the performance of existing approximate multipliers. the proposed multiplier can be applied in image processing and shows an error of 0.01 %, while current solutions show a typical error of 0.40 %. also in the field of approximate multipliers, in "low-power and high-speed approximate multiplier using higher order compressors for measurement systems", m. v. s. ram prasad et al. propose an innovative architecture that, in the implementation of a fir filter, achieves a delay of 27 ns versus the 119 ns achieved by the exact multiplier taken as reference. the sketch below illustrates the basic truncation-and-rounding idea behind this class of multipliers.
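as a purely illustrative aid, not the circuit-level designs of the cited papers, the following sketch shows the truncation-and-rounding idea in software, together with the resulting relative error; the operand values and the number of truncated bits are assumptions:

```python
# minimal sketch (not the cited designs): a generic truncation-and-rounding
# approximate multiplier and its relative error; t is the number of
# least-significant bits dropped from each operand (illustrative).
def approx_mul(a: int, b: int, t: int = 4) -> int:
    # round each operand to t fewer bits, multiply the shortened operands,
    # then rescale by the dropped weight
    a_r = (a + (1 << (t - 1))) >> t
    b_r = (b + (1 << (t - 1))) >> t
    return (a_r * b_r) << (2 * t)

exact = 12345 * 6789
approx = approx_mul(12345, 6789)
err = abs(exact - approx) / exact * 100
print(f"exact = {exact}, approximate = {approx}, error = {err:.3f} %")
```

in hardware, shortening the operands shrinks the partial-product array, which is what buys the reductions in delay, area and power that both papers quantify.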
finally, the technical note authored by franco pavese is a comment which addresses the paper "is our understanding of measurement evolving?", published in this journal and authored by luca mari. this technical note concerns specific parts of that paper, namely the statements: "doubt: isn't metrology a 'real' science? … metrology is a social body of knowledge", "measurements are aimed at attributing values to properties: since values are information entities, any measurement must then include an informational component" and "what sufficient conditions characterise measurement as a specific kind of property evaluation?", and discusses alternatives.

also in this issue, high-quality and heterogeneous papers are presented, confirming acta imeko as the natural platform for disseminating measurement information and stimulating collaboration among researchers from many different fields. in particular, the technical note shows how acta imeko is the right place where different opinions and points of view can meet and compare, stimulating a fruitful and constructive debate in the scientific community of measurement science. i hope you will enjoy your reading.

francesco lamonaca, editor in chief

introductory notes for the acta imeko first issue 2022

acta imeko, issn: 2221-870x, march 2022, volume 11, number 1, 1

francesco lamonaca1
1 department of computer science, modeling, electronics and systems engineering (dimes), university of calabria, ponte p. bucci, 87036, arcavacata di rende, italy

section: editorial
citation: francesco lamonaca, introductory notes for the acta imeko first issue 2022, acta imeko, vol. 11, no. 1, article 1, march 2022, identifier: imeko-acta-11 (2022)-01-01
received march 30, 2022; in final form march 30, 2022; published march 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: francesco lamonaca, e-mail: editorinchief.actaimeko@hunmeko.org

dear readers,
the first issue of a new year is the time for taking stock. in the last year acta imeko has made great strides towards speeding up the publication time and attracting high-value papers. these improvements were also confirmed by the fact that acta imeko is now indexed by the directory of open access journals (doaj). doaj is one of the most important community-driven, open access services in the world and has a reputation for advocating best practices and standards in open access. it indexes and provides access to high-quality, open access, peer-reviewed journals only. doaj's basic criteria for inclusion have become the accepted way of measuring an open access journal's adherence to standards in scholarly publishing, especially as concerns ethical and quality standards. this was achieved thanks to the valuable work of the editorial board, the section editors, the reviewers and, last but not least, all the authors who have selected acta imeko for sharing their research with the scientific community.

this issue includes the special issue on the 'imeko tc4 international conference on metrology for archaeology and cultural heritage', guest edited by fabio santaniello, michele fedel and annaluisa pedrotti, and the special issue 'innovative signal processing and communication techniques for measurement and sensing systems', guest edited by zia ur rahman. the first special issue collects papers on a hot topic: archaeology and cultural heritage studied from the metrological point of view. the fact that the topic is hot is evident: although held in the middle of the pandemic, the conference was a remarkable success, with 158 initial submissions, 126 accepted papers, 431 authors from 19 countries, 4 invited plenary speakers, 13 special sessions, 3 tutorial sessions and 11 patronages. the presented papers have highlighted the natural need for an exchange of knowledge and expertise between the 'human sciences' and the 'hard sciences'. this cross-fertilization is also evident in the extended papers published in this issue. the special issue on 'innovative signal processing and communication techniques for measurement and sensing systems' consists of fifteen papers identifying new perspectives and highlighting potential research issues and challenges in the context of measurement and sensing. specifically, this special issue demonstrates how emerging technologies could be used in future smart sensing systems. the topics are heterogeneous and include measurements for and by antennas, artificial intelligence, beam forming techniques, body area networks, embedded processors, image sensors and processing, internet of things, knowledge-based systems, machine learning algorithms, medical signal analysis, sensor data processing, vlsi architectures and many more.
many novelties are foreseen in this new year; for example, the articles of the next issue will be published online starting from the end of the month, and the issue will be closed at the end of june, as planned. this new publication policy will strongly reduce the publication time of the submitted papers: they will be available online and indexed by scopus as soon as they are ready. in order to keep the articles of the different 'special issues' close together in the table of contents, we will not use consecutive page numbering throughout the issue, beginning with this volume 11; the page numbers of every single article start with one and end with the number of the article's pages. we hope that you will enjoy your reading and that you will confirm acta imeko as your main source for finding new solutions and ideas and a valuable resource for spreading your results.

francesco lamonaca, editor in chief

experimental study on sar reduction from cell phones

acta imeko, issn: 2221-870x, june 2021, volume 10, number 2, 147-152

marius-vasile ursachianu1, ovidiu bejenaru1, catalin lazarescu1, alexandru salceanu2
1 romanian "national authority for management and regulation in communications" (ancom), romania
2 "gheorghe asachi" technical university of iasi, romania

section: research paper
keywords: absorbed incident energy; human exposure measurement; near field exposure; cst simulation; electromagnetic field dosimetry; sr en 62209-1
citation: marius-vasile ursachianu, ovidiu bejenaru, catalin lazarescu, alexandru salceanu, experimental study on sar reduction from cell phones, acta imeko, vol. 10, no. 2, article 21, june 2021, identifier: imeko-acta-10 (2021)-02-21
section editor: ciro spataro, university of palermo, italy
received january 24, 2021; in final form april 29, 2021; published june 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: alexandru salceanu, e-mail: asalcean@tuiasi.ro

abstract: the problem of human exposure to different types of electromagnetic field sources is a challenging one and should be considered a topical issue due to one major current trend: the increasingly intensive penetration of various wireless communication technologies into virtually all places and times that make up our daily lives. this paper presents an experimental study focused on measurements for three types of mobile phones, belonging to different generations, operating at the two characteristic frequencies of the gsm bands. we have used a satimo-comosar evaluation dosimetry system, provided by the liceter laboratory of ancom romania. the determined values of the incident energy absorbed by the tissue of a mannequin (phantom) model of the human head, which is part of the dosimetry system, have also been compared with those obtained when the mobile phones are protected by multilayer cases, aiming to study their possible limiting effect. the influence that the "touch" or "tilt" position might have on the incident energy values absorbed by the human head model has also been investigated. the comparative processing of the obtained results allowed the formulation of recommendations on reducing the exposure to the electromagnetic radiation associated with the use of the mobile phone. this paper is an extended and improved version of the original contribution to the imeko tc 4 2020 virtual conference.

1. introduction

the possible effects of ambient electromagnetic fields on human beings are general sources of concern and legitimate questions. for this reason, scientific research in the field has been strongly supported by diverse bodies and organizations, national and international, materialized through the adoption of recommendations, the setting of limits and the development of guidelines. this multitude and diversity provided results which, although fundamentally convergent in nature, were quite different in form. this is the main motivation for the current trend towards harmonization, providing benefits for authorities, industry and clients. essentially, the study of human exposure has two important parts: first, setting limits, and second, establishing the correct measurement procedure to verify compliance with the previously accepted limits. for the first part, based on extremely in-depth interdisciplinary research, most countries around the world have given a positive response to the rational scientific basis underpinning the icnirp limits. these reference levels have also been assumed by the international telecommunication union (the united nations specialized agency) and the world health organization.
these limits are approximately in compliance with the north american fcc guidelines developed by the federal communications commission, office of engineering & technology. in terms of measurement techniques and practice there is even greater diversity. for example, the assessment of the specific absorption rate of human exposure to radio frequency fields from hand-held wireless communication devices is an important concern of the profile organizations, especially since the approaches are permanently evolving. for example, the well-known ieee standard 1528, first issued in 2003, has been significantly updated and completed in 2013 and 2020, respectively. its quasi-universal character has been enhanced through the adoption, by the international electrotechnical commission, of the two measurement standards in place, iec 62209 (part 1, also applied here, and part 2). concerning the penetration of electromagnetic radiation into the human body at different radio frequencies, the icnirp limits for the specific absorption rate (sar) have been assumed: 0.08 w/kg on average for the whole human body, and 2 w/kg for sar located in the head or trunk area (general public exposure). these sar values are averaged over a 6-minute exposure time and 10 g of tissue [1], [2], [3], [4]. the main objective of this paper is to determine and compare the sar values for three mobile phones of different generations, using a satimo-comosar system and considering different exposure scenarios. two of these phones are old models, released on the market in 2009 and 2012, respectively. the third is a newer model, released on the market in 2018. we have been interested in observing whether the generation of the mobile phone (design, position of the antenna, housing materials) has a significant impact on the sar values (determined for different positions of the mobile phone relative to the head phantom, sam).
the specific anthropomorphic mannequin (sam) has been designed to provide a conservative estimation of the actual peak spatial specific absorption rate (sar) of the electromagnetic field radiated by mobile phones [5]. complementarily, for the new smartphone released in 2018, the effect that the "touch" or "tilt" position might have on the sar values has also been studied. the determined sar values have also been compared with those obtained when the mobile phones are protected by multilayer cases, aiming to study the possible limiting impact of these cases. the shape of the sam physical model has been derived, in a percentage of 90 %, from the shape of an adult male head; its dimensions have been reported in [6]. the shape of the ears has been adapted to represent the flattened ears of a phone user. various studies propose different kinds of shields for mobile phones to reduce the amount of power absorbed in the head, aiming to minimize the health effects [7], [8]. moreover, research focused on sar reduction related to the type of antenna (pifa or helical), placed at the top or at the bottom of the device, has also been carried out [9], [10], [11]. finally, the sar values directly determined by us using the calibrated satimo-comosar dosimetry system for different exposure scenarios have been compared with each other and related to the limits accepted by the standards. the studies presented here could lead to the development (also including the direct involvement of human health specialists and bodies) of a series of recommendations and informal guidelines for the effective reduction of human exposure to the electromagnetic fields generated by wireless communication systems.

2. material and methods

the dosimetric quantity used for the evaluation of the incident energy absorbed by the tissue is the specific absorption rate (sar). this parameter has been introduced for measuring the rate of energy absorbed by the human body when it is exposed to a radio frequency electromagnetic field, the formula being given by equation (1):

$$\mathrm{SAR}_l = \frac{\sigma}{2\rho_m}\left|\vec{E}_i\right|^2 = \frac{\omega\,\varepsilon_0\,\varepsilon_r''}{2\rho_m}\left|\vec{E}_i\right|^2 \quad (1)$$

where ρm is the material density (in an elementary volume), σ is the electric conductivity, εr'' represents the imaginary part of the relative electric permittivity (a frequency-dependent function), and sarl represents a local quantity [12].
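as a quick numerical illustration of equation (1), not a measurement reported in the paper, the local sar can be evaluated using the conductivity of the head-simulating liquid listed later in section 3 for 897.59 mhz; the tissue density and the internal field amplitude are assumed values:

```python
# illustrative evaluation of equation (1); sigma is the liquid conductivity
# listed in section 3 for 897.59 mhz, while rho and e_peak are assumed values.
sigma = 0.967        # s/m, head-simulating liquid at 897.59 mhz
rho = 1000.0         # kg/m^3, assumed tissue density
e_peak = 30.0        # v/m, assumed peak internal electric field

sar_local = sigma * e_peak**2 / (2 * rho)
print(f"local sar = {sar_local:.3f} w/kg")   # ~0.435 w/kg
```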
2.1. the satimo-comosar system for sar evaluation

the dosimetry evaluation system used for our measurements can determine the distribution of sar inside a human head phantom, the so-called sam (specific anthropomorphic mannequin). the phantom used [13] is in accordance with the american and european standards [14], [15]. the dosimetry assessment can be done both when the mobile phone is positioned by the right ear and when it is positioned by the left ear. the main components of the used system, figure 1, are: the kuka kr5 robot with its specific controller kuka krc2sr [16] and the electric field probe (calibrated before the measurement process), the sam twin phantom, the liquids that simulate the human tissue at the specific frequencies, a clamping device for the mobile phone under test, a signal generator rohde & schwarz cmu 200 (a gsm base-station simulator that can control the output power and frequency) and a desktop pc running the opensar software [17]. the sam phantom structure is made of a low-loss, low-permittivity material, embedded in wood. the electric field probe (immersed in the liquid simulating the dielectric properties of the head at the different frequencies) is of the triple-dipole type (model ep96 – satimo) and provides an omnidirectional response. the clamping device holding the mobile phone is also made of a low-permittivity, low-loss material, so as not to influence the measured sar values. the clamping device can be moved along the three orthogonal axes ox, oy, oz and can be rotated around the phantom ear for precise positioning of the phone. the opensar software controls all robot movements and determines the local sar values; as a post-processing application, it calculates the sar values averaged over 10 g or 1 g of tissue.

figure 1. the satimo-comosar system used for sar measurements: (a) comosar test bench and kuka robot, (b) signal generator rohde & schwarz cmu 200, (c) computing unit with the opensar software installed for testing system control, (d) the fastening system used to secure the measuring equipment.

2.2. the sar measuring procedure

all the phases of the sar measurement procedure are widely depicted in sr en 62209-1: 2007 [18]. the sar evaluation for the different frequencies has been done for each radio channel: low, middle and high, respectively. two positions (illustrated in figure 2) have been considered: the normal position (when the phone is in the cheek plane, also called the "touch" or "cheek" position) and the tilt position (when the phone is tilted by 15 degrees with respect to the cheek plane). only one measurement location (of the sam phantom) has been selected: the right ear. the sar evaluation has been done for two frequencies (gsm-900 and gsm-1800 bands), 897 mhz and 1747 mhz, respectively. the mobile phone has been used with its internal transmitter, antenna, battery and all the accessories supplied by the manufacturer. it is important that the battery is fully charged before each test, for every exposure scenario taken into consideration. complementarily, figure 3 presents the front and the back side of the huawei p20 pro mobile phone, inserted in a multilayer protective case made of a hard plastic material. for every position of the mobile phone tested for sar evaluation, the following conditions should be fulfilled: existence of a permanent radio connection, at maximum power, between the base station simulator and the mobile phone; the sar measurement should be done in a grid of equally spaced points on a surface located at a constant distance from the inner surface of the phantom; the sar measurement should be done in equidistant points, in a cube located around the place where the maximum value of the field has been determined (by the probe scanning inside the phantom); the measured sar should be calculated as the average value over 1 g and over 10 g; any other perturbation sources must be avoided inside the test room or in its immediate vicinity. figure 4 shows a human user holding the huawei p20 pro mobile phone with the protective case in the cheek (touch) position and in the tilt position, respectively, the two selected exposure situations.

3. results and discussions

the operating procedure used during these measurements was the following: a gsm communication link has been established between the mobile phone under test and the base station simulator cmu200 for measuring the specific absorption rate (sar).
the gsm 900 experimental conditions (for cheek or tilt positions, right or left) include: phantom – right head and left head; signal – tdma (crest factor: 8.0); channel – middle; frequency – 897.59 mhz (uplink); relative permittivity (real part) – 41.5; relative permittivity (imaginary part) – 19.4; conductivity – 0.967 s/m.

the gsm 1800 experimental conditions (for cheek or tilt positions, right or left) include: phantom – right head and left head; signal – tdma (crest factor: 8.0); channel – middle; frequency – 1747.4 mhz (uplink); relative permittivity (real part) – 40.102; relative permittivity (imaginary part) – 14.096; conductivity – 1.368 s/m.

the listed conductivities are consistent with the imaginary part of the relative permittivity through the relation σ = ω ε0 εr'' appearing in equation (1), as the quick check below shows.
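this check is not part of the paper; the computation only uses the values listed above:

```python
# quick consistency check (illustrative): the listed conductivities should
# follow from sigma = omega * eps0 * eps_r'' (see equation (1)).
import math

eps0 = 8.854e-12  # f/m, vacuum permittivity

for f_hz, eps_im in [(897.59e6, 19.4), (1747.4e6, 14.096)]:
    sigma = 2 * math.pi * f_hz * eps0 * eps_im
    print(f"f = {f_hz/1e6:.2f} mhz -> sigma = {sigma:.3f} s/m")
# prints ~0.969 s/m and ~1.370 s/m, in close agreement with the listed
# values of 0.967 s/m and 1.368 s/m
```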
the sar values for the huawei p20 pro mobile phone subjected to dosimetric evaluation are shown in table 1 for different positions (left and right side of the sam phantom). the sar values for the samsung gt-s6102 mobile phone subjected to dosimetric evaluation are shown in table 2 for the same placements of the phone: left and right side of the sam phantom. the corresponding sar values for the nokia 2330c-2 mobile phone are synthesized in table 3. a comparison of the sar values when the huawei p20 pro, samsung gt-s6102 and nokia 2330c-2 mobile phones are positioned on the right side of the sam phantom is presented in table 4 (test frequency in the gsm-900 band). this comparison has been performed in order to see whether the sar values averaged over 10 g of tissue for the samsung and nokia phones are higher than the corresponding values for the huawei phone.

figure 2. the cheek (a) and the tilt (b) positions of the huawei p20 pro mobile phone on the right side (ear) of the sam phantom.

figure 3. the front side (a) and the back side (b) of the huawei p20 pro protected by a multilayered protective case.

figure 4. the huawei p20 pro phone with a protective case: cheek (a) and tilt (b) position, the two most common positions relative to the user's cheek.

table 1. sar values for different positions of the huawei p20 pro mobile phone relative to the sam phantom, at the two frequencies (gsm-900 and gsm-1800 bands).

bandwidth | channel | position | sar peak (w/kg) | sar 10g (w/kg) | sar 1g (w/kg)
gsm 900 | middle | right – cheek | 0.57 | 0.2854 | 0.4149
gsm 900 | middle | right – tilt | 0.28 | 0.1375 | 0.1945
gsm 900 | middle | left – cheek | 0.36 | 0.1735 | 0.2457
gsm 900 | middle | left – tilt | 0.16 | 0.0877 | 0.1196
gsm 1800 | middle | right – cheek | 0.34 | 0.1365 | 0.2282
gsm 1800 | middle | right – tilt | 0.14 | 0.0422 | 0.0713
gsm 1800 | middle | left – cheek | 0.15 | 0.0594 | 0.0974
gsm 1800 | middle | left – tilt | 0.05 | 0.0244 | 0.0358

table 2. sar values for different positions of the samsung gt-s6102 mobile phone relative to the sam phantom, at the two frequencies (gsm-900 and gsm-1800 bands).

bandwidth | channel | position | sar peak (w/kg) | sar 10g (w/kg) | sar 1g (w/kg)
gsm 900 | middle | right – cheek | 2.31 | 0.7514 | 1.4469
gsm 900 | middle | right – tilt | 1.01 | 0.4936 | 0.7204
gsm 1800 | middle | right – cheek | 1.40 | 0.5081 | 0.9121
gsm 1800 | middle | right – tilt | 0.61 | 0.2268 | 0.3898
gsm 1800 | middle | left – cheek | 1.14 | 0.4909 | 0.8165
gsm 1800 | middle | left – tilt | 0.45 | 0.1686 | 0.2942

table 3. sar values for different positions of the nokia 2330c-2 mobile phone relative to the sam phantom, at the two frequencies (gsm-900 and gsm-1800 bands).

bandwidth | channel | position | sar peak (w/kg) | sar 10g (w/kg) | sar 1g (w/kg)
gsm 900 | middle | right – cheek | 1.92 | 0.8813 | 1.3496
gsm 900 | middle | right – tilt | 1.13 | 0.5301 | 0.7887
gsm 1800 | middle | right – cheek | 1.89 | 0.5611 | 1.0970
gsm 1800 | middle | right – tilt | 0.37 | 0.1386 | 0.2466
gsm 1800 | middle | left – tilt (*) | 0.76 | 0.2356 | 0.4791
gsm 1800 | middle | left – tilt | 0.32 | 0.1056 | 0.1799
(*) anomalous value, discussed in section 4; the measurement was repeated after repositioning the phone.

table 4. maximum sar (10 g) values, huawei vs samsung and vs nokia mobile phones; band: gsm-900; positions: right – cheek and right – tilt.

phone | sar 10g (right – cheek) w/kg | sar 10g (right – tilt) w/kg | ratio vs huawei (right – cheek) | ratio vs huawei (right – tilt)
huawei | 0.2854 | 0.1375 | – | –
samsung | 0.7514 | 0.4936 | 2.63 | 3.58
nokia | 0.8813 | 0.5301 | 3.08 | 3.85

the maximum value of sar 10g for the huawei, samsung and nokia mobile phones investigated in this dosimetry evaluation study is represented by the opensar software as a 2d graphical representation, a rectangular surface on the right or on the left part of the sam phantom face, according to the side on which the phone has been placed for the sar evaluation. around the position where the maximum sar has been located, marked in red, the software draws a rectangular surface. figure 5 shows the surface sar for the huawei, samsung and nokia mobile phones in the left – cheek position, at 897.59 mhz in the gsm-900 band. the opensar software can also associate the surface where the maximum value of sar 10g has been found with a specific volume concentrated around this maximum sar value, determined by the probe during the scanning process inside the sam phantom. figure 6 shows the volume sar for the huawei, samsung and nokia mobile phones in the left – cheek position, gsm-900 band.

figure 5. surface sar for the huawei (a), samsung (b) and nokia (c) mobile phones at 897.59 mhz, left – cheek position.

figure 6. volume sar for the huawei (a), samsung (b) and nokia (c) mobile phones at 897.59 mhz, left – cheek position.

the sar values for the huawei p20 pro mobile phone protected by a multilayer plastic case are shown in table 5 for the right side of the sam phantom, for both frequencies.

table 5. sar values for different positions of the huawei p20 pro phone with the multilayer protective case, relative to the right part of the sam phantom.

bandwidth | channel | position | sar 10g (w/kg) | sar 1g (w/kg)
gsm 900 | middle | right – cheek | 0.1007 | 0.1525
gsm 900 | middle | right – tilt | 0.0323 | 0.0443
gsm 1800 | middle | right – cheek | 0.0348 | 0.0574
gsm 1800 | middle | right – tilt | 0.0140 | 0.0235

the comparative graphical distribution of the sar values (averaged over 10 g of tissue) resulting from the dosimetric evaluation of the huawei p20 pro (with and without the protective case) is shown in figure 7. four situations have been considered: cheek and tilt positions, for both frequencies of interest.

figure 7. comparative graphical distribution of the sar values (10 g of tissue) for the dosimetric evaluation of the huawei p20 pro: with and without the multilayer protective case.

the sar values recorded in the dosimetric evaluation of the huawei p20 pro (with and without case) are synthesized in table 6 (for the cheek and tilt positions, respectively). figure 8 represents the surface sar for the huawei p20 pro phone, without case, in the left – cheek position at 897.59 mhz. the opensar software can also associate the surface where the maximum value of sar 10g has been found with a specific volume concentrated around this maximum sar value, determined during the scanning process inside the sam phantom.
figure 9 represents the volume sar for the huawei p20 pro phone without the multilayer protective case, in the left – cheek position, gsm-900 band.

4. conclusions

this paper presents a set of sar measurements performed on three mobile phone devices of different generations: huawei p20 pro, samsung gt-s6102 and nokia 2330c-2. there is an over-a-decade difference in terms of market release between these phone models. the huawei p20 pro is the newest one, released on the market in 2018, while the two other mobile phones were released in 2012 (samsung) and in 2009 (nokia). we have also determined the sar values when the huawei mobile phone was provided with a multilayer plastic protective case; this comparative evaluation across phone generations and protective cases constitutes the novelty of the study presented in this paper. a first examination of the data presented in the tables has shown that the results are generally consistent with published tests done by other laboratories. according to our measurements, the cheek sar values are higher than the tilt sar values, and the 1-g sar values are higher than the 10-g sar values. also, the 900-mhz sar values are higher than the 1800-mhz ones. these findings have theoretical support and are in good agreement with most of the other results reported in the literature. we have noticed an anomaly in our measurements for the nokia mobile phone (the row marked with an asterisk in table 3) for gsm-1800 on the left side of the phantom in the tilt position: the sar values were higher than those determined on the right side of the phantom for the same position. this anomaly was due to the position of the phone in the clamping device. after we carefully verified and repositioned the phone, the measurement was reset and correct data were taken. the situation illustrates the occasional abnormal, inexplicable values that have also been reported in comparative studies between laboratories. similar studies developed in other laboratories (so-called intercomparison measurements) support the expectation that, for a given mobile phone, frequency and position, the sar measurements in the left and right ear positions of the sam phantom should be very close in value. when they are not, one should check for a user error in phone placement or in data recording. as a first recommendation, the tilt position should be preferred by any user. the designers of the huawei p20 pro placed the antenna at the bottom, to be farther away from the user's brain. this is a major advantage over the older mobile phones, whose antennas are positioned at the top of the device. as expected in principle, the lowest sar values were recorded when the phone was provided with the protective case. regarding the use of a protective case, it is important that the material it is made of is a good absorber; protective cases with conducting insertions should be avoided, mainly because of unexpected and uncontrolled reflections. future studies on this topic should involve testing different types of cases, to comparatively track their impact on the sar values. conversely, the transmission efficiency of a phone provided with a protective case decreases; as a shortcoming, in this situation the battery will run out faster, because the phone will try to deliver more power to ensure coverage and a better signal reception. anyway, in a real, daily environment, the sar values might vary depending on the propagation conditions.
a general conclusion could be the following: a combination of factors, such as the positioning of the antenna, the size of the device, the relative position to the human head and equipping the phone with a protective case, can lead to lower sar values, regardless of the type of mobile terminal.

table 6. comparison of the sar values (10 g) given by the dosimetric evaluation of the huawei p20 pro (with and without a protective case).

bandwidth   position        sar 10g no case (w/kg)   sar 10g with case (w/kg)   ratio (no case / with case)
gsm 900     right - cheek   0.2854                   0.1007                     2.83
gsm 900     right - tilt    0.1375                   0.0323                     4.25
gsm 1800    right - cheek   0.1365                   0.0348                     3.92
gsm 1800    right - tilt    0.0422                   0.0140                     3.01

figure 8. surface sar for the huawei p20 pro phone without the protective multilayer case.

figure 9. volume sar for the huawei p20 pro mobile phone without the protective multilayer case.

the dosimetric evaluation presented here demonstrates that the maximum sar values for 10 g of tissue determined for all three mobile phones are smaller than the icnirp limit of 2 w/kg for the head region. carrying out rigorous measurements for the most diverse exposure scenarios, with correct processing of the results, is an important resource for removing exaggerated fears, but also for developing recommendations and guidelines for the effective reduction of human exposure to environmental electromagnetic fields. in this frame of universal interest, any rigorous technical-scientific approach should definitely be welcomed.

acknowledgment

this paper could be developed thanks to the collaboration agreement settled between the "gheorghe asachi" technical university of iasi, faculty of electrical engineering, and the liceter accredited laboratory of ancom romania.

references

[1] international commission on non-ionizing radiation protection, guidelines for limiting exposure to time-varying electric, magnetic, and electromagnetic fields (up to 300 ghz), health physics, vol. 74, 1998, pp. 494-521.
[2] ieee, ieee standard for safety levels with respect to human exposure to radio frequency electromagnetic fields, 3 khz to 300 ghz, c95.1-2005, new york: institute of electrical and electronics engineers, 2005. online [accessed 20 june 2021] https://emfguide.itu.int/pdfs/c95.1-2005.pdf
[3] european committee for electrotechnical standardization (cenelec), prestandard env 50166-2, human exposure to electromagnetic fields. high frequency (10 khz to 300 ghz). online [accessed 20 june 2021] https://standards.globalspec.com/std/85205/env%2050166-2
[4] order nr. 1193 from 29 september 2006 for the approval of the norms regarding the limitation of the general population exposure to electromagnetic fields from 0 hz to 300 ghz. online [accessed 20 june 2021], [in romanian] https://www.ancom.ro/uploads/links_files/odinul_1193_2006_norme.pdf
[5] w. kainz, a. christ, t. kellom, s. seidman, n. nikoloski, b. beard, n. kuster, dosimetric comparison of the specific anthropomorphic mannequin (sam) to 14 anatomical head models using a novel definition for the mobile phone positioning, physics in medicine and biology 50(14), august 2005, pp. 3423-3445. doi: 10.1088/0031-9155/50/14/016
[6] c. c. gordon, t. churchill, c. e. clauser, b. bradtmiller, j. t. mcconville, i. tebbetts, r. a. walker, 1988 anthropometric survey of u.s. army personnel: methods and summary statistics, technical report natick/tr-89/044, u.s. army natick research, development and engineering center, massachusetts: natick, sep. 1989.
online [accessed 20 june 2021] http://mreed.umtri.umich.edu/mreed/downloads/anthro/ansur/gordon_1989.pdf
[7] s. aqeel abdulrazzaq, s. jabir, j. aziz, sar simulation in human head exposed to rf signals and safety precautions, ijcset, september 2013, vol. 3, issue 9, pp. 334-340. online [accessed 20 june 2021] http://www.ijcset.net/docs/volumes/volume3issue9/ijcset2013030908.pdf
[8] prabir kumar dutta, pappu vankata yasoda jayasree, viriyala satya surya narayana srinivasa baba, sar reduction in the modelled human head for the mobile phone using different material shields, hum. cent. comput. inf. sci. 6 (2016), art. 3. doi: 10.1186/s13673-016-0059-0
[9] m. r. iqbal-faruque, n. aisyah-husni, md. ikbal-hossain, m. tariqul-islam, n. misran, effects of mobile phone radiation onto human head with variation of holding cheek and tilt positions, journal of applied research and technology, volume 12, issue 5, october 2014, pp. 871-876. doi: 10.1016/s1665-6423(14)70593-0
[10] l. belrhiti, f. riouch, a. tribak, j. terhzaz, investigation of dosimetry in four human head models for planar monopole antenna with a coupling feed for lte/wwan/wlan internal mobile phone, journal of microwave, optoelectronics and electromagnetic applications, vol. 16, no. 2, june 2017. doi: 10.1590/2179-10742017v16i2748
[11] ovidiu bejenaru, catalin lazarescu, alexandru salceanu, valeriu david, study upon specific absorption rate values for different generations of mobile phones by using a satimo-comosar evaluation dosimetry system, 12th international conference and exhibition on electromechanical and energy systems, sielmen 2019, chisinău, moldova, 10-11 october 2019, pp. 1-5. doi: 10.1109/sielmen.2019.8905798
[12] m. a. stuchly, s. s. stuchly, experimental radio and microwave dosimetry, in: ch. polk, e. postow (eds.), handbook of biological effects of electromagnetic fields, second edition, crc press, boca raton, new york, london, washington dc, 1996, pp. 295-336.
[13] sam phantom on the mvg website. online [accessed 20 june 2021] https://www.mvg-world.com/en/products/sar/sar-accessories/sam-phantom
[14] en 50361: basic standard for the measurement of specific absorption rate related to human exposure to electromagnetic fields from mobile phones (300 mhz - 3 ghz), 2001. online [accessed 20 june 2021] https://standards.globalspec.com/std/532912/en%2050361
[15] ieee standard 1528-2003: ieee recommended practice for determining the peak spatial-average specific absorption rate (sar) in the human head from wireless communications devices: measurement techniques, 19 december 2003, pp. 1-120. doi: 10.1109/ieeestd.2003.94414
[16] industrial robots on the kuka website. online [accessed 20 june 2021] https://www.kuka.com/en-de/products/robot-systems/industrial-robots
[17] opensar v5 on the mvg website. online [accessed 20 june 2021] https://www.mvg-world.com/en/products/field_product_family/sar-38/opensar-v5
[18] asro, standard sr en 62209-1: human exposure to radio frequency fields from hand-held and body-mounted wireless communication devices - human models, instrumentation, and procedures - part 1: procedure to determine the specific absorption rate (sar) for hand-held devices used in close proximity to the ear (frequency range of 300 mhz to 3 ghz), 2007.
online [accessed 20 june 2021], [in romanian] https://magazin.asro.ro/ro/standard/117718

skin potential response for stress recognition in simulated urban driving

acta imeko issn: 2221-870x december 2021, volume 10, number 4, 117 - 123

pamela zontone1, antonio affanni1, alessandro piras1, roberto rinaldo1

1 polytechnic department of engineering and architecture, university of udine, via delle scienze 206, 33100 udine, italy

section: research paper

keywords: stress recognition; electrodermal activity; skin potential response; machine learning; 3d driving simulator

citation: pamela zontone, antonio affanni, alessandro piras, roberto rinaldo, skin potential response for stress recognition in simulated urban driving, acta imeko, vol. 10, no. 4, article 20, december 2021, identifier: imeko-acta-10 (2021)-04-20

section editors: roberto montanini, università di messina and alfredo cigada, politecnico di milano, italy

received july 23, 2021; in final form december 7, 2021; published december 2021

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: pamela zontone, e-mail: pamela.zontone@uniud.it

abstract

in this paper, we address the problem of possible stress conditions arising in car drivers and affecting their driving performance. we apply various machine learning (ml) algorithms to analyse the stress of subjects while driving in an urban area in two different situations: one with cars, pedestrians and traffic along the course, and the other characterized by the complete absence of any of these possible stress-inducing factors. to evaluate the presence of a stress condition we use two skin potential response (spr) signals, recorded from each hand of the test subjects, and process them through a motion artifact (ma) removal algorithm which reduces the artifacts that might be introduced by hand movements. we then compute some statistical features starting from the cleaned spr signal. a binary classification ml algorithm is then fed with these features, giving as output a label that indicates whether a time interval belongs to a stress condition or not. tests were carried out in a laboratory at the university of udine, where a car driving simulator with a motorized motion platform has been prearranged. we show that the use of one single spr signal, along with the application of ml algorithms, enables the detection of possible stress conditions while the subjects are driving, in the traffic and no traffic situations. as expected, we observe that the test individuals are less stressed in the situation without traffic, confirming the effectiveness of the proposed slightly invasive system for the detection of stress in drivers.

1. introduction

paying attention to drivers' mental wellbeing is crucial to improve safety in road traffic. if not properly treated, stress can lead drivers to engage in risky behaviours [1] and therefore car accidents [2]. a dangerous situation occurs whenever stress is caused by the driving activity itself, as happens to professional [3] and regular drivers [4], or by personal issues, as highlighted in [5] in the case of economic reasons, or any other kind of reason as described in [6]. a hidden markov model (hmm) system to assess the probability of assuming certain behaviours considering the current emotion is developed in [7].
in [8], a survey describing methods to recognize emotions in drivers is also provided. the development of stress detection systems follows two main paths [9]. one is based on physiological signals, including electrodermal activity (eda), electroencephalogram (eeg), blood volume pulse (bvp), electromyography (emg), skin temperature (skt) and respiration (resp) [10], [11]. the second relies on physical manifestations of stress: data describing human behaviour, for example, could be collected by the global positioning system [12] and facial expressions [13]. a common approach is to identify a stress condition with the aid of machine learning (ml) and deep learning (dl) techniques, as in [14]-[16], where the properties of eeg, ecg and eda signals, respectively, are exploited for classification purposes. in [17] different kernel configurations for support vector machines (svms) are tested and then applied to electromyographic signals. an automated way to find the optimal kernel has been used in [18], where the kernel for a deep multiple kernel support vector machine (d-mkl-svm) is selected through a multiple-objective genetic algorithm (moga); the classifier is then used on ecg data. different physiological measurements, eda and ecg signals, are combined in [19], where features are automatically extracted from short signal sections and classified by a multimodal convolutional neural network (cnn). a physical approach can be found in [20]: the driver's expressions and eye movements are recorded by near-infrared (nir) camera sensors, and aggressive driving behaviour is then classified by a cnn. the method proposed in [21] combines both physiological (electrodermal activity) and behavioural (facial) measurements, and fuses the different data types together in order to build a sensor fusion emotion recognition (sfer) system, improving the classification performance.
in previous works, the authors carried out some experiments with the aid of a driving simulator platform, recreating a highway [22], [23] and inducing stress by adding obstacles along the course. we collected the ecg in addition to the skin potential response (spr) data, and we employed ml and dl algorithms to detect stress in the test individuals. in [24], the authors compared the different physiological responses under manual and autonomous driving tests. in [25], we also examined the possible changes in the physiological responses when different car settings are considered.

one of the main contributions of this paper is the analysis and comparison of the performance results of different ml models (extending the work in [26]), which we demonstrated to be valuable in detecting stress episodes in previous experiments, but now considering the stress caused by urban traffic. in this work, moreover, we simplify the system and consider one signal only, i.e., the spr signal taken from the hands of the driver. in this way, we propose a slightly invasive setup, which can be arranged with little discomfort for the driver. in detail, we log spr values from the two hands of individuals while they drive. we apply a motion artifact (ma) removal algorithm to remove the artifacts, caused by the hand movements turning the wheel, that can alter the signal. this algorithm outputs a single spr signal, without artifacts, which is fed to an ml classifier that has been previously trained using a larger dataset. the classifier marks time intervals with a "stress" or "not stress" flag. the individuals were told to drive normally, in an urban setting simulated by the city car driving 3d software simulator. the experiment was set up to present two different situations. one situation recreates an urban area with no traffic and empty streets, while the second recreates an urban area complete with traffic, with cars and pedestrians.

the findings of this study validate the success of the supervised learning algorithm in its stress detection task. we also demonstrate that spr signals, recorded with minimally invasive and simple sensors, along with ml classifiers, can detect stress in a reliable way. in the end, we observe that, as expected, stress is generally higher in the urban environment filled with traffic.

the paper is structured as follows. in the next section we present the fundamental blocks of our proposed system. section 3 introduces the experimental setup. section 4 discusses the results obtained from our comparative study, where different ml algorithms are used for driver stress recognition. finally, some conclusions are drawn in section 5.

2. proposed system

the proposed measurement system for stress detection in car drivers is shown in figure 1. each subject under test wears the spr sensors on the wrists and is seated on the driving simulator available in the biosens lab at the university of udine. the simulator is composed of a moving platform with two axes (dof reality professional p2), a steering wheel with pedals and gearbox (logitech g29), and a curved screen.

figure 1. block scheme of our proposed stress detection measurement system.

for each subject two different simulations are performed on the same city route under two different conditions: "no traffic" and "traffic". "no traffic" means that there are no other cars or pedestrians on the road; "traffic" means that we inserted other cars and pedestrians in the simulation, with some aggressive events (e.g., lane invasions by other cars or unexpected pedestrian road crossings), as also described in section 3.
during the entire route planned in the simulations we acquired the spr signals from the subjects, positioning the sensors shown in figure 2 on the wrist like a smartwatch. the differential voltages from the palm and the back of each hand (vp-vb in figure 2) are properly conditioned and acquired by a 12-bit a/d converter on board a dsp, with a sample rate of 200 sa/s. data are then sent using a low-power wifi module which operates at a 115.2 kbps baud rate. the detailed description of the sensors and their characterization is provided in [27], [28].

figure 2. electrodes arrangement and spr sensor block diagram.

summarizing the architecture and specifications, the sensor analog front end is a band-pass differential amplifier (having an input impedance of 100 mω), with a maximum input range of ±10 mv and a bandwidth in the [0.08, 8] hz range. the accuracy of the spr acquisition, after characterization, resulted in 0.15 % of full scale (corresponding to 30 μv), and the resolution is 4.9 μv. the sensors are battery operated with a single lipo cell with a capacity of 850 mah, ensuring ten hours of transmission, since the current consumption is 85 ma.

the sensors form a body network where one spr sensor acts as slave (henceforth sensor 1) and the other acts as master (henceforth sensor 2). the slave sensor sends packets to the master, and the latter aligns the received data packets with the data acquired by its own a/d converter. for consumption reasons, the slave can send packets every 40 ms at minimum. hence, the slave dsp builds a packet composed of eight data samples acquired every 5 ms and sends them to the master. the module is configured as a station (sta) with a static ip and operates as a udp client. the gateway address is configured to be the master address.

figure 3 shows how the packets are built by the dsp on the slave before transmission. the a/d module provides a 12-bit datum every 5 ms. each byte sent via uart to the master must be identified with a unique code, since the master must recognize whether the incoming datum is the upper or lower byte of the slave sample. so, the dsp of the slave builds the lower (upper) byte of information using the six least (most) significant bits of the a/d sample, adding one bit marking it as lower (upper) byte (the l or h bit in figure 3, respectively).

figure 3. construction of the packets on the slave for transmission.

the data packets received by the master are dismantled and realigned as in figure 4. the master adds a unique header and builds a packet composed of 18 bytes, containing the information on spr1 and spr2. the packet is then sent to a laptop every 40 ms.

figure 4. realignment of the packets on the master.
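as a rough illustration of the byte layout described for figure 3 (this is not the authors' firmware, and the concrete flag-bit values are assumptions of this sketch), each 12-bit a/d sample can be split into two tagged bytes so that the master can realign lower and upper halves:

```python
# sketch of the slave-side packing described above: every 12-bit sample is
# split into two bytes carrying six payload bits each, tagged with an
# l- or h-flag bit; eight samples (40 ms at 5 ms/sample) form one packet body.
L_FLAG = 0x00  # assumed marker for a "lower byte" (the real encoding is not given)
H_FLAG = 0x40  # assumed marker for an "upper byte"

def split_sample(sample: int) -> tuple[int, int]:
    """split one 12-bit a/d sample into tagged (lower, upper) bytes."""
    low6 = sample & 0x3F           # six least significant bits
    high6 = (sample >> 6) & 0x3F   # six most significant bits
    return L_FLAG | low6, H_FLAG | high6

def build_packet(samples: list[int]) -> bytes:
    """pack eight consecutive samples into one slave packet body."""
    assert len(samples) == 8
    body = bytearray()
    for s in samples:
        lo, hi = split_sample(s)
        body += bytes((lo, hi))
    return bytes(body)
```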
the data transmitted from the master are acquired by a dedicated graphical user interface developed in the .net environment, and are then processed by a motion artifact (ma) removal algorithm described in [22], [29]. the two spr signals acquired from the left and right hand of the subjects are processed by the ma algorithm in order to provide as output a single signal that better represents the activity of the autonomic nervous system (ans). as a matter of fact, the spr signals can typically be affected by motion artifacts due to pressure on the sensors during hand movements. ideally, the two spr signals should have approximately the same pattern, since they represent the response to the same stimulus, initiated by the sympathetic response of the ans.

the ma removal algorithm is based on two assumptions: the first is that a motion artifact enhances the local energy of the signal; the second is that motion artifacts rarely appear simultaneously in the spr signals of both hands. the output of the ma removal block is thus obtained by computing a weighted combination of the two input spr signals, evaluating their local energy and giving more weight to the less perturbed signal, i.e., the one, between the two input signals, with the lower local energy value. in our experiments (see [22]), we found that a motion artifact rarely appears simultaneously in both hands; in fact, it mostly appears during the steering wheel action, which is predominantly performed by one hand (as also discussed in [24]).

after being processed through the ma removal block, the cleaned spr signal is then sent to various ml classification algorithms, which had already been trained on a bigger dataset. this dataset, including 3195 intervals for each of the stress and non-stress classes, is the result of a previous experiment carried out at the vi-grade firm (vi-grade.com), utilizing their professional dynamic simulator. more specifically, in that case, 18 subjects manually drove for 67 km along a highway, trying to cross 12 obstacles positioned at prearranged points along the track. these obstacles were: double lane change (right to left or left to right), tire labyrinth, sponsor block (from left or from right), slalom (from left or from right), lateral wind (from left or from right), jersey lr, tire trap, stop. we divided the cleaned spr signal into 15 s time interval blocks, after normalization to equalize the signal amplitudes among subjects, and for each block we computed five statistical features: the interval variance, the energy, the mean absolute value, the mean absolute derivative, and the maximum absolute derivative. each interval overlapped the previous one by 10 s, so we could derive a new feature vector every 5 s. in particular, we could define exactly the obstacle location and span, during which the individuals were supposed to be in a stress state. in this way, we could assign a flag equal to "1" to all of the intervals falling within or intersecting the stress episodes, and a flag equal to "0" to all the others, i.e., the ones falling outside these stress episodes. finally, after classification of the test set, we applied a re-label step to address the issue related to the number of single, anomalous "1" flags [22]. we were able to compare the results of an svm, a random forest (rf) classifier, a decision tree (dt), and a k-nearest neighbours (k-nn) classifier, which provided a similar accuracy of about 73 %, with only the k-nn presenting a slightly lower value (68 %). all of the ml classifiers were implemented using matlab (2017.a), and a 10-fold cross-validation phase was considered for all of these algorithms. bayesian optimization was also used during the training procedure for all of the classifiers (for hyperparameter tuning). a radial basis function (rbf) kernel was employed for the svm model (see also [23]).
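the weighted-combination idea can be sketched as follows (the actual algorithm is the one detailed in [22], [29]; the window length and the weighting law used here are illustrative assumptions):

```python
import numpy as np

# sketch of the two-channel ma removal idea: estimate the local energy of
# each spr channel and weight the combination towards the channel with the
# lower local energy, i.e. the less perturbed one.
def ma_removal(spr_left: np.ndarray, spr_right: np.ndarray, win: int = 200) -> np.ndarray:
    kernel = np.ones(win) / win
    e_l = np.convolve(spr_left ** 2, kernel, mode="same")   # local energy, left hand
    e_r = np.convolve(spr_right ** 2, kernel, mode="same")  # local energy, right hand
    w_l = e_r / (e_l + e_r + 1e-12)   # more weight to the less energetic channel
    return w_l * spr_left + (1.0 - w_l) * spr_right
```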
3. experimental setup

as already stated, the test was carried out using a driving simulator, consisting of 3d driving simulation software and a motorized platform, located in a lab at the university of udine. the experiment employed 10 test subjects, students of the university of udine. they were asked to drive along a predefined track, in an urban area simulated by the city car driving software. the software enables the creation of an urban area with a nearby motorway, complete with car traffic, with the option of adding different stress-inducing factors, like pedestrians crossing and vehicles unexpectedly changing lane (also from the opposite direction) or braking suddenly. these stressors do not occur exactly at the same location and time; however, the type and multiplicity of these stress-inducing events is similar between different simulations.

the complete track is displayed in figure 5. the green solid line represents the motorway, and the orange solid line represents the city route.

figure 5. graphical representation of the course: it comprises a motorway and an urban route.

the subjects were asked to drive in two different situations: in the first there is a complete lack of traffic, with no cars and people, whereas the second situation has car and pedestrian traffic. in this second situation the traffic volume is rather low, but the behaviour of the traffic was set to "very aggressive", with the cars and pedestrians acting in a more unpredictable and temperamental way, with cars intruding into the subject's path, or pedestrians crossing the road at forbidden points. one half of the individuals started with the no traffic situation first and then proceeded with the traffic situation (i.e., subjects 1, 2, 3, 4, and 10), while the remaining 50 % did the opposite (i.e., subjects 5, 6, 7, 8, and 9). completing the track in figure 5 takes 10 minutes on average, with a similar time required to complete the motorway and urban sections.

4. experimental results

all of the spr data collected from the 10 test subjects, after they had driven along the course in the urban area recreated by city car driving, are cleaned through the ma removal block. these output signals are scaled to make a meaningful comparison possible, i.e., for each subject and for each driving condition, we standardize the corresponding signal using the mean and standard deviation resulting from the concatenation of the signals coming from the two driving conditions, with traffic and no traffic, for that subject. these standardized signals are ultimately fed to an ml algorithm. more specifically, the same five spr features introduced in the previous section are extracted from each 15 s interval. we make a new interval start 5 seconds after the start of the previous one (therefore each interval overlaps the previous one by 10 seconds). the various ml classifiers introduced in the previous section are only used for the test phase. in the end, we can look at all of the labels that each classifier gives as output and calculate the final number of labels equal to "1" or "0", according to the intervals that it labels as "stress" or "non-stress", for each subject and each driving situation (with and without traffic).

table 1 displays the percentage of labels equal to "1" (with stress) for the svm, rf, dt, and k-nn classifiers, for each subject, taking into account the complete test course in the traffic and no traffic conditions. as an example, for the rf classifier, we show in figure 6 the graphical representation of the values reported in table 1. as we can deduce looking at all of the classifiers' results, the no traffic situation appears to be less stressful than the traffic situation for all of the subjects excluding subject 4.
in addition, there are some individuals for whom the difference between the positive labels in the two situations is higher (e.g., subjects 5, 6, and 9), while for others this difference is lower (e.g., subjects 3 and 7). we can try to explain this in different ways: maybe the pressure of taking a test changed the expected stress reaction, or the outcome could be influenced by which simulated situation was experienced first by the subjects (traffic then no traffic, or the other way around). still, for 90 % of the subjects the resulting "stress" interval count is higher in the traffic situation.

table 1. total number of intervals marked as "stress" in %, for each classifier and for each subject in the two driving conditions (with traffic and no traffic). the numeric difference of labels between the two conditions (traffic - no traffic) is also shown.

svm / subject          1      2      3      4       5      6      7      8      9      10     mean
traffic                51.82  68.00  58.86  57.47   41.94  60.00  80.33  94.49  87.60  96.43  69.69
no traffic             22.55  60.31  55.56  65.22   5.04   1.63   75.00  54.62  38.98  91.87  47.08
traffic - no traffic   29.26  7.69   3.31   -7.75   36.89  58.37  5.33   39.87  48.62  4.56   22.62

rf / subject           1      2      3      4       5      6      7      8      9      10     mean
traffic                56.93  72.00  59.49  62.07   44.35  64.44  77.05  93.70  89.26  92.14  71.14
no traffic             26.47  65.65  54.70  66.09   6.72   0.81   72.58  57.98  37.29  89.43  47.77
traffic - no traffic   30.46  6.35   4.79   -4.02   37.63  63.63  4.47   35.72  51.97  2.71   23.37

dt / subject           1      2      3      4       5      6      7      8      9      10     mean
traffic                56.20  70.40  60.76  63.22   42.74  64.44  84.43  96.06  90.91  96.43  72.56
no traffic             27.45  65.65  57.26  74.78   5.88   3.25   76.61  57.14  41.53  92.68  50.22
traffic - no traffic   28.75  4.75   3.49   -11.56  36.86  61.19  7.81   38.92  49.38  3.75   22.33

k-nn / subject         1      2      3      4       5      6      7      8      9      10     mean
traffic                56.93  70.40  58.86  63.79   42.74  63.70  75.41  92.91  87.60  90.71  70.31
no traffic             27.45  67.94  54.70  76.52   5.04   2.44   75.00  55.46  40.68  79.67  48.49
traffic - no traffic   29.48  2.46   4.16   -12.73  37.70  61.26  0.41   37.45  46.93  11.04  21.82

figure 6. total number of intervals labelled as "stress" by the rf classifier.

in figure 7 we show the output of the rf classifier for subject 9, where the difference between the positive labels in the traffic and no traffic situations is among the largest positive ones observed across all of the classifiers. for the sake of simplicity, we only plot the positive labels, using a grey stem located at the end of the corresponding 15 s classified spr interval; the labels corresponding to the non-stress case are not included in the figure. the cleaned and normalized spr signals of the subject in the two different situations (with traffic and no traffic) are also shown as a blue continuous line. the output of the dt classifier for the same subject is displayed in figure 8 (here the difference is slightly lower than the one obtained with the rf). in figure 9 the output of the k-nn classifier for subject 4 is reported instead. this is the only subject for whom the difference between the positive labels in the traffic and no traffic scenarios is always negative, for all of the classifiers; this negative difference is the biggest in the k-nn case. in figure 10 we display a last example considering the svm classifier's output for subject 3. as we can notice, the classifiers identify well the increased stress level throughout the entire simulations.

figure 7. output of the rf classifier for subject 9 in the two situations (without traffic and with traffic).
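the per-subject percentages in table 1 come from counting the "1" labels over the overlapping windows. a sketch of the windowing, the five statistical features and the label counting follows (the trained classifier is represented only by its predict() interface, an assumption of this illustration):

```python
import numpy as np

FS = 200          # sa/s, sensor sample rate
WIN = 15 * FS     # 15 s analysis window
STEP = 5 * FS     # a new window every 5 s (10 s overlap)

def interval_features(spr: np.ndarray) -> np.ndarray:
    """variance, energy, mean |value|, mean |derivative|, max |derivative|."""
    d = np.diff(spr)
    return np.array([np.var(spr), np.sum(spr ** 2), np.mean(np.abs(spr)),
                     np.mean(np.abs(d)), np.max(np.abs(d))])

def stress_percentage(spr: np.ndarray, classifier) -> float:
    """percentage of windows labelled "1" (stress) by a trained classifier."""
    feats = [interval_features(spr[i:i + WIN])
             for i in range(0, len(spr) - WIN + 1, STEP)]
    labels = classifier.predict(np.asarray(feats))  # 1 = stress, 0 = non-stress
    return 100.0 * float(np.mean(labels))
```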
figure 8. output of the dt classifier for subject 9 in the two situations (without traffic and with traffic).

figure 9. output of the k-nn classifier for subject 4 in the two situations (without traffic and with traffic).

figure 10. output of the svm classifier for subject 3 in the two situations (without traffic and with traffic).

5. conclusions

in this paper, we described a stress detection system that allows us to identify stress in car drivers. our system classifies overlapping 15 s signal blocks, and can therefore provide a classification output every 5 s, with a small delay in a real-time application and a good localization in time. the test subjects drove in a simulated urban environment, utilizing a car driving simulator located in our biosens lab at the university of udine. we logged two spr signals from their hands, and we processed these signals through an ma removal block. we computed some features from the resulting single signal, and we sent them as input to different ml algorithms, comparing the final results. we showed that, regardless of the ml algorithm used, all of the subjects except one appeared more stressed when driving in an urban area prearranged with traffic. therefore, through the use of a low-complexity spr data acquisition sensor and the application of ml algorithms, we could effectively recognize stress states arising in drivers.

references

[1] l. bowen, s. l. budden, a. p. smith, factors underpinning unsafe driving: a systematic literature review of car drivers, transportation research part f: traffic psychology and behaviour 72 (2020), pp. 184-210. doi: 10.1016/j.trf.2020.04.008
[2] g. miyama, m. fukumoto, r. kamegaya, m. hitosugi, risk factors for collisions and near-miss incidents caused by drowsy bus drivers, international journal of environmental research and public health 17(12) (2020). doi: 10.3390/ijerph17124370
[3] l. r. hartley, j. el hassani, stress, violations and accidents, applied ergonomics 25(4) (1994), pp. 221-230. doi: 10.1016/0003-6870(94)90003-5
[4] y. amichai-hamburger (ed.), technology and psychological well-being, cambridge university press, 2009, online isbn 9780511635373. doi: 10.1017/cbo9780511635373
[5] d. l. kitara, o. karlsson, the effects of economic stress and urbanization on driving behaviours of boda-boda drivers and accidents in gulu, northern uganda: a qualitative view of drivers, the pan african medical journal 36(47) (2020). doi: 10.11604/pamj.2020.36.47.21382
[6] e. bosch, k. ihme, u. drewitz, m. jipp, m. oehl, why drivers are frustrated: results from a diary study and focus groups, european transport research review 12(52) (2020), pp. 1-13. doi: 10.1186/s12544-020-00441-7
[7] y. liu, x. wang, differences in driving intention transitions caused by driver's emotion evolutions, international journal of environmental research and public health 17(19) (2020). doi: 10.3390/ijerph17196962
[8] s. zepf, j. hernandez, a. schmitt, w. minker, r. w. picard, driver emotion recognition for intelligent vehicles: a survey, acm computing surveys (csur) 53(3) (2020), pp. 1-30. doi: 10.1145/3388790
[9] s. greene, h. thapliyal, a. caban-holt, a survey of affective computing for stress detection: evaluating technologies in stress detection for better health, ieee consumer electronics magazine 5(4) (2016), pp. 44-56.
doi: 10.1109/mce.2016.2590178
[10] m. moghimi, r. stone, p. rotshtein, affective recognition in dynamic and interactive virtual environments, ieee transactions on affective computing 11(1) (2020), pp. 45-62. doi: 10.1109/taffc.2017.2764896
[11] c. maaoui, a. pruski, f. abdat, emotion recognition for human-machine communication, proc. of the 2008 ieee/rsj international conference on intelligent robots and systems (iros), nice, france, 22-26 september 2008, pp. 1210-1215. doi: 10.1109/iros.2008.4650870
[12] j. li, j. lv, b. oh, z. lin, y. j. yu, identification of stress state for drivers under different gps navigation modes, ieee access 8 (2020), pp. 102773-102783. doi: 10.1109/access.2020.2998156
[13] su-jing wang, wen-jing yan, xiaobai li, guoying zhao, chunguang zhou, xiaolan fu, minghao yang, jianhua tao, micro-expression recognition using color spaces, ieee transactions on image processing 24(12) (2015), pp. 6034-6047. doi: 10.1109/tip.2015.2496314
[14] hanna becker, julien fleureau, philippe guillotel, fabrice wendling, isabelle merlet, laurent albera, emotion recognition based on high-resolution eeg recordings and reconstructed brain sources, ieee transactions on affective computing 11(2) (2017), pp. 244-257. doi: 10.1109/taffc.2017.2768030
[15] bosun hwang, jiwoo you, thomas vaessen, inez myin-germeys, cheolsoo park, byoung-tak zhang, deep ecgnet: an optimal deep learning framework for monitoring mental stress using ultra short-term ecg signals, telemedicine and e-health 24(10) (2018), pp. 753-772. doi: 10.1089/tmj.2017.0250
[16] f. al machot, a. elmachot, m. ali, e. al machot, k. kyamakya, a deep-learning model for subject-independent human emotion recognition using electrodermal activity sensors, sensors 19(7) (2019), art. no. 1659. doi: 10.3390/s19071659
[17] o. vargas-lopez, c. a. perez-ramirez, m. valtierra-rodriguez, j. j. yanez-borjas, j. p. amezquita-sanchez, an explainable machine learning approach based on statistical indexes and svm for stress detection in automobile drivers using electromyographic signals, sensors 21(9) (2021), art. no. 3155. doi: 10.3390/s21093155
[18] k. t. chui, m. d. lytras, r. w. liu, a generic design of driver drowsiness and stress recognition using moga optimized deep mkl-svm, sensors 20(5) (2020), art. no. 1474. doi: 10.3390/s20051474
[19] j. lee, h. lee, m. shin, driving stress detection using multimodal convolutional neural networks with nonlinear representation of short-term physiological signals, sensors 21(7) (2021), art. no. 2381. doi: 10.3390/s21072381
[20] rizwan ali naqvi, muhammad arsalan, abdul rehman, ateeq ur rehman, woong-kee loh, anand paul, deep learning-based drivers emotion classification system in time series data for remote applications, remote sensing 12(3) (2020), art. no. 587. doi: 10.3390/rs12030587
[21] geesung oh, junghwan ryu, euiseok jeong, ji hyun yang, sungwook hwang, sangho lee, sejoon lim, drer: deep learning-based driver's real emotion recognizer, sensors 21(6) (2021), art. no. 2166. doi: 10.3390/s21062166
[22] pamela zontone, antonio affanni, riccardo bernardini, alessandro piras, roberto rinaldo, fabio formaggia, diego minen, michela minen, carlo savorgnan, car driver's sympathetic reaction detection through electrodermal activity and electrocardiogram measurements, ieee transactions on biomedical engineering 67(12) (2020), pp. 3413-3424.
doi: 10.1109/tbme.2020.2987168
[23] pamela zontone, antonio affanni, riccardo bernardini, leonida del linz, alessandro piras, roberto rinaldo, supervised learning techniques for stress detection in car drivers, advances in science, technology and engineering systems journal 5(6) (2020), pp. 22-29. doi: 10.25046/aj050603
[24] pamela zontone, antonio affanni, riccardo bernardini, leonida del linz, alessandro piras, roberto rinaldo, stress evaluation in simulated autonomous and manual driving through the analysis of skin potential response and electrocardiogram signals, sensors 20(9) (2020), art. no. 2494. doi: 10.3390/s20092494
[25] pamela zontone, antonio affanni, riccardo bernardini, leonida del linz, alessandro piras, roberto rinaldo, emotional response analysis using electrodermal activity, electrocardiogram and eye tracking signals in drivers with various car setups, proc. of the 2020 28th european signal processing conference (eusipco), amsterdam, nl, 18-21 january 2021, pp. 1160-1164. doi: 10.23919/eusipco47968.2020.9287446
[26] p. zontone, a. affanni, a. piras, r. rinaldo, stress recognition in a simulated city environment using skin potential response (spr) signals, proc. of the 2021 ieee international workshop on metrology for automotive (metroautomotive), bologna, italy, 1-2 july 2021, pp. 135-140. doi: 10.1109/metroautomotive50197.2021.9502867
[27] a. affanni, dual-channel electrodermal activity and an ecg wearable sensor for measuring mental stress from the hands, acta imeko 8(1) (2019), pp. 56-63. doi: 10.21014/acta_imeko.v8i1.562
[28] a. affanni, wireless sensors system for stress detection by means of ecg and eda acquisition, sensors 20(7) (2020), art. no. 2026. doi: 10.3390/s20072026
[29] a. affanni, a. piras, r. rinaldo, p. zontone, dual channel electrodermal activity sensor for motion artifact removal in car drivers' stress detection, proc. of the 2019 ieee sensors applications symposium (sas), sophia antipolis, france, 11-13 march 2019, pp. 1-6.
doi: 10.1109/sas.2019.8706023

enhanced measurement of high aspect ratio surfaces by applied sensor tilting

acta imeko issn: 2221-870x september 2014, volume 3, number 3, 22 - 27

alexander schuler1, albert weckenmann1, tino hausotte2

1 friedrich-alexander-universität erlangen-nürnberg, chair quality management and manufacturing metrology (qfm), naegelsbachstrasse 25, 91052 erlangen, germany
2 friedrich-alexander-universität erlangen-nürnberg, institute of manufacturing metrology (fmt), naegelsbachstrasse 25, 91052 erlangen, germany

section: research paper

keywords: profilometry; servo system; uncertainty; simulation

citation: alexander schuler, albert weckenmann, tino hausotte, enhanced measurement of high aspect ratio surfaces by applied sensor tilting, acta imeko, vol. 3, no. 3, article 6, september 2014, identifier: imeko-acta-03 (2014)-03-06

editor: paolo carbone, university of perugia

received june 4th, 2013; in final form july 31st, 2014; published september 2014

copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

funding: this work was supported by the german research foundation (dfg), germany

corresponding author: alexander schuler, e-mail: alexander.schuler@fau.de

abstract

during tactile surface measurements the contact point between the probing tip and the surface varies depending on the local surface angle. to reduce the resulting measurement deviation on high slopes, a probing principle is investigated that applies a dynamic, surface-dependent sensor tilt. this probing process and the logic for the angle determination have been evaluated by simulation. a test stand based on a nanometer coordinate measuring machine is developed and fitted with a rotation kinematic based on stacked rotary axes. systematic positioning deviations of the kinematic are reduced by a compensation field. the test stand has been completed and results are presented.

1. introduction

for tactile surface measurements the surface topography is superimposed with the shape of the probing element and a morphological filtering is performed [1]. the actual contact point varies depending on the local surface slope and the tip's form. in contrast to 3d coordinate measuring machines with 3d force sensors, tactile scanning systems like profilometers or afms (atomic force microscopes) use a fixed probing force direction, assuming a probing point at the front of the applied tip [2], [3]. without knowledge of the actual contact point, large deviations occur on steep surfaces as the expected reference contact point is left. the resulting deviation can be compensated arithmetically to a certain degree [4], but when the probing point leaves the spherical front, the recorded trace can no longer be unambiguously reconstructed, practically limiting the application range.
usual tactile tips feature a conical or pyramidal base with a very small apex radius, limited by mechanical stability. for profilometers the base angle is usually up to 90° with apex radii up to 2 µm. figure 1 (left) shows the surface reconstruction when the front of the sensor tip is assumed as the probing point. during flank contacts the deviation increases significantly; it especially occurs during the measurement of workpieces with high slopes and high aspect ratios, e.g. cutting tools, freeform lenses or microstructures like micro lens arrays. this problem restricts the application of tactile surface probing to surfaces with limited curvature [5]. a probing principle is investigated to lift this restriction: an orthogonal orientation to the surface is kept during the measurement by using a dynamic, surface-slope-dependent tip rotation, displayed in figure 1 (right).

figure 1. left: no compensation; right: with compensation.

research has shown that the deviation on steep flanks can be reduced by using angled probes, but this leads to a higher measurement deviation for the rest of the sample [6]. with repeated measurements under different angles and the application of sensor data fusion the whole surface can be acquired, yet this requires a lot of time and introduces additional contributions to the measurement uncertainty [7], [8], [9]. by using sensor tilting during the measurement instead, both disadvantages are avoided. to reach this goal, research has been carried out [10]. in a first step the contact behaviour and the tilting process were implemented in a simulation environment (section 2). this allowed an early evaluation of the measurement deviation reduction and the development of algorithms to allocate a tilt angle of the tip to a surface point. with the simulation results the kinematic chain was chosen (section 3), sensor and calibration artefacts were selected (section 4) and preliminary tests (section 5) were conducted. afterwards the prototype setup was finished (section 6) and measurement results were acquired (section 7).

2. evaluation by simulation

the working principle of the simulator is based on collision checking between a tip model and a discretized surface model. a simplified 2d representation is selected to reduce the complexity and still keep the informative value for line scans. the tip is modelled, following the standard iso 3274, as a spherical tip and two lines enclosing the apex angle alpha. the surface profile is integrated as a discretized point cloud, with the function p(x, z = f(x)) giving the profile height at the coordinate x. to calculate the optimal probing angle of a specific surface point, methods and algorithms are required, referred to as a strategy.
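for a purely spherical apex probing vertically, the collision check reduces to a morphological operation on the discretized profile. a toy sketch under that simplification (the conical flanks of the full tip model are omitted here, and python is used only for illustration):

```python
import numpy as np

# toy 2d collision check: for each lateral position the tip-centre height is
# the highest position at which a sphere of radius r touches any profile
# point within reach; subtracting r gives the trace a front-probing-point
# reconstruction would record (a morphological dilation of the profile).
def probe_profile(x: np.ndarray, z: np.ndarray, r: float) -> np.ndarray:
    z_rec = np.empty_like(z)
    for j, xj in enumerate(x):
        near = np.abs(x - xj) <= r                     # points the sphere can touch
        dx = x[near] - xj
        z_touch = z[near] + np.sqrt(r ** 2 - dx ** 2)  # centre height per contact point
        z_rec[j] = z_touch.max() - r                   # assumed probing point at the front
    return z_rec
```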
several approaches were created; they can be categorized into knowledge-based, analysing and predicting strategies, or a mixture of the aforementioned. a first class of strategies, the knowledge-based ones, uses previous knowledge of the surface profile from an ideal model, e.g. a cad file. in the simplest case the tangent for each surface point p(x, z) is calculated, with p_i(x, z) being the point number i to be measured and f_stra(p_i) the calculated probing angle:

$$f_{\mathrm{stra}}(p_i) = \tan^{-1}\!\left(\frac{z_{i+1} - z_{i-1}}{2\,(x_{i+1} - x_i)}\right) \qquad (1)$$

a similar class, the analysing strategies, uses the measurement system itself to create a base profile in a first step; in a second step tilting can be applied. this approach does not require previous knowledge and can be applied to unknown profiles. a third class, the predicting strategies, calculates the most suitable tilt angle by extrapolating already acquired surface points during the measurement. generally the measurement starts without rotation and the surface points are recorded. the next points are extrapolated based on these values and the applied algorithm. with the estimation of the succeeding profile, the optimal tilt angle is calculated. for this class several extrapolation algorithms were selected and compared: a linear method, a cubic spline method, a newton method, a lagrange method and an aitken-neville method were implemented.

the effectiveness of the strategies and the achievable deviation reduction were compared for different tips and surfaces. relevant evaluation criteria were e.g. the measurement deviation, the calculated tilt angle, or the tilt velocity and acceleration. different sample surfaces were created to cover usual practical cases. figure 2 shows a sphere segment with a diameter of 1 mm, found e.g. on lenses, featuring a continuous change of slope.

figure 2. linear prediction: sphere segment, 1 mm diameter.

the simulated experiment used a tip apex radius of 10 µm, an opening angle of 60°, a maximum rotation angle of 45° and a linear predicting strategy. at the beginning of the sphere's edge a flank collision occurs and the resulting deviation suddenly rises. with the change of the extrapolated surface slope, a rotation is triggered and the deviation is reduced until it disappears, limited by the rotation speed and the maximum tilt angle. compared to a regular scan the mean deviation is reduced by 80 %.

a second structure features an abrupt transition of 18° between two constant slopes (figure 3), focussing on the adaption speed. the slope leads to a constant deviation contribution for the non-tilted case without computational probing point compensation. in this scenario predicting strategies completely remove the deviation, and furthermore analysing strategies can initiate the rotation before the slope change occurs. for the latter case the maximum deviation is reduced from 0.51 µm without tilt, mainly depending on the tip geometry, to 0.13 µm with an analysing strategy.

figure 3. strategy comparison: 18° ramp profile.

generally all tested strategies offer a significant deviation reduction and increase the detectable surface slope with a tilting system, compared to no countermeasures at all.
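two of the strategy classes can be sketched compactly, under the assumption that equation (1) is the central-difference tangent reconstructed above; the linear predictor and the 45° clipping mirror the simulated experiment:

```python
import numpy as np

# sketch of a knowledge-based strategy (tilt angle from the known profile
# tangent, equation (1)) and of a simple linear predicting strategy.
def knowledge_based_angle(x: np.ndarray, z: np.ndarray, i: int) -> float:
    """f_stra(p_i) in degrees, from the central-difference tangent."""
    slope = (z[i + 1] - z[i - 1]) / (2.0 * (x[i + 1] - x[i]))
    return float(np.degrees(np.arctan(slope)))

def linear_predicted_angle(x: np.ndarray, z: np.ndarray, i: int,
                           max_tilt: float = 45.0) -> float:
    """extrapolate the slope from the last two acquired points, clip the tilt."""
    slope = (z[i] - z[i - 1]) / (x[i] - x[i - 1])
    return float(np.clip(np.degrees(np.arctan(slope)), -max_tilt, max_tilt))
```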
3. hardware realization

to realize the simulated tilting process in a hardware prototype, a controlled precision servo system is necessary to perform the rotation around the tip's working point. a high positioning accuracy with low guidance deviation is essential; otherwise the measurement deviation would instead be increased when the tip leaves the intended scan trace. the basis of the planned test bed is a laser-interferometer-controlled nanopositioning and measuring machine of type sios nmm-1, with an axis resolution of 0.1 nm in three linear axes [11]. in this setup the workpiece is moved by the nmm-1, and the sensor and the rotation unit are installed firmly above it.

after investigating several kinematic chains, a stacking of two rotary axes was chosen (figure 4). the axis of the first rotation stage is oriented parallel to the cartesian height axis of the nmm-1. a second rotary stage under an angle of 45° is mounted below. the second one carries the sensor, which is again angled at 45°. in this setup, the centres of both axes align with the working point of the sensor, apart from alignment deviations. the second axis alters the rotation of the sensor, which corresponds to a rotation around the x- and y-axis of the nmm-1's cartesian coordinate system. with the additional movement of the first rotation stage around the z-axis, the sensor can be adjusted to match any necessary angle around the x- and y-axis from -90° to 90°, covering a hemisphere.

figure 4. schematic: a) sensor at 0°; b) sensor at 45°.

the importance of angular axis resolution in this setup is reduced by the in-axis alignment of the sensor. the only remaining non-systematic deviation factors are the radial and axial guidance deviations as well as tilt. positioning window estimations for the selected units result in a maximum deviation of ±1.5 µm for the position of the tip. the upper, larger axis has a diameter of 100 mm and uses a direct drive; the chosen model features a non-systematic tilt error motion of less than 1 arcsec and a total accuracy of 2 arcsec after calibration by the manufacturer. the lower, smaller axis with 30 mm diameter uses a piezoelectric drive, with a stick-slip operation for large angles and a scanning operation for small angles and for maintaining a position.

apart from guidance deviations, thermal influence sources and their effects have also been respected. to reduce the heat generation in the rotary drives, the large axis' control system was fine-tuned to minimize heat generation. additionally, the small piezo stage is self-locking and does not require energy to keep a position. thermal effects are also respected in the mechanical setup: the 45° connectors between the rotary stages and the sensor are made of invar, an alloy with a thermal expansion coefficient near zero.

concerning systematic deviation components, the most significant factor is the assembly of the axes and the installation of the sensor tip. to support their alignment process, an alignment help was constructively integrated. the tip holder as well as the mounting plate between the axes are designed with a central bore. this way the centre openings of both rotary axes are not obstructed and an uninterrupted optical path to the sensor tip is available. by using a cmm with a video sensor and a backlight option, the centricity of the sensor tip in the axes' centre can easily be assessed. furthermore, the axes' centre bores are used for the wiring of the sensor and the smaller axis. it is refrained from an unlimited number of revolutions to simplify the wiring.
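the reachable orientations of this chain can be checked with a small geometric model (an assumption-based sketch, not the authors' control code): the second axis lies 45° below the first, and the sensor is mounted 45° off the second axis, so rotating the second stage sweeps the sensor from vertical to horizontal while the first stage selects the azimuth.

```python
import numpy as np

def rot_z(a: float) -> np.ndarray:
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rodrigues(axis: np.ndarray, a: float) -> np.ndarray:
    """rotation matrix about a unit vector `axis` by angle a (radians)."""
    u = axis / np.linalg.norm(axis)
    k = np.array([[0, -u[2], u[1]], [u[2], 0, -u[0]], [-u[1], u[0], 0]])
    return np.eye(3) + np.sin(a) * k + (1.0 - np.cos(a)) * (k @ k)

AXIS2 = np.array([np.sin(np.pi / 4), 0.0, -np.cos(np.pi / 4)])  # 45° below axis 1
TIP0 = np.array([0.0, 0.0, -1.0])                               # sensor at 0° points down

def sensor_direction(theta1: float, theta2: float) -> np.ndarray:
    """unit sensor-axis direction for the two stage angles (radians)."""
    return rot_z(theta1) @ rodrigues(AXIS2, theta2) @ TIP0

# sensor_direction(0, 0) gives (0, 0, -1), i.e. 0° tilt, and
# sensor_direction(0, np.pi) gives (1, 0, 0), i.e. 90° tilt,
# consistent with the hemisphere coverage stated above.
```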
the working principle of the sensor prototype is derived from scanning tunnelling microscopy (stm) using the current through the nanometre ranged air gap between a conductive tip and workpiece. as opposed to regular stms no sharp wire tips are used but larger scaled spherical probing tips. the system allows for deliberately shaped tips, ranging from spheres to conical shapes or needles without having to respect mechanical restrictions [13]. for the test stand prototype two tip shapes were chosen. the first scenario uses a hard-metal sphere tip with 0.3 mm diameter equivalent to a tactile probe with a large apex radius. when a probing point at the front of the tip is assumed for reconstruction, the system behaves like a tactile version and the deviation can clearly be shown. the second scenario features a sharp metal needle with a smaller tip angle than a classical profilometer could operate with, equivalent to a near ideal tip. 4.2. calibration artefact  to calibrate the rotation system an artefact is necessary to determine the position of the sensor probing point in relation to the workpiece coordinate system. with this knowledge theoretically all systematic deviation components from the rotation system including alignment deviations can be compensated. as a reference structure and origin of the workpiece coordinate system a calibrated precision sphere grade g5 with a nominal diameter of 4 mm is used. for the spherical sensor tip, its centre results from the origin of the calibration sphere plus the sum of both radii in probing direction. as measurement subject three angled planes with rounded transitions are chosen, figure 5. the angle between each two neighbouring flats is 30°. additionally all flats are angled in the perpendicular direction too. the test subject is part of a cutting insert separated by wire erosion. with the calibration sphere the position of the rotated sensor can be acquired for all angle steps and the systematic positioning deviations can be compensated. figure 4. schematic: a) sensor at 0°; b) sensor at 45°.  acta imeko | www.imeko.org  september 2014 | volume 3 | number 3 | 25  the resulting correction vectors from the workpiece origin to the probing sphere centre are stored in a calibration field, accessed by the machine control during the scan. another compensation vector points from the centre of the sensor tip’s curvature to its reference contact point. for the used spherical tip this vector compensates for the sphere’s radius. the measurement process with the calibration field is performed in the following steps. starting at zero rotation a number of points are recorded. the values are transferred from the machine coordinate system into the workpiece coordinate system with the current compensation vectors. afterwards the algorithms of the chosen strategy are applied, calculating the surface angle and the next suitable tilt angle. for a sufficient large change of angle the probe gets disconnected from the surface and the rotation is performed. an uninterrupted scan without surface disconnection will be realized later. the new probe centre point is loaded from the calibration field and the sensor is brought over the last recorded surface point. afterwards the measurement continues. an optional fine calibration can be applied to acquire a more recent calibration just after the rotation has been performed. 5. 
5. preliminary tests
to implement and test all strategy algorithms and measurement sequences before the setup of the stacked axes was finalized, the test stand was realized with a preliminary rotation unit. a simple manual rotation axis with a diameter of 25 mm was installed to test all algorithms, the strategies and the calibration field. as the manual axis' reproducibility is insufficient and not comparable with the planned automated stages, the described fine calibration after each rotation was also applied. figure 6 shows all available data merged in the workpiece coordinate system. above the hemisphere as the origin are the probe centres from the calibration field at different angles, displayed with a sphere of 0.3 mm diameter. in the shown case, seven angle positions from -30° to 30° were acquired. due to the used kinematic, the centre points describe a circular path. a first measurement on the angled flats with the described logic is shown in figure 7a) and b), split into the left flank and the right flank. the blue curve represents a scan without tilt, assuming a fixed probing point at the lowest part of the tip, as a profilometer would yield. the green curve uses sensor rotation and assumes a moving reference contact vector always aligned in the rotation direction. the rotation is performed with a fixed angular step width of 10°. it can be seen that both ways result in the same values on the flat top, as expected. in the sloped parts, the methods differ, as the rotated tip is more capable of keeping the actual probing point near the reference point. this can clearly be seen at the beginning of figure 7a), when the rotated tip starts at 0° and then, after some surface points, switches to 30° and the deviation is instantaneously reduced. other effects of interest are discontinuities in the rotated scan. they are caused by the sudden change of the assumed probing vector when the axis is turned with a 10° increment. for the demonstrated test case with the manual axis, the measurement deviation was reduced down to 18 µm.

6. implementation of the stacked axes
parallel to the functional tests with the manual axis, the setup and integration of the two stacked rotary axes were performed. this procedure covered the axes' mechanical installation and alignment, the software interface to the axes as well as to the nmm-1's control system, and the electrical installation of the near-tactile probe. after manufacturing of the invar adapter parts, both axes were mounted onto each other and installed overhead in a mounting frame for testing purposes. the control sequences and the corresponding behaviour had to be tested. as both axis controllers can be directly accessed by a c or c++ library, matlab functions were written to access the library and execute hardware commands. as the nmm-1's host computer uses matlab for the sequence control, the axes can easily be integrated. command sequences such as homing the axes, moving to an angle in the axis coordinate system and retrieving the current angle were realized. the mechanical integration into the nmm-1 was realized with a regular plug-in module for sensor exchanges in the nmm-1. for this purpose, the nmm-1's top plate, as part of the metrological frame, has a slot for a plug-in module.

figure 5. calibration sphere and angled flats.
figure 6. workpiece coordinate system and all acquired data.
figure 7. angled flat: a) left transition; b) right transition.
the plug-in plate was also made of invar and carries the rotary axes below it. to have sufficient clear space below the top plate, its mounting height was increased by 76 mm by adding invar distance sleeves. the sleeves and the plug-in serve to position the sensor tip at the virtual crossing of all laser beams of the nmm-1. this way, the abbe principle is fulfilled for the nmm-1's axes [14]. during the design, the collision volumes of the rotation axes and the nmm-1's workpiece carrier were relevant, as the edge of the corner mirror is higher than the workpiece mounting plane. by safe construction, the movement volumes do not overlap at any position or rotation angle and an involuntary collision is avoided. figure 8 shows the installed axes in the nmm-1. the workpiece carrier and the border of the laser reflectors are located under a white protective casing. for the integration of the sensor, an electrical connection between the tip holder and the workpiece was necessary. the tip holder is mounted on the small rotary axis, whose top plate is electrically isolated. the tip holder is connected to a precision voltage source to set the tip under operating voltage. the workpiece carrier is electrically connected to protective earth, therefore an additional isolation layer was installed. the workpiece is mounted on the carrier plate as already shown in figure 5. the carrier plate is electrically connected by a flexible shielded wire to an amplifier circuit, which during operation converts the current in the nanoampere range to a corresponding voltage in the volt range. this sensor signal is connected to the input port of the nmm-1. on the software side, the sensor is integrated by configuring the input ports and recording a sensor characteristic. by storing a third-order polynomial in the digital signal processor of the nmm-1, the momentary displacement between sensor and workpiece relative to a chosen setpoint can be calculated and used for position control during a surface scan. another possible way of operation is the recording of only a single surface point after the setpoint is reached, similar to a touch-trigger probe. with the integration of the stacked axes, all steps performed with the manual axis were repeated. first, the calibration field was recorded again with angle increments from -30° to +30° in 5° steps in the xz-plane of the nmm-1. in comparison to the manual axis, an additional coordinate translation step had to be added to find the corresponding angles of each stacked rotary axis for a given angle in the nmm-1's cartesian coordinate system. for simplification, this nonlinear relationship was calculated beforehand and stored in a lookup table; a minimal sketch of this step follows below.
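such a precomputed lookup could be handled, for instance, as follows (a sketch only: the support values are placeholders and the cubic interpolation is one possible choice, not necessarily the one used in the test stand):

```python
import numpy as np
from scipy.interpolate import interp1d

# placeholder table: requested tilt angle around the nmm-1 y-axis (deg)
# -> command angles of the two stacked rotary axes (deg), precomputed
# from the kinematics of the two 45 deg stages
cartesian_tilt = np.array([-30.0, -15.0, 0.0, 15.0, 30.0])
axis1_angle    = np.array([-42.9, -21.1, 0.0, 21.1, 42.9])
axis2_angle    = np.array([ 61.2,  75.4, 90.0, 104.6, 118.8])

axis1_of_tilt = interp1d(cartesian_tilt, axis1_angle, kind="cubic")
axis2_of_tilt = interp1d(cartesian_tilt, axis2_angle, kind="cubic")

def stacked_axis_angles(tilt_deg):
    """return the (axis 1, axis 2) command angles for a requested tilt."""
    return float(axis1_of_tilt(tilt_deg)), float(axis2_of_tilt(tilt_deg))

print(stacked_axis_angles(10.0))
```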
7. automated sensor tilting
after implementation of all necessary command sequences, an automated measurement on the angled flats was repeated. the angled flats measurement was started again at no rotation and, after enough recorded values, the surface angle was determined to be 30°. the tip was removed from the surface and rotated by 30° along the y-axis, perpendicular to the scan line. with the knowledge of the tip centre position in the workpiece coordinate system, the contact point from the last measurement point was determined and re-addressed with the rotated tip. figure 9 shows the system during operation on the cutting insert. the results are displayed in figure 10, showing the first half of a 3 mm line scan. again, the blue curve uses a fixed reference contact point like a regular profilometer; the green curve uses the sensor rotation principle and keeps the tip perpendicular to the surface. the deviation is again reduced to 18 µm with the large 0.3 mm tip. with the sensor rotation prototype completed and the first results of an automated measurement process acquired, further aspects can be investigated. the next steps cover the determination of the achievable repeatability of the setup and the measurement on other calibrated artefacts, e.g. a microcontour artefact.

8. conclusions
a probing principle based on dynamic sensor tilting is investigated with a rotation around the sensor's working point. the measurement deviation mechanics and strategies to deliver optimal rotation angles are modelled in a simulation environment. the results demonstrate the improvement by sensor tilting on practical test samples, reducing the measurement deviation and increasing the usable surface slope. the principle was transferred to a hardware prototype based on a nanopositioning and measuring machine with an axis resolution of 0.1 nm as test stand. the kinematic chain for the sensor rotation consists of two stacked rotary axes under an angle of 45°, allowing two rotary degrees of freedom. occurring error sources like thermal effects are minimized by the stringent use of materials with low thermal expansion coefficients. to reduce the remaining systematic positioning deviation of the rotation unit, an in-situ calibration strategy was realized and applied. the test stand has been completed and a mechanical, electrical and software-side integration was performed. as sensor, a proven near-tactile electrical sensor type was applied, which accelerated the setup and allows for a wider range of sensor tips. first results of the automated sensor tilting were gathered on a cutting insert with a surface angle range from -30° to +30°. the increase of the possible operation angle and the reduction of the measurement deviation are demonstrated. future tasks cover the assessment of the repeatability of the rotary sensor positioning and measurements on different calibrated sample surfaces in comparison with regular profilometers.

figure 8. axis setup installed in the nmm-1.
figure 9. tilted measurement process (clamped tip with a 0.3 mm sphere, 4 mm calibration sphere, angled flats of the cutting insert).

acknowledgement
this article is based on the research project "ultra-accurate acquisition of highly curved surfaces by slope-dependent dynamic sensor tracking with rotatory flexure hinges". the authors thank the german research foundation (deutsche forschungsgemeinschaft, dfg) for supporting this research.

references
[1] m. krystek, iso filters for precision engineering, technisches messen 76 (2009), pp. 133-159.
[2] p. m. lonardo, h. trumpold, l. de chiffre, progress in 3d surface microtopography characterization, annals of the cirp 45 (1996), pp. 589-598.
[3] a. weckenmann, t. estler, g. peggs, d. mcmurtry, probing systems in dimensional metrology, annals of the cirp 53 (2004), pp. 657-684.
[4] d. keller, reconstruction of stm and afm images distorted by finite-sized tips, surface science 253 (1991), pp. 353-364.
[5] h. n. hansen, k. carneiro, h. haitjema, l. de chiffre, dimensional micro and nanometrology, annals of the cirp 55 (2006), pp. 721-743.
[6] e. lebrasseur, j.-b. pourciel, t. bourouina, t. masuzawa, h. fujita, a new characterization tool for vertical profile measurement of high-aspect-ratio microstructures, journal of micromechanics and microengineering 12 (2002), pp. 280-285.
[7] y. hua, c. coggins, s. park, "advanced 3d metrology atomic force microscope", proceedings asmc 2010, july 11-13, 2010, san francisco, usa, pp. 7-10.
[8] f. marinello, p. bariani, a. pasquini, l. de chiffre, m. bossard, g. b. picotto, increase of maximum detectable slope with optical profilers, through controlled tilting and image processing, measurement science and technology 18 (2007), pp. 384-389.
[9] a. weckenmann, x. jiang, k.-d. sommer, u. neuschaefer-rube, j. seewig, l. shaw, t. estler, multisensor data fusion in dimensional metrology, annals of the cirp 58 (2009), pp. 701-721.
[10] a. weckenmann, a. schuler, r. j. b. ngassam, "enhanced measurement of steep surfaces by sensor tilting", proceedings of the 56th international scientific colloquium, sept. 12-16, ilmenau, germany, 2011, id 134:1.
[11] e. manske, t. hausotte, r. mastylo, t. machleidt, k.-h. franke, g. jäger, new applications of the nanopositioning and nanomeasuring machine by using advanced tactile and non-tactile probes, measurement science and technology 18 (2007), pp. 520-527.
[12] a. weckenmann, j. hoffmann, "long range 3d scanning tunnelling microscopy", annals of the cirp 56 (2007), pp. 525-528.
[13] j. hoffmann, a. schuler, nanometer resolving coordinate metrology using electrical probing, technisches messen 78 (2011), pp. 142-149.
[14] i. schmidt, t. hausotte, u. gerhardt, e. manske, g. jäger, investigations and calculations into decreasing the uncertainty of a nanopositioning and nanomeasuring machine (npm-machine), measurement science and technology 18 (2007), pp. 482-486.

figure 10. cutting insert measurement with the final setup.

uncertainty in mechanical deformation of a fabry-perot cavity due to pressure: towards best mechanical configuration
acta imeko issn: 2221-870x september 2022, volume 11, number 3, 1-8
s. moltó1, m. a. sáenz-nuño2, e. bernabeu3, m. n. medina1
1 centro español de metrología, calle alfar 2, 28760 tres cantos, españa
2 instituto de investigación tecnológica, escuela técnica superior de ingeniería-icai, universidad pontificia comillas, 28015 madrid, españa
3 universidad complutense de madrid, españa
section: research paper
keywords: fabry-perot; pressure measurement; refractometry; quantum pascal
citation: s. moltó, m. a. sáenz-nuño, e. bernabeu, m. n. medina, uncertainty in mechanical deformation of a fabry-perot cavity due to pressure: towards best mechanical configuration, acta imeko, vol. 11, no. 3, article 15, september 2022, identifier: imeko-acta-11 (2022)-03-15
section editor: francesco lamonaca, university of calabria, italy
received october 27, 2021; in final form august 30, 2022; published september 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: s. moltó, e-mail: smolto@cem.es

abstract: in this paper, a study of the deformation of a refractometer used to achieve a quantum realization of the pascal is presented. first, the propagation of the uncertainty in the measurement of pressure due to mechanical deformation was derived. then, deformation simulations were made with a cavity designed by the cnam, whose results are reported in the 18sib04 quantumpascal empir project; this step aims to corroborate the methodology used in the simulations. the pressure-normalized relative deformation of this design obtained in 18sib04 is (-6.390 ± 0.015) × 10⁻¹² pa⁻¹; the result obtained with our method is (-6.384 ± 0.032) × 10⁻¹² pa⁻¹, which differs by 0.001 times from the value obtained in 18sib04. finally, a cylindrical cavity design is presented and simulated, obtaining a pressure-normalized relative deformation of (-5.758584 ± 0.00000047) × 10⁻¹² pa⁻¹, which deforms 0.1 times less.

1. introduction
the 18sib04 quantumpascal empir project deals with the development of a quantum-based pressure standard in order to improve the uncertainty and traceability that the current methods offer.
this project is based on the application of a fabry-perot interferometer to measure in a pressure range of 1 pa to 100 kpa. however, some parameters must be carefully handled, such as the deformation, the temperature and the gas properties. the cavity deformation due to the gas pressure was simulated using finite element method (fem) software. therefore, a study of the uncertainty obtained in these simulations and of its propagation in the evaluation of pressure was needed. a previous study of the simulated cavity deformation due to pressure was carried out in [1], and its results are used herein to show the difference obtained due to the use of different software. finally, a new cylindrical design of the cavity is proposed and compared to the design proposed in [1].

2. theory
the relative deformation in length of the fabry-perot cavity with the gas pressure can be described with a linear dependency, as in equation (1), where $L_0$ is the undeformed cavity length, $\Delta L$ is the change in cavity length (the deformation), $P$ is the pressure and $\kappa$ is the pressure-normalized deformation:

$\frac{\Delta L}{L_0} = \kappa P$ . (1)

$L_0$ is calculated as the distance between the centres of the reflective faces of the mirrors, and $\Delta L$ as the change in this distance. table 1 summarizes the data used in this work, so that the pressure can be calculated with the equations below using the values of the table.

the gas parameters can be calculated from the refractivity, whose change is related to the change of the laser frequency $\Delta\nu$, the change in the number of modes ($\Delta q$) and the deformation coefficient ($\epsilon$) using equation (2) [3]. this equation can be used with absolute pressures up to 10 kpa:

$\Delta(n-1) = (n-1) = \frac{\dfrac{\Delta q}{q_0} + (1 + 2\xi + \eta)\dfrac{\Delta\nu}{\nu_0}}{1 + \epsilon + (1 + 2\xi + \eta)\dfrac{\Delta\nu}{\nu_0}}$ . (2)

the deformation coefficient ($\epsilon$) [3] depends on the pressure-normalized deformation ($\kappa$) as shown in equation (3). the change in the number of modes is an experimental measurement, but it can be simulated using equation (4), where $\lambda_0$ is the wavelength in vacuum and $\lambda$ is the wavelength affected by the change in refractive index ($\lambda = \lambda_0/n$). also, $\Delta\nu$ is the laser change in frequency, whose maximum value is $\nu_{FSR}$ when mode jumping is used:

$\epsilon = \frac{\kappa P}{n-1} = \kappa \, \frac{2 k_B T N_A}{3 A_r}$ (3)

$\Delta q = q - q_0 = \frac{L}{\lambda/2} - \frac{L_0}{\lambda_0/2} = 2 L_0 \left[ \frac{\lambda_0 + \lambda_0 \kappa P - \lambda}{\lambda\,\lambda_0} \right]$ . (4)
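as a numerical illustration of equations (1)-(4) (a sketch, not code from the paper; the constants follow table 1 and the refractivity is taken in its ideal-gas limit), the following python snippet computes the deformation coefficient and the simulated change in the number of modes for a given κ:

```python
import numpy as np

# constants and cavity values in the style of table 1
k_B  = 1.380649e-23        # boltzmann constant, J/K
N_A  = 6.02214076e23       # avogadro number, 1/mol
A_r  = 4.44613930e-6       # molar polarizability, m^3/mol
T    = 273.0               # temperature, K
L0   = 0.1                 # cavity length in vacuum, m
lam0 = 632.8e-9            # vacuum wavelength, m
q0   = 2 * L0 / lam0       # number of modes in vacuum
kappa = -6.39e-12          # pressure-normalized deformation, 1/Pa

def refractivity(P):
    """n-1 from the ideal-gas limit of eqs. (5)-(6)."""
    rho = P / (k_B * T * N_A)          # molar density, ideal-gas limit
    return 1.5 * A_r * rho             # inverting eq. (5) to first order

def epsilon(kappa):
    """deformation coefficient, eq. (3)."""
    return kappa * 2 * k_B * T * N_A / (3 * A_r)

def delta_q(P, kappa):
    """simulated change in the number of modes, eq. (4)."""
    n = 1 + refractivity(P)
    lam = lam0 / n                     # wavelength under pressure
    return 2 * L0 * (lam0 + lam0 * kappa * P - lam) / (lam * lam0)

for P in (1.0, 1e3, 1e4):              # Pa
    print(P, epsilon(kappa), delta_q(P, kappa))
```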
the molar density is related to the refractivity by equation (5) [3] and to the pressure by equation (6) [3], where $\tilde{B}_\rho$ is the first density virial coefficient and $B_P(T)$ is the first pressure virial coefficient, calculated as $[-4.571 + 0.1974\,(T(\mathrm{K})-300) - 5.137 \times 10^{-4}\,(T(\mathrm{K})-300)^2] \times 10^{-6}$ m³/mol:

$\rho = \frac{2}{3 A_r}\,(n-1)\left[1 + \tilde{B}_\rho (n-1)\right]$ (5)

$P = k_B T N_A \rho \left[1 + B_P(T)\,\rho\right]$ . (6)

3. uncertainty propagation
in order to calculate the uncertainty of pressure due to the uncertainty in $\kappa$, it is necessary to obtain an expression of pressure depending directly on $\kappa$. from equations (5), (6) and the expression of $B_P(T)$ it is possible to derive that relation (equation (7)), where the value of $C_0$ is given in equation (8):

$P(\Delta\nu,\Delta q,\kappa,T) = \frac{C_0 T \Delta q}{(C_0 T \kappa + 1) q_0} \left( \frac{\tilde{B}_\rho \Delta q}{(C_0 T \kappa + 1) q_0} + 1 \right) \times \left( 1 + \frac{2 \Delta q \left( \frac{\tilde{B}_\rho \Delta q}{(C_0 T \kappa + 1) q_0} + 1 \right)}{3 A_r (C_0 T \kappa + 1) q_0}\, B_P(T) \right)$ (7)

$C_0 = \frac{2 N_A k_B}{3 A_r}$ . (8)

considering that $\Delta q = \frac{2 L_0 (1 + \kappa P_\mathrm{nominal})}{\lambda} - q_0$ and $\Delta\nu = 0$, an expression of pressure as $P = P(\kappa, P_\mathrm{nominal}, T)$ is obtained (equation (9), i.e. equation (7) with this $\Delta q$ substituted):

$P(\kappa,P_\mathrm{nominal},T) = \frac{C_0 T \left( \frac{2 (L_0 P_\mathrm{nominal} \kappa + L_0)}{\lambda} - q_0 \right)}{(C_0 T \kappa + 1) q_0} \times \left( \frac{\tilde{B}_\rho \left( \frac{2 (L_0 P_\mathrm{nominal} \kappa + L_0)}{\lambda} - q_0 \right)}{(C_0 T \kappa + 1) q_0} + 1 \right) \times \left( \frac{2 B_P(T) \left( \frac{2 (L_0 P_\mathrm{nominal} \kappa + L_0)}{\lambda} - q_0 \right)}{3 A_r (C_0 T \kappa + 1) q_0} \left( \frac{\tilde{B}_\rho \left( \frac{2 (L_0 P_\mathrm{nominal} \kappa + L_0)}{\lambda} - q_0 \right)}{(C_0 T \kappa + 1) q_0} + 1 \right) + 1 \right)$ . (9)

in this work, it is assumed that the only source of uncertainty is the pressure-normalized deformation ($\kappa$), as the aim is to analyse how this particular magnitude affects the measurement of pressure. from equation (7), applying the rule of propagation of uncertainty, equation (10) is obtained, which estimates the relative uncertainty in pressure with respect to $\kappa$; $C_\kappa$ is the sensitivity coefficient, calculated with equation (11). as the pressure of equation (9) depends on the nominal pressure, the relative uncertainty depends on that parameter:

$w(P) = \frac{u(P)}{P} \approx \left| C_\kappa \frac{u(\kappa)}{\kappa} \right|$ (10)

$C_\kappa = \frac{\partial P(\kappa, P_\mathrm{nominal}, T)}{\partial \kappa}$ . (11)

table 1. coefficients and constants used in this work.
parameter | designation | value
$L_0$ | cavity length in vacuum | 0.1 m
$n$ | refractive index | $n \geq 1$
$A_r$ | molar polarizability | 4.44613930 × 10⁻⁶ m³/mol [2]
$B_r$ | first refractive virial coefficient | 0.812 × 10⁻¹² m⁶/mol² [2]
$\tilde{B}_\rho$ | first density virial coefficient | $-[1 + 4 (B_r/A_r^2)]/6$ [2]
$B_P(T)$ | first pressure virial coefficient | $[-4.571 + 0.1974 (T(\mathrm{K})-300) - 5.137 \times 10^{-4} (T(\mathrm{K})-300)^2] \times 10^{-6}$ m³/mol [2]
$\xi$ | mirror dispersion coefficient | 11 × 10⁻⁶ [2]
$\eta$ | gas dispersion coefficient | 1 × 10⁻⁶ [2]
$\lambda_0$ | wavelength in vacuum | 632.8 nm
$\lambda$ | wavelength affected by pressure | $\lambda = \lambda_0/n$
$\nu_0$ | frequency in vacuum | $c/\lambda_0$
$q_0$ | number of length modes in vacuum | $2 L_0/\lambda_0$
$\Delta\nu$ | laser change in frequency | assumed as 0
$\Delta q$ | change in number of modes | estimated with equation (4)
$\epsilon$ | deformation coefficient | equation (3) [3]
$T$ | temperature | 273 k
$P$ | nominal pressure | from 1 pa to 10 kpa
$N_A$ | avogadro number | value of [3]
$k_B$ | boltzmann constant | value of [3]
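a minimal numerical sketch of this propagation (not the authors' code; the virial corrections of equation (9) are dropped, so only the order of magnitude is reproduced) replaces the analytical derivative of equation (11) with a central finite difference:

```python
import numpy as np

# constants as in table 1 (numerical values assumed here for the demo)
k_B, N_A = 1.380649e-23, 6.02214076e23
A_r, T   = 4.44613930e-6, 273.0
L0, lam0 = 0.1, 632.8e-9
q0 = 2 * L0 / lam0
C0 = 2 * N_A * k_B / (3 * A_r)              # equation (8)

def pressure(kappa, P_nom):
    """first-order sketch of equation (9): virial corrections dropped,
    ideal-gas refractivity n-1 = P_nom/(C0*T) used for the wavelength."""
    n = 1 + P_nom / (C0 * T)
    lam = lam0 / n
    dq = 2 * L0 * (1 + kappa * P_nom) / lam - q0
    return C0 * T * dq / ((C0 * T * kappa + 1) * q0)

def sensitivity(kappa, P_nom, rel_step=1e-3):
    """C_kappa = dP/dkappa (equation (11)), by central finite difference."""
    h = abs(kappa) * rel_step
    return (pressure(kappa + h, P_nom) - pressure(kappa - h, P_nom)) / (2 * h)

def w_P(kappa, u_kappa, P_nom):
    """relative pressure uncertainty caused by u(kappa), cf. equation (10)."""
    return abs(sensitivity(kappa, P_nom)) * u_kappa / pressure(kappa, P_nom)

# prints a value of the order of 10^-12 at 10 kPa, consistent in magnitude
# with the relative uncertainties discussed in section 5.1
print(w_P(-6.38378e-12, 3.2e-16, 1e4))
```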
4. cnam cavity description
the cavity modelled for the simulation was built by the cnam [5]. figure 1 shows a 3d model of the fabry-perot cavity. the spacer is made of zerodur and the mirrors are made of fused silica. the dimensions of the zerodur spacer are specified in figure 2. the mirrors were considered to be flat. the properties of the materials used are collected in table 2. in order to reduce the computational time, only an eighth of the cavity was simulated, using the symmetry restrictions available in the software. this selection is represented in figure 3, where the simulated section of the fabry-perot cavity is dyed red. with the purpose of simulating the cavity inside a vacuum chamber, the same pressure load was applied to every face of the fabry-perot cavity (same inner and outer pressure). the only restriction applied to the model was that the spacer and the mirrors would not rotate.

figure 1. fp cavity model.
figure 2. dimensions of the zerodur spacer in mm; all drilling diameters are 10 mm.
figure 3. cnam fp cavity model with the simulated section dyed red.

table 2. material properties used in the simulations.
parameter | spacer material (zerodur) | mirror material (fused silica)
young modulus (gpa) | 90.3 | 73
poisson ratio | 0.24 | 0.155
density (kg/m³) | 2530 | 2195

5. simulation results
the solid edge 2021 results are shown as a colormap of the deformation in mm in figure 4. an inhomogeneous deformation was observed, but it was corrected using the symmetries of the design: simulating a part of the model incorporates these symmetries without adding extra conditions to the model.

5.1. cavity relative deformation dependency on the number of mesh elements
the relative deformation of the cavity calculated in the simulation depends on the number of mesh elements used, as shown in figure 5. the convergence of $\kappa$ with the number of mesh elements was not achieved, so two curve fits were made: one to the function $y_1(x) = a x^{-4} + b x^{-2} + c x^{-1} + d$ and another to $y_2(x) = a (\log(b x))^{-1} + d$, where $x$ is the number of mesh elements and $y_i$ is the value of $\kappa$. these functions were selected, among the group of functions that are constant when the number of mesh elements tends to infinity, as shown in equation (12):

$\lim_{x \to \infty} \left( a x^{-4} + b x^{-2} + c x^{-1} + d \right) = d = \kappa$ , $\lim_{x \to \infty} \left( \frac{a}{\log(b x)} + d \right) = d = \kappa$ . (12)

the values obtained in the fitting are collected in table 3. using equation (12), the value of $\kappa$ is obtained for each curve fit: the value for $y_1$ is $\kappa_1 = (-6.38378 \times 10^{-12} \pm 3.2 \times 10^{-16})$ pa⁻¹ and the value for $y_2$ is $\kappa_2 = (-6.3868 \times 10^{-12} \pm 3.2 \times 10^{-15})$ pa⁻¹. the uncertainty is calculated from the variance-covariance matrix returned by the function curve_fit() of the scipy package optimize [4], which uses $\mathrm{cov}(x,y) = C_{xy} = \frac{1}{N}\sum_i (x_i - \bar{x})(y_i - \bar{y})$ to estimate it. the first curve gives less uncertainty than the second. with the relative deformation of the cavity over pressure from table 3, the values of table 1 and equation (9), it is possible to calculate the pressure that the fabry-perot cavity will have for a given nominal pressure, obtaining in figure 6 the difference between the calculated and nominal values: near 0 pa there is an asymptotic behaviour, but as the nominal pressure increases a linear dependency appears. as said before, the relative uncertainty of pressure depends on the nominal pressure (equations (10) and (11)). figure 7 represents the evolution of the pressure uncertainty with the nominal pressure: a linear dependency is shown, and the uncertainty obtained using $\kappa_2$ increases faster than the one obtained with $\kappa_1$. also, the relative uncertainty in pressure calculated in the pressure range under 10 kpa is under 9 × 10⁻¹².

figure 4. cavity deformation for a pressure of 80 kpa.
figure 5. pressure-normalized deformation versus number of mesh elements; the data were fitted to two curves.
figure 6. relative difference between calculated and nominal pressure ((p_nominal − p)/p_nominal), where κ₁ is the pressure-normalized deformation calculated with y₁ and κ₂ the one calculated with y₂.
figure 7. relative uncertainty of the calculated pressure with κ₁ = −6.38378 × 10⁻¹² pa⁻¹ and κ₂ = −6.3868 × 10⁻¹² pa⁻¹ versus the nominal pressure.

table 3. values of the mean square error (mse) of the fit, the mean absolute error (mae) of the fit and κ in pa⁻¹ for each fit.
fit | mse | mae | κ | u(κ) | u(κ)/κ
y₁(x) = a x⁻⁴ + b x⁻² + c x⁻¹ + d | 1.41 × 10⁻²³ | 2.83 × 10⁻¹² | −6.38378 × 10⁻¹² | 3.2 × 10⁻¹⁶ | 4.9 × 10⁻⁵
y₂(x) = a (log(b x))⁻¹ + d | 4.78 × 10⁻³¹ | 4.85 × 10⁻¹⁶ | −6.3868 × 10⁻¹² | 1.7 × 10⁻¹⁵ | 2.7 × 10⁻⁴
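the extrapolation described in section 5.1 can be sketched with scipy as follows (the mesh counts and κ values below are made-up placeholders, not the paper's data; the x-axis is rescaled only for numerical conditioning, which does not change the limit parameter d):

```python
import numpy as np
from scipy.optimize import curve_fit

# placeholder convergence data: number of mesh elements vs simulated kappa
x = np.array([2e4, 1e5, 5e5, 1e6, 2e6])
y = np.array([-6.379, -6.381, -6.3828, -6.3834, -6.3836]) * 1e-12   # Pa^-1

def y1(x, a, b, c, d):
    """fit model whose x -> infinity limit is d = kappa (equation (12))."""
    return a * x**-4 + b * x**-2 + c * x**-1 + d

xs = x / x.max()                                   # rescale for conditioning
popt, pcov = curve_fit(y1, xs, y, p0=(0.0, 0.0, 0.0, y[-1]))
kappa = popt[3]
u_kappa = np.sqrt(pcov[3, 3])                      # standard uncertainty of d
print(f"kappa = {kappa:.5e} +/- {u_kappa:.1e} Pa^-1")
```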
5.2. comparison with results from [1]
a comparison between the results of simulations made with different software and institutions is needed to verify that the simulation procedure of every participant in the 18sib04 quantumpascal empir project is correct. in [1], the results of simulating the deformation of the fabry-perot cavity are shown; they are collected in table 4. the result simulated with the solid edge software by the cem differs from the average of [1] by less than 10⁻³ times its value. the uncertainty of the cem's simulation was calculated as a rectangular distribution centred on −6.3846 × 10⁻¹² pa⁻¹ and with a width equal to two times the difference between the cem's value and the average value of [1] (2.24 × 10⁻¹⁴ pa⁻¹). using the values of table 4, table 1 and equation (7), the pressure for the average value of $\kappa$ and for the value calculated with solid edge is obtained. in figure 8, the relative difference of each pressure with respect to the nominal pressure is represented; the same behaviour as in figure 6 is shown. also, the values are so similar that the effect on the pressure cannot be distinguished in this figure for pressures over 10 pa; hence, in figure 9, the relative difference between the pressure calculated with the $\kappa$ obtained by [1] and the pressure calculated with the $\kappa$ obtained with solid edge is represented. in figure 9, the same asymptotic behaviour is shown. the maximum relative difference, obtained near 0 pa where the asymptote is located, is around 3.5 × 10⁻⁸; after the elbow of the function, the relative difference in pressure is less than 2 × 10⁻¹⁰. as the relative uncertainty of pressure depends on the nominal pressure (equations (10) and (11)), figure 10 depicts the evolution of the pressure uncertainty with the nominal pressure. a linear dependency is shown, and the uncertainty of the pressure obtained with the cem's value increases faster. also, the relative uncertainty calculated for the range of pressure where equation (2) is acceptable is under 3 × 10⁻¹⁰.

table 4. summary of the simulated pressure-normalized cavity deformation using the finest meshing by each partner and software.
partner - software | pressure-normalized deformation (κ) (ΔL/L)/P in 10⁻¹² pa⁻¹ | number of mesh elements
a - comsol [1] | -6.3900 | 1 735 324
b - comsol [1] | -6.3899 | 190 000
b - ansys [1] | -6.3908 | 2 948 108
c - ansys [1] | -6.3900 | 21 061
d - comsol [1] | -6.3902 | 3 168 777
average value [1] | -6.3902 | -
standard error of the mean (σₓ) [1] | 0.00015 | -
cem - solid edge | -6.3846 | 2 197 477
uncertainty | 0.0032 | -

figure 8. relative difference between calculated pressure and nominal pressure ((p_nominal − p)/p_nominal).
figure 9. relative difference between the two calculated pressures ((p − p_cem)/p), where p is the pressure calculated with the κ of [1] and p_cem is the pressure calculated with the cem's κ; in the first plot the uncertainty is not appreciable, but it can be seen in the enlargement below.
figure 10. relative uncertainty of the calculated pressure, with κ obtained by [1] and κ calculated by the cem with the solid edge software, versus the nominal pressure. R² is the linear correlation coefficient.
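the rectangular-distribution evaluation mentioned above can be sketched as follows (standard gum treatment of a rectangular probability density; taking the half-width as the difference between the two values reproduces the 0.0032 × 10⁻¹² pa⁻¹ uncertainty entry of table 4):

```python
import math

kappa_cem = -6.3846e-12   # Pa^-1, solid edge result
kappa_avg = -6.3902e-12   # Pa^-1, average of [1]

half_width = abs(kappa_cem - kappa_avg)     # half-width of the rectangle
u_kappa = half_width / math.sqrt(3)         # standard uncertainty, rectangular pdf
print(f"u(kappa) = {u_kappa:.2e} Pa^-1")    # ~3.2e-15 Pa^-1
```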
6. cem's cylindrical cavity
with the purpose of solving the symmetry rupture caused by using the spacer as both an inner vacuum chamber and a gas chamber, a new cylindrical design was proposed.

6.1. cavity description
the designed cavity is represented in figure 11, where the simulated section is coloured red. although this time the symmetry is only a quarter of the model and not an eighth as before, an eighth was taken after checking that the results were the same as for the full model; this way, a similar number of mesh elements is analysed for both models. the spacer and mirrors of this cavity are made of schott's zerodur class 0, whose properties are shown in table 2. the zerodur class 0 mirrors provide a reflectance over 97 % for a wavelength of 633 nm, which is the wavelength that will be used in the standard. the dimensions of the zerodur spacer are specified in figure 12. the geometry of this spacer is simpler than the one presented in figure 2, which leads to faster calculations. the inner diameter of the zerodur spacer was selected to be 20 mm, so as to be compatible with the optical requirement of a numerical aperture of 0.2. also, this diameter provides a more laminar regime during rapid pressure variations than smaller diameters would. the zerodur spacer thickness was designed to be 9 mm, while the mirror thickness is 11 mm, which corresponds to a relative factor mirror thickness / spacer thickness ≈ 1.22, similar to the first zero of the bessel function $J_1$ divided by π (≈ 1.22); the bessel function is associated with the 2π circular revolution of the cylindrical symmetry. as with the previous cavity, in order to simulate that the cavity is inside a vacuum chamber, the same pressure load was applied to every face of the fabry-perot cavity. the only restriction applied to the model was that the spacer and the mirrors would not rotate.

figure 11. cem fp cavity model with the simulated section dyed red.
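the bessel ratio mentioned above can be checked quickly with scipy (a verification sketch, not part of the original analysis):

```python
import numpy as np
from scipy.special import jn_zeros

first_zero = jn_zeros(1, 1)[0]     # first positive zero of J1: ~3.8317
print(first_zero / np.pi)          # ~1.22, close to 11 mm / 9 mm = 1.222
```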
6.2. results of the simulations
figure 13 presents the results of the deformation simulation for a pressure of 80 kpa. as in figure 4, there was an inhomogeneous deformation, but it is corrected by the partial simulation of the model, as the partial model only simulates one mirror, assuming the symmetry. the simulation data were fitted to three different equations, selected to be constant when the number of mesh elements tends to infinity. the selected curves are shown in equation (13); the quadratic equation was not added in equation (12) because the software could not fit the data to that curve. the results of the fitting are shown in figure 14, where the curves $y_1(x)$ and $y_2(x)$ cannot be distinguished:

$y_1(x) = a x^{-4} + b x^{-2} + c x^{-1} + d$
$y_2(x) = a x^{-2} + b x^{-1} + d$
$y_3(x) = a (\log(b x))^{-1} + d$ . (13)

the parameters obtained in the fitting are shown in table 5. the curve $y_1(x)$ shows the biggest mean square error and mean absolute error of the fitting. the relative uncertainties of the curves $y_2(x)$ and $y_3(x)$ are quite similar, but the mse and the mae obtained by fitting to the curve $y_3(x)$ are several orders of magnitude smaller. also, the best relative uncertainty is not reached with the fitting, but is achieved using the result of the finest mesh, i.e. the simulation with the highest number of mesh elements. using the relative deformation of the cavity over pressure obtained in table 5 for $y_2(x)$ and for the finest mesh, the values of table 1 and equation (9), it is possible to calculate the pressure that the fabry-perot cavity will have for a given nominal pressure. the difference between the calculated and nominal values is shown in figure 15: the asymptotic behaviour appears again, and the relative differences obtained with these two values are so close that for pressures over 100 pa they cannot be distinguished (their difference is under 7 × 10⁻¹⁰). the values of the relative difference between the nominal pressure and the calculated pressure are similar to those obtained in figure 6. as discussed before, the relative uncertainty of pressure depends on the nominal pressure (equations (10) and (11)). figure 16 represents the evolution of the pressure uncertainty with the nominal pressure: a linear dependency is shown again, and the uncertainty obtained using the value from the curve $y_2(x)$ grows faster than the one obtained with the finest mesh. furthermore, this simulation gives the best relative uncertainty, under 6.5 × 10⁻¹⁴ in both cases. table 6 shows a comparison between the best values of pressure-normalized deformation obtained for each model, including the results obtained by the partners: the cylindrical model presents less pressure-normalized deformation and less uncertainty.

figure 12. dimensions of the zerodur spacer in mm.
figure 13. cylindrical cavity deformation for a pressure of 80 kpa.
figure 14. pressure-normalized deformation versus the number of mesh elements; the data were fitted to three equations.

table 5. values of the mean square error (mse) of the fit, the mean absolute error (mae) of the fit and κ in pa⁻¹ for each fit, and the value of κ for the finest mesh.
fit | mse | mae | κ | u(κ) | u(κ)/κ
y₁(x) = a x⁻⁴ + b x⁻² + c x⁻¹ + d | 7.36 × 10⁻¹⁸ | 1.97 × 10⁻⁹ | −5.758610 × 10⁻¹² | 1.9 × 10⁻¹⁷ | 3.3 × 10⁻⁶
y₂(x) = a x⁻² + b x⁻¹ + d | 9.17 × 10⁻³⁰ | 2.51 × 10⁻¹⁵ | −5.758573 × 10⁻¹² | 1.1 × 10⁻¹⁷ | 1.9 × 10⁻⁶
y₃(x) = a (log(b x))⁻¹ + d | 3.19 × 10⁻³⁴ | 1.24 × 10⁻¹⁷ | −5.758600 × 10⁻¹² | 1.5 × 10⁻¹⁷ | 2.6 × 10⁻⁶
finest mesh (2 602 131 elements) | - | - | −5.7585840 × 10⁻¹² | 4.7 × 10⁻¹⁸ | 8.2 × 10⁻⁷

7. final discussion
the results of the simulations of the fabry-perot cavity deformation proposed by the cnam depend on the number of mesh elements introduced in the fem software; moreover, different software can give different results. because of this issue, a study of the evolution of the relative deformation over pressure of the fabry-perot cavity was made, fitting the curve to two equations in order to obtain the value when the number of mesh elements tends to infinity, and obtaining a relative error in κ between 5 × 10⁻⁵ and 3 × 10⁻⁴ depending on the curve used. moreover, the propagation of the uncertainty of κ to the pressure was studied for the values obtained with this method, showing that the formula which fits best is $y_1(x) = a x^{-4} + b x^{-2} + c x^{-1} + d$, whose parameters are collected in table 3. it was shown that the relative difference with the nominal value of pressure is under 5 × 10⁻⁷ times the value of the nominal pressure, and an asymptotic behaviour was shown in figure 6.
furthermore, the differences between the values of pressure-normalized deformation obtained by other laboratories were compared with the values calculated using the solid edge software. we conclude that the relative difference between the nominal pressure and the calculated pressure is under 5 × 10⁻⁸ times the value of the nominal pressure at 10 kpa. also, the difference between the values obtained by the cem and the pressure calculated using the κ of [1] is under 4 × 10⁻⁸ times the value of the pressure calculated with the κ of [1]. in figure 8 and figure 9, the asymptotic behaviour shown before is followed. moreover, the pressure relative uncertainty calculated for both cases is under 3 × 10⁻¹⁰ times the nominal pressure, which means that the contribution of the uncertainty of the pressure-normalized deformation (κ) to the relative pressure uncertainty, for a range of pressure between 1 pa and 10 kpa, is not decisive for the measurement. finally, the cem's fabry-perot cavity provides a smaller pressure-normalized deformation (−5.758584 × 10⁻¹² pa⁻¹) with an uncertainty of 4.7 × 10⁻¹⁸ pa⁻¹, which is 0.1 times smaller than the other cavity. the fabry-pérot cylindrical cavity provides a relative uncertainty under 6.5 × 10⁻¹⁴ for 10 kpa, one order of magnitude less than the best results using the other geometry. it would be interesting to study the deformation of the mirrors for different fabry-pérot cavity configurations.

8. conclusion
in conclusion, the cem's results using the finite element method through the software solid edge are similar to the results presented in [1] (with a discrepancy of 0.001 times the values obtained by [1]). also, the cem's cylindrical cavity has a pressure-normalized relative deformation 0.1 times lower than the model analysed in [1]. moreover, the model presents more advantages, as it only uses one material and, with the selected dimensions, provides a more laminar regime during rapid pressure variations. in future projects, the uncertainties of the constants and values of table 1 will be introduced in order to obtain the total uncertainty of the measurement. also, due to the capability of the cylindrical design to have an inner pressure different from the outer pressure, an analysis of the fp cavity deformation with vacuum applied inside the cavity and atmospheric pressure applied outside will be studied.

table 6. comparison between the results of the first model analysed by the partners (model 1 partner), the first model analysed by the cem (model 1 cem) and the cylindrical model (model 2).
model | pressure-normalized deformation (κ) (ΔL/L)/P in 10⁻¹² pa⁻¹ | uncertainty in 10⁻¹² pa⁻¹
model 1 partner | -6.3902 | 0.00015
model 1 cem | -6.3842 | 0.032
model 2 | -5.758584 | 0.00000047

figure 15. relative difference between calculated pressure and nominal pressure ((p_nominal − p)/p_nominal).
figure 16. relative uncertainty of the calculated pressure with κ₁ and κ₂ versus the nominal pressure.

references
[1] j. zakrisson, i. silander, c. forssén, z. silvestri, d. mari, s. pasqualin, a. kussicke, p. asbahr, t. rubin, o. axner, simulation of pressure-induced cavity deformation - the 18sib04 quantumpascal empir project, acta imeko 9(5) (2020), pp. 281-286. doi: 10.21014/acta_imeko.v9i5.985
[2] i. silander, t. hausmaninger, m. zelan, o. axner, gas modulation refractometry for high-precision assessment of pressure under non-temperature-stabilized conditions, journal of vacuum science & technology 36(3) (2018), art. 03e105, pp. 1-8.
doi: 10.1116/1.5022244
[3] o. axner, i. silander, t. hausmaninger, m. zelan, drift-free fabry-perot-cavity-based optical refractometry - accurate expressions for assessments of gas refractivity and density, jan. 2018. doi: 10.48550/arxiv.1704.01187
[4] p. virtanen (+ 34 authors), scipy 1.0: fundamental algorithms for scientific computing in python, nature methods 17(3) (2020), pp. 261-272. doi: 10.1038/s41592-019-0686-2
[5] z. silvestri, f. boineau, p. otal, j. wallerand, helium-based refractometry for pressure measurements in the range 1-100 kpa, 2018 conference on precision electromagnetic measurements (cpem 2018), july 08-13, 2018, paris, france. doi: 10.1109/cpem.2018.8501259

low-cost real-time motion capturing system using inertial measurement units
acta imeko issn: 2221-870x september 2022, volume 11, number 3, 1-9
simona salicone1, simone corbellini2, harsha vardhana jetti1, sina ronaghi3
1 department of electronics, information and bioengineering (deib), politecnico di milano, via giuseppe ponzio 34, 20133 milano, italy
2 department of electronics and telecommunication (det), politecnico di torino, corso duca degli abruzzi 24, 10129 torino, italy
3 department of energy (deng), politecnico di milano, via lambruschini 4a, 20156 milano, italy
section: research paper
keywords: motion-capture; bluetooth low energy; inertial measurement units; digital signal processing; embedded systems
citation: simona salicone, simone corbellini, harsha vardhana jetti, sina ronaghi, low-cost real-time motion capturing system using inertial measurement units, acta imeko, vol. 11, no. 3, article 17, september 2022, identifier: imeko-acta-11 (2022)-03-17
section editor: francesco lamonaca, university of calabria, italy
received may 17, 2022; in final form september 23, 2022; published september 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: simona salicone, e-mail: simona.salicone@polimi.it

abstract: human movement modelling, also referred to as motion capture, is a rapidly expanding field of interest for medical rehabilitation, sports training, and entertainment. motion capture devices are used to provide a virtual 3-dimensional reconstruction of human physical activities employing either optical or inertial sensors. utilizing inertial measurement units and digital signal processing techniques offers a better alternative in terms of portability and immunity to visual perturbations when compared to conventional optical solutions. in this paper, a cable-free, low-cost motion-capture solution based on inertial measurement units with a novel approach for calibration is proposed. the goal of the proposed solution is to bring motion capture to the fields that, because of cost, have not yet taken enough benefit of such technology (e.g., fitness training centres). according to this goal, the necessary requirement for the proposed system is to be low-cost; therefore, all the considerations and solutions provided in this work follow this main requirement.

1. introduction
human motion reconstruction, also known as motion capture (mo-cap), plays an important role in medical rehabilitation, sports training, and entertainment [1]-[3]. in the field of healthcare, physical therapists use such systems to record patient movements, visualize the movements in real time and, finally, compare the recorded movements throughout the treatment cycle for efficacy evaluation. for the purpose of entertainment, mo-caps are used for reconstructing human movements in 3-dimensional (3d) game development and in animated scenes in the movie industry. in sports training, mo-caps are beneficial for reconstructing the players' movements, for the evaluation of each player individually and as a team, as well as for monitoring each player for automatic estimation and prediction of possible injuries, such as muscle damage caused by player collisions during a period of professional activity [4].
mo-cap solutions use either non-optical sensor-based methods or optical computer-vision methods to reconstruct physical human movements. as is currently known, solutions based on computer-vision methods can be employed only in a controlled environment, and they suffer from inconsistencies concerning environmental conditions such as ambient light (colour and illumination problems), object proximity (occlusion), and motion detection in cluttered scenes [5], [6]. alternatively, sensor-based mo-cap methods are usually applied in the form of wearable devices that use inertial measurement units (imus) based on microelectromechanical systems (mems). in principle, mems-based imus are immune to visible environmental conditions, but they may suffer from orientation drifts over time. the development of inertial-based mo-cap solutions, for both movement tracking and human activity recognition (har), has been a topic of research for many years, mainly due to the constant increase in the availability of new mems devices. nowadays, many similar commercial mo-cap solutions are available on the market; such products, depending on the specific targeted application, can differ in the maximum number of sensors, in the placement position of the sensors on the human body, and in the user interface and data representation. as an example, the mtw awinda [7] tracker by xsens is a complete wireless human motion tracker able to accurately monitor the joint angles and is targeted at many applications, spanning from rehabilitation to injury prevention in sport and to human-machine interaction in robotics. the system can manage up to 20 sensors with a maximum data rate of 60 hz. the movit system g1 [8] by captiks is another available solution for both indoor and outdoor motion capture and analysis. this system can acquire up to 16 sensors and is able to provide raw orientation measurements and even animation and video files. the acquisition rate can reach 100-200 hz to track fast movements. for movements with higher dynamics, the wearable 3d motion capture [9] by noraxon can reach a measurement output rate of 400 hz. this system is equipped with 16 sensors suitable for all types of movements, including high-velocity and high-impact conditions.
despite the availability of many solutions on the market, these products are usually targeted at the most demanding applications and their cost remains too high for many other fields. the cost of such commercial mo-cap solutions can range from a few thousand dollars to tens of thousands of dollars. these prices are unfortunately out of budget in many applications. therefore, in this paper, we focus on the development of a low-cost mo-cap solution, which is full-body, multi-sensor and cable-free. hence, to avoid using costly measurement elements, a novel approach for calibration using software-level data analysis solely based on gyroscope measurements is proposed. the system is intended to be used for physical activities that require movements with low angular velocity for a short period of time (e.g., medical rehabilitation, physiotherapy and movement analysis in aged people). a set of experiments is also performed for a preliminary validation and metrological characterisation of the proposed solution.

2. developed system
this section explains the architecture of the proposed solution. the hardware and software features of the solution are highlighted and the proposed method for gyroscope calibration is explained. finally, a set of experiments is defined to demonstrate the functionality of the system and to assess the effectiveness of the employed method in reducing drift errors during the attitude representation process.

2.1. system architecture
figure 1 briefly shows some possible architectures of mo-cap solutions; in particular, the red path indicates the method selected for our solution. four key decisions have been made to choose the method of implementation; they are explained in the following. the reason for choosing a non-optical method is to provide portability and immunity to visual environmental perturbations. in fact, for applications that require the activity to be performed in an open area, it is inherently difficult to control environmental parameters such as lighting. mems-based imus are usually a combination of a low-cost accelerometer, a gyroscope and, in some cases, a magnetometer. in a common navigation system, all three available inertial sensors (accelerometers, magnetometers and gyroscopes) are employed for both the preliminary calibration and the navigation itself. in particular, during navigation, the accelerometer and the magnetometer are necessary to obtain a geo-referenced frame of coordinates and to compensate for drifts inevitably arising from the integration of the gyroscope signals. however, the use of the magnetometer may decrease the accuracy in indoor environments due to the usual presence of local magnetic field perturbations. on the other hand, the accelerometer may increase the measurement noise, since the body acceleration is superimposed on the detected gravity vector. fortunately, in most human movement measurement applications, a geo-referenced frame is not necessary, and even the measurement of angle changes over short time intervals (e.g. a few minutes) with respect to a starting position is usually sufficient (e.g. the patient is asked to start the movement from a reference position). for these reasons, the authors propose a robust measurement system solely based on the use of gyroscopes and a simple calibration procedure suitable for achieving reasonable drifts and angle accuracy over intervals of a few minutes.
figure 1. range of possible implementation methods.

2.2. sensor placement
for a wearable motion-capture product, the factors to be considered are accurately indicated by marin et al. [10]. the placement factor is crucial to avoid any friction and displacement of the sensors during the various activities. even displacements during movements with a low angular velocity can abruptly decrease the accuracy of the representation. therefore, the outermost places right above the joints and the flat areas of the body are preferably used. considering the application of medical rehabilitation, it is necessary to track the position of the bones that contribute to locomotion and basic physical activities. the proposed solution uses 15 wearable sensors for full-body motion capture. these sensors are attached to the front and back side of the human body in the order illustrated in figure 2, where the red dots refer to the sensors and the orange lines indicate the correct positioning of the sensors. additionally, the sensor numbers, names and measured parameters are listed in table 1. furthermore, as shown in figure 3, the fixed-fabric method is used for the attachment. in particular, 15 elastic fabric bands have been hand-made, with different circumferences according to the body parts where the wearable devices are intended to be installed. furthermore, on each elastic band, a pocket with a size of 5 cm × 5 cm has been sewn to contain the sensing element, whose dimensions are 3 cm × 3 cm × 1 cm.

figure 2. sensor attachment to the human body and numbering of the sensors; on the left: front side, on the right: back side.
figure 3. the employed sensor (in the green box) and the hand-made elastic belt used to fix the sensor on the body (the wrist in this figure).

2.3. hardware design
to implement multiple wireless measuring nodes in the form of wearable devices that can perform online software-level calibration, commercially available modules, also known as nrftag (or sensor_tag), were used, as illustrated in figure 4. the figure also shows the sensor base and holder, which are used for sensor calibration; in particular, the base has been specifically designed and 3d printed by the authors. the main specifications of the nrftag are summarized in table 2. on each sensor tag, the mems-based 6-dof imu mpu6050 [11] is used as the sensing element and the nrf51802 system on a chip (soc) [12] is used as the processing unit and the bluetooth low energy™ (ble) communication module. the manufacturer-recommended power supply for the sensor tag is a cr2032 non-rechargeable coin-cell battery with a 3 v output. considering the portability design criteria, a rechargeable wearable device is more convenient from the perspective of the user.
table 1. sensor names and the measured parameters.
# | sensor | measurand
1 | forehead | rotation and side-bend relative to the origin
2 | chest | anterior/posterior tilt and lateral tilt to left/right
3 | left arm | abduction/adduction, flexion/extension
4 | right arm | abduction/adduction, flexion/extension
5 | left wrist | abduction/adduction, flexion/extension
6 | right wrist | abduction/adduction, flexion/extension
7 | left hand | abduction/adduction, flexion/extension
8 | right hand | abduction/adduction, flexion/extension
9 | sacrum (back) | rotation and side-bend relative to the origin
10 | left knee | flexion/extension
11 | right knee | flexion/extension
12 | left ankle | plantar-flexion/dorsi-flexion and supination/pronation
13 | right ankle | plantar-flexion/dorsi-flexion and supination/pronation
14 | left ankle | rotation and side-bend
15 | right ankle | rotation and side-bend

figure 4. top view of the sensor_tag module (in the green square) attached to the base, which has been specifically designed and 3d printed for the sensor calibration.

table 2. brief specifications of the nrftag.
product name | nrftag (sensor_tag)
application | wearable devices
supply voltage | 3 v coin-cell battery, 2032 package
embedded modules | nrf51802 (arm® cortex™-m0), mpu6050 imu, bmp280 barometric pressure sensor, ap3216 ambient light sensor
size | circular shape, d = 30 mm
tx power | +4 dbm to -20 dbm
on-air data rate | 250 kbps, 1 mbps or 2 mbps
modulation | gfsk
rx current | 9.5 ma
tx current at +4 dbm | 10.5 ma
attitude representation despite different possible methods of attitude representation that are preferred due to the application [16], [17], the most common way of representing attitude of a rigid body in 3d space 1 euler angles were introduced by leonhard euler in 18th century is using the euler angles1. euler angles represent rotation by performing a sequential operation with respect to a particular sequence and angle value. by representing the 3d space with three perpendicular axes denoted as i, j, k, it is possible to rotate any rigid body object by angles φ, θ, ψ with respect to the 3d axes. rotations based on euler angles can be represented using a rotation vector as indicated in equation (1). 𝑢: = [𝜑 𝜃 𝜓] (1) considering equation (2) where r is a rotation operation along a single axis, it could be understood that 𝑅𝑖𝑗𝑘 is a rotation function which consists in the product of three individual rotations along each 3d axis sequentially. 𝑅𝑖𝑗𝑘 ≔ 𝑅𝑖(𝜑)𝑅𝑗(𝜃)𝑅𝑘(𝜓) (2) by denoting the i, j and k axes as 1, 2 and 3 respectively, it could be stated that the performed sequence in equation (2) is [1,2,3]. this sequence is widely used for applications consisting representation of gyroscopic spinning motions of a rigid body. although euler angles are widely used because of their easyto-understand mathematical expression, employing them might introduce a major drawback also referred to as singularities during attitude representation. these singularities, which are also known as gimbal lock, might occur during specific sequence operations. the gimbal lock is basically losing one or more degree of freedom (dof) when two or more axes are positioned in parallel to each other. in particular, with a [1,2,3] sequence, the gimbal lock problem will arise when the rotation along the second axis (𝜃) becomes an integer multiplication of 90° (𝜃 = 𝑛 π). in order to overcome the issues regarding singularities, it is possible to use the unit quaternions2 for attitude representation. the unit quaternions are a form of a four-component complex numerical system which are mostly used in the pure mathematics, but indeed, have practical uses in applied science such as navigation systems and attitude representation in 3d space. unlike euler angles, unit quaternion rotation vector is composed by four components. the first component 𝑞0 ≔ 𝑤 is a scalar value related to the rotation angle, while the following three parameters are a vector 𝑞1:3 ≔ (𝑥, 𝑦, 𝑧) that indicates the rotation axis as represented in equation (3). 𝑞𝑤,𝑥,𝑦,𝑧 = [𝑞0 𝑞1 𝑞2 𝑞3] 𝑇 = [ 𝑞0 𝑞1:3 ] (3) one key advantages of the unit quaternions comparing to euler angles is the fact that in case of the unit quaternions, the attitude representation process is not sequential. hence, the unit quaternions do not suffer from singularities. therefore, in our 2 quaternions were introduced by william rowan hamilton in 19th century figure 5. central receiver development kit (nrf52840 dk). table 3. brief specifications of the central receiver. product name nrf52840dk application central receiver supply voltage 1.7 ~ 5.5 v processor nrf52840 (arm® cortex™ m4 size rectangular shape (135 × 63 mm²) table 4. price of the single components and total price of the system (for the two proposed solutions). 
item | number of employed items | unit price (€)
nrf52840 dk | × 1 | 59
sensor_tag | × 15 | 16
li-po battery (*) | × 15 | 3
tp4056 charger (*) | × 15 | 1
dc/dc buck converter (*) | × 15 | 1
total price | | 299
total price (rechargeable) | | 374
(*) required only for the rechargeable configuration.

therefore, in our method of representation, the unit quaternions are preferred over the euler angles. in addition, since the unit quaternions use only complex multiplications, they require fewer hardware resources than the euler angles, which require the evaluation of trigonometric functions [16]. although the unit quaternions benefit from a more robust mathematical derivation and no singularities, they are not easy to interpret in terms of a physical meaning. hence, in order to report the data to the end user, we preferred to convert the unit-quaternion values to the euler angles only when a numerical value representation is needed. the conversion from the unit quaternions to the euler angles is given in equation (4) (and vice versa in equation (5)). it is important to note that this conversion depends on the euler angles' sequential system, which in the case of this solution is [1,2,3]:

$R(u)_{1,2,3} = \begin{bmatrix} \operatorname{atan2}(2 q_2 q_3 + 2 q_0 q_1,\; q_3^2 - q_2^2 - q_1^2 + q_0^2) \\ -\operatorname{asin}(2 q_1 q_3 - 2 q_0 q_2) \\ \operatorname{atan2}(2 q_1 q_2 + 2 q_0 q_3,\; q_1^2 + q_0^2 - q_3^2 - q_2^2) \end{bmatrix}$ (4)

$q_{w,x,y,z} = \begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix} := \begin{bmatrix} C_{\varphi/2} C_{\theta/2} C_{\psi/2} + S_{\varphi/2} S_{\theta/2} S_{\psi/2} \\ S_{\varphi/2} C_{\theta/2} C_{\psi/2} - C_{\varphi/2} S_{\theta/2} S_{\psi/2} \\ C_{\varphi/2} S_{\theta/2} C_{\psi/2} + S_{\varphi/2} C_{\theta/2} S_{\psi/2} \\ C_{\varphi/2} C_{\theta/2} S_{\psi/2} - S_{\varphi/2} S_{\theta/2} C_{\psi/2} \end{bmatrix}$ (5)

where c and s are shorthand for the cosine and sine of the half angles (e.g. $C_{\varphi/2} = \cos(\varphi/2)$), atan2 is the 2-parameter inverse tangent function, and asin is the inverse sine function.
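as a concrete illustration of equations (4) and (5), the following python sketch implements the two conversions for the [1,2,3] sequence. the function names and the (w, x, y, z) component ordering are our own illustrative choices, not part of the system firmware:

```python
import numpy as np

def quat_to_euler123(q):
    """convert a unit quaternion (w, x, y, z) to euler angles
    (phi, theta, psi) of the [1,2,3] sequence, as in equation (4)."""
    q0, q1, q2, q3 = q
    phi = np.arctan2(2*(q2*q3 + q0*q1), q3**2 - q2**2 - q1**2 + q0**2)
    theta = -np.arcsin(2*(q1*q3 - q0*q2))
    psi = np.arctan2(2*(q1*q2 + q0*q3), q1**2 + q0**2 - q3**2 - q2**2)
    return np.array([phi, theta, psi])

def euler123_to_quat(phi, theta, psi):
    """inverse conversion of equation (5), with the half-angle
    shorthand C and S expanded into cos and sin."""
    c1, s1 = np.cos(phi/2), np.sin(phi/2)
    c2, s2 = np.cos(theta/2), np.sin(theta/2)
    c3, s3 = np.cos(psi/2), np.sin(psi/2)
    return np.array([
        c1*c2*c3 + s1*s2*s3,   # q0 (scalar part w)
        s1*c2*c3 - c1*s2*s3,   # q1 (x)
        c1*s2*c3 + s1*c2*s3,   # q2 (y)
        c1*c2*s3 - s1*s2*c3,   # q3 (z)
    ])

# round-trip check on an arbitrary attitude (theta away from ±90°)
angles = np.radians([30.0, 45.0, -60.0])
assert np.allclose(quat_to_euler123(euler123_to_quat(*angles)), angles)
```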
2.5. calibration method

low-cost imu devices are unfortunately affected by important systematic errors, mainly due to gain errors and to axis misalignments, which may lead to poor accuracy, especially when the sensor has to deal with accelerations and rotations that frequently change axes, as expected in human-movement measurements. a calibration therefore has to be performed in order to achieve acceptable results. the main sources of inaccuracy in the use of the gyroscope lie in the gain error (the offset error is present as well, but it is not of concern, since it can easily be removed by collecting some initial measurements with the sensor lying in a static position) and in the axis misalignments (i.e. the sensitivity axes of the three gyroscopes are not exactly orthogonal to each other). assuming that the sensor x-axis is correctly aligned with the reference-frame x-axis (more properly, that the reference-frame x-axis is chosen parallel to the actual sensor x-axis), and assuming that the reference y-axis is chosen so that the sensor y-axis lies in the reference-frame x-y plane, the measured signals can be related to the actual sensor rotation by the following matrix equation:

$\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} = \begin{pmatrix} a' & 0 & 0 \\ b' & c' & 0 \\ d' & e' & f' \end{pmatrix} \begin{pmatrix} X_s \\ Y_s \\ Z_s \end{pmatrix}$ (6)

where $X_s$, $Y_s$ and $Z_s$ represent the actual angular speeds with respect to the reference frame, and $X$, $Y$, $Z$ represent the signals measured by the sensor along its non-orthogonal axes, affected by gain errors. the sensor angular speed can be obtained from the measured signals by inverting the previous equation, whose matrix remains triangular:

$\begin{pmatrix} X_s \\ Y_s \\ Z_s \end{pmatrix} = \begin{pmatrix} a & 0 & 0 \\ b & c & 0 \\ d & e & f \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \end{pmatrix}$ (7)

considering a rotation with angular speed $\omega$ about an arbitrary axis, it can be written:

$X_s^2 + Y_s^2 + Z_s^2 = \omega^2$ (8)

and therefore:

$a^2 X^2 + (bX + cY)^2 + (dX + eY + fZ)^2 = \omega^2$ (9)

rearranging the terms:

$X^2(a^2 + b^2 + d^2) + Y^2(c^2 + e^2) + Z^2 f^2 + 2XY(bc + de) + 2XZ\,(df) + 2YZ\,(ef) = \omega^2$ (10)

which, with the following six definitions:

$\alpha = a^2 + b^2 + d^2,\quad \beta = c^2 + e^2,\quad \gamma = f^2,\quad \delta = bc + de,\quad \epsilon = df,\quad \zeta = ef$ (11)

leads to the following compact matrix system of equations collecting all the applied rotations:

$\begin{pmatrix} X_1^2 & Y_1^2 & Z_1^2 & 2X_1Y_1 & 2X_1Z_1 & 2Y_1Z_1 \\ X_2^2 & Y_2^2 & Z_2^2 & 2X_2Y_2 & 2X_2Z_2 & 2Y_2Z_2 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ X_N^2 & Y_N^2 & Z_N^2 & 2X_NY_N & 2X_NZ_N & 2Y_NZ_N \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \\ \gamma \\ \delta \\ \epsilon \\ \zeta \end{pmatrix} = \begin{pmatrix} \omega_1^2 \\ \omega_2^2 \\ \vdots \\ \omega_N^2 \end{pmatrix}$ (12)

the six unknown parameters (from $\alpha$ to $\zeta$) can be obtained by inverting the previous equation if at least six rotations with known speeds ($\omega_1$ to $\omega_N$) have been applied. eventually, the required coefficients can be quickly recovered in the following order:

$f = \sqrt{\gamma},\quad d = \epsilon/f,\quad e = \zeta/f,\quad c = \sqrt{\beta - e^2},\quad b = (\delta - de)/c,\quad a = \sqrt{\alpha - b^2 - d^2}$ (13)

this calibration has easily been implemented on the sensor micro-controller, using the svd decomposition of the matrix to solve the system of equation (12). however, applying a rotation with a known speed to the sensor is not practical and would require complex equipment. fortunately, considering practically applicable sets of axes during calibration, the previous equations, by integration, also hold with angles for rotations about any arbitrary fixed axis. therefore, the proposed approach consists in rotating the sensor about a set of unknown axes for an integer number of turns, integrating the sensor signals, and replacing $\omega^2$ with $(2 k \pi)^2$ in equation (12). to manage the calibration procedure, a base for rotating the sensor about arbitrary axes was necessary; therefore, as already mentioned in section 2.3, a specific base was designed and 3d printed (the base is visible in figure 4). in addition, a specific virtual app was developed using the bluetooth version of the miupanel platform (ref. https://miupanel.com). by means of the miupanel platform, a mobile phone can establish a direct ble connection with the sensors and retrieve a graphical panel for controlling the sensor. the developed panel is shown in figure 6: it contains a real-time visualization of the yaw, pitch and roll angles, and some buttons to start the calibration, to acquire multiple rotations and to compute the calibration coefficients. through the panel it is also possible to store the calibration results in the sensor flash memory.
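the calibration procedure above condenses into a few lines of numerical code. the following python sketch is a minimal illustration of equations (12) and (13), assuming the integrated gyroscope signals of each test rotation are already available; the function name and the data layout are hypothetical. np.linalg.lstsq, which is svd-based, plays the role of the svd solution mentioned in the text:

```python
import numpy as np

def solve_gyro_calibration(XYZ, turns):
    """estimate the coefficients of the calibration matrix of equation (7)
    from n >= 6 rotations, following equations (12) and (13).

    XYZ   : (n, 3) integrated gyroscope signals (angles) per rotation
    turns : (n,) integer number of turns k of each rotation
    """
    X, Y, Z = XYZ[:, 0], XYZ[:, 1], XYZ[:, 2]
    # regression matrix of equation (12)
    M = np.column_stack([X**2, Y**2, Z**2, 2*X*Y, 2*X*Z, 2*Y*Z])
    rhs = (2 * np.pi * np.asarray(turns))**2   # (2 k pi)^2 replaces omega^2
    coef, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    alpha, beta, gamma, delta, eps, zeta = coef
    # back-substitution of equation (13)
    f = np.sqrt(gamma)
    d = eps / f
    e = zeta / f
    c = np.sqrt(beta - e**2)
    b = (delta - d*e) / c
    a = np.sqrt(alpha - b**2 - d**2)
    return np.array([[a, 0, 0], [b, c, 0], [d, e, f]])
```

once the triangular matrix is stored, correcting a raw reading is a single matrix-vector product, as in equation (7).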
2.6. embedded software design

this part addresses the combination of the central receiver and all the wearable devices, i.e. the acquisition group. for the software design, it is necessary to state the desired features and specifications of the solution. the basic requirements for the acquisition group of our cable-free mo-cap solution are indicated below:

• the central receiver should be able to receive data from all the 15 wearable sensors simultaneously and to associate them with the corresponding body part.
• the central receiver should be able to identify the index of the received data for synchronization purposes.
• a lightweight format for the data-string communication between the central receiver and the wearable devices should be defined. this data structure, also referred to as the data matrix, should be able to contain the raw measurement data and the necessary attributes, such as the functionality status of the wearable devices.

the first two requirements are associated with the connection establishment, while the rest are related to the measurement data structure. as previously mentioned, the connection protocol employed in the proposed solution is bluetooth™ low energy, which satisfies requirements such as fast connection establishment and short-range, high-speed data transfer. the set of instructions executed by the peripherals is shown in figure 7, where the red path indicates the calibration algorithm used to reduce the drifts of the gyroscope measurements. to avoid establishing connections with extraneous devices within ble range, the central receiver is programmed to search only for this service-characteristic profile. furthermore, in order to distinguish each sensor, the unique peer-address identifier of each sensor is associated with the sensor number in table 1. each peripheral communicates its data through advertisement packets transmitted every 20 ms. since the attributes and the payload (measurement data) consume more bytes than a single ble advertisement packet can carry (31 bytes), the data must be communicated using multiple advertisement packets. each advertisement packet carries 31 bytes of data and consists of information such as: generic access profile (gap) type, manufacturer-specific data, the peripheral address, and the payload. upon reception of the advertisement packets by the central receiver, the data matrix is assembled and communicated to a personal computer using serial communication. the set of instructions executed by the central receiver is shown in figure 8, where the initialization process is executed only when the central receiver is turned on, while the acquisition and communication processes run continuously.

2.7. representation software design

in order to represent the measurement data, representation software in the form of windows applications was developed. these applications use various techniques to interpret the incoming data from the central receiver. each windows application provides a graphical user interface (gui) for interaction with the professional user. the applications were created in the matlab, ni labview and unity game engine software environments. from the data-matrix structure, it is possible to extract information such as the sensor number, new-data availability, data packet type, and measurement values. as illustrated in figure 9, the general principle of the data interpretation process in all the software environments is to decode the incoming hexadecimal data matrix from the serial interface into single-precision floating-point values according to the ieee 754-2019 standard. the decoded data are then converted into the euler angles and the quaternion values. eventually, the converted angles are represented and stored both numerically and graphically.

figure 6. screenshot of the mobile application designed to manage the calibration process.
figure 7. peripherals' program logic.
figure 8. central receiver's program logic.
figure 9. data interpretation logic.
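as an illustration of the decoding step of figure 9, the python sketch below converts a hexadecimal payload into ieee 754 single-precision values. the byte order and the helper name are assumptions, since the exact packet layout is not detailed here:

```python
import struct

def decode_floats(hex_payload: str, big_endian: bool = False):
    """decode a hexadecimal payload into ieee 754 single-precision
    floats (4 bytes per value), as in the interpretation step of figure 9."""
    raw = bytes.fromhex(hex_payload)
    count = len(raw) // 4
    fmt = ('>' if big_endian else '<') + f'{count}f'
    return struct.unpack(fmt, raw[:count * 4])

# example: one little-endian float; 0x3f800000 stored as "0000803f" is 1.0
print(decode_floats("0000803f"))  # (1.0,)
```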
2.8. experimental procedure

the experiments were conducted separately for each of the representation applications; the procedures and the necessary considerations for each test are explained in this section.

2.8.1. experiment 1 – functionality test

the aim of this experiment is the 3d representation of human physical activities using matlab and the unity game engine. the experimental modules, each with a brief description, are listed below:

• upper-body experiment: these experiments capture the upper-body activities by tracking sensors #1 to #8.
• lower-body experiment: these experiments capture the lower-body activities by tracking sensors #9 to #15.
• full-body experiment: these experiments perform the full-body motion-capture process by employing all 15 sensors.

figure 10 to figure 13 illustrate the functionality of the representation procedure using matlab and the unity game engine. in each figure, the test subject's posture is placed next to the test results for an immediate comparison, while, to help the reader, the positions of the sensors on the body are marked with red circles on the picture, according to table 1.

2.8.2. experiment 2 – metrological characterisation

this experiment aims to evaluate the type a standard uncertainty. this preliminary evaluation was done to estimate the standard uncertainty of a single sensor (sensor #2, corresponding to the chest of the test subject) throughout the measurement process. the experiment was performed by connecting a single sensor tag to the central receiver via ble and connecting the central receiver to a personal computer using serial communication. the sensor tag was placed in a fixed position (on a piece of sponge to damp environmental vibrations). then the sensor tag was turned on and the offset-compensation phase started, which took approximately 10 seconds. the experiment lasted about 9 minutes, at room temperature (23 °c), while errors due to the pcb mounting and cross-axis sensitivity effects were neglected [11]. moreover, the ni labview application was used to record the incoming data in a plain-text file. finally, the plain-text file was converted into an excel worksheet to perform the statistical analysis. 5000 samples were considered for the analysis (500 seconds of data acquisition). the first 2000 samples were then discarded to take into account the offset-compensation phase and the warm-up time of the measuring units; hence, samples 2001 to 5000 were used for this analysis. let us consider the measurements of the gyroscope along each of the 3d axes as randomly varying variables. the measurement uncertainty of the instrument along each axis can then be calculated from the experimental results according to the joint committee for guides in metrology (jcgm) guide to the expression of uncertainty in measurement (gum) [18]. the results of the calculations are provided in table 5, and the measurements obtained with the sensor in a static position are illustrated in figure 14.
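the type a evaluation described above reduces to computing experimental standard deviations over the retained samples. a minimal python sketch, with a hypothetical file name for the recorded angles:

```python
import numpy as np

def type_a_uncertainty(samples):
    """type a evaluation per the gum: experimental standard deviation
    of the observations and standard uncertainty of their mean."""
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    s = samples.std(ddof=1)      # experimental sd of a single observation
    return s, s / np.sqrt(n)     # (sd of observations, sd of the mean)

# usage on the retained window (samples 2001..5000) of one euler angle
# angle = np.loadtxt("static_log.txt")[2000:5000]   # hypothetical file
# s_obs, u_mean = type_a_uncertainty(angle)
```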
2.8.3. experiment 3 – correction method evaluation

the purpose of this experiment was to highlight the impact of the calibration algorithm in reducing the drifts of the gyroscope measurements. to induce drifts in the measuring unit, we placed a sensor on a ruler at a defined position on a table and recorded the measured value for each of the 3d axes. then, grabbing the ruler with the right hand, we performed 15 forward rotations and 15 lateral movements of the right arm in 1 minute. next, the ruler was placed back at the same defined position and the mismatch with respect to the initial value was recorded. the above steps were repeated 5 times; therefore, 15 groups of observations along each axis were obtained. finally, to emphasise the impact of the calibration method, we performed this experiment both with and without the calibration algorithm being executed. the impact of the gyroscope calibration algorithm is highlighted in figure 14: as demonstrated there, observations made without the correction algorithm are prone to drifts over time during repetitive movements, whereas the gyroscope measurement drifts are bounded to ±1.5° when the calibration algorithm is executed.

figure 10. matlab upper-body experiment with t-pose posture.
figure 11. matlab lower-body experiment with an arbitrary posture.
figure 12. matlab full-body experiment with t-pose posture.
figure 13. unity lower-body (left figure) and full-body (right figure) experiment with an arbitrary posture.

table 5. type a evaluation of the standard uncertainty.
parameters | φ° | θ° | ψ°
induced rotation | 0 | 0 | 0
experimental sd of observations | 0.015 | 0.022 | 0.0071

3. discussion

considering the limitations of machine-vision technologies in motion-capture solutions, in this paper we presented a low-cost, non-optical, cable-free, real-time and full-body wearable motion-capture solution using 15 distributed mems-based imus. by using the bluetooth™ low energy wireless communication protocol, we established a low-power, low-cost mo-cap system that fully satisfies the portability design criteria of wearable devices. taking into account the broad range of applications for motion-capture devices, our solution was specifically designed to be used for medical rehabilitation at home and in non-professional gyms, where the cost of the system plays a very important role. furthermore, it has been assumed that the measurement duration and the angular velocity of the physical movements are low (i.e., physiotherapy and movement analysis in aged people) and that, after each set of exercises, the patient returns to a reference position. in the proposed solution, the angular position is estimated solely from the gyroscope measurements. since gyroscopic measurements are prone to angular drifts over time, we proposed a software-level calibration method that is able not only to eliminate the gain errors but also to take into account the axis misalignments. the results of our experiments appear promising, bounding the gyroscope drifts to approximately ±1.5° over a measurement period of 5 minutes. moreover, the experimental results demonstrated the functionality of the system, and a preliminary attempt to metrologically characterise the measurement system was performed. furthermore, the rotation angles can be further analysed using data-processing methods for human activity recognition (har), which is widely used in medical diagnostic procedures such as gait analysis and core-stability assessment. it is therefore possible to implement har by using suitable classification methods such as deep neural networks. table 6 shows a comparison between the proposed system and other available commercial systems. the table clearly shows that the performance and characteristics of our system are comparable with those of the other commercial systems, at least over short periods of time.
however, the great advantage of the proposed system is its cost which, as discussed in the introduction and shown in table 4, is one or even two orders of magnitude lower. for the intended applications of a few minutes, the proposed system thus provides performance similar to the commercial systems, but at a very low cost.

figure 14. measurements obtained with sensors in static position (left figure) and evaluation of the improvements obtained with the proposed calibration algorithm (right figure).

table 6. comparison between the proposed system and other available commercial systems.
product | proposed system | mtw awinda [7] | movit g1 [8] | ultium motion [9]
connection | wireless | wireless | wireless | wireless
range (m) | 20 | 20 | 30 | 40
number of sensors | 15 | 20 | 16 | 16
battery life (h) | 10 | 6 | 6 | 10
orientation accuracy (°) | 1.5 (max 5 min) | 1 | 1 | 1
sensor weight (g) | 10 | 16 | 25 | 19
size (mm²) | 30 × 30 | 47 × 30 | 48 × 39 | 44 × 33
angular range (deg/s) | 2000 | 2000 | 2000 | 7000
quaternion rate (hz) | 10 | 60 | 100 | 100

references
[1] l. n. n. nguyen, d. rodríguez-martín, a. català, c. pérez-lópez, a. samà, a. cavallaro, basketball activity recognition using wearable inertial measurement units, proceedings of the xvi international conference on human computer interaction, association for computing machinery, vilanova i la geltru, spain, 2015, pp. 1-6. doi: 10.1145/2829875.2829930
[2] f. casamassima, a. ferrari, b. milosevic, p. ginis, e. farella, l. rocchi, a wearable system for gait training in subjects with parkinson's disease, sensors (basel), 2014, 14(4), pp. 6229-6246. doi: 10.3390/s140406229
[3] r. a. w. felius, m. geerars, s. m. bruijn, j. h. van dieën, n. c. wouda, m. punt, reliability of imu-based gait assessment in clinical stroke rehabilitation, sensors (basel), 2022, 22(3), pp. 119. doi: 10.3390/s22030908
[4] d. kelly, g. f. coughlan, b. s. green, b. caulfield, automatic detection of collisions in elite level rugby union using a wearable sensing device, sports engineering, 2012, 15(2), pp. 81-92. doi: 10.1007/s12283-012-0088-5
[5] y. z. cheong, w. j. chew, the application of image processing to solve occlusion issue in object tracking, proceedings of matec web of conferences, 2018, pp. 1-10. doi: 10.1051/matecconf/201815203001
[6] g. d. finlayson, colour and illumination in computer vision, interface focus, 2018. doi: 10.1098/rsfs.2018.0008
[7] xsens, mtw awinda. online [accessed 27 september 2022] https://www.xsens.com/products/mtw-awinda
[8] captiks, movit system g1-3d. online [accessed 27 september 2022] http://www.captiks.com/products/movit-system-g1-3d
[9] noraxon usa, ultium motion, 2021. online [accessed 27 september 2022] https://www.noraxon.com/our-products/ultium-motion/
[10] j. marin, t. blanco, j. j. marin, octopus: a design methodology for motion capture wearables, sensors, 2017, 17(8), pp. 1-24. doi: 10.3390/s17081875
[11] invensense, mpu-6000 and mpu-6050 product specification, 2013. online [accessed 27 september 2022] https://invensense.tdk.com/wp-content/uploads/2015/02/mpu-6000-datasheet1.pdf
[12] nordic semiconductor, nrf51802 multiprotocol bluetooth low energy/2.4 ghz rf system on chip product specification, 2016. online [accessed 27 september 2022] https://infocenter.nordicsemi.com/pdf/nrf51802_ps_v1.2.pdf
[13] nanjing top power asic corp., tp4056 1a standalone linear li-ion battery charger with thermal regulation in sop-8.
online [accessed 27 september 2022] https://www.mikrocontroller.net/attachment/273612/tp4056.pdf
[14] n.c.e. inc., ce6208 series ultra-fast high psrr 1a cmos voltage regulator. online [accessed 27 september 2022] https://datasheetspdf.com/datasheet/ce6208.html
[15] nordic semiconductor, nrf52840 development kit pca10056 user guide, 2019. online [accessed 27 september 2022] https://infocenter.nordicsemi.com/pdf/nrf52840_dk_user_guide_v1.3.pdf
[16] j. diebel, representing attitude: euler angles, unit quaternions, and rotation vectors, matrix, 2006, pp. 1-35.
[17] h. parwana, m. kothari, quaternions and attitude representation, arxiv, department of aerospace engineering, indian institute of technology kanpur, india, 2017, pp. 1-19. doi: 10.48550/arxiv.1708.08680
[18] jcgm, gum: guide to the expression of uncertainty in measurement, 2008. online [accessed 27 september 2022] https://www.bipm.org/documents/20126/2071204/jcgm_100_2008_e.pdf

analysis of the mathematical modelling of a static expansion system

acta imeko issn: 2221-870x, september 2021, volume 10, number 3, 185-191

carlos mauricio villamizar mora1, jonathan javier duarte franco1, victor josé manrique moreno1, carlos eduardo garcía sánchez1
1 grupo de investigación en fluidos y energía, corporación centro de desarrollo tecnológico del gas, bucaramanga, colombia

section: research paper
keywords: modelling; pressure; static expansion; uncertainty
citation: carlos mauricio villamizar mora, jonathan javier duarte franco, victor jose manrique moreno, carlos eduardo garcía sánchez, analysis of the mathematical modelling of a static expansion system, acta imeko, vol. 10, no. 3, article 25, september 2021, identifier: imeko-acta-10 (2021)-03-25
section editor: francesco lamonaca, university of calabria, italy
received january 29, 2021; in final form july 9, 2021; published september 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was funded by colombia's servicio nacional de aprendizaje (sena) through the special cooperation agreement no. 0233 of 2018.
corresponding author: carlos eduardo garcía sánchez, e-mail: cgarcia@cdtdegas.com

abstract: static expansion systems are used to generate pressures in medium and high vacuum and are used in the calibration of absolute pressure meters in these pressure ranges. in the present study, the suitability of different models to represent the final pressures in a static expansion system with two tanks is analysed. it is concluded that the use of the ideal gas model is adequate in most simulated conditions, while the assumption that the residual pressure is zero before expansion presents problems under certain conditions. an uncertainty analysis of the process is carried out, which shows the high importance of the uncertainty of a first expansion for the subsequent expansion processes. finally, an analysis of the expansion system based on uncertainty is carried out to estimate the effect of the metrological characteristics of the measurements of the input quantities. said design process can make it possible to determine a set of restrictions on the uncertainties of the input quantities.
1. introduction

pressure measuring instruments, like any other measuring devices, require periodic calibrations to monitor changes in their performance and to guarantee their comparability with other meters [1]. in simple terms, a calibration consists of establishing a relationship between the values given by measurement standards and those given by an instrument under test [2]. in the case of vacuum pressure gauges, that is, gauges that measure absolute pressures lower than atmospheric pressure, a system that can produce specific vacuum pressure values is required, given the importance of comparing the measurements given by the standard and by the meter under test at different values of the measured variable [3]. in the calibration process, it is of utmost importance that the specific pressure values that are generated have a low uncertainty. uncertainty is a characteristic of any measurement, indicating the level of doubt about the reported value [4]. in this way, a better comparability of the meters calibrated with the process in question can be guaranteed. currently, colombia lacks absolute-pressure calibration services in the medium and high vacuum regions. for this reason, the centro de desarrollo tecnológico del gas (cdt de gas) has developed a static expansion system, which allows the generation of pressures in the medium and high vacuum ranges, making it possible to calibrate pressure gauges in those regions. this type of system has been implemented in multiple laboratories worldwide. the present study shows the mathematical design process of the system, through an evaluation of the possible models to represent the behaviour of the gas inside the system, and the use of uncertainty to define restrictions on the input quantities of the system.

2. state of the art

2.1. static expansion systems

the pressure region between absolute zero (total absence of molecules) and atmospheric pressure is called "vacuum". in turn, vacuum is classified as coarse (from 3 000 pa to atmospheric pressure), medium (between 0.1 pa and 3 000 pa), high (from
among the most accurate equipment for generating pressures in the medium and high vacuum ranges are static expansion systems, with which pressures as low as 10-6 pa [7]-[9] can be obtained. several metrology laboratories and national metrology institutes have developed static expansion systems [10]-[13]. the generation of vacuum pressures using static expansion of a gas is a mature technology [14], and much of the development in recent years in the topic has been related to a careful evaluation of possible causes of error and the estimation of the uncertainty of the final pressure [8], [10], [15]. calibration and measurement capabilities with expanded uncertainties lower than 0.3 % (k = 2) using static expansion systems have been reported [15]. nitrogen is commonly used as a gas for this type of calibration, although other inert gases could also be used [3], [7], [16]. static expansion systems are a set of tanks of different dimensions, connected by pipes and valves. to generate low pressures with high precision, a volume of gas, with a defined pressure, is allowed to expand to a larger volume, previously at a pressure as close to zero as possible [17]. figure 1 presents a simplified diagram of the static expansion process, using a small tank (where the initial pressure is set) and a large tank (to which the equipment to be calibrated is connected). in figure 1 and equations included in this work, 𝑉𝑃 is the volume of the small tank, 𝑉𝐺 is the volume of the large tank, 𝑇𝑖 is the initial temperature of the process, 𝑇𝑓 is the final temperature of the process, 𝑃𝑖 is the initial pressure of the process in the small tank, 𝑃𝑖,𝐺 is the initial pressure of the process in the large tank, which should be as close as possible to zero, and 𝑃𝑓 is the final pressure of the process. to further reduce the pressure, it is possible to repeat the expansion process, using the initial pressure resulting from the previous expansions [8]; the lower achievable limit is imposed by the level of vacuum that can be generated in the large tank, and by the effects of sorption and degassing in the tanks and the instruments under test [18]. static expansion systems can be used as primary calibration standards, calculating the final pressure from the initial pressure and the volume ratios of the gas expansion processes performed. they can also be used to generate pressure but using a pressure gauge as a reference for calibration, in this case being a calibration by direct comparison [3], [16], [19]-[22]. it is important to note that several of the high vacuum pressure measurement technologies, such as pirani gauges, exhibit high non-linearity with respect to pressure [3], which usually implies requiring several calibration points in each order of magnitude of pressure. table 1 presents the fundamental characteristics of some static expansion systems developed by various national metrology institutes. regarding the modelling of the process, some institutions have chosen to use the ideal gas model [13], [24], [25], while others have proposed the use of the virial equation as a real gas model for the expansion process [5], [7], [8], [23]. one aspect that is quite generalised is the assumption that the initial pressure in the calibration tank is zero, although the question remains whether this assumption is valid as the final pressure is smaller (that is, as the vacuum increases). 
equation (1) presents the calculation of the pressure after a static expansion process, modelling the substance as an ideal gas and neglecting the initial pressure in the large tank [10], [13], [24], [25]:

$P_f = P_i \, \dfrac{V_P}{V_P + V_G} \, \dfrac{T_f}{T_i}$ (1)

on the other hand, equation (2) shows the calculation of the final pressure obtained with a static expansion, based on the virial expansion truncated at the second term and neglecting the initial pressure in the large tank [5], [7], [8], [23]:

$P_f = P_i \, \dfrac{V_P}{V_P + V_G} \, \dfrac{T_f}{T_i} \, \dfrac{1 + B_f P_f / (R\, T_f)}{1 + B_i P_i / (R\, T_i)}$ (2)

in equation (2) and the following equations, $R$ is the molar gas constant, equal to 8.314 462 618 j mol⁻¹ k⁻¹, $B_f$ is the second virial coefficient of nitrogen at the conditions of the end of the process, and $B_i$ is the second virial coefficient of nitrogen at the conditions of the beginning of the process. the second virial coefficient is a function of the substance or mixture of substances, and a function of temperature. another important aspect related to the initial pressure in the calibration tank is that reaching the residual pressure, or minimum achievable pressure in the calibration chamber, usually requires many hours of pumping and baking of the tank [16]. the residual pressure is limited by the pumping speed, by leaks, by gas desorption from the materials exposed to vacuum, and by the cleanliness of the test gauges [1]. baking refers to heating the chamber, to about 200 °c, to desorb the gas from the internal walls of the materials, a procedure necessary to maintain ultra-high vacuum and beyond [18], [19]. other aspects that have been studied in static expansion systems are the determination of tank volumes by methods other than gravimetry [13], [24] and the effect of the inhomogeneity of temperature in the tanks on the process [8].

table 1. examples of static expansion systems developed in different national metrology institutes. it is not an exhaustive list: the ptb has another static expansion system in addition to the one mentioned here, and systems such as those of kriss (south korea) or npl (england) were not included either.
institution | approximate volumes of tanks (l) | pressure range (pa) | reference
l'istituto nazionale di ricerca metrologica (inrim) | 0.01, 0.5 and 68 | 0.1 – 1 000 | [4]
centro nacional de metrología (cenam) | 0.5, 1, 50 and 100 | 0.000 01 – 1 000 | [23]
centro español de metrología (cem) | 0.5, 1, 1, 100 and 100 | 0.000 1 – 1 000 | [13]
physikalisch-technische bundesanstalt (ptb) | 0.017, 0.017, 1, 20 and 233 | 0.000 001 – 1 000 | [24]
tübitak-ulusal metroloji enstitüsü (ume) | 0.15, 0.15, 0.7, 15, 15 and 72 | 0.000 9 – 1 000 | [25]

figure 1. static expansion process with two tanks. the initial state of the expansion process is shown on the left, and the final state on the right. it is assumed that there is spatial homogeneity of temperature in both tanks, but not necessarily temporal homogeneity.
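as a numerical illustration of equations (1) and (2), the python sketch below computes the final pressure with both models; the virial model is implicit in $P_f$ and is solved here by simple fixed-point iteration. the nitrogen virial coefficient used in the example (about −5 · 10⁻⁶ m³/mol near room temperature) is only indicative:

```python
R = 8.314462618  # molar gas constant, J mol^-1 K^-1

def pf_ideal(pi, vp, vg, ti, tf):
    """final pressure after one expansion, ideal gas, equation (1)."""
    return pi * vp / (vp + vg) * tf / ti

def pf_virial(pi, vp, vg, ti, tf, bi, bf, tol=1e-15):
    """final pressure from the truncated virial model, equation (2),
    solved by fixed-point iteration from the ideal-gas value."""
    pf = pf_ideal(pi, vp, vg, ti, tf)
    for _ in range(50):
        new = pf_ideal(pi, vp, vg, ti, tf) * (1 + bf*pf/(R*tf)) / (1 + bi*pi/(R*ti))
        if abs(new - pf) < tol:
            break
        pf = new
    return pf

# illustrative values echoing the base case of section 4
print(pf_ideal(50000, 1e-3, 0.1, 296.15, 297.15))   # ~496.7 Pa
print(pf_virial(50000, 1e-3, 0.1, 296.15, 297.15, -5e-6, -5e-6))
```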
2.2. uncertainty

uncertainty is the name given to the level of doubt that a measurement result carries [2], [4], [12]. the two most common ways to report uncertainty are the standard uncertainty, which represents the standard deviation of the probability distribution with which the measurement result is modelled, and the expanded uncertainty, which is half the length of a coverage interval on the measurement result with a specified coverage probability (for example, 95 %). the most widely used method to estimate the uncertainty of a measurement result is the gum method [4]. the uncertainty estimation process requires the clear establishment of the measurement model, which presents the way in which the measurand (the quantity to be measured) is calculated from its input quantities. subsequently, in the gum method, the standard uncertainty $u(y)$ of the measurand $y$ is estimated from the uncertainties $u(x_1), u(x_2), \ldots, u(x_n)$ of the input quantities $X_1, X_2, \ldots, X_n$, using the measurement model $y = f(X_1, X_2, \ldots, X_n)$. neglecting the correlation between the input quantities and the higher-order terms, one obtains the simplest version of the gum method, in which the standard uncertainty of the measurand is estimated according to equation (3) [4]:

$u(y) = \sqrt{ \left( \dfrac{\partial f}{\partial x_1} \right)^2 u^2(x_1) + \left( \dfrac{\partial f}{\partial x_2} \right)^2 u^2(x_2) + \cdots + \left( \dfrac{\partial f}{\partial x_n} \right)^2 u^2(x_n) }$ (3)
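equation (3) is straightforward to evaluate numerically when the partial derivatives are approximated by finite differences. a generic python sketch, with an illustrative function name and step size:

```python
import numpy as np

def gum_uncertainty(f, x, u, rel_step=1e-6):
    """first-order gum propagation of equation (3): u(y) from the
    standard uncertainties u of the inputs x, with the sensitivity
    coefficients estimated by central differences."""
    x, u = np.asarray(x, float), np.asarray(u, float)
    var = 0.0
    for i in range(x.size):
        h = rel_step * max(abs(x[i]), 1.0)
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        ci = (f(xp) - f(xm)) / (2 * h)   # sensitivity coefficient df/dx_i
        var += (ci * u[i])**2
    return np.sqrt(var)
```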
3. methodology

in a two-tank static expansion process, the final pressure depends on the initial pressure in the small tank, the initial pressure in the large tank, the volumes of the tanks, the initial and final temperatures, and the nature of the expanding gas. the model used, however, may differ according to the assumptions made about the process, which can lead to neglecting the effect of some variables. as mentioned in the state of the art, different institutions have opted for different models to calculate the pressures after the expansion processes. in the present work, three simplified models were compared against the complete model for the pressure resulting from an expansion process, in order to define which of the models can be considered adequate to calculate the pressure after one or more expansions. the most complete model of the process is based on a real gas model and considers both the initial pressure in the large tank and the inhomogeneity of temperature between the beginning and the end of the process. it is well known that the ideal gas model represents the behaviour of a gas when p → 0, since at vanishing pressure the assumptions of that model are exactly fulfilled (the volumes of the molecules are negligible with respect to the total volume of the gas, the intermolecular forces tend to zero, and molecular collisions are perfectly elastic) [26]. the ideal gas model is a convenient limiting case, which can be deduced from theoretical considerations, but it does not accurately represent the gas-phase behaviour of pure substances or mixtures at pressures other than zero [27]. among the real gas models that have been developed, the virial equation of state has the desirable characteristic that its parameters can be related to intermolecular forces [27]. considering the above, in the present work a model based on the virial equation to represent the behaviour of the expanding substance was established as the reference model for the final pressure after expansion. spatial (although not temporal) homogeneity of temperature was assumed, considering that the static expansion system will eventually operate under controlled environmental conditions. two simplifications were evaluated: (1) using an ideal gas model instead of a real gas model, and (2) neglecting the initial pressure in the calibration tank. the main reason supporting the use of the ideal gas model is that it rests on assumptions about the behaviour of gaseous substances that are approximately satisfied at very low pressures [26]. additionally, it is common to use nitrogen as the gas inside the expansion system, and the virial coefficient of this gas is very small; this coefficient may be relevant for other, heavier inert gases [14]. regarding the initial pressure in the large tank, all the references consulted [5], [7], [8], [10], [13], [23], [24], [25] neglect it, but it is reasonable to wonder how much of an impact this assumption can have. in this way, four models were compared. model 1 was the model without simplifications and was therefore taken as the reference model; this model is presented at the beginning of section 4 (results and discussion). model 2 was based on the ideal gas model and considered the initial pressure in the large tank. model 3 was based on the real gas model but neglected the initial pressure in the large tank. model 4 contained both simplifying assumptions, that is, it was based on the ideal gas model and used zero as the initial pressure in the calibration tank. to make the comparisons, the final pressure was calculated with each of the four models, and the error in the pressure value of each of the last three models with respect to the reference one (model 1) was determined. the error of models 2, 3 and 4 was calculated with equation (4), where $E_i$ is the percentage error made by the i-th model (i = 2, 3, 4), $P_{f,1}$ is the final pressure calculated with model 1, and $P_{f,i}$ is the final pressure calculated with the i-th model:

$E_i = \dfrac{P_{f,i} - P_{f,1}}{P_{f,1}} \cdot 100\ \%$ (4)

in order to consider a wide range of conditions, comparisons were made with the possible combinations of two volume ratios, two differences between initial and final temperatures, two initial pressures in the small tank, two initial pressures in the large tank, and four numbers of consecutive expansions. after determining whether any of the simplified models was appropriate for the two-tank static expansion system, the gum method was applied, without correlations or higher-order terms, to estimate the uncertainty using the chosen model as the measurement model. the uncertainty budget, that is, the contribution of the different input quantities to the final pressure, was evaluated for different uncertainty values of said input quantities [4]. in this way, the importance of the different input quantities for the final pressure was evaluated over a wide range of conditions.

4. results and discussion

by modelling the behaviour of nitrogen using the virial expansion truncated at the second term and considering the initial pressure in the large tank, the final pressure after a static expansion process is calculated by

$P_f = \dfrac{1 + B_f P_f / (R\, T_f)}{V_P + V_G} \left[ \dfrac{P_i V_P}{1 + B_i P_i / (R\, T_i)} + \dfrac{P_{i,G} V_G}{1 + B_i P_{i,G} / (R\, T_i)} \right] \dfrac{T_f}{T_i}$ (5)

this model was called "model 1" and is the reference model. the main drawback of the model in equation (5) is that $P_f$ appears implicitly, so the equation has to be solved with a numerical method. in the present work, the secant method [28] was used to solve equation (5). it became evident that, using as starting point the value of $P_f$ calculated with model 4 (the simplest one), the method required very few steps to achieve convergence.
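the following python sketch reproduces this solution strategy for equation (5): the residual of the implicit model is driven to zero with the secant method, starting from the model 4 value; the parameter names and tolerance are illustrative, and the virial coefficients must be supplied:

```python
R = 8.314462618  # molar gas constant, J mol^-1 K^-1

def model1_residual(pf, pi, pig, vp, vg, ti, tf, bi, bf):
    """residual of equation (5): rhs(pf) - pf = 0 at the solution."""
    rhs = (1 + bf*pf/(R*tf)) / (vp + vg) * (
        pi*vp/(1 + bi*pi/(R*ti)) + pig*vg/(1 + bi*pig/(R*ti))) * tf/ti
    return rhs - pf

def solve_model1(pi, pig, vp, vg, ti, tf, bi, bf, tol=1e-12):
    """solve the implicit model 1 with the secant method, starting
    from the model 4 (ideal gas, zero residual pressure) value."""
    p_prev = pi * vp / (vp + vg) * tf / ti        # model 4 starting point
    p_curr = 1.001 * p_prev
    g_prev = model1_residual(p_prev, pi, pig, vp, vg, ti, tf, bi, bf)
    for _ in range(100):
        g_curr = model1_residual(p_curr, pi, pig, vp, vg, ti, tf, bi, bf)
        if g_curr == g_prev:
            break
        p_next = p_curr - g_curr * (p_curr - p_prev) / (g_curr - g_prev)
        p_prev, g_prev = p_curr, g_curr
        p_curr = p_next
        if abs(p_curr - p_prev) < tol:
            break
    return p_curr
```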
equation (6) presents "model 2", the model for the final pressure after a static expansion process obtained using the ideal gas law and considering the initial pressure in the large tank:

$P_f = \dfrac{P_i V_P + P_{i,G} V_G}{V_P + V_G} \, \dfrac{T_f}{T_i}$ (6)

"model 3" is equation (2) (shown in the state of the art), which represents the calculation of the final pressure based on the virial expansion truncated at the second term, neglecting the initial pressure in the large tank. this model is implicit in $P_f$, just like model 1. "model 4" is equation (1) (shown in the state of the art), which represents the calculation of the resulting pressure modelling the substance as an ideal gas and neglecting the initial pressure in the large tank. sixty-four conditions were simulated, corresponding to the possible combinations of the following values of the input variables: two volume ratios (1:20 and 1:150), two differences between final and initial temperatures (0 k and 5 k), two initial pressures in the small tank (10 000 pa and 50 000 pa), two initial pressures in the large tank (0.001 pa and 0.000 01 pa) and four numbers of consecutive expansions (1, 2, 3 and 4). the percentage errors in the calculation of the final pressure committed by the three evaluated models in each of the sixty-four conditions are summarised in figure 2. the highest error made with model 2 is -0.0134 %, and it occurs under certain conditions when the expansion process is performed only once. this behaviour is explained by the fact that the ideal gas model works better the lower the pressure, and the highest pressure values (the lowest vacuum levels) are obtained when performing a single expansion. in any case, the error is quite low, and depending on the target uncertainty of the final pressure, model 2 may well be usable without problems. on the other hand, models 3 and 4 produce very high errors under certain conditions, exceeding -20 % after three consecutive expansions and reaching -97 % in some cases with four consecutive expansions. these very high errors occur when the initial pressure in the large tank is 1 · 10⁻³ pa, which is an exaggeratedly high value considering the capabilities of current vacuum pumps, such as turbomolecular pumps, but one that could occur if the system is not properly baked. in any case, considering that the purpose of the analysis is to evaluate the performance of the models under different conditions, the combinations of input-quantity values tested indicate that in some situations models 3 and 4 will perform unacceptably in determining the reference pressure in a pressure gauge calibration process.

figure 2. percentage error in the final pressure calculated with the evaluated models, against the number of consecutive expansions. a: error made by model 2. b: error presented by model 3. c: resulting error when applying model 4.

based on the results of the previous section, it was decided to use model 2 to perform the uncertainty analysis. the gum equation applied to said model is shown in equation (7):

$u(P_f) = \left[ \left( \dfrac{\partial P_f}{\partial P_i} \right)^2 u^2(P_i) + \left( \dfrac{\partial P_f}{\partial P_{i,G}} \right)^2 u^2(P_{i,G}) + \left( \dfrac{\partial P_f}{\partial V_P} \right)^2 u^2(V_P) + \left( \dfrac{\partial P_f}{\partial V_G} \right)^2 u^2(V_G) + \left( \dfrac{\partial P_f}{\partial T_f} \right)^2 u^2(T_f) + \left( \dfrac{\partial P_f}{\partial T_i} \right)^2 u^2(T_i) \right]^{0.5}$ (7)
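for model 2, the sensitivity coefficients of equation (7) can be written analytically, since equation (6) is explicit. a minimal python sketch; the derivative expressions follow directly from equation (6), while the input ordering is our own convention:

```python
import numpy as np

def u_pf_model2(pi, pig, vp, vg, ti, tf, u):
    """standard uncertainty of the model 2 final pressure via equation (7);
    u = (u_pi, u_pig, u_vp, u_vg, u_ti, u_tf), in the units of each input."""
    pf = (pi*vp + pig*vg) / (vp + vg) * tf / ti
    c = np.array([
        vp / (vp + vg) * tf / ti,                   # dPf/dPi
        vg / (vp + vg) * tf / ti,                   # dPf/dPi,G
        vg * (pi - pig) / (vp + vg)**2 * tf / ti,   # dPf/dVp
        vp * (pig - pi) / (vp + vg)**2 * tf / ti,   # dPf/dVg
        -pf / ti,                                   # dPf/dTi
        pf / tf,                                    # dPf/dTf
    ])
    return pf, np.sqrt(np.sum((c * np.asarray(u))**2))
```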
to evaluate the general effect of the uncertainty of the input quantities on the uncertainty of the final pressure, a base case and six derived cases were considered; the base case is presented in table 2. four consecutive expansions were considered in the base case, so that the pressures after the first, second, third and fourth consecutive expansion were 496.7 pa, 4.935 pa, 0.049 03 pa and 0.000 497 0 pa, respectively. in the six derived cases, the values of the input quantities remained identical to those of the base case, as did all the uncertainties except one, which was set at 1 % of the value of the quantity. table 3 presents the standard uncertainties, as a percentage of the value of the measurand, obtained after the different numbers of expansions tested in each of the 7 study cases. in all cases, the relative standard uncertainty increases as more expansions are made and the final pressure decreases. in the hypothetical base case, the uncertainty of the pressure values obtained is high, reaching 1.2 % after the fourth expansion (table 3).

figure 3. percentage contributions of the uncertainties of the input quantities to the uncertainty of the final pressure ("uncertainty budget"), for the seven case studies, with four different numbers of consecutive expansions. the percentage contribution of the initial pressure in the large tank was omitted from the budgets, since it was less than 0.03 % in all cases. a: one expansion. b: two expansions. c: three expansions. d: four expansions.

table 2. values of the input quantities and their uncertainties in the base study case defined for both uncertainty analyses. the standard uncertainty of each input quantity was 0.3 % of the respective value of the quantity.
input quantity | value | standard uncertainty
$P_i$ (pa) | 50 000 | 45
$P_{i,G}$ (pa) | 0.000 010 | 0.000 002
$V_P$ (m³) | 0.001 000 | 0.000 005
$V_G$ (m³) | 0.100 00 | 0.000 05
$T_i$ (k) | 296.15 | 0.30
$T_f$ (k) | 297.15 | 0.30
table 4 presents the results of a second uncertainty analysis, performed with the input values and uncertainties listed in table 2. this study case shows that the resulting pressure uncertainty grows from 0.72 % with one expansion to 1.5 % after the fourth expansion. it is also evident that the uncertainty of the final pressure is dominated by the uncertainties of the volumes of the tanks, with the uncertainty of the volume of each tank contributing 47.2 % to the uncertainty of the pressure after the first expansion (and considering that the pressure after 2, 3 and 4 expansions is dominated by the initial pressure, that is, the final pressure of the previous expansion process). it is interesting that the contribution of the initial pressure in the large tank is only appreciable in the fourth expansion.

5. conclusions

it was possible to evaluate the adequacy of the different models proposed. it was determined that the use of an ideal gas model instead of a real gas model caused a maximum error of 0.0135 % in the pressure value under the evaluated conditions (64 different conditions, with between 1 and 4 expansion processes). thus, depending on the uncertainty objective of the calibration process with the expansion system, this simplification may be usable without problems. on the other hand, neglecting the initial pressure in the calibration chamber can lead to errors in the pressure value of several tens of percent, even 97 % under the evaluated conditions, especially as the number of consecutive expansions increases. therefore, it is concluded that it is preferable not to neglect the initial residual pressure in the calibration chamber, unless it is guaranteed that said pressure is kept at 1 · 10⁻⁷ pa or less, with the baking processes and long pumping periods that this requires. additionally, it was possible to analyse the effect of the uncertainty of the input quantities on the uncertainty of the final pressure after one or more consecutive expansions. it became evident that the quantities with the greatest influence on the final pressure obtained are the volumes of the tanks used in the expansion processes.

acknowledgements

this work was financed by colombia's servicio nacional de aprendizaje (sena) through the special cooperation agreement no. 0233 of 2018. sena's centro industrial del diseño y la manufactura and centro industrial y del desarrollo tecnológico participated in the project transfer plan.

table 3. relative standard uncertainty (%) of the final pressure after several consecutive expansions, for the seven case studies proposed to review the impact of the input quantities.
study case | first expansion | second expansion | third expansion | fourth expansion
base case | 0.67 | 0.90 | 1.1 | 1.2
u(P_i) = 0.01 · P_i | 1.2 | 1.3 | 1.4 | 1.5
u(P_i,G) = 0.01 · P_i,G | 0.67 | 0.90 | 1.1 | 1.2
u(V_P) = 0.01 · V_P | 1.2 | 1.6 | 2.0 | 2.2
u(V_G) = 0.01 · V_G | 1.2 | 1.6 | 2.0 | 2.2
u(T_i) = 0.01 · T_i | 1.2 | 1.6 | 2.0 | 2.2
u(T_f) = 0.01 · T_f | 1.2 | 1.6 | 2.0 | 2.2

table 4. results of the second uncertainty analysis (the last six columns are the contributions to the uncertainty budget, in %).
expansion number | P_f (pa) | u(P_f) (pa) | P_i | P_i,G | V_P | V_G | T_i | T_f
1 | 496.7 | 3.6 | 1.6 | 0.0 | 47.2 | 47.2 | 2.0 | 2.0
2 | 4.935 | 0.050 | 50.4 | 0.0 | 23.8 | 23.8 | 1.0 | 1.0
3 | 0.049 03 | 0.000 61 | 66.8 | 0.0 | 15.9 | 15.9 | 0.7 | 0.7
4 | 0.000 497 0 | 0.000 007 3 | 69.4 | 7.5 | 11.1 | 11.1 | 0.5 | 0.5

references
[1] national physical laboratory, guide to the measurement of pressure and vacuum, london, 1988.
[2] joint committee for guides in metrology, jcgm 200:2012 – international vocabulary of metrology – basic and general concepts and associated terms (vim) – 3rd ed., joint committee for guides in metrology, 2012.
[3] r. e. ellefson, a. p. miller, recommended practice for calibrating vacuum gauges of the thermal conductivity type, journal of vacuum science & technology a 18(5) (2000), pp. 2568-2577. doi: 10.1116/1.1286024
[4] joint committee for guides in metrology, jcgm 100:2008 – evaluation of measurement data – guide to the expression of uncertainty in measurement, joint committee for guides in metrology, 2008.
[5] s. ruiz gonzález, desarrollo de un nuevo patrón nacional de presión desde la columna de mercurio a patrones primarios de vacío (tesis doctoral), universidad de valladolid, valladolid, españa, 2000.
[6] b. g. lipták, instrument engineers' handbook – fourth edition – process measurement and analysis – volume i, crc press llc, boca raton, 2003.
[7] j. c. greenwood, simulation of the operation and characteristics of static expansion pressure standards, vacuum 80 (2006), pp. 548-553. doi: 10.1016/j.vacuum.2005.09.003
[8] w. jitschin, high-accuracy calibration in the vacuum range 0.3 pa to 4000 pa using the primary standard of static gas expansion, metrologia 39 (2002), pp. 249-261. doi: 10.1088/0026-1394/39/3/2
[9] n. medina, s. ruiz gonzález, c. matilla, developments in the pressure field at cem, imeko 20th tc3, 3rd tc16 and 1st tc222 international conference – cultivating metrological knowledge, mérida, mexico, 2007. online [accessed 8 september 2021] https://www.imeko.org/publications/tc16-2007/imeko-tc16-2007-036u.pdf
[10] m. astrua, d. mari, s. pasqualin, improvement of inrim static expansion system as vacuum primary standard between 10⁻⁴ pa and 1000 pa, 19th international congress of metrology, 2019, pp. 27007. doi: 10.1051/metrology/20192700
[11] j. c. torres guzmán, l. a. santander romero, k. jousten, realization of the medium and high vacuum primary standard in cenam, mexico, metrologia 42 (2005), pp. s157-s160. doi: 10.1088/0026-1394/42/6/s01
[12] s. phanakulwijit, j. pitakarnnop, establishment of thailand's national primary vacuum standard by a static expansion method, journal of physics: conference series 1380 (2019), pp. 012003. doi: 10.1088/1742-6596/1380/1/012003
[13] d. herranz, a. pérez, realización de un sistema de expansión estática como patrón nacional de presión absoluta en el rango de 10⁻⁴ a 1000 pa, jornada de difusión de resultados de proyectos cem, madrid, españa, 2010.
[14] k. f. poulter, the calibration of vacuum gauges, journal of physics e: scientific instruments 10(2) (1977), pp. 112-125. doi: 10.1088/0022-3735/10/2/002
[15] y. takei, h. yoshida, e. komatsu, k. arai, uncertainty evaluation of the static expansion system and its long-term stability at nmij, vacuum 187 (2021), 110034. doi: 10.1016/j.vacuum.2020.110034
[16] international organization for standardization, international standard iso 3567 – vacuum gauges – calibration by direct comparison with a reference gauge, international organization for standardization, 2011.
[17] d. herranz, s. ruiz gonzález, n. medina, volume ratio determination in static expansion systems by means of two pressure balances, xix imeko world congress – fundamental and applied metrology, lisboa, portugal, 2009. online [accessed 8 september 2021] https://www.imeko.org/publications/wc-2009/imeko-wc-2009-tc16-280.pdf
[18] w. steckelmacher, the calibration of vacuum gauges, vacuum 37(8-9) (1987), pp. 651-657. doi: 10.1016/0042-207x(87)90051-0
[19] centro español de metrología, procedimiento me-001 para la calibración de medidores de vacío – edición digital 1, centro español de metrología, madrid, 2011.
[20] j. a. fedchak, p. j. abbott, j. h. hendricks, p. c. arnold, n. t. peacock, review article: recommended practice for calibrating vacuum gauges of the ionization type, journal of vacuum science & technology a 36(3) (2018), pp. 030802. doi: 10.1116/1.5025060
[21] international organization for standardization, international standard iso 19685 – vacuum gauges – specifications, calibration and measurement uncertainties for pirani gauges, international organization for standardization, switzerland, 2017.
[22] p. semwal, z. khan, k. r. dhanani, f. s. pathan, s. george, d. c. raval, p. l. thankey, y. paravatsu, m. himabindu, spinning rotor gauge based vacuum gauge calibration system at the institute for plasma research (ipr), journal of physics: conference series 390 (2012), pp. 012027. doi: 10.1088/1742-6596/390/1/012027
[23] s. cardona b., j. c. torres guzmán, l. santander romero, sistema de referencia nacional para la medición de vacío, simposio de metrología 2001 cenam, querétaro, méxico, 2001.
[24] k. jousten, p. röhl, v. m. aranda contreras, volume ratio determination in static expansion systems by means of a spinning rotor gauge, vacuum 52 (1999), pp. 491-499. doi: 10.1016/s0042-207x(98)00337-6
[25] r. kangi, b. ongun, a. elkatmis, the new ume primary standard for pressure generation in the range from 9 × 10⁻⁴ pa to 10³ pa, metrologia 41 (2004), pp. 251-256. doi: 10.1088/0026-1394/41/4/005
[26] j. m. smith, h. c. van ness, m. m. abbott, introducción a la termodinámica en ingeniería química, 7th ed., mcgraw-hill interamericana, mexico, 2007.
[27] j. m. prausnitz, r. n. lichtenthaler, e. gomes de azevedo, termodinámica molecular de los equilibrios de fase, 3rd ed., prentice-hall, madrid, 2000.
[28] r. l. burden, j. d. faires, numerical analysis, 9th ed., brooks/cole, cengage learning, usa, 2011.

a low-acceleration measurement using anti-vibration table with low-frequency resonance

acta imeko issn: 2221-870x, december 2020, volume 9, number 5, 369-373

t. shimoda1, w. kokuyama2, h. nozato3
1 national metrology institute of japan (nmij/aist), tsukuba, japan, tomofumi.shimoda@aist.go.jp
2 national metrology institute of japan (nmij/aist), tsukuba, japan, wataru.kokuyama@aist.go.jp
3 national metrology institute of japan (nmij/aist), tsukuba, japan, hideaki.nozato@aist.go.jp

abstract: this manuscript describes how nmij isolates the interferometer optics from ground vibration for low-acceleration measurement by installing an anti-vibration table. such a vibration isolation system is designed for an accelerometer calibration system to reduce vibration noise from the microtremor or from the reaction of the vibration exciter. mitigating the vibration of the optics enables the evaluation of accelerometers at small amplitudes, which is required in aerospace or infrastructure monitoring applications. in this manuscript, the vibration transmissibility of the anti-vibration table is measured using a triaxial seismometer, and its benefit in the calibration system is discussed.

keywords: laser interferometer, anti-vibration table, microtremor, ground noise, low-acceleration measurement

1. introduction

low-acceleration measurements are increasingly required in various industrial fields such as aerospace and infrastructure monitoring. the resolution of earth observation by satellite imaging, one of the main aerospace applications, suffers from micro-vibrations [1]. in an on-orbit satellite, moving instruments such as mechanical gyroscopes or reaction wheels can generate micro-vibration, which is typically smaller than 10⁻² m/s² [2]. in infrastructure monitoring, continuous measurement of the eigenfrequency of a structure using the microtremor, which is on the order of 10⁻³ m/s² or less, has been proposed [4], [5]. the demand for such measurements is gradually increasing with the aging of much of the infrastructure.

as the basis for these applications, the evaluation of accelerometers is essential to verify the reliability of measurements. a calibration system using a laser interferometer and a vibration exciter is used to determine the response of an accelerometer [6]. in this system, a target accelerometer is vibrated by the exciter, and its displacement is precisely monitored by the interferometer for calibration. for low acceleration, the calibration result can suffer from the background noise of the interferometer, which originates from seismic vibration, the self-noise of the interferometer, and so on. mitigation of such noise is necessary to meet the demands of low-acceleration measurements. for these purposes, we try to reduce the noise from the microtremor, which typically appears below a few hundred hz. as the first step, we evaluate how to reduce the microtremor noise in the low-frequency calibration system at nmij [7]. an overview of the microtremor issue and schematics of the noise reduction are presented in section 2. next, in section 3, experimental results using an anti-vibration table with a low resonant frequency are reported. the conclusion is given in section 4.

2. seismic noise in a calibration system

2.1. overview of accelerometer calibration

figure 1: schematics of a low-frequency accelerometer calibration system.

figure 1 shows the schematics of the low-frequency calibration system. a target accelerometer is fixed to a vibration exciter and vibrated at a certain frequency and amplitude.
at the same time, the displacement of the accelerometer is measured by a laser interferometer constructed on another table. the output signal from the accelerometer is then compared to the measured displacement to evaluate the sensitivity. in this process, the accelerometer responds to the absolute vibration x_a with respect to the inertial frame, while the laser interferometer measures the relative vibration (x_a − x_o) between the optical table and the reflection surface on the exciter. these two vibrations differ by the vibration of the optical table x_o, which originates from the reaction of the exciter or from the microtremor:

$x_\mathrm{o} = r H_\mathrm{o} x_\mathrm{a} + H_\mathrm{o} x_\mathrm{g}$ ,  (1)

where r denotes the fraction of the reaction, H_o the vibration transmissibility of the optical table, and x_g the microtremor. if the reaction is carefully suppressed, this discrepancy is not important in usual cases because the vibration amplitude of the exciter is sufficiently larger than that of the optical table induced by the microtremor: x_a ≫ H_o x_g. however, the effect of the microtremor becomes relatively non-negligible in the low-acceleration measurements mentioned in section 1.

figure 2 presents the amplitude spectral density (asd) of the optical table vibration x_o of the current system induced by the microtremor, measured with a broadband triaxial seismometer (trillium compact 120s) at nmij. the vibration is on the order of 10⁻⁴–10⁻⁶ m/s²/√hz.

figure 2: asd of the optical table vibration in x (green), y (orange), and z (blue) directions, measured at nmij, japan. the self-noise of the seismometer is plotted with the dashed black line. the dashed red line shows 10⁻⁵ m/s²/√hz for comparison.

figure 3 shows the asds of the interferometer signal measuring (x_a − x_o) of the low-frequency calibration system and of the seismometer signal measuring x_o, when the vibration exciter is turned off. the sum of these two signals corresponds to the noise in determining x_a. the vibration of the optical table dominates the total noise below 10 hz. therefore, suppression of vibration is one of the primary requirements for accelerometer calibration at small acceleration amplitudes.

figure 3: asd of the interferometer signal of the low-frequency calibration system (red), circuit noise of the interferometer (grey), and the optical table vibration (green). the dashed blue line shows the self-noise of the interferometer, which was estimated by subtracting the table vibration from the interferometer signal.

additionally, a signal from sources other than vibration, plotted with the dashed blue line in figure 3, limits the current performance above 10 hz. the cyclic error of the interferometer, which originates from non-linearity of the signal, is a suspected noise source between 10 hz and 100 hz. the electrical circuit noise is dominant above 100 hz. mitigation of such noise sources is also essential to improve the overall performance. to determine a 10⁻³ m/s² amplitude for calibration with 1 % accuracy in 1 second, the asd of the background noise needs to be below 10⁻⁵ m/s²/√hz at the oscillation frequency. in this manuscript, we focus on the reduction of the vibration noise.

2.2. low-frequency vibration isolation

a straightforward approach to low-acceleration measurement is to reduce the vibration transmissibility H_o.
H_o of a single mass-spring-damper model has the form

$H_\mathrm{o}(f) = \frac{f_0^2}{f_0^2 + \mathrm{i} f f_0 / Q - f^2}$ ,  (2)

which is characterized by the resonant frequency f_0 and the quality factor Q. as the equation shows, vibration is suppressed above the resonant frequency, in proportion to f⁻² in an ideally simple system. the current table has f_0 ≈ 7 hz and Q ≈ 10. in this case, the microtremor is not sufficiently isolated below 10 hz, where the vibration noise is dominant. a resonant frequency below 1 hz is desired to suppress the excess vibration in figure 2 down to 1 hz.

to achieve the low resonant frequency, we are installing an anti-vibration system that has resonant frequencies of 0.25 hz in the horizontal and vertical directions. the system consists of a spring-antispring system, in which the restoring force of the suspension is partially cancelled by the anti-restoring force from gravity or the elastic part. this enables the low resonant frequency with a relatively small size (~0.7 m). for comparison, a simple pendulum-type isolation system would require a 4 m size to achieve 0.25 hz. the optical table in figure 1 is replaced with the low-frequency anti-vibration system shown in figure 4.

figure 4: schematic diagram of the low-frequency anti-vibration system for the laser interferometer.

as equation (1) shows, suppression of the transmissibility also contributes to the reduction of the reaction from the vibration excitation through the ground vibration. however, it should be noted that the response to external force fluctuations such as sound or airflow can be enhanced below the resonant frequency, because the low-frequency system has low stiffness. excess low-frequency fluctuation induces alignment fluctuation, which affects the calibration result at higher frequencies. therefore, the environmental disturbance is also an important factor that determines the performance.

figure 5: vibration measurement on the anti-vibration stage.

3. evaluation of an anti-vibration system

3.1. vibration transmissibility measurement

in order to evaluate the performance of the system, a seismometer was placed on the isolated stage as shown in figure 5. after the vibration measurement on the stage, the seismometer was moved to the table on which the anti-vibration system is placed. the vibration spectra at these two points were compared to estimate the vibration transmissibility of the system. the measured vibration spectra are presented in figure 6. the vibration on the isolated stage was almost as expected from the vibration on the table (blue line) and the theoretical vibration transmissibility (equation (2)); here a resonant frequency f_0 = 0.3 hz and a quality factor Q = 1 were assumed. the microtremor is successfully suppressed by ~100 times around a few hz, and the residual noise level is below 10⁻⁵ m/s²/√hz above 1 hz except for the peak at 7 hz, though the seismometer signal was not measured correctly above 20 hz because of the data logger noise.

figure 6: performance of the anti-vibration system. the blue and green lines show the spectrum on the table and on the isolated stage, respectively. the orange line is the expected spectrum on the stage. the black dashed and grey lines are the self-noise of the seismometer and the data logger, respectively.

on the other hand, the vibration below 0.2 hz was increased on the isolated stage, by about ten times compared to the table.
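to make the effect of equation (2) concrete, the following python sketch (our illustration, not code from the original measurement system) evaluates the transmissibility magnitude for the current table (f_0 ≈ 7 hz, Q ≈ 10) and for the anti-vibration system (f_0 = 0.25 hz, with Q = 1 as assumed in the measurement above):

import numpy as np

def transmissibility(f, f0, q):
    """magnitude of the mass-spring-damper transmissibility of equation (2)."""
    return np.abs(f0**2 / (f0**2 + 1j * f * f0 / q - f**2))

f = np.array([1.0, 3.0, 10.0])                  # frequencies of interest, hz
h_table = transmissibility(f, f0=7.0, q=10.0)   # current optical table
h_iso = transmissibility(f, f0=0.25, q=1.0)     # low-frequency anti-vibration system

for fi, ht, hi in zip(f, h_table, h_iso):
    print(f"{fi:5.1f} hz: table |H| = {ht:.3g}, isolated |H| = {hi:.3g}")

at a few hz this gives a suppression of roughly two orders of magnitude for the low-frequency system, consistent with the ~100-fold isolation reported above.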
such an excess may originate from external force disturbances, because the anti-vibration system has low stiffness to lower the resonant frequency, as mentioned in section 2.2. a wind shield is planned to be installed around the anti-vibration system to protect it from airflow and sound. other possible disturbances, such as tilt fluctuation of the stage or temperature fluctuation, should also be evaluated.

3.2. expected improvement of the interferometer background noise

as explained in section 2.1, the interferometer measures (x_a − x_o), and x_o becomes background noise in accelerometer calibration. therefore, the total background noise of the interferometer can be estimated by the sum of the vibration spectrum x_o shown in figure 6 and the interferometer self-noise spectrum shown in figure 7. figure 7 presents the estimated noise level of the low-frequency accelerometer calibration system with the anti-vibration system. the noise is expected to be suppressed by a few tens of times between 1 and 10 hz, where the vibration of the interferometer stage is the dominant background noise. such frequencies are important for the monitoring of infrastructure, which typically has a resonance frequency around 1 hz. in this frequency range, the noise level will be below 10⁻⁵ m/s²/√hz. on the other hand, the self-noise of the interferometer dominates above 10 hz, hence the vibration isolation will not reduce the noise there. as mentioned in section 2.1, suppression of the interferometer self-noise is required for further reduction of the background noise, especially above 10 hz. alternatively, it may be easier to replace the interferometer with a low-noise commercial product to improve the calibration capability.

4. conclusion

an anti-vibration table is being installed into the low-frequency calibration system at nmij/aist for low-acceleration measurement. the table has a low resonant frequency of ~0.25 hz to isolate the vibration at low frequencies. to evaluate the performance of the table, the vibration transmissibility was measured using a seismometer. the vibration was successfully isolated on the table by 100 times around a few hz. as a result, the interferometer background noise is expected to be lower than 10⁻⁵ m/s²/√hz below 10 hz, which enables the calibration system to determine an acceleration amplitude of 10⁻³ m/s² with 1 % accuracy in a reasonable time. to extend the frequency range above 10 hz, reduction of the interferometer noise other than the microtremor is necessary.

5. acknowledgement

this work is partially based on the results obtained from a project commissioned by the new energy and industrial technology development organization (nedo), japan. the authors thank tamio ishigami, koichiro hattori, akihiro ota, takashi usuda, hiromi mitsumori, yoshiteru kusano (nmij) for useful discussions and cooperation.

6. references

[1] k. komatsu, h. uchida, "microvibration in spacecraft", mechanical engineering reviews, vol. 1, no. 2, 2014.
[2] m. privat, "on ground and in orbit microvibration measurement comparison", proceedings of the 8th european conference on spacecraft structures, material and mechanical testing, 1998.
[3] d. yu, g. wang, y. zhao, "on-orbit measurement and analysis of the micro-vibration in a remote-sensing satellite", adv. astronaut. sci. technol., vol. 1, pp. 191–195, 2018.
[4] y. ikeda, s. yoshitaka, s.
yasutsugu, "damage detection of actual building structures through singular value decomposition of power spectral density matrices of microtremor responses", aij journal of technology, vol. 16, no. 32, pp. 69–74, 2010 (in japanese).
[5] y. jiang, y. gao, x. wu, "the nature frequency identification of tunnel lining based on the microtremor method", underground space, vol. 1, no. 2, pp. 108–113, 2016.
[6] iso 16063-11:1999, "methods for the calibration of vibration and shock transducers – part 11: primary vibration calibration by laser interferometry".
[7] w. kokuyama, t. ishigami, h. nozato, a. ota, "improvement of very low-frequency primary vibration calibration system at nmij/aist", in proc. of xxi imeko world congress, prague, czech republic, 2015.

figure 7: expected improvement of the background noise of the calibration system by using the low-frequency anti-vibration system (thick blue). the contributions from the vibration of the interferometer stage (green) and from the self-noise of the interferometer (orange) are also presented. for comparison, the current noise level is plotted with the dashed blue line.

omnidirectional camera pose estimation and projective texture mapping for photorealistic 3d virtual reality experiences

acta imeko issn: 2221-870x june 2022, volume 11, number 2, 1-8

alessandro luchetti1, matteo zanetti1, denis kalkofen2, mariolino de cecco1
1 department of industrial engineering, university of trento, sommarive 9, 38123 trento, italy
2 institute of computer graphics and vision, graz university of technology, rechbauerstraße 12, 8010 graz, austria

section: research paper

keywords: omnidirectional cameras; mesh reconstruction; camera pose estimation; optimization; enhanced comprehension

citation: alessandro luchetti, matteo zanetti, denis kalkofen, mariolino de cecco, omnidirectional camera pose estimation and projective texture mapping for photorealistic 3d virtual reality experiences, acta imeko, vol. 11, no. 2, article 24, june 2022, identifier: imeko-acta-11 (2022)-02-24

section editor: alfredo cigada, politecnico di milano, italy

received may 26, 2021; in final form march 21, 2022; published june 2022

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

funding: this work was developed inside the european project mirebooks: mixed reality handbooks for mining education, a project funded by eit raw materials.

corresponding author: alessandro luchetti, e-mail: alessandro.luchetti@unitn.it

abstract: modern applications in virtual reality require a high level of fruition of the environment, as if it were real. in applications that have to deal with real scenarios, it is important to acquire both its three-dimensional (3d) structure and its details to enable the users to achieve good immersive experiences. the purpose of this paper is to illustrate a method to obtain a mesh with high-quality texture by combining a raw 3d mesh model of the environment and 360° images. the main outcome is a mesh with a high level of photorealistic detail. this enables both good depth perception, thanks to the mesh model, and high visualization quality, thanks to the 2d resolution of modern omnidirectional cameras. the fundamental step to reach this goal is the correct alignment between the 360° camera and the 3d mesh model. for this reason, we propose a method that embodies two steps: 1) find the 360° camera pose within the current 3d environment; 2) project the high-quality 360° image on top of the mesh. after the method description, we outline its validation in two virtual reality scenarios, a mine and a city environment, respectively, which allows us to compare the achieved results with the ground truth.

1. introduction

media acquired by 360° cameras (also known as omnidirectional, spherical, or panoramic cameras) is becoming increasingly important to many applications. compared to conventional cameras, images taken by 360° cameras offer a larger field of view, which is why they are traditionally useful to applications that derive their state from information about the environment. examples include robot localization, navigation, and visual servoing [1].
however, omnidirectional cameras have recently also become an essential tool for content creation in virtual reality (vr) applications, because spherical photographs and videos can provide a high level of realism. for example, applications for real estate agents already make use of omnidirectional images and video data within a vr head-mounted display to improve the realism of virtual customer inspections, and research domains span widely from 360° tourism [2] to education in 360° classrooms [3]. vr applications using omnidirectional media allow their users to change the view within the boundaries of a 360° image that has been captured at a specific point of interest (poi). thus, vr users are commonly restricted to head rotations only, while translations require transitioning into a 360° image that has been captured at a different poi [4]. as a consequence, motion parallax is missing in vr applications that use omnidirectional data. furthermore, view transitions are limited to where omnidirectional images or videos exist. these shortcomings limit the benefit of omnidirectional media in vr. for example, the missing 3d information restricts the usage of advanced exploration techniques [5], [6], and the missing motion parallax can cause visual discomfort [7].

to overcome these limitations, we propose combining omnidirectional photorealistic image data with its corresponding 3d representation. since 3d reconstructions commonly suffer from poor color representations, we apply projective texture mapping of omnidirectional images. our approach supports photorealistic image fidelity at the pois and motion parallax at viewpoints nearby. to enable projective texture mapping of 360° image data, we present an approach for omnidirectional camera pose estimation that automatically finds the position and orientation of the 360° camera relative to the 3d representation of the environment. to put our work in context, we first outline related work in section 2, before we describe our approaches to omnidirectional camera pose estimation and projective texture mapping in section 3. we evaluate our system in section 4 and discuss possible directions for future work in section 5.
2. related work

camera pose estimation has always been a key problem in computer vision. for example, makadia et al. [8] proposed a method for the alignment of large rotations, with potential impact on 3d shape alignment, that estimates the rotation directly from images defined on the sphere and without correspondences. unfortunately, this approach is robust only to small translations of the camera [9]. another work [10] addresses the problem of camera pose recovery from spherical panoramas using pairwise essential matrices. in this case, the exact position of each panorama was an important step to ensure the consistency of visual information in a database of georeferenced images. here the pose recovery works with a two-stage algorithm, first for rotations and then for translations, with poor results if the starting camera pose is very far from the correct one. the above-mentioned problems are overcome by our method, because it works also for large variations of translation as well as rotation. levin et al. also present in [11] a method to compute camera pose from a sequence of spherical images through the use of an essential matrix for the initial pairwise geometry. differently from our work and the work of [10], they also use a rough estimate of the camera path as an additional system input to calculate camera positions.

an example of generating a texture map of a 3d model with 2d high-quality images is given in [12]. in particular, it is a specific application to the e-commerce presentation of shoes. it consists of a texture mapping technique that comprises several phases: mesh partitioning, mesh parameterization and packing, texture transferring, and texture correction and optimization. in particular, in the texture transferring step, each mesh is allocated to a front image, and all meshes that use the same front image are put in a group. finally, the pixels from the front image corresponding to the 3d mesh are extracted. differently, our method uses only a spherical image to recreate the high-resolution 3d model by projecting each pixel of the image from the correct camera pose previously found. the results are obtained quickly and are good as long as the user's field of view rotates without large displacements with respect to the camera pose.

a similar approach, but for another application related to surveying tasks in architectural, archaeological, and cultural landscape conservation, is provided by abmayr et al. [13]. they developed a laser scanner, which offers high-accuracy measurements of object surfaces, combined with a panoramic color camera, to achieve precise and accurate monitoring of the actual environment by means of colored point clouds. the camera rotates on the same tripod as the laser scanner. many similarities with the method described in the present article can be found. the main difference resides in the use of a single 360° camera instead of a rotating unit, and in the use of an automatic pose estimation method instead of sharing the same tripod between laser scanner and camera during the acquisition process. our method is faster, and the 3d model reconstruction can be more complete, because the camera does not need to be at a fixed distance during the scanning process. this aspect becomes more important if it is necessary to reconstruct a high-resolution model with different cameras from unknown positions.

finally, an interesting study was provided by teo et al.
[14], where, in the context of remote collaboration, helpers shared 360° live videos or 3d virtual reconstructions of their surroundings from different places to work together with local workers. the results showed that participants preferred having both the 360° and 3d modes, as this provides variation in controls and features from different perspectives. our work provides a combination of a 360° live video and a 3d virtual reconstruction to combine their advantages without the need to switch between them.

3. method

in this section, the localization algorithm to estimate the camera pose (i.e., its position and orientation in the environment) and the method used to project the texture mapping on a 3d representation of the environment are explained.

3.1. camera pose estimation

a good alignment between the virtual environment and the captured image is fundamental for the final texture projection that will be covered in the next chapter. for example, this step is necessary when an operator needs to place the camera in a predefined position and orientation: some human errors may be made during this operation, and a method to find an accurate camera pose is necessary. moreover, for large distances, even small angle or position errors can compromise the final result. the large-scale automatic camera pose identification algorithm has been implemented in matlab 2019b, using a zmq communication protocol [15] between matlab and unity 3d. a particle swarm optimization (pso) was used. the procedure of the camera pose estimation is shown in figure 1. starting from the reconstructed 3d model, with its low-quality texture but with depth information of the environment, and given as input a high-quality equirectangular photorealistic image taken by an omnidirectional camera, the localization algorithm finds the pose that gives a 360° image, taken with a simulated camera, that is as similar as possible to the input one.

figure 1. schematic diagram of the camera pose detection algorithm.

in particular:
i. a new camera pose is set for each iteration of the pso algorithm.
ii. the equirectangular image corresponding to the camera pose set at the previous step is acquired.
iii. the algorithm checks the similarity between the new image and the input one that has to be used as a new texture for the 3d mesh; the parameters to be optimized are the translation and the euler angles to be applied to the 3d model to generate an equirectangular image that matches the one in input. the cost function for comparing the two equirectangular images uses the following quantities:
• the structural similarity (ssim) index of the equirectangular images.
• the mean-squared error (mse) between the two equirectangular images.
• ssim of the approximation coefficients (ssima) of level 1 of the wavelet decomposition.
• ssim of the horizontal detail coefficients (ssimh) of level 1 of the wavelet decomposition.
• ssim of the vertical detail coefficients (ssimv) of level 1 of the wavelet decomposition.
• ssim of the diagonal detail coefficients (ssimd) of level 1 of the wavelet decomposition.

the final cost function C, obtained by adding the above-mentioned quantities, is:

$C = SSIM + MSE + SSIM_A + SSIM_H + SSIM_V + SSIM_D$ .  (1)

the mse represents the cumulative squared error between two images x(i,j) and y(i,j):

$MSE(x, y) = \frac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} \left[ x(m,n) - y(m,n) \right]^2$ ,  (2)

where M and N are the number of rows and columns of x and y.
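as a minimal sketch of the cost function of equation (1), the snippet below combines the mse of equation (2) with the ssim terms; the ssim index itself is defined in equations (3) to (7) that follow. scikit-image and pywavelets are our library choices for the sketch, while the authors' implementation is in matlab:

import numpy as np
import pywt
from skimage.metrics import structural_similarity, mean_squared_error

def pose_cost(x, y):
    """score of equation (1) for two grayscale equirectangular images in [0, 1].
    note: as written, more similar images give larger ssim terms; a
    re-implementation should check the sign convention used by the pso,
    which minimizes the score."""
    score = structural_similarity(x, y, data_range=1.0) + mean_squared_error(x, y)
    # level-1 2d wavelet decomposition: approximation and h/v/d detail coefficients
    xa, (xh, xv, xd) = pywt.dwt2(x, "haar")
    ya, (yh, yv, yd) = pywt.dwt2(y, "haar")
    for a, b in ((xa, ya), (xh, yh), (xv, yv), (xd, yd)):
        rng = max(np.ptp(a), np.ptp(b), 1e-9)     # guard against flat coefficient maps
        score += structural_similarity(a, b, data_range=rng)
    return score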
ssim is used for measuring the similarity between two images x and y [16]. the ssim quality assessment index is based on the computation of three terms, namely the luminance term l, the contrast term c, and the structural term s. the overall index is a multiplicative combination of the three terms:

$SSIM(x, y) = [l(x, y)]^{\alpha} \, [c(x, y)]^{\beta} \, [s(x, y)]^{\gamma}$ ,  (3)

where

$l(x, y) = \frac{2 \mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}$ ,  (4)

$c(x, y) = \frac{2 \sigma_x \sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}$ ,  (5)

$s(x, y) = \frac{\sigma_{xy} + C_3}{\sigma_x \sigma_y + C_3}$ .  (6)

μ_x, μ_y, σ_x, σ_y, and σ_xy are the local means, standard deviations, and cross-covariance for the images x and y. C_1, C_2, C_3 are constants to avoid instability for image regions where the local mean or standard deviation is close to zero. choosing α = β = γ = 1 and C_3 = C_2/2, the index simplifies to:

$SSIM(x, y) = \frac{(2 \mu_x \mu_y + C_1)(2 \sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$ .  (7)

iv. the pso optimization runs until convergence, giving as output the best camera pose (translation and euler angles) that makes the two images as similar as possible.

3.2. texture projection

in this chapter, the method to apply the high-quality texture mapping is described. essentially, a merge of the high-quality 360° image with the 3d mesh is performed. firstly, the 3d cartesian coordinates and colors of each pixel of the 360° image are obtained by projecting the equirectangular image on the surface of a sphere of unitary radius. given an equirectangular image with N rows and M columns, each image pixel in 2d coordinates (n, m) is transformed into spherical coordinates, computing the corresponding azimuth a and elevation e and setting the radius R equal to 1. the equations used for the conversion are:

$a = -\left(\frac{m}{M} - 0.5\right) \cdot 2\pi$ ,  (8)

$e = -\left(\frac{n}{N} - 0.5\right) \cdot \pi$ ,  (9)

$R = 1$ .  (10)

finally, the 3d cartesian coordinates are obtained, to be visualized in the matlab software as a 3d point cloud. the mapping from spherical coordinates to 3d cartesian coordinates is:

$x = R \cdot \cos(e) \cdot \cos(a)$ ,  (11)

$y = R \cdot \cos(e) \cdot \sin(a)$ ,  (12)

$z = R \cdot \sin(e)$ .  (13)

this "spherical" point cloud was imported inside unity and placed with the position and orientation found in the previous pose estimation step. the raycasting technique was used: through the ray class, it is possible to emit or "cast" rays in a 3d environment and control the resulting collisions. the rays used in raycasting are invisible lines that have the center of the image sphere as the origin and are oriented in the direction of each pixel. the important point is that these invisible rays cast into the scene can return information about the gameobjects they hit. attached to the environment's mesh, as a gameobject in unity, there is a mesh collider to register a hit with a ray. when a ray intersects or "hits" a gameobject, the event is referred to as a raycasthit. this hit provides details about the gameobject and where it was hit, including a reference to the gameobject's transform, the length of the ray when it hits something, and the point in the world where the hit happened. once the collision of each pixel is detected, its new position is saved together with its color properties. lastly, the new point cloud was used to reconstruct a high-quality photorealistic texture, using the screened poisson surface reconstruction algorithm [17] implemented in meshlab [18]. this algorithm is particularly useful when the model to reconstruct is very big, with very fine details to be preserved.
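the pixel-to-sphere mapping of equations (8) to (13) and the subsequent ray casting can be sketched as follows; the paper performs the casting with unity's ray class, while here the trimesh library is our stand-in, and the file names and resolutions are hypothetical:

import numpy as np
import trimesh

def equirect_to_directions(n_rows, n_cols):
    """per-pixel unit-sphere directions following equations (8)-(13)."""
    n, m = np.mgrid[0:n_rows, 0:n_cols]
    a = -(m / n_cols - 0.5) * 2.0 * np.pi          # azimuth, equation (8)
    e = -(n / n_rows - 0.5) * np.pi                # elevation, equation (9)
    dirs = np.stack((np.cos(e) * np.cos(a),        # equation (11)
                     np.cos(e) * np.sin(a),        # equation (12)
                     np.sin(e)), axis=-1)          # equation (13)
    return dirs.reshape(-1, 3)

mesh = trimesh.load("environment.obj")             # hypothetical single-mesh obj file
image = np.zeros((128, 256, 3))                    # placeholder equirectangular image
camera_pos = np.array([0.0, 0.0, 1.5])             # position found by the pso step
# (the rotation of the directions by the camera orientation is omitted for brevity)

dirs = equirect_to_directions(*image.shape[:2])
origins = np.tile(camera_pos, (len(dirs), 1))
# first intersection of each pixel ray with the raw mesh
hits, index_ray, _ = mesh.ray.intersects_location(origins, dirs, multiple_hits=False)
colors = image.reshape(-1, 3)[index_ray]           # color each hit with its source pixel
colored_points = np.hstack((hits, colors))         # dense colored point cloud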
the reconstruction of the 3d model was done setting the reconstruction depth parameter (i.e., the maximum depth of the octree that is used to make the reconstruction) to 13. the default value of meshlab for this parameter is 8; we increased it because, in general, the higher this value is, the more time is needed for the reconstruction and the more details are preserved [17]. we did not increase it further because beyond this value it is not possible to see a real change in the final result. the minimum number of samples was set to 1.5 and the interpolation weight to 4, the default values of meshlab. since the poisson algorithm tends to "close" the reconstructed mesh, the triangles whose area was above a certain threshold were deleted to preserve the original form of the reconstructed environment.

4. evaluation

for the validation of the camera pose localization algorithm and of the high-quality texture mapping projection, a wavefront 3d object file (obj file extension) of two 3d high-quality virtual outdoor environments, one for a mine and one for a city, was imported into the unity 3d platform. an original script was also written to simulate a 360° camera. the 360° capture technique is based on google's omni-directional stereo (ods) technology using cubemap rendering [19]. after the cubemap is generated, it is possible to convert it to an equirectangular map, which is a projection format used by 360° video players. after placing the simulated camera at a specific pose inside the scene of a specific scenario, a high-quality equirectangular image was acquired (figure 2). these are the input images whose pose has to be detected by the developed algorithm. to simulate the acquisition of the environment through a 3d scanner, a point cloud for each analyzed environment was extracted from the 3d high-quality models using the cloudcompare software [20]. these point clouds were downsampled to simulate a 3d model with less detail than the input model, and new reconstructions were performed in meshlab [18] to obtain new low-quality 3d models (figure 3). new scenes were then recreated in unity with the downsampled 3d models.

figure 4 shows the schematic diagram of our camera pose detection algorithm proposed in figure 1, applied to the specific example of the mine environment. the input omnidirectional image has a resolution of 4096 × 2048 pixels. however, to speed up the calculation, the comparison between images is done by downsampling them to 256 × 128 pixels for both the analyzed environments. the bounding box dimensions of the scenario with the mine are 113 m × 169 m × 37 m for the x, y, z coordinates, respectively. the dimensions of the city environment are instead 440 m × 100 m × 435 m. the same analysis was done for both environments, using the same approach and shifting the camera pose by the same values. table 1 shows the position and orientation for 10 random trials. the initial starting position was set to the origin (0, 0, 0) with null rotations for each trial. the search limits were set to ±20 m for translations and ±80° for rotations. by default, unity applies the following rotation order: extrinsic rotation around the z-axis (γ), then around the x-axis (α), and finally around the y-axis (β). the average time spent by the pso algorithm is around 20 minutes. the tests were run on a pc with an intel i7-9700kf processor and 64 gb of ram.
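for the screened poisson step described in section 3.2 and at the beginning of this section, meshlab was used interactively; as a programmatic sketch, under the assumption that open3d's implementation of the same algorithm [17] is an acceptable substitute, one could write:

import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("colored_points.ply")   # hypothetical projected point cloud
pcd.estimate_normals()                                # poisson requires oriented normals

# reconstruction depth 13, as in the meshlab settings described above
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=13)

# the poisson algorithm tends to "close" the surface: remove triangles whose
# area exceeds a threshold (the factor 10 is illustrative, not from the paper)
v = np.asarray(mesh.vertices)
t = np.asarray(mesh.triangles)
areas = 0.5 * np.linalg.norm(np.cross(v[t[:, 1]] - v[t[:, 0]],
                                      v[t[:, 2]] - v[t[:, 0]]), axis=1)
mesh.remove_triangles_by_mask(areas > 10.0 * np.median(areas))
mesh.remove_unreferenced_vertices()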
for each of the 10 trials of table 1, the pso algorithm has been run changing the number of generations 5 times (i.e., 200, 250, 300, 350, 400) while keeping the number of particles fixed at 100, and changing the number of particles 5 times (i.e., 60, 70, 80, 90, 100) while keeping the number of generations fixed at 400. the number of generations and particles was changed to force the algorithm to increase variability.

figure 2. high-quality equirectangular images whose poses must be identified, for the mine (a) and city (b) environments.

figure 3. the 3d downsampled models used by the localization algorithm, for the mine (a) and city (b) environments.

to compute the error in pose detection, we decided to separate the translation and the rotation parts. the translation error is computed as the euclidean distance between the position of the camera found by the pso algorithm and the ground truth. concerning the rotations, the rotations found by the optimization process and the ground truth were first decomposed in axis-angle notation. consequently, the rotation error has 2 terms: the error in the axis orientation with respect to the ground truth and the error in the amount of rotation around such an axis.

figure 5 shows the cost function score for the various error components explained above, while figure 6 shows the three possible pairwise combinations of the error components with respect to the final optimization score. as can be noticed, a higher score of the cost function at the end of the optimization sometimes does not mean that an incorrect pose was found. this fact is probably due to the mesh reconstruction process: after this process, there can be portions of the environment that are less accurate compared to the real model. for this reason, the meaning of the final score values is not absolute or easily comparable across different camera poses. this generates the need to quantify the accuracy of the camera localization measurement within a scene. despite the uncertainty concerning the accuracy of the pose found by the algorithm with respect to the final cost function score, figure 5 and figure 6 show that, for the mine environment, a score below 1.6 means that, for the trials performed, the error in translation is below 0.7 m, the difference in the amount of rotation is below 1°, and the difference in the rotation axis orientation is below 2°. for the city environment, the same amount of error corresponds to a cost function score of 2. the score is higher because the city environment is a scenario with much more detail than the mine. many of these details are lost through the initial downsampling, and the initial reconstructed mesh is much less detailed, as can be seen in figure 3b. the final score, therefore, which measures the similarity between the input high-quality equirectangular image and the one obtained from this low-quality model, turns out to be higher. however, the errors, especially those related to rotations (figure 5b and figure 5c), are lower for the city environment even at high levels of the cost function score, because the environment is more diverse.
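the error metrics just described can be sketched as follows, assuming scipy's rotation conventions reproduce unity's default extrinsic z-x-y order; the example estimate is hypothetical:

import numpy as np
from scipy.spatial.transform import Rotation

def pose_errors(t_est, euler_est, t_gt, euler_gt):
    """translation error (m), rotation-angle error and axis-orientation error (deg).
    euler angles are passed as (γ, α, β); lowercase 'zxy' in scipy means
    extrinsic rotations, matching unity's default z, then x, then y order."""
    t_err = np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt))  # euclidean distance

    def axis_angle(euler):
        v = Rotation.from_euler("zxy", euler, degrees=True).as_rotvec()
        angle = np.linalg.norm(v)                  # rotation amount in radians
        axis = v / angle if angle > 0 else np.array([1.0, 0.0, 0.0])
        return axis, angle

    axis_est, ang_est = axis_angle(euler_est)
    axis_gt, ang_gt = axis_angle(euler_gt)
    angle_err = np.degrees(abs(ang_est - ang_gt))
    axis_err = np.degrees(np.arccos(np.clip(np.dot(axis_est, axis_gt), -1.0, 1.0)))
    return t_err, angle_err, axis_err

# trial 1 of table 1 as ground truth, compared with a hypothetical pso estimate
print(pose_errors([-3.8, 10.2, 15.1], [17.5, 10.3, 14.6],
                  [-4.0, 10.0, 15.0], [18.0, 10.0, 15.0]))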
table 1. camera poses chosen for 10 trials (ground truth).

trial    x (m)    y (m)    z (m)    α (°)    β (°)    γ (°)
1       -4.00    10.00    15.00    10.00    15.00    18.00
2        5.00    -2.00     5.00    10.00   -60.00     1.00
3       -8.00     5.00    -6.00    30.00    45.00    15.00
4        2.00    -7.00    15.00   -10.00   -45.00   -20.00
5       10.00    10.00    10.00    20.00   -15.00     5.00
6        0.00    15.00     8.00    25.00   -15.00     5.00
7       -5.00     2.00    -5.00   -10.00    60.00    -1.00
8       -1.00    -2.00    -3.00    -4.00    -5.00    -6.00
9      -15.00    10.00    10.00    40.00    70.00    40.00
10     -19.00    19.00   -19.00     2.00    80.00    -5.00

figure 4. example of the camera pose detection algorithm flow for the mine environment.

figure 5. 2d plots of the cost function score vs the errors in translation (a), axis orientation (b), and rotation angle (c).

because of this relationship between the cost function threshold and the level of detail of the reconstructed 3d model, there is a need for further analysis to investigate possible acceptance criteria and multidimensional models capable of finding a correlation between the different terms of the cost function and the uncertainty in translation and rotation. for example, figure 7 shows that the mse could be a possible discriminant factor for accuracy: indeed, in this case, the accurate solutions are all centered around the 0.005 value for both examined environments. once the camera pose was found for each environment, this information is used to place the 360° image, projected on the surface of a sphere of unitary radius, in the correct position and orientation (figure 8a). after that, using the raycasting technique, the 3d mesh (figure 8b) is hit by the 360° image pixels (figure 8c).

figure 6. 3d plots of the cost function score and the errors in translation, rotation angle, and axis orientation.

figure 7. mse score vs translation error.

figure 8. the pixels of the 360° image of the mine environment are projected on a sphere surface (a), which is put in the correct camera pose found by our algorithm inside the raw 3d mesh (b). the pixels are then projected using the ray cast technique on the raw mesh, obtaining a new dense point cloud (c).

5. conclusions and future work

in this paper, we presented an approach for combining photorealistic with 3d environment representations, using a 360° high-quality image and a low-quality 3d model of an environment. at the core of our system, we have developed an approach for automatic large-scale 360° camera pose estimation within a 3d environment and a method for projective texture mapping of spherical images. contrary to previous work, outlined in the related work section, the camera pose estimator developed in this paper works both for significant differences in rotation and displacement, and it works without the need to start from a known point of view. the positions and orientations of the camera were estimated with a translation error below 0.7 m, and below 1° and 2° for the difference in the amount of rotation and the difference in the rotation axis orientation, respectively.
these results were obtained for both environments, analyzed at full size and with search limits of ±20 m for translations and ±80° for rotations, using an mse of 0.005 as a possible discriminant factor for accuracy. while this work was validated using a 360° camera simulation in virtual scenes, we plan to test its capability on real scenes as well. in such situations, the light conditions could be very different between the model and the equirectangular image, which is why the luminance has to be carefully considered. furthermore, the approach presented here is valid as long as the view of the user rotates without large displacements from the camera's initial position, because not all the mesh areas are covered after the pixel projection. to overcome this problem, the same method presented in this paper can be applied with more than one camera; however, for the final reconstruction of the texture, there is currently no discriminating parameter that allows us to choose which pixels to use from one camera or another. such a choice can be useful if the field of view of one camera covers some mesh areas better than another, and it can be implemented in future work. finally, in the camera pose optimization process, a further study can be done to find a correlation between the different terms of the cost function and the uncertainty in translation and rotation, by investigating other possible acceptance criteria through a multidimensional analysis.

figure 9. final results after the 3d reconstruction for the mine (a) and the city (b) environments.

references

[1] r. benosman, s. kang, o. faugeras, panoramic vision, springer verlag, 2000, isbn 978-0387951119.
[2] j. hakulinen, t. keskinen, v. mäkelä, s. saarinen, m. turunen, omnidirectional video in museums – authentic, immersive and entertaining, in international conference on advances in computer entertainment, springer, 2017, pp. 567–587. doi: 10.1007/978-3-319-76270-8_39
[3] d. kalkofen, s. mori, t. ladinig, l. daling, a. abdelrazeq, m. ebner, m. ortega, s. feiel, s. gabl, t. shepel, j. tibbett, t. h. laine, m. hitch, c. drebenstedt, p. moser, tools for teaching mining students in virtual reality based on 360° video experiences, conference on virtual reality and 3d user interfaces abstracts and workshops (vrw), ieee, atlanta, ga, usa, 2020, pp. 455–459. doi: 10.1109/vrw50115.2020.00096
[4] a. macquarrie, a. steed, the effect of transition type in multi-view 360° media, ieee transactions on visualization and computer graphics 24(4) (2018), pp. 1564–1573. doi: 10.1109/tvcg.2018.2793561
[5] m. tatzgern, r. grasset, d. kalkofen, d. schmalstieg, transitional augmented reality navigation for live captured scenes, virtual reality (vr), ieee, 2014, pp. 21–26. doi: 10.1109/vr.2014.6802045
[6] m. tatzgern, r. grasset, e. veas, d. kalkofen, h. seichter, d. schmalstieg, exploring real world points of interest: design and evaluation of object-centric exploration techniques for augmented reality, pervasive and mobile computing 18 (2015), pp. 55–70. doi: 10.1016/j.pmcj.2014.08.010
[7] j. thatte, b. girod, towards perceptual evaluation of six degrees of freedom virtual reality rendering from stacked omnistereo representation, electronic imaging, 2018. doi: 10.2352/issn.2470-1173.2018.05.pmii-352
[8] a. makadia, k. daniilidis, rotation recovery from spherical images without correspondences, ieee transactions on pattern analysis and machine intelligence 28(7) (2006), pp. 1170–1175. doi: 10.1109/tpami.2006.150
[9] a. makadia, k.
daniilidis, direct 3d-rotation estimation from spherical images via a generalized shift theorem, ieee computer society conference on computer vision and pattern recognition, vol. 2, madison, wi, usa, 2003, pp. ii-217. doi: 10.1109/cvpr.2003.1211473
[10] r. laganiere, f. kangni, orientation and pose estimation of panoramic imagery, mach. graph. vis. 19(3) (2010), pp. 339–363.
[11] a. levin, r. szeliski, visual odometry and map correlation, in proceedings of the ieee computer society conference on computer vision and pattern recognition 1 (2004), washington, dc, usa. doi: 10.1109/cvpr.2004.1315088
[12] j.-y. lai, t.-c. wu, w. phothong, d. w. wang, c.-y. liao, j.-y. lee, a high-resolution texture mapping technique for 3d textured model, applied sciences, vol. 8, no. 11, 2018, p. 2228. doi: 10.3390/app8112228
[13] t. abmayr, f. härtl, m. mettenleiter, i. heinz, a. hildebrand, b. neumann, c. fröhlich, realistic 3d reconstruction – combining laserscan data with rgb color information, proceedings of isprs international archives of photogrammetry, remote sensing and spatial information sciences 35 (2004), pp. 198–203.
[14] t. teo, l. lawrence, g. a. lee, m. billinghurst, m. adcock, mixed reality remote collaboration combining 360° video and 3d reconstruction, in proceedings of the 2019 chi conference on human factors in computing systems, 2019, pp. 1–14. doi: 10.1145/3290605.3300431
[15] p. hintjens, zeromq: messaging for many applications, o'reilly media, inc., 2013, isbn: 9781449334062.
[16] z. wang, a. c. bovik, h. r. sheikh, e. p. simoncelli, image quality assessment: from error visibility to structural similarity, ieee transactions on image processing 13(4) (2004), pp. 600–612. doi: 10.1109/tip.2003.819861
[17] m. kazhdan, h. hoppe, screened poisson surface reconstruction, acm transactions on graphics (tog) 32(3) (2013), pp. 1–13. doi: 10.1145/2487228.2487237
[18] p. cignoni, m. callieri, m. corsini, m. dellepiane, f. ganovelli, g. ranzuglia, meshlab: an open-source mesh processing tool, in eurographics italian chapter conference, salerno, 2008, pp. 129–136. doi: 10.2312/localchapterevents/italchap/italianchapconf2008/129-136
[19] google inc., rendering omni-directional stereo content. online [accessed 21 march 2022] https://developers.google.com/vr/jump/rendering-ods-content.pdf
[20] d. girardeau-montaut, cloudcompare, 2016.
online [accessed 21 march 2022] https://www.danielgm.net/cc

system for an acoustic detection, localisation and classification

acta imeko issn: 2221-870x june 2021, volume 10, number 2, 62-69

jakub svatos1, jan holub1, jan belak1
1 department of measurement, czech technical university in prague, prague 166 27, czechia

section: research paper

keywords: acoustic detection; gunshots; localisation; classification; neural network; mel frequency

citation: jakub svatos, jan holub, jan belak, system for an acoustic detection, localisation and classification, acta imeko, vol. 10, no. 2, article 10, june 2021, identifier: imeko-acta-10 (2021)-02-10

section editor: giuseppe caravello, università degli studi di palermo, italy

received january 18, 2021; in final form april 15, 2021; published june 2021

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: jakub svatos, e-mail: svatoja1@fel.cvut.cz

abstract: currently, acoustic detection techniques for gunshots (gunshot detection and classification) are increasingly being used not only for military applications but also for civilian purposes. detection, localisation, and classification of a dangerous event such as a gunshot by means of acoustic detection is a promising alternative to the commonly used visual detection. in some situations, an automatic acoustic detection system that can detect and localise the source of a gunshot and classify the caliber may be preferable. this paper presents a system for acoustic detection that can detect, localise and classify acoustic events such as gunshots. the system has been tested in open and closed shooting ranges; the tested firearms are a 9 mm short gun, a 6.35 mm short gun, a .22 short gun, and a .22 rifle gun with various subsonic and supersonic ammunition. as 'false alarms', sets of different impulsive acoustic events like door slams, breaking glass, etc. have been used. localisation and classification algorithms are also introduced. to successfully classify the tested acoustic signals, continuous wavelet and mel frequency transformation methods have been used for the signal processing, and a fully connected two-layer neural network has been implemented. the results show that the acoustic detector can be used for reliable gunshot detection, localisation, and classification.

1. introduction

acoustic detection (ad) of gunshots is a topical subject that can help to detect hazardous and dangerous events, especially in public areas. in recent years, there has been an increase in gunshot attacks in public areas such as schools, campuses, hospitals, and shopping centres. in some cases, it is challenging to identify a dangerous event from the uncertain, inadequate data received by cameras or by security staff. the main asset of ad is the extraction of vital information from the recorded signal data and its classification as due to a given event (a gunshot, a human scream, glass breaking, etc.). thanks to this classification, ad can assist law enforcement to better discriminate dangerous events, intervene in the ongoing situation on time, and decrease the possibility of casualties. a fundamental goal of ad systems is to monitor acoustic signals around the area of interest, to detect a potentially hazardous event, to record it, to localise the probable position of the event, and to classify the event into the categories of alert (gunshot) signals and non-threat, 'false alert' signals. there are several experimental or commercial ad systems designed for gunshot event detection and localisation available on the market [1]-[5]. these systems are designed to localise the source of the gunshot and to estimate the type of threat. the more sophisticated systems can even identify the particular firearm type using advanced classification methods. a drawback of the sophisticated systems is their usually very high purchase and operational cost, which makes them almost impossible to use for smaller non-governmental entities such as campuses or hospitals. to successfully design a gunshot detecting and classifying algorithm, the essential characteristics of its complex physics have to be understood. a comprehensive explanation of the problem can be found in [6], [7].
a gunshot pattern is characterised by two phenomena: the muzzle blast and, if the bullet propagates at supersonic speed, the shock wave. the muzzle blast is caused by the explosive charge: hot, high-pressure gases expand as acoustic energy from the centre of the gun barrel. a bullet travelling at supersonic speed generates a shockwave, which propagates in a conic fashion behind the bullet trajectory. the shockwave is based on the combination of compression and expansion shocks, as shown in figure 1. the angle θ_M between the bullet trajectory and the shock wave trajectory is given by the mach number M = v/c, where v is the velocity of the projectile and c is the speed of sound:

$\Theta_M = \arcsin\left(\frac{1}{M}\right)$ .  (1)

these factors can include valuable information that can be used to improve the detection capability of the ad system. alongside this, the calibre of both bullet and barrel, the length of the latter, the mechanical action caused by the gun itself, and even the chemical properties of the propellant cause different effects on the pattern of a gunshot. last but not least, the temperature of the air, the air humidity, the wind speed, the environment (e.g., foliage density, urban area) and the soil characteristics also have an impact on the resulting gunshot pattern. considering all these phenomena, to effectively detect and identify a gunshot, signal processing including adaptive filtering as well as advanced data processing and classification has to be carried out [8]-[11]. an example of a typical subsonic pattern and a supersonic gunshot signal are shown in figure 2 and figure 3. both bullets were shot in a semi-open area (with multiple reflections from the ground and the walls) by the same 9 mm short gun, and the acoustic signature was recorded at a distance of approximately 10 m. in figure 3, the shock wave pattern is not clearly visible due to its proximity to the muzzle blast pattern of the signal. this is caused by the relatively low speed of the supersonic bullet (mach number M = 1.09).
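as a quick numeric check of equation (1), the mach cone angle for the supersonic load of figure 3 can be computed as follows (a short illustration we add here; the speed of sound value is a standard assumption, not a measured one):

import math

c = 343.0                          # speed of sound in air at ~20 °c, m/s
m = 1.09                           # mach number of the supersonic bullet of figure 3
theta_m = math.degrees(math.asin(1.0 / m))
print(f"mach cone angle: {theta_m:.1f} deg")   # about 66.6 deg for m = 1.09

such a wide cone angle explains why, at this speed, the shock wave arrives close to the muzzle blast and the two patterns overlap in the recording.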
in this article, a system for acoustic detection, localisation and classification is introduced. the proposed system consists of stand-alone sensor units, which are placed around the monitored areas in sufficient numbers to continuously monitor acoustic events around each unit; if there is a possibility of a dangerous event, the stand-alone units send the data to the remote unit for advanced gunshot detection and classification. the remote central unit evaluates the signals received from multiple sensor units and, using advanced signal processing and classification methods, determines the location of the gunshot and the probable firearm caliber used. to localise the place of the event, at least three units equipped with a microphone are needed. the localisation accuracy of the system depends on the density and the number of stand-alone sensor units used. in comparison with other existing available systems (e.g. [3] or [5]), the presented system has a novel, modular, flexible structure. it can be deployed on a building or a moving object (car, person, animal), while the central unit can be installed in a distant protected place. in the future, the presented acoustic detector can be used in public areas like schools, campuses, and shopping centers to detect and localise gunshots. the paper is organised as follows. in section 2, the sensor units, the detection algorithm, the localisation algorithm, and the signal processing together with the classification methods are introduced and described. in section 3, the experimental measurements and results are presented. the conclusion and future work directions are outlined in section 4.

figure 1. an acoustic trace of a supersonic bullet (bullet trajectory, shock wave front trajectory, projectile velocity v, speed of sound c, angle θ_M).

figure 2. signal corresponding to a 9 mm subsonic short gun gunshot.

figure 3. signal corresponding to a 9 mm supersonic short gun gunshot.

2. methods

the presented system for acoustic detection, localisation and identification consists of multiple sensor units (at least three, to estimate the correct localisation of the event) and one central unit. such a topology enables additional analyses at the central unit, i.e., advanced localisation of the acoustic event position using timestamped data from the multiple sensor units receiving the acoustic signal related to the possible shooting event. each sensor unit has to cover analogue signal pre-processing, digitisation, a simple detection algorithm and a simple evaluation. based on the simple detection and evaluation algorithms, all the units with a positive detection send the recorded data to the central remote unit. every stand-alone unit works on the principle illustrated in figure 4.

figure 4. unit function requirements (analogue pre-processing, detection algorithm, adc, evaluation).

the stand-alone sensor detection algorithm works on the principle of a modified median filter introduced by the authors
the requirements for flexible modular sensor units are even more demanding than just to detect, record and send the acoustic event in case of a positive detection. since the sensor units can be deployed everywhere, they have to be able to send their exact position. moreover, all the sensor units have to be precisely time-synchronised to get a synchronised timestamp of the detected event and to send the data to the central unit wirelessly, ensuring accurate localisation and classification of the event secured by the remote server. to fulfil all these criteria, the sensor unit was designed according to figure 7. every sensor unit uses a pre-polarised electret condenser microphone with a flat frequency response from 20 hz to 20 khz and omnidirectional sensitivity, see figure 8 and figure 9. the heart of the acoustic sensor unit is a low-power-consumption, 32-bit lpc 1837, arm cortex-m3 based microcontroller, which processes the recorded data from the microphone. an analogue op-amp based peak detector provides an interrupt for the microcontroller if the microphone records an acoustic impulse event. a 16-bit analogue-to-digital converter (adc) ads8866 digitises the recorded analogue data. this successive-approximation, low-power-consumption adc samples the data with a sampling frequency of fs = 44 khz. after the interrupt, the adc sends the data to the microcontroller via the serial peripheral interface (spi). the data is stored in a circular buffer as 16-bit numbers; in this way, the data is ready for processing. if a trigger from the peak detector occurs, the median filter algorithm described above evaluates whether there is a possibility of a gunshot event. if a possible gunshot is evaluated and detected, the low-power-consumption gsm chip gl865-quad v3 then sends the recorded data, together with the exact position of the unit and a precise timestamp of when the event was recorded, for further analysis to the remote central unit pc. the precise localisation of the sensor unit position and the synchronisation with all the other sensor modules are secured by the gps module neo/lea-6t. when the neo/lea-6t module operates stationary, gps timing is possible with only one visible satellite. this means the time can be maintained even under adverse signal conditions or in an environment with poor sky visibility.

figure 7. block diagram of the sensor unit (microphone, peak detector, adc, microcontroller, gps, gsm/gprs cell-phone chip and remote server pc, connected via int, spi, extint, uart and usb interfaces and sms/gprs links).
figure 8. frequency response of the microphone.
figure 9. omnidirectional sensitivity of the microphone.
the gps module also adds the precise timestamp to the recorded data to make it possible to assess the exact localisation of the gunshot event. the utc timestamp and the exact position of each unit are determined in this way. for the synchronisation of the units, the nmea protocol [13] is used; the time accuracy of the synchronisation of all units is 0.1 ms. the described sensor unit is supplied from ucc = +5 v. the designed sensor unit with a testing microphone is shown in figure 10. the central remote unit processes the received data using advanced signal processing. when at least three sensor units detect the event and send the data together with their timestamps, an algorithm uses these timestamps from the network of sensor units to triangulate the location of the event. the remote server also performs the classification, to state whether a gunshot has been detected and to identify/estimate the calibre of the used firearm, or whether it is only a 'false alarm'. figure 11 describes the layout of the localisation problem using three sensor units/microphones. by measuring the time delay τn between the event and each microphone mn and knowing their positions, it is possible to calculate the position of the source s inside the domain d. the relationship between the source s of the gunshot and the three microphones mn that detect it is described by equations (2) to (4):

$(\tau_1 - \tau_0) \cdot c = \sqrt{(x - a_1)^2 + (y - b_1)^2}$ (2)

$(\tau_2 - \tau_0) \cdot c = \sqrt{(x - a_2)^2 + (y - b_2)^2}$ (3)

$(\tau_3 - \tau_0) \cdot c = \sqrt{(x - a_3)^2 + (y - b_3)^2}$, (4)

where τ0 is the time when the gunshot occurred; τ1, τ2 and τ3 are the times when the gunshot was detected by units 1, 2 and 3; (an, bn) are the coordinates of the relevant unit; and (x, y) are the coordinates of the gunshot. the resulting coordinates need to be recalculated to a cartesian system. the unit that detects the gunshot first (τ1) is considered to be placed at the coordinate (0, 0). the coordinates in metres of the other units are then calculated through (5):

$d = \arccos\left(\sin\Phi_1 \sin\Phi_2 + \cos\Phi_1 \cos\Phi_2 \cos\delta\lambda\right) \cdot R$, (5)

where Φ1 and Φ2 are the latitudes of the two positions, δλ is the difference of their longitudes, and R is the mean earth radius (6378 km). once the coordinates in the cartesian system are calculated, it is necessary to recalculate the latitude and longitude coordinates. since the distance of one degree of longitude is different at the north pole and at the equator, it is necessary to know the relationship between degree and metre at a given latitude [14]. this is done by applying the simplified formulas (6) and (7), which give the length in metres of one degree of latitude and of longitude, respectively, resulting in uncertainty values in the order of centimetres:

$x_{\text{latitude}} = 111123.92 - 559.82 \cos(2\lambda) + 1.175 \cos(4\lambda) - 0.0023 \cos(6\lambda)$ (6)

$x_{\text{longitude}} = 111412.84 \cos(\Phi) - 93.5 \cos(3\Phi) - 1.175 \cos(5\Phi)$ (7)

an example of the received timestamps from three sensor units (aed 1 – aed 3), with their exact positions in longitude and latitude coordinates and the source of a detected acoustic event calculated by the remote server, is shown in figure 12. a custom software application for the remote unit pc, which communicates with the sensor units, processes the sent data and displays the location of the event, was developed as a part of the acoustic detection system (figure 13). the application was developed in the c# programming language. the program shows a map with the exact location of the chosen acoustic unit and the precise/synchronised utc time.

figure 10. the sensor unit for acoustic detection.
figure 11. the layout of the situation with three microphones.
figure 12. an example of a message with the calculated source of an acoustic event sent by the sensor unit.
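a minimal sketch of the localisation step follows, reading equations (2)-(4) in their dimensionally consistent form (sound speed times delay equals distance) and solving the three-unknown system (x, y, τ0) by non-linear least squares; the solver choice and the speed-of-sound value are our assumptions. the helper also evaluates the degree-length formulas (6) and (7) with the constants as printed above:

```python
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # assumed speed of sound in m/s

def locate(mics, toas):
    """solve equations (2)-(4): for each microphone n at (a_n, b_n) with
    arrival time tau_n, c*(tau_n - tau_0) equals the distance to the source
    (x, y). unknowns: x, y and the (unobserved) shot time tau_0."""
    mics, toas = np.asarray(mics, float), np.asarray(toas, float)

    def residuals(p):
        x, y, t0 = p
        return C * (toas - t0) - np.hypot(mics[:, 0] - x, mics[:, 1] - y)

    guess = [*mics.mean(axis=0), toas.min()]
    return least_squares(residuals, guess).x

def metres_per_degree(lat_rad):
    """equations (6) and (7): local length in metres of one degree of
    latitude and of longitude (constants as printed in the text)."""
    m_lat = (111123.92 - 559.82 * np.cos(2 * lat_rad)
             + 1.175 * np.cos(4 * lat_rad) - 0.0023 * np.cos(6 * lat_rad))
    m_lon = (111412.84 * np.cos(lat_rad) - 93.5 * np.cos(3 * lat_rad)
             - 1.175 * np.cos(5 * lat_rad))
    return m_lat, m_lon

# example: three units in a local cartesian frame (metres), source at (20, 15)
mics = [(0.0, 0.0), (100.0, 0.0), (0.0, 80.0)]
src = np.array([20.0, 15.0])
toas = [np.hypot(*(src - np.array(m))) / C for m in mics]
print(locate(mics, toas))  # ~[20, 15, 0]
```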
in the case of a detection, the application signals the event, displays the exact timestamp in utc and stores the data for further processing. through the application, the sampling frequency, the parameters of the detection algorithm based on the median filter described at the beginning of section ii, and the connection of additional detection units can be set. in parallel to the localisation, the remote central unit applies the algorithms to obtain the features for classification and classifies the event into the classes. the mel frequency transformation (mft) and continuous wavelet (cw) methods are used for the processing of the received data to obtain the features for the further classification. to identify the components in an acoustic signal, it is preferable not to use a linear frequency scale but to resolve differences at the lower frequencies more finely than at the higher frequencies; incorporating this can be advantageous in acoustic detection. the mft algorithm uses band-pass filters to get the energy of the signal in each defined band. then, the algorithm uses the frequency distribution to create the mel frequency coefficients (mfc). the mfc are computed using a cosine transformation of the logarithm of the filter-bank energies. the procedure can be described as follows: the waveform is divided into frames, then the discrete fourier transform is applied, the logarithm of the amplitude spectrum is taken, the scale is converted using mel scaling and, finally, the discrete cosine transform is performed. to convert the frequencies into the mel scale, equation (8) is used; more details on the mfc can be found in [15], [16]:

$M(f) = 1125 \ln\left(1 + \frac{f}{700}\right)$. (8)

through experiments, the optimal number of 20 mfc was set, with filters covering frequency bands from 500 hz to 5 khz. these mfcs served as features for the subsequent classification. to limit the influence of an echo on the classification and to increase its validity and reliability, more features had to be added. for this reason, the cw algorithm was considered. unlike the discrete fourier transform (dft), the cw transform uses defined waves to create a frequency spectrum with significant time resolution. the time resolution allows limiting the influence of an echo, because an echo does not usually interact with the beginning of the impulse acoustic event/gunshot pattern. the best results for gunshot recognition were obtained with the bump wavelet as the mother wavelet. the used bump wavelet is a band-limited function defined in the frequency domain by (9), and its shape is shown in figure 14:

$\hat{\psi}(s\omega) = \mathrm{e}^{\,1 - \frac{1}{1 - (s\omega - \mu)^2/\sigma^2}} \; \mathbb{1}_{\left[\frac{\mu - \sigma}{s},\, \frac{\mu + \sigma}{s}\right]}$, (9)

where $\mathbb{1}_{[(\mu-\sigma)/s,\,(\mu+\sigma)/s]}$ is the indicator function, s is the scale, ω is the angular frequency, σ is the standard deviation, and µ is the mean value. more details about the wavelet can be found in [17], [18]. both presented algorithms provide the features for the advanced classification of the signals received by the central remote unit from the sensor units. various sets of gunshots, as well as several diverse false signals similar to a gunshot signal, were used for the classification. each false signal was chosen to be challenging for a human operator to differentiate from a real gunshot.
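the mel conversion of equation (8) and the bump wavelet of equation (9) can be sketched as follows; the triangular filter shape, the normalisation and the default bump parameters (μ = 5, σ = 0.6, the defaults used by matlab's bump wavelet [17]) are assumptions rather than details taken from the paper:

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    """equation (8): m(f) = 1125 ln(1 + f/700)."""
    return 1125.0 * np.log(1.0 + np.asarray(f, float) / 700.0)

def mel_to_hz(m):
    return 700.0 * (np.exp(np.asarray(m, float) / 1125.0) - 1.0)

def mfc(frame, fs=44000, n_filters=20, f_lo=500.0, f_hi=5000.0):
    """one-frame mfc sketch: amplitude spectrum, mel-spaced triangular
    filter-bank energies on 500 hz - 5 khz, logarithm, then dct."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    edges = mel_to_hz(np.linspace(hz_to_mel(f_lo), hz_to_mel(f_hi), n_filters + 2))
    energies = np.empty(n_filters)
    for i in range(n_filters):
        lo, mid, hi = edges[i], edges[i + 1], edges[i + 2]
        rising = np.clip((freqs - lo) / (mid - lo), 0.0, 1.0)
        falling = np.clip((hi - freqs) / (hi - mid), 0.0, 1.0)
        energies[i] = np.sum(spectrum * np.minimum(rising, falling))
    return dct(np.log(energies + 1e-12), type=2, norm='ortho')

def bump_hat(s_omega, mu=5.0, sigma=0.6):
    """equation (9): frequency-domain bump wavelet, nonzero only where
    |s*omega - mu| < sigma."""
    x = np.asarray(s_omega, float)
    out = np.zeros_like(x)
    inside = np.abs(x - mu) < sigma
    out[inside] = np.exp(1.0 - 1.0 / (1.0 - ((x[inside] - mu) / sigma) ** 2))
    return out
```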
for the classification, two independent neural networks (nn) were created. first, an nn was designed to classify signals based on the mfc. considering the low dimensionality of these coefficients, a fully connected two-layer neural network has been used. for the cw algorithm, a convolutional neural network was used. as the spectrum is a two-dimensional matrix, the convolutional network is significantly better than a fully connected one: it allows finding local features in multidimensional, location-dependent input data. spectrogram features are location-dependent, and therefore the convolutional network leads to better results than a typical fully connected network. also, the convolutional network is typically much smaller in dimension, thus significantly reducing the computation time. a single network combining both designs was implemented and used for the classification; the network had two inputs, one for the mfc and one for the cw algorithm. along with the nn, two more classifiers, a support vector classifier (svc) and a naive bayes classifier (nbc) [19]-[21], are presented herein for a comparison of the results. both classifiers were implemented in matlab with the pattern recognition toolbox prtools ver. 5 [22]. the naive bayes method is based on the bayesian theorem (10):

$P(A|B) = \frac{P(B|A)\, P(A)}{P(B)}$, (10)

where p(a) and p(b) are the prior probabilities of a and b, respectively, and p(b|a) is the conditional probability, i.e., the probability of b given that a is true. the naive bayes classifier makes its estimate for every feature and every class separately. the naive bayes method is simple but, despite its simplicity, it can often outperform more sophisticated classification methods; it is widely used for comparing results with other classifiers. the svc works on the basis of the construction of non-linear decision boundaries. the svm creates a decision plane, a so-called hyperplane, which optimally separates the training data in the feature space. the decision plane then separates sets of objects which have different class memberships. the input data are transformed (mapped) to the feature space using a mathematical function known as the kernel. support vector models can be linear, polynomial, sigmoidal or based on radial basis functions.

figure 13. pc application.
figure 14. bump mother wavelet.
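for illustration, a minimal gaussian naive bayes classifier built directly on equation (10) is sketched below; the paper used the prtools implementation [22], and the per-feature gaussian likelihood model here is our assumption:

```python
import numpy as np

class GaussianNaiveBayes:
    """equation (10) applied per class: p(class|x) is proportional to
    p(x|class) * p(class), with every feature modelled independently by a
    per-class gaussian (the 'naive' assumption)."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log-likelihood per class, summed over the independent features
        ll = -0.5 * (np.log(2 * np.pi * self.var[None]) +
                     (X[:, None, :] - self.mu[None]) ** 2 / self.var[None]).sum(-1)
        return self.classes[np.argmax(ll + np.log(self.prior), axis=1)]
```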
3. results

to acquire the test data, multiple gunshot measurements were carried out in different shooting ranges: closed, with many reflections, and open. to increase the diversity of the gunshots, three different calibres were measured [23]. the tested firearms were a 9 mm short gun, a 6.35 mm short gun, a .22 short gun and a .22 rifle gun. various subsonic and supersonic (up to a mach number of m = 1.1) ammunition was used with the 9 mm short gun. each type of gun was measured at least 60 times. in total, approximately 400 samples corresponding to gunshots were recorded by the sensor units described in section ii (in many cases, one sample was recorded by several sensor units). for the false signals, acoustic impulse events similar to gunshots were measured; approximately 200 samples of various false alarms were recorded. as examples of false signals, glass breaking, different slams (such as a door slam), handclaps, or the popping of big bubble wrap close to the unit were recorded. at least three sensor units placed in nearby locations recorded all the tested signals. an example of a gunshot of a 9 mm short gun in a noisy environment (diesel engine), recorded at a distance of 10 m, is shown in figure 15. figure 16 shows another recorded gunshot, shot with a .22 short gun in a closed shooting range, where many reflections are visible. the gunshot was recorded at a distance of 12 m. two examples of recorded false alarms, represented by a door slam and bubble wrap popping, are presented in figure 17 and figure 18. both presented false signals exhibit patterns similar to gunshots; therefore, the recorded data have to be processed in a way that emphasises the differences. this can be done by the mft and cw methods presented in section ii.

figure 15. recorded signal corresponding to a 9 mm subsonic short gun shot noised by a diesel engine.
figure 16. recorded signal corresponding to a .22 subsonic short gun shot with reflections.
figure 17. recorded signal corresponding to a door slam.
figure 18. recorded signal corresponding to bubble wrap popping.

this work presents the classification into 'false alarm' and gunshot events and, in the case of a gunshot, the classifier assigns the recorded signal to an individual calibre. to train the classifiers in an optimal way, the recorded acoustic event data have to be divided into two groups: a training set and a validation set. there are many articles dealing with the optimal amount of training data; one of the proposed methods is the use of the learning curve, described for this purpose in [24]. from the learning curve, the dependency of the model performance on the training data size can be obtained; it usually depends on the classification method, the complexity of the classifier and how well the classes can be separated. the optimal size of the training data can be determined from the maximum of the learning curve. the recorded 'false alarm' and gunshot samples were chosen randomly for the training and the validation sets. in this case, the optimal amount of training data is approximately 30% of the samples: the training set had approximately 200 independent measurements and the validation set had approximately 400 samples. the experimental results for the validation data classified into false alarm and gunshot events are shown in table 1.

table 1. validation data classification into gunshot and false alarm results.
classifier | nn | svc | nbc
gunshot | 100% | 100% | 89.4%
false alarm | 73.9% | 79.4% | 60.7%

the results show that the validation success rate of classifying into the gunshot class is 100 %. the system can distinguish a gunshot from a 'false alarm' with a 100% success rate but, on the other hand, approximately 30% of the false alarms are identified as gunshots. the main reason for this is the limited number of signals corresponding to the false alarms and their diversity. to improve the classifier learning, a significant number of impulse acoustic events similar to gunshots have to be recorded in real scenarios and used for the classifier training. on the other hand, if the acoustic detection system is complemented by visual detection, an operator can restrain false alarm occurrences. the results also show that the svc achieves slightly better results than the nn in false alarm detection, while the nbc's success rate is below 90% even for the proper classification into the gunshot classes; for this reason, the nbc is not used for the classification into individual calibres. table 2 shows the results of the classification into individual calibres by the nn. the nn performs well in the classification into the calibre of a gunshot, where it was able to classify correctly more than 95% of the samples for the 9 mm calibre, more than 82 % for the 6.35 mm calibre and more than 96% for the .22 calibre. on the other hand, the nn classifies almost 26% of the false alarms as gunshots (mostly as the 9 mm calibre). for the sake of comparison, the results obtained through the svc are summarised in table 3.

table 2. gunshot data classification into individual calibres by the nn (rows: original class; columns: predicted class).
original/predicted | false alarm | 6.35 mm calibre | 9 mm calibre | .22 calibre
false alarm | 68 | 2 | 17 | 5
6.35 mm calibre | 0 | 55 | 10 | 2
9 mm calibre | 0 | 0 | 78 | 4
.22 calibre | 0 | 3 | 0 | 80

table 3. gunshot data classification into individual calibres by the svc (rows: original class; columns: predicted class).
original/predicted | false alarm | 6.35 mm calibre | 9 mm calibre | .22 calibre
false alarm | 73 | 2 | 14 | 3
6.35 mm calibre | 0 | 46 | 11 | 10
9 mm calibre | 0 | 6 | 70 | 6
.22 calibre | 0 | 6 | 3 | 74
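the learning-curve procedure used above to choose the training-set size can be sketched as follows; `clf_factory` is any classifier exposing fit/predict (e.g. the naive bayes sketch earlier), and the subset fractions are illustrative:

```python
import numpy as np

def learning_curve(clf_factory, X, y, X_val, y_val,
                   fractions=(0.1, 0.2, 0.3, 0.5, 0.8, 1.0), seed=0):
    """train on growing random subsets and record validation accuracy;
    the point where the curve flattens (its maximum) marks a sufficient
    amount of training data."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(y))
    scores = []
    for frac in fractions:
        idx = order[: max(1, int(frac * len(y)))]
        clf = clf_factory().fit(X[idx], y[idx])
        scores.append(float(np.mean(clf.predict(X_val) == y_val)))
    return list(zip(fractions, scores))
```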
the svc classified almost 80% of the false alarms correctly, but its classification of gunshots into the correct calibre classes is less successful than that of the nn; for example, the successful classification of the 6.35 mm calibre is less than 80%, in comparison with 96% for the nn. the efficiency of the nn classification strongly depends on the amount and variability of the false alarm signals. based on the test results, only the nn classifier was implemented in the ads. in the future, more false alarm scenarios have to be recorded and used for the classifier training. the test measurements show that the presented ads can successfully detect, locate and classify an impulse event. the ads consists of sensor units, operating on the principle of the modified median filter, and a central remote unit, which calculates the location of the acoustic event and classifies it using the nn with the cw and mfc algorithms. the tested acoustic signals were taken at different open and closed shooting ranges, where at least three acoustic units recorded the detected impulse acoustic event. the nn classifier can classify the individual calibre of a used firearm with a very high success rate, which for some calibres is more than 95%. as a next step, the system for acoustic detection will be deployed in a residential-area environment to test it in real conditions.

4. conclusion

a system for acoustic detection, localisation and classification into a firearm calibre was presented. the system consists of sensor units that continuously monitor acoustic events around the unit and a central remote unit. the sensor units use a modified median filter algorithm to state whether there is a possibility of a gunshot. the central remote unit, a pc, then evaluates the signal through advanced signal processing and classification to determine whether there is a gunshot or a false alarm event. the system can localise the event on the principle of measuring the time delay between the event and each microphone of the sensor units, their positions being known. the accuracy of the localisation depends on the number and the density of the sensor units. the central remote unit uses the continuous wavelet and mel frequency transformation methods to obtain the features for a neural network classifier. gunshots of different calibres and various false alarms similar to gunshots were recorded at shooting ranges to test the system; more than 600 signals were recorded and tested. the system shows the ability to detect a gunshot with 100% accuracy and to correctly classify the calibre of a gun with a high accuracy depending on the individual calibre. considering the limited size of the false alarm training dataset, such results are impressive. however, all measurements were taken in a similar environment; to practically employ the ads in real conditions, a significantly larger dataset from a real environment, such as urban areas, should be examined in future tests and unit improvements. in the future, the proposed system for acoustic detection can be used as a stand-alone unit placed in schools, campuses, shopping centres or other public areas in general, to detect, localise and classify gunshot events and to increase the safety of the civil population.

acknowledgement

this research was supported by the "energy for smart objects" grant provided by the electronic components and systems for european leadership joint undertaking in collaboration with the european union's h2020 framework programme (h2020/2014-2020) and national authorities, under grant agreement n° 692482.

references

[1] b. kaushik, d. nance, k. k. ahuja, a review of the role of acoustic sensors in the modern battlefield, 11th aiaa/ceas aeroacoustics conference, monterey, california, 23-25 may 2005. doi: 10.2514/6.2005-2997
[2] h. e. bree, the microflown, dissertation thesis, 1997. https://ris.utwente.nl/
[3] shotspotter. online [accessed 04 june 2021] http://www.shotspotter.com
[4] j. millet, b. baligand, latest achievements in gunfire detection systems, battlefield acoustic sensing for isr applications (pp. 26-1–26-14), nato rto-mp-set-107, 2006. online [accessed 04 june 2021] http://www.rto.nato.int/abstracts.asp
[5] g. l. duckworth, d. c. gilbert, j. e. barger, acoustic counter-sniper system, command control, communications, and intelligent systems for law enforcement, spie proceedings 2938 (1996). doi: 10.1117/12.266747
[6] r. c. maher, modeling and signal processing of acoustic gunshot recordings, 2006 ieee 12th digital signal processing workshop & 4th ieee signal processing education workshop, teton national park, wy, pp. 257-261, 24-27 september 2006. doi: 10.1109/dspws.2006.265386
[7] r. c. maher, acoustical characterization of gunshots, 2007 ieee workshop on signal processing applications for public security and forensics, washington, dc, usa, pp. 1-5, 2007. doi: 10.1109/ieeeconf12259.2007.4218954
[8] k. łopatka, j. kotus, a. czyżewski, detection, classification and localization of acoustic events in the presence of background noise for acoustic surveillance of hazardous situations, multimedia tools and applications 75 (2016), pp. 10407-10439. doi: 10.1007/s11042-015-3105-4
[9] j. sallai, w. hedgecock, p. völgyesi, andrás nádas, györgy balogh, akos ledeczi, weapon classification and shooter localization using distributed multichannel acoustic sensors, journal of systems architecture embedded systems design 57(10) (2011), pp. 869-885. doi: 10.1016/j.sysarc.2011.04.003
[10] m. lojka, m. pleva, e. kiktová, j. juhár, a. čižmár, efficient acoustic detector of gunshots and glass breaking, multimedia tools and applications 75 (2016), pp. 10441–10469. doi: 10.1007/s11042-015-2903-z
[11] j. tomlain, o. teren, wireless solutions for outdoor training polygons, proceedings of the international conference on new trends in signal processing, armed forces academy, 12-14 oct. 2016, demanovska, slovakia, pp. 87-90. doi: 10.1109/ntsp.2016.7747791
[12] j. svatos, j. holub, smart acoustic sensor, 5th international forum on research and technologies for society and industry: innovation to shape the future, florence, italy, 9-12 sept. 2019, pp. 161-165. doi: 10.1109/rtsi.2019.8895591
[13] k. betke, the nmea 0183 protocol, 2010. http://www.tronico.fi/oh6nt/docs/nmea0183.pdf
[14] f. ivis, calculating geographic distance: concepts and methods, nesug 2006, data manipulation, canada, 2006. online [accessed 04 june 2021] https://lexjansen.com/nesug/nesug06/dm/da15.pdf
[15] v. fernandes, l. mascarehnas, c. mendonca, a. johnson, r. mishra, speech emotion recognition using mel frequency cepstral coefficient and svm classifier, 2018 international conference on system modeling & advancement in research trends (smart), moradabad, india, pp. 200-204, 2018. doi: 10.1109/sysmart.2018.8746939
[16] a. d. p. ramirez, j. i. de la rosa vargas, r. r. valdez, a. becerra, a comparative between mel frequency cepstral coefficients (mfcc) and inverse mel frequency cepstral coefficients (imfcc) features for an automatic bird species recognition system, 2018 ieee latin american conference on computational intelligence (la-cci), 7-9 november 2018, guadalajara, mexico, pp. 1-4, 2018. doi: 10.1109/la-cci.2018.8625230
[17] mathworks, matlab documentation. online [accessed 04 june 2021] https://www.mathworks.com/help/wavelet/ref/cwtft.html
[18] m. f. ghazali, irregularity detection in artificial signal using time-frequency analysis, journal of engineering and applied sciences 11(2) (2016), pp. 3593-3597. online [accessed 04 june 2021] http://www.arpnjournals.org/jeas/research_papers/rp_2016/jeas_0316_3850.pdf
[19] p. bilski, b. polok, analysis of the rbf ann-based classifier for the diagnostics of electronic circuit, acta imeko 7(1) (2018), pp. 42-49. doi: 10.21014/acta_imeko.v7i1.516
[20] d. l. carni, e. balestrieri, i. tudosa, f. lamonaca, application of machine learning techniques and empirical mode decomposition for the classification of analog modulated signals, acta imeko 9(2) (2020), pp. 66-74. doi: 10.21014/acta_imeko.v9i2.800
[21] t. hastie, r. tibshirani, j. friedman, the elements of statistical learning: data mining, inference, and prediction, second edition, springer-verlag, new york, usa, 2009, isbn: 9780387848570. doi: 10.1007/978-0-387-84858-7
[22] delft university of technology, prtools, 2014. online [accessed 04 june 2021] http://prtools.org/
[23] f. pavese, a. charki, data inter-comparisons in the context of the knowledge-gaining process: an overview, acta imeko 7(2) (2018), pp. 73-83. doi: 10.21014/acta_imeko.v7i2.541
[24] c. beleites, u. neugebauer, t. bocklitz, c. krafft, j. popp, sample size planning for classification models, anal. chim. acta 760 (2013), pp. 25-33. doi: 10.1016/j.aca.2012.11.007
study of fracture processes in sandstone subjected to four-point bending by means of 4d x-ray computed micro-tomography

acta imeko, issn: 2221-870x, june 2022, volume 11, number 2, 1-7

leona vavro1, martin vavro1, kamil souček1, tomáš fíla2, petr koudelka2, daniel vavřík2, daniel kytýř2
1 the czech academy of sciences, institute of geonics, studentská 1768/9, 708 00 ostrava-poruba, czech republic
2 the czech academy of sciences, institute of theoretical and applied mechanics, prosecká 809/76, 190 00 praha 9, czech republic

section: research paper
keywords: four-point bending test; chevron-notched core specimen; crack propagation; 4d micro-ct; sandstone
citation: leona vavro, martin vavro, kamil souček, tomáš fíla, petr koudelka, daniel vavřík, daniel kytýř, study of fracture processes in sandstone subjected to four-point bending by means of 4d x-ray computed micro-tomography, acta imeko, vol. 11, no. 2, article 34, june 2022, identifier: imeko-acta-11 (2022)-02-34
section editor: francesco lamonaca, university of calabria, italy
received december 21, 2021; in final form march 16, 2022; published june 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was supported by the project for the long-term strategic development of research organisations (rvo: 68145535) and the operational programme research, development, and education in project inafym (cz.02.1.01/0.0/0.0/16/019/0000766).
corresponding author: martin vavro, e-mail: martin.vavro@ugn.cas.cz

abstract: high-resolution x-ray computed micro-tomography (ct) is a powerful technique for studying the processes of crack propagation in non-homogeneous quasi-brittle materials such as rocks. to obtain all the significant information about the deformation behaviour and fracture characteristics of the studied rocks, the use of a highly specialised loading device suitable for integration into existing tomographic setups is crucial. since no adequate commercial solution is currently available, a completely newly-designed loading device with a four-point bending setup and vertically-oriented scanned samples was used. this design of the loading procedure, coupled with the high stiffness of the loading frame, allows the loading process to be interrupted at any time and ct scanning to be performed without the risk of the sudden destruction of the scanned sample. this article deals with the use of 4d ct for the visualisation of crack initiation and propagation in clastic sedimentary rocks. two types of quartz-rich sandstones of czech provenance were used for tomographic observations during four-point bending loading performed on chevron-notched test specimens. it was found that the crack begins to propagate from the moment that ca. 80 % of the maximum loading force is applied.

1. introduction

the process of crack propagation in quasi-brittle materials due to mechanical loading leading to material failure has been intensively studied by researchers in various disciplines for a very long time. the monitoring of crack initiation and propagation is also of great importance in rock engineering. the presence of micro- as well as macro-cracks significantly influences the strength, deformation, and filtration properties of the rock mass and therefore strongly affects, for example, the stability of underground workings, tunnels, or open pit slopes.
rock fracture mechanics can also be applied in the prediction of anomalous geomechanical phenomena such as rock bursts or rock and gas outbursts, or in the evaluation of rock fragmentation processes such as drilling, blasting, crushing, and cutting [1], [2]. more recently, the knowledge about rock fracture processes has been of crucial importance when assessing the suitability of the rock host environment for such demanding engineering applications as co2 sequestration or the geological disposal of high-level radioactive waste. the failure process of rocks and similar rock-like materials is the result of complex mechanisms, including microcrack initiation, propagation, and interactions with each other, resulting in crack coalescence. eventually, a macroscopic failure plane is generated, thus causing final rock rupture [3], [4]. cracks in rocks initiate and propagate in response to the applied stress, with the crack path often being driven by the local distribution of micro-flaws such as cavities, inclusions, fossils, grain boundaries, mineral cleavage planes, and micro-cracks inside the rock [5], [6]. crack initiation occurs when the stress intensity factor (k) at a microcrack tip reaches its critical value, known as the fracture toughness (kc). fracture toughness thus expresses the resistance of a material to crack initiation and subsequent propagation and represents one of the most important material properties in linear elastic fracture mechanics (lefm). however, it is important to highlight that rocks and similar geomaterials such as concrete exhibit quasi-brittle behaviour (see [7], [8]), which is characterised by a large plastic zone, referred to as the fracture process zone (fpz), ahead of the crack tip, where more complex non-linear fracture processes occur. due to the fpz, classical lefm is not fully applicable to studies of rock/concrete fracture processes. thus, the description of fractures needs to be carried out based on non-linear fracture models involving the cohesive nature of the crack propagation; often the fracture energy and/or other softening parameters are utilised [9].
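for orientation, the mode-i initiation criterion mentioned above can be written in its standard textbook form (a generic lefm relation, e.g. [7], not a formula given in this paper; y is a dimensionless geometry factor, σ the applied stress and a the crack length):

```latex
\[
  K_\mathrm{I} = Y \, \sigma \, \sqrt{\pi a},
  \qquad
  \text{crack initiation when } K_\mathrm{I} \ge K_\mathrm{Ic} .
\]
```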
to date, many advanced techniques such as scanning electron microscopy [10], [11] or acoustic emission detection [12], [13] have been adopted to study the progressive failure process of rocks. with these experimental approaches, basic data about crack propagation can be obtained, but the spatial information about deformation processes and fpz development throughout the tested sample volume remains unknown. for this reason, a wide range of uses is opening up for x-ray ct in the study of the deformation behaviour and fracture processes in rocks. in this paper, a completely newly designed loading device with a four-point bending setup for vertically oriented scanned samples, allowing 4d ct measurements of crack and fpz propagation in quasi-brittle materials, was used. more specifically, the presented contribution deals with the identification of crack initiation and propagation in two types of upper cretaceous quartz sandstones. both rocks represent well-known building, sculpture, and decorative stone materials that have been used on czech territory for many centuries [14].

2. rock material used for the experiments

two different types of czech sandstones, namely the mšené sandstone and the kocbeře sandstone, were used in the fracture experiments. the fine-grained mšené sandstone is almost entirely (> 90 vol. %) composed of quartz grains with an average grain size of 0.15 mm. other clastic components consist of quartzite, feldspars, and mica flakes. the rock matrix (ca. 5 vol. %) is formed by kaolinite with very finely dispersed fe-oxyhydroxides (limonite). the degree of secondary silicification is very low. the fine- to medium-grained kocbeře sandstone has a bimodal grain size distribution and mainly consists of monocrystalline quartz grains with an average grain size of 0.24 mm and a maximum grain size of 1.5 mm. quartzite and orthoclase grains occur in a considerably smaller quantity. the matrix and rock cement (10 – 15 vol. %) are formed by quartz, which predominates over clay matter. secondary silicification is intense. both sandstones used in the experiments are similar in the mineralogical composition of their detrital rock particles but differ in some inner rock texture features as well as in the mineralogical composition of the interstitial material between the framework grains. these differences are reflected in different values of physical and mechanical properties (table 1).

table 1. basic physical and mechanical properties of mšené and kocbeře sandstones according to various authors (data adopted from [14]-[16]).
parameter | mšené sandstone | kocbeře sandstone
specific (real) density in kg/m3 | 2,620–2,650 | 2,630–2,670
bulk (apparent) density in kg/m3 | 1,850–1,930 | 2,140–2,490
total porosity in % | 26.3–29.7 | 12.0–15.3
water absorption capacity by weight in % | 10.8–13.3 | 2.2–6.1
uniaxial compressive strength (dry sample) in mpa | 21–33 | 56–87
flexural strength (dry sample) in mpa | 0.9–1.9 | 5.9–7.9

3. experimental setup and instrumentation

3.1. x-ray ct imaging device

the xt h 225 st industrial x-ray micro-ct system by nikon metrology nv was used to assess the development of the fracture process in fracture toughness tests using chevron bend (cb) test specimens. this x-ray ct scanner is a fully automated apparatus with a rotating scanning system equipped with a microfocus x-ray source, which generates cone-shaped beams. it is equipped with an x-ray flat panel detector with 2,000 × 2,000 pixels and a pixel size of 200 μm. the basic technical parameters of the xt h 225 st inspection machine are given, for example, in [17]. the scanning parameters were as follows: a reflection target, 160 kv voltage, 126 μa current, a 0.5 mm thick aluminium filter, 3,141 projections with two images per projection, 1,000 ms exposure, a scanning time of ca. 2 hours, and a cubic voxel size of 16 μm. the ct data generated by the micro-scanner were reconstructed using the ct pro 3d software (nikon metrology nv).
the visualisation and analysis software vgstudio max 3.3 (volume graphics gmbh, germany) was used for data post-processing.

3.2. loading device

previous research [17], [18], focussed on the study of crack propagation processes in quasi-brittle materials such as silica-based composites or sandstones, has shown some limitations when conventional three-point bending tests are used in combination with the x-ray ct technique. the main disadvantage of such an arrangement of the experiment lies in particular in the horizontal orientation of the sample perpendicular to the rotational axis of the ct scanner, which, together with the loading supports covering parts of the radiograms (see figure 1a), significantly reduces the quality of the acquired data. these shortcomings of the foregoing technical solution have been eliminated with the use of a unique four-point bending loading device which was developed (czech national patent cz 307897) at the institute of theoretical and applied mechanics of the czech academy of sciences in prague (itam cas). contrary to the standard arrangement of three- or four-point bending tests using horizontally oriented samples, this novel approach is based on the vertical orientation of the investigated cylindrical specimen, the direction of whose longitudinal axis is identical to that of the rotational axis of the ct scanner (figure 1b). this concept significantly reduces the differences in the attenuation of x-rays observed during the rotation of the sample subjected to a ct scan. the modular device for four-point bending during micro-ct sequences consists of three main components. a pair of motorised movable supports is integrated with driving units with precision captive stepper linear actuators (23-2210, koco motion dings, usa) and precision linear guideways (mgw12, hiwin, japan). the driving units are equipped with linear encoders (lm10, renishaw inc., united kingdom) ensuring positioning with 1 μm resolution and load cells (lcm300, futek, usa) with a nominal capacity of 1,250 n. the central part of the device frame exposed to the x-ray beam is manufactured from a carbon fibre composite (mtm57 series epoxy resin, t700s carbon fibres, shell nominal thickness 1.95 mm) providing sufficient frame stiffness and low attenuation of x-rays. the cylindrical load-bearing frame housing the loaded specimen, together with all the components of the loading device, is manufactured from a high-strength aluminium alloy (en-aw 6086-t6). the small distance between the x-ray source and the scanned object also allows for the high resolution of the reconstructed 3d images, which is necessary for a detailed tomographic investigation of the loaded sample.
the high stiffness of the loading frame and the high-precision control of the loading force during the experiment allow the loading process to be interrupted at any point of both the ascending and descending parts of the rock's load-displacement (f-d) curve without the risk of sudden sample collapse. a more detailed technical description of the in-house four-point measuring device is provided in [19]. the device and its scheme are shown in partial section in figure 2a.

3.3. test specimens and experimental procedure

cylindrical cb specimens with a diameter of 29 mm and a length of approximately 195 mm were drilled from sandstone blocks in the laboratory. the core drilling was carried out parallel to the sandstone bedding planes. in the central part of the test specimen, a chevron edge notch was carved using a circular diamond blade. the width of the chevron notch was 1.4 mm. a prepared cylindrical specimen was inserted into the specimen chamber placed in the x-ray ct inspection system (figure 2b) and centred inside this chamber on the supports. the orientation of the longitudinal axis of the specimen, emplaced in the test-ready position, was identical to that of the rotational axis of the loading device. after the initial contact of the specimen with the loading parts, its stable position was secured by applying a contact force of 5 n. then, an inspection using transmission radiography was performed to verify the correct position of the chevron notch tip and to exclude samples with significant inhomogeneity in the volume of interest in the vicinity of the notch. pre-peak loading was performed in force control mode by prescribing a linear increase of the force and ensuring a uniform load distribution on the supports (outer supports span lout = 179 mm, inner supports span lin = 75 mm). post-peak loading was performed in displacement control mode with a position increment identical for both loading supports, to prevent the sudden rupture of the specimen caused by an eventual non-symmetrical response of the specimen. during the loading procedure, the test time, the displacement value, and the loading force were recorded continuously, with a sampling frequency of 200 hz for the displacements and the load of the external support, using in-house developed control software [20]. during the experiment, the loading process was interrupted at four to five loading steps, at which imaging via the x-ray micro-ct scanner was carried out. in the loading sequence, one or two load-steps were performed during the rock's hardening phase, one load-step near the ultimate-stress point, and two or three load-steps during the post-peak softening phase, without the sudden cracking of the specimen. the maximal loading force reached approximately 30 n to 35 n in the case of the mšené sandstone and between 90 n and 100 n when the kocbeře sandstone was tested (figure 3). the force drops in the recorded loading diagrams were caused by material relaxation during the time-lapse tomography scanning.

figure 1. principal differences between a conventional three-point bending loading scenario (a) and the new four-point bending test with a vertical orientation of the investigated specimen (b).
figure 2. four-point loading device for 4d micro-ct experiments: (a) longitudinal cross-section of the loading device, (b) loading device attached to the rotation table of the xt h 225 st x-ray micro-ct scanner.
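the loading protocol just described can be summarised by the following illustrative sketch; the `Rig`/`Scanner` interface is entirely hypothetical (the real device is driven by the in-house control software of [20]) and the numeric values simply echo the text:

```python
class Rig:
    """stand-in for the loading-device interface (hypothetical)."""
    def set_mode(self, mode): print(f"control mode -> {mode}")
    def ramp_force(self, to, rate): print(f"force ramp to {to:.1f} n at {rate} n/s")
    def step_supports(self, um): print(f"both supports moved by {um} um")

class Scanner:
    def scan(self): print("micro-ct scan (~2 h), load held")

def run_sequence(rig, ct, f_peak_est, scan_at=(0.8, 0.9, 1.0), post_steps=2):
    # pre-peak: force control with a prescribed linear increase,
    # interrupted for ct scans at chosen fractions of the expected peak
    rig.set_mode("force")
    for frac in scan_at:
        rig.ramp_force(to=frac * f_peak_est, rate=0.05)
        ct.scan()
    # post-peak: displacement control, identical increment on both supports
    rig.set_mode("displacement")
    for _ in range(post_steps):
        rig.step_supports(um=5)
        ct.scan()

run_sequence(Rig(), Scanner(), f_peak_est=33.0)  # mšené specimen peaked at ~33 n
```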
it is also clear from figure 3 that three test specimens were examined for each of the two above-mentioned sandstone types. one of them, namely specimen 16057/11 from the mšené sandstone, was then selected for a detailed description of the process of crack propagation, which is presented in section 4.

4. results

the measured data acquired from the loading device were subsequently processed in the form of f-d diagrams. in figure 4, the displacement-load curve of one selected test sample is shown. as can be seen from figure 4, a total of six micro-ct measurements were taken at points a to f while loading sample 16057/11. a reference ct measurement (scan a) was realised immediately after the fixation of the rock specimen in the loading device, at a minimal loading of 5 n. two other consecutive measurements were performed at points b and c before reaching the maximum load, i.e., in the ascending portion of the load-displacement plot. specifically, measurement b was made at a loading level of 26 n (i.e., at ca. 80% of the maximal force) and measurement c at a loading of 30 n, which corresponded to about 90 % of the peak force. measurement d practically corresponded to the ultimate stress point. the last two measurements, e and f, were made during the post-peak phase, at loads of 24 n and 9 n, respectively. the presented measurements clearly showed that even such low-strength rock samples, where the peak force reached only 33 n, can be successfully subjected to loading with discrete steps during strain softening without the risk of their sudden collapse. the process of crack propagation during the post-peak behaviour is well visible in the ct images presented in figure 4. generally, it is assumed that the real crack begins to propagate when the load reaches the ultimate stress point (see e.g., [21]). however, based on previous experience acquired during radiographic measurements in a three-point bending loading scenario [17], our current research was focussed mainly on the identification of the possible manifestation of crack origins and their subsequent growth on the ascending part of the loading curve. as for the pre-peak crack propagation, ct images taken before the load reached the peak value are shown in figure 5. this figure indicates that an apparent crack developing from the crack tip was present already within step c, i.e., at a level of ca. 90 % of the maximal loading force (fmax). when compared to the reference step a, some changes related to crack evolution were visible in the sandstone microtextures near the crack tip in scan b (ca. 80 % of fmax). these changes are reflected in the movement of individual quartz grains apart from each other.

figure 3. loading curves of the mšené (sample no. 16057) and kocbeře (sample no. 16060) sandstones with clearly visible loading gaps caused by ct scanning. the f-d diagram of the individual rock sample 16057/11 from the mšené sandstone is presented in detail in figure 4.
figure 4. f-d diagram of a selected rock sample prepared from the mšené sandstone (16057/11) with ct slices obtained at different loading steps (d, e, and f) on the descending part of the loading curve. a macroscopically visible crack path is highlighted by white lines. the red circle in the vertical cross-section of the loaded specimen (upper left corner) defines the area which is, in the case of loading steps a, b, and c, zoomed in on and presented in figure 5.
based on these findings, it can be concluded that, in the case of the studied sandstones, the crack began to propagate from the moment when approximately 80 % of the maximum loading force was applied. however, it should be noted that the crack path was hard to identify in some parts of the reconstructed ct slices due to the heterogeneous sandstone microtextures, especially due to the presence of pores. this problem can be overcome by using differential tomography, where changes in the object are emphasised by the subtraction of the actual and the reference tomographic reconstructions, as recently described by e.g., [16] and [19]. the differences between the states at points b and c, respectively, and the initial state (a) are presented in figure 6. the development of the crack during mechanical loading was manifested by an increase of the open porosity in the area of the crack spreading through the rock test specimen, which is clearly shown in figure 7. the figure shows the porosity distribution under the tip of the notch in two consecutive loading steps from the test of mšené sandstone specimen no. 16057/11. using a hexahedral mesh grid superimposed on the reconstructed 3d images, where the porosity was evaluated in every hexahedral region of interest as an average over its volume, it can be seen that the macroscopic crack developed before the ultimate stress point was reached.

figure 5. series of zoomed ct images of a selected mšené sandstone rock sample (16057/11) acquired at the different loading steps (a, b, and c) on the ascending part of the f-d loading curve. the changes in the sandstone microtexture related to the crack origin are highlighted by white ellipses.
figure 6. visualisation of the development of the crack shape in loading steps b and c using tomographic image subtraction between the actual (loaded) and reference (unloaded) states; i.e., the image on the left represents the difference between steps b and a, and the one on the right is a subtraction between the c and a loading steps.
figure 7. changes in the distribution of rock porosity in the xy (upper images) and xz (bottom images) planes due to crack development. it should be noted that the open porosity of the intact mšené sandstone measured by mercury intrusion porosimetry reached values between ca. 26 % and 30 %, as reported by e.g., [19], [22], or [23].
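both post-processing steps described above, the differential tomography and the hexahedral porosity grid, can be sketched with numpy as follows; the threshold-based pore segmentation is our simplification of the porosity evaluation:

```python
import numpy as np

def differential_volume(loaded, reference):
    """voxel-wise difference of the actual (loaded) and reference (unloaded)
    reconstructions, so that only the changes - the opening crack - remain."""
    return loaded.astype(np.int32) - reference.astype(np.int32)

def porosity_grid(volume, air_threshold, cell=(32, 32, 32)):
    """divide the reconstructed volume into hexahedral cells and take the
    porosity of each cell as the fraction of voxels whose grey value falls
    below an air/pore threshold."""
    nz, ny, nx = (s // c for s, c in zip(volume.shape, cell))
    v = volume[: nz * cell[0], : ny * cell[1], : nx * cell[2]]
    blocks = v.reshape(nz, cell[0], ny, cell[1], nx, cell[2])
    return (blocks < air_threshold).mean(axis=(1, 3, 5))
```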
5. conclusions

the study performed on the rock samples prepared from the quartz-rich mšené and kocbeře sandstones showed that crack development and propagation can be successfully observed in 3d thanks to the joint use of a four-point bending procedure and high-resolution 4d micro-ct measurements. the four-point bending device newly developed at itam cas allows the study of fracture processes in rocks and similar quasi-brittle materials with high precision. the measurements also showed that the concept of vertically oriented rock samples in four-point bending devices for 4d micro-ct provides considerable advantages over the standard horizontally oriented three-point or four-point bending setups. in the studied sandstones, it was observed that the process of crack propagation starts from the moment when approximately 80% of the maximum loading force is applied. this outcome is in very good agreement with the results of previous research [16], [17], which was performed on sandstones differing in their microtextural features, mineralogical compositions, and related physical and mechanical properties. this research therefore confirmed that the crack, which formed and started to propagate before the peak load was reached, propagated further during the post-peak phase. more specifically, the crack length at point d (maximal loading force) was approximately 4.9 mm, which increased to 9.8 mm and 15.2 mm at points e and f, respectively. these crack lengths, measured in the post-peak strain softening phase, are very similar to the values published in [19] for a stronger variety of the mšené sandstone. the observation that the process of crack propagation started before reaching the maximum load is consistent with previous knowledge obtained for various types of german, american, or chinese sandstones by means of digital image correlation [24] or acoustic emission techniques [25]-[28]. moreover, this finding is also valid for other quasi-brittle materials, such as concrete, as confirmed by the study of ae signals in both flexural [29] and compressive [30] loading modes.

acknowledgement

the presented work was supported by the project for the long-term strategic development of research organisations (rvo: 68145535) and the operational programme research, development, and education in project inafym (cz.02.1.01/0.0/0.0/16/019/0000766).

references

[1] isrm commission on testing methods (f. ouchterlony, coordinator), suggested methods for determining the fracture toughness of rock, int. j. rock mech. min. sci. & geomech. abstr. 25(2) (1988), pp. 71–96.
[2] b. n. whittaker, r. n. singh, g. sun, rock fracture mechanics: principles, design and applications, elsevier science publishers b.v., amsterdam, the netherlands, 1992, isbn 978-0444896841.
[3] h. haeri, k. shahriar, m. f. marji, p. moarefvand, experimental and numerical study of crack propagation and coalescence in pre-cracked rock-like disks, int. j. rock mech. min. sci. 67 (2014), pp. 20–28. doi: 10.1016/j.ijrmms.2014.01.008
[4] c. a. tang, s. q. kou, crack propagation and coalescence in brittle materials under compression, eng. fract. mech. 61(3-4) (1998), pp. 311–324. doi: 10.1016/s0013-7944(98)00067-8
[5] s. zabler, a. rack, i. manke, k. thermann, j. tiedemann, n. harthill, h. riesemeier, high-resolution tomography of cracks, voids and micro-structure in greywacke and limestone, j. struct. geol. 30(7) (2008), pp. 876–887. doi: 10.1016/j.jsg.2008.03.002
[6] j. b. zhu, t. zhou, z. y. liao, l. sun, x. b. li, r. chen, replication of internal defects and investigation of mechanical and fracture behaviour of rock using 3d printing and 3d numerical methods in combination with x-ray computerized tomography, int. j. rock mech. min. sci. 106 (2018), pp. 198–212. doi: 10.1016/j.ijrmms.2018.04.022
[7] z. p. bažant, j. planas, fracture and size effect in concrete and other quasibrittle materials, 1st edn., crc press llc, boca raton (fl), usa, 1998, isbn 978-0849382840.
[8] s. p. shah, s. e. swartz, c. ouyang, fracture mechanics of structural concrete: applications of fracture mechanics to concrete, rock, and other quasi-brittle materials, 1st edn., john wiley & sons inc, new york, usa, 1995, isbn 978-0-471-30311-4.
[9] l. vavro, l. malíková, p. frantík, p. kubeš, z. keršner, m. vavro, an advanced assessment of mechanical fracture parameters of sandstones depending on the internal rock texture features, acta geodyn. geomater. 16(2) (2019), pp. 157–168. doi: 10.13168/agg.2019.0013
[10] p. baud, e. klein, t. f. wong, compaction localization in porous sandstones: spatial evolution of damage and acoustic emission activity, j. struct. geol. 26(4) (2004), pp. 603–624. doi: 10.1016/j.jsg.2003.09.002
[11] z. brooks, f. j. ulm, h. h. einstein, environmental scanning electron microscopy (esem) and nanoindentation investigation of the crack tip process zone in marble, acta geotech. 8(3) (2013), pp. 223–245. doi: 10.1007/s11440-013-0213-z
[12] d. lockner, j. d. byerlee, v. kuksenko, a. ponomarev, a. sidorin, quasi-static fault growth and shear fracture energy in granite, nature 350 (1991), pp. 39–42. doi: 10.1038/350039a0
[13] e. townend, b. d. thompson, p. m. benson, p. g. meredith, p. baud, r. p. young, imaging compaction band propagation in diemelstadt sandstone using acoustic emission locations, geophys. res. letters 35(15) (2008), l15301. doi: 10.1029/2008gl034723
[14] v. rybařík, noble building and sculptural stone of the czech republic, industrial secondary school of stonework and sculpture in hořice, hořice, czech republic, 1994, isbn 80-900041-5-6 [in czech].
[15] p. koutník, p. antoš, p. hájková, p. martinec, b. antošová, p. ryšánek, j. pacina, j. šancer, j. ščučka, v. brůna, decorative stones of bohemia, moravia and czech silesia, jan evangelista purkyně university in ústí nad labem, ústí nad labem, czech republic, 2015, isbn 978-8074149740 [in czech].
[16] d. vavrik, p. benes, t. fila, p. koudelka, i. kumpova, d. kytyr, m. vopalensky, m. vavro, l. vavro, local fracture toughness testing of sandstone based on x-ray tomographic reconstruction, int. j. rock mech. min. sci. 138 (2021), art. no. 104578. doi: 10.1016/j.ijrmms.2020.104578
[17] l. vavro, k. souček, d. kytýř, t. fíla, z. keršner, m. vavro, visualization of the evolution of the fracture process zone by transmission computed radiography, procedia eng. 191 (2017), pp. 689–696. doi: 10.1016/j.proeng.2017.05.233
[18] i. kumpová, m. vopálenský, t. fíla, d. kytýř, d. vavřík, m. pichotka, j. jakůbek, z. keršner, j. klon, s. seitl, j. sobek, on-the-fly fast x-ray tomography using a cdte pixelated detector – application in mechanical testing, ieee trans. nucl. sci. 65(12) (2018), pp. 2870-2876. doi: 10.1109/tns.2018.2873830
[19] p. koudelka, t. fila, v. rada, p. zlamal, j. sleichert, m. vopalensky, i. kumpova, p. benes, d. vavrik, l. vavro, m. vavro, m. drdacky, d. kytyr, in-situ x-ray differential micro-tomography for investigation of water-weakening in quasi-brittle materials subjected to four-point bending, materials 13(6) (2020), art. no. 1405. doi: 10.3390/ma13061405
[20] v. rada, t. fila, p. zlamal, d. kytyr, p. koudelka, multi-channel control system for in-situ laboratory loading devices, acta polytech. ctu proc. 18 (2018), pp. 15–19. doi: 10.14311/app.2018.18.0015
[21] m. d. wei, f. dai, n. w. xu, t. zhao, stress intensity factors and fracture process zones of isrm-suggested chevron notched specimens for mode i fracture toughness testing of rocks, eng. fract. mech. 168(a) (2016), pp. 174–189. doi: 10.1016/j.engfracmech.2016.10.004
[22] j. desarnaud, h. derluyn, l. molari, s. de miranda, v. cnudde, n. shahidzadeh, drying of salt contaminated porous media: effect of primary and secondary nucleation, j. appl. phys. 118(11) (2015), art. no. 114901. doi: 10.1063/1.4930292
[23] z. pavlik, p. michalek, m. pavlikova, i. kopecka, i. maxova, r. cerny, water and salt transport and storage properties of mšené sandstone, constr. build. mater. 22(8) (2008), pp. 1736–1748. doi: 10.1016/j.conbuildmat.2007.05.010
operational methodology for the reconstruction of baroque ephemeral apparatuses: the case study of the funeral apparatus for cardinal mazarin in rome

acta imeko issn: 2221-870x march 2022, volume 11, number 1, 1-9

margherita antolini1
1 via di s. valentino 16, 00197, rome, italy

section: research paper
keywords: ephemeral baroque; virtual reconstruction; reconstruction from text
citation: margherita antolini, operational methodology for the reconstruction of baroque ephemeral apparatuses: the case study of the funeral apparatus for cardinal mazarin in rome, acta imeko, vol. 11, no. 1, article 13, march 2022, identifier: imeko-acta-11 (2022)-01-13
section editor: fabio santaniello, university of trento, italy
received march 6, 2021; in final form march 15, 2022; published march 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: margherita antolini, e-mail: margheritaantolini94@gmail.com

abstract
this paper aims to develop a methodology for the study of ephemeral artefacts that takes into consideration all the different aspects of the specific art form that is baroque ephemeral architecture. through the study of the social and artistic characteristics of this art form, the analysis of a wide range of case studies helps to define some common and recurring features, especially regarding the available data (engravings, paintings, manuscripts, etc.). the main goal of the research is to outline a methodology of approach to the single cases based on reconstruction from textual and graphic data, with special attention reserved to the relationship between the ephemeral apparatus and the surrounding urban space. the effectiveness of the methodology is tested through its application to the case study of the funeral apparatus for cardinal mazarin in rome (1661).

1. introduction
from a historical and artistic point of view, the baroque ephemeral is the most complete example of artistic and social synthesis: it embraces not only the visual arts but also music, literature and the technical-scientific advancements of the time, in order to achieve a product that can be enjoyed by the public as well as by the aristocracy and intellectuals, while conveying political and cultural messages at the same time. baroque ephemeral apparatuses are not limited to the celebration itself but are part of a system of documents edited with a communicative purpose, meant to disseminate and explain the event and its symbolic meaning. if on the one hand this implies a certain breadth in the corpus of available sources, on the other it opens an important question regarding their validity and objectivity in representing the events. moreover, the concept of "ephemeral" itself, meaning existing for a limited time, poses issues that are difficult to solve in terms of conservation, mainly regarding the contrapositions ephemeral/permanent and reality/representation. these are the issues that started the present study, founded on the search for an experimental methodology for the study, representation and communication (and consequently conservation) of baroque ephemeral artworks in their entirety: as artistic, architectonic, urban, social and political phenomena.
without forgetting the pioneering excursions of bragaglia [1], kernodle [2] and tintelnot [3], only in the second half of the last century was there an effective reawakening of interest around this line of italian creativity, in the wake of povoledo [4], viale ferrero [5], zorzi [6], ricci [7] and the fagiolo dell'arco brothers. although failing to exhaust such a complex theme, they defined its outlines and focused on its main aspects, restoring dignity to an artistic expression long ignored and promoting publications and exhibitions: one in naples in 1997 [8], one in parma in 2018 [9], and one in florence in 2019 [10], which represent a focal moment of study around the theme and the available corpus of works. but it was certainly maurizio fagiolo dell'arco who, in the 70s and 80s, conducted the most exhaustive and systematic study of the baroque ephemeral published to date: among his publications we find bibliografia della festa barocca and, above all, a work edited with silvia
carandini that represents the real cornerstone for studies on the theme, l'effimero barocco: strutture della festa nella roma del '600, as well as numerous publications and articles on the subject. only in recent years has the theme of the reconstruction of the baroque ephemeral gained the attention of various academics, such as paolo belardi [11] and paolo lattuada [12], who conducted experiments on the representation of ephemeral apparatuses, whereas macarthur and leach [13], delbeke [14] and conforti, d'amelio and grieco [15] continue the tradition of historical analysis through documents in the footsteps of fagiolo dell'arco and carandini.

2. characteristics of the phenomenon
the concept of ephemeral, and therefore temporary and "false", architecture, as well as the construction techniques used, has its roots in theatrical scenography, an artistic practice that experienced a moment of intense development in the 16th and 17th centuries, with the birth of melodrama and the works of artists such as palladio, scamozzi, serlio and peruzzi. the idea of arranging urban spaces to celebrate events is the logical consequence of the conception of the new acting space, and it naturally encompasses all of its characters. on the other hand, the depictions of pomp and luxury that characterize these projects are only a staging of power: rather than representing the stability of the institutions and their highest representatives, these evanescent creations actually seem to be the symbol of an equally labile and inconsistent power, of the political void of a fragmented country often subject to foreign domination [16]. used to divert the attention of ordinary people from a precarious reality and to create a fictitious link between the various social classes, ephemeral constructions found particularly fertile terrain during the 17th century when, with the spread of baroque poetics, the spectacular effect became the primary component of any artistic expression. thus art became an instrument of political propaganda and of manipulation of the people who, through a new form of panem et circenses, found a moment of liberation from the working reality and of contact with the unreachable world of the roman aristocracy. at the same time, celebrations were a demonstration of strength towards other nations. clear examples are the installations built to celebrate the births of foreign princes, as in the case of the dauphin of france in 1662, designed by bernini, or the exceptional apparatus for the arrival in rome of christina of sweden in 1655.
on this occasion, on the 23rd and 24th of december 1655, apparatuses similar to those of the papal possession were set up for the entrance into rome, along a path that unravelled among porta del popolo, san pietro and palazzo farnese, by gian lorenzo bernini, giovan paolo schor, ercole ferrata and carlo rainaldi; in february of the following year the presence of the former queen animated the carnival with special floats in via del corso and theatrical performances at palazzo barberini designed by giovanni francesco grimaldi.
the religious-political-social program of the baroque ephemeral exploits the technique of wonder according to a politics of persuasion, giving the artist a particular role as an image specialist. in this case, art does not give us the real face of the century but the face it believed (or wanted others to believe) it had: splendid, idealized, sumptuous, rich. the relationship of the artist with the client, and therefore with power, becomes particularly delicate to interpret at this point. despite being an instrument in the hands of religious-political power, the artist manages to impose his intellectual power, integrating the allegory, bearer of the political message, into the imaginative method. festive occasions retrace the entire life span of illustrious personalities, first of all of the pope, in a mixture of civil and religious occasions. the public feast, enjoyed by the community in an integral manner, is a complete event, both sacred and profane at the same time, during which judgment is suspended and one deliberately enters a reality different from daily life.
from a planning and organizational point of view, the celebration follows the same process regardless of the occasion. the main manager is the intellectual, who often coincides with the client: a man of culture or religion, he takes care of drafting the ideological program of the festival, experimenting with new tools for cultural diffusion. in the next phase, the artist and the creator collaborate to identify a more or less complex communication code, which will then be translated into forms. in every single apparatus, the plastic, pictorial and literary solutions are therefore a support for the ideological and cultural connotation, and are arranged in a spatial (and consequently temporal) succession in order to fulfil all the allegorical implications of the program [17]. the event is also completed by a report drawn up by the intellectual and illustrated with a few engravings, in which the apparatus and its meaning are described and explained, with the aim of functioning as a booklet for those who participate in the occasion, but also as a means of diffusion, especially in foreign courts. in this context, the relationship with the place and the city is fundamental: rome first of all presents itself as a spectacle, with its history and its tradition. the ancient city becomes an immense and prestigious theater and, if on the one hand this recalls the almost obsessive use of theatrical terminology in baroque culture, on the other it clarifies the particular political, social and urban configuration of the city in relation to the shows that take place on that scene.

3. methodological issues
the study of the cultural phenomenon of the ephemeral in the baroque era has highlighted several characterizing aspects relevant for the study of the individual artefacts: in particular, the literary and socio-political components were found to be fundamental.
approaching the individual projects, some recurring features from the documentary point of view emerge. analyzing a wide range of examples, it became clear that there are several basic structures which, if analyzed, can facilitate and simplify the study. in particular:
• each occasion presents recurring types and expedients, explained in specific treatises;
• each artist uses a more or less homogeneous language and, given the short time dedicated to the design and construction of the apparatus, often reuses the same pieces on different occasions;
• each occasion is documented by a written text and some "official" engravings, as well as paintings and other testimonies.
in this study, attention is paid to the latter as the only tools available for the reconstruction of ephemeral apparatuses. the relationship between text and architecture is a theme dealt with starting from the reading of vitruvius in the 16th century, or from leon battista alberti's descriptio urbis romae, and is still an object of interest in various areas of study. in these cases, unlike what happens in the fields of restoration and archaeology, the reconstruction does not start from the study of the artefact to be integrated with iconographic sources, but proceeds in the opposite direction, using texts and images as a starting point to be then confronted with the built reality.
some scholars have attempted reconstructions of baroque ephemeral systems, achieving results that we believe are partial and not entirely satisfactory, but which open the way to a series of interesting reflections and observations. on the one hand, paolo lattuada [12] reconstructed, in a virtual and physical environment, the celebratory machine designed by ferdinando sanfelice in 1740 for the birth of the reale infanta in naples. while he managed to interpret all the metric and space-integration aspects, both the realization in wooden panels and the virtual restitution fall short in rendering the figurative atmosphere sought and obtained by the designer. on the other hand, paolo belardi and valeria menchetelli [11] dealt with giuseppe piermarini's work by returning well-defined and metrically correct models, but completely ignoring the material and figurative aspects and the integration with the context.
the need for an operative methodology that would allow a historical, critical and virtual reconstruction of baroque ephemeral architectures arose from the analysis of the two examples above and of archival documents. by these terms it is understood that the methodology presented aims to indicate a series of effective steps which, if followed and applied with a critical conscience on a case-by-case basis, return a historically reliable product that takes into account the historical distance and the transformations that happened in the places and in the urban fabric. the choice of the virtual environment as the main theater of restitution derives from a reasoning on the most appropriate means for communicating this form of intangible heritage.
two aspects have led to the exclusion of purely two-dimensional representation and of the hypothesis of building temporary constructions:
• the virtual environment respects the idea of a temporary set-up without producing fake and always temporary artefacts, while still allowing interaction with the place and participation in a possible museum project [18], [19];
• the virtual environment is the contemporary response to the baroque desire for expansion and manipulation of space.
in addition to these premises, the method was empirically deduced from the study of the sources of a series of cases, varied from the point of view of the occasions and the artists involved, which unfolds during the 17th and 18th centuries in rome. the process is structured in four fundamental phases, which can be summarized as study of the sources, study of the object, analytical phase and restitution; while the first and the last clearly represent the initial and final moments of the work, the two intermediate ones can be considered complementary and represent the real critical moment of the study (as shown in figure 1).
once a project to be analyzed has been selected, the first step in the process is the collection and study of the sources relevant to the case study. in most cases these will be a report and official engravings, which can be accompanied by commemorative paintings and period chronicles. fagiolo dell'arco and carandini [20] have already drawn up a catalog of the ephemeral apparatuses realized in rome, which lists each project according to year, occasion, artists, material produced, material available and transcription of textual sources. this volume is extremely relevant and can constitute the beginning of a systematic study, but it is limited to documentation without interpretation of the data collected. the second moment aims at the appropriation of the data with the aim of analyzing it later: it is therefore a matter of transcribing the texts, often manuscripts or seventeenth-century prints, and of redrawing the engravings collected. in this way it is possible to begin to discern the characteristic elements and the amount of information available for the specific case.
afterwards, two courses of study open up: on one side the study of the object, on the other the analytical phase. in the following they will be presented one after the other, but it is clear that there is a continuous exchange between the two. as for the study of the object, it was highlighted how the place was a fundamental component of the baroque festival and of the design of the apparatus. for this reason, the survey of the area concerned is first carried out, in its current configuration and in a reconstructive hypothesis of the configuration at the date of the celebration. for this, on the one hand the methods of integrated surveying with massive acquisition are applied for the survey of the current state, and on the other the techniques of historical research through archival documents, historical maps, analysis of the walls, etc. the operations described result in a point cloud and a timeline, which are turned into two-dimensional (drawings) and three-dimensional (models) restitutions of the current state of the place and of the reconstructive hypothesis of the historical moment involved in the study. in this case it is not strictly necessary to study in depth each building phase, but it is certainly important to have a clear vision of the evolution of the fabric and the construction.
at the same time, it is possible to start the analytical phase by studying the cultural environment. given the importance of the ideological and allegorical contents, attention is paid in particular to the figure of the client and to the artists involved, as well as to the search for comparisons with similar cases. considering the scarcity of iconographic material, in fact, the deepening of the artist's style can lead to valid interpretations of the data gaps, or to clarifications of unclear points. at this point the actual analysis of the documents begins: firstly, the geometric-proportional aspects of the graphic data are analyzed, interpreting the techniques and graphic choices used in the engravings and paintings, and applying the rules of inverse perspective and of orthogonal projection. in this way, it is possible to establish the reliability of the drawings and their correspondence to reality, and to understand whether particular visual effects have been sought. knowing the geometry of the place, in fact, it is sufficient to overlay the two representations to notice any deformations, technical tricks, or inaccuracies. a similar approach is applied to the textual data, from which all possible information is derived. graphic and textual data are then interpolated to develop a model as exhaustive as possible regarding elements, geometries, colors, settings, lighting, details, allegorical meanings, plastic solutions, etc. to proceed in the knowledge, classifications are then drawn up that synthesize the distinction between ephemeral and permanent elements and the artistic techniques used, which range from painting to sculpture to pyrotechnic and even musical installations, to finally distinguish an abacus of serial and special elements. note how these two phases are the most conditioned by the specificity of each case.
the last moment is the restitution. this can take different forms depending on the sensitivity of the operator, the specific case study, and the use that will be made of it, but it must necessarily be composed of a series of two-dimensional drawings regarding the appearance of the place during the celebration, and of three-dimensional models that present the apparatus in its original setting and its integration in today's environment.

4. case study
the chosen case study is the pompa funebre nell'esequie celebrate in roma al cardinal mazarini nella chiesa dei ss. vincenzo e anastasio, which took place in 1661. a few years earlier, cardinal giulio mazzarino, minister of louis xiv, had financed the construction of a church in the area of piazza di trevi, the church of saints vincent and anastasius at trevi, which was completed a few months before his death. here the funeral was celebrated in rome, with an apparatus created by elpidio benedetti as the intellectual of reference and giovanni francesco grimaldi as the artist. at this time, an important role was played by the castrum doloris and by funeral celebrations in general.
menestrier [21] describes the regulations for the funeral ceremony: the funeral is divided into different moments (invitation, convoy, service, funeral eulogy, burial) which must correspond to the different parts of the church decoration (external facade, nave, altars, inscriptions, catafalque) as places appointed for an allegorical representation that culminates in the luminous vision of the castrum doloris, where the triumph of death and the religious and temporal power represented by the deceased are celebrated. the catafalque is the mirror of baroque taste, increasingly aimed at dematerialisation and refined invention, which culminated in the triumphs of the macabre designed by bernini. it is possible to trace a real typological evolution in the forms taken by the catafalques during the 17th century, starting from the forms of the ancient mausoleums: from an initial temple shape (catafalque for sixtus v, d. fontana, 1591) we move on to a temple or pyre type (catafalque for philip iii, torriani, 1621), and then to real miniature mausoleums (catafalque for gregory xv, lippi, 1623); these typologies are accompanied by exotic experiments, such as the della valle catafalque of 1627, while in the second half of the century pyramid forms are preferred (catafalque for the jesuits, sacchi, 1639), together with types based on assembly, such as the temple-type baldachin (catafalque for philip iv, del grande, 1665), the mausoleum with obelisks (catafalque for philip iv, c. rainaldi, 1665) and the triumphal arch and small temple with pyramid (catafalque for anne of austria, grimaldi, 1666), up to pyramids supported by living skeletons (beaufort catafalque, 1668) [17].
throughout his life grimaldi enjoyed the admiration and favor of a large group of admirers, thanks to his ability to adapt to the client's needs, to his good-natured and quiet character and perhaps to the moderate fees he asked. among the first to express appreciation of the artist was elpidio benedetti himself: notes of commendation appear in the reports drawn up on the occasion of feasts and religious celebrations where he worked, such as the judgment reported by antonio perez de rua when the artist decorated the external facade of the church of san giacomo degli spagnoli on the occasion of the funeral rite of the spanish king philip iv: «cuio diseno es de iuan francisco grimaldi bolognese, no menos valiente en el pincel, que bizarro en el compas, y en este genero de obras estimado por el primero desta corte.» [22]

figure 1. flowchart of the proposed methodology.

4.1. study of the sources
upon the death of cardinal mazarin on march 9, 1661, benedetti received instructions from louis xiv to create an adequate funeral pomp. the funeral report [23], written and published by benedetti, contains five engravings by dominique barrière which document the catafalque and the decorations for the facade, the counter-facade, the nave and the choir: one facade drawing, three views of the interior (one towards the choir, the second towards the main entrance and the third of the walls of the lateral nave), and one of the catafalque. barrière's name appears only in the engravings of the facade and the catafalque, while all the engravings bear the inscription abbas elpidius benedictus inven., and grimaldi is referred to as the architect of the entire funeral apparatus, i.e., the artist responsible for its construction.
in the decoration of the facade and the interior it is difficult to find grimaldi's hand: the two landscapes on the facade, representing the setting and the rising sun, are somewhat elementary and do not seem to be the work of a specialist, so it seems that he entrusted this part to his assistants, given his contemporary commitments in the palazzi santacroce and nunez. on the other hand, the engraving of the cardinal's catafalque highlights the typical stylistic elements of grimaldi, on the basis of which both the invention and the execution are attributable to him. the catafalque is a simple structure which, although lacking specific preparatory studies, retains many elements also found in the preparatory studies for the catafalque for anne of austria, designed and built by grimaldi in 1666 [22]: some of the elements present in the first sketch were incorporated into the catafalque of cardinal mazarin, for example the candle holders on the sides; the antique-style putti on the right at the base of the catafalque of anne of austria are also present next to that of cardinal mazarin. likewise, the two female figures holding the urn reappear in mazarin's catafalque, although their function has been changed. furthermore, the two putti holding the queen's coat of arms perform almost the same function when they raise the cardinal's hat. finally, in both monuments, the urn is placed on top of the funeral structure and above the urn appears the portrait of the deceased flanked by two figures. the second sketch contains elements that are found in the catafalque of cardinal mazarin, such as the pyramidal structure at the base of which, both in the drawing and in the engraving, two putti hold the coat of arms of cardinal mazarin, while an elaborate urn is placed high up. in both cases, the entire structure ends with the portrait of the deceased flanked by two figures. although the catafalque of cardinal mazarin is different, all the elements that compose it, as well as the style of the female figures, the putti, the olive trees and the palm trees, refer back to grimaldi rather than to benedetti. grimaldi had in fact repeatedly used the same elements in his career. it is not clear who created the iconography: it could have been a collaboration between the french authorities, benedetti and, perhaps, grimaldi. in any case, it is based on a tribute to cardinal mazarin's services to the french crown and to christianity. at the foot of the square base sit two parts of the world, in reference to cardinal mazarin's broad political action. above the base there are the coat of arms and the cardinal's hat, and above it a representation of rome in tears over the death of the cardinal; on the other three sides are the relief representations of france, spain and christianity. above the pyramidal structure rests the urn of cardinal mazarin, which acts as a support for his portrait surrounded by a laurel wreath. on the left stands an olive tree, a symbol of peace, under which religion appears, with justice behind it. on the right stands a palm tree, a symbol of war, with fortitude underneath and providence behind. among the branches, two cherubs hold an olive crown with the inscription et pace and a palm crown with the inscription et bello.

4.2. study of the object
the survey of the church of santi vincenzo e anastasio was carried out with the integrated survey methodologies with massive acquisition, through laser scanner and structure from motion. in this way, a point cloud was obtained which was satisfactory for the set objective.
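the survey step just described can be sketched in a few lines of code. the following is a minimal, hypothetical example, assuming the open-source open3d library and an invented file name; the voxel size is illustrative and not a value taken from the study.

```python
import open3d as o3d

# load the point cloud produced by the laser-scanner / structure-from-motion
# survey (the file name is hypothetical)
pcd = o3d.io.read_point_cloud("church_survey.ply")

# downsample to a manageable density before producing the 2d/3d restitutions;
# a 2 cm voxel is an illustrative choice
pcd_down = pcd.voxel_down_sample(voxel_size=0.02)

# quick dimensional sanity check of the surveyed volume
# (in metres, if the cloud is in metric units)
extent = pcd_down.get_axis_aligned_bounding_box().get_extent()
print("surveyed volume extent (x, y, z):", extent)
```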
additionally, through historical research, it was possible to delineate a well-defined timeline for the construction of the church [24], [25]. given the narrowness of the site, the original organism consisted of a nave flanked by chapels which, without a transept, ended in the presbytery with a flat back wall, at the height of the pilasters that would later frame the eighteenth-century tribune. the tribune, with the high altar and the elliptical dome above it, belongs to the intervention after 1748. regarding the nineteenth-century interventions, in addition to the decoration of the chapels, we highlight the fresco of the vault in 1818 and the restoration of 1857 with the remaking of the marble floor. the building history can therefore be summarized in three main phases (figure 2):
• 1641-1646: construction of the interior, with back wall and six chapels, by gaspare de vecchi;
• 1646-1650: construction of the facade by martino longhi the younger;
• post 1748: extension of the church with tribune, apse and dome; decoration of the chapels.
for the purpose of the study, the configuration of 1661, i.e. shortly after the completion of the second phase, was analyzed and redrawn, based on the comparison between the current configuration of the church, archival documents, engravings by landscape painters and the engravings related to the apparatus.

4.3. analytical phase
following the previously described methodology, once the place and the general characteristics of the case study have been studied, attention is focused on the iconographic sources of the apparatus. as for the funeral in question, no paintings or engravings other than those of the report were found, i.e. the five engravings by dominique barrière depicting the facade, the counter-facade, the nave, the altar and the catafalque. all these engravings present the apparatus in its intended setting, which means that it is possible to infer the three-dimensional configuration of the ephemeral apparatus simply by comparing the engravings with the survey and then applying the rules of inverse perspective. first of all, it was necessary to deal with the problem of scale and deformation: the engravings in fact carry no dimensional indication and are not all represented at the same scale, but rather designed to fit the same format. therefore, they have been divided into two groups, as two of them are drawn in orthogonal projection while the other three are in central perspective. analyzing those in orthogonal projection, that is the facade and the nave, at first it seemed sufficient to exploit the geometry of the church, that is to say simply to superimpose the engraving on the reconstructive hypothesis advanced during the historical study of the place. however, the operation proved not to be sufficient, since the drawings are deformed in height, probably to obtain visually more captivating effects of grandeur and vertical development (figure 3). as for the counter-facade, the altar and the catafalque, they were divided into two further subgroups. the first includes the engravings of the counter-facade and the altar: also in this case, the solution seemed simple, that is to apply the rules of descriptive geometry and of inverse perspective to obtain the real measurements, but a more detailed examination revealed that these two engravings present three central vanishing points and are therefore an assembly of three overlapping perspectives.
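the scale-and-deformation check described above for the engravings in orthogonal projection can be made quantitative by estimating a separate scale factor per axis from a few correspondences between engraving pixels and surveyed metres. a minimal numpy sketch; all coordinates and dimensions below are invented for illustration, and a vertical-to-horizontal ratio far from 1 would reveal the height deformation discussed in the text.

```python
import numpy as np

# corresponding points on the facade: engraving pixels vs. surveyed metres
# (all values invented for illustration)
engraving_px = np.array([[120, 40], [880, 40], [120, 1150], [880, 1150]], float)
survey_m = np.array([[0.0, 0.0], [14.8, 0.0], [0.0, 18.5], [14.8, 18.5]], float)

def axis_scale(px, m):
    """least-squares scale factor (pixels per metre) for one axis."""
    px, m = px - px.mean(), m - m.mean()
    return np.sum(px * m) / np.sum(m * m)

sx = axis_scale(engraving_px[:, 0], survey_m[:, 0])
sy = axis_scale(engraving_px[:, 1], survey_m[:, 1])

# a ratio far from 1 indicates a vertical stretching of the engraving
print(f"horizontal: {sx:.1f} px/m, vertical: {sy:.1f} px/m, "
      f"height exaggeration: {sy / sx:.2f}")
```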
here too, as for the orthogonal projections, the goal of the engraver was to obtain a vertical expansion of the space of the drawing compared to the real space. the process was therefore to recognize the three perspectives with their vanishing points, horizons and ground lines, to identify which parts of the drawing referred to each system of projections, and then to apply the rules of inverse perspective (figure 4). the last reasoning concerns the catafalque: as far as the perspective study is concerned, the task was much simpler here, as the drawing is a central perspective with a single vanishing point. however, the engraving exclusively shows the catafalque, removed from its architectural context, so it was possible to reconstruct its geometry but not its dimensions and position. thanks to the integration with the textual data and with the stylistic comparisons, it was possible to determine the position of the catafalque in the church and to notice how all the grimaldi catafalques respond to the same dimensional relationships between the catafalque and the setting. at this point it was possible to integrate the information deriving from the graphic data with the survey and reconstruction drawings previously produced, obtaining the nature, position and size of each element; at the same time, it was possible to distinguish the elements of the ephemeral apparatus from those of the permanent architecture.

figure 2. graphic representation of the three construction phases of the church of santi vincenzo e anastasio in rome.
figure 3. analysis of the scale and deformation in the engraving of the nave in relation to the survey.
figure 4. analysis of the perspective and deformation in the engraving of the altar.

an interesting detail is the study of the shadows in the engravings. these, in fact, have been drawn with such detail as to allow one to distinguish the painted elements from those in relief and, for the latter, to determine their depth and position with respect to the background. at the same time, the textual data was analyzed, first by reading and carefully transcribing the report. the informative nature of the text was immediately evident, given the celebratory but at the same time simple language used by the author, and the absence of latinisms. the report consists of seventeen pages of text. the first two contain a praise of all those who were involved in the organization of the funeral and introduce the following five chapters, each of which is accompanied by an engraving. all these chapters follow the same approach: commendation of the deceased, description of a part of the apparatus and explanation of its allegorical meaning. as mentioned above, the text played a crucial role in defining the position and size of the catafalque, but it also provided a number of other important pieces of information. first of all, the analysis of the text confirmed the distinction of the elements into ephemeral and permanent; subsequently, it allowed the characterization of the models with data regarding materials, colours, lighting and the desired atmosphere, and finally a further characterization through the allegorical reading (figure 5). to obtain this result, the text was reread several times, highlighting the different types of information and classifying them according to: element involved, chromatic, material, and symbolic data.
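the classification just described maps naturally onto a simple record type, one entry per element of the apparatus. the following sketch is only one possible shape for such an "abacus" entry; the field names and the example values are assumptions, not the schema actually used in the study.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ApparatusElement:
    """one entry of the abacus of elements, merging graphic and textual data."""
    name: str
    ephemeral: bool           # ephemeral apparatus vs. permanent architecture
    technique: str            # painting, sculpture, pyrotechnics, ...
    colour: Optional[str]     # chromatic data from the report
    material: Optional[str]   # material data from the report
    symbolism: Optional[str]  # allegorical meaning (pop-up in the 3d model)
    serial: bool              # serial element vs. special (one-off) element

# example entry, invented for illustration
putto = ApparatusElement(
    name="putto raising the cardinal's hat",
    ephemeral=True,
    technique="sculpture",
    colour="faux bronze",
    material="stucco on a wooden core",
    symbolism="exaltation of the cardinal's dignity",
    serial=False,
)
```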
these pieces of information were interpolated with the results of the analysis of the graphic data, and generated an abacus of elements based on the artistic techniques used and on the seriality or otherwise of the artefact, which clarified all the characteristics of the apparatus and facilitated the modeling process. symbolic information was not easy to integrate with two-dimensional models but can be added to the three-dimensional model as easy-to-understand interactive pop-ups.

4.4. restitution
the final result of the study consists of two-dimensional (1:50 scale) and three-dimensional (potentially 1:1 scale) elaborations, born from the integration of the survey, the historical study, the analysis of textual data and the analysis of graphic data. at the communicative level, while two-dimensional drawings can be more useful for communication at an academic level and are easily readable, the model is more versatile for dissemination and possible use in the context of a museum display, while maintaining the same level of reliability and scientific soundness. the models produced, moreover, not only return the reconstruction of the 1661 apparatus but also investigate the relationship that is created between the apparatus designed by grimaldi and the current location, establishing a critical dialogue between past and present, between ephemeral and permanent, in which both become changeable and question the concepts of real space and space of representation (figure 6).

5. conclusions
the presented study achieved two complementary results valid for further developments. the first is the creation of the 3d models of the setting up of mazarin's funeral, i.e. the outcome of the analysis carried out on the case study. these can be immediately made available and used for academic purposes but also, and most importantly, for communication and conservation purposes. however, the models mentioned are the result of the application of a study and analysis method drawn up here to address and resolve representation issues that have never been fully analyzed so far. if applied systematically and correctly, i.e. with a critical sense, the methodology could lead to interesting results in several areas:
• first of all, the study of a cultural phenomenon fundamental for baroque society;
• as seen previously, the design of the ephemeral apparatuses gave the artists the possibility to create 1:1-scale study models of stable architectures that they would later build; their knowledge could open new scenarios regarding the design process of baroque architects;
• communication and museum display of artefacts that can help users understand the society of baroque rome with immediate, easily available and low-cost means of management [26];
• conservation of this intangible heritage in the manner most appropriate to the specific characteristics of the phenomenon [27].

figure 5. results of the graphic data interpretation, the textual data information and the final 3d model.

overall, the methodology aims to be operational, historically reliable and critical of the transformations of places, and to return the results in a virtual environment; it is the result of an empirical approach.
lacking previous research on the specific topic, it was fundamental first to understand the phenomenon of the baroque ephemeral from an ethnographic point of view, to establish its importance and to better interpret the relationship between the apparatuses and the places, i.e. the relationship between artists, power and the city. subsequently, by deepening the historical-artistic aspects, the fundamental role that celebrations play in baroque poetics became clear, as the sum of its three fundamental aspects: the aim for wonder, experimentalism and the synthesis of the arts. from the beginning, the relevance of these projects was evident, as well as the recurring characteristics in the conduct of the celebrations and above all in their representation. therefore, a study of a wide range of cases was considered necessary to confirm the hypothesis. hence the need for a study method that would take into account both the specific features of each apparatus and their typicality, and which could open up the possibility of studying this artistic practice in a systematic way.

acknowledgement
we thank professor alfonso ippolito (università di roma la sapienza) for supporting this research as a tutor and mentor.

references
[1] a. g. bragaglia, scenotecnica barocca, emporium, vol. lxxxvi, 515 (1937), pp. 608-614 [in italian].
[2] g. r. kernodle, the theatre in history, university of arkansas press, 1989, 606 p.
[3] h. tintelnot, annotazioni sull'importanza della festa teatrale per la vita artistica e dinastica nel barocco, retorica e barocco: atti del 3° congresso internazionale di studi umanistici, venezia, 1954, fratelli rocca, rome, 1955, pp. 233-242 [in italian].
[4] e. povoledo, il teatro nel teatro e la tradizione iconografica della scena barocca in italia, vita teatrale in italia e polonia fra seicento e settecento, państwowe wydawnictwo naukowe, warsaw, 1984, pp. 232-243 [in italian].
[5] m. viale ferrero, feste politiche e politica della festa, in: milleottocentoquarantotto: torino, l'italia, l'europa, archivio storico della città di torino, turin, 1998, isbn 8886685319, pp. 53-68 [in italian].
[6] l. zorzi, figurazione pittorica e figurazione teatrale, in: storia dell'arte italiana, vol. i, einaudi, turin, 1978, pp. 421-456 [in italian].
[7] g. ricci, la scenografia, grafica gutenberg, gorle, 1977 [in italian].
[8] s. cassani, capolavori in festa: effimero barocco a largo di palazzo, 1683-1759, electa, naples, 1997, isbn 8843586998, 254 p. [in italian].
[9] f. magri, c. mambriani, il dovere della festa. effimeri barocchi farnesiani a parma, piacenza e roma (1628-1750), grafiche step, parma, 2018, isbn 8878981621, 139 p. [in italian].
[10] m. m. simari, a. griffo, il carro d'oro di johann paul schor: l'effimero splendore dei carnevali barocchi, sillabe, florence, 2019, ean 9788833400761 [in italian].
[11] p. belardi, v. menchetelli, architetture immateriali. la ricostruzione digitale di progetti e apparati effimeri di giuseppe piermarini, in: giuseppe piermarini tra barocco e neoclassicismo. roma napoli caserta foligno, fabrizio fabbri editore, perugia, 2010, isbn 9788896591277, pp. 285-287 [in italian].
[12] ricostruzione filologica di una macchina da festa [in italian]. online [accessed 14 march 2022] http://www.paololattuada.net/progetti/recupero-e-restauro/ricostruzione-filologica/
[13] j. macarthur, a. leach, mannerism, baroque, modern, avant-garde, the journal of architecture 15(3), pp. 239-242.
[14] m. delbeke, framing history: the jubilee of 1625, the dedication of new saint peter's and the baldacchino, in: festival architecture, routledge, 2008, isbn 978-0-415-70129-7, pp. 129-154.
[15] c. conforti, m. g. d'amelio, l. grieco, grande come il vero. full-size architectural models in the modern period, frascari symposium iv: the secret lives of architectural drawings and models: from translating to archiving, collecting and displaying, kingston school of art, london, 27-28 june 2019.
[16] f. mancini, perennità dell'effimero, in: capolavori in festa. effimero barocco a largo di palazzo (1683-1759), electa, naples, 1997, isbn 8843586998, p. 3 [in italian].
[17] m. fagiolo dell'arco, s. carandini, l'effimero barocco: strutture della festa nella roma del '600, vol. ii testi, bulzoni, roma, 1977-78, 390 p. [in italian].
[18] s. de felici, la rappresentazione scenica dal bozzetto alla realizzazione: potenzialità espressive e di indagine delle tecnologie informatiche, tesi di dottorato, tutor: antonino gurgone, marco carpiceci, roma: dipartimento di storia, disegno e restauro dell'architettura, 2012, 145 p. [in italian].
[19] f. luce, la comunicazione dell'architettura: nuove forme di rappresentazione, nuove modalità di fruizione, tesi di dottorato, tutor: roberto de rubertis, co-tutor: giovanna a. massari, roma: dipartimento di storia, disegno e restauro dell'architettura, 2012, 111 p. [in italian].
[20] m. fagiolo dell'arco, s. carandini, l'effimero barocco: strutture della festa nella roma del '600, vol. i catalogo, bulzoni, roma, 1977-78, 390 p. [in italian].
[21] c. f. menestrier, des decorations funebres. ou il est amplement traité des tentures, des lumières, des mausolées, catafalques, inscriptions, & autres ornements funèbres, j. b. de la caille, paris, 1683, 367 pp.
[22] d. batorska, giovanni francesco grimaldi (1605/6-1680), campisano, roma, 2011, 252 p.
[23] e. benedetti, pompa funebre nell'esequie celebrate in roma al cardinal mazarini nella chiesa de ss. vincenzo & anastasio, roma, 1661 [in italian].
[24] a. pugliese, s. rigano, martino lunghi il giovane: architetto, bulzoni, roma, 1972 [in italian].
[25] v. rizzi, indagine su di una facciata barocca: la facciata della chiesa dei ss. vincenzo e anastasio di martino longhi il giovane a roma rilevata e disegnata da viviana rizzi, fratelli palombi, roma, 1979, 34 tav., pp. 80-81 [in italian].
[26] g. tucci, v. bonora, v. tesi, b. pagnini, additive manufacturing of marble statues: 3d replicas for the preservation of the originals, 2019 imeko tc4 international conference on metrology for archaeology and cultural heritage, florence, italy, 4-6 december 2019. online [accessed 17 march 2022] https://www.imeko.org/publications/tc4-archaeo-2019/imeko-tc4-metroarchaeo-2019-53.pdf
[27] a. antonopoulos, s. antonopoulou, 3d survey and bim-ready modelling of a greek orthodox church in athens, imeko international conference on metrology for archaeology and cultural heritage, lecce, italy, 23-25 october 2017. online [accessed 17 march 2022] https://www.imeko.org/publications/tc4-archaeo-2017/imeko-tc4-archaeo-2017-134.pdf

figure 6. composition of the setting in the original location (3d model) and in the contemporary environment (point cloud).
thermoelasticity and aruco marker-based model validation of polymer structure: application to the san giorgio's bridge inspection robot

acta imeko issn: 2221-870x december 2021, volume 10, number 4, 177-184

lorenzo capponi1, tommaso tocci1, mariapaola d'imperio2, syed haider jawad abidi2, massimiliano scaccia2, ferdinando cannella2, roberto marsili1, gianluca rossi1
1 department of engineering, university of perugia, via g. duranti 93, 06125 perugia, italy
2 industrial robotic unit, istituto italiano di tecnologia, via morego 30, 16163 genova, italy

section: research paper
keywords: thermoelasticity; aruco markers; structural dynamics; carbon fibre reinforced polymer; robot inspection
citation: lorenzo capponi, tommaso tocci, mariapaola d'imperio, syed haider jawad abidi, massimiliano scaccia, ferdinando cannella, roberto marsili, gianluca rossi, thermoelasticity and aruco marker-based model validation of polymer structure: application to the san giorgio's bridge inspection robot, acta imeko, vol. 10, no. 4, article 28, december 2021, identifier: imeko-acta-10 (2021)-04-28
section editor: roberto montanini, università di messina and alfredo cigada, politecnico di milano, italy
received july 30, 2021; in final form december 9, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: lorenzo capponi, e-mail: lorenzocapponi@outlook.it

abstract
experimental procedures are often involved in the validation of numerical models. to define the behaviour of a structure, its underlying dynamics and stress distributions are generally investigated. in this research, a multi-instrumental and multi-spectral method is proposed in order to validate the numerical model of the inspection robot mounted on the new san giorgio's bridge on the polcevera river. an infrared thermoelasticity-based approach is used to measure stress-concentration factors and, additionally, an innovative methodology is implemented to define the natural frequencies of the robot inspection structure, based on the detection of aruco fiducial markers. an established impact-hammer procedure is also performed for the validation of the results.

1. introduction
in design and materials engineering, experimental validation of numerical models is commonly required in order to verify the quality of the simulation [1], [2]. in general, the level of validation is directly tied to the intended use of the model, and the supporting testing experiments are defined accordingly [3], [4]. while indirect validation uses experimental results that cannot be controlled by the user (e.g., from the literature or from previous research), a direct approach performs experiments on the quantities of interest [5], with the aim of reproducing, through the experiments, the actual behaviour of the simulated model [4], [6]. when irregularities in the geometry or in the molecular structure of the material are present, localised stress concentrations can lead to fractures [3]. due to this, a stress-concentration factor is usually considered during the design of a structure and, moreover, it is one of the focuses of experimental validation. local stress and strain measurements have been widely performed by means of established contact techniques (e.g., strain gauges) [7]-[10]. however, in recent decades, non-contact measurement methods for full-field stress and strain distribution estimation have been developed and are commonly employed in experimental validation tests, such as thermoelastic stress analysis (tsa) [11]. according to the thermoelastic effect, for a dynamically excited structure, the surface temperature changes, measured by means of an infrared detector, are proportional to the changes in the stress and strain tensors caused by the input load [12]. thermoelastic stress analysis has been involved in multiple research works, regarding non-destructive testing [13], defect identification [14], and material properties characterization [15], [16]. thermoelasticity has also been used to determine fatigue limit parameters and crack propagation [17], [18], for modal-damage identification in the frequency domain [19], and for stress intensity factor evaluation in complex structures [20]. however, due to the demands of high-speed operation and the use of light structures in modern machinery, static measurements of stress and strain distributions are no longer sufficient [21], [22]. in fact, when a flexible structure is excited at or close to one of its natural frequencies, significantly increased fatigue damage occurs [23]-[25]. due to this, the modal parameters (i.e., modal frequencies, modal damping and mode shapes) of structures and systems in the frequency range of interest are widely researched to properly simulate their behaviour in real operating conditions [26], [27], and, thus, to avoid fatigue damage. modal parameter definition is usually achieved experimentally via an impact-hammer procedure [26]. nevertheless, in recent years, the use of non-contact image-based measurement techniques in structural dynamics applications has grown. in fact, displacements, deformations and mode shapes can be measured with cameras operating in the visible spectrum by applying both digital image correlation and other computer-vision methods [28]-[30]. one of the more promising approaches for displacement and motion detection involves markers, either physical or virtual. virtual markers are directly generated through computer-vision algorithms, such as the scale invariant feature transform (sift) [31], [32], and speeded up robust features (surf) [33]. these algorithms are able to detect and describe local characteristics (i.e., features) in images. moreover, virtual markers are often used as they allow tracking objects in subsequently acquired frames without introducing physical targets, avoiding potentially misleading elements. in recent years, several research works have been developed using virtual markers. khuc et al. [34] and dong et al.
[35] investigated structural and modal analysis via computer-vision algorithms through virtual markers. however, in the cases when the points of interest are not directly identifiable as markers, physical targets need to be involved. furthermore, virtual markers are strongly influenced by lighting changes and low contrast and, moreover, they cannot be detected in areas of uniform intensity distribution with no gradients. physical markers, also known as fiducial markers, have also been widely employed in structural monitoring applications [36], [37]. a commonly employed group of fiducial markers are the squared planar markers [38], which are characterized by a binary-coded squared area enclosed by a black border. several sets of this marker type have been developed over the years [39]-[42]. however, in case of non-uniform light conditions and desired simultaneous detection of multiple markers, the aruco marker library was found to be very efficacious and robust to detection errors and occlusion [38], [43]. additionally, if the camera is calibrated, the relative position of the camera with respect to the markers can be directly estimated, and, through a custom configuration process, the system becomes more insensitive to detection errors and false positives [38]. due to this, the applicability of aruco markers has been widely studied in recent years. sani et al. [44] and lebedev et al. [45] employed them for drone quad-rotor and uav autonomous navigation and landing, while elangovan et al. [46] used them for decoding the contact forces exerted by adaptive hands. structural dynamics applications were also researched by abdelbarr et al. [47] for structural 3d displacement measurement. moreover, tocci et al. investigated the measurement uncertainty of the aruco marker-based technique for displacements down to the order of 1/100 mm, using laser doppler vibrometry as a comparison technique and defining the influence of the measurement parameters on the resulting measured displacement [48]. in this research, a non-contact multi-instrumental approach is presented for the validation of numerical model results, which involves stress-concentration factor evaluation through thermoelasticity measurements and natural frequency identification by means of aruco marker detection. the proposed method is applied to the san giorgio's bridge robot inspection structure.

2. materials and methods
2.1. robot inspection
the robot inspection is the first platform for the automatic inspection of a bridge. it was developed within a collaboration between the italian institute of technology, camozzi group, sda engineering, ubisive and the university of ancona (patent pt190478), for the new viaduct on the polcevera river, the so-called san giorgio's bridge, designed and built after the morandi bridge collapse. the structure is a 3-degrees-of-freedom platform, and it is fully autonomous. its main purpose is to carry 3 instrumented technological supports with high-performance cameras, lasers, ultrasonic sensors, and anemometers that scan the lower surface of the bridge and collect more than 35000 pictures. these pictures are then processed by pattern-analysis algorithms, giving the operator information on whether any change on the investigated surface has occurred. the robot weighs around 1.8 t and is shown in figure 1.

figure 1. robot installed on the new viaduct on the polcevera river.

2.2. thermoelasticity-based stress-concentration factor estimation
thermoelasticity is a full-field stress-distribution measurement technique based on the thermoelastic effect [11], [12].
According to this effect, under adiabatic conditions and for linear, homogeneous and isotropic material behaviour, a dynamically excited structure presents surface temperature changes proportional to the changes in the traces of the stress and strain tensors caused by the external load [19], [49]. Moreover, if the excitation is harmonic, the thermal fluctuation is expected to be at the same frequency as the input load, and its normalized amplitude variation is given by:

\[ \frac{\Delta T}{T_0} = -K_m \, \Delta\sigma_{kk} \,, \qquad (1) \]

where \(T_0\) is the ambient temperature, \(\Delta\sigma_{kk}\) is the variation of the first stress invariant and \(K_m\) is the thermoelastic coefficient, defined as [11]:

\[ K_m = \frac{\alpha}{\rho \, C_\sigma} \,, \qquad (2) \]

where \(\alpha\) is the thermal expansion coefficient, \(\rho\) is the material density and \(C_\sigma\) is the specific heat at constant pressure (or stress) [12]. In general, the temperature variation caused by the thermoelastic effect lies within the noise produced by the infrared detector [50]. Thermal acquisitions therefore have to be post-processed in order to obtain readable results [51]. Although general frequency-domain approaches are well established nowadays [19], [52], in this research a classical lock-in analysis was performed in order to single out the thermoelastic signal at a particular frequency (i.e., the load frequency) from the noisy signal acquired by the thermal camera [53], [54]. Denoting the input load frequency by \(\omega_L\), the digital lock-in amplifier gives the temperature fluctuation at \(\omega_L\) as magnitude \(\Delta T_{\omega_L}\) and phase \(\Theta_{\omega_L}\) [55]:

\[ \Delta T_{\omega_L} = \sqrt{I_x^2(\omega_L) + I_y^2(\omega_L)} \,, \qquad (3) \]

\[ \Theta_{\omega_L} = \arctan\!\left( \frac{I_y(\omega_L)}{I_x(\omega_L)} \right) , \qquad (4) \]

where \(I_x(\omega_L)\) and \(I_y(\omega_L)\) are the phasorial components of the thermoelastic signal evaluated at \(\omega_L\). Once the lock-in processing has been applied, the spatial information on the temperature (i.e., stress) distribution is obtained, and further structural analysis can be performed. In particular, the stress-concentration factor \(K_f\) can be estimated in areas where critical behaviour is observed. \(K_f\) is defined, on a linear profile, as the ratio of the highest stress \(\max(\Delta\sigma)\) to a reference stress, here chosen as the mean stress \(\overline{\Delta\sigma}\) on the same profile [20]:

\[ K_f = \frac{\max(\Delta\sigma)}{\overline{\Delta\sigma}} \,. \qquad (5) \]

By substituting (1) into (5), the \(K_f\) factor on a linear profile becomes:

\[ K_f = \frac{\max(\Delta T_{\omega_L})}{\overline{\Delta T_{\omega_L}}} \,. \qquad (6) \]
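To make the processing chain of (3), (4) and (6) concrete, the following is a minimal Python sketch of the per-pixel lock-in demodulation of a thermal video and the subsequent profile-based K_f estimate. The file name and the profile indices are assumptions for illustration only; the authors' own processing packages are collected in [53].

```python
# Minimal sketch of the lock-in demodulation of (3)-(4) and the K_f estimate of (6).
# Assumes a thermal video stored as a NumPy array T[frame, row, col] in a
# hypothetical .npy file; illustrative only, not the authors' processing chain.
import numpy as np

fs = 125.0      # thermal camera sampling rate, Hz (Sec. 3.1)
f_load = 1.1    # harmonic load frequency, Hz (T1 test, Sec. 3.1)

T = np.load("thermal_video.npy")      # hypothetical file, shape (n_frames, H, W)
T = T - T.mean(axis=0)                # remove the static (DC) component per pixel
n = T.shape[0]
t = np.arange(n) / fs

# In-phase and quadrature reference signals of the digital lock-in amplifier
ref_x = np.cos(2 * np.pi * f_load * t)
ref_y = np.sin(2 * np.pi * f_load * t)

# Phasorial components I_x, I_y per pixel; the 2/n factor recovers the amplitude
I_x = 2.0 / n * np.tensordot(ref_x, T, axes=(0, 0))
I_y = 2.0 / n * np.tensordot(ref_y, T, axes=(0, 0))

dT = np.sqrt(I_x**2 + I_y**2)         # magnitude map, eq. (3)
phase = np.arctan2(I_y, I_x)          # phase map, eq. (4)

# Stress-concentration factor on a (hypothetical) linear profile, eq. (6)
profile = dT[120, 50:200]
K_f = profile.max() / profile.mean()
```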
2.3. ArUco-based resonant frequencies identification

As discussed earlier, the ArUco marker library was chosen for the measurements on theoretical grounds [38]. An example of an ArUco 6×6 marker is presented in Figure 2, which shows its geometrical parameters (i.e., corners and reference system). To identify a marker in each captured frame, multiple steps are required [38], [43]. Firstly, a local-adaptive threshold contour segmentation is performed [56]. Then, contour extraction and a polygonal approximation are applied to keep the enclosing rectangular borders and remove irrelevant information [57]. Potential perspective projections are compensated using a homography transformation. The resulting image is binarized and divided into a regular grid, where each element is assigned 0 or 1, depending on the preponderant value of the corresponding pixels, as shown in Figure 3 [58]. ArUco markers are usually created in groups, to ensure their geometric diversity and avoid misleading detection. For this reason, a further filter is generally applied to the image to determine potential matches between the recognized marker and the marker dictionary in use [43], although manual corrections remain possible by modifying the threshold algorithm parameters. Finally, through the identification of the four corners, the spatial coordinates of each recognized marker are estimated with respect to the camera [38], [56].

Figure 2. ArUco 6×6 marker: corners and centre coordinates.
Figure 3. ArUco 6×6 marker: pixel values (example).

In this study, the implementation of the ArUco library is exploited to measure the spatial-temporal coordinates of the centre of the marker, \(C(x(t), y(t))\), recorded in a video. Firstly, the acquired data are pre-processed to improve the marker detection, which can be compromised by several factors (e.g., too large a distance between camera and marker, or blurred images) [43]. Each frame is then subjected to sharpening and dilation filters. The sharpening filter is used to reduce apparent blurring in each frame by means of a 2D spatial convolution [59]:

\[ (I * k)(x, y) = \sum_{i=-\infty}^{\infty} \sum_{j=-\infty}^{\infty} k(i, j)\, I(x - i,\, y - j) \,, \qquad (7) \]

where \(I(x, y)\) is the original frame, \(k(x, y)\) is the kernel, \((x, y)\) are the pixel coordinates and \((i, j)\) are the coordinates of the elements in the kernel matrix. A dilation filter is then applied as a morphological operation, used for removing noise, isolating individual elements and merging disparate elements in an image [59]. This filter is also based on a convolution operation [60]. With \(b(x, y)\) the structuring function, the grey-scale dilation of \(I\) by \(b\) is given by [59]:

\[ (I \oplus b)(x, y) = \max_{(i,j) \in b} \left[ I(x + i,\, y + j) + b(i, j) \right] , \qquad (8) \]

and the grey-scale erosion of \(I\) by \(b\) is given by:

\[ (I \ominus b)(x, y) = \min_{(i,j) \in b} \left[ I(x + i,\, y + j) - b(i, j) \right] . \qquad (9) \]

The marker detection is based on the identification of its four corners in each captured frame (see Figure 2). From the corners, the spatial coordinates of the centre of the marker \((x_c, y_c)\) are evaluated frame-by-frame during the acquisition:

\[ C = (x_c, y_c) = G \cdot \left( \frac{1}{4} \sum_{r=1}^{4} |x_r| \,, \; \frac{1}{4} \sum_{r=1}^{4} |y_r| \right) , \qquad (10) \]

where \((x_r, y_r)\) are the coordinates of the \(r\)-th vertex and \(G\) is the calibration factor from pixel units to SI units, defined as the ratio between the side length of the physical marker in SI units and the average of the four side lengths (in pixels) of the captured marker in the field of view. Once the coordinates of the centre of the marker \((x_c, y_c)\) in the time domain are obtained, frequency-domain analysis is performed using the discrete Fourier transform (DFT) [61]. The DFT of an \(N\)-point time series \(p\) is defined as:

\[ P(\omega) = \sum_{n=0}^{N-1} p_n \, \mathrm{e}^{-\mathrm{i}\, n \omega / N} \,. \qquad (11) \]

Considering the spatial properties of image-based analysis, the DFT of each component of the displacement \(C(x_c(t), y_c(t))\) and of the input force \(f(t)\) is obtained:

\[ C(X(\omega), Y(\omega)) = \left( \mathrm{DFT}(x(t)),\; \mathrm{DFT}(y(t)) \right) , \qquad (12) \]

\[ F(\omega) = \mathrm{DFT}(f(t)) \,. \qquad (13) \]

The cross-spectra (14) and the auto-spectrum (15) are computed [61]:

\[ \left( S_{fx}(\omega),\, S_{fy}(\omega) \right) = \left( \frac{1}{T}\left[ X(\omega)^{*} F(\omega) \right],\; \frac{1}{T}\left[ Y(\omega)^{*} F(\omega) \right] \right) , \qquad (14) \]

\[ S_{ff}(\omega) = \frac{1}{T}\left[ F(\omega)^{*} F(\omega) \right] . \qquad (15) \]

Finally, the compliance frequency response functions (FRFs) along the x-axis and y-axis can be obtained using the \(H_1\) estimator [61]:

\[ \left( H_{1x}(\omega),\, H_{1y}(\omega) \right) = \left( \frac{S_{fx}(\omega)}{S_{ff}(\omega)},\; \frac{S_{fy}(\omega)}{S_{ff}(\omega)} \right) . \qquad (16) \]
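In practice, the frame-by-frame centre measurement of (10) maps almost directly onto the OpenCV ArUco module. The sketch below uses the legacy cv2.aruco.detectMarkers API of opencv-contrib-python (before 4.7; newer releases wrap the same steps in cv2.aruco.ArucoDetector); the video file name and the physical marker size are assumptions, not values from the paper.

```python
# Minimal sketch of the centre measurement of (10) with the OpenCV ArUco module.
# Legacy cv2.aruco.detectMarkers API assumed (opencv-contrib-python < 4.7).
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)  # 6x6-bit set
params = cv2.aruco.DetectorParameters_create()
marker_side_mm = 50.0                        # physical side length (assumption)

cap = cv2.VideoCapture("marker_video.mp4")   # hypothetical acquisition
centres = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary, parameters=params)
    if ids is None:
        centres.append((np.nan, np.nan))     # marker lost in this frame
        continue
    c = corners[0][0]                        # 4x2 array of corner pixel coordinates
    # Calibration factor G: physical side length over the mean captured side length
    sides = np.linalg.norm(c - np.roll(c, -1, axis=0), axis=1)
    G = marker_side_mm / sides.mean()
    centres.append(tuple(G * c.mean(axis=0)))  # centre C = (x_c, y_c), eq. (10)

centres = np.array(centres)                  # time series C(x_c(t), y_c(t)) in mm
```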
On the other hand, the accelerance frequency response function obtained through the impact-hammer procedure is given, using the \(H_1\) estimator, by [61]:

\[ H_1 = \frac{S_{fa}(\omega)}{S_{ff}(\omega)} \,, \qquad (17) \]

where \(S_{fa}\) and \(S_{ff}\) are the cross- and auto-spectra of the output acceleration and the input force, respectively.

3. Experimental methodology

As addressed in Sec. 2, two different measurement approaches were used and, consequently, two experimental setups were built. The tested structure is presented in Figure 4, where the analysed areas and markers are shown in detail: thermoelasticity was applied in the T1 and T2 areas, while M1-2 is the ID of the detected marker whose results are presented in this research. As shown in Figure 4, 13 markers were mounted on the structure, and multiple measurements, of single markers and of groups of them, were performed. For the sake of clarity, only results related to the marker at the tip of the structure (i.e., M1-2) are presented. Moreover, the structure was tested in two different boundary configurations, which are schematized in Figure 5. In this manuscript, the configuration with one fixed constraint is referred to as the C1 configuration and the configuration with two fixed constraints as the C2 configuration. The experiments are combined as follows: the T1 area was analysed in the C1 configuration, the T2 area in the C2 configuration, and the M1-2 marker was detected in both the C1 and C2 configurations.

Figure 4. Tested CFRP structure: T1) first TSA analysed area; T2) second TSA analysed area; M1-2) detected ArUco marker.
Figure 5. Boundary configurations of the structure: C1) single fixed constraint; C2) double fixed constraint.

3.1. Thermoelasticity

The thermoelastic stress analysis was performed as explained in Sec. 2.2. Firstly, the yellow paint was removed and a proper matt black wrapper paint was applied for surface conditioning, on theoretical grounds [55]: the emissivity of the surface was thereby increased and homogenized, and appreciable results were thus obtained. Then, harmonic loads at 1.1 Hz and 2.7 Hz were applied to the structure in the T1 and T2 analysed areas, respectively, and the temperature changes of the surface were measured, in each test, for 60 seconds with a FLIR A6751sc MWIR cooled thermal camera, operating at a sampling rate of 125 Hz and a resolution of 640 × 512 pixels.

3.2. ArUco-based resonant frequencies identification

In order to determine the natural frequencies of the tested structure, the classic impact-hammer procedure was also performed to further validate the results obtained through the image-based analysis. To this end, a PCB 086D20 hammer was used for the broadband input excitation, together with a uniaxial PCB 352C34 amplified accelerometer and a PicoScope data acquisition system. The accelerometer was positioned on the tip of the structure, along the y-axis of the system. Input and output data were acquired at 1 kHz for 50 seconds. Simultaneously with the impact-hammer tests, a Canon EOS 7D camera, mounting a 24-70 mm optic (f/2.8), was used to measure the position of the framed markers, with a spatial resolution of 1920 × 1080 pixels and a sampling frequency of 30 Hz as acquisition parameters. In this experiment, a 6×6-bit ArUco marker dictionary was used.
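From the acquired force and response records, the H1 estimates of (14)-(17) can be sketched in a few lines. The sketch below uses single-block spectra for brevity and assumes the two signals share a common time base, which for the setup of Sec. 3.2 would require resampling (camera at 30 Hz, force at 1 kHz); it is illustrative, not the authors' processing chain.

```python
# Minimal sketch of the H1 estimator of eqs. (14)-(17) for one record of input
# force and one response signal (marker displacement for the compliance FRF,
# or acceleration for the accelerance FRF).
import numpy as np

def h1_frf(response, force, fs):
    """H1 = S_f,resp / S_ff, following eqs. (14)-(17)."""
    T = len(response) / fs                 # record duration
    R = np.fft.rfft(response)              # DFT of the response, eq. (11)
    F = np.fft.rfft(force)                 # DFT of the input force, eq. (13)
    S_fr = np.conj(R) * F / T              # cross-spectrum, eq. (14)
    S_ff = np.conj(F) * F / T              # auto-spectrum, eq. (15)
    freq = np.fft.rfftfreq(len(response), d=1.0 / fs)
    return freq, S_fr / S_ff               # H1 estimator, eqs. (16)-(17)

# Hypothetical usage with the marker-centre series of the previous sketch:
# freq, H1x = h1_frf(centres[:, 0], force_resampled, fs=30.0)
```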
4. Results

4.1. Stress concentration factor

The stress concentration factor was evaluated in two critical areas, T1 and T2, whose thermal acquisitions and finite element models are shown in Figure 6 and Figure 7, respectively. The lock-in analysis was performed by means of (3), and the magnitude of the thermoelastic signal is shown in Figure 8 and Figure 9, where it is compared with the data obtained from the numerical simulation in the same corresponding linear region of interest. The stress concentration factors \(K_f\), as described by (6), were evaluated from the profiles in Figure 8 and Figure 9; the values obtained from the experiments and from the FEM model are reported in Table 1. The obtained results are in line with expectations. In fact, thermoelasticity usually underestimates the stress slightly, for theoretical reasons, but, principally, the actual stresses measured on the structure are presumed to be lower than the numerical values for design reasons.

Figure 6. T1 area stress profile: (a) experimental; (b) numerical.
Figure 7. T2 area stress profile: (a) experimental; (b) numerical.
Figure 8. T1 area stress and temperature profiles.
Figure 9. T2 area stress and temperature profiles.

Table 1. Stress concentration factor results.

              T1 area   T2 area
Experiments   1.19      1.21
Numerical     2.40      1.40

4.2. ArUco-based resonant frequencies identification

As addressed in Sec. 3.2, the marker position in the time domain was first collected during the modal testing procedure (see Figure 10) through (10), and the FRF was then obtained using (16). The frequency response functions were reconstructed using the least-squares complex exponential (LSCE) algorithm [62]. The FRFs obtained by means of the two experimental methods are shown in Figure 11 and Figure 12 for the C1 and C2 boundary configurations, respectively. Although the amplitudes differ (the marker gives a compliance FRF while the impact hammer gives an accelerance FRF), the natural frequencies identified along the x-axis are fully comparable. Furthermore, a comparison with the numerical simulation was performed; it is shown, in terms of normalized frequency, in Figure 13 and Figure 14 for the C1 and C2 boundary configurations, respectively. The results confirm a well-founded numerical model. In fact, the comparison of the experimental resonant frequencies, obtained using the impact hammer and the ArUco markers, with the numerical results establishes high reliability in both boundary condition configurations, for all the modes in the considered frequency range.

5. Conclusions

The validation of numerical models through experimental procedures is a mandatory step for several applications. In this research, a non-contact multi-instrumental approach was proposed to validate the numerical model of the inspection robot of the new San Giorgio's bridge on the Polcevera river. In particular, the thermoelastic technique was used to measure stress-concentration factors in two areas and, moreover, an innovative methodology, involving the detection of ArUco fiducial markers, was implemented to determine the resonant frequencies of the CFRP structure by estimating its frequency response function. The impact-hammer procedure was also performed to validate the results. The proposed approach gave excellent results and can therefore be used for testing large structures. Further investigations on the material properties and on the dynamics of the inspection robot are planned as an extension of this research.
Acknowledgement

The authors acknowledge Camozzi Group and SDA Engineering for allowing and supporting this research through the collaboration with the Italian Institute of Technology. Moreover, Ubisive, Fincantieri and PerGenova participated in this research with the University of Ancona (UNIVPM).

Figure 10. Modal testing procedure: input impulse and damped output signals from accelerometer and ArUco marker.
Figure 11. Frequency response function using ArUco markers and impact hammer: C1 boundary configuration.
Figure 12. Frequency response function using ArUco markers and impact hammer: C2 boundary configuration.
Figure 13. Natural frequencies comparison: C1 boundary configuration.
Figure 14. Natural frequencies comparison: C2 boundary configuration.

References

[1] X. D. Li, N. E. Wiberg, Structural dynamic analysis by a time-discontinuous Galerkin finite element method, Int. J. Numer. Methods Eng., vol. 39, no. 12, 1996, pp. 2131–2152. DOI: 10.1002/(SICI)1097-0207(19960630)39:12<2131::AID-NME947>3.0.CO;2-Z
[2] F. Cianetti, G. Morettini, M. Palmieri, G. Zucca, Virtual qualification of aircraft parts: test simulation or acceptable evidence?, Procedia Struct. Integr., vol. 24, 2019, pp. 526–540. DOI: 10.1016/j.prostr.2020.02.047
[3] R. C. Juvinall, K. M. Marshek, Fundamentals of machine component design, vol. 83, John Wiley & Sons, New York, 2006.
[4] A. Lavatelli, E. Zappa, Uncertainty in vision based modal analysis: probabilistic studies and experimental validation, Acta IMEKO, vol. 5, no. 4, 2016, pp. 37–48. DOI: 10.21014/acta_imeko.v5i4.426
[5] A. C. Jones, R. K. Wilcox, Finite element analysis of the spine: towards a framework of verification, validation and sensitivity analysis, Med. Eng. Phys., vol. 30, no. 10, 2008, pp. 1287–1304. DOI: 10.1016/j.medengphy.2008.09.006
[6] G. Morettini, C. Braccesi, F. Cianetti, Experimental multiaxial fatigue tests realized with newly developed geometry specimens, Fatigue Fract. Eng. Mater. Struct., vol. 42, no. 4, 2019, pp. 827–837. DOI: 10.1111/ffe.12954
[7] E. O. Doebelin, D. N. Manik, Measurement systems: application and design, McGraw-Hill College, 2007, ISBN 978-0072922011.
[8] A. Schäfer, High-precision amplifiers for strain gauge based transducers - first time realized in compact size, Acta IMEKO, vol. 6, no. 4, 2017, pp. 31–36. DOI: 10.21014/acta_imeko.v6i4.477
[9] Z. Lai, Y. Xiaoxiang, Y. Jinhui, Vibration analysis of the oscillation support of column load cells in low speed axle-group weigh-in-motion system, Acta IMEKO, vol. 9, no. 5, 2020, pp. 63–69. DOI: 10.21014/acta_imeko.v9i5.940
[10] L. Capponi, M. Česnik, J. Slavič, F. Cianetti, M. Boltežar, Non-stationarity index in vibration fatigue: theoretical and experimental research, Int. J. Fatigue, vol. 104, 2017, pp. 221–230. DOI: 10.1016/j.ijfatigue.2017.07.020
[11] W. Thomson, On the dynamical theory of heat, Transactions of the Royal Society of Edinburgh, vol. 20, no. 2, 1853, pp. 261–288. DOI: 10.1017/S0080456800033172
[12] W. Weber, Über die specifische Wärme fester Körper, insbesondere der Metalle, Ann. Phys., vol. 96, no. 10, 1830, pp. 177–213.
[13] J. Qiu, C. Pei, H. Liu, Z. Chen, Quantitative evaluation of surface crack depth with laser spot thermography, Int. J. Fatigue, vol. 101, 2017, pp. 80–85. DOI: 10.1016/j.ijfatigue.2017.02.027
[14] X. Guo, Y. Mao, Defect identification based on parameter estimation of histogram in ultrasonic IR thermography, Mech. Syst. Signal Process., vol. 58, 2015, pp. 218–227. DOI: 10.1016/j.ymssp.2014.12.011
[15] G. Allevi, L. Capponi, P. Castellini, P. Chiariotti, F. Docchio, F. Freni, R. Marsili, M. Martarelli, R. Montanini, S. Pasinetti, A. Quattrocchi, R. Rossetti, G. Rossi, G. Sansoni, E. P. Tomasini, Investigating additive manufactured lattice structures: a multi-instrument approach, IEEE Trans. Instrum. Meas., 2019. DOI: 10.1109/tim.2019.2959293
[16] F. Cannella, A. Garinei, M. D'Imperio, G. Rossi, A novel method for the design of prostheses based on thermoelastic stress analysis and finite element analysis, J. Mech. Med. Biol., vol. 14, no. 05, 2014, p. 1450064. DOI: 10.1142/S021951941450064X
[17] G. Fargione, A. Geraci, G. La Rosa, A. Risitano, Rapid determination of the fatigue curve by the thermographic method, Int. J. Fatigue, vol. 24, no. 1, 2002, pp. 11–19. DOI: 10.1016/S0142-1123(01)00107-4
[18] X. D. Li, H. Zhang, D. L. Wu, X. Liu, J. Y. Liu, Adopting lock-in infrared thermography technique for rapid determination of fatigue limit of aluminum alloy riveted component and affection to determined result caused by initial stress, Int. J. Fatigue, vol. 36, no. 1, 2012, pp. 18–23. DOI: 10.1016/j.ijfatigue.2011.09.005
[19] L. Capponi, J. Slavič, G. Rossi, M. Boltežar, Thermoelasticity-based modal damage identification, Int. J. Fatigue, vol. 137, 2020, p. 105661. DOI: 10.1016/j.ijfatigue.2020.105661
[20] R. Marsili, G. Rossi, TSA infrared measurements for stress distribution on car elements, J. Sensors Sens. Syst., vol. 6, no. 2, 2017, p. 361. DOI: 10.5194/jsss-6-361-2017
[21] M. D'Imperio, D. Ludovico, C. Pizzamiglio, C. Canali, D. Caldwell, F. Cannella, FLEGX: a bioinspired design for a jumping humanoid leg, 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017, pp. 3977–3982. DOI: 10.1109/iros.2017.8206251
[22] J. Schijve, Fatigue of structures and materials, Springer Science & Business Media, 2001, ISBN 978-1402068072.
[23] D. Benasciutti, F. Sherratt, A. Cristofori, Basic principles of spectral multi-axial fatigue analysis, Procedia Eng., vol. 101, 2015, pp. 34–42. DOI: 10.1016/j.proeng.2015.02.006
[24] P. Wolfsteiner, A. Trapp, Fatigue life due to non-Gaussian excitation - an analysis of the fatigue damage spectrum using higher order spectra, Int. J. Fatigue, vol. 127, 2019, pp. 203–216. DOI: 10.1016/j.ijfatigue.2019.06.005
[25] G. Morettini, C. Braccesi, F. Cianetti, S. M. J. Razavi, K. Solberg, L. Capponi, Collection of experimental data for multiaxial fatigue criteria verification, Fatigue Fract. Eng. Mater. Struct., vol. 43, no. 1, 2020, pp. 162–174. DOI: 10.1111/ffe.13101
[26] D. J. Ewins, Modal testing: theory and practice, Hertfordshire, UK, 1986, ISBN 978-0863802188.
[27] M. Mršnik, J. Slavič, M. Boltežar, Vibration fatigue using modal decomposition, Mech. Syst. Signal Process., vol. 98, 2018, pp. 548–556. DOI: 10.1016/j.ymssp.2017.03.052
[28] B. D. Lucas, T. Kanade, An iterative image registration technique with an application to stereo vision, Proc. DARPA Image Underst. Work., 1981, pp. 121–130.
[29] J. Javh, J. Slavič, M. Boltežar, Experimental modal analysis on full-field DSLR camera footage using spectral optical flow imaging, J. Sound Vib., vol. 434, 2018, pp. 213–220. DOI: 10.1016/j.jsv.2018.07.046
[30] D. Gorjup, J. Slavič, M. Boltežar, Frequency domain triangulation for full-field 3D operating-deflection-shape identification, Mech. Syst. Signal Process., vol. 133, 2019, p. 106287. DOI: 10.1016/j.ymssp.2019.106287
[31] T. Tocci, L. Capponi, R. Marsili, G. Rossi, J. Pirisinu, Suction system vapour velocity map estimation through SIFT-based algorithm, Journal of Physics: Conference Series, 2020, vol. 1589, no. 1, p. 012004. DOI: 10.1088/1742-6596/1589/1/012004
[32] G. Allevi, L. Casacanditella, L. Capponi, R. Marsili, G. Rossi, Census transform based optical flow for motion detection during different sinusoidal brightness variations, Journal of Physics: Conference Series, 2018, vol. 1149, no. 1, p. 012032. DOI: 10.1088/1742-6596/1149/1/012032
[33] H. Bay, T. Tuytelaars, L. Van Gool, SURF: speeded up robust features, European Conference on Computer Vision, 2006, pp. 404–417. DOI: 10.1007/11744023_32
[34] T. Khuc, F. Catbas, Computer vision-based displacement and vibration monitoring without using physical target on structures, Struct. Infrastruct. Eng., vol. 13, no. 4, 2017, pp. 505–516. DOI: 10.1080/15732479.2016.1164729
[35] C. Dong, O. Celik, F. Catbas, Marker-free monitoring of the grandstand structures and modal identification using computer vision methods, Struct. Heal. Monit., vol. 18, no. 5–6, 2019, pp. 1491–1509. DOI: 10.1177/1475921718806895
[36] F. Lunghi, A. Pavese, S. Peloso, I. Lanese, D. Silvestri, Computer vision system for monitoring in dynamic structural testing, in: Role of seismic testing facilities in performance-based earthquake engineering, Springer, 2012, pp. 159–176. DOI: 10.1007/978-94-007-1977-4
[37] S. W. Park, H. S. Park, J. H. Kim, H. Adeli, 3D displacement measurement model for health monitoring of structures using a motion capture system, Measurement, vol. 59, 2015, pp. 352–362. DOI: 10.1016/j.measurement.2014.09.063
[38] F. J. Romero-Ramirez, R. Muñoz-Salinas, R. Medina-Carnicer, Speeded up detection of squared fiducial markers, Image Vis. Comput., vol. 76, 2018, pp. 38–47. DOI: 10.1016/j.imavis.2018.05.004
[39] H. Kato, M. Billinghurst, Marker tracking and HMD calibration for a video-based augmented reality conferencing system, Proceedings 2nd IEEE and ACM International Workshop on Augmented Reality (IWAR'99), 1999, pp. 85–94. DOI: 10.1109/iwar.1999.803809
[40] M. Fiala, Designing highly reliable fiducial markers, IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 7, 2009, pp. 1317–1324. DOI: 10.1109/tpami.2009.146
[41] D. Flohr, J. Fischer, A lightweight ID-based extension for marker tracking systems, The Eurographics Association, 2007. DOI: 10.2312/PE/VE2007Short/059-064
[42] E. Olson, AprilTag: a robust and flexible visual fiducial system, 2011 IEEE International Conference on Robotics and Automation, 2011, pp. 3400–3407. DOI: 10.1109/icra.2011.5979561
[43] S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, M. J. Marín-Jiménez, Automatic generation and detection of highly reliable fiducial markers under occlusion, Pattern Recognit., vol. 47, no. 6, 2014, pp. 2280–2292. DOI: 10.1016/j.patcog.2014.01.005
[44] M. F. Sani, G. Karimian, Automatic navigation and landing of an indoor AR drone quadrotor using ArUco marker and inertial sensors, 2017 International Conference on Computer and Drone Applications (IConDA), 2017, pp. 102–107. DOI: 10.1109/iconda.2017.8270408
[45] I. Lebedev, A. Erashov, A. Shabanova, Accurate autonomous UAV landing using vision-based detection of ArUco marker, International Conference on Interactive Collaborative Robotics, 2020, pp. 179–188. DOI: 10.1007/978-3-030-60337-3_18
[46] N. Elangovan, A. Dwivedi, L. Gerez, C. Chang, M. Liarokapis, Employing IMU and ArUco marker based tracking to decode the contact forces exerted by adaptive hands, 2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids), 2019, pp. 525–530. DOI: 10.1109/humanoids43949.2019.9035051
[47] M. Abdelbarr, Y. L. Chen, M. R. Jahanshahi, S. F. Masri, W. Shen, U. Qidwai, 3D dynamic displacement-field measurement for structural health monitoring using inexpensive RGB-D based sensor, Smart Mater. Struct., vol. 26, no. 12, 2017, p. 125016. DOI: 10.1088/1361-665x/aa9450
[48] T. Tocci, L. Capponi, G. Rossi, ArUco marker-based displacement measurement technique: uncertainty analysis, Eng. Res. Express, 2021. DOI: 10.1088/2631-8695/ac1fc7
[49] W. N. Sharpe, Springer handbook of experimental solid mechanics, Springer Science & Business Media, 2008. DOI: 10.1007/978-0-387-30877-7
[50] G. M. Carlomagno, P. G. Berardi, Unsteady thermotopography in non-destructive testing, Proc. 3rd Biannual Exchange, St. Louis, USA, 1976, vol. 24, p. 26.
[51] J. M. Dulieu-Barton, P. Stanley, Development and applications of thermoelastic stress analysis, J. Strain Anal. Eng. Des., vol. 33, no. 2, 1998, pp. 93–104.
[52] N. Harwood, W. M. Cummings, Calibration of the thermoelastic stress analysis technique under sinusoidal and random loading conditions, Strain, vol. 25, no. 3, 1989, pp. 101–108. DOI: 10.1111/j.1475-1305.1989.tb00701.x
[53] L. Capponi, Thermoelasticity-based analysis: collection of Python packages, 2020. DOI: 10.5281/zenodo.4043102
[54] R. Montanini, G. Rossi, D. Alizzio, L. Capponi, R. Marsili, A. Di Giacomo, T. Tocci, Structural characterization of complex lattice parts by means of optical non-contact measurements, 2020 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), 2020, pp. 1–6. DOI: 10.1109/i2mtc43012.2020.9128771
[55] N. Harwood, W. M. Cummings, Applications of thermoelastic stress analysis, Strain, vol. 22, no. 1, 1986, pp. 7–12. DOI: 10.1111/j.1475-1305.1986.tb00014.x
[56] S. Suzuki, Topological structural analysis of digitized binary images by border following, Comput. Vision Graph. Image Process., vol. 30, no. 1, 1985, pp. 32–46. DOI: 10.1016/0734-189X(85)90016-7
[57] D. H. Douglas, T. K. Peucker, Algorithms for the reduction of the number of points required to represent a digitized line or its caricature, Cartogr. Int. J. Geogr. Inf. Geovisualization, vol. 10, no. 2, 1973, pp. 112–122. DOI: 10.1002/9780470669488.ch2
[58] N. Otsu, A threshold selection method from gray-level histograms, IEEE Trans. Syst. Man Cybern., vol. 9, no. 1, 1979, pp. 62–66. DOI: 10.1109/tsmc.1979.4310076
[59] G. Bradski, A. Kaehler, Learning OpenCV: computer vision with the OpenCV library, O'Reilly Media, Inc., 2008, ISBN 978-0596516130.
[60] F. Yu, V. Koltun, Multi-scale context aggregation by dilated convolutions, arXiv preprint arXiv:1511.07122, 2015.
[61] K. Shin, J. Hammond, Fundamentals of signal processing for sound and vibration engineers, John Wiley & Sons, 2008, ISBN 978-0470511886.
[62] P. Mohanty, D. J. Rixen, Operational modal analysis in the presence of harmonic excitation, J. Sound Vib., vol. 270, no. 1–2, 2004, pp. 93–109. DOI: 10.1016/S0022-460X(03)00485-1

Position control for the MSL Kibble balance coil using a syringe pump

ACTA IMEKO, ISSN: 2221-870X, December 2022, Volume 11, Number 4, 1-7

Rebecca J. Hawke 1, Mark T. Clarkson 1
1 Measurement Standards Laboratory (MSL), Lower Hutt, New Zealand

Section: Research paper
Keywords: Kibble balance; pressure balance; position control; volume control
Citation: Rebecca J. Hawke, Mark T. Clarkson, Position control for the MSL Kibble balance coil using a syringe pump, Acta IMEKO, vol. 11, no. 4, article 16, December 2022, identifier: IMEKO-ACTA-11 (2022)-04-16
Section editor: Andy Knott, National Physical Laboratory, United Kingdom
Received July 11, 2022; in final form October 5, 2022; published December 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by the New Zealand government.
Corresponding author: Rebecca Hawke, e-mail: rebecca.hawke@measurement.govt.nz
Abstract: The position of the coil in a Kibble balance must be finely controlled. In weighing mode, the coil remains stationary in a location of constant magnetic field. In calibration mode, the coil is moved in the magnetic field to induce a voltage. The MSL Kibble balance design is based on a twin pressure balance, where the coil is attached to the piston of one of the pressure balances. Here we investigate how the piston (and therefore coil) position may be controlled through careful manipulation of the gas column under the piston. We demonstrate the use of a syringe pump as a programmable volume regulator which can provide fall rate compensation as well as controlled motion of the piston. We show that the damped harmonic oscillator response of the pressure balance must be considered when moving the coil. From this initial investigation, we discuss the implications for use in the MSL Kibble balance.

1. Introduction

Following the revision of the SI, Kibble balances around the world may now be used to realise the unit of mass [1]. In a Kibble balance, the weight of a mass is balanced by the electromagnetic force on a current-carrying coil of wire suspended in a magnetic field. At MSL we are developing a Kibble balance in which the coil is connected to the piston of pressure balance 1 (see Figure 1) in a twin pressure balance arrangement [2], [3]. The piston-cylinder unit of pressure balance 1 provides a repeatable axis for the motion of the coil in the magnetic field, and the twin pressure balance arrangement serves as a high-sensitivity force comparator [4].

Figure 1. Schematic of the MSL Kibble balance design based on a twin pressure balance. The coil connects to the piston of pressure balance 1 such that the coil and magnet are coaxial with the piston-cylinder unit. Pressure balance 2 provides a reference pressure for a differential pressure sensor to determine changes in balance forces.

In a Kibble balance, the position of the coil must be precisely controlled in both weighing and calibration modes. In weighing mode, the coil should remain stationary at a set position while the current is adjusted to maintain a force balance. In the MSL Kibble balance, stability in position to within 1 µm would correspond to an uncertainty in realised mass of 3.5 parts in 10⁹ [5]. In calibration mode, the coil must be moved such that a measurable voltage is induced, which typically requires velocities between 1.3 mm s⁻¹ and 3 mm s⁻¹ [6]. In the MSL Kibble balance, control of the vertical position of the coil equates to control of the piston position in pressure balance 1. However, in a pressure balance the piston naturally falls as gas leaks through the annular gap between piston and cylinder. Fall rate compensation is therefore necessary in both weighing and calibration modes in the absence of mechanical controls such as arresting stops or a direct motor drive. To assist in controlling the coil position in the MSL Kibble balance, we propose careful manipulation of the gas column under the piston in pressure balance 1. The pressure balance maintains a constant pressure, so our options are to adjust the gas volume (e.g. with a mass flow controller) or to shift the gas column in space (e.g. with a volume regulator).
To test this second approach to coil position control, we investigate the use of a syringe pump as an automated volume regulator. The layout of this paper is as follows. In Section 2 we describe the experimental apparatus, and in Section 3 we propose a theoretical model for the system. Results for fall rate compensation, conventional constant-velocity travel, and an oscillatory motion are presented in Section 4. Finally, in Section 5 we discuss the potential for this technique to be used in the MSL Kibble balance.

2. Experimental apparatus

The experimental apparatus is illustrated in Figure 2 and was similar to that used in [8]. The pressure balance was a pneumatic DHI/Fluke 50 kPa/kg piston-cylinder module with an effective area of 196 mm². The medium was zero-grade nitrogen gas, and we adjusted the load to give a working pressure near 100 kPa absolute. We used a manual volume controller to set the initial height of the piston. The pressure balance was operated in a vacuum chamber evacuated to a pressure of around 0.1 Pa. The ambient temperature outside the chamber was in the range 21-23 °C during measurements.

To control the vertical position of the piston, we used a custom 'direct drive' model of the Cetoni Nemesys S syringe pump, which has inbuilt position encoding. Cetoni high-precision glass syringes of 1 ml, 5 ml and/or 25 ml capacity were connected to the pressure balance via 1 m of 1/8" flexible PTFE tubing (1.6 mm ID) and a minimal length of 1/4" Swagelok stainless steel tubing and fittings. We estimate the smallest volume of the gas column was ~30 ml, including the cavity within the piston. To enable rapid, independent tracking of the syringe volume, we monitored the position of the syringe plunger using an IDS uEye USB camera with a microscope lens attached. We converted the image pixels to volume in ml for each syringe using the encoder values at eight positions spanning the range of travel. To determine the phase shift, or delay, between syringe plunger motion and piston motion, we used hardware triggering of the image capture and piston position measurements. However, triggering increased the measurement noise, so the data shown here were captured using the camera's free-run mode. We measured the piston position using both a capacitance sensor and a linear variable differential transducer (LVDT), with hardware triggering at either 20 ms or 50 ms intervals. We determined the calibration curve for the capacitance sensor using a dial gauge and gauge blocks and transferred the calibrated position to the LVDT readings. The results presented here use the LVDT readings.

3. Model

A gas pressure balance comprises a loaded piston floating on a volume of gas, and it behaves as a damped harmonic oscillator [8], [9]. A pressure balance connected to a syringe has a mechanical analogue in the accelerometer (or seismometer). Figure 3 illustrates this model, where the loaded piston 'mass' is attached to a syringe plunger 'platform' by a gas pressure 'spring', with 'dashpot' damping. Moving the platform moves the mass in a linear motion with coupled oscillations described by the equation of motion:

\[ -\ddot{z}' = \ddot{z}_r + \frac{c}{m}\,\dot{z}_r + \omega_0^2\, z_r \,, \qquad (1) \]

where \(z_r\) is the relative distance between the mass \(m\) and the platform, \(z'\) is the displacement of the platform, \(c\) is the damping coefficient, and \(\omega_0\) is the resonant frequency. Dots indicate derivatives with respect to time. In a pressure balance, the resonant frequency depends on the volume of the gas column under the piston.
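To make the model concrete, the following is a minimal numerical sketch of eq. (1) in free decay (platform at rest, so the driving term vanishes), showing how the resonant frequency and the ratio c/m can be read off the damped natural oscillations discussed below. The parameter values are assumptions chosen to resemble this system (f0 near 1 Hz, Q near 14, as reported later in the paper), not measured inputs.

```python
# Minimal sketch of eq. (1) in free decay; illustrative parameter values only.
import numpy as np
from scipy.integrate import solve_ivp

f0 = 1.0                     # assumed resonant frequency, Hz
omega0 = 2 * np.pi * f0
c_over_m = omega0 / 14       # assumed damping, so that Q = (m/c) * omega0 = 14

def free_decay(t, y):
    zr, zr_dot = y           # state: relative position and velocity
    return [zr_dot, -c_over_m * zr_dot - omega0**2 * zr]

sol = solve_ivp(free_decay, (0.0, 10.0), [1e-3, 0.0], max_step=1e-3)
zr, t = sol.y[0], sol.t

# Successive positive peaks: spacing gives the period, log-decrement gives c/m
peaks = np.where((zr[1:-1] > zr[:-2]) & (zr[1:-1] > zr[2:]))[0] + 1
period = np.diff(t[peaks]).mean()
delta = np.log(zr[peaks[0]] / zr[peaks[1]])   # log-decrement over one period
print("f0 ~ %.3f Hz, c/m ~ %.3f 1/s" % (1 / period, 2 * delta / period))
```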
In the experimental configuration used here, the resonant frequency can be varied from ~0.5 Hz to 1.2 Hz. Both the resonant frequency and the ratio \(c/m\) may be obtained from the damped natural oscillations of the system.

Figure 2. Schematic of the experimental apparatus for piston position control using a syringe in a syringe pump as a volume regulator. Floating elements are coloured orange. Not shown are the drive mechanism for starting piston rotation (see [7]), the support for the LVDT, and the vacuum chamber surrounding the pressure balance.

Figure 3. Accelerometer model of a pressure balance connected to a syringe in a syringe pump. The loaded piston 'mass' is attached to a syringe plunger 'platform' by a gas pressure 'spring', with 'dashpot' damping.

This system can be driven with a sinusoidal motion of the platform. For a driving displacement \(z'(t) = A_0 \cos(\omega t)\) with amplitude \(A_0\) and frequency \(\omega\), the equation of motion has a solution of the form:

\[ z_r(t) = A_{el} \cos(\omega t) + A_{inel} \sin(\omega t) \,. \qquad (2) \]

Solving for the coefficients gives the elastic coefficient,

\[ A_{el} = \frac{A_0\, \omega^2 (\omega_0^2 - \omega^2)}{(\omega_0^2 - \omega^2)^2 + \left(\frac{c}{m}\omega\right)^2} \,, \qquad (3) \]

and the inelastic coefficient,

\[ A_{inel} = \frac{A_0\, \frac{c}{m}\, \omega^3}{(\omega_0^2 - \omega^2)^2 + \left(\frac{c}{m}\omega\right)^2} \,. \qquad (4) \]

The behaviour of the driven system can be understood by examining these coefficients at their limits. As the driving frequency \(\omega\) tends to 0, the amplitude of \(z_r\) tends to 0, and the motion of the mass, \(z\), follows the motion of the platform:

\[ z(t) = z'(t) + z_r(t) = A_0 \cos(\omega t) \,. \qquad (5) \]

As the driving frequency tends to infinity, \(A_{el}\) tends to \(-A_0\) and \(A_{inel}\) tends to 0, such that the mass remains stationary despite the motion of the platform:

\[ z(t) = z'(t) + z_r(t) = A_0 \cos(\omega t) - A_0 \cos(\omega t) = 0 \,. \qquad (6) \]

Between these limits, for \(c/m < \omega_0\) the amplitude of \(z_r\) reaches a maximum as the driving frequency approaches the resonant frequency. At resonance, \(A_{el}\) tends to 0 and the motion of the mass is approximately 90° behind the driving oscillation \(z'\):

\[ z(t) = z'(t) + z_r(t) = A_0 \cos(\omega_0 t) + A_0 Q \sin(\omega_0 t) \,, \qquad (7) \]

where \(Q = \frac{m}{c}\omega_0\) is the quality factor of the damped natural oscillations of the system.

4. Results

4.1. Fall rate compensation

When the balance is used as a Kibble balance in weighing mode, over the course of several mass-on, mass-off measurements the natural fall rate of the piston of around 1 µm s⁻¹ would result in a significant change in piston position. For the lowest uncertainty, however, the weighing position should be kept constant between loadings. Here we consider the usefulness of a syringe pump for maintaining a steady piston position by providing a very low flow to compensate for the natural fall of the piston.

In Figure 4 we show a typical fall rate compensation test using a constant flow rate of 0.012 ccm (ccm = cm³ min⁻¹). Initially, while the syringe volume is kept constant, the piston falls at an average rate of 1.004 µm s⁻¹ over 5 minutes. The syringe plunger is then moved slowly, shifting the column of gas towards the piston and causing a slight rise of 0.015 µm s⁻¹ over the ten minutes of applied flow. When the syringe plunger motion is stopped, the piston returns to falling, this time at 0.986 µm s⁻¹ over 5 minutes. This fall compensation resulted in a rise in the piston position of 8.9 µm over ten minutes.
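The size of the compensation flow follows directly from the geometry: the required volume flow is the piston fall rate multiplied by the effective area of the piston-cylinder unit. The back-of-envelope sketch below (not from the paper's software) reproduces the ~0.012 ccm figure from the values quoted in Sections 2 and 4.1.

```python
# Back-of-envelope check: compensation flow = fall rate x effective area.
A_eff = 196e-6          # effective area, m^2 (Sec. 2)
fall_rate = 1.0e-6      # natural fall rate, m/s (~1 um/s, Sec. 4.1)

flow = A_eff * fall_rate            # required volume flow, m^3/s
flow_ccm = flow * 1e6 * 60          # convert to cm^3/min
print("required compensation flow ~ %.3f ccm" % flow_ccm)   # ~0.012 ccm, as used
```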
Over several fall rate compensation tests we observed some variation in the fall rate of the piston. Within a test, the fall rate varied before and after fall compensation by up to 0.06 µm s⁻¹, and between tests the variation was at most 0.28 µm s⁻¹. During fall compensation with the same nominal flow rate of 0.012 ccm at the syringe plunger, the overall change in piston position varied between a fall of 11 µm over ten minutes, when the initial fall rate was 1.13 µm s⁻¹, and a rise of 80 µm over the same time, when the initial fall rate was 0.86 µm s⁻¹. In the latter case, a subsequent test with a constant flow rate of 0.011 ccm resulted in a rise in the piston position of 2 µm after ten minutes.

The start and stop of the syringe plunger motion is a disturbance to the piston. However, this disturbance is very small, and no resonant behaviour is observable. We would also expect very little disturbance from the syringe plunger while it is in motion, as these flow rates are in the pulsation-free regime of the syringe pump when using a 1 ml syringe. Instead, we observe some random variation in piston position of up to 2 µm, which may be due to temperature fluctuations in the gas or to external disturbances, as such variations are also seen when the piston is falling without flow compensation.

Figure 4. Fall rate compensation using a flow rate of 0.012 ccm for ten minutes (grey shaded region). (a) The syringe plunger motion is linear in the shaded region. (b) The piston position is maintained with the 0.012 ccm flow, and falls at its natural fall rate outside the shaded region. (c) Residuals from a linear fit in each region.

4.2. Calibration mode - constant velocity

In calibration mode in most existing Kibble balances, the coil is typically moved through the weighing position at an approximately constant velocity of between 1.3 mm s⁻¹ and 3 mm s⁻¹ [6]. With the experimental configuration in Figure 2, these velocities are achievable with flow rates between 15 ccm and 35 ccm. However, unlike in the case of leak compensation, the abrupt start and stop of the syringe plunger during this rapid motion is a significant disturbance which causes a visible damped oscillator response. Figure 5 shows the shape of the travel for an intermediate flow rate of 20 ccm. Immediately after the travel region we see a damped harmonic oscillation; a similar oscillation is also superimposed on the travel region itself. The weighing position will be at approximately the half-way point of the available range of travel. Although the piston moves through the weighing position quickly, the damped harmonic oscillation causes fluctuations in travel speed, with some very slow motion and even backward motion for some higher flow rates. The number of oscillations in the travel region depends on the time taken to complete the travel and on the resonant frequency of the system. Here the system volume is as small as practicable, and the resonant frequency is around 1.2 Hz. Some improvement can be gained by reducing the resonant frequency of the system and timing the duration of the travel to match one period of the damped oscillation. However, the resulting instantaneous velocity is then smoothly varying rather than constant during the travel. Ideally, in constant-velocity calibration mode the motion would have as close to zero acceleration as possible in the travel region. We now present two controlled start techniques to suppress the harmonic oscillations.
In the first technique, the flow is ramped up to the target flow and then ramped down when it is time to stop the motion. While the ramp-down step is not strictly necessary for our goal of constant-velocity travel, the controlled deceleration reduces the damped oscillations after the travel. Minimising these oscillations maximises the available range of travel and minimises the settling time required before commencing the next traverse in the opposite direction. Figure 6 (a) illustrates the syringe plunger motion for a gentle ramp up to 20 ccm, taking 0.76 s and travelling ~0.13 ml per ramp. For a total volume change of 0.8 ml, the time spent at 20 ccm is 1.64 s, in which the syringe plunger travels 0.547 ml. Note that in this plot the syringe plunger is moving to refill the syringe. Figure 6 (b) shows the resulting motion of the piston, which is significantly more linear than without the accelerating and decelerating ramps for the same maximum flow rate. A three-section piecewise linear fit to the piston motion gives an average travel speed of 1.72 mm s⁻¹ in the middle section. Residuals of this fit are shown in Figure 6 (c). Some disturbance is evident during each of the ramps, but the motion during the main travel section is linear to within ~33 µm. We note that the size of the observed variations will be influenced by integration of the signal within the 20 ms sampling interval.

The second controlled start technique is known as the crane operator's trick, which exploits the natural oscillation period of the system [10]. Implementing this trick here involves three steps of applied flow:
1. an initial step at half the target flow, for half a period of oscillation, to accelerate the piston;
2. a steady step at the target flow, to generate constant-velocity travel; and
3. a final step at half the target flow, for half a period of oscillation, to decelerate the piston to stationary.

Figure 5. Shape of the piston travel for a 0.7 ml volume change at 20 ccm, which gives an effective velocity of 1.7 mm s⁻¹ in the travel region (shaded grey).

Figure 6. Constant velocity travel using gentle ramps over 0.76 s to a maximum flow of 20 ccm, for a 0.8 ml total volume change. (a) Shape of the syringe plunger motion (encoder values); the plunger is moving in the grey shaded region. (b) Resulting position of the piston (orange markers) and 3-piece linear fitting (grey line). (c) Residuals of the piecewise linear fitting in (b).

Figure 7. Constant velocity travel using the crane operator's trick with steps at 10 and 20 ccm for a 0.8 ml total volume change, with a period of 0.76 s. (a) Shape of the syringe plunger motion (encoder values); the plunger is moving in the grey shaded region. (b) Resulting position of the piston (orange markers) and 3-piece linear fitting (grey line). (c) Residuals of the fit in (b).

Figure 7 (a) shows these three distinct steps in the syringe plunger motion for a target flow of 20 ccm and a total volume change of 0.8 ml. Each section at 10 ccm takes 0.38 s and travels only 0.063 ml, leaving 0.673 ml to travel at 20 ccm over 2.02 s. The resulting motion of the piston is shown in Figure 7 (b), and residuals of a piecewise linear fit to the piston motion are shown in Figure 7 (c). The motion during the main travel section is linear to within ~29 µm, with an average travel speed of 1.73 mm s⁻¹.
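The timing of the three steps follows directly from the target flow and the oscillation period. The sketch below reproduces the step times and volumes quoted above; the command interface of the syringe pump itself is not shown.

```python
# Minimal sketch of the crane-operator's-trick flow schedule (half flow for half a
# period, full flow, half flow for half a period), using the values quoted above.
target_flow = 20.0      # ccm
period = 0.76           # natural oscillation period, s
total_volume = 0.8      # ml

half_step_time = period / 2.0                                # 0.38 s at half flow
half_step_vol = (target_flow / 2.0) * half_step_time / 60.0  # 0.063 ml per half step
main_vol = total_volume - 2.0 * half_step_vol                # 0.673 ml at full flow
main_time = main_vol / (target_flow / 60.0)                  # 2.02 s

schedule = [
    (target_flow / 2.0, half_step_time),   # step 1: accelerate the piston
    (target_flow, main_time),              # step 2: constant-velocity travel
    (target_flow / 2.0, half_step_time),   # step 3: decelerate to stationary
]
print(schedule)
```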
The disturbance due to starting and stopping the syringe plunger is slightly smaller than for the ramp technique, and both techniques give very small damped oscillations after the syringe plunger motion stops. To provide sufficient data for the determination of the ratio of the induced voltage to the coil velocity at the weighing position, the coil is usually moved at a constant velocity over a distance of at least 20 mm [6]. However, in the pressure balance that will be used for the MSL Kibble balance, the range of travel is restricted to at most 13 mm. This shorter range will therefore increase the number of repeats required to achieve the desired accuracy.

4.3. Calibration mode - oscillating velocity

As an alternative to the constant-velocity method, an oscillatory motion has been suggested for the Kibble balance calibration mode [6]. Sinusoidal oscillations with frequencies from 0.1 Hz to 5 Hz could be suitable, even with amplitudes as small as 1 mm. Oscillatory mode has been successfully implemented in the Ulusal Metroloji Enstitüsü (UME) Kibble balance by moving the magnet sinusoidally at a frequency of 0.5 Hz with a peak velocity of around 3 mm s⁻¹ [11]. Oscillatory mode has also been demonstrated in the 'PB1' and 'PB2' Planck-Balances, where the optimal oscillation is typically at 4 Hz with amplitudes of 4.5 µm and 20 µm, respectively [12], [13]. In the MSL Kibble balance, a sinusoidal oscillation would work with (rather than against) the harmonic oscillator character of the pressure balance. A suitable oscillation would have a frequency of around 1 Hz and an amplitude of ~1 mm [6].

Here we demonstrate driving such an oscillation via a sinusoidal displacement of the syringe plunger. The driving oscillation in syringe volume, of 0.02 ml amplitude, is shown in Figure 8 (a), along with the resulting piston oscillation in Figure 8 (b). For each dataset, we fit a single sinusoid to establish the frequency and amplitude. The frequency of the best-fitting sinusoid was 0.9993 Hz for the syringe plunger motion and 0.9989 Hz for the piston motion. The resonant frequency of the system was adjusted to be as close to 1 Hz as practicable and was determined to be 0.9999 Hz from the damped harmonic oscillation after the driving excitation was stopped. The amplitude of the piston oscillation was about 1.1 mm, with a peak velocity of around 7.6 mm s⁻¹. Assuming an induced voltage U = 1 V for v = 2 mm s⁻¹, we would expect this peak velocity to correspond to an induced voltage of almost 4 V when implemented in the MSL Kibble balance.

From our model we would expect the steady-state piston motion to lag the syringe plunger motion with a phase difference of 90°. This phase difference is evident here, where a reduction in syringe volume causes an upward motion of the piston. Residuals of the sinusoidal fits are shown in Figure 8 (c), scaled by the respective amplitudes. We observe periodic high-frequency noise in the syringe plunger motion, which is mostly filtered out by the pressure balance. However, the relative magnitude of the residual is transferred through to the piston motion, and some non-sinusoidal periodic structure is evident.

Our accelerometer model for this scenario predicts an amplification of the piston oscillation as the driving frequency approaches the resonant frequency. In Figure 9 we present the model amplification due to a sinusoidal driving displacement, along with the amplitudes measured for a range of driving frequencies.
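The model curve of Figure 9 can be sketched directly from the driven solution (2)-(4). In the sketch below, the piston amplitude is taken as the amplitude of z = z' + z_r (an assumption about how the curve is constructed, consistent with its limits of ~A0 at low frequency and ~Q·A0 at resonance); Q = 14 and f0 = 1.175 Hz are the best-fit values quoted below.

```python
# Minimal sketch of the model amplification curve of Figure 9 from eqs. (2)-(4).
import numpy as np

f0 = 1.175                  # resonant frequency, Hz (best fit quoted in the text)
omega0 = 2 * np.pi * f0
Q = 14.0                    # best-fit quality factor quoted in the text
c_over_m = omega0 / Q
A0 = 1.0                    # driving amplitude (normalised)

f = np.linspace(0.05, 2.5, 500)
w = 2 * np.pi * f
den = (omega0**2 - w**2) ** 2 + (c_over_m * w) ** 2
A_el = A0 * w**2 * (omega0**2 - w**2) / den     # elastic coefficient, eq. (3)
A_inel = A0 * c_over_m * w**3 / den             # inelastic coefficient, eq. (4)

# Piston amplitude |z| with z = z' + z_r: ~A0 at low frequency, ~Q*A0 at resonance
amp = np.sqrt((A0 + A_el) ** 2 + A_inel**2)
amp_normalised = amp / amp[0]                   # normalise to the lowest frequency
```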
We used the amplitude of the lowest-frequency oscillation as the normalising amplitude value A₀. These data were collected using the smallest practical volume, giving a resonant frequency f₀ of the system of around 1.175 Hz, determined from the damped harmonic oscillation after the driving excitation was stopped. This damped harmonic oscillation had a quality factor of Q = (m/c)ω₀ ≈ 13, and the best-fit model was obtained with a Q of 14.

Figure 8. Oscillatory motion using a sinusoidal driving displacement at a frequency of 1 Hz. (a) Position of the syringe plunger. (b) Resulting position of the piston. (c) Residuals of the best-fitting sinusoids for (a) and (b), scaled by the respective amplitudes.

Figure 9. (Orange markers) Amplitude of the piston motion resulting from a sinusoidal driving excitation of 0.025 ml amplitude at various frequencies, shown relative to the resonant frequency and normalized to the amplitude at the lowest frequency. (Grey line) Amplitude predicted by our model for a sinusoidal driving displacement, using Q = (m/c)ω₀ = 14 and f₀ = 1.175 Hz.

5. Discussion

5.1. Fall rate compensation

In the MSL Kibble balance, fall rate compensation can supplement the use of a current feedback arrangement to control the coil position in weighing mode. Fall rate compensation will also be necessary for oscillatory mode if we use volume manipulation to generate the oscillation. From the results presented here, the use of a syringe pump to provide fall rate compensation is promising. The achieved 5 µm stability over 5 minutes would be adequate for our initial target accuracy of 1 part in 10⁷ for realised mass. To improve the obtained stability and the repeatability of the method, which are both limited by fluctuation in the piston fall rate, the fall rate should be measured immediately before each period of fall compensation, to allow the ideal compensation flow rate to be determined. Alternatively, the flow could be finely adjusted during an initial stabilisation routine.

We have examined possible causes, other than measurement error, of the variations in the observed fall rate. Variation in the temperature of the piston-cylinder unit will affect the natural fall rate of the piston, through thermal expansion affecting the gap between piston and cylinder and through the temperature dependence of the viscosity of the gas in the gap. For an estimated temperature change of 0.5 K of the piston-cylinder unit between measurements, these effects contribute less than 2 nm s⁻¹ to the fall rate. The fall rate is also expected to vary with the vertical position of the piston, owing to departures of the piston and cylinder from cylindricity; however, such a correlation is not evident in our data. A steady drift in the temperature of the tubing connecting the pressure balance will affect the observed fall rate: for a 0.1 K change in temperature over 5 minutes, this effect is calculated to be ~7 nm s⁻¹ for a tubing volume of 35 ml and is mainly due to the section of PTFE tubing. This effect is an order of magnitude smaller than the variation in fall rate observed within a test. Similarly, the variation between tests of ~280 nm s⁻¹ is also unlikely to be due to the above causes. Instead, the variations may indicate that the tubing volume has a small leak which is influenced by changing ambient conditions. Sources of the ~2 µm random variation in position should also be addressed.
It is possible that this noise is due to ground vibration, or to wobble from the spinning of the pressure balance bell. A rotating-cylinder pressure balance is currently being developed, which would provide a direct comparison and, ideally, lower noise.

This fall rate compensation technique is important not only for pressure balance 1, which carries the coil; it can also be used for pressure balance 2, which provides the reference pressure. Both pressure balances need to be kept at a stable pressure for the duration of weighing mode, which could take around 5 minutes per weighing. If left without fall compensation, pressure balance 2 would require periodic height adjustment. The height could easily be adjusted with the syringe pump before each weighing, and/or fall rate compensation could also be provided for this pressure balance throughout the duration of the weighing.

We note that this method could be considered to introduce a controlled leak into the system between the pressure balance and the differential pressure transducer, which is not usually recommended [14]. In this situation, the pressure in the tubing is very close to ambient pressure and the tubing volume is as small as practicable, which will reduce the effect of the leak. Additionally, the 'leak' caused by the motion of the syringe pump does not change the pressure or the number of gas molecules in the system; instead, the column of gas is merely moved along the tubing. For these reasons, we expect that the measured force difference would not be affected by a constant infusion during a weighing mode measurement consisting of a sequence of mass-on and mass-off weighings.

5.2. Constant velocity travel

As expected, the oscillator response of the pressure balance makes travel at a constant instantaneous velocity difficult to attain by simply applying a constant flow rate. We have therefore presented two controlled start techniques which both significantly reduce the oscillator response. With these techniques, accommodating the oscillator response took as little as ~0.3 mm at each end of the travel, out of the available travel of ~4 mm. Of the two techniques, the crane operator's trick is deceptively simple to implement and produced constant-velocity travel over almost the full range of travel of the piston, with deviations from linearity of <30 µm. Similar results were obtained for periods from 0.75 s to 0.8 s, indicating that the exact timing of the steps is not critical. This technique warrants further investigation in the MSL Kibble balance, to enable interferometric velocity measurement and to assess the stability of the generated voltage.

5.3. Oscillating mode

In addition, we have demonstrated that volume manipulation can be used effectively to drive a sinusoidal oscillation of a pressure balance. We see good agreement between the nature of the driven oscillation at the piston and the features predicted by our model, such as the 90° phase difference and the significant amplification near resonance. We postulate that only one mode of oscillation is likely to be present, owing to the length of narrow tubing between the syringe pump and the piston. The ~14-fold amplification of the driving oscillation near resonance is a major advantage of working with, rather than against, the harmonic oscillator character of the pressure balance. This amplification significantly reduces the amplitude of the driving oscillation required to generate an oscillation of ~1 mm amplitude at the piston.
importantly, any noise in the driving oscillation is greatly reduced at frequencies above and below the resonant frequency. care must be taken to reduce any noise in the driving oscillation at or near the resonant frequency, as this noise is also amplified. the sinusoidal oscillation produced by our syringe pump is digitally created by updating the generated flow 100 times per oscillation. this relatively coarse digitisation results in a slightly distorted sinusoidal waveform, and the method of updating only the flow allows a slow drift of the average position of the plunger. while some optimisation is possible with the syringe pump system, a much better sinusoidal oscillation would be generated by an analogue or ac-driven input. such an input could be used to drive the oscillation of the syringe plunger or an equivalent pressure-maintaining membrane or diaphragm. alternatively, a large, slow sinusoidal oscillation could be used to provide predictable, repeated travel passing through the weighing position with approximately constant velocity. for example, an oscillation at 0.2 hz with 2 mm amplitude would reach a peak velocity of 2.5 mm s⁻¹, with little down-time between repeat measurements.

6. conclusions

we have shown that a syringe pump may be used as an automated volume regulator to controllably adjust the height of the piston in a pressure balance. this method can also be used to assist in maintaining a stable piston, and therefore coil, position in both the weighing and calibration modes of the msl kibble balance. we demonstrated constant velocity travel of the piston at 1.7 mm s⁻¹ using two controlled start techniques to minimise unwanted oscillations. an oscillatory motion, working with the resonant behaviour of the pressure balance, also shows promise for the msl kibble balance calibration mode.

acknowledgement

the authors wish to acknowledge useful discussions with and technical assistance from joseph borbely, yin hsien fung, and peter mcdowall. the authors thank the reviewers for their insightful comments and suggestions for achieving constant velocity travel. this work was funded by the new zealand government.

references

[1] i. a. robinson, s. schlamminger, the watt or kibble balance: a technique for implementing the new si definition of the unit of mass, metrologia 53 (2016) a46–a74. doi: 10.1088/0026-1394/53/5/a46
[2] c. m. sutton, m. t. clarkson, y. h. fung, the msl kibble balance weighing mode, in: cpem 2018 conference on precision electromagnetic measurements, institute of electrical and electronics engineers inc., 2018. doi: 10.1109/cpem.2018.8500889
[3] c. m. sutton, the accurate generation of small gauge pressures using twin pressure balances, metrologia 23 (1987) 187–195. doi: 10.1088/0026-1394/23/4/003
[4] c. m. sutton, m. p. fitzgerald, k. carnegie, improving the performance of the force comparator in a watt balance based on pressure balances, in: 2012 conf. precis. electromagn. meas., ieee, 2012, pp. 468–469. doi: 10.1109/cpem.2012.6251006
[5] c. m. sutton, m. t. clarkson, a magnet system for the msl watt balance, metrologia 51 (2014) s101–s106. doi: 10.1088/0026-1394/51/2/s101
[6] c. m. sutton, an oscillatory dynamic mode for a watt balance, metrologia 46 (2009) 467–472. doi: 10.1088/0026-1394/46/5/010
[7] c. m. sutton, an improved mechanism for spinning the floating element of a pressure balance, j. phys. e 13 (1980) 825. doi: 10.1088/0022-3735/13/8/007
[8] c. m. sutton, m. p. fitzgerald,
d. g. jack, an initial investigation of the damped resonant behaviour of gas-operated pressure balances, measurement 45 (2012) 2476–2478. doi: 10.1016/j.measurement.2011.10.045
[9] o. l. de lange, j. pierrus, amplitude-dependent oscillations in gases, j. nonlinear math. phys. 8 (2001) 79–81. doi: 10.2991/jnmp.2001.8.s.14
[10] s. schlamminger, l. chao, v. lee, d. b. newell, c. c. speake, the crane operator's trick and other shenanigans with a pendulum, am. j. phys. 90 (2022) 169–176. doi: 10.1119/10.0006965
[11] h. ahmedov, n. b. aşkin, b. korutlu, r. orhan, preliminary planck constant measurements via ume oscillating magnet kibble balance, metrologia 55 (2018) 326–333. doi: 10.1088/1681-7575/aab23d
[12] c. rothleitner, n. rogge, s. lin, s. vasilyan, d. knopf, f. härtig, t. fröhlich, planck-balance 1 (pb1) – a table-top kibble balance for masses from 1 mg to 1 kg – current status, acta imeko 9 (2020) 5, pp. 47–52. doi: 10.21014/acta_imeko.v9i5.937
[13] s. vasilyan, n. rogge, c. rothleitner, s. lin, i. poroskun, d. knopf, f. härtig, t. fröhlich, the progress in development of the planck-balance 2 (pb2): a tabletop kibble balance for the mass calibration of e2 class weights, technisches messen 88 (2021) 731–756. doi: 10.1515/teme-2021-0101
[14] r. r. a. samodro, i. m. choi, s. y. woo, s. j. lee, a study on the pressure gradient effect due to a leak in a pressure calibration system, metrologia 49 (2012) 315–320. doi: 10.1088/0026-1394/49/3/315

digital twin concept of a force measuring device based on the finite element method

acta imeko issn: 2221-870x march 2023, volume 12, number 1, 1-5

oksana baer1, claudiu giusca2, rolf kumme1, andrea prato3, jonas sander1, davood mirian1, frank hauschild1
1 physikalisch-technische bundesanstalt, bundesallee 100, 38116 braunschweig, germany
2 cranfield university, cranfield, united kingdom
3 inrim – national institute of metrological research, turin, italy

section: research paper
keywords: digital twin, force measurement, finite-element-method, measurement uncertainty, traceability, data transfer
citation: oksana baer, claudiu giusca, rolf kumme, andrea prato, jonas sander, davood mirian, frank hauschild, digital twin concept of a force measuring device based on the finite element method, acta imeko, vol. 12, no. 1, article 8, march 2023, identifier: imeko-acta-12 (2023)-01-08
section editor: daniel hutzschenreuter, ptb, germany
received november 18, 2022; in final form march 20, 2023; published march 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

abstract: in the framework of the empir project comtraforce, the digital twin (dt) concept of a force measurement device is developed. the dt aims to shadow static, continuous as well as dynamic calibration processes, preserving data quality and collecting calibration data for improved decision-making. to illustrate the developed dt concept, a prototype realisation for static and continuous force calibration processes is developed, involving simulation with ansys engineering software. the focus of the current work is placed on the data connection between the physical device and the dt. the dt model is validated using traceable measurements.
funding: the authors would like to acknowledge funding of the presented research within the european metrology programme for innovation and research (empir) as well as the european association of national metrology institutes (euramet) in the joint research project 18sib08 comtraforce.
corresponding author: oksana baer, e-mail: oksana.baer@ptb.de

1. introduction

in accordance with industry 4.0 requirements for automation and data exchange, the empir project comtraforce 18sib08 [1] was undertaken to develop the prototype of a dt of a traceable force transfer standard. the dt is built on the basis of models for static and continuous force transfer standards, including measurement uncertainty determination. a dt is a management and certification paradigm [2] which allows the desired functionality of complex systems to be predicted, optimised and maintained in real time. whilst the dt literature has proliferated immensely in recent years, covering various topics such as through-life monitoring and manufacturing, the widely accepted concept of the dt was proposed by grieves [3] and vickers [4], who identified the following main components: the physical object in real space, the virtual object/model in virtual space, and the connections of data and information that tie the virtual and real products together, as shown in figure 1.

figure 1. digital twin concept of grieves [3] and vickers [4]: an object in real space, a model in virtual space, and the data connections between them.

other authors have reviewed and refined the dt definition [5], establishing a stronger link to industry 4.0 and its enabling technologies: "dt is the virtual and computerised counterpart of a physical system that can be used to simulate it for various purposes, exploiting a real-time synchronization of the sensed data coming from the field". from a standardization work perspective, iso 23247-1 defines a dt as a manufacturing fit-for-purpose digital representation of an observable manufacturing element with synchronization between the element and its digital representation [6]. a digital format for the secure transmission and unambiguous interpretation of measurement data is defined in [7]. all dt definitions lead towards a digital representation, or model, of the physical object, process or service, also called a virtual twin. the virtual twin is able to predict the characteristics of the real twin in real time with a specified precision, which is established by the needs of each specific application for accuracy and decision speed [8]. the dt models are divided into three main categories:
• physics-based models (finite element method fem, structural health monitoring, computational inverse methods);
• data-driven models (machine learning, digital signal processing, statistical inverse methods);
• surrogate models, as a combination of both.
a comprehensive overview of the dt model requirements is given, among others, in [8]. the dt model must be able to reflect the physical processes influencing the output parameters. the accuracy of the predicted values should be high enough to be beneficial for the desired application. furthermore, the speed of computation must be high enough for real-time execution of the dt. whilst fem uses current knowledge and off-line measurement results to predict the output of the real twin, in the dt context the virtual twin requires real-time sensor information gathered over the entire operational lifetime [8]. the aim of the current work is to develop the prototype of the dt for static and continuous force calibration processes. to achieve this aim, finite element analysis (fea) is chosen for predicting the output of the force measurement device. however, in order to achieve the required speed of dt execution, several simplifications to the geometry, material behaviour, etc. are made. details on the development of the finite element model (fe-model) for static and continuous calibration tests, with a focus on applicability to the dt, are provided. a way to connect the force measurement device information with the developed fe-model in an automated manner is presented. finally, the approach for the physical-to-virtual and virtual-to-physical data communication is also discussed. the current paper is structured as follows: section 2 gives an overview of the related literature; section 3 presents the concept of the dt of a force measurement device, addresses the dt definition and lists the digital metrological twin requirements; the creation of the fe-model of the force measurement device is presented in section 4; the data communication approach between the fe-model and the real device is discussed in section 5; in section 6, the main conclusions of the work are summarised, providing an outlook on the planned dt developments.

2. related work

the dt developed in [9] leverages fem to predict the complex micro-mechanical evolution of materials during multistage processes, using sintering as an example. the focus is placed on the interoperability and robustness of the material data between the various fe-steps. a tensile test system, equipped with an optical system to capture full-field geometric images, was investigated in [10]. a dt based on fe modelling is used to ensure that the data collected by the experimental system can be used quickly and efficiently to prove theoretical models and to determine required material properties. in [11], fem is used for the characterisation of a 5 mn·m torque transducer and is employed to extend the calibration range up to the 5 mn·m level. a method for the determination of the young's modulus is presented, and the determined value is used in the fe-model. the simulated output signal deviates from the measured one by 17.5 %. another simulation, of a 4 kn·m self-built torque transducer in matlab simulink, is reported in [12]. the simulation method was developed with a focus on giving recommendations regarding the element type and size. also, different modelling strategies for the strain gauges are proposed. a deviation between the simulated and measured results down to 1.3 % was achieved. the fully integrated hexahedral element with quadratic formulation was chosen as the most suitable fe element type for the modelling.
3. digital twin of a force measurement device

a dt in metrology requires specific definitions and clearly stated obligatory requirements for measurement traceability. the german national metrology institute ptb developed the following definition of the digital metrological twin (dmt) [13]: a digital metrological twin is a numerical (prediction) model that depicts a specific measurement process and indicates an associated measurement uncertainty for a specific measurement value, which is traceable to the units of the international system of units. moreover, it complies with the requirements that:
• the measurement uncertainty is calculated according to recognised standards [14];
• all input parameters are traceably determined and stated with the corresponding measurement uncertainty [14];
• and it is validated by traceable measurements.

this definition provides the requirements whose fulfilment allows the generated dt output to be utilised for metrological services. the dmt is an enhanced way to generate, process, and store time-stamped data on a calibration device. beyond the concept of a digital calibration certificate (dcc) [15], where calibration data are collected, the dmt will allow the force transducer output to be correlated with the physical processes occurring in the transducer and allows its seamless connection with the factory of the future. all relevant data in such a database are traceable and represented in si units, where applicable. these data can be used by the end user in the further life-cycle of the calibrated device. furthermore, the use of the dmt will allow the calibration system to learn from itself and reduce the measurement uncertainty of future calibration processes with the same device, as well as deliver data from which correlations between calibration conditions, mounting, etc. and output values can be derived for devices of a similar class.

the concept of the force measurement device dmt covers three main functions, figure 2. the first function allows for a prompt reading of selected device information, e.g. sensor readings (temperature and strain). here, the speed of communication as well as the preservation of data quality play the key role. the physical-to-virtual communication is realised by reading the relevant device information from a dcc for force measurement. the second function of the dt is the data processing by means of one of the three models mentioned above [16]. the main outcome of the data processing is the prediction of the force measurement device output as well as the measurement uncertainty. for the dmt it is crucial that the input/output parameters are traceable and are stated with the corresponding measurement uncertainty. additionally, the validation of the static, continuous and dynamic models within the dt must be performed using traceable measurements. the third function enables saving of the modelled output, which can be used to recalculate the uncertainty in future calibration procedures.

figure 2. developed concept of the digital metrological twin.

as a first prototype of the dmt of a force measurement device, the fe-model of the transducer is developed. as a first attempt, the models of the static and continuous calibration processes are developed in the fe simulation software ansys, see section 4. the synchronization between the force measurement device and its dt is performed after each calibration process, in the form of reading and subsequently updating the dcc by means of python v3.9. the calculated dt device output as well as the measurement uncertainty are saved in a database after each calibration. this database represents the device history, allowing the dt to recall any past state of the device.
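as a minimal illustration of this time-stamped storage step, a python sketch; the sqlite schema, table and function names are our own illustrative assumptions, not part of the comtraforce software:

    import sqlite3
    from datetime import datetime, timezone

    def store_dt_result(db_path, device_id, output_mv_v, uncertainty_mv_v):
        """append one time-stamped dt prediction to the device history."""
        con = sqlite3.connect(db_path)
        con.execute("create table if not exists dt_history ("
                    "device_id text, timestamp text, "
                    "output_mv_v real, uncertainty_mv_v real)")
        con.execute("insert into dt_history values (?, ?, ?, ?)",
                    (device_id, datetime.now(timezone.utc).isoformat(),
                     output_mv_v, uncertainty_mv_v))
        con.commit()
        con.close()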
visualisation of the data is realised with the matplotlib library v1.3.0.

4. finite element analysis

4.1. numerical model set-up

the development of the transducer design is discussed in detail in [17]. more details on the creation of the digital construction of the transducer are provided in [18]. the transducer is mounted on the 20 kn calibration machine available at ptb. the machine works according to the principle of direct loading: the weight forces generated by the mass bodies are introduced to the transducer via a load hanger. the load bodies are arranged in such a way that transducers with nominal forces of 2 kn can be calibrated in 12.5 % steps, and transducers with nominal forces of 5 kn, 10 kn and 20 kn in 10 % steps, according to din en iso 376. the geometrical model was simplified to the transducer itself and the strain gauges in order to enable a fast run of the simulation, see figure 3a. the material of the transducer is quenched alloyed steel 1.6580 qt. the strain gauge is simulated as a polyimide substrate and a grid made of constantan. the contribution of the glue layer is neglected. the transducer is partially meshed with tetrahedral tet10 elements of 2 mm size. the strain gauges as well as the transducer region close to the strain gauge placement are meshed with hexahedral hex20 elements to ensure better mesh consistency, see figure 3b. an element size of 0.6 mm was set for the hex20 elements. further element size refinement below 0.6 mm led to no significant improvement of the simulated output signal.

4.2. static force model

to investigate the influence of static forces on the measured signal of the transducer, a static mechanical analysis was set up in ansys. the concentrated force fy was applied stepwise in the axial direction to reproduce the loading during the calibration experiments, until the nominal value of 20 kn was reached. boundary conditions ux = uy = uz = urx = ury = urz = 0 were applied on the opposite side of the transducer, simulating the transducer mounting in the loading train. the elastic material properties of 1.6580 steel [19], polyimide [20] and constantan [21] were defined, see table 1.

4.3. continuous force model

during continuous calibration tests, a variation of the output signal with time under the application of a constant load is observed. in the force measurement industry this phenomenon is defined as creep [22]. one of the physical mechanisms leading to the output signal variation is the superposition of the thermoelastic strains in the load cell itself and the strain of the strain gauge materials. to simulate this effect, a transient coupled-field analysis was performed in ansys. a key role in the model is played by the definition of the thermal material properties, such as the heat expansion coefficient, specific heat and thermal conductivity [19], [20], [21], see table 1. the simulated compression load was applied in the axial direction for 20 s, up to the maximal nominal value of 20 kn, and subsequently held for 600 s. the full unloading was likewise simulated over 20 s, and the final strain was measured after a further 600 s, capturing the unloading creep.
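the loading history just described can be written down compactly; a minimal python sketch of the simulated load-time profile (the function name is ours, the time and force values are those stated above):

    import numpy as np

    def applied_force_kn(t_s):
        """piecewise-linear creep-test profile: 20 s ramp to 20 kn,
        600 s hold, 20 s unload, then 600 s of creep recovery."""
        times = [0.0, 20.0, 620.0, 640.0, 1240.0]
        forces = [0.0, 20.0, 20.0, 0.0, 0.0]
        return np.interp(t_s, times, forces)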
4.4. simulation results

to validate the developed fe-model, static as well as creep calibration experiments according to iso 376:2011-09 with the nominal force of 20 kn were performed. a comparison of the simulated output of the strain gauge bridge for the static loading profile with the experimental measurements and the analytical solution [18] is presented in figure 4.

figure 3. simulation model of the force transducer and strain gauges: a) geometrical model; b) finite element mesh of the measurement region.

figure 4. simulated and experimentally measured output signal.

table 1. material properties of the transducer materials used in ansys.
material property | 1.6580 qt | constantan | polyimide
young's modulus in gpa | 210 | 162 | 2.65
poisson's ratio | 0.33 | 0.31 | 0.37
thermal expansion in 1/k | 1.11 · 10⁻⁵ | 8.3 · 10⁻⁶ | 5.5 · 10⁻⁵
specific heat in j/(kg k) | 460 | 390 | 1090
thermal conductivity in w/(m k) | 42 | 21.8 | 0.33

the discrepancy between the simulated result and the analytical solution lies around 9.8 %. for reliable dt functionality, the precision must be improved, for example through consideration of the glue layer contribution. similar to the static calibration test model, the fea output signal lies close to the analytical value, which is twice as high as the experimental one. to enable better data comparison, the measured as well as the simulated output signals were normalised using formula (1):

𝑋′ = (𝑋 − 𝑋min) / (𝑋max − 𝑋min) , (1)

where 𝑋′ is the normalised value, 𝑋 is the measured value to be normalised, and 𝑋min and 𝑋max are the minimum and maximum values of the dataset, respectively. the simulated output signal of the load cell is presented in figure 5a during loading and in figure 5b during unloading. the output signal was calculated using the simulated total strain values, which were derived as the sum of the elastic and thermal strains. the simulated and measured data show opposite trends: the measured data represent the strain gauge output signal, and the simulated data the load cell output. this is in accordance with the findings of kühnel, who stated that the creep recovery of the load cell acts in the opposite direction to the creep of the strain gauge and the corresponding glue layer [22]. the measured creep results showed output signal deviations due to creep of 0.214 % and 0.211 % after loading and unloading, respectively. the simulated results showed output signal deviations of 0.45 % and 0.47 % for loading and unloading. thus, both the static and the continuous simulations show almost double the signal value in comparison to the measured values.

5. data communication

5.1. data input

the physical-to-virtual communication is realised by reading the relevant device information from a dcc in xml format. the benefit of the direct use of data from the dcc is that the resulting data are provided in si format. the data transfer from the dcc ensures that the input parameters of the dt are traceable and stated with the given measurement uncertainty, as required by the definition of the dmt. the dcc for the static force calibration process was developed at ptb in accordance with the xml schema (xsd) 3.1.2 [23]. to read input parameters from the dcc, the elementtree xml api v1.3.0 was used [24]. it allows the specified elements of the dcc xml tree to be read, see figure 6. an example of code for navigating to the required dcc element is presented in figure 7. the code is strongly tied to a specific dcc structure and must be adjusted in case of any modifications of the dcc xml tree.
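since the figure 7 approach indexes elements by position and breaks whenever the tree structure changes, a namespace-aware lookup of the same force list is more robust. a minimal sketch; the namespace uris here are our assumption about the dcc/d-si schemas, not values quoted from this work:

    import xml.etree.ElementTree as ET

    # assumed namespace uris for the dcc and d-si schemas
    NS = {"dcc": "https://ptb.de/dcc", "si": "https://ptb.de/si"}

    def read_force_list(dcc_path):
        """return the applied force values and their unit from the first
        si:realListXMLList found anywhere in the dcc."""
        root = ET.parse(dcc_path).getroot()
        real_list = root.find(".//si:realListXMLList", NS)
        values = real_list.find("si:valueXMLList", NS).text.split()
        unit = real_list.find("si:unitXMLList", NS).text
        return [float(v) for v in values], unit

for the dcc sketched in figure 6, this should yield the force steps from 0 kn to 20 kn and the unit string \kilo\newton.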
the collected data are transferred as loading and boundary conditions of the fe-simulation in the form of python commands. the entire fe-model was built in ansys mechanical 2021 r1 using the mechanical python apis to automate the process [25]. the application diagram, showing the input/output data flow of the developed dmt software, is presented in figure 8.

figure 5. simulated and measured output signal of the load cell during a) loading and b) unloading.

figure 6. xml tree structure of the force dcc and highlighted list element containing the values of the applied force magnitude in kn (dcc:measurementresults → dcc:measurementresult → dcc:results → dcc:result → dcc:data → dcc:list → dcc:quantity → si:reallistxmllist with si:valuexmllist and si:unitxmllist).

figure 7. code example demonstrating navigation to a specific dcc element:

    # importing ElementTree under the alias of ET
    import xml.etree.ElementTree as ET
    # passing the path of the xml document to enable the parsing process
    tree = ET.parse(dt_directory / 'example_dcc_iso 376_22999.xml')
    # getting the parent tag of the xml document
    root = tree.getroot()
    # print force magnitude values in kn from the dcc
    print(root[1][0][3][3][1][0][0][1][0].text)

figure 8. application diagram illustrating the developed dmt software: a python script reads the input data from the dcc (xml format); the mechanical python apis (ansys software) create the analysis, input the geometry, mesh the model, insert the material properties, apply the boundary conditions and loads, define the output variables and start the analysis; postprocessing reads the output data from a text file, calculates the output signal from the displacement data, visualises it using matplotlib and calculates the measurement uncertainty (under development); the selected data are stored in a database (under development).

5.2. data output and storage

the calculated dt force transducer data are used to calculate the measurement uncertainty, considering effects from the mechanical system and the surroundings of the force measurement device. in future developments, the collection of the selected information in a database will be realised. each dataset will be completed with metadata stating the calibration time. in this way, the continuous calibration history of the force measurement device will be created.

6. conclusions and outlook

a digital replica of a force measurement device based on an fe-model is created in this work. the first results of the force transducer output for static and continuous calibration processes are obtained. the static results are in good agreement with the analytical solution. the simulated output signal values were on average twice as large as the measured values, which points to the necessity of checking the bridge wiring, which potentially affects the interpretation of the results. in future work it is planned to extend the dt with dynamic calibration process models. the developed concept of the dt will be applied to estimate the effect of improper mounting, material properties and parasitic force components on the measurement uncertainty.
by combining the obtained simulation data with the experimental data and analysing both with a machine learning algorithm, the surrogate model will be created.

acknowledgement

the authors would like to acknowledge funding of the presented research within the european metrology programme for innovation and research (empir) as well as the european association of national metrology institutes (euramet) in the joint research project 18sib08 comtraforce.

references

[1] empir 18sib08: comprehensive traceability for force metrology services (comtraforce). online [accessed 18 march 2022] https://www.ptb.de/empir2019/comtraforce/home/
[2] e. glaessgen, d. stargel, the digital twin paradigm for future nasa and u.s. air force vehicles, 53rd aiaa/asme/asce/ahs/asc structures, structural dynamics and materials conference, honolulu, hawaii, april 2012, 14 pp. online [accessed 27 march 2022] https://ntrs.nasa.gov/api/citations/20120008178/downloads/20120008178.pdf
[3] m. w. grieves, virtually intelligent product systems: digital and physical twins, in: complex systems engineering: theory and practice, vol. 256.
[4] b. piascik, j. vickers, d. lowry, s. scotti, j. stewart, a. calomino, draft materials, structures, mechanical systems, and manufacturing roadmap: technology area 12, nasa, nov. 2010. online [accessed 27 march 2023] https://www.nasa.gov/pdf/501625main_ta12-msmsm-draft-nov2010-a.pdf
[5] e. negri, l. fumagalli, m. macchi, a review of the roles of digital twin in cps-based production systems, procedia manufacturing, vol. 11, 2017, pp. 939–948. doi: 10.1016/j.promfg.2017.07.198
[6] automation systems and integration – digital twin framework for manufacturing – part 1: overview and general principles, iso 23247-1, the international organization for standardization, oct. 2021.
[7] d. hutzschenreuter, f. härtig, w. heeren, t. wiedenhöfer, a. forbes, (+17 more authors), smartcom digital system of units (d-si) guide for the use of the metadata-format used in metrology for the easy-to-use, safe, harmonised and unambiguous digital transfer of metrological data, in zenodo. doi: 10.5281/zenodo.3522631
[8] l. wright, s. davidson, how to tell the difference between a model and a digital twin, adv. model. and simul. in eng. sci., vol. 7, 2020, no. 13, pp. 1–13. doi: 10.1186/s40323-020-00147-4
[9] m. groen, s. solhjoo, r. voncken, j. post, a. i. vakis, flexmm: a standard method for material descriptions in fem, adv. eng. softw., vol. 148, 2020, pp. 1–13. doi: 10.1016/j.advengsoft.2020.102876
[10] d. antok, t. fekete, l. tatar, p. bereczki, e. kocso, evaluation framework for tensile measurements, based on full-field deformation measurements and digital twins, procedia structural integrity, vol. 37, 2022, pp. 796–803. doi: 10.1016/j.prostr.2022.02.011
[11] p. weidinger, c. schlegel, g. foyer, r. kumme, characterisation of a 5 mn·m torque transducer by combining traditional calibration and finite element method simulations, ama conferences, nürnberg, germany, 2017. doi: 10.5162/sensor2017/d6.2
[12] s. kock, g. jacobs, d. bosse, f. strangfeld, simulation method for the characterisation of the torque transducers in mn·m range, j. phys.: conf. ser. 1065, 2018, 042014. doi: 10.1088/1742-6596/1065/4/042014
[13] f. härtig, virtmet applications and overview, berlin, 21 sept. 2021.
[14] jcgm, jcgm 100: evaluation of measurement data – guide to the expression of uncertainty in measurement, 2008. online [accessed 15 february 2021] https://www.bipm.org/en/publications/guides/
[15] w. heeren, b. müller, g. miele, t. mustapää,
d. hutzschenreuter, c. brown, o. baer, smartcom – key findings for digitalisation in metrology, ieee metroind4.0 & iot, 2021, pp. 364–369. doi: 10.1109/metroind4.0iot51437.2021.9488450
[16] c. giusca, o. baer, c. g. izquierdo, s. m. gonzález, a. prato, validation report for the digital twin software, considering requirements of digitisation and industry 4.0, for static, continuous and dynamic force transfer standards including measurement uncertainty determination, in zenodo. doi: 10.5281/zenodo.7404128
[17] a. nitschke, development of a force transfer standard for the calibration of cyclic forces in material testing machines with quantification of parasitic influence quantities, sensoren und messsysteme, vol. 2022, in press, [in german].
[18] c. l. giusca, s. goel, i. llavori, r. kumme, o. baer, a. prato, a. germak, digital representation of a load cell, proc. of the imeko 24th tc3, 14th tc5, 6th tc16 and 5th tc22 int. conf., cavtat-dubrovnik, croatia, 11-13 october 2022, 6 pp. doi: 10.21014/tc3-2022.026
[19] ovako, steel navigator – find the right steel for your needs: 30crnimo8. online [accessed 15 july 2022] https://steelnavigator.ovako.com/steel-grades/30crnimo8/
[20] omnexus, the material selection platform, key properties. online [accessed 15 july 2022] https://omnexus.specialchem.com/selection-guide/polyimide-pi-plastic/key-properties
[21] azo, super alloy constantan. online [accessed 15 july 2022] https://www.azom.com/article.aspx?articleid=7651
[22] m. kühnel, traceable measurement of the mechanical properties of spring bodies for force measurement technology, phd thesis, technical university ilmenau, 2013, [in german].
[23] ptb, digital calibration certificate. online [accessed 17 november 2022] https://www.ptb.de/dcc
[24] the python software foundation, python 3.9.14 documentation. online [accessed 4 january 2023] https://docs.python.org/3.9/library/xml.etree.elementtree.html
[25] ansys inc., scripting in mechanical guide.
online [accessed 15 july 2022] https://ansyshelp.ansys.com

dose reduction potential in dual-energy subtraction chest radiography based on the relationship between spatial-resolution property and segmentation accuracy of the tumor area

acta imeko issn: 2221-870x june 2022, volume 11, number 2, 1-8

shu onodera1, yongbum lee2, tomoyoshi kawabata1
1 department of radiology, division of medical technology, tohoku university hospital, sendai, miyagi, japan
2 graduate school of health sciences, niigata university, niigata, japan

section: research paper
keywords: chest radiography; x-ray; u-net; deep learning; flat panel detector
citation: shu onodera, yongbum lee, tomoyoshi kawabata, dose reduction potential in dual-energy subtraction chest radiography based on the relationship between spatial-resolution property and segmentation accuracy of the tumor area, acta imeko, vol. 11, no. 2, article 28, june 2022, identifier: imeko-acta-11 (2022)-02-28
section editor: francesco lamonaca, university of calabria, italy
received september 30, 2021; in final form february 23, 2022; published june 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: shu onodera, e-mail: onodera@rad.hosp.tohoku.ac.jp

abstract: we investigated the relationship between the spatial-resolution property of soft tissue images and the lesion detection ability using u-net, aiming to explore the possibility of dose reduction in energy subtraction chest radiography. the correlation between the spatial-resolution property of each dose image and the segmentation accuracy of the tumor area in the four regions where the tumor was placed was evaluated using linear regression analysis. the spatial-resolution property was determined by task-based evaluation, and the task-based modulation transfer function (ttf) was computed as its index. the ttfs of the reference dose image and the 75 % dose image showed almost the same frequency characteristics regardless of the location of the tumor, and the dice coefficient was also high. when the tumor was located in the right supraclavicular region and under 50 % dose, the frequency characteristics were significantly reduced, and the dice coefficient was also low. our results showed a close relationship between the spatial-resolution property and the segmentation accuracy of the tumor area using deep learning in dual-energy subtraction chest radiography. in conclusion, a dose reduction of approximately 25 % compared to the conventional method can be achieved. the limitations are the shape of the simulated mass and the use of a chest phantom.

1. introduction

chest radiography is the most basic diagnostic imaging procedure for lung diseases. it is, however, performed in enormous numbers, including standing-position imaging, which is usually performed during medical examinations, and bedside imaging for critically ill patients [1]. compared with computed tomography (ct) examination, which provides three-dimensional information, the exposure in radiography is very low (ct: 10 msv, chest radiography: 0.1 msv) [2]-[5], and its importance in terms of convenience of examination cannot be underestimated.
usually, a high voltage of approximately 120 kv is applied during chest radiography to emphasise the contrast of the lung field rather than that of the ribs [6]. nevertheless, the shadows of the ribs remain on the image, making it difficult to detect the shadow of soft tissue that overlaps that of the ribs. to address this problem, an energy subtraction process has been developed [7] in which two types of image data with different radiographic energy characteristics are obtained with a single exposure, the bone shadows are removed through weighted subtraction of the respective images, and the image of the soft tissue alone is extracted (hereinafter referred to as soft tissue imaging). once bone shadows are removed, it becomes easier to detect tumours in soft tissue images. however, because the image quality in chest radiography varies considerably between different parts due to factors such as the amount of radiation reaching the detector and the amount of scatter radiation, detectability may differ depending on the location of the tumour. in particular, the scatter radiation generated from the clavicle and scapula is believed to significantly affect the upper lobe of the lung, which is a common site of adenocarcinoma [8]. the spatial-resolution property of the image is also a very important factor in tumour detection [9]. the spatial-resolution property is a measure of the sharpness of an image and is an important characteristic that determines the detectability of lesions in x-ray images. chest radiography images are generally subjected to several image-processing techniques to improve image quality, and these processing tools lead to a nonlinear behaviour that depends on image quality, which differs between different parts [10]. thus, the quality of the soft tissue image also shows nonlinear behaviour, and a task-based evaluation in a measurement environment that reflects clinical conditions is necessary to determine the spatial-resolution property.
the use of computer-aided diagnosis (cad) in diagnostic imaging has increased [11], [12]. earlier, image interpretation was performed by radiologists based on their cultivated experience. however, with the recent increase in the use of radiography and the consequent increase in the number of images, cad was introduced to reduce the burden on radiologists. cad based on deep learning has been attracting attention recently, and it may soon be possible to detect a tumour even in a low-dose image with high noise. previous reports on energy-subtraction-processed chest radiographs presented visual evaluations of images acquired by computed radiography (cr) systems [13]-[17]. the purpose of this study was to investigate the relationship between the spatial-resolution property of soft tissue images obtained by a flat panel detector (fpd) system and the lesion detection ability based on deep learning, and to explore the possibility of dose reduction in energy subtraction chest radiography.

2. materials and methods

2.1. image acquisition

an acrylic cylindrical simulated tumour with a diameter of 20 mm and a thickness of 3 mm was placed in four regions on the chest phantom (right supraclavicular, left middle lung, right lower lung, and mediastinum), figure 1. bone structures such as the clavicle, shoulder blade, and ribs, as well as soft tissues such as the mediastinum and pulmonary vascularity, are represented in the chest phantom. the single-exposure dual-energy subtraction system calneo dual (fujifilm medical co. ltd., tokyo, japan, with a pixel size of 0.15 mm) was used in this study [18], [19]. the fpd implemented in this system consists of two stacked scintillators: normal-energy images are collected in the first layer (cesium iodide scintillator), and the second layer (gadolinium sulfide scintillator) collects the high-energy images transmitted through the first layer. table 1 lists the imaging conditions used. the source-image distance was fixed at 180 cm, the field size was 43.2 cm, the image depth was 12 bits, and the tube voltage was 115 kv. three types of imaging doses were used: a standard dose of 1.6 ma s, which was then reduced by 25 % to 1.25 ma s and by 50 % to 0.8 ma s; 100 images of each type were acquired.

2.2. calculation of the spatial-resolution property

chest radiography images are generally subjected to several image-processing techniques, such as frequency processing and dynamic-range compression, to improve image quality. however, these processing tools lead to a nonlinear behaviour that depends on image quality, which differs between different parts. therefore, in this study, the spatial-resolution property was determined by task-based evaluation, and the task-based modulation transfer function (ttf) was computed as its index [20]. the ttf calculation process is shown in figure 2. the edge spread function (esf) for the cylinder was obtained by averaging the profiles that cross the edge of the cylinder, measured radially outward from the centre. next, the ttf was calculated as the fourier transform of the line spread function (lsf) obtained by differentiating the esf. one of the factors to be considered when determining the ttf of a soft tissue image is the signal-to-noise ratio (snr) of the image, because images with a low snr produce large errors in the calculation results. therefore, in this study, the image without acrylic was subtracted from the image with acrylic, and a high-snr image was created by additively averaging 100 such images; this image was then used to calculate the ttf (figure 3).
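as an illustration of this esf-to-ttf chain, a minimal python sketch under stated assumptions: the esf is already radially averaged and sampled at the 0.15 mm detector pitch, and the hann taper is our own noise-control choice rather than a step described in this work:

    import numpy as np

    def ttf_from_esf(esf, pixel_pitch_mm=0.15):
        """differentiate the esf into an lsf, taper it, and normalise the
        modulus of its fourier transform to the zero-frequency value."""
        lsf = np.gradient(np.asarray(esf, dtype=float))
        lsf *= np.hanning(lsf.size)  # suppress noise at the profile tails
        spectrum = np.abs(np.fft.rfft(lsf))
        freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)  # cycles/mm
        return freqs, spectrum / spectrum[0]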
2.3. building the deep learning environment

cad using deep learning has been an area of active research in recent years and has a wide range of applications in medical imaging, such as lesion detection, area extraction, and image noise reduction.

figure 1. phantom image and placement of the acrylic cylinder.

figure 2. ttf calculation process.

table 1. imaging conditions.
source-image distance: 180 cm; field size: 43.2 cm; tube voltage: 115 kv; image depth: 12 bit; dose: 1.6 ma s (reference), 1.2 ma s (25 % down), 0.8 ma s (50 % down).

image segmentation refers to the process of dividing an image into regions corresponding to each object. because the target areas in medical imaging are organs or lesions, the positional information must be specified in the original input image at the time of segmentation. u-net [21] is a typical example of a deep convolutional neural network for image segmentation. the present study deals with the detection of lung tumours in soft tissue images using u-net. figure 4 shows the structure of the u-net used in this study. the u-net environment in this research was as follows: os: windows 10; framework: python 3.7, tensorflow, keras; cpu: core i7-10750h; memory: 16 gb. the relu and sigmoid functions were used as activation functions, cross entropy as the loss function, and adam as the learning optimisation algorithm.

figure 4. the structure of the u-net in this study.

2.4. data set for deep learning

the acquired soft tissue images (window width: 8500, window level: 8100, 14 bits) were cropped to 128 × 128 pixels centred on the tumour and converted to png format (window width: 255, window level: 128, 8 bits). fifty standard-dose images were input to u-net as training data and 50 reduced-dose images as evaluation data, and learning was conducted with the number of epochs set to 30. the teaching data for the soft tissue images containing the tumour were created by binarising each image into the tumour area and other areas (figure 5).

2.5. evaluation of the segmented tumour area

the dice coefficient [22] was calculated as the degree of similarity between the output image and the teacher image to evaluate the extraction accuracy of the tumour region using u-net. the dice coefficient is defined by the following formula:

𝐷𝑖𝑐𝑒(𝐴, 𝐵) = 2 |𝐴 ∩ 𝐵| / (|𝐴| + |𝐵|) , (1)

here, 𝐴 denotes the tumour region in the teaching image (the region with a digital value of 255), and 𝐵 denotes the tumour region in the output image (the region with a digital value of 255).

2.6. evaluation of the correlation between spatial-resolution property and segmentation accuracy of the tumour area

in this study, the correlation between the spatial-resolution property of each dose image and the segmentation accuracy of the tumour area in the four regions where the tumour was placed was evaluated using linear regression analysis. a scatter plot was created by treating the ttf and dice coefficient of each dose image from 0.2 to 1.2 cycle/mm (at intervals of 0.2 cycle/mm) as variables. the dice coefficient for the reference dose image was set to 1.
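equation (1) above translates directly into a few lines of numpy; a minimal sketch (function and argument names are ours), assuming the binarised 8-bit images described in section 2.4:

    import numpy as np

    def dice_coefficient(teacher, output, foreground=255):
        """dice similarity between the tumour regions (digital value 255)
        of the teaching image and the u-net output image."""
        a = np.asarray(teacher) == foreground
        b = np.asarray(output) == foreground
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())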
3. results

figure 6 shows the ttf results for each condition.

figure 5. creation of teaching data.

figure 6. ttfs for each condition.

no difference was observed between the ttfs of the reference dose image and the 75 % dose image in the supraclavicular region, where the contrast was low due to the influence of scattered radiation from the thoracic spine, clavicle, and scapula; however, the similarity decreased significantly for the ttf of the 50 % dose images. in contrast, in the middle and lower lung regions, where the effect of scattered radiation was small and the contrast was high compared to the supraclavicular region, the ttfs were generally high and the difference in values between doses was small. in the mediastinum, the ttfs were low, as in the supraclavicular region, because of the low contrast due to the scattered radiation from the heart and sternum, but the decrease was not as large as in the supraclavicular region; in comparison, the ttf in the supraclavicular region was the lowest among all the regions. table 2 shows the average values of the dice coefficients of the 50 datasets for each condition.

table 2. dice coefficients for each of the conditions.
ma s | right supraclavicular | left middle lung | right lower lung | mediastinum
1.25 | 0.960 | 0.960 | 0.959 | 0.971
0.8 | 0.937 | 0.969 | 0.963 | 0.967

the dice coefficient between the segmented tumour area and the teaching data in the 75 % dose image showed a generally high value of approximately 0.96, regardless of the location of the tumour. furthermore, the ttf of the 75 % dose image showed a value similar to that of the reference dose image regardless of the location of the tumour. in contrast, the dice coefficient in the 50 % dose image was as low as 0.937 when the tumour was located in the supraclavicular region. likewise, the ttf of the 50 % dose image in which the tumour was located in the supraclavicular region showed a lower value compared to the reference dose image. figure 7 shows the actual tumour areas segmented by u-net. in the 75 % dose condition, the segmented images were highly similar to the teaching data, regardless of the location of the tumour. however, in the 50 % dose condition, when the tumour was located in the right supraclavicular region, the segmented region was slightly larger than the teaching data. figure 8 to figure 11 show the correlation between the ttf and the dice coefficient in soft tissue images. a positive correlation was observed between the ttf and the dice coefficient of every dose image at all spatial frequencies in the right supraclavicular and right lower lung regions, and between 0.2 and 0.8 cycle/mm in the mediastinum. in contrast, no correlation was observed between the ttf and the dice coefficient at any spatial frequency in the left middle lung region.

4. discussion

in the case of the simulated tumour located in the right supraclavicular region under 50 % dose, both the ttf and the dice coefficient showed significantly low values. one reason could be that the contrast of the tumour was reduced by scattered radiation, mainly from the clavicle and scapula, due to the complicated bone structure of the supraclavicular region. a second reason could be that the tumour area could not be segmented accurately because of increased image noise, as the amount of radiation reaching the detector was smaller than that reaching other parts. in the case in which the simulated tumour was located in the mediastinum region, the value of the ttf was not very different from that when the tumour was in the middle and lower lung regions, and the dice coefficient also showed a similar value. in the mediastinum region, the amount of radiation reaching the detector was less than that of the middle and lower lung regions,
and the amount of scattered radiation from the sternum and heart was also large. therefore, under the 50 % dose condition, the ttf and dice coefficients could be expected to be as low as in the right supraclavicular region. however, as the tumour in this area had fewer pulmonary blood vessels around it than at other sites, the structure was relatively simple and the tumour area could be segmented accurately (figure 12). the results of this study show that there is a high degree of correlation between the spatial resolution of the soft tissue image and the segmentation accuracy of the tumour area using deep learning in the supraclavicular, lower lung, and mediastinum regions.

figure 7. mass region segmentation using u-net for each of the conditions.

figure 8. correlation between ttf and dice coefficient (right supraclavicular).

figure 9. correlation between ttf and dice coefficient (left middle lung).

(markers in figures 8 to 11: ● 50 % dose, ▲ 75 % dose, ■ reference.)

in the 75 % dose images, the ttf was high regardless of the tumour location, and the dice coefficient was also high. in contrast, in the 50 % dose images, when the tumour was present in the supraclavicular region, the ttf was significantly reduced, and the dice coefficient was also low. in other words, if the radiation dose is reduced to 50 % of the conventional radiation condition, tumours that develop in the supraclavicular region may not be segmented accurately due to a decrease in the ttf. no correlation was confirmed between the ttf and the dice coefficient in the middle lung area. this could be because there was no difference in the ttfs of the dose images between 0.2 and 0.6 cycle/mm, and between 0.8 and 1.2 cycle/mm there was no difference between the ttfs of the 75 % dose image and the reference dose image. among the four sites examined in this study, the highest amount of radiation reached the detector from the middle lung area, and the amount of scattered radiation from the surroundings was also small. therefore, no correlation could be confirmed between the ttf and the dice coefficient in the middle lung region, and the dice coefficients of all dose images showed a high value of approximately 0.96. the discussion above suggests that, with single-exposure dual-energy subtraction chest radiography using the fpd system, it may be possible to reduce the dose by approximately 25 % compared to the conventional method.
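the linear regression behind figures 8 to 11 can be reproduced with standard tools; a minimal sketch assuming scipy (the paper does not name its statistics software, and the function name is ours):

    from scipy import stats

    def ttf_dice_regression(ttf_values, dice_values):
        """ordinary least-squares fit of the dice coefficient against the
        ttf at one spatial frequency; returns slope, intercept and the
        pearson correlation coefficient r."""
        result = stats.linregress(ttf_values, dice_values)
        return result.slope, result.intercept, result.rvalue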
figure 13 shows the detective quantum efficiency (dqe) under the rqa9 radiation quality for the calneo dual and for a cr system manufactured by the same company [23], [24]. because lung tumours are the targets of this study, we focused on the value at the spatial frequency of 1 cycle/mm [25]. the dqe value at 1 cycle/mm is approximately 0.5 for the calneo dual and about 0.2 for the cr system, so the detective quantum efficiency of the calneo dual is approximately 2.5 times higher. a system with an excellent dqe has a high degree of freedom in adjusting the balance between sharpness and graininess through image processing [26]. therefore, selecting parameters with good spatial-resolution properties for multifrequency processing in single-exposure dual-energy subtraction chest radiography using an fpd could lead to further dose reduction.

figure 10. correlation between ttf and dice coefficient (right lower lung).

figure 11. correlation between ttf and dice coefficient (mediastinum).

figure 12. pulmonary vessels around the mass in the mediastinum.

figure 13. dqe for the calneo dual and the cr system with rqa9 spectra.

as a limitation of this study, we would first mention the structure of the simulated tumours. in this study, an acrylic material with a simple cylindrical structure was used as the simulated tumour. actual lesions with increased malignancy, such as spiculated lesions, often have more complex structures, and in such cases the results may differ. moreover, in this study, all measurements from beginning to end were performed using a phantom, and the effects of heartbeat, which is a major problem in actual clinical practice [27], have not been considered. however, to reduce the effects of heartbeat, the imaging time was shortened to the largest extent possible, and measurements were performed in a very short interval of approximately 10 ms, as is done in clinical practice; hence, we expect that the results will not be greatly affected.

5. conclusions

in this study, we clarified the relationship between the spatial resolution of single-exposure dual-energy subtraction chest radiography using the fpd system and the segmentation accuracy of the tumour area using deep learning. the ttfs of the reference dose image and the 75 % dose image showed almost the same frequency characteristics regardless of the location of the tumour, and the dice coefficient also showed a high value. when the tumour was located in the right supraclavicular region and under 50 % dose, the frequency characteristics were significantly reduced, and the dice coefficient was also low. therefore, a close relationship between the spatial-resolution property and the segmentation accuracy of the tumour area using deep learning was confirmed in single-exposure dual-energy subtraction chest radiography using the fpd system, and it may be possible to achieve a dose reduction of approximately 25 % compared to the conventional method.

references

[1] unscear, medical radiation exposures, sources and effects of ionizing radiation, unscear 2008 report, new york: united nations; 2010, annex a.
[2] l. j. m. kroft, l. van der velden, i. h. girón, j. j. h. roelofs, a. de roos, j. geleijns, added value of ultra-low-dose computed tomography, dose equivalent to chest x-ray radiography, for diagnosing chest pathology, j. thorac imaging, 34 (2019), pp. 179-186.
doi: 10.1097/rti.0000000000000404
[3] r. ward, w. d. carrol, p. cunningham, s. a. ho, m. jones, w. lenney, d. thompson, f. j. gilchrist, radiation dose from common radiological investigations and cumulative exposure in children with cystic fibrosis: an observational study from a single uk centre, bmj open, 7 (2017), pp. 1-5. doi: 10.1136/bmjopen-2017-017548
[4] s. singh, m. k. kalra, r. d. ali khawaja, a. padle, s. pourjabbar, d. lira, j. a. o. shepard, s. r. digumarthy, radiation dose optimization and thoracic computed tomography, radiol. clin. north am., 52 (2014), pp. 1-15. doi: 10.1016/j.rcl.2013.08.004
[5] international commission on radiological protection, the 2007 recommendations of the international commission on radiological protection, icrp publication 103, ann. icrp 37 (2-4).
[6] o. w. hamer, c. b. sirlin, m. strotzer, i. borisch, n. zorger, s. feuerbach, m. volk, chest radiography with a flat-panel detector: image quality with dose reduction after copper filtration, radiology, 237 (2005), pp. 691-700. doi: 10.1148/radiol.2372041738
[7] m. fukao, k. kawamoto, h. matsuzawa, o. honda, t. iwaki, t. doi, optimization of dual-energy subtraction chest radiography by use of a direct-conversion flat-panel detector system, radiol phys technol, 8 (2015), pp. 46-52. doi: 10.1007/s12194-014-0285-y
[8] k. honda, y. matsui, h. imai, regional distribution of lung cancer, haigan, 23 (1983), pp. 11-21.
[9] y. fujimura, h. nishiyama, t. masumoto, s. kono, y. kitagawa, t. ikeda, t. furukawa, t. ishida, investigation of reduction of exposure dose in digital mammography: relationship between exposure dose and image processing, nihon hoshasen gijutsu gakkai zasshi, 64 (2008), pp. 259-267. doi: 10.6009/jjrt.64.259
[10] k. kishimoto, e. ariga, r. ishigaki, m. imai, k. kawamoto, k. kobayashi, m. sawada, k. noto, m. nakamae, r. higashide, study of appropriate dosing in consideration of image quality and patient dose on the digital radiography, nihon hoshasen gijutsu gakkai zasshi, 67 (2011), pp. 1381-1397. doi: 10.6009/jjrt.67.1381
[11] k. doi, current status and future potential of computer-aided diagnosis in medical imaging, br j radiol, 78 (2005), spec no 1, s3-s19. doi: 10.1259/bjr/82933343
[12] h. fujita, present status of mammography cad system, med imaging technol, 1 (2003), pp. 27-33.
[13] s. kido, j. ikezoe, h. naito, j. arisawa, s. tamura, t. kozuka, w. ito, k. shimura, h. kato, clinical evaluation of pulmonary nodules with single-exposure dual-energy subtraction chest radiography with an iterative noise-reduction algorithm, radiology, 194 (1995), pp. 407-412. doi: 10.1148/radiology.194.2.7824718
[14] s. kido, k. kuriyama, n. hosomi, e. inoue, c. kuroda, t. horai, low-cost soft-copy display accuracy in the detection of pulmonary nodules by single-exposure dual-energy subtraction: comparison with hard-copy viewing, j digit imaging, 2000, pp. 33-37. doi: 10.1007/bf03168338
[15] j. r. wilkie, m. l. giger, m. r. chinander, t. j. vokes, r. m. nishikawa, m. d. carlin, investigation of physical image quality indices of a bone densitometry system, med phys, 31 (2004), pp. 873-881. doi: 10.1118/1.1650528
[16] s. kido, h. nakamura, w. ito, k. shimura, h. kato, computerized detection of pulmonary nodules by single-exposure dual-energy computed radiography of the chest (part 1), eur j radiol, 44 (2002), pp. 198-204. doi: 10.1016/s0720-048x(02)00268-1
[17] s. kido, k. kuriyama, c. kuroda, h. nakamura, w. ito, k. shimura, h.
kato, detection of simulated pulmonary nodules by single-exposure dual-energy computed radiography of the chest: effect of a computer-aided diagnosis system (part 2), eur j radiol, 44 (2002), pp. 205-209. doi: 10.1016/s0720-048x(02)00269-3
[18] l. shi, m. lu, n. r. bennett, e. shapiro, j. zhang, r. colbeth, j. s. lack, a. s. wang, characterization and potential applications of a dual-layer flat-panel detector, med phys, 47 (2020), pp. 3332-3343. doi: 10.1002/mp.14211
[19] m. lu, a. wang, e. shapiro, a. shiroma, j. zhang, j. steiger, j. s. lack, dual energy imaging with a dual-layer flat panel detector, spie med imaging 10948, physics of medical imaging, san diego, united states, 2019. doi: 10.1117/12.2513499
[20] s. richard, d. b. husarik, g. yadava, s. n. murphy, e. samei, towards task-based assessment of ct performance: system and object mtf across different reconstruction algorithms, med phys, 39 (2012), pp. 4115-4122. doi: 10.1118/1.4725171
[21] o. ronneberger, p. fischer, t. brox, u-net: convolutional networks for biomedical image segmentation, lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), 9351 (2015), pp. 234-241.
[22] b. sahiner, a. pezeshk, l. m. hadjiiski, x. wang, k. drukker, k. h. cha, r. m. summers, m. l. giger, deep learning in medical imaging and radiation therapy, med phys, 46 (2019), e1-e36. doi: 10.1002/mp.13264
[23] iec 62220-1, medical electrical equipment - characteristics of digital x-ray imaging devices - part 1: determination of detective quantum efficiency, international electrotechnical commission; 2003.
[24] iec 62220-1-2, medical electrical equipment - characteristics of digital x-ray imaging devices - part 1-2: determination of detective quantum efficiency - detectors used in mammography, international electrotechnical commission; 2007.
[25] t. yokoi, t. takata, k. ichikawa, investigation of image quality identification utilizing physical image quality measurement in direct- and indirect-type flat panel detectors and computed radiography, nihon hoshasen gijutsu gakkai zasshi, 67 (2011), pp. 1415-1425. doi: 10.6009/jjrt.67.1415
[26] a. r. cowen, a. g. davies, m. u. sivananthan, the design and imaging characteristics of dynamic, solid-state, flat-panel x-ray image detectors for digital fluoroscopy and fluorography, clin radiol, 63 (2008), pp. 1073-1085. doi: 10.1016/j.crad.2008.06.002
[27] european commission, chest (lungs and heart) pa and lateral projections, european guidelines on quality criteria for diagnostic radiographic images, eur 16260 en, luxembourg: cec; 1996.
metrological characterisation of rotating-coil magnetometer systems

acta imeko
issn: 2221-870x
june 2021, volume 10, number 2, 30-36

stefano sorti1,2, carlo petrone2, stephan russenschuck2, francesco braghin1
1 politecnico di milano, department of mechanical engineering, via la masa 1, 20156 milano, italy
2 european organization for nuclear research (cern), 1211 geneva, switzerland

section: research paper
keywords: magnetic measurements; magnetic field; rotating-coil magnetometers; rotating shaft
citation: stefano sorti, carlo petrone, stephan russenschuck, francesco braghin, metrological characterisation of rotating-coil magnetometer systems, acta imeko, vol. 10, no. 2, article 6, june 2021, identifier: imeko-acta-10 (2021)-02-06
section editor: giuseppe caravello, università degli studi di palermo, italy
received january 7, 2021; in final form april 13, 2021; published june 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: stefano sorti, e-mail: stefano.sorti@polimi.it

abstract: rotating-coil magnetometers are among the most common and most accurate transducers for measuring the integral magnetic-field harmonics in accelerator magnets. the measurement uncertainty depends on the mechanical properties of the shafts, bearings, drive systems, and supports. therefore, rotating coils require a careful analysis of the mechanical phenomena (static and dynamic) affecting the measurements, both in the design and in the operation phases. the design phase involves the estimation of worst-case scenarios in terms of mechanical disturbances, while the operation phase reveals the actual mechanical characteristics of the system. in previous publications, we focused on modelling the rotating-coil mechanics for the design of novel devices. in this paper, we characterise a complete system in operation. first, the mechanical model is employed for estimating the forces arising during shaft rotation. then, the effect of the estimated disturbances is evaluated in a simulated measurement. this measurement is then performed in the laboratory and the two results are compared. in order to characterise the robustness of the system against mechanical vibrations, different revolution speeds are evaluated. this work thus presents a complete procedure for characterising a rotating-coil magnetometer system.

1. introduction

rotating-coil systems are a special type of induction-coil magnetometer based on faraday's law of induction. they are used to measure integral field harmonics in the magnet bore. rotating-coil systems consist of arrays of coils mounted on a rotating shaft aligned with the magnet axis. the varying flux linkage with the coil induces a voltage. typically, one long coil (or a chain of shorter coils) spans the entire magnet, including the fringe-field areas, because the transversal, integrated field is typically sufficient for beam tracking in particle accelerators [1]. even when local measurements are required, by means of point-wise magnetometers [2], the integral field is still a primary requirement. precise measurements of magnetic fields require a careful evaluation of the mechanical properties of the rotating coils, subject to static and dynamic forces [3]. this implies the need for the evaluation of vibrations [4]. precautions for reducing the impact of mechanical deformations should also be taken at the design stage: a properly designed system should have natural frequencies higher than the operating frequencies [5]. compensation schemes for the main field harmonic, commonly referred to as bucking, provide effective mitigation of spurious field harmonics caused by these vibrations [6]. nevertheless, the established design methodologies are not always sufficient to match the requirements. analytical formulas exist only for static phenomena such as misalignments and gravity [6]. vibrations are instead modelled directly as spurious field harmonics in the measured field, and therefore they can only be evaluated in a prescribed field, as in [4]. the approach adopted in the literature is typically to design the instrument aiming at a reasonably high stiffness, evaluating its compliance with a controlled input like the motor torque, as in [3]. therefore, all the approaches proposed in the literature are limited in describing all the relevant mechanical phenomena, particularly shaft flexibility and the propagation of mechanical vibrations from the motor and supports. an analytical model for the mechanical description of rotating coils was proposed in [7]. this model can predict the effect of static coil deformations, coil-axis to magnet alignment tolerances, and vibration modes on the measured field harmonics. the model was then expanded in a finite-element formulation (fem) and applied to the design of a rotating-coil bench in [8]. this paper proposes a further refinement of the model, applied to the metrological characterisation of the designed rotating-coil system. the shaft is an anisotropic rotor described in a non-inertial frame, thus including rotor-dynamics effects, coupled with a steady space-frame support. vibrations of the real system during operation are measured and introduced in simulations. a sample dipole magnet is measured both with the real system and through a simulation of the device in order to validate the robustness and the reliability of the system. therefore, the conclusion of this study is a complete magneto-mechanical description of a rotating-coil system in operation.

2. the magneto-mechanical model

the 3d finite-element method (fem) [8] is adopted to describe rotating-coil shafts and their supporting structures. the two parts are modelled in different frames and coupled as described in the next section. the resulting mechanical deformation field of the shaft $\boldsymbol{u}(\boldsymbol{r})$ (where $\boldsymbol{u}$ is the displacement vector and $\boldsymbol{r}$ the position) is applied to the coil geometry to evaluate its effects on the magnetic measurements. the fem model is shown in figure 1.

2.1. the mechanical model

the model is based on timoshenko beams with 12 degrees of freedom (dofs) per node, with the possibility of adding lumped masses, springs, and dampers. it is also possible to include non-ideal boundary conditions: clamped or hinged ends can be replaced by elastic foundations with a suitable stiffness. the equations for the dofs $\boldsymbol{p}$ of the fixed support structure are

$$M\ddot{\boldsymbol{p}} + C\dot{\boldsymbol{p}} + K\boldsymbol{p} = \boldsymbol{f}, \quad (1)$$

where $M$, $K$ and $C$ are the mass, stiffness, and damping matrices of the system.
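equation (1) is a standard second-order structural-dynamics system and can be integrated numerically once the matrices and the force history are available. as an illustration only, with hypothetical two-dof matrices (the paper's matrices come from the fem model and are not reproduced here), a python sketch is:

```python
import numpy as np
from scipy.integrate import solve_ivp

# hypothetical 2-dof mass, stiffness and damping matrices (not the paper's)
M = np.diag([2.0, 1.0])                       # kg
K = np.array([[4.0e4, -2.0e4],
              [-2.0e4,  2.0e4]])              # N/m
C = 1e-4 * K                                  # stiffness-proportional damping

def f_ext(t):
    # harmonic force on the first dof, e.g. a bearing disturbance
    return np.array([10.0 * np.sin(2 * np.pi * 5 * t), 0.0])

Minv = np.linalg.inv(M)

def rhs(t, x):
    # first-order form of eq. (1): x = [p, pdot]
    p, pdot = x[:2], x[2:]
    pddot = Minv @ (f_ext(t) - C @ pdot - K @ p)
    return np.concatenate([pdot, pddot])

sol = solve_ivp(rhs, (0.0, 1.0), np.zeros(4), max_step=1e-3)
```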
damping is characterised experimentally as modal damping [7], [8], and $\boldsymbol{f}$ accounts for the external forces acting on the system. these include gravity, unbalancing, and the vibration of moving parts. the same equation holds for the rotating shaft, but with the additional effects of not being in an inertial frame. the main advantage of adopting a rotating frame for the shaft is to have constant mechanical properties when reducing the shaft subsystem (shown below). this is a common approach for anisotropic shafts [9]. for the sake of simplicity, the rotating speed $\omega$ is assumed to be constant, and therefore the angular displacement can be expressed as a function of time: $\theta = \omega t$. let us consider the transformation expressing the rotation of the shaft, $\boldsymbol{q} = R\,\boldsymbol{q}_\mathrm{f}$, where $R = R(\theta)$, and $\boldsymbol{q}_\mathrm{f}$, $\boldsymbol{q}$ are the shaft dofs expressed in the fixed and rotating frames, respectively. applying the transformation to the shaft equation in the fixed frame, following eq. (1), and multiplying by $R^\mathrm{T}$, the shaft equation in the rotating frame results in

$$M\ddot{\boldsymbol{q}} + \left(C + 2RM\dot{R}^\mathrm{T}\right)\dot{\boldsymbol{q}} + \left(K - \omega^2 M\right)\boldsymbol{q} = R^\mathrm{T}\boldsymbol{f}. \quad (2)$$

in this equation, the term added to $C$ denotes the coriolis force, while the extra stiffness term is the centrifugal force. before coupling the subsystems, each one is reduced with the craig-bampton (cb) method [10]. the cb method is a popular technique that allows for an independent reduction of subdomains before the matrix assembly. it is based on splitting the set of dofs of each subsystem into internal dofs $\boldsymbol{q}_\mathrm{i}$ and boundary dofs $\boldsymbol{q}_\mathrm{b}$ (or $\boldsymbol{p}_\mathrm{i}$ and $\boldsymbol{p}_\mathrm{b}$, respectively, for the support structure). boundary dofs are the ones at the interface between the subsystems and thus directly involved in the assembly of the full structure. the reduced set of dofs from the cb method is a combination of constraint modes $\Psi_\mathrm{b}$ and internal vibration modes $\Psi_\mathrm{i}$. they are expressed by the coordinate transformation

$$\boldsymbol{q} = \begin{bmatrix} \boldsymbol{q}_\mathrm{b} \\ \boldsymbol{q}_\mathrm{i} \end{bmatrix} = \begin{bmatrix} I & \boldsymbol{0} \\ \Psi_\mathrm{b} & \Psi_\mathrm{i} \end{bmatrix} \begin{bmatrix} \boldsymbol{q}_\mathrm{b} \\ \boldsymbol{\varrho}_\mathrm{i} \end{bmatrix}, \quad (3)$$

where $I$ is the identity matrix, and $\boldsymbol{\varrho}_\mathrm{i}$ are the modal coordinates of the shaft. the modal coordinates of the supporting structure are denoted by $\boldsymbol{\pi}_\mathrm{i}$. the computation of $\Psi_\mathrm{i}$ for the rotating shaft may be affected by the presence of non-symmetric matrices in the equation. to avoid a more complicated approach, typically involving left and right eigenvectors, it is proposed to compute $\Psi_\mathrm{i}$ as the free vibration modes of the shaft, neglecting the $\dot{\boldsymbol{q}}$ terms [9]. the computation of $\Psi_\mathrm{b}$ (for both subsystems) is performed by imposing fictitious unitary displacements on each boundary dof and computing the static response:

$$\Psi_\mathrm{b} = -K_\mathrm{i,i}^{-1} K_\mathrm{i,b}, \quad (4)$$

where $K_\mathrm{i,i}$ and $K_\mathrm{i,b}$ are the submatrices of the stiffness matrix $K$ accounting for internal-internal and internal-boundary interactions, respectively. the reduction of each subsystem can be performed by truncating the $\Psi_\mathrm{i}$ set of modes. different criteria can be enforced, mainly based on the maximum frequency component of the expected external inputs [11]. finally, primal assembly [10] is adopted for the coupling between the support structure and the rotating shaft. only the subset $\boldsymbol{q}_\mathrm{b}$ must be transformed before coupling, resulting in $\boldsymbol{q}_\mathrm{b,f}$. the modal coordinates $\boldsymbol{\varrho}_\mathrm{i}$ are not affected, as they are not in the physical space. a generalised set of coordinates is introduced to account for both subsystems as

$$\boldsymbol{u} = \underbrace{\begin{bmatrix} I & 0 & 0 & 0 \\ 0 & I & I & 0 \\ 0 & 0 & 0 & I \end{bmatrix}}_{L^\mathrm{T}} \begin{bmatrix} \boldsymbol{\varrho}_\mathrm{i} \\ \boldsymbol{q}_\mathrm{b,f} \\ \boldsymbol{p}_\mathrm{b} \\ \boldsymbol{\pi}_\mathrm{i} \end{bmatrix}, \quad (5)$$

where $I$ is the identity matrix and $0$ is the zero matrix.

figure 1. layout of the mechanical model. the equations for the support structure and for the shaft are expressed in the fixed frame $(x, y, z)$ and in the rotating frame $(x_\mathrm{s}, y_\mathrm{s}, z_\mathrm{s})$, respectively.
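the constraint modes of eq. (4) and the transformation of eq. (3) translate directly into a few lines of linear algebra. a minimal numpy/scipy sketch (illustrative only; the function interface is ours, not the authors'):

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(K, M, boundary, n_modes):
    """Craig-Bampton reduction: keep boundary dofs, truncate internal modes."""
    n = K.shape[0]
    internal = np.setdiff1d(np.arange(n), boundary)
    Kii = K[np.ix_(internal, internal)]
    Kib = K[np.ix_(internal, boundary)]
    Mii = M[np.ix_(internal, internal)]

    # constraint modes: static response to unit boundary displacements, eq. (4)
    Psi_b = -np.linalg.solve(Kii, Kib)

    # fixed-interface vibration modes, truncated to n_modes
    _, Psi_i = eigh(Kii, Mii)
    Psi_i = Psi_i[:, :n_modes]

    # transformation of eq. (3): q = T [q_b; rho_i]
    nb = len(boundary)
    T = np.zeros((n, nb + n_modes))
    T[boundary, :nb] = np.eye(nb)
    T[np.ix_(internal, np.arange(nb))] = Psi_b
    T[np.ix_(internal, np.arange(nb, nb + n_modes))] = Psi_i

    # reduced matrices and the transformation itself
    return T.T @ K @ T, T.T @ M @ T, T
```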
the assembled equations finally result in

$$L^\mathrm{T}\begin{bmatrix} M_\mathrm{s} & 0 \\ 0 & M_\mathrm{r} \end{bmatrix} L\,\ddot{\boldsymbol{u}} + L^\mathrm{T}\begin{bmatrix} C_\mathrm{s} & 0 \\ 0 & C_\mathrm{r} \end{bmatrix} L\,\dot{\boldsymbol{u}} + L^\mathrm{T}\begin{bmatrix} K_\mathrm{s} & 0 \\ 0 & K_\mathrm{r} \end{bmatrix} L\,\boldsymbol{u} = L^\mathrm{T}\begin{bmatrix} \boldsymbol{f}_\mathrm{s} \\ \boldsymbol{f}_\mathrm{r} \end{bmatrix}, \quad (6)$$

where the subscripts s and r identify the matrices for the support and the rotating shaft, respectively. employing the fem shape functions, the deformation field of the shaft can be computed and applied to the geometry of the coil.

2.2. the magnetic model

the mechanical model is coupled with the magnetic model by computing the magnetic flux linkage with the induction coil. in order to calculate the system response to a given field distribution, the magnetic flux density is expressed analytically. the typical description for the integral flux density is the 2d multipole expansion [1]. however, for correct modelling, the full 3d field is required because long integral coils also intercept the magnet fringe field. therefore, 3d pseudo-multipoles are adopted [12]:

$$B_r(r,\varphi,z) = -\mu_0 \sum_{n=1}^{\infty} r^{n-1}\left(\mathcal{C}_n(r,z)\sin n\varphi + \mathcal{D}_n(r,z)\cos n\varphi\right), \quad (7)$$

$$B_\varphi(r,\varphi,z) = -\mu_0 \sum_{n=1}^{\infty} n\,r^{n-1}\left(\tilde{\mathcal{C}}_n(r,z)\cos n\varphi - \tilde{\mathcal{D}}_n(r,z)\sin n\varphi\right), \quad (8)$$

$$B_z(r,\varphi,z) = -\mu_0 \sum_{n=1}^{\infty} r^{n}\left(\frac{\partial \tilde{\mathcal{C}}_n(r,z)}{\partial z}\sin n\varphi + \frac{\partial \tilde{\mathcal{D}}_n(r,z)}{\partial z}\cos n\varphi\right), \quad (9)$$

where

$$\mathcal{C}_n(r,z) = n\,\mathcal{C}_{n,n}(z) - \frac{(n+2)\,\mathcal{C}_{n,n}^{(2)}(z)}{4(n+1)}\,r^{2} + \cdots \quad (10)$$

$$\tilde{\mathcal{C}}_n(r,z) = \mathcal{C}_{n,n}(z) - \frac{\mathcal{C}_{n,n}^{(2)}(z)}{4(n+1)}\,r^{2} + \cdots \quad (11)$$

in the interest of brevity, the similar expressions for the skew components $\tilde{\mathcal{D}}_n$, $\mathcal{D}_n$ have been omitted here. it is, therefore, possible to compute the magnetic flux density at any point in space. the flux linkage in the coils is then calculated numerically with

$$\Phi = \int_{\mathcal{A}} \boldsymbol{B} \cdot \mathrm{d}\boldsymbol{a}. \quad (12)$$

in a discrete setting, the flux increments $\phi_m$ for the angular positions $\theta_m$ can be developed into a discrete fourier series:

$$\Psi_n = \sum_{m=0}^{M-1} \phi_m\, \mathrm{e}^{-\mathrm{i}\,2\pi m n / N}. \quad (13)$$

this yields the harmonic content of the response:

$$C_n(r_0) = \frac{r_0^{\,n-1}\,\Psi_n}{k_n}, \quad (14)$$

where $C_n^\mathrm{a}(r_0) = B_n^\mathrm{a}(r_0) + \mathrm{i}\,A_n^\mathrm{a}(r_0)$ are the measured (apparent) field harmonics and $k_n$ are the coil sensitivity factors for the $n$th field harmonic:

$$k_n = \frac{N_T L_\mathrm{c}}{n}\left(r_2^{\,n} - r_1^{\,n}\right), \quad (15)$$

where $N_T$ is the number of coil turns, $L_\mathrm{c}$ the total length of the coil, and the two radii are the positions of the go and return wires of the coil. the $k_n$ express the integral sensitivity of the coil and are therefore not functions of $z$. the differences between the imposed $A_n$, $B_n$ and the apparent $A_n^\mathrm{a}$, $B_n^\mathrm{a}$ coefficients at the reference radius of measurement are the main figure of merit to evaluate the effects of mechanical defects.

3. the measurement system review

the proposed model was used to design the measurement system, as described in [8]. a brief recap of the design considerations is given before presenting the model of the final system.

3.1. the design process

the design process was based on separate studies of the two subsystems. the shaft was designed considering an approximated (non-optimal) support. then the supporting structure was designed for the expected worst-case scenario (i.e. for the least stiff shaft expected to be mounted on the support). the design procedure is recalled in figure 2: on the left, the probability density function of errors in the field multipole $A_3$ for different design iterations of the shaft, from bulk resin (1) to carbon-only shaft (4); on the right, the standard deviation of the error in measuring the main component of a quadrupole, $B_2$, for different designs of the supporting structure. the shaft was designed for measuring the field harmonics in dipole and quadrupole magnets. compensation schemes were applied in simulating the measurement procedure.
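the harmonic extraction of eqs. (13)-(15) is a single fft over the flux increments followed by scaling with the coil sensitivities. a minimal numpy sketch (illustrative only; the function name and interface are ours, and the normalization conventions of the actual acquisition chain may differ):

```python
import numpy as np

def apparent_harmonics(phi, N_T, L_c, r1, r2, r0, n_max=15):
    """Apparent field harmonics C_n(r0) from flux increments over one turn.

    phi : array of M flux increments sampled at equidistant angles.
    """
    Psi = np.fft.fft(phi)                         # discrete fourier series, eq. (13)
    n = np.arange(1, n_max + 1)
    k_n = N_T * L_c / n * (r2**n - r1**n)         # radial-coil sensitivities, eq. (15)
    return r0**(n - 1) * Psi[1:n_max + 1] / k_n   # apparent harmonics, eq. (14)
```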
in order to calculate the performance of the compensation in the design, coil imperfections were introduced as gaussian distributions. the external forces considered were gravity, support vibrations (in the form of harmonic forces on the bearings), bearing-friction-related torque, and shaft unbalancing due to eccentricity and sag. the boundary conditions for the shaft were elastic supports (lumped springs and dampers), accounting for the equivalent stiffness of the supporting structure.

figure 2. overview of the design phase for the shaft, on the left, and for the support, on the right (adapted from [8]).

regarding the design of the support, the error of the main field component was the most relevant figure of merit. the design process was thus an iteration of different space-frame layouts for the support structure.

3.2. the model of the measurement system

the complete model of the measurement system is shown in figure 3 (top image). it consists of a 1.5-m-long shaft made of carbon-fiber composite profiles, supporting a printed circuit board (pcb) coil array. the array is made of five equal radial coils of 1.48 m length, 11.2 mm width, and 60 turns. the diameter of the shaft is 72 mm. the supporting structure is a two-meter-wide aluminium frame. the boundary conditions for the support structure are two clamped and two pinned nodes. this comes from the different layouts of the joints and aims at being conservative in terms of mechanical stiffness. figure 3 (bottom image) also shows the modelled magnetic field for the characterisation of the system, which is the main topic of the next section. the shaft model consists of two parallel beams, representing the pcb and the carbon profiles, respectively. bolt joints are modelled as rigid links between mesh nodes. bearing stiffness is included in the interface between the shaft and the structure as a set of lumped linear springs in the fixed frame. the flexible joint, which links the shaft with the motor, is included in the bearing stiffness matrix as a torsional spring. the motor-drive unit is introduced as a rigid body on one end of the structure. forces and torques generated by the rotation are modelled as external loads. the support structure weighs 65 kg, guarantees a stiffness at the tip of at least 1.5 × 10⁵ n/m in all three directions, and has its first natural frequency at 28 hz. the shaft weighs approximately 2.5 kg and has its first natural frequency (a bending mode) at 74 hz.

3.3. the expanded model for input estimation

the effects of the motor on the mechanical system are modelled as equivalent lumped inputs. in particular, we introduce a force $\boldsymbol{f}_\mathrm{m}$ (with unknown magnitude and direction) and a torque $t_\mathrm{m}$ about the shaft rotation axis. the force and torque are applied at the node supporting the shaft on the motor side. due to the link between the structure and the shaft (through the stiffness $k_\mathrm{b}$), their effects are visible on the coils and therefore have an effect on the magnetic measurements. due to the simplification in the modelling of the motor effects, it is necessary to provide the model with the vector of forces and torque $\boldsymbol{g} = [\boldsymbol{f}_\mathrm{m}, t_\mathrm{m}]$. in order to minimise the impact of the mechanical measurements on the magnetic measurements, an indirect measurement of $\boldsymbol{g}$ is performed.
instead of mounting force sensors at the joints between the parts, like torque transducers [14], accelerometers are mounted at the shaft supports. therefore, accelerations are measured, and forces are estimated. to perform this operation, a kalman filter is introduced. the mechanical model of eq. (6) is expanded to include the inputs in the kalman filter's estimation [13]. it is required to write the model in state-space form, collecting the variables in the vector $\boldsymbol{x} = [\dot{\boldsymbol{u}}, \boldsymbol{u}]$. before assembling the estimator, it is suggested to convert the dynamic system to the discrete-time domain (with time step $\Delta t$). it is, therefore, possible to write

$$\begin{bmatrix} \boldsymbol{x}_{k+1} \\ \boldsymbol{g}_{k+1} \end{bmatrix} = \begin{bmatrix} A & B \\ 0 & I \end{bmatrix}\begin{bmatrix} \boldsymbol{x}_{k} \\ \boldsymbol{g}_{k} \end{bmatrix} + \begin{bmatrix} L & 0 \\ 0 & \Delta t\, I \end{bmatrix}\begin{bmatrix} \boldsymbol{z} \\ \boldsymbol{w} \end{bmatrix}. \quad (16)$$

in eq. (16), the first term stems from eq. (6), while the second term is added for the sake of state estimation. it contains the mechanical disturbances $L\boldsymbol{z}$ and the fictitious disturbances for the input, $\Delta t\, I\, \boldsymbol{w}$. this term is required to consider time-varying inputs in the estimation; otherwise, the inputs would obey $\boldsymbol{g}_{k+1} = \boldsymbol{g}_k$. more specifically, $L$ can be assumed to be the identity matrix in case no external disturbances are expected, while $\boldsymbol{z}$ and $\boldsymbol{w}$ are zero-mean, normally distributed disturbances with covariance matrices $Q_z = \mathrm{cov}(\boldsymbol{z},\boldsymbol{z})$ and $Q_w = \mathrm{cov}(\boldsymbol{w},\boldsymbol{w})$. the kalman filter also requires an output equation, which is in our case the subset of eq. (6) accounting for the accelerations of the nodes at which the accelerometers are mounted. the output is expected to suffer from measurement noise, assumed to be a zero-mean, normally distributed variable $\boldsymbol{n}$ with covariance matrix $R = \mathrm{cov}(\boldsymbol{n},\boldsymbol{n})$.

4. experimental validation

the experimental validation campaign aims at two objectives: to evaluate the accuracy of the model in predicting the outcome of a magnetic measurement, and to characterise metrologically the effect of vibrations in the real system. regarding the model-prediction capabilities, the model is the same as the one used in the design phase. it is based on conservative hypotheses for the boundary conditions of the structure (figure 3 and section 3.2) and without updates based on vibration measurements. the comparison between the model and the measurements is therefore expected to be consistent up to a safety factor. approaches to update the system model in order to replicate the real system more accurately are discussed in the next section.

figure 3. the rotating-coil system without the magnet (top) and its magneto-mechanical model (bottom). a few elements are highlighted: the force $\boldsymbol{f}_\mathrm{m}$ and the torque $t_\mathrm{m}$ generated by the motor, the bearing stiffness matrix $k_\mathrm{b}$, the array of coils ($c_1$ to $c_5$) and the magnetic flux density $B$ to be measured. the rigid bodies accounting for the motor and shaft ends are omitted.

4.1. experimental setup

as a first case study, a dipole magnet is selected. it is a 0.4 m long dipole from the cern east area, with a nominal integral field of 0.36 tm at 240 a [15]. future studies will also cover the case of quadrupoles and higher-order magnets. the magnetic flux density distribution generated by the magnet was already measured with a short rotating coil, and pseudo-multipoles were computed up to the 5th component. the pseudo-multipole coefficients are therefore available for the measurement model. the magnetic measurement performed with the system under investigation is a rotating-coil measurement assessing both the integral absolute field and the integral multipole coefficients.
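the augmented system of eq. (16) can be iterated with the standard kalman predict/update recursion to recover the unknown motor force and torque from the measured accelerations. a minimal sketch (the matrix names follow eq. (16); the interface, and the simplifying assumption L = I, are ours):

```python
import numpy as np

def kalman_input_estimation(A, B, C_out, D, y_meas, Qz, Qw, R, dt):
    """Joint state/input estimation with an augmented-state Kalman filter.

    The unknown input g is appended to the state and modelled as a random
    walk, as in eq. (16); y_meas holds one output sample (accelerations)
    per time step.
    """
    nx, ng = A.shape[0], B.shape[1]
    F = np.block([[A, B],
                  [np.zeros((ng, nx)), np.eye(ng)]])   # augmented dynamics
    H = np.hstack([C_out, D])            # accelerations depend on x and on g
    Q = np.block([[Qz, np.zeros((nx, ng))],            # process noise, L = I
                  [np.zeros((ng, nx)), dt**2 * Qw]])
    xa, P = np.zeros(nx + ng), np.eye(nx + ng)
    inputs = []
    for y in y_meas:
        xa, P = F @ xa, F @ P @ F.T + Q                # predict
        S = H @ P @ H.T + R
        G = P @ H.T @ np.linalg.inv(S)                 # kalman gain
        xa = xa + G @ (y - H @ xa)                     # update
        P = (np.eye(nx + ng) - G @ H) @ P
        inputs.append(xa[nx:].copy())                  # estimated g_k
    return np.array(inputs)
```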
for this purpose, the fluxes from the main coil $c_1$ and from the compensation scheme $c_1 - c_3$ are acquired. the $k_1$ sensitivity factor for each coil of the array is calibrated in a reference magnet. in order to match the calibrated area, the coil widths in the model are adjusted. this also decreases the performance of the bucking compensation in the model, aiming at more realistic values. the coefficients $k_n$ for $n > 1$ are computed directly from the nominal geometry of the model. a single measurement consists of one revolution at constant speed. different speeds are considered for both the mechanical and the magnetic measurement campaigns so that, adopting units of revolutions per minute, $\omega \in \{30, 45, 60, 75, 90, 105, 120\}$. in order to prevent unpredicted phenomena (like thermal transients) from spuriously correlating with the speed, the different values of $\omega$ are evaluated in randomised order, and 20 repetitions are performed for each speed. for the mechanical evaluation of the measuring system, a leica geosystem is employed for the alignment with respect to the magnet. in addition, two tri-axial accelerometers (6 g range, 0-200 hz bandwidth) are mounted on the support structure next to the coil bearings to assess the vibrations of the system. given their weight of 50 g, they have been neglected in the mechanical model. due to the low values expected for the accelerations, the accelerometers have been calibrated [16] in the range of ±0.5 g (or 0.5 g to 1.5 g for the axes parallel to gravity): each device was mounted on a support that allows a controlled angular displacement with respect to gravity (accuracy 0.02 mrad). the dc acceleration is then measured in a series of known angular positions, so that the gravity component on each axis can be computed. the overall accuracy is estimated to be 0.0025 m/s². this value includes the uncertainty due to sensor noise, averaged over 20 repetitions. the impact of the accelerometer accuracy on the force estimation is estimated to be about 0.3 % in amplitude; it thus has negligible effects on the measurement. the main source of uncertainty for the estimation is the model itself, which is, in fact, the focus of the validation procedure.

4.2. mechanical measurement results

the most relevant figure of merit for the mechanical alignment of the system is the misalignment between the magnet axis and the coil rotation axis. after the alignment procedure, the final angle between the two axes resulted in 0.052 mrad. the positioning error is estimated to be less than 20 μm, which is the overall accuracy of the geosystem, while the linear stages can perform smaller motions relying on a sub-micrometre-resolution encoder. however, these positioning errors are negligible for the proposed magnetic measurements (estimated to result in errors of the order of 10⁻⁸). the vibrations are measured for different rotation speeds of the shaft. for each of the speeds considered, the estimation procedure is performed on a series of 20 revolutions. figure 4 shows the rms of the measured vibrations and the estimated force and torque as functions of the rotating speed.

4.3. magnetic measurement results

the main figure of merit for the performance of the measurement system is the result of the magnetic measurement. consistent with the mechanical measurements, different values are considered for the rotation speed of the shaft. the twofold scope of the campaign is to validate the robustness of the measurement results against the mechanical vibrations and the capability of the model in reproducing the system behaviour. figure 5 shows the measurements of the absolute integral field as a function of the speed. each rotation is processed independently to evaluate the dispersion caused by random disturbances and noise.

figure 4. rms values for the measured accelerations (averaging all the accelerometers) and for the estimated force magnitude and torque. the rms values are computed over the full bandwidth (0-200 hz) of the accelerometers.

figure 5. integral absolute field resulting from measurements (top) and simulations (bottom). the measured results are presented by the distribution of 20 rotations (the blue line is the average).
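returning to the accelerometer calibration described in section 4.1: the tilt-based procedure amounts to fitting a gain and an offset per axis against the projection of gravity at known tilt angles. a minimal sketch under that model (the function names and the linear gain/offset model are our assumptions):

```python
import numpy as np

def gravity_projection(angles_rad, g=9.80665):
    """Expected dc acceleration along one sensor axis tilted by known angles."""
    return g * np.cos(angles_rad)

def calibrate_axis(angles_rad, readings):
    """Least-squares gain/offset for one axis: reading = gain*g*cos(theta) + offset."""
    A = np.column_stack([gravity_projection(angles_rad),
                         np.ones_like(angles_rad)])
    (gain, offset), *_ = np.linalg.lstsq(A, readings, rcond=None)
    return gain, offset
```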
the twofold scope of the campaign is to validate the robustness of the measurement results against the mechanical vibrations and the capability of the model in reproducing the system behaviour. figure 5 shows the measurements of the absolute integral field as a function of the speed. each rotation is processed independently to evaluate the dispersion caused by random figure 4. rms values for the measured accelerations (averaging all the accelerometers) and for the estimated force magnitude and torque. the rms are computed over the full bandwidth (0-200 hz) of the accelerometers. figure 5. integral absolute field resulting from measurements (top) and simulations (bottom). the measured results are presented by the distribution of 20 rotations (blue line is the average). 30 40 50 60 70 80 90 100 110 120 speed [rpm] 0.1 0.15 0.2 0.25 0.3 a cc e le ra ti o n [ m /s 2 ] rms of the measured accelerations 30 40 50 60 70 80 90 100 110 120 speed [rpm] 0.1 0.15 0.2 0.25 f o rc e [ n ] 0 0.2 0.4 t o rq u e [ n m ] rms of the estimated inputs real measurement speed [rpm] 0.36588 0.365885 0.36589 0.365895 ∫ 𝐵 d 𝑧 [t m ] 30 40 50 60 70 80 90 100 110 120 speed [rpm] 0.365116 0.365118 0.36512 0.365122 ∫ 𝐵 d 𝑧 [t m ] simulated measurement 30 40 50 60 70 80 90 100 110 120 acta imeko | www.imeko.org june 2021 | volume 10 | number 2 | 35 disturbances and noise. the distribution shown is a probability density estimate based on a normal kernel function. the overall spread concerning all the measured values is 0.4 units (10-4 of the average field), while averaging the turns, the spread is 0.2 units. moreover, the trend of the mean values as a function of the speed is not monotonic, as is the rms of the accelerations. the reason is that other phenomena are speedrelated such as the signal-to-noise ratio for the acquisition system. therefore, it is plausible to identify a relative accuracy of ±0.1 units from the mechanical vibrations on the absolute integral field. the second plot in figure 5 shows the simulated measurement of the integral field. the difference between the measured and simulated values is on average 21 units, while the spread between measurements is 0.15 units. comparing the trend, these results confirm that the model correctly predicted the amount of disturbances from mechanical vibration in the magnetic measurement. the error in the absolute field can be explained by the missing high-order harmonics in the simulation. as the absolute field is constant with respect to vibration amplitude, this is not investigated further. figure 6 shows the measurement results of the integral multipoles. the multipoles coefficients are measured with both the main coil 𝑐1 and the compensation scheme 𝑐1 − 𝑐3. it is therefore possible to appreciate the performance enhancement of the compensation in the measurement of high-order multipoles (between 10th and 15th). nevertheless, the effects of vibrations on the multipoles measured by the main coil are at most 0.1 units for the speeds considered. thus, the rotating-coil system yields measurement results better than one unit, even without a compensation scheme. as far as the simulated measurement is concerned, the multipoles from the main coil are given. thus, it is possible to appreciate the matching of the behaviour for both the multipoles values and their trends as a function of the speed. in particular, the model is slightly overestimating the effect of the mechanical vibrations on highorder multipoles, but this is consistent with the model's expected behaviour. 
note also that the field harmonic of order 9 was not included in the pseudo-multipoles provided to the model.

5. conclusions

the proposed magneto-mechanical model, already applied to the design of a rotating-coil system, is now validated against the actual device in a real magnetic measurement campaign. the vibrations of the system during the measurement have been recorded, and the corresponding input was estimated. the 3d magnetic flux density of the magnet was provided, and the measurement procedure was simulated. the results agree with the trends and the magnitudes of the real measurement. if a closer matching is required, the model should be updated accordingly. this updating would require a proper mechanical measurement campaign to experimentally characterise all the system features (for instance, by modal analysis, with an array of accelerometers and a controlled input). nevertheless, the real-system magnetic measurements confirmed that the performance required in the design phase had been exceeded, with an overall error due to mechanical vibrations estimated at a few units of 10⁻⁵. therefore, we can conclude that the proposed procedures have led to the design, construction, and commissioning of a novel rotating-coil magnetometer system with unprecedented control over some mechanical properties typically critical for the magnetic measurement result.

figure 6. field multipoles (modulus) as measured with the compensated scheme $c_1 - c_3$ (top), by the main coil $c_1$ (middle), and as simulated for $c_1$ (bottom). for the measured quantities, each rotation is processed independently and pictured by a dot.

references

[1] s. russenschuck, field computation for accelerator magnets, wiley-vch, 2010. doi: 10.1002/9783527635467
[2] d. popovic renella, s. spasic, s. dimitrijevic, m. blagojevic, r. s. popovic, an overview of commercially available teslameters for applications in modern science and industry, acta imeko 6(1) (2017), pp. 43-49. doi: 10.21014/acta_imeko.v6i1.312
[3] j. billan, j. buckley, r. saban, p. sievers, l. walckiers, design and test of the benches for the magnetic measurement of the lhc dipoles, ieee transactions on magnetics 30(4) (1994), pp. 2658-2661. doi: 10.1109/20.305826
[4] n. r. brooks, l. bottura, j. g. perez, o. dunkel, l. walckiers, estimation of mechanical vibrations of the lhc fast magnetic measurement system, ieee transactions on applied superconductivity 18(2) (2008), pp. 1617-1620. doi: 10.1109/tasc.2008.921296
[5] g. tosin, j. f. citadini, e. conforti, long rotating coil system based on stretched tungsten wires for insertion device characterization, ieee transactions on instrumentation and measurement 57(10) (2008), pp. 2339-2347. doi: 10.1109/tim.2008.922093
[6] l. walckiers, magnetic measurement with coils and wires, cern accelerator school: magnets, bruges, belgium, 16-25 june 2009. online [accessed 14 june 2021] https://cds.cern.ch/record/1345967
[7] s. sorti, c. petrone, s. russenschuck, f. braghin, a magneto-mechanical model for rotating coil magnetometers, nuclear instruments and methods in physics research section a, 984 (2020), art. 164599. doi: 10.1016/j.nima.2020.164599
[8] s. sorti, c. petrone, s. russenschuck, f. braghin, a mechanical analysis of rotating-coil magnetometers, imeko tc4 2020,
virtual conference, palermo, italy, 14-16 september 2020. online [accessed 14 june 2021] https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-38.pdf
[9] g. genta, dynamics of rotating systems, springer, 2005. doi: 10.1007/0-387-28687-x
[10] l. de paula pinto mapa, f. de assis das neves, g. paulinelli guimarães, dynamic substructuring by the craig-bampton method applied to frames, journal of vibration engineering & technologies 9 (2021), pp. 257-266. doi: 10.1007/s42417-020-00223-4
[11] b. besselink, et al., a comparison of model reduction techniques from structural dynamics, numerical mathematics and systems and control, journal of sound and vibration 332(19) (2013), pp. 4403-4422. doi: 10.1016/j.jsv.2013.03.025
[12] s. russenschuck, g. caiafa, l. fiscarelli, m. liebsch, c. petrone, p. rogacki, challenges in extracting pseudo-multipoles from magnetic measurements, 13th int. computational accelerator physics conf. icap2018, key west, fl, usa, 20-24 october 2018, 6 pp. doi: 10.18429/jacow-icap2018-supag03
[13] j. c. berg, a. k. miller, force estimation via kalman filtering for wind turbine blade control, structural dynamics and renewable energy, volume 1, springer, new york, ny, 2011, pp. 135-143. doi: 10.1007/978-1-4419-9716-6_13
[14] r. s. oliveira, r. r. machado, h. lepikson, t. fröhlich, r. theska, a method for the dynamic calibration of torque transducers using angular speed steps, acta imeko 8(1) (2019), pp. 13-18. doi: 10.21014/acta_imeko.v8i1.654
[15] r. lopez, j. r. anglada, the new magnet system for the east area at cern, ieee transactions on applied superconductivity 30(4) (2020), pp. 1-5. doi: 10.1109/tasc.2020.2972834
[16] g. d'emilia, a. gaspari, e. natale, dynamic calibration uncertainty of three-axis low frequency accelerometers, acta imeko 4(4) (2015), pp. 75-81. doi: 10.21014/acta_imeko.v4i4.239

contrasting roles of measurement knowledge systems in confounding or creating sustainable change

acta imeko
issn: 2221-870x
december 2022, volume 11, number 4, 1-6

william p. fisher, jr.1
1 research institutes of sweden, gothenburg, sweden; bear center, university of california, berkeley, usa; living capital metrics llc, sausalito, california 94965, usa

section: research paper
keywords: modelling; measurement; complexity; sustainability
citation: william p. fisher, jr., contrasting roles of measurement knowledge systems in confounding or creating sustainable change, acta imeko, vol. 11, no. 4, article 7, december 2022, identifier: imeko-acta-11 (2022)-04-07
section editor: eric benoit, université savoie mont blanc, france
received july 9, 2022; in final form december 4, 2022; published december 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: william p. fisher, jr., e-mail: wpfisherjr@livingcapitalmetrics.com

abstract: sustainable change initiatives are often short-circuited by failures in modelling. unexamined assumptions about measurement and numbers push modelling into the background as a presupposition rarely articulated as an explicit operation. even when models of system dynamics are planned components of a sustainable change effort, the key role of measurement is typically overlooked. the crux of the matter concerns the distinction between numeric counts and measured quantities. mistaking the former for the latter confuses levels of complexity and fundamentally compromises communications. reconceiving measurement as modelling multilevel distributed decision processes offers new alternatives aligned with historically successful efforts in creating sustainable change. five conditions for successful sustainable change are contrasted from the perspectives of single-level vs multilevel modelling: vision, plans, skills, resources, and incentives. omitting any one of these from efforts at creating change results, respectively, in confusion, treadmills, anxiety, frustration, and resistance. the shortcomings of typically implemented single-level approaches to measurement result in the widespread experience of these negative consequences. results show that new potentials for creating sustainable change can be expected to follow from implementations of multilevel distributed decision processes that effectively counteract organizational amnesia by embedding new learning in an externally materialized knowledge infrastructure incorporating a shared cultural memory.

1. introduction

a little known but landmark article [1] defines five conditions for success in creating sustainable systems change (figure 1). different approaches to meeting these conditions can produce results that vary dramatically in their sustainability. of particular importance is the measurement knowledge infrastructure context in which the five conditions are deployed. systems change initiatives in organizations ranging from schools to hospitals to private firms typically make use of information on processes, structures, and outcomes obtained from tests, assessments, or surveys of students, patients, employees, customers, suppliers, and other key stakeholders. this information can be aggregated and reported in markedly different ways, with associated variation in its meaningfulness, utility, and consequences for success in creating sustainable change. the primary points of contrast between opposing poles of information quality can be summarized in terms of two approaches to measurement. on one end of this quality continuum are models lacking distinctions between discontinuous levels of complexity, and at the other end are models addressing these levels in ways that facilitate their practical management. the polar opposites come to the fore in oft-repeated but rarely heeded contrasts between statistical analyses of ordinal data and scientific models of interval units. these contrasts emphasize differences between unexamined assumptions about causal relationships and the meaningfulness of ordinal scores, on the one hand, and, on the other, intentional requirements of meaningful interval unit definitions [2]-[9]. where the former focuses on the concrete terms of objective facts, the latter instead focuses on the abstract and formal terms of objectively reproducible unit quantities. the statistical focus on ordinal scores manages what counts in relation to accountability reporting systems, assuming the whole is the sum of the parts. the scientific focus on interval quantities manages what adds up in relation to the overall mission, requiring the whole to be more than the sum of the parts. the end results of the statistical focus on ordinal scores for sustainable change involve a kind of myopia unable to focus beyond the limits of local circumstances to global concerns [10]. a systematic literature review of almost 300 articles on lean thinking practices in health care, for instance, found that "tool-myopic thinking tends to be a prevalent practice and often governs implementations" [11]. the tendency here is to not be able to see the forest for the trees. measurement is commonly defined as the assignment of numbers to observations according to a rule. decades of criticism as to the insufficiency of this definition [2]-[9] can be traced back to whitehead's 1925 fallacy of misplaced concreteness [12]. parallel demonstrations of superior definitions of measurement dating from the 1960s have had little impact on practice [13], [14]. measurement is usually, then, deemed achieved by means of ordinal, nonlinear score models irrevocably attached to the specific questions asked, with no experimental test of causal hypotheses and no uncertainty estimates. scientific approaches instead fit data to models of interval and linear measurements whose meanings are demonstrably independent of the specific questions asked, are contextualized by tested causal hypotheses and uncertainties, and are deployed in networks of quality-assured instruments distributed throughout multilevel networks of end users [15]-[17]. contrasts between these paradigmatically opposed approaches to quantification (see table 1) illuminate new potentials for creating sustainable change. when the differences between statistical and scientific measurement modelling approaches are grasped, today's organizational cultures can be seen as having counterproductively adapted to accepting as inevitable failure to meet the conditions for successful implementation of sustainable change. in other words, because of the widespread but unexamined assumption that low-quality measurement information is the best that can be expected, confusion, feeling caught on a repetitive treadmill, anxiety, frustration, and resistance are built into organizational cultures in often unnoticed but pervasive ways.
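the ordinal-versus-interval contrast drawn above can be made concrete with a toy computation. under a simple rasch-type model (used here purely as an illustration; the article itself presents no computations), the same raw score corresponds to very different abilities depending on the difficulty of the items answered:

```python
import numpy as np

def expected_score(ability, difficulties):
    """Expected number of correct answers under a dichotomous Rasch model."""
    return np.sum(1.0 / (1.0 + np.exp(-(ability - difficulties))))

easy = np.array([-2.0, -1.5, -1.0, -0.5, 0.0])   # easy-item difficulties (logits)
hard = np.array([0.5, 1.0, 1.5, 2.0, 2.5])       # hard-item difficulties (logits)

# find the ability at which the expected score reaches 3 of 5 on each test
abilities = np.linspace(-4, 4, 801)
for items, label in [(easy, "easy test"), (hard, "hard test")]:
    scores = np.array([expected_score(a, items) for a in abilities])
    a_star = abilities[np.argmin(np.abs(scores - 3.0))]
    print(f"{label}: raw score 3/5 corresponds to ability {a_star:+.2f} logits")
```

identical raw scores thus map to abilities several logits apart, which is the sense in which counts alone carry no interval meaning.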
such organizational amnesia [18], [19] can, however, be counteracted by scientific measurement modelling approaches that retain learning and incorporate it organically into scaffolding built into the external environment as a kind of cultural memory. research into learning embodied and locally situated in the institutional environment [20]-[22] points toward new goals organizations can realistically set for achieving sustainable change using scientific measurement models instead of statistical data models.

2. typical statistical modelling

2.1. vision

visualizing the future requires anticipation of highly abstract arrays of possible scenarios. the kinds of challenges that might be encountered must be conceived in conjunction with the kinds of responses needed to address them. all too often, however, vision statements focus so narrowly and myopically on local concrete circumstances [10], [11] that long-term planning, staffing, resourcing, and incentivizing are inadvertently sabotaged.

figure 1. conditions for sustainable change [1].

the usual vision informing reasons why measurements are made is to gather data for analysis and summarization in reports to be used in formulating policy directives. even if this vision is informed by design thinking [23], [24], and so incorporates the elements of empathy, definition, ideation, prototyping, and testing, the focus on scores (counts and/or percentages of correct answers, or of responses in a rating category) unnecessarily limits what can be imagined and accomplished to a small fraction of the available potential [25]. that is, the restricted orientation to responses to specific questions necessarily prevents the envisioning and realization of goals that would otherwise be achievable. this is because a vision limited to statistical treatments of ordinal scores does not meaningfully model or map the substantive features of the situation of interest. the map is not the territory. mapping proceeds from concrete observations of what is happening on the ground but must aspire to represent those concrete features by identifying coherent patterns at abstract and formal levels of higher-order complexity. because real-world circumstances are in constant flux, meaningful and useful maps cannot remain tied to any single given set of conditions. a general continuum defining a learning progression, developmental sequence, or healing trajectory must characterize the quantitative range of variation [26], [27]. an abstract perspective is the only way to adaptively and resiliently inform individuals and groups about where they are located relative to where they were, where they want to go, and what comes next, no matter what concrete circumstances they find themselves in. the narrow vision associated with statistically modelled ordinal scores mistakes mere numbers for quantities and pays a high price for doing so. comparability depends on all respondents answering the same questions, and standards are imagined as necessitating use of the same indicators. the resulting knowledge infrastructure is envisioned on the basis of information quality that cannot support generalized meaning, and so vision is obscured, and confusion results.

2.2. planning

applications of the information obtained from scores, ratings, and percentage statistics usually focus on generalizations that assume all numeric differences of a given magnitude mean the same thing, though this assumption is usually not, and likely cannot be, substantiated. the inferential problems following from this unwarranted assumption of uniform meaning are then further compounded by the ways the scores are interpreted and applied. with no information typically made available on the uncertainty ranges or confidence intervals associated with the scores, there is no way of telling if and when numeric differences are real and reproducible, or are simply random noise. in addition, because questions are each treated separately, as domains unto themselves, no information on a learning progression, developmental sequence, healing trajectory, or other quality improvement continuum is made available. improvement efforts then can do nothing but focus directly on areas in which failure (low ratings or incorrect answers) is experienced, instead of first ensuring that prerequisite foundations for sustainable change have been put in place. the result of acting on this kind of low-quality statistical information is then to continue repeating the same pattern of efforts in precisely the endless treadmill cycle one wanted to avoid.

2.3. skills

the skills required in commonly adopted statistical approaches to measurement focus on knowledge of the relevant content and processes involved in assessment and survey development, social interactions for administering those tools, operational awareness for policy formation, and data input, aggregation, analysis, and reporting. analyses may be as simple as counting correct answers or responses within rating categories and computing percentages, or as complex as any advanced statistical method may be. the focus on data analytic skills unjustifiably presumes, without evidence, that results will retain their meaning across levels of complexity. but the statistical skills employed in usual approaches to measurement mistreat concrete scores as abstract quantities explained by formal theory, when they are not. that is, everyone is well aware that it is impossible to tell from my count of ten rocks whether i have more or less rock than someone with two rocks. it is also common knowledge that correct responses to ten questions cannot be understood as indicating more ability or success than correct responses to two questions, since the two groups of questions asked may vary markedly in difficulty. statistical modelling proceeds by focusing on these merely numeric data anyway, mistakenly assuming that nothing better can be done. failure to bring the needed skills to bear can only then result in anxiety, since the information produced is readily seen to be disconnected from the circumstances in which it is supposed to be applied.

table 1. statistical vs scientific modelling paradigm contrasts vis-à-vis sustainable change conditions.
vision. statistical modelling paradigm: centralized gathering and analysis of ordinal instrument-dependent data for policy formation. scientific modelling paradigm: distributed network of instruments traceable to common units informs and aligns end user decisions and behaviours.
skills. statistical: item writing and administration, response scoring, statistical summarization, reporting. scientific: construct definition and modelling, instrument calibration, item banking, adaptive end user application.
incentives. statistical: rewards for perceived goal attainment. scientific: shared success in general improvements to organizational viability.
resources. statistical: investments limited as not accountable for or expected to produce significant returns. scientific: investments proportional to magnitudes of returns from improved efficiencies and market share.
plan. statistical: interprets ordinal scores as interval and all numeric differences as meaningful; no context for improvement provided. scientific: scales interval measures with individual uncertainty and data quality estimates; quantitative continuum qualitatively annotated to guide change efforts.
implications for managing what is measured. statistical: management focuses on moving numbers that matter within a restricted domain of limited observations, sometimes at the expense of the mission. scientific: management focuses adaptively on relevant tasks representing the mission, skipping tasks irrelevant to the challenges of the moment.
implications for communication. statistical: ordinal scores interpreted as interval, tied to a limited number of particular items, result in obscure and difficult comparisons. scientific: interval measures interpreted relative to an entire bank of calibrated items open up clear and transparent opportunities for learning.

2.4. resources

resources invested in the statistical modelling approach to creating sustainable change are typically focused on minimizing expenditures in producing a one-time snapshot of the state of things used for setting policy going forward. no specific forms of returns are expected, so the investments made are not usually accountable except as expenses, which are kept to the lowest possible levels. the information produced is typically used only as a conspicuously displayed expression of the fact that attention is being focused in some way on matters of concern to an interested party. but with vision, skill sets, and plans limited to low-quality ordinal scores whose meanings are tied to the particular questions asked, the structural limits imposed on potential returns mean that only limited investments of resources can be justified, and the usual result is a frustrating inability to advance.

2.5. incentives

in the context of the usual approach to statistical data modelling, incentives are usually cast in relation to achieving results defined in terms of counts, scores, or percentages. student proficiency scores or patient/customer/employee satisfaction or performance ratings are interpreted as evidence of achievements that are then rewarded by recognition, bonuses, promotions, etc. but because the data are tied to responses to specific questions, and because they are moreover ordinal, nonlinear, and not mapped to variation in meaningful amounts of a measured construct, incentive systems like this are easily gamed. even without the advantages of a perspective informed by scientific measurement, this general management problem is recognized as leading to confusion, conflict, inefficiency, and a lack of focus [28].
in education, for instance, having students memorize tasks known to be included in the items on a test can inflate scores without actually improving proficiency. in more extreme cases, teachers and principals have conspired to change student test scores. similarly, customer satisfaction surveys are often accompanied by requests for ratings at a specific level or higher. the explicit goal is to create an appearance of success that can be rewarded in a public way that conveys an atmosphere of positive progress and overcomes resistance, even when the substantive failure to change anything is readily apparent to everyone involved. because the vision, skills, and incentives are all focused on specific and discrete concrete issues that can never adequately represent the abstract and formal levels of complexity, unfair biases serving some agendas and undermining others will likely promote resistance of some form or another as the usual consequence.

3. innovative scientific modelling

3.1. vision

an alternative vision as to why measurements are made focuses on modelling a decision process, calibrating instruments informing that process, distributing those instruments to front line decision makers, and gathering data for periodic analysis and summarization in reports used for quality improvement and accountability. this vision makes clear provisions for creating knowledge systems offering practical value beyond periodically produced reports. when the demands of effective knowledge infrastructures [21], [25], [29], [30] are met, data are reported at each level of complexity relevant to the demands of end users. front line managers like teachers, clinicians, and others engaged in individualized care processes need denotative facts contextualized within learning progressions, developmental sequences, disease natural histories, etc. practice management requires metalinguistic statistical summaries of interval logits reported to facilitate communication and comparability over time and space, within and across individuals, classrooms, clinics, schools, hospitals, etc. accountability requires metacommunicative theoretical explanatory power that justifies decision processes at the metalinguistic and denotative levels. to the extent this is accomplished, one might reasonably expect less confusion to be produced than is commonly associated with the statistical approach.

3.2. planning

applications of the information obtained from scientifically modelled measurements require experimental tests substantiating the requirement that numeric differences of a given magnitude mean the same thing, within the range of estimated uncertainty. measurements are interpreted and applied in relation to uncertainty ranges or confidence intervals, which makes it possible to tell if and when numeric differences are real and reproducible, or are simply random noise. in addition, because questions are scaled together to delineate a learning progression, developmental sequence, or quality improvement trajectory, measurements are interpreted substantively in relation to the amount of the construct represented at each scale level. improvement efforts then can focus attention on the easiest tasks not yet accomplished, so that a foundation for sustainable change is put in place by successes experienced at lower levels of difficulty. the result is that end users' behaviours and decisions are coordinated and aligned by their shared responses to the same information.
when end users can, in addition, easily learn from one another by sharing knowledge, the probability of breaking free of treadmill cycles is increased.

3.3. skills

the skills required for implementing scientific models of decision processes are considerably more technically and socially sophisticated than the skills associated with the usual statistical data modelling approach. all of the latter's skill sets are needed, as well as mastery of advanced conceptual tools involving construct mapping, assessment/survey item development, response scoring, mathematical model formulation, instrument calibration; measure, uncertainty, and data quality interpretation; knowledge system development, administrative and interpretation guidelines, user training, etc. these skills focus on producing knowledge that retains its meaning and properties across levels of complexity, suggesting that less anxiety may result than has been the case with the statistical approach.

3.4. resources

with experience, resources invested in the scientific modelling approach to creating sustainable change can be gauged for maximizing returns from ongoing improvements in efficiency and outcomes. as expectations concerning returns take shape, lessons are learned as to how the investments can be made accountable. with a vision, skill sets, and plans aimed at maximizing the value of high-quality interval measurements whose meanings are independent of the particular questions asked, investments proportionate to the expected returns can be justified, and the business plan can be scaled up as the market expands.

3.5. incentives

in the context of scientifically modelling an overall decision process, incentives are shaped by involving everyone as participants in the creation of enhanced processes and outcomes. the overarching viability of the organization is placed front and centre. incentives reward generalizable innovations that improve quality. given common languages of comparison, everyone has the information they need to take responsibility for the outcomes in their care. inputs that do not positively impact qualitatively and/or quantitatively measurable affective, cognitive, behavioural, etc. outcomes can be evaluated for removal. in the traditional statistical modelling approach, the maxim "you manage what you measure" becomes a cynical motto conveying how management can be distracted into superficial issues only peripherally related to the main operational focus of the organization. in the scientific modelling context, though, managing what is measured is akin to turning a wrench fitted on the head of a bolt that needs to be tightened: the tool is fit for purpose. the distribution of instruments calibrated to a common metric informs decision processes and data sharing that everyone can learn from quickly and easily. incentives overcome resistance, then, by illuminating clear paths to forward advances, increasing the pride everyone takes in their work.

4. discussion

scientific modelling is superior to statistical modelling in the context of promoting sustainable change because, first, it provides a vision that encompasses the entire populations both of potential challenges that may emerge and of potential participants (employees, students, teachers, clinicians, suppliers, managers, etc.) who may engage with those challenges.
this capacity follows from the focus of scientific models on the abstract construct represented in measurements, as opposed to the concrete data and specific questions focused on by statistical models. the usual statistical approach accepts ratings and scores as meaningful, even though their significance depends on the particular questions that were asked. so when challenges not represented in the questions and associated data emerge, those challenges are likely to be ignored, discounted, or distracting in ways that lead to confusion. scientific models, in contrast, inform clarity by demanding a theoretical account supported by data and expressed in comparable metrics with known uncertainties.

second, scientific modelling dispels anxiety by bringing advanced expertise to bear on problem definition, construct mapping, instrument calibration, report generation, measure interpretation, and quality improvement applications. though statistical modelling skill sets may, of course, be highly developed, many change initiatives are approached with little more experience than familiarity with spreadsheets and word processors. though these rudimentary methods are commonly used, the importance of communicating meaningful results in well-defined terms will likely continue to exert an inexorable demand for higher quality knowledge.

third, because scientific modelling supports new degrees of rigorous comparability over time, new expectations for accountability and accounting can be expected to alter the quality and quantity of resistance-countering incentives that can be offered. proportionate returns on investment will follow from fair and equitable measurements that are demonstrably reproducible and relevant to the challenges to innovation being faced. these kinds of returns should become the goal of change efforts, instead of incentive systems that can be gamed, creating the appearance of innovation by focusing on easily counted signal events, with the demoralizing atmosphere that goes with widely perceived unfair advantages.

fourth, in the same vein, because the magnitudes of impacts are commonly estimated in the confusing terms of statistical scores, the resources brought to bear in change efforts are commonly insufficient to effect significant results, leading to continued frustration. the capacity to generalize and scale across contexts by means of a combination of explanatory theory, experimental evidence, and distributed instrumentation, however, leads to the clear definition of opportunities for investment likely to pay handsome returns.

fifth, where statistical improvement planning is typically guided by nothing more than the areas of failure or low ratings, the scientific approach maps the improvement trajectory. this is done in a way that more closely informs day to day activities by indicating where a process stands relative to its goal, and by showing what comes next in a logical sequence. instead of simply taking on the most difficult challenges with no attention to preparatory factors, the scientific approach attends to establishing baseline structures, processes, and outcomes in an orderly way. differences in circumstance across situations can be accommodated via adaptive selection of relevant tasks and challenges, without compromising overall comparability.
this results in visible documentation of small gains as progress toward the goal is made, as opposed to the feeling of being on a treadmill that results from not being oriented on a clear path toward defined goals.

5. conclusion

successful sustainable change initiatives depend on abilities to flexibly and quickly store and retrieve knowledge. centralized repositories of low-quality information accessed infrequently are likely to result in muddled vision, inconsequential skill sets, ineffective incentives, insufficient resources, and incomplete plans. distributed networks of instruments embodying high quality information, in contrast, offer the potential for counteracting the confusion, anxiety, resistance, frustration, and treadmills too commonly taken for granted.

references
[1] t. p. knoster, r. a. villa, j. s. thousand, a framework for thinking about systems change, in r. a. villa, j. s. thousand (eds.), restructuring for caring and effective education, brookes, baltimore, 2000, pp. 93-128.
[2] d. andrich, distinctions between assumptions and requirements in measurement in the social sciences, in j. a. keats, r. taft, r. a. heath, s. h. lovibond (eds.), mathematical and theoretical systems, elsevier science publishers, 1989.
[3] j. cohen, the earth is round (p < 0.05), american psychologist, 49 (1994), pp. 997-1003. online [accessed 19 december 2022] https://psycnet.apa.org/record/1995-12080-001
[4] o. d. duncan, m. stenbeck, panels and cohorts, in c. c. clogg (ed.), sociological methodology 1988, american sociological association, new york, 1988, pp. 1-35.
[5] w. p. fisher, jr., statistics and measurement: clarifying the differences, rasch measurement transactions, 23 (2010), pp. 1229-1230. online [accessed 19 december 2022] http://www.rasch.org/rmt/rmt234.pdf
[6] p. e. meehl, theory-testing in psychology and physics: a methodological paradox, philosophy of science, 34 (1967), pp. 103-115. doi: 10.1086/288135
[7] j. michell, measurement scales and statistics: a clash of paradigms, psychological bulletin, 100 (1986), pp. 398-407. doi: 10.1037/0033-2909.100.3.398
[8] d. rogosa, casual [sic] models do not support scientific conclusions: a comment in support of freedman, journal of educational statistics, 12 (1987), pp. 185-195. doi: 10.3102/10769986012002185
[9] m. wilson, seeking a balance between the statistical and scientific elements in psychometrics, psychometrika, 78 (2013), pp. 211-236. doi: 10.1007/s11336-013-9327-3
[10] t. hopper, stop accounting myopia: think globally: a polemic, journal of accounting & organizational change, 15 (2019), pp. 87-99. doi: 10.1108/jaoc-12-2017-0115
[11] a. akmal, r. greatbanks, j. foote, lean thinking in healthcare, health policy, 124 (2020), pp. 615-627. doi: 10.1016/j.healthpol.2020.04.008
[12] a. n. whitehead, science and the modern world, macmillan, new york, 1925.
[13] r. d. luce, j. w. tukey, simultaneous conjoint measurement, journal of mathematical psychology, 1 (1964), pp. 1-27. doi: 10.1016/0022-2496(64)90015-x
[14] g. rasch, probabilistic models, danmarks paedogogiske institut, copenhagen, 1960.
[15] w. p. fisher, jr., invariance and traceability for measures of human, social, and natural capital, measurement, 42 (2009), pp. 1278-1287. doi: 10.1016/j.measurement.2009.03.014
[16] l. pendrill, quality assured measurement, springer, cham, 2019, isbn 978-3-030-28695-8.
[17] l. mari, m. wilson, a. maul, measurement across the sciences, springer, cham, 2021, isbn 978-3-030-65558-7.
[18] r. othman, n. a. hashim, typologizing organizational amnesia, the learning organization, 11 (2004), pp. 273-284. doi: 10.1108/09696470410533021
[19] c. pollitt, institutional amnesia, prometheus, 18 (2000), pp. 5-16. doi: 10.1080/08109020050000627
[20] e. hutchins, the cultural ecosystem of human cognition, philosophical psychology, 27 (2014), pp. 34-49. doi: 10.1080/09515089.2013.830548
[21] s. l. star, k. ruhleder, steps toward an ecology of infrastructure, information systems research, 7 (1996), pp. 111-134. doi: 10.1287/isre.7.1.111
[22] j. sutton, c. b. harris, p. g. keil, a. j. barnier, the psychology of memory, extended cognition, and socially distributed remembering, phenomenology and the cognitive sciences, 9 (2010), pp. 521-560. doi: 10.1007/s11097-010-9182-y
[23] h. plattner, c. meinel, l. leifer (eds.), design thinking research: measuring performance in context, springer science & business media, cham, 2012.
[24] a. royalty, b. roth, mapping and measuring applications of design thinking in organizations, in design thinking research, springer international publishing, cham, 2016, pp. 35-47, isbn 978-3-030-76324-4.
[25] w. p. fisher, jr., e. p.-t. oon, s. benson, rethinking the role of educational assessment in classroom communities, educational design research, 5 (2021), pp. 1-33. doi: 10.15460/eder.5.1.1537
[26] w. p. fisher, jr., imagining education tailored to assessment as, for, and of learning, assessment and learning, 2 (2013), pp. 6-22. online [accessed 19 december 2022] https://www.researchgate.net/profile/william-fisher-jr/publication/259286688_imagining_education_tailored_to_assessment_as_for_and_of_learning_theory_standards_and_quality_improvement/links/5df56a2592851c83647e7860/imagining-education-tailored-to-assessment-as-for-and-of-learning-theory-standards-and-quality-improvement.pdf
[27] p. black, m. wilson, s. yao, road maps for learning, measurement: interdisciplinary research and perspectives, 9 (2011), pp. 1-52. doi: 10.1080/15366367.2011.591654
[28] m. c. jensen, value maximization, stakeholder theory, and the corporate objective function, journal of applied corporate finance, 22 (2010), pp. 32-42. doi: 10.1111/j.1745-6622.2010.00259.x
[29] w. p. fisher, jr., contextualizing sustainable development metric standards, sustainability, 12 (2020), pp. 1-22. doi: 10.3390/su12229661
[30] w. p. fisher, jr., bateson and wright on number and quantity, symmetry, 13 (2021) 1415. doi: 10.3390/sym13081415
interlaboratory comparison results of vibration transducers between tübi̇tak ume and roketsan

acta imeko issn: 2221-870x december 2020, volume 9, number 5, 401 - 406

s. ön aktan1, e. bilgiç2, i̇. ahmet yüksel, k. berk sönmez, t. kutay veziroğlu, t. torun
1 department of calibration laboratory, roketsan missiles industries inc., ankara, turkey, son@roketsan.com.tr
2 tübi̇tak ulusal metroloji enstitüsü (ume), gebze, kocaeli, turkey, eyup.bilgic@tubitak.gov.tr

abstract: this paper presents an interlaboratory comparison in the field of vibration metrology, which can be used as a powerful method of checking the validity of results and measurement capabilities according to iso 17025 [1].
in this standard it is advised to participate in an interlaboratory comparison or a proficiency test in order to prove the measurement capabilities of calibration providers. in this study, the aim is to statistically evaluate the measurement results in the scope of sinusoidal acceleration between tübi̇tak ume (national metrology institute of turkey) and roketsan, as per the related international standards. after the statistical evaluation, for unsatisfactory results, root cause analyses and corrections to improve measurement quality are presented and conceptually explained.

keywords: vibration transducer; vibration comparison; vibration metrology; vibration uncertainty

1. introduction

when humans began to manufacture machines, and especially when motors were used to drive them, engineers encountered vibration isolation and reduction techniques [2]. conversely, vibration can be generated intentionally for testing purposes, to understand the functional and physical response and resistance of any system in vibration environments. for the above protective or testing purposes, acceleration sensors are used to measure acceleration, vibration and shock values; they are among the most important components used in the navigation systems of missiles, aircraft, ships and submarines. as a result of significant developments in industries such as automotive, defence, aviation and space, the need for accurate measurement has increased, and over the years there have been several improvements in vibration measurement methods [3], [4], [5], [6] with many innovations. accelerometers can be used in varied applications, and the most commonly used types in the market are piezoelectric and capacitive accelerometers. piezoelectric accelerometers have more widespread usage due to their advantages, such as a large measuring frequency range, no need for a power supply, reliability, robust design, and long-term stability. a typical response curve of a piezoelectric accelerometer is given in figure 1 [7].

figure 1. response curve of an accelerometer.

the limits of the usable range are both mechanical and electrical, including frequency (f), acceleration (a), velocity (v), and displacement (d), and also the force of the vibration generating system. the displacement amplitude for a given acceleration is inversely proportional to the square of the frequency:

$d = \frac{v}{2 \pi f} = \frac{a}{(2 \pi f)^2}$ . (1)

while displacement measurements require attention at low frequencies, it is necessary to pay attention to the acceleration level at high frequencies [8], [9]. when selecting the vibration transducer for a specific application, it is essential to pay attention to parameters such as the number of axes, measurement range, overload or damage limits, mass, sensitivity, impedance and frequency range. for reliable usage of accelerometers, a calibration plan shall be scheduled periodically, producing right test results to assure the process and provide metrological traceability. accurate equipment, metrological traceability, trained personnel, well defined methods, documentation, uncertainty evaluation and internal verifications can play an important role in providing accurate test results; however, it is also necessary to prove that the laboratory can actually produce results externally by going through comparison tests [10].
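to make the frequency dependence in eq. (1) concrete, the following is a minimal sketch evaluating the displacement amplitude at the two ends of the calibration band; the 100 m/s² level is the recommended acceleration quoted later in this paper:

```python
import math

def displacement_amplitude(a, f):
    # eq. (1): d = a / (2*pi*f)^2, displacement amplitude for an
    # acceleration amplitude a (m/s^2) at frequency f (hz)
    return a / (2.0 * math.pi * f) ** 2

print(displacement_amplitude(100.0, 10.0))    # ~2.53e-2 m (25.3 mm) at 10 hz
print(displacement_amplitude(100.0, 5000.0))  # ~1.0e-7 m (0.1 um) at 5 khz
```

at 10 hz the 100 m/s² level would require about 25 mm of stroke, far beyond the 8 mm displacement limit quoted in table 1 below, which is exactly why displacement deserves attention at low frequencies.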
interlaboratory comparison (ilc) is the organisation, implementation and evaluation of tests or measurements by two or more laboratories on the same or a similar item according to predetermined conditions [11]. interlaboratory comparison tests are planned according to iso 17043 [11], and the performances of the participating laboratories, expressed as $E_n$ values or zeta scores $\zeta$, are evaluated according to iso 13528 [12].

2. back-to-back calibration method and application

the purpose of the vibration transducer comparison is to compare the sensitivity of an accelerometer using the secondary level (back-to-back) method of iso 16063-21, vibration calibration by a reference standard [13]. calibration of an accelerometer consists of determining its sensitivity values at various frequencies. the reference (double-ended transducer) and the device under test (dut) are firmly coupled on a shaker so that both are exposed to the same mechanical motion. back-to-back calibration requires a shaker (vibration exciter), a power amplifier, a signal generator and an fft frequency meter. currently, automated systems are also available in the market, with the advantages of easy operation, user friendliness and short calibration time. the basic configuration of roketsan's vibration transducer calibration system is illustrated in figure 2.

figure 2. roketsan vibration transducer calibration system.

as shown in table 1, the system used is an automatic vibration transducer calibration system that operates between 10 hz and 5000 hz; the accuracy values are also listed. a brüel & kjaer 8305 s is used as the reference accelerometer.

table 1. specifications.
accuracy: (10 to 2000) hz: 0.7 %; (>2 to 5) khz: 1.1 %
acceleration, max: 110 m/s²
max. transducer weight: 60 g
force: 45 n
max. displacement: 8 mm

the environmental conditions of the roketsan vibration laboratory are (23 ± 3) °c for temperature and maximum 75 % rh for relative humidity. according to iso 16063-21, the frequency and acceleration values are given below:

1. frequencies (hz): frequencies are selected from the one-third-octave frequency series. in case exact frequency values are required, they are calculated for the 1/3 octave bands [14] with the formula

$f = f_r \cdot 10^{n/10}$ , $f_r = 1000$ hz , (2)

where n = -20, -19, ..., 7 for 10 hz to 5 khz.

2. acceleration (m/s²): 1, 2, 5, 10 or their multiples of ten. 100 m/s² is recommended.

the main principle of back-to-back calibration is the direct comparison of indicated sensitivity values between the reference transducer and the dut transducer. the vibration applied to each transducer is identical, and if the sensitivity of the reference transducer is known, the sensitivity of the dut can be obtained using the following equation:

$S_{DUT} = S_{REF} \cdot \frac{V_{DUT}}{V_{REF}}$ (3)

$S_{DUT}$: sensitivity of the device under test
$S_{REF}$: sensitivity of the reference accelerometer
$V_{DUT}$: electrical output of the device under test
$V_{REF}$: electrical output of the reference accelerometer

even though the above approach is suitable for a single frequency value, it may take excessive time to perform this operation at all frequency values. hence, dual channel fft analysis is used to monitor fast frequency response functions in amplitude and phase angle in a shorter time period.
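a minimal sketch of eqs. (2) and (3); the voltage readings in the last lines are hypothetical values chosen only to illustrate the ratio, not measurements from this comparison:

```python
def third_octave_frequencies(f_ref=1000.0, n_values=range(-20, 8)):
    # eq. (2): f = f_ref * 10**(n/10), n = -20 ... 7 covers 10 hz to 5 khz
    return [f_ref * 10 ** (n / 10.0) for n in n_values]

def dut_sensitivity(s_ref, v_dut, v_ref):
    # eq. (3): back-to-back comparison of reference and dut outputs
    return s_ref * v_dut / v_ref

freqs = third_octave_frequencies()
print(round(freqs[0], 1), round(freqs[-1], 1))  # 10.0 and 5011.9 (nominally 5 khz)

# hypothetical example: 0.1307 pc/(m/s^2) reference sensitivity, voltage ratio 7.65
print(dut_sensitivity(0.1307, 76.5e-3, 10.0e-3))  # ~1.0 pc/(m/s^2)
```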
3. uncertainty approach

as can be observed from the uncertainty budget given in table 2, one of the largest uncertainty contributions comes from the reference transducer set. furthermore, the voltage ratio measurement affects the measurement results. the influences on the voltage ratio from temperature variation, gravitational acceleration, distortion, transverse acceleration and non-linearity effects shall be added to the uncertainty budget. the following contributions are taken into account while calculating the measurement uncertainty budget in the calibration of accelerometers.

table 2. uncertainty budget. each row lists: quantity $X_i$; definition; standard uncertainty $u(x_i)$; probability distribution; sensitivity coefficient $c_i$; uncertainty contribution $u_i(y)$.
u(s_ref); calibration uncertainty of the reference transducer set; 0.5; normal; 1; 0.250
u(s_ref,s); drift of the reference transducer set and amplifier; 0.08; rectangular; 1; 0.046
u(s_a,kal); calibration uncertainty of the conditioning amplifier; 0.09; rectangular; -1; 0.045
u(v_r); voltage ratio; 0.08; rectangular; 1; 0.046
u(v_r,t); temperature influence on voltage ratio measurement; 0.2; rectangular; 1; 0.173
u(v_r,s); voltage ratio measurement from maximum difference in reference level; 0.2; rectangular; 1; 0.115
u(v_r,n); voltage ratio measurement from mounting parameters; 0.3; rectangular; 1; 0.173
u(v_r,d); voltage ratio measurement from acceleration distortion; 0.0024; rectangular; 1; 0.001
u(v_r,v); voltage ratio measurement from transverse acceleration; 1.2; special; 1; 0.283
u(v_r,e); voltage ratio measurement from base strain; 0.05; rectangular; 1; 0.029
u(v_r,r); voltage ratio measurement from relative motion; 0.05; rectangular; 1; 0.029
u(v_r,l); voltage ratio measurement from non-linearity of the transducer; 0.03; rectangular; 1; 0.017
u(v_r,i); voltage ratio measurement from non-linearity of the amplifiers; 0.03; rectangular; 1; 0.017
u(v_r,g); voltage ratio measurement from gravity; 0.03; rectangular; 1; 0.017
u(v_r,b); voltage ratio measurement from the magnetic field effect of the vibration exciter; 0.03; rectangular; 1; 0.017
u(v_r,e); voltage ratio measurement from other environmental effects; 0.03; rectangular; 1; 0.017
u(v_r,r); voltage ratio measurement from residual effects; 0.03; rectangular; 1; 0.017
u(v_r,re); repeatability; 0.17; normal; 1; 0.098
combined uncertainty of measurement $u_t$: 0.48
expanded uncertainty of measurement $U$, k = 2: 0.97

the reference transducer and, if present, its conditioner should be calibrated as a set by a primary level laboratory. the measurement uncertainty $u(S_{REF})$ stated in the calibration certificate of the reference transducer shall be added to the uncertainty budget divided by the coverage factor (95 % confidence level, k = 2). $u(V_{R,v})$ is the uncertainty contribution due to transverse accelerations. the transverse vibration $a_T$ is a maximum of 10 % for the vibration exciter. the transverse sensitivity is a maximum of 2 % for the reference transducer, $S_{v,REF}$, and a maximum of 5 % for the device under test, $S_{v,DUT}$. using the formula below, this uncertainty can be evaluated as 1.2 %:

$\sigma = \sqrt{(S_{v,DUT}^2 + S_{v,REF}^2)\, a_T^2}$ (4)

repeatability, $u(V_{R,REP})$, is the experimental standard deviation of the arithmetic mean; it is an inevitable contribution to an uncertainty budget. the model function is given below:

$S_{DUT} = S_{REF} \cdot \frac{S_{A1}}{S_{A2}} \cdot \frac{V_{DUT}}{V_{REF}} \cdot I_1 \cdot I_2 \cdots I_M$ (5)

$I_i = \frac{1 - e_{2,i}}{1 - e_{1,i}}$ (6)

where $e_i$ indicates the i-th error contribution. the uncertainty budget for the vibration transducer given in table 2 holds for 10 hz to 1000 hz. the combined uncertainty $u_t$ and the expanded uncertainty $U$ (k = 2, 95 % confidence level) can be calculated with formulas (7) and (8) according to ea-4/02 [15]:

$u_t = \sqrt{u^2(S_{REF}) + u^2(S_{REF,S}) + u_{V_R}^2 + \ldots}$ (7)

$U = 2 \cdot u_t$ (8)
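as a numerical cross-check, the root-sum-of-squares combination of eqs. (7) and (8) can be reproduced in a few lines from the contributions transcribed out of table 2:

```python
import math

# uncertainty contributions u_i(y) in %, transcribed from table 2
contributions = [
    0.250,   # u(s_ref)   reference transducer set
    0.046,   # u(s_ref,s) drift of reference set and amplifier
    0.045,   # u(s_a,kal) conditioning amplifier
    0.046,   # u(v_r)     voltage ratio
    0.173,   # u(v_r,t)   temperature influence
    0.115,   # u(v_r,s)   reference level difference
    0.173,   # u(v_r,n)   mounting parameters
    0.001,   # u(v_r,d)   acceleration distortion
    0.283,   # u(v_r,v)   transverse acceleration
    0.029,   # u(v_r,e)   base strain
    0.029,   # u(v_r,r)   relative motion
    0.017,   # u(v_r,l)   non-linearity of transducer
    0.017,   # u(v_r,i)   non-linearity of amplifiers
    0.017,   # u(v_r,g)   gravity
    0.017,   # u(v_r,b)   magnetic field of the exciter
    0.017,   # u(v_r,e)   other environmental effects
    0.017,   # u(v_r,r)   residual effects
    0.098,   # u(v_r,re)  repeatability
]

u_t = math.sqrt(sum(u ** 2 for u in contributions))  # eq. (7)
print(round(u_t, 2), round(2 * u_t, 2))              # eq. (8): 0.48 and 0.97
```

the printed values match the combined (0.48 %) and expanded (0.97 %) uncertainties stated at the bottom of table 2.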
4. comparison results

the technical protocol [16] specifies in detail the aim of the comparison, the transfer standard used, the time schedule, the measurement conditions and other subjects. the comparison over the frequency range stated in the technical protocol has been carried out between tübi̇tak ume and roketsan. the transfer standard used is a b&k 4371. the pilot laboratory is tübi̇tak ume, which is the primary laboratory in turkey. since roketsan performs the related measurements with lower uncertainty than any other secondary level calibration provider, an interlaboratory comparison with tübi̇tak ume as the primary level laboratory was necessary to assess the reliability of roketsan's accuracy level. figure 3 presents the calibration results of sensitivity obtained for the transfer standard; the values are listed in table 3.

figure 3. measurement results from tübi̇tak ume and roketsan.

table 3. measurement results from tübi̇tak ume and roketsan.
frequency (hz) | ume reference value (pc/(m/s²)) | roketsan (pc/(m/s²)) | $E_n$
10 | 1.0046 | 1.0063 | 0.13
12.5 | 1.0042 | 1.0110 | 0.51
16 | 1.0043 | 1.0087 | 0.33
20 | 1.0029 | 1.0080 | 0.38
25 | 1.0016 | 1.0063 | 0.35
31.5 | 1.0007 | 1.0070 | 0.47
40 | 0.9989 | 1.0060 | 0.54
50 | 0.9981 | 1.0060 | 0.60
63 | 0.9956 | 1.0040 | 0.63
80 | 0.9935 | 1.0010 | 0.57
100 | 0.9919 | 1.0010 | 0.69
125 | 0.9905 | 0.9993 | 0.67
160 | 0.9893 | 1.0010 | 0.89
200 | 0.9872 | 0.9957 | 0.65
250 | 0.9847 | 1.0010 | 1.24
315 | 0.9818 | 0.9923 | 0.80
400 | 0.9813 | 0.9918 | 0.80
500 | 0.9801 | 0.9905 | 0.80
630 | 0.9800 | 0.9883 | 0.64
800 | 0.9776 | 0.9873 | 0.75
1000 | 0.9771 | 0.9854 | 0.53
1250 | 0.9746 | 0.9834 | 0.53
1600 | 0.9759 | 0.9826 | 0.40
2000 | 0.9757 | 0.9802 | 0.27
2500 | 0.9729 | 0.9814 | 0.51
3150 | 0.9762 | 0.9849 | 0.52
4000 | 0.9800 | 0.9842 | 0.25
5000 | 0.9800 | 0.9814 | 0.08

the uncertainty values for the laboratories are given in table 4.

table 4. uncertainty values.
ume (reference laboratory): f ≤ 1250 hz: 0.9 %; 1250 hz < f ≤ 5000 hz: 1.1 %
roketsan: f ≤ 1000 hz: 0.97 %; 1000 hz < f ≤ 5000 hz: 1.3 %

the performances of the measurements can be evaluated as described in the iso 13528 standard with equation (9). the most common statistical approach for assessing the capability of a laboratory is calculating $E_n$ values, which shall be below or equal to one:

$E_n = \frac{X_{rok} - X_{ume}}{\sqrt{U_{rok}^2 + U_{ume}^2}}$ (9)

$X_{rok}$: the mean value of roketsan
$X_{ume}$: the mean value of the reference laboratory (ume)
$U_{ume}$: the measurement uncertainty of the reference laboratory (ume)
$U_{rok}$: the measurement uncertainty of roketsan

the calculated $E_n$ value at 250 hz is 1.24, while the results at all other frequencies are satisfactory, $|E_n| < 1$, as shown in table 3. after receiving an unsatisfactory result only at the 250 hz frequency, root cause analysis and corrective actions were carried out to improve the measurement system of roketsan inc.
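eq. (9) is easy to reproduce for any row of table 3; this minimal sketch assumes the percentage uncertainties of table 4 are relative to each laboratory's own measured value:

```python
import math

def en_value(x_rok, x_ume, u_rok_rel, u_ume_rel):
    # eq. (9), with expanded uncertainties taken relative to each
    # laboratory's own measured value (assumption, see lead-in)
    u_rok = u_rok_rel * x_rok
    u_ume = u_ume_rel * x_ume
    return (x_rok - x_ume) / math.sqrt(u_rok ** 2 + u_ume ** 2)

# 250 hz row of table 3, uncertainties from the low-frequency band of table 4
print(round(en_value(1.0010, 0.9847, 0.0097, 0.009), 2))  # 1.24 -> unsatisfactory
# 10 hz row
print(round(en_value(1.0063, 1.0046, 0.0097, 0.009), 2))  # 0.13 -> satisfactory
```

under this assumption the sketch reproduces the 1.24 and 0.13 values listed in table 3.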
5. corrective action

nonconformity in iso 9001 is defined as the failure to meet one or more requirements [17]. as9131, "nonconformance data definition and documentation", classifies nonconformities by process codes (shipping and transportation, manufacturing, document preparation), cause codes (machine, management, people, material, method, environment, measurement) and corrective action codes (machine, management, people, material, method, environment, measurement) [18]. root cause analysis is the process of identifying causal factors using a structured approach, with techniques designed to provide a focus for identifying and resolving problems [19]. it is essential to determine the root cause and create a corrective action plan in order to eliminate the causes of a nonconformity before it occurs again or in another field. the principles of continuous improvement and monitoring of efficiency are important for the continuity of management systems. when comparison results are not satisfactory, a non-conformance record shall be issued and the action process shown in figure 4 shall be started in order to find a solution that keeps the system reliable. a method for determining the root cause of the problem, such as a pareto chart, 5 whys, fishbone diagram, scatter plot diagram, or failure mode and effects analysis (fmea), should be applied to gain an appropriate view for detecting and removing the problem.

figure 4. nonconformance process: definition of nonconformance, determining root cause, identification of corrective action, effectiveness of corrective action.

among all error source possibilities, the sensitivity value of the reference transducer set had top priority to be checked, since its calibration status was close to the calibration due date. after forwarding this equipment to the primary laboratory for re-calibration, although one year had passed between the two calibrations, it was observed that the previous sensitivity value at 250 hz had changed from 0.1312 pc/(m/s²) to 0.1307 pc/(m/s²). comparing the last two calibration certificates, a larger change in the measured results was obtained and the reference value had shifted, contrary to the drift assumed for one year. this condition was considered the main reason for the detected nonconformity. verification of the vibration system has a vital role in obtaining accurate measurements: the working standard accelerometer used in calibration is connected back-to-back with the reference accelerometer. subsequent verifications compare the first results to the new results, and accept them if they deviate by less than 0.8 %. the controller checks that the standard deviation of the measurements is less than 0.2 %. when the fishbone diagram is applied, the root causes are as seen in figure 5. after extensive training of all operators, gage r&r application indicators showed that the competency of the appraisers is satisfactory. after this, the temperature gradient of the measurement room was examined, and it was seen that no action on the temperature subject was needed. regarding mechanical effects, the torque value was precisely adjusted to 2 n·m as required, and it was confirmed that the requirements of the standard have been met.

figure 5. fishbone diagram for the 250 hz $E_n$ > 1 result (causes considered: inexperienced personnel; temperature change during calibration; power and other utility variations; calibration; drift of reference value; verification; torque value; mounting; grouped under the personnel, measurement, environment and setup branches).

as a result of the system review detailed above, the verification and calibration issue was estimated to be the only root cause giving rise to the unsatisfactory $E_n$ value at the 250 hz frequency measurement. with the new calibration results, we have confirmed that there is a drift in the value of the reference.
as a result of the evaluation, it was decided to perform a detailed investigation on the root cause of the drift of the reference sensor. since the reason is not fully understood, it was decided to organise a new interlaboratory measurement in order to obtain satisfactory $E_n$ values.

6. summary

further aspects could be considered to understand the unsuccessful result at 250 hz. further study may cover participation in a new comparison test; in case of another insufficient result, decreasing the calibration period or increasing the measurement uncertainty due to drift of the reference transducer set can be taken as further actions. the results produced by the laboratory become valid through comparison tests, as well as through the method of measurement, the competency of the appraisers, the calculated measurement uncertainty, the suitability of the equipment used, calibration and traceability. since iso 17025 also requires a risk- and opportunity-based approach, proficiency testing can be used as a training and risk tool.

7. references
[1] iso 17025:2017, "general requirements for the competence of testing and calibration laboratories"
[2] measuring vibration, brüel & kjaer, www.bk.com
[3] x. bai, "absolute calibration device of the vibration sensor", the journal of engineering, 2018
[4] v. mohanan, b. k. roy, v. t. chitnis, "calibration of accelerometer by using optic fiber vibration sensor", applied acoustics, 28 (1989), pp. 95-103
[5] r. r. bouche, "calibration of vibration and shock measuring transducers", the shock and vibration information center, 1979
[6] k. havewasam, h. h. e. jayaweera, c. l. ranatunga, t. r. ariaratne, "development and evaluation of a calibration procedure for a 2d accelerometer as a tilt and vibration sensor", proceedings of the technical sessions, 25 (2009), pp. 53-62
[7] c. vogler, "calibration of accelerometer vibration sensitivity by reference", college of engineering, 2015
[8] w. ohm, l. wu, p. henes, g. wonk, "generation of low-frequency vibration using a cantilever beam for calibration of accelerometers", journal of sound and vibration, 289 (2006), pp. 192-209
[9] n. garg, m. i. schiefer, "low frequency accelerometer calibration using an optical encoder sensor", measurement, 111 (2017), pp. 226-233
[10] k. b. sönmez, t. o. kılınç, i̇. a. yüksel, s. ö. aktan, "inter-laboratory comparison on the calibration of measurements photometric and radiometric sensors", international congress of metrology, 2019
[11] iso/iec 17043:2010, "conformity assessment - general requirements for proficiency testing"
[12] iso 13528:2015, "statistical methods for use in proficiency testing by interlaboratory comparisons"
[13] iso 16063-21, "vibration calibration by a reference standard"
[14] iso 266, "acoustics - preferred frequencies"
[15] ea-4/02, "evaluation of the uncertainty of measurement in calibration"
[16] ume-g2ti-2018-01, "technical protocol of the interlaboratory comparison on acceleration", 2018
[17] iso 9001:2015, "quality management systems"
[18] as9131:2012, "nonconformance data definition and documentation"
[19] m. a. m. doggett, "a statistical comparison of three root cause analysis tools", journal of industrial technology, vol. 20, no. 2, 2004

design optimisation of a wireless sensor node using a temperature-based test plan

acta imeko issn: 2221-870x june 2021, volume 10, number 2, 37 - 45

lorenzo ciani1, marcantonio catelani1, alessandro bartolini1, giulia guidi1, gabriele patrizi1
1 department of information engineering, university of florence, via di santa marta 3, 50139, florence, italy

abstract: the introduction of big data and the internet of things has allowed the rapid development of smart farming technologies. usually, systems implemented in smart farms monitor environmental conditions and soil parameters to improve productivity, optimize soil conservation, save water and limit plant diseases. wireless sensor networks are a widespread solution because they allow effective and efficient crop monitoring; at the same time, they can cover large areas, ensure fault tolerance and acquire large amounts of data. recent literature fails to consider the testing of the hardware performances of such systems according to the actual operating conditions, and the effects of a harsh environment on the dynamic metrological performances of sensor nodes are not sufficiently investigated. consequently, this work deals with the electrical design optimization of a sensor node by means of thermal tests used to reproduce the actual operating conditions of the nodes. the results of the node characterization through thermal tests are used to improve the node's design and consequently to achieve higher performances in harsh operative conditions.

section: research paper
keywords: fault diagnosis; precision farming; temperature; testing; wireless sensor network
citation: lorenzo ciani, marcantonio catelani, alessandro bartolini, giulia guidi, gabriele patrizi, design optimisation of a wireless sensor node using a temperature-based test plan, acta imeko, vol. 10, no. 2, article 7, june 2021, identifier: imeko-acta-10 (2021)-02-07
section editor: giuseppe caravello, università degli studi di palermo, italy
received january 14, 2021; in final form june 7, 2021; published june 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: gabriele patrizi, e-mail: gabriele.patrizi@unifi.it

1. introduction

nowadays, automatic measurement systems and condition monitoring (cm) tools have become valid and reliable means extensively used in several internet of things (iot) applications in different fields of the industry 4.0 scenario [1]-[10]. the continuous monitoring of both environmental conditions and soil parameters has become extremely important in agriculture applications [11]. according to [12], environmental factors such as temperature and humidity have a deep influence on plant pathogens such as bacteria, fungi, and viruses. moreover, the continuous monitoring of soil parameters allows irrigation to be automated, consequently minimising water waste [13]-[15]. lezoche et al. [16] explained in detail the several advantages achieved by integrating iot technologies in the agricultural industry, such as productivity improvements, soil conservation, water saving and the minimisation of plant diseases.
usually, a wireless sensor network (wsn) is designed and implemented to monitor the crop. the network has to endure harsh outdoor conditions, facing both hot summers and cold winters. at the same time, the network has to guarantee service continuity, ensuring accurate and reliable data. according to [17], an optimal sensor node for agricultural applications should be composed of the following units: a power unit, a processing unit, a memory unit, a sensing unit, and a communication unit. in particular, the thorough analysis presented in [17] highlights the importance of using soil moisture, relative humidity, temperature, and gas sensors. recent literature has plenty of papers focusing on the design of innovative wireless networks for agricultural purposes, each dealing with the optimisation of one particular aspect of the network. node deployment is one of the most critical design aspects, since it severely affects connectivity, coverage area and reliability of the entire network [18]-[20]. some papers, such as [21], [22], introduce new routing strategies to solve classical drawbacks of wsns and optimise the transmission based on the actual node deployment. another fully discussed problem regards the optimisation of power consumption, which is solved with many different solutions [23]-[25]. for instance, in [26] the authors propose thermal modeling and characterisation for designing reliable power converters, while [27] focuses on risk analysis of photovoltaic panels. in [28], a wireless charging method for battery management in agriculture applications is presented. other papers choose to optimise the design based on the type of plantation. for instance, the design of a low-power sensor node for rice fields based on a hybrid antenna is presented in [29]. jiaxing et al. [30] improve the design of sensor nodes in litchi plantations, dealing with the coverage area of each node and the micro-irrigation management efficiency. in [31] a low-cost weather station for an edamame farm is presented and compared with commercial systems. as presented above, many recent works deal with the design and development of wsns for precision farming.
quite the opposite, the concept of testing the hardware performances according to the actual operating conditions is not adequately addressed: the effects of the operating environment on the dynamic metrological performances of wsns are not sufficiently investigated. as observed in previous works on similar systems [32]-[36], environmental stresses such as temperature, humidity, vibration and mechanical shocks deeply influence both the reliability and the metrological performances of low-cost electronic components, leading to loss of calibration, measurement variability and a significant growth in component failure rate. trying to fill these needs, this paper deals with the electrical design optimisation of a sensor node using thermal tests. the agriculture field of application is taken into account in order to customise the test plan and characterise the node under its actual operating conditions. the results of the node characterisation through thermal tests are used to improve the node's design and consequently to achieve higher performances in harsh operative conditions. unfortunately, there are no international standards regarding environmental tests of wsns, nor are customised standards available concerning electronic component testing for agricultural applications. for these reasons, this paper proposes a customised test plan and test-bed for the performance characterisation of a sensor node under temperature stress. the paper is organised as follows: section 2 illustrates the initial design of the sensor node developed in this work. section 3 explains the proposed temperature-based test plan, composed of three different test procedures (namely t.1, t.2 and t.3). section 4 summarises the main results of the tests and proposes some design improvements to optimise the performances of the node. finally, in section 5 conclusions are drawn.

2. sensor node developed in this work

typically, classical wsns are implemented using a single central node (access point, ap) directly connected to all the other nodes (which are called peripheral) in the network. the peripheral nodes use a set of several sensors to acquire a large amount of data, then they send the data to the ap, which must collect and store them [25], [37]. the main drawbacks are the limited coverage area and the restricted number of nodes. an advanced wsn architecture is the one based on a mesh topology, generally called a wireless mesh network (wmn). a wmn is an optimal solution when large geographical areas must be monitored. more in detail, a wmn is a self-organised and self-configured system made up of many peripheral nodes and a single central node (called the root node in the following) that manages the whole network. every node is able to interact with the nearby nodes, using them to reach the root node through indirect paths, allowing large-area coverage [38]-[40]. furthermore, wmns use several near nodes and dynamic routing tables to achieve high-frequency transmissions, high bitrate, full scalability and low management cost [41]-[43]. figure 1 shows the block diagram of the developed sensor node. it is composed of the following units:

• a power supply unit composed of a photovoltaic panel, two lithium batteries, a "batteries management system" (bms) and a "maximum power point tracking" (mppt).
• a set of sensors, including an air temperature and humidity sensor, a soil temperature transducer, a soil moisture sensor and a solar radiation sensor.
• an external antenna.
• a radio and processing unit, which is the real core of the sensor node, based on the esp32 system-on-a-chip microcontroller by "espressif". the microcontroller is mounted on an evaluation board used for software programming by means of a usb-to-uart bridge controller. the evaluation board also includes a pin interface and power supply by means of an ams1117 ldo. two 8-channel 12-bit sar adcs and two 8-bit dacs are embedded in the esp32.

a customised interface board is used to connect the power unit and the sensors unit to the esp32 microcontroller. the node alternates between two operating phases: a 10-minute "sleep phase", in which almost all node functionalities are disabled to save energy, and an "active phase", in which the sensors acquire data and the microcontroller elaborates and transmits them to the root node. this type of functioning minimises the duty cycle of the network, keeping the overheating of the hardware reasonable and saving battery power. figure 2 shows two images of the developed sensor node: a detail of the system is illustrated on the left side, while the right image shows the installation on the field.

figure 1. block diagram of the designed sensor node, including power management systems, radio and processing unit, sensors unit and an external antenna.
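the sleep/active duty cycle described above can be summarised with a short micropython-style sketch; read_sensors() and transmit() are hypothetical placeholders, and the actual firmware of the node may well be implemented differently (e.g. in c on esp-idf):

```python
import machine  # micropython on esp32

SLEEP_MS = 10 * 60 * 1000  # 10-minute sleep phase

def read_sensors():
    # hypothetical placeholder: real code would sample the adc channels
    return {"air_t": 21.3, "soil_t": 18.9, "moisture": 0.31, "radiation": 412}

def transmit(data):
    # hypothetical placeholder: real code would route data toward the root node
    print(data)

# active phase: acquire, elaborate and transmit, then enter deep sleep;
# when the timer expires, the esp32 resets and the script runs again from
# the top, producing the periodic acquisition described in the text
transmit(read_sensors())
machine.deepsleep(SLEEP_MS)
```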
3. temperature-based test plan

a temperature-based test plan was developed in this work to optimise the design of a sensor node for smart farm applications. temperature is the optimal stress condition to characterise the hardware of the sensor node and to investigate the weaknesses of the system. in fact, in compliance with the physics of failure of electronic devices, the main failure mechanisms of this kind of system are intrinsically related to temperature [44], [45]. temperature is the key acceleration factor for many failure mechanisms, such as open/short circuit, silicon fracture, electrostatic discharge (esd), dielectric charging and many others. all these failures could easily be triggered when the temperature reaches high values. consequently, temperature is the optimal stress, since it allows both operational performances and reliability to be characterised at the same time. the information acquired during the electrical characterisation under temperature stress is extremely useful to improve the design of the system. a customised automatic measurement set-up was developed for the acquisition of the key parameters during the temperature tests (see figure 3). a climatic chamber was used to generate the thermal conditions described above. a datalogger equipped with ten k-type thermocouples was employed to monitor the temperature at some critical points of the developed system. the other equipment illustrated in figure 3 and used during the tests includes a power supply generator, an oscilloscope, a set of multimeters, a current generator and a waveform generator. furthermore, a root node and a laptop were used to manage the network functionalities and acquire the data. international standards expressly related to smart agriculture systems are not available. consequently, several standards that cover similar areas were used as guidelines during the design of the test plan, as follows:

• mil-std 810g (2008) [46] is a guideline for any kind of environmental stress test
• iec 60068-2-14 (2011) [47] provides general procedures for temperature testing
• iec 60068-5-2 (1990) [48] is a guide to the drafting of test methods
• iec 60068-2-2 (2007) [49] regards the dry heat test conditions
• iec 60068-2-38 (2009) [50] provides detailed procedures for temperature cyclic tests
• jedec jesd22-a104e (2014) [51] covers the temperature testing of semiconductor devices
• iest-rp-pr-003.1 (2012) [52] defines a temperature step-stress profile for accelerated life testing.

the developed test plan is based on the aforementioned standards, and it is tailored to the practical application scenario. in particular, the nodes will be deployed in the open field and exposed to harsh environmental conditions. consequently, the test plan was developed considering the real operating conditions of the node, which must endure extremely high temperatures during summer days and extremely low temperatures during winter days. moreover, the test plan must also take into account the guaranteed operating ranges of the components that make up the system, as follows:

• microcontroller and electronic boards (up to 85 °c)
• batteries (from -10 °c to 50 °c).

three temperature-based test procedures were developed, namely a positive temperature step-test from 20 °c to 80 °c (test t.1), a back and forth temperature step-test from -10 °c to 80 °c and then again to -10 °c (test t.2), and a temperature cyclic test with a restricted temperature range only for battery testing (test t.3).

3.1. test t.1. positive temperature step-test

in this test procedure the devices under test are two identical sensor nodes. the only difference between the two boards is the ldo that manages the power supply of the electronic board. the aim of this test was to characterise the main electrical parameters of the nodes. only the electronic hardware was tested; the batteries and the solar panel were not located inside the climatic chamber. the first node was supplied by the ldo provided by the manufacturer of the evaluation board (ams1117 ldo, "former ldo" in the following) and the other one was equipped with an ap2114h ldo ("new ldo" in the following). the complete temperature profile of t.1 is illustrated in figure 4, where the blue arrows highlight the temperature step and the exposition time. the test procedure t.1 starts at 20 °c, which is generally the room temperature. the first step consists of a 5 °c temperature rise lasting 10 minutes.

figure 2. pictures of the sensor node designed in this work. the left figure shows a detail of the boards enclosed inside a waterproof case, while on the right side the complete system installed on the field is illustrated.

figure 3. measurement setup proposed in this work to characterise the designed sensor node under temperature stress.

figure 4. test procedure t.1. positive temperature step-test from 20 °c up to 80 °c (5 °c steps of 10 minutes alternating with dwells at constant temperature).
the two previous steps are repeated up to 80 °c, alternating a 5 °c step (10 minutes) and a 20minute exposition time. 3.2. test t.2. back and forth temperature step-test the test procedure t.2. illustrated in this section is an extension of the test procedure t.1. the differences between the back and forth temperature step-test t.2. and the previous-described procedure are illustrated in the following: • temperature range of the test procedure t.2. is from -10 °c to 80 °c. • the test procedure t.2. is repeated back and forth, which means that it starts at -10 °c, then it reaches 80 °c following a step increase, and finally it decreases again to -10 °c following the same steps. • test procedure t.2. is characterised by a rise rate and a lowering rate between one step and the following equal to 2 °c/min. • the exposition time at constant temperature of test procedure t.2. is reduced to only 10 minutes. the complete test profile of the procedure t.2. is illustrated in figure 5, highlighting the back and forth trend used to investigate hysteresis behaviour. during this test procedure, only the processing unit of the node was located inside the climatic chamber. in fact, the main purposes of this procedure were to test the performances of the analog-to-digital converter (adc) and digital-to-analog converter (dac) embedded in the microcontroller esp32. 3.3. test t.3. temperature cyclic test many papers in recent literature agree that temperature is the key factor in battery degradation [53]-[56]. for this reason, the test procedure t.3. was developed only for battery characterisation. test procedure t.3 is based on two consecutive cycle. the minimum temperature is -10 °c, while the maximum temperature is 50 °c. this limited range was designed to satisfy battery safety requirements; nonetheless, it is well representative of the actual operative temperature in agriculture field. the rise rate and the lowering rate is 2 °c/min, and the exposition time at the minimum and maximum temperature is 30 minutes. test procedure t.3. is the only procedure in the proposed test plan in which the temperature changes linearly between the minimum and maximum values of the range (without steps). figure 6 shows the temperature profile of test t.3. highlighting two cyclic repetitions in the range [-10 °c; 50 °c]. 4. results and discussion in this section the main results achieved during the test are illustrated. moreover, some design improvements are proposed in order to optimise the performances of the node under the actual operating conditions. 4.1. test t.1. main results and proposed improvements the node was tested with two different ldos: the one provided by the manufacturer and a new one. the aim of this procedure is to investigate if the new ldo provides significant upgrades with respect to the former ldo and to evaluate the effect of the temperature, during the step-stress test. preliminary results regarding this test have been illustrated in [57]. figure 7 shows the comparison of the current consumption of the two boards (blue and green lines) during six cycles of active and sleep phases, with a corresponding chamber temperature from 55 ° c to 65 °c (red line). moreover, figure 7 highlights the benefits introduced by the new ldo in both active and sleep phases. in fact, the new ldo allows an average decrease of the absorbed current of 2 ma. 
figure 7 also shows the most striking result discovered during the test phase, which is the presence of a current step-up (observed in both sensor nodes) at a certain temperature. in the case of the former ldo, the step-up occurs at approximately 63 °c, while the new ldo is subjected to this phenomenon at a lower temperature (approximately 58 °c). indeed, focusing only on the sleep phase, as shown in figure 8, it is possible to identify an unexpected increase of about 4.5 ma of the current above a specific temperature in both nodes. during the cooling phase of the chamber the current consumption suddenly decreases, returning to its previous value. after a deep analysis, widely explained in [57], this anomaly was attributed to an unexpected activation of the usb-to-uart bridge controller (cp2102n) integrated in the evaluation board.

figure 5. test procedure t.2. back and forth temperature step-test from -10 °c up to 80 °c and then back again to -10 °c.

figure 6. test procedure t.3. temperature cyclic test characterised by two test repetitions.

figure 7. current consumption of the two boards (blue and green lines) on the left y-axis, while the right y-axis shows the temperature variation of the chamber (red line).

the controller should only be activated (or enabled) in case of a device connection to the usb port. the 3.3 v output of the ldo supplies both the usb-to-uart controller and the microcontroller. the cp2102n controller is enabled only in case a usb device is connected to the board (vusb). under typical operating conditions, the usb provides a 5 v voltage, then a divider generates a voltage drop (vbus) as input of the 8th pin of the usb-to-uart controller. vbus is:

vbus = 3.41 v . (1)

the cp2102n datasheet highlights that the vbus pin is considered in a high logical state (controller on) when the following relationships are satisfied:

vbus > (vdd − 0.6 v) (2)
vbus < (vdd + 2.5 v) (3)

with vdd = 3.3 v the high-state threshold can be calculated as:

vth = vdd − 0.6 v = 3.3 v − 0.6 v = 2.7 v . (4)

the board is also powered by an external voltage of 5 v. the controller is disabled by a schottky diode (bat760-7) located between the usb connector and the external 5 v pin. this diode prevents the usb-to-uart bridge from turning on when the external 5 v supply is used. furthermore, it also protects the computer or other devices connected via usb from unexpected reverse currents. analysing the schottky diode datasheet, it is evident that an increase of temperature produces an increase of the reverse current of the diode. for example, at 75 °c with a reverse voltage of 5 v the diode exhibits a reverse current of about 100 µa. since the usb connector is an open circuit, the reverse current of the schottky diode generates a voltage drop given by:

vbus = 4.75 v ≫ vth . (5)

therefore, this reverse current evaluated at 75 °c is enough to enable the usb-to-uart controller. furthermore, this reverse current could be dangerous at higher temperatures because it could generate activation voltages higher than the maximum limit, leading to possible damage of the converter. consequently, the higher the temperature, the higher the diode reverse current and the higher the activation voltage of the cp2102n controller.
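to make the enable condition concrete, the following python sketch (illustrative, not the authors' code) checks relationships (2)-(3) against the two vbus values reported in equations (1) and (5); only values stated in the text are used:

```python
# cp2102n vbus enable check, per equations (2)-(4) of the text.
VDD = 3.3  # ldo output supplying the cp2102n (v)

def vbus_high(v_bus: float, v_dd: float = VDD) -> bool:
    """return True when v_bus is read as a high logical state
    (controller on), i.e. eqs. (2) and (3) are both satisfied."""
    return (v_dd - 0.6) < v_bus < (v_dd + 2.5)

# normal usb connection: divider output reported in eq. (1)
print(vbus_high(3.41))   # True -> controller correctly enabled

# 75 degc, usb open: voltage produced by the diode reverse current, eq. (5)
print(vbus_high(4.75))   # True -> controller spuriously enabled
```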
if the temperature is high enough to produce a reverse current which generates vbus > vth, then the usb-to-uart controller turns on, absorbing 4.5 ma and generating the current step shown in the previous figures. there are three possible corrective actions to eliminate this problem while guaranteeing the proper functionality in case of a connected usb device:
• replace the schottky diode with another model able to guarantee a lower reverse current;
• modify the divider, for example by maintaining the ratio between the resistances but decreasing the resistance values;
• design a new interface board removing the usb interface and introducing an external serial interface used only during programming.

the previous considerations also explain why the current step-ups occurred at different temperatures in the two boards. indeed, by measuring the outputs of the two ldos, it is possible to verify a slight difference in the output voltage that leads to a different voltage threshold (4) and, consequently, a different reverse current needed to activate the usb-to-uart controller.

4.2. test t.2. main results and proposed improvements

this test procedure was used to characterise the performance of the internal adc embedded within the esp32 microcontroller (simply referred to as internal adc in the following). to this purpose, a reference signal vin = 1.5 v was provided, using a signal generator, as input of the internal adc. moreover, the same signal was also used as input of an additional analog-to-digital converter (ads1115 by texas instruments), called external adc in the following. the external adc was located on the interface board. it converts the same signal acquired by the internal adc, then transfers the digital data to the microcontroller, which stores the data into an e2prom memory. the adcs acquire 512 samples every two minutes under the control of the esp32 microcontroller. in this way, it is possible to acquire a large amount of data at every considered temperature step. then, the mean value and standard deviation of these samples are calculated to compare the performances of the adcs.

figure 8. detail of the current consumption during the sleep phases of the two boards (blue and green lines) on the left y-axis, while the right y-axis shows the temperature variation of the chamber during the test (red line).

figure 9. comparison of the performances of the internal (blue circle markers) and external (blue star markers) adcs at each temperature step (red line). each marker represents the mean value of the 512 acquisitions at the considered active phase.

figure 9 shows a comparison between the mean values of the 512 acquisitions during each active phase of the two adcs. the blue circle markers stand for the mean value of the internal adc acquisitions, while the blue star markers represent the mean value of the external adc acquisitions. the right y-axis (red axis) is used to depict the temperature of the climatic chamber, acquired using a k-type thermocouple and a datalogger during the test. in the initial phase of the test (room temperature), both adcs show an offset with respect to the true input value provided by the signal generator. more in detail, the internal adc is characterised by a positive offset of +40 mv, which suddenly increases when the temperature is lowered. quite the opposite, the external adc has a negative offset of -2 mv.
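a minimal python sketch of this per-phase processing is shown below; the function name and the synthetic data are illustrative, not the authors' code (only the 512-sample size, the 1.5 v reference and the quoted room-temperature offsets come from the text):

```python
# per-phase statistics used to compare the internal and external adcs.
import numpy as np

V_IN = 1.5  # reference input from the signal generator (v)

def phase_stats(samples_v):
    """mean, sample standard deviation and offset w.r.t. the 1.5 v
    reference for one active phase (512 acquired voltages, in volts)."""
    samples = np.asarray(samples_v, dtype=float)
    mean = samples.mean()
    std = samples.std(ddof=1)
    return mean, std, mean - V_IN

# synthetic data mimicking the reported room-temperature behaviour:
# internal adc offset ~ +40 mv, external adc offset ~ -2 mv
rng = np.random.default_rng(0)
internal = V_IN + 0.040 + rng.normal(0, 0.004, 512)
external = V_IN - 0.002 + rng.normal(0, 0.0005, 512)
for name, data in (("internal", internal), ("external", external)):
    m, s, off = phase_stats(data)
    print(f"{name}: mean={m:.4f} v, std={s*1e3:.2f} mv, offset={off*1e3:+.1f} mv")
```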
temperature has a strong influence on the mean value acquired by the internal adc (as proven by the non-constant trend of the circle markers illustrated in figure 9). on the other hand, the external adc highlights a remarkable temperature stability, with an almost constant trend throughout the temperature range. table 1 compares the standard deviations of the adcs under test at some significant temperatures. the standard deviation of the internal adc is quite high and considerably influenced by the temperature of the chamber, while the external adc highlights better performance in terms of data dispersion. the introduction of the ads1115 analog-to-digital converter provides better performance in terms of offset at room temperature, thermal stability and data dispersion. for these reasons, it is recommended to integrate this chip instead of the internal adc embedded in the esp32 system-on-a-chip microcontroller.

4.3. test t.3. main results and proposed improvements

this test procedure was used to characterise the performance of two different types of lithium batteries under the actual operating temperatures of agricultural applications. battery a is a linicomno2 type (inr18650-35e), while battery b is a lifepo4 type (htcfr26650). battery a is characterised by a high specific energy and an operating voltage range between 3 v and 4 v. instead, battery b guarantees a constant voltage output to supply the microcontroller, but it is characterised by a low charge density. moreover, according to the datasheet, lifepo4 batteries guarantee good performance over a larger temperature range. during the thermal test an active load was used to discharge the batteries, ensuring a constant discharge current of 300 ma. this value was chosen since it is the average current consumption of the whole system during the data transmission phase. figure 10 shows the experimental results achieved during the thermal cyclic test of battery a (linicomno2 type), using a blue line to represent the battery voltage during the discharge. the data were compared with a reference discharge voltage (red trend) measured maintaining a constant temperature of 20 °c. as expected, the reference discharge at room temperature exhibits a linear trend characterised by a negative slope (called δv in the following) strictly related to the constant discharge current forced by the active load. instead, the blue trend in figure 10, achieved during the thermal cyclic test, highlights some deviations compared to the nominal trend during the cold phase of the cyclic test. specifically, the “v-shape trend” highlighted by the blue line in figure 10 refers to temperatures lower than 0 °c. more in detail, when the temperature drops below 0 °c the discharge rate of the battery suddenly increases, producing a remarkable steepening of the negative slope δv. then, when the temperature starts to increase, the figure shows a counterintuitive behaviour: the battery voltage increases even though the battery is still in the discharge phase. this increase of the discharge rate at very low temperatures could become remarkable in case of long exposure to really cold conditions. for this reason, the fixture of the solar panel that charges the batteries must be oriented with a proper slope in order to optimise the charge during winter.
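as an illustration of how the reference slope δv can be extracted from a discharge record, the following python sketch (assumed processing, not the authors' code) fits a line to a synthetic constant-current discharge at constant temperature:

```python
# least-squares estimate of the discharge slope dv (v/min) from a
# voltage-vs-time record; data below are synthetic, for illustration.
import numpy as np

def discharge_slope(t_min, v_batt):
    """slope of a linear fit of battery voltage vs time, in v/min."""
    slope, _intercept = np.polyfit(t_min, v_batt, deg=1)
    return slope

# synthetic reference discharge: linear trend plus measurement noise
t = np.arange(0, 600, 2.0)   # minutes
v = 4.0 - 1.5e-3 * t + np.random.default_rng(1).normal(0, 2e-3, t.size)
print(f"estimated slope dv = {discharge_slope(t, v)*1e3:.2f} mv/min")
```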
figure 11 shows the experimental results achieved during the thermal cyclic test of battery b (lifepo4 type), using a blue line to represent the battery voltage during the discharge. also in this case the data were compared with a reference discharge voltage (red trend) measured maintaining a constant temperature of 20 °c. as expected, the lifepo4 battery shows a constant voltage during the discharge phase. nevertheless, during the thermal cyclic test the lifepo4 battery also shows a “v-shape trend”. for these reasons, and considering its higher specific energy, the linicomno2 battery should be used in the sensor node.

figure 10. battery discharge test for the linicomno2 battery: trend achieved at 20 °c constant temperature (red line) and trend achieved during thermal cyclic test t.3. (blue line).

figure 11. battery discharge test for the lifepo4 battery: trend achieved at 20 °c constant temperature (red line) and trend achieved during thermal cyclic test t.3. (blue line).

table 1. comparison of the standard deviations for the internal and external adcs at each temperature.

temperature   internal adc   external adc
-10 °c        4 mv           0.5 mv
10 °c         4 mv           0.6 mv
30 °c         8 mv           0.8 mv
50 °c         11 mv          1 mv

5. conclusions

the paper deals with the design optimisation of a sensor node, used in a wireless mesh network, under temperature stress. since there is no specific standard for this kind of system, a customised test plan based on three temperature-based stress tests was developed in this work. moreover, an automatic measurement setup was designed and implemented to monitor the performance of the system during the tests. the aim of the first temperature test (test t.1.) was to observe the effects of high temperature on the hardware and firmware, looking for any anomalies with respect to the correct functioning. one of the main unexpected findings is an increase of the current consumption during the sleep phase when the temperature overpasses a certain threshold. in particular, a 4.5 ma step was verified above a specific temperature. this step is not due to a permanent failure, because during the cooling phase the current returns to its normal value at approximately the same temperature. this unexpected behaviour can lead to an increase of the power consumption of the sensor node, and a solution must be considered. the second temperature test (test t.2.) aimed at verifying the performance of the analog-to-digital converter (adc) and digital-to-analog converter (dac) embedded in the esp32 microcontroller. the dac does not highlight any particular problem, while the adc embedded in the esp32 shows three main drawbacks: a significant offset at room temperature, thermal instability and a remarkable data dispersion. for these reasons, it is recommended to use an external ads1115 analog-to-digital converter, which provided better performance during the test. finally, test t.3. characterised the behaviour of two different types of batteries under thermal stress, focusing on the discharge rate at cold temperatures. the test highlights the importance of a proper solar panel orientation to optimise the battery charge during winter.

references
[1] g. d’emilia, a. gaspari, data validation techniques for measurement systems operating in an industry 4.0 scenario: a condition monitoring application, in 2018 workshop on metrology for industry 4.0 and iot, brescia, italy, 16-18 april 2018, pp. 112–116. doi: 10.1109/metroi4.2018.8428317
[2] m. carratu, c. liguori, a. pietrosanto, m. o’nils, j. lundgren, a novel ivs procedure for handling big data with artificial neural networks, in 2020 ieee international instrumentation and measurement technology conference (i2mtc), dubrovnik, croatia, 25-28 may 2020, pp. 1–6. doi: 10.1109/i2mtc43012.2020.9128500
[3] e. petritoli, f. leccese, m. botticelli, s. pizzuti, f. pieroni, a rams analysis for a precision scale-up configuration of “smart street” pilot site: an industry 4.0 case study, acta imeko 8(2) (2019), pp. 3-11. doi: 10.21014/acta_imeko.v8i2.614
[4] g. d’emilia, a. gaspari, e. natale, mechatronics applications of measurements for smart manufacturing in an industry 4.0 scenario, ieee instrum. meas. mag. 22(2) (2019), pp. 35–43. doi: 10.1109/mim.2019.8674633
[5] m. carratu, m. ferro, a. pietrosanto, v. paciello, smart power meter for the iot, in 2018 ieee 16th international conference on industrial informatics (indin), porto, portugal, 18-20 july 2018, pp. 514–519. doi: 10.1109/indin.2018.8472018
[6] g. d’emilia, a. gaspari, e. hohwieler, a. laghmouchi, e. uhlmann, improvement of defect detectability in machine tools using sensor-based condition monitoring applications, procedia cirp 67 (2018), pp. 325–331. doi: 10.1016/j.procir.2017.12.221
[7] e. petritoli, f. leccese, g. s. spagnolo, in-line quality control in semiconductors production and availability for industry 4.0, in 2020 ieee international workshop on metrology for industry 4.0 & iot, roma, italy, 3-5 june 2020, pp. 665–668. doi: 10.1109/metroind4.0iot48571.2020.9138296
[8] m. carratu, a. pietrosanto, p. sommella, v. paciello, a wearable low-cost device for measurement of human exposure to transmitted vibration on motorcycle, in 2019 ii workshop on metrology for industry 4.0 and iot (metroind4.0&iot), naples, italy, 4-6 june 2019, pp. 329–333. doi: 10.1109/metroi4.2019.8792855
[9] f. lamonaca, c. scuro, p. f. sciammarella, r. s. olivito, d. grimaldi, d. l. carnì, a layered iot-based architecture for a distributed structural health monitoring system, acta imeko 8(2) (2019), pp. 45–52. doi: 10.21014/acta_imeko.v8i2.640
[10] m. catelani, l. ciani, v. luongo, r. singuaroli, evaluation of the safe failure fraction for an electromechanical complex system: remarks about the standard iec 61508, in 2010 ieee instrumentation & measurement technology conference proceedings, austin, tx, usa, 3-6 may 2010, pp. 949–953. doi: 10.1109/imtc.2010.5488034
[11] m. catelani, l. ciani, a. bartolini, g. guidi, g. patrizi, standby redundancy for reliability improvement of wireless sensor network, in 2019 ieee 5th international forum on research and technology for society and industry (rtsi), florence, italy, 9-12 sept. 2019, pp. 364–369. doi: 10.1109/rtsi.2019.8895533
[12] a. p. j. lindsey, s. murugan, r. e. renitta, microbial disease management in agriculture: current status and future prospects, biocatal. agric. biotechnol. 23 (2020), art. 101468. doi: 10.1016/j.bcab.2019.101468
[13] h. zia, n. r. harris, g. v. merrett, m. rivers, n. coles, the impact of agricultural activities on water quality: a case for collaborative catchment-scale management using integrated wireless sensor networks, comput. electron. agric. 96 (2013), pp. 126–138. doi: 10.1016/j.compag.2013.05.001
[14] g. vellidis, m. tucker, c. perry, c. kvien, c. bednarz, a real-time wireless smart sensor array for scheduling irrigation, comput. electron. agric. 61(1) (2008), pp. 44–50. doi: 10.1016/j.compag.2007.05.009
[15] j. mcculloch, p. mccarthy, s. m. guru, w. peng, d. hugo, a. terhorst, wireless sensor network deployment for water use efficiency in irrigation, in proceedings of the workshop on real-world wireless sensor networks realwsn ’08, glasgow, scotland, 1 april 2008, pp. 46-50. doi: 10.1145/1435473.1435487
[16] m. lezoche, j. e. hernandez, m. del m. e. alemany díaz, h. panetto, j. kacprzyk, agri-food 4.0: a survey of the supply chains and technologies for the future agriculture, comput. ind. 117 (2020), art. 103187. doi: 10.1016/j.compind.2020.103187
[17] f. al-turjman, the road towards plant phenotyping via wsns: an overview, comput. electron. agric. 161 (2019), pp. 4–13. doi: 10.1016/j.compag.2018.09.018
[18] m. younis, k. akkaya, strategies and techniques for node placement in wireless sensor networks: a survey, ad hoc networks 6(4) (2008), pp. 621–655. doi: 10.1016/j.adhoc.2007.05.003
[19] f. m. al-turjman, a. e. al-fagih, w. m. alsalih, h. s. hassanein, reciprocal public sensing for integrated rfid-sensor networks, in 2013 9th international wireless communications and mobile computing conference (iwcmc), sardinia, italy, 1-5 july 2013, pp. 746–751. doi: 10.1109/iwcmc.2013.6583650
[20] f. m. al turjman, h. s. hassanein, towards augmented connectivity with delay constraints in wsn federation, int. j. ad hoc ubiquitous comput. 11(2/3) (2012), pp. 97-108. doi: 10.1504/ijahuc.2012.050273
[21] y. wang, x. li, w.-z. song, m. huang, t. a. dahlberg, energy-efficient localized routing in random multihop wireless networks, ieee trans. parallel distrib. syst. 22(8) (2011), pp. 1249–1257. doi: 10.1109/tpds.2010.198
[22] a. m. patel, m. m. patel, a survey of energy efficient routing protocols for mobile adhoc networks, int. j. eng. res. technol. 1(10) (2012), pp. 1–6.
[23] r. yan, h. sun, y. qian, energy-aware sensor node design with its application in wireless sensor networks, ieee trans. instrum. meas. 62(5) (2013), pp. 1183–1191. doi: 10.1109/tim.2013.2245181
[24] m. t. penella, m. gasulla, runtime extension of low-power wireless sensor nodes using hybrid-storage units, ieee trans. instrum. meas. 59(4) (2010), pp. 857–865. doi: 10.1109/tim.2009.2026603
[25] l. gasparini, r. manduchi, m. gottardi, d. petri, an ultra-low-power wireless camera node: development and performance analysis, ieee trans. instrum. meas. 60(12) (2011), pp. 3824–3832. doi: 10.1109/tim.2011.2147630
[26] m. lazzaroni et al., thermal modeling and characterization for designing reliable power converters for lhc power supplies, acta imeko 3(4) (2014), pp. 17–25. doi: 10.21014/acta_imeko.v3i4.147
[27] l. cristaldi, m. khalil, p. soulatintork, a root cause analysis and a risk evaluation of pv balance of system failures, acta imeko 6(4) (2017), pp. 113-120. doi: 10.21014/acta_imeko.v6i4.425
[28] l. varandas, p. d. gaspar, m. l. aguiar, standalone docking station with combined charging methods for agricultural mobile robots, int. j. mech. mechatronics eng. 13(1) (2019), pp. 38–42. doi: 10.5281/zenodo.3607799
[29] h. chen, w. wang, b. sun, j. weng, f. tie, design of a wsn node for rice field based on hybrid antenna, in 2017 international conference on computer network, electronic and automation (iccnea), xi'an, china, 23-25 sept. 2017, pp. 276–280. doi: 10.1109/iccnea.2017.101
[30] x. jiaxing, g. peng, w. weixing, l. huazhong, x. xin, h. guosheng, design of wireless sensor network bidirectional nodes for intelligent monitoring system of micro-irrigation in litchi orchards, ifac-papersonline 51(17) (2018), pp. 449–454. doi: 10.1016/j.ifacol.2018.08.176
[31] s. tenzin, s. siyang, t. pobkrut, t. kerdcharoen, low cost weather station for climate-smart agriculture, in 2017 9th international conference on knowledge and smart technology (kst), chonburi, thailand, 1-4 feb. 2017, pp. 172–177. doi: 10.1109/kst.2017.7886085
[32] l. ciani, d. galar, g. patrizi, improving context awareness reliability estimation for a wind turbine using an rbd model, in 2019 ieee international instrumentation and measurement technology conference (i2mtc), auckland, new zealand, 20-23 may 2019, pp. 1–6. doi: 10.1109/i2mtc.2019.8827041
[33] d. capriglione et al., development of a test plan and a testbed for performance analysis of mems-based imus under vibration conditions, measurement 158 (2020), art. 107734. doi: 10.1016/j.measurement.2020.107734
[34] d. capriglione et al., experimental analysis of imu under vibration, in 16th imeko tc10 conference 2019: testing, diagnostics and inspection as a comprehensive value chain for quality and safety, berlin, germany, 3-4 sept. 2019, pp. 26-31. online [accessed 09 june 2021] https://www.imeko.org/publications/tc10-2019/imeko-tc10-2019-002.pdf
[35] l. ciani, g. guidi, application and analysis of methods for the evaluation of failure rate distribution parameters for avionics components, measurement 139 (2019), pp. 258–269. doi: 10.1016/j.measurement.2019.02.082
[36] m. catelani, l. ciani, experimental tests and reliability assessment of electronic ballast system, microelectron. reliab. 52(9–10) (2012), pp. 1833–1836. doi: 10.1016/j.microrel.2012.06.077
[37] m. carratu, m. ferro, a. pietrosanto, p. sommella, v. paciello, a smart wireless sensor network for pm10 measurement, in 2019 ieee international symposium on measurements & networking (m&n), catania, italy, 8-10 july 2019, pp. 1–6. doi: 10.1109/iwmn.2019.8805015
[38] l. ciani, a. bartolini, g. guidi, g. patrizi, condition monitoring of wind farm based on wireless mesh network, in 16th imeko tc10 conference 2019: testing, diagnostics and inspection as a comprehensive value chain for quality and safety, berlin, germany, 3-4 sept. 2019, pp. 39–44. online [accessed 09 june 2021] https://www.imeko.org/publications/tc10-2019/imeko-tc10-2019-004.pdf
[39] m. catelani, l. ciani, a. bartolini, g. guidi, g. patrizi, characterization of a low-cost and low-power environmental monitoring system, in 2020 ieee international instrumentation and measurement technology conference (i2mtc), dubrovnik, croatia, 25-28 may 2020. doi: 10.1109/i2mtc43012.2020.9129274
[40] m. cagnetti, f. leccese, d. trinca, a new remote and automated control system for the vineyard hail protection based on zigbee sensors, raspberry-pi electronic card and wimax, journal agric. sci. technol. 3 (2013), pp. 853–864.
[41] l. ciani, a. bartolini, g. guidi, g. patrizi, a hybrid tree sensor network for a condition monitoring system to optimise maintenance policy, acta imeko 9(1) (2020), pp. 3–9. doi: 10.21014/acta_imeko.v9i1.732
[42] y. zhang, l. jijun, h. hu, wireless mesh networking: architectures, protocols and standards, taylor & francis group, new york, usa, 2007, isbn: 9780429133107.
[43] x. zhu, y. lu, j. han, l. shi, transmission reliability evaluation for wireless sensor networks, int. j. distrib. sens. networks 12(2) (2016), art. 1346079. doi: 10.1155/2016/1346079
[44] d. capriglione et al., characterization of inertial measurement units under environmental stress screening, in 2020 ieee international instrumentation and measurement technology conference (i2mtc), dubrovnik, croatia, 25-28 may 2020, pp. 1–6. doi: 10.1109/i2mtc43012.2020.9129263
[45] m. catelani, l. ciani, g. guidi, m. venzi, parameter estimation methods for failure rate distributions, in 14th imeko tc10 workshop on technical diagnostics 2016: new perspectives in measurements, tools and techniques for systems reliability, maintainability and safety, milan, italy, 27-28 june 2016, pp. 441–445. online [accessed 09 june 2021] https://www.imeko.org/publications/tc10-2016/imeko-tc10-2016-083.pdf
[46] mil-std-810g, environmental engineering considerations and laboratory tests, us department of defense, washington dc, october 2008.
[47] iec 60068-2-14, environmental testing - part 2-14: tests - test n: change of temperature, international electrotechnical commission, 2011.
[48] iec 60068-5-2, environmental testing - part 5: guide to drafting of test methods - terms and definitions, international electrotechnical commission, 1990.
[49] iec 60068-2-2, environmental testing - part 2-2: tests - test b: dry heat, international electrotechnical commission, 2007.
[50] iec 60068-2-38, environmental testing - part 2: tests - test z/ad: composite temperature/humidity cyclic test, international electrotechnical commission, 2009.
[51] jedec solid state technology, jedec standard: temperature cycling, jesd22-a104e, 2014.
[52] iest-rp-pr003.1, halt and hass, institute of environmental sciences and technology, product reliability division, 2012.
[53] a. eddahech, o. briat, j.-m. vinassa, performance comparison of four lithium-ion battery technologies under calendar aging, energy 84 (2015), pp. 542–550. doi: 10.1016/j.energy.2015.03.019
[54] t. huria, m. ceraolo, j. gazzarri, r. jackey, high fidelity electrical model with thermal dependence for characterization and simulation of high power lithium battery cells, in 2012 ieee international electric vehicle conference, greenville, sc, usa, 4-8 march 2012. doi: 10.1109/ievc.2012.6183271
[55] n. omar et al., lithium iron phosphate based battery – assessment of the aging parameters and development of cycle life model, appl. energy 113 (2014), pp. 1575–1585. doi: 10.1016/j.apenergy.2013.09.003
[56] e. locorotondo, l. pugi, l. berzi, m. pierini, g. lutzemberger, online identification of thevenin equivalent circuit model parameters and estimation state of charge of lithium-ion batteries, in 2018 ieee international conference on environment and electrical engineering and 2018 ieee industrial and commercial power systems europe (eeeic / i&cps europe), palermo, italy, 12-15 june 2018, pp. 1–6. doi: 10.1109/eeeic.2018.8493924
[57] l. ciani, m. catelani, a. bartolini, g. guidi, g. patrizi, electrical characterization of a monitoring system for precision farming under temperature stress, in 24th imeko tc4 international symposium and 22nd international workshop on adc and dac modelling and testing, palermo, italy, 14-16 sept. 2020, pp. 270–275.
online [accessed 14 june 2021] https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-51.pdf

evaluation on effect of alkaline activator on compaction properties of red mud stabilised by ground granulated blast slag

acta imeko, issn: 2221-870x, march 2022, volume 11, number 1, 1 - 6

sarath chandra1, sankranthi krishnaiah2
1 department of civil engineering, jawaharlal nehru technological university anantapur, anantapur, india and department of civil engineering, christ (deemed to be university), bangalore, karnataka-560029, india
2 department of civil engineering, jawaharlal nehru technological university anantapur, anantapur, andhra pradesh-515002, india

abstract: any industrial waste has the potential to be used as a civil engineering material with an effective and appropriate waste management system. like many industrial wastes, red mud (rm) and ground granulated blast slag (ggbs) are industrial wastes produced from the aluminium and steel industries, respectively. the utilization of waste materials alone is not effective without a suitable stabiliser, which makes it necessary to use an alkaline activator to satisfy the requirements of a building material. this paper evaluates measurements to assess the effect of an alkaline activator on the compaction properties of ggbs stabilised rm. different ratios of naoh to na2sio3 were used as an alkaline activator with 10 %, 20 % and 30 % replacement of rm by ggbs, and the compaction properties were measured by using a mini compaction apparatus. upon conducting standard and modified proctor compaction tests for various combinations of rm and ggbs, the compaction curves showed large variations in maximum dry density and optimum moisture content with the change of ggbs percentage and of the naoh to na2sio3 ratio, which were measured and analysed. further, the influence of compaction energy on the density characteristics of these trials was assessed for better understanding.

section: research paper
keywords: red mud; ground granulated blast slag; alkaline activator; evaluating compaction properties; assessing compaction energy
citation: sarath chandra, sankranthi krishnaiah, evaluation on effect of alkaline activator on compaction properties of red mud stabilised by ground granulated blast slag, acta imeko, vol. 11, no. 1, article 22, march 2022, identifier: imeko-acta-11 (2022)-01-22
section editor: md zia ur rahman, koneru lakshmaiah education foundation, guntur, india
received november 20, 2021; in final form february 20, 2022; published march 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: sarath chandra, e-mail: chandrasarath011@gmail.com

1. introduction

it is very important to develop new methods, rather than relying on the traditional methods of civil engineering construction, to control the utilization of virgin resources. on the other hand, handling the industrial waste materials generated by the new methods of construction and by the unpredictable growth of industries around the globe is also essential. the best way to balance both sides of this worldwide issue is to utilize industrial waste materials in the construction industry with suitable and environmentally friendly methods. with the revolution of sustainable construction techniques, in the last two decades enormous research has been carried out on the utilization of various industrial waste materials like fly ash, ground granulated blast slag, pond ash, red mud, iron ore tailings, foundry sand, etc. in various areas of civil engineering construction, such as the manufacture of bricks and paver blocks [1], [2], as a subgrade and sub-base material in road construction [3], [4], as embankment and backfill materials [5], [6], as a soil stabilisation technique [7], [8] and for vegetation [9]. along with the industries, construction activities also increased tremendously with the advancement in technologies, whereby the utilization of raw materials increased exponentially. this directly results in another important waste stream, namely construction and demolition waste [10].
it is important to adopt the latest technologies, like optical metrology involving digital, vision and video systems, in order to understand the exact behaviour of various materials. the usage of advanced technologies helps to reduce pollution and gives new, accurate solutions for measuring and assessing problems in civil engineering [11]. like many industrial waste materials, red mud (rm) is a highly alkaline (ph ranging from 10.5 to 13) industrial residue which is produced during the process of extraction of aluminium from the bauxite ore [12]. rm is generally produced in the form of a slurry which contains up to 40 % water and is discharged into collection ponds situated next to aluminium plants [13]. the highly alkaline nature and very fine particle size of rm, along with the aluminium traces, result in a threat to the environment. it is also difficult to store rm for long periods, which may create air pollution with open exposure to air and also groundwater pollution through the leachate in the absence of proper liners. annually, more than 4 million tons of red mud are produced in india. on the other side, its disposal requires large areas of land and involves a considerable amount of money for proper waste management. the lack of an appropriate waste management and storage system for rm will show adverse effects on the environment, which may end up taking many lives and destroying property, as has happened in the past [14]. these negative aspects of rm emphasize the need to use it in civil engineering construction, where a bulk material can be used at minimal cost. on the other side, rm also shows properties similar to clayey and sandy soils, which is a good indication of its usability in civil engineering applications like the construction of embankments, landfills and different layers of road construction. however, the application of these waste materials always depends on their density characteristics. this implies measuring the compaction characteristics, i.e. finding the optimum moisture content and maximum dry density of the material.
the measurement of compaction characteristics is very important for any foreign material in geotechnical applications at various stages, upon satisfying the standard specifications of the relevant codes; in this manner, measurement technology plays an important role in such real-time applications. for any soil or waste material, upon the application of loads there will be a change in volume because of the expelling of air from the voids. the change in the volume of the material depends on the number of voids developed in the material, the amount of air filling the voids and the amount of load or pressure applied. the standard and modified compaction tests show the differences in dry densities with the change of the applied load for any type of material. the maximum dry density and optimum moisture content values obtained from the compaction tests form the basis for determining many strength parameters of the waste material at various construction stages. various geotechnical parameters depend on the amount of water added to the material, as moisture largely controls the behaviour of soils, particularly fine-grained soils. hence, it is important to gain in-depth information on the compaction characteristics of rm for the benefit of its various applications as a geo-material. as very limited work was carried out in the past in india focusing on the measurement of the compaction characteristics of rm stabilised with other waste materials, an attempt was made to determine the compaction characteristics of rm combined with ggbs and alkaline activators by using a mini compaction apparatus. this apparatus allows a better study of the load and material behaviour thanks to the control of the loads applied in the study [15]. rm was replaced with 10 %, 20 % and 30 % of ggbs by dry weight; ggbs is commonly used as a stabilising material in many civil engineering applications because of its capability of increasing strength and bearing capacity and its good compaction behaviour [16]. naoh and na2sio3 were used as alkaline activators, which have been effectively proved to be good stabilisers for ggbs-mixed materials in the past [17]. alkaline activators in various proportions have been used with different industrial waste materials to increase their strength properties substantially and to use them effectively in the field of construction [18]-[20]. to understand the effect of alkaline activators on the compaction characteristics of rm, a series of standard and modified proctor compaction tests was performed on various combinations of rm, ggbs, naoh and na2sio3.

2. experimental investigations

rm for this study was collected from the waste disposal pond of hindustan aluminum corporation (hindalco), belgaum, karnataka, situated in the southern part of india. ggbs was procured from jsw cement limited. ggbs is produced as a by-product or waste from the blast furnaces during the process of making iron. figure 1 (a) and figure 1 (b) show the original images of oven-dried red mud and ggbs, while figure 1 (c) and figure 1 (d) present the scanning electron microscopy (sem) images of red mud and ggbs, respectively, acquired to understand the microstructure of these materials for their use as geo-materials in this study. the sem images of red mud show particles with a scattered behaviour, whereas the sem images of ggbs show smooth sharp edges, which indicates that a better compaction can be attained by using both materials with the appropriate amount of water and energy.
the microstructure of these two materials supports the current study on finding the compaction characteristics of rm with ggbs and the addition of water; experiments were also performed using an alkaline activator to achieve a better density of the material upon drying. the sodium hydroxide (naoh) pellets used for this study were purchased from prince chemical, bangalore, karnataka, india, which sells pellets with 98 % purity. based on many research findings, and prioritizing the economy and strength aspects, the molarity of the sodium hydroxide solution was chosen to be 8 m throughout the research. hence, according to the molarity calculation for preparing an 8 m sodium hydroxide solution, 320 g of sodium hydroxide pellets have to be dissolved in 1 l of water [8 mol/l × 40 g/mol = 320 g, where 40 g/mol is the approximate molecular weight of naoh]. the sodium silicate (na2sio3) solution was purchased from para fine chemical industries, bangalore, karnataka, india; it contains na2o = 14.78 %, sio2 = 29.46 % and water = 55.97 % by mass.
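the molarity calculation above can be expressed compactly; the following python sketch (illustrative, using only the 40 g/mol molecular weight stated in the text) reproduces it:

```python
# grams of naoh pellets to dissolve per litre of water for a given molarity.
NAOH_MOLAR_MASS = 40.0  # g/mol, approximate molecular weight of naoh

def naoh_grams_per_litre(molarity_mol_per_l: float) -> float:
    """mass of naoh (g) needed in 1 l of water."""
    return molarity_mol_per_l * NAOH_MOLAR_MASS

print(naoh_grams_per_litre(8))  # 320.0 g, as used throughout the study
```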
before finding the compaction characteristics of the alkaline activated, ggbs stabilised rm, the physical and geotechnical properties of rm and ggbs were determined in order to understand and classify the materials for a better justification of the compaction properties. according to astm d854, the specific gravities of rm and ggbs were determined by using a pycnometer, and the results are presented in table 1. the average of three trials was considered for all the tests to maintain accuracy in the results. compaction is considered a very important geotechnical property to obtain the maximum dry density and to know the optimum moisture content, so it is also important to understand both the physical and geotechnical properties of the waste materials used in this research work as a pre-requisite. the properties prove that rm can be effectively used in subgrades in place of virgin soil. according to astm d4318, d422, d1140 and d2487, the liquid limit, plastic limit, particle size distribution and soil classification were determined for both rm and ggbs, and the results are also presented in table 1 [21]-[25]. rm was classified as silt of low plasticity and ggbs as silt with low compressibility according to the unified soil classification system (uscs).

table 1. physical and geotechnical properties of rm and ggbs.

s.no.   property                   rm      ggbs
1       specific gravity           2.95    2.81
2       consistency limits:
        liquid limit (%)           43      32
        plastic limit (%)          38      np
        plasticity index           5       np
3       percentage fractions:
        sand                       3       75
        silt                       75      24
        clay                       22      1.0
4       uscs                       ml      ml

figure 1. (a) original image of red mud; (b) original image of ggbs; (c) sem image of red mud; (d) sem image of ggbs.

table 2. combinations and nomenclature used in the research work.

sl no.   combination             mixing agent                 nomenclature
1        90 % rm + 10 % ggbs     100 % distilled water        rgd1
2        80 % rm + 20 % ggbs     100 % distilled water        rgd2
3        70 % rm + 30 % ggbs     100 % distilled water        rgd3
4        90 % rm + 10 % ggbs     100 % naoh                   rga1
5        80 % rm + 20 % ggbs     100 % naoh                   rga2
6        70 % rm + 30 % ggbs     100 % naoh                   rga3
7        90 % rm + 10 % ggbs     90 % naoh + 10 % na2sio3     rga4
8        80 % rm + 20 % ggbs     90 % naoh + 10 % na2sio3     rga5
9        70 % rm + 30 % ggbs     90 % naoh + 10 % na2sio3     rga6
10       90 % rm + 10 % ggbs     80 % naoh + 20 % na2sio3     rga7
11       80 % rm + 20 % ggbs     80 % naoh + 20 % na2sio3     rga8
12       70 % rm + 30 % ggbs     80 % naoh + 20 % na2sio3     rga9
13       90 % rm + 10 % ggbs     50 % naoh + 50 % na2sio3     rga10
14       80 % rm + 20 % ggbs     50 % naoh + 50 % na2sio3     rga11
15       70 % rm + 30 % ggbs     50 % naoh + 50 % na2sio3     rga12

in this study [26], [27] the compaction characteristics of all the combinations were determined by using the mini compaction apparatus presented in figure 2. this method of compaction is only suitable for materials with a particle size of less than 2 mm, and it was adopted here as both the rm and ggbs particle sizes are less than 2 mm.

figure 2. mini compaction test apparatus.

the unique advantage of this apparatus is the low amount of material used for testing, i.e. up to 300 g instead of the 3 kg of soil generally used for the standard proctor compaction test. the amount of energy given by the number of blows can be easily calculated by using the mini compaction apparatus. it is difficult to accurately determine the amount of energy applied with respect to the number of blows by using the standard or modified proctor compaction apparatus, but this can be easily done with the mini compaction apparatus. a better accuracy of the results is also observed because less material is used and not much energy is applied. this method of compaction saves both the time and energy of a researcher compared to the traditional method of compaction, and the characteristics can be observed in a better way as the number of blows is changed. the same compacted samples, after trimming, can be used directly for finding the strength parameters of the material. both standard and modified compaction tests can be performed by using this mini compaction apparatus. the dimensions of this apparatus are very small: the internal diameter of the cylinder is 3.81 cm and the height of the cylinder is 8 cm. it was used by many researchers in the past for the easy conduction of compaction tests with less material usage and less labour and time [28].
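to illustrate how the energy per blow translates into a compaction energy per unit volume, the following python sketch implements the standard relation e = (blows × layers × rammer weight × drop height) / mould volume. only the mould dimensions (3.81 cm diameter, 8 cm height) come from the text; the rammer mass, drop height and layer count below are illustrative assumptions, not values from the paper:

```python
# hedged sketch of the blow-count to compaction-energy relation.
import math

def compaction_energy_kj_m3(n_blows, n_layers, rammer_mass_kg,
                            drop_height_m, mould_volume_m3):
    """energy per unit volume delivered to the specimen (kj/m3)."""
    g = 9.81  # m/s^2
    energy_j = n_blows * n_layers * rammer_mass_kg * g * drop_height_m
    return energy_j / mould_volume_m3 / 1000.0

# mini compaction mould from the text: 3.81 cm diameter, 8 cm height
volume = math.pi * (0.0381 / 2) ** 2 * 0.08  # ~9.1e-5 m^3
print(f"mould volume = {volume * 1e6:.1f} cm^3")
# hypothetical rammer (1 kg, 10 cm drop, 3 layers), 25 blows per layer
print(compaction_energy_kj_m3(n_blows=25, n_layers=3, rammer_mass_kg=1.0,
                              drop_height_m=0.10, mould_volume_m3=volume))
```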
3. results and discussion

table 3 shows the optimum moisture content (omc) and maximum dry density (mdd) obtained for the various combinations upon conducting an average of three trials each of standard proctor (sp) and modified proctor (mp) tests. the omc and mdd of rm are 34.39 % and 1.59 g/cc, respectively, whereas in the case of ggbs they are 25.63 % and 1.62 g/cc with distilled water. rm shows more reproducible values with the mp tests compared to the sp tests, with the omc and mdd noted as 28.65 % and 1.64 g/cc. this gives an initial idea of the material behaviour with the change of applied load and of the accuracy in omc and mdd. the results confirm that with the increase of compaction energy in rm, the dry density increases and the water content requirement decreases appreciably, which coincides with past research results [12]. an increase of up to 5 % in mdd and a decrease of up to 31 % in omc are observed between the standard and modified proctor compaction tests. this is because of the increase in the rammer weight for the modified compaction. standard compaction represents lightweight compaction (light tampers and rammers), while modified compaction represents rollers and other heavy compactors. the omc and mdd presented in table 3 for the various combinations show the influence of the alkaline activators on the ggbs stabilised rm. the ggbs replacement in the first three combinations shows a decrease in omc and an increase in mdd in both sp and mp tests, which indicates the sensitivity of ggbs stabilised rm to water. the addition of ggbs beyond 30 % does not show any significant effect on the compaction characteristics of rm, as confirmed in past studies, and the replacement was therefore limited to 30 % [29]; moreover, the omc and mdd values of rgd2 and rgd3 are almost similar, which confirms that a further increment in ggbs would not be beneficial with regard to stabilisation and cost. the increase in mdd with the addition of ggbs to rm may be due to the reduction of the clay fraction in rm, which reduces the particle movement resistance during compaction. the addition of naoh and na2sio3 shows a decrease in omc and an increase in mdd for all the cases of ggbs stabilised rm. the highest value of mdd was observed in combination rga12, with a value of 1.91 g/cc, which is an increase of up to 20 % compared to virgin rm with respect to the sp test; the same percentage increment in mdd was observed in the case of the mp test. omc was reduced by up to 50 % in combination rga12 with respect to virgin rm, in both sp and mp tests. an almost similar pattern of change in the omc and mdd percentages of all the combinations was observed in both sp and mp tests. omc remained the same and mdd decreased with the addition of naoh alone compared to water, which proves that naoh alone does not improve the dry density of ggbs stabilised rm and shows that silicates must be added to obtain a more effective dry density in all the combinations. a trend of increasing mdd and decreasing omc was observed with the addition of silicates to naoh in different ratios. both sp and mp tests confirm that the effect of the alkaline activator depends on the percentage addition of ggbs to rm. the change in omc and mdd with the change of the naoh/na2sio3 ratio was observed to be very small for a given ggbs replacement of rm. combinations rga10, rga11 and rga12 show a very good increase in mdd values, supporting that the 50:50 ratio of naoh to na2sio3 for 10 %, 20 % and 30 % replacement by ggbs gives the most effective results. in all the combinations, the effect of the alkaline activators largely depends on the amount of ggbs added to the rm, which may be due to the reaction of the minerals present in the ggbs with the naoh and na2sio3. according to irc sp:20-2002, a minimum mdd of 1.46 g/cc is required to use any material as an embankment fill or in any road construction [30]. in this research work all the trials exceed the minimum mdd required as per the irc specifications, which shows that alkaline activated ggbs stabilised rm satisfies the requirements for use in embankments, subject to further evaluation of the strength properties.

table 3. omc and mdd of alkaline activated ggbs stabilised red mud for both sp and mp tests.

combination   sp test omc (%)   sp test mdd (g/cc)   mp test omc (%)   mp test mdd (g/cc)
rgd1          30.65             1.60                 25.14             1.65
rgd2          29.64             1.61                 24.22             1.67
rgd3          27.88             1.62                 22.10             1.68
rga1          30.68             1.53                 25.15             1.58
rga2          27.38             1.51                 22.55             1.59
rga3          26.05             1.57                 21.10             1.62
rga4          28.04             1.65                 21.11             1.71
rga5          26.54             1.68                 20.24             1.72
rga6          25.91             1.71                 19.34             1.72
rga7          27.90             1.57                 20.90             1.64
rga8          26.40             1.68                 18.60             1.73
rga9          25.68             1.72                 17.61             1.77
rga10         26.56             1.78                 19.36             1.82
rga11         24.80             1.88                 17.90             1.92
rga12         21.62             1.91                 15.01             1.99
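as a quick arithmetic cross-check of the percentage changes quoted above, the following python sketch (not part of the original work) recomputes the rga12 gains from the mp-test values of table 3:

```python
# cross-check of the rga12 improvement versus virgin rm, mp-test values.
virgin_mp = {"omc": 28.65, "mdd": 1.64}   # virgin rm, mp test
rga12_mp = {"omc": 15.01, "mdd": 1.99}    # combination rga12, mp test

mdd_gain = (rga12_mp["mdd"] - virgin_mp["mdd"]) / virgin_mp["mdd"] * 100
omc_drop = (virgin_mp["omc"] - rga12_mp["omc"]) / virgin_mp["omc"] * 100
print(f"mdd increase: {mdd_gain:.1f} %")   # ~21 %, 'up to 20 %' in the text
print(f"omc reduction: {omc_drop:.1f} %")  # ~48 %, 'up to 50 %' in the text
```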
the effect of compaction energy, in the form of the number of blows, on the moisture content and dry density of untreated rm was studied by the research fraternity in the past. it was confirmed that an increase in the compaction energy results in a decrease in moisture content and an increase in dry density. in this study an attempt was made to evaluate the effect of compaction energy on ggbs stabilised rm by replacing 10 %, 20 % and 30 % of rm with ggbs, using distilled water. based on the previous study, the numbers of blows used for this research work are 12, 15, 18, 22, 25, 29, 33, 45 and 56, and the tests were conducted by using the mini compaction apparatus. table 4 depicts the effect of the compaction energy, converted from the number of blows, on the ggbs stabilised rm. it shows that the increase in ggbs leads to an increase in the dry density with the increase of the compaction energy. this confirms that the compaction energy has a significant impact on the moisture content (mc) and dry density (dd) of all the ggbs stabilised rm combinations, which also proves that effective compaction results in attaining a better dry density for any stabilised waste material.

table 4. effect of compaction energy on the density-water relationship of ggbs stabilised rm.

number of   compaction        rgd1                rgd2                rgd3
blows       energy (kj/m3)    mc (%)   dd (g/cc)  mc (%)   dd (g/cc)  mc (%)   dd (g/cc)
12          285               31.66    1.59       31.01    1.60       29.10    1.60
15          356               31.30    1.59       29.90    1.61       28.22    1.61
18          427               30.59    1.59       30.11    1.60       28.45    1.61
22          522               30.41    1.60       29.90    1.61       27.58    1.62
25          594               30.65    1.60       29.64    1.61       27.88    1.62
29          689               29.11    1.61       28.45    1.62       26.66    1.62
33          783               28.34    1.63       27.44    1.64       24.59    1.64
45          1068              26.40    1.64       25.81    1.66       23.56    1.66
56          2595              25.14    1.65       24.22    1.67       22.10    1.68
4. conclusions

in the current study, detailed compaction tests were performed on virgin rm, ggbs stabilised rm and alkaline activated ggbs stabilised rm samples. from the results it is concluded that the mp tests show better compaction characteristics compared to the sp tests, which highlights the effect of the compaction energy in increasing the density of the samples through the close packing of the fine particles present in rm and ggbs stabilised rm. ggbs acts as a good stabiliser for rm, satisfying the density required by the irc specifications for the construction of embankments and other filling layers. an increase in mdd and a decrease in omc were observed with the increase of the ggbs percentage added to rm in all the trials. the results show that the alkaline activator by itself has a minimal influence on the density of rm; its influence depends more on the amount of ggbs added to the rm, in both sp and mp tests. the outcome of this research work emphasizes that waste materials can be effectively utilized upon stabilisation with other suitable industrial by-products or waste materials and the available alkaline activators. further, the strength properties and leachate characteristics of these combinations can be studied in the future to improve their utilization in various civil engineering applications.

acknowledgement

the authors acknowledge the school of engineering and technology, christ university, for providing all the laboratory facilities to perform this study.

references
[1] s. abbas, m. a. saleem, s. m. s. kazmi, m. j. munir, production of sustainable clay bricks using waste fly ash: mechanical and durability properties, j. build. eng., 14 (2017), pp. 7–14. doi: 10.1016/j.jobe.2017.09.008
[2] f. a. kuranchie, s. k. shukla, d. habibi, utilisation of iron ore mine tailings for the production of geopolymer bricks, int. j. mining, reclam. environ., 30(2) (2016), pp. 92–114. doi: 10.1080/17480930.2014.993834
[3] f. noorbasha, m. manasa, r. t. gouthami, s. sruthi, d. h. priya, n. prashanth, m. z. ur rahman, fpga implementation of cryptographic systems for symmetric encryption, journal of theoretical and applied information technology, 95(9) (2017), pp. 2038-2045. doi: 10.1155/2021/6610655
[4] e. mukiza, l. l. zhang, x. liu, n. zhang, utilization of red mud in road base and subgrade materials: a review, resour. conserv. recycl., 141 (2019), pp. 187–199. doi: 10.1016/j.resconrec.2018.10.031
[5] p. v. v. kishore, a. s. c. s. sastry, z. ur rahman, double technique for improving ultrasound medical images, journal of medical imaging and health informatics, 6(3) (2016), pp. 667-675. doi: 10.1166/jmihi.2016.1743
[6] s. s. mirza, m. z. ur rahman, efficient adaptive filtering techniques for thoracic electrical bio-impedance analysis in health care systems, journal of medical imaging and health informatics, 7(6) (2017), pp. 1126-1138. doi: 10.1166/jmihi.2017.2211
[7] j. prabakar, n. dendorkar, r. k. morchhale, influence of fly ash on strength behavior of typical soils, constr. build. mater., 18(4) (2004), pp. 263–267. doi: 10.1016/j.conbuildmat.2003.11.003
[8] r. a. shaik, d. r. k. reddy, noise cancellation in ecg signals using normalized sign-sign lms algorithm, in 2009 ieee international symposium on signal processing and information technology (isspit), 2009, pp. 288-292. doi: 10.1109/isspit.2009.5407510
[9] r. j. haynes, reclamation and revegetation of fly ash disposal sites – challenges and research needs, j. environ. manage., 90(1) (2009), pp. 43–53. doi: 10.1016/j.jenvman.2008.07.003
[10] m. n. salman, p. t. rao, novel logarithmic reference free adaptive signal enhancers for ecg analysis of wireless cardiac care monitoring systems, ieee access, 6 (2018), pp. 46382-46395. doi: 10.1109/access.2018.2866303
[11] l. martins, a. ribeiro, m. c. almeida, j. a. sousa, bringing optical metrology to testing and inspection activities in civil engineering, acta imeko, 10(3) (2021), pp. 3-16. doi: 10.21014/acta_imeko.v10i3.1059
[12] n. gangadhara reddy, b. hanumantha rao, evaluation of the compaction characteristics of untreated and treated red mud, geotech. spec. publ., 272 (2016), pp. 23–32. doi: 10.1061/9780784480151.003
[13] s. k. rout, t. sahoo, s. k. das, design of tailing dam using red mud, cent. eur. j. eng., 3(2) (2013), pp. 316–328. doi: 10.2478/s13531-012-0056-7
[14] w. m. mayes, i. t. burke, h. i. gomes, á. d. anton, m. molnár, v. feigl, é. ujaczki, advances in understanding environmental risks of red mud after the ajka spill, hungary, j. sustain. metall., 2(4) (2016), pp. 332–343. doi: 10.1007/s40831-016-0050-z
[15] s. m. osman, r. kumme, h. m. ei-hakeem, f. loffler, e. h. hasan, r. m. rashad, f. kouta, multi capacity load cell prototype, acta imeko, 5(4) (2016), pp. 64-69. doi: 10.21014/acta_imeko.v5i3.310
[16] a. k. pathak, v. pandey, k. murari, j. p. singh, soil stabilisation using ground granulated blast furnace slag, j. eng. res. appl., 4(2) (2014), pp. 164–171.
[17] y. yi, c. li, s. liu, alkali-activated ground-granulated blast furnace slag for stabilization of marine soft clay, j. mater. civ. eng., 27(4) (2015), pp. 1–7. doi: 10.1061/(asce)mt.1943-5533.0001100
[18] m. mavroulidou, s. shah, alkali-activated slag concrete with paper industry waste, waste management and research, 39(3) (2021), pp. 466-472. doi: 10.1177/0734242x20983890
[19] s. a. bernal, e. d. rodriguez, r. m. de guteirrez, j. l. provis, s. delvasto, activation of metakaolin/slag blends using alkaline solutions based on chemically modified silica fume and rice husk ash, waste biomass valor., 3 (2012), pp. 99-108. doi: 10.1007/s12649-011-9093-3
[20] t. bhakarev, j. g. sanjayan, y. b. cheng, alkali activation of australian slag, cem. concr. res., 29(1) (1999), pp. 113-120. doi: 10.1016/s0008-8846(98)00170-7
[21] astm d854-14, standard test methods for specific gravity of soil solids by water pycnometer, annual book of astm standards, astm international, west conshohocken, pa, 4(8) (2014).
[22] astm d4318-10, standard test methods for liquid limit, plastic limit and plasticity index of soils, annual book of astm standards, astm international, west conshohocken, pa, 4(8) (2010).
[23] astm d422-63, standard test method for particle size analysis of soils, annual book of astm standards, astm international, west conshohocken, pa, 4(8).
[24] astm d1140-14, standard test methods for determining the amount of material finer than 75 micrometre (no. 200) sieve in soils by washing, annual book of astm standards, astm international, west conshohocken, pa, 4(8) (2014).
[25] astm d2487-11, standard practice for classification of soils for engineering purposes (unified soil classification system), annual book of astm standards, astm international, west conshohocken, pa, 04-08 (2011).
an iot measurement solution for continuous indoor environmental quality monitoring for buildings renovation

acta imeko issn: 2221-870x december 2021, volume 10, number 4, 230 - 238

serena serroni1, marco arnesano2, luca violini1, gian marco revel1 - 1 università politecnica delle marche, department of industrial engineering and mathematical science, via delle brecce bianche, 60131 ancona (an), italy; 2 università ecampus, via isimbardi 10, 22060 novedrate (co), italy. section: research paper. keywords: thermal comfort; indoor air quality; ieq; measurements; iot; building renovation. citation: serena serroni, marco arnesano, luca violini, gian marco revel, an iot measurement solution for continuous indoor environmental quality monitoring for buildings renovation, acta imeko, vol. 10, no. 4, article 35, december 2021, identifier: imeko-acta-10 (2021)-04-35. section editor: carlo carobbi, university of florence, gian marco revel, università politecnica delle marche and nicola giaquinto, politecnico di bari, italy. received october 10, 2021; in final form december 10, 2021; published december 2021. copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. funding: this work was supported by p2endure, european project horizon 2020. corresponding author: serena serroni, e-mail: s.serroni@pm.univpm.it

abstract: the measurement of indoor environmental quality (ieq) requires the acquisition of multiple quantities regarding thermal comfort and indoor air quality. ieq monitoring is essential to investigate the building's performance, especially when renovation is needed to improve energy efficiency and occupants' well-being. thus, ieq data should be acquired for long periods inside occupied buildings, but traditional measurement solutions may not be adequate. this paper presents the development and application of a non-intrusive and scalable iot sensing solution for continuous ieq measurement in occupied buildings during the renovation process. the solution is composed of an ir scanner for mean radiant temperature measurement and a desk node with environmental sensors (air temperature, relative humidity, co2, pms). the integration with a bim-based renovation approach was developed to automatically retrieve the building's data required for sensor configuration and kpi calculation. the system was installed in a nursery located in poland to support the renovation process. the ieq performance measured before the intervention revealed issues related to radiant temperature and air quality. using the measured data, interventions were realized to improve the envelope insulation and the occupants' behaviour. results from post-renovation measurements showed the ieq improvement achieved, demonstrating the impact of the sensing solution.

1. introduction the measurement of indoor environmental quality (ieq) requires the acquisition of multiple quantities regarding thermal comfort and indoor air quality. moreover, accurate monitoring and control of those environmental conditions can be useful for preventing the spread of covid-19. however, about 75 % of the european building stock was built before 1990, before any eu building regulation [1] and in a climate context that has changed over the last decades [2]. thus, most occupied buildings are unable to maintain the required comfort conditions because of the poor performance of their envelopes and heating/cooling systems [3]. the importance of indoor environmental quality (ieq) is a well-known and widely discussed theme because of its impact on human comfort, well-being, productivity, learning capability and health [4].
ieq derives from the combination of different factors influencing the human comfort sensation: thermo-hygrometry, acoustics, illumination and the concentration of pollutant components [5]. all those aspects should be considered at the same level of importance, given that humans spend roughly 90 % of their time indoors, especially after the covid-19 spread. in addition, a recent study [6] demonstrated that indoor environmental factors, such as temperature, humidity, ventilation, and filtering systems could have a significant influence on the infection. several studies showed a correlation between the concentration of air pollutants, especially particulate matter 2.5 (pm2.5) and particulate matter 10 (pm10), and covid-19 virus transmission [7]. for this reason, current buildings' renovation approaches and trends are including the ieq in the renovation assessment with an increased role of importance [8]. recently, the standard en 16798 [9] has been released, in substitution of the en 15251, which provides a framework for the building performance assessment concerning the indoor environment. en 16798 provides methodologies for the calculation of ieq metrics for buildings' classification, based on environmental measurements [10]. in that standard, thermal comfort is assessed according to well-known predictive and adaptive approaches, as defined in iso 7730 [11] and ashrae 55. however, the same level of detail is not given to indoor air quality, visual comfort, and acoustic comfort. in their critical review [12], khovalyg et al. remarked that iso and en standards should include more requirements on pm. similarly, another critical investigation on ieq data collection and assessment criteria, presented in [13], revealed that sensor technology and data analysis are mainly applied to thermal comfort, while the other ieq domains have not been addressed with the same effort.
for this reason, the assessment of the ieq domains not considered by the standards has been conceptualized within recent research activities [14], also with a holistic approach that groups domains together [15]. even if the importance of ieq has been largely demonstrated, current measurement tools are not adequate because of the need to measure several environmental quantities with a high temporal and spatial resolution. traditional spot measurement tools are bulky and require a strong human effort for data processing and analysis, so they cannot be used for responsive strategies that improve ieq by implementing retrofit actions (envelope insulation, mechanical ventilation) or triggering occupants' behaviours (window opening, thermostat regulation). recently, the use of sensors integrated into building management systems (bms) has also been investigated for ieq monitoring [16]. however, those systems generally make use of wall-mounted sensors providing environmental quantities measured away from the real location of the occupants. therefore, the data could be representative of only a small part of the building. optimization of the sensing location can be performed for some quantities, such as air temperature [17], but the same optimization is difficult for quantities that present relevant deviations within the same room, as is the case for the mean radiant temperature. the mean radiant temperature is not homogeneous in the room and depends on the indoor walls' temperatures [18]. in the proximity of a glazed surface or of a poorly insulated wall, the mean radiant temperature presents higher variations with respect to the inner side of the same room [19]. iso 7726 [20] provides two methods for measuring the mean radiant temperature: i) the globe thermometer, which is basically a temperature sensor located at the centre of a matt black sphere; ii) the angle-factors approach, based on the walls' temperature measurements. the globe thermometer is widely used for spot measurements. it is intrusive, provides data related only to the position where it is located, and one measurement point may not be enough to determine the real room's comfort; thus, it is not the preferred solution for continuous ieq measurements. the second approach, based on the angle factors, can provide a higher resolution once the problem of measuring the walls' thermal maps is solved. to this scope, revel et al. [21] developed an infrared (ir) scanner that provides continuous measurement of indoor wall temperatures for mean radiant temperature calculation according to iso 7726. the proposed sensor turned out to provide a measurement accuracy of ± 0.4 °c with respect to traditional microclimate stations [22]. the ir scanner was integrated with a desk node that measures the air temperature and relative humidity to build the solution named comfort eye, a sensor for multipoint thermal comfort measurement in indoor environments. that system provides a solution to the problem of measuring real-time thermal comfort with an increased spatial resolution. the impact on heating control efficiency was demonstrated by the experimental testing presented in [23]. this paper presents the new development of the comfort eye for the continuous monitoring of ieq for buildings renovation. indoor air quality (iaq) sensors have been integrated into the desk node to provide measurements of co2 and pms. moreover, a led lighting system has been embedded in the desk node to provide occupants with feedback about the actual status of the indoor air quality.
an internet of things (iot) architecture has been developed to allow remote configuration and data exchange. interoperability with bim (building information model) has been developed to automatically retrieve the building's data (e.g., floor area, geometry, material emissivity, occupancy, etc.) needed for sensor configuration, metrics calculation and performance assessment. the most important differences compared to the previous version presented in [21] are: a new ir sensor; new sensors for iaq, in particular for co2 and pms; an updated, plug&play iot architecture; and the integration with the bim to configure the sensors and the kpi calculation automatically. the advanced version of the comfort eye was developed within the framework of the european project p2endure, which aims at including the ieq in the renovation process, in all the stages of the 4m (mapping, modelling, making, monitoring) process. this means that a protocol for the ieq monitoring and assessment has been developed, allowing: the accurate evaluation of the ieq performance of the building as it is, to feed the design stage with the suggestions needed to achieve the optimal ieq level after the intervention; and the post-renovation monitoring, to verify the achievement of ieq compliance according to the issues revealed during the mapping. the comfort eye was applied for the continuous ieq monitoring of a building located in gdynia (poland) before and after renovation works. the aspects of ieq that are taken into consideration, using the comfort eye and exploiting all its potential, are thermal comfort and iaq (co2 and pm); results from the field application are reported. this paper demonstrates the applicability and advantages of the developed system with the application to a real case study. a pre and post renovation analysis was performed to validate the developed monitoring methodology. the monitoring allowed the thermal performance of the building to be quantified in the pre and post renovation phases, the main causes of discomfort to be identified, and the iaq to be assessed. the specific goals of the paper are: • to present the measurement system, the sensor specifications, the iot architecture and the integration with the bim; • to propose an ieq monitoring and assessment protocol that extends the en 16798 approach to include pms; • to demonstrate the applicability and advantages of the proposed measurement device with the application to a real case study. 2. materials and methods 2.1. the measurement technology the comfort eye is an iot sensor composed of two nodes: the ceiling node, with the ir sensor to measure wall thermal maps and the mean radiant temperature, and the desk node, with sensors to measure air temperature, relative humidity, co2 and pms. it can provide the whole thermal dynamic behaviour of the walls for continuous and real-time thermal monitoring of buildings. being a prototype, the comfort eye has a production cost of less than 100 € for each node. this work aims to explore and deepen the functionality of the comfort eye that allows the measurement of the data necessary for the ieq. 2.1.1. comfort eye - ceiling node the ceiling node is the innovation of the comfort eye. it is a 3d thermal scanner that measures the temperature maps of all the room's indoor surfaces by means of a 2-axis rotating ir sensor (figure 1).
it is installed on the ceiling of the room and is composed of a 16x4 thermopile array, meaning that each acquired frame is a map of 64 temperatures. with a horizontal field of view (hfov) of 60° and a vertical field of view (vfov) of 16°, an area of 1.15 x 0.56 m2 is scanned with one frame on a wall at one metre of distance from the sensor. the tilt movement, from 0° to 180°, provides the full vertical scanning of the wall, with the possibility to measure the floor and ceiling temperatures. to provide the scan of all the surfaces, a continuous 360° pan movement is available. the device includes a custom mainboard with a microcontroller, programmed with dedicated firmware, to perform the automatic scanning of all the room's surfaces by controlling the pan and tilt servos. the i2c communication protocol is used to acquire data from the ir sensor. the ir scanner requires cabling for a 12 v power supply, while the communication is performed with a wi-fi module that is integrated into the mainboard. the scanning process produces one thermal map for each wall, which is the result of the concatenation of multiple acquisitions. given the sensor's fov, the installation point and the room geometry, the reconstruction of the wall thermal map is performed to remove the overlapping pixels derived from the vertical concatenation and to remove pixels related to the neighbouring walls. concerning the surface emissivity, a correction of the raw ir measurement is implemented [24]. the complete procedure for map correction is detailed in [21]. the ir data are acquired continuously, processed and stored in a database in real time. the corrected thermal maps are then used for a two-fold scope: measurement of the mean radiant temperature for thermal comfort evaluation and measurement of the building envelope's thermal performance [21]. the mean radiant temperature is measured for several locations (e.g. near and far from the window) with the angle-factors method, as presented in iso 7726 [20].
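to make the angle-factors calculation concrete, the sketch below computes the mean radiant temperature as the fourth root of the angle-factor-weighted mean of the fourth powers of the absolute surface temperatures, which is the relation behind the iso 7726 method. it is a minimal illustration, not the comfort eye software; the function name and the example temperatures and angle factors are illustrative assumptions.

```python
# minimal sketch of the iso 7726 angle-factor method: the mean radiant
# temperature is the fourth root of the angle-factor-weighted mean of the
# fourth powers of the surrounding surface temperatures (in kelvin).
def mean_radiant_temperature(surface_temps_c, angle_factors):
    """surface_temps_c: mean temperature of each surface in degrees celsius
    (e.g. from the corrected ir thermal maps); angle_factors: view factor
    between the occupant position and each surface; they must sum to 1."""
    if abs(sum(angle_factors) - 1.0) > 1e-3:
        raise ValueError("angle factors must sum to 1")
    tr4 = sum(f * (t + 273.15) ** 4
              for t, f in zip(surface_temps_c, angle_factors))
    return tr4 ** 0.25 - 273.15  # back to degrees celsius

# example: a point near a cold external wall in winter
# (six surfaces: four walls, floor and ceiling; values are hypothetical)
temps = [16.0, 21.0, 21.5, 21.0, 20.0, 22.0]
factors = [0.30, 0.12, 0.12, 0.12, 0.17, 0.17]
print(round(mean_radiant_temperature(temps, factors), 1))
```

moving the evaluation point away from the cold wall reduces its angle factor and raises the computed mean radiant temperature, which is why the multipoint capability matters.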
2.1.2. comfort eye - desk node a desk node is used to acquire environmental quantities for thermal comfort and indoor air quality (iaq) assessment (figure 1). an integrated sensor, the sensirion scd30, allows the single-point measurement of the air temperature (ta), relative humidity (rh) and co2. the desk node also integrates a pm sensor, the sensirion sps30 (table 1). the sensors' data are acquired via the i2c interface, and a wi-fi module, the same used for the ceiling node, provides wireless communication. the data are acquired continuously, processed and stored in the database in real time. the desk node must be installed in a position representative of the room's environmental conditions, away from heat sources, direct solar radiation, air draughts, zones of stagnant air or other sources that could disturb the measurements. installation is done by an experienced technician. if possible, the sensor is fixed; otherwise, the occupants are informed to prevent the sensor from being moved or covered. the air quality measurement system has been proposed for real-time, low-cost and easy-to-install air quality monitoring. it provides precise and detailed information about the air quality of the living environment and helps to plan interventions that improve the air quality. in crowded closed environments such as classrooms, offices or meeting rooms, co2 values of between 5000 ppm and 6000 ppm can be reached in the case of limited ventilation. to have good air quality, the limit of 1000 ppm of co2 must not be exceeded [25]. an efficient iaq monitoring system should detect any change in the air quality, give feedback about the measured values of co2 to the users, and trigger the necessary mechanisms, if available, such as automatic/natural ventilation and fresh air, to improve performance and protect health. the desk node communicates the measured values of co2 to the users in real time, simply and intuitively, through different colours of the leds: green, yellow and red (green for good air quality and red for bad air quality). a value of co2 < 700 ppm is represented by green leds: the co2 values are acceptable, there is good air quality, and it is not necessary to ventilate the environment. a value of co2 between 700 ppm and 1000 ppm is represented by yellow leds: the co2 values are very close to the limit value (1000 ppm) and it is recommended to ventilate the environment. a value of co2 > 1000 ppm is represented by red leds: the value of co2 has exceeded the limit value and it is necessary to ventilate the environment [26].

table 1. specifications of sensors used by the desk node of the comfort eye (sensirion scd30/sps30).
quantity | range | accuracy | repeatability
co2 in ppm | 0 – 40 000 | ± (30 ppm + 3 % mv) | ± 10 ppm
rh in % rh | 0 – 100 | ± 3 | ± 0.1
ta in °c | -40 – +70 | ± [0.4 + 0.023 (t − 25 °c)] | ± 0.1
pm2.5 in µg/m³ | 0 – 1000 | ± 10 | –
pm10 in µg/m³ | 0 – 1000 | ± 10 | –

figure 1. comfort eye: a) ceiling node and b) desk node.

2.1.3. system architecture the comfort eye's nodes can be connected through the local wi-fi network to the remote server, where data are sent, processed and then stored in a mysql database. figure 2 shows the general architecture. the communication module used is the pycom w01. it is an efficient module which reduces power consumption, as it implements bluetooth low energy and supports the message queuing telemetry transport (mqtt) protocol, which allows byte-only communication between the client and server, giving a light communication suitable also for low wi-fi signal conditions. to integrate the pycom w01, a custom printed circuit board (pcb) was developed. the mqtt communication protocol is used, and each node is programmed to be a publisher and subscriber, as shown in figure 2. the first step to start the monitoring is the device configuration with a dedicated mobile application. configuration data are sent from the mobile device: server information, local wi-fi credentials and room tag. once the connection is established, the desk node sends raw data directly to the server, where a subscriber function provides the processing and storing actions. for the ceiling node, the scanning procedure is configured using the geometrical data of the room. the geometry is retrieved from the bim, as explained in the next subsection. once the tilt and pan angles have been defined and published to the sensors, the data acquisition starts. the raw data are sent to the server and processed by a subscriber. the processed data are stored in a mysql database. the whole monitoring process takes place continuously and in real time. the data stored in the database are then available for being consumed.

figure 2. general architecture of comfort eye.
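to illustrate the publisher/subscriber pattern and the led thresholds described above, the following sketch emulates a desk-node sample being published and received; it is written with the paho-mqtt python client for readability (the real nodes run firmware on the pycom w01), and the broker address, topic name and payload format are illustrative assumptions, not those of the actual system.

```python
import json
import time
import paho.mqtt.client as mqtt

BROKER = "server.example.org"    # illustrative broker address
TOPIC = "comforteye/room1/desk"  # illustrative topic name

def led_colour(co2_ppm):
    # thresholds from section 2.1.2: green = good air quality,
    # yellow = ventilation recommended, red = ventilation necessary
    if co2_ppm < 700:
        return "green"
    if co2_ppm <= 1000:
        return "yellow"
    return "red"

def publish_sample(client, ta, rh, co2, pm25, pm10):
    # the desk node publishes one raw sample; processing and storage
    # happen in a subscriber function on the server side
    payload = json.dumps({"ta": ta, "rh": rh, "co2": co2,
                          "pm2_5": pm25, "pm10": pm10,
                          "led": led_colour(co2)})
    client.publish(TOPIC, payload)

def on_message(client, userdata, message):
    # server-side subscriber: decode the sample and store it
    sample = json.loads(message.payload)
    print("store to mysql:", sample)  # placeholder for the insert query

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER)
client.subscribe(TOPIC)
client.loop_start()
publish_sample(client, ta=21.5, rh=45.0, co2=850, pm25=6.0, pm10=12.0)
time.sleep(1.0)  # give the network loop time to deliver the message
client.loop_stop()
```

here the same client plays both roles for brevity; in the deployed system the node publishes and the server subscribes, matching the split shown in figure 2.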
a dashboard for thermal performance assessment and data exploration has also been developed. the dashboard is a web-app accessible through any browser. the data processing core is served by a restful application programming interface (api) service running on the server and called with standard get requests. the user can view the measured data via the web-app, obtaining information by consulting the related kpis, and can have direct feedback on the iaq by observing the led colour of the desk node installed in the building. the two means of communication are not correlated and are independent of each other. 2.1.4. bim integration the building's data are required to configure the ir scanning process, to perform the geometrical map corrections and to calculate the ieq kpis. to reduce the manual operations, an integration with the bim was developed. the bim model of a building includes most of the required data, but not those related to the comfort eye installation. for this reason, the comfort eye bim object was developed. the object can be imported into the bim and added to each room where the sensor is installed. then the ifc file can be exported and processed with a dedicated software tool based on the ifcopenshell python library [27]. the software automatically retrieves the data and stores them in the mysql database that is used for the automatic configuration of the comfort eye. the scope of using bim data is three-fold: i) to correct the ir thermal maps using the room geometry, the sensor position and the wall emissivity; ii) to allow the application of the angle-factors method for the mean radiant temperature measurement according to iso 7726 [20]; iii) to calculate the ieq kpis weighted by floor area. thus, the bim integration provides a way to configure the sensor automatically and therefore reduce the installation time. 2.2. ieq measurement and assessment in addition to the standard temporal series of data, the comfort eye can provide long-term indicators and kpis to be used for thermal comfort and iaq assessment according to the en 16798 methodology, which is based on indicators and their boundaries for the building's classification [9]. there are four categories (i, ii, iii, iv) for classification: category i is the highest level of expectation and may be necessary if the building houses occupants with special requirements (children, elderly, occupants with disabilities, etc.); category iv is the worst. to provide a good level of ieq, a building is expected to operate at least as category ii. for each ieq aspect, an hourly or daily indicator $I_{h,i}$ is measured. for every room, the number of occupied hours outside the range, $h_{or,i}$, is calculated as the number of hours when $I_{h,i} \geq I_{\lim}$, where $I_{\lim}$ is the limit of the indicator determined by the targeted category. thus, the percentage of occupied hours outside the range, $POR_i$ (in %), is calculated as

$POR_i = \frac{h_{or,i}}{h_{tot}} \cdot 100\ \%$ , (1)

where $h_{tot}$ (in hours) is the total number of occupied hours during the considered period. the por of the entire building is finally calculated as the average of the $POR_i$ of each room, weighted according to the floor area:

$POR = \frac{\sum_{i=1}^{n} POR_i \cdot S_i}{\sum_{i=1}^{n} S_i}$ , (2)

where n is the total number of rooms in the building and $S_i$ is the floor area of the i-th room. the por is then used to determine the kpi. a por of 0 % corresponds to a kpi of 100 %, which is the best value achievable; a kpi equal to 100 % means that the indicator is always within the limits of the targeted category. a maximum deviation of 5 % of the por is considered acceptable. thus, for a por equal to or higher than 5 %, the corresponding kpi is 0 % (worst case), which means the building is operating for a significant period outside the limits of the targeted category. to define an assessment scale, a linear interpolation between the minimum por (5 %) and the best por (0 %) is recommended, to determine a scale of the kpi between 0 % (worst case) and 100 % (best case).
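a minimal sketch of the por and kpi calculation of equations (1) and (2) and of the 5 % to 0 % linear interpolation described above, assuming the hourly indicator series, category limits and occupancy information are available from the database (function and variable names are illustrative):

```python
def por(hourly_indicator, limit, occupied_mask):
    """percentage of occupied hours outside the range, equation (1):
    hourly_indicator holds one value per hour, limit is the category
    limit i_lim, occupied_mask is true for occupied hours."""
    occupied = [v for v, occ in zip(hourly_indicator, occupied_mask) if occ]
    outside = sum(1 for v in occupied if v >= limit)
    return 100.0 * outside / len(occupied)

def building_por(room_pors, floor_areas):
    # floor-area weighted average over the rooms, equation (2)
    total = sum(floor_areas)
    return sum(p * s for p, s in zip(room_pors, floor_areas)) / total

def kpi(por_value, max_deviation=5.0):
    # linear scale: por = 0 % -> kpi = 100 %; por >= 5 % -> kpi = 0 %
    return max(0.0, 100.0 * (1.0 - por_value / max_deviation))

# example: two rooms of 50 m2 and 30 m2 with pors of 2 % and 6 %
# give a building por of 3.5 % and hence a kpi of 30 %
print(kpi(building_por([2.0, 6.0], [50.0, 30.0])))
```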
the analysis is performed considering only the main occupied rooms (e.g., bedrooms, living room, kitchen), and does not consider short-term-occupancy and transit areas (e.g., bathrooms, corridors, small storage areas). moreover, given the fact that en 16798 does not cover all the ieq aspects, such as pm, additional indicators are included. the following subsections present in detail each indicator used for the ieq assessment. the methodology provides indicators for the calculation of the kpis for the thermal comfort and iaq assessment. 2.2.1. thermal comfort according to iso 7730 [11] and iso 7726 [20], "a human being's thermal sensation is mainly related to the thermal balance of his or her body as a whole. this balance is influenced by physical activity and clothing, as well as the environmental parameters": air temperature (ta), mean radiant temperature (tr), air velocity (va), relative humidity (rh), clothing insulation (icl) and metabolic rate (m). in a moderate environment, the human thermoregulatory system will automatically attempt to modify skin temperature and sweat secretion to maintain the heat balance. as the definition given in iso 7730 implies, thermal comfort is a subjective sensation, and it can be expressed with the mathematical pmv model, which is a function of four environmental parameters and two personal ones:

$PMV = f(T_a, T_r, v_a, RH, I_{cl}, M)$ . (3)

the air velocity, based on the room ventilation system, is set to 0.05 m/s. the metabolic rate (m) is set as a function of the usual occupants' activity, using the table of typical metabolic rates available in iso 7730 [11]. the insulation level (icl) of the clothing generally worn by the room occupants is set to 0.5 clo for the summer season and 0.9 clo for the winter season, using the table of typical clothing insulations available in iso 7730 [11]. the pmv predicts the mean value of the votes of a group of occupants on a seven-point thermal sensation scale. as the predicted quality of the indoor thermal environment increases, the pmv value gets closer to 0 (neutral thermal environment). a limit needs to be set on the hourly pmv values to identify the number of occupied hours outside an acceptable comfort range. iso 7730 defines the pmv limit for each category: category i is specified as -0.2 < pmv < +0.2, category ii as -0.5 < pmv < +0.5, category iii as -0.7 < pmv < +0.7 and category iv as -1 < pmv < +1. the benchmark is done considering a threshold of a maximum of 5 % of operating hours outside the pmv range. the best performance is achieved when there is no deviation outside the design range, so a linear interpolation between 5 % and 0 % is done.
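for reference, the pmv function of equation (3) can be evaluated with the algorithm given in iso 7730, which first solves iteratively for the clothing surface temperature and then sums the heat-balance terms of the body. the sketch below follows that published algorithm; the example call uses the air velocity, clothing and metabolic values quoted above, with hypothetical temperatures.

```python
from math import exp, sqrt

def pmv_iso7730(ta, tr, vel, rh, clo, met, wme=0.0):
    """predicted mean vote per iso 7730: ta/tr in degrees celsius,
    vel in m/s, rh in %, clo in clo, met in met, wme = external work."""
    pa = rh * 10.0 * exp(16.6536 - 4030.183 / (ta + 235.0))  # vapour pressure, pa
    icl = 0.155 * clo   # clothing insulation, m2 k / w
    m = met * 58.15     # metabolic rate, w/m2
    mw = m - wme * 58.15            # internal heat production
    fcl = 1.0 + 1.29 * icl if icl <= 0.078 else 1.05 + 0.645 * icl
    hcf = 12.1 * sqrt(vel)          # forced convection coefficient
    taa, tra = ta + 273.0, tr + 273.0
    # iterative solution for the clothing surface temperature
    tcla = taa + (35.5 - ta) / (3.5 * icl + 0.1)
    p1 = icl * fcl
    p2 = p1 * 3.96
    p3 = p1 * 100.0
    p4 = p1 * taa
    p5 = 308.7 - 0.028 * mw + p2 * (tra / 100.0) ** 4
    xn, xf = tcla / 100.0, tcla / 50.0
    for _ in range(150):
        if abs(xn - xf) <= 0.00015:
            break
        xf = (xf + xn) / 2.0
        hcn = 2.38 * abs(100.0 * xf - taa) ** 0.25  # natural convection
        hc = max(hcf, hcn)
        xn = (p5 + p4 * hc - p2 * xf ** 4) / (100.0 + p3 * hc)
    tcl = 100.0 * xn - 273.0
    # heat losses: skin diffusion, sweating, latent and dry respiration,
    # radiation and convection
    hl1 = 3.05e-3 * (5733.0 - 6.99 * mw - pa)
    hl2 = 0.42 * (mw - 58.15) if mw > 58.15 else 0.0
    hl3 = 1.7e-5 * m * (5867.0 - pa)
    hl4 = 0.0014 * m * (34.0 - ta)
    hl5 = 3.96 * fcl * (xn ** 4 - (tra / 100.0) ** 4)
    hl6 = fcl * hc * (tcl - ta)
    ts = 0.303 * exp(-0.036 * m) + 0.028  # thermal sensation coefficient
    return ts * (mw - hl1 - hl2 - hl3 - hl4 - hl5 - hl6)

# winter settings used in the paper: 0.9 clo, 1.2 met, 0.05 m/s
print(round(pmv_iso7730(ta=21.0, tr=19.0, vel=0.05, rh=45.0, clo=0.9, met=1.2), 2))
```

the hourly pmv values computed this way are then checked against the category limits above to obtain the por of equation (1).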
2.2.2. indoor air quality iaq is known to have acute and chronic effects on the health of the occupants; it is generally expressed in terms of the co2 concentration and of the ventilation required to contain the co2 levels and reduce the concentration of indoor air pollutants dangerous for human health [28]. continuous co2 monitoring can provide a comprehensive and straightforward way to assess and measure improvements in building ventilation. the iaq assessment methodology, which provides the co2 kpi, is defined by en 16798, and, where relevant and needed, the iaq monitoring is enhanced with the monitoring of pm. the iaq measurement shall be made where the occupants are known to spend most of their time. the kpi aggregates the values at the building level to provide an overall value, but to identify critical issues it is recommended to analyse all the room values. the hourly co2 concentration values above outdoor are assessed against a safety threshold to identify the number of hours outside an acceptable comfort range, and the room values are aggregated through a floor-area-weighted average. the co2 limits are 550 ppm for category i, 800 ppm for category ii, 1350 ppm for category iii and greater than 1350 ppm for category iv. according to en 16798, an acceptable amount of deviation is 5 % of occupied hours. the best performance is achieved when there are no deviations outside the design limit. to define an assessment scale, a linear interpolation between the minimum (5 %) and the best performance (0 %) is done. pm is a widespread air pollutant, consisting of a mixture of solid and liquid particles suspended in the air. commonly used indicators describing pm that are relevant to health refer to the mass concentration of particles with a diameter of less than 10 µm (pm10) and of particles with a diameter of less than 2.5 µm (pm2.5). the methodology for the pm2.5 and pm10 assessment is provided by epa victoria (australia). the categories are obtained according to the hourly and 24-hour rolling average pm concentrations, as shown in table 2 and table 3 [29]. an acceptable amount of deviation from the optimal category (very good) is 5 %. the best performance is achieved when there are no deviations outside the design limit. to define an assessment scale, a linear interpolation between the minimum (5 %) and the best performance (0 %) is done.

table 2. pm2.5 concentration for ieq classification.
category | 24-hr pm2.5 µg/m³ | 1-hr pm2.5 µg/m³
very good | 0 - 8.2 | 0 - 13.1
good | 8.3 - 16.4 | 13.2 - 26.3
fair | 16.5 - 25.0 | 26.4 - 39.9
poor | 25.1 - 37.4 | 40 - 59.9
very poor | 37.5 or greater | 59.9 or greater

table 3. pm10 concentration for ieq classification.
category | 24-hr pm10 µg/m³ | 1-hr pm10 µg/m³
very good | 0 - 16.4 | 0 - 26.3
good | 16.5 - 32.9 | 26.4 - 52.7
fair | 33 - 49.9 | 52.8 - 79.9
poor | 50 - 74.9 | 80 - 119.9
very poor | 75 or greater | 120 or greater

3. experimental application the comfort eye was installed in two rooms of a nursery in gdynia, and the data collection started in july 2018. the scope of the monitoring was the assessment of the envelope performance and of the ieq before and after the renovation works (figure 3). the demonstration building is a two-storey kindergarten attended by about 130 children. it was constructed in 1965 and has had the function of kindergarten from the beginning. the building volume is 2712 m3 and the built-up area is 464 m2. the main goal of the demonstration was to minimize the energy consumption, especially for heating needs, through the retrofitting of the envelope (adding an insulation layer), implementing new windows and improving the aesthetic appearance of the envelope. a comprehensive thermal insulation retrofit should create a continuous insulated envelope around the living accommodation and, ideally, avoid any residual thermal bridging of the structure. the focus of the renovation works was to achieve the targeted quality and performance while improving the ieq.
4. discussion of results the monitoring started in july 2018; the comfort eye is still installed and is still acquiring data. the acquired data were analysed for the winter and summer seasons, before (pre) and after (post) renovation, for the assessment of the thermal comfort and iaq building performance and for the validation of the improved ieq. the renovation works were done from august to october 2019. the periods selected for the ieq analysis are august 2018 and august 2020 for the summer season in the pre and post renovation states respectively, and february 2019 and february 2020 for the winter season in the pre and post renovation states respectively. the most significant weeks of the selected periods for the analysis are reported with the relative average outdoor temperature in table 4. the weather data were gathered from the station igdyni26 of the weather underground web service. the building is equipped with a centralized heating system with radiators in each room, without a thermostat. no cooling system is present. 4.1. thermal comfort concerning thermal comfort, the pmv model was applied using a metabolic rate of 1.2 met (typical office/school activity) and a clothing insulation of 0.9 clo, suitable for the end use of the building and the typical climate in poland in the winter season, and a metabolic rate of 1.2 met with a clothing insulation of 0.5 clo for the summer season. to compare the pre and post thermographic maps, days with the same average outside temperature have been selected: 6 °c (26/02/2019-20) and 25 °c (09/08/2018-20) for winter and summer respectively. the recap of the kpis calculated for the winter and summer seasons is shown in table 5. the mean radiant temperature (tr) pre and post renovation is shown in figure 4. figure 5 shows the thermal maps, maintaining the same scale in the pre and post renovation phases to show the different wall temperatures reached. figure 5 (a-b) shows the thermal maps of the winter season, pre and post renovation, and in figure 6 (a-b) it is also possible to observe the trend of the temperature of the wall, of the window and of the internal temperature. although the external temperatures were the same, it can be seen from the graphs and the thermographic maps that the wall temperature pre renovation is colder than post renovation. the monitoring provided evidence that the renovation has increased the insulation capacity of the building. before renovation, the building operated in the winter season in categories iv, iii and ii, with a kpi of 0 %. after renovation, it operated most of the time in category i, with a kpi of 100 %. this indicates uncomfortable conditions pre renovation and comfortable conditions post renovation. figure 5 (c-d) shows the thermographic maps of the summer season, and in figure 6 (c-d) it is also possible to observe the trend of the temperature of the wall, the window and the internal air temperature. the wall post renovation is colder than the wall pre renovation, which confirms the better insulation after the works. in figure 5 (c-d) the hottest areas represent the window. figure 3. gdynia before and after the renovation, comfort eye installed in the building. figure 4. tr measured with the comfort eye before and after renovation. table 4. selected periods for the summer and winter seasons, pre and post renovation states respectively, and outdoor mean temperature.
season | state | period | outdoor mean temperature
summer | pre | 6/8/2018 - 12/8/2018 | 21.3 °c
summer | post | 3/8/2020 - 9/8/2020 | 19.0 °c
winter | pre | 25/2/2019 - 3/3/2019 | 2.9 °c
winter | post | 24/2/2020 - 1/3/2020 | 2.8 °c

for the summer season, the building operated in categories iv, iii and ii before renovation and in categories iii, ii and i after renovation. in any case, a kpi of 0 % was registered, and this is due to the absence of a cooling system. thus, only a slight improvement in thermal comfort was registered for the summer season. the thermal comfort requirements were satisfied post renovation. in the pre renovation state, however, the detailed analysis of the radiant temperatures revealed a clear problem due to the temperature of the wall exposed to the exterior which, being lower than the other surfaces' temperatures and the air temperature, caused a lower mean radiant temperature (see figure 4). the consequence of the lower mean radiant temperature was uncomfortable conditions, resulting in a poor kpi. the post renovation analysis proved that the intervention mitigated that problem thanks to the installation of the envelope panels, which increased the wall insulation with a consequent minor drift of the indoor surfaces' temperatures. 4.2. iaq table 5 shows the iaq building performance pre and post renovation. considering the winter season, the co2 kpi pre renovation is 0 % and improved to 20 % post renovation. the kpi of pm2.5 is 0 % for both pre and post renovation. the kpi for pm10 is 40 % pre renovation and 60 % post renovation. no ventilation was installed before or after the renovation. the iaq kpis registered only a slight improvement, which was caused by the introduction of the communication with the leds in the post renovation phase. the led light was able to change colour as a function of the co2 and, as shown in figure 7, it was able to trigger the window opening by the occupants when poor air quality was put in evidence by the light. this underlines how a simple communication can improve the air quality by changing the habits of the occupants. considering the summer season, the co2 kpi is 30 % pre renovation and 75 % post renovation. the kpi of pm2.5 is 85 % pre renovation and 50 % post renovation. the kpi for pm10 is 90 % and 60 % pre and post respectively. in this case, the final iaq is provided by the natural ventilation due to the opened windows. concerning the pms, high levels of concentration were registered; the source of those concentrations requires further investigation. lower levels of pm2.5 and pm10 were measured in the summer period. the installation of a ventilation system with air purifiers could be considered to mitigate the problem. the results derived from the case study demonstrate the feasibility of the proposed innovative iot system for the continuous measurement of all the parameters required for an accurate analysis of the ieq. this system allows long-term monitoring to be carried out and, therefore, the ieq to be evaluated in the most significant seasons.

figure 5. thermographic image pre and post renovation for the summer and winter season. figure 6. window, wall and air temperature pre and post renovation for the summer and winter season.

table 5. ieq kpis.
kpi | summer pre | summer post | winter pre | winter post
thermal comfort | 0 % | 0 % | 0 % | 100 %
iaq-co2 | 30 % | 75 % | 0 % | 20 %
iaq-pm2.5 | 85 % | 50 % | 0 % | 0 %
iaq-pm10 | 90 % | 60 % | 40 % | 60 %
in the presented application, both communication means were used. the occupants were able to take advantage of the real-time iaq advice given by the leds. the web-app was used to generate seasonal reports with the ieq kpis, which were used to evaluate the optimal renovation strategy and the impact of the renovation works. 5. conclusion the comfort eye is an innovative and non-intrusive solution allowing the implementation of an iot sensing system for a complete, continuous and long-term monitoring of the ieq. together with the sensing device, a set of performance indicators has been developed to assess the overall quality in terms of thermal comfort and indoor air quality. this paper demonstrates the applicability and advantages of the proposed measurement device with the application to a real case study. a pre and post renovation analysis was performed to validate the developed monitoring methodology. the ceiling node, given its high spatial and temporal resolution compared with traditional tools, provides two main advantages. first, multipoint measurements of the mean radiant temperature with only one sensor: given the geometry of the room, several locations can be chosen to apply the calculation of the mean radiant temperature (e.g., near and far from the window). second, together with the information about comfort, thermal maps of the indoor surfaces are available and can be used to track the thermal performance of the building envelope (e.g., tracking the surface temperature of the external wall, recognizing cool zones, etc.). the ir scanner of the comfort eye turned out to be useful to accurately detect the thermal comfort issues because of its capability of measuring thermal maps and mean radiant temperatures. to this aim, the accuracy and completeness of the bim data, such as material emissivity and geometry, are pivotal and should be guaranteed by a standardized bim modelling approach. the monitoring allowed the causes of the thermal discomfort to be identified. in the pre-renovation phase the building operated for a significant period (more than 5 %) out of the targeted category, with a kpi of 0 % (worst conditions). the mean radiant temperature was the most significant cause of discomfort: the building had poor thermal insulation of the walls. the monitoring has confirmed that, by working on the causes, the performance of the building can be improved, with a consequent minor drift of the indoor surfaces' temperatures. in the post renovation phase the building operates within the limits of the targeted category, with a kpi of 100 %. the desk node allows a more detailed view of the performance of the building, providing information regarding the air quality. furthermore, the large set of information available from the measured data can be used to identify ieq problems and their origin. such advanced knowledge of the building performance allows a better design of the renovation. concerning the iaq, the monitoring demonstrated that the building operates in both the pre and post renovation phases outside the limit of the targeted category (more than 5 %). no mechanical ventilation system was installed with the renovation, and the installation of a ventilation system with air purifiers could be considered to mitigate the problem. the iaq kpis registered only a slight improvement, which was caused by the introduction of the communication with leds in the post-renovation phase.
the capability of communicating the status of the indoor air quality with a simple led colour based on real-time measurements triggered occupants' actions with the scope of restoring the required environmental quality. future developments will provide improvements of the measurement system to enhance the plug&play installation and to include additional sensors for the evaluation of other ieq aspects, such as visual and acoustic comfort.

figure 7. co2 concentration measured with the comfort eye before and after renovation.

acknowledgement this research has received funding from the p2endure research project (https://www.p2endureproject.eu/en), co-financed by the european union within the h2020 framework programme with contract no. 723391. the authors want to thank the project partners for the useful discussions and collaboration. references [1] eu building stock observatory. online [accessed 17 december 2021] https://ec.europa.eu/energy/topics/energy-efficiency/energy-efficient-buildings/eu-bso_en [2] a. g. kwok, n. b. rajkovich, addressing climate change in comfort standards, building and environment, vol. 45, issue 1, 2010, pp. 18-22. [3] a. h. yousef, m. arif, m. katafygiotou, a. mazroei, a. kaushik, e. elsarrag, impact of indoor environmental quality on occupant well-being and comfort: a review of the literature, international journal of sustainable built environment, vol. 5, 2016, pp. 1-11. doi: 10.1016/j.buildenv.2009.02.005 [4] i. mujan, a. s. anđelković, v. munćan, m. kljajić, d. ružić, influence of indoor environmental quality on human health and productivity - a review, journal of cleaner production, vol. 217, 2019, pp. 646-657. doi: 10.1016/j.jclepro.2019.01.307 [5] e. oldham, h. kim, ieq field investigation in high-performance, urban elementary schools, atmosphere, 2020, 11, 81. doi: 10.3390/atmos11010081 [6] k. azuma, n. kagi, h. kim, m. hayashi, impact of climate and ambient air pollution on the epidemic growth during covid-19 outbreak in japan, environ res., 2020, 190, 110042. doi: 10.1016/j.envres.2020.110042 [7] m. a. zoran, r. s. savastru, d. m. savastru, m. n. tautan, assessing the relationship between surface levels of pm2.5 and pm10 particulate matter impact on covid-19 in milan, italy, science of the total environment, volume 738, 2020, 139825. doi: 10.1016/j.scitotenv.2020.139825 [8] a. m. atzeri, f. cappelletti, a. tzempelikos, a. gasparella, comfort metrics for an integrated evaluation of buildings performance, energy and buildings, volume 127, 2016, pp. 411-424. doi: 10.1016/j.enbuild.2016.06.007 [9] cen, en 16798 indoor environmental input parameters for design and assessment of energy performance of buildings addressing indoor air quality, thermal environment, lighting and acoustics, cen, 2019. [10] a. kylili, p. a. fokaides, p. a. l. jimenez, key performance indicators (kpis) approach in buildings renovation for the sustainability of the built environment: a review, renewable and sustainable energy reviews, vol. 56, 2016, pp. 906-915.
doi: 10.1016/j.rser.2015.11.096 [11] iso, iso 7730 ergonomics of the thermal environment - analytical determination and interpretation of thermal comfort using calculation of the pmv and ppd indices and local thermal comfort criteria, international standardization organization, geneva, 2005. [12] d. khovalyg, o. b. kazanci, h. halvorsen, i. gundlach, w. p. bahnfleth, j. toftum, b. w. olesen, critical review of standards for indoor thermal environment and air quality, energy and buildings, vol. 213, 2020. doi: 10.1016/j.enbuild.2020.109819 [13] y. song, f. mao, q. liu, human comfort in indoor environment: a review on assessment criteria, data collection and data analysis methods, ieee access, vol. 7, 2019, pp. 119774-119786. doi: 10.1109/access.2019.2937320 [14] l. claudi, m. arnesano, p. chiariotti, g. battista, g. m. revel, a soft-sensing approach for the evaluation of the acoustic comfort due to building envelope protection against external noise, measurement, vol. 146, 2019, pp. 675-688. doi: 10.1016/j.measurement.2019.07.003 [15] t. s. larsen, l. rohde, k. t. jønsson, b. rasmussen, r. l. jensen, h. n. knudsen, t. witterseh, g. bekö, ieq-compass - a tool for holistic evaluation of potential indoor environmental quality, building and environment, vol. 172, 2020, 106707. doi: 10.1016/j.buildenv.2020.106707 [16] b. d. hunn, j. s. haberl, h. davie, b. owens, measuring commercial building performance - protocols for energy, water, and indoor environmental quality, ashrae journal, 54 (7), 2012, pp. 48-59. [17] f. seri, m. arnesano, m. m. keane, g. m. revel, temperature sensing optimization for home thermostat retrofit, sensors, 2021, 21, 3685. doi: 10.3390/s21113685 [18] i. atmaca, o. kaynakli, a. yigit, effects of radiant temperature on thermal comfort, building and environment, vol. 42, issue 9, 2007, pp. 3210-3220. doi: 10.3390/buildings11080336 [19] g. gan, analysis of mean radiant temperature and thermal comfort, building services engineering research and technology, 2001, 22(2), pp. 95-101. doi: 10.1191/014362401701524154 [20] iso, iso 7726 ergonomics of the thermal environment - instruments for measuring physical quantities, international standardization organization, geneva, 2002. [21] g. m. revel, e. sabbatini, m. arnesano, development and experimental evaluation of a thermography measurement system for real-time monitoring of comfort and heat rate exchange in the built environment, measurement science and technology, 2012, 23(3). doi: 10.1088/0957-0233/23/3/035005 [22] g. m. revel, m. arnesano, f. pietroni, development and validation of a low-cost infrared measurement system for real-time monitoring of indoor thermal comfort, measurement science and technology, vol. 25(085101), 2014. doi: 10.1088/0957-0233/25/8/085101 [23] l. zampetti, m. arnesano, g. m. revel, experimental testing of a system for the energy-efficient sub-zonal heating management in indoor environments based on pmv, energy and buildings, vol. 166, 2018, pp. 229-238. doi: 10.1016/j.enbuild.2018.02.019 [24] x. p. maldague, theory and practice of infrared technology for nondestructive testing, wiley-interscience, 2001, isbn: 978-0-471-18190-3. [25] health canada, residential indoor air quality guidelines: carbon dioxide, 2021. [26] rehva - federation of european heating, ventilation and air conditioning associations, co2 monitoring and indoor air quality. online [accessed 17 december 2021] https://www.rehva.eu/rehva-journal/chapter/co2-monitoring-and-indoor-air-quality [27] ifcopenshell academy.
online [accessed 17 december 2021] https://academy.ifcopenshell.org/ [28] p. wargocki, d. p. wyon, j. sundell, g. clausen, p. o. fanger, the effects of outdoor air supply rate in an office on perceived air quality, sick building syndrome (sbs) symptoms and productivity, indoor air 10 (2000), pp. 222-236. doi: 10.1034/j.1600-0668.2000.010004222.x [29] epa victoria, air pollution in victoria - a summary of the state of knowledge, publication 1709, august 2018.

progress towards in-situ traceability and digitalization of temperature measurements

acta imeko issn: 2221-870x march 2023, volume 12, number 1, 1 - 6

jonathan pearce1, radka veltcheva1, declan tucker1, graham machin1 - 1 national physical laboratory, hampton road, teddington, tw11 0lw, united kingdom. section: research paper. keywords: temperature; thermometry; traceability; primary thermometry; process control; digitalization. citation: jonathan pearce, radka veltcheva, declan tucker, graham machin, progress towards in-situ traceability and digitalization of temperature measurements, acta imeko, vol. 12, no. 1, article 4, march 2023, identifier: imeko-acta-12 (2023)-01-04. section editor: daniel hutzschenreuter, ptb, germany. received november 7, 2022; in final form march 24, 2023; published march 2023. copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: jonathan pearce, e-mail: jonathan.pearce@npl.co.uk

abstract: autonomous control systems rely on input from sensors, so it is crucial that the sensor input is validated to ensure that it is 'right' and that the measurements are traceable to the international system of units. the measurement and control of temperature is widespread, and its reliable measurement is key to maximising product quality, optimising efficiency, reducing waste and minimising emissions such as co2 and other harmful pollutants. degradation of temperature sensors in harsh environments such as high temperature, contamination, vibration and ionising radiation causes a progressive loss of accuracy that is not apparent. here we describe some new developments to overcome the problem of 'calibration drift', including self-validating thermocouples and embedded phase-change cells, which self-calibrate in situ by means of a built-in temperature reference, and practical primary thermometers such as the johnson noise thermometer, which measure temperature directly and do not suffer from calibration drift. all these developments provide measurement assurance, which is an essential part of digitalisation to ensure that sensor output is always 'right', as well as providing essential 'points of truth' in a sensor network. some progress in the digitalisation of calibrations to make them available to end-users via a website and/or an application programming interface is also described.

1. introduction the control and monitoring of temperature is a key part of almost every technological process. the thermodynamic temperature of a system is related to the average kinetic energy of the constituent particles of the system. however, this cannot be measured directly, so another parameter which varies with temperature, such as the speed of sound in a gas, must be measured and then related to the temperature through well-understood physics. in general, such an approach to temperature measurement is very complicated, time consuming and expensive, and is not currently well suited to practical thermometry. most thermometry therefore makes use of practical sensors such as thermocouples and resistance thermometers.
these yield a temperature-dependent property such as voltage or resistance, which must then be related to temperature by comparison with a set of known temperatures, i.e. a calibration. the global framework for approximating the si unit of temperature, the kelvin, is the international temperature scale of 1990 (its-90) [1]. the measurement infrastructure that makes this possible is maintained by national metrology institutes (nmis), who perform periodic global comparisons of their own standards to ensure the equivalence of thermometry worldwide. these standards are then used to provide calibrations to end-users which are traceable to the its-90, and hence the si kelvin. in this way an end-user may be confident that their temperature measurements are globally equivalent. a key drawback of this empirical approach to thermometry arises when the sensing region of the thermometer is degraded in use, for example by exposure to high temperatures, contamination, vibration, ionising radiation and other factors: the relationship between the thermometer output and its temperature then changes in an unknown way. this is referred to as 'calibration drift', and it is insidious because there is no indication in process that it is occurring. this is a big problem for applications where temperature monitoring and control is critical, such as long-term monitoring (e.g. nuclear waste storage), or where processes need to operate within a narrow temperature window (e.g. aerospace heat treatment). the result is often reduced safety margins, sub-optimal processing, lower efficiency, increased emissions, and higher product waste or rejection. in this article some new developments led by the uk's national physical laboratory (npl), in collaboration with industry partners, are described to overcome the problem of calibration drift and to provide assurance that temperature sensor output is valid - a key part of increasingly widespread digitalisation to ensure sensor output is 'right' by providing in-situ validation.
these include self-validation and in-process calibration, which provide traceability to the si kelvin at the point of measurement, and practical primary thermometry, which measures temperature directly, has no need for calibration, and does not suffer from calibration drift. additionally, other promising practical primary thermometry techniques are outlined, namely doppler broadening thermometry, ring resonator thermometry, and whispering gallery mode thermometry. these are collectively referred to as 'photonic thermometers' due to their use of electromagnetic radiation. acoustic thermometry is also briefly discussed. various groups worldwide, including npl, are working on elevating the technological readiness of these techniques. finally, some developments in the digitalisation of calibrations are described, including automation, web-based access, and steps towards the implementation of a standardised digital calibration certificate. these will substantially reduce the amount of paperwork and the opportunities for operator error, and will facilitate the digital transfer of calibrations and traceability for paperless audit trails. 2. self-validation thermocouples are very mature and well established, and are widely used in industry. however, they are particularly susceptible to drift of the calibration in harsh environments, whereby the relationship between emf and temperature changes in an unpredictable manner. this gives rise to a progressive, and unknown, temperature measurement error, which in turn causes degradation of the process monitoring and control. the drift can be monitored in situ by using a miniature phase-change cell (fixed point) in close proximity to the measurement junction (tip) of the thermocouple [2]. the fixed point is a very small crucible containing an ingot of metal (or metal-carbon alloy [3] or organic material [4]) with a known melting temperature. the latest devices developed by npl are able to accommodate the entire thermocouple and fixed-point assembly within a protective sheath of outer diameter 7 mm; the cell is typically about 4 mm in diameter and 10 mm in length. importantly, this means that the self-validating thermocouple presents the same external form factor and appearance as a regular process-control thermocouple. it is also, of course, fully compatible with existing connections and electronics. a self-validating thermocouple is shown in figure 1. in use, when the process temperature being monitored passes through the melting temperature of the ingot, the thermocouple output exhibits a 'plateau' during melting, because the heat of fusion of the ingot restrains any further temperature rise by ensuring that incoming heat from the surroundings is absorbed, driving the phase change. once the ingot is completely melted, the indicated temperature resumes its upward trend. as the melting temperature of the ingot is known, having been traceably calibrated a priori, the thermocouple can be recalibrated in situ. the question of the stability of the melting temperature of the miniature fixed point is important to consider, since it has the potential to inadvertently introduce further calibration drift. in fact, the fixed point is inherently stable, and it has been shown experimentally during the development of the devices that, in typical applications, the drift of the fixed point itself is negligible in the context of thermocouple measurement uncertainties. contamination is by far the most likely cause of drift.
as a general rule of thumb, 1 part per million of contamination by impurities gives rise to about 0.001 °c change in the melting temperature. so far, no evidence of measurable drift of the miniature fixed points has been found, even in quite harsh environments such as aerospace heat treatment processes. calculations indicate that contamination by transmutation in ionising radiation environments is even less important in most situations, although the extreme case of operation in the core of a nuclear reactor may cause significant drift [5].

a typical output of a self-validating thermocouple during the recalibration process is shown in the lower panel of figure 1. this device has been extensively characterised [6] and has been licensed by npl to the uk thermocouple manufacturer ccpi europe, under the tradename inseva [7], who are conducting a series of trials in high value manufacturing industries at several plants in the uk and in europe. typical fixed-point materials for these applications include ag (962 °c), au (1064 °c), cu (1084 °c), fe-c (1153 °c) and co-c (1324 °c).

figure 1. top: self-validating thermocouple with protective sheath. image courtesy of ccpi europe. bottom: melting curve (indicated temperature vs. time) observed during the recalibration of the inseva self-validating thermocouple, here using a gold ingot (melting temperature 1064.18 °c).

a similar concept has been employed for an application in space-borne instrumentation, where the phase-change cell is part of the system whose temperature is to be measured. such an embedded fixed point has been demonstrated by npl in collaboration with ral space on a prototype blackbody calibrator designed for operation as part of a spacecraft-borne earth observation instrument suite [5]. the phase-change cell, containing approximately 2 g of gallium (melting point 29.7646 °c), is embedded in the aluminium blackbody calibrator base, close to an embedded platinum resistance thermometer (prt). this enables the in-situ recalibration of the prt in orbit. in this application, some key developments included a mechanism to promote reliable freezing of the gallium without necessitating a large supercool (gallium is prone to cooling several degrees below its freezing temperature before nucleation is triggered), and a mechanism for preventing mechanical contact between the gallium ingot and the stainless steel cell wall, thereby avoiding the possibility of long-term contamination of the ingot and hence a change of its melting temperature. the ingot is shown in figure 2.

it can be seen in the lower panel of figure 2 that the remotely located prt is able to indicate clearly defined melting curves with a useful duration of several hours, and a melting temperature range of less than 0.01 °c. by calibrating the phase-change cell against npl's reference standard gallium cell, it is possible to perform in-situ traceable calibrations of the prt on board the spacecraft with an expanded uncertainty of less than 0.01 °c.

figure 2. top: phase-change cell embedded in the blackbody calibrator base; the adjacent prt is to the left; inset shows a photograph of the phase-change cell. bottom: melting curves (temperature vs. elapsed time) observed during the in-situ calibration of the prt using the miniature embedded phase-change cell, showing the narrow melting range and excellent reproducibility.

for both the self-validating thermocouples and the embedded phase-change cell techniques, vigorous efforts are ongoing to automate the detection of the melting plateau, and, once detected, to characterise the 'fixed point' representing the invariant part of the melting curve.
this is challenging to implement algorithmically in a manner sufficiently robust against noise and spurious artefacts in the data, but it is essential for autonomous in-situ recalibration. npl has had some success with a supervised learning approach (machine learning) on training data obtained from an industrial trial of self-validating thermocouples in a heat treatment application, which yielded a large, high quality data set. this algorithm was then shown to work well on data that was not part of the training set, yielding typical expanded uncertainties in the melting point determination of about 0.5 °c (gold fixed point) and 1.0 °c (silver fixed point). here and in the following, expanded uncertainties are taken to correspond to a coverage factor k = 2, i.e. a coverage probability of 95 %. note that it is unlikely these uncertainties will be further reduced by improvements to the algorithm, because they are dominated by experimental considerations associated with the physical measurement setup. instead, algorithm development should focus on reliability and the ability to characterise the plateau under adverse conditions such as noise, spurious artefacts, and faint signals. in other words, a demonstrable 'all weather' capability is needed. hence, while the new machine learning algorithm shows promise, it will need to be tested on a diverse set of data to demonstrate its universal applicability.

the detection of characteristic shapes such as melting curves is essentially a pattern recognition problem. this is easy for humans because of the extraordinary sophistication of the visual cortex, but it is not practical for conventional computer programming approaches, and in general machine learning or other artificial intelligence applications are needed to tackle this problem, together with good quality data for development and validation of the techniques.
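to give a flavour of the processing involved, the sketch below detects a candidate melting plateau in an indicated-temperature record by thresholding the local gradient, and derives an in-situ correction as the difference between the known ingot melting temperature and the plateau estimate. this is a minimal sketch of the general idea only, not the npl supervised-learning algorithm; the function name, gradient threshold and minimum run length are illustrative choices.

```python
import numpy as np

def recalibration_offset(t_s, temp_c, t_melt_c, grad_thresh=0.002, min_len=30):
    """estimate an in-situ correction from a melting plateau.

    t_s, temp_c : time (s) and indicated temperature (deg c) samples
    t_melt_c    : known melting temperature of the ingot (deg c)
    grad_thresh : max |dT/dt| (deg c/s) for a sample to count as 'flat'
    min_len     : minimum number of consecutive flat samples for a plateau
    """
    grad = np.gradient(temp_c, t_s)        # local heating rate
    flat = np.abs(grad) < grad_thresh      # candidate plateau samples
    best, start, n = (0, 0), None, len(flat)
    for i in range(n + 1):                 # longest run of flat samples
        if i < n and flat[i]:
            if start is None:
                start = i
        elif start is not None:
            if i - start > best[1] - best[0]:
                best = (start, i)
            start = None
    i0, i1 = best
    if i1 - i0 < min_len:
        raise ValueError("no plateau of sufficient length found")
    plateau = np.median(temp_c[i0:i1])     # robust plateau temperature
    return t_melt_c - plateau              # correction to apply in situ
```

for the inseva gold cell described above, t_melt_c would be 1064.18 °c. a production algorithm must of course also cope with noise, spurious artefacts and faint signals, which is exactly where the machine learning approach earns its keep.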
3. practical primary thermometry

the limitations of conventional temperature sensors, which rely on calibration prior to use and hence are prone to calibration drift, have led to renewed interest in practical primary thermometry. primary thermometers measure some property that can be related to temperature directly through well-understood physics, and do not require a temperature scale or calibration. in addition, if all parameters needed to infer the temperature are measured simultaneously, the sensor is not subject to calibration drift, since any change in the sensor material is accounted for in the measurement. examples include acoustic thermometry (measuring the speed of sound) and johnson noise thermometry (measuring the temperature-dependent voltage arising from the thermal motion of charge carriers in a resistor).

to turn one of these into a practical, commercially available reality, npl has been collaborating with metrosol limited to develop a practical johnson noise thermometer [8]-[10]. the johnson noise voltage is related to temperature, $T$, by nyquist's relation:

$\langle V_T^2 \rangle = 4 k T R \, \Delta f$ , (1)

where $\langle V_T^2 \rangle$ is the mean squared johnson noise voltage, $k$ is the boltzmann constant, $R$ is the sensor resistance and $\Delta f$ is the frequency bandwidth, which is a function of the sensing electronics and cables. importantly, if $R$ is measured at the same time as the johnson noise voltage, then all relevant properties of the sensing resistor are measured, and so even if the sensor is degraded the thermodynamic temperature is always known.

the johnson noise voltage is minuscule, and measuring it requires robust immunity to electromagnetic and electronic interference, arising from both external and internal influences. this is achievable by good design. a key challenge is the need for very high amplification of the noise signal. its measurement in the presence of the inevitable electrical noise generated by the pre-amplifiers can be done with the use of correlation, whereby the signal is split into two different channels. only the measured signal which is the same on both channels (i.e. the johnson noise) is 'let through'. a drawback of this approach with conventional designs is that it results in excessively long correlation times of minutes to hours, depending on the required uncertainty. on the other hand, industrial measurements typically require timescales of a few seconds.

using the nyquist equation (1) requires a knowledge of the bandwidth, which in reality is unknowable. the equation is therefore generally used in ratio form at two different temperatures: the sensor temperature to be determined, and a known reference temperature. the nyquist equation can hence be expressed as:

$T = T_0 \left( \frac{V}{V_0} \right)^2 \frac{R_0}{R}$ , (2)

where $T_0$, $V_0$ and $R_0$ are the reference temperature, johnson noise voltage and resistance respectively. in general, it is very inconvenient to maintain a known reference temperature, because it is impossible to match the frequency response of the two measurement circuits, and the resulting mismatch causes excessive measurement errors over the frequency range required for the current fast response time application. various approaches have been employed to overcome this, including the use of a synthesised noise signal from, for example, a josephson array. while extremely accurate, this approach is also not feasible, as it requires complicated low temperature equipment.

the npl/metrosol collaboration makes use of a quasi-random synthetic reference signal which is generated a priori. this reference signal is then superimposed on the measurement signal so that they both experience the same frequency response of the measurement electronics. the composite signal (superposition of johnson noise and calibration 'tones') can then be decomposed with signal processing in the frequency domain, and their ratio determined in order to deduce the temperature of the sensor resistor. a further advantage of this mechanism is its high tolerance to a highly non-flat, non-linear frequency response, and so a much higher bandwidth (up to 1 mhz) can be employed than in previous systems. this translates directly to shorter measurement times and hence faster response times, since more signal can be averaged in the same amount of time.
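to make the ratio method concrete, the sketch below evaluates (1) and (2) numerically. it illustrates the equations only, not the npl/metrosol signal-processing chain, and the function names and example values are hypothetical.

```python
import numpy as np

K_B = 1.380649e-23  # boltzmann constant, J/K (exact in the revised si)

def expected_noise_voltage(t_k, r_ohm, bandwidth_hz):
    """rms johnson noise voltage from (1): <V_T^2> = 4 k T R df."""
    return np.sqrt(4 * K_B * t_k * r_ohm * bandwidth_hz)

def jnt_temperature(v_rms, r_ohm, v0_rms, r0_ohm, t0_k):
    """temperature from the ratio form (2): T = T0 (V/V0)^2 (R0/R).

    the unknowable bandwidth and the gain of the electronics cancel in
    the ratio, provided both signals see the same frequency response,
    which is why the synthetic reference is superimposed on the
    measurement channel.
    """
    return t0_k * (v_rms / v0_rms) ** 2 * (r0_ohm / r_ohm)

# a 1 kohm sensor at 400 K over a 1 MHz bandwidth gives ~4.7 uV rms,
# illustrating why heavy amplification and correlation are needed
print(expected_noise_voltage(400.0, 1e3, 1e6))
```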
johnson noise thermometry has until recently been the preserve of large national laboratories, due to the extreme difficulty of isolating the minuscule johnson noise voltage from the far larger external noise sources and the internal noise generated by the electronic components [11]. the development of a practical thermometer has been elusive and so far none has reached the market, but the current npl/metrosol collaboration has now developed a working thermometer with unprecedented immunity to external electrical interference. the current prototype is shown in figure 3. it has now passed the most stringent electrical immunity standard test, iec 61000-4-3 [12]. the accuracy depends on the measurement duration; for an averaging period of about 5 s the expanded measurement uncertainty is ± 0.5 °c. the most obvious application is as a replacement for thermocouples where appreciable long-term drift is unacceptable. efforts are now focused on increasing the maximum temperature range beyond about 150 °c and improving the electronics and signal processing.

figure 3. prototype practical johnson noise thermometer developed by metrosol in collaboration with npl. the sensing electronics are housed in the container to the right; the probe extends to the left.

further developments in the pipeline include demonstration of the feasibility of photonic-based 'lab on a chip' thermometry approaches for in-situ traceability to the kelvin. three approaches in various stages of investigation by npl and its collaborators to facilitate direct in-situ traceability are doppler broadening, ring-resonator, and whispering gallery thermometry [13].

doppler broadening thermometry (dbt) is based on the measurement of the doppler profile of a molecular or atomic absorption line of a gas in thermodynamic equilibrium. the absorption line shape is dominated at low pressure by doppler broadening and has a gaussian profile corresponding to the maxwell-boltzmann distribution of velocities of gas particles along a laser beam axis. in practice various physical effects such as collisions distort the line profile somewhat, but the theory of this is well understood and the absorption line shape may be fitted by a parameterised model. the doppler half-width at half-maximum, $\Delta\nu_D$, is related to the temperature $T$ by:

$\Delta\nu_D = \frac{\nu_0}{c} \sqrt{\frac{2 \ln 2 \, k T}{M}}$ , (3)

where $\nu_0$ is the line-centre frequency, $c$ is the speed of light, and $M$ is the absorber mass. two key challenges currently being addressed are a) reducing the amount of ancillary equipment needed for implementing the technique and b) miniaturisation of the sensing element.
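equation (3) can be inverted to give the thermodynamic temperature directly from a fitted doppler width. the sketch below performs only that last step; the real effort in dbt lies in the line-shape fit that yields the width, which is not shown, and the function name is illustrative.

```python
import numpy as np

K_B = 1.380649e-23  # boltzmann constant, J/K
C_0 = 299792458.0   # speed of light, m/s

def dbt_temperature(delta_nu_d_hz, nu0_hz, absorber_mass_kg):
    """invert (3): T = M c^2 (delta_nu_D / nu0)^2 / (2 ln 2 k).

    delta_nu_d_hz    : fitted doppler half-width at half-maximum (Hz)
    nu0_hz           : line-centre frequency (Hz)
    absorber_mass_kg : mass of the absorbing molecule or atom (kg)
    """
    return (absorber_mass_kg * C_0**2 * (delta_nu_d_hz / nu0_hz)**2
            / (2 * np.log(2) * K_B))
```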
ring-resonator (rr) thermometry essentially utilises a closed-loop optical waveguide which is optically coupled to a second, adjacent, non-closed waveguide (separated by an air gap) via evanescence. the 'ring' or 'loop' enables propagation of circular electromagnetic waves with a characteristic resonance at a wavelength, $\lambda_m$, given by:

$m \cdot \lambda_m = n_\mathrm{eff} \cdot L$ , (4)

where the integer $m$ represents the resonance mode, $n_\mathrm{eff}$ is the effective refractive index of the waveguide and $L$ is the round-trip length of the loop. the temperature dependence of the refractive index and of the physical dimensions of the ring enables the use of the device as a thermometer, by measuring the temperature-dependent shift in the wavelength given in (4). in practice, the change in refractive index per unit temperature is a factor of approximately 100 larger than the thermal expansion coefficients of the materials involved, so the latter may be ignored. the technique is readily miniaturised and has good resistance to chemical contamination. it also offers some of the lowest uncertainties of all the practical primary thermometry techniques, although great care is required during the fabrication process to avoid imperfections.

whispering gallery mode (wgm) thermometers trace their ancestry to precision clock oscillators. in essence they are stable microwave resonators arranged such that a symmetric dielectric medium such as a cylinder or disk is suspended in the centre of a metal cavity. the electromagnetic field in the microwave region is coupled to an external waveguide to excite the resonant frequencies. the frequencies of these resonant 'whispering gallery' modes exhibit temperature dependence and may be related to temperature through an understanding of the associated physics, which enables the use of the device for thermometry.

acoustic gas thermometry is also a candidate for practical primary thermometry. the speed of sound in a gas depends on the temperature and may be related to the gas temperature through well understood physics. by using an acoustic resonator which 'rings' like a bell when excited appropriately with loudspeakers, and by characterising the resulting changes in the geometry of the device using microwaves to understand the resonant modes, an extremely accurate thermometer can be constructed. such a device was used to determine the boltzmann constant with unmatched accuracy as part of the global endeavour to redefine the kelvin in terms of fundamental constants [14].

4. digitalisation of calibrations

for many years the results of thermometer calibrations have been printed on paper and issued to the customer. recently, however, there is a trend towards digitalisation of the calibrations, so that the results are available online or in electronic files. this is very important functionality for many users; for example, in aerospace organisations, where measurements are subject to significant regulatory compliance and demonstration thereof under frameworks such as ams2750, which regulates heat treatment of metallic materials [15], it is very difficult to work with paper certificates. one successful approach has been that of ccpi europe, who have fully automated certification with their pyrotag™ system [16].

digitalisation of calibrations has numerous benefits, including the reduction in operator errors (e.g. through manual data entry), the removal of paper-based processes and transactions, and easier management for asset managers, calibration managers, and technical staff. it reduces time and cost through a paperless system, offers secure storage and retrieval of information, and is audit-ready to demonstrate traceability compliance. it also presents some infrastructural challenges, including the way the data is presented, the internal mechanisms in the calibration laboratory for enabling digitalisation, and the security of the information, required to ensure that only the intended recipients have access.

npl has embarked on a programme to automate, as far as possible, its thermometer calibrations and the generation of calibration certificates, and to make them, and the associated data and metadata, available online via a secure website. the certificates will be machine readable (xml). the results will also be available through an application programming interface (api), allowing integration with customers' own software. a key aim is ultimately to integrate this capability with the international digital calibration certificate (dcc), whose format is currently undergoing development [17].
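as a flavour of what a machine-readable certificate might look like, the sketch below assembles a minimal calibration record as xml. it is deliberately not the dcc schema, whose format is still under development [17]; all element names and numerical values here are hypothetical.

```python
import xml.etree.ElementTree as ET

# purely illustrative structure -- not the dcc schema, which is
# still being defined; element names and values are hypothetical
cert = ET.Element("calibrationCertificate", id="NPL-2023-0001")
ET.SubElement(cert, "item", sensorType="thermocouple", model="Type R")
results = ET.SubElement(cert, "results")
# (reference temperature / deg C, emf / uV, expanded uncertainty / deg C)
for t_ref, emf_uv, u_k2 in [(961.78, 10030.0, 0.6), (1064.18, 11360.0, 0.5)]:
    pt = ET.SubElement(results, "calibrationPoint")
    ET.SubElement(pt, "referenceTemperature", unit="degC").text = str(t_ref)
    ET.SubElement(pt, "indicatedEMF", unit="uV").text = str(emf_uv)
    ET.SubElement(pt, "expandedUncertainty", unit="degC", k="2").text = str(u_k2)

print(ET.tostring(cert, encoding="unicode"))
```

a record of this kind, served over an api, is what would allow a self-validating sensor to push an in-situ correction back into its own certificate, as discussed next.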
importantly, the calibration history will also be available to the user. the dcc also offers the possibility of facilitating autonomous updating of calibration data. this could be exploited by the techniques described in this paper, particularly the self-validating techniques, which provide a live update of the calibration in situ. updating one point in the calibration generally has an effect not only at the temperature at which the self-calibration is performed, but on the interpolating function over a wider temperature range. the updated calibration could be passed to the associated dcc, which could then be updated to include the new parameters and correct the interpolating function over the wider temperature range of use. clearly such a mechanism does not yet exist, but this is a functionality that should be considered in the formulation of the dcc format and its implementation. this approach may also be applicable to practical primary thermometry, though in that case the role of calibration certificates more broadly, and even the role of national metrology institutes in providing traceability in this regime, is currently not well defined.

5. conclusions

some new developments in temperature measurement have been presented which support digitalisation in various respects. self-validation techniques using miniature temperature fixed points based on 'phase change cells' to provide in-situ traceability at the point of measurement will provide assurance that temperature sensor output is 'always right'. practical primary thermometry measures temperature directly, rather than requiring calibration with the attendant risk of calibration drift in harsh environments, and so ensures long-term reliable measurements; examples outlined here include johnson noise thermometry, doppler broadening thermometry, ring resonator thermometry, whispering gallery mode thermometry and acoustic thermometry. for conventional sensors, digitalisation of calibrations at npl is becoming a practical reality, with a web-based interface, an associated api to enable end-users to access calibration data programmatically, and steps towards a standardised digital calibration certificate format. these developments all support digitalisation of metrology and will increase the reliability of measurements, improving process efficiency and product yield, with a consequent reduction in harmful emissions. future work will focus on elevating the technical readiness and bringing the innovations to market.

acknowledgement

we would like to thank trevor ford, peter cowley and phill williams of ccpi europe ltd for contributions on the self-validating thermocouples, dan peters and dave smith of ral space for contributions on the embedded phase-change cell, paul bramley and david cruickshank of metrosol ltd for contributions on the johnson noise thermometry, sam bilson and andrew thompson of npl for contributions on machine learning approaches to automation of self-validating thermocouple calibrations, and deepthi sundaram and stuart chalmers of npl for contributions on the digitalisation of calibration certificates.

references

[1] h. preston-thomas, the international temperature scale of 1990 (its-90), metrologia, vol. 27, 1990, pp. 3-10. doi: https://doi.org/10.1088/0026-1394/27/1/002

[2] j. v. pearce, o. ongrai, g. machin, s. j. sweeney, self-validating thermocouples based on high temperature fixed points, metrologia, vol. 47, 2010, pp. l1-l3.
doi: https://doi.org/10.1088/0026-1394/47/1/l01

[3] g. machin, twelve years of high temperature fixed point research: a review, aip conf. proc., vol. 1552, 2013, p. 305. doi: https://doi.org/10.1063/1.4821383

[4] e. webster, d. clarke, r. mason, p. saunders, d. r. white, in situ temperature calibration for critical applications near ambient, meas. sci. tech., vol. 31(4), 2020, p. 044006. doi: https://doi.org/10.1088/1361-6501/ab5dd1

[5] j. v. pearce, r. i. veltcheva, d. m. peters, d. smith, t. nightingale, miniature gallium phase-change cells for in situ thermometry calibrations in space, meas. sci. technol., vol. 30, 2019, p. 124003. doi: https://doi.org/10.1088/1361-6501/aad8a8

[6] d. tucker, a. andreu, c. j. elliott, t. ford, marius neagu, g. machin, j. v. pearce, integrated self-validating thermocouples with a reference temperature up to 1329 °c, meas. sci. technol., vol. 29(10), 2018, p. 105002. doi: https://doi.org/10.1088/1361-6501/aad8a8

[7] https://ccpi-europe.com/2018/05/22/inseva-thermocouple-license-signing/

[8] p. bramley, d. cruickshank, j. v. pearce, the development of a practical, drift-free, johnson-noise thermometer for industrial applications, int. j. thermophys., vol. 38, 2017, p. 25. doi: https://doi.org/10.1007/s10765-016-2156-8

[9] p. bramley, d. cruickshank, j. aubrey, developments towards an industrial johnson noise thermometer, meas. sci. technol., vol. 31, 2020, p. 054003. doi: https://doi.org/10.1088/1361-6501/ab58a6

[10] http://www.johnson-noise-thermometer.com

[11] j. f. qu, s. p. benz, h. rogalla, w. l. tew, d. r. white, k. l. zhou, johnson noise thermometry, meas. sci. tech., vol. 30, 2019, p. 112001. doi: https://doi.org/10.1088/1361-6501/ab3526

[12] iec 61000-4-3:2020 electromagnetic compatibility (emc) – part 4-3: testing and measurement techniques – radiated, radio-frequency, electromagnetic field immunity test. online [accessed 24 march 2023] https://webstore.iec.ch/publication/59849

[13] s. dedyulin, z. ahmed, g. machin, emerging technologies in the field of thermometry, meas. sci. & technol., vol. 33, 2022, p. 092001. doi: https://doi.org/10.1088/1361-6501/ac75b1

[14] j. fischer, et al., the boltzmann project, metrologia, vol. 55, 2018, pp. r1-r20. doi: https://doi.org/10.1088/1681-7575/aaa790

[15] ams2750f is an aerospace manufacturing standard covering temperature sensors, instrumentation, thermal processing equipment, correction factors and instrument offsets, system accuracy tests, and temperature uniformity surveys. these are necessary to ensure that parts or raw materials are heat treated in accordance with the applicable specification(s). online [accessed 24 march 2023] https://www.sae.org/standards/content/ams2750f/

[16] ccpi, pyro tag. online [accessed 24 march 2023] https://ccpi-europe.com/resources/pyro-tag/

[17] s. hackel, f. härtig, j. hornig, t. wiedenhöfer, the digital calibration certificate, ptb-mitteilungen, vol. 127, 2017. doi: https://doi.org/10.7795/310.20170403 see also ptb's dcc website.
online [accessed 24 march 2023] https://tinyurl.com/ycksrc2t

photogrammetry and gis to investigate modern landscape change in an early roman colonial territory in molise (italy)

acta imeko issn: 2221-870x december 2022, volume 11, number 4, 1-7

manuel j. h. peters1,2, tesse d. stek3

1 department of applied science and technology, politecnico di torino, corso duca degli abruzzi 24, 10129 torino, italy
2 department of history, universidade de évora, largo dos colegiais 2, 7000-803 évora, portugal
3 royal netherlands institute in rome, via omero 10/12, 00197 roma, italy

section: research paper

keywords: photogrammetry; gis; landscape change; mediterranean archaeology; survey archaeology

citation: manuel j. h. peters, tesse d. stek, photogrammetry and gis to investigate modern landscape change in an early roman colonial territory in molise (italy), acta imeko, vol. 11, no. 4, article 12, december 2022, identifier: imeko-acta-11 (2022)-04-12

section editor: leila es sebar, politecnico di torino, italy

received april 26, 2022; in final form december 14, 2022; published december 2022

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: manuel j. h. peters, e-mail: manueljhpeters@gmail.com

abstract: legacy data in the form of historical aerial imagery can be used to investigate geomorphological change over time. this data can be used to improve research about the preservation and visibility of the archaeological record, and it can also aid heritage management. this paper presents a composite image-based modelling workflow to generate 3d models, historical orthophotos, and historical digital elevation models from images from the 1970s. this was done to improve the interpretation of survey data from the early roman colony of aesernia. the main challenge was the lack of high-resolution recent digital elevation models and ground control points, and a general lack of metadata. therefore, spatial data from various sources had to be combined. to assess the accuracy of the final 3d model, the root mean square error (rmse) was calculated. while the workflow appears effective, the low accuracy of the initial data limits the usefulness of the model for the study of geomorphological change. however, it can be implemented to aid sample area selection when preparing archaeological fieldwork. additionally, when working with existing survey datasets, areas with a high bias risk resulting from post-depositional processes can be indicated.

1. introduction

pedestrian field survey currently faces paradoxical developments. in the rapidly deteriorating archaeological landscapes of the mediterranean as well as elsewhere, archaeologists are becoming ever more reliant on field survey data, especially those collected in earlier times with relatively good quality surface finds (so-called legacy datasets). at the same time, over the past decades there has been increasing attention for the various biases that may occur when sampling the archaeological record by pedestrian survey. initially, much research targeted methodological biases, amongst others to correct for the varying visibility of archaeological surface material during the surveys. this has led to a well-developed rigour in field survey practices, documenting the present field conditions as systematically and as well as possible [1]-[12]. the research presented in this paper focuses on one particular factor: the geomorphological change over time.
erosion and depositional processes, as well as incisive anthropic actions such as mining, all influence the location and preservation of the archaeological record. understanding geomorphological change can help to better assess the value of surface distributions of archaeological finds retrieved during field work, and at least in theory also to assess the reliability of legacy survey datasets. historical aerial images have long been used to aid in the identification of archaeological features that have in the meantime been obscured or obliterated by more recent anthropic manipulations and/or natural events. more specifically, historical aerial photographs now also allow us to generate historical digital elevation models (hdems), historical orthophotos and 3d models of areas that have often been subjected to significant landscape change over the years. this can be done by using image-based modelling (ibm) techniques such as photogrammetry, often using the principle of structure from motion (sfm). when a recent digital elevation model (dem) is subtracted from an earlier one, a map of the geomorphological change that has occurred can be extracted, thereby generating useful new data.

the generation of hdems from historical aerial images is not always straightforward. common problems consist of poor or lacking metadata, low image resolutions, and the absence of contemporary gps ground control points (gcps). these factors can complicate generating new data from historical photographs. additionally, the comparison of data with different coverages, resolutions, and levels of precision can cause problems, which makes estimating the difference between two dems to investigate geomorphological change challenging. issues concerning hdem generation are often resolved by using gps gcps on locations that are visible in both historical and recent remote sensing data, or by creating new gcps from accurate (and often more recent) maps in a geographic information system (gis) and using those in the ibm software. an alternative is to co-register the created dems to ones that are currently available, as demonstrated by sevara et al. [13].
common workflows use ground control points that were recorded in the field with a gps, thereby establishing deviations of less than 10 cm [14]-[18], or use highly accurate recent dems for co-registration [19]-[22]. these resources were not available in the region under investigation; therefore the objective of this study was to assess the feasibility of a composite workflow using legacy data lacking any physical gcps. one potential source of error is the manual placement of the control points, in both gis and ibm software, since this mostly relies on the resolution of the images and the competency of the user. furthermore, the quality of the final result largely depends on the quality of the initial data. as previously mentioned, in the case of this research no gcps or high-resolution dem were available, which significantly complicated the ibm procedure. therefore, the usefulness of this procedure for further research and analysis was investigated by assessing its accuracy after several improvements.

the landscape of italy has changed significantly over the past century [23], which has had a major impact on the archaeological remains. this makes it a good location to investigate the various factors influencing the surface data obtained through pedestrian surveys. the landscapes of early roman colonization (lerc) project operates in this changing landscape. the research presented in this paper was carried out using data provided by the aesernia colonial landscape project, which was started in 2011 by dr. t. d. stek under an eu marie curie fellowship at glasgow university, and subsequently, in 2013, continued in the larger framework of the lerc project, a collaboration between leiden university and the royal netherlands institute in rome (knir), funded by the netherlands organisation for scientific research (nwo) [24]. the lerc project investigates the early roman colonisation process in italy, mainly by pedestrian surveys and remote sensing [24]. this paper focuses on the landscape surrounding the latin colony of aesernia (263 bce), around the modern town of isernia, situated in the molise region in south-central italy (figure 1). in section 2 the study area and the various data sources utilised in this research are discussed, and some of the issues mentioned. the next two sections describe the methods and the results, followed by a discussion highlighting some issues and limitations. finally, in the concluding section the main applications and limitations are stated.

2. study area and data

the colony of aesernia was established in 263 bce, and its territory was previously a relatively undocumented area within molise, except for the landscape research carried out in the 1980s [25]. in the past decade, extensive research has been done in this area by means of field survey, various types of aerial photography and remote sensing, and geophysics. significant issues in the region include the modern urbanisation and changes in land use, which are happening at a fast pace around the town of isernia. therefore, the initial stages of the lerc project focused on the area around the town. although the current trend in mediterranean pedestrian survey focuses on the collection of off-site data in a smaller sample area, where fields are selected for their good visibility, the lerc project covered the majority of fields, including those with low visibility [26]. this variation in visibility can be attributed to factors such as differences in land use (e.g. freshly ploughed vs.
overgrown) and geomorphological change. both are thought to have an impact on the collection and interpretation of pedestrian survey data, which is why they have been subject to further investigation.

figure 1. lerc research area around aesernia [3], [26].

figure 2. aerial image coverage for 1970-71 around isernia.

for the present analysis, several data sources were used, including legacy data consisting of 53 aerial photographs of the isernia area (figure 2). these were produced in 1970-71 for cartographic purposes by the società per azioni rilevamenti aerofotogrammetrici (sara) [27]. the images had been digitised with a resolution of 300 dpi (1970 set) and 600 dpi (1971 set). the type of scanner and its parameters were unknown at the time of writing; therefore possible errors related to factors such as resolution, distortion, and glare were unknown. the average flight height was not exactly known either, and neither were yaw, pitch, and roll, which meant that camera properties, including focal length, could not be used. these are typical issues related to the use of legacy data, since such data is usually not created for modern research purposes. additionally, a regional orthophoto of the isernia area from 2007 was provided by the geoportal of the regione molise (using the autogr tool developed by dr. gianluca cantoro). the tinitaly/01 dem released by the istituto nazionale di geofisica e vulcanologia (ingv) in 2007 was used as control dem. this is a composite dem that is commonly used in recent archaeological research [28]. in molise, the root mean square error (rmse) of the tinitaly dem (modern dem or mdem) is 3.76 m in the non-urban areas, and 4.51 m in the urban areas [29], [30]. a hillshade visualisation of this dem was used as base map for several figures (1, 2, 6, 9, 10) in this paper.

agisoft metashape professional 1.5.2 build 7838 was used to create the various models using sfm, and cloudcompare 2.10.2 was used for modifying and co-registering the point clouds. arcmap 10.4.0.5524 was used to run the gis procedure, and arcscene 10.4.0.5524 to visualise the data in 3d. all data was processed in the epsg:32633, wgs84 / utm 33n coordinate system.

3. methods

to accommodate the limitations of the original data, a composite workflow involving sfm, point cloud processing software, and gis was designed (figure 3). first, a preliminary model using the 53 images from 1970-71 was built using sfm. although all images were of sufficient quality, they had borders that could leave traces during the creation of the 3d model and the dem, and therefore needed to be removed. this was done by manually going over every picture and creating a mask by selecting the area that had to be removed. areas that were of low quality because of glare were also removed in this way. once the preliminary model was created, gcps were placed in a gis environment on the resulting orthophoto. this was done by using features that appeared unchanged and were easily recognisable on both the 1970-71 and 2007 orthophotos, such as corners of buildings and intersections of roads. the xy coordinates were obtained from the 2007 orthophoto, and the elevation values were extracted from the mdem. this data was then imported into the sfm software, where the gcp locations had to be adjusted manually by keeping the sfm and gis software side by side (figure 4). the coordinates were given for each gcp as easting (m), northing (m), altitude (m).
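the extraction of the z value for each gcp need not be manual; a minimal sketch using the rasterio library is shown below (an assumption, since the authors worked in arcmap, and the file name and coordinates here are hypothetical).

```python
import rasterio

def gcp_altitudes(dem_path, gcps_xy):
    """sample dem elevations at gcp locations.

    dem_path : geotiff dem in the same crs as the coordinates
               (here epsg:32633, wgs84 / utm 33n)
    gcps_xy  : list of (easting, northing) tuples read off the
               2007 orthophoto
    returns (easting, northing, altitude) triples ready for import
    into the sfm software as markers.
    """
    with rasterio.open(dem_path) as dem:
        return [(x, y, float(val[0]))
                for (x, y), val in zip(gcps_xy, dem.sample(gcps_xy))]

# hypothetical file name and coordinate, for illustration only
points = gcp_altitudes("tinitaly_mdem.tif", [(437250.0, 4607480.0)])
```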
an accuracy of 3 metres was set, considering the resolution of the images when visually placing the markers in horizontal space for the xy coordinates, and the vertical accuracy of the 2007 dem, which provided the elevation values. when all gcps and their coordinates had been added, the process to generate the model was initiated again, starting with the alignment of the photos, this time using high accuracy, with a key point limit of 120000 and no tie point limit. this resulted in a sparse point cloud consisting of 387692 points. then, the camera alignment was optimised, in order to achieve a higher accuracy and to correct for possible distortions, using the focal length; the radial distortion coefficients k1, k2, k3, and k4; the tangential distortion coefficients p1, p2, p3, and p4; and the affinity and skew (non-orthogonality) transformation coefficients b1 and b2 [20]. additionally, the point cloud variance was calculated. then, by using gradual selection, points with a high reprojection error (above 0.25 m) were removed, as well as points with a reconstruction uncertainty above 10 m and with a projection accuracy below 2.5 m. the filtering steps resulted in the removal of 195061 points from the point cloud, with a final number of 192631 tie points. this filtering removed points that could result in later outliers in the dem.

figure 3. composite ibm workflow.

figure 4. placing gcps in metashape (left) by comparing them to the image in arcmap (right).

next, the bounding box was set to select the area that would be used in further processing. no clipping of the area was done at this stage, however, since the hdem would later be clipped to the research area in arcmap. after this, the dense cloud was built, using high quality and moderate depth filtering, to avoid removing more complex landscape features, and pixel colour was calculated. the mesh was built using the height field surface type, and the dense cloud as source data. the polygon count was set to high, in order not to lose any detail. interpolation was enabled, so that holes generated in the previous filtering steps could be filled. the next step was to build the texture. while this is not necessary to create an orthophoto or dem, it does provide a better 3d visualisation of the area. the mapping mode was orthophoto, the blending mode mosaic. in workflows containing black and white images, colour correction is often disabled, since this is generally only useful for data sets with extreme brightness variation. in this case, however, the orthophoto would be used for the classification of the forested areas. this was primarily done based on pixel colour, which therefore had to be as accurate as possible.

the hdem (figure 5) was built using the dense cloud, since this generally produces more accurate results and has a shorter processing time. although the dem that was obtained had a resolution of 2 m, it was decided to export it at a 10 m resolution, in order to be comparable with the mdem. the orthomosaic was generated in the same reference system, using the mosaic blending mode; colour correction was not used. the orthophoto and hdem could then be exported as geotiff files. the no-data value was set at 255. ground points were classified using a 15 degrees maximum angle, 2 m maximum distance, and 25 m cell size. the ground point cloud (hdem) was then further modified in cloudcompare. an additional filtering step was carried out using statistical outlier removal, with 10 points for the mean distance estimation and 1.00 as the standard deviation multiplier threshold. duplicate points were removed as well, and the point cloud was downsampled to a 2.5 m resolution to speed up processing. then, the hdem was co-registered to the mdem using an iterative closest point (icp) algorithm. this co-registration procedure corrected the hdem, decreasing the tilt and modifying the scale to better fit the mdem. this resulted in a decrease in rmse over the whole hdem area from 13.95 m to 7.49 m.
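the cloudcompare steps just described can equally be scripted; below is a minimal sketch of the same pipeline using the open3d library (an assumption, since the authors used cloudcompare), reusing the paper's filter parameters, with an illustrative icp correspondence distance.

```python
import numpy as np
import open3d as o3d

def clean_and_register(hdem_pcd, mdem_pcd, voxel=2.5, max_dist=30.0):
    """outlier removal, downsampling and icp co-registration of the
    historical cloud onto the modern one.

    mirrors the steps above: 10 neighbours for the mean distance
    estimation, 1.00 standard deviation multiplier, 2.5 m resolution;
    max_dist (m) is an illustrative choice.
    """
    hdem_pcd, _ = hdem_pcd.remove_statistical_outlier(
        nb_neighbors=10, std_ratio=1.0)
    hdem_pcd = hdem_pcd.voxel_down_sample(voxel_size=voxel)

    # point-to-point icp starting from the identity transform;
    # scaling enabled, matching the scale correction noted above
    result = o3d.pipelines.registration.registration_icp(
        hdem_pcd, mdem_pcd, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint(
            with_scaling=True))
    hdem_pcd.transform(result.transformation)
    return hdem_pcd
```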
the hdem was then rasterised to a grid size of 10 metres to be comparable with the mdem. an additional compensation was carried out by measuring the deviation of the hdem at 101 new gcps in supposedly stable areas, and interpolating these values by kriging. in this case, ordinary kriging was used with a linear semivariogram, since this model fitted the gcps best. the kriging window was set at 12 points, with a maximum distance of 7000 m. the resulting raster showed the deviation of the hdem across the research area, which was most significant near the edges of the model (as can be expected with this type of model), especially on the north and south sides. the deviation map was then subtracted from the hdem, and the mdem was subtracted from the compensated hdem, showing the geomorphological change in the area. in order to exclude areas with vegetation or buildings from the geomorphological change model, these were masked by a combination of automated classification based on the grey values of the historical orthophoto and manual adjustments of the mask. the resulting model shows both the positive (presumably sedimentation) and negative (presumably erosion) change in the landscape (figure 6).

figure 5. hdem after cleaning, including gcp locations.

figure 6. geomorphological change (erosion and sedimentation).

4. results

the 3d model generated by the sfm procedure was sufficient for a visual study of the landscape surrounding isernia (figure 7). in order to determine its suitability for the determination of geomorphological change, an accuracy assessment was carried out. the north-south and east-west profiles of the various dems were compared (figure 8), showing that the co-registration resolved a significant amount of tilt both in the north-south and east-west planes (dem70-71_coreg). the additional compensation using the deviation surface obtained with kriging resulted in another improvement (dem70-71_final). the rmse of the hdem relative to the mdem was calculated, and the total rmse was calculated from the relative rmse and the mdem rmse (formulas (1), (2); table 1):

$RMSE_\mathrm{rel} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( hDEM_i - mDEM_i \right)^2}$ (1)

$RMSE_\mathrm{tot} = \sqrt{RMSE_\mathrm{rel}^2 + RMSE_\mathrm{mDEM}^2}$ . (2)

the final hdem had an rmse of 5.87 m. although this is a significant possible error when dealing with landscape change, it is mostly due to the original data quality. considering the fact that the original 2007 dem has an rmse between 3.76 m and 4.51 m in the molise region, and a resolution of 10 m, the 5.87 m rmse of the final hdem was deemed an acceptable result. even though there are limits due to the rmse of the final hdem, the resulting geomorphological change map obtained by subtracting the mdem from the hdem serves as a useful indication of the changes around isernia.
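formulas (1) and (2) amount to a few lines of array arithmetic; a minimal numpy sketch is given below, with masked cells (vegetation, buildings, no-data) carried as nan. the function names are illustrative.

```python
import numpy as np

def rmse_rel(hdem, mdem):
    """formula (1): rmse of the hdem relative to the mdem.

    hdem, mdem : equally shaped elevation grids (nan where masked)
    """
    diff = hdem - mdem
    return float(np.sqrt(np.nanmean(diff ** 2)))

def rmse_tot(rmse_relative, rmse_mdem):
    """formula (2): combine the relative rmse with that of the mdem."""
    return float(np.sqrt(rmse_relative ** 2 + rmse_mdem ** 2))

# with the values reported here: rmse_tot(4.51, 3.76) ~= 5.87 m
```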
the resolution is not high enough to allow detection of more subtle changes; however, especially since in this procedure no physical gcps are necessary, it may be used to better select the sample areas during the planning of archaeological fieldwork. additionally, its results can assist the interpretation of the data collected by the pedestrian surveys. as such, it can help improve existing archaeological models, for example about site distribution, in a way not dissimilar to the soil map analysis by casarotto et al. [31]. a good example of the changing landscape in the area can be found south of the town of isernia, where a quarry has expanded dramatically in a few decades, leaving a lasting effect on the landscape (figure 9). the mask that was created to exclude forested areas from the geomorphological change map can also provide a visual indication of the rapidly changing land use in the area. plots of land are no longer being used for agriculture and have been abandoned and subsequently reforested, which can clearly be seen when comparing the situation in 1970-71 with the situation in 2007 (figure 10).

figure 7. detail of 1970-71 3d model. orange line representing 5 km.

figure 8. profile comparison between elevation models (elevation vs. distance) in north-south (top) and east-west (bottom) direction, for dem07, dem70-71_orig, dem70-71_coreg and dem70-71_final.

figure 9. geomorphological change due to mining activities.

figure 10. masked forest (right), reforestation since 1970-71 (left).

table 1. rmse values.
                     relative rmse  total rmse
before compensation  6.48 m         7.49 m
after compensation   4.51 m         5.87 m

5. discussion

an important challenge in the presented research was the need to use input from different sources to create the hdem, and the deviations related to this approach. in order to create a georeferenced model, gcps with x, y, and z values are required. the historical aerial photos were not georeferenced, which is normally solved by using either gps gcps with xyz values and an error below 10.0 cm, or by utilising lidar data to co-register the hdem. this kind of data was not available for the current research, which heavily influenced the ibm workflow. due to data limitations, there are possible errors originating from the use of the 2007 orthophoto to extract xy values and the 2007 dem to extract z values, and combining these into gcps, rather than collecting gps coordinates in the field and importing these as markers. creating and placing gcps manually like this increases the chance of user error. since the procedure is based on a visual comparison of the recent and historical images with varying resolutions, it is possible to have an error of several metres in the xy values. the current research focuses mostly on changes in z value, but the aforementioned bias should not be neglected, since a change in the horizontal plane can result in significant changes in elevation, especially in irregular terrain. the accuracy of the final model depended on the resolution and accuracy of the 2007 orthophoto, the 2007 tinitaly dem, and the 1970-71 historical aerial photographs, and on the user error when placing the various gcps. while the resolution and accuracy of the 2007 dem were known, this was not the case for the 2007 orthophoto and the 1970-71 aerial images.
further limitations were imposed because the properties of the scanner used to digitise the original images were unknown. therefore, scanner-induced distortions may have been introduced, contributing to possible errors in the final model. this research made an attempt at correcting some of these errors by applying filtering, co-registration, and compensation, leading to significant improvements.

6. conclusions

the main goal of this research was the creation of a composite ibm workflow to build historical landscape models from historical aerial photographs using sfm, and to extract information about geomorphological change. by applying data from other sources, such as a more recent orthophoto and dem, to obtain ground control points in gis that could then be entered in the sfm software, it was possible to create historical orthophotos and hdems. the hdem for 1970-71 was further filtered and co-registered to the mdem from 2007. subsequently, it was corrected by applying interpolation to create a vertical deviation surface from other gcps. this resulted in a compensated hdem that was then subtracted from a modern dem in order to observe geomorphological change, while areas with buildings or vegetation were masked. although there are severe limitations resulting from the quality of the initial data in this case study, the proposed composite workflow appears effective for the creation of more accurate historical 3d models and geomorphological change maps of rapidly evolving landscapes. geomorphological change has an inherent duality, both positively (uncovering) and negatively (displacing/covering/destroying) influencing the visibility of the archaeological record. despite this, geomorphological change maps can be used to provide feedback for planning pedestrian surveys and to interpret their data critically, possibly assisting in the assessment of their accuracy and the improvement of archaeological models.

author contribution

manuel j. h. peters: conceptualisation, methodology, software, formal analysis, investigation, data curation, writing – original draft, visualisation. tesse d. stek: validation, resources, writing – review and editing, supervision, project administration, funding acquisition.

acknowledgement

the first author would like to thank the lerc project for the data, and the royal netherlands institute in rome (knir) for the opportunity to spend several weeks in rome to work on this research in a stimulating environment. luisa baudino was of great help with the processing of the dem profiles. a significant portion of this work was carried out as part of an msc thesis [32] at the faculty of archaeology of leiden university, under the supervision of dr. karsten lambers.

references

[1] p. attema, two challenges for landscape archaeology, in p. attema, g. j. burgers, e. van joolen, m. van leusen and b. mater (eds), new developments in italian landscape archaeology. theory and methodology of field survey. land evaluation and landscape perception. pottery production and distribution, university of groningen, 13-15 april 2000, bar international series, vol. 1091, 2002, pp. 18-27. doi: 10.30861/9781841714691

[2] j. bintliff, p. howard, a. snodgrass, the hidden landscape of prehistoric greece, journal of mediterranean archaeology, vol. 12, no. 2, 1999, pp. 139-168. doi: 10.1558/jmea.v12i2.139

[3] a. casarotto, spatial patterns in landscape archaeology. a gis procedure to study settlement organization in early roman colonial territories, phd thesis, leiden: leiden university press, 2018. isbn: 9789087283117.
[4] r. c. dunnell, the notion site, in j. rossignol and l. wandsnider (eds), space, time, and archaeological landscapes, new york: plenum press, 1992, pp. 21-41. isbn: 978-03-064-4161-5.

[5] h. feiken, dealing with biases: three geo-archaeological approaches to the hidden landscapes of italy, phd thesis, groningen: barkhuis, 2014. isbn: 978-94-914-3167-8.

[6] r. francovich, h. patterson, extracting meaning from ploughsoil assemblages, oxford: oxbow books, 2000. isbn: 978-19-0018-875-3.

[7] j. garcía sánchez, method matters. some comments on the influence on theory and methodologies in survey based research in italy, in r. cascino, f. de stefano, a. lepone and c. m. marchetti (eds), trac 2016, proceedings of the twenty-sixth theoretical roman archaeology conference, sapienza university of rome, 16th-19th march 2016, rome: edizioni quasar, 2017, pp. 151-164. isbn: 978-88-7140-770-8.

[8] c. haselgrove, inference from ploughsoil artefact samples, in c. haselgrove, m. millet and i. smith (eds), archaeology from the ploughsoil: studies in the collection and interpretation of field survey data, sheffield: university of sheffield, 1985, pp. 7-29. isbn: 978-09-060-9054-1.

[9] j. lloyd and g. barker, rural settlement in roman molise. problems of archaeological survey, in g. barker and r. hodges (eds), archaeology and italian society, oxford: bar international series, vol. 102, 1981, pp. 289-304. isbn: 978-08-605-4120-2.

[10] m. b. schiffer, a. p. sullivan and t. c. klinger, the design of archaeological surveys, world archaeology, vol. 10, no. 1, 1978, pp. 1-28. doi: 10.1080/00438243.1978.9979712

[11] n. terrenato, the visibility of sites and the interpretation of field survey results: towards an analysis of incomplete distributions, in r. francovich, h. patterson, g. barker (eds), extracting meaning from ploughsoil assemblages, oxford: oxbow books, 2000, pp. 60-71. isbn: 978-19-001-8875-3.

[12] p. m. v. van leusen, pattern to process: methodological investigations into the formation and interpretation of spatial patterns in archaeological landscapes, phd thesis, groningen: rijksuniversiteit groningen, 2002.

[13] c. sevara, g. verhoeven, m. doneus, e. draganits, surfaces from the visual past: recovering high-resolution terrain data from historic aerial imagery for multitemporal landscape analysis, journal of archaeological method and theory, vol. 25, 2018, pp. 611-642. doi: 10.1007/s10816-017-9348-9

[14] y. c. hsieh, y. c. chan and j. c. hu, digital elevation model differencing and error estimation from multiple sources: a case study from the meiyuan shan landslide in taiwan, remote sensing, vol. 8, no. 3, 2016, pp. 199-220. doi: 10.3390/rs8030199

[15] s. ishiguro, h. yamano, h. oguma, evaluation of dsms generated from multi-temporal aerial photographs using emerging structure from motion–multi-view stereo technology, geomorphology, vol. 268, 2016, pp. 64-71. doi: 10.1016/j.geomorph.2016.05.029

[16] c. sevara, top secret topographies: recovering two and three-dimensional archaeological information from historic reconnaissance datasets using image-based modelling techniques, international journal of heritage in the digital era, vol. 2, no. 3, 2013, pp. 395-418. doi: 10.1260/2047-4970.2.3.395
[17] c. sevara, capturing the past for the future: an evaluation of the effect of geometric scan deformities on the performance of aerial archival media in image-based modelling environments, archaeological prospection, vol. 23, no. 4, 2016, pp. 325-334. doi: 10.1002/arp.1539

[18] j. vaze, j. teng, g. spencer, impact of dem accuracy and resolution on topographic indices, environmental modeling & software, vol. 25, no. 10, 2010, pp. 1086-1098. doi: 10.1016/j.envsoft.2010.03.014

[19] g. verhoeven, d. taelman, f. vermeulen, computer vision-based orthophoto mapping of complex archaeological sites: the ancient quarry of pitaranha (portugal-spain), archaeometry, vol. 54, no. 6, 2012, pp. 1114-1129. doi: 10.1111/j.1475-4754.2012.00667.x

[20] g. verhoeven, f. vermeulen, engaging with the canopy – multi-dimensional vegetation mark visualisation using archived aerial images, remote sensing, vol. 8, no. 9, 2016, pp. 752-769. doi: 10.3390/rs8090752

[21] g. verhoeven, m. doneus, ch. briese, f. vermeulen, mapping by matching: a computer vision-based approach to fast and accurate georeferencing of archaeological aerial photographs, journal of archaeological science, vol. 39, no. 7, 2012, pp. 2060-2070. doi: 10.1016/j.jas.2012.02.022

[22] f. j. aguilar, m. a. aguilar, i. fernández, j. g. negreiros, j. delgado, j. l. pérez, a new two-step robust surface matching approach for three-dimensional georeferencing of historical digital elevation models, ieee geoscience and remote sensing letters, vol. 9, no. 4, 2012, pp. 589-593. doi: 10.1109/lgrs.2011.2175899

[23] p. panagos, p. borelli, j. poesen, c. ballabio, e. lugato, k. meusburger, l. montanarella, c. alewell, the new assessment of soil loss by water erosion in europe, environmental science & policy, vol. 54, 2015, pp. 438-447. doi: 10.1016/j.envsci.2015.12.011

[24] t. d. stek, j. pelgrom, landscapes of early roman colonization: non-urban settlement organization and roman expansion in the roman republic (4th-1st centuries bc), tma, vol. 50, 2013, p. 87.

[25] g. chouquer, m. clavel-lévêque, f. favory, j.-p. vallat, structures agraires en italie centro-méridionale, cadastres et paysages ruraux, rome: publications de l'école française de rome, 1987.

[26] t. d. stek, e. b. modrall, r. a. a. kalkers, r. h. van otterloo, j. sevink, an early roman colonial landscape in the apennine mountains: landscape archaeological research in the territory of aesernia (central-southern italy), in s. de vincenzo (ed), analysis archaeologica, vol. 1, edizioni quasar, rome, italy, 2015, pp. 229-291. isbn: 978-88-7140-592-6.

[27] http://www.saranistri.com, accessed on 13 april 2019.

[28] s. tarquini, l. nannipieri, the 10 m-resolution tinitaly dem as a trans-disciplinary basis for the analysis of the italian territory: current trends and new perspectives, geomorphology, vol. 281, 2017, pp. 108-115. doi: 10.1016/j.geomorph.2016.12.022

[29] s. tarquini, i. isola, m. favalli, f. mazzarini, m. bisson, m. t. pareschi, e. boschi, tinitaly/01: a new triangular irregular network of italy, annals of geophysics, vol. 50, no. 3, 2007, pp. 407-425. doi: 10.4401/ag-4424

[30] s. tarquini, s. vinci, m. favalli, f. doumaz, a. fornaciai and l. nannipieri, release of a 10-m-resolution dem for the italian territory: comparison with global-coverage dems and anaglyph-mode exploration via the web, computers & geosciences, vol. 38, no. 1, 2012, pp. 168-170. doi: 10.1016/j.cageo.2011.04.018
review of models and measurement methods for compliance of electromagnetic emissions of electric machines and drives

acta imeko
issn: 2221-870x
june 2021, volume 10, number 2, 162 - 173

andrea mariscotti1, leonardo sandrolini2
1 diten, university of genoa, via opera pia 11a, 16145 genova, italy
2 dei "guglielmo marconi", university of bologna, viale del risorgimento 2, i-40136 bologna, italy

abstract: electric machines and power drives of various sizes and ratings are fundamental elements in many applications, such as electric appliances, electric propulsion and medical systems, all with specific limits of electromagnetic emissions, different from those of the original emc product standards. as compliance should be assessed and guaranteed during design and procurement, this work compares the conducted and radiated emission limits and discusses the variability of results for the respective test setups and measurement methods, including the most relevant source of uncertainty.

section: research paper
keywords: electric machinery; electromagnetic compatibility; electromagnetic emissions; power converters; uncertainty
citation: andrea mariscotti, leonardo sandrolini, review of models and measurement methods for compliance of electromagnetic emissions of electric machines and drives, acta imeko, vol. 10, no. 2, article 23, june 2021, identifier: imeko-acta-10 (2021)-02-23
section editor: ciro spataro, university of palermo, italy
received february 7, 2021; in final form april 24, 2021; published june 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: andrea mariscotti, e-mail: andrea.mariscotti@unige.it

1. introduction

electric machines (encompassing dc and ac motors and generators) and adjustable speed drives (asds) are classified as apparatuses under the electromagnetic compatibility (emc) directive 2014/30/eu [1] and therefore need to be ce marked. compliance with the essential requirements of the emc directive is demonstrated through the application of, and qualification to, the product standards iec 60034-1 [2]-[4] and iec 61800-1, iec 61800-2 and iec 61800-3 [5]-[8], respectively, and the generic emc standards [9]-[12]. it is quite possible that a manufacturer would refer to other applicable standards, such as ieee std. 1566 [13], depending on market and client preference. a machine may also be a component of some apparatus intended as the final product, for which different limits and requirements of emissions may be defined.
in this case, not only should the machine manufacturer consider and apply the correct standards to guarantee compliance of emissions in the final application, but the integrator and manufacturer of the apparatus must also take suitable margins. it is intuitive that for off-the-shelf products test and certification are not repeated, and the burden of guaranteeing compliance falls on the shoulders of the latter. when considering the electromagnetic (e.m.) field emissions of electrical machines and power drives, some elements may be identified that in general affect the amount and characteristics of emissions and their measurement.
• conducted emissions generated by additional sources of electromagnetic disturbance (emd) can feed the electric machine and then be coupled to the space around it through its windings as a radiated or reactive field (which, for simplicity, will be referred to as radiated field). these sources may be the converter driving the motor itself or the converter feeding the field winding of a synchronous generator.
• the electric machine may be of considerable size, so quite different from a point source, and is often located in an environment with non-ideal electromagnetic properties [14].
• the e.m. environment features several near and far sources of radiated e.m. fields, such as workshop facilities (e.g., crane bridges) and auxiliaries (e.g., lubricating pumps, ventilation fans and compressors).
as specified in the relevant standards, measurement and evaluation of e.m. emissions only cover the high frequency range, starting from 30 mhz, where there is little evidence of machine emissions. in [14], measured emission spectra below 30 mhz are presented, with evidence of emissions due to the tested machine by itself, having excluded the contributions of any feeding converter or auxiliary system. the connection of the electrical machine to a driving static converter, such as in an asd, makes the problem even more complex, mainly for the following two reasons:
• the static converter becomes the major source of conducted disturbance, which results in e.m. emissions from the various elements of the asd (the connecting cables and the electric machine) [15]-[19];
• the static converter is also a source of direct e.m. radiated emissions, superimposed on those already considered above [20]-[25].
electric machinery nowadays features a wide range of technologies: asynchronous and synchronous machines, based on induction, salient pole, reluctance, permanent magnet, brushless designs, etc. correspondingly, the electromagnetic behaviour is quite varied, especially at low and medium frequency, where the intensity and distribution of the prevalently magnetic field emissions depend on the machine architecture and its design.
this part of the spectrum of machine emissions is often used for diagnostic purposes [26], [27] and is less relevant from an electromagnetic compatibility (emc) point of view. electric machinery has its own electromagnetic behaviour and response, from which derive typical radiated emission profiles that depend on machine architecture, size, rated values and characteristics. specific resonances and amplification of some components of emissions have been observed, with extensive data for synchronous generators [19], [28], [29]. for those reported in [28], [29], a significant influence of some machine defects and local failures was identified; once these were fixed by corrective maintenance, the specific emissions around 1 mhz disappeared. in [19] the generators were all new and measured during the commissioning phase, excluding any kind of defect and latent failure. similarly, dc machines are characterized by peculiar emissions caused by commutation under the brushes, as repeatedly demonstrated experimentally [15], [30]-[32]: the main conducted emissions are due to arcing of the motor brushes, modulated by the rotational speed (with a spectrum that depends on the number of brushes, their extension and the mechanical speed), although the experimental evidence for radiated emissions from large motors lacks evident resonance peaks and dependency on input current [15]. different and more complex is the behaviour of a power drive, where the motor is excited by the conducted emissions of the driving converter, including resonance effects of the connecting cable [33]. as regards radiated emissions and the disturbance propagated to external elements, the most important component is the common-mode one, which has a higher radiation efficiency at longer distance; hence the use of shielded cables, common-mode ferrites and image planes as control measures, in particular in the smaller machines [34]. however, in high-density applications with short coupling distances, differential-mode components also become relevant [35], [36], and they are usually less easy to filter, due e.g. to the large phase currents of modern high-performance drives. in order to demonstrate compliance of the emissions with the emc directive, design calculations (usually carried out with numerical electromagnetic software) and examinations can be provided; however, most often the only demonstration of compliance consists of the execution of tests in line with the applicable product standards, to show that the emissions so assessed are below the stipulated limits. the paper is organized as follows. section 2 contains a general description of the principles of emission, in order to frame the problem and introduce the other sections on normative requirements, modelling approaches and measurement results. section 3 then discusses the normative requirements specific to electric machines and asds and to the most common applications and environments. modelling of e.m. emissions is considered in section 4, paired with section 5, where experimental results are discussed. both the characteristics of the e.m. environment and equipment, and the limitations and performance of models and measurement methods, are discussed in section 6, identifying the impact on the uncertainty of the estimate of emissions and of compliance to limits.
2. general description and principles of emission

the spectrum of relevant emissions extends from the machine operating frequency to several mhz, depending on the type of machine (dc or ac supply, use of a commutator and brushes, number of slots and rotational speed, etc.) and on the characteristics of the driving voltage waveform. in general, the machine alone may be considered a source of peculiar, although less intense, emissions, if some phenomena are considered [15]:
• in synchronous machines, the static magnetic field emitted by the dc field winding is modulated by rotation at the synchronous speed;
• the magnetic field, related to the current flowing through each conductor in the machine slots, is modulated by the change of reluctance due to the passage of teeth at rotation speed [37];
• commutator machines, and in particular dc machines, produce intermittent magnetic and electric field emissions, occurring during spark ignition and extinction under the brushes, with the field intensity determined by the machine operating conditions in terms of supply voltage and circulating current, which influence the characteristics of the electric arcs (number of arcs, their average lifetime, their stability in space) [15], [18], [32], [38], [39].
limits for radiated emissions are specified in emc product standards at high frequency only (above 30 mhz), while for the low frequency range (between 9 khz and 30 mhz) limits are specified for conducted emissions. a large number of ac motors are fed by pulse width modulation (pwm) inverters, even for medium and large power machines. the output voltage sourced by the inverter and appearing at the motor terminals is a variable duty-cycle square wave characterized by steep slopes (often described as having a "high dv/dt"). for very large voltage ratings, multi-level inverters are used to reduce the instantaneous voltage step, so that the waveform shape and the total harmonic distortion (thd) are improved. in any case, large dv/dt values and the associated spectra (quite extended over the frequency range) are applied to the machine windings, so that the e.m. characterization of the whole drive becomes very important. to characterize the asd as a source of radiated emissions, it is necessary to locate the different sources of disturbance and the paths of propagation and coupling. an asd may be thought of as composed mainly of a static converter (for induction motors most commonly a voltage source inverter, vsi), the feeding cable (a 3-phase cable with different arrangements: three single-phase cables, an optional neutral conductor, twisted and/or shielded, etc.) and the electric motor. the inverter is the main source of emissions, conducted through the feeding cable and radiated directly through the enclosure by a series of elements, such as edges, ventilation slots, small doors, etc., and as stray surface current on the enclosure panels. the inverter conducted emissions may radiate directly from the cable [16], [17] or propagate towards the machine via conduction and then radiate more efficiently from the machine windings [20], [32], [33], [38]-[49].
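as a concrete illustration of how the switching slopes shape the emission spectrum, the following minimal python sketch (an added illustration, not part of the reviewed material; all numeric values are hypothetical) evaluates the classical asymptotic envelope of the spectrum of a trapezoidal pulse train: flat up to f1 = 1/(π·τ), with τ the pulse width, falling at -20 db/decade above f1 and at -40 db/decade above f2 = 1/(π·tr), with tr the rise time, so that a shorter rise time pushes significant spectral content towards higher frequency.

import math

def trapezoid_envelope_dbuv(amplitude_v, duty, period_s, rise_s, f_hz):
    # asymptotic spectral envelope of a trapezoidal pulse train:
    # plateau 2*a*duty; -20 db/dec above f1 = 1/(pi*tau), tau = duty*period;
    # a further -20 db/dec (-40 db/dec total) above f2 = 1/(pi*rise time)
    env = 2.0 * amplitude_v * duty
    f1 = 1.0 / (math.pi * duty * period_s)
    f2 = 1.0 / (math.pi * rise_s)
    if f_hz > f1:
        env *= f1 / f_hz
    if f_hz > f2:
        env *= f2 / f_hz
    return 20.0 * math.log10(env / 1e-6)   # volts -> dbµv

# hypothetical drive: 540 v dc link, 50 % duty, 10 khz switching, 100 ns edges
for f in (1e4, 1e6, 1e8):
    print(f, round(trapezoid_envelope_dbuv(540.0, 0.5, 1e-4, 100e-9, f), 1))

halving the assumed rise time in the sketch moves f2 up by an octave and raises the envelope by 6 db everywhere above it, which is the intuition behind the emphasis on dv/dt in the text.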
the characterization of all these phenomena is relevant to the emc of the product, as well as to a series of related internal phenomena which can be used for diagnosis of machine health status, such as ageing and stress of winding insulation, iron and copper losses, temperature rise at specific points, and stray currents flowing through the machine shaft and bearings. in particular, the most relevant emc issue for asd drives is the influence of switching and the high dv/dt of the feeding converter; while the switching waveforms produce both differential-mode and common-mode disturbances, high dv/dt rise and fall slopes manifest themselves as significant common-mode currents flowing through the parasitic capacitance and the feeding cable capacitance [48]. for diagnostic purposes, the analysis of the e.m. signature of the machine is used for the detection of partial short-circuits inside windings or mechanical unbalancing faults [43]-[45], relevant in the medium frequency range (500 hz - 10 khz), and to obtain information on partial discharge activity, as well as insulation quality, in the higher frequency range [20], [47]. it is underlined that the insulation quality influences the value of the inter-turn capacitance as well as the dielectric losses, and these produce measurable modifications of the behaviour of the winding resonance, triggered by high frequency disturbance from the inverter itself [49]. in the next sections, published experimental and modelling results for the e.m. characterization of electrical machines are reviewed and evaluated, with specific reference to the significance of the presented and achievable results and to the uncertainty that characterizes them.

3. normative requirements

relevant product standards are the iec 60034-1 [2] for electrical machinery (sec. 13 of the standard regards emc) and iec 61800-x [5]-[7] for power drives (thus covering the wide range of equipment named "power drive", "variable speed drive", "variable frequency drive", "electronically controlled motor", etc.). since the first issues in the '90s, when the problem of emc took concrete form and a first set of basic emc standards became available, these standards have undergone about three revisions in the last twenty years. the limits and measurement methods for emissions have long been inspired by those of cispr 11 [50], with some variations of the limit values for increasing size of the power drive, but without addressing the problem of integration and embedding. if for a large standalone power drive the problem is non-existent (typical installation at industrial sites allows for plenty of space, a good ground reference and good cable routing policies), for modern smart products, as well as automotive, avionics and naval applications, the space constraints are significant [47], [51]-[64]. applications may include additional requirements in terms of type of measurement of emissions and extended frequency range (e.g. e-field measured onboard ships from 150 khz [65], [66], or lower limits [67], [68]), and specific limits for co-located apparatus (e.g. radio receivers inside and around vehicles [69], [70] and navigation aids onboard ships [66]). standards for medical applications do not pose critical limits of emissions [71], although for equipment using motors and power regulators reference is made to cispr 14-1 [72] and possibly additional tests (the so-called "click emissions").
possible interference to worn and implantable medical devices (defibrillators, pacemakers, etc.) has been investigated in the working environment, in connection with heavy work such as welding, electro-erosion and galvanic processes, but also considering nearby motors and power drives [73]. the discussion will focus on the test integration for motors and power drives, starting from their own product standards, with the intention of integration in other equipment to which the emc standards for the final application apply. besides lower limits and more extended frequency intervals, the other elements affecting the uncertainty of a statement of conformity are the reduced distance and the environmental conditions of modern compact high-density applications, e.g. for automotive. the limit requirements, the degree of agreement between the specific emc product standards and with the other emc standards for generic and specific applications, as well as the evolution of the standards through the successive issues in the last twenty years, are discussed in the following. the emission standards for generic and specific applications correspond first to the generic light industrial and industrial ones (iec/en 61000-6-3 [11] and iec/en 61000-6-4 [12]), and then to medical (en 61010), railway (en 50121), ship and offshore (iec/en 60945, en 60533) and automotive (cispr 12, reg. unece 10, cispr 25) ones.

3.1. iec/en 60034-1

iec/en 60034-1, sec. 13, focuses on the emc of electrical machinery, for which, in the absence of electronic circuits, immunity is straightforwardly assured, whereas for emissions the limits of cispr 11 class b and class a are assigned to machines without and with brushes, respectively. the limits for radiated emissions are the same for all machines, corresponding to the "usual" 30-37 dbµv/m profile over 30-230-1000 mhz; conducted emissions must comply with the 66-56-60 dbµv (class b) or the 79-73 dbµv (class a) profiles, defined over 0.15-30 mhz. in the 2004 version there are several inaccuracies in the notes of sec. 13.5.2: the standard prescribes a test at no load, stating that machine emissions do not depend on load, which is not in line with what was observed in [15], [19], both for dc machines (with brushes) and for other types. the dc machine is said not to have conducted emissions because it is not connected directly to the ac supply, which is somewhat misleading in light of the evidence provided in [30], [31]: it is acknowledged that such emissions are not injected directly into the ac supply distribution, but they may well cause crosstalk to other cables within the cableway or cable harness. the iec/en 60034-1 indicates that no tests are needed for cage induction machines. curiously, the iec/en 60034-x standards do not specify limits or tests for synchronous generators, although several tests indicate a resonant behaviour and amplification of radiated emissions [14], [15], together with a specific excitation coming from the rotating converter connected to the field winding. prescriptions have not changed between versions (2004 [2], 2010 [3] and 2020 [4]).
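piecewise limit profiles such as those quoted above lend themselves to a simple automated check. the sketch below is illustrative only: it encodes the class b conducted profile (66-56-60 dbµv) with log-frequency interpolation on the decreasing first segment, an assumption consistent with common cispr practice, and flags measured points exceeding the limit line.

import math

def limit_dbuv(f_hz):
    # class b conducted quasi-peak profile quoted above: 66 -> 56 dbµv
    # (log-linear) over 0.15-0.5 mhz, 56 dbµv flat to 5 mhz, 60 dbµv flat
    # from 5 mhz to 30 mhz; the interpolation rule is our assumption
    segments = [(150e3, 500e3, 66.0, 56.0),
                (500e3, 5e6, 56.0, 56.0),
                (5e6, 30e6, 60.0, 60.0)]
    for f_lo, f_hi, l_lo, l_hi in segments:
        if f_lo <= f_hz <= f_hi:
            x = math.log10(f_hz / f_lo) / math.log10(f_hi / f_lo)
            return l_lo + x * (l_hi - l_lo)
    raise ValueError("frequency outside the 0.15-30 mhz range")

def over_limit(spectrum):
    # spectrum: iterable of (frequency in hz, measured level in dbµv);
    # returns the offending points with their margin above the limit
    return [(f, lvl, round(lvl - limit_dbuv(f), 1))
            for f, lvl in spectrum if lvl > limit_dbuv(f)]

# hypothetical measured points: only the first and last exceed the profile
print(over_limit([(200e3, 70.0), (1e6, 50.0), (10e6, 62.5)]))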
3.2. iec/en 61800-3

the iec/en 61800-3 is the emc product standard for power drive systems (pdss) and reads "the requirements were selected so as to ensure emc for pdss at residential, commercial and industrial locations, with exception of traction applications and electric vehicles" (applications with compact installation and minimum separation). emissions for railway and guideway applications are characterized by specific limits and measurement methods [74]: besides fast peak-detecting scans in the frequency domain, time domain characterization is often advisable for transient emissions, in particular for disturbance to telecom systems [75]. the possible use for medical applications is also not mentioned explicitly. when the drive is part of larger equipment, the iec/en 61800-3 gives way to the final equipment product standard, with possible non-compliance and inadequacy of the emission levels. limits of emissions for pdss are stipulated for four categories: the first two categories (c1 and c2) correspond to the limits of emission for the electric machinery alone (iec/en 60034-1), except for the limit of radiated emissions, increased for c2 by 10 db (40-47 dbµv/m) with respect to the 30-37 dbµv/m of c1 (see tables 14 and 15 of iec/en 61800-3). it is evident that in those 10 db there is no margin for the emissions of the converter, although emissions from the motor alone are expected to be much lower than the limit. a more complex scenario characterizes pdss of cat. c3, whose limits for conducted emissions are shown in table 17 of the standard: a distinction is made for nominal current below or above 100 a, neglecting the voltage, which has a significant impact anyway; the limit for radiated emissions is increased by another 10 db with respect to cat. c2 (50-60 dbµv/m), but a note says that these limits will be reconsidered in accordance with the results of ongoing activity within cispr.

3.3. emission standards for generic and specific applications

the most likely standards for generic and specific applications (light industrial and industrial, medical, railway, ship and offshore applications) are briefly reviewed, in order to frame the emission requirements for products possibly including motors and power drives. the iec/en 60601-1-2 [71], sec. 7.1.7, refers specifically to cispr 14-1 [72] for equipment whose main function is performed by motors or regulating devices (such as dental drills, surgical tools, operation tables). the cispr 14-1 [72] has the additional test of intermittent emissions (clicks), which is seldom carried out on industrial products. as shown in figure 1, automotive applications (reg. unece 010) have complex normative requirements, with specific measurements and in some cases quite low limits; in addition, measurement may occur at 3 m distance, for which an increase due to the reactive field region may be relevant at the lower end near 30 mhz. onboard ships there are specific regulations for disturbance to radio and navigation systems (iec 60945 [66], not included in figure 1), but two basic standards common to the automotive sector (absorbed by reg. unece 010 [68]) are also applied: cispr 12 [67] and cispr 25 [69], for the protection of off-board and on-board radio receivers, respectively. it is noteworthy that the conducted emission limits as per cispr 25 extend over a significantly wider frequency range and decrease with frequency, partially addressed by the cat. c1 and c2 limits, although additional control measures would be required.

4. models of electromagnetic emissions

the high-frequency models of motors can be basically classified as behavioural models, physical models, and finite-element models. the different standpoints are briefly described in the following.
• equivalent circuit constants of the motor model can be determined by measurement and fed to a so-called behavioural model, where the physical meaning of some of the time constants is somewhat lost. this approach is practical and has been the preferred method of choice so far; motor models without rotor [20], [44], [45], [76], [77] and with rotor [78] have been reported. the essential problem is that a real motor is required for measurements before simulation; this prevents the application of this approach and of the model during the design stage, before prototyping.
• if an accurate equivalent circuit is made, then the physical meaning of each circuit constant is preserved; these constants are then calculated from design parameters using either numerical formulae [79]-[84] (as is done in [38], [39], [48] for a dc motor, based on an interpretation of machine physics) or e.m. field analysis [81], [85], with the possibility of estimating the electrical parameters of inaccessible elements. obviously, this latter approach is computationally intensive, and some parameters may be affected by a large uncertainty, especially when designing a new machine.
• finally, a full finite element modelling approach is possible, where the model is used for the prediction of both conducted and radiated phenomena; it is directly connected to the source (i.e., the converter or the power grid, still modelled by means of a circuit approach) and is thus necessarily 3-d [44]. in [85], the calculation of the machine electrical parameters is made by 2-d and 3-d modelling for bulk and laminated cores, respectively.
figure 1. limits of emissions (quasi peak): (a) conducted, (b) radiated (10 m). color coding: iec 60034-1/b & iec 61800-3/c1 (magenta), iec 60034-1/b & iec 61800-3/c2 (red), iec 61800-3/c3 (≤100 a green, >100 a yellow), iec 61800-3 equiv. magn. field (cyan), cispr 25 (blue), unece10-esa (black solid), unece10-veh (black dotted).
a review has been carried out of published work where models are validated against experimental data; the outcomes are discussed in the following, focusing on the accuracy of the models, their reliability and simplicity of application. for a 3 kw induction machine, in [45], the validation was extended beyond a simple geometry and preliminary results show an agreement within 10 db. the model is simplified considering common mode only, which is largely responsible for the largest emissions generated by these machines. the equivalent circuit approach followed in [77] is approximately valid, in general, up to 10-15 mhz with respect to the measurement results, although the shown discrepancy is significant. one of the possible causes of discrepancy is the inaccuracy in the calculation of the inductance, which depends on the hypotheses adopted in the magnetic field analysis. the approach followed in [44], [76] is behavioural, and the two proposed models are based on simplified equivalent circuits for all the drive elements, with parameters adjusted by fitting measurement results. the overall drive model is thus in agreement with measurements of conducted phenomena up to approximately 10-15 mhz.
in [44], sample results of the 3-d modelling of a 3 kw motor are presented, but the validation of the approach (circuit simulation followed by a 3-d model solved by a finite integration algorithm, similar to fdtd) was made only on a simple wire geometry and is thus not conclusive: the agreement is very good up to 90 mhz, except for a discrepancy of up to 10 db between 35 and 50 mhz, due to different heights of the radiating wire with respect to ground in measurement and simulation. in [76] a behavioural model is used to predict the conducted emissions of a motor drive system. the models of the four-wire shielded cable, the induction motor and the pwm voltage inverter (modelled as a common-mode source only) are used for time-domain simulations of the adjustable speed drive. the comparison with measurement shows that conducted emissions can be predicted with acceptable accuracy up to 10 mhz; at higher frequency the discrepancy can be attributed to the noise introduced by the voltage probes and the oscilloscope characteristics. a simple, but effective, formulation is proposed in [41], where common-mode currents are neglected in favour of differential-mode currents; the radiating properties of the machine are then modelled by equivalent loops under a far-field assumption, which holds for the higher portion of the frequency interval, approximately up to 50 mhz. analogously, for a dc motor, benecke [38], [39], [48] presents an equivalent circuit approach, where the parameters are estimated by closed-form expressions obtained by fitting measured impedances of some motor elements; the motor is subdivided into armature windings, commutator blades, brushes, other wiring and casing. the results are very encouraging, with the terminal impedance fitting the radiated measurement results up to 1 ghz, with a negligible error almost everywhere on the frequency axis, except for an underestimated resonance peak at 600 mhz. the conducted emissions, on the contrary, match the experimental results only up to about 150 mhz, where the author says that supply line effects become predominant. finally, as regards the finite element modelling approach, finite element methods are based on standard e.m. codes, implementing for example the method of moments, mom (well suited for wire structures), and the finite difference time domain, fdtd (particularly useful for the computation of switching transient effects) [86]. these models are normally fed by excitation currents that are in turn calculated by a full or equivalent circuit approach. in conclusion, the determination of the e.m. emissions proceeds as follows:
• simulation of the circuit representation of the ensemble converter plus cable plus motor (the latter modelled as an equivalent circuit) and calculation of the excitation currents/voltages (the prediction of conducted voltages and currents may be very accurate, as in [47]);
• application of the excitation signals to either an equivalent circuit [41] or a finite element method model.
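the separation of common- and differential-mode quantities, which the first step implies and which is taken up again in the limitations below, can be illustrated with a minimal sketch (the sample values are hypothetical; the decomposition rule itself is the standard one for three-phase systems):

def cm_dm_decompose(i_a, i_b, i_c):
    # split instantaneous three-phase currents (lists of samples, in a)
    # into the common-mode component (the sum, returning through
    # ground/parasitics) and per-phase differential-mode residuals
    i_cm = [a + b + c for a, b, c in zip(i_a, i_b, i_c)]
    i_dm = [[x - cm / 3.0 for x, cm in zip(phase, i_cm)]
            for phase in (i_a, i_b, i_c)]
    return i_cm, i_dm

# hypothetical samples: balanced dm currents plus a small cm leakage
i_a = [1.0, 0.5, -0.5]
i_b = [-0.4, -0.3, 0.2]
i_c = [-0.5, -0.1, 0.4]
i_cm, (i_dm_a, i_dm_b, i_dm_c) = cm_dm_decompose(i_a, i_b, i_c)
print(i_cm)     # [0.1, 0.1, 0.1] -> constant ground leakage in this toy case
print(i_dm_a)   # phase-a current with its share of the cm term removed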
the limitations of this approach may be stated as:
• correct modelling of both common- and differential-mode circuits and estimation of the related sources and signals [42];
• adequate modelling of parasitic components, and in particular of the turn-to-turn and turn-to-frame/core capacitance terms [87];
• correct application of the signals to the circuit elements and ports [45]; in particular, if a behavioural or equivalent circuit approach is chosen (also to match experimental observations and measurements at accessible terminals), common- and differential-mode signals need to be distinguished; depending on frequency and on the variability of the unknown and less controllable parameters, differential-to-common mode transformation may take place [45].
a simple worst-case model can be based on the assumption that the level of emissions is the maximum allowed by the limits of a given class or product, for emissions measured at the prescribed distance d*. the intensity of the e.m. field is then extrapolated to other distances d assuming a given dependency on distance. the presence of a near-field region around the machine and the reactive behaviour of the radiated emissions imply that the expected field intensity is much larger and no longer linear with distance (far-field assumption): terms with the square and cubic power of distance should be considered as well. machine feeding cables and their high-frequency currents can in fact be treated as hertzian dipoles, or as a chain of short dipoles, whose radiated field in the near-field region is composed of the electrostatic, induction and radiation field terms. in figure 2 the maximum allowed field intensity (the emission limit at the standardized measurement distance) is extrapolated to the shorter distances d characteristic of modern applications. the extrapolation is achieved under the following assumption: the radiating element is a small part of the overall power drive, so that far-field formulas for dipole antennas are used (with the near-field boundary at 1/k = λ/(2π)), rather than those for large antennas. the extrapolation is carried out including the second-order terms (1/(kd)²) and third-order terms (1/(kd)³); the terms are rms-composed, assuming arbitrary time-phase relationships. at distances of 1 m the extrapolated field intensity is significantly larger than the values obtained with a far-field assumption, and at the limit of the separation between categories of emissions (10 db). a distance of 0.5 m is evidently a significant issue, because all components in the most relevant frequency interval up to 100 mhz are well beyond control. it is underlined also that such components have a lower wave impedance than in far field, so that the effectiveness of conductive shields is reduced.
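a minimal sketch of the worst-case extrapolation just described, under the stated hertzian-dipole assumption and with the 1/(kd), 1/(kd)² and 1/(kd)³ terms rms-composed (the numeric example is hypothetical):

import math

def extrapolated_limit_dbuvm(limit_dbuvm, d_ref_m, d_m, f_hz):
    # extrapolate a radiated-emission limit stated at d_ref to distance d,
    # rms-composing the radiation, induction and electrostatic field terms
    # of a short (hertzian) dipole; a worst-case sketch only
    k = 2.0 * math.pi * f_hz / 3.0e8          # free-space wavenumber

    def field_factor(d):
        kd = k * d
        # relative magnitude of the 1/kd, 1/(kd)^2 and 1/(kd)^3 terms
        return math.sqrt(kd ** -2 + kd ** -4 + kd ** -6)

    return limit_dbuvm + 20.0 * math.log10(field_factor(d_m) /
                                           field_factor(d_ref_m))

# example: a 30 dbµv/m limit stated at 10 m, brought to 0.5 m at 30 mhz;
# the reactive terms dominate and the result far exceeds a 1/d scaling
print(round(extrapolated_limit_dbuvm(30.0, 10.0, 0.5, 30e6), 1))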
5. experimental characterization of e.m. emissions

there are several experimental results related to motor-plus-converter setups (such as industrial or automotive asds, where it is known that the converter is the major source of conducted and radiated emissions), but very few references report on the e.m. characteristics and contribution of the motor as a standalone item and/or of the motor as a radiating source converting conducted emissions. the mechanism of emission for dc electric motors is related to the commutation at the collector brushes and the arcing phenomena, while for ac motors it is due to the modulation of the machine magnetic field produced by the interaction of stator and rotor slots during rotation.

5.1. dc motor

the main source of e.m. emissions considered in the literature is that of the commutation at the motor brushes and arc ignition/extinction. insights into the phenomenon of commutator sparking (flashovers) and arcing in electrical machines are given in [88], [89]. in [90] several methods are investigated and discussed to assess the quality of commutation in dc traction motors, such as motor current signature analysis, antenna pickup of sparking, photodiode measurement of sparking, acoustic analysis, and o3 and nox detection. with these techniques poor commutation can be monitored, and actions can be taken to prevent sparking, which is associated with wide-band e.m. emissions. for example, motor current signature analysis requires the motor current to be analysed in the frequency domain at the bar-passing frequency, which is determined by multiplying the number of commutator bars by the speed of the motor (see the sketch after this section's figure). two laboratory tests were carried out [90], the former on a 10 hp industrial motor, the latter on a standard 750 hp locomotive traction motor; both motors were in the steady state. in [91], the analysis of the motor current takes advantage of wavelet analysis. individual sparking events may be observed during the commutation even at low frequencies, making it possible to assess the quality of commutation even without expensive radio-frequency instrumentation. the radiation mechanism, on the contrary, is not directly from the electric arc itself, but from the motor windings, which are simulated with various circuit geometries, as described in detail in [18]. in figure 3, the results of the calculated e-field emissions (under two different assumptions concerning the model of the arc: constant voltage and constant e-field) are shown and compared with measurement results [18]. by inspection of the two calculated curves shown in figure 3, it is evident that the constant-voltage assumption is reasonably valid and appropriate, as confirmed by the results of the measurement of the winding voltage and current during commutation. suriano et al. [18] state that the emissions prediction by either a constant-voltage or a constant-electric-field model of the commutation is inaccurate below 500 khz. on the contrary, this limit was experimentally determined as the boundary between significant and negligible emissions from a large power dc motor [15], by directly comparing the measured emissions and the background noise. the comparison is shown in figure 4, where the difference is in excess of 20 db below 500 khz, and only 10 db up to 3.5 mhz (and then not appreciable). the difference may be justified if the attenuation produced by the motor enclosure and cable harness present in [15] is taken into account, whereas such provisions are not accounted for in [18]. additionally, the motors considered in the two publications are of different size. in general, motors with a larger rated power have additional shielding provided by the heavy enclosure, ventilation, etc. an accurate model of the winding impedance, validated against experimental data, is presented in [38], [39], [48], with remarkable accuracy up to 1 ghz for the terminal impedance and up to 150 mhz for the conducted emissions.
figure 2. extrapolation of maximum emissions to (a) 1 m and (b) 0.5 m distance; iec 61800-3 c1 (magenta), c2 (red), c3 (green); darker curves indicate inclusion of second order (thin) and third order (thick) terms.
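the bar-passing frequency used by the motor current signature analysis mentioned above follows directly from its definition; a toy example with hypothetical motor data:

def bar_passing_frequency_hz(n_commutator_bars, speed_rpm):
    # number of commutator bars times the mechanical speed in rev/s
    return n_commutator_bars * speed_rpm / 60.0

# hypothetical dc traction motor: 105 commutator bars at 900 rpm
print(bar_passing_frequency_hz(105, 900))   # 1575.0 hz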
figure 3. measured (grey curve) and calculated (hollow diamond: constant-voltage arc assumption; filled diamond: constant e-field arc assumption) dc motor emissions [18].
in [39], an equivalent circuit model is proposed and the single elements are modelled: the turn geometry is accounted for in the calculation of mutual inductance and capacitance and to derive the winding input impedance over an extended frequency range. in [32], it is underlined that practical factors can influence the resulting electrical parameters, such as the fact that windings are not evenly distributed due to shape changes, and that they can vary among different units. the consistency between the simulation and the experimental results was shown by the authors for three tested motors; for two of them, in particular, the agreement is very good, with remarkable differences only concerning the peak height at resonances, as could be expected, because the damping factor highly depends on the correct estimation of the loss factors in the copper, iron and insulation.

5.2. ac induction motor

induction motors are the most widely used machines in asd applications. induction motors are typically fed by static inverters, mostly of the pwm type, with three or more levels for very large power applications. the converter and the feeding cable are normally assumed to be the most relevant and the sole source of electromagnetic emissions [16], [17], [21], [33], [40]-[43], and seldom is the motor considered alone, as a source of emissions by itself or excited by the converter conducted emissions [15], [19]. so, if the focus is on the converter conducted emissions, in terms of the resulting common- and differential-mode currents following the converter common- and differential-mode voltage waveforms during commutations, the accurate modelling of the cable [16] and motor [49], [92], [93] impedance terms becomes crucial. in synthesis, concerning the motor differential impedance, which is the most relevant for diagnostic purposes, in [20] it is concluded that "it is the authors' experience, confirming the conclusions of other investigators, a reasonable upper limit for modelling the machine's differential impedance as an inductive system is in the range of 10 khz - 100 khz; above this range the machine appears capacitive into the 1-3 mhz range; beyond approximately 3 mhz the machine resembles a transmission line." this statement sheds some light on the fact that, above the said frequency limits, the common-mode impedance and the effects of the enclosure grounding must be considered. in [92], the shown results cover the low and medium frequency range up to 1 mhz (the limit imposed by the lrc bridge used). what is relevant is that the authors underlined the influence of the grounding of the motor enclosure; they report results for both grounded and ungrounded enclosure (the latter for comparison with other references). the results, however, are only for the differential impedance, which the authors put in direct relationship with the inverter feeding of the motor phase windings. for this reason, they do not consider the common-mode impedance (and the enclosure is included for the effects it has on some internal coupling terms).
it is interesting to observe that the enclosure grounding greatly affects the impedance curve: for the test case considering both grounded and floating enclosure, the first resonance frequency moves from 25 khz to about 60 khz, followed by a steep decay of values in the case of a floating enclosure, while the amplitude remains large, and even increases, for a grounded enclosure. this can be studied for each motor to shape the impedance curve, in order to reduce the current flowing due to inverter voltage emissions. on the other hand, any parametric change to the internal motor circuit topology could be monitored if it influences, in particular, the first resonance.

5.3. synchronous machines

the evaluation of e.m. emissions from electrical rotating machines has been a relatively new concern in the past decade, with only a few contributions found in the literature. in fact, they have primarily been considered a source of magnetic field at the fundamental frequency and its harmonics. instead, especially for large power synchronous machines, a number of possible sources of high frequency disturbance for the machine and its auxiliary systems can be identified. considering the extension in space of large power electrical rotating machinery and the non-ideality of the test site where it is necessary to perform the tests, the assessment of e.m. emissions is thus not an easy task. in [15], the assessment of e.m. emissions from large electrical rotating machines is presented. figure 5 and figure 6 show the magnetic field measured along the vertical axis of the machine generated by a separately excited synchronous generator and a brushless synchronous generator, respectively. three different tests are considered: generator moved by the driving dc motor, no-load and short-circuit conditions. it is interesting to notice that the emissions are higher in the no-load condition for the separately excited synchronous generator and in the short-circuit condition for the brushless synchronous generator: thus, the short-circuit condition is not always the worst-case condition in the assessment of e.m. emissions.
figure 4. measured radiated emissions from a large power dc motor [15].
figure 5. vertical magnetic field for (grey) moved, (heavy black) no-load, and (thin black) short-circuit conditions of a synchronous generator [19].
in [19], the metrological characteristics that describe the quality and reliability of the experiments are pointed out. in particular, the quality of measurements is evaluated in terms of repeatability (computed as the standard deviation of repeated measurements) and reproducibility (computed as the difference of the average recordings of different generators of the same type), shown in figure 7 (a) and (b), respectively.
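the two figures of merit just recalled can be sketched numerically as follows (the readings are hypothetical, not the data of [19]):

import statistics

def repeatability(readings):
    # type a evaluation: standard deviation of repeated measurements
    return statistics.stdev(readings)

def reproducibility(readings_unit_1, readings_unit_2):
    # difference of the average recordings of two units of the same type
    return abs(statistics.mean(readings_unit_1) -
               statistics.mean(readings_unit_2))

# hypothetical magnetic-field readings (dbµa/m) at one frequency point
gen_a = [54.2, 54.6, 53.9, 54.4]
gen_b = [56.1, 55.7, 55.9]
print(repeatability(gen_a))              # spread of repeats on one unit
print(reproducibility(gen_a, gen_b))     # offset between identical units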
6. uncertainty and systematic errors

any statement of compliance to limits, and any assessment of a quantity, brings along the concept of uncertainty. in the present case the measurement of conducted and radiated emissions is disciplined by the set of cispr 16 standards, and in particular cispr 16-4-1. however, the focus of the cispr standards is on emc measurements carried out on equipment of small-medium dimensions that, for radiated emissions, fits one of the advised facilities, namely the open area test site (oats), semi-anechoic chamber (sac), fully anechoic chamber (fac), and tem and gtem cells. in addition, the standards do not account for ancillary equipment with a potentially significant emission contribution (such as lubricating and cooling systems, auxiliary converters and machinery to drive or load the machine under test). these factors were extensively discussed in [19], addressing the problem by means of experimental results and analysing repeatability and type a uncertainty. smaller equipment (which may be a motor or a complete drive) can fit standardized test facilities and may be tested more easily, with a better documented uncertainty. however, one more factor comes into play: the purpose of emc standards is to test the equipment with the minimum ancillary equipment for operation, in an environment and with a test setup that maximize reproducibility. more and more often, modern motors and drives are embedded in oem (other equipment manufacturer) products and in compact applications featuring high power density, such as automotive (and in particular electric vehicles), avionics, medical and laboratory (in particular next to sensors and diagnostic instruments) or electrical appliances (such as ventilation and conditioning, for which [56] exemplifies typical issues of compliance). drawing a parallel with other systems featuring large apparatus, the relevant factors with an impact on uncertainty and reproducibility are: i) emissions from a long cable line connected to the motor or generator, a scenario that can be borrowed from the catenary feeding line in electrified railways [94]; ii) a moving source, in the case of vehicles and electrified transport, as well as moving industrial apparatus (e.g. for positioning or tooling); iii) the variability of operating conditions for large machines that cannot be bench tested under satisfactory conditions. the electromagnetic environment is quite different from a nearly open area with far-field conditions and no scattering and reflections. in addition, the distance at which possible victims are located is very short, implying that the electromagnetic field components are in the reactive region up to quite a high frequency. the compactness of the setup that reproduces the final application impacts directly on the suitable antennas to adopt, which are different from those considered by cispr and generally accepted for standardized tests: antennas of very small dimensions have a low sensitivity, for which the background noise is very close to, not to say above, the limits without using preamplification. second, the reactive behaviour of the measured emissions, with a dependency on the square to cubic power of distance, increases the uncertainty due to errors of positioning and measurement of distance, and also influences the expected level of emissions onto nearby victims in the final installation.
figure 6. vertical magnetic field for (grey) moved, (heavy black) no-load, and (thin black) short-circuit conditions of a brushless synchronous generator [15].
figure 7. (a) repeatability as standard deviation and (b) reproducibility as difference of averages for the magnetic field components of two identical synchronous generators [19].
among the environmental conditions of the installation, temperature and vibration are particularly important for their influence on emissions, taking into account the widespread use of motors and power drives in automotive, avionic, naval and offshore applications, all characterized by a wide range of these two environmental parameters.
environmental factors represent to some extent a source of systematic error, as regards the offset with respect to the conditions applied during emc tests for certification. the variability around the typical operating points is an additional source of uncertainty. in [95] the influence of vibration on the conducted emissions of a dc motor was measured and compared against the limits of conducted emissions for class 1 of cispr 25: besides a violation of the limits by about 10 db caused by commutation arcing in normal conditions, the tests carried out with vibration applied (simulating a realistic condition of use) show an increase of conducted emissions by about another 5 db on average.

7. conclusions

this work has discussed the problem of the electromagnetic compliance of electric machinery and power drives, by introducing and reviewing the normative references for electromagnetic emissions, the available modelling approaches and their accuracy, and the measurement methods, including the influence of setup and environment, besides the different behaviour of the various types of machines. for normative limits, section 3 has distinguished standards for generic and specific applications, addressing the problem of applying the standards and limits of the final application. focus is equally divided between conducted and radiated emissions, although the latter are more extensively covered, both for the lack of a direct connection to the public distribution network and for their particular interest for diagnostic purposes. various modelling approaches have been reviewed in section 4, as a replacement for the measurements discussed in section 5 and as a means of predicting the emissions of the machine or power drive installed and operating in its final application. models in general require some tuning and supplemental information for parameters and input quantities, achievable only by means of measurements, which thus seem an irreplaceable element. selected models are reported as having good accuracy, in particular up to some tens of mhz, able to cover the largest emissions also of power drives, which as a rule of thumb have an interval of emissions about one to two orders of magnitude more extended than electrical machines alone. the available references for measurements were also discussed, both for the main features of the electromagnetic emissions of the different types of machines, and for the metrological performance in terms of systematic errors, repeatability and uncertainty.

references

[1] directive 2014/30/eu of the european parliament and of the council of 26 february 2014 on the harmonisation of the laws of the member states relating to electromagnetic compatibility. online [accessed 20 june 2021] http://data.europa.eu/eli/dir/2014/30/oj
[2] iec 60034-1, rotating electrical machines - part 1: rating and performance, international electrotechnical commission: geneva, switzerland, 2004.
[3] iec 60034-1, rotating electrical machines - part 1: rating and performance, international electrotechnical commission: geneva, switzerland, 2010.
[4] iec 60034-1, rotating electrical machines - part 1: rating and performance, international electrotechnical commission: geneva, switzerland, 2017.
[5] iec 61800-1, adjustable speed electrical power drive systems - part 1: general requirements - rating specifications for low voltage adjustable speed dc power drive systems, international electrotechnical commission: geneva, switzerland, 2021.
[6] iec 61800-2, adjustable speed electrical power drive systems - part 2: general requirements - rating specifications for low voltage adjustable speed a.c. power drive systems, international electrotechnical commission: geneva, switzerland, 2015.
[7] iec 61800-3, adjustable speed electrical power drive systems - part 3: emc requirements and specific test methods, international electrotechnical commission: geneva, switzerland, 2017.
[8] iec 61800-3, adjustable speed electrical power drive systems - part 3: emc requirements and specific test methods, international electrotechnical commission: geneva, switzerland, 2004 + amd1:2011.
[9] iec 61000-6-1, electromagnetic compatibility (emc) - part 6-1: generic standards - immunity for residential, commercial and light-industrial environments, international electrotechnical commission: geneva, switzerland, 2016.
[10] iec 61000-6-2, electromagnetic compatibility (emc) - part 6-2: generic standards - immunity for industrial environments, 2016.
[11] iec 61000-6-3, electromagnetic compatibility (emc) - part 6-3: generic standards - emission standard for residential, commercial and light-industrial environments, international electrotechnical commission: geneva, switzerland, 2020.
[12] iec 61000-6-4, electromagnetic compatibility (emc) - part 6-4: generic standards - emission standard for industrial environments, international electrotechnical commission: geneva, switzerland, 2018.
[13] ieee std. 1566, standard for performance of adjustable speed ac drives rated 375 kw and larger, ieee, piscataway, nj, usa, 2015.
[14] a. mariscotti, measurement procedures and uncertainty evaluation for electromagnetic radiated emissions from large power electrical machinery, ieee trans. instr. meas. 56 (2007), pp. 2452-2463. doi: 10.1109/tim.2007.908351
[15] p. ferrari, a. mariscotti, a. motta, p. pozzobon, electromagnetic emissions from electrical rotating machinery, ieee trans. energy conv., 16 (1) (2001), pp. 68-73. doi: 10.1109/60.911406
[16] s. a. pignari, a. orlandi, long cable effects on conducted emissions levels, ieee trans. electrom. compat., 45 (1) (2003), pp. 43-54. doi: 10.1109/temc.2002.808023
[17] u. reggiani, a. massarini, l. sandrolini, m. ciccotti, x. liu, d. w. p. thomas, c. christopoulos, experimental verification of predicted electromagnetic fields radiated by straight interconnect cables carrying high-frequency currents, proc. ieee power tech conf., bologna, 23-26 june 2003, 5 pp. doi: 10.1109/ptc.2003.1304166
[18] c. r. suriano, j. r. suriano, g. thiele, t. w. holmes, prediction of radiated emissions from dc motors, proc. ieee int. symp. on emc, colorado, usa, 24-28 august 1998, pp. 790-795 vol. 2. doi: 10.1109/isemc.1998.750300
[19] a. mariscotti, assessment of electromagnetic emissions from synchronous generators and its metrological characterization, ieee trans. instrum. meas. 59 (2) (2010), pp. 450-457. doi: 10.1109/tim.2009.2024696
[20] g. l. skibinski, r. j. kerkman, d. schlegel, emi emissions of modern pwm ac drives, ieee ind. appl. magazine 5 (6) (1999), pp. 47-81. doi: 10.1109/2943.798337
[21] m. wisniewski, r. kubichek, j. pierre, emi emissions up to 1 ghz from adjustable speed drives, proc. of 27th annual conf. of the ieee ind. electr. society, iecon, denver, co, usa, 29 november - 2 december 2001, pp. 113-118 vol. 1. doi: 10.1109/iecon.2001.976464
[22] e. t. arntzen, r. f. kubichek, j. w. pierre, s. ula, initial findings in electromagnetic emissions above 30 mhz from power inverters and variable speed motor controllers, proc. ieee int. symp. on emc, 18-22 august 1997, austin, tx, usa, pp. 106-108. doi: 10.1109/isemc.1997.667550
[23] s. ogasawara, h. ayano, h. akagi, measurement and reduction of emi radiated by a pwm inverter-fed ac motor drive system, ieee trans. ind. appl. 33 (4) (1997), pp. 1019-1026. doi: 10.1109/28.605744
[24] a. dolente, u. reggiani, l. sandrolini, comparison of radiated emissions from different heatsink configurations, proc. of ieee 6th int. symp. on electrom. compat. and electrom. ecology, st. petersburg, russia, 21-24 june 2005, pp. 49-53. doi: 10.1109/emceco.2005.1513059
[25] i. montanari, emi measurements for aging control and fault diagnosis in active devices, proc. of ieee 6th int. symp. on electromagnetic compatibility and electromagnetic ecology, st. petersburg, russia, 21-24 june 2005, pp. 183-186. doi: 10.1109/emceco.2005.1513096
[26] y. kazakov, n. morozov, e. shumilov, analysis of the electromagnetic radiation distribution of frequency-controlled electric machines in order to diagnose their performance, proc. of int. conf. on electrotechn. complexes and systems (icoecs), ufa, russia, 27 october 2020, pp. 1-4. doi: 10.1109/icoecs50468.2020.9278477
[27] a. mariscotti, a. ogunsola, l. sandrolini, survey of models and reference data for prediction of e.m. field emissions from electrical machinery, proc. of the ieee symp. on diagn. for elec. machines, pow. electron. & drives, bologna, italy, 5-8 september 2011, pp. 350-355. doi: 10.1109/demped.2011.6063647
[28] d. burow, j. k. nelson, s. salon, j. stein, high frequency synchronous generator model for electromagnetic signature analysis, proc. of the 20th intern. conf. on electr. mach., marseille, france, 2-5 september 2012, pp. 1712-1716. doi: 10.1109/icelmach.2012.6350111
[29] d. burow, j. k. nelson, s. salon, j. stein, high frequency synchronous generator modeling and testing for electromagnetic signature analysis, proc. of the intern. conf. electr. mach., berlin, germany, 2-5 september 2014, pp. 1894-1900. doi: 10.1109/icelmach.2014.6960442
[30] c. r. suriano, j. r. suriano, g. thiele, t. w. holmes, prediction of radiated emissions from dc motors, proc. of ieee emc intern. symp. electrom. comp., denver, usa, 24-28 august 1998, pp. 790-795 vol. 2. doi: 10.1109/isemc.1998.750300
[31] j. suriano, c. m. ong, modeling of electromechanical and electromagnetic disturbances in dc motors, proc. of the nat. symp. electrom. comp., denver, usa, 23-25 may 1989, pp. 258-262. doi: 10.1109/nsemc.1989.37190
[32] f. pavlovcic, commutator motors as emi sources, proc. of the intern. symp. on pow. electron., electr. drives, autom. and motion, pisa, italy, 14-16 june 2010, pp. 1789-1793. doi: 10.1109/speedam.2010.5545039
[33] g. antonini, s. cristina, a. orlandi, a prediction model for electromagnetic interferences radiated by an industrial power drive system, ieee trans. ind. app. 35 (1999), pp. 870-876. doi: 10.1109/28.777196
[34] d. l. sanders, j. p. muccioli, a. a. anthony, d. s. walz, d. montone, using image planes on dc motors to filter high frequency noise, proc. of the ieee intern. symp. electrom. comp., silicon valley, ca, usa, 9-13 august 2004, pp. 621-625 vol. 2. doi: 10.1109/isemc.2004.1349870
[35] h. miloudi, a. bendaoud, m. miloudi, s. dickmann, s. schenke, common mode and differential mode characteristics of ac motor for emc analysis, proc. of the intern. symp. electrom. comp. emc europe, sept. 5-9, 2016, wroclaw, poland. doi: 10.1109/emceurope.2016.7739260
[36] z. duan, x. wen, a new analytical conducted emi prediction method for sic motor drive systems, etransportation, 3 (2020), pp. 1-11. doi: 10.1016/j.etran.2020.100047
[37] p. l. alger, the induction machine, gordon & breach, 1965.
[38] j. benecke, s. dickman, analytical hf model for multipole dc motors, proc. of 18th int. zurich symp. on emc, munich, germany, sept. 24-28, 2007, pp. 201-204. doi: 10.1109/emczur.2007.4388230
[39] j. benecke, s. dickman, analytical hf model of a low voltage dc motor armature including parasitic properties, proc. ieee int. symp. on emc, hawaii, usa, july 9-13, 2007. doi: 10.1109/isemc.2007.250
[40] g. l. giuliattini burbui, u. reggiani, l. sandrolini, prediction of low-frequency electromagnetic interferences from smps, 2006 ieee international symp. on electromagnetic compatibility emc 2006, portland, usa, aug. 14-18, 2006, pp. 472-477. doi: 10.1109/isemc.2006.1706350
[41] f. della torre, a. p. morando, study on far-field radiation from three-phase induction machines, ieee trans. electromagn. compat. 51 (4) (2009), pp. 928-936. doi: 10.1109/temc.2009.2027814
[42] g. grandi, d. casadei, u. reggiani, analysis of common mode and differential mode hf current components in pwm inverter fed ac motors, proc. ieee 19th annual power electronics specialist conf., fukuoka, japan, 1998, pp. 1146-1151. doi: 10.1109/pesc.1998.703149
[43] g. grandi, d. casadei, u. reggiani, common- and differential-mode hf current components in ac motors supplied by voltage source inverters, ieee trans. on power electronics, 19 (1), 2004, pp. 16-24. doi: 10.1109/tpel.2003.820564
[44] f. costa, e. laboure, v. lavabre, m. patra, l. paletta, validation of numerical calculations of the conducted and radiated emissions: application to a variable speed drive, proc. of ieee 31st annual power electronics specialist conf., ireland, 2000. doi: 10.1109/pesc.2000.879939
[45] f. costa, c. vollaire, r. meuret, modeling of conducted common mode perturbations in variable-speed drive systems, ieee trans. electrom. compat. 47 (4) (2005), pp. 1012-1021. doi: 10.1109/temc.2005.857365
[46] j. penman, h. g. sedding, b. a. lloyd, w. t. fink, detection and location of interturn short circuits in the stator windings of operating motors, ieee trans. en. conv. 9 (4) (1994), pp. 652-658. doi: 10.1109/60.368345
[47] j. e. makaran, j. lovetri, bldc motor and drive conducted rfi simulation for automotive applications, ieee trans. electrom. comp. 45 (2) (2003), pp. 316-329. doi: 10.1109/temc.2003.811304
[48] j. benecke, impedance and emission optimization of low voltage dc motors for emc compliance, ieee trans. on industrial electronics, 58 (9), 2011, pp. 3383-3389. doi: 10.1109/tie.2010.2084978
[49] j. luszcz, k. iwan, conducted emi propagation in inverter-fed ac motor, magazine of electrical power quality and utilization, 2 (1), 2006, pp. 47-51.
[50] cispr 11, industrial, scientific and medical equipment - radio-frequency disturbance characteristics - limits and methods of measurement, 2015.
[51] j. geng, r. jin, y. fan, b. liu, j. li, y. cheng, z. wang, the study on electromagnetic compatibility of dc electric motor in haps, aerospace sci. and techn., 9 (2005), pp. 617-625. doi: 10.1016/j.ast.2005.01.004
[52] i. m. ahmad, a. eroglu, c. pomalaza-raez, m. li, simulation and modeling technique for emi measurement of the ecm motors in hvac systems, proc. of the ieee 7th ann. ubiq. comp., electron. & mob. comm. conf. (uemcon), oct. 20-22, 2016, new york, usa. doi: 10.1109/uemcon.2016.7777877
[53] m. i. buzdugan, h. balan, electromagnetic compatibility issues of brushless speed drives, intern. univ. pow. eng. conf. (upec), sept. 6-9, 2016, coimbra, portugal. doi: 10.1109/upec.2016.8114064
[54] i. oganezova, r. kado, b. khvitia, a. gheonjian, r. jobava, simulation of conductive and radiated emissions from a wiper motor according to cispr 25 standard, ieee intern. symp. electrom. comp., aug. 16-22, 2015, dresden, germany. doi: 10.1109/isemc.2015.7256296
[55] j. e. makaran, j. lovetri, prediction of conducted rfi emissions in bldc motors for automotive applications, ieee intern. symp. electrom. comp., aug. 13-17, 2001, montreal, canada.
[56] i. t. arntzen, r. f. kubichek, j. w. pierre, s. ula, initial findings in electromagnetic emissions above 30 mhz from power inverters and variable speed motor controllers, ieee intern. symp. electrom. comp., aug. 18-22, 1997, austin, texas, usa. doi: 10.1109/isemc.1997.667550
[57] a. a. adam, k. gulez, s. koroglu, stray magnetic field distributed around a pmsm, turk j elec. eng. & comp. sci., vol. 19, 2011, pp. 119-131. doi: 10.3906/elk-1003-404
[58] o. hasnaoui, electromagnetic interferences and common mode voltage generated by variable speed ac motors, intern. conf. electr. sci. and techn. in maghreb, nov. 3-6, 2014, tunis, tunisia. doi: 10.1109/cistem.2014.7076990
[59] j. lee, k. jung, s. park, simulation of radiated emissions from a low voltage bldc motor, intern. symp. on antennas and propagation (isap), oct. 23-26, 2018, busan, south korea.
[60] m. ma, z. xu, w. tao, x. zhang, z. ji, h. li, environmental leakage magnetic field analysis of high-powered pm linear motor with inverted u structure, 22nd intern. conf. electr. mach. and sys., aug. 11-14, 2019, harbin, china. doi: 10.1109/icems.2019.8921821
[61] j. guo, c. wang, y. ding, l. jiang, y.
lv, suppression of electromagnetic radiation emission from electric vehicles: an engineering practical approach, emc sapporo & apemc, june 3-7, 2019, pp.476-479. doi: 10.23919/emctokyo.2019.8893721 [62] j. guo, y. zhang, g. zhang, x. zhang, g. jiang, analysis on an electromagnetic compatibility engineering trouble shooting case of electric vehicle, emc sapporo & apemc, june 3-7, 2019, pp.464-467. doi: 10.23919/emctokyo.2019.8893937 [63] r. armstrong, l. dawson, a. j. rowell, c. a. marshman, a. r. ruddle, the effect of fully electric vehicles on the low frequency electromagnetic environment, ieee intern. symp. electrom. comp., dresden, germany, 16-22 august 2005, pp. 662-667. doi: 10.1109/isemc.2015.7256242 [64] l. zhai, l. lin, c. song, x. zhang, mitigation conducted-emi emission strategy based on distributed parameters of power inverter system in electric vehicle, ieee 2nd ann. south. pow. electron. conf. (spec), auckland, new zealand, 5-8 december 2016, pp. 1-4. doi: 10.1109/spec.2016.7846104 [65] iec 60533, electrical and electronic installations in ships – electromagnetic compatibility (emc) – ships with a metallic hull, 2015, geneva, switzerland. [66] iec 60945, maritime navigation and radiocommunication equipment and systems – general requirements – methods of testing and required test results, 2002, geneva, switzerland. [67] cispr 12, vehicles, boats and internal combustion engines – radio disturbance characteristics – limits and methods of measurement for the protection of off-board receivers, 2009. [68] reg. unece 10, uniform provisions concerning the approval of vehicles with regard to electromagnetic compatibility, rev. 5, 2014. [69] cispr 25, vehicles, boats and internal combustion engines – radio disturbance characteristics – limits and methods of measurement for the protection of on-board receivers, 2008. [70] cispr 36, electric and hybrid road vehicles – radio disturbance characteristics – limits and methods of measurement for the protection of off-board receivers below 30 mhz, 2016 (draft cispr/d/429/cd) [71] iec 60601-1-2, medical electrical equipment part 1-2: general requirements for basic safety and essential performance – collateral standard: electromagnetic disturbances – requirements and tests, 2014+amd1:2020, geneva, switzerland. [72] cispr 14-1, electromagnetic compatibility – requirements for household appliances, electric tools and similar apparatus – part 1: emission, 2020. [73] j. g. fetter, d. g. benditt, m. s. stanton, electromagnetic interference from welding and motors on implantable cardioverter-defibrillators as tested in the electrically hostile work site, j. amer. coll. cardiol., vol.28, august 1996, pp.423427. doi: 10.1016/0735-1097(96)00147-7 [74] a. mariscotti, normative framework for the assessment of the radiated electromagnetic emissions from traction power supply and rolling stock, ieee veh. pow. and prop. conf., hanoi, vietnam, 14-17 october 2019, pp. 1-7. doi: 10.1109/vppc46532.2019.8952487 [75] a. mariscotti, a. marrese, n. pasquino, time and frequency characterization of radiated disturbance in telecommunication bands due to pantograph arcing, measurement, vol.46, 2013, pp. 4342-4352. doi: 10.1016/j.measurement.2013.04.054 [76] m. moreau, n. idir, ph. le moigne, j.j. franchaud, utilization of a behavioural model of motor drive systems to predict the conducted emissions, proc. 39th ieee power electronics specialist conf., rhodes, greece, 15-19 june 2008, pp. 4387-4391. doi: 10.1109/pesc.2008.4592652 [77] k. maki, h. funato, l. 
shao, motor modeling for emc simulation by 3-d electromagnetic field analysis, proc. ieee int. electric machines and drives conf., 3-6 may 2009, pp.103-108. doi: 10.1109/iemdc.2009.5075190 [78] r. naik, t. a. nondahl, m. j. melfi, r. schiferl, j. s. wang, circuit model for shaft voltage prediction in induction motors fed by pwm based ac drives, ieee trans. on industry applications, 39 (5), 2003, pp.1294-1299. doi: 10.1109/tia.2003.816504 [79] a. muetze, a. binder, calculation of motor capacitances for prediction of the voltage across the bearings in machines of inverter-based drive systems, ieee trans. on industry applications, 43 (3), 2007, pp.665-672. doi: 10.1109/tia.2007.895734 [80] b. mirafzal, g. l. skibinski, r. m. tallam, d. w. schlegel, r. a. lukaszewski, universal induction motor model with low-to-high frequency-response characteristics, ieee trans. on industry applications, 43 (5), 2007, pp.1233-1246. doi: 10.1109/tia.2007.904401 [81] o. a. mohammed, s. ganu, n. abed, s. liu, z. liu, high frequency pm synchronous motor model determined by fe analysis, ieee trans. on magnetics, 42 (4), 2006, pp.1291-1294. doi: 10.1109/tmag.2006.872412 [82] c. chen, x. xu, modeling the conducted emi emission of an electric vehicle (ev) traction drive, proc. ieee int. symp. on electromagnetic compatibility, aug. 24-28, 1998, denver, co, usa, vol. 2, pp.796–801. doi: 10.1109/isemc.1998.750301 [83] c. chen, characterizing the generation & coupling mechanisms of electromagnetic interference noise from an electric vehicle traction drive up to microwave frequencies, proc. 15th annual ieee applied power electronics conf. and exposition new orleans, la, usa, 6-10 february 2000, vol. 2, pp.1170–1176. doi: 10.1109/apec.2000.822835 [84] l. arnedo, k. venkatesan, high frequency modeling of induction motor drives for emi and overvoltage mitigation studies, proc. ieee int. conf. on electric machines and drives madison, wi, usa, 1-4 june 2003, vol. 1, pp.468–474. doi: 10.1109/iemdc.2003.1211305 [85] v. venegas, j. l. guardado, e. melgoza, m. hernandez, a finite element approach for the calculation of electrical machine parameters at high frequencies, proc. ieee pes general meeting tampa, fl, usa, 24-28 june 2007, pp. 1-5. doi: 10.1109/pes.2007.386274 [86] g. ala, m. c. di piazza, g. tine, f. viola, g. vitale, numerical simulation of radiated emi in 42 v electrical automotive architectures, ieee trans. magn. 42 (4) (2006), pp. 879-882. doi: 10.1109/tmag.2006.871440 [87] f. perisse, p. werinsky, d. roger, high frequency behaviour of ac machine: application to turn insulation aging diagnostic, proc. ieee int. symp. on electrical insulation, toronto, canada, 11-14 june 2006, 555-559. doi: 10.1109/elinsl.2006.1665379 [88] k. padmanabhan, a. srinivisan, some important aspects in the phenomenon of commutator sparking, ieee trans. on power apparatus and systems, 84 (1965), pp.396-404. doi: 10.1109/tpas.1965.4766211 [89] m. j. b.turner, b. r. g. 
introductory notes for the acta imeko fourth issue 2021
acta imeko issn: 2221-870x december 2021, volume 10, number 4, 1-2
francesco lamonaca1
1 university of calabria, dept. of computer science, modelling, electronic and system, via p. bucci 41c, arcavacata di rende, 87036 (cs), italy
section: editorial
citation: francesco lamonaca, introductory notes for the acta imeko fourth issue 2021, acta imeko, vol. 10, no. 4, article 1, december 2021, identifier: imeko-acta-10 (2021)-04-01
received december 23, 2021; in final form december 23, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: francesco lamonaca, e-mail: editorinchief.actaimeko@hunmeko.org

dear readers,
also in this issue, high-quality and heterogeneous papers are presented, confirming acta imeko as the natural platform for disseminating measurement information and stimulating collaboration among researchers from many different fields. i hope you will enjoy reading them.

as usual, the general track collects contributions that do not relate to a specific event. as editor-in-chief, it is my pleasure to give an overview of these papers, with the aim of encouraging potential authors to consider sharing their research through acta imeko.

an interesting technique is proposed by oleksandr vasilevskyi et al. in the paper 'indicators of reproducibility and suitability for assessing the quality of production services' for estimating the probability of the appearance of defective products, or of the inconsistency of a production service, on the basis of suitability and reproducibility indexes of the production process. the accuracy of the proposed estimation methodology, including the proposed mathematical apparatus, was assessed on the basis of the correctness index, evaluated in accordance with the international standard iso 5725-2:2002.

mohammed alktranee et al., in the paper 'simulation study of the photovoltaic panel under different operation conditions', study the effects of temperature distribution on a photovoltaic panel at different solar radiation values and temperatures under various operating conditions in january and july. a 3d model of the pv panel was simulated using ansys software, based on the various temperature and solar radiation values obtained using mathematical equations.

in cognitive radio systems, estimating the direction of arrival (doa) of the primary user is one of the key issues. to increase the detection probability, multiple sensor antennas are used, and they are analysed by means of a subspace-based technique. in the paper 'an adaptive learning algorithm for spectrum sensing based on direction of arrival estimation in cognitive radio systems', sala surekha et al. consider a wideband spectrum with sub-channels, each equipped with a sensor for doa estimation. in practical spectrum sensing, interference components are also encountered; to avoid this interference at the receiver output, the authors use an adaptive learning algorithm known as the normalized least absolute mean deviation (nlamd) algorithm.

rosario morello et al., in the paper 'design of a non-invasive sensing system for diagnosing gastric disorders', present a smart egg (electrogastrography) sensing system to non-invasively diagnose gastric disorders. in detail, the system records the gastric slow waves by means of skin surface electrodes placed in the epigastric area. cutaneous myoelectrical signals are thus acquired from the body surface in proximity to the stomach, and the electro-gastrographic record is then processed.
according to the diagnostic model designed by the authors, the system estimates specific diagnostic parameters in the time and frequency domains. it uses the discrete wavelet transform to obtain power spectral density diagrams. the frequency and power of the egg waveform and the dominant frequency components are then analysed, and the defined diagnostic parameters are compared with the reference values of a normal egg in order to detect gastric pathologies through the analysis of arrhythmias (tachygastria, bradygastria and irregular rhythm).

for military divers, a robust, secure and undetectable wireless communication system is a fundamental need. wireless intercoms using acoustic waves are currently used; these systems, even if reliable, have the limitation of being easily identifiable and detectable. visible light can pass through sea water, so light can be used to develop short-range wireless communication systems. to realise secure close-range underwater wireless communication, underwater optical wireless communication (uowc) can be a valid alternative to acoustic wireless communication. uowc is not a new idea, but the problem of the presence of sunlight and the possibility of using near-ultraviolet radiation (near-uv) have not yet been adequately addressed in the literature. in military applications, the possibility of using invisible optical radiation can be of great interest. in the paper 'led-to-led wireless communication between divers' by fabio leccese et al., a feasibility study is carried out to demonstrate that uowc can be performed using near-ultraviolet radiation. the proposed system can be useful for wireless voice communications between military divers as well as amateur divers.

moreover, this issue contains extended versions of selected papers presented at some of the most important international metrology events organised in italy: the 2020 imeko tc19 international workshop on metrology for the sea 'learning to measure sea health parameters', guest edited by prof. silvio del pizzo; the international excellence italo gorini ph.d. school, guest edited by prof. pasquale arpaia and dr. umberto cesaro; the 40th measurement day, jointly organised by the italian group of electric and electronic measurements (gmee) and the italian group of mechanical and thermal measurements (gmtt), guest edited by prof. carlo carobbi, prof. nicola giaquinto and prof. gian marco revel; and the xxix italian national congress on mechanical and thermal measurements, guest edited by prof. alfredo cigada and prof. roberto montanini.

the strong italian contribution to this fourth issue reflects the wish of the italian metrology community to support imeko and its journals. i hope that in the near future acta imeko will receive an equally important contribution from the other imeko member countries and beyond.
francesco lamonaca
editor-in-chief

introductory notes for the acta imeko special issue on the 17th imeko technical committee 10 conference 'global trends in testing, diagnostics & inspection for 2030' (2nd conference jointly organised by imeko and eurolab aisbl)
acta imeko issn: 2221-870x september 2021, volume 10, number 3, 3-4
piotr bilski1, lorenzo ciani2
1 warsaw university of technology, ul. nowowiejska 15/19, 00-665 warsaw, poland
2 università degli studi di firenze, p.zza s. marco, 4, 50121 firenze, italy
section: editorial
citation: piotr bilski, lorenzo ciani, introductory notes for the acta imeko special issue on the 17th imeko technical committee 10 conference "global trends in testing, diagnostics & inspection for 2030" (2nd conference jointly organized by imeko and eurolab aisbl), acta imeko, vol. 10, no. 3, article 2, september 2021, identifier: imeko-acta-10 (2021)-03-02
section editor: francesco lamonaca, university of calabria, italy
received september 1, 2021; in final form september 27, 2021; published september 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: piotr bilski, e-mail: pbilski@ire.pw.edu.pl

dear readers,
the area of technical diagnostics is one of the most significant research fields, involving a broad range of measurements and exploiting various sensors, actuators and advanced computing techniques. as we become increasingly surrounded by a growing amount of autonomous machinery, it is crucial to ensure that regular and accurate monitoring is carried out. technical committee 10 (measurement for diagnostics, optimization & control) is responsible for fostering research on such topics through a wide range of activities, including the organisation of annual conferences and workshops aimed at the problems of fault detection, identification and location. the solutions to these problems include an array of algorithms, measurement tools and procedures applicable to individual devices and industrial processes.

the 17th imeko tc10 conference, held in 2020, is a perfect example of this approach. the conference was special for two reasons, the first of which relates to the covid-19 pandemic, which forced us to switch the location, originally planned for dubrovnik, croatia, to the purely virtual world. the event was held online, using various internet technologies. this was entirely new to all involved and forced us to use new channels of communication and information-sharing techniques (e.g. cloud services and teleconferences) on an unprecedented scale. despite these challenges, the online event was met with great enthusiasm by both the participants and the invited guests. the second unique feature of the 2020 conference was that it was the second event co-organised by imeko and eurolab (croatian branch). this broadened the scope of the event to include new specific topics largely related to eurolab's interests.
the main drawback of the conference was the lost opportunity to visit the beautiful city of dubrovnik; however, we hope to be able to organise the conference there in the traditional way in the future.

the theme of the conference, 'global trends in testing, diagnostics & inspection for 2030', was supported by a wide range of topics covered by both the contributed papers and the invited speakers. the problems covered by the speakers included industrial standards (e.g. iso 3452 or iso/iec 17025), computational methods (reinforcement learning, artificial neural networks, etc.), applications (civil engineering, the food industry), advanced measurement equipment (optical sensors, mems solutions) and significant measurement challenges (e.g. uncertainty evaluation). it would appear that the constant advancement in electronics and computer technologies has enabled an increasing number of advanced concepts to be realised.

the corresponding special issue covers ten papers, which can be divided into three sections, each devoted to a different topic. the selection allows the advancements in these research and engineering fields to be assessed.

the section aimed at presenting and solving the general problems and challenges pertaining to technical diagnostics includes three papers. the first, 'fault compensation effect in fault detection and isolation', written by michał bartyś, considers the fault compensation effect in fault detection and isolation, an important issue in the area of model-based process diagnostics. here, the author discusses the application of the process graph to accurately represent the monitored phenomenon, allowing for fault detection based on the residuals and diagnostic matrix analysis. the concept is illustrated with examples of a liquid tank and a closed-loop liquid-level control system. meanwhile, in their paper, 'estimate the useful life for a heating, ventilation, and air conditioning system on a high-speed train using failure models', marcantonio catelani et al. cover the problem of the design and application of failure models. here, the methodology, which exploits the model-based diagnostic approach, is demonstrated on an hvac system (both in simulation and on actual data provided by the manufacturer), with the results demonstrating the ability of the approach to correctly evaluate the state of the monitored object. finally, in their paper, 'integrating maintenance strategies in autonomous production control using a cost-based model', robert glawar et al. present a novel cost-based model approach for manufacturing-process monitoring. here, various maintenance strategies for autonomous production control are presented, preceded by the definition of a cost function for comparing their efficiency, while possible application strategies are also discussed.

the section related to novel measurement and sensing methods for diagnostics includes five papers. the first, 'overview of the modified magnetoelastic method applicability', by tomáš klier et al., is devoted to a specific type of sensor exploiting the magnetoelastic method. this method is used in the field of civil engineering to evaluate the state of buildings or construction operations. an important part of this approach is the coil, whereby the strength of the magnetic field can be evaluated.
both the attendant laboratory and field (bridge structure) tests demonstrated the applicability of the method. the paper entitled 'bringing optical metrology to testing and inspection activities in civil engineering', by luís martins et al., focuses on the area of optical metrology in the civil engineering field. here, the dimensional measurement of concrete structures is supported by specific digital imaging techniques (e.g. cctv cameras or laser interferometry). various applications (e.g. bridge monitoring, sewer inspection, earthquake risk analysis) indicate the importance of this type of sensing technology. elsewhere, in their paper entitled 'vibration-based tool life monitoring for ceramics micro-cutting under various toolpath strategies', zsolt j. viharos et al. discuss a novel method of monitoring the state of ceramics-cutting machinery using vibration analysis. here, both time- and frequency-domain features are exploited in order to evaluate and predict the wear of the micro-cutting tool, with the analysis of the selected cnc machine allowing three stages of device degradation to be determined. in their paper, 'magnetic circuit optimization of linear dynamic actuators', laszlo kazup and angela varadine szarka present the linear braking method used in actuators. here, the method incorporates the design of the magnetic circuit, with the authors presenting the detailed design and the optimisation procedure for the circuit parameters. the attendant simulations demonstrated that it is possible to optimise the flux leakage and the dimensions of the circuit. finally, in their paper entitled 'analysing and simulating electronic devices as antennas', dániel erdősy et al. discuss electromagnetic compatibility (emc) properties in relation to the operation of a complex antenna system. here, the equivalent circuit and the antenna directivity are calculated using simulation tools, while antenna arrays and problems pertaining to emc emission are also considered.

meanwhile, the section focused on computational methods and algorithms in the field of diagnostics includes two papers. the first, 'the improved automatic control points computation for the acoustic noise level audits', by tomáš drábek and jan holub, presents a post-processing method for acoustic noise evaluation to estimate the comfort level of specific human living conditions. here, a control-point localisation method is used to optimise indoor noise measurement, with the attendant algorithm used to identify both long-term stationary and short-term recurring noise. overall, the authors demonstrate how to select the control points, outline various spatial conditions and compare different layouts in terms of level-evaluation accuracy and computation time. the second paper, 'artificial neural network-based detection of gas hydrate formation', written by ildikó bölkény, discusses the application of an artificial neural network to detect and prevent gas hydrate formation in the area of industrial-process diagnostics. here, two network architectures (nnarx and nnoe) are used as regression machines, with the approach tested on actual test equipment for the hydrate-forming process. the results allowed the implemented network architectures to be compared in terms of prediction accuracy.
suffice it to say, we wish to thank all the esteemed authors for delivering interesting, top-quality papers, and all the reviewers who devoted a great deal of time and effort to reading and evaluating the manuscripts. this undoubtedly allowed for preparing the high-quality content of this acta imeko issue. secondly, our gratitude goes to prof. francesco lamonaca, the current editor-in-chief, for his devotion and support during the handling of the papers. we are extremely grateful for the chance to play the role of guest editors and hope that the papers will prove to be both interesting and useful for research activities and practical applications alike.

piotr bilski and lorenzo ciani, guest editors

microwave reflectometric systems and monitoring apparatus for diffused-sensing applications
acta imeko issn: 2221-870x september 2021, volume 10, number 3, 202-208
andrea cataldo1, egidio de benedetto2, raissa schiavoni1, annarita tedesco1, antonio masciullo1, giuseppe cannazza1
1 dept. of engineering for innovation, university of salento, lecce, italy
2 dept. of electrical engineering and information technology, university of naples federico ii, naples, italy
section: research paper
keywords: time-domain reflectometry; industry 4.0; microwave sensing; concrete monitoring; diffused monitoring; dielectric permittivity
abstract: most sensing networks rely on punctual/local sensors; they thus lack the ability to spatially resolve the quantity to be monitored (e.g. a temperature or humidity profile) without relying on the deployment of numerous inline sensors. currently, most quasi-distributed or distributed sensing technologies rely on the use of optical fibre systems. however, these are generally expensive, which limits their large-scale adoption. recently, elongated sensing elements have been successfully used with time-domain reflectometry (tdr) to implement diffused monitoring solutions. the advantage of tdr is that it is a relatively low-cost technology, with adequate measurement accuracy and the potential to be customised to suit the specific needs of different application contexts in the 4.0 era. based on these considerations, this paper addresses the design, implementation and experimental validation of a novel generation of elongated sensing element networks, which can be permanently installed in the systems that need to be monitored and used for obtaining the diffused profile of the quantity to be monitored. three applications are considered as case studies: monitoring the irrigation process in agriculture, leak detection in underground pipes and the monitoring of building structures.
citation: andrea cataldo, egidio de benedetto, raissa schiavoni, annarita tedesco, antonio masciullo, giuseppe cannazza, microwave reflectometric systems and monitoring apparatus for diffused-sensing applications, acta imeko, vol. 10, no. 3, article 1, september 2021, identifier: imeko-acta-10 (2021)-03-01
received july 26, 2021; in final form september 1, 2021; published september 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: italian ministry of universities and research (mur) through the project 'smarter-systems and monitoring apparatus based on reflectometric techniques for enhanced revealing' (poc01-00118), funded under the public call proof of concept d.d. n. 467 of 02/03/2018.
corresponding author: egidio de benedetto, e-mail: egidio.debenedetto@unina.it

1. introduction
monitoring systems represent a key enabler for the 4.0 era, crucial not only for guaranteeing the optimal functioning of the monitored systems but also for timely interventions in case of potential failures [1], [2]. as a result, the combination of the internet of things (iot) and sensing networks has become an irreplaceable tool for achieving ubiquitous monitoring in the 4.0 ecosystem [3]-[8]. however, currently, most sensing or monitoring systems are characterised by point-type sensory information; hence, to monitor large areas, it is necessary to employ a multitude of probes. this drawback can be overcome by using diffused sensing elements (ses or d-ses), which can serve many monitoring functions [10] where the use of point sensors is not recommended. most ses of this type rely on the use of optical fibre systems [11], [12], but optical systems are generally expensive, and this limits their large-scale adoption.

in this paper, the diagnostic technology employed is time-domain reflectometry (tdr), used with elongated ses, which have recently been employed successfully to achieve diffused monitoring [13]-[21]. tdr monitoring systems are relatively low cost, can be customised to suit the specific needs of different applications and achieve good accuracy. for these reasons, this technique represents an interesting monitoring solution when used with specific sensing element networks (sens) that can be permanently embedded into the system to be monitored (stbm) and used throughout the service life of the stbm. thanks to the versatility of tdr, the proposed system can be customised and applied in a considerable number of fields, but this paper focuses on three contexts:
i) localising leaks in underground pipelines (sen-w),
ii) agricultural water management (sen-a) for the optimisation of water resources,
iii) building monitoring (sen-b) through the ex-ante monitoring of concrete curing and the ex-post detection of dielectric anomalies that result from the degradation or stress of the structure.

the paper is organised as follows. section 2 describes the basic theoretical background of tdr, while section 3 describes the design and implementation of the proposed distributed monitoring system. in section 4, the experimental results of the practical implementation of the proposed sens are reported. finally, in section 5, conclusions are drawn and future work is outlined.

2. theoretical background
tdr is an electromagnetic (em) measurement technique typically used for monitoring purposes, such as food analysis [22], cable fault localisation [23]-[26], soil moisture measurements [27], liquid level measurements [28], device characterisation [29], [30], biomedical applications [31] and dielectric spectroscopy [20]. in tdr measurements, the em stimulus is usually a step-like voltage signal that propagates along the se, which is inserted into, or is in contact with, the system under test (sut).
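to make the processing described in the following sections easier to picture, the short sketches interspersed below assume a reflectogram is held as two numpy arrays: an apparent-distance axis and the corresponding reflection-coefficient samples. this representation, and every numerical value in it, is an assumption made for illustration only, not part of the measurement setup described by the authors.

```python
import numpy as np

# hypothetical in-memory representation of a tdr reflectogram: the
# reflection coefficient rho (see (1) below) sampled against the
# apparent distance d_app in metres. values are placeholders, not data.
d_app = np.linspace(0.0, 300.0, 4096)   # apparent-distance axis (m)
v_inc = 1.0                             # incident step amplitude (a.u.)
v_refl = np.zeros_like(d_app)           # reflected amplitudes (a.u.)
rho = v_refl / v_inc                    # reflection-coefficient samples
```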
the signal travels along the se, and it is partially reflected by impedance changes along the line and/or by dielectric permittivity variations. through the analysis of the reflected signal, it is possible to retrieve the desired information on the sut. generally, in tdr measurements, the direct measurement output is the time-domain reflection coefficient ρ, expressed as

$$\rho = \frac{v_{\mathrm{refl}}(t)}{v_{\mathrm{inc}}(t)} , \qquad (1)$$

where $v_{\mathrm{refl}}(t)$ is the amplitude of the reflected signal and $v_{\mathrm{inc}}(t)$ is the amplitude of the incident signal. the value of ρ is represented as a reflectogram, which shows ρ as a function of the travelled apparent distance $d_{\mathrm{app}}$. as is well known, the quantity $d_{\mathrm{app}}$ is related to the actual distance $d_{\mathrm{real}}$ by the following equation:

$$d_{\mathrm{app}} = d_{\mathrm{real}} \cdot \sqrt{\varepsilon_{\mathrm{app}}} = \frac{c \cdot t}{2} , \qquad (2)$$

where $\varepsilon_{\mathrm{app}}$ is the effective dielectric permittivity of the propagating medium (which describes the interaction between the electromagnetic signal and the sut), $c$ is the velocity of light in free space and $t$ is the travel time, i.e. the time that it takes for the em signal to travel back and forth. the signal propagation velocity inside the medium depends on the dielectric properties of the material in terms of the effective relative dielectric permittivity $\varepsilon_{\mathrm{app}}$. if the em signal propagates in a vacuum, then $\varepsilon_{\mathrm{app}} \cong \varepsilon_{r,\mathrm{air}} \cong 1$. however, if the se is inserted in a material different from air, the em signal propagates more slowly, and the effective dielectric constant of the material in which the se is inserted can be evaluated from the reflectogram through the estimation of the apparent length $d_{\mathrm{app}}$:

$$\varepsilon_{\mathrm{app}} = \left( \frac{d_{\mathrm{app}}}{d_{\mathrm{real}}} \right)^{2} . \qquad (3)$$

based on these considerations, using the tdr reflectogram it is possible to estimate the dielectric characteristics of the propagating medium and/or to localise where these dielectric variations occur.

3. system and design implementation
3.1. diffused sensing element
as mentioned in section 1, the present study focuses on three main application fields. in detail, sen-w refers to a leak localisation system for underground water and sewer pipes, sen-a is dedicated to the real-time monitoring of the soil water-content profile in agriculture and, finally, sen-b addresses the use of tdr and elongated ses to evaluate the humidity profile of concrete structures and to identify destructive phenomena early, such as those resulting from rising damp. this section describes in detail the design and implementation of the d-ses, the tdr measuring instruments used for experimental validation and the processing algorithm adopted. before the implementation of the system, in the design phase, full-wave simulations were carried out to identify the optimal se configuration, so as to optimise the sensitivity of the system to changes in the dielectric characteristics of the stbm. the configuration of the diffused se is shown in figure 1: it consists of a coaxial cable and two conductors that run parallel to each other and are mutually insulated by a plastic jacket. the figure also shows the cross-section dimensions of the se. the sensing portion is placed along the direction of the electrical impedance profile under test so as to provide diffused monitoring of the stbm.
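stepping back briefly to section 2, the conversions in (2) and (3) between travel time, apparent distance, permittivity and real distance are each a one-liner. a minimal worked sketch follows; all numbers are hypothetical, as no specific values are given at this point in the text.

```python
import math

C0 = 299_792_458.0  # speed of light in free space, c (m/s)

def apparent_distance(t: float) -> float:
    """apparent distance from the round-trip travel time t, per (2): d_app = c*t/2."""
    return C0 * t / 2.0

def effective_permittivity(d_app: float, d_real: float) -> float:
    """effective permittivity from the apparent and real lengths, per (3)."""
    return (d_app / d_real) ** 2

# hypothetical example: a 30 m se whose end echo arrives after ~329 ns
d_app = apparent_distance(329e-9)             # ~49.3 m apparent length
eps_app = effective_permittivity(d_app, 30)   # ~2.7, a dry-soil-like value
d_real = d_app / math.sqrt(eps_app)           # recovers the 30 m physical length
```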
the em signal propagation occurs between the two conductors and is influenced by the dielectric characteristics of the surrounding material; this aspect is exploited to identify the area in the reflectogram in which dielectric variations are observed. the coaxial cable has the same length as the sensitive portion and is integrated with the ses to calibrate the apparent distance against the real distance. in fact, because the dielectric characteristics of the coaxial cable are known, by propagating the tdr signal along the coaxial cable it is possible to evaluate the actual length of the se from (3), as reported in [15]. this aspect is especially important in practical applications, in which the length of the d-ses is not necessarily known in advance.

figure 1. designed diffused sensing element: two-wire-like conductors and a coaxial cable.

3.2. tdr instruments and the measurement algorithm
for the tdr measurements, two measuring instruments were used: the tdr ri-307usbm and the tdr200. the former generates a pulse signal, and its cost is relatively low (about €800). the latter generates a step-like em signal, and its cost is approximately five times that of the ri-307usbm. both instruments are portable. in the case of the ri-307usbm, however, it is worth mentioning that signal attenuation and dispersion phenomena limit the usability of tdr on long cable systems. it is possible to modify the amplitude and the width of the tdr test pulse, along with the gain of the input-stage amplifier; however, this would require the tdr measurements to be repeated many times to find the right setting. to overcome this issue, an innovative, dedicated algorithm [16] has been developed to automatically optimise these two electrical parameters of the tdr signal as a function of the d-se length, in order to compensate for attenuation and dispersion effects.

4. experimental results
4.1. experimental results for sen-w
water-pipeline monitoring is extremely important given the considerable amount of water lost through leakages and other possible hydraulic failures. figure 2 shows a schematisation of the setup configuration. during the installation phase, the d-se is placed along the pipeline to be inspected, and the connection to the measuring system is ensured through an access point. points b and e indicate the beginning and the end of the d-se, while l indicates the position of a leak. it can be seen that there is a section of cable (running vertically) that allows the se to be connected to the measurement instrument. clearly, variations in dielectric permittivity that may occur along this vertical se section are not of interest for leak localisation. to overcome this issue, this portion of the se is electromagnetically shielded (by means of a metal shield); in this way, the sensing portion starts from point b. another advantage is that, thanks to this shielding, the vertical portion can be of any length, as it does not influence the localisation of the leak (hence, it is not necessary to know a priori the burial depth of the pipe). this approach allows easier identification of the interface corresponding to the start point of the se along the buried pipeline, excluding the connection section from the measured data. for the experiments on sen-w, a dedicated testbed was set up, in which a d-se was buried approximately 30 cm underground.
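before presenting the results, the localisation step applied to the measurements discussed next can be pictured as a short routine: difference the reference and test reflectograms, locate the strongest deviation, and convert its abscissa to a real distance by inverting (2). the following is a sketch under stated assumptions (array inputs shaped as in the earlier snippet, a hypothetical noise threshold), not the authors' actual processing chain.

```python
import numpy as np

def localise_dpv(d_app, rho_ref, rho_test, threshold=0.02):
    """localise a dielectric permittivity variation (dpv, e.g. a leak) by
    differencing a reference (no-leak) and a test reflectogram sampled on
    the same apparent-distance axis d_app. the threshold is a hypothetical
    noise margin, not a value from the paper."""
    diff = rho_test - rho_ref
    if np.max(np.abs(diff)) < threshold:
        return None                       # no significant dpv detected
    idx = int(np.argmax(np.abs(diff)))    # abscissa of the largest deviation
    return float(d_app[idx])              # apparent position of the dpv (m)

def apparent_to_real(d_app_m, eps_app):
    """invert (2): real distance along the pipe from the apparent one,
    using the permittivity estimated via the coaxial-cable calibration."""
    return d_app_m / np.sqrt(eps_app)
```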
figure 2. schematisation of the tdr-based system for sen-w (dimensions not to scale).

the soil was irrigated at predetermined distances from the beginning of the se, in a way similar to a leakage condition in which water escapes from an underground pipe. the main parameter describing the interaction between the signal and the system under test is the relative permittivity ε_r, as a water leak causes a local variation in its value, which is immediately detectable in the measurement. for these measurements, the tdr ri-307usbm was used: because the ses in these applications may be over a hundred metres long, the tdr200 instrument is unsuitable, as its electronics could easily be damaged by the electrostatic discharges that occur when connecting long cables.

figure 3(a) shows the measurement results obtained for a leak at d = 8 m. first, the reference reflectogram was acquired, i.e. with no leak present (red curve). then, the reflectogram in the presence of the emulated leak was acquired. it can be observed that, in the presence of the leak, there is a distinct variation in the output reflectogram. the position of the leak is estimated by applying (2) to the different portions of the reflectogram, in order to identify the abscissa of the minimum corresponding to the leak. it is also interesting to analyse the detection of a leak (or, more generally, a dielectric permittivity variation (dpv)) at long distances (e.g. 200 m), which is when the use of the aforementioned algorithm becomes essential. figure 3(b) shows the tdr curves that are automatically processed; the difference between the test reflectogram and the reference one allows the dpv to be localised. it should be mentioned that, in applying (2), the permittivity of the soil along the se is assumed to be constant. this assumption is acceptable considering that, generally, the variation in permittivity caused by leakages is significantly higher than the natural variation of permittivity in the soil (which may be due to temperature variations or to slightly different soil compaction in different areas).

figure 3. sen-w: a) comparison of the test reflectogram and the reference reflectogram for leak detection; b) localisation of the dielectric permittivity variation (dpv), with the reference reflectogram superimposed on the test reflectogram and the difference between the two reflectograms also shown (red curve).

4.2. experimental results for sen-a
in this application context, monitoring the soil moisture content in agriculture allows the use of water to be optimised, reducing wastage. in particular, the basic idea is to bury the d-ses alongside the cultivations and to carry out tdr measurements to retrieve the actual water-content profile of the soil all along the cultivations. the irrigation systems can then be automatically activated/deactivated according to the actual irrigation needs of the plants; as sketched below, such a trigger can reduce to a simple threshold on the measured apparent length. in addition, through multiplexing systems, up to 512 d-ses could be used simultaneously to perform the widespread monitoring of multiple crop rows. for the experimental tests, a 30-m-long d-se was placed in the soil along the entire crop profile, with the connector emerging from the soil for connection to the tdr measurement instrument.
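as anticipated above, once the apparent length of the buried d-se is tracked, an irrigation trigger reduces to a threshold test. the sketch below normalises the current reading between two calibration readings (dry and fully irrigated); all values and the 0.3 trigger level are hypothetical choices, not parameters from the paper.

```python
def needs_irrigation(d_app_now, d_app_dry, d_app_wet, trigger=0.3):
    """decide whether to irrigate from the apparent length of the buried d-se:
    d_app grows with soil moisture, since a higher eps_app stretches the
    reflectogram. normalising between dry and wet calibration readings
    gives a rough 0..1 moisture index."""
    index = (d_app_now - d_app_dry) / (d_app_wet - d_app_dry)
    return index < trigger

# hypothetical calibration for a 30 m d-se: 45 m apparent length when dry,
# 60 m when fully irrigated; a 49 m reading then calls for irrigation.
print(needs_irrigation(d_app_now=49.0, d_app_dry=45.0, d_app_wet=60.0))  # True
```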
in this case, for the sake of comparison, both the tdr200 and the tdr ri-307usbm were used. the acquired reflectograms were directly related to the impedance profile of the se inserted in the soil. the experimental results were obtained according to the following measurement protocol:
• reflectogram #1: one week after d-se installation;
• reflectogram #2: (post-irrigation) acquired approximately one hour after the irrigation process was finished;
• reflectogram #3: (pre-irrigation) acquired three days after reflectogram #2;
• reflectogram #4: (post-irrigation) acquired approximately 10 hours after the irrigation process was finished;
• reflectogram #5: pre-irrigation measurement.

figure 4(a) and figure 4(b) show the measurements obtained with the tdr200 and the tdr ri-307usbm, respectively. it can be seen that after irrigation (reflectogram #2) the apparent length of the se increases, as the end of the d-se shifts towards a longer distance; this is a result of the increased effective permittivity. from reflectogram #3, acquired after three days and before a new irrigation, it can be seen that the apparent length of the se has decreased. a similar trend occurs for reflectogram #4 and reflectogram #5. based on the overall length of the se, the system is able to discriminate well between different soil moisture conditions, and this can be used as a parameter for activating irrigation. the tdr200 instrument exhibits better performance in estimating dielectric variations; however, its cost is considerably higher than that of the tdr ri-307usbm. in practical applications, the elongated ses will allow a real-time map of the water content of the cultivations to be retrieved, thus allowing automatic, tailor-made interventions, especially in view of the optimal management of the irrigation processes.

figure 4. sen-a: tdr reflectograms acquired with the tdr200 (a) and with the tdr ri-307usbm (b).

4.3. experimental results for sen-b
for sen-b, the goal of the proposed method is twofold: 1) the ex-ante monitoring of concrete curing along a diffused building profile and 2) the ex-post monitoring for the detection of dielectric anomalies resulting from the degradation or stress of the structure. a concrete beam was used as a case study. in order to monitor the water content during the hydration process, three d-ses were inserted in a concrete beam: one at the bottom of the beam, another in the middle and a third at the top. the hardening phase is considered to be completed within the first 28 days, as after this period more than 90 % of the overall mechanical strength has developed. for this reason, the beam was monitored over the 28-day period by acquiring tdr data from the d-ses. figure 5(a) shows some of the tdr reflectograms acquired during the considered period (for clarity of the image, not all the reflectograms are reported); figure 5(b) shows a zoomed image that highlights the trend.

figure 5. sen-b: selected tdr reflectograms for the central d-se over the 28-day observation period (a) and a zoomed image (b).
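the curing analysis boils down to tracking the apparent length between the start and end markers of the d-se (denoted d_B,app and d_E,app just below) day by day, and mapping it to a permittivity via (3). a minimal sketch follows, with hypothetical marker readings and an assumed physical se length.

```python
import numpy as np

def curing_trend(daily_markers, d_real):
    """map daily (d_B_app, d_E_app) reflectogram markers to the effective
    permittivity of the concrete via (3); a decreasing eps_app indicates
    water loss as hydration proceeds. d_real is the known physical length
    of the embedded d-se in metres (an assumed value here)."""
    d_app = np.array([e - b for b, e in daily_markers])
    return (d_app / d_real) ** 2

# hypothetical marker readings over four observation days for a 6 m d-se
markers = [(0.8, 12.6), (0.8, 12.3), (0.8, 12.1), (0.8, 11.9)]
print(curing_trend(markers, d_real=6.0))  # monotonically decreasing eps_app
```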
from the reflectogram, it is possible to identify the beginning and the end of the d-se (denoted by $d_{B,\mathrm{app}}$ and $d_{E,\mathrm{app}}$, respectively) and hence to calculate the apparent length $d_{\mathrm{app}}$ of the d-se. as can be seen, this parameter decreases during the hydration process because $d_{E,\mathrm{app}}$ shifts towards lower values. this indicates that the apparent length $d_{\mathrm{app}}$ of the d-se decreases with the decreasing dielectric permittivity $\varepsilon_{\mathrm{app}}$ of the concrete beam as the hydration process proceeds. in addition, ex-post monitoring through an embedded, low-cost diffused se can promptly detect the deterioration of the structure. in this regard, mechanical tests on the beam were carried out to analyse whether the diffused sensor cable was sensitive to mechanical deformation. the beam was subjected to several tests with concentrated loads varying from 1,000 kgf to 9,000 kgf, in steps of 1,000 kgf. figure 6 illustrates the test setup.

figure 6. experimental setup for the post-curing mechanical tests on the beam.

as explained in section 3, the d-se also includes a coaxial cable, which is sensitive to deformation and/or compression phenomena. during the mechanical tests, tdr measurements were also carried out on the d-se. as is known from theory, the diameters of the inner and outer conductors determine the impedance of the coaxial cable; because of this, a deformation of the cable resulting from degradation phenomena in the structure causes a variation in the electrical impedance, which is immediately identified in the measurements. as shown in figure 7, as the weight applied to the beam increases, the reflection coefficient decreases and the apparent distance increases, indicating that the cable is being deformed and bent. these mechanical tests highlight that continuous ex-post monitoring through permanent d-ses can provide early indications of structural problems. in this way, safety measures can be considered in time, and structural interventions can be performed immediately.

figure 7. a) tdr reflectograms and first derivatives acquired through the coaxial cable of the d-se for the concrete beam under the different compression conditions; b) zoom of the trend as the compression increases.

5. conclusions
in this study, the development and proof of concept of a multi-purpose sen were addressed through the adoption of tdr and d-ses. the proposed system, which overcomes the limitations of traditional punctual monitoring systems, was validated for the localisation of leaks (sen-w), for monitoring the diffused water-content profile of soil in agriculture (sen-a) and for the ex-ante and ex-post monitoring of the hydration process and the diagnostics of destructive phenomena (sen-b). additionally, the proposed monitoring system can easily be extended to other fields. in practical applications, each sen could be managed through a portable device and through a single platform. in addition, the monitored data can be stored in a repository and used for further statistics or future reference, and the output of the sens can be geo-referenced by acquiring the gps coordinates of the locations where the microwave reflectometry (mr) measurements are taken.
5. conclusions
in this study, the development and the proof of concept of a multi-purpose sen were addressed through the adoption of tdr and d-ses. the proposed system, which allows the limitations of traditional punctual monitoring systems to be overcome, was validated for the localisation of leaks (sen-w), for monitoring the diffused profile of the water content of soil in agriculture (sen-a) and for the ex-ante and ex-post monitoring of the hydration process and the diagnostics of destructive phenomena (sen-b). additionally, the proposed monitoring system can be easily extended to other fields. in practical applications, each sen could be managed through a portable device and through a single platform. in addition, the monitored data can be stored in a repository and used for further statistics or future reference, and the output of the sens can be geo-referenced by acquiring the gps coordinates of the location where the mr measurements are taken. the introduction of these strategies would contribute to making the proposed monitoring system evolve into a cyber-physical measurement system [32], [33], fully exploiting the potential of 4.0 technologies.
references
[1] a. sforza, c. sterle, p. d'amore, a. tedesco, f. de cillis, r. setola, optimization models in a smart tool for the railway infrastructure protection, in: critical information infrastructures security. e. luiijf, p. hartel (editors), springer, cham, 2013, pp. 191-196. doi: 10.1007/978-3-319-03964-0_17
[2] p. d'amore, a. tedesco, technologies for the implementation of a security system on rail transportation infrastructures, in: railway infrastructure security. r. setola, a. sforza, v. vittorini, c. pragliola (editors), springer, cham, 2015, isbn: 978-3-319-04425-5, pp. 123-141. doi: 10.1007/978-3-319-04426-2_7
[3] c. scuro, p. f. sciammarella, f. lamonaca, r. s. olivito, d. l. carni, iot for structural health monitoring, ieee instrumentation & measurement magazine 21 (2018) pp. 4-14. doi: 10.1109/mim.2018.8573586
[4] e. sisinni, a. saifullah, s. han, u. jennehag, m. gidlund, industrial internet of things: challenges, opportunities, and directions, ieee transactions on industrial informatics 14 (2018) pp. 4724-4734. doi: 10.1109/tii.2018.2852491
[5] b. chen, j. wan, l. shu, p. li, m. mukherjee, b. yin, smart factory of industry 4.0: key technologies, application case, and challenges, ieee access 6 (2018) pp. 6505-6519. doi: 10.1109/access.2017.2783682
[6] h. xu, w. yu, d. griffith, n. golmie, a survey on industrial internet of things: a cyber-physical systems perspective, ieee access 6 (2018) pp. 78238-78259. doi: 10.1109/access.2018.2884906
[7] f. lamonaca, c. scuro, d. grimaldi, r. s. olivito, p. f. sciammarella, d. l. carnì, a layered iot-based architecture for a distributed structural health monitoring system, acta imeko 8 (2019) 2, pp. 45-52. doi: 10.21014/acta_imeko.v8i2.640
[8] t. addabbo, a. fort, m. mugnaini, l. parri, s. parrino, a. pozzebon, v. vignoli, a low power iot architecture for the monitoring of chemical emissions, acta imeko 8 (2019) 2, pp. 53-61. doi: 10.21014/acta_imeko.v8i2.642
[9] a. bernieri, d. capriglione, l. ferrigno, m. laracca, design of an efficient mobile measurement system for urban pollution monitoring, acta imeko 1 (2012) 1, pp. 77-84. doi: 10.21014/acta_imeko.v1i1.27
[10] g. y. chen, x. wu, e. p. schartner, s. shahnia, n. bourbeau hébert, l. yu, x. liu, a. v. shahraam, t. p. newson, h. ebendorff-heidepriem, h. xu, d. g. lancaster, t. m. monro, short-range non-bending fully distributed water/humidity sensors, journal of lightwave technology 37 (2019) pp. 2014-2022. doi: 10.1109/jlt.2019.2897346
[11] l. zhao, j. wang, z. li, m. hou, g. dong, t. liu, t. sun, k. t. v. grattan, quasi-distributed fiber optic temperature and humidity sensor system for monitoring of grain storage in granaries, ieee sensors journal 20 (2020) pp. 9226-9233. doi: 10.1109/jsen.2020.2989163
[12] d.-s. xu, l.-j. dong, l. borana, h.-b. liu, early-warning system with quasi-distributed fiber optic sensor networks and cloud computing for soil slopes, ieee access 5 (2017) pp. 25437-25444. doi: 10.1109/access.2017.2771494
[13] a. cataldo, e. de benedetto, g. cannazza, a. masciullo, n. giaquinto, g. d'aucelli, n. costantino, a. de leo, m. miraglia, recent advances in the tdr-based leak detection system for pipeline inspection, measurement 98 (2017) pp. 347-354. doi: 10.1016/j.measurement.2016.09.017
[14] a. cataldo, e. de benedetto, g. cannazza, n. giaquinto, m. savino, f. adamo, leak detection through microwave reflectometry: from laboratory to practical implementation, measurement 47 (2014) pp. 963-970. doi: 10.1016/j.measurement.2013.09.010
[15] n. giaquinto, g. d'aucelli, e. de benedetto, g. cannazza, a. cataldo, e. piuzzi, a. masciullo, criteria for automated estimation of time of flight in tdr analysis, ieee transactions on instrumentation and measurement 65 (2016) pp. 1215-1224. doi: 10.1109/tim.2015.2495721
[16] a. cataldo, e. de benedetto, g. cannazza, e. piuzzi, n. giaquinto, embedded tdr wire-like sensing elements for monitoring applications, measurement 68 (2015) pp. 236-245. doi: 10.1016/j.measurement.2015.02.050
[17] a. cataldo, e. de benedetto, a. masciullo, g. cannazza, a new measurement algorithm for tdr-based localization of large dielectric permittivity variations in long-distance cable systems, measurement 174 (2021), art. 109066. doi: 10.1016/j.measurement.2021.109066
[18] a. walczak, m. lipiński, g. janik, application of the tdr sensor and the parameters of injection irrigation for the estimation of soil evaporation intensity, sensors 21 (2021), art. 2309. doi: 10.3390/s21072309
[19] d.-j. kim, j.-d. yu, y.-h. byun, horizontally elongated time domain reflectometry system for evaluation of soil moisture distribution, sensors 20 (2020) pp. 1-17. doi: 10.3390/s20236834
[20] c.-p. lin, y. j. ngui, c.-h. lin, multiple reflection analysis of tdr signal for complex dielectric spectroscopy, ieee transactions on instrumentation and measurement 67 (2018) pp. 2649-2661. doi: 10.1109/tim.2018.2822404
[21] a. cataldo, e. de benedetto, g. cannazza, e. piuzzi, e. pittella, tdr-based measurements of water content in construction materials for in-the-field use and calibration, ieee transactions on instrumentation and measurement 67 (2018) pp. 1230-1237. doi: 10.1109/tim.2017.2770778
[22] e. iaccheri, a. berardinelli, g. maggio, t. g. toschi, l. ragni, affordable time-domain reflectometry system for rapid food analysis, ieee transactions on instrumentation and measurement 70 (2021) pp. 1-7. doi: 10.1109/tim.2021.3069050
[23] s. m. kim, j. h. sung, w. park, j. h. ha, y. j. lee, h. b. kim, development of a monitoring system for multichannel cables using tdr, ieee transactions on instrumentation and measurement 63 (2014) pp. 1966-1974. doi: 10.1109/tim.2014.2304353
[24] g.-y. kim, s.-h. kang, w. nah, novel tdr test method for diagnosis of interconnect failures using automatic test equipment, ieee transactions on instrumentation and measurement 66 (2017) pp. 2638-2646. doi: 10.1109/tim.2017.2712978
[25] c.-k. lee, s. j. chang, a method of fault localization within the blind spot using the hybridization between tdr and wavelet transform, ieee sensors journal 21 (2021) pp. 5102-5110. doi: 10.1109/jsen.2020.3035754
[26] c. m. furse, m. kafal, r. razzaghi, y.-j. shin, fault diagnosis for electrical systems and power networks: a review, ieee sensors journal 21 (2021) pp. 888-906. doi: 10.1109/jsen.2020.2987321
[27] s. lee, h.-k. yoon, hydraulic conductivity of saturated soil medium through time-domain reflectometry, sensors 20 (2020) pp. 1-18. doi: 10.3390/s20237001
[28] a. cataldo, l. tarricone, m. vallone, f. attivissimo, a. trotta, uncertainty estimation in simultaneous measurements of levels and permittivities of liquids using tdr technique, ieee transactions on instrumentation and measurement 57 (2008) pp. 454-466. doi: 10.1109/tim.2007.911700
[29] h. yang, h. wen, tdr prediction method for pim distortion in loose contact coaxial connectors, ieee transactions on instrumentation and measurement 68 (2019) pp. 4689-4693. doi: 10.1109/tim.2019.2900963
[30] g. robles, m. shafiq, j. m. martínez-tarifa, multiple partial discharge source localization in power cables through power spectral separation and time-domain reflectometry, ieee transactions on instrumentation and measurement 68 (2019) pp. 4703-4711. doi: 10.1109/tim.2019.2896553
[31] r. schiavoni, g. monti, e. piuzzi, l. tarricone, a. tedesco, e. de benedetto, a. cataldo, feasibility of a wearable reflectometric system for sensing skin hydration, sensors 20 (2020), art. 2833. doi: 10.3390/s20102833
[32] s. grazioso, a. tedesco, m. selvaggio, s. debei, s. chiodini, e. de benedetto, g. di gironimo, a. lanzotti, design of a soft growing robot as a practical example of cyber-physical measurement systems, proc. of the ieee conf. on metrology for industry 4.0 and iot, rome, italy, 7-9 june 2021, pp. 23-26. doi: 10.1109/metroind4.0iot51437.2021.9488477
[33] s. grazioso, a. tedesco, m. selvaggio, s. debei, s. chiodini, towards the development of a cyber-physical measurement system (cpms): case study of a bioinspired soft growing robot for remote measurement and monitoring applications, acta imeko 10 (2021) 2, pp. 103-109. doi: 10.21014/acta_imeko.v10i2.1123

low-power and high-speed approximate multiplier using higher order compressors for measurement systems
acta imeko, issn: 2221-870x, june 2022, volume 11, number 2, 1-6
m. v. s. ram prasad, b. kushwanth, p. r. d. bharadwaj, p. t. sai teja
1 department of eece, gitam (deemed to be university), visakhapatnam, ap, india
section: research paper
keywords: approximate computing; approximate compressors; digital circuits; partial products; measurement systems
citation: m. v. s. ram prasad, b. kushwanth, p. r. d. bharadwaj, p. t. sai teja, low-power and high-speed approximate multiplier using higher order compressors for measurement systems, acta imeko, vol. 11, no. 2, article 36, june 2022, identifier: imeko-acta-11 (2022)-02-36
section editor: md zia ur rahman, koneru lakshmaiah education foundation, guntur, india
received january 30, 2022; in final form april 22, 2022; published june 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: m. v. s. ram prasad, e-mail: mvsrprasadgitam@gmail.com
abstract: at present, approximate multipliers are used in image processing applications. these approximate multipliers are designed with the help of higher order compressors to decrease the number of addition stages involved in the reduction of the partial products. approximate computing is the best technique to improve the power efficiency and shorten the delay path, and multiple compressors have been designed with it. in this paper, a 10:2 compressor is designed and implemented in a 32-bit multiplier, which is compared with the exact 32-bit multiplier. the proposed higher-bit compressors, along with lower-bit compressors, are implemented to reduce the delay of the design. this type of digital circuit has much significance in measurement technologies, as it enables fast and accurate measurements. with the use of approximate compressors the result may be inexact, but the power consumption and the delay are reduced. hence, the proposed multipliers are only meant for digital signal processing applications in which two or more signals need to be combined. the proposed multiplier, used for implementing an fir filter, resulted in a 27 ns delay, which is far better than the 119 ns of the exact multiplier. these multipliers are also used in image processing applications, and the psnr of the processed image has been evaluated.
1. introduction
in digital signal processing (dsp) applications, multipliers play an important role in complex arithmetic operations, and approximate computing methods are used to lessen the power consumption. approximate multipliers therefore came into existence, and they are widely used in dsp applications to reduce the power consumption [1]-[4]: the complex multipliers used in dsp applications are replaced with these approximate multipliers, which can support operations such as filtering, convolution and correlation of digital signals. to perform these complex multiplications, multipliers, adders and shifters are widely used; designing the multiplier is the hardest part of a dsp design, since multipliers consume more power than the adders and shifters. the multiplication process involves the generation of the partial products, their alignment, their reduction and, finally, the addition of all the partial products; reducing the partial-product count is both time and power consuming. multiple techniques have been implemented to overcome this issue, among which the approximate computing technique gives the best results. hence, approximate adders came into existence, and compressors were then designed for the addition of multiple bits [5]-[8]. higher order compressors are required to lessen the delay of the addition process, and approximate multipliers designed with such approximate compressors improve the performance of dsp applications. exact multipliers require a huge delay to obtain exact outputs, and their one major defect is that they cannot be optimized further by applying such techniques [9]-[11]. image processing and signal processing applications, however, accept erroneous data and deliver the modulated signals; hence, approximate compressors and multipliers came into existence, lessening the power consumption along with the delay caused by the carry bits in the addition process. with these approximate multipliers, approximate results are obtained, and these results are sufficient for
combining two signals. the approximate compressors are designed to reduce the computation involved in the addition process [12]-[15]: a greater number of inputs are added to produce only two outputs, called the sum and carry bits, plus one further output called the carry-out. in general, 4:2 compressors are widely used, and these give better results than the previous architectures; with these approximate compressors, the partial products are reduced and the adder (gate) count is also reduced. approximate compressors are widely used [16], [17] in multipliers to reduce the power and improve the performance of the circuit, but such approximate multipliers need larger compressors to improve their performance further. higher order compressors therefore came into existence, and 10:2 compressors were designed. the lower order compressors require more power consumption, and approximate multipliers using higher order compressors also yield better results [18], [19] than approximate multipliers using lower order compressors. in this paper, a new 32-bit approximate multiplier designed with higher order compressors is presented, achieving low power consumption and a smaller delay along with a low error rate.
2. design of compressors
a compressor is a combinational circuit having multiple inputs and multiple outputs: the outputs consist of one sum bit and one carry bit, along with multiple carry-propagating bits generated according to the input bit length [6], as shown in figure 1. the compressors add multiple bits of the same bit length and can also add inputs of different bit lengths. these compressors are used to decrease the partial products generated in the multiplication process: they compress multiple partial products into two partial outputs along with a carry-propagating bit. according to the multiplier bit length, these compressors are designed to reduce the partial-product count [9], and thanks to them the addition of the partial products is done easily.
2.1. 4:2 compressor
the basic 4:2 compressor consists of 4 inputs and gives two outputs, known as the sum and carry bits; along with these input and output pins, one more input pin, called the carry-in, is added to the compressor, and it gives one more output, called the carry-out, as shown in figure 2. the carry-out generated by the compressor is propagated to the next bit position. hence, these compressors generally have multiple inputs and multiple outputs, related by

x1 + x2 + x3 + x4 + cin = sum + 2 · (carry + cout) . (1)

figure 1. n:2 compressors schematic diagram.
figure 2. 4:2 compressor schematic diagram.
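equation (1) states that a 4:2 compressor preserves the arithmetic weight of its inputs. the following sketch, a behavioural model built from two cascaded full adders (an assumption, not the paper's gate-level netlist), verifies the identity exhaustively.

```python
# minimal sketch (assumed gate-level model, not the authors' netlist): an exact
# 4:2 compressor as two cascaded full adders, checked against eq. (1):
# x1 + x2 + x3 + x4 + cin = sum + 2*(carry + cout).
from itertools import product

def full_adder(a, b, c):
    s = a ^ b ^ c
    co = (a & b) | (c & (a ^ b))
    return s, co

def compressor_4_2(x1, x2, x3, x4, cin):
    s1, cout = full_adder(x1, x2, x3)   # cout depends only on x1..x3 here
    s, carry = full_adder(s1, x4, cin)
    return s, carry, cout

# exhaustive check of the weight equation over all 32 input combinations
for x1, x2, x3, x4, cin in product((0, 1), repeat=5):
    s, carry, cout = compressor_4_2(x1, x2, x3, x4, cin)
    assert x1 + x2 + x3 + x4 + cin == s + 2 * (carry + cout)
print("eq. (1) holds for all 32 input combinations")
```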
2.2. design of 10:2 compressor
the higher order compressor is implemented in the proposed approximate multiplier for partial-product reduction and a lower adder count; in general, designs based on lower order adders need a huge adder count and produce high power consumption [20]. here, in the proposed higher order compressors, xor gates and multiplexers (mux) are used to reduce the operation time along with the power consumption, as shown in figure 3. the higher order compressors replace the normal adders without any disturbance of their truth tables. in approximate compressors, the output is obtained based on the input combinations: the output is either taken directly from an input or obtained with a small calculation, whereas in exact compressors the output is calculated exactly by means of multiple gates and equations. hence, in the approximate compressors, the performance is improved in terms of delay and power. with the same approach, a 4:2 compressor has been designed, as shown in figure 4; the 4:2 compressor is the basic compressor used in designing every multiplier. in the proposed design, the delay is reduced because the carry signals are terminated and the carry is not propagated to the next bits. the proposed design is implemented with xor gates and multiplexers to reduce the delay and power consumption. an 8:2 approximate compressor is implemented in the same way; the xor-mux implementation of the 8:2 compressor is shown in figure 5.
while using higher order compressors, a greater number of bits are processed at the same time, which reduces the power consumption and thereby improves the performance of the circuit. with approximate computing, errors occur more often because of the suppression of the carry bits; nevertheless, the lower order compressors consume more power while still producing erroneous outputs, because adding multiple bits at the same time with a greater number of lower order circuits increases both the power consumption and the delay. to overcome this issue, a higher order compressor is designed to perform the multi-bit addition. the 10:2 compressor has 10 inputs and gives only 2 outputs [22], [23]. this compressor comprises fourteen xor gates and six multiplexers: each xor gate receives two primary inputs and generates a single output; the outputs of the xor gates propagate to the multiplexers, the final multiplexer generates the carry bit, and the final xor gate generates the sum output.
figure 3. conventional implementation of 3:2 compressors.
figure 4. block diagram of 4:2 compressor using xor-mux.
figure 5. block diagram of proposed 10:2 compressor based on xor-mux modules.
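since two output bits can encode counts up to 3 only, any 10:2 compressor necessarily approximates larger ones-counts. the following behavioural sketch, an assumption standing in for the xor-mux netlist of figure 5, clips the count and tabulates the resulting error distribution over all 1024 input patterns.

```python
# behavioural sketch (an assumption, not the authors' xor-mux netlist): a 10:2
# compressor must encode the ones-count of 10 inputs into just a sum and a
# carry bit; this model clips the count and measures the resulting errors.
from itertools import product
from collections import Counter

def approx_10_2(bits):
    """approximate 10:2 compressor: encode min(count, 3) as (carry, sum)."""
    n = min(sum(bits), 3)          # clipping models the dropped carries
    return (n >> 1) & 1, n & 1     # carry (weight 2), sum (weight 1)

errors = Counter()
for bits in product((0, 1), repeat=10):
    carry, s = approx_10_2(bits)
    errors[sum(bits) - (2 * carry + s)] += 1

for err, count in sorted(errors.items()):
    print(f"error {err:2d}: {count} of 1024 input patterns")
# the error statistics explain why such multipliers suit error-tolerant dsp
# and image processing rather than exact arithmetic.
```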
3. design of 32-bit approximate multiplier
to obtain exact multiplication, exact compressors are used in the multiplier design process, whereas approximate multipliers use approximate compressors wherever a complex multiplication process is implemented; such approximate compressors are suitable for dct applications like image processing and signal processing. here, a 32-bit approximate multiplier is implemented: the higher order 10:2 compressor is designed and used in the 32-bit approximate multiplier, and both higher order and lower order compressors are combined to reduce the adder count and the delay of the multiplier. in exact multipliers, the output is calculated by using normal adders, which give accurate results but consume more power. in the approximate multiplier, higher order circuits are used to reduce the partial-product generation and to reduce the delay by neglecting the carry propagation to the next bits; hence, the power consumption is reduced in approximate multipliers compared with exact multipliers. the 8-bit approximate multiplier with its reduction stages is shown in figure 6. the proposed higher order approximate multiplier uses a higher compressor, the 10:2 compressor, along with lower order compressors such as the 8:2 and 4:2 compressors; the proposed higher order compressors are designed with multiplexers and xor gates only. the proposed multiplier has only 3 addition stages, which is far fewer than in multiplier designs that use only lower order compressors. this clearly shows that the previous approximate and exact multipliers are not sufficient for designing fir filters.
4. simulation results
in this section, the proposed multiplier is implemented in the xilinx ise design suite with the help of verilog hdl. figure 7 shows the simulation results obtained in the xilinx isim simulator, where the inputs are a and b and the output of the exact multiplier is obtained at the y signal. in the proposed design flow, the logic is converted into luts (look-up tables), which are pre-existing resources in the xilinx tools, and the reports show how many luts and iobs are utilized by the design. the proposed multiplier is used in fir filters to multiply the message signal and the carrier signal; generally, exact multipliers are used for this process, which increases the power consumption and the delay, so the proposed approximate multiplier is used in fir filter applications to remove these extra costs. the technology schematic of the 32-bit exact multiplier is shown in figure 8. figure 9 shows the simulation results of the proposed 32-bit approximate multiplier; these results are quite different from those of the exact multiplier because of the approximate compressors used in the proposed design, which is completely based on approximate computing: the output is calculated approximately to reduce the power consumption and delay. table 1 shows a comparison of 8, 16 and 32-bit exact and approximate multipliers, and table 2 shows a comparison of 32-bit exact and approximate multipliers in fir filters.
figure 6. 8-bit approximate multiplier with reduction stages.
figure 7. simulation result of the 32-bit exact multiplier.
figure 8. technology schematic of the 32-bit exact multiplier.
figure 9. simulation result of the 32-bit approximate multiplier.
table 1. comparison of 8, 16 and 32-bit exact and approximate multipliers.
  multiplier            delay (ns)   lut count
  exact 8-bit              7.664        124
  approximate 8-bit        2.969         84
  exact 16-bit            12.007        551
  approximate 16-bit       4.689        242
  exact 32-bit            20.55        2255
  approximate 32-bit      18.898       1705
table 2. comparison of 32-bit exact and approximate multipliers in fir filters.
  filter        delay (ns)
  accurate         119
  approximate       27
5. applications
in this work, a modified 10:2 compressor based on xor-mux modules is implemented, and a 32-bit multiplier is designed using this compressor; the proposed design obtains better results. the proposed multiplier is used in multimedia processing for the multiplication of 2 different images, after which the peak signal-to-noise ratio (psnr) is evaluated. data transmission in digital signal processing applications requires convolution and correlation processes rather than actual multiplication; hence, although the proposed approximate multiplier does not achieve 100 % accuracy, it is well suited for the convolution and correlation processes.
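the psnr figure of merit used above can be computed as follows; this sketch is an assumed metric implementation run on synthetic images, not the authors' test bench.

```python
# illustrative sketch (assumed metric implementation): psnr between a reference
# image and the image produced with the approximate multiplier.
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """peak signal-to-noise ratio in db for 8-bit images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64))                              # stand-in image
approx = np.clip(ref + rng.integers(-2, 3, size=ref.shape), 0, 255)    # small errors
print(f"psnr = {psnr(ref, approx):.1f} db")
# a high psnr indicates that the errors introduced by the approximate
# compressors are visually negligible in multimedia applications.
```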
6. conclusion
approximate 32-bit multipliers have been implemented using the modified 10:2 compressor. the approximate multipliers provide better results when compared with the exact multipliers, and the proposed multiplier achieves better psnr values than the previous designs. the accuracy of the processed images shows that the proposed multipliers are very effective compared to the exact multipliers. the proposed approximate multiplier is best suited for multimedia applications where two or more signals must be combined without an exact output; these approximate multipliers are intended for multimedia and signal processing applications only. overall, the proposed approximate multiplier achieves better results than the previous architectures.
references
[1] j. han, m. orshansky, approximate computing: an emerging paradigm for energy-efficient design, 2013 18th ieee european test symposium (ets), 2013, pp. 1-6. doi: 10.1109/ets.2013.6569370
[2] k. roy, a. raghunathan, approximate computing: an energy-efficient computing technique for error resilient applications, 2015 ieee computer society annual symposium on vlsi, 2015, pp. 473-475. doi: 10.1109/isvlsi.2015.130
[3] s. surekha, m. z. ur rahman, n. gupta, a low complex spectrum sensing technique for medical telemetry system, journal of scientific and industrial research, vol. 80, no. 5, 2021, pp. 449-456. online [accessed 27 june 2022] http://op.niscpr.res.in/index.php/jsir/article/view/41462
[4] v. sze, y.-h. chen, t.-j. yang, j. emer, efficient processing of deep neural networks: a tutorial and survey, proc. of the ieee, vol. 105, no. 12, 2017. doi: 10.1109/jproc.2017.2761740
[5] j. emer, v. sze, y.-h. chen, hardware architectures for neural networks, tutorial, international symposium on computer architecture, 2017. online [accessed 27 june 2022] http://eyeriss.mit.edu/tutorial-previous.html
[6] nagesh mantravadi, md zia ur rahman, sala surekha, navarun guptha, spectrum sensing using energy measurement in wireless telemetry networks using logarithmic adaptive learning, acta imeko, vol. 11, no. 1, 2022, pp. 1-7. doi: 10.21014/acta_imeko.v11i1.1231
[7] i. qiqieh, r. shafik, g. tarawneh, d. sokolov, a. yakovlev, energy-efficient approximate multiplier design using bit significance driven logic compression, 21st design, automation & test in europe conference & exhibition, lausanne, switzerland, 27-31 march 2017. doi: 10.23919/date.2017.7926950
[8] t. yang, t. ukezono, t. sato, a low-power high-speed accuracy controllable approximate multiplier design, 23rd asia and south pacific design automation conference, 2018. doi: 10.1109/aspdac.2018.8297389
[9] d. esposito, d. de caro, e. napoli, n. petra, a. g. m. strollo, on the use of approximate adders in carry-save multiplier-accumulators, international symposium on circuits and systems, 2017. doi: 10.1109/iscas.2017.8050437
[10] m. z. u. rahman, s. surekha, k. p. satamraju, s. s. mirza, a. lay-ekuakille, a collateral sensor data sharing framework for decentralized healthcare systems, ieee sensors journal, vol. 21, no. 24, pp. 27848-27857, 2021. doi: 10.1109/jsen.2021.3125529
[11] r. marimuthu, d. bansal, s. balamurugan, p. s. mallick, design of 8:4 and 9:4 compressor for high speed multiplication, american journal of applied sciences, 10(8), 2013, p. 893. doi: 10.3844/ajassp.2013.893.900
[12] b. silveira, g. paim, b. abreu, m. grellert, c. m. diniz, e. a. c. da costa, s. bampi, power efficient sum of absolute differences architecture using adder compressors for integer motion estimation design, ieee transactions on circuits and systems i: regular papers, 64(12), 2017, pp. 326-337. doi: 10.1109/tcsi.2017.2728802
[13] t. schiavon, g. paim, m. fonseca, e. costa, s. almeida, exploiting adder compressors for power-efficient 2-d approximate dct, 2016 ieee 7th latin american symposium on circuits & systems (lascas), 2016, pp. 3183-3186. doi: 10.1109/lascas.2016.7451090
[14] m. v. s. ramprasad, pradeep vinaik kodavanti, design of high speed and area efficient 16-bit mac architecture using hybrid adders for sustainable applications, jge, vol. 10, issue 11, nov. 2020.
[15] m. v. s. ramprasad, b. suribabu naick, zaamin zainuddin aarif, design and implementation of high speed 16-bit approximate multiplier, ijitee, vol. 8, issue 4, feb. 2019.
[16] a. momeni, j. han, p. montuschi, f. lombardi, design and analysis of approximate compressors for multiplication, ieee trans. on computers, vol. 64, no. 4, pp. 984-994, 2015. doi: 10.1109/tc.2014.2308214
[17] c. liu, j. han, f. lombardi, a low-power, high-performance approximate multiplier with configurable partial error recovery, proc. of ieee design, automation & test in europe conference & exhibition (date), 2014. doi: 10.7873/date.2014.108
[18] g. zervakis, k. tsoumanis, s. xydis, d. soudris, k. pekmestzi, design-efficient approximate multiplication circuits through partial product perforation, ieee trans. on very large scale integration (vlsi) systems, vol. 24, no. 10, 2016, pp. 3105-3117. doi: 10.1109/tvlsi.2016.2535398
[19] a. tarannum, m. z. ur rahman, t. srinivasulu, an efficient multi-mode three phase biometric data security framework for cloud computing-based servers, international journal of engineering trends and technology, 68 (9), 2020, pp. 10-17. doi: 10.14445/22315381/ijett-v68i9p203
[20] a. cilardo, d. de caro, n. petra, f. caserta, n. mazzocca, a. g. m. strollo, e. napoli, high speed speculative multipliers based on speculative carry-save tree, ieee transactions on circuits and systems i, vol. 61, no. 12, 2014. doi: 10.1109/tcsi.2014.2337231
[21] j. liang, j. han, f. lombardi, new metrics for the reliability of approximate and probabilistic adders, ieee trans. on computers, vol. 62, no. 9, 2013, pp. 1760-1771. doi: 10.1109/tc.2012.146
[22] p. kulkarni, p. gupta, m. d. ercegovac, trading accuracy for power in a multiplier architecture, j. low power electron., vol. 7, no. 4, 2011, pp. 490-501. doi: 10.1109/vlsid.2011.51
[23] c.-h. lin, c. lin, high accuracy approximate multiplier with error correction, proc. ieee 31st int. conf. computer design, sep. 2013, pp. 33-38. doi: 10.1109/iccd.2013.6657022

toward a metrology-information layer for digital systems
acta imeko, issn: 2221-870x, march 2023, volume 12, number 1, 1-4
mark j. kuster1
1 independent researcher and consultant, dumas, texas, usa
section: research paper
keywords: m-layer; quantities; measurement units; measurement information infrastructure (mii); digital metrology
citation: mark j. kuster, toward a metrology-information layer for digital systems, acta imeko, vol. 12, no. 1, article 18, march 2023, identifier: imeko-acta-12 (2023)-01-18
section editor: daniel hutzschenreuter, ptb, germany
received november 21, 2022; in final form january 7, 2023; published march 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: mark j. kuster, e-mail: mjk@ieee.org
abstract: recent work has proposed a metrology-information layer (m-layer) to support digital systems with quantities and units by addressing the difficulties conventional quantity-unit systems pose for digitalization. this paper reports the work toward developing the m-layer's current abstract conceptualization into a concrete model, working prototype, and demonstration software, with the final goal to create a fair (findable, accessible, interoperable, reusable) resource.
1. introduction
the international system of units and quantities [1], [2] and documents such as [3] define the basic references and nomenclature that support most world measurements and quantitative specifications. this system, though not comprehensive, has served science and commerce well. metrology's digital transformation, as bipm has recognized [4], requires a digital system to represent quantities and units. simple digitization, however, does not suffice for digital transformation: to digitize, digitalize, and transform, metrology should rethink its practices from the ground up in order to identify and eliminate the suboptimal pragmatic elements that, if propagated into automated systems, would undercut the full gains that digital transformation promises. quantities and units lie at the ground level, so we start there. digital quantity-unit systems face multiple realities (which, if accounted for from the beginning, will smooth and enhance digital transformation), including
1. both si and non-si measurement units,
2. exceptional measurement scales,
3. measurand ambiguity that challenges automated data consumption,
4. nonlinear unit conversions,
5. restricted operations by scale type.
in [5], the authors proposed a metrology-information layer (an m-layer) to standardize a universal digital quantities-and-units system without altering the human systems in use. conceptually, the m-layer generalizes the measurement "unit" (and the vim's "reference" [6]) to "scale" and similarly generalizes "quantity" to "aspect", as denoted in [7]. the m-layer would allow any measurement unit (reality 1) on any scale (reality 2), introduce an aspect identifier to disambiguate quantities and facilitate machine-actionable (ma) documents (reality 3), provide for arbitrary conversion functions between scales (reality 4), and capture the meaningful operations among scales (reality 5). this generalization unites all digital measurement data under a single methodology. without such innovations, metrology laboratories and other measurement producers and consumers will inevitably encounter measurements that require ad hoc data in digital documents that their customers' software may or may not consume automatically, not to mention the rest of the world. subsequent work [8] has sketched an m-layer structure but published no detailed models. this paper presents work on an m-layer prototype model in section 2. section 3 discusses some results with example applications, and section 4 concludes with a summary and indicates prospective future work.
2. m-layer modeling
in human-readable documents, quantity values appear with two components: q, the numeric value, and [q], the unit symbol, as in 2.446 mm for, say, an outside diameter. people rely on a textual quantity description or on the context to determine the actual measurand. because that methodology fails for machine processing, the m-layer would extend the representation with a third element that uniquely identifies the aspect (generalized quantity kind) in ma documents as

q[Q] → q[Q]⟨q⟩ (1)

or, in our example, 2.446 mm ⟨length⟩ [5], where ⟨length⟩ represents an aspect-identifying code discussed later. however, machines do not require multiple units per quantity, base-derived unit distinctions, or prefixes for internal processing or communication. therefore, an m-layer aspect may associate with one and only one unprefixed unit, in which case the aspect would uniquely determine the unit and a measurement-software application might drop the unit entirely from the data. this means digital systems may simplify the data and carry q in computations without ambiguity or loss of generality. in our example, the data would simply contain 2.446 × 10⁻³ ⟨length⟩, assuming the m-layer documentation declares the si9 [1] meter, e.g., as the m-layer length unit. the data model should then allow digital systems to render this in the expected form (outside diameter, 2.446 mm) using any unit and aspect alias the user prefers. the m-layer representation suffices for the processing of digital measurement data, including all calculations, uncertainty propagation, etc.
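the representation in equation (1) can be made concrete with a small sketch; the following is an assumed data model for illustration, not the published m-layer code, and the aspect id string is hypothetical.

```python
# minimal sketch (assumed data model): a quantity value carried as numeric value
# plus aspect id, with the unit implied by the aspect's single declared m-layer
# unit (here the si9 metre for <length>).
from dataclasses import dataclass

@dataclass(frozen=True)
class MLayerValue:
    value: float      # numeric value in the aspect's declared m-layer unit
    aspect_id: str    # unique aspect identifier, e.g. "length"

    def render(self, factor: float, unit_symbol: str, label: str = "") -> str:
        """render in a user-preferred unit; factor converts from the m-layer unit."""
        prefix = f"{label}, " if label else ""
        return f"{prefix}{self.value / factor:g} {unit_symbol}"

# the running example: an outside diameter stored as 2.446e-3 <length>
d = MLayerValue(2.446e-3, "length")
print(d.render(1e-3, "mm", label="outside diameter"))  # outside diameter, 2.446 mm
```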
widely interoperable ma documents, however, require more metadata to describe the measurand in order to automatically match measurement data in instrument specifications, calibration and measurement capabilities, calibration results, calibration requests, and other digital documents that organizations may wish to exchange. the mii (measurement-information infrastructure) taxons [9] would fulfil this requirement unless and until the m-layer incorporates such aspect qualifiers, something not currently envisioned. for example, the taxon measure.length.diameter.outside, in which the second element, length, links to the aspect ⟨length⟩, provides the metadata to make a digital outside-diameter value fully interoperable (neglecting qualifiers for influence quantities). the m-layer's core therefore comprises unambiguous aspect definitions. table 1 illustrates a model to capture the useful data elements. the elements aspectid and name provide the core functionality; the scaletypeid-indexed data helps define the meaningful operations on the aspect; definition would aid users in distinguishing one aspect from another; and symbol identifies the aspect's default math symbol for symbolic processing. unlisted elements such as nature (intrinsic, extrinsic, …) and dimension would add value, but the m-layer goals do not immediately require them. the m-layer model will also allow sourcing one or more data elements from existing systems such as ontologies [11] or unique reference points such as the digital si brochure.
ideally, the aspectid representing an aspect would comprise a lightweight persistent identifier (pid). for example, the m-layer might identify itself and its contents via a doi (digital object identifier) [12]. as shown in figure 1, owners may structure their dois as desired: the doi's owner-chosen suffix structure allows a hierarchical pointer that would identify, for example, the m-layer registry, the aspects dataset, and a particular aspect. for aspects, the short entryid then becomes aspectid and, when combined with a numerical value, would both disambiguate the measurand and reduce the data size in digital documents relative to other proposals. the dois would easily expand to accommodate the mii taxon structure as well. the doi's permanence as a pid ensures m-layer availability regardless of any changes in the organization or web site hosting the registries.
a number of other datasets would supplement or complement the m-layer; these may include datasets to register quantity-unit systems for rendering user results, scale types, base dimensions, aspect aliases (alternate quantity names) and aspect relations. if an m-layer implementation omits any of these supplementary datasets, dependent systems and applications may augment the core m-layer accordingly or, as previously mentioned, follow m-layer pointers to existing systems. we omit details here due to space and the model's current fluidity (the reader may find further information at [13] as it develops) but briefly mention the most germane points.
table 1. aspects data model.
  data element   description                                                          example
  aspectid       unique identifier-index representing the aspect in ma documents and data
  name           registered name                                                      length
  symbol         mathematical symbol markup (e.g., latex, mathml [10])                l
  definition     textual description or external pointer                              pid to unique reference point
  scaletypeid    index to the aspect's scale type                                     ratioscaleid
figure 1. a doi-based scheme for m-layer pids. the top two dois identify articles in two different journals, each of which has designed its own doi structure. the bottom doi shows one choice for m-layer pids, allowing separate registries, each with a number of identified datasets containing identified entries.
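the aspects data model of table 1 can be sketched as a record type; this is an assumption for illustration rather than the prototype registry's schema, and the doi prefix "10.99999" below is a made-up placeholder.

```python
# minimal sketch (assumed types, not the prototype registry's schema): the
# aspects data model of table 1 as a record, with a doi-style pid of the kind
# shown in figure 1 used as the aspectid.
from dataclasses import dataclass

@dataclass(frozen=True)
class Aspect:
    aspect_id: str      # unique identifier-index used in ma documents and data
    name: str           # registered name
    symbol: str         # default math symbol markup (e.g., latex)
    definition: str     # textual description or external pointer
    scale_type_id: str  # index to the aspect's scale type

length = Aspect(
    aspect_id="10.99999/mlayer.aspects.length",  # hypothetical doi-based pid
    name="length",
    symbol=r"l",
    definition="pid to unique reference point",
    scale_type_id="ratioScaleId",
)
print(length.aspect_id, length.scale_type_id)
```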
potential ancillary datasets: quantitysystems and unitsystems register systems such as the imperial, u.s. customary, natural and cgs systems, as well as previous and future si versions. the units and unitaspects datasets associate all units of interest (for user interfaces) with the correct aspect and provide symbolic conversion expressions to and from the aspect's m-layer-declared unit, e.g., the si9 equivalent. to simplify the data model and the client application logic, prefixed units may have their own data entries; this would allow easy unit conversions and rendering as users desire. scaletypes and scaleoperations register scales (ratio, interval, cyclic or modular, logarithmic, ordinal, nominal, empirical) and their data types and operations; as an example multi-scale operation, an interval quantity added to a compatible ratio quantity yields a ratio quantity. though neither sufficient nor required for disambiguation, basedimensions would define the basis for dimensional analysis. finally, an aspectrelations dataset might contain mathematical expressions relating aspects, e.g., ohm's law; an ma-document scheme that propagated mathematical expressions for measurement models and results might start from here.
as a distributed resource, the m-layer may comprise any number of separate registries. some national metrology institutes may wish to establish m-layer registries to link local or legacy measurement units to the m-layer si registry; likewise, standards bodies, industry associations and common-interest communities may wish to register and digitally define unique aspects, scales, units and relationships. the m-layer model also envisions an access interface such as an api, which would define a number of operations, such as retrieving a registry for local use. this would complete the m-layer as a fair data source.
3. discussion
this section discusses some benefits of the m-layer.
3.1. disambiguation
metrologists involved in digital transformation have begun to realize that quantity values given as a numeric value and a unit do not suffice for automated consumption; nor do accompanying free-text measurand descriptions solve the problem. ncsl international's mii initiative set up a test-bed database organized around quantities [14] and later initiated a measurand-taxonomy project to begin addressing measurand ambiguity. the m-layer would extend this capability beyond ratio quantities in order to handle, for example, the ordinal quantity hardness, modular angle quantities, temperature quantities on interval, ordinal, ratio and special scales, etc. aspect ids of course automatically resolve such problems as disambiguating dimensionless quantities (all using the implied unit 1) or other quantities denoted in the same unit (e.g., torque and work). so, though digital documents would record, for example, 1.00 ⟨torque⟩ or 1.00 ⟨work⟩, both render to the same conventional form, 1.00 n m, if so desired in an application, though preferably labelled with the quantity name or mii taxon. dimensionless quantities would work likewise, as each would have its own aspects entry: for example, digital systems rendering the text "turns ratio, r: 200" or "amplifier gain a: 200" would do so from digital data containing 200 ⟨turnsratio⟩ and 200 ⟨gain⟩.
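the same-unit collision just described can be demonstrated with a tiny lookup; the following is a hypothetical registry excerpt, assumed for illustration only.

```python
# small sketch (assumed lookup data): aspect ids resolving the classic same-unit
# collision: torque and work both render as "n m", turns ratio and gain both as
# a bare number, while the stored data stay unambiguous.
RENDER = {  # aspect id -> (label, unit symbol); hypothetical registry excerpt
    "torque": ("torque, M", "N m"),
    "work": ("work, W", "N m"),
    "turnsRatio": ("turns ratio, r", ""),
    "gain": ("amplifier gain, A", ""),
}

def render(value: float, aspect_id: str) -> str:
    label, unit = RENDER[aspect_id]
    return f"{label}: {value:g} {unit}".rstrip()

for v, a in [(1.00, "torque"), (1.00, "work"), (200, "turnsRatio"), (200, "gain")]:
    print(render(v, a))
```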
the m-layer would also have the option of adding base dimensions, such as angle a for rotational quantities, though again, the m-layer does not rely on dimensional analysis.
3.2. simplified data processing
with the m-layer, digital document producers may remain ignorant of the customer's preferred units and simply embed the m-layer representations, because the customer may render the values as desired or simply pass them to other digital systems. documents that the producer converts to pdf form for the customer may do likewise. computations using m-layer data may ignore measurement units and proceed as with dimensionless quantities, then simply attach the appropriate aspect id to the final result before embedding the value in a document or otherwise communicating it. the system may ignore alternate unit systems, prefixes and other pragmatisms because the m-layer's units would implicitly correspond to a declared si edition such as [1], or to other standards for which the si provides no equivalent, e.g., hardness. as a simple example, an application without m-layer support might perform a simple period-to-frequency calculation as

f = 1/T = 1/(1 ms) = 1000 Hz = 1 kHz , (2)

which with the m-layer would reduce to

1/(0.001 ⟨period⟩) = 1000 ⟨frequency⟩ . (3)

the latter operation both carries more information (the aspect) and simplifies processing. many software systems therefore would require no refactoring to handle quantities properly (defining quantity-value classes, for example, and overloading their operators to deal with dimensions). also, having an aspectrelations dataset available in the m-layer would help standardize metrological computations, for example to provide commonly used expressions such as moist-air density.
3.3. unit conversions
the usual unit conversions of course remain trivial with an m-layer. user interfaces would translate conventional notations to and from m-layer representations, but all intermediate processing and communications between m-layer-aware systems would entail no conversions. this m-layer model includes symbolic conversion functions to eliminate precision-limited conversion constants. thus, digital systems using sufficiently precise, arbitrary-precision, or symbolic computations for all operations would introduce no further errors or uncertainty into results, at least up to a user interface. hence, a system may postpone numeric conversions until required by encoding them symbolically as, e.g., latex or mathml. an angle value, for example, might digitalize as equation (4)'s right-hand side,

44.234° = 44.234 π/180 ⟨planeangle⟩ , (4)

where the symbolic conversion expression x π/180 comes from the m-layer units dataset entry for the degree and assumes the si radian as the m-layer angle unit. similarly, conversion functions allow arbitrary scale conversions such as

L_{x/x₀} = log(x/x₀) , (5)

which converts from a dimensionless ratio-scale quantity to a logarithmic-scale level quantity.
3.4. scale handling
the m-layer would handle scale conversions similarly to unit conversions. since every aspectid associates with a unique scaletypeid, we may facilitate conversions and corrections between empirical scales. for example, we might add the aspect ⟨1990conventionalvoltage⟩ and the appropriate scale entry based on the conventional josephson constant K_J-90, with a scale-conversion entry such as K_J-90 x / K_J, where K_J = 2e/h with the constants e and h defined to match the declared m-layer references, e.g. si9 [1].
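the conversion-function idea of equations (4), (5) and the K_J-90 entry can be sketched numerically; the entries below are assumptions, numeric stand-ins for the symbolic expressions (latex or mathml) the m-layer would actually store.

```python
# illustrative sketch (assumed registry entries, not the prototype's data): unit
# and scale conversions driven by per-entry conversion functions.
import math

CONVERSIONS = {
    # degree -> si radian, from the units entry for the degree (eq. (4))
    "degree->radian": lambda x: x * math.pi / 180.0,
    # dimensionless ratio -> logarithmic level (eq. (5); base 10 assumed)
    "ratio->level": lambda x: math.log10(x),
    # 1990 conventional volt -> si volt: x * K_J-90 / K_J, with K_J = 2e/h
    # built from the exact si defining constants e and h
    "v90->v_si": lambda x: x * 483_597.9e9 / (2 * 1.602_176_634e-19 / 6.626_070_15e-34),
}

print(CONVERSIONS["degree->radian"](44.234))  # plane angle in rad
print(CONVERSIONS["ratio->level"](1000.0))    # level corresponding to a ratio of 1000
print(CONVERSIONS["v90->v_si"](10.0))         # a 10 V (1990) value on the si scale
```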
this would allow automated systems to easily correct past (digital) measurement data according to new definitions. figure 2 exemplifies such a voltage-scale conversion via a demonstration application that takes advantage of a prototype registry containing aspect relations to automatically look up conversion equations and compute results. the m-layer would likewise define other empirical scales, such as the its-90 temperature scale and mercury-based temperature scales [3] for various atmospheric pressures, in order to capture the differences from their associated si-defined aspects. defining ordinal scales would bring such quantities as hardness into the same digital system without requiring ad hoc modifications and data representations. modular scales would handle angular quantities when restricting values to a certain range, such as 0° to 360°. from these examples, the reader will see the potential that the m-layer opens to digital transformation.
4. conclusion
this paper has presented initial steps toward modeling and prototyping an m-layer to support fair digital measurement data and systems. for human-readable documents, the m-layer changes nothing, except perhaps to facilitate their generation. by replacing unit symbols and textual measurand descriptions with unique aspect ids, the m-layer concept offers machine-readability, global interoperability, and generalized quantities (aspects) and units (scales) to handle all types of measurements in digital documents and measurement software systems. in collaboration with international partners, the ncsl international 141 mii and automation committee plans to continue developing the m-layer model and populating a prototype, with the intention to replace the mii test-bed quantities and units database for use in digital documents. in cooperation with industry partners, we have begun drafting use cases, a product definition and requirements from the user viewpoint, an open-source prototype registry with back-end software, and demonstration applications. as the mii committee continues its collaboration with the international quality infrastructure, the m-layer should become a fair data resource.
acknowledgement
the author would like to thank ncsl international, the ncsl international 141 mii and automation committee, and cherine marie-kuster for their support.
references
[1] the international system of units (si), international bureau of weights and measures (bipm), si brochure, 9th edition, 2019. online [accessed 26 march 2023] https://www.bipm.org/en/publications/guides/
[2] quantities and units, international standardization organization (iso) and international electrotechnical commission (iec) std. iso-iec 80000, first edition, 2006-2011.
[3] the international system of units (si) - conversion factors for general use, nist, washington, dc, special publication 1038 (2006). online [accessed 26 march 2023] https://nvlpubs.nist.gov/nistpubs/legacy/sp/nistspecialpublication1038.pdf
[4] bureau international des poids et mesures (bipm), the international system of units (si) in digital communication. cipm task group on the digital si (2020). online [accessed 26 march 2023] https://www.bipm.org/en/conference-centre/bipm-workshops/digital-si/
[5] b. d. hall, m. j. kuster, metrological support for quantities and units in digital systems, measurement: sensors 18 (2021). doi: 10.1016/j.measen.2021.100102
[6] joint committee for guides in metrology (jcgm) guidance document jcgm 200:2012, the international vocabulary of metrology - basic and general concepts and associated terms (vim), 3rd edition, 2012. online [accessed 26 march 2023] https://www.bipm.org/utils/common/documents/jcgm/jcgm_200_2012.pdf
[7] s. s. stevens, on the theory of scales, science 103 (1946), art. no. 2684, pp. 677-680. doi: 10.1126/science.103.2684.677
[8] b. d. hall, m. j. kuster, representing quantities and units in digital systems, measurement: sensors 23 (2022), art. no. 100387. doi: 10.1016/j.measen.2022.100387
[9] cal lab solutions and ncsli 141 mii & automation committee, metrology taxonomy. online [accessed 26 march 2023] http://www.metrology.net/home/metrology-taxonomy/
[10] w3c, mathematical markup language (mathml). world wide web consortium (w3c). online [accessed 26 march 2023] https://www.w3.org/math/
[11] semantic specifications for units of measure, quantity kind, dimensions and data types. quantity, unit, dimension and type (qudt). online [accessed 26 march 2023] http://www.qudt.org/
[12] information and documentation - digital object identifier system, international standardization organization (iso) std. iso 26324:2012, 2012.
[13] m. j. kuster, b. d. hall, r. white, m-layer registry prototype. ncsli 141 mii & automation committee (2022). online [accessed 26 march 2023] http://miiknowledge.wikidot.com/local--files/wiki:mii-projects/
[14] d. zajac, creating a standardized schema for representing iso/iec 17025 scope of accreditations in xml data, proc. ncsl int. workshop & symposium, st. paul, mn: ncsl international, 24-28 july 2016. online [accessed 26 march 2023] http://miiknowledge.wikidot.com/local--files/wiki:mii-reference-data-sources/ncsli%202016%20zajak%20xml%20soa%20paper.pdf
figure 2. screen shot from a demonstration application using a prototype m-layer registry. from aspectids tied to the input aspect (1990 conventional voltage), the output aspect (si9 voltage) and the variables in an aspectrelations table, the software automatically locates the applicable conversion equations and computes the result to any desired precision. the application displays aspectids using aspect symbols: action, charge, frequency-voltage coefficient, etc. the application shows the m-layer representations for information only.
collaborative systems for telemedicine diagnosis accuracy
acta imeko, issn: 2221-870x, september 2021, volume 10, number 3, 192-197
jacques tene koyazo1, moise avoci ugwiri2, aimé lay-ekuakille3, maria fazio1, massimo villari1, consolatina liguori2
1 department of mathematics and computer science, physical sciences and earth science, university of messina, lecce 73100, italy
2 department of industrial engineering, university of salerno, fisciano 84084, italy
3 department of innovation engineering, university of salento, lecce 73100, italy
section: research paper
keywords: signal processing; biomedical; collaborative edge; cloud computing; accuracy; theranostics; measurement
citation: jacques tene koyazo, moise avoci ugwiri, aimé lay-ekuakille, maria fazio, massimo villari, consolatina liguori, collaborative systems for telemedicine diagnosis accuracy, acta imeko, vol. 10, no. 3, article 26, september 2021, identifier: imeko-acta-10 (2021)-03-26
section editor: francesco lamonaca, university of calabria, italy
received june 12, 2021; in final form august 5, 2021; published september 2021
corresponding author: jacques tene koyazo, email: jacquestene2013@gmail.com
abstract: the transmission of medical data and the possibility for distant healthcare structures to share experiments about a given medical case raise several conceptual and technical questions. good remote healthcare monitoring deals with more problems in personalized health-data processing than the traditional methods nowadays used in many hospitals around the world. the adoption of telemedicine in the healthcare sector has significantly changed medical collaboration. however, to provide good telemedicine services through new technologies such as cloud computing, cloud storage, and so on, a suitable and adaptable framework should be designed. moreover, in the chain of medical information exchange between requesting agencies, including physicians, a secure and collaborative platform enhances the decision-making process. this paper provides an in-depth literature review on the interaction that telemedicine has with cloud-based computing. in addition, the paper proposes a framework that can allow various research organizations, healthcare sectors, and government agencies to log data, develop collaborative analysis, and support decision-making. the electrocardiogram (ecg) and electroencephalogram (eeg) case studies demonstrate the benefit of the proposed approach in data reduction and high-fidelity signal processing at a local level; this makes it possible for the extracted characteristic features to be communicated to the cloud database.
1. introduction
the spectacular progress in communication technology has boosted telemedicine, which has become a preferred new medical practice in many medical areas [1]. one of the benefits brought by this new medical discipline is, for instance, stomatological diagnosis: thanks to communication and computer technology, stomatological diagnosis can provide safe and reliable remote diagnostics, counselling care, distance education and other medical information services [2]. in recent years, various standards for the regulation of healthcare research have been developed. evidence in the literature has proven that studies based on ieee and iso standards [3] enable good compliance in terms of collaboration with healthcare industries, government agencies and research institutes towards developing novel approaches and methods for handling and controlling diseases.
in fact, implementing a reliable collaboration system in telemedicine has tremendous advantages: it breaks distance restrictions, so that different medical institutions can provide diagnoses; it improves the accuracy of exchanged medical advice; and it provides a cooperative working environment suitable for sharing data and information, which helps in dealing with emergencies. telemedicine has thus shown huge prospects thanks to the continuous development of information and telecommunication technology [4]. apart from privacy and security concerns, medical data exchange still faces a transmission problem. according to statistics provided by the second xiangya hospital [5], most medical data sets exceed 1 gb, and these massive generated data often grow faster than the bandwidth of the mobile iot expands. this aspect was confirmed by cisco's yearbook report [6], which shows that such data can account for more than 85 % of data traffic. this paper is built on the assumption that sensors are used to collect comprehensive physiological information from targeted patients and that a cloud is used to store and analyse that information. the data from the latter process are sent to the service provider for deeper investigation and, at the same time, can be used to remotely monitor the health condition of the patient. various sensors are nowadays used in clinical care; storing the data from these sensors in the cloud for more complex analysis and sharing the results with the medical professional for further examination is the core idea behind this paper.

2. diagnosis in telemedicine through a collaborative framework: a literature survey

2.1. general overview

the internet of things (iot) in a collaborative medical framework is considered the most fundamental aspect enabling collaboration in telemedicine, because it allows healthcare applications to make full use of the iot and cloud computing [7]-[9].
the framework also provides protocols to support the communication and broadcast of raw medical signals from different sensors and smart devices to a network of fog nodes. a good insight into a collaborative framework has been given by yang et al. [10] and almotiri et al. [11], who suggested an architecture able to collect data on the patient's health through several sensors, transfer them to a remote server for processing, and display the results. figure 1 shows the essential components needed in a collaborative framework for telemedicine diagnosis.

figure 1. basic subsets of a collaborative framework for healthcare in telemedicine.

ideally, the sensors constantly collect the patient's health condition and vital information. the collected data are then sent via an edge router to hand-held devices, and analysed and stored on a cloud computing platform for further evaluation, as presented in figure 2. it is worth mentioning that the sensors continuously send the patient's vital signs as raw information, such as electromyography (emg), electrocardiogram (ecg), electroencephalogram (eeg), body temperature, blood glucose (bg) and so on. a good data-exchange platform architecture ensures that all sensors operate smoothly, so that users can interact with them easily. the ecg and eeg are demonstrated as show cases in section 3 of this paper.

figure 2. a typology for remote patient monitoring using body sensors.

2.2. cloud computing for telemedicine

cloud computing is nowadays one of the hottest subjects in information technology: the computing resources it allocates are on-demand, scalable and secure for users. in the work of sultan et al. [12], cloud computing is described as the backbone of iot health systems. it has the advantage of allowing information to be shared among health professionals, research institutes and patients in a more structured and organized manner, which minimizes the risk of medical records being lost [13]. figure 3 presents the platform of a remote healthcare monitoring system based on cloud computing. each layer of the platform is designed to handle a specific task and can be implemented so as to serve various healthcare queries. the cloud storage and multi-tenant access control form the "master layer" of the platform; it collects the healthcare data from the sensors, as presented in figure 2 (layer 1). the healthcare annotation layer solves the data heterogeneity issue, which is a major concern in signal processing: since the sensors used for telemedicine generate data of various types, automating data sharing among agencies becomes complex. one way is to create open linked life data sets to annotate personal healthcare data and integrate dispersed data in a patient-centric pattern for cloud applications [14]. the data analysis layer processes the data stored in the cloud to assist clinical decision making; as can be seen in figure 4, mining algorithms are constructed to induce clinical paths from personal healthcare data. it is important to point out that most cloud data centres are geographically centralized and located far from end-users [15].

figure 3. cloud-computing-based remote health monitoring: functional platform.
for telemedicine applications, which often require immediate real-time feedback, communication between users and remote cloud servers is the source of major issues such as round-trip delays and network congestion. these observations led to new evolutions of cloud computing, such as fog computing and big data, through which cloud computing was extended and became able to support highly scalable computing platforms [16]. as can be seen in [6], cisco was the first to introduce the fog computing concept as a possible solution to extend the computing power and storage capacity of the cloud to the network edge. fog computing is closer to the devices and has a dense geographical distribution, so applications and services can be placed at the edge of the local network, which reduces bandwidth usage and latency: the cloud gets closer to the user and the processing is done locally. the main fog computing architecture is illustrated in figure 4. the interest in a fog computing layer is deeply discussed by many authors, who analysed its role in implementing healthcare monitoring frameworks and proposed a mediator layer to receive raw information from the sensor devices and then store it on the cloud.

figure 4. main attributes of a fog computing architecture: an illustration.

3. case study and discussions

embracing collaborative edge computing gives healthcare organizations visibility over patient care cycles. sharing information among practitioners (physicians or health professionals) can give organizations a clear, big-picture view of what is going on and how to deal with it. moreover, this technology effectively contributes to the organization of the health system (hospital or health centre) from the point of view of exchanging data in a reliable way, in particular for the urgent critical cases presented by patients, such as neurological or cardiological problems. figure 5 represents the proposed collaboration architecture of two healthcare centres, a and b. we consider two patients under ecg and eeg diagnosis, as shown in the diagram. this part constitutes the first block of the diagnosis based on the designed architecture, specifically for the ecg and eeg study cases: the diagnosis is made for two patients, each in one of these healthcare centres, where their biosignals are captured. the second block of the process is composed of three compartments, each playing a specific role: acquisition in the local interface, together with the data received from the other health centre; data management with local storage; and, in the third compartment, data analysis and interpretation for the final result.

figure 5. the proposed collaborative architecture.

this collaboration between the network edges, attributed to the two healthcare centres named edge a and edge b, ensures the reliability, storage and processing of geographically distributed data. one of the main scopes of this architectural configuration is the exchange of diagnoses between two, or more, healthcare centres; this exchange is necessary to carry out specific comparisons among patients of the same age range and with the same characteristics. due to heavy file dimensions, the proposed architecture also allows remote access without displacing the files; this is possible thanks to the cloud.
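to make the local processing and feature exchange of figure 5 concrete, the sketch below illustrates how an edge node might condition a raw ecg window and push only the extracted features to the shared cloud store. it is a minimal sketch under stated assumptions, not the authors' implementation: the band-pass settings anticipate the pipeline of section 3.1 (zero-phase filtering is used here for simplicity, unlike the causal chain with its 40-sample delay described there), the peak picker is a crude stand-in for the pan-tompkins detector, and the endpoint url, node identifier and payload fields are hypothetical.

```python
import json

import numpy as np
import requests  # hypothetical transport; any http client would do
from scipy.signal import butter, filtfilt, find_peaks


def bandpass(x, fs, lo=4.0, hi=20.0, order=4):
    """Zero-phase Butterworth band-pass (4-20 Hz, order 4, cf. section 3.1)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)


def ecg_features(x, fs=250):
    """Reduce a raw ECG window to a few characteristic features."""
    y = bandpass(x, fs)
    # enforce a ~0.3 s refractory period between candidate R peaks
    peaks, _ = find_peaks(y, distance=int(0.3 * fs), height=0.5 * np.max(y))
    rr = np.diff(peaks) / fs  # R-R intervals in seconds
    return {
        "n_beats": int(peaks.size),
        "mean_rr_s": float(rr.mean()) if rr.size else None,
        "mean_hr_bpm": float(60.0 / rr.mean()) if rr.size else None,
    }


def push_to_cloud(node, patient, feats,
                  url="https://cloud.example.org/api/features"):  # hypothetical
    """Send only the reduced features, not the raw signal, to the cloud store."""
    payload = {"edge_node": node, "patient": patient, "features": feats}
    return requests.post(url, data=json.dumps(payload), timeout=10)
```

pushing a dictionary of a few scalars instead of the raw trace is what makes the data reduction quoted in the conclusions possible.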
3.1. ecg monitoring

in medical applications, ecg sensors record the electrical activity of the heart at rest and deliver information about heart rate and rhythm. the recorded information is crucial for the early prediction of heart enlargement due to hypertension, or of a heart attack. the integration of the iot into the ecg for telemedicine has tremendous benefits and a high potential to warn users about heart-rate abnormalities, a vital sign for early heart disease detection. in this paper, two patients have been considered. figure 6 presents the output ecg signal of the two patients, where the filtering sequence uses the pan-tompkins qrs detector. for comparison purposes, the signals have been downsampled to 250 hz; each window represents 5.5 seconds of data, and each stage introduces a delay, with a cumulative delay of 40 samples. for optimum filtering, a butterworth filter of order 4 with a pass band between 4 hz and 20 hz is used in this study. the proposed processing algorithm is implemented in matlab.

figure 6. processing sequences for patient 1 and patient 2.

figure 7 shows the respiratory rate, which is a critical quantity and one of the vital signs used in telemedicine; the r-r interval has been used for its detection. because the heart rate is synchronised with respiration, the r-r interval of the ecg is short during inspiration and long during expiration [18]. however, the morphology of the heartbeat can vary greatly from patient to patient, as shown in figure 7: a normal heartbeat of one patient can resemble an abnormal beat of another. the hamilton and tompkins algorithms used in this paper [19] therefore rely on peak energy amplitude detection rather than on the detailed morphology of the ecg.

figure 7. filtered, smoothed and processed ecg for patient 1 and patient 2.

3.2. eeg monitoring

eeg signals are known to be an interesting source of information for remote healthcare and can be associated with the ecg to enhance the diagnosis task [12]. the issue, though, is to apply appropriate techniques to extract information before uploading to the cloud, so that medical agencies can take advantage of it for a proper and accurate diagnosis. this paper adopts the filter diagonalization method (fdm) exploited by a. lay-ekuakille et al. [20] to extract relevant parameters, such as complex frequencies, from a given window. eeg traces can be seen as signals that are sums of damped exponentials, which makes it suitable to apply the fdm and/or the decimated signal diagonalization (dsd). the band-limited decimated signal can be modelled as

$$ c_n^{\mathrm{bld}} = \sum_{k=1}^{K} d_k \, e^{-\mathrm{j}\,\omega_k n \tau_D}, \quad \operatorname{Im}\omega_k < 0 \qquad (1) $$

where $\omega_k$ and $d_k$ are the complex frequencies and amplitudes, respectively. if $M$ is the number of sets of times at which the signal is sampled, then for each of the $M$ signals the diagonalization for the fdm algorithm can be implemented as follows:

$$ c_n^{\mathrm{bld}} = \sum_{k=1}^{K} d_k \, e^{-\mathrm{j}\,\omega_k n \tau_D} \;\Rightarrow\; U_1 B_{1k} = u_{1k} U_0 B_{1k} \qquad (2) $$

$$ c_{2n}^{\mathrm{bld}} = \sum_{k=1}^{K} d_{2k} \, e^{-\mathrm{j}\,\omega_k n \tau_D} \;\Rightarrow\; U_1 B_{2k} = u_{2k} U_0 B_{2k} . \qquad (3) $$

in (2) and (3), the complex frequencies are extracted from the eigenvalues $u_{1k}$ and $u_{2k}$. the reader can find more details on the structure of the fdm algorithm in [21]. as for the ecg case, the eeg is considered for patient 1 and patient 2, where patient 1 is the unsuspected child and the latter is the suspected child.
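the fdm proper diagonalizes small evolution matrices built from the signal, as in (2)-(3). as a lightweight stand-in that illustrates the damped-exponential model in (1), the sketch below recovers the complex frequencies and amplitudes with a classical prony-style linear prediction fit; it is a minimal illustration assuming a clean, noise-free window, not the fdm implementation of [20], [21].

```python
import numpy as np


def fit_damped_exponentials(c, K, tau):
    """Fit c_n = sum_k d_k * exp(-1j * w_k * n * tau) to a 1-D complex
    series c via Prony's method; returns (w, d)."""
    N = len(c)
    # linear prediction: c[n] = -sum_{m=1..K} a_m * c[n-m]
    A = np.column_stack([c[K - m:N - m] for m in range(1, K + 1)])
    a = np.linalg.lstsq(A, -c[K:], rcond=None)[0]
    # roots z_k = exp(-1j * w_k * tau) of the prediction polynomial
    z = np.roots(np.concatenate(([1.0], a)))
    w = 1j * np.log(z) / tau  # decaying modes have Im(w) < 0, cf. (1)
    # amplitudes from the Vandermonde system c = V d
    V = z[np.newaxis, :] ** np.arange(N)[:, np.newaxis]
    d = np.linalg.lstsq(V, c, rcond=None)[0]
    return w, d


# synthetic check: two damped modes sampled with tau = 4 ms
tau, n = 4e-3, np.arange(200)
w_true = np.array([2 * np.pi * 10 - 2j, 2 * np.pi * 25 - 5j])
c = sum(dk * np.exp(-1j * wk * n * tau) for dk, wk in zip([1.0, 0.5], w_true))
w_est, d_est = fit_damped_exponentials(c, K=2, tau=tau)
```

the estimated complex frequencies play the role of the eigenvalue-derived frequencies in (2)-(3); only these few complex numbers, rather than the full eeg window, would then be pushed to the cloud.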
figure 8 presents the bispectrum of the signal for samples 10801 to 12000; the bispectrum built for the second interval of patient 2 can be seen in figure 9. an important clinical feature for the suspected and unsuspected cases (epilepsy, in this case) is presented in figure 10 and figure 11 for patient 1 and patient 2, respectively. as stated above, cloud computing provides a secure platform for two-way sharing of research data across different agencies or institutions. platforms such as google cloud, gift-cloud, etc. are designed to meet the needs of collaborative research projects by simplifying data transfer. the eeg and ecg characteristic features obtained in this work are easy to integrate with the local infrastructure of the institutions that provided the clinical data and expertise, and with the end-user within the routine clinical workflow. the results are presented as supports for varied collaboration agreements between institutions and the related access-control restrictions. the improved scheme proposed in figure 5 makes configuration and updating possible, and new modalities can be added via the server without requiring software updates. extracted features such as the bispectrum presented in figure 12 simplified the development of the research software. the idea is straightforward: the software automatically fetches the data directly from the server, given that the data have been uploaded from the research centre (in our case). these features are not only accurate but also useful both for medical research projects, where data sharing is required between researchers and clinical institutions, and for medical professionals in decision making.

figure 8. the bispectrum and associated representation for patient 1.
figure 9. the bispectrum and associated representation for patient 2 (samples 7201 to 8400).
figure 10. the bispectrum and associated representation for patient 1 (samples 7201 to 8400).
figure 11. the bispectrum and associated representation for patient 2 (samples 7201 to 8400).
figure 12. superposition of the bispectra associated with patient 1 and patient 2.

4. conclusions

telemedicine aims to provide high-quality medical services, given that healthcare facilities nowadays hardly satisfy the population's needs due to the limitations of public medical resources and infrastructures. this research proposed a cloud-computing-based architecture for decentralized and collaborative diagnosis, highlighting the storage of patients' data after meaningful feature extraction. in this way, a medical professional can rapidly grasp the state of the patient in question and easily make an accurate decision. as reported at the beginning of section 3, collaborative edge computing certainly helps healthcare organizations in the spirit of connecting hardware and software issues in a unique platform for decision making in the interest of patients [22], [23]. personalized care is another field for the unifying platform. the contributions of this research include data reduction and high-fidelity signal processing at a local level, so as to extract characteristic features and communicate them to the cloud database. a demonstration of eeg and ecg feature extraction was carried out, and it was detailed how the obtained processing results are deployed on a cloud-based application.
to obtain the desired outcome, the study suggested a deployment on a system consisting of at least a 900 mhz, 32-bit quad-core arm cortex-a7 cpu and 2 gb of ram. the data set exploited is from the mit-bih arrhythmia database, with records ranging from 16.24 kb to 36.45 kb. the suggested architecture makes a data reduction of around 97 % possible, while most architectures suggested in the literature present an accuracy of about 89 %. moreover, the transfer time is estimated at 15 seconds, which validates the efficacy of the proposed architecture in monitoring vital signs such as the eeg and ecg, even in real time. on the other hand, the study provides a high-level understanding of cloud-based iot systems and remote healthcare monitoring.

references
[1] boyi xu, lida xu, hongming cai, lihong jiang, yang luo, yizhi gu, the design of an m-health monitoring system based on a cloud computing platform, enterprise information systems, vol. 11, issue 1, 2017, pp. 17-36.
doi: 10.1080/17517575.2015.1053416
[2] t. han, l. zhang, s. pirbhulal, w. wu, v. h. de albuquerque, a novel cluster head selection technique for edge-computing based iomt systems, computer networks, april 2019, 158(2), pp. 114-122.
doi: 10.1016/j.comnet.2019.04.021
[3] b. kamsu-foguem, p. f. tiako, l. p. fosto, c. foguem, modeling for effective collaboration in telemedicine, telematics and informatics, vol. 32, issue 4, november 2015, pp. 776-786.
doi: 10.1016/j.tele.2015.03.009
[4] a. lay-ekuakille, p. vergallo, g. griffo, f. conversano, s. casciaro, s. urooj, v. bhateja, a. trabacca, entropy index in quantitative eeg measurement for diagnosis accuracy, ieee transactions on instrumentation & measurement, vol. 63, no. 6, 2014, pp. 1440-1450.
doi: 10.1109/tim.2013.2287803
[5] weisong shi, jie cao, quan zhang, youhuizi li, lanyu xu, edge computing: vision and challenges, ieee internet of things journal, vol. 3, issue 5, oct. 2016, pp. 637-646.
doi: 10.1109/jiot.2016.2579198
[6] cisco, fog computing and the internet of things: extend the cloud to where the things are, white paper, 2015. online [accessed 13 september 2021]
https://www.cisco.com/go/iot
[7] deepak puthal, saraju p. mohanty, uma choppali, collaborative edge computing for smart villages, ieee consumer electronics magazine, vol. 10, issue 3, may 2021, pp. 68-71.
doi: 10.1109/mce.2021.3051813
[8] kai wang, hao yin, wei quan, geyong min, enabling collaborative edge computing for software defined vehicular networks, ieee network, vol. 32, issue 5, september/october 2018, pp. 112-117.
doi: 10.1109/mnet.2018.1700364
[9] x. chen, l. jiao, w. li, x. fu, efficient multi-user computation offloading for mobile-edge cloud computing, ieee/acm transactions on networking, vol. 24, issue 5, october 2016, pp. 2795-2808.
doi: 10.1109/tnet.2015.2487344
[10] a. saeed, m. ammar, k. a. harras, e. zegura, vision: the case for symbiosis in the internet of things, proc. 6th international workshop on mobile cloud computing and services, paris, france, 11 september 2015, pp. 23-27.
doi: 10.1145/2802130.2802133
[11] t. x. tran, a. hajisami, p. pandey, d. pompili, collaborative mobile edge computing in 5g networks: new paradigms, scenarios, and challenges, ieee communications magazine, vol. 55, issue 4, april 2017, pp. 54-61.
doi: 10.1109/mcom.2017.1600863
[12] a. lay-ekuakille, p. vergallo, a. trabacca, m. de rinaldis, f. angelillo, f. conversano, s. casciaro, low-frequency detection in ecg signals and joint eeg-ergospirometric measurements for precautionary diagnosis, measurement, vol. 46, issue 1, 2012, pp. 97-107.
doi: 10.1016/j.measurement.2012.05.024
[13] k. wang, h. yin, w. quan, g. min, enabling collaborative edge computing for software defined vehicular networks, ieee network, vol. 32, issue 5, sep./oct. 2018, pp. 112-117.
doi: 10.1109/mnet.2018.1700364
[14] h. zhang, p. dong, w. quan, b. hu, promoting efficient communications for high-speed railway using smart collaborative networking, ieee wireless communications, vol. 22, issue 6, dec. 2015, pp. 92-97.
doi: 10.1109/mwc.2015.7368829
[15] l. chen, j. xu, socially trusted collaborative edge computing in ultra dense networks, proc. of the 2nd acm/ieee symposium on edge computing, san jose/fremont, ca, usa, 12-14 october 2017, 11 pp.
doi: 10.1145/3132211.3134451
[16] yuvraj sahni, jiannong cao, lei yang, data-aware task allocation for achieving low latency in collaborative edge computing, ieee internet of things journal, vol. 6, issue 2, april 2019, pp. 3512-3524.
doi: 10.1109/jiot.2018.2886757
[17] a. lay-ekuakille, s. ikezawa, m. mugnaini, r. morello, detection of specific macro and micropollutants in air monitoring, review of methods and techniques, measurement, 98(1) (2017), pp. 49-59.
doi: 10.1016/j.measurement.2016.10.055
[18] a. p. plonski, j. vander hook, v. isler, environment and solar map construction for solar-powered mobile systems, ieee transactions on robotics, vol. 32, issue 1, feb. 2016, pp. 70-82.
doi: 10.1109/tro.2015.2501924
[19] world health organization, coronavirus disease (covid-19) pandemic website. online [accessed 9 september 2021]
https://www.who.int/emergencies/diseases/novel-coronavirus-2019
[20] a. lay-ekuakille, m. a. ugwiri, c. liguori, p. k. mvemba, proceedings of the medical measurements and applications (memea) symposium, istanbul, turkey, 26-28 june 2019, art. no. 8802127.
doi: 10.1109/memea.2019.8802127
[21] a. lay-ekuakille, g. griffo, p. visconti, p. primiceri, r. velazquez, leaks detection in waterworks: comparison between stft and fft with an overcoming of limitations, metrology and measurement systems, vol. 24, issue 4, pp. 631-644.
doi: 10.1515/mms-2017-0049
[22] b. qureshi, towards a digital ecosystem for predictive healthcare analytics, proc. of medes 2014, 6th international conference on management of emergent digital ecosystems, buraidah al qassim, saudi arabia, 15-17 september 2014, pp. 34-41.
doi: 10.1145/2668260.2668286
[23] h. patel, t. m. damush, e. j. miech, n. a. rattray, h. a. martin, a. savoy, l. plue, j. anderson, s. martini, g. d. graham, l. s. williams, building cohesion in distributed telemedicine teams: findings from the department of veterans affairs national telestroke program, bmc health services research, 21 (1), art. no. 124.
doi: 10.1186/s12913-021-06123-x
development of 1.6 gpa pressure-measuring multipliers

acta imeko, june 2014, volume 3, number 2, 54-59, www.imeko.org

w. sabuga1, r. haines2
1 physikalisch-technische bundesanstalt, bundesallee 100, 38116 braunschweig, germany
2 fluke calibration, 4765 east beautiful lane, phoenix, az 85044-5318, usa

section: research paper
keywords: high pressure standards; pressure multipliers; finite element analysis; pressure transducers; calibration
citation: wladimir sabuga, rob haines, development of 1.6 gpa pressure-measuring multipliers, acta imeko, vol. 3, no. 2, article 13, june 2014, identifier: imeko-acta-03 (2014)-02-13
editor: paolo carbone, university of perugia
received april 15th, 2013; in final form august 16th, 2013; published june 2014
copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this research was jointly funded by the emrp participating countries within euramet and the european union
corresponding author: wladimir sabuga, e-mail: wladimir.sabuga@ptb.de

abstract: two 1.6 gpa pressure-measuring multipliers were developed and built. a feasibility analysis of their operation up to 1.6 gpa, parameter optimisation and prediction of their behaviour were performed using finite element analysis (fea). their performance and metrological properties were determined experimentally at pressures up to 500 mpa. the experimental and theoretical results are in reasonable agreement. with the results obtained so far, the relative standard uncertainty of the pressure measurement up to 1.6 gpa is expected to be not greater than 2·10⁻⁴. with this new development, the range of the pressure calibration service in europe can be extended up to 1.5 gpa.

1. introduction

new high-pressure technologies such as autofrettage, hydroforming and isostatic pressing are being intensively developed and used in the automotive industry, diesel engineering, vessel production for the petrochemical and pharmaceutical industry, water-cutting machine manufacture, new material fabrication and, recently, food sterilisation. new transducers for measuring pressures up to 1.5 gpa have recently been developed and are offered by several manufacturers. the use of these high-pressure transducers requires their calibration and, thus, the existence of appropriate reference pressure standards traceable to the international system of units. the operation range of the pressure standards in west europe is limited to 1.4 gpa. the creation of new primary pressure standards up to 1.6 gpa and the establishment of a calibration service up to 1.5 gpa are the objectives of the joint research project (jrp) "high pressure metrology for industrial applications" within the european metrology research programme (emrp) [1], [2].
ptb and fluke calibration (fluke) have jointly developed and built two 1.6 gpa pressure-measuring multipliers to extend the pressure scale and the calibration range as required.

2. principle and key features of the pressure multipliers

the operating principle of a pressure-measuring multiplier is explained in figure 1. the multiplier includes a low-pressure (lp), pl, and a high-pressure (hp), ph, piston-cylinder assembly (pca), which have significantly different effective areas. the lp and hp pcas are axially aligned and their pistons are mechanically coupled. both the lp and hp pistons are unsealed in the cylinders and are rotated, which, due to the lubrication effect, avoids mechanical friction between the pistons and cylinders. consequently, in the absence of other forces, the forces due to the pressures ph and pl on the pistons are balanced when the ratio of the pressures equals the ratio of the effective areas of the lp and hp pcas, alp and ahp:

ph / pl = alp / ahp . (1)

the high pressure ph can thus be determined by accurately measuring pl and by knowing the exact ratio of alp to ahp, also called the multiplying ratio (km). the principle of pressure-measuring multipliers has been in use at least since the 1930s and is utilised in current practice, for example in the 1.5 gpa national pressure standard of russia, vniiftri [3]. a 1 gpa pressure multiplier has been commercially offered since the late 1980s and is used by national metrology institutes and calibration laboratories as a secondary or transfer standard [4]. in the newly developed multiplier, the nominal effective areas of the lp and hp pcas were chosen to be 1 cm² and 5 mm², respectively. these are dimensions for which the production technology is well established and which result in a pressure ratio of 1:20. thus, a pressure ph of 1.6 gpa on the hp side of the multiplier is reached at pl = 80 mpa, which is easily generated and measured with high accuracy.
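a quick numeric check of (1), as a worked example using only the values quoted above: the force balance on the coupled pistons gives

$$ p_\mathrm{L} A_\mathrm{LP} = p_\mathrm{H} A_\mathrm{HP} \;\Rightarrow\; k_\mathrm{m} = \frac{A_\mathrm{LP}}{A_\mathrm{HP}} = \frac{100\ \mathrm{mm}^2}{5\ \mathrm{mm}^2} = 20, \qquad p_\mathrm{H} = 20 \cdot 80\ \mathrm{MPa} = 1.6\ \mathrm{GPa}. $$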
the design of the new multiplier has specific features which distinguish it from the former multipliers. first, to avoid plastic deformation and to guarantee the stability of the effective areas, the components of the lp and hp pcas are made of tungsten carbide with 6 % or 10 % (hp piston) cobalt (wc-co), instead of the steel used in [3]. since the tensile strength of the tungsten carbide is limited to roughly 0.7 gpa, the hp cylinder can be operated at the maximum pressure of 1.6 gpa only if it is supported from outside. thanks to a special design of the multipliers, a compressive load is established on the outside of the hp tungsten carbide cylinder, which prevents rupture when the pressure inside the cylinder exceeds the material's tensile strength. in [4] this is accomplished by fitting a sleeve around the cylinder. in order to extend the range to 1.6 gpa in the new multiplier, two sleeves, each made of chrome/nickel/molybdenum steel, are successively assembled onto the tungsten carbide hp cylinder by means of thermal shrink fits. in addition, the hp cylinder with the two sleeves is set into a jacket which allows a jacket pressure (pj) to be applied to the lateral surface of the outer sleeve and, thus, to additionally compensate the tensile stress in the cylinder (figure 2). the hp pca is designed to be operated in controlled-clearance (cc) mode with pj typically equal to 25 % of ph, at which the pressure distortion coefficient (λ) of the pca should be around zero. in addition, the hp pca may be operated with variable pj in order to adjust the piston fall rate (vf) and the pca sensitivity, if necessary, as well as to study λ experimentally by measuring the dependences of vf and ahp on pj. for optimal and stable operation of a pca, it is desirable that the pressure in the piston-cylinder gap changes linearly from its maximum value at the gap inlet to the ambient pressure at the gap outlet. such a pressure distribution is difficult to realise in the case of cc hp pcas having a nominally constant gap in the pressure-free state because, under pressure, the piston-cylinder gap becomes extremely small in the outlet region due to the cross-sectional expansion of the axially loaded piston and a simultaneous reduction of the cylinder bore caused by the jacket pressure [5]. in [3], where the pcas are operated in the re-entrant mode, which produces an even stronger contraction of the cylinder than the cc mode, the problem was solved by giving the cylinder bore a flare-like shape, with the diameter at the outlet a few micrometres larger than at the inlet. such a manufacturing strategy is extremely difficult and generally leads to large widths and irregularities of the piston-cylinder gap. in the new multiplier, the problem is overcome by giving the outer surface of the inner sleeve a variable shape. in the lower part, where the pressure inside the cylinder and in the pca gap is much larger than the ambient pressure, the inner sleeve has a cylindrical shape. in the upper part, where the pressure in the gap approaches the ambient pressure, the inner sleeve has a conical shape, with the diameter at the top 0.3 mm smaller than the diameter of the cylindrical part. this results in a tapered gap between the inner and outer sleeves which reduces the action of pj on the upper part of the inner sleeve. therefore, an excessive concentration of the pressure gradient in the piston-cylinder clearance towards the outlet of the cylinder is avoided, and an acceptable flow rate of the pressure-transmitting liquid between the piston and the cylinder is provided. the optimal shape of the inner sleeve was determined by finite element analysis (fea), as described in the next section. the lp pca was designed keeping in mind the requirement of a sufficiently low fluid flow through the piston-cylinder gap. this requirement results from the relatively large effective area of the lp pca compared to the area of the pca of the pressure balance maintaining and measuring pl. usually, pcas used in the range of 80 mpa have nominal effective areas of 0.1 cm², ten times smaller than alp. an excessive flow rate in the lp pca would cause a high piston fall rate of the reference pressure balance, which could increase the uncertainties or leave insufficient time for a stable ph. to limit the flow rate and optimise performance, the principle of negative free deformation was applied.

figure 1. operation principle of a pressure multiplier.
figure 2. hp pca in the mounting post (labels: low pressure, high pressure, piston, cylinder, inner sleeve, outer sleeve, jacket, hp tube, collar, gland, sleeve nut, jacket pressure connection, thermometer location, jacket pressure channel).
this principle is well proven and is used in fluke gas high-pressure balances [6], providing low fall rates at higher pressures and high sensitivity at lower pressures. in the lp pca design, the lp cylinder is surrounded by a sleeve with a conical taper on the inside surface, and pl is applied to the outside surface of the sleeve. the lp sleeve has a sliding fit on the cylinder and is positioned so that its smallest diameter is located where the cylinder pressure is maximal. in the absence of pressure, the sleeve produces no stress on the cylinder. as the pressure increases, loading the cylinder from inside and the sleeve from outside, the sleeve first comes into contact with the cylinder in the region where the pressure in the piston-cylinder gap is maximal. in this way, a variable outside load on the cylinder is created which optimally compensates the radial distortion of the cylinder produced by the inner pressure. with this variable outside load distribution, a nearly linear pressure distribution in the lp pca gap is achieved.

3. finite element analysis

to analyse the feasibility of the pressure multipliers' operation up to 1.6 gpa and to optimise the dimensions of the pcas' components, they were modelled using fea. the modelling was performed using two different fea software packages, ansys at ptb and cosmos/works at fluke; in this way, the correctness of the calculations was verified by analysing the same problem. additionally, the analyses were performed for different problems to obtain complementary information on the multipliers' performance. the fea included large-deflection, contact and plastic capabilities, the latter required for the tube connected to the hp pca. first, all parts were modelled with their material properties and assumed geometries. for the hp pca, the shrinking of the inner and then of the outer sleeve onto the tungsten carbide cylinder, and the connection of the hp tube to the hp cylinder, were modelled. the cylinder-to-inner-sleeve shrink fit was performed first, using the nominal geometry. the deformation of the outer surface of the inner sleeve after this step was noted; in production, this surface is re-machined after the initial shrink fit. to simulate this, the outer diameter of the inner sleeve was reduced by the amount of the deformation, so that the geometry after this initial shrink step gives a good representation of the geometry that results in production. the inner-to-outer-sleeve shrink fit was performed second, using the resulting cylinder/inner-sleeve combination with the nominal geometry of the outer sleeve. the shrink fit of the taper in the inner sleeve was accomplished in the same manner as for the other shrink-fit surfaces. the amount of contact of the surfaces was determined iteratively in the analysis, a step performed automatically by the fea software. three shapes of the inner-sleeve outside surface were numerically tested for their effect on the stress, on the pressure distribution in the piston-cylinder gap and on λ. the connection of the hp tube to the cylinder and the deformation of the tube under pressure were studied; a tube tip angle of 59.5° and a matching cylinder cone angle of 60° were selected.
the tube was moved into the cylinder to obtain contact along the whole length of the cylinder cone, and a pressure of 1.6 gpa was applied. surface loads were applied in various combinations. the loads included 50 % or 100 % of the maximum measurement pressure on the relevant surfaces, a linear and, alternatively, a constant pressure distribution in the piston-cylinder gap, as well as a jacket pressure on the outer surface of the hp pca sleeve and pl on the outer surface of the lp pca sleeve. after each load step, the radial deformation and the radial and tangential stresses were extracted. the stress and strain distributions were analysed in relation to the ultimate tensile strength (sut) and the elastic limit (sy) of the cylinder, sleeve and hp tube materials. these properties, together with the young's modulus (e) and the poisson ratio (µ), based on the information of the materials' manufacturers and on literature data, are compiled in table 1. in addition, the ultimate compressive strength of the wc materials is known to be extremely high, about 7 gpa. later, the e and µ values were accurately measured using resonant ultrasound spectroscopy [7].

table 1. material properties.
part / material                        e/gpa    µ       sy/gpa   sut/gpa
lp pca, hp cylinder / wc-6%co          620      0.218   -        ≈0.7
hp piston / wc-10%co                   560      0.218   -        -
lp & hp sleeves / cr-ni-mo steel       200      0.3     1.2      1.4
hp tube / austenitic steel             200      0.3     1.053    1.216

after the shrink fit of the inner sleeve on the cylinder, a tangential stress of about −600 mpa (compression) was achieved at the inside of the cylinder. the tangential stress distribution at the cylinder inside after the subsequent shrink fit of the outer sleeve is shown in figure 3. at any point, the absolute value of the stress is much lower than the ultimate compressive strength of the wc materials (7 gpa). in the upper part of the pca, the absolute value of the stress becomes lower, which results from the conical shape of the outer surface of the inner sleeve; this corresponds to the intended reduction of the outside support in the region of the internal pressure drop. the dashed line in figure 3 shows the tangential stress calculated analytically under the assumption of a cylindrically perfect cylinder and sleeves. both the fea and the analytical results demonstrate that the double shrink will to a great extent compensate for the stress produced by the internal pressure of 1.6 gpa. for a linear pressure distribution in the gap, the maximum residual stress produced by the shrinking and the internal pressure would be about 400 mpa, which could be withstood by the wc cylinder even in the absence of jacket pressure. however, in order to minimise the risk of cylinder rupture, jacket pressure is expected to be applied in all normal system operation.

figure 3. tangential stress at the hp cylinder inside after the shrink fit of the two sleeves, calculated with fea (st) and analytically (st,calc).
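the analytical (dashed) curve in figure 3 follows from the classical lamé solution for a thick-walled cylinder. the sketch below is a minimal illustration of that calculation, assuming ideal cylindrical geometry; the radii and the shrink-fit contact pressure used in the example are illustrative placeholders, not the actual pca dimensions.

```python
import numpy as np


def lame_stresses(r, a, b, p_in, p_out):
    """Radial and tangential stress in a thick-walled cylinder of inner
    radius a and outer radius b under internal pressure p_in and external
    pressure p_out (plane Lame solution, linear elasticity)."""
    A = (p_in * a**2 - p_out * b**2) / (b**2 - a**2)
    B = (p_in - p_out) * a**2 * b**2 / (b**2 - a**2)
    return A - B / r**2, A + B / r**2  # sigma_r, sigma_t


# illustrative numbers only (assumed, not from the paper): 1.3 mm bore
# radius, 8 mm outer radius, no internal pressure, and a 0.29 GPa
# shrink-fit contact pressure acting on the outside
a, b = 1.3e-3, 8.0e-3  # m
sig_r, sig_t = lame_stresses(a, a, b, p_in=0.0, p_out=0.29e9)
print(f"tangential stress at the bore: {sig_t / 1e6:.0f} MPa")
# gives a compressive sigma_t of about -600 MPa at the bore, the order
# of magnitude quoted in the text for the first shrink fit
```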
figure 4 presents the fea model of the hp pca and the tangential stresses in the pca components at ph = 1.6 gpa, with a linear pressure distribution from 1.6 gpa to zero along the piston-cylinder gap and pj = 0.4 gpa. the fea calculations for the hp pcas at the maximum measurement pressure of 1.6 gpa and a jacket pressure of 400 mpa show that the radial and tangential stress distributions are smooth and without any significant concentrations, that the cylinder is subject only to compressive stresses, and that the sleeves remain within the elastic limit. these results indicate that the hp pca design is at risk neither of cylinder rupture nor of instability of the effective area due to plastic deformation in the sleeves. the calculations also confirm the necessity of having two sleeves on the hp cylinder in order to achieve the required cylinder compression when the temperature for the thermal shrink is kept below the tempering temperature of the sleeve material. the analysis of the tube demonstrates that only a small portion of the tube near the centre line is subject to plastic deformation; in the tapered part of the tube, in the sections where the tube is not supported by the cylinder, the region of plastic deformation does not exceed 1/3 of the tube cross section. these results indicate a reliable connection between the hp pca and the tube at pressures up to 1.6 gpa. it should be noted that the fea was performed assuming the ultimate strength and the elastic limit each to be the same for tensile and compressive deformations, based on the available manufacturer data; this might produce inaccuracies in the fea results if these material properties differ between tension and compression.

figure 4. fea model of the hp pca (a), and tangential stress distribution in it at ph = 1.6 gpa and pj = 0.4 gpa (b).

in a similar manner to the hp pca, an fea of the lp pca was performed at pl = 80 mpa, with a linear pressure distribution from pl to zero applied to the inside of the cylinder and pl applied to the outside of the sleeve surrounding the cylinder. the primary objective was to minimise the radial distortions at the cylinder inside and thus the fluid flow rate. it was found that, with an optimal taper on the inside of the sleeve, the radial distortions of the cylinder do not exceed 0.1 µm at any point of the cylinder bore; without the sleeve, in the free-deformation (fd) mode, they would reach 1 µm at the gap entrance. by combining the structural fea of the hp pca with a hydrodynamic analysis of its piston-cylinder gap, λ and vf were calculated using the ptb iterative method described in [5]. two pressure-transmitting liquids were considered: di(2)-ethyl-hexyl-sebacate (dhs) at ph ≤ 0.5 gpa and polydiethylsiloxane pes-1 for ph ≤ 1.6 gpa. dhs is widely used in pressure balances up to 1 gpa; it is, however, not applicable at higher pressures because of solidification. its density and viscosity dependences on pressure were taken as given, e.g., in [5]. pes-1 has a significantly lower viscosity than dhs, with acceptable values up to 1.6 gpa; its density and viscosity as functions of pressure were based on the experimental data presented in [3]. with dhs, calculations were performed in fd mode to provide target values of vf for the optimal piston-cylinder gap widths to be achieved in the piston-cylinder production process. with pes-1, both the fd and cc modes were analysed. as known from former fea studies, the results of hydrodynamic modelling strongly depend on the real initial gap profile between the undistorted piston and cylinder [5]. in particular, information about the cylinder bore profile near the exit is important, because the gap in this region becomes the narrowest under high pressure and therefore has a strong effect on the pressure distribution, vf and λ. to take this into account, prior to the final adjustment of the pistons to the cylinder bores in the multiplier production process described in section 4, dimensional measurements were performed on the two hp cylinders. they included straightness measurements in the outlet region of the cylinder bore along four generatrix lines separated by 45°.
the results for opposite generatrices (0° and 180°, 45° and 225°, and so on) were averaged and are shown for the two cylinders in figure 5. for the fea calculations, where the pcas are treated as axisymmetric, the gap profiles were averaged and approximated by analytical functions, which are also presented in figure 5. the piston and the cylinder bore, apart from the gap exit, were considered ideally cylindrical. different gap widths (h) were analysed; figure 5 presents the case in which h was equal to 0.2 µm.

figure 5. piston-cylinder gap near the outlet for a perfect piston and the real dimensions of cylinders 1 and 2 (dr/µm vs. z/mm; measured generatrices at 0°, 45°, 90° and 135°; averaged profiles of cylinders 1 and 2 and the profile used in the fea model).

the results of the piston fall rate calculations for h = (0.2 to 0.5) µm, for the fd and cc operation modes and for the dhs and pes-1 liquids are shown in figure 6. even with the smallest technologically feasible gap of 0.2 µm, vf is too high when pes-1 and fd mode are used. the largest gap considered, 0.5 µm, combined with cc mode produces an acceptable vf at ph > 1 gpa, but rates that are still too high below 1 gpa. surprisingly, the difference between the piston fall rates for the 0.2 µm and 0.5 µm gaps in the pressure range (1 to 1.6) gpa is not as big as would be expected from the theory of the undistorted gap. with the fea results for vf, the range h = (0.3 to 0.4) µm was found to be optimal. with h = 0.4 µm, a target vf of (0.068 to 0.073) mm/min was defined, to be achieved at 32 mpa in control measurements when fitting the pistons to the cylinders. at 500 mpa, with dhs and in fd mode, this gap width leads to vf = (0.57 to 0.61) mm/min.

figure 6. piston fall rates calculated for different gap sizes, profiles and liquids.

4. realisation of the multipliers

for both the hp and lp pcas, the best designs indicated by the fea were realised, combining complementary technologies available at ptb and fluke. the piston-cylinders were manufactured, and detailed technical drawings of the multipliers and of all their parts produced, by fluke. all other parts (each of the two multipliers comprises about 80 parts) were manufactured by ptb. fluke carried out the final mechanical adjustment of the sleeves and of some other parts; in particular, machining the final diameters of the sleeves to meet the defined tolerances of 1 µm and to achieve a roughness of the lateral surfaces better than 0.2 µm required fluke's expertise. the production of the pcas started with 5 to 8 pieces each of lp and hp pistons and cylinders, as well as sleeves; the best were selected during the subsequent processing and characterisation. ptb performed the dimensional measurements and carried out the thermal shrink fits of the sleeves on the cylinders. for the shrink fits, the outer steel sleeve was heated up to 400 °c at maximum, to stay below the tempering temperature of the sleeve's steel, which is 450 °c. however, provisional shrinking trials indicated that a temperature increase of 400 °c may not be sufficient to perform the shrink: the insert stuck in the outside sleeve. to gain more space and time leeway for the shrinking procedure, a greater temperature difference between the two parts was created by cooling the insert (the cylinder in the first shrink stage, and the cylinder with the already shrunk inner sleeve in the second shrink stage) down to about −196 °c using liquid nitrogen.
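the need for such a large temperature difference can be checked with a one-line thermal-expansion estimate; the expansion coefficient and the bore diameter below are assumed, typical values, not data from the paper:

$$ \delta = \alpha \, \Delta T \, d \approx 12\times10^{-6}\ \mathrm{K}^{-1} \times 400\ \mathrm{K} \times 20\ \mathrm{mm} \approx 0.1\ \mathrm{mm}, $$

i.e., heating a steel sleeve with an assumed 20 mm bore by 400 °c opens only about a tenth of a millimetre of diametral clearance, which closes quickly as heat flows into the cold insert; cooling the insert with liquid nitrogen adds a comparable margin, which explains the procedure described above.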
prior to shrinking the outer sleeve onto the inner one, which had been fitted to the cylinder in the first shrink, the cylindrical and conical outside surfaces of the inner sleeve were characterised dimensionally. after the heat-shrink operations on the two hp cylinders, their bores were re-machined to remove 3 to 5 µm from the inner surfaces deformed by the shrinking. the hp pistons and cylinders were then lapped to achieve the piston fall rates predicted by the fea for a gap width of h = (0.3 to 0.4) µm. the test piston-fall-rate measurements in the production stage were performed at a pressure of 32 mpa, at which the effect of the elastic distortion is relatively small and vf primarily depends on the undistorted gap width. later, the piston fall rates were measured in both the hp and lp pcas and allowed an estimation of the gap width between pistons and cylinders: h = (0.27 to 0.36) µm was found for the hp pcas and h = (0.68 to 0.73) µm for the lp pcas. the whole production required the multipliers' parts to be sent back and forth between ptb and fluke, some of them many times, for the subsequent production, characterisation and adjustment procedures. finally, the multipliers were assembled and preliminarily tested by fluke.

5. experiments

the first tests of the multipliers were performed by fluke at pressures of (100 to 500) mpa on the hp side and (5 to 25) mpa on the lp side, using two piston gauges as a reference, with dhs as the pressure-transmitting liquid and at pj = 0.25·ph. the setup is shown in figure 7.

figure 7. multiplier system test setup (lcm, hp piston gauge, motor power supply, multiplier, lp piston gauge).

the multiplying ratios were determined using two hydraulic pressure balances in a crossfloat. both the lp and hp piston gauges were of type pg7302. two different 500 kpa/kg pcas, having expanded uncertainties in pressure of 22·10⁻⁶·pl + 16 pa and 27·10⁻⁶·pl + 16 pa (k = 2), were used in different runs on the lp side; a 5 mpa/kg pca having an expanded uncertainty in pressure of 70·10⁻⁶·ph + 16 pa (k = 2) was used on the hp side of the multiplier. the temperatures of the hp and lp pcas in the multiplier were measured using platinum resistance thermometers. these temperatures and the position of the pistons in the multiplier were indicated by a laboratory conditions monitor (lcm). the pistons were kept within ±2.5 mm of their middle working position and were rotated by a dc motor at approximately 10 rpm. a ppch hydraulic pressure controller was used to set pj. the tare pressure (pt), produced on the hp side of the multiplier by the masses loading the hp piston (hp and lp pistons, piston coupler, etc.), was measured at pl = 0 for each multiplier assembly using an rpm3 a1000, h1 (0 to 2) mpa pressure monitor with an uncertainty of approximately 2 kpa (k = 2); it was equal to 2.918 mpa and 2.926 mpa for the two multipliers. with the tare pressure, equation (1) transforms to

ph = pt + pl · km , with (2)

km = km,0 · [1 + λkm · (ph − pt)] , (3)

where km,0 is km at pl = 0 and λkm is the pressure dependence coefficient of km. a crossfloat, using the drop-rate method, was performed between the two piston gauges, with multiplier 1 or 2 between them, at ph = (100, 200, 300, 400, 500, 500, 400, 300, 200, 100) mpa, in four runs in total. according to (2), the multiplying ratio was determined at each point by subtracting pt from the ph measured on the hp side and dividing by the pl measured with the lp piston gauge (figure 8). the set of data for each run was fitted with function (3), providing km,0 and λkm.
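the fit of (3) is linear in the parameters km,0 and km,0·λkm, so an ordinary least-squares polynomial fit suffices. a minimal sketch with synthetic data follows; the "true" values used to generate the data are assumptions chosen only to match the order of magnitude of table 2.

```python
import numpy as np

# synthetic crossfloat data (illustrative only), pressures in MPa
pt = 2.918
ph = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
# pl generated from assumed true values km0 = 20, lambda_km = 3.4e-7 /MPa
pl = (ph - pt) / (20.0 * (1.0 + 3.4e-7 * (ph - pt)))

# eq. (2): measured multiplying ratio at each point
km = (ph - pt) / pl

# eq. (3): km = km0 + (km0 * lambda_km) * (ph - pt), linear in the parameters
slope, intercept = np.polyfit(ph - pt, km, 1)
km0 = intercept
lam_km = slope / km0
print(f"km,0 = {km0:.6f}   lambda_km = {lam_km:.2e} /MPa")  # recovers 20, 3.4e-7
```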
the results of the two runs for each multiplier were combined (averaged) and used to determine the residuals of the points taken. after reviewing the results of the tests, it was decided to leave out the 100 mpa points when determining km,0 and λkm, as they did not seem typical with respect to the rest of the results. table 2 gives the results of the fit for each multiplier.

table 2. results of the multiplying ratios in individual tests and averages for each multiplier.
           multiplier 1                  multiplier 2
           km,0         λkm·10⁷/mpa⁻¹    km,0         λkm·10⁷/mpa⁻¹
run 1      19.987668    3.79             20.008672    3.55
run 2      19.989101    2.89             20.008994    3.39
average    19.988384    3.34             20.008833    3.47

figure 8. multiplying ratio vs. high pressure corrected for tare pressure.

the performance of the multipliers has been found satisfactory: the km values were reproducible within ±4·10⁻⁵ for multiplier 1 and ±2·10⁻⁵ for multiplier 2. the increased standard deviation in the case of multiplier 1 is presumably associated with the exchange of the reference lp piston gauge between runs 1 and 2. herewith, and taking into account the uncertainties of the reference lp and hp piston gauges pg7302, the relative standard uncertainty of the multipliers in the pressure range (100 to 500) mpa lies between 4·10⁻⁵ and 5.5·10⁻⁵. with the same data, the relative standard uncertainty in the range (1 to 1.6) gpa can be expected to be (1.3 to 2)·10⁻⁴; this is a very preliminary estimate and must be confirmed by experiments at higher pressures. this uncertainty is sufficiently small to provide the calibration service required by industry.

6. conclusions and outlook

two novel 1.6 gpa pressure-measuring multipliers were developed, tested at pressures up to 500 mpa, and demonstrated a repeatability at a level as low as 2·10⁻⁵. the standard uncertainty of up to 5.5·10⁻⁵ obtained in the test crossfloats is mainly caused by the reference lp and hp standards. this uncertainty can be reduced in the future by more extensive experiments using the more accurate 1 gpa standards of ptb as a reference, but also by a theoretical calculation of the pressure distortion coefficients of the lp and hp pcas taking into account the real dimensional properties of the hp piston-cylinder gap and of the sleeve-to-cylinder gap in the lp pca. moreover, the extension of the fluid-flow calculations for the pca gap up to 1.6 gpa requires accurate data on the density and viscosity of pes-1 at high pressure. all these measurements are in progress within the emrp jrp [1].

acknowledgement

the contribution of dr. p. ulbig in the organisation of this research, and of mrs. d. hentschel, who manufactured most of the parts of the multipliers, both ptb members, is much appreciated. the authors acknowledge mr. p. delajoud (fluke dhi, retired) for the design of the multipliers, and f. valenzuela and m. bair (both fluke) for the piston-cylinder fabrication and the crossfloat testing/analysis, respectively. this research was carried out within the emrp; it is jointly funded by the emrp participating countries within euramet and the european union.

references
[1] euramet, "high pressure metrology for industrial applications", publishable jrp summary report for ind03 highpres.
http://www.euramet.org/index.php?id=emrp_call_2010
[2] emrp jrp ind03 highpres. http://emrp-highpres.cmi.cz/
[3] v. m. borovkov, "deadweight high pressure piston manometers", in: researches in the area of high pressures, e. v. zolotyh (editor), izdatelstvo standartov, moscow, 1987, p. 577 (in russian).
[4] p. delajoud, "the pressure multiplier: a transfer standard in the range 100 to 1000 mpa", bipm monographie, vol. 89/1, 1989, pp. 114-124.
[5] w. sabuga et al., "finite element method used for calculation of the distortion coefficient and associated uncertainty of a ptb 1 gpa pressure balance – euromet project 463", metrologia, 43 (2006), pp. 311-325.
[6] p. delajoud, m. girard, "a new piston gauge to improve the definition of high gas pressure and to facilitate the gas to oil transition in a pressure calibration chain", proc. of the imeko tc16 int. symp. on pressure and vacuum, sept. 22-24, 2003, beijing, china, acta metrologica sinica press, pp. 154-159.
[7] w. sabuga, p. ulbig, a. d. salama, "elastic constants of pressure balances' piston-cylinder assemblies measured with the resonant ultrasound spectroscopy", proc. of the int. metrology conf. cafmet-2010, april 2010, cairo.

disarmadillo: an open source, sustainable, robotic platform for humanitarian demining

acta imeko, issn: 2221-870x, september 2022, volume 11, number 3, 1-9

emanuela elisa cepolina1, alberto parmiggiani2, carlo canali3, ferdinando cannella3
1 snail aid – technology for development, via fea 10, 16142 genova, italy, and industrial robotics facility, italian institute of technology, via morego 30, 16163 genova, italy
2 mechanical workshop, italian institute of technology, via san quirico 19/d, 16163 genova, italy
3 industrial robotics facility, italian institute of technology, via morego 30, 16163 genova, italy

section: research paper
keywords: humanitarian demining; open source hardware; appropriate technology
citation: emanuela elisa cepolina, alberto parmiggiani, carlo canali, ferdinando cannella, disarmadillo: an open source, sustainable, robotic platform for humanitarian demining, acta imeko, vol. 11, no. 3, article 8, september 2022, identifier: imeko-acta-11 (2022)-03-08
section editor: zafar taqvi, usa
received march 9, 2022; in final form august 30, 2022; published september 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: emanuela elisa cepolina, e-mail: emanuela.cepolina@iit.it

abstract: the mine action community suffers from a lack of information sharing among stakeholders. since 2004, snail aid has been working on disarmadillo, a dramatic shift in paradigm: an open source hardware platform for humanitarian demining. developed mainly thanks to volunteers' work across more than 15 years, the machine is now going to get a push thanks to the project disarmadillo+, a collaboration between snail aid – technology for development and the italian institute of technology. the new version of the machine will be improved in terms of manoeuvrability, modularity and versatility, without compromising its characteristic features. the re-design will take into account the need to keep the cost low and the technology appropriate to the context where it will work. the ability of the machine to serve two different purposes will also be preserved: the machine will remain easily convertible to its original agricultural nature, being developed around a commercial off-the-shelf power tiller. the paper presents the machine and the research work foreseen within the new project.
“peace agreements may be signed and hostilities may cease, but landmines and explosive remnants of war (erw) are an enduring legacy of conflict”, states, in its first sentence, the landmine monitor, a comprehensive assessment of progress in eliminating landmines, cluster munitions and other erw, published annually. according to [1], in 2020 alone the number of casualties of mines/erw was more than 7000, with approximately 2500 people killed and the rest injured. the majority of them (80 %) were civilians, half of whom were children. at the moment, at least 60 states and other areas are contaminated by antipersonnel landmines. among them, the countries considered massively contaminated (with more than 100 km² of contaminated land) are afghanistan, bosnia and herzegovina, cambodia, croatia, ethiopia, iraq, turkey, yemen and ukraine, the latter recently bombed with cluster munitions [2]. many of these countries are also facing severe hunger: iraq, afghanistan and yemen have more than 34 % of their total population undernourished, and ethiopia more than 25 % [3]. while the utmost importance of releasing land to local communities for food production and economic development is evident, the lack of intensive mechanization of the demining process is surprising. disarmadillo represents a breakthrough sustainable innovation in mechanical demining technologies; it has been designed to stay behind when demining is over and to serve the long-term agricultural development of the country where it helped release land to local communities. it is affordable and, being based on mature agricultural technology, its maintenance and running costs are minimized. instead of being designed to clear mines, it is designed to collect information about mine presence, either from sensors or from light ground processing and vegetation cutting. thanks to its low cost, more units can be used at the same time, helping release land to local communities faster. when not used in demining operations, disarmadillo can be reconverted to its original agricultural use and help secure food production. the paper is organised as follows.
section 2 introduces humanitarian demining and the machines currently employed in it, together with an overview of the robotic solutions suggested for the task, highlighting shortcomings and possible improvements. section 3 introduces the disarmadillo machine concept and the philosophy behind it. section 4 describes the disarmadillo architecture, and section 5 presents the features of disarmadillo+, the new version of the machine under study. then, conclusions are drawn.

2. humanitarian demining
humanitarian demining methods are based on manual demining, a procedure in which mines are manually detected and neutralized by a human deminer equipped with simple gardening tools such as shovels and shears, prodders and, if possible, metal detectors. manual demining is the most versatile and trusted method and is therefore present in every demining programme. sometimes, manual deminers work together with dogs trained to detect the explosives contained in mines. when possible, demining machines help with the physical demining process phases, i.e., vegetation clearance, mine detection and removal [4]. however, the number of machines in use is surprisingly low. an in-field study [5] conducted in 2012, across six organizations in six countries, recorded only 13 machines in use. the geneva international centre for humanitarian demining (gichd) electronic catalogue of mechanical equipment used for demining operations [6] currently reports only 40 machines in use: this is the sum of the numbers of machines in use entered by a single company producing four types of machines, the other producers having not filled in this information. although these data certainly do not represent the whole picture, they show that mechanization in this field is extremely limited. several reasons can account for this, including lack of funding, the inability to move from research and development (r&d) to practical commercial devices, cynicism about innovation among those convinced that their current practices are entirely sufficient [7], the high cost of maintenance of complex equipment in mine-affected countries [8] and the lack of information sharing among stakeholders. tiramisu, d-box and demining robots are among the largest r&d projects that have recently tackled humanitarian demining. while the first two ran in parallel and were both co-funded by the european union (eu) within the 7th framework programme, the latter is still ongoing and is funded by the north atlantic treaty organization (nato) science for peace and security programme. among them, only tiramisu and demining robots were explicitly aimed at developing new robotic vehicles, while d-box was focused on creating an information management system [9]. the demining robots project employs a multi-sensor robotic platform developed in a previous phase of the project and designed specifically for research purposes and for testing innovative mine detection methods such as impulse ground penetrating radar [10]. the robotic platform, called ugo-1st, is thus not yet suitable to be fielded in demining operations. the tiramisu project led to the development of robotic vehicles at a higher technology readiness level (trl), such as teodor, frs husky and the apt.
the first is a tracked outdoor platform equipped with an array of five metal detectors, the second a four-wheel all-terrain vehicle equipped with an arm carrying a metal detector and an artificial nose, and the last an improvement of the locostra machine, a four-wheel agricultural tractor modified to be used in mine-affected areas [11]. these and many other robotic platforms designed for demining have been analysed in [12], which highlights the need to address several requirements beyond the increase in safety of human deminers, such as the speed of robotic vehicles, the ability to operate over long periods of time in varied environments, the amount of payload they can carry and their cost-efficiency. out of all platforms, [12] selects six for quantitative comparison across the identified requirements. apart from teodor, frs husky and locostra, the comparison table includes ares, a vehicle with four independently steered wheels [13]; silo-6, a hexapod walking robot [14]; and gryphon-iv, a modified moon-buggy vehicle equipped with a pantograph arm carrying a metal detector [15]. apart from having better landmine detection/exposure results in field trials, gryphon-iv and locostra are more promising in payload and operation time. nevertheless, these two solutions have not yet found application in the field. this might be due to the fact that their development took place within r&d projects and was limited by the funding available. an open-source approach would allow the community to take ownership of the technology and the development to continue beyond research project timelines. the international mine action standards (imas) [16] define demining machines as machines designed to be used in hazardous areas. they are divided into machines designed to detonate hazards, machines designed to prepare the ground, and machines designed to detect hazards. machines belonging to the first group are generally heavily armoured, highly powered and very expensive to purchase. they achieve mine detonation by processing the soil at high speed with spinning tools at the front, aimed at crushing or hitting whatever they encounter, thus using a large amount of power delivered by large, high-fuel-consumption engines. while in the past most machines on the demining technology market belonged to this first type, recently there has been a shift toward smaller [7], less powerful machines designed to prepare the ground rather than detonate hazards. ground-preparing machines are primarily designed to improve the efficiency of demining operations by reducing or removing obstacles. the size of machines has been decreasing over time, answering the need for more appropriate technologies, since the logistics of heavier machines is very difficult in post-conflict scenarios; at the same time, a limitation of the practical in-field use of the heavier and more powerful ones has been acknowledged. a study [5] (figure 1) has shown that the efficiency of these machines in terms of mine detonation is well below expectations and that, therefore, in most cases another mine clearance asset, usually manual deminers or mine detection dogs, has to follow the machine and complete its task. the shift toward smaller machines has occurred along with a change in paradigm aiming to employ resources more efficiently. the land release process has been promoted and is now largely employed to reduce the time by which suspected hazardous areas (shas) are released to local communities.
according to [16], mechanical land release involves a machine being used to indicate or confirm the presence or absence of landmines and/or erw within a suspected or confirmed hazardous area. the aim is to enable the deployment of other demining assets only in areas proven to contain landmines and/or erw, including unexploded sub-munitions. in other words, machines employed in a mechanical technical survey mainly need to verify the absence of mines in the given area; if they encounter an explosion, the area needs to be re-categorized and further processed. this means that machines used in technical survey need to process the ground and to resist, or not be severely damaged by, only one explosion at a time, while keeping the operator safe. although recent trends allow introducing smaller, lightly armoured and powered, more cost-efficient machines designed to perform multiple ground-preparation tasks, and major steps have been taken in this direction, there is still progress to be made. figure 2 reports the size and power of the demining machines available now. data have been extrapolated from the gichd electronic catalogue and from the websites of manufacturers that recently exhibited their equipment at conference venues. as can be seen, the average weight of demining machines available on the market is still above 10 tonnes, and the average power is 300 hp. going smaller and more versatile might be useful not only for humanitarian demining, allowing the number of machines in use to increase, but also, in light of reconverting demining machines to food production, for sustainable agricultural mechanization. in fact, according to [17], about 90 % of farmers worldwide operate on a small scale, and the technology must become accessible to this large group. reference [18] highlights, as a key factor for the successful adoption of agrobots in developing countries, the capacity to design and offer technical solutions at a low (affordable) cost but with a high impact. again, [18] estimates that small robots at an affordable price for purchase or hire represent a potential alternative in areas where manpower is scarce and conventional machinery is not available or is too costly for smallholders.

3. disarmadillo
the work on the disarmadillo machine started in 2004 with a one-month visit to mine action activities in sri lanka. during the trip, groups of deminers were interviewed to start the research in the right direction, better understand local needs and establish reciprocal trust between local people and researchers. most notably, information was gathered by working on the functional requirements for a system of demining machines to work close to the deminers. when deminers were asked about their preferences for new machine technology, they expressed a strong desire for machines that were small, light and inexpensive. they wanted machines to help in the most boring/difficult parts of their job, particularly cutting vegetation and processing the ground, especially the hardest ground, scarified with a simple tool called a heavy rake, according to local procedures, to remove the soil hiding mines [19]. based on these findings, the first version of the disarmadillo machine, called the participatory agricultural technology machine (pat machine), was built within the first author's phd work [20].
the work on disarmadillo continued over the years thanks to the contributions of volunteers of snail aid, a non-profit organization, and students of a secondary technical high school in genova, italy, who devoted part of their time to improving the machine and building parts of it in the school mechanical workshop. disarmadillo is, in fact, conceived to be appropriate to the local context, and thus its components have to be suitable for production in non-specialized workshops.

figure 1. results from the same demining machine. on the left, a test lane used at a test site in germany with dummy antipersonnel (ap) mines, called worm, 0 cm - 20 cm deep: 98.22 % neutralized. on the right-hand side, a suspected hazardous area in angola: 10 ap mines processed and left live intact (of type pomz and pp-mi-sr).
figure 2. size (top image, kg, axis up to 25,000) and power (bottom image, hp, axis up to 1,000) of demining machines currently on the market, with averages marked.

in 2021, the disarmadillo+ project was approved, assuring a push forward thanks to the collaboration between researchers of the italian institute of technology and snail aid – technology for development (figure 3). the core idea behind disarmadillo is to adapt power tillers to demining applications. power tillers are small agricultural machines widely used and commercially available in many mine-affected countries, and their second-hand market is widespread. they are easy to transport, as they are small and light, and they are available with different types of engines. the most powerful version (approximately 14 hp) is sturdy enough to be employed in several versatile tasks, from ground processing to vegetation cutting. power tillers, also known as walking tractors, two-wheel tractors or iron buffaloes, have great importance in their nations' agricultural production and rural economies. they not only have rotovator attachments but also mouldboard and disc-plough attachments. seeders and planters, even of the zero-till/no-till variety, can be attached. reaper/grain harvesters and micro-combine harvesters are available for them. also very important is their ability to pull trailers with cargoes of over two tonnes. the population of power tillers in developing countries is surprisingly high: china has the highest number, estimated to approach 16 million; thailand has nearly 3 million, sri lanka 120,000, nepal 15,000. parts of africa have begun importing chinese tractors, and nigeria may have close to 1,000. many countries of central/eastern europe also have significant populations of two-wheel tractors, as they have been sold there for agricultural use since the 1940s [21].

3.1. disarmadillo philosophy: open source
among all the reasons that might be found behind the scarce employment of machines in humanitarian demining, in the authors' opinion the lack of information sharing is predominant. often, researchers into new technologies for mine action do not have access to useful information generated in the field, which is treated as proprietary and not shared [22], except after an extensive and deep personal analysis, often involving field visits that generally require significant resources to be committed to the cause. at the same time, machine producers tend to market their products in the same way as military equipment, negotiating their sales, including price, in confidence.
the lack of transparency of the market makes comparing the cost-efficiency of machines difficult, so the introduction of new systems is not perceived as necessary. therefore, in order to create a favourable environment for more technologies to enter the demining technology market, there is a need to change approach and create a more transparent, less donor-dependent and more cost-efficiency-oriented market. disarmadillo is intended to be an upgrade kit that can be mounted on any type of power tiller to transform it into a demining machine supporting manual deminers in their work. when not used in demining operations, disarmadillo can be reconverted to its original agricultural use and help secure food production. while the commercial off-the-shelf (cots) components needed by the kit will be listed with prices and suggested purchasing sites, all components that need to be custom made will have their technical drawings available for free download from the internet. potentially, a new machine could be built around any power tiller by anyone interested, with as few modifications as possible. similar approaches are being used successfully by projects targeting electronics (arduino) and heavier hardware (open source ecology, or do-it-yourself (diy) vehicles and drones). as in these well-known cases, the community of users would be asked to provide feedback on experiences with the machine and contribute to future developments. the idea of adopting an open-design business model for a mine action technology is provocative and runs counter to the current trends in the humanitarian mine action (hma) market highlighted in the previous part of the paper; nevertheless, it is feasible and profitable. if required by customers, all the parts needed could also be delivered in a box to the customer. if necessary, upon request from the customer, assembly of all components can also be offered as a service locally (as knowledge transfer) in the mine-affected country, together with training on the use of the machine. thanks to its modularity, if the community devises new tools or components, old machines can be upgraded without having to jettison what works. this approach aims to challenge the traditional lack of information sharing in mine action and to increase the active participation of end users in the design and decision-making process. positive implications are expected in terms of bridging the gap between the scientific and operational hma communities, an increased level of competition, cost reduction and, possibly, the promotion of a closer integration with development.

figure 3. disarmadillo evolution over time.
figure 4. disarmadillo philosophy.

3.2. disarmadillo philosophy: versatility
disarmadillo is a robotic platform designed to carry different tools. some have already been tested, some are designed and others are still in the form of ideas. thanks to the open nature of the project, partners have tested tools locally, such as the vibrating sieve developed by prof. ross macmillan in australia. figure 5 depicts the tools conceived for disarmadillo and available on the snail aid website (top image) and, as an example, the rake (bottom image). the rake is designed for ground processing in loose soils, where manual deminers use rakes to uncover the ground and expose mines. it penetrates the soil in front of the machine, cuts it and sieves it, lifting mines and leaving them aside for later collection by deminers.
a prototype has been manufactured and successfully tested in jordan with dummy mines.

3.3. disarmadillo philosophy: demining and agricultural purpose
as their job is to process the ground, agricultural machines originally conceived to work the soil could be efficiently employed in demining. since landmines impact food security via six different and somewhat mutually reinforcing mechanisms, including access denial, loss of livestock, land degradation, reduced workforce, financial constraints and aid dependency [23], it makes sense to introduce in mine-affected countries multi-purpose technologies that can serve not only demining but also food production. agricultural technologies are mature and simple, easily repairable in every developing country in local, non-specialized workshops. the modularity of agricultural technologies is another advantage: the same tools can be mounted on different tractor units and replaced by dedicated agricultural tools when demining operations are over. moreover, involving local technicians in re-designing new or improved technology helps reduce the dependency of local communities on donors' help and facilitates local human development. empowerment is an integral part of many poverty reduction programmes. helping individuals and communities to function as agents for improving their wellbeing is essential for promoting human development and human freedom. empowerment shall not depend only on state-funded resources and opportunities but also on citizens taking responsibility for self-improvement. the handover of all mine action activities to local entities, who can perform the majority of the work and gain skills while participating in the creation and maintenance of new agricultural technology for area reduction, is desirable and necessary. the development of sustainable agricultural technologies, and their transfer and dissemination under mutually agreed terms to developing countries, is encouraged by the food and agriculture organization (fao) [18]. fao also stresses the importance of supporting national efforts to foster the utilization of local know-how and agricultural technologies, and to promote agricultural technology research to increase sustainable agricultural productivity, reduce post-harvest losses and enhance food and nutritional security. centres could be built with the double aim of renting and servicing machines both for humanitarian demining and for agriculture, therefore representing a major step toward the integration of demining and development and the transition to local ownership, long wished for. by introducing facilities where agricultural tools are adapted to demining activities, r&d in agriculture can also be supported. machinery could be provided as and when needed, on a custom-hire basis, to the small and medium farmers who cannot afford to purchase their own machinery. similarly, in parallel to agricultural machines, the agro-service centres could also provide machines for technical survey based on agricultural machines. they could develop the modifications required to effectively address the demining problem locally, then hire out these machines and provide assistance. as confirmed by current trends, today, in both developed and developing countries, the availability of human resources for farming is decreasing due to labour shortage, both for lack of interest from young people and for a weak or aging farming workforce: this means that a single worker (sometimes weak) is often in charge of large extensions of land.
these factors influence the development of local agriculture and open a market share for the automation of small machines too (mass lower than three tonnes). differently from heavy tractors, small machines with competitive costs cannot be effectively developed simply by modifying existing manually driven models: their architecture should be rethought for automation [24].

4. disarmadillo architecture
the current version of disarmadillo (figure 6.c) is built around a powertiller (figure 6.b) produced by grillo spa (www.grillospa.it), which kindly donated it to the project, together with spare parts and suggestions. the technical features of the power tiller and of the constructed disarmadillo prototype are summarized in figure 6.a. the kit adds to the original powertiller a frame (figure 7), which has the dual aim of hosting two additional wheels at the front, with respect to the original driven wheels, and of embedding a track tensioning system. the frame is made of standard steel profiles, easy to build and maintain, requiring only cutting and welding operations. the agricultural tyres are replaced with special wheels designed to transmit motion to, and support, the tracks along their width. the frame added to the power tiller is designed to host a winch and a sort of three-point linkage system, allowing different tools to be mounted at the front. the power take-off at the back of the machine can be used to power implements requiring an actuating torque. being reversible, the machine can be used forwards or backwards indifferently. the machine is remotely actuated and is driven by an industrial remote-control unit, allowing the major functions to be controlled remotely (figure 8). the remote-control system does not replace the original manual controls; therefore, once reconverted to agricultural activities, the machine can go back to manual control. the platform rotates thanks to differential skid steering, i.e. by braking one of the two stub axles through which power is transmitted to the wheels through the differential gear. external band brakes, acting on the stub axles, are mounted on the frame. a linear electric motor actuates each brake via a cable acting on a lever. power for the electric motors is drawn from the on-board battery. the power take-off at the back of the powertiller, accessible through the frame, can be used to power tools that need a torque.

figure 5. disarmadillo tools (top image) and a picture of the rake, tested in jordan with dummy mines (bottom image).
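the skid-steering logic just described is simple enough to express in a few lines. the mapping below, from a turn command to the two brake actuators, is a hypothetical sketch for illustration, not disarmadillo's actual control firmware.

```python
# minimal sketch of differential skid steering by braking one stub axle.
# the command mapping and names are illustrative assumptions only.
def brake_commands(turn: float) -> tuple[float, float]:
    """turn in [-1, 1]; negative turns left, positive turns right.
    returns (left_brake, right_brake) actuation levels in [0, 1]."""
    turn = max(-1.0, min(1.0, turn))
    if turn >= 0.0:
        return 0.0, turn      # braking the right axle slews the platform right
    return -turn, 0.0         # braking the left axle slews it left

# example: brake_commands(0.5) -> (0.0, 0.5), a moderate right turn
```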
5. disarmadillo+
the disarmadillo machine has been the subject of continuous research by snail aid and partners, on a volunteer basis, for the last fifteen years; its latest version was presented to the community during the 7th mine action technology workshop in basel in november 2018, raising considerable interest. the disarmadillo+ project will bring it to a higher level of maturity. major improvements are envisaged in terms of:
• manoeuvrability and reliability, by actuating the wheels with independent hydraulic motors and no longer through the differential gear powered by the internal combustion engine. each of the hydraulic motors, one per side of the machine, will be connected to a hydraulic pump actuated by the endothermic motor in a closed-loop circuit (figure 9). this new architecture would allow a narrower turn radius and more efficient turning by actuating the two motors in opposite directions. moreover, it would allow driving the machine backwards without rotating it or changing configuration before operations start.
• modularity, by splitting the frame into two parts, each portable by two persons, for easy conversion from one configuration (demining) to the other (agriculture). moreover, it is foreseen to change the points of attachment of the frame to the powertiller, to reduce the number of ad-hoc flanges necessary to adapt the kit to different types of powertillers, exploiting the power take-off, with a pass-through system, and the axles.
• human-machine interface, by improving the remote-control transmitter interface to expand its possibilities and make driving more intuitive.
• versatility, by investigating the possibility of studying blast-resistant tracks, building on the experience gained with the blast-resistant wheels developed for locostra, a larger machine for humanitarian demining based on a four-wheel tractor, developed by snail aid and other partners [25]. in fact, although explosive tests on a powertiller have been carried out in italy [26] and no damage to the drive train was recorded, the wheels were damaged, making maintenance necessary in case of an explosion. a solution to increase the machine's protection is to mount a front roller when the operating tool is mounted at the back. another option would be to design blast-resistant tracks that retain enough tractive integrity after an explosion occurs underneath them to continue working, or to enable the withdrawal of the machine from the field for maintenance. research [27] carried out in the 1970s successfully exploited three design principles: shock absorption by the roadwheels (embedding circular epoxy-resin rings between the hub and the rim), an almost unbreakable chain of tractive effort, and sacrificial track pads designed to fly away. a possibility would be to combine these ideas with the shock absorption exploited in the locostra wheels, achieved thanks to solid rubber inner wheels embedded in steel frames, and to design solid rubber roadwheels and steel track elements allowing ventilation. disarmadillo+ will also offer the occasion to investigate new tools, such as a ground-driven/aerial platform: a drone-borne sensor platform connected to disarmadillo+. the connection would be by tether or by another means allowing power to be transmitted from the ground rover to the drone, permitting long-lasting flights, and data to be transferred from the drone to the rover. importance will be given to keeping complexity and cost low. as generally acknowledged and well explained by hemapala [28], when the price-to-performance ratio is too high, robots are academic toys. to keep the cost and complexity low, the machine will be automated gradually, according to needs. at the beginning, it will be remotely controlled. cost is also a key factor in the successful adoption of agrobots in developing countries, as stated by [18], which points out the need to design and offer technical solutions at a low (affordable) cost but with a high impact.

figure 6. a) technical data of the disarmadillo machine based on the g131 powertiller produced by grillo; b) the original powertiller; and c) disarmadillo as it was exhibited at the 7th mine action technology workshop in basel in 2019.
figure 7. scheme of disarmadillo: the parts in red (frame, wheels, tracks and band brakes) are added to the original powertiller (black).
the final shape of the disarmadillo+ machine is that of a remotely controlled tracked vehicle able to carry different tools. recently, there has been an increase in the supply of small remotely controlled platforms designed to perform agricultural tasks. it is interesting to analyse the machines of this type available on the market in terms of size and power, as was done for demining machines. these small-size agricultural machines are sold on a much more transparent market than that of demining machines, so their cost can be obtained from producers' websites or by browsing the internet, both for new products and for second-hand ones. figure 10 reports an analysis of 25 machines of this type according to their size and power; for a few representative ones, the cost is also reported. as can be seen, the average size and power of these agricultural machines are much smaller than those of demining machines. indeed, their weight is almost an order of magnitude lower than that of demining machines, and their rated power is approximately six times lower. the smallest of these agricultural machines, the rc-751, produced by the danish company timan, is comparable to disarmadillo+ in terms of weight and power. apart from being designed with a different philosophy, not for operating in hazardous areas, and having a higher cost (it is sold at approximately 20 k€), it offers a good reference, showing that machines of such a small size can successfully be employed in many agricultural tasks. moreover, the adoption of the tools available for the rc-751 could also be investigated for disarmadillo+.

figure 8. detail of the control unit: a) linear motors actuating the brakes and the clutch, b) transmitter and c) 3d model of the band brake leverage.
figure 9. disarmadillo+ scheme.
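the market survey of figure 10 reduces to a simple aggregation, sketched below. apart from the rc-751 price quoted in the text (about 20 k€), all entries are placeholders standing in for the 25 machines actually surveyed:

```python
# illustrative aggregation behind a survey like figure 10 (placeholder data).
catalogue = [
    # (name, mass_kg, power_hp, cost_eur or None when not published)
    ("rc-751",     600.0, 25.0, 20_000.0),   # only the cost is quoted in the text
    ("platform b", 900.0, 40.0, None),
    ("platform c", 1400.0, 60.0, 35_000.0),
]
avg_mass = sum(m for _, m, _, _ in catalogue) / len(catalogue)
avg_power = sum(p for _, _, p, _ in catalogue) / len(catalogue)
costs = [c for *_, c in catalogue if c is not None]
print(f"average mass {avg_mass:.0f} kg, average power {avg_power:.0f} hp")
print(f"average published cost {sum(costs) / len(costs):.0f} eur")
```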
6. conclusions
considering the increasing consensus on the fact that mine action should be regarded as a development activity, there should be a rapid change of the current approach. the paper summarises some topics in this domain and introduces the design of a simple modular machine for assisting mine removal through ground processing and vegetation cutting. the tractor unit is chosen in the agricultural machines domain (power tillers), so as to assure full consistency with local expertise and habits. cost and sophistication minimisation are primary objectives of the project.

acknowledgement
andrea pinza, ceo of grillo spa, who keeps providing support, spare parts and suggestions, and personally delivered the powertiller we have been working on so far, is gratefully acknowledged.

figure 10. power, size and cost of remotely controlled agricultural multi-platforms currently on the market; in the last graph, green bars indicate cost.

references
[1] landmine monitor report 2021, international campaign to ban landmines – cluster munition coalition (icbl-cmc), 2021, isbn 978-2-9701476-0-2.
[2] human rights watch, ukraine: russian cluster munition hits hospital, 25 february 2022. online [accessed 28 august 2022]: https://www.hrw.org/news/2022/02/25/ukraine-russian-cluster-munition-hits-hospital
[3] hunger map 2021 – chronic hunger, world food programme, 24 september 2021. online [accessed 28 august 2022]: https://reliefweb.int/sites/reliefweb.int/files/resources/wfp-0000132038.pdf
[4] a guide to mine action, fifth edition, gichd, geneva, march 2014, isbn 978-2-940369-48-5.
[5] e. e. cepolina, land release in action, the journal of erw and mine action 17(2) (2013), pp. 44-50. online [accessed 28 august 2022]: https://commons.lib.jmu.edu/cisr-journal/vol17/iss2/16
[6] mechanical equipment used for demining operations catalogue, gichd. online [accessed 28 august 2022]: https://www.gichd.org/en/resources/equipment-catalogue/equipments/?tx_solr%5bfilter%5d%5b3%5d=family%3a2
[7] a. w. dorn, eliminating hidden killers: how can technology help humanitarian demining?, stability: international journal of security & development 8(1) (2019), pp. 1-17. doi: 10.5334/sta.743
[8] e. e. cepolina, c. bruschini, k. de bruyn, providing demining technology end-users need, international workshop on robotics and mechanical assistance in humanitarian demining (hudem), tokyo, japan, 2005, pp. 9-14. online [accessed 30 august 2022]: https://infoscience.epfl.ch/record/257002/files/2005_cepolina_providingdemtech_prochudem.pdf?ln=en
[9] f. curatella, p. vinetti, g. rizzo, t. vladimirova, l. de vendictis, t. emter, j. peterit, c. frey, d. usher, i. stanciugelu, j. schaefer, e. den breejen, l. gisslén, d. letalick, toward a multifaceted platform for humanitarian demining, 13th iarp workshop on humanitarian demining and risky intervention, belgrade, serbia, 2015. online [accessed 30 august 2022]: https://www.foi.se/download/18.7fd35d7f166c56ebe0b1008e/1542623794109/toward-a-multifaceted-platform_foi-s--5243--se.pdf
[10] v. ruban, l. capineri, t. bechtel, g. pochanin, p. falorni, f. crawford, t. ogurtsova, l. bossi, automatic detection of subsurface objects with the impulse gpr of the ugo-1st robotic platform, 2020 ieee ukrainian microwave week (ukrmw), 2020, pp. 1108-1111. doi: 10.1109/ukrmw49653.2020.9252816
[11] y. baudoin, tiramisu: fp7-project for an integrated toolbox in humanitarian demining, focus on ugv, uav, technical survey and close-in detection, int. conf. on climbing and walking robots, sydney, australia, 2013.
[12] d. portugal, l. marques, m. armada, deploying field robots for humanitarian demining: challenges, requirements and research trends, mobile service robotics (2014), pp. 649-656. doi: 10.1142/9789814623353_0075
[13] p. santana, j. barata, l. correia, sustainable robots for humanitarian demining, int. journal of advanced robotic systems 4(2) (2007), pp. 207-218. doi: 10.5772/5695
[14] p. gonzalez de santos, j. a. cobano, e. garcia, j. estremera, m. armada, a six-legged robot-based system for humanitarian demining missions, mechatronics 17(8) (2007), pp. 417-430. doi: 10.1016/j.mechatronics.2007.04.014
[15] m. freese, t. matsuzawa, t. aibara, e. f. fukushima, s. hirose, humanitarian demining robot gryphon – an objective evaluation, int. journal on smart sensing and intelligent systems 1(3) (2008), pp. 735-753. doi: 10.21307/ijssis-2017-317
[16] imas 09.50 mechanical demining, unmas, 2013. online [accessed 28 august 2022]: https://www.mineactionstandards.org/fileadmin/mas/documents/standards/imas-09-50-ed1-am4.pdf
[17] smallholders and family farming, family farming knowledge platform, fao, rome. online [accessed 28 august 2022]: http://www.fao.org/family-farming/themes/small-family-farmers/en/
[18] agriculture 4.0 – agricultural robotics and automated equipment for sustainable crop production, fao, integrated crop management 24 (2020). online [accessed 28 august 2022]: http://www.fao.org/3/cb2186en/cb2186en.pdf
[19] e. e. cepolina, power tillers for demining in sri lanka: participatory design of low-cost technology, in: humanitarian demining, intechopen, london, united kingdom. doi: 10.5772/5422
[20] e. e. cepolina, powertillers and snails for humanitarian demining: participatory design and development of a low-cost machine based on agricultural technologies, doctoral thesis, university of genova, 2008.
[21] wikipedia, two-wheel tractor. online [accessed 28 august 2022]: https://en.wikipedia.org/wiki/two-wheel_tractor
[22] j. lokey, it's mine and you can't have it, journal of mine action 4(2) (2000). online [accessed 28 august 2022]: https://commons.lib.jmu.edu/cisr-journal/vol4/iss2/40
[23] h. garbino, the impact of landmines and explosive remnants of war on food security: the lebanese case, the journal of conventional weapons destruction 23(2) (2019). online [accessed 28 august 2022]: https://commons.lib.jmu.edu/cisr-journal/vol23/iss2/6
[24] e. e. cepolina, m. przybyłko, g. b. polentes, m. zoppi, design issues and in-field tests of the new sustainable tractor locostra, robotics 3 (2014), pp. 83-105. doi: 10.3390/robotics3010083
[25] g. a. naselli, g. polentes, e. e. cepolina, m. zoppi, a simple procedure for designing blast resistant wheels, procedia engineering 64 (2013), pp. 1543-1551. doi: 10.1016/j.proeng.2013.09.236
[26] e. e. cepolina, m. u. hemapala, power tillers for demining: blast test, international journal of advanced robotic systems 4(2) (2007), pp. 253-257. online [accessed 28 august 2022]: https://journals.sagepub.com/doi/pdf/10.5772/5690
[27] r. zumbro, mine resistant tracks, armor (1997), pp. 16-20. online [accessed 28 august 2022]: https://books.google.it/books?id=z60raaaayaaj
[28] m. u. hemapala, robots for humanitarian demining, in: robots operating in hazardous environments, intechopen, london, united kingdom, 2017. doi: 10.5772/intechopen.70246

how to stretch system reliability exploiting mission constraints: a practical roadmap for industries
acta imeko issn: 2221-870x december 2022, volume 11, number 4, 1-6
marco mugnaini1, ada fort1
1 university of siena, diism department, via roma 56, siena, italy
section: research paper
keywords: reliability design; mission; reliability assessment; reliability enhancement
citation: marco mugnaini, ada fort, how to stretch system reliability exploiting mission constraints: a practical roadmap for industries, acta
imeko, vol. 11, no. 4, article 17, december 2022, identifier: imeko-acta-11 (2022)-04-17
section editor: francesco lamonaca, university of calabria, italy
received august 14, 2022; in final form november 19, 2022; published december 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: marco mugnaini, e-mail: marco.mugnaini@unisi.it

abstract
reliability analyses can be commissioned from companies by customers willing to verify whether their products comply with the major international standards, or simply to verify a design prior to market deployment. nevertheless, these analyses may be required at the very preliminary stages of design, or when the design is already in progress, due to low organizational capabilities or simply to delays in the project implementation process. the results may sometimes be far from the market or customer target, with a subsequent need to redesign the whole asset. of course, not all cases fall into the worst scenario, and with some additional considerations on mission definition it may be possible to comply with the proposed reliability targets. in this paper the authors provide an overview of the approach that can be followed to achieve the reliability target even when the project is already ongoing, providing a practical case study.

1. introduction
in the literature it is possible to find many examples of advanced research on reliability assessment and analysis. most of these papers discuss advanced methods for reliability allocation and improvement. nevertheless, such methods often lack practical implementation details and may turn out to be tricky to apply, or require a set of information not always available in practical cases. for example, in [1] the authors provide an approach based on bayesian analysis for parameter estimation, presenting an interesting approach for process parameter evaluation which embeds a priori information to compensate for the lack of data. in [2] the authors present a paper addressing reliability prediction modelling based on kalman filtering applied to ion batteries, while in [3] ai approaches are used to solve reliability problems in oil & gas contexts. other papers, such as [4]-[5], describe general methods for reliability assessment based on the most commonly used databases, such as mil-hdbk-217f, oreda or others, as a function of temperature and environment. in some other papers, instead, there are fine analyses of predefined structures, as in [6], where the advantages of different hardware solutions are compared with the aim of showing how small changes in the practical implementation may lead to different results. usually, such achievements are obtained by means of different architectures, without considering the implications of mission changes [6]-[12]. on a practical basis, moreover, the systematic lack of confidence bounds in presented results, and the impossibility for companies to provide additional analytic descriptions beyond synthetic figures like the over-used mean time to first failure or mean time between failures (mttf, mtbf), make the transmission of information to direct customers or other parties very difficult [13]-[20]. correct reliability design can be successfully approached by means of theoretical analysis if the design is followed from the very beginning of product development [20]-[21]. unfortunately, especially in small companies where resources are very limited, designers usually underestimate the reliability allocation problem, postponing such analysis to a subsequent phase. in general, it is not easy to find in the literature a practical guide for companies able to embed both the theoretical and the actual application implications in a suitable way [22]. some applications, on the contrary, address the trust of measurements without taking into consideration hardware and software reliability, even in industrial contexts [23]-[25]. in this paper the authors show, in a practical way, how the implications of mission definition can be exploited for
reliability evaluations. section 1 is an introduction describing the critical aspects of reliability assessment and design compared with the present literature. section 2 describes the formal approach used to characterize borderline conditions in which predictions may be far from the desired results. section 3 presents a case study on a general electronic board design, showing how reliability targets that seem far from the original design may be met just by introducing considerations on the mission profile. finally, in section 4 the conclusions are discussed.

2. models and methods
reliability design always follows well-established rules from an academic standpoint. starting from a problem definition and a mission description, a design flow diagram and a subsequent reliability block diagram can be built. nevertheless, the most complicated part of the reliability evaluation and function description is the choice of the proper failure rate or probability density function for the components describing the item to be designed. an easy and reasonable approach for companies is to rely on their a priori knowledge and build bayesian models. as an alternative, companies may exploit internal or external databases, with the risk of selecting components with similar failure models but used in very different contexts, resulting in too conservative evaluations or completely wrong forecasts. another issue is the overall absence of confidence bounds in company forecasts, which makes the resulting figures of limited use. mission profile definition is one of the most critical items among those previously cited, making reliability forecasts subject to interpretation. as can be easily understood, the same item used in a different environment, or with a different time apportionment or duty cycle, may yield a very different reliability prediction. on the other hand, mission definition is very often neglected as a powerful tool to stretch system, subsystem or item reliability by tailoring the application to a more realistic scenario. figure 1 shows the flow diagram of an approach commonly used in industrial design concerning reliability aspects. this simplified version of the design is often applied in a generalized way, which may lead to underestimates of specific important details. two important aspects should be underlined: field data are not always available on similar project mission profiles, and databases may be used improperly, producing too conservative results or too optimistic forecasts.
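the bayesian route mentioned above can be made concrete with the standard gamma-poisson model: a gamma prior on a constant failure rate, updated with the failures observed over cumulative operating hours, stays gamma. the prior and the field data below are illustrative numbers, not values from the paper:

```python
# minimal sketch of a bayesian failure-rate update (gamma-poisson conjugacy).
alpha0, beta0 = 2.0, 4.0e5      # prior: mean rate alpha0/beta0 = 5e-6 1/h (assumed)
failures, hours = 3, 1.2e6      # illustrative field data: 3 failures in 1.2e6 device-hours

alpha1, beta1 = alpha0 + failures, beta0 + hours   # posterior parameters
lam_post = alpha1 / beta1                          # posterior mean failure rate, 1/h
print(f"posterior mean lambda = {lam_post:.2e} 1/h, mtbf ~ {1.0 / lam_post:,.0f} h")
```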
mission definition is another aspect which is often not investigated properly, with the limitation of providing only a general mission description and avoiding subdividing it into a set of several sub-missions according to the whole lifecycle of the item to be designed. figure 2 represents a more detailed view of this approach, which should be taken into consideration by companies in the general development phase whenever reliability (and, more generally, reliability, availability, safety and maintainability, rams) aspects are involved.

figure 1. general flow diagram used in companies when designing a project to fulfil reliability requirements.
figure 2. project reliability definition embedding operative conditions and mission definition.

3. case study
let us consider a system designed to drive some signalling infrastructure in the railway context. such an example nicely fits the scope of this paper, since the safety and reliability requirements are so tight that neglecting the mission profile could lead to several further system redesigns. figure 3 shows the general architecture, composed of a power start-up system, a vital power source, a set of configuration memories, a couple of microprocessors implementing the 2oo2 architecture, an optional third cpu for managing external communication, a set of auxiliary electronics, a drive output system and a set of actuators. sample equations for specific components are available in reliability standards for discrete devices and semiconductors, and they have the general form of equation (1):

$\lambda_c = \lambda_b \cdot \prod_{i=1}^{n} \Pi_i$ , (1)

where $\lambda_c$ is the overall failure rate, $\lambda_b$ is the basic failure rate without any correction factors but those due to temperature, stress and the inner model, and the $\Pi_i$ are corrective factors depending on the specific model characteristics, environment and quality. the generalized mission for this kind of 2oo2 architecture in a signalling system can be summarized by the following sentence: “being able to function safely for at least 40000 hours”. the general approach of companies is to try to design, blindly, an architecture complying with the safety integrity level (sil) 4 standard (a must in such a context) and with the mtbf requirement described above. additional requirements may include the use of a specific database for component failure rates, which can be the mil-hdbk-217f with the part-stress approach. such new information lowers the first mtbf estimate (we neglect the confidence bounds here, just to show how such a figure can change when the scenario changes slightly). a further consideration could be added by embedding in the analysis the environment and the enclosure temperature, which for such applications in central europe can be standardized as 40 °c and ground fixed (according to the selected database). if the designers then want to improve on the preliminary mtbf design results, several options come into play. components with improved quality could be considered, even if such information is in fact not reported in any component datasheet; at the very least, the military handbook classification cannot be easily found, and therefore such an approach requires an effort that designers, in practice, do not undertake.
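equation (1) translates directly into code. the correction factors below (temperature, quality, environment) are placeholders for illustration, not values taken from the mil-hdbk-217f tables:

```python
# sketch of the part-stress model of equation (1): lambda_c = lambda_b * prod(pi_i).
from math import prod

def part_failure_rate(lambda_b: float, pi_factors: dict[str, float]) -> float:
    """lambda_b in failures per 1e6 h; pi_factors are the model's corrective terms."""
    return lambda_b * prod(pi_factors.values())

# illustrative factors for a ground-fixed, commercial-quality part
lam_c = part_failure_rate(0.010, {"pi_T": 2.3, "pi_Q": 1.0, "pi_E": 2.0})
print(f"lambda_c = {lam_c:.4f} f/1e6 h")
```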
as an alternative, it is possible to evaluate, for standard discrete components (especially capacitors and resistors), the derating factors, which most of the time reduce to just a voltage ratio or a power ratio. again, this approach may end up being a generalized one, due to lack of resources and time, considering the number of such components present in a large electronic project like a signalling system for railway applications. a more effective option, often neglected by most designers, is the possibility of allocating to each subsystem composing the system a different mission profile or a different duty cycle. these two concepts are intrinsically embedded in the preliminary design phase as well as in the detailed one; nevertheless, the possibility of achieving a better design by exploiting such features is largely underestimated during the assessment phase. considering figure 4, it is possible to see that the three configuration memories and the start-up power unit certainly have a different mission with respect to the overall design and, additionally, may well have a different duty cycle. once such considerations have been brought into the design, if the target mtbf is still not achieved, further improvement may pass through component reduction or alternative redundant configurations. where feasible, the first approach is preferred, because these applications usually imply safety considerations as well. figure 5 shows an additional improvement due to a resizing of the cpu capabilities: the additional cpu c has been embedded in the others, removing in this way the additional configuration memory and the corresponding circuitry. it is therefore possible to take such units out of the original schematics and to modify the reliability block diagram (rbd) accordingly, as shown in figure 6.

figure 3. sample rough electro-mechanical design of a system for railway applications, with reliability and safety requirements, to be interfaced with the signalling system.
figure 4. revised functional block diagram of the interfacing system for railway applications, where the mission contribution is included.
figure 5. reduced system design, minimizing the impact of low-duty-cycle components on the original design, improving the system reliability without affecting the system architecture.
figure 6. comparison of the rough rbd of the original signalling interfacing system a) and the reduced one b) embedding considerations on the mission definition.

simulation results can be obtained with commercial software; in this case, the relyence part calculator has been used to compare the outcomes of the different configurations. table 1 shows the results for the system without any mission impact on the subsystems, using the mil-hdbk-217f database. in table 2, mission considerations as well as power and voltage derating have been included in the evaluation: the configuration memories, cpu c and some ancillary electronics have been used with a 1 % duty cycle, reflecting their actual use on a 24 h real timescale.
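the duty-cycle apportionment applied in table 2 is not written out as a formula in the text; a common way to express it, assumed here, is

$\lambda_{\mathrm{eff}} = d\,\lambda_{\mathrm{on}} + (1 - d)\,\lambda_{\mathrm{off}} \approx d\,\lambda_{\mathrm{on}}$ for $d \ll 1$ and a negligible dormant rate,

where $d$ is the fraction of mission time during which the subsystem is active; with d = 1 %, the contribution of a rarely active block to the board failure rate drops by roughly two orders of magnitude.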
in this database the component models coming from mil-hdbk-217f have been updated, taking into account the advancement of the technology; for example, microcontrollers, which were not present in the previous source, can now be considered a subset of microprocessors. the drawback is that this database is not an independent one, having been derived with the contribution of several companies. the results of this significant improvement are shown in table 3. finally, table 4 compares the three different kinds of results which can be obtained just by applying these different deviations from a standard approach. a final overview of the different implementations depending on the environmental variation is given in figure 7 and figure 8. three different environments, ground benign (gb), ground fixed (gf) and ground mobile (gm), are compared in these figures, showing the different contributions in terms of failures per million hours (fpmh) depending on the selected environment and on the database used: in figure 7 the mil-hdbk-217f database has been exploited, while in figure 8 the ansi vita one has been used. it is important to highlight that the differences in results persist across the changes of environment, which makes this latter database more attractive for non-fully conservative and more modern approaches.

table 1. results of the simulation without any mission impact on the subsystems, using the mil-hdbk-217f database.

name                     failure rate in f/(10^6 h)   mtbf in h    failure rate in %
main board               18.708                       53453.07     100.00
configuration memories   6.03                         165837.5     32.23
cpu2oo2                  2.204                        453720.5     11.78
clocks                   0.574                        1742160      3.07
eth1                     0.203                        4926108      1.09
eth2                     0.203                        4926108      1.09
start power up           0.506                        1976285      2.70
gen vit 2oo2             0.715                        1398601      3.82
general electronics      1.147                        871839.6     6.13
cpu c                    2.237                        447027.3     11.96
vital power              1.884                        530785.6     10.07
pwr start up             0.841                        1189061      4.50
actuator                 2.164                        462107.2     11.57

table 2. results of the simulation embedding the mission profile and duty cycle of the subsystems, using the mil-hdbk-217f database.

name                     failure rate in f/(10^6 h)   mtbf in h    failure rate in %
main board               8.052                        124192.75    100.00
configuration memories   0                            0
cpu2oo2                  0.575                        1739130.4    7.15
clocks                   2.204                        453720.51    27.37
eth1                     0.203                        4926108.4    2.52
eth2                     0.203                        4926108.4    2.52
start power up           0                            0
gen vit 2oo2             0.715                        1398601.4    8.88
general electronics      1.147                        871839.58    14.24
cpu c                    0                            0
vital power              0                            0            0
pwr start up             0.841                        1189060.6    10.44
actuator                 2.164                        462107.21    26.88

table 3. results of the simulation embedding the mission profile and duty cycle of the subsystems, using the ansi vita database.

name                     failure rate in f/(10^6 h)   mtbf in h    failure rate in %
main board               4.816                        207641.196   100.00
configuration memories   0                            0            0
cpu2oo2                  2.263                        441891.295   46.99
clocks                   2.127                        470145.745   44.17
eth1                     0.024                        41666666.7   0.50
eth2                     0.024                        41666666.7   0.50
start power up           0                            0
gen vit 2oo2             0.67                         1492537.31   13.91
general electronics      0.995                        1005025.13   20.66
cpu c                    0                            0            0
vital power              0                            0            0
pwr start up             0.099                        10101010.1   2.06
actuator                 0.614                        1628664.5    12.75

table 4. comparison of the mtbf and failure rates of the three improvements proposed in the analysis. confidence bounds are 95 %.

name                         failure rate in f/(10^6 h)   mtbf in h
main board ansi vita         4.815748                     207652.05
main board mhdbk 217         8.052190                     124189.81
main board mhdbk 217 nm      18.708161                    53452.61

figure 7. system behaviour under three different environments, ground benign (gb), ground fixed (gf) and ground mobile (gm), according to mil-hdbk-217f.
figure 8. system behaviour under three different environments, ground benign (gb), ground fixed (gf) and ground mobile (gm), according to ansi vita.
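as a quick cross-check of the tables above, the mtbf column is simply the reciprocal of the failure rate expressed in the same time base: mtbf = 10^6 / λ for λ in f/(10^6 h). the snippet below reproduces the table 4 figures.

```python
# MTBF in hours from a failure rate expressed in failures per 10^6 h,
# checked against the three "main board" rows of table 4.
for name, fpmh in [("ansi vita", 4.815748),
                   ("mil-hdbk-217", 8.052190),
                   ("mil-hdbk-217 no mission", 18.708161)]:
    print(f"{name:24s} {fpmh:9.6f} FPMH -> MTBF = {1e6 / fpmh:,.2f} h")
```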
the process which should be followed in assessing the reliability requirements after the preliminary design is the one described in figure 9. the first step consists in negotiating with the final customer the database to be used, since the results are strongly affected by this choice. then, after the bill of materials (bom) has been defined and the mission correctly described, it is crucial to identify which subsystems are subject to duty-cycle modification. the component quality level, even if important, is not crucial, especially with reference to mil-hdbk-217f, since such information is difficult to gather from any commercial supplier. temperature information, instead, is vital, as is the operative environment, since a great part of the final analysis outcome depends on them. once these steps have been accomplished, the assessment can be finalized.

4. conclusions

in this paper the authors analyzed a recurring problem of the design phases. usually, the producers of electro-mechanical assemblies are more focused on general performance than on reliability verification. this approach inevitably implies a huge effort in the final redesign and long negotiations with the final customers, due to the gaps in the design procedure. the authors tried to highlight that sometimes no redesign is needed, and that a more precise and detailed description of the system mission may allow a redefinition of the reliability evaluation criteria. as a general concept, these aspects should fall within the optimized best practices of item design but, as a matter of fact, it may happen that, due to high volumes or project complexity, such aspects are neglected. it has been shown on an actual example how, by following some basic steps as suggested in the manuscript, it is possible to minimize the impact of redesign and achieve satisfying results. this paper tries to fill a gap which is seldom addressed in the literature, because it deals with some peculiar aspects of a specific design, but its general rules can be applied to almost any engineering project. the analysis moreover shows the possibility to exploit different databases, which are usually not selected in common projects due to the lack of information on their applicability, even when the operating environment changes.

acknowledgement

this study has been conducted exploiting a research licence of part analysis provided by relyence software.
figure 9. general designers' roadmap to simplify the achievement of the reliability requirements.
performance enhancement of a low-voltage microgrid by measuring the optimal size and location of distributed generation

acta imeko issn: 2221-870x september 2022, volume 11, number 3, 1 - 8

ahmed jassim ahmed1, mohammed h.
alkhafaji1, ali jafer mahdi2
1 electrical engineering department, university of technology, baghdad, iraq
2 electrical engineering department, university of kerbala, kerbala, iraq

section: research paper
keywords: microgrid; distributed generation integration; autoadd; power losses reduction; voltage profile improvement
citation: ahmed jassim ahmed, mohammed h. alkhafaji, ali jafer mahdi, performance enhancement of a low-voltage microgrid by measuring the optimal size and location of distributed generation, acta imeko, vol. 11, no. 3, article 21, september 2022, identifier: imeko-acta-11 (2022)-03-21
section editor: francesco lamonaca, university of calabria, italy
received march 25, 2022; in final form august 30, 2022; published september 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: ahmed jassim ahmed, e-mail: ahmedjasem858@gmail.com

abstract: a power system in which the generation units, such as renewable energy sources and other types of generation equipment, are located near the loads, thereby reducing operation costs and losses and improving the voltage, is called distributed generation (dg), and these generation units are called distributed energy resources. however, dgs must be located optimally to improve the power quality and minimize the power losses of the system. the objective of this paper is to propose an approach for measuring the optimal size and location of dgs in a low-voltage microgrid using the autoadd algorithm. the algorithm is validated by testing it on the standard ieee 33-bus system and comparing it with previous studies; the algorithm proved its efficiency and superiority over the other techniques. a significant improvement in the voltage and a reduction in the losses were observed when the dgs were placed at the sites decided by the algorithm; therefore, autoadd can be used to find the optimal sizes and locations of dgs in the distribution system. the possibility of isolating the low-voltage microgrid by integrating distributed generation units is also discussed, and the results showed the feasibility of this scenario during fault times and periods of energy intermittency.

1. introduction

the increase in energy demand is an indicator of economic growth, and this demand has been growing rapidly in many sectors, such as the building, transportation and manufacturing industries. however, the consumption of energy is directly linked to many environmental issues, due to the frequent use of fuel or coal as the primary sources for electricity generation, as shown in figure 1, which is the main reason for the emission of greenhouse gases (ghg). those gases are very harmful to the environment [1]. because of that, many global actors, such as the world bank, started encouraging countries to use clean energy sources by supporting their projects financially [2]. therefore, even during the pandemic, when the economy was affected by the lockdown, renewable energy sources kept growing fast [3]. integrating renewable energy sources (res) in low-voltage networks is creating significant changes in the operation of the electric power system. in general, this integration occurs widely in low- and medium-voltage networks. this leads to the microgrid (mg) concept, which can be defined as a complex energy system that needs a specific framework, coordination of information flows and energy resources, as well as protection and the assurance of reliable energy [4]. it is built by the integration of res, conventional generators, energy storage devices and loads, as shown in figure 2. mgs can work both in the mode connected to the main grid and in islanded mode [5]. distributed generation (dg) is nowadays becoming a main part of the operation of distribution networks, due to the technological improvement of many types of res, such as photovoltaic systems, fuel cells, combined heat and power sources, and wind energy sources. this integration of dgs has a major importance in reducing the emissions of co2, improving the efficiency and security of distribution networks and achieving a reliable operation of these networks [6]. the uncontrolled allocation of
dgs in distribution networks brought in some serious challenges and problems, like the bidirectional flow of power in the distribution networks and the problems of power losses and voltage drop [7], [8]. researchers from the entire world are now focusing on these problems, and many methods have been proposed for selecting the optimal location and size of dgs, with the aim of minimizing or even eliminating the losses and improving the voltage of distribution networks with dg. in [7], the authors proposed a new particle swarm optimization (pso) method to improve the power quality of the network by finding the number of dgs to be connected and the optimal location of these dgs in the system. this method was validated by testing it on the standard ieee 30-bus system; the results showed a remarkable improvement in the voltage profiles of the buses and a reduction of the power losses of the system. in [9], the integration of ress was studied, as it provides significant cost benefits to smart grid technology. in [10], three types of pso algorithms were used to control the output of the dg and find its optimum size. to overcome the issues of variation in ress, an energy optimization model was proposed in [11]; it uses a probability density function as a mathematical tool to model wind and solar sources. the authors in [12] proposed a methodology for finding the optimal size and placement of many dgs: the loss factor is used to determine the optimal location, and the bacterial foraging algorithm to find the optimal size. the objectives were to reduce the operational costs and the losses and to improve the voltage. their work was validated by testing it on the ieee 119-bus and 33-bus distribution systems. one of the major issues of ress is intermittency; res integration is studied in [13] by conducting a survey on models from all over the world, finding that communication systems, especially two-way communications, have an important role in the smart grid's energy optimization. in [14], combined nature-inspired algorithms were used to optimally find the best place and size of dgs: a two-step optimization technique is presented for dg integration. during the first step, particle swarm optimization is used to find the best size of the dg, and the obtained results are checked using the negative-load approach for reverse power flow; after that, the optimum location is found by the weak-bus method and the loss sensitivity factor.
during the second step, the optimal size of the dgs is found using three nature-inspired algorithms, i.e., the gravitational search algorithm, the pso algorithm and a combination of the two; by testing them on the ieee 30-bus system, the effectiveness of the technique has been proved. in this paper, the autoadd algorithm is used to find the optimal place and size of a dg in distribution networks. the proposed algorithm is simple, very flexible, easy to use, and supports all types of dg. it differs from the other algorithms in the processing time: whereas the other algorithms can take a lot of time, in some cases hours, this algorithm performs instantly and gives the best place for the dgs to achieve the best performance. it is run through the opendss program, and the power flow analysis is executed by opendss through the matlab com interface. the algorithm and opendss are validated by testing them on the standard ieee 33-bus system. the results are compared with previous works, which proves that opendss is reliable and that the autoadd algorithm gives better results in terms of losses and voltages compared to the earlier studies. after validating the tools, the low-voltage microgrid of baghdad/al-ghazaliya-655 is analyzed, and the dgs are placed optimally to enhance its performance and to assess the capability of the microgrid to perform in the isolated mode, with the objectives of reducing the costs and the losses and minimizing the impact on the climate, contributing to the sustainable development goals (sdgs) 7 and 13.

2. impact of integrating distributed generation on losses and voltage

2.1. impact on losses

integrating dgs has proved able to minimize the losses (real and reactive), due to their placement near the load. many early studies showed that the size and location of a dg play a significant role in eliminating power losses [15]; the location and size of a dg in a distribution network that give the minimum losses are generally identified as the optimal location and optimal size. the placement procedure of dgs is similar to the placement procedure of capacitors aimed at reducing losses; they differ in that the dg units affect both real and reactive power, whereas capacitors affect just reactive power. it has been proven that installing a small dg unit may reduce the losses in the case of a network with increasing losses [16].

2.2. impact on voltage

as known, dg supports and improves the system's voltage [17], but that is not always accurate, as it has been shown that integrating dgs could cause undervoltage or overvoltage. additionally, some dgs change their produced power all the time, like wind generators and photovoltaics; the result of this badly affects the quality of power because of the voltage fluctuations [8], [18]. in addition, undervoltage and overvoltage have been reported in distribution networks integrated with dgs because of the incompatibility of the integrated dgs with the current regulation methods. generally, for regulation, the distribution systems use tap-changing transformers, capacitors and regulators; those methods were proven reliable in the past for the unidirectional flow of power.

figure 1. energy production sources.
figure 2. microgrid architecture.
nevertheless, today, integrating dgs with distribution networks has a significant impact on the performance of the voltage regulation methods, because of the bidirectional power flows caused by the new dgs on the distribution systems. meanwhile, dgs have a positive influence on distribution networks, due to their contribution to frequency regulation and to the compensation of reactive power for voltage control; moreover, in case of faults in the main network, they can work as a spinning reserve [19].

3. autoadd algorithm

in this paper, the autoadd algorithm is used; it is an internal feature of opendss that works automatically to find the optimal location of capacitors and generators. the optimization problem of the distribution system analysis can be written in equation form as in equation (1):

$\min f(x, y) = P_l$, subject to $g(x, u) = 0$ and $0.95 \le V_i \le 1.05$ , (1)

where $g(x, u) = 0$ represents the distribution power flow equations, $P_l$ represents the power losses, and $V_i$ is the voltage at the i-th bus [20]. equation (1) calculates the amount of active and reactive power for every node in order to reduce the losses of the system while keeping the voltages within certain limits. in addition, opendss uses an iterative algorithm that calculates the unknown voltages and currents; autoadd then directly accesses the array of injection currents in the solution and takes advantage of it [20]. when moving the generators over all the buses, opendss searches for the available bus that results in the best improvement of capacity and losses according to equation (2) [21]:

minimize (loss weight ∙ losses + ue weight ∙ ue) , (2)

where loss weight is the weighting factor of the losses in the autoadd functions, and ue weight is the weighting factor of the unserved energy (ue); ue represents the load energy that is considered unserved because the power exceeds the maximum values. the speed of convergence of the solutions in the autoadd algorithm is high, because the admittance matrix of the system is fixed and not changed. generally, finding the location of any generator takes about 2-4 iterations per solution. the improvement factor indicates the next best location to supply power [22]. figure 3 shows the autoadd algorithm.

4. standard ieee 33-bus system

figure 4 depicts the standard ieee 33-bus system. it has thirty-two branches and thirty-three buses. the voltage level of all the buses is 12.66 kv, and the voltage limits for all buses are set at ± 5 % for maximum and minimum. a synchronous generator feeds the network; the load, 3.715 mw and 2.3 mvar, is distributed over thirty-two buses with different power factors. the line data and load data of the system are given in table 1 [23].

5. proposed microgrid of baghdad/al-ghazaliya-655

the proposed microgrid model in figure 5 represents a distribution system in iraq, baghdad/al-ghazaliya-655. it has fifty-eight buses and fifty-seven branches. the voltage level is 0.4 kv for all the buses.

figure 3. flow chart of the autoadd algorithm.
figure 4. single line diagram of the 33-bus ieee system.
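referring back to equation (2), the bus-by-bus search it implies can be paraphrased as below. this is only a sketch of the idea, not opendss code: run_power_flow() is a placeholder standing in for the solver that, in the paper, is opendss driven via the matlab com interface.

```python
# Paraphrase of the AutoAdd search of equation (2): try a trial generator at
# every candidate bus, solve the power flow, and keep the bus minimizing
# loss_weight * losses + ue_weight * ue. run_power_flow() is a placeholder,
# not OpenDSS's actual API.
def run_power_flow(network, dg_bus=None, dg_kw=0.0):
    """Placeholder solver: returns (losses_kw, unserved_energy_kwh)."""
    raise NotImplementedError

def auto_add_search(network, candidate_buses, dg_kw,
                    loss_weight=1.0, ue_weight=1.0):
    best_bus, best_score = None, float("inf")
    for bus in candidate_buses:
        losses, ue = run_power_flow(network, dg_bus=bus, dg_kw=dg_kw)
        score = loss_weight * losses + ue_weight * ue
        if score < best_score:
            best_bus, best_score = bus, score
    return best_bus, best_score
```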
table 1. electrical parameters of the 33-bus ieee system (rij resistance, xij reactance, pj real power, qj reactive power).

bus 1   bus 2   rij in ω/km   xij in ω/km   pj in kw   qj in kvar   length in km
1       2       0.0922        0.0477        100        60           1
2       3       0.4930        0.2511        90         40           1
3       4       0.3660        0.1864        120        80           1
4       5       0.3811        0.1941        60         30           1
5       6       0.8190        0.7070        60         20           1
6       7       0.1872        0.6188        200        100          1
7       8       1.7114        1.2351        200        100          1
8       9       1.0300        0.7400        60         20           1
9       10      1.0400        0.7400        60         20           1
10      11      0.1966        0.0650        45         30           1
11      12      0.3744        0.1238        60         35           1
12      13      1.4680        1.1550        60         35           1
13      14      0.5416        0.7129        120        80           1
14      15      0.5910        0.5260        60         10           1
15      16      0.7463        0.5450        60         20           1
16      17      1.2890        1.7210        60         20           1
17      18      0.7320        0.5740        90         40           1
2       19      0.1640        0.1565        90         40           1
19      20      1.5042        1.3554        90         40           1
20      21      0.4095        0.4784        90         40           1
21      22      0.7089        0.9373        90         40           1
3       23      0.4512        0.3083        90         50           1
23      24      0.8980        0.7091        420        200          1
24      25      0.8960        0.7011        420        200          1
6       26      0.2030        0.1034        60         25           1
26      27      0.2842        0.1447        60         25           1
27      28      1.0590        0.9337        60         20           1
28      29      0.8042        0.7006        120        70           1
29      30      0.5075        0.2585        200        600          1
30      31      0.9744        0.9630        150        70           1
31      32      0.3105        0.3619        210        100          1
32      33      0.3410        0.5302        60         40           1

figure 5. single line diagram of the baghdad/al-ghazaliya-655 microgrid.

table 2. electrical parameters of the baghdad/al-ghazaliya-655 microgrid.

bus 1   bus 2   resistance in ω/km   reactance in ω/km   length in km   active power in kw   reactive power in kvar
1       2       0.3416               0.3651              0.001          -                    -
2       3       0.3416               0.3651              0.02           15                   9.3
3       4       0.3416               0.3651              0.01           20                   12.4
4       5       0.3416               0.3651              0.04           30                   18.6
5       6       0.3416               0.3651              0.01           20                   12.4
2       7       0.3416               0.3651              0.02           20                   12.4
7       8       0.3416               0.3651              0.01           15                   9.3
8       9       0.3416               0.3651              0.04           15                   9.3
3       11      0.3416               0.3651              0.015          10                   6.2
11      12      0.3416               0.3651              0.005          10                   6.2
12      13      0.3416               0.3651              0.005          10                   6.2
13      14      0.3416               0.3651              0.005          15                   9.3
14      15      0.3416               0.3651              0.015          15                   9.3
15      16      0.3416               0.3651              0.0075         15                   9.3
16      17      0.3416               0.3651              0.0075         15                   9.3
17      18      0.3416               0.3651              0.015          25                   15.5
4       19      0.3416               0.3651              0.015          15                   9.3
19      20      0.3416               0.3651              0.005          15                   9.3
20      21      0.3416               0.3651              0.005          15                   9.3
21      22      0.3416               0.3651              0.005          10                   6.2
22      23      0.3416               0.3651              0.005          10                   6.2
23      24      0.3416               0.3651              0.005          10                   6.2
24      25      0.3416               0.3651              0.005          15                   9.3
25      26      0.3416               0.3651              0.015          20                   12.4
26      27      0.3416               0.3651              0.015          20                   12.4
5       28      0.3416               0.3651              0.015          30                   18.6
28      29      0.3416               0.3651              0.015          15                   9.3
29      30      0.3416               0.3651              0.015          15                   9.3
30      31      0.3416               0.3651              0.015          15                   9.3
31      32      0.3416               0.3651              0.015          25                   15.5
6       33      0.3416               0.3651              0.015          15                   9.3
33      34      0.3416               0.3651              0.015          15                   9.3
34      35      0.3416               0.3651              0.005          15                   9.3
35      36      0.3416               0.3651              0.005          15                   9.3
36      37      0.3416               0.3651              0.005          15                   9.3
37      38      0.3416               0.3651              0.015          15                   9.3
38      39      0.3416               0.3651              0.015          25                   15.5
7       40      0.3416               0.3651              0.015          15                   9.3
40      41      0.3416               0.3651              0.015          15                   9.3
41      42      0.3416               0.3651              0.015          20                   12.4
42      43      0.3416               0.3651              0.015          15                   9.3
43      44      0.3416               0.3651              0.015          20                   12.4
8       45      0.3416               0.3651              0.015          15                   9.3
45      46      0.3416               0.3651              0.015          15                   9.3
46      47      0.3416               0.3651              0.015          15                   9.3
47      48      0.3416               0.3651              0.015          10                   6.2
48      49      0.3416               0.3651              0.005          10                   6.2
49      50      0.3416               0.3651              0.005          10                   6.2
50      51      0.3416               0.3651              0.005          15                   9.3
9       10      0.3416               0.3651              0.03           100                  62
9       52      0.3416               0.3651              0.005          10                   6.2
52      53      0.3416               0.3651              0.005          10                   6.2
53      54      0.3416               0.3651              0.005          15                   9.3
54      55      0.3416               0.3651              0.015          15                   9.3
55      56      0.3416               0.3651              0.015          20                   12.4
56      57      0.3416               0.3651              0.015          20                   12.4
57      58      0.3416               0.3651              0.015          40                   24.8

the limits of the voltages are within ± 5 % for maximum and minimum. the load, 1 mw and 0.62 mvar, is distributed over fifty-five buses, as presented in table 2.

6. analysis and results

6.1. analysis of the test system

the 33-bus ieee system was modeled in the opendss program. voltages and losses were calculated using the newton method. a summary is given in figure 6 and table 3.
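as a side note, branch tables like table 1 map directly onto the input of a load-flow engine. the snippet below is only a generic sketch of holding the first rows in memory (the field layout is ours, not an opendss input format); summing the load columns over all 32 rows reproduces the 3.715 mw and 2.3 mvar totals quoted in section 4.

```python
# First rows of table 1 as plain records, ready to feed a load-flow engine.
# Field layout is our own convention, not an OpenDSS schema.
branches = [
    # (from_bus, to_bus, r_ohm_per_km, x_ohm_per_km, p_kw, q_kvar, length_km)
    (1, 2, 0.0922, 0.0477, 100, 60, 1),
    (2, 3, 0.4930, 0.2511, 90, 40, 1),
    (3, 4, 0.3660, 0.1864, 120, 80, 1),
    # ... remaining 29 rows of table 1; the full p_kw column sums to 3715 kW
    # and the q_kvar column to 2300 kvar.
]
print(sum(row[4] for row in branches), "kW,",
      sum(row[5] for row in branches), "kvar (first three rows only)")
```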
the comparison shows that the results are the same as those obtained by the methods in [24]-[26]. table 4 shows a list of solutions by the researchers in [27]-[30] that find the optimal size and location of three distributed generators improving the losses of the 33-bus ieee system. in this table, when comparing the algorithms on the basis of the minimum voltage, it can be seen that all the algorithms provide voltages within ± 5 %; as for the losses, the proposed algorithm achieved the minimum losses compared to the others. in terms of construction and coding, all the other algorithms require previous programming knowledge and the writing of complex code, whereas the autoadd algorithm is built into the opendss program and does not need any coding. as for the time of operation, the other algorithms need a high number of iterations to converge, especially the pso algorithm, and therefore require a computer of high specification, whereas autoadd can work on any computer and gives the results instantly. after adding 3 dgs with pf = 0.85, the results of the network analysis in figure 7 and table 5 show the improvement of the voltages and the reduction of the losses.

6.2. analysis of the practical network

6.2.1. grid-connected mode

the load flow is performed on the presented network by the fixed-point method for different load levels, 100 %, 90 %, 80 %, 70 %, 60 %, 50 % and 40 %, at the hours (12 pm, 7 pm, 8 pm, 10 am, 9 am, 12 am, 5 am), until the voltage is within ± 5 %; the size of the generation is then selected based on that. the results are shown in table 6. after the analysis of the system, it was decided to add 3 dgs with a total size of 600 kw to minimize the losses and bring the voltages within ± 5 %. the optimal place and size of the dgs were found by the autoadd algorithm, as in table 7. after the addition of the dgs, the voltage improved at all the buses of the proposed system for the different load levels, as presented in figure 8 to figure 14. moreover, the losses of the system decreased significantly, as in figure 15, because the distributed generators are located near the load.

6.2.2. isolated mode

in this section, the possibility of isolating the microgrid during fault time is discussed. as the total load of the network equals 1 mw at peak time and 500 kw at low-demand times, the total generation is raised to 1.2 mw by adding two standby units for this scenario to cover the load, as in table 8. a time-series load flow is applied over 24 hours, and the cases of (100 %, 75 %, 50 %) at the hours (12 am, 10 am, 12 pm) were taken to evaluate different load and irradiance cases.

table 3. power flow analysis on the 33-bus ieee system compared with other research.

algorithm         losses     minimum voltage in pu   location of bus
proposed method   202.6 kw   0.913                   18
[24]              202.7 kw   0.9131                  18
[25]              202.6 kw   0.913                   18
[26]              202.6 kw   0.913                   18

table 4. comparison of the autoadd results with other algorithms.

algorithm         losses in kw   minimum voltage   dg location    size of dg in mw
proposed method   71.4           0.96839           (14, 24, 30)   (0.76, 1.07, 1.02)
acsa [27]         74.26          0.9778            (14, 24, 30)   (0.7798, 1.125, 1.349)
fwa [28]          88.68          0.9680            (14, 18, 32)   (0.5897, 0.1895, 1.0146)
aco-abc [29]      71.4           0.9685            (14, 24, 30)   (0.7547, 1.0999, 1.0714)
pso [30]          72.8           0.96868           (13, 24, 30)   (0.8, 1.09, 1.053)

figure 6. pu bus voltages of the 33-bus ieee system.
figure 7. pu voltages before and after adding dgs with autoadd.

table 5. losses and minimum voltages before and after adding 3 dgs.

                 losses in kw   minimum voltage & bus
without dg       202.6          0.913 (18)
with three dgs   12.29          0.99178 (18)
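the ± 5 % acceptance test used throughout this section amounts to a one-line check; the sketch below (ours, not the paper's) flags out-of-band buses, using the table 5 minima as sample inputs.

```python
# Flag buses whose per-unit voltage leaves the ±5 % band [0.95, 1.05] pu.
def voltage_violations(v_pu_by_bus, v_min=0.95, v_max=1.05):
    return {bus: v for bus, v in v_pu_by_bus.items()
            if not v_min <= v <= v_max}

print(voltage_violations({18: 0.913}))    # before DGs (table 5) -> {18: 0.913}
print(voltage_violations({18: 0.99178}))  # after 3 DGs -> {} (compliant)
```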
the results are shown in figure 16 and figure 17. the minimum voltage in all the cases is within ± 5 %, and the system can operate successfully in the isolated mode. the losses are lower in the 100 % case than in the 50 % case, even though the load is higher, because the pv does not work at night.

table 6. power flow results of the proposed system.

percentage of load   total load in kw   losses in kw   min voltage & bus
100 %                1000               74.4           0.873 (39)
90 %                 900                58.95          0.88796 (39)
80 %                 800                45.5           0.9017 (39)
70 %                 700                34.17          0.915 (39)
60 %                 600                24.59          0.92807 (39)
50 %                 500                16.7           0.94075 (39)
40 %                 400                10.52          0.95312 (39)

table 7. results of the optimum size and location selection for the connected mode.

dg type         size in mw   location   pf
diesel engine   0.3          5          0.85
pv              0.2          10         1
fuel cell       0.1          57         0.9
total           0.6

table 8. results of the optimum size and location selection for the isolated mode.

dg type           size in mw   location   pf
diesel engine a   0.3          5          0.85
diesel engine b   0.5          7          0.85
pv                0.2          10         1
fuel cell         0.1          57         0.9
micro turbine     0.1          25         0.9
total             1.2

figure 8. voltages of all buses at 100 % load before and after the addition.
figure 9. voltages of all buses at 90 % load before and after the addition.
figure 10. voltages of all buses at 80 % load before and after the addition.
figure 11. voltages of all buses at 70 % load before and after the addition.
figure 12. voltages of all buses at 60 % load before and after the addition.

7. discussion

this work was conducted to increase the performance of microgrids by improving the voltage and reducing the losses while, at the same time, reducing the effect of ghg by integrating res. the results showed that the dgs have a major impact on the voltage profiles: the dgs increased the level of the voltage in all the studied cases, proportionally to the capacity of the dg. it can be noticed from the results of the simulations that the location of the dg is important for the whole network, and that is shown in the results. as for the losses, it can be seen from the results that the size of the dg is important: it has been noticed that the bigger the size gets, the larger the reduction in losses. the autoadd algorithm is used to find the optimal size of the dgs; this algorithm is fast, accurate and very easy to use, so it does not need any prior experience or learning. the proposed algorithm was applied to the ieee 33-bus system, and the results were compared against the results of other research in table 4. the results showed that the suggested algorithm is effective in finding the optimal size and location of dgs and helps in achieving better results in terms of loss reduction and voltage improvement, so it was used for the real system to install several types of dgs, which improved the voltage and the losses, as in the results. the scenario of isolating the low-voltage microgrid is applied for the first time to an iraqi case to solve the problem of the intermittency of power, and the result showed it was successful. some parameters that should be mentioned have not been considered in this work: the annual variation of the load and the economic issues related to the installation of the dg units.
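the sizing rule of section 6.2.1 can be illustrated with the table 6 figures: lower the served load level until the minimum voltage re-enters the ± 5 % band, then cover the remainder of the peak with dg. the sketch below is our reading of that rule, not code from the paper; note that it reproduces the 600 kw total of table 7 as 1000 kw - 400 kw.

```python
# Illustrative reading of the sizing rule in section 6.2.1, using the
# (load_kw, min_voltage_pu) pairs of table 6. We look for the highest load
# level whose minimum voltage already satisfies 0.95 pu and size the DG to
# cover the remainder of the 1 MW peak. This is our interpretation only.
table6 = [(1000, 0.873), (900, 0.88796), (800, 0.9017), (700, 0.915),
          (600, 0.92807), (500, 0.94075), (400, 0.95312)]

peak_kw = 1000
served_kw = max((load for load, v_min in table6 if v_min >= 0.95), default=0)
dg_kw = peak_kw - served_kw
print(f"grid serves {served_kw} kW within limits -> add {dg_kw} kW of DG")
# prints: grid serves 400 kW within limits -> add 600 kW of DG
```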
the loads in the distribution network vary continually, and that results in variations of the network's losses and voltages. on the other hand, a large pv system of 200 kw and a fuel cell of 100 kw have been installed, which are pollution-free, so they do not harm nature and run at low cost.

8. conclusion

in this paper, the reduction of losses and the improvement of voltage have been discussed, and the autoadd algorithm has been introduced and used for finding the optimal size and location of the dg. the ieee 33-bus system has been used to test the proposed algorithm, and the results of the work were compared against results from other studies; the comparison proved that the proposed algorithm is acceptable and helps in achieving a more efficient dg integration. the problem of the intermittency of electrical power is solved for the iraqi case considering the maximum load condition. future work will use the practical microgrid of this work, after the addition of the dgs and the improvement of voltage and losses, to integrate electric vehicles, preparing a suitable electrical environment for them to achieve zero pollution in the transportation sector, and will take into consideration the installation cost of the dgs and the charging stations and the variation of the loads.

conflicts of interest

the authors declare that there are no conflicts of interest.

figure 13. voltages of all buses at 50 % load before and after the addition.
figure 14. voltages of all buses at 40 % load before and after the addition.
figure 15. losses of all buses before and after the addition for the 100 % case.
figure 16. voltages of all buses for (50 %, 75 %, and 100 %) load.
figure 17. losses of all buses for (50 %, 75 %, and 100 %) load.

references

[1] t. addabbo, a. fort, m. mugnaini, l. parri, s. parrino, a. pozzebon, v. vignoli, a low-power iot architecture for the monitoring of chemical emissions, acta imeko 8(2) (2019), pp. 53-61.
doi: 10.21014/acta_imeko.v8i2.642
[2] y. zahraoui, m. r. basir khan, i. alhamrouni, s. mekhilef, m. ahmed, current status, scenario, and prospective of renewable energy in algeria: a review, energies 14(9) (2021).
doi: 10.3390/en14092354
[3] iea (2021), world energy outlook 2021, iea, paris. online [accessed 28 august 2022]
https://www.iea.org/reports/world-energy-outlook-2021
[4] a. s. hassan, a. firrincieli, c. marmaras, l. m. cipcigan, m. a. pastorelli, integration of electric vehicles in a microgrid with distributed generation, 49th international universities power engineering conference (upec), 2014.
[5] n. w. a. lidula, a. d. rajapakse, microgrids research: a review of experimental microgrids and test systems, renewable and sustainable energy reviews 15(1) (2011), pp. 186-202.
[6] y. yan, y. qian, h. sharif, d. tipper, a survey on smart grid communication infrastructures: motivations, requirements and challenges, ieee communications surveys & tutorials 15(1) (2013), pp. 5-20.
doi: 10.1109/surv.2012.021312.00034
[7] s. c. reddy, p. v. n. prasad, a. j. laxmi, power quality and reliability improvement of distribution system by optimal number, location and size of dgs using particle swarm optimization, 2012 ieee 7th international conference on industrial and information systems (iciis), 2012.
[8] v. vita, t. alimardan, l. ekonomou, the impact of distributed generation in the distribution networks' voltage profile and energy losses, 2015 ieee european modelling symposium (ems), 2015.
[9] m. husain rehmani, m.
reisslein, a. rachedi, m. erol-kantarci, m. radenkovic, integrating renewable energy resources into the smart grid: recent developments in information and communication technologies, ieee transactions on industrial informatics 14(7) (2018), pp. 2814-2825.
doi: 10.1109/tii.2018.2819169
[10] j. j. jamian, m. w. mustafa, h. mokhlis, m. n. abdullah, comparative study on distributed generator sizing using three types of particle swarm optimization, third int. conference on intelligent systems modelling and simulation, 2012.
[11] d. mazzeo, g. oliveti, e. labonia, estimation of wind speed probability density function using a mixture of two truncated normal distributions, renewable energy 115 (2018), pp. 1260-1280.
doi: 10.1016/j.renene.2017.09.043
[12] k. r. devabalaji, k. ravi, optimal size and siting of multiple dg and dstatcom in radial distribution system using bacterial foraging optimization algorithm, ain shams engineering journal 7(3) (2016), pp. 959-971.
doi: 10.1016/j.asej.2015.07.002
[13] k. s. alimgeer, z. wadud, i. khan, m. usman, a. b. qazi, f. a. khan, an innovative optimization strategy for efficient energy management with day-ahead demand response signal and energy consumption forecasting in smart grid using artificial neural network, ieee access 8 (2020), pp. 84415-84433.
doi: 10.1109/access.2020.2989316
[14] a. ramamoorthy, r. ramachandran, optimal siting and sizing of multiple dg units for the enhancement of voltage profile and loss minimization in transmission systems using nature inspired algorithms, the scientific world journal (2016), art. no. 1086579.
doi: 10.1155/2016/1086579
[15] a. hasibuan, s. masri, w. a. f. w. b. othman, effect of distributed generation installation on power loss using genetic algorithm method, iop conference series: materials science and engineering 308 (2018), art. no. 012034.
doi: 10.1088/1757-899x/308/1/012034
[16] a. nieto, power quality improvement in power grids with the integration of energy storage systems, international journal of engineering and technical research 5 (2016), pp. 438-443.
[17] a. amos ogunsina, m. omolayo petinrin, o. olayemi petinrin, e. nelson offornedo, j. olawole petinrin, g. olusola asaolu, optimal distributed generation location and sizing for loss minimization and voltage profile optimization using ant colony algorithm, sn applied sciences 3(2) (2021), p. 248.
doi: 10.1007/s42452-021-04226-y
[18] v. s. lopes, c. l. t. borges, impact of the combined integration of wind generation and small hydropower plants on the system reliability, ieee transactions on sustainable energy 6(3) (2016), pp. 1169-1177.
doi: 10.1109/tste.2014.2335895
[19] s. habib, m. kamran, u. rashid, impact analysis of vehicle-to-grid technology and charging strategies of electric vehicles on distribution networks – a review, journal of power sources 277 (2015), pp. 205-214.
doi: 10.1016/j.jpowsour.2014.12.020
[20] s. singh, d. shukla, s. p. singh, peak demand reduction in distribution network with smart grid-enabled cvr, 2016 ieee innovative smart grid technologies asia (isgt-asia), 2016.
[21] r. c. dugan, electric power research institute (epri), opendss manual, train. mater., pp. 1-184, 2019.
[22] m. nasser, i. ali, m. alkhafaji, optimal placement and size of distributed generators based on autoadd and pso to improve voltage profile and minimize power losses, engineering and technology journal 39(3a) (2021), pp. 453-464.
doi: 10.30684/etj.v39i3a.1781
[23] o. d. montoya, w. gil-gonzález, c.
orozco-henao, vortex search and chu-beasley genetic algorithms for optimal location and sizing of distributed generators in distribution networks: a novel hybrid approach, engineering science and technology, an international journal 23(6) (2020), pp. 1351-1363.
doi: 10.1016/j.jestch.2020.08.002
[24] r. rao, s. narasimham, m. ramalingaraju, optimization of distribution network configuration for loss reduction using artificial bee colony algorithm, 2007.
[25] m. e. soliman, a. y. abdelaziz, r. m. el-hassani, distribution power system reconfiguration using whale optimization algorithm, int. journal of applied power engineering 9 (2020), pp. 48-57.
[26] a. y. abdelaziz, r. a. osama, s. m. elkhodary, distribution systems reconfiguration using ant colony optimization and harmony search algorithms, electric power components and systems 41(5) (2013), pp. 537-554.
doi: 10.1080/15325008.2012.755232
[27] t. t. nguyen, a. v. truong, t. a. phung, a novel method based on adaptive cuckoo search for optimal network reconfiguration and distributed generation allocation in distribution network, international journal of electrical power & energy systems 78 (2016), pp. 801-815.
doi: 10.1016/j.ijepes.2015.12.030
[28] a. mohamed imran, m. kowsalya, d. p. kothari, a novel integration technique for optimal network reconfiguration and distributed generation placement in power distribution networks, int. journal of electrical power & energy systems 63 (2014), pp. 461-472.
doi: 10.1016/j.ijepes.2014.06.011
[29] m. r. alrashidi, m. f. alhajri, optimal planning of multiple distributed generation sources in distribution networks: a new approach, energy conversion and management 52(11) (2011), pp. 3301-3308.
doi: 10.1016/j.enconman.2011.06.001
[30] m. kumar, p. nallagownden, i. elamvazuthi, optimal placement and sizing of distributed generators for voltage-dependent load model in radial distribution system, renewable energy focus 19-20 (2017), pp. 23-37.
doi: 10.1016/j.ref.2017.05.003
introduction to the acta imeko issue dedicated to selected papers presented in tc4 at the xxth imeko world congress

acta imeko, august 2013, volume 2, number 1, 3 - 4, www.imeko.org

ján šaliga 1, dušan agrež 2
1 technical university of košice, faculty of electrical engineering and informatics, letná 9, 042 00 košice, slovakia
2 university of ljubljana, faculty of electrical engineering, tržaška 25, 1001 ljubljana, slovenia

keywords: imeko world congress; tc4; measurement of electrical quantities; busan, republic of korea
citation: ján šaliga, dušan agrež, "introduction to the acta imeko issue dedicated to selected papers presented in tc4 at the xxth imeko world congress", acta imeko, vol. 2, no. 1, article 3, august 2013, identifier: imeko-acta-02(2013)-01-03
editors: paolo carbone, university of perugia, italy; ján šaliga, technical university of košice, slovakia; dušan agrež, university of ljubljana, slovenia
copyright: © 2013 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was supported by the science grant agency of the slovak republic (project 1/0555/11)
corresponding author: ján šaliga, e-mail: jan.saliga@tuke.sk

abstract: this editorial is a brief introduction to the acta imeko issue dedicated to selected papers presented at the xxth imeko world congress and focused on measurement of electrical quantities (tc4). the congress took place in busan, republic of korea, in september 2012.

1. introduction

the xxth imeko world congress (september 9-14, 2012) took place in busan, the second-largest metropolis of the republic of korea, which was the third host city of the world congress in the asian region. imeko world congresses are important scientific events, organised every three years, where experts in measurement science and technology have a unique possibility to present their latest projects and the results of their research, exchange new ideas and discuss their research intentions. the world congresses join experts from the different fields of measurement science covered by the imeko technical committees (www.imeko.org). imeko congresses are also host to social events, where new international friendships have been created that may spread inter-institutional cooperation among recognized experts in the field of measurement across the entire world.
2. imeko tc4

imeko tc4 is one of the imeko technical committees, whose task and goal is to create an international platform for experts in the field of measurement of electrical quantities, emphasizing both the theoretical and the practical aspects of research in the field. annual symposia and congresses are organised in member countries by the local imeko organisations. the imeko country members which have current representatives on the tc4 board are: belgium, brazil, bulgaria, china, croatia, czech republic, egypt, estonia, finland, france, germany, greece, hungary, italy, korea, poland, portugal, romania, russia, serbia, slovakia, slovenia, spain, sweden, turkey, u.k., and ukraine. any new country is cordially welcome to join the imeko tc4 activities.

3. the papers

a special focus at the xxth imeko world congress was placed on the measurement of electrical quantities, which is under the scope of imeko technical committee 4 (tc4). this editorial is a brief introduction to the acta imeko issue dedicated to selected papers presented at the world congress that brought the latest advances in the measurement of electrical quantities (tc4). the papers for the issue were selected by a vote of the working group established from the tc4 board members who took part in the congress. according to the vote, 18 papers were selected, and the authors of the papers were invited to prepare extended and improved versions of their papers presented in busan. after a standard reviewing process, eight papers were accepted for publication in this special issue of acta imeko.

modern instrumentation very often requires highly precise and accurate timing signals, and a gps receiver is a typical modern solution to this task. ogrizović in [1] performed tests of a few receivers in comparison with a rubidium standard. the achieved results proved that a quality gps receiver is convenient for common industrial timing; the fluctuations are caused mainly by changes in the quality of the received signals. the second paper, [2] by gao et al., presents the development, construction and first results achieved with a metrological atomic force microscope, intended for precise measurements with an uncertainty of 4 nm. the microscope is intended for applications in nanoscience research and industry, with the ability to measure surface topography. the next paper, by lee et al. [3], introduces a technique for magnetic field measurement based on monitoring the spin precession due to the external field by observing the optical rotation signal. the developed magnetometer achieved a sensitivity of about 120 ft/hz^(1/2) at 10 hz. the paper by costa da silva et al. [4] describes an automatic characterization system designed for the measurement of the electric impedance of giant magneto-impedance (gmi) samples. the high speed of measurement attained with this system, designed in the labview development environment, allows for the rapid determination of the optimal operational point of an eventual gmi magnetometer. electric vehicle development is a very topical task and a dream for many leading car manufacturers. the paper by vrazic et al.
[5] describes a concept of a mobile lab for electric vehicle systems research and testing, based on plcs and, in parallel, on a common pc. the lab can be effectively used in the education process at the university of zagreb. the next paper in this special issue, [6] by mariscotti, is a contribution to railway safety: when a new locomotive is introduced on existing lines, it is essential to ensure that the locomotive and the signalling systems are compatible under normal operation and exceptional conditions. the focus of this paper is on the processing circuits and algorithms indicated by the european standards to model the susceptibility of the victim circuits, applied to the recorded pantograph current and its spectrum. a real-time smart meter network for energy management is the topic of paper [7] by del prete et al. the network is composed of several slave smart meters, which continuously monitor the loads, and an energy generator, making real-time information, such as power and energy consumption/generation and several power quality parameters, available to a specific master device called the data aggregator via a can bus. power quality is one of the most needed measurements in electric power systems; masnicki et al. in [8] developed an fpga-based interface between the adc and the dsp for such an application. the fpga collects data from the adc over an spi, processes them, and sends them to the dsp via a custom-protocol link port. the fpga interface was successfully tested in a complete system at different clock rates.

4. conclusions

this special issue offers a selection of the best papers presented at the xxth imeko world congress in busan in 2012 in the field of measurement of electrical quantities. the section editors of this special issue of acta imeko would like to thank all the authors for their cooperation and the improvements of their papers, the reviewers for their highly professional reviews, notes and recommendations to the authors, and also the layout editors of acta imeko, especially the father of the journal, paul regtien, for his help and valuable advice.

references

[1] vukan r. ogrizović, "testing gps generated 1pps against a rubidium standard", acta imeko, vol. 2 (2013), no. 1, pp. 7-11.
[2] sitian gao, mingzhen lu, wei li, yushu shi, qi li, "metrological atomic force microscope and traceable measurement of nano-dimension structures", acta imeko, vol. 2 (2013), no. 1, pp. 12-15.
[3] hyunjoon lee, kiwoong kim, seong-joo lee, chan-seok kang, kwon kyu yu, yong-ho lee, han seb moon, "development of spin-exchange relaxation free magnetometer with a compact heating system", acta imeko, vol. 2 (2013), no. 1, pp. 16-20.
[4] eduardo costa da silva, joão henrique costa carvalho carneiro, luiz antônio pereira de gusmão, carlos roberto hall barbosa, elisabeth costa monteiro, "development of a fast and reliable system for the automatic characterization of giant magnetoimpedance samples", acta imeko, vol. 2 (2013), no. 1, pp. 21-26.
[5] mario vrazic, marinko kovačić, hrvoje dzapo, damir ilic, "concept of mobile lab for electric vehicle systems research and testing", acta imeko, vol. 2 (2013), no. 1, pp. 27-31.
[6] andrea mariscotti, "variability and uncertainty of track circuit band-pass modeling for interference evaluation", acta imeko, vol. 2 (2013), no. 1, pp. 32-39.
[7] giuseppe del prete, daniele gallo, carmine landi, mario luiso, "real-time smart meters network for energy management", acta imeko, vol. 2 (2013), no. 1, pp. 40-48.
[8] Romuald Masnicki, Janusz Mindykowski, Damian Hallmann, "The implementation of FPGA for data transmission between ADC and DSP peripherals in the measurement channels for power quality assessment", Acta IMEKO, vol. 2 (2013), no. 1, pp. 49-55.

Decay of a Roman age pine wood studied by micro magnetic resonance imaging, diffusion nuclear magnetic resonance and portable nuclear magnetic resonance

ACTA IMEKO, ISSN: 2221-870X, March 2022, Volume 11, Number 1, 1-10

Valeria Stagno 1,2, Silvia Capuani 2,3

1 Earth Sciences Department, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
2 National Research Council, Institute for Complex Systems (CNR-ISC), c/o Physics Department, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
3 Centro Fermi, Museo Storico della Fisica e Centro Studi e Ricerche Enrico Fermi, Piazza del Viminale 1, 00184 Rome, Italy

Section: Research paper
Keywords: archaeological waterlogged wood; micro-MRI; diffusion-NMR; portable NMR
Citation: Valeria Stagno, Silvia Capuani, Decay of a Roman age pine wood studied by micro magnetic resonance imaging, diffusion nuclear magnetic resonance and portable nuclear magnetic resonance, Acta IMEKO, vol. 11, no. 1, article 12, March 2022, identifier: IMEKO-ACTA-11 (2022)-01-12
Section Editor: Fabio Santaniello, University of Trento, Italy
Received March 3, 2021; in final form March 15, 2022; published March 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Valeria Stagno, e-mail: valeria.stagno@uniroma1.it

Abstract
Wood is a hygroscopic, biodegradable porous material widely used in the past to create artworks. Its total preservation over time is quite rare, and one of the best preservation modalities is waterlogging. Observing the anatomy of waterlogged archaeological wood can also be complicated by its bacterial degradation. However, the characterisation of wood morphology and conservation state is a fundamental step before starting any restoration intervention, as it allows information to be extracted about past climatic conditions and human activities. In this work, a micro-invasive approach based on the combined use of high-resolution magnetic resonance imaging (MRI) and diffusion nuclear magnetic resonance (NMR) was tested on both a modern and an ancient pine wood sample. Furthermore, a completely non-invasive analysis was performed by using portable NMR. This multi-analytical NMR approach made it possible to highlight the effect of decay on the wood microstructure through alterations in the pore size, tortuosity and image contrast of the ancient pine compared with the modern one. This work points out the different but complementary multi-parametric information that can be obtained by using NMR and tests the potential of high-field MRI and low-field portable NMR in the detection of diagnostic features of wood.

1. Introduction

Wood is a porous material with a complex morphology. In the past it was widely used to produce artworks; for this reason, wood is widespread in the cultural heritage world, and its microstructure has long been studied for species identification, for dendrochronological analyses and for extracting important information about ancient human activities [1]. Two principal types of wood can be recognised: softwood and hardwood. Softwood has a very homogeneous structure, mainly composed of tracheids and fibre-tracheids [2]. The resin canals are one of its main characteristics, allowing it to be distinguished from hardwood. In addition, its annual rings are usually well separated by the annual ring limit, and they consist of an earlywood area, with pores characterised by larger lumens and thinner walls, and a latewood area, with thicker walls and smaller lumens [2]. However, these two areas are not always well differentiated: some softwoods present an abrupt passage from earlywood to latewood, while others show a gradual one [2]. Among the anatomical elements described above, the growth ring is surely one of the most important, because its characterisation provides the age of the tree and the climatic conditions in which it grew [3], [4], as well as being crucial for species identification. Because of its biodegradability [5], wood is hardly ever preserved without microstructural alterations.
Waterlogged wood is usually well enough preserved for the microscopic observation of its annual rings, especially when the wooden object was buried under sea sediments [6]. In fact, thanks to the anaerobic conditions, fungal attacks are more or less excluded [6], and if the object was buried under the seabed, marine borer activity is also limited [7]. In this environment, the main degradation can be attributed to erosion bacteria [8]-[10]. Conversely, at the macroscopic level the wood rings are not always visible to the naked eye, because of changes in colour, consistency and superficial morphology of the wood structure [11]; as a consequence, microscopic imaging techniques are always required. Moreover, the evaluation of the effect of age is also very important, for example to determine the conservation state of an artwork. Knowing the state of wooden remains, such as the degree of decay and the pore size and morphology, is useful for planning the restoration and choosing the restoration materials [11], [12]. In the literature, several works [13]-[22] on the use of nuclear magnetic resonance (NMR) on wooden artworks have proven its utility in microstructural characterisation and in the evaluation of conservation state. The works cited above employed invasive or destructive NMR techniques, while some others [22]-[25] pointed out the potential of non-invasive portable NMR. Among these, our previous work, Stagno et al. [23], showed how 1D and 2D low-field NMR experiments can be used as complementary techniques to study water compartmentalisation in archaeological waterlogged wood, with the support of optical microscopy and magnetic resonance images. Furthermore, we suggested the ability of low-field NMR to detect cell wall decay and paramagnetic impurities. With respect to our previous work [23], in this study we investigated a decayed pine wood of the Roman age by using MR images with higher resolution and by characterising the wood structure also through the pore size distribution extracted from the relaxation measurements.
The results obtained from the archaeological sample were compared with those obtained from its modern counterpart. Specifically, the aim of this work was to test the potential of a multi-analytical NMR approach to evaluate archaeological wood decay. We compared micro-invasive magnetic resonance imaging (MRI) and diffusion-NMR [26]-[29] with non-invasive portable NMR [30], [31]. The potential of MR images as an alternative to conventional techniques for the characterisation of the annual rings, as well as of all the diagnostic features of waterlogged wood, was tested. Moreover, high-field diffusion and low-field portable NMR were used to highlight the effect of decay on the wood microstructure, such as variations in the pore size of the ancient pine compared with the modern one.

2. Background theory

2.1. Diffusion-NMR and pore size

The molecules of a fluid whose absolute temperature is greater than zero kelvin are in constant movement because of their kinetic energy [27]; this process is called self-diffusion. When a fluid completely fills a porous medium, the randomly moving molecules are subject to continuous deviations of their trajectories due to collisions with the pore walls. Diffusion-NMR techniques investigate diffusion dynamics by following the fluid molecules in time. The root mean square distance travelled, $\ell_D$ (the diffusion length), increases with time as long as no boundaries are encountered, according to the Einstein relation

$\ell_D = \sqrt{2 n D t}$,

where n = 1, 2, 3 is the space dimension and D is the bulk diffusion coefficient, which can be measured by a pulsed-field-gradient sequence [26]. At a fixed diffusion time t = Δ (where Δ is the delay between the two gradient pulses) and a fixed pulsed magnetic-field-gradient duration δ such that δ ≪ Δ, the magnetic-field-gradient strength g can be varied, so that the NMR signal amplitude S(g) is given by [28], [32]

$S(g) = S(0)\, \mathrm{e}^{-\gamma^2 g^2 \delta^2 D \left(\Delta - \delta/3\right)}$ ,  (1)

where γ is the gyromagnetic ratio of protons, $b = \gamma^2 g^2 \delta^2 \left(\Delta - \delta/3\right)$ is the so-called b-value, D is the diffusion coefficient obtained at a specific diffusion time Δ, and S(0) is the signal at g = 0. Since longitudinal relaxation with time constant T1 occurs during Δ in a pulsed-gradient stimulated-echo (PGSTE) experiment, the maximum accessible Δ is limited by the relation Δ < T1 [33]. Geometrical restrictions of the medium, such as the pore walls, lead to a D(Δ) that decreases with time. In a heterogeneous porous system such as wood, useful information about pore size, pore interconnection and membrane permeability [29] can be derived by studying the behaviour of D(Δ). In the long-Δ limit (i.e., Δ ≫ L²/D0, where L is the mean pore diameter) and for impermeable walls, the water diffusion coefficient varies according to

$D(\Delta) = \frac{L^2}{2 \Delta}$ .  (2)

Equation (2) indicates that L can be obtained from the slope of D vs. Δ⁻¹. However, deviations from equation (2) can occur for semi-permeable walls [34], [35]. For materials with interconnected pores, at very long times D(Δ) approaches an asymptotic value D∞ that is independent of the diffusion time Δ and directly related to the tortuosity τ of the porous material:

$\tau = \frac{D_0}{D_\infty}$ .  (3)

Tortuosity is an intrinsic property of a porous medium, usually defined as the ratio of the actual flow path length to the straight-line distance between the ends of the flow path; thus, τ reflects the degree of connectivity of the porous network [36]-[38].
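Equations (1)-(3) translate directly into a short fitting procedure. The Python sketch below is a minimal illustration, not the processing pipeline used by the authors (who used OriginPro and MATLAB, see Section 3.4): the signal values and the D(Δ) series are invented placeholders chosen only to give plausible orders of magnitude.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import linregress

    GAMMA = 2.675e8  # proton gyromagnetic ratio (rad s^-1 T^-1)

    def stejskal_tanner(g, S0, D, delta=3e-3, Delta=0.2):
        """Equation (1): PGSTE signal vs. gradient strength g (T/m)."""
        b = (GAMMA * g * delta) ** 2 * (Delta - delta / 3.0)
        return S0 * np.exp(-b * D)

    # Placeholder data: 32 gradient steps, noiseless synthetic signal.
    g = np.linspace(0.026, 1.21, 32)                 # T/m
    S = stejskal_tanner(g, 1.0, 1.0e-9)
    (S0_fit, D_fit), _ = curve_fit(stejskal_tanner, g, S, p0=(1.0, 1e-9))
    print(f"D at Delta = 0.2 s: {D_fit:.2e} m^2/s")

    # Equation (2): pore size from the slope of D vs. 1/Delta.
    Delta = np.array([0.08, 0.12, 0.16, 0.20, 0.30])           # s
    D_of_Delta = np.array([8.0, 6.0, 4.9, 4.2, 3.3]) * 1e-10   # m^2/s, placeholders
    slope = linregress(1.0 / Delta, D_of_Delta).slope          # slope = L^2 / 2
    print(f"pore size L ~ {np.sqrt(2.0 * slope) * 1e6:.0f} um")

    # Equation (3): tortuosity from the asymptotic coefficient D_inf.
    D0, D_inf = 2.3e-9, 3.3e-10                                # m^2/s, placeholders
    print(f"tortuosity tau ~ {D0 / D_inf:.1f}")

With these invented numbers the sketch returns a pore size of about 10 µm and a tortuosity of about 7, i.e., the orders of magnitude reported later for the modern pine, but the fit itself is the only part that reflects the actual procedure.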
2.2. Transversal relaxation and pore size

The transversal, or spin-spin, relaxation time T2 is due to the loss of coherence of the spins among themselves. When an ensemble of spins is considered, their magnetic fields interact (spin-spin interaction), slightly modifying their precession rates. These interactions are temporary and random; thus, spin-spin relaxation causes a cumulative loss of phase, resulting in transverse magnetisation decay [26]. In a porous medium, the T2 relaxation of the fluid (e.g., water) is given by the sum of different contributions [39]-[43]:

$\frac{1}{T_2} = \frac{1}{T_{2\mathrm{b}}} + \frac{1}{T_{2\mathrm{s}}} + \frac{D \left(\gamma G T_\mathrm{E}\right)^2}{12}$ ,  (4)

where 1/T2 is the total spin-spin relaxation rate, 1/T2b is the relaxation rate of bulk water and 1/T2s is the relaxation rate of water on the pore surface. The term D(γ G TE)²/12, named the "diffusion relaxation" term, is related to the dephasing caused by the presence of a magnetic field gradient G; here D is the diffusion coefficient of water in the porous system, γ is the hydrogen gyromagnetic ratio and TE is the echo time. When T2 is measured by a Carr-Purcell-Meiboom-Gill (CPMG) sequence, the contribution of diffusion relaxation is averaged out by means of an echo train if a very short TE is selected [44], [45]. In a porous system, the dominant relaxation rate is 1/T2s, while the term 1/T2b can be neglected because of the low efficiency of bulk water relaxation [46]. Equation (4) then becomes

$\frac{1}{T_2} \approx \frac{1}{T_{2\mathrm{s}}} = \rho \frac{S}{V}$ ,  (5)

where ρ is the surface relaxivity [46] and S/V is the surface-to-volume ratio of the pore. Therefore, by measuring the spin-spin relaxation in a porous medium, the pore diameter d can be calculated from [46]-[48]

$\frac{1}{T_2} = \rho \frac{2 n}{d}$ ,  (6)

where n is a shape factor (a spherical shape was considered for our samples, n = 3).

3. Materials and methods

3.1. Samples

Two cylinder-like wood samples, one of water-soaked modern wood and one of archaeological waterlogged wood, were studied. They were less than 15 mm in length and 8 mm in diameter, suitable for a 10 mm NMR tube. The archaeological sample was detached from an ancient pole of the Roman harbour of Naples, dated to the 5th century AD [49], [50]. It was well preserved in waterlogged conditions and for this reason was always kept in water during the analysis. The modern wood, instead, had previously been kept at environmental conditions of 20 °C and 50 % relative humidity; in order to perform the NMR acquisitions, it was imbibed with distilled water until saturation was reached. The species of both the modern and the archaeological wood was stone pine (Pinus pinea L.) [51], [52].

3.2. High-field NMR acquisitions

All the high-field NMR analyses were performed using a 400 MHz Bruker Avance spectrometer with a 9.4 T static magnetic field and a micro-imaging unit equipped with high-performance, high-strength magnetic field gradients for MRI and diffusion measurements. The maximum gradient strength was 1240 mT/m and the rise time 100 µs. To measure the diffusion coefficient D and the longitudinal relaxation time T1 (useful for choosing the longest observation time Δ accessible in the diffusion experiments, according to the relation Δ < T1), the soaked samples were inserted without additional water into the NMR tube, which was sealed on top with Parafilm to prevent sample dehydration.
The longitudinal relaxation time was measured with a saturation-recovery (SR) sequence with 128 points from 10 µs to 10 s, a repetition time TR of 10 s and a number of scans NS equal to 4; the acquisition time was 1 hour and 40 minutes per sample. For the measurement of the water diffusion coefficient, a PGSTE sequence [28], [29] was used. Diffusion was evaluated along the x axis (i.e., perpendicular to the main direction of the wood grain), corresponding to the radial direction. The PGSTE signal was acquired with TR = 5 s, echo time TE = 1.9 ms, pulse gradient duration δ = 3 ms, 32 steps of the gradient strength g, from 26 to 1210 mT/m, for each diffusion time Δ, and NS = 16. The Δ values used were 0.04, 0.08, 0.12, 0.16, 0.2, 0.3 and 0.4 s. The b-value spanned from a minimum of 1.6 × 10⁷ s/m² to a maximum of 9.5 × 10¹¹ s/m². The acquisition time was about 6 hours per sample. For the acquisition of the MR images, the samples were inserted with distilled water into the NMR tube, which was sealed with Parafilm to prevent water evaporation. T2*-weighted images were acquired with a gradient echo fast imaging (GEFI) sequence [26] in the transversal, tangential and radial directions. The images were weighted by the T2* parameter, which depends on both T2 and magnetic field inhomogeneities. The optimised parameters used in the GEFI sequence are reported in Table 1, where STK is the slice thickness, FOV the field of view, MTX the image matrix and R the in-plane resolution.

Table 1. Acquisition parameters of the T2*-weighted images.

Parameter           Modern pine               Archaeological pine
TE/TR (ms)          3/1200                    5/1500
Number of slices    3                         3
NS                  128                       128
STK (µm)            200                       300
FOV (cm²)           0.9 × 0.9 / 1.4 × 1.4     0.9 × 0.9
MTX (pixels)        512 × 512                 512 × 512
R (µm²)             18 × 18 / 27 × 27         18 × 18

3.3. Low-field portable NMR acquisitions

For the low-field NMR acquisitions, the samples were simply placed on the sensitive area of the portable spectrometer: a Bruker Minispec mq-ProFiler with a single-sided magnet generating a static magnetic field of 0.35 T, with a 1H resonance frequency of 15 MHz and a dead time of 2 µs. The single-sided NMR device was equipped with an RF coil for performing experiments within 2 mm of the sample surface, from the surface into the sample [53]. The portable spectrometer has a constant magnetic field gradient with a strength of about 3 T/m. The transversal relaxation time T2 was measured with a CPMG sequence with TE = 42 µs, 6500 echoes, TR = 1 s, NS = 128 and an acquisition time of about 5 minutes.

3.4. Data processing

The longitudinal relaxation time was obtained by plotting the signal intensities S as a function of the SR delays t and fitting to the data the equation

$S(t) = M \left[ 1 - \mathrm{e}^{-t/T_1} \right]$ ,  (7)

where T1 is the longitudinal relaxation time and M is the associated equilibrium magnetisation. The diffusion coefficient values were obtained by fitting the equation

$S(b) = M_1 \mathrm{e}^{-D_1 b} + M_2 \mathrm{e}^{-D_2 b}$  (8)

to the data acquired at different b-values, where S(b) is the NMR signal as a function of the b-value and D1 and D2 are the two components of the diffusion coefficient associated with the magnetisations M1 and M2, respectively. The goodness of fit was evaluated by the adjusted R² (i.e., the R² corrected for the number of regressors). The fits of equation (7) and equation (8) were performed using the OriginPro 8.5 software.
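Both fits are ordinary nonlinear least-squares problems. As a hedged Python sketch of how equations (7) and (8) can be fitted (the authors used OriginPro 8.5; the data below are synthetic placeholders on grids similar to those quoted above):

    import numpy as np
    from scipy.optimize import curve_fit

    def sat_recovery(t, M, T1):
        """Equation (7): saturation-recovery signal."""
        return M * (1.0 - np.exp(-t / T1))

    def biexp_diffusion(b, M1, D1, M2, D2):
        """Equation (8): bi-exponential decay vs. b-value."""
        return M1 * np.exp(-D1 * b) + M2 * np.exp(-D2 * b)

    # Synthetic placeholder data.
    t = np.logspace(-5, 1, 128)                    # SR delays: 10 us ... 10 s
    S_t = sat_recovery(t, 1.0, 0.52)               # T1 ~ 520 ms, illustrative
    (_, T1_fit), _ = curve_fit(sat_recovery, t, S_t, p0=(1.0, 0.1))
    print(f"T1 = {T1_fit * 1e3:.0f} ms")

    b = np.logspace(7.2, 11.9, 32)                 # s/m^2, roughly the quoted b range
    S_b = biexp_diffusion(b, 0.6, 8e-10, 0.4, 5e-11)
    (M1, D1, M2, D2), _ = curve_fit(biexp_diffusion, b, S_b,
                                    p0=(0.5, 1e-9, 0.5, 1e-10))
    print(f"D1 = {D1:.2e} m^2/s, D2 = {D2:.2e} m^2/s")

On real, noisy data the bi-exponential fit of equation (8) is sensitive to the starting values, which is presumably why the goodness of fit was monitored with the adjusted R².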
Plots of the diffusion coefficient D vs. the diffusion time Δ were produced with MATLAB R2019b. From the D vs. Δ trend, the first and last points, corresponding to free water diffusion and to diffusion through semi-permeable membranes (i.e., the wood cell walls), were removed, and the pore radius L was estimated by the linear fit of equation (2) [54]. From the pore radius L, the pore diameter d was then calculated. In addition, the value at Δ = ∞ of the normalised diffusion coefficient D(Δ)/D0, where D0 is the free water diffusion coefficient, equal to 2.3 × 10⁻⁹ m²/s, was calculated and used to evaluate the tortuosity according to equation (3) [54]. Finally, the inverse Laplace transform [55] in MATLAB R2019b was used to obtain the low-field T2 distribution and the pore diameter distribution according to equation (6). The surface relaxivity ρ had previously been estimated for the modern and the ancient pine from the slope of the line given by the T2 time vs. the pore size α, where α was calculated from the diffusion measurements through the relation α = 2√(DΔ) [46], [47].

4. Results

T2*-weighted images of the transversal sections of the modern and the archaeological stone pine are displayed in Figure 1a) and Figure 1b), respectively. Here, all the anatomical elements observable with conventional optical microscopy [51], [52] are detectable. First, the annual ring limit (white arrows) is clearly visible in both Figure 1a) and Figure 1b), with two separate areas of different contrast. In both the modern and the archaeological pine, the darker area corresponds to structures with low T2* values and the brighter one to structures with high T2* values [56]. These structures correspond to tracheids, which are considered the predominant constituents of all softwoods [57]. The dark area is the latewood (light blue circles), while the bright area is the earlywood (green circles). Resin canals (red circles), likely containing resin, can also be observed. Moreover, rays (pink arrows) can be seen in both samples, but in Figure 1b) the archaeological pine shows black spots and artefacts (yellow circles) located along the rays and on the edge of the sample. Such artefacts are typical of MR images of waterlogged woods [13], owing to the deposition of impurities, i.e., bacterial erosion products and seabed sediments, in the wood microstructure during the burial period. These paramagnetic impurities produce a black contrast in T2*-weighted images [58], revealing the distribution of the degradation zones, since they accumulate in the decayed structures of the wood [13].

Figure 2a) and Figure 2b) display the radial sections of the modern and the archaeological pine, respectively. In both images, the observable anatomical element is the annual ring limit (white arrows), with, in Figure 2b), the above-mentioned distribution of paramagnetic inclusions (yellow circles). The tangential sections of the modern and the archaeological samples are shown in Figure 3a) and Figure 3b), respectively; the red circles highlight the tangential resin canals and the pink arrows the medullary rays. Again, in Figure 3b) the archaeological pine shows decayed zones (yellow circles) corresponding to black spots and artefacts produced by the presence of paramagnetic agents.

Table 2 reports the relaxation time T1 measured at high magnetic field and obtained with equation (7). Both the modern and the archaeological pine show a T1 of around 500 ms. This limited the observation time Δ of the diffusion measurements, whose maximum value was set to 400 ms.
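The 400 ms ceiling on Δ fixes the maximum length scale probed by the diffusion experiment. A one-line check with the Einstein relation of Section 2.1 (taking n = 1 along the measured direction and the free-water D0) reproduces the 43 µm limit discussed later in Section 5:

    import numpy as np

    D0 = 2.3e-9                             # free water diffusion coefficient (m^2/s)
    for Delta in (0.04, 0.2, 0.4):          # diffusion times (s)
        l_D = np.sqrt(2 * 1 * D0 * Delta)   # Einstein relation with n = 1
        print(f"Delta = {Delta:4.2f} s -> l_D = {l_D * 1e6:4.1f} um")

At Δ = 0.4 s this gives ℓD ≈ 43 µm, which is why pores much larger than this remain invisible to the diffusion measurement.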
In Figure 4a) and Figure 4b), the first (Dx1) and second (Dx2) diffusion components as a function of the diffusion time Δ, obtained with equation (8), are displayed. Table 3 reports the pore diameters and the tortuosity calculated from the high-field measurements with equation (2) and equation (3), respectively. Figure 5a) shows the T2 time distribution obtained by portable NMR for both the modern (dashed line) and the ancient (solid line) pine. The pore size distribution calculated by equation (6) from this low-field T2 distribution is presented in Figure 5b) for the modern pine (dashed line) and the archaeological pine (solid line).

Figure 1. T2*-weighted MR images of the transversal section of modern pine (a) and archaeological pine (b) obtained at high magnetic field (9.4 T). In both samples there are resin canals (red circles), earlywood (green circles), latewood (blue circles), annual ring limits (white arrows) and rays (pink arrows). The ancient pine shows artefacts induced by paramagnetic ions in decayed areas (yellow circles).

5. Discussion

Compared with the conventional methods used to observe wood anatomy, i.e., optical microscopy or scanning electron microscopy, our MR images allowed some important diagnostic characters of wood to be recognised, especially in the transversal section (Figure 1). Conversely, fewer characters were observed in the tangential (Figure 3) and radial (Figure 2) sections due to the current limitation on the resolution of MRI, whose maximum value is around 10 µm. Indeed, when the species of a softwood has to be recognised, the radial section is of fundamental importance because of its diagnostic features, such as the pits of cross-fields and the rays, which allow discrimination among quite similar softwood structures (e.g., pine and spruce). However, the resolution of MRI is not a physical limit but depends on the characteristics of the instrumentation used. In this work, the MR images shown in Figure 1, Figure 2 and Figure 3 aim at investigating the decay effect, and their specific contrast can be used to reconstruct the decay distribution and to detect the presence of paramagnetic impurities. The whole sample volume can be imaged with MRI, whereas optical and scanning electron microscopy only provide images of a small portion of the wood sample. Moreover, the mechanical preparation of the sample required by optical and scanning electron microscopy leads to its destruction, in contrast with the virtual sectioning operated by MRI. The longitudinal relaxation time T1 can provide information about the environment surrounding the water molecules and about the sample structure and composition [26].

Figure 2. T2*-weighted MR images of the radial section of modern pine (a) and archaeological pine (b) obtained at high magnetic field (9.4 T). In both samples the annual ring limit (white arrows) is observable. The ancient pine shows artefacts induced by paramagnetic ions in decayed areas (yellow circles).

Figure 3. T2*-weighted MR images of the tangential section of modern pine (a) and archaeological pine (b) obtained at high magnetic field (9.4 T). In both, tangential resin canals (red circles) and rays (pink arrows) are observable. The ancient pine shows artefacts induced by paramagnetic ions in decayed areas (yellow circles).
However, in a porous medium, T1 can be influenced by paramagnetic ions [59], [60], which usually cause its reduction, as found in Stagno et al. [23]. In our ancient wood, many paramagnetic inclusions were detected as artefacts in the MR images [13], and the T1 values displayed in Table 2 appear to be exchange-averaged [59]; therefore, they do not carry the same detailed information about the different structural compartments as the T2 values. For this reason, the longitudinal relaxation times of our modern and ancient pine cannot be used to describe structural differences between them. Nevertheless, the measurement of T1 is useful for setting the diffusion observation time.

From the comparison of the transversal, radial and tangential sections of the modern (Figure 1a, Figure 2a and Figure 3a) and the ancient (Figure 1b, Figure 2b and Figure 3b) pine, morphological differences can be observed. First of all, the black spots (see Section 4) can be associated with a degradation process operated by microorganisms, with the inclusion of paramagnetic impurities (i.e., bacterial erosion products and burial sediments) in the wood structure. These black zones are not present in the modern wood, and they are mostly located along the rays and on the edge of the sample (Figure 1b, Figure 2b and Figure 3b, yellow circles), indicating strong degradation in these zones. The image contrast shows that both woods have well-delimited growth rings. While the modern wood does not show particular changes in annual ring thickness, in Figure 1b) the ancient pine has some thinner annual rings with no latewood. We can hypothesise that this is due to a climatic change during the growth of the tree (before the 5th century AD); in fact, thinner rings are usually attributed to less rainy periods [4], [11]. However, the ring thickness can also be influenced by other factors, for example the age of the tree; therefore, more than one sample should be analysed to confirm our hypothesis.

The plots of the diffusion coefficient vs. the diffusion time in Figure 4a) and 4b) show that both wood samples have two main diffusion compartments, but that diffusion in the modern pine is slower than in the ancient pine; the difference is around one order of magnitude for both compartments Dx1 and Dx2. This can be explained as a consequence of the degradation process that occurred in the ancient pine. In fact, the decay of the cellular structure and of the wood polymers may have produced a thinning of the cell walls and an enlargement of the lumens in the archaeological pine: seen as a porous system, the ancient pine is characterised by larger pores than the modern pine. The existence of two different diffusion compartments means that both woods have at least two main pore sizes, as shown in Table 3. The two sizes d1 and d2 calculated from the high-field NMR diffusion measurements can be identified with the earlywood and latewood tracheid diameters [60], considering that tracheids are the main constituents of softwood. Comparing the calculated diameters of the modern and the ancient pine, although in the earlywood the diameter appears similar between the two samples, in the latewood there is an increase in tracheid size in the archaeological pine. A possible explanation is that the decay is greater in areas with a high concentration of wood polymers, such as the latewood cell walls.
These polymers are predominantly cellulose and hemicellulose, which, as pointed out by high-resolution NMR spectra of both modern and ancient wood [19], are more degraded than the usually well-preserved lignin. This means that the greater the thickness of the cell wall, the stronger its deterioration, in agreement with the distribution of paramagnetic impurities revealed by the MR images, which follows the distribution of the decay, mainly located in rays having thick cell walls. Further information can be deduced from the tortuosity τ (Table 3). The modern pine shows a higher tortuosity (7.0 ± 1.1) than the archaeological pine (3.5 ± 0.6). However, for the ancient pine the normalised D data did not reach a limit value (see Figure 4); therefore, we likely underestimated its tortuosity. Nevertheless, the tortuosity obtained for the modern pine appears to agree with the literature value of around 10 calculated for thermally modified Pinus sylvestris, also considering the effect of the thermal modification and the fact that we used a different species (Pinus pinea) [61]. Moreover, the tortuosity is in good agreement with the diffusion coefficient and diameter results. In fact, the higher the tortuosity, the more complicated the water routes: the modern wood has a complex structure within which water cannot move easily, whereas the ancient pine has lost this complexity because of the degradation of its structure, which has produced new voids and widened the existing pore lumens, making water motion easier. Specifically, the two different tortuosity values corroborate the pore size results obtained by NMR diffusion: the low tortuosity measured for the ancient pine is compatible with its larger and more interconnected pores.

Differences in the pore size of the modern and the ancient pine were also found using the low-field portable NMR, in good agreement with the high-field results. Specifically, the T2 time distribution obtained with the portable NMR, displayed in Figure 5a), indicates that the archaeological pine (solid line) has longer T2 than the modern pine (dashed line). The only exception is the shortest component, around 1 ms, which is longer for the modern wood than for the archaeological one. Since in previous works [62], [63] a component around 1 ms was attributed to bound water in the cell walls, we suggest that the bound water of the archaeological pine is strongly influenced by paramagnetic impurities, whose accumulation is greater in the more degraded areas, i.e., in the cell walls. This is confirmed by other studies [23], [64], in which the T2 component around 1 ms ascribable to bound water in the cell walls was not detected because of their strong state of degradation. Since T2 is proportional to the degree of decay, we suggest that the increase in T2 for the ancient pine is a consequence of decay. This also means that water in the archaeological pine is located in larger structures and that the water content has increased, as shown by the probability associated with T2 (Figure 5a).

Table 2. Longitudinal relaxation time T1.

Sample                 T1 ± SE (ms)
Modern pine            541 ± 13
Archaeological pine    511 ± 19

Figure 4. Plots (a) and (b) show the Dx1 and Dx2 decay as a function of Δ for modern pine (circle markers) and archaeological pine (triangle markers). Dashed lines are for illustration purposes only.
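The pore-diameter distribution of Figure 5b) follows from the T2 distribution by the simple rescaling of equation (6). A minimal, hedged Python illustration (the relaxivity and the T2 components below are placeholders, not the values calibrated as described in Section 3.4):

    import numpy as np

    def t2_to_diameter(t2, rho, n=3):
        """Equation (6): d = 2 * n * rho * T2 (n = 3 for spherical pores)."""
        return 2.0 * n * rho * t2

    rho = 5e-6                               # surface relaxivity (m/s), assumed
    t2 = np.array([1e-3, 1e-2, 1e-1, 1.0])   # placeholder T2 components (s)
    for t2_i, d_i in zip(t2, t2_to_diameter(t2, rho)):
        print(f"T2 = {t2_i * 1e3:7.1f} ms -> d = {d_i * 1e6:7.2f} um")

Because the mapping is linear in T2, the shape of the distribution is preserved and only the axis is rescaled, which is why the T2 spectrum of Figure 5a) and the pore-size distribution of Figure 5b) share the same structure.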
The pore diameters calculated from high-field diffusion-NMR (Table 3) can be compared with those obtained from the low-field T2 distribution (Figure 5b). However, compared with high-field diffusion, the low-field relaxation measurements provide a global pore distribution of the sample, whereas the intrinsic resolution of diffusion-NMR is limited by T1. In our case, the maximum accessible distance ℓD is 43 µm [65], at Δ = 400 ms and D = 2.3 × 10⁻⁹ m²/s (the free water diffusion coefficient). Nevertheless, we think that in the case of the archaeological pine the D vs. Δ⁻¹ behaviour is still affected by the presence of free-like water due to the existence of very large pores, not detectable with diffusion-NMR techniques [54]. This explains why the archaeological pine shows a pore distribution around 70 µm (Figure 5b) that was not detected by NMR diffusion. The pore size distribution displayed in Figure 5b) confirms the enlargement of the pores in the archaeological pine, as predicted by the diffusion and T2 measurements. However, the peak around 70 µm is quite broad, indicating a continuous distribution of diameters from about 55 µm to 90 µm. The most intense peak for the ancient pine is around 17 µm. It should be noticed that both the diameters obtained from diffusion (Table 3) and associated with earlywood and latewood fall within the peak around 17 µm; this indicates a continuous distribution between earlywood and latewood tracheids, likely with water exchange. For the modern wood, the two predominant peaks, as well as their probabilities in Figure 5b), are in good agreement with the diameters calculated from the diffusion analyses, with the MR images and with the literature values [61], [66], in which the mean lumen diameter was found to be around 15 µm - 20 µm. The existence of two separate peaks indicates distinct spin populations for earlywood and latewood.

Table 3. Pore diameters, associated magnetisations and tortuosity obtained by NMR diffusion.

Sample                 d1 (µm)       d2 (µm)       τ
Modern pine            16.0 ± 1.0    5.4 ± 0.6     7.0 ± 1.1
Archaeological pine    18.4 ± 3.5    13.6 ± 1.2    3.5 ± 0.6

Figure 5. T2 relaxation time distribution (a) and pore diameter distribution (b) for the modern pine (dashed line) and the ancient pine (solid line), obtained using portable low-field NMR.

Smaller pore sizes can be associated with parenchyma cells and with voids in the cell walls. Also in this case, the values obtained by equation (6) and displayed in Figure 5b) confirm that the ancient pine has larger pores (from 0.1 µm to 3 µm) than the modern pine (from 0.1 µm to 1 µm). In particular, the increase in size of the cell wall voids reveals the degradation of the polymers constituting the cell wall itself.

6. Conclusion

This work suggests that by using high-field micro-MRI and high-field diffusion-NMR it is possible to obtain information about archaeological wood decay. Information related to changes in the mean pore size caused by wood decay can also be obtained by low-field NMR relaxometry. The different NMR techniques and instruments provided the same result regarding the increase in the mean pore size of the archaeological pine wood. By comparing the modern and the ancient pine samples, the effect of the degradation process on the wood microstructure can be observed through the contrast of the MR images and quantified by the diffusion coefficient of water, the tortuosity and the pore size.
The high-field NMR showed that the decay in the pine wood mostly occurred in areas with a high concentration of polymers, such as rays and latewood cell walls, with enlargement of the pore lumens and loss of structural complexity of the wood. MRI can also reveal morphological aspects of wood that are not observable with the naked eye, such as the annual rings, which can inform about past climate changes. In the determination of pore size with portable low-field NMR, the method based on the T2 distribution appears superior to the high-field diffusion method, whose resolution is limited by T1. In conclusion, the high-field techniques require sampling; however, compared with other conventional analyses, they are non-destructive towards the sample, which can be relocated to its original position in the artwork. The low-field portable NMR, instead, is mobile and suitable for in-situ analysis on samples of any size, thus being non-invasive and non-destructive; compared with high-field NMR, it is also low cost, with shorter acquisition and processing times. We suggest that single-sided portable NMR is a powerful technique for revealing the porosity changes of the entire wood structure produced by decay.

Acknowledgement

The authors would like to thank the Istituto Centrale per il Restauro (ICR) of Rome (Italy) for providing the archaeological wood sample. We acknowledge funding from Regione Lazio under the ADAMO project no. B86C18001220002 of the Centre of Excellence at the Technological District for Cultural Heritage of Lazio (DTC).

References

[1] English Heritage, Waterlogged wood: guidelines on the recording, sampling, conservation and curation of waterlogged wood, English Heritage Publishing, 2010, product code 51578.
[2] IAWA Committee, IAWA list of microscopic features for softwood identification, IAWA J. 25 (2004) pp. 1-70. DOI: 10.1163/22941932-90000349
[3] Climate Data Information. Online [accessed 15 March 2022]: http://www.climatedata.info/proxies/tree-rings
[4] D. Castagneri, G. Battipaglia, G. von Arx, A. Pacheco, M. Carrer, Tree-ring anatomy and carbon isotope ratio show both direct and legacy effects of climate on bimodal xylem formation in Pinus pinea, Tree Physiol. 00 (2018) pp. 1-12. DOI: 10.1093/treephys/tpy036
[5] N. Macchioni, Wood: conservation and preservation, in: C. Smith (ed.), Encyclopedia of Global Archaeology, Springer, New York, 2014, ISBN 978-1-4419-0426-3. DOI: 10.1007/978-1-4419-0465-2_480
[6] D. H. Jennings, G. Lysek, Fungal biology: understanding the fungal lifestyle, BIOS Scientific Publishers, Guildford, 1996, ISBN 978-1859961087.
[7] M. A. Jones, M. H. Rule, Preserving the wreck of the Mary Rose, in: P. Hoffman (ed.), Proc. of the 4th ICOM-Group on Wet Organic Archaeological Materials Conference, Bremerhaven, 1991, pp. 25-48.
[8] R. A. Eaton, M. D. Hale, Wood: decay, pests and protection, Chapman and Hall Ltd, London, 1993, ISBN 0412531208.
[9] R. A. Blanchette, A review of microbial deterioration found in archaeological wood from different environments, Int. Biodeterior. Biodegradation 46 (2000) pp. 189-204. DOI: 10.1016/S0964-8305(00)00077-9
[10] N. B. Pedersen, C. G. Björdal, P. Jensen, C. Felby, Bacterial degradation of archaeological wood in anoxic waterlogged environments, in: S. E. Harding (ed.), Stability of complex carbohydrate structures: biofuel, foods, vaccines and shipwrecks, The Royal Society of Chemistry, Cambridge, 2013, ISBN 978-1-84973-563-6, pp. 160-187.
[11] D. M.
Pearsall, Paleoethnobotany: a handbook of procedures, 3rd edition, Routledge, 2015, ISBN 9781611322996.
[12] T. Nilsson, R. Rowell, Historical wood - structure and properties, J. Cult. Herit. 13 (2012) pp. S5-S9. DOI: 10.1016/j.culher.2012.03.016
[13] D. J. Cole-Hamilton, B. Kaye, J. A. Chudek, G. Hunter, Nuclear magnetic resonance imaging of waterlogged wood, Stud. Conserv. 40 (1995) pp. 41-50. DOI: 10.2307/1506610
[14] S. Maunu, NMR studies of wood and wood products, Prog. Nucl. Magn. Reson. Spectrosc. 40 (2002) pp. 151-174. DOI: 10.1016/S0079-6565(01)00041-3
[15] M. Bardet, A. Pournou, NMR studies of fossilized wood, Annu. Rep. NMR Spectrosc. (2017) pp. 41-83. DOI: 10.1016/bs.arnmr.2016.07.002
[16] A. Salanti, L. Zoia, E. L. Tolppa, G. Giachi, M. Orlandi, Characterization of waterlogged wood by NMR and GPC techniques, Microchem. J. 95 (2010) pp. 345-352. DOI: 10.1016/j.microc.2010.02.009
[17] A. Maccotta, P. Fantazzini, C. Garavaglia, I. D. Donato, P. Perzia, M. Brai, F. Morreale, Preliminary 1H NMR study on archaeological waterlogged wood, Ann. Chim. 95 (2005) pp. 117-124. DOI: 10.1002/adic.200590013
[18] J. Kowalczuk, A. Rachocki, M. Broda, B. Mazela, G. A. Ormondroyd, J. Tritt-Goc, Conservation process of archaeological waterlogged wood studied by spectroscopy and gradient NMR methods, Wood Sci. Technol. 53 (2019) pp. 1207-1222. DOI: 10.1007/s00226-019-01129-5
[19] M. Alesiani, F. Proietti, S. Capuani, M. Paci, M. Fioravanti, B. Maraviglia, 13C CPMAS NMR spectroscopic analysis applied to wood characterization, Appl. Magn. Reson. 29 (2005) pp. 177-184. DOI: 10.1007/BF03167005
[20] L. Rostom, D. Courtier-Murias, S. Rodts, S. Care, Investigation of the effect of aging on wood hygroscopicity by 2D 1H NMR relaxometry, Holzforschung 74 (2019) pp. 400-411. DOI: 10.1515/hf-2019-0052
[21] S. Viel, D. Capitani, N. Proietti, F. Ziarelli, A. L. Segre, NMR spectroscopy applied to the cultural heritage: a preliminary study on ancient wood characterization, Appl. Phys. A 79 (2004) pp. 357-361. DOI: 10.1007/s00339-004-2535-z
[22] V. Di Tullio, D. Capitani, A. Atrei, F. Benetti, G. Perra, F. Presciutti, N. Marchettini, Advanced NMR methodologies and micro-analytical techniques to investigate the stratigraphy and materials of 14th century Sienese wooden paintings, Microchem. J. 125 (2016) pp. 208-218. DOI: 10.1016/j.microc.2015.11.036
[23] V. Stagno, S. Mailhiot, S. Capuani, G. Galotta, V.-V. Telkki, Testing 1D and 2D single-sided NMR on Roman age waterlogged woods, J. Cult. Herit. 50 (2021) pp. 95-105. DOI: 10.1016/j.culher.2021.06.001
[24] D. Capitani, V. Di Tullio, N. Proietti, Nuclear magnetic resonance to characterize and monitor cultural heritage, Prog. Nucl. Magn. Reson. Spectrosc. 64 (2012) pp. 29-69. DOI: 10.1016/j.pnmrs.2011.11.001
[25] C. Rehorn, B. Blümich, Cultural heritage studies with mobile NMR, Angew. Chem. Int. Ed. 57 (2018) pp. 7304-7312.
DOI: 10.1002/anie.201713009
[26] P. T. Callaghan, Principles of nuclear magnetic resonance microscopy, Oxford University Press, New York, 1991, ISBN 0-19-853944-4.
[27] W. S. Price, NMR studies of translational motion: principles and applications, Cambridge University Press, Cambridge, 2009, ISBN 978-0-521-80696-1.
[28] E. O. Stejskal, J. E. Tanner, Spin diffusion measurements: spin echoes in the presence of a time-dependent field gradient, J. Chem. Phys. 42 (1965) pp. 288-292. DOI: 10.1063/1.1695690
[29] P. N. Sen, Time-dependent diffusion coefficient as a probe of geometry, Concepts Magn. Reson. 23A (2004) pp. 1-21. DOI: 10.1002/cmr.a.20017
[30] N. Proietti, D. Capitani, V. Di Tullio, Applications of nuclear magnetic resonance sensors to cultural heritage, Sensors 14 (2014) pp. 6977-6997. DOI: 10.3390/s140406977
[31] D. Besghini, M. Mauri, R. Simonutti, Time-domain NMR in polymer science: from the laboratory to the industry, Appl. Sci. 9 (2019) p. 1801. DOI: 10.3390/app9091801
[32] D. G. Norris, The effects of microscopic tissue parameters on the diffusion weighted magnetic resonance imaging experiment, NMR Biomed. 14 (2001) pp. 77-93. DOI: 10.1002/nbm.682
[33] D. A. Faux, P. J. McDonald, Nuclear-magnetic-resonance relaxation rates for fluid confined to closed, channel, or planar pores, Phys. Rev. E 98 (2018) pp. 1-14. DOI: 10.1103/PhysRevE.98.063110
[34] A. V. Anisimov, N. Y. Sorokina, N. R. Dautova, Water diffusion in biological porous systems: a NMR approach, Magn. Reson. Imaging 16 (1998) pp. 565-568. DOI: 10.1016/S0730-725X(98)00053-8
[35] R. Valiullin, V. Skirda, Time dependent self-diffusion coefficient of molecules in porous media, J. Chem. Phys. 114 (2001) pp. 452-458. DOI: 10.1063/1.1328416
[36] F. A. L. Dullien, Porous media: fluid transport and pore structure, Academic Press, New York, 1991, ISBN 9780323139335.
[37] P. P. Mitra, P. N. Sen, L. M. Schwartz, P. Le Doussal, Diffusion propagator as a probe of the structure of porous media, Phys. Rev. Lett. 68 (1992) pp. 3555-3558. DOI: 10.1103/PhysRevLett.68.3555
[38] M. Zecca, S. J. Vogt, P. R. Connolly, E. F. May, M. L. Johns, NMR measurements of tortuosity in partially saturated porous media, Transp. Porous Media 125 (2018) pp. 271-288. DOI: 10.1007/s11242-018-1118-y
[39] K. R. Brownstein, C. E. Tarr, Spin-lattice relaxation in a system governed by diffusion, J. Magn. Reson. 26(1) (1977) pp. 17-24. DOI: 10.1016/0022-2364(77)90230-X
[40] R. Kleinberg, M. Horsfield, Transverse relaxation processes in porous sedimentary rock, J. Magn. Reson. 88 (1990) pp. 9-19. DOI: 10.1016/0022-2364(90)90104-H
[41] S. De Santis, M. Rebuzzi, G. Di Pietro, F. Fasano, B. Maraviglia, S. Capuani, In vitro and in vivo MR evaluation of internal gradient to assess trabecular bone density, Phys. Med. Biol. 55 (2010) p. 5767. DOI: 10.1088/0031-9155/55/19/010
[42] E. Toumelin, C. Torres-Verdín, B. Sun, K. J. Dunn, Random-walk technique for simulating NMR measurements and 2D NMR maps of porous media with relaxing and permeable boundaries, J. Magn. Reson. 188 (2007) pp. 83-96. DOI: 10.1016/j.jmr.2007.05.024
[43] M. Ronczka, M. Muller-Petke, Optimization of CPMG sequences to measure NMR transverse relaxation time T2 in borehole applications, Geosci. Instrum. Method. Data Syst. 1 (2012) pp. 197-208. DOI: 10.5194/gi-1-197-2012
[44] H. Y. Carr, E. M. Purcell, Effects of diffusion on free precession in nuclear magnetic resonance experiments, Phys. Rev. 94 (1954) pp. 630-638. DOI: 10.1103/PhysRev.94.630
[45] S. Meiboom, D.
Gill, Modified spin-echo method for measuring nuclear relaxation times, Rev. Sci. Instrum. 29 (1958) pp. 688-691. DOI: 10.1063/1.1716296
[46] X. Li, Z. Zhao, Time domain-NMR studies of average pore size of wood cell walls during drying and moisture adsorption, Wood Sci. Technol. 54 (2020) pp. 1241-1251. DOI: 10.1007/s00226-020-01209-x
[47] P. R. J. Connolly, W. Yan, D. Zhang, M. Mahmoud, M. Verrall, M. Lebedev, S. Iglauer, P. J. Metaxas, E. F. May, M. L. Johns, Simulation and experimental measurements of internal magnetic field gradients and NMR transverse relaxation times (T2) in sandstone rocks, J. Petrol. Sci. Eng. 175 (2019) pp. 985-997. DOI: 10.1016/j.petrol.2019.01.036
[48] G. H. Sørland, K. Djurhuus, H. C. Widerøe, J. R. Lien, A. Skauge, Absolute pore size distributions from NMR, Diffus. Fundam. 5 (2007) pp. 4.1-4.15.
[49] V. Di Donato, M. R. Ruello, V. Liuzza, V. Carsana, D. Giampaola, M. A. Di Vito, C. Morhange, A. Cinque, E. Russo Ermolli, Development and decline of the ancient harbor of Neapolis, Geoarchaeology 33 (2018) pp. 542-557. DOI: 10.1002/gea.21673
[50] D. Giampaola, V. Carsana, G. Boetto, F. Crema, C. Florio, D. Panza, M. Bartolini, C. Capretti, G. Galotta, G. Giachi, N. Macchioni, M. P. Nugari, La scoperta del porto di "Neapolis": dalla ricostruzione topografica allo scavo e al recupero dei relitti, Archaeologia Maritima Mediterranea: International Journal on Underwater Archaeology, Istituti Editoriali e Poligrafici Internazionali, Fabrizio Serra, Pisa-Roma, 2005, pp. 1000-1045. [In Italian]. Online [accessed 14 March 2022]: http://digital.casalini.it/10.1400/52974
[51] InsideWood, 2004-onwards. Online [accessed 13 March 2022]: https://insidewood.lib.ncsu.edu/
[52] Wood anatomy of central European species. Online [accessed 13 March 2022]: http://www.woodanatomy.ch
[53] V. Stagno, C. Genova, N. Zoratto, G. Favero, S. Capuani, Single-sided portable NMR investigation to assess and monitor cleaning action of PVA-borax hydrogel in travertine and Lecce stone, Molecules 26 (2021) p. 3697. DOI: 10.3390/molecules26123697
[54] V. Stagno, F. Egizi, F. Corticelli, V. Morandi, F. Valle, G. Costantini, S. Longo, S. Capuani, Microstructural features assessment of different waterlogged wood species by NMR diffusion validated with complementary techniques, Magn. Reson. Imaging 83 (2021) pp. 139-151.
DOI: 10.1016/j.mri.2021.08.010
[55] L. Venkataramanan, Y.-Q. Song, M. D. Hürlimann, Solving Fredholm integrals of the first kind with tensor product structure in 2 and 2.5 dimensions, IEEE Trans. Signal Process. 50 (2002) pp. 1017-1026. DOI: 10.1109/78.995059
[56] P. M. Kekkonen, V.-V. Telkki, J. Jokisaari, Determining the highly anisotropic cell structures of Pinus sylvestris in three orthogonal directions by PGSTE NMR of absorbed water and methane, J. Phys. Chem. B 113 (2009) p. 1080. DOI: 10.1021/jp807848d
[57] G. T. Tsoumis, Wood, Encyclopædia Britannica. Online [accessed 14 March 2022]: https://www.britannica.com/science/wood-plant-tissue
[58] S. Takahashi, T. Kim, T. Murakami, A. Okada, M. Hori, Y. Narumi, H. Nakamura, Influence of paramagnetic contrast on single-shot MRCP image quality, Abdom. Imaging 25 (2000) pp. 511-513. DOI: 10.1007/s002610000083
[59] A. Yilmaz, M. Yurdakoç, B. Işik, Influence of transition metal ions on NMR proton T1 relaxation times of serum, blood, and red cells, Biol. Trace Elem. Res. 67 (1999) pp. 187-193. DOI: 10.1007/BF02784073
[60] S. Capuani, V. Stagno, M. Missori, L. Sadori, S. Longo, High-resolution multiparametric MRI of contemporary and waterlogged archaeological wood, Magn. Reson. Chem. 58 (2020) pp. 860-869. DOI: 10.1002/mrc.5034
[61] M. Urbańczyk, Y. Kharbanda, O. Mankinen, V.-V. Telkki, Accelerating restricted diffusion NMR studies with time-resolved and ultrafast methods, Anal. Chem. 92 (2020) pp. 9948-9955. DOI: 10.1021/acs.analchem.0c01523
[62] V.-V. Telkki, M. Yliniemi, J. Jokisaari, Moisture in softwoods: fiber saturation point, hydroxyl site content and the amount of micropores determined from NMR relaxation time distributions, Holzforschung 67 (2013) pp. 291-300. DOI: 10.1515/hf-2012-0057
[63] P. M. Kekkonen, A. Ylisassi, V.-V. Telkki, Absorption of water in thermally modified pine wood as studied by nuclear magnetic resonance, J. Phys. Chem. C 118 (2014) pp. 2146-2153. DOI: 10.1021/jp411199r
[64] S. Hiltunen, A. Mankinen, M. A. Javed, S. Ahola, M. Venäläinen, V.-V. Telkki, Characterization of the decay process of Scots pine wood caused by Coniophora puteana using NMR and MRI, Holzforschung 74 (2020) pp. 1021-1032. DOI: 10.1515/hf-2019-0246
[65] K. R. Brownstein, C. E.
Tarr, Importance of classical diffusion in NMR studies of water in biological cells, Phys. Rev. A 19 (1979) pp. 2446-2453. DOI: 10.1103/PhysRevA.19.2446
[66] I. Sable, U. Grinfelds, A. Jansons, L. Vikele, I. Irbe, A. Verovkins, A. Treimanis, Comparison of the properties of wood and pulp fibers from lodgepole pine (Pinus contorta) and Scots pine (Pinus sylvestris), BioResources 7 (2012) pp. 1771-1783. DOI: 10.15376/biores.7.2.1771-1783

A lightweight magnetic gripper for an aerial delivery vehicle: design and applications

ACTA IMEKO, ISSN: 2221-870X, September 2021, Volume 10, Number 3, 61-65

Giuseppe Sutera 1, Dario Calogero Guastella 1, Giovanni Muscato 1,2

1 Dipartimento di Ingegneria Elettrica Elettronica e Informatica, University of Catania, Catania, Italy
2 Istituto di Calcolo e Reti ad Alte Prestazioni, Consiglio Nazionale delle Ricerche, ICAR-CNR, Palermo, Italy

Section: Research paper
Keywords: low-power gripper; pick and place; rapid prototyping; permanent magnets; mobile robot application
Citation: Giuseppe Sutera, Dario Calogero Guastella, Giovanni Muscato, A lightweight magnetic gripper for a delivery aerial vehicle: design and applications, Acta IMEKO, vol. 10, no. 3, article 10, September 2021, identifier: IMEKO-ACTA-10 (2021)-03-10
Section Editor: Bálint Kiss, Budapest University of Technology and Economics, Hungary
Received January 15, 2021; in final form September 6, 2021; published September 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This research was partially funded by the 'Safe and Smart Farming with Artificial Intelligence and Robotics - Programma Ricerca di Ateneo UNICT 2020-22 Linea 2' project.
Corresponding author: Giuseppe Sutera, e-mail: giuseppe.sutera@unict.it

Abstract
In recent years, drones have become widely used in many fields, and their vertical flight capability makes these systems suitable for carrying out a variety of tasks. In this paper, the delivery service they can provide is analysed. The quick delivery of goods to remote areas is a relevant application scenario; however, the systems proposed in the literature use electromagnets, which affect the duration of the flight. In addition, these devices are heavy and suffer from high energy consumption, which reduces the maximum transportable payload. This study proposes a new lightweight magnetic plate composed of permanent magnets, capable of picking up and positioning any object as long as it has a ferromagnetic surface on top. The plate was developed for the Mohamed Bin Zayed International Robotics Challenge 2020, an international robotics competition for multi-robot systems. Challenge two of this competition required a drone capable of picking up different types of bricks and assembling them to build a wall according to an assigned pattern. The bricks were of different colours and sizes, with weights ranging from 1 to 2 kg. In light of this, weight was identified as the most relevant specification to consider in the drone design.

1. Introduction

Unmanned aerial vehicles, or drones, represent the future of many evolving sectors. Autonomous delivery is one of these, and this sector is developing rapidly thanks to new platforms that allow the transportation of weights heavier than that of the platforms themselves. These vehicles are often used to transport wares to areas that are difficult to reach quickly with standard means of transport. In order to make the entire delivery process autonomous, it is necessary to use an easy coupling system to attach the package to the drone. Different techniques have been described in the literature and are analysed in this paper. The snap-fit method requires a high level of accuracy in positioning, as a number of fixing pins must be inserted perfectly into the corresponding holes. Adhesion is an excellent solution for picking up and placing objects with metal parts.
For this reason, electro-adhesion [1] and electromagnets [2]-[4] have been analysed; they represent a valid choice in terms of ease of coupling, but they require a high operating current to create the magnetic field, which, in terms of energy consumption, can cause a reduction in flight time. In the approach of [3], the use of an electro-permanent magnet reduces the energy absorption, since it requires a current only in the release phase. However, the typical shape and weight of these devices require the design of bulky housings that do not allow a plate suitable for the intended purpose. For the present study, a plate was developed in accordance with the specifications of the Mohamed Bin Zayed International Robotics Challenge 2020 (MBZIRC, website: https://www.mbzirc.com/). One of the challenges in this competition was to create a drone capable of lifting different types of bricks off the ground and arranging them to build a wall according to an assigned pattern. For this purpose, a magnetic plate was created using passive magnets inserted inside a 3D-printed structure. The release system was incorporated into the plate and consisted of several levers operated by a single servomotor. This setup allows the detachment of ferromagnetic objects without high energy consumption, requiring a power supply only in the release phase, in order to activate the servomotor. Furthermore, thanks to its design, the developed gripper allowed the weight and the energy consumption to be optimised while still guaranteeing a lifting capacity exceeding 2 kg.

2. Mechanical design

The prototype in this study has a flat profile and an attractive force comparable to that of a commercial device, despite its lightness. The supporting structure was built with rapid prototyping techniques using a Zortrax M200 printer, which is equipped with an extruder with a 0.4 mm nozzle and offers a layer resolution of 90-390 microns. Of the materials available, Z-ULTRAT was selected for this project, a blend of Zortrax filaments created to enhance the properties of acrylonitrile butadiene styrene (ABS) in terms of durability.
the entire printing process took about 20 hours, to which 30 minutes were added to integrate the servomotor and the other mechanical parts (such as pins, ball bearings and screws). the considerable difference between printing and assembly times prompted a search for a solution to speed up the printing process. it was therefore decided to divide the prototype into several smaller components. although this solution increased the overall printing time by about 25 % (due to the increased number of prints and, hence, of printer initialisations), each component had a printing time ranging from 30 to 120 minutes. the resulting modular structure of the prototype made it quick to repair. as will be explained later, the bottom part of the plate was the area most exposed to wear, as it was in contact with the ground and the ferrous coupling surfaces. the dimensions of the realised prototype fulfilled the specific requirements of the mbzirc 2020 competition, where the magnetic plate was tested. the setup used for the competition had a length of 15 cm, a width of 10 cm and a height of 5 cm, and it weighed 195 g. however, the modularity of the prototype made it possible to reduce the contact surface to 9 × 9 cm, so that it was suitable for smaller drones while still maintaining the same supporting structure. the plate consisted of four pieces, two for each of the two layers that composed it. the first layer was 0.6 cm high and consisted of two smaller pieces connected with a dovetail joint (figure 1). during the design phase, a set of beams was designed with a two-fold purpose: 1) increasing the rigidity to compensate for the thinness and 2) creating the slots in which to insert the magnets and the supports for the release levers (figure 2). two commercial magnets were tested with the same width (10 mm) and length (20 mm) but different heights (2 and 5 mm) and, hence, different attraction forces (table 1). the design choice of creating a covering layer in the lower part made the external surface smooth and free of friction. the magnets were securely fixed in custom housings that prevented them from coming out once the object to be lifted had been hooked (figure 3). the release system was operated by a servomotor located in a central position in order to guarantee the centring of the weight. once activated by a digital pin of the flight control unit (fcu), which sends a pwm signal, the servomotor rotates from the rest position to the release position and back. a commercial mg995 servomotor was used, capable of developing a torque of 10 kg·cm (at 6 v) on the shaft, which is required to operate the cascade of l-shaped levers located along the length of the plate that move and push the object away from the magnets. the increase in distance produces a decrease in the force of attraction, and the object is released by gravity. the release phase lasts 1 s; during this period the servomotor draws 600 ma, after which the motor returns to the rest position, where the consumption is reduced to 10 ma. in the proposed solution, neodymium magnets are used for their capability to attract ferrous surfaces. the operating principle is based on the force of attraction, which follows the equation below:

f = b² a / (2 μ0) , (1)

where f is the force in n, a is the surface area of the magnetic pole in m², b is the magnetic flux density in tesla and μ0 is the permeability of air. by the second law of dynamics, the force of gravity is proportional to the mass m: f = m · g.
from this it follows that the liftable mass m is given by

m = b² a / (2 μ0 g) . (2)

figure 1. mechanical dovetail interlocking.
figure 2. structure with reinforcement beams, release levers and magnets (in green).
figure 3. cross-section view.

another relevant factor for this approach is the speed of grip and release. during the development of the drone, there was a focus on the coupling speed because, during the approach phase, the drone had to fly over an object placed on the ground and lift it up. during the preliminary tests, two issues were observed when flying close to the ground: 1) the displacement of the object and 2) the instability of the drone. both issues were ascribed to the rotors' downwash [5], i.e. the air flow generated by the propulsion system. in the first case, the object to be grabbed moved when the drone flew a few centimetres above it, so the drone had to ascend and repeat the grabbing procedure. this repositioning, in view of the challenge, increased the time required to complete the task. furthermore, the instability caused by the turbulence of the propulsion system interfered with the final manoeuvres needed for the proper positioning of the object. the use of an electromagnet would have introduced a further delay due to the need to hold the object near the ferrous surface during activation. driven by the need to reduce operating times, the choice of permanent magnets, which are constantly 'active', also allowed for a reduction in this additional latency. during the release phase, this problem did not arise, and it was possible to drop the object even at a short distance from the ground. another relevant aspect that required specific investigation was the coupling system between the drone and the magnetic plate. two approaches were tested: one consisting of a sliding prismatic-like joint with a cardan system near the plate, and a damped system, which was the final solution adopted. in the first approach, shown in figure 4, the variable length of the sliding joint allowed the drone to pick up objects up to 50 cm below its flight altitude. however, the considerable variation in the centre of mass once the object was gripped caused oscillations that increased energy consumption, as the autopilot had to continuously correct the vehicle's position. the final solution consisted of four cables tensioned by as many springs, thus dampening the mechanical shocks and vibrations on the object due to oscillations during transportation. furthermore, the cables allowed the plate to be attracted to the metal sheet on the object, as the plate could move freely along the vertical axis (within a range of ± 5 cm) and in the horizontal plane (within ± 3 cm).

3. experiments

the dji f550 hexacopter was chosen for the final setup, as it offered a good compromise between maximum payload and limited downwash effect. in fact, the adoption of this gripping solution with a larger drone (dji s900) was abandoned because the excessive downwash produced by its rotors tended to push away the bricks below. the magnets chosen for the final implementation of the plate were the 5 mm ones, with a declared attraction force of approximately 3.8 kg for each single magnet, as shown in table 1. the nominal value of the force of attraction holds if the object is made of iron; for steel or other alloys, this value can be reduced by more than 30 %.
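to make the magnitudes in equations (1) and (2) concrete, the short python sketch below evaluates them for the 20 × 10 mm pole area of the magnets in table 1. the flux density values are illustrative assumptions, not data from this study.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, H/m (used here for air)
G = 9.81                   # gravitational acceleration, m/s^2

def magnet_lift(b_tesla, pole_area_m2):
    """Evaluate eq. (1) F = B^2 A / (2 mu_0) and eq. (2) m = F / g."""
    force = b_tesla ** 2 * pole_area_m2 / (2 * MU_0)
    return force, force / G

AREA = 20e-3 * 10e-3  # pole face of the magnets in table 1, m^2

for b in (0.3, 0.5, 0.7):  # assumed flux densities at the pole face (illustrative)
    f, m = magnet_lift(b, AREA)
    print(f"B = {b:.1f} T -> F = {f:5.1f} N, liftable mass = {m:4.2f} kg")
```

under these assumptions, a flux density of about 0.5 t at the pole face already accounts for the roughly 2 kg attraction force quoted in table 1 for the 2 mm magnet.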
often, this given value also decreases due to the coating or surface irregularities of the magnets. another factor is the thickness of the metal surface to be lifted, which must not be too thin, or the force of attraction cannot be fully exploited. however, in this study, the z-ultrat covering layer, which acts as an air gap between the magnet and the ferrous surface to be lifted, led to a considerable deterioration in the attraction force, which decreases very rapidly with increasing distance. ten magnets of the above model were inserted into the plate, as shown in figure 2, with alternating orientations of the magnetic field. in a tensile test performed with a digital dynamometer, a force of 4.9 kg had to be applied to detach the plate from a thick (0.5 cm) ferrous surface. given the above considerations and the degradation in attraction obtained for each individual magnet, this force was still well within acceptable limits. the use of passive magnets ensured a strong grip, without any detachments during the transport phases, despite the thin separation layer between the ferromagnetic surface and the magnets. the final configuration was tested with copies of the bricks, as per the challenge rules, with different weights of up to 2.0 kg and with thinner ferromagnetic surfaces (0.1-0.2 cm) on top compared with those used for the preliminary tensile test. during these tests, it was possible to lift the different types of bricks, except for the orange bricks, shown in figure 5, which could not be lifted because their ferromagnetic surface was not thick enough to ensure a firm grip. however, since these bricks were 1.80 m long, it might not be possible to lift them with just one drone anyway, and it was therefore decided not to address this issue.

table 1. overview of the properties of the magnets used.

  property               magnet 1          magnet 2
  material               ndfeb             ndfeb
  weight                 3.04 g            7.6 g
  shape                  parallelepiped    parallelepiped
  dimensions             20 × 10 × 2 mm    20 × 10 × 5 mm
  surface of the poles   20 × 10 mm        20 × 10 mm
  coating                nickel plated     nickel plated
  magnetisation          n45               n45
  force of attraction    about 2.1 kg      about 3.8 kg

figure 4. approach with a sliding prismatic-like joint and a cardan system near the plate.
figure 5. types of bricks according to the challenge rules.

the release occurs by activating the servomotor, which produces an increase in the air gap (figure 3) between the magnets and the metal surface of the gripped object. the detachment occurs naturally thanks to the contribution of gravity, so a small servo can detach objects in the order of 1-2 kg. as the weight of the object decreases, gravity assists less, and it is necessary to increase the air gap produced by the movement of the levers. a short release sequence is shown in figure 6. this magnetic plate was first presented in [6]; in the current version it has been improved with the integration of a sheet of magnetic shield material, an alloy with high magnetic permeability that shields magnetic fields. this solution shields the upper part of the plate, thus avoiding any interference with the autopilot and other system components. the proposed system, shown in figure 7, allows excellent pick-and-place missions to be conducted. furthermore, the decision not to use electromagnets avoids the intermittent generation of magnetic fields, which could negatively influence the behaviour of the autopilot and reduce flight times as a result of the power consumption during the gripping phase.
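the loss caused by the covering layer can be quantified directly from the dynamometer test above; the following sketch only rearranges the numbers stated in the text (ten magnets of 3.8 kg nominal attraction each, 4.9 kg measured for the whole plate).

```python
n_magnets = 10
nominal_per_magnet = 3.8   # declared attraction per magnet, kg (table 1)
measured_total = 4.9       # dynamometer reading for the whole plate, kg

nominal_total = n_magnets * nominal_per_magnet
loss_fraction = 1 - measured_total / nominal_total

print(f"nominal total attraction : {nominal_total:.1f} kg")       # 38.0 kg
print(f"effective per magnet     : {measured_total / n_magnets:.2f} kg")
print(f"attraction lost to gap   : {loss_fraction:.0%}")           # ~87 %
```

even with roughly 87 % of the nominal attraction lost, the measured 4.9 kg leaves a comfortable margin over the 2 kg bricks.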
the final assembly, with the damping system of cables and springs, guarantees a firm grip between the brick and the magnetic plate, dampening the vibrations caused by transportation. the tension springs chosen for this project allow an extension of up to 5 cm; as soon as the plate is within this range, it is attracted to the ferrous surface and the springs stretch, leading to automatic coupling. when the springs are fully extended, the drone is able to lift the brick gradually. moreover, the damping system compensates for the vertical accelerations and decelerations of the payload during take-off and landing, respectively. the tests performed on the gripper during the challenge showed that it was able to grab, transport and place a large number of bricks within the time allowed for the trials.

4. conclusions

in this article, a drone equipped with a pick-and-place system for objects with ferrous surfaces has been presented, and the different approaches used to carry out delivery services using drones have been evaluated. based on this analysis, it was decided to use passive permanent magnets in order to eliminate the power consumption during the transport phase. this technique has been combined with a custom design in order to obtain a flat profile and to guarantee the lightness of the prototype, achieved through 3d printing. from the literature, it appears that this model represents the most compact passive magnetic gripping system developed so far. in the future, a lighter and flexible version of the prototype, capable of lifting objects with slightly curved faces, will be developed. moreover, as the current vision system for brick detection is placed under an arm, its field of view is partially hidden by the plate. therefore, as a future development, the camera will be integrated into the plate in order to improve visualisation and ensure alignment up to a few centimetres from the object to be grasped. in addition, distance sensors will be installed to constantly monitor the distance during the gripping phase. based on the positive results from mbzirc and the latest improvements, this drone can be employed in the field in case of emergency to transport goods to dangerous or remote areas.

acknowledgement

this research was partially funded by the 'safe and smart farming with artificial intelligence and robotics - programma ricerca di ateneo unict 2020-22 linea 2' project.

figure 6. the release phase, in which it is possible to see how the plate, thanks to the proposed solution, returns to its original position immediately after release without affecting the posture of the drone.
figure 7. final assembly with the damping system and cable tensioner.

references

[1] d. longo, g. muscato, adhesion techniques for climbing robots: state of the art and experimental considerations, advances in mobile robotics, proc. of the eleventh international conference on climbing and walking robots and the support technologies for mobile machines (clawar), coimbra, portugal, 8-10 september 2008, pp. 6-28. doi: 10.1142/9789812835772_0003
[2] a. rodríguez castaño, f. real, p. ramón soria, j. capitán fernández, v. vega, b. c. arrue ullés, a. ollero baturone, al-robotics team: a cooperative multi-unmanned aerial vehicle approach for the mohamed bin zayed international robotic challenge, journal of field robotics 36 (2019) pp. 104-124. doi: 10.1002/rob.21810
[3] a. gawel, m. kamel, t. novkovic, j. widauer, d.
schindler, b. p. von altishofen, r. siegwart, j. nieto, aerial picking and delivery of magnetic objects with mavs, proc. of the 2017 ieee international conference on robotics and automation (icra), singapore, 29 may - 3 june 2017, pp. 5746-5752. doi: 10.1109/icra.2017.7989675
[4] k. tai, a. r. el-sayed, m. shahriari, m. biglarbegian, s. mahmud, state of the art robotic grippers and applications, robotics 5 (2016), 20 pp. doi: 10.3390/robotics5020011
[5] c. g. hooi, f. d. lagor, d. a. paley, height estimation and control of rotorcraft in ground effect using spatially distributed pressure sensing, journal of the american helicopter society 61 (2016) pp. 1-14. doi: 10.4050/jahs.61.042004
[6] g. sutera, d. c. guastella, g. muscato, a novel design of a lightweight magnetic plate for a delivery drone, proc. of the 23rd international symposium on measurement and control in robotics (ismcr), budapest, hungary, 15-17 october 2020, pp. 1-4. doi: 10.1109/ismcr51255.2020.9263730

comparison of milligram scale deadweights to electrostatic forces

acta imeko issn: 2221-870x september 2014, volume 3, number 3, 68 - 72

sheng-jui chen, sheau-shi pan, yi-ching lin

center for measurement standards, industrial technology research institute, hsinchu, taiwan 300, r.o.c.

section: research paper
keywords: deadweight force standard; electrostatic force actuation; capacitive position sensing; force balance
citation: sheng-jui chen, sheau-shi pan, yi-ching lin, comparison of milligram scale deadweight forces to electrostatic forces, acta imeko, vol. 3, no. 3, article 14, september 2014, identifier: imeko-acta-03 (2014)-03-14
editor: paolo carbone, university of perugia
received may 13, 2013; in final form august 26, 2014; published september 2014
copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was supported by the bureau of standards, metrology and inspection (bsmi), taiwan, r.o.c.
corresponding author: sheng-jui chen, e-mail: sj.chen@itri.org.tw

abstract
this paper presents a comparison of milligram scale deadweights to electrostatic forces via an electrostatic sensing & actuating force measurement system. the system is designed for measuring forces below 200 µn with an uncertainty of a few nanonewtons. it consists of three main components: a monolithic flexure stage, a three-electrode capacitor for position sensing and actuating, and a digital controller. the principle of force measurement used in this system is a static force balance, i.e. the force to be measured is balanced by a precisely controlled electrostatic force. four weights of 1 mg to 10 mg were tested in this comparison. the results of the comparison showed that extra stray electrostatic forces exist between the test weights and the force measurement system. this extra electrostatic force adds a bias to the measurement result and was different for each weight. in principle, this stray electrostatic force can be eliminated by installing a metal housing to isolate the test weight from the system. in the first section, we briefly introduce the electrostatic sensing and actuating force measurement system; we then describe the experimental setup for the comparison and the results. finally, we give a discussion and outlook.

1. introduction

micro- and nano-force measurement has been of great interest in recent years among several national measurement institutes (nmis) [1-6]. the center for measurement standards (cms) of the industrial technology research institute (itri) has established a force measurement system based on electrostatic sensing and actuation techniques. the system is capable of measuring vertical forces up to 200 µn based on a force balance method. the system mainly consists of a flexure stage, a three-electrode capacitor and a digital controller [7]. the schematic drawing of the system is shown in figure 1.

figure 1. schematic drawing of the force measurement system.

the three-electrode capacitor is used simultaneously as a capacitive position sensor and an electrostatic force actuator. the position of the center electrode is measured by comparing the capacitances of the upper capacitor c1 and the lower capacitor c2 formed within the three electrodes (see figure 3). the differential capacitance was detected using an inductive-capacitive resonant bridge circuit.
the position detection is performed at a radio frequency (rf), say 100 khz, a frequency depending on the capacitance values and the design of the sensing bridge circuit. for electrostatic force actuation, two high-voltage, audio-frequency sinusoidal signals are applied to the top and bottom electrodes to generate a compensation electrostatic force fe that balances the force under measurement fm. the balance condition fm = fe is maintained by the digital controller by keeping the flexure stage at its zero-deflection position. some parts of the force measurement system were upgraded for performance improvements. a new design of copper-beryllium flexure stage was installed in the system, which has a counterweight balance mechanism and a lower stiffness of 13.08 n/m. figure 2 shows a picture of the new flexure stage, where the counterweight and the gold-plated cube flexure stage are visible. a new set of gold-plated, polished electrodes was assembled as a three-electrode capacitor and put into operation. the capacitance gradient for the new three-electrode capacitor was measured.

2. experimental setup

in this experiment, the compensation electrostatic force is compared to the deadweight by weighing a weight with the electrostatic sensing and actuating force measurement system.

2.1. deadweight

we used four wire weights with nominal mass values and shapes of 1 mg (triangle), 2 mg (square), 5 mg (pentagon) and 10 mg (triangle) to generate vertical downward forces. these weights meet the metrological requirements of the oiml class e1 and were calibrated against standard weights using a mass comparator balance. the calibration results are compiled in table 1. the forces can be derived from the calibrated mass values and the local acceleration of gravity g = 9.78914 m/s² as

f_w = m (1 − ρ_a/ρ_w) g ,

where ρ_a and ρ_w are the densities of the air and of the weight, respectively. these weights were loaded and unloaded by a dc-motor-actuated linear translation stage.
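a minimal sketch of the deadweight force computation, using the conventional masses of table 1 and the stated local g; the air and weight densities are assumed values typical of laboratory air and a steel alloy wire, not figures from the paper.

```python
G_LOCAL = 9.78914    # local acceleration of gravity, m/s^2 (stated in the text)
RHO_AIR = 1.2        # assumed air density, kg/m^3
RHO_WEIGHT = 8000.0  # assumed density of the wire weights, kg/m^3

def deadweight_force_nn(mass_mg):
    """f_w = m (1 - rho_a / rho_w) g, returned in nN."""
    return mass_mg * 1e-6 * (1 - RHO_AIR / RHO_WEIGHT) * G_LOCAL * 1e9

for m in (1.00096, 2.00116, 5.00124, 10.0021):  # conventional masses from table 1
    print(f"{m:8.5f} mg -> f_w = {deadweight_force_nn(m):9.1f} nN")
```

with these density assumptions, the 1 mg weight gives about 9797 nn, consistent with the f_w values compiled later in table 3.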
2.2. electrostatic sensing & actuating force measurement system

as shown in figure 3, the compensation electrostatic force fe generated by the force measurement system is determined by the following equation:

f_e = (1/2) s1 v1² − (1/2) s2 v2² , (1)

where s1, s2 are the capacitance gradients of the top and bottom capacitors c1, c2 and v1, v2 are the voltage potentials across the top and bottom capacitors, respectively. using the parallel-plate capacitor as the model for capacitors c1 and c2,

c1(x) = ε0 a/(d − x) ≈ (ε0 a/d) [1 + (x/d) + (x/d)² + (x/d)³ + (x/d)⁴] , (2)
c2(x) = ε0 a/(d + x) ≈ (ε0 a/d) [1 − (x/d) + (x/d)² − (x/d)³ + (x/d)⁴] , (3)

where ε0 is the vacuum permittivity, a is the effective area of the electrode and d is the gap distance between the electrodes when the center electrode is vertically centered. the capacitance gradients s1 and s2 can be expressed as

s1(x) = dc1/dx = s0 [1 + 2(x/d) + 3(x/d)² + 4(x/d)³ + 5(x/d)⁴] , (4)
s2(x) = −dc2/dx = s0 [1 − 2(x/d) + 3(x/d)² − 4(x/d)³ + 5(x/d)⁴] , (5)

where s0 = ε0 a/d² is the capacitance gradient at x = 0. the electrostatic force can be written as

f_e(x) = (1/2) s0 (v1² − v2²) + (x/d) s0 (v1² + v2²) . (6)

the voltages v1 and v2 contain the rf detection signal v_d sin(ω_d t), the audio-frequency high-voltage actuation signals v_a1 sin(ω_a t), v_a2 sin(ω_a t) and the electrodes' surface potentials v_s1, v_s2, namely

v1 = v_d sin(ω_d t) + v_a1 sin(ω_a t) + v_s1 , (7)
v2 = v_d sin(ω_d t) + v_a2 sin(ω_a t) + v_s2 . (8)

the high-voltage actuation signals are provided by a full-range ±10 v, 16-bit resolution digital-to-analog converter within the digital controller and an ultra-low-noise high-voltage amplifier. to make the electrostatic force linearly proportional to a control voltage v_c, we set

v_a1 = a1 (v_b + v_c) , (9)
v_a2 = a2 (v_b − v_c) , (10)

where a1, a2 are the amplification factors of the high-voltage amplifier. the term v_b is a constant determined by the value of s0 and the upper limit of the force measurement range.

figure 2. picture of the new flexure stage.

table 1. mass calibration results.

  nominal mass (mg)   conventional mass (mg)   uncertainty, 95 % confidence (mg)
  1                   1.00096                  0.0003
  2                   2.00116                  0.0003
  5                   5.00124                  0.00065
  10                  10.0021                  0.00048

figure 3. three-electrode capacitor for electrostatic force actuation.

taking into account the gain difference between the channels of the high-voltage amplifier and substituting equations (7)-(10) for v1 and v2 in equation (6), we obtain an equation for the electrostatic force fe:

f_e = s0 a0² v_b v_c + s0 a0² (a + b)(v_b² + v_c²) + s0 (v_s² + b v_d²) + (ac terms) , (11)

where a is the gain difference fraction, i.e. a = (a1 − a2)/(a1 + a2), a0 is the mean gain factor, b is the offset fraction x/d, v_s² = (v_s1² − v_s2²)/2 and v_c is the control voltage. the high-frequency ac terms at audio and rf frequencies can be omitted because they cause only negligible ac displacement modulations of the flexure stage. parameter a can be tuned very close to zero by adjusting the gain of the dac within a software program. after the tuning, parameter a was measured to be smaller than 5×10⁻⁵, contributing a negligible force uncertainty. instead of using an optical interferometer, the position of the center electrode is measured via the difference between c1 and c2 with a differential capacitance bridge circuit [7]. hence, any deviation of the center electrode from the vertical center position can be detected by the bridge circuit.
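for a sense of the voltage scale involved, one can invert the balance for a single-sided drive (v2 = 0 in eq. (1)), using a capacitance gradient of the order of the value measured later in this section; this is only an order-of-magnitude sketch, since the real controller splits the drive between both capacitors according to eqs. (7)-(10).

```python
import math

S0 = 2.876e-8  # capacitance gradient, F/m (order of the measured value)

def balance_voltage(force_newton):
    """Amplitude V such that F = (1/2) s0 V^2 (single-sided drive, v2 = 0)."""
    return math.sqrt(2 * force_newton / S0)

for f_un in (10, 100, 200):  # forces in micronewton, up to the 200 uN range
    print(f"F = {f_un:3d} uN -> V = {balance_voltage(f_un * 1e-6):5.1f} V")
```

balancing the full 200 µn range requires an amplitude on the order of a hundred volts, which is why a high-voltage amplifier drives the electrodes.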
with a commercially available optical interferometer, the offset adjustment could be quite difficult and ambiguous. the effect of the parameter (a + b) can be tested by setting v_c = 0, modulating v_b with a square-wave profile and observing the displacement signal of the flexure stage. for v_b = 2.0, we did not observe any displacement due to the modulated v_b. the remaining factors s0, v_c and v_s dominate the uncertainty of the electrostatic force fe. the capacitance gradient s0 was measured using a weight of 1 mg and an optical interferometer. the 1 mg weight was cyclically loaded onto and unloaded from the system by a motorized linear stage to produce a deflection modulation. the deflection was measured by the optical interferometer, and the corresponding capacitance variation was measured by a calibrated precision capacitance bridge. to reduce the effects of seismic noise and of drift from the optical interferometer or the flexure stage itself, both the deflection Δx and the capacitance variation Δc were measured as the difference between the average values of the mass-loaded data and the two adjacent mass-unloaded data. the capacitance gradient s0 was obtained by calculating the ratio Δc/Δx, which is shown in figure 4. using (2) and (3), the capacitance gradient estimated by Δc/Δx can be expressed as

s = Δc(x)/x = (1/x) [ε0 a/(d − x) − ε0 a/d] = s0 /(1 − x/d) ≈ s0 (1 + x/d) . (12)

from (12), the measured gradient deviates from s0 by the small amount s0 x/d. using the nominal design value of d = 0.5 mm, the ratio x/d is 0.15 %. this ratio can be reduced by using a smaller x for measuring the capacitance gradient. the measured capacitance gradient has a mean value of s = 2.876×10⁻⁸ f/m and a standard deviation of σs = 0.008×10⁻⁸ f/m. therefore, the standard uncertainty of the capacitance gradient is u(s) = σs/√n = 4×10⁻¹² f/m, with n = 369 in this measurement. the uncertainty u(v_c) of the control voltage v_c is calculated from the dac resolution of 0.3 mv as u(v_c) = 0.3/(2√3) ≈ 0.088 mv, which contributes 1 nn. as for the surface potential noise v_s, the current actuation scheme prevents the surface-potential effect from being coupled to and amplified by the control voltage v_c, as was the case in the previous electrostatic actuation scheme [7], where v_s was amplified as s·v_c·v_s. the surface potential is reported to range from 20 mv to 180 mv [8], [9]. taking v_s = 0.18 v as an example and s = 2.876×10⁻⁸ f/m, the surface-potential-induced electrostatic force is about 0.9 nn.
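the three uncertainty contributions quoted above can be cross-checked in a few lines. the ±10 v, 16-bit dac step (of which the quoted 0.3 mv is the rounded value) and the form f = s·v_s² for the surface-potential force are inferences from the quoted numbers, so this is a plausibility check rather than the authors' computation.

```python
import math

S = 2.876e-8        # capacitance gradient, F/m
SIGMA_S = 0.008e-8  # standard deviation of s, F/m
N = 369             # number of measurements

u_s = SIGMA_S / math.sqrt(N)          # standard deviation of the mean
print(f"u(s)  = {u_s:.1e} F/m")       # ~4e-12 F/m

dac_step = 20.0 / 2**16               # +/-10 V full range, 16 bit: ~0.305 mV
u_vc = dac_step / (2 * math.sqrt(3))  # rectangular distribution over one code
print(f"u(vc) = {u_vc * 1e3:.3f} mV") # ~0.088 mV

f_vs = S * 0.18**2                    # surface-potential force for vs = 0.18 V
print(f"f(vs) = {f_vs * 1e9:.2f} nN") # ~0.9 nN
```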
2.3. null deflection control

the force under measurement fm is balanced by fe through the null deflection control. figure 5 shows the block diagram of the null deflection control. the transfer functions of the main components, namely the flexure stage, the capacitive position sensor, the loop filter and the electrostatic force actuator, are represented by g, h, d and a, respectively. the term x_n represents a deflection noise, which may be contributed by seismic vibration noise and the thermal noise of the flexure stage itself. the relation between fe and fm appears to be

f_e(s) = [h d a/(1 + t(s))] x_n(s) + [t(s)/(1 + t(s))] f_m(s) , (13)

where t(s) = g·d·h·a is the open-loop transfer function of the control system, and f_e(s), x_n(s) and f_m(s) are the laplace transforms of f_e, x_n and f_m, respectively. within the control bandwidth, i.e. for t(s) >> 1, the relation between fe and fm can be approximated as

f_e = k x_n + f_m , (14)

where k is the stiffness of the flexure stage. equation (14) shows that the null deflection control automatically generates a compensation force fe to balance the force under measurement fm.

figure 4. capacitance gradient calculated from Δc/Δx. the mean capacitance gradient is s0 = 2.876×10⁻⁸ f/m, the standard deviation is σs = 0.008×10⁻⁸ f/m and the standard deviation of the mean is σs/√n = 4×10⁻¹² f/m (n = 369 in this measurement).

to reduce the influence from the noise x_n, fm is measured in a short period of time by comparing f_e(t0) before fm is applied and f_e(t1) after fm is applied:

Δf_e = f_e(t1) − f_e(t0) = k [x_n(t1) − x_n(t0)] + f_m ,

thus

f_m = Δf_e − k x_nt . (15)

the term x_nt represents the temporal variation of x_n during the measurement time frame. from one deflection measurement data set taken over 8 h, using a window of 300 s to evaluate x_nt, we obtained a standard deviation of 0.33 nm for x_nt. with a measured value k of 13.0 n/m, the standard deviation of the x_nt-equivalent force noise is 4.3 nn. table 2 lists the main sources of uncertainty of the measured fm.

2.4. weighing process

each weight was loaded for 100 seconds and unloaded for 100 seconds. the compensation electrostatic force was calculated from the control voltage v_c. figure 6 shows the control voltage v_c acquired during one weighing cycle. the voltage difference Δv_c was determined from one weight-loaded segment and its two adjacent weight-unloaded segments as

Δv_c = v_cb − (v_ca1 + v_ca2)/2 . (16)

the weighing cycle was repeated for a long period of time in order to evaluate the stability and uncertainty of the system.

3. results

figure 7 shows the result of one weighing run for the weight of 1 mg. the measurement was carried out over three days. for this run, the measured electrostatic force was f_e = (9782.6 ± 6.7) nn, where the given uncertainty is one standard deviation. the forces produced by the weights are estimated as f_w = m g (1 − ρ_air/ρ_weight), where the air buoyancy is taken into consideration. the comparison results are compiled in table 3.

table 3. comparison results, unit in nn.

              1 mg            2 mg             5 mg             10 mg
  f_w         9797.1 ± 2.9    19586.7 ± 2.9    48950.5 ± 6.4    97897.3 ± 4.7
  f_e         9782.6 ± 6.7    19527.0 ± 4.1    48751.4 ± 8.2    97886.4 ± 16.5
  f_e − f_w   −14.5           −59.7            −199.1           −10.9

in general, the electrostatic force has a smaller value than the deadweight force. for the 1 mg and 10 mg weights, the force differences, defined as f_e − f_w, are similar and close to 10 nn in magnitude; both weights are triangles with similar dimensions. for the 2 mg and 5 mg weights, the force differences are considerably larger; these weights are shaped as a square and a pentagon, respectively. the 5 mg weight has the largest force difference of about 200 nn (20 µg), and it is the biggest weight in terms of wire length and shape dimensions. a possible explanation for this force difference is that there might be some extra electrostatic or magnetic force between the weight and its surroundings. due to its size, the 5 mg weight has the shortest distances to, and possibly experiences the strongest electrostatic/magnetic interactions with, its surroundings.
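the force differences in table 3 can be re-expressed as equivalent masses, which is how the 20 µg figure for the 5 mg weight is obtained; the sketch below uses only the tabulated values and the stated local g.

```python
G_LOCAL = 9.78914  # local acceleration of gravity, m/s^2

# (f_w, f_e) pairs from table 3, in nN
results = {"1 mg": (9797.1, 9782.6), "2 mg": (19586.7, 19527.0),
           "5 mg": (48950.5, 48751.4), "10 mg": (97897.3, 97886.4)}

for label, (fw, fe) in results.items():
    diff_nn = fe - fw
    diff_ug = diff_nn * 1e-9 / G_LOCAL * 1e9  # equivalent mass in micrograms
    print(f"{label:>5}: fe - fw = {diff_nn:7.1f} nN  (~{diff_ug:5.1f} ug)")
```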
figure 5. block diagram of the null deflection control: flexure stage g (m/n), capacitive position sensor h (v/m), loop filter d (v/v) and electrostatic force driver a (n/v). some noise sources are omitted for simplicity.

figure 6. capacitive displacement and control voltage v_c during one weighing cycle.

table 2. uncertainty budget for the measured fm.

  source of uncertainty       standard uncertainty (n)
  capacitance gradient s0     1.4×10⁻⁴ · f_e
  16-bit dac resolution       1×10⁻⁹
  surface potential v_s       1.8×10⁻⁹
  displacement noise x_nt     4.3×10⁻⁹

  combined standard uncertainty: u(f_m) = √[(4.8×10⁻⁹)² + (1.4×10⁻⁴ f_e)²] n

figure 7. a data run for 1 mg weighing.

4. discussion and outlook

a force measurement system based on electrostatic sensing and actuation techniques has been built and upgraded. the system is enclosed in a vacuum chamber that resides on a passive low-frequency vibration isolation platform. the voltage actuation scheme has been modified to allow the decoupling of the surface potential v_s from the actuation voltage, leading to a reduction in the drift and bias of the compensation electrostatic force. the system is stable over a long period of time. however, the cause of the extra electrostatic/magnetic force observed in the weighing test is still unclear, and an investigation is underway. a new design of the apparatus's housing is being fabricated; it is designed to isolate most of the apparatus from its surroundings and expose only the force loading area. in addition, other parameters such as alignment factors, the capacitance gradient and its frequency dependence will also be re-verified and studied further to find the cause of the force difference.

acknowledgement

this work was supported by the bureau of standards, metrology and inspection (bsmi), taiwan, r.o.c.

references

[1] d. b. newell, j. a. kramar, j. r. pratt, d. t. smith, e. r. williams, the nist microforce realization and measurement project, ieee trans. instrum. meas. 52 (2003) p. 508.
[2] m.-s. kim, j.-h. choi, y.-k. park, j.-h. kim, atomic force microscope cantilever calibration device for quantified force metrology at micro- or nano-scale regime: the nano force calibrator (nfc), metrologia 43 (2006) pp. 389-395.
[3] r. leach, d. chetwynd, l. blunt, j. haycocks, p. harris, k. jackson, s. oldfield, s. reilly, recent advances in traceable nanoscale dimension and force metrology in the uk, meas. sci. technol. 17 (2006) pp. 467-476.
[4] j.-h. choi, m.-s. kim, y.-k. park, m.-s. choi, quantum-based mechanical force realization in piconewton range, appl. phys. lett. 90 (2007) 073117.
[5] v. nesterov, facility and methods for the measurement of micro and nano forces in the range below 10⁻⁵ n with a resolution of 10⁻¹² n (development concept), meas. sci. technol. 18 (2007) pp. 360-366.
[6] m.-s. kim, j. r. pratt, u. brand, c. w. jones, report on the first international comparison of small force facilities: a pilot study at the micronewton level, metrologia 49 (2012) p. 70.
[7] s.-j. chen, s.-s. pan, a force measurement system based on an electrostatic sensing and actuating technique for calibrating force in a micronewton range with a resolution of nanonewton scale, meas. sci. technol. 22 (2011) 045104.
[8] j. r. pratt, j. a.
kramar, si realization of small forces using an electrostatic force balance, proc. of the 18th imeko world congress, rio de janeiro, brazil, 17-22 september 2006.
[9] s. e. pollack, s. schlamminger, j. h. gundlach, temporal extent of surface potentials between closely spaced metals, phys. rev. lett. 101 (2008) 071101.
measurements and geometry

acta imeko issn: 2221-870x june 2021, volume 10, number 2, 98 - 103

valery d. mazin1

1 peter the great st. petersburg polytechnic university, russia

section: research paper
keywords: measurements; geometry; projective metric; basic measurement equation; geometric space
citation: valery mazin, measurements and geometry, acta imeko, vol. 10, no. 2, article 14, june 2021, identifier: imeko-acta-10 (2021)-02-14
section editor: francesco lamonaca, university of calabria, italy
received april 2, 2021; in final form may 18, 2021; published june 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: valery d. mazin, e-mail: masin@list.ru

abstract
the paper is aimed at demonstrating the points of contact between measurements and geometry, which is done by modelling the main elements of the measurement process by the elements of geometry. it is shown that the basic equation for measurements can be established from the expression of the projective metric and represents its particular case. commonly occurring groups of functional transformations of the measured value are listed. nearly all of them are projective transformations, which have invariants and are useful if greater accuracy of measurements is desired. some examples are given to demonstrate that real measurement transformations can be dealt with via fractional-linear approximations. it is shown that basic metrological and geometrical categories are related, and a concept of seeing a multitude of physical values as elements of an abstract geometric space is introduced. a system of units can reasonably be used as the basis of this space. two tensors are introduced in the space. one of them (the affinor) describes the interactions within the physical object; the other (the metric tensor) establishes the summation rule on account of the random nature of the components.

1. introduction

the purpose of this article is to define the commonalities between measurements and geometry, using a method that models the basic elements of a measurement process by the elements of geometry. among the many approaches used to describe fundamental measurement categories, the geometrical approach is often underrated, despite the fact that geometry as a science originated from measurements and only later rose to a new, higher level of generality. this paper attempts to argue that measurements and geometry are related, and that geometry is not just another branch of mathematics. at all times, the most prominent authorities in the scientific world have acknowledged the fundamental and special place that geometry takes in the system of exact sciences. thus, spinoza believed that it is geometry that "reveals a causal connection in nature". newton said that "geometry expounds and justifies the art of measurement" [1]. in [2] we find einstein's statement, according to which "geometry must precede physics, since the laws of the latter cannot be expressed without geometry. therefore, geometry must be considered as a science, logically preceding every experience and every experimental science." a remarkable illustration of this thought is also presented in the book by b. mandelbrot, "the fractal geometry of nature" [3]. in "the encyclopedia of mathematics" [4], the special role of geometry is characterized in the following way: "developments of geometry and its applications, advances in geometric perception of abstract objects in various areas of mathematics and natural science provide solid evidence of the importance of geometry as one of the most profound and fruitful means for cognizing reality" [5, 6].
today, measurement specialists rarely use the geometrical apparatus, either in general or in particular cases (except, perhaps, for measurements at the elementary level); instead, the analytical approach dominates the field. however, [7] highlights the huge heuristic value of the geometric representation of the concepts of analysis, saying that geometry "is becoming increasingly important in … physics. it simplifies mathematical formalism and deepens physical comprehension. this renaissance of geometry has had an impact not only on the special and general theory of relativity, obviously geometric in nature, but also on other branches of physics, where the geometry of more abstract spaces is replacing the geometry of physical space." today, no one seems to deny the fact that the science of measurements is actually a metascience, which is used in all natural and technical sciences, to say the least. for this reason, its apparatus has to be represented by disciplines with the same or a higher order of generality. geometry is just such a discipline. section 2 shows that the basic equation for measurements is a special case of the expression of the projective metric. in section 3, functional measurement transformations are looked at in the context of group theory. section 4 is devoted to identifying the relationship between metrological and geometrical categories. the concluding section summarizes the main idea of the paper and points out its practical usefulness.

2. projective metric and basic measurement equation

the essence of any measurement has always been a comparison with a known unit. among various geometric systems, the most common one is projective geometry, which, according to m. komatsu [8], represents geometry as a whole. projective geometry only studies the mutual relations between figures and in this sense is akin to measurements. a segment of a numerical axis can traditionally represent the value of a measured quantity.
in projective geometry, the distance between two points is determined using the cayley (projective) metric

l = c |ln v| , (1)

where c is a constant and

v = [(x3 − x1)/(x2 − x3)] / [(x4 − x1)/(x2 − x4)] (2)

is the complex, or double, ratio of four points of a straight line, x1, x2, x3, x4 being the coordinates of the points on the line. let c = 1, x3 = 0, x4 = ∞. then from the equations above it follows that

l = |ln(x1/x2)| = |ln(x2/x1)| , (3)

hence

e^(l·sgn(x2−x1)) = x2/x1 (4)

and

x2 = x1 · e^(l·sgn(x2−x1)) . (5)

the meaning of the quantities in the last equation leaves no doubt that what we have here is the "basic equation of measurements", usually written as

x = {x}·[x] , (6)

where x is the measured quantity,

{x} = e^(l·sgn(x2−x1)) (7)

is its numerical value, and [x] is the unit of the quantity. the latter is taken for granted and does not seem to require any proof. however, as we can see, it is deduced from the definition of the projective metric, a fact that can hardly be accidental. thus, it is worth mentioning a statement by the famous mathematician hölder [9]: "to prevent misunderstanding, i note here that the axioms of the theory of quantities as they appear here should not be presumed in geometry or applied to segments and volumes. on the contrary, there are examples of purely geometric axioms for points and segments, from which it can later be proved ... that for segments there are facts that in general theory of measurable quantities are presupposed as axioms". in connection to the above, let us make the following remark: from (1) it follows that the logarithmic scale, widely used in measurements as well as in physics and technology, is nothing but a scale in the projective metric. in general, the simple relations stated above suggest that there is a principled connection between the fundamental concepts of geometry and measurements. the basic equation for measurements, fundamental and meaningful in itself, happens to be a particular case of a fundamental geometric relationship too.

3. groups of functional measurement transformations

in addition, measurements and geometry are related by the fact that invariants are widely used in both disciplines. in measurements, this leads to improved accuracy. [10] shows an example of the invariance principle applied to a simple ratio of three values of the measured quantity under an affine measurement transformation. in geometry, invariants generally have fundamental significance, since according to f. klein's "erlangen program" [11], various geometries represent the invariant theories of the relevant transformation groups. it should be noted that the transformation function of a measurement channel, y = f(x), certainly belongs to one of the following groups (we do not mean the groups mentioned in the "erlangen program"):

y = x (8) is the identical group,
y = x + β (9) is the shift group,
y = a·x (10) is the similarity group,
y = a·x + β (11) is the affine (linear) group,
y = (α·x + β)/(γ·x + δ) (12) is the projective (fractional-linear) group,
y2 ≥ y1 for x2 ≥ x1, or y2 ≤ y1 for x2 ≥ x1 (13) is the group of monotonous transformations.

all these transformations, except for (13), have invariants. such an invariant for (12), which includes all the previous groups (8) through (11), is the complex ratio of four points on a straight line [12, 13].
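the invariance of the complex (double) ratio (2) under the projective group (12) is easy to verify numerically; a minimal sketch with arbitrary illustrative points and coefficients:

```python
def double_ratio(x1, x2, x3, x4):
    """Complex (double) ratio of four collinear points, eq. (2)."""
    return ((x3 - x1) / (x2 - x3)) / ((x4 - x1) / (x2 - x4))

def projective(x, alpha=2.0, beta=1.0, gamma=0.5, delta=3.0):
    """A member of the projective group (12): y = (alpha x + beta) / (gamma x + delta)."""
    return (alpha * x + beta) / (gamma * x + delta)

points = [1.0, 2.0, 5.0, 7.0]
print(double_ratio(*points))                           # before the transformation
print(double_ratio(*[projective(x) for x in points]))  # identical: v is invariant
```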
for (11), the invariant is the simple ratio of three points on a straight line, (x2 − x1)/(x3 − x2); for (9) and (8), besides the two indicated invariants, there is x2 − x1, the usual euclidean distance between two points lying on the coordinate axis. the last two types of transformations are non-linear in general. if the nonlinearity is small, then most commonly the corresponding experimental dependence can be satisfactorily approximated by a fractional-linear function [12], [14] - [16]. the remarkable property of the latter is that it belongs to the group of projective transformations while its form can vary a lot. such a transformation can be visualized as an image of a projection of the input scale onto the output scale. the group property is expressed in the fact that the superposition of a series of fractional-linear functions is again a fractional-linear function and does not lead to higher complexity (figure 1), while the inverse transformation is also fractional-linear. thus, a unified mathematical description becomes possible both for the intermediate transformations in the channel and for the whole transformation. for significantly nonlinear transformations, or for improved accuracy of the approximation, several fractional-linear functions should be used. they are either summed up or applied to sequential sections of the characteristic (piecewise approximation). in the summation case the output value is obtained as the sum of the results of fractional-linear transformations. to make this possible, the following conditions must be met: the result of the approximation by the function

y = q(x)/p(x) = ( Σ_{i=0..m} a_i x^i ) / ( Σ_{i=0..n} b_i x^i ) (14)

has single roots in the denominator, and m does not exceed n by more than 1. if y is a proper fraction, then it can be converted into the sum Σ_{i=1..n} A_i/(x − α_i), where α1, …, αn are the roots of the denominator, and the coefficients are found from

A_i = q(α_i)/p′(α_i) , (15)

where

p′(α_i) = p′(x)|_{x=α_i} . (16)

if m = n, then a constant is added to the sum of the fractions as a result of extracting the integer part. if m = n + 1, then a linear function is added. in both instances we deal with a particular case of a fractional-linear function. if any of the roots of the denominator are complex, nothing changes in principle, but some of the summable fractional-linear functions turn out to be complex. at the same time, all their remarkable properties are sustained, including the presence of the invariant, the complex ratio of four arbitrary points [(x3 − x1)/(x2 − x3)] / [(x4 − x1)/(x2 − x4)]. an example is given by the results of approximating the calibration characteristics of two temperature sensors. let the first one be a platinum thermoresistor (its characteristic is utilized to model the international practical temperature scale). in the range of −259 °c ÷ +660 °c, we obtain

w = −2.244·10⁴/(t + 2.768·10³) + 2.925/(t + 280.063) − 7.227·10⁴/(t − 7.947·10³) , (17)

where w is the ratio of the resistance at temperature t in °c to the resistance at zero celsius. the standard uncertainty of this approximation is 0.7 °c, which corresponds to 0.08 % with respect to the temperature range and is considered acceptable for most practical cases.
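as a plausibility check of (17) as reconstructed above (three fractional-linear terms), the sketch below evaluates w at a few temperatures; by the definition of w as a resistance ratio referenced to 0 °c, the value at t = 0 must be close to 1.

```python
def w_approx(t):
    """Eq. (17): sum of three fractional-linear terms for the platinum thermoresistor."""
    return (-2.244e4 / (t + 2.768e3)
            + 2.925 / (t + 280.063)
            - 7.227e4 / (t - 7.947e3))

for t in (-200.0, 0.0, 100.0, 660.0):
    print(f"t = {t:6.1f} degC -> W = {w_approx(t):.4f}")  # W(0) comes out ~1.00
```

small residual deviations from the published platinum characteristic are expected here, since the coefficients are quoted to only four significant digits.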
let the second sensor be a pt/rh thermocouple with 30 % / 6 % rh content. in the range of 0 °c ÷ 1800 °c, its characteristic is approximated by the expression

e = (−4.514·10⁶ + 5.287·10⁷ i)/(t − 186.827 − 2.807·10³ i) + (−4.514·10⁶ − 5.287·10⁷ i)/(t − 186.827 + 2.807·10³ i) − 4.86·10⁸/(t − 1.302·10⁴) , (18)

in which the first two terms form a complex-conjugate pair, so that e is real; the standard uncertainty equals 0.3 %. in the case of piecewise fractional-linear approximation there are no restrictions with respect to accuracy (uncertainty), but it is more difficult to implement. whichever fractional-linear approximation case is chosen, the need for mathematical methods is limited to the four arithmetic operations. using invariance fits the purpose of measurement, which is not about transformation but rather about preserving information. indeed, in order to restore the characteristics of the original signal using the measured characteristics of the converted signal, some kind of relationship between the signals has to be retained during the chosen transformation. from this perspective, the measuring transducer should be called a transmitter rather than a transducer; i.e., in this case the name is not associated with the main property of the object but reflects its secondary property instead. this happens because the transfer and the transformation of the quantity value (including scaled transformation, i.e. energy-level transformation) correlate to each other in the same way as an essence and a phenomenon; in other words, a dualism takes place. it is the transformation, not the transfer of the value, that is visible to an observer. as in other similar cases, the object was named for its superficial rather than essential property. ideally, in all types of measurement transformations, such as the quantity-type transformation, identity transformation, modulation and demodulation, transformation of the form of representation (e.g. analog-to-digital), code conversions, etc., the amount of information remains intact. in these transformations, errors mean the loss of information, and it is the degree of this loss, not the type of transformation, that determines the quality of a measuring channel.

figure 1. the circuit of fractional-linear transformations in a measuring channel: y1 = (a1·x + b1)/(c1·x + 1), yi = (ai·yi−1 + bi)/(ci·yi−1 + 1), …, yn = (an·yn−1 + bn)/(cn·yn−1 + 1), with the overall transformation y = (a·x + b)/(c·x + 1).

perhaps the reason for the difficulty with the classification of measuring transducers, which has still not been resolved, is that all the variants of such a classification known so far are created on that side of the above-mentioned dualism that characterizes the phenomenon rather than the essence. in other words, what we are trying to do is classify the types of transformations, whereas what we should do is classify the types of information preservation. this situation, which occurs when fractional-linear and related transformations are used, is consistent with the general concept of the measurement procedure. the very first measurement procedures, for example when length was measured, consisted of two stages: the mutual displacement of the measured object and the measure, and their comparison with each other. historically, this original essence of measurement is now perceived only through its second stage, while the first stage is actually no less significant. it is worth highlighting that the omnipresent mechanical displacements of early measurements are nowadays replaced by measurement transformations.
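the group property behind figure 1, i.e. that a chain of fractional-linear stages collapses into a single fractional-linear map, can be demonstrated by representing each stage y = (a x + b)/(c x + 1) as a 2 × 2 matrix and multiplying; a minimal sketch with illustrative coefficients:

```python
import numpy as np

def as_matrix(a, b, c, d=1.0):
    """y = (a x + b)/(c x + d) represented by the matrix [[a, b], [c, d]]."""
    return np.array([[a, b], [c, d]])

def apply(m, x):
    (a, b), (c, d) = m
    return (a * x + b) / (c * x + d)

m1 = as_matrix(2.0, 1.0, 0.3)   # first stage of the channel
m2 = as_matrix(0.5, -1.0, 0.1)  # second stage

x = 4.0
print(apply(m2, apply(m1, x)))  # stage-by-stage transformation
print(apply(m2 @ m1, x))        # single composed map: the same value
```

the matrix product m2 @ m1 is again of the same kind (after normalizing by its lower-right entry it matches the form y = (a·x + b)/(c·x + 1)), so the superposition stays within the projective group, and inverting the matrix yields the inverse fractional-linear transformation.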
the analogies between displacements in an ordinary space and measurement transformations can be formalized even further. in length measurements, the correlation we wish to preserve is seen as the distance between points, and this distance remains unchanged no matter what the shifts and turns are. if we define the distances in terms of the values that are preserved during such transformations for the set of all possible signals, we arrive at the geometrical interpretation of a measurement procedure as a transformation that preserves distance, i.e. as a "displacement" in the relevant space. with fractional-linear transformations (for example, when using a voltage divider), the complex ratio v is preserved. but the projective metric is thereby preserved too. since fractional-linear transformations preserve the projective metric, it is only natural to call them "displacements in a projective space". at the same time, they are the most general among all the above transformations (except for monotonous transformations) that preserve a distance. it can be assumed that it is this property that permits us to either correct errors effectively or implement algorithms that eliminate errors in the first place. in these terms, the stages of the measurement procedure consist of a displacement in a projective space and a comparison with a master reference, i.e. they coincide with an ordinary length measurement procedure. a typical example is measuring voltage with a bitwise-balancing voltmeter whose range is smaller than the voltage measured. in such a case the voltage is pre-attenuated by a divider. the divider produces the corresponding section of the voltage scale at its output, preserving its length in the projective metric, while the voltmeter makes the comparison. as a preliminary conclusion, we can state so far that: any functional measurement transformation belongs to the group of monotonous transformations; the most common monotonous transformation is the fractional-linear variety, which can describe projective correlations analytically and act as a displacement (whereas the dimensions of the moving object are invariant).

4. correspondence between metrological and geometric categories

table 1 shows the mutual correspondence of the fundamental metrological and geometric categories. any line in table 1 can be used as a departure point for further research. the concept of the space of affine connectivity takes the first place in the table among the geometrical concepts. it represents a diversity (in the sense of a manifold) in which the field of the connectivity object is defined. the term "diversity" generally needs to be defined; however, in this case we will skip that, as its meaning is obvious. as we know, the connectivity object characterizes the point of the diversity at which a local benchmark (a local frame; an affine benchmark referring precisely to the given point) is defined. in turn, the affine benchmark is a combination of the point itself and a coordinate basis. the connectivity object answers the question of how the coordinates of an arbitrary vector change as it is displaced along a certain curve while preserving orientation. in the general case, the coordinates will indeed change, because as the vector moves from one point to another, the local benchmark to which the vector is momentarily related changes too. the space of affine connectivity is poor in terms of properties, but it becomes richer once a metric is introduced in it by means of defining a metric tensor.
Then the space becomes Riemannian, a curved space of vectors. Such vectors represent physical quantities that characterize a specific physical object [17]. For such a vector space, a system of units can serve as a basis, since a unit of any quantity can be expressed via the base units of the system. A good example is a vector acceleration receiver on a moving ship. This receiver reads accelerations in an acoustic wave in water, and its purpose is to identify the locations of noise sources. The main part of the receiver is shown in Figure 2. Six flat piezoelectric plates are rigidly connected at their outside edges to pairs of strings, each pair fixed on the frame and running in one of three mutually perpendicular directions. The inner edges of the piezoelectric plates are joined perpendicularly to the faces of a cubic inertial element. When the frame experiences acceleration, the inertia force acting upon the cubic element can be resolved into components along the axes perpendicular to the planes of the piezoelectric plates. These component forces produce electrical charges. The axes perpendicular to the planes of the piezoelectric plates form a coordinate basis and, together with the centre of gravity of the cubic element, create a local benchmark. As the ship moves, pitches and rolls, the location and orientation in space of the local benchmark change, whereas the direction of the acceleration vector of the water particles in the acoustic beam remains the same. As a result, the projections of this vector onto the axes of the coordinate basis change, as determined by the connectivity object.

Table 1. Correspondence between metrological and geometric categories.

Metrological category | Geometric equivalent
An object, a measuring instrument with deterministic relationships | The space of affine connectivity, or a Riemannian space
Physical quantity | Point in the space, vector
System of units | Basis
Probability characteristics and statistical relationships of physical quantities | Metric tensor (determines the space geometry)
Analog measurement transformation | Affinor (determines the relationship of the vectors)
Analog-to-digital conversion | Vector subtraction
Preservation of the measurement information | Invariance

The connectivity object is a system of numbers called connectivity coefficients. If every connectivity coefficient turns into zero, the manifold becomes an affine space. Vectors can be defined in it, so the space is a vector space. An affine space is a model of any particular object whose regular physical properties can be described by simple additive relationships. The space describing a measuring instrument is generally multidimensional; an application of the apparatus of multidimensional spaces can be seen in [18]. In this space, the points, and the vectors connecting them to the origin, correspond to physical quantities. The basis of the space is the system of units. An affine space can also be identified for an object with any other kind of regular physical properties, but only in an infinitesimal region and with an accuracy no greater than first order [19]. Let y = f(x_1, ..., x_n) be any function of n variables. Then

\mathrm{d}y = \frac{\partial y}{\partial x_i} \, \mathrm{d}x_i   (19)

(with implied summation over i) can be considered a vector, since the coordinates (\partial y / \partial x_i) \mathrm{d}x_i are, to first approximation, affine. The rule for adding such vectors, and consequently the space geometry, is determined by a metric tensor.
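Equation (19) is easy to make concrete. The short sketch below is illustrative only (the instrument function and the operating point are invented); it evaluates the partial derivatives numerically and treats the products (\partial y / \partial x_i) \mathrm{d}x_i as the affine coordinates of the vector dy.

import numpy as np

def f(x):
    # Illustrative two-input instrument function, y = x1 * sqrt(x2).
    return x[0] * np.sqrt(x[1])

x0 = np.array([2.0, 9.0])        # operating point
dx = np.array([0.01, -0.02])     # small input increments

# Numerical partial derivatives at x0 (central differences).
eps = 1e-6
grad = np.array([
    (f(x0 + eps * e) - f(x0 - eps * e)) / (2 * eps)
    for e in np.eye(2)
])

dy_coords = grad * dx            # affine coordinates (dy/dx_i) dx_i
dy = dy_coords.sum()             # first-order change of the output
print(dy, f(x0 + dx) - f(x0))    # the two values agree to first order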
Since the values should in general be considered random, the addition rule must take into account their probability characteristics and statistical relationships. As shown in [20], if the coordinates of the vectors are expanded uncertainties, then the metric tensor is determined by the types of probability distribution, the coverage probability, the ratio of the terms, and their mutual correlation. To date, the components of such a tensor for the most popular probability distributions and for coverage probabilities of 0.95 and 0.99 have been determined by A. Chepushtanov [20]. If the coordinate system is formed by standard uncertainties, the metric tensor is determined only by the mutual correlation. The analog measurement transformation, which takes the design parameters of the device and the influencing factors as input quantities, has as its geometrical equivalent the affinor, the rule that matches each vector dx with a certain vector dy. The affinor is a bivalent tensor given by a square matrix. Since the result of the analog-to-digital conversion is a number, the quantity obviously separates from the quality; in other words, the quantity is rid of its physical carrier. Taking the logarithm of the basic equation of measurements yields two vectors, one for the numerical value and another for the unit. We do not touch here on the quasi-uncertainty that arises when a unit is subjected to a logarithm; we note only that what is logarithmized is not the number "one" but a unit of a physical quantity, which can have a different real meaning. Thus, in the logarithmic representation, the geometric essence of the analog-to-digital conversion is that the unit vector is subtracted from the full quantity vector, which is incidentally nothing else but computing how many units of the quantity (or which part of the unit) is included in the dimension of the quantity. Finally, as stated above, the preservation of the measurement information characterizing the object corresponds conceptually to invariance. The information loss inevitable in any measurement transformation (the introduction of an uncertainty) means that this principle of correspondence is compromised. When searching for points of contact between modern geometry and measurements, special attention should be paid to non-Euclidean geometries. Riemann geometry is one of them, since its features are quite visible in the space of quantities. For instance, the ends of the logarithmic vectors of reciprocal quantities (such as resistance and conductance) are located at diametrically opposite points of a sphere centred at the origin of the coordinates. At the same time, there is practically no difference between them: they represent the same characteristic of the system. As is known, Riemann geometry is a spherical geometry with the additional condition that opposite points of the sphere are identified. The obvious similarity between the physical and the geometrical facts can hardly be accidental. Moreover, there is evidence that different geometries apply to different measurement ranges of the same physical quantities, which is only natural given the wide generality of their geometric space. Further research in this area seems promising, and new interesting results are expected. Other promising applications of projective geometry are in explaining the laws of physics and in image processing [22]. The basic concept here is projective mapping.
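For standard uncertainties the tensor reduces to the mutual correlation coefficient, so the geometric "vector sum" becomes the familiar combined-uncertainty formula. A minimal sketch, assuming two correlated input contributions with invented values:

import numpy as np

# Standard uncertainties of two input contributions and their correlation.
u = np.array([0.030, 0.040])   # illustrative values
r = 0.5                        # mutual correlation coefficient

# Metric tensor for standard uncertainties: off-diagonal terms are r.
g = np.array([[1.0, r],
              [r, 1.0]])

# Combined uncertainty as the length of the vector in this metric:
# u_c^2 = u^T g u = u1^2 + u2^2 + 2 r u1 u2.
u_c = np.sqrt(u @ g @ u)
print(u_c)                     # about 0.0608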
It is most frequently used as a visual image rather than a mathematical structure. Used in a strict geometric sense, this concept allows us to describe patterns of any level of complexity. The mapping parameters then logically become variable, as a consequence of the change in the position of the projection centre. This position, in turn, is affected by physical causes, which can now be identified and explored. Thus, the fundamental metrological categories have geometric equivalents. Problems in the area of measurements can be formulated using geometric terminology, and solving metrological problems seems possible if the powerful apparatus of modern geometry is used.

5. Conclusions

The most important geometric concepts have equivalents in measurement theory. This knowledge allows us to apply the geometric approach and apparatus to formulate a single mathematical description for important measurement categories, obtain new theoretical results, and model measuring procedures. In particular, projective transformations can be used in such modelling: thanks to their group properties, the characteristics of measuring devices can be described in a significantly simpler manner, while the present invariant allows us to increase the accuracy of measurements. A model of any physical object, including the measuring device itself, can be represented as a vector space whose elements, in turn, represent the quantities characterizing the object. This approach can be used in the metrological analysis of measuring devices, where an important role is given to the summation of uncertainties. For such a summation in the geometric model, a metric tensor of the space is used, and in the case of standard uncertainties this tensor morphs into the coefficient of mutual correlation. Thanks to the affinity of the concepts of geometry and measurement theory, measuring concepts and facts can be considered from a geometric standpoint and bring new interesting results.

Figure 2. The main part of a vector acceleration receiver.

References

[1] I. I. Newton, Mathematical Beginnings of Natural Philosophy, University of California Press, 1999, ISBN 978-0-520-08816-0.
[2] B. G. Kuznetsov, Einstein: Life, Death, Immortality, Nauka, Moscow, 1980 [in Russian].
[3] B. Mandelbrot, Fractal Geometry of Nature, Computer Research Institute, Moscow, 2002 [in Russian].
[4] The Encyclopedia of Mathematics, Sovetskaia Entsiklopediia, Moscow, 1977 [in Russian].
[5] A. A. Penin, Analysis of Electrical Circuits with Variable Load Regime Parameters (Projective Geometry Method), Springer, Cham Heidelberg New York Dordrecht London, 2015, ISBN 978-3-319-16351-2.
[6] A. S. T. Pires, A Brief Introduction to Topology and Differential Geometry in Condensed Matter Physics, Morgan & Claypool, 2019, ISBN 978-1-64327-371-6.
[7] B. F. Schutz, Geometrical Methods of Mathematical Physics, Cambridge University Press, Cambridge, 1980.
[8] M. Komatsu, Geometry Variety, Znanie, Moscow, 1981 [in Russian].
[9] O. L. Hölder, Die Axiome der Quantität und die Lehre vom Maß: Ber. über die Verhandlungen der Königlich Sächsischen Ges. der Wiss., Mathem.-Phys. Klasse, 1901, S. 1-65 [in German].
[10] E. M. Bromberg, K. L. Kulikovsky, Test Methods for Improving Measurement Accuracy, Energija, Moscow, 1978 [in Russian].
[11] F. C. Klein, A comparative review of recent researches in geometry, Bull. New York Math. Soc., N.Y., 1892-1893, pp. 215-249.
[12] H. T. Nguyen, V. Y. Kreinovich, C. Baral, V. D. Mazin, Group-theoretic approach as a general framework for sensors, neural networks, fuzzy control and genetic Boolean networks, 10th IMEKO TC7 Internat. Symposium, Saint Petersburg, Russia, 30 June - 2 July 2004, pp. 65-70. Online [accessed 22 June 2021]: https://www.imeko.org/publications/tc7-2004/imeko-tc7-2004-044.pdf
[13] I. N. Krotkov, V. Y. Kreinovich, V. D. Mazin, General form of measurement transformations which admit the computational methods of metrological analysis of measuring-testing and measuring-computing systems, Measurement Techniques 30 (1987), pp. 936-939. DOI: 10.1007/bf00864981
[14] O. A. Tsybulskii, Use of the complex ratio method in wide-range measurement devices, Measurement Techniques 56 (2013), pp. 232-234. DOI: 10.1007/s11018-013-0185-2
[15] O. A. Tsybulskii, The fractional-linear measurement equation, Measurement Techniques 60 (2017), pp. 443-450.
[16] O. A. Tsybulskii, Projective properties of wide-range measurements, Measurement Techniques 55 (2013), pp. 37-40. DOI: 10.1007/s11018-013-0155-8
[17] V. D. Mazin, Physical quantity as a pseudo-Euclidean vector, Acta IMEKO 4(4) (2015), pp. 4-8. DOI: 10.21014/acta_imeko.v4i4.268
[18] B. V. Shebshaevich, P. P. Dmitriev, N. V. Ivantsevich et al., Network Satellite Radio Navigation Systems, Radio i Svyaz, Moscow, 1993 [in Russian].
[19] P. K. Rashevsky, Riemannian Geometry and Tensor Analysis, Nauka, Moscow, 1967 [in Russian].
[20] V. D. Mazin, A. N. Chepushtanov, Application of a vector analytic model for metrological analysis of an infrared Fourier spectrometer, Measurement Techniques 51(2) (2008), pp. 152-157. DOI: 10.1007/s11018-008-9013-5
[21] O. A. Tsybulskii, Analog-to-digital conversion with a hyperbolic scale, Metrology 12 (1990), pp. 9-19 [in Russian].
[22] I. S. Gruzman, V. S. Kirichuk, V. P. Kosykh, G. I. Peretyagin, A. A. Spector, Digital Image Processing in Information Systems, Publishing House of the Novosibirsk State Technical University, Novosibirsk, 2002 [in Russian].

Beamforming in cognitive radio networks using partial update adaptive learning algorithm

ACTA IMEKO, ISSN: 2221-870X, March 2022, Volume 11, Number 1, 1-8

Md Zia Ur Rahman1, P. V. S. Aswitha1, D.
Sriprathyusha1, S. K. Sameera Farheen1

1 Dept. of Electronics and Communication Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Guntur-522502, Andhra Pradesh, India

Section: Research paper

Keywords: adaptive learning; bandwidth; cognitive radio; frequency; power transmission

Citation: Md Zia Ur Rahman, P. V. S. Aswitha, D. Sriprathyusha, S. K. Sameera Farheen, Beamforming in cognitive radio networks using partial update adaptive learning algorithm, Acta IMEKO, vol. 11, no. 1, article 30, March 2022, identifier: IMEKO-ACTA-11 (2022)-01-30

Section Editor: Md Zia Ur Rahman, Koneru Lakshmaiah Education Foundation, Guntur, India

Received December 4, 2021; in final form February 18, 2022; published March 2022

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Corresponding author: S. K. Sameera Farheen, e-mail: sksameera667@gmail.com

Abstract: Cognitive radio technology is a promising way to improve bandwidth efficiency. Frequencies that are currently unused can be exploited by the powerful resources of cognitive radio. One of the main capabilities of a cognitive radio is to detect the different channels available in the spectrum and to adapt to the frequencies that are heavily utilized. It allows unlicensed users to access licensed bandwidth under the condition that the licensed users are protected from harmful interference, i.e., from the secondary users. In this paper we implement cognitive radio using the beamforming technique, with power allocation as the strategy for the unlicensed transmitter, based purely on the sensing result, i.e., on the state of the primary user in the cognitive radio network; the unlicensed transmitter has a single antenna and adapts its transmit power. For the cognitive radio setup we use normalized adaptive learning algorithms. The application is particularly useful in medical telemetry, where wireless communication nowadays plays a vital role in healthcare; it removes the effort of building separate infrastructure for medical telemetry applications.

1. Introduction

Wireless communication techniques are now widely used in medical telemetry, generally to monitor a patient's condition via pulse, respiration, and similar parameters. The Wireless Medical Telemetry Service was first established by the Federal Communications Commission, which allocated dedicated frequency bands for this wireless purpose so that no conflicts arise in band allocation while monitoring a patient's condition [1]. Alternatively, the existing spectrum can be used for medical telemetry applications by means of cognitive radio, whose spectrum-sensing stage we implement here with the beamforming technique. Efficient utilization of the spectrum is the central concern of cognitive radio, which offers spectrum allocation to unlicensed users simultaneously with licensed users. The secondary users therefore need to detect spectrum availability through spectrum sensing, i.e., by sensing the primary user on the same frequency [2], [3]. The main purpose of introducing the beamforming technique is to remove unwanted noise in systems such as military sonar and radar; it separates sources whose overlapping frequency content originates at different spatial locations [4]-[6]. This technique is mainly used for sensing: in the sensing phase, the secondary-user transmitter estimates the licensed-user transmitter, and spectrum allocation is carried out according to that estimate [5]-[8]. The technique also secures the exchange of information within a cell and the neighbouring cells. The interference that occurs in the signals is mainly due to imperfect spectrum sensing, because of which secondary users freely access the primary users' channels [9], [10]; at the same time, the channel must be returned before the secondary users' access expires. Beamforming is thus very effective for sensing the spectrum without interference. A cognitive radio network is one of the important systems in broadband communication. The beamforming technique [11], [12] proposed in this paper is used to protect the information exchanged inside the cells and in the outer cells that are in use
by all the users. It has also been proved analytically that beamforming is a smooth technique for spectrum sensing: in [13], statistical tests comparing the maximum-to-minimum eigenvalue and maximum-to-minimum beam energy algorithms led, on the basis of the statistical values, to the conclusion that beamforming is a smooth sensing technique. The sensors of the secondary users are used to estimate the directions of arrival (DOAs) of their particular primary user (PU). The secondary users access the primary users' channels freely, and the licensed users simultaneously intend to get the channel back before the unlicensed users' access expires. Finally, a fusion centre combines the DOAs into a primary-user localization estimate. The resulting implementation shows that a localization system with low complexity and good primary-user localization capability is possible in this cognitive network; the PU localization capability is used to mitigate interference to the PU [14]. Generally, wireless communication services place rectifiers at every junction point to manage the events of the upcoming technologies that profit from the abstraction property, gaining from modern antenna arrays and from algorithms that form adaptive beams [15]. The process we use is based purely on the least mean square (LMS) algorithm, which gives brief and proper care to the signal model adopted for the beamforming technique. To obtain a convergence rate better than that expected of the plain LMS used in smart antenna systems, the BBNLMS (block-based normalized least mean square) algorithm has been used; this algorithm makes a substantial difference in convergence rate. It operates in the presence of different effects and many users, which is examined through MATLAB simulations using different white signals. Among the adaptive beamforming algorithms, the LMS formula was used only in the smart antenna systems [16]; in place of quick exploitation, the conventional squared error vector is used [17].
Although many methods have been proposed and implemented with new ideas for the DOA (direction of arrival) of a signal, the adaptive array was first introduced by Van Atta in 1959 as a self-phased array [18]: it reflects all incident signals back in the arrival direction using a clever phasing scheme. Beamforming algorithms are divided into various classes, mainly fixed-weight beamforming and adaptive beamforming algorithms. An adaptive algorithm updates the array weights continuously, optimizing over the changing signal environment. This section briefly discusses the LMS, recursive least squares, conjugate gradient, and quasi-Newton algorithms [19]. Data transmission over a communication channel with a low probability of bit error is possible up to a certain bit rate for a given SNR (signal-to-noise ratio). Therefore, in addition to the adaptive algorithm, channel coding schemes are considered to increase the noise immunity of the wireless system. Such coding is a series of transformations of the original bit sequence that makes the information flow more resistant to the degradation of the transmitted information caused by noise, interference, and signal fading; channel coding allows a compromise between bandwidth and the probability of bit error [20]. This paper examines the direction of arrival of signals under different error rates, and the simulation results provided at the end give evidence supporting the proposed technique.

2. Methodology

The networks used in this paper, the cognitive radio network (CRN) and non-orthogonal multiple access (NOMA), are widely used in 5G broadband communication systems. One advantage for users of a cognitive radio network is the protection of information from devices such as multiple-input multiple-output NOMA. In this paper we implement a substitute for the existing beamforming technique to protect the data exchanged inside the cells and in the outer cells among different users; the system model is shown in Figure 1. The interference is caused by faulty signals of the unlicensed users, who intend to use the availability of the licensed users; the licensed user, though, reclaims the channel before the unlicensed user's access terminates. Adaptive antenna systems combine multiple antenna elements with signal-processing capability, instinctively improving their radiation or reception pattern in response to the signal environment. The method proposed in this paper, based on a partial update least mean square (PU-LMS) algorithm, controls the computational overload and reduces power consumption in the implementation of an adaptive filter; how the adaptive filtering algorithm improves the filter coefficients is shown in Figure 2. Hence our proposed technique provides better results. The LMS algorithm is popular in adaptive beamformers using antenna arrays; it is also used for channel equalization to combat inter-symbol interference. Other applications of LMS include interference and echo cancellation, space-time modulation and coding, and processing of the observed signals.
Whereas existing algorithms such as recursive least squares and least mean square already have fast convergence rates and are admired for their ease of implementation and computational cost, the partial update approach is one of the effective methods for reducing power consumption and computational load in adaptive filter implementations, which is appealing in mobile communications.

Figure 1. Cognitive system model.
Figure 2. Overview of project implementation.

The block diagram and overview of the proposed algorithm are shown in Figure 3 and Figure 4, respectively. Many mobile communication applications, such as channel equalization and echo cancellation, require an adaptive filter with a large number of coefficients. Updating the entire coefficient vector is costly in terms of RAM, capacity, and computation, and sometimes does not fit mobile units. Finally, in this paper we present the convergence analysis of the partial update adaptive learning (PU-AL) algorithm under different types of suppositions; when divergences appear, they can be prevented by updating different coefficients sequentially, which is known as sequential partial update adaptive learning (SPU-AL).

2.1. Cyclostationary input signals

A cyclostationary process is one whose statistics vary cyclically with time; it can be viewed as several interleaved stationary processes. It also admits a detection method based on estimating the spectral autocorrelation function for analysing spectrum sensing: whether the band is active and whether the signal is used by licensed users can be sensed easily, because this technique is robust in sensing. The sequential PU-AL algorithm shows how the coefficients are updated in simulation, together with the associated samples of the signal used in every update, with the value of the reference signal at the present instant; its flow chart is shown in Figure 4. An upper bound on the step size of the PU-AL algorithm is obtained by simulation when the input is a periodic reference signal consisting of many harmonics:

E\{h(l + T)\} = E\{h(l)\}, \quad E\{h(l + T) h(l + T + n)\} = E\{h(l) h(l + n)\}   (1)

\mathbf{y}(l) = [y(l), y(l-1), y(l-2), \dots, y(l - N + 1)]^T, \quad B_y(l) = E\{\mathbf{y}(l) \mathbf{y}^T(l)\} = \mathrm{diag}[\sigma_y^2(l), \dots, \sigma_y^2(l - N + 1)]   (2)

\sigma_y^2(l) = \begin{cases} M_1 & \text{for } iT < l \le iT + \alpha T \\ M_2 & \text{for } iT + \alpha T < l \le (i + 1)T \end{cases}   (3)

for 0 < \alpha < 1 and i = \dots, -4, -3, -2, -1, 0, 1, 2, 3, 4, \dots, and for a sinusoidal power variation in time

\sigma_y^2(l) = \beta (1 + \sin(\omega_0 l)).   (4)

Here \beta is larger than zero, \omega_0 falls between 0 and \pi, and \omega_0 / (2\pi) is a rational number. If the period of the sinusoidal power variation in time is more than three, then y(l) is not a stationary input signal.

2.2. Sequential partial update adaptive learning algorithm

The desired signal is denoted by d(l), the desirable weight vector by \omega_o, and the measurement noise by v(l):

d(l) = \mathbf{y}^T(l) \omega_o + v(l)   (5)

e(l) = d(l) - \mathbf{y}^T(l) u(l)   (6)

u(l + 1) = u(l) + \mu e(l) I_K(l) \mathbf{y}(l).   (7)

Here u(l) is the weight vector (the adjustable filter coefficient vector), e(l) = d(l) - \mathbf{y}^T(l) u(l) indicates the error of the method (whatever algorithm is taken), and the coefficient selection matrix is

I_K(l) = \mathrm{diag}[i_1(l), i_2(l), \dots, i_N(l)]   (8)

with A = N / K.
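A minimal sketch of the sequential partial-update step in equations (5)-(8), with invented dimensions (N = 8 taps, K = 2 coefficients updated per iteration, so A = N/K = 4 subsets rotating in turn, which is the mechanism analysed in the case studies below); the input here is white only for simplicity.

import numpy as np

rng = np.random.default_rng(0)
N, K = 8, 2                 # filter length, coefficients updated per step
A = N // K                  # number of disjoint coefficient subsets
mu = 0.05                   # step size
w_o = rng.standard_normal(N)      # unknown optimal weights (eq. 5)
u = np.zeros(N)                   # adaptive weights u(l)

x = rng.standard_normal(4000)     # input samples
for l in range(N, len(x)):
    y = x[l - N:l][::-1]                        # regressor y(l)
    d = y @ w_o + 0.01 * rng.standard_normal()  # desired signal plus noise
    e = d - y @ u                               # error, eq. (6)
    s = l % A                                   # sequential subset index
    mask = np.zeros(N)
    mask[s * K:(s + 1) * K] = 1.0               # diagonal of I_K(l), eq. (8)
    u += mu * e * mask * y                      # partial update, eq. (7)

print(np.linalg.norm(w_o - u))   # small after adaptation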
Here A is taken as an integer, and %(l, A) denotes the modulo operation that gives the remainder of the division of l by A. The coefficients satisfy

\sum_{j=1}^{N} i_j(l) = K, \quad i_j(l) \in \{0, 1\},

and the coefficient subsets E_i are disjoint, meeting the following requirements:
1. \bigcup_{i=1}^{A} E_i = Z, where Z = \{1, 2, \dots, N\};
2. E_i \cap E_j = \emptyset for all i, j \in \{1, 2, 3, \dots, A\}, i \ne j.

2.3. Performance analysis

For the input signal,

\tilde{w}(l + 1) = \tilde{w}(l) - \mu e(l) I_K(l) \mathbf{y}(l)   (9)

where

\tilde{w}(l) = \omega_o - u(l).   (10)

Using (10),

e(l) = \mathbf{y}^T(l) \tilde{w}(l) + v(l)   (11)

is obtained. Putting (11) into (9),

\tilde{w}(l + 1) = (I - \mu I_K(l) \mathbf{y}(l) \mathbf{y}^T(l)) \tilde{w}(l) - \mu v(l) I_K(l) \mathbf{y}(l).   (12)

Taking the expectation on both sides of (12), we get

E\{\tilde{w}(l + 1)\} = (I - \mu E\{I_K(l) \mathbf{y}(l) \mathbf{y}^T(l)\}) E\{\tilde{w}(l)\}.   (13)

The time-varying variance model is

\sigma_y^2(l) = \begin{cases} M_1 & \text{if } \mathrm{mod}(l, T) = 1 \\ \dots \\ M_{T-1} & \text{if } \mathrm{mod}(l, T) = T - 1 \\ M_T & \text{if } \mathrm{mod}(l, T) = 0. \end{cases}   (14)

We take the set M_1, M_2, ..., M_T to have one large value (like 1) and one tiny value (like 0.001), which ensures that both (3) and (4) are covered by (14). Between the sequential partial-update parameter A and the input-signal period T, six cases reflect all potential scenarios.

2.3.1. Case study 1: T \le A and mod(A, T) = 0

In this case (13) can be rewritten as

E\{\tilde{u}_j(l + 1)\} = (1 - \mu i_j(l) \sigma_y^2(l - j + 1)) E\{\tilde{u}_j(l)\}   (15)

where \tilde{u}_j(l) is the j-th entry of \tilde{w}(l).

Figure 3. Overview of the partial update algorithm.

By (7), the recursion (15) changes only every A iterations. Combining A iterations of (15),

E\{\tilde{u}_j(l + A)\} = (1 - \mu \sigma_y^2(l_j - j + 1)) E\{\tilde{u}_j(l)\}   (16)

where l_j is the integer satisfying l \le l_j < l + A at which the j-th coefficient is updated. Let the parameter d_j = l_j - \lfloor l_j / A \rfloor A indicate which entry of E\{\tilde{u}(l)\} is upgraded; the function \lfloor y \rfloor converts y to the largest integer \le y. For a given sequential PU parameter A, d_j depends only on the value of j. Since mod(A, T) = 0, we have \sigma_y^2(l_j - j + 1) = M_{t_j}, where t_j = c(\tilde{t}_j), \tilde{t}_j = d_j + N - j + 1 - \lfloor (d_j + N - j + 1) / T \rfloor T, and c(y) = y - T |\mathrm{sign}(y)| + T. Substituting \sigma_y^2(l_j - j + 1) = M_{t_j} into (16), we get

E\{\tilde{u}_j(l + A)\} = (1 - \mu M_{t_j}) E\{\tilde{u}_j(l)\}   (17)

where t_j is an integer. Looping (17), we get

E\{\tilde{u}_j(l + (b + 1)A)\} = (1 - \mu M_{t_j}) E\{\tilde{u}_j(l + bA)\}.   (18)

In this instance b is a positive integer. According to (17) and (18), E\{\tilde{u}_j(l + (b + 1)A)\} depends on M_{t_j}. If M_{t_j} of the cyclostationary input signal is very small, convergence is very slow, as explained in [11]. Since the input signal changes only partially at every iteration, this affects the convergence of \tilde{u}_j(l) towards the update. Hence, in this case the sequential PU-AL will certainly meet these tough conditions.

2.3.2. Case study 2: T \le A, mod(A, T) \ne 0 and the greatest common divisor gcd(A, T) = 1

Here gcd(A, T) represents the greatest common divisor of A and T. Now \sigma_y^2(l_j - j + 1) = M_{t_j(l)}, where t_j(l) is an integer declared as t_j(l) = c(\tilde{t}_j(l)) and \tilde{t}_j(l) = l_j - \lfloor l_j / T \rfloor T; thus t_j(l) depends on the values of both j and l. Hence (17) becomes

E\{\tilde{u}_j(l + A)\} = (1 - \mu M_{t_j(l)}) E\{\tilde{u}_j(l)\}.   (19)

Applying (19) again after A further iterations, we get

E\{\tilde{u}_j(l + 2A)\} = (1 - \mu M_{f(t_j(l))}) E\{\tilde{u}_j(l + A)\}   (20)

with

f(t_j(l)) = \begin{cases} t_j(l) + \mathrm{mod}(A, T) & \text{if } t_j(l) + \mathrm{mod}(A, T) \le T \\ t_j(l) + \mathrm{mod}(A, T) - T & \text{otherwise.} \end{cases}
(21)

Since T \le A, mod(A, T) \ne 0 and gcd(A, T) = 1, looping (21) T times we get

E\{\tilde{u}_j(l + TA)\} = (1 - \mu M_{f \dots f(t_j(l))}) E\{\tilde{u}_j(l + (T - 1)A)\} = (1 - \mu M_1) \cdots (1 - \mu M_T) E\{\tilde{u}_j(l)\}   (22)

where f \dots f(t_j(l)) represents the composition of f(\cdot) T times. In (22) we can observe that the update process of the input signal involves all T variances \{M_1, M_2, \dots, M_T\}. As a consequence, the strategy (sequential PU-AL) does not run into difficulty in this scenario. The PU-LMS is stable if the step size meets the requirement

0 < \mu \le 2 / \max(M_1, M_2, \dots, M_T).   (23)

2.3.3. Case study 3: T \le A, mod(A, T) \ne 0 and gcd(A, T) > 1

The least common multiple of A and T is represented by lcm(A, T); clearly lcm(A, T) < A T. Here the variance becomes \sigma_y^2(l_j - j + 1) = M_{\xi_j(l)}, where \xi_j(l) = c(\tilde{\xi}_j(l)) and \tilde{\xi}_j(l) = l_j - \lfloor l_j / T \rfloor T. Then (17) becomes

Figure 4. Flow chart of the partial update adaptive learning algorithm.

E\{\tilde{u}_j(l + A)\} = (1 - \mu M_{\xi_j(l)}) E\{\tilde{u}_j(l)\}.   (24)

Looping equation (24), we get

E\{\tilde{u}_j(l + 2A)\} = (1 - \mu M_{f(\xi_j(l))}) E\{\tilde{u}_j(l + A)\}.   (25)

Since T \le A, mod(A, T) \ne 0 and gcd(A, T) > 1, looping equation (25) lcm(A, T)/A times gives

E\{\tilde{u}_j(l + \mathrm{lcm}(A, T))\} = (1 - \mu M_{f \dots f(\xi_j(l))}) \times \dots \times (1 - \mu M_{\xi_j(l)}) E\{\tilde{u}_j(l)\}.   (26)

Since lcm(A, T)/A is less than T, there are one or more parameters in the set \{M_1, M_2, \dots, M_T\} that are never met in the update of E\{\tilde{u}_j(l)\}. If the parameters that are met in the update of E\{\tilde{u}_j(l)\} are all tiny values, the update of E\{\tilde{u}_j(l)\} will be very slow, resulting in slow convergence.

2.3.4. Case study 4: T > A and mod(T, A) = 0

In this case the variance becomes \sigma_y^2(l_j - j + 1) = M_{\xi_j(l)}, where \xi_j(l) is taken as a positive integer that takes on, with equal probability, the values \{\tilde{d}_j, \tilde{d}_j + A, \dots, \tilde{d}_j + T - A\}, where 0 < \tilde{d}_j \le A is related to d_j through \tilde{d}_j - d_j = z, with z = 0, 1, 2, .... Then we get

E\{\tilde{u}_j(l + A)\} = (1 - \mu M_{\xi_j(l)}) E\{\tilde{u}_j(l)\}.   (27)

Looping equation (27), we get

E\{\tilde{u}_j(l + 2A)\} = (1 - \mu M_{q(\xi_j(l))}) E\{\tilde{u}_j(l + A)\}   (28)

with

q(\xi_j(l)) = \begin{cases} \xi_j(l) + A & \text{if } \xi_j(l) + A \le T \\ \xi_j(l) + A - T & \text{otherwise.} \end{cases}

Since T > A and mod(T, A) = 0, looping equation (28) T/A times gives

E\{\tilde{u}_j(l + T)\} = (1 - \mu M_{q \dots q(\xi_j(l))}) E\{\tilde{u}_j(l + T - A)\}.   (29)

If all values in the set \{M_{q(\xi_j(l))}, \dots, M_{q \dots q(\xi_j(l))}\} are very tiny, the update of E\{\tilde{u}_j(l)\} may be very slow, and the sequential PU-LMS may show very slow convergence.

2.3.5. Case study 5: T > A, mod(T, A) \ne 0 and gcd(A, T) = 1

As in case 2, the sequential PU-LMS is stable if the step size obeys (23).

2.3.6. Case study 6: T > A, mod(T, A) \ne 0 and gcd(A, T) > 1

As in case 3, the sequential PU-LMS faces slow convergence.

Note: we learn from this that for input signals with periodically time-varying variance (see equations (2), (3), (4) and (14)), the sequential partial-update LMS method avoids the slow-convergence condition only in case 2 or case 5, the two situations in which A and T are coprime integers, with gcd(A, T) = 1. When A and T are not coprime, the sequential partial-update LMS algorithm may demonstrate very sluggish convergence, depending on how the repeating power levels are balanced.
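The note above is easy to reproduce numerically. The sketch below, illustrative only and reusing the update from section 2.2 (invented sizes: N = 8 taps, K = 2, hence A = 4), runs the sequential partial-update recursion on an input whose standard deviation switches periodically between a large and a tiny level with period T, and compares a coprime pair (T = 3, gcd(A, T) = 1) against a non-coprime pair (T = 2, gcd(A, T) = 2), where some coefficients are always driven by tiny-variance samples.

import numpy as np

def spu_lms_error(T, A=4, K=2, mu=0.05, steps=20000, seed=1):
    # Final weight-error norm of sequential PU-LMS for variance period T.
    rng = np.random.default_rng(seed)
    N = A * K
    w_o = rng.standard_normal(N)
    u = np.zeros(N)
    # Periodically time-varying standard deviation: one tiny, one large level.
    sigma = np.where(np.arange(steps) % T == 0, 0.001, 1.0)
    x = sigma * rng.standard_normal(steps)
    for l in range(N, steps):
        y = x[l - N:l][::-1]
        d = y @ w_o + 0.001 * rng.standard_normal()
        e = d - y @ u
        s = l % A
        mask = np.zeros(N)
        mask[s * K:(s + 1) * K] = 1.0
        u += mu * e * mask * y
    return np.linalg.norm(w_o - u)

print("gcd(4,3)=1:", spu_lms_error(T=3))   # coprime: converges
print("gcd(4,2)=2:", spu_lms_error(T=2))   # non-coprime: some taps stall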
3. Simulation results

In this simulation part, a signal with one DOA arrives at the base station at an angle of 60 degrees. Simulations are conducted for threshold values of 0.1, 0.5 and 1. For each threshold point, the interference rate is considered in terms of the number of samples used to reach the steady state. From the simulation results we can clearly see that the received signal converges at a mean square error of 0.0007. Generally, for α = 0.5 and 1 the steady state is reached faster than with the conventional LMS algorithm, and the delay period for these threshold points can be observed clearly. This delay corresponds to the samples observed before the adaptive antenna is ready to adapt. The improvement in the interference rate depends purely on the number of taps adapted.

3.1. One white signal, three DOAs

In this module, the effects of multipath in antenna systems are studied for various threshold conditions. Three multipath components with directions of arrival of 60, 30 and -20 degrees are transmitted to a base station with different sampling periods. Hence three signals arrive with time differences of t, t - 1 and t - 2, with amplitudes of 0.6, 0.75 and 1.0 respectively. Using the PU-AL algorithm, three different weight-updating equations are used for processing each multipath signal. These simulation results are conducted at threshold values of 0.1, 0.2 and 1. Interference tables for each multipath signal are given in terms of the number of samples needed to reach the steady state. For each multipath signal the mean square error lies approximately at 0.006, 0.0014 and 0.00036, respectively; the simulation output is shown in Figure 5.

Figure 5. Beam pattern of one white signal with 3 DOAs using PU-AL for α = 0.1.
Figure 6. Received error signal of one white signal with three DOAs using PU-AL for α = 0.1.

Beam patterns of the proposed PU-AL algorithm are shown in Figure 6. From these we can clearly state that the proposed algorithm has a better ability to steer the beams in different directions, with nulls placed at the locations of the interferences. The gain of the beam corresponds to the gain introduced by each multipath signal.

3.2. Two white signals with one DOA each

Transmitting one signal with two multipath components has an effect similar to two white signals with one DOA each. In these cases, the two signals are uncorrelated with each other and separated by one sample period. The first signal has an amplitude of 0.5 and the second an amplitude of 1.0. The interference table of each signal in terms of the number of samples is shown; from it we learn that smaller gain amplitudes lead to longer responses for adapting the taps and for the estimated signals. For α = 0.1, convergence is faster than for 0.2 and 1. A delay is created for these values, so the response takes longer before the taps adapt. These threshold points can adapt a limited number of taps, which affects system performance; the simulation output is shown in Figure 7. The beam pattern shown in Figure 8 has two beams corresponding to the DOAs at 60 and -25 degrees. This demonstrates a smart antenna system with desired signals and interfering ones.
3.3. Beamforming technique as sensing technique in a cognitive radio network

In this work we attempt to use the beamforming approach in cognitive radio for spectrum sensing. This beamforming approach may be applied in two ways, centralized or distributed; we have used the distributed strategy here. With beamforming, a cognitive radio can direct a specific beam towards a specific receiver while reducing involvement in surrounding directions, improving network performance. The distributed method gives each user a separate antenna, and several of them broadcast the signal together by manipulating the transmitter's carriers. The interference to the authorized users is lessened when this method is used. Cognitive radio can also extend the communication range by employing this beamforming approach, since the signal beam is steered vigorously in the appropriate direction within the spectrum. Further benefits of this beamforming technology are reduced delay spread, multipath fading, and co-channel radio interference, among other things. The diagram depicted in the illustration is a geometrical representation of a cognitive radio network at the intended receiver location, which includes licensed users. There are K cognitive users uniformly scattered over a disc with radius R and centre P. Assume that the position of a cognitive radio user is (s_k, ψ_k) in polar coordinates; similarly, the receiver is represented in the specified spherical coordinates. We assume that the cognitive radio nodes are uniformly dispersed across the disc and that a single antenna is provided for each node. Because the channel between the users and the receiver is line of sight, there is no shadowing. The primary (licensed) transmitters are in the far zone of the beam pattern, while the receivers are in the near zone. The simulation depicts the statistical distribution of the radiation pattern in the lobe containing the direction of greatest radiation strength (the main lobe) and in the lobes that do not contain it (the sidelobes); the latter are generally rays aimed in unfavourable directions. These power levels are examined by running 10,000 trials to create a beam pattern. The disc radius normalized by wavelength is R/λ = 2, the azimuth angle is 0 degrees, and the elevation angle is π/2. The cognitive users follow a uniform distribution, with numbers such as 4, 7, 16, 100 and 256 being used. The loop signal-to-noise ratio (SNR) of the phase-locked loop output variance is set to 2 dB, 3 dB and 10 dB, as shown in Figure 9. Two licensed (principal) users are presumed present at angles of 20 degrees and 30 degrees.

Figure 7. Received error signal of two white signals with one DOA using PU-AL for α = 0.1.
Figure 8. Beam pattern of two white signals with one DOA using PU-AL for α = 0.1.
Figure 9. Average power vs. direction of antenna.

The beam pattern of the phase-only distributed beamforming (PODB) approach does not have a perfect phase in the diagram above. In this situation the number of users is 100. The average gain for a loop SNR of 10 dB is 0 dB at an angle of 0 degrees, whereas the gain reduces marginally for loop SNRs of 2 dB and 3 dB. Because of the imperfect phase, the main lobe is at its apex.
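A minimal Monte Carlo sketch of the distributed beam pattern described above, under the stated assumptions (nodes uniform on a disc of normalized radius R/λ = 2, a single antenna per node, line-of-sight channel); the phase-jitter standard deviation used to stand in for the imperfect loop phase is an invented illustrative parameter, and the node positions are projected onto one axis to evaluate the azimuth-plane pattern.

import numpy as np

rng = np.random.default_rng(0)
K = 100                      # number of cognitive users
R = 2.0                      # disc radius normalized by wavelength
trials = 1000                # Monte Carlo trials (the paper uses 10,000)
theta = np.deg2rad(np.linspace(-90, 90, 361))
phase_jitter = 0.3           # rad; stands in for the imperfect loop phase

avg_power = np.zeros_like(theta)
for _ in range(trials):
    # Uniform node positions on the disc (sqrt gives uniform area density).
    r = R * np.sqrt(rng.random(K))
    psi = 2 * np.pi * rng.random(K)
    x = r * np.cos(psi)                      # projection onto the array axis
    jitter = phase_jitter * rng.standard_normal(K)
    # Array factor: each node phase-aligned towards 0 deg up to its jitter.
    phases = 2 * np.pi * np.outer(np.sin(theta), x) + jitter
    af = np.abs(np.exp(1j * phases).sum(axis=1)) ** 2 / K**2
    avg_power += af / trials

peak_db = 10 * np.log10(avg_power[theta == 0])
print("average gain at 0 deg:", peak_db, "dB")  # near 0 dB for small jitter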
The users are at 20-degree and 30-degree angles, respectively, therefore the sidelobe power is approximately equal to -20 dB. The cumulative complementary distribution function (CCDF) of PODB with phase offset is shown for K values of 3, 8 and 15. The proportion of the beam pattern equal to or greater than a certain power level is shown in Figure 10. For the CCDF computation we ran 10,000 simulations. For a loop SNR of 10 dB, the angle is set to 0 degrees. For values of 4, 7 and 16, the main lobe is at its highest. Finally, we employed the sequential partial update in this case. The least-square technique is used to repair errors caused by altering antenna element placements; the error between the actual and expected responses is reduced with this approach.

Figure 10. CCDF vs. instantaneous power.

4. Conclusion

This paper has analysed the spectrum-sensing issue using the beamforming technique, employing the sequential PU-AL algorithm for the input signal. The input signal is taken as a cyclostationary signal with white Gaussian noise; it is analysed using PU-AL in different case studies, which also show how the sequential PU-AL tolerates the convergence-related issues. We have implemented many new things as a result of our project, to keep up with technological advancements; nowadays we rely on online sources for everything, so utilizing the spectrum that is available, not only now but also in the future, saves money. By employing this approach, stricter bounds on the rate of convergence can be achieved in the future. For stationary signals, a mean update equation of PU-AL can be developed, and the techniques that can be used to analyse the performance of the max PU-AL algorithm can also be examined.

References

[1] S. Surekha, Md Zia Ur Rahman, Navarun Gupta, A low complex spectrum sensing technique for medical telemetry system, Journal of Scientific & Industrial Research, vol. 80, pp. 449-456, May 2021.
[2] J. Divya Lakshmi, Rangaiah L., Cognitive radio principles and spectrum sensing, Blue Eyes Intelligence Engineering & Sciences Publication, volume 8, August 2019, pp. 2249-8958.
[3] Omi Sunuvar, A study of distributed beamforming in cognitive radio networks, 2013.
[4] Numa Joy, Anu Abraham Mathew, Beamforming for sensing based spectrum sharing in cognitive radio network, International Journal of Engineering Research & Technology (IJERT), 2015.
[5] Hyils Sharon Magdalene Antony, Thulasimani Lakshmanan, Secure beamforming in 5G-based cognitive radio network, Symmetry, 2019. DOI: 10.3390/sym11101260
[6] Kais Bouallegue, Matthieu Crussiere, Iyad Dayoub, On the impact of the covariance matrix size for spectrum sensing methods: beamforming versus eigenvalues, IEEE, 2019. DOI: 10.1109/ISCC47284.2019.8969741
[7] Aki Hakkarainen, Janis Werner, Nikhil Gulati, Damiano Patron, Doug Pfeil, Henna Paaso, Aarne Mammela, Kapil Dandekar, Mikko Valkama, Reconfigurable antenna based DOA estimation and localization in cognitive radios: low complexity algorithms and practical measurements, 2014.
[8] Yu Sing Xiao, Danny H. K. Tsang, Interference alignment beamforming and power allocation for cognitive MIMO-NOMA downlink networks, IEEE, 2019. DOI: 10.1109/WCNC.2019.8885714
[9] Md. Zia Ur Rahman, V. Ajay Kumar, G. V. S. Karthik, A low complex adaptive algorithm for antenna beam steering, IEEE, 2011. DOI: 10.1109/ICSCCN.2011.6024567
[10] Janis Werner, Jun Wang, Aki Hakkarainen, Danijela Cabric, Mikko Valkama, Performance and Cramer-Rao bounds for DOA/RSS estimation and transmitter localization using sectorized antennas, IEEE Transactions on Vehicular Technology, volume 65, May 2016. DOI: 10.1109/TVT.2015.2445317
[11] N. J. Bershad, E. Eweda, J. C. M. Bermudez, Stochastic analysis of the LMS and NLMS algorithms for cyclostationary white Gaussian inputs, IEEE Trans. Signal Process., vol. 62, no. 9, pp. 2238-2249, May 2014.
[12] M. Z. U. Rahman, S. Surekha, K. P. Satamraju, S. S. Mirza, A. Lay-Ekuakille, A collateral sensor data sharing framework for decentralized healthcare systems, IEEE Sensors Journal, November 2021. DOI: 10.1109/JSEN.2021.3125529
[13] S. Surekha, Md Zia Ur Rahman, Spectrum sensing for wireless medical telemetry systems using a bias compensated normalized adaptive algorithm, International Journal of Microwave and Optical Technology, vol. 16, no. 2, 2021, pp. 1-10.
[14] S. Surekha, A. Lay-Ekuakille, A. Pietrosanto, M. A. Ugwiri, Energy detection for spectrum sensing in medical telemetry networks using modified NLMS algorithm, 2020 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), 2020, pp. 1-5. DOI: 10.1109/I2MTC43012.2020.9129107
[15] Shafi Shahsavar Mirza, Nagesh Mantravadi, Sala Surekha, Md Zia Ur Rahman, Adaptive learning based beamforming for spectrum sensing in wireless communications, International Journal of Microwave and Optical Technology, vol. 16, no. 5, 2021.
[16] Armando Coccia, Federica Amitrano, Leandro Donisi, Giuseppe Cesarelli, Gaetano Pagano, Mario Cesarelli, Giovanni D'Addio, Design and validation of an e-textile-based wearable system for remote health monitoring, Acta IMEKO, vol. 10, no. 2, 2021, pp. 1-10. DOI: 10.21014/acta_imeko.v10i2.912
[17] Ayesha Tarannum, Zia Ur Rahman, L. Koteswara Rao, T. Srinivasulu, Aimé Lay-Ekuakille, An efficient multi-modal biometric sensing and authentication framework for distributed applications, IEEE Sensors Journal, vol. 20, no. 24, 2020, pp. 15014-15025. DOI: 10.1109/JSEN.2020.3012536
[18] S. Y. Fathima, K. Murali Krishna, Shakira Bhanu, S. S. Mirza, Side lobe suppression in NC-OFDM systems using variable cancellation basis function, IEEE Access, vol. 5, no. 1, 2017, pp. 9415-9421. DOI: 10.1109/ACCESS.2017.2705351
[19] Imran Ahmed, Eulalia Balestrieri, Francesco Lamonaca, IoMT-based biomedical measurement systems for healthcare monitoring: a review, Acta IMEKO, vol. 10, no. 2, 2021, pp. 174-184. DOI: 10.21014/acta_imeko.v10i2.1080
[20] K. Murali Krishna, K. Krishna Reddy, M. Vasim Babu, S. S. Mirza, S. Y. Fathima, Ultra-wide band band-pass filters using plasmonic MIM waveguide based ring resonators, IEEE Photonics Technology Letters, vol. 30, no. 9, 2018, pp. 1715-1718.
DOI: 10.48084/etasr.4194

Introduction to the Acta IMEKO special issue on the 'IMEKO TC4 International Conference on Metrology for Archaeology and Cultural Heritage'

ACTA IMEKO, ISSN: 2221-870X, March 2022, Volume 11, Number 1, 1-2

Fabio Santaniello1, Michele Fedel2, Annaluisa Pedrotti1

1 LaBAAF Laboratorio Bagolini Archeologia, Archeometria, Fotografia; CeASUm - Centro di Alti Studi Umanistici, Dipartimento di Lettere e Filosofia, Università di Trento, via Tommaso Gar n. 14, 38122 Trento (Italy).
2 Dipartimento di Ingegneria Industriale, Università di Trento, via Sommarive n. 9, 38123 Trento (Italy).

Section: Editorial

Citation: Fabio Santaniello, Michele Fedel, Annaluisa Pedrotti, Introduction to the Acta IMEKO special issue on the 'IMEKO TC4 International Conference on Metrology for Archaeology and Cultural Heritage', Acta IMEKO, vol. 11, no. 1, article 2, March 2022, identifier: IMEKO-ACTA-11 (2022)-01-02

Received March 30, 2022; in final form March 30, 2022; published March 2022

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Corresponding author: Fabio Santaniello, e-mail: fabio.santaniello@unitn.it

Dear Readers,

This special issue of Acta IMEKO is the result of the 5th International Conference on Metrology for Archaeology and Cultural Heritage. The conference was originally planned to be held in Trento (Italy), hosted by the Department of Humanities of Trento University, on October 22-24, 2020, but owing to the sanitary emergency caused by COVID-19 the organisers decided to hold it online. Despite the unexpected situation, the conference was a great success, with 158 initial submissions, 126 accepted papers, 431 authors from 19 countries, 4 invited plenary speakers, 13 special sessions, 3 tutorial sessions and 11 patronages. Out of the numerous presented papers, a selection was made by the scientific committee to produce this issue. Particular attention has been paid to papers that inspire the exchange of knowledge and expertise between the "human sciences" and the "hard sciences". After the review process, seventeen papers have been accepted for publication, encompassing several research fields and methodological approaches. In more detail, several papers are devoted to material characterization by means of different analytical techniques, six of them focused on the analysis of archaeological artefacts. In particular, Zerai Gebremariam et al. analysed the pottery assemblage from the site of Adulis (Eritrea): colorimetric measurements show new insights into the technology and manufacturing used for ceramic production during the Roman age. An Egyptian wooden statuette stored in the Museo Egizio di Torino has been analysed by Vigorelli et al., who compared the results of a multi-analytical strategy based on both non-invasive and micro-invasive procedures to investigate the original artistic techniques and the ancient restorations of the artefact.
The paper by Stagno and Capuano compares micro-MRI, diffusion-NMR and portable NMR data to highlight the diagnostic features of Roman archaeological woods. Es Sebar et al. analysed different metal tools used during the construction of the Santa Maria del Fiore cupola in Florence; PCA analyses performed on XRF data make it possible to identify different alloys, depicting new details about Renaissance technology. Tavella et al. used graphic elaboration software to calculate the capacity of several prehistoric vessels from north-eastern Italy, suggesting possible functions and/or cultural traditions related to the potteries. The article by Mazzoccato et al. shows the significance of laser scanning microprofilometry for surface analysis and 3D printing in the study of archaeological pottery. Passing from archaeology to artworks, Sottili et al. present an interesting contribution based on the combination of MA-XRF and DR to study painting layers and colour composition. A second group of eight papers is more related to the study, management and valorisation of architectural heritage. Baiocchi et al. proposed an approach to geomatic surveying by smartphones, useful for creating digital twins and virtual models; this approach has been tested on the Intihuatana stone in Machu Picchu (Peru), providing intriguing results and possibilities. Brienza and Fornaciari combined GIS and photogrammetry to study the masonry of the Bagni di Eliogabalo (Rome); their detailed data offer a wide reconstructive hypothesis, pointing out Roman construction techniques and expedients. The history and the architectural transformations of the bridge of Canosa di Puglia (Italy) have been analysed through archival documentation and field surveys by Germanò, who finally hypothesizes the original configuration of the bridge. Doria et al. present the results of a multi-step programme on the study and conservation of the Castiglioni chapel in Pavia, focusing on the digital survey and the creation of an immersive 3D model with different levels of analysis and visualization. The paper by Antolini focuses on the development of a broad approach to the reconstruction of ephemeral apparatuses, applied to the case study of the funeral apparatus realized in Rome for Cardinal Mazarin. Several banded vault systems in Turin baroque atria have been analysed by Natta; the author proposes an integrated approach involving metric survey by laser scanning and digital drawing in order to investigate the original constructive methodologies and the changes due to time. Moving to more recent structures, the paper by Gabellone shows an interesting 3D reconstruction of an underground oil mill in the town of Gallipoli, which has been used to develop shared virtual visits during the COVID-19 emergency. Pirinu et al. present the results of an extended survey activity on the military architecture built in Sardinia during the Second World War; the collected data allow the analysis of historical construction techniques as well as the recovery of a peculiar heritage that is part of the contemporary landscape. Bertola discusses a methodology that, starting from archival documentation and using BIM, reproduces a 3D model of Due Case a Capri by Aldo Morbelli. Finally, the article by Weththimuni et al.
deals with the preservation of cultural heritage buildings using ZrO2-doped ZnO-PDMS nanocomposites as protective coatings for stone materials, providing interesting future perspectives. The contributions of this special issue provide an overview of the significant impact achieved by a closer synergy between metrology and the human sciences. Moreover, given the constraints imposed by the international situation since 2020, this issue stresses the importance of promoting wide accessibility of cultural heritage through the virtualization and digitalization of archaeological artefacts, human landscapes, historical documents and so on. To conclude, we hope that this special issue catches the attention of readers thanks to its interdisciplinarity. Indeed, we strongly believe that the intermingling of competencies is the way to look beyond contemporary research and sketch both the opportunities and the path of cultural heritage in the future. We hope you will have an exciting read!

Fabio Santaniello, Michele Fedel, Annaluisa Pedrotti
Guest Editors

Banded vaults with independent arches: analysis of case studies in Turin baroque atria

ACTA IMEKO, ISSN: 2221-870X, March 2022, Volume 11, Number 1, 1-7

Fabrizio Natta1

1 Department of Architecture and Design (DAD), Politecnico di Torino, v.le Mattioli 39, 10125 Torino, Italy

Section: Research paper

Keywords: banded vaults; architectural drawing; 3D modelling; point clouds

Citation: Fabrizio Natta, Banded vaults with independent arches: analysis of case studies in Turin baroque atria, Acta IMEKO, vol. 11, no. 1, article 8, March 2022, identifier: IMEKO-ACTA-11 (2022)-01-08

Section Editor: Fabio Santaniello, University of Trento, Italy

Received March 7, 2021; in final form March 7, 2022; published March 2022

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Corresponding author: Fabrizio Natta, e-mail: fabrizio.natta@polito.it

Abstract: This contribution presents a part of the work, and the methodology applied to it, developed within an international research project aimed at the analysis and preservation of an architectural heritage characteristic of Turin's baroque architecture: the 'a fasce' vaults, locally named 'a fascioni'. This architectural solution, used to cover spaces of various sizes by important architects, from Guarini down to local workers, found wide application in building atria in the Turin court area. A considerable number of banded vaulted atria were identified and surveyed by the research group in order to recognize and investigate those whose bands are generated from independent arches. The objective is the comparison of metric and geometric data between ideal models and realizations over time, to evaluate their variations and to understand a constructive methodology through three-dimensional modelling.

1. Introduction

This research project is a continuation of a study conducted by Roberta Spallone and Marco Vitali on complex brickwork vaulted systems in Turin's baroque buildings [1]. The progress of the studies, and the enlargement of the research group, have led to the analysis of a considerable number of case studies within the architectural heritage of Turin. The 'a fascioni', or banded, vaults are architectural solutions for covering medium and large rooms derived from the Guarini experience. Guarini describes their characteristics in Architettura Civile (published posthumously in 1737) [2] and applies their forms in some of his projects. From Guarini's example, a remarkable production emerged involving many important architects, as well as others whose identity is unknown. They applied formal and constructive principles that became customary on the Turin building site and allowed wide application in the civil buildings of the city. Eleven vaulted baroque atria were identified by the research group as banded vaults, of which eight were accessible.
one of the objectives was to catalogue and recognize those vaulted structures with independent arches and those in which the arches are generated by vertical cuts on the main reference vault (e.g., pavilion, 'a conca', 'a schifo', etc.). this analytical phase preceded the comparison with the metric data obtained by the tls (terrestrial laser scanning) survey, coordinated by concepción lópez. the information obtained from the point cloud was fundamental for the comparison, through sections, of the various parts of the structure. the philological reconstruction of the design idea was analysed with the ideal schematics of the treatises, archive drawings, and realizations in the city, and through the tools of two- and three-dimensional modelling. the studies that led to the continuation of this analysis [3] resume working methods already applied to other case studies [4], with the intention of defining a systematic methodology after this extensive research. 2. architectural treatises and manuals the 'a fascie' vaults are introduced, as we have seen, by guarino guarini in architettura civile. starting from a rigorous knowledge of the theme of vaulted systems, with studies related to geometry, stereotomy, and the calculation of surfaces and volumes present in his previous treatises (euclides adauctus, published in 1671, and modo di misurare le fabriche, 1674), in architettura civile, treatise iii, chapter xxvi, 'delle volte, e varj modi di farle', guarini dedicates the 'osservazione nona' and the 'osservazione decima' to banded vaults and flat banded vaults. this text is accompanied by two plates, 'lastra xix' and 'lastra xx' (figure 1), in which guarini graphically illustrates the principles set out in the observations through a double orthogonal projection representation. the architect textually describes the spatial genesis of this type of vault, starting from a division of the space to be vaulted through wall-to-wall bands, with perpendicular or oblique direction, to create fields, which can then be filled with vaults of different types.
references to this type of construction can be seen in the work of authors contemporary with guarini, as in the case of the vault of the sala di diana in the reggia di venaria by amedeo di castellamonte (1661 – 1662) [5], and in later periods, between the end of the 17th century and the early 18th century, in the work of guarini's collaborators (for example gian francesco baroncelli in palazzo barolo, 1692) or of other internationally renowned architects such as filippo juvarra in palazzo martini in cigala (1716). two centuries later, giovanni curioni conducted a study on this type of vaulting in bands, and with his geometria pratica (1868) [6] he straddles the gap between a purely theoretical contribution and a practical approach. the author, starting from guarini's approach, develops further considerations regarding the origin of the generating surface of the subdivisional bands of the space: "on the polygon to be covered with one of these vaults already insists the intrados of a vault which, depending on the figure of the said polygon, can be a barrel, 'a conca', a pavilion, a barrel with pavilion heads, 'a schifo', a dome" [7]. the subsequent operations are carried out by cutting the reference surface with vertical planes and therefore do not seem to identify the construction by independent arches. also in the turin area we find the work of giovanni chevalley, who in his elementi di tecnica dell'architettura: materiali da costruzione e grosse strutture (1924) [8] collects local building knowledge in the field of vaulted structures. his description of the banded vaults takes up curioni's definition and, in indicating some realizations, emphasizes their spatial qualities and variety of use in the atria of civil buildings and churches of the 17th and 18th centuries. 3. banded vaults in archival drawings alongside the source of the treatises, we can also draw on the documentary source, which consists of the original guarini drawings, or the work of his collaborators. these documents, kept in the archivio di stato – sezione corte, have been studied directly by the author of this paper. they were first published and analysed in the archival regesto elaborated by augusta lange [9] for the 1968 conference on the figure of guarini. some of the drawings concern precisely the banded vaulted system applied to cover rooms in civil buildings (figure 2) [10]. the examples describe different solutions starting from the same tracing of the bands, perpendicular to the wall in the first case and oblique in the other, with the possibility in both of identifying the arches as independent [6], [11]. the one shown as an example (figure 3), even in a hypothetical three-dimensional vision, reveals many similarities with the realizations surveyed in the fieldwork. this structure is characterized by the double dimensions of the bands; starting from the transverse arches, the longitudinal band is specularly supported, leaving the central field free for the insertion of further shapes and decorations, as described in his treatise. figure 1. banded vault in architettura civile. guarini 1737, treat. iii, plate xx. figure 2. g. guarini, study of a banded vault, 1680 c., torino, asto, azienda savoia-carignano, cat. 43, mazzo i, fasc. 6, n. 36; g. guarini, study of a composed and banded vault, 1680 c., torino, asto, azienda savoia-carignano, cat. 95, mazzo ii, fasc. 115, n. 23. figure 3. plan distribution and digital reconstruction of a banded vault by guarini.
4. banded vaults in turin baroque atria after guarini, the applications of this particular type of vaulted structure therefore found vast development in the city of turin, reaching their maximum expression and variety in the atria of the city's baroque palaces. the already mentioned works by castellamonte and juvarra seem to follow a typological current that touches many other authors in the city of turin. in their realizations, it is possible to identify those characters derived from guarini's thought, but also to understand the peculiarities of a different creative process. the phase of identification and cataloguing of these vaulted structures was therefore fundamental. census maps can first be identified in the research directed by cavallari murat and published in forma urbana e architettura nella torino barocca (1968) [12], and later in the studies reported in the volume by spallone and vitali (2017). these structures were built between the 17th and 18th centuries in the areas of the second and third baroque extensions of the city. among the variety of the baroque atria surveyed, three have been identified – at the moment of the research – as belonging to this category of vaulted structures with independent arches (table 1). the classification, focusing only on the vaulted structures analysed for this case study, identifies the maximum dimensions of the spaces and the main axialities that compose the grid. this cataloguing, certainly expandable by extending the analysis to entire buildings, has provided a first overview of the spatiality created through this structure. the most common grids, in the whole context, were 3 × 3, with a smaller number of 3 × 4 and 3 × 5 grids used for rooms with a larger floor plan. the atria with vaulted structures generated from independent arches (figure 4) by filippo juvarra (in palazzo martini in cigala), gian giacomo plantery (in palazzo capris di cigliè), and gian francesco baroncelli (in palazzo barolo) are characterized by a varied spatial division (figure 5), maintaining the constant of the transversal arches as the basis for the creation of the subsequent bands and of the vaults completing the further fields created by the grids. for the recognition of this type of structure, the point clouds generated by the tls survey were therefore analysed. through a phase of identification of the characteristic sections in the point cloud, this information was compared in order to evaluate the conformation and geometric construction of the vaults. the comparison is made between the sections that follow the same direction (in these cases only longitudinal or transversal, as there are no examples with diagonal axiality). if the variances identified could be considered geometrically acceptable (not metrically defined, but evaluated case by case), we proceeded with the classification of this part of the vaulted structure of the atria [13]. the example of the vaulted atrium of via della consolata is displayed to explain the classification method and the subsequent identification of the construction geometries (figure 6). in this case, the cross-sections lay the basis for the construction of the independent arches. after this step, the subsequent longitudinal arches are positioned using the transversal arches as a support base. this second level of arches, due to the conformation of the space, is straight in its central field so as to cover the whole space in length.
figure 4. banded vault in baroque atria in turin.
table 1. baroque atria under analysis.
address | width × depth × height (m) | grid
via della consolata, 3 | 7.66 × 10.37 × 6.75 | 3 × 5
via santa maria, 1 | 9.33 × 5.97 × 6.24 | 3 × 3
via delle orfane, 7 | 8.62 × 10.42 × 6.78 | 3 × 4
figure 5. plan distribution of banded vault in baroque atria in turin.
solutions of this type prove to be very common in these vaults (the same system is also used in via delle orfane). the opportunities related to this type of structure allow autonomous vaults to be constructed in the other fields, as suggested by guarini's indications [14], as we will see in the selected case study. 5. the case study: palazzo capris di cigliè the two-dimensional drawings allow an initial analysis of the architectural consistencies which, transferred to the three-dimensional model, are linked to the formal conception derived from the architectural literature and archival documentation. the case study selected here is palazzo capris di cigliè (1730) by gian giacomo plantery. 5.1. survey methodology and technical aspects the main purpose of the research carried out was the geometric and metrological analysis of the vaults of the atrium of this noble palace with the use of a terrestrial laser scanner (tls). the data obtained with this technology, in a survey led by prof. concepción lópez, generate easy-to-use models for later comparison with the geometric prototypes established in the literature [15], [16]. the focus3d x 130 scanner by faro was used for this survey. its low weight (5.2 kg) and small size (24 × 20 × 10 cm) facilitate its transportation and handling. the integrated long-lasting battery (4 hours) ensures its use with no need to connect it to electricity during the whole scanning session. it has a systematic distance error of ± 2 mm at 25 m, which was acceptable for this study. it includes an integrated camera with a 70-megapixel parallax-free colour overlay, so the resulting point clouds have a photographic realism that is very useful for understanding them. the scans were performed at a speed of 488,000 points/s, implying a duration of approximately 8 minutes for each scan and providing a good scan resolution. autodesk recap pro® was used to process the scans, and the cloud registration was done automatically, without errors, using the tools of the software. after this step, the point cloud was imported into autodesk autocad® to carry out the subsequent data processing (figure 7 and figure 8) [4]. 5.2. interpretation and modeling the data obtained from the two-dimensional surveys, together with the data from the tls survey, are used by restoring the symmetries and searching for the elementary geometries in the sections [17]. the method of analysis, developed in previous research [4], is based on guarini's general indications for the composition of this type of vaulted system: once the bands are delineated starting from the plan – identified in this case also three-dimensionally – the empty spaces are filled with small vaults. the phases of geometric decomposition of the vaulted structure are shown through representation in isometric axonometry (figure 9). figure 6. graphic analysis and digital modeling of independent arches in the baroque atria under examination. figure 7. point cloud of the atrium portion in palazzo capris di cigliè. figure 8. point cloud of the atrium vault in palazzo capris di cigliè.
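the sectioning step just described was carried out in autodesk autocad®; purely as an illustration of what it amounts to, the following python sketch cuts a thin planar slice out of a point cloud with numpy. the file name, cutting station and slice tolerance are hypothetical, not taken from the survey.

```python
import numpy as np

# registered point cloud as an n x 3 array (x, y, z) in metres;
# "atrium_vault.xyz" is a hypothetical export of the tls survey
cloud = np.loadtxt("atrium_vault.xyz")

def planar_section(points, axis=0, station=3.2, tolerance=0.005):
    """keep the points lying within +/- tolerance of the cutting plane
    axis = station (axis 0 = x for transverse cuts, 1 = y for longitudinal)."""
    mask = np.abs(points[:, axis] - station) < tolerance
    return points[mask]

# a transverse section, 10 mm thick, projected onto the (y, z) plane
section = planar_section(cloud, axis=0, station=3.2, tolerance=0.005)
profile = section[:, [1, 2]]
print(f"{len(profile)} points in the extracted section")
```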
the most geometrically accurate curves are generated through the point cloud sections (figure 9b), looking for a curve generation with the lowest possible number of centres. these curves, belonging to independent arches, allow the first order of the vaulted structure to be generated (figure 9c). in this case, the central area sees the interruption of the longitudinal arches, leaving full range to a single vaulted structure. this vault, superimposed on the arched system, is sail-shaped, as also evidenced by the decoration. along the major axis, the portion of the vault between the two arches follows the same geometry as these arches, recreating the idea of the giant arches already seen in guarini's drawings (figure 9d). the last fields to complete this vaulted structure are the angular ones. these are, this time, independent of the main structure. cross vaults are set starting from the intersection of the arches, and they develop with a very low height (figure 9e). one of the most relevant features of this case is certainly the width of the discontinued bands, with characteristics similar to those of the guarini design (figure 2 and figure 3), leading to further evaluation of the creation of these bands and of the internal areas created. 6. comparison between design geometric model and survey data the phase that accompanies the redrawing and digital reconstruction of this vault model goes hand in hand with the comparison between the model built and generated through the point cloud and the geometric model of this surface [18]. this results in finding and analysing the characteristic sections of the vaulted structure [19], in this case the component of bands generated by independent arches. the replicable procedure [1] searches in the sections for the geometric information useful for the construction of a theoretical model comparable to the original design idea; in this case study, fourteen sections were extracted from the point cloud (figure 10). the position of the sections in relation to the characteristics of the vaulted system led to their cataloguing, aimed at recognizing those which, by virtue of the hypothetical symmetry of the ideal model, should have the same shape. those of the main arch, aligned and superimposed with reference to the impost plane, have led to the recognition of axes, proportions, points of intersection and curves. among these, element by element, the polycentric curve (with the smallest possible number of centres) was digitally constructed with autodesk autocad®, consistent with the techniques of construction of the centering (figure 11). at the end of this curve recognition phase, we moved on to the reconstruction of the theoretical three-dimensional model. these surfaces, modelled in rhinoceros® 6, were exported in the .e57 format to then be overlaid with the point cloud inside the open-source software cloudcompare (figure 12). it is necessary to remember that the two digital products can never be perfectly overlapped: the point cloud carries information about the built consistencies in their current condition, while the digital model, a reconstruction of the design idea, is generated through rigorously geometric references and the restoration of the symmetries [3]. figure 9. graphic analysis and digital modeling of the vault in palazzo capris di cigliè. figure 10. scheme section of point cloud (red). figure 11. tracing method for directrices and comparison with the section of point cloud (red).
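cloudcompare computes, for every surveyed point, its distance to the reconstructed model. a minimal sketch of the same idea, assuming the theoretical model has been densely sampled into points, is the nearest-neighbour query below; the file names are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

# survey cloud and a dense point sampling of the theoretical model,
# both as n x 3 arrays in metres (hypothetical file names)
survey = np.loadtxt("vault_point_cloud.xyz")
model = np.loadtxt("ideal_model_sampled.xyz")

# distance from every surveyed point to the nearest model point
tree = cKDTree(model)
distances, _ = tree.query(survey, k=1)

# summary figures in the spirit of the colour-coded map of figure 13
print(f"mean deviation: {distances.mean() * 1000:.1f} mm")
print(f"95th percentile: {np.percentile(distances, 95) * 1000:.1f} mm")
```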
the graphical outputs realized through the software can show the standard distance between the point cloud and the ideal model (figure 13). the greater distances between the two elaborations are identified in the innermost bands which, already in the extraction phase, were highlighted by a flexion near the keystone. 7. conclusions this paper outlines the methodological framework developed for the metric survey and the processing of knowledge data in the research on banded vaulted systems in turin baroque atria. the integration of the technique of metric survey by laser scanning with digital drawing and modelling involves, as we have seen, the definition of a workflow aimed at optimizing the use of data. indeed, adhering to the objectives of the research, two-dimensional drawings must represent the atria in their current state, while the three-dimensional modelling of the vaults is linked to the geometric reference models and aimed at the philological reconstruction of the design idea [18]. these procedures have given rise to new opportunities for research, such as the comparison (metric, but even more interestingly, geometric) through the superimposition of ideal design models and point clouds. the deviation between the two digital products will not only reveal the deformations, structural failures and transformations that are part of the real life of the building but, above all, will provide new insights for hypotheses on the necessary construction adaptations, and on the centerings and laying techniques applied on the building site, that will contribute to the understanding of the relationship between design and construction. acknowledgement this contribution presents a development of the work and methodology elaborated within the international research project "nuevas tecnologías para el análisis y conservación del patrimonio arquitectónico", coordinated by roberta spallone, assisted by marco vitali (department of architecture and design of the politecnico di torino). the project allowed the stay in turin, as visiting professor, of concepción lópez (department of graphic expression in architecture, universitat politecnica de valencia). the group includes, in addition to the author of this work: giulia bertola and francesca ronco (department of architecture and design, politecnico di torino). the research, favored by funding from the ministerio de ciencia, innovación y universidades of spain, is aimed at the analysis and interpretation of an architectural heritage characteristic of turin's baroque production: the 'a fasce' vaults, locally named 'a fascioni'. references [1] r. spallone, m. vitali, volte stellari e planteriane negli atri barocchi in torino – star-shaped and planterian vaults in turin baroque atria, aracne, ariccia, 2017, isbn 978-88-255-0472-9. [2] g. guarini, architettura civile, gianfranco mairasse, turin, 1737. [3] f. natta, baroque banded vaults with independent arches: from literature to realizations in turin atria, proc. of imeko tc4 metroarchaeo 2020 virtual conference "metrology for archaeology and cultural heritage", 22-24 october 2020, isbn 978-92-990084-9-2, pp. 72-77. online [accessed 11 march 2022] https://www.imeko.org/publications/tc4-archaeo-2020/imeko-tc4-metroarchaeo2020-014.pdf [4] m. c. lópez gonzález, r. spallone, m. vitali, f. natta, baroque banded vaults: surveying and modeling.
the case study of a noble palace in turin, in: the international archives of the photogrammetry, remote sensing and spatial information sciences, copernicus gmbh (editors), volume xliii-b2-2020 (2020), pp. 871-878. doi: 10.5194/isprs-archives-xliii-b2-2020-871-2020 [5] e. piccoli, le strutture voltate nell'architettura civile a torino (1660-1720), in: sperimentare l'architettura: guarini, juvarra, borra e vittone, g. dardanello (editor), fondazione crt, turin, 2001, issn 0008-7181, pp. 38-96. [6] g. curioni, geometria pratica applicata all'arte del costruttore, negro, turin, 1868. [7] r. spallone, delle volte, e vari modi di fare. modelli digitali interpretativi delle lastre xix e xx nell'architettura civile di guarini, fra progetti e realizzazioni / on the vaults and various modes of making them. interpretative digital models of the xix and xx plates in guarini's architettura civile, between designs and buildings, in: le ragioni del disegno. pensiero forma e modello nella gestione della complessità / the reasons of drawing. thought shape and model in the complexity management, s. bertocci, m. bini (editors), proc. of the 38th convegno internazionale dei docenti delle discipline della rappresentazione, florence, italy, 15 – 17 september 2016, pp. 1275-1282. [8] g. chevalley, elementi di tecnica dell'architettura: materiali da costruzione e grosse strutture, pasta, turin, 1924. [9] a. lange, disegni e documenti di guarino guarini, in: guarino guarini e l'internazionalità del barocco, v. viale (editor), proc. of the international conference promoted by the accademia delle scienze di torino, turin, italy, 30 september – 5 october 1968, vol. i, pp. 91-236. [10] f. natta, dai disegni autografi di guarini all'interpretazione digitale: modelli di volte a fasce, in: sistemi voltati complessi: geometria, disegno, costruzione / complex vaulted systems: geometry, design, construction, r. spallone, m. vitali, a. giordano, j. calvo-lópez, c. bianchini, a. lópez-mozo, p. navarro-camallonga (editors), aracne, ariccia, 2020, isbn 978-88-255-3053-7, pp. 213-228. [11] g. curioni, lavori generali di architettura civile, stradale ed idraulica e analisi dei loro prezzi, negro, torino, 1866. [12] a. cavallari murat, forma urbana ed architettura nella torino barocca: dalle premesse classiche alle conclusioni neoclassiche, utet, turin, 1968, 2 voll., 3 tomi. [13] m. vitali, astrazione geometrica e modellazione tridimensionale per la definizione di una grammatica spaziale delle volte a fascioni / geometric abstraction and three-dimensional modeling for the definition of a spatial grammar of the 'a fascioni' vaults, in: r. salerno (ed.), "rappresentazione/ materiale/ immateriale – drawing as (in)tangible representation", gangemi, roma, 2018. [14] g. guarini, modo di misurare le fabriche, per gl'heredi gianelli, torino, 1674. [15] a. almagro gorbea, half a century documenting the architectural heritage with photogrammetry, ege revista de expresión gráfica en la edificación, 11, 2019, pp. 4-30.
doi: 10.4995/ege.2019.12863 [16] michael e. auer, ram kalyan b. (eds.), cyber physical systems and digital twins, proceedings of the 16th international conference on remote engineering and virtual instrumentation, springer, berlin, 2019. doi: 10.1007/978-3-030-23162-0 [17] f. stanco, s. battiato, g. gallo, digital imaging for cultural heritage preservation: analysis, restoration, and reconstruction of ancient artworks, crc press, 2017. doi: 10.1201/b11049 [18] a. samper, g. gonzález, b. herrera, determination of the geometric shape which best fits an architectural arch within each of the conical curve types and hyperbolic-cosine curve types: the case of palau güell by antoni gaudí, journal of cultural heritage, volume 25, 2017, pp. 56-64. doi: 10.1016/j.culher.2016.11.015 [19] e. lanzara, a. samper, b. herrera, point cloud segmentation and filtering to verify the geometric genesis of simple and composed vaults, int. arch. photogramm. remote sens. spatial inf. sci., xlii-2/w15, 2019, pp. 645-652. doi: 10.5194/isprs-archives-xlii-2-w15-645-2019 calibration of capacitance diaphragm gauges with 1333 pa full scale by direct comparison to resonant silicon gauge and static expansion system acta imeko june 2014, volume 3, number 2, 48 – 53 www.imeko.org calibration of capacitance diaphragm gauges with 1333 pa full scale by direct comparison to resonant silicon gauge and static expansion system h. yoshida, e. komatsu, k. arai, m. kojima, h. akimichi, t. kobata national metrology institute of japan (nmij), national institute of advanced industrial science and technology (aist), aist central 3, 1-1-1, umezono, tsukuba, ibaraki, 305-8563, japan section: research paper keywords: pressure, vacuum, standard, calibration, capacitance diaphragm gauge, thermal transpiration effect citation: h. yoshida, e. komatsu, k. arai, m. kojima, h. akimichi, t. kobata, calibration of capacitance diaphragm gauges with 1333 pa full scale by direct comparison to resonant silicon gauge and static expansion system, acta imeko, vol. 3, no. 2, article 12, june 2014, identifier: imeko-acta-03 (2014)-02-12 editor: paolo carbone, university of perugia received april 30th, 2013; in final form april 30th, 2013; published june 2014 copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited funding: (none reported) corresponding author: h. yoshida, e-mail: hajime-yoshida@aist.go.jp abstract: two capacitance diaphragm gauges (cdgs) with 1333 pa full scale, one with a heated sensor head and one with an unheated sensor head, were calibrated by three different methods: direct comparison to a resonant silicon gauge calibrated by a pressure balance, direct comparison to a cdg with 133 pa full scale calibrated by a static expansion method, and the static expansion method itself. the calibration results of the three calibration methods show good agreement within their claimed uncertainties. the higher pressure points of the cdgs calibrated by the pressure balance and the lower ones calibrated by the static expansion system are linearly interpolated within the calibration uncertainty. here, compensation of the thermal transpiration effect is important when a cdg with a heated sensor head is used. 1. introduction since pressure/vacuum gauges are calibrated at multiple pressure points, interpolation between these points is necessary for practical pressure measurements. in the case that the pressure points are calibrated by a single standard technique with good linearity, the interpolation has high reliability in general. at pressures lower than 10^3 pa, however, interpolation between pressure points from two different standard techniques is often required. in such a case, the validity of the interpolation should be confirmed. a capacitance diaphragm gauge with 1333 pa full scale (cdg-10torr) is used for precise pressure measurements in the range from 1 pa to 10^3 pa.
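as a toy illustration of the interpolation question, the sketch below linearly interpolates a gauge sensitivity between calibration points coming from two standards; all numbers are invented for the example and are not the calibration data of this paper.

```python
import numpy as np

# hypothetical calibration points: the low-pressure points from the
# static expansion system, the high-pressure ones from the pressure
# balance comparison (values invented for illustration)
pressure = np.array([1.0, 10.0, 100.0, 500.0, 1300.0])          # pa
sensitivity = np.array([1.0082, 1.0079, 1.0078, 1.0076, 1.0075])

# linear interpolation at an intermediate working pressure
p_work = 250.0
s_work = np.interp(p_work, pressure, sensitivity)
print(f"interpolated sensitivity at {p_work} pa: {s_work:.5f}")
```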
for the calibration of the cdg-10torr at least four candidate techniques are available: the pressure balance, the static expansion system (ses) [1-5], the force-balanced piston gauge [6,7], and the oil manometer [5,8,9]. in this paper, the calibration results of the cdg-10torr based on two different standards are presented. one is a direct comparison to the resonant silicon gauge (rsg), which is calibrated by the pressure balance. the rsg is used as a reliable transfer gauge in the field of pressure and vacuum standards [10,11]. the other is the static expansion system [4]. the calibration results are compared and the validity of the interpolation is discussed. 2. experimental 2.1. apparatus figure 1 shows the schematic diagram of the static expansion system (ses) and the direct comparison system (dcs) for the calibration of vacuum gauges. these two systems are connected to each other through all-metal valves. two resonant silicon gauges with 130 kpa full scale (absolute) are located on the ses (rsgse) as reference gauges. a capacitance diaphragm gauge with 133 pa full scale (cdg-1torr) is located between the ses and the dcs and is used as a reference gauge for the dcs. two capacitance diaphragm gauges with 1333 pa full scale were used as test gauges. a high-accuracy absolute-type capacitance diaphragm gauge with a sensor head heated to a temperature of 45 °c (cdgh-10torr) was tested on both the ses and the dcs. another capacitance diaphragm gauge with an unheated sensor head (cdgn-10torr) was tested on the dcs only. n2 gas was used as the test gas. the calibration procedure of the ses is briefly summarized. the pumping system of the ses consists of turbo molecular pumps (tmp) and rotary pumps (rp). the background pressure before calibration is typically around 10^-7 pa. the gas in the initial chamber cma was expanded into the chamber cmc or into both chambers cmc and cmd, depending on the calibration pressure range. to avoid changes in volume and temperature, a reference gauge to measure the initial pressure before expansion is not located on the cma. the initial pressure was measured by the rsgse located on the chamber cmb by closing both valves vl1 and vl3 and opening the valve vl2. after the initial pressure measurement, static expansion was performed by closing vl2, vl4, vl6, and vl8 and opening vl3 only and/or vl3, vl5 and vl7. the calibration pressure ranges are from 1 pa to 2000 pa and from 10^-4 pa to 150 pa at cmc and cmd, respectively. details of the ses are given in ref. [4].
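a single expansion step can be sketched as follows, using the chamber volumes quoted in figure 1 and the isothermal ideal-gas approximation; the real standard also accounts for temperature differences, gas non-ideality and the measured (rather than nominal) expansion ratios.

```python
# chamber volumes from figure 1, in litres
V_CMA, V_CMC, V_CMD = 0.2, 10.0, 160.0

def expanded_pressure(p_initial_pa, v_from, v_into):
    """pressure after expanding gas from v_from into (v_from + v_into),
    assuming an isothermal ideal gas."""
    return p_initial_pa * v_from / (v_from + v_into)

p0 = 1.0e5  # example initial pressure in cma measured by the rsgse, pa
print(expanded_pressure(p0, V_CMA, V_CMC))          # ~1961 pa in cmc
print(expanded_pressure(p0, V_CMA, V_CMC + V_CMD))  # ~118 pa in cmc + cmd
```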
the dcs was constructed based on iso 3567 vacuum gauges – calibration by direct comparison with a reference gauge [12]. two reference gauges are located in the dcs. one is the resonant silicon gauge with 130 kpa full scale, absolute (rsgdc). the other is a high-accuracy absolute-type capacitance diaphragm gauge with 133 pa full scale with a sensor head heated to a temperature of 45 °c (cdg-1torr). the cdg-1torr is used as a reference gauge without detaching the sensor head from the chamber, by controlling vl7 and vl8. the pumping system consists of a turbo molecular pump (200 l/s for n2) and a rotary pump. a static method is adopted for the direct comparison. the valve on the tmp (vl9) was closed when the background pressure was lower than 10^-4 pa, as measured by an ionization gauge. the zero points of the cdgs and the rsgdc were measured every time before each calibration. the test gas was introduced into the cme by a computer-controlled mass flow controller (mfc) with a full scale of 10 sccm until the pressure in the cme reached the target pressure. the test gauge was calibrated by comparison with the reference gauges while the test pressure was kept constant for 300 s. 2.2. traceability chain the traceability chain of the pressure in this study is summarized in figure 2. the rsgse and rsgdc were calibrated by the pressure balance from 5.0×10^3 pa to 1.3×10^5 pa. the rsgdc was sometimes calibrated by direct comparison to the rsgse to check its long-term stability. the cdg-1torr was calibrated by the ses in the chamber cmd from 0.1 pa to 130 pa. in the ses, both the expansion ratio and the initial pressure at the chamber cma, which are important parameters to determine the standard pressure, are measured by the rsgse. the cdgh-10torr with a heated sensor head was calibrated by three methods: (i) direct comparison to the rsgdc from 100 pa to 1300 pa, (ii) direct comparison to the cdg-1torr from 1 pa to 130 pa, and (iii) the static expansion method at the chamber cmc from 1 pa to 1300 pa. the cdgn-10torr with an unheated sensor head was calibrated by using methods (i) and (ii). the direct comparison with the rsgdc was performed by extrapolating the calibration results obtained from 5.0×10^3 pa to 1.3×10^5 pa. the procedure of the extrapolation is detailed in section 3.1. figure 1. schematic diagram of the static expansion system (ses; chambers cma 0.2 l, cmc 10 l, cmd 160 l) and the direct comparison system (dcs) for the calibration of vacuum gauges. figure 2. traceability chain of the pressure in this study. se and dc mean static expansion and direct comparison. two capacitance diaphragm gauges with 1333 pa full scale (cdg-10torr) were calibrated by (i) dc to the rsgdc, (ii) dc to the cdg-1torr, and (iii) se at cmc.
2.3. compensation of the thermal transpiration effect the apparent change in the sensitivity of the cdg owing to the thermal transpiration effect was compensated using the takaishi-sensui (t-s) equation:
p1 / p2 = (Y + √(t1/t2)) / (Y + 1), t1 < t2,
Y = A·X^2 + B·X + C·√X, X = d·p2, (1)
A = a·T^-2, B = b·T^-1, C = c·T^-1/2, T = (t1 + t2) / 2,
where p1 and p2 are the pressures in the vacuum chamber and in the sensor head (capsule) of the cdg, respectively, the latter corresponding to the pressure indication of the cdg. t1 and t2 are the temperatures of the vacuum chamber and of the sensor head of the cdg, d is the inner diameter of the connecting tube, and a, b, and c are parameters depending on the gas species to be measured. t1 was measured before every calibration. the values of t2 and d were assumed to be 45 °c (318.15 k) and 4.76 mm, respectively. the parameters a, b, and c were equal to 12×10^5 deg^2 mmhg^-2 mm^-2 (6.75×10^7 k^2 pa^-2 m^-2), 10×10^2 deg mmhg^-1 mm^-1 (7.50×10^3 k pa^-1 m^-1) and 14 deg^1/2 mmhg^-1/2 mm^-1/2 (38.3 k^1/2 pa^-1/2 m^-1/2), respectively. the validity of the t-s equation and of these parameters is discussed in [15]. 3. results 3.1. calibration results of the reference resonant silicon gauges (rsg) the calibration results of the reference rsgdc and rsgse by the pressure balance are shown in figure 3. the vertical axis is the deviation of the calibrated standard pressure (ps) from the pressure indication (pi) of the rsgs. the sensitivity coefficient s of the rsgs is defined as in equation (2) for a pressure range down to 100 pa:
s = (pi − pi0) / ps = ∆pi / ps , (2)
where pi0 is the pressure indication at the background pressure, in other words at the zero point, and ∆pi is the difference between pi0 and pi. s(rsgdc) is plotted in figure 4 with a logarithmic scale on the horizontal axis. s(rsgdc) has a constant value of 0.999987 ± 0.000027. the standard pressure (prsg-dc) in the dcs from 100 pa to 1300 pa is determined by equation (3):
prsg-dc = ∆pi / s(rsgdc) . (3)
the calibration uncertainty u(prsg-dc) with a confidence level of 95% (k=2) is estimated by equation (4):
u(prsg-dc) [pa] = −1.3×10^-11 ∆pi^2 + 5.0×10^-6 ∆pi + 3.0 , (4)
which is the best fitting curve between ∆pi of the rsgdc and its expanded uncertainty. this means that the relative expanded uncertainty of prsg-dc from 100 pa to 1300 pa is in the range from 3.0% to 0.23%. 3.2. calibration result of the reference capacitance diaphragm gauge with 133 pa full scale (cdg-1torr) a calibration result of the reference cdg-1torr by the ses is shown in figure 5(a). the vertical axis is the s of the cdg-1torr, calculated similarly to eq. (2). s(cdg-1torr) increases with decreasing pressure owing to the thermal transpiration effect, because the cdg-1torr has a sensor head heated to a temperature of 45 °c [10,13-15]. figure 5(b) shows s(cdg-1torr) compensated by eq. (1) [15,16]. after the compensation by the t-s equation, s(cdg-1torr) has a constant value of 1.0081 ± 0.0014. the relative expanded uncertainty of the calibration from 0.1 pa to 130 pa is in the range from 2.8 % to 0.33 % [4]. 3.3. calibration results of two capacitance diaphragm gauges with 1333 pa full scale (cdgh-10torr and cdgn-10torr) the cdgh-10torr was calibrated using three methods: (i) direct comparison to the rsgdc from 100 pa to 1300 pa, (ii) direct comparison to the cdg-1torr from 1 pa to 130 pa, and (iii) the static expansion method from 1 pa to 1300 pa. table 1 shows the uncertainty budget of (i) and (ii). the expanded uncertainty of (iii) is in the range from 1.0% to 0.26% [4].
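as a minimal sketch, the t-s compensation of eq. (1) can be written as a direct function of the gauge indication p2, using the nitrogen parameters, tube diameter and head temperature quoted above (the function itself is an illustration, not the authors' software).

```python
import math

# nitrogen parameters of the t-s equation in si units, as quoted above
A_PARAM = 6.75e7   # k^2 pa^-2 m^-2
B_PARAM = 7.50e3   # k pa^-1 m^-1
C_PARAM = 38.3     # k^0.5 pa^-0.5 m^-0.5

def ts_compensate(p2_pa, t1_k, t2_k=318.15, d_m=4.76e-3):
    """chamber pressure p1 from the cdg indication p2 via eq. (1), t1 < t2."""
    t_mean = 0.5 * (t1_k + t2_k)
    a = A_PARAM / t_mean**2
    b = B_PARAM / t_mean
    c = C_PARAM / math.sqrt(t_mean)
    x = d_m * p2_pa
    y = a * x**2 + b * x + c * math.sqrt(x)
    return p2_pa * (y + math.sqrt(t1_k / t2_k)) / (y + 1.0)

# example: a 1 pa indication of the heated head with the chamber at 23 °c
print(ts_compensate(1.0, t1_k=296.15))  # ~0.97 pa, i.e. s rises at low p
```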
as shown in figure 6(a), the three calibration results for the cdgh-10torr are in good agreement within their claimed uncertainties. the sensitivity of the cdgh-10torr, s(cdgh-10torr), also increases with decreasing pressure owing to the thermal transpiration effect. after compensation using eq. (1), s(cdgh-10torr) also has a linear characteristic within ± 0.2%, as shown in figure 6(b). figure 3. calibration results of the rsgse and rsgdc. figure 4. sensitivity s of the rsgdc. table 1. the uncertainty budget of the calibration results of the cdgh-10torr by direct comparison. the rsgdc and the cdg-1torr are used as reference gauges depending on the pressure range. table 2. the uncertainty budget of the calibration results of the cdgn-10torr by direct comparison. the rsgdc and the cdg-1torr are used as reference gauges depending on the pressure range. 4. discussion on the reference gauge for direct comparison calibration by direct comparison is widely used by many users. in the case that the rsg with 130 kpa full scale (absolute) is used as a reference gauge, the lowest calibration pressure may be limited to several hundred pa if the calibration uncertainty is required to be within several %. the cdgs with 133 pa or 1333 pa full scale are useful as reference gauges for pressures below 100 pa. in that case, however, the thermal transpiration effect should be compensated if a cdg with a heated sensor head is used. a wide calibration pressure range is realized by combining the rsg and the cdg as reference gauges and by evaluating the uncertainty arising from the nonlinearity of the sensitivity, the correction of the thermal transpiration effect, the resolution, the influence of temperature, attitude, and so on. 5. conclusion two capacitance diaphragm gauges with 1333 pa full scale were calibrated by the following three methods: (i) direct comparison to a resonant silicon gauge with 130 kpa full scale, absolute, from 100 pa to 1300 pa, (ii) direct comparison to a capacitance diaphragm gauge with 133 pa full scale from 1 pa to 130 pa, and (iii) the static expansion method from 1 pa to 1300 pa. the results of these three methods show good agreement within their claimed uncertainties, which means that these calibration methods and the uncertainty analyses are validated. the higher pressure points of the cdgs calibrated by the pressure balance and the lower ones calibrated by the static expansion system are linearly interpolated within the calibration uncertainty. here, compensation of the thermal transpiration effect is important when a cdg with a heated sensor head is used. references [1] m. bergoglio, a. calcatelli, l. marzola, g. rumiano, "primary pressure measurements down to 10^-6 pa", vacuum, vol. 38, pp. 887-891, 1988. [2] w. jitschin, j. k. migwi, g. grosse, "pressures in the high and medium vacuum range generated by a series expansion standard", vacuum, 40, pp. 293-304, 1990. [3] k. jousten, g. rupschus, "the uncertainties of calibration pressures at ptb", vacuum, 44, pp. 569-572, 1993. [4] h. akimichi, e. komatsu, k. arai, m. hirata, in proc. of the 44th international conference on instrumentation, control and information technology (sice2005), pp. 2145-2148, 2005. [5] s. s. hong, y. h. shin, and k. h. chung, "measurement uncertainties for vacuum standards at korea research institute of standards and science", j. vac. sci. technol. a 24(5), pp. 1831-1838, 2006. [6] c. g. rendle and h.
rosenberg, "new absolute pressure standard in the range 1 pa to 7 kpa", metrologia, 36, pp. 613-615, 1999. [7] t. bock, h. ahrendt and k. jousten, "reduction of the uncertainty of the ptb vacuum pressure scale by a new large area non-rotating piston gauge", metrologia, 46, pp. 389-396, 2009. [8] p. l. m. heydemann, c. r. tilford, r. w. hyland, "ultrasonic manometers for low and medium vacua under development at nbs", j. vac. sci. technol., 14 (1), pp. 597-605, 1977. [9] c. r. tilford, a. p. miller, s. lu, "a new low-range absolute pressure standard", in proc. 1998 ncsl workshop and symposium, national conference of standards laboratories, pp. 245-256, 1998. figure 5. sensitivity of the cdg-1torr before (a) and after (b) compensation of the thermal transpiration effect by the takaishi-sensui equation. figure 6. sensitivity of the cdgh-10torr with a sensor head heated to 45 °c before (a) and after (b) compensation of the thermal transpiration effect by the takaishi-sensui equation [15,16]. figure 7. sensitivity of the cdgn-10torr with an unheated sensor head. [10] a. p. miller, "measurement performance of high-accuracy low-pressure transducers", metrologia, 36, pp. 617-621, 1999. [11] j. h. hendricks and a. p. miller, "development of a new high-stability transfer standard based on resonant silicon gauges for the range 100 pa to 130 kpa", metrologia, 44, pp. 171-176, 2007. [12] international organization for standardization, iso 3567 vacuum gauges – calibration by direct comparison with a reference gauge, 2011. [13] k. f. poulter, m-j. rodgers, p. j. nash, t. j. thompson, m. p. perkin, "thermal transpiration correction in capacitance manometers", vacuum, 33, pp. 311-316, 1983. [14] j. setina, "new approach to corrections for thermal transpiration effects in capacitance diaphragm gauges", metrologia, 36, pp. 623-626, 1999. [15] h. yoshida, e. komatsu, k. arai, m. hirata, and h. akimichi, "compensation of thermal transpiration effect for pressure measurements by capacitance diaphragm gauge", j. vac. soc. of jpn., vol. 53, pp. 686-691, 2010. [16] t. takaishi, y. sensui, "thermal transpiration effect of hydrogen, rare gases and methane", trans. faraday soc., 59, pp. 2503-2514, 1963. acta imeko december 2011, issue 0, 2 − 3 www.imeko.org instructions for authors paul p. l. regtien1 1 measurement science consultancy, julia culpstraat 66, 7558jb hengelo, the netherlands keywords: journal; template; imeko; microsoft word citation: paul p. l. regtien, instructions for authors, acta imeko, no. 0, december 2011, pp. 2-3, identifier: 10.3345/acta.imeko.4530 editor: paul regtien, measurement science consultancy, the netherlands received december 28, 2011; in final form december 29, 2011; published december 30, 2011 copyright: © 2011 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited funding: this work was supported by measurement science consultancy, the netherlands corresponding author: paul p. l. regtien, e-mail: paul@regtien.net abstract: the paper contains instructions for authors of articles for acta imeko. it can be used as a template for new submissions. authors are encouraged to follow the instructions as described in this template file to produce their manuscript. this abstract should be composed in a way suitable for publication in the abstract section of electronic journals, and should state concisely what is written in the paper. important items are the aim of the research, the basic method and the major achievement (also numerically, when applicable). the length should not exceed 200 words. 1. introduction this paper describes how a new article can be submitted from acta imeko's website.
authors can track their submission, resubmit revised papers, communicate with the editors and support the editorial process, including copyediting and proofreading. 2. a new submission to register with the journal system, open the website http://acta.imeko.org/index.php/acta-imeko/index and go to register. if you don't have a user account, fill in the form; otherwise, enter your username and password. then click the register button near the end of the page. you arrive at the acta imeko page user home. under acta imeko select your role: "» author". here you see your current submissions and their status (when applicable). for a new submission, click on "click here" under "start a new submission", and you arrive at the page step 1: starting the submission. in "section" select "article", and go to the 4-point checklist. check all four items after reading. in the 4th item, you see a link "author guidelines" pointing to the template. download this template, which contains all further instructions for the layout of your paper. check the copyright notice. you can enter a message for the editor, if you like. press "save and continue", and you arrive at the next page, step 2: enter metadata. on this page you can enter the metadata of the paper. title and abstract are required fields. when done, click "save and continue", and you arrive at the next page: "step 3. uploading the submission". on this page you can upload your submission, and on the next one, "step 4. uploading supplementary files", you can upload additional files if necessary. the next step is "step 5. confirming the submission". when finished, you receive an acknowledgement of your submission by email. you can view all your running submissions and their status by clicking the button "active submissions". your submission is assigned to a section editor. the section editor invites reviewers. reviewers who have accepted to review upload their review reports together with a recommendation. the section editor sends the editor's decision to the author, together with the reviewers' reports. the following decisions can be made: 1) accept submission > the paper enters the final stages of the submission process; 2) revisions required > the paper is accepted provided the author complies with the recommendations of the reviewers and editor; 3) resubmit for review > the paper is not suitable for publication in this form, but can be resubmitted after major revisions; 4) decline submission. 3. tracking your submission(s) register, go to user home, and select your role: "» author". here you see a list of your submissions and their status. click on the name of the submission to view the details. 4. resubmission of a revised paper when the recommendation is "revisions required", the author is asked to make revisions based on the comments of the reviewers, and to upload the revised paper.
when the recommendation is "resubmit for review", the author is asked to revise the paper (usually major revisions concerning the structure, missing material, expansion of the theory, or incomplete experimental results). resubmission follows the same procedure as the first submission. the resubmission is again peer reviewed, by either the same or different reviewers. 5. processing your accepted submission once your paper has been accepted, the author can take part in the copyedit, layout and proofreading processes. copyedit. the editor will do a first copyedit. you receive a request to review your submission after the first copyedit step by the editor. follow the instructions in this request: 1. click on the submission url. 2. log into the journal; you are directed to the author home page (active submissions), where you can see which paper is in the editing phase. click "in editing" for the paper you want to copyedit. the editing page has three submenus: summary, review and editing. if you are not in the editing submenu already, click editing. click on the file that appears in step 1 of the copyediting box. 3. open the downloaded submission. 4. review the text, including copyediting proposals and author queries. when needed, read the copyedit instructions in this window. click "copyedit comments" to add comments in the box and, when finished, press "save and email". the editor receives your copyedit comments by email. 5. make any copyediting changes that would further improve the text. 6. when completed, upload the file in step 2. 7. click on metadata to check the indexing information for completeness and accuracy. 8. send the complete email to the editor and copyeditor by clicking on the envelope just below complete in the copyediting window. layout. when the editor asks you to proof the layout, follow the same procedure: click "in editing" for the paper whose layout you want to check. view the proof. when corrections are necessary, click "layout comments", insert comments in the box and, when finished, press "save and email". the editor receives your layout comments by email. proofreading. when the editor asks for proofreading by the author, follow the same procedure: click "in editing" for the paper you want to proofread. read the proofing instructions. when corrections are necessary, click "proofreading corrections", insert corrections in the box and, when finished, press "save and email". the editor receives your proofreading corrections by email. when the submission has passed all the post-editing steps, the paper is released for publication. the author is informed of this action. human–robot collision predictor for flexible assembly acta imeko issn: 2221-870x september 2021, volume 10, number 3, 72 – 80 human–robot collision predictor for flexible assembly imre paniti1,2, jános nacsa1,2, péter kovács1, dávid szűr1 1 elkh sztaki, centre of excellence in production informatics and control, kende street 13–17, 1111 budapest, hungary 2 széchenyi istván egyetem, egyetem square 1, 9026 győr, hungary section: research paper keywords: collaborative robot; human–robot collaboration; virtual reality; collision prediction citation: imre paniti, jános nacsa, péter kovács, dávid szűr, human-robot collision predictor for flexible assembly, acta imeko, vol. 10, no.
3, article 12, september 2021, identifier: imeko-acta-10 (2021)-03-12 section editor: bálint kiss, budapest university of technology and economics, hungary received february 15, 2021; in final form august 16, 2021; published september 2021 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. funding: this work was supported by the tkp2020-nka-14 grant and by the h2020 project epic grant no. 739592. corresponding author: imre paniti, e-mail: imre.paniti@sztaki.hu abstract: the performance of human–robot collaboration can be improved in some assembly tasks when a robot emulates the effective coordination behaviours observed in human teams. however, this close collaboration could cause collisions, resulting in delays in the initial scheduling. besides the commonly used acoustic or visual signals, vibrations from a mobile device can be used to communicate the intention of a collaborative robot (cobot). in this paper, the communication time of a virtual reality and depth camera-based system is presented in which vibration signals are used to alert the user of a probable collision with a ur5 cobot. preliminary tests are carried out on human reaction time and network communication time measurements to achieve an initial picture of the collision predictor system's performance. experimental tests are also presented in an assembly task with a three-finger gripper that functions as a flexible assembly device. 1. introduction according to the international federation of robotics 2019 report, the average robot density in the manufacturing industry has grown to a new global record of 113 units per 10,000 employees [1]. although the automation of small- and medium-sized enterprises (smes) is supported within the european union, according to the european commission's digital economy and society index report 2019 [2] the share of large enterprises that use industrial or service robots is four times higher than that of smes, and the use of robots varies widely with company size. one of the most commonly asked questions in the semi-robotised industry is how to make production more efficient; this relates to a study [3] in which robots in an assembly operation could reduce the idle time of an operator by 85 %. therefore, using collaborative robots (cobots) in a factory for assembly tasks could lead to greater efficiency, which means shorter production times. this statement can also be useful for the assembly of different products or product families, which requires a set of different fixtures or reconfigurable fixtures, such as those based on the parallel kinematic machine in [4] or the fixed but flexibly useable gripper presented in this article. however, the problem is that, despite well-defined task sequences, the changeover from one product to another in a collaborative operation could lead to human failures and, consequently, to collisions with the cobot due to the previous habitual sequence of actions. by definition, a cobot has to operate with strict safety installations (protective stop execution when a certain force in a collision is reached), as outlined in iso/ts 15066:2016 [5], iso 10218-1:2011 [6] and iso 10218-2:2011 [7], but these protective stops could cause a significant cumulative delay in production. this depends largely on how the robot program has been written, i.e. whether operations can be continued after a protective stop. review articles such as those of hentout et al. [8] and zacharaki et al. [9] present solutions for pre-collision approaches in the frame of human–robot interaction (hri). pre-collision control methods, referred to as 'prevention' methods, are techniques intended to ensure safety during hri by monitoring either the human, the robot or both and modifying robot control parameters prior to the incidence of collision or contact [9]. pre-collision approaches can be divided into reactive control strategies, proprioceptive sensor-based strategies and
exteroceptive sensor-based control [8]. however, these approaches all manifest in robot control parameter modification rather than in operator warnings. both of the above studies refer to the work of carlos morato et al. [10], who presented a similar solution by creating a framework using multiple kinects to generate a 3d model with bounding spheres for human movements in real time. the proposed framework calculates human–robot interference in a 3d space with a physics-based simulation engine. the deficiency of the study is the pre-collision strategy for safe human–robot collaboration, because this results in the complete stoppage of the robot. this is indeed a safe protocol; it reduces the production break time but does not eliminate it completely. the aim of this paper is to highlight the importance of a new pre-collision strategy that does not modify the trajectories but relies fully on warning the operator (using a non-safety-critical system), especially when flexible/reconfigurable fixtures are used. section 2 provides an overview of standards and definitions related to robotic and cobotic systems, especially in relation to the protective separation distance, which is crucial for the proposed solution. section 3 presents an experimental environment and use cases in which the proposed solution can be used. section 4 describes the new pre-collision approach and its system elements in detail, together with some communication measurement results to demonstrate the feasibility of the solution. finally, section 5 presents a summary with conclusions. 2. standards and definitions for cobot use in general, when using a robotic arm with a gripper, the 2006/42/ec machinery directive [11] and the 2014/35/eu low voltage directive [12], together with iso/ts 15066:2016 [5] and 16 standards, have to be considered [13]. these are detailed in table 1.
table 1. standards in manufacturing when using a robotic arm with a gripper.
standard | ref.
en iso 10218-1:2011 | [6]
en iso 10218-2:2011 | [7]
iso/tr 20218-1:2018 | [14]
en iso 13855:2010 | [15]
en iso 13849-1:2015 | [16]
en iso 13849-2:2012 | [17]
en iso 12100:2010 | [18]
en iso 13850:2015 | [19]
en iec 60204-1:2018 | [20]
en iec 62061:2005 | [21]
en iso 11161:2007 | [22]
en iso 13854:2017 | [23]
en iso 13857:2019 | [24]
en iso 14118:2017 | [25]
en iec 62046:2018 | [26]
en iso 13851:2019 | [27]
according to iso 10218-1:2011 [6], a collaborative workspace is a space within the operating space where the robot system (including the workpiece) and a human can perform tasks concurrently during production operations, and a collaborative operation is a state in which a purposely designed robot system and an operator work within a collaborative workspace. according to iso/ts 15066:2016 [5], collaborative operations may include one or more of the following methods: • a safety-rated monitored stop, • hand guiding, • speed and separation monitoring, • power and force limiting. in power- and force-limiting operations, physical contact between the robot system (including the workpiece) and an operator can occur either intentionally or unintentionally.
Power- and force-limited collaborative operations require robot systems specifically designed for this particular type of operation using built-in measurement units. According to ISO/TS 15066 [5], risk reduction is achieved, either through inherently safe processes in the robot or through a safety-related control system, by keeping hazards associated with the robot system below threshold limit values, which are determined during the risk assessment.

If an operator wants to maintain a safe distance in a collaborative operation, ISO/TS 15066:2016 Robots and robotic devices – Collaborative robots (clause 5.5.4: speed and separation monitoring) [5], EN ISO 13850:2015 [19], EN ISO 13855:2010 [15], EN IEC 60204-1:2018 [20] and EN IEC 62046:2018 [26] should be applied together with the following regulations and standards: Directive 2006/42/EC [11], EN ISO 10218-1:2011 [6] and EN ISO 10218-2:2011 [7]. In addition, EN ISO 12100:2010: Safety of machinery – General principles for design – Risk assessment and risk reduction [18] should be considered.

In speed and separation monitoring, the protective separation distance is the shortest permissible distance between any moving hazardous part of the robot system and any human in the collaborative workspace, and this value can be fixed or variable. During automatic operations, the hazardous parts of the robot system should never get closer to the operator than the protective separation distance, which is calculated based on the concepts used to create the minimum distance formula in ISO 13855:2010 [15]. The protective separation distance S_p can be described by formula (1):

$$S_p(t_0) = S_h + S_r + S_s + C + Z_d + Z_r , \qquad (1)$$

where S_p(t_0) is the protective separation distance at time t_0 (present or current time); S_h is the contribution to the protective separation distance attributable to the operator's change in location; S_r is the contribution attributable to the robot system's reaction time; S_s is the contribution due to the robot system's stopping distance; C is the intrusion distance, as defined in ISO 13855, which is the distance that a part of the body can intrude into the sensing field before it is detected; Z_d is the position uncertainty of the operator in the collaborative workspace, as measured by the presence-sensing device, resulting from the sensing system measurement tolerance; and Z_r is the position uncertainty of the robot system, resulting from the accuracy of the robot position measurement system [5].

Table 1. Standards in manufacturing when using a robotic arm with a gripper.

Standard               Ref.
EN ISO 10218-1:2011    [6]
EN ISO 10218-2:2011    [7]
ISO/TR 20218-1:2018    [14]
EN ISO 13855:2010      [15]
EN ISO 13849-1:2015    [16]
EN ISO 13849-2:2012    [17]
EN ISO 12100:2010      [18]
EN ISO 13850:2015      [19]
EN IEC 60204-1:2018    [20]
EN IEC 62061:2005      [21]
EN ISO 11161:2007      [22]
EN ISO 13854:2017      [23]
EN ISO 13857:2019      [24]
EN ISO 14118:2017      [25]
EN IEC 62046:2018      [26]
EN ISO 13851:2019      [27]

Based on this, the authors propose to extend the protective separation distance (1) with an extra distance based on the communication time of a pre-collision system (S_pc) and with a contribution attributable to the robot operator's reaction time (S_ort), in order to avoid speed reductions or protective stops. This results in a modified protective separation distance S_p*:

$$S_p^{*} = S_p + S_{pc} + S_{ort} . \qquad (2)$$
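As a numeric illustration of equations (1) and (2), the sketch below combines illustrative contributions into S_p and extends it with the two proposed terms, modelled here as operator walking speed multiplied by the corresponding time contribution. The 1.6 m/s approach speed follows the walking speed used in ISO 13855; the communication and reaction times anticipate the averages measured in Section 4.3; all S-term values are assumptions for the example.

```python
# Minimal sketch of equations (1)-(2). All distance contributions are
# illustrative assumptions; 1.6 m/s is the ISO 13855 walking speed.

def protective_separation_distance(s_h, s_r, s_s, c, z_d, z_r):
    """Equation (1): ISO/TS 15066 protective separation distance."""
    return s_h + s_r + s_s + c + z_d + z_r

def modified_distance(s_p, t_pc, t_ort, v_operator=1.6):
    """Equation (2): S_p* = S_p + S_pc + S_ort, with the two extra terms
    modelled as distance walked during the respective time spans."""
    return s_p + v_operator * t_pc + v_operator * t_ort

s_p = protective_separation_distance(s_h=0.80, s_r=0.10, s_s=0.30,
                                     c=0.10, z_d=0.05, z_r=0.02)   # 1.37 m
# 98 ms pre-collision communication and 449 ms reaction time (Section 4.3)
print(modified_distance(s_p, t_pc=0.098, t_ort=0.449))              # ~2.25 m
```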
However, the proposed system in this paper is, as has already been mentioned, an additional, non-safety-certified solution. The purpose of the measurements presented in this paper is to determine the above-mentioned time parameters (communication time and reaction time) of the additional distances (S_pc and S_ort) in this specific environment.

3. Experimental environment and use cases
Robots are usually moved along prespecified trajectories that are defined in the robot's program, and, in most cases, a new task involves starting a new robot program. Another method is to move the high-level robot control from the robot to a computer; the robot then continuously receives the required movements and other actions via a stream. In this case, the robot runs a general-purpose program or framework that interprets and executes the external instructions received. In this scenario, the framework is called URSZTAKI, developed by the SZTAKI Research Laboratory for Engineering and Management Intelligence. URSZTAKI has three kinds of instructions: (a) basic instructions that constitute the robot's programming language, (b) instructions for the robot add-ons (e.g. the gripper and force sensor) integrated into the robot language by the accessory suppliers and (c) frequently used, more complex task instructions (e.g. putting down or picking up an object when the table distance is unknown). The third type of instruction constitutes the real feature set of URSZTAKI. It should also be mentioned that the expansion of the UR robot's functions and language is possible with the help of so-called URCaps (a platform where users, distributors and integrators can demonstrate accessories that run successfully in UR robot applications [28]); currently, URSZTAKI can also be installed as a URCap.
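The external-control scheme described above can be pictured with a short sketch: UR controllers accept URScript programs as plain text over TCP (port 30002 is the controller's standard primary interface). The robot address and joint targets below are illustrative assumptions, not URSZTAKI internals.

```python
# Sketch of streaming one URScript instruction to a UR robot from an
# external computer, in the spirit of the framework described above.
# The robot IP and joint angles are illustrative; port 30002 is the
# standard UR primary interface that accepts URScript text.
import socket

ROBOT_IP = "192.168.0.10"      # hypothetical controller address
PRIMARY_PORT = 30002

def send_urscript(script: str) -> None:
    """Open a TCP connection and push one URScript program."""
    with socket.create_connection((ROBOT_IP, PRIMARY_PORT), timeout=2.0) as s:
        s.sendall(script.encode("ascii") + b"\n")

# Move to a joint configuration (radians); the controller executes it.
send_urscript("movej([0.0, -1.57, 1.57, -1.57, -1.57, 0.0], a=1.2, v=0.5)")
```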
The experimental layout consists of a UR10 robot with a force sensor and a two-finger gripper (Figure 1). The environment was designed to support different assembly tasks, either fully robotised or collaborative. To fixture partly or even fully different components, a universal mounting technology was required instead of special fixtures. Another gripper (with three fingers) was therefore used that allows a wide variety of fixings. All three fingers of the selected adaptive gripper, fixed to the robot worktable (Figure 3), can be moved independently.

Figure 1. Demonstration environment.

The three-finger gripper from Robotiq [29] has four different modes for operating the fingers (Figure 2). In the 'pinch' mode, on the top left side of Figure 2, the gripper acts as a two-finger model, and the fingers move closely together to be able to pick up small objects. The next mode is the 'scissor' mode, in which the closing–opening ability of the fingers is used to pick up an object. In the third, 'wide' mode, the fingers are spread fan-like and provide a firm, wide grip for longer objects. In the 'normal' grip, the three fingers move in parallel and, depending on the relative position of the object, the fingertips also turn for greater precision in 'normal' and 'wide' mode; this is the encompassing grip.

Figure 2. The four different modes of the three-finger Robotiq gripper [29].

From the software point of view, both grippers can be directly programmed from the robot's program code. Despite the fact that both grippers are from the same manufacturer, which could make the development easier, the commands of one of the grippers had to be modified to avoid conflicts between the individual instructions.

A typical scenario is that the robotic arm picks up and transfers a part to the fixed gripper, which grabs it; after that, another part is placed or pressed with the desired force by the robotic arm onto the part held by the immobile gripper. There are some detailed tasks, such as the insertion of a spring into a housing, that have to be performed by the human operator. In this environment, it is also possible for the robot to hold a screwdriver and fasten the assembled parts with screws at a set torque limit (Figure 3 and Figure 4).

Figure 3. Illustration of the robotised screwdriving of a push-button element in which the spring has to be inserted manually.
Figure 4. Illustration of the robotised screwdriving of a ball valve element.

The prototype was designed specifically for the previously shown push-button element. However, it can easily be redesigned for another part, or a universal piece can be made to support different types of product assembly. Following the parallel movement of the fingers, a form-fitting shape is created that holds the part motionless while the required actions are carried out. Because the holder is connected to the fingertips, slippage is also prevented in cases where the pressing force applied is too great or an inappropriate human movement occurs. The proposed solution with the immobile three-finger gripper satisfies the requirements of a flexible fixture for certain parts.

In this scenario, human–robot collision problems might occur if the human operator forgets the predefined assembly task sequence when beginning the assembly of a new product, reaches for an assembly part, and the hand trajectory intersects that of the robot. To demonstrate a flexible assembly with the three-finger gripper, an additional application was developed in which both grippers were used to perform the assembly task, requiring human intervention at certain points of the assembly process. In the task, a didactic element had been packaged with a transparent plastic lid and a metal base that were pushed together at the beginning of the operation (presumably, this packaging came from the supplier). The operation steps of the complete assembly were the following:
1. Pick up the packaging material with the robot arm and fix the base with the three-finger gripper.
2. Remove the lid from the base and put it down (Figure 5).
3. Pick up the didactic element and place it onto the metal base.
4. Put the plastic lid back on the base.
5. Fix the packed object, release the three-finger gripper and put it back in its starting position.

Figure 5. Illustration of the second step.

Inserting the didactic element is the bottleneck of the assembly process. Normally, the robot finds the hole with a small spiral movement using force sensing. Since the gap between the meter and the base is narrow, this operation is not always successful (see Figure 6), in which case human intervention is possible or necessary to avoid any wastage. In some instances, the next operation (putting the lid back) corrected the skewed didactic element, and it slipped into the base. However, the success of a process should not be based on coincidence, and this is where a collision predictor system can be very useful. An easy movement by the operator can prevent wastage, thereby reducing costs. It is a simple operation sequence, but, because of the positioning errors, human intervention may be required during two of the steps.

Figure 6. Illustration of the failed insertion of the didactic element.
4. Pre-collision approach as a predictor
In order to avoid collisions with the robot, either the robot trajectory has to be modified in real time (which might cause additional production time, something companies want to avoid) or the human operator has to be warned with a pre-defined, understandable signal so that the human movement can be modified in time. The warning signal can be given to the operator in several ways: visual, acoustic or tactile. In this paper, the latter has been developed as part of a predictor of human–robot collision (PreHuRoCo) framework. The subject of the prediction in this case is the predetermined movement of the robot, which can be recorded and will occur after a certain time, so a framework similar to that described in [10] had to be created. However, instead of a digital twin of the robot (real-time 3D visualisation of the robot), a pre-played robot model motion was used together with the 3D skeleton model of the operator. The virtual collisions of the two models were used as trigger signals to warn the operator before a real collision.

4.1. Requirement analysis
The following features were needed for the candidate software library, based on the requirement analysis of PreHuRoCo:
1) Fully open source: the system must fulfil all the security requirements of a real manufacturing system; therefore, complete control of the source code is obligatory.
2) Modular: the system should be divided into various software components, so the candidate software library must support responsibility encapsulation.
3) Distributed: in a manufacturing system, many computers and Internet-of-Things (IoT) devices can be connected; therefore, the PreHuRoCo software components must have the ability to run on different computers or IoT devices.
4) Cross-platform: as the distribution requirement is for many computers and devices with different operating systems to be connected, the candidate framework should be cross-platform.
5) Programming language variability: as the distribution and cross-platform requirements cover different devices and computer operating systems in manufacturing scenarios, the candidate software library should support different application programming interfaces (APIs).
6) Scalability: PreHuRoCo software components should be developed independently of whether they run on the same computer or not. In terms of performance, the software components should be easily put together in one machine or one application and easily distributed.
7) Rapid prototyping: the candidate framework should provide examples or even pre-made components that can be improved during PreHuRoCo implementation, because the proposed system should deal with
• rigid-body simulation,
• visualisation (including VR or AR),
• real-time 3D scanning,
• an X3D model format and
• various communication protocols.
Unity Engine [30] and Unreal Engine [31] are well-known cross-platform game engines. ApertusVR [32] is a software- and hardware-vendor-free, open-source software library. It offers a no-vendor-lock-in approach for integrating VR technologies into industrial software systems. The comparison of the candidate frameworks against the requirements is summarised in Table 2.

Table 2. Comparison of different frameworks in relation to the PreHuRoCo requirements.

Requirement               Unity Engine   Unreal Engine   ApertusVR
Open source               partially      yes             yes
Modular                   yes            yes             yes
Distributed               partially      corner case     yes
Cross-platform            yes            partially       partially
Prog. lang. variability   partially      corner case     yes
Scalability               partially      partially       yes
Rapid prototyping         yes            yes             yes
Based on the PreHuRoCo requirement analysis, the ApertusVR software library was chosen for implementing the system. With the help of this software library, a distributed software ecosystem was created via the intranet/internet, divided into two main parts, the core and the plugins. The core system is responsible for the internet/intranet communication between the elements of the distributed software ecosystem, and it synchronises the information between them during the session. The plugin mechanism makes it possible to extend the capability of any solution created with the ApertusVR library. Plugins can access and manipulate the information within the core system.

4.2. Explanation of the PreHuRoCo system
The system is distributed into five major responsibilities: 1) 3D scanning of the human operator, 2) streaming the joint angles of the robot, 3) collision detection between the human and the robot, 4) alerting the human to the possible collision and 5) visualising the whole scenario. In the present study, these responsibilities were implemented with the help of the ApertusVR library and encapsulated into six plugins [33]: the collision detection plugin, the visualisation plugin, the Kinect plugin, the WebSocket server plugin, the X3D loader plugin and the NodeJS plugin. The seventh element was a WebSocket client, implemented in the form of an HTML site using the jQuery JavaScript library and the Vibration API method [34] for mobile phones; for more comfortable use, the WebSocket client could also run on a smart watch. Figure 7 shows the realised system with the connections and applied protocols in an experimental setup with a UR5 robot.

Figure 7. PreHuRoCo system elements and connections with the applied protocols.

Collision detection plugin [35]: this plugin was created based on the pre-made ApertusVR 'BulletPhysics' plugin. Previously, this plugin had been able to run rigid-body simulations, but collision events were not created during these simulations. The ApertusVR rigid-body abstraction was enhanced with the functionality of collision events.
Visualisation plugin [36]: this plugin was used as-is from the ApertusVR repository for visualisation purposes.
Kinect plugin [37]: this plugin was created based on the pre-made ApertusVR 'Kinect' plugin. Previously, this plugin had been able to create the skeleton of the tracked human or even its point cloud, but rigid bodies were not created. For collision detection, rigid bodies are mandatory; therefore, rigid bodies were created based on the geometries of the human skeletons.
WebSocket server plugin [38]: this plugin was created based on the pre-made ApertusVR 'WebSocketServer' plugin. Previously, this plugin had been able to forward all events generated in the core. For collision detection, only the collision events of the rigid bodies are necessary. During the implementation of this plugin, a filter feature was added to forward only the desired events into the WebSocket connection.
X3D loader plugin [39]: this plugin was created based on the pre-made ApertusVR 'X3DLoader' plugin.
Previously, this plugin had been able to parse the X3D format and create only the geometries of the robot. For collision detection, rigid bodies are mandatory; therefore, rigid bodies were created based on the parsed geometries.
NodeJS plugin [40]: this plugin was used as-is from the ApertusVR repository and allows a web server to be run to receive the joint angles of the UR5 robot via HTTP requests.

In the PreHuRoCo system, these plugins are encapsulated in different applications. These applications can be run on different computers to distribute the computational load and achieve real-time collision prediction. As the diagram in Figure 7 shows, the applications communicate over the internet/intranet via different protocols. The collision detection application has to be run on a high-performance computing (HPC) server to process the virtual collisions in real time. The Kinect application can run on a dedicated computer for the Kinect device or on the same computer that calculates the virtual collisions. The X3DLoader and the NodeJS plugins are integrated into one application and can run on the computer dedicated to the UR5 robot. The WebSocket server application can also be run on a different computer to satisfy security and locality requirements.

The joint positions are stored in a jsonlist file, which is generated by executing the whole robot program. During the execution, the joint positions are 'grabbed' and saved at a given frequency. The speed of the simulation is equal to the speed of the robot movement, and the 'forecast' is determined by the delay between the simulation starting time and the real robot execution start time.
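A minimal sketch of this pre-played motion scheme follows, assuming a jsonlist layout with one JSON object per line and a "joints" field (the field name and the 50 Hz grab rate are assumptions; a sample file is referenced in [43]):

```python
# Sketch of the forecast mechanism described above: the simulated motion
# is started delta_t seconds before the real robot, so every virtual
# collision fires delta_t early. File layout and rate are assumptions.
import json
import threading
import time

def replay_jsonlist(path, rate_hz, on_pose):
    """Stream pre-recorded joint configurations at the recording rate,
    so the virtual robot moves exactly as the real one will."""
    with open(path) as f:
        for line in f:
            if line.strip():
                on_pose(json.loads(line)["joints"])  # update virtual model
                time.sleep(1.0 / rate_hz)

def forecast_run(path, delta_t, start_robot, rate_hz=50.0, on_pose=print):
    """Launch the simulation now and the real robot delta_t seconds later."""
    sim = threading.Thread(target=replay_jsonlist,
                           args=(path, rate_hz, on_pose))
    sim.start()
    time.sleep(delta_t)
    start_robot()        # e.g. start the recorded UR5 program
    sim.join()
```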
4.3. Modified PreHuRoCo system and measurements
During the validation process, the PreHuRoCo system was reconfigured to eliminate any unnecessary delay in the system. The reconfiguration was achieved with the ApertusVR configuration feature; thus, all the plugins were reused without any modification. The previously distributed PreHuRoCo system was therefore easily reconfigured to form a single application (Figure 8) able to run on a single computer. The elimination of unnecessary network connections/delays was a crucial step in avoiding any latency in the system. Through this approach, the human–robot collision calculation time and the human-operator reaction time were measured precisely. Timestamps were buffered before and after the collision events, the WebSocket message transmission/receipt and the human operator pressing a button on a Bluetooth keyboard.

Figure 8. Reconfigured PreHuRoCo system.

The proposed framework was tested on two local network topologies. In the first case, the calculations were divided between a cloud-service-based computer (with four virtual CPUs, 8 GB RAM, running a Windows 10 operating system) and an HPC server (Ideum with Intel i7-8700, RTX 2080 8 GB GDDR6 NVIDIA graphics card, dual 250 GB NVMe M.2 SSD, 32 GB 2400 MHz DDR4 RAM, running a Windows 10 operating system), and the collision events were delivered to the WebSocket client with significant delay. By running all ApertusVR plugins on the Ideum and sending only the collision events via a wireless LAN connection (2.4 GHz Wi-Fi), the user experience was quasi real-time. Figure 9 shows a virtual collision test running on the Ideum (HPC server) with the skeleton model of a single operator (1), the virtual UR5 robot movement simulation (2), the real robot (3), a Kinect sensor (4) and a mobile phone (5) with an Android operating system running the WebSocket client to vibrate the device.

Figure 9. Virtual collision test.

The 3D scene was visualised with a top camera view, but arbitrary camera views are possible. To avoid the execution of large JavaScript files locally on the Android mobile phone, external calls to cdn.jsdelivr.net and code.jquery.com were used. The ping times to these services were measured with an Android application (PingTools, version 4.52), which gave 9 ms and 30 ms as the averages of three measurements, respectively.

The second network topology was used to measure the communication time of the system with five more people of different genders and ages (see Figure 10). The reaction time of each operator was measured using an Android application (Reaction Test, version 1.3), which vibrates at randomised short time intervals (a couple of seconds) and calculates the average of five measurements. The average calculation time from the human–robot model collision until the HTTP-request send was 98 ms, the average time from the HTTP-request send to the keypress event was 1,355 ms and the average reaction time was 449 ms. Each virtual collision with keyboard pressing as confirmation was tested three times. According to a Bluetooth keyboard performance test, 'Microsoft delays in a non-interference test environment by approximately 40 to 200 milliseconds' [41], so the calculation time for the human–robot collision together with the network communication time would be less than 1 s using this PreHuRoCo configuration. However, by using RakNet instead of HTTP requests, the performance of the system can be significantly improved: RakNet communication time measurements from 223 collision events showed that only 36.52 ms was needed on average. Furthermore, it is worth mentioning that with 5G communication an average two-way latency of 1.26 ± 0.01 ms would be possible, as noted in [42].

Figure 10. Virtual collision measurements with five additional candidates.

The Kinect plugin creates a simplified skeleton model of the human operator, which needs improvement; an anthropomorphic skeleton model or voxelisation could be a solution in the future. It should be highlighted that the communication time increased by the human reaction time should not exceed the Δt time between the pre-played simulated motion and the actual motion of the robot. A jsonlist file of the simulated UR5 robot movement is provided in [43].
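The feasibility condition just stated (communication time plus reaction time must not exceed Δt) can be checked directly with the averages measured above; the sketch below uses the RakNet figure as the transport time:

```python
# Timing-budget check for the warning chain, using the measured averages:
# 98 ms collision-to-send, 36.52 ms RakNet transport, 449 ms human reaction.
def warning_in_time(t_calc, t_transport, t_react, delta_t):
    """True if the operator can still react before the real robot
    reaches the predicted collision pose."""
    return t_calc + t_transport + t_react < delta_t

t_total = 0.098 + 0.03652 + 0.449        # = 0.58352 s end-to-end
print(warning_in_time(0.098, 0.03652, 0.449, delta_t=1.0))   # True
print(warning_in_time(0.098, 0.03652, 0.449, delta_t=0.5))   # False
```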
5. Conclusion
In this paper, a commercially available gripper as a flexible fixture for assembly and a new pre-collision approach as a predictor for human–robot collaboration were presented. The proposed framework was realised with the help of a modular, distributed, open-source, cross-platform library (ApertusVR) with support for different programming APIs and scalability solutions. Seven interconnected system modules were developed with the goal of monitoring the movement of the human operator in 3D space, calculating collisions with a virtual robot (with pre-played movements rather than the movement of a real robot) and alerting the human operator before a real collision could occur. Successful virtual collision tests with six candidates showed that the operator received the warning signal almost immediately (under 1 s) in the form of a mobile-device vibration, in time to modify the planned movement.

In some cases, real-time path planning is required, especially in a changing environment, such as when the position of the workpiece to be gripped is variable (e.g. litter picking). In a collaborative environment, this is a serious safety challenge that the whole system has to manage. The static parts of the environment can be checked regularly through collision detection, but the presence of the human means that 'simple' collision detection is not sufficient. This was the main reason for the research and development presented in this paper.

Acknowledgement
This research has been supported by the 'Thematic Excellence Program – National Challenges Subprogram – Establishment of the Center of Excellence for Autonomous Transport Systems at Széchenyi István University (TKP2020-NKA-14)' project and by the European Commission through the H2020 project EPIC (https://www.centre-epic.eu/) under grant no. 739592.

References
[1] IFR press releases. Online [Accessed 16 August 2021] https://ifr.org/ifr-press-releases/news/robot-race-the-worlds-top-10-automated-countries
[2] European Commission, Digital Economy and Society Index Report 2019 – Integration of digital technology. Online [Accessed 16 August 2021] https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=59979
[3] J. Shah, J. Wiken, B. Williams, C. Breazeal, Improved human-robot team performance using Chaski, a human-inspired plan execution system, Proc. of the 6th Int. Conf. on Human-Robot Interaction, Lausanne, Switzerland, 8-11 March 2011, pp. 29-36. DOI: 10.1145/1957656.1957668
[4] T. Gaspar, B. Ridge, R. Bevec, M. Bem, I. Kovač, A. Ude, Z. Gosar, Rapid hardware and software reconfiguration in a robotic workcell, Proc. of the 18th IEEE Int. Conf. on Advanced Robotics (ICAR), Hong Kong, China, 10-12 July 2017, pp. 229-236. DOI: 10.1109/ICAR.2017.8023523
[5] ISO/TS 15066:2016, Robots and robotic devices – Collaborative robots. Online [Accessed 16 August 2021] https://www.iso.org/standard/62996.html
[6] ISO 10218-1:2011, Robots and robotic devices – Safety requirements for industrial robots – Part 1: Robots. Online [Accessed 16 August 2021] https://www.iso.org/standard/51330.html
[7] ISO 10218-2:2011, Robots and robotic devices – Safety requirements for industrial robots – Part 2: Robot systems and integration. Online [Accessed 16 August 2021] https://www.iso.org/standard/41571.html
[8] A. Hentout, M. Aouache, A. Maoudj, I. Akli, Human–robot interaction in industrial collaborative robotics: a literature review of the decade 2008–2017, Advanced Robotics 33 (2019), pp. 764-799. DOI: 10.1080/01691864.2019.1636714
[9] A. Zacharaki, I. Kostavelis, A. Gasteratos, I. Dokas, Safety bounds in human robot interaction: a survey, Safety Science 127 (2020), 104667. DOI: 10.1016/j.ssci.2020.104667
[10] C. Morato, K. Kaipa, B. Zhao, S. K. Gupta, Safe human robot interaction by using exteroceptive sensing based human modeling, Proc. of the ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Volume 2A: 33rd Computers and Information in Engineering Conference, Portland, Oregon, USA, 4-7 August 2013, 10 pp. DOI: 10.1115/DETC2013-13351
[11] European Commission, 2006/42/EC Machinery Directive. Online [Accessed 16 August 2021] https://eur-lex.europa.eu/legal-content/en/txt/?uri=celex%3a32006l0042
[12] European Commission, 2014/35/EU Low Voltage Directive. Online [Accessed 16 August 2021] https://eur-lex.europa.eu/legal-content/en/txt/?uri=celex:32014l0035
[13] COVR project database for directives and standards. Online [Accessed 16 August 2021] https://www.safearoundrobots.com/toolkit/documentfinder
[14] ISO/TR 20218-1:2018 Robotics – Safety design for industrial robot systems – Part 1: End-effectors. Online [Accessed 16 August 2021] https://www.iso.org/standard/69488.html
[15] EN ISO 13855:2010 Safety of machinery – Positioning of safeguards with respect to the approach speeds of parts of the human body. Online [Accessed 16 August 2021] https://www.iso.org/standard/42845.html
[16] EN ISO 13849-1:2015 Safety of machinery – Safety-related parts of control systems – Part 1: General principles for design. Online [Accessed 16 August 2021] https://www.iso.org/standard/69883.html
[17] EN ISO 13849-2:2012 Safety of machinery – Safety-related parts of control systems – Part 2: Validation. Online [Accessed 16 August 2021] https://www.iso.org/standard/53640.html
[18] EN ISO 12100:2010 Safety of machinery – General principles for design – Risk assessment and risk reduction. Online [Accessed 16 August 2021] https://www.iso.org/standard/51528.html
[19] EN ISO 13850:2015 Safety of machinery – Emergency stop function – Principles for design. Online [Accessed 16 August 2021] https://www.iso.org/standard/59970.html
[20] EN IEC 60204-1:2018 Safety of machinery – Electrical equipment of machines – Part 1: General requirements. Online [Accessed 16 August 2021] https://standards.iteh.ai/catalog/standards/sist/e7d3ec34-16ab-476d-b979-1de5762a3ed7/sist-en-60204-1-2018
[21] EN IEC 62061:2005 Safety of machinery – Functional safety of safety-related electrical, electronic and programmable electronic control systems. Online [Accessed 16 August 2021] https://standards.iteh.ai/catalog/standards/sist/4c933a51-d926-457b-b3da-4bfaef9908ac/sist-en-62061-2005
[22] EN ISO 11161:2007 Safety of machinery – Integrated manufacturing systems – Basic requirements. Online [Accessed 16 August 2021] https://www.iso.org/standard/35996.html
[23] EN ISO 13854:2017 Safety of machinery – Minimum gaps to avoid crushing of parts of the human body. Online [Accessed 16 August 2021] https://www.iso.org/standard/66459.html
[24] EN ISO 13857:2019 Safety of machinery – Safety distances to prevent hazard zones being reached by upper and lower limbs. Online [Accessed 16 August 2021] https://www.iso.org/standard/69569.html
[25] EN ISO 14118:2017 Safety of machinery – Prevention of unexpected start-up. Online [Accessed 16 August 2021] https://www.iso.org/standard/66460.html
[26] EN IEC 62046:2018 Safety of machinery – Application of protective equipment to detect the presence of persons. Online [Accessed 16 August 2021] https://standards.iteh.ai/catalog/standards/sist/b62f0bb2-9011-413a-a717-caf55f66f289/sist-en-iec-62046-2018
[27] EN ISO 13851:2019 Safety of machinery – Two-hand control devices – Principles for design and selection. Online [Accessed 16 August 2021] https://www.iso.org/standard/70295.html
[28] Universal Robots, URCap software platform of Universal Robots. Online [Accessed 16 August 2021] https://www.universal-robots.com/
[29] Robotiq website. Online [Accessed 16 August 2021] www.robotiq.com
[30] Unity, cross-platform game engine. Online [Accessed 16 August 2021] https://unity.com
[31] Unreal Engine, cross-platform game engine. Online [Accessed 16 August 2021] https://www.unrealengine.com/en-us/
[32] ApertusVR documentation, GitBook. Online [Accessed 16 August 2021] https://apertus.gitbook.io/vr/
[33] PreHuRoCo sample files on GitHub. Online [Accessed 16 August 2021] https://github.com/mtasztaki/apertusvr/tree/0.9.1/samples/collisiondetection
[34] Vibration API (Second Edition), W3C Recommendation, 18 October 2016. Online [Accessed 16 August 2021] https://www.w3.org/tr/vibration/
[35] Collision detection plugin, ApertusVR on GitHub. Online [Accessed 16 August 2021] https://github.com/mtasztaki/apertusvr/tree/0.9.1/plugins/physics/bulletphysics
[36] Visualisation plugin, ApertusVR on GitHub. Online [Accessed 16 August 2021] https://github.com/mtasztaki/apertusvr/tree/0.9.1/plugins/render/ogrerender
[37] Kinect plugin, ApertusVR on GitHub. Online [Accessed 16 August 2021] https://github.com/mtasztaki/apertusvr/tree/0.9.1/plugins/track/body/kinect
[38] WebSocket server plugin, ApertusVR on GitHub. Online [Accessed 16 August 2021] https://github.com/mtasztaki/apertusvr/tree/0.9.1/plugins/languageapi/websocketserver
[39] X3D loader plugin, ApertusVR on GitHub. Online [Accessed 16 August 2021] https://github.com/mtasztaki/apertusvr/tree/0.9.1/plugins/languageapi/jsapi/nodejsplugin/js/plugins/x3dloader
[40] NodeJS plugin, ApertusVR on GitHub. Online [Accessed 16 August 2021] https://github.com/mtasztaki/apertusvr/tree/0.9.1/plugins/languageapi/jsapi/nodejsplugin
[41] Bluetooth keyboard performance test. Online [Accessed 16 August 2021] http://www.technical-direct.com/en/bluetooth-keyboard-performance-test/
[42] Interactivity test: examples from real 5G networks (part 3). Online [Accessed 16 August 2021] https://www.rohde-schwarz.com/us/solutions/test-and-measurement/mobile-network-testing/stories-insights/article-interactivity-test-examples-from-real-5g-networks-part-3-_253380.html
[43] jsonlist file of the simulated UR5 robot movement. Online [Accessed 16 August 2021] https://github.com/mtasztaki/apertusvr/blob/89aefbc9b2a0e7524092b87d728ad539cfc0a856/plugins/languageapi/jsapi/nodejsplugin/js/plugins/httpsimulator/ur5.jsonlist

Human identification and tracking using ultra-wideband-vision data fusion in unstructured environments

ACTA IMEKO
ISSN: 2221-870X
December 2021, Volume 10, Number 4, 124-131
Alessandro Luchetti1, Andrea Carollo1, Luca Santoro1, Matteo Nardello1, Davide Brunelli1, Paolo Bosetti1, Mariolino De Cecco1
1 Department of Industrial Engineering, University of Trento, Via Sommarive 9, 38123 Trento, Italy

Section: Research paper
Keywords: human-robot interaction; following system; ultra-wideband technology; 3D vision system; sensor fusion
Citation: Alessandro Luchetti, Andrea Carollo, Luca Santoro, Matteo Nardello, Davide Brunelli, Paolo Bosetti, Mariolino De Cecco, Human identification and tracking using ultra-wideband-vision data fusion in unstructured environments, Acta IMEKO, vol. 10, no. 4, article 21, December 2021, identifier: IMEKO-ACTA-10 (2021)-04-21
Section editor: Roberto Montanini, Università di Messina and Alfredo Cigada, Politecnico di Milano, Italy
Received July 26, 2021; in final form December 6, 2021; published December 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Alessandro Luchetti, e-mail: alessandro.luchetti@unitn.it

Abstract
Nowadays, the ability to work in changing and unstructured environments, such as logistics warehouses, through the cooperation between automated guided vehicles (AGVs) and the operator is increasingly demanded. The challenge addressed in this article is to solve two crucial functions of autonomy: operator identification and tracking. These tasks are necessary to enable an AGV to follow the selected operator along his path. This paper presents an innovative, accurate, robust, autonomous and low-cost real-time operator tracking system, leveraging the inherent complementarity of the uncertainty regions (2D ellipses) of ultra-wideband (UWB) transceivers and cameras. The test campaign shows how the UWB system has higher uncertainty in the angular direction, while, in the case of the vision system, the uncertainty is predominant along the radial coordinate. Due to the complementary nature of the data, sensor fusion demonstrably improves the accuracy and quality of the final tracking.

1. Introduction
Cooperation between mobile robots and people is playing a significant role in the modern economy, and its demand is increasing worldwide, aided by the growing use of autonomous and smart mobile robots. In particular, cobots can augment human resources in industrial environments while reducing physical and mental load and increasing operational safety and productivity. In this context, the human-following function, called 'follow-me', is crucial: it consists of identifying and following the assigned operator, even in unstructured environments.

There are different approaches to achieve such a task. The most commonly used technologies for tracking include vision [1], [2], time-of-flight (ToF) cameras [3], LiDAR [4], light-emitting devices (LEDs) [5] and UWB transceivers [6]-[13]. Each of these technologies has advantages and disadvantages. LiDAR technology, while faster and more accurate than a ToF camera, is much more expensive and is not used unless strictly necessary. Few applications of the LED detection method are reported in the literature because of its low robustness, due to the robot frequently being unable to detect the light-emitting device. The best low-cost candidates for tracking operations remain 3D vision and UWB systems. The main literature contributions use them independently to solve the 'follow-me' task in unstructured environments, without addressing their respective disadvantages. For example, the disadvantages of traditional vision systems are the limited field of view (FoV) compared with UWB systems and the influence of lighting, especially outdoors. In contrast, UWB can be applied both indoors and outdoors but suffers from higher uncertainties, especially for measurements of less than one metre and in the presence of obstruction by people between the transceivers [9], [10]. However, UWB measurements can be made up to 80 m, whereas ToF cameras return mostly noise beyond 10 m, although they are more precise than UWB at shorter ranges. Furthermore, the shapes of the uncertainty regions of the two systems are different but complementary. With this work, we overcome the disadvantages of these technologies by combining them to improve the robustness and reduce the uncertainty of the measurement result.
The only literature works that apply sensor fusion between UWB and vision systems operate in structured environments with fixed UWB transceivers [11], [12] or with both fixed UWB and vision systems [13]. Their solutions do not adapt dynamically to changes, are more expensive because of the number of devices to be used, and must be calibrated for each environment.

This work is organised as follows. An overview of the involved measurement systems is provided in the next section. The human identification algorithms used to define which operator to follow are presented in Section 3; Section 4 describes the uncertainty models of the UWB and vision systems. The operator localisation results obtained through the sensor fusion approach are discussed in Section 5, followed by conclusions in Section 6.

2. Measurement systems
For our application, we used Decawave's UWB development board DWM1001 [14]. It has the advantages of low cost, low power consumption and strong penetration. The receivers were programmed with a two-way ranging (TWR) architecture [15] that allows them to work in an unstructured environment, with two anchors on the robot, one master and one generic, and a tag for the tracked operator (Figure 2). The tag communicates first with the generic anchor to calculate the distance d1, i.e. the distance between the tag and the generic anchor, and then with the master anchor to obtain the distance d2, i.e. the distance between the tag and the master anchor. Subsequently, the generic anchor sends the estimated distance d1 to the master and, finally, the master shares the two distances d1, d2 with the computer, within a total time of 9 ms. The distance between the two anchors, d3, is fixed and constant.

Figure 2. System configuration (top view).
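For context, each distance produced by a TWR exchange comes from the measured time of flight of the radio pulse. The sketch below shows the generic single-sided TWR computation (the actual DWM1001 exchange and clock-drift corrections are handled in firmware; the timestamps are illustrative):

```python
# Generic single-sided two-way-ranging computation, illustrating how a
# UWB distance such as d1 or d2 is obtained from timestamps. Values are
# illustrative; the DWM1001 firmware performs the real exchange.
C_AIR = 299_702_547.0   # propagation speed of the radio pulse in air, m/s

def twr_distance(t_round, t_reply):
    """t_round: initiator send -> response received (s);
    t_reply: responder receive -> response sent (s).
    The one-way time of flight is half of their difference."""
    tof = (t_round - t_reply) / 2.0
    return C_AIR * tof

# 280 ns round trip with a 230 ns responder turnaround -> 25 ns one way
print(twr_distance(280e-9, 230e-9))   # ~7.49 m
```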
The position of the tag in the environment using only the UWB system is ambiguous, because the two circumferences of radii d1 and d2 can intersect at two different points. To solve this problem, the robot is equipped with a camera, which determines the uniqueness of the measurement (i.e., whether the operator is in front of the robot or not). The selected camera is the Intel RealSense depth camera D455 [16]. The device includes an RGB camera, two infrared (IR) cameras, a laser projector and an inertial measurement unit (IMU). The vision system can extract both a 3D point cloud of the scene and a traditional RGB image. The RGB image is used to apply artificial intelligence (AI) algorithms that perform human skeleton detection, while the point cloud allows the key points of the skeleton to be localised in 3D. Figure 1 shows the designed system, in particular the UWB and vision systems on board the mobile robot for operator identification and tracking.

Figure 1. Real system with mobile robot and operator.

3. Human identification
Human detection and human pose estimation are done by applying a neural network provided by Intel in the OpenVINO toolkit, called human-pose-estimation-0001. The toolkit enables convolutional neural network (CNN)-based deep learning inference and contains an optimised version of the OpenCV libraries for Intel hardware [17]. This network is based on the OpenPose approach [18] with a tuned MobileNet v1 [19] as feature extractor. The network produces two different outputs. The first is composed of 18 probability maps, called heat-maps, that provide all the key-points on the image: ears, eyes, nose, neck, shoulders, elbows, wrists, hips, knees and ankles (Figure 3a). The second is 19 × 2 layers, called part affinity fields, which give information on how to match the key-points that correspond to a single person (Figure 3b). Heat-maps provide the probability of each pixel being at the position of a key-point. By thresholding the heat-maps, it is possible to select the pixels with a probability higher than 50 %, from which, for each cluster of pixels, the average point and the covariance matrix of each key-point location can be evaluated (Figure 3a). From these covariance matrices, the average uncertainty of each pixel was calculated as three pixels. The image resolution was set at 848 × 480 (C × R), and the human-pose-estimation-0001 network, as INT8 running on the CPU, extracts up to 18 key-points per person in each frame. The resolution was chosen as a good trade-off between image resolution, which is linked to the uncertainty of the human positioning, and the execution speed of the network. The network is available in three different precisions, i.e. INT8, FP16 and FP32. There was no difference in performance in our tests when running on the CPU, but the INT8 version was slightly faster than the others for this estimation.

Figure 3. Heat-maps with covariances of key-points (a) and affinity map (b) of an 848 × 480 image from camera D455 with the operator at two metres.

Only the operator entitled to use the robot must be tracked. To achieve this, we introduced face identification. There are two steps to identify the operator: find all the faces within the RGB frame and then detect the operator among them. For these steps, we used two CNNs available in the OpenVINO toolkit: the face detection CNN used is face-detection-retail-0005, and, to extract the features of each face and compare them with the operator features stored in a database, we use face-reidentification-retail-0095. The matching is made through the cosine similarity, equation (1), which is defined as the cosine of the angle θ between two feature vectors A and B in $\mathbb{R}^n$:

$$\mathrm{Similarity} = \cos(\theta) = \frac{A \cdot B}{\lVert A \rVert\, \lVert B \rVert} = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^2}\, \sqrt{\sum_{i=1}^{n} B_i^2}} , \qquad (1)$$

where A is the feature vector acquired in real time and B is the one stored as ground truth. The higher the cosine, the higher the probability that the feature vectors are similar. The operator, once identified, turns on the spot, and the software saves all the features again, with a neural network called person-reidentification-retail-0031, for different poses.
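Equation (1) maps directly to a few lines of code; the vectors below are illustrative stand-ins for the CNN feature embeddings:

```python
# Direct implementation of equation (1): cosine similarity between a
# live feature vector and a stored ground-truth vector. The values are
# illustrative; real vectors come from the re-identification networks.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

a = np.array([0.12, 0.80, 0.35, 0.41])   # features acquired in real time
b = np.array([0.10, 0.78, 0.40, 0.39])   # operator features from the database
print(cosine_similarity(a, b))            # close to 1 -> likely the operator
```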
As previously stated, the correspondence is made through the cosine similarity between the feature vectors of people's bodies in the frames. Table 1 shows the overall times for each network inference, calculated on an Intel CPU i7-7700HQ with 16 GB of RAM.

Table 1. Overall times.

Network                               Image resolution   Net precision   Inference time (ms)
face-detection-retail-0005            848 × 480          INT8            9.42
face-reidentification-retail-0095     848 × 480          FP16            5.38
person-reidentification-retail-0031   96 × 48            FP16            6.50
human-pose-estimation-0001            848 × 480          INT8            115

4. Human localization: 2D uncertainty modelling
The UWB and vision systems provide the same information about the operator's position, both referred to the robot base, but their uncertainty regions are complementary, as explained in this section.

4.1. UWB system
To maintain consistency with the information received from the vision system, the reference frame of the UWB system is fixed on the camera, and the anchors are placed at its sides at equal distance d3/2 (Figure 2). In particular, the x coordinate is related to the lateral axis, the z coordinate to the depth and the y coordinate to the vertical axis. For the localisation of the operator in space, we are interested only in the x and z coordinates, since everything is projected onto the ground floor. The equations of the circumferences from the UWB devices become:

$$\left(x - \frac{d_3}{2}\right)^2 + z^2 = d_1^2 , \qquad (2)$$

$$\left(x + \frac{d_3}{2}\right)^2 + z^2 = d_2^2 . \qquad (3)$$

From equations (2) and (3), we obtain the closed-form solution for the tag position:

$$x = \frac{d_2^2 - d_1^2}{2 d_3} , \qquad (4)$$

$$z = \pm \frac{1}{2} \sqrt{ -\frac{(d_1^2 - d_2^2)^2}{d_3^2} - d_3^2 + 2\,(d_1^2 + d_2^2) } . \qquad (5)$$

As discussed in Section 2, we keep only the positive value of the z coordinate, because it is guaranteed by the vision system. The covariance matrix of the coordinates of the tag position with the UWB system is equal to:

$$C_{\mathrm{uwb}} = J_{\mathrm{dist}} \cdot C_{\mathrm{dist}} \cdot J_{\mathrm{dist}}^{T} , \qquad (6)$$

where C_dist is the covariance matrix of the measured distances d1, d2 and d3 (Figure 2), with their standard deviations σ1, σ2 and σ3:

$$C_{\mathrm{dist}} = \begin{pmatrix} \sigma_1^2 & 0 & 0 \\ 0 & \sigma_2^2 & 0 \\ 0 & 0 & \sigma_3^2 \end{pmatrix} . \qquad (7)$$

The UWB devices used to test the overall system were calibrated in different indoor and outdoor scenarios to find the corresponding systematic offset at different distances and the related uncertainties. Furthermore, the master and generic anchors were calibrated independently because, although the device type is the same, their internal crystals, and thus their responses, are not identical. In particular, we carried out measurements from 1 m to 20 m every 0.5 m, in random order, both indoors and outdoors. We collected data at 52 Hz for three minutes; between different distances, we waited three minutes with all the devices turned off. Three tests were performed for each distance: one with the line of sight (LOS) between the devices free (Figure 4a) and the other two with noise elements: in one, 34 people simultaneously walked randomly back and forth between the devices throughout the test time (Figure 4b); in the other, large static metal elements were placed in random positions between the devices (Figure 4c). This was done to understand how much these scenarios affect the measurements.

Figure 4. Indoor scenarios: free LOS (a), people walking (b), metal objects (c); outdoor scenarios (d).

Figure 5a shows an example of the box-plot result for an indoor test at 7 m in different scenarios. In particular, in all the tests done, the standard deviation of the measurements made with the walking people was significantly higher than those with free LOS and metal objects. The data acquired with walking people show an overestimation, possibly because the occlusion of the LOS causes reflected radio waves to be caught as if the LOS were free.

Figure 5. (a) Box-plot of data at 7 m indoors in different scenarios; (b) offset model of master and generic anchors outdoors in free line of sight (LOS).

From the tests, the standard deviations of d1 and d2 were calculated as 0.05 m outdoors with free LOS. This value was used and set in our application; the standard deviation of d3, instead, was set at 0.01 m to take into account human error during fixing.
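A numeric sketch of equations (4)-(7) with these calibrated standard deviations follows; for brevity, the Jacobian is evaluated numerically rather than through the symbolic form of equation (8) below, and the example distances are illustrative:

```python
# Closed-form tag position (equations (4)-(5)) and first-order covariance
# propagation (equations (6)-(7)), with sigma1 = sigma2 = 0.05 m and
# sigma3 = 0.01 m as calibrated above. Example distances are illustrative.
import numpy as np

def tag_position(d):
    d1, d2, d3 = d
    x = (d2**2 - d1**2) / (2 * d3)
    z = 0.5 * np.sqrt(-(d1**2 - d2**2)**2 / d3**2 - d3**2
                      + 2 * (d1**2 + d2**2))
    return np.array([x, z])          # positive z: in front (camera check)

def uwb_covariance(d, sigmas=(0.05, 0.05, 0.01), eps=1e-6):
    c_dist = np.diag(np.square(sigmas))          # equation (7)
    jac = np.zeros((2, 3))                       # numeric 2x3 Jacobian
    for j in range(3):
        dp, dm = np.array(d, float), np.array(d, float)
        dp[j] += eps
        dm[j] -= eps
        jac[:, j] = (tag_position(dp) - tag_position(dm)) / (2 * eps)
    return jac @ c_dist @ jac.T                  # equation (6)

d = (4.1, 4.3, 0.6)     # d1, d2 (tag-anchor) and d3 (anchor spacing), in m
print(tag_position(d))   # -> [1.4, ~3.95]
print(uwb_covariance(d)) # 2x2 covariance of the estimated position
```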
Regarding the offset between the distances measured with the UWB devices and the real distances, the values were modelled from the outdoor tests in free LOS (Figure 5b). All the real distances, taken as ground truth, were measured with a laser meter (Fervi ML80 [20]), able to measure distances from 0.05 m to 80.00 m with an accuracy of 0.02 m. The generic anchor and the tag were powered by a Varta power bank (type 57962), while a USB cable powered the master anchor from the PC. Through that USB cable it was possible to communicate serially and save the distances estimated by the two anchors. As seen previously, the free outdoor LOS was chosen as the reference environment for the offset and standard deviation values. To take other scenarios into account as well, the channel impulse response (CIR) [21] provided by the UWB modules is checked each time: in case of human obstruction, this parameter decreases, and the distance information is not updated until a reasonable CIR value is restored.

The Jacobian matrix J_dist with respect to the distances is:

$$J_{\mathrm{dist}} = \begin{pmatrix} -\dfrac{d_1}{d_3} & \dfrac{d_2}{d_3} & \dfrac{a_2}{2 d_3^2} \\[2ex] \dfrac{4 d_1 - \dfrac{4 d_1 a_2}{d_3^2}}{a_1} & \dfrac{4 d_2 + \dfrac{4 d_2 a_2}{d_3^2}}{a_1} & -\dfrac{2 d_3 - \dfrac{2 a_2^2}{d_3^3}}{a_1} \end{pmatrix} , \qquad (8)$$

with:

$$a_1 = 4 \sqrt{2 d_1^2 + 2 d_2^2 - d_3^2 - \frac{a_2^2}{d_3^2}} , \qquad a_2 = d_1^2 - d_2^2 .$$

Equation (6) becomes:

$$C_{\mathrm{uwb}} = \begin{pmatrix} \dfrac{\sigma_3^2 a_6^2}{4 d_3^4} + \dfrac{d_1^2 \sigma_1^2}{d_3^2} + \dfrac{d_2^2 \sigma_2^2}{d_3^2} & a_1 \\[2ex] a_1 & \dfrac{\sigma_3^2 a_3^2}{16 a_2} + \dfrac{\sigma_1^2 a_5^2}{16 a_2} + \dfrac{\sigma_2^2 a_4^2}{16 a_2} \end{pmatrix} , \qquad (9)$$

with:

$$a_1 = \frac{d_2 \sigma_2^2 a_4}{4 d_3 \sqrt{a_2}} - \frac{d_1 \sigma_1^2 a_5}{4 d_3 \sqrt{a_2}} - \frac{\sigma_3^2 a_6 a_3}{8 d_3^2 \sqrt{a_2}} ,$$
$$a_2 = 2 d_1^2 + 2 d_2^2 - d_3^2 - \frac{a_6^2}{d_3^2} , \qquad a_3 = 2 d_3 - \frac{2 a_6^2}{d_3^3} ,$$
$$a_4 = 4 d_2 + \frac{4 d_2 a_6}{d_3^2} , \qquad a_5 = 4 d_1 - \frac{4 d_1 a_6}{d_3^2} , \qquad a_6 = d_1^2 - d_2^2 .$$

Noting that the elements of equation (9) have the measure d3 in the denominator, we can say that the further apart the anchors are from each other, the more accurate the position measurement is. Another note is that the off-diagonal terms are zero if and only if the measures d1 and d2 are equal (considering also σ1 = σ2). In this case, equation (9) becomes:

$$C_{\mathrm{uwb}} = \begin{pmatrix} \dfrac{2 d_1^2 \sigma_1^2}{d_3^2} & 0 \\[1.5ex] 0 & \dfrac{2 d_1^2 \sigma_1^2}{4 d_1^2 - d_3^2} + \dfrac{d_3^2 \sigma_3^2}{4\,(4 d_1^2 - d_3^2)} \end{pmatrix} . \qquad (10)$$

Under these conditions, Figure 6 shows the behaviour of the standard deviation of each eigenvalue in the two principal directions x and z with respect to the z distance of the tag from the anchors, i.e. the square roots of C_uwb(1, 1) and C_uwb(2, 2), respectively.

Figure 6. Standard deviation (std) of each eigenvalue in the two principal directions x and z for the UWB system from 0 m to 10 m, with the off-diagonal terms of the covariance matrix C_uwb equal to 0.
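The shape transition that Figure 6 illustrates can be reproduced directly from equation (10); the 0.6 m anchor spacing below is an illustrative assumption:

```python
# Sketch of the diagonal terms of equation (10) as the tag moves away
# along z, with d1 = d2 = sqrt(z^2 + (d3/2)^2). The anchor spacing is an
# illustrative assumption; the sigmas are the calibrated values.
import numpy as np

def cuwb_stds(z, d3=0.6, s1=0.05, s3=0.01):
    d1sq = z**2 + (d3 / 2)**2
    cxx = 2 * d1sq * s1**2 / d3**2                                     # x (angular)
    czz = (2 * d1sq * s1**2 + d3**2 * s3**2 / 4) / (4 * d1sq - d3**2)  # z (radial)
    return np.sqrt(cxx), np.sqrt(czz)

for z in (0.1, 0.3, 1.0, 5.0):
    sx, sz = cuwb_stds(z)
    print(f"z = {z:4.1f} m: std_x = {sx:.3f} m, std_z = {sz:.3f} m")
# Near the anchors std_z dominates (radially stretched ellipse); far away
# std_x dominates (angular stretch), matching figures 6 and 7.
```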
As can be seen from the behaviour of the eigenvalues in Figure 6 and the shapes of the covariances in Figure 7, at the beginning (z = 0 m) the covariance is a tight ellipse stretched in the radial direction (C_uwb(1, 1) < C_uwb(2, 2), tag A in Figure 7); then, when the distances d1 and d2 are orthogonal to each other, it becomes a perfect circle (C_uwb(1, 1) = C_uwb(2, 2), tag B in Figure 7); lastly, at higher z distances, it becomes stretched in the angular direction (C_uwb(1, 1) > C_uwb(2, 2), tag C in Figure 7). If d1 and d2 are different (tag D in Figure 7), the quadratic approximation of the probability ellipse is rotated.

Figure 7. UWB system uncertainty ellipses for different tag positions.

4.2. Vision system
The model used to study the variance of the depth coordinate z of the selected camera is a pinhole model. The two IR sensors of the camera are used to evaluate the depth through a disparity matching algorithm, while an RGB sensor is used to produce the 2D image; all of them are coplanar and aligned. The common reference frame of the two IR sensors is set at their focal length and translated by overlapping the reference frame of the right sensor onto that of the left sensor by an amount equal to the baseline b (Figure 8). The charge-coupled device (CCD) resolution of each sensor is 848 pixels in columns (C) and 480 pixels in rows (R), for a total of 848 × 480 (C × R) pixels.

Figure 8. Stereo pinhole model.

The expression of the depth coordinate z is:

$$z = \frac{f_c \cdot b}{\lvert c_{Q1} - c_{Q2} \rvert} = \frac{f_c \cdot b}{d} , \qquad (11)$$

where f_c is the focal length in the x direction, b the baseline and d the disparity (Figure 8). From equation (11) it is possible to evaluate the error in the depth estimation of the model with respect to the disparity d, assuming all the other parameters as known and constant values:

$$\sigma_z = \frac{\partial z}{\partial d}\, \sigma_d = \frac{b \cdot f_c}{d^2}\, \sigma_d . \qquad (12)$$

Equation (12) can be rewritten, for better understanding, as a function of the z distance by substituting the value of the disparity d with a formulation obtainable from equation (11):

$$\sigma_z = \frac{z^2}{b \cdot f_c}\, \sigma_d , \qquad (13)$$

where the focal lengths in pixels are:

$$f_c = \frac{0.5\, C}{\tan(HFOV/2)} , \qquad (14)$$

$$f_r = \frac{0.5\, R}{\tan(VFOV/2)} . \qquad (15)$$

HFOV is the horizontal field of view and VFOV the vertical field of view; in our case, they are 87° and 58°, respectively. By varying the resolution, the focal length changes, and so does the distance uncertainty.
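A sketch of equations (13)-(15) for the D455 at 848 × 480 follows. The 95 mm baseline is taken from the D455 datasheet (an assumption here, since the text does not state it), and σd = 0.4 px anticipates the calibrated value found below:

```python
# Depth uncertainty versus distance from equations (13)-(15) for the
# D455 at 848 x 480 (HFOV = 87 deg). The 95 mm baseline is a datasheet
# assumption; sigma_d = 0.4 px is the calibrated value found below.
import math

C_PX, HFOV_DEG = 848, 87.0
f_c = 0.5 * C_PX / math.tan(math.radians(HFOV_DEG / 2))   # eq. (14), ~447 px

def sigma_z(z_m, sigma_d_px=0.4, baseline_m=0.095):
    """Equation (13): depth standard deviation at distance z."""
    return z_m**2 * sigma_d_px / (baseline_m * f_c)

for z in (1, 2, 5, 10):
    print(f"z = {z:2d} m -> sigma_z = {100 * sigma_z(z):5.1f} cm")
```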
to evaluate the coordinates in the x and y dimensions we use the brown-conrady distortion calibration model [22] with the intrinsic constant coefficients (k1, k2, k3, p1, p2) of the specific selected camera:

$$u_x = \hat{x} f + 2 p_1 \hat{x} \hat{y} + p_2 \left( r + 2 \hat{x}^2 \right) , \quad (16)$$

$$u_y = \hat{y} f + 2 p_2 \hat{x} \hat{y} + p_1 \left( r + 2 \hat{y}^2 \right) , \quad (17)$$

with:
$$\hat{x} = \frac{c - C/2}{f_c} = \frac{x_p}{f_c} \,, \qquad \hat{y} = \frac{r - R/2}{f_r} = \frac{y_p}{f_r} \,, \qquad r = \hat{x}^2 + \hat{y}^2 \,, \qquad f = 1 + k_1 r + k_2 r^2 + k_3 r^3 \,.$$

the new expressions of the x and y positions become:

$$x = u_x \, z \,, \quad (18)$$

$$y = u_y \, z \,, \quad (19)$$

as for the uwb system, we are interested only in the x and z coordinates for the localization of the operator in space. the covariance matrix of the coordinates of the operator with the camera system is:

$$C_{\text{cam}} = J_{\text{cam}} \, C_\sigma \, J_{\text{cam}}^T \,, \quad (20)$$

where:

$$C_\sigma = \begin{pmatrix} \sigma_c^2 & 0 & 0 \\ 0 & \sigma_r^2 & 0 \\ 0 & 0 & \sigma_z^2 \end{pmatrix} , \quad (21)$$

$$J_{\text{cam}}^T = \begin{pmatrix} z \left( \dfrac{a_1}{f_c} + \dfrac{6 p_2 x_p}{f_c^2} + \dfrac{x_p}{f_c} \left( \dfrac{2 k_1 x_p}{f_c^2} + \dfrac{4 k_2 x_p a_2}{f_c^2} + \dfrac{6 k_3 x_p a_2^2}{f_c^2} \right) + \dfrac{2 p_1 y_p}{f_c f_r} \right) & 0 \\[8pt] z \left( \dfrac{2 p_2 y_p}{f_r^2} + \dfrac{x_p}{f_c} \left( \dfrac{2 k_1 y_p}{f_r^2} + \dfrac{4 k_2 y_p a_2}{f_r^2} + \dfrac{6 k_3 y_p a_2^2}{f_r^2} \right) + \dfrac{2 p_1 x_p}{f_c f_r} \right) & 0 \\[8pt] p_2 \left( \dfrac{3 x_p^2}{f_c^2} + a_3 \right) + \dfrac{x_p a_1}{f_c} + \dfrac{2 p_1 x_p y_p}{f_c f_r} & 1 \end{pmatrix} , \quad (22)$$

with:
$$a_1 = k_1 a_2 + k_2 a_2^2 + k_3 a_2^3 + 1 \,, \qquad a_2 = \frac{x_p^2}{f_c^2} + a_3 \,, \qquad a_3 = \frac{y_p^2}{f_r^2} \,.$$

ccam depends on the focal length f, on the baseline b, and on the standard deviations of the disparity σd and of the pixels σc, σr. once the depth frame and the rgb one are aligned, it is possible to estimate the distance of any pixel and thus project the operator's skeleton in space. sometimes key-points can have an incorrect distance value, i.e. z-coordinate, especially when the pose is very close to the viewing system; this is due to the distortion caused by the camera lenses and to imperfect image realignment. to overcome this problem, we extract an average position of the key-points with the median: if a key-point is projected too far from the others, or too near, it has no impact on the final operator position. the off-diagonal terms of equation (20) are zero if and only if xp = 0, yp = 0. in this case equation (20) becomes:

figure 9. (a) camera calibration test with chessboard; (b) standard deviation results on z distance with stereo camera d455 for resolution c of 848 pixels.

$$C_{\text{cam}} = \begin{pmatrix} \dfrac{z^2 \sigma_c^2}{f_c^2} & 0 \\[6pt] 0 & \sigma_z^2 \end{pmatrix} , \quad (23)$$

figure 10 shows the standard deviations of each eigenvalue in the two principal directions x and z of equation (23), with the covariance shapes in figure 11, as previously done for the uwb system. as can be seen in figure 10, after 30 cm the uncertainty along the radial z direction is always greater than the angular one along x (ccam(1, 1) < ccam(2, 2)), i.e. the ellipse is stretched in the radial direction.

5. sensor fusion

sensor fusion was performed between the position estimated by the uwb system and the position estimated by the vision system, to reduce the uncertainty and improve the estimation of the operator's position in space. figure 12 shows two examples, in two different tag positions, where the uncertainties of the position estimated through the uwb system are more significant along the angular direction (x coordinate), centring the reference system on the origin of the camera. notice that, in the case of the vision system, the uncertainties are instead predominant along the radial direction (z coordinate). by fusing the information with bayes' theorem [23], it is possible to reduce the uncertainties in both directions.
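a minimal sketch of this gaussian fusion, formalized in equations (24) and (25) below; the position vectors and covariance values are illustrative numbers shaped like the ellipses of figure 12 (uwb poor along x, camera poor along z), not measured data.

```python
import numpy as np

def fuse(p_uwb, C_uwb, p_cam, C_cam):
    """product of two gaussian estimates (bayes' theorem), eqs. (24)-(25)."""
    I_uwb, I_cam = np.linalg.inv(C_uwb), np.linalg.inv(C_cam)
    C_fused = np.linalg.inv(I_uwb + I_cam)
    p_fused = C_fused @ (I_uwb @ p_uwb + I_cam @ p_cam)
    return p_fused, C_fused

# illustrative estimates of the operator position (x, z) in metres
p_uwb, C_uwb = np.array([0.05, 4.1]), np.diag([0.20**2, 0.03**2])
p_cam, C_cam = np.array([0.00, 3.9]), np.diag([0.02**2, 0.15**2])

p, C = fuse(p_uwb, C_uwb, p_cam, C_cam)
print(p, np.sqrt(np.diag(C)))  # fused std devs shrink in both directions
```

because each sensor is accurate exactly where the other is weak, the fused standard deviations end up close to the smaller of the two in each direction.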
we can summarize the fused information shown in figure 12 through the following expressions:

$$C_{\text{fused}} = \left[ C_{\text{uwb}}^{-1} + C_{\text{cam}}^{-1} \right]^{-1} , \quad (24)$$

$$P_{\text{fused}} = C_{\text{fused}} \left[ C_{\text{uwb}}^{-1} P_{\text{uwb}} + C_{\text{cam}}^{-1} P_{\text{cam}} \right] , \quad (25)$$

where puwb and pcam are the estimated mean positions of the uwb system and of the camera system respectively, and cuwb and ccam are the covariance matrices of the estimated positions of the uwb system and of the camera system respectively.

6. conclusions

in this work, an innovative and robust method to identify and track an operator from a mobile robot in real time was developed. in particular, the operator can be identified through a convolutional neural network tool and tracked through a purpose-designed localization method that fuses the information from low-cost sensors such as uwb transceivers and a depth camera; it was implemented to locate the operator's position with less uncertainty. the test campaign shows that the uwb system has a higher uncertainty in the angular direction, contrary to the camera, where the uncertainty is higher in the radial direction. specifically, cuwb is affected by the distances measured between the tag and the two anchors (d1, d2) and between the anchors themselves (d3), with their corresponding uncertainties. ccam instead depends on the focal length f, on the baseline b, and on the standard deviations of the disparity σd and of the pixels σc, σr. our solution makes the final system robust and more precise due to the complementarity of the information between the covariance matrices of the uwb system and of the vision system. to obtain a better result for the real application, the uwb transceivers were calibrated through a test campaign that provided the behaviour of the systematic offset in the measurements and their standard deviations; offsets change for each specific uwb device and increase with the z distance. the standard deviation of the distances between the devices was calculated as 0.05 m in the outdoor scenario with a free line of sight. another test campaign, this time on the vision system, was carried out to evaluate the theoretical model of the standard deviation σz as a function of the z distance and the value of the camera disparity standard deviation σd; the σd calculated for our d455 camera is 0.4 pixels. moreover, the average uncertainty of each pixel σc, σr was calculated as three pixels through heat-map analysis of the convolutional neural network used.

figure 10. standard deviation (std) of each eigenvalue in the two principal directions x and z for the camera system from 0 m to 10 m, with the off-diagonal terms of the covariance matrix ccam equal to 0.
figure 11. camera system uncertainty ellipses for different tag positions.
figure 12. (a) two examples of uncertainty ellipses (95 % with k = 2.4478 [24]) in two different tag positions (x = 0 m, z = 4 m; x = 2 m, z = 6 m) from camera and uwb system; (b) zoom of the results.

acknowledgement

this work was supported by the italian ministry for university and research (mur) under the program "dipartimenti di eccellenza (2018-2022)".

references
[1] m. gupta, s. kumar, l. behera, v. k. subramanian, a novel vision-based tracking algorithm for a human-following mobile robot, ieee transactions on systems, man, and cybernetics: systems 7 (2016), pp. 1415-1427. doi: 10.1109/tsmc.2016.2616343
[2] s.-o. lee, m. hwang-bo, b.-j. you, s.-r. oh, y.-j. cho, vision based mobile robot control for target tracking, ifac proceedings volumes 34(4) (2001), pp. 75-80. doi: 10.1016/s1474-6670(17)34276-3
[3] g.
xing, s. tian, h. sun, w. liu, h. liu, people-following system design for mobile robots using kinect sensor, 25th chinese control and decision conference (ccdc), ieee, guiyang, china, 25-27 may 2013, pp. 3190-3194. doi: 10.1109/ccdc.2013.6561495
[4] s. a. ahmed, a. v. topalov, n. g. shakev, v. l. popov, model-free detection and following of moving objects by an omnidirectional mobile robot using 2d range data, ifac-papersonline 51(22) (2018), pp. 226-231. doi: 10.1016/j.ifacol.2018.11.546
[5] y. nagumo, a. ohya, human following behavior of an autonomous mobile robot using light-emitting device, in proceedings 10th ieee international workshop on robot and human interactive communication, roman 2001 (cat. no. 01th8591), ieee, bordeaux, paris, france, 18-21 sept. 2001, pp. 225-230. doi: 10.1109/roman.2001.981906
[6] t. feng, y. yu, l. wu, y. bai, z. xiao, z. lu, a human-tracking robot using ultra wideband technology, ieee access 6 (2018), pp. 42541-42550. doi: 10.1109/access.2018.2859754
[7] t. g. kim, d.-j. seo, k.-s. joo, a following system for a specific object using a uwb system, in 2018 18th international conference on control, automation and systems (iccas), ieee, pyeongchang, korea (south), 17-20 oct. 2018, pp. 958-960.
[8] d.-j. seo, t. g. kim, s. w. noh, h. h. seo, object following method for a differential type mobile robot based on ultra wide band distance sensor system, 17th international conference on control, automation and systems (iccas), ieee, jeju, korea (south), 18-21 oct. 2017, pp. 736-738. doi: 10.23919/iccas.2017.8204325
[9] l. santoro, d. brunelli, d. fontanelli, on-line optimal ranging sensor deployment for robotic exploration, ieee sensors journal (2021). doi: 10.1109/jsen.2021.3120889
[10] l. santoro, m. nardello, d. brunelli, d. fontanelli, scale up to infinity: the uwb indoor global positioning system, 2021 ieee international symposium on robotic and sensors environments (rose), 2021, pp. 1-8. doi: 10.1109/rose52750.2021.9611770
[11] g. ding, h. lu, j. bai, x. qin, development of a high precision uwb/vision-based agv and control system, 5th international conference on control and robotics engineering (iccre), osaka, japan, 24-26 april 2020, pp. 99-103. doi: 10.1109/iccre49379.2020.9096456
[12] h. xu, l. wang, y. zhang, k. qiu, s. shen, decentralized visual-inertial-uwb fusion for relative state estimation of aerial swarm, ieee international conference on robotics and automation (icra), paris, france, 31 may-31 aug. 2020, pp. 8776-8782. doi: 10.1109/icra40945.2020.9196944
[13] f. liu, j. zhang, j. wang, h. han, d. yang, an uwb/vision fusion scheme for determining pedestrians' indoor location, sensors 20(4) (2020), p. 1139. doi: 10.3390/s20041139
[14] decawave dwm1001. online [accessed 06 december 2021] https://www.decawave.com/product/dwm1001-development-board/
[15] m. kwak, j. chong, a new double two-way ranging algorithm for ranging system, 2nd ieee international conference on network infrastructure and digital content, ieee, beijing, china, 24-26 sept. 2010, pp. 470-473. doi: 10.1109/icnidc.2010.5657814
[16] intel realsense camera d455. online [accessed: december 2021] https://www.intelrealsense.com/depth-camera-d455/
[17] opencv. online [accessed: december 2021] https://opencv.org/
[18] z. cao, g. hidalgo martinez, t. simon, s. wei, y. a. sheikh, openpose: realtime multi-person 2d pose estimation using part affinity fields, ieee transactions on pattern analysis and machine intelligence 43(1) (2019), pp. 172-186.
doi: 10.1109/tpami.2019.2929257
[19] a. g. howard, m. zhu, b. chen, d. kalenichenko, w. wang, t. weyand, m. andreetto, h. adam, mobilenets: efficient convolutional neural networks for mobile vision applications. online [accessed 06 december 2021] https://arxiv.org/abs/1704.04861
[20] fervi laser distance meter. online [accessed 06 december 2021] https://www.fervi.com/ita/strumenti-di-misura/misuratori-analogici-e-digitali/misuratore-di-distanze/misuratore-di-distanza-laser-pr-8240.htm
[21] c. jiang, s. chen, y. chen, d. liu, y. bo, an uwb channel impulse response de-noising method for nlos/los classification boosting, ieee communications letters 24(11) (2020), pp. 2513-2517. doi: 10.1109/lcomm.2020.3009659
[22] c. b. duane, close-range camera calibration, photogramm. eng. 37(8) (1971), pp. 855-866.
[23] w. elmenreich, an introduction to sensor fusion, vienna university of technology, austria 502 (2002), pp. 1-28.
[24] r. c. smith, p. cheeseman, on the representation and estimation of spatial uncertainty, the international journal of robotics research 5(4) (1986), pp. 56-68. doi: 10.1177/027836498600500404

led-to-led wireless communication between divers

acta imeko
issn: 2221-870x
december 2021, volume 10, number 4, 80 - 89

fabio leccese1, giuseppe schirripa spagnolo2
1 dipartimento di scienze, università degli studi "roma tre", via della vasca navale n. 84, 00146 roma, italy
2 dipartimento di matematica e fisica, università degli studi "roma tre", via della vasca navale n. 84, 00146 roma, italy

section: research paper
keywords: underwater communication; visible light communications; optical wireless communication; bidirectional communication; led; photo detector
citation: fabio leccese, giuseppe schirripa spagnolo, led-to-led wireless communication between divers, acta imeko, vol. 10, no.
4, article 15, december 2021, identifier: imeko-acta-10 (2021)-04-15
section editor: francesco lamonaca, university of calabria, italy
received october 4, 2020; in final form december 6, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: fabio leccese, e-mail: fabio.leccese@uniroma3.it

abstract: for military divers, having a robust, secure, and undetectable wireless communication system available is a fundamental element. wireless intercoms using acoustic waves are currently used. these systems, even if reliable, have the defect of being easily identifiable and detectable. visible light can pass through sea water; therefore, light can be used to develop short-range wireless communication systems. to realize secure close-range underwater wireless communication, underwater optical wireless communication (uowc) can be a valid alternative to acoustic wireless communication. uowc is not a new idea, but the problem of the presence of sunlight and the possibility of using near-ultraviolet radiation (near-uv) has not been adequately addressed in the literature yet. in military applications, the possibility of using invisible optical radiation can be of great interest. in this paper, a feasibility study is carried out to demonstrate that uowc can be performed using near-ultraviolet radiation. the proposed system can be useful for wireless voice communications between military divers as well as amateur divers.

1. introduction

currently, the use of wireless communications is very common in a wide range of terrestrial devices. in the underwater world, wireless information transfer is of great interest to the military: it plays an important role in military raids carried out by a team of divers, where, for safety and to coordinate actions, a secure and reliable bidirectional communication system is useful. nowadays, underwater wireless communications are implemented almost exclusively via acoustic waves due to their relatively low attenuation [1], [2]. communication by means of light waves (visible light communication, vlc) is a technology that employs light spectra from 400 to 700 nm as data carriers. vlc techniques transmit data wirelessly by pulsing visible light. this new technology, called li-fi, can replace the wi-fi connection based on radio-frequency waves [3]-[6].

beer's law is usually utilized to correlate the absorption of diffuse light to the properties of the medium through which the light is traveling. from a mathematical point of view, we can write [7], [8]:

$$P(\lambda, r) = P_0 \, \mathrm{e}^{-K_d(\lambda)\, r} \,, \quad (1)$$

where $P_0$ is the initial transmitted power and $P(\lambda, r)$ is the residual power after the light beam with wavelength $\lambda$ has traveled the distance $r$ through a medium with diffuse attenuation coefficient $K_d(\lambda)$.

figure 1 shows the attenuation coefficient of three typical ocean water types i, ii and iii and five coastal water types 1, 3, 5, 7 and 9, where lower numbers correspond to clearer waters; the classification corresponds to the jerlov water types [9]-[11]. light with longer wavelengths is absorbed more quickly than light with shorter wavelengths. because of this, higher-energy light with short wavelengths, such as blue-green, is able to penetrate more deeply. in the open ocean, below 100 m depth, only blue-green radiation is present [12]; the blue component of sunlight can even reach depths of up to 1000 m, although in quantities so low that photosynthesis is not possible [12]. figure 1 shows that the minimal absorption is between 460 nm and 580 nm, depending on the type of water.
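as a numerical illustration of equation (1), the following sketch evaluates the residual power over a short underwater path; the $K_d$ values used are rough readings of figure 1, not tabulated data.

```python
import numpy as np

def residual_power(p0_mw, k_d, r_m):
    """beer's law, eq. (1): power left after r metres of water
    with diffuse attenuation coefficient k_d [1/m]."""
    return p0_mw * np.exp(-k_d * r_m)

# k_d values below are rough readings of figure 1, near the 460-580 nm
# minimum, for a clear ocean type and a turbid coastal type respectively
for k_d, label in ((0.03, "clear ocean"), (0.4, "coastal")):
    p = residual_power(100.0, k_d, 10.0)   # 100 mW source, 10 m path
    print(f"{label}: {p:.1f} mW left after 10 m")
```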
therefore, vlc technology is extensively studied as an alternative solution for short-range underwater communication links [13]-[28]. actually, underwater optical wireless communication (uowc) is not a new idea: after the pioneering works of the 1980s [29]-[31], in 2009 doniec et al. [32] developed a 5-meter underwater wireless optical communication link (called aquaoptical) with a 1 mbps data rate. later, in 2015, rust et al. [33] implemented an uowc system for remotely operated vehicles (rovs) used for the inspection of nuclear power plants. in addition, some systems are currently commercially available [34]-[37]. unfortunately, the performance of uowc is currently limited to short range [38]. however, in some specific situations, short-range communication is more than enough; there are circumstances where short-range communication is needed without the need for large bandwidth. a typical example is the communication between divers.

for communication between divers, the most common forms are hand signals [39] and underwater writing slates [40], [41]. figure 2 shows examples of standard diver hand signals and a dive slate. the dialect of diver hand signals includes only plain and precise gestures that are easily identifiable; this allows only simple communications and requires extensive memorization. on the other hand, slates do not allow communication in real time: it takes time to write and to attract the attention of the underwater partners. recently, full face diving masks with snorkels have been introduced that allow the diver to breathe and speak normally inside the mask [42], [43]. for this type of mask, reliable underwater intercoms have been developed to allow divers to talk to each other underwater [44], [45]: a transducer attached to the diver's face mask converts the voice into an ultrasonic signal, and each diver of the team has an ultrasonic receiver, which accepts the signal and converts it back to a sound that the divers can hear, enabling communication. this type of communication system can be used by amateur or professional divers. figure 3 shows two commercially available underwater intercom systems.

during military raids with divers, it is very important that the various members of the team can communicate with each other. unfortunately, hand signs do not allow complex information to be communicated, and the use of a dive slate can be incompatible with the timing of the action; audio communication is essential for the complex communications needed in military operations. another key requirement of military communications is that they must be secure and undetectable. unfortunately, the acoustic waves that travel in water are easily detectable; therefore, their use is not convenient during critical military missions. in this scenario, uowc is a good alternative to acoustic communication [46]: it has the advantage that it cannot be easily intercepted. this specific application does not require long-range and high-bandwidth communications; therefore, the usable systems can be simple, small, lightweight and low-power. figure 4 shows a typical uowc between divers.
the information could be transmitted through a special torch and captured by sensors positioned on the diving suit. unfortunately, communications with visible light suffer from the noise generated by the solar background or by artificial light sources, and special precautions must be taken to minimize this noise [47]. it would therefore be convenient to implement uowc systems that use optical radiation different from that normally present in water.

figure 1. diffuse attenuation coefficient $K_d(\lambda)$ for several oceanic and coastal water types according to the jerlov classification. curves obtained from the data present in [9]-[11].
figure 2. examples of standard diver hand signals and of a dive slate.
figure 3. examples of commercially available underwater intercom systems.
figure 4. optical communication between divers.

in addition, during communications between military divers it would be useful to use light that is not visible to normal video surveillance systems. the main purpose of this paper is to verify the feasibility of a communication system that can be used by military divers. the system must be simple, robust, energy-efficient, not affected by ambient light and, above all, difficult to detect and/or intercept by video surveillance systems sensitive to visible radiation. to obtain these performances it is necessary to avoid the use of blue-green radiation, which is present in the solar radiation that penetrates into the water; in addition, visible radiation, which is easily detectable at night by underwater video surveillance systems, must be avoided.

in this paper, an underwater near-ultraviolet light communication is proposed. the proposed system uses as emitter (tx) a uv led with peak wavelength λ = 385 nm and half width δλ = 15 nm; a photodiode, made with an led like the one used as transmitter, is used as receiver (rx). this system has intrinsically low sensitivity to ambient light and produces an invisible communication channel. since there are video surveillance systems with good sensitivity in the blue-green spectral band [48], the use of radiation in the near uv allows a relatively good penetration of the radiation into the water and, at the same time, invisibility to these video surveillance systems. the system works well in short-range communications where large bandwidth is not required; for example, if we are only interested in speech transmission, a data rate of 32 kbps is generally acceptable. with this type of communication, it is possible to create simple, small, light, robust and energy-efficient systems.

2. underwater communications by uv-a radiation

a part of the solar radiation spectrum overlaps with the radiation commonly used for visible light communication (vlc) [49]. therefore, it is very difficult to attenuate the effects of sunlight without loss of useful signal: in the presence of sunlight, the receivers see very high white noise and can often go into saturation. to solve the problems deriving from solar radiation (and, in general, from the ambient light present), it is possible to use near-ultraviolet radiation for the communication channel. generally, solar intensity decreases with depth. by examining how light is absorbed in water (see figure 1), we see that the best wavelengths to use in uowc are 450 nm - 500 nm for clear waters and 570 nm - 600 nm for coastal waters; this same attenuation also applies to the solar spectrum [50].
in any case, at a depth of a few meters, solar radiation in the near ultraviolet is practically absent. furthermore, in relatively clear waters this radiation is comparatively little attenuated, especially in ocean waters and less so in coastal waters (according to figure 1). for these reasons, submarine communication systems using uv-a band communication channels are extremely interesting. we must also observe two other important characteristics of optical communication in the near ultraviolet. first, this communication channel is difficult to detect and intercept, a particularly attractive feature for military applications. second, the use of ultraviolet radiation allows wireless connections without requiring perfect alignment between transmitter and receiver (nlos uv scattering communication) [51], [52], a very useful characteristic for wireless transmission between moving objects.

3. led usable as light detector

in addition to emitting light, leds can also be employed as light sensors/detectors [53]-[60]; figure 5 schematically shows this application. in addition, the led can also be used as a temperature sensor [61]. to verify the possibility of underwater communication through uv radiation, we chose to use a reverse-polarized led as detector. the choice was made to have an inexpensive photodetector that is not very sensitive to the light radiation present in the environment, without the need for filters that cut visible radiation. leds can also be used as avalanche photodiodes (apd) [62], [63]. unlike normal photodiodes, leds detect a narrow band of wavelengths: they are spectrally selective detectors. in contrast, normal photodiodes have a wide spectral response and require costly filters to detect a specific wavelength. both leds and photodiodes have a sensitivity that is stable over time, whereas filters have a limited life.

in a p-n diode, inside the junction, there are free charges generated by thermal energy. when a p-n junction diode is reverse biased, these charges are accelerated; this movement of charges produces the reverse current of the diode. if the reverse polarization potential is increased, the free charges can acquire enough energy to ionize some atoms of the crystal lattice. this ionization produces additional free charges, which are in turn accelerated by the polarization potential. this creates an avalanche effect, producing a large reverse current (breakdown current). the polarization voltage at which this arises is called the zener potential [64].

when an led is used as a light detector, the photocurrents generated are generally linear but very small; in uowc applications, we have currents in the range of nanoamperes. therefore, for their correct subsequent signal processing, it is necessary to transform the detected current into a suitable voltage. for this operation, transimpedance amplifiers are commonly used [65]. the amplitude of the signal received by the led, and subsequently amplified by the transimpedance amplifier, depends on many external parameters; for this reason, the transmitted optical signal must be suitably digitized and modulated. our system uses a modulation format based on pulse width modulation (pwm).

4. uv led-to-led communication system

in underwater optical wireless transmission, the signal reaching the receiver has low intensity. for this reason, extensive studies are underway on very sensitive detectors such as avalanche photodiodes (apd) or single photon avalanche diodes [66]-[75].
when very responsive photosensors are used, however, the presence of ambient light becomes a very important problem [76]. with an led-to-led transmission system that uses uv leds, it is possible to implement an underwater communication system with invisible radiation that is not very sensitive to ambient light.

figure 5. basic led used as light emitter and receiver.

led-to-led communication systems are characterized by low cost, low complexity and, above all, low energy consumption. on the other hand, they can be used only when the exchange of messages occurs at a small distance without the need for large bandwidth [76], [77]. as already mentioned, ultraviolet radiation is practically absent underwater. therefore, using a uv led as light emitter and another uv led, operated as an apd, as receiver allows a system that is not very sensitive to environmental light; an led can detect radiation with a wavelength slightly shorter than or equal to that emitted (internal photoelectric effect) [56], [57], [78]. the same type of led can be used both as receiver and as transmitter, which is useful in half-duplex communication systems: the same led can act as transmitter or as receiver. in this work, we used a bivar uv5tz-385-30 led as transmitter and receiver [79]. this led has a viewing angle of 30° and an aperture area of 25·10⁻⁶ m². the seawater light transmission model is shown in figure 6. the optical power on the receiver can be written as [80]-[83]:

$$P_{rx} = P_{tx}\,\eta_{tx}\,\eta_{rx}\,\exp\!\left[-\frac{K_d(\lambda)\, z}{\cos\theta}\right] \frac{A_{rx}\cos\theta}{2\pi\, z^{2}\left(1-\cos\theta_0\right)} \,, \quad (2)$$

where $P_{tx}$ is the transmitted power, $\eta_{tx}$ and $\eta_{rx}$ are the optical efficiencies of the tx and the rx respectively, $K_d(\lambda)$ is the attenuation coefficient, $z$ is the perpendicular distance between the tx plane and the rx plane, $\theta_0$ is the tx beam divergence angle, $\theta$ is the angle between the perpendicular to the rx plane and the tx-rx trajectory, and $A_{rx}$ is the receiver aperture area. in our system, we experimentally verified that the received signal is correctly reconstructed if the misalignment is θ < 20°.

the transmitter led was driven with 25 ma by means of a pulse generator, while the current generated by the led used as receiver, reverse biased with a voltage of 15 v, was read through a transimpedance amplifier. two ultralow-noise precision high-speed op amps [84] were used to implement the transimpedance amplifier. the amplifier, as shown in figure 7, is made in two stages, in order to obtain a passband greater than 100 khz. the rx and tx leds, together with the relative control electronics, were inserted in a tank filled with real seawater (water taken from the tyrrhenian coast at anzio, italy). the leds were placed 50 cm apart, facing each other. figure 8 shows the experimental setup used for the tests. the experimental tests were carried out in the laboratory and outdoors in different configurations of ambient brightness: figure 8(a) shows the system working in the laboratory, figure 8(b) the system working outdoors in full sun. all the tests carried out confirmed that the system is practically insensitive to ambient light (both artificial and natural). the experimental setup was realized so as to obtain three different lengths of the optical channel; the different lengths of the optical path are obtained by means of mirrors, as shown in figure 9.

figure 6. seawater light transmission model.
figure 7. rx and tx led driver circuit.
figure 8. experimental setup used for the tests: (a) system working in laboratory; (b) system working outdoors.
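as a recap of the link budget in equation (2), the following sketch evaluates the received power fraction at the three test distances; the half viewing angle and aperture come from the uv5tz-385-30 data above, while the attenuation coefficient at 385 nm and the optical efficiencies are our assumptions.

```python
import numpy as np

def p_rx(p_tx, eta_tx, eta_rx, k_d, z, theta, theta0, a_rx):
    """received optical power from eq. (2); angles in radians."""
    geometry = a_rx * np.cos(theta) / (2.0 * np.pi * z**2 * (1.0 - np.cos(theta0)))
    return p_tx * eta_tx * eta_rx * np.exp(-k_d * z / np.cos(theta)) * geometry

# assumed values: k_d ~ 0.2 1/m at 385 nm and 0.9 optical efficiencies;
# theta0 = 15 deg is half of the led's 30 deg viewing angle
for z in (0.5, 1.5, 2.5):
    p = p_rx(p_tx=1.0, eta_tx=0.9, eta_rx=0.9, k_d=0.2,
             z=z, theta=0.0, theta0=np.radians(15.0), a_rx=25e-6)
    print(f"z = {z} m -> p_rx/p_tx = {p:.2e}")
```

the steep drop with z (inverse square law plus exponential attenuation) explains why the received currents are in the nanoampere range and why a two-stage transimpedance amplifier is needed.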
figure 10 shows the signal used to drive the tx led (cyan trace) and the corresponding output signal (vout) from the rx circuit (yellow trace). the implemented system uses only one led as transmitter and another as receiver; in our application, there are no restrictions on using an led cluster to transmit the information, and it can likewise be useful to use an led array to receive the signal. by using many diodes as tx, as well as rx, systems with better performance can be obtained. we used the simplest possible configuration, as the aim was to demonstrate the possibility of implementing an underwater led-to-led transmission using near-ultraviolet radiation.

5. system description

in any reliable communication system, data must be suitably modulated. modulation consists in varying one or more properties of a relatively high-frequency signal (the carrier). we used pwm modulation to implement our system; considering that high sound quality is not required for audio communication between divers, this type of modulation is more than enough to test the feasibility of wireless audio communication via a uv-a optical channel. obviously, better performing modulation schemes, more robust to noise, could be used. in pwm, the information signal (in our case the audio signal) modifies the time duration of the pulses of the carrier; this pulse signal turns the transmitter led on and off at the rate of the carrier frequency. in other words, with the pwm technique we change the duty cycle of a square wave of constant frequency and amplitude, as shown in figure 11. the average value of a pwm signal, period by period, can be expressed as:

$$V_{\text{average}} = \frac{1}{T}\left(\int_{0}^{D T} V_{\max}\,\mathrm{d}t + \int_{D T}^{T} V_{\min}\,\mathrm{d}t\right) . \quad (3)$$

if $V_{\min} = 0$, equation (3) simplifies to:

$$V_{\text{average}} = D \, V_{\max} \,. \quad (4)$$

equation (4) indicates that, if the amplitude of the carrier is constant along a period, the average value of the pwm signal is directly proportional to the duty cycle. if the duty cycle is proportional to the information to be transmitted, the information can be extracted through a simple averaging operation on the pwm signal; the average is easily obtained through suitable low-pass filtering.

to verify the real possibility of realizing an audio communication, we coupled a modulator and a demodulator to the led transmitter and to the led receiver respectively [85]-[87]. the block diagram of our audio modulator and led driver is shown in figure 12. this led driver has a restricted baud rate, mainly because of the limited switching speed of silicon devices: a maximum data rate of 100 kbps can be achieved with this driver. in any case, this transmission speed is more than enough to implement an excellent audio connection. the pwm is achieved by means of an ne555 timer [88] and an lt1011 comparator [89]. our circuit is powered at 6.5 v and produces a sawtooth waveform with a frequency of approximately 100 khz and a peak-to-peak voltage of about 4.3 v. the sawtooth waveform is applied to the non-inverting input of the comparator; the audio signal works as the reference voltage and is applied to the inverting input.
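the following sketch reproduces this modulation chain numerically (sawtooth carrier, comparator, low-pass recovery). it is an idealized model of the analog circuit, with the comparator polarity described in the text, so the recovered audio comes out inverted; sample rate and amplitudes are our choices, not circuit values.

```python
import numpy as np
from scipy import signal

FS = 2_000_000           # simulation sample rate [Hz]
F_CARRIER = 100_000      # sawtooth carrier frequency, as in the circuit [Hz]
F_TONE = 4_000           # test tone, as in the tests of section 5 [Hz]

t = np.arange(0, 0.005, 1.0 / FS)
saw = 0.5 * (1.0 + signal.sawtooth(2.0 * np.pi * F_CARRIER * t))  # 0..1 ramp
audio = 0.5 + 0.25 * np.sin(2.0 * np.pi * F_TONE * t)             # offset tone

# comparator: output high when the sawtooth exceeds the audio voltage,
# so the duty cycle decreases as the audio rises (inverted pwm)
pwm = (saw > audio).astype(float)

# demodulation per eq. (4): low-pass filtering recovers the (inverted) average
b, a = signal.butter(4, 7800, btype="low", fs=FS)
recovered = signal.filtfilt(b, a, pwm)
```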
to realize a duty cycle of 50 %, the audio signal is offset at the average voltage of the sawtooth waveform (1/3 of the power supply voltage). the comparator output is equal to the supply voltage when the sawtooth output is at a higher voltage than the audio signal.

figure 9. schematic of the water canal. the different optical paths are obtained with the help of mirrors.
figure 10. the cyan line represents the signal used to drive the tx led; the yellow line represents the corresponding rx output signal. (a) distance between transmitter and receiver 0.5 m; (b) distance between transmitter and receiver 1.5 m; (c) distance between transmitter and receiver 2.5 m.
figure 11. pwm signal: square wave with constant frequency and amplitude and variable duty cycle.

the led driver is based on the mic3289 integrated circuit [90], a pwm boost-switching regulator optimized for constant-current led driver applications. figure 13 shows the input and output signals of the pulse width modulation process. to recover the transmitted audio, a receiver unit, mainly composed of a photodetector and signal conditioning devices, is used. the photodiode receives the transmitted optical signal and converts it into an electrical signal; the electrical signal is then fed into the recovery circuits and the pwm demodulator. figure 14 shows the block diagram of the receiver unit. the transimpedance amplifier (shown in figure 7) is coupled with a low-pass filter; this filter, with a cut-off frequency of about 1 mhz, is used to reduce the high-frequency noise present at the transimpedance amplifier output. the output signal of the filter has an amplitude that depends on many external parameters, such as the distance and misalignment between transmitter and receiver. to obtain a correct reconstruction of the pwm signal, a comparator with variable threshold is used: by means of an integrator circuit, a voltage proportional to the average amplitude of the received signal is obtained and used as the threshold of the comparator. the integrator provides at its output an average signal that is about one third of the amplitude of the signal coming from the filter. in this way, the reconstruction of the pwm signal is practically independent of the amplitude of the signal received by the led used as photodiode. finally, the reconstructed pwm signal (a signal with constant amplitude and variable duty cycle) is demodulated by a low-pass filter with a cutoff frequency of 8 khz. figure 15 shows the schematic drawing of the receiving unit.

a low-pass filter is sufficient to decode the audio information contained in the pwm signal: by choosing a low-pass filter with an appropriate cut-off frequency, it is possible to remove the high-frequency components of the pwm signal while keeping only the low-frequency signal (the audio information). our demodulator is a 4th-order butterworth low-pass filter, consisting of two non-identical 2nd-order low-pass stages. the human ear can perceive sounds with frequencies between 20 hz and 20 khz; in any case, the human voice produces sounds that are confined within 8 khz. therefore, for verbal communications, a low-pass filter with a cut-off frequency around 8 khz is sufficient. the 4th-order butterworth filter we use has a cutoff frequency of approximately 7.8 khz, so it leaves the sound frequencies emitted by the human voice practically untouched.
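a compact way to check the demodulator design is to evaluate an ideal analog 4th-order butterworth prototype at the frequencies of interest; note that the ideal prototype attenuates somewhat more at 100 khz than the 83 db quoted below for the hardware filter, whose real component values differ from the ideal ones.

```python
import numpy as np
from scipy import signal

F_CUT = 7800.0   # cutoff frequency of the audio recovery filter [Hz]

# ideal analog 4th-order butterworth low-pass (the hardware realizes it
# as two non-identical 2nd-order stages; here we evaluate the prototype)
b, a = signal.butter(4, 2.0 * np.pi * F_CUT, btype="low", analog=True)

for f in (4e3, 7.8e3, 100e3):
    w, h = signal.freqs(b, a, worN=[2.0 * np.pi * f])
    print(f"{f / 1e3:6.1f} kHz : {20.0 * np.log10(abs(h[0])):7.1f} dB")
```

the printout shows an essentially flat response in the voice band, the expected -3 db at the cutoff, and a suppression of the 100 khz carrier by well over 80 db.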
on the other hand, at 100 khz, the filter has an attenuation of 83 db. this figure 12. audio modulator and led driver. figure 13. input and output signals in a pulse width modulation process. figure 14. input and output signals in a pulse width modulation process. figure 15. schematic of the optical receiver circuit. acta imeko | www.imeko.org december 2021 | volume 10 | number 4 | 86 indicates that the high frequency carrier is highly suppressed. figure 16 shows the frequency response of the filter used to retrieve the audio information from the pwm signal preliminary tests were conducted to verify the real applicability of our system in underwater wireless voice transmission. first, a 4 khz tone was used to check the entire modulation, optical transmission, optical detection, and demodulation system. in figure 17(a) trace 1 (yellow) is the sinusoidal 4 khz tone going into the transmitter. while trace 2 (blue) shows the pwm modulation used to drive the tx led current. in figure 17(b) trace 2 (blue) shows the reconstructed pwm signal in the receiving unit. finally, trace (1) (yellow) of figure 17(b) shows the reconstruction of the sinusoid at 4 khz. the reconstruction of the sinusoid is more than acceptable, even if there is the presence of “noise”, related to the harmonics of the carrier signal. subsequently, the system was tested by transmitting an audio speech signal. with 2.5 m between tx and rx the speech transmitted is perfectly understandable. figure 18 shows the audio tracks, spectrograms, and frequency analysis of the transmitted audio signal and of the message reconstructed downstream of the receiver. 6. conclusions underwater optical wireless communication (uowc) has recently emerged as a unique opportunity. many studies are present in the literature, however underwater optical communication via near-ultraviolet (uv-a) radiation is not addressed. in this paper, we have shown that in short range when broadband communication is not needed, it is possible to implement a uowc system that makes use of uv-a radiation. a uv underwater optical wireless audio transceiver was proposed for wireless communication in close range between divers. we have also verified that this system can be realized via an led-led connection. this makes the system simple, economical, light, compact and, above all, not energy-intensive. the study is mainly designed for military applications. in military applications, it is very important to have systems that cannot be intercepted and possibly not easily identifiable. in addition to have low energy consuming systems. for these reasons, we have developed a system that uses non-visible optical radiation and led-to-led transmission, which is energy efficient. however, considering the simplicity and cost-effectiveness of the developed system, it can easily be used for communications between amateur divers. in our study, we faced the problem of verifying the feasibility of transmitting a signal, with sufficient bandwidth to transmit an audio signal, at a distance of about 2.5/3 meters. further studies and tests in the real marine environment are needed. references [1] h. sari, b. woodward, digital underwater voice communications. in: r. s. h. istepanian, m. stojanovic (eds) underwater acoustic digital signal processing and communication systems. springer, boston, ma. 2002. doi: 10.1007/978-1-4757-3617-5_4 [2] j. w. giles, i. n. bankman, underwater optical communications systems. part 2: basic design considerations. 
proceedings of the milcom 2005 - ieee military communications conference, atlantic city, nj, usa, 17-20 october 2005. doi: 10.1109/milcom.2005.1605919
[3] h. haas, l. yin, y. wang, c. chen, what is lifi?, journal of lightwave technology 34(6) (2016), pp. 1533-1544. doi: 10.1109/jlt.2015.2510021
[4] m. uysal, c. capsoni, z. ghassemlooy, a. boucouvalas, e. udvary, optical wireless communications: an emerging technology, springer, new york, ny (usa), 2016. doi: 10.1007/978-3-319-30201-0
[5] m. z. chowdhury, m. shahjalal, m. hasan, y. m. jang, the role of optical wireless communication technologies in 5g/6g and iot solutions: prospects, directions, and challenges, applied sciences 9(20) (2019) art. no. 4367. doi: 10.3390/app9204367
[6] g. schirripa spagnolo, l. cozzella, f. leccese, s. sangiovanni, l. podestà, e. piuzzi, optical wireless communication and li-fi: a new infrastructure for wireless communication in the saving energy era, in proceedings of the 2020 ieee international workshop on metrology for industry 4.0 & iot, roma, italy, 2020, pp. 674-678. doi: 10.1109/metroind4.0iot48571.2020.9138180
[7] h. g. pfeiffer, h. a. liebhafsky, the origins of beer's law, journal of chemical education 28(3) (1951), p. 123. doi: 10.1021/ed028p123
[8] h. r. gordon, can the lambert-beer law be applied to the diffuse attenuation coefficient of ocean water?, limnology and oceanography 34(8) (1989), pp. 1389-1409. doi: 10.4319/lo.1989.34.8.1389
[9] n. g. jerlov, irradiance, marine optics vol. 14, ch. 10 (1968), pp. 115-132, elsevier oceanography series, amsterdam, netherlands. doi: 10.1016/s0422-9894(08)70929-2
[10] m. g. solonenko, c. d. mobley, inherent optical properties of jerlov water types, appl. opt. 54(17) (2015), pp. 5392-5401. doi: 10.1364/ao.54.005392
[11] n. g. jerlov, optical classification of ocean water, physical aspects of light in the sea, edited by john e. tyler, honolulu: university of hawaii press, 2021, pp. 45-50. doi: 10.1515/9780824884918-009
[12] j. t. o. kirk, light and photosynthesis in aquatic ecosystems, cambridge, uk: cambridge university press, 2013. doi: 10.1017/cbo9781139168212
[13] mit news, advancing undersea optical communications. online [accessed 05 december 2021] http://news.mit.edu/2018/advancing-undersea-optical-communications-0817
[14] c. m. g. gussen, p. s. r. diniz, m. l. r. campos, w. a. martins, f. m. costa, j. n. gois, a survey of underwater wireless communication technologies, j. of commun. and info. sys. 31(1) (2016), pp. 242-255. doi: 10.14209/jcis.2016.22
[15] h. kaushal, g. kaddoum, underwater optical wireless communication, ieee access 4 (2016), pp. 1518-1547.
doi: 10.1109/access.2016.2552538
[16] c. shen, y. guo, h. m. oubei, t. k. ng, g. liu, k. h. park, k. t. ho, m. s. alouini, b. s. ooi, 20-meter underwater wireless optical communication link with 1.5 gbps data rate, optics express 24(22) (2016), pp. 25502-25509. doi: 10.1364/oe.24.025502
[17] j. xu, y. song, x. yu, a. lin, m. kong, j. han, n. deng, underwater wireless transmission of high-speed qam-ofdm signals using a compact red-light laser, optics express 24(8) (2016), pp. 8097-8109. doi: 10.1364/oe.24.008097
[18] c. wang, h.-y. yu, y.-j. zhu, a long distance underwater visible light communication system with single photon avalanche diode, ieee photonics journal 8(5) (2016) art. no. 7906311. doi: 10.1109/jphot.2016.2602330
[19] r. ji, s. wang, q. liu, w. lu, high-speed visible light communications: enabling technologies and state of the art, appl. sci. 8(4) (2018) art. no. 589. doi: 10.3390/app8040589
[20] h. m. oubei, c. shen, a. kammoun, e. zedini, k. h. park, x. sun, g. liu, c. h. kang, t. k. ng, m. s. alouini, light based underwater wireless communications, japanese journal of applied physics 57(8s2) (2018), 08pa06. doi: 10.7567/jjap.57.08pa06
[21] m. f. ali, d. n. k. jayakody, y. a. chursin, s. affes, s. dmitry, recent advances and future directions on underwater wireless communications, archives of computational methods in engineering 27 (2020), pp. 1379-1412. doi: 10.1007/s11831-019-09354-8
[22] g. d. roumelas, h. e. nistazakis, a. n. stassinakis, c. k. volos, a. d. tsigopoulos, underwater optical wireless communications with chromatic dispersion and time jitter, computation 7(3) (2019) art. no. 35. doi: 10.3390/computation7030035
[23] n. saeed, a. celik, t. y. al-naffouri, m. s. alouini, underwater optical wireless communications, networking, and localization: a survey, ad hoc networks 94 (2019) art. no. 101935. doi: 10.1016/j.adhoc.2019.101935
[24] z. hong, q. yan, z. li, t. zhan, y. wang, photon-counting underwater optical wireless communication for reliable video transmission using joint source-channel coding based on distributed compressive sensing, sensors 19(5) (2019) art. no. 1042. doi: 10.3390/s19051042
[25] g. schirripa spagnolo, l. cozzella, f. leccese, underwater optical wireless communications: overview, sensors 20 (2020) art. no. 2261. doi: 10.3390/s20082261
[26] s. zhu, x. chen, x. liu, g. zhang, p. tian, recent progress in and perspectives of underwater wireless optical communication, progress in quantum electronics 73 (2020) art. no. 100274. doi: 10.1016/j.pquantelec.2020.100274
[27] g. schirripa spagnolo, l. cozzella, f. leccese, a brief survey on underwater optical wireless communications, in 2020 imeko tc-19 international workshop on metrology for the sea, naples, italy, october 5-7, 2020, pp. 79-84. online [accessed 05 december 2021] https://www.imeko.org/publications/tc19-metrosea-2020/imeko-tc19-metrosea-2020-15.pdf
[28] j. sticklus, p. a. hoeher, r. röttgers, optical underwater communication: the potential of using converted green leds in coastal waters, ieee journal of oceanic engineering 44(2) (2018), pp. 535-547. doi: 10.1109/joe.2018.2816838
[29] t. wiener, s. karp, the role of blue/green laser systems in strategic submarine communications, ieee transactions on communications 28(9) (1980), pp. 1602-1607. doi: 10.1109/tcom.1980.1094858
[30] l. w. e. wright, blue-green lasers for submarine communications, naval engineers journal 95(3) (1983), pp. 173-177. doi: 10.1111/j.1559-3584.1983.tb01635.x
[31] r. e.
chatham, blue-green lasers for submarine communications, in conference on lasers and electro-optics, g. bjorklund, e. hinkley, p. moulton, d. pinnow (eds.), osa technical digest (optical society of america, 1986), paper wm1. doi: 10.1364/cleo.1986.wm1
[32] m. doniec, i. vasilescu, m. chitre, c. detweiler, m. hoffmann-kuhnt, d. rus, aquaoptical: a lightweight device for high-rate long-range underwater point-to-point communication, mar. technol. soc. j. 44 (2010), pp. 1-6. doi: 10.4031/mtsj.44.4.6
[33] i. c. rust, h. h. asada, a dual-use visible light approach to integrated communication and localization of underwater robots with application to non-destructive nuclear reactor inspection, in proceedings of the ieee international conference on robotics and automation, saint paul, mn, usa, 14-18 may 2012, pp. 2445-2450. doi: 10.1109/icra.2012.6224718
[34] hydromea. online [accessed 05 december 2021] https://www.hydromea.com
[35] acquatec group. online [accessed 05 december 2021] https://www.aquatecgroup.com
[36] marine link. online [accessed 05 december 2021] https://www.marine-link.com
[37] sonardyne. online [accessed 05 december 2021] https://www.sonardyne.com
[38] b. cochenour, k. dunn, a. laux, l. mullen, experimental measurements of the magnitude and phase response of high-frequency modulated light underwater, appl. opt. 56(14) (2017), pp. 4019-4024. doi: 10.1364/ao.56.004019
[39] recreational scuba training council (rstc), common hand signals for recreational scuba diving, 2015. online [accessed 05 december 2021] http://www.neadc.org/commonhandsignalsforscubadiving.pdf
[40] underwater slate. online [accessed 05 december 2021] https://www.mares.com/en/underwater-slate
[41] electronic underwater slate. online [accessed 05 december 2021] https://duslate.com
[42] o. rusoke-dierich, basic diving equipment, in: diving medicine, 2018, pp. 15-19, springer, cham.
doi: 10.1007/978-3-319-73836-9_2
[43] neptune space predator t-divers full face mask, ocean reef inc., 2510 island view way, vista, california 92081, usa, 2021. online [accessed 05 december 2021] https://diving.oceanreefgroup.com/wp-content/uploads/sites/3/2018/10/neptune-space-predator-t-divers-rel-1.3.pdf
[44] ocean technology systems, wireless underwater communications. online [accessed 05 december 2021] https://www.oceantechnologysystems.com/learning-center/through-water-communications/
[45] divelink underwater communications ltd. online [accessed 05 december 2021] http://www.divelink.net/purchase/aga/8-home
[46] h. a. nowak, underwater led-based communication links, master's thesis, naval postgraduate school, monterey, ca, usa, april 21, 2020. online [accessed 05 december 2021] https://apps.dtic.mil/sti/pdfs/ad1114685.pdf
[47] t. hamza, m.-a. khalighi, s. bourennane, p. léon, j. opderbecke, investigation of solar noise impact on the performance of underwater wireless optical communication links, optics express 24(22) (2016), 25832. doi: 10.1364/oe.24.025832
[48] r. amin, b. l. richards, w. f. x. e. misa, j. c. taylor, d. r. miller, a. k. rollo, c. demarke, h. singh, g. c. young, j. childress, j. e. ossolinski, r. t. reardon, k. h. koyanagi, the modular optical underwater survey system, sensors 17 (2017), 2309. doi: 10.3390/s17102309
[49] n. e. farr, c. t. pontbriand, j. d. ware, l.-p. a. pelletier, non-visible light underwater optical communications, in proceedings of the ieee third underwater communications and networking conference (ucomms), lerici, italy, 2016, pp. 1-4. doi: 10.1109/ucomms.2016.7583454
[50] j. marshall, vision and lack of vision in the ocean, current biology 27(11) (2017), pp. r494-r502. doi: 10.1016/j.cub.2017.03.012
[51] x.
sun et al., 375-nm ultraviolet-laser based non-line-of-sight underwater optical communication, optics express 26(10) (2018), pp. 12870-12877. doi: 10.1364/oe.26.012870
[52] x. sun et al., non-line-of-sight methodology for high-speed wireless optical communication in highly turbid water, opt. comm. 461 (2020) art. no. 125264. doi: 10.1016/j.optcom.2020.125264
[53] v. lange, r. hönl, led as transmitter and receiver in pof based bidirectional communication systems, in international ieee conference and workshop in óbuda on electrical and power engineering (cando-epe), 2018, pp. 000137-000142. doi: 10.1109/cando-epe.2018.8601162
[54] s. li, a. pandharipande, f. m. j. willems, adaptive visible light communication led receiver, in proceedings of 2017 ieee sensors, pp. 1-3. doi: 10.1109/icsens.2017.8234237
[55] l. matheus, l. pires, a. vieira, l. f. vieira, m. a. vieira, j. a. nacif, the internet of light: impact of colors in led-to-led visible light communication systems, internet technology letters 2(1) (2019), e78. doi: 10.1002/itl2.78
[56] g. schirripa spagnolo, f. leccese, m. leccisi, led as transmitter and receiver of light: a simple tool to demonstration photoelectric effect, crystals 9(10) (2019) art. no. 531. doi: 10.3390/cryst9100531
[57] g. schirripa spagnolo, a. postiglione, i. de angelis, simple equipment for teaching internal photoelectric effect, phys. educ. 55 (2020) art. no. 055011. doi: 10.1088/1361-6552/ab97bf
[58] j. sticklus, p. a. hoeher, m. hieronymi, experimental characterization of single-color power leds used as photodetectors, sensors 20 (2020) art. no. 5200. doi: 10.3390/s20185200
[59] m. galal, w. p. ng, r. binns, a. abd el aziz, characterization of rgb leds as emitter and photodetector for led-to-led communication, in proceedings of the 12th ieee/iet international symposium on communication systems, networks and digital signal processing (csndsp), porto, portugal, 20-22 july 2020. doi: 10.1109/csndsp49049.2020.9249617
[60] m. galal, w. p. ng, r. binns, a. abd el aziz, experimental characterization of rgb led transceiver in low-complexity led-to-led link, sensors 20 (2020) art. no. 5754. doi: 10.3390/s20205754
[61] g. schirripa spagnolo, f. leccese, led rail signals: full hardware realization of apparatus with independent intensity by temperature changes, electronics 10(11) (2021) art. no. 1291. doi: 10.3390/electronics10111291
[62] d. j. starling, b. burger, e. miller, j. zolnowski, j. ranalli, an actively quenched single photon detector with a light emitting diode, modern applied science 10(1) (2016) art. no. 114. doi: 10.5539/mas.v10n1p114
[63] l. mccann, introducing students to single photon detection with a reverse-biased led in avalanche mode, in 2015 conference on laboratory instruction beyond the first year of college, july 22-24, college park, md, usa. doi: 10.1119/bfy.2015.pr.016
[64] s. m. sze, kwok k. ng, physics of semiconductor devices, john wiley & sons, inc., hoboken, nj, usa, 2007. doi: 10.1002/0470068329
[65] g. ferrari, m. sampietro, wide bandwidth transimpedance amplifier for extremely high sensitivity continuous measurements, review of scientific instruments 78(9) (2007) art. no. 094703. doi: 10.1063/1.2778626
[66] f. zappa, s. tisa, a. tosi, s. cova, principles and features of single-photon avalanche diode arrays, sensors and actuators a: physical 140(1) (2007), pp. 103-112. doi: 10.1016/j.sna.2007.06.021
[67] j. kirdoda, d. c. s. dumas, k. kuzmenko, p. vines, z. m. greener, r. w. millar, m. m. mirza, g. s. buller, d. j.
paul, geiger mode ge-on-si single-photon avalanche diode detectors. 2019 ieee 16th international conference on group iv photonics (gfp), singapore, 28-30 aug 2019 doi: 10.1109/group4.2019.8853918 [68] s. donati, t. tambosso, single-photon detectors: from traditional pmt to solid-state spad-based technology, ieee journal of selected topics in quantum electronics 20(6) (2014), pp. 204211. doi: 10.1109/jstqe.2014.2350836 [69] c. wang, h. y. yu, y. j. zhu, a long distance underwater visible light communication system with single photon avalanche diode, ieee photonics journal 8(5) (2016) art. no 7906311. doi: 10.1109/jphot.2016.2602330 [70] t. shafique, o. amin, m. abdallah, i. s. ansari, m. s. alouini, k. qaraqe, performance analysis of single-photon avalanche diode underwater vlc system using arq, ieee photonics journal 9(5) (2017), pp. 1-11. doi: 10.1109/jphot.2017.274300 [71] hadfield, r. single-photon detectors for optical quantum information applications, nature photon 3 (2009), pp. 696–705. doi: 10.1038/nphoton.2009.230 [72] d. chitnis, s. collins, a spad-based photon detecting system for optical communications, journal of lightwave technology 32(10) (2014), pp. 2028-2034. doi: 10.1109/jlt.2014.2316972 [73] e. sarbazi, m. safari, h. haas, statistical modelling of singlephoton avalanche diode receivers for optical wireless communications, ieee transactions on communications 66(9) (2018), pp. 4043-4058. doi: 10.1109/tcomm.2018.2822815 [74] t. shafique, o. amin, m. abdallah, i. s. ansari, m. s. alouini, k. qaraqe, performance analysis of single-photon avalanche diode https://diving.oceanreefgroup.com/wp-content/uploads/sites/3/2018/10/neptune-space-predator-t-divers-rel-1.3.pdf https://diving.oceanreefgroup.com/wp-content/uploads/sites/3/2018/10/neptune-space-predator-t-divers-rel-1.3.pdf https://diving.oceanreefgroup.com/wp-content/uploads/sites/3/2018/10/neptune-space-predator-t-divers-rel-1.3.pdf https://www.oceantechnologysystems.com/learning-center/through-water-communications/ https://www.oceantechnologysystems.com/learning-center/through-water-communications/ http://www.divelink.net/purchase/aga/8-home https://apps.dtic.mil/sti/pdfs/ad1114685.pdf https://doi.org/10.1364/oe.24.025832 https://doi.org/10.3390/s17102309 https://doi.org/10.1109/ucomms.2016.7583454 https://doi.org/10.1016/j.cub.2017.03.012 https://doi.org/10.1364/oe.26.012870 https://doi.org/10.1016/j.optcom.2020.125264 https://doi.org/10.1109/cando-epe.2018.8601162 https://doi.org/10.1109/icsens.2017.8234237 https://doi.org/10.1002/itl2.78 https://doi.org/10.3390/cryst9100531 https://doi.org/10.1088/1361-6552/ab97bf https://doi.org/10.3390/s20185200 https://doi.org/10.1109/csndsp49049.2020.9249617 https://doi.org/10.3390/s20205754 https://doi.org/10.3390/electronics10111291 https://doi.org/10.5539/mas.v10n1p114 https://doi.org/10.1119/bfy.2015.pr.016 https://doi.org/10.1002/0470068329 https://doi.org/10.1063/1.2778626 https://doi.org/10.1016/j.sna.2007.06.021 https://doi.org/10.1109/group4.2019.8853918 https://doi.org/10.1109/jstqe.2014.2350836 https://doi.org/10.1109/jphot.2016.2602330 https://doi.org/10.1109/jphot.2017.274300 https://doi.org/10.1038/nphoton.2009.230 https://doi.org/10.1109/jlt.2014.2316972 https://doi.org/10.1109/tcomm.2018.2822815 acta imeko | www.imeko.org december 2021 | volume 10 | number 4 | 89 underwater vlc system using arq, ieee photonics journal 9(5) (2017), pp. 1-11. doi: 10.1109/jphot.2017.2743007 [75] m. a. khalighi, t. hamza, s. bourennane, p. léon, j. 
opderbecke, underwater wireless optical communications using silicon photo-multipliers, ieee photonics journal 9(4) (2017), pp. 1-10. doi: 10.1109/jphot.2017.2726565 [76] j. sticklus, m. hieronymi, p. a. hoeher, effects and constraints of optical filtering on ambient light suppression in led-based underwater communications, sensors 18(11) (2018) art. no. 3710. doi: 10.3390/s18113710 [77] d. giustiniano, n. o. tippenhauer, s. mangold, low-complexity visible light networking with led-to-led communication, in proceedings of 2012 ifip wireless days, 2012 pp. 1-8. doi: 10.1109/wd.2012.6402861 [78] g. schirripa spagnolo, f. leccese, system to monitor ir radiation of led aircraft warning lights, 2021 ieee 8th international workshop on metrology for aerospace (metroaerospace), 2021, pp. 407-411. doi: 10.1109/metroaerospace51421.2021.9511723 [79] bivar uv5tz leds datasheet. online [accessed 05 december 2021] https://www.mouser.it/datasheet/2/50/biva_s_a0002780821_1 -2262009.pdf [80] l. k. gkoura, g. d. roumelas, h. e. nistazakis, h. g. sandalidis, a. vavoulas, a. d. tsigopoulos, g. s. tombras, underwater optical wireless communication systems: a concise review, turbulence modelling approaches current state, development prospects, applications, konstantin volkov, intechopen, july 26th 2017. doi: 10.5772/67915 [81] r. a. khalil, m. i. babar, n. saeed, t. jan, h. s. cho, effect of link misalignment in the optical-internet of underwater things, electronics 9(4) (2020) art. no. 646. doi: 10.3390/electronics9040646 [82] s. arnon, d. kedar, non-line-of-sight underwater optical wireless communication network, j. opt. soc. am. a 26(3) (2009), pp. 530–539. doi: 10.1364/josaa.26.000530 [83] s. arnon, underwater optical wireless communication network, optical engineering 49(1) (2010) art. no. 015001. doi: 10.1117/1.3280288 [84] linear technology lt1028 datasheet. online [accessed 05 december 2021] https://www.analog.com/media/en/technicaldocumentation/data-sheets/1028fd.pdf [85] a. ayala, underwater optical wireless audio transceiver, senior project electrical engineering department, california polytechnic state university, san luis obispo (2016). online [accessed 05 december 2021] https://digitalcommons.calpoly.edu/eesp/352/ [86] r. k. schreyer, g. j. sonek, an optical transmitter/receiver system for wireless voice communication, ieee transactions on education 35(2) (1992), pp. 138-143. doi: 10.1109/13.135579 [87] j. t. b. taufik, m. l. hossain, t. ahmed, development of an audio transmission system through an indoor visible light communication link, international journal of scientific and research publications 9(1) (2019) 432-438. doi: 10.29322/ijsrp.9.01.2019.p8556 [88] texas instruments, timing circuit ne555 datasheet. online [accessed 05 december 2021] https://www.ti.com/lit/ds/symlink/ne555.pdf [89] linear technology, voltage comparator lt1011. online [accessed 05 december 2021] https://www.analog.com/media/en/technicaldocumentation/data-sheets/lt1011.pdf [90] pwm boost-switching regulator. online [accessed 05 december 2021] https://ww1.microchip.com/downloads/en/devicedoc/mic32 89.pdf [91] audacity® software is copyright © 1999-2021 audacity team. web site: https://audacityteam.org/. it is free software distributed under the terms of the gnu general public license. the name audacity® is a registered trademark. 
state-of-the art and perspectives of underwater optical wireless communications
acta imeko issn: 2221-870x december 2021, volume 10, number 4, 25 - 35
fabio leccese1, giuseppe schirripa spagnolo2
1 dipartimento di scienze, università degli studi “roma tre”, via della vasca navale n. 84, 00146 roma, italy
2 dipartimento di matematica e fisica, università degli studi “roma tre”, via della vasca navale n. 84, 00146 roma, italy

abstract: in scientific, military, and industrial sectors, the development of robust and efficient submarine wireless communication links is of enormous interest. underwater wireless communications can be carried out through acoustic, radio frequency (rf), and optical waves. underwater optical communication is not a new idea, but it has recently been reconsidered because seawater exhibits a window of reduced absorption both in the visible spectrum and in long-wavelength uv light (uv-a). compared to its bandwidth-limited acoustic counterpart, underwater optical wireless communications (uowcs) can support higher data rates at low latency levels. underwater wireless communication networks are important in ocean exploration, military tactical operations, and environmental and water pollution monitoring. nevertheless, given the rapid development of uowc technology, documents are still needed showing the state of the art and the progress made by the most current research. this paper aims to examine current technologies, and those potentially available soon, for underwater optical wireless communication and to propose a new perspective using uv-a radiation.

section: research paper
keywords: underwater communication; visible light communications; optical wireless communication; bidirectional communication; led; photo detector
citation: fabio leccese, giuseppe schirripa spagnolo, state-of-the art and perspectives of underwater optical wireless communications, acta imeko, vol. 10, no. 4, article 8, december 2021, identifier: imeko-acta-10 (2021)-04-08
section editor: silvio del pizzo, university of naples 'parthenope', italy
received march 7, 2021; in final form june 17, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: giuseppe schirripa spagnolo, e-mail: giuseppe.schirripaspagnolo@uniroma3.it

1. introduction
underwater wireless communication (uwc) has many potential applications in the military, industrial and scientific research fields but, for practical applications, significant data bandwidth is required [1]-[3]. generally, underwater wireless communication takes place via acoustic waves due to their relatively low attenuation; they are the normal choice in almost all commercially available submarine transmission systems. unfortunately, acoustic systems have low bandwidth and high latency, so they are not suitable for bandwidth-hungry underwater applications such as image and real-time video transmission. however, as acoustic transmission is the only technology capable of supporting large transmission distances, extensive studies are being conducted to improve the performance of acoustic communication channels [4]-[8]. moreover, acoustic underwater communication is susceptible to malicious attacks [9].
consequently, a complementary technology capable of achieving secure broadband underwater communications is required.
wireless communication via radio frequency (rf) waves is the most widespread technology in terrestrial communications. sadly, this technology is not suitable for underwater applications: in water, radio frequency waves are strongly attenuated, especially in seawater, where the propagation medium is highly conductive [10]. for short distance communications, underwater wireless optical communication (uowc) can be a viable alternative to that achievable via acoustic waves. this technology, even with all its limitations, can be of great use in specific applications. although the technology is not yet widely used, this article provides the state of the art in wireless underwater optical communication. table 1 shows a comparison of acoustic underwater wireless communication technologies vs. underwater wireless optical communication.
optical communication is defined as communication at a distance using light to carry information. an optical fibre is the most common type of channel for optical communications, as well as the only medium that can meet the need for enormous bandwidth in such an information age. replacing the channel from an optical fibre to free space, we achieve free-space optical communications [11].
visible light communication (vlc) is a communication technology in which the visible spectrum is modulated to transmit data. due to the limited propagation distance of light emitting diodes (leds), vlc is a technology for short-range communication. pang et al. [12] first introduced the concept of using leds for wireless communication. vlc technology was developed to provide both lighting and data transfer with the same infrastructure [13]-[16]. vlc techniques transmit information wirelessly by rapidly pulsing visible light using leds; generally, the information data is overlaid on the led light without introducing flickering. the exhaustion of low-frequency bands, unable to cope with the exponential growth of demand for high-speed wireless access, is another reason for exploring new technologies.
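one common way to overlay data on led light without introducing flicker (a minimal sketch of ours, not a scheme prescribed by this paper) is a dc-balanced line code such as manchester coding, in which every bit maps to a two-chip symbol with the same average duty cycle, so the data does not modulate the perceived brightness:

# minimal illustrative sketch (not from the paper): manchester line coding keeps
# the led's average duty cycle at 50% regardless of the data, avoiding flicker.
def manchester(bits):
    chips = []
    for b in bits:
        chips.extend([1, 0] if b else [0, 1])   # '1' -> high-low, '0' -> low-high
    return chips

print(manchester([1, 0, 1]))   # [1, 0, 0, 1, 1, 0]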
the visible light spectrum is unlicensed, and the hardware that can be used for data transmission is readily available. furthermore, the exponential improvement of high-power light emitting diodes is an enabler for high data rate vlc networks. as well as vlc, underwater optical wireless communication (uowc) systems are currently being studied [17]-[21]. in uowc systems, the light sources are leds or laser diodes (lds). both are extremely interesting: lds feature a higher modulation bandwidth than leds; on the other hand, leds have higher energy efficiency, lower cost, and longer life, and seem more suitable for applications where a medium transmission bit rate is required.
compared to acoustic communication, uowc has great potential; with it, we can make communications with high bit rate and very low latency. currently, the performance of uowc systems is limited to short range applications [22]. submarine optical communication systems are starting to be commercially available [23]-[25]. in the literature, numerous studies have addressed the problem of optical transmission in water through experiments. unfortunately, there are objective difficulties in carrying out experiments in a real underwater environment, so most of the experimental work is done within a controlled laboratory setup. in such configurations, sunlight, which induces noise and which, in some cases, saturates the light detectors, is neglected. in-depth studies are therefore still necessary to create systems that can be used in real operational scenarios, and further research is needed to allow submarine optical transmission even over medium distances (greater than 500 m).
table 1 shows the performance features (benefits, limitations, and requirements) of acoustic and optical underwater communication [26], while figure 1 compares the performance of acoustics and uowc based on transmission range and data rate (bandwidth) [27].

table 1. comparison of underwater wireless communication technologies.
parameter | acoustic | optical
attenuation | 0.1 - 4 db/km | 0.39 db/m (ocean); 11 db/m (turbid)
speed | 1500 m/s | 2.3 × 10^8 m/s
data rate | kbps | gbps
latency | high | low
distance | > 100 km | ≤ 500 m
bandwidth | 1 khz - 100 khz | 150 mhz
frequency | 10 - 15 khz | 5 × 10^14 hz
power | 10 w | mw - w

figure 1. comparison of the performance of acoustics and uowc, based on transmission range and data rate (bandwidth).

in order to provide a basic overview, this paper goes through and summarizes the perspectives of uowc technologies; the focus is to examine current technologies, and those potentially available in the next few years, for uowc.
the military sector is one where underwater optical wireless communication finds important applications, thanks to its intrinsic safety and the availability of higher bandwidth. one possible application is communication between divers. during military incursions with divers, it is very important for the command to have secure communications that are difficult to locate. generally, underwater acoustic communications are easily detectable; in this scenario, uowc is an excellent technology, with the advantage of being much more difficult to intercept. this application does not require long range and high bandwidth. figure 2 shows a typical uowc military application scenario.

figure 2. typical military application scenarios of uowc.

another scenario is the one shown in figure 3. it is based on a dynamic positioning buoy [28], capable of communicating with a satellite and/or with a terrestrial station and with an optical surveillance station positioned on the seabed. the surveillance station can be powered by nuclear batteries [29] and checks in real time, through digital optical correlation [30]-[32], whether something intrudes into the monitored area. in case of a suspect object (e.g., a submarine), the image and related alert are sent back to the buoy and, from it, to the coastal ground station via satellite link. this application can provide very accurate underwater video surveillance.

figure 3. underwater video surveillance scenario.
in addition, by using uv light for underwater optical wireless communication, the intruder has a hard time realizing that it has been detected.
in uowc, the link between transmitter and receiver can be mainly of two types [20],[26]:
• point-to-point line-of-sight (point-to-point los);
• diffuse line-of-sight (diffuse los) configuration.
the point-to-point los configuration, shown in figure 4 (a), uses “collimated” light sources. in this arrangement, the receiver is positioned in such a way as to detect the light beam directly pointed in the direction fixed by the transmitter. in contrast, the diffuse los configuration uses light sources with a large divergence angle. this allows for greater flexibility in the reciprocal positioning of the transmitter and receiver, see figure 4 (b). especially in military applications, where it is necessary to communicate between moving units, the diffuse line-of-sight (diffuse los) configuration must be used.

figure 4. examples of different underwater optical wireless link configurations.

theoretically, in a uowc system we could use any light source as a transmitter [33]. however, the limitations of power, size and switching speed imposed by the practical use of the system restrict the selection to two possible choices: laser diodes (lds) and light emitting diodes (leds).
laser diodes make it possible to develop uowc systems with a high modulation bandwidth and a high transmission power density. they generally have small angles of divergence and therefore a strong directionality, and they are used in point-to-point los links. in most underwater communications between moving objects, it is not easy to achieve perfect alignment between transmitter and receiver; in this scenario, for a realistic application of laser diodes, beam expansion or active alignment systems are required. this greatly complicates the design of the system. furthermore, such systems are not very economical and often not very reliable.
nowadays, high-brightness leds are available, and they represent a valid alternative to the use of laser diodes. the use of leds as light sources for uowc systems offers many advantages, such as long life and low energy consumption. in addition, leds with large divergence angles make alignment problems less stringent. generally, leds are used in the diffuse los configuration. by means of leds, it is possible to create simple and compact uowc systems. unfortunately, due to the large divergence angles and low modulation bandwidth, leds are only applicable for short-range transmissions and for applications where relatively low transmission speeds are required.
as receivers, a variety of sensors is potentially usable in uowc [34]-[36]: photodiodes, pin photodiodes, avalanche photodiodes, and silicon photomultipliers.

2. optical transmission in the aquatic medium
beer’s law is commonly used to relate the absorption of diffuse light to the properties of the medium through which the light is traveling.
when applied to a liquid medium, it states that the irradiance (e) decreases exponentially as a function of wavelength (λ) and distance (r) [37],[38]. mathematically, we can write

E(λ, r) = E0 ∙ exp[−Kd(λ) ∙ r] ,   (1)

where e0 is the initial irradiance (in watts per square meter). in a medium with attenuation coefficient kd(λ), after traveling a distance r, the residual irradiance is e(λ, r). in (1), we assume that kd is constant along r.
the aquatic medium contains many different elements, dissolved or suspended. these components cause the spectral attenuation of the radiation. in particular, the concentration of chlorophyll is a very significant parameter for the use of optical radiation in submarine communications [39]-[41]. for this reason, a relationship between the attenuation coefficient kd and the chlorophyll concentration was determined.
underwater, light shows less attenuation in the blue/green wavelength range. however, although light attenuation in seawater is minimum in the blue-green region, the optimal wavelength for an underwater optical link is conditioned by the inherent optical properties of the water, which can vary largely in different geographic places. generally, coastal and oceanic waters are classified according to the jerlov water types [42]-[45]. for jerlov coastal water types 1c, 3c, 5c, 7c, 9c and oceanic water type iii, diffuse attenuation coefficients are shown in figure 5.

figure 5. diffuse light attenuation coefficient (kd) vs. wavelength for jerlov water types. data from tables xxvi and xxvii in ref. [46].

observing figure 5, we see that optical signals are absorbed in water. however, seawater exhibits relatively little absorption in the blue/green region of the visible spectrum. therefore, using wavelengths in this spectral region, high-speed connections can be attained according to the type of water. minimum attenuation is centred near 460 nm in clear ocean waters and shifts to higher values for coastal waters.
the seawater light transmission model is shown in figure 6. the optical power reaching the receiver can be written as [47]-[50]:

Prx = Ptx ∙ ηtx ∙ ηrx ∙ exp[−Kd(λ) ∙ z / cos θ] ∙ [Arx ∙ cos θ] / [2π ∙ z² ∙ (1 − cos θ0)] ,   (2)

where ptx is the transmitted power, ηtx and ηrx are the optical efficiencies of the tx and rx correspondingly, kd(λ) is the attenuation coefficient, z is the perpendicular distance between the tx plane and the rx plane, θ0 is the tx beam divergence angle, θ is the angle between the perpendicular to the rx plane and the tx-rx trajectory, and arx is the receiver aperture area.

figure 6. seawater light transmission model.

the transmitted power is limited by the energy that can be used by the transmitter apparatus, and it must be as small as possible; in this way, it is possible to have a low power supply, which is very useful in underwater applications. equation (2) shows that, for the same energy used by the transmitter, if you want to increase the transmission distance it is essential, among other things, to improve the efficiency of the transmitter and of the receiver. obviously, the transmission distance can also be increased by using reception systems capable of capturing, theoretically, even a single photon.
as for light sources, technology offers increasingly efficient and reliable devices. current light sources (laser diodes and leds) have excellent efficiency, high reliability, low power consumption and low cost. on the contrary, as far as the receiver is concerned, there is still a lot of research work to be done.
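as a worked illustration of equations (1) and (2), the short python sketch below (our example, not code from the paper; every numerical value is hypothetical and purely indicative) evaluates the received power of a point-to-point los link:

import math

# illustrative evaluation of the los link budget of equation (2); parameter
# values are hypothetical examples, not measured data.
def received_power(p_tx, eta_tx, eta_rx, kd, z, theta, theta0, a_rx):
    # beer-type exponential attenuation of equation (1) along the slant path
    attenuation = math.exp(-kd * z / math.cos(theta))
    # geometrical spreading of a source with divergence angle theta0
    spreading = (a_rx * math.cos(theta)) / (2 * math.pi * z**2 * (1 - math.cos(theta0)))
    return p_tx * eta_tx * eta_rx * attenuation * spreading

# hypothetical link: 1 w source, 90% efficient optics on both sides, kd = 0.1 1/m
# (clear water, illustrative), z = 10 m, aligned receiver (theta = 0), 30° beam
# divergence, 0.01 m^2 receiver aperture.
p_rx = received_power(1.0, 0.9, 0.9, 0.1, 10.0, 0.0, math.radians(30), 0.01)
print(f"received power: {p_rx:.2e} w")   # a few tens of microwatts

in this toy example, doubling z reduces the received power both through the exponential term of equation (1) and through the 1/z² spreading, which is why improving ηtx, ηrx and the receiver sensitivity matters so much for range.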
generally, the light detected by the receiver is weak and disturbed by noise, especially if the transmission is not over a very short distance. for this reason, new error-corrected modulation systems that are relatively immune to noise must be studied, especially if we want to use submarine optical transmission with a high bitrate.

3. basic components of uowc
a uowc link can be schematized in three parts: the transmitter unit (tx), the underwater channel and the receiver module (rx). the schematic in figure 7 shows the components of a typical system.

figure 7. schematic of a typical uowc link. the transmitter (tx) is composed of a modulator, optical driver, light source, and projection lens. the receiver (rx) is made of an optical bandpass filter, photodetector, low noise electronics and a demodulator.

3.1. the transmitter (tx)
for uowc systems, the function of the transmitter is to transform the electrical signal into an optical one, projecting the carefully aimed light pulses into the water. as already mentioned, the optical light sources are led- or ld-based [51]-[58]. the transmitter consists of four principal components: a modulator and pulse-shaping circuit, a driver circuit, a light source, which converts the electrical signal into an optical signal suitable for transmission, and a lens to realize the optical link configuration.
a critical parameter in optical transmission is the modulation scheme used. different modulation schemes can be used in uowc systems; each varies in complexity, implementation cost, bandwidth, power consumption, noise robustness, and bit error rate (ber). typical rf modulating schemes are not applicable in vlc. recent uowc studies have tried to characterize the performance of communication systems using different modulation techniques to simultaneously increase the data transmission rate and the link distance [59]-[66]. table 2 summarizes the main modulation schemes that can be used in uowc [67].

table 2. summary of uowc modulation schemes.
modulation | drawbacks | advantages | ref.
ook-nrz | low energy efficiency | simple and low cost | [68]-[70]
ppm | high requirements on timing; low bandwidth utilization rate; more complex transceivers | high power efficiency | [71]-[75]
pwm | low power efficiency | very good noise immunity | [20], [58]
dpim | error spread in demodulation; complex modulation devices | high bandwidth efficiency | [58], [76], [77]
psk | high implementation complexity; high cost | high receiver sensitivity | [78]-[80]
qam | high implementation complexity; high cost | high system spectral efficiency; better rejection of noise | [80]-[82]
polsk | short transmission distance; low data rate | high tolerance to underwater turbulence | [83], [84]
sim | complex modulation/demodulation devices; poor average power efficiency | increased system capacity; low cost | [85], [86]
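to make the trade-offs of table 2 concrete, the following sketch (our illustration, not code from the paper) maps the same bit stream onto ook-nrz slots and onto 4-ppm frames; ppm concentrates the optical energy in one of four slots (better power efficiency) at the price of four chips per two bits (lower bandwidth utilization) and stricter slot timing:

# minimal illustrative mapping of bits onto two of the schemes in table 2.
def ook_nrz(bits):
    # ook-nrz: one chip per bit, led on for '1' and off for '0'
    return list(bits)

def ppm4(bits):
    # 4-ppm: each pair of bits selects which of four slots carries the pulse
    frames = []
    for i in range(0, len(bits) - 1, 2):
        symbol = (bits[i] << 1) | bits[i + 1]
        frame = [0, 0, 0, 0]
        frame[symbol] = 1
        frames.extend(frame)
    return frames

bits = [1, 0, 1, 1, 0, 0]
print(ook_nrz(bits))   # [1, 0, 1, 1, 0, 0] : 6 chips, 3 pulses
print(ppm4(bits))      # [0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0] : 12 chips, 3 pulses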
for traditional detection devices and methods, due to the exponential attenuation of the water, the optical communication distance is less than 100 m [43],[90]. this constraint severely limits the performance of uowc systems. especially for the management of auvs and remotecontrol vehicles (rov) [91]-[95]. recent research is focused on the possible application of single photon avalanche diodes (spads) technology to uowc systems. the avalanche photodiodes have a similar structure to that of the pin photodiodes and operate at a much higher reversed bias. this physical characteristic allows to a single photon to produce a significant avalanche of electrons. this way of operation is called the single-photon avalanche mode or even the geiger’s mode [96]-[98]. the great advantage of spads is that their detectors do not need to a trans-conductance amplifier. this intrinsically leads that optical communications realized with this kind of diodes could provide high detection, high accuracy, and low noise measurements [99]-[108]. 4. underwater communications by uv-a radiation in the literature, almost all studies do not consider the presence of sunlight. it is inevitable that uowc systems are exposed to sunlight. furthermore, it should be noted that the optical absorption spectrum of seawater aligns with the maximum amplitude of the solar spectrum, see figure 8 [109]. generally, solar intensity decreases with depth. by examining how light is absorbed in water, see figure 5, we see that the best wavelengths to use in uowc are 450-500 nm for clear waters and 570-600 nm for coastal waters. this same attenuation is also true for the solar spectrum. figure 9 shows how sunlight penetrates seawaters. in the presence of sunlight, the receivers see very high white noise and can often go into saturation. this problem is particularly important with spads. of course, in real applications, the viewing direction of the photo-sensor is also important. an upward facing detector is exposed to sunlight a few orders of magnitude greater than when facing downward or to the side. all this, in many practical applications, makes it difficult to use the spectrum of visible light. for this reason, submarine communication systems that use uv-a band communication channels are extremely interesting. we must also observe two other important characteristics of optical communication that uses near ultraviolet. (1) this communication channel is not identifiable and difficult to intercept. it is particularly attractive for military applications. (2) using uv radiation makes it easier to maintain alignment between transmitter and receiver [111],[112]. an important application of uowc is underwater communication from diver-to-diver. there are commercially available audio interphones that work quite well. table 2. summary on uowc modulation schemes. modulation drawbacks advantages ref. 
4. underwater communications by uv-a radiation
in the literature, almost all studies do not consider the presence of sunlight. yet it is inevitable that uowc systems are exposed to sunlight. furthermore, it should be noted that the minimum of the optical absorption spectrum of seawater aligns with the maximum amplitude of the solar spectrum, see figure 8 [109].

figure 8. solar irradiance and oceanic water diffuse light attenuation coefficient (absorption). the curves show that the minimum of the optical absorption of water is aligned with the maximum of solar radiation. since sunlight is an important source of noise, the best ratio of propagated signal to solar radiation is obtained using radiation centred around 385 nm.

generally, solar intensity decreases with depth. by examining how light is absorbed in water (see figure 5), we see that the best wavelengths to use in uowc are 450-500 nm for clear waters and 570-600 nm for coastal waters. this same attenuation also applies to the solar spectrum. figure 9 shows how sunlight penetrates seawaters.

figure 9. (a) spectral irradiance of sunlight at the level of the sea; (b) light penetration in open ocean; (c) light penetration in coastal water [110].

in the presence of sunlight, the receivers see very high white noise and can often go into saturation. this problem is particularly important with spads. of course, in real applications, the viewing direction of the photo-sensor is also important: an upward facing detector is exposed to sunlight a few orders of magnitude greater than when facing downward or to the side. all this, in many practical applications, makes it difficult to use the spectrum of visible light. for this reason, submarine communication systems that use uv-a band communication channels are extremely interesting. we must also observe two other important characteristics of optical communication that uses near ultraviolet: (1) this communication channel is not identifiable and is difficult to intercept, which makes it particularly attractive for military applications; (2) using uv radiation makes it easier to maintain alignment between transmitter and receiver [111],[112].
an important application of uowc is underwater communication from diver to diver. there are commercially available audio interphones that work quite well. however, in military tactical applications (such as raids by military divers), these systems have the drawback of being easily identifiable. in order to understand whether a system capable of implementing a communication between divers that is not easily identifiable is feasible, some preliminary studies have been carried out. in particular, we tried to understand if it is possible to create a compact uowc system that requires few energy resources. for this purpose, we tried to verify the feasibility of an optical communication made through a led-to-led link.
due to the growing demand for high-power uv leds for commercial applications, cost-effective and efficient leds that emit in the near ultraviolet (uv-a) are currently available. leds with emission in the range from 350 to 400 nm (uv-a) are light sources that allow one to work just beyond visible light radiation, that is, in the region of the spectrum where most of the sunlight does not penetrate the water. this part of the spectrum is still quite close to the minimum attenuation in water; therefore, it is usable in uowc systems.
in addition to being excellent light sources, leds can be used both as temperature sensors [113] and as light detectors [114],[115]. leds can detect a narrow band of wavelengths, close to what they emit when used as a source. in our experiment, we used a bivar uv5tz-385-30 led as a transmitter and a bivar uv5tz-390-30 led as a receiver [116]. the light intensities vs. wavelength of the leds used are shown in figure 10.

figure 10. spectral intensity of the leds used as tx and rx.

the two leds were inserted in a tank filled with real seawater (water taken from the tyrrhenian coast, anzio, italy). the leds were placed at a distance of 50 cm, facing one towards the other. the led used as a transmitter was driven by the circuit shown in figure 11. this led driver has a restricted baud rate, the main reason being the limited switching speed of silicon devices: a maximum data rate of 100 kbps can be achieved with this driver. in any case, this data transmission speed is more than enough to implement an excellent audio connection. obviously, if the driver is made with transistors in gan technology, data transmissions with speeds higher than 1 mbps can be obtained [118].

figure 11. tx led driver circuit using the mic3289 pwm boost switching regulator [117].

figure 12 shows the rx led driver circuit.

figure 12. rx led driver circuit using the ltc1050 operational amplifier [119].
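to summarize how such a link behaves end to end, the minimal sketch below (our illustration; the attenuation coefficient and noise level are hypothetical values chosen only to mimic the 50 cm tank geometry, not measured data) models the led-to-led link as on-off keying attenuated according to equation (1), with the bits recovered by a simple threshold:

import math, random

random.seed(0)
kd = 0.5          # assumed uv-a attenuation coefficient (1/m), illustrative only
distance = 0.5    # m, as in the tank experiment
attenuation = math.exp(-kd * distance)   # beer's law, equation (1)

bits = [1, 0, 1, 1, 0, 1, 0, 0]
tx = [float(b) for b in bits]                                   # normalized led intensity
rx = [attenuation * s + random.gauss(0, 0.05) for s in tx]      # channel + receiver noise
recovered = [1 if v > attenuation / 2 else 0 for v in rx]       # mid-level threshold
print(recovered == bits)   # true at this assumed noise level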
figure 13 shows the signal received by the led used as a light detector. the transmitter led is biased at 25 ma and switched at a frequency of approximately 80 khz.

figure 13. receiver output signal. rx implemented according to the circuit of figure 12.

the received signal, after traversing 50 cm of seawater, is “good”. obviously, further studies are needed to implement and characterize an underwater uv-a communication realized using the led-to-led link.

5. conclusions
recently, many studies have been conducted to use uowc technology to transmit information safely with a high data rate in the underwater environment. today, uowc systems usable in real operating conditions (with some exceptions) are not yet available. therefore, a lot of research in this area has yet to be done. in particular:
• currently, an inevitable phenomenon for the uowc link is the misalignment between transmitter and receiver. to limit its impact, research is underway on the development of smart transceivers. however, the need to develop robust and reliable uowc transceivers that do not require rigorous alignment is urgent.
• the design of innovative modulation and coding schemes that can adapt to the characteristics of the underwater environment.
• since most uowc systems are integrated into a battery-powered platform, energy efficiency is important; the systems must be designed with high energy efficiency.
• the possibility of using different colored light sources at the same time to increase data transfer speed and/or allow simultaneous use by multiple users.
• the development of new underwater communication channel models. when environmental conditions deviate from ideality, the light signal rapidly degrades. it is essential to study the propagation of the light beam with models that simulate real conditions as much as possible (even in “difficult” environments), all this to allow the optimization of transmission and reception techniques, both in terms of the transmitter and of the sensor used as receiver.
furthermore, we have presented a preliminary study to verify the feasibility of a simple, economical and reliable communication system that uses uv-a radiation. the possibility of using near ultraviolet radiation should favor the development of uowc systems that can also be used in the presence of solar radiation.
finally, almost all the studies available in the literature are conducted by simulation or by laboratory experiments; studies in a real marine environment are needed. the interest in uowc is mainly outside the academic field. in fact, the possibility of using uowc is based on future military applications for secure underwater telephones (uwts), necessary for allowing secure communications between vessels and submarines, considering the possibility of using both direct and spread light channels. in addition, the usage of point-to-point optical communications can allow a better usage of torpedoes, not specifically for their guidance, but for reporting sonar information back to the base station at a high rate, even in the case of non-wire-guided solutions.

references
[1] i. f. akyildiz, d. pompili, t. melodia, underwater acoustic sensor networks: research challenges, ad hoc networks 3(3) (2005), pp. 257-279. doi: 10.1016/j.adhoc.2005.01.004
[2] c. m. g. gussen, p. s. r. diniz, m. l. r. campos, w. a. martins, f. m. costa, j. n. gois, a survey of underwater wireless communication technologies, j. of commun. and info. sys. 31(1) (2016), pp. 242–255. doi: 10.14209/jcis.2016.22
[3] m. f. ali, d. n. k. jayakody, y. a. chursin, s. affes, s. dmitry, recent advances and future directions on underwater wireless communications, archives of computational methods in engineering, 2019, pp. 1-34. doi: 10.1007/s11831-019-09354-8
[4] e. demirors, g. sklivanitis, g. e. santagati, t. melodia, s. n. batalama, high-rate software-defined underwater acoustic modem with real-time adaptation capabilities, ieee access 6 (2018), pp. 18602-18615. doi: 10.1109/access.2018.2815026
[5] d. centelles, a. soriano-asensi, j. v. martí, r. marín, p. j. sanz, underwater wireless communications for cooperative robotics with uwsim-net, appl. sci. 9 (2019), 3526. doi: 10.3390/app9173526
[6] m. j. bocus, a. doufexi, d. agrafiotis, performance of ofdm-based massive mimo otfs systems for underwater acoustic communication, iet communications 14(4) (2020), pp. 588-593. doi: 10.1049/iet-com.2019.0376
[7] e. demirors, g. sklivanitis, g. e. santagati, t. melodia, s. n. batalama, high-rate software-defined underwater acoustic modem with real-time adaptation capabilities, ieee access 6 (2018), pp. 18602–18615. doi: 10.1109/access.2018.2815026
[8] d. centelles, a. soriano-asensi, j. v. martí, r. marín, p. j. sanz, underwater wireless communications for cooperative robotics with uwsim-net, appl. sci. 9(3526) (2019). doi: 10.3390/app9173526
[9] m. c. domingo, securing underwater wireless communication networks, ieee wireless communications 18(1) (2011), pp. 22-28. doi: 10.1109/mwc.2011.5714022
[10] x. che, i. wells, g. dickers, p. kear, x. gong, re-evaluation of rf electromagnetic communication in underwater sensor networks, ieee commun. mag. 48(12) (2010), pp. 143–151. doi: 10.1109/mcom.2010.5673085
[11] m. a. khalighi, m. uysal, survey on free space optical communication: a communication theory perspective, ieee communications surveys & tutorials 16(4) (2014), pp. 2231-2258. doi: 10.1109/comst.2014.2329501
[12] g. pang, t. kwan, chi-ho chan, h. liu, led traffic light as a communications device, in proceedings 1999 ieee/ieej/jsai international conference on intelligent transportation systems (cat. no. 99th8383), 1999, pp. 788-793. doi: 10.1109/itsc.1999.821161
[13] z. ghassemlooy, l. n. alves, s. zvanovec, m. a. khalighi, visible light communications: theory and applications, crc press, boca raton, fl, usa, 2017. doi: 10.1201/9781315367330-3
[14] a. al-kinani, c. x. wang, l. zhou, w. zhang, optical wireless communication channel measurements and models, ieee commun. surv. tutor. 20 (2018), pp. 1939–1962. doi: 10.1109/comst.2018.2838096
[15] s. u. rehman, s. ullah, p. h. j. chong, s. yongchareon, d. komosny, visible light communication: a system perspective overview and challenges, sensors 19(5) (2019), 1153. doi: 10.3390/s19051153
[16] g. schirripa spagnolo, l. cozzella, f. leccese, s. sangiovanni, l. podestà, e. piuzzi, optical wireless communication and li-fi: a new infrastructure for wireless communication in saving energy era, 2020 ieee international workshop on metrology for industry 4.0 & iot, roma, italy, 2020, pp. 674-678. doi: 10.1109/metroind4.0iot48571.2020.9138180
[17] g. schirripa spagnolo, l. cozzella, f. leccese, underwater optical wireless communications: overview, sensors 20 (2020), 2261. doi: 10.3390/s20082261
[18] g. schirripa spagnolo, l. cozella, f. leccese, a brief survey on underwater optical wireless communications, 2020 imeko tc-19 international workshop on metrology for the sea, naples, italy, october 5-7, 2020, pp. 79-84. online [accessed 01 september 2021] https://www.imeko.org/publications/tc19-metrosea-2020/imeko-tc19-metrosea-2020-15.pdf
[19] h. m. oubei, c. shen, a. kammoun, e. zedini, k. h. park, x. sun, g. liu, c. h. kang, t. k. ng, n. s. alouini, light based underwater wireless communications, japanese journal of applied physics 57(8s2) (2018), 08pa06. doi: 10.7567/jjap.57.08pa06
[20] z. zeng, s. fu, h. zang, a survey of underwater optical wireless communications, ieee commun. surv. tutorials 19(1) (2017), pp. 204–238. doi: 10.1109/comst.2016.2618841
[21] b. cochenour, k. dunn, a. laux, l. mullen, experimental measurements of the magnitude and phase response of high-frequency modulated light underwater, appl. opt. 56(14) (2017), pp. 4019-4024. doi: 10.1364/ao.56.004019
[22] d. anguita, d. brizzolara, g. parodi, building an underwater wireless sensor network based on optical communication: research challenges and current results, 2009 third international conference on sensor technologies and applications, athens, greece, 2009, pp. 476-479. doi: 10.1109/sensorcomm.2009.79
[23] bluecomm underwater optical communication. online [accessed 01 september 2021] https://www.sonardyne.com/product/bluecomm-underwater-optical-communication-system/
[24] sa photonics neptune™ fso. online [accessed 01 september 2021] https://www.saphotonics.com/communications-sensing/optical-communications/
[25] shimadzu, mc100 underwater optical wireless communication modem. online [accessed 01 september 2021] https://www.shimadzu.com/news/g16mjizzgbhz3--y.html
[26] h. kaushal, g. kaddoum, underwater optical wireless communication, ieee access 4 (2016), pp. 1518–1547. doi: 10.1109/access.2016.2552538
[27] hamamatsu, underwater optical communications. online [accessed 01 september 2021] https://www.hamamatsu.com/eu/en/applications/underwater-optical-communication/index.html
[28] introduction to dynamic positioning (dp) systems. online [accessed 01 september 2021] https://safety4sea.com/wp-content/uploads/2019/12/uscg-introduction-to-dynamic-position-systems-2019_12.pdf
[29] i. hamilton, n. patel, nuclear batteries for maritime applications, marine technology society journal 53(4) (2019), pp. 26-28. doi: 10.4031/mtsj.53.4.5
[30] h. ma, y. liu, correlation based video processing in video sensor networks, in international conference on wireless networks, communications and mobile computing, vol. 2, pp. 987-992, maui, hi, usa, 13-16 june 2005. doi: 10.1109/wirles.2005.1549547
[31] g. schirripa spagnolo, l. cozzella, f. leccese, phase correlation functions: fft vs. fht, acta imeko 8(1) (2019), pp. 87-92. doi: 10.21014/acta_imeko.v8i1.604
[32] m. darwiesh, a. f. el-sherif, h. s. ayoub, y. h. el-sharkawy, m. f. hassan, y. h. elbashar, hyperspectral laser imaging of underwater targets, j. opt. 47 (2018), 553. doi: 10.1007/s12596-018-0493-7
[33] m. kong, y. chen, r. sarwar, b. sun, z. xu, j. han, j. chen, h. qin, j. xu, underwater wireless optical communication using an arrayed transmitter/receiver and optical superimposition-based pam-4 signal, opt. express 26(3) (2018), pp. 3087-3097. doi: 10.1364/oe.26.003087
[34] s. donati, photodetectors: devices, circuits and applications, wiley-ieee press, 2nd edition, january 7, 2021.
[35] s. gundacker, a. heering, the silicon photomultiplier: fundamentals and applications of a modern solid-state photon detector, phys. med. biol. 65 (2020), 17tr01. doi: 10.1088/1361-6560/ab7b2d
[36] m. a. khalighi, h. akhouayri, s. hranilovic, silicon-photomultiplier-based underwater wireless optical communication using pulse-amplitude modulation, ieee journal of oceanic engineering 45(4) (2020), pp. 1611-1621. doi: 10.1109/joe.2019.2923501
[37] h. r. gordon, can the lambert-beer law be applied to the diffuse attenuation coefficient of ocean water?, limnology and oceanography 34(8) (1989), pp. 1389-1409. doi: 10.4319/lo.1989.34.8.1389
[38] f. campagnaro, m. calore, p. casari, v. sanjuan calzado, g. cupertino, c. moriconi, m. zorzi, measurement-based simulation of underwater optical networks, oceans 2017 aberdeen, aberdeen, 2017, pp. 1-7. doi: 10.1109/oceanse.2017.8084671
[39] c. f. bohren, d. r. huffman, absorption and scattering of light by small particles, wiley, new york, ny, usa, 1988. doi: 10.1002/9783527618156
[40] f. a. tahir, b. das, m. f. l. abdullah, m. s. m. gismalla, design and analysis of variation in chlorophyll and depth for open ocean underwater optical communication, wireless personal communications (2020), pp. 1-19. doi: 10.1007/s11277-020-07275-5
[41] s. k. sahu, p. shanmugam, a study on the effect of scattering properties of marine particles on underwater optical wireless communication channel characteristics, in oceans 2017, aberdeen, uk. doi: 10.1109/oceanse.2017.8084720
[42] n. g. jerlov, irradiance, ch. 10 in optical oceanography (elsevier, 1968), pp. 115–132. doi: 10.1016/s0422-9894(08)70929-2
[43] j. w. giles, i. n. bankman, underwater optical communications systems. part 2: basic design considerations, milcom 2005 - 2005 ieee military communications conference, atlantic city, nj, usa, 2005, pp. 1700-1705, vol. 3. doi: 10.1109/milcom.2005.1605919
[44] j. sticklus, p. a. hoeher, r. röttgers, optical underwater communication: the potential of using converted green leds in coastal waters, ieee journal of oceanic engineering 44(2) (2019), pp. 535-547. doi: 10.1109/joe.2018.2816838
[45] m. g. solonenko, c. d. mobley, inherent optical properties of jerlov water types, applied optics 54(17) (2015), pp. 5392-5401. doi: 10.1364/ao.54.005392
[46] n. g. jerlov, marine optics, elsevier oceanography series, 276, 1976. doi: 10.1016/s0422-9894(08)70795-5
[47] l. k. gkoura, g. d. roumelas, h. e. nistazakis, h. g. sandalidis, a. vavoulas, a. d. tsigopoulos, g. s. tombras, underwater optical wireless communication systems: a concise review, in: turbulence modelling approaches - current state, development prospects, applications, k. volkov (ed.), intechopen, 26 july 2017. doi: 10.5772/67915
[48] r. a. khalil, m. i. babar, n. saeed, t. jan, h. s. cho, effect of link misalignment in the optical-internet of underwater things, electronics 9(4) (2020), 646. doi: 10.3390/electronics9040646
[49] s. arnon, d. kedar, non-line-of-sight underwater optical wireless communication network, j. opt. soc. am. a 26(3) (2009), pp. 530–539. doi: 10.1364/josaa.26.000530
[50] s. arnon, underwater optical wireless communication network, optical engineering 49(1) (2010), 015001. doi: 10.1117/1.3280288
[51] t. wiener, s. karp, the role of blue/green laser systems in strategic submarine communications, ieee trans. commun. 28 (1980), pp. 1602–1607. doi: 10.1109/tcom.1980.1094858
[52] c. shen, y. guo, h. m. oubei, t. k. ng, g. liu, k. h. park, k. t. ho, m. s. alouini, b. s. ooi, 20-meter underwater wireless optical communication link with 1.5 gbps data rate, opt. express 24 (2016), pp. 25502–25509. doi: 10.1364/oe.24.025502
[53] t. wu, y. chi, h. wang, c. tsai, g. lin, blue laser diode enables underwater communication at 12.4 gbps, sci. rep. 7 (2017), 40480. doi: 10.1038/srep40480
[54] p. tian, x. liu, s. yi, y. huang, s. zhang, x. zhou, l. hu, l. zheng, r. liu, high-speed underwater optical wireless communication using a blue gan-based micro-led, opt. express 25 (2017), 1193. doi: 10.1364/oe.25.001193
[55] j. sticklus, p. a. hoeher, r. röttgers, optical underwater communication: the potential of using converted green leds in coastal waters, ieee j. ocean. eng. 44 (2018), pp. 535–547. doi: 10.1109/joe.2018.2816838
[56] l. grobe, a. paraskevopoulos, j. hilt, d. schulz, f. lassak, f. hartlieb, c. kottke, v. jungnickel, k. d. langer, high-speed visible light communication systems, ieee commun. mag. 51 (2013), pp. 60–66. doi: 10.1109/mcom.2013.6685758
[57] k. suzuki, k. asahi, a. watanabe, basic study on receiving light signal by led for bidirectional visible light communications, electron. commun. jpn. 98 (2015), pp. 1–9. doi: 10.1002/ecj.11608
[58] c. gabriel, m. a. khalighi, s. bourennane, p. léon, v. rigaud, investigation of suitable modulation techniques for underwater wireless optical communication, in proceedings of the international workshop on optical wireless communications, pisa, italy, 22 october 2012, pp. 1–3. doi: 10.1109/iwow.2012.6349691
[59] h. m. oubei, c. shen, a. kammoun, e. zedini, k. h. park, x. sun, g. liu, c. h. kang, t. k. ng, n. s. alouini, light based underwater wireless communications, jpn. j. appl. phys. 57 (2018), 08pa06. doi: 10.7567/jjap.57.08pa06
[60] j. xu, m. kong, a. lin, y. song, x. yu, f. qu, n. deng, ofdm-based broadband underwater wireless optical communication system using a compact blue led, opt. comm. 369 (2016), pp. 100–105. doi: 10.1016/j.optcom.2016.02.044
[61] c. lu, j. wang, s. li, z. xu, 60m/2.5gbps underwater optical wireless communication with nrz-ook modulation and digital nonlinear equalization, in proceedings of the conference on lasers and electro-optics (cleo), san jose, ca, usa, 5–10 may 2019, pp. 1–2. doi: 10.1364/cleo_si.2019.sm2g.6
[62] 802.15.7-2011 - ieee standard for local and metropolitan area networks - part 15.7: short-range wireless optical communication using visible light, ieee, 2011. doi: 10.1109/ieeestd.2011.6016195
[63] n. suzuki, h. miura, k. matsuda, r. matsumoto, k. motoshima, 100 gb/s to 1 tb/s based coherent passive optical network technology, j. lightwave technol. 36 (2018), pp. 1485–1491. doi: 10.1109/jlt.2017.2785341
[64] h. ma, l. lampe, s. hranilovic, integration of indoor visible light and power line communication systems, in proceedings of the ieee 17th international symposium on power line communications and its applications, johannesburg, south africa, 24–27 march 2013, pp. 291–296. doi: 10.1109/isplc.2013.6525866
[65] s. dimitrov, h. haas, information rate of ofdm-based optical wireless communication systems with nonlinear distortion, j. lightwave technol. 31 (2012), pp. 918–929. doi: 10.1109/jlt.2012.2236642
[66] m. a. khalighi, m. uysal, survey on free space optical communication: a communication theory perspective, ieee commun. surv. tutor. 16 (2014), pp. 2231–2258. doi: 10.1109/comst.2014.2329501
[67] g. napoli, j. v. m. avilés, r. m. prades, p. j. s. valero, survey and preliminary results on the design of a visual light communication system for radioactive and underwater scenarios, in proceedings of the 17th international conference on informatics in control, automation and robotics (icinco 2020), pp. 529-536. doi: 10.5220/0009889805290536
[68] s. jaruwatanadilok, underwater wireless optical communication channel modeling and performance evaluation using vector radiative transfer theory, ieee journal on selected areas in communications 26(9) (2008), pp. 1620–1627. doi: 10.1109/jsac.2008.081202
[69] f. akhoundi, j. a. salehi, a. tashakori, cellular underwater wireless optical cdma network: performance analysis and implementation concepts, ieee transactions on communications 63(3) (2015), pp. 882–891. doi: 10.1109/tcomm.2015.2400441
[70] z. wang, y. dong, x. zhang, s. tang, adaptive modulation schemes for underwater wireless optical communication systems, wuwnet '12: proceedings of the seventh acm international conference on underwater networks and systems, november 2012, article no. 40, pp. 1-2. doi: 10.1145/2398936.2398985
[71] x. he, j. yan, study on performance of m-ary ppm underwater optical communication systems using vector radiative transfer theory, isape2012, 2012, pp. 566-570. doi: 10.1109/isape.2012.6408834
[72] s. tang, y. dong, x. zhang, receiver design for underwater wireless optical communication link based on apd, 7th international conference on communications and networking in china, 2012, pp. 301-305. doi: 10.1109/chinacom.2012.6417495
[73] p. swathi, s. prince, designing issues in design of underwater wireless optical communication system, 2014 international conference on communication and signal processing, 2014, pp. 1440-1445. doi: 10.1109/iccsp.2014.6950087
[74] m. chen, s. zhou, t. li, the implementation of ppm in underwater laser communication system, 2006 international conference on communications, circuits and systems, 2006, pp. 1901-1903. doi: 10.1109/icccas.2006.285044
[75] s. zhu, x. chen, x. liu, g. zhang, p. tian, recent progress in and perspectives of underwater wireless optical communication, progress in quantum electronics 73 (2020), 100274. doi: 10.1016/j.pquantelec.2020.100274
[76] x. mi, y. dong, polarized digital pulse interval modulation for underwater wireless optical communications, oceans 2016 shanghai, 2016, pp. 1-4. doi: 10.1109/oceansap.2016.7485450
dong, polarized digital pulse interval modulation for underwater wireless optical communications, oceans 2016 shanghai, 2016, pp. 1-4. doi: 10.1109/oceansap.2016.7485450 [77] m. doniec, d. rus, bidirectional optical communication with aquaoptical ii, 2010 ieee international conference on communication systems, 2010, pp. 390-394. doi: 10.1109/iccs.2010.5686513 [78] w. c. cox, j. a. simpson, j. f. muth, underwater optical communication using software defined radio over led and laser based links, 2011 milcom 2011 military communications conference, baltimore, md, usa, 2011, pp. 2057-2062. doi: 10.1109/milcom.2011.612762 [79] m. sui, x. yu, f. zhang, the evaluation of modulation techniques for underwater wireless optical communications, 2009 international conference on communication software and networks, 2009, pp. 138-142. doi: 10.1109/iccsn.2009.97 [80] b. cochenour, l. mullen, a. laux, phase coherent digital communications for wireless optical links in turbid underwater environments, oceans 2007, 2007, pp. 1-5. doi: 10.1109/oceans.2007.4449173 [81] y. zhao, a. wang, l. zhu, w. lv, j. xu, s. li, j. wang, performance evaluation of underwater optical communications using spatial modes subjected to bubbles and obstructions, optics letters 42(22) (2017), pp. 4699-4702. doi: 10.1364/ol.42.004699 [82] n. saeed, a. celik, t. y. al-naffouri, m. s. alouini, underwater optical wireless communications, networking, and localization: a survey, ad hoc netw., 94, 2019, 101935. doi: 10.1016/j.adhoc.2019.101935 [83] w. c. cox, b. l. hughes, j. f. muth, a polarization shift-keying system for underwater optical communications, oceans 2009, biloxi, ms, usa, 2009, pp. 1-4. doi: 10.23919/oceans.2009.5422258 [84] x. zhang, y. dong, s. tang, polarization differential pulse position modulation. in proceedings of uwnet '12: seventh acm international conference on underwater networks and systems november 2012 article no.: 41, pp. 1–2 doi: 10.1145/2398936.2398986 [85] g. cossu, r. corsini, a. m. khalid, s. balestrino, a. coppelli, a. caiti, e. ciaramella, experimental demonstration of high speed underwater visible light communications, 2013 2nd international workshop on optical wireless communications (iwow), 2013, pp. 11-15. doi: 10.1109/iwow.2013.6777767 [86] g. cossu, a. sturniolo, a. messa, d. scaradozzi, e. ciaramella, full-fledged 10base-t ethernet underwater optical wireless communication system, ieee journal on selected areas in communications, 36, no. 1, pp. 194-202, 2018. doi: 10.1109/jsac.2017.2774702 [87] g. schirripa spagnolo, d. papalillo, c. malta, s. vinzani, led railway signal vs full compliance with colorimetric specification, int. j. transp. dev. integr., 1, no. 3, pp. 568–577. 2017. doi: 10.2495/tdi-v1-n3-568-577 [88] t. hamza, m. a. khalighi, s. bourennane, p. léon, j. opderbecke, investigation of solar noise impact on the performance of underwater wireless optical communication links. opt. express 24(22) (2016), pp. 25832-25845. doi: 10.1364/oe.24.025832 [89] j. sticklus, m. hieronymi, p. a. hoeher, effects and constraints of optical filtering on ambient light suppression in led-based underwater communications, sensors 18(11) (2018), art. no. 3710. doi: 10.3390/s18113710 [90] t. j. petzold, volume scattering functions for selected ocean waters (no. sio-ref-72-78), scripps institution of oceanography, la jolla ca visibility lab, 1972. online [accessed 1 september 2021] https://apps.dtic.mil/dtic/tr/fulltext/u2/753474.pdf [91] e. petritoli, f. leccese, m. 
cagnetti, high accuracy buoyancy for underwater gliders: the uncertainty in the depth control, sensors 19(8) (2019), art. no. 1831. doi: 10.3390/s19081831 [92] e. petritoli, f. leccese, high accuracy attitude and navigation system for an autonomous underwater vehicle (auv), acta imeko 7 (2018) 2, pp. 3-9. doi: 10.1109/10.21014/acta_imeko.v7i2.535 [93] f. leccese, m. cagnetti, s. giarnetti, e. petritoli, i. luisetto, s. tuti, r. durovic-pejcev, t. dordevic, a. tomašević, v. bursić, v. arenella, p. gabriele, a. pecora, l. maiolo, e. de francesco, g. schirripa spagnolo, r. quadarella, l. bozzi, c. formisano, a simple takagi-sugeno fuzzy modelling case study for an underwater glider control system, 2018 ieee international workshop on metrology for the sea; learning to measure sea health parameters (metrosea), bari, italy, 2018, pp. 262-267. doi: 10.1109/metrosea.2018.8657877 [94] m. tabacchiera, s. betti, s. persia, underwater optical communications for swarm unmanned vehicle network, 2014 fotonica aeit italian conference on photonics technologies, naples, italy, 2014, pp. 1-3. doi: 10.1109/fotonica.2014.6843839 [95] c. lodovisi, p. loreti, l. bracciale, s. betti, performance analysis of hybrid optical–acoustic auv swarms for marine monitoring, future internet, 10, no. 7, 2018, 65. doi: 10.3390/fi10070065 [96] f. zappa, s. tisa, a. tosi, s. cova, principles and features of single-photon avalanche diode arrays, sens. actuators a phys., 140, 2007, pp. 103–112. doi: 10.1016/j.sna.2007.06.021 [97] j. kirdoda, d.c.s. dumas, k. kuzmenko, p. vines, z.m. greener, r.w. millar, m.m. mirza, g.s.; buller, d.j. paul, geiger mode ge-on-si single-photon avalanche diode detectors, in proceedings of the 2019 ieee 16th international conference on group iv photonics (gfp), singapore, 28–30 august 2019. doi: 10.1109/group4.2019.8853918 [98] s. donati, t. tambosso, single-photon detectors: from traditional pmt to solid-state spad-based technology, ieee j. sel. top. quantum electron. 20(2014), pp. 204–211. doi: 10.1109/jstqe.2014.2350836 [99] t. shafique, o. amin, m. abdallah, i. s. ansari, m. s. alouini, k. qaraqe, performance analysis of single-photon avalanche diode underwater vlc system using arq, ieee photonics j. 9 (2017), https://doi.org/10.1145/2398936.2398985 https://doi.org/10.1109/isape.2012.6408834 https://doi.org/10.1109/chinacom.2012.6417495 https://doi.org/10.1109/iccsp.2014.6950087 https://doi.org/10.1109/icccas.2006.285044 https://doi.org/10.1016/j.pquantelec.2020.100274 https://doi.org/10.1109/oceansap.2016.7485450 https://doi.org/10.1109/iccs.2010.5686513 https://doi.org/10.1109/milcom.2011.6127621 https://doi.org/10.1109/iccsn.2009.97 https://doi.org/10.1109/oceans.2007.4449173 https://doi.org/10.1364/ol.42.004699 https://doi.org/10.1016/j.adhoc.2019.101935 https://doi.org/10.23919/oceans.2009.5422258 https://doi.org/10.1145/2398936.2398986 https://doi.org/10.1109/iwow.2013.6777767 https://doi.org/10.1109/jsac.2017.2774702 https://doi.org/10.2495/tdi-v1-n3-568-577 https://doi.org/10.1364/oe.24.025832 https://doi.org/10.3390/s18113710 https://apps.dtic.mil/dtic/tr/fulltext/u2/753474.pdf https://doi.org/10.3390/s19081831 https://doi.org/10.1109/10.21014/acta_imeko.v7i2.535 https://doi.org/10.1109/metrosea.2018.8657877 https://doi.org/10.1109/fotonica.2014.6843839 https://doi.org/10.3390/fi10070065 https://doi.org/10.1016/j.sna.2007.06.021 https://doi.org/10.1109/group4.2019.8853918 https://doi.org/10.1109/jstqe.2014.2350836 acta imeko | www.imeko.org december 2021 | volume 10 | number 4 | 35 pp. 
1–11. doi: 10.1109/jphot.2017.2743007 [100] r. hadfield, single-photon detectors for optical quantum information applications, nat. photon 3 (2009), pp. 696–705. doi: 10.1038/nphoton.2009.230 [101] d. chitnis, s. collins, a spad-based photon detecting system for optical communications, j. lightwave technol., 32, 2014, pp. 2028–2034. doi: 10.1109/jlt.2014.2316972 [102] e. sarbazi, m. safari, h. haas, statistical modeling of singlephoton avalanche diode receivers for optical wireless communications, ieee trans. commun., 66, 2018, pp. 4043– 4058. doi: 10.1109/tcomm.2018.2822815 [103] m. a. khalighi, t. hamza, s. bourennane, p. léon, j. opderbecke, underwater wireless optical communications using silicon photo-multipliers, ieee photonics j., 9, 2017, pp. 1–10. doi: 10.1109/jphot.2017.2726565 [104] z. hong, q. yan, z. li, t. zhan, y. wang, photon-counting underwater optical wireless communication for reliable video transmission using joint source-channel coding based on distributed compressive sensing, sensors, 19, 2019, 1042. doi: 10.3390/s19051042 [105] s. pan, l. wang, w. wang, s. zhao, an effective way for simulating oceanic turbulence channel on the beam carrying orbital angular momentum, sci. rep. (9) (2019), pp. 1–8. doi: 10.1038/s41598-019-50465-w [106] m. sait, x. sun, o. alkhazragi, n. alfaraj, m. kong, t.k. ng, b.s. ooi, the effect of turbulence on nlos underwater wireless optical communication channels, chinese opt. lett. 17 (2019), art. no. 100013. doi: 10.3788/col201917.100013 [107] l. zhang, d. chitnis, h. chun, s. rajbhandari, g. faulkner, d. o'brien, s. collins, a comparison of apd-and spad-based receivers for visible light communications, j. lightwave technol. 36 (2018), pp. 2435–2442. doi: 10.1109/jlt.2018.2811180 [108] c. wang, h. y. yu, y. j. zhu, t. wang, y. w. ji, multi-led parallel transmission for long distance underwater vlc system with one spad receiver, opt. commun., 410, 2018, pp.889–895. doi: 10.1016/j.optcom.2017.11.069 [109] n. e. farr, c. t. pontbriand, j. d. ware, l.-p. a. pelletier, nonvisible light underwater optical communications, ieee third underwater communications and networking conference (ucomms), lerici, italy, 2016, pp. 1-4. doi: 10.1109/ucomms.2016.7583454 [110] j. marshall, vision and lack of vision in the ocean, current biology, volume 27(11), 2017, pp. r494-r502. doi: 10.1016/j.cub.2017.03.012 [111] xiaobin sun, wenqi cai, omar alkhazragi, ee-ning ooi, hongsen he, anas chaaban, chao shen, hassan makine oubei, mohammed zahed mustafa khan, tien khee ng, mohamed-slim alouini, boon s. ooi, 375-nm ultraviolet-laser based non-line-ofsight underwater optical communication, optics express, 26(10) (2018), pp. 12870-12877. doi: 10.1364/oe.26.012870 [112] xiaobin sun, meiwei kong, omar alkhazragi, chao shen, eening ooi, xinyu zhang, ulrich buttner, tien khee ng, boon s. ooi, non-line-of-sight methodology for high-speed wireless optical communication in highly turbid water, optics communications, 461, 2020, 125264. doi: 10.1016/j.optcom.2020.125264 [113] g. schirripa spagnolo, f. leccese, led rail signals: full hardware realization of apparatus with independent intensity by temperature changes, electronics 10 (2021), art. no. 1291. doi: 10.3390/electronics10111291 [114] r. filippo, e. taralli, m. rajteri, leds: sources and intrinsically bandwidth-limited detectors, sensors, 17, 2017, 1673. doi: 10.3390/s17071673 [115] g. schirripa spagnolo, f. leccese, m. 
fault compensation effect in fault detection and isolation

acta imeko issn: 2221-870x september 2021, volume 10, number 3, 45–53

michał bartyś1

1 warsaw university of technology, institute of automatic control and robotics, boboli 8, 02-525 warsaw, poland

section: research paper

keywords: fault compensation effect; fault masking; fault isolation; diagnostics of processes; fault distinguishability

citation: michał bartyś, fault compensation effect in fault detection and isolation, acta imeko, vol. 10, no. 3, article 9, september 2021, identifier: imeko-acta-10 (2021)-03-09

editor: lorenzo ciani, university of florence, italy

received january 25, 2021; in final form august 29, 2021; published september 2021

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

funding: this work was funded by the pob research centre for artificial intelligence and robotics of warsaw university of technology within the excellence initiative program: research university (id-ub).

corresponding author: michał bartyś, e-mail: michal.bartys@pw.edu.pl

abstract

this paper discusses the origin and problem of the fault compensation effect. the fault compensation effect is an underrated, common side effect of the fault isolation approaches developed within the fault detection and isolation (fdi) community. in part, this is justified by the relatively low probability of such an effect. on the other hand, there is a common belief that the inability to isolate faults due to this effect is an evident drawback of model-based diagnostics. this paper shows how, and under which conditions, the fault compensation effect can be identified. in this connection, the necessary and sufficient conditions for the fault compensation effect are formulated and exemplified by diagnosing a single buffer tank system in open- and closed-loop arrangements. in this regard, we also show the drawbacks of bi-valued residual evaluation for fault isolation and, in contrast, outline the advantages of three-valued residual evaluation. this paper also offers a series of conclusions allowing for a better understanding of the fault compensation effect. in addition, we show the difference between the fault compensation and fault-masking effects.

1. introduction

the model-based diagnostics of industrial processes makes intensive use of residuals [1]-[3]. the residuals express the extent to which the measurements (observations) and outputs of a diagnosed system differ from the expected system behaviour predicted by the reference model of the system. figure 1 depicts the general block scheme exemplifying the basic workflow of the model-based fault detection and isolation (fdi) approach [1]. it generally consists of three consecutive steps: detection, isolation, and identification of faults. the main goal of fault detection is to detect the diagnosed system's abnormal behaviour, while isolation (localization) points out the faults that have potentially occurred. fault identification, in turn, allows for recognizing the size of a fault. frequently, fault identification is not of concern in industrial applications; therefore, for simplicity, this step is not shown in figure 1. to react appropriately to faults, the process operator or a fault-tolerant control system demands univocal isolation of faults. however, this is not a trivial task. the discrepancies r (residuals) between the model outputs v̂ and the process outputs v are indicative of a potential fault or faults. however, this is true under the condition that the residuals are sensitive to the faults [1], [2]. furthermore, we assume that the diagnostic system is designed so that this postulate is met.

figure 1. a block diagram of the basic workflow in the model-based fault detection and isolation (fdi) approach. notions: r – residuals, v – process outputs, v̂ – model outputs, s – diagnostic signals, f – faults.

in the fault-free (normal) state of the diagnosed system, the residuals should converge to zero. however, considering the uncertainty of measurements and the imprecision of the reference models applied, the residuals take values relatively close to zero. in practice, the residuals are discretized through constant or adaptive thresholding approaches [4]. as a result, the continuous or piecewise-continuous residuals are converted into bi- or three-valued crisp or fuzzy values referred to as diagnostic signals [5]-[7]. a set of diagnostic signal values associated with each particular fault creates its specific signature (pattern), typically taking the form of a column vector.
the structure of the signatures of all faults is referred to as the incidence matrix, the structure of residual sets, or the diagnostic matrix [1]-[3], [5], [6]. the signatures allow for distinguishing faults under the condition that the signatures of all faults are unique. in general, this condition is not satisfied [5]. the main reason is that the number of measurements is lower than the total number of possible faults, including instrumentation and system component faults [6]. therefore, we should accept the fact that some faults will remain undetectable or indistinguishable. we consider this feature a severe drawback of the model-based fdi approaches. to at least partly overcome this problem, many approaches were developed that allow for increasing fault distinguishability. however, it was proven in [5] that, in general, this task is unsolvable. with regard to functional safety [8], [9], a tolerable risk is defined. according to a commonly recognized definition [9], tolerable risk is "a level of risk deemed acceptable by society in order that some particular benefit or functionality can be obtained." by analogy, we can claim that employing model-based diagnosis makes sense if the risk of undetectable dangerous or safe faults is deemed acceptable. however, this involves presuming some simplifications and adopting some assumptions. for example, the assumption regarding the infallibility of measurement devices or the credibility of observations is frequently adopted [10]. this is the case in both branches of model-based diagnostics, developed by the fdi and dx research communities [11]-[13]. later in this paper, we assume the infallibility of measurement instruments. this may be justified, particularly for the diagnosis of industrial systems, by the employment of high-reliability instruments exhibiting at least the sil1 safety integrity level. the rationality of this assumption is reinforced by the statistics of failures of industrial equipment [14]. the aforementioned explanations justify, to some extent, the assumption commonly adopted in fdi regarding the infallibility of instruments. the assumptions regarding the uncertainties of residuals, diagnostic signals, and models are also discussed intensively in the context of fdi [2], [5], [7], [15]. the uncertainty of measurements is a fundamental problem of metrology; it has been discussed in a series of publications for many years, e.g., in [16]-[18]. in the model-based diagnostics of processes, we have at least five different sources of uncertainty, connected with measurements, models, residuals, residual evaluation, and the fault–diagnostic signal relation. in fact, the problem of uncertainty is common to metrology and diagnostics. in diagnostics, the measurements are intensively used for residual generation (figure 1); therefore, the uncertainty of the measurements impacts the uncertainty of the residuals. on the other hand, the residuals' uncertainty also depends on the uncertainty of the reference model of the diagnosed system. the uncertainty of the model indirectly reflects its grade of perfection. therefore, the uncertainties of measurements and models result in the uncertainty of the residuals. later on, the residuals are evaluated and take the form of so-called diagnostic signals. hence, the way residuals are evaluated contributes to the overall uncertainty of the diagnostic signals as well. finally, the diagnosis is based on inference using the fault–diagnostic signal relation, subjective logic, or expert knowledge [2].
thus, there is also uncertainty in inferring about faults [15]. it is also important to mention that the complex problem of the uncertainty of diagnosing has not yet been holistically solved. this paper deals mainly with the problem of the fault compensation effect and intends to expose some weaknesses of fdi model-based diagnosing. deliberations regarding the uncertainty of the fault compensation effect are beyond the scope of this paper; therefore, keeping in mind the paper's main objective, the uncertainty of measurements will not be considered further. several fdi methods assume and consider exclusively single faults [6]. this assumption is allowable when diagnosing relatively non-complex systems: according to occam's razor, single faults in non-complex systems are more likely than multiple ones. since the fault compensation effect is not a property of a system with single faults, we focus our attention exclusively on multiple-fault cases. in the case of diagnosing complex systems, multiple faults are more likely [20]; therefore, in these systems, occurrences of the fault compensation effect are to be expected. it should be mentioned that the problem of the fault compensation effect is poorly represented in the literature. fault compensation is an undesired and unpredictable side effect of multiple-fault isolation based on the signatures of all the single faults constituting the multiple ones. this effect appears in all fdi approaches in which multiple-fault signatures are obtained as the unions of the signatures of all single faults constituting the multiple ones [2], [6]. the union of bi-valued signatures is defined as a boolean alternative of the signatures of all faults creating the multiple ones; for three-valued signatures, the union of single-fault signatures is slightly more complex [20]. developing multiple-fault signatures based on single ones has some practical background: as long as phenomenological models of the diagnosed system are not available, multiple-fault signatures are not easy to obtain from process data or from process operators' expertise. moreover, some multiple faults have frequently never been registered, or ought not to appear for process safety reasons, e.g., in nuclear power stations. however, for clarity, this paper will use an analytical phenomenological model to explain the fault compensation effect.
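to make the union-based construction of multiple-fault signatures concrete, the sketch below encodes signature entries as sets of values from {-1, 0, +1}. the combination rule (a non-zero entry dominates a zero one; opposite signs additionally admit 0, i.e., possible cancellation) is our reading of the construction described in [20] and of the diagnostic matrices discussed later in this paper (tables 4 and 6); the encoding itself is an assumption made for illustration only.

```python
def union_entry(*values):
    """combine single-fault entries of one diagnostic signal.

    a non-zero entry dominates a zero one; opposite signs may cancel,
    so the resulting entry also admits 0 (fault compensation)."""
    nonzero = {v for v in values if v != 0}
    if not nonzero:
        return {0}
    if {-1, +1} <= nonzero:
        return {-1, 0, +1}
    return nonzero

def multiple_fault_signature(*signatures):
    """entry-wise union of single-fault signatures (columns of the matrix)."""
    return [union_entry(*entries) for entries in zip(*signatures)]

# signatures of f1 and f2 for the single-residual case (cf. table 4):
f1, f2 = [-1], [+1]
print(multiple_fault_signature(f1, f2))   # entry {-1, 0, +1}: compensation possible

# and for the two-residual case (cf. table 6):
f1, f2 = [-1, 0], [0, +1]
print(multiple_fault_signature(f1, f2))   # [{-1}, {+1}]: double fault distinguishable
```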
in other words, then the faults that occurred cannot be detected nor isolated based on residuals. therefore, in this case, the fdi completely fails and, depending on conditions, the multiple faults cannot be temporarily or permanently isolated. as far as the following conditions hold: • residuals are sensitive to at least two single faults, • multiple fault signatures are unions of all single faults constituting multiple ones, • different single faults act on residuals in opposite directions, then the fault compensation effect may occur. it is to recognize that those conditions are easily satisfied in the majority of industrial fdi approaches. therefore, we can conclude that fault compensation seems to become a significant practical problem. this statement implies inspiring motivation for a deep-in discussion of this effect in this paper. in conclusion, the fdi approaches to isolating multiple faults should be criticized as they may lead to misdiagnosis in a case of a fault compensation effect. the paper contributes both to the theory and practice. the primary outcome of the paper is the novel formulation of necessary and sufficient conditions for the fault compensation effect and, in turn, formulation of the sufficient condition for excluding this effect. the defined conditions contribute to the fdi theory and practice by proposing a method for seeking potential fault compensation effects by design or analyzing a diagnostic system. we also formulate a set of recommendations that have some practical meaning. they contribute and extend the set of good practices applicable to the design of diagnostic systems. the remainder of this paper is structured as follows. section 2 describes a nominal model of a single tank system, which we intensively exploit in this paper. section 3 presents an approach to fault detection and isolation based on an analytical description of the residuals in the inner form. section 4 illustrates the fault isolation based on biand three-valued residuals. section 5 discusses chosen results of the simulation, while section 6 outlines the problem of the fault-masking effect. finally, section 7 summarizes the achieved results. 2. the nominal model of the system the fault compensation problem will be explained based on the example of the model-based diagnosing workflow of a simple open-loop control system shown in figure 2. let the diagnostic problem rely on isolating two single faults: leakage in the tank (fault f1) and obliteration of the outlet pipe (fault f2) as well as one double fault {f1 ∧ f2}. the double fault represents the faulty state where the leakage and obliteration take place at the same time. for simplicity, we assume that used instruments are infallible. firstly, according to the schematic shown in figure 1, we develop the process's nominal (reference) model in a faultfree state. for this reason, we propose the phenomenological model of the process. this model will be exploited further for the closed-loop control system too. many other models are imaginable in this stage, including these, based on heuristic knowledge, fuzzy sets theory, fuzzy neural networks, and neural networks [1]-[3], [5], [7]. next, we assume the availability of measurements shown in table 1, except optional flow rate f1. in a fault-free state, for incompressible and inviscid liquid, the fluid accumulation in the tank is equal to the difference of inflow and outflow volumes. 
hence, the liquid volumetric inflow rate $F_0$ is equal to

$F_0 = A \dfrac{\mathrm{d}L}{\mathrm{d}t} + \alpha S \sqrt{2 g L}$ ,  (1)

where $A$ is the cross-sectional area of the tank, $L$ is the liquid level in the tank relative to the outlet pipe axis, $\alpha$ is the outflow contraction coefficient, $S$ is the nominal cross-sectional area of the outlet pipe, and $g$ is the gravitational acceleration. eq. (1) will be further referred to as the nominal model of the process.

3. fault detection

generally, fault detection should indicate whether a fault or faults occurred or not. we assume that a discrepancy between the nominal model outputs and the process outputs will occur in a faulty state. however, this is true under two essential conditions:
• the residuals are sensitive to the faults which occurred;
• the fault compensation effect does not take place.
the paper's objective is principally concerned with the second condition. to obtain the residuals, we assume the three faults listed in table 2. next, we develop the model of the diagnosed system in the so-called inner form [6], i.e., in a way which reflects the impacts of the faults:

$F_0^{f} = A \dfrac{\mathrm{d}L}{\mathrm{d}t} + \alpha S \sqrt{2 g L} + f_1 \alpha_l S \sqrt{2 g (L - L_l)} - f_2 \alpha S \sqrt{2 g L}$ ,  (2)

where $F_0^{f}$ is the tank inflow rate in a faulty state; $\alpha_l$ is the leakage outflow contraction coefficient; $L_l$ is the distance from the centre of the area of the leakage orifice to the axis of the outlet pipe; $f_1 = S_l / S$; $f_2 = 1 - S_o / S$; $S_l$ is the cross-sectional area of the leakage orifice; $S_o$ is the cross-sectional area of the outlet pipe. since the residual is $r = F_0 - F_0^{f}$, from (1) and (2) we obtain

$r = -f_1 \alpha_l S \sqrt{2 g (L - L_l)} + f_2 \alpha S \sqrt{2 g L}$ .  (3)

figure 2. the schematic of the process considered for diagnosing.

therefore, the residual r equals zero if the model and process outputs are identical. however, this cannot be interpreted unambiguously as a fault-free state of the system, since the residual r may also take a zero value in a faulty state due to the impact of faults on the residuals. nevertheless, this effect occurs exclusively for multiple faults. from (3), we can easily derive a simple condition for the fault compensation effect:

$\dfrac{f_1}{f_2} = \dfrac{\alpha}{\alpha_l} \sqrt{\dfrac{L}{L - L_l}}$ .  (4)

the probability of the fault compensation effect is relatively low. however, this effect is a source of false-negative fault isolation and should therefore be avoided as far as possible; this paper shows how this may be achieved. the following observation is helpful here: the effect of fault compensation does not occur for single faults, nor for those multiple faults for which all residuals are affected unidirectionally, i.e., with the same sign. from this observation, we can draw some practical conclusions.

conclusion 1. the design of diagnostic systems in which residuals are sensitive exclusively to single faults is strongly recommended for fdi, because it avoids fault compensation effects.

conclusion 2. consideration of residual signs may help to increase the achievable fault distinguishability while bringing additional useful knowledge for diagnostics.

conclusion 1 corresponds well with the idea of developing a family of intelligent single-fault detectors [24] and with the concept of the diagonal structure of residual sets proposed in [6]. however, it sounds slightly unrealistic in today's world. therefore, the question arises of how we can avoid fault compensation effects if they are typical even for elementary processes such as that shown in figure 2. there is no good general answer to this question; nonetheless, we can consider some productive actions.
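to see the compensation condition (4) at work, the following minimal numerical sketch (in python; all parameter values are arbitrary assumptions, not taken from the paper) evaluates the residual (3) for a fault pair chosen according to (4):

```python
import numpy as np

# illustrative parameters (arbitrary assumed values)
A, S, alpha, alpha_l, g = 1.0, 1e-3, 0.6, 0.6, 9.81
L_l = 0.2            # height of the leakage orifice above the outlet axis, m

def residual(L, f1, f2):
    """residual (3): zero in the fault-free state, but also whenever
    leakage (f1) and obliteration (f2) compensate each other."""
    return (-f1 * alpha_l * S * np.sqrt(2 * g * (L - L_l))
            + f2 * alpha * S * np.sqrt(2 * g * L))

def compensating_f1(L, f2):
    """f1 that satisfies the compensation condition (4) for a given f2."""
    return f2 * (alpha / alpha_l) * np.sqrt(L / (L - L_l))

L = 1.0              # current liquid level, m
f2 = 0.05            # 5 % obliteration
f1 = compensating_f1(L, f2)
print(f"f1 = {f1:.4f}, r = {residual(L, f1, f2):.2e}")  # r is (numerically) zero
```

despite both faults being present, the residual vanishes, which is exactly the false-negative situation discussed above.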
according to conclusion 1, an excellent solution seems to be to have a number of nominal models equal to the number of single faults, such that each model refers exclusively to one fault. let us now consider the same system as in figure 2; the only difference is that we will now use the additional flow-rate instrument, i.e., f1. now, the nominal partial models of the process are

$F_0 = A \dfrac{\mathrm{d}L}{\mathrm{d}t} + F_1$ , $\quad F_1 = \alpha S \sqrt{2 g L}$ ,  (5)

and the models in the inner form, reflecting the impact of the faults, are

$F_0^{f} = F_0 + f_1 \alpha_l S \sqrt{2 g (L - L_l)}$ , $\quad F_1^{f} = F_1 - f_2 \alpha S \sqrt{2 g L}$ .  (6)

from (6), we obtain the residuals

$r_1 = -f_1 \alpha_l S \sqrt{2 g (L - L_l)}$ , $\quad r_2 = +f_2 \alpha S \sqrt{2 g L}$ .  (7)

as can easily be seen from (7), each residual is sensitive exclusively to a single fault. this promises to avoid the fault compensation effect, at the expense of an additional flow-rate measurement instrument. in this case, the double fault is easily recognizable (isolable), because both residuals ($r_1$ and $r_2$) adopt non-zero values with opposite signs. (a numerical sketch of this structured-residual scheme is given after the tables below.) from the above considerations, it follows that:

conclusion 3. there is a trade-off between the quality of diagnoses and the number of sensors (instruments) applied in real-world systems.

it should be mentioned that, given the limited availability of sensors, solving the sensor placement problem would help to maximize fault distinguishability and minimize fault compensation effects [25], [26].

4. fault isolation

the primary goal of fault isolation is to indicate the faults that occurred in the process. this diagnosing step is frequently called fault location or simply diagnosing. diagnosing requires knowledge of the relation between the faults and the diagnostic signals. we can express this relation in analytical form, for example as in (3) and (7). if the analytical relations are unknown, process graphs (gp) [27] can be helpful. a process graph is a directed bipartite graph used in workflow modelling. in the considered case, the vertices of the gp graph represent disjoint sets of process states and faults. the graph's edges link the faults with the states, and the states with one another, thus reflecting the process flow. the gp graph is handy for analyzing the qualitative impact of faults on process states. the process states are represented by physical or virtual quantities; in particular, they may be represented by measurements. figure 3 depicts the gp graph developed for the single-tank open-loop control system shown in figure 2. this graph reflects equation (3) and refers to the situation where the flow rates f0 and f1 are not available. from this graph, it can be seen that both single faults act in opposite directions on the liquid level; therefore, both faults may mutually compensate their impacts. based on this statement, we formulate two practical conclusions.

conclusion 4. the possibility of the occurrence of the fault compensation effect is immediately detectable from the directed graph of the process.

conclusion 5. a necessary condition for fault compensation is that at least one vertex in the gp graph is linked directly with fault vertices by edges labelled with opposite signs.

the gp graph derived from equation (7) takes the shape shown in figure 4.

table 1. list of available measurements.
item | symbol | measured quantity
1 | f0 | liquid volumetric inflow rate
2 | l | liquid level
3 | f1 | liquid volumetric outflow rate (optional)

table 2. list of considered faults.
item | symbol | fault
1 | f1 | leakage from the tank
2 | f2 | obliteration of the outflow pipe
3 | f1 ∧ f2 | leakage and obliteration
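as announced above, here is a minimal sketch of the structured residual pair (7), computed from observable quantities only (python; the parameters and the synthetic "measurements" are assumptions for illustration):

```python
import numpy as np

# illustrative parameters (arbitrary assumptions, as before)
A, S, alpha, alpha_l, g, L_l = 1.0, 1e-3, 0.6, 0.6, 9.81, 0.2

def structured_residuals(F0, F1, L, dL_dt):
    """residual pair (7): r1 responds only to leakage (f1),
    r2 only to obliteration (f2)."""
    r1 = A * dL_dt + F1 - F0                     # mass-balance model in (5)
    r2 = alpha * S * np.sqrt(2 * g * L) - F1     # outflow model in (5)
    return r1, r2

# synthetic faulty-state "measurements": 5 % leakage and 10 % obliteration
L, f1, f2 = 1.0, 0.05, 0.10
Q_leak = f1 * alpha_l * S * np.sqrt(2 * g * (L - L_l))
F1 = (1 - f2) * alpha * S * np.sqrt(2 * g * L)       # throttled outflow
F0 = alpha * S * np.sqrt(2 * g * L)                  # some inflow (arbitrary)
dL_dt = (F0 - F1 - Q_leak) / A                       # level dynamics with faults
r1, r2 = structured_residuals(F0, F1, L, dL_dt)
print(f"r1 = {r1:.3e} (< 0 -> leakage), r2 = {r2:.3e} (> 0 -> obliteration)")
```

both residuals fire with opposite signs, so the double fault is exposed rather than compensated.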
in figure 4, both faults are linked with different graph vertices; therefore, the necessary condition for fault compensation is not satisfied. in this case, the bi-valued double-fault signature based on the union of the single-fault signatures is correct and allows single and double faults to be distinguished. from the above considerations follows the demand for reliable instrumentation.

conclusion 6. avoidance of the fault compensation effect places high demands on reliable measurements.

quantitative fault isolation requires the deployment of the incidence matrix [6]; the usability of the gp graph in this scope is very limited, as it provides only a cause-and-effect, qualitative description of the process. the incidence matrix reflects the relation between faults and diagnostic signals. the question is: why are diagnostic signals used instead of residuals? principally, fault isolation is a process of inference about faults that uses some logical rules. usually, boolean, łukasiewicz n-valued, or fuzzy logic is applied. therefore, it is necessary to transform the continuous residuals into discrete logical values or into a finite set of predefined fuzzy membership functions. for these goals, we use constant or adaptive discrimination thresholds [4]. in this paper, we limit our considerations exclusively to elementary, however practicable, thresholding of residuals, which introduces dead zones into the residual values. for the binary assessment of residuals, we will further apply the formula

$s = \begin{cases} 0 & \leftarrow |r| < T_h \\ 1 & \leftarrow |r| \ge T_h \end{cases}$ ,  (8)

while for the three-valued assessment of residuals, we will use

$s = \begin{cases} -1 & \leftarrow r \le -T_h \\ 0 & \leftarrow |r| < T_h \\ +1 & \leftarrow r \ge T_h \end{cases}$ ,  (9)

where $s$ is the diagnostic signal and $T_h$ is an arbitrarily chosen non-negative threshold. according to (8) and (9), the diagnostic signals are bi- or three-valued. the robustness of fault isolation can be characterized, among others, by the rates of false-positive and false-negative diagnoses: false-positive diagnoses indicate non-existing faults, while false-negative diagnoses fail to indicate existing faults. as one can infer from formulas (8) and (9), the introduction of dead zones immunizes the diagnostic signals somewhat against uncertainties and noise, however at the expense of a loss in sensitivity and an elongation of the fault isolation time. this paper will discuss both residual evaluation approaches, (8) and (9), in the context of the fault compensation effect. we will show that fault compensation may, under some conditions, be determined from the incidence matrix.
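a direct transcription of (8) and (9) into code (python; the threshold value is application-dependent):

```python
def binary_signal(r: float, Th: float) -> int:
    """bi-valued residual evaluation (8)."""
    return 0 if abs(r) < Th else 1

def trinary_signal(r: float, Th: float) -> int:
    """three-valued residual evaluation (9); the dead zone |r| < Th maps to 0."""
    if r <= -Th:
        return -1
    if r >= Th:
        return +1
    return 0

# example: a residual just outside the dead zone
print(binary_signal(-0.06, Th=0.05), trinary_signal(-0.06, Th=0.05))  # 1 -1
```

note that the trinary signal preserves the sign of the residual, which, as argued below, is exactly the information needed to recognize a possible fault compensation effect.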
first, let us refer to the incidence matrix presented in table 3. here, the entries are bi-valued as in (8); therefore, this matrix is also referred to as a binary diagnostic matrix (bdm) [6]. it contains the reference diagnostic signal values (signatures) expected upon the occurrence of a fault or faults. in table 3, the signatures of all considered faults are identical; therefore, in a faulty system state, we cannot point out which fault or faults occurred. in other words, all three faults from table 3 are indistinguishable, and hence the quality of the obtained diagnosis is unacceptable. moreover, based on table 3, we cannot verify the hypothesis regarding the fault compensation effect. this simple example leads to the following conclusion:

conclusion 7. the binary diagnostic matrix itself is useless for the recognition of a fault compensation effect.

table 3. diagnostic matrix for the binary discretized residual (2).
f/s | fault-free | f1 | f2 | f1 ∧ f2
s | 0 | 1 | 1 | 1

next, we discuss the case of the three-valued incidence matrix shown in table 4. here, the values of the reference diagnostic signals of a double fault lie in the set of all reference diagnostic signal values of the single faults constituting the multiple fault, including the fault-free state; for example, the diagnostic signal s in table 4 may take three alternative values: -1, 0, or +1. the complete procedure for synthesizing multiple-fault reference signatures based on single-fault reference signatures is described in [20].

table 4. trinary diagnostic matrix for residual (2).
f/s | fault-free | f1 | f2 | f1 ∧ f2
s | 0 | -1 | +1 | -1, 0, +1

now, we can easily distinguish the single faults f1 and f2, because the reference signatures of both faults are distinctive. however, both single faults are conditionally indistinguishable from the double fault. moreover, the double fault may not be distinguishable from the process's fault-free state by the diagnostic signal (s = 0). the fault compensation effect, if any, will manifest itself by (s = 0); therefore, the fault-free state, the double-fault state, and the fault compensation effect are still indistinguishable. however, it should be noted that, under some conditions:

conclusion 8. an incidence matrix containing three-valued reference diagnostic signals allows for indicating the possibility of a fault compensation effect.

based on conclusion 8, we now formulate a necessary and sufficient condition for the possibility of a fault compensation effect for three-valued reference diagnostic signals.

condition 1. the necessary and sufficient condition for fault compensation. the complete set of diagnostic signal values {-1, 0, +1} in at least one entry of the signature of a multiple fault is necessary and sufficient to indicate the possibility of a fault compensation effect.

in the case of bi- and three-valued reference diagnostic signals, we can formulate a sufficient condition for excluding fault compensation.

condition 2. the sufficient condition for excluding fault compensation. it is sufficient for excluding the possibility of fault compensation that the submatrix consisting exclusively of the signatures of single faults is diagonal.

however, this condition is relatively difficult to meet in practice. therefore, most diagnostic systems based on either binary or trinary evaluation of residuals are exposed to fault compensation, which degrades their diagnostic credibility; the degree of degradation is, however, much lower for three-valued incidence matrices [20]. the necessary and sufficient condition for excluding fault compensation implies:

conclusion 9. single-row incidence matrices do not allow for an unambiguous indication of the possibility of the fault compensation effect, independently of whether the diagnostic signals are bi- or three-valued.

figure 3. the gp graph reflecting the qualitative impact of faults on the values of the process variables. the circle coloured in yellow depicts the available measurement.

figure 4. the gp graph for the single-tank system reflecting the additional flow-rate measurement f1.

let us now discuss the case of a diagnostic system for which the gp graph is depicted in figure 4; conditions 1 and 2 are made operational in the sketch below.
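a minimal sketch (python; the set-based signature encoding is our assumption, as before) of how conditions 1 and 2 can be checked mechanically on an incidence matrix:

```python
FULL = {-1, 0, +1}

def compensation_possible(multi_signatures):
    """condition 1: the fault compensation effect is possible iff at least
    one entry of some multiple-fault signature admits all of {-1, 0, +1}."""
    return any(set(entry) == FULL
               for sig in multi_signatures for entry in sig)

def compensation_excluded(single_signatures):
    """condition 2 (sufficient): compensation is excluded if the submatrix of
    single-fault signatures is diagonal (each residual fires for exactly one
    fault); this check assumes as many residuals as single faults."""
    n = len(single_signatures)
    return all(
        (set(single_signatures[j][i]) != {0}) == (i == j)
        for i in range(n) for j in range(n)
    )

# table 4 (one residual): the double-fault entry {-1, 0, +1} flags compensation
print(compensation_possible([[{-1, 0, +1}]]))             # True
# table 6 (two residuals): diagonal single-fault signatures exclude it
print(compensation_excluded([[{-1}, {0}], [{0}, {+1}]]))  # True
```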
the appropriate binary and trinary diagnostic matrices are shown in table 5 and table 6, respectively.

table 5. diagnostic matrix for the binary evaluated residuals (7).
f/s | fault-free | f1 | f2 | f1 ∧ f2
s1 | 0 | 1 | 0 | 1
s2 | 0 | 0 | 1 | 1

table 6. diagnostic matrix for the trinary evaluated residuals (7).
f/s | fault-free | f1 | f2 | f1 ∧ f2
s1 | 0 | -1 | 0 | -1
s2 | 0 | 0 | +1 | +1

now, the binary diagnostic matrix allows all considered faults to be uniquely distinguished; in this case, the fault compensation effect will not occur. similarly, the three-valued diagnostic matrix depicted in table 6 allows all faults to be uniquely distinguished; here, the fault compensation effect does not take place because condition 1 does not hold. as can be seen, the submatrices of the diagnostic matrices depicted in table 5 and table 6 consisting exclusively of single-fault signatures are diagonal. therefore, all multiple-fault signatures obtained as the unions of the signatures of all single faults constituting the multiple ones are distinguishable, independently of whether they are bi- or three-valued. after this, we reinforce condition 2.

conclusion 10: the diagonal structure of the diagnostic matrix of single faults avoids the fault compensation effect.

the condition formulated in the above conclusion is, however, almost unrealistic to implement in practice. since bi-valued diagnostic signals are useless for the recognition of fault compensation effects (conclusion 7), it is strongly recommended to design three-valued incidence matrices, because they allow for the indication of possible fault compensation effects (conclusion 8). the above recommendation is postulated mainly for newly developed diagnostic systems; implementing it in running diagnostic systems is imaginable, although less realizable, because of the necessity of installing additional instrumentation. on the other hand, the intensive implementation of intelligent fault-diagnosing devices [20], combined with the implementation of embedded-diagnostics ideas [28], allows for the successive elimination of fault compensation problems from the area of interest of fdi.

5. simulations

simulations were performed to exemplify the fault compensation effect using a model of the single buffer tank depicted in figure 2. the simulation model was developed in the matlab-simulink environment; the resulting flowchart of the simulation of the liquid storing and distributing process is shown in figure 5. the model generates an output vector whose components include the liquid level, the inflow and outflow rates, the residuals, and the diagnostic signals. the liquid level depends on the inlet and outlet liquid rates, the leakage, and the obliteration of the pipe. therefore, the tank's liquid level can be determined by integrating the dynamic liquid accumulation, i.e., by integrating the difference between the flow rates of the liquid entering and leaving the tank.

figure 5. simulation diagram of the single buffer tank process.

as assumed earlier, only one potential double fault is considered in this case. simulations of two different double-fault scenarios were performed; each of them exemplifies a fault compensation effect. the first scenario was designed to show the possibility of a permanent inability to reach a precise diagnosis by reasoning based on the three-valued diagnostic matrix shown in table 4, which meets the necessary and sufficient condition for a fault compensation effect. this scenario also considers the three-valued diagnostic matrix shown in table 6, which meets the sufficient condition for excluding the fault compensation effect. in connection with the first one, the second scenario shows the differently timed processes of tightening the diagnoses, even for the same diagnostic matrices. this scenario shows that diagnostic matrices admittedly allow for searching for potential fault compensation effects, but do not directly reflect the transients of the diagnoses.

scenario 1. consider two incipient faults: leakage f1 and pipe obliteration f2, as in figure 6. the obliteration starts to grow immediately after the simulation is started; the leakage begins to grow at the time instant 0.50·10⁵ s. therefore, the double fault originates at this time instant. the slopes of both faults are selected in such a way as to show the fault compensation effect: in this case, both faults impact the residual r, bringing its value close to zero for the whole simulation period, as shown in figure 6. the liquid inflow rate f0 swings around a constant value within ±10% limits. the residuals are evaluated as three-valued signals. the diagnostic signal s, (3), and the diagnostic signals s1 and s2, (7), are determined based on a fixed, arbitrarily chosen threshold th = 5%. (a sketch reproducing this scenario numerically is given below.)
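the scenario can be reproduced roughly as follows (python; the ramp slopes, the constant level, the threshold normalisation and all parameters are our assumptions for illustration – the paper's simulink model is not reproduced here):

```python
import numpy as np

# rough re-creation of scenario 1: two incipient fault ramps whose effects on
# the single residual r (3) nearly cancel, while r1 and r2 (7) do not.
A, S, alpha, alpha_l, g, L_l = 1.0, 1e-3, 0.6, 0.6, 9.81, 0.2
L, Th = 1.0, 0.05                      # roughly constant level; 5 % threshold

t = np.linspace(0.0, 2.0e5, 2001)
f2 = 0.10 * t / 2.0e5                                 # obliteration from t = 0
f1 = np.clip(0.08 * (t - 0.5e5) / 1.5e5, 0.0, None)   # leakage from 0.5e5 s

r1 = -f1 * alpha_l * S * np.sqrt(2 * g * (L - L_l))
r2 = +f2 * alpha * S * np.sqrt(2 * g * L)
r = r1 + r2                                           # single-residual scheme (3)

ref = alpha * S * np.sqrt(2 * g * L)                  # threshold normalisation
for name, x in (("s", r), ("s1", r1), ("s2", r2)):
    sig = np.select([x <= -Th * ref, x >= Th * ref], [-1, +1], 0)
    idx = np.flatnonzero(np.diff(sig))
    print(name, "switches at t =", t[idx + 1])        # s never switches here
```

with these slopes, s1 and s2 eventually fire while s stays at 0 for the whole run, which is the compensation behaviour the scenario is meant to expose.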
discussion: table 7 summarizes the results of the simulations, while table 8 shows the obtained diagnoses. diagnosis d0 is based on the bi-valued diagnostic matrix shown in table 5; diagnosis d1 is derived from the tri-valued diagnostic matrix shown in table 4, while d2 is based on the tri-valued diagnostic matrix shown in table 6. despite the fault compensation, diagnoses d0 and d2 finally isolate the double fault correctly, however with a significant time delay; this delay would be shorter for a lower value of the threshold th. the intermediate diagnosis f2 is not correct; however, it is not false. in turn, the diagnosis d1 is ambiguous, i.e., it delivers much less useful information regarding the faults, independently of whether the fault occurs or not.

figure 6. example of a simulation of a double fault. notation: f0 – liquid inflow rate – dark blue line; l – liquid level – blue line; f1 – leakage fault – blue line; f2 – obliteration fault – red line; r1 – dotted red line; r2 – dotted blue line; r – purple line; diagnostic signals: s1 – blue; s2 – red; s – purple. interval of the fault compensation effect: 0.84 … 2.0·10⁵ s.

table 7. diagnostic signal values (scenario 1); interval of the fault compensation effect: 0.84 … 2.0·10⁵ s.
time s·10⁵ | 0.00–0.50 | 0.50–0.84 | 0.84–1.14 | 1.14–2.00
s1 | 0 | 0 | 0 | -1
s2 | 0 | 0 | +1 | +1
s | 0 | 0 | 0 | 0

table 8. obtained diagnoses (scenario 1).
time s·10⁵ | 0.00–0.50 | 0.50–0.84 | 0.84–1.14 | 1.14–2.00
d0 | ∅ | ∅ | f2 | f1 ∧ f2
d1 | ∅, f1 ∧ f2 | ∅, f1 ∧ f2 | ∅, f1 ∧ f2 | ∅, f1 ∧ f2
d2 | ∅ | ∅ | f2 | f1 ∧ f2

scenario 2. consider a case like that in scenario 1, depicted in figure 7. the only difference is that, in this case, both faults impact the residual r by shifting its value far away from zero. moreover, the different slopes of the faults cause a reversal of the time sequence of the diagnostic signals s1 and s2. this also influences the evolution of the double-fault diagnosis differently compared with scenario 1.

discussion: table 9 summarizes the diagnostic signals obtained from the simulations, while table 10 contains the obtained diagnoses. diagnoses d0 and d2 again finally isolate the double fault correctly, however with a significant time delay. the only difference with respect to scenario 1 is that the single fault f1 is detected before f2. in turn, the d1 diagnosis is slightly more valuable than d1 in the previous scenario; however, it is still far from the quality of the d0 and d2 diagnoses. the diagnosis d1 is correct here, although the pointed-out faults are indistinguishable.

figure 7. example of a simulation of a double fault. interval of the fault compensation effect: 0.82 … 1.35·10⁵ s. notations as in figure 6.

table 9. diagnostic signal values (scenario 2); interval of the fault compensation effect: 0.82 … 1.35·10⁵ s.
time s·10⁵ | 0.00–0.82 | 0.82–0.90 | 0.90–1.35 | 1.35–2.00
s1 | 0 | -1 | -1 | -1
s2 | 0 | 0 | +1 | +1
s | 0 | 0 | 0 | -1

table 10. obtained diagnoses (scenario 2).
time s·10⁵ | 0.00–0.82 | 0.82–0.90 | 0.90–1.35 | 1.35–2.00
d0 | ∅ | f1 | f1 ∧ f2 | f1 ∧ f2
d1 | ∅, f1 ∧ f2 | ∅, f1 ∧ f2 | ∅, f1 ∧ f2 | f1, f1 ∧ f2
d2 | ∅ | f1 | f1 ∧ f2 | f1 ∧ f2

analyzing the results of both scenarios, we can draw a practical conclusion.

conclusion 11: it is advantageous to design a three-valued diagnostic matrix such that the elements of the multiple-fault signatures contain as few alternative values as possible.

6. fault masking effect

as long as the process variable tracks the setpoint within some predefined limits, neither the process operator nor the alarm system has any particular reason to react. in closed-loop systems, the effects of faults are compensated for by the controller action as long as the system is controllable.
therefore, the fault-masking effect is frequently understood as an effect of the faults' invisibility to process operators or alarm systems [22], [23]; in other words, the difference between the setpoint and the process value may be neither sensitive nor indicative of faults. here, the question arises: are the model-based fault diagnostics discussed earlier for the open-loop control system still valid if we close the loop? to answer this question, we close the loop of the system shown in figure 2; the modified control system is presented in figure 8. the liquid inflow rate into the buffer tank is controlled by a control valve driven by a pi controller. the controller, employing the control valve, adjusts the liquid inflow rate into the tank, keeping the liquid level close to the setpoint value. thus, in the case of leakage, the controller increases the inflow to compensate for the additional liquid demand; in turn, in the case of obliteration, the controller throttles the liquid inflow to keep the demanded liquid level in the tank.

figure 8. a closed-loop liquid level control system. notions: sp – setpoint; cv – control value; pi – proportional-and-integral controller; av – positioner feedback signal.

the gp graph of the closed-loop control system is shown in figure 9. this graph contains additional vertices reflecting the actuator fault f3 and the position av of the control valve stem, as well as arcs representing the controller in the loop. the actuator tracks the controller output cv. for simplicity, we assume the infallibility of the pi controller. let us assume a trivial static model of the actuator: the nominal model of the actuator is therefore av = cv, and an actuator fault manifests itself in a discrepancy between the av and cv values. assuming an additive actuator fault, the model of the actuator in the inner form is

$AV = CV \pm f_3 \;\rightarrow\; r_3 = \pm f_3$ .  (10)

figure 9. the gp graph of the closed-loop system reflecting the qualitative impact of faults on the values of the process variables.

as shown in figure 9, all faults are associated with observable (measurable) vertices. hence, the diagnostic matrix of single faults takes a diagonal shape and, in consequence, all single and multiple faults are isolable. the three-valued diagnostic matrix for the modelled control system is depicted in table 11.

table 11. diagnostic matrix for the liquid level control system depicted in figure 8.
f/s | fault-free | f1 | f2 | f3 | f1 ∧ f2 | f1 ∧ f3 | f2 ∧ f3 | f1 ∧ f2 ∧ f3
s1 | 0 | -1 | 0 | 0 | -1 | -1 | 0 | -1
s2 | 0 | 0 | +1 | 0 | +1 | 0 | +1 | +1
s3 | 0 | 0 | 0 | ±1 | 0 | ±1 | ±1 | ±1

following condition 2, we can exclude the mutual compensation of fault impacts on the residual values in this case; therefore, the fault compensation effect does not take place.
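a minimal closed-loop sketch of this situation (python; the pi gains, fault ramps and all parameters are assumptions for illustration, the actuator fault f3 is omitted, and the paper's simulink model is not reproduced): the controller keeps the level near the setpoint, masking the faults, yet the structured residuals of (7) still respond.

```python
import numpy as np

A, S, alpha, alpha_l, g, L_l = 1.0, 1e-3, 0.6, 0.6, 9.81, 0.2
Kp, Ki, SP, dt = 5e-3, 1e-5, 1.0, 1.0

L = SP
integ = alpha * S * np.sqrt(2 * g * SP) / Ki      # pre-load integrator at steady state
log = []
for k in range(200_000):
    t = k * dt
    f2 = 0.10 * t / 2e5                           # incipient obliteration
    f1 = 0.08 * max(0.0, t - 0.5e5) / 1.5e5       # incipient leakage
    e = SP - L
    integ += e * dt
    F0 = max(0.0, Kp * e + Ki * integ)            # controlled inflow (av = cv assumed)
    F1 = (1 - f2) * alpha * S * np.sqrt(2 * g * L)            # measured outflow
    Q_leak = f1 * alpha_l * S * np.sqrt(2 * g * max(L - L_l, 0.0))
    dL_dt = (F0 - F1 - Q_leak) / A
    r1 = A * dL_dt + F1 - F0                      # residuals computed from
    r2 = alpha * S * np.sqrt(2 * g * L) - F1      # observable quantities, as in (7)
    log.append((L, r1, r2))
    L += dL_dt * dt

L_hist, r1, r2 = np.array(log).T
print(f"max |L - SP| = {np.abs(L_hist - SP).max():.4f} m (faults masked by the loop)")
print(f"final residuals: r1 = {r1[-1]:.2e} (leakage), r2 = {r2[-1]:.2e} (obliteration)")
```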
figure 10 depicts the result of a simulation of a triple fault, i.e., a slowly increasing obliteration f2 starting at time instant 0, a slowly increasing leakage f1 starting at time instant 0.5·10⁵ s, and an abrupt actuator fault f3 appearing at time instant 1.0·10⁵ s. the signal s3 represents the diagnostic signal of residual r3. the summary of the isolated faults is shown in table 12. as can be seen, closing the loop does not degrade the system's diagnostic properties as long as condition 2 holds.

figure 10. example of a simulation of a triple fault. notations: f3 – actuator fault – purple line; r3 – purple dotted line; s3 – diagnostic signal – purple line. the remaining notions as in figure 6.

table 12. obtained diagnoses for the closed-loop liquid level control system.
time s·10⁵ | 0.00–0.82 | 0.82–0.85 | 0.85–1.00 | 1.00–2.00
d | ∅ | f1 | f1 ∧ f2 | f1 ∧ f2 ∧ f3

7. final remarks

necessary and sufficient conditions for the fault compensation effect were defined, allowing the possibility of the appearance of this effect to be identified based on an analysis of the incidence matrix. in addition, a complementary condition for excluding the fault compensation effect was also formulated. in this connection, some practical recommendations and hints regarding the design of diagnostic systems were proposed. summing up the results of the discussion and the performed simulations, we can conclude that:
• the fault compensation effect is a common problem for model-based fdi diagnostic approaches.
• the fault compensation effect manifests itself exclusively for multiple faults.
• the fault compensation effect is an unwanted side effect originating from the assumption that the signatures of multiple faults are generated as unions of the signatures of the single faults constituting them.
• neglecting the fault compensation effect leads to false or temporarily false diagnoses.
• fault compensation problems should be considered particularly in the case of slowly developing incipient faults.
• the application of three- instead of bi-valued diagnostic signals for reasoning about faults is irrelevant with regard to the possibility of the fault compensation effect occurring.
• the fault compensation effect results from the fault reasoning method applied and should be distinguished from the fault-masking effect.

further research will focus on developing a theoretical framework encompassing the fault compensation aspects described in this paper.

references

[1] j. korbicz, j. m. kościelny, z. kowalczuk, w. cholewa, fault diagnosis. models. artificial intelligence. applications, springer, 2004, isbn: 3540407677.

[2] j. m. kościelny, process diagnostics methodology, in: j. korbicz, j. m. kościelny, z. kowalczuk, w. cholewa (eds.), fault diagnosis. models. artificial intelligence. applications, springer, 2004, isbn: 3540407677.
[3] m. blanke, m. kinnaert, j. lunze, m. staroswiecki, diagnosis and fault tolerant control, springer verlag, new york, 2015, isbn: 978-3-540-35653-0.

[4] k. patan, j. korbicz, nonlinear model predictive control of a boiler unit: a fault-tolerant control study, international journal of applied mathematics and computer science, 22(1) (2012), pp. 225-237. doi: 10.2478/v10006-012-0017-6

[5] m. bartyś, chosen issues of fault isolation, polish scientific publishers pwn, 2014, isbn: 9788301178109.

[6] j. gertler, fault detection and diagnosis in engineering systems, marcel dekker inc., new york, 1998, isbn: 0824794273.

[7] m. bartyś, generalized reasoning about faults based on the diagnostic matrix, international journal of applied mathematics and computer science, 23(2) (2013), pp. 407-417. doi: 10.2478/amcs-2013-0031

[8] d. smith, k. simpson, functional safety, taylor & francis group, london, 2004, isbn: 9780080477923.

[9] e. marszal, tolerable risk guidelines, isa transactions, 40(3) (2001), pp. 391-399. doi: 10.1016/s0019-0578(01)00011-8

[10] j. kościelny, m. bartyś, a. sztyber, diagnosing with a hybrid fuzzy–bayesian inference approach, engineering applications of artificial intelligence, 104 (2021), art. no. 104345, pp. 1-11. doi: 10.1016/j.engappai.2021.104345

[11] l. travé-massuyès, bridges between diagnosis theories from control and ai perspectives, in: intelligent systems in technical and medical diagnostics, springer, heidelberg, 230, 2014, pp. 441-452, isbn: 9783642398810.

[12] j. de kleer, j. kurien, fundamentals of model-based diagnosis, ifac proceedings volumes, 36(5) (2003), pp. 25-36. doi: 10.1016/s1474-6670(17)36467-4

[13] j. su, w. chen, model-based fault diagnosis system verification using reachability analysis, ieee transactions on systems, man, and cybernetics: systems, 49(4) (2019), pp. 742-751. doi: 10.1109/tsmc.2017.2710132

[14] c. kaidis, wind turbine reliability prediction, uppsala university, report (2003), pp. 1-72. online [accessed 27 august 2021] http://www.diva-portal.org/smash/get/diva2:707833/fulltext01.pdf

[15] j. m. kościelny, m. syfert, fuzzy diagnostic reasoning that takes into account the uncertainty of the faults-symptoms relation, international journal of applied mathematics and computer science, 16(1) (2006), pp. 27-35. online [accessed 27 august 2021] http://matwbn.icm.edu.pl/ksiazki/amc/amc16/amc1612.pdf

[16] w. navidi, statistics for engineers and scientists, mcgraw-hill education, 2014, isbn: 1259251608.

[17] i. lira, evaluating the measurement uncertainty: fundamentals and practical guidance, taylor & francis, 2002, isbn: 9780367801564.

[18] m. catelani, a. zanobini, l. ciani, uncertainty interval evaluation using the chi-square and fisher distributions in the measurement process, metrology and measurement systems, 17(2) (2010), pp. 195-204. doi: 10.2478/v10178-010-0017-5

[19] m. bartyś, diagnosing multiple faults from fdi perspective, in: z. kowalczuk, m. domżalski (eds.), advanced systems for automation and diagnostics, 2015, isbn: 9788363177003.

[20] j. m. kościelny, m. bartyś, z. łabęda-grudziak, tri-valued evaluation of residuals as a method of addressing the problem of fault compensation effect, in: j. korbicz, k. patan, m. luzar (eds.), advances in diagnostics of processes and systems, springer, 313 (2021), pp. 31-44, isbn: 9783030589646.

[21] s. jakubek, h. p. jorgl, fault-diagnosis and fault-compensation for nonlinear systems, proc. of the 2000 american control conference, chicago, il, usa, 28-30 june 2000, pp. 3198-3202. doi: 10.1109/acc.2000.879155
Applicability of multiple impulse-radar sensors for the recognition of a person's action

ACTA IMEKO, ISSN: 2221-870X, June 2023, Volume 12, Number 2, 1-7

Paweł Mazurek1, Szymon Kruszewski1

1 Warsaw University of Technology, Faculty of Electronics and Information Technology, Nowowiejska 15/19, 00-665 Warsaw, Poland

Section: Research paper

Keywords: measurement data processing; impulse-radar sensors; position estimation; action recognition; healthcare

Citation: Paweł Mazurek, Szymon Kruszewski, Applicability of multiple impulse-radar sensors for the recognition of a person's action, Acta IMEKO, vol. 12, no. 2, article 30, June 2023, identifier: IMEKO-ACTA-12 (2023)-02-30
Section editor: Eric Benoit, Université Savoie Mont Blanc, France

Received July 17, 2022; in final form May 16, 2023; published June 2023

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Corresponding author: Paweł Mazurek, e-mail: pawel.mazurek@pw.edu.pl

Abstract
The research reported in this paper is devoted to the impulse-radar technology applied to the non-intrusive monitoring of elderly persons. Specifically, the study is focused on a novel approach to the interpretation of data acquired by means of multiple impulse-radar sensors, leading to the determination of features to be used for the recognition of a monitored person's actions. The measurement data are first transformed into the three-dimensional coordinates of the monitored person; next, those coordinates are used as a basis for the determination of features characterising the movement of that person. The results of the experimentation, based on real-world data, show that multiple impulse-radar sensors may be successfully used for highly accurate recognition of actions such as walking, sitting and lying down, although this accuracy is significantly affected by the quality of the three-dimensional movement trajectories, which in turn is affected by the configuration of the impulse-radar sensors within the monitored area.

1. Introduction

It is expected that the share of the European population aged at least 65 years will reach 25 % in 2050 [1]. The problem of organised care for elderly persons is, therefore, of growing importance. This, in turn, creates a demand for various technical solutions that could be applied for the non-intrusive monitoring of elderly persons in home environments and healthcare facilities. Systems for the monitoring of elderly persons are expected to predict and detect dangerous events, such as falls and harmful long lies after falls. Falls of elderly persons are among the most frequent reasons for their admission and long-term stay in hospitals [2].

Possible solutions that could be applied for the non-intrusive monitoring of elderly persons are radar-based techniques, both narrow-band [3]-[8] and broad-band [9]-[13]. The most attractive feature of these techniques is the possibility of through-the-wall monitoring of human activity. A review of the relevant literature, including articles in scientific journals and conference papers published in the years 2019-2022, has revealed that the vast majority of researchers use radar sensors of both types for the estimation of the heart rate, the breathing rate and the position (in two dimensions), while the attempts to detect falls are based on sensors using the Doppler principle, with two exceptions:
• In [14], an attempt to detect falls on the basis of the three-dimensional movement trajectory obtained by means of three impulse-radar sensors is presented. In the reported approach, the monitored person's movement is compared with two model movements, viz. a movement with a constant speed and a movement towards the ground with an acceleration equal to the gravitational acceleration, and the classification of the movement is based on a reliability function. Unfortunately, no systematic tests of the effectiveness of the method are presented in that paper.
• In [15], the results of simulation studies focused on the impact of the number of impulse-radar sensors on the accuracy of the estimation of the three-dimensional position are presented. Unfortunately, the simulated setup is based on the unrealistic assumption that the sensors are placed in random locations within the monitored area.

In the authors' recent conference papers [16], [17], the results of studies on the applicability of multiple impulse-radar
sensors for the estimation of three-dimensional movement trajectories, which could be used for the detection of dangerous events such as falls, are presented. These results show that impulse-radar sensors may be used for accurate estimation of the three-dimensional position of a monitored person, provided that the sensors are properly located within the monitored area. In this paper, the applicability of impulse-radar sensors for the recognition of a person's action, on the basis of the estimates of the three-dimensional movement trajectories, is investigated.

The novelty of the research presented in this paper consists in an algorithmic basis for the recognition of the actions of a person in a monitoring system based on multiple impulse-radar sensors. The processing of the raw measurement data acquired by means of the impulse-radar sensors is divided into three stages: the transformation of the measurement data into the three-dimensional coordinates of the monitored person, the calculation of features characterising the three-dimensional movement trajectories, and the classification of the movement trajectories. The usability of the proposed features is assessed in an experiment based on a set of real-world data sequences representative of three activities of daily living: walking, sitting and lying down. Moreover, the influence of the configuration of the impulse-radar sensors within the monitored area on the accuracy of the action recognition is investigated.

2. Estimation of movement trajectories

The measurement data used for the experimentation were acquired by means of six X4M02 impulse-radar sensors manufactured by Novelda [18], [19]. An exemplary data frame, acquired by means of one of these sensors, is shown in Figure 1. To properly estimate a three-dimensional movement trajectory of a monitored person, the measurement data acquired by means of the impulse-radar sensors have to be subjected to processing comprising [20]: the estimation of the parameters of the impulse-radar signal, the smoothing of several one-dimensional trajectories of the distance between the monitored person and the corresponding impulse-radar sensors, and the transformation of the smoothed distance trajectories into the three-dimensional movement trajectory. In the research presented here:
• The parameters of the impulse-radar signal have been estimated by means of a method consisting in computing the correlation function of the received signal and a known template of the emitted pulse, and in estimating the coordinates of the maximum of this function [20].
• The distance trajectories have been smoothed by means of a method based on a weighted least-squares estimator, consisting in the approximation of a sequence of data by a linear combination of basis functions, with the number of these functions determined automatically [17].
• The three-dimensional movement trajectories have been obtained by means of a method consisting in solving a set of equations modelling the geometrical relationships between the three-dimensional coordinates of a person and the distances between that person and the impulse-radar sensors [17]; a numerical sketch of this step is given below.

3. Methodology of experimentation

3.1. Acquisition of measurement data

The measurement data used for the experimentation, aimed at the assessment of the accuracy of the recognition of the monitored person's actions, were acquired by means of six impulse-radar sensors located at various positions. Two configurations of the sensors have been considered (see Figure 2):
• Configuration #1, according to which the impulse-radar sensors (R1, ..., R6) were located at positions whose x-, y- and z-coordinates (in metres) were, respectively: [0.00, 1.70, 0.93], [0.00, 1.70, 1.43], [2.20, 1.70, 1.45], [2.20, 1.70, 0.95], [0.20, 4.50, 0.82], [2.00, 4.50, 0.83];
• Configuration #2, according to which the impulse-radar sensors (R1, ..., R6) were located at positions whose x-, y- and z-coordinates (in metres) were, respectively: [0.20, 4.50, 0.82], [0.60, 2.65, 2.76], [0.00, 1.70, 0.93], [2.20, 1.70, 0.95], [2.00, 4.50, 0.83], [0.60, 3.31, 2.76].

Concurrently, the person was monitored by an infrared depth sensor being a part of the Kinect v2 device (cf. [21] for the description of the methodology for the preprocessing of data from depth sensors). The radar sensors and the depth sensor were synchronised, and their data acquisition rate was set to 30 Hz.

In the experimentation, three movement scenarios were considered:
• According to the first scenario, two persons walked along three predefined trajectories: an oval-shaped trajectory, a straight-line trajectory and a sine-shaped trajectory; each person repeated the action 10 times for each trajectory.
• According to the second scenario, two persons sat on a chair located in three different places within the monitored area; each person repeated the action 10 times for each position of the chair.
• According to the third scenario, two persons lay down on a mattress, approaching it from two different sides; each person repeated the action 15 times for each side of the mattress.

Thus, the whole programme of experimentation comprised the acquisition of:
• 180 three-dimensional movement trajectories obtained on the basis of the data acquired by means of the impulse-radar sensors located according to Configuration #1;
• 180 three-dimensional movement trajectories obtained on the basis of the data acquired by means of the impulse-radar sensors located according to Configuration #2;
• 180 three-dimensional movement trajectories obtained on the basis of the data acquired by means of the depth sensor.

Figure 1. An example of raw measurement data.

3.2. Generation of features

In the experimentation, the features for classification have been determined on the basis of the sequences of the estimates of the z-coordinate of the position of the person, i.e. $\{\hat{z}_n\}$, as well as the sequences of the estimates of the velocity and the acceleration along the z-axis (i.e. the first and second derivatives of the z-coordinate, obtained by means of the forward-difference method), denoted by $\{\hat{v}_{z,n}\}$ and $\{\hat{a}_{z,n}\}$, respectively. All the features are presented in Table 1.
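As an illustration of the trajectory-estimation steps listed in Section 2 (the sketch announced above), the following NumPy code estimates an echo delay by template correlation and a three-dimensional position from sensor distances. It is not the authors' implementation: the linearised least-squares formulation, the function names and the test point are assumptions; the sensor coordinates are those of Configuration #1.

```python
import numpy as np

# Sensor positions of Configuration #1 (metres).
SENSORS = np.array([
    [0.00, 1.70, 0.93], [0.00, 1.70, 1.43], [2.20, 1.70, 1.45],
    [2.20, 1.70, 0.95], [0.20, 4.50, 0.82], [2.00, 4.50, 0.83],
])

def estimate_delay(received, template, fs):
    """Echo delay (s) from the maximum of the cross-correlation between the
    received signal and a known template of the emitted pulse; the one-way
    distance would follow as c * delay / 2."""
    corr = np.correlate(received, template, mode="full")
    lag = np.argmax(corr) - (len(template) - 1)
    return lag / fs

def position_from_distances(distances, sensors=SENSORS):
    """Least-squares 3D position from distances to the sensors. Subtracting
    the sphere equation of sensor 0 from the others linearises the problem:
    2 (s_i - s_0)^T p = d_0^2 - d_i^2 + |s_i|^2 - |s_0|^2."""
    s0, d0 = sensors[0], distances[0]
    a = 2.0 * (sensors[1:] - s0)
    b = (d0**2 - distances[1:]**2
         + np.sum(sensors[1:]**2, axis=1) - np.sum(s0**2))
    p, *_ = np.linalg.lstsq(a, b, rcond=None)
    return p

# Consistency check: distances from a known point reproduce that point.
true_p = np.array([1.0, 3.0, 1.0])
d = np.linalg.norm(SENSORS - true_p, axis=1)
print(position_from_distances(d))   # ~ [1.0, 3.0, 1.0]
```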
For the sake of simplicity, two operators have been introduced: the operator returning the empirical mean value of a data sequence $\{p_n\}$,

$m[\{p_n\}] \equiv \dfrac{1}{N} \sum_{n=1}^{N} p_n$ ,  (1)

and the operator returning its empirical variance,

$s^2[\{p_n\}] \equiv \dfrac{1}{N-1} \sum_{n=1}^{N} \left( p_n - m[\{p_n\}] \right)^2$ .  (2)

Moreover, the velocity and the acceleration in the vertical dimension are denoted by $\hat{v}_{v,n} \equiv |\hat{v}_{z,n}|$ and $\hat{a}_{v,n} \equiv |\hat{a}_{z,n}|$, respectively.

3.3. Classification

In this study, an error-correcting output codes (ECOC) classifier, suitable for multiclass classification problems, has been used [22]. The ECOC classifier has been based on multiple support vector machines (SVM), each designed to distinguish between two selected actions. The implementation of the ECOC classifier available in the MATLAB Statistics and Machine Learning Toolbox [23] has been used for this purpose. Before the training of the classifier, the values of the features have been standardised. The performance of the classifier has been assessed using the 10-fold cross-validation technique. The assessment of the accuracy of the classification has been based on the inspection of:
• the receiver operating characteristic (ROC) curves, illustrating the relationship between the true positive rate (TPR) and the false positive rate (FPR), i.e. two indicators defined as follows:

$\mathrm{TPR} = \dfrac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}$  (3)

$\mathrm{FPR} = \dfrac{\mathrm{FP}}{\mathrm{FP} + \mathrm{TN}}$ ,  (4)

where, for example in the case of walking, TP (true positive) is the number of walks classified as walks, TN (true negative) is the number of non-walks classified as non-walks, FP (false positive) is the number of non-walks classified as walks, and FN (false negative) is the number of walks classified as non-walks; the area under the ROC curve (AUC) is a single scalar value representing the performance;
• the confusion matrices visualising the results of the classification: each row of such a matrix represents the instances of an actual class, while each column represents the instances of a predicted class.

In the experiments based on the real-world data, the use of approximations of the movement trajectories is necessary, since their reference shapes cannot be properly defined: a human body has a considerable volume and generates complex echoes which cannot be attributed to any of its specific points (e.g. to the plexus solaris). An arbitrary choice of such a reference point would lead to an arbitrary definition of the systematic error, which could be misleading. Fortunately, such a definition is not necessary for the extraction of the features used for the classification of the actions of a monitored person, i.e. the features characterising the dispersion of the values of the z-coordinate, the vertical velocity and the vertical acceleration.

4. Results of experimentation

In Figure 2, examples of the estimates of the three-dimensional movement trajectories of a monitored person, obtained by means of the procedure described in Section 2 for the two configurations of the impulse-radar sensors, together with their projections on the three two-dimensional planes, are shown; in Figure 3, the dispersion of a subset of the estimates of the z-coordinate, representative of walking, sitting and lying down, is presented. In Figure 4, the ROC curves obtained for the classification of all the three-dimensional movement trajectories are shown; the confusion matrices are presented in Figure 5.
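Sections 3.2 and 3.3 can be summarised in a short sketch. scikit-learn's OutputCodeClassifier is used here as a stand-in for the MATLAB ECOC implementation used by the authors, and the trajectories and labels below are synthetic placeholders, so the printed accuracy is meaningless; only the structure of the computation is illustrative.

```python
import numpy as np
from sklearn.multiclass import OutputCodeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def features_from_z(z, fs=30.0):
    """The ten features of Table 1, computed from a z-coordinate sequence.
    Velocity and acceleration are forward differences of z (scaled by the
    sampling rate); their absolute values are the 'vertical' quantities."""
    vz = np.diff(z) * fs
    az = np.diff(vz) * fs
    vv, av = np.abs(vz), np.abs(az)
    return np.array([
        np.std(z, ddof=1), np.ptp(z),                       # sigma, delta
        vv.mean(), vv.max(), np.std(vv, ddof=1), np.ptp(vv),
        av.mean(), av.max(), np.std(av, ddof=1), np.ptp(av),
    ])

# Hypothetical data: 60 synthetic z-trajectories with action labels.
trajectories = [np.random.default_rng(i).normal(1.2, 0.1, 300) for i in range(60)]
labels = np.repeat(["walking", "sitting", "lying-down"], 20)

x = np.vstack([features_from_z(z) for z in trajectories])
clf = make_pipeline(StandardScaler(),                     # feature standardisation
                    OutputCodeClassifier(SVC(), code_size=2, random_state=0))
print(cross_val_score(clf, x, labels, cv=10).mean())      # 10-fold cross-validation
```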
The analysis of the presented results leads to the following conclusions:
• Multiple impulse-radar sensors may be successfully used for the estimation of the three-dimensional position of a moving person, although the configuration of those sensors has a significant influence on the uncertainty of the estimation. To properly estimate the height component of the position of a monitored person, a few impulse-radar sensors should be located at a greater height than the rest of those sensors (compare Figure 2c with Figure 2d, as well as Figure 2e with Figure 2f).
• The configuration of the impulse-radar sensors affects the estimates of the coordinates of a monitored person. In the case of Configuration #2, the estimates of the z-coordinate are ca. 0.5 m greater than in the case of Configuration #1; moreover, the x-y projections of the trajectories obtained for Configuration #2 seem to be scaled-down versions of the analogous projections obtained for Configuration #1 (compare Figure 2a with Figure 2b). These discrepancies can be explained by the non-negligible volume of a human body: the impulse-radar sensors located at a greater height, and therefore oriented differently than the rest of those sensors, receive echoes reflected from different parts of the body of the monitored person.
Moreover, in the case of Configuration #1, the changes in the z-coordinate associated with the movement of the body towards the ground during sitting or lying down may not be properly reflected in the estimates of the movement trajectory (see Figure 2c and Figure 2d). This phenomenon may be explained by the fact that, when the person is moving towards the ground, the changes in the distance between that person and the impulse-radar sensors placed on the sides of the monitored area are not significant; in the case of the impulse-radar sensors placed on the ceiling, these changes are much greater.
• The proposed features, characterising the monitored person's movement in the vertical dimension, are sufficient to recognise walking, sitting and lying down with high accuracy, although this accuracy is significantly affected by the quality of the three-dimensional movement trajectories, which in turn is affected by the configuration of the impulse-radar sensors (compare Figure 5a with Figure 5b). The results of the classification of the three-dimensional movement trajectories obtained on the basis of the data acquired by means of the depth sensor are the best, because the depth sensor provides the most accurate estimates of the three-dimensional position of the monitored person. Nevertheless, the results of the classification of the trajectories obtained on the basis of the data acquired by means of the impulse-radar sensors located according to Configuration #2 are only slightly worse, and they can likely be improved by the application of more sophisticated methods for impulse-radar data processing.

Table 1. The features used in the experimentation.

#    Feature
1    Standard deviation of the z-coordinate: $\sigma = \sqrt{s^2[\{\hat{z}_n\}]}$
2    Difference between the extreme values of the z-coordinate: $\Delta = \max[\{\hat{z}_n\}] - \min[\{\hat{z}_n\}]$
3    Mean vertical velocity: $\mu_v = m[\{\hat{v}_{v,n}\}]$
4    Maximum vertical velocity: $v_{\max} = \max[\{\hat{v}_{v,n}\}]$
5    Standard deviation of the vertical velocity: $\sigma_v = \sqrt{s^2[\{\hat{v}_{v,n}\}]}$
6    Difference between the extreme values of the vertical velocity: $\Delta_v = \max[\{\hat{v}_{v,n}\}] - \min[\{\hat{v}_{v,n}\}]$
7    Mean vertical acceleration: $\mu_a = m[\{\hat{a}_{v,n}\}]$
8    Maximum vertical acceleration: $a_{\max} = \max[\{\hat{a}_{v,n}\}]$
9    Standard deviation of the vertical acceleration: $\sigma_a = \sqrt{s^2[\{\hat{a}_{v,n}\}]}$
10   Difference between the extreme values of the vertical acceleration: $\Delta_a = \max[\{\hat{a}_{v,n}\}] - \min[\{\hat{a}_{v,n}\}]$

Figure 2. Examples of the estimates of the three-dimensional trajectories of a moving person, obtained for the two configurations of the impulse-radar sensors: trajectories representative of walking (top row), sitting (middle row) and lying down (bottom row). The movement trajectories obtained for Configuration #1 are depicted in the left column, those obtained for Configuration #2 in the right column. Blue lines denote radar-data-based trajectories, grey lines denote depth-data-based trajectories, and blue triangles indicate the positions of the radar sensors.

Figure 3. The dispersion of the estimates of the z-coordinate of a moving person, obtained for the two configurations of the impulse-radar sensors, for walking (a), sitting (b) and lying down (c).

Figure 4. The receiver operating characteristic (ROC) curves obtained for the classification of all the three-dimensional movement trajectories: radar-data-based trajectories obtained for Configuration #1 (a), radar-data-based trajectories obtained for Configuration #2 (b), depth-data-based trajectories (c).

Figure 5. Exemplary confusion matrices obtained for the classification of all the three-dimensional movement trajectories: radar-data-based trajectories obtained for Configuration #1 (a), radar-data-based trajectories obtained for Configuration #2 (b), depth-data-based trajectories (c).

5. Conclusions

The novelty of the research presented in this paper consists in an approach to the interpretation of measurement data acquired by means of impulse-radar sensors, leading to the determination of features to be used for the recognition of actions of three types: walking, sitting and lying down. The data are first transformed into the three-dimensional coordinates of the monitored person; next, those coordinates are used as a basis for the calculation of kinematic features characterising the monitored person's movement in the vertical dimension.

The results of the experimentation based on real-world data show that multiple impulse-radar sensors may be successfully used for highly accurate recognition of walking, sitting and lying down of a monitored person. It has to be noted, however, that the accuracy of the recognition is affected by the quality of the three-dimensional movement trajectories, which in turn is affected by the configuration of the impulse-radar sensors. To properly estimate the height component of the position of the monitored person, a few impulse-radar sensors should be located at a greater height than the rest of those sensors. The results encourage the authors to focus on the development of methods for the processing of impulse-radar data enabling the detection of falls of the monitored person.

References
[1] United Nations, Department of Economic and Social Affairs, Population Division (2019), World Population Prospects 2019: Highlights, ST/ESA/SER.A/423. Online [Accessed 15 March 2022] https://population.un.org/wpp/publications
[2] K. Chaccour, R. Darazi, A. H. E. Hassani, E. Andrès, From fall detection to fall prevention: a generic classification of fall-related systems, IEEE Sensors Journal, vol. 17, 2017, pp. 812-822. DOI: 10.1109/jsen.2016.2628099
[3] M. G. Amin, Y. D. Zhang, F. Ahmad, K. C. Ho, Radar signal processing for elderly fall detection: the future for in-home monitoring, IEEE Signal Processing Magazine, vol. 33, 2016, pp. 71-80. DOI: 10.1109/msp.2015.2502784
[4] O. Boric-Lubecke, V. M. Lubecke, A. D. Droitcour, Byung-Kwon Park, A. Singh (Eds.), Doppler Radar Physiological Sensing, John Wiley & Sons, Inc., Hoboken (New Jersey), 2016.
[5] B. Y. Su, K. C. Ho, M. Rantz, M. Skubic, Doppler radar fall activity detection using the wavelet transform, IEEE Transactions on Biomedical Engineering, vol. 62, 2015, pp. 865-875. DOI: 10.1109/tbme.2014.2367038
[6] A. Gadde, M. G. Amin, Y. D. Zhang, F. Ahmad, Fall detection and classifications based on time-scale radar signal characteristics, Proc. SPIE, vol. 9077 'Radar Sensor Technology XVIII', 2014. DOI: 10.1117/12.2050998
[7] D. Y. Wang, J. Park, H. J. Kim, K. Lee, S. H. Cho, Noncontact extraction of biomechanical parameters in gait analysis using a multi-input and multi-output radar sensor, IEEE Access, vol. 9, 2021, pp. 138496-138508. DOI: 10.1109/access.2021.3117985
[8] A.-K. Seifert, M. Grimmer, A. M. Zoubir, Doppler radar for the extraction of biomechanical parameters in gait analysis, IEEE Journal of Biomedical and Health Informatics, vol. 25, 2021, pp. 547-558. DOI: 10.1109/jbhi.2020.2994471
[9] S. Gezici, H. V. Poor, Position estimation via ultra-wide-band signals, Proceedings of the IEEE, vol. 97, 2009, pp. 386-403. DOI: 10.1109/jproc.2008.2008840
[10] P. Mazurek, A. Miękina, R. Z. Morawski, Comparative study of three algorithms for estimation of echo parameters in UWB radar module for monitoring of human movements, Measurement, vol. 88, 2016, pp. 45-57. DOI: 10.1016/j.measurement.2016.03.025
[11] J. J. Zhang, X. Dai, B. Davidson, Z. Zhou, Ultra-wideband radar-based accurate motion measuring: human body landmark detection and tracking with biomechanical constraints, IET Radar, Sonar & Navigation, vol. 9, 2015, pp. 154-163. DOI: 10.1049/iet-rsn.2014.0223
[12] R. Herrmann, J. Sachs, M-sequence-based ultra-wideband sensor network for vitality monitoring of elders at home, IET Radar, Sonar & Navigation, vol. 9, 2015, pp. 125-137. DOI: 10.1049/iet-rsn.2014.0214
[13] H. Li, A. Mehul, J. Le Kernec, S. Z. Gurbuz, F. Fioranelli, Sequential human gait classification with distributed radar sensor fusion, IEEE Sensors Journal, vol. 21, 2021, pp. 7590-7603. DOI: 10.1109/jsen.2020.3046991
[14] W. Khawaja, F. Koohifar, I. Guvenc, UWB radar based beyond wall sensing and tracking for ambient assisted living, in Proc. 14th IEEE Annual Consumer Communications & Networking Conf., Las Vegas, NV, USA, 8-11 January 2017, pp. 142-147. DOI: 10.1109/ccnc.2017.7983096
[15] T. J. Daim, R. M. A. Lee, Indoor environment device-free wireless positioning using IR-UWB radar, in Proc. 2018 IEEE Int. Conf.
on Artificial Intelligence in Engineering and Technology, Kota Kinabalu, Malaysia, 8 November 2018, 4 pp. DOI: 10.1109/iicaiet.2018.8638458
[16] P. Mazurek, Choosing configuration of impulse-radar sensors in system for healthcare-oriented monitoring of persons, Measurement: Sensors, vol. 18, 2021, p. 100270. DOI: 10.1016/j.measen.2021.100270
[17] P. Mazurek, Applicability of multiple impulse-radar sensors for estimation of person's three-dimensional position, in Proc. 2020 IEEE Int. Instrumentation and Measurement Technology Conf., Dubrovnik, Croatia, 25-28 May 2020, pp. 1-6. DOI: 10.1109/i2mtc43012.2020.9128760
[18] Novelda, X4M02 radar sensor. Online [Accessed 8 November 2019] https://shop.xethru.com/x4m02
[19] Novelda, X4M02 radar sensor datasheet. Online [Accessed 29 January 2020] https://www.xethru.com/community/resources/x4m02-radar-sensor-datasheet.115/
[20] J. Wagner, P. Mazurek, R. Z. Morawski, Non-invasive Monitoring of Elderly Persons: Systems Based on Impulse-Radar Sensors and Depth Sensors, Springer, Cham, 2022.
[21] P. Mazurek, J. Wagner, R. Z. Morawski, Use of kinematic and mel-cepstrum-related features for fall detection based on data from infrared depth sensors, Biomedical Signal Processing and Control, vol. 40, 2018, pp. 102-110. DOI: 10.1016/j.bspc.2017.09.006
[22] E. L. Allwein, R. E. Schapire, Y. Singer, Reducing multiclass to binary: a unifying approach for margin classifiers, Journal of Machine Learning Research, vol. 1, 2001, pp. 113-141. DOI: 10.1162/15324430152733133
[23] MathWorks, MATLAB documentation: Statistics and Machine Learning Toolbox – ClassificationECOC. Online [Accessed 15 March 2022] https://www.mathworks.com/help/stats/classificationecoc.html

Evaluation of the electricity savings resulting from a control system for artificial lights based on the available daylight

ACTA IMEKO, ISSN: 2221-870X, December 2022, Volume 11, Number 4, 1-11

Francesco Nicoletti1, Vittorio Ferraro2, Dimitrios Kaliakatsos1, Mario A. Cucumo1, Antonino Rollo1, Natale Arcuri1

1 Department of Mechanical, Energy and Management Engineering (DIMEG), University of Calabria, Via P. Bucci, 87036 Rende (CS), Italy
2 Department of Computer, Modelling, Electronics and System Engineering (DIMES), University of Calabria, Via P. Bucci, 87036 Rende (CS), Italy

Section: Research paper

Keywords: daylight; artificial lights; energy savings

Citation: Francesco Nicoletti, Vittorio Ferraro, Dimitrios Kaliakatsos, Mario A. Cucumo, Antonino Rollo, Natale Arcuri, Evaluation of the electricity savings resulting from a control system for artificial lights based on the available daylight, Acta IMEKO, vol. 11, no. 4, article 12, December 2022, identifier: IMEKO-ACTA-11 (2022)-04-12
Section editor: Francesco Lamonaca, University of Calabria, Italy

Received July 11, 2022; in final form October 13, 2022; published December 2022

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This work was supported by Regione Calabria (PAC Calabria 2014-2020, Asse Prioritario 12, Azione B) 10.5.12) for F. Nicoletti's contribution.

Corresponding author: Francesco Nicoletti, e-mail: francesco.nicoletti@unical.it

Abstract
Natural lighting in building environments is an important aspect for the occupants' mental and physical health. Furthermore, the proper exploitation of this resource can bring energy benefits related to the reduced use of artificial lighting. This work provides estimates of the energy that can be saved by using a lighting system that recognises the indoor illuminance; in particular, the system manages the switching-on of the lights according to the daylight detected in the room. The savings obtainable with this solution depend on the size and orientation of the window. The analysis is conducted on an office by means of simulations using the INLUX-DBR code. The location influences the luminance characteristics of the sky; the analysis is therefore conducted with reference to one city in the south and one in the north of Italy (Cosenza and Milan). The energy saving is almost independent of latitude and is therefore representative of the whole Italian territory. It is highly variable with exposure, being the highest for a southern exposure (97 % with a window size equal to 36 % of the floor area) and between 26 % and 48 % (as a function of window size) for a northern exposure.

1. Introduction

Buildings are the place where people spend most of their time. A large amount of energy is used inside buildings to meet human needs and to provide thermal-hygrometric and visual comfort. In the coming years, it will be very important to adopt solutions that reduce this consumption. The most widespread strategy consists of improving the efficiency of the building envelope (reducing the transmission losses through opaque surfaces [1]-[5]) and the efficiency of the plant components [6]. It is especially important to also utilise solar gains in a smart way, because they bring thermal and visual benefits; the solar source, in fact, has a considerable influence on the behaviour of the building. Scientific research in this regard pursues three main objectives: 1) the production of electrical and thermal power through solar panels [7], [8]; 2) the control of solar inputs to reduce the winter and summer heat loads; 3) the achievement of adequate daylight within the rooms.

In addition to raising user awareness to reduce waste, intelligent solutions to better exploit solar radiation are divided into passive and active systems. Passive systems do not require a control system; examples include bioclimatic solar greenhouses [9], green roofs [10], [11] and Trombe walls [12]. Active systems, on the other hand, use sensors and actuators, which often communicate via IoT technology to make automatic adjustments [13], [14]. The visual comfort of the occupants, which is often neglected, is also a primary objective. This research analyses a dynamic system for controlling artificial lighting that avoids unnecessary switching-on when the natural illuminance is adequate.

Daylight plays a key role in visual comfort inside buildings. Estimating daylight inside a room is not a simple problem, and over the years different methodologies have been developed to account for the many variables involved.
Numerous computational codes are available to estimate the illuminance on the working plane [15]-[17]. These codes, however, are difficult to adapt to a real case, since they use the sky luminance models present within their libraries; in particular, these models are predominantly the CIE clear sky model [18] or standard overcast sky distributions, and the use of locally measured or estimated luminances is not allowed. For this reason, the authors developed the INLUX-DBR calculation code [19], [20]. The luminous contributions that the code considers are: the light carried by direct rays, the light scattered by the sky, and the light reflected by the outside ground. It allows a realistic estimation of the illuminance at various points in the room. The code is very flexible, since it allows the use of measured sky luminance distributions as well as classical model calculations. The resolution method used is the "radiosity model", by which the luminance distributions on the walls are evaluated by an implicit method. The code was validated against an example case conducted in Japan, and it allows the extension of case studies to perform parametric analyses.

In the present study, we investigate the possibility of replacing the artificial lighting system with a system that adapts to the setpoint illuminance on the basis of the daylight contribution already present. This is a control strategy that is often applied in smart buildings. This work provides numerical data to quantitatively assess the energy savings achievable with this investment, which depend on a number of factors: in particular, the geometry of the room and the size and orientation of the window. The objective of the work is to provide useful indications for exploiting daylight and saving the electricity used by artificial lights. In addition, this solution provides visual comfort and makes the environment healthier for the occupants. Two Italian locations are simulated in the work, one in southern Italy and one in northern Italy: Cosenza and Milan.

1.1. Literature review

Controlling artificial light according to the natural illuminance is useful to provide an environment with a constant illuminance over time, reducing unnecessary waste. The daylight recorded in rooms depends greatly on the geometry of the rooms and on the ratio of window to wall areas; in addition, the illuminance depends on the position of the sun. In the past, many authors have made important contributions on this topic. The most important parameter used in building design is the daylight factor (DF), defined by Trotter [21]. Its estimation was improved by Dresler [22], who formulated empirical relationships. Hopkinson et al. [23] introduced the need to consider external obstacles and ground reflection. Later, Tregenza [24] defined an analytical method called split-flux, which is based on overcast conditions. Numerous models have been formulated to estimate the DF; they use scaled models, empirical formulas and corresponding diagrams [25], [26].
The DF, however, allows valid considerations only under overcast conditions, because on clear days the indoor illuminance is highly dependent on the position of the sun in the sky; if the DF were used in these cases, the indoor illuminance distributions would not depend on the orientation of the window. It would be appropriate to define indices to evaluate the illuminance on an annual scale, but this greatly increases the complexity of the problem. For our purpose, therefore, the classical DF-based methods are not useful. Nabil et al. [27] showed that the alternation of clear, intermediate and cloudy days makes indoor lighting highly dependent on the window orientation.

The models mainly used to simulate the distribution of light inside a room are divided into "radiance models" and "ray tracing models"; the two kinds of model provide equal results, being based on valid assumptions. The code used in the present work belongs to the first category. The method consists of solving, at each instant of time, a system of equations obtained from a balance on each surface of the room. Each surface is treated as a body that reflects the incident light isotropically, and it is defined by its reflectivity and by its view factors towards all the other surfaces. In ray tracing models, the path of the solar ray is followed through its various reflections on the walls of the room; the illuminance is calculated by setting a maximum number of reflections, and no system of equations is solved.

Daylighting studies are very diversified; in fact, the problems are always significantly different depending on the building conformation and scale. Alhagla et al. [28] found that the benefits of daylight depend on the location: in hot locations, the exploitation of sunlight leads to an increase in the cooling demand in the summer period, whereas in locations with a colder climate the use of daylight can favour the input of solar radiation and thus reduce the winter heat demand. Athienitis et al. [29] analysed control systems for artificial lights associated with daylight to manage electricity use and reduce energy consumption; they also achieved excellent results in terms of visual well-being. The benefits achieved on electricity consumption are remarkable, although the authors observe that there is a risk of overheating during summer periods. Krarti et al. [30] studied different geometric room configurations in order to analyse the impact of daylight on energy consumption; the analysis was conducted with different window sizes and different types of glass for four American locations, and they observed that the geographical location has a low impact on daylight. Hee et al. [31] presented a review of scientific papers in which the impact of windows on room illuminance and the resulting energy savings are assessed; they also listed the different optimisation techniques used to choose the glass. Their advice is to perform a careful economic evaluation when choosing the glass and defining the size of the opening in a room.

The largest number of scientific studies concerns the assessment of the solar radiation introduced through windows, together with daylight. Of course, these effects are important because they have a considerable influence on the energy balance of a building. The present work, however, has a different objective: the solar contributions are defined by the geometry of the building and by the ratio of windows to opaque walls.
This paper, unlike the others, does not aim to evaluate the best architectural design for the building. Instead, the aim is to evaluate, for a given building (thus for given daylight and solar gains), whether the artificial lighting system can be made more efficient, by making it intelligent, so as to exploit daylight. For example, when the room is long and narrow, the illuminance established in the areas far from the window is not sufficient; such rooms are generally not suitable for exploiting daylight properly. Moon et al. [32] performed an analysis of the control of the lighting system using photosensors in the room, carrying out simulations with the Lightscape software. The use of software to simulate daylight is interesting, as it allows a comparison between different locations with reference to the same geometry. It is therefore necessary to understand whether the presence of an artificial light control system can lead to savings, and to try to quantify them [33]-[35]. Li et al. [36] carried out experimental studies on daylight and energy consumption for atrium corridors. Doulos et al. [37] showed that the power factor of LED lamps worsens when they are dimmed, which could make the use of fluorescent lamps more advantageous than LEDs under certain conditions. Bellia et al. [38] observed experimentally that there are no significant differences between using a proportional dimmable system and an on/off system; in particular, for southern exposures the difference is very small, so the installation of a dimmable system is not recommended in these cases, owing to its higher cost compared with an on/off system. The number of variables on which lighting problems depend is high, and it is not always clear how many benefits the presence of a dimmable system can bring. The present work aims to overcome this gap and to provide useful information for locations in the latitude range of Italy.

2. Methodology

2.1. The INLUX-DBR code and its experimental validation

The lighting analysis was carried out using the INLUX-DBR calculation code. This code evaluates the luminance distribution within an environment. In particular, it divides the opaque structures and the transparent structures into $n$ and $m$ surface elements, respectively. On each of these, the illuminance value depends on:
a. a component of the direct solar radiation which, passing through the transparent surfaces, hits the surface under examination;
b. a component of the diffuse solar radiation which, passing through the transparent surfaces, hits the surface under examination;
c. a component of the solar radiation reflected by the external environment which, passing through the glazed surfaces, hits the surface under examination;
d. a component due to the infinite internal reflections by the other surfaces that make up the environment in question.

These components influence both the illuminance of the opaque surfaces and the illuminance of the glazed surfaces. In the present analysis, the glazed surfaces are assumed to be made of ordinary clear glass, so the direction of the solar radiation incident on the surface remains unchanged. Since the effects of the radiation reflected by the ground outside the environment have to be considered, the ground is divided into $p$ surface elements. Taking into account the various incident radiative components, the illuminance $E_i$ of the i-th surface can be determined by means of Eq. (1):
$E_i \, \Delta A_i = \sum_{w=1}^{m} \left[ \tau(\alpha,\varphi) \, L_p(\alpha,\varphi) \, \Delta A_w \, \pi \, F_{w-i} \right] + k \, \tau(\alpha_s,\varphi_s) \, \Delta A_i \, E_{vs} \cos\vartheta_{si} + \sum_{g=1}^{p} \left[ \tau(\alpha_g,\varphi_g) \, L_g \, \Delta A_g \, \pi \, F_{g-i} \right] + \sum_{j=1}^{n+m} \Delta A_j \, \rho_j \, E_j \, F_{j-i}$  (1)

The first member contains the illuminance of the i-th element ($E_i$) multiplied by its surface ($\Delta A_i$); the second member contains all the radiative components that influence the value of the illuminance on the surface under examination. In particular: $\tau(\alpha,\varphi)$ is the transmissivity of the glazed surface; $L_p(\alpha,\varphi)$ is the luminance of the celestial vault, which varies with the point of the vault considered; $\alpha$ is the angular height of the considered point of the celestial vault and $\varphi$ its azimuth; $\Delta A_w$ is the area of the glazed element under examination; $F_{w-i}$ is the view factor between the glazed element $w$ and the i-th element; $k$ is a coefficient equal to one if the element in question is directly hit by solar radiation and zero otherwise; $\tau(\alpha_s,\varphi_s)$ is the transmissivity of the glazed surface to the direct solar illuminance ($E_{vs}$); $\alpha_s$ is the height of the direct solar radiation and $\varphi_s$ its azimuth; $\vartheta_{si}$ is the angle between the direct solar radiation and the normal to the surface of the i-th element; $\tau(\alpha_g,\varphi_g)$ is the transmissivity of the glazed surface to the radiation reflected from the ground; $L_g$ is the luminance of the ground; $\Delta A_g$ is the surface of the g-th element of the ground; $F_{g-i}$ is the view factor between the g-th element and the i-th element; $\rho_j$ is the reflectivity of the elements internal to the environment; $\Delta A_j$ is the surface of the j-th internal element that reflects part of the incident radiation ($E_j$) onto the i-th element; finally, $F_{j-i}$ is the view factor between the j-th element and the i-th element. This balance equation is written for each i-th opaque element inside the environment. Figure 1 shows a graphical representation of the various radiative components exchanged by the different surfaces and considered in the INLUX-DBR code.

Considering the external ground as an isotropic surface and assuming that its reflectivity $\rho_g$ does not vary between direct and diffuse radiation, the luminance of the ground $L_g$ can be calculated by means of Eq. (2):

$L_g = \dfrac{\rho_g \left( E_{vs} \sin\alpha_s + E_0 \right)}{\pi}$ ,  (2)

where $E_0$ is the diffuse illuminance on the horizontal plane.

Figure 1. Representation of the various radiative components exchanged by the different surfaces inside the room.

In the case in question, the illuminance on the glazed surface depends solely on the radiative components reflected by the surfaces inside the environment; in fact, a single glazed surface in the room is considered by hypothesis. Consequently, Eq. (1) can be simplified, and the illuminance on a glazed element inside the room can be determined by means of Eq. (3):

$E_w \, \Delta A_w = \sum_{i=1}^{n} \left[ \Delta A_i \, \rho_i \, E_i \, F_{i-w} \right]$ ,  (3)

where $E_w$ is the internal illuminance of the w-th element of the glazed surface under examination, and $F_{i-w}$ is the view factor between the i-th element and the w-th element of the glazed surface.
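Equations (1)-(3) define a linear system in the surface illuminances. The following toy iteration shows the radiosity-style solution scheme under strong simplifications: the precomputed view factors are assumed to absorb the area and π factors, the external term lumps together the first three contributions of Eq. (1), and all numbers are hypothetical. It is a sketch, not the INLUX-DBR code.

```python
import numpy as np

def solve_illuminance(e_ext, rho, f, tol=1e-9, max_iter=10_000):
    """Iterative solution of e_i = e_ext_i + sum_j rho_j * e_j * f[j, i],
    starting from a uniform-in-structure initial solution (e = e_ext).
    e_ext : external contribution on each patch (sky + sun + ground), lux
    rho   : reflectivity of each patch
    f     : f[j, i] = fraction of light leaving patch j that reaches patch i
    """
    e = e_ext.copy()
    for _ in range(max_iter):
        e_new = e_ext + f.T @ (rho * e)   # inter-reflected light added each pass
        if np.max(np.abs(e_new - e)) < tol:
            break
        e = e_new
    return e

# Toy example: three patches exchanging light (hypothetical values).
e_ext = np.array([500.0, 120.0, 80.0])   # lux reaching each patch directly
rho = np.array([0.2, 0.5, 0.7])          # e.g. floor, wall, ceiling reflectivities
f = np.array([[0.0, 0.4, 0.4],
              [0.3, 0.0, 0.3],
              [0.4, 0.4, 0.0]])          # simplistic view factors (rows sum < 1)
print(solve_illuminance(e_ext, rho, f))
```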
Finally, a system of $(n+m)$ equations in $(n+m)$ unknowns is obtained, which is solved iteratively by means of a finite-difference methodology starting from a uniform initial solution. The calculation code has been implemented in the MATLAB environment; further details are provided in [19]. One of the advantages of this computational code is that it can implement different methodologies to model the behaviour of the celestial vault. For example, it is possible to use: 1) the clear and overcast CIE distributions [39]; 2) Perez's clear-sky model [40]; 3) Igawa's model [41]; 4) the model of Kittler and Darula [18], [42]; 5) the Tregenza model [43]; 6) experimental models created ad hoc for the location in question.

The developed code was validated by means of data measured at an experimental site in Osaka. In the experimental validation, clear, overcast and intermediate days were considered. The validation was carried out considering two error indices, the normalised mean bias error (NMBE) and the normalised root mean square error (NRMSE), evaluated between the measured and the calculated illuminance at various points in the environment. In particular, the NMBE is calculated by means of the following equation:

$\mathrm{NMBE} = \dfrac{1}{N} \sum_{i=1}^{N} \dfrac{V_{\mathrm{calc},i} - V_{\mathrm{meas},i}}{V_{\mathrm{meas},i}}$ ,  (4)

and the NRMSE by means of the following relation:

$\mathrm{NRMSE} = \sqrt{ \dfrac{1}{N} \sum_{i=1}^{N} \left( \dfrac{V_{\mathrm{calc},i} - V_{\mathrm{meas},i}}{V_{\mathrm{meas},i}} \right)^2 }$ .  (5)

The values of these error indices are summarised in Figure 2. A good correspondence between the measured and the calculated values is observed; of course, the error increases as the solar radiation entering the environment increases.

Figure 2. Analysis of the error indices during the experimental campaign conducted at an experimental site located in Osaka.

2.2. Calculation methodology of the electrical energy saving

The analysis was conducted with reference to two different locations: Cosenza (lat. 39° 18' N) and Milan (lat. 45° 28' N). The distribution of the point illuminance and the average illuminance in the environment were determined using the INLUX-DBR code. The code requires some input data, such as the monthly average hourly direct and diffuse solar radiation and the illuminance on the horizontal plane. These parameters were obtained through the "European Database of Daylight and Solar Radiation" provided by Satel-Light [44]. The luminance of the sky was determined using the Perez model; the latter was chosen on the basis of an accuracy analysis carried out in a previous study [20], which showed the goodness of the model.

The Perez model [40] is an all-sky calculation methodology based on the clearness index, which is determined by means of the following relation:

$\varepsilon = \dfrac{ \dfrac{G_{d0} + G_{b0}/\sin\alpha_s}{G_{d0}} + 5.535 \cdot 10^{-6} \, z_s^3 }{ 1 + 5.535 \cdot 10^{-6} \, z_s^3 }$ ,  (6)

where $G_{d0}$ is the diffuse irradiance on the horizontal plane, $G_{b0}$ is the direct irradiance on the horizontal plane and $z_s$ is the solar zenith angle (in degrees, as implied by the value of the coefficient). This index varies between $\varepsilon = 1$ (completely overcast sky) and $\varepsilon = 6.2$ (completely clear sky). As a function of the clearness index, eight sky conditions have been identified, each with its own parameters and formulas for calculating the luminance of the sky; this quantity varies point by point in the celestial vault and appears in Eq. (1). In particular, the luminance of the sky can be calculated by means of the following relationship:

$L_P = L_Z \cdot l_r$ ,  (7)

where $l_r$ is the relative luminance and $L_Z$ the zenith luminance.
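Equation (6) can be checked numerically before moving to the relative-luminance relations. A minimal sketch follows, under the assumption that the solar zenith angle is expressed in degrees; the input values are hypothetical.

```python
import numpy as np

def clearness_index(g_d0, g_b0, alpha_s_deg):
    """Eq. (6): Perez sky clearness from the horizontal diffuse (g_d0) and
    horizontal direct (g_b0) irradiance; alpha_s is the solar height,
    z_s = 90 - alpha_s the solar zenith angle in degrees."""
    alpha_s = np.radians(alpha_s_deg)
    z_s = 90.0 - alpha_s_deg
    k = 5.535e-6 * z_s**3
    return ((g_d0 + g_b0 / np.sin(alpha_s)) / g_d0 + k) / (1.0 + k)

# A clear day (strong direct component) vs. an overcast one (no direct sun):
print(clearness_index(g_d0=100.0, g_b0=500.0, alpha_s_deg=50.0))  # ~5.8 (clear)
print(clearness_index(g_d0=250.0, g_b0=0.0, alpha_s_deg=50.0))    # 1.0 (overcast)
```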
The relative luminance can be determined by means of the following relationship:

$l_r = \dfrac{\Phi(\alpha) \, f(\zeta)}{\Phi(\pi/2) \, f(\pi/2 - \alpha_s)}$ ,  (8)

where $\alpha$ is the angular height of the considered point of the sky and $\zeta$ is the angular distance between the considered point of the sky and the position of the sun. The functions $\Phi(x)$ and $f(x)$ are given by:

$\Phi(x) = 1 + a \, \mathrm{e}^{b/\sin x}$  (9)

$f(x) = 1 + c \, \mathrm{e}^{d x} + e \cos^2 x$ .  (10)

In Eqs. (9) and (10), the parameters $a$, $b$, $c$, $d$ and $e$ are functions of the solar height $\alpha_s$, the brightness of the sky $\Delta$ and the clearness index $\varepsilon$; more information on the calculation of these parameters can be found in Ref. [25]. The zenith luminance, which appears in Eq. (7), can be determined as the sky condition varies by means of the following correlation developed by Perez [45]:

$L_Z = G_{d0} \left[ a_i + c_i \sin\alpha_s + c'_i \, \mathrm{e}^{-3(\pi/2 - \alpha_s)} + d_i \right]$ ,  (11)

where the constants $a_i$, $c_i$, $c'_i$ and $d_i$ are functions of the clearness index $\varepsilon$.

2.3. Case study

The analysis is conducted with reference to a case study. The room for which the smart lighting system is simulated measures 6 × 6 × 3 m³ and has a window with double glazing and a transmissivity equal to 0.75. The size of the window varies between 5.04 m² and 12.96 m². The minimum size is fixed on the basis of the Italian legislation [46], which imposes a minimum glazed area of 1/8 (12.5 %) of the total floor area of a room. The reflectivities of the room's opaque vertical and horizontal structures are summarised in Table 1.

By means of the INLUX-DBR calculation code, an analysis was carried out over a whole year in order to evaluate the distribution of the natural lighting inside the room. In particular, the natural illuminance was evaluated with reference to the 12 points shown in Figure 3. At each of these points, a lamp with a power of 36 W and a luminous flux of 3200 lumen is assumed to be placed. The lamp control logic activates a row of lamps only if the natural illuminance at two points out of three (of the same row) is less than 500 lux. As the orientation and the size of the window vary, the distribution and the intensity of the natural lighting vary; consequently, the number of switch-ons of the lighting system and the electrical consumption vary as well. The final goal is precisely to determine the energy savings that can be obtained as the configuration varies.

3. Analysis of results

Figure 4 shows the distribution of the natural lighting at the 12 selected points for the city of Cosenza at 12:30 on 15 January. The analysis is carried out with reference to a window surface equal to 5.04 m², for different exposures. With the window exposed to the north, only in the row closest to the window is the illuminance greater than 500 lux; with the exposure to the east, the natural illuminance is higher than 500 lux in the first row and at two points out of three of the second row; with the southern exposure, the natural illuminance at all points is higher than 500 lux; finally, with the western exposure, only in the last two rows farthest from the window is the natural illuminance higher than 500 lux. The same analysis was conducted at 12:30 on 15 July; the results are reported in Figure 5.
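A minimal sketch of the row-switching logic just described is given below; the illuminance snapshot is hypothetical, and in the paper the evaluation is performed on the illuminances computed by INLUX-DBR.

```python
LAMP_POWER_W = 36.0    # one lamp at each of the 12 points (Section 2.3)
SETPOINT_LUX = 500.0

def active_rows(illuminance):
    """illuminance: list of rows, each holding the natural illuminance (lux)
    at its three points. A row is switched on when at least two of its
    three points fall below the 500 lux setpoint."""
    return [sum(e < SETPOINT_LUX for e in row) >= 2 for row in illuminance]

def instantaneous_power(illuminance):
    return sum(active_rows(illuminance)) * 3 * LAMP_POWER_W  # 3 lamps per row

# Hypothetical midday snapshot (4 rows x 3 points, window near the first row):
snapshot = [[950, 880, 910], [620, 480, 505], [430, 410, 445], [300, 280, 310]]
print(active_rows(snapshot))           # [False, False, True, True]
print(instantaneous_power(snapshot))   # 216.0 W
```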
Table 1. Characteristics of the walls of the reference room.

Element       Dimensions      Area (m²)   ρ
Floor         6 × 6 m²        36          0.2
Ceiling       6 × 6 m²        36          0.7
Rear wall     6 × 3 m²        18          0.5
Left wall     6 × 3 m²        18          0.5
Right wall    6 × 3 m²        18          0.5
Front wall    6 × 3 m²        12.96       0.5
Ref. window   4.2 × 1.2 m²    5.04        0.15
Work plane    6 × 6 m²        36          -

Figure 3. Position of the reference points at which the analysis of the natural illuminance was conducted.

Figure 4. Natural illuminance at 12 points of the work plane. 15 January, 12:30, Cosenza.

Figure 5. Natural illuminance at 12 points of the work plane. 15 July, 12:30, Cosenza.

The results are similar, but the values obtained for the natural lighting are sharply lower. This is due to the higher solar trajectory in the sky in the summer months, which affects the solar height: it varies from $\alpha_s = 29.37°$ on 15 January at 12:30 to $\alpha_s = 70.77°$ on 15 July at 12:30. This higher solar trajectory causes less direct solar radiation to hit the surfaces inside the room in July: the average cumulative illuminance goes from 200 klux·hour/day in January to 119 klux·hour/day in July. Furthermore, the lower illuminance in July is due to the lower brightness of the celestial vault, caused by the greater distance of the sun.

The distribution of the sky luminance varies greatly between January and July. This can be observed in Figure 6, which shows how the sky luminance varies with the angular height $\alpha$ and the azimuth $\phi$ of the considered sky point at 12:00 on 15 January (a) and at noon on 15 July (b) in the city of Cosenza. For all points of the celestial vault, the sky luminance is higher in January than in July, which is consistent with the above. For the case in question, considering a 5.04 m² south-facing window, the opaque elements arranged centrally in the room exchange radiation with the points of the celestial vault comprised between the angular heights $\alpha = 2°$ and $\alpha = 23°$. The difference in sky brightness can also be evaluated with reference to the distributions obtained by varying the azimuth for $\alpha = 5°$ and $\alpha = 25°$, with reference to noon on 15 January and noon on 15 July; these distributions are represented in Figure 7. Once again, under the same conditions, the brightness of the sky is much greater in January than in July.

Figure 8 shows how the average illuminance in the room varies with the orientation of the window during the hours of 15 January and 15 July. Figures 9 and 10 show, respectively, how the illuminance at point 7 and at point 8 of the room (see Figure 3) varies with the orientation of the window on 15 January and 15 July; these are the two central points farthest from the window.

With reference to the case with the window exposed to the north, in July the natural illuminance is highest in the first and last hours of the day, owing to the longer and lower solar trajectory in the celestial vault: in those hours a certain amount of direct solar radiation reaches the internal opaque surfaces, thereby contributing to the natural internal lighting. In the central hours of the day, the direct solar radiation incident on the glazed surface is obviously zero, and this limits the natural lighting inside the room.
In January, by contrast, the solar trajectory is shorter and lower in the celestial vault, and this generates the typical bell-like trend of the natural illuminance inside the room. With the window exposed to the south, the natural lighting shows trends similar to those described for the north orientation, but with much higher intensity: in this case the sun passes in front of the window along its trajectory.

Figure 6. Sky luminance distribution (Perez) as a function of azimuth and elevation angles: a) noon on 15 January, Cosenza; b) noon on 15 July, Cosenza.
Figure 7. Comparison of sky luminance distributions for noon of 15 January and 15 July, Cosenza.
Figure 8. Mean illuminance on the work plane as a function of time, Cosenza.

Finally, with reference to the cases with eastern and western exposure, there is a peak of illuminance in the first and last hours of the day, respectively, due to the solar path (sunrise and sunset). The higher intensity of natural lighting in those hours in July is once again due to the longer solar trajectory, which affects the amount of direct solar radiation entering the room through the window. In the central hours, on the other hand, the natural lighting is more intense in January because of the lower solar trajectory.

The electricity consumption for the city of Cosenza was also analysed as the area and the orientation of the window change. It is assumed that the room is used as an office with a working period of 8 hours, from 8:00 am to 4:00 pm. To evaluate the convenience of each configuration, two different energy saving indices are introduced:
1) The percentage of energy savings compared to a case with total absence of natural light (R1). If no contribution from natural light is assumed in the room in question, the annual electricity consumption is estimated as C_max = 860 kWh. The energy saving index R1 can then be calculated using the following formula:

$R_1 = \dfrac{C_\mathrm{max} - C}{C_\mathrm{max}}$ .  (12)

2) The percentage of energy savings compared to a reference case (R2). The case with a 5.04 m² north-facing window is taken as reference, and its annual electricity consumption is indicated as C_ref; this case is chosen as reference because it is the configuration with the lowest natural illuminance. The index R2 can therefore be determined by means of the following formula:

$R_2 = \dfrac{C_\mathrm{ref} - C}{C_\mathrm{ref}}$ .  (13)
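As a numerical check of eqs. (12) and (13), the short Python sketch below computes the two indices for the annual consumptions of the 5.04 m² window in Cosenza reported in Table 2 (the printed values agree with the table to within rounding):

C_MAX = 860.0  # kWh/year, estimated consumption in total absence of natural light
C_REF = 633.0  # kWh/year, reference case: 5.04 m2 north-facing window

def r1(C):
    # Eq. (12): savings compared to the total absence of natural light
    return (C_MAX - C) / C_MAX

def r2(C):
    # Eq. (13): savings compared to the north-facing reference case
    return (C_REF - C) / C_REF

for exposure, C in [("north", 633.0), ("east", 411.0), ("south", 183.0), ("west", 416.0)]:
    print(f"{exposure:>5}: R1 = {r1(C):5.1%}, R2 = {r2(C):5.1%}")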
Table 2 summarizes the monthly and annual electricity consumption as a function of the orientation and area of the window; the same table also reports the values of the indices R1 and R2 for the various configurations.

Figure 9. Local illuminance at point 7 as a function of time, Cosenza.
Figure 10. Local illuminance at point 8 as a function of time, Cosenza.

Table 2. Electrical energy consumption (kWh) in a room 6 × 6 × 3 m³ of a building located in Cosenza.
Exposure | J | F | M | A | M | J | J | A | S | O | N | D | Year | R1 (%) | R2 (%)
Days | 21 | 20 | 22 | 19 | 22 | 21 | 21 | 22 | 21 | 22 | 21 | 17 | 249 | - | -
Reference window 1.2 × 4.2 m = 5.04 m² (14 % of floor area):
North | 54.4 | 51.8 | 57.0 | 45.1 | 52.3 | 54.4 | 54.4 | 54.4 | 52.3 | 57.0 | 54.4 | 44.0 | 633 | 26 | 0
East | 29.5 | 30.2 | 35.6 | 34.9 | 40.4 | 38.5 | 38.5 | 40.4 | 34.0 | 33.3 | 29.5 | 25.7 | 411 | 52 | 35
South | 4.5 | 0.0 | 0.0 | 20.5 | 38.0 | 43.1 | 43.1 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 183 | 78 | 71
West | 29.5 | 28.1 | 35.6 | 32.8 | 40.4 | 45.4 | 43.1 | 40.4 | 34.0 | 33.3 | 29.5 | 23.9 | 416 | 51 | 34
Window 1.8 × 4.2 m = 7.56 m² (21 % of floor area):
North | 45.4 | 38.9 | 47.5 | 41.0 | 47.5 | 45.4 | 47.6 | 47.5 | 45.3 | 38.0 | 38.5 | 36.7 | 520 | 39 | 18
East | 20.4 | 21.6 | 21.4 | 24.6 | 35.6 | 36.3 | 36.3 | 30.9 | 20.4 | 23.8 | 18.2 | 16.5 | 306 | 64 | 52
South | 0.0 | 0.0 | 0.0 | 0.0 | 23.8 | 29.5 | 24.9 | 4.7 | 0.0 | 0.0 | 0.0 | 0.0 | 83 | 90 | 86
West | 20.4 | 21.6 | 21.4 | 26.7 | 33.3 | 36.3 | 34.0 | 30.8 | 20.4 | 23.7 | 20.4 | 16.5 | 306 | 64 | 52
Window 1.8 × 5.4 m = 9.72 m² (27 % of floor area):
North | 40.8 | 34.5 | 38.0 | 32.8 | 42.7 | 40.8 | 40.8 | 45.1 | 36.3 | 38.0 | 36.3 | 33.0 | 459 | 46 | 27
East | 18.1 | 15.1 | 21.4 | 20.5 | 28.5 | 29.5 | 27.2 | 23.7 | 20.4 | 16.6 | 15.8 | 14.6 | 252 | 70 | 60
South | 0.0 | 0.0 | 0.0 | 0.0 | 9.5 | 24.9 | 13.6 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 48.1 | 94 | 92
West | 18.1 | 15.1 | 21.3 | 20.5 | 28.5 | 31.7 | 27.2 | 23.8 | 20.4 | 16.6 | 18.1 | 14.6 | 256 | 70 | 59
Window 2.4 × 5.4 m = 12.96 m² (36 % of floor area):
North | 40.8 | 34.6 | 38.0 | 32.8 | 38.0 | 40.8 | 40.8 | 38.0 | 34.0 | 38.0 | 36.2 | 33.0 | 445 | 48 | 30
East | 18.1 | 15.1 | 19.0 | 18.5 | 28.5 | 27.2 | 27.2 | 23.8 | 20.4 | 16.6 | 15.9 | 14.7 | 245 | 71 | 61
South | 0.0 | 0.0 | 0.0 | 0.0 | 4.7 | 9.0 | 6.8 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 20.6 | 98 | 97
West | 18.1 | 15.1 | 19.0 | 18.5 | 28.5 | 27.2 | 27.2 | 23.8 | 20.4 | 16.6 | 13.6 | 14.7 | 243 | 72 | 61

From Table 2 it is observed that the index R1 assumes values:
a. between 26 % and 48 % for the northern exposure;
b. between 52 % and 72 % for the east and west exposures;
c. between 78 % and 98 % for the southern exposure.
The index R2 assumes values:
a. between 0 % and 30 % for the northern exposure;
b. between 35 % and 61 % for the east and west exposures;
c. between 71 % and 97 % for the southern exposure.

The same analysis was repeated with reference to the city of Milan, in order to evaluate the effects of latitude and climate zone (Cosenza belongs to climate zone C and Milan to climate zone E) on electricity consumption. The results are summarized in Table 3. A comparison of electricity consumption between Milan and Cosenza shows that consumption is lower in Milan. From the Satel-Light database it can be observed that Cosenza has an annual cumulative diffuse illuminance on the horizontal plane approximately 6.5 % greater than that of Milan (37 656 klux·hour versus 35 345 klux·hour). Furthermore, Cosenza has an annual cumulative direct illuminance on the vertical plane about 13 % greater than that of Milan (44 420 klux·hour versus 39 235 klux·hour). However, the sky in Milan appears brighter because of the lower solar trajectory; this phenomenon causes greater internal natural lighting and therefore lower electricity consumption.

Figure 11 a) and b) shows the trends of the energy saving indices R1 and R2, respectively, as functions of the window area and orientation and of the location. The analysis shows similar values of the indices for the north, east and west exposures in the two locations, whereas the values differ considerably for a southern exposure of the window. This highlights that latitude has a significant effect on natural lighting, and consequently on electricity consumption, only when the window is exposed to the south.
In the other cases this effect is completely negligible. These results are useful for evaluating the natural illuminance available inside an environment and how much it varies with the surface and orientation of the window. They could also suggest control techniques for dimmable artificial lighting systems, adapting the artificial lighting to the natural lighting present in the environment.

4. Conclusions
The illuminance calculation code used in this article is INLUX-DBR. The code has been experimentally validated through measurements taken in a scaled room of a building located in Osaka (Japan). Estimates of the electricity consumption of artificial lights were performed assuming switching on at critical times. The luminance of the sky is given by the Perez model, and the luminance distribution on the interior walls of the room is obtained using the radiosity model. The building in question is an office, placed alternatively in two Italian locations: Cosenza and Milan. Estimates were made by changing the window size and orientation, analysing the behaviour with the window on the four orientations coinciding with the cardinal points. For the city of Cosenza, the electrical savings obtained varied between 26 % and 48 % for glazed surfaces exposed to the north, between 52 % and 72 % to the east and west, and between 78 % and 97 % to the south. The variations refer to different window sizes, ranging from 14 % to 36 % of the room's floor area.

Table 3. Electrical energy consumption (kWh) in a room 6 × 6 × 3 m³ of a building located in Milan.
Exposure | J | F | M | A | M | J | J | A | S | O | N | D | Year | R1 (%) | R2 (%)
Days | 21 | 20 | 22 | 19 | 22 | 21 | 21 | 22 | 21 | 22 | 21 | 17 | 249 | - | -
Reference window 1.2 × 4.2 m = 5.04 m² (14 % of floor area):
North | 56.7 | 51.8 | 57.0 | 49.2 | 52.3 | 49.9 | 49.9 | 54.6 | 54.4 | 57.0 | 54.4 | 44.0 | 631 | 27 | -
East | 34.0 | 28.0 | 33.2 | 30.8 | 40.4 | 38.5 | 38.5 | 35.6 | 34.0 | 30.8 | 31.7 | 23.9 | 400 | 53 | 37
South | 0.0 | 0.0 | 0.0 | 8.2 | 35.6 | 36.3 | 36.3 | 19.0 | 0.0 | 0.0 | 0.0 | 0.0 | 135 | 84 | 79
West | 31.7 | 28.0 | 33.2 | 30.8 | 40.4 | 38.5 | 38.5 | 35.6 | 34.0 | 30.9 | 29.5 | 22.0 | 393 | 54 | 38
Window 1.8 × 4.2 m = 7.56 m² (21 % of floor area):
North | 52.0 | 38.8 | 42.7 | 41.0 | 47.5 | 45.3 | 45.3 | 47.5 | 43.1 | 38.0 | 52.2 | 38.5 | 532 | 38 | 16
East | 25.0 | 19.4 | 21.4 | 22.6 | 30.9 | 34.0 | 31.7 | 28.5 | 20.4 | 21.4 | 24.9 | 14.7 | 295 | 66 | 53
South | 0.0 | 0.0 | 0.0 | 0.0 | 4.7 | 18.1 | 9.1 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 32 | 96 | 95
West | 20.4 | 19.4 | 21.4 | 22.6 | 30.9 | 31.7 | 31.7 | 28.5 | 20.4 | 19.0 | 20.4 | 16.5 | 283 | 67 | 55
Window 1.8 × 5.4 m = 9.72 m² (27 % of floor area):
North | 40.8 | 34.6 | 38.0 | 36.9 | 38.0 | 38.5 | 38.5 | 40.4 | 36.3 | 38.0 | 38.5 | 33.0 | 452 | 47 | 28
East | 18.1 | 15.1 | 19.0 | 18.5 | 28.5 | 29.5 | 24.9 | 21.4 | 18.1 | 16.6 | 18.1 | 12.8 | 241 | 72 | 62
South | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 6.8 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 6.8 | 99 | 99
West | 18.1 | 15.1 | 16.6 | 18.5 | 23.7 | 27.2 | 22.3 | 21.4 | 18.1 | 16.6 | 15.9 | 12.8 | 227 | 74 | 64
Window 2.4 × 5.4 m = 12.96 m² (36 % of floor area):
North | 40.8 | 34.6 | 38.0 | 32.8 | 38.0 | 38.5 | 36.3 | 35.6 | 36.3 | 38.0 | 38.5 | 33.0 | 441 | 49 | 30
East | 18.1 | 15.1 | 16.6 | 18.5 | 23.8 | 24.9 | 22.7 | 21.4 | 18.1 | 16.6 | 18.1 | 11.0 | 225 | 74 | 37
South | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0 | 100 | 100
West | 18.1 | 15.1 | 16.6 | 18.5 | 23.8 | 24.9 | 22.7 | 21.4 | 15.9 | 14.3 | 15.9 | 12.8 | 220 | 74 | 38

Figure 11. a) Saving index R1 as a function of window area and orientation for Cosenza and Milan; b) saving index R2 as a function of window area and orientation for Cosenza and Milan.
With reference to the city of Milan, electricity savings vary between 27 % and 49 % for a window surface exposed to the north, between 53 % and 74 % to the east and west, and between 84 % and 100 % to the south. Of course, if the building envelope has a very large window area, using an intelligent management system for artificial light is very convenient. The results show, however, that even for the smallest window considered (14 % of the floor area) the electricity savings are considerable: in particular, up to 84 % for the city of Milan. The results obtained for Milan are better than those obtained for Cosenza, which is at a lower latitude, although the differences are small; the results can therefore be considered similar for all areas within this latitude range. The numerical results obtained can serve as a reference for making proper use of daylight and reducing the unnecessary use of electricity.

Nomenclature
(a, b, c, d, e); (a_i, b_i, c_i', d_i) - coefficients used in the Perez sky model
Φ, f - functions used in the Perez sky model
k - coefficient used in the illuminance balance
G_b0 - direct irradiance on the horizontal plane, W/m²
G_d0 - diffuse irradiance on the horizontal plane, W/m²
L_p - luminance of a point of the sky, cd/m²
L_g - luminance of a point of the ground, cd/m²
l_r - relative luminance
L_Z - zenith luminance, cd/m²
Z_s - sun zenith angle, rad
E - illuminance of the surface element, lx
E_vs - solar direct illuminance, lx
A - area of the surface element, m²
n - number of opaque surface elements inside the room
m - number of glazed surface elements inside the room
p - number of ground surface elements
i - index of opaque surface elements
w - index of glazed surface elements
g - index of ground surface elements
C - yearly energy consumption in the case under examination, kWh/year
C_max - yearly energy consumption in the absence of natural illuminance, kWh/year
C_ref - yearly energy consumption in the reference case, kWh/year
R1 - percentage of energy savings compared to the case in absence of natural illuminance
R2 - percentage of energy savings compared to a reference case
NMBE - normalized mean bias error
RMSEP - root mean square error percentage
V_calc - calculated data
V_meas - measured data
N - total number of data

Greek symbols
α - elevation angle of a point of the sky, rad
α_s - sun elevation angle, rad
α_g - elevation angle of a point of the ground, rad
Δ - sky brightness
ε - clearness index
ζ - angular distance between the sun and a point of the sky, rad
ρ - reflectivity
φ - azimuth of a point of the sky, rad
φ_s - solar azimuth, rad
φ_g - azimuth of a point of the ground, rad
τ - transmissivity
θ_s - angle between the direction of solar radiation and the normal direction of the surface, rad

Acknowledgement
The author F. Nicoletti thanks Regione Calabria, PAC Calabria 2014-2020 Asse Prioritario 12, Azione B) 10.5.12, for funding the research.

References
[1] J. Svatos, J. Holub, T. Pospisil, A measurement system for the long-term diagnostics of the thermal and technical properties of wooden houses, Acta IMEKO 9 (2020) 3, pp. 3-10. DOI: 10.21014/acta_imeko.v9i3.762
[2] R. Bruno, P. Bevilacqua, A. Rollo, F. Barreca, N. Arcuri, A novel bio-architectural temporary housing designed for the Mediterranean area: theoretical and experimental analysis, Energies 15(9) (2022), 3243. DOI: 10.3390/en15093243
[3] R. Bruno, P. Bevilacqua, V. Ferraro, N. Arcuri, Reflective thermal insulation in non-ventilated air-gaps: experimental and theoretical evaluations on the global heat transfer coefficient, Energy Build. 236 (2021), 110769. DOI: 10.1016/j.enbuild.2021.110769
[4] R. Bruno, P. Bevilacqua, N. Arcuri, Assessing cooling energy demands with the EN ISO 52016-1 quasi-steady approach in the Mediterranean area, J. Build. Eng. 24 (2019), 100740. DOI: 10.1016/j.jobe.2019.100740
[5] R. Bruno, V. Ferraro, P. Bevilacqua, N. Arcuri, On the assessment of the heat transfer coefficients on building components: a comparison between modeled and experimental data, Build. Environ. 216 (2022), 108995. DOI: 10.1016/j.buildenv.2022.108995
[6] G. Nicoletti, R. Bruno, P. Bevilacqua, N. Arcuri, G. Nicoletti, A second law analysis to determine the environmental impact of boilers supplied by different fuels, Processes 9(1) (2021), 113. DOI: 10.3390/pr9010113
[7] A. Manasrah, M. Masoud, Y. Jaradat, P. Bevilacqua, Investigation of a real-time dynamic model for a PV cooling system, Energies 15(5) (2022), 1836. DOI: 10.3390/en15051836
[8] P. Bevilacqua, A. Morabito, R. Bruno, V. Ferraro, N. Arcuri, Seasonal performances of photovoltaic cooling systems in different weather conditions, J. Clean. Prod. 272 (2020), 122459. DOI: 10.1016/j.jclepro.2020.122459
[9] P. Sdringola, S. Proietti, U. Desideri, G. Giombini, Thermo-fluid dynamic modeling and simulation of a bioclimatic solar greenhouse with self-cleaning and photovoltaic glasses, Energy Build. 68, Part A (2014), pp. 183-195. DOI: 10.1016/j.enbuild.2013.08.011
[10] P. Bevilacqua, The effectiveness of green roofs in reducing building energy consumptions across different climates. A summary of literature results, Renew. Sustain. Energy Rev. 151 (2021), 111523. DOI: 10.1016/j.rser.2021.111523
[11] P. Bevilacqua, R. Bruno, N. Arcuri, Green roofs in a Mediterranean climate: energy performances based on in-situ experimental data, Renew. Energy 152 (2020), pp. 1414-1430. DOI: 10.1016/j.renene.2020.01.085
[12] J. Szyszka, P. Bevilacqua, R. Bruno, An innovative trombe wall for winter use: the thermo-diode trombe wall, Energies 13(9) (2020), 2188. DOI: 10.3390/en13092188
[13] S. Serroni, M. Arnesano, L. Violini, G. M. Revel, An IoT measurement solution for continuous indoor environmental quality monitoring for buildings renovation, Acta IMEKO 10 (2021) 4, pp. 230-238. DOI: 10.21014/acta_imeko.v10i4.1182
[14] F. Lamonaca, C. Scuro, P. F. Sciammarella, R. S. Olivito, D. Grimaldi, D. L. Carnì, A layered IoT-based architecture for a distributed structural health monitoring system, Acta IMEKO 8 (2019) 2, pp. 45-52. DOI: 10.21014/acta_imeko.v8i2.640
[15] A. Ahmad, A. Kumar, O. Prakash, A. Aman, Daylight availability assessment and the application of energy simulation software - a literature review, Mater. Sci. Energy Technol. 3 (2020). DOI: 10.1016/j.mset.2020.07.002
[16] E. Guerry, C. D. Galatanu, L. Canale, G. Zissis, Optimizing the luminous environment using DIALux software at "Constantin and Elena" elderly house - study case, Procedia Manufacturing (2019). DOI: 10.1016/j.promfg.2019.02.241
[17] X. Yu, Y. Su, X. Chen, Application of RELUX simulation to investigate energy saving potential from daylighting in a new educational building in UK, Energy Build. 74 (2014).
DOI: 10.1016/j.enbuild.2014.01.024
[18] Spatial distribution of daylight - luminance distributions of various reference skies, CIE Publication 110, Central Bureau of the CIE, Vienna, Austria, 1994; Color Res. Appl. 20. DOI: 10.1002/col.5080200119
[19] A. De Rosa, V. Ferraro, N. Igawa, D. Kaliakatsos, V. Marinelli, INLUX: a calculation code for daylight illuminance predictions inside buildings and its experimental validation, Build. Environ. 44 (2009). DOI: 10.1016/j.buildenv.2008.11.014
[20] V. Ferraro, N. Igawa, V. Marinelli, INLUX DBR - a calculation code to calculate indoor natural illuminance inside buildings under various sky conditions, Energy (2010). DOI: 10.1016/j.energy.2010.05.021
[21] Illumination; its distribution and measurement, Nature 88(2194) (1911), pp. 72-73. DOI: 10.1038/088072a0
[22] A. Dresler, The "reflected component" in daylighting design, Transactions of the Illuminating Engineering Society 19(2) (1954), pp. 50-60. DOI: 10.1177/147715355401900203
[23] R. G. Hopkinson, J. Longmore, P. Petherbridge, An empirical formula for the computation of the indirect component of daylight factor, Transactions of the Illuminating Engineering Society 19(7) (1954), pp. 201-219. DOI: 10.1177/147715355401900701
[24] P. R. Tregenza, Modification of the split-flux formulae for mean daylight factor and internal reflected component with large external obstructions, Lighting Research & Technology 21(3) (1989), pp. 125-128. DOI: 10.1177/096032718902100305
[25] M. E. Aizlewood, P. J. Littlefair, Daylight prediction methods: a survey of their use, United Kingdom, 1994.
[26] N. Ruck, Ø. Aschehoug, S. Aydinli, J. Christoffersen, I. Edmonds, R. Jakobiak, M. Kischkoweit-Lopin, M. Klinger, E. Lee, G. Courret, L. Michel, J.-L. Scartezzini, S. Selkowitz, Daylight in buildings - a source book on daylighting systems and components.
[27] A. Nabil, J. Mardaljevic, Useful daylight illuminances: a replacement for daylight factors, Energy Build. 38 (2006). DOI: 10.1016/j.enbuild.2006.03.013
[28] K. Alhagla, A. Mansour, R. Elbassuoni, Optimizing windows for enhancing daylighting performance and energy saving, Alexandria Engineering Journal 58(1) (2019), pp. 283-290. DOI: 10.1016/j.aej.2019.01.004
[29] A. Tzempelikos, A. Athienitis, The effect of shading design and control on building cooling demand, 1st Int. Conference on Passive and Low Energy Cooling for the Built Environment, Santorini, Greece.
[30] M. Krarti, P. M. Erickson, T. C. Hillman, A simplified method to estimate energy savings of artificial lighting use from daylighting, Building and Environment 40(6) (2005), pp. 747-754. DOI: 10.1016/j.buildenv.2004.08.007
[31] W. J. Hee, M. A. Alghoul, B. Bakhtyar, O. Elayeb, M. A. Shameri, M. S. Alrubaih, K. Sopian, The role of window glazing on daylighting and energy saving in buildings, Renewable and Sustainable Energy Reviews 42 (2015), pp. 323-343. DOI: 10.1016/j.rser.2014.09.020
[32] J. W. Moon, Y. K. Baik, S. Kim, Operation guidelines for daylight dimming control systems in an office with lightshelf configurations, Building and Environment 180 (2020), 106968. DOI: 10.1016/j.buildenv.2020.106968
[33] Z. S. Zomorodian, M. Tahsildoost, Assessing the effectiveness of dynamic metrics in predicting daylight availability and visual comfort in classrooms, Renewable Energy 134 (2019), pp. 669-680. DOI: 10.1016/j.renene.2018.11.072
[34] S. M. Yacine, Z. Noureddine, B. E. A. Piga, E. Morello, D. Safa, Towards a new model of light quality assessment based on occupant satisfaction and lighting glare indices,
Energy Procedia 122 (2017), pp. 805-810. DOI: 10.1016/j.egypro.2017.07.408
[35] J. K. Day, B. Futrell, R. Cox, S. N. Ruiz, Blinded by the light: occupant perceptions and visual comfort assessments of three dynamic daylight control systems and shading strategies, Building and Environment 154 (2019), pp. 107-121. DOI: 10.1016/j.buildenv.2019.02.037
[36] D. H. W. Li, A. C. K. Cheung, S. K. H. Chow, E. W. M. Lee, Study of daylight data and lighting energy savings for atrium corridors with lighting dimming controls, Energy and Buildings 72 (2014), pp. 457-464. DOI: 10.1016/j.enbuild.2013.12.027
[37] L. T. Doulos, A. Tsangrassoulis, P. A. Kontaxis, A. Kontadakis, F. V. Topalis, Harvesting daylight with LED or T5 fluorescent lamps? The role of dimming, Energy and Buildings 140 (2017), pp. 336-347. DOI: 10.1016/j.enbuild.2017.02.013
[38] L. Bellia, F. Fragliasso, Automated daylight-linked control systems performance with illuminance sensors for side-lit offices in the Mediterranean area, Automation in Construction 100 (2019), pp. 145-162. DOI: 10.1016/j.autcon.2018.12.027
[39] CIE, Spatial distribution of daylight - CIE standard general sky, CIE Standard S011/E, Vienna, 2003.
[40] R. Perez, R. Seals, J. Michalsky, All-weather model for sky luminance distribution - preliminary configuration and validation, Sol. Energy 50(3) (1993), pp. 235-245. DOI: 10.1016/0038-092x(93)90017-i
[41] N. Igawa, Y. Koga, T. Matsuzawa, H. Nakamura, Models of sky radiance distribution and sky luminance distribution, Sol. Energy 77(2) (2004), pp. 137-157. DOI: 10.1016/j.solener.2004.04.016
[42] ISO, Spatial distribution of daylight - CIE standard general sky, ISO Standard 15469, Geneva, 2004.
[43] P. R. Tregenza, Subdivision of the sky hemisphere for luminance measurements, Light. Res. Technol. 19(1) (1987). DOI: 10.1177/096032718701900103
[44] S@tel-Light. Online [accessed 29 November 2022]: http://satellight.entpe.fr/
[45] R. Perez, P. Ineichen, R. Seals, J. Michalsky, R. Stewart, Modeling daylight availability and irradiance components from direct and global irradiance, Sol. Energy 44 (1990). DOI: 10.1016/0038-092x(90)90055-h
[46] Italian Health Ministerial Decree, 5 July 1975.
Digital signal processing functions for ultralow frequency calibrations
ACTA IMEKO, ISSN: 2221-870X, December 2020, Volume 9, Number 5, 374-378
Henrik Ingerslev 1, Søren Andresen 2, Jacob Holm Winther 3
1 Brüel & Kjær, Danish Primary Laboratory for Acoustics, Denmark, henrik.ingerslev@bksv.com
2 Hottinger, Brüel and Kjær, Denmark, soren.andresen@bksv.com
3 Brüel & Kjær, Danish Primary Laboratory for Acoustics, Denmark, jacobholm.winther@bksv.com

Abstract: The demand from industry to produce accurate acceleration measurements down to ever lower frequencies and with ever lower noise is increasing [1][2]. Different vibration transducers are used today for many different purposes within this area, like detection of and warning for earthquakes [3], detection of nuclear testing [4], and monitoring of the environment [5]. Accelerometers for such purposes must be calibrated in order to yield trustworthy results and provide traceability to the SI system accordingly [6]. For these calibrations to be feasible, suitable ultra-low-noise accelerometers and/or signal processing functions are needed [7]. Here we present two digital signal processing (DSP) functions designed to measure ultra-low-noise acceleration in calibration systems. The DSP functions use dual-channel signal analysis on signals from two accelerometers measuring the same stimuli, and use the coherence between the two signals to reduce noise. Simulations show that the two DSP functions estimate calibration signals better than the standard analysis. The results presented here are intended to be used in key comparison studies of accelerometer calibration systems [8][9], and may help extend the current general low-frequency range from e.g. 100 mHz down to ultra-low frequencies of around 10 mHz, possibly using much the same instrumentation.

Keywords: low-noise; coherent power; coherent phase; calibration; dual-channel; ultra-low frequencies

1. Introduction
In the field of dual-channel signal analysis there are some very powerful functions for analysing signals, such as the well-known frequency response function and the coherence. But there are also other functions, like the coherent power function (COP) and the non-coherent power function, which are very powerful for decomposing noisy signals into their coherent and non-coherent parts [10][11][12].
Consider an accelerometer calibration setup with two accelerometers mounted close to each other and measuring the same stimuli. They will both measure the acceleration of the shaker, but since they are different sensors with different conditioning, they will have different noise. The two signals will have a coherent part, which is the acceleration signal, and a non-coherent part, which is the noise. Hence, the COP can be a powerful tool for extracting the signal from the noise and thereby increasing the measuring accuracy of the power. A similar function for increasing the measuring accuracy of the phase, by separating the coherent phase from the non-coherent phase, is derived in the next section and is called the coherent phase (or argument) function (COA).
For the COA to work properly, it is crucial that the signal applied to the shaker is a continuous signal, like a sine or a multi-sine, and that the frequencies of the sines are very precise and phase-synchronized with the frequencies of the Fourier transformation, to prevent the phase from drifting or even making jumps. More details on this are given in section 2.2.
The two DSP functions analysed in this article may prove relevant for e.g. key comparisons of calibration systems down to extremely low frequencies of around 10 mHz, where noise becomes a real challenge [7][8][9]. The degree to which the COP and the COA can separate a signal into coherent and non-coherent parts increases with the length of the measurement, and generally depends on parameters such as how many time samples the measurement is divided into, how long each time sample is, the sampling rate, and the Fourier transform used.

2. Digital signal processing functions theory
In this section the theory for the two DSP functions is outlined. The two functions are based on dual-channel signal analysis and can give a better estimate of signals in very noisy environments than standard signal analysis. The first function is the COP, for estimating the power or amplitude of the signal; the second is the COA, for estimating the phase. Both functions rely on the coherence between the two signals.

2.1. Coherent power function
Consider two sensors both measuring the same stimuli and positioned close enough for their mutual transfer function to be considered unity. As illustrated in Figure 1, the signal without noise, called u(t), and the noise from each sensor, called n(t) and m(t), yield the output signals from the two sensors, called a(t) and b(t). Now consider j = 1…N discrete time samples measured with the two sensors:

$a_j(t_i) = u_j(t_i) + n_j(t_i)$  (1)

$b_j(t_i) = u_j(t_i) + m_j(t_i)$  (2)

Here t_i is the discrete time in each time sample, u_j is the discrete signal, and n_j and m_j are the discrete noise in each time sample. By discrete Fourier transformation of equations (1) and (2) we get

$A_j(f_k) = U_j(f_k) + N_j(f_k)$  (3)

$B_j(f_k) = U_j(f_k) + M_j(f_k)$  (4)

where f_k is the discrete frequency and U_j, N_j and M_j are the discrete Fourier transforms of u_j, n_j and m_j, respectively.

Figure 1: Illustration of the signals used in the dual-channel signal analysis and in the derivation of the two DSP functions. u(t) is the signal we want to measure, and n(t) and m(t) are the noise contributions from each sensor, which yield the two output signals a(t) and b(t).

The cross spectrum is given by

$S_{AB}(f_k) = \frac{1}{N}\sum_{j=0}^{N-1} A_j(f_k)\, B_j^*(f_k)$  (5)

for N → ∞, where * denotes the complex conjugate. By inserting equations (3) and (4) into equation (5), and using that U_j, N_j and M_j are uncorrelated, the cross spectrum can be written

$S_{AB}(f_k) = \frac{1}{N}\sum_{j=0}^{N-1} U_j(f_k)\, U_j^*(f_k) \equiv S_{UU}(f_k)$  (6)

where S_UU(f_k) is the power spectrum of the signal without noise, i.e. the coherent power. Therefore, the COP can in this setup be given by:

$\mathrm{COP}(f_k) = S_{AB}(f_k)$ .  (7)
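To make the averaging in eqs. (5)-(7) concrete, the following minimal Python sketch estimates the amplitude of a common sine buried in independent sensor noise; the record length and sampling values mirror the 100 s records used in the simulations of section 3, but are otherwise arbitrary assumptions:

import numpy as np

rng = np.random.default_rng(0)
N, P, fs = 1024, 2048, 20.48      # records, points per record, sampling rate (Hz)
t = np.arange(P) / fs             # 100 s per record
f_sig, U0, N0 = 1.0, 1.0, 3.0     # bin-aligned signal frequency, amplitudes (SNR = 0.33)

cop = np.zeros(P // 2 + 1, dtype=complex)
for _ in range(N):
    u = U0 * np.sin(2 * np.pi * f_sig * t)   # common (coherent) signal u
    a = u + N0 * rng.standard_normal(P)      # sensor A: u + n
    b = u + N0 * rng.standard_normal(P)      # sensor B: u + m
    cop += np.fft.rfft(a) * np.conj(np.fft.rfft(b)) / N   # eq. (5): S_AB -> S_UU

k = round(f_sig * P / fs)                    # FFT bin of the sine
amp = 2 * np.sqrt(np.abs(cop[k])) / P        # amplitude recovered from the coherent power
print(f"coherent amplitude estimate: {amp:.4f} (true value {U0})")

Because the noise terms are uncorrelated between the two channels, their contribution to the averaged cross spectrum decays with the number of records N, while the coherent signal power remains.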
2.2. Coherent phase function
Consider the following function, for N → ∞:

$D_{AB}(f_k) = \frac{1}{N}\sum_{j=0}^{N-1} A_j(f_k)\, B_j(f_k)$  (8)

It looks like the cross spectrum from equation (5), but without the complex conjugation. This "non-conjugated cross spectrum" is very useful for deriving a function for measuring the phase of the coherent signal. Similarly to the derivation of the COP, we can insert equations (3) and (4) into (8) and use that U_j, N_j and M_j are uncorrelated. The non-conjugated cross spectrum can then be written (the f_k dependence is omitted for brevity):

$D_{AB} = \frac{1}{N}\sum_{j=0}^{N-1} U_j U_j$  (9)

$= \frac{1}{N}\sum_{j=0}^{N-1} |U_j|^2 \exp(2 i \angle U_j)$  (10)

$\simeq \frac{1}{N}\sum_{j=0}^{N-1} |U_j|^2 \, \exp\!\left(2 i \, \frac{1}{N}\sum_{j=0}^{N-1} \angle U_j\right)$  (11)

$= S_{UU}\, \exp\!\left(2 i \, \overline{\angle U}\right)$  (12)

From equation (10) to (11) we have approximated the summation of vectors with length |U_j|² and angle 2∠U_j by vectors with the correct length but all with the mean angle 2·(mean ∠U). Figure 2 illustrates this approximation and shows that, for relatively small changes in angle from sample to sample, it is a good approximation. In calibration applications the signal measures the acceleration of the shaker, and by applying to the shaker a sine or multi-sine whose frequencies are exactly equal to, and phase-synchronized with, the Fourier frequencies, the phase can be kept steady from sample to sample without drifting; the approximation will therefore be very good in such calibration applications.
In equation (12), the quantity denoted $\overline{\angle U}$ is the mean phase of the coherent signal. Therefore the coherent phase function can be given by $\mathrm{COA} = \overline{\angle U}$, and by rewriting equation (12) the COA can be expressed as:

$\mathrm{COA}(f_k) = \frac{1}{2}\,\mathrm{imag}\!\left(\ln\!\left(D_{AB}(f_k)\right)\right)$ .  (13)

In the derivation of equation (13) we have used that the signal power S_UU is purely real and can therefore be omitted.

Figure 2: Schematic graphs illustrating the approximation from equation (10) to (11). The arrows represent U_j, and the x- and y-axes are the real and imaginary parts of U_j. (a) shows the correct summation, where each arrow has length |U_j|² and angle 2∠U_j as in equation (10); (b) shows the approximate summation, where each arrow has the correct length but the mean angle as in equation (11). As can be seen from (a) to (b), if U_j does not change too much between samples, the approximation is good.
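A corresponding minimal sketch of eq. (13), under the same assumed record parameters as the COP sketch above. Note that, because the angle is doubled in D_AB, the COA recovers the phase modulo π, and that for a sine the phase of U at the signal bin is φ0 − π/2:

import numpy as np

rng = np.random.default_rng(1)
N, P, fs = 1024, 2048, 20.48
t = np.arange(P) / fs
f_sig, phi0, U0, N0 = 1.0, 0.7, 1.0, 3.0   # bin-aligned, phase-synchronized sine

D = np.zeros(P // 2 + 1, dtype=complex)
for _ in range(N):
    u = U0 * np.sin(2 * np.pi * f_sig * t + phi0)
    a = u + N0 * rng.standard_normal(P)
    b = u + N0 * rng.standard_normal(P)
    D += np.fft.rfft(a) * np.fft.rfft(b) / N   # eq. (8): no complex conjugation

k = round(f_sig * P / fs)
coa = 0.5 * np.imag(np.log(D[k]))              # eq. (13)
print(f"COA = {coa:.4f} rad; expected phi0 - pi/2 = {phi0 - np.pi / 2:.4f} rad")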
3. Simulations
In this section we test the COP and the COA functions on simulated data. We calculate the discrepancy between the functions' estimates of the signal amplitude and phase and the true values, and we compare this with standard signal analysis, which is to average the amplitude and phase over all the samples.
When measuring the cross spectrum or the non-conjugated cross spectrum for finding the COP and the COA, only a finite number of samples N is measured, which therefore only yields an estimate of the COP and the COA. Hence, the more samples, the more precise the estimate will be. In the following, the simulations are based on a 102 400 s (~28 h) time record divided into N = 1024 samples of 100 s each, with 2048 discrete measurement points in each sample, and a Fourier transform from 10 mHz to 10.24 Hz with a 10 mHz step.
How well the COP and COA work is estimated by representing a_j and b_j from equations (1) and (2) with simulated data. Here the signal u_j is a multi-sine with l = 1…M frequencies f_l and phases φ_l, all with amplitude U_0:

$u_j(t_i) = U_0 \sum_{l=1}^{M} \sin(2\pi f_l t_i + \phi_l)$  (14)

The noise terms n_j and m_j are randomly generated white noise with amplitude N_0, and the signal-to-noise ratio is given by:

$\mathrm{SNR} = U_0 / N_0$ .  (15)

The frequencies f_l of the multi-sine in equation (14) must be precisely equal or synchronized to a subset of the Fourier transformation frequencies f_k from equations (3) and (4); otherwise the COA will drift. This requirement is easily met in the simulations presented here, since the subset of frequencies f_l can be set to be identical to some of the frequencies f_k; in real measurements, however, this requirement might be challenging to meet.

Figure 3: Based on simulated data, the coherent power function and the coherent phase function are tested for their strength in estimating the amplitude and phase of sine waves in noisy data. The graphs show the deviation of the amplitude and phase from the true values. (a) shows the mean amplitude, 0.5(Ā + B̄), and the coherent amplitude, √COP. (b) shows the mean phase, 0.5(∠Ā + ∠B̄), and the coherent phase COA.

Figure 3(a) shows the discrepancy between the signal amplitude U_0 from equation (14) and the coherent amplitude, defined as √COP(f_k), for a signal-to-noise ratio of SNR = 0.32 (red circles). For comparison we also plot the mean amplitude (blue circles), that is, 0.5(Ā(f_k) + B̄(f_k)), where Ā(f_k) = (1/N) Σ_j |A_j(f_k)| and B̄(f_k) = (1/N) Σ_j |B_j(f_k)| are the amplitudes averaged over all samples. We plot only at the frequencies of the signal, i.e. at f_k = f_l. It is clearly seen that the COP estimates the amplitude better than the mean amplitude over the full frequency range.
Similarly, Figure 3(b) shows the discrepancy between the phase φ_l and the COA for a signal-to-noise ratio of SNR = 0.024 (red circles). For comparison we also plot the mean phase, defined as 0.5(∠Ā + ∠B̄), where ∠Ā = (1/N) Σ_j arg(A_j(f_k)) and ∠B̄ = (1/N) Σ_j arg(B_j(f_k)) are the phases averaged over all samples. It is seen that the COA estimates the phase better than the mean phase.

Figure 4: Simulated deviation in amplitude and phase versus signal-to-noise ratio: the upper graph shows the mean amplitude and the coherent amplitude defined as √COP, and the lower graph the mean phase and the coherent phase COA. Each deviation point plotted here is an average over all the frequencies from Figure 3.

The deviations plotted in the four curves in Figure 3 seem to be independent of frequency. Therefore, by averaging the deviation of each curve in Figure 3 over all the frequencies f_l, we get a mean deviation for the COP and the mean amplitude at SNR = 0.32, and a mean deviation for the COA and the mean phase at SNR = 0.024. We have done this for a range of signal-to-noise ratios from SNR = 0.01 to SNR = 10 and plotted the results in Figure 4. It shows that the COP is better than the mean amplitude from about SNR = 1, and the COA is better than the mean phase from about SNR = 0.04. The "critical" SNR at which the COP and COA functions start to have lower deviation than the simple mean values is thus different for the COP and the COA. And although the data in Figure 4 depend strongly on the length of the time record and the Fourier transform used, this trend of a lower critical SNR for the COA than for the COP was always seen and needs further investigation.

4. Summary
We have derived two DSP functions for very accurate measurements of amplitude and phase from two accelerometers measuring the same stimuli. We have tested the two DSP functions on simulated data, and our findings based on the simulations show promise for the functions as good tools for accurately measuring the amplitudes and phases of a multi-sine wave in a noisy environment.
These findings may prove useful for key comparisons of accelerometer calibration systems down to ultra-low frequencies, since for such measurements noise becomes a huge problem as the frequencies approach 10 mHz. Hence, by replacing the single accelerometer used in key comparisons with two accelerometers and by using the two DSP functions described here, it may be possible to extend the frequency range in key comparisons down to ultra-low frequencies of around 10 mHz.

5. Acknowledgements
We would like to acknowledge Torben Rask Licht and Lars Munch Kofoed for fruitful discussions. We would also like to acknowledge the EU EMPIR project "Metrology for low-frequency sound and vibration", 10ENV03 Infra-AUV, as the initial reason for this approach to optimising the method of analysis in noisy environments, found especially relevant when doing low-frequency calibration.

6. References
[1] T. Bruns, S. Gazioch, Correction of shaker flatness deviations in very low frequency primary accelerometer calibration, IOP Metrologia, vol. 53, no. 3, pp. 986 (2016).
[2] J. H. Winther, T. R. Licht, Primary calibration of reference standard back-to-back vibration transducers using modern calibration standard vibration exciters, Joint Conference IMEKO TC3, TC5, TC22, Helsinki (2017).
[3] G. Marra, C. Clivati et al., Ultra-stable laser interferometry for earthquake detection with terrestrial and submarine optical cables, Science, vol. 361, issue 6401, pp. 486-490 (2018).
[4] P. Gaebler, L. Ceranna et al., A multi-technology analysis of the 2017 North Korean nuclear test, Solid Earth, 10, pp. 59-78 (2019).
[5] R. A. Hazelwood, P. C. Macey, S. P. Robinson, L. S. Wang, Optimal transmission of interface vibration wavelets - a simulation of seabed seismic responses, J. Mar. Sci. Eng. 6, pp. 61-79 (2018).
[6] L. Klaus, M. Kobusch, Seismometer calibration using a multi-component acceleration exciter, IOP Conf. Series: Journal of Physics: Conf. Series 1065 (2018).
[7] EU EMPIR project "Metrology for low-frequency sound and vibration" (2020).
[8] BIPM key comparison CCAUV.V-K5 (2017), https://www.bipm.org/kcdb/comparison?id=453
[9] EURAMET key comparison AUV.V-K5 (2019), https://www.bipm.org/kcdb/comparison?id=1631
[10] H. Herlufsen, Dual channel FFT analysis part I, B&K Technical Review, no. 1 (1984).
[11] J. S. Bendat, A. G. Piersol, Random Data, Wiley-Interscience (1986).
[12] J. S. Bendat, A. G. Piersol, Engineering Applications of Correlation and Spectral Analysis, Wiley, New York (1993).

Efficient deep learning based data augmentation techniques for enhanced learning on inadequate medical imaging data
ACTA IMEKO, ISSN: 2221-870X, March 2022, Volume 11, Number 1, 1-6
Madipally Sai Krishna Sashank 1, Vijay Souri Maddila 1, Vikas Boddu 1, Y. Radhika 1
1 Department of Computer Science and Engineering, GITAM Institute of Technology, GITAM (Deemed to be University), Visakhapatnam 530045, Andhra Pradesh, India

Section: Research paper
Keywords: CNN; COVID-19; GAN; transfer learning; X-ray images
Citation: Madipally Sai Krishna Sashank, Vijay Souri Maddila, Vikas Boddu, Y.
Radhika, Efficient deep learning based data augmentation techniques for enhanced learning on inadequate medical imaging data, Acta IMEKO, vol. 11, no. 1, article 31, March 2022, identifier: IMEKO-ACTA-11 (2022)-01-31
Section Editor: Md Zia Ur Rahman, Koneru Lakshmaiah Education Foundation, Guntur, India
Received December 25, 2021; in final form February 17, 2022; published March 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Madipally Sai Krishna Sashank, e-mail: madipallysks@gmail.com

Abstract: The world has come to a standstill with the coronavirus taking over. In these dire times there are fewer doctors and more patients, and hence treatment is becoming more and more difficult and expensive. In recent times, computer science, machine intelligence and measurement technology have made a lot of progress in the field of medical science, aiding the automation of many medical activities. One area of progress in this regard is the automation of the detection of respiratory diseases (such as COVID-19). Many convolutional neural network (CNN) architectures and approaches have been proposed for chest X-ray classification. But a big problem still remains, and that is the minimal availability of medical X-ray images due to improper measurements. Owing to this minimal availability of chest X-ray data, most CNN classifiers do not get trained to an optimal level, and the standards required for automating the process are not met. In order to overcome this problem, we propose a new deep learning based approach for accurate measurements of physiological data.

1. Introduction
COVID-19 has reached at least 200 countries worldwide and is overwhelming healthcare infrastructures like any other emerging pandemic [1]. This pandemic has made everyone realize the importance of hygiene and of preventive measures against contracting a virus. When trying to help potential COVID-19 patients, allocating clinical resources to the most probable cases can improve the flow of treating patients. To identify the potential cases among all the cases in a hospital, an automated system that can detect the disease can provide immense help to the hospital staff. Although the method provided in this work cannot be implemented in a standalone fashion, it can be implemented with minimal interaction. Automating the recognition of the virus from X-ray images can be done by teaching the computer to recognize patterns in the X-ray images [2].
To teach a computer to recognize the patterns in an image, computer vision techniques such as convolutional neural networks (CNNs) are required. A CNN can classify given images based on training data, which is not abundantly available in the medical field. Even though the available data is increasing at a steady rate, external factors such as human errors and incorrect data can decrease the accuracy with which the CNN can classify an image. So, to prevent these problems, more data is generated using GANs on some accurate and curated data. This work implements computer vision techniques such as generative adversarial networks (GANs) and convolutional neural networks (CNNs) to classify the X-ray images. The mechanism also uses a number of conventional data augmentation techniques, such as changing the axis of the image, rotating the image, and changing the brightness and other factors of the image corresponding to the RGB values.
Generative adversarial networks (GANs) are a class of deep learning and image processing models/architectures used to generate new images similar to an already existing image dataset. The two sections of a GAN are the generator and the discriminator. The discriminator tries to classify the input images as real or fake with high accuracy, while the generator's aim is to generate new images which can trick the discriminator into believing that these newly generated images are also real. If the generator is successful in doing so, then the GAN is able to generate images which are very similar to the original images; its flow chart is shown in Figure 1.

Figure 1. Flowchart for the working of a GAN [3].
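As an illustration of this generator/discriminator game, here is a compact Keras sketch sized for the 128 × 128 × 1 X-ray images used later in this paper. The paper does not specify its exact architectures, so the layer sizes and the training loop below are illustrative assumptions, not the authors' implementation:

import numpy as np
from tensorflow.keras import layers, models

LATENT = 100  # size of the random noise vector fed to the generator

generator = models.Sequential([
    layers.Dense(16 * 16 * 128, input_shape=(LATENT,)), layers.LeakyReLU(0.2),
    layers.Reshape((16, 16, 128)),
    layers.Conv2DTranspose(64, 4, strides=2, padding="same"), layers.LeakyReLU(0.2),
    layers.Conv2DTranspose(32, 4, strides=2, padding="same"), layers.LeakyReLU(0.2),
    layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="tanh"),
])  # outputs a 128 x 128 x 1 fake image

discriminator = models.Sequential([
    layers.Conv2D(32, 4, strides=2, padding="same", input_shape=(128, 128, 1)),
    layers.LeakyReLU(0.2),
    layers.Conv2D(64, 4, strides=2, padding="same"), layers.LeakyReLU(0.2),
    layers.Flatten(), layers.Dense(1, activation="sigmoid"),  # real (1) vs fake (0)
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

discriminator.trainable = False           # freeze D inside the combined model
gan = models.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

def train_step(real_batch):
    n = len(real_batch)
    fake = generator.predict(np.random.normal(size=(n, LATENT)), verbose=0)
    # 1) the discriminator learns to separate real from generated images
    discriminator.train_on_batch(np.concatenate([real_batch, fake]),
                                 np.concatenate([np.ones(n), np.zeros(n)]))
    # 2) the generator is updated to make the (frozen) discriminator output "real"
    gan.train_on_batch(np.random.normal(size=(n, LATENT)), np.ones(n))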
Computer vision is a branch of computer science that deals with giving a computer (or a machine) the ability to read and analyse images and videos in digital format. Although it seems trivial and very easy for people to see and perceive the visual stimuli offered by their surroundings, replicating the same for a computer is a complex task which requires a deep understanding of biological vision and a digital conversion of vision perception in the dynamic and ever-changing physical world [4], [5]. Computer vision has come a long way from concept to application in the past few years and is being used in various sectors such as healthcare, robotics, astrophysics, manufacturing, transportation and agriculture. Applications such as self-driving cars, artificial-intelligence powered robots, defect detection in manufacturing, intrusion detection, and optical character recognition are possible only because of the advancement of computer vision.
A convolutional neural network is a neural network with neurons having learnable weights and biases, performing dot products and followed by non-linearities [6]. It is specifically designed for image processing, segmentation, and classification with convolutional layers. Convolutional neural networks have neurons arranged in three dimensions and multiple layers; the working flow is shown in Figure 2. Each layer of the CNN has the simple task of creating a 3D output from a 3D input with a differentiable function [7]. There are three major layers in the CNN: the convolutional layer, the pooling layer, and the fully-connected layer. Each of these three layers has a particular function: the convolutional layer performs a series of operations on the input matrix/array using filters of the same dimension; the pooling layer decreases the length and width of the input array while increasing the depth/height; this is followed by a flatten layer that transforms the 3-dimensional matrix into a 1-dimensional array, which is then fed to a fully connected layer (an artificial neural network).

Figure 2. Working of a CNN [5].
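A minimal Keras sketch of this convolution / pooling / flatten / fully-connected structure, sized for the 128 × 128 × 1 images used in this work (the filter counts are illustrative, since the paper does not list its exact CNN configuration):

from tensorflow.keras import layers, models

cnn = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 1)),  # convolutional layer
    layers.MaxPooling2D(2),                  # pooling layer: halves width and height
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),                        # 3-D feature maps -> 1-D vector
    layers.Dense(64, activation="relu"),     # fully connected layer
    layers.Dense(1, activation="sigmoid"),   # normal vs. viral pneumonia
])
cnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])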
Tang and Tang provided an end-to-end process for creating a generative adversarial classifier that can find anomalies in chest X-ray images and is trained on normal chest X-ray images [8]. The architecture proposed in the model is composed of three deep neural networks. The entire model is trained on correct (normal) images: when the model encounters a normal image it categorizes it as normal, but if it encounters an abnormal image, which it did not see in the training data, the image is categorized as abnormal. The model is trained on 6000 normal chest X-ray images, and evaluated on 1025 normal one-class chest X-ray images, 1025 chest X-ray images with lung opacities and 1000 images with lung opacities [9]. The process of creating a GAN described in detail in that paper is used in this research.
Wong and Lam researched the possible findings from the chest X-ray images of potential COVID-19 patients. The study of findings of chest radiographs in patients with bilateral lower zone consolidation showed that the frequency of findings in the chest radiographs peaked at 10-12 days after the onset of symptoms [10]-[15]. The paper also discusses matters regarding the usage of CT scans, which are more sensitive to infection in the body, and suggests the usage of RT-PCR and chest radiography testing for generating the required images. This paper gives an understanding of the selection of images that are best for analysis and testing.
Zhang and Miao observed that obtaining pixel-wise labels for the creation of a model is not possible, due to anatomy overlaps in the images and textures which cannot accurately reveal information about the patient. They proposed an image-to-image model that can segment multiple organs in a 3D CT scan. The model was trained with digitally reconstructed radiographs formed over CT volumes. The paper implements a task-driven generative adversarial network that can create X-ray images according to real images and achieve synthesis between the model and the generated images [16]-[20]. Through this work, the implementation and symbiotic relation between the model and the GAN is observed and modified to an extent.
Ke, Zhang, and Wei give a background to a neuro-heuristic approach to the classification of respiratory diseases using X-ray images. This paper [21] proposes a machine learning architecture/methodology that can be used to assist doctors in making their decisions, thus speeding up the process. The authors use a spatial analysis methodology with the main focus on hue, saturation, and brightness. The method proposed in the paper involves the use of neural networks in collaboration with heuristic search algorithms to detect degenerated lung tissues in X-ray images. First, the neural network predicts the probability of a possible respiratory disease; if this probability is significantly high, the heuristic algorithm scans and identifies the degenerated tissues in the X-ray images based on the fitness function of the algorithm. The paper [22] presents promising results, with an accuracy of over 79 % and misclassification errors of 3.23 % for false positive predictions and 3.76 % for false negative predictions.
In their paper, Chouhan and Singh present a transfer learning based approach to the classification of pneumonia-based chest X-rays. In this approach [23], the features of the X-ray images are extracted using neural network models pre-trained on the ImageNet architecture, and these features are then given as input to a classifier to classify the X-ray as pneumonia positive or pneumonia negative. The paper [24] analysed the performance of five measurement models; these models provide better measurement capability for X-ray data. The proposed deep learning based approaches are well suited for the measurement of medical images as well as their analysis.
The authors also proposed an ensemble model which was able to achieve a state-of-the-art accuracy of 96.4 % in classifying pneumonia, with a recall of 99.2 %.

2. Method
2.1. Data pre-processing
i. Pre-processing the dataset [25] as per the needs of the problem was the first step. We had 1340 images of normal X-rays and 1340 images of viral-pneumonia X-rays, each of size 1024 × 1024 × 1.
ii. To reduce the time and computation power needed for training, we reduced the size of the images from 1024 × 1024 × 1 to 128 × 128 × 1.
iii. To ensure that there was minimal information loss, we trained one CNN on the original data and another CNN on the minimized image data. The accuracy of both CNNs was almost the same, so we used the minimized image dataset for the purpose of this experiment.
2.2. Training GANs
iv. After minimizing the data, we trained two simple GANs (one on each class of images) for 200 epochs. We then generated 1000 images for each class and saved the GAN models.
v. For transfer learning, we first trained a GAN on 7864 face images [26] of size 128 × 128 × 1 for 200 epochs. After training, we saved this faces GAN model.
vi. Then we trained the faces GAN model on the normal X-ray images for 75 epochs, generated 1000 images from this GAN, and saved the model.
vii. We also trained the faces GAN model separately on the viral pneumonia X-ray images for 75 epochs, generated 1000 images from this GAN, and saved the model.
2.3. Training CNNs
viii. Our next step was to train one CNN on each of the following:
1. the original data;
2. data generated by conventional data augmentation techniques;
3. data formed by combining the original data with the data generated by the simple GAN;
4. data formed by combining the original data with the data generated by the transfer learning GAN.
ix. Since one CNN was already trained on the original data, we then trained a second CNN on the data generated by using conventional data augmentation techniques (using ImageDataGenerator(); a sketch of this step is given after this section).
x. Then we trained a third CNN on the data formed by combining the original data with the data generated by the simple GAN.
xi. As the last step, we trained a fourth CNN on the data formed by combining the original data with the data generated by the transfer learning GAN.
xii. The accuracies of all the models were computed; the results are reported in the figures below.
Note: the architecture and parameters used to train each CNN and each GAN were the same.

Figure 3. Procedure.
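A brief sketch of the conventional augmentation of step ix, using Keras' ImageDataGenerator; the specific parameter ranges are illustrative assumptions, since the paper does not report the exact values used:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=10,             # small random rotations
    width_shift_range=0.1,         # shifts along the image axes
    height_shift_range=0.1,
    brightness_range=(0.8, 1.2),   # brightness variation
    horizontal_flip=True,
    rescale=1.0 / 255,
)
# x_train, y_train (hypothetical names): arrays of shape (n, 128, 128, 1) and (n,)
# produced by the pre-processing step; flow() then yields augmented batches:
# batches = augmenter.flow(x_train, y_train, batch_size=32)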
3. Results
We trained CNNs on four different datasets, with the following results.

1. Dataset 1: original data
The plot of training and validation accuracy for the CNN trained on the original data is shown in Figure 4: final training accuracy 0.9348, final validation accuracy 0.9259. The plot of training and validation loss is shown in Figure 5: final training loss 0.1595, final validation loss 0.3016. The CNN achieved a training accuracy of 93.5 % on the original data. This suggests that the model achieves a high accuracy without any data augmentation techniques, but an accuracy of 93.5 % is not enough when classifying X-ray images in a real-time scenario.

2. Dataset 2: data generated by conventional data augmentation techniques
The plot of training and validation accuracy for the CNN trained on the second dataset is shown in Figure 6: final training accuracy 0.9013, final validation accuracy 0.9231. The plot of training and validation loss is shown in Figure 7: final training loss 0.2587, final validation loss 0.2919. The values of accuracy and loss for this model are similar to those of the model trained on the original dataset; there is not a big difference. This means that conventional data augmentation techniques might not be a feasible option for data augmentation in medical image classification.

3. Dataset 3: data formed by combining the original data with the data generated by the simple GAN
The plot of training and validation accuracy for the CNN trained on the third dataset is shown in Figure 8: final training accuracy 0.9771, final validation accuracy 1.0000. The plot of training and validation loss is shown in Figure 9: final training loss 0.0670, final validation loss 0.0281. There is a huge improvement when the images generated by a simple GAN are used along with the original images to train a CNN: the training accuracy goes up to 97.71 % and the validation accuracy is a perfect 100 %.

4. Dataset 4: data formed by combining the original data with the data generated by the transfer learning GAN
The plot of training and validation accuracy for the CNN trained on the fourth dataset is shown in Figure 10: final training accuracy 0.9786, final validation accuracy 0.9574. The plot of training and validation loss is shown in Figure 11: final training loss 0.0566, final validation loss 0.0739. There is a small improvement in the training accuracy of this CNN; however, there is a dip in the validation accuracy. This model maintained an accuracy of 100 % from epoch 1 to epoch 7, but after that there was a dip, which might be due to overfitting of the model on the data.

Figure 6. Training and validation accuracy of the CNN on the second dataset.
Figure 7. Training and validation loss of the CNN on the second dataset.
Figure 8. Training and validation accuracy of the CNN on the third dataset.
Figure 9. Training and validation loss of the CNN on the third dataset.

4. Discussions
Based on the results, it can clearly be seen that when data is generated through a GAN (simple or transfer learning), there is a huge improvement in the accuracy of the classifier when it is trained on an ensemble of generated and original images. The accuracy achieved by the model on the original data is fairly high, but not good enough to be used in a real-time medical scenario. With an accuracy of 98 %, however, a model built from the GAN-generated data can automate the process of chest X-ray classification (see Table 1). There is also scope to improve upon this method in future research, with better GAN architectures to generate new images.

5. Conclusion
In this work, a novel approach to data augmentation for medical images was proposed, which could eradicate the problem of minimal availability of chest X-ray data to some extent. A pre-processing step was performed on the original data to decrease the image size from 1024 × 1024 × 1 to 128 × 128 × 1, and it was verified that there was no loss of data during this step.
after the pre-processing step, 4 datasets were generated as described above and a cnn was trained on each of the datasets to analyse which dataset the model learned best from. the cnn learned much faster and achieved better accuracy on the datasets generated by the simple gan and the transfer learning gan, and this could be a one-stop solution for the minimal availability of chest x-ray data.

figure 10. training and validation accuracy of cnn on fourth dataset.
figure 11. training and validation loss of cnn on fourth dataset.

table 1. accuracies and losses of each dataset.
dataset      training accuracy   validation accuracy   training loss   validation loss
dataset 1*   93.48 %             92.59 %               0.1595          0.3016
dataset 2*   90.13 %             92.31 %               0.2587          0.2919
dataset 3*   97.71 %             100.0 %               0.0670          0.0281
dataset 4*   97.86 %             95.74 %               0.0566          0.0739
* dataset 1 – original data
* dataset 2 – data generated by conventional data augmentation techniques
* dataset 3 – data formed by combining the original data with the data generated by the simple gan
* dataset 4 – data formed by combining the original data with the data generated by the transfer learning gan

references
[1] michael beil, sigal sviri, hans flaatten, dylan w. de lange, christian jung, wojciech szczeklik, susannah leaver, andrew rhodes, bertrand guidet, p. vernon van heerden, on predictions in critical care: the individual prognostication fallacy in elderly patients, journal of critical care, vol. 61, 2021, issn 0883-9441, pp. 34-38. doi: 10.1016/j.jcrc.2020.10.006
[2] pranav rajpurkar, jeremy irvin, kaylie zhu, brandon yang, hershel mehta, tony duan, daisy ding, aarti bagul, curtis langlotz, katie shpanskaya, matthew p. lungren, andrew y. ng, chexnet: radiologist-level pneumonia detection on chest x-rays with deep learning.
[3] manish nayak, deep convolutional generative adversarial networks (dcgans). online [accessed 23 march 2022] https://medium.com/datadriveninvestor/deep-convolutionalgenerative-adversarial-networks-dcgans-3176238b5a3d
[4] simon j. d. prince, computer vision: models, learning, and inference. 1st edition, cambridge university press, 2012.
[5] jan erik solem, programming computer vision with python: tools and algorithms for analysing images. 1st edition, o'reilly, 2012.
[6] cs231n convolutional neural networks for visual recognition. online [accessed 23 march 2022] https://cs231n.github.io/convolutional-networks/
[7] yonatan svirsky, andrei sharf, a non-linear differential cnn-rendering module for 3d data enhancement.
[8] aparna beena unni, smart medical self-diagnostic tools, int j cur res rev, vol. 13, issue 08, april 2021, pp. 2-3. doi: 10.31782/ijcrr.2021.13835
[9] q. ke, j. zhang, w. wei, d. połap, m. woźniak, l. kośmider, r. damaševičius, a neuro-heuristic approach for recognition of lung diseases from x-ray images, expert systems with applications, vol. 126, 2019, pp. 218-232. doi: 10.1016/j.eswa.2019.01.060
[10] a. tarannum, m. z. u. rahman, t. srinivasulu, a real time multimedia cloud security method using three phase multi user modal for palm feature extraction, journal of advanced research in dynamical and control systems, 12(7) (2020), pp. 707-713. doi: 10.5373/jardcs/v12i7/20202053
[11] s. j. pan, q. yang, a survey on transfer learning, ieee transactions on knowledge and data engineering, 22(10) (2010), pp. 1345-1359. doi: 10.1109/tkde.2009.191
[12] m. myles-worsley, w. a. johnston, m. a. simons, the influence of expertise on x-ray image processing, journal of experimental psychology: learning, memory, and cognition, 14(3) (1988), pp. 553-557. doi: 10.1037/0278-7393.14.3.553
[13] a. sharma, d. raju, s. ranjan, detection of pneumonia clouds in chest x-ray using image processing approach, 2017 nirma university international conference on engineering (nuicone), 2017, pp. 1-4. doi: 10.1109/nuicone.2017.8325607
[14] v. gavini, g. r. jothi lakshmi, m. z. u. rahman, an efficient machine learning methodology for liver computerized tomography image analysis, international journal of engineering trends and technology, 69(7) (2021), pp. 80-85. doi: 10.14445/22315381/ijett-v69i7p212
[15] e. ayan, h. m. ünver, diagnosis of pneumonia from chest x-ray images using deep learning, 2019 scientific meeting on electrical-electronics & biomedical engineering and computer science (ebbt), 2019, pp. 1-5. doi: 10.1109/ebbt.2019.8741582
[16] a. tarannum, m. z. u. rahman, l. k. rao, t. srinivasulu, a. lay-ekuakille, an efficient multi-modal biometric sensing and authentication framework for distributed applications, ieee sensors journal, 20(24) (2020), art. no. 9151106, pp. 15014-15025. doi: 10.1109/jsen.2020.3012536
[17] yuanyuan li, zhenyan zhang, cong dai, qiang dong, samireh badrigilan, accuracy of deep learning for automated detection of pneumonia using chest x-ray images: a systematic review and meta-analysis, computers in biology and medicine (2020): 103898. doi: 10.1016/j.compbiomed.2020.103898
[18] h. sharma, j. s. jain, p. bansal, s. gupta, feature extraction and classification of chest x-ray images using cnn to detect pneumonia, 10th international conference on cloud computing, data science & engineering (confluence), ieee, noida, india, 29-31 january 2020, pp. 227-231. doi: 10.1109/confluence47617.2020.9057809
[19] a. tarannum, m. z. u. rahman, t. srinivasulu, an efficient multi-mode three phase biometric data security framework for cloud computing-based servers, international journal of engineering trends and technology, 68(9) (2020), pp. 10-17. doi: 10.14445/22315381/ijett-v68i9p203
[20] yusuf brima, marcellin atemkeng, stive tankio djiokap, jaures ebiele, franklin tchakounté, transfer learning for the detection and diagnosis of types of pneumonia including pneumonia induced by covid-19 from chest x-ray images, diagnostics 11(8) (2021): 1480. doi: 10.3390/diagnostics11081480
[21] federica vurchio, giorgia fiori, andrea scorza, salvatore andrea sciuto, comparative evaluation of three image analysis methods for angular displacement measurement in a mems microgripper prototype: a preliminary study, acta imeko, 10(2) (2021), pp. 119-125. doi: 10.21014/acta_imeko.v10i2.1047
[22] arata andrade saraiva, d. b. s. santos, nator junior c. costa, jose vigno m. sousa, nuno m. fonseca ferreira, antonio valente, salviano soares, models of learning to classify x-ray images for the detection of pneumonia using neural networks, in bioimaging, 2019, pp. 76-83. doi: 10.5220/0007346600760083
[23] giorgia fiori, fabio fuiano, andrea scorza, jan galo, silvia conforto, salvatore andrea sciuto, a preliminary study on an image analysis based method for lowest detectable signal measurements in pulsed wave doppler ultrasounds, acta imeko, 10(2) (2021), pp. 126-132. doi: 10.21014/acta_imeko.v10i2.1051
[24] zhenjia yue, liangping ma, runfeng zhang, comparison and validation of deep learning models for the diagnosis of pneumonia, computational intelligence and neuroscience, (2020), pp. 1-8. doi: 10.1155/2020/8876798
[25] kaggle, covid-19 radiography database. online [accessed 23 march 2022] https://www.kaggle.com/tawsifurrahman/covid19-radiographydatabase
[26] kaggle, faces_data_new, a collection of 8k pictures of faces with different backgrounds and poses. online [accessed 23 march 2022] https://www.kaggle.com/gasgallo/faces-data-new

an adaptive learning algorithm for spectrum sensing based on direction of arrival estimation in cognitive radio systems

acta imeko
issn: 2221-870x
december 2021, volume 10, number 4, 67 - 72

sala surekha1, md. zia ur rahman1, aimé lay-ekuakille2
1 department of electronics and communication engineering, k l university, koneru lakshmaiah education foundation, green fields, vaddeswaram, guntur-522502, a.p., india
2 department of innovation engineering, university of salento, lecce, italy

section: research paper
keywords: adaptive learning; beam forming; cognitive radio; direction of arrival; spectrum sensing
citation: sala surekha, md. zia ur rahman, aimé lay-ekuakille, an adaptive learning algorithm for spectrum sensing based on direction of arrival estimation in cognitive radio systems, acta imeko, vol. 10, no. 4, article 13, december 2021, identifier: imeko-acta-10 (2021)-04-13
section editor: francesco lamonaca, university of calabria, italy
received may 28, 2021; in final form december 1, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: md.
zia ur rahman, e-mail: mdzr55@gmail.com

abstract
in cognitive radio systems, estimating the primary user direction of arrival (doa) is one of the key issues. in order to increase the detection probability, multiple sensor antennas are used and analysed by means of a subspace-based technique. in this work, we considered a wideband spectrum with sub-channels, where each sub-channel is facilitated with a sensor for the estimation of the doa. in a practical spectrum sensing process, an interference component also enters the sensing process. to avoid this interference level at the output of the receiver, we used an adaptive learning algorithm known as the normalised least absolute mean deviation (nlamd) algorithm. further, to achieve better performance, a bias compensator function is applied in the weight coefficient updating process. using this hybrid realization, the vacant spectrum can be sensed based on doa estimation, and the number of vacant locations in each channel can be identified using a maximum likelihood approach. in order to test under diversified conditions, different threshold parameters 0.1, 0.5 and 1 are analysed.

1. introduction
telecommunication systems have been used ever more intensively over the past few decades, which has led to an increase in frequency spectrum usage. owing to the scarcity of the frequency spectrum, there is low utilization of licensed and unlicensed bands. to avoid interference, secondary users must always be aware of whether the primary user is absent or present in a particular frequency band. we further consider the secondary user direction of arrival (doa); the system contains a multiple input single output (miso) arrangement with one receiving antenna for the primary user and a feedback-based adaptive frequency algorithm. a non-orthogonal multiple access [1] based cognitive radio network is used for secure beamforming to avoid interference in multiple input multiple output networks. in array sensors, the estimation of the number of channels is a problem, and it is solved by considering the direction of arrival in the spectrum sensing method. the wideband spectrum sensing channel [2] is divided into two sub-channels; each sub-channel is connected with a sensor for processing, and then the doa is estimated. for multi-band signals, two spectrum scenarios are considered from sub-nyquist samples: in the first scenario, the spectrum sensing method is examined and the doa is used for recovery from frequency spectrum problems by proposing a uniform linear array (ula) [3] with sensors at the receiver, connected to the equivalent circuit of the analog front-end channel of a modulated wideband converter (mwc). in the second case, generalized likelihood ratio antenna beamforming [4] is used for efficient and low-complexity spectrum sensing. a localization technique is investigated in [5] depending on direction of arrival measurements for estimating primary users in cognitive radio networks. in [6], [7] new spectrum sensing techniques based on beamforming are proposed. null steering and joint beam-based resource allocation [8] are used in femtocell networks for spectrum sensing. the performance of transmitter localization is assessed using sector power measurements for every sensor; the cramer-rao bound (crb) [9], [10] is then derived for sector power estimation using the doa, and an analytical expression for the mean square error is derived. the main objective of this paper is to estimate the doa at various sensors for sensing [11], [12] the vacant spectrum and thereby facilitate channel allocation to the secondary user. here, we make use of an
adaptive learning algorithm based on the normalized least absolute mean deviation (nlamd) strategy. in section 2, doa estimation with an adaptive learning algorithm is discussed; in section 3, simulation results are discussed. the proposed realizations are also suitable for the development of medical telemetry networks as well as of smart cities and smart hospitals.

2. doa estimation based spectrum sensing
in cognitive radios, spectrum sensing is the most used method because it overcomes the low spectrum utilization problems of primary users. the various spectrum sensing algorithms are based on narrowband methods used to solve a binary hypothesis test, applied to every sub-channel. to assess the primary user's existence, each sub-channel is evaluated in the spectrum sensing algorithm. the signal received in each sub-channel is expressed [13] as

$y_k = \begin{cases} w_k, & h_0 \\ a\, r_k + w_k, & h_1 \end{cases}$  (1)

where $a\, r_k$ is the received primary user signal, $w_k$ is additive white gaussian noise, and $h_0$ and $h_1$ are the hypotheses used to test for the presence of the primary user in every sub-channel. multiple hypothesis tests are employed in the spectrum sensing algorithm; then, by taking all sub-channels into consideration, the primary user signal is detected. by analysing all sub-channel received signals, it is observed that noise is present in the output signal. to discriminate this noise from the primary user signals, the doa is estimated. the received signal at the sensor nodes in array processing behaves like a cognitive radio sub-channel; to avoid these problems, doa estimation is included in spectrum sensing, and its block diagram is shown in figure 1. doa estimation is used to obtain exact information about the antenna and also to avoid interference between primary and secondary users; further, an adaptive learning process called the normalised least absolute mean deviation (nlamd) algorithm is applied to reduce the noise level at the output. using the adaptive filter, the desired response of the input signal is calculated as

$d_k = s_k^{\mathrm{T}} u_0 + o_k$,  (2)

where $s_k$ is the input signal, $u_0$ is the unknown weight vector with $T$ taps and $o_k$ is the output noise at time index $k$. the error with respect to the desired signal is

$e_k = d_k - s_k^{\mathrm{T}} u_k$.  (3)

here, $u_k = [u_{k1}, u_{k2}, \ldots, u_{kT}]^{\mathrm{T}}$ is the weight vector of the adaptive filter. the identification problem of the adaptive system is solved by minimizing the p-norm cost function

$J(e_k) = \frac{1}{p}\,\mathrm{E}[|e_k|^p] = \frac{1}{p}\,\mathrm{E}[|d_k - s_k^{\mathrm{T}} u_k|^p]$,  (4)

where $\mathrm{E}[\cdot]$ is the statistical expectation operator and $p > 0$. replacing $\mathrm{E}[|e_k|^p]$ with the instantaneous value $|e_k|^p$, the gradient with respect to $u_k$ for the p-norm error is evaluated as

$\frac{\partial J(e_k)}{\partial u_k} = -|e_k|^{p-1}\,\mathrm{sign}(e_k)\, s_k$.  (5)

using the gradient descent algorithm, the weight update equation is

$u_{k+1} = u_k + \omega\, |e_k|^{p-1}\,\mathrm{sign}(e_k)\, s_k$,  (6)

where $\omega$ is a step size, selected appropriately to balance convergence rate and mean square error, and sign denotes the sign function. to improve the steady-state and convergence rates, the above equation is updated to the normalised least mean p-power form

$u_{k+1} = u_k + \omega\, \frac{|e_k|^{p-1}\,\mathrm{sign}(e_k)\, s_k}{\|s_k\|_p^p + \vartheta}$.  (7)

here $\|\cdot\|_p$ denotes the p-norm operation, and a small positive value $\vartheta$ is included to keep the denominator from becoming zero. selecting the p-norm value as 1, we obtain the nlamd algorithm equation

$u_{k+1} = u_k + \omega\, \frac{\mathrm{sign}(e_k)\, s_k}{\|s_k\|_1 + \vartheta}$.  (8)
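a minimal numpy sketch of the nlamd update in (8) is given below; the filter length, step size and signal model are illustrative assumptions, not values from the paper.

# minimal numpy sketch of the nlamd weight update in (8); the
# signal model, filter length and step size are illustrative only.
import numpy as np

def nlamd_update(u, s, d, omega=0.01, theta=1e-6):
    """one iteration of the normalised sign (nlamd) update."""
    e = d - s @ u                      # a-priori error, eq. (3)
    u = u + omega * np.sign(e) * s / (np.abs(s).sum() + theta)  # eq. (8)
    return u, e

rng = np.random.default_rng(0)
u_true = rng.standard_normal(8)        # unknown system u0
u_hat = np.zeros(8)
for _ in range(5000):
    s = rng.standard_normal(8)         # input vector s_k
    d = s @ u_true + 0.01 * rng.standard_normal()  # desired, eq. (2)
    u_hat, _ = nlamd_update(u_hat, s, d)
print(np.round(u_hat - u_true, 2))     # residual misalignment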
in sparse models, the l1 norm is used as a relaxation in least absolute shrinkage and selection operator algorithms, and it is employed in various adaptive filter algorithms. using the l1 norm, the weight update of the nlamd algorithm is modified to minimise the cost function

$J_d(e_k) = \frac{|d_k - s_k^{\mathrm{T}} u_k|}{\|s_k\|_1 + \vartheta} + \lambda\, \|u_k\|_1$,  (9)

where $\lambda$ is a parameter adopted to trade off the estimation error against sparsity. we then obtain the updated equation for the sparse nlamd algorithm by applying the gradient descent method to the cost function (9):

$u_{k+1} = u_k + \omega\, \frac{\mathrm{sign}(e_k)\, s_k}{\|s_k\|_1 + \vartheta} - \sigma\,\mathrm{sign}(u_k)$.  (10)

here, $\sigma = \omega \lambda$ is a regularisation parameter. the nlamd algorithm with a sparse system and the l1 norm is denoted the za-nlamd (zero-attracting nlamd) algorithm. in bias-compensated systems [14], a noisy-input system is considered for the nlamd algorithm. the noisy input vector of the system is defined as

$\bar{s}_k = s_k + o_{in,k}$,  (11)

where the input noise vector is $o_{in,k} = [o_{in,1k}, o_{in,2k}, \ldots, o_{in,Mk}]^{\mathrm{T}}$, with components $o_{in,lk}$, $l \in [1, M]$; its variance is denoted $\sigma_{in}^2$ and must be estimated, since it is unknown a priori. to overcome the biased estimation problem of the nlamd algorithm in (10), a compensation vector $b_k$ is introduced:

$u_{k+1} = u_k + \omega\, \frac{\mathrm{sign}(\hat{e}_k)\, \hat{s}_k}{\|\hat{s}_k\|_1 + \vartheta} - \sigma\,\mathrm{sign}(u_k) + b_k$.  (12)

from this equation, the bias compensation vector is obtained as [15]

$b_k = \omega\, \sigma_{o_{in}}^2 \sqrt{\frac{2}{\pi\, \sigma_{\hat{e}|\hat{s},k}^2}}\; \frac{u_k}{\|\hat{s}_k\|_1 + \vartheta}$.  (13)

the noisy input parameter variances $\sigma_{o_{in},k}^2$, $\sigma_{\hat{e}|\hat{s},k}^2$ and $\sigma_{u,k}^2$ are estimated accurately by computing

$\sigma_{o_{in},k}^2 = \frac{M\, \sigma_{\hat{e}|\hat{s},k}^2}{M\, \sigma_{u,k}^2 + \hat{s}_k^{\mathrm{T}} \hat{s}_k}$  (14)

$\sigma_{\hat{e}|\hat{s},k}^2 = \aleph\, \sigma_{\hat{e}|\hat{s},k-1}^2 + (1 - \aleph)\, \hat{e}_k^2$  (15)

$\sigma_{u,k}^2 = \aleph\, \sigma_{u,k-1}^2 + (1 - \aleph)\, \frac{1}{M}\, u_k^{\mathrm{T}} u_k$.  (16)

substituting equation (13) into (12), we get the final bias-compensated nlamd (bc-nlamd) adaptive learning update

$u_{k+1} = \left(1 + \omega \sqrt{\frac{2}{\pi\, \sigma_{\hat{e}|\hat{s},k}^2}}\; \frac{\sigma_{o_{in}}^2}{\|\hat{s}_k\|_1 + \vartheta}\right) u_k + \omega\, \frac{\mathrm{sign}(\hat{e}_k)\, \hat{s}_k}{\|\hat{s}_k\|_1 + \vartheta} - \sigma\,\mathrm{sign}(u_k)$.  (17)

using this weight recursion, the noise in the received signal is minimised and an accurate direction of arrival is estimated. the bc-nlamd algorithm accurately estimates the doa and helps in finding the vacant spectrum; the flowchart of the proposed adaptive learning algorithm is shown in figure 2.

3. results and discussion
this section presents the experimental results evaluating the performance of the proposed bias-compensated adaptive learning algorithm, compared with the absolute mean deviation (amd) and normalised absolute mean deviation (namd) methods with gaussian output noise. the input and output noises are generated using zero-mean white gaussian noise and a β-stable distribution, respectively; the characteristic function of the latter is

$f_t = \exp\!\left(j \delta t - |\gamma t|^{\beta}\, [1 + j \tau\,\mathrm{sign}(t)\, Q_{t,\beta}]\right)$,  (18)

where

$Q_{t,\beta} = \begin{cases} \tan\frac{\beta \pi}{2}, & \beta \neq 1 \\ \frac{2}{\pi} \log|t|, & \beta = 1 \end{cases}$  (19)

with characteristic exponent $0 < \beta \le 2$, skewness $-1 \le \tau \le 1$, scale parameter $0 < \gamma < \infty$ and location parameter $-\infty < \delta < \infty$.

figure 1. doa estimation block diagram for spectrum sensing.
figure 2. flow chart of spectrum sensing doa estimation using the bc-nlamd algorithm.

the occupied sub-channel locations are estimated using the spectrum sensing method by choosing q sub-channels. in cognitive radio applications, the correlation between channels is identified using spectrum sensing.
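the bc-nlamd recursion of (12)-(16) can be sketched in numpy as follows; all parameter values (filter length, step size, smoothing factor, noise variances) are illustrative assumptions, not those of the paper.

# compact numpy sketch of the bc-nlamd recursion (12)-(16);
# parameter values are illustrative only.
import numpy as np

def bc_nlamd(s_noisy, d, M=8, omega=0.01, theta=1e-6,
             sigma_in2=0.01, aleph=0.99, lam=1e-4):
    u = np.zeros(M)
    var_e, var_u = 1.0, 1.0            # running estimates, eqs. (15)-(16)
    for k in range(len(d)):
        s = s_noisy[k]
        e = d[k] - s @ u
        var_e = aleph * var_e + (1 - aleph) * e**2          # eq. (15)
        var_u = aleph * var_u + (1 - aleph) * (u @ u) / M   # eq. (16)
        norm1 = np.abs(s).sum() + theta
        b = omega * sigma_in2 * np.sqrt(2 / (np.pi * var_e)) * u / norm1  # eq. (13)
        u = (u + omega * np.sign(e) * s / norm1
               - omega * lam * np.sign(u) + b)              # eq. (12)
    return u

rng = np.random.default_rng(1)
u0 = np.zeros(8); u0[[1, 5]] = [1.0, -0.5]        # sparse unknown system
S = rng.standard_normal((20000, 8))
d = S @ u0 + 0.01 * rng.standard_normal(20000)
S_noisy = S + 0.1 * rng.standard_normal(S.shape)  # noisy input, eq. (11)
print(np.round(bc_nlamd(S_noisy, d), 2))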
in our framework, the direction of arrival is taken into consideration for identifying the spectrum location. using the doa, we can identify the spectrum location with the proposed bc-nlamd adaptive learning process. the output signal is considered in various combinations: one white signal with one doa, one white signal with 3 doas, and two white signals with one doa and with three doas. the performance of the adaptive algorithm is studied in terms of convergence rate, beam pattern and number of active taps. in mobile environments, more than one multipath is considered; each multipath has a different gain, with amplitude and phase components.

case 1: one white signal with one doa
in this case, one signal with one path is considered; it arrives at the base station at an angle of 60 degrees with amplitude 0.5, and it is propagated at the different threshold values 0.1, 0.5 and 1. for the different threshold control values, the delay and steady-state error are calculated as discussed in the second section. a threshold value of 0.1 improves the convergence rate of the proposed bc-nlamd algorithm when compared to the lms [16] algorithm. for threshold values of 0.5 and 1, the steady state is reached faster than with the basic lms algorithm. the improvement in convergence rate is identified by using the taps of the adaptive filter algorithm. for narrowband theoretical values, the threshold needs only one tap; however, this entails delay cost and convergence rate problems. hence, we examined the beam pattern of the nlamd algorithm in doa estimation using matlab: it steers the main beam in the 60-degree direction with a beam strength of two, because the signal power is reduced by a factor of 0.5. for every simulation, the convergence rate is given in terms of the number of samples required to reach steady state, as shown in table 1.

case 2: one white signal with 3 doas
these simulations study the effects of multipath on smart antenna systems. multipath antennas with three different directions of arrival, -20, 30 and 60 degrees, are considered at the base station. at the antenna system, each multipath is separated by one sampling period of 1/fc, and the corresponding gains are introduced with the amplitudes shown in figure 3. in the proposed method, three different weight vectors, one per multipath, are used for spectrum sensing. the white signal with three doas gives a better convergence rate for the proposed bias-compensated nlamd algorithm when compared to the amd algorithm, as shown in figure 4. with the proposed algorithm, the antenna system shows a beam pattern similar to the basic amd algorithm, and it is able to steer beams in multiple directions with zero-interference directions; for each beam, the gain is inversely proportional to the corresponding multipath of the antenna.

case 3: two white signals with one doa
two different signals are transmitted with one doa each; the effect is the same as sending two multipaths of one signal separated by at least one sample period, because the two signals are uncorrelated with each other, with amplitudes of 0.5 and 1 at threshold values of 0.1, 0.5 and 1. this gives a better convergence rate for the second signal when compared to the first signal. for the smaller amplitude, it takes longer to adapt the taps before the signal is estimated. a narrow threshold gives the ability to adapt a smaller number of taps, improving the performance of the proposed nlamd algorithm; the corresponding beam patterns for each doa are shown in figure 5 and figure 6, respectively.
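for illustration, the following numpy sketch generates the array snapshots of case 1 (one white signal arriving at 60 degrees with amplitude 0.5), assuming a half-wavelength uniform linear array with 8 elements; the array size, noise level and snapshot count are assumptions, since the paper does not report them.

# illustrative numpy sketch of case 1 snapshots on an assumed
# half-wavelength ula; sizes and noise level are placeholders.
import numpy as np

def ula_steering(theta_deg, n_elements=8):
    """steering vector of a half-wavelength-spaced ula."""
    theta = np.deg2rad(theta_deg)
    n = np.arange(n_elements)
    return np.exp(1j * np.pi * n * np.sin(theta))

rng = np.random.default_rng(2)
a = ula_steering(60.0)                       # doa = 60 degrees
s = 0.5 * rng.standard_normal(1000)          # white signal, amplitude 0.5
noise = 0.05 * (rng.standard_normal((8, 1000))
                + 1j * rng.standard_normal((8, 1000)))
x = np.outer(a, s) + noise                   # received snapshots
print(x.shape)                               # (8, 1000)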
case 4: two white signals with three doas
in this case, two input sequences are considered, each simulated with three multipath components. at the base station, the second and third multipath signals arrive after the first multipath signal from various directions. for each signal, only two sets of multipaths are considered, and at the base station the second and third multipaths have the same weight vector. the convergence rate is compared with the basic lad algorithm. four beam patterns are considered following shannon theory; the third multipath pattern has two main lobes in the directions of the second and third multipaths, as shown in figure 7 and figure 8, respectively. the convergence rates for the first and second signals in the case of doa 3 are shown in table 2 and table 3, respectively, in terms of the number of samples needed to reach steady state. from the tables, it is clear that for doa 3 the proposed algorithm converges faster for the second signal than for the first signal. in figure 7, only two multipath signals are visible because the 2nd and 3rd signals have the same weight vectors.

figure 3. bc-nlamd beam pattern for one white signal with doas.
figure 4. error for the received signal using the bc-nlamd algorithm with a threshold value of 0.1.

table 1. convergence rate for one white signal with one doa (number of samples to reach steady state).
algorithm          steady state   delay
lamd [17]          65             0
bbnlms [18]        50             0
ffa [19]           35             0
pid [20]           25             8
enlms [21]         15             12
bc-nlamd for 0.1   40             0
bc-nlamd for 0.5   45             7
bc-nlamd for 1.0   55             14

the main aim of the proposed bc-nlamd algorithm is to detect frequency spectra in cognitive radio antenna systems. the nlamd algorithm is used for its low computational complexity, better stability and good robustness against implementation errors. however, the nlamd algorithm has poor convergence, which is improved by using the bias-compensated nlamd algorithm; it requires a system with sparse channels. in the wireless channel, spatial properties are exploited with detection guide systems. the beam patterns are analysed and compared with the shannon theorem. the proposed results are compared with the threshold point of the spectrum sensing algorithm for the bc-nlamd algorithm. the bc-nlamd algorithm then gives better results in terms of faster convergence and low computational complexity; in particular, the narrowband threshold gives better antenna performance. this is due to the presence of a delay period, during which the system waits for a set of samples that are undetermined before actually converging to the desired signal. however, the system performance is improved at the cost of increased computational complexity, since the narrow threshold point of the proposed algorithm needs an increased number of adapted taps, and this reduces the error in the output signals. the beam patterns are obtained in accordance with the expectations of the shannon theorem, and the proposed algorithm with active taps steers the beams in the desired signal direction. spatial filtering is particularly used in wireless communication systems. hence, with the proposed algorithm, the system performance is increased through the active tap weights in the update equation, which improves frequency utilization in cognitive radio systems.

4. conclusion
in this paper, the vacant spectrum is sensed by using doa measurements in wireless communication systems. interference occurring in cognitive radio systems is avoided by considering doa estimation for spectrum sensing.
in wireless communications, antenna systems with new adaptive beamforming algorithms are a developing technology; they provide a high frequency spectrum and thus improve the quality of service of cognitive radio systems. further, to reduce noise in the received signal, a bias-compensated nlamd algorithm is proposed. using this adaptive learning algorithm, the performance of the cognitive radio-based antenna beams and the convergence rate are improved. by using a sign regressor function in the weight update equation of the proposed algorithm, the computational complexity is reduced. the performance of the bc-nlamd algorithm in the presence of multipath users and multipath effects is analysed using matlab simulations; the convergence rate is improved thanks to the active taps used in the adaptive learning algorithm, leading to better spectrum efficiency in cognitive radios.

figure 5. error signal for the bc-nlamd algorithm for a threshold value of 0.1.
figure 6. bc-nlamd beam pattern for two white signals with one doa.

table 2. convergence rate of the first signal, doa 3.
algorithm          steady state   delay
lamd [17]          55             0
bbnlms [18]        60             20
ffa [19]           70             45
pid [20]           80             65
enlms [21]         90             70
bc-nlamd for 0.1   35             60
bc-nlamd for 0.5   55             40
bc-nlamd for 1.0   125            95

table 3. convergence rate of the second signal, doa 3.
algorithm          steady state   delay
lamd [17]          45             0
bbnlms [18]        55             0
ffa [19]           62             20
pid [20]           70             40
enlms [21]         85             55
bc-nlamd for 0.1   25             35
bc-nlamd for 0.5   45             25
bc-nlamd for 1.0   95             65

figure 7. received error signal for the bc-nlamd algorithm.
figure 8. bc-nlamd beam pattern for two white signals with 3 doas.

references
[1] h. s. m. antony, t. lakshmanan, secure beamforming in 5g-based cognitive radio network, symmetry, vol. 11, no. 10, october 2019, p. 1260. doi: 10.3390/sym11101260
[2] amir mahram, mahrokh g. shayesteh, blind wideband spectrum sensing in cognitive radio networks based on direction of arrival estimation model and generalised autoregressive conditional heteroscedasticity noise modelling, iet communications, vol. 8, no. 18, 2014, pp. 3271-3279. doi: 10.1049/iet-com.2014.0162
[3] s. stein ioushua, o. yair, d. cohen, y. c. eldar, cascade: compressed carrier and doa estimation, ieee transactions on signal processing, vol. 65, no. 10, may 2017, pp. 2645-2658. doi: 10.1109/tsp.2017.2664054
[4] a. h. hussein, h. s. fouda, h. h. abdullah, a. a. m. khalaf, a highly efficient spectrum sensing approach based on antenna arrays beamforming, ieee access, vol. 8, 2020, pp. 25184-25197. doi: 10.1109/access.2020.2969778
[5] j. wang, j. chen, d. cabric, cramer-rao bounds for joint rss/doa-based primary-user localization in cognitive radio networks, ieee transactions on wireless communications, vol. 12, no. 3, march 2013, pp. 1363-1375. doi: 10.1109/twc.2013.012513.120966
[6] h. s. fouda, a. h. hussein, m. a. attia, efficient glrt/doa spectrum sensing algorithm for single primary user detection in cognitive radio systems, international journal of electronics and communications, 2018. doi: 10.1016/j.aeue.2018.03.012
[7] s. elaraby, h. y. soliman, h. m. abdel-atty, m. a. mohamed, joint 2d-doa and carrier frequency estimation technique using nonlinear kalman filters for cognitive radio, ieee access, vol. 5, 2017, pp. 25097-25109. doi: 10.1109/access.2017.2768221
[8] a. salman, i. m. qureshi, s. saleem, s. saeed, b. r. alyaei, novel sensing and joint beam and null steering-based resource allocation for cross-tier interference mitigation in cognitive femtocell networks, wireless networks, vol. 24, no. 6, february 2017, pp.
2205–2219. doi: 10.1007/s11276-017-1465-6
[9] j. werner, j. wang, a. hakkarainen, d. cabric, m. valkama, performance and cramer-rao bounds for doa/rss estimation and transmitter localization using sectorized antennas, ieee transactions on vehicular technology, vol. 65, no. 5, may 2016, pp. 3255-3270. doi: 10.1109/tvt.2015.2445317
[10] a. lay-ekuakille, p. vergallo, d. saracino, a. trotta, optimizing and post processing of a smart beamformer for obstacle retrieval, ieee sensors journal, vol. 12, no. 5, 2012, pp. 1294-1299. doi: 10.1109/jsen.2011.2169782
[11] m. a. hussain ansari, c. l. law, grating lobe suppression of multicycle ir-uwb collaborative radar sensor in wireless sensor network system, ieee sensors letters, vol. 4, no. 1, jan. 2020, art. no. 7000404, pp. 1-4. doi: 10.1109/lsens.2020.2964588
[12] w. lu, b. deng, q. fang, x. wen, s. peng, intelligent reflecting surface-enhanced target detection in mimo radar, ieee sensors letters, vol. 5, no. 2, february 2021, art. no. 7000304, pp. 1-4. doi: 10.1109/lsens.2021.3052753
[13] s. surekha, m. z. ur rahman, a. lay-ekuakille, a. pietrosanto, m. a. ugwiri, energy detection for spectrum sensing in medical telemetry networks using modified nlms algorithm, 2020 ieee international instrumentation and measurement technology conference (i2mtc), dubrovnik, croatia, 2020, pp. 1-5. doi: 10.1109/i2mtc43012.2020.9129107
[14] wentao ma, ning li, yuanhao li, jiandong duan, badong chen, sparse normalized least mean absolute deviation algorithm based on unbiasedness criterion for system identification with noisy input, ieee access, vol. 4, 2016, pp. 1-9. doi: 10.1109/access.2018.2800278
[15] s. m. jung, p. park, stabilization of a bias-compensated normalized least-mean-square algorithm for noisy inputs, ieee transactions on signal processing, vol. 65, no. 11, 1 june 2017, pp. 2949-2961. doi: 10.1109/tsp.2017.2675865
[16] m. o. bin saeed, a. zerguine, an incremental variable step-size lms algorithm for adaptive networks, ieee transactions on circuits and systems ii: express briefs, vol. 67, no. 10, october 2020, pp. 2264-2268. doi: 10.1109/tcsii.2019.2953199
[17] v. c. ramasami, spatial adaptive interference rejection: lms and smi algorithms, report, university of kansas, april 2001.
[18] v. a. kumar, g. v. s. karthik, a low complex adaptive algorithm for antenna beam steering, 2011 international conference on signal processing, communication, computing and networking technologies, july 2011, pp. 317-321. doi: 10.1109/icsccn.2011.6024567
[19] m. l. m. lakshmi, k. rajkamal, s. v. a. v. prasad, amplitude only linear array synthesis with desired nulls using evolutionary computing technique, aces journal, vol. 31, no. 11, november 2016, pp. 1357-1361. doi: 10.1109/wispnet.2017.8299890
[20] p. k. mvemba, a. lay-ekuakille, s. kidiamboko, an embedded beamformer for a pid-based trajectory sensing for an autonomous vehicle, metrology and measurement systems, vol. 25, no. 3, 2018, pp. 561-575. doi: 10.24425/123891
[21] k. aravind rao, k.
sai raj, rohan kumar jain, implementation of adaptive beam steering for phased array antennas using enlms algorithm, journal of critical reviews, vol. 7, no. 9, 2020. doi: 10.31838/jcr.07.09.10
vibration-based tool life monitoring for ceramics micro-cutting under various toolpath strategies

acta imeko
issn: 2221-870x
september 2021, volume 10, number 3, 125 - 133

zsolt j. viharos1,3, lászló móricz2, máté büki1
1 centre of excellence in production informatics and control, institute for computer science and control (sztaki), eötvös loránd research network (elkh), kende u. 13-17., h-1111, budapest, hungary
2 zalaegerszeg center of vocational training, kinizsi pál utca 74., h-8900, zalaegerszeg, hungary
3 john von neumann university, izsáki u. 10., h-6000, kecskemét, hungary

abstract
the 21st century manufacturing technology is unimaginable without the various cam (computer aided manufacturing) toolpath generation programs. the aim of developing the toolpath strategies offered by cutting control software is to ensure the longest possible tool lifetime and a high efficiency of the cutting method. in this paper, the goal is to compare the efficiency of 3 types of toolpath strategies in the very special field of micro-milling of ceramic materials. the dimensional distortion of the manufactured geometries served to draw the taylor curve describing the wearing progress of the cutting tool, helping to determine the worn-in, normal and wear-out stages. this isolation also allows the connected high-frequency vibration measurements to be separated. applying the novel feature selection technique of the authors, the basis for vibration-based micro-milling tool condition monitoring for ceramics cutting is presented for different toolpath strategies. this resulted in the identification of the most relevant vibration signal features and the presentation of the identified and automatically separated tool wearing stages as well.

section: research paper
keywords: ceramics; micro-milling; tool wear; machining strategy; vibration analysis; feature selection
citation: zsolt jános viharos, lászló móricz, máté istván büki, vibration-based tool life monitoring for ceramics micro-cutting under various toolpath strategies, acta imeko, vol. 10, no. 3, article 18, september 2021, identifier: imeko-acta-10 (2021)-03-18
section editor: lorenzo ciani, university of florence, italy
received february 5, 2021; in final form september 9, 2021; published september 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was supported by the european commission through the h2020 project epic (https://www.centre-epic.eu/) under grant no. 739592; by the hungarian ed_18-2-2018-0006 grant on a "research on prime exploitation of the potential provided by the industrial digitalisation"; by the ministry of innovation and technology nrdi office within the framework of the artificial intelligence national laboratory program, hungary.
corresponding author: zsolt jános viharos, e-mail: viharos.zsolt@sztaki.mta.hu

1.
introduction
machining of rigid materials with a regular cutting-edge geometry is one of the main trends of the 21st century. ceramics are such rigid materials; they are employed ever more widely as raw materials thanks to their high hardness and thermal resistance [1], [2]. there are various options for machining them, e.g. using water, laser or abrasive grinding [3], [4], [5]; however, their high costs and complex setups are important drawbacks of these technologies. therefore, the machining of ceramics with a classical, regular cutting-edge geometry is still a promising solution. however, considering the relatively quick wearing process of the cutting tool, without an appropriate technological optimization this methodology would not be economically acceptable. optimizing a technology is typically a multicriteria assignment: here, the main aim is to find the smallest production cycle time while at the same time maximizing the tool life. the effect of technology parameters on tool life has been investigated in several of the authors' previous articles [6], [7], [8]. an important part of tool life analysis is the investigation of the vibrations generated during the cutting process. the relative vibration between the micro-milling cutter and the workpiece influences the processing quality and tool life [9]. frequency spectrum analysis was executed to establish, for example, the tool wear and chatter frequency characterization. based on this method, aspects of the cutting process that are clearly not visible in the time domain can be revealed [10]. to determine the source of chatter or other dominant tool wearing frequencies, a proper selection of the critical dynamic force signatures is required to perform frequency or power spectrum analysis. dimla and lister [11] identified for turning that the tangential force components act on the rake face insert and that both cutting forces and vibration signatures were most sensitive to tool wear. youn and yang [12] differentiated cutting force components to detect the difference between flank and crater wear for varying machining conditions in turning. work by sarhan et al.
[13] involving the use of a 4-flute end mill on steel of 90 bhn indicated that the magnitudes of the first harmonics of the cutting force frequency spectrum increased significantly with increasing tool flank wear, feed per tooth and axial depth of cut. elbestawi et al. [14] made similar observations when face milling aisi 1020 steel and an aluminium alloy. the deterioration caused by tool wear increases the tool-chip and tool-workpiece contact areas, and these in turn cause an increase in friction, too. a worn tool generates high-frequency tonal vibrational energy that does not arise with a new tool [15]. consequently, the noise emitted in the area is amplified [16]. tonshoff et al. [17] employed acoustic emission signals to determine the influence of a hard turning process on the sub-surface microstructure of the workpiece. the results obtained showed that the amplitude in the frequency analysis increased with increasing flank wear due to an enlarged contact area. an appropriate complement to vibrational analysis is the application of a neural network-based method. qi yao et al. [9] examined the relationship between cutting force amplitude, frequency and vibration displacement, and ascertained it by using a neural network method. the scientific literature mirrors that vibration signal analysis forms an increasing trend among tool wear monitoring methods. in this paper, the effect of toolpaths generated by cam software on the tool lifetime is examined. the three most popular toolpaths (strategies: wave form, cycloid, chained) are analysed, as described in the next three sections.

1.1. wave form path
the wave form toolpath (figure 1) for milling technology results in the tool working with a constant tool diameter sweep. the contact angle along the toolpath has a direct effect on the cutting forces; by adjusting the contact angle, the cutting force can also be controlled. owing to this, the tool load is constant in every change of direction during machining, through avoiding sharp changes in direction, which is not found among the average path generation methods [6], [19]. the other advantage of the wave form strategy is that the material removal speed is kept constant, which differs from the other path generation methods. cutting distributes wear evenly along the entire flute length, rather than just on one tip. the radial cutting depth is reduced to ensure a consistent cutting force, allowing the cut material to escape from the flutes. thus, the tool lifetime is further extended, as most of the heat is removed in the chip.

1.2. cycloid path
the essence of the cycloid path strategy is to move the tool on a circle with the largest possible radius, thus reducing the kinematic contact (and tool load) (figure 2). the cycloid form is a milling technology where the tool mills along an arc, avoiding sharp changes in direction. although it does not control the tool load directly, this strategy can also reduce the tool load, and the roughing strategy is optimized more easily [6], [8]. the problem with the average toolpath is that the tool load increases significantly in the corners, requiring shallower depths of cut and a reduced feed. this problem can be avoided with the cycloid and wave form paths. because the pocket used during the experiment did not have a circular geometry, the technology was optimized with entremets; the entremets is an option in the software with which the tool load can be reduced in the corner.
by choosing the correct stepover, the contact angle can be kept at a specified level. another advantage of the strategy is that high feeds can be achieved along some paths.

1.3. constant stepover toolpaths (chained path)
most software is usually capable of creating constant stepover toolpaths, contour-parallel, and direction-parallel paths, but these algorithms do not focus on machining parameters, only on material removal [20]. during the generation of the constant stepover path (chained path), the cutting tool removes the material moving back and forth on the horizontal plane within each z (vertical) level (figure 3). the strategy cuts in both directions (climb and conventional milling), leading to poor surface quality and a short tool life.

2. experiments for the machining of ceramics
the setup for the experiments is presented in figure 4. one of the main aims is to follow the wearing process of the micro-milling tool during the machining of ceramics and to compare it, in an offline mode, against the geometrical changes (length, width and depth of features) of the machined ceramic workpiece. another goal is to use high-frequency online vibration measurement during the cutting process on the other hand. connections between the wearing stages and the measured online and offline parameters were determined using a self-developed, artificial neural network-based feature selection solution.

figure 1. element of waveform path [18].
figure 2. cycloid toolpath [20].

2.1. parameters of the milling machine
the basis of the experiments was the milling machine that was planned and built by the cncteamzeg group. it is operated in zalaegerszeg, hungary (figure 4). during the planning, the aim was to cut metal material, but the preliminary calculations and tests on ceramic material removal proved that it is able to cut ceramic material as well.

3. indirect, off-line tool wear detection measuring the produced workpiece geometry
during the cutting process, microscopic images were taken repeatedly after a certain number of workpiece machinings in order to monitor, in an offline way, the wear evolution of the tool based on the workpiece geometry. measurements were performed using a zeiss discovery v8 microscope, and the wear visible in the pictures was evaluated by the authors. beyond the microscopic control of the cutting tool geometry, the changes in the machined workpiece resulting from the tool wear were also measured. these dimensional changes of the manufactured geometries are summarized in figure 5. during the measurements, changes in the width, length and depth of the geometries were examined. the applied technology settings (axial depth of cut, radial depth of cut, cutting speed, feed rate) were the same in all three cases; only the machining paths (strategies: wave form, cycloid, chained) were varied. these workpiece measurements clearly represent complete and valid cutting tool life curves (taylor curves), and various conclusions can be drawn from them:
• it can be seen in figure 5 that the tool lifetime achieved using the chained toolpath is nearly half of the tool lifetime achieved using the wave form and cycloid toolpaths.
• during the application of the chained path, a tool break occurred early, during the machining of the 11th set.
• at the cycloid toolpath, exponential tool wear and tool break were observed in the 20th set.
• using the waveform path, the manufacturing time of one feature was 57 % longer than in the case of the chained toolpath.
• with the cycloid path, the manufacturing time of a feature was nearly five times that of the chained toolpath.
• considering the tool lifetime and the manufacturing time, the waveform seems to be the most economical toolpath strategy for ceramic machining.

figure 3. contour-parallel and direction-parallel stepover toolpaths (chained path).
figure 4. the applied cnc machine; detailed parameters are in [8].
figure 5. changes in the geometry i., ii. and iii. of the machined geometries for the wave form, cycloid and chained toolpaths (width of geometry i vs. number of machined workpieces).

4. direct, on-line tool wear analysis using vibration measurements
the scientific literature mirrors that acoustic emission (ae) signals form an increasing trend, in the same way as the increase of tool wear analysis in metal cutting, but it is not evident for the micro-milling of ceramics. in [21], bhuiyan et al. pointed out that the increase in tool wear increases the tool-workpiece contact area and the friction coefficient as well. in another experiment, the opposite, i.e. a decrease in the vibration amplitudes, was detected during measurements in the metal cutting field. in their previous paper, the authors reported a similar phenomenon during ceramic machining as well [8]. based on the results of the measurements, a decrease in the vibration amplitudes was detected during the wear evolution of the tool; consequently, the contact surfaces between the cutting tool and the workpiece became smaller during the wearing of the tool for ceramics milling, mainly because of the complex and multiplicative wearing forms. the related frequency analysis showed that the wear-out process of the tool also resulted in a shift of the dominant frequencies into higher frequency ranges. in this research, the authors supplemented the results of the previous paper [8] with an ae study on various toolpath strategies. in the reported previous research, vibration measurement of ceramics milling was established with a sampling frequency of 100 khz, measuring in one direction. instead of analysing over millions of individual measured values as time series of the vibration amplitudes, several descriptive features (e.g., statistical measures like amplitude, standard deviation, 3rd moment, etc.) were calculated. such a feature vector was calculated for each workpiece/machining process, while during the experiments the same tool was used until tool wear-out or tool break. feature selection was applied to find the most descriptive features for distinguishing three typically different stages of the tool life. for this division, in the first step, the tool wear curve was determined from the geometry produced on the workpiece by an indirect method (based on the measured workpiece geometries). after this, the curves were divided into three sections depending on the wear phase of the tool and the degree of geometry reduction (worn-in, normal, wear-out or break); a sketch of such a three-stage split is given below.

4.1. micro-cutting tool wearing stages
in contrast to the previous research [8], not the changes of the parameters of the geometry (width, length, depth) were analysed, but the volume changes calculated from the geometries obtained by each toolpath strategy (figure 6).
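as referenced above, a minimal python sketch of such a three-stage split of a wear curve follows; the slope threshold and the synthetic series are hypothetical and only illustrate the idea of the division, not the actual procedure of the paper.

# illustrative sketch of splitting a wear (taylor) curve into worn-in,
# normal and wear-out stages; thresholds and data are hypothetical.
import numpy as np

def split_wear_stages(volume, slope_hi=-0.8):
    """label each workpiece index by wear stage from the local slope."""
    slope = np.gradient(volume)
    stages = np.full(len(volume), "normal", dtype=object)
    stages[:3] = "worn-in"                     # initial break-in region
    wear_out = np.where(slope < slope_hi)[0]   # steep drop = wear-out
    if wear_out.size:
        stages[wear_out[0]:] = "wear-out"
    return stages

# synthetic volume-per-feature series (arbitrary units)
v = np.concatenate([np.linspace(10, 9.5, 5),      # worn-in
                    np.linspace(9.5, 9.0, 12),    # normal wear
                    np.linspace(9.0, 5.0, 4)])    # wear-out
print(split_wear_stages(v))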
4.1. micro-cutting tool wearing stages

in contrast to the previous research [8], not the changes of the geometry parameters (width, length, depth) were analysed, but the volume changes calculated from the geometries obtained with each toolpath strategy (figure 6). during the analysis of the graphs, three well-separable sections of the taylor curve were observed for the waveform graph (worn-in tool, normal condition, worn out). in contrast, with the cycloid path, no significant tool wear was observed until a certain geometry was manufactured (period “a”), followed by steep but uniform tool wear (period “b”). with the chained toolpath, the rapid wear process (period “a”) as well as the normal wear condition (period “b”) were observed; however, the tool broke at the very beginning of the wear-out phase (period “c”).

4.2. most descriptive, direct vibration signal behaviour with respect to the micro-cutting tool wear-out

after running the feature selection method, called adaptive, hybrid feature selection (ahfs), developed by some of the authors [22], the variables/features (calculated from the vibration signal) that most accurately describe the change in the three stages of the taylor curve were determined. it has to be mentioned that the feature selection identifies the first, most informative feature; the second feature is then the one carrying the most additional information, and so on. consequently, the selected features are not independent, which is important for the evaluation of their meaning and effects. a sketch of such a greedy selection loop is given below.

selected features for the waveform strategy:
• number of times the signal crosses its mean value
• standard deviation
• fourth moment (kurtosis)

having identified the most informative features calculated from the measured vibration signal, their evolution along the wearing progress can be presented, partly for the engineering validation of the results of the mathematical algorithm.

figure 6. changes in the volumes of the manufactured geometries for the waveform (top), cycloid (middle) and chained (bottom) toolpaths.

the progress of the values of the three selected vibration signal features along the wearing stages of the cutting tool is presented in figure 7. the first identified variable, “number of times the signal crosses its mean value”, describes how densely the vibration signal crosses its mean level (the x axis). in the wear-in stage (period “a”) the curve is at a high level, but decreases continuously; this means that the tool vibrates at a high frequency in the initial stage. in the normal wearing phase (period “b”), the signal oscillated at a lower frequency compared to period “a”. in the wear-out phase (period “c”), there was a further decrease in the number of mean crossings. the second feature identified was the standard deviation, which showed a change similar to the previous variable: in the wear-in period (period “a”) the signal shows a large variance; in the normal period of the tool lifetime (period “b”) there was a decreasing trend of the standard deviation; and in the wear-out period (period “c”) the standard deviation of the signal decreased drastically. the third identified parameter is the fourth moment, also called kurtosis, which describes the flatness of the signal's distribution. in the case of a sharp tool, the kurtosis follows a flat trend, while in the case of a worn tool the distribution curve takes on an increasingly sharper shape.

selected features for the cycloid strategy:
• number of times the signal crosses its mean value
• mean value
• second moment

the progress of the values of the three selected vibration signal features along the wearing stages of the cutting tool is presented in figure 8.
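ahfs itself is described in [22]; the sketch below only illustrates the greedy, one-at-a-time selection idea described above, using a simple fisher-like separability score and a linear-residual step as stand-ins. all names and the scoring choice are assumptions, not the actual ahfs algorithm.

```python
import numpy as np

def stage_separability(x, y):
    """fisher-like score of one feature column x for wear-stage labels y:
    between-stage variance of the means over within-stage variance."""
    classes = np.unique(y)
    between = sum((x[y == c].mean() - x.mean()) ** 2 for c in classes)
    within = sum(x[y == c].var() for c in classes) + 1e-12
    return between / within

def greedy_forward_selection(features, labels, names, k=3):
    """pick k features one by one: each step adds the feature whose residual
    (after removing what the already-selected features explain linearly)
    best separates the wear stages."""
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    selected, remaining = [], list(range(features.shape[1]))
    for _ in range(k):
        def residual_score(j):
            x = features[:, j].copy()
            if selected:  # remove the linear contribution of chosen features
                basis = features[:, selected]
                coef, *_ = np.linalg.lstsq(basis, x, rcond=None)
                x -= basis @ coef
            return stage_separability(x, labels)
        best = max(remaining, key=residual_score)
        selected.append(best)
        remaining.remove(best)
    return [names[j] for j in selected]
```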
selected features for the chained strategy:
• mean value
• fourth moment
• number of times the signal crosses its mean value

figure 7. selected changes in the vibration signal behaviour for the waveform strategy: “number of times the signal crosses its mean value” – “standard deviation” – “fourth moment” along the tool life cycle.
figure 8. selected changes in the vibration signal behaviour for the cycloid strategy: “number of times the signal crosses its mean value” – “mean value” – “second moment” along the tool life cycle.

the progress of the values of the three selected vibration signal features along the wearing stages of the cutting tool is presented in figure 9.

4.3. most descriptive vibration signal behaviour in the frequency domain with respect to the micro-cutting tool wear-out

the vibrations are stochastic signals because they represent temporal processes in which parts of the phenomenon are characterized by random variables (ceramic material inhomogeneity, asymmetric wear of the tool edges, factors of the micro-machining process that cannot be described in an exact way, white noise, etc.). in the power spectral density (psd), the power of the signal per unit frequency range is calculated. in the analysis of the psd functions, the goal is to use the feature selection method to find those psd frequency components at which the individual sections of the curve, corresponding to the separated stages of the taylor curve, are clearly separated from each other (a minimal sketch of the psd computation is given below).

feature selection results showed that the highest separation of the three ranges occurs at 21758 hz for the waveform strategy (figure 10), and figure 10 confirms that the ranges are indeed separated at the analysed frequency. in the wear-out stage (period “c”), two recorded data sets were damaged during the measurements, so they could not be considered in the analyses; therefore, only three curves are visible in the wear-out range. at the examined frequency (21758 hz), a continuous decrease in amplitude is observed, which indicates a decrease in the contact surface between the tool and the workpiece (figure 11).

figure 9. selected changes in the vibration signal behaviour for the chained strategy: “mean value” – “fourth moment” – “number of times the signal crosses its mean value” along the tool life cycle.
figure 10. separation of the wear-in (thin curves), normal wear condition (middle-thick curves) and wear-out (thick curves) stages of the tool for the waveform strategy.
figure 11. variation of the psd amplitude values along the manufactured workpieces at 21758 hz for the waveform strategy.

the next toolpath examined was the cycloid strategy. here, the results showed that the separation of the ranges should be sought at 1709 hz, as summarized in figure 12. as for the waveform, a continuous decrease in amplitude is observed with the cycloid strategy.
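a minimal sketch of the psd-based feature described above, assuming the 100 khz sampling rate reported earlier; welch's method, the segment length and the synthetic data are illustrative choices, not necessarily those of the study.

```python
import numpy as np
from scipy.signal import welch

FS = 100_000  # sampling frequency of the vibration measurement [hz], from [8]

def psd_feature(signal, target_hz, fs=FS, nperseg=4096):
    """estimate the psd of one recording (welch's method) and return the
    amplitude of the bin closest to target_hz, e.g. 21758 hz for the
    waveform strategy. parameter choices here are illustrative."""
    freqs, pxx = welch(np.asarray(signal, dtype=float), fs=fs, nperseg=nperseg)
    idx = int(np.argmin(np.abs(freqs - target_hz)))
    return freqs[idx], pxx[idx]

# example with synthetic data: track the 21758 hz component per workpiece
rng = np.random.default_rng(1)
for workpiece in range(3):
    f, p = psd_feature(rng.normal(size=FS), target_hz=21758)
    print(f"workpiece {workpiece}: psd at {f:.0f} hz = {p:.3e}")
```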
the last toolpath examined was the chained strategy. here, the results showed that the separation of the ranges should be sought at 2796 hz, as presented in figure 14, where the three ranges are well separated at the frequency obtained by the feature selection. in the case of the chained toolpath, based on the volume analysis of the pockets, the geometry of the last pocket did not differ significantly from the size of the previous pocket. however, the tool broke in the pocket machined after the last one presented, so, in the labelling, the last pocket was classified into the range of the wear-out period. it can be seen in figure 15 that the 8th point jumps out, and the last data point (which alone represents the wear-out stage) overlaps with the normal stage. based on this, it can be concluded that the tool breakage was not caused by tool wear alone. a similar phenomenon can be observed for the waveform path (figure 10), where the amplitude measured at the last geometry of the taylor curve falls into the normal tool wear range. however, determining the cause of the fractures requires further investigation. in contrast to the previous toolpath strategies, an opposite, increasing amplitude trend was observed along the machining when applying the chained toolpath (figure 15).

5. validation and representation of the findings on the original, measured vibration signal curves

for engineering-oriented validation and representation, some original vibration measurements for selected experiments, marked as yellow circles in figure 7, figure 8 and figure 9 (placed into the different stages on the identified, important feature curves of the waveform, cycloid and chained toolpaths, respectively), are presented in tabular form in figure 16. it clearly shows that the presented methodology works appropriately: the differences between the three tool wear stages mirror the identified behaviours. consequently, the way is open for realizing vibration-based monitoring and supervision of the micro-milling of ceramics.

figure 12. separation of the wear-in (thin curves), normal wear condition (middle-thick curves) and wear-out (thick curves) stages of the tool for the cycloid toolpath.
figure 13. variation of the psd amplitude values along the manufactured workpieces at 1709 hz for the cycloid strategy.
figure 14. separation of the wear-in (thin curves), normal wear condition (middle-thick curves) and wear-out (a thick curve) stages of the tool for the chained toolpath.
figure 15. variation of the amplitude values along the manufactured workpieces at 2796 hz using the chained toolpath.

6. conclusions and outlook

in this paper, tool wear monitoring was investigated using direct and indirect methods in order to compare three cam path strategies (waveform, cycloid and chained). the key conclusions of the paper are:

• the results clearly show that the introduced methodology works well; the most relevant signal features and the original, “pure” signal measurements mirror the identified behaviour. consequently, the way is open for realizing vibration-based monitoring and supervision of the micro-milling of ceramics.
• the identified features of the vibration signals describe the three typical stages (worn-in, normal, wear-out) of the tool life cycle according to the taylor curve.
• in general, the feature “number of times the signal crosses its mean value” has the highest relevance among the vibration signal features, so this measure describes the tool wearing progress in the most accurate way.
• using the feature selection method, it was possible to find the frequencies at which the individual regions of the taylor curves are well separated from each other.
• applying the introduced method, it is possible to determine the actual wear of the cutting tool by analysing the vibration frequencies in the micro-milling of ceramics.

as a research outlook, more detailed, novel tool wearing symptoms will be analysed in the future. the milling process/tool path will be split up into individual, homogeneous, much smaller sections (layers and curves & linear movements), so a more sensitive and more detailed process monitoring will become possible; moreover, different types of tool wear will be considered separately.

figure 16. vibration signal examples representing the progress at the different tool wearing stages (worn-in: signal from period “a”; normal condition: signal from period “b”; wear-out: signal from period “c”; for the chained strategy the tool was broken) using the three analysed toolpath strategies (waveform, cycloid, chained).

acknowledgement

this work was supported by the european commission through the h2020 project epic (https://www.centre-epic.eu/) under grant no. 739592; by the hungarian ed_18-2-2018-0006 grant on “research on prime exploitation of the potential provided by the industrial digitalisation”; and by the ministry of innovation and technology nrdi office within the framework of the artificial intelligence national laboratory program, hungary.

references

[1] p. jansohn, modern gas turbine systems, high efficiency, low emission, fuel flexible power generation, woodhead publishing series in energy, 20, 2013, isbn 978-1-84569-728-0, pp. 8-38.
[2] l. móricz, zs. j. viharos, trends on applications and feature improvements of ceramics, manufacturing 2015 conference, budapest university of technology and economics, 15, no. 2, 2015, pp. 93-98.
[3] lingfei ji, yinzhou yan, yong bao, yijian jiang, crack-free cutting of thick and dense ceramics with co2 laser by single-pass process, optics and lasers in engineering, vol. 46, issue 10, 2008, pp. 785-790. doi: 10.1016/j.optlaseng.2008.04.020
[4] jiyue zeng, thomas j. kim, an erosion model for abrasive waterjet milling of polycrystalline ceramics, wear, 199, 1996, pp. 275-282. doi: 10.1016/0043-1648(95)06721-3
[5] l. móricz, zs. j. viharos, optimization of ceramic cutting, and trends of machinability, 17th international conference on energetics-electrical engineering / 26th international conference on computers and educations, hungarian technical scientific society of transylvania, 2016, pp. 105-110.
[6] l. móricz, zs. j. viharos, a. németh, a. szépligeti, efficient ceramics manufacturing through tool path and machining parameter optimisation, 15th imeko tc10 workshop on technical diagnostics, budapest, hungary, 6-7 june 2017, pp. 143-148. online [accessed 2 september 2021] https://www.imeko.org/publications/tc10-2017/imeko-tc10-2017-024.pdf
[7] l. móricz, zs. j. viharos, zs. a. németh, a.
szépligeti, indirect measurement and diagnostics of the tool wear for ceramics micromilling optimisation, xxii imeko world congress, 3-6 september 2018, belfast, united kingdom, journal of physics: conference series (jpcs), 1065, 2018. doi: 10.1088/1742-6596/1065/10/102003
[8] l. móricz, zs. j. viharos, a. németh, a. szépligeti, m. büki, offline geometrical and microscopic & on-line vibration based cutting tool wear analysis for micro-milling of ceramics, measurement, 163, 2020, 108025. doi: 10.1016/j.measurement.2020.108025
[9] xiaohong lu, zhenyuan jia, xinxin wang, yubo liu, mingyang liu, yixuan feng, steven y. liang, measurement and prediction of vibration displacement in micro-milling of nickel-based superalloy, measurement, 145, 2019, pp. 254-263. doi: 10.1016/j.measurement.2019.05.089
[10] c. k. toh, vibration analysis in high speed rough and finish milling hardened steel, journal of sound and vibration, 278, 2004, pp. 101-115. doi: 10.1016/j.jsv.2003.11.012
[11] d. e. dimla, p. m. lister, on-line metal cutting tool condition monitoring. i: force and vibration analysis, international journal of machine tools and manufacture, 40, 2000, pp. 739-768. doi: 10.1016/s0890-6955(99)00084-x
[12] j. w. youn, m. y. yang, a study on the relationships between static/dynamic cutting force components and tool wear, journal of manufacturing science and engineering, transactions of the american society of mechanical engineers, 123, 2001, pp. 196-205. doi: 10.1115/1.1362321
[13] a. sarhan, r. sayed, a. a. nassr, el. el-zahry, interrelationships between cutting force variation and tool wear in end milling, journal of materials processing technology, 109, 2001, pp. 229-235. doi: 10.1016/s0924-0136(00)00803-7
[14] m. a. elbestawi, t. a. papazafiriou, r. x. du, in-process monitoring of tool wear in milling using cutting force signature, international journal of machine tools and manufacture, 31, 1991, pp. 55-73. doi: 10.1016/0890-6955(91)90051-4
[15] a. b. sadat, tool wear measurement and monitoring techniques for automated machining cells, in: h. masudi (ed.), tribology symposium, pd-vol. 61, american society of mechanical engineers, new york, 1994, pp. 103-115.
[16] a. b. sadat, s. raman, detection of tool flank wear using acoustic signature analysis, wear, 115, 1987, pp. 265-272. doi: 10.1016/0043-1648(87)90216-x
[17] h. k. tönshoff, m. jung, s. männel, w. rietz, using acoustic emission signals for monitoring of production processes, ultrasonics, 37, 2000, pp. 681-686. doi: 10.1016/s0041-624x(00)00026-3
[18] i. szalóki, programming trochoidal trajectories in microsoft office excel, óe / bgk / cutting technology computer design task, budapest, 2012, pp. 30.
[19] w. shao, y. li, c. liu, x. hao, tool path generation method for five-axis flank milling of corner by considering dynamic characteristics of machine tool, proc. of intelligent manufacturing in the knowledge economy, procedia cirp, 56, 2016, pp. 155-160. doi: 10.1016/j.procir.2016.10.046
[20] b. warfield, complete guide to cam toolpaths and operations for milling, 2020 edition. online [accessed 1 august 2020] https://www.cnccookbook.com/cam-features-toolpath-cnc-rest-machining
[21] m. s. h. bhuiyan, i. a. choudhury, m. dahari, y. nukman, s. z. dawal, application of acoustic emission sensor to investigate the frequency of tool wear and plastic deformation in tool condition monitoring, measurement, 92, 2016, pp. 208-217. doi: 10.1016/j.measurement.2016.06.006
[22] zs. j. viharos, k. b. kis, á. fodor, m. i.
büki, adaptive, hybrid feature selection (ahfs), pattern recognition, 116, 2021, art. 107932. doi: 10.1016/j.patcog.2021.107932

rarefied gas flow in pressure and vacuum measurements

acta imeko, june 2014, volume 3, number 2, 60 - 63, www.imeko.org

jeerasak pitakarnnop, rugkanawan wongpithayadisai
national institute of metrology, prathumthani, thailand

section: research paper

keywords: rarefied gas; slip-flow; kinetic theory

citation: jeerasak pitakarnnop, rugkanawan wongpithayadisai, rarefied gas flow in pressure and vacuum measurements, acta imeko, vol. 3, no. 2, article 14, june 2014, identifier: imeko-acta-03 (2014)-02-14

editor: paolo carbone, university of perugia

received february 14th, 2014; in final form may 25th, 2014; published june 2014

copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited

funding: (none reported)

corresponding author: jeerasak pitakarnnop, e-mail: jeerasak@nimt.or.th

abstract
flows of a gas through the piston-cylinder gap of a gas-operated pressure balance and in a general vacuum system have one aspect in common, namely that the gas is rarefied due, respectively, to the small dimensions and the low pressure. the flows in both systems can be characterised as being in either the slip-flow or the transition regime. therefore, fundamental research on flow in these regimes is useful for both pressure and vacuum metrology, especially for the gas-operated pressure balance, where a continuum viscous flow model is widely used for determining the effective area of the pressure balance. considering the gas flow under the most suitable assumption would improve the accuracy of such a calculation. moreover, knowledge about rarefied gas flow will enable gas behaviour in vacuum and low-flow leak detection systems to be predicted. this paper provides useful information about rarefied gas flow in both the slip-flow and transition regimes.

1. introduction

the characteristic scale, lc, and the pressure, p, are the two main factors that characterize the flow regime in a gas-operated system. the micro-scale gap between the piston and cylinder of a pressure balance and the ultra-low pressure in a vacuum system both reduce the number of gas molecules and cause the gas to be rarefied. consequently, due to the smaller number of gas molecules, the flow behaviour will differ from that of a general gas, where the number of gas molecules is large enough to consider the gas as a continuum medium. the continuum assumption is not valid for the aforementioned cases if the flow behaves as slip, transition, or free molecular flow. the flow regime is characterized according to its knudsen number, kn:

kn = λ / lc , (1)

where λ is the molecular mean free path and lc is the characteristic scale of the gas flow. with respect to the value of the knudsen number, there are four distinct regimes, as shown in figure 1. when kn is very small, there are enough molecules for the gas to be considered as being in the continuum regime. slip flow and other effects, such as a temperature jump at a solid surface, start to appear at values of kn greater than 0.001 and become dominant at around 0.01, where the slip-flow regime begins. as the gas becomes more and more rarefied, its flow is characterized as being in the transition and free molecular regimes, when kn reaches 0.1 and 10, respectively.

figure 1. classification of the gas flow regimes and the corresponding analytical methods (euler and navier-stokes equations in the continuum regime; navier-stokes with slip boundary conditions in the slip-flow regime; the boltzmann equation in the transition regime; the collisionless boltzmann equation in the free molecular regime) [1].
in order to predict gas behaviour accurately, it is necessary to know the flow regime; using incorrect assumptions can lead to large errors. besides the knudsen number, the rarefaction parameter δ is another quantity used to describe the flow regime, defined as:

δ = (√π / 2) · (lc / λ) = √π / (2 · kn) . (2)

the molecular mean free path was not directly measured; in this paper, it is estimated using maxwellian theory as:

λ = (√π / 2) · μ · v̄ / p , (3)

where μ is the viscosity at temperature t and v̄ = √(2rt) is the most probable molecular velocity. from the above equations, the knudsen number as a function of the pressure of gas flowing through a piston-cylinder gap in gauge and absolute modes, and through an iso-standard tube, is plotted in figure 2. according to figure 1, with the pressure balance operating in absolute mode (dashed line), flows through piston-cylinder gaps of 0.3 μm, 0.6 μm, and 0.9 μm are in the transition and free molecular flow regimes. even in gauge mode (strong solid line), the gas is rarefied enough to be characterised as being in either the slip or the transition regime. it would be inappropriate to use dadson's theory [2], which is based on continuum flow assumptions, to calculate the effective area of a piston-cylinder assembly under these conditions. this theory has therefore been modified to produce more accurate results [3][4][5]. this paper proposes an alternative method, especially for the slip-flow regime, where continuum assumptions are still valid with special consideration at the solid surface. this method, in which a slip boundary condition is applied at the surface, is useful for simulating the flow through a piston-cylinder gap using commercial cfd software.

flow in a vacuum system can be in any regime from continuum to free molecular, depending on the pressure and on the dimensions of the gas container and passages. for a conventional iso tube (solid line), rarefied gas effects start once the pressure falls below 1000 pa. at this point, conventional continuum theory begins to break down and the navier-stokes equations are no longer valid.
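as a worked illustration of equations (1)-(3), the python sketch below estimates the mean free path, the knudsen number and the rarefaction parameter, and classifies the flow regime using the kn thresholds of figure 1. the nitrogen property values are approximate, and the example geometries are illustrative.

```python
import math

R_N2 = 296.8     # specific gas constant of nitrogen [j/(kg·k)]
MU_N2 = 1.76e-5  # approximate viscosity of nitrogen at 20 °c [pa·s]

def mean_free_path(p, t=293.15, mu=MU_N2, r=R_N2):
    """maxwellian estimate, equation (3): λ = (√π/2)·μ·v̄/p, v̄ = √(2rt)."""
    v_bar = math.sqrt(2.0 * r * t)
    return 0.5 * math.sqrt(math.pi) * mu * v_bar / p

def regime(p, l_c):
    """knudsen number (1), rarefaction parameter (2), regime of figure 1."""
    kn = mean_free_path(p) / l_c
    delta = math.sqrt(math.pi) / (2.0 * kn)
    if kn < 0.01:
        name = "continuum"
    elif kn < 0.1:
        name = "slip-flow"
    elif kn < 10.0:
        name = "transition"
    else:
        name = "free molecular"
    return kn, delta, name

# 0.6 μm piston-cylinder gap at atmospheric pressure (gauge-mode example)
print(regime(p=101325.0, l_c=0.6e-6))
# iso tube (illustrative 40 mm inner diameter) at 100 pa
print(regime(p=100.0, l_c=0.04))
```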
the objective of this paper is to present the results of slip-flow and kinetic bgk (bhatnagar-gross-krook) models in both the slip-flow and transition regimes, the most common ones encountered in pressure and vacuum metrology. slip-flow equations, which extend the validity of the navier-stokes model, are detailed. these equations have been rigorously validated for microchannels [8]. the presented models are simpler than the kinetic ones, yet still provide quite accurate results within the slip-flow regime.

2. slip-flow

the slip-flow regime is a slightly rarefied one, which can occur both in gas flows through the piston-cylinder gap and in vacuum systems, as shown in figure 2. it typically corresponds to a knudsen number ranging between 0.01 and 0.1, easily reached for flow either through a micrometre-scale gap in a pressure balance operating in gauge mode under standard conditions or in rough vacuum. the knudsen layer plays a fundamental role in the slip-flow regime. this thin layer, one or two molecular mean free paths in thickness, is a region of local non-equilibrium, observed in any gas flow near a surface. in non-rarefied flow, the knudsen layer is too thin to have any significant influence but, in the slip-flow regime, it needs to be considered [6]. although the navier-stokes equations are not valid in the knudsen layer, due to the non-linear stress/strain-rate behaviour within it [7], their use with appropriate boundary conditions for velocity slip and temperature jump can provide an accurate prediction of the mass flow rate [8].

the slip-flow condition was originally proposed by maxwell and has since been developed up to the second order. several models have been proposed, most of similar form but differing slightly in the coefficients used. if isothermal flow is assumed, a general second-order slip-flow model is derived as:

u_slip = u_s − u_w = ((2 − α)/α) · aα · λ · (∂u/∂n)|wall − aβ · λ² · (∂²u/∂n²)|wall , (4)

where u_slip is the slip velocity, u_s is the flow velocity at the wall, and u_w is the velocity of the wall, with its normal direction noted as n. the mean free path of the molecules is λ, and α is the tangential momentum accommodation coefficient, equal to unity for perfectly diffuse molecular reflection and equal to zero for purely specular reflection. aα and aβ are the first and second order dimensionless coefficients, respectively. in maxwell's model, aα was taken as being equal to unity, which overestimates the velocity at the wall but leads to a good prediction of the gas velocity outside the knudsen layer. example values of aα and aβ proposed in the literature are given in table 1.

to determine the pressure distribution along the piston and cylinder surfaces, or the flow through vacuum systems, the boundary equation (4) is applied to the navier-stokes equations. these equations can be solved analytically for flow through simple geometries, whereas flow within a more complicated model requires a numerical calculation. normally, commercial cfd (computational fluid dynamics) software such as ansys fluent enables slip boundary conditions to be input at a boundary surface. methods to apply these boundary conditions in cfd software have been presented in the literature [6][9]. gas flow through a piston-cylinder gap is considered as a flow between two infinite parallel plates (or slabs) in the analysis, as the gap between the piston and cylinder of a pressure balance is very small in comparison to the radius of the piston.
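the following sketch evaluates the slip boundary condition (4) at the wall of a slab, assuming a fully developed poiseuille profile so that the wall gradients can be written in closed form; the numerical values are illustrative, and the formula follows the reconstruction of equation (4) given above.

```python
def wall_slip_velocity(dpdx, h, mu, lam, alpha=1.0, a_alpha=1.0, a_beta=0.0):
    """slip velocity from equation (4) at the wall of a slab of height h,
    for fully developed poiseuille flow: (∂u/∂n)|w = p'·h/(2·μ) and
    (∂²u/∂n²)|w = -p'/μ, where p' = -dpdx is the driving pressure gradient."""
    p_drive = -dpdx
    first = (2.0 - alpha) / alpha * a_alpha * lam * p_drive * h / (2.0 * mu)
    second = a_beta * lam ** 2 * p_drive / mu  # equals -a_beta·λ²·(∂²u/∂n²)|w
    return first + second

# n2 at 20 °c in a 0.6 μm gap; the pressure gradient is illustrative only
mu, lam, h, dpdx = 1.76e-5, 6.4e-8, 0.6e-6, -4.0e6
print(wall_slip_velocity(dpdx, h, mu, lam))                                 # maxwell, 1879
print(wall_slip_velocity(dpdx, h, mu, lam, a_alpha=1.1466, a_beta=0.6470))  # hadjiconstantinou, 2003
```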
after applying the slip coefficients to the navier-stokes equations, a reduced flow rate for slab flow is derived in terms of the rarefaction parameter as:

g_p = δ/6 + ((2 − α)/α) · aα · √π/2 + aβ · π/(2δ) . (5)

since the rarefaction parameter in equation (5) depends on pressure, the reduced flow rate g_p and the pressure distribution along the slab need to be computed iteratively. the equation of the pressure distribution along a piston and cylinder was derived by priruenrom [10] as:

z/l = ( ∫ from p(z) to p1 of g_p dp ) / ( ∫ from p2 to p1 of g_p dp ) , (6)

where p1 is the applied pressure at the bottom of the piston, p2 is the pressure above the piston, z is the axial coordinate along the piston and cylinder, and l is the piston-cylinder overlapped length. further information on how to determine the effective area and the pressure distortion coefficient using the above equation is presented in her thesis.

figure 2. pressure versus kn for usual gas flow through different piston-cylinder gaps and vacuum tubes for n2 at 20 °c.

table 1. example values of the dimensionless coefficients.

author, year                aα       aβ
maxwell, 1879               1.0000   0.0000
cercignani, 1964            1.1466   0.9756
deissler, 1964              1.0000   1.1250
hadjiconstantinou, 2003     1.1466   0.6470

the previously discussed slip-flow methods can also be used to predict gas flow through a vacuum piping system, the only difference being the cross-sectional geometry of the flow passage, which is normally circular. the reduced flow rate for slip-flow through a tube is calculated as:

g_p = δ/4 + ((2 − α)/α) · aα · √π/2 + aβ · π/(4δ) . (7)

a flow parameter of common interest is the mass flow rate through the passage. this can be calculated from the reduced flow rate as follows [11][12]: for slab flow,

ṁ = g_p · a · h · Δp / (v̄ · l) , (8)

where a is the cross-sectional area of the channel, h is the height of the slab, and Δp is the pressure difference over the channel length l. for flow through a circular tube,

ṁ = g_p · π · r³ · Δp / (v̄ · l) , (9)

where r is the radius of the tube.

3. transition flow

as discussed in the previous section, slip-flow models are limited to use within the slip-flow regime, whereas, within the transition regime, kinetic gas theory must be used. solutions based on this theory are valid throughout the whole range of the knudsen number, from the free molecular, through the transition, and up to the slip and hydrodynamic regimes. in this paper, the bgk model (one kinetic gas model) is chosen, and the linearised bgk model is solved numerically by the dvm (discrete velocity method) to determine the flow behaviour. previous research work [8] has demonstrated that fully developed isothermal pressure-driven flows are accurately predictable by kinetic models, such as the linearised bgk equation. figure 3 shows a comparison between measurement results and those from the bgk model. the gas flow rate measurements were performed at the inlet and the outlet of a series of rectangular microchannels whose aspect ratio was approximately 0.1. the reduced flow rate g_p is plotted in terms of the rarefaction parameter δ. the measurement results are in good agreement with the kinetic model based on the linearised bgk equation, with α = 1, i.e., diffuse reflection, where gas molecules lose all their tangential momentum to a wall during their collisions. details of the investigation are described in the literature.

4. results

the results of the slip-flow model for slab flow are shown in figure 4, where the reduced flow rates g_p from equation (5), for each model presented in table 1, are plotted in terms of the rarefaction parameter δ. the result from the bgk model is plotted as a benchmark, against which the results from the other models can be compared.
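a minimal sketch of equation (5) with the coefficients of table 1, in the spirit of the comparison of figure 4; the model dictionary and the sample δ values are illustrative.

```python
import math

MODELS = {  # (a_alpha, a_beta) from table 1
    "maxwell, 1879":           (1.0000, 0.0000),
    "cercignani, 1964":        (1.1466, 0.9756),
    "deissler, 1964":          (1.0000, 1.1250),
    "hadjiconstantinou, 2003": (1.1466, 0.6470),
}

def gp_slab(delta, a_alpha, a_beta, alpha=1.0):
    """reduced flow rate for slab flow, equation (5)."""
    return (delta / 6.0
            + (2.0 - alpha) / alpha * a_alpha * math.sqrt(math.pi) / 2.0
            + a_beta * math.pi / (2.0 * delta))

# reproduce the spirit of figure 4: g_p over the slip-flow regime (δ ≳ 8.86)
for delta in (9.0, 20.0, 50.0):
    row = {name: round(gp_slab(delta, *coeff), 4) for name, coeff in MODELS.items()}
    print(delta, row)
```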
all results are in very good agreement within the slip-flow regime (δ > 8.86, i.e., kn < 0.1), except that the first-order slip-flow model deviates significantly from the others near the upper limit of the slip-flow regime: a difference of up to 8.5 % is observed at δ = 9 when compared with the result from the bgk method. the hadjiconstantinou equation yields the result closest to the bgk method, with less than a 0.9 % difference observed over the entire slip-flow regime. an overview showing the region from the slip-flow regime to the free molecular regime is given in figure 5. to help focus attention on the transition regime, the rarefaction parameter δ is plotted on a logarithmic scale. the result of the bgk model with the tangential accommodation coefficient α equal to 0.9 is plotted to demonstrate the trend in the results when the gas-surface interaction is no longer considered to be a purely diffuse collision. moreover, results of the bgk model from cercignani & pagani [13], lo & loyalka [14], and loyalka et al. [15] are also plotted to support the results obtained. any differences between the results of the various bgk models are insignificant throughout all regimes, whereas the results from all slip-flow models fail to predict rarefied gas flow in both the transition and molecular flow regimes.

for flow within a circular tube, the trend of the results shown in figure 6 does not differ much from the previous slab flow case. the results from the bgk model at α = 1 are in very good agreement with those of cercignani & sernagiotto [16], sharipov, and loyalka & hamoodi, using a bgk model [17], a shakhov model, and a numerically solved boltzmann equation [18], respectively.

figure 3. reduced flow rate (g_p) versus rarefaction parameter (δ) of flow through a rectangular channel (aspect ratio ≈ 0.1) [7].
figure 4. reduced flow rate (g_p) versus rarefaction parameter (δ) of flow through a slab at diffuse reflection (α = 1).

as in the slab flow case, all slip-flow equations break down in the transition and free molecular regimes. however, within the slip-flow regime itself, the boundary equation in conjunction with the navier-stokes equations is the preferred method to solve for the bulk flow, due to the less complex calculations and the lower required resources. for simple geometry cases, it is possible to obtain an exact solution.

5. discussion and conclusion

general equations to determine the flow rate of rarefied gas through both a slab and a tube, based on the navier-stokes equations in conjunction with slip boundary conditions, have been developed. the results of these slip-flow models have been compared with those from kinetic theory using a bgk model, which have been obtained by a numerical method using a dvm (discrete velocity method) scheme. slip-flow models require lower computational resources, but their applicability is limited: they provide a reliable result within the continuum and slip-flow regimes but fail to predict rarefied gas flow in both the transition and free molecular regimes. since the slip-flow method is easier both to handle and to apply in commercial cfd (computational fluid dynamics) software for solving complex problems, it is still a valuable approach. it can be used as an alternative method to determine the gas flow within a piston-cylinder gap in the slip-flow regime. moreover, such a method can also provide a precise prediction of the flow rate through the piping of a vacuum system operating within the slip-flow regime.
acknowledgment

several years ago, prof. stéphane colin, prof. lucien baldas, prof. sandrine geoffroy, and their colleagues at the institut clément ader opened a door to the world of rarefied gas dynamics for the first author, giving him the opportunity to join their research. a few years later, the chance to meet prof. dimitris valougeorgis and colleagues at the university of thessaly brought him a key to the kinetic theory of gases. he is forever grateful to them for their kind support in knowledge and mind.

references

[1] d. valougeorgis, “solution of vacuum flows via kinetic theory,” 51st iuvsta workshop on modern problems and capability of vacuum gas dynamics, 2007.
[2] r. s. dadson, s. l. lewis, g. n. peggs, the pressure balance: theory and practice, london, hmso, 1982.
[3] c. m. sutton, “the effective area of a pressure balance at low pressures,” j. phys. e: sci. instrum., vol. 13, pp. 857-859, 1980.
[4] j. w. schmidt, b. e. welch, c. d. ehrlich, “operational mode and gas species effects on rotational drag in pneumatic dead weight pressure gauges,” meas. sci. technol., vol. 4, pp. 26-34, 1993.
[5] j. w. schmidt, s. a. tison, c. d. ehrlich, “model for drag forces in the crevice of piston gauges in the viscous-flow and molecular-flow regimes,” metrologia, vol. 36, pp. 565-570, 1999.
[6] j. pitakarnnop, s. geoffroy, s. colin, l. baldas, “slip flow in triangular and trapezoidal microchannels,” international journal of heat and technology, vol. 26, no. 1, pp. 167-174, 2008.
[7] d. a. lockerby, j. m. reese, m. a. gallis, “capturing the knudsen layer in continuum-fluid models of non-equilibrium gas flows,” aiaa journal, vol. 43, no. 6, pp. 1391-1393, 2005.
[8] j. pitakarnnop, s. varoutis, d. valougeorgis, s. geoffroy, l. baldas, s. colin, “a novel experimental setup for gas microflows,” microfluid nanofluid, vol. 8, no. 1, pp. 57-72, 2010.
[9] j. pitakarnnop, analyse expérimentale et simulation numérique d'écoulements raréfiés de gaz simple et de mélanges gazeux dans les microcanaux, université de toulouse, 2009.
[10] t. priruenrom, development of pressure balance for absolute pressure measurement in gases up to 7 mpa, clausthal university of technology, 2011.
[11] f. sharipov, v. seleznev, “data on internal rarefied gas flows,” j. phys. chem. ref. data, vol. 27, no. 3, pp. 657-706, 1998.
[12] s. varoutis et al., “computational and experimental study of gas flows through long channels of various cross sections in the whole range of the knudsen number,” j. vac. sci. technol. a, vol. 27, no. 1, pp. 89-100, 2009.
[13] c. cercignani, c. d. pagani, “variational approach to boundary-value problems in kinetic theory,” phys. fluids, vol. 9, no. 6, pp. 1167-1173, 1966.
[14] s. s. lo, s. k. loyalka, “an efficient computation of near-continuum rarefied gas flows,” j. appl. math. phys. (zamp), vol. 33, no. 3, pp. 419-424, 1982.
[15] s. k. loyalka, n. petrellis, t. s. storvick, “some exact numerical results for the bgk model: couette, poiseuille and thermal creep flow between parallel plates,” j. appl. math. phys. (zamp), vol. 30, no. 3, pp. 514-521, 1979.
[16] c. cercignani, f. sernagiotto, “cylindrical poiseuille flow of a rarefied gas,” phys. fluids, vol. 9, no. 1, pp. 40-44, 1966.
[17] f. sharipov, “rarefied gas flow through a long tube at any temperature ratio,” j. vac. sci. technol. a, vol. 14, no. 4, pp. 2627-2635, 1996.
[18] c. cercignani, s. a. hamoodi, “poiseuille flow of a rarefied gas in a cylindrical tube: solution of linearized boltzmann equation,” phys.
fluids, vol. 2, no. 11, pp. 2061-2065, 1990.

figure 5. reduced flow rate (g_p) versus rarefaction parameter (δ) of flow through a slab (α = 0.9, 1).
figure 6. reduced flow rate (g_p) versus rarefaction parameter (δ) of flow through a circular tube at diffuse reflection (α = 1).

path planning for data collection robot in sensing field with obstacles

acta imeko, issn: 2221-870x, september 2022, volume 11, number 3, 1 - 10, www.imeko.org

sára olasz-szabó1, istván harmati1

1 dept. of control engineering and information technology, budapest university of technology and economics, budapest, hungary

section: research paper

keywords: path planning, mobile robots, obstacle avoidance

citation: sára olasz-szabó, istván harmati, path planning for data collection robot in sensing field with obstacles, acta imeko, vol. 11, no. 3, article 5, september 2022, identifier: imeko-acta-11 (2022)-03-05

section editor: zafar taqvi, usa

received february 26, 2022; in final form july 21, 2022; published september 2022

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: sára olasz-szabó, e-mail: olasz-szabo.sara@edu.bme.hu

abstract
using mobile robots to collect data from a wireless sensor network can reduce energy dissipation and in this way improve the network lifetime. our problem is to plan paths for unicycle robots that visit a set of sensor nodes and download their data in a sensing field with obstacles, while minimizing the path length and the collecting time. reconstructing the path of an intruder in a guarded area is also a possible application of this technology. during path planning we place great emphasis on the handling of obstacles: if the area contains many or large obstacles, the robots may spend a long time avoiding them, so this is a critical point in finding the minimal path. this paper proposes a new approach for handling obstacles during path planning. a new algorithm is developed to plan the visiting sequence of the nodes, taking the obstacles into consideration as well.

1. introduction

nowadays there is a more and more common need for continuous data collection over a specified area. the simplest way to perform such data collection is to use wireless sensor networks (wsns) [1], [2]. in most applications, a wsn consists of two parts: one data collection unit (also known as a sink or base station) and a large number of tiny sensor nodes. typically, both the sensor nodes and the sink remain static after deployment. the sensor nodes, which are equipped with various sensor units, are capable of sensing the physical world and providing data to the sink through single-hop or multi-hop routing [3]. sensors are usually powered by batteries, which cannot be replaced in some applications, e.g., battlefield surveillance [4]. since the data loss rate increases with distance, and each data transmission rate is associated with an energy consumption rate modelled as a non-decreasing staircase function of the distance [5], remote data transmission uses a lot of energy and this deteriorates the network lifetime. for these reasons, the data transmission is executed by data collection robots [6], [7]. there are many applications of this technology in the literature from recent years. for example, [8] reviews a range of techniques related to mobile robots in wsns. the paper [9] considered deploying a flying robotic network to monitor mobile targets in an area of interest for a specific time period using wsns. the work [10] investigated using a mobile sink, attached to a bus, to collect data in wsns with non-uniform node distribution. however, the robots have limited velocity, so the data delay increases significantly. since transmitting over a short distance is more reliable than over a long distance, using robots improves the data collection rate. in addition, in terms of security, sending mobile sinks to collect data is more secure than transmitting via multi-hop communication [11].
this may be important in some military applications as well. in paper [12] the authors raise and solve the problem of viable path planning for data collection unicycle robots in a sensing field with obstacles. the robots must visit all sensing nodes and then return to the base station and upload the collected data. path planning for the robots is a crucial problem, since the constructed paths directly relate to performance measures such as the delivery delay and the energy consumption of the system. in a sensing field there are obstacles as well, and the robots must not collide with them. the data collection is carried out by a unicycle dubins car [13], which can only move with constant velocity and bounded angular velocity, so it can move only on straight lines and turn with a bounded turning radius.

for successful path planning it is necessary to determine the criteria of an adequate path. in paper [12] the authors define a viable path, which is smooth, collision-free with the sensor nodes/base station and the obstacles, closed, and provides enough contact time with all the sensor nodes. because of the kinematic properties of the robots, the path must be smooth. a safety boundary is determined around the obstacles and nodes for the sake of a collision-free path. all nodes are bounded by a visiting circle with the minimum turning radius of the unicycle robot; the minimum turning radius depends on the speed of the robot and its maximum angular velocity. moreover, each obstacle's convex hull is bounded with a safety margin, since in the case of the shortest path the robot should move on the boundary of the convex hull. the path must be closed because of the periodic data collection. the robot downloads data only while it moves around the visiting circle, so it makes round trips around the node until it has collected all the data from the sensor node. during path planning it is assumed that the locations of all nodes and obstacles, as well as the shapes of the obstacles, are known. between two objects (nodes or obstacles) four tangents are always defined, but any tangent that intersects another obstacle is removed. so, when the robot arrives at a node on a tangent, it starts downloading data, makes round trips until it has collected all the data from the node, and then leaves the node on a tangent. a path therefore consists of an adequate configuration of tangents and arcs around objects at the safety distance; a sketch of the underlying tangent computation is given below.
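the tangent construction between two circles (visiting circles or safety boundaries) underlying the tangent graph can be sketched as follows; the function name and data layout are illustrative, and tangents blocked by obstacles still have to be filtered out as described above.

```python
import math

def circle_tangents(c1, r1, c2, r2):
    """tangent points of the (up to) four common tangents between two
    circles; each entry is ((x1, y1), (x2, y2))."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = math.hypot(dx, dy)
    phi = math.atan2(dy, dx)
    tangents = []
    for sign_r in (+1, -1):           # +1: external pair, -1: internal pair
        cos_a = (r1 - sign_r * r2) / d
        if abs(cos_a) > 1.0:           # circles overlap: this pair is missing
            continue
        alpha = math.acos(cos_a)
        for s in (+1, -1):             # the two tangents of the pair
            u = (math.cos(phi + s * alpha), math.sin(phi + s * alpha))
            p1 = (c1[0] + r1 * u[0], c1[1] + r1 * u[1])
            p2 = (c2[0] + sign_r * r2 * u[0], c2[1] + sign_r * r2 * u[1])
            tangents.append((p1, p2))
    return tangents

# two visiting circles with the minimum turning radius r_min = 1
for p1, p2 in circle_tangents((0.0, 0.0), 1.0, (5.0, 0.0), 1.0):
    print(p1, p2)
```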
the paper is organized as follows: in section 2 we summarize the basic method [12]; section 3 describes the proposed concepts for the path planning and presents our new algorithms; in section 4 we demonstrate simulation results.

2. summary of the shortest viable path planning algorithm

in paper [12] the shortest viable path planning (svpp) algorithm was defined. the main steps of svpp are outlined as algorithm 1. this algorithm first computes a 𝛴 permutation of nodes without obstacles by solving an asymmetric travelling salesman problem (atsp). for this, the authors construct a directed graph, where the vertices are the nodes and the lengths of the edges are calculated as follows. the length of an edge between two vertices takes into account two aspects: the length of the valid path between their visiting circles and the length of the adjusted arc on the latter vertex. thus, the length of the edge from 𝑠1 to 𝑠2 equals the sum of the average length of the tangents and the length of the adjusted arc on the visiting circle of 𝑠2; in contrast, the length of the edge from 𝑠2 to 𝑠1 equals the sum of the average length of the tangents and the length of the adjusted arc on the visiting circle of 𝑠1. with such a directed graph, they use an atsp solver [14] to calculate the permutation 𝛴. at this point there can be tangents that intersect obstacles in the 𝐺(𝑉, 𝐸) tangent graph [15], where 𝑉 denotes the tangent points and 𝐸 denotes the tangents.

the second and third steps of this algorithm add the blocking obstacles to the permutation and construct a simplified tangent graph. having 𝛴, 𝐺(𝑉, 𝐸) can be simplified by keeping only the tangent edges that connect succeeding visiting circles in 𝛴 and the corresponding arc edges. when any obstacle blocks the route between a pair of visiting circles, the tangents passing the obstacle's safety boundary are also included in the 𝐺′(𝑉′, 𝐸′) simplified tangent graph, and the algorithm inserts the obstacle into the 𝛴′ permutation between the two nodes. one obstacle can block more than one pair of nodes; in this case the algorithm inserts the obstacle into the 𝛴′ permutation at more than one place. the algorithm constructs 𝐺′(𝑉′, 𝐸′) by keeping the edges and vertices related to the permutation of nodes and obstacles while deleting the others. obviously, 𝛴 ⊆ 𝛴′. the new graph is called the simplified tangent graph 𝐺′(𝑉′, 𝐸′), where 𝑉′ ⊆ 𝑉 and 𝐸′ ⊆ 𝐸.

the next step is converting 𝐺′(𝑉′, 𝐸′) to a tree-like graph 𝑇. this gives additional information about the succeeding usable tangents and arcs. from every object there are four tangents departing to the next object, whose starting tangent points are the departure configurations, and there are four tangents arriving from the previous object, whose tangent points are the arrival configurations. this means that every object can be transformed into 8 vertices in the tree-like graph. the path length between two objects in the 𝛴′ permutation always consists of two components: the first component is the arc around the first object from the arrival to the departure tangent point, including additional full circles if these are necessary to download the data; the second component is the length of the tangent between the two tangent points. for the calculation of the distance between the 𝑖th and (𝑖 + 1)th objects (𝑖 ∈ [2, 𝑛′ − 1], where 𝑛′ denotes the number of objects in the permutation), we need information about the tangent and tangent point between the (𝑖 − 1)th and 𝑖th objects.
one should know which tangent point will be used by the tangent on the visiting circle of the 𝑖th object in order to calculate the arc length on the visiting circle. figure 1 illustrates this problem.

algorithm 1: shortest viable path planning (svpp)
1. compute 𝛴 by solving an atsp instance based on 𝐺(𝑉, 𝐸).
2. compute 𝛴′ by adding to 𝛴 those obstacles whose safety boundaries block tangents between nodes.
3. simplify 𝐺(𝑉, 𝐸) to 𝐺′(𝑉′, 𝐸′) by keeping the edges and the vertices related to 𝛴′ and deleting the others.
4. convert 𝐺′(𝑉′, 𝐸′) to a tree-like graph 𝑇.
5. given an initial configuration, search the shortest path 𝑃 in 𝑇.

figure 1. the distance between two objects in the permutation consists of two parts: arc and tangent length.

the path between two objects is represented by two segments in the directed tree graph; these two parts are the arc and the tangent. so, in the tree-like graph the vertices are the tangent points and the edges are the arcs and the tangents. the direction of an edge points to the next part of the path. during the representation of the edges one must pay attention to the heading constraint, which requires that the robot's heading at the beginning of an edge be equal to its heading at the end of the previous edge. the base station is the starting node, so the first element of the tree-like graph is one of the points of the base station's visiting circle. because the path is closed, the final element of the tree-like graph should also be one of the points of the base station's visiting circle. since the authors use a dubins car, they construct the tree-like graph for both the positive (clockwise) and negative (anti-clockwise) initial directions as well. it can be seen that from each arrival configuration of an element there are two options to reach the arrival configurations of the next element because of the heading constraint. from a given starting point, the total number of paths starting and ending at this point is 2^(𝑛′−1), taking into consideration that the starting and ending directions should be the same because of the continuous data collection. in paper [12], a dynamic programming based method is used to solve the shortest path search in the tree-like graph.
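a minimal sketch of such a dynamic-programming search over a layered, tree-like graph follows; the data layout (one layer of configurations per object, an edge-length callback returning none for moves that violate the heading constraint) is an illustrative assumption, not the exact structure of [12].

```python
def shortest_path_layered(layers, edge_len):
    """dynamic programming over a layered graph such as 𝑇: layers[k] is the
    list of configurations (vertices) of the k-th object, and edge_len(u, v)
    returns the arc + tangent length from u to v, or none if the heading
    constraint forbids the move. vertices are assumed unique across layers."""
    INF = float("inf")
    cost = {v: 0.0 for v in layers[0]}  # start configurations
    back = {}
    for k in range(1, len(layers)):
        new_cost = {}
        for v in layers[k]:
            best, best_u = INF, None
            for u in layers[k - 1]:
                w = edge_len(u, v)
                if w is not None and cost.get(u, INF) + w < best:
                    best, best_u = cost[u] + w, u
            if best_u is not None:
                new_cost[v], back[v] = best, best_u
        cost = new_cost
    # pick the cheapest ending configuration and walk the path backwards
    end = min(cost, key=cost.get)
    path = [end]
    while path[-1] in back:
        path.append(back[path[-1]])
    return cost[end], path[::-1]
```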
3. new concepts of solution

in this paper new concepts for the svpp algorithm are developed. the new algorithm based on these modifications is called the generalized-svpp algorithm. in the following, these modifications and the new algorithms are described in detail.

3.1. constructing the tangent graph

in paper [12] the tangents that intersect visiting circles are not included in the tangent graph (assumption 1, see below). however, the robot can move collision-free on a tangent that does not intersect the circle with the centre of the node and radius 𝑑𝑠𝑎𝑓𝑒. therefore, in the proposed new algorithm, tangents that do not intersect the circle with radius 𝑑𝑠𝑎𝑓𝑒 around a node are available as well (assumption 2). this way the planned path may be shorter in certain cases.

assumption 1: the tangents that intersect visiting circles are not included in the tangent graph.

assumption 2: the tangents that intersect visiting circles, but do not intersect circles with the centre of a node and radius 𝑑𝑠𝑎𝑓𝑒, are included in the tangent graph.

3.2. permutation of nodes

at this point the obstacles are not taken into account when creating the permutation of nodes; tangents that intersect obstacles are allowed in this step. a graph is constructed where the vertices are the nodes and the length of an edge is the average length of the tangents between the two nodes. after this, we search for the shortest closed cycle through all of the nodes, namely the shortest hamilton cycle in this graph. this is the travelling salesman problem, and by solving it the 𝛴 permutation of nodes is determined. as was presented in section 2, in paper [12] the authors take into account the path length necessary to download the data and solve this problem with an atsp. the exact path length around a visiting circle cannot be determined, since the actual tangents are not known at this point; this is the reason why the average length of the tangents is used.

3.3. new concept of handling obstacles, construction of the simplified tangent graph

in algorithm 1 (svpp) there are two types of problems in the handling of obstacles. first these problems are described and then the solutions for them are presented. the problems are illustrated in figure 2.

1. for example, in figure 2 between node-1 and node-2 there is only one available tangent. when obstacle-1 is in the permutation, the available tangent between the nodes is not usable in the shortest path planning; but when obstacle-1 is not in the permutation, there may be no solution at all, depending on the initial configurations, due to the heading constraints.
2. there can be several obstacles between two nodes such that these obstacles block different tangents. for instance, in figure 2 between node-3 and node-4, obstacle-2 and obstacle-3 block different tangents.

tangent directions: the tangent direction is called positive-negative (pn) if the robot can make a round trip around the first node in the positive (clockwise) direction and around the succeeding second node in the negative direction. the positive-positive (pp), negative-positive (np) and negative-negative (nn) directions can be defined similarly.

in this paper a new algorithm is proposed instead of the second and third steps of algorithm 1. algorithm 1 creates one permutation of nodes and obstacles (assumption 3). the basic idea of the new algorithm is to calculate more than one permutation (assumption 4) and then to use these to construct the 𝐺′(𝑉′, 𝐸′) simplified tangent graph and then the 𝑇 tree-like graph, in order to get a better solution.

assumption 3: one permutation of nodes and obstacles is created. the original svpp (algorithm 1) uses this assumption.

assumption 4: more than one permutation of nodes and obstacles is created. the new algorithm 2 is created by applying this assumption.

figure 2. an example of blocked tangents, illustrated with dashed lines.

instead of the second and third steps of algorithm 1, the following algorithm 2 is used. in the first stage, four copies of the 𝛴 permutation are created; then, for every two nodes, the blocking obstacles of all four tangents are determined. when one of the tangents intersects an obstacle, the obstacle is inserted at the proper position in the feasible permutation determined by the tangent direction. note that when more than one obstacle is intersected, they are inserted into the feasible permutation according to their distance from the previous node. then the duplicate solutions are eliminated, and after this the previous procedure is repeated until no new intersected obstacles are found; a sketch of this loop is given below.
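the loop described above can be sketched as follows; the callback for blocking obstacles and the data layout are illustrative assumptions.

```python
DIRECTIONS = ("pp", "pn", "np", "nn")  # tangent directions of section 3.3

def add_obstacles(sigma, blocking_obstacles):
    """sketch of algorithm 2. sigma is the node permutation;
    blocking_obstacles(a, b, direction) returns the obstacles blocking the
    tangent of the given direction between consecutive objects a and b,
    sorted by distance from a."""
    perms = {d: list(sigma) for d in DIRECTIONS}  # four copies of 𝛴
    changed = True
    while changed:                                # repeat while new obstacles appear
        changed = False
        for d, perm in perms.items():
            i = 0
            while i < len(perm) - 1:
                obs = [o for o in blocking_obstacles(perm[i], perm[i + 1], d)
                       if o not in perm]
                if obs:                           # insert by distance from perm[i]
                    perm[i + 1:i + 1] = obs
                    changed = True
                i += 1
    # eliminate duplicate permutations
    unique = []
    for perm in perms.values():
        if perm not in unique:
            unique.append(perm)
    return unique
```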
while repeating this procedure, in the first step the 𝛴′ permutations of nodes and blocking obstacles are used instead of the 𝛴 permutation. in the first case, both the direct tangents and the edges passing the obstacle safety boundaries are inserted into the simplified tangent graph. in case one tangent is blocked by more than one obstacle, all tangents and tangent points between the two nodes, between any obstacle and the two nodes, and between any two obstacles are inserted into the simplified tangent graph, provided these tangents are not blocked. the simplified tangent graph contains all of the tangents and tangent points from all permutations; besides, it also contains all of the arcs between the tangent points.

algorithm 2: add obstacles to permutations
1. create four copies of the 𝛴 permutation.
2. for every two nodes, determine the blocking obstacles of all four tangents and insert these obstacles at the proper positions in the feasible permutations, according to their distance from the previous node.
3. eliminate the duplicate permutations.
4. jump to 1. and repeat the algorithm while there are intersecting obstacles; in this case, instead of 𝛴, the 𝛴′ permutations are used.

3.4. constructing the tree-like graph

the dubins car moves on tangents or arcs. in the case of obstacles it moves on arcs between the arrival and the departure configurations, and around the visiting circle while it downloads all the data from the sensor node. because of the heading constraint, the tangent direction determines the direction around the next object, and the direction around the object determines the available departure tangents. there are two tangents available for a given direction for any two objects. in the case of one permutation there are two departure configurations for every object, but if there is more than one permutation, the number of departure configurations depends on the permutations and the simplified tangent graph. in paper [12] there is only one permutation, so there are some cases where this causes problems, as was shown in the previous section. the new algorithm 2 proposed in the present article may achieve a shorter path and give a more general solution, but the tree-like graph becomes more complex; the new algorithm can select the shorter path from more available options. in this paper, algorithm 3 and algorithm 4 are recommended for constructing the tree-like graph for the cases with one and with more than one permutation, respectively. we demonstrate the transformation from the simplified tangent graph into the tree-like graph with the help of the example field in figure 3. to construct the tree-like graph, in the first step two starting and two ending vertices are created, for both the positive and the negative initial direction; this is illustrated in figure 4, picture a).

algorithm 3: constructing 𝑇 in case of one 𝛴′ permutation
1. for both the negative and the positive direction, add a starting and an ending point as vertices to 𝑇.
2. according to 𝛴′, add 4 arrival and 4 departure tangent points for all objects to 𝑇.
3. determine all arc lengths between all possible arrival and departure configurations, taking into consideration the heading constraint and the possibly different arc length calculation methods of the nodes and obstacles. add the arc lengths as the lengths of the edges to 𝑇 between the corresponding vertices.
4. around the visiting circle of the base station, determine the arc length between the starting point and the arrival and departure configurations.
algorithm 3: constructing 𝑇 in the case of one 𝛴′ permutation
1. for both the negative and the positive direction, add the starting and ending points as vertices to 𝑇.
2. according to 𝛴′, add 4 arrival and 4 departure tangent points for all objects to 𝑇.
3. determine all arc lengths between all possible arrival and departure configurations, taking into consideration the heading constraint and the possibly different arc-length calculation methods of the nodes and obstacles. add the arc lengths as the lengths of the edges to 𝑇 between the corresponding vertices.
4. around the visiting circle of the base station, determine the arc length between the starting point and the arrival and departure configurations. add this as an edge length to 𝑇 at the proper position.
5. according to 𝛴′, add all tangents between all consecutive objects to 𝑇, taking into consideration the heading constraint.

algorithm 4: constructing 𝑇 in the case of more than one 𝛴′ permutation
1. apply algorithm 3 to the 𝛴 permutation of nodes. add only the edges that are members of 𝐺′(𝑉′, 𝐸′).
2. do for all 𝛴′ permutations:
 1.) if there is only one obstacle in the 𝑖th place between two nodes: run steps 2-4 of algorithm 3 for the objects 𝜎𝑖−1, 𝜎𝑖, 𝜎𝑖+1, but only if the edges are in 𝐺′(𝑉′, 𝐸′).
 2.) if there is more than one obstacle between two nodes, let the obstacles be denoted by 𝜕𝑂 = {𝜕𝑜1, …, 𝜕𝑜𝑗}: do step 1.) for all obstacles 𝜕𝑜𝑖 ∈ 𝜕𝑂 in the given permutation and for the nodes before and after the obstacle. run steps 2-4 of algorithm 3 for all pairs of obstacles 𝜕𝑜𝑖 ∈ 𝜕𝑂; run it also in the case when these are not subsequent elements in the permutation, naturally only when the vertices and edges are in 𝐺′(𝑉′, 𝐸′).

figure 3. an example field of tree-like graph construction with the 𝛴 = {𝑪𝟏, 𝑪𝟐, 𝑪𝟑, 𝑪𝟒} permutation of nodes.

figure 4. an example of tree-like graph construction from figure 3 using algorithm 4: a) add starting and ending points as vertices; b) add tangent points of nodes as vertices; c) add arc lengths around the nodes between the arrival and departure configurations as edges; d) add tangent lengths between the departure and arrival configurations as edges; e) add tangent points of the tangents between the obstacles and nodes in the simplified tangent graph as vertices; f) add arc lengths around the obstacles and the connecting nodes as edges; g) add tangents between the obstacles and nodes as edges. obstacles are denoted by purple. the vertices in the same rectangle denote the tangent points of the same object.

after this, the nodes are inserted into the graph according to the permutation 𝛴. using 𝐺′(𝑉′, 𝐸′), vertices are created from the tangent points (figure 4 b)). the next step is to determine the edges between the vertices. the length of the edges between the starting point and the departure configurations of the visiting circle of the base node is equal to the arc length between the starting point and the departure configurations. the next step is to determine the arc length between the arrival and departure configurations around the node, taking into account the heading constraint.
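the arc lengths used as edge weights in these steps follow from the rotation sense around the circle: the robot travels the angular difference between the arrival and the departure tangent points, measured in the given direction. a minimal sketch, assuming angles measured at the circle centre; which sense maps to "positive" is a convention of the implementation, not fixed by the paper.

```python
import math

def arc_length(radius, arrival_angle, departure_angle, positive=True):
    """arc travelled around a visiting circle between the arrival and the
    departure tangent points, for the chosen rotation sense."""
    delta = (departure_angle - arrival_angle) % (2 * math.pi)
    if positive:                      # sweep the complementary angle in the
        delta = (2 * math.pi - delta) % (2 * math.pi)  # opposite sense
    return radius * delta
```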
in the next step, edges are added to 𝑇 between the ending vertices and the arrival configurations of the base node's visiting circle (figure 4 c)). finally, between the departure configuration and the arrival configurations of the next node, the lengths of the edges are the lengths of the tangents with the proper direction (figure 4 d)). this step should be done for all nodes in the order given by the 𝛴 permutation. the next step is to add the obstacles to the tree-like graph according to the 𝛴′ permutation and the 𝐺′(𝑉′, 𝐸′) simplified tangent graph. in the case of figure 3, obstacle-1 blocks tangents between node-2 and node-3. first, those departure tangent points of the visiting circle of node-2 which are also on a tangent of obstacle-1 are added as vertices. in addition, the arrival configurations of the obstacle coming from node-2 and the departure configurations going from the obstacle to node-3 are added as vertices to the tree-like graph. finally, the arrival configurations of node-3 are added to the tree-like graph (figure 4 e)). adding edges is similar to the previous case (figure 4 f)-g)). it might occur that there is more than one obstacle between two nodes. in this case, all existing tangent points and tangents between the previous and next nodes to/from all obstacles are added as vertices and edges to the tree-like graph. the tangents and tangent points between all pairs of obstacles in the proper edge direction are also added as vertices and edges. in paper [12] the svpp algorithm handles obstacles in the same way as nodes when these are added to the tree-like graph: the authors iterate step by step over the permutation 𝛴′, adding all tangent points to the tree-like graph as vertices; then they add arcs and tangents to the tree-like graph as edges, taking into consideration the heading constraint. the tree-like graphs constructed from the sensing field of figure 3 can be seen in figure 5, using both algorithm 3 and algorithm 4. during the construction of the tree-like graph according to algorithm 4, first vertices were created from the nodes (denoted by black) and the edges were added between them; then the tangent points of the obstacles were added as vertices (denoted by purple) and the associated edges were also added. the different objects are separated by rectangles in the figure. it can be seen in the second picture of figure 5 that there are direct tangents between node-2 and node-3, namely the edge between vertices n2_pp and n2ton3_pp, while in the first picture of figure 5 there are only paths using edges that pass obstacle-1 (this is because of assumption 1 and assumption 3).

theorem: using the new assumption 2 and assumption 4, the planned path is always better than or equal to the path using the original assumption 1 and assumption 3.

proof: the simplified tangent graph in the case of assumption 2 or 4 always contains all edges and vertices of the simplified tangent graph using assumption 1 and 3. therefore, the tree-like graph using assumption 2 and 4 always contains the tree-like graph using assumption 1 and 3 as well. so the tree-like graph using assumption 2 and 4 contains all paths of the tree-like graph using assumption 1 and 3, and possibly even more paths. therefore, the shortest path in the tree-like graph using assumption 2 and 4 is always shorter than or equal to the shortest path of the tree-like graph using assumption 1 and 3. □

figure 5. an example of tree-like graph construction from figure 3 using algorithm 3 and algorithm 4. obstacles are denoted by purple.
3.5. searching for the shortest path in the tree-like graph
after the tree-like graph has been created, it is searched for the shortest path. there are many different ways to find the shortest path; in this paper, the dijkstra algorithm [16] was applied. the search for the shortest path was carried out for both the positive and the negative initial direction, and then the shorter one was selected.

3.6. complexity of the g-svpp algorithm
to end this section, we analyse the time complexity of the g-svpp algorithm for both the assumption 1 and 3 and the assumption 2 and 4 cases. the first step is solving the tsp with the miller-tucker-zemlin formulation [17], with $\mathcal{O}(n^2 + n)$ computational effort. in step 2, we check $n$ pairs of visiting circles to see whether they are blocked by any boundary of the convex hull; in each check, all $m$ obstacles are examined, so the time complexity is $\mathcal{O}(nm)$. in step 3, we do a constant number of operations on each element of each permutation in 𝛴′, so the time complexity of the simplifying procedure is $\sum_{i=1}^{n_{\Sigma'}} \mathcal{O}(n'_i)$, where $n'_i$ denotes the length of the $i$th permutation of 𝛴′ and $n_{\Sigma'}$ denotes the number of permutations of 𝛴′. converting 𝐺′(𝑉′, 𝐸′) to 𝑇 costs $\mathcal{O}(1)$ in step 4. the shortest-path search of 𝑇 is implemented with the dijkstra algorithm [16]. the computational effort of the dijkstra algorithm is $\mathcal{O}(|E'| + |V'|^2) = \mathcal{O}(|V'|^2)$, so it depends on the number of vertices of the tree-like graph 𝑇. in the case of assumption 1 and 3, the tree-like graph 𝑇 contains at most $8n' + 4$ vertices, since there are four arrival and four departure configurations for all objects, and two starting and two ending vertices of the base station. in the case of assumption 2 and 4, the tree-like graph in the worst case contains $\sum_{i=1}^{n_{\Sigma'}} 8n'_i + 4$ vertices. therefore, the worst-case computational effort in the case of assumption 2 and 4 is $n_{\Sigma'}^2$ times larger than the maximum computational effort in the case of assumption 1 and 3. however, in most cases the 𝛴′ permutations differ from each other in only a few elements, so the computational effort is much smaller than in the worst case.

4. simulation results
in the present paper, a 200 m × 200 m virtual field was simulated with 40 nodes, of which one is the base station and the other 39 are sensor nodes. the base node is node-1. each sensor node stores 𝑔 = 0.5 mb of data, and 𝑔𝐵 = (𝑛 − 1) · 0.5 mb = 19.5 mb of collected data is uploaded by the robot to the base node for further analysis. the data transmission rate at the visiting circle is 𝑟 = 250 kb/s. in the sensing field there are 15 obstacles as well. the robot speed is 𝑣 = 4 m/s and the maximal angular velocity is |𝑢𝑀| ≤ 1 rad/s; therefore, the minimal turning radius, which is also the visiting circle's radius, is 𝑅𝑚𝑖𝑛 = 𝑣/𝑢𝑀 = 4 m. the robot must keep at least 𝑑𝑠𝑎𝑓𝑒 = 0.5 m distance from an object in order to avoid a collision. the next step is to construct the 𝐺(𝑉, 𝐸) tangent graph; for this, the tangents and the tangent points between the objects are determined, and then the edges are deleted according to assumption 1 or assumption 2. the difference between the two assumptions can be seen in figure 6. it can be seen in figure 6 that the proposed assumption 2 produces more tangents, since it allows tangents that intersect a visiting circle but do not intersect the circle with the node's centre and radius 𝑑𝑠𝑎𝑓𝑒. this increases the number of possible paths, so shorter path planning is feasible; at the same time, it requires more computation as well.
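the shortest-path search over 𝑇 is a textbook dijkstra run. a minimal sketch of the array-based $\mathcal{O}(|V'|^2)$ variant discussed above, over an adjacency-dict graph in which every vertex appears as a key:

```python
import math

def dijkstra(graph, start):
    """array-based dijkstra, O(|V|^2): graph maps vertex -> {neighbour: edge length}."""
    dist = {v: math.inf for v in graph}
    prev = {v: None for v in graph}
    dist[start] = 0.0
    unvisited = set(graph)
    while unvisited:
        u = min(unvisited, key=dist.get)   # linear scan, hence the |V|^2 bound
        unvisited.remove(u)
        for v, w in graph[u].items():
            if dist[u] + w < dist[v]:      # relax the edge u -> v
                dist[v] = dist[u] + w
                prev[v] = u
    return dist, prev
```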
the next step is to determine the 𝛴 permutation of nodes and then to construct the 𝛴′ permutation or permutations with obstacles, depending on assumption 3 or assumption 4. using 𝛴′, the 𝐺′(𝑉′, 𝐸′) simplified tangent graph is constructed. the 𝐺′(𝑉′, 𝐸′) using assumption 3 and using assumption 4 can be seen in figure 7 and figure 8, respectively. in figure 7, the tangent graph constructed by assumption 1 is simplified by applying assumption 3. similarly, figure 8 shows the 𝐺′(𝑉′, 𝐸′) obtained by applying assumption 2 and assumption 4. in figure 8 it can be seen that, using assumption 2 and assumption 4 proposed in this article, tangents that intersect visiting circles are available and more permutations can be constructed, so there are more available tangents in 𝐺′(𝑉′, 𝐸′) and therefore in the tree-like graph as well. in this way there are more possible paths, and therefore the algorithm may plan a shorter path. in this case the constructed tree-like graph is more complex than in the case of assumption 3. when assumption 1 and assumption 3 are used, there can be at most 4 arrival and 4 departure tangents for every object in the order of the permutation. usually there are four arrival and four departure tangents each, but, for example, in figure 7 there are just 3 available tangents between obstacle-13 and obstacle-3. figure 8 shows the case of assumption 2 and assumption 4: since in the second picture the fourth tangent intersects the safety circle of node-10, it is also not available in the simplified tangent graph. in figure 8 there are direct tangents between node-24 and obstacle-3, one of which intersects the visiting circle of node-26, but in figure 7 the robot must visit obstacle-13 first. in figure 8 there are three available tangents between the visiting circles of node-37 and node-3, but in figure 7 (assumptions 1 and 3) the robot can only move on tangents that pass obstacle-6.

figure 6. the difference between the 𝐺(𝑉, 𝐸) tangent graphs using assumption 1 and assumption 2.
figure 7. 𝐺′(𝑉′, 𝐸′) simplified tangent graph using assumption 1 and assumption 3.
figure 8. 𝐺′(𝑉′, 𝐸′) simplified tangent graph using assumption 2 and assumption 4.

the next step is to construct the 𝑇 tree-like graphs for both cases, assumption 1 and 3 and assumption 2 and 4. finally, a search for the shortest path is carried out in the tree-like graph for both the positive and the negative initial direction. in figure 9 and figure 10 it can be seen that the planned path using the new assumptions 2 and 4 proposed in the present article is shorter than in the original case. in this example, the planned paths with positive and negative initial direction differ only in the tangents belonging to the base station. using assumptions 2 and 4, the planned path between node-24 and obstacle-3 uses a tangent that intersects the visiting circle of node-26; therefore, a shorter path can be achieved using the new assumptions of the present article. applying assumptions 1 and 3 between node-3 and node-37, the planned path uses tangents that pass obstacle-6. on the contrary, if assumptions 2 and 4 are applied, the planned path uses direct tangents between the two nodes, and in this way the determined arc length is shorter.

figure 9. the resulting shortest path with negative initial direction of the 𝑇 tree-like graph for both cases, using assumption 1 and assumption 3 or assumption 2 and assumption 4. the lengths of the planned paths are 2341.94 m and 2290.85 m, respectively.
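the feasibility test that distinguishes the two assumptions reduces to a segment-circle intersection check: under assumption 2 a tangent is kept if it does not come closer than 𝑑𝑠𝑎𝑓𝑒 to any node centre, even if it crosses a visiting circle of radius 𝑅𝑚𝑖𝑛. a minimal sketch of such a check:

```python
import math

def segment_intersects_circle(p, q, centre, radius):
    """true if segment p-q comes closer than `radius` to `centre`;
    used to test whether a tangent is blocked by a safety circle."""
    (px, py), (qx, qy), (cx, cy) = p, q, centre
    dx, dy = qx - px, qy - py
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:
        t = 0.0
    else:
        # projection of the centre onto the segment, clamped to [0, 1]
        t = max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy) / seg_len2))
    nx, ny = px + t * dx, py + t * dy   # nearest point on the segment
    return math.hypot(cx - nx, cy - ny) < radius
```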
each sensor node data download requires an arc length of $l = g \cdot v / r = 8$ m. the visiting circle's circumference is equal to $K = 25.12$ m. therefore, the robot must travel at least approximately one third of the visiting circle's circumference to have enough time for the data transmission. it can be seen that the algorithm preferably chooses tangents between node-31, node-32 and node-33 in such a way that the robot is not required to make extra round trips around these nodes.

as a test, the path planning was run for ten different virtual sensing fields. the lengths of the planned paths can be seen in table 1. the solutions using assumptions 1 and 3 are compared with the solutions using assumptions 2 and 4. as was proven in the theorem, using the new assumptions proposed in the present article, the planned path is always better than or equal to the path using the original assumptions presented in [12]. the number of vertices of the 𝑇 tree-like graph is also reported in table 1. the maximum difference between the number of vertices under assumptions 2 and 4 and under assumptions 1 and 3 is 54, in test field 5. in this case, the computational effort under assumptions 2 and 4 is increased by 27 % compared to assumptions 1 and 3. at the same time, the planned path under assumptions 2 and 4 is 92 m shorter than under assumptions 1 and 3. that means $t = 92\ \mathrm{m} / (4\ \mathrm{m/s}) = 23$ s saved in each period; therefore, the robot can make 6 extra round trips per day.

5. conclusions
in the present paper a path planning algorithm was developed for unicycle robots moving between sensor nodes. the task is to collect all the data from the sensor nodes and then upload it to the base node. to increase the effectiveness of data collection, the length and the duration of the trip should be minimised, while also maintaining a collision-free path around the nodes and obstacles. new algorithms were developed for handling the obstacles, and new assumptions were applied to reach a collision-free state. a new algorithm for tree-like graph generation was also developed, in which the search for the shortest viable path finally takes place. the paper finished with a detailed presentation of simulation results. the preparation of the field and the steps of the path planning algorithm were illustrated with figures. the present article also compared the results of the new algorithm with the results of previous research. in conclusion, the new algorithm presented in this article proved to achieve shorter paths than the earlier algorithms.

figure 10. the resulting shortest path with positive initial direction of the 𝑇 tree-like graph for both cases, using assumption 1 and assumption 3 or assumption 2 and assumption 4. the lengths of the planned paths are 2337.64 m and 2286.55 m, respectively.

table 1. the length of the planned path for different virtual sensing fields using both positive and negative starting directions. |𝑉′| denotes the number of vertices of the tree-like graph, on which the computational effort depends.

test field   assumption 1 and 3                     assumption 2 and 4
             positive     negative     |V'|         positive     negative     |V'|
1            2326.23 m    2233.61 m    372          2254.71 m    2222.65 m    386
2            2393.66 m    2387.67 m    418          2332.58 m    2326.59 m    462
3            2305.82 m    2313.80 m    404          2270.74 m    2278.72 m    444
4            2372.37 m    2359.42 m    436          2363.87 m    2350.92 m    466
5            2413.25 m    2403.51 m    412          2321.40 m    2311.65 m    466
6            2294.77 m    2288.91 m    402          2193.65 m    2189.18 m    452
7            2335.53 m    2317.82 m    396          2312.25 m    2294.55 m    426
8            2330.93 m    2324.78 m    378          2330.51 m    2324.36 m    386
9            2247.02 m    2262.38 m    372          2238.79 m    2254.14 m    386
10           2301.88 m    2311.12 m    360          2301.88 m    2311.12 m    368
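as a quick cross-check of the arithmetic quoted in section 4 (the 8 m download arc and the 23 s saving per period), the figures can be reproduced directly from the stated parameters:

```python
import math

g = 0.5e6        # data per sensor node in bytes (0.5 mb)
r = 250e3        # transmission rate in bytes per second (250 kb/s)
v = 4.0          # robot speed in m/s
u_max = 1.0      # maximal angular velocity in rad/s

R_min = v / u_max                 # visiting circle radius: 4 m
K = 2 * math.pi * R_min           # circumference: ~25.13 m
l = g * v / r                     # arc needed for one download: 8 m
print(l / K)                      # ~0.32, about one third of the circle

saved = 92.0 / v                  # 92 m shorter path -> 23 s per period
print(saved)
```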
acknowledgement
the research reported in this paper and carried out at the budapest university of technology and economics was supported by the "tkp2020, institutional excellence program" of the national research development and innovation office in the field of artificial intelligence (bme ie-mi-sc tkp2020). the research was supported by the efop-3.6.2-16-2016-00014 project financed by the ministry of human capacities of hungary. the research reported in this paper is part of project no. bme-nva-02, implemented with the support provided by the ministry of innovation and technology of hungary from the national research, development and innovation fund, financed under the tkp2021 funding scheme.

references
[1] a. mainwaring, d. culler, j. polastre, r. szewczyk, j. anderson, wireless sensor networks for habitat monitoring, 1st acm int. workshop on wireless sensor networks and applications, atlanta, georgia, usa, 2002, pp. 88-97. doi: 10.1145/570738.570751
[2] t. he, s. krishnamurthy, j. a. stankovic, t. abdelzaher, l. luo, r. stoleru, t. yan, l. gu, j. hui, b. krogh, energy-efficient surveillance system using wireless sensor networks, 2nd int. conference on mobile systems, applications, and services, acm, boston, massachusetts, usa, 2004, pp. 270-283. doi: 10.1145/990064.990096
[3] j. n. al-karaki, a. e. kamal, routing techniques in wireless sensor networks: a survey, ieee wireless communications, vol. 11, no. 6, 2004, pp. 6-28. doi: 10.1016/j.proeng.2012.06.320
[4] y. gu, f. ren, y. ji, j. li, the evolution of sink mobility management in wireless sensor networks: a survey, ieee commun. surv. tut. 18 (1), 2015, pp. 507-524. doi: 10.1109/comst.2015.2388779
[5] x. ren, w. liang, w. xu, data collection maximization in renewable sensor networks via time-slot scheduling, ieee trans. comput. 64 (7), 2015, pp. 1870-1883. doi: 10.1109/tc.2014.2349521
[6] i. chatzigiannakis, a. kinalis, s. nikoletseas, sink mobility protocols for data collection in wireless sensor networks, 4th acm int. workshop on mobility management and wireless access, acm, terromolinos, spain, 2006, pp. 52-59. doi: 10.1145/1164783.1164793
[7] y. yun, y. xia, maximizing the lifetime of wireless sensor networks with mobile sink in delay-tolerant applications, ieee trans. mob. comput., vol. 9, 2010, pp. 1308-1318. doi: 10.1109/tmc.2010.76
[8] hailong huang, andrey v. savkin, ming ding, chao huang, mobile robots in wireless sensor networks: a survey on tasks, computer networks 148, 2019, pp. 1-19. doi: 10.1016/j.comnet.2018.10.018
[9] hailong huang, andrey v. savkin, reactive 3d deployment of a flying robotic network for surveillance of mobile targets, computer networks 161, 2019, pp. 172-182. doi: 10.1016/j.comnet.2019.06.020
[10] h. huang, a. v. savkin, an energy efficient approach for data collection in wireless sensor networks using public transportation vehicles, aeu - international journal of electronics and communications 75, 2017, pp. 108-118. doi: 10.1016/j.aeue.2017.03.012
[11] y. gu, f. ren, y. ji, j. li, the evolution of sink mobility management in wireless sensor networks: a survey, ieee commun. surv. tut. vol. 17, 2015, pp. 507-524. doi: 10.1109/comst.2015.2388779
[12] hailong huang, andrey v. savkin, viable path planning for data collection robots in a sensing field with obstacles, computer communications 111, 2017, pp. 84-96. doi: 10.1016/j.comcom.2017.07.010
[13] l. e. dubins, on curves of minimal length with a constraint on average curvature, and with prescribed initial and terminal positions and tangents, american journal of mathematics, vol. 79, no. 3, 1957, pp. 497-516. doi: 10.2307/2372560
[14] a. m. frieze, g. galbiati, f. maffioli, on the worst-case performance of some algorithms for the asymmetric traveling salesman problem, networks, vol. 12, no. 1, 1982, pp. 23-39. doi: 10.1002/net.3230120103
[15] a. v. savkin, m. hoy, reactive and the shortest path navigation of a wheeled mobile robot in cluttered environments, robotica, vol. 31, issue 2, 2013, pp. 323-330. doi: 10.1017/s0263574712000331
[16] e. w. dijkstra, a note on two problems in connexion with graphs, numerische mathematik, vol. 1, number 1, 1959, pp. 269-271. doi: 10.1007/bf01386390
[17] c. e. miller, a. w. tucker, r. a. zemlin, integer programming formulation of traveling salesman problems, j. acm 7, 4, october 1960, pp. 326-329. doi: 10.1145/321043.321046

analysis of multiband rectangular patch antenna with defected ground structure using reflection coefficient measurement

acta imeko, issn: 2221-870x, march 2022, volume 11, number 1, 1-6

thalluru suneetha1, s. naga kishore bhavanam1
1 department of ece, acharya nagarjuna university, guntur, andhra pradesh-522510, india

section: research paper
keywords: measurement; defected ground; sensing; wlan; wimax; planar monopole antenna
citation: thalluru suneetha, s. naga kishore bhavanam, analysis of multiband rectangular patch antenna with defected ground structure using reflection coefficient measurement, acta imeko, vol. 11, no. 1, article 28, march 2022, identifier: imeko-acta-11 (2022)-01-28
section editor: md zia ur rahman, koneru lakshmaiah education foundation, guntur, india
received november 20, 2021; in final form march 20, 2022; published march 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: thalluru suneetha, e-mail: tsuneetha701@gmail.com

abstract
in this paper, a novel quad band antenna operating at four different frequency bands is designed and simulated using computer simulation technology (cst) microwave studio software. to achieve better performance of the antenna, various parameters were optimized using parametric analysis; during this analysis, various antenna parameters needed to be measured. the proposed antenna uses asymmetrical 'u' and 't' shaped radiating elements printed on an fr-4 substrate with dimensions of 1.6 × 34 × 20 mm3. the measurement of various antenna parameters, such as reflection coefficient, return loss and radiation intensity, is a key task to be performed in the antenna measurement laboratory before going to the real-time application. a staircase defected ground structure with a rectangular centre slot is used to attain a better bandwidth. the antenna resonates at four different frequencies, 3 ghz, 4.8 ghz, 9 ghz and 13.2 ghz, with operating bandwidths of 980 mhz, 2.05 ghz, 3.84 ghz and 3.82 ghz, respectively. the s11 value at these resonant frequencies is measured as -23.6 db, -29.4 db, -34.2 db and -49.05 db, respectively. the voltage standing wave ratio of the proposed antenna at the four resonant frequencies is equal to one. the gain of the antenna is consistent throughout the four pass bands. the antenna is suitable for bluetooth (2.4 ghz), wlan (5.125-5.35 ghz and 5.725-5.825 ghz), wimax (5.25-5.85 ghz), c-band (3.7-4.2 ghz) and ku-band (12-18 ghz).

1. introduction
the contemporary telecommunications sector has evolved in response to the increasing demands of customers for smart gadgets. in contrast to their predecessors, these devices are capable of handling many applications at the same time. global positioning system (gps), wireless fidelity (wi-fi), global system for mobile communication (gsm), bluetooth and worldwide interoperability for microwave access (wimax) all have their own operating frequency bands. adding a separate antenna for each application increases the device's size and makes it more uncomfortable to use; due to this, the demand for antennas operating at multiple frequencies is increasing. high data rates in wireless communication systems are quite common these days. as a result, contemporary gadgets must have a single antenna that can operate in many frequency bands. a lot of research has been going on in the field of multiband antennas, and several methods have been devised in recent years. the authors of [1] reported a coplanar waveguide (cpw)-fed rectangle antenna with modified ground and open complementary split ring resonator (ocsrr) loading that could operate in three bands. in [2] a u-shaped antenna with a partial ground plane resonating at three different frequencies is presented. in [3] the authors designed 'nine' and 'epsilon' shaped antennas along with switches to realize triple-band operation for use in wi-fi, wimax and wlan applications. in [4] a five-band antenna with a kite-shaped radiating patch and 'c' and 'g' shaped slots in the ground plane was realized. the authors in [5] used a metamaterial technique to design a dual-band antenna. a cpw-fed metamaterial-based multiband antenna was reported in [6]. in [7] a small cpw-fed monopole antenna for dual-band operation was presented. the authors used the state-of-the-art substrate integrated waveguide technique and slots to get a dual frequency
response and wide bandwidth in [8]. as reported in [9], an increase in the gain parameter is achieved by using antenna arrays. several techniques, such as frequency-selective surfaces [10]-[12] and electromagnetic band-gap structures [13]-[15], have been studied by several researchers to create multiband antennas. to improve the polarization performance of the link budget, circularly polarized antennas are frequently employed in wlan and satellite applications, as reported in [16]-[18]. in every wireless communication system, the antenna is extremely crucial. a well-designed antenna reduces receiver complexity and improves receiver performance. the antenna's size, shape and design are controlled by the antenna's application [19], [20] and operating frequency. a simple rectangular patch was used in this project, and a portion of it was removed to create a symmetrical u-shaped structure. a portion of the left and right arms was cut from that structure, which leads to an asymmetrical u shape; to that shape one vertical strip, combined with a horizontal strip, was added at the centre to form a t-shaped structure. to achieve a better bandwidth, a staircase defected ground structure along with a centre rectangular slot was used. the proposed antenna resonates at multiple frequencies owing to its structural modifications. this antenna is applicable to a wide range of modern portable wireless applications: bluetooth (2.4 ghz), wlan (5.125-5.35 ghz and 5.725-5.825 ghz), wimax (5.25-5.85 ghz), c-band (3.7-4.2 ghz) and ku-band (12-18 ghz). to achieve multiband operation, other designs used slots and modified the shapes of the radiating elements, which is also complex. in this design, a staircase defected ground structure was employed to get a wider bandwidth at the resonating frequencies. a defected ground structure (dgs), unlike conventional antennas, produces discontinuities on the signal plane, which disrupt the shielded current distribution on the signal plane. as a result, the apparent permittivity of the substrate fluctuates as a function of frequency and plays a vital role in the antenna's performance. various parameters such as radiation pattern, s11, vswr and gain are measured and analysed for the final design of the antenna. furthermore, achieving multiband operation with low structural complexity is the most outstanding feature of this design. the general procedure for designing any antenna is discussed in section 2. the evaluation steps of the proposed antenna design are discussed in section 3. in section 4, the parametric analysis of the proposed antenna through the measurement of performance measures is discussed. section 5 deals with results and discussion. in section 6, a literature comparison of the proposed antenna with earlier reported structures is made.

2. antenna design procedure
any antenna design technique begins with the study of various antennas for a certain application. then, potential design approaches must be studied.
later, patch dimensions such as width and length are determined to meet the design criteria. geometrical parameters and material qualities must then be accurately specified in the following stage. the simulation process is done by selecting one simulator among the available simulators. using the simulator's parametric analysis feature, simulation work is carried out until the best possible result is obtained. when the required behaviour is achieved, the simulation is terminated. the antenna fabrication operation is then initiated, and the desired antenna prototype is created. to validate the design technique, the fabricated antenna's behaviour is evaluated and compared to that of the simulated one. if the required behaviour is not obtained, the geometric parameters of the antenna as well as the material qualities must be modified, and the simulation is carried out again with the modified parameters. this is continued until the desired behaviour of the proposed antenna is obtained. figure 1 displays the general antenna design procedure.

figure 1. antenna general design procedure.

3. evaluation steps of the proposed antenna design
the planned antenna has been designed in three phases, as illustrated in figure 2. figure 2 i) shows a typical rectangular patch antenna with a simple microstrip line in step 1. in step 2, a portion of the patch is removed to get a symmetrical u-shaped antenna, as depicted in figure 2 ii), which leads to a multiband response. in step 3, as shown in figure 2 iii), portions of the left and right arms are cut, leading to an asymmetrical u-shaped structure, and centre horizontal and vertical arms are added, leading to a t-shaped structure. this is the proposed design. the proposed antenna refines the impedance matching and achieves better performance in terms of wider bandwidth and vswr. the gain is also considerable throughout the pass bands.

figure 2. steps in the design of the proposed multiband antenna: i) antenna 1, ii) antenna 2, iii) antenna 3.

figure 3 depicts the design of the proposed multiband planar antenna with asymmetrical u and t shaped patch structure. from a simple rectangular patch, a portion is removed to realize a symmetrical u-shaped patch structure. portions of the left and right arms were cut, and centre horizontal and vertical arms were added, leading to a t-shaped patch structure at the centre. this structure uses defected ground technology: a defected ground plane with a stepped staircase along with a centre rectangular slot has been added beneath the substrate to extend the bandwidth. the substrate dimensions are 20 × 34 × 1.6 mm3. the material chosen is fr-4, which has good mechanical properties, with a thickness of 1.6 mm, a dielectric permittivity of 4.4 and a loss tangent of 0.02. the antenna was fed by a 50 ω microstrip line. the simulation of the proposed antenna has been carried out using cst microwave studio, and the antenna was optimized using cst microwave studio's parametric analysis tool. the antenna is capable of resonating at four distinct frequencies. antenna 1 only resonates at two frequencies, the first at 3 ghz and the second at 4.8 ghz. antenna 2 resonates at four separate frequencies, although the bandwidth in the pass bands and the impedance matching were not ideal. antenna 3 resonates at four separate frequencies, with a wide bandwidth and good impedance matching, as well as significant gain at all four resonant frequencies. antenna 3 is the proposed construction, which resonates at four distinct frequencies.
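the initial rectangular-patch dimensions mentioned in the design procedure are usually obtained from the standard transmission-line-model equations. the following is a minimal sketch, assuming the fr-4 parameters quoted above (relative permittivity 4.4, thickness 1.6 mm) and an illustrative design frequency; the numbers are not taken from the paper itself.

```python
import math

def patch_dimensions(f_r, eps_r, h):
    """standard transmission-line-model estimates for a rectangular patch:
    width W, effective permittivity, length extension dL and length L."""
    c = 3e8
    W = c / (2 * f_r) * math.sqrt(2 / (eps_r + 1))
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5
    dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / \
         ((eps_eff - 0.258) * (W / h + 0.8))
    L = c / (2 * f_r * math.sqrt(eps_eff)) - 2 * dL
    return W, eps_eff, dL, L

# illustrative example with the substrate used in the paper
print(patch_dimensions(f_r=3e9, eps_r=4.4, h=1.6e-3))
```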
the simulated s11 of the different antennas depicted in figure 2 is shown in figure 4.

4. parametric study of the proposed antenna
to obtain the optimal design, parametric analysis is the best option available when using simulators. a study of the effect of changing various parameters, such as the feed length l1, the right arm length l2 and the length of the t strip l3, has been carried out to optimize the proposed design.

effect of varying l1: figure 5 clearly displays the effect of changing the value of the feed length l1. the feed length l1 was tested for three different lengths. the findings show that as the length of the feed increases, the antenna only resonates at two frequencies, with l1 = 10 mm yielding the best results with four resonant frequencies.

effect of varying l2: to investigate the impact of changing the right arm length l2, all other previously optimized parameters were held constant at their optimum values, while l2 was changed between 9 mm and 12 mm. the second, third and fourth resonances do not vary significantly when the value is increased, but the impedance matching at the first resonant frequency decreases; l2 = 9 mm provides the best performance in terms of improved bandwidth and impedance matching. figure 6 displays the impact of changing the value of the right arm length l2.

figure 3. designed antenna architecture: a) front view, b) back view; ws = 20 mm, ls = 34 mm, l1 = 10 mm, l2 = 9 mm, l3 = 14 mm, l4 = 2 mm, w1 = 2.2 mm, w3 = 4.4 mm, w4 = 11 mm, wg = 20 mm, w = 2 mm, l = 2 mm, ln = 8 mm, wn = 4 mm.
figure 4. s11 of the different antennas depicted in figure 2.
figure 5. optimized s11 variation for various values of l1.
figure 6. optimized s11 variation for various values of l2.

effect of varying l3: to explore the influence of the length of the t strip l3, it was varied from 13 mm to 15 mm while all other parameters were kept at their previously optimized levels. for different values of l3, there was not much variation in the first three resonances. for l3 = 14 mm, a wider bandwidth and improved impedance matching were achieved, as portrayed in figure 7. as the surface current distribution helps in understanding the behaviour of the antenna, figure 8 portrays the surface current distribution of the proposed antenna at the four resonant frequencies. different portions of the antenna are responsible for the radiation at the four resonant frequencies. radiation at 3 ghz is directed by the right arm of the u structure as well as the right portion of the centre t strip. at 4.8 ghz, the lower part of the right arm of the u structure is responsible for the radiation. the lower part of the u shape is responsible for the 9 ghz radiation. at 13.2 ghz, both the centre arm and the upper section of the u shape are responsible for the radiation.

5. results and discussion
the designed antenna's simulated s11 and gain variations with respect to frequency can be seen in figure 9. the antenna is simulated with cst microwave studio, which uses the finite integration method. the four resonant frequencies occur at 3 ghz, 4.8 ghz, 9 ghz and 13.2 ghz. the return loss at these frequencies is -23.6 db, -29.4 db, -34.2 db and -49.05 db, respectively. these values clearly indicate good impedance matching at all four resonant frequencies. the pass bands around these resonances are 2.484 ghz to 3.389 ghz, 3.923 ghz to 5.974 ghz, 7.232 ghz to 10.921 ghz and 12.551 ghz to 16.363 ghz, with bandwidths of 0.905 ghz, 2.051 ghz, 3.689 ghz and 3.812 ghz, respectively.
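the vswr values discussed next follow directly from the reflection coefficient. a small sketch of the standard conversion, evaluated at the measured s11 values quoted above:

```python
def s11_db_to_vswr(s11_db):
    """convert a reflection coefficient in db to vswr via |gamma| = 10**(s11/20)."""
    gamma = 10 ** (s11_db / 20)
    return (1 + gamma) / (1 - gamma)

for s11 in (-23.6, -29.4, -34.2, -49.05):
    print(s11, round(s11_db_to_vswr(s11), 3))   # all close to 1, i.e. well matched
```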
wide bandwidths at the four resonant frequencies are also identified, and considerable gain is observed at the four resonant frequencies. achieving multiband operation with low structural complexity is the most outstanding feature of this design. the voltage standing wave ratio (vswr) measures the mismatch between an antenna and the feed line connected to it. vswr values range from 1 to infinity; a vswr of less than 2 is deemed adequate for the majority of antenna applications. in the proposed design, a vswr of 1 indicates a good match. the proposed quad band antenna's voltage standing wave ratio (vswr) is portrayed in figure 10. from the figure it is evident that at all four resonant frequencies a good vswr, almost equal to one, is identified. the proposed quad band antenna's far-field patterns at 3 ghz, 4.8 ghz, 9 ghz and 13.2 ghz are clearly displayed in figure 11.

figure 7. optimized s11 variation for various values of l3.
figure 8. surface current distribution at the resonant frequencies: a) 3 ghz, b) 4.8 ghz, c) 9.0 ghz and d) 13.2 ghz.
figure 9. the designed antenna's simulated s11 and gain.
figure 10. the designed antenna's simulated vswr.

6. literature comparison
the comparison between the proposed antenna and previously published designs for multiband operation is summarized in table 1. the proposed design is compared in terms of size, frequencies and radiating elements. compared to the earlier reported designs in [1], [2], [3], [4], [5] and [6], the proposed antenna is better in terms of size; the structural complexity of the proposed antenna is lower compared to most of the structures. the gain is consistent in the operating bands, and the operating bandwidth is also larger compared to most of the structures. the vswr of the antenna at the resonating frequencies is almost equal to one, which is promising. in this design, without the necessity of complex structures, multiband operation is achieved by simply removing some portions from the patch and additionally adding the centre t strip.

figure 11. patterns of radiation at: a) 3 ghz, b) 4.8 ghz, c) 9 ghz and d) 13.2 ghz.

table 1. comparison of the proposed antenna with earlier reported structures.

s.no   year   dimensions in mm2   frequency in ghz                 radiating element
1      2015   40 × 30             2.4, 3.5, 5.8                    pentagonal radiating patch with two slots
2      2016   43 × 20             2.8, 5.8, 10.8                   u shape monopole antenna
3      2017   35 × 53             2.4, 3.5, 5.5                    epsilon and nine shaped antennas
4      2018   23 × 23             3.6, 5.8, 6.3, 8.3, 9.5          kite-shaped, c- and modified g-shaped slots
5      2019   35 × 25             5.7, 10.3                        metamaterial cell
6      2020   40 × 40             2.9, 2.10, 3.5, 4.5, 5.7, 6.5    penta-ring srr
7      proposed 34 × 20           3, 4.8, 9, 13.2                  asymmetrical u and t shaped patch structures

7. conclusions
the design of a novel quad band antenna with an asymmetrical u and t shaped patch structure for portable wireless applications was presented in this study. the difference between this design and others can be seen in its simple design steps, leading to a less complex structure. the proposed antenna resonates at 3 ghz, 4.8 ghz, 9 ghz and 13.2 ghz with bandwidths of 980 mhz, 2.05 ghz, 3.84 ghz and 3.82 ghz, respectively. over the working frequency ranges, the antenna has reasonable gain, and the bandwidth of the antenna over the operating frequencies is high. the gain, vswr and reflection coefficient are taken into consideration for the design and analysis operations in the frequency range of 1 ghz to 20 ghz. in comparison to most other designs, this structure is simple and compact, producing quad bands for use in portable electronic devices. the proposed patch antenna is made utilizing printed circuit board technology, which is easy and affordable. this antenna is applicable to a wide range of modern portable wireless applications: bluetooth (2.4 ghz), wlan (5.125-5.35 ghz and 5.725-5.825 ghz), wimax (5.25-5.85 ghz), c-band (3.7-4.2 ghz) and ku-band (12-18 ghz).

references
[1] r. pandeeswari, s. raghavan, a cpw-fed triple band ocsrr embedded monopole antenna with modified ground for wlan and wimax applications, microwave and optical technology letters, vol. 57, 2015, pp. 2413-2418.
doi: 10.1002/mop.29352
[2] mahesh kendre, a. b. nandgaonkar, pratima nirmal, sanjay l. nalbalwar, u shaped multiband monopole antenna for spacecraft, wlan and satellite communication application, ieee international conference on recent trends in electronics information communication technology, bangalore, india, 20-21 may 2016, pp. 1528-1532. doi: 10.1109/rteict.2016.7808088
[3] v. jyothika, m. s. p. c. shekar, s. v. krishna, m. z. u. rahman, design of 16 element rectangular patch antenna array for 5g applications, journal of critical reviews, 7(9), pp. 53-58.
[4] t. ali, k. d. prasad, r. c. biradar, a miniaturized slotted multiband antenna for wireless applications, j. comput. electron. 17, 2018, pp. 1056-1070. doi: 10.1007/s10825-018-1183-z
[5] k. a. rao, k. s. raj, r. k. jain, m. z. u. rahman, implementation of adaptive beam steering for phased array antennas using enlms algorithm, journal of critical reviews, 7(9), pp. 59-63. doi: 10.31838/jcr.07.09.10
[6] n. thamil selvi, p. thiruvalar selvan, s. p. k. babu, r. pandeeswari, multiband metamaterial-inspired antenna using split ring resonator, computers & electrical engineering, vol. 84, 2020, 106613, issn 0045-7906. doi: 10.1016/j.compeleceng.2020.106613
[7] s. kesana, s. gatikanti, m. z. u. rahman, b. radhika, d. mounika, triple frequency g-shape mimo antenna for wireless applications, international journal of engineering and advanced technology, 8 (5), 2019, pp. 942-947.
[8] s. v. devika, k. karki, s. k. kotamraju, k. kavya, m. z. u. rahman, a new computation method for pointing accuracy of cassegrain antenna in satellite communication, journal of theoretical & applied information technology, 95(13), 2017.
[9] m. l. m. lakshmi, k. rajkamal, s. v. a. v. prasad, m. z. ur rahman, amplitude only linear array synthesis with desired nulls using evolutionary computing technique, applied computational electromagnetics society journal, 31(11), 2016.
[10] m. z. u. rahman, v. a. kumar, g. v. s. karthik, a low complex adaptive algorithm for antenna beam steering, international conference on signal processing, communication, computing and networking technologies, thuckalay, india, 21-22 july 2011, pp. 317-321. doi: 10.1109/icsccn.2011.6024567
[11] m. a. meriche, h. attia, a. messai, t. a. denidni, gain improvement of a wideband monopole antenna with novel artificial magnetic conductor, 17th international symposium on antenna technology and applied electromagnetics (antem), montreal, qc, canada, 10-13 july 2016, pp. 1-2. doi: 10.1109/antem.2016.7550150
[12] n. wang, q. liu, c. wu, l. talbi, q. zeng, j. xu, wideband fabry-perot resonator antenna with two complementary fss layers, ieee transactions on antennas and propagation, vol. 62, no. 5, 2014, pp. 2463-2471. doi: 10.1109/tap.2014.2308533
[13] y. ge, k. p. esselle, t. s. bird, a method to design dual-band, high-directivity ebg resonator antennas using single-resonant, single-layer partially reflective surface, progress in electromagnetics research c, vol. 13, 2010, pp. 245-257. doi: 10.2528/pierc10020901
[14] j. tak, y. hong, j. choi, textile antenna with ebg structure for body surface wave enhancement, electronics letters, vol. 51, no. 15, 2015, pp. 1131-1132. doi: 10.1049/el.2015.1022
[15] r. m. hashmi, k. p. esselle, enhancing the performance of ebg resonator antennas by individually truncating the superstructure layers, iet microwaves antennas & propagation, vol. 10, no. 10, 2016, pp. 1048-1055. doi: 10.1049/iet-map.2015.0674
[16] j. lacik, circularly polarized siw square ring-slot antenna for x-band applications, microwave & optical technology letters, vol. 54, no. 11, 2012, pp. 2590-2594. doi: 10.1002/mop.27113
[17] k. saraswat, t. kumar, a. r. harish, a corrugated g-shaped grounded ring slot antenna for wideband circular polarization, international journal of microwave & wireless technologies, 2020, pp. 1-6. doi: 10.1017/s1759078719001624
[18] m. j. hua, p. wang, y. zheng, s. l. yuan, compact tri-band cpw-fed antenna for wlan/wimax applications, electronics letters, vol. 49, no. 18, 2013, pp. 1118-1119. doi: 10.1049/el.2013.1669
[19] armando coccia, federica amitrano, leandro donisi, giuseppe cesarelli, gaetano pagano, mario cesarelli, giovanni d'addio, design and validation of an e-textile-based wearable system for remote health monitoring, acta imeko, vol. 10, 2021, no. 2, pp. 220-229. doi: 10.21014/acta_imeko.v10i2.912
[20] imran ahmed, eulalia balestrieri, francesco lamonaca, iomt-based biomedical measurement systems for healthcare monitoring: a review, acta imeko, vol. 10, 2021, no. 2, pp. 1-11. doi: 10.21014/acta_imeko.v10i2.1080

linear regression analysis and the gum: example of temperature influence on force transfer transducers

acta imeko, issn: 2221-870x, december 2020, volume 9, number 5, 407-413

d. röske1
1 physikalisch-technische bundesanstalt (ptb), braunschweig, germany, dirk.roeske@ptb.de

abstract: the deflection of strain-gauge force and torque transducers (the zero-reduced output signal for a given mechanical load) is dependent on the ambient temperature.
this is also true of high-precision force transfer transducers used to compare force standard machines. to estimate the extent to which temperature deviations during measurements on different machines affect comparison results and, if necessary, to correct for such deviations, it is important to know the influence of the temperature on the deflection. this effect is usually investigated in special temperature chambers in which the transducer is exposed to various temperatures within a given temperature range while the machine is operated under unchanged laboratory conditions. the regression analysis of the results allows the temperature coefficient to be determined, including an uncertainty analysis. this was done for five force transfer transducers used for a bilateral comparison between npl (uk) and ptb (germany).

keywords: comparison; temperature influence; regression; uncertainty

1. introduction
the manufacturers of strain-gauge force and torque transducers devote significant effort to minimising the influence of the ambient temperature on the measurement results of their devices. the best transducers from this group are very well compensated for temperature influences. however, no measurements of the transducer's temperature behaviour are taken, and only the upper limit of the absolute value of the sensitivity's temperature coefficient (the relative change of the deflection with temperature) is given in the transducer's specifications. for example, a range between -0.02 %/(10 k) and +0.02 %/(10 k) is given for the hbm z30a transducer [1], while a range between -0.01 %/(10 k) and +0.01 %/(10 k) is given for the gtm ktn transducer [2]. this means that the absolute value of the temperature coefficient should be below 1 … 2 · 10⁻⁵ k⁻¹. for a 1 k temperature change, this is more than the standard uncertainty of the best force standard machines.

2. measurement results
for a comparison measurement between ptb's new 200 kn force standard machine [3] and the relevant standard machines of the npl, five high-precision compression force transfer transducers (one z30a, four ktns) were selected. these transducers were investigated with respect to their temperature behaviour in a temperature chamber inside the 200 kn force standard machine. the temperature range was defined as 20 °c … 25 °c, with additional measurement points between these threshold values at 21 °c, 22 °c and 23 °c. the resulting deflections 𝑑𝑖 in mv/v for the 50 kn ktn transducer are given in table 1 for the five temperatures 𝑇𝑖 and for two load steps (20 kn and 50 kn).

table 1: deflections 𝑑𝑖 in mv/v for the 50 kn transducer at different temperatures 𝑇𝑖 and for two load steps

𝑇𝑖 in °c   𝑑𝑖 at 20 kn in mv/v   𝑑𝑖 at 50 kn in mv/v
20.03      0.800 996             2.003 036
21.00      0.800 992             2.003 028
21.99      0.800 988             2.003 011
23.06      0.800 984             2.003 002
24.90      0.800 978             2.002 981

3. preliminary evaluation
in the evaluation, a linear dependency between the temperature 𝑇 and the deflection 𝑑 is assumed to exist within a sufficiently small temperature range (and possibly beyond this range). in the first approach, the data can be evaluated using the linear fit functions of programming languages such as python and r or standard software programs such as excel, all of which are based on the least-squares method and require a very small number of code lines, see figure 1. excel even contains a function wizard that can find the right function and define the necessary arguments. the results agree well at a relative level of 10⁻¹² and below.
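as an illustration of such a few-line fit, the following sketch reproduces the slope/intercept estimate and the rss/ssd model-quality ratio discussed below for the 20 kn column of table 1 (numpy is assumed to be available):

```python
import numpy as np

T = np.array([20.03, 21.00, 21.99, 23.06, 24.90])                 # temperature in degrees c
d = np.array([0.800996, 0.800992, 0.800988, 0.800984, 0.800978])  # deflection in mv/v

q, r = np.polyfit(T, d, 1)          # slope and intercept of the linear fit
fit = q * T + r

rss = np.sum((d - fit) ** 2)        # residual sum of squares
ssd = np.sum((d - d.mean()) ** 2)   # sum of squared deviations from the mean
print(q, r, (ssd - rss) / ssd)      # quality measure close to 1 for a good model
```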
the limitation is caused by the floating-point accuracy of the processor, operating system and software used; however, this is not a problem for the uncertainty in question in this investigation.

figure 1: parameters (for the 20 kn force in table 1) of a linear fit in python (top), r (middle) and excel (bottom, german user interface)

most linear-fit packages and functions contain additional information about the result, such as the residual sum of squares (rss) of the least-squares fit in python and excel and the average variation of the points around the fitted regression line, the residual standard error (rse), in r. these values can be used to check how well the model fits the data; this step is necessary to ensure that the right model is used for the data. the residual sum of squares of the least-squares fit in excel yields a value of 6.07 · 10⁻¹³ (mv/v)² for the data in table 1 (for 20 kn). to improve comparability, this value is linked to another value, namely, the sum of squared deviations (ssd) from the mean. the result is 1.95 · 10⁻¹⁰ (mv/v)², and the relative deviation (ssd − rss) / ssd = 0.997 is a measure of the model quality: the closer this value is to 1, the better the model is. nevertheless, this calculation does not consider the uncertainty of the input data. it is expected that the resulting parameters will also have an associated uncertainty and that this value will depend on the uncertainty of the input data. to obtain the uncertainty of the results of the fitting procedure, the following regression analysis was carried out using a linear regression model and a least-squares approach in combination with the gum, the guide to the expression of uncertainty in measurement [4].

4. regression analysis
we consider five pairs of input (𝑇𝑖) and output (𝑑𝑖) values with their associated uncertainties (𝑢)

$\left( (T_i;\, u(T_i)),\ (d_i;\, u(d_i)) \right),\quad i = 1, 2, \ldots, 5 .$ (1)

here, the aim is to find a linear approximation function $\tilde d(T)$ that best describes the data,

$\tilde d(T) = q \cdot T + r$ (2)

with the coefficients 𝑞 (slope) and 𝑟 (intercept). the values of these coefficients should be determined in such a way that a minimum condition is met. in the context of the model considered here, we require the sum of the squared deviations between the measured outputs and the corresponding outputs calculated in accordance with (2) to be a minimum ("least squares"):

$\sum_{i=1}^{N} \left( d_i - \tilde d(T_i) \right)^2 \rightarrow \min .$ (3)

with (2), equation (3) can also be written as

$\sum_{i=1}^{N} \left( d_i - q \cdot T_i - r \right)^2 \rightarrow \min .$ (4)

the condition necessary for the minimum is that the partial derivatives of the given term with respect to the unknown values 𝑞 and 𝑟 be zero. this yields

$2 \sum_{i=1}^{N} (d_i - q \cdot T_i - r)(-T_i) = 0\,, \qquad -2 \sum_{i=1}^{N} (d_i - q \cdot T_i - r) = 0 ,$ (5)

a system of two equations via which the values of the coefficients can be determined. a more compact representation of (5) is

$q \cdot \overline{T^2} + r \cdot \overline{T} = \overline{T \cdot d}\,, \qquad q \cdot \overline{T} + r = \overline{d}$ (6)

with

$\overline{T^k} = \frac{1}{N} \sum_{i=1}^{N} T_i^k\,, \qquad \overline{d} = \frac{1}{N} \sum_{i=1}^{N} d_i$ (7)

and

$\overline{T \cdot d} = \frac{1}{N} \sum_{i=1}^{N} T_i \cdot d_i .$ (8)

the condition sufficient for the minimum value is that the second derivatives be positive. equation (4) yields

$\frac{\partial^2}{\partial q^2} \sum_{i=1}^{N} (d_i - q \cdot T_i - r)^2 = 2 \sum_{i=1}^{N} T_i^2 > 0\,, \qquad \frac{\partial^2}{\partial r^2} \sum_{i=1}^{N} (d_i - q \cdot T_i - r)^2 = 2N > 0 ,$ (9)

showing that the condition sufficient for the minimum value is fulfilled independently of the parameter values.
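the 2 × 2 system (6) can also be solved directly; a small sketch using the means defined in (7) and (8), evaluated for the 20 kn data of table 1:

```python
import numpy as np

T = np.array([20.03, 21.00, 21.99, 23.06, 24.90])
d = np.array([0.800996, 0.800992, 0.800988, 0.800984, 0.800978])

# normal equations (6): [[mean(T^2), mean(T)], [mean(T), 1]] @ [q, r] = [mean(T*d), mean(d)]
A = np.array([[np.mean(T ** 2), np.mean(T)],
              [np.mean(T), 1.0]])
b = np.array([np.mean(T * d), np.mean(d)])
q, r = np.linalg.solve(A, b)
print(q, r)   # same q and r as the library fits shown in figure 1
```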
from (6), the coefficients 𝑞 and 𝑟 can be calculated. they are functions of the $\overline{T^k}$ ($k \in \{1, 2\}$) and $\overline{T \cdot d}$ and, due to (7) and (8), functions of the 𝑇𝑖 and 𝑑𝑖. this means that their uncertainties can, in principle, be calculated by applying the standard methods of the gum. the solution of (6) can be written as

$q = \frac{\overline{T \cdot d} - \overline{T} \cdot \overline{d}}{\overline{T^2} - \overline{T}^2}\,, \qquad r = \overline{d} - q \cdot \overline{T} .$ (10)

it must be noted that the slope of the linear function is the same over the whole temperature range, whereas the intercept depends on the value taken as zero. if, instead of the °c scale used in table 1, the kelvin scale is used, the value of the intercept calculated will change, see figure 2.

figure 2: parameters (for the 20 kn force in table 1) of a linear fit in excel with kelvin temperatures

force transfer standards are usually stored and operated (and, if possible, transported) in a narrow temperature range from 18 °c to 28 °c (for key comparison measurements, a much smaller interval such as 20 °c ± 0.5 k may be required). this means that the behaviour of the force transducer near 0 °c is not of interest; it is therefore not important that it be known for temperatures close to 0 k. it should be sufficient to describe the transducer's behaviour in the narrow temperature interval where it was investigated. usually, the reference temperature $T_\mathrm{ref}$ for comparison measurements is agreed in advance when the technical protocol is compiled. then, the calculation can be carried out with the new temperatures $T'_i$ defined as $T'_i = T_i - T_\mathrm{ref}$, and (10) will be written as

$q = \frac{\overline{T' \cdot d} - \overline{d} \cdot \overline{T'}}{\overline{T'^2} - \overline{T'}^2}\,, \qquad r = \overline{d} - q \cdot \overline{T'} .$ (11)

in a special case in which the reference temperature $T_\mathrm{ref}$ equals the mean temperature $\overline{T}$, the mean of the new temperatures becomes zero:

$\overline{T'} = \frac{1}{N} \sum_{i=1}^{N} T_i - \overline{T} = \overline{T} - \overline{T} = 0 .$ (12)

in this case, (11) is simplified to

$q = \frac{\overline{T' \cdot d}}{\overline{T'^2}}\,, \qquad r = \overline{d} .$ (13)

it must be noted that the single temperature points may change in different measurement campaigns; for example, a new value of 24 °c may be defined instead of or in addition to 23 °c. for better comparability of results, it could be beneficial to define the number of temperature points for all such investigations in advance. the single points may then be chosen in such a way that their mean equals the agreed reference temperature. although it may be difficult to reproduce all single temperatures very accurately, the remaining deviations of 0.1 … 0.2 k should be small enough to be neglected. with (7) and (8), equations (13) can now be rewritten as

$q = \sum_{i=1}^{N} T'_i \cdot d_i \Big/ \sum_{i=1}^{N} T'^2_i\,, \qquad r = \frac{1}{N} \sum_{i=1}^{N} d_i .$ (14)

depending on the reference temperature chosen, (11) or (14) are the model functions to which the gum should be applied to find the uncertainties 𝑢(𝑞) and 𝑢(𝑟). here, the aim is to find the regression function $\tilde d(T)$ and the uncertainties of the fitted values, preferably also given by a function $u(\tilde d)$. in this work, the more common approach (11) was used because the reference temperature $T_\mathrm{ref}$ did not match the mean of the temperatures 𝑇𝑖.

5. results
the calculations were carried out under the assumption that no correlations existed between the input values $T'_i$, $d_i$, which were treated as uncorrelated quantities. the equations (11) for the determination of 𝑞 and 𝑟 are quite simple, whereas those for the determination of 𝑢(𝑞) and 𝑢(𝑟) are rather complex. following the gum procedure, partial derivatives with respect to each of the ten input variables must be calculated.
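a minimal sketch of equations (11), evaluated for the 20 kn data of table 1 with a reference temperature of 20.5 °c (the value used later in the paper):

```python
import numpy as np

T = np.array([20.03, 21.00, 21.99, 23.06, 24.90])
d = np.array([0.800996, 0.800992, 0.800988, 0.800984, 0.800978])
T_ref = 20.5

Tp = T - T_ref                              # shifted temperatures T'_i
q = (np.mean(Tp * d) - np.mean(d) * np.mean(Tp)) / \
    (np.mean(Tp ** 2) - np.mean(Tp) ** 2)   # slope, eq. (11)
r = np.mean(d) - q * np.mean(Tp)            # intercept at T_ref, eq. (11)
print(q, r)   # approx. -3.7e-6 (mv/v)/k and 0.800 994 mv/v (cf. table 2)
```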
maxima, a computer algebra system [5], was used to carry out this calculation analytically, yielding terms with several hundred parts as the result for the uncertainties. although this software could have been used to calculate the complete result $q$, $u(q)$, $r$, $u(r)$ for all the different transducers, this would have been time-consuming. a better way was to obtain the analytical result in maxima and to use this formula in excel. the known formula was then applied as a spreadsheet function in all subsequent calculations with the same input data scheme (five pairs of values as in table 1).

the results show that, if a temperature outside the interval of interest is taken as zero, the uncertainty of the intercept $r$ will be larger than it would be if a value from the given interval were taken as reference temperature $T_\mathrm{ref}$, see table 2. moreover, the uncertainty of the intercept is minimal if the reference temperature chosen is the arithmetic mean of the temperature values $T_i$.

table 2: values and associated standard uncertainties of the slope and intercept of a linear fit calculated for different reference temperatures

  T_ref in °C | q, u(q) in (mV/V)/K            | r, u(r) in mV/V
  −273.15     | −3.699 · 10⁻⁶, 0.798 · 10⁻⁶    | 0.802 080, 0.000 236
  0.00        | −3.699 · 10⁻⁶, 0.798 · 10⁻⁶    | 0.801 070, 0.000 017 8
  20.50       | −3.699 · 10⁻⁶, 0.798 · 10⁻⁶    | 0.800 994, 0.000 001 9
  22.20       | −3.699 · 10⁻⁶, 0.798 · 10⁻⁶    | 0.800 988, 0.000 001 4
  24.00       | −3.699 · 10⁻⁶, 0.798 · 10⁻⁶    | 0.800 981, 0.000 002 0

the uncertainty of the slope $q$ is not affected by the reference temperature selected; due to the linear model (2), its contribution to the uncertainty of the fitted value $\tilde{d}(T'_i)$ is calculated with the temperature $T'_i$ as sensitivity coefficient. this means that the uncertainty contribution of the slope will be lower the closer the reference temperature $T_\mathrm{ref}$ and the temperatures $T_i$ are to each other. by means of the known coefficients, the regression function $\tilde{d}(T)$ can be calculated. the known standard uncertainties of the coefficients also allow other functions to be determined – namely, functions defining a 1-σ band along the approximation function; expanded uncertainties with $k = 2$ yield a 2-σ band. figure 3 and figure 4 show these results for the given 50 kN transducer at the 20 kN load step, with the measured values (blue symbols) with standard uncertainty bars for $T$ and $d$, the fit function (red line) and the 2-σ bands (dashed red lines). the standard uncertainty of the temperature was calculated from a rectangular distribution with a half-width of 0.1 K; the standard uncertainty of the deflection was 0.000 003 mV/V. the reference temperature was 20.5 °C. figure 5 shows how the results change if another reference temperature is chosen: if the reference temperature $T_\mathrm{ref}$ is the arithmetic mean of the temperatures $T_i$ at which the investigation is carried out, the uncertainties become lower, and their minimum value is at the reference temperature.
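the trend shown in table 2 can also be reproduced numerically with a monte carlo propagation in the style of gum supplement 1 (used here for illustration only; the paper itself uses the analytic gum approach): the fit is repeated many times with the inputs perturbed according to their uncertainties, and the spread of the resulting coefficients is evaluated for several reference temperatures. the input values below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
T = np.array([18.5, 20.5, 22.2, 24.0, 26.1])                      # placeholder temperatures in °C
d = np.array([0.801005, 0.800998, 0.800991, 0.800985, 0.800977])  # placeholder deflections in mV/V
u_d = 3e-6     # standard uncertainty of the deflection in mV/V
M = 200_000    # number of monte carlo trials

for T_ref in (-273.15, 0.0, 20.5, T.mean()):
    # perturb inputs: rectangular temperature distribution (half-width 0.1 K), gaussian deflection noise
    Ts = (T - T_ref) + rng.uniform(-0.1, 0.1, (M, T.size))
    ds = d + rng.normal(0.0, u_d, (M, d.size))
    q = ((Ts * ds).mean(axis=1) - Ts.mean(axis=1) * ds.mean(axis=1)) / ((Ts**2).mean(axis=1) - Ts.mean(axis=1)**2)
    r = ds.mean(axis=1) - q * Ts.mean(axis=1)
    print(f"T_ref = {T_ref:8.2f} °C: u(q) ≈ {q.std():.3e} (mV/V)/K, u(r) ≈ {r.std():.3e} mV/V")
```

as in table 2, u(q) is unchanged across reference temperatures, while u(r) grows with the distance of T_ref from the mean of the measured temperatures.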
figure 3: result of the regression analysis for the 20 kN force step of the 50 kN transducer (details see text above)

figure 4: result of figure 3 when the uncertainty of the deflection is increased to 0.000 005 mV/V (top) and when the half-width of the temperature distribution is increased to 0.5 K (bottom)

figure 5: result of figure 3 when the arithmetic mean of the temperature values $T_i$ (22.196 °C) is taken as the reference temperature $T_\mathrm{ref}$

the results and figures shown above are an example of a transducer with a very linear dependency of the deflection on the temperature; the relative deviation between rss and ssd is 0.997. in addition, the slope value of −3.7 · 10⁻⁶ (mV/V)/K at a 0.8 mV/V signal is very low and a good value for a transfer transducer. the results for the other transducers are shown as examples in figure 6 to figure 8.

figure 6: result of the 20 kN transducer at a force of 20 kN and a reference temperature of 20.5 °C (further details as in figure 3)

figure 7: result of the 100 kN transducer at a force of 50 kN and a reference temperature of 20.5 °C (further details as in figure 3)

figure 8: result of the 200 kN transducer at a force of 200 kN and a reference temperature of 20.5 °C (further details as in figure 3)

the 20 kN transducer (figure 6) has a slope of −3 · 10⁻⁵ (mV/V)/K at 2 mV/V (20 kN force). the absolute value is larger than that of the 50 kN transducer, and the linearity of the measured values is not as good as that of the 50 kN transducer in figure 3; the relative deviation between rss and ssd is 0.970. a special feature of the 100 kN transducer (figure 7) is the positive slope of the deflection/temperature function, whereas the 20 kN and 50 kN transducers have a negative slope. the absolute value of the slope is slightly larger than that of the 20 kN transducer and amounts to 2.5 · 10⁻⁵ (mV/V)/K at 1 mV/V (50 kN force) and 3.9 · 10⁻⁵ (mV/V)/K at 2 mV/V (100 kN force). apart from the 23 °C result, the values are very linear. although the reason for the deviation of the 23 °C result has not yet been found, this single value had no significant effect on the fit function, as can be seen in figure 7. on the other hand, this result indicates that temperature coefficients should not be determined from measurements at only two different temperatures. finally, the deflection of the 200 kN transducer (figure 8) did not show a clear temperature dependence; the variations of the deflection appear to be random. on the other hand, the overall relative span of the deflection values in the temperature range measured is 21 … 24 ppm; this is comparable with the corresponding value of the 50 kN transducer, which showed the lowest temperature sensitivity. nevertheless, the behaviour of the 200 kN transducer should be investigated further; the results obtained to date would not be sufficient to calculate corrections.

the last step in these calculations is the determination of the function $u(\tilde{d})$, which describes the standard uncertainties associated with the fit function $\tilde{d}$. this can be realised by applying higher-order regression methods, such as cubic regression, to the uncertainty values calculated, see figure 9.
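a minimal sketch of this final step, assuming that the pointwise uncertainties $u(\tilde{d}(T_i))$ have already been computed (the numbers below are placeholders): a third-order polynomial is fitted to them with numpy.

```python
import numpy as np

# placeholder temperatures and previously computed uncertainties u(d~(T_i)) in mV/V
T_i = np.array([18.5, 20.5, 22.2, 24.0, 26.1])
u_d_i = np.array([2.4e-6, 1.9e-6, 1.8e-6, 2.0e-6, 2.6e-6])

# third-order regression function u~(T), cf. figure 9
u_tilde = np.poly1d(np.polyfit(T_i, u_d_i, deg=3))

print(f"u~(20.5 °C) = {u_tilde(20.5):.2e} mV/V")
```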
6. application

the temperature influence measured on a single transducer in different measurement campaigns, or even on different transducers, can be compared by determining (as the final step) the regression function $\tilde{d}(T)$ and the associated uncertainty function $u(\tilde{d}) = u\big(\tilde{d}(T)\big) = \tilde{u}(T)$. for the example in figure 3 with $T_\mathrm{ref} = 20.5$ °C, we obtained the following functions ($T$ in °C)

$\tilde{d}(T)/(\mathrm{mV/V}) = -3.7 \cdot 10^{-6} \cdot (T/°\mathrm{C}) + 0.801070$ ,
$\tilde{u}(T)/(\mathrm{mV/V}) = -1.45 \cdot 10^{-8} \cdot (T/°\mathrm{C})^3 + 1.061 \cdot 10^{-6} \cdot (T/°\mathrm{C})^2 - 2.524 \cdot 10^{-5} \cdot (T/°\mathrm{C}) + 0.0001981$ .  (15)

figure 9: standard uncertainty (1-σ) part of the result from figure 3, third-order regression function $\tilde{u}(T)$ (dashed line) based on the calculated values $u(\tilde{d}_i) = u\big(\tilde{d}(T_i)\big)$ (blue dots) for reference temperatures of 20.5 °C (top) and 22.196 °C (bottom)

however, in most cases, the results will be applied in order to correct comparison measurement results $d$, $u(d)$ obtained at deviating temperatures $T$. usually, the corrected values $\hat{d}$ are calculated in accordance with

$\hat{d}(T_\mathrm{ref}) = d(T) + q \cdot \Delta T$ , $\quad \Delta T = T_\mathrm{ref} - T$ ,  (16)

where the intercept $r$ does not appear. when a measurement result is corrected in this way, the regression function is defined in such a way that it runs through the uncorrected result, meaning that the intercept is no longer a free parameter. nevertheless, the regression function $\tilde{d}(T)$ has an associated uncertainty based on the uncertainties of the slope, $u(q)$, and the intercept, $u(r)$. the formal application of the gum to (16) would yield a result that contains no contribution of $u(r)$:

$u^2(\hat{d}) = u^2(d) + q^2 \cdot u^2(T) + (T_\mathrm{ref} - T)^2 \cdot u^2(q)$ .  (17)

therefore, another model is proposed instead of (16) – namely,

$\hat{d}(T_\mathrm{ref}) = d(T) + \Delta\tilde{d}(T_\mathrm{ref} - T)$ , $\quad u^2(\hat{d}) = u^2(d) + u^2(\Delta\tilde{d})$ ,  (18)

where

$\Delta\tilde{d}(T_\mathrm{ref} - T) = \tilde{d}(T_\mathrm{ref}) - \tilde{d}(T)$ , $\quad u^2(\Delta\tilde{d}) = u^2\big(\tilde{d}(T_\mathrm{ref})\big) + u^2\big(\tilde{d}(T)\big)$ .  (19)

this means that, instead of forcing the regression function to run through the uncorrected result, a correction value $\Delta\tilde{d}$ is added to this result. this value can be calculated from the increase (or decrease, for a negative slope) of the regression function related to the temperature deviation $T_\mathrm{ref} - T$. this increase (or decrease) of the function is itself uncertain, with an uncertainty $u(\Delta\tilde{d})$. the uncertainties calculated according to (18) are larger than those yielded by (17), an effect caused mainly by the uncertainty of the intercept. for the example in figure 3, the standard uncertainty of the corrected deflection ($\Delta T = 0.5$ K) in accordance with (17) is 3.03 · 10⁻⁶ mV/V, whereas the same value calculated in accordance with (18) amounts to 4.08 · 10⁻⁶ mV/V.
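the difference between the two correction models can be checked numerically. the sketch below uses the fitted functions from (15); the measured deflection and the uncertainty figures are illustrative placeholders, not the exact values of this example.

```python
import numpy as np

# fitted functions from (15): regression function and uncertainty function, T in °C, results in mV/V
d_tilde = lambda T: -3.7e-6 * T + 0.801070
u_tilde = np.poly1d([-1.45e-8, 1.061e-6, -2.524e-5, 0.0001981])

T, T_ref = 20.0, 20.5            # measurement and reference temperature in °C (illustrative)
d, u_d = 0.800996, 3e-6          # measured deflection and its standard uncertainty (placeholders)
q, u_q = -3.7e-6, 0.798e-6       # slope and its standard uncertainty
u_T = 0.1 / np.sqrt(3)           # rectangular temperature distribution, half-width 0.1 K

# model (16)/(17): correction q * dT, no contribution of u(r)
u_17 = np.sqrt(u_d**2 + q**2 * u_T**2 + (T_ref - T)**2 * u_q**2)

# model (18)/(19): correction taken from the regression function itself
d_hat = d + (d_tilde(T_ref) - d_tilde(T))
u_18 = np.sqrt(u_d**2 + u_tilde(T_ref)**2 + u_tilde(T)**2)

print(f"corrected deflection: {d_hat:.6f} mV/V; u per (17): {u_17:.2e}, per (18): {u_18:.2e} mV/V")
```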
7. summary

the application of standard uncertainty calculation methods to the linear regression of measurement results using the least-squares approach was investigated. the method proposed can be used to calculate the uncertainty of a linear regression function based on the uncertainties of its slope and its intercept. the method can be applied to the correction of measurement results obtained at deviating temperatures and is not limited to force transducers: it can also be applied, for example, to torque or pressure transducers.

8. acknowledgements

this work is part of the project 18sib08, funded by the empir programme. i would like to thank my colleague norbert tetzlaff for his accurate measurements and calibrations in the 200 kN force standard machine at ptb.

9. references

[1] hbm: data sheet of the z30a force transducers. online [accessed 8 january 2020] (registration necessary): https://www.hbm.com/fileadmin/mediapool/hbmdoc/technical/b02075.pdf
[2] gtm: data sheet of the ktn-d force transducers. online [accessed 8 january 2020]: https://www.gtm-gmbh.com/fileadmin/media/dokumente/produkte/datenblaetter/de/datenblatt_serie_ktn-d_20170419.pdf
[3] r. kumme, h. kahmann, f. tegtmeier, n. tetzlaff, d. röske, ptb’s new 200 kN deadweight force standard machine, proc. of the imeko 23rd tc3, 13th tc5 and 4th tc22 international conference, 30 may to 1 june 2017, helsinki, finland. online [accessed 8 january 2020]: https://www.imeko.org/publications/tc3-2017/imeko-tc3-2017-032.pdf
[4] iso/iec guide 98-3:2008, uncertainty of measurement – part 3: guide to the expression of uncertainty in measurement (gum:1995). online [accessed 8 january 2020]: https://www.iso.org/standard/50461.html ; pdf version: https://www.bipm.org/utils/common/documents/jcgm/jcgm_100_2008_e.pdf
[5] maxima, a computer algebra system. online [accessed 14 june 2020]: http://maxima.sourceforge.net

introductory notes for the acta imeko special issue on the 23rd international symposium on measurement and control in robotics organised by tc17

acta imeko, issn: 2221-870x, september 2021, volume 10, number 3, 1-2

bálint kiss1, istván harmati1
1 budapest university of technology and economics, műegyetem rkp. 3., 1111 budapest, hungary

section: editorial
citation: bálint kiss, istván harmati, introductory notes for the acta imeko special issue on the 23rd international symposium on measurement and control in robotics organized by tc17, acta imeko, vol. 10, no. 3, article 1, september 2021, identifier: imeko-acta-10 (2021)-03-01
editor: francesco lamonaca, university of calabria, italy
received september 1, 2021; in final form september 27, 2021; published september 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: bálint kiss, e-mail: bkiss@iit.bme.hu

dear readers,
measurement and control techniques are crucial for achieving reliable and safe autonomous features in robotics. recent developments in both fields are key enablers for the constantly widening use of robots in industrial, medical, military and service-oriented applications.
faithful to its traditions, the 23rd edition of the international symposium on measurement and control in robotics (ismcr), organised by imeko technical committee 17, has provided a forum for the exchange of the latest research results and novel ideas in robotic technologies and applications, this time with a special emphasis on smart mobility. the symposium focused on various aspects of research, applications and trends in relation to robotics, advanced human–robot systems and applied technologies in the fields of robotics, telerobotics, autonomous vehicles and simulator platforms, as well as vr/ar and 3d modelling and simulation. the symposium was hosted by the budapest university of technology and economics in budapest, hungary. due to the covid-19 pandemic, the symposium was held in a hybrid format; authors outside hungary participated remotely, while those in hungary had the choice between online and in-person attendance in accordance with the regulations in force. a total of 49 submissions were received from 11 different countries. the review process, involving 106 external reviews, resulted in 40 accepted papers. a special technical session was devoted to the topic of robotised intervention in risky (chemical, biological, radiological and nuclear) environments. in accordance with the symposium’s main topics, three invited plenary lectures were given by specialists from industry (kuka robotics, thyssenkrupp components technology) and academia. topics included the virtualised stability analysis of mechatronic systems, human–robot collaboration in industrial production and new standardisation trends in the navigation of industrial mobile robots. based on their technical and scientific value and the evaluation of the reviewers, the authors of ten contributions were invited to submit extended versions of their papers for this special issue.

the paper entitled ‘vision-based reinforcement learning for lane-tracking control’, authored by kalapos et al., applies ai-based techniques to solve the lane-following and obstacle-avoidance problem of autonomous vehicles, successfully implementing the results in the onboard computers of reduced-size testbed vehicles. staying with autonomous vehicles, the paper ‘using coverage path planning methods for car park exploration’ by ádám et al. presents exploration methods for finding the optimal traversal of an unknown parking area in order to identify free parking spaces. reinforcement learning can also be used in the control of multi-agent robotic systems, as suggested by the paper by paczolay entitled ‘a2cm: a new multi-agent algorithm’, which presents an optimised and modified version of the so-called synchronous actor–critic algorithm. de cubber et al. address a similar optimisation problem in their contribution entitled ‘distributed coverage optimisation for a fleet of unmanned maritime systems’. the authors propose a methodology that optimises the coverage of a fleet of unmanned maritime agents, thereby maximising the chances of identifying potential threats. high-level autonomous functions and human–robot collaboration must be reliably and safely supported by platforms (robotic arms, drones, vehicles, etc.); hence, a second group of papers is devoted to the presentation of related results. szabó et al.
report an identification method for friction parameters in their paper entitled ‘dynamic parameter identification method for robotic arms with static friction modelling’; the authors consider friction models that are linear in the unknown parameters. the paper entitled ‘uncertain estimation-based motion planning algorithms for mobile robots’, authored by gyenes et al., proposes the extension of two obstacle-avoidance methods, the velocity obstacle technique and the artificial potential field method, to take into consideration the time-varying uncertainty of the measured data in relation to the localisation of static and dynamic obstacles. the autonomous delivery of items by drones requires suitable gripping devices and grasping strategies. in their paper entitled ‘a lightweight magnetic gripper for a delivery aerial vehicle: design and applications’, sutera et al. report the design of a low-power and lightweight magnetic gripper that takes into consideration the size and weight of the transported objects. füchter et al. studied the possibilities of using ar techniques in specific phases of pilot training; their paper entitled ‘aeronautic pilot training and augmented reality’ reports the design experience of a mobile/tablet application prototype that reproduces the flight panel of a cessna 150 aircraft. the paper entitled ‘human–robot collision predictor for flexible assembly’ by paniti et al. presents a prediction-based collision warning system for a cobot scenario in which a robotic arm and human operators share a common workspace, a system that also takes communication delays into consideration. another type of human–robot interaction is the user interface for controlling a robotic arm. the paper entitled ‘a 3d head pointer: a manipulation method that enables spatial position and posture for supernumerary robotic limbs’ by oh et al. addresses the specific problem of controlling a wearable robotic arm using face orientation and head motion; it should be noted that this contribution received the best paper award of the symposium.

we would like to express our gratitude to all the authors for their contributions and their participation at the ismcr 2021 symposium despite the unprecedented conditions created by the pandemic. we must also thank prof. francesco lamonaca, editor-in-chief of acta imeko, and his team for their help and support during the editorial process of this special issue. it has been a great honour to serve as guest editors for this special issue, and we hope that the papers will inspire future research in the imeko tc17 area of expertise and beyond.

bálint kiss, istván harmati
guest editors

is our understanding of measurement evolving?

acta imeko, issn: 2221-870x, december 2021, volume 10, number 4, 209-213

luca mari1
1 università cattaneo liuc, c.so matteotti, 22, 21053 castellanza (va), italy

section: research paper
keywords: foundations of measurement; measurement and quantification; measurement as empirical process; measurement as representation
citation: luca mari, is our understanding of measurement evolving?, acta imeko, vol. 10, no. 4, article 32, december 2021, identifier: imeko-acta-10 (2021)-04-32
section editor: francesco lamonaca, university of calabria, italy
received october 1, 2021; in final form november 20, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: luca mari, e-mail: lmari@liuc.it

abstract: traditionally understood as a quantitative empirical process, in the last decades measurement has been reconsidered in its aims, scope, and structure, so that the basic questions are again important: what kind of knowledge do we obtain from a measurement? what is the source of the acknowledged special efficacy of measurement? a preliminary analysis is proposed here from an evolutionary perspective.

1. introduction

the terminology about and around measurement is often not so specific, and sometimes even a bit sloppy. for sure, a long tradition allows us to assume a reasonably common understanding of a phrase like “to measure the length (or the mass, or …) of a given physical body”. but the claim that, for example, thermal comfort (as in [1]) is a measurable property is not as obviously meant in the same way by all relevant stakeholders. do different experts refer to the same sort of situations when they talk about the measurement of thermal comfort? what do they mean when they use instead phrases like “determination of thermal comfort”, “assessment of thermal comfort”, “quantification of thermal comfort”, “assignment of a value to thermal comfort”? and what are the conditions that make the determination, or the assessment, or … of thermal comfort a measurement?

advancements of science and technology are not driven by terminological works, and therefore this kind of question could be dismissed as immaterial if our goals are scientific or technological. admittedly, indeed, clearer ideas about the meaning of a term – like “measurement”, or “measurand”, or “measurement uncertainty”, and so on – do not improve our ability to design measuring instruments and perform measurements. nevertheless, metrology (in the broad sense given by the international vocabulary of metrology (vim) [2]: “science of measurement and its application”, thus in principle not limited to physical quantities) is a very special body of knowledge, and this peculiarity suggests that terminology might be more important in metrology than in other experimental fields like physics and chemistry. it is a fact that some key contents of metrology derive at least in part from social understanding and agreement, and not only from the outcomes of observation and experimentation. an obvious example is the identification of an individual quantity as the unit for a given kind, like the kilogram for mass: there can be empirical criteria to be taken into account, but the selection is ultimately conventional, and as such not falsifiable in popper’s sense [3]. in fact, any discipline is grounded on some presuppositions and conventions that are chosen because they are mutually consistent, simple, elegant, …, and not because they are true. the peculiarity of metrology is that this applies to a substantial part of its body of knowledge: even though measurement is an experimental process (the idea that a measurement can be a gedankenexperiment, a thought experiment, sounds strange), it is as if the foundational role that metrology plays for all empirical sciences prevents it from obtaining its own foundations somewhere else. metrology is a foundation without a foundation [4].
while this seems to be a structurally obvious situation – if $x_i$ is founded on $x_{i-1}$, which is founded on $x_{i-2}$, …, then the sequence must stop at some $x_0$ that has no foundations – acknowledging that metrology sometimes has the delicate role of $x_0$ could generate an embarrassing doubt: isn’t metrology a “real” science then? and, however, how can we forget that what is possibly the “most foundational” component of the metrology of the last 150 years is the metre convention, i.e., first of all a political treaty? or, as another example closer to us in time, consider the interesting discussion about base quantities, triggered by the 2019 revision of the definitions of some units, given in terms of numerical values assigned to some quantities modelled as constants (the speed of light in vacuum etc.). does this revision imply that the very idea of base quantity is now unjustified? or that base quantities should be those of the defining constants (hence speed, action, etc.)? or, finally, that nothing should be changed on this matter (and therefore that length, mass, etc. should remain the base quantities)? the question has in fact some importance, but no experiments can be designed to provide an answer: again, it is a matter of shared understanding of the pros and the cons, and then of agreement. this solicits us to reconsider the role of terminology in metrology: where social agreement plays a key role, and when disagreements cannot be settled by seeking the truth, well-defined concepts and well-chosen terms – this is what terminological works aim at – may be useful, if not indispensable. moreover, the fact that measurement is a fundamental enabler not only of top science, but also of technology, commerce, health care, environment protection, etc., adds a further reason to the special importance of terminology in metrology: again, this role requires shared concepts and terms, on which shared knowledge – like the one that guarantees the metrological traceability of measurement results – can be grounded. metrology is a social body of knowledge.

basically everything that has been considered so far applies also, and particularly, to the very starting point of metrology: what is measurement? how should ‘measurement’ be defined? (someone might object that the starting point of metrology is the definition of ‘quantity’, not the one of ‘measurement’, perhaps by referring to the fact that the first entry of the first chapter of the vim is about ‘quantity’; i respectfully disagree: ‘quantity’, like ‘property’, is a pre-metrological concept). in fact, there is nothing new in discussions about the scope of measurement and the terminological endeavor of providing an appropriate definition of ‘measurement’: the sometimes harsh clashes about the measurability of psychological properties (i.e., the issue of whether psychometrics is actually about measurement) highlight that on the table there is way more than a dictionary issue.
the distinction between, say, “my opinion on the competence of the candidate is …” and “the result of the measurement of the competence of the candidate is …” is not only about the occurrence of the term “measurement” in the second sentence, and in fact the position that psychological properties can be measured has been under scrutiny for decades [5]. rather, a fundamental question is at stake here: what kind of knowledge do we obtain from a measurement? and also, given that “measurement is often considered a hallmark of the scientific enterprise and a privileged source of knowledge” [6]: “what [is] the source of [the] special efficacy” of measurement? [7]. this is the subject to which the present paper is devoted.

2. the received position: measurement as a coin with a euclidean side and a galilean side

it is remarkable that the answer to these questions is also historically ambiguous, with two main lines of development [8]. on the one hand, the euclidean tradition emphasizes the quantitative nature of measures, where in this sense “measurable” is basically a synonym of “divisible by ratio”, as clearly explained for example by de morgan: “the term ‘measure’ is used [by euclid] conversely to ‘multiple’; hence [if] a and b have a common measure [they] are said to be commensurable” [9]. hence, this concept of measure applies first of all to numbers: “a measure of a number is any number that divides it, without leaving a remainder. so, 2 is a measure of 4, of 8, etc.” [10], as in fact stated by euclid himself: “a number is part of a(nother) number, the lesser of the greater, when it measures the greater”. there is nothing necessarily empirical in this concept of measure, and in fact “in the geometrical constructions employed in the elements […] empirical proofs by means of measurement are strictly forbidden” ([11], in the introductory notes to the translation of euclid’s elements). on the other hand, the galilean tradition emphasizes the empirical nature of measurement, where before galileo “no one ever sought to get beyond the practical uses of number, weight, measure in the imprecision of everyday life” [12]. the tight connection between instrumentation and measurement witnesses the acknowledged role of measurement as a key enabler of the experimental method: a measurement is the process performed by making a physical device, i.e., a measuring instrument, interact with an empirical object according to the instructions provided by a measurement procedure.

being quantitative and being empirical are orthogonal conditions: there are quantitative empirical processes, but a process may be quantitative and not empirical, or empirical and not quantitative. this means that in principle a process that is not a (galilean) measurement might produce a (euclidean) measure, and a process that is a (galilean) measurement might not produce a (euclidean) measure (such a lexical peculiarity was already highlighted by bunge [13], who discussed the difference between ‘measure’ and ‘measurement’; this becomes further clear by comparing the scopes of a measure theory and a measurement theory). however, historically galileo came later, and drew from euclid and his interpretations, so that the principled independence became a factual convergence: (galilean) measurement was assumed to be a quantitative process that produces (euclidean) measures.
the lexicon maintains some traces of the coexistence of these two standpoints and of the interest in endorsing both, as witnessed in particular by the expression “weights and measures”, as if weighing were not a way to measure. indeed, euclid was concerned with geometric quantities, so that in the euclidean tradition geometric quantities were considered measurable (or “mensurable”, as it was said): according to hutton, “mensuration” is “the act, or art, of measuring figured extensions and bodies, or of finding the dimensions and contents of bodies, both superficial and solid” [10], so that for example in the case of temperature he used the term “observation” (this shows that the scope of measurement already broadened in the past!). plausibly, the synthesis of the euclidean and the galilean standpoints led to the idea that only extensive (taken from euclid) physical (taken from galileo) quantities are measurable, at least in the fundamental sense specified by campbell [14]. with some simplification, we may then summarize that the received view of about 100 years ago characterized measurement by two complementary components: measurement as a coin with a euclidean side and a galilean side.

such a position is so strict – it is the outcome of the intersection of two independent standpoints, and thus inherits two sets of constraints – that, not surprisingly, trying to overcome it has been a target for several decades. in this perspective, the well-known report of the ferguson committee to the british association for the advancement of science, published in 1940 [15], stating that “the main point against the measurability of the intensity of a sensation was the impossibility of satisfactorily defining an addition operation for it” [16], can be read as a move to defend the orthodoxy of the synthesis of the euclidean and the galilean sides of the coin.

3. rethinking the received position

in the last century the assumptions that measurement is quantification (the euclidean side) and is about (geometric) physical properties only (the galilean side) have been reanalysed, apparently by asking whether both such requirements really need to be fulfilled, and to what extent. in particular, from stevens’ theory of types of scales [17], representationalism [18] explored how to broaden the euclidean side, sometimes by simply dropping any reference to the galilean side. in this sense we can read statements claiming that “the theory of measurement is difficult enough without bringing in the theory of making measurements” [19], or that a representation theorem “makes the theory of finite weak orderings a theory of measurement, because of its numerical representation” [20]. in this complex context, the definition given by the vim – “process of experimentally obtaining one or more quantity values that can reasonably be attributed to a quantity” [2] – is still quite conservative, with both the euclidean side (“quantity”) and the galilean side (“experimental”) explicitly maintained. is it sufficient for our society, which requires criteria of trust about determining / assessing / attributing values to … properties, like thermal comfort, that are not necessarily quantitative and are not entirely empirical? and is it sufficient for our society, in which the widespread digitalization is producing larger and larger amounts of data (the so-called “big data” phenomenon)?
indeed, in an age of fake news and post-truth, providing criteria that make explicit and possibly operational the vim condition of “reasonable attribution” – and therefore such that not any data deserve to be called “measurement results” – seems to be a valuable achievement. in other words, our complex society would definitely benefit from an effective answer to kuhn’s question about the source of the special efficacy of measurement, while at the same time pushing toward reconsidering the actual necessity of the euclidean and galilean conditions. and in fact conservative positions like the vim’s are challenged today, so that measurement seems to have become a moving target. a significant and authoritative example is the standpoint of the us national institute of standards and technology’s simple guide for evaluating and expressing the uncertainty of nist measurement results, which defines ‘measurement’ as “an experimental or computational process that, by comparison with a standard, produces an estimate of the true value of a property of a material or virtual object or collection of objects, or of a process, event, or series of events, together with an evaluation of the uncertainty associated with that estimate, and intended for use in support of decision-making” [21]. with a mix of tradition (the reference to true values) and innovation (the evaluation of uncertainty as a necessary condition), here both the euclidean and the galilean conditions have been dropped: also non-quantitative properties are in principle measurable, and measurement can also be a non-empirical process about non-empirical (“virtual”) objects. is our understanding of what measurement is still evolving then, and in which direction(s)?

4. some possible evolutionary perspectives

listing some necessary conditions that characterize measurement, and that plausibly are generally accepted, is not a hard task: measurement is (i) a process (ii) designed on purpose, (iii) whose input is a property of an object, and (iv) that produces information in the form of values of that property. indeed, (i) removes the ambiguity of using the term “measurements” also for the results of the process; (ii) highlights that measurement is not a natural process “that happens”; (iii) establishes that phrases like “to measure an object” are not correct, because measured are properties of objects, and not objects as such; and (iv) characterizes measurement as an information process. however, not any such process is a measurement, thus acknowledging that not any data acquisition is a measurement. we may call “property evaluation” a process fulfilling (i)–(iv). what sufficient conditions characterize measurement as a specific kind of property evaluation? the answer does not seem as easy. the term “measurement” does not have a single inherent meaning and is not trademarked, so that nobody can be prevented from using it as she/he likes. nevertheless, without a common foundation, a foundational body of knowledge, as metrology is, is at risk of emptying, or at least of becoming unable to provide a convincing, socially agreeable, and useful answer to kuhn’s question: indeed, not any data acquisition process is claimed to have a “special efficacy”.
as discussed above, the traditional, i.e., euclidean & galilean, answer to this question relies on coupling quantification and instrumentation: the assumption that only quantitative properties are measurable guarantees that measurement results are embedded in the nomological network generated by physical laws, from which specific, and then falsifiable, predictions and inferences can be drawn and the hypothesis of the correctness of the measurement results can be corroborated in turn; and the requirement that measuring instruments are empirical devices guarantees that the degree of objectivity of measurement results can be assessed by analysing how such instruments behave. this is the safe starting point. but if both such sides are erased, what remains of the coin? while sticking to the tradition is safe, it might be too strict with respect to what our society needs, as the definition of ‘measurement’ given by the nist seems to suggest. this is a key challenge for metrology, whose solution is then a matter of social responsibility, not truth seeking. in this context, in which respect of the tradition and new societal needs must be balanced, the possible evolutionary perspectives of measurement can be considered along four main complementary, though somewhat mutually connected, dimensions:

– measurable entities as quantitative or non-quantitative properties (i.e., the reference to the euclidean tradition);
– measurable entities as physical or non-physical properties;
– measuring instruments as technological devices or human beings;
– measurement as an empirical or an informational process, and therefore the relation between measurement and computation (i.e., the reference to the galilean tradition).

again with the explicit admission that at stake here is adequacy, not truth, let us shortly discuss each of these issues.

4.1. measurable entities as quantitative or non-quantitative properties

as stevens’ theory of scale types shows, the distinction between quantitative and non-quantitative properties is not binary. the strongest type is absolute: an absolute evaluation is additive and has both a natural unit and a natural zero, as is the case of counting. the weakest type is nominal: a nominal evaluation only classifies objects by property. several intermediate types exist between absolute and nominal (e.g., ratio, interval, and ordinal, in the initial version of stevens’ theory), and there is not a single objective criterion to decide where a property stops being quantitative and becomes non-quantitative. for example, according to the vim a total order is sufficient for a property to be a quantity, whereas the axiomatic approaches developed from hölder’s [22] consider “continuity as a feature of the scientific concept of quantity”. the connection between being quantitative and being measurable inherits this ambiguity [23]. for sure, evaluations performed in richer scales convey more structural information, but this does not seem a sufficient criterion to rule out any given type from the scope of measurement: only by convention can the decision be made whether nominal (or ordinal, or …) evaluations can be measurements (and what conditions are required to make a property quantitative, for what it is worth).
4.2. measurable entities as physical or non-physical properties

independently of the scale type, the condition of being measurable could be connected to the nature of the considered properties, and in particular to their being physical. a plausible, good justification is the requirement to assess, and possibly to control, the degree of objectivity of the behaviour of the measuring instrument. indeed, this is guaranteed by the operation of a physical sensor at the core of the instrument, where only a physical (or chemical, or biological) property may affect the transduction performed by the sensor. however, a non-physical property, like thermal comfort, may be evaluated as a function of one or more physical properties, such as air temperature and humidity, in a process that is structurally the same as those traditionally considered to be indirect measurements, and where values of such physical properties can then be obtained by means of sensors. the key difference between the evaluation of, say, thermal comfort and density – the latter being a case of indirect measurement through the measurement of mass and volume – is that non-physical properties miss the rich nomological network provided by physics, so that their combination function is not as substantially validated. whether this is sufficient to rule out non-physical properties from the scope of measurement seems to be again a matter of convention.

4.3. measuring instruments as technological devices or human beings

complementary to the option that also non-physical properties are measurable, some evaluations directly performed by human beings could be accepted as measurements. the relatively long history of what has been considered psychophysical measurement shows that there is nothing really new in this. there are in fact three strategies to develop human-based instruments that attribute values to (physical or, more usually, non-physical) properties. first, the behaviour of a “typical” individual is idealized in a model, like in the case of the luminosity function, which describes the sensitivity of a “standard” human eye to different wavelengths. second, a statistic of the behaviour of a set of human beings (e.g., their average response) is considered, under the assumption that individual peculiarities are compensated in a sufficiently large sample, like when the quality of a product or a service is evaluated by synthesizing the responses given by several customers. third, an individual or a small group of individuals operates, under the condition that they are domain experts and therefore their evaluation can be considered to be calibrated against some agreed standards, like in the case of gymnastics judges and wine sommeliers. while at least some cases of the first strategy are widely accepted as measurements, as the inclusion of the candela as the si unit of luminous intensity witnesses, whether and under what conditions human beings can be measuring instruments, possibly operating with the support of guidelines, checklists, etc., is again a controversial issue.

4.4. measurement as an empirical or an informational process

measurements are aimed at attributing values to properties: since values are information entities, any measurement must then include an informational component.
rather, the issue here is whether there can be measurements that are entirely informational processes, with no empirical components at all (note that this is not the case of the gymnastics judges and wine sommeliers mentioned above: they are expected to operate by (empirically) observing gym competitions and tasting wines). there are at least two cases at stake. one is about the evaluation of properties that are in turn informational, for example the number of lines of code in the source of a software program. as quoted in section 3, the “computational process” about a “virtual object” to which the nist definition refers could be such, and actually shares several structural features with the processes that are commonly accepted to be measurements. more controversial, instead, is the hypothesis of considering as a measurement any computation performed on values of properties, like when one is asked to compute the acceleration that a given force would produce if applied to a body of a given mass. of course, the information on the force and the mass (the “input quantities”) could also include an uncertainty, which in this case should be somehow propagated to the acceleration, and someone could decide that this is sufficient to make such a computation a measurement, by then accepting that what has been propagated is a “measurement” uncertainty, through a “measurement” model. in an evolutionary situation, this is also a possible standpoint.

5. conclusions

were we in charge of updating a definition of ‘measurement’, for example for a new edition of the vim, what would we propose, then? how tightly would we stick to the traditional conception of measurement as a quantitative empirical process? or what criterion would we adopt toward a different, and plausibly broader, characterization? conscious that there is not one right, or true, answer, and by taking for granted the necessary conditions listed at the beginning of section 4, i dare to suggest – as a working hypothesis – that the most fundamental and most characterizing task of measurement is to produce information that is explicitly and socially justifiable (it is also the conclusion reached in [8]). this is not related to the quality of the produced information nor to the scope of the process (as the vim states, measurement should be characterized “irrespective of the level of measurement uncertainty and irrespective of the field of application” [2]), but to the condition that a measurement is a “white box” process, so that the quality of its results – be it good or bad – can always in principle be justified. accordingly, the source of the special efficacy of measurement, as investigated by kuhn, is the possibility to reach a common understanding on how trustworthy its results are, along the two key dimensions of the objectivity and the intersubjectivity of the provided information [24], [25]. this explains the strategic importance of some components of the metrology of physical quantities, like the widely agreed definition of measurement units and the condition of metrological traceability of measurement results to such units through the appropriate calibration of measuring instruments. whether and how a sufficient objectivity and intersubjectivity of the information produced by processes that aim at being acknowledged as measurements can be obtained: in the perspective we have proposed here, this is the key challenge for an evolutionary understanding of measurement.
references

[1] iso 7730:2005, ergonomics of the thermal environment – analytical determination and interpretation of thermal comfort using calculation of the pmv and ppd indices and local thermal comfort criteria, international organization for standardization, geneva, 2005.
[2] jcgm 200:2012, international vocabulary of metrology – basic and general concepts and associated terms, 3rd ed., paris: joint committee for guides in metrology, 2012. online [accessed 15 december 2021]: https://www.bipm.org/en/committees/jc/jcgm/publications
[3] k. popper, the logic of scientific discovery, routledge, abingdon-on-thames, 1959.
[4] l. mari, the problem of foundations of measurement, measurement, 38(4) (2005) pp. 259-266. doi: 10.1016/j.measurement.2005.09.006
[5] j. michell, measurement in psychology: a critical history of a methodological concept, cambridge university press, cambridge, 1999.
[6] e. tal, measurement in science, in: the stanford encyclopedia of philosophy, e. n. zalta (ed.), 2020. online [accessed 15 december 2021]: https://plato.stanford.edu/archives/fall2020/entries/measurement-science
[7] t. s. kuhn, the function of measurement in modern physical science, isis, 52(2) (1961) pp. 161-193.
[8] l. mari, m. wilson, a. maul, measurement across the sciences – developing a shared concept system for measurement, springer nature, 2021. online [accessed 15 december 2021]: https://link.springer.com/book/10.1007%2f978-3-030-65558-7
[9] a. de morgan, the connection of number and magnitude: an attempt to explain the fifth book of euclid, taylor and walton, london, 1836. online [accessed 15 december 2021]: https://archive.org/details/connexionofnumbe00demorich
[10] c. hutton, a mathematical and philosophical dictionary, johnson, london (freely available on google books), 1795.
[11] euclid’s elements of geometry, the greek text of j. l. heiberg (1883-1885), edited and provided with a modern english translation by richard fitzpatrick, 2008. online [accessed 15 december 2021]: http://farside.ph.utexas.edu/books/euclid/euclid.html
[12] a. koyré, du monde de l’à peu près à l’univers de la précision, in: a. koyré (ed.), études d’histoire de la pensée philosophique (pp. 341-362), gallimard, paris, 1948.
[13] m. bunge, on confusing ‘measure’ with ‘measurement’ in the methodology of behavioral science, in: m. bunge (ed.), the methodological unity of science, d. reidel, dordrecht, 1973.
[14] n. r. campbell, physics: the elements, cambridge university press, cambridge, 1920.
[15] a. ferguson, c. s. myers, r. j. bartlett, h. banister, f. c. bartlett, w. brown, w. s. tucker, final report of the committee appointed to consider and report upon the possibility of quantitative estimates of sensory events, report of the british association for the advancement of science, 2 (1940) pp. 331-349.
[16] g. b. rossi, measurability, measurement, 40 (2007) pp. 545-562. doi: 10.1016/j.measurement.2007.02.003
[17] s. s. stevens, on the theory of scales of measurement, science, 103(2684) (1946) pp. 677-680.
[18] d. h. krantz, r. d. luce, p. suppes, a. tversky, foundations of measurement, vol. 1: additive and polynomial representations, academic press, new york, 1971.
[19] h. e. kyburg, theory and measurement, cambridge university press, cambridge, 1984.
[20] p. suppes, representation and invariance of scientific structures, csli publications, stanford, 2002.
[21] a. possolo, simple guide for evaluating and expressing the uncertainty of nist measurement results, technical note, national institute of standards and technology, gaithersburg, md, 2015. doi: 10.6028/nist.tn.1900
[22] o. hölder, die axiome der quantität und die lehre vom mass, berichte über die verhandlungen der königlich sächsischen gesellschaft der wissenschaften zu leipzig, mathematisch-physikalische klasse, 53 (1901) pp. 1-46. part 1 translated in j. michell, c. ernst, the axioms of quantity and the theory of measurement, j. mathematical psychology, 40(3) (1996) pp. 235-252. doi: 10.1006/jmps.1997.1178
[23] l. mari, a. maul, d. torres irribarra, m. wilson, quantities, quantification and the necessary and sufficient conditions for measurement, measurement, 100 (2016) pp. 115-121. doi: 10.1016/j.measurement.2016.12.050
[24] a. maul, l. mari, d. torres irribarra, m. wilson, the quality of measurement results in terms of the structural features of the measurement process, measurement, 116 (2018) pp. 611-620. doi: 10.1016/j.measurement.2017.08.046
[25] a. maul, l. mari, m. wilson, intersubjectivity of measurement across the sciences, measurement, 131 (2019) pp. 764-770. doi: 10.1016/j.measurement.2018.08.068

editorial summary to selected papers from the 2020 imeko tc19 international workshop on metrology for the sea “learning to measure sea health parameters”

acta imeko, issn: 2221-870x, december 2021, volume 10, number 4, 3-4

silvio del pizzo1
1 department of science and technologies, university of naples “parthenope”, centro direzionale isola c4, naples, italy

section: editorial
citation: silvio del pizzo, editorial summary to selected papers from the 2020 imeko tc19 international workshop on metrology for the sea “learning to measure sea health parameters”, acta imeko, vol. 10, no. 4, article 2, december 2021, identifier: imeko-acta-10 (2021)-04-02
received december 14, 2021; in final form december 14, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: silvio del pizzo, e-mail: silvio.delpizzo@uniparthenope.it

dear readers,
this issue of acta imeko is dedicated to the works selected from those presented at the “2020 imeko tc19 international workshop on metrology for the sea”, shortly named metrosea.
the 2020 edition was hosted by the university of naples “parthenope”, but unfortunately all sessions, discussions and other activities were carried out online due to the pandemic. the virtual workshop was organized to broadcast all sessions live according to the conference programme, and the attendees were able to participate in all proposed activities by entering virtual rooms. therefore, the online conference was not so different from a live event, with the great advantage that every session was recorded and shared with the participants.

the sea is the medium that enables several human activities, such as fishing, travelling, and the transport of large quantities of goods by vessel. furthermore, its environment hosts an important and fragile ecosystem that represents a great reservoir and source of food for all living beings. at the same time, the sea has a fundamental role in climate change: ocean circulation is a key mechanism of the global climate, transporting and storing heat and fresh water around the world. in the last decades the environment in general, and specifically the marine one, has been compromised by several human activities, which have undermined both the delicate equilibrium of the ecosystem and the ocean circulation system, activating a dangerous process that puts some species of sea fish at risk of extinction. moreover, several studies confirm that extreme climate and meteorological events are closely related to the modification of the global ocean circulation as well as to global warming. this latter phenomenon entails a rise in the global mean sea level, which worries the populations of small islands. therefore, it is evident that the health of the sea is very important for the survival of all humanity. in this context, two concepts were born, the blue economy and blue growth, both referring to the use of the seas in a sustainable way. in short, the sea is a complex environment that involves complex phenomena and assets, and, as everyone knows, measuring is a fundamental step towards a deep knowledge of a phenomenon or an asset. the aim of the international workshop “metrology for the sea” is to support the sharing of recent advances in the field of measurement and instrumentation applied to increasing knowledge and to protecting and preserving the sea and all the assets and phenomena involved. indeed, every year the workshop involves hundreds of researchers and practitioners who develop instrumentation and measurement methods for sea activities. several research topics were discussed during the 2020 edition of the workshop, such as electronic instrumentation, sensors and sensing systems, wireless sensor networks for marine applications, monitoring systems for sea activities, underwater navigation and submarine obstacle detection, pollution detection, and measurements for marine biology, marine geology, and oceanography.

five selected papers from metrosea 2020 are presented in this special issue: one of these is a review paper that illustrates the state of the art and the future perspectives in the field of underwater wireless communications, while the other four papers concern experimental approaches, although applied in different fields. leccese and spagnolo, in the paper titled ‘state-of-the art and perspectives of underwater optical wireless communications’, illustrate the state of the art and future perspectives of underwater wireless communication using optical waves.
This approach is revolutionising underwater communications thanks to the achievable large bandwidth and low latency; unfortunately, the communication distances are still limited. The authors provide a comprehensive overview of the state of the art, highlighting the limitations that affect this technology and the possible future developments, especially in the military field. Their preliminary study verifying the feasibility of a simple, economical, and reliable communication system based on UV-A radiation is particularly interesting. The development of a low-cost underwater communication system with a high transmission rate could, of course, have a great impact on the world of autonomous underwater vehicles.

The scientific contribution presented by Baldo et al., titled "Remote video monitoring for offshore sea farms: reliability and availability evaluation and image quality assessment via laboratory tests", describes a preliminary study for the implementation of a remote monitoring system based on video recording for an offshore sea farm. The aim of this work is to provide a video surveillance infrastructure for supervising breeding cages, along with the fish inside them, in order to counter undesired phenomena such as fish poaching and cage damage. Since sea farms are built in the open sea, where weather conditions change significantly from one season to another, the authors focused on the availability and reliability of the designed monitoring system, performing several laboratory tests in a climatic chamber. A specific section is dedicated to the system architecture, where the authors tackle the critical problem of the power supply. Notably, the designed system employs low-cost, open-source hardware such as the Raspberry Pi.

The other three research papers provide different studies related to measurements in hydrography, dealing with three different aspects of the hydrographic process: the calibration of the echo sounder used for acquiring depth, the enhancement of GNSS (global navigation satellite system) positioning, and the management of post-processed data.

The scientific contribution by Amoroso and Parente, "The importance of sound velocity determination for bathymetric survey", examines the importance of the sound velocity in water during a hydrographic acquisition. The paper considers four different methodologies for modelling the speed of sound in sea water. All of these approaches require accurate knowledge of the water density (a function of temperature, pressure and salinity) at different depths. The authors report the impact that inaccurate measurements of these three parameters have on the bathymetric survey results. The experimentation was conducted on real data collected by a hydrographic vessel, while the error propagation was studied in a simulated environment considering several systematic errors in the measurements of the three inspected parameters. The performed analyses show this methodology to be very promising.

The paper by Baiocchi et al.,
titled "First considerations on post processing kinematic GNSS data during a geophysical oceanographic cruise", concerns the problem of accurate positioning during the collection of bathymetric, oceanographic, and geophysical data. The development of oceanographic and geophysical instrumentation capable of acquiring data at high spatial resolution requires high-accuracy positioning systems. In this work the authors carried out several tests on an oceanographic ship, applying the PPK (post-processed kinematic) approach to improve the position accuracy provided by GNSS. The large amount of data acquired allowed the authors to validate the performance of the proposed methodology: they compared the results obtained by PPK processing in the vertical domain with the data registered by several tide gauges located on the surrounding coast. The authors tackled several problems concerning both the data acquisition phase and the different vertical datums used by the tide gauges considered.

Finally, Alcaras et al. presented an interesting work on the post-processing of bathymetric data, titled "From electronic navigational chart data to sea-bottom models: kriging approaches for the Bay of Pozzuoli". An electronic navigational chart (ENC) is an electronic map produced by a national hydrographic office that can be used for navigational purposes. ENC data can be used to build a detailed bathymetric 3D model by applying an interpolation method. The authors analyse the performance of different interpolation methods based on ordinary and universal kriging applied to a specific case study, the Bay of Pozzuoli, employing 11 mathematical semi-variogram models to inspect the performance of these interpolators; cross-validation is used to evaluate the accuracy of each method. The research confirms the good performance of both kriging approaches for hydrographic purposes and demonstrates the relevance of the choice of the mathematical model used to build the semi-variogram.

I would like to conclude these introductory notes by thanking the authors for their interesting and valuable papers and the reviewers for their indispensable and qualified contributions. Furthermore, I would like to thank the Editor-in-Chief, Prof. Francesco Lamonaca, whose support has been fundamental in accomplishing this special issue. It was a great honour for me to act as guest editor for this special issue, and I believe that readers will find it useful and will be inspired by the themes and methodologies proposed to continue innovating in metrology for the sea.

Silvio Del Pizzo
Section Editor

Metrology in the early days of social sciences

ACTA IMEKO, ISSN: 2221-870X, June 2023, Volume 12, Number 2, 1-6

Clara Monteiro Vieira, Elisabeth Costa Monteiro
Pontifical Catholic University of Rio de Janeiro, Marquês de São Vicente, 225, Gávea, Rio de Janeiro, Brazil

Section: Research paper

Keywords: metrology; social science and humanities; Émile Durkheim; Max Weber

Citation: Clara Monteiro Vieira, Elisabeth Costa Monteiro, Metrology in the early days of social sciences, Acta IMEKO, vol. 12, no.
2, article 16, June 2023, identifier: IMEKO-ACTA-12 (2023)-02-16

Section Editor: Eric Benoit, Université Savoie Mont Blanc, France

Received July 10, 2022; in final form March 31, 2023; published June 2023

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This study was supported in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brasil (CAPES) – Finance Code 001.

Corresponding author: Elisabeth Costa Monteiro, e-mail: beth@puc-rio.br

Abstract: Recent studies have been endeavoring to overcome the challenges of ensuring reliable measurement results in the social sciences and humanities, given the complex characteristics of this scientific field. However, the literature indicates that the founding designers of sociology as an academic discipline expressed concerns regarding social measurements more than a century ago. Based on a literature review, the present work investigates metrological aspects already addressed in the early days of social science, focusing on the methodological conceptions of two of sociology's early canons, notably Max Weber and Émile Durkheim. The present study reveals that the approaches contemporaneously developed by the two social sciences co-founders present diverse but fundamentally complementary configurations, allowing a wide range of social phenomena to be analyzable. Although employing their own terminologies, both social scientists incorporated fundamental metrological concepts in their procedures' parameters, seeking to establish a single reference, whether using statistical analysis or determining measurement standards that resemble what is known today as reference material. The concern with applying metrological concepts since the early days of the creation of sociology as a science reinforces the need to invest extensive efforts in providing uniformity of measurements in this remarkably relevant field of application of measurement science.

1. Introduction

Providing metrological traceability of measurement results to the International System of Units (SI) is essential to ensure reliable and comparable quantity values in applications associated with all fields of knowledge. This aspect, however, has been a historical struggle since the early days, when efforts were directed to the elaboration of a metrological framework traditionally focused on promoting advances in the evolution of standards for measuring physical quantities. After the signing of the 'Convention du Mètre' (1875), the 1st 'Conférence Générale des Poids et Mesures' (CGPM), which took place in 1889, established international prototypes for the physical quantities of the length and mass units, respectively the metre and the kilogram, also incorporating the second as the unit of time, according to the astronomers' definition [1].

The high complexity of chemical and biological measurements, which also involve quantities belonging to the field of natural sciences, only much more recently received better attention and contributions to meet their metrological infrastructure demands [2]-[5]. Metrological authorities' first initiatives toward meeting demands for chemical measurements took place with the adoption, in 1971, of the unit mole (symbol mol) for the quantity amount of substance, at the 14th CGPM, and the creation of the 'Comité Consultatif pour la Quantité de Matière' (CCQM) in 1993 [1]. In turn, measurements of biological quantities, which are associated with even more challenging metrological demands, were addressed only at the 20th CGPM (1999) [2]-[4]. However, unlike what happened in the case of chemical quantities, the metrological demands associated with biomeasurements did not receive specific support through the creation of a dedicated consultative committee for the area. The responsibility for advancing the reliability of biomeasurements was absorbed by the CCQM, whose name was changed in 2014 to 'Consultative Committee for Amount of Substance: Metrology in Chemistry and Biology' [3].

Equally required and even more challenging is the global metrological framework needed to provide trustworthiness and comparability for measurements in the humanities and social sciences.
Nevertheless, this issue has not yet been addressed in CGPM resolutions. The sophistication of measurands associated with more complex areas involving chemical, biological, human and social measurements requires dealing with the development of certified reference materials, the creation of arbitrary units, and other alternative strategies in order to step forward to a metrological structure capable of harmonizing "nonphysical" measurements in all aspects of daily life.

Particularly regarding the human and social sciences, the influence of the subjective perceptions of researchers and research participants on the research process [6] and difficulties in defining concepts [7]-[9] are some of the elements of the complexity of studying social phenomena. Such intricacies hinder but do not prevent initiatives to ensure the reliability and comparability of measurement results in the social sciences and humanities. Recent studies have been endeavoring to meet the challenges associated with the complex characteristics of this scientific field [7]-[37]. Among the current academic initiatives, it is worth mentioning the successful incorporation of measurements in the social sciences among the investigations addressed by the International Measurement Confederation (IMEKO) [36]-[40], evidencing a massive effort by this scientific community to promote metrology in this field, including efforts to bring both physical and "nonphysical" measurement into a single, consistent concept system [34]-[37].

Despite the apparent novelty of the actions currently emerging to incorporate the concepts of metrology into the social sciences, aiming to contribute to robust and comparable measurement results in this field of application, the literature indicates that the founding architects of this science as a formal discipline, notably Max Weber and Émile Durkheim, already expressed concerns about the adequacy of the approaches employed for measuring social phenomena [41], [42]. This paper explores the fundamental aspects of the measurement methods proposed more than a century ago by those two founding authors of sociology as a scientific field. Moreover, the present article seeks to identify possible connections between these preliminary sociological approaches and current metrological conceptions.

2. Analytical framework of social science founders
Playing relevant roles in the foundation of sociology as a scholarly discipline, both Émile Durkheim and Max Weber devoted a portion of their work to the development of a methodology for the study of social phenomena and were especially interested in reliable strategies and comparable results for social measures. They presented, however, quite different approaches.

2.1. Émile Durkheim

Émile Durkheim (1858-1917, France) was the first to establish sociology as a formal academic discipline (University of Bordeaux, 1895) [43]. Influenced by the positivist current of thought, Durkheim turned to the natural sciences – especially bioscience – when performing social science investigations [42], [44]. He thought of society as an organism whose parts (or "organs") need to function well together to ensure the healthy functioning of the whole [42], [44].

Durkheim defined 'social facts' as his main object of study. 'Social facts' are ways of feeling, acting, and thinking identifiable by three main traits: generality, as they apply to all members of a given society; exteriority from each individual, since they were not created by any particular person's consciousness but are learned by people, generation after generation, lasting much longer than the human lifespan; and coercivity, by which individuals are constrained into specific actions, not necessarily in conformity with each person's intentions [42]. With a focus on analyzing social facts and their role in society, Durkheim addressed social phenomena from the macro-level. Just as it is impossible to capture what is going on in someone's mind by looking at each cell of their nervous system, Durkheim states that one would not be able to explain a social fact simply by looking at its manifestations at the individual level [42]. He emphasized, after all, that a whole is not just the sum of its parts but a specific reality formed by their association [42]. Therefore, in Durkheim's approach, social facts ought to be explained through other social facts [42].

With a marked tendency toward an empirical approach, Durkheim used statistical strategies extensively. By increasing the number of cases whenever possible, the variable-oriented model of comparative analysis performed by Durkheim aims to establish generalized connections between variables [45]. The pursuit of general patterns guided Durkheim's statistical approach to dealing with the time dimension from a transhistorical perspective [45]. Collective behaviors are then identified as the average effect of a variable by searching for statistical regularities of social facts [45]. Estimating the average effects of independent variables would allow investigating the 'effects-of-causes'. Therefore, with an emphasis on generalizations over details, Durkheim establishes causality relationships, associating a phenomenon (a social fact) with its cause or its effects (another social fact) [42]. For instance, in his famous study "Le suicide: étude de sociologie" [46], performed on three religious communities (Protestants, Catholics, and Jews), Durkheim demonstrated that a social fact, the suicide rate, presented a statistical correlation with a macro-level variable constituted by the degree of social integration, as illustrated in Figure 1.

Figure 1. Diagram of correlating connections between macro-level variables to analyze causality associations with suicide rates in diverse contexts within the Durkheim study.
The statistical analysis allowed Durkheim, for example, to associate suicide rates with aspects of the social context, whereas, contrary to what one might expect, there was no correlation with rates of psychopathology.

2.2. Max Weber

Max Weber (1864-1920, Germany) introduced, in 1919, a sociology department at the Ludwig Maximilian University of Munich, Germany [47]. In contrast to Durkheim's objectivity, Weber's approach prioritizes subjective interpretations of social events. These aspects are considered to provide the underlying sense of individuals' objective behaviors and thereby to explain them. Weber therefore addressed social phenomena from the micro-level, considering the subjectivity and meanings attributed to social actions [41], [45], [48]-[50].

The approach included the development of the so-called ideal type. This theoretical construct consists of an abstract model with an internal logic serving as a measuring standard for evaluating complex cases [45], [48], [51]. The strategy allows for understanding particular historical processes and individual motivations, considering as many variables as possible and analyzing the kind of relationship among them through the concept of elective affinities, which refers to their mutual contributions [52]. An in-depth understanding of a complex unity is thus reached by a case-oriented comparison concentrating on a small number of cases, with a large number of attributes interacting within long-lasting processes [45]. As a result, the roots of a specific event must be rebuilt when performing qualitative investigations involving historical comparisons using Weber's case-oriented strategy.

3. Metrology and the analytical framework of social sciences' founders

Current proposals for making psychosocial and physical properties measurable, ensuring the quality of measurements associated with both physical and psychosocial properties, consider object-relatedness (objectivity) and subject-independence (intersubjectivity) as essential attributes to be satisfied [34], [36]. Objectivity refers to the connection between the information obtained and the measured property. This characteristic requires an appropriate theory of the property, to render the definitional uncertainty insignificant, and demands a reduced influence from other phenomena, which renders the instrumental uncertainty negligible [35], [36]. For a uniform interpretation by different measurers, the measurement must be intersubjective, which depends on the metrological traceability of results to the same reference scale, if available. As described in [36], this quality dimension can be structured by developing item banks aimed at building reference scales associated with each of the properties, in combination with Rasch model fitting [33]-[36]. The Rasch model is an approach widely employed to measure latent traits in a variety of disciplines within the humanities, social sciences and health [21]-[33], [36], [53], [54].

Preceded by the studies developed throughout the 19th century by the German Karl Marx (1818-1883), also known as one of the founding creators of the social sciences, Émile Durkheim and Max Weber were the first to establish this field of research as a formal discipline.
The scientific contributions of these two contemporary researchers emerged at the end of the 19th century, after the memorable signing of the intergovernmental treaty of the Metre Convention, which took place in Paris in 1875 and established the Bureau International des Poids et Mesures (BIPM), an international organization in which the member states coordinate harmonization and advances in measurement science and measurement standards. In the case of the French sociologist Émile Durkheim, this historical space may have paved the way for the interest in the quality of measurement evidenced in his work. Weber's contributions to social science measurements, in turn, emerged after the creation in Berlin, in 1887, of the first national metrology institute, the Physikalisch-Technische Reichsanstalt (PTR), later renamed the Physikalisch-Technische Bundesanstalt (PTB). As Berlin was the historical space experienced by Max Weber, this metrological context may also have influenced the methodological approach developed by this social science co-founder.

The efforts invested in developing methodologies that sought to achieve comparable results from the measurement of social phenomena were a distinctive feature of Durkheim's and Weber's scientific production. Their proposals, however, were characterized by quite different approaches. Their methods were not aimed at the same objects of study, which can easily lead to a false impression of divergence. Instead, their methodological approaches were complementary, dealing with analyses carried out in two dimensions: macro-sociological for Durkheim and micro-sociological for Weber.

Driven by the positivist influence, Durkheim built analogies between the natural and the social sciences. It is worth mentioning that both scientific fields share metrological challenges that still linger at the present time. With highly complex measurands, the measurement requirements framework in such fields of study is either not yet adequately addressed or not addressed at all. Interestingly, in his 1894 book "Les règles de la méthode sociologique" [42], Durkheim already acknowledged such challenges, which sociology has in common with biology, but to a greater extent. As he states (p. 39, translated from the French) [42]: "All these problems, which even in biology are far from being clearly resolved, still remain, for the sociologist, shrouded in mystery."

Dealing with general analyses involving a large number of social cases but limited to a few variables, Durkheim employed a quantitative approach with statistical techniques, including correlation procedures to define the strength of the association between different social facts and regression analysis to explore the impact of a change in one social variable relative to another, also predicting values of the random social variable based on the values of the fixed social variable. The measurement reference consisted of the average mathematical relationship between variables [42]. Durkheim pursued stable objects as a necessary condition for objectivity. The more detached the "social facts" are from the "individual facts" by which they manifest themselves, the more objectively they can be represented as a constant, thus eliminating subjective interference, as he states (translated from the French) [42]: "It may be laid down as a principle that social facts lend themselves more readily to objective representation the more completely they are detached from the individual facts by which they are manifested.
Indeed, a sensation is more objective the more fixed the object to which it relates; for the condition of all objectivity is the existence of a constant and identical point of reference to which the representation can be related and which makes it possible to eliminate everything variable, and hence subjective, in it."

Durkheim's quest for objectivity can be considered analogous to a pursuit of minimizing the definitional and instrumental uncertainties of social measurements.

As for Max Weber's methodology, the social properties under analysis were conceived in a micro-social dimension, concentrating on a few cases but encompassing a large number of variables, thus leading to a significant increase in the complexity of the measurand compared with Durkheim's simplified general model. The reference in Weber's approach is built from an abstract model, consisting of a synthetic "ideal construct" that encompasses multiple essential attributes. This strategy resembles the production of reference materials for chemical or biological measurements, areas for which the realization of SI units is still unavailable. In these fields, it is possible to provide metrological traceability by developing reference materials with sufficient homogeneity and stability regarding specified properties, established to be fit for their intended use in the measurement or examination of nominal properties [55]. Weber's approach considers cases as wholes, constituted of variables that cannot be disassociated. Like the procedure using certified reference materials as a "primary reference standard", Weber's conception claims that the produced ideal types should be made available as a reference for further investigations of other cases, which would enable uniformity of interpretation through the intersubjectivity of measurement results.

Both Durkheim's and Weber's strategies are concerned with establishing a well-defined reference to provide the measurement standard necessary to enable the comparability of results, considering the specific characteristics associated with their main objects of study. These features denote a concern with ensuring the intersubjectivity of measurement results, which, in turn, can be provided only by establishing global metrological traceability of the measurement results to reference properties [34]-[37].

Furthermore, these preliminary approaches developed at the foundations of the social sciences have been applied up to the present. Durkheim's suicide assessment has recently been implemented and validated in gerontological practice [16], [17]. Recent studies regarding metrology for the social sciences have also addressed ideas from Durkheim's immediate predecessor, Gabriel Tarde (1843-1904) [18], [19]. Considering quantification in the psychosocial sciences more demanding than in the natural sciences, Tarde qualified this measurement challenge as a new level of intellectual achievement [18]. As for Max Weber, his concept of the "ideal type" came to serve as the basis for later-developed measurement models addressing psychosocial properties [20]. That was the case of the Guttman scale, with its proposal of "perfect scale" requirements to yield invariant measurement. Weber's concept of the "ideal type" is also linked to principles underlying the Rasch measurement approach, a "probabilistic realization" of the Guttman scale, as described in the literature [20].
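As a point of reference for readers (this formulation is standard in the Rasch literature [32] and is added here for illustration, not quoted from the original text), the dichotomous Rasch model expresses the probability that person $n$ succeeds on item $i$ as a function of the person's ability $\theta_n$ and the item's difficulty $b_i$:

$P(X_{ni} = 1) = \dfrac{e^{\theta_n - b_i}}{1 + e^{\theta_n - b_i}}$ .

Because the comparison of two persons' abilities does not depend on which items are used, the model yields the kind of invariant, intersubjective comparisons discussed above, which is the sense in which it can be read as a probabilistic realization of Guttman's "perfect scale".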
According to Duncan (1984), "social measurement should be brought within the scope of historical metrology" [56]. Duncan's suggestion may become a reality as soon as CGPM resolutions start addressing the demands for the development of a global metrological infrastructure aimed at ensuring the reliability and comparability of measurement results in the humanities and social sciences, consequently promoting the integration of this complex scientific field into the International System of Units [3].

4. Conclusion

Despite never having been formally addressed by international metrological organizations, the social sciences were established as a discipline shortly after the creation of the intergovernmental metrological structure in 1875 with the signing of the Metre Convention. The present study explored the concepts potentially associated with the reliable framework provided by metrology within the preliminary measuring strategies developed by two founding designers of the social sciences.

Sharing with chemical and biological properties the challenges of measurement complexity and the unavailability of SI-unit traceability for measurement results, the founding authors of the social sciences embodied, from the field's early years, ideas close to metrological concepts in order to ensure comparability as much as possible. The two major methods developed when the social sciences discipline was born conceived different levels of measurement dimensions but equally sought to define a robust measurement standard to be employed for comparative analysis.

Émile Durkheim's objective and quantitative approach was directed toward generalizations, using statistics to study numerous cases, focusing on a few variables, defining the reference by a mathematical-statistical average type, and identifying pathological cases according to their corresponding deviations. Such a strategy points toward the possibility of stepping forward to the measurement quality attribute of objectivity, minimizing the definitional and instrumental measurement uncertainty components. In turn, Max Weber's subjective and qualitative method examined the social phenomenon from a micro-dimensional perspective, dealing with few cases and multiple variables, by means of which it aimed at the highly complex character of social events. 'Ideal type' constructs, defined as standard references, would then embody the essential social variables for the appropriate description of a social phenomenon. In this sense, Weber's ideal type can be interpreted as a reference material designed to allow the comparability of the results obtained when a specific construct is evaluated by several researchers, which indicates a tendency towards the intersubjectivity attribute of measurement quality.

The efforts implemented by the founding architects of the social sciences from the earliest moments, when the field was still being established as a science, reinforce the present calls for scientific advances to meet the multiple demands for metrological traceability stemming from all areas of knowledge. Complying with these requests constitutes an essential endeavor for establishing the worldwide uniformity of measurements.

Acknowledgement

The authors acknowledge the support provided by the Brazilian agency CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) – Brazil – Finance Code 001.

References

[1] BIPM, The International System of Units, 9th ed., Sèvres: International Bureau of Weights and Measures (2019). Online [accessed 24 April 2023] https://www.bipm.org/en/publications/si-brochure

[2] E.
Costa Monteiro, L. F. Leon, Metrological reliability of medical devices, J. Phys.: Conf. Ser. 588 (2015), p. 012032. DOI: 10.1088/1742-6596/588/1/012032

[3] E. Costa Monteiro, Bridging the boundaries between sciences to overcome measurement challenges, Measurement: Interdisciplinary Research and Perspectives 15(1) (2017), pp. 34-36. DOI: 10.1080/15366367.2017.1358974

[4] E. Costa Monteiro, Magnetic quantities: healthcare sector measuring demands and international infrastructure for providing metrological traceability, TMQ – Techniques, Methodologies and Quality 1 (2019), pp. 42-50. Online [accessed 24 April 2023] https://publicacoes.riqual.org/wp-content/uploads/2023/01/edesp1_19_42_50.pdf

[5] A. Bristow, Assignment of quantities to biological medicines: an old problem re-discovered, Philos. Trans. Royal Society A: Mathematical, Physical and Engineering Sciences 369 (2011), pp. 4004-4013. DOI: 10.1098/rsta.2011.0175

[6] R. DaMatta, Relativizando: uma introdução à antropologia social, Rocco, Rio de Janeiro, 2010, ISBN 8532501540. [In Portuguese]

[7] T. L. Kelley, Interpretation of Educational Measurements, Macmillan, New York, 1927.

[8] T. Salzberger, S. Cano, L. Abetz-Webb, E. Afolalu, C. Chrea, R. Weitkunat, K. Fagerström, Addressing traceability in social measurement: establishing a common metric for dependence, Journal of Physics: Conf. Ser. 1379(1) (2019), art. no. 012024. DOI: 10.1088/1742-6596/1379/1/012024

[9] P. H. Pollock III, B. C. Edwards, The definition and measurement of concepts, in: The Essentials of Political Analysis, CQ Press, 2019, ISBN 9781506379593, pp. 1-33.

[10] L. Mari, E. Ugazio, Preliminary analysis of validation of measurement in soft systems, J. Physics: Conf. Ser. 238(1) (2010), art. no. 012026. DOI: 10.1088/1742-6596/238/1/012026

[11] W. P. Fisher Jr, A. J. Stenner, Metrology for the social, behavioral, and economic sciences, National Science Foundation Social, Behavioral, and Economic Sciences white paper (2011). Online [accessed 24 April 2023] http://www.truevaluemetrics.org/dbpdfs/metrics/william-p-fisher/fisherjr_william_metrology-for-the-social-behavioral-and-economic-sciences.pdf

[12] L. Mari, P. Carbone, D. Petri, Fundamentals of hard and soft measurement, in: Modern Measurements: Fundamentals and Applications, A. Ferrero, D. Petri, P. Carbone, M. Catelani (editors), Wiley-IEEE Press, 2015, ISBN 978-1-118-17131-8, pp. 203-262.

[13] M. Djuric, J. Filipovic, S. Komazec, Reshaping the future of social metrology: utilizing quality indicators to develop a complexity-based scientific human and social capital measurement model, Social Indicators Research 148(2) (2020), pp. 535-567. DOI: 10.1007/s11205-019-02217-6

[14] T. Salzberger, S. Cano, L. Abetz-Webb, E. Afolalu, C. Chrea, R. Weitkunat, J. Rose, Addressing traceability of self-reported dependence measurement through the use of crosswalks, Measurement 181 (2021), art. no. 109593. DOI: 10.1016/j.measurement.2021.109593

[15] M. Delmastro, On the Measurement of Social Phenomena: A Methodological Approach, Springer International Publishing, 2021, ISBN 978-3030775353.

[16] S. M. Marson, R. M. Powell, Suicide among elders: a Durkheimian proposal, International Journal of Aging and Later Life 6(1) (2011), pp. 59-79. DOI: 10.3384/ijal.1652-8670.116159

[17] S. M. Marson, M. Hong, J.
Bullard, The measurement of suicide assessment and the development of a treatment strategy for elders: a Durkheimian approach, Journal of Sociology and Social Work 5(1) (2017), pp. 99-114. DOI: 10.15640/jssw.v5n1a10

[18] W. P. Fisher Jr, Almost the Tarde model?, Rasch Measurement Transactions 28(1) (2014), pp. 1459-1461. Online [accessed 24 April 2023] https://www.rasch.org/rmt/rmt281.pdf

[19] W. P. Fisher Jr, The central theoretical problem of the social sciences, Rasch Measurement Transactions 28(2) (2014), pp. 1464-1466. Online [accessed 24 April 2023] http://www.rasch.org/rmt/rmt282.pdf

[20] G. Engelhard Jr, Invariant Measurement: Using Rasch Models in the Social, Behavioral, and Health Sciences, Routledge, 2013, ISBN 978-0415871259.

[21] N. Kærgård, Georg Rasch and modern econometrics, presented at the Seventh Scandinavian History of Economic Thought Meeting, Molde University College, Molde, Norway, 2003.

[22] W. P. Fisher Jr, Invariance and traceability for measures of human, social, and natural capital: theory and application, Measurement 42(9) (2009), pp. 1278-1287. DOI: 10.1016/j.measurement.2009.03.014

[23] H. Zhong, J. Xu, A. Piquero, Internal migration, social exclusion, and victimization: an analysis of Chinese rural-to-urban migrants, J. Res. Crime & Delinquency 54(4) (2017), pp. 479-514. DOI: 10.1177/0022427816676861

[24] J. Melin, S. J. Cano, A. Flöel, L. Göschel, L. R. Pendrill, Construct specification equations: 'recipes' for certified reference materials in cognitive measurement, Measurement: Sensors 18 (2021), art. no. 100290. DOI: 10.1016/j.measen.2021.100290

[25] L. Pendrill, N. Petersson, Metrology of human-based and other qualitative measurements, Measurement Science and Technology 27(9) (2016), art. no. 094003. DOI: 10.1088/0957-0233/27/9/094003

[26] T. G. Bond, C. Fox, Applying the Rasch Model: Fundamental Measurement in the Human Sciences, Psychology Press, 2013, ISBN 9780429030499.

[27] N. S. da Rocha, E. Chachamovich, M. P. de Almeida Fleck, A. Tennant, An introduction to Rasch analysis for psychiatric practice and research, Journal of Psychiatric Research 47(2) (2013), pp. 141-148. DOI: 10.1016/j.jpsychires.2012.09.014

[28] J. Uher, Measurement in metrology, psychology and social sciences: data generation traceability and numerical traceability as basic methodological principles applicable across sciences, Quality & Quantity 54(3) (2020), pp. 975-1004. DOI: 10.1007/s11135-020-00970-2

[29] B. D. Wright, A history of social science measurement, Educational Measurement: Issues and Practice 16(4) (1997), pp. 33-45. DOI: 10.1111/j.1745-3992.1997.tb00606.x

[30] W. P. Fisher Jr, A. J. Stenner, Theory-based metrological traceability in education: a reading measurement network, Measurement 92 (2016), pp. 489-496. DOI: 10.1016/j.measurement.2016.06.036

[31] J. A. Baird, D. Andrich, T. N. Hopfenbeck, G. Stobart, Metrology of education, Assessment in Education: Principles, Policy & Practice 24(3) (2017), pp. 463-470. DOI: 10.1080/0969594x.2017.1337628

[32] G. Rasch, Probabilistic Models for Some Intelligence and Attainment Tests, University of Chicago Press, Chicago, 1980, ISBN 978-0226705538.

[33] G. Rasch, On general laws and the meaning of measurement in psychology, Proc. of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, United States, 1961, pp. 321-334.

[34] L. Pendrill, Quality Assured Measurement: Unification Across Social and Physical Sciences, Springer, 2020, ISBN 9783030286972.

[35] A. Maul, L. Mari, M.
Wilson, Intersubjectivity of measurement across the sciences, Measurement 131 (2019), pp. 764-770. DOI: 10.1016/j.measurement.2018.08.068

[36] L. Mari, M. Wilson, A. Maul, Measurement Across the Sciences, Springer Series in Measurement Science and Technology, 2021, ISBN 9783030655587.

[37] L. Mari, Is our understanding of measurement evolving?, Acta IMEKO 10(4) (2021), pp. 209-213. DOI: 10.21014/acta_imeko.v10i4.1169

[38] M. Wilson, W. Fisher, Preface, Journal of Physics: Conference Series 772 (2016), 011001. DOI: 10.1088/1742-6596/772/1/011001

[39] E. Costa Monteiro, Measurement science challenges in natural and social sciences, IOP Conf. Series: Journal of Physics: Conf. Series 1044 (2018), 011001. DOI: 10.1088/1742-6596/1044/1/011001

[40] M. Wilson, W. Fisher, Preface of the special issue, Psychometric Metrology, Measurement 145 (2019), p. 190. DOI: 10.1016/j.measurement.2019.05.077

[41] M. Weber, Methodology of Social Sciences (1903-1917), Routledge, 2017, ISBN 978-1138528048.

[42] E. Durkheim, Les règles de la méthode sociologique (1894), Ultraletters, 2013, ISBN 978-2930718408. [In French]

[43] H. Alpert, Emile Durkheim and His Sociology, Columbia University Press, 1939, ISBN 9780231909983.

[44] E. Durkheim, De la division du travail social (1893), Presses Universitaires de France, 2007, ISBN 978-2130563297. [In French]

[45] D. della Porta, M. Keating, Approaches and Methodologies in the Social Sciences: A Pluralist Perspective, Cambridge University Press, 2008, ISBN 978-0521709668.

[46] E. Durkheim, Le suicide: étude de sociologie (1897), Hachette Livre BNF, 2013, ISBN 978-2012895508. [In French]

[47] A. Anter, S. Breuer, Max Webers Staatssoziologie: Positionen und Perspektiven, Nomos Verlagsgesellschaft, 2007, ISBN 9783832927738. [In German]

[48] M. Llanque, Max Weber, Wirtschaft und Gesellschaft. Grundriss der verstehenden Soziologie, Tübingen 1922, in: Schlüsselwerke der Politikwissenschaft, S. Kailitz (editor),
VS Verlag für Sozialwissenschaften, 2007, ISBN 978-3-531-90400-9, pp. 489-493. [In German]

[49] R. Holton, Max Weber and the interpretative tradition, in: Handbook of Historical Sociology, G. Delanty, E. F. Isin (editors), Sage, London, 2003, ISBN 978-0761971733, pp. 27-38.

[50] M. Weber, The Protestant Ethic and the Spirit of Capitalism (1905), Merchant Books, 2013, ISBN 9781603866040.

[51] L. A. Coser, Masters of Sociological Thought: Ideas in Historical and Social Context, 2nd ed., Harcourt Brace Jovanovich, New York, 1977, ISBN 0155551302.

[52] E. Klüger, Análise de correspondências múltiplas: fundamentos, elaboração e interpretação, BIB Rev. Bras. Inf. Bibl. em Ciências Sociais (86) (2018), pp. 68-97. [In Portuguese] Online [accessed 24 April 2023] https://bibanpocs.emnuvens.com.br/revista/article/view/452

[53] S. F. Suglia, L. Ryan, R. Wright, Creation of a community violence exposure scale: accounting for what, who, where, and how often, Journal of Traumatic Stress 21(5) (2008), pp. 479-486. DOI: 10.1002/jts.20362

[54] S. L. Belvedere, N. A. de Morton, Application of Rasch analysis in health care is increasing and is applied for variable reasons in mobility instruments, Journal of Clinical Epidemiology 63(12) (2010), pp. 1287-1297. DOI: 10.1016/j.jclinepi.2010.02.012

[55] JCGM 200:2012, International Vocabulary of Metrology – Basic and General Concepts and Associated Terms, 3rd ed., Paris: Joint Committee for Guides in Metrology, 2012. Online [accessed 24 April 2023] https://www.bipm.org/en/committees/jc/jcgm/publications

[56] O. D. Duncan, Notes on Social Measurement: Historical and Critical, Russell Sage Foundation, New York, 1984, pp. 38-39.

A2CM: a new multi-agent algorithm

ACTA IMEKO, ISSN: 2221-870X, September 2021, Volume 10, Number 3, 28-35

Gabor Paczolay, Istvan Harmati
Budapest University of Technology and Economics, Magyar tudósok körútja 2, 1117 Budapest, Hungary

Section: Research paper

Keywords: reinforcement learning, multiagent learning

Citation: Gabor Paczolay, Istvan Harmati, A2CM: a new multi-agent algorithm, Acta IMEKO, vol. 10, no. 3, article 6, September 2021, identifier: IMEKO-ACTA-10 (2021)-03-06

Section Editor: Bálint Kiss, Budapest University of Technology and Economics, Hungary

Received January 15, 2021; in final form August 13, 2021; published September 2021

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Corresponding author: Gabor Paczolay, e-mail: paczolay.gabor@gmail.com

Abstract: Reinforcement learning is currently one of the most researched fields of artificial intelligence. New algorithms are being developed that use neural networks to compute the selected action, especially for deep reinforcement learning. One subcategory of reinforcement learning is multi-agent reinforcement learning, in which multiple agents are present in the world. As it involves the simulation of an environment, it can be applied to robotics as well. In our paper, we use our modified version of the advantage actor–critic (A2C) algorithm, which is suitable for multi-agent scenarios. We test this modified algorithm on our testbed, a cooperative–competitive pursuit–evasion environment, and later we address the problem of collision avoidance.

1. Introduction

Reinforcement learning is one of the most researched fields within the scope of artificial intelligence. Newer algorithms are continually being developed to achieve successful learning in more situations or with fewer samples. In reinforcement learning, a new challenge arises when we take other agents into consideration; this research field is called 'multi-agent learning'.
Dealing with other agents – whether they are cooperative, competitive or a mixture of both – brings the learning model closer to a real-world scenario: in real life, no agent acts alone, and even random counteractions can be treated as 'counteractions of nature'. In our work, we optimised the synchronous actor–critic algorithm to perform better in cooperative multi-agent scenarios (those in which agents help each other).

Littman [1] utilised the minimax-Q algorithm, a zero-sum multi-agent reinforcement learning algorithm, and applied it to a simplified version of a robotic soccer game. Hu and Wellman [2] created the Nash-Q algorithm and used it on a small gridworld example to demonstrate the results. Bowling [3] varied the learning rate of the training process to speed it up while ensuring convergence. Later, he applied the win-or-learn-fast methodology to an actor–critic algorithm to improve its multi-agent capabilities [4]. Reinforcement learning advanced significantly when neural networks gained popularity and convergence was improved. Mnih et al. [5] successfully applied deep reinforcement learning to playing Atari games by feeding multiple frames at once and utilising experience replay to ensure convergence. Later, deep reinforcement learning was applied to multi-agent systems, such as independent multi-agent reinforcement learning. Foerster et al. [6] stabilised experience replay for independent Q-learning using fingerprints. Omidshafiei et al. [7] utilised decentralised hysteretic deep recurrent Q-networks for partially observable multi-task multi-agent reinforcement learning problems.

Figure 1. Markov decision process.

Multiple advancements have also been made in the field of centralised learning and decentralised execution. Foerster et al. [8] created counterfactual multi-agent policy gradients to solve the issue of multi-agent credit assignment. Peng et al. [9] created multi-agent bidirectionally-coordinated nets with an actor–critic hierarchy and recurrent neural networks for communication. Sunehag et al. [10] utilised value-decomposition networks with common rewards and Q-function decomposition. Rashid et al. [11] utilised QMIX with value function factorisation, Q-function decomposition and a feed-forward neural network, achieving better performance than the former value-decomposition approach. Lowe et al. [12] improved the deep deterministic policy gradient by altering the critic to contain all actions of all agents, thus making the algorithm capable of processing more multi-agent scenarios. Shihui et al.
[13] improved upon the previous MADDPG algorithm, increasing its performance in zero-sum competitive scenarios by utilising a method based on minimax-Q learning. Casgrain et al. [14] upgraded the deep Q-network algorithm utilising methods based on Nash equilibria, making it capable of solving multi-agent environments.

Benchmarks have also been created to analyse the performance of various algorithms in multi-agent environments. Vinyals et al. [15] modified the StarCraft II game to make it a learning environment. Samvelyan et al. [16] also pointed to StarCraft as a multi-agent benchmark, but with a focus on micromanagement. Liu et al. [17] introduced a multi-agent soccer environment with continuous simulated physics. Bard et al. [18] reached a new frontier with the cooperative Hanabi game benchmark.

Cooperative multi-agent reinforcement learning and the proposed algorithm are usable in many scenarios in robotics. As our algorithm is decentralised, it can be installed in the robots themselves without any central command centre. It might be useful in exploration or localisation tasks, in which the use of multiple agents would significantly speed up the process. Our testbed can be considered a simplified version of a localisation task, as the pursuer robots are trying to approach and measure a non-cooperative moving object. For proper use in robotics, a well-prepared simulation of the robots and the environment is required, in which thousands of episodes can be run for learning.

In our work, we modified the existing advantage actor–critic (A2C) algorithm to make it better suited to multi-agent scenarios by creating a single-critic version of the algorithm. We then tested this modified A2CM algorithm on our cooperative–competitive pursuit–evasion testbed. In the following section, we explain the theoretical background of our work. Then, the experiments themselves and the testbed are introduced. We continue by presenting the results and end with our conclusions and suggestions for future work on the topic.

2. Theoretical background

2.1. Markov decision processes

A Markov decision process is a mathematical framework for modelling decision making, as shown in Figure 1. In a Markov decision process there are states, selectable actions, transition probabilities and rewards [1]. At each timestep, the process starts in a state $s$ and selects an action $a$ from the available action space. It receives a corresponding reward $r$ and then finds itself in a state $s'$ given by the transition probability $P(s, s')$. A process is said to be Markovian if

$P(a_t = a \mid s_t, a_{t-1}, \ldots, s_0, a_0) = P(a_t = a \mid s_t)$ , (1)

which means that a state transition depends only on the previous state and the current action. Thus, only the last state and action are considered when deciding on the next state. In a Markov decision process, the agents try to find a policy that maximises the sum of discounted expected rewards. The standard solution uses an iterative search for a fixed point of the Bellman equation:

$v(s, \pi^*) = \max_a \big( r(s, a) + \gamma \sum_{s'} p(s' \mid s, a) \, v(s', \pi^*) \big)$ . (2)
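As an illustration (ours, not from the paper), the fixed point of equation (2) can be found by tabular value iteration when the transition probabilities and rewards are known; the array shapes below are illustrative assumptions.

import numpy as np

def value_iteration(P, R, gamma=0.99, tol=1e-8):
    # P: transition probabilities, shape (S, A, S), with P[s, a, s2] = p(s2 | s, a)
    # R: immediate rewards, shape (S, A)
    n_states, n_actions = R.shape
    v = np.zeros(n_states)
    while True:
        # Bellman optimality backup of equation (2)
        q = R + gamma * (P @ v)       # shape (S, A)
        v_new = q.max(axis=1)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, q.argmax(axis=1)   # optimal values and a greedy policy
        v = v_new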
Figure 2. The simulation environment. The squares represent the controlled agents, while the circle represents the fleeing enemy. The goal is to catch the enemy by moving horizontally or vertically.

Algorithm 1: A2CM
    Initialise model: initialise N+1 hidden and N+1 output layers (1 value + N action),
        i.e. 4 different networks in one model (1 critic + 3 actors)
    Input: number of updates, batch size
    for number of updates do
        for batch size do
            calculate next actions a based on the previous state
            take actions a, get terminal-state booleans and new rewards
            store the actions, terminal-state booleans, calculated values, rewards and states
        end for
        calculate returns based on (13)
        calculate advantages based on (12)
        update the critic neural network on the observed states and corresponding returns:
            the loss is the mean squared error between the returns and the calculated values
        update the actor neural networks on the observed states, taken actions and advantages:
            the loss is the policy loss (weighted sparse categorical cross-entropy)
            minus the entropy loss (cross-entropy of the policy over itself)
    end for

2.2. Reinforcement learning

When the state transition probabilities or the rewards are unknown, the Markov decision process problem becomes a reinforcement learning problem. In this group of problems, the agent tries to build a model of the world around itself via trial and error.

One type of reinforcement learning is value-based reinforcement learning. In this case, the agent tries to learn a value function that assigns a value to states, or to actions taken from states. These values correspond to the reward achieved by reaching a state or taking a specific action from a state. The most commonly used type of value-based reinforcement learning is Q-learning [2], in which so-called Q-values are estimated for each state–action pair of the world. These Q-values represent the value of choosing a specific action in a state, meaning the highest reward the agent could possibly obtain by taking that action. The Q-value update equation is

$Q(s, a) \leftarrow (1 - \alpha) \, Q(s, a) + \alpha \, \big( r + \gamma \max_{a'} Q(s', a') \big)$ , (3)

where $\alpha$ is the learning rate and $\gamma$ is the discount factor for the reward. The agent always selects an action that maximises the Q-function for the state it is in.

Another type of reinforcement learning is policy-based reinforcement learning, in which actions are derived as a function of the state itself. The most common policy-based method is the policy gradient approach [19]. Here, the agent tries to maximise the expected reward by following the policy $\pi_\theta$, parametrised by $\theta$, based on the total reward for a given trajectory $r(\tau)$. Thus, the cost function of the parameters $\theta$ is

$J(\theta) = \mathbb{E}_{\pi_\theta}[r(\tau)]$ . (4)

The parameters are then tuned based on the gradient of the cost function:

$\theta_{t+1} = \theta_t + \alpha \, \nabla_\theta J(\theta_t)$ . (5)

The advantages of policy-based methods include the ability to map environments with huge or even continuous action spaces and to handle stochastic environments. However, with these methods there is also a much greater possibility of getting stuck in a local maximum rather than finding the optimal policy.

Apart from the aforementioned model-free reinforcement learning methods, there is also model-based reinforcement learning, in which a model is built (or just tuned) to perform the learning. This is more sample-efficient than model-free methods and thus requires fewer samples to perform equally well, but it is very dependent on the particular model. It can be combined with model-free methods to achieve better results, as in [20].
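To make the update (3) concrete, here is a minimal tabular sketch (our illustration, not code from the paper); the epsilon-greedy action choice is a common companion to this update and is an assumption on our part.

import numpy as np

def epsilon_greedy(Q, s, epsilon=0.1):
    # Explore with probability epsilon, otherwise exploit the current Q-values.
    if np.random.rand() < epsilon:
        return np.random.randint(Q.shape[1])
    return int(np.argmax(Q[s]))

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # Equation (3): Q(s,a) <- (1 - alpha) Q(s,a) + alpha (r + gamma max_a' Q(s',a'))
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] = (1.0 - alpha) * Q[s, a] + alpha * target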
2.3. Multi-agent systems and Markov games

A matrix game is a stochastic framework in which each player selects an action and receives an immediate reward based on its own action and those of the other agents [1]. These games are called 'matrix games' because a game can be written as a matrix, with the first two players selecting actions along the rows and columns of the matrix. Unlike Markov decision processes, these games have no states. Markov games, or stochastic games, are extensions of Markov decision processes to multiple agents. They can also be thought of as extensions of matrix games to multiple states: in a Markov game, each state has its own payoff matrix. The next state is determined by the joint actions of the agents. A game is Markovian if

$P(a_i^t = a_i \mid s^t, a_i^{t-1}, \ldots, s^0, a_i^0) = P(a_i^t = a_i \mid s^t)$ , (6)

so the next state depends only on the current state and the current actions taken by all agents.

2.4. Deep reinforcement learning

A reinforcement learning algorithm is called 'deep' if it is assisted by a neural network.

Figure 3. An example of catching the randomly moving opponent.
Figure 4. An example of catching the fleeing opponent.

A neural network is a function approximator built from (sometimes billions of) artificial neurons. An artificial neuron, modelled on the real neurons of the brain, has the following equation:

$y = \mathrm{act}\big( \sum w x + b \big)$ , (7)

where $x$ is the input vector, $w$ is the weight vector, $b$ is the bias and $\mathrm{act}()$ is the activation function that introduces nonlinearity into an otherwise linear system. The parameters ($w$ and $b$) are tuned with backpropagation, calculating the partial derivative of the error with respect to every parameter, propagated from the final error back towards the input.

The choice of activation function is important in deep learning because of vanishing gradients: when many layers are stacked upon each other, the gradients of the upper layers become too small during backpropagation, and those layers are then difficult to train. A basic activation function is the sigmoid, or logistic, function:

$y = \dfrac{1}{1 + \mathrm{e}^{-x}}$ . (8)

A common activation function in deep learning is the rectified linear unit (ReLU) [21], whose gradients vanish less and which is therefore easier to train:

$y = x$ if $x > 0$; $\quad y = 0$ if $x \le 0$ . (9)

For multi-class classification, another activation function is used: the softmax. When it is used as the last layer, the probabilities of all of the output neurons add up to exactly 1; thus, in reinforcement learning, it is useful as the probability distribution over the possible actions:

$y_i = \dfrac{\mathrm{e}^{x_i}}{\sum_j \mathrm{e}^{x_j}}$ . (10)

Deep reinforcement learning algorithms have several advantages over traditional reinforcement learning algorithms. First of all, they are not based on a state table, as the states are approximated (which is much more robust than using linear function approximators). This allows many more states to be mapped and even allows for continuous states. However, deep methods are more prone to diverging, and thus many optimisations have been developed for deep reinforcement learning algorithms to provide better convergence.
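The three activation functions (8)-(10) can be written in a few lines; this is our illustrative sketch, with the max-subtraction in the softmax added as a standard numerical-stability measure not mentioned in the text.

import numpy as np

def sigmoid(x):
    # Equation (8): squashes any real input into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Equation (9): identity for positive inputs, zero otherwise
    return np.maximum(x, 0.0)

def softmax(x):
    # Equation (10): normalised exponentials that sum to 1;
    # subtracting max(x) avoids overflow without changing the result
    z = np.exp(x - np.max(x))
    return z / z.sum()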
deep reinforcement learning algorithms have several advantages compared to traditional reinforcement learning algorithms. first of all, they are not based on a state table, as the states are approximated (which is much more robust than using linear function approximators). this allows many more states to be mapped and even allows for continuous states. however, deep methods are more prone to diverging, and thus, many optimisations have been developed for deep reinforcement learning algorithms to provide better convergence.

2.5. actor–critic

an actor–critic system combines value-based and policy-based reinforcement learning. in these systems, there are two distinct parametrised networks: the critic, which estimates a value function (as in value-based reinforcement learning), and an actor, which updates the policy network in the direction suggested by the critic (as in policy-based reinforcement learning). actor–critic algorithms follow an approximate policy gradient:

$\nabla_\theta J(\theta) \approx \mathbb{E}_{\pi_\theta}\left[ \nabla_\theta \log \pi_\theta(s, a)\, Q_w(s, a) \right]$ ,
$\Delta\theta = \alpha\, \nabla_\theta \log \pi_\theta(s, a)\, Q_w(s, a)$ . (11)

approximating the policy gradient introduces bias to the system. a biased policy gradient may not find the right solution, but if we choose the value function approximation carefully, then we can avoid introducing any bias. actor–critic systems generally perform better than regular reinforcement learning algorithms. the critic network ensures that the system does not get stuck in a local maximum; meanwhile, the actor network enables the mapping of environments with huge action spaces and provides better convergence [19].

2.6. the a2c algorithm

a2c stands for synchronous advantage actor–critic. it is a one-environment-at-a-time derivation of the asynchronous advantage actor–critic (a3c) algorithm [22], which processes multiple agent–environments simultaneously. in that algorithm, multiple workers update a global value function, thus exploring the state space effectively. however, the synchronous advantage actor–critic provides better performance than the asynchronous model.

figure 5. the performance of the original a2c algorithm on our benchmark.
figure 6. performance of the modified a2c algorithm on our benchmark.
figure 7. performance of the original a2c algorithm on our benchmark with collision (with terminating at collision).

the advantage function is a method that significantly reduces the variance of the policy gradient by subtracting a baseline from the cumulative reward, yielding smaller gradients; thus, it provides much better convergence than regular q-values. it has the following equation:

$A(s, a) = Q(s, a) - V(s)$ . (12)

returns are calculated using the equation

$G_t = r_t + \gamma\, G_{t+1} \cdot (1 - T_t)$ , (13)

where $G$ is the return, $r_t$ is the reward at time $t$, $\gamma$ is the discount factor and $T_t$ indicates whether the step at time $t$ is a terminal state.
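equations (12) and (13) translate directly into a backward pass over a collected batch; the following numpy sketch is illustrative (function and variable names are ours, not the authors' code):

import numpy as np

def compute_returns(rewards, dones, last_value, gamma=0.99):
    # equation (13), applied backwards over the batch:
    # G_t = r_t + gamma * G_{t+1} * (1 - T_t)
    returns = np.zeros_like(rewards)
    g = last_value  # bootstrap from the value of the last observed state
    for t in reversed(range(len(rewards))):
        g = rewards[t] + gamma * g * (1.0 - dones[t])
        returns[t] = g
    return returns

def compute_advantages(returns, values):
    # equation (12): A(s, a) = Q(s, a) - V(s),
    # with the returns standing in for Q(s, a)
    return returns - values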
3. experiments and results

the testbed is a 5 × 5 grid with three cooperating agents (the squares) in three of the four corners of the environment. in the middle, there is a fourth agent (the circle). the former three agents have the objective of catching the fourth agent, which moves randomly. this testbed is analogous to pursuit–evasion (or predator–prey) scenarios that are also significant in robotics. the agents can move in four directions: up, down, left or right. when one of the three agents catches the fourth one, the episode ends. a penalty is applied to the cooperative agents at every timestep; thus, the return of an episode is maximised by ending the episode as soon as possible (i.e. catching the fleeing agent as quickly as possible). each episode is forced to end after 1,000 timesteps to avoid getting stuck.

in the modification of the a2c algorithm, we followed the theory of centralised learning and decentralised execution. this means that the execution is decentralised, but the learning phase can be assisted by additional information from other agents. in our case, we used the information that the agents are cooperative; thus, they acquire the same rewards (and returns). as noted before, decentralised execution is most helpful in real-world scenarios in which communication difficulties make a centralised task-solving architecture impossible. such scenarios are often encountered in robotics. in our experiment, the many a2c models with one actor and one critic each were replaced by a single model with one critic and multiple actors.

the pseudocode of the algorithm can be seen in algorithm 1. all neural network layers were subclasses of the tensorflow model class, which provides useful functions for training and prediction – even for batch tasks – by providing only the forward steps of the network. the optimiser was rmsprop, with a learning rate of $7 \cdot 10^{-3}$. the value-estimator critic contained a neural network with one 128-unit hidden layer with relu activation function and one output layer with one unit. its loss function was a simple mean squared error between the returns and the value. the actors contained a hidden layer with 128 hidden units and an output layer with four units (the number of actions in the action space). their loss function contained two distinct parts: the policy loss and the entropy loss. the policy loss was a weighted sparse categorical cross-entropy loss, where the weights were given by the advantages. this method increased the convergence of the algorithm. entropy loss is a method for increasing exploration by encouraging actions outside the current local optimum. this is very important for tasks with sparse rewards, since the agent does not receive feedback often. this loss was calculated as a cross-entropy over itself, and it was subtracted from the policy loss because it should be maximised, not minimised. the entropy loss was weighted by a constant, which was taken as $1 \cdot 10^{-4}$.

episode rewards were stored in a list to which a value of 0 was appended at the end of each episode. during an episode, only the last value of the list was incremented by the episode reward of the given step. for the training, a batch-sized container was created for the actions, rewards, terminal state booleans, state values and observed states. then, a two-level loop was started: the outer one was run for the number of required updates (set by us), while the inner loop was run as many times as the batch size. the state observations, the taken actions (selected from a probability distribution based on the actor neural network outputs), the state values, the rewards, the terminal state booleans and the last observed state were stored in the aforementioned containers.

figure 8. performance of the a2cm algorithm on our benchmark with collision (with terminating at collision).
figure 9. number of steps per episode of the original a2c algorithm on our benchmark with collision (without terminating at collision).
figure 10. number of steps per episode of the a2cm algorithm on our benchmark with collision (without terminating at collision).

next, the returns and advantages were calculated on the batch using the collected data, and then a batch training was performed on those data. there was no need to calculate the gradients explicitly due to the use of the keras api.
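the described architecture (a shared model with one critic head and one actor head per agent, 128-unit hidden layers, and rmsprop with a learning rate of $7 \cdot 10^{-3}$) can be sketched in tensorflow/keras as follows; the class and variable names are illustrative, and this is a sketch of the described design, not the authors' published code:

import tensorflow as tf

n_agents, n_actions = 3, 4

class A2CM(tf.keras.Model):
    def __init__(self):
        super().__init__()
        # critic: one 128-unit hidden layer and a single value output
        self.critic_hidden = tf.keras.layers.Dense(128, activation='relu')
        self.value = tf.keras.layers.Dense(1)
        # one actor (hidden layer + action logits) per cooperating agent
        self.actor_hidden = [tf.keras.layers.Dense(128, activation='relu')
                             for _ in range(n_agents)]
        self.logits = [tf.keras.layers.Dense(n_actions)
                       for _ in range(n_agents)]

    def call(self, state):
        v = self.value(self.critic_hidden(state))
        action_logits = [lg(h(state))
                         for h, lg in zip(self.actor_hidden, self.logits)]
        return v, action_logits

model = A2CM()
optimizer = tf.keras.optimizers.RMSprop(learning_rate=7e-3)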
during our experiment, the system was run 5,000 times in batches of 128, thus running the environments over a total of 640,000 steps. gamma was taken to be 0.99. figure 3 and figure 4 show the ends of some remarkable episodes of catching the opponent. figure 5 and figure 6 show the results of our experiments. it is important to note the x-coordinates in figure 5 and figure 6: for the same number of steps, the original was run for 40,340 episodes, while the modified algorithm managed to complete 82,119 episodes. this means that the a2cm algorithm spent half as many steps per episode and was able to catch the fleeing opponent in, on average, half of the time required by the agent based on the original algorithm. these figures also show that the original algorithm did not find an optimal solution without later diverging, and even between divergences, its solutions were less stable. our agent, on the other hand, found a solution with no large divergences, showing only small ones after the first half of the episodes. the a2cm algorithm found a solution with which it can catch the opponent in 6 steps, and it maintained this knowledge for 20,000 episodes, with one positive spike where it found the solution to the problem in just 3 steps.

the run times are worth considering as well. the regular a2c algorithm took 14,567.45 seconds to run, while the modified one ran for 14,458.28 seconds. it is worth noting that, because almost twice as many episodes were completed, the environment had to be reset twice as often, so the modified algorithm is in fact even faster than the normal one.

later, the difference between the algorithms was tested with collision turned on, bringing the problem set even closer to real-world robotics scenarios. in this case, the agents received a penalty if they collided with each other. this makes the environment much harder to learn, as failures (collisions) will most likely occur precisely while chasing the enemy agent. it also makes the training process harder, as the steps leading to success are not as easy to determine; a collision that occurs before the enemy is caught will make similar attempts less likely to be selected as actions.

when considering the training process of the environment with collision detection turned on, it is important to pay attention to the ratio between the negative reward for each step and the negative reward for a collision. the larger the penalty for collision, the better the agents will evade collision; otherwise, they will merely be optimised to finish the episode as fast as possible. for this reason, the negative reward for each step was selected as −1,000, and the negative reward for a collision was −150,000,000, providing a ratio large enough to encourage the agents to follow a collision-evasion policy.

in the first experiment on the environment with collision detection, we set the algorithms such that a collision would terminate the episode. this scenario is analogous to certain scenarios in robotics in which collisions can cause malfunctions in the robots themselves and should be evaded even via high-level control. apart from turning on the collision, all other conditions and parameters of the training process were the same. figure 7 and figure 8 show the cumulated rewards per episode for the original a2c and our a2cm algorithm, respectively. it can be seen that, while neither was able to solve the environment over the timespan of the training, there was a period of ca. 700 episodes in which our algorithm was able to catch the enemy without colliding. the original algorithm lacked any such longer periods. the training of the original algorithm in this case took 14,173.42 seconds, while the training of the a2cm took 14,659.00 seconds. it is worth noting that the original algorithm completed 1,665 episodes and the a2cm completed 3,723; the different numbers of reinitialisations should be considered when comparing the training times.
to make the environment easier to train on, the second experiment with collisions was conducted such that the episodes only terminated when the opponent was caught. this way, the episodes were longer and always terminated successfully, and therefore might provide better training information than the setting of the previous experiment. this scenario is analogous to problems in robotics in which the presence of two robots in the same area is discouraged, such as area-scanning scenarios or subtasks in which two robots should not scan the same area at once. just as in the previous experiment, all other parameters were left as they were in the training of the system without collision.

figure 9 and figure 10 show the number of steps required to finish each episode for the original a2c and the modified a2cm algorithms, respectively, while figure 11 and figure 12 show the cumulated (negative) rewards per episode (higher is better) for the a2c and the a2cm algorithms, respectively. it can be seen that, while the original a2c algorithm did not show any clear sign of successful training, there is some indication of success for the a2cm algorithm. approaching the end of the training process, the number of steps was kept low and, as per figure 12, collisions were also evaded, with the exception of some episodes. the original algorithm completed 1,177 episodes, while the modified one completed 1,964, which can also be seen as a sign of the superiority of the a2cm algorithm. regarding the training times, the original algorithm was trained for 13,981.22 seconds, while the modified one was trained for 19,519.85 seconds. in this case, it is clear that our algorithm used significantly more training time.

figure 11. rewards per episode of the a2cm algorithm on our benchmark with collision (without terminating at collision).
figure 12. rewards per episode of the original a2c algorithm on our benchmark with collision (without terminating at collision).

4. conclusion

looking at the previous section, we can conclude that our modification of the original a2c algorithm, the a2cm algorithm, was able to perform much better than the original on our testbed without collision. to some extent, it outperformed the original a2c algorithm even in environments with collision; thus, it can be recommended for tasks in robotics. however, the algorithm has the caveat of being usable only when the agents are fully cooperative and do not have special, predefined roles.

there are still many ways to improve upon the current state of our algorithm. one possible improvement would be to introduce a variable learning rate, such as win or learn fast [3], into a deep reinforcement learning algorithm. another possible improvement is to include the fleeing agent in the algorithm so that it encompasses the full cooperative–competitive nature of the environment. in addition, other activation functions could be tried to check their behaviour; for example, exponential linear units [23] might provide better convergence at the price of slightly more training time. the algorithm could also be extended using recurrent neural networks so that it could handle partially observable markov decision processes in which the full state is unknown.
acknowledgement

the research reported in this paper and carried out at the budapest university of technology and economics was supported by the tkp2020, institutional excellence program of the national research development and innovation office in the field of artificial intelligence (bme ie-mi-sc tkp2020). the research was supported by the efop-3.6.2-16-2016-00014 project, which was financed by the hungarian ministry of human capacities.

references

[1] m. l. littman, markov games as a framework for multi-agent reinforcement learning, proceedings of the eleventh international conference on machine learning, new brunswick, usa, 10 – 13 july 1994, pp. 157-163. doi: 10.1016/b978-1-55860-335-6.50027-1
[2] j. hu, m. wellman, nash q-learning for general-sum stochastic games, journal of machine learning research 4 (2003), pp. 1039-1069. online [accessed 6 september 2021] https://www.jmlr.org/papers/volume4/hu03a/hu03a.pdf
[3] m. bowling, m. veloso, multiagent learning using a variable learning rate, artificial intelligence 136 (2002), pp. 215-250. doi: 10.1016/s0004-3702(02)00121-2
[4] m. h. bowling, m. m. veloso, simultaneous adversarial multi-robot learning, ijcai (2003) pp. 699-704. doi: 10.5555/1630659.1630761
[5] v. mnih, k. kavukcuoglu, d. silver, a. graves, i. antonoglou, d. wierstra, m. riedmiller, playing atari with deep reinforcement learning, arxiv (2013), 9 pp. online [accessed 14 september 2021] https://arxiv.org/abs/1312.5602
[6] j. foerster, n. nardelli, g. farquhar, t. afouras, p. h. s. torr, p. kohli, s. whiteson, stabilising experience replay for deep multi-agent reinforcement learning, pmlr 70 (2017) pp. 1146-1155. doi: 10.5555/3305381.3305500
[7] s. omidshafiei, j. pazis, c. amato, j. p. how, j. vian, deep decentralized multi-task multi-agent reinforcement learning under partial observability, pmlr 70 (2017) pp. 2681-2690. doi: 10.5555/3305890.3305958
[8] j. foerster, g. farquhar, t. afouras, n. nardelli, s. whiteson, counterfactual multi-agent policy gradients, proceedings of the aaai conference on artificial intelligence, new orleans, usa, 2 – 7 february 2018, pp. 1146-1155, arxiv (2017), 12 pp. online [accessed 14 september 2021] https://arxiv.org/abs/1705.08926
[9] p. peng, y. wen, y. yang, q. yuan, z. tang, h. long, j. wang, multiagent bidirectionally-coordinated nets: emergence of human-level coordination in learning to play starcraft combat games, arxiv (2017), 10 pp. online [accessed 14 september 2021] https://arxiv.org/abs/1703.10069
[10] p. sunehag, g. lever, a. gruslys, w. m. czarnecki, v. zambaldi, m. jaderberg, m. lanctot, n. sonnerat, j. z. leibo, k. tuyls, t. graepel, value-decomposition networks for cooperative multi-agent learning, arxiv (2017), 17 pp. online [accessed 14 september 2021] https://arxiv.org/abs/1706.05296
[11] t. rashid, m. samvelyan, c. s. de witt, g. farquhar, j. foerster, s. whiteson, qmix: monotonic value function factorisation for deep multi-agent reinforcement learning, proceedings of machine learning research, stockholm, sweden, 10 – 15 july 2018, pp. 4295-4304. arxiv (2018), 14 pp. online [accessed 14 september 2021] https://arxiv.org/abs/1803.11485
[12] r. lowe, y. wu, a. tamar, j. harb, p. abbeel, i. mordatch, multi-agent actor-critic for mixed cooperative-competitive environments, advances in neural information processing systems 30 (2017), pp. 6379-6390. doi: 10.5555/3295222.3295385
[13] s. li, y. wu, x. cui, h. dong, f. fang, s.
russell, robust multi-agent reinforcement learning via minimax deep deterministic policy gradient, proceedings of the 33rd aaai conference on artificial intelligence, honolulu, hawaii, usa, 27 january – 1 february 2019, pp. 4213-4220. doi: 10.1609/aaai.v33i01.33014213
[14] p. casgrain, b. ning, s. jaimungal, deep q-learning for nash equilibria: nash-dqn, arxiv (2019), 16 pp. online [accessed 14 september 2021] https://arxiv.org/abs/1904.10554
[15] o. vinyals, t. ewalds, s. bartunov, p. georgiev, a. s. vezhnevets, m. yeo, a. makhzani, h. kättler, j. agapiou, j. schrittwieser, j. quan, s. gaffney, s. petersen, k. simonyan, t. schaul, h. van hasselt, d. silver, t. lillicrap, k. calderone, p. keet, a. brunasso, d. lawrence, a. ekermo, j. repp, r. tsing, starcraft ii: a new challenge for reinforcement learning, arxiv (2017), 20 pp. online [accessed 14 september 2021] https://arxiv.org/abs/1708.04782
[16] m. samvelyan, t. rashid, c. s. de witt, g. farquhar, n. nardelli, t. g. j. rudner, c. hung, p. h. s. torr, j. foerster, s. whiteson, the starcraft multi-agent challenge, arxiv (2019), 14 pp. online [accessed 14 september 2021] https://arxiv.org/abs/1902.04043
[17] s. liu, g. lever, j. merel, s. tunyasuvunakool, n. heess, t. graepel, emergent coordination through competition, arxiv (2019), 19 pp. online [accessed 14 september 2021] https://arxiv.org/abs/1902.07151
[18] n. bard, j. n. foerster, s. chandar, n. burch, m. lanctot, h. f. song, e. parisotto, v. dumoulin, s. moitra, e. hughes, i. dunning, s. mourad, h. larochelle, m. g. bellemare, m. bowling, the hanabi challenge: a new frontier for ai research, artificial intelligence 280 (2020), 103216. doi: 10.1016/j.artint.2019.103216
[19] r. s. sutton, d. mcallester, s. singh, y. mansour, policy gradient methods for reinforcement learning with function approximation, proceedings of the 12th international conference on neural information processing systems, denver, usa, 29 november – 4 december 2000, pp. 1057-1063. doi: 10.5555/3009657.3009806
[20] a. nagabandi, g. kahn, r. s. fearing, s. levine, neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning, 2018 ieee international conference on robotics and automation, brisbane, australia, 21 – 26 may 2018, pp. 7559-7566. doi: 10.1109/icra.2018.8463189
[21] a. f. agarap, deep learning using rectified linear units (relu), arxiv (2018), 7 pp. online [accessed 14 september 2021] https://arxiv.org/abs/1803.08375
[22] v. mnih, a. p. badia, m. mirza, a. graves, t. p. lillicrap, t. harley, d. silver, k. kavukcuoglu, asynchronous methods for deep reinforcement learning, proceedings of machine learning research, new york, usa, 20 – 22 june 2016, pp. 1928-1937.
doi: 10.5555/3045390.3045594
[23] d. a. clevert, t. unterthiner, s. hochreiter, fast and accurate deep network learning by exponential linear units (elus), arxiv (2015), 14 pp. online [accessed 14 september 2021] https://arxiv.org/abs/1511.07289

metrology for data in life sciences, healthcare and pharmaceutical manufacturing: case studies from the national physical laboratory

acta imeko issn: 2221-870x march 2023, volume 12, number 1, 1 - 5

paul m. duncan1, nadia a. s. smith1, marina romanchikova1
1 data science department, national physical laboratory, united kingdom

section: research paper
keywords: nmi; metrology; digital pathology; medicines manufacturing; metadata standards; data quality; ontologies; fair principles
citation: paul m. duncan, nadia a. s. smith, marina romanchikova, metrology for data in life sciences, healthcare and pharmaceutical manufacturing: case studies from the national physical laboratory, acta imeko, vol. 12, no. 1, article 10, march 2023, identifier: imeko-acta-12 (2023)-01-10
section editor: daniel hutzschenreuter, ptb, germany
received november 18, 2022; in final form february 20, 2023; published march 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was funded by the uk government department for business, energy & industrial strategy through the uk's national measurement system.
corresponding author: paul m. duncan, e-mail: paul.duncan@npl.co.uk

abstract: data metrology, i.e., the evaluation of data quality and its fitness-for-purpose, is an inherent part of many disciplines including physics and engineering. in other domains such as life sciences, medicine, and pharmaceutical manufacturing these tools are often added as an afterthought, if considered at all. the use of data-driven decision making and the advent of machine learning in these industries has created an urgent demand for harmonised, high-quality, content-rich, and instantly available datasets across domains. the findable, accessible, interoperable, reusable (fair) principles are designed to improve the overall quality of research data. however, these principles alone do not guarantee that data is fit-for-purpose. issues such as missing data and metadata, or insufficient knowledge of measurement conditions or data provenance, are well known and can be mitigated by applying metrological concepts to data preparation to increase confidence. this work, conducted by the national physical laboratory data science team, showcases life sciences and healthcare projects where data metrology has been used to improve data quality.

1. introduction

decision making in research, industry and healthcare is underpinned by the quality of data, including its provenance, timeliness, reliability, and other aspects. ascertaining the data quality using the metrological principles of traceability, calibration and uncertainty can be described as data metrology. while certain disciplines such as radiation dosimetry or coordinate measurement of industrial components have long incorporated metrological tools such as calibration and traceability into their workflows, others such as laboratory medicine or pharmaceutical manufacturing are relatively new adopters who benefit profoundly from the european metrology networks for traceability in laboratory medicine and advanced manufacturing.

technological advancements in medicine and pharmaceutical manufacturing have traditionally focused on advances in drug discovery, experimental procedures, and manufacture. medicines and treatments are becoming more expensive to produce, as pricing models drive down profit margins, compounded by patent expiry [1]. therefore, a greater emphasis is being placed on maximising the efficiency of medicines development and manufacture. in terms of the quality and repeatability of processes, most pharmaceutical firms operate at high levels of variation in the accuracy with which materials are manufactured. these variations, at levels between 3σ and 4σ, are estimated to cost ~$20 bn annually through waste and inefficiency [2].
therefore, companies are increasingly moving towards controlled and flexible processes in order to offer digital health solutions to their customers. the national physical laboratory (npl) has set out to aid digitalisation in healthcare by focusing on the development of data metrology for life sciences, medicines and pharmaceutical manufacturing. data metrology here refers to the uncertainty present in the data generated in each of these areas, from the quality of the measurements accompanying the manufacturing to the quality of the data used for decision-making processes. this paper describes the similarities and differences between the data metrology challenges addressed by npl in the context of several cross-disciplinary projects, with the goal of helping users to identify their data metrology needs and delivering confidence in the effective use of data.

2. data metrology projects

the npl data science team has been involved in multiple data metrology projects spanning life sciences, healthcare, and medicines manufacturing applications, exploring the similarities and domain-specific requirements in data quality and management. these projects and the related data metrology challenges are outlined below.

2.1. pharmaceutical manufacturing

recent developments in digital pharmaceutical manufacturing are generating a large amount of data across varying temporal resolutions and manufacturing routes. this data provides unprecedented opportunities for pharmaceutical manufacturing to derive new insights and efficiencies from experiments, but it also imposes great challenges in data processing, management, sharing, and integration. not only must data integrity and authenticity be ensured; the processes that lead to the generation of data must also be traceable to enable trust. the pharmaceutical industry introduced "good manufacturing practices" (gmp) to standardise processes around quality, security and effectiveness, but did not make allowances for metrological concepts such as traceability and measurement uncertainty. data metrology therefore becomes a critical component in understanding and controlling pharmaceutical processes and reducing the variation seen in the final product.
npl has worked with major pharmaceutical manufacturers and researchers to explore their data metrology needs and develop a set of applied research programmes.

1) ontologies for clinical trial release. npl has developed techniques [3] for building a domain-agnostic measurement ontology, with a view to applying these techniques across different industries. for the past 2 years, npl has worked with the medicines manufacturing innovation centre (mmic) to develop an ontology to aid in the automation and digitalisation of all data required for regulators to approve drugs for consumption. much of the approvals process is manual data processing, which can be replaced by processing the data with modern data-driven techniques. the ontology developed by npl "codifies" all the data relationships pertinent to the identification of an expiry date for the release of a drug, facilitating true automation, reducing human input and significantly decreasing the potential for errors in the process thanks to an approved automated decision process.

2) controlled vocabularies for pharmaceutical data exchange. the development of the ontology for clinical trials release exposed an issue in how data from different companies and manufacturers can differ semantically when describing similar terms. for example, separate companies may use the terms "pill" and "tablet" to describe the same concept. this inconsistency decreases the quality of the information used in digitalised or automated systems. npl has developed the first iteration of a controlled vocabulary for the clinical trials process to ensure that any automated system can understand the terminology used by each party [4] (a toy illustration of such a mapping is sketched at the end of this section). this controlled vocabulary provides a traceable link to the quality processes of each company, which can aid in automating the verification of the processes used. within the context of the mmic use case, this has provided a valid solution for partners and collaborators dealing with terminology harmonisation issues. npl has also been exploring the idea of working with industry to create an industry-wide standard, establishing a unified approach to solving this problem and reducing the uncertainty of the information.

3) mapping of measurement uncertainty propagation in manufacturing. npl has been working on an approach to understand the measurement uncertainty generated at each stage of a continuous manufacturing process. currently, the uncertainty generated at each node is not propagated, so to ensure greater traceability of the variation present in the final product, npl is expanding upon pre-existing industrial case studies [5] to develop methods to "map" out the uncertainty present in manufacturing systems and propagate it to each stage.

the goal of these use cases under development is to truly understand the uncertainty of the information produced during pharmaceutical manufacturing and to provide industry with frameworks to understand their data metrology.
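to illustrate the idea behind such a controlled vocabulary, the following python sketch maps company-specific synonyms onto canonical terms before data exchange; the terms and names are purely illustrative and are not npl's published vocabulary:

# hypothetical synonym table: company-specific term -> canonical term
CANONICAL = {
    "pill": "tablet",
    "tablet": "tablet",
    "capsule": "capsule",
    "expiry date": "shelf-life end date",
    "use-by date": "shelf-life end date",
}

def harmonise(term: str) -> str:
    """return the canonical vocabulary entry for a free-text term."""
    key = term.strip().lower()
    if key not in CANONICAL:
        raise KeyError(f"unmapped term: {term!r} - extend the vocabulary")
    return CANONICAL[key]

# two companies describing the same concept now exchange identical terms
assert harmonise("Pill") == harmonise("tablet") == "tablet"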
2.2. minimum metadata for biological imaging

biological imaging (bioimaging) encompasses a vast array of techniques, such as optical microscopy, spectroscopy and multispectral imaging, among others. in the pharmaceutical industry, these techniques are used both in r&d and in clinical studies that evaluate drug resistance, efficacy, targeting mechanisms and pharmacodynamics. the complexity, diversity, and volume of data generated by high-resolution imaging techniques drive the need for advanced analysis and data management methods and require new standards to ensure the reproducibility of results and the reliability of research [6]. while efforts to improve data interoperability and to define minimum reporting standards and metadata are ongoing [7]-[9], stronger engagement of equipment vendors, researchers and funding authorities is needed to create future-proof, re-usable and reproducible data repositories. seeking such engagement, npl has been working with two major pharmaceutical industry partners on three bioimaging case studies characterised by high data volumes and the need for re-use: 1) mass spectrometry imaging (msi); 2) high content screening; and 3) light sheet microscopy (lsm).

to identify the minimum metadata requirements for the three bioimaging domains, subject matter experts (smes) were named by each partnering organisation. the smes worked with the npl data science team to collate the current practices on data management and annotation [10]. the review of the collated metadata revealed the metadata categories illustrated in figure 1.

figure 1. metadata categories in bioimaging: experimental metadata; instrument settings; sample provenance; sample handling; data processing.

the categorised domain-specific metadata sets were enhanced with metadata item descriptions, formats, and, where applicable, units of measure and suggestions of standardised terminologies or controlled vocabularies. the results were revised by all smes and published in the open npl report ms 24 [11]. the findings of the study were used to define minimum microscopy metadata recommendations for research data repositories [12]. at npl, the minimum metadata specifications for lsm and msi were used extensively to develop frameworks and tools for bioimaging data capture and annotation [13].
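as an illustration of what such a record might look like once the categories of figure 1 are made machine-readable, consider the following python sketch; the field names, units and values are invented for illustration and are not the ms 24 specification:

from dataclasses import dataclass, field

@dataclass
class MinimumImagingMetadata:
    # experimental metadata
    experiment_id: str
    modality: str                      # e.g. "light sheet microscopy"
    # instrument settings
    instrument_settings: dict = field(default_factory=dict)
    # sample provenance and handling
    sample_provenance: str = ""
    sample_handling: str = ""
    # data processing
    processing_steps: list = field(default_factory=list)

record = MinimumImagingMetadata(
    experiment_id="exp-0001",
    modality="light sheet microscopy",
    instrument_settings={"excitation_wavelength_nm": 488,  # unit in the name
                         "exposure_time_ms": 10},
    sample_provenance="cell line X, passage 12",
    sample_handling="fixed, cleared",
    processing_steps=["deconvolution", "stitching"],
)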
2.3. digital pathology

clinical histopathology describes the study of stained tissue sections on glass slides under a microscope, whereby pathologists manually change the brightness, focus depth, and the region of interest. in digital pathology (dp), tissue samples are digitised using a whole slide imaging (wsi) scanner. the resulting high-resolution images (1-4 gb) can be studied in silico by image analysis software or on-display by a trained pathologist. the dp workflow poses multiple metrological challenges: reproducibility and repeatability of tissue processing, calibration and traceability of wsi, as well as uncertainty analysis to support diagnosis. the npl digital pathology inter-disciplinary project, launched in 2020, comprised a landscape exercise during which dp experts and stakeholders identified priority areas for metrology support [14], [15]. the outcomes of the landscape exercise were used to shape demonstrator studies with real-world data. within the pithia trial collaboration (http://www.pithia.org.uk, grant reference number pb-pg-1215-20033), the project team are studying the uncertainties in diagnoses based on kidney biopsy images, aiming to 1) locate the sources of uncertainty in decision making and find tools to reduce it, and 2) find image features that correlate with clinical outcomes to increase reproducibility and explainability in wsi evaluation.

the preliminary findings (figure 2) show how the on-display assessment method influences the diagnostic results: when a blood vessel wall thickness is measured directly (red line, lower left diagram), the assessors show a preference for more uniform score assignment than when the wall thickness is calculated as a difference, (outer diameter − lumen diameter)/2 (blue line, lower right diagram).

figure 2. impact of measurement method on clinical assessment (remuzzi score). red line: wall thickness is measured directly. blue line: wall thickness is calculated from the vessel outer diameter and lumen diameter. image courtesy of tobi ayori.

further case studies will include the analysis of measurable image features and their association with diagnostic predictions, as well as an impact assessment of intra- and inter-wsi device variability on image features and diagnosis. future work will include engaging with standards bodies to include metrology-enabling contextual data, such as calibration results and device settings, in clinical dp standards such as digital communications for medical imaging (dicom) and fast healthcare interoperability resources (fhir). these standards have high maturity levels and provide mechanisms to include metrological metadata and requirements such as units of measure, clinical terminologies, ontologies and unique identifiers.

2.4. medical sensors case study

while wsi data and associated measurement information can be captured using the existing dicom standard, novel medical devices require modification of existing standards to capture new data types and provide integration into the healthcare infrastructure. npl worked with a uk-based medical device developer to create clinically interoperable data structures to store and manage the data from a novel surgical sensor. this opportunity facilitated the capture of valuable metrological information, including traceability and calibration, ab initio, creating a metrologically sound data model at the early stage of device development. an example of how custom measurement-related information can be included in dicom metadata is presented in table 1. a custom value (patient tilting angle in degrees) is enclosed in a concept name code sequence that refers to the coding document and provides the value inclusive of its format (value representation (vr)). note that the value description includes the unit of measure and the reference terminology (measurement units code sequence).

table 1. including a custom measurement value, units of measure and a reference to an ontology in dicom metadata.

tag description                    tag           vr  value
concept name code sequence         (0040, a043)  sq
  code value                       (0008, 0100)  sh  '1.2.2-1'
  coding scheme designator         (0008, 0102)  sh  'ascode'
  coding scheme version            (0008, 0103)  sh  '1.0'
  code meaning                     (0008, 0104)  lo  'patient tilting angle'
numeric value                      (0040, a30a)  ds  '-19.05'
measurement units code sequence    (0040, 08ea)  sq
  code value                       (0008, 0100)  sh  'deg'
  coding scheme designator         (0008, 0102)  sh  'ucum'
  coding scheme version            (0008, 0103)  sh  '1.4'
  code meaning                     (0008, 0104)  lo  'degrees'
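the structure of table 1 can be reproduced programmatically; the following pydicom sketch illustrates the general mechanism (the tag values mirror table 1, but the helper function and this arrangement are ours, not the device developer's actual code):

from pydicom.dataset import Dataset
from pydicom.sequence import Sequence

def coded_concept(value, designator, version, meaning):
    # build one coded entry (code value, scheme designator, version, meaning)
    item = Dataset()
    item.add_new((0x0008, 0x0100), 'SH', value)       # code value
    item.add_new((0x0008, 0x0102), 'SH', designator)  # coding scheme designator
    item.add_new((0x0008, 0x0103), 'SH', version)     # coding scheme version
    item.add_new((0x0008, 0x0104), 'LO', meaning)     # code meaning
    return item

ds = Dataset()
# concept name code sequence: what the numeric value means
ds.add_new((0x0040, 0xA043), 'SQ',
           Sequence([coded_concept('1.2.2-1', 'ASCODE', '1.0',
                                   'patient tilting angle')]))
# the measurement itself
ds.add_new((0x0040, 0xA30A), 'DS', '-19.05')
# measurement units code sequence: unit of measure referencing ucum
ds.add_new((0x0040, 0x08EA), 'SQ',
           Sequence([coded_concept('deg', 'UCUM', '1.4', 'degrees')]))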
2.5. digital health

new measurement modalities within healthcare are creating vast amounts of high-dimensional data from disparate sources and of varying quality, including genomic and imaging data, biomarkers, electronic healthcare records and data from wearable devices. current and future healthcare practices across the world are increasingly reliant on the integration of these diverse, complex, and large datasets, as well as on trusted and robust analysis methods [16]. the data curation process in healthcare includes extraction, de-identification, and annotation of datasets with metadata, as well as data fusion and linkage. therefore, future-proof, secure, scalable curation methods that handle rapidly growing data volumes are needed. npl runs an ongoing inter-disciplinary digital health programme aimed at using data metrology tools to help solve some of the important and emerging challenges of utilising healthcare data [17]. the programme includes several case studies, some of which are briefly described below; further details can be found in the 2021 report [18].

one of the case studies investigates whether it is possible to improve data quality and comparability by linking patient images with imaging device calibration data. the study set out to link megavoltage computed tomography (mvct) images used for image-guided radiotherapy with mvct device calibration data from the routine monthly quality assurance tests that check whether the scanner is fit-for-purpose. mvct images are routinely used for patient positioning, radiation dosimetry, and in-treatment therapy effect assessment. like other medical imaging modalities, mvct images are subject to temporal and inter-device variations that are known to have a negative influence on the accuracy of subsequent radiation dose calculation and image segmentation. we implemented a procedure that includes the device calibration information in the dicom header information of the patient scan. we expect that the mvct calibration data can be used to remove the device-related variability and make the patient images more inter-comparable, reducing the variations in image quality and improving the accuracy of analysis, as well as the safety and efficiency of data-driven clinical interventions [18].

another case study focussed on the development of data-driven models to identify key prognostic markers in computerised medical records (cmr). cmr are a powerful source of information as they contain population-level health indicators. these data can be used for estimations of disease incidence, provide insight into disease complexity and identify sub-groups of patients, among other things. national and regional level data aid decision-making in response to potential disease outbreaks, while the identification of patient sub-groups can aid treatment planning, moving towards personalised medicine. despite the enormous potential, identifying trends in large primary care data and inferring meaning from these data is extremely challenging due to their complexity, heterogeneity, dimensionality, incompleteness, and noisiness. cmr data are often mixed-type, making traditional data analysis tools unsuitable. a generic data pre-processing and deep learning approach for the visualisation and analysis of cmr data has been developed at npl [19]. the tools enable the analysis of cmr data, as well as other related data types, such as demographics, metadata and medical histories, in a way that identifies non-linear patterns in an unlabelled manner. the features that form patient clusters can be linked back to the input data and interpreted by the clinician or stakeholder to aid their decision making in complex healthcare scenarios.
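a deep autoencoder of the kind described in [19] compresses mixed-type patient records into a low-dimensional latent space in which clusters can be explored; the following keras sketch is schematic only, with illustrative layer sizes that do not reproduce the published architecture:

import tensorflow as tf

input_dim, latent_dim = 120, 8  # illustrative dimensions

encoder = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(input_dim,)),
    tf.keras.layers.Dense(latent_dim, activation='relu'),
])
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(latent_dim,)),
    tf.keras.layers.Dense(input_dim),
])
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer='adam', loss='mse')
# training with autoencoder.fit(x, x, ...) learns a compressed representation;
# patient clusters can then be explored in the space encoder.predict(x)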
this framework can also be applied as a data exploration study to obtain data-driven hypotheses that can be tested with further data.

a further case study in the digital health programme evaluates how data linkage can be used to improve the quality of life and long-term treatment outcomes of prostate cancer patients by using patient care data acquired outside of clinical trials. we developed an ontology-based data curation framework to identify and collate information about diagnosis, symptoms, and treatment side effects from routine primary care electronic health records. this work is a first step towards increasing the utility of primary care data for oncology by a) creating a knowledge base of data sources, b) mapping out the required integration efforts, and c) developing a practical ontology-based method for systematic and reproducible prostate cancer case identification, validated on real-world datasets. the developed ontology can be used to standardise the identification and retrieval of prostate cancer cases from primary care data [20].

npl's most recent endeavours to increase the availability and reliability of medical data include the development of a curated data platform. the platform will provide mechanisms for the curation, storage, metadata annotation, linkage, and analysis of clinically relevant imaging, audit, and calibration data (figure 3). such a platform would provide a much-needed foundation to enable access to a richer and larger dataset than is currently available, rendering the data fair-er and thus increasing its value and utility.

3. conclusions

this work presents a range of use cases and demonstrator studies in life sciences and healthcare developed by npl through active collaborations with industry partners and researchers in digital pathology, bioimaging, pharmaceutical and biomanufacturing. it is aimed at highlighting the need for data metrology in life sciences and healthcare and at stressing the role of national measurement institutes in these areas. despite the relative heterogeneity of the presented case studies, the identified problems feature similarities, including (a) missing metadata specifications, (b) a lack of mechanisms to capture, exchange and propagate metrological information such as calibration data from data acquisition during measurement through to its processing, and (c) a lack of methods to combine and propagate uncertainties in data processing chains.

the three problems listed above call for a systematic approach to data curation and metadata annotation based on the need for fair-ness and data re-usability. although the missing metadata specifications can be addressed using custom ontologies and controlled vocabularies, striving towards standards and minimum data quality requirements is recommended to increase data re-usability and impact across different sectors and companies. furthermore, there is a variety of existing open standards and formats that can and should be used to manage data from new medical devices and imaging modalities. these standards can be adapted to incorporate information pertaining to metrological traceability and uncertainty. lastly, while the use of, and need for, metrology methods is widely recognised in physics and engineering, in life sciences, medicine and pharmaceutical manufacturing these tools are often added as an afterthought, if considered at all. therefore, work is required to demonstrate the need for and the impact of data metrology via case studies in the respective domains.
the npl data science team believes that the identified challenge areas highlight both the need for heterogeneous approaches to data metrology and common pain points across these fields. the findings presented in this paper call for a proactive and consistent approach to generating and using quality data. fair data platforms such as that shown in figure 3 demonstrate an end-to-end approach to how data should be treated to ensure adherence to the fair principles and to reduce any uncertainty generated by the processing or labelling of the data.

figure 3. fair data platforms for clinically relevant research.

acknowledgement

this work was funded by the uk government department for business, energy & industrial strategy through the uk's national measurement system. we would also like to thank our partners at the mmic; cpi, university of strathclyde, ukri, scottish enterprise, astrazeneca and gsk, as well as artiosense ltd and the pithia trial investigators. thanks to michael chrubasik, louise wright, and peter harris for providing feedback on the manuscript.

references

[1] d. taylor, the pharmaceutical industry and the future of drug development, pharmaceuticals in the environment, edited by r. e. hester, r. m. harrison, 2015, pp. 1–33. doi: 10.1039/9781782622345-00001
[2] j. s. srai, c. badman, m. krumme, m. futran, c. johnston, future supply chains enabled by continuous processing - opportunities and challenges, continuous manufacturing symposium, 20–21 may 2014, j. pharm. sci., vol. 104, 3 (2015), pp. 840–849. doi: 10.1002/jps.24343
[3] j.-l. hippolyte, m. chrubasik, f. brochu, m. bevilacqua, a domain-agnostic ontology for unified metrology data management, meas. sens., 18 (2021), p. 100263. doi: 10.1016/j.measen.2021.100263
[4] p. m. duncan, d. s. whittaker, distribution identification and information loss in a measurement uncertainty network, metrologia, 58 (2021), 034003. doi: 10.1088/1681-7575/abeff8
[5] m. chrubasik, c. lorch, p. m. duncan, ontology-based rest apis for measurement terminology: glossaries as a service, imeko tc6 int. conference on metrology and digital transformation, berlin, germany, 19-21 september 2022. doi: 10.21014/tc6-2022.023
[6] b. j. heil, m. m. hoffman, f. markowetz, su-in lee, c. s. greene, s. c. hicks, reproducibility standards for machine learning in the life sciences, nat. methods, 18 (2021), pp. 1132–1135. doi: 10.1038/s41592-021-01256-7
[7] c. allan, j.-m. burel, j. moore, c. blackburn, m. linkert, s. loynton, d. macdonald, w. j. moore, c. neves, a. patterson, m. porter, a. tarkowska, b. loranger, j. avondo, i. lagerstedt, l. lianas, s. leo, k. hands, r. t. hay, a. patwardhan, c. best, g. j. kleywegt, g. zanetti, j. r. swedlow, ome remote objects (omero): a flexible, model-driven data management system for experimental biology, nat. methods, vol. 9, 3 (2012), pp. 245–253. doi: 10.1038/nmeth.1896
[8] o. j. r. gustafsson, l. j. winderbaum, m. r. condina, b. a. boughton, b. r. hamilton, e. a. b. undheim, m. becker, p. hoffmann, balancing sufficiency and impact in reporting standards for mass spectrometry imaging experiments, gigascience, vol. 7, 10 (2018). doi: 10.1093/gigascience/giy102
[9] m. huisman, m. hammer, a. rigano, u. boehm, j. j. chambers, n. gaudreault, a. j. north, j. a. pimentel, d. sudar, p. bajcsy, c. m. brown, a. d. corbett, o. faklaris, j. lacoste, a. laude, g. nelson, r. nitschke, d. grunwald, c.
strambio-de-castillia, minimum information guidelines for fluorescence microscopy: increasing the value, quality, and fidelity of image data, arxiv:1910.11370 [cs, q-bio] (2020). online [accessed 9 march 2020] http://arxiv.org/abs/1910.11370
[10] e. cooke, m. hayes, m. romanchikova, acquisition and management of high content screening, light-sheet microscopy and mass spectrometry imaging data at astrazeneca, glaxosmithkline and npl: a survey report, npl report ms 25 (2020). doi: 10.47120/npl.ms25
[11] f. brochu, j. bunch, e. cooke, a. dexter, m. romanchikova, m. shaw, t. r. steven, s. a. thomas, federation of imaging data for life sciences: current status of metadata collection for high content screening, mass spectrometry imaging and light sheet microscopy of astrazeneca, glaxosmithkline and npl, npl report ms 24 (2020). doi: 10.47120/npl.ms24
[12] u. sarkans, w. chiu, l. collinson, m. c. darrow, j. ellenberg, d. grunwald, j.-k. hériché, a. iudin, g. g. martins, t. meehan, k. narayan, a. patwardhan, m. r. g. russell, h. r. saibil, c. strambio-de-castillia, j. r. swedlow, c. tischer, v. uhlmann, p. verkade, m. barlow, o. bayraktar, e. birney, c. catavitello, c. cawthorne, s. wagner-conrad, e. duke, p. paul-gilloteaux, e. gustin, m. harkiolaki, p. kankaanpää, t. lemberger, j. mcentyre, j. moore, a. w. nicholls, s. onami, h. parkinson, m. parsons, m. romanchikova, n. sofroniew, j. swoger, n. utz, l. m. voortman, f. wong, p. zhang, g. j. kleywegt, a. brazma, rembi: recommended metadata for biological images enabling reuse of microscopy data in biology, nat. methods, 18 (2021), pp. 1–5. doi: 10.1038/s41592-021-01166-8
[13] s. thomas, f. brochu, a framework for traceable storage and curation of measurement data, meas. sens., 18 (2021), pp. 100201. doi: 10.1016/j.measen.2021.100201
[14] m. adeogun, j. bunch, a. dexter, c. dondi, t. murta, c. nikula, m. shaw, a. taylor, i. partarrieu, m. romanchikova, n. a. s. smith, s. a. thomas, j. venton, metrology for digital pathology. digital pathology cross-theme project report, npl report as 102 (2021). doi: 10.47120/npl.as102
[15] m. romanchikova, s. a. thomas, a. dexter, m. shaw, i. partarrieu, n. a. s. smith, j. venton, m. adeogun, d. brettle, r. j. turpin, the need for measurement science in digital pathology, journal of pathology informatics, (2022), 100157, preprint. doi: 10.1016/j.jpi.2022.100157
[16] the topol review, nhs health education england. online [accessed 31 march 2022] https://topol.hee.nhs.uk/
[17] n. a. s. smith, d. sinden, s. a. thomas, m. romanchikova, j. e. talbott, m. adeogun, building confidence in digital health through metrology, br. j. radiol., vol. 93, 1109 (2020), 20190574. doi: 10.1259/bjr.20190574
[18] n. a. s. smith, m. romanchikova, i. partarrieu, e. cooke, a. lemanska, s. thomas, nms 2018-2021 life-sciences and healthcare project "digital health: curation of healthcare data" final report, national physical laboratory, npl report ms 31 (2021). doi: 10.47120/npl.ms31
[19] s. a. thomas, n. a. s. smith, v. livina, i. yonova, r. webb, s. de lusignan, analysis of primary care computerized medical records (cmr) data with deep autoencoders (dae), front. appl. math. stat., 5 (2019), 12 pp. doi: 10.3389/fams.2019.00042
[20] a. lemanska, s. faithfull, h. liyanage, s. otter, m. romanchikova, j. sherlock, n. a. s. smith, s. a. thomas, s. de lusignan, primary care prostate cancer case ascertainment, stud. health tech. inf., 270 (2020), pp. 1369-1370.
doi: 10.3233/shti200446

design, characterisation, and digital linearisation of an adc analogue front-end for gamma spectroscopy measurements

acta imeko issn: 2221-870x june 2021, volume 10, number 2, 70 - 79

t. kowalski1,2, g. p. gibiino3, j. szewiński1, p. barmuta4,2, p. bartoszek1, p. a. traverso3
1 national centre for nuclear research (ncbj), 05-400 otwock, poland
2 institute of electronic systems, warsaw university of technology, 00-661 warsaw, poland
3 edm-lab, dept. electrical, electronic and information engineering "g. marconi" (dei), university of bologna, 40136 bologna, italy
4 dept. electrical engineering, ku leuven, 3000 leuven, belgium

section: research paper
keywords: adc front-end; receiver characterisation; receiver linearisation; gamma spectrometry system
citation: tomasz kowalski, gian piero gibiino, jaroslaw szewinski, pawel barmuta, piotr bartoszek, pier andrea traverso, design, characterisation, and digital linearisation of an adc analogue front-end for gamma spectroscopy measurements, acta imeko, vol. 10, no. 2, article 11, june 2021, identifier: imeko-acta-10 (2021)-02-11
section editor: giuseppe caravello, università degli studi di palermo, italy
received january 18, 2021; in final form may 7, 2021; published june 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: gian piero gibiino, e-mail: gianpiero.gibiino@unibo.it

abstract: this work presents the design, experimental characterisation, and digital post-distortion (i.e., digital linearisation) of an mhz-range adc analogue front-end prototype for a gamma radiation spectrometry system under development at the national centre for nuclear research (ncbj) in poland. the design accounts for the electrical response of the gamma particle detector in providing signal conditioning and adc protection against high-voltage spikes due to occasional high-energy cosmic radiation, as well as proper adc clocking. as the front-end inevitably introduces nonlinear distortion and dynamic effects, a characterisation is performed to quantify the actual performance in terms of total harmonic distortion (thd) and effective number of bits (enob). thus, a digital linearisation based on both static and memory polynomial models is successfully applied by means of post-distortion processing, guaranteeing a substantial improvement in thd and enob, and demonstrating the effectiveness of the hardware/software method for gamma radiation spectrometers.

1. introduction

gamma spectroscopy is the discipline studying gamma radiation sources. the identification and characterisation of gamma radiation sources is employed for many purposes, ranging from scientific to industrial, as well as military. in geology, it can be employed for mapping the elements in rock formations (mainly potassium, thorium, and uranium) [1], or for the analysis of soil samples [2]. in border security, non-intrusive inspection systems based on gamma spectroscopy can detect materials such as tobacco, illicit drugs, or explosives [3]. also, gamma spectroscopy systems are commonly used for radiation monitoring, e.g., they are always found near sensitive nuclear apparatuses such as nuclear reactors or particle accelerators. in case of a fault, any radiation exceeding predetermined safe levels can be quickly detected, triggering the shutdown of the system and corresponding actions. gamma spectroscopy measurements aim at quantifying the energy of radiated particles.
the energy spectrum of such radiation can be experimentally obtained by creating a histogram of the measured energy levels. the peaks in the spectrum can then be related to specific physical processes happening at the atomic scale, granting insights into the nuclear composition of a measured radiation source. a gamma spectrometer consists of an ionising radiation detector and an electrical signal receiver. the detector converts the radiation energy of the incoming particles into voltage pulses, the pulse amplitude being proportional to the energy of the particle. legacy spectrometry systems consist of several modular elements, such as charge-sensitive pre-amplifiers and pulse-shaping filters, which increase the pulse duration and make it suitable for the analogue-to-digital converter (adc). in modern systems, instead, the use of fast adcs allows signal conditioning to be minimised, enabling direct pulse sampling, higher pulse count rates, and a reduced risk of pulse pile-up [4]. after the acquisition, the captured voltage pulses must be integrated in time to quantify the particles' energy, and the accuracy of the estimated energy spectra heavily depends on the receiver performance.

a fundamental problem in spectrometers is the occasional presence of high-energy cosmic radiation interfering with regular acquisitions. most critically, the voltage spikes resulting from these rare events can have amplitudes an order of magnitude higher than those of the typical radiation source. since the acquisition system of a spectrometer is normally designed to maximise measurement sensitivity and accuracy, the output signal of the detector is normally set to match the adc dynamic range with the typical amplitudes received from the target radiation, usually in the few-volts range. therefore, the unwanted cosmic pulses can reach amplitudes of tens of volts, not only saturating but even damaging the adc. in this context, adc protection from high-voltage transient events becomes a system pre-requisite. in legacy spectrometers, this protection is indirectly provided by the pre-amplifier. in direct sampling systems, instead, such protection can be obtained by introducing analogue signal limitation, while taking care to preserve good receiver linearity within the dynamic range of the adc.
more precisely, both the necessary signal conditioning and the analogue limiter itself will likely introduce dynamic effects and nonlinear distortion, reducing the receiver dynamic range. the characterisation and correction of adc-based receiver non-idealities has been the subject of extensive research in the literature [5]-[9]. in particular, digital linearisation is an effective approach for compensating the nonlinear dynamic distortion caused by the hardware [7]-[9]. it consists of the extraction of a suitable mathematical description of the nonlinear dynamic behaviour of the device, often derived from volterra-like series representations [9]-[12]. this mathematical model is extracted through a set of measurement procedures, using either time- or frequency-domain approaches. once a suitable direct model of the device is extracted, linearisation can be performed by implementing an inverse of the identified model using digital signal processing techniques. whereas this process is known as digital pre-distortion when linearising signal generators, transmitters, or power amplifiers [13], [14], it can be implemented as digital post-distortion (dpd) for receivers by applying the inverse model to the corrupted samples acquired by the digitiser [8], [15]-[17].

in this work, we present the design, characterisation, and digital linearisation of an adc analogue front-end for the gamma spectrometry system under development at the polish national centre for nuclear research (ncbj). such a board prototype implements a tailored signal conditioning stage including the adc protection, and it hosts the clock generation for the adc, as well as all the necessary adc control hardware. this article extends the one in [18], presented at the 24th imeko tc-4 international symposium and 22nd international workshop on adc and dac modelling and testing. the conference contribution was mainly devoted to the metrological characterisation of the front-end in a laboratory setting, and to the investigation of the feasibility of digital linearisation approaches with application-like pulsed waveforms. in this work, we provide a presentation of gamma spectroscopy systems, in particular targeting the one under development at ncbj. moreover, we present the design details for signal conditioning, especially for the on-board reference clock and the assessment of its jitter. for both aspects, new characterisation data is provided. also, we describe the hardware components of the designed front-end board. finally, the proposed spectrometry system is tested in the field using an actual gamma radiation source (caesium-137).

the article is organised as follows. in section 2, the working principles of the gamma spectrometer are provided. section 3 is dedicated to the design of the analogue front-end prototype, including the signal conditioning stage and the adc reference clock. section 4 contains the corresponding characterisation performed to verify the suitability of the design. moreover, it reports the metrological characterisation of the front-end both with the standard-compliant method using sine wave excitations, as well as using pulsed signals typical of gamma spectroscopy applications. in section 5, the dpd approach is successfully implemented, substantially improving the linearity performance of the receiver protected by the front-end. in section 6, the characterisation of the front-end in an actual spectrometry system is carried out and dpd is applied in the real-life setting.
the effects of the distortion and linearisation are presented for measuring the energy spectrum of caesium-137. conclusions are drawn in section 7.

2. gamma spectroscopy system
a fundamental part of a gamma spectrometer is the detector of gamma radiation. the most widely used type of detector, suitable for a wide assortment of radiations, is the scintillation detector [4]. its working principle is based on scintillation, i.e., the phenomenon of emission of light in response to the passage of a radiated particle through a material. as the particle travels through the scintillator, it deposits some of its energy, leading to the excitation of electrons within the scintillator. these electrons subsequently emit light as they return to their base energy states. the number of excited electrons is proportional to the energy deposited by the particle which, in turn, determines the number of emitted light photons. in this way, the scintillator responds to each detected particle with an impulse of light, whose intensity is proportional to the particle energy.

the scintillation light is then converted into a measurable electrical signal, proportional to the detected radiation. the most common component employed for this purpose is the photomultiplier, although photodiodes can also be used [4]. a graphical representation of a scintillator coupled to a photomultiplier is shown in figure 1.

figure 1. representation of a scintillator coupled with a photomultiplier within a gamma spectroscopy system.

photo-multiplier tubes (pmts) consist of a photocathode, a series of dynodes, and an anode. incoming light photons from the scintillator hitting the photocathode cause electrons to be ejected from its surface by the photoelectric effect. a high-voltage source, typically up to 2 kv, is used to accelerate the electrons through a focusing electrode. when striking a dynode at high speed, each electron causes multiple secondary electrons to be ejected, so that a cascade of electrons is created in the photomultiplier, amplifying the signal. the electrons are then collected on the anode, producing an electrical current pulse. a transimpedance amplifier, or a resistor, is then used at the output to convert the current into voltage pulses, which can then be acquired by an adc and digitally post-processed.

the shape of the voltage pulse corresponding to the incoming radiation is determined by the employed scintillator hardware and by the loading circuit of the photomultiplier. the general mathematical description for a voltage pulse at the anode of the photomultiplier after a single scintillation event is given by [4]:

$V(t) = \frac{\lambda Q}{C(\lambda - \theta)}\left(\mathrm{e}^{-\theta t} - \mathrm{e}^{-\lambda t}\right)$ , (1)

where $\lambda$ is the decay constant of the scintillator, $\theta$ is the reciprocal of the time constant of the anode circuit, $Q$ is the total charge collected over the pulse duration, and $C$ is the combined capacitance of the anode and its loading circuit. the amplitude of the pulse is proportional to the charge collected at the photomultiplier anode, which is linearly dependent on the energy released by the particle during the scintillation event. an example of a voltage pulse shape captured from a scintillation detector is shown in figure 2. provided that $\lambda$ and $\theta$ are system constants, the pulsed voltage waveform realises a fixed double-exponential shape.
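as a quick numerical illustration of equation (1), the following sketch evaluates the double-exponential pulse and estimates its maximum slew rate by direct differentiation of the model; the parameter values are placeholders chosen for illustration, not the constants of the ncbj detector, and the paper instead estimates the slew rate from a polynomial fit of the measured waveform (see below).

```python
import numpy as np

# double-exponential anode pulse of eq. (1):
# V(t) = lam*Q / (C*(lam - theta)) * (exp(-theta*t) - exp(-lam*t))
lam   = 1.0 / 20e-9    # scintillator decay constant, 1/s (illustrative value)
theta = 1.0 / 200e-9   # reciprocal of the anode circuit time constant, 1/s (illustrative)
q     = 10e-12         # total collected charge, C (illustrative)
c     = 10e-12         # anode + loading-circuit capacitance, F (illustrative)

t = np.linspace(0.0, 2e-6, 20001)   # 2-us window, 0.1-ns step
v = lam * q / (c * (lam - theta)) * (np.exp(-theta * t) - np.exp(-lam * t))

# maximum slew rate from the numerical derivative of the modelled pulse
sr_max = np.max(np.abs(np.gradient(v, t)))   # V/s
print(f"peak amplitude: {v.max():.3f} V, max slew rate: {sr_max / 1e6:.1f} V/us")
```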
the frequency spectrum of such a waveform fades below the noise floor at around 70 mhz, which is then considered as the target bandwidth of the acquisition system, while any spurious frequency content above 70 mhz will be filtered out digitally. to estimate the maximum slew rate of the pulse, the acquired waveform is interpolated with a 9th-order polynomial and then differentiated, as shown in figure 2. the resulting maximum slew rate of the pulse is $SR_{MAX} = 76.5$ v/μs.

figure 3 shows the block diagram of the complete spectroscopy system under development at the ncbj. in the expected final configuration, the voltage pulses from the photomultiplier will be acquired on an fpga mezzanine card (fmc) connected to a xilinx zedboard zynq-7000 fpga board. the fmc consists of an adc, as well as of the circuitry for signal conditioning such as amplification, attenuation, inversion, or level shifting, to ensure the best compliance with the adc specifications (see section 3). moreover, the fmc contains the adc clock, as well as the i2c protocol signals and power supply. once the signal is sampled, the digital post-processing for obtaining the energy spectrum is carried out at high speed by the fpga. in particular, the post-processing aims at discriminating the individual pulses and obtaining a numerical value proportional to the energy. this can be achieved either by applying pulse integration or by considering the pulse peak. as mentioned in the introduction, the values corresponding to a series of scintillation events are then distributed into the bins of a histogram, creating the so-called energy spectrum. eventually, the spectrum displays the number of measured counts (y-axis) for a set of energy channels (x-axis).

as an example, figure 4 shows the energy spectrum for cobalt-60, a standard material often used for the preliminary calibration of the spectroscopy system. in fact, cobalt-60 is an isotope displaying two main peaks of the spectrum (c and d in figure 4) at precisely known energy levels (1.1732 mev and 1.3325 mev, respectively) representing the energy of gamma rays emitted during its radioactive decay [4]. in the calibration process, this information is then used to scale the energy channels on the x-axis in mev. the other measured peaks (a and b in figure 4) correspond to the phenomena of back-scattering and compton scattering [4].

3. front-end prototype design
a stand-alone prototype adc front-end board was designed for testing the components to be eventually mounted on the fmc card of the final spectroscopy system shown in figure 3. the prototype board implements the adc analogue front-end, the power supply circuitry, the i2c communication interface, and the clock generation to be used for the adc as well as for the fpga board. to allow for the linearity characterisation of the front-end alone, the prototype does not include the target adc for the final application (the ads5474 monolithic pipeline adc from texas instruments [20]), which features 14-bit resolution and a sampling rate of 400 msa/s. such an adc is differential with a full-scale input of 1.1 v (single-ended) or 2.2 v (differential), requiring a common-mode voltage of 3.1 v.

as previously discussed, the adc analogue front-end should serve two main purposes. firstly, it should perform attenuation and filtering.
figure 2. oscilloscope acquisition of the response of a scintillation detector (black dots) used at ncbj, polynomial interpolation (blue curve) and its time-differentiation for maximum slew-rate estimation (red).
figure 3. block diagram of the spectroscopy system under development at the ncbj.
figure 4. example of an energy spectrum for cobalt-60. edited from [19].

indeed, to maximise the signal-to-noise ratio (snr) of the system, the range of amplitudes of the measured pulses from incoming radiation should correspond to the full scale of the target adc. while the voltage amplitudes at the output of the detector could be coarsely adjusted by changing the supply voltage of the photomultiplier, it is desirable for the front-end to include a software control functionality to digitally fine-tune the input voltage amplitude. in particular, as from the desired specifications, the gain resolution should be finer than 1 db, while the digital control should be carried out through the i2c interface, readily available with a connector on the fmc. secondly, the front-end is supposed to protect the adc by means of an analogue limiter circuit. such adc protection should clamp input voltage pulses of up to an order of magnitude above the adc full scale, providing sufficient protection from the unwanted cosmic pulses which could damage the adc. at the same time, clamping should be sufficiently fast, given that the pulse rise-time at the output of the scintillator is in the order of 10 ns. moreover, a minimum of 70-mhz acquisition bandwidth is required to completely capture the spectrum at the pulsed output of the detector.

3.1. signal conditioning stage
the block diagram of the designed adc front-end is shown in figure 5. it is composed of a diode used for protecting the circuit from the ns-range over-voltage pulses, a digital step attenuator (dsa) for the adjustment of the measurement range, a fully differential amplifier (fda) for signal buffering and single-ended to differential conversion, and an anti-aliasing filter.

figure 5. block diagram of the designed adc analogue front-end.

the input parasitic capacitance ($C_P$) of the front-end should be sufficiently low to allow for the requested bandwidth $f_{BW} = 70$ mhz. considering an input resistance $R_{IN} = 50\ \Omega$, the parasitic capacitance must satisfy $C_P < \frac{1}{2\pi f_{BW} R_{IN}} \cong 45.5$ pf. in the proposed solution, a first clamping stage is embodied by a transient-voltage-suppression (tvs) diode from nexperia (pesd5v0c1bsf), with a rated $C_P = 0.2$ pf and a reverse stand-off voltage of 5 v. the chosen tvs diode is bidirectional, meaning that it will protect against both positive and negative transients, as the pulse polarity depends on the configuration of the radiation detector.

to provide flexible control of the overall gain of the signal conditioning chain, a digital step attenuator (dsa) was used. a 6-bit dsa from analog devices (hmc472a) with a variable attenuation from 0.5 db to 31.5 db in 0.5-db steps was chosen for the design. the attenuation is set directly by ttl/cmos-compatible input pins, which can be interfaced with the i2c control.

signal conversion from single-ended to differential is required to feed the adc. typical solutions include a balun or a fully differential amplifier (fda). while a balun would introduce less additive noise, an fda can provide a suitable additional gain. moreover, while the tvs diode can protect from the largest pulses, the pulse voltage amplitude must be adapted to the ±1.1 v full-scale range of the targeted adc. in this sense, the fda can provide the necessary additional clamping stage. eventually, an fda from texas instruments (lmh6553) was chosen, featuring voltage-controlled output clamping, 900-mhz bandwidth, and a slew rate of 2300 v/μs. a resistive network for the fda was optimised through matlab and lt-spice simulations to obtain an input impedance of 50 ω and a voltage gain of 2 v/v, allowing compensation of all the insertion loss potentially arising in the system. a 10-pf capacitor was placed at the fda output, along with two 50-ω output matching resistors, to implement an anti-aliasing rc filter with a cut-off frequency of ~160 mhz.
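the two rc figures quoted above can be cross-checked with a few lines of code; a minimal sketch, where the assumption that the 10-pf capacitor is driven through the two 50-ω matching resistors in series is ours and is not stated explicitly in the text:

```python
import math

# input bandwidth requirement: C_P < 1 / (2*pi*f_BW*R_IN)
f_bw, r_in = 70e6, 50.0
c_p_max = 1.0 / (2 * math.pi * f_bw * r_in)
print(f"max input parasitic capacitance: {c_p_max * 1e12:.1f} pF")   # ~45.5 pF

# anti-aliasing rc at the fda output: 10 pF seen through the two
# 50-ohm matching resistors (100-ohm equivalent series resistance, assumption)
r_eq, c_filt = 2 * 50.0, 10e-12
f_c = 1.0 / (2 * math.pi * r_eq * c_filt)
print(f"anti-aliasing cut-off: {f_c / 1e6:.0f} MHz")                 # ~159 MHz
```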
3.2. adc reference clock
beyond the signal conditioning, the designed prototype board is also aimed at testing a low-jitter 400-mhz clock for the adc. in this context, let us consider the main noise sources impacting the total snr of the adc ($SNR_{ADC}$), which can be calculated as:

$SNR_{ADC}\,(\mathrm{db}) = -20 \log_{10} \sqrt{\left(10^{-\frac{SNR_Q}{20}}\right)^2 + \left(10^{-\frac{SNR_T}{20}}\right)^2 + \left(10^{-\frac{SNR_J}{20}}\right)^2}$ , (2)

where $SNR_Q$ (in db) is due to quantisation noise, $SNR_T$ (in db) is due to thermal noise, and $SNR_J$ (in db) is due to total jitter. in high-speed pipelined adcs such as the one considered here, the thermal noise limits the snr at low input frequencies, while the clock jitter mainly influences the high frequencies [19]. the $SNR_Q$ for a full-scale sine wave input can be expressed by the well-known formula $SNR_Q\,(\mathrm{db}) = 1.761 + 6.02\,N$, $N$ being the number of bits of the adc. considering that the nominal resolution of the ads5474 is 14 bits, it holds that $SNR_Q = 86.04$ db. as from the datasheet [20], the limit due to thermal noise is estimated at $SNR_T = 70.3$ db, while the estimated overall maximum performance is $SNR_{ADC} = 70.1$ db. using these data in the formula above, one can deduce a requirement for the noise due to jitter, resulting in $SNR_J \geq 75.2$ db.

let us consider the relationship between the total jitter affecting the sampling process and the resulting impact on the snr [18]:

$SNR_J = -20 \log_{10}\!\left(2\pi f_{IN}\, t_{J,TOT}\right)$ , (3)

where $f_{IN}$ is the frequency of the input signal to be sampled, and $t_{J,TOT}$ is the rms value of the total jitter. the total jitter impacting the a/d process is a function of the timing jitter of the external clock source ($t_{J,EXT}$) as well as of the aperture jitter ($t_{J,IN}$) of the adc, which depends on the noise of the internal clock buffer [20]:

$t_{J,EXT} = \sqrt{t_{J,TOT}^2 - t_{J,IN}^2}$ , (4)

where $t_{J,IN} = 103$ fs for the ads5474.

figure 6. external clock source rms jitter ($t_{J,EXT}$) required to keep $SNR_{ADC} > 70.1$ db, as a function of the input frequency ($f_{IN}$).

figure 6 shows the relationship between the requirement for the maximum rms timing jitter of the external clock ($t_{J,EXT}$) and the input sinewave frequency ($f_{IN}$). given that the maximum slew rate of the voltage pulses at the output of the photomultiplier is $SR_{MAX} = 76.5$ v/μs, and considering that this value corresponds to a pure sinewave of $f_{IN} = \frac{SR_{MAX}}{2\pi A} \cong 11.1$ mhz, the plot in figure 6 indicates that it must hold $t_{J,EXT} \leq 2492$ fs to keep $SNR_J \geq 75.2$ db (i.e., $SNR_{ADC} \geq 70.1$ db).

to satisfy the design jitter requirements, a clock-generating ic (lmk03318 from texas instruments) was combined with a 25-mhz crystal oscillator reference (txc 7m-25meeq). also, an additional input port provides the option of connecting an external reference. the ic is controllable by the i2c interface and should deliver rms timing jitter in the 100-fs range as from the datasheet, which is deemed enough to satisfy the minimum requirement for the $SNR_{ADC}$.
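the jitter budget derived above can be reproduced numerically; a minimal sketch, where the sine amplitude $A = 1.1$ v (the adc single-ended full scale) is our assumption for the slew-rate-equivalent sinewave:

```python
import math

n_bits = 14
snr_q  = 1.761 + 6.02 * n_bits        # quantisation-limited snr: 86.04 dB
snr_j  = 75.2                         # jitter snr budget derived in the text, dB
sr_max = 76.5e6                       # measured maximum pulse slew rate, V/s
a      = 1.1                          # sine amplitude, V (adc full scale, assumption)

f_in = sr_max / (2 * math.pi * a)                      # equivalent sine frequency, ~11.1 MHz
t_j_tot = 10 ** (-snr_j / 20) / (2 * math.pi * f_in)   # eq. (3) inverted
t_j_in  = 103e-15                                      # adc aperture jitter, s
t_j_ext = math.sqrt(t_j_tot ** 2 - t_j_in ** 2)        # eq. (4)

print(f"f_in = {f_in / 1e6:.1f} MHz")
# ~2.5 ps total budget, consistent with the ~2492 fs quoted above
print(f"t_j_tot = {t_j_tot * 1e15:.0f} fs, t_j_ext = {t_j_ext * 1e15:.0f} fs")
```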
3.3. board implementation
a four-layer printed circuit board (pcb) was designed to implement the front-end prototype. the top layer is used for routing all analogue and a subset of digital traces, as well as for mounting all the components. the second layer is a ground plane, which serves as the reference plane for the controlled-impedance traces on the top layer. the third layer is used for routing power connections, while the bottom layer contains the remaining digital traces. to preserve signal integrity, all signal traces in the analogue front-end, as well as the clock signal path, were calculated to have a 50-ω characteristic impedance. in addition, these traces were also shielded from interference by via fences. special care was also taken when laying out the step-down switching regulator, as it is likely to generate the most electromagnetic interference (emi).

the board is equipped with gold-pin headers for i2c communication with an external device, e.g., a microcontroller. to control the dsa via the i2c interface, a max7320 i/o expander was used, and a set of headers allows the dsa control to be selected either by i2c or manually by a dual in-line package (dip) switch.

concerning the power supply, linear low-dropout (ldo) regulators, offering voltage stabilisation, very low noise, and no additional interferences affecting the adc readouts, are used to convert the 12 v supply received from the carrier board into lower voltages. however, ldos are quite inefficient when the difference between the regulated output voltage and the input voltage is large. the lowest regulated voltage is 1.8 v, as required by the output buffers of the clock generator, which would cause the ldo to work at an efficiency below 15 %. to avoid excessive power dissipation in this specific case, the 12 v input voltage was first converted to 6 v with a high-efficiency step-down switching regulator (lt8069a from analog devices), offering an efficiency above 90 %. the switching frequency was set to 1.8 mhz to allow for a small inductor in the step-down converter, which is important on an fmc board with limited space. additionally, the device has spread-spectrum functionality, lowering emi [21] and improving signal integrity. the output 6 v voltage is eventually converted into the required positive and negative supply voltages using ldos. two sets of gold-pin headers are also used for every linear regulator to enable the user to switch it on/off and to test-point the regulated voltage. a photo of the manufactured board is shown in figure 7.

figure 7. photo of the designed front-end board.

4. experimental characterisation
in this section, the characterisation performed on the designed prototype is described. it includes the phase noise measurement for the on-board adc clock and a metrological characterisation in terms of total harmonic distortion (thd) and effective number of bits (enob) for the whole acquisition path including the front-end. also, the behaviour of the front-end is measured under application-like pulsed excitation.

figure 8. set-up for the characterisation of the phase noise of the on-board clock of the adc.
figure 9. phase noise of the ti lmk03318 clock generator with the crystal oscillator reference (10 output correlations enabled). the plot includes the spurious components due to the power supply (in blue).
4.1. clock jitter characterisation
the clock-generating device (lmk03318) was programmed through the i2c interface, connecting the circuit to a microcontroller board (nucleo-f446re). the device was configured to provide one differential output of a 400-mhz signal at the sma port, while all other outputs were terminated with 100-ω resistors. to evaluate the rms time jitter, phase noise measurements were carried out by means of the setup in figure 8, using an agilent e5052a signal source analyser and a tti cpx400dp power supply for powering the prototype board. phase noise measurements provide the single-sideband noise density spectrum of a signal at different offset frequencies from the carrier at 400 mhz. the rms jitter value is found by integrating the phase noise spectrum over a range of offset frequencies (in this case, 10 hz - 10 mhz). for this device, the obtained rms jitter is ≈ 440 fs in the configuration with the on-board 25-mhz reference, despite the presence of spurious components coming from the power supply, e.g., at 30 hz, 50 hz, 120 hz and 160 hz (see figure 9). when removing these most evident spurious components via software, the rms jitter value reaches ≈ 312 fs. at the same time, it is worth noting that no spurious components are found at 1.8 mhz (corresponding to the switching frequency of the main on-board power converter), demonstrating that emi was well contained. the measured jitter performance validates the suitability of the designed clock generation.

4.2. metrological characterisation of the acquisition channel
a metrological characterisation of the prototype front-end alone, i.e., neglecting any effect of the adc at the output, was performed. to this aim, we adopted the measurement setup shown in figure 10 and figure 11. the arbitrary waveform generator (awg) output (agilent 81150a) is split by means of a 50-ω-terminated divider to feed, with the same excitation, two parallel acquisition paths. in the first path, the split signal is applied to the device-under-test (dut), i.e., the analogue front-end. a high-linearity instrumentation amplifier (tegam 4040) was used to convert the differential output of the front-end into a single-ended signal for the first channel of a 100-mhz, 100-msa/s, 14-bit digitiser board (national instruments pxi-5122) which, given its much higher linearity and lower noise, is here considered as an ideal acquisition device. in the second path, i.e., the reference acquisition path, the output of the splitter directly feeds the second channel of the pxi-5122 digitiser board.

a preliminary test procedure was carried out using sinusoidal tone excitations up to 1 mhz, so as to avoid introducing any spurious effect due to the digitiser, e.g., the initial roll-off of the anti-alias filter. the measurement data was then processed according to the ieee 1241 standard [22]. for this specific characterisation, no reference acquisition is needed, thus only the receiver chain comprising the front-end under test was used. the considered figures of merit (fom), i.e., thd and enob, are reported in figure 12 and figure 13, respectively, for a set of acquired frequencies and amplitudes. as can be seen, the prototype front-end introduces a clear dependency on the input signal amplitude. while the distortion at -12 dbfs is negligible over the measured frequency range, it substantially deteriorates when the excitation approaches the full scale, reporting a thd as high as -30 db and a minimum enob of around 5 at -1 dbfs.
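for reference, a rough fft-based estimate of the two figures of merit can be sketched as follows; this is a simplified stand-in for the sine-fit procedures of ieee 1241 actually used in the paper, and it assumes a near-coherent acquisition so that the harmonic energy stays close to the expected bins (and that all considered harmonics fall below nyquist):

```python
import numpy as np

def thd_enob(x, fs, f0, n_harm=9):
    """rough fft-based thd (n_harm harmonics) and enob estimate from a sampled
    sine record; a simplified sketch, not the ieee 1241 sine-fit method."""
    n = len(x)
    spec = np.abs(np.fft.rfft((x - np.mean(x)) * np.blackman(n)))
    b0 = int(round(f0 / fs * n))                      # fundamental bin
    fund = spec[b0 - 2:b0 + 3].max()                  # allow +-2 bins of leakage
    harm = [spec[k * b0 - 2:k * b0 + 3].max() for k in range(2, n_harm + 1)]
    thd = 20 * np.log10(np.sqrt(sum(h ** 2 for h in harm)) / fund)
    # sinad: everything that is not the fundamental counts as noise + distortion
    noise = np.sqrt(np.sum(spec ** 2) - fund ** 2)
    enob = (20 * np.log10(fund / noise) - 1.76) / 6.02
    return thd, enob
```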
this could be attributed to the nonlinear characteristic of the tvs diode, as well as to the low-frequency performance of the switching circuitry inside the digital step attenuator.

figure 10. block diagram of the measurement setup for the characterisation of the adc analogue front-end.
figure 11. picture of the measurement setup for the characterisation of the adc analogue front-end.
figure 12. total harmonic distortion (9 harmonics) of the acquisition chain including the adc front-end.
figure 13. effective number of bits of the acquisition chain including the adc front-end.

4.3. application-like pulse measurements
the prototype front-end was tested with application-like spectroscopy measurements. indeed, as discussed in section 2, in a realistic scenario the front-end would only receive specific signals, namely exponential pulses of a given duration and different amplitudes. therefore, an additional characterisation was performed employing pulsed excitations modelled from the experimental data of an actual scintillator. these pulsed waveforms could be easily generated by programming the awg, yet within its 50-mhz limit in high-amplitude mode. then, the impact of the front-end on the pulse shape was quantified by comparing the waveforms distorted by the front-end under test with those acquired through the ideal reference path. considering that the pxi-5122 is not as fast as the ads5474 adc targeted for the final application, slower pulse excitations with 1-µs and 0.5-µs pulse widths were used to allow for a sufficiently high number of acquired samples per pulse, while still respecting the pulse shape produced by the detector. both positive and negative polarities were considered, and sequences of 20 pulses with increasing amplitudes, from zero up to the full scale of the adc front-end, were programmed into the awg. all acquired signals were normalised in amplitude and delay to allow for numerical comparison. additionally, spectrum equalisation was performed to compensate for the baseline wander effect due to the presence of ac-coupling capacitors in the front-end.

to characterise the performance of the front-end with respect to the ideal reference, two quantities were used. firstly, the normalised root mean square deviation (nrmsd) between the waveforms acquired from the two channels was calculated. secondly, each of the pulses was integrated, as done for gamma spectroscopy measurements. then, the nrmsd between the integral values calculated from the actual and reference receivers was obtained, quantifying the deviation that would directly map into the energy spectrum of the measured material. table 1 and table 2 report the calculated deviations, while figure 14 and figure 15 show the shape of a single pulse at full-scale amplitude for both acquisition paths.

figure 14. 1-µs positive pulse with full-scale amplitude, after passing through the two signal acquisition chains (reference and the one including the dut).
figure 15. 1-µs negative pulse with full-scale amplitude, after passing through the two signal acquisition chains (reference and the one including the dut).

while the negative pulses do not show substantial deviation from the reference, the positive pulses are significantly distorted, hinting at the presence of nonlinear dynamic effects in the front-end.
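the two deviation metrics just introduced can be written compactly; a minimal sketch, where the normalisation by the reference peak-to-peak value is our assumption, as the paper does not spell out the exact normalisation used:

```python
import numpy as np

def nrmsd(test, ref):
    """normalised rms deviation (%) between a waveform acquired through the
    front-end under test and the reference one, both already aligned in
    amplitude and delay; peak-to-peak normalisation is an assumption."""
    return 100 * np.sqrt(np.mean((test - ref) ** 2)) / np.ptp(ref)

def integral_nrmsd(test_pulses, ref_pulses, dt):
    """same deviation computed on the pulse integrals, i.e., on the values
    that would map into the energy spectrum."""
    e_test = np.array([np.trapz(p, dx=dt) for p in test_pulses])
    e_ref = np.array([np.trapz(p, dx=dt) for p in ref_pulses])
    return nrmsd(e_test, e_ref)
```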
5. digital linearisation
to improve the linearity performance of the front-end under test, a dpd (i.e., digital linearisation) method was implemented. with post-distortion, the acquired signals are post-processed exploiting an inverse model of the whole receiver path. to this aim, we used a memory polynomial model, a well-known and effective formulation derived from the volterra series representation, in which the memory cross-terms are neglected [13]. such a mathematical description is well suited for modelling nonlinear dynamic behaviour, and it finds broad use in the digital pre-distortion of radiofrequency (rf) power amplifiers. the memory polynomial model is formulated as follows [13]:

$y(n) = \sum_{m=0}^{M} \sum_{k=1}^{K} a_{mk}\, x(n-m)\, \left|x(n-m)\right|^{k-1}$ , (5)

$M$ being the memory depth and $K$ the nonlinear order. in theory, a high model order could provide high prediction accuracy, as it would better describe stronger nonlinearities. however, the identification of many model coefficients can often become ill-conditioned, since an increasingly large dataset of independent measured responses is needed to properly fit all the coefficients. in this research, tests were carried out by progressively increasing $K$ and $M$ for a dataset of application-like pulsed waveforms; it was found that increasing the model order beyond $K = 5$ and $M = 3$ did not provide improved prediction capabilities. out of the different tests, two cases were considered: a static ($M = 0$) polynomial model with $K = 5$, and a memory polynomial model with $K = 5$ and $M = 3$. rather than inverting the direct model by means of iterative or nonlinear optimisation algorithms, an indirect coefficient learning approach was used for the identification of the inverse model from time-domain measurement data, using the method in [23].

in a first characterisation, the inverse model was trained on a global dataset composed of acquisitions of sine-wave excitations at 10 khz, 100 khz, and 1 mhz. then, a new experimental characterisation (again, according to ieee 1241-2010) of the whole receiver chain, composed of the combination of the analogue front-end and the dpd, was performed and repeated by sweeping the input frequency from 1 khz to 1 mhz. the thd and enob for the linearised receiver are plotted in figure 16 and figure 17, respectively. in the 1 khz - 1 mhz range, an improvement in thd of up to 15 db was obtained with the static polynomial model, and of up to 20 db with the memory polynomial model. the corresponding improvement in enob is more than 2 bits using the static polynomial model, and more than 3 bits using the memory polynomial model. it can be noticed that the linearisation is particularly effective in the band from 10 khz to 1 mhz, which corresponds to the particular frequencies used for post-distorter coefficient training.

figure 16. total harmonic distortion (9 harmonics) of the acquisition chain including the adc front-end after digital post-distortion, measured at -1 dbfs.
figure 17. effective number of bits of the acquisition chain including the adc front-end after digital post-distortion, measured at -1 dbfs.

dpd was also applied for the acquisition of pulsed waveforms. in this case, the inverse models were separately trained for 0.5-µs and 1-µs pulse lengths, as well as for positive and negative polarities. figure 18 shows the pulse shape obtained after post-distortion using the different models, in comparison to the reference channel. table 1 and table 2 summarise the values of the deviations from the reference in all tested cases.
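a minimal least-squares sketch of the post-distorter described above is given below: the regression matrix implements eq. (5), and the inverse model is fitted by regressing the reference (ideal) samples on the distorted receiver output, in the spirit of the indirect learning architecture of [23]; the function and variable names are illustrative.

```python
import numpy as np

def mp_matrix(x, k_ord, m_mem):
    """regression matrix of the memory polynomial in eq. (5): one column per
    term x(n-m)*|x(n-m)|**(k-1), for m = 0..M and k = 1..K."""
    cols = []
    for m in range(m_mem + 1):
        xm = np.concatenate((np.zeros(m), x[:len(x) - m]))  # delayed, zero-padded copy
        for k in range(1, k_ord + 1):
            cols.append(xm * np.abs(xm) ** (k - 1))
    return np.column_stack(cols)

def fit_postdistorter(y_dist, x_ref, k_ord=5, m_mem=3):
    """fit the inverse model by least squares, regressing the reference samples
    on the distorted samples acquired through the front-end under test."""
    coeffs, *_ = np.linalg.lstsq(mp_matrix(y_dist, k_ord, m_mem), x_ref, rcond=None)
    return coeffs

def apply_postdistorter(y_dist, coeffs, k_ord=5, m_mem=3):
    """post-distortion: run newly acquired, distorted samples through the inverse model."""
    return mp_matrix(y_dist, k_ord, m_mem) @ coeffs
```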
a significant reduction in the deviations was achieved for the positive pulses, which showed the highest distortion. for example, the nrmsd was lowered by more than four times in the case of 1-µs pulses. for negative pulses, the post-distortion had less impact, as the behaviour of the front-end was found to introduce less distortion. using an inverse model with memory brings a further improvement only in the case of 1-µs positive pulses, as the other cases do not show substantial dynamic effects.

table 1. nrmsd (%) with and without mp-based post-distortion for different pulsed excitations.

post-distortion     positive polarity      negative polarity
                    1 µs      0.5 µs       1 µs      0.5 µs
none                5.19      3.34         1.31      1.77
K = 5, M = 0        2.39      1.71         1.23      1.75
K = 5, M = 3        1.2       1.09         1.0       0.83

table 2. integral nrmsd (%) with and without mp-based post-distortion for different pulsed excitations.

post-distortion     positive polarity      negative polarity
                    1 µs      0.5 µs       1 µs      0.5 µs
none                3.44      2.62         1.14      0.88
K = 5, M = 0        1.35      1.19         1.03      0.84
K = 5, M = 3        0.97      1.0          1.03      0.79

figure 18. 1-µs positive pulse with full-scale amplitude after digital post-distortion.

6. measurements using the spectrometry system
the proposed methodology was tested with the complete spectrometry system setup at the ncbj, corresponding to the block diagram previously introduced in figure 3. a photo of the deployed system is shown in figure 19. it includes an in-house lanthanum chloride scintillator developed at the ncbj, coupled to the pmt and placed next to a source of gamma radiation, namely a sample of caesium-137 (behind the green lead shield). the pmt is supplied with 1.6 kv from a high-voltage source. the fmc board includes the already introduced ads5474 dual-channel adc, while a xilinx zc-702 fpga carrier board contains the zynq-7000 soc for the real-time digital processing of the sampled data.

figure 19. the gamma spectroscopy measurement set-up at ncbj, including the source of radiation (caesium-137), the analogue front-end (dut) and the two-channel acquisition system.

the voltage signal from the radiation detector is divided via a resistive power splitter into reference and test paths, following the same approach as in figure 10 and figure 11 for the pxi-based test setup previously adopted. the reference path is directly connected to one of the digitiser channels, while the test path consists of the designed analogue front-end board, i.e., the device under test (dut), whose output is connected to the second channel of the digitiser. the dut is supplied from a hameg hmp4040 power supply.

the spectrometry setup was used to capture a dataset consisting of 10⁴ events from the scintillator, whose associated voltage pulses were concurrently acquired by both the reference and test adc channels. the pulses have a duration of ~100 ns and amplitudes of up to 1 v. the waveform of a single captured pulse is shown in figure 20. digital linearisation was applied to the captured pulses using the same approach as in section 5. more precisely, the acquisitions from both the reference and dut paths made it possible to identify the inverse dut model, and then to perform digital linearisation by adopting the same post-distortion model formulation and orders as in section 5.
as shown in figure 20, the nonlinear pulse waveform displays an appreciably lower amplitude w.r.t. the reference acquisition, as well as a distorted shape, whereas both the static and the nonlinear dynamic post-distortion allow for a linearised pulse acquisition.

figure 20. example of a pulse waveform from the radiation detector w/ and w/o digital linearisation. comparison with the reference a/d acquisition.

just as performed in section 5, the individual pulses were integrated to calculate a value directly proportional to their energy. also in this case, the deviation between the channels is quantified using the nrmsd between the waveforms as well as the nrmsd between the integral values computed from the waveforms. the obtained values confirm that the deviation from the reference is significantly reduced by the digital linearisation. in particular, the static polynomial model ($K = 5$, $M = 0$) reduced the nrmsd from 5.86 % to 3.8 %, while adding the memory terms ($K = 5$, $M = 3$) allowed a further reduction down to 1.98 %. similarly, the nrmsd of the integral values was reduced from 5.52 % to 1.99 % with the static polynomial model, and to 1.09 % using the model with memory (see table 3).

table 3. nrmsd (%) and integral nrmsd (%) w/ and w/o mp-based post-distortion for the pulse waveform measured with the ncbj acquisition system in figure 19, using caesium-137 as a radiation source.

post-distortion     nrmsd (%)     integral nrmsd (%)
none                5.86          5.52
K = 5, M = 0        3.8           1.99
K = 5, M = 3        1.98          1.09

finally, the energy spectrum from a 250-channel histogram of the integral values was estimated both with and without digital linearisation (figure 21). as the pulses with low amplitude are less distorted, the lower-energy part of the spectrum is reasonably similar to the reference. however, at higher pulse amplitudes (i.e., at higher energies) the whole spectrum content, and most notably the gamma photo-peak (~ 662 kev for caesium-137), is shifted towards lower energies by the non-compensated nonlinearities (figure 21a). the energy spectrum linearised using the static model (figure 21b) shows a reduced yet still substantial deviation, whereas the dpd with memory (figure 21c) ensures the best alignment with the reference spectrum over the entire energy range.

figure 21. energy spectrum of caesium-137 estimated using the spectrometry setup including the designed analogue front-end: a) without linearisation; b) with static dpd (mp model with $K = 5$, $M = 0$); c) with nonlinear-dynamic dpd (mp model with $K = 5$, $M = 3$). full-scale energy is 723 kev.
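the spectrum-building step just described can be sketched in a few lines; the proportional mapping of the channel axis to kev using the 723-kev full-scale value is our simplification of the calibration actually performed with known photo-peaks:

```python
import numpy as np

def energy_spectrum(pulses, dt, n_channels=250, e_full_scale_kev=723.0):
    """integrate each (linearised) pulse and bin the values into a 250-channel
    histogram, as in figure 21; the linear kev scaling is a simplification."""
    energies = np.array([np.trapz(p, dx=dt) for p in pulses])
    counts, edges = np.histogram(energies, bins=n_channels, range=(0.0, energies.max()))
    kev = 0.5 * (edges[:-1] + edges[1:]) / energies.max() * e_full_scale_kev
    return kev, counts
```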
7. conclusion
in this work, the design, characterisation, and performance assessment of an analogue front-end for use in the gamma spectrometry system of the ncbj was presented. the front-end was designed to properly condition the pulsed signals at the output of the detector and to protect the receiver from high-voltage spikes due to cosmic radiation. it also provides a suitable, low-timing-jitter clock reference for the adc within the receiver. an extensive characterisation of the protected receiver equipped with the designed front-end made it possible to quantify the reduction of the dynamic range inevitably caused by the analogue limiting stage, as well as by the necessary signal conditioning. a digital linearisation approach, based on the identification of an inverse nonlinear dynamic model of the receiver and its application for post-distortion, made it possible to recover the linearity necessary to guarantee the accurate acquisition of the fast pulses generated by the radiation detector. in-field experiments involving actual spectrometry measurements of a caesium-137 radiation source confirmed the suitability of the designed front-end, as well as the effectiveness of the digital linearisation approach, which compensates for the critical peak shifts in the estimated energy spectra caused by receiver nonlinear distortion.

acknowledgement
the authors would like to thank prof. a. lewandowski (institute of electronic systems, warsaw university of technology, warsaw, poland) for financing the conference fee.

references
[1] r. l. grasty, r. b. k. shives, applications of gamma ray spectrometry to mineral exploration and geological mapping, exploration 97: fourth decennial conference on mineral exploration, 1997.
[2] f. ghanbari, soil sampling and analysis (using high resolution gamma spectroscopy), 3rd rmcc workshop, april 2007. online [accessed 05 june 2021]. https://www.osti.gov/servlets/purl/1267355
[3] c. l. fontana et al., detection system of the first rapidly relocatable tagged neutron inspection system (rrtnis) developed in the framework of the european h2020 c-bord project, physics procedia 90 (2017), pp. 279-284. doi: 10.1016/j.phpro.2017.09.010
[4] g. f. knoll, radiation detection and measurement, john wiley & sons, 2010.
[5] v. pálfi, t. virosztek, i. kollár, full information adc test procedures using sinusoidal excitation, implemented in matlab and labview, acta imeko 4(3) (2015), pp. 4-13. doi: 10.21014/acta_imeko.v4i3.257
[6] h. zhengbing, r. kochan, o. kochan, s. jun, h. klym, method of integral nonlinearity testing and correction of multi-range adc by direct measurement of output voltages of multi-resistors divider, acta imeko 4(2) (2015), pp. 80-84. doi: 10.21014/acta_imeko.v4i2.230
[7] d. hummels, performance improvement of all-digital wide-bandwidth receivers by linearization of adcs and dacs, measurement 31(1) (2002), pp. 35-45. doi: 10.1016/s0263-2241(01)00012-4
[8] e. balestrieri, p. daponte, s. rapuano, a state of the art on adc error compensation methods, ieee transactions on instrumentation and measurement 54(4) (2005), pp. 1388-1394. doi: 10.1109/tim.2005.851083
[9] h. lundin, p. händel, look-up tables, dithering and volterra series for adc improvements, in: design, modeling and testing of data converters, springer, 2014, pp. 249-275.
[10] d. mirri, g. iuculano, p. a. traverso, g. pasini, f. filicori, non-linear dynamic system modelling based on modified volterra series approaches, measurement 33(1) (2003), pp. 9-21. doi: 10.1016/s0263-2241(02)00037-4
[11] d. mirri, g. pasini, p. a. traverso, f. filicori, g. iuculano, a finite-memory discrete-time convolution approach for the nonlinear dynamic modelling of s/h-adc devices, computer standards & interfaces 25(1) (2003), pp. 33-44. doi: 10.1016/s0920-5489(02)00076-4
[12] n. bjorsell, p. suchánek, p. handel, d. ronnow, measuring volterra kernels of analog-to-digital converters using a stepped three-tone scan, ieee transactions on instrumentation and measurement 57(4) (2008), pp. 666-671. doi: 10.1109/tim.2007.911579
[13] j. kim, k. konstantinou, digital predistortion of wideband signals based on power amplifier model with memory, electronics letters 37(23) (2001), pp. 1417-1418.
[14] t. kowalski, b. dabek, g. p. gibiino, s. b. habib, p. barmuta, a measurement setup for digital pre-distortion using direct rf undersampling, proc. 22nd international microwave and radar conference (mikon), ieee, 2018, pp. 438-439.
[15] p. a. traverso, g. pasini, a. raffo, d. mirri, g. iuculano, f. filicori, on the use of the discrete-time convolution model for the compensation of non-idealities within digital acquisition channels, measurement 40(5) (2007), pp. 527-536. doi: 10.1016/j.measurement.2006.09.011
[16] k. shi, a. redfern, blind volterra system linearization with applications to post compensation of ad nonlinearities, ieee international conference on acoustics, speech and signal processing (icassp), 25-30 march 2012, kyoto, japan, pp. 3581-3584. doi: 10.1109/icassp.2012.6288690
[17] z. alina, o. amrani, on digital post-distortion techniques, ieee transactions on signal processing 64(3) (2015), pp. 603-614. doi: 10.1109/tsp.2015.2477806
[18] t. kowalski, g. p. gibiino, p. barmuta, j. szewinski, p. a. traverso, digital post-distortion of an adc analog front-end for gamma spectroscopy measurements, proc. 24th imeko tc4 international symposium and 22nd international workshop on adc and dac modelling and testing, palermo, italy, 14-16 september 2020, pp. 141-145. online [accessed 17 june 2021]. https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-27.pdf
[19] y. selim, y. lasheen, m. ali, t. zakla, removal of alcyon ii, cgr, mev 60co teletherapy head and evaluation of exposure dose, journal of environmental protection 4(12) (2013), pp. 1435-1440. doi: 10.4236/jep.2013.412164
[20] texas instruments, 14-bit, 400-msps analog-to-digital converter, ads5474 datasheet, july 2007 [revised dec. 2017]. online [accessed 05 june 2021]. https://www.ti.com/lit/ds/symlink/ads5474.pdf
[21] k. scott, g. zimmer, spread spectrum frequency modulation reduces emi, technical article, analog devices, 2014.
[22] ieee std 1241-2010, ieee standard for terminology and test methods for analog-to-digital converters.
[23] c. eun, e. j. powers, a new volterra predistorter based on the indirect learning architecture, ieee transactions on signal processing 45(1) (1997), pp. 223-227. doi: 10.1109/78.552219
digital twin: a new perspective for cultural heritage management and fruition

acta imeko, issn: 2221-870x, march 2022, volume 11, number 1, pp. 1-7

francesco gabellone1
1 national research council, institute of nanotechnology, lecce, italy

section: research paper

keywords: digital twins; oil-mill; photogrammetry; 3d; virtual visit

citation: francesco gabellone, digital twin: a new perspective for cultural heritage management and fruition, acta imeko, vol. 11, no. 1, article 5, march 2022, identifier: imeko-acta-11 (2022)-01-05

section editor: fabio santaniello, university of trento, italy

received march 6, 2021; in final form february 21, 2022; published march 2022

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: francesco gabellone, e-mail: francesco.gabellone@nanotec.cnr.it

abstract
this paper describes an interesting distance-visit approach developed during the covid-19 emergency, applied to an underground oil-mill in the town of gallipoli (puglia, italy). the limitations of access for people with disabilities and the complete closure of italian museums during the emergency suggested the development of an immersive platform, in the broader perspective of using the output in accordance with a digital twin approach. a tool to support an innovative visit method was therefore realized: a virtual visit assisted by a real remote guide, hereinafter referred to as "live-guided tour", with e-learning functionality. all this has been made possible starting from a three-dimensional model of an underground oil-mill, from which the stereoscopic scenes were extracted. stereoscopy is very important for the overall success of the project, because this aspect influences the level of interest, the immersion and the ability to generate emotion and wonder. to the best of the author's knowledge, this is the only system available today for a shared virtual visit to an inaccessible context which implements many features of a vr visit in a multi-user and multi-platform environment.

1. the digital twins, next future
the term digital twin was originally developed to improve manufacturing and industrial processes [1]. digital twins were subsequently defined as digital replications of physical entities that enable data to be seamlessly transmitted from the physical to the virtual world. digital twins facilitate the means to monitor, understand and optimize the functions of all physical entities and provide people continuous feedback to improve quality of life and well-being [2]. the digital twin is at the vanguard of the industry 4.0 revolution, enabled through advanced data analytics and internet of things (iot) connectivity. the iot has increased the volume of heterogeneous usable data from manufacturing, healthcare and smart city applications [3]. the iot environment provides an important resource for predictive maintenance and error detection, in particular for the future health of manufacturing processes and smart city developments, while also aiding in fault detection and traffic management in the smart city of the near future. since a perfect integration between the iot and data analysis will be necessary in the near future, this important need will be met thanks to the creation of a connection between physical and virtual twins. a digital twin environment allows smart cities to perform quick analyses and, using a 5g network, to make real-time decisions on that basis.

the terminology was first introduced by michael grieves, research professor (florida institute of technology), in a 2003 presentation, and later documented in a white paper where the future developments of digital twins are traced. the first articles give a definition of "digital model", described as a digital version of a pre-existing or planned physical object. an important feature distinguishing the old definition of "digital model" is that there is no form of automatic data exchange between the physical system and the digital model. this means that a change made to the physical object has no impact on the digital model, and vice versa. when data flow between an existing physical object and a digital object, and they are fully integrated in both directions, this establishes a "digital twin": a change made to the physical object automatically leads to a change to the digital object and vice versa. a digital twin consequently provides an intimate connection with its real counterpart, and in some way determines an influence on it.
an environmental monitoring system on the real object will be evaluated on the digital twin and, consequently, interventions on the internal microclimate, for example, will be managed remotely with effects on other environmental parameters. if a digital twin is connected with iot systems or sensors, it allows a remote intervention that, through the digital object, affects the real object. in the near future, the use of digital twins will potentially be very common; they will grow in step with the rapid developments in connectivity via the iot within smart cities. as the number of smart cities grows, so will the use of digital twins. moreover, the more data we collect from the iot sensors embedded in the main services of a city, the greater the opportunities will be for economic growth and for the development of innovative new start-ups able to provide those services. digital twins can be used to aid in the planning and development of current smart cities and to help with the ongoing development of new ones, also in the field of energy saving. these data can facilitate growth by making it possible to create a living test bed within a virtual twin that can achieve two goals: first, to test scenarios, and second, to allow digital twins to learn from the environment by analysing changes in the collected data [4]. therefore, the digital twin can evolve to become a true digital replica of potential or actual physical resources (the physical twin), processes, people, places, infrastructures, systems and devices that can be used for various purposes. we can compare the digital twin to other mirror-model concepts, which aim to model part of the physical world with its cyber representation [5].

2. digital twins for cultural heritage enjoyment: a future perspective?
from this analysis, however, a criticality emerges. there is still a real difficulty in fully treating a cultural heritage asset as a digital twin because, as grieves himself pointed out, a mirrored-spaces model always refers to an extremely dynamic representation [6]. the real dimension and the virtual dimension, in the primitive definition, remained connected during the entire life cycle of the system, going through all the phases of creation, production and operation. the definition of digital twin is still closely related to industrial production and its processes [7]. a necessary condition for the realization of a digital twin is the existence of physical products in real space, of virtual products in virtual space, and of systems connecting the flow of data that unite physical and virtual space.
over the past 30 years, product and process engineering teams have used 3d rendering and process simulation to validate the feasibility of an asset within a production process. a 3d model allows the entire system to be merged into a virtual space, so that conflicts and critical issues are discovered more economically and quickly. with these premises, a product is only released when all the problems have been solved. thanks to the digital twin it is possible to test and understand how systems and products will behave in a wide variety of environments, using virtual space and simulation as a predictive moment. all this is possible by combining different technologies around a single database that contains all plant or product design data, simulation software, real-time data from the production environment and much more. the advantages are many, starting from the possibility of easily accessing data from many different sources, aggregating and visualizing them through a single synchronized and shared portal, and being able to add contextual information.

many of the requirements among those listed will certainly not be met, due to the very nature of cultural heritage, which is not linked to the production industry. however, we can certainly imagine a future populated by digital twins of cultural heritage that systematically respond to a series of needs, ranging from conservation to knowledge of places, to enjoyment and enhancement. in a new perspective of growth and use of the iot, digital twins can become real "models of knowledge", integrated into a wider domain of elements. we can create a cyberspace with digital models that can ease the use of city facilities otherwise affected by obstacles and limitations of use. indeed, with other assumptions, this path was opened many years ago. in fact, the term digital heritage refers to cultural heritage that exists in relation to a digital model, a copy or replica of the physical (real) model, but often it is intended as "digital media in the service of preserving cultural or natural heritage". therefore, if the digital heritage is part of a broader system (for example, of a smart city), where each digital resource communicates or is connected in different ways to the others, then this could mean an evolution of the digital heritage in the direction of digital twins [8]. in an environment monitored by sensors and intelligent systems, it is possible to achieve an effective way of using digital models to ensure an ideal interaction with real spaces. this has always been done in the past: virtual archaeology pursues precisely these objectives, but if these models are connected to each other in a smart city, then they will also be part of an ecosystem. considering that, it is possible to convert old digital scenarios into smart visits of cultural heritage in immersive and participatory virtual environments, within enabling platforms.

figure 1. the complete 3d model of the oil mill and the first light simulation without textures.

the starting point is to make the virtual visit more collective, more interactive and more participatory, with the possibility of receiving an evaluation of the level of understanding of the communicated contents. the ultimate goal is to integrate these virtual scenarios into an iot system, where each of them can be connected with other 3d models that can be used in a digital twin perspective.
these 3d scenarios can give information on their physical counterpart: information relating to environmental monitoring, energy consumption, and state of conservation. at the same time, we can use these models to bridge the gap faced by visitors with disabilities, or use them effectively as a distance-visit tool. the important condition for making the digital twin effective with regard to communicative issues is to obtain an ultra-realistic virtual restitution, in order to be able to offer emotion to the visitor, an emotion similar to the one experienced during a real visit [7]. the second condition is to have an accurate 3d model (figure 1). physical measurements can be effective, obviously, only in a reliable 3d space. when these two requirements are both satisfied, it is possible to deal with all the developments of the world of digital twins.

3. live-guided tour
an example of this approach was carried out during the covid-19 lockdown, applied to an underground oil-mill in the town of gallipoli (puglia, italy) [9]. the main and most profitable activity, which ruled the fortune of terra d'otranto for many centuries, was the oil industry, carried out in over 2500 hypogeous and semi-hypogeous trappeti (oil mills). in these sites only lampante oil (oil for lamps) was produced, i.e., for industrial use, exported mainly to france and england, where it was used as a lubricant in wool industries and soap factories [10]. the oil mill described here is entirely dug out of the rock and is actually preserved in its original form, with a very "organic" appearance, without regular or squared surfaces. the three-dimensional survey must be carried out taking care to faithfully reconstruct its natural morphology, but above all the colour of the walls and the dark appearance of the interior.

the limitations of access for people with disabilities and the complete closure of italian museums during the covid-19 emergency suggested the development of an immersive platform, in the broader perspective of using the output as a digital twin. a tool to support an innovative visit method was therefore realized: a virtual visit assisted by a real remote guide, below referred to as "live-guided tour", with e-learning functionality. this tour mode makes it possible to organize virtual tours for groups of visitors who can simultaneously connect to the web and participate in a tour in which a real guide, also connected remotely, organizes and sets the visit, providing information to visitors. a webapp makes it possible to accompany customers, students or colleagues on a shared virtual walk [11]. the virtual tour is similar to a video conference in which the interactive content viewed on a pc or portable device can be controlled by any participant. the guests of the tour may be accompanied by a real guide on a trip where the guide (the host) has the ability to control the scene displayed to visitors, or to let them freely choose where to turn their gaze. guests can then break away from the guide's control at any time and freely explore every scene, without losing the interactivity that characterizes the virtual tour. with a mouse click they can return to the host's location; in the same way, the host can force any visitor to reconnect to his point of view. since the tour mode is somewhat comparable to a video conference, during the visit each participant can take part in the discussion.
The host (whether an agent, a teacher, a colleague or a tour guide) can call attention to areas of interest in real time and discuss what is seen at 360° by all. The guest (customer, student, museum visitor, etc.) can follow the guide or ask permission to take over the tour for everybody; in this case he himself leads the tour, an ideal solution for asking questions about elements and details of the displayed scene. This type of visit is a significant improvement over video conferences with split screens: here the communication tool is built into the virtual tour itself. Each participant has a name and a small screen that identifies them to all the others.
For the realisation of the webapp, stereoscopic panoramas have been extracted from the 3D models [12]. The navigation is not conceived as a real-time 3D model but is based on precalculated panoramas, which allow the same stereoscopic view as a 3D model while demanding extremely modest hardware performance. The webapp is available on any device, desktop or mobile, so every visitor can also connect with a mobile phone. The information elements are available in different types: text, audio, images, virtual reconstructions and videos, all specifically developed to convey the best possible knowledge of the places. The application has been developed with the features of a webapp, which allows greater flexibility and compatibility with most media and operating systems.
In addition to these features, the webapp, based on the 3DVista software (https://www.3dvista.com/en/), integrates a learning management system (LMS). This platform allows courses to be developed in e-learning mode, contributing to an educational or didactic project, but also yielding objective results in terms of "evaluation of the communicative effectiveness" of a lesson within a virtual tour. In other words, the LMS platform determines a score that assigns the user a level of competence or knowledge. One can consider the benefits this approach could bring, for example, on construction sites or in manufacturing environments, to determine the level of competency of employees, or the benefits it could bring to safety oversight in the workplace (Figure 2). The use of these VR-based tools has shown greater effectiveness in learning than traditional teaching methods [13].
The solution makes it possible to ask visitors questions at any point of the tour. Questions can be conditioned by a previous action, such as the discovery of a hidden element in the scene, after which a question about it will subsequently appear. In addition to questions and answers (simple or multiple-choice), queries can contain all kinds of media, including photos, videos, 360° views or 3D models (Figure 3). At the end of a visit session, users see a score screen that depends on the settings chosen by the author of the tour. Users can download their performance sheet as a .csv file or send it immediately to the LMS. This feature allows the organisers of the visit to collect analytical data on the tour and verify the level of satisfaction reached by the participants, as illustrated in the sketch below.
Figure 2. The user experience of the live-guided tour.
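As an illustration of the kind of record such a platform produces, the following minimal Python sketch models a quiz session and writes the performance sheet as a .csv file. It is only a sketch of the behaviour described above: 3DVista's internal data model is not documented here, so every name (TourQuestion, the answers, the output file) is hypothetical.

import csv
from dataclasses import dataclass

@dataclass
class TourQuestion:       # hypothetical record, not a 3DVista API
    prompt: str
    correct: str
    triggered_by: str     # e.g. a hidden element discovered in the scene

questions = [
    TourQuestion("Which press type is shown?", "calabrian press", "press_hotspot"),
    TourQuestion("Which oil was produced here?", "lampante oil", "settling_well"),
]

def score_session(answers):
    # One point per correct answer; returns per-question rows and the total.
    rows = [{"question": q.prompt,
             "given": a,
             "correct": a.strip().lower() == q.correct}
            for q, a in zip(questions, answers)]
    return rows, sum(r["correct"] for r in rows)

rows, total = score_session(["Calabrian press", "lamp oil"])
with open("performance_sheet.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["question", "given", "correct"])
    writer.writeheader()
    writer.writerows(rows)
print(f"score: {total}/{len(questions)}")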
4. The 3D survey

All this is made possible starting from a three-dimensional model of the underground oil mill, from which the stereoscopic panoramas have been extracted. Stereoscopy is very important for the full success of the project; in effect, this is the aspect on which the level of interest induced in the visitor depends the most [14]. The immersiveness induced by stereoscopic vision is part of an empathic mechanism, expressed in the ability to emotionally involve the viewer in a message with which he is led to identify. As a result, the user will tend to consider two distinct objects, the digital twin and the real context, as if they were one and the same thing. He may then accept that, during the virtual experience, many of the aspects perceptible only during a real visit can be understood. Immersiveness, when accompanied by realism of the restitution, is therefore decisive in making the vision obtained with simple 360° panoramas comparable to that obtainable with a 3D model explored in real time with high-end viewers. The ability to generate emotion and amazement, combined with the sense of presence, is an important element in generating interest and attention during the visit; greater interest and attention will in turn lead to greater understanding of the communicated message. This explains some of the effort put into generating a 3D model from which stereoscopic panoramas can be derived (Figure 3).
Of course, the current technological landscape offers several solutions for obtaining stereoscopic panoramas, in most cases from simple photographs [15], and some manufacturers offer dedicated hardware with up to 12K resolution. The differences between image-based solutions and the full 3D method are substantial and obvious, but it is worth mentioning some of the main differences and criticalities inherent in the two approaches. First of all, 360° panoramas obtained from photographs do not allow the surface morphology of the object to be determined; consequently, they yield no metric information in XYZ space. This is a crucial aspect in the use of digital twins, since they are characterised by the intimate connection between 3D topology and real space [16]. In the absence of a 3D morphology, it will not be possible to link information derived from sensors and from archaeometric, structural and microclimatic analyses to specific parts of the asset under study. Consider, for example, the georeferenced superimposition of information and its reading in layers organised by categories, which is impossible to achieve properly on a photographic panorama.
In the case study presented here, the stereoscopic panoramas are therefore only one of many media that can be obtained from the 3D model [17]: not a starting point but an output for reading the information produced, subsequently organised according to the paradigms of a VR visit. The effort to be made is therefore undoubtedly that of obtaining an accurate three-dimensional model, onto which all the data relating to the management of the real space can be hooked through its analogous 3D topological space. This can be conceived both as a geometric space of polygons and as a body of information correlated with the surface colour (Figure 4). For the virtual use of the collected digital resources, it is crucial to have an extremely realistic three-dimensional model. Realism in this case is not aimed at a cinematic level of representation, but at the best correspondence with the actual state [18]. To this end, a complete three-dimensional model was created using digital photogrammetry techniques, which are now widely used and well known.
The three-dimensional restitution posed many problems of coverage in the hidden areas and of management of the photographic campaign, due to the poor interior lighting, but also to the precise requirement of obtaining a digital twin that restores the same "genius loci" as the real space. We deliberately kept the atmosphere of the real space, without making it appear more artificial or differently illuminated than it really is. Approximately 3000 high-resolution photos (21 Mpixel) were acquired, resulting in a mesh resolution of approximately 0.4 mm, which is more than acceptable for the purposes of the project and the size of the mill (approximately 250 sqm). The measurements were georeferenced with coded targets whose coordinates were recorded with a total station. This made it possible to keep the accuracy of the measurements high while keeping the constraints fixed [19].
Due to the computational complexity of the photogrammetric model, the calculation was divided into zones with an overlap of about 10 cm. The main orientation (sparse cloud) was generated for the entire set of photographs, while the dense clouds were calculated in several patches [20]. This solution solves many memory and computation problems, which often prevent the completion of digital photogrammetry projects characterised by a large number of photos [21]. The orientation of the sparse cloud is actually not a very hard problem in terms of calculation, even with huge sets of photos; it is therefore possible to calculate individual chunks while maintaining the correct referencing. The spatial position of the different chunks is respected even in case of an export to modelling software or BIM [22], [23].
Regarding the texturing process, a texture with a resolution of 15000 × 15000 pixels was calculated for each of the several portions of the model. This process, as expected, caused several problems due to the poor lighting of the interior and the need to shoot the photos with a hand-held camera. A new set of images was therefore created to correct the blurring induced by the depth of field and by the relatively long shutter speeds (often 1/60 s). From an operational point of view, the sparse cloud was calculated with the entire set of photos, while in the final texturing phase only the photos taken ad hoc for this purpose were enabled. Basically, we built a very large set of photos needed to obtain good geometric detail, and a set of about 400 photos dedicated to the texturing.
The correct correspondence and reproduction of the real colours was ensured by the use of a Kodak colour separation guide (Figure 5). The inclusion of the colour scale in the real environment simplifies the correction of the white point of the image, allowing the removal of colour casts [24]. In our case study the colour casts are not very evident, thanks to the use of controlled lights with a temperature of 6500 K [25]. It is not necessary to include the colour scale in all the photos: if they are acquired in the same way and with the same lamps, the correction can be recorded as an action in Photoshop and applied to the whole set of photos [24]. The same correction can also be scripted, as sketched below.
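A grey-card white-point correction of this kind reduces, in essence, to a per-channel gain. The short numpy sketch below illustrates the principle on an RGB image; the file name and the pixel box of the grey patch are hypothetical, and this is only a first-order approximation of what the recorded Photoshop action does.

import numpy as np
from PIL import Image

img = np.asarray(Image.open("interior_shot.jpg"), dtype=np.float64)

# Mean RGB of the grey patch of the colour separation guide (hypothetical pixel box).
patch = img[1200:1260, 800:860, :].reshape(-1, 3).mean(axis=0)

# Scale each channel so that the patch becomes neutral (equal R, G and B),
# which removes the colour cast introduced by the lighting.
gains = patch.mean() / patch
balanced = np.clip(img * gains, 0, 255).astype(np.uint8)
Image.fromarray(balanced).save("interior_shot_balanced.jpg")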
The use of the Agisoft Metashape software allowed an optimal management of all phases of the survey (Figure 6). In particular, the new mesh-creation function working from depth maps was used instead of the more common dense-cloud route. This feature is much more versatile and precise, especially in the presence of dense clouds with a very large number of points: the use of depth maps helps to reduce noise on the final surface while preserving thin structures within the scene, and the depth-map-based method allows exceptionally detailed geometry to be reconstructed. GPU acceleration significantly speeds up processing, with reduced memory consumption compared to previous versions of the software. The final result is shown in the figures on these pages (Figures 3 to 7).
Figure 4. 3D model of the Calabrian-style press integrated in the VR visit.
Figure 5. Real photo of the interiors. On the left the original image, on the right the colours corrected using the Kodak colour separation guide.
Figure 6. Synthesis of the methodological workflow.

5. Conclusions

From a morphological point of view, the digital model of the oil mill is therefore a reliable replica of its physical reference. In the case of the extraction machines that are no longer preserved, one- and two-screw presses have been included in the virtual tour, surveyed with the same 3D techniques in other similar contexts. Since photogrammetry ensures excellent accuracy of the colour data as well, the 3D model is ready for all subsequent implementations concerning the state of conservation, measurement of volumes, static verification in relation to loads and road surfaces, calculation of energy requirements, etc.
At the moment the digital twin allows groups of users to visit this context remotely, in an immersive way, with the possibility of extending the visit to other contexts that can follow this management philosophy. This aspect, related to the visualisation and use of the data, is only one of the purposes of these digital resources. The main difference with respect to a classic navigable 3D scenario or a classic virtual tour is a new perspective on the use of these models: they are no longer created with the exclusive objective of surveying the actual state, but respond to a new management requirement, which takes into account the potential of 5G, the growing computational capacity of portable devices and the IoT (Internet of Things). The 3D object in this case is a 'thing' that has its own consistency, certainly digital and immaterial, yet tangible and useful for the management of the physical asset.
Regarding innovativeness in visiting, live-guided tours are probably the only system available today for a shared virtual tour. But what are the other elements of interest in this project? Firstly, the benefits offered by the remote visit modality in the context of the current pandemic emergency. No less important is the possibility of virtual access for the disabled, in a multi-user and multi-platform environment. Finally, the technological appeal given by the immersive vision and the potential of the digital twin philosophy, still not fully explored in the cultural heritage sector. The long-term goal for us is the creation of an advanced management model: the association between physical object and virtual replica makes it possible to activate data analysis and monitoring of the systems in such a way that it becomes possible to operate in predictive mode, identifying problems even before they occur.
In addition to preventing anomalies, downtime and inefficiencies, it is possible to develop new opportunities using appropriate simulations and to plan future activities. By creating a digital twin, it is possible to better understand how to optimise operations, increase efficiency or discover a problem before it happens. The creation of digital twins of cultural heritage allows models representative of reality to be built and used for conservation purposes, for knowledge, and for overcoming physical and cognitive barriers.
Figure 7. The 3D model with the settling wells in the foreground.

Acknowledgement
I would like to thank Ing. Claudio Stasi for granting access to the oil mill for the surveys.

References
[1] A. Fuller, Z. Fan, C. Day, C. Barlow, Digital twin: enabling technologies, challenges and open research, IEEE Access, vol. 8 (2020), pp. 108952-108971. DOI: 10.1109/ACCESS.2020.2998358
[2] A. El Saddik, Digital twins: the convergence of multimedia technologies, IEEE MultiMedia (2018). DOI: 10.1109/MMUL.2018.023121167
[3] M. Grieves, Origins of the digital twin concept. Online: https://www.researchgate.net/publication/307509727_origins_of_the_digital_twin_concept
[4] M. Baruwal Chhetri, S. Krishnaswamy, L. Seng Wai, Smart virtual counterparts for learning communities, in: Web Information Systems - WISE 2004 Workshops, Lecture Notes in Computer Science, vol. 3307, Springer, Berlin, Heidelberg (2004), pp. 125-134, ISBN 9783540304814. DOI: 10.1007/978-3-540-30481-4_12
[5] S. Haag, R. Anderl, Digital twin - proof of concept, Manufacturing Letters, Elsevier (2018). DOI: 10.1016/j.mfglet.2018.02.006
[6] M. Grieves, Virtually intelligent product systems: digital and physical twins, in: Complex Systems Engineering: Theory and Practice (2019). DOI: 10.2514/5.9781624105654.0175.0200
[7] M. Grieves, Virtually indistinguishable, IFIP International Conference on Product Lifecycle Management, Springer, Berlin, Heidelberg (2012). DOI: 10.1007/978-3-642-35758-9_20
[8] F. Tao, H. Zhang, A. Liu, Digital twin in industry: state-of-the-art, IEEE Transactions on Industrial Informatics, vol. 15, no. 4 (2019). DOI: 10.1109/TII.2018.2873186
[9] A. Monte, I frantoi ipogei del Salento, Edizioni del Grifo, Lecce, 1995, ISBN 88-8176-148-3. [In Italian]
[10] L. F. Milizia, Il trappeto sotterraneo in Terra d'Otranto, Capone Editore, 1991. [In Italian]
[11] F. Gabellone, M. Chiffi, Linguaggi digitali per la valorizzazione, in: F. Gabellone, M. T. Giannotta, M. F. Stifani, L. Donateo (eds.), Soleto ritrovata. Ricerche archeologiche e linguaggi digitali per la fruizione, Editrice Salentina, 2015, ISBN 978-88-98289-50-9. [In Italian]
[12] S. Brusaporci, Handbook of Research on Emerging Digital Tools for Architectural Surveying, Modeling, and Representation, IGI Global, Advances in Geospatial Technologies (2015). DOI: 10.4018/978-1-5225-0675-1.ch003
[13] A. Spanò, Rapid mapping methods for archaeological sites, 2020 IMEKO TC4 International Conference on Metrology for Archaeology and Cultural Heritage, Trento, Italy, 22-24 October 2020, ISBN 978-92-990084-9-2.
[14] A. Murtiyoso, P. Grussenmeyer, Documentation of heritage buildings using close-range UAV images: dense matching issues, comparison and case studies, The Photogrammetric Record, 32(159) (2017), pp. 206-229. DOI: 10.1111/phor.12197
[15] D. Ekaso, F. Nex, N. Kerle, Accuracy assessment of real-time kinematics (RTK) measurements on unmanned aerial vehicles (UAV) for direct geo-referencing, Geo-spatial Information Science (2020), pp. 1-17. DOI: 10.1080/10095020.2019.1710437
[16] G. Sammartano, F. Chiabrando, A. Spanò, Oblique images and direct photogrammetry with a fixed wing platform: first test and results in Hierapolis of Phrygia (TK), International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences Congress, Nice (2020). DOI: 10.5194/isprs-archives-XLIII-B2-2020-75-2020
[17] G. Leucci, F. Gabellone, F. T. Gizzi, N. Masini, From causes to effects. Integration of heterogeneous data from non-invasive imaging for the diagnosis and restoration of monuments. The case of the church of S. Francesco della Scarpa in Lecce (Southern Italy), IMEKO TC4 International Conference on Metrology for Archaeology and Cultural Heritage, virtual conference, 22-24 October 2020, ISBN 978-92-990084-9-2.
[18] V. Sangiorgio, S. Martiradonna, F. Fatiguso, G. Uva, Historical masonry churches diagnosis supported by an analytic-hierarchy-process-based decision support system, Acta IMEKO, vol. 10, no. 1 (2021), pp. 6-14. DOI: 10.21014/acta_imeko.v10i1.793
[19] V. Croce, G. Caroti, A. Piemonte, M. G. Bevilacqua, From survey to semantic representation for cultural heritage: the 3D modeling of recurring architectural elements, Acta IMEKO, vol. 10, no. 1 (2021), pp. 98-108. DOI: 10.21014/acta_imeko.v10i1.842
[20] X. Yang, Y.-C. Lu, A. Murtiyoso, M. Koehl, P. Grussenmeyer, HBIM modeling from the surface mesh and its extended capability of knowledge representation, ISPRS Int. J. Geo-Inf. 8(7) (2019), p. 301. DOI: 10.3390/ijgi8070301
[21] F. Noardo, Architectural heritage semantic 3D documentation in multi-scale standard maps, J. Cult. Herit. 32 (2018), pp. 156-165. DOI: 10.1016/j.culher.2018.02.009
[22] N. Bruno, R. Roncella, HBIM for conservation: a new proposal for information modelling, Remote Sens. 11(15) (2019), p. 1751. DOI: 10.3390/rs11151751
[23] R. Volk, J. Stengel, F. Schultmann, Building information modelling (BIM) for existing buildings - literature review and future needs, Automat. Constr. 38 (2014), pp. 109-127. DOI: 10.1051/e3sconf/202124405024
[24] J. A. S. Viggiano, Comparison of the accuracy of different white balancing options as quantified by their color constancy, in: Sensors and Camera Systems for Scientific, Industrial, and Digital Photography Applications V, Proceedings of the SPIE, vol. 5301, SPIE, Bellingham, WA (2004), pp. 323-333. DOI: 10.1117/12.524922
[25] A. Van Hurkman, The Color Correction Handbook: Professional Techniques for Video and Cinema, Pearson Education, 2010, ISBN 978-0-321-92966-2.
Training program for the metric specification of imaging sensors

ACTA IMEKO, ISSN: 2221-870X, December 2022, Volume 11, Number 4, pp. 1-6

Raik Illmann1, Maik Rosenberger1, Gunther Notni1
1 Technische Universität Ilmenau, Gustav-Kirchhoff-Platz 2, 98693 Ilmenau, Germany

Section: Research paper
Keywords: measurement education; measurement training; engineering education; hands-on pedagogy; image sensor characterization
Citation: Raik Illmann, Maik Rosenberger, Gunther Notni, Training program for the metric specification of imaging sensors, Acta IMEKO, vol. 11, no. 4, article 10, December 2022, identifier: IMEKO-ACTA-11 (2022)-04-10
Section editor: Eric Benoit, Université Savoie Mont Blanc, France
Received August 26, 2022; in final form December 2, 2022; published December 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Raik Illmann, e-mail: raik.illmann@tu-ilmenau.de

Abstract: Measurement systems in industrial practice are becoming increasingly complex and their system-technical integration levels are increasing. Nevertheless, their functionalities can in principle always be traced back to proven basic functions and basic technologies, which must nonetheless be understood and developed. For this very reason, the teaching of elementary basics in engineering education is unavoidable. The present paper presents a concept for implementing a contemporary training program within practical engineering education at university level in the special subject area of optical coordinate measuring technology. The students learn to deal with the subject area in a fundamentally oriented way and to understand the system-technical integration in detail, from the basic idea to the actual solution, as is common practice in the industrial environment. The training program is designed in such a way that the basics have to be worked out at the beginning, and gaps in knowledge are closed through group work and the targeted intervention of a supervisor. After the technology has been fully developed theoretically, the system is put into operation and applied in a characterising measurement. The measurement data are then evaluated using standardised procedures. A special part of the training program, intended to promote creativity and thorough understanding, is the evaluation of the modulation transfer function of the system by a self-developed algorithmic program section in the script-oriented development environment MATLAB; students can rely on predefined functions for the evaluation, whose integration, however, they must still accomplish themselves.

1. Introduction

Optical coordinate metrology is an essential part of industrial automation and inspection processes. This subject area should therefore play an essential role in today's engineering education in courses such as mechanical engineering, electrical engineering, computer science or engineering informatics. The question thus arises of how to efficiently teach the essential contents necessary for a successful handling of this topic in engineering practice. The requirements are that both the basics and their relevance have been understood, and that the systems-engineering integration can be implemented on the basis of the learned knowledge and transferred into a functional system. Group work and the targeted intervention of a supervisor play an essential role here, not only in order to close existing knowledge gaps in a targeted manner, but also to train cooperation in teams with the corresponding social competence, which is indispensable in practice.
The central topic of the program is the metric characterisation of imaging sensors. With regard to practice, two essential technological aspects can be taught through this. First, it is crucial for the implementation of a test system to be able to evaluate and classify the geometric product specifications (GPS) characterised by the manufacturer of an image sensor, i.e., ultimately to be able to understand its test procedure.
Secondly, this makes essential principles of image signal processing accessible and also provides a practical and comprehensible application case, which proves the usefulness of the methods and thus offers motivation directly on the basis of a concrete example. The standard DIN EN ISO 10360 [1] is used as the basis for the methodical procedure; for coordinate measuring machines with optoelectronic sensors, the VDI standard 2617 [2] is applied specifically. It describes the inspection of coordinate measuring machines by measuring calibrated test samples. This involves checking whether the measurement deviations are within the limits specified by the manufacturer or user. The test samples must be such that their properties do not decisively influence the parameters to be determined. The characterisation is carried out on the basis of the principle described in [2]: a circle measured at five different positions. For this purpose, a calibrated chrome standard and a transmitted-light unit are used. After completion of the measurements, the results are statistically evaluated in accordance with the standard. In addition, the determination of the modulation transfer function (MTF) is intended to provide an assessment of the resolving capability of the overall system and to train an in-depth algorithmic understanding of 2D image processing.

2. Theoretical background

2.1. Problem description

The metrological problem with which the students are confronted is described in Figure 1. A calibrated sample with a circular ring (the object) is placed as a reference standard on a light source.
The circular ring passes (as a negative) through an optical system consisting of lenses and apertures. The image that is consequently created on the sensor surface deviates from the original due to optical and mechanical influences. In addition, the geometric quantisation during sampling within the image sensor produces a further measurement deviation. All deviations together result in a total measurement deviation which has to be determined. Furthermore, it can be observed that, when the position of the object is changed (five different positions in [2]), the measured diameters of the circular rings differ due to the influences of the optics and mechanics. This also has to be quantified.
Figure 1. Schematic system description.

2.2. Analysis of measurement uncertainty

All image processing algorithms are already implemented in software, so that the diameter of the circular ring is output as the result of the measurement. The algorithms for edge detection are subpixel-based and very complex; a more detailed description is therefore not provided within this paper and reference is made to the specialised literature [3]. More crucial for the metric testing of the image sensor system is the consideration of the measurement uncertainties. According to equations (1) to (3), the total measurement uncertainty can be determined. The measurement uncertainty describes the deviation behaviour of the overall system. Its total value is determined by the errors of the two essential assemblies, the mechanical and the optical measuring device, and results from

$U_{\mathrm{total}} = \sqrt{U_{\mathrm{mech}}^2 + U_{\mathrm{opt}}^2}$ . (1)

Both the mechanical and the optical measurement uncertainty can be further divided into systematic and random components:

$U_{\mathrm{mech}} = \sqrt{U_{\mathrm{sys,m}}^2 + U_{\mathrm{random,m}}^2}$ (2)

and

$U_{\mathrm{opt}} = \sqrt{U_{\mathrm{sys,o}}^2 + U_{\mathrm{random,o}}^2}$ . (3)

The systematic measurement uncertainty is usually specified by the manufacturer and is determined from comparative measurements with calibrated standards; within the training program these values are given to the students. The random measurement uncertainty, on the other hand, is determined from several measurements carried out in the training program under the same environmental conditions and with the same test specimen. The standard deviation is calculated from the measured values obtained:

$\sigma = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2}$ . (4)

The random measurement uncertainty $U_{\mathrm{random,ms}}$ of the measurement series is then obtained using the 95.4 % confidence level:

$U_{\mathrm{random,ms}} = \frac{2\,\sigma}{\sqrt{n}}$ . (5)

However, since the mechanical and optical measurement uncertainties cannot be separated directly in the measurement result with the available setup, the evaluation is somewhat simplified at this point and only predefined systematic deviations are included in the calculation of the total measurement uncertainty. A small numerical sketch of this evaluation chain is given below.
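To make the chain of equations (1) to (5) concrete, the following short Python sketch computes the random uncertainty of a series of diameter measurements and combines it with given systematic contributions. The numerical values are invented for illustration and do not come from the experiment; the course itself uses MATLAB, but the logic is identical.

import numpy as np

# Ten repeated diameter measurements at one position, in mm (invented values).
x = np.array([5.003, 5.001, 4.998, 5.002, 5.000,
              4.999, 5.004, 5.001, 4.997, 5.002])

sigma = x.std(ddof=1)                    # eq. (4), sample standard deviation
U_random = 2 * sigma / np.sqrt(x.size)   # eq. (5), 95.4 % confidence level

# Predefined systematic contributions (given to the students), in mm.
U_sys_m, U_sys_o = 0.002, 0.003
U_mech = np.hypot(U_sys_m, U_random)     # eq. (2), random part assigned to mechanics here
U_opt = U_sys_o                          # eq. (3), simplified as described in the text
U_total = np.hypot(U_mech, U_opt)        # eq. (1)
print(f"sigma = {sigma:.4f} mm, U_random = {U_random:.4f} mm, U_total = {U_total:.4f} mm")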
2.3. Modulation transfer function

There are several approaches to determining the modulation transfer function (MTF). The main approaches are taught in parallel to the designed training program and are presented in the lectures. A simple and practically comprehensible method is described in [4] and [5]: the intensity differences (contrasts, $I_{\max} - I_{\min}$) along a search ray orthogonal to a rectangular signal are evaluated. If a sinusoidal signal is assumed instead of the rectangular signal, the contrast transfer function $CT(f)$ passes directly into the modulation transfer function $MTF(f)$. Basically, the contrast ratio of the pattern in the image,

$K(f) = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}}$ , (6)

has to be put into relation to the contrast in the object space,

$K'(f) = \frac{I'_{\max} - I'_{\min}}{I'_{\max} + I'_{\min}}$ . (7)

Plotted over the sampling points of the spatial frequency $f$, taken as the reciprocals of the line distances defined by the object, the MTF can thus be approximated as

$CT(f) = \frac{K(f)}{K'(f)} \approx MTF(f)$ . (8)

For the calculations according to (6), (7) and (8), all values can be determined from the measurements, which is why these formulas must also be implemented by the students in the practical part. Complex examples of modulation transfer functions of spectral sensors are given in [6].

3. Measurement system description

3.1. Measurement system

The system used for the experiment is shown in Figure 2. It consists of a monochromatic camera, a telecentric objective, a light table for measurement using the transmitted-light method, a halogen-based light generator, and a stand construction in column design.
Figure 2. Measurement setup.

3.2. Measurement targets

Two samples are used for the training program. The first sample is shown in Figure 3 (left). On this sample, the metric dimensions are represented by circles. These circles are moved to the five positions described, and the diameter of the corresponding circular ring is measured at each of them. The second sample is a U.S. Air Force (USAF) test chart, shown in Figure 3 (right), on the basis of which the modulation transfer function is determined.
Figure 3. Measurement targets: circular rings on chrome glass (left), USAF chart (right).

4. Training program concept

The training program is shown as a flow chart in Figure 4 and is described in detail below.

4.1. Preparation and general aim

The overall aim is to teach and consolidate the theoretical aspects. The theoretical basics are acquired by the students in self-study as preparation. At the beginning of the training program, the supervisor checks the essential basics necessary for understanding the subject matter. Particular attention must be paid by the supervisor to recognising gaps in understanding and, possibly through further questions, assessing the actual state of knowledge of each participant individually. Misunderstandings or misinterpretations of the facts often occur and must be corrected. Interdisciplinary action should also be taken here, addressing basic mathematics or mechanics where needed. Essentially, the following topics should have been learned by the end of the program:
• knowing and understanding the terms [1];
• understanding the necessity of characterisation in engineering tasks;
• understanding the measurement setup (incl. optics);
• understanding the measurement procedure;
• reflecting on and deducing the causes of uncertainties in the measurement system;
• evaluating and visualising the data;
• encouraging the understanding of algorithms.
It is also important that the supervision always refers to the method of application and the integration of the systems and components in the practical engineering task. In this way, the important practical relevance is continuously maintained. After verification of qualification and the elimination of misunderstandings, instruction is given in the measurement setup and the operation of the software.

4.2. Measurement process

The measurement is performed according to the instructions given in [2].
Regarding the training program, the rationale of this procedure has already been discussed in the basics and does not in itself provide any new insight for the student at this point; here, only the practical data are acquired, with which the evaluation and interpretation can be done afterwards. In the first step, the 10 single measurements of the circular rings are carried out at the 5 positions, which are reached by shifting the measuring object in the object field by means of the x-y stage. The software required for image acquisition is provided to the students and can be operated intuitively; the circle diameters are also determined in this software. After determination, these values are saved in the background to a file, which can later be loaded into the evaluation software. The second measurement is the capture of the USAF test chart, which is acquired as a single image and then saved as an image file.
Figure 4. Measurement target 1.

4.3. Evaluating the measurements

The evaluation of the measurement data is generally the most important and most insightful part of the training program. Here, the students learn how to evaluate data in an environment that is frequently encountered in practice. Both the evaluation of the circle diameters and the development of the MTF are implemented in the MATLAB environment. In general, the necessary program text is provided as a cloze text and must be completed by the students at the essential points; this also promotes the algorithmic understanding of a sequential process. The first insight is the differentiation of the data types: the measurement data of the circle diameters are purely vectorial and one-dimensional, while the image data form a two-dimensional array. In addition, the image data are available as 8-bit integers, whereas the measurement data are of the data type "double". For the evaluation of the circle data a script is available, which must be supplemented, at a few commented places, only by the function for the computation of the standard deviation. Essential standard functions and the arguments to be passed, with their syntax, must be researched by the students themselves via the built-in help and implemented in the main program. Experience shows that it is exactly at this point of independent problem solving that most students have considerable methodical deficits.

4.3.1. Determination of the measurement uncertainty

The data obtained in the measurement serve as the basis for determining the measurement uncertainty. They are available in a CSV file (comma-separated values) stored in the background by the image processing software and can be transferred to MATLAB with a reading function. Afterwards the data have to be validated. Unfortunately, problems occur again and again due to the comma/dot separation of the decimal point when exchanging data with German-language software, which should be explicitly pointed out here. In the last step, the calculation of the measurement uncertainty by standard deviations etc. takes place; following the corresponding equations, this part is to be completed by the students in the program text.

4.3.2. Determination of the MTF

A more detailed understanding is required to calculate the MTF. It should be noted that within the training program only the MTF of the entire system is to be determined; of course, this would also be possible for each individual component in the beam path. However, the goal is to obtain a pragmatic overview of the resolving power of the entire system, and the time required should also be kept within reasonable limits. The necessary data are available as a single image, which must first be read in. In the next step, the relevant signals must be extracted from the image data set as one-dimensional signals. This is done manually by defining the start and end points of the search rays. Afterwards, the students plot these signals for illustration purposes. Writing the necessary functions would go beyond the scope of the training program, therefore these functions are predefined. After extracting the vectorial data, the students calculate the MTF according to the relationships presented in the basics. Here it is also important to understand the signal properties of the overall image (contrast ratios) and to calculate them with appropriate functions from the array. Here, too, the necessary program text is provided as a cloze text; the calculations for visualising the search rays in the image etc. have been predefined. As a result, the MTF is plotted graphically, along the lines of the sketch below.
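As an illustration of the evaluation the students complete, the following Python sketch extracts a search ray from a test-chart image and evaluates equations (6) to (8) for one bar group. The file name, the profile coordinates and the spatial frequency are invented for illustration, and the course itself implements this in MATLAB.

import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

img = np.asarray(Image.open("usaf_chart.png").convert("L"), dtype=float)

# Hypothetical search ray across one bar group: row r, columns c0 to c1.
r, c0, c1 = 512, 100, 300
profile = img[r, c0:c1]

# Image-side contrast of this group, eq. (6).
I_max, I_min = profile.max(), profile.min()
K = (I_max - I_min) / (I_max + I_min)

# Object-side contrast K' is 1.0 for an ideal black/white chart, eq. (7),
# so the ratio of eq. (8) collapses to K itself for this sampling point.
f_lp = 10.0          # invented spatial frequency of the group, line pairs/mm
mtf_point = K / 1.0

plt.plot(profile)
plt.xlabel("pixel"); plt.ylabel("intensity")
plt.title(f"search ray: K = {K:.2f} at f = {f_lp} lp/mm")
plt.show()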
4.4. Reflection and interpretation of the data

A main focus of the training program is the graphical representation of the evaluated data. For each result a plot is to be created which illustrates the data. Figure 5 shows an exemplary graphical representation of the statistical values of 10 single measurements at the 5 relevant positions. The graph shows the histogram and the distribution function fitted to it (red curve). From the spread of the curve, the meaning of the standard deviation as a scattering parameter of the data can be read off immediately. The gain in knowledge is thus finally secured by the graphical connection of the data and is to be summarised in a protocol. All results are finally discussed with the supervisor on the basis of the graphs. The learning result is thereby consolidated by renewed repetition and targeted discussion of the essential qualitative characteristics, particularly in the curves. Experience shows that qualitative behaviours can be memorised very durably through graphically illustrated relations.

5. Integration into teaching

The present concept will be integrated into teaching as a fixed element from the time of publication. However, initial test sessions with students were performed beforehand to evaluate the concept. These preliminary tests with a total of 4 groups had three main purposes. First of all, it was to be determined whether the exercise could be completed by the students in the allotted time at all; all 4 groups managed to complete the task within the set time frame of 3 hours. The second purpose was to estimate the suitability of the program. The students were asked in detail about their assessment after completing it. All groups confirmed that they had understood the usefulness of the course and its relation to real practice; furthermore, it was confirmed that the students were neither overstrained nor understrained at any time. The first group, which had conducted an experiment on a similar topic in a different form some time ago, represented a special case: this group above all confirmed the increase in clarity and that they had become better acquainted with the topic thanks to the many integrated qualitative graphical representations of the results. The third purpose of the preliminary tests was to test the stability of the system. This means both the stability of the hardware and the stability of the software. None of the groups experienced any problems in this regard: the hardware ran stably, without dropouts, and the software did not produce any errors.
Figure 5. Measurements at the 5 positions (target 1).

6. Conclusion

The present work is a full methodological treatise on the development of a training concept for engineering students in the field of optical coordinate metrology and the adjacent field of image processing. Current and relevant problems are integrated; basic problems of image processing are simulated, demonstrated and solved. Ensuring expert supervision, while intervening only when necessary, underlines the pedagogical concept. The clearly identified problems are solved using current tools. The focus is also always on creating a graphical link between the data, their evaluations and the calculations, on the basis of which the results are easily discussed and the facts are easily memorised by the students through the graphical symbolisation. Special attention was paid to typical, recurring deficits and problems of the students during the creation of the program, which is why they were specifically included and problems are intentionally implied during the execution of the program.

Acknowledgement
We thank the Technische Universität Ilmenau for the financial support to realise this practical course.

References
[1] ISO 10360-7: Geometrical product specifications (GPS) - Acceptance and re-verification tests for coordinate measuring machines (CMM) - Part 7: CMMs equipped with imaging probing systems, International Organization for Standardization, Geneva, 2011.
[2] Verein Deutscher Ingenieure, VDI/VDE 2617 Blatt 6.2: Accuracy of coordinate measuring machines - Characteristics and their testing - Guideline for the application of DIN EN ISO 10360-8 to coordinate measuring machines with optical distance sensors, Beuth Verlag GmbH, Berlin, February 2021.
[3] J. Beyerer, F. Puente León (eds.), Automated Visual Inspection and Machine Vision III: 27 June 2019, Munich, Germany, SPIE, Bellingham, Washington, USA, 2019.
[4] T. Luhmann, S. Robson, Close-Range Photogrammetry and 3D Imaging, 3rd edn., De Gruyter, Berlin, 2019.
[5] W. G. Rees, Physical Principles of Remote Sensing, 2nd edn., Cambridge Univ. Press, Cambridge, 2003.
[6] M. Rosenberger, P.-G. Dittrich, R. Illmann, R. Horn, A. Golomoz, G. Notni, S. Eiser, O. Hirst, N. Jahn, Multispectral imaging system for short wave infrared applications, Proceedings of SPIE, 2022, vol. 12094, id. 120940Z, 14 pp. DOI: 10.1117/12.2619350

Roman coins at the edge of the Negev: characterisation of copper alloy artefacts and soil from Rakafot 54 (Beer Sheva, Israel)

ACTA IMEKO, ISSN: 2221-870X, December 2022, Volume 11, Number 4, pp. 1-6
Manuel J. H. Peters1,2, Yuval Goren3, Peter Fabian3, José Mirão4,5, Carlo Bottaini4, Sabrina Grassini1, Emma Angelini1
1 Department of Applied Science and Technology, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy
2 Department of History, Universidade de Évora, Largo dos Colegiais 2, 7000-803 Évora, Portugal
3 Department of Bible, Archaeology & Ancient Near East, Ben-Gurion University of the Negev, Israel
4 HERCULES Laboratory, Institute for Advanced Studies and Research, University of Évora, Largo Marquês de Marialva 8, 7000-809 Évora, Portugal
5 Geosciences Department, School of Sciences and Technology, University of Évora, Rua Romão Ramalho 59, 7000-671 Évora, Portugal

Section: Research paper
Keywords: archaeometry; archaeometallurgy; archaeological materials science; classical archaeology; Near Eastern archaeology
Citation: Manuel J. H. Peters, Yuval Goren, Peter Fabian, José Mirão, Carlo Bottaini, Sabrina Grassini, Emma Angelini, Roman coins at the edge of the Negev: characterisation of copper alloy artefacts and soil from Rakafot 54 (Beer Sheva, Israel), Acta IMEKO, vol. 11, no. 4, article 13, December 2022, identifier: IMEKO-ACTA-11 (2022)-04-13
Section editor: Tatjana Tomic, Industrija Nafte Zagreb, Croatia
Received April 26, 2022; in final form December 14, 2022; published December 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Manuel J. H. Peters, e-mail: manueljhpeters@gmail.com

Abstract: The research presented in this paper focuses on the preliminary non- and semi-destructive analysis of copper alloys, corrosion products, and soil components from a Roman archaeological site in Israel. Investigations using portable X-ray fluorescence, X-ray diffraction and scanning electron microscopy with energy-dispersive spectroscopy, as well as micromorphological analyses, were carried out to gain a better understanding of the corrosion processes affecting the copper alloy artefacts, by characterising the alloy composition, the soil environments, and the corrosion products. Preliminary results indicate that the artefacts consist of copper-lead-tin alloys, covered by copper hydroxychloride and lead sulphate phases with slight variations in their crystallisation. The multi-analytical approach revealed the presence of quartz, calcite, gypsum and feldspars in the sediments, while thin sections more specifically indicate loess soils with local microenvironments.

1. Introduction

This research discusses the analysis of several copper alloy artefacts from an arid site on the edge of the Negev desert in Israel. Rakafot 54 was excavated during two seasons, in 2018 and 2019, as a collaboration between the Israel Antiquities Authority and Ben-Gurion University of the Negev (Figure 1). The site was probably established during the first century CE and demolished during the Bar Kochba revolt against Rome (132-135/6 CE) [1]. Several indications of regular habitation have been found, including oil lamp fragments and limestone vessels. The site is placed near the border between Judea and Nabataea, close to a Roman road, and includes a watchtower, hearths, garbage pits, and an underground system. Numerous artefacts were found during the two seasons of excavation, including various imperial and provincial Roman copper alloy coins: coins minted by Herod Agrippa I (ruled 41-44 CE) and by the Roman procurators in Judaea (6-66 CE), provincial Roman coins minted in Ashkelon from the reign of Nero to that of Trajan (37-117 CE), Nabataean coins (until 106 CE) and coins of the First Jewish-Roman War (66-73 CE).
In terms of the geological setting, the site is located on Quaternary alluvial loess soil. The loess is a soft yellow windblown silty sediment, dominated by quartz silt, clay and calcite. The loess deposition, by south-western winds from Sinai and North Africa, started at the end of the Pleistocene and continues into the Holocene.
The formation is bordered to the west and the south by a coastline of calcareous sandstone and sand dunes, to the east by the Eocene Adulam Formation, which is mainly chalk and chert, and to the north by Quaternary red sand and loam (locally termed hamra soil) [2]. In the Beer Sheva area, where the mean annual precipitation is 200-350 mm, loess covers the entire landscape and the pre-existing topography [3]. The soil components of the archaeological site are largely dependent on this general geological setting, with local variations due to anthropogenic activities and post-depositional processes. These variations could possibly lead to micro-environments corresponding to the various excavation loci.
The copper alloy objects coming from the site were covered with a heterogeneous layer of corrosion products and sediment (Figure 2). The items were retrieved from different areas, which contained sediments that, on visual inspection, appeared to have distinctly different compositions; this could affect the corrosion process in various ways. In order to gain a better understanding of the degradation processes affecting these artefacts, and to determine their state of preservation and possible future steps for conservation, a selection of artefacts was analysed with portable X-ray fluorescence (pXRF), X-ray diffraction (XRD), and scanning electron microscopy with energy-dispersive spectroscopy (SEM-EDS).
The reason behind this approach lies in the general need to avoid intrusive sampling of museum objects and in the requirement for the exclusive application of non-destructive testing (NDT) techniques for the material characterisation of cultural artefacts. As a result, research is gradually becoming reliant on portable instrumentation (e.g. pXRF) intended to screen only the surfaces of the objects under investigation. However, especially in the case of metals, the effects of the chemical-physical processes responsible for degradation and corrosion, as well as surface enrichment or loss of elements, make NDT highly problematic. Metallic objects can be covered in thick layers consisting of corrosion products and soil components, which usually means that surface analysis aimed at the alloy components of the bulk material becomes relatively inaccurate. If it is possible to expose a small area of the bulk material below the corrosion layers, SEM-EDS can be carried out to validate the results obtained with pXRF. Although the degradation processes of metallic artefacts are well known and widely investigated, every artefact has a specific history strictly related to its depositional microenvironment [4].
The environmental parameters influence the corrosion process and affect the various areas of the metallic objects differently. Consequently, it is highly important to assess the specific deterioration of the artefacts in order to be able to interpret the results of NDT surface investigations. In this paper, several artefacts and soil samples with a clear connection to each other were considered. The main goal of this research was to gain information on the elemental composition of the coins, as well as to identify the corrosion products and assess the soil components. This was done using a multi-analytical methodology involving elemental and mineralogical characterisation, as well as microscopy. The bulk of this research was performed at Ben-Gurion University of the Negev.
Figure 1. Aerial orthophoto of the archaeological site of Rakafot 54.
Figure 2. Copper alloy coin with corrosion and soil.

2. Materials and methods

This research investigated the alloys, corrosion products, and related soil of a group of corroded metallic coins from Rakafot 54. During the excavation, the location of the metallic artefacts was recorded by assigning them to the specific locus they originated from, and the same was done for the soil samples. For this research, six coins and their related soil samples were selected.
A preliminary pXRF surface screening was carried out on the coins prior to conservation treatment, to determine their major alloy components. A Thermo Scientific Niton XL3t GOLDD+ XRF analyser, equipped with a geometrically optimised large-area drift detector and a 50 kV, 200 µA Ag anode, was used with an 8 mm collimator. The data were obtained with the Cu/Zn mining measurement mode, with a duration of 120 seconds. Both sides of each coin were measured.
XRD measurements were performed separately on the corrosion samples collected from the artefacts. Sampling was done by carefully removing some milligrams of corrosion from the objects with a scalpel, taking care not to damage the limitos (the surface boundary between the external corrosion products and the corrosion that has replaced the bulk metal, isomorphic with the uncorroded artefact). The samples were then homogenised by grinding, using an agate mortar and a few drops of ethanol. This emulsion was positioned in the middle of a zero-background silicon sample holder and flattened, after which the ethanol was left to evaporate. Diffractograms were taken with a PANalytical X'Pert Pro powder X-ray diffractometer equipped with a PIXcel 1D detector and a Cu K-alpha source with wavelength 1.54 Å, at 40 kV with a current of 40 mA.
Thin sections of the soil samples were obtained by embedding a portion of each sample in epoxy resin, mounting it on a glass microscope slide, and grinding and polishing it to 30 µm. The thin sections were then observed with a petrographic microscope under polarised light. SEM-EDS was carried out on a small exposed area of the bulk metal of the artefacts, using an FEI Quanta 200 scanning electron microscope under high vacuum at 25.00 kV. An overview of the sample types, analytical techniques and the expected information can be found in Table 1.

Table 1. Sample types and analytical techniques.
Sample type             Technique                   Information
Artefact surface        pXRF                        Alloy & soil
Exposed bulk material   SEM-EDS                     Alloy & microstructure
Corrosion powder        XRD                         Corrosion & soil
Soil                    Thin-section petrography    Soil

3. Results

3.1. pXRF
The preliminary pXRF screening of the coins indicated that they are made of copper-lead-tin alloys, with relatively high amounts of lead in some cases (Table 2). The presence of silver is below the limit of detection, and zinc appears to be present only as a trace element in some cases. There are inconsistencies between the measurements on the obverse (a) and reverse (b) sides of each coin, probably stemming from the position of the samples on the instrument and from the fact that these measurements were carried out on uncleaned objects; consequently, the results largely depend on the corrosion layers and sediments present on the surface. Nevertheless, these results offer a useful indication of the primary alloy components of the coins found at the site and a clue to the possible corrosion products. Additionally, the pXRF measurements provide information about the traces of sediments possibly present on the surface of the artefacts together with the corrosion phases (Table 3): significant amounts of calcium, chlorine and silicon can be observed, as well as smaller amounts of aluminium, potassium, and sulphur. Additional components can be found in Table 4.

Table 2. Primary alloy components (wt%) on the artefact surface, obverse (a) and reverse (b).
Sample    Cu     Pb    Sn    Zn      Ag
B6058a    46.8   5.1   2.7   0.04    < 0.03
B6058b    45.1   5.7   1.0   < 0.03  < 0.03
B6081a    44.6   13.0  3.4   < 0.05  < 0.04
B6081b    47.5   5.6   0.45  < 0.03  < 0.03
B6101a    42.1   6.8   1.5   < 0.04  < 0.03
B6101b    36.0   14.4  4.4   < 0.04  < 0.04
B8103a    43.0   1.1   0.04  < 0.04  < 0.02
B8103b    36.0   3.4   2.1   0.02    < 0.02
B8764a    43.9   3.2   0.34  0.03    < 0.02
B8764b    46.5   3.0   2.6   0.05    < 0.03
B9803a    44.0   5.1   1.3   < 0.03  < 0.02
B9803b    32.1   14    5.0   < 0.04  < 0.04

Table 3. Soil components (wt%) on the artefact surface, obverse (a) and reverse (b).
Sample    Al    Ca    Cl    Fe    K     S     Si
B6058a    2.1   8.7   4.61  0.51  0.25  1.12  9.1
B6058b    2.3   4.9   5.09  0.81  0.38  1.52  13.1
B6081a    1.4   5.4   5.77  0.13  0.11  3.81  4.7
B6081b    0.9   2.9   9.9   0.34  0.15  2.18  3.2
B6101a    2.1   6.0   5.46  0.68  0.35  2.20  9.1
B6101b    1.0   10.1  3.22  0.33  0.20  3.44  9.1
B8103a    1.9   7.4   3.67  0.59  0.31  0.55  11.9
B8103b    1.6   8.4   1.27  0.77  0.34  1.42  13.4
B8764a    1.9   4.9   8.0   0.60  0.32  1.66  7.4
B8764b    1.7   10.6  6.27  0.51  0.26  1.33  6.0
B9803a    2.7   10.7  4.33  0.56  0.39  1.13  11.5
B9803b    3.2   13.9  2.34  1.25  0.70  1.35  12.5

Table 4. Trace elements (wt%) on the artefact surface, obverse (a) and reverse (b).
Sample    Mg     Mn      Ni      P     Ti
B6058a    < 2.4  < 0.02  0.02    0.11  0.11
B6058b    < 3.4  < 0.03  0.02    0.16  0.17
B6081a    < 2.6  < 0.02  0.04    0.80  0.04
B6081b    < 3.1  < 0.02  0.01    0.07  0.05
B6101a    2.6    < 0.03  < 0.02  0.29  0.13
B6101b    2.9    0.02    0.01    0.60  0.06
B8103a    < 4.0  < 0.02  < 0.02  0.31  0.11
B8103b    < 4.4  < 0.02  < 0.02  0.46  0.17
B8764a    < 4.0  < 0.02  0.03    0.29  0.12
B8764b    < 4.4  < 0.02  0.02    0.08  0.09
B9803a    < 2.4  < 0.03  < 0.02  0.14  0.12
B9803b    < 2.0  0.03    0.02    0.28  0.23
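When tables like these are processed programmatically, the censored entries ("< x", below the limit of detection) need explicit handling. A minimal Python sketch, assuming the values above were exported to a hypothetical pxrf.csv with one column per element, could treat them as missing rather than as numbers:

import csv
import math

def parse_wt(value):
    # Return a float for a wt% entry, or NaN for "< x" (below detection).
    value = value.strip()
    if value.startswith("<"):
        return math.nan   # censored: only an upper bound is known
    return float(value)

with open("pxrf.csv") as f:   # hypothetical export of Tables 2-4
    rows = [{key: (val if key == "Sample" else parse_wt(val))
             for key, val in row.items()}
            for row in csv.DictReader(f)]

# Example: mean Cl content over all readings, ignoring censored values.
cl = [r["Cl"] for r in rows if not math.isnan(r["Cl"])]
print(f"mean Cl: {sum(cl) / len(cl):.2f} wt% over {len(cl)} readings")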
3.2. petrography
the thin sections of all the soil samples observed under the petrographic microscope revealed a matrix that is silty and highly calcareous. the silt is well sorted and contains mainly quartz, but also a recognisable quantity of other minerals, including hornblende, zircon, mica minerals, feldspars, tourmaline, augite and, more rarely, garnet, epidote and rutile. ore minerals are abundant in this fraction too. the sand fraction includes dense, well-sorted, rounded sand-sized quartz grains, and limestone. based on a bulk of published data, the soil is readily identified as loess [5]. however, micromorphological features within this given theme indicate more specific micro-environments, resulting from an array of anthropogenic activities as well as post-depositional processes. the post-depositional processes are reflected by re-crystallisation of calcite and gypsum from ground water, corrosion products of metals (figure 3), mineral replacements, clay relocation, etc. anthropogenic features include remnants of human activities such as slag, coprolites, pottery and brick crumbs, charred plant remains, phytolith concentrations, and other ash characteristics (figure 4). all these will be used in further research to define the immediate micro-environmental physiognomies of the sediment in the immediate proximity of each artefact, in order to correlate it with the corrosion composition.

table 4. trace elements (wt%) on artefact surface, obverse (a) and reverse (b).
sample   mg     mn      ni      p     ti
b6058a   < 2.4  < 0.02  0.02    0.11  0.11
b6058b   < 3.4  < 0.03  0.02    0.16  0.17
b6081a   < 2.6  < 0.02  0.04    0.80  0.04
b6081b   < 3.1  < 0.02  0.01    0.07  0.05
b6101a   2.6    < 0.03  < 0.02  0.29  0.13
b6101b   2.9    0.02    0.01    0.60  0.06
b8103a   < 4.0  < 0.02  < 0.02  0.31  0.11
b8103b   < 4.4  < 0.02  < 0.02  0.46  0.17
b8764a   < 4.0  < 0.02  0.03    0.29  0.12
b8764b   < 4.4  < 0.02  0.02    0.08  0.09
b9803a   < 2.4  < 0.03  < 0.02  0.14  0.12
b9803b   < 2.0  0.03    0.02    0.28  0.23

figure 3. petrographic image showing silty and calcareous matrix with rounded quartz sandy grain (a), charred ash material (b), shell fragment (c), and burnt soil with ferric corrosion products (d).

3.3. xrd
xrd diffractograms which include the representative peaks for these samples are shown in figure 5. although there are some visual differences in the diffractograms, mainly related to variations of peak intensities due to sample preparation, the main components appear to be relatively consistent. the primary compound in all samples was identified as atacamite (cu2(oh)3cl), the orthorhombic form of copper hydroxychloride, which is commonly found on copper alloy archaeological objects extracted from salty soils [6]. clinoatacamite, its monoclinic polymorph, has been detected in two samples, while its presence is not clear in the rest of the cluster under investigation. paratacamite (cu3(cu,zn)(oh)6cl2), a zinc-enriched variety of the copper hydroxychloride series, also appeared in the majority of the diffractograms (figure 5). even though it is not considered a polymorph, it is still closely related to atacamite as part of its degradation process [7]. the zn possibly originates from traces in the alloy composition. lead sulphates were identified in most samples, while cuprite was identified in all samples except for rcb6081. this exception might be related to inconsistencies in sampling depth. all diffractograms revealed the persistent presence of quartz, which can be explained by the traces of sediment that were present on the surface of the artefacts during the sample preparation for the xrd analyses. other soil components that were identified in this step are k- and na-based feldspars (albite and orthoclase), together with clay minerals such as zeolite. gypsum and its polymorphs were identified in rcb6058, rcb6101, and rcb9803, but may also be present in other samples.

3.4. sem
sem-eds results of the point analyses showed copper alloys with high amounts of lead and smaller additions of tin (table 5). since lead does not dissolve well in a cu-based matrix, it usually forms globules [8]. most of the analysed samples did indeed show a bulk matrix which is low in lead, with many lead globules distributed in the matrix, and clear cracks and crevices penetrating into the bulk in some cases (figure 6). these cracks might aid the penetration of cl into the bulk matrix, as shown in table 5.

table 5. sem results (wt%).
sample                cu    sn    pb     cl
b6058 bulk            67.0  1.67  27.8   3.61
b6058 pb globule a    10.5  3.08  79.6   6.83
b6058 pb globule b    20.4  5.56  62.2   11.8
b6058 cu-sn matrix    98.1  1.43  0.47   0.00
b6081 bulk a          10.9  0.44  86.3   2.33
b6081 bulk b          21.7  0.82  73.2   4.27
b6081 enriched bulk   97.2  0.00  1.80   1.02
b6101 bulk            72.6  2.90  21.1   3.33
b6101 cu-sn matrix    88.6  3.38  7.36   0.63
b6101 pb globule a    12.7  0.00  69.1   18.2
b6101 pb globule b    6.99  0.00  82.0   11.1
b8103 bulk a          89.3  1.98  8.60   0.14
b8103 bulk b          86.4  1.99  11.0   0.59
b8103 cu-sn matrix    95.7  2.35  1.94   0.00
b8103 pb globule      10.1  0.00  79.8   10.2
b8764 bulk a          70.0  1.42  24.02  4.56
b8764 bulk b          68.4  3.30  24.72  3.63
b8764 pb globule a    16.1  0.40  68.95  14.52
b8764 pb globule b    26.2  0.60  52.26  21.0
b8764 cu-sn matrix    75.4  3.50  17.3   3.83
b9803 bulk a          72.7  1.76  25.0   0.36
b9803 bulk b          85.1  1.93  12.6   0.39
b9803 pb globule a    2.75  0.00  97.3   0.00
b9803 pb globule b    1.00  0.00  99.0   0.00
b9803 cu-sn matrix    96.0  2.37  1.65   0.00
4. discussion
in order to validate the results of the individual techniques, several comparisons were made. the identification of atacamite in the diffractograms can be correlated with the presence of cl in the pxrf results. it is still unclear whether this presence is related to the hydroxychloride phases of the corrosion products or to another cl-based mineral present in the soil. differentiating between the lead sulphate and calcite peaks in the xrd diffractograms was complicated in some cases; however, a clear correlation could be observed between the presence of sulphur and significant amounts of lead in the pxrf data, and the presence of lead sulphate in the diffractograms. although the presence of tin is clear in the primary alloy identification with pxrf, no tin-based corrosion products were identified in the diffractograms; however, tin oxides are reported to be difficult to identify with xrd [9], [10], [11]. the si presence in the pxrf data could be related to the quartz observed in the diffractograms and the thin sections. the k and si combination could be explained as k-based feldspars and quartz in the diffractograms and, similarly, si and al could be identified together in feldspars and clay minerals.

figure 4. petrographic image showing herbivore dung spherulites (e) in ash layer from a hearth.
figure 5. diffractogram of artefact surface samples consisting of corrosion and soil minerals. a = atacamite; cu = cuprite; ls = lead sulphate; c = calcite; q = quartz; g = gypsum; z = zeolite; p = paratacamite.
figure 6. sem image. left: sample b6101 showing cu-rich matrix with distribution of pb globules as bright spots. right: sample b8764 showing cavity and crack possibly improving the penetration of cl into the bulk.

most minerals identified in the micromorphological study of the thin sections have their origins in igneous or metamorphic rocks. furthermore, the micromorphological study provides a description of the soil components that corresponds with the results from the pxrf and xrd analyses.

5. conclusions
the overarching purpose of this research was to obtain a better understanding of the various factors influencing the degradation of a set of roman artefacts coming from rakafot 54 in israel, by using a multi-analytical approach that relied on elemental and mineralogical characterisation, as well as microscopy.
the first objective was to identify the main alloy components. this was accomplished by carrying out a preliminary pxrf screening on the artefact surfaces before cleaning, which showed that the alloys were all copper-lead-tin based.
the second goal of this research was the determination of possible corrosion products of the objects, which was done by carrying out xrd analysis on small corrosion powder samples. the results indicate a strong presence of copper hydroxychlorides such as atacamite and clinoatacamite. paratacamite was also detected in the majority of the samples and finds partial confirmation in the detection of small amounts of zn with pxrf. additionally, some lead sulphates were detected, which could be correlated with the elemental data.
finally, the main soil components were identified. by combining pxrf, xrd, and micromorphology, it was shown that the soil mainly consists of quartz, calcite, feldspars, clay minerals, and gypsum. the sem-eds results indicate a general presence of pb globules in most samples, with a surrounding matrix of cu-sn.
although the micromorphological study indicates local differences in the soil components of the various samples, often related to burning and organic materials (presence of phytoliths and spherulites), these differences do not appear to have a clear effect on the corrosion products. the differences in the intensities of the diffraction peaks representing the various corrosion products, and the absence of cuprite in one sample, could be related to inconsistencies in sampling depth and small sample size. additionally, it is possible that some soil features, such as the presence of chlorides, negate the effect that small variations in soil and alloy components could have. while there are several limitations to this research, mostly resulting from the relatively small amounts of corrosion available for analysis, the presented results provide a first insight into the corrosion processes at the site, and interesting directions for future research.

author contribution
manuel j. h. peters: conceptualisation, methodology, software, formal analysis, investigation, data curation, writing original draft, visualisation. yuval goren: formal analysis, resources, data curation, writing original draft, supervision, funding acquisition. peter fabian: resources, data curation. josé mirão: resources, supervision, funding acquisition. carlo bottaini: resources, supervision. sabrina grassini: validation, supervision. emma angelini: validation, supervision, project administration, resources, funding acquisition.

acknowledgement
the authors would like to thank roxana golan for her work with the sem, and mafalda costa and dulce valdez for their assistance in the interpretation of the diffractograms. the research presented in this paper was part of a doctoral dissertation [12], carried out mainly using data collected at politecnico di torino, ben-gurion university of the negev, and universidade de évora, as part of h2020-msca-itn-2017, ed-archmat (esr7). this project has received funding from the european union's horizon 2020 research and innovation programme under the marie skłodowska-curie grant agreement no 766311.

references
[1] h. eshel, the bar kochba revolt, 132-135, in: w. d. davies, l. finkelstein, s. t. katz (eds.), the cambridge history of judaism, 2006, pp. 105-127, isbn 978-0-521-88904-9.
[2] a. sneh, m. rosensaft, geological map of israel 1:50,000, sheet 14-1: netivot, jerusalem, 2008.
[3] o. crouvi, r. amit, m. ben israel, y. enzel, loess in the negev desert: sources, loessial soils, palaeosols, and palaeoclimatic implications, in: y. enzel, o. bar-yosef (eds), quaternary of the levant, environments, climate change, and humans, cambridge, uk, 2017, pp. 471-482. doi: 10.1017/9781316106754.053
[4] e. angelini, f. rosalbino, s. grassini, g. m. ingo, t. de caro, simulation of corrosion processes of buried archaeological bronze artefacts, in: p. dillmann, g. béranger, p. piccardo, h. matthiesen (eds), corrosion of metallic heritage artefacts: investigation, conservation and prediction for long-term behaviour, european federation of corrosion publication 48, woodhead publishing, cambridge, uk, 2007, pp. 203-218, isbn: 978-1-84569-301-5.
[5] y. goren, i. finkelstein, n. na'aman, inscribed in clay: provenance study of the amarna tablets and other ancient near
eastern texts, tel aviv, 2004, pp. 112-113 with references, isbn: 965-266-020-5.
[6] a. g. nord, e. mattson, k. tronner, factors influencing the long-term behavior of bronze artefacts in soil, protection of metals, vol. 41, 2005, pp. 339-346. doi: 10.1007/s11124-005-0045-9
[7] r. frost, raman spectroscopy of selected copper minerals of significance in corrosion, spectrochimica acta part a: molecular and biomolecular spectroscopy, vol. 59, 2003, pp. 1195-1204. doi: 10.1016/s1386-1425(02)00315-3
[8] r. fernandes, b. j. h. van os, h. d. j. huisman, the use of hand-held xrf for investigating the composition and corrosion of roman copper-alloyed artefacts, heritage science, vol. 1, 2013. doi: 10.1186/2050-7445-1-30
[9] o. oudbashi, multianalytical study of corrosion layers in some archaeological copper alloy artefacts, surface and interface analysis, vol. 47, 2015, pp. 1133-1147. doi: 10.1002/sia.5865
[10] p. piccardo, b. mille, l. robbiola, tin and copper oxides in corroded archaeological bronzes, in: p. dillmann, g. béranger, p. piccardo, h. matthiesen (eds), corrosion of metallic heritage artefacts: investigation, conservation and prediction for long-term behaviour, european federation of corrosion publication 48, woodhead publishing, cambridge, uk, 2007, pp. 239-262, isbn: 978-1-84569-301-5.
[11] l. robbiola, j. m. blengino, c. fiaud, morphology and mechanisms of formation of natural patinas on archaeological cu-sn alloys, corrosion science, vol. 40, 1998, pp. 2083-2111. doi: 10.1016/s0010-938x(98)00096-1
[12] m. j. h. peters, innovative techniques for the assessment of the degradation state of metallic artefacts, phd thesis, turin: politecnico di torino, 2021.

doppler flow phantom failure detection by combining empirical mode decomposition and independent component analysis with short time fourier transform
acta imeko
issn: 2221-870x
december 2021, volume 10, number 4, 185-193
giorgia fiori1, fabio fuiano1, andrea scorza1, maurizio schmid1, silvia conforto1, salvatore a. sciuto1
1 department of industrial, electronic and mechanical engineering, roma tre university, via della vasca navale 79, 00146 rome, italy

section: research paper
keywords: emd; ica; stft; flow phantom failures; pw doppler
citation: giorgia fiori, fabio fuiano, andrea scorza, maurizio schmid, silvia conforto, salvatore andrea sciuto, doppler flow phantom failure detection by combining empirical mode decomposition and independent component analysis with short time fourier transform, acta imeko, vol. 10, no. 4, article 29, december 2021, identifier: imeko-acta-10 (2021)-04-29
section editor: roberto montanini, università di messina and alfredo cigada, politecnico di milano, italy
received august 2, 2021; in final form december 4, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: giorgia fiori, e-mail: giorgia.fiori@uniroma3.it

abstract
nowadays, objective protocols and criteria for the monitoring of phantom failures are still lacking in the literature, despite the technical limitations of such devices. in this context, the present work aims at improving a previously proposed method for the detection of doppler flow phantom failures. such failures were classified as low frequency oscillations, high velocity pulses and velocity drifts. the novel objective method, named emodica-stft, is based on the combined application of the empirical mode decomposition (emd), independent component analysis (ica) and short time fourier transform (stft) techniques on pulsed wave (pw) doppler spectrograms. after a first series of simulations and the determination of adaptive thresholds, phantom failures were detected on real pw spectrograms through the emodica-stft method. data were acquired from two flow phantom models set at five flow regimes, through a single ultrasound (us) diagnostic system equipped with a linear, a convex and a phased array probe, as well as with two configuration settings. despite the promising outcomes, further studies should be carried out on a greater number of doppler phantoms and us systems, including an in-depth investigation of the proposed method uncertainty.

1. introduction
doppler flow phantoms are standard reference test devices usually employed in quality controls (qcs) for ultrasound (us) system performance evaluation [1]-[3]. they can simulate the main acoustic characteristics of biological tissues and reproduce repeatable flows whose regimes are similar to those in blood vessels [4]-[7]. to date, the lack of a generally accepted standard for b-mode and doppler [8]-[12] has limited the use of us phantoms. it should be noticed that, even though such devices are widespread on the market, they are still not covered by a shared standard that focuses in detail on periodic and objective checks of their metrological and functional characteristics. this seems odd, since the existing commercial doppler phantoms show several technical limitations [2], [3], [13], [14] affecting their reliability and traceability for doppler qc testing. the main drawbacks of the most commonly used doppler phantom model, the flow phantom, are the desiccation over time of the tissue mimicking material (tmm), the tendency of blood mimicking fluid (bmf) particles to form agglomerates and/or air bubbles, and the inconsistency of phantom acoustic and pump mechanical properties over time [3].
despite the awareness of such limitations, objective protocols and criteria for the monitoring of the degree of phantom defects are still lacking in the literature. in particular, some studies focused their attention on two different kinds of doppler phantom stability, i.e., tmm and bmf stability [15]-[18]. the former refers to any physical modification in the tmm, while the latter indicates the presence of any solid and/or gaseous element in the bmf. more in detail, tmm stability can be compromised because of (a) tmm fracture, e.g., due to desiccation over time, and (b) tmm erosion or bmf leakage, e.g., due to bmf action through time on the tmm in wall-less flow phantoms or to tubing material rupture. on the other hand, bmf stability highly depends on (a) any presence of air bubbles, particle agglomerates or tmm debris, and (b) unwanted variations of flow velocity regimes.
nevertheless, stability assessment is carried out through a visual qualitative evaluation [15], [16], or without a specific and rigorous protocol [17], [18].
in the existing literature, there are other studies investigating and detecting the failures that could possibly compromise the stability of devices used in both the biomedical and mechanical fields. however, some issues should nowadays be taken into account: in [19], for example, it has been pointed out that a specific standard for mechanical heart pump testing procedures was still missing, and investigations into such issues were limited to the early evaluation of failures using an analysis technique along with device testing before surgical implantation. on the other hand, in [20] centrifugal pump failures have been reviewed, highlighting the lack of an integrated system able to monitor all the major pump failures.
in such a context, the present study focuses on the improvement and testing of a previously developed short time fourier transform-based image analysis method [21] for the automatic detection of the main doppler flow instabilities that may arise. the proposed improved method is based on the application of the empirical mode decomposition (emd) and independent component analysis (ica) techniques combined with the short time fourier transform (stft), namely emodica-stft, to automatically evaluate phantom failures through pulsed wave (pw) doppler spectrograms.
emd is a single-channel technique [22], [23], first introduced in [24], to obtain the decomposition of a signal in time. it is widely used in combination with ica for the effective processing of electrophysiological signals [25], [26]. an interesting feature of such a combination is the possibility of successfully extracting both oscillatory and spike-like sources [22]. stft is a time-frequency spectral analysis technique, widely used in several scientific fields, such as structural mechanics, aeronautics, and biomedical engineering. it has been applied in structural health monitoring to detect damages in existing structures [27], to classify and predict delamination in smart composite laminates [28], to reveal corrosion and fatigue cracks in aircraft structures [29], and to analyse physiological signal characteristics and determine relevant parameters [30]-[33].
the goal of the present work is the implementation and testing of the emodica-stft method: it processes pw doppler spectrograms collected from two different doppler flow phantoms through a single intermediate-technology-level us system equipped with three array probes (linear, convex, and phased array) at their central doppler frequency. in section 2, a brief overview of the techniques adopted in the proposed method is provided, and their combined application to three simulation cases is described. in section 3, the experimental setup used in this study and the application of the emodica-stft method to pw spectrograms are discussed. in section 4, results are presented and discussed on the basis of intra- and inter-phantom differences in the detected failures. finally, in the concluding section, the major achievements and future developments of the research hereby presented are reported.

2. emodica-stft method application to phantom failures detection
bmf instability sources can be identified as any presence of air bubbles, particle agglomerates or tmm debris, and unwanted variations of flow velocity regimes. consequently, the present study focuses on their detection in pw spectrograms, with particular reference to the following phantom failures:
1. low frequency oscillations, caused by any pump or hydraulic dampener inability to deliver a constant flow velocity in correspondence of a continuous flow regime setting;
2. high velocity pulses, caused by any particle agglomerates or tmm debris in the phantom flow;
3. flow velocity drifts, due to the unwanted onset of pump acceleration (e.g., deriving from a failure in the control system).

2.1. emd, ica and stft techniques
empirical mode decomposition [22], [24] is a signal-processing tool that, through an iterative process, decomposes a signal x(t) into a finite set of intrinsic mode functions (imfs) and a residual r_m(t), as follows:

x(t) = \sum_{i=1}^{M} IMF_i(t) + R_M(t) ,    (1)

where imf_i(t) is the i-th oscillatory mode. each imf has the following properties [22]: (a) it contains one frequency only, which is referred to as the instantaneous frequency; (b) its frequency is different from those of all the other functions; (c) it has zero mean value; and (d) it is an oscillatory function. one of the main advantages of emd, as compared to other decomposition techniques [22], is that it does not require any a-priori knowledge of the signal to be decomposed.
in turn, independent component analysis is a blind source separation technique [34], applied to a set of recorded signals y_i(t), whose aim is the extraction of unknown sources s_i(t), named independent components (ics), under the assumption of statistical independency. in particular, the y_i(t) can be expressed as a linear combination of the s_i(t), as follows:

\vec{y}(t) = [A] \cdot \vec{s}(t) ,    (2)

where [a] is an unknown matrix, called the mixing matrix. in the present study, a computationally improved ica method was applied, namely fastica [35], which was implemented in the matlab environment as a software package [36].
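a minimal python sketch of the decomposition pipeline in (1) and (2) is given below. the paper implements the method in matlab with the fastica package of [36]; here the pyemd (pip package emd-signal) and scikit-learn libraries are assumed instead, so the snippet is an illustrative translation rather than the authors' code, and the toy signal and the 0.5 hz split value (from section 2.2 below) are used only for demonstration.

```python
import numpy as np
from PyEMD import EMD                       # assumed dependency: emd-signal
from sklearn.decomposition import FastICA   # assumed dependency: scikit-learn

fs = 100.0                                  # hz, same rate as in table 1
t = np.arange(0, 60, 1 / fs)
x = np.sin(2 * np.pi * 0.2 * t) + 0.5 * np.sin(2 * np.pi * 3.0 * t)  # toy signal

# eq. (1): x(t) = sum_i imf_i(t) + r_m(t)
emd = EMD()
emd.emd(x)
imfs, residual = emd.get_imfs_and_residue()  # rows of imfs are the modes

# eq. (2): y(t) = [a] s(t); fastica estimates the sources (ics) and [a]
ica = FastICA(n_components=imfs.shape[0], random_state=0)
ics = ica.fit_transform(imfs.T)              # columns: independent components
A = ica.mixing_                              # estimated mixing matrix [a]

# mean frequency of each ic, used to split low/high frequency groups
freqs = np.fft.rfftfreq(ics.shape[0], d=1 / fs)
spec = np.abs(np.fft.rfft(ics, axis=0))
f_mean = (freqs[:, None] * spec).sum(axis=0) / spec.sum(axis=0)
keep = np.flatnonzero(f_mean < 0.5)          # th_f = 0.5 hz -> ic_low group

# partial back-reconstruction of the modes from the selected ics
# (ica mean term neglected for brevity), then summed as in the second step
modes_low = (ics[:, keep] @ A[:, keep].T).T
v_low = modes_low.sum(axis=0)
```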
finally, short time fourier transform is a set of fourier transforms performed on a signal, which is subdivided into overlapped or non-overlapped temporal segments through a window (e.g., rectangular, hanning) translating in time. the fourier transform is applied under the hypothesis of pseudo-stationarity of the temporal segments [37], which is achieved through the choice of a short translating window. the stft expression for a generic discrete signal x(n) is the following:

STFT(n, \omega) = \sum_{h=-\infty}^{+\infty} x(n+h) \, w_N(h) \, e^{-j \omega h} ,    (3)

where w_N(n) is the translating window. the corresponding normalized, real-valued, non-negative spectrogram s_x(n, ω) can be computed through the following expression:

S_x(n, \omega) = \frac{2}{N} \cdot CF \cdot |STFT(n, \omega)|^2 ,    (4)

where n is the sample window length and cf is a correction factor varying according to the chosen window amplitude [21]. therefore, (4) has the advantage of taking the applied window type into account.

2.2. emodica-stft simulated application
this subsection describes the emodica-stft method proposed to assess the phantom failures, through the combination of the previously described techniques, as shown in the block diagram (figure 1). the following steps were applied to a real flow maximum velocity signal v_p(t) lasting ≈ 60 s, which was derived from pw spectrograms according to the procedure described in section 3.2. it was chosen among the available data because it is a representative case study: in fact, the signal shows both high velocity pulses and low frequency oscillations. a simulated velocity drift v_dr(t) of 0.06 cm·s-2 was added to v_p(t) as an increasing trend in time, obtaining a velocity signal v_p,tot(t) (figure 2); then, the steps described in the following and represented in figure 1 were applied.

figure 1. block diagram of the proposed emodica-stft method for flow phantom failures detection.
figure 2. real flow maximum velocity signal with low frequency oscillations and high velocity pulses, with a 0.06 cm·s-2 velocity drift.

first step. the first step is the application of emd to v_p,tot(t): the imfs and the residual r(t) are retrieved on the basis of (1).
second step. the second step is the application of fastica to the imfs in order to compute the ics and the mixing matrix [a]. at this point, the mean frequency f_mean of each independent component is obtained, and a frequency threshold th_f = 0.5 hz is selected to discriminate between high- and low-frequency content ics. therefore, two different groups of ics are obtained, namely ic_low and ic_high. these are multiplied by the mixing matrix [a] according to (2), to back-reconstruct the corresponding oscillatory modes imf_low and imf_high. finally, the modes of each group are summed together to reconstruct two signals v_p,low(t) and v_p,high(t) derived from the signal v_p,tot(t), where the first one has frequency contents lower than th_f (figure 3 a), while the second one has frequency contents higher than th_f (figure 3 b).
third step. the third step is the stft application, according to (3) and (4), to v_p,low(t) and v_p,high(t), with the settings reported in table 1. as already done in [21], the chosen spectral window is the hanning window, whose expression is the following:

w_N(n) = \frac{1}{2} \left( 1 - \cos \frac{2 \pi n}{N} \right) .    (5)

table 1. stft parameters setting.
parameter                   symbol       value
sampling frequency (hz)     fs           100
spectral window             -            hanning
window length (samples)     n            100
overlap (samples)           noverlap     60
zero-padding (samples)      nzero-pad    50
correction factor           cf           2
temporal resolution (s)     Δt           0.4
frequency resolution (hz)   Δf           ≈ 0.7

after the stft application, two mesh plots are obtained. in this way, it is possible to carry out the failure detection by distinguishing the contributions of the low frequency oscillations and the high velocity pulses in the spectrograms of v_p,low(t) and v_p,high(t), respectively.
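as a companion to (3)-(5) and the third step, the following sketch computes the normalized spectrogram directly with numpy, using the settings of table 1 (window length n = 100, overlap 60, zero-padding 50, hanning window, cf = 2). the snippet is illustrative rather than the authors' matlab implementation, and it assumes the (2/N)·CF grouping of the normalization in (4).

```python
import numpy as np

FS, N, OVERLAP, ZERO_PAD, CF = 100, 100, 60, 50, 2   # table 1 settings
HOP = N - OVERLAP                                    # -> 0.4 s temporal resolution

def hanning(N):
    # eq. (5): w_n(n) = (1 - cos(2*pi*n/N)) / 2
    n = np.arange(N)
    return 0.5 * (1.0 - np.cos(2.0 * np.pi * n / N))

def normalized_spectrogram(x):
    """eq. (3)-(4): windowed dfts of overlapping segments, scaled by 2/N * CF."""
    w, nfft = hanning(N), N + ZERO_PAD
    frames = []
    for start in range(0, len(x) - N + 1, HOP):
        X = np.fft.rfft(x[start:start + N] * w, nfft)     # eq. (3) on one segment
        frames.append(2.0 / N * CF * np.abs(X) ** 2)      # eq. (4)
    f = np.fft.rfftfreq(nfft, d=1.0 / FS)                 # Δf = fs/nfft ≈ 0.7 hz
    t = np.arange(len(frames)) * HOP / FS
    return np.array(frames).T, f, t                       # s_x, frequency, time axes
```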
figure 3. (a) reconstructed signal v_p,low(t) with low frequency contents superimposed on v_p,tot(t) after the offset removal; (b) reconstructed signal v_p,high(t) with high frequency contents superimposed on v_p,tot(t) after the offset removal.

low frequency oscillations are represented in the mesh plot of the v_p,low(t) normalized spectrogram as frequency pulses (figure 4 a, b). therefore, the detection of an oscillation occurs when s_x shows a pulse both limited in time (according to the oscillation period) and in frequency (≈ 0 hz), whose amplitude is higher than the threshold th_LF, automatically determined as follows:

th_{LF} = \frac{\sigma_v^2}{\Delta f} \cdot G ,    (6)

where σ_v is the flow velocity standard deviation depending on the phantom model, Δf is the stft frequency resolution, and g is a safety factor that in this study was chosen equal to 10.
in turn, high velocity pulses are represented in the normalized spectrogram of v_p,high(t) by a window covering almost all the frequency components in the mesh plot (figure 4 c, d). therefore, the detection of a pulse occurs when the average amplitude of the frequency components between 5 and 30 hz, related to a single temporal instant, is higher than the threshold th_HF. the latter can be automatically determined, by considering the sampling frequency f_s of v_p(t), as follows:

th_{HF} = \frac{\sigma_v^2}{\Delta f} \cdot G \cdot \frac{F_{fr}}{f_s / 2} ,    (7)

where f_fr is a factor that accounts for the extent of the frequency range in which the failure occurs in the normalized spectrogram. in this case, where a frequency range between 5 and 30 hz was considered, f_fr = 25 hz. the choice to reduce the frequency range in which the detection is carried out was necessary to compensate for non-ideal pulses [21].
fourth step. the fourth step of the emodica-stft method is the detection of the velocity drifts from the emd residual r(t). after the application of the least squares method to r(t) (figure 5), it is possible to evaluate any velocity drift through the computation of the angular coefficient m of the straight line that best approximates the residual trend. in particular, the detection of a velocity drift occurs when |m| is higher than the threshold th_dr, which can be automatically determined as:

th_{dr} = \frac{\sigma_v}{t_{PW}} \cdot G ,    (8)

where t_PW is the velocity signal duration, while the safety factor g for the velocity drift detection was chosen equal to 2. the advantage of (8) relies on its dependence on the phantom flow velocity standard deviation σ_v, so that the velocity drift perception is not affected by human eye subjectivity.
the retrieved angular coefficient of r(t), shown in figure 5, is lower than the simulated velocity drift added to v_p(t). this is likely due to the combination of v_dr(t) with the pre-existing trend of the real velocity signal under analysis.

figure 4. low frequency contents signal v_p,low(t) represented through (a) a mesh plot with the detected frequency peaks (th_LF = 60 cm2·s-2·hz-1) and (b) its temporal evolution together with the signal v_p,tot(t); high frequency contents signal v_p,high(t) represented through (c) a mesh plot with the detected frequency windows (th_HF = 30 cm2·s-2·hz-1) and (d) its temporal evolution together with the signal v_p,tot(t).
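the adaptive thresholds (6)-(8) can be checked numerically: with σ_v = 2 cm·s-1 (gammex, table 2 below), Δf = fs/(n + nzero-pad) = 100/150 hz, t_PW ≈ 60 s, g = 10 (or 2 for the drift) and f_fr = 25 hz, the sketch below reproduces the values quoted in section 3.2. the helper name is ours.

```python
def adaptive_thresholds(sigma_v, delta_f, t_pw, fs=100.0, f_fr=25.0):
    th_lf = sigma_v**2 / delta_f * 10          # eq. (6), g = 10
    th_hf = th_lf * f_fr / (fs / 2.0)          # eq. (7)
    th_dr = sigma_v / t_pw * 2                 # eq. (8), g = 2
    return th_lf, th_hf, th_dr

# gammex 1425a, sigma_v = 2 cm/s -> (60.0, 30.0, 0.0667): th_lf, th_hf, th_dr
# cirs 069,     sigma_v = 3 cm/s -> (135.0, 67.5, 0.1)
print(adaptive_thresholds(2.0, 100.0 / 150.0, 60.0))
print(adaptive_thresholds(3.0, 100.0 / 150.0, 60.0))
```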
finally, this simulation was repeated in two further cases to test the proposed method with different starting conditions: (a) v_p(t) with an additional velocity drift of 0.03 cm·s-2 and (b) v_p(t) with an additional velocity drift of 0.09 cm·s-2. the emodica-stft method identified the same low frequency oscillations and high velocity pulses retrieved in the first simulation, while the obtained velocity drift angular coefficients were (164.1 ± 0.4)·10-4 cm·s-2 (r2 = 0.97) and (684.9 ± 0.7)·10-4 cm·s-2 (r2 = 0.99) for cases (a) and (b), respectively.

figure 5. least squares method applied to the emd residual in the case of a 0.06 cm·s-2 velocity drift added to the real maximum velocity signal v_p(t).
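the velocity drift coefficients quoted above come from a least squares fit to the emd residual (fourth step); a minimal numpy sketch of that fit and of the |m| > th_dr test follows. the function name and the synthetic residual (drift plus gaussian noise) are illustrative choices, not the authors' data.

```python
import numpy as np

def detect_velocity_drift(residual, fs, th_dr):
    """fit r(t) with m*t + q (least squares) and flag a drift when |m| > th_dr."""
    t = np.arange(len(residual)) / fs
    m, q = np.polyfit(t, residual, 1)
    fit = m * t + q
    r2 = 1.0 - np.sum((residual - fit) ** 2) / np.sum((residual - residual.mean()) ** 2)
    return abs(m) > th_dr, m, r2

# synthetic residual with a 0.09 cm/s^2 drift, as in simulated case (b) above
fs = 100.0
t = np.arange(0, 60, 1 / fs)
residual = 0.09 * t + 0.1 * np.random.randn(t.size)
print(detect_velocity_drift(residual, fs, th_dr=7e-2))   # |m| ≈ 0.09 > 0.07
```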
3. materials and experimental data
in the present study, the emodica-stft method for the phantom failures assessment, implemented in matlab, is proposed as an improvement of the procedure previously described in [21]. this is achieved through the experimental setup described in the following section.

3.1. experimental setup
data were collected by using two doppler flow phantoms, whose main characteristics are reported in table 2. the first phantom under test (put) is the gammex optimizer® 1425a, a self-contained device [38] able to provide constant or pulsatile flow in the 1.7-12.5 ml·s-1 range through an electric flow controller. the second put is the cirs model 069, an easy-to-assemble simulator [39] able to provide an average flow velocity between 5 and 68 cm·s-1 through the action of a peristaltic pump, providing a pulsatile flow. the latter can be converted into a constant flow through the connection of a dampener. the acquisition started after 2 hours of phantom warm-up. in order to test their stability, five constant flow rate values (low lf, low-medium lmf, medium mf, medium-high mhf and high hf) were set, as shown in table 3. doppler phantom characteristics are not consistent between models; therefore, flow values were set differently to guarantee the same mean velocity regimes (v_gammex = v_cirs), as detailed in [21].

table 2. main characteristics of the two doppler flow phantoms [38], [39].
parameter                              gammex optimizer® 1425a      cirs model 069
tissue mimicking material              water-based mimicking gel    zerdine tissue mimicking gel
attenuation                            0.50 db·cm-1·mhz-1           0.70 db·cm-1·mhz-1
tmm sound speed                        1540 ± 10 m·s-1              1540 ± 10 m·s-1
tube inner diameter                    5.0 mm                       4.8 mm
flow velocity standard deviation (*)   2 cm·s-1                     3 cm·s-1
tube slope                             40°                          70°
dimensions                             40.7 × 22.9 × 35.6 cm        20 × 12.5 × 27.5 cm
(*) the flow velocity standard deviations were estimated from the specifications reported in the phantom datasheets.

table 3. doppler phantom flow rate and mean flow velocity settings.
flow phantom             flow regime       flow rate q (ml·s-1)   mean flow velocity v (cm·s-1)
gammex optimizer 1425a   low lf            2.6                    13.2
                         low-medium lmf    3.7                    18.8
                         medium mf         4.8                    24.4
                         medium-high mhf   5.9                    30.0
                         high hf           7.0                    35.7
cirs model 069           low lf            2.4                    13.3
                         low-medium lmf    3.4                    18.8
                         medium mf         4.4                    24.3
                         medium-high mhf   5.4                    29.8
                         high hf           6.4                    35.4

a single us system equipped with three us probes (linear, convex, and phased array) was used to acquire six pw spectrograms lasting ≈ 10 s for each flow regime. data were collected with two different pw doppler settings, namely set a and set b, reported in table 4, in order to compare the stability performance of the two test objects under different setting conditions. the sample volume length was kept fixed for both phantoms and settings, whereas the sample volume depth was changed according to the phantom model attenuation and kept consistent between set a and set b. as regards the insonification angle, it was varied according both to the probe positioning on the scanning surface and to the different tube slopes of the two phantoms.

table 4. pw doppler main configuration settings.
parameter                           set a                         set b
                                    gammex       cirs             gammex       cirs
doppler frequency (mhz)             l = 5.21, c = 2.50, p = 2.50
wall filter                         minimum                       medium
spectrum resolution                 minimum                       l = medium; c, p = minimum
sample volume length (cm)           3.0
sample volume depth (mm)            48           40               48           40
insonification angle (°)            52           l, c = 70; p = 55   52        l, c = 70; p = 55
pw spectrogram duration (s)         ≈ 10
pw spectrogram total duration (s)   ≈ 60
l = linear, c = convex, p = phased array probe.

3.2. emodica-stft on pw spectrograms
each pw spectrogram was processed for the detection of the maximum velocity waveform, as in [21]. the pixel coordinates px_max associated with the maximum velocity values were detected through a gray level adaptive threshold th_max, automatically determined as 10 % of the maximum gray level value [40]. at this point, the px_max were associated with the corresponding flow velocity values v_max for each temporal instant, taking into consideration the maximum value displayed on the pw velocity scale. then, the six v_max signals obtained for the same flow regime were juxtaposed. the emodica-stft application was implemented according to the block diagram in figure 1 and with the stft settings reported in table 1.
the thresholds applied for the detection of failures take different values for the two doppler phantoms under test because of the different flow velocity standard deviations (table 2). according to (6)-(8), the computed threshold values were: th_LF = 60 cm2·s-2·hz-1, th_HF = 30 cm2·s-2·hz-1 and th_dr = 7·10-2 cm·s-2 for the gammex 1425a, and th_LF = 135 cm2·s-2·hz-1, th_HF = 67 cm2·s-2·hz-1 and th_dr = 10·10-2 cm·s-2 for the cirs model 069. it is worth noting that, due to the threshold dependency on the flow standard deviation, the higher threshold values retrieved for the cirs phantom are a first indicator of its lower performance as compared to the gammex phantom.
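section 3.2 above describes how the maximum velocity waveform is extracted from the spectrogram image; the sketch below illustrates that step on a grayscale image array. the 10 % adaptive threshold follows [40], while the image orientation (row 0 at the top of the velocity scale) and the linear pixel-to-velocity mapping are our assumptions for this example.

```python
import numpy as np

def max_velocity_trace(img, v_scale_max):
    """for each column (time instant), take the topmost pixel whose gray level
    exceeds th_max = 10% of the image maximum, and map it to a velocity."""
    th_max = 0.10 * img.max()
    rows, cols = img.shape
    v_max = np.zeros(cols)
    for c in range(cols):
        bright = np.flatnonzero(img[:, c] >= th_max)
        if bright.size:
            v_max[c] = v_scale_max * (rows - bright[0]) / rows  # assumed linear scale
    return v_max
```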
4. results and discussion
the number of phantom failures detected for the two test objects according to the us probe, the flow regime (lf, lmf, mf, mhf and hf) and the pw doppler settings (set a and set b) is reported in table 5 and table 6. according to [41], the standard deviation values can be computed as the square root of the counted value.
the emodica-stft method did not detect any velocity drift failure on the two phantoms, because the angular coefficients |m| of the straight lines, obtained by applying the least squares method to all the emd residuals, were always lower than the determined th_dr.
as regards the gammex 1425a phantom, it should be noted that, independently of the us probe considered, the number of low frequency oscillations is globally limited, except for the lmf flow regime in set a and set b. as shown in figure 6, a sinusoidal trend is clearly visible, suggesting that the phantom electric flow controller is no longer able to guarantee a constant flow regime of 3.7 ml·s-1. on the other hand, both the convex and phased array probes show a higher number of high velocity pulses with respect to the linear array probe, suggesting a probe-dependent sensitivity to bmf particle agglomerates. furthermore, independently of the probe, the low flow regime lf shows the highest number of high velocity pulses. this may be due to the fact that the flow velocity is too low to dissolve the particle agglomerates.

table 5. number of detected failures according to the us probe, the flow regime and the pw doppler settings for gammex 1425a (th_LF = 60 cm2·s-2·hz-1, th_HF = 30 cm2·s-2·hz-1, th_dr = 7·10-2 cm·s-2).
us probe  flow regime  setting  low freq. oscillation  high velocity pulse  velocity drift  m (cm·s-2)  r2
linear    lf           set a    6 ± 2                  7 ± 3                −               -0.9·10-2   0.99
linear    lf           set b    −                      3 ± 2                −               -1.0·10-2   0.94
linear    lmf          set a    49 ± 7                 −                    −               -0.9·10-2   0.99
linear    lmf          set b    45 ± 7                 −                    −               2.1·10-4    0.99
linear    mf           set a    −                      −                    −               -3.7·10-3   0.99
linear    mf           set b    −                      −                    −               -2.3·10-3   0.98
linear    mhf          set a    −                      −                    −               4.0·10-3    0.99
linear    mhf          set b    −                      8 ± 3                −               -3.8·10-3   0.89
linear    hf           set a    −                      −                    −               -0.9·10-2   0.98
linear    hf           set b    −                      1 ± 1                −               2.8·10-3    0.99
convex    lf           set a    2 ± 1                  44 ± 7               −               4.0·10-2    0.98
convex    lf           set b    −                      14 ± 4               −               -4.3·10-4   0.98
convex    lmf          set a    47 ± 7                 2 ± 1                −               -4.9·10-3   0.99
convex    lmf          set b    44 ± 7                 2 ± 1                −               -1.2·10-2   0.99
convex    mf           set a    −                      9 ± 3                −               -1.6·10-3   0.99
convex    mf           set b    −                      1 ± 1                −               -2.4·10-3   0.99
convex    mhf          set a    −                      6 ± 2                −               -3.3·10-3   0.94
convex    mhf          set b    −                      3 ± 2                −               -0.9·10-2   0.99
convex    hf           set a    2 ± 1                  2 ± 1                −               -0.6·10-2   0.98
convex    hf           set b    −                      1 ± 1                −               1.9·10-5    0.99
phased    lf           set a    −                      25 ± 5               −               -0.8·10-3   0.99
phased    lf           set b    −                      4 ± 2                −               3.1·10-3    0.92
phased    lmf          set a    44 ± 7                 −                    −               -1.9·10-2   0.98
phased    lmf          set b    47 ± 7                 4 ± 2                −               1.2·10-3    0.99
phased    mf           set a    −                      3 ± 2                −               -1.4·10-2   0.94
phased    mf           set b    −                      8 ± 3                −               1.5·10-3    0.96
phased    mhf          set a    −                      9 ± 3                −               1.0·10-3    0.96
phased    mhf          set b    −                      10 ± 3               −               4.0·10-3    0.98
phased    hf           set a    −                      6 ± 2                −               -3.3·10-3   0.99
phased    hf           set b    −                      5 ± 2                −               -3.3·10-3   0.99
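as stated above, the ± values in table 5 and table 6 are simply the square roots of the counts [41]; a one-line check reproduces them:

```python
import math
for n in (49, 44, 6, 1):
    print(f"{n} ± {round(math.sqrt(n))}")   # 49 ± 7, 44 ± 7, 6 ± 2, 1 ± 1 as in table 5
```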
table 6. number of detected failures according to the us probe, the flow regime and the pw doppler settings for cirs model 069 (th_LF = 135 cm2·s-2·hz-1, th_HF = 67 cm2·s-2·hz-1, th_dr = 10·10-2 cm·s-2).
us probe  flow regime  setting  low freq. oscillation  high velocity pulse  velocity drift  m (cm·s-2)  r2
linear    lf           set a    4 ± 2                  −                    −               3.7·10-2    0.99
linear    lf           set b    2 ± 1                  −                    −               -0.8·10-2   0.99
linear    lmf          set a    −                      −                    −               3.6·10-3    0.95
linear    lmf          set b    −                      −                    −               -1.8·10-4   0.99
linear    mf           set a    −                      −                    −               -1.9·10-3   0.97
linear    mf           set b    −                      −                    −               -1.0·10-3   0.99
linear    mhf          set a    2 ± 1                  −                    −               0.8·10-2    0.94
linear    mhf          set b    11 ± 3                 1 ± 1                −               2.0·10-3    0.99
linear    hf           set a    −                      −                    −               3.8·10-4    0.99
linear    hf           set b    2 ± 1                  −                    −               2.3·10-2    0.99
convex    lf           set a    −                      5 ± 3                −               2.2·10-2    0.99
convex    lf           set b    −                      7 ± 3                −               -4.6·10-3   0.99
convex    lmf          set a    −                      9 ± 3                −               -2.0·10-2   0.95
convex    lmf          set b    −                      6 ± 3                −               1.0·10-2    0.93
convex    mf           set a    1 ± 1                  17 ± 4               −               2.2·10-2    0.95
convex    mf           set b    −                      11 ± 3               −               4.3·10-3    0.96
convex    mhf          set a    −                      1 ± 1                −               -1.9·10-2   0.99
convex    mhf          set b    18 ± 4                 2 ± 1                −               1.9·10-2    0.99
convex    hf           set a    5 ± 2                  2 ± 1                −               -2.6·10-2   0.99
convex    hf           set b    1 ± 1                  1 ± 1                −               0.6·10-2    0.99
phased    lf           set a    −                      8 ± 3                −               1.1·10-2    0.98
phased    lf           set b    −                      3 ± 2                −               -0.9·10-2   0.98
phased    lmf          set a    −                      17 ± 4               −               1.4·10-2    0.99
phased    lmf          set b    −                      11 ± 3               −               -0.9·10-2   0.99
phased    mf           set a    −                      14 ± 4               −               3.6·10-3    0.99
phased    mf           set b    −                      13 ± 4               −               -1.8·10-3   0.94
phased    mhf          set a    −                      8 ± 3                −               0.9·10-2    0.99
phased    mhf          set b    −                      12 ± 3               −               0.5·10-3    0.97
phased    hf           set a    −                      10 ± 3               −               0.5·10-2    0.99
phased    hf           set b    −                      5 ± 2                −               1.4·10-3    0.99

figure 6. example of the sinusoidal trend in the lmf regime for gammex 1425a acquired with the linear array probe in set a.
figure 7. example of the low frequency oscillations in the mhf regime for cirs model 069 acquired with the linear array probe in set b.

as regards the cirs model 069 simulator, no low frequency oscillation was detected through the phased array probe. with both the linear and convex array probes, a high number of oscillations was detected in correspondence of the medium-high flow regime mhf (figure 7). this could be due to a dampener failure in correspondence of such a flow regime. similarly to the gammex 1425a, a higher number of high velocity pulses was detected for both the convex and phased array probes.
by comparing the outcomes retrieved for the two doppler phantoms, the gammex 1425a shows the lowest number of low frequency oscillations (excluding the particular case of the lmf regime) when compared to the cirs model 069 for both the linear and convex array probes, while for the phased array probe no oscillations were detected. conversely, the gammex 1425a globally shows the highest number of high velocity pulses compared to the cirs model 069 for both the linear and convex array probes, while the trend seems to be reversed for the phased array probe.
since the same number of put failures can have a different relevance depending on the intended use of the ultrasound system to be checked through the put (e.g., in echocardiography, obstetrics and gynecology, pediatrics, etc.), the emodica-stft method may be applied with different thresholds by means of ad hoc us systems and probe models over time. therefore, the same put may be suitable for testing a restricted number of us scanners only. this could be an advantage for the technical assessment of the above medical devices, as well as for put maintenance.

5. conclusions
doppler phantoms are standard reference test devices that are not yet included in a shared standard focusing on the objective evaluation of their performances and failures. in particular, phantom stability assessment is currently carried out through visual and subjective evaluations, or without a rigorous protocol. therefore, in the present study, a novel method, named emodica-stft, based on the combined application of the emd, ica and stft techniques, is proposed and tested to automatically determine the number of phantom failures through the processing of pw doppler spectrograms. the main flow phantom failures were classified as low frequency oscillations, high velocity pulses and velocity drifts. data were collected from two flow phantoms by a single diagnostic us system equipped with three probe models. tests were carried out with two different us configuration settings and five flow regimes set on the test objects. after a series of simulations, adaptive thresholds for the detection of each failure were determined depending on the standard deviation of the put flow velocity. consequently, the emodica-stft method was applied to the maximum flow velocity signals extracted from the pw doppler spectrograms through automatic processing. finally, the number of detected failures was found for both doppler phantoms.
on the basis of the promising outcomes, further studies should be carried out (a) on a higher number of doppler phantoms, (b) on a larger number of us diagnostic systems and (c) including an in-depth investigation of the proposed method uncertainty.

acknowledgement
the authors wish to thank hitachi healthcare and jan galo of the clinical engineering service at i.r.c.c.s. children hospital bambino gesù for hardware supply and technical assistance in data collection.

references
[1] international electrotechnical commission, iec 61685:2002-04, ultrasonics – flow measurement systems – flow test object, 2002.
[2] j. e. browne, a review of ultrasound quality assurance protocols and test devices, phys. med. 30 (2014) pp. 742-751. doi: 10.1016/j.ejmp.2014.08.003
[3] s. m. shalbi, a. a. oglat, b. albarbar, f. elkut, m. a. qaeed, a. a. arra, a brief review for common doppler ultrasound flow phantoms, j. med. ultrasound 28 (2020) pp. 138-142. doi: 10.4103/jmu.jmu_96_19
[4] k. k. dakok, m. z. matjafri, n. suardi, a. a. oglat, s. e. nabasu, a review of carotid artery phantoms for doppler ultrasound applications, j. med. ultrasound 29 (2021) pp. 157-166. doi: 10.4103/jmu.jmu_164_20
[5] a. a. oglat, m. z. matjafri, n. suardi, m. a. abdelrahman, m. a. oqlat, a. a. oqlat, a new scatter particle and mixture fluid for preparing blood mimicking fluid for wall-less flow phantom, j. med. ultrasound 26 (2018) pp. 134-142. doi: 10.4103/jmu.jmu_7_18
[6] m. alshipli, m. a. sayah, a. a. oglat, compatibility and validation of a recent developed artificial blood through the vascular phantom using doppler ultrasound color- and motion-mode techniques, j. med. ultrasound 28 (2020) pp. 219-224. doi: 10.4103/jmu.jmu_116_19
[7] a. a. oglat, m. z. matjafri, n. suardi, m. a. oqlat, a. a. oqlat, m. a. abdelrahman, o. f. farhat, m. s. ahmad, b. n. alkhateb, s. j. gemanam, s. m. shalbi, r. abdalrheem, m. shipli, m. marashdeh, characterization and construction of a robust and elastic wall-less flow phantom for high pressure flow rate using doppler ultrasound applications, natural and engineering sciences 3 (2018) pp. 359-377. doi: 10.28978/nesciences.468972
[8] g. fiori, f. fuiano, a. scorza, j. galo, s. conforto, s. a. sciuto, a preliminary study on the adaptive snr threshold method for depth of penetration measurements in diagnostic ultrasounds, appl. sci. 10 (2020). doi: 10.3390/app10186533
[9] a. scorza, g. lupi, s. a. sciuto, f. bini, f. marinozzi, a novel approach to a phantom based method for maximum depth of penetration measurement in diagnostic ultrasound: a preliminary study, 2015 ieee international symposium on medical measurements and applications (memea), turin, italy, 7 – 9 may 2015. doi: 10.1109/memea.2015.7145230
[10] g. fiori, f. fuiano, a. scorza, j. galo, s. conforto, s. a. sciuto, a preliminary study on an image analysis based method for lowest detectable signal measurements in pulsed wave doppler ultrasounds, acta imeko 10 (2021) pp. 126-132. doi: 10.21014/acta_imeko.v10i2.1051
[11] g. fiori, f. fuiano, a. scorza, m. schmid, j. galo, s. conforto, s. a. sciuto, a novel sensitivity index from the flow velocity variation in quality control for pw doppler: a preliminary study, proc. of 2021 ieee international symposium on medical measurements and applications (memea), neuchâtel, switzerland, 23 – 25 june 2021. doi: 10.1109/memea52024.2021.9478686
[12] g. fiori, a. scorza, m. schmid, j. galo, s. conforto, s. a.
sciuto, a preliminary study on the average maximum velocity sensitivity index from flow velocity variation in quality control for color doppler, measurement: sensors 18 (2021). doi: 10.1016/j.measen.2021.100245
[13] s. cournane, a. j. fagan, j. e. browne, an audit of a hospital-based doppler ultrasound quality control protocol using a commercial string doppler phantom, phys. med. 30 (2014) pp. 380-384. doi: 10.1016/j.ejmp.2013.10.001
[14] ipem report no 102, quality assurance of ultrasound imaging systems, 2010, isbn 978-1-903613-43-6.
[15] k. v. ramnarine, t. anderson, p. r. hoskins, construction and geometric stability of physiological flow rate wall-less stenosis phantoms, ultrasound med. biol. 27 (2001) pp. 245-250. doi: 10.1016/s0301-5629(00)00304-5
[16] a. malone, d. chari, s. cournane, i. naydenova, a. fagan, j. browne, investigation of the assessment of low degree (< 50%) renal artery stenosis based on velocity flow profile analysis using doppler ultrasound: an in-vitro study, phys. med. 65 (2019) pp. 209-218. doi: 10.1016/j.ejmp.2019.08.016
[17] j. v. grice, d. r. pickens, r. r. price, technical note: a new phantom design for routine testing of doppler ultrasound, med. phys. 43 (2016) pp. 4431-4434. doi: 10.1118/1.4954205
[18] m. y. park, s. e. jung, j. y. byun, j. h. kim, g. e. joo, effect of beam-flow angle on velocity measurements in modern doppler ultrasound systems, ajr am. j. roentgenol. 198 (2012) pp. 1139-1143. doi: 10.2214/ajr.11.7475
[19] s. m. patel, p. e. allaire, h. g. wood, a. l. throckmorton, c. g. tribble, d. b. olsen, methods of failure and reliability assessment for mechanical heart pumps, artif. organs 29 (2005) pp. 15-25. doi: 10.1111/j.1525-1594.2004.29006.x
[20] k. k. mckee, g. forbes, i. mazhar, r. entwistle, i. howard, a review of major centrifugal pump failure modes with application to the water supply and sewerage industries, proc. of the icoms asset management conference, gold coast, qld, australia, 16 may 2011. online [accessed 4 december 2021] https://espace.curtin.edu.au/bitstream/handle/20.500.11937/28560/159989_38214_icoms%20-%20paper%2032%20%20k%20mckee.pdf?sequence=2&isallowed=y
[21] g. fiori, f. fuiano, a. scorza, m. schmid, j. galo, s. conforto, s. a. sciuto, doppler flow phantom stability assessment through stft technique in medical pw doppler: a preliminary study, proc. of 2021 ieee international workshop on metrology for industry 4.0 & iot (metroind4.0&iot), rome, italy, 7 – 9 june 2021. doi: 10.1109/metroind4.0iot51437.2021.9488513
[22] b. mijović, m. de vos, i. gligorijević, j. taelman, s.
van huffel, source separation from single-channel recordings by combining empirical-mode decomposition and independent component analysis, ieee trans. biomed. eng. 57 (2010) pp. 2188-2196. doi: 10.1109/tbme.2010.2051440 [23] z. wu, n. e. huang, ensemble empirical mode decomposition: a noise-assisted data analysis method, adv. adaptive data anal. 1 (2009) pp. 1-41. doi: 10.1142/s1793536909000047 [24] n. e. huang, s. zheng, s. r. long, m. c. wu, h. h. shih, q. zheng, n.-c. yen, c. c. tung, h. h. liu, the empirical mode decomposition and the hilbert spectrum for nonlinear and nonstationary time series analysis, proc. r. soc. lond. a. 454 (1998) pp. 903-995. doi: 10.1098/rspa.1998.0193 [25] g. r. naik, s. e. selvan, h. t. nguyen, single-channel emg classification with ensemble-empirical-mode-decompositionbased ica for diagnosing neuromuscular disorders, ieee trans. neural syst. rehabil. eng. 24 (2016) pp. 734-743. doi: 10.1109/tnsre.2015.2454503 [26] g. wang, c. teng, k. li, z. zhang, x. yan, the removal of eog artifacts from eeg signals using independent component analysis and multivariate empirical mode decomposition, ieee j. biomed. health inform. 20 (2016) pp. 1301-1308. doi: 10.1109/jbhi.2015.2450196 [27] h. r. ahmadi, n. mahdavi, m. bayat, a novel damage identification method based on short time fourier transform and a new efficient index, structures 33 (2021) pp. 3605-3614. doi: 10.1016/j.istruc.2021.06.081 [28] a. khan, d.-k. ko, s. c. lim, h. s. kim, structural vibrationbased classification and prediction of delamination in smart composite laminates using deep learning neural network, composites part b: engineering 161 (2019) pp. 586-594. doi: 10.1016/j.compositesb.2018.12.118 [29] m. le, j. kim, s. kim, j. lee, b-scan ultrasonic testing of rivets in multilayer structures based on short-time fourier transform analysis, measurement 128 (2018) pp. 495-503. doi: 10.1016/j.measurement.2018.06.049 [30] d. cordes, m. f. kaleem, z. yang, x. zhuang, t. curran, k. r. sreenivasan, v. r. mishra, r. nandy, r. r. walsh, energy-period profiles of brain networks in group fmri resting-state data: a comparison of empirical mode decomposition with the short-time fourier transform and the discrete wavelet transform, front. neurosci. 15 (2021). doi: 10.3389/fnins.2021.663403 [31] v. gupta, m. mittal, qrs complex detection using stft, chaos analysis, and pca in standard and real-time ecg databases, j. inst. eng. india ser. b 100 (2019) pp. 489-497. doi: 10.1007/s40031-019-00398-9 [32] a.-c. tsai, j.-j. luh, t.-t. lin, a novel stft-ranking feature of multi-channel emg for motion pattern recognition, expert systems with applications 42 (2015) pp. 3327-3341. doi: 10.1016/j.eswa.2014.11.044 [33] a. hyvärinen, p. ramkumar, l. parkkonen, r. hari, independent component analysis of short-time fourier transforms for spontaneous eeg/meg analysis, neuroimage 49 (2010) pp. 257-271. doi: 10.1016/j.neuroimage.2009.08.028 [34] a. hyvärinen, j. karhunen, e. oja, what is independent component analysis, in: independent component analysis. a. hyvärinen, j. karhunen, e. oja (editors). wiley-interscience, new york, ny, usa, 2001, isbn 0-471-22131-7, pp. 147-163. [35] a. hyvärinen, e. oja, a fast fixed-point algorithm for independent component analysis, neural computation 9 (1997) pp. 1483-1492. doi: 10.1162/neco.1997.9.7.1483 [36] http://research.ics.aalto.fi/ica/software.shtml online [accessed 4 december 2021] [37] b. 
boashash, heuristic formulation of time-frequency distributions, in: time-frequency signal analysis and processing, b. boashash (editor), academic press, 2015, isbn 978-0-12-398499-9, pp. 65-102.
[38] gammex, optimizer 1425a: ultrasound image analyzer for doppler and gray scale scanners. online [accessed 4 december 2021] https://cspmedical.com/content/102-1086_doppler_user_manual.pdf
[39] cirs tissue simulation & phantom technology, doppler ultrasound flow simulator – model 069. online [accessed 4 december 2021] http://www.3000buy.com/uploads/soft/180118/cirs069.pdf
[40] f. marinozzi, f. bini, a. d'orazio, a. scorza, performance tests of sonographic instruments for the measure of flow speed, 2008 ieee international workshop on imaging systems and techniques (ist), chania, greece, 10 – 12 september 2008. doi: 10.1109/ist.2008.4659939
[41] j. r. taylor, an introduction to error analysis: the study of uncertainties in physical measurements, university science books, sausalito, ca, usa, 1997, isbn 0-935702-75-x, pp. 245-260.

occupancy grid mapping for rover navigation based on semantic segmentation
acta imeko
issn: 2221-870x
december 2021, volume 10, number 4, 155-161
sebastiano chiodini1,2, marco pertile1,2, stefano debei1,2
1 cisas "giuseppe colombo", università degli studi di padova, via venezia 15, 35131 padova, italy
2 dipartimento di ingegneria industriale, università degli studi di padova, via gradenigo 6/a, 35131 padova, italy

section: research paper
keywords: convolutional neural network; slam; semantic segmentation; robot vision; martian environment
citation: sebastiano chiodini, marco pertile, stefano debei, occupancy grid mapping for rover navigation based on semantic segmentation, acta imeko, vol. 10, no.
section editor: roberto montanini, università di messina and alfredo cigada, politecnico di milano, italy

received july 26, 2021; in final form december 6, 2021; published december 2021

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

funding: project bird181070 funded by the program bird 2018 sponsored by the university of padova.

corresponding author: sebastiano chiodini, e-mail: sebastiano.chiodini@unipd.it

abstract
obstacle mapping is a fundamental building block of the autonomous navigation pipeline of many robotic platforms, such as planetary rovers. nowadays, occupancy grid mapping is a widely used tool for obstacle perception. it represents the environment as evenly spaced cells, whose posterior probability of being occupied is updated based on range sensor measurements. in more classic approaches, the cells are updated as occupied at the point where the ray emitted by the range sensor encounters an obstacle, such as a wall. the main limitation of such methods is that they are not able to identify planar obstacles, such as slippery, sandy, or rocky soils. in this work, we use the measurements of a stereo camera combined with a pixel-labelling technique based on convolutional neural networks to identify the presence of rocky obstacles in a planetary environment. once identified, the obstacles are converted into a scan-like model. the estimation of the relative pose between successive frames is carried out using the orb-slam algorithm. the final step consists of updating the occupancy grid map using bayes' update rule. to evaluate the metrological performance of the proposed method, images from a martian analogue dataset, the esa katwijk beach planetary rover dataset, have been used. the evaluation has been performed by comparing the generated occupancy map with a manually segmented orthomosaic map, obtained by a drone survey of the area, used as reference.

1. introduction

the problem of perception of the environment is fundamental for safe robot navigation. planetary rover exploration has some peculiarities that are not found in other autonomous robotics applications: (1) there is no gps system that can help with the localization process; (2) terrain assessment deals with an unstructured environment that is characterised by sharp rocks (large or small), sand, and bedrock, which are often confused with the noise of a stereo point cloud. a review of learning-based perception and navigation methods for rescue robotics, planetary exploration, and agricultural robotics can be found in [1]. it is not only safe navigation that benefits from the autonomous navigation capabilities of a rover, but also its scientific output; as an example, the distance travelled per sol by the nasa msl rover has increased from a few meters to 100 m [2]. the nasa mer rovers were equipped with the gestalt (grid-based estimation of surface traversability applied to local terrain) system [3], which is one of the first autonomous terrain assessment systems for planetary rovers. this system was able to detect geometric hazards such as rocks, ditches, and cliffs by processing the 3d point clouds generated by the rover stereo images; it looked mainly at geometric characteristics such as steps, slopes, and terrain roughness. an alternative method for estimating the traversability of a terrain is presented in [4], where the unevenness of the terrain is analysed by means of the power spectral density (psd) of the surface profile as measured by a stereo camera. in [5] an evaluation system for the traversability of rough terrain by a rover, based on an aerial uav survey, is presented.
looking at the context of indoor navigation, many methods involve the construction of occupancy grid maps that are updated via the bayesian occupancy filter with the range measurements coming from lidar sensors. as an example, [6] presents a corridor detection method based on 2d lidar and occupancy filtering. thanks to semantic segmentation, which is the process of dividing an image into digital objects, it is possible to add to the map information that otherwise could not be deduced from the three-dimensional model alone, such as whether the terrain is sandy or slippery. compared to other methods, such as classification, semantic segmentation allows for pixel-by-pixel labelling of the image, see figure 1. this enables a more detailed interpretation of the surrounding environment. cadena et al. [7] state that simultaneous localization and mapping is now entering its third stage, characterised by robust perception. this robustness requires the realization of robust performance, high-level understanding, resource awareness, and task-driven perception. semantics is an important tool in the pursuit of improved robustness, intuitive visualization, and efficient human–robot–environment interaction. the term semantic slam can be used to identify methods that comprise either semantic-based robustness/accuracy enhancements or semantic mapping. for details, see [8], which presents a detailed review of recent advances in semantic simultaneous localization and mapping, considering multi-scale map representation, object slam systems, and deep neural network-based slam. deeplab networks make a significant contribution to semantic segmentation, see, e.g., [9], [10], which introduce the concept of "atrous convolution" in cnn models and exhibit excellent accuracy in semantic segmentation, see [8]. in the literature, it is possible to find many works on semantic mapping in the urban environment. as an example, [11] generates a dense 3d reconstruction with associated semantic labelling from stereo camera images, while [12] and [13] propose multimodal sensor-based semantic 3d mapping systems using a 3d lidar combined with a camera. in the frame of maritime navigation, the authors of [14] present a water obstacle detection network based on image segmentation (wodis) for autonomous surface vehicles. gan et al. [15] describe a method that provides a unified probabilistic model for both occupancy and semantic probabilities, producing a bayesian continuous 3d semantic occupancy map from noisy point clouds. wang et al. [16] describe a joint method of a priori convolutional neural networks at the superpixel level and soft restricted context transfer. in [17], the authors describe a method to build a complete semantic map by merging the segmentation results from street- and satellite-view images. in [18] satellite images are segmented using a segnet network. li et al.
[19] present a fast 3d semantic mapping system based on monocular vision by fusing localization, mapping, and scene parsing. the method is based on an improved version of deeplab-v3+ [9], [10]. [20] presents an approach to generate an outdoor large-scale 3d dense semantic map based on binocular stereo vision and the segnet deep learning architecture. paz et al. [21] fuse image and pre-built point cloud map information to perform automatic and accurate labelling of static landmarks such as roads, sidewalks, crosswalks, and lanes. in this work, semantic segmentation is also performed on 2d images using the deeplab-v3+ network [9], [10]. however, these techniques have rarely been applied to the case of planetary exploration and environments with a low number of features. this paper proposes a terrain assessment method for the martian environment based on semantic mapping. the method takes as input a set of stereo images which are pixel-wise labelled with the state-of-the-art convolutional neural network labeller deeplabv3+ [9]. thanks to the stereo reconstruction of the scene, it is possible to associate the labels to the 3d points; therefore, obstacles are represented through a scan-like model [22]. all scans are combined with each other in an occupancy grid considering the trajectory travelled by the rover. the trajectory was calculated using the orb-slam algorithm [23]. to evaluate the metrological performance of the proposed method, images from the publicly available esa katwijk beach planetary rover dataset [24] are used. the dataset was created for the validation of localization and navigation algorithms in a martian-like environment, and it provides trajectory and map ground truth. the method evaluation has been performed using a manually segmented orthomosaic map, taken by a drone, as reference, and the trajectory reconstruction performance has been evaluated by comparison with a differential gps (dgps). the major contributions of this work are:
• the application of a cnn to identify obstacles characteristic of the planetary environment;
• the generation of a laser-scan-like point cloud representing the obstacles;
• the application of bayes filtering to obtain global obstacle maps;
• the comparison of the generated map with the ground truth for method validation.
the paper is divided as follows: section 2 describes the proposed terrain assessment method, in section 3 the algorithm performance in a martian analogue environment is evaluated, and in section 4 the concluding remarks are reported.

figure 1. compared to other methods, such as classification, semantic segmentation allows for a pixel-by-pixel labelling of the image. this enables the creation of more detailed maps.

2. method

the proposed method takes as input a rectified stereo image pair $(I_l, I_r)$, which is used to (1) identify the obstacles with a cnn labeller (deeplabv3+), (2) estimate the scene point cloud, and (3) estimate the rover trajectory. afterward, the point cloud is fused with the labels to obtain a scan-like representation of the obstacles. the scans associated with the rover poses that compose the trajectory are merged into an occupancy grid map using bayesian filtering. a diagram summarizing the various steps is shown in figure 2.

figure 2. scheme overview of the proposed terrain assessment algorithm.

2.1. obstacle identification

the purpose of the obstacle identification step is to label each pixel $(u_l, v_l)$ of the image as obstacle or free area.
image labelling is performed using a convolutional neural network (cnn) technique, as cnns have demonstrated a significant improvement in semantic segmentation tasks. moreover, compared to other methods, such as classification, semantic segmentation enables a more detailed interpretation of the surrounding environment. among the state-of-the-art methods (fcn [25], segnet [18] and u-net [26]), deeplabv3+ [9] has been chosen because (1) it has been demonstrated to outperform similar methods in several applications, and (2) it is a pre-trained network. the latter feature is necessary to reduce the training time. network pre-training has been performed on imagenet, which is a dataset containing more than one million images divided into 1000 classes. pre-training allows training the deeper layers of the network with the most characteristic features, whereas the final part of the training phase is application dependent and is used to train the outer layers of the network. for the context of this application, three classes have been chosen: sand, rocks, and background. the obstacle detection step applied to the images of the dataset used for testing is shown in figure 5 (left column). we have taken advantage of a pre-trained network to reduce the number of training images. data augmentation techniques have been applied to improve the training accuracy and the robustness to image variation, in particular cropping, mirroring, and resizing. the number of images used for training is 400: 50 original images and 350 obtained with data augmentation; more details are given in [27], [28], and [29]. since the boundaries of the identified classes may present imperfections or smudges, morphological operations have been applied to filter the identified regions: binary erosion, binary dilation, and removal of small regions [30]. for collision avoidance purposes, the main interest is limited to the potential point of collision with the obstacle. for this reason, for each identified region only the lower boundary is considered, which is the one that corresponds to the intersection between the ray that starts from the camera centre and the obstacle. this contour, combined with the range measurements obtained with the method described in section 2.2, provides a scan-like representation of the obstacle (see figure 3).

figure 3. the 3d obstacle map is broken down to a laser scan-like model.
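the mask post-processing and scan extraction just described can be summarised in a short sketch. the following python code is illustrative, not the authors' implementation: it assumes a binary rock mask produced by the labeller, uses scipy's standard morphology routines, and all names are hypothetical.

```python
import numpy as np
from scipy import ndimage

def clean_mask(mask, min_size=50):
    """filter a binary obstacle mask with binary erosion, binary
    dilation, and removal of small connected regions."""
    m = ndimage.binary_erosion(mask)
    m = ndimage.binary_dilation(m)
    labels, n = ndimage.label(m)                       # connected components
    sizes = ndimage.sum(m, labels, range(1, n + 1))    # pixels per region
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_size))
    return keep

def lower_boundary(mask):
    """for each image column, keep only the lowest obstacle pixel,
    i.e. the potential point of collision seen from the camera."""
    rows, cols = mask.shape
    scan = []
    for u in range(cols):
        v = np.flatnonzero(mask[:, u])
        if v.size:                       # column contains an obstacle
            scan.append((u, v.max()))    # image v grows downwards
    return scan                          # list of (u_l, v_l) scan pixels
```

the resulting $(u_l, v_l)$ scan pixels are then combined with the disparity-based range measurements of section 2.2.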
2.2. range measurement

first, the images are stereo-rectified using the camera intrinsic and extrinsic parameters. then, the semi-global block matching algorithm [31] is used to compute the disparity map. the semi-global block matching algorithm computes the scene disparity by comparing the sum of absolute differences (sad) for each block of pixels in the image and forces a similar disparity on neighbouring blocks. by knowing the disparity map and the intrinsic parameters of the camera, it is possible to estimate the depth of the scene and compute the related point cloud in the rover frame, $\boldsymbol{X}_j = [X_j, Y_j, Z_j]$, where $j = 1, \dots, n$, with $n$ the total number of pixels with an associated disparity. the 3d point coordinates in the left camera frame are given by

$$X_j = \frac{u_l Z_j}{f}, \qquad Y_j = \frac{v_l Z_j}{f}, \qquad Z_j = \frac{b f}{u_l - u_r} \, , \tag{1}$$

where $(u_l, v_l)$ and $(u_r, v_r)$ are respectively the pixel coordinates in the left and right image, $b$ is the camera baseline and $f$ is the camera focal length. each 3d point $\boldsymbol{X}_j$ of the point cloud is associated with a pixel $(u_l, v_l)$ of the image. only the 3d points that correspond to the lower boundary of the obstacles are used to update the occupancy grid map; see section 2.3.

2.3. relative trajectory reconstruction

sections 2.1 and 2.2 show how to determine the position of the obstacles with reference to the rover frame. however, to build a global map of obstacles and filter the measurements belonging to different rover poses, the obstacle positions $\boldsymbol{X}_i$ need to be defined with respect to an absolute reference system as $\boldsymbol{X}_i^W$, where w is the world reference frame. in this work the frame w corresponds to the first stereo-image frame rotated by the pan and tilt angles of the ptu (pan and tilt unit). to perform the trajectory reconstruction step, only the stereo camera visual input has been used, in combination with the state-of-the-art visual slam algorithm orb-slam2 [23]. for the purposes of this work, the camera pose issued by the tracking step was used; this pose belongs to se(3), the special euclidean group in three dimensions. assuming that the rover mainly moves in a planar environment, the se(3) pose has been transformed into the corresponding se(2) pose $\boldsymbol{T}_k^W = [\boldsymbol{R}_k^W \,|\, \boldsymbol{t}_{k,W}]$, which for completeness is reported in the following relation:

$$\boldsymbol{T}_k^W = \begin{bmatrix} \cos(\psi_k^W) & -\sin(\psi_k^W) & t_{x(k,W)} \\ \sin(\psi_k^W) & \cos(\psi_k^W) & t_{y(k,W)} \\ 0 & 0 & 1 \end{bmatrix} , \tag{2}$$

where $\psi_k^W$ is the absolute heading angle and $(t_{x(k,W)}, t_{y(k,W)})$ is the rover absolute position. finally, the obstacle scan line points are transformed from the rover frame to the w frame using the following equation:

$$\boldsymbol{X}_i^W = \boldsymbol{R}_k^W \boldsymbol{X}_i + \boldsymbol{t}_{k,W} \, . \tag{3}$$

2.4. sensor fusion

the last step is the update of the occupancy grid. the probability of each grid cell being occupied is updated following the standard bayes update rule [29], [32] and using the obstacle scan $\boldsymbol{X}_i^W$. in the experimental part, a grid resolution of 0.2 m × 0.2 m has been considered. assuming that the cells of the grid map are independent from each other, and given a series of rock observations $z_{1:j}$, the probability belief of a single cell being occupied by an obstacle or not, $p(m_{x,y} \mid z_{1:j})$, is reported in equation (4):

$$p(m_{x,y} \mid z_{1:j}) = \frac{p(m_{x,y} \mid z_j) \, p(z_j) \, p(m_{x,y} \mid z_{1:j-1})}{p(m_{x,y}) \, p(z_j \mid z_{1:j-1})} \, , \tag{4}$$

where $p(m_{x,y} \mid z_j)$ is the inverse measurement model of the depth data retrieved from the stereo camera, $p(m_{x,y} \mid z_{1:j-1})$ is the belief given the depth measurements of the previous rover poses, and $p(m_{x,y})$ is the prior map. to avoid difficult-to-calculate probabilities, we use the binary bayes filter in log-odds form:

$$l_j = l_{j-1} + \log \frac{p(m_{x,y} \mid z_j)}{1 - p(m_{x,y} \mid z_j)} - \log \frac{p(m_{x,y})}{1 - p(m_{x,y})} \, , \tag{5}$$

where $l_j = \log \frac{p(m_{x,y} \mid z_{1:j})}{1 - p(m_{x,y} \mid z_{1:j})}$. the log-odds form of the inverse measurement model, $\log \frac{p(m_{x,y} \mid z_j)}{1 - p(m_{x,y} \mid z_j)}$, assigns an occupancy value $l_{\text{occ}}$ to all cells containing the 3d labelled points $\boldsymbol{X}_j$. in the experiments, the occupied threshold is $l_{\text{occ}} = 0.65$ and the free threshold is $l_{\text{free}} = 0.2$. the grid is initialized without prior knowledge of the map, $p(m_{x,y}) = 0.5$.
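as an illustration of equations (1), (3) and (5), the following is a minimal numpy sketch, not the authors' implementation: the inverse-model probability p_hit is a placeholder value, the occupied/free thresholds are read here as probability thresholds, and all names are illustrative.

```python
import numpy as np

RES = 0.2          # grid resolution in m, as in the experimental part
L0 = 0.0           # prior log-odds, corresponding to p(m) = 0.5

def triangulate(u_l, v_l, u_r, b, f):
    """equation (1): 3d point in the left camera frame from a stereo match."""
    z = b * f / (u_l - u_r)
    return np.array([u_l * z / f, v_l * z / f, z])

def to_world(x_rover, psi, t):
    """equation (3) in the plane: rotate by the heading angle and translate."""
    R = np.array([[np.cos(psi), -np.sin(psi)],
                  [np.sin(psi),  np.cos(psi)]])
    return R @ x_rover + t

def update_cell(l_prev, p_meas):
    """equation (5): binary bayes filter in log-odds form."""
    return l_prev + np.log(p_meas / (1.0 - p_meas)) - L0

def integrate_scan(grid, scan_world, p_hit=0.7):
    """raise the occupancy belief of every cell hit by a scan point."""
    for x, y in scan_world:
        i, j = int(x / RES), int(y / RES)
        grid[i, j] = update_cell(grid[i, j], p_hit)
    return grid

def classify(grid, p_occ=0.65, p_free=0.2):
    """map log-odds back to probabilities; cells above/below the
    thresholds are reported as occupied (1) / free (0), else unknown (-1)."""
    p = 1.0 - 1.0 / (1.0 + np.exp(grid))
    return np.where(p > p_occ, 1, np.where(p < p_free, 0, -1))

grid = np.zeros((500, 500))   # initialized without prior knowledge (l = 0)
```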
3. experimental results

the metrological performance of the proposed method has been evaluated using the esa katwijk beach planetary rover dataset [24], which provides images analogous to mars. of the overall dataset, the loccam stereo images have been used for semantic map generation, while the site orthomosaic combined with differential gps has been used as ground truth. the characteristics of the sensors used in the experimental part are summarized in table 1. the dgps has been used to register the generated maps to the orthomosaic taken by the drone, which has been used as map reference. the ground truth of the map has been obtained by manually labelling the drone orthomosaic, and the ground truth of the trajectory is given directly by the dgps.

table 1. sensors characteristics.
sensor | description | data logged
loccam | pointgrey bumblebee2 (bb2-08s2c38) 12 cm baseline stereo camera | 1024 × 768 images
rtk gps | trimble bd 970 receiver with zephyr model 2 antenna (rover); trimble bx 982 receiver with zephyr geodetic antenna (base station) | latitude, longitude, and altitude expressed on the wgs84 ellipsoid

figure 4 shows the occupancy grid map generated with the proposed method; the occupancy grid is superimposed on the drone image used for performance evaluation.

figure 4. occupancy grid map generated with the proposed method. free cells are shown in green, obstacle-occupied cells in red, and yellow arrows show the trajectory of the rover. the occupancy grid is superimposed on the drone image used for performance evaluation.

the metrics normally used to evaluate object detection algorithms were used, namely the accuracy, the intersection over union (iou) and the f1 score:

$$\mathrm{accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \, , \tag{6}$$

$$\mathrm{IoU} = \frac{TP}{TP + FP + FN} \, , \tag{7}$$

where tp represents the true positives, tn the true negatives, fp the false positives, and fn the false negatives. in the case of an ideal classifier fp = fn = 0, and thus the accuracy metric would be equal to 1. iou is the ratio of correctly classified cells over the sum of the total number of cells labelled and cells classified. table 2 summarises the performance (accuracy and iou) of the proposed method.

table 2. global mapping performances in terms of accuracy and iou metrics.
labeller | trajectory | accuracy | iou
deeplabv3+ | orb-slam2 | 0.987 | 0.282

the left column of figure 5 shows the images labelled with the rock, sand and background classes, and the right column shows the occupancy grid map and trajectory estimated with the proposed method. as can be observed in the first row of figure 5, the two rocky obstacles are a few meters away from the camera and not easily recognizable; however, they are correctly labelled by the deeplabv3+ labeller. the effectiveness of the labeller can also be observed in the middle and bottom rows of figure 5: although the shadows of the rover are present on both rock and sand, they do not affect the labelling performance.

figure 5. left column: labelled images with rock, sand, and background classes. right column: occupancy grid map and trajectory estimated with the proposed method. from top to bottom, successive images of the sequence.
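the map-level metrics of equations (6) and (7) reduce to a few lines of numpy; this sketch is illustrative, not the authors' evaluation code, and compares a predicted occupancy grid against a manually labelled reference grid.

```python
import numpy as np

def accuracy_and_iou(pred, ref):
    """equations (6)-(7) for two boolean grids of equal shape
    (true = cell occupied by an obstacle)."""
    tp = np.sum( pred &  ref)
    tn = np.sum(~pred & ~ref)
    fp = np.sum( pred & ~ref)
    fn = np.sum(~pred &  ref)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    iou = tp / (tp + fp + fn)
    return accuracy, iou

# tiny usage example with a 3 x 3 grid
pred = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=bool)
ref  = np.array([[1, 0, 0], [0, 0, 0], [0, 0, 1]], dtype=bool)
print(accuracy_and_iou(pred, ref))   # -> (0.777..., 0.333...)
```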
4. conclusions

in this paper, a hazard mapping method for planetary rover navigation is presented. the method is based on occupancy grid mapping. the posterior probability of the grid cells is updated using the inverse measurement model of a scan-like obstacle detector. objects are identified as obstacles by means of the deeplabv3+ deep neural network. the estimation of the relative pose between successive frames is carried out using the state-of-the-art visual slam method orb-slam. the proposed method shows the ability to produce accurate occupancy grid maps with associated labels up to a dozen meters from the camera, even when the rover shadow is present in the image field of view. finally, the method has been tested on a publicly available dataset of a martian analogue environment.

references
[1] d. c. guastella, g. muscato, learning-based methods of perception and navigation for ground vehicles in unstructured environments: a review, sensors 21 (2021), 73. doi: 10.3390/s21010073
[2] m. bajracharya, m. w. maimone, d. helmick, autonomy for mars rovers: past, present, and future, computer, 41 (12) (2008), pp. 44-50. doi: 10.1109/mc.2008.479
[3] m. w. maimone, p. c. leger, j. j. biesiadecki, overview of the mars exploration rovers autonomous mobility and vision capabilities, proc. of the ieee international conference on robotics and automation (icra) space robotics workshop, april 2007.
[4] g. reina, a. leanza, a. milella, a. messina, mind the ground: a power spectral density-based estimator for all-terrain rovers, measurement, 151 (2020), 107136. doi: 10.1016/j.measurement.2019.107136
[5] d. c. guastella, l. cantelli, d. longo, c. d. melita, g. muscato, coverage path planning for a flock of aerial vehicles to support autonomous rovers through traversability analysis, acta imeko 8 (2019) 4, pp. 9-12. doi: 10.21014/acta_imeko.v8i4.680
[6] l.-h. chen, c. peng, a robust 2d-slam technology with environmental variation adaptability, ieee sensors journal 19 (23) (2019), pp. 11475-11491. doi: 10.1109/jsen.2019.2931368
[7] c. cadena, l. carlone, h. carrillo, y. latif, d. scaramuzza, j. neira, i. reid, j. j. leonard, past, present, and future of simultaneous localization and mapping: toward the robust-perception age, ieee trans. robot. 32 (6) (2016), pp. 1309–1332. doi: 10.1109/tro.2016.2624754
[8] l. xia, j. cui, r. shen, x. xu, y. gao, x. li, a survey of image semantics-based visual simultaneous localization and mapping: application-oriented solutions to autonomous navigation of mobile robots, int. j. of advanced robotic systems (2020), pp. 1–17. doi: 10.1177/1729881420919185
[9] l.-c. chen, y. zhu, g. papandreou, f. schroff, h. adam, encoder-decoder with atrous separable convolution for semantic image segmentation, proc. of the european conference on computer vision (eccv), 2018, pp. 801-818; in: v. ferrari, m. hebert, c. sminchisescu, y. weiss (eds), computer vision – eccv 2018, lecture notes in computer science, springer, cham, vol. 11211. doi: 10.1007/978-3-030-01234-2_49
[10] l.-c. chen, g. papandreou, i. kokkinos, k. murphy, a. l. yuille, deeplab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs, ieee trans. pattern analysis and machine intelligence, 40 (4) (2017), pp. 834–848. doi: 10.1109/tpami.2017.2699184
[11] s. sengupta, e. greveson, a. shahrokni, p. h. torr, urban 3d semantic modelling using stereo vision, proc. 2013 ieee international conference on robotics and automation, ieee, 2013, pp. 580-585. doi: 10.1109/icra.2013.6630632
[12] j. jeong, t. s. yoon, j. b.
park, multimodal sensor-based semantic 3d mapping for a large-scale environment, expert systems with applications, 105 (2018), pp. 1–10. doi: 10.1016/j.eswa.2018.03.051
[13] z. qiu, y. zhuang, f. yan, h. hu, w. wang, rgb-di images and full convolution neural network-based outdoor scene understanding for mobile robots, ieee transactions on instrumentation and measurement, 68 (1) (2019), pp. 27-37. doi: 10.1109/tim.2018.2834085
[14] x. chen, y. liu, k. achuthan, wodis: water obstacle detection network based on image segmentation for autonomous surface vehicles in maritime environments, ieee transactions on instrumentation and measurement, 70 (2021), pp. 1-13, 7503213. doi: 10.1109/tim.2021.3092070
[15] l. gan, r. zhang, j. w. grizzle, r. m. eustice, m. ghaffari, bayesian spatial kernel smoothing for scalable dense semantic mapping, ieee robotics and automation letters, 5 (2) (2020). doi: 10.1109/lra.2020.2965390
[16] q. wang, j. gao, y. yuan, a joint convolutional neural networks and context transfer for street scenes labeling, ieee trans. on intelligent transportation systems, 19 (5) (2018), pp. 1457-1470. doi: 10.1109/tits.2017.2726546
[17] v. balaska, l. bampis, i. kansizoglou, a. gasteratos, enhancing satellite semantic maps with ground-level imagery, robotics and autonomous systems, 139 (2021), 103760. doi: 10.1016/j.robot.2021.103760
[18] v. badrinarayanan, a. kendall, r. cipolla, segnet: a deep convolutional encoder–decoder architecture for image segmentation, ieee trans. pattern analysis mach. intell. 39 (12) (2017), pp. 2481–2495. doi: 10.1109/tpami.2016.2644615
[19] x. li, d. wang, h. ao, r. belaroussi, d. gruyer, fast 3d semantic mapping in road scenes, appl. sci. 9 (4) (2019), p. 631. doi: 10.3390/app9040631
[20] y. yang, f. qiu, h. li, l. zhang, m.-l. wang, m.-y. fu, large-scale 3d semantic mapping using stereo vision, international journal of automation and computing, 15 (2) (2018), pp. 194-206. doi: 10.1007/s11633-018-1118-y
[21] d. paz, h. zhang, q. li, h. xiang, h. i. christensen, probabilistic semantic mapping for urban autonomous driving applications, proc. ieee/rsj international conference on intelligent robots and systems (iros), october 25-29, 2020. doi: 10.1109/iros45743.2020.9341738
[22] f. andert, drawing stereo disparity images into occupancy grids: measurement model and fast implementation, proc. ieee/rsj international conference on intelligent robots and systems, ieee, 2009, pp. 5191–5197. doi: 10.1109/iros.2009.5354638
[23] r. mur-artal, j. d. tardós, orb-slam2: an open-source slam system for monocular, stereo and rgb-d cameras, ieee transactions on robotics, 33 (5) (2017), pp. 1255-1262.
doi: 10.1109/tro.2017.2705103
[24] r. a. hewitt, e. boukas, m. azkarate, m. pagnamenta, j. a. marshall, a. gasteratos, g. visentin, the katwijk beach planetary rover dataset, the international journal of robotics research, 37 (1) (2018), pp. 3-12. doi: 10.1177/0278364917737153
[25] j. long, e. shelhamer, t. darrell, fully convolutional networks for semantic segmentation, proc. of the ieee conference on computer vision and pattern recognition, 2015, pp. 3431-3440.
[26] o. ronneberger, p. fischer, t. brox, u-net: convolutional networks for biomedical image segmentation, proc. international conference on medical image computing and computer-assisted intervention, springer, 2015, pp. 234-241. doi: 10.1007/978-3-319-24574-4_28
[27] s. chiodini, l. torresin, m. pertile, s. debei, evaluation of 3d cnn semantic mapping for rover navigation, proc. ieee 7th international workshop on metrology for aerospace (metroaerospace), 2020, pp. 32-36. doi: 10.1109/metroaerospace48742.2020.9160157
[28] l. torresin, sviluppo ed applicazione di reti neurali per segmentazione semantica a supporto della navigazione di rover marziani (development and application of neural networks for semantic segmentation in support of martian rover navigation), master's thesis report, 2019.
[29] s. chiodini, m. pertile, s. debei, semantic mapping for rover navigation, proc. iv forum nazionale delle misure, 2020.
[30] r. van den boomgard, r. van balen, methods for fast morphological image transforms using bitmapped images, computer vision, graphics, and image processing: graphical models and image processing, 54 (3) (1992), pp. 254-258. doi: 10.1016/1049-9652(92)90055-3
[31] h. hirschmuller, accurate and efficient stereo processing by semiglobal matching and mutual information, proc. ieee computer society conference on computer vision and pattern recognition (cvpr 05), 2005, pp. 807-814. doi: 10.1109/cvpr.2005.56
[32] s. thrun, probabilistic robotics, communications of the acm, 45 (3), 2002, pp. 52-57. doi: 10.1145/504729.504754
a modified truncation and rounding-based scalable approximate multiplier with minimum error measurement

acta imeko issn: 2221-870x june 2022, volume 11, number 2, 1 – 6

yamini nagaratnam1, sudanthiraveeran rooban1
1 department of ece, koneru lakshmaiah education foundation, green fields, vaddeswaram, 522502, guntur, ap, india

section: research paper

keywords: approximate multiplier; hardware computation; mean absolute relative error; truncation-based multiplier; rounding operation; absolute error

citation: yamini nagaratnam, sudanthiraveeran rooban, a modified truncation and rounding-based scalable approximate multiplier with minimum error measurement, acta imeko, vol. 11, no. 2, article 37, june 2022, identifier: imeko-acta-11 (2022)-02-37

section editor: md zia ur rahman, koneru lakshmaiah education foundation, guntur, india

received february 7, 2022; in final form may 11, 2022; published june 2022

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: sudanthiraveeran rooban, e-mail: sroban123@gmail.com

abstract
multiplication necessitates more hardware resources and processing time. in a scalable method of approximate multiplication, the truncated rounding technique is added to reduce the number of logic gates in the partial products with the help of a leading-one-bit architecture. the truncation- and rounding-based scalable approximate multiplier (tosam) has a few error measurement modes based upon height (h) and truncation (t), named (h, t). these multipliers are named tosam(0,2), tosam(0,3), tosam(1,5), tosam(2,6), tosam(3,7), tosam(4,8), and tosam(5,9). multiplication has a substantial impact on metrics like power dissipation, speed, size and power consumption. a modified approximate absolute unit is proposed to enhance the performance of the existing approximate multiplier. the existing 16-bit (3,7) error measurement multiplier shows an error measurement value of 0.4 %. the proposed 16-bit multiplier for the same error measurement shows a measured error value of 0.01 %, a mean relative error of 0.3 %, a mean absolute relative error of 1.05 %, a normalized error distance of 0.0027, a variance of the absolute relative error of 0.52 %, a delay of 1.87 ns, a power of 0.23 mw, and an energy of 0.4 pj. the proposed multiplier can be applied in image processing. the work is designed in verilog hdl, simulated in modelsim, and synthesized in vivado.

1. introduction

in recent digital signal processing applications, the multiplier is a priority component, required to have low area and low power utilization. nowadays, approximate multipliers perform well in reducing area, delay, power, error and energy utilization. as a result of these requirements, approximate computing has become a popular trend in the world of digital design [1]. because of the high speed, fault tolerance, and power efficiency, the demand for efficient approximate multipliers is growing. the method of approximate computing encompasses a number of models, including data mining and multimedia processing [2]. multipliers are critical components in applications like digital signal processing, microprocessors, and embedded systems, to accomplish operations like filtering and neural network convolution. these multipliers are made of complicated logic blocks, which increases the amount of energy consumed when the size of the circuit is increased. because the multiplier is a fundamental component of arithmetic units, the design of approximate multipliers has been a research topic for many years [3]. the approximate multiplier is made up of a few basic blocks, in which the approximation technique is performed in any one of the various phases [3]. when it comes to approximation techniques, truncation of partial products is one of the most effective methods for reducing the error by using correction functions [4]. there are various types of error measurement approximate multipliers depending on the operand size.
to overcome the problem of high latency and energy utilization, this work introduces a measurement of a scalable approximate multiplier using a truncation- and rounding-based technique, which minimizes the number of partial products based on the leading-one bit position [5]. the proposed error measurement approximate multiplier comes in different bit-lengths.

2. generalised approximate computing

the approximate computing process is executed at multiple architecture layers, software, and other circuits [6], and the study of approximate computing is also applied in deep learning. arithmetic computation is performed by using the design of addition (in some cases termed accumulation) and multiplication, for applications such as dsp and machine learning. to achieve power and latency savings, many approximate adders have been developed; speculative adders and non-speculative transistor-level complete adders are combined to create the current approximate adder designs. the four basic parts of an approximate multiplier are:
approximation of operands
approximation of partial product generation
approximation of the partial product tree
approximation of compressors

2.1. approximation of operands

mitchell proposed the concept of a logarithmic multiplier (lm), which uses estimated operands to perform multiplication. the lm performs the operation by changing the operands to approximate logarithmic numbers using shifting and addition operations. using precise piecewise linear approximation [7] and an iterative methodology, the accuracy of contemporary designs of logarithmic multipliers is increased.
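as an illustration of mitchell's idea (a generic sketch of the classic algorithm, not the circuit of any of the cited designs), the logarithm of an operand $a = 2^k(1+x)$ is approximated by $k + x$, the two approximate logarithms are added, and the antilogarithm is approximated the same way:

```python
def mitchell_multiply(a: int, b: int) -> float:
    """classic mitchell approximation: log2(2**k * (1 + x)) ~ k + x."""
    ka, kb = a.bit_length() - 1, b.bit_length() - 1   # leading-one positions
    xa = a / (1 << ka) - 1.0                          # fractional parts in [0, 1)
    xb = b / (1 << kb) - 1.0
    s = ka + kb + xa + xb                             # approximate log2(a * b)
    k, x = int(s), s - int(s)                         # split integer/fraction
    return (1 << k) * (1 + x)                         # approximate antilogarithm

print(mitchell_multiply(11761, 2482))   # underestimates the exact 29 190 802
```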
the usage of estimated operands by the error-tolerant multiplier (etm) and the dynamic range unbiased multiplier (drum) [8] are further enhancements of multipliers. the etm [9] is built on a technique known as multiplier partitioning, which divides a multiplier into an accurate multiplication part and a non-multiplication part: the least significant bits (lsbs) are handled by the non-multiplication part, while the most significant bits (msbs) undergo proper multiplication.

2.2. approximation of partial product generation

to obtain the measured final product, specific processes are executed, but before that the partial products have to be generated, and these undergo some compression operations. the underdesigned multiplier (udm) is obtained by substituting one entry of the karnaugh map of a 2 × 2 approximate multiplier. the approximate 2 × 2 multipliers are utilised as fundamental units of bigger multipliers to yield approximate partial products, which are collected by using a correct adder tree. during the partial product accumulation stage, the generalised design of the udm is examined for further utilising carry-in prediction [10]. a study on approximate booth encoders is carried out in [11]; it uses two efficient radix-4 approximate booth encoders.

2.3. approximation of the partial product tree

in general, the truncation approach is used for incomplete product trees. the fixed-width multiplier keeps the least significant partial products unchanged in its estimate. some of the least significant columns are omitted in the inexact array multiplier, resulting in constant partial product columns. among the reduction and rounding strategies, the truncated multiplier that employs a correction constant is chosen. variable correction is required for truncated multipliers to avoid excessive errors.

2.4. approximation of compressors

compressors are commonly employed in the construction of high-speed multipliers [12] to accelerate the accumulation of partial products (pp). some error compensation algorithms for fixed-width booth multipliers [13] have recently been proposed, which increase the multipliers' accuracy. the error compensation circuit is developed using a simpler sorting network. several studies have been undertaken on how to determine the logarithm and antilogarithm of a number. mitchell suggested a simple method for calculating the logarithm and antilogarithm of a number, which is then utilised to generate approximate multiplication results (the mitchell multiplier). although effective, this multiplier falls short of the mark in accuracy, hence more research has been done to improve the approximation in the measurement of mitchell-based logarithmic multipliers.

3. proposed approximate multiplier

the proposed approximate multiplier, with a 16-bit error measurement for the rounding and truncation parameters (3,7), consists of the following blocks: approximate absolute unit (aau), leading one detector unit, also referred to as foremost one detector unit (lod), truncation unit (tu), arithmetic unit (au), shift unit (su), and sign and zero detector unit (szd); it is represented in figure 2.

3.1. approximate absolute unit

the aau is fed the inputs a and b. if an input operand is negative, the result is inverted; if the input operand is positive, the result is unchanged. this aau can be removed for unsigned multipliers. the measured values of a and b appear as |a|app and |b|app, as described in [14].

3.2. leading one detector unit

the lod unit, or foremost one detector unit, takes the |a|app and |b|app values as input. using these measured values, ka and kb are detected, i.e., the position of the '1' in the msb. these ka and kb values drive the shifting operation.
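a minimal behavioural sketch of these first two stages follows; the names are illustrative, this is not the authors' verilog, and the exact absolute value is used where the hardware unit only approximates it.

```python
def absolute_unit(x: int) -> int:
    """aau behaviour: pass positive operands through, invert negative ones.
    (the hardware approximates |x| for negative inputs by bit inversion;
    the exact absolute value is used here for clarity.)"""
    return -x if x < 0 else x

def leading_one_detector(x_abs: int) -> int:
    """lod behaviour: position k of the most significant '1',
    so that 2**k <= x_abs < 2**(k + 1)."""
    return x_abs.bit_length() - 1

print(leading_one_detector(absolute_unit(11761)))   # -> 13
print(leading_one_detector(absolute_unit(-2482)))   # -> 11
```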
3.3. truncation unit

the inputs of the tu are the measured values ka and kb, together with the |a|app and |b|app values. the approximate inputs [15] are trimmed and converted to fixed-width operands relying on the leading-one position of the input operands. the outputs obtained from the truncation unit, measured as $(y_a)_t$ and $(y_b)_t$, are given as inputs to the arithmetic unit. the term computed from the truncated unit is represented by the following equation:

$$\mathrm{TU} = 1 + (y_a)_t + (y_b)_t + (y_a)_{apx} (y_b)_{apx} \, . \tag{1}$$

3.4. arithmetic unit

the au performs the addition of the truncated fixed-width operands, of the product of the approximate inputs, and of the constant '1', which together can be written as tu in equation (1). it is worth noting that the msbs of $(y_a)_{apx}$ and $(y_b)_{apx}$ are identical to those of the measured values $(y_a)_t$ and $(y_b)_t$. some adders and logical and gates in the arithmetic unit require power gating, determined by the operating mode; this is done to improve the design's energy efficiency. the arithmetic block is the same for all bit lengths.

3.5. shift unit

the arithmetic unit's output must be left-shifted by the measured value of ka + kb (ka and kb are the leading one-bit positions of a and b). the term $2^{k_a + k_b} \left( 1 + (y_a)_t + (y_b)_t + (y_a)_{apx} (y_b)_{apx} \right)$ is obtained by conducting the shifting operation, as shown in [16]. the tosam multiplier should be developed for the greatest truncation 't' and rounding 'h' values (h = 5 and t = 9 in this case).

3.6. sign and zero detector unit

the sign of the output operand is determined by the signs of the input operands, and if at least one of the inputs is zero, the output is set to zero. for unsigned input operands the aau should be eliminated and the sign and zero detector unit replaced with a zero detector unit (zd), as the sign unit is unnecessary when the input operands are unsigned. the proposed approximate absolute unit is implemented in the truncated approximate multiplier. the error measurement values $(y_a)_{apx}$ ($(y_b)_{apx}$) are denoted with h + 1 bits in this example. when compared to the exact 16-bit multiplier of measurement (3,7) in figure 1, the dot diagram of figure 3 gives an overview of the procedure for a specific measurement where the truncation 't' and rounding 'h' values are t = 7 and h = 3, respectively. in the dot diagram, the green square represents the "1" bit in the measured term $1 + (y_a)_t + (y_b)_t + (y_a)_{apx} (y_b)_{apx}$. on the msb side, the orange circles represent the partial products of $(y_a)_{apx}$ and $(y_b)_{apx}$, while the purple triangles represent the msb bits of $(y_a)_t$ and $(y_b)_t$. the remaining grey circles and triangles in the dot diagram are not included in the current operations, but they will be considered for future multiplier computations. the measurement of partial multipliers in the 16-bit approximate multiplier is illustrated in figure 3. the process involved in multiplying the input operands a and b for a specific measurement of truncation t = 7 and rounding h = 3 is shown in figure 1. in the tosam(x, y) structures, x and y correspond to the rounding 'h' and truncation 't' parameters, and the correctness of this multiplier technique is mostly determined by these parameters 't' and 'h'.
as a result, the relationship between the two parameters 't' and 'h' must be satisfied in order to ensure maximum precision, as well as a reasonable speed and energy consumption. finally, by examining several approaches, this multiplication strategy can be employed for both signed and unsigned operands. to apply this approach to signed multipliers, we must first determine the absolute values of the input operands a and b, as well as the sign of the product. the calculation time can be reduced by finding the input operands with exact absolute values. in the example, the input operand a is 16-bit with a decimal value of 11761, whereas the input operand b is 16-bit with a decimal value of 2482. the exact measured value of a and b, written as (a × b)exact, is 0000 0001 1011 1101 0110 1010 1001 0010 in binary format, i.e., 29 190 802 in decimal format. using the existing method, the value (a × b)existed is 0000 0001 1011 1001 0000 0000 0000 0000 in binary format, i.e., 28 901 376 in decimal format; the difference between the existed and exact values in this situation is 289 426. the value (a × b)proposed, calculated using the approximation technique explained in figure 4, is 0000 0001 1011 1001 1111 1111 1111 1111 in binary format, i.e., 28 966 911 in decimal format; the difference between the exact and proposed values is 223 891. the ka and kb values reflect the leading one-bit locations in the input operands a and b; the measured ka and kb values in this case are 13 and 11, respectively. various (h, t) combinations produce slight modifications of the numerical example. various studies have been carried out to build new measurements of approximate multipliers. in the dynamic segment method (dsm) [17] design, the input operands are trimmed to 'm' bits depending on the location of the leading one bit, and fixed-width multiplication is implemented on the values obtained from the truncation operation. while applying this method of truncation, the produced output value is in most cases less than the exact one, resulting in a negative mean relative error (mre). when considering digital signal processing applications, one strives to keep the mean error as low as feasible in order to achieve a good signal-to-noise ratio (snr). the drum structure is truncated to yield the solution [18]. to bring the mre value near zero, the lsb of the shorter input is assigned the value "1", to limit the erroneous outcome. the truncation of the input operands is performed in the multiplication stage in the low energy truncation-based approximate multiplier (letam) [19] structure, and there is a chance of omitting half of the partial products.

figure 1. 16-bit tosam numerical example of measurement for truncation t = 7 and rounding h = 3.
figure 2. block diagram of the truncated multiplier.
figure 3. representation of the measured term $1 + (y_a)_t + (y_b)_t + (y_a)_{apx} (y_b)_{apx}$ in the dot diagram, with truncation 't' and rounding 'h' parameters of 7 and 3, respectively.
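to make the data path of sections 3.1–3.5 concrete, the following is a behavioural python sketch of the tosam-style computation $2^{k_a+k_b}\left(1 + (y_a)_t + (y_b)_t + (y_a)_{apx}(y_b)_{apx}\right)$. it is an illustration under stated assumptions, not the authors' verilog: round-to-nearest is used for the h-bit approximate operands, whereas the proposed design rounds the operands towards odd values, so the result for the paper's example differs slightly from the reported 28 966 911.

```python
import math

def tosam_multiply(a: int, b: int, h: int = 3, t: int = 7) -> int:
    """behavioural model of a truncation- and rounding-based
    approximate multiplier (tosam-style), for signed integers."""
    sign = -1 if (a < 0) != (b < 0) else 1      # sign detector
    a, b = abs(a), abs(b)
    if a == 0 or b == 0:                        # zero detector
        return 0
    ka, kb = a.bit_length() - 1, b.bit_length() - 1   # leading one detector
    ya = a / (1 << ka) - 1.0                    # fractional parts in [0, 1)
    yb = b / (1 << kb) - 1.0
    ya_t = math.floor(ya * (1 << t)) / (1 << t)       # truncation to t bits
    yb_t = math.floor(yb * (1 << t)) / (1 << t)
    ya_apx = round(ya * (1 << h)) / (1 << h)    # h-bit rounding (nearest;
    yb_apx = round(yb * (1 << h)) / (1 << h)    #  the hardware rounds to odd)
    est = (1 << (ka + kb)) * (1 + ya_t + yb_t + ya_apx * yb_apx)  # shift unit
    return sign * int(est)

print(tosam_multiply(11761, 2482))   # 29 097 984; exact product is 29 190 802
```

on the paper's example this sketch has a relative error of about −0.3 %, in line with the mre reported for tosam(3,7) in table 1.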
maxare is specified as maximum absolute relative error (considered from relative error re) mre is specified as mean relative error mare is specified as mean absolute relative error vare is specified as variance of absolute relative error ned is specified as normalized error distance max_ned is specified as maximum normalized error distance comparison of the accuracy of tosam against other approximation multipliers like dsm, drum, letam, and uroba in terms of mre, maxare, mare, vare, max ned, and ned using random vectors [5] is performed on the parameters of maxare, mre, mare, vare, max ned, and ned. all these findings are summarised in table 1. edp is defined as energy-delay product pda is defined as power-area-delay product delay, power, area, energy, edp, pda, and mare of the approximation multiplier are calculated and compared with the existing multiplier design and is tabulated in table 2. from the data, the proposed modified multipliers show better results than the other existing approximation multiplier configurations with respect to speed and energy usage while maintaining almost identical mare values. table 2 shows a comparison between dsm, drum, letam, u-roba with the proposed measurement approach of tosam (3,7) approximate multiplier. 4. results and discussions the proposed approximation multiplier with output measurement of 32-bit truncated based multiplier that produces a result that is more approximate than the existing multiplier. 11761 is the a value, and 2482 is the b value. the exact values of a and b are as follows: (a × b)exact value 29 190 802, and the existed of a and b is (a × b)existed equals 28 901 376. the difference between existed and exact values in this situation is 289 426. the value of (a × b)proposed is 28 966911when utilising the proposed approximate technique; the difference between proposed and exact is 223 891, indicating that the value is more approximate. the output is generated in the next cycle and the error value is also shown in figure 5. the internal structure shows the blocks of the proposed approximate multiplier namely approximate absolute unit aau, lead one detector lod, truncation unit tu, arithmetic unit au, shifter, sign-set. also, it represents the flow of data from one block to the other and is shown in figure 6. 5. conclusion and future scope low-energy and area-efficient 16-bit approximation multiplier is proposed. truncation on input operands is performed with two different parameters namely truncation ‘t’ and rounding ‘h’. the existing 16-bit multiplier with rounding and truncation measurement (3,7) shows a measurement error of 0.4 %. the proposed 16-bit multiplier for the same truncation and rounding measurement (3,7) with the measured error of 0.01 % (the error is less than 1 %). the error is reduced by rounding the input operands to the next odd value. the table 1. representation of various approximate multiplier with maxare, mre, mare, vare, max ned, and ned. architecture maxare (%) mre (%) mare (%) vare (%) max ned ned dsm(3) [20] 36.00 -16.1 16.10 40.43 0.2344 0.0399 tosam(0,2) [20] 31.25 -9.1 10.90 46.63 0.3125 0.0309 tosam(0,3) [20] 25.00 -3.3 7.61 28.81 0.2500 0.0213 drum(3) [8] 56.25 2.1 11.90 79.96 0.2344 0.0281 tosam(1,5) [20] 13.89 -0.7 3.95 7.60 0.1250 0.0104 tosam(2,6) [20] 6.87 -0.6 2.06 2.00 0.0664 0.0053 proposed tosam(3,7) 3.65 -0.3 1.05 0.52 0.0342 0.0027 letam(3) [14] 9.72 -4.0 4.00 2.54 0.0859 0.0104 u-roba [15] 11.10 0 2.89 6.37 0.0625 0.0069 tosam(4,8) [20] 1.88 -0.2 0.53 0.13 0.0173 0.0013 table 2. 
4. results and discussions

the proposed approximate multiplier, with a 32-bit output from the truncation-based multiplier, produces a result that is closer to the exact one than the existing multiplier. in the example, a is 11761 and b is 2482. the exact value is (a × b)exact = 29 190 802, and the existing method gives (a × b)existed = 28 901 376; the difference between the existed and exact values in this situation is 289 426. the value of (a × b)proposed is 28 966 911 when utilising the proposed approximate technique; the difference between the proposed and exact values is 223 891, indicating that the value is a better approximation. the output is generated in the next cycle, and the error value is also shown in figure 5. the internal structure shows the blocks of the proposed approximate multiplier, namely the approximate absolute unit (aau), lead one detector (lod), truncation unit (tu), arithmetic unit (au), shifter, and sign-set; it also represents the flow of data from one block to the other, and is shown in figure 6.

figure 4. example of the generation of the approximate absolute value with two negative numbers.
figure 5. result of the proposed approximate multiplier with measurement of (3,7).
figure 6. rtl schematic of the proposed approximate multiplier with error measurement of (3,7).

5. conclusion and future scope

a low-energy and area-efficient 16-bit approximate multiplier is proposed. truncation of the input operands is performed with two different parameters, namely truncation 't' and rounding 'h'. the existing 16-bit multiplier with rounding and truncation measurement (3,7) shows a measurement error of 0.4 %, whereas the proposed 16-bit multiplier for the same truncation and rounding measurement (3,7) shows a measured error of 0.01 % (the error is less than 1 %). the error is reduced by rounding the input operands to the next odd value. the recommended approximate multiplier is scalable and outperforms the exact multiplier in regard to speed, area, and energy. the proposed approximate multiplier consumes 0.23 mw, which is less than the existing approximate multipliers. various types of approximate multipliers are used in sharpening images. in future there is also the possibility of using the multiplier and accumulator unit to create an image sharpening module, and this may be used to measure the energy consumption of various approximate multipliers. also, in other applications, the jpeg technique can be used to compress many images, and approximate multipliers can be used in the discrete cosine transform unit.

references
[1] j. han, m. orshansky, approximate computing: an emerging paradigm for energy-efficient design, proc. 18th ieee eur. test symp., avignon, france, 27-30 may 2013, pp. 1–6. doi: 10.1109/ets.2013.6569370
[2] v. k. chippa, s. t. chakradhar, k. roy, a. raghunathan, analysis and characterization of inherent application resilience for approximate computing, proc. 50th acm/edac/ieee des. automat. conf. (dac), austin, tx, usa, 29 may - 7 june 2013, pp. 1–9. doi: 10.1145/2463209.2488873
[3] h. jiang, c. liu, n. maheshwari, f. lombardi, j. han, a comparative evaluation of approximate multipliers, proc. ieee/acm int. symp. nanoarch, beijing, china, 18-20 july 2016, pp. 191–196. doi: 10.1145/2950067.2950068
[4] s. balamurugan, p. s. mallick, error compensation techniques for fixed-width array multiplier design: a technical survey, j. circuits, syst. comput., vol. 26, no. 3 (2017), p. 1730003. doi: 10.1142/s0218126617300033
[5] a. momeni, j. han, p. montuschi, f. lombardi, design and analysis of approximate compressors for multiplication, ieee trans. comput., vol. 64, no. 4 (2015), pp. 984–994. doi: 10.1109/tc.2014.2308214
[6] s. venkataramani, s. chakradhar, k. roy, a. raghunathan, approximate computing and the quest for computing efficiency, proc. 52nd annual design automation conference (dac), san francisco, ca, usa, 8-12 june 2015, article 120, pp. 1-6. doi: 10.1145/2744769.2744904
[7] j. low, c. jong, unified mitchell-based approximation for efficient logarithmic conversion circuit, ieee trans. computers, vol. 64, no. 6 (2015), pp. 1783-1797. doi: 10.1109/tc.2014.2329683
[8] s. hashemi, r. bahar, s. reda, drum: a dynamic range unbiased multiplier for approximate applications, proc. ieee/acm international conference on computer design, austin, tx, usa, 2-6 november 2015, pp. 418-425. doi: 10.1109/iccad.2015.7372600
[9] s. rooban, s. saifuddin, s. leelamadhuri, s. waajeed, design of fir filter using wallace tree multiplier with kogge-stone adder, international journal of innovative technology and exploring engineering, vol. 8, no.
6 (2019), pp. 92-96.
[10] v. leon, g. zervakis, d. soudris, k. pekmestzi, approximate hybrid high radix encoding for energy-efficient inexact multipliers, ieee transactions on very large scale integration (vlsi) systems, vol. 26, no. 3 (2018), pp. 421-430. doi: 10.1109/tvlsi.2017.2767858
[11] s. rooban, d. l. prasanna, k. b. d. teja, p. v. m. kumar, carry select adder design with testability using reversible gates, international journal of performability engineering, vol. 17, no. 6 (2021), pp. 536–542. doi: 10.23940/ijpe.21.06.p6.536542
[12] s. venkatachalam, e. adams, h. j. lee, s. b. ko, design and analysis of area and power efficient approximate booth multipliers, ieee transactions on computers, vol. 68, no. 11 (2019), pp. 1697-1703. doi: 10.1109/tc.2019.2926275
[13] s. narayanamoorthy, h. a. moghaddam, z. liu, t. park, n. s. kim, energy-efficient approximate multiplication for digital signal processing and classification applications, ieee trans. very large scale integr. (vlsi) syst., vol. 23, no. 6 (2015), pp. 1180–1184. doi: 10.1109/tvlsi.2014.2333366
[14] s. vahdat, m. kamal, a. afzali-kusha, m. pedram, letam: a low energy truncation-based approximate multiplier, comput. elect. eng., vol. 63 (2017), pp. 1–17. doi: 10.1016/j.compeleceng.2017.08.019
[15] r. zendegani, m. kamal, m. bahadori, a. afzali-kusha, m. pedram, roba multiplier: a rounding-based approximate multiplier for high-speed yet energy-efficient digital signal processing, ieee trans. very large scale integr. (vlsi) syst., vol. 25, no. 2 (2017), pp. 393–401. doi: 10.1109/tvlsi.2016.2587696
[16] m. ha, s. lee, multipliers with approximate 4–2 compressors and error recovery modules, ieee embedded syst. lett., vol. 10, no. 1 (2018), pp. 6–9. doi: 10.1109/les.2017.2746084
[17] d. esposito, a. g. m. strollo, e. napoli, d. de caro, n. petra, approximate multipliers based on new approximate compressors, ieee trans. circuits syst. i, reg. papers, vol. 65, no. 12 (2018), pp. 4169–4182. doi: 10.1109/tcsi.2018.2839266
[18] i. alouani, h. ahangari, o. ozturk, s. niar, a novel heterogeneous approximate multiplier for low power and high performance, ieee embedded syst. lett., vol. 10, no. 2 (2018), pp. 45–48. doi: 10.1109/les.2017.2778341
[19] m. masadeh, o. hasan, s. tahar, comparative study of approximate multipliers, glsvlsi'18: proceedings of the 2018 great lakes symposium on vlsi, 2018, pp. 415-418. doi: 10.1145/3194554.3194626
[20] s. vahdat, m. kamal, a. afzali-kusha, m.
pedram, tosam: an energy-efficient truncationand rounding based scalable approximate multiplier, ieee transactions on very large scale integration (vlsi) systems, vol.27, no.5 (2019), pp. 1161-1173. doi: 10.1109/tvlsi.2018.2890712 https://doi.org/10.1016/j.compeleceng.2017.08.019 https://doi.org/10.1109/tvlsi.2016.2587696 https://doi.org/10.1109/les.2017.2746084 https://doi.org/10.1109/tcsi.2018.2839266 https://doi.org/10.1109/les.2017.2778341 https://doi.org/10.1145/3194554.3194626 https://doi.org/10.1109/tvlsi.2018.2890712 fundamental aspects in sensor network metrology acta imeko issn: 2221-870x march 2023, volume 12, number 1, 1 6 acta imeko | www.imeko.org march 2023 | volume 12 | number 1 | 1 fundamental aspects in sensor network metrology sascha eichstädt1, anupam p. vedurmudi1, maximilian gruber1, daniel hutzschenreuter1 1 physikalisch-technische bundesanstalt, bundesallee 100, 38116 braunschweig, germany section: research paper keywords: sensor network; measurement uncertainty; internet of things; co-calibration citation: thomas bruns, dirk röske, paul p. l. regtien, francisco alegria , template for an acta imeko paper, acta imeko, vol. a, no. b, article c, month year, identifier: imeko-acta-a (year)-b-c section editor: section editor received january 1, 2021; in final form january 31, 2021; published march 2021 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. funding: parts of this work was based on outcomes of the projects 17ind12 met4fof, 17ind02 smartcom, bmbf famous and the bmwk gemimeg-ii project. the 17ind12 met4fof and 17ind02 smartcom projects have received funding from the empir programme co-financed by the participating states and from the european union‘s horizon2020 research and innovation programme. the famous project received funding from the german federal ministry of education and research (bmbf). the gemimeg-ii project received funding from the german federal ministry of economy and climate (bmwk). corresponding author: sascha eichstädt, e-mail: sascha.eichstaedt@ptb.de 1. introduction digital transformation includes the integration of digital technologies, such as software, communication and algorithms into products, processes, and services. a disruptive consequence is that these technologies are being used to generate completely new products, processes, and services. the future digital world differs from today’s situation substantially: digital exchange of data and information is becoming the standard; information is provided via cloud services in a machine-actionable way; digital infrastructures utilize information from calibration, self-diagnosis and other metadata communicated by individual measuring instruments; processes and services in the quality infrastructure are based on distributed databases and application programming interfaces (apis). one outcome of the transition to the digital world is that distributed measuring instruments and sensor networks are becoming more important than individual measuring instruments. applications such as the industrial internet of things (iiot) and automated driving will belong to the first examples where the role of metrology is challenged. these challenges include methods for metrological traceable cocalibration and the metrological assessment of whole sensor networks in a systemic approach. 
For instance, the assessment of an autonomous vehicle measuring system requires a holistic perspective on the whole system instead of only on the individual measuring elements. Moreover, in the digital world, algorithms and software become as important as the actual measurements, and they will thus increasingly influence the metrological traceability chains for measurands. In the digital age, artificial intelligence, sensor fusion and virtual measuring instruments will replace many of today's tools and principles. Their use will require a fundamental re-evaluation of the established methodologies for uncertainty evaluation and the assessment of algorithms. In the example of the autonomous vehicle, the assessment of the measuring system must take into account the algorithms which analyse the data in order to take decisions. Quality of measurement data, trustworthiness of measurement results and reliability of measuring instruments are as important in the digital world as before. Hence, metrology also plays an important role in the quality infrastructure in the digital age [1].

So far, metrology has focused on single measuring instruments and sensors. However, in a rapidly increasing number of applications, networks of sensors are used to address measurement needs. Examples can be found in the predictive maintenance of production machines in industry, urban air quality monitoring, and multi-modal human health assessment using wearables [2]. An important aspect that distinguishes sensor network applications from single-sensor measurements is that the combined information from all sensors, rather than the individual sensors, is the main object of interest. For instance, the combination of microphone data and vibration measurements in predictive maintenance provides more insight into the actual status of the monitored machine than the individual measurements alone [3].

A consequence of the focus on combined sensor data rather than on individual sensors is that the definition of the quantity of interest, the measurand, is not straightforward. Moreover, the assessment of the quality and reliability of the system is more complex and challenging than for the individual sensors alone, e.g., in terms of calibration results. Such an assessment also has to take into account potential sensor failures or networking issues. This includes the consideration of energy consumption, the localisation of sensors in the network and network communication synchronisation [4].

Let us consider again the above example of predictive maintenance. The combination of the different sensor data is carried out in a data-driven approach, i.e., using machine learning methods.
The target is a classification of the remaining lifetime of the machine being monitored. Thus, the purpose of the measuring sensor network and data analysis is clear, but the outcome – the expected remaining lifetime – is not a physics-based combination of the involved measured quantities.

Since the combination of all sensor data is of interest instead of the individual sensor readings, an assessment of the measurement performance should consider the sensor network as a complex (often distributed) measuring instrument. The metrological treatment of such sensor networks thus requires a novel approach, called "systems metrology". This approach includes the novel field of "sensor network metrology", which itself contains aspects such as in-situ calibration and co-calibration, uncertainty evaluation for dynamic measurements and dynamically structured systems, semantic representation of metrological information, uncertainty-aware machine learning, and explainable artificial intelligence applied to sensor networks. Most of these topics are still at a stage of early research and prototypical solutions.

This contribution introduces sensor network metrology aspects which were addressed in recent research projects. We outline the fundamental sensor network metrology aspects and discuss their combination into a coherent and consistent approach for a metrological treatment of sensor networks. Section 2 introduces general aspects of Internet of Things (IoT) types of sensor networks from the viewpoint of metrology. Section 3 addresses the digital representation of data and metrological information in sensor networks. Section 4 presents and discusses the relevant uncertainty evaluation and propagation for the processing of sensor network data. Section 5 discusses aspects related to the application of machine learning and artificial intelligence. Finally, Section 6 addresses the overall picture and gives an outlook on future developments.

2. Metrology and the Internet of Things

In the concept of the Internet of Things (IoT), physical devices communicate with each other via web technologies, thus combining web technology with the physical world. With the rise of the IoT in Industry 4.0, smart cities, smart grids and more, the world of measurement is changing rapidly. As an example, the integration of measuring instruments into the IoT poses several specific requirements for the sensors themselves, such as communicating via a digital interface, working reliably under a wide range of conditions, reporting on their health status upon request and, ideally, detecting and reporting adversarial conditions. These and other requirements have led to the development of so-called "smart sensors" [2]. These are measuring devices that contain some sort of pre-processing implementing the above features of the IoT. As the name "smart sensor" implies, the pre-processing is integrated together with the physical sensing element in one "package". However, this kind of pre-processing poses new requirements for the calibration of the measuring device, because the pre-processing has to be taken into account in the calibration. Furthermore, measuring instruments which only provide pre-processed data usually do not fit well into today's calibration procedures and guidelines, because these assume access to the raw measurement data.
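As a toy illustration of why embedded pre-processing changes the calibration task, consider a device that only ever exposes filtered values. The class below is hypothetical; it merely shows that the filter state becomes part of the instrument, so a calibration would have to characterise the transducer and the moving-average filter as one unit.

```python
import random
from collections import deque

class SmartSensor:
    """Hypothetical 'smart sensor': only pre-processed data leave the package."""

    def __init__(self, raw_read, window=8):
        self._raw_read = raw_read          # physical sensing element
        self._buf = deque(maxlen=window)   # embedded pre-processing state

    def read(self):
        self._buf.append(self._raw_read())
        return sum(self._buf) / len(self._buf)   # moving average, never raw data

sensor = SmartSensor(lambda: 20.0 + random.gauss(0.0, 0.5))
values = [sensor.read() for _ in range(10)]   # dynamic behaviour depends on the filter
```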
A concrete challenge addressed in the project EMPIR Met4FoF (https://www.ptb.de/empir2018/met4fof/home/) was the dynamic calibration of a digital-output sensor using external time stamping, e.g., based on GPS and a custom-built microcontroller (µC) board [5]. The same approach was then used to demonstrate the extension of a digital sensor such that it communicates not only raw measurement values but also provides information about the measurement units, uncertainty, and calibration in a machine-readable way [5]-[7]. In this way, the most basic requirement of a metrological treatment of a sensor network can be met: the provision of measurement uncertainty and other metrological information for the individual sensors. In the concept developed in the Met4FoF project, this information is provided by the "smart" sensor itself. However, other information architectures are possible, too. For instance, in the BMBF FAMOUS project, a database approach combined with OPC-UA communication was considered instead. A similar approach is also considered in the project BMWK GEMIMEG-II. More details are given in Sections 3 and 4.

The concept of sensors providing self-information upon request in a standardised way is also a fundamental element of OPC-UA (the industrial interoperability standard, https://opcfoundation.org/), which is used mostly in industrial applications but is increasingly adopted in other areas, too. For the metrological information communicated via OPC-UA to be machine-readable, it is necessary that a standard digital representation of units of measurement as well as commonly accepted data models for measured values are available. To this end, the digital SI (D-SI) data model developed in EMPIR SmartCom proposes an approach that is compatible with current guidelines and standards in metrology and calibration [6]. Other potential approaches for the digital representation of units of measurement and quantity kinds are the "Unified Code for Units of Measure" (UCUM, https://ucum.org/) or an ontology for units of measure such as QUDT [8], [9] – each optimised for different data usage approaches. In principle, combinations of these approaches are also possible.

The concept of the IoT relies on a versatile and flexible combination of measuring instruments, the automated acquisition and processing of the measured data, and the application of intelligent algorithms to derive conclusions or decisions. One consequence of this is that data analysis is typically carried out using data-driven machine learning. In contrast to mathematical models that rely on a physical understanding of the measured process, machine learning can be applied directly to the sensors' output data. Thus, the need for calibration is not as obvious as for "traditional" measurements. However, calibrated measuring instruments in the IoT offer several benefits. For instance, calibrated sensors can serve as reference devices in the network to assess and improve data quality [10]; calibration of sensors enables the estimation of the measurand and thus traceability [7], which itself is required to ensure the comparability of measurements between different sites and countries. Moreover, calibrated sensors improve the ability to explain the output obtained from the machine learning. That is, the calibration of a sensor enables direct interpretation in terms of the measurand, whereas a non-calibrated sensor provides data streams which are only loosely related to the physically measured quantity. Moreover, the manufacturer's data sheet alone usually does not suffice as a source of information to assess the type B uncertainty components. Hence, calibration plays an important role in the IoT and provides benefits on all data processing layers, as well as a way to quantify the trust one can have in the measurement system (see Figure 1).

3. Digital representation of data and information in sensor networks

In the digital world, measurement data must also be readable and understandable by machines. This implies that the information about the measuring instruments, the units of measurement and other accompanying metadata must be available in a format that can be used by software or an algorithm. For instance, the software may need to verify that the unit of measurement of a given data set is consistent with previous entries of a data base. The machine readability of data and measurement information is of particular importance in sensor network metrology. With hundreds of sensors measuring continuously, measurement data cannot be normalised and analysed manually, but require a high level of automation. This, in turn, can only be achieved with machine-readable information.

Machine readability begins with the description of the unit of measurement, for instance as shown in Figure 2. In the project EMPIR 17IND02 SmartCom, a data model and digital representation of units of measurement, called the D-SI, have been proposed. At the CIPM level, the D-SI as well as UCUM, QUDT and other approaches to the digital representation of quantities and units of measurement are being discussed. Another important element is the information about the individual measuring instruments. For instance, a machine-readable digital calibration certificate [11] could be provided by the sensor itself, e.g., using OPC-UA, or from another source, e.g., an internal quality management platform. If needed, this information could be further extended by other data which could affect the quality of the measurement data [12].

A machine-understandable representation of knowledge and information in sensor networks and its semantics can be modelled by means of (combinations of) ontologies. Several ontologies useful for sensor networks can be found within the semantic web community. These ontologies formalise the annotation of sensor data with spatial, temporal, and thematic metadata. Spatial metadata is particularly relevant for sensors distributed across a building or a country, or when mounted on a moving object like an automobile. In the project FAMOUS, a method to merge different kinds of metadata and ontologies along with the sensor measurement data was proposed [13]. The main idea of [13] is to split the self-description of a sensor into four aspects: (1) observation information, (2) general sensor description, (3) calibration information and (4) location information. A sensor self-description can then be achieved by combining existing ontologies that appropriately represent these aspects. In this way, one can build upon existing work and established principles and software tools. In the GEMIMEG-II project, this development was extended to the integration of semantically described quality-of-data aspects [12].
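To make the idea of machine-readable measurement information concrete, the snippet below sketches what a single measured value with unit, uncertainty, and calibration reference might look like. The field names are hypothetical and only loosely inspired by the D-SI idea; they are not the normative SmartCom schema.

```python
import json

# Hypothetical, D-SI-inspired record for one measured value.
reading = {
    "quantity": "acceleration",
    "value": 9.812,
    "unit": "\\metre\\second\\tothe{-2}",   # D-SI-style unit string (illustrative)
    "uncertainty": {"standard": 0.004, "coverageFactor": 2.0},
    "timestamp": "2023-03-01T10:15:30Z",
    "calibration": {"certificateId": "DCC-example-0001"},  # link to a DCC
}

# Machine readability means software can check such records automatically,
# e.g. that the unit of a new data set matches previous data base entries.
record = json.loads(json.dumps(reading))
assert record["unit"] == "\\metre\\second\\tothe{-2}", "inconsistent unit of measurement"
```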
4. Measurement uncertainty in sensor network data processing

Measurement data in sensor networks are often heterogeneous, volatile, and time dependent. Moreover, sensor networks in IoT scenarios often contain low-cost measuring instruments based on MEMS sensor technology. Consequently, such sensor networks typically exhibit a wide range of measurement data quality. Reliable data analysis for sensor networks thus requires taking data quality into account in a quantitative way.

Figure 1. Calibration information in the different layers of the IoT architecture.
Figure 2. Example of an algorithmic representation of units of measurement which can be used by software.

An example of a fundamental property that can be considered as a quality metric is the measurement uncertainty. Other examples of parameters which may affect the quality of data in sensor networks are unstable network conditions, environmental interference, or malicious attacks. Moreover, in battery-powered sensors, there is necessarily a trade-off between power consumption and performance. Another common issue is drift, where sensor readings slowly deviate from the true value due to the degradation of the electronics [14]. One outcome of the GEMIMEG-II project is a framework for the quality of data in sensor networks, expressed in terms of an ontology [12]. In a joint effort by the projects FAMOUS and EMPIR Met4FoF, it was demonstrated how an ontology of a sensor network can be utilised for an automated analysis [7], [13]. Moreover, together with the project EMPIR SmartCom, a sensor network data set was enriched with machine-readable metadata to demonstrate the metrological support of the FAIR principles [15].

Usually, the sensors used in IoT applications measure continuously, irrespective of how the measured values are used. Thus, for a reliable quantification of data quality, it must be ensured that the sensor behaviour is well known for a wide range of measurement situations. This includes situations where the measurand, i.e., the sensor input signal, changes rapidly over time. Sensor properties such as effective bandwidth, internal analogue-to-digital conversion, time-stamping reliability and resonance behaviour therefore need to be considered.

Data analysis in the IoT typically relies on, and greatly benefits from, modern machine learning methods because of the complexity of the sensor network and the amount of data acquired. Uncertainty evaluation for machine learning is an important topic and is considered in several research activities. However, this is only possible if the uncertainty associated with the machine learning input values is available. Hence, uncertainty for data pre-processing must be addressed as the initial step towards an uncertainty-aware machine learning for the IoT. Measurements in the IoT are usually time dependent, and often even dynamic. Examples are air quality monitoring, traffic surveillance, production control or mobile health measurements. Thus, signal processing methods are regularly applied for data pre-processing in IoT scenarios. For instance, the discrete Fourier transform is often applied to extract magnitude and phase values from a measurement of vibration, which are then used in a subsequent machine learning method as features.
Other examples of pre-processing are the synchronisation of the time axes of sensors; the interpolation of sensor data to account for missing values or non-equidistant sampling; and low-pass filtering to reduce noise or other unwanted high-frequency components in the measured data. Another reason for the application of data pre-processing is the reduction of data dimensionality. This may be necessary simply due to storage or data transfer bandwidth limitations [3].

In the project EMPIR Met4FoF, the previously developed Python library PyDynamic [16] was extended to include the data pre-processing steps typically required in the IoT. For each method, PyDynamic provides the propagation of uncertainties [16]. An important aspect in EMPIR Met4FoF and in FAMOUS was also the implementation of the methods in such a way that they can be applied online, i.e., during the measurement. For instance, the discrete wavelet transform for uncertain input data was implemented using digital filters [17].

Another important aspect is the way in which the uncertainty propagation software is provided such that it is compatible with typical IoT architectures. In the project EMPIR Met4FoF, a so-called agent-based framework (ABF) was used. In an ABF, data processing steps are encapsulated in software modules called "agents". These agents can run at different locations in the network, if necessary, and allow for a very flexible, demand-driven data analysis. For instance, one agent may acquire the data from a sensor and hand it over to an interpolation agent, which then provides it to a Fourier transform agent. With each agent taking care of the proper uncertainty treatment, very flexible data analysis pipelines for sensor network metrology can be realised. Usually, there is an existing data analysis framework in place, which needs to be extended to include measurement uncertainty treatment. For this reason, a web-service approach was used in the project FAMOUS instead of an ABF. That is, an uncertainty module was created to enrich existing sensor data streams with statements about the associated measurement uncertainty.
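As a small illustration of the uncertainty-aware pre-processing described above, the sketch below propagates i.i.d. sample noise through the discrete Fourier transform with PyDynamic [16]. The call follows PyDynamic's documented GUM_DFT interface; the signal and uncertainty values are invented for the example.

```python
import numpy as np
from PyDynamic.uncertainty.propagate_DFT import GUM_DFT

# Simulated vibration measurement: 50 Hz component sampled at 1 kHz
fs, n = 1000, 512
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 50 * t) + 0.01 * np.random.randn(n)
ux = 0.01 ** 2   # variance of the i.i.d. measurement noise per sample

# GUM-compliant propagation through the DFT: X contains the real and
# imaginary parts of the spectrum, UX the associated uncertainties
X, UX = GUM_DFT(x, ux)
```

The magnitude and phase values derived from X, together with UX, can then feed an uncertainty-aware machine learning step.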
5. Flow of metrological data and metadata in sensor networks

To summarise the different elements described in the previous sections, let us consider the flow of metrological data and metadata in sensor networks, see Figure 3. For the individual sensors, we assume the availability of information about their metrological properties. At the very least, this information should be available in terms of a manufacturer's data sheet from which information about measurement capabilities, units of measurement and tolerances can be extracted manually. Ideally, the information is provided in a digital, machine-readable way. For instance, the sensor may communicate metadata using an IoT standard, such as OPC-UA or RAMI 4.0, or provide a DCC. This metadata could contain fundamental information about the sensor: serial number, units of measurement, measurement uncertainty, calibration information. For an automated handling of this information, the metadata itself has to be machine-actionable. That is, units of measurement have to be given using a suitable data model (e.g., D-SI, UCUM, QUDT). A representation of this information in accordance with the FAIR principles would furthermore require the use of some kind of persistent identifiers (PIDs) to ensure machine-interpretable interoperability with other data models.

The metadata at the sensor level could also contain information on the quality of sensing. That is, the sensor could be self-aware or be complemented with other sensor data. For instance, a radar sensor in an autonomous vehicle may be complemented with a rain and fog detector to enrich the radar sensor data with metadata about the weather conditions. Other potentially useful metadata could be whether the sensor is battery powered, general energy restrictions, or the measurement strategy (e.g., raw data or averaged data). Depending on the sensor network use case, this information can be crucial for the metrological assessment of the measuring system's quality and reliability.

The sensor metadata has to be made available throughout the data lifecycle in the sensor network to enable its use in data processing and decision making. The first steps in the processing of sensor data, data curation and data aggregation, are often already the place where the sensor metadata is lost. However, the quality of sensing (e.g., measurement uncertainty) must be propagated to ensure a proper assessment of the overall sensor network quality; the individual sensor quality alone is not sufficient for this assessment. The data lifecycle metadata can be enriched in the data processing with information about the applied algorithms, their parameters/settings and other information related to the reliability of the processing result. Moreover, the quality of sensing has to be translated and propagated into a quality of data following the data processing steps. With this in place, proper, reliable, data-driven decision making is possible that is not only based on the raw sensor data but takes all relevant metadata and information into account. Moreover, it enables traceability from the quality of the decision making back to the quality of sensing, as well as traceability of sensor network results to the SI units of measurement.

Figure 3. Flow of data and metadata from individual sensors (left) to the final decision making (right). With proper data treatment in place, traceability to the SI is possible from right to left.
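The sketch below shows one way such lifecycle metadata could be carried along the processing chain instead of being lost at the curation or aggregation step. The record layout and function are hypothetical and serve only to illustrate the principle.

```python
def apply_step(record, func, step_name, **params):
    """Apply one processing step while carrying the metadata forward
    and appending provenance information (hypothetical record layout)."""
    out = dict(record)                     # keep the sensor-level metadata
    out["values"] = func(record["values"])
    out.setdefault("provenance", []).append({"step": step_name, "params": params})
    return out

raw = {"sensor_id": "acc-01", "unit": "m/s2", "values": [0.1, None, 0.4, 0.2]}
curated = apply_step(raw, lambda v: [x for x in v if x is not None], "curation")
scaled = apply_step(curated, lambda v: [10.0 * x for x in v], "scaling", gain=10.0)
# scaled["provenance"] now documents every algorithm applied to the data,
# so the quality of a decision can be traced back to the quality of sensing.
```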
6. Conclusions

Sensor network metrology combines several aspects of metrology, signal processing, semantics, IoT and web technologies. The treatment and metrological assessment of sensor networks thus needs to take these fields into account. Although sensor networks can be found in many applications, a rigorous sensor network metrology has not been established yet. Existing guidelines in metrology are typically focused on individual measuring instruments and quantities. The same holds true, incidentally, for the organisation of metrology institutes and calibration laboratories. First metrology research efforts have developed some basic elements required in sensor network metrology: dynamic calibration of digital sensors, cost-efficient calibration of MEMS sensors, digital representation of metrological metadata, evaluation and propagation of uncertainties, and semantic modelling of sensor network information. Future research needs to further develop these individual aspects and extend their integration into a consistent framework and toolset. Moreover, a systems metrology approach needs to be developed to assess sensor networks in a systemic way.

Acknowledgement

Parts of this work were based on outcomes of the projects 17IND12 Met4FoF, 17IND02 SmartCom, BMBF FAMOUS and the BMWK GEMIMEG-II project. The 17IND12 Met4FoF and 17IND02 SmartCom projects have received funding from the EMPIR programme co-financed by the Participating States and from the European Union's Horizon 2020 research and innovation programme. The FAMOUS project received funding from the German Federal Ministry of Education and Research (BMBF). The GEMIMEG-II project received funding from the German Federal Ministry of Economy and Climate (BMWK).

References
[1] B. Jeckelmann, R. Edelmaier, The Metrological Infrastructure, De Gruyter Oldenbourg, 2023. ISBN 9783110715682
[2] T. Schneider, N. Helwig, A. Schütze, Industrial condition monitoring with smart sensors using automated feature extraction and selection, Meas. Sci. Technol. 29(4) (2018), art. no. 094002. DOI: 10.1088/1361-6501/aad1d4
[3] T. Dorst, T. Schneider, A. Schütze, S. Eichstädt, GUM2ALA – Uncertainty propagation algorithm for the adaptive linear approximation according to the GUM, in: SMSI 2021 – System of Units and Metrological Infrastructure. DOI: 10.5162/smsi2021/d1.1
[4] E. Balestrieri, L. De Vito, F. Lamonaca, F. Picariello, S. Rapuano, I. Tudosa, Research challenges in measurement for Internet of Things systems, Acta IMEKO 7 (2018) 4, pp. 82-94. DOI: 10.21014/acta_imeko.v7i4.675
[5] B. Seeger, T. Bruns, Primary calibration of mechanical sensors with digital output for dynamic applications, Acta IMEKO 10 (2021) 3, pp. 177-184. DOI: 10.21014/acta_imeko.v10i3.1075
[6] D. Hutzschenreuter et al., SmartCom Digital System of Units (D-SI) – Guide for the use of the metadata format used in metrology for the easy-to-use, safe, harmonised and unambiguous digital transfer of metrological data, second edition, 2020. DOI: 10.5281/zenodo.3816686
[7] S. Eichstädt, M. Gruber, A. P. Vedurmudi, B. Seeger, T. Bruns, G. Kok, Toward smart traceability for digital sensors and the industrial Internet of Things, Sensors 21(6) (2021). DOI: 10.3390/s21062019
[8] G. Schadow, C. J. McDonald, The Unified Code for Units of Measure, Regenstrief Institute and UCUM Organization, Indianapolis, IN, USA, 2009. Online [accessed 16 March 2023]: https://ucum.org/ucum
[9] H. Rijgersberg, M. van Assem, J. Top, Ontology of units of measure and related concepts, Semantic Web 4(1) (2013), pp. 3-13. DOI: 10.3233/sw-2012-0069
[10] G. Tancev, F. Grasso Toro, Sequential recalibration of wireless sensor networks with (stochastic) gradient descent and mobile references, Measurement: Sensors 18 (2021), art. no. 100115. DOI: 10.1016/j.measen.2021.100115
[11] S. Hackel, F. Härtig, Th. Schrader, A. Scheibner, J. Loewe, L. Doering, B. Gloger, J. Jagieniak, D. Hutzschenreuter, G. Söylev-Öktem, The fundamental architecture of the DCC, Measurement: Sensors 18 (2021), art. no. 100354. DOI: 10.1016/j.measen.2021.100354
[12] A. P. Vedurmudi, J. Neumann, M. Gruber, S. Eichstädt, Semantic description of quality of data in sensor networks, Sensors 21(19) (2021), art. no. 6462. DOI: 10.3390/s21196462
[13] M. Gruber, S. Eichstädt, J. Neumann, A. Paschke, Semantic information in sensor networks: how to combine existing ontologies, vocabularies and data schemes to fit a metrology use case, Proc. of the IEEE Int. Workshop on Metrology for Industry 4.0 & IoT 2020, Roma, Italy, 3-5 June 2020, pp. 469-473. DOI: 10.1109/metroind4.0iot48571.2020.9138282
[14] K. Goebel, W. Yan, Correcting sensor drift and intermittency faults with data fusion and automated learning, IEEE Systems Journal 2(2) (2008), pp. 189-197. DOI: 10.1109/jsyst.2008.925262
[15] T. Dorst, M. Gruber, A. P. Vedurmudi, Sensor data set of one electromechanical cylinder at ZeMA testbed (ZeMA DAQ and Smart-Up Unit), Zenodo. DOI: 10.5281/zenodo.5185952
[16] S. Eichstädt, C. Elster, I. M. Smith, T. J. Esward, Evaluation of dynamic measurement uncertainty – an open-source software package to bridge theory and practice, Journal of Sensors and Sensor Systems 6 (2017), pp. 97-105. DOI: 10.5194/jsss-6-97-2017
[17] M. Gruber, T. Dorst, A. Schütze, S. Eichstädt, C. Elster, Discrete wavelet transform on uncertain data: efficient online implementation for practical applications, in: Metrology and Testing XII, Series on Advances in Mathematics for Applied Sciences 90 (2022), World Scientific Publishing Co, Singapore, pp. 249-261. ISBN 978-981-124-237-3 (hardcover), 978-981-124-239-7 (ebook). DOI: 10.1142/9789811242380_0014

The improved automatic control points computation for the acoustic noise level audits
Acta IMEKO, ISSN: 2221-870X, September 2021, Volume 10, Number 3, 142-149

Tomáš Drábek1, Jan Holub1
1 Czech Technical University in Prague, Faculty of Electrical Engineering, Dept. of Measurement, Technicka 2, 166 27 Prague 6, Czechia

Section: Research paper
Keywords: noise; measurement; algorithms; automation; determination of control points
Citation: Tomáš Drábek, Jan Holub, The improved automatic control points computation for the acoustic noise level audits, Acta IMEKO, vol. 10, no. 3, article 15, September 2021, identifier: IMEKO-ACTA-10 (2021)-03-15
Section Editor: Lorenzo Ciani, University of Florence, Italy
Received February 7, 2021; in final form August 6, 2021; published September 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Tomáš Drábek, e-mail: tomas.drabek@fel.cvut.cz

Abstract: The acoustic noise level in the interior is one of the quantities specified by a standard and is subject to audits to ensure a comfortable living environment. Currently, the noise level audits are performed manually by a skilled operator, who evaluates the floor plan and uses it to calculate the control point locations at which the measurement is performed. It is proposed to automate the audit by formulating an optimisation problem for which an algorithm was designed. The algorithm computes a solution that satisfies all constraints specified in the standard, for example, the minimum distance among the control points and from fixed obstacles (walls or columns). In the proposed optimisation problem, the fitness function was designed based on the measurement purpose, and two typical use cases were analysed: (i) long-term stationary noise measurement and (ii) recurring short-term noise measurement. Although the sets of control points for both use cases comply with the given standard, it is beneficial to distinguish the locations of control points based on the measurement purpose. The number of control points is maximised for the stationary noise, and the immediate coverage area is maximised for the short-term noise. The proposed algorithms were tested in a simulation for several floor plans of different complexity.

1. Introduction

The acoustic noise level is measured in human-occupied buildings to ensure comfortable living conditions. As described in [1], temperature, humidity, and CO2 concentration are the usually monitored indoor quantities. Measurement of the acoustic noise level, together with the mentioned quantities, can be used to improve the quality of indoor living.
The noise measurement process is described in national standards that specify, for example, restrictions for placing control points, the duration of the individual measurements, or measurement device specifications. National supervisory authorities or private companies use these standards to determine noise levels both indoors and outdoors; based on the measured values, they provide final recommendations. The whole process of measuring noise is time-consuming. Therefore, it is advisable to automate at least some steps of the process [2].

This paper aims to design automatic algorithms for placing the control points into a given room while obeying all constraints set by the standard. The measurement of the noise level is then performed at these control points. The international and national standards [3], [4], for which the proposed method is designed, do not distinguish different measurement purposes and specify only general rules for the locations. In this article, two measurement purposes are distinguished: (i) living condition verification for a long-term stationary noise and (ii) living condition verification for a short-term recurring noise. For the former, the algorithm can plan the resource allocation, as the noise is present continuously; the algorithm therefore aims at covering the room with as many control points as possible by the end of the measurement. For the latter, the resource allocation cannot be planned, as the period of the noise signal is not known a priori; the proposed algorithm places the control points so as to cover the maximum area in the limited time, in this case at each iteration. These two different criteria result in different control point placing strategies, as depicted in Figure 1.

The current measurement procedure is performed by measuring noise levels in a network of control points. The control points are distributed around the room based on the restrictions given by the standard, and a qualified operator determines their density and positions.
The distance of adjacent control points must be no less than 0.7 m, and at least one control point must be located in a corner. In addition, all points must be at least 0.5 m away from the walls and at least 1 m away from significantly sound-transmitting elements such as windows or entrance openings for air supply. Windows and doors must be closed during the measurement. The location of the measuring points defined in this way is determined manually by the operator according to the room's dimensions. Noise measurements are then made at these points at a height between 1.2 m and 1.5 m from the ground. The measuring instrument is directed towards the source of the incoming noise, or vertically upwards if the direction of the noise source is not defined. A certified sound level meter is used as the measuring instrument.

Several articles have been published on measuring and reducing noise. Some publications focus on noise caused by equipment (aircraft, cars, turbines). The article [5] dealt with the possibility of measuring aircraft noise using linear microphone arrays; using this method, an undistorted record of aircraft noise was achieved. The article [6] also deals with determining aircraft noise and separating this noise from background noise. In contrast to previous research, the authors use neural networks to recognise only aircraft noise; the accuracy converged to 99.84 %. The publication [7] deals with the measurement of noise from landing aircraft that use thrust reversers to slow down after landing at Madrid-Barajas airport. The paper presents possible improvements for detecting the noise from the reverse thrust and the direction of the incoming noise. Another sector where noise is measured is the car industry. The article [8] presents and verifies an application of statistical energy analysis (SEA). The application is used for 3D modelling of noise reduction in the interior of a car from the drivetrain; part of the work is also a proposal of measures to reduce noise. Wind turbines are a separate branch of noise measurement. The standards specify, for example, noise measurement methods, requirements for measuring instruments, and evaluation. Standard [9] provides overall wind turbine noise measurement standards, whereas standard [10] focuses more on the aeroacoustic noise of wind turbines. The article [11], for example, deals with noise measurement in the interior of buildings close to wind farms. Another approach to noise measurement involves citizens and uses their smart devices to monitor noise in their immediate vicinity. Such a measurement was addressed in a study [12] which, using this method, created spatial and temporal maps of noise. Noise measurements are also often performed indoors. In [13], the interior noise reduction index (NRI) was determined. The article deals with the definition of the index for NRIs with open windows during the summer months; a theoretical model was created and compared with experimentally obtained data. Today, several software programs simulate the acoustic conditions in buildings. Based on a model of the rooms with specified noise sources, technicians can create a noise map. In [15], the authors simulated a noise map and used it to identify critical areas using reference measurements and the RAP-ONE software (Room Acoustics Prediction and Occupational Noise Exposure). Article [16] deals with the measurement of noise at the place of residence of 44 schoolchildren.
The measurements were performed in the children's rooms and in the room where the schoolchildren spent most of their time. Outdoor noise was also recorded during the measurement. The sound pressure level also affects the workplace and, for example, medical facilities where patients are treated. Article [17] deals with the measurement of noise around and inside a hospital. A total of 24 measurements were performed on the outer facade of the hospital and 21 measurements inside; the measured data showed that the set limit was exceeded. The difference between outdoor and indoor noise is also described in the article [18]. The study performed noise measurements inside and outside the buildings of 102 inhabitants, with open, closed or semi-open windows playing a significant role. The result of the study was a statistical model that can be used to estimate the sound exposure inside a building. Noise measurement using a robotic unit is becoming more common these days. In paper [19], a humanoid robotic unit was equipped with, among other sensors, a sound level meter. The humanoid measured values at 20 points in the room, and the robot evaluated the comfort of the room based on an interaction with a human operator and on the measured values. The article [20] on noise maps outside buildings also used an autonomous mobile robotic platform equipped with a sound level meter to measure noise. The values measured by the robot were compared with a model of known sources and with a manual measurement. In conclusion, it was stated that noise maps should be based on the application and the environment.

Figure 1. The control point locations (black dots) for stationary noise (top row) and short-term recurring noise (bottom row) measurement. The left figures show the first iteration of both processes, in which the control point is located in a corner of the polygon defined by a 1 m distance from the walls. The algorithm for stationary noise measurement (top) places control points step by step in order to fit as many control points as possible while not placing them closer than the minimum distance, visualised by the red circle. The short-term recurring noise algorithm, on the other hand, spreads points quickly, leading to broad coverage in a short time by minimising the objective function visualised by the contour plot. Therefore, the total number of control points that can fit into the room is smaller for short-term recurring noise.

All the publications mentioned above focused on the processing of the noise measurement data. The present paper, on the other hand, aims to show the possibility of automating the determination of control points for measuring indoor noise. The novel contribution of this article is the noise measurement methodology, consisting of the design and testing of both purposes mentioned above: living condition verification for a long-term stationary noise and living condition verification for a short-term recurring noise. The control points are determined so as to meet the conditions based on the standard. The algorithms for calculating control points in a room are based on the assumption that they have no prior information about the room's parameters besides the floor plan. Therefore, individual elements (windows, openings in the wall, etc.) cannot be identified. For this reason, the condition is set that the control points are located 1 m from the wall.
This condition is based on the requirement for the distance from significantly sound-transmitting elements; all the above distances satisfy this condition.

The article is divided into three parts. Section 2 describes the proposed noise measurement solution for the two considered purposes. Section 3 addresses the validation of the functionality of the proposed solution. Finally, the conclusions are summarised in Section 4.

2. Proposed approach

Indoor noise measurements are performed at control points. The set of control points is denoted by the symbol $\chi$ and the individual control points by the symbol $x$:

$\chi = \{x_1, x_2, \dots, x_n\}$ , (1)

where $n$ is the number of control points. First, the area where the measurements will be performed is defined. The area of the room is denoted as $P$, and the wall of the room (its boundary) is represented by the polygon $\partial P$. The control points are then located in the inner area $I$ defined as

$I = \{x \mid x \in P \wedge d(x, \partial P) \geq 1\}$ . (2)

All control points $x_n$ are located in the inner area $I$ of the room:

$x_n \in I$ . (3)

The standard specifies the minimum requirement for the mutual Euclidean distance of control points, which is 0.7 m:

$d(x_i, x_j) \geq 0.7$ , (4)

where $x_i$ and $x_j$ are control points, and the indices take the values

$i, j \in \{1, \dots, n\}, \quad i \neq j$ . (5)

According to the above standard, there should be at least one control point in a corner of the room. A corner is defined as a point at which the two lines representing the walls of the room make an angle below 180°. Corners between 60° and 120° have priority in the selection; if there is no corner in this range, the corner(s) with the angle nearest to 90° is selected. The above parameters are shown in Figure 2.

Figure 2. The walls of the room (black lines) represent the boundary of the polygon ∂P, where P represents the area of the room. The blue lines delimit the measuring area at a 1 m distance from ∂P. The coloured background of the room is used only in the algorithm for regular short-term noise, where it shows the distance from each already determined control point in the space: the background brightens at a control point and darkens with increasing distance. This illustration is used for regular short-term noise, where it is appropriate to find the point in the defined area farthest from the already specified control points. The coloured background is recalculated at each iteration.

Manually determining and setting control points is time-consuming; hence, it was decided to automate this process for two specific noise measurement purposes. The first is a long-term stationary noise, and the second is a regular short-term noise. Software algorithms were created for both of these purposes.

2.1. Long-term stationary noise

Long-term stationary noise is caused, for example, by the operation of a factory in the neighbourhood or of a construction site. At the measuring point, the measurement runs for the necessary time to ensure a sufficiently long and high-quality noise recording, after which the measurement at the given control point is declared conclusive. After this interval, the measuring apparatus is moved to a new control point for further recording. The proposed algorithm searches for the maximum set of control points according to the specified parameters for maximum noise capture in the given space. We are looking for a maximal set that meets the above conditions:

$\chi^* = \max |\chi|$ , (6)

where $\chi^*$ denotes the maximal set of control points in a given room and $|\chi|$ is the number of control points in the set $\chi$. The entire algorithm was designed to obtain the maximum number of control points in the area $I$; data from the maximum number of control points provide a more accurate map of the noise levels in the space. The algorithm works by creating a list of all corners in the measured area $I$ (Equation (2)). It then passes through the individual corners and determines the two intersections at a distance of 0.7 m from the given corner with $\partial I$. Thus, the algorithm obtains at least two initial variants for each corner. The program uses a recursive algorithm for each initial variant and creates a list of control points for each such recursion by following these steps:
1) From the already determined points, it draws circles with a radius of 0.7 m. The intersections with other circles give new control points.
2) If no such intersection exists but there is an intersection with $\partial I$, it places a control point at this intersection.
3) If the algorithm does not find any new control point in an iteration, it terminates the recursion and saves the total number (list) of found control points for the given variant.
The algorithm finally returns the best result, i.e., the maximal list of control points; a simplified sketch of this construction is given below.
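The sketch scans a fine candidate grid for a rectangular room instead of constructing the exact circle–circle and circle–boundary intersections, and it performs a single greedy pass rather than exploring all corner variants as the recursion described above does; all names and parameters are illustrative.

```python
import itertools
import math

def greedy_control_points(width, height, margin=1.0, dmin=0.7, step=0.05):
    """Greedily pack control points into the inner area I of a width x height
    room: keep every grid candidate at least dmin away from all chosen points."""
    xs = [margin + i * step for i in range(int((width - 2 * margin) / step) + 1)]
    ys = [margin + j * step for j in range(int((height - 2 * margin) / step) + 1)]
    chosen = [(xs[0], ys[0])]                 # start in a corner of I
    for c in itertools.product(xs, ys):
        if all(math.dist(c, p) >= dmin for p in chosen):
            chosen.append(c)
    return chosen

points = greedy_control_points(4.0, 4.0)      # the 4.0 m x 4.0 m room of Section 3
print(len(points))
```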
2.2. Short-term recurring noise

This is a noise caused, for example, by public transport (trams, buses) at the place of residence, workplace or school. Measurements are made at a given control point until the noise level exceeds a specified value. Subsequently, the measuring apparatus is moved to a new measuring point, and the entire process is repeated. Since the number of occurrences exceeding the specified noise level is not known, the algorithm looks for the best distribution of control points at each iteration in order to cover as much of the room as possible. The point at the maximum distance from the already determined control points in the inner area $I$ of the room is sought (a minimal code sketch of this selection rule is given at the end of this subsection):

$x_n \in \arg\max_{x \in I} \min_{i \in \{1, \dots, n-1\}} d(x, x_i)$ , (7)

where $n$ is the iteration number and the $x_i$ are the control point positions selected in previous iterations. The algorithm first determines the list of corners, and one of these corners is selected randomly. The algorithm then divides the floor plan of the room into individual triangles. In each triangle, the local control point farthest from the previously determined control points is calculated. Subsequently, these locally optimal solutions are compared to select the farthest point. This point is selected as the global optimum and is used for the noise level measurement. The entire process is shown in Figure 3; it is repeated as long as all of the following conditions hold:
1) There is at least one local point that satisfies the conditions specified in the standard (minimum distance from the walls and the other control points).
2) The noise measurement does not exceed the specified level at the measured control point.
3) The time interval for the measurement has not expired.
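The minimal sketch of the farthest-point selection in Equation (7), again using a grid of candidates inside I for a rectangular room; the candidate generation and all names are illustrative, and the triangle decomposition used in the paper is replaced by a direct search over the grid.

```python
import math

def next_control_point(candidates, chosen):
    """Equation (7): among the admissible candidates in I, pick the point whose
    distance to the nearest already-chosen control point is largest."""
    return max(candidates, key=lambda c: min(math.dist(c, p) for p in chosen))

# Candidate grid over the inner area I of a 4 m x 4 m room (1 m margin)
grid = [(1 + 0.05 * i, 1 + 0.05 * j) for i in range(41) for j in range(41)]
points = [(1.0, 1.0)]                    # first control point in a corner of I
while True:
    nxt = next_control_point(grid, points)
    if min(math.dist(nxt, p) for p in points) < 0.7:
        break                            # the standard's minimum distance is violated
    points.append(nxt)
```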
3. Experiments

In order to verify the proposed algorithms for both purposes of measuring noise intensity, experiments were prepared to test them. This section is organised as follows: first, the experimental environments are presented, describing the rooms that were used in the verification; second, the control points are analysed for both algorithms; and third, the measurement time is analysed for both algorithms. The presented quantitative results demonstrate the performance of the proposed methods.

In the first experiment, the basic accuracy of the calculations and of the determination of control points was verified in a simple room layout for both algorithms. The size of the room was 4.0 m × 4.0 m. Each experiment is divided into two simulations: one for long-term and one for short-term noise. The simulations for the first experiment are shown in Figure 4. The second experiment focused on the simulation of a more diverse room: the dimensions of a real room were measured and incorporated into the simulation. The results of the control point calculation for both algorithms are shown in Figure 5. The third experiment focused on verifying the robustness of the algorithms under more demanding conditions; an extensive segmented room was created, in which the algorithms determined the control points (Figure 6). The next experiments focused on rooms with internal obstacles, which in real conditions represent columns, partitions, etc. First, the algorithms were tested on simple rooms with one internal column in the area where the control points are determined (Figure 7). In accordance with the standard, a 1 m zone was defined around each fixed obstacle, in which no control points for noise measurement are placed. In the last experiment, the algorithms were tested with multiple obstacles (Figure 8).

Figure 3. The algorithm divides the room into triangles and, in each, calculates the local optima (farthest points) from the already determined control points. Subsequently, the farthest point is selected from this set of local optima and determined as the global optimum for the given iteration – the green point. At each iteration, a new background contour is calculated, which displays the distance from the control points in colour.

Figure 4. The top four images show the first algorithm, which determines the control points for long-term stationary noise. The algorithm gradually adds control points where the 0.7 m boundary intersects the other boundaries (red circles). The lower four images were created by the algorithm for regular short-term noise, which focuses on covering the area as much as possible at each iteration.

4. Result analysis

This section presents the results of both algorithms, focusing on the number of control points and the time required to determine them. Experiments were performed for simple, real, complex and built-up spaces. Both algorithms demonstrated functionality and robustness in the calculation of control points.

4.1. Control points analysis

An overview of the number of control points found by both algorithms in the individual experiments is reported in Table 1. Surprisingly, in the first experiment, the algorithm for long-term stationary noise found two control points fewer than the algorithm created for regular short-term noise. This anomaly occurred only in the first experiment, which involved an elementary room without a fragmented floor plan. Further testing showed that the algorithm for finding the maximum number of control points in the measuring area is not optimal and may not find the maximum number of control points for small rooms.
In the second experiment, the algorithm for long-term stationary noise discovered more control points than the second algorithm.

Figure 5. The room is characterised by a more complex area in which control points may occur. The results of the first algorithm are shown in the upper part: the figures show how the algorithm first determines the border control points in the marked area and then proceeds into the inner area of the room. The second algorithm created a network of control points, focusing on successively spaced control points throughout the area, as can be seen in the bottom figures.

Figure 6. To test both algorithms under demanding conditions, a large room of intricate design was created. Both algorithms proceeded as expected in determining the control points. The results show that the algorithm for long-term stationary noise determined significantly more control points than the algorithm for regular short-term noise.

Figure 7. Another experiment tested both algorithms in an environment containing a simple barrier in the form of a column. Around this obstacle, both algorithms, as expected, created an area in which no measurement is performed.

Figure 8. The last experiment focused on irregular, articulated obstacles located both outside and inside the measuring area. Both algorithms proved their robustness and delineated the measuring area correctly.

Table 1. The total number of control points and the computation time.

Room | Control points, long-term | Control points, short-term | Time [min], long-term | Time [min], short-term
Sq. room (Figure 4) | 11 | 13 | 0.05 | 0.50
Real room (Figure 5) | 44 | 33 | 0.50 | 4.13
Large room (Figure 6) | 360 | 224 | 6.20 | 245.82
Sq. room with column (Figure 7) | 69 | 47 | 0.33 | 8.37
Sq. room with obstacles (Figure 8) | 187 | 133 | 0.90 | 86.18

In the third experiment, the algorithm for long-term stationary noise determined 360 control points in the final result, whereas the algorithm for regular short-term noise identified 224 control points. Obstacles were inserted into the measured area in the fourth and fifth experiments. The difference in the number of control points for larger rooms shows that the first algorithm is indeed more suitable for long-term noise.

4.2. Estimating the measurement time

The algorithms differ, according to the measurement purpose, in the time they need to determine the control points. In all experiments, the time required to determine all the control points was recorded; it is presented in Table 1. The results show that the algorithm for the first purpose calculates all control points much faster than the algorithm for regular short-term noise. The most apparent difference between the two algorithms in the computation time of the control points appeared in the third and the fifth experiments. In actual measurements, however, this difference is not decisive. On the contrary, when measuring long-term stationary noise, a faster determination of the next control points is needed, as the measuring point changes immediately after the noise level has been recorded at the current point. For regular short-term noise, the algorithm has enough time for the calculation: the moment of the change of the measuring point is not known in advance, since the noise level at the given control point must first be exceeded. While waiting for the noise level to be exceeded, the algorithm can determine the next control points based on the dimensions of the room.
optionally, the algorithms can compute the control points offline before the measurement, based on a floor plan measured either manually or automatically using lidar.

5. conclusions

this paper aimed to create algorithms for two specific purposes of noise measurement. the first algorithm was developed to determine control points for long-term stationary noise, and it finds the maximum number of control points in the room. the second algorithm was created to determine control points for regular short-term noise. for this purpose, the program does not know the number of iterations of the measurement in advance; it therefore looks for the location of the next control point so that the measurements cover as large an area as possible. both algorithms were tested in different rooms during the experiments, from simple floor plans through large segmented rooms to rooms with obstacles. the simulations showed that the proposed repeatable algorithms satisfy the conditions set by the standard. the time required to determine all control points in the defined area was recorded during the simulations for both algorithms. the novelty of the noise measurement methodology consists in the design and testing of both algorithms, with the control points being determined based on the dimensions of the room and the purpose of the measurement. the control points were determined so as to meet the conditions based on the standard. if these algorithms are applied in the future to a mobile robotic unit carrying a measuring device, the measurement can take place entirely autonomously, without the presence of an operator. we therefore decided, as a next phase, to test both of our algorithms using a robotic unit that will first measure the floor plan by simultaneous localisation and mapping. the knowledge published in [20] can also be used to navigate the room with the robotic unit. the constructed map is used to compute the control points, and the robotic platform then measures at these points.

acknowledgement

this work was supported by the grant agency of the czech technical university in prague, grant no. sgs20/070/ohk3/1t/13.

references

[1] s. s. carlisle, the role of measurement in the development of industrial automation, acta imeko 3 (2014) 1, pp. 4-9. doi: 10.21014/acta_imeko.v3i1.190
[2] j. svatos, j. holub, t. pospisil, a measurement system for the long-term diagnostics of the thermal and technical properties of wooden houses, acta imeko 9 (2020) 2, pp. 3-10. doi: 10.21014/acta_imeko.v9i3.762
[3] international organization for standardization (iso), iso 1996-2:2017 acoustics - description, measurement and assessment of environmental noise - part 2: determination of sound pressure levels, iso, geneva, switzerland, 2017
[4] czech standards institute, 1996-2 acoustics - description, measurement and assessment of environmental noise - part 2: determination of sound pressure levels, czech standards institute, 2018
[5] m. genesca, j. romeu, t. pamies, a. sanchez, aircraft noise monitoring with linear microphone arrays, ieee aerospace and electronic systems magazine 25(1) (2010), pp. 14-18. doi: 10.1109/maes.2010.5442149
[6] j. w. pak, m. k. kim, convolutional neural network approach for aircraft noise detection, proc. of ieee 2019 international conference on artificial intelligence in information and communication (icaiic), okinawa, japan, 11-13 february 2019, pp. 430-434. doi: 10.1109/icaiic.2019.8669006
[7] c. asensio, m. ruiz, m. recuero, g. moschioni, m.
tarabini, a novel intelligent instrument for the detection and monitoring of thrust reverse noise at airports, acta imeko 4 (2015) 1, pp. 5-10. doi: 10.21014/acta_imeko.v4i1.154
[8] x. chen, d. wang, x. yu, z. d. ma, analysis and control of automotive interior noise from powertrain in high frequency, proc. of 2009 ieee intelligent vehicles symposium, xi'an, china, 3-5 june 2009, pp. 1334-1339. doi: 10.1109/ivs.2009.5164478
[9] international electrotechnical commission, iec 61400-11 wind turbines - part 11: acoustic noise measurement techniques, 3.1 edition, pp. 1-131, june 2018
[10] ieee instrumentation and measurement society, ieee standard for wind turbine aero acoustic noise measurement techniques, ieee std 2400-2016, pp. 1-24, 1 july 2016. doi: 10.1109/ieeestd.2016.7502056
[11] s. janhunen, a. grönman, k. hynynen, m. hujala, m. kuisma, p. härkönen, audibility of wind turbine noise indoors: evidence from mixed-method data, proc. of 2017 ieee 6th international conference on renewable energy research and applications (icrera), san diego, ca, usa, 5-8 november 2017, pp. 164-168. doi: 10.1109/icrera.2017.8191260
[12] g. graziuso, m. grimaldi, s. mancini, j. quartieri, c. guarnaccia, crowdsourcing data for the elaboration of noise maps: a methodological proposal, journal of physics: conference series 1603 (2020), art. 012030. doi: 10.1088/1742-6596/1603/1/012030
[13] c. buratti, indoor noise reduction index with open window, applied acoustics, 2002, pp. 431-451.
[14] c. guarnaccia, j. quartieri, a. ruggiero, acoustical noise study of a factory: indoor and outdoor simulations integration procedure, international journal of mechanics 8(1) (2014), pp. 298-306. online [accessed 13 august 2021] https://www.naun.org/main/naun/mechanics/2014/a162003-065.pdf
[15] s. pujol, m. berthillier, j. defrance, j. lardiès, r. petit, h. houot, j. p. levain, c. masselot, f. mauny, urban ambient outdoor and indoor noise exposure at home: a population-based study on schoolchildren, applied acoustics 73(8) (2012), pp. 741-750. doi: 10.1016/j.apacoust.2012.02.007
[16] p. h. t. zannin, f. ferraz, assessment of indoor and outdoor noise pollution at a university hospital based on acoustic measurements and noise mapping, open journal of acoustics 4(6) (2016), pp. 71-85. doi: 10.4236/oja.2016.64006
[17] b. locher, a. piquerez, m. habermacher, m. ragettli, m. röösli, m. brink, c. cajochen, d. vienneau, m. foraster, u. müller, j. m. wunderli, differences between outdoor and indoor sound levels for open, tilted, and closed windows, international journal of environmental research and public health 15(1) (2018), art. 149. doi: 10.3390/ijerph15010149
[18] m. bonomolo, p. ribino, c. lodato, g. vitale, post occupancy evaluation and environmental parameters monitoring by a humanoid robot, proc.
of 2019 ieee international conference on environment and electrical engineering and 2019 ieee industrial and commercial power systems europe (eeeic / i&cps europe), genova, italy, 11-14 june 2019, pp. 1-6. doi: 10.1109/eeeic.2019.8783688
[19] e. martinson, r. c. arkin, noise maps for acoustically sensitive navigation, proceedings volume 5609, mobile robots xvii, 2004, pp. 50-60. doi: 10.1117/12.581461
[20] d. fontanelli, m. david, r. tizar, a fast and low-cost vision-based line tracking measurement system for robotic vehicles, acta imeko 4 (2015) 2, pp. 90-99. doi: 10.21014/acta_imeko.v4i2.245
continuous measurement of stress levels in naturalistic settings using heart rate variability: an experience-sampling study driving a machine learning approach

acta imeko issn: 2221-870x december 2021, volume 10, number 4, 239 - 248
pietro cipresso1,2, silvia serino3, francesca borghesi1,2, gennaro tartarisco4, giuseppe riva2,3, giovanni pioggia4, andrea gaggioli2,3

1 department of psychology, university of turin, turin, italy
2 applied technology for neuro-psychology lab, istituto auxologico italiano, milan, italy
3 università cattolica del sacro cuore, milan, italy
4 institute for biomedical research and innovation (irib), national research council of italy (cnr), messina, italy

abstract: developing automatic methods to measure psychological stress in everyday life has become an important research challenge. here, we describe the design and implementation of a personalized mobile system for the detection of psychological stress episodes based on heart-rate variability (hrv) indices. the system's architecture consists of three main modules: a mobile acquisition module; an analysis-decision module; and a visualization-reporting module. once the stress level is calculated by the mobile system, the visualization-reporting module of the mobile application displays the current stress level of the user. we carried out an experience-sampling study, involving 15 participants, monitored longitudinally, for a total of 561 analyzed ecgs, to select the hrv features which best correlate with self-reported stress levels. drawing on these results, a personalized classification system is able to automatically detect stress events from those hrv features, after a training phase in which the system learns from the subjective responses given by the user. finally, the performance of the classification task was evaluated on the empirical dataset using a leave-one-out cross-validation process. preliminary findings suggest that incorporating self-reported psychological data in the system's knowledge base allows for a more accurate and personalized definition of the stress response measured by hrv indices.

section: research paper

keywords: psychological stress; psychophysiology; psychometrics; signal processing; assessment; experience sampling methods; heart rate variability

citation: pietro cipresso, silvia serino, francesca borghesi, gennaro tartarisco, giuseppe riva, giovanni pioggia, andrea gaggioli, continuous measurement of stress levels in naturalistic settings using heart rate variability: an experience-sampling study driving a machine learning approach, acta imeko, vol. 10, no. 4, article 36, december 2021, identifier: imeko-acta-10 (2021)-04-36

section editor: carlo carobbi, university of florence, gian marco revel, università politecnica delle marche and nicola giaquinto, politecnico di bari, italy

received october 11, 2021; in final form december 20, 2021; published december 2021

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

funding: this work was supported by the european funded project "interstress - interreality in the management and treatment of stress-related disorders", grant number: fp7-247685.

corresponding author: pietro cipresso, e-mail: p.cipresso@auxologico.it

1. introduction

it is well known that long-term exposure to stress can lead to immunodepression and dysregulation of the immune response, thus significantly enhancing the risk of contracting a disease or altering its course. however, increased symptomatology is associated not only with severe stressors (infrequent major life events), but also with minor daily stressors (i.e. "microstressors") that are ignored or poorly managed [1]-[4]. defining effective techniques to measure daily stressful episodes in ecological conditions has thus been identified as an important research objective. to address this challenge, several research groups have started investigating the use of wearable sensor solutions to infer stress from continuous biosignal measurements [5] (for a review, see [6]). such systems integrate sensors together with on-body signal conditioning and pre-elaboration, as well as the management of energy consumption and wireless communication. although preliminary testing of these systems has yielded encouraging results [7], a major limitation of current solutions is that they mostly rely on complex sensor architectures and use labelling methods that are often based on the evaluation of human coders [8].
other authors have proposed the measurement of heart rate variability (hrv) [9]-[14] as a potentially effective approach for monitoring stress in mobile settings [15]-[17]. hrv indices can indeed be used to estimate the activity of the autonomic nervous system (ans) in relation to affective and cognitive states, including mental stress [18]-[20]. however, real-time recognition of stress from hrv measures requires appropriate strategies to i) detect hrv changes using minimally invasive ecg equipment; ii) relate these changes to mental stress levels; and iii) control the potential confounding effects of physical activity. in the following, we describe how we addressed these issues in designing and implementing a personalized mobile system for the automatic recognition of psychological stress based on hrv indices. the original contribution of the proposed method is that, to the best of our knowledge, this is the first approach that integrates the detection of hrv features with the ground truth of the subjective perception of stressful events.

2. measuring psychological stress in naturalistic environments

according to cohen et al. [21], stress is a biopsychosocial phenomenon in which "environmental demands tax or exceed the adaptive capacity of an organism, resulting in psychological and biological changes that may place a person at risk for disease" (p. 3). this conceptualization suggests that measuring stress requires considering not only environmental demands, but also appraisals of such demands, as well as the physiological systems that come into play. consistent with this definition, two main approaches have been introduced to assess psychological stress in naturalistic conditions: the first is based on self-reporting of participants' subjective experiences and perception of stressful events; the second is based on sensing physiological signals associated with the stress response. in the following subsections, we provide a description of these procedures, along with a discussion of their strengths and limitations.
2.1 subjective psychological measures

the experience sampling method (esm), also known as ecological momentary assessment (ema), is a naturalistic observation technique that allows capturing participants' thoughts, feelings, and behaviours at multiple times across a range of situations as they occur in the natural environment [22]. in a typical esm study, participants are asked to fill out a form when prompted by an acoustic signal. thanks to repeated sampling, a number of surveys are collected from each participant throughout the day, thus providing an ecologically valid and highly detailed description of the subjective quality of experience. ecological validity is a strong requirement in psychometrics, since it measures how well a task or test predicts behaviours in a real-world setting. esm has been applied to study a wide range of behaviours and experiences, including daily stress [23], [24]. however, this procedure has high costs and places a significant burden on the participant, thus limiting its practical applicability as a stress monitoring technique [10], [25]. a less expensive and time-consuming approach for assessing experience and affect in everyday life is the day reconstruction method (drm), developed by kahneman and colleagues [26]. it involves the retrospective recall of the study period as a continuous sequence of episodes, which are rated on a series of affect scales. drm reports have been validated against experience sampling data, showing that this technique allows identifying changes in affect over the course of the day with almost the same accuracy as esm [26], [27]. however, since drm respondents are asked to reconstruct the previous day by completing a structured self-administered questionnaire, this method is potentially susceptible to recall biases. furthermore, it has been suggested that using retrospective measures as proxies for actual experience may produce weaker or inconsistent results, particularly when tested in connection with biological pathways [28].

2.2 objective physiological measures

an alternative strategy to assess stress in everyday situations is based on the analysis of the physiological correlates of this experience. psychological stressors are linked with the activation of two main neuro-physiological pathways involved in the maintenance of homeostasis: the hypothalamic-pituitary-adrenocortical (hpa) axis and the sympathetic-adrenal medullary (sam) system. as concerns the first system, one of the most investigated biomarkers is salivary cortisol, which, together with catecholamines, is one of the end products of hpa activation [20], [23], [29], [30]. however, due to significant between- and within-individual variation in the diurnal secretion of cortisol, measuring the magnitude of the cortisol response is not an easy procedure and requires the application of advanced statistical approaches such as multilevel models [31], [32]. with respect to the sam system, hrv, defined as the variation over time of the period between consecutive heartbeats, is increasingly regarded as a potentially convenient and non-invasive marker of the autonomic activation associated with psychological stressors [33]. the normal variability in heart rate (hr) is controlled by the balancing activation of the (acceleratory) sympathetic and the (deceleratory) parasympathetic branches of the autonomic nervous system.
however, under stressful events or contexts, there is a shift towards increased sympathetic control and reduced vagal tone, which is associated with decreased hrv [19]. on the other hand, higher hrv has been associated with the availability of context- and goal-based control of emotions [34]. based on this preliminary evidence, several authors have been experimenting with wearable heart monitors for the identification of stress levels from hrv, in both healthy and clinical populations. for example, kim and colleagues [35] used hrv patterns to discriminate between subjects reporting high and low levels of stress during the day, with an overall accuracy of 66.1 %. in a similar study, melillo et al. [16] compared within-subject variations of short-term hrv measures using short-term ecg recordings in students undergoing a university examination. by applying linear discriminant analysis to nonlinear features of hrv for automatic stress detection, these authors obtained a total classification accuracy of 90 %. kimhy et al. [17] investigated the relationship between stress and cardiac autonomic regulation in a sample of psychotic patients, using experience sampling in combination with cardiac monitoring. they found that momentary increases in stress were significantly associated with an increase in sympathovagal balance and parasympathetic withdrawal. in addition to studies that have examined the association between hrv and stress during waking hours, other recent research has proposed the use of hrv patterns during sleep as a supplement to the analysis of subjective assessments and voice messages collected during the workday [36], with encouraging, albeit preliminary, results.

2.3 towards an integrated approach for personalized stress recognition in mobile settings

as previously discussed, a fundamental issue in the measurement of stress in everyday life is that this response is idiosyncratic, because it depends on the individual's perception of challenges and on the skills that he/she can use to face those challenges. as a consequence, any approach aiming at inferring stress levels from "honest" physiological signals should not overlook the role played by the subjective appraisal of the situation. furthermore, since hrv values are characterized by high inter-individual variability, it is important that the system is tailored to individual characteristics [37], [38]. one possible approach to developing adaptive systems for stress recognition has been suggested by morris and guilak [37]. the strategy proposed by these authors involves identifying the subject's baseline and stress threshold in the lab by elicitation of sympathetic and parasympathetic responses, and then using this information to differentiate between stress and non-stress in daily life. a first attempt to implement this approach was performed by cinaz et al. [38]. these authors measured participants' sympathetic and parasympathetic responses during three different levels of mental workload (low, medium, and high) in a controlled laboratory setting. then, they investigated whether the data collected in this calibration session were appropriate to discriminate the corresponding workload levels occurring during office work. to this end, the individual hrv responses for each workload level were used to train the models, and the trained models were tested on data collected while the subjects performed normal office work, using a mobile ecg logger.
afterward, a multiple regression analysis was applied to model the relationship between the relevant hrv features and the subjective ratings of perceived workload: the resulting predictions were correct for six out of the seven subjects. in the present work, we propose an experience-sampling approach for incorporating subjective knowledge in the classification of psychological stress from hrv indices (table 1). the methodology consists of three main steps. in the first, experimental phase, we carried out an experience-sampling study to select the hrv features which best correlate with self-reported stress levels (section 3). drawing on these results, a personalized classification system was developed which is able to automatically detect stress events from those hrv features, after a training phase in which the system learns from the subjective responses given by the user (section 4). in the final step, the performance of the classification task was evaluated using the leave-one-out cross-validation process (section 5).

3. method

the objectives of this experiment were two-fold: i) to select a subset of hrv features which best correlate with self-reported stress levels collected during everyday activities; ii) to select a subset of self-reported questions about perceived stress levels which can be used as ground truth to train the final system.

3.1 participants

participants were 15 healthy subjects (8 males and 7 females, mean age = 23.33 years, st. dev. = 1.49), monitored longitudinally, for a total of 561 analyzed ecgs. participants were recruited through opportunistic sampling. participants filled in a questionnaire assessing factors that, in the opinion of the investigators, might interfere with the measures being assessed (i.e., caffeine consumption, smoking, alcohol consumption, exercise, hours of sleep, disease states, and medications). written informed consent was obtained from all subjects matching the inclusion criteria (age between 18 and 65 years, generally healthy, absence of major medical conditions, and completion of informed consent).

3.2 materials

data were collected through psychlog [39], a mobile experience sampling platform designed for research in mental health, which allows the simultaneous collection of psychological, physiological (ecg) and motion activity data. psychological data are collected from surveys that can be easily customized by the experimenter. for the purpose of this study, we used the italian adaptation of the esm questionnaire applied by jacobs et al. [40] for studying the immediate effects of stressors on mood. the survey includes open-ended and closed-ended questions investigating thoughts, current context (activity, persons present, and location), appraisals of the current situation, and mood. all self-assessments were rated on 7-point likert scales. hr and activity data are acquired from a wireless electrocardiogram (shimmer research™) equipped with a three-axial accelerometer. the wearable sensor platform includes a board that allows the transduction, amplification and pre-processing of raw sensor signals, and a bluetooth transmitter to wirelessly send the processed data. the unit is mounted on a soft-textile chest strap designed to seamlessly adapt to the user's body shape, bringing full freedom of movement.
sensed data are transmitted to the mobile phone bluetooth receiver and gathered by the psychlog computing module, which stores and processes the signals for the extraction of relevant features.

3.3 design and procedure

participants received a short briefing about the objective of the experiment and filled in the informed consent form. then, they were provided with a mobile phone with the psychlog application pre-installed, the wearable ecg and accelerometer sensor, and a user manual including the experimental instructions. the application was pre-programmed to collect data over 7 consecutive days, at random intervals during waking hours. at the end of the experiment, participants returned both the phone and the sensors to the laboratory staff. afterwards, participants were debriefed, thanked for their participation, and dismissed (figure 1).

3.4 data analysis

following the procedure suggested by jacobs et al. [40], three different psychological stress measures were computed in order to identify the stressful qualities of daily life experiences.

table 1. feature extraction from the electrocardiogram (ecg).

measure   description
rr mean   mean of all rr intervals
avnn      average of all nn intervals
sdnn      standard deviation of all nn intervals
rmssd     square root of the mean of the squares of differences between adjacent nn intervals
nn50      number of differences between adjacent nn intervals that are greater than 50 ms
totpwr    total spectral power of all nn intervals up to 0.04 hz
lf        total spectral power of all nn intervals between 0.04 hz and 0.15 hz
hf        total spectral power of all nn intervals between 0.15 hz and 0.4 hz
lf/hf     ratio of low- to high-frequency power (sympathovagal balance)
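for illustration, the time-domain indices of table 1 can be computed from an artifact-free rr (nn) series in a few lines; the sketch below (not part of psychlog) also estimates the lf and hf band powers with a welch psd on a 4 hz resampled tachogram, one common alternative to the spectral estimators used in the paper.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def time_domain_hrv(rr_ms):
    """Time-domain indices of table 1 from an RR (NN) series in ms."""
    rr = np.asarray(rr_ms, dtype=float)
    d = np.diff(rr)
    return {
        "rr_mean": rr.mean(),
        "sdnn": rr.std(ddof=1),
        "rmssd": np.sqrt(np.mean(d ** 2)),
        "nn50": int((np.abs(d) > 50).sum()),
    }

def freq_domain_hrv(rr_ms, fs=4.0):
    """LF, HF and LF/HF from the same series, via resampling + Welch PSD."""
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                 # beat times in seconds
    tt = np.arange(t[0], t[-1], 1.0 / fs)      # uniform 4 Hz time grid
    tach = interp1d(t, rr, kind="cubic")(tt)   # evenly sampled tachogram
    f, pxx = welch(tach - tach.mean(), fs=fs, nperseg=256)
    band = lambda lo, hi: np.trapz(pxx[(f >= lo) & (f < hi)],
                                   f[(f >= lo) & (f < hi)])
    lf, hf = band(0.04, 0.15), band(0.15, 0.40)
    return {"lf": lf, "hf": hf, "lf/hf": lf / hf}

rr = 800 + 50 * np.random.default_rng(0).standard_normal(300)  # toy series
print(time_domain_hrv(rr), freq_domain_hrv(rr))
```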
3.5 selection of psycho-physiological features to analyse hrv features, the qrs peaks and rr interval time series recorded and saved on the psychlog application were exported and further processed with the software matlab (version 7.10) in order to compute a set of hrv indexes. to this end, the ecg signal was first elaborated for artifact correction, and then a fast fourier transform was used to compute the power spectrum in the lf (0.04–0.15 hz) and hf (0.15–0.40 hz) bands [42]–[45]. to estimate the effect of hrv indexes (independent variable) on stress level (dependent variable) we applied hierarchical linear analysis, an alternative to multiple regression which is more suitable for our nested data. actually, hierarchical structure of data makes traditional forms of analysis unsuitable, since within-subject data are collected at many points in time during each day, across several days. moreover, traditional repeated-measures designs require the same number of observations for each subject and no missing data. finally, hierarchical linear analysis allows to take into account further dependencies existing in the data. 4. results given the repeated sampling, likert-type scales data were standardized (mean = 0; sd = 1) on each participant’s weekly mean for every variable before performing the analyses. esm data can be aggregated at the report level (the unit of analysis is the individual diary entry) or at the subject level (the unit of analysis is the participant). in the present study, most of the analyses were conducted using the subject-level aggregation, because this approach avoids problems related to unequal weights and produces more conservative significance tests [31]. the following table provides the correlations between stress measures described before. as can be seen from table 1, all scales measuring stress (stress, ars, ss, and ers) are significantly correlated between them. out of 561 “beeps”, participants filled 541 reports (96 %), of which 456 were included in the analysis (84 %). a total of 561 ecg sampling were recorded (100 %), and 374 were included in the analysis (69 %). the following table 2 provides the correlations between stress measures described before. in fact, as can be seen from table 2, all scales measuring stress (stress, ars, ss, and ers) are significantly correlated between them. the hierarchical linear analysis, both aggregation levels (report-level and subject-level) were considered in the model. results indicated a statistical significant hierarchical regression model for rmssd (beta .5350813; st. dev.: .2151596; p < .013), nn50 (beta -1.152351; st. dev.: .5322348; p < .030), lf / hf (beta 1.176422; st. dev.: .5386275; p < .029) (table 3). the rmssd method is preferred to nn50 because it has better statistical properties [42], [43]. findings of this esm experiment allowed to identify a subset of hrv features, which showed best correlations with self-reported psychological stress levels. in the next section, we describe how these psycho-physiological features were implemented in a figure 1. schematic representation of experimental design. table 2. correlations among psychological self-reported measures. zstress zars zss ers zstress r 1 ,312** ,215** ,213** sig. < .001 < .001 < .001 n 540 534 528 456 zars r ,312** 1 ,393** ,146** sig. < .001 < .001 .002 n 534 535 529 457 zss r ,215** ,393** 1 ,188** sig. < .001 < .001 < .001 n 528 529 529 457 ** correlation is significant at the 0.01 level (2-tailed). table 3. 
the personalized stress monitoring system includes three main components: a) a mobile acquisition/feedback module (for the collection of psycho-physiological and activity data); b) a remote analysis-decision module (for the analysis and classification of stress levels); c) a mobile visualization-reporting module (for the reporting of detected stress events).

4.1 mobile acquisition/feedback module

the mobile acquisition/feedback component consists of two elements: a wireless electronic module coupled with a commercial chest band for collecting ecg and motion data, and a smartphone application for the collection of psychological data, the transmission of data to the analysis-decision module, and the visualization of the stress events detected by the system.

4.1.1 ecg and motion activity data acquisition

the electronic acquisition platform, produced by shimmer research™, allows the transduction, amplification and pre-processing of raw ecg signals, and the transmission of the elaborated data via bluetooth to the smartphone. the unit is mounted on a soft textile chest strap (model micoach™ by adidas) designed to seamlessly adapt to the user's body shape, bringing full freedom of movement. a smartphone application, running on the android operating system, was developed for the preliminary elaboration of sensor data and remote database (db) archiving. the application processes the ecg and accelerometer signals for the extraction of three relevant parameters: hr, an activity index, and the rr intervals for the further hrv analysis provided by the analysis-decision module. the wearable electronic board collects raw sensor data with on-body signal conditioning. the ecg signal is sampled at 256 hz and sent to the smartphone together with the tri-axial acceleration data (ax, ay, az). the smartphone application pre-processes the user's physiological signal through a stepwise filtering stage aimed at removing typical ecg artifacts and interferences. in particular, baseline wander due to body movements and respiration artifacts is removed using cubic spline 3rd-order interpolation between the fiducial isoelectric points of the ecg [46]. power line interference and muscular noise are removed using an infinite impulse response (iir) notch filter at 50 hz and an iir low-pass filter at 40 hz. then, the pan-tompkins method is applied [47] to detect the qrs complex and to extract hr and the time series of non-uniform r-r intervals. since the variation of the ecg parameters is significantly affected by the activity performed by the user [48], [49], the signal magnitude area [50] is also extracted from the three-axis accelerometer signals in order to measure motion activity levels.
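the conditioning chain just described can be approximated with standard dsp building blocks. in the sketch below, a 0.5 hz high-pass stands in for the cubic-spline baseline correction and scipy's generic peak finder stands in for the full pan-tompkins detector, so this illustrates the shape of the pipeline rather than the authors' code.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch, find_peaks

FS = 256  # ECG sampling rate of the acquisition platform (Hz)

def condition_ecg(ecg, fs=FS):
    """Remove baseline wander, 50 Hz mains interference and muscular noise."""
    b, a = butter(2, 0.5 / (fs / 2), btype="highpass")   # baseline wander
    ecg = filtfilt(b, a, ecg)
    b, a = iirnotch(50.0, 30.0, fs=fs)                   # power-line notch
    ecg = filtfilt(b, a, ecg)
    b, a = butter(4, 40.0 / (fs / 2), btype="lowpass")   # muscular noise
    return filtfilt(b, a, ecg)

def rr_series_ms(ecg, fs=FS):
    """Crude QRS detection -> non-uniform R-R intervals in ms."""
    clean = condition_ecg(ecg)
    peaks, _ = find_peaks(clean, distance=int(0.3 * fs),
                          height=np.percentile(clean, 98))
    return np.diff(peaks) * 1000.0 / fs
```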
4.1.2 psychological data acquisition

the acquisition of psychological data is managed by an electronic survey, which is displayed at random times during the day on the application's screen. the survey includes a subset of the likert-type items selected from the esm questionnaire by jacobs et al. [40] described in section 3.2. only the esm items with the highest correlation with the hrv features were included in the final survey: this choice was made in order to reduce as much as possible the burden on the user during the training phase of the system. the final selected items were (listed in the same sequence as in the final survey):

1. what is your stress level? (min: 1; max: 10)
2. this activity is a challenge (min: 1; max: 7)
3. this is something i'm good at (min: 1; max: 7)
4. i would rather be doing something else (min: 1; max: 7)
5. it takes me effort (min: 1; max: 7)

the average time for the completion of a full questionnaire is about 10-15 seconds. during a typical training week, 4-5 surveys per day are collected.

4.2 analysis-decision module (adm)

the adm is composed of two main modules: the feature extraction module and the classification module.

4.2.1 data exchange

the wearable sensors monitor the user and transfer data to web servers, using a smartphone to collect and pre-elaborate the data. in addition, the application allows users to track their own stress levels through a graphical representation (see section 4.3). the chest band and its electronics act as masters that initiate the bluetooth communication with the android phone. the bluetooth protocol has a range of approximately 20 m and provides secure data transmission. the communication between the phone and the central db takes place over wi-fi or 3g networks. the decision support system (dss) makes use of the information transmitted by the smartphone and stored within the central db. for each subject, a user profile is created that maintains all the history data. in particular, the remote db stores the physiological data (hr, rr intervals and activity index) and the corresponding stress values extracted from the psychological surveys. given the user id, the timestamp, and the session, the adm retrieves from the db the physiological data together with the questionnaires filled in by the user about the perceived stress level. from these data, features are extracted and used to train the classification module (training phase). after training, the adm acts as an expert system and provides the corresponding user with the stress level extracted automatically by the trained classification module (testing phase). the adm acts asynchronously with respect to the sensor data collection process. at fixed time intervals, the new sensor data belonging to each subject are collected and a feature extraction process takes place in order to create a structured dataset. the classification module is trained and validated with the most relevant features for automatic stress assessment.

4.2.2 feature extraction

once the data are sent to the remote server, the parts of the rr-interval signals containing artifacts are discarded and the hrv features are extracted according to the traditional approach proposed by the international guidelines on hrv [42], [43], to estimate cardiac vagal and sympathetic activities as markers of the autonomic interaction, using a data exchange module (figure 2).

figure 2. data exchange module.
in the time domain, statistical indices were extracted from the rr time series, such as the mean (mrr), the standard deviation (σrr), the root mean square of successive differences of intervals (rmssd), the difference between the longest and shortest rr intervals, and the number of successive differences of intervals which differ by more than 50 ms (pnn50 %, expressed as a percentage of the total number of heartbeats analysed). in the frequency domain, parameters were extracted for each frequency band, low frequency (lf: 0.03-0.15 hz) and high frequency (hf: 0.15-0.40 hz), including the absolute powers, the peak frequencies (max lf and max hf) and the lf/hf power ratio, which measures the global sympathetic-parasympathetic equilibrium: above the threshold, the curve reveals sympathetic dominance; below the threshold, the parasympathetic influence is dominant. these features are extracted using an estimate of the power spectral density (psd) according to the burg spectral estimation [51], where the optimal order p was selected according to the akaike information criterion [52]. the power of each band is normalized with respect to the total power of the spectrum. a nonlinear parameter was also extracted, namely the poincaré plot, a useful tool to investigate and combine the differences of the cardiac rhythms during the performed tasks. it is a graphical representation created by plotting all rr(n) on the x-axis versus rr(n+1) on the y-axis. the data are then fitted using an ellipse projected along the line of identity, extracting the two standard deviations (sd1, sd2) [53], as shown in figure 3.

figure 3. sd1 and sd2 of the poincaré plot observed for a portion of the analysed rr intervals (sd1 = 30.3, sd2 = 71).
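the two poincaré descriptors follow directly from successive rr pairs; a minimal sketch:

```python
import numpy as np

def poincare_sd(rr_ms):
    """SD1/SD2 of the Poincaré plot (RR(n) on x, RR(n+1) on y).

    SD1 is the dispersion perpendicular to the line of identity
    (short-term variability); SD2 is the dispersion along it
    (long-term variability)."""
    rr = np.asarray(rr_ms, dtype=float)
    x, y = rr[:-1], rr[1:]
    sd1 = np.std((y - x) / np.sqrt(2), ddof=1)
    sd2 = np.std((y + x) / np.sqrt(2), ddof=1)
    return sd1, sd2
```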
4.2.3 classification module

the classification module is based on machine learning (ml) models, such as artificial neural networks (ann), based on inductive inference [54]. we decided to use an ml model to cope with the non-linear and complex relations between the monitored parameters (table 4) and the stress level prediction. artificial neural networks (anns) are particularly suited for solving such problems. they are biologically inspired computational models, which consist of a network composed of artificial neurons. the implementation of ml for stress level detection requires a number of steps:

• initialization: the parameters of the implemented ml model are initialized.
• training: the model is trained with the extracted and selected features to adapt itself to classify the given inputs. the loaded features, along with the self-reported stress levels, generate the training set. the self-reported stress levels are collected during the training phase, in which the participant is prompted at random times during the day with a survey including five items (see section 4.1.2) that allows the user to self-evaluate, on a likert scale, the perceived level of stress, following the protocol described in section 3.3. by matching this psychological "ground truth" with the sensor data, the synaptic weights of the network's internal connections are modified in order to force the output to minimize the error with respect to the presented example (in this case, the stress level obtained from the survey). in this step, the architecture of the model and its hyper-parameters are optimized. the examples labelled with the stress level are used to create a personalized stress prediction model.
• validation: the adequately trained model is able to classify the given input and present a consequent output value: the value obtained is the inferred stress level. it is validated in order to guarantee good predictive properties.

table 4. features extracted and analysed from the signal.

no.  feature    description                                                          signal
1    mrr        mean rr interval                                                     rr
2    σrr        standard deviation of the rr intervals                               rr
3    rmssd      root mean square of successive differences of intervals              rr
4    pnn50 %    number of successive differences of intervals > 50 ms                rr
5    lf         spectral estimation of low-frequency power (0.03-0.15 hz)            rr
6    max lf     peak value of the low-frequency band                                 rr
7    hf         spectral estimation of high-frequency power (0.15-0.40 hz)           rr
8    max hf     peak value of the high-frequency band                                rr
9    lf/hf      spectral estimation of the power ratio                               rr
10   sd1, sd2   standard deviations of the poincaré plot                             rr
11   sma        signal magnitude area                                                acc. x, y, z

during the fine-tuning of the analysis-decision module, we decided to develop a self-organizing map (som) integrated with fuzzy rules. the som is a network structure which provides a topological mapping [55]-[57]. its main difference from the (supervised) artificial neural network is that it is based on unsupervised learning. it is composed of a two-dimensional layer in which all the inputs are connected to each node in the network (figure 4). a topographic map is autonomously organized by a cyclic process of comparing input patterns to the vectors at each node; the node vector to which the inputs match is selectively optimized to represent an average of the training data, so that eventually all the training data are represented by the node vectors of the map. starting with a randomly organized set of nodes, and proceeding to the creation of a feature map representing the prototypes of the input patterns, the training procedure is as follows:

1. initialize the weights $w_{ij}$ ($1 \le i \le n_f$, $1 \le j \le m$) to small random values, where $n_f$ is the total number of selected features (inputs) and $m$ is the total number of nodes in the map. set the initial radius of the neighbourhood around node $j$ as $N_j(t)$.

2. present the inputs $x_1(t), x_2(t), \ldots, x_{n_f}(t)$, where $x_i(t)$ is the $i$-th input to node $j$ at time $t$.

3. calculate the euclidean distance $d_j$ between the inputs and node $j$, and determine the winning node $j^*$ which minimizes

$d_j = \lVert W_j(t) - X(t) \rVert$ .  (1)

every node is examined to determine which node's weights are most similar to the input vector; the winning node is commonly known as the best matching unit (bmu). the radius of the neighbourhood of the bmu is then calculated. this value starts large and diminishes at each time step; any nodes found within this radius are deemed to be inside the bmu's neighbourhood.

4. update the weights $w_{ij}$ of the winning neuron $j^*$ and of the neurons in its neighbourhood $N_{j^*}(t)$ at time $t$, for the input vector $X$, according to

$w_{ij}(t) = w_{ij}(t-1) + \alpha(t)\,[X(t) - w_{ij}(t-1)]$ ,  (2)

which makes them more similar to the input vector, where $\alpha(t)$ is the learning rate. both $\alpha(t)$ and $N_{j^*}(t)$ are controlled so as to decrease in $t$. if the process reaches the maximum number of iterations, stop; otherwise, go to step 2.
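steps 1-4 compress to a short loop. the sketch below assumes inputs scaled to [0, 1] and uses a gaussian neighbourhood with linearly decaying rate and radius, which is one common choice; the paper does not specify these decay schedules.

```python
import numpy as np

def train_som(X, m_rows=6, m_cols=6, epochs=20, alpha0=0.5, seed=0):
    """Minimal SOM (steps 1-4): find the BMU for each input and pull the
    BMU and its neighbours towards the input with a decaying rate/radius."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    w = rng.random((m_rows * m_cols, n_features))          # step 1
    grid = np.array([(r, c) for r in range(m_rows) for c in range(m_cols)])
    n_steps = epochs * len(X)
    for step in range(n_steps):
        x = X[rng.integers(len(X))]                        # step 2
        d = np.linalg.norm(w - x, axis=1)                  # step 3, eq. (1)
        bmu = int(np.argmin(d))
        frac = 1.0 - step / n_steps
        alpha = alpha0 * frac                              # decaying rate
        radius = max(max(m_rows, m_cols) / 2 * frac, 0.5)  # shrinking N_j(t)
        g = np.linalg.norm(grid - grid[bmu], axis=1)       # map distances
        h = np.exp(-(g ** 2) / (2 * radius ** 2))          # neighbourhood
        w += alpha * h[:, None] * (x - w)                  # step 4, eq. (2)
    return w

X = np.random.default_rng(1).random((200, 11))  # 11 features as in table 4
weights = train_som(X)
```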
at the end of the training process, for each input variable $x_i$ we generated the fuzzy membership functions using triangular functions with centres at the corresponding weights $w_{ij}$ of the map and with corresponding variances $v_{ij}$, where $i$ denotes the $i$-th input and $j$ the $j$-th node of the map. the centres of the triangular membership functions for the $i$-th input are $(w_{i1}, w_{i2}, \ldots, w_{im})$, and the corresponding regions were set to $[w_{i1}-2v_{i1}, w_{i1}+2v_{i1}]$, $[w_{i2}-2v_{i2}, w_{i2}+2v_{i2}]$, ..., $[w_{im}-2v_{im}, w_{im}+2v_{im}]$, where $m$ is the last node of the map. we developed membership functions and fuzzy rules for each hrv parameter, including heart rate and motion activity. in order to reduce the number of fuzzy rules and to improve the system's reliability, narrowly separated regions were combined into a single region. let the positions of the four corners of region $j$ be $ll_j$, $lh_j$, $rh_j$ and $rl_j$ (for a triangular membership function, $lh_j = rh_j$). two neighbouring regions $j-1$ and $j$ were merged if they satisfied

$\frac{lh_j + rh_j}{2} - \frac{lh_{j-1} + rh_{j-1}}{2} \le thr$ ,  (3)

where $thr$ is a pre-specified threshold (set to 0.1 in our experiments). this process continued until all regions were well separated in terms of the threshold. accordingly, some fuzzy regions had trapezoidal shapes instead of triangular ones, as shown in figure 5. after that, we generated fuzzy rules as a set of associations of the form "if the antecedent conditions hold, then the consequent conditions hold". each feature was normalized to the range [0.0, 1.0] and each region of the fuzzy membership function was labelled r1, r2, ..., rn. an input was assigned the label of the region where the maximum membership value was obtained. in particular, we adopted the method proposed by wang et al. [56], in which each training sample produces a fuzzy rule; an example of a generated rule is: if feature1 is r1 and feature2 is rn and feature3 is r2 and feature4 is r3 and feature5 is r6 and feature6 is r8 ... and featurem is r3, then it is a medium stress level. consequently, the number of fuzzy rules was of the same order as the number of training samples. the problem is that a large number of training patterns may lead to repeated or conflicting rules. to deal with this problem, we recorded the number of times each rule was repeated during the learning process, and the rules supported by a large number of examples were saved. a centroid defuzzification formula was used to determine the output for each input pattern $p$:

$Z = \sum_{i=1}^{k} D_i^{p} O_i \Big/ \sum_{i=1}^{k} D_i^{p}$ ,  (4)

where $Z$ is the output, $k$ is the number of rules, $O_i$ is the class generated by rule $i$ and $D_i^{p}$ measures how well the input vector fits the $i$-th rule; $D_i^{p}$ is given by the product of the degrees of the pattern in the regions which the $i$-th rule occupies. the output is within [0, 5] for the numeral recognition of the stress level (0 = unknown, 1 = low stress, 2 = mild stress, 3 = elevated stress, 4 = high stress, 5 = severe stress); the output $Z$ is rounded down to the nearest integer. fuzzy rules do not necessarily occupy all the fuzzy regions in the input space: there may be regions for which no related rule exists. this is the case when the denominator in equation (4) is zero, and we then label the corresponding input stress level as unknown. after training, the model is able to discriminate up to 6 different classes during the test phase with the trained fuzzy classifier. for each user, the dss is trained on the basis of the evaluation of the stress level supplied by the surveys, such data being collected by the mobile application and transmitted to the database.
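equation (4) in code form, with the rule degrees $D_i^p$ assumed to have been computed already as products of membership degrees; the class labels follow the 0-5 coding given above.

```python
import numpy as np

STRESS_CLASSES = {0: "unknown", 1: "low", 2: "mild",
                  3: "elevated", 4: "high", 5: "severe"}

def defuzzify(degrees, outputs):
    """Centroid defuzzification, eq. (4): weighted mean of rule outputs O_i
    with weights D_i^p, floored to the nearest smaller integer class."""
    degrees = np.asarray(degrees, dtype=float)
    outputs = np.asarray(outputs, dtype=float)
    if degrees.sum() == 0:          # no rule fires: stress level unknown
        return 0
    z = np.sum(degrees * outputs) / np.sum(degrees)
    return int(np.floor(z))

# three fired rules pointing at mild/elevated/high stress
print(STRESS_CLASSES[defuzzify([0.6, 0.3, 0.1], [2, 3, 4])])  # -> "mild"
```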
as the dss training is completed, the dss acts as an expert system, supplying the stress level information for each user as new sensor data become available within the database (figure 6).

figure 4. generation of the fuzzy membership functions for the i-th input. the number of triangular functions is equal to the number of som nodes.

figure 5. trapezoidal function obtained for neighbouring regions.

4.3 mobile visualization module

after data collection, the mobile application can poll the stress level report provided by the analysis-decision module (see section 4.2) from the remote database, at definite time intervals or at the user's request. the stresstracker visualization component of the mobile application provides the user with a graphic visualization of the measured stress events. this information is visualized by the stresstracker (figure 7, right), which shows the number of detected stressful events over the course of the last day, week, or month, respectively.

5. cross validation

in order to check the stress detection capability of the personalized model, the performance of the classification task was evaluated using the leave-one-out cross-validation (loocv) process, where each fold consists of one left-out session of data acquisition. this method is an iterative process in which one session at a time is recruited into the dataset for validation: the som classifier combined with the fuzzy rules is trained using the remaining data and validated on the single, left-out validation point. this ensures that the validation is unbiased, because the classifier does not see the validation input sample during its training. one by one, each available session for each subject was recruited for validation. the performance of the system was assessed by using the confusion matrix, in which the generic element (i, j) indicates how many times (in mean percentage ± sd) a pattern belonging to class i was classified as belonging to class j.

5.1 cross validation results

the system is able to recognize the presence of stress in the selected population. in particular, the results obtained with three subjects are reported in table 5. the mean ± sd percentages of the confusion matrix obtained with the first subject are reported in table 5a: the model correctly identifies the presence of stress in 89.0 % of cases and the absence of stress in 86.6 % of cases. the mean ± sd percentages of the confusion matrix obtained with the second subject are reported in table 5b: the model correctly identifies the presence of stress in 66.6 % of cases and the absence of stress in 70.0 % of cases. the mean ± sd percentages of the confusion matrix obtained with the third subject are reported in table 5c: the model correctly identifies the presence of stress in 95.7 % of cases and the absence of stress in 73.0 % of cases. these results demonstrate the high discriminatory power of the system.

table 5. confusion matrices (mean percentage ± sd).

a. first subject:
             stress        no stress
stress       89 ± 6.2      11 ± 2.1
no stress    13.3 ± 3.6    86.6 ± 5

b. second subject:
             stress        no stress
stress       66.6 ± 11     33.3 ± 6.7
no stress    30 ± 3.1      70 ± 5.9

c. third subject:
             stress        no stress
stress       95 ± 3.3      5 ± 4.6
no stress    26 ± 3.1      73 ± 8.2
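the session-wise loocv scheme can be written as a simple loop around any fit/predict pair standing in for the trained som-fuzzy classifier (both are placeholders here, not the actual system):

```python
import numpy as np

def loocv_confusion(sessions, labels, fit, predict, n_classes=2):
    """Session-wise leave-one-out: train on all sessions but one, test on
    the held-out session, and accumulate a confusion matrix."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for k in range(len(sessions)):
        train_x = [s for i, s in enumerate(sessions) if i != k]
        train_y = [y for i, y in enumerate(labels) if i != k]
        model = fit(train_x, train_y)            # never sees session k
        cm[labels[k], predict(model, sessions[k])] += 1
    return cm

# with the accumulated matrix, row-normalised percentages give values
# comparable to table 5:
# percent = 100 * cm / cm.sum(axis=1, keepdims=True)
```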
the original contribution of this work concerns the development of a new methodology, which allows the use of subjective evaluations of potentially stressful situations in the calibration and training of the classification system. in particular, incorporating self-reported psychological data in the system knowledge base allows for a more comprehensive and personalized definition of the stress response as measured by hrv indices. an objective of future research is to validate the accuracy of the personalized stress detection model against other physiological markers, e.g. salivary cortisol collected during daily-life activities. acknowledgement this work was supported by the european funded project "interstress - interreality in the management and treatment of stress-related disorders", grant number: fp7-247685. the co-authors would like to thank giovanni pioggia and andrea gaggioli, who have contributed equally to the manuscript. figure 6. architecture of the automatic stress classification module. table 5. confusion matrices.

a. the confusion matrix obtained with the first subject:
             stress        no stress
stress       89 ± 6.2      11 ± 2.1
no stress    13.3 ± 3.6    86.6 ± 5

b. the confusion matrix obtained with the second subject:
             stress        no stress
stress       66.6 ± 11     33.3 ± 6.7
no stress    30 ± 3.1      70 ± 5.9

c. the confusion matrix obtained with the third subject:
             stress        no stress
stress       95 ± 3.3      5 ± 4.6
no stress    26 ± 3.1      73 ± 8.2

figure 7. on the left side is reported the current stress level of the user, while on the right side are reported the stressful events detected over the last week. references [1] y. yang, daily stressor, daily resilience, and daily somatization: the role of trait aggression, personality and individual differences, vol. 165, 2020, 110141. doi: 10.1016/j.paid.2020.110141 [2] s. scholten, k. lavallee, j. velten, x. c. zhang, j. margraf, the brief daily stressors screening tool: an introduction and evaluation, stress and health, vol. 36, no. 5, 2020, pp. 686-692. doi: 10.1002/smi.2965 [3] l. jandorf, e. deblinger, j. m. neale, a. a. stone, daily versus major life events as predictors of symptom frequency: a replication study, journal of general psychology, vol. 113, no. 3, 1986, pp. 205-218. doi: 10.1080/00221309.1986.9711031 [4] p. j. brantley, g. n. jones, daily stress and stress-related disorders, annals of behavioral medicine, vol. 15, no. 1, 1993, pp. 17-25. doi: 10.1093/abm/15.1.17 [5] t. d. tran, j. kim, n.-h. ho, h.-j. yang, s. pant, s.-h. kim, g.-s. lee, stress analysis with dimensions of valence and arousal in the wild, applied sciences, vol. 11, no. 11, 2021, 5194. doi: 10.3390/app11115194 [6] m. kusserow, o. amft, g. tröster, monitoring stress-arousal in the wild - measuring the intangible, ieee pervasive computing, vol. 12, no. 2, 2013, pp. 28-37. doi: 10.1109/mprv.2012.56 [7] j. a. healey, r. w. picard, detecting stress during real-world driving tasks using physiological sensors, ieee transactions on intelligent transportation systems, vol. 6, no. 2, 2005, pp. 156-166. doi: 10.1109/tits.2005.848368 [8] g. tartarisco, g. baldus, d. corda, r. raso, a. arnao, m. ferro, a. gaggioli, g. pioggia, personal health system architecture for stress monitoring and support to clinical decisions, computer communications, vol. 35, no. 11, 2012, pp. 1296-1305. doi: 10.1016/j.comcom.2011.11.015 [9] n. carbonaro, p. cipresso, a. tognetti, g. anania, d. de rossi, a. gaggioli, g.
riva, a mobile biosensor to detect cardiorespiratory activity for stress tracking, 3rd international workshop on pervasive computing paradigms for mental health, 2013. doi: 10.4108/icst.pervasivehealth.2013.252357 [10] p. cipresso, a. gaggioli, s. serino, s. raspelli, c. vigna, f. pallavicini, g. riva, inter-reality in the evaluation and treatment of psychological stress disorders: the interstress project, annual review of cybertherapy and telemedicine, vol. 181, 2012, pp. 8-11. doi: 10.3233/978-1-61499-121-2-8 [11] n. carbonaro, p. cipresso, a. tognetti, g. anania, d. de rossi, f. pallavicini, a. gaggioli, g. riva, psychometric assessment of cardio-respiratory activity using a mobile platform, in wearable technologies: concepts, methodologies, tools, and applications, 2018. doi: 10.4018/978-1-5225-5484-4.ch037 [12] a. gaggioli, f. pallavicini, l. morganti, s. serino, c. scaratti, m. briguglio, g. crifaci, n. vetrano, a. giulintano, g. bernava, g. tartarisco, g. pioggia, s. raspelli, p. cipresso, c. vigna, a. grassi, m. baruffi, b. wiederhold, g. riva, experiential virtual scenarios with real-time monitoring (interreality) for the management of psychological stress: a block randomized controlled trial, journal of medical internet research, vol. 16, no. 7, 2014, e167. doi: 10.2196/jmir.3235 [13] a. gaggioli, p. cipresso, s. serino, d. m. campanaro, f. pallavicini, b. k. wiederhold, g. riva, positive technology: a free mobile platform for the self-management of psychological stress, studies in health technology and informatics, vol. 199, 2014, pp. 25-29. doi: 10.3233/978-1-61499-401-5-25 [14] d. giakoumis, a. drosou, p. cipresso, d. tzovaras, g. hassapis, a. gaggioli, g. riva, real-time monitoring of behavioural parameters related to psychological stress, annual review of cyber therapy and telemedicine, vol. 10, 2012, pp. 287-291. doi: 10.3233/978-1-61499-121-2-287 [15] l. salahuddin, j. cho, m. g. jeong, d. kim, ultra short term analysis of heart rate variability for monitoring mental stress in mobile settings, 2007, pp. 4656-4659. doi: 10.1109/iembs.2007.4353378 [16] p. melillo, m. bracale, l. pecchia, nonlinear heart rate variability features for real-life stress detection. case study: students under stress due to university examination, biomedical engineering online, vol. 10, 2011, 96. doi: 10.1186/1475-925x-10-96 [17] d. kimhy, p. delespaul, h. ahn, s. cai, m. shikhman, j. a. lieberman, d. malaspina, r. p. sloan, concurrent measurement of ‘real-world’ stress and arousal in individuals with psychosis: assessing the feasibility and validity of a novel methodology, schizophrenia bulletin, vol. 36, no. 6, 2010, pp. 1131–1139. doi: 10.1093/schbul/sbp028 [18] r. p. sloan, p. a. shapiro, e. bagiella, s. m. boni, m. paik, j. t. bigger jr., r. c. steinman, j. m. gorman, effect of mental stress throughout the day on cardiac autonomic control, biological psychology, vol. 37, no. 2, 1994, pp. 89-99. doi: 10.1016/0301-0511(94)90024-8 [19] g. g. berntson, j. t. cacioppo, heart rate variability: stress and psychiatric conditions, in dynamic electrocardiography, 2007. doi: 10.1002/9780470987483.ch7 [20] p. cipresso, d. colombo, g. riva, computational psychometrics using psychophysiological measures for the assessment of acute mental stress, sensors (switzerland), vol. 19, no. 4, 2019 doi: 10.3390/s19040781 [21] s. cohen, r. c. kessler, l. u. 
gordon, strategies for measuring stress in studies of psychiatric and physical disorders, measuring stress: a guide for health and social scientists, oxford university press, 1995, pp. 3-26, isbn: 978-0195121209. [22] m. csikszentmihalyi, r. larson, validity and reliability of the experience-sampling method, in flow and the foundations of positive psychology: the collected works of mihaly csikszentmihalyi, 2014, pp. 35-54. doi: 10.1007/978-94-017-9088-8_3 [23] m. memar, a. mokaribolhassan, stress level classification using statistical analysis of skin conductance signal while driving, sn applied sciences, vol. 3, no. 1, 2021, 64. doi: 10.1007/s42452-020-04134-7 [24] k. wang, p. guo, an ensemble classification model with unsupervised representation learning for driving stress recognition using physiological signals, ieee transactions on intelligent transportation systems, vol. 22, no. 6, 2021, pp. 3303-3315. doi: 10.1109/tits.2020.2980555 [25] n. van berkel, d. ferreira, v. kostakos, the experience sampling method on mobile devices, acm computing surveys, vol. 50, no. 6, 2017, pp. 1-40. doi: 10.1145/3123988 [26] d. kahneman, a. b. krueger, d. a. schkade, n. schwarz, a. a. stone, a survey method for characterizing daily life experience: the day reconstruction method, science, vol. 306, no. 5702, 2004, pp. 1776-1780. doi: 10.1126/science.1103572 [27] a. a. stone, j. e. schwartz, d. schkade, n. schwarz, a. krueger, d. kahneman, a population approach to the study of emotion: diurnal rhythms of a working day examined with the day reconstruction method, emotion, vol. 6, no. 1, 2006, pp. 139-149. doi: 10.1037/1528-3542.6.1.139 [28] t. s. conner, l. f. barrett, trends in ambulatory self-report: the role of momentary experience in psychosomatic medicine, psychosomatic medicine, vol. 74, no. 4, 2012, pp. 327-337. doi: 10.1097/psy.0b013e3182546f18 [29] c. kirschbaum, d. h. hellhammer, salivary cortisol in psychobiological research: an overview, neuropsychobiology, vol. 22, no. 3, 1989, pp. 150-169. doi: 10.1159/000118611 [30] g. e. miller, e. chen, e. s. zhou, if it goes up, must it come down? chronic stress and the hypothalamic-pituitary-adrenocortical axis in humans, psychological bulletin, vol. 133, no. 1, 2007, pp. 25-45. doi: 10.1037/0033-2909.133.1.25 [31] d. j. hruschka, b. a. kohrt, c. m.
worthman, estimating between- and within-individual variation in cortisol levels using multilevel models, psychoneuroendocrinology, vol. 30, no. 7, 2005, pp. 698-714. doi: 10.1016/j.psyneuen.2005.03.002 [32] g. h. ice, a. katz-stein, j. himes, r. l. kane, diurnal cycles of salivary cortisol in older adults, psychoneuroendocrinology, vol. 29, no. 3, 2004, pp. 355-370. doi: 10.1016/s0306-4530(03)00034-9 [33] h. mansikka, p. simola, k. virtanen, d. harris, l. oksama, perceived mental stress and reactions in heart rate variability - a pilot study among employees of an electronics company, international journal of occupational safety and ergonomics, vol. 14, no. 3, 2008, pp. 1344-1352. doi: 10.1080/10803548.2008.11076767 [34] j. f. thayer, f. åhs, m. fredrikson, j. j. sollers, t. d. wager, a meta-analysis of heart rate variability and neuroimaging studies: implications for heart rate variability as a marker of stress and health, neuroscience and biobehavioral reviews, vol. 36, no. 2, 2012, pp. 747-756. doi: 10.1016/j.neubiorev.2011.11.009 [35] d. kim, y. seo, j. cho, c. h. cho, detection of subjects with higher self-reporting stress scores using heart rate variability patterns during the day, 2008, pp. 682-685. doi: 10.1109/iembs.2008.4649244 [36] a. muaremi, b. arnrich, g. tröster, towards measuring stress with smartphones and wearable devices during workday and sleep, bionanoscience, vol. 3, no. 2, 2013, pp. 172-183. doi: 10.1007/s12668-013-0089-2 [37] m. e. morris, q. kathawala, t. k. leen, e. e. gorenstein, f. guilak, w. deleeuw, m. labhard, mobile therapy: case study evaluations of a cell phone application for emotional self-awareness, journal of medical internet research, vol. 12, no. 2, 2010, e10. doi: 10.2196/jmir.1371 [38] b. cinaz, b. arnrich, r. la marca, g. tröster, monitoring of mental workload levels during an everyday life office-work scenario, personal and ubiquitous computing, vol. 17, no. 2, 2013, pp. 229-239. doi: 10.1007/s00779-011-0466-1 [39] a. gaggioli, g. pioggia, g. tartarisco, g. baldus, d. corda, p. cipresso, g. riva, a mobile data collection platform for mental health research, personal and ubiquitous computing, vol. 17, no. 2, 2013, pp. 241-251. doi: 10.1007/s00779-011-0465-2 [40] n. jacobs, i. myin-germeys, c. derom, p. delespaul, j. van os, n. a. nicolson, a momentary assessment study of the relationship between affective and adrenocortical stress responses in daily life, biological psychology, vol. 74, no. 1, 2007, pp. 60-66. doi: 10.1016/j.biopsycho.2006.07.002 [41] r. larson, p. a. e. g. delespaul, analyzing experience sampling data: a guide book for the perplexed, in the experience of psychopathology, 2010, pp. 58-78. doi: 10.1017/cbo9780511663246.007 [42] m. malik, j. t. bigger, a. j. camm, r. e. kleiger, a. malliani, a. j. moss, p. j. schwartz, heart rate variability. standards of measurement, physiological interpretation, and clinical use, european heart journal, vol. 17, no. 3, 1996, pp. 354-381. doi: 10.1093/oxfordjournals.eurheartj.a014868 [43] m. malik, heart rate variability: standards of measurement, physiological interpretation, and clinical use, circulation, vol. 93, no. 5, 1996, pp. 1043-1065. doi: 10.1161/01.cir.93.5.1043 [44] v. magagnin, m. mauri, p. cipresso, l. mainardi, e. brown, s. cerutti, m. villamira, r. barbieri, heart rate variability and respiratory sinus arrhythmia assessment of affective states by bivariate autoregressive spectral analysis, in computing in cardiology, 2010, vol. 37, pp. 145-148. [45] m. mauri, v. magagnin, p.
cipresso, l. mainardi, e. n. brown, s. cerutti, m. villamira, r. barbieri, psychophysiological signals associated with affective states, 2010, pp. 3563-3566. doi: 10.1109/iembs.2010.5627465 [46] r. jane, p. laguna, n. v. thakor, p. caminal, adaptive baseline wander removal in the ecg: comparative analysis with cubic spline technique, 1992, pp. 143-146. doi: 10.1109/cic.1992.269426 [47] j. pan, w. j. tompkins, a real-time qrs detection algorithm, ieee transactions on biomedical engineering, vol. bme-32, no. 3, 1985, pp. 230-236. doi: 10.1109/tbme.1985.325532 [48] j. h. houtveen, p. f. c. groot, e. j. c. de geus, effects of variation in posture and respiration on rsa and pre-ejection period, psychophysiology, vol. 42, no. 6, 2005, pp. 713-719. doi: 10.1111/j.1469-8986.2005.00363.x [49] f. h. wilhelm, p. grossman, m. i. müller, bridging the gap between the laboratory and the real world: integrative ambulatory psychophysiology, handbook of research methods for studying daily life, 2012, the guilford press. [50] c. v. c. bouten, k. t. m. koekkoek, m. verduin, r. kodde, j. d. janssen, a triaxial accelerometer and portable data processing unit for the assessment of daily physical activity, ieee transactions on biomedical engineering, vol. 44, no. 3, 1997, pp. 136-147. doi: 10.1109/10.554760 [51] e. pardo-igúzquiza, f. j. rodríguez-tovar, maximum entropy spectral analysis, 2021, springer. doi: 10.1007/978-3-030-26050-7_197-1 [52] h. akaike, fitting autoregressive models for prediction, annals of the institute of statistical mathematics, vol. 21, no. 1, 1969, pp. 243-247. doi: 10.1007/bf02532251 [53] m. brennan, m. palaniswami, p. kamen, do existing measures of poincaré plot geometry reflect nonlinear features of heart rate variability?, ieee transactions on biomedical engineering, vol. 48, no. 11, 2001, pp. 1342-1347. doi: 10.1109/10.959330 [54] s. r. sain, v. n. vapnik, the nature of statistical learning theory, technometrics, vol. 38, no. 4, 1996, 409. doi: 10.2307/1271324 [55] t. kohonen, the self-organizing map, neurocomputing, vol. 21, no. 1-3, 1998, pp. 1-6. doi: 10.1016/s0925-2312(98)00030-7 [56] l. x. wang, j. m. mendel, generating fuzzy rules by learning from examples, ieee transactions on systems, man and cybernetics, vol. 22, no. 6, 1992, pp. 1414-1427. doi: 10.1109/21.199466 [57] g. tartarisco, n. carbonaro, a. tonacci, g. m. bernava, a. arnao, g. crifaci, p. cipresso, g. riva, a. gaggioli, d. de rossi, a. tognetti, g. pioggia, neuro-fuzzy physiological computing to assess stress levels in virtual reality therapy, interacting with computers, vol. 27, no. 5, 2015, pp. 521-533.
doi: 10.1093/iwc/iwv010 digital transformation: towards process automation in a cloud native architecture acta imeko issn: 2221-870x march 2023, volume 12, number 1, 1 6 digital transformation: towards process automation in a cloud native architecture alexander oppermann1, samuel eickelberg1, manuel meiborg1 1 physikalisch-technische bundesanstalt, abbestr. 2-12, 10587 berlin, germany section: research paper keywords: digital transformation; universal service hub; metrological processes; digital calibration certificate; dcc citation: alexander oppermann, samuel eickelberg, manuel meiborg, digital transformation: towards process automation in a cloud native architecture, acta imeko, vol. 12, no. 1, article 17, march 2023, identifier: imeko-acta-12 (2023)-01-17 section editor: daniel hutzschenreuter, ptb, germany received november 18, 2022; in final form march 19, 2023; published march 2023 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: alexander oppermann, e-mail: alexander.oppermann@ptb.de 1. introduction digital transformation in public administration is taking place at a different pace and often falls short of the expectations and requirements associated, for example, with the online access act (ozg) in germany. while the current pandemic situation has accelerated digital transformation measures for mobile work in general and in public administration, it has also highlighted the existing weaknesses. there is an obvious need to support the design and use of digitally supported work methods and appropriately designed process flows. to this end, existing obstacles must be systematically identified and, ideally, eliminated or dissolved by digital technologies and methods. among other things, these relate to the legal security of digitally transformed processes, billing procedures and models, as well as associated financial and organizational structures, personnel capacities, competencies, and qualifications.
in addition, there is now an opportunity to rethink and redesign process workflows to fully exploit the potential of digital transformation; simply transferring existing workflows into the digital domain is not enough. at the same time, digital quality infrastructures must be established internally and externally, to link previous data silos and to lead to new synergy and efficiency gains (bmwi qualitätsinfrastruktur digital [1]). in addition, suitable concepts are required for designing the transition from analogue to digital processes. currently, e-services (a web portal for digital services) and e-file (a document management system for electronic records) are being introduced at ptb, but there is a major obstacle in connecting and digitally transforming the working groups, laboratories, and their workflows. often, the hurdles for the individual work groups are very high and the upcoming change management processes are overwhelming. this gap is to be closed by the operation layer (op-layer), which uses uniform interfaces to connect to the already existing infrastructures at ptb and enables simple data transfer from e-services and e-file. through uniform interfaces using rest (representational state transfer), the connected internal systems can maintain their previous workflow, while at the same time harmonizing the data. the automatic data transfer drastically increases the confidentiality, integrity and availability of the data and greatly reduces the susceptibility to errors. the op-layer thus makes an enormous contribution to the automation of workflows and can reduce the workload of employees. automation frees up staff time for research, laborious calibrations, and related tasks. abstract: introducing new information systems in organizations often results in information sinks that disrupt people's productivity and prevent a successful change management process. in this paper, the operation layer is presented, a cloud native concept to break up data silos, to streamline workflows and to centralize it services while maintaining the departments' workflows. the feasibility is demonstrated by a complete digital calibration certificate workflow implementation as a service within the operation layer. pursuing this concept consistently will simplify it maintenance while flattening the change management curve at the same time. the rest of the paper is structured as follows: section 2 describes the concept and infrastructure of the op-layer; section 3 outlines the use cases, their implications, as well as interoperability scenarios resulting from them; a summary and outlook are given in section 4. 2. concept & infrastructure in the last years, a distributed microservice architecture has been built and designed explicitly in the domain of legal metrology with a focus on interconnecting external stakeholders [2], [3], [4], [5]. while the generic it architectural design approach has proven to be successful, a similar approach will be pursued to digitally transform the internal process flow. this time, the focus of the it architecture will be extended to be cross-domain, linking internal stakeholders. further targets are the stakeholders' isolated workflows as well as internal and external it infrastructure. the foundation of the it infrastructure will be more sophisticated in terms of scalability, availability, portability, and security (see section 2.1).
furthermore, the infrastructure as code (iac) paradigm will be put at the heart of the op-layer. figure 1 gives an overview of the digitally transformed workflow [6], with the e-service web portal as a contact point for a range of services. once an order has been placed, an electronic file is created in the e-file service, which stores all information centrally. the necessary administrative data is maintained independently by the customer, while the required metrological data is stored as a machine-readable document [7]. this is where the op-layer comes into play and seamlessly links the laboratories' workflows and internal departments' use cases (see section 3) with the e-file service via an api. moreover, the op-layer will feature a microservice architecture and host specially tailored applications and services that will harmonize the service infrastructure within ptb. while centralizing the service infrastructure, the individual and adapted workflows of each department will be protected. this will reduce the maintenance costs for it services and increase the efficiency of digitally supported services. in the preparation phase of the op-layer project it became obvious that companies and research institutes face the same challenges while digitally transforming their processes and infrastructures. the following list describes common challenges of organizational transformation that have to be addressed and reflected in the architectural concept of the op-layer:
• isolated (research / it) infrastructures within the organization
• no centralized or harmonized workflow
• no streamlined process chain across departments
• interfaces are programmed several times within one organization
• no it management at executive level
• it management without qualified technical experience
• no it security awareness and organizational structure
• area of conflict between centralization and decentralization within the organization
• isolated responsibilities and knowledge in each organizational unit
• lack of transparency of process flows and chains
2.1. distributed cloud native infrastructure a distributed it architecture (see figure 2) that is able to support a microservice approach has to meet requirements such as high availability, (auto-)scaling, portability and security. the kubernetes (https://kubernetes.io/) design principles fulfil most of the mentioned requirements for a modern distributed infrastructure approach. the following paragraphs give a short overview of the requirements: scalability: avoiding technical bottlenecks in a distributed infrastructure is vital, because they can quickly become points of failure. being able to scale applications horizontally, that is, starting the same application several times, redistributes the request load onto several "shoulders" to cope with peaks in demand (see the sketch after figure 2). kubernetes provides load balancing and horizontal scaling capabilities out of the box. figure 1. overview of the workflow with the op-layer at its heart. the op-layer provides a scalable cloud native infrastructure for specially tailored applications. figure 2. overview of a generic kubernetes architecture with a control plane on the left side, which consists of the api module, cloud-controller module, controller-manager and the scheduler. the control plane provides the global operations for the kubernetes architecture, like scheduling, scaling and steering pods. on the right side are three worker nodes, each consisting of a kubelet and a proxy module; these provide functionality and network capabilities to the pods that hold the containerized applications. a worker node can handle several pods at a time, and a pod can host several containers.
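as an illustration of the horizontal-scaling capability just described, the following is a minimal sketch using the official kubernetes python client; the deployment name, namespace and replica count are placeholder values, not part of the op-layer codebase.

```python
from kubernetes import client, config

def scale_service(deployment: str, namespace: str, replicas: int) -> None:
    """horizontally scale a stateless service by patching the replica count
    of its deployment. kubernetes then schedules (or removes) pods, and its
    built-in load balancing spreads the request load across them."""
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=deployment,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

# e.g. cope with a peak in demand on a hypothetical dcc service:
# scale_service("dcc-service", "op-layer", replicas=5)
```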
high availability: backups and replicas address the high availability requirement. while backups focus on data restoration at a specific point in time, replicas support business continuity; often, replicas ensure access to critical data and application infrastructure from a secondary location. kubernetes addresses high availability at both the application and the infrastructure level. portability: software development and deployment have changed drastically. with containers and pods, applications can run on any platform and cluster. kubernetes supports many container runtimes that enable application portability and easy deployment for an agile development approach. this concept facilitates workloads across private and public cloud environments and supports availability fault-tolerance zones within the infrastructure design. security: security has to be addressed on multiple layers, such as cluster, application and network. api endpoints are secured via tls (transport layer security). furthermore, strict role and rights management (see section 2.2.2) has to be enforced for each user, so that only authorized operations can be executed on the cluster. finally, network communication flows for pods have to be defined, so that only intended communication behaviour is possible. in a highly distributed architecture, a session-less authorization and authentication approach using tokens, which are placed in each http rest request, has proven itself to be state of the art. this way, each operation can be traced back to the user (see section 2.2.2) initiating any defined process in the use cases. 2.2. distributed software architecture the envisioned distributed microservice software architecture borrows heavily from the concepts designed and developed in [2], [3], [4], [5] and adds the following requirements:
• no-configuration debugging: the same image runs in test and production.
• containerization: all the code runs in a container plus shared resources.
• blue/green deployment: sending traffic only after the new server exists, which means no downtime and simplified rollbacks.
• application composition: individual pieces of functionality can be composed in separate images, while being technology agnostic.
• load balancing: scaling stateless containers is a strict requirement for architecture performance.
• service registry and discovery: microservices have to register automatically in a registry service to enable auto-discovery of dependent microservices.
• once-only principle: harmonizing processes and software development by parameterizable services that fit all internal stakeholders.
2.2.1. functional and security requirements the op-layer can provide a variety of exchange formats, such as json, xml and csv. for internal communication, json is used to exchange data from one service to another. the interfaces will be implemented as harmonized rest interfaces. the data will be signed, validated, and archived. while the data is at rest or in transport, it will be encrypted and secured against unauthorized access.
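to make the session-less, token-per-request scheme concrete before the iam requirements below, here is a minimal python sketch using pyjwt; the issuer url, audience, claim layout and role names are placeholders and not the actual op-layer configuration.

```python
import jwt                      # pyjwt
from jwt import PyJWKClient

# hypothetical jwks endpoint of the op-layer identity provider
JWKS_URL = "https://iam.example.org/realms/op-layer/protocol/openid-connect/certs"
jwks_client = PyJWKClient(JWKS_URL)

def authorize(request_headers: dict, required_role: str) -> dict:
    """validate the bearer token carried by a single rest request.
    no server-side session is kept: every request must prove itself,
    and the 'sub' claim traces the operation back to the initiating user."""
    token = request_headers["Authorization"].removeprefix("Bearer ")
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    claims = jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience="op-layer",                       # placeholder audience
        options={"require": ["exp", "sub"]},       # enforce expiry and subject
    )
    roles = claims.get("realm_access", {}).get("roles", [])
    if required_role not in roles:
        raise PermissionError(f"missing role: {required_role}")
    return claims
```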
the op-layer supports an openid connect access management solution with single sign-on capabilities. a session-less and tokenized access management solution is intended, to increase security and avoid session handling in a highly distributed environment. further, the cia triad attributes (confidentiality, integrity, and availability) are deeply incorporated in the basic infrastructure design. 2.2.2. identity and access management the following requirements for an identity and access management (iam) solution are described in [8] and are adapted to meet the criteria for the proposed distributed microservice software architecture:
• separation of concerns: users and roles must be isolated within the platform but differentiated and centrally managed across the entire process.
• single sign-on: a user should log on only once and should have access to the functions assigned to him by means of roles in the frontend and in the backend services.
• identity federation: it should be possible to authorize additional identity servers of third-party organizations (federated id), so that their users can log on directly to the proposed platform with their organizational id and use its services.
• auditability and traceability: due to the highly decoupled and decentralized approach of the architecture, user-based sessions are not used. a token-based approach is implemented for login. the token contains further information about the user, such as roles and the validity period of the token, in encrypted form. this means that every request to a backend service and every processing step in the respective processes can be traced back to the initiating user.
• harmonized login procedure: in addition to users, measuring devices and external services should also be authenticated and authorized by the iam system on the platform in the future.
• harmonized authentication protocols: by using a standard-compliant openid connect iam solution which also supports saml2, developers can focus on the business requirements and their implementation, as there is no longer any need to develop iam mechanisms. this also simplifies the maintainability of the platform and makes it comparatively easy to exchange the iam solution used.
3. use cases each use case is implemented as a separate containerized back-end service. all of them provide rest api endpoints for communication, such as triggering actions or providing information. the following subsections describe the use cases of the proposed op-layer platform. 3.1. e-file connector the e-file connector service has no graphical user interface. instead, it bridges the gap to the laboratory workflow by providing automated data access for importing data. as a first step, only administrative data will be converted and exported; later this restriction will be lifted, and all necessary data will be available to support the automation of workflows. the use case consists of five steps: finding the electronic file; filtering the administrative data; triggering the export in a preselected exchange format, such as json, xml or csv, after which the laboratory can start its workflow and create, for example, a digital calibration certificate with the exported data; uploading and validating this xml certificate; and, if the validation is successful, putting it into the electronic file in the e-file service. a minimal sketch of these steps is given below.
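the following python sketch walks through the five connector steps just listed; every endpoint path, field name and helper shown here is illustrative, not the actual op-layer interface, and the laboratory-side dcc creation is represented by a stub.

```python
import requests

BASE = "https://op-layer.example.org/api"   # hypothetical gateway url

def create_dcc(admin_export: bytes) -> bytes:
    """stand-in for the laboratory-side dcc creation step (see section 3.3)."""
    raise NotImplementedError

def run_connector(procedure_id: str, token: str, fmt: str = "json") -> None:
    """sketch of the five e-file connector steps."""
    hdrs = {"Authorization": f"Bearer {token}"}

    # step 1: find the electronic file of the procedure
    file_resp = requests.get(f"{BASE}/efile/{procedure_id}", headers=hdrs)
    file_resp.raise_for_status()

    # step 2: filter the administrative data from the file
    admin = {k: v for k, v in file_resp.json().items() if k.startswith("admin")}

    # step 3: trigger the export in a preselected exchange format
    export = requests.post(f"{BASE}/export", params={"format": fmt},
                           json=admin, headers=hdrs)

    # step 4: the laboratory creates the dcc xml and uploads it for validation
    dcc_xml = create_dcc(export.content)
    validation = requests.post(f"{BASE}/dcc/validate", data=dcc_xml,
                               headers=hdrs)

    # step 5: only a successfully validated certificate is attached to the file
    if validation.ok:
        requests.put(f"{BASE}/efile/{procedure_id}/records",
                     data=dcc_xml, headers=hdrs)
```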
3.2. digital calibration certificate process reference implementations of services with a high impact factor are provided to facilitate the adoption of the op-layer within ptb. a prominent example is the digital calibration certificate (dcc [9]) process, which has been entirely digitally transformed and implemented [6] as a service within the op-layer. the digital calibration process starts with the e-service portal, a customer portal which offers a calibration certificate application. after applying for a calibration certificate for a measuring instrument, the process continues with an automatically created file in the e-file system. all data is automatically transferred and archived in a file. the responsible department checks the validity of the application. at this point in the process, the op-layer offers a convenient way to automatically transfer all necessary data to the calibration laboratory via unified rest interfaces. moreover, the aforementioned service is built to automatically import administrative data from the e-file system, in order to create a proper dcc. the resulting dcc can be automatically uploaded via the op-layer into an existing file within the e-file system. from there, the file handler submits the dcc to the e-service portal. the entire use case is depicted in figure 3. in step 1, the customer applies for a digital calibration certificate for a supported measuring instrument within the e-service portal. a notification is sent to the responsible organizational unit. in step 2, the responsible case worker creates a new procedure within e-file and transfers the administrative data from the application into the new file. the op-layer pulls incoming e-service procedures from e-file at the beginning of step 3. the lab technician sees the new application within the op-layer web frontend and initiates the dcc creation. the op-layer transfers the administrative data from e-file automatically into the dcc service. the lab technician can provide the measurement results via the web frontend. figure 3 is a simplification of the digitally transformed process: e-file is actually two instances, the e-file system itself and an op-layer-specific e-file connector, which serves as the communication interface for the other op-layer components. when all required data is provided in the dcc input form (see figure 4) in the op-layer, the lab technician can review the data and submit it to the dcc service to create a valid dcc as xml. the xml file is attached as a record to the calibration document in the procedure within e-file. the case worker provides the dcc xml record to the e-service application, to make it available to the customer in step 4. the customer can view and download the given dcc xml file from the e-service portal in step 5. figure 3. sequence diagram of the digital calibration certificate process. the numbers represent the necessary steps until the customer receives the dcc. 3.3. digital calibration certificate service the digital calibration certificate service (dcc service) is a centralized service that harmonizes the creation of a dcc xml certificate. depending on the laboratory, the object of the calibration and its measurement might differ; however, the frame and shaft of the document are the same, so that a parametrized service can fulfil the requirements of each laboratory. this service harmonizes and unifies the dcc creation and simplifies the workflow for laboratories tremendously.
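the following sketch shows how such a parametrized frame might be assembled; the top-level element names loosely follow the dcc schema described in [9], but the structure is deliberately simplified for illustration and is not schema-valid.

```python
import xml.etree.ElementTree as ET

DCC_NS = "https://ptb.de/dcc"   # namespace of the dcc schema [9]
ET.register_namespace("dcc", DCC_NS)

def dcc_skeleton(admin: dict, results: list) -> bytes:
    """build the common frame of a dcc: administrative data imported from
    e-file plus the laboratory's measurement results. the element layout
    shown here is a coarse simplification of the real schema."""
    q = lambda tag: f"{{{DCC_NS}}}{tag}"
    root = ET.Element(q("digitalCalibrationCertificate"))
    adm = ET.SubElement(root, q("administrativeData"))
    for key, value in admin.items():
        ET.SubElement(adm, q(key)).text = str(value)
    res = ET.SubElement(root, q("measurementResults"))
    for r in results:
        m = ET.SubElement(res, q("measurementResult"))
        ET.SubElement(m, q("name")).text = r["name"]
        ET.SubElement(m, q("value")).text = str(r["value"])
    return ET.tostring(root, xml_declaration=True, encoding="utf-8")

# e.g. dcc_skeleton({"customer": "lab x"}, [{"name": "mass", "value": 1.0}])
```

the administrative part would be filled from the e-file export, while the measurement part depends on the object of calibration selected by the laboratory.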
a first simplified use case will consist of three steps: first, the object of calibration is selected; then the administrative data is imported and, depending on the selected object of calibration, the specific measurement and device management part is imported from the laboratory workflow; lastly, the creation of the digital calibration certificate is executed, and an xml file is made available for further processing. generated xml dccs can then, depending on the requesting calibration laboratory, be automatically transferred into the given procedure within the e-file system via the e-file connector (see section 3.1). on the other hand, customer data from the e-service request, which is managed by e-file, can be conveniently imported into a dcc structure, also via the e-file connector. this way, the need for providing data manually for a dcc is further reduced. 3.4. doc service the digital declaration of conformity service (doc service) works similarly to the dcc service. however, the resulting xml is not a calibration certificate but a declaration of conformity that lists the norms and directives that the measurement instrument has to fulfil. the use case consists of three steps: the selection of the measuring instrument, the import of administrative data for the declaration, and the import of the instrument-specific norm and directive part that has to be included in the declaration document. finally, the creation of the declaration of conformity takes place and the resulting xml file can be downloaded. the doc service and the dcc service can be operated simultaneously, yet independently, as both accommodate similar, yet different use cases. 4. conclusions in this paper, the concept of the operation layer has been presented and demonstrated by the example of the dcc workflow, which links already existing infrastructures at ptb via uniform interfaces to enable data transfers from previously isolated data silos. through uniform interfaces (rest interfaces), the connected internal systems can maintain their previous workflow, while harmonizing data at the same time. the automatic data transfer drastically increases the productivity, integrity and availability of data and greatly reduces the susceptibility to errors. the op-layer makes an enormous contribution to the automation of workflows and can reduce the workload of employees. in times of decreasing numbers of skilled workers, increasing automation can lead to more efficiency and more time for research, laboratory activities and related tasks. specially tailored applications can be hosted via the op-layer's internal service hub structure. preferably, working groups design their applications with a generic entitlement to serve a larger audience. the strict design framework reduces the administrative overhead for the research staff, reclaiming more time for original research. in addition, the strict framework increases security and quality for specially tailored applications. figure 4. screenshot of the dcc service form for manually entering measurement results. the op-layer development is based on three main pillars: 1. container environment: a self-hosted, sophisticated container environment with prepared ci/cd pipelines, secured networking and resource management; kubernetes is the current state-of-the-art solution. 2.
identity and access management: a central solution for handling authentication (with organization-wide ptb id) and access control for services in the op-layer – the de-facto standard solution keycloak5 is employed. 3. service guidelines: providing and enforcing principles of modern software design for distributed services. furthermore, it is encouraged to publish the code as open source. a service should be as lightweight as possible, provide rest interfaces and come with basic documentation. a huge potential for automation and optimization is shown by the initially built digital calibration certificate process. especially for future use cases, similar efficiency gains and simplifications for digital transformed processes can be anticipated. however, digitally transforming processes also pose challenges of harmonising work procedures and finding standards as well as minimal shared requirements within a federal organized body such as ptb. on-boarding working groups to the op-layer service hub infrastructure will be the next phase. provided these early adopters have sufficient it skills to migrate their special tailored applications to the op-layer, the phase will be completed. this is a great opportunity to enhance the service guidelines and to connect ptb’s development community. contributing tremendously to it security by harmonizing development and deployment procedures, will serve the whole organization. references [1] f. thiel, j. nordholz, quality infrastructure ’digital’ (qi-digital). federal ministry for economic affairs and energy, 2020. online [accessed 28 march 2023] https://www.bmwk.de/redaktion/en/artikel/digitalworld/gaia-x-use-cases/quality-infrastructure-digital-qidigital.html [2] a. oppermann, s. eickelberg, j. exner, digital transformation in legal metrology: an approach to a distributed architecture for consolidating metrological services and data. in information technology for management: towards business excellence: 15th conf. ism 2020, and fedcsis-ist 2020 track, held as part of fedcsis, sofia, bulgaria, 6–9 september 2020, extended and revised selected papers 15, springer international publishing, 2021, pages 146–164. doi: 10.1007/978-3-030-71846-6_8 [3] a. oppermann, s. eickelberg, j. exner, toward digital transformation of processes in legal metrology for weighing instruments, proc. of the 2020 federated conference on computer science and information systems, m. ganzha, l. maciaszek, m. paprzycki (eds). acsis, vol. 21, pp. 559–562. doi: 10.15439/2020f77 [4] a. oppermann, f. grasso toro, f. thiel, j.-p. seifert, secure cloud computing: reference architecture for measuring instrument under legal control, security and privacy, 1(3), 2018: e18. doi: 10.1002/spy2.18 [5] s. eickelberg, th. bock, m. bernien, a. oppermann. integrating a calibration laboratory workflow into a metrological digital ecosystem: a case study. proc. of the imeko tc6 int. conf. on metrology and digital transformation, berlin, germany, 19 21 september 2022, 4 pp. doi: 10.21014/tc6-2022.016 [6] a. keidel, s. eichstädt, interoperable processes and infrastructure for the digital transformation of the quality infrastructure, 2021 ieee int. workshop on metrology for industry 4.0 & iot (metroind4. 0&iot), rome, italy, 7 9 june 2021, pp. 347–351. doi: 10.1109/metroind4.0iot51437.2021.9488563 [7] s. eichstädt, a. keidel, j. tesch. metrology for the digital age. measurement: sensors, 18, 2021, 100232. doi: 10.1016/j.measen.2021.100232 [8] a. oppermann, s. 
eickelberg, digitale transformation in der metrologie: harmonisierung digitaler identitäten mittels openid. werkwandel, zeitschrift für angewandte arbeitswissenschaft, ausgabe #1, february 2022, pp. 26–30. online [accessed 28 march 2023] [in german] https://magazin.werkwandel.de/ [9] s. hackel, f. härtig, j. hornig, th. wiedenhöfer, the digital calibration certificate, ptb-mitteilungen 127 (2017), no. 4, pp. 75-81. doi: 10.7795/310.20170403 5 https://www.keycloak.org editorial to selected papers from the 2019 imeko tc4 international conference on metrology for archaeology and cultural heritage acta imeko issn: 2221-870x march 2021, volume 10, number 1, 1 4 editorial to selected papers from the 2019 imeko tc4 international conference on metrology for archaeology and cultural heritage eulalia balestrieri1, carlo carobbi2, ioan tudosa1 1 department of engineering, university of sannio, benevento, italy 2 department of information engineering, university of florence, firenze, italy section: editorial citation: eulalia balestrieri, carlo carobbi, ioan tudosa, editorial to selected papers from the 2019 imeko tc4 international conference on metrology for archaeology and cultural heritage, acta imeko, vol. 10, no. 1, article 1, march 2021, identifier: imeko-acta-10 (2021)-01-01 editor: francesco lamonaca, university of calabria, italy received march 19, 2021; in final form march 19, 2021; published march 2021 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding authors: eulalia balestrieri, e-mail: balestrieri@unisannio.it, carlo carobbi, e-mail: carlo.carobbi@unifi.it, ioan tudosa, e-mail: itudosa@unisannio.it dear reader, this issue of acta imeko is dedicated to papers that have been selected from those presented at "2019 imeko tc4 international conference on metrology for archaeology and cultural heritage", metroarchaeo, held in florence in december 2019. measurements are essential to access knowledge in every field of investigation, from industry to quality of life, science and innovation. as a consequence, metrology plays a crucial role for archaeology and cultural heritage, addressing issues related to the collection, interpretation and validation of the data, to the different physical, chemical, mechanical or electronic methodologies used to collect them and to the associated instruments.
metroarchaeo brings together researchers and operators in the enhancement, characterization and preservation of archaeological and cultural heritage, with the main objective of discussing the production, interpretation and reliability of measurements and data. the conference is conceived to foster exchanges of ideas and information, create collaborative networks and update innovations on "measurements" suitable for cultural heritage for archaeologists, conservators and scientists. thirty-three selected papers from metroarchaeo 2019 are presented in three sections (part 1, part 2 and part 3). part 1 – section editor: eulalia balestrieri the first section, edited by eulalia balestrieri, includes the following ten scientific contributions. the first contribution, by valentino sangiorgio et al, "historical masonry churches diagnosis supported by an analytic-hierarchy-process-based decision support system", presents a new procedure based on analytic hierarchy processes (ahp) aimed at carrying out rapid on-site measurements and diagnostics of masonry churches through a series of condition assessment indexes. the proposed procedure has been successfully validated by a comparison with a standard diagnostic workflow. in the second paper, by sveva longo et al, "clinical computed tomography and surface-enhanced raman scattering characterisation of ancient pigments", a systematic and complete chemical-physical characterisation of painted pigments has been carried out using multi-slice x-ray computed tomography (msct) and surface-enhanced raman scattering (sers) techniques. thanks to the proposed approach, the identification and characterization of both the inorganic and organic materials present on the wooden tablets have been carried out. the third contribution, by simone tiberti and gabriele milani, "creating a finite element mesh of non-periodic masonry from the measurement of its geometrical characteristics: a novel automated procedure", illustrates an automated procedure for the generation of a finite element (fe) mesh directly from the rasterized sketch of a generic masonry element, which is particularly suitable for the complex and irregular (non-periodic) masonry bonds that can be observed in heritage buildings or found in archaeological sites. two procedures are set up for the creation of 2d and 3d fe meshes. in the fourth paper, by laura guidorzi et al, "age determination and authentication of ceramics: advancements in the thermoluminescence dating laboratory in torino (italy)", the thermoluminescence (tl) laboratory developed at the physics department of the university of torino is presented. the laboratory was set up in collaboration with tecnart s.r.l. and is currently also operating within the infn (national institute of nuclear physics) chnet network. some examples of dating and authentication results obtained at the laboratory on archaeological sites and artworks are also discussed and compared, when possible, with radiocarbon dating.
the fifth contribution, by marialuisa mongelli et al, “comparison and integration of techniques for the study and valorisation of the corsini throne in corsini gallery in roma”, presents the investigation of an integrated approach involving non-invasive technologies, photogrammetry and structured light, during the development of the "weact3" project (acting together technology for art, culture, tourism and territory) jointly signed by civita association and the national barberini and corsini galleries. such technologies have been used to build the 3d model of the corsini throne, preserved at the corsini gallery in rome. the sixth work, by carmelo scuro et al, “study of an ancient earthquake-proof construction technique monitoring via an innovative structural health monitoring system”, illustrates an innovative method to monitor and obtain in real time the mechanical properties of an anti-seismic construction widespread in southern calabria and patented by pasquale frezza, also minimizing measurement uncertainty. the considered type of anti-seismic construction consists of masonry walls built with bricks and fictile tubules, arranged in staggered and alternating manner, all contained in a timber wooden frame. the seventh contribution, by renato s. olivito et al, “inventory and monitoring of historical cultural heritage buildings on a territorial scale: a preliminary study of structural health monitoring based on the cartis approach”, presents a preliminary study aimed at defining an integrated methodology for inventory, monitoring, transmission, and data management of heritage buildings to provide important information about their structural integrity. the eighth paper, by maria federica caso et al, “improvement of enea laser-induced fluorescence prototypes: an intercalibration between a hyperspectral and a multispectral scanning system”, presents the hyperspectral and multispectral laser induced fluorescence (lif) scanning systems developed at enea diagnostic and metrology laboratory and in particular their intercalibration, along with the data analysis of calibration samples and a software to automatically correct imaging data. in the ninth contribution, by alejandro roda-buch et al, “fault detection and diagnosis of historical vehicle engines using acoustic emission techniques”, the results of the first phase of the acume_hv (acoustic emission monitoring of historical vehicles) project focused on the development of a protocol for the use of acoustic emission (ae) during cold tests are presented. the project represents the first use of ae as non-invasive technique for the diagnostic of historical vehicles to carry out an objective, human-independent method. the tenth and last contribution of this part 1, by sandro parrinello and raffaella de marco, “digital surveying and 3d modelling structural shape pipelines for instability monitoring in historical buildings: a strategy of versatile mesh models for ruined and endangered heritage”, illustrates the application of a fast and reliable structural documentation pipeline to historical built heritage, as in the case study of the church of the annunciation in pokcha (russia), by reviewing the declination of integrated products of 3d survey into reality-based models. part 2 – section editor: carlo carobbi the second section, edited by carlo carobbi, includes the following ten scientific contributions. 
the first contribution, by valeria croce et al, "from survey to semantic representation for cultural heritage: the 3d modelling of recurring architectural elements", illustrates an approach to be followed in the transition from 3d survey information, derived from laser scanner and photogrammetric techniques, to the creation of semantically enriched 3d models. the proposed approach is based on the recognition (segmentation and classification) of elements on the original raw point cloud, and on the manual mapping of nurbs elements onto it. in the second work, by francesco boschin et al, "geometric morphometrics reveal relationship between cut-mark morphology and cutting tools", two groups of slicing cut-mark cross-sections were experimentally produced. the resulting sets of striae show different depths and different cross-sectional shapes. it turns out that the difference in shape between the two groups of striations is probably a function of the way in which the tool penetrated the bone. the third contribution, by gabriella caroti et al, "the use of image and laser scanner survey archives for cultural heritage 3d modelling and change analysis", elaborates on the methodology used for the integration of a metric 3d model with information present in archive surveys of lost architectural volumes. the presented methodology frames historical plans, representing the survey object, in the reference system of uav surveys for an open-source gis environment. in the fourth work, by yufan ding et al, "provenance study of the limestone used in the construction and restoration of the batalha monastery (portugal)", stone samples were investigated through different methods (energy-dispersive x-ray fluorescence spectroscopy, powder x-ray diffractometry and thermogravimetric analysis), obtaining an indication of the source of samples collected from different parts of the monastery. the fifth contribution, by leila es sebar et al, "raman investigation of corrosion products on roman copper-based artefacts", illustrates a case study related to the characterization of corrosion products present on recently excavated artefacts. results coming from the raman spectroscopy investigation can help in assessing the conservation state of the artefacts and defining the correct restoration strategy. in the sixth work, by elisabetta di francia et al, "characterisation of corrosion products on copper-based artefacts: potential of ma-xrf measurements", a novel portable ma-xrf scanner prototype has been tested on artificially corroded copper samples to assess its analytical capabilities on corroded metals, yielding information on the spatial distribution of the corrosion products grown on the metal's surface. in the seventh contribution, by giuseppe schirripa spagnolo et al, "fringe-projection profilometry for recovering 2.5d shape of ancient coins", a surface profile measurement system for small objects of cultural heritage, where it is important not only to detect the shape with good accuracy but also to capture and archive the signs due to ageing, is illustrated. the potential of the proposed scheme for recovering the 2.5d shape of cultural heritage is demonstrated. in the eighth contribution, by anna maria gueli et al, "modelling and simulations for signal loss evaluation during sampling phase for thermoluminescence authenticity tests", the percentage of intensity signal loss in thermoluminescence emission, due to the local temperature increase caused by drilling, is investigated.
the optimal parameters that should be used during the sampling phase are identified. in the ninth work, by zacharias vangelatos et al., "finite element analysis of the parthenon marble block–steel clamp system response under acceleration", finite element analysis is employed to provide a tool for the assessment of the conservation potential of the marble blocks, in parts of the monument that require specific attention. simulation results highlight the importance of intrinsic stresses, the existence of which may lead to fracture of the marble blocks under otherwise harmless loading conditions. in the tenth and last contribution of this part 2, by andrea zanobini, "metrological characterisation of a textile temperature sensor in archaeology", the study of a new-generation textile temperature sensor, in two different heated ovens, to evaluate temperature and both temperature and humidity, is presented. the results show many metrological characteristics, proving that the sensor is a resistance temperature detector.

part 3 – section editor: ioan tudosa
the third section, edited by ioan tudosa, includes the following thirteen scientific contributions. the first contribution, by sebastiano d'amico et al., "a combined 3d surveying, xrf and raman in-situ investigation of the conversion of st paul painting (mdina, malta) by mattia preti", presents the results of three different approaches applied to the newly restored titular painting the conversion of st paul, the main altarpiece in the cathedral of mdina in malta. the study was aimed at showing the potential of the combined use of 2d/3d photogrammetric surveys and spectroscopy (xrf and raman techniques) in order to, on one side, obtain a reconstruction of the painting and, on the other side, achieve analyses at different spatial domains. in the second work, by luisa caneve et al., "non-invasive diagnostic investigation at the bishop's palace of frascati: an integrated approach", a novel methodology aimed at detecting and locating materials due to previous restoration actions and at remotely monitoring the evolution of degradation processes is proposed. the possibility of preventive monitoring through the application of the presented approach, to reduce possible induced damage, has been highlighted. in the third contribution, by sofia ceccarelli et al., "thermographic and reflectographic imaging investigations on baroque paintings preserved at the chigi palace in ariccia", two different mid-infrared imaging techniques, operating in the 3-5 µm spectral range, are applied to the study of three paintings on canvas, dating back to the xvii century, preserved at the chigi palace in ariccia (italy). the presented results allow the evaluation of the conservation status of the support and the detection of graphical and pictorial features hidden beneath the surface layer. in the fourth work, by daniele moro et al., "mineral diagnostics: sem-eds monte carlo strategy for optimised measurements of ultrathin fragments in cultural heritage studies", a detailed study of the effects related to the micro- and nanometric sizes of glass and gold alloy fragments on sem-eds microanalysis is presented. monte carlo simulations of different kinds of elongated glass fragments with square section, from 0.1 to 10 µm thick, and of some gold alloys showed a strong influence of the fragment sizes and operational conditions (beam energy, detector position, etc.). this work can be used to devise an appropriate and optimized measurement strategy.
the fifth contribution, by luisa spairani, "measure by measure, they touched heaven", illustrates a case study in photography related to leavitt's law. in the sixth work, by giacomo fiocco et al., "chemometrics tools for investigating complex synchrotron radiation ftir micro-spectra: focus on historical bowed musical instruments", a method is presented in which synchrotron radiation (sr) micro-ftir spectroscopy in reflection geometry and chemometrics were combined to investigate six cross-sectioned micro-samples detached from four bowed string instruments. in the seventh contribution, by m. faifer et al., "laboratory measurement system for pre-corroded sensors devoted to metallic artwork monitoring", a measurement system for the development and testing of sensors for atmospheric corrosivity monitoring is presented. the developed system makes it possible to monitor metal corrosion greater than 3 nm in the temperature range from 23 °c to 39 °c. the performed analysis allows one to state that the system is an efficient laboratory setup for the development and characterization of sensors for metal corrosion monitoring. the eighth contribution, by maria legut-pintal et al., "methodological issues of metrological analysis of planned medieval towns and villages", proposes the use of the cosine quantogram, which has rarely been applied to the study of urban layouts, for the identification of units of measurement in medieval regular towns. in the ninth work, by roberta spallone et al., "digital strategies for the valorisation of archival heritage", a study is presented that aims at creating a sort of digital model-museum, in which to insert all the historical information useful to tell the story of the evolution of the artefact over time, allowing users, through personal devices, to live interactive and immersive experiences by means of virtual and augmented reality. in the tenth paper, by tilde de caro et al., "application of µ-raman spectroscopy to the study of the corrosion products of archaeological coins", a case study of the corrosion products formed on archaeological bronze artefacts excavated in tharros (sardinia, italy) is presented. the experimental findings make it possible to acquire, through micro-raman spectroscopy, a better knowledge of the environmental factors that may cause the degradation of archaeological bronzes in soil. the eleventh contribution, by leila es sebar et al., "in-situ multi-analytical study of ongoing corrosion processes on bronze artworks exposed outdoors", presents a long-term in-situ monitoring campaign of contemporary bronze statuary exposed outdoors. the authors demonstrate the importance of portable instruments offering the possibility to perform in-situ measurements, thus avoiding any sampling and assessing the degradation of the material directly in contact with the environment to which the artwork is constantly exposed. the twelfth contribution, by máté sepsi et al., "non-destructive pole-figure measurements on workshop-made silver reference models of archaic objects", reports on the non-destructive pole figure method as a sufficient way to distinguish between metal objects formed in different ways. the specific forming modes result in specific pole figures and, therefore, by producing and examining a sufficient number of reference materials, the mode of production of archaic objects can also be reconstructed. the authors state that the pole figures obtained by the robot diffractometer are completely identical to those of the previously validated g3r diffractometer.
the thirteenth and last contribution of this part 3, by maria grazia d'urso, "a combination of terrestrial laser-scanning point clouds and the thrust network analysis approach for structural modelling of masonry vaults", presents how geometric and geo-referenced 3d models are obtained by processing laser-scanning measurements. a model built on a coherent geometric basis, which contemplates the methodological complexities of the detected objects, is reported. a semi-automated method that allows one to switch from a point cloud to an advanced three-dimensional model, able to contain all the geometrical and mechanical characteristics of the built object, is proposed in the paper. it was a great honour for us to act as guest editors of this issue of acta imeko, a high-profile scientific journal devoted to the enhancement of the academic activities of imeko and to a wider dissemination of the scientific output of imeko tc events. we would like to sincerely thank all the authors for their valuable contributions, and we hope the readers will be inspired by the themes and proposals that have been selected and included in this special section related to innovations in metrology for archaeological and cultural heritage. eulalia balestrieri, carlo carobbi, ioan tudosa – guest editors

experimental investigation in controlled conditions of the impact of dynamic spectrum sharing on maximum-power extrapolation techniques for the assessment of human exposure to electromagnetic fields generated by 5g gnodeb
acta imeko, issn: 2221-870x, september 2022, volume 11, number 3, 1-7.
sara adda1, tommaso aureli2, tiziana cassano3, daniele franci2, marco d. migliore4, nicola pasquino5, settimio pavoncello2, fulvio schettino4, maddalena schirone3
1 arpa piemonte, dipartimento rischi fisici e tecnologici, via jervis 30, 10015 ivrea (to), italy; 2 arpa lazio, 00172 rome, italy; 3 arpa puglia, uos agenti fisici, dap bari, corso trieste 27, 70126 bari, italy; 4 dipartimento di ingegneria elettrica e dell'informazione (diei) "maurizio scarano", university of cassino and southern lazio, cassino, 03043, and cnit cassino, and eledia@unicas, italy; 5 dipartimento di ingegneria elettrica e delle tecnologie dell'informazione (dieti), università degli studi di napoli federico ii, 80125 napoli, italy
section: research paper
keywords: dynamic spectrum sharing; maximum-power extrapolation; dss; 4g; 5g; human exposure; measurements
citation: sara adda, tommaso aureli, tiziana cassano, daniele franci, marco d. migliore, nicola pasquino, settimio pavoncello, fulvio schettino, maddalena schirone, experimental investigation in controlled conditions of the impact of dynamic spectrum sharing on maximum-power extrapolation techniques for the assessment of human exposure to electromagnetic fields generated by 5g gnodeb, acta imeko, vol. 11, no.
3, article 18, september 2022, identifier: imeko-acta-11 (2022)-03-18
section editor: francesco lamonaca, university of calabria, italy
received march 17, 2022; in final form september 15, 2022; published september 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: nicola pasquino, e-mail: nicola.pasquino@unina.it

abstract: maximum-power extrapolation (mpe) techniques adopted for 4g and 5g signals are applied to dynamic spectrum sharing (dss) signals generated by a base station and transferred to the measurement instruments through an air interface adapter, so as to obtain a controlled environment. this made it possible to focus the analysis on the effect of the frame structure on the mpe procedure, excluding the random effects associated with the fading phenomena that affect signals received in real environments. the analysis confirms that both the 4g mpe and the proposed 5g mpe procedure can be used for dss signals, provided that the correct number of subcarriers in the dss frame is considered.

1. introduction
dynamic spectrum sharing (dss) is becoming a key technology in the implementation of 5g networks thanks to several practical advantages. dss provides 5g services on 4g long term evolution (lte) networks, thus making it possible to use the 4g facilities for a quick deployment of 5g networks without the need for new frequency bands, which are not always available to local operators. consequently, it is a simple solution that basically requires only software updates, allowing operators to start nationwide 5g coverage with limited costs. furthermore, the use of 4g frequency bands allows better area coverage and indoor penetration compared to the higher sub-6 ghz bands and to frequency range 2 (fr2) (24.25 ghz to 52.6 ghz) used in both non-standalone (nsa) and standalone (sa) 5g networks. in this paper, an experimental investigation is presented to evaluate whether this technological solution impacts the outcomes of the measurement techniques used to assess compliance with electromagnetic field (emf) exposure limits. maximum-power extrapolation (mpe) measurement has been the object of a consolidated literature, and standards for mpe measurements are available [1]. 5g is a more recent technology, however: although standards are still under development, mpe procedures have been described in a few research papers [2]-[6]. 5g dss is midstream between 4g and 5g technology. in this paper, mpe techniques adopted for 4g signals and 5g signals are applied to dss signals. in particular, the dss signals generated by an actual base station have been connected to the measurement equipment through an air interface adapter (aiad). this makes it possible to focus the analysis on the effect of the frame structure on the mpe procedure, excluding the random effects due to fading phenomena affecting signals received in real environments. the results confirm that both the 4g and the 5g mpe extrapolation techniques allow a correct estimation of the mpe level of dss signals, provided that the correct number of subcarriers in the dss frame is considered.

2. the dynamic spectrum sharing technique
as noted in the introduction, spectrum sharing (ss) allows network operators to implement 5g nr using existing lte frequency bands, with relatively inexpensive software upgrades.
there are two possible solutions for lte-nr coexistence: static spectrum sharing (sss), where the frequency band is statically assigned to either system over time, and dss, where resources are switched between lte and 5g dynamically over time. sss is less attractive than dss, particularly in the current state of 5g implementation, where the penetration of 5g devices is still limited and most traffic is on lte, resulting in an almost fully loaded 4g carrier and an almost empty 5g carrier. this strongly pushes towards the implementation of dss, which allows lte and 5g nr to share spectrum resources dynamically based on the current traffic demand, thus employing the entire available bandwidth more efficiently. accordingly, the solution currently adopted in italy is dss. dss supports 5g numerology with µ = 0 (15 khz subcarrier spacing) and µ = 1 (30 khz spacing). although, as will be briefly discussed below, the implementation of µ = 1 numerology in a 4g frame is less complex than the use of µ = 0, in this section we will focus our attention on the dss technology with the µ = 0 downlink frame structure, since it is the one adopted in the signals used in the presented experimental activity. to avoid imperfect signal synchronization in the case of lte-nr coexistence, it is crucial to avoid overlapping between the nr synchronization signal block (ssb) and the lte reference signal (crs). let us consider the lte frame. the crs is transmitted in 4 (using 1 or 2 antenna ports) or 6 (when using 4 antenna ports) non-contiguous ofdm symbols of each subframe. with respect to nr, the ssb requires four consecutive ofdm symbols. in the case of µ = 1, two nr ofdm symbols occupy the duration of one lte symbol (whose subcarrier spacing is 15 khz). consequently, an ssb only occupies two lte symbols. this makes it possible to insert the ssbs in the lte frame in a simple way, since in the lte frame the distance between two consecutive crs is greater than the duration of two lte ofdm symbols for any number of antenna ports. things become more involved when a 5g signal with µ = 0 numerology must be inserted in an lte frame. in fact, as noted above, there are only 3 contiguous ofdm symbols without any crs transmission, while the ssb requires 4 symbols with 15 khz subcarrier spacing. the solution currently applied involves the use of 4g multimedia broadcast multicast service single frequency network (mbsfn) subframes. mbsfn was introduced in the lte standard as part of release 9 for point-to-multipoint communication, to provide broadcast and multicast services (evolved multimedia broadcast multicast services, embms). the network can configure six out of the ten subframes forming the lte radio frame to become mbsfn subframes. based on the 3gpp standard, these can be subframes #1, #2, #3, #6, #7, and #8 of a radio frame. a standard lte terminal reads the mbsfn configuration from system information block type 2 (sib2) and ignores the subframes configured for broadcast. to allow effective broadcast transmission, the mbsfn subframes reserve only the first 2 ofdm symbols to carry the control channels for lte, while the remaining ones are reserved for embms (figure 1). these symbols are consequently 'free'. to better explain this point, which is important for the mpe procedure, figure 2 qualitatively compares an example of a dss frame structure (right) with the spectrum of the dss signal (centre); the spectrum of a legacy 4g signal (i.e., without dss) is shown on the left.
figure 1. an example of an lte/nr subframe; the first two time slots are always reserved for lte.
figure 2. qualitative relationship between the spectrum of the lte signal, the spectrum of the dss signal, and the dss frame.
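to make the subframe bookkeeping above easier to follow, here is a minimal python sketch (ours, not part of the paper; the mbsfn configuration is an illustrative choice) that checks in which subframes a µ = 0 ssb, needing four consecutive ofdm symbols, can be hosted:

```python
# subframe indexes that the lte standard allows to be configured as mbsfn
MBSFN_CANDIDATES = {1, 2, 3, 6, 7, 8}
SYMBOLS_PER_SUBFRAME = 14   # normal cyclic prefix
LTE_CONTROL_SYMBOLS = 2     # first two symbols of an mbsfn subframe stay lte
SSB_SYMBOLS = 4             # ssb length with 15 khz subcarrier spacing (mu = 0)

def nr_free_symbols(subframe, mbsfn_config):
    # in an mbsfn subframe everything after the lte control region is free;
    # in a normal subframe at most 3 contiguous crs-free symbols exist,
    # so we conservatively report those 3.
    if subframe in mbsfn_config:
        return SYMBOLS_PER_SUBFRAME - LTE_CONTROL_SYMBOLS
    return 3

mbsfn_config = {1}  # e.g. a single mbsfn subframe per frame
assert mbsfn_config <= MBSFN_CANDIDATES
for sf in range(10):
    fits = nr_free_symbols(sf, mbsfn_config) >= SSB_SYMBOLS
    print(f"subframe {sf}: ssb {'fits' if fits else 'does not fit'}")
```

as expected, only the mbsfn subframe offers the four consecutive symbols the ssb needs; all other subframes leave at most three crs-free symbols.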
regarding the dss frame structure, we can note an mbsfn subframe in the second position, where the ssb of the 5g signal is placed. the 4g signal uses the upper half of the remaining subframes, while the 5g signal uses their lower half, plus the part of the mbsfn subframe not used by the ssb. note that 5g does not use the first two symbols of the subframes, which are reserved for 4g. the spectrum of the dss signal is shown in the centre plot; the useful spectrum is in the a-a' frequency range. the left plot shows an example of a standard 4g spectrum: due to the presence of a larger guard band, the useful portion of the spectrum is limited to the b-b' frequency range, resulting in a lower number of subcarriers. in practice, although both 4g and 5g standard signals occupy a nominal bandwidth of 20 mhz, the actual useful bandwidth of a 5g signal is 1.08 mhz larger than the corresponding 4g bandwidth, i.e., 19.08 vs. 18 mhz. accordingly, the number of subcarriers of a 20 mhz nominal-bandwidth dss signal is 1272. all of them are available for 5g signals, while only 1200 are used for lte signals.

3. maximum-power extrapolation technique for 4g and 5g signals
the final goal of the mpe procedure is the estimation of the maximum field level $E^{\max}$ that could be reached at the measurement point [1]. this quantity is used as a reference to estimate the emf exposure in realistic conditions through a suitable scaling factor [1], [7]-[15]. both 4g and 5g use ofdma, which results in many similarities that can be exploited for the mpe of dss signals. more specifically, they require [1], [16]: a. information on the structure of the frame (such as bandwidth, numerology for 5g, and duty cycle in the case of tdd transmissions), necessary to identify the number of res available for downlink transmission; b. the estimation of $E_{RE}^{\max}$, the maximum possible average emf level associated with a re. in 4g, $E_{RE}^{\max}$ can be obtained considering the power of the res associated with the reference signal (rs) of the 4g frame, denoted $E_{RS}$, which are transmitted at full power. according to [1], the maximum emf level at the measurement location is then estimated as

$E_{4G}^{\max} = E_{RS} \sqrt{N_{SC} F_{TDC} / F_{B}}$ , (1)

where $N_{SC}$ is the total number of subcarriers, $F_{TDC}$ is a duty-cycle factor that describes the transmission scheme implemented by the signal, and $F_{B}$ is a boosting factor that can be applied to the transmitted power of the 4g control channels to extend the coverage range. in 5g nr, the only signal that is always 'on air' is the ssb, which is transmitted at constant and maximum power. it must be noted that in nr it is possible to use different beams to transmit the ssb (using broadcast beams) and the payload data (using traffic beams). accordingly, measurement of the power of the res of the ssb in general does not allow a direct estimation of $E_{5G}^{\max}$, which is related to the traffic beams [5]. however, as discussed in the previous section, dss is usually obtained by software upgrading of the 4g system only. consequently, ssb and payload data are transmitted on the same beam. this makes it possible to estimate $E_{5G}^{\max}$ directly from the measurement of the ssb res power.
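as a minimal numerical sketch of eq. (1) (ours; the field value is arbitrary and only illustrates the scaling with the number of subcarriers, with $F_{TDC} = 1$ for fdd and no power boosting):

```python
import math

def e4g_max(e_rs, n_sc, f_tdc=1.0, f_b=1.0):
    # eq. (1): the per-re rs field level scaled by the square root of the
    # number of subcarriers, the duty-cycle factor and the boosting factor
    return e_rs * math.sqrt(n_sc * f_tdc / f_b)

e_rs = 0.05  # v/m per re, arbitrary example value
print(e4g_max(e_rs, n_sc=1200))  # legacy 4g, 20 mhz: about 1.73 v/m
print(e4g_max(e_rs, n_sc=1272))  # dss, 20 mhz: about 1.78 v/m
```

the only change needed for a dss signal is the number of subcarriers, 1272 instead of 1200, which is exactly the point the analysis below verifies.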
following the experimental approach discussed in [17], the power of the res associated with the physical broadcast channel demodulation reference signal (pbch-dmrs) is used as input for the extrapolation formula:

$E_{5G}^{\max} = E_{PBCH\text{-}DMRS} \sqrt{N_{SC} F_{TDC} F_{beam}}$ , (2)

where $F_{beam}$ is a correction factor that accounts for the difference between traffic and broadcast beam gains. it is worth noticing that for dss signals generated by passive mimo systems, like the one used in our experiment, $F_{beam} = 1$, since ssb and traffic data share the same beam.

4. experimental results
an experimental session was carried out in a dedicated facility used by one of the main italian telco companies for test purposes. measurements were performed with the aim of gaining a suitable understanding of dss system operation and thus defining an effective procedure for the assessment of population exposure from dss sources. the experimental setup used during this controlled-environment session comprised: • a keysight mxa n9020a vector signal analyzer (vsa) with up to 20 mhz demodulation bandwidth, equipped with demodulation software for both 4g and 5g nr signals; • a rohde & schwarz fsva3044 vsa with up to 400 mhz demodulation bandwidth, equipped with demodulation software for both 4g and 5g nr signals. the experimental setup is shown in figure 3. to ensure a reliable emulation of both inputs and outputs according to the demands of real users, the signal generated by the base station was transferred to an aiad 8/8-4g+dl manufactured by mts systemtechnik. the aim of the aiad is to emulate the air interface, allowing mobile radio base stations to be tested in a laboratory. figure 4 shows the aiad front end, with two coaxial cables used to transfer both mimo branches of the dss signal from the lower levels of the gnodeb to the antennas placed inside the aiad chamber; the coaxial cable on the left of the figure is used to drive the test signal to the measurement instruments. the main characteristics of the generated signals are summarized in table 1. to investigate the 4g-5g sharing mechanism properly, different data traffic scenarios were considered: • zero traffic; • full-frame 4g-only traffic; • full-frame mixed 4g-5g traffic.
figure 3. experimental setup used throughout the measurement campaign.

4.1. zero-traffic scenario
as a first step, a dss signal with no data traffic was generated. figure 5 shows the map of the demodulated power vs. symbol/carrier for an entire 10 ms dss frame, made of ten consecutive subframes. the absence of user data traffic allows for an easy identification of both the lte and the nr control channels: • the lte primary and secondary synchronization signals (pss and sss) and the pbch are transmitted in subframes 0 (pss+sss+pbch) and 5 (pss+sss only); these signals occupy about 1 mhz at the centre of the signal bandwidth. in addition, the rs is located sparsely throughout the frame, according to the positions defined by the 3gpp standard [18]; • the 5g nr synchronization signal block (ssb) can be recognized in the mbsfn subframe 1, with a frequency offset of 7.65 mhz with respect to the centre of the signal bandwidth. according to the µ = 0 numerology, the ssb bandwidth is equal to 3.6 mhz. obviously, the peculiar allocation of the radio resources in dss systems is specifically designed to avoid any possible interference between the different signals.
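the frequency allocation just described can be cross-checked with a few lines of arithmetic (our sketch; all the numbers come from the text and from table 1):

```python
# ssb placement check for the 20 mhz, mu = 0 dss signal
fc = 1850.0         # mhz, carrier centre frequency
ssb_offset = 7.65   # mhz, ssb offset from the carrier centre
ssb_bw = 3.6        # mhz, ssb bandwidth with mu = 0 (20 rb x 12 x 15 khz)
useful_bw = 19.08   # mhz, useful dss bandwidth (1272 x 15 khz)

f_ssb = fc + ssb_offset          # 1857.65 mhz, the zero-span frequency used below
ssb_upper = f_ssb + ssb_bw / 2   # 1859.45 mhz, upper edge of the ssb
chan_upper = fc + useful_bw / 2  # 1859.54 mhz, upper edge of the useful band
print(f_ssb, ssb_upper, chan_upper, ssb_upper <= chan_upper)  # true
```

the ssb therefore sits just inside the upper edge of the useful spectrum, consistent with its position at the very top edge of the dss spectrum discussed in the next subsection.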
zero-span measurement is an alternative experimental approach useful to appreciate the resource allocation adopted by dss. it provides the time-domain variation of the received power at a fixed frequency. this method has the great advantage of providing a low-cost alternative to top-notch, highly expensive vector signal analysers (vsas), since it can be accomplished by almost every traditional spectrum analyser. figure 6 and figure 7 show a 20 ms zero-span acquisition at 1850 mhz (i.e., the centre of the signal bandwidth) and at 1857.65 mhz (i.e., the ssb centre frequency), respectively. a periodic trigger equal to the frame duration (i.e., 10 ms) was applied to both acquisitions. when 1850 mhz is set as the central frequency, the lte control channels can be easily recognized in the acquired trace (figure 6) as power bursts spaced 5 ms apart. conversely, when the central frequency is shifted to 1857.65 mhz (figure 7), a single power peak can be distinguished, corresponding to the 5g ssb. it is worth noting that the lte rs are present throughout the time frame, regardless of the centre frequency chosen for the measurement. the dss signal was properly demodulated by both the 4g and the 5g analysis software, with both routines characterized by an excellent synchronization with the dss signal and reconstructed iq constellations very close to the ideal ones.
figure 4. the aiad used to transfer the signal from the base station to the measurement equipment.
table 1. dss signal configuration.
center frequency fc: 1850 mhz
bandwidth b: 20 mhz
duplexing: fdd
numerology µ: 0
sub-carrier spacing ∆f: 15 khz
cell identity (cid): 255
mimo antennas: 2
mbsfn subframe indexes: 1, 21, 22
ssb allocation: case a with lmax = 4
ssb center frequency fssb: 1857.65 mhz
ssbs per burst: 1
ssb transmission periodicity: 20 ms
figure 5. power vs. symbol × carrier map of a dss frame in the absence of user data traffic.
figure 6. zero-span measurement at 1850 mhz.
figure 7. zero-span measurement at 1857.65 mhz.

4.2. full-frame traffic scenario
the full-frame data traffic scenario was achieved using a user equipment (ue) in combination with the aiad system, forcing the saturation of the dss frame through multi-thread ftp connections. to understand which scheduling is used by the dss system for resource allocation, two different scenarios were considered: • 100 % 4g data traffic; • 50 % 4g – 50 % 5g data traffic. following the approach used in the previous section, the map of the demodulated power vs. symbol/carrier was acquired for both scenarios under investigation. note that, to appreciate the periodicity of the mbsfn subframes, the capture buffer was extended to 4 radio frames (i.e., 40 ms). in the absence of 5g users, and with heavy 4g traffic, all the subframes of a frame, except those configured as mbsfn, are used for 4g communication, as shown in figure 8. in fact, the mbsfn subframes (indexed as 1, 21 and 22) are not allowed to host 4g data transmission and, for this reason, they are left almost completely empty, except for the 5g ssbs, which are located at the very top edge of the dss spectrum and transmitted once every two frames (20 ms). the presence of mbsfn subframes reserved for 5g transmission implies that a dss frame cannot be entirely filled by 4g data traffic alone. figure 9 shows the case of large 4g and 5g data traffic.
4g data is placed in the upper half of the non-mbsfn subframes, while 5g uses most of the lower part. as discussed in section 2, the configuration of the 5g part of the frame leaves free the res required for 4g signalling, making the presence of 5g res completely transparent to a 4g user. note that the radio resource occupation is now more uniform than in the previous case, although several blank regions – acting as guard intervals to avoid possible interference between signals – are disseminated throughout the whole radio frame.

4.3. mpe procedure applied to dss
the above-described code-domain analysis provides direct information on the two quantities required by the extrapolation procedures: • the rs power for 4g ($P_{RS}$); • the pbch-dmrs power for 5g ($P_{PBCH\text{-}DMRS}$). to discuss the dss mpe procedure, we consider the full-frame traffic configuration. this makes it possible to compare the results with the maximum reference power obtained by a channel power (cp) measurement acquired in the full-frame data traffic scenario. as discussed in the previous section, both the 4g and the 5g control channels coexist within the dss frame. for this reason, both the 4g and the 5g mpe procedures – described by eq. (3) and (4), respectively – can be applied:

$P_{4G}^{\max} = N_{SC}\, F_{TDC}\, F_{B}\, P_{RS}$ (3)
$P_{5G}^{\max} = N_{SC}\, F_{beam}\, F_{TDC}\, P_{PBCH\text{-}DMRS}$ (4)

note that eq. (3) and (4) represent the same mpe procedures described by eq. (1) and (2), just expressed in terms of maximum power instead of electric field. the measurement procedure is the same described in [1] for 4g and in [17] for 5g. more specifically, code-domain measurement of the dss signal provides direct information on $P_{RS}$ and $P_{PBCH\text{-}DMRS}$. according to the characteristics of the dss signal, several assumptions about the parameters in eq. (3) and (4) can be made: • $F_{TDC}$ is assumed to be equal to 1 in both eq. (3) and (4), since the dss signal adopts the frequency division duplexing (fdd) transmission mode; fdd allocates uplink and downlink transmissions to different frequency bands, so that no uplink-downlink time duty cycle is needed; • $F_{B} = 1$ in eq. (3), since no power boost is applied to the 4g rs; • $F_{beam} = 1$ in eq. (4), since the passive antennas used for dss signals do not support beamforming. regarding the $N_{SC}$ value, as pointed out in section 2, although 4g and 5g standard signals occupy a nominal bandwidth of 20 mhz, the actual occupied bandwidth of a 5g signal is 1.08 mhz larger than the corresponding 4g bandwidth (i.e., 19.08 mhz vs. 18 mhz), giving 1272 available subcarriers spaced by 15 khz. therefore, the value of $N_{SC}$ is assumed to be equal to 1272 in both eq. (3) and (4). when a subframe transmits 4g signals only, just 1200 carriers are used. accordingly, even in the case of full use of the dss resources, there are some unused res in the frames. indeed, only a fraction of such res is related to the different number of 4g and 5g subcarriers, while the remaining ones are intrinsic to the 4g and 5g subframe structures. the exact number of unused res in the case of full 4g/5g traffic depends on how the scheduler allocates the 4g and 5g data in the dss frame. this is a decision of the provider, and the almost 50 % sharing of the frame between 4g and 5g signals in the case of full 4g and 5g traffic shown in figure 9 is just one possible choice.
figure 8. power vs. symbol × carrier map of a dss frame in the 4g full-traffic case; the 20 ms periodicity of the ssb is visible.
figure 9. power vs. symbol × carrier map of a dss frame in the case of 4g/5g balanced traffic.
different choices give a slightly different number of 'unused' res in the case of fully filled frames. consequently, the hypothesis of full use of the res of the dss frame made in the mpe procedure gives a conservative estimation of the maximum power level in the case of full 4g/5g traffic for any possible distribution of the 4g/5g data chosen by the providers, in accordance with the precautionary principle applied to the evaluation of the electromagnetic exposure of the population. to verify the above observations, we applied eq. (3) and (4) to the fully loaded dss signal of figure 9. $P_{RS}$ and $P_{PBCH\text{-}DMRS}$ were measured in the code domain and are reported in table 2. then, a channel power measurement was performed on the same signal. the mpe and cp measurements are reported in table 3. the results show that eq. (3) and (4) give very close values, and confirm that the mpe estimate is conservative compared to the channel power result.
table 2. rs and pbch-dmrs power per re.
prs: -69.94 dbm
ppbch-dmrs: -70.20 dbm
table 3. comparison of mpe and cp measurements.
4g mpe: -38.90 dbm
5g mpe: -39.15 dbm
cp: -41.24 dbm
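as a quick plausibility check (our sketch, not part of the paper's processing chain), table 3 can be reproduced from table 2 by applying eq. (3) and (4) in logarithmic units, where the multiplicative factors become db additions:

```python
import math

def mpe_dbm(p_re_dbm, n_sc=1272, f_tdc=1.0, f_b=1.0, f_beam=1.0):
    # eq. (3)/(4) with all correction factors equal to 1 for the dss signal:
    # 10 * log10(1272) adds about 31.04 db to the per-re power
    return p_re_dbm + 10 * math.log10(n_sc * f_tdc * f_b * f_beam)

print(mpe_dbm(-69.94))  # about -38.90 dbm, the 4g mpe of table 3
print(mpe_dbm(-70.20))  # about -39.16 dbm, i.e. the 5g mpe of table 3 within rounding
```

both values land within a few hundredths of a db of table 3, and both exceed the measured channel power (-41.24 dbm), confirming the conservative nature of the extrapolation.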
5. conclusions
the emf human exposure to dss signals is a topic gaining growing interest in the scientific community. in this paper, the results of a study regarding the use of 4g or 5g mpe procedures for dss signals are reported; they are in good agreement with those reported in [19]. the dss signals were generated by a base station emulator and transferred to the measurement instruments through an air interface adapter to obtain a controlled environment. this made it possible to focus the analysis on the effect of the frame structure on the mpe procedure, excluding the random effects associated with fading phenomena affecting signals received in real environments. the analysis confirms that both the 4g and the proposed 5g mpe procedures can be used for the mpe of dss signals, provided that the correct number of subcarriers in the dss frame is considered.

references
[1] iec 62232, determination of rf field strength, power density and sar in the vicinity of radiocommunication base stations for the purpose of evaluating human exposure, iec international electrotechnical commission, 2017.
[2] s. aerts, l. verloock, m. van den bossche, d. colombi, l. martens, c. tornevik, w. joseph, in-situ measurement methodology for the assessment of 5g nr massive mimo base station exposure at sub-6 ghz frequencies, ieee access 2019, 7, 184658-184667. doi: 10.1109/access.2019.2961225
[3] d. franci, s. coltellacci, e. grillo, s. pavoncello, t. aureli, r. cintoli, m. d. migliore, experimental procedure for fifth generation (5g) electromagnetic field (emf) measurement and maximum power extrapolation for human exposure assessment, environments 2020, 7, 22. doi: 10.3390/environments7030022
[4] d. franci, s. coltellacci, e. grillo, s. pavoncello, t. aureli, r. cintoli, m. d. migliore, an experimental investigation on the impact of duplexing and beamforming techniques in field measurements of 5g signals, electronics 2020, 9, 223. doi: 10.3390/electronics9020223
[5] s. adda, t. aureli, s. d'elia, d. franci, e. grillo, m. d. migliore, s. pavoncello, f. schettino, r. suman, a theoretical and experimental investigation on the measurement of the electromagnetic field level radiated by 5g base stations, ieee access 2020, 8, 101448-101463. doi: 10.1109/access.2020.2998448
[6] m. d. migliore, d. franci, s. pavoncello, e. grillo, t. aureli, s. adda, r. suman, s. d'elia, f. schettino, a new paradigm in 5g maximum power extrapolation for human exposure assessment: forcing gnb traffic toward the measurement equipment, ieee access 2021, 9, 101946-101958. doi: 10.1109/access.2021.3092704
[7] p. baracca, a. weber, t. wild, c. grangeat, a statistical approach for rf exposure compliance boundary assessment in massive mimo systems, wsa 2018 22nd international itg workshop on smart antennas, bochum, germany, 14-16 march 2018.
[8] k. bechta, c. grangeat, j. du, impact of effective antenna pattern on radio frequency exposure evaluation for 5g base station with directional antennas, 2020 xxxiii general assembly and scientific symposium of the international union of radio science, ieee, 2020, pp. 1-4.
[9] b. thors, a. furuskar, d. colombi, c. tornevik, time-averaged realistic maximum power levels for the assessment of radio frequency exposure for 5g radio base stations using massive mimo, ieee access 2017, 5, 19711-19719. doi: 10.1109/access.2017.2753459
[10] d. pinchera, m. migliore, f. schettino, compliance boundaries of 5g massive mimo radio base stations: a statistical approach, ieee access 2020, 8, 182787-182800. doi: 10.1109/access.2020.3028471
[11] d. colombi, p. joshi, b. xu, f. ghasemifard, v. narasaraju, c. törnevik, analysis of the actual power and emf exposure from base stations in a commercial 5g network, appl. sci. 2020, 10, 5280. doi: 10.3390/app10155280
[12] s. aerts, l. verloock, m. van den bossche, d. colombi, l. martens, c. tornevik, w. joseph, design and validation of an in-situ measurement procedure for 5g nr base station rf-emf exposure, joint annual meeting of the bioelectromagnetics society and the european bioelectromagnetics association (bioem 2020), 2020, pp. 414-418.
[13] c. bornkessel, t. kopacz, a. m. schiffarth, d. heberling, m. a. hein, determination of instantaneous and maximal human exposure to 5g massive-mimo base stations, 2021 15th european conference on antennas and propagation (eucap), ieee, 2021, pp. 1-5.
[14] m. d. migliore, f. schettino, power reduction estimation of 5g active antenna systems for human exposure assessment in realistic scenarios, ieee access 2020, 8, 220095-220107. doi: 10.1109/access.2020.3042002
[15] a. hirata, y. diao, t. onishi, k. sasaki, s. ahn, d. colombi, v. de santis, i. laakso, l. giaccone, w. joseph, et al., assessment of human exposure to electromagnetic fields: review and future directions, ieee transactions on electromagnetic compatibility, 2021.
[16] m. d. migliore, d. franci, s. pavoncello, e. grillo, t. aureli, s. adda, r. suman, s. d'elia, f. schettino, a new paradigm in 5g maximum power extrapolation for human exposure assessment: forcing gnb traffic toward the measurement equipment, ieee access 2021, 9, 101946-101958. doi: 10.1109/access.2021.3092704
[17] d. franci, s. coltellacci, e. grillo, s. pavoncello, t. aureli, r.
cintoli, m. d. migliore, experimental procedure for fifth generation (5g) electromagnetic field (emf) measurement and maximum power extrapolation for human exposure assessment, environments 2020, 7, 342. doi: 10.3390/environments7030022
[18] ts 36.213, evolved universal terrestrial radio access (e-utra); physical layer procedures, 3rd generation partnership project (3gpp), 2015.
[19] l. m. schilling, c. bornkessel, m. a. hein, analysis of instantaneous and maximal rf exposure in 4g/5g networks with dynamic spectrum sharing, 16th european conference on antennas and propagation (eucap), 2022, pp. 1-5. doi: 10.23919/eucap53622.2022.9769680

the importance of physiological data variability in wearable devices for digital health applications
acta imeko, issn: 2221-870x, june 2022, volume 11, number 2, 1-8.
gloria cosoli1, angelica poli2, susanna spinsante2, lorenzo scalise1
1 department of industrial engineering and mathematical sciences, università politecnica delle marche, v. brecce bianche, 60131 ancona, italy; 2 department of information engineering, università politecnica delle marche, v. brecce bianche, 60131 ancona, italy
section: research paper
keywords: wearable devices; physiological measurements; data variability; physiological monitoring
citation: gloria cosoli, angelica poli, susanna spinsante, lorenzo scalise, the importance of physiological data variability in wearable devices for digital health applications, acta imeko, vol. 11, no. 2, article 25, june 2022, identifier: imeko-acta-11 (2022)-02-25
section editor: francesco lamonaca, university of calabria, italy
received july 13, 2021; in final form march 21, 2022; published june 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: gloria cosoli, e-mail: g.cosoli@staff.univpm.it

abstract: this paper aims at characterizing the variability of physiological data collected through a wearable device (empatica e4), given that both intra- and inter-subject variability play a pivotal role in digital health applications, where artificial intelligence (ai) techniques have become popular. inter-beat intervals (ibis), electrodermal activity (eda) and skin temperature (skt) signals have been considered, and their variability has been evaluated in terms of general statistics (mean and standard deviation) and coefficient of variation. the results show that both intra- and inter-subject variability values are significant, especially for those parameters describing how the signals vary over time. moreover, eda seems to be the signal characterized by the highest variability, followed by ibis, contrary to skt, which is more stable. this variability could affect ai algorithms in classifying signals according to particular discriminants (e.g. emotions, daily activities, etc.), taking into account the dual role of variability: hindering a net distinction between classes, but also making algorithms more robust for deep learning purposes thanks to the consideration of a wide test population. indeed, it is worth noting that variability plays a fundamental role in the whole measurement chain, characterizing data reliability and impacting the accuracy of the final results and, consequently, decision-making processes.

1. introduction
the use of wearable devices is constantly spreading all over the world, thanks to their wide accessibility and high ease of use [1] (even if further actions and improvements are still needed to overcome the barriers to larger adoption by older adults [2]). nowadays a continuously growing number of people wear a smartwatch monitoring a plethora of physiological parameters: heart rate (hr) [3], energy expenditure (ee) [4], blood volume pulse signal (bvp) [5], electrodermal activity (eda) [6], acceleration signal [7], sleep quality [8], respiration rate [9], stress-related indices [10], etc. these measurements can be useful for different purposes, from cardiovascular monitoring [11] to sleep tracking [12], through activity assessment [13], fitness-oriented applications [14] and blood pressure observation [15], to cite just a few.
furthermore, in recent months wearable devices have expanded their application also to the possible detection of early symptoms related to the sars-cov-2 pandemic [16], since this virus has stressed the importance of remote monitoring both to limit contagion and for "testing, tracking and tracing" strategies [17]. however, there are also critical aspects that should be thoroughly considered, pertaining to health-related data privacy issues and to the measurement accuracy of these innovative wearable instruments [5], which undoubtedly play important roles in the era of personalized medicine and digital health [18], [19]. physiological signals can be collected through wearable devices 24 hours a day, 7 days a week, producing large amounts of data, which are more and more frequently analysed through artificial intelligence (ai) algorithms in order to provide useful information for so-called decision-making processes [20], [21], thus supporting human choices in different fields, from industry 4.0 [22], [23] to ehealth [24]. the purposes can be different: emotion classification [25], activity recognition [26], hypertension management [27], fall detection [28], smart living environments and well-being assessment [29], and so on. in order to develop robust models, capable of providing reliable information, data quality is fundamental [30]; in this perspective, not only hardware and acquisition options (e.g. sampling frequency, signal-to-noise ratio (snr), resolution, etc.) have a big impact, but also data variability, linked both to the different sources used to collect data [31] and to the physiological variability itself.
indeed, the classification performance of ai algorithms surely depends on the variability observed in the data collected on the test population: if it is true that (physiological) variability somehow hinders a perfect discrimination among classes, on the other hand it is necessary to test a wide population in order to include its variability and avoid overfitting issues. these aspects should be thoroughly considered when developing ai algorithms for digital health applications, which cannot neglect the physiological variability characterising the involved population and, consequently, the measured data. the study reported in this manuscript aims at evaluating the intra- and inter-subject variability of different physiological signals collected through a wearable wrist-worn device (empatica e4). in particular, the authors have analysed cardiac-related parameters (i.e. heart rate variability – hrv – parameters computed on the bvp signal measured through a photoplethysmographic – ppg – sensor), features computed on the eda signal, and skin temperature (skt) values. mean, standard deviation and coefficient of variation have been computed for each extracted parameter, considering the repeated tests on the same subject to evaluate intra-subject variability and the whole acquired dataset for inter-subject variability. the rest of the paper is organized as follows: section 2 describes the materials and methods employed for the data acquisitions and for the evaluation of data variability, section 3 reports the intra- and inter-subject variability results, and finally in section 4 the authors provide their considerations and conclusions.

2. materials and methods
2.1. participants
the study was conducted on 10 healthy volunteers: 3 males, 7 females; age of (33 ± 16) years with a range of (15-59) years; height of (169.78 ± 8.83) cm; weight of (66.55 ± 12.00) kg; bmi of (22.92 ± 2.14) kg/m2 – data are reported as mean ± standard deviation. they declared that they did not take any medication in the 24 hours preceding the tests, nor had particular clinical histories possibly influencing the results. before starting the tests, each participant was informed about the test purpose and procedure and signed an informed consent according to the european regulation 2016/679, i.e. the general data protection regulation (gdpr), to obtain permission for processing personal data.

2.2. data collection
in order to assess the inter-subject and intra-subject variability of the physiological parameters, each subject repeated the acquisition six times, for a total of 60 recordings, each lasting 5 minutes. ambient temperature and relative humidity were equal to (20 ± 2) °c and (50 ± 5) %, respectively, so as to be perceived as comfortable by most of the involved individuals. the participants (with a skin colour classification of type ii – fitzpatrick scale), lying comfortably in a supine position (i.e., at rest) in a quiet room, were instructed to relax as much as possible, breathe normally and not talk during the recordings, in order to minimize movement artifacts. as shown in figure 1, the physiological signals were simultaneously collected through a multisensory wearable device, namely the empatica e4 [32], placed on the dominant wrist. this acquisition device was chosen as it provides raw data, thus being particularly suitable for research purposes. firstly, the participants were allowed to adjust the device positioning to increase comfort.
then, the device placement was verified to ensure optimal skin contact (not worn too tightly or too loosely) and, consequently, to guarantee optimal conditions for a reliable ppg sensor acquisition [33] and, therefore, the highest possible data quality.

2.3. data acquisition device
the individual physiological signals were recorded with the multimodal device empatica e4 (class iia medical device according to the 93/42/eec directive) – firmware version fw 3.1.0.7124. such a device captures the inter-beat interval (ibi), bvp, eda, human skt and 3-axis accelerometer signals. in particular, the bvp and ibi signals, both sampled at 64 hz with a resolution of 0.9 nw/digit, are derived from the ppg sensor. on the bottom of the wristband there are two green light-emitting diodes (leds), enabling the measurement of blood volume changes and heartbeats, and two red leds for reducing motion artifacts. additionally, two photodiode units (14 mm2 total sensitive area) measure the reflected light. on the bracelet band of the empatica e4, two ag/agcl electrodes allow a small amount of alternating current (frequency 8 hz, with a maximum peak-to-peak value of 100 µa) to be passed for measuring the skin conductance in µs, sampled at 4 hz with a resolution of 900 ps in the range [0.01, 100] µs. at the same sampling frequency (4 hz), an infrared thermopile placed on the back of the case records the skt data in °c, with an accuracy of ± 0.20 °c (within the range 36 °c - 39 °c) and a resolution of 0.02 °c; the calibration is valid in the range [-40, 115] °c. the last sensor is a 3-axial mems accelerometer used to collect the acceleration along the three dimensions x, y, z, with a 32 hz sampling frequency and a default measurement range of ± 2 g; in this case the resolution of the output signal is 0.015 g (8 bit). a dedicated mobile application (e4 realtime) was used to stream and view data in real time on a mobile device connected to the empatica e4 via bluetooth low energy (ble). after each measurement session, data were automatically transferred to a cloud repository (empatica connect) to view, manage and download the raw data in .csv format for the post-processing phase of the study.
figure 1. measurement setup.

2.4. data analysis
as mentioned above, in this study the data variability analysis was conducted on hrv (or, more precisely, pulse rate variability [34], [35]), eda and skt signals, previously processed in the matlab environment in order to extract the relevant features. regarding the hrv evaluation, after applying a previously developed artifact correction method [36], the analysis was performed on the ibi signals using the kubios toolbox [37]. seven meaningful hrv-related parameters were extracted from the corrected ibi signals in the time domain (table 1), namely: mean and standard deviation of ibis; mean, standard deviation, minimum and maximum values of hr; root mean square of successive rr interval differences. frequency-domain parameters were not considered in the present work, both to limit the number of parameters extracted from the same signal and because frequency-domain parameters can be strongly affected by spurious components linked to movement artifacts, to which wrist-worn wearable devices are prone [38], even more so during intense physical activities [39].
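for readers who want to reproduce the feature set, the following sketch (ours; it implements the standard time-domain definitions, not the kubios code) derives the seven parameters of table 1 from an artifact-corrected ibi series:

```python
import numpy as np

def hrv_time_domain(ibi_ms):
    # time-domain hrv features from inter-beat intervals expressed in ms
    ibi = np.asarray(ibi_ms, dtype=float)
    hr = 60000.0 / ibi        # instantaneous heart rate in bpm
    d = np.diff(ibi)          # successive inter-beat differences
    return {
        "rr_mean": ibi.mean(),
        "rr_std": ibi.std(ddof=1),
        "hr_mean": hr.mean(),
        "hr_std": hr.std(ddof=1),
        "hr_min": hr.min(),
        "hr_max": hr.max(),
        "rmssd": float(np.sqrt(np.mean(d ** 2))),
    }

print(hrv_time_domain([1020, 990, 1005, 1010, 980, 1000]))  # toy 6-beat series
```

note that the artifact correction step must precede this computation: a single spurious interval inflates rr_std and rmssd far more than it shifts the mean values.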
concerning the eda data, the bio-sp toolbox [40] was used to pre-process the signals and to extract the features that the toolbox permits to compute. indeed, the eda signal results from the superimposition of two components, namely the skin conductance response (scr) and the skin conductance level (scl), related to the fast response to external stimuli and to the slow changes in baseline level, respectively. this means that the scl depends on the individual characteristics (e.g. skin condition) and can differ markedly between individuals. consequently, under rest conditions with no external stimuli, the scl has a higher impact than the scr component on both the eda signal trend and its amplitude. according to the literature [41], a gaussian low-pass filter, with a 40-point window and a sigma of 400 ms, was applied to reduce noise and motion artifacts due to potential movements of the subject's wrist. in order to characterize the eda signal, the following five features were computed within the bio-sp toolbox in the time domain (table 1): scr mean duration, scr mean amplitude, scr mean rise time, eda mean value, number of scrs. finally, since an inspection of the skt data revealed slight and slow temperature changes at rest, no filters were applied; therefore, the following parameters were extracted from the raw skt signal (table 1): mean, standard deviation, minimum and maximum of the skin temperature. once the whole set of features was computed and extracted from the considered signals, both intra- and inter-subject variability was evaluated for each metric. more specifically, data variability was estimated by computing the mean ($\mu$), the standard deviation ($\sigma$) and the coefficient of variation ($c_v = \sigma / \mu$) for all the extracted features. furthermore, the normality of the parameter distributions was verified by means of the shapiro-wilk test [42] (null hypothesis: the test population is normally distributed; p-value ≤ 0.05 considered statistically significant).
table 1. time-domain features extracted from the physiological signals acquired in the tests.
signal | feature | unit | description
hrv | rr_mean | ms | mean value of inter-beat intervals
hrv | rr_std | ms | standard deviation of inter-beat intervals
hrv | hr_mean | bpm | mean value of heart rate
hrv | hr_std | bpm | standard deviation of heart rate
hrv | hr_min | bpm | minimum value of heart rate
hrv | hr_max | bpm | maximum value of heart rate
hrv | rmssd | ms | root mean square of successive inter-beat interval differences
eda | scr_d_mean | s | mean duration of skin conductance response signal
eda | scr_a_mean | µs | mean amplitude of skin conductance response signal
eda | scr_rt_mean | s | mean rise time of skin conductance response signal
eda | eda_mean | µs | mean value of eda signal
eda | scr_n | – | number of skin conductance response peaks
skt | skt_mean | °c | mean value of skin temperature
skt | skt_std | °c | standard deviation of skin temperature
skt | skt_min | °c | minimum value of skin temperature
skt | skt_max | °c | maximum value of skin temperature
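the variability evaluation itself reduces to a few grouped statistics; a minimal sketch (ours, with a hypothetical data layout of one row per 5-minute recording) is:

```python
import pandas as pd

def variability(df, feature):
    # intra-subject: mean, std and cv of the feature within each subject
    intra = df.groupby("subject")[feature].agg(["mean", "std"])
    intra["cv_%"] = 100 * intra["std"] / intra["mean"]
    # inter-subject: the same statistics over all recordings pooled together
    s = df[feature]
    inter = {"mean": s.mean(), "std": s.std(), "cv_%": 100 * s.std() / s.mean()}
    return intra, inter

# hypothetical example with two subjects and two recordings each
df = pd.DataFrame({"subject": [1, 1, 2, 2],
                   "rr_mean": [1044.0, 1010.0, 1152.0, 1148.0]})
intra, inter = variability(df, "rr_mean")
print(intra)
print(inter)
```

with the full dataset, each subject contributes six recordings to the intra-subject rows, while the inter-subject statistics pool all 60 recordings.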
3. results
in this section, the results are reported grouped according to data type: cardiac-related parameters (i.e. hrv analysis parameters, subsection 3.1), eda-related parameters (subsection 3.2) and skin temperature parameters (subsection 3.3). results are reported in the tables as $\mu \pm \sigma$ ($c_v$); some examples of mean distributions are also shown using histograms.

3.1. hrv parameters
the authors analysed the variability of the hrv signal at the parameter level, focusing on those extracted in the time domain. the shapiro-wilk test evidenced that rr_mean, hr_mean, hr_min and hr_max can be considered normally distributed (p-value ≥ 0.05). an example of the distribution is reported in the histogram (figure 2) related to the rr_mean parameter. for the others (i.e. rr_std and rmssd), the null hypothesis of normality was rejected; the reason could lie in the limited numerosity of the test population (60 recordings on 10 subjects). similarly, hr_std resulted in a non-normal distribution, probably also due to the presence of one outlier subject (i.e. subject no. 6, see table 2). observing the variability results in table 2, it is possible to notice a very high variability, in particular for the parameters describing how the measurement oscillates around its mean value, i.e. the standard deviation values of rr, hr and rmssd, reporting inter-subject variabilities of 55.8 %, 126.9 % and 65.7 %, respectively. this seems to underline the physiological variability; hence a subject's condition of interest cannot be described (and classified) without properly considering such data variability. a particular remark should be made on the extremely high inter-subject variability of the hr_std parameter; indeed, this could be linked to subject no. 6, reporting an extremely high variability (i.e., 125.1 %), as already mentioned above. indeed, a visual inspection of the data collected on subject no. 6 revealed that one of the tests conducted on this subject was particularly noisy, hindering a reliable hrv analysis despite the use of proper artifact correction methods in the pre-processing phase. however, if this test is discarded from the variability analysis, the intra-subject variability of hr_std reduces from (12 ± 15) bpm (125.1 %) to (6 ± 3) bpm (50.0 %) – while the remaining parameters do not vary substantially; in this way, the inter-subject variability related to the hr_std parameter would be (4 ± 3) bpm (81.1 %). the observed noise, which quite often characterises signals acquired through the ppg sensors of wearable devices, could be an effect of the subjects' wrist movements [38]. intra-subject variability shows similar results, evidencing a very high variability, especially for the standard deviation parameters describing the variations over time. on the other hand, the mean value parameters show a quite low intra-subject variability, with values often lower than 10 % (e.g. for the rr_mean parameter, lower than 10 % with the exception of 3 subjects out of 10).
table 2. variability of hrv parameters in the time domain. results are reported as µ ± σ (cv).
subject | rr_mean (ms) | rr_std (ms) | hr_mean (bpm) | hr_std (bpm) | hr_min (bpm) | hr_max (bpm) | rmssd (ms)
1 | 1044 ± 66 (6.3 %) | 70 ± 43 (61.0 %) | 58 ± 4 (6.5 %) | 4 ± 2 (49.9 %) | 50 ± 4 (8.8 %) | 65 ± 6 (8.7 %) | 94 ± 64 (68.6 %)
2 | 1152 ± 30 (2.6 %) | 36 ± 12 (33.7 %) | 52 ± 1 (2.5 %) | 2 ± 1 (48.5 %) | 49 ± 2 (3.3 %) | 58 ± 4 (6.1 %) | 46 ± 16 (33.7 %)
3 | 934 ± 44 (4.7 %) | 40 ± 13 (33.6 %) | 64 ± 3 (4.8 %) | 3 ± 1 (39.9 %) | 59 ± 5 (7.8 %) | 76 ± 5 (6.1 %) | 49 ± 18 (36.8 %)
4 | 1008 ± 80 (8.0 %) | 83 ± 51 (61.1 %) | 60 ± 5 (8.1 %) | 6 ± 5 (87.3 %) | 49 ± 7 (13.6 %) | 70 ± 5 (7.8 %) | 114 ± 69 (60.1 %)
5 | 1027 ± 80 (7.8 %) | 39 ± 20 (51.9 %) | 59 ± 5 (8.0 %) | 3 ± 2 (72.0 %) | 53 ± 3 (5.0 %) | 64 ± 6 (9.7 %) | 54 ± 30 (56.1 %)
6 | 991 ± 133 (13.5 %) | 72 ± 26 (35.5 %) | 62 ± 9 (14.2 %) | 12 ± 15 (125.1 %)* | 52 ± 8 (15.1 %) | 74 ± 12 (16.5 %) | 97 ± 42 (43.2 %)
7 | 938 ± 31 (3.4 %) | 68 ± 28 (40.5 %) | 64 ± 2 (3.4 %) | 6 ± 5 (81.3 %) | 55 ± 5 (9.1 %) | 74 ± 4 (4.8 %) | 90 ± 44 (48.9 %)
8 | 873 ± 44 (5.0 %) | 48 ± 2 (4.9 %) | 69 ± 3 (5.0 %) | 4 ± 1 (10.4 %) | 61 ± 3 (4.2 %) | 79 ± 6 (6.9 %) | 45 ± 2 (4.7 %)
9 | 1083 ± 139 (12.9 %) | 31 ± 7 (23.7 %) | 56 ± 7 (12.8 %) | 2 ± 1 (24.8 %) | 53 ± 7 (12.8 %) | 61 ± 7 (11.5 %) | 39 ± 7 (18.6 %)
10 | 885 ± 130 (14.7 %) | 49 ± 19 (39.6 %) | 69 ± 10 (14.7 %) | 4 ± 1 (16.9 %) | 62 ± 9 (15.3 %) | 83 ± 7 (8.7 %) | 43 ± 66 (39.0 %)
tot. | 993 ± 117 (11.8 %) | 54 ± 30 (55.8 %) | 61 ± 7 (12.0 %) | 4 ± 6 (126.9 %)* | 54 ± 7 (12.9 %) | 70 ± 10 (14.2 %) | 67 ± 4 (65.7 %)
* results affected by a particularly noisy test performed on subject no. 6.
figure 2. histogram related to the rr_mean parameter (hrv signal).

3.2. eda parameters
as stated above, under rest conditions scl is the predominant component of the eda signal; this can result in a very low intensity of the scr component (more linked to external stimuli), and consequently the eda_mean parameter values are expected to be low. in fact, in table 3, the eda_mean parameter shows very low values, down to 0.0005 µs for subject no. 9. such very low mean values, together with a high signal variability (i.e. a high standard deviation), result in extremely high coefficients of variation (see for example subjects no. 3 and 9, where $c_v$ is extremely high due to the fact that the mean value of the signal is an order of magnitude lower than its standard deviation). more in general, the parameters related to the eda signals show a very high variability, with coefficient of variation values related to inter-subject variability often over 100 %.
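the instability of the coefficient of variation for near-zero means can be seen directly (illustrative arithmetic; only the 0.0005 µs mean value comes from the text, the standard deviation is assumed an order of magnitude larger):

```python
mu, sigma = 0.0005, 0.005   # µs; assumed std one order of magnitude above the mean
print(100 * sigma / mu)     # cv = 1000 %: the ratio diverges as mu -> 0
print(100 * 0.3 / 34.5)     # by contrast, an illustrative skt case stays below 1 %
```

this is why, for signals whose mean approaches zero at rest, $c_v$ should be read as a qualitative warning rather than a precise dispersion measure.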
also, the intra-subject variability seems to be extremely high, evidencing that the eda signal is not stable over time; hence, it should be considered in its long-term evolution, rather than being described only through descriptive statistics. such a high variability could be attributable to the fact that eda measurements at total rest, with no external stimuli, are quite complicated to perform, especially by means of wearable devices; in fact, there are multiple subjective causes influencing the measurement results. furthermore, it should be considered that wrist eda is quite different from standard finger eda [43]. regarding the type of distribution, no features extracted from the eda signal can be considered normally distributed; the reason could again be attributed to the restricted test population. an example of distribution is reported in the histogram of figure 3 for the eda_mean parameter.

table 3. variability of eda parameters. results are reported as μ ± σ (cv).

subject | scr_d_mean in s | scr_a_mean in μs | scr_rt_mean in s | eda_mean in μs | scr_n
1 | 10.2 ± 5.0 (49.4 %) | 0.0097 ± 0.0034 (34.8 %) | 5.3 ± 2.6 (48.7 %) | 0.0022 ± 0.0032 (144.5 %) | 23 ± 7 (30.5 %)
2 | 26.1 ± 26.1 (99.8 %) | 0.0138 ± 0.0056 (72.7 %) | 13.1 ± 10.6 (80.3 %) | 0.0030 ± 0.0052 (177.1 %) | 14 ± 10 (71.2 %)
3 | 8.3 ± 5.4 (65.0 %) | 0.0095 ± 0.0056 (59.3 %) | 4.3 ± 2.9 (68.6 %) | 0.0010 ± 0.019 (1980.1 %) | 12 ± 9 (77.7 %)
4 | 12.4 ± 17.5 (140.9 %) | 0.0098 ± 0.0079 (80.8 %) | 9.4 ± 15.8 (167.9 %) | 0.0078 ± 0.010 (133.0 %) | 21 ± 16 (76.6 %)
5 | 18.9 ± 19.2 (101.7 %) | 0.0115 ± 0.0044 (38.1 %) | 5.2 ± 3.4 (65.5 %) | 0.0027 ± 0.0044 (131.4 %) | 22 ± 13 (59.5 %)
6 | 15.2 ± 10.0 (65.6 %) | 0.0112 ± 0.0051 (45.6 %) | 9.3 ± 7.4 (79.5 %) | 0.0043 ± 0.0051 (50.1 %) | 18 ± 7 (41.0 %)
7 | 14.5 ± 14.4 (99.2 %) | 0.0105 ± 0.0090 (85.7 %) | 10.6 ± 12.2 (115.1 %) | 0.0066 ± 0.0090 (97.4 %) | 16 ± 14 (85.5 %)
8 | 5.7 ± 1.3 (22.5 %) | 0.0257 ± 0.0160 (62.7 %) | 3.1 ± 0.7 (23.1 %) | 0.0029 ± 0.0160 (126.9 %) | 23 ± 9 (38.0 %)
9 | 4.9 ± 0.9 (18.1 %) | 0.0116 ± 0.0061 (52.9 %) | 2.7 ± 0.5 (18.4 %) | 0.0005 ± 0.0061 (862.7 %) | 30 ± 9 (31.3 %)
10 | 5.6 ± 1.2 (20.5 %) | 0.0144 ± 0.0065 (45.6 %) | 2.9 ± 0.6 (19.3 %) | 0.0029 ± 0.0065 (137.9 %) | 30 ± 8 (27.9 %)
tot. | 12.2 ± 13.7 (112.3 %) | 0.0128 ± 0.0089 (69.3 %) | 6.6 ± 7.9 (120.0 %) | 0.0034 ± 0.0076 (224.3 %) | 21 ± 11 (54.6 %)

figure 3. histogram related to the eda_mean parameter (eda signal).

3.3. skin temperature parameters
contrarily to the previously reported parameters, skin temperature (table 4) shows measures that vary slowly over time (with the exception of the standard deviation value, which evidences a higher variability, up to 87.1 % in the intra-subject results), hence providing a more precise footprint of a subject in a determined condition. on the other hand, this could mean that the wrist skin temperature has a slow dynamic; thus, it could be unsuitable to rapidly mirror possible changes in the subject's psycho-physical conditions. none of the parameters extracted from the skt signal can be considered normally distributed according to the shapiro-wilk test; the reason could be the same indicated for the other signals (i.e. the narrow test population). an example is reported in the histogram of figure 4 for the skt_mean parameter (whose distribution skewness is markedly < 0).

table 4. variability of skt parameters. results are reported as μ ± σ (cv).

subject | skt_mean in °c | skt_std in °c | skt_min in °c | skt_max in °c
1 | 33.20 ± 1.58 (4.8 %) | 0.17 ± 0.12 (71.7 %) | 32.81 ± 1.87 (5.7 %) | 33.46 ± 1.53 (4.6 %)
2 | 31.99 ± 2.20 (6.7 %) | 0.21 ± 0.15 (68.8 %) | 31.58 ± 2.13 (6.8 %) | 32.37 ± 2.29 (7.1 %)
3 | 34.02 ± 1.25 (3.7 %) | 0.07 ± 0.04 (56.5 %) | 33.90 ± 1.28 (3.8 %) | 34.20 ± 1.25 (3.6 %)
4 | 32.67 ± 1.62 (4.9 %) | 0.13 ± 0.07 (56.3 %) | 32.37 ± 1.70 (5.3 %) | 32.89 ± 1.55 (4.7 %)
5 | 33.05 ± 1.02 (3.1 %) | 0.12 ± 0.07 (60.5 %) | 32.80 ± 1.15 (3.5 %) | 33.30 ± 0.98 (3.0 %)
6 | 32.14 ± 2.04 (6.4 %) | 0.09 ± 0.04 (46.0 %) | 31.88 ± 2.17 (6.8 %) | 32.27 ± 2.03 (6.3 %)
7 | 32.02 ± 2.07 (6.5 %) | 0.10 ± 0.09 (87.1 %) | 31.83 ± 2.05 (6.4 %) | 32.22 ± 2.21 (6.9 %)
8 | 35.22 ± 0.24 (0.7 %) | 0.06 ± 0.04 (55.6 %) | 35.09 ± 0.24 (0.7 %) | 35.34 ± 0.24 (0.7 %)
9 | 35.32 ± 0.24 (0.7 %) | 0.05 ± 0.03 (76.5 %) | 35.23 ± 0.26 (0.7 %) | 35.43 ± 0.27 (0.8 %)
10 | 33.95 ± 1.09 (3.2 %) | 0.05 ± 0.02 (39.6 %) | 33.86 ± 1.08 (3.2 %) | 34.07 ± 1.14 (3.4 %)
tot. | 33.36 ± 1.82 (5.4 %) | 0.11 ± 0.09 (83.7 %) | 33.14 ± 1.91 (5.8 %) | 33.55 ± 1.80 (5.4 %)

figure 4. histogram related to the skt_mean parameter (skt signal).
4. discussion and conclusions
the use of wearable devices in a growing number of application fields emphasizes the need to consider the metrological aspects determining the reliability of measurement results. in recent years, ai algorithms have seen unprecedented developments, providing extremely powerful tools to support decision-making processes and thus prevent serious health issues in a variety of digital health applications, among which affective state classification, e-health, smart living environments and ambient assisted living. in order to obtain good performance from ai algorithms, data accuracy and data quality are of utmost importance, along with data variability, which undoubtedly represents a key factor in this scenario. furthermore, only a part of the variability can be minimised (e.g. by correcting the sensor positioning in the data acquisition phase); another part is inevitable and uncontrollable, given that there is a physiological variability whose values cannot be disregarded. it is a matter of fact that all the steps of the measurement chain influence the final results of ai algorithms: from the sensor uncertainties to the data variability and accuracy, all influence the reliability of the output information. in a real-life context, this can result in a corrupted output with poor information quality, which could then be used for different final purposes (e.g. support to decision-making processes in digital health scenarios) [29]. the results obtained in this study have highlighted the physiological data variability among different subjects and within the same subject, considering data acquired by means of a wearable device. in particular, the hrv and eda signals were analysed first, observing that hrv parameters in the time domain exhibit a higher inter-subject variability for the measures describing their variations over time (i.e. standard deviation values) than for the average values, which seem more stable. furthermore, eda signals appear to be extremely changeable even in the same subject, evidencing the intrinsically variable nature of this type of data. indeed, this type of signal refers to wrist skin conductance instead of finger conductance, the site generally used for standard measurements. previous studies underlined that the wrist measurement is not reliable if compared to finger/palm skin conductivity [44]; in fact, thermoregulatory processes would affect the results more than psychophysiological phenomena, which on the contrary are more influential at the standard measurement sites [45]. on the other hand, other types of physiological data, such as skt, can show a quite limited variability, proving more stable than hrv and eda. however, the slow changes could be problematic in following, for example, the subject's reactions to external stimuli.
the observed variability can represent a double-edged sword: on the one hand, subjective diversity can hinder a clear-cut classification by means of ai algorithms; on the other hand, a test population wide enough to include all the characteristic variability is required to develop robust ai algorithms that do not suffer from overfitting issues. it is worth underlining that the test population of this study is quite limited (10 subjects); therefore, the normality condition, verified through the shapiro-wilk test, could be non-optimally satisfied. it would be interesting to repeat this kind of analysis on some publicly available large-scale databases (e.g. wesad [46], k-emocon [47], tiles [48], etc.), in order to examine the data variability results on wider populations (possibly also including different age groups), considering longer acquisition intervals and different measuring devices and acquisition conditions (e.g. free-living conditions, which would probably increase the variability). additionally, future studies may include one or more ai algorithms to compare the performance achieved on two datasets with different variabilities, in order to demonstrate the high impact of data variability on the outputs of ai algorithms, which can consequently impact decision-making processes.

acknowledgement
a. p. and s. s. gratefully acknowledge the support of the italian ministry for economic development (mise) in the implementation of the financial programme "research and development projects for the implementation of the national smart specialization strategy - dm mise 5 marzo 2018", project "chaalenge", proposal no. 493, project nr. f/180016/0105/x43.

references
[1] g. cosoli, s. spinsante, l. scalise, wrist-worn and chest-strap wearable devices: systematic review on accuracy and metrological characteristics, measurement, apr. 2020, p. 107789. doi: 10.1016/j.measurement.2020.107789
[2] s. farivar, m. abouzahra, m.
ghasemaghaei, wearable device adoption among older adults: a mixed-methods study, int. j. inf. manage., vol. 55, 2020, p. 102209. doi: 10.1016/j.ijinfomgt.2020.102209
[3] n. morresi, s. casaccia, m. sorcinelli, m. arnesano, g. m. revel, analysing performances of heart rate variability measurement through a smartwatch, 2020 ieee international symposium on medical measurements and applications (memea), bari, italy, 1 june - 1 july 2020, pp. 1-6. doi: 10.1109/memea49120.2020.9137211
[4] s. levikari, a. immonen, m. kuisma, h. peltonen, m. silvennoinen, h. kyröläinen, p. silventoinen, improving energy expenditure estimation in wrist-worn wearables by augmenting heart rate data with heat flux measurement, ieee trans. instrum. meas., vol. 70, 2021, 8 pp. doi: 10.1109/tim.2021.3053070
[5] g. cosoli, g. iadarola, a. poli, s. spinsante, learning classifiers for analysis of blood volume pulse signals in iot-enabled systems, ieee metroind4.0&iot, rome, italy, 7-9 june 2021, pp. 307-312. doi: 10.1109/metroind4.0iot51437.2021.9488497
[6] s. cecchi, a. piersanti, a. poli, s. spinsante, physical stimuli and emotions: eda features analysis from a wrist-worn measurement sensor, ieee int. workshop on computer aided modeling and design of communication links and networks (camad), pisa, italy, 14-16 september 2020, pp. 1-6. doi: 10.1109/camad50429.2020.9209307
[7] c. john, z. mueller, l. prayaga, k. devulapalli, a neural network model to identify relative movements from wearable devices, proc. of ieee southeastcon, raleigh, nc, usa, 28-29 march 2020, vol. 2, pp. 1-4. doi: 10.1109/southeastcon44009.2020.9368261
[8] n. mahadevan, y. christakis, j. di, j. bruno, y. zhang, e. ray dorsey, w. r. pigeon, l. a. beck, k. thomas, y. liu, m. wicker, c. brooks, n. shaafi kabiri, j. bhangu, c. northcott, s. patel, development of digital measures for nighttime scratch and sleep using wrist-worn wearable devices, npj digit. med., vol. 4, no. 1, 2021, pp. 1-10. doi: 10.1038/s41746-021-00402-x
[9] r. dai, c. lu, m. avidan, t. kannampallil, respwatch: robust measurement of respiratory rate on smartwatches with photoplethysmography, proc. of the international conference on internet-of-things design and implementation, charlottesville, va, usa, 18-21 may 2021, pp. 208-220. doi: 10.1145/3450268.3453531
[10] j. chen, m. abbod, j. s. shieh, pain and stress detection using wearable sensors and devices - a review, sensors (switzerland), vol. 21, no. 4, 2021, mdpi ag, pp. 1-18. doi: 10.3390/s21041030
[11] k. bayoumy, m. gaber, a. elshafeey, o. mhaimeed, e. h. dineen, f. a. marvel, s. s. martin, e. d. muse, m. p. turakhia, kh. g. tarakji, m. b. elshazly, smart wearable devices in cardiovascular care: where we are and how to move forward, nat. rev. cardiol., 18 (2021), pp. 581-599. doi: 10.1038/s41569-021-00522-7
[12] s. cajigal, as consumer sleep trackers gain in popularity, sleep neurologists seek more data to assess how to use them in
practice, neurol. today, vol. 21, no. 9, 2021, pp. 8-14. doi: 10.1097/01.nt.0000752872.13869.cc
[13] c. p. wen, j. p. m. wai, c. h. chen, w. gao, can weight loss be accelerated if we exercise smarter with wearable devices by subscribing to personal activity intelligence (pai)?, lancet reg. heal. eur., vol. 5, 2021, p. 100133 (8 pp.). doi: 10.1016/j.lanepe.2021.100133
[14] l. scalise, g. cosoli, wearables for health and fitness: measurement characteristics and accuracy, proc. of the 2018 ieee international instrumentation and measurement technology conference (i2mtc): discovering new horizons in instrumentation and measurement, houston, tx, usa, 14-17 may 2018, pp. 1-6. doi: 10.1109/i2mtc.2018.8409635
[15] j. ringrose, r. padwal, wearable technology to detect stress-induced blood pressure changes: the next chapter in ambulatory blood pressure monitoring?, american journal of hypertension, vol. 34, no. 4, nlm (medline), 2021, pp. 330-331. doi: 10.1093/ajh/hpaa158
[16] g. quer, j. m. radin, m. gadaleta, k. baca-motes, l. ariniello, e. ramos, v. kheterpal, e. j. topol, s. r. steinhubl, wearable sensor data and self-reported symptoms for covid-19 detection, nat. med., vol. 27, no. 1, 2021, pp. 73-77. doi: 10.1038/s41591-020-1123-x
[17] j. budd, b. s. miller, e. m. manning, v. lampos, m. zhuang, m. edelstein, g. rees, v. c. emery, m. m. stevens, n. keegan, m. j. short, d. pillay, e. manley, i. j. cox, d. heymann, a. m. johnson, r. a. mckendry, digital technologies in the public-health response to covid-19, nat. med., vol. 26, no. 8, aug. 2020, pp. 1183-1192. doi: 10.1038/s41591-020-1011-4
[18] m. l. millenson, j. l. baldwin, l. zipperer, h. singh, beyond dr. google: the evidence on consumer-facing digital tools for diagnosis, diagnosis, vol. 5, no. 3, 2018, pp. 95-105. doi: 10.1515/dx-2018-0009
[19] g. cosoli, s. spinsante, l.
scalise, wearable devices and diagnostic apps: beyond the borders of traditional medicine, but what about their accuracy and reliability?, ieee instrum. meas. mag., vol. 24, no. 6, september 2021, pp. 89-94. doi: 10.1109/mim.2021.9513636
[20] m. cukurova, c. kent, r. luckin, artificial intelligence and multimodal data in the service of human decision-making: a case study in debate tutoring, br. j. educ. technol., vol. 50, no. 6, 2019, pp. 3032-3046. doi: 10.1111/bjet.12829
[21] a. chang, the role of artificial intelligence in digital health, springer, cham, 2020, pp. 71-81. doi: 10.1007/978-3-030-12719-0_7
[22] e. b. hansen, s. bøgh, artificial intelligence and internet of things in small and medium-sized enterprises: a survey, j. manuf. syst., vol. 58, 2021, pp. 362-372. doi: 10.1016/j.jmsy.2020.08.009
[23] m. borghetti, p. bellitti, n. f. lopomo, m. serpelloni, e. sardini, f. bonavolonta, validation of a modular and wearable system for tracking fingers movements, acta imeko, vol. 9, no. 4, 2020, pp. 157-164. doi: 10.21014/acta_imeko.v9i4.752
[24] a. razzaque, a. hamdan, artificial intelligence based multinational corporate model for ehr interoperability on an ehealth platform, studies in computational intelligence, vol. 912, springer, 2021, pp. 71-81. doi: 10.1007/978-3-030-51920-9_5
[25] t. zhang, a. el ali, c. wang, a. hanjalic, p. cesar, corrnet: fine-grained emotion recognition for video watching using wearable physiological sensors, sensors (switzerland), vol. 21, no. 1, 2021, pp. 1-25. doi: 10.3390/s21010052
[26] s. mekruksavanich, a. jitpattanakul, biometric user identification based on human activity recognition using wearable sensors: an experiment using deep learning models, electronics, vol. 10, no. 3, 2021, pp. 1-21. doi: 10.3390/electronics10030308
[27] k. tsoi, k. yiu, h. lee, h.-m. cheng, t.-d. wang, j.-c. tay, b. w. teo, y. turana, a. a. soenarta, g. p. sogunuru, s. siddique, y.-c. chia, j. shin, c.-h. chen, j.-g. wang, k. kario, the hope asia network, applications of artificial intelligence for hypertension management, journal of clinical hypertension, vol. 23, no. 3, blackwell publishing inc., 2021, pp. 568-574. doi: 10.1111/jch.14180
[28] e. anceschi, g. bonifazi, m. c. de donato, e. corradini, d. ursino, l. virgili, savemenow.ai: a machine learning based wearable device for fall detection in a workplace, studies in computational intelligence, vol. 911, springer science and business media deutschland gmbh, 2021, pp. 493-514. doi: 10.1007/978-3-030-52067-0_22
[29] s. casaccia, g. m. revel, g. cosoli, l. scalise, assessment of domestic well-being: from perception to measurement, ieee instrum. meas. mag., vol. 24, no. 6, 2021, pp. 58-67. doi: 10.1109/mim.2021.9513641
[30] a. poli, g. cosoli, l. scalise, s. spinsante, impact of wearable measurement properties and data quality on adls classification accuracy, ieee sens. j., vol. 21, no. 13, july 2021, pp. 14221-14231. doi: 10.1109/jsen.2020.3009368
[31] c. sáez, n. romero, j. a. conejero, j. m. garcía-gómez, potential limitations in covid-19 machine learning due to data source variability: a case study in the ncov2019 dataset, j. am. med. informatics assoc., vol. 28, no. 2, 2021, pp. 360-364. doi: 10.1093/jamia/ocaa258
[32] m. garbarino, m. lai, d. bender, r. w. picard, s. tognetti, empatica e3 - a wearable wireless multi-sensor device for real-time computerized biofeedback and data acquisition, proc. of the 4th int.
conference on wireless mobile communication and healthcare: transforming healthcare through innovations in mobile and wireless technologies (mobihealth), athens, greece, 3-5 november 2014, pp. 39-42. doi: 10.1109/mobihealth.2014.7015904
[33] f. scardulla, l. d'acquisto, r. colombarini, s. hu, s. pasta, d. bellavia, a study on the effect of contact pressure during physical activity on photoplethysmographic heart rate measurements, sensors (switzerland), vol. 20, no. 18, 2020, pp. 1-15. doi: 10.3390/s20185052
[34] e. yuda, m. shibata, y. ogata, n. ueda, t. yambe, m. yoshizawa, j. hayano, pulse rate variability: a new biomarker, not a surrogate for heart rate variability, j. physiol. anthropol., 2020, pp. 1-4. doi: 10.1186/s40101-020-00233-x
[35] n. pinheiro, r. couceiro, j. henriques, j. muehlsteff, i. quintal, l. goncalves, p. carvalho, can ppg be used for hrv analysis?, proc. annu. int. conf. ieee eng. med. biol. soc. (embs), orlando, fl, usa, 16-20 august 2016, pp. 2945-2949. doi: 10.1109/embc.2016.7591347
[36] g. cosoli, a. poli, l. scalise, s. spinsante, heart rate variability analysis with wearable devices: influence of artifact correction method on classification accuracy for emotion recognition, proc. of the 2021 ieee int. instrumentation and measurement technology conference (i2mtc): discovering new horizons in instrumentation and measurement, glasgow, united kingdom, 17-20 may 2021, pp. 1-6. doi: 10.1109/i2mtc50364.2021.9459828
[37] m. p. tarvainen, j.-p. niskanen, j. a. lipponen, p. o. ranta-aho, p. a. karjalainen, kubios hrv – heart rate variability analysis software, comput. methods programs biomed., vol. 113, no. 1, 2014, pp. 210-220. doi: 10.1016/j.cmpb.2013.07.024
[38] j. lee, m. kim, h. k. park, i. y. kim, motion artifact reduction in wearable photoplethysmography based on multi-channel sensors with multiple wavelengths, sensors (switzerland), vol. 20, no. 5, 2020, 1493, pp. 1-14. doi: 10.3390/s20051493
[39] h. lee, h. chung, j. w. kim, j. lee, motion artifact identification and removal from wearable reflectance photoplethysmography using piezoelectric transducer, ieee sens. j., vol. 19, no. 10, 2019, pp. 3861-3870. doi: 10.1109/jsen.2019.2894640
[40] m. nabian, y. yin, j. wormwood, k. s. quigley, l. f. barrett, s. ostadabbas, an open-source feature extraction tool for the analysis of peripheral physiological data, ieee j. transl. eng. heal. med., vol.
6, 2018, pp. 1-11. doi: 10.1109/jtehm.2018.2878000
[41] a. greco, g. valenza, e. p. scilingo, electrodermal phenomena and recording techniques, advances in electrodermal activity processing with applications for mental health, springer international publishing, 2016, pp. 1-17. doi: 10.1007/978-3-319-46705-4_1
[42] s. s. shapiro, m. b. wilk, an analysis of variance test for normality (complete samples), biometrika, vol. 52, no. 3/4, 1965, pp. 591-611. doi: 10.2307/2333709
[43] k. kasos, z. kekecs, l. csirmaz, s. zimonyi, f. vikor, e. kasos, a. veres, e. kotyuk, a. szekely, bilateral comparison of traditional and alternate electrodermal measurement sites, psychophysiology, vol. 57, no. 11, 2020, e13645, pp. 1-15. doi: 10.1111/psyp.13645
[44] n. milstein, i. gordon, validating measures of electrodermal activity and heart rate variability derived from the empatica e4 utilized in research settings that involve interactive dyadic states, front. behav. neurosci., vol. 14, 2020, 13 pp. doi: 10.3389/fnbeh.2020.00148
[45] l. menghini, e. gianfranchi, n. cellini, e. patron, m. tagliabue, m. sarlo, stressing the accuracy: wrist-worn wearable sensor validation over different conditions, psychophysiology, vol. 56, no. 11, 2019, e13441, 15 pp. doi: 10.1111/psyp.13441
[46] p. schmidt, a. reiss, r. duerichen, c. marberger, k. van laerhoven, introducing wesad, a multimodal dataset for wearable stress and affect detection, proc. of the 20th acm international conference on multimodal interaction, boulder, co, usa, 16-20 october 2018, pp. 400-408. doi: 10.1145/3242969.3242985
[47] c. y. park, n. cha, s. kang, a. kim, a. habib khandoker, l. hadjileontiadis, a. oh, y. jeong, u. lee, k-emocon, a multimodal sensor dataset for continuous emotion recognition in naturalistic conversations, sci. data, vol. 7, no. 1, 2020, 293, pp. 1-16. doi: 10.1038/s41597-020-00630-y
[48] k. mundnich, b. m. booth, m. l'hommedieu, t. feng, b. girault, j. l'hommedieu, m. wildman, s. skaaden, a. nadarajan, j. l. villatte, t. h. falk, k. lerman, e. ferrara, s. narayanan, tiles-2018, a longitudinal physiologic and behavioral data set of hospital workers, sci. data, vol. 7, no. 1, 2020, p. 354. doi: 10.1038/s41597-020-00655-3

digital tools as part of a robotic system for adaptive manipulation and welding of objects
acta imeko issn: 2221-870x september 2022, volume 11, number 3, 1-8
zuzana kovarikova1,2, frantisek duchon1, andrej babinec1, dusan labat2
1 institute of robotics and cybernetics, faculty of electrical engineering and information technology, slovak university of technology in bratislava, ilkovicova 3, 812 19 bratislava, slovakia
2 vuez, a.
s., hviezdoslavova 35, 934 39 levice, slovakia
section: research paper
keywords: industrial robot; simulation; digital twin; robotized welding; sql server; hmi; database; 2d scanner; 3d scanner; industrial web; automatized measuring system; industry 4.0
citation: zuzana kovarikova, frantisek duchon, andrej babinec, dusan labat, digital tools as part of a robotic system for adaptive manipulation and welding of objects, acta imeko, vol. 11, no. 3, article 11, september 2022, identifier: imeko-acta-11 (2022)-03-11
section editor: zafar taqvi, usa
received march 26, 2022; in final form july 22, 2022; published september 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this article was written thanks to the support under the operational program of integrated infrastructure for the following projects: "robotic workplace for intelligent welding of small volume production (izvar)", code itms2014+: 313012p386, and "digitization of robotic welding workplace (diroz)", code itms2014+: 313012s686 (both co-financed by the european regional development fund).
corresponding author: zuzana kovarikova, e-mail: kovarikova@vuez.sk

abstract
the aim of this article is to describe the design and verification of digital tools usable for sharing information within a team of workers and machines that manage and execute production carried out by a robotic system. the basic method is to define the structure of the digital tools and the data flows necessary to enable the exchange of data needed to perform robotic manipulation and robotic welding of variable products, minimizing at the same time strenuous human activity. the proposed data structure interconnects a set of intelligent sensors with the control of 18 degrees of freedom of 3 robotic manipulators, a welding device, and a production information system. part of the work was also to verify the functionality of the proposed structure of the digital tools. in the first phase, simulations using a digital twin of the prototype workplace for robotic manipulation and robotic welding were performed to verify the functionality of the digital tools. subsequently, the digital tools were tested in the environment of a real prototype workplace for robotic manipulation and robotic welding. the simulation results and the data obtained from the prototype tests proved the functionality of the digital tools, inclusive of the production information system.

1. introduction
traditionally, it has been difficult to use automation in small batch production with high variation in volumes and a high mix of products [1]. there is great potential for small batch producers to use flexible automation in manufacturing operations to remain competitive [1]. to achieve flexibility, it is crucial to design the structure of the digital tools so that the greatest possible adaptability is achieved with tools that require a minimum conversion time of the technological units. the interaction between automation controllers and computer-aided design/manufacturing (cad/cam) systems capable of offline programming is generally a way to decrease production downtime due to programming [1]. the technology components modelled in cad tools are an important input for the creation of the digital twin. the digital twin (dt) is the technical core for establishing a cyber-physical production system (cpps) in the context of industry 4.0 [2]. the importance of the digital twin, which is characterized by cyber-physical integration, is increasingly emphasized by both academia and industry [3]. through data modelling, data are stored according to certain criteria and logic, which can facilitate data processing [3]. theories of service modelling are useful for the identification, analysis, and upgrade of services [3]. simulation theories are useful for operation analysis (e.g., structural strength analysis and kinetic analysis) in a simulation environment [3].
digital twins are more than just pure data; they include algorithms which describe their real counterpart and take decisions in the production system [4]. dts can make a production process more reliable, flexible, and predictable [3]. above all, dts can visualize and update the real-time status, which is useful for monitoring a production process [3].

2. robotic system for adaptive manipulation and welding of objects
the robotic system for adaptive manipulation and welding of objects shown in figure 1 consists of three robots: two robots for manipulating the to-be-welded parts and one robot for tungsten inert gas (tig) welding. the welding robot can scan the welding gap and the surface parameters of the final weld using a 2d laser scanner installed on the robot body. there are also two warehouses of parts equipped with 3d laser scanners, each installed above the concerned warehouse. the 3d laser scanners give feedback to the control system for the robotic positioning of the to-be-welded parts. the source of energy needed for welding is a robotic welding machine equipped with a digital interface that enables its parameters to be set from the central control system by digital communication. adaptation of the workplace to the handling of parts and products of various shapes is enabled by a quick-change system of robotic grippers; each robotic manipulator has one robotic gripper stand with six positions for setting up effectors of different types. the robotic system also includes an automated system for measuring process quantities, which is connected to the production and quality information system by digital communication. coordination of the workplace components is ensured by a central control system, which is a mediator and provider of the process variables acquired at the workplace. the process variables are provided to the human-machine interface and to the sql database, where digital data are registered and archived. power and media distribution systems provide operating power distribution, air distribution for the operation of grippers, and inert gas distribution to create a protective atmosphere.

figure 1. robotic system for adaptive manipulation and welding of objects.
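the feedback loop between the 3d laser scanners and the part-handling robots described above can be sketched in code: from the point cloud of a parts warehouse, the control software selects the topmost graspable part and derives a pick pose for the manipulator (the removal process itself is detailed in section 3.2 below). this is a minimal illustrative sketch in python/numpy under assumed data layouts (an n x 3 point cloud in millimetres, in the warehouse frame) and an assumed selection heuristic; it is not the authors' implementation.

```python
import numpy as np

def pick_point_from_cloud(cloud_xyz: np.ndarray, neighbourhood_mm: float = 25.0):
    """Select a pick point from a warehouse point cloud (N x 3, in mm).

    Assumed strategy: take the highest point (largest z) as the topmost part
    and average its local neighbourhood to get a noise-robust grasp centre.
    """
    top = cloud_xyz[int(np.argmax(cloud_xyz[:, 2]))]   # topmost scanned point
    # points within a small horizontal radius of the topmost point
    d_xy = np.linalg.norm(cloud_xyz[:, :2] - top[:2], axis=1)
    patch = cloud_xyz[d_xy < neighbourhood_mm]
    centre = patch.mean(axis=0)                        # grasp centre estimate
    # surface normal of the patch via PCA (direction of smallest variance)
    cov = np.cov((patch - centre).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    normal = eigvecs[:, 0]                             # eigenvector of min eigenvalue
    if normal[2] < 0:                                  # orient the normal upwards
        normal = -normal
    return centre, normal                              # pose target for the gripper
```

in a real deployment the centre and normal would still have to be transformed into the robot base frame before being sent to the manipulator controller.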
3. digital tools and flow of digital data
the diagram in figure 2 shows the digital tools and the flow of digital data. an important source as well as consumer of the data is the prototype of the workplace itself. the data of the robotized process are read, and the required parameters are set, using a programmable logic controller, which is the central control system of the robotic system for adaptive manipulation and welding of objects.

figure 2. digital tools and data flow in the prototype of the robotic system for adaptive manipulation and welding of objects.

3.1. robot controllers
there are three robot controllers in the prototype of the robotic system for adaptive manipulation and welding of objects. two controllers control the robots for the manipulation of the to-be-welded objects, one of them also handling the manipulation of products after the welding operation. the third robot controller controls the trajectory of the tool centre point, i.e. the tungsten electrode tip.

3.2. robotic removal of to-be-welded parts guided by 3d laser scanners
robotic positioning of the to-be-welded parts, removed from the parts warehouse to a position close to the welding position, is performed automatically based on the feedback from the 3d laser scanners. each robotic manipulator has one 3d laser scanner able to obtain point cloud data characterizing the state of the warehouse of stored parts before every single removal. this automatic robotic manipulation process is shown in figure 3.

3.3. 2d laser measurement system
the welding robot is equipped with a 2d laser sensor that reads the data needed to evaluate the geometry of the weld gap. the measured data are used to correct the weld gap, to generate the welding trajectory, and to automate the quality control of the performed welds.

3.4. ftp server
the ftp server contains data in file format. the cad data corresponding to the design of the prototype workplace (including the design of the variable robotic grippers and of the support constructions for setting up parts in the warehouses) are stored in this data repository. data on the required design of the positioned parts, the to-be-welded parts, and the welded parts defining the desired product are also stored here. the ftp server also contains simulation files of the robotic system for adaptive manipulation and welding of objects as well as thermodynamic simulation files. in addition, the ftp server stores files with measured data obtained from the 2d laser scanner and from thermovision camera measurements, as well as historical trends from the automated system for process measurements and photo documentation.

3.5. sql server
the database of the robotic system for adaptive manipulation and welding of objects is implemented and operated in the ms sql server environment. the database contains records of robotic welds, data on the required technological parameters of robotic welding, and measured data of process variables.

3.6. web server
the web server of the robotic system for adaptive manipulation and welding of objects provides data from the ms sql database through web pages, in the form of tables and of the behaviour of selected quantities in graphical form. it also allows the welding technologist to enter the required values of the welding parameters to optimize robotic welding.

3.7. human-machine interface
the human-machine interface (figure 4) for controlling and monitoring the state of the robotic system for adaptive manipulation and welding of objects provides windows for setting the robotic welding parameters and for monitoring the robotic welding process, a control panel for controlling the prototype workplace, and tools for viewing the measured process variables in both tabular and graphical forms. the human-machine interface is installed on operator panels and on computers with visualization immediately next to the prototype workplace.
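to make the data flow described in sections 3.5 and 3.6 concrete, the sketch below logs weld records and sampled process variables to a relational database and reads them back, as the web portal would for a technologist's report. the paper's system runs on ms sql server; python's built-in sqlite3 module is used here only as a stand-in, and all table and column names are illustrative assumptions, not the project's actual schema.

```python
import sqlite3

# illustrative schema; the real system runs on MS SQL Server
SCHEMA = """
CREATE TABLE IF NOT EXISTS weld (
    weld_id     INTEGER PRIMARY KEY,
    sample_name TEXT,
    started_at  TEXT
);
CREATE TABLE IF NOT EXISTS process_sample (
    weld_id     INTEGER REFERENCES weld(weld_id),
    t_s         REAL,   -- time from weld start, s
    voltage_v   REAL,   -- welding voltage
    current_a   REAL    -- welding current
);
"""

def log_weld(db, sample_name, samples):
    """Insert one weld record plus its sampled process variables."""
    cur = db.execute(
        "INSERT INTO weld (sample_name, started_at) VALUES (?, datetime('now'))",
        (sample_name,))
    weld_id = cur.lastrowid
    db.executemany(
        "INSERT INTO process_sample VALUES (?, ?, ?, ?)",
        [(weld_id, t, u, i) for (t, u, i) in samples])
    db.commit()
    return weld_id

db = sqlite3.connect(":memory:")
db.executescript(SCHEMA)
wid = log_weld(db, "tube_butt_weld_01", [(0.0, 12.1, 148.0), (0.1, 12.3, 151.0)])
# the web-portal view: time series of voltage/current for one weld
rows = db.execute(
    "SELECT t_s, voltage_v, current_a FROM process_sample WHERE weld_id = ?",
    (wid,)).fetchall()
```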
3.8. quality control
the quality control of the final product is implemented in two ways. the first is an automated weld inspection performed immediately after welding by the robots, using the 2d laser scanner installed on the welding robot; the automated quality control data are recorded and archived automatically via measurement data files. the second is a manual quality control of the final product performed by a person, who enters the control results into pre-prepared protocols archived on the ftp server.

4. simulation of the prototype robotic system for adaptive manipulation and welding of objects
simulation of both the robotic positioning of parts and the robotic welding precedes the testing of the feasibility of obtaining a quality product in the prototype. for this simulation, a digital twin of the robotic system for adaptive manipulation and welding of objects is used. the advantage of verifying the design by means of a digital twin lies in the possibility of tuning a correct synchronization of the movements of the three robotic manipulators, which operate in a common working space during robotic positioning and welding of the product. in this way, it is possible to prevent potential collision events and to optimize the cycle time of the welding and handling processes. in the digital twin, it is also possible to verify the correctness of the design of the robotic grippers. when simulating the robotic manipulation of a given part, it is necessary to take into account its geometry and weight. dynamic events that occur during the positioning of welded parts can also be simulated in the digital twin. figure 5 shows the verification of the automated robotic grip of a selected robotic gripper from the quick-change robotic gripper system. attachments of robotic grippers of various types can be verified by simulation via a digital twin containing a quick-change system of robot effectors. before testing the robotic welding in the prototype workplace, a thermodynamic simulation is performed, simulating the propagation of heat from the weld site into the welded body; a render of the thermodynamic simulation is shown in figure 6. the thermodynamic simulations are compared with the temperature data measured by a thermovision camera during welding tests in the prototype workplace and help to optimize the preparation of the specifications of the robotic tig welding parameters.

figure 5. simulation of changing the robot's grippers in the digital twin.
figure 6. thermodynamic simulation of tig-welding two tubes together.

5. simulation and testing outputs
the process of robotic positioning and robotic welding was verified by simulation for both fillet welds and butt welds. the design of the robotic positioning of the final product into the robotic ultrasound diagnostic workplace was also verified by means of the digital twin. figure 7 shows the robotic manipulation of two to-be-welded flat sheets (top) and the robotic welding while performing a fillet weld on them (bottom). with the same robotic fingers, the sheets were robotically positioned and held also when performing a butt weld, as shown in figure 8.

figure 7. top: simulation of robotic manipulation of two to-be-welded flat sheets. bottom: simulation of robotic holding of two to-be-welded flat sheets.
figure 8. robotic positioning of to-be-welded parts in a prototype robotic workplace; position before weld-gap correction based on data obtained by the 2d laser scanner.
testing of the robotic positioning of to-be-welded parts in the prototype robotic workplace showed sliding of the sheets in this type of fingers, which had a negative impact on the feasibility and quality of the weld. for this reason, a different type of fingers was designed, as shown in figure 9. during the design of these fingers, the ansys calculation program was used, in which the design was optimized so that the finger was as light as possible and at the same time sufficiently strong. figure 9 shows the designed robotic finger before the optimization (left) and after the optimization (right). the described robotic system was designed for adaptive manipulation and welding; thus, with the help of the robotic manipulators, it is possible to position and hold objects of different sizes and shapes during robotic welding. the designs of all considered scenarios of robotic positioning are verified in advance in the digital twin of the workplace of robotic manipulation and robotic welding. figure 10 shows the output of the simulation of robotic manipulation and robotic welding of cylindrical objects by realization of a butt weld. the upper part shows a simulation of the simultaneous positioning of the three robotic manipulators during the welding. after welding, the welding robot is repositioned to its home position by the central control system. the first robotic manipulator is instructed to open the gripper that held the part during the welding process. subsequently, with the second robotic manipulator, the final product is robotically positioned in the output warehouse; the output of this robotic positioning simulation is shown in the lower part of figure 10. after obtaining suitable robotic trajectories of both robotic manipulators for positioning and holding the to-be-welded parts, as well as the trajectories of the welding robot, the parameters obtained in the digital twin were verified in the prototype robotic positioning and welding workplace. photographs from the testing of the robotic process in the workplace prototype are shown in figure 11. the upper part shows the robotic positioning of the to-be-welded parts and the automated measurement of the weld gap by the laser scanner before its correction. after automatic correction of the weld gap, synchronized positioning of the robotic manipulators holding the parts and robotic welding is performed, as shown in the lower part of figure 11. the robotic workplace of handling and welding is followed by the workplace of robotic ultrasonic inspection, for proving the quality of the welds by a non-destructive method. one of the robotic manipulators of the welding workplace is used for positioning the welded and cooled products in the workplace of robotic ultrasound diagnostics, as shown in figure 12.

figure 9. design of the robotic fingers. left: before optimization in ansys. right: after optimization in ansys.
figure 10. top: simulation of robotic holding during welding of two to-be-welded cylindrical objects. bottom: simulation of robotic manipulation of the welded cylindrical object.
figure 11. cylindrical objects before their automatic weld-gap correction and during robotized welding in the workspace.
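the weld-gap evaluation performed by the 2d laser scanner (section 3.3, figures 8 and 11) can be illustrated with a short sketch: given one scanline profile z(x) across the joint, the gap is detected as the contiguous run of points lying below the sheet surfaces, and its width is fed back for correction. this is a minimal numpy sketch under assumed units (mm) and a simple threshold heuristic; the system's actual detection logic is not described in the paper.

```python
import numpy as np

def weld_gap_width(x_mm: np.ndarray, z_mm: np.ndarray, drop_mm: float = 0.5):
    """Estimate the weld-gap width from one 2D scanner profile.

    x_mm: lateral positions across the joint; z_mm: measured heights.
    The gap is taken as the region where the profile drops more than
    drop_mm below the median surface height.
    """
    surface = np.median(z_mm)                 # nominal sheet surface height
    in_gap = z_mm < surface - drop_mm         # boolean mask of gap points
    idx = np.flatnonzero(in_gap)
    if idx.size == 0:
        return 0.0                            # no detectable gap
    return float(x_mm[idx[-1]] - x_mm[idx[0]])   # gap width in mm

# usage: compare against the nominal gap and send the difference to the
# handling robots as a position correction, e.g.
# correction = weld_gap_width(x, z) - nominal_gap_mm
```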
the positioning processes are synchronized with each other. after performing the robotic ultrasound diagnostics, the tested product is again positioned by a robotic manipulator and placed in the warehouse of quality products or of failures, depending on the results of the implemented non-destructive inspection. figure 13 shows the robotic manipulation of the to-be-tested object in the real workspace.

figure 12. robotic manipulation of the to-be-ultrasonic-diagnostic-tested objects in the digital twin.
figure 13. robotic manipulation of the tested objects in the robotic ultrasonic diagnostic workspace.

6. testing the functionality of the prototype digital data tools
testing of the robotic manipulation and robotic welding, as well as testing of the functionality of the digital tools as an effective means of data exchange between the workplace and the workplace operators, welding technologists, quality control workers, designers, robot programmers, the central control system, and the production management, was carried out in the prototype workplace of the robotic system for adaptive manipulation and welding of objects. the prototype of the robotic system for adaptive manipulation and welding of objects is shown in figure 14.

figure 14. testing of robotic manipulation and robotic welding in the prototype of the robotic system for adaptive manipulation and welding of objects.

in the prototype of the robotic system for adaptive manipulation and welding of objects, several welded objects of different sizes and shapes were tested; one of them is shown in figure 15. using the digital twin of the workspace, it was possible to simulate their robotic positioning and welding. the digital tools, as part of the robotic system for adaptive manipulation and welding of objects, allow data about each tested sample to be recorded automatically and stored in the sql database. from the database, the measured values of the process variables can be displayed directly at the prototype workplace on the hmi system monitor. at the same time, these measured values are available to the welding technologist, to optimize future robotic welding procedures, as well as to the quality control staff, via a web interface in both tabular and graphical forms. the measured values of welding voltage and welding current from the robotic welding of the test sample are shown in graphical form in figure 16; the graph is drawn from the web application of the prototype.

figure 15. one type of welded samples.
figure 16. graph of the measured values of welding voltage (blue) and welding current (violet) for the test sample in figure 15.

7. conclusion
the research presented in this article shows that digital tools, as part of a robotic system for adaptive manipulation and welding of objects, can be effectively used in modelling and in design verification by simulations through a digital twin of the prototype robotic workplace. the exchange of digital data takes place between the components of the prototype, consisting of one welding robot, two handling robots equipped with a quick-change robotic gripper system, 3d scanners, a 2d scanner, an automated process variable measurement system, an sql database, an ftp server, and a web portal. implementation of the digital tools in the prototype makes it possible to adapt the workplace to the production of various products of different shapes and sizes.
in this prototype, the web portal allows comfortable entering and mining of the data characterizing the process of robotic manipulation and robotic welding.

acknowledgement
this article was written thanks to the support under the operational program of integrated infrastructure for the following projects: "robotic workplace for intelligent welding of small volume production (izvar)", code itms2014+: 313012p386, and "digitization of robotic welding workplace (diroz)", code itms2014+: 313012s686 (both co-financed by the european regional development fund).

references
[1] m. lofving, p. almstrom, c. jarebrant, b. wadman, m. widfeldt, evaluation of flexible automation for small batch production, procedia manufacturing, 2018, pp. 177-184. doi: 10.1016/j.promfg.2018.06.072
[2] ch. liu, p. jiang, w. jiang, web-based digital twin modeling and remote control of cyber-physical production systems, robotics and computer-integrated manufacturing, 2020, pp. 1-16. doi: 10.1016/j.rcim.2020.101956
[3] f. tao, h. zhang, a. liu, a. y. c. nee, digital twin in industry: state-of-the-art, ieee transactions on industrial informatics, vol. 15, no. 4, april 2019, pp. 2405-2415. doi: 10.1109/tii.2018.2873186
[4] r. ferro, h. sajjad, r. e. c. ordonez, steps for data exchange between real environment and virtual simulation environment, iccms '21, 25-27 june 2021, melbourne, vic, australia, isbn 978-1-4503-8979-2/21/06. doi: 10.1145/3474963.3474988

introductory notes for the acta imeko special issue on 'innovative signal processing and communication techniques for measurement and sensing systems'
acta imeko issn: 2221-870x march 2022, volume 11, number 1, 1-3
md. zia ur rahman1
1 department of electronics and communication engineering, koneru lakshmaiah education foundation, k l university, green fields, guntur-522303, a.p., india
section: editorial
citation: md. zia ur rahman, introductory notes for the acta imeko special issue on 'innovative signal processing and communication techniques for measurement and sensing systems', acta imeko, vol. 11, no. 1, article 3, march 2022, identifier: imeko-acta-11 (2022)-01-03
received march 30, 2022; in final form march 30, 2022; published march 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: md. zia ur rahman, e-mail: mdzr55@gmail.com

dear readers,
the rapid developments in signal processing and communication techniques make measurement and sensing systems more accurate and facilitate high-resolution sensing data output. this broadens the market prospects and demands; as a result, measurement and sensing systems have become a key element in any complex engineering system. innovations in signal processing and communication methods make sensing systems fast, intelligent and knowledge-based. these innovations give measurement and sensing systems higher accuracy and faster detection ability, and lower the implementation cost.
this has led to the possibility of developing dynamic, smart sensing systems. the scope of this special issue focuses on innovative signal processing and communication techniques for measurement and sensing systems, identifying new perspectives and highlighting potential research issues and challenges. specifically, this special issue demonstrates how the emerging technologies could be used in future smart sensing systems. the topics of interest in the context of measurement and sensing systems include antenna measurements, artificial intelligence, beamforming techniques, body area networks, embedded processors, image sensors and processing, internet of things, knowledge-based systems, machine learning algorithms, medical signal analysis, sensor data processing, vlsi architectures, and many more thrust areas. this special issue consists of 15 full-length papers in the context of measurement and sensing suitable for publication in acta imeko.

in the paper 'mitigation of spectrum sensing data falsification attack using multilayer perception in cognitive radio networks', the issue of spectrum scarcity and low spectrum utilization in wireless communication systems is addressed. falsified spectrum sensing data received from malicious users result in inaccurate global decisions about spectrum availability. this work proposes a multilayer perceptron that measures statistical features of the incoming signals to identify false data in cooperative spectrum sensing.

the paper titled 'evaluation on effect of alkaline activator on compaction properties of red mud stabilized by ground granulated blast slag' details comprehensive compaction tests performed on virgin red mud (rm), ground granulated blast slag (ggbs) stabilized rm, and alkaline-activated ggbs stabilized rm samples. the effect of an alkaline activator on the compaction properties of ggbs stabilized rm is investigated in this research. when standard and modified proctor compaction tests were conducted for various combinations of rm and ggbs, the compaction curves indicated a substantial difference in the maximum dry density and the optimal moisture content as the ggbs percentage and the ratio of naoh to na2sio3 were varied.

the paper 'multipath routing protocol based on backward routing with optimal fuzzy logic in medical telemetry systems for physiological data measurements', by suryadevara kranthi et al., proposes a novel safe multi-way approach for trustworthy data transfer that depends on the quality of service. multi-path routing is also supported by the ad hoc on-demand backward routing protocol with optimal fuzzy logic (ofl). by generating rules in ofl and choosing an optimal rule, the hybridization with the bat optimization strategy delivers the best route. the end-to-end delay, the packet delivery ratio, and other criteria are used to assess the efficiency of the proposed technique.

the fourth paper, 'a machine learning based sensing and measurement framework for timing of volcanic eruption and categorization of seismic data', authored by vijay souri maddila et al., notes that the circumstances and factors that govern explosive volcanic ejection are unclear and that there is currently no efficient approach to predict when a volcanic explosive ejection will terminate.
the authors create a decisiveness measure d to analyze the uniformity of the groups supplied by machine learning models, using supervised machine learning approaches such as support vector machines (svm), random forest (rf), logistic regression (lr), and gaussian process classifiers (gpc). the measured end-date derived by seismic information classification for both volcanic systems is two to four months later than the end-dates determined by the earliest instance of visual eruption.

the paper 'fire sm: new dataset for anomaly detection of fire in video surveillance' seeks to aid the growth of this particular research area by introducing the fire sm dataset, a large and diverse new dataset. furthermore, a precise estimation in early fire detection using an indicator, the average precision, can yield additional information. in this paper, two existing common methodologies were compared with different anomaly detection methods that provide an efficient solution for discovering fire incidents.

in the paper 'extended buffer zone algorithm to reduce rerouting time in biotelemetry systems using sensing', routing strategies for mobile ad hoc networks (manets) with connection breakdowns, induced by frequent node movement, measurement, and a dynamic network topology, are discussed. many ideas have been presented by researchers to shorten the rerouting time. buffer zone routing (bzr) is one such solution, which splits a node's transmission region into a safe zone adjacent to the node and a hazardous zone towards the edge of the broadcast range. the energy consumption of the nodes is reduced when routing decisions are made promptly; the transfer time is lowered and the routing efficiency is improved in the wider bzr safe region. the proposed extension corrects flaws in the current algorithm and fills in gaps, reducing the time necessary for manet rerouting.

in the paper 'multi-input multi-output antenna measurements with super wide bandwidth for wireless applications using isolated t stub and defected ground structure', pradeep vinaik kodavanti et al. offer the concept of defected ground structures for improving the antenna's radiation properties, particularly in a multi-input multi-output (mimo) arrangement. in both single and array configurations, the suggested antenna architecture with a slightly flared-out feed system is constructed and analyzed with a defected ground. a t-shaped stub is employed in this work together with the defected ground structure; the t stub is introduced, along with the defected ground, to improve the characteristics of the mimo configuration. simulations are run on an electromagnetic modeling tool, and parameters such as the reflection coefficient, voltage standing wave ratio, gain, radiation pattern, and current distribution plots are measured.

in the paper 'analysis of multiband rectangular patch antenna with defected ground structure using reflection coefficient measurement', the computer simulation technology (cst) microwave studio software is used to develop and simulate a new quad-band antenna that can function in four different frequency bands. various parameters were improved using parametric analysis to enhance the antenna's performance, and various antenna parameters were measured throughout this investigation.
the paper entitled ‘analysis of peak-to-average power ratio in filter bank multicarrier with offset quadrature amplitude modulation systems using partial transmit sequence with shuffled frog leap optimization technique’ addresses the filter bank multicarrier with offset quadrature amplitude modulation (fbmc-oqam) scheme, which tackles the problem of low adjacent channel leakage ratio and has recently stimulated the interest of numerous researchers. however, the fbmc system's energy measuring efficiency is harmed by the problem of a high peak-to-average power ratio (papr) measurement. this paper proposes the partial transmit sequence (pts) with shuffled frog leap (sfl) phase optimization method to reduce the large papr measurement, which is a major drawback of the fbmc-oqam system. matlab is used to measure the experimental parameters and assess the results. the paper ‘beamforming in cognitive radio networks using partial update adaptive learning algorithm’ investigates cognitive radio technology as a means of increasing bandwidth efficiency: frequencies that are not used in any way are employed in this cognitive radio by utilising some of the most powerful resources. one of the key advantages of cognitive radio signals is that they can identify different channels in the spectrum and change the frequencies that are often used. in this research, cognitive radio was developed utilising the beamforming approach, with power allocation as a strategy for the unlicensed transmitter that is completely based on sensing results. it is based on the status of the principal user in a different cognitive radio network, whereas an unlicensed transmitter uses a single antenna and changes the power transmitted. in the paper ‘efficient deep learning based data augmentation techniques for enhanced learning on inadequate medical imaging data,’ a unique strategy of data augmentation for medical imaging was developed, which could partially solve the problem of the limited availability of chest x-ray data. on the original data, a preprocessing step was performed to reduce the image size from 1024x1024x1 to 128x128x1. from datasets generated by a simple generative adversarial network (gan) and a transfer learning gan, the cnn learnt considerably faster and had improved accuracy, and this could be a one-stop solution for the limited availability of chest x-ray data. the paper ‘image reconstruction using refined res-unet model for mir’ proposes a unique content-based res-unet framework for the reconstruction of the input medical image, which performs an efficient image retrieval task. resnet50 is used as an encoder in the proposed work to conduct the feature vector encoding. the proposed model's performance is assessed using two benchmark datasets, ild and via/elcap-ct. the suggested model outperforms traditional approaches, as evidenced by the comparison findings. in the paper ‘multilayer feature fusion using covariance for remote sensing scene classification,’ stacked covariance is presented as a new technique for scene categorization utilising remote sensing data that combines visual information from various layers of a cnn. in the proposed stacked covariance (sc) based classification framework, feature extraction is conducted first using a pretrained cnn model, followed by feature fusion using covariance. each feature is the covariance of two separate feature maps, and these features are used to classify data using an svm.
the proposed sc approach regularly outperforms other classification methods and delivers better results. prof. navarun gupta of the university of bridgeport presented the paper entitled ‘spectrum sensing using energy measurement in wireless telemetry networks using logarithmic adaptive learning,’ in which a spectrum sensing method is used to identify primary user signals in cognitive radios. the least logarithmic absolute difference (llad) algorithm, in which noise strengths are modified at licenced users' sensing points, is proposed to avoid interference between primary and secondary users. estimated noise signals are removed using the proposed approach. to determine the threshold value, the probability of detection (pod) and probability of false alarm (pofa) are assessed. by sharing the unused spectrum, the proposed energy measurement-based spectrum sensing method is effective in remote health care monitoring and medical telemetry applications. finally, the paper ‘classification of brain tumours using artificial neural networks’ deals with magnetic resonance (mr) brain images for medical analysis and diagnosis. these images are typically acquired in radiology departments to assess the anatomy as well as the general physiological processes of the human body. magnetic resonance imaging employs a strong magnetic field, its gradients, and radio waves to create images of human organs. blood clots or damaged blood vessels in the brain can also be detected using mr brain imaging. artificial neural networks (ann) are used to classify whether an mr brain image contains a benign or malignant tumor; the major goal and purpose of the study is to determine if the tumors are benign or malignant. we thank all the authors who contributed to this special issue, as well as all the reviewers, and extend special thanks and gratitude to prof. francesco lamonaca, acta imeko's editor in chief, for his tireless and patient assistance in making this special issue possible. at the same time, our sincere thanks go to dr. dirk röske, associate editor, for his assistance and contribution at various levels in the production process. i am honoured to have served as guest editor for this issue, and i hope that it brings forward the recent advancements in signal processing and communication techniques in the context of measurement and sensing systems. md. zia ur rahman, guest editor. a preliminary study on an image analysis based method for lowest detectable signal measurements in pulsed wave doppler ultrasounds giorgia fiori1, fabio fuiano1, andrea scorza1, jan galo2, silvia conforto1, salvatore a. sciuto1
1 engineering department, roma tre university, via della vasca navale 79, 00146 rome, italy 2 clinical engineering service, irccs children hospital bambino gesù, piazza di sant’onofrio 4, 00165 rome, italy section: research paper keywords: lowest detectable signal; pw doppler; automatic doppler sensitivity measurement method; quality control; doppler flow phantom citation: giorgia fiori, fabio fuiano, andrea scorza, jan galo, silvia conforto, salvatore andrea sciuto, a preliminary study on an image analysis based method for lowest detectable signal measurements in pulsed wave doppler ultrasounds, acta imeko, vol. 10, no. 2, article 18, june 2021, identifier: imeko-acta-10 (2021)-02-18 section editor: giuseppe caravello, università degli studi di palermo, italy received january 18, 2021; in final form march 8, 2021; published june 2021 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: giorgia fiori, e-mail: giorgia.fiori@uniroma3.it 1. introduction pulsed wave (pw) doppler ultrasound (us) is a doppler technique that allows displaying the spectrograms of blood flow velocity from selected depths in tissues. in particular, real-time blood velocity can be accurately measured within a sample volume (sv) at a specified depth (adjustable by the operator) from the transducer/medium interface. in this regard, despite the lack of a shared worldwide standard on us equipment testing, performance evaluation of doppler systems is a currently investigated issue in the scientific research field [1]-[9]. a great number of doppler test parameters are recommended by the main medical us professional bodies for inclusion in quality control (qc) protocols [10]-[12]. among these parameters, the lowest detectable signal may be considered mandatory to assess pw doppler system performance, since in the scientific literature it is identified as an index of pw doppler sensitivity [1], [10], [12]-[14]. in particular, the lowest detectable signal in the spectrogram image (ldsimg) has been defined by the authors as the minimum signal level that can be clearly distinguished from noise [15]. in [16], the maximum sensitivity has been referred to as the measurement of the weakest doppler shift signal (linked to the ldsimg through the cosine of the insonification angle) that a us system can detect and display on the pw image above the electronic noise. in clinical practice, sensitivity outlines the ability to detect doppler signals from small vessels at increasing distances from the us probe. the goals of the present study are (a) the improvement of a novel automatic algorithm, namely the automatic doppler sensitivity measurement method (adsmm), firstly presented in [15], for the ldsimg evaluation by means of a commercial flow phantom; abstract: nowadays, doppler system performance evaluation is a widespread issue because a shared worldwide standard is still awaited. among the recommended doppler test parameters, the lowest detectable signal could be considered mandatory in quality control (qc) protocols for pulsed wave (pw) doppler. this parameter is defined as the minimum signal level that can be clearly distinguished from noise and is therefore considered as related to pw doppler sensitivity.
the present study focuses on proposing and validating a novel image analysis based method for the estimation of the lowest detectable signal in the spectrogram image (ldsimg), namely the automatic doppler sensitivity measurement method (adsmm), as well as on comparing its results with the outcomes retrieved from the naked eye doppler sensitivity method (nedsm), based on the mean judgment of three independent observers. data have been collected from a doppler flow phantom, through three ultrasound systems for general purpose imaging, equipped with two linear array probes each and with two configuration settings. results are globally compatible among the proposed methods, us systems and settings. further studies could be carried out on a higher number of us diagnostic systems, doppler frequencies and observers, as well as with different probe and phantom models. (b) its validation through the comparison with the outcomes provided by the naked eye doppler sensitivity method (nedsm) carried out by three observers without clinical expertise. data have been collected with different settings from three us systems equipped with two linear array us probes each, which worked at similar doppler frequencies. in section 2 the estimation rationale underlying the ldsimg parameter definition will be described. in section 3 the experimental setup used in this study and the adsmm and nedsm implementations will be discussed. in section 4 the uncertainty analysis of both methods through monte carlo simulation (mcs) will be carried out. in section 5 the results will be presented and discussed on the basis of the comparison between the ones obtained from the adsmm and the nedsm, respectively. finally, in the concluding section the major achievements and the future developments of the research hereby presented will be reported. 2. ldsimg estimation in the current scientific literature, a shared consensus on the test protocols for the ldsimg estimation is still awaited. nevertheless, factors that affect the lowest detectable signal can be objectively identified. they can be classified in two main groups, according to the device from which they can be adjusted: a) us system (doppler frequency f_0, system settings, sample volume length svl, sample volume depth svd, insonification angle θ); b) test device (blood mimicking fluid bmf, velocity v and reflector density). more in detail, f_0, v and θ directly affect the doppler shift f_D that determines the pw spectrogram, according to the well-known (approximate) relationship: f_D ≅ (2 f_0 v / c) cos θ (1). furthermore, as the svl increases within the flow, the doppler shift spread increases, since echoes from a higher number of reflectors with different velocities are produced for the same flow. on the other hand, a bmf with a higher particle density produces a more intense doppler signal for the same flow velocity, due to the higher number of reflectors. moreover, the spectrogram intensity depends on the position of the sv in the tube, according to the flow velocity profile. finally, the svd determines the spectrogram attenuation: echoes from greater depths are affected by higher attenuation and are therefore represented in a weaker spectrogram, until it can no longer be distinguished from noise.
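to make (1) concrete, the short sketch below evaluates the doppler shift for values representative of the setup described in the next section (a 6.25 mhz probe, the phantom nominal velocity of 30 cm·s-1 and an insonification angle of about 51°); this is an illustrative python sketch, not the authors' matlab implementation, and c = 1540 m·s-1 is assumed as the tmm speed of sound.

```python
import math

def doppler_shift(f0_hz: float, v_ms: float, theta_deg: float, c_ms: float = 1540.0) -> float:
    """Approximate PW Doppler shift of (1): f_D = (2 * f0 * v / c) * cos(theta)."""
    return 2.0 * f0_hz * v_ms * math.cos(math.radians(theta_deg)) / c_ms

# 6.25 MHz probe, 30 cm/s flow, 51 degree insonification angle:
print(doppler_shift(6.25e6, 0.30, 51.0))  # about 1.53 kHz
```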
from the consideration of the abovementioned factors, it is possible to evaluate the ldsimg from the sum of two contributions: (a) the attenuation Δα, due to the echo path length into the phantom tissue mimicking material (tmm), and (b) the doppler signal attenuation ΔG due to the echo reduction in the spectrogram, from maximum intensity to minimum. therefore, ldsimg can be expressed through the following mathematical formulation: ldsimg = Δα + ΔG ≅ 2 α f_0 svd + (g_max − g_min) (2), where α is the (mean) attenuation coefficient in the tmm (usually expressed in db·cm-1·mhz-1), f_0 is the probe doppler frequency, g_max is the maximum doppler gain before non-negligible noise appears in the spectrogram image and g_min is the minimum doppler gain corresponding to the lack of signal (i.e., the spectrogram intensity is very close to zero). moreover, if the us system does not provide the doppler gain in db (e.g., arbitrary units, au), a unit conversion is needed. 3. materials and methods in this work, the image analysis based method for the ldsimg estimation, namely the automatic doppler sensitivity measurement method (adsmm), implemented in the matlab environment, has been improved and validated. this has been carried out on the spectrogram images collected from three us systems, equipped with two linear array us probes each, by means of a commercial flow phantom. 3.1. experimental setup the gammex optimizer® 1425a [17] doppler flow phantom has been used to acquire the pw doppler images from a sloped tube within a tmm at a specified continuous flow rate. the device consists of a hydraulic circuit filled with a bmf, a tmm and an electric flow controller (table 1). the ldsimg measurement has been carried out at two different pw doppler settings, i.e., set i and set ii, and a single doppler frequency for each linear array probe (table 2). the lowest stable flow phantom velocity has been set, and the sv size as well as the insonification angle have been kept constant. conversely, the tube inner diameter could not be changed because of the phantom design. therefore, during the spectrogram acquisitions only the svd has been varied. data have been collected for six svd values spaced 3 mm apart. figure 1 shows the sample volume depths for each probe. such depths have been set at different values because the non-detectability of the spectrogram intensity varies according to the probe.

table 1. doppler flow phantom characteristics [17].
us phantom model: gammex optimizer® 1425a
scanning material: water-based mimicking gel
tube inner diameter (nominal): 5 mm
attenuation: (0.50 ± 0.05) db·cm-1·mhz-1
tmm speed of sound: (1540 ± 10) m·s-1
bmf speed of sound: (1550 ± 10) m·s-1
flow rate (nominal): 2.6 ml·s-1
velocity setting (nominal): 30 cm·s-1

table 2. b-mode and pw doppler us system settings (parameter: set i | set ii).
dynamic range (db): maximum | a: maximum; b: 0.8·maximum; c: 0.6·maximum
doppler frequency (mhz): a1: 6.25; a2: 6.25; b1: 5; b2: 6.3; c1: 5; c2: 6.2
wall filter (hz): minimum | 100 – 150
svl (mm): a: 1.5; b: 2.0; c: 1.5
svd range (mm): a1: 64-79; a2: 52-67; b1: 61-76; b2: 58-73; c1: 73-88; c2: 70-85
insonification angle (°): ≈ 51
set i = raw working conditions; set ii = best working conditions as provided by the specialist. number 1, associated to the corresponding us system a, b or c, indicates the probe with the lower doppler frequency, while number 2 indicates the probe with the higher doppler frequency.
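as a numerical illustration of (2), the sketch below combines the phantom attenuation of table 1 with hypothetical gain readings (g_max = 20 db and g_min = 6 db are invented for the example and are not measured values); again a python sketch under these assumptions, not the authors' matlab code.

```python
def lds_img_db(alpha_db_cm_mhz: float, f0_mhz: float, svd_cm: float,
               g_max_db: float, g_min_db: float) -> float:
    """LDS_img of (2): two-way TMM attenuation plus the Doppler gain span (dB)."""
    delta_alpha = 2.0 * alpha_db_cm_mhz * f0_mhz * svd_cm  # echo path attenuation
    delta_g = g_max_db - g_min_db                          # gain span in the spectrogram
    return delta_alpha + delta_g

# Phantom attenuation 0.50 dB/cm/MHz, probe A1 at 6.25 MHz, SVD = 6.4 cm,
# hypothetical gains G_max = 20 dB and G_min = 6 dB:
print(lds_img_db(0.50, 6.25, 6.4, 20.0, 6.0))  # 40 dB + 14 dB = 54 dB
```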
in the acquisition process, the us probes have been kept still on the phantom scanning surface through a holder, ensuring a constant insonification angle throughout the whole acquisition time. 3.2. automatic doppler sensitivity measurement method an ad hoc acquisition protocol has been designed and implemented to estimate the ldsimg parameter through the automatic determination of g_min and g_max for each svd value. pw images have been acquired by varying the doppler gain from 0 db (or 0 au) to the us system maximum adjustable gain, with steps of 2 db (or 2 au). after the data acquisition, the adsmm processes the acquired spectrograms for the automatic determination of the g_min and g_max values through two steps: 1) for the g_min computation, two regions of interest, rois (signal) and roin (noise), with the same size (810 px × 144 px), are drawn on the pw image in correspondence of the spectrogram and the noise, respectively (figure 2a). the mean gray level value μ_n inside the roin is calculated to estimate the noise level, while rois is firstly subdivided into 3240 cells of 6 px × 6 px and the mean gray level values μ_s,i of each cell are computed, resulting in a new matrix rois2. afterwards, an snr matrix is obtained, whose elements are given by the following expression: snr_i = μ_s,i / μ_n (3). the snr matrix is computed for increasing doppler gain values. among them, the adsmm determines g_min as the first gain value for which the number of cells with snr_i ≥ 2 is higher than 1 % of the total number of cells. 2) g_max is determined from the roin computed in the first step (figure 2b). similarly, it is firstly subdivided into cells of 6 px × 6 px and the mean gray level values μ_n,i of each cell are evaluated, resulting in a new matrix roin2 obtained for increasing doppler gain values. among them, the adsmm determines g_max as the first gain value for which the number of cells with μ_n,i ≥ 3 is higher than 1 % of the total number of cells. 3.3. naked eye doppler sensitivity method the nedsm is based on an in-house matlab function implemented to allow the three observers to express their judgment on the g_min and g_max values from the acquired pw spectrograms. tests have been performed independently, without variations of the environmental lighting conditions, and keeping a fixed monitor distance. more in detail, the nedsm randomly provides the observers with the pw spectrograms, allowing them to indicate which pw images are associated to the g_min and g_max values for each svd according to the six linear probes. the pw image order has been randomized, and the observers have been requested to repeat the test six times for the study of the subjects' inter- and intra-variability. the compatibility between the results obtained through the adsmm and the nedsm has been evaluated according to the following condition [18]: |μ_adsmm − μ_nedsm| ≤ δ_adsmm + δ_nedsm (4), where μ_adsmm and μ_nedsm are the mean ldsimg values estimated through the adsmm and the nedsm, respectively, while δ_adsmm and δ_nedsm are the corresponding uncertainties. 4. monte carlo simulation as already experienced in other studies [19]-[25], mcs is a powerful tool to estimate the uncertainty and assess the robustness of measurements processed by software. two different mcs series have been carried out to estimate the ldsimg uncertainty for the adsmm and the nedsm, respectively. the number of iterations for each mcs has been set at 10^5 cycles. the doppler probe frequency f_0 uncertainty has been considered negligible because of the narrow bandwidth of the transmitted pulse.
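before moving to the uncertainty analysis, the cell-based tests of section 3.2 can be illustrated with a minimal numpy sketch; it assumes grayscale rois and roin arrays already cropped from the pw image, leaves out the roi extraction, the gain sweep and any au-to-db conversion, and is not the authors' matlab code.

```python
import numpy as np

CELL = 6  # cell side in pixels, as in Section 3.2

def cell_means(roi: np.ndarray) -> np.ndarray:
    """Mean grey level of each non-overlapping CELL x CELL block of an ROI."""
    h = roi.shape[0] // CELL * CELL
    w = roi.shape[1] // CELL * CELL
    blocks = roi[:h, :w].reshape(h // CELL, CELL, w // CELL, CELL)
    return blocks.mean(axis=(1, 3))

def gmin_condition(roi_s: np.ndarray, roi_n: np.ndarray) -> bool:
    """Step 1: more than 1 % of signal cells with SNR_i >= 2, SNR_i as in (3)."""
    snr = cell_means(roi_s) / roi_n.mean()
    return (snr >= 2.0).mean() > 0.01

def gmax_condition(roi_n: np.ndarray) -> bool:
    """Step 2: more than 1 % of noise cells with mean grey level >= 3."""
    return (cell_means(roi_n) >= 3.0).mean() > 0.01

# Sweeping the Doppler gain in 2 dB (or 2 AU) steps, G_min is the first gain
# for which gmin_condition() holds and G_max the first for which
# gmax_condition() holds.
```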
in table 3 all the distributions assigned to the variables influencing the ldsimg expressed in (2) are listed: the tmm attenuation α has been set as a normal distribution whose standard deviation (sd) has been retrieved from the phantom datasheet by supposing a 95 % confidence level, while the depth z has been set as a uniform distribution with mean value equal to the svd and sd estimated from the sample volume depth resolution. on the other hand, uniform distributions have been assigned to the adsmm and the nedsm minimum and maximum gains. g_min,adsmm and g_max,adsmm have been automatically determined by the adsmm, while g_min,nedsm and g_max,nedsm are the mean values of the three observers' judgement. as regards the adsmm gain standard deviations, they have been estimated considering the doppler gain resolution.

figure 1. schematic representation of the sample volume depths according to each us probe. number 1, associated to the corresponding us system a, b or c, indicates the probe with the lower doppler frequency, while number 2 indicates the probe with the higher doppler frequency.

figure 2. example of rois and roin on pw spectrograms with svd at 73 mm, in set i configuration, for the automatic determination of a) g_min and b) g_max.

table 3. mcs distributions assigned to the variables influencing the ldsimg (parameter: distribution; unit; mean ± sd).
tmm attenuation α: normal; db·cm-1·mhz-1; 0.500 ± 0.025
depth z: uniform; cm; svd ± 0.3
g_min and g_max (adsmm): uniform; db or au (*); g_min,adsmm ± 0.6 and g_max,adsmm ± 0.6
g_min and g_max (nedsm): uniform; db or au (*); g_min,nedsm ± δ_min,nedsm and g_max,nedsm ± δ_max,nedsm
(*) the doppler gain measurement unit is db for us systems a and b, while au for us system c.

table 4. ldsimg results for the adsmm and the nedsm according to the us system, probe and configuration setting.
us system and probe; svd (mm): ldsimg (db) as set i adsmm | set i nedsm | set ii adsmm | set ii nedsm
probe a1
64: 54 ± 4 | 55 ± 4 | 52 ± 4 | 55 ± 5
67: 54 ± 4 | 55 ± 5 | 54 ± 4 | 56 ± 5
70: 54 ± 5 | 54 ± 5 | 54 ± 5 | 55 ± 5
73: 54 ± 5 | 53 ± 5 | 54 ± 5 | 54 ± 5
76: 53 ± 5 | 53 ± 5 | 54 ± 5 | 53 ± 5
79: 55 ± 5 | 54 ± 6 | 55 ± 5 | 53 ± 5
ldsimg,a1: 54 ± 5 | 54 ± 5 | 54 ± 5 | 54 ± 5
probe a2
52: 49 ± 4 | 51 ± 4 | 51 ± 3 | 50 ± 4
55: 48 ± 4 | 51 ± 5 | 48 ± 4 | 51 ± 4
58: 48 ± 4 | 50 ± 4 | 50 ± 4 | 51 ± 5
61: 48 ± 4 | 50 ± 5 | 46 ± 4 | 49 ± 5
64: 46 ± 4 | 48 ± 5 | 46 ± 4 | 47 ± 5
67: 48 ± 4 | 50 ± 5 | 48 ± 4 | 49 ± 5
ldsimg,a2: 48 ± 4 | 50 ± 5 | 48 ± 4 | 49 ± 5
probe b1
61: 53 ± 3 | 52 ± 4 | 51 ± 3 | 52 ± 4
64: 54 ± 3 | 52 ± 4 | 56 ± 3 | 58 ± 4
67: 56 ± 4 | 54 ± 5 | 57 ± 4 | 59 ± 4
70: 55 ± 4 | 55 ± 5 | 57 ± 4 | 59 ± 5
73: 54 ± 4 | 56 ± 4 | 55 ± 4 | 57 ± 5
76: 54 ± 4 | 57 ± 5 | 54 ± 4 | 56 ± 4
ldsimg,b1: 54 ± 4 | 54 ± 5 | 55 ± 4 | 57 ± 4
probe b2
58: 59 ± 4 | 57 ± 4 | 62 ± 4 | 64 ± 5
61: 54 ± 4 | 56 ± 5 | 56 ± 4 | 58 ± 5
64: 56 ± 4 | 58 ± 5 | 56 ± 4 | 58 ± 5
67: 58 ± 4 | 60 ± 5 | 58 ± 4 | 60 ± 5
70: 58 ± 5 | 60 ± 5 | 60 ± 5 | 61 ± 5
73: 58 ± 5 | 60 ± 5 | 60 ± 5 | 61 ± 5
ldsimg,b2: 57 ± 4 | 58 ± 5 | 59 ± 4 | 60 ± 5
probe c1
73: 61 ± 15 | 67 ± 19 | 62 ± 17 | 58 ± 17
76: 61 ± 16 | 67 ± 19 | 62 ± 17 | 58 ± 17
79: 61 ± 15 | 60 ± 17 | 62 ± 16 | 63 ± 18
82: 69 ± 15 | 70 ± 18 | 71 ± 16 | 67 ± 18
85: 61 ± 14 | 58 ± 15 | 56 ± 9 | 57 ± 13
88: 61 ± 13 | 60 ± 15 | 62 ± 13 | 61 ± 15
ldsimg,c1: 62 ± 15 | 64 ± 17 | 63 ± 15 | 61 ± 16
probe c2
70: 63 ± 13 | 64 ± 16 | 57 ± 8 | 59 ± 14
73: 63 ± 13 | 64 ± 16 | 57 ± 9 | 59 ± 14
76: 57 ± 8 | 57 ± 10 | 52 ± 6 | 54 ± 7
79: 54 ± 6 | 55 ± 7 | 56 ± 7 | 56 ± 7
82: 54 ± 6 | 56 ± 7 | 55 ± 6 | 56 ± 7
85: 55 ± 6 | 55 ± 6 | 55 ± 6 | 56 ± 7
ldsimg,c2: 58 ± 9 | 59 ± 11 | 55 ± 7 | 57 ± 10
in fact, the non-constant trend in [15], could have been probably due to the combination of both the further attenuation caused by the presence of nylon pins above the tube sections where pw spectrograms were acquired, and the higher doppler gain step (5 au) applied, resulting in a wrong identification of gmin and gmax. as regards the methods measurement uncertainty, it can be assessed that the low nedsm uncertainty values are probably due to an adaptation process that the observer went through during the tests despite the randomization of the pw spectrogram images. such issue led to a lowering of the intraobserver variability with respect to the inter-observer one. on the other hand, us system c shows the highest measurement uncertainties, independently from the method applied, because its doppler gain measurement unit is not provided in db, leading to the necessity of a specific conversion procedure that introduces a further uncertainty contribution. finally, the mean ldsimg (ldsimg,a1, ldsimg,a2, ldsimg,b1, ldsimg,b2, ldsimg,c1 and ldsimg,c2) and the corresponding uncertainty values for each linear array probe, computed for both the adsmm and the nedsm as well as for set i and set ii, have been reported in table 4 and shown in figure 4. it should be pointed out that such results always show compatibility among the two methods investigated and the configuration settings. the highest mean ldsimg value has been achieved for probe 1 of scanner c for both methods and configuration settings. besides, the lowest mean ldsimg value has been found in correspondence of the probe 2 of scanner a for both methods and configuration settings. therefore, the us system c seems to be the most sensitive among the diagnostic systems involved in the present study. a. adsmm – set i b. nedsm – set i c. adsmm – set ii d. nedsm – set ii figure 3. ldsimg outcomes obtained through the adsmm and the nedsm, for both linear array probes of each us system a, b and c according to the configuration settings (set i and set ii). acta imeko | www.imeko.org june 2021 | volume 10 | number 2 | 131 as a final remark, it should be pointed out that the physical model underlying both the adsmm and the nedsm proposed in the present work, is a differential model as it relies on the doppler gains difference, as in (2). this is undoubtedly an advantage when the observer expresses its own judgment looking at the pw spectrograms: in fact, the model differential nature allows the observer test to be carried out on a monitor with a different dynamic scale with respect to the diagnostic us system one, without affecting the overall results. such feature improves the nedsm robustness because tests can be carried out on different monitors. further considerations should be addressed about the adsmm and the nedsm application in qcs for the assessment of medical us systems, since a significant ldsimg reduction with respect to a baseline can be related to both a sensitivity loss (gmin tends to increase) and a noise rise (gmax tends to decrease). to this aim, the us scanner settings should be adjusted with care, together with the test object speed v in the phantom (e.g., the maximum velocity of the bmf in the flow phantom) and the measurement depth svd. in particular, both v and svd should be selected in order to assure the proper evaluation of gmin, since the doppler signal can be displayed anyway into the spectrogram for higher values of v and low attenuations. 
in this regard, high sensitivity probes may need deeper test objects embedded in high attenuation tmms (e.g., steady flows of bmf at higher depths). on the other hand, care should be paid to the test setup and its functioning, in order to avoid any deterioration of its components (e.g., variations in the tmm and/or bmf characteristics) being confused with a performance reduction of the us system. although the results shown in this work are very promising, all the above issues are worthy of further studies, aiming to provide a suitable and robust tool for qc technicians. 6. conclusions in the present work, the lowest detectable signal in the spectrogram image (ldsimg), an index for the pw doppler qc performance test, has been proposed and investigated, as it provides an indication of pw doppler sensitivity. its estimation has been carried out through an improved image analysis based algorithm, namely the automatic doppler sensitivity measurement method (adsmm), developed in the matlab environment, whose physical model has already been proposed in the literature. the adsmm validation has been performed by comparing its outcomes with the ones provided by the naked eye doppler sensitivity method (nedsm), carried out by three independent observers without clinical expertise. data have been collected from a doppler flow phantom, by means of three diagnostic systems equipped with two linear probes each, in two different us system settings (set i and set ii). globally, the results obtained through the adsmm and the nedsm are compatible, independently of the configuration setting. such compatibility suggests a good reliability of the adsmm in the ldsimg parameter estimation. among the future developments, further tests could be carried out (a) on a wider observers' sample, (b) on a higher number of us diagnostic systems available on the market, (c) with different probe doppler frequencies and (d) with different probe models (e.g., convex and phased array). in particular, the latter could be tested on different doppler phantom models with higher depth size and/or attenuation coefficient, to guarantee the correct estimation of the minimum doppler gain in the ldsimg expression. on the other hand, a physical model improvement could be expected in order to retrieve ldsimg values even if the doppler spectrogram is still displayed on the image in correspondence of the zero doppler gain. despite the promising results, further studies are going to be carried out in order to assess and improve the robustness and suitability of the ldsimg measurement in ultrasound qc. acknowledgement the authors wish to thank ge healthcare, hitachi healthcare and mindray medical for the hardware supply and the technical assistance in data collection.

figure 4. mean ldsimg values (ldsimg,a1, ldsimg,a2, ldsimg,b1, ldsimg,b2, ldsimg,c1 and ldsimg,c2) obtained through the adsmm and the nedsm, for both linear array probes of each us system a, b and c according to a) set i and b) set ii.

references [1] j. e. browne, a review of ultrasound quality assurance protocols and test devices, phys. med. 30 (2014), pp. 742-751. doi: 10.1016/j.ejmp.2014.08.003 [2] j. m. thijssen, m. c. van wijk, m. h. m. cuypers, performance testing of medical echo/doppler equipment, eur. j. ultrasound 15 (2002), pp. 151-164. doi: 10.1016/s0929-8266(02)00037-x [3] f. marinozzi, f. p. branca, f. bini, a.
scorza, calibration procedure for performance evaluation of clinical pulsed doppler systems, measurement 45 (2012), pp. 1334-1342. doi: 10.1016/j.measurement.2012.01.052 [4] f. marinozzi, f. bini, a. d’orazio, a. scorza, performance tests of sonographic instruments for the measure of flow speed, 2008 ieee international workshop on imaging systems and techniques, crete, greece, 10 – 12 september 2008. doi: 10.1109/ist.2008.4659939 [5] a. scorza, d. pietrobon, f. orsini, s. a. sciuto, a preliminary study on a novel phantom based method for performance evaluation of clinical colour doppler systems, proc. of the 22nd imeko tc4 international symposium & 20th international workshop on adc modelling and testing supporting world development through electrical & electronic measurements, iasi, romania, 14 – 15 september 2017. online [accessed 14 june 2021]. https://www.imeko.org/publications/tc4-2017/imeko-tc4-2017-033.pdf [6] a. scorza, s. conforto, c. d’anna, s. a. sciuto, a comparative study on the influence of probe placement on quality assurance measurements in b-mode ultrasound by means of ultrasound phantoms, open biomed. eng. j. 9 (2015), pp. 164-178. doi: 10.2174/1874120701509010164 [7] a. scorza, g. lupi, s. a. sciuto, f. bini, f. marinozzi, a novel approach to a phantom based method for maximum depth of penetration measurement in diagnostic ultrasound: a preliminary study, proc. of 2015 ieee international symposium on medical measurements and applications (memea), turin, italy, 7 – 9 may 2015. doi: 10.1109/memea.2015.7145230 [8] m. gaitan, j. geist, b. j. reschovsky, a. chijioke, characterization of laser doppler vibrometers using acousto-optic modulators, acta imeko 9 (2020) 5, pp. 361-364. doi: 10.21014/acta_imeko.v9i5.1001 [9] g. dinardo, l. fabbiano, g. vacca, how geometric misalignments can affect the accuracy of measurements by a novel configuration of self-tracking ldv, acta imeko 3 (2014) 4, pp. 26-31. doi: 10.21014/acta_imeko.v3i4.151 [10] american institute of ultrasound in medicine, performance criteria and measurements for doppler ultrasound devices, aium, 2002, isbn 1-930047-83-5. [11] institute of physical sciences in medicine, report no 70. testing of doppler ultrasound equipment, ipsm, 1994. [12] z. f. lu, n. j. hangiandreou, p. carson, clinical ultrasonography physics: state of practice, in: clinical imaging physics: current and emerging practice. e. samei, d. e. pfeiffer (editors). wiley blackwell, hoboken, nj, 2020, isbn 978-1-118-75345-3, pp. 261-286. [13] d. w. rickey, a. fenster, a doppler ultrasound clutter phantom, ultrasound med. biol. 22 (1996), pp. 747-766. doi: 10.1016/0301-5629(96)00045-2 [14] p. r. hoskins, simulation and validation of arterial ultrasound imaging and blood flow, ultrasound med. biol. 34 (2008), pp. 693-717. doi: 10.1016/j.ultrasmedbio.2007.10.017 [15] g. fiori, f. fuiano, a. scorza, j. galo, s. conforto, s. a. sciuto, lowest detectable signal in medical pw doppler quality control by means of a commercial flow phantom: a case study, proc. of the 24th imeko tc4 international symposium & 22nd international workshop on adc modelling and dac modelling and testing, palermo, italy, 14 – 16 september 2020. online [accessed 14 june 2021]. https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-63.pdf [16] e. j. boote, j. a. zagzebski, performance tests of doppler ultrasound equipment with a tissue and blood-mimicking phantom, j. ultrasound med. 7 (1988), pp. 137-147.
doi: 10.7863/jum.1988.7.3.137 [17] gammex, optimizer 1425a: ultrasound image analyzer for doppler and gray scale scanners. online [accessed 14 june 2021]. https://cspmedical.com/content/102-1086_doppler_user_manual.pdf [18] j. r. taylor, an introduction to error analysis: the study of uncertainties in physical measurements, university science books, sausalito, ca, usa, 1996, isbn 978-0-198-55707-4, pp. 13-44. [19] f. orsini, s. scena, c. d’anna, a. scorza, l. schinaia, s. a. sciuto, uncertainty evaluation of a method for the functional reach test evaluation by means of monte-carlo simulation, proc. of the 22nd imeko tc4 international symposium & 20th international workshop on adc modelling and testing supporting world development through electrical & electronic measurements, iasi, romania, 14 – 15 september 2017. online [accessed 14 june 2021]. https://www.imeko.org/publications/tc4-2017/imeko-tc4-2017-028.pdf [20] f. orsini, f. fuiano, g. fiori, a. scorza, s. a. sciuto, temperature influence on viscosity measurements in a rheometer prototype for medical applications: a case study, proc. of 2019 ieee international symposium on medical measurements and applications (memea), istanbul, turkey, 26 – 28 june 2019. doi: 10.1109/memea.2019.8802218 [21] g. fiori, f. fuiano, a. scorza, m. schmid, s. conforto, s. a. sciuto, ecg waveforms reconstruction based on equivalent time sampling, proc. of 2020 ieee international symposium on medical measurements and applications (memea), bari, italy, 1 june – 1 july 2020. doi: 10.1109/memea49120.2020.9137260 [22] f. vurchio, f. orsini, a. scorza, f. fuiano, s. a. sciuto, a preliminary study on a novel automatic method for angular displacement measurements in microgripper for biomedical applications, proc. of 2020 ieee international symposium on medical measurements and applications (memea), bari, italy, 1 june – 1 july 2020. doi: 10.1109/memea49120.2020.9137249 [23] g. fiori, f. fuiano, f. vurchio, a. scorza, m. schmid, s. conforto, s. a. sciuto, a preliminary study on a novel method for depth of penetration measurement in ultrasound quality assessment, proc. of the 24th imeko tc4 international symposium & 22nd international workshop on adc modelling and dac modelling and testing, palermo, italy, 14 – 16 september 2020. online [accessed 14 june 2021]. https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-62.pdf [24] g. fiori, f. fuiano, a. scorza, j. galo, s. conforto, s. a. sciuto, a preliminary study on the adaptive snr threshold method for depth of penetration measurements in diagnostic ultrasounds, appl. sci. 10 (2020). doi: 10.3390/app10186533 [25] a. lavatelli, e. zappa, uncertainty in vision based modal analysis: probabilistic studies and experimental validation, acta imeko 5 (2016), 4, pp. 37-48.
doi: 10.21014/acta_imeko.v5i4.426
fire sm: new dataset for anomaly detection of fire in video surveillance shital mali1, uday khot1 1 department of electronics and telecommunication, st. francis institute of technology, mumbai university, mumbai, india section: research paper keywords: anomalous; convolutional neural network; dataset; fire; smoke citation: shital mali, uday khot, fire sm: new dataset for anomaly detection of fire in video surveillance, acta imeko, vol. 11, no. 1, article 25, march 2022, identifier: imeko-acta-11 (2022)-01-25 section editor: md zia ur rahman, koneru lakshmaiah education foundation, guntur, india received november 29, 2021; in final form march 6, 2022; published march 2022 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: shital mali, e-mail: shital.mali@rait.ac.in 1. introduction surveillance cameras are widespread, and it is not feasible to have people actively monitoring them. in most instances, nearly all surveillance camera footage is unimportant, and only rare pieces of video are of real concern. thus, the key motivation for creating video and image based anomaly detection is to automatically locate the regions of a video or image which are irregular, and to mark them for human inspection. recent research on video/image anomaly identification is typically characterised by two phases: training videos are recorded under normal conditions, and anomalous events are then identified by examining new video. in order to define what is usual for a specific scene, it is important to have training footage of normal behaviour; an anomalous case then corresponds to a localized video section which is substantially dissimilar from what happens inside the training videos. it is more difficult to choose which attributes should count as very different, an issue that has also been handled in point-of-interest applications; such disparity can be due to many causes, most usually the remarkable appearance of the objects inside the video. notably, most researchers have reported on anomaly detection after experimentation [1], [2], [3], [4], and some have published their findings with different techniques [5], [6], [7]. few studies have discussed anomaly videos coming from only one or two scenes. the spatial position within a video instance can itself be an attribute attached for identification purposes: a scene that the detection algorithm identifies as anomalous in one instance may not be anomalous in another instance, and such cases need to be taken care of in this research area. a quality-wise distinct issue in one scene may also create difficulties in superimposed multiple scenes. this study focuses on the identification and analysis of single instances, a feature that has to be handled very specifically in a surveillance system. so, measurement technology plays a vital role in anomaly detection and surveillance applications. the formulation accounts for differences which ultimately act spatially.
the detection of anomalous activity in a video/image is directly related to the performance and accuracy of the detection algorithm, and there is always scope for improving an anomaly detection algorithm. abstract: tiny datasets of restricted range operations, as well as flawed assessment criteria, are currently stifling progress in video anomaly detection science. this paper aims at assisting the progress of this research topic, incorporating a wide and diverse new dataset known as fire sm. further, additional information can be derived by a precise estimation in early fire detection using the average precision indicator. in addition to the proposed dataset, the investigations under anomaly situations have been supported by results. in this paper, different anomaly detection methods that offer an efficient way to detect fire incidences have been compared with two existing popular techniques. the findings were analysed using average precision (ap) as a performance measure: the modified detector reaches about 78 % on the foggia dataset, compared to 61 % and 71 % for inceptionnet and firenet, respectively (see table 4). the proposed dataset can be useful in a variety of cases; the findings show that its crucial advantage is its diversity. a number of challenges arise when dealing with anomaly detection in fire related datasets. the shortcomings involve the lack of a dataset based solely on fire anomaly instances, the low resolution of existing datasets, and the variability of anomalies, plus a few more cases in which uncertainty, inconsistencies and loss in quality have been identified. the main focus of the paper is the detection of anomalies and the analysis of the results obtained after application to the experimental dataset, with the help of a few assessment indices. the introduction of a new dataset of early fire and smoke (refer to table 1) would be helpful in many applications. maintaining diversity in the dataset is a key point, as it tests anomaly detection from different directions and in more complex ways. 2. existing dataset fires are man-made hazards that inflict human, social and economic damage. early fire alarms and automatic approaches are important and useful for emergency recovery services to minimize these losses. existing fire alarm devices have been shown to be unreliable in numerous real-world situations. the vital disadvantage of a sensor-based framework is that the sensor has to be situated close to a fire or heat source. this makes such devices impractical in a variety of frequently occurring scenarios, such as the long-distance fire occurrences seen in figure 1. due to this, the traditional approach has failed to avoid a number of fire deaths. these solutions usually require a considerable amount of fire or heat to trigger the alarm; in addition, the fire or smoke regions are not precisely located. due to the shortcomings of such fire detection, researchers have been investigating computer vision related approaches, which have become alternatives for improving fire and smoke detection systems. existing vision-based approaches focus solely on the transformation of colour space for fire area detection [1], [2]. rule-based methodologies, along with colour space, have a promising future in delivering improved results; however, such methods are also vulnerable to other lit items such as streetlights.
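as a concrete illustration of the colour-space family of methods mentioned above, a generic rgb rule for fire-coloured pixels is sketched below; the rule and the threshold are textbook-style assumptions of this sketch and are not taken from [1] or [2].

```python
import numpy as np

def fire_candidate_mask(rgb: np.ndarray, r_thresh: int = 190) -> np.ndarray:
    """Boolean mask of candidate fire pixels in an H x W x 3 uint8 image:
    the red channel must dominate (R > G > B) and be bright (R > r_thresh)."""
    r = rgb[..., 0].astype(np.int32)
    g = rgb[..., 1].astype(np.int32)
    b = rgb[..., 2].astype(np.int32)
    return (r > r_thresh) & (r > g) & (g > b)
```

such masks also flag street lights and red billboards, which is exactly the false-alarm problem discussed above.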
additional methods add features such as location, boundary and motion cues to the colour-based decision-making algorithms [3], [4]. classifiers such as the bayes classifier, dual optical flow and multi-expert schemes have been used to minimize false detections or misclassifications. however, these strategies are vulnerable to error and fail in many complex real-world scenarios, as seen in figure 2. fire detection remains a challenging task due to the complexity of the phenomenon: fire does not have a definite shape or area of incidence, and it shows complex temporal behaviour, which makes feature extraction difficult; a hand-crafted collection of features involves a considerable amount of domain information. table 2 lists the details of existing datasets. the foggia video dataset [8] and the chino dataset [9] were the two basic datasets. the first dataset includes 31 enclosed-environment and open-air videos; seventeen of them are not fire related, while fourteen are categorized as fire. as a result, colour-based methods are incapable of separating genuine fire from scenes with red shaded parts. additionally, movement-based strategies may mistakenly flag a mountain scene with smoke, fog, or haze. these elements have made the dataset more difficult, enabling the authors to push the architecture and assess its performance in different real settings. another issue that arises during data processing is the difference between fire and non-fire. at a greater distance, for example, the fire2 [10] video contains very little fire; on the other hand, the fire13 [10] video shows fire only within a very small range. thus, red patterns and backgrounds like a billboard (fire14) and reddish grass (fire6) are present in numerous photos, making the dataset hard to decipher. the second dataset is relatively limited but very difficult. this dataset contains a total of 226 images, 119 of which contain fire while the other 107 are fire-like pictures including nightfall, fire-like stars, daylight coming through windows and so on.

table 1. fire instances in the fire sm dataset (anomaly class: instances).
1. outside offices: 78
2. outside apartment: 88
3. in bushes: 26
4. outside light: 15
5. street light: 13
6. decorative lighting: 11
7. bon-fire: 9
8. cooking gas: 25

table 2. existing datasets (type | size per image | rate | no. of frames | related fire | remark or observations).
fire1 | 320x240 | 15 | 705 | yes | see [10]
fire2 | 320x240 | 29 | 116 | yes | refer [10] and [11]
fire3 | 400x256 | 15 | 255 | yes |
fire4 | 400x256 | 15 | 240 | yes |
fire5 | 400x256 | 15 | 195 | yes |
fire6 | 320x240 | 10 | 1200 | yes |
fire7 | 400x256 | 15 | 195 | yes |
fire8 | 400x256 | 15 | 240 | yes |
fire9 | 400x256 | 15 | 240 | yes |
fire10 | 400x256 | 15 | 210 | yes |
fire11 | 400x256 | 15 | 210 | yes |
fire12 | 400x256 | 15 | 210 | yes |
fire13 | 320x240 | 25 | 1650 | yes |
fire14 | 320x240 | 15 | 5535 | yes | foggia et al. [8]
fire15 | 320x240 | 15 | 240 | no | refer [11] and [10]
fire16 | 320x240 | 10 | 900 | no |
fire17 | 320x240 | 25 | 1725 | no |
fire18 | 352x288 | 10 | 600 | no |
fire19 | 320x240 | 10 | 630 | no |
fire20 | 320x240 | 9 | 5958 | no |
fire21 | 720x480 | 10 | 80 | no |
fire22 | 480x272 | 25 | 22500 | no | foggia et al. [8]
fire23 | 720x576 | 7 | 6097 | no | refer [11] and [10]
fire24 | 320x240 | 10 | 342 | no |
fire25 | 352x288 | 10 | 140 | no |
fire26 | 720x576 | 7 | 847 | no |
fire27 | 320x240 | 10 | 1400 | no |
fire28 | 352x288 | 25 | 6025 | no |
fire29 | 720x576 | 10 | 600 | no |
fire30 | 800x600 | 15 | 1920 | no | foggia et al. [8]
fire31 | 800x600 | 15 | 1485 | no |

figure 1. test images of training data.
figure 2. sample confusing images which look like fire or smoke.
an enormous amount of data is required to train convolutional neural networks (cnns); conversely, the current image/video fire collections are insufficient to meet this demand. table 3 lists some of the limited-scale fire image/video repositories available. the data collection includes 13,400 fire images in all, taken both outside and inside. there are 9695 'fire' and 7442 'smoke' instances in the data collection. in addition, the dataset includes 15,780 images that do not contain flames. these data were acquired from 16 separate user environments and involve 49,614 distorted images; each picture usually involves some distortion, e.g. due to surrounding noise or climatic conditions. for this investigation, half of the pictures in the data collection are used as the training/validation set, and the remaining half is used as the test set. 3. experimental details experiments were carried out by applying a deep neural network technique to the proposed dataset. the system used an nvidia rtx 2080 gpu with 10 gb of on-board memory and an intel core i5 cpu with 64 gb of ram, running ubuntu 16.04. the analyses utilized 68,457 pictures acquired from notable fire datasets, including 62,690 pictures from foggia et al. [8]. the training and testing phases of the experiments followed the experimental protocol, where 20 % and 80 % of the data were used for training and testing, respectively. the technique was applied with the trained, updated efficientdet proposed here [13]; in the modified efficientdet algorithm, the hard-swish activation function has been replaced with leaky relu. a training set of 2717 pictures was generated, using 2529 fire pictures and 190 non-fire pictures. the proposed network, however, works with only 2 classes, i.e. the fire and not-fire classes. datasets are one of the essential components for evaluating the output of any given system, and evaluating an algorithm against a standard dataset is one of the most difficult activities. in the proposed dataset, all photographs are original and taken by real people; it is therefore a highly demanding and diversified dataset. this hand-crafted research dataset was designed to test the generalization of a trained model. it involves an average of 2 boxes per picture, of varying size and aspect ratio. activation mapping has been exploited, as it was required to obtain the approximate bounding boxes. binary cross-entropy was used as the loss function during the study of this dataset; in addition, the optimizer was rmsprop with an initial learning rate of 0.001, and 300 epochs were taken into account. the following sections present details of the results obtained using different fire datasets and their comparison with state-of-the-art fire database approaches.
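the training configuration just described (leaky relu in place of hard-swish, binary cross-entropy, rmsprop at an initial learning rate of 0.001, 300 epochs) can be sketched as follows; the tiny backbone below is a stand-in chosen for brevity and is not the efficientdet-d0 architecture itself.

```python
import torch
import torch.nn as nn

# Placeholder backbone: the point is the configuration named in the text
# (leaky ReLU activations, binary cross-entropy, RMSprop at lr = 0.001),
# not the architecture.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.LeakyReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),  # single logit: fire vs. not-fire
)
criterion = nn.BCEWithLogitsLoss()  # binary cross-entropy on the logit
optimizer = torch.optim.RMSprop(model.parameters(), lr=0.001)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step; images is N x 3 x H x W, labels is N x 1 in {0, 1}."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()

# The text reports training over 300 epochs of such steps.
```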
4. fire sm dataset description and results the proposed fire sm dataset was verified for the density of occurrences of the actual fire location in an image. the dataset contains images in which the fire is located not only at the centre of the image but also at the corners and the top and bottom sides. the density of the fire locations in an image is shown in figure 3, which shows the fire locations in a relative coordinate plane. in this figure, red signifies fire at the middle, orange and yellow signify fire at the corners, while sky blue and dark blue signify fire neither at the middle nor at the corners. this supports the claim that a dataset with a diversified distribution of fire is proposed for the anomaly fire detection technique.

table 3. fire image/video datasets present [12] (institution | format | object | website).
bilkent university | video | fire, smoke, disturbance | http://signal.ee.bilkent.edu.tr/visitfire/index.html
cvpr lab, keimyung university | video | fire, smoke, disturbance | https://cvpr.kmu.ac.kr
umr cnrs 6134 spe, corsica university | dataset | fire | http://cfdb.univ-corse.fr/index.php?menu=1
faculty of electrical engineering, split university | image, video | smoke | http://wildfire.fesb.hr/
institute of microelectronics, seville, spain | image, video | smoke | https://www2.imse-cnm.csic.es/vmote/english_version/
national fire research laboratory, nist | video | fire | https://www.nist.gov/topics/fire
state key laboratory of fire science, university of science and technology of china | image, video | smoke | http://smoke.ustc.edu.cn/datasets.htm

figure 3. fire location in images, with the distribution in relative coordinates in the fire sm dataset (red: at middle; orange, yellow: at corner; sky blue, dark blue: not at middle or corner).

in reality, with a phenomenon that appears over several frames, it is necessary to discover an irregularity in probably only a portion of the images; however, confirming the area in every frame of the track is typically unnecessary. this is especially true where there is uncertainty regarding when the abovementioned phenomenon starts and finishes, as well as when the anomalous activity is heavily occluded for a few frames. the indices mentioned below are measures of classification quality. 4.1. feature indices 4.1.1. localized detection index/rate the localized detection rate ldr is defined as ldr = (number of true regions detected)/(total number of regions). a true region in an image is detected if the intersection of the true area and the recognized local portion is greater than or equal to β, as shown in figure 4. 4.1.2. region based detection rate the region based detection rate rbdr is defined as rbdr = (number of positive images detected)/(total number of regions), where β ranges between 0 and 1 (by default, β = 0.1). the negative region rate nrr is defined as nrr = (total non-positive regions)/(total frames or images); the average nrr ranges from 0 to 1. as with any detection rule, there is a trade-off between the detection rate (true positive rate) and the false positive rate. this can be captured in the roc curve obtained by varying the anomaly score threshold that defines which regions are regarded as anomalous. figure 5 and figure 6 show the characteristic curves for the inceptionnet method on the foggia and fire sm datasets. the nature of the curves favours the proposed dataset, i.e., fire sm, compared to foggia. khan et al. [14] described the inceptionnet method on a fire instance dataset; the dataset mentioned was less diversified compared to the proposed fire sm dataset. the approaches proposed by khan et al. [14] and firenet [15] focused on classification, with a leave-one-out strategy used at each level.
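a minimal implementation of the ldr and nrr indices of section 4.1 could look as follows; the region representation and the overlap function are assumptions of this sketch (any intersection measure consistent with the definition above can be plugged in).

```python
def localized_detection_rate(true_regions, detected_regions, overlap, beta=0.1):
    """LDR: fraction of true regions matched by a detection whose overlap
    with the true area is >= beta (beta = 0.1 by default, as in Section 4.1)."""
    if not true_regions:
        return 0.0
    hits = sum(1 for t in true_regions
               if any(overlap(t, d) >= beta for d in detected_regions))
    return hits / len(true_regions)

def negative_region_rate(n_nonpositive_regions: int, n_frames: int) -> float:
    """NRR: non-positive (false alarm) regions per frame."""
    return n_nonpositive_regions / max(n_frames, 1)
```

varying the anomaly score threshold and plotting the detection rate against the nrr yields characteristic curves like those of figure 5 and figure 6.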
figure 5 and figure 6 show the characteristic curves for the inceptionnet method on the foggia and fire sm datasets. the shape of the curves favours the proposed fire sm dataset over foggia. khan et al. [14] described the inceptionnet method on a fire-instance dataset; that dataset is less diversified than the proposed fire sm dataset. the approaches proposed by khan et al. [14] and firenet [15] focused on classification with a leave-one-out strategy at each level. in comparison to these algorithms, the updated efficientdet [16]-[19] relies more on the degree of detection. this paper considered the average precision (ap) indicator for quantitative analysis. results were collected and are shown in table 4 for the proposed dataset relative to the foggia dataset. activation mapping was used to obtain an estimated bounding box.

figure 4. representation of the framework of regions partitioning a frame ('1' denotes a true region); it gives an idea of the detection method.
figure 5. a, b: nrr per-frame characteristic curves for different datasets.
figure 6. a, b: nrr frame-level characteristic curves for different datasets.

table 4. comparison of updated efficientdet to inceptionnet and firenet (*ap@50: 50 % above overlap, ap@75: 75 % above overlap).
method | early fire and smoke (proposed) ap@50 | ap@75 | foggia dataset ap@50 | ap@75
khan et al. [14] inceptionnet | 53.41 | 50.63 | 65.23 | 61.28
firenet [15] | 68.46 | 57.94 | 73.23 | 70.65
modified efficientdet d0 | 73.35 | 70.78 | 81.92 | 78.23

table 4 shows that the updated efficientdet performs better than the other algorithms. both average precision at 50 (ap@50) and at 75 (ap@75) were compared. on the proposed early fire dataset, efficientdet obtained approximately 73 % and 71 %, compared with approximately 53 % and 51 % for inceptionnet and approximately 68 % and 58 % for firenet. on the foggia dataset, the results obtained were approximately 82 % and 78 %, compared with about 65 % and 61 % for inceptionnet and around 73 % and 71 % for firenet, for ap@50 and ap@75 respectively. figure 7 shows the detection of fire and smoke.

figure 7. detection of fire and smoke in the proposed dataset.
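for reference, the ap@50 and ap@75 scores in table 4 count a detection as correct when its intersection over union (iou) with a ground-truth box reaches 0.5 or 0.75 respectively; a minimal sketch of that test is given below (the corner-coordinate box format is an assumption).

```python
# illustrative iou test behind the ap@50 / ap@75 overlap thresholds.
def iou(box_a, box_b):
    """intersection over union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

def is_true_positive(det_box, gt_box, threshold=0.5):
    """use threshold=0.5 for ap@50 and threshold=0.75 for ap@75."""
    return iou(det_box, gt_box) >= threshold
```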
5. conclusions
this paper introduces a new fire sm database of fire anomaly scenarios. the database is a highly demanding and diversified one; this hand-crafted research dataset was designed to test the generalization of a trained model. this research study proposes a novel lightweight and real-time approach for detecting smoke and fire in videos or photographs. existing datasets are either restricted or produced synthetically for testing purposes; in this study, validation was carried out on a challenging real-world proposed dataset that includes the majority of fire and smoke event scenarios. further, the weighted bi-directional feature pyramid network (bifpn) as well as compound scaling consistently achieve better efficiency in efficientdet. the experimental findings show that google's newest model, efficientdet, outperforms the benchmark methods on both the proposed and the foggia datasets. these results were obtained using average precision (ap) as the indicator: at ap@75 on the foggia dataset, efficientdet reaches around 78 %, compared to 61 % for inceptionnet and 71 % for firenet. the new assessment criteria address the shortcomings of the traditional criteria in this field and provide a more accurate picture of how well an algorithm performs in a real environment. furthermore, in this study, two variants of a recent fire anomaly detection algorithm were used as benchmarks against which future work can be measured. the new database should help to encourage novel techniques in this research field.

references
[1] t. celik, h. demirel, h. ozkaramanli, m. uyguroglu, fire detection using statistical color model in video sequences, journal of visual communication and image representation, vol. 18, no. 2, 2007, pp. 176-185. doi: 10.1016/j.jvcir.2006.12.003
[2] b. c. ko, s. j. ham, j. y. nam, modeling and formalization of fuzzy finite automata for detection of irregular fire flames, ieee transactions on circuits and systems for video technology, vol. 21, no. 12, 2011, pp. 1903-1912. doi: 10.1109/tcsvt.2011.2157190
[3] j. choi, j. y. choi, patch-based fire detection with online outlier learning, 12th ieee international conference on advanced video and signal based surveillance (avss), 2015, pp. 1-6. doi: 10.1109/avss.2015.7301763
[4] t. wang, l. shi, p. yuan, l. bu, x. hou, a new fire detection method based on flame color dispersion and similarity in consecutive frames, 2017 chinese automation congress (cac), 2017, pp. 151-156. doi: 10.1109/cac.2017.8242754
[5] k. muhammad, j. ahmad, z. lv, p. bellavista, p. yang, s. w. baik, efficient deep cnn-based fire detection and localization in video surveillance applications, ieee transactions on systems, man, and cybernetics: systems, vol. 49, no. 7, july 2019, pp. 1419-1434. doi: 10.1109/tsmc.2018.2830099
[6] a. jadon, m. omama, a. varshney, m. s. ansari, r. sharma, firenet: a specialized lightweight fire & smoke detection model for real-time iot applications, arxiv preprint arxiv:1905.11922, 2019. doi: 10.48550/arxiv.1905.11922
[7] k. muhammad, j. ahmad, i. mehmood, s. rho, s. w. baik, convolutional neural networks based fire detection in surveillance videos, ieee access, vol. 6, 2018, pp. 18174-18183. doi: 10.1109/access.2018.2812835
[8] p. foggia, a. saggese, m. vento, real-time fire detection for video-surveillance applications using a combination of experts based on color, shape, and motion, ieee transactions on circuits and systems for video technology, vol. 25, no. 9, 2015, pp. 1545-1556. doi: 10.1109/tcsvt.2015.2392531
[9] d. y. chino, l. p. avalhais, j. f. rodrigues, a. j. traina, bowfire: detection of fire in still images by integrating pixel color and texture analysis, 28th sibgrapi conference on graphics, patterns and images, 2015, pp. 95-102. doi: 10.1109/sibgrapi.2015.19
[10] e. cetin, computer vision-based fire detection dataset. online [accessed 17 march 2022] http://signal.ee.bilkent.edu.tr/visifire/ ; ultimate chase. online [accessed 17 march 2022] http://ultimatechase.com/
[11] li pu, w. zhao, image fire detection algorithms based on convolutional neural networks, case studies in thermal engineering, vol. 19, 2020, 100625. doi: 10.1016/j.csite.2020.100625
[12] m. tan, r. pang, q. v. le, efficientdet: scalable and efficient object detection, proceedings of the ieee/cvf conference on computer vision and pattern recognition, 2020, pp. 10781-10790.
[13] k. muhammad, j. ahmad, z. lv, p. bellavista, p. yang, s. w. baik, efficient deep cnn-based fire detection and localization in video surveillance applications, ieee transactions on systems, man, and cybernetics: systems, vol. 49, no. 7, july 2019, pp. 1419-1434. doi: 10.1109/tsmc.2018.2830099
[14] a. jadon, m. omama, a. varshney, m. s. ansari, r. sharma, firenet: a specialized lightweight fire & smoke detection model for real-time iot applications, arxiv preprint arxiv:1905.11922, 2019. doi: 10.48550/arxiv.1905.11922
[15] fire sm dataset. online [accessed 17 march 2022] https://tinyurl.com/83exdz6d
[16] federica vurchio, giorgia fiori, andrea scorza, salvatore andrea sciuto, comparative evaluation of three image analysis methods for angular displacement measurement in a mems microgripper prototype: a preliminary study, acta imeko, vol. 10, no. 2, 2021, pp. 119-125. doi: 10.21014/acta_imeko.v10i2.1047
[17] henrik ingerslev, soren andresen, jacob holm winther, digital signal processing functions for ultra-low frequency calibrations, acta imeko, vol. 9, no. 5, 2020, pp. 374-378. doi: 10.21014/acta_imeko.v9i5.1004
[18] lorenzo ciani, alessandro bartolini, giulia guidi, gabriele patrizi, a hybrid tree sensor network for a condition monitoring system to optimise maintenance policy, acta imeko, vol. 9, no. 1, 2020, pp. 3-9. doi: 10.21014/acta_imeko.v9i1.732
[19] andrás kalapos, csaba gór, róbert moni, istván harmati, vision-based reinforcement learning for lane-tracking control, acta imeko, vol. 10, no. 3, 2021, pp. 7-14. doi: 10.21014/acta_imeko.v10i3.1020
acta imeko
december 2013, volume 2, number 2, 91 – 95
www.imeko.org

challenges and perspectives of regional cooperation within coomet – the euro-asian regional metrology organization
pavel neyezhmakov1, klaus-dieter sommer2
1 national scientific centre "institute of metrology", mironositskaya 42, 61002 kharkov, ukraine
2 physikalisch-technische bundesanstalt, bundesallee 100, 38116 braunschweig, germany

section: technical note
keywords: metre convention; rmo; jcrb; cipm mra; quality management system
citation: pavel neyezhmakov, klaus-dieter sommer, challenges and perspectives of regional cooperation within coomet – the euro-asian regional metrology organization, acta imeko, vol. 2, no.
2, article 16, december 2013, identifier: imeko-acta-02 (2013)-02-16
editor: paolo carbone, university of perugia
received april 17th, 2013; in final form december 14th, 2013; published december 2013
copyright: © 2013 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: pavel neyezhmakov, email: pavel.neyezhmakov@rambler.ru

abstract
the creation of a metrological infrastructure providing traceable measurement results is one of the major tasks of the coomet member countries from central asia and the caucasian region. coomet successfully cooperates in developing the basic metrology infrastructures of these countries. in accordance with its strategic aims, coomet also supports metrological knowledge transfer and the development of technical competence for innovation and scientific research. for the purpose of implementing joint research projects, tc 5 was established. cooperation within tc 5 aims at activating the nmis of coomet member countries in the global integration process in science, technology and high-end manufacturing.

1. introduction
coomet (abridged from "cooperation in metrology") is a regional metrology organization (rmo) [1] that establishes cooperation between the national metrology institutions of the countries of central and eastern europe. it was founded in june 1991 and renamed euro-asian cooperation of national metrology institutions in 2000. coomet is open to any metrology institution from other regions; they can join as associate members. the basic activity of coomet is cooperation in the following areas: measurement standards of physical quantities, legal metrology, quality management systems (qms), information and training. participation in coomet activities gives the member countries an opportunity to solve metrological tasks in a more efficient way on the basis of approved rules and procedures. the current 15 members of coomet are the metrology institutions of armenia, azerbaijan, belarus, bulgaria, georgia, kazakhstan, kyrgyzstan, lithuania, moldova, russia, romania, slovakia, tajikistan, ukraine and uzbekistan. in addition, there are 3 associate members: germany, dpr of korea, and cuba. the objectives of coomet are the following:
- provide assistance in effectively addressing any problems relating to the uniformity of measures, the uniformity of measurements and the required accuracy of their results;
- provide assistance in promoting cooperation between national economies and eliminating technical barriers to international trade;
- harmonize the activities of the metrology services of the euro-asian countries with similar activities in other regions.
these objectives are accomplished by cooperation between interested coomet member countries with regard to supporting activities related to the accreditation of national metrology institutes (nmis), as well as calibration and measurement laboratories. at today's stage of progress, the tasks of coomet are aimed at strengthening the links between the nmis in order to solve common problems and to create effective mechanisms that will meet the following objectives:
- achieve compatibility of measurement standards and harmonize the requirements imposed on measuring instruments and the methods for their metrological control;
- recognize the equivalence of national certificates authenticating the results of metrology activities;
- exchange information on the current status of national metrology services and their development;
- collaborate in developing metrology projects; and
- promote the exchange of metrology services.
a considerable advance in the formation of the current aims and tasks of the organization occurred as a result of several coomet committee meetings. at these meetings, decisions were made to increase the effectiveness of coomet activities; this meant improving the planning of cooperation, the reporting of activities, the interaction with other international and regional metrology organizations, etc. according to the first coomet development program for 2001-2002, the organizational structure of coomet was developed and approved, under which structural and working bodies were created in all fields of cooperation covered by the mou (see figure 1). this structure provided for the wide involvement of qualified specialists from the nmis of member countries in coomet activities. the transformations directed toward the improvement of the organization's activities were reflected in the mou and rules of procedure of coomet, developed and fixed in the appropriate documents. in 2005 the mou and rules of procedure were amended with respect to the election of the coomet president, according to which, in the year before the end of the acting president's term, the future president is elected and authorized for the next 3 years. one more important branch of coomet activity is the development and adoption of the conception of coomet in 2005. the conception of cooperation and activity of coomet determines the strategic tasks from a medium-term and long-term perspective and provides for their implementation. among these tasks are the following:
- strengthening the innovation and technology components of member countries in the global system of economic and societal development;
- competent and economically effective participation of coomet member countries in global integration processes in the fields of science, technology, science-intensive production and the economy in general;
- increasing the level of competitiveness of member countries in the fields of science and technology through participation in the world market of intellectual products, science-intensive products and services.

figure 1. current organizational structure of coomet.

2. implementation of the cipm mra
the implementation of the cipm mra [2] is directed at the fulfilment of the following tasks:
- organization and holding of regional comparisons of the measurement standards of coomet member countries in order to assure the traceability of measurement standards to reference values of the si units;
- determination of the degree of equivalence of national measurement standards;
- regional and interregional reviews of the calibration and measurement capabilities (cmc) and their publication in the bipm key comparison database (kcdb); and
- evaluation of the quality management systems (qms) of nmis through external review of each qms.
today, nmis from 12 coomet member countries are participants in the cipm mra.
six countries among them are members of the international bureau of weights and measures (bipm) and six are associates of the general conference on weights and measures (cgpm). with the aim of supporting the nmis in coomet for the purposes of the cipm mra, several recommendations have been successfully developed and approved. according to cipm mra-d-04, "calibration and measurement capabilities in the context of the cipm mra" [3], the joint committee of the regional metrology organizations and the bipm (jcrb) requires that cmc data presented for publication in annex c be completely supported by an implemented quality system, controlled and approved by the local rmo, and that the range and uncertainty of the cmc not contradict the information obtained from the results of key and supplementary comparisons. for the implementation of this jcrb requirement, coomet has instituted the program of comparisons (document coomet d9). the joint committee for measurement standards (jcms) annually updates the program of comparisons and presents the results at the president's council meeting for approval by the coomet president. according to cipm mra-d-04, prior to being submitted for the inter-regional review, the cmcs should be reviewed and approved by the rmo. coomet has established a process for the intra-rmo review. this process follows cipm mra-d-04 and assures that the cmcs submitted for the inter-regional review have sufficient technical support. the technical committees form groups of technical experts for the review of the comparison results and cmc data. starting in 2002, the coomet quality forum has directed the realization of the cipm mra. the evaluation of the quality management systems of coomet nmis is carried out through peer reviews, conducted by coomet auditors and technical experts once every 5 years and coordinated by the technical committee of the coomet quality forum (tc qf). the peer reviews are carried out in accordance with the rmo coomet recommendations. the tc qf monitors the quality management systems of the nmis of member countries on the basis of a cross-analysis of the annual reports sent to the secretariat of the tc qf. currently 22 nmis from 18 coomet member countries are cooperating within the quality forum.

3. involving new parties in the cipm mra
providing for traceability in the coomet region has required great attention and support for the countries of central asia and the caucasus region in developing national metrology infrastructures, harmonizing them with international requirements, improving the national standards of these countries, and preparing their nmis for signing the cipm mra. the countries on the way to signing are azerbaijan, armenia, kyrgyzstan, tajikistan and uzbekistan. aiming for stable development and cooperation, in the near future coomet plans to provide for the preparation of these countries to sign and implement the cipm mra, for the active participation of their national measurement standards in international comparisons, for the creation of quality systems for their nmis, and for implementing the requirements of iso/iec 17025.
in order to support these countries, the subcommittee "support in developing basic metrological infrastructures of coomet member countries" was established in 2008 within coomet tc 4 "information and training". this sc has to solve the following tasks in the countries of the region: assistance in the preparation for signing the cipm mra; preparation of staff for qms application according to iso/iec 17025; assistance in conducting comparisons and preparing cmcs; training for national metrology staff; and the organization of training workshops. with financial support within the ptb-coomet technical cooperation project titled "support of cooperation between member countries of the regional metrology organization coomet," several workshops were organized in 2008 – 2012 by coomet tc 4 for directors and experts of the nmis. for example, a workshop for coomet nmis' internal auditors according to iso/iec 17025 (see figure 2) was held on 7–8 november 2012 at nism (chisinau, republic of moldova). the workshop participants were 36 representatives from 12 countries: azerbaijan, armenia, belarus, germany, georgia, kazakhstan, lithuania, moldova, russia, slovakia, tajikistan and ukraine. the workshop consisted of theoretical and practical parts. the following items were presented and discussed:
- the basics of qms in an nmi;
- coomet regulations and procedures on the assessment of the qms of an nmi;
- requirements according to iso/iec 17025.
a "model" internal audit of the nism laboratories was carried out, after which the participants were tested and received certificates.

figure 2. participants of the workshop in moldova.

considering the great importance of the cipm mra in broadening economic, scientific, technical and international cooperation, as well as in eliminating technical barriers to trade, the activity performed in this field will surely result in the signing of the metre convention by the governments of the above-mentioned countries, or in their acquiring the status of an associate to the general conference on weights and measures (cgpm), in order to participate in the implementation of the cipm mra in the near future.

4. cooperating in the field of legal metrology
the coomet mou contains a number of regulations which are absent in the by-law documents of other regional organizations. for example, coomet activities concern not only scientific but also legal metrology. legal metrology is one of the well-developed areas of cooperation in coomet member countries. at the 20th coomet committee meeting in 2010 [4], a new structure of tc 2 for legal metrology was approved. the new structure consists of the following subcommittees:
- sc 2.1 harmonization of regulations and norms
- sc 2.2 technologies of measuring devices and systems in legal metrology
- sc 2.3 competence assessment of bodies in legal metrology
- sc 2.4 legal metrological control (lmc)
the tasks of these scs are the following:
- sc 2.1: acceptance or adaptation of approved international or regional documents (e.g. viml, oiml documents, welmec guides, ec recommendations);
- sc 2.2: development of test procedures for measuring instruments (mi), including software and measuring systems, but also data transfer and other future technologies;
- sc 2.3: development of criteria for the assessment of verification laboratories and other parties;
- sc 2.4: establishment of projects for the elements of lmc (surveillance qm, market surveillance, field surveillance).
the main purpose of the changes in the structure of tc 2 was to improve the efficiency and optimization of the legal metrology activities, based on the experience of the international organization of legal metrology (oiml) and other regional legal metrology organizations. the new structure of tc 2 allows for further expansion of cooperation in the field of legal metrology. the discussion on the new content of the work within tc 2 shows that country-specific interests should be considered. all tc 2 member states have the same aim, which is to build and enforce an operational, effective system for legal metrology. the new subcommittee structure is open for different realizations and for future changes in the methods of legal metrology. one example is the currently low interest of several states in market surveillance, while others use this system more than the preventive verification system. so the question for all is how mutual acceptance can be created in the case of free trade of products, i.e. prepackaged goods or measuring instruments. in this context, the growing importance of conformity assessment was noted.

5. joint research in metrology
as a result of many years of discussing the possibility of realizing joint research projects and their sources of funding, a technical committee for joint research in metrology, tc 5, was established in 2009. the tasks of tc 5 for the near future are:
- to identify common research areas;
- to determine priority fields in research and development;
- to determine the efficiency of projects for the economies of coomet member countries; and
- to identify those interested groups that benefit the most from the implementation of the projects.
it should be noted that a number of research projects are currently implemented within coomet: earth rotation period (erp) determination on the basis of data from the observatories of coomet countries; metrology of nanotechnology; standardization of eu-152 radionuclide solution; etc. for example, within the erp coomet project, in 2010 the observatories in russia, ukraine, uzbekistan, bulgaria, poland, and the czech republic made routine star and satellite observations and then transmitted the observation data to the erp processing and calculating centre at vniiftri. an exchange of erp observation data and calculation results was made between the participating countries and the international and national centres for erp determination. the calculations of the pole coordinates and the duration of the day from the results of gps observations at the stations on the territory of russia were made on a regular basis. the accuracy of erp determination by means of all the techniques of the participating countries was about 0.0002″ for the pole coordinates and 0.02 ms for universal time. these values closely approach the accuracy of the products of the international earth rotation service (iers). however, the number of these projects is rather small. therefore, for the realization of significant tc 5 tasks, the following project was initiated: coomet 492/de/10, "development of a concept for joint metrology research in coomet member countries". within this project the working group (wg) prepared questionnaires for conducting the following:
- state-of-the-art analysis;
- research needs of participating countries;
- structuring and prioritizing of potential research projects;
- identification of research subjects of common interest (e.g., metrology for energy, for environment, for health, security and safety, etc., i.e. the grand challenges in the field of metrology);
- expected social and economic impact of the research and development outcome.
the meeting of the wg was held in july 2012 at nsc "institute of metrology", ukraine. at this meeting, questions connected with the need for joint research in coomet were considered; the common tasks and the scope of the european-asian metrology research venture (eamrv) were discussed, as well as the aspired social and economic impact, possible styles of cooperation, procedures for determining and applying the eamrv, etc. further, the results of three questionnaires, which had been sent to the representatives of the coomet member countries, were evaluated by the wg:
- questionnaire no. 1, "nmi's current research-and-development resources, capabilities, capacities and international cooperation";
- questionnaire no. 2, "aspired capacity, capability and quality of the national calibration, measurement and testing infrastructure";
- questionnaire no. 3, "metrology for the future and demands on joint metrological research".
in order to realize this project, coomet proceeds from world achievements in the field of metrology related to the science, industry and economy of all cooperating countries and, at the same time, establishes a self-contained metrology research strategy for the european-asian transition area, including the central asian and caucasian regions, tailored to the particular economic needs of its member countries. the development and implementation of joint projects in coomet will contribute to basic science and technology, as well as stimulate innovations to solve metrology problems in coomet member countries.

references
[1] http://www.coomet.net, http://www.coomet.org.
[2] http://www.bipm.org/en/cipm-mra/mra_online.html.
[3] http://www.bipm.org/utils/common/cipm_mra/cipm mra-d-04.pdf.
[4] p. neyezhmakov, "20 years of coomet: we measure together for a better tomorrow", oiml bulletin, vol. lii, no. 3, july 2011, pp. 32-37.
acta imeko
issn: 2221-870x
march 2022, volume 11, number 1, 1 - 8

solutions and limitations of the geomatic survey of an archaeological site in hard to access areas with a latest generation smartphone: the example of the intihuatana stone in machu picchu (peru)
valerio baiocchi1, silvio del pizzo2, felicia monti1, giovanni pugliano3, matteo onori1, umberto robustelli4, salvatore troisi2, felicia vatore1, francisco james león trujillo5
1 dipartimento ingegneria civile, edile ed ambientale (dicea), sapienza università di roma, via eudossiana 18, i00184, italy
2 dipartimento di scienze e tecnologie, università degli studi di napoli "parthenope", centro direzionale isola c4, i80143 napoli, italy
3 dipartimento ingegneria civile, edile ed ambientale, università degli studi di napoli "federico ii", via claudio 21, i80125 napoli, italy
4 dipartimento di ingegneria, università degli studi di napoli "parthenope", centro direzionale isola c4, i80143 napoli, italy
5 carrera de ingeniería civil, facultad de ingeniería y arquitectura, universidad de lima, perù

section: research paper
keywords: intihuatana stone; machu picchu; gnss; gps; sfm; xiaomi; dsm
citation: valerio baiocchi, silvio del pizzo, felicia monti, giovanni pugliano, matteo onori, umberto robustelli, salvatore troisi, felicia vatore, francisco james león trujillo, solutions and limitations of the geomatic survey of an archaeological site in hard to access areas with a latest generation smartphone: the example of the intihuatana stone in machu picchu (peru), acta imeko, vol. 11, no. 1, article 20, march 2022, identifier: imeko-acta-11 (2022)-01-20
section editor: fabio santaniello, university of trento, italy
received april 7, 2021; in final form february 27, 2022; published march 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: valerio baiocchi, e-mail: valerio.baiocchi@uniroma1.it

abstract
archaeological remains need to be geometrically surveyed and set in absolute reference systems in order to allow a "virtual visit" and to create "digital twins" useful in case of deterioration for proper restoration. some countries (e.g., peru) have a vast archaeological heritage whose survey requires optimized procedures that allow high productivity while maintaining high standards of geometric accuracy. a large part of peru's cultural heritage is located in remote areas, at high altitudes and not easily accessible. for this reason, it is of great interest to study the possible applications of easily transportable instruments. in this study it was verified how the capabilities of the latest smartphones in terms of absolute differential positioning and photogrammetric acquisition can allow the acquisition of a geometrically correct and georeferenced three-dimensional model. the experimentation concerned a new survey of the intihuatana stone at machu picchu and its comparison with a previous survey carried out with much more complex laser scanning instrumentation. it is important to note that both the photogrammetric survey and the gps/gnss survey were carried out with the same smartphone, taking full advantage of both features of the same mobile phone. a relative comparison to an existing point cloud gave differences of 2 millimetres in mean with an rmse of 2 cm. the absolute positioning accuracy compared to a very large-scale cartography appears to be of the order of one metre, as was expected, mainly due to the large distance of the gps/gnss permanent stations.

1. introduction
the geomatic survey of archaeological remains is always a difficult task due to the complex accessibility of sites, whether they are located in urban or more remote areas. in fact, remains in densely populated areas have problems of accessibility because they are often located in underground spaces with limited room [1]. on the other hand, remains in sparsely populated areas are often remote and difficult to reach with bulky equipment [2]-[6]. in the present work we wanted to test the geometric accuracy that can be obtained by creating a three-dimensional point-cloud model from images acquired using a latest generation smartphone (xiaomi mi9). the three-dimensional model was framed in the geodetic datum wgs84 by means of differential measurements performed with the same smartphone, thanks to the recent possibility of writing gnss rinex observation files made possible by the android operating system (from version 7).
1.1. what are intihuatana stones?
all the important inca cities had an intihuatana (or intiwatana) stone, whose most accredited translation is "the place where the sun binds"; it is believed that they indicated in some way the dates of the solstices. there are many theories concerning the history of the city of machu picchu and the meaning of the intihuatana stone itself, but scholars are yet to reach a consensus. giulio magli [7], an italian archaeological astronomer, proposes that the inca ritualistically travelled from cusco to machu picchu. the inca took this pilgrimage to replicate the mythical journey that the first inca thought their ancestors took from the island of the sun in lake titicaca. magli believes that the pilgrimage concluded at the highest peak in the main ruins, at the steps leading to the intihuatana stone. in any case, the spanish invaders, wanting to abolish the inca religious beliefs, soon after their arrival in peru in the sixteenth century destroyed almost all the intihuatana stones they found in the various cities, except for a few of them, including that of machu picchu, probably because of the city's difficult accessibility (figure 1). this last theory, however, contrasts with the fact that the city of machu picchu was never really completely "lost" but only abandoned: the inhabitants of the area have always known of its existence and guided the first scientific research missions at the beginning of the twentieth century [8].

1.2. latest smartphone geomatic capabilities
mobile phone technology is producing cameras with increasing resolution, with some of the latest smartphone models reaching 100 megapixels [9]. it is important to consider that resolution alone is no guarantee of correct photogrammetric reconstruction; the limited size of a mobile phone's camera does not allow it to achieve the geometric characteristics of a digital single-lens reflex, let alone of a photogrammetric camera.
liquid lens technology is also soon to be released, which will make the optical sensors of smartphones much more versatile; on the other hand, calibration or self-calibration of such optics will be more complex. in any case, smartphone images have provided interesting results, especially considering their easy transportation [10], [11]. in these and other works in the literature on the use of smartphones for photogrammetry, quite variable precision and accuracy have been observed, ranging from tens of centimetres [10] to a few centimetres [11]; the causes of this variability are still being studied by the scientific community. a very important factor is the use of ground control points (gcps), without which the results are poor and uncontrolled [12], [13]. other factors that influence the final results can be the size and shape of the object to be surveyed, the acquisition distance and, obviously, the quality of the camera optics; it is precisely because of this last factor that the most recent smartphones are getting closer and closer to classic professional cameras, which in any case almost always give more accurate results [10]. at the same time, the android operating system has recently (august 2016) released access to raw global navigation satellite system (gnss) measurements on several (but not all) android devices, allowing phase and/or code observations to be written to a rinex file similar to the procedure employed by professional receivers, as happens for geodetic measurements [14]. mobile phone manufacturers, stimulated by this availability, have made terminals containing dual-frequency-capable chips, which potentially allow for improved accuracy due to the possibility of estimating, as is known, the delay of the satellite signal due to the crossing of the ionosphere. unfortunately, till now, the dual-frequency gnss chips embedded in mobile phones limit this possibility to the gps ("l1/l5" frequencies) and galileo ("e1/e5" frequencies) constellations, while glonass and beidou can be observed only in single frequency. this limitation, combined with gnss smartphone antennas of reduced dimensions and other hardware limitations, does not allow the millimetric accuracies of geodetic receivers to be reached, but metric and decimetric accuracies are possible [15], [16]. the combined use of these two features of modern mobile terminals is of particular interest, because what most affects the quality of three-dimensional models from photogrammetry is the presence and accuracy of the control points that are needed to improve the intrinsic geometry of the camera, to georeference the model and to scale it correctly. in other words, without gcps the model is generally deformed and only roughly georeferenced and scaled. structure from motion (sfm) algorithms typically use the positions that they read on the single frames acquired by the cameras through the built-in gps/gnss receivers in point positioning mode, which produces accuracies in the range of tens of metres. the possibility of acquiring ground control points (gcps) and sfm images from one single device with potentially decimetric or centimetric accuracy discloses new possibilities for prompt surveys, especially in areas with access difficulties such as archaeological ones.
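for context, the advantage of a second frequency comes from the standard ionosphere-free combination of pseudoranges, a textbook formula (not taken from this paper) that removes the first-order ionospheric delay; for gps l1/l5 it reads

$$ P_{IF} = \frac{f_{1}^{2}\,P_{1} - f_{5}^{2}\,P_{5}}{f_{1}^{2} - f_{5}^{2}} $$

with $f_1 = 1575.42$ mhz and $f_5 = 1176.45$ mhz. a single-frequency receiver must instead model or neglect this delay, which is one of the reasons smartphone positioning accuracy degrades.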
2. materials and surveys performed
as already mentioned, the intihuatana stone of machu picchu is one of the few that can still be observed; however, given its historical and cultural importance, as well as the damage caused by tourists in the past, access is restricted. furthermore, the site of machu picchu is, as is well known, not easy to reach, since it is an entire inca city within which the access routes are those of the time and obviously cannot be modified. it should be noted (again figure 1) that the intihuatana stone is practically at the highest and most central point of the site, and therefore at the point most difficult to reach with heavy instruments. all of the above shows that the survey of such archaeological remains may require a complex logistical and authorisation process if carried out using traditional techniques. the interest in studying the geometric characteristics of the intihuatana stone, and in particular its alignment with respect to geographic north, suggested that it would be advantageous to use a recently released smartphone with photographic and positioning characteristics useful for the reconstruction of an accurate three-dimensional model. in particular, smartphones with up to five different focal lengths have recently become popular, allowing surveyors to select the most suitable optics for the survey at hand without having to use optical zooms, which require varying calibration parameters, making the process much more complex. nevertheless, it is necessary to calibrate the terminal for each of the optics in use, and care must be taken to ensure that the specific calibration for that smartphone lens is applied to each acquisition. it is actually quite simple to change the optics: for example, in the android operating system, in the "pro" mode of the camera application, one can switch instantly between the various optics available (three in the smartphone used in this test, but up to five in more recent smartphones), much more quickly than with traditional cameras. in this regard, it is perhaps worth mentioning the imminent release, announced by the xiaomi company, of the first smartphones with liquid lenses, which promise to significantly improve the quality of leisure photography by allowing different focuses to be used in the same image; but these same features could create difficulties for the correct use of the photogrammetric equations, which have traditionally been written for solid-state optics. anyway, the latest generation devices allow the camera focus mode, as well as the aperture and the shooting time, to be set like a professional camera; furthermore, the latest generation high-end smartphones are equipped with several cameras characterized by different focal lengths. such a configuration allows a survey to be conducted with an adaptive and swift approach: the user can easily modify the focal length by switching from one camera to another, in order to choose the most suitable one according to the environmental conditions. the focal length is an important parameter, since it sets the field of view of the camera; moreover, other camera calibration parameters, such as the distortion coefficients, are strictly related to the focal length [17].
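as a reminder of why each lens needs its own calibration, the radial part of the classical brown distortion model (a textbook formula, not one taken from this paper) is

$$ x_d = x_u \left( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \right), \qquad y_d = y_u \left( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \right) $$

where $(x_u, y_u)$ are the undistorted image coordinates centred on the principal point, $r^2 = x_u^2 + y_u^2$, and $k_1, k_2, k_3$ are the radial distortion coefficients estimated during calibration; a separate set of coefficients is needed for each fixed lens.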
a xiaomi mi9 terminal was used for this experiment, which is equipped with a sony 48 mpix ultra-wide-angle ai triple camera: a 48 mpix primary camera with a pixel size of 0.8 µm and ƒ/1.75 aperture, a 12 mpix telephoto camera with a pixel size of 1.0 µm and ƒ/2.2 aperture, and a 16 mpix ultra-wide-angle lens with a pixel size of 1.0 µm and ƒ/2.2 aperture [18]. at the time of the present experimentation (april 2019) this camera was considered at the top of the range of mobile phone cameras, although it has been overtaken by other models in recent months [9]. its size (157.5 x 74.7 x 7.6 mm) and weight (173 g) do not generate any transportation problems, and it is even possible to take more than one terminal along for specific needs. at the time of the experimentation its cost was that of an average mobile phone, just over 500 euros, and it can now be bought refurbished for less than 300 euros; this gives an idea of the investment required, which is certainly lower than that of other instruments that could certainly be more accurate. in the present experiment, the feature of interest was recorded in its entirety with the intermediate optics. additional images were acquired using the wide-angle optics for comparison. the geometry and sequence of the images to be acquired for correct photogrammetric reconstruction using smartphones is the subject of lively debate in the scientific community, which has also developed useful guidelines [19]. for this experiment, the authors decided to proceed according to the experience gained in previous experiments [1], [2], [10], and therefore to use mainly the intermediate focal length with a single horizontal strip, maintaining an overlap between one frame and the next never less than 60 % in the longitudinal direction. the images, taken from different points of view along the entire accessible area near the intihuatana stone, were subsequently processed with agisoft metashape software version 1.5.0 [20], obtaining a complete three-dimensional model of the visible part of the stone itself (figure 2).

figure 1. the intihuatana stone location in machu picchu.
figure 2. three-dimensional model of the intihuatana stone, acquired and georeferenced with the xiaomi mi9 smartphone.
figure 3. reference laser scanning model.
figure 4. reference laser scanning model and photogrammetric model co-registered.

image processing in the metashape software requires personnel with at least basic photogrammetric knowledge, but many of the functions are automated, so the time required of the operator is reduced to a few hours, while processing time can be up to a few days on average-performing hardware for very detailed models. obviously, one of the most important factors for processing time is the number and resolution of the images. the smartphone itself was used in static mode to collect ground control points from the only natural points measurable on the 3d model obtained by sfm, which were the tops of the fence posts (figure 2). we used three points on the fences; the fourth point had a less extensive view of the sky and gave results that were not entirely congruent with the other three and is a likely outlier; it was therefore only included in the photogrammetric software as a check point (cp) and consequently not used to estimate the rototranslation of the model.
we could not use artificially marked points because this would have required specific permissions and the blocking of the tourist flow. of course, from a photogrammetric point of view, it is much more correct to survey points all around the monument, even with artificial targets, but in this case it was not possible. however, we decided to carry out the experimentation even in these unfavourable conditions because they are very similar to those encountered in real field situations due to access difficulties or morphologies that are unfavourable for gps/gnss surveying, such as the presence of steep slopes near the monuments to be surveyed due to excavation works. the results are nevertheless interesting despite the less-than-optimal geometry of the points. the survey with raw gps/gnss data acquisition was possible thanks to the app rinex on, version 1.3; rinex on utilizes the measurements of the new android raw measurements api to produce rinex observation and navigation message files. the app was written by nsl as part of the flamingo project [21]. it should be noted that, as mentioned above, the dual frequency is only observable for the galileo satellites (e5 frequency) and the latest generation gps satellites (l5 frequency). in addition, at least on the terminal we used, the possibility of writing phase observations to the post-processing files seems to be disabled, limiting the possibilities of processing the observations both in "classic" post-processing mode and in "precise point positioning" (ppp) mode, which would be very useful in these areas given the great distance from the permanent stations of the peruvian correction network. in other words, since only code observations can be recorded, acquisition times have to be prudently extended, the achievable accuracy is reduced to about 1 metre, and the usefulness of the dual frequency is practically lost [22]. this limitation is even more incomprehensible given that it was possible to write phase observations with the previous model "xiaomi mi8" [23], of which the "mi9" is the evolution. this determined the necessity to process the rinex files in code post-processing using the open-source software rtklib 2.4.2 [24], which would have potentially allowed all four gnss constellations to be processed. unfortunately, since it was only possible to operate the code difference relative to the permanent stations of the national geodetic network of peru, it was necessary to limit ourselves to the use of only the gps and glonass constellations. in particular, the observations were differenced with respect to the stations of abancay (ap01) and cusco (cs01), both about 80 km away and with significant differences in altitude [25]. by differencing with respect to both stations, results were obtained with an estimated accuracy of 0.8 - 1 metre which, considering the relatively short measurement sessions possible (so as not to hinder the flow of tourists) of about 20 minutes and the distance from the permanent stations, can be considered absolutely satisfactory. the whole survey took less than an hour and a half, most of which was spent surveying the gps/gnss points, while the photogrammetric survey took less than ten minutes. in this regard, it must be further specified that, according to recent research [26], such times could be considerably shortened (to about 5 minutes per point) if permanent real-time gps/gnss stations (rtk) were available, as for example in europe; on the other hand, in areas even more remote than those under study, it would be necessary to use ppp techniques [27], which require about one hour per point. all the survey operations do not require any particular specialisation and can be carried out by one person.
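as an illustration of how the code-differenced solutions can be reduced to one coordinate set per ground control point, the following sketch averages the epochs of an rtklib solution file; the default .pos column layout (comment lines starting with '%', then date, time, latitude, longitude, height, quality flag, ...) and the file name are assumptions.

```python
import statistics

# minimal sketch: average the epoch solutions in an rtklib .pos file to
# obtain one set of coordinates per surveyed point. assumes the default
# output layout; the file name is a placeholder.
def average_pos(path):
    lats, lons, hgts = [], [], []
    with open(path) as f:
        for line in f:
            if line.startswith("%") or not line.strip():
                continue  # skip header/comment and empty lines
            fields = line.split()
            lats.append(float(fields[2]))   # latitude (deg)
            lons.append(float(fields[3]))   # longitude (deg)
            hgts.append(float(fields[4]))   # ellipsoidal height (m)
    return (statistics.mean(lats), statistics.mean(lons),
            statistics.mean(hgts))

print(average_pos("point1.pos"))
```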
3. results and discussion
in order to check the results obtained with the smartphone device, a comparison between the photogrammetric model and a 3d model from a lidar survey was performed. indeed, the university of arkansas [28] has released on the web a 3d model of the archaeological site of machu picchu acquired using an optech ilris-3d laser scanner, a long-range lidar (figure 3). the laser scanner survey can be considered as the reference for estimating the accuracy as well as the completeness achieved by the photogrammetric solution. unfortunately, the lidar model (figure 3) is not georeferenced; indeed, it is linked to a local reference system, while its scale is correct. however, from the university of arkansas web page for the specific project [29] it can be seen that the scanning resolution was set at 3 cm and that at that distance the estimated accuracy is 7 mm. using the classical icp (iterative closest point) technique, the two models were registered in a common reference system (figure 4). then a comparison between the two models was performed using the c2m tool of the open-source software cloudcompare ver. 2.10.3 [30]. for each 3d point extracted by the photogrammetric procedure, the signed distance from the surface derived from the lidar survey was computed. the result is reported in figure 5 using a coloured map. a statistical analysis of the result obtained from the comparison between the two models was conducted. the computed distances showed a remarkable agreement, as reported in the histogram of figure 6. specifically, a gaussian curve was fitted to the signed distances, obtaining a curve with a mean of 2 millimetres and a standard deviation of 1 centimetre. it can be observed that the average is very close to zero (although there is a small systematic shift), that the maximum deviations are below 5 centimetres and that most of them are within the 2.5 cm range. these results are absolutely interesting, particularly when compared with previous research [10]-[12], but it should be noted that in our case the lidar cloud was adapted to the photogrammetric cloud because the lidar cloud was not georeferenced. these results can therefore confirm a relative coherence between the two clouds, but not an absolute one.

figure 5. reference laser scanning model and photogrammetric model compared with the c2m signed distance tool (distances are in metres).
figure 6. histogram of the differences between the reference laser scanning model and the photogrammetric model compared with the c2m tool, and the fitted gaussian curve (distances are in metres).
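the gaussian fit just described can be reproduced with a few lines of code; the sketch below assumes the signed c2m distances have been exported from cloudcompare as a one-column ascii file, which is an assumption about the workflow rather than the authors' exact procedure.

```python
import numpy as np
from scipy.stats import norm

# fit a gaussian to the signed cloud-to-mesh (c2m) distances; the input
# is assumed to be a one-column text file of distances in metres
# (file name is a placeholder).
distances = np.loadtxt("c2m_signed_distances.txt")
mu, sigma = norm.fit(distances)  # maximum-likelihood mean and std
print(f"mean = {mu * 1000:.1f} mm, std = {sigma * 100:.1f} cm")

# fraction of points within +/- 2.5 cm, as discussed in the text
within = np.mean(np.abs(distances) <= 0.025)
print(f"fraction of points within 2.5 cm: {within:.2%}")
```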
finally, we wanted to evaluate the possibilities offered by some software packages to reconstruct the missing parts, in particular scann3d [31] in the android environment and trnio [32] in the ios environment. this evaluation was interesting in this case because accessibility all around the monument was not complete, due to the already mentioned protection needs. the 3d model obtained with the first software package was unsatisfactory and is not shown here; on the other hand, the result obtained with trnio is shown in figure 4, where some badly reconstructed parts can be easily detected. it was much more complex to verify the accuracy of the gnss measurements, defined as the difference between the results obtained and the actual values of the same point coordinates. in order to calculate this statistical parameter it is in fact, as is well known, necessary to know the true values, with an accuracy higher than that expected for the measurement system, on a suitable set of points. usually, in geomatics and geodesy, such verifications are carried out using trigonometric points whose coordinates are known a priori with geodetic accuracy. unfortunately, at the machu picchu site there is only one trigonometric point on the maps, and it is not unambiguously identifiable in the field, which suggests that it may have been removed, as often happens at archaeological sites of considerable interest. it is obviously not possible to make a comparison with the coverage available on the web, such as google earth, whose estimated planimetric accuracies are generally lower than those expected for our survey methodology [33]. official large-scale cartographies of the area are not available, with the exception of a map at a nominal scale of 1:750 produced by the ministry of culture of peru in 2014 (figure 7); considering a graphic error of 0.2 mm, its planimetric accuracy could be 15 cm, but since there are no metadata for the cartography itself, this must be considered only as a hypothesis. in any case, the comparison with this cartography showed a good agreement, in line with what was verified in previous experiments [10], [11], [12], [13], even if the measured points cannot be identified with certainty because they did not exist in 2014 and so are not reported on the map.

figure 7. the intihuatana stone location on the 1:750 map and the position of two of the four surveyed points (triangles).

4. conclusions and future perspectives
the present experimentation has shown that it is possible to carry out a complete and expeditious georeferenced geomatic survey with a latest generation smartphone. the ease of transport and the simplicity of the operations greatly facilitate the survey work, both in terms of logistics and of authorisation. the final results are at least comparable with those of surveys previously carried out on the same site with laser scanning. the possible applications of such surveys are numerous, ranging from rapid and efficient documentation to the valorisation of sites that are difficult for the general public to access, but also to true photogrammetric surveys where logistical situations make it very costly or impossible to survey with more rigorous instruments. it will be interesting to study in the future the possibilities offered by smartphones that also acquire dual-frequency phase gnss observations; this would also allow the application of ppp post-processing, which could prove to be the most appropriate in such remote sites. there is also interest in studying smartphones with even more advanced optics, with particular interest in liquid optics, which are expected to be released soon. in this experiment, the gps/gnss sensor and the camera of the same mobile phone were used, which allowed the minimum possible encumbrance of the instrumentation.
more generally, it is not necessary that the reference points and the photogrammetric survey are acquired by the same device, also considering that the overall dimensions of mobile phones are very small and do not create any logistical or transport problems.

figure 7. the intihuatana stone location in the 1:750 map and the position of two of the four surveyed points (triangles).

references
[1] v. baiocchi, r. brigante, s. del pizzo, f. giannone, m. onori, f. radicioni, a. stoppini, g. tosi, s. troisi, m. baumgartner, integrated geomatic techniques for georeferencing and reconstructing the position of underground archaeological sites: the case study of the augustus sundial (rome), remote sens. 12 (2020), 4064. doi: 10.3390/rs12244064
[2] v. baiocchi, g. caramanna, d. costantino, p. j. v. d'aranno, f. giannone, l. liso, c. piccaro, a. sonnessa, m. vecchio, first geomatic restitution of the sinkhole known as 'pozzo del merro' (italy), with the integration and comparison of 'classic' and innovative geomatic techniques, environmental earth sciences 77(3) (2018). doi: 10.1007/s12665-018-7244-6
[3] f. radicioni, p. matracchi, r. brigante, a. brozzi, m. cecconi, a. stoppini, g. tosi, the tempio della consolazione in todi: integrated geomatic techniques for a monument description including structural damage evolution in time, international archives of the photogrammetry, remote sensing and spatial information sciences – isprs archives 42(5w1) (2017), pp. 433-440. doi: 10.5194/isprs-archives-xlii-5-w1-433-2017
[4] d. costantino, m. pepe, a. g. restuccia, scan-to-hbim for conservation and preservation of cultural heritage building: the case study of san nicola in montedoro church (italy), applied geomatics, 2021. doi: 10.1007/s12518-021-00359-2
[5] d. ebolese, m. lo brutto, g. dardanelli, the integrated 3d survey for underground archaeological environment, isprs annals of the photogrammetry, remote sensing and spatial information sciences, 2019, pp. 311-317. doi: 10.5194/isprs-archives-xlii-2-w9-311-201
[6] d. dominici, e. rosciano, m. alicandro, m. elaiopoulos, s. trigliozzi, v. massimi, cultural heritage documentation using geomatic techniques: case study: san basilio's monastery, l'aquila, digital heritage international congress (digitalheritage), 2013, pp. 211-214. doi: 10.1109/digitalheritage.2013.6743735
[7] national geographic, discover 10 secrets of machu picchu. online [accessed 15 december 2021] https://www.nationalgeographic.com/travel/top-10/peru/machu-picchu/secrets/
[8] g. magli, at the other end of the sun's path: a new interpretation of machu picchu, nexus netw j 12 (2010), pp. 321-341. doi: 10.1007/s00004-010-0028-2
[9] xiaomi, mi11 specifications. online [accessed 15 december 2021] https://www.mi.com/global/mi11/specs
[10] l. alessandri, v. baiocchi, s. del pizzo, f. di ciaccio, m. onori, m. f. rolfo, s. troisi, three-dimensional survey of guattari cave with traditional and mobile phone cameras, int. arch. photogramm. remote sens. spatial inf. sci. xlii-2/w11 (2019), pp. 37-41. doi: 10.5194/isprs-archives-xlii-2-w11-37-2019
[11] e. nocerino, f. poiesi, a. locher, y. t. tefera, f. remondino, p. chippendale, l. van gool, 3d reconstruction with a collaborative approach based on smartphones and a cloud-based server, international archives of the photogrammetry, remote sensing and spatial information sciences – isprs archives 42(2w8) (2017), pp. 187-194. doi: 10.5194/isprs-archives-xlii-2-w8-187-2017
[12] a. vinci, f. todisco, r. brigante, f. mannocchi, f. radicioni, a smartphone camera for the structure from motion reconstruction for measuring soil surface variations and soil loss due to erosion, hydrol. res. 48 (2017), pp. 673-685. doi: 10.2166/nh.2017.075
[13] a. masiero, f. fissore, m. piragnolo, a. guarnieri, f. pirotti, a. vettore, initial evaluation of 3d reconstruction of close objects with smartphone stereo vision, int. arch. photogramm. remote sens. spatial inf. sci. xlii-1 (2018), pp. 289-293. doi: 10.5194/isprs-archives-xlii-1-289-2018
[14] u. robustelli, v. baiocchi, l. marconi, f. radicioni, g. pugliano, precise point positioning with single and dual-frequency multi-gnss android smartphones, ceur workshop proc. 2626 (2020). online [accessed 15 december 2021] http://ceur-ws.org/vol-2626/paper10.pdf
[15] raw gnss measurements. online [accessed 15 december 2021] https://developer.android.com/guide/topics/sensors/gnss
[16] u. robustelli, v. baiocchi, g. pugliano, assessment of dual frequency gnss observations from a xiaomi mi 8 android smartphone and positioning performance analysis, electronics 8 (2019), 91. doi: 10.3390/electronics8010091
[17] c. s. fraser, automatic camera calibration in close range photogrammetry, photogrammetric engineering & remote sensing 79(4) (2013), pp. 381-388.
[18] xiaomi, mi9 specifications. online [accessed 15 december 2021] https://www.mi.com/global/mi9/specs
[19] p. sapirstein, s. murray, establishing best practices for photogrammetric recording during archaeological fieldwork, journal of field archaeology 42(4) (2017), pp. 337-350. doi: 10.1080/00934690.2017.1338513
[20] agisoft, discover intelligent photogrammetry with metashape. online [accessed 15 december 2021] https://www.agisoft.com
[21] flamingo gnss. online [accessed 15 december 2021] https://www.flamingognss.com/
[22] m. pepe, d. costantino, g. vozza, v. s. alfio, comparison of two approaches to gnss positioning using code pseudoranges generated by smartphone device, applied sciences (switzerland) 11(11) (2021), 4787. doi: 10.3390/app11114787
[23] dual-frequency gnss on android devices. online [accessed 15 december 2021] https://barbeau.medium.com/dual-frequency-gnss-on-android-devices-152b8826e1c
[24] rtklib. online [accessed 15 december 2021] https://www.rtklib.com
[25] online [accessed 15 december 2021] http://regpmoc.ign.gob.pe/rastreo_permanente/index.php
[26] p. dabove, v. di pietra, single-baseline rtk positioning using dual-frequency gnss receivers inside smartphones, sensors 19 (2019), 4302. doi: 10.3390/s19194302
[27] m. barbarella, s. gandolfi, l. poluzzi, l. tavasci, precision of ppp as a function of the observing-session duration, ieee trans. aerosp. electron. syst. 54(6) (2018), pp. 2827-2836.
[28] instituto nacional de cultura, center for advanced spatial technologies (university of arkansas) and cotsen institute for archaeology (ucla). online [accessed 15 december 2021] https://gmv.cast.uark.edu/scanning-2/data/machu-picchu-3d-data/
[29] wayback. online [accessed 15 december 2021] http://wayback.archive-it.org/6471/20150825200335/http://cast.uark.edu/home/research/archaeology-and-historic-preservation/archaeological-geomatics/archaeological-laser-scanning/laser-scanning-at-machu-picchu.html
[30] cloudcompare. online [accessed 15 december 2021] http://www.cloudcompare.org/
[31] google play. online [accessed 15 december 2021] https://play.google.com/store/apps/details?id=com.smartmobilevision.scann3d&hl=it
[32] trnio. the app that transforms your iphone into a handheld 3d scanner. online [accessed 15 december 2021] http://www.trnio.com
[33] g. pulighe, v. baiocchi, f. lupia, horizontal accuracy assessment of very high resolution google earth images in the city of rome, italy, int. j. digital earth 9 (2015), pp. 342-362. doi: 10.1080/17538947.2015.1031716
an innovative correction method of wind speed for efficiency evaluation of wind turbines

acta imeko issn: 2221-870x june 2021, volume 10, number 2, 46 – 53

alessio carullo1, alessandro ciocia2, gabriele malgaroli2, filippo spertino2
1 dipartimento di elettronica e telecomunicazioni, politecnico di torino, corso duca degli abruzzi 24, 10129 turin, italy
2 dipartimento energia, politecnico di torino, corso duca degli abruzzi 24, 10129 turin, italy

abstract: the performance of horizontal-axis wind turbines (wts) is strongly affected by the wind speed entering their rotor. generally, this quantity is not available, because the wind speed is measured on the nacelle behind the turbine rotor, providing a lower value. therefore, two correction methods are usually employed, requiring two input quantities: the wind speed on the back of the turbine nacelle and the wind speed measured by a meteorological mast close to the turbines under analysis. however, the presence of such a station in wind farms is rare, while the number of wts in a wind farm is high. this paper proposes an innovative correction, named "statistical method" (sm), that evaluates the efficiency of wts by estimating the wind speed entering the wt rotor. this method relies only on the manufacturer power curve and the data measured by the wt anemometer, and can therefore be employed also in wind farms without a meteorological station. the effectiveness of the method is discussed by comparing the results obtained through the standard methods implemented on two turbines (rated power = 1.5 mw and 2.5 mw) of a wind power plant (nominal power = 80 mw) in southern italy.

section: research paper
keywords: renewable energy sources; wind energy systems; wind speed; power measurement; uncertainty
citation: alessio carullo, alessandro ciocia, gabriele malgaroli, filippo spertino, an innovative correction method of wind speed for efficiency evaluation of wind turbines, acta imeko, vol. 10, no. 2, article 8, june 2021, identifier: imeko-acta-10 (2021)-02-08
section editor: ciro spataro, university of palermo, italy
received january 15, 2021; in final form april 28, 2021; published june 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: alessio carullo, e-mail: alessio.carullo@polito.it

1. introduction
the increasing energy demand and the requirements of minimal environmental impact have pushed towards a huge increase of renewable energy sources (res). a drawback of these sources is their intermittency, which can be mitigated by the integration of storage units, e.g. electrochemical batteries [1]-[3]. among the res, wind turbines (wts) represent a reliable and clean source of electricity with low marginal costs [3]. new wind power plants were installed in europe in 2020, with a cumulative rated power of about 7 gw, and an increase of about 10 gw is expected in 2021, thus reaching a cumulative capacity of about 250 gw [4]. in this framework, offshore applications will represent about 20 % of new installations in the period 2020-2023, especially in the netherlands, ireland, norway and france. wts can work at fixed speed or at variable speed; the latter are able to adjust the rotor speed, thus following the maximum aerodynamic power of the wind [5]. on the other hand, their control requires the wind speed to be measured through an anemometer, thus increasing the overall cost and the size of the system. the anemometer is usually located on the back of the turbine, where a wind speed lower than that entering the rotor is measured. for this reason, the use of this anemometer measurement leads wts to exhibit an experimental performance that seems better than their nameplate specification, since the manufacturer states the power curve with reference to the wind speed at the entrance of the rotor. in addition, the manufacturer-stated performance refers to ideal conditions of minimum turbulence, flat terrain and absence of wakes due to obstacles [6]. a reliable estimation of wt performance then requires the measured wind speed to be corrected and, for this reason, two different correction methods have been defined in technical specifications and international standards. the first method does not take into account the effects of the wakes of other turbines and obstacles, while the second method filters the considered direction of the wind in order to remove these wake effects. a wake is a sort of loading effect, as occurs in electric circuits: wakes are long trails of wind in a turbulent regime, with a lower speed with respect to the flow entering the turbine rotor. this effect needs to be minimized, because if the wind flow entering the wt rotor is affected by the wake of another turbine, its speed is lower than in wake-free conditions, i.e. the energy production is reduced. a common requirement of these methods is the availability of two quantities: the wind speed vwt measured by the anemometer and the wind speed vstat detected by a meteorological mast, which has to be close to the turbine under investigation. unfortunately, a meteorological mast is not always present in wind power plants, thus preventing the implementation of these correction methods. to overcome this limitation, the present work proposes an alternative method that relies on the manufacturer power curve and the wind speed detected by the turbine anemometer, thus not requiring the measurements provided by a meteorological mast. in section 2, a review of the two standard methods is presented and the new correction method is described. section 3 defines the concept of yearly average efficiency (which takes into account the energy generated by a wt) and describes the availability and capacity factors. in section 4, a case study related to a wind power plant in southern italy is presented. section 5 reports the results obtained for two wts located in two different areas of the considered power plant. finally, section 6 summarizes the main outcomes of this work.

2. correction methods
an expansion of the stream tube occurs before and after the passage through the three-blade rotor. therefore, the cross section of the wind flow increases, while its kinetic energy (thus, its speed) decreases. the presented methods aim to correct the wind speed detected by the anemometer behind the turbine rotor, calculating the corresponding speed at its entrance.
before the application of one of the proposed correction methods, a preliminary normalization to the reference air density ρref = 1.225 kg/m³ is performed, since the manufacturer specifications refer to this condition. in particular, for wts with active power control, the experimental results are corrected according to the following expression [6]:

vcor = vexp · (ρair / ρref)^(1/3)   (1)

where vcor is the corrected wind speed, vexp is the measured wind speed and ρair is the air density during the measurement.

2.1. method #1 – straight line method (slm)
the first method requires vwt and vstat as input quantities and consists of the following steps:
1) step a – selection of the wind-speed direction. the wind direction β is properly selected in order to consider valid the assumption vstat ≈ ventr, where ventr is the wind speed entering the turbine rotor. in particular, the experimental results are filtered in order to analyse the wind contributions flowing from the station to the wt. simplifying the problem as a 2d system without the vertical coordinate, a straight line is traced between the anemometric station and the wt under test, and its orientation βwt with respect to the north direction is calculated. however, if the set of experimental data is limited, a low number of experimental points is available. in this case, it is generally convenient to extend the analysis to wind speeds with orientations β = βwt ± δβ, where 2·δβ is the top angle of a triangle whose base is the rotor diameter d of the wt (d = 2·rd by assumption, where rd is the length of a blade, neglecting the hub radius) and whose third vertex is the meteorological mast.
2) step b – selection of the data with ventr > vwt. as described in step a, the wind speeds of interest flow from the anemometric station to the wt. therefore, ventr has to be larger than vwt, because the kinetic energy of the wind decreases as it flows from the meteorological mast through the turbine rotor.
3) step c – removal of the experimental data with a turbulence larger than 10 %. in each time interval, the turbulence is the ratio between the standard deviation of the wind speed and its average value in the interval. the power curve provided by the manufacturer is measured in conditions of minimum turbulence, generally lower than 10 % [7].
4) step d – linear regression of the experimental data. in this step, a linear equation describing vstat as a function of vwt is identified in order to estimate ventr by the regression line of vstat on vwt; the measurement of vwt is thus corrected thanks to the measurement of vstat. the goodness-of-fit of the linear regression to the experimental data is measured through the parameter r², which ranges from 0 (no suitable model) to 1 (best model).
during the design of a wind power plant, the position of the turbines has to be optimized in order to minimize their mutual wakes and maximize their energy production. however, due to different constraints, such as terrain and land morphology, these effects cannot always be minimized. therefore, the first method needs to be modified in order to remove possible errors due to the mutual wake effects.

2.2. method #2 – no wakes method (nwm)
this method is similar to, and consists of the same steps as, the slm. however, since the nwm aims to avoid the mutual wakes that affect the measurements, step a is modified.
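a minimal sketch in python of the density normalization of equation (1) and of the step-d regression of the slm (section 2.1); the array names are illustrative:

```python
import numpy as np

RHO_REF = 1.225  # reference air density (kg/m^3)

def normalize_wind_speed(v_exp, rho_air):
    """equation (1): air-density normalization of the measured speed."""
    return v_exp * (rho_air / RHO_REF) ** (1.0 / 3.0)

def slm_regression(v_wt, v_stat):
    """step d: regression line of v_stat on v_wt and its r^2.
    v_wt, v_stat are paired 10-min wind speeds that already passed
    the direction and turbulence filters of steps a-c."""
    slope, intercept = np.polyfit(v_wt, v_stat, 1)
    r2 = np.corrcoef(v_wt, v_stat)[0, 1] ** 2
    return slope, intercept, r2
```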
indeed, the nwm does not focus the correction on the direction joining the meteorological mast and the wt; on the contrary, it investigates all the directions in which the anemometric station and the wt are not affected by the wakes of other turbines. the procedure used to determine the wind directions free from the influence of any obstacle is based on the document [6]. in particular, for each obstacle in the neighbourhood of the wt, such as other operating wts or a meteorological station, the wind-direction angles α that must be excluded from the analysis are calculated according to the following expression:

α = 1.3 · arctan(2.5 · d / l + 0.15) + 10   (2)

where d is the rotor diameter, l is the mutual distance between the obstacle and the wt under test, and α is expressed in degrees. after the selection of the proper wind direction, it is possible to verify the validity of the results by means of a more sophisticated analytical model, named the "jensen model" or "park model" [8]. it allows the estimation of the wind speed v* perturbed by the wake of a turbine using the following expression:

v* = v0 · [1 − (1 − √(1 − ct)) / (1 + k · x / rd)²]   (3)

where v0 is the wind speed not affected by wakes, ct is the thrust coefficient of the wt (which depends on the wind intensity), rd is the radius of the turbine rotor and x is the downwind distance. the parameter k is the decay constant of the wake, estimated according to the following equation:

k = 0.5 / ln(h / z0)   (4)

where h is the hub height of the wt and z0 is the roughness of the ground. according to the jensen model, the wake widens linearly with x and its diffusion radius rx can be estimated as:

rx = rd + k · x   (5)

it should be noted that the model considers the perturbation of the flow profile along the direction of the wind, while its perpendicular component is assumed constant (1-d model). finally, this model assumes that k is a constant parameter depending only on h and z0. a numerical sketch of this wake model is given below.

2.3. method #3 – statistical method (sm)
the alternative method does not require experimental data provided by a meteorological station [9]: the input quantities are the wind speed measured by the wt anemometer and the power curve provided by the manufacturer. the assumption behind this methodology is that the manufacturer power curve is the locus of the points where the generator operates with the best performance. therefore, the analytic relation between vwt and the wind speed stated by the wt manufacturer (for the same output electric power pk) is derived. figure 1 reports the scheme of this methodology. more in detail, the methodology consists of the following steps:
1) step a – removal of the experimental data with a turbulence larger than 10 % [7];
2) step b – selection of the experimental set sk. one of the available working points pk = p(vk) is selected on the power curve provided by the wt manufacturer. then, a set sk of experimental data is identified such that the electric output power lies in the neighbourhood of pk, i.e. in the interval between pk·(1−ε) and pk·(1+ε). in this work, the value of ε was set to 0.01, based on the consideration that output powers within a ± 1 % interval are not distinguishable due to the typical uncertainty of the equipment used to measure this quantity. the set sk is described as:

sk = {[vwt,i, p(vwt,i)] : p(vwt,i) ∈ [pk·(1−ε), pk·(1+ε)]}   (6)

3) step c – calculation of the empirical cumulative distribution function (ecdf) of the wind speed.
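before detailing step c, here is the announced minimal sketch of the excluded-sector rule of equation (2) and of the jensen wake model of equations (3)-(5) from section 2.2; the thrust coefficient ct = 0.8 and the distances are illustrative assumptions, not values from the paper:

```python
import numpy as np

def excluded_sector(d, l):
    """equation (2): width of the disturbed wind-direction sector (degrees)."""
    return 1.3 * np.degrees(np.arctan(2.5 * d / l + 0.15)) + 10.0

def wake_decay(h, z0):
    """equation (4): wake decay constant from hub height and ground roughness."""
    return 0.5 / np.log(h / z0)

def jensen_wake_speed(v0, ct, x, rd, k):
    """equation (3): wind speed perturbed by an upstream wake."""
    return v0 * (1.0 - (1.0 - np.sqrt(1.0 - ct)) / (1.0 + k * x / rd) ** 2)

def wake_radius(rd, k, x):
    """equation (5): wake diffusion radius at downwind distance x."""
    return rd + k * x

# with the parameters later used in section 5 (h = 80 m, z0 = 0.1 m),
# the decay constant is k = 0.5 / ln(800), i.e. about 0.075:
k = wake_decay(h=80.0, z0=0.1)
print(round(k, 3))                                            # 0.075
print(jensen_wake_speed(v0=10.0, ct=0.8, x=400.0, rd=40.0, k=k))
```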
in step c, the ecdf of the wind speed corresponding to the selected value of vk is calculated, as shown in figure 2 (blue dots), which refers to the value vk = 12 m/s. the same figure also highlights how the calculated ecdf is well approximated by the cdf f(vwt) (red line) corresponding to the probability density function (pdf) f(vwt) of the gamma distribution, whose normalization involves the gamma function Γ [10]:

f(vwt) = vwt^(a−1) / (b^a · Γ(a)) · e^(−vwt/b),  vwt ≥ 0   (7)

where the parameter a is estimated as the squared ratio between the mean value and the standard deviation of sk, while the parameter b is derived as the ratio between the mean value of sk and a;
4) step d – estimation of the wind-speed fifth percentile. starting from the pdf f(vwt), the fifth percentile vwt^5% of the wind speed, i.e. the value that has a 5 % probability of not being exceeded in sk, is selected. steps b to d are repeated for each available working point p(vk) in the power curve provided by the manufacturer.
5) step e – linear regression of the experimental data. this step is similar to step d of the other methods, but in this case a linear equation is obtained between vwt^5% and the corresponding vk.

figure 1. scheme of the methodology for the sm.
figure 2. example of the calculated ecdf for vk = 12 m/s.

one should note that, when the wt reaches its nominal power, the correspondence between the wind speed and the output power is not unique. the output power is indeed limited to the rated value and can be obtained with several values of the wind speed. this represents a limit of the proposed method, which is not applicable in the range of high wind speeds due to its strong dependence on the manufacturer power curve.

3. estimation of wt efficiency
the efficiency of a wt is the ratio between the electrical power it produces and the aerodynamic power of the wind at the entrance of the rotor. the aerodynamic power paer of the wind can be calculated as [11]:

paer = (1/2) · ρair · (π/4) · d² · ventr³   (8)

the efficiency can also be estimated as the ratio between the electrical and wind energies in a certain time interval δt. indicating the measured wt output power as pout, the efficiency can be obtained as [12]-[13]:

η = pout / paer = (pout · δt) / (paer · δt) = eel / eaer   (9)

where eel and eaer are the electrical and aerodynamic energies, respectively. with the aim of comparing the three proposed correction methods, the results will be expressed in terms of the weighted yearly efficiency η*:

η* = Σyear(ηk · ek) / Σyear(ek) = Σyear(ηk · ek) / ey,exp   (10)

where ηk is the wt efficiency and ek is the output energy in the k-th time interval (δt = 10 min), while ey,exp is the experimental yearly energy generated by the wt. thanks to the availability of an anemometric station and to the accurate selection of the wind direction, the nwm is considered as the reference method. the other two methods will then be compared to the nwm by means of the efficiency deviation δη*, defined as:

δη* = 100 % · (η* − η*nwm) / η*nwm   (11)

where η*nwm is the average efficiency estimated with the nwm.
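the computational core of the sm (steps c and d, equation (7)) and the figures of merit of equations (8)-(11) reduce to a few lines. a minimal sketch in python, assuming scipy is available; the array names are illustrative:

```python
import numpy as np
from scipy import stats

def fifth_percentile(v_wt_set):
    """sm steps c-d: method-of-moments fit of the gamma pdf (7)
    and the 5 % percentile of the wind speed in the set s_k."""
    mu = np.mean(v_wt_set)
    sigma = np.std(v_wt_set, ddof=1)
    a = (mu / sigma) ** 2                 # shape parameter of (7)
    b = mu / a                            # scale parameter of (7)
    return stats.gamma.ppf(0.05, a, scale=b)

def weighted_yearly_efficiency(p_out, v_entr, d, rho_air=1.225, dt_h=1/6):
    """equations (8)-(10) over one year of 10-min records (dt = 1/6 h)."""
    p_aer = 0.5 * rho_air * np.pi / 4 * d**2 * v_entr**3   # equation (8)
    eta = p_out / p_aer                                    # equation (9)
    e_k = p_out * dt_h                                     # output energies
    return np.sum(eta * e_k) / np.sum(e_k)                 # equation (10)

def efficiency_deviation(eta_star, eta_star_nwm):
    """equation (11), in percent."""
    return 100.0 * (eta_star - eta_star_nwm) / eta_star_nwm
```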
moreover, the availability factor and the capacity factor are calculated for the turbines under analysis. the first quantity provides information regarding the probability that a system is operational at a specific time: in particular, it is the ratio between the uptime of the wts and their total operation time, which includes non-operation periods due to failures or maintenance actions [14]-[15]. on the other hand, the capacity factor is, for a given time interval, the ratio between the real energy produced and injected into the grid and the ideal energy that could be generated by the turbine working continuously at its rated power. an example of centralized power stations with a very high capacity factor (about 90 % over one year) are the units for base-load operation (a short computational sketch of both factors is given below).

4. case study
the three methods previously described were applied to two wts of a wind farm in southern italy (43 wts, global nominal power of 80 mw, altitude between 1100 m and 1200 m), using data collected during a measurement campaign in 2017. the turbines of the wind farm have a hub height of 80 m and a three-bladed rotor. their wind-speed range is the following: cut-in speed vc-in = 3.5 m/s, cut-out speed vc-out = 25 m/s. however, the wind farm is divided into two parts, with two different models of turbines. in the first area, the wts have a nominal power of 2.5 mw, a rotor diameter of 80 m and a rated wind speed of 15 m/s. in this part, a meteorological mast (height of about 80 m) is present, measuring the quantities of interest. in particular, it is equipped with: 1) a first-class cup anemometer, which acquires the horizontal component of the wind speed according to the requirements provided in [6]; 2) first-class sensors that detect the wind direction according to [7]; 3) pressure, humidity and temperature sensors, which measure the environmental quantities used to estimate the air density at the height of the meteorological mast and of the turbine. the anemometer provides a resolution of 0.05 m/s and its stated uncertainty is ± 1 % of the measured value in the range (0.3 ÷ 50) m/s, with a minimum uncertainty of ± 0.2 m/s. the environmental quantities are measured with uncertainties of ± 2 °c for the temperature, ± 5 %rh for the relative humidity and ± 1 kpa for the pressure. in the second area of the wind farm, the wts have a rated power of 1.5 mw, a rotor diameter of 77 m and a rated wind speed of 12 m/s. however, in this area there is no meteorological station installed; hence, the only available data are provided by the anemometer located on the back of the turbines. the wts of the entire plant are equipped with an ultrasonic anemometer that measures the absolute value and the direction of the wind speed, providing a resolution of 0.01 m/s and an uncertainty of ± 2 % of the measured value in the range (0 ÷ 60) m/s (minimum uncertainty ± 0.25 m/s). the electrical output power of the wts is measured with a relative standard uncertainty of 1 %.

5. results
5.1. results of one turbine with the meteorological station
in this subsection, the results for a wt in the first area of the wind farm (the one equipped with a meteorological mast) are presented. the electrical power measurements pout obtained at the output of the wt (average values within 10-min time intervals) are shown in figure 3 (blue points) with respect to the measured wind speed, corrected according to equation (1).
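the availability and capacity factors defined at the beginning of this section reduce to one-line computations; a minimal sketch, where the 8760-hour year and the numerical example are assumptions consistent with the values quoted later in this section:

```python
def availability_factor(uptime_h, total_h):
    """ratio between the uptime of the wt and its total operation time."""
    return uptime_h / total_h

def capacity_factor(energy_mwh, rated_mw, hours=8760.0):
    """energy injected into the grid over the ideal energy at rated power."""
    return energy_mwh / (rated_mw * hours)

# e.g. a 2.5 mw turbine with a capacity factor of 29.3 % injects
# about 0.293 * 2.5 mw * 8760 h, i.e. roughly 6417 mwh, in one year:
print(round(capacity_factor(6417.0, 2.5), 3))  # 0.293
```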
the same figure, which refers to results collected during a time interval of approximately one year, also reports the manufacturer power curve (red line). one should note that a high number of observations lie to the left of the manufacturer power curve, since no correction method has yet been applied to these experimental results. this behaviour is not realistic, since the experimental performance of the wt cannot be higher than the manufacturer's specifications. furthermore, the cut-in and cut-out wind speeds are about 3 m/s and 24 m/s, respectively, which are lower than the corresponding nominal values (vc-in = 3.5 m/s, vc-out = 25 m/s). according to the correction methods described in section 2, experimental results that show a turbulence larger than 10 % are removed. results showing null output power for wind speeds in the range (vc-in ÷ vc-out) are also removed, since they refer to failure conditions of the investigated plant. before applying the described correction methods, a preliminary uncertainty estimation is performed, taking into account the instrumental uncertainty of the wattmeter and anemometer of the wt and the contribution related to the repeatability of the measured output power. as a first step, the method of bins [16] is applied: power measurements are grouped according to the corresponding wind speed measured by the wt anemometer. since its uncertainty has a maximum value of 0.5 m/s at a wind speed of 25 m/s, the experimental results are grouped into uniform wind-speed bins with a width of ± 0.5 m/s. then, the mean output power is estimated for each identified group and the standard deviation of the mean is taken as the estimate of the measurement repeatability. this contribution is combined with the instrumental standard uncertainty (1 % of the measured value), thus obtaining the combined standard uncertainty u(p). the obtained results are summarized in figure 4, where the red bars refer to the manufacturer power curve, while the grey bars represent the experimental means of each group centred around integer values of wind speed. the error bars superimposed on each grey bar are the intervals pmean,i ± u(pi). even though the anomalous data points are removed, the uncorrected experimental results are not yet fully consistent with the manufacturer specifications: for wind speeds up to 6 m/s and at 21 m/s, the electrical output power is higher than the manufacturer specifications, and the cut-in and cut-out wind speeds remain those previously estimated. implementing the slm, the wind-speed direction considered in the correction is β = (231 ± 13)° and the linear regression (r² = 0.969) results in the following equation:

vstat = 0.971 · vwt + 0.758   (12)

the results after the correction with equation (12) are reported in figure 5. for wind speeds lower than 21 m/s, the corrected output power is lower than the manufacturer curve. regarding the cut-in and cut-out wind speeds, the slm correction leads to an estimation of vc-in that is comparable to the rated specification (≈ 3.5 m/s), while vc-out remains lower (≈ 24 m/s). moreover, for wind speeds higher than 13 m/s, the manufacturer power curve reaches a saturation power of about 2.5 mw, while the experimental data reach a higher saturation power. this behaviour is realistic, being due to the pitch regulation of the wt: in fact, the turbine is allowed to work with a maximum power of about 104 % of the rated value.
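a minimal sketch of the method of bins and of the uncertainty combination just described, assuming arrays of 10-min wind speeds and output powers (names illustrative):

```python
import numpy as np

def bin_power(v_wt, p_out, width=1.0):
    """method of bins: group 10-min powers into wind-speed bins of
    ±0.5 m/s around integer speeds; combine the repeatability of each
    bin with the 1 % instrumental uncertainty of the wattmeter."""
    centres = np.arange(np.floor(v_wt.min()), np.ceil(v_wt.max()) + 1)
    means, u_comb = [], []
    for c in centres:
        p = p_out[np.abs(v_wt - c) <= width / 2]
        if p.size < 2:
            means.append(np.nan)
            u_comb.append(np.nan)
            continue
        p_mean = p.mean()
        u_rep = p.std(ddof=1) / np.sqrt(p.size)   # std dev of the mean
        u_instr = 0.01 * p_mean                   # 1 % of the reading
        means.append(p_mean)
        u_comb.append(np.hypot(u_rep, u_instr))   # combined standard u
    return centres, np.array(means), np.array(u_comb)
```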
this performance of the wt results in a higher energy production; however, an earlier aging of the turbine, due to a higher degradation of the materials, may occur as well.

figure 3. turbine #1 uncorrected raw experimental data (blue dots) and manufacturer power curve (red line).
figure 4. turbine #1 uncorrected experimental data after the preliminary data processing.
figure 5. turbine #1 slm corrected results (grey bars) and manufacturer power curve (red bars).

the direction of the wind speeds considered in the slm may be affected by turbulence, mainly due to the wakes of other turbines or obstacles. the nwm solves this issue by identifying the wind-speed directions in which the turbine and the meteorological mast are not affected by wakes. according to the jensen model, the angular section not affected by wakes corresponds to β ranging between -26.8° and 12.7°: thus, the north direction (β = 0°) is selected for the nwm. figure 6 presents the results of the jensen model for the wind directions considered in the slm and the nwm, using k = 0.075, h = 80 m, rd = 40 m and z0 = 0.1 m. the blue and the red circles represent the wt under analysis and the meteorological mast, respectively; the grey circles indicate the other wind generators, and the cones represent the areas affected by wakes. with the wind direction assumed for the slm, the wt and the station are affected by the wake of another turbine; on the other hand, with the north direction they are wake-free. the resulting regression equation (r² = 0.979) for the nwm is the following:

vstat = 0.998 · vwt + 0.550   (13)

figure 7 reports the results after the correction. the nwm correctly estimates the cut-in and cut-out wind speeds, as they coincide with the values provided by the manufacturer (≈ 3.5 m/s and ≈ 25 m/s, respectively). the sm does not require experimental data from a meteorological mast, so it can also be used in wind power plants without weather stations close to the wts. however, as described in section 2, this correction cannot be applied to the last part of the power curve, where the electrical power reaches a saturation value (the nominal power). thus, the sm is applied to wind speeds in the range (4 ÷ 15) m/s, whose upper limit is the rated wind speed stated by the manufacturer. the local distribution of the experimental wind speeds for the year under study (red bars in figure 8) shows that this range contains most of the experimental data: for wind speeds of ≈ 15 m/s, the corresponding cumulative function (blue curve) is higher than 0.93, i.e. more than 93 % of the wind speeds are ≤ 15 m/s. in order to achieve a better accuracy, regression equations are identified for two wind-speed ranges, lower than 6 m/s and between 6 m/s and 15 m/s. the resulting equations are the following (r² ≈ 1):

vstat = 0.779 · vwt + 1.493;  vwt < 6 m/s   (14)
vstat = 0.923 · vwt + 0.961;  6 m/s ≤ vwt < 15 m/s   (15)

the results of the sm, reported in figure 9, highlight that the performance of the wt is now realistic, since it is lower than the manufacturer power curve. the weighted yearly efficiencies are calculated to evaluate the effectiveness of the corrections (figure 10). for wind speeds higher than the rated value, the curves converge to the manufacturer data (blue curve), while at lower wind speeds the uncorrected data (red curve) have a different shape from the manufacturer curve.
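the piecewise correction of equations (14)-(15) can be wrapped in a single function; a minimal sketch, valid only below the 15 m/s rated speed as discussed above:

```python
import numpy as np

def sm_correct_turbine1(v_wt):
    """piecewise sm correction of equations (14)-(15) for turbine #1."""
    v_wt = np.asarray(v_wt, dtype=float)
    return np.where(v_wt < 6.0,
                    0.779 * v_wt + 1.493,    # equation (14)
                    0.923 * v_wt + 0.961)    # equation (15)

print(sm_correct_turbine1([5.0, 10.0]))  # -> [5.388, 10.191]
```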
moreover, at low wind speeds the necessity to correct the raw data is evident: in fact, at a wind speed of ≈ 5 m/s, the raw weighted efficiency (η* ≈ 0.4) is higher than the manufacturer value. actually, the uncorrected data are based on the wind speed detected on the back of the turbine rotor: this value is lower than the wind speed entering the turbine, leading to an overestimation of its efficiency. regarding the corrections, the slm (green curve) estimates the lowest efficiencies but, as previously described, its results are affected by wakes. indeed, the comparison with the nwm curve shows that the presence of wakes leads to an underestimation of the efficiency. the shape of the efficiency curve of the sm, which is based on the manufacturer data, is the closest to the shape of the manufacturer one. table 1 reports the weighted yearly efficiencies and the deviations of the slm and the sm with respect to the nwm, which is assumed as the reference. for wind speeds < 6 m/s, the two methods underestimate the efficiency, with similar deviations from the nwm of about -9.1 % (slm) and -7.9 % (sm). in the intermediate wind-speed range (6 ÷ 15) m/s, the slm underestimates the efficiency with a spread of about -3.5 %, while the performance of the wt is overestimated by the sm with a deviation of about 5.2 %. the turbine under analysis has a very high energy performance within the area of the farm with the meteorological mast. its availability and capacity factors were evaluated: the wt under study is in operation for 97.2 % of the time, with an average capacity factor of 29.3 %. among the turbines in this part of the wind farm, the performance of this wt is one of the best, the average availability and capacity factors of the plant being 96.5 % and 22.3 %, respectively.

figure 6. turbine #1 jensen model with wind directions assumed for slm (top) and nwm (bottom).
figure 7. turbine #1 nwm corrected results (grey bars) and manufacturer power curve (red bars).
figure 8. cumulative function and pdf of wind speed distribution.
figure 9. turbine #1 sm corrected results (grey bars) and manufacturer power curve (red bars).

5.2. results of one turbine without the meteorological station
this subsection presents the results for a wt in the second area of the wind farm (i.e., without the meteorological mast). the power curve provided by the manufacturer (red line) and the experimental observations (blue points, corresponding to average pout values within 10-min time intervals) are presented in figure 11; the wind speeds are corrected according to equation (1). in figure 11, a high number of data points are shifted to the left of the manufacturer power curve. moreover, the cut-in and cut-out wind speeds (≈ 3 m/s and ≈ 20 m/s) are lower than the nominal values (vc-in = 3.5 m/s, vc-out = 25 m/s): thus, a correction of these data is required. first, failure conditions are excluded from the analysis by removing observations exhibiting turbulence higher than 10 % and null output power. as described in the previous subsection, a preliminary uncertainty estimation is performed starting from the instrumental uncertainty of the wt wattmeter and anemometer, and from the repeatability of the measured output power.
figure 12 shows the results of this preliminary analysis: the red bars correspond to the manufacturer power curve, while the grey bars represent the experimental means of each group centred around integer values of wind speed. the error bars superimposed on each grey bar are the intervals pmean,i ± u(pi). the figure confirms the preliminary results of the other turbine under study: even though the data corresponding to abnormal operation are excluded, the electrical output power remains higher than the manufacturer specifications. however, the cut-in and cut-out wind speeds (≈ 4 m/s and ≈ 21 m/s) are closer to the values provided by the manufacturer. this wt is in the area of the wind farm without a meteorological station, thus only the sm can be applied; the data after the correction are presented in figure 13. as described in section 2, this method cannot be applied to the part of the power curve where the power is constant (nominal power); hence, wind speeds higher than the rated value (12 m/s) are excluded from the analysis. after applying the correction, the performance of the wt (grey bars) is realistic, being lower than the manufacturer curve (red bars). two regression equations (r² ≈ 1) are determined for the wind-speed intervals below 6 m/s and between 6 m/s and 12 m/s:

vstat = 0.946 · vwt + 1.258;  vwt < 6 m/s   (16)
vstat = 0.893 · vwt + 1.923;  6 m/s ≤ vwt < 12 m/s   (17)

the weighted yearly efficiencies for the raw data are about 33.5 % (wind speeds < 6 m/s) and 42.6 % (wind speeds between 6 m/s and 12 m/s). after the sm correction, the weighted efficiencies are about 27.1 % (< 6 m/s) and 34.3 % (6 m/s ÷ 12 m/s), i.e. a decrease of about 19 % for both wind-speed ranges. this turbine exhibits a very high energy performance within the area of the plant without the weather station. indeed, it has the highest availability of the wind farm (99.1 %), while the average value in the second part of the plant is 97.4 %; moreover, its capacity factor is 27.9 %. this value is lower than that of the other wt under analysis, despite being higher than the average (20.1 %) in the second area of the plant.

table 1. weighted yearly efficiencies and deviations for the correction methods (turbine with meteorological station).

| wind speed range | η*raw  | η*slm  | η*nwm  | η*sm   | δη*slm | δη*sm  |
| < 6 m/s          | 39.7 % | 30.0 % | 33.0 % | 30.0 % | -9.1 % | -7.9 % |
| 6 – 15 m/s       | 39.9 % | 33.7 % | 34.8 % | 37.0 % | -3.5 % | 5.2 %  |

figure 10. turbine #1 weighted yearly efficiencies for the proposed corrections.
figure 11. turbine #2 uncorrected raw experimental data (blue dots) and manufacturer power curve (red line).
figure 12. turbine #2 uncorrected experimental data after the preliminary data processing.

6. conclusions
this work proposes an innovative method, named "statistical method" (sm), to evaluate the average efficiency of wind turbines by correcting the wind speed measured by the nacelle anemometer to obtain the value at the entrance of the rotor. in the literature, two other methods (the straight line method, slm, and the no wakes method, nwm) are defined to perform this correction, taking into account technical specifications and international standards. these correction methods require data measured by a meteorological mast close to the turbines, but the presence of such a station in wind farms is rare.
conversely, the correction proposed in this paper evaluates the wind speed entering the wt rotor relying only on the manufacturer power curve and the data measured by the wt anemometer. indeed, it may also be applied in wind farms that are not equipped with a meteorological station. in the present work, these three methods were applied to a one-year experimental campaign on a wind farm in southern italy. in particular, two turbines located in two different areas of the same plant were analysed: only the first turbine was close to a meteorological mast. the effects of the corrections were evaluated by representing the electrical output power by means of the method of bins, setting the width of the bins according to the uncertainty of the used anemometer. an uncertainty estimation was also performed for the wt output power, taking into account the power measurement uncertainty and the repeatability within each wind-speed bin. the results of the nwm were considered as the reference. regarding the turbine in the part of the plant including the mast, the sm performed similarly to the slm, providing comparable absolute deviations in terms of weighted efficiencies with respect to the reference: the deviations of the sm are about ± 7 % over the total range, while those of the slm are about ± 9 % with respect to the nwm. after the sm correction, the weighted yearly efficiency decreased by between about 10 % and 20 % with respect to the raw data in the usual wind-speed range.

figure 13. turbine #2 sm corrected results (grey bars) and manufacturer power curve (red bars).

references
[1] f. spertino, a. ciocia, v. cocina, p. di leo, renewable sources with storage for cost-effective solutions to supply commercial loads, proc. of 2016 international symposium on power electronics, electrical drives, automation and motion (speedam), anacapri, italy, 22-24 june 2016, pp. 242-247. doi: 10.1109/speedam.2016.7525987
[2] a. mahesh, k. s. sandhu, hybrid wind/photovoltaic energy system developments: critical review and findings, renewable and sustainable energy reviews 52 (2015), pp. 1135-1147. doi: 10.1016/j.rser.2015.08.008
[3] z. zhang, y. zhang, q. huang, w. lee, market-oriented optimal dispatching strategy for a wind farm with a multiple stage hybrid energy storage system, csee journal of power and energy systems 4(4) (2018), pp. 417-424. doi: 10.17775/cseejpes.2018.00130
[4] wind energy in europe: outlook to 2023. online [accessed 09 june 2021] https://windeurope.org/about-wind/reports/wind-energy-in-europe-outlook-to-2023/
[5] p. w. carlin, a. s. laxson, e. b. muljadi, the history and state of the art of variable-speed wind turbine technology, wind energy 6(2) (2003), pp. 129-159. doi: 10.1002/we.77
[6] cenelec en 61400-12-1:2017, power performance measurement of electricity producing wind turbines.
[7] v. cocina, p. di leo, m. pastorelli, f. spertino, choice of the most suitable wind turbine in the installation site: a case study, proc. of 2015 international conference on renewable energy research and applications (icrera), palermo, italy, 22-25 november 2015, pp. 1631-1634. doi: 10.1109/icrera.2015.7418682
[8] a. peña, p. e. rethore, m. p. van der laan, on the application of the jensen wake model using a turbulence-dependent wake decay coefficient: the sexbierum case, wind energy 19(4) (2016), pp. 763-776. doi: 10.1002/we.1863
[9] f. spertino, p. di leo, i. s. ilie, g. chicco, dfig equivalent circuit and mismatch assessment between manufacturer and experimental power-wind speed curves, renewable energy 48 (2012), pp. 333-343. doi: 10.1016/j.renene.2012.01.002
[10] p. j. davis, leonhard euler's integral: a historical profile of the gamma function, american mathematical monthly 66(10) (1959), pp. 849-869. doi: 10.1080/00029890.1959.11989422
[11] k. grogg, harvesting the wind: the physics of wind turbines, 2005. online [accessed 09 june 2021] https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.589.2982&rep=rep1&type=pdf
[12] m. h. el-ahmar, a. m. el-sayed, a. m. hemeida, evaluation of factors affecting wind turbine output power, proc. of nineteenth international middle east power systems conference (mepcon), cairo, egypt, 19-21 december 2017, pp. 1471-1476. doi: 10.1109/mepcon.2017.8301377
[13] f. spertino, a. ciocia, p. di leo, g. iuso, g. malgaroli, l. roberto, experimental testing of a horizontal-axis wind turbine to assess its performance, proc. of 22nd imeko tc4 international symposium, iasi, romania, 14-15 september 2017, pp. 411-414. online [accessed 14 june 2021] https://www.imeko.org/publications/tc4-2017/imeko-tc4-2017-080.pdf
[14] f. spertino, e. chiodo, a. ciocia, g. malgaroli, a. ratclif, maintenance activity, reliability analysis and related energy losses in five operating photovoltaic plants, proc. of 2019 ieee international conference on environment and electrical engineering and 2019 ieee industrial and commercial power systems europe (eeeic / i&cps europe), genova, italy, 11-14 june 2019, pp. 1-6. doi: 10.1109/eeeic.2019.8783240
[15] f. spertino, e. chiodo, a. ciocia, g. malgaroli, a. ratclif, maintenance activity, reliability, availability, and related energy losses in ten operating photovoltaic systems up to 1.8 mw, ieee transactions on industry applications 57(1) (2021), pp. 83-93. doi: 10.1109/tia.2020.3031547
[16] j. f. manwell, j. g. mcgowan, a. l. rogers, wind energy explained, john wiley and sons, ltd, chichester, united kingdom, 2010, isbn: 9780470015001.
[17] a. carullo, a. ciocia, p. di leo, f. giordano, g. malgaroli, l. peraga, f. spertino, a. vallan, comparison of correction methods of wind speed for performance evaluation of wind turbines, proc. of 24th imeko tc4 international symposium, 14-16 september 2020, pp. 291-296. online [accessed 09 june 2021] https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-55.pdf
a training centre for intraocular pressure metrology

acta imeko issn: 2221-870x december 2022, volume 11, number 4, 1 – 6

dominik pražák1, vítězslav suchý1, markéta šafaříková-pštroszová1,2, kateřina drbálková1,2, václav sedlák1,2, šejla ališić3, anatolii bescupscii4, vanco kacarski5
1 czech metrology institute, okružní 31, 63800 brno, czechia
2 slovak university of technology, f. of mechanical engineering, nám. slobody 2910/17, 81231 bratislava, slovakia
3 institut za mjeriteljstvo bosne i hercegovine, branilaca sarajeva 25, 71000 sarajevo, bosnia and herzegovina
4 i. p. institutul naţional de metrologie, eugen coca 28, 2064 chişinău, moldova
5 bureau of metrology, bull. jane sandanski 109a, 1000 skopje, north macedonia

abstract: eye-tonometers are important medical devices with a measuring function, necessary for the screenings of intraocular hypertension (a serious risk factor for glaucoma). however, ensuring their correct metrological traceability is not an easy task: not only a wide range of equipment is needed, but also the relevant know-how. hence, a training centre for this quantity was established at the czech metrology institute (cmi) within the framework of a smart specialisation concept for intraocular pressure (iop) metrology. the paper briefly outlines its history, scope, methodologies and future development plans.

section: research paper
keywords: training centre; metrological traceability; eye-tonometer; intraocular pressure
citation: dominik pražák, vítězslav suchý, markéta šafaříková-pštroszová, kateřina drbálková, václav sedlák, šejla ališić, anatolii bescupscii, vanco kacarski, a training centre for intraocular pressure metrology, acta imeko, vol. 11, no. 4, article 4, december 2022, identifier: imeko-acta-11 (2022)-04-04
section editor: eric benoit, université savoie mont blanc, france
received june 27, 2022; in final form august 17, 2022; published december 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: empir project 20scp02 cefton
corresponding author: dominik pražák, e-mail: dprazak@cmi.cz

1. introduction
the iop belongs to the basic diagnostic indicators in ophthalmology and optometry. although this quantity is also monitored in veterinary medicine, this paper deals exclusively with human medicine. the screenings of intraocular hypertension serve, first of all, for the prevention and early diagnosis of glaucoma. hence, there is a high societal interest in the correctness of these measurements, which are performed by eye-tonometers [1]-[4]. some european countries (czechia, germany and lithuania) consider these instruments so crucial that they are a subject of legal metrology. however, harmonization in this area is relatively low across europe [5].
the iop is still measured and referenced in millimetres of mercury (1 mmhg ≈ 133.3 pa) for historical and practical reasons [6]. there is a consensus that the normal values should be within the range from 10 mmhg to 20 mmhg. however, the task of metrology is to ensure traceability in the complete physiological and pathophysiological range up to 80 mmhg.

2. project history
to overcome the obstacles in building iop metrology, the national metrology institutes (nmis) of austria, czechia, germany, poland, slovakia and turkey, together with the slovak university of technology in bratislava and palacký university in olomouc, formed a consortium within the empir programme, solving the project intense, which ran from june 2017 to may 2020 [5]-[9]. the scope was much broader, of course, but the main results from the point of view of the cooperation of the european nmis in this field are the foundation of a smart specialisation concept (ssc) for iop metrology [10] and a training centre for iop metrology in the premises of the cmi in the city of most, which was built with the essential help of the german colleagues. the advanced trainings of the cmi personnel were also accomplished and the needed technical expertise was successfully audited. an important part was a satisfying bilateral comparison in the iop between the cmi and the slovak university of technology (stu) at the beginning of 2020, during which two clinically tested non-contact tonometers nidek nt-2000 served as the laboratory standards, while a set of silicone eyes and an artificial eye served as the transfer-standards. the centre is now able to provide the metrological traceability and the relevant trainings to the other european nmis. from the beginning it was envisaged that the ssc would be extended in the future, geographically beyond central europe and thematically beyond iop metrology. hence, the training centre also plays a crucial role in the follow-up empir project cefton, which runs from september 2021 to february 2023 [11]-[12] and focuses on the transfer of iop metrology know-how to the nmis of selected central europe free trade agreement (cefta) countries. cmi joined forces with the nmis of bosnia and herzegovina, moldova, and north macedonia, which will serve as the pathfinders for the remaining cefta countries. the project has no research ambitions, being entirely focused on capacity building and on engaging in the ssc. the scope and content of the offered trainings, as well as the plans for the future, are presented below.

3. outline of the training centre
3.1. scope of the trainings
first, it must be highlighted that the centre does not provide training in the use of eye-tonometers on patients. this is the task of medical doctors and nurses.
the centre aims to provide training for metrologists, i.e., in the ways to establish correct traceability (calibrations and verifications) of the eye-tonometers and of the respective instrumental standards. the training is based on the good practice guidelines developed during the project intense, [7], which in turn reflect the relevant international standards and recommendations as well as the german and czech regulations, [13]-[17]. at present, the scope of the training centre covers the contact (impression and applanation) tonometers [18]-[22], the non-contact tonometers [23]-[25], the rebound tonometers [25], [26] and the contour tonometers [21], [27].

3.2. impression tonometers
the impression (or indentation, or schiøtz) tonometer is the oldest eye-tonometry principle (more than 120 years) still in practical use, see figure 1. it determines the iop from the depth of the corneal indentation caused by a plunger with exactly defined weight and dimensions. in order to measure very high iops, extra weights can be loaded. all these instruments are manufactured following common standardized specifications. hence, their traceability consists of checks of all the prescribed geometrical (e.g., the curvatures of the contact areas) and mechanical (e.g., weights and friction) requirements and tolerance limits. the weights can be checked by a mechanical or by an electronic balance in a special set-up. the laboratory is equipped with both.

figure 1. an impression tonometer placed on a precise calibration sphere.

3.3. applanation tonometers
the applanation (or goldmann) tonometer is also a long-established principle but is still considered to be the “golden standard.” it determines the iop by measuring the force needed to reach an applanation (i.e., flattening) of the cornea caused by a transparent probe with a known contact area (a circle of 3.06 mm diameter). the traceability of these instruments is again ensured in a classical way, by checking their geometrical specifications and optical quality and by calibrating their force sensor. also in this case, the force can be defined by a mechanical balance or by an electronic sensor, see figure 2. again, the laboratory is equipped with both. the local acceleration due to gravity in the laboratory must be known with a sufficient precision.

figure 2. calibration of an applanation tonometer (detail).
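a back-of-the-envelope check of the goldmann principle just described (our own sketch for illustration; the function and constant names are not from any calibration procedure): with the standardized 3.06 mm applanation circle, the pressure-force relation p = f / a implies that a force of 1 gram-force corresponds to very nearly 10 mmhg, which is why applanation scales are conveniently graduated in grams.

```python
import math

G_N_PER_GF = 9.80665e-3     # newtons per gram-force
PA_PER_MMHG = 133.322       # pascals per mmHg
D_APPLANATION = 3.06e-3     # standardized applanation diameter, metres

def iop_from_force(force_gf: float) -> float:
    """IOP in mmHg from the applanation force in grams-force (p = F / A)."""
    area = math.pi * (D_APPLANATION / 2.0) ** 2   # contact area, m^2
    pressure_pa = force_gf * G_N_PER_GF / area
    return pressure_pa / PA_PER_MMHG

print(f"{iop_from_force(1.0):.2f} mmHg")   # ~10.00
print(f"{iop_from_force(2.0):.2f} mmHg")   # ~20.00
```

(in a real calibration, the locally determined acceleration due to gravity would be used instead of the standard value, as noted above.)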
3.4. non-contact tonometers
the non-contact (or air-puff) tonometers are nowadays the most widely utilized tonometers, because there is no mechanical contact with the eye during the measurement and hence no need for a topical eye anaesthesia. these instruments also aim at an applanation of the cornea, but they do not reach it by a direct mechanical contact (as goldmann tonometers do), using instead a short and rapid pulse of air directed from a nozzle to the middle of the cornea. the moment of reaching the applanation is detected by the reflection of an infrared beam from the cornea. (in fact, we should speak about reaching a slightly concave shape instead of a real applanation.) the state-of-the-art devices are able to determine also other important ophthalmological measurands (e.g., central corneal thickness).
in contrast to the contact tonometers, there is no possibility of a direct classical traceability in this case. their traceability must be ensured to another, clinically tested non-contact tonometer (the training laboratory is equipped with one) via a suitable transfer-standard. there are three possible types of transfer-standards available: a set of rubber (silicone) eyes, see figure 3, an electronic eye, and a flapper (ptb-jig), see figure 4. the laboratory is equipped with all these devices, because none of them can be utilized universally with all the types of non-contact tonometers produced by the various manufacturers. the laboratory also took part in the above-mentioned interlaboratory comparison in this quantity with the stu, [9].

figure 3. a detail of a non-contact tonometer with a set of the rubber eyes.
figure 4. a detail of a non-contact tonometer with a flapper.

a virtual digital model of the eye cornea was created at the stu during the intense project. then a real mechanical model (artificial eye) corresponding to the virtual model was constructed for the experimental verifications. the stu used this artificial eye as one of the transfer-standards in the above-mentioned comparison. the target is to develop a “universal iop transfer-standard” with exchangeable artificial corneas of various thicknesses and with a hydraulically or pneumatically regulated inner pressure, [8], [9], [24], [28]-[33], see figure 5 and figure 6.

figure 5. the new transfer-standard during the comparison.
figure 6. the new transfer-standard during the comparison.

3.5. rebound tonometers
the rebound tonometers emerged in the beginning of the 21st century and are becoming popular due to their ease of use (home diagnostics is also possible). in this case, a very light and non-harming probe (a plastic-coated metal core with a spherical plastic tip) is ejected from the instrument against the cornea and is then reflected back into it. the probe movement can be monitored inductively, and the time response is used to calculate a value of the measured iop. the traceability of these instruments must again be ensured to a clinically tested rebound tonometer via a testing bench consisting of a silicone membrane surrogating a cornea, with an inner pressure regulated by a water column; this enables the readings of the clinically tested device and of the device under calibration to be compared, see figure 7 and figure 8.

figure 7. a rebound tonometer on its test bench.
figure 8. a rebound tonometer on its test bench (detail).
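to give a feel for the water-column regulation just described, here is a small sketch (ours, for illustration only; the water density and gravity values are assumptions, not specifications of the actual test bench) computing the column height that produces a given membrane pressure via the hydrostatic relation p = ρ g h:

```python
RHO_WATER = 998.0      # kg/m^3, water near room temperature (assumed)
G_LOCAL = 9.81         # m/s^2, nominal local gravity (assumed)
PA_PER_MMHG = 133.322

def column_height_for(p_mmhg: float) -> float:
    """Water-column height (m) giving a hydrostatic pressure of p_mmhg."""
    return p_mmhg * PA_PER_MMHG / (RHO_WATER * G_LOCAL)

for p in (10, 20, 40, 80):
    print(f"{p:3d} mmHg -> {column_height_for(p) * 100:6.1f} cm of water")
```

the full 80 mmhg pathophysiological range thus corresponds to a water column of roughly one metre, which is easily realisable on a bench.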
3.6. contour tonometers
the contour tonometer is another modern device. the head of this instrument has a concave shape corresponding to the typical shape and size of the human cornea. the head is pressed to the cornea with a constant force (i.e., being in contact with the cornea but not applanating it). a miniature piezoresistive pressure sensor mounted in the head is then able to detect the iop with such a sensitivity that it can even detect the minor iop fluctuations caused by the cardiac cycle. the principle is less influenced by the corneal thickness or the corneal rigidity, but it is rather sensitive to the corneal curvature. the traceability of this device can be relatively easily and straightforwardly ensured by a direct calibration of its internal pressure sensor, see figure 9.

figure 9. a contour tonometer connected to a pressure standard.

4. history of the trainings
the first training, for two experts of a german stakeholder, took place during the project intense in march 2020. then these activities were interrupted due to the covid-19 pandemic. however, the trainings were resumed in january 2022 within the project cefton, when two colleagues from bosnia and herzegovina took part. it was followed by a training of six people from the nmis of bosnia and herzegovina, moldova and north macedonia in june 2022. all the trainings took place at the training centre in most and covered all the principles described above. it was found useful that both the instruments and the instrumental standards to which they are traceable are concentrated in one place. hence, the attendees could more easily distinguish between the “construction principles of the tonometers” and the “traceability principles of the tonometers”, which used to be a stumbling block during theoretical lectures. the only shortcoming found was the fact that the training does not cover the maklakoff tonometer. this predecessor and much simplified variant of the goldmann tonometer has not been used in practice in central europe for years but is still widely utilized in the area of the former soviet union.

5. tasks for future
as mentioned in section 3.4, some modern non-contact tonometers are able to determine more eye characteristics than the sole iop value (e.g., corneal thickness, rigidity or hysteresis). however, it is still not solved how to ensure traceability for these extra measurands. we remain in contact with the academic partners to solve these problems too. the artificial eye of the stu seems to be a good starting device for the studies of the corneal thickness influence, because it allows the use of artificial corneas of various thicknesses. initial research in this direction has already started, [8]-[10], see figure 10. also, we are searching for a possibility to obtain a sample of a maklakoff tonometer and to establish a procedure for its traceability. moreover, as greater emphasis is being given to the accuracy and reliability of medical devices with a measuring function, [34]-[41], we consider the training centre a starting nucleus of further cooperation activities in the sector of medical metrology.

figure 10. experimentally found influence of the thickness of the artificial cornea on the non-contact tonometer response, as found by the stu.

6. conclusions
as a result of the fruitful cooperation of the european nmis, the training centre for iop metrology at the cmi covers the most common eye-tonometry principles, has state-of-the-art equipment and remains in intensive contact with the nmi and academic partners to broaden its scope in the future.

acknowledgement
this work was funded by the european metrology programme for innovation and research (empir) project 20scp02 cefton. the empir initiative is co-funded by the european union horizon 2020 research and innovation programme and the empir participating states.

references
[1] b. thylefors, a.-d. négrel, the global impact of glaucoma, bulletin of the world health organization, 72 (1994), pp. 323-329.
[2] h. a. quigley, a. t. broman, the number of people with glaucoma worldwide in 2010 and 2020, british journal of ophthalmology, 90 (2006), pp. 262-267. doi: 10.1136/bjo.2005.081224
[3] y.-c. tham, x. li, t. y. wong, h. a. quigley, t. aung, c.-y. cheng, global prevalence of glaucoma and projections of glaucoma burden through 2040, ophthalmology, 121 (2014), pp. 2081-2090. doi: 10.1016/j.ophtha.2014.05.013
[4] s. tanabe, k. yuki, n. ozeki, d. shiba, t. abe, k. kouyama, k. tsubota, the association between primary open-angle glaucoma and motor vehicle collisions, investigative ophthalmology and visual science, 52 (2011), pp. 4177-4181. doi: 10.1167/iovs.10-6264
[5] e. sınır, y. durgut, d. m. rosu, d. pražák, towards the harmonization of medical metrology traceability in europe: an impact case study through activities in turkey & empir project intense, ieee international symposium on medical measurements and applications proc., istanbul, turkey, 26 – 28 june 2019, pp. 1-6. doi: 10.1109/memea.2019.8802141
[6] d. pražák, v. sedlák, e. sınır, f. pluháček, changing the status of mmhg, accred. and qual. assur. 25 (2020), pp. 81-82. doi: 10.1007/s00769-019-01414-7
[7] intense-consortium, developing research capabilities for traceable intraocular pressure measurements. online [accessed 16 august 2022] http://intense.cmi.cz
[8] d. pražák, r. ziółkowski, d. rosu, m. schiebl, j. rybář, p. pavlásek, e. sınır, f. pluháček, metrology for intraocular pressure measurements, acta imeko 9 (2020) 5, pp. 353-356. doi: 10.21014/acta_imeko.v9i5.999
[9] d. pražák, j. rybář, p. pavlásek, v. sedlák, s. ďuriš, m. chytil, f. pluháček, rozvojové výskumné kapacity pre nadväznosť merania vnútroočného tlaku – výsledky európského projektu, metrológia a skúšobníctvo 26 (2021), pp. 10-14. [in slovak]
[10] v. sedlák, d. pražák, m. schiebl, m. nawotka, e. jugo, m. do céu ferreira, a. duffy, d. m. rosu, p. pavlásek, g. geršak, smart specialisation concept in metrology for blood and intraocular pressure measurements, measurement: sensors 18 (2021), 100283. doi: 10.1016/j.measen.2021.100283
[11] cefton-consortium, development of eye-tonometry in cefta countries. online [accessed 16 august 2022] http://projectcefton.com
[12] d. pražák, metrological traceability of the eye-tonometers, the 3rd international symposium on visual physiology, environment and perception abstr. proc., tallinn, estonia, 12 – 13 november 2021, p. 17. online [accessed 16 august 2022] https://konverentsikeskus.tlu.ee/sites/konverentsikeskus/files/vispep2020/vispep2021%20abstract%20book%20.pdf
[13] iso 8612: ophthalmic instruments – tonometers, 2009.
[14] oiml r145-1: ophthalmic instruments – impression and applanation tonometers – part 1 – metrological and technical requirements, 2015.
[15] s. mieke, t. schade, guidelines for metrological verifications of medical devices with a measuring function – part 1, ptb, berlin, ed. 3, 2016. online [accessed 16 august 2022] https://www.ptb.de/cms/fileadmin/internet/publikationen/wissensch_tech_publikationen/lmkm-v3-part1-englisch_2020.pdf
[16] czech metrology institute, opatření obecné povahy sp. zn.: 0111-oop-c038-16. [in czech] online [accessed 16 august 2022] https://www.cmi.cz/sites/all/files/public/download/uredni_deska/oop/0111-oop-c038-16.pdf
[17] czech metrology institute, opatření obecné povahy sp. zn.: 0111-oop-c039-13. [in czech] online [accessed 16 august 2022] https://www.cmi.cz/sites/all/files/public/download/uredni_deska/3439-id-c_3439-id-c.pdf
[18] h. dudek, t. schwenteck, h.-j. thiemich, normale für die messtechnischen kontrollen von augentonometern – vergleichsmessungen an 50 prüfeinrichtungen, klinische monatsblätter für augenheilkunde, 219 (2002), pp. 703-709. [in german]
[19] t. schwenteck, h.-j. thiemich, die sicherung der messtechnischen kontrolle von applanationstonometern durch technische untersuchungen an transfernormalen, klinische monatsblätter für augenheilkunde, 227 (2010), pp. 489-495. [in german] doi: 10.1055/s-0028-1110014
[20] t. schwenteck, h.-j. thiemich, messtechnische kontrollen für impressionstonometer – eine qualitätsgarantie in der augenheilkunde, klinische monatsblätter für augenheilkunde, 228 (2011), pp. 130-137. [in german] doi: 10.1055/s-0029-1245762
[21] t. schwenteck, m. knappe, i. moros, wie beeinflusst die zentrale hornhautdicke den intraokularen druck bei der applanations- und konturtonometrie?, klinische monatsblätter für augenheilkunde, 229 (2012), pp. 917-927. [in german] doi: 10.1055/s-0031-1299536
[22] k. drbálková, v. suchý, tonometrie část 1 – mechanické a elektronické kontaktní tonometry, metrologie 28 (2019), pp. 28-32. [in czech]
[23] t. schwenteck, h.-j. thiemich, wahrung der messgüte von transfernormalen für die messtechnische kontrolle von luftimpulstonometern, klinische monatsblätter für augenheilkunde, 224 (2007), pp. 167-172. [in german] doi: 10.1055/s-2007-962953
[24] p. pavlásek, j. rybář, s. ďuriš, b. hučko, m. chytil, a. furdová, s. l. ferková, j. sekáč, v. suchý, p. grosinger, developments and progress in non-contact eye tonometer calibration, measurement science review, 20 (2020), pp. 171-177. doi: 10.2478/msr-2020-0021
[25] k. drbálková, v. suchý, tonometrie část 2 – elektronické bezkontaktní tonometry a rebound tonometrie, metrologie 29 (2020), pp. 31-35. [in czech]
[26] p. c. ruokonen, t. schwenteck, j. draeger, evaluation of the impedance tonometers tgdc-01 and icare according to the international ocular tonometer standards iso 8612, graefe's archive for clinical and experimental ophthalmology, 245 (2007), pp. 1259-1265. doi: 10.1007/s00417-006-0483-3
[27] t. schwenteck, et al., klinische evaluierung eines neuen tonometers auf der basis des internationalen standards für augentonometer iso 8612, klinische monatsblätter für augenheilkunde, 223 (2006), pp. 808-812. [in german] doi: 10.1055/s-2006-926861
[28] p. pavlásek, m. chytil, j. rybář, j. palenčář, s. ďuriš, development of new calibration standard for noncontact tonometers, 2018 international congress on image and signal processing, biomedical engineering and informatics proc., beijing, china, 13 – 15 october 2018, 8633148. doi: 10.1109/cisp-bmei.2018.8633148
[29] j. rybář, m. chytil, s. ďuriš, b. hučko, f. maukš, p. pavlásek, use of suitable materials such as artificial cornea on eye model for calibration of optical tonometers, aip conference proceedings vol. 2029, zakopane, poland, 4 – 6 june 2018, 020067. doi: 10.1063/1.5066529
[30] p. pavlásek, j. rybář, s. ďuriš, b. hučko, j. palenčář, m. chytil, developments in non-contact eye tonometer calibration, 2019 ieee international instrumentation and measurement technology conference proc., auckland, new zealand, 20 – 23 may 2019, 8827028. doi: 10.1109/i2mtc.2019.8827028
[31] b. hučko, s. l. ferková, s. ďuriš, j. rybář, p. pavlásek, glaucoma vs. biomechanical properties of cornea, strojnícky časopis – journal of mechanical engineering, 69 (2019), pp. 111-116. doi: 10.2478/scjme-2019-0021
[32] p. pavlásek, j. rybář, s. ďuriš, b. hučko, j. palenčář, m. chytil, metrology in eye pressure measurements, 2020 ieee sensors applications symposium proc., kuala lumpur, malaysia, 9 – 11 march 2020, 9220014. doi: 10.1109/sas48726.2020.9220014
[33] b. hučko, ľ. kučera, s. ďuriš, p. pavlásek, j. rybář, j. hodál, modelling of cornea applanation when measuring eye pressure, lecture notes in mechanical engineering, icmd-2018 (2020), pp. 287-294. doi: 10.1007/978-3-030-33146-7_33
[34] j. schreyögg, m. bäumler, r. busse, balancing adoption and affordability of medical devices in europe, health policy, 92 (2009), pp. 218-224. doi: 10.1016/j.healthpol.2009.03.016
[35] m. do céu ferreira, the role of metrology in the field of medical devices, international journal of metrology and quality engineering, 2 (2011), pp. 135-140. doi: 10.1051/ijmqe/2011101
[36] m. do céu ferreira, a. matos, r. p. leal, evaluation of the role of metrological traceability in health care: a comparison study by statistical approach, accreditation and quality assurance, 20 (2015), pp. 457-464. doi: 10.1007/s00769-015-1149-9
[37] s. terzic, v. jusufovic, a. nadarevic-vodencarcevic, m. asceric, a. pilavdzic, m. halilbasic, a. terzic, is prevention of glaucoma possible in bosnia and herzegovina?, medical archive, 70 (2016), pp. 140-141. doi: 10.5455/medarh.2016.70.140-141
[38] a. bošnjaković, z. džemić, legal metrology: medical devices, ifmbe proceedings 62, cmbebih-2017 (2017), pp. 583-588. doi: 10.1007/978-981-10-4166-2_88
[39] b. karaböce, h. o. durmuş, e. cetin, the importance of metrology in medicine, ifmbe proceedings 73, cmbebih-2019 (2019), pp. 443-450. doi: 10.1007/978-3-030-17971-7_67
[40] b. karaböce, challenges for medical metrology, ieee instrumentation and measurement magazine, 23 (2020), pp. 48-55. doi: 10.1109/mim.2020.9126071
[41] a. badnjević, l. g. pokvić, z. džemić, f. bečić, risks of emergency use authorizations for medical products during outbreak situations: a covid-19 case study, biomedical engineering online, 19 (2020), 75. doi: 10.1186/s12938-020-00820-0

acta imeko issn: 2221-870x june 2023, volume 12, number 2, 1 - 6

mathmet measurement uncertainty training activity – overview of courses, software, and classroom examples

francesca r. pennecchi1, peter m. harris2

1 istituto nazionale di ricerca metrologica (inrim), strada delle cacce 91, 10135 torino, italy
2 national physical laboratory (npl), hampton road, teddington tw11 0lw, uk

section: technical note
keywords: measurement uncertainty (mu); training; mathmet; overview; mu courses; classroom examples and software
citation: francesca r. pennecchi, peter m. harris, mathmet measurement uncertainty training activity – overview of courses, software, and classroom examples, acta imeko, vol. 12, no. 2, article 12, june 2023, identifier: imeko-acta-12 (2023)-02-12
section editor: eric benoit, université savoie mont blanc, france
received july 1, 2022; in final form march 14, 2023; published june 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: npl’s work was supported by the uk government’s department for business, energy, and industrial strategy (beis) as part of its national measurement system (nms) programme.
corresponding author: francesca pennecchi, e-mail: f.pennecchi@inrim.it

abstract
a collaborative activity on “measurement uncertainty (mu) training” under the auspices of the european metrology network for mathematics and statistics (mathmet) is underway. this abstract reports on the progress to undertake surveys of existing training courses on mu, software for mu evaluation, and classroom examples to support the understanding of methods for mu evaluation. an appreciable number of training courses, software and examples have been identified and are currently under review. these tools and materials will be analysed and categorised according to their main features and characteristics. special attention will be given to their adherence to the jcgm guidelines (i.e., jcgm 100:2008, jcgm 101:2008 and jcgm 102:2011). it is hoped that the knowledge assembled in this activity will help practitioners to make good choices about appropriate material to support their training needs, as well as help developers of training material to ensure good coverage of their training products and target them at user needs.

1. introduction
in october 2021, a two-year mathmet [1] activity was launched [2] with the aim of developing new training material and establishing an active community for those involved in teaching measurement uncertainty.
this “measurement uncertainty training activity” [3] is conducted by a consortium of mathmet and non-mathmet members who committed themselves, on a voluntary basis, to developing new training material on measurement uncertainty (mu) and to strengthening collaborations among experts and interested people at metrology institutes, universities, industry and within the accreditation and legal metrology communities.
concerning the development of new material for mu training, this will include an overview of existing courses, software and examples, which can guide trainees across the tools and materials already available at different levels and in different fields of application. it is also planned to prepare some short videos explaining the need for, and common difficulties in, evaluating mu. all material will be made publicly available on the dedicated webpage [3] of the mathmet website and will be actively disseminated to a large set of practitioners in metrology, academia, and industry.
in the present abstract, we will focus on the survey of existing courses and software for mu evaluation, together with the review of selected examples, suitable for mu training, which will be revisited in the form of proper classroom examples. further new training material that will be developed from scratch by the mu training activity is presented separately at this joint symposium [4] and will not be detailed here.

2. overview of existing courses and software
starting from the plethora of courses on mu usually offered by the partners of the consortium, the first step was to undertake a review of such courses, in order to inform the wider audience about their availability and characteristics. in this respect, mathmet will serve as a reference point to make connections among trainers and trainees at a european level and beyond. the courses will be categorized according to their main features to enable the audience to easily identify which course fits best with their needs. analogously, based on the availability of a variety of software (sw) performing mu evaluation, a critical overview of such tools is currently underway to analyse their characteristics, such as their status, the kind of methods they implement and the main operating conditions.
2.1. existing courses on mu
so far, information on 41 training courses taught by 14 partners and 1 stakeholder of the activity consortium has been collected, for a total amount of more than 880 hours of lessons per year. for each course, a specifically developed template was completed by a reference contact, who was free to provide the following details:
• general information (title of the course, its integration into a training framework or project, specific field of application, organizing body, website advertising the course, duration, frequency, language(s), location, material provided to attendees, attendance fee, final examination, kind of certification, etc.)
• audience (target audience, specific constraints and prerequisites, average number of attendees)
• teacher/s or technical contact/s
• technical contents
• classroom examples
from a preliminary analysis of the features, it is worth noting that most of the courses are specifically dedicated to mu evaluation, but a good number have a broader scope (covering, for example, metrology in general or for a specific si quantity). these courses were included in the review as they make a strong effort in teaching mu evaluation. figure 1 gives an idea of their specific fields of application. the majority of courses (78 %) are given on a recurring basis, but some are e-learning courses and are available on demand. 15 % are offered in more than one language (see the distribution of languages in figure 2). 50 % require some sort of final examination and 50 % have an enrolment fee. 15 % of courses are aimed at legal metrology, 37 % at nmis, 46 % at calibration and testing laboratories and 24 % at academia (with overlapping categories).

figure 1. fields of application of the courses.
figure 2. languages used in the courses (the total number of occurrences of languages used in the courses, also considering the various combinations of languages, was 49).

concerning the “technical contents”, the contact person completing the template was required to describe the main topics of the course and the extent to which they comply with the prescriptions of the jcgm wg1 suite of documents [5], with a special focus on the teaching of the law of propagation of uncertainty (lpu) and the monte carlo method (mcm) for the propagation of distributions:
• review of mathematical tools (linear algebra, partial derivatives, linear regression, …)
• review of probability concepts (random variables, distributions, …)
• basic metrological concepts (measurand, measurement model, error, accuracy, precision, repeatability, reproducibility, …)
• input standard uncertainties and covariances (gum type a and type b)
• lpu (gum first- or also higher-order taylor series expansion, expanded uncertainty)
• lpu (jcgm 102 multivariate models)
• mcm for propagation of distributions (jcgm 101 univariate models)
• mcm for propagation of distributions (jcgm 102 multivariate models)
• validating lpu against mcm
• reporting the measurement result
as a result, 34 % of courses provide a review of mathematical concepts, 85 % of probabilistic topics and 95 % of metrological topics. almost all discuss how to model and evaluate input standard uncertainties and covariances, as well as the application of lpu to univariate models. interestingly, though, only 20 % address lpu for multivariate models (jcgm 102). concerning the teaching of mcm, 44 % of courses treat mcm for univariate models (jcgm 101) but only 15 % for multivariate models (jcgm 102): see figure 3 and figure 4.

figure 3. courses teaching mcm (jcgm 101).
figure 4. courses teaching mcm (jcgm 102).
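as a minimal classroom-style illustration of the two approaches just mentioned (our own sketch, not taken from any of the surveyed courses): for a simple product model y = x1 · x2 with independent inputs, first-order lpu gives u²(y) = x2² u²(x1) + x1² u²(x2), which can be validated against a jcgm 101 monte carlo propagation of distributions.

```python
import numpy as np

rng = np.random.default_rng(1)

# measurement model y = x1 * x2 (illustrative only)
x1, u1 = 10.0, 0.2   # estimate and standard uncertainty of x1
x2, u2 = 5.0, 0.1    # estimate and standard uncertainty of x2

# LPU (JCGM 100, first-order, no correlation):
# u(y)^2 = (dy/dx1)^2 u1^2 + (dy/dx2)^2 u2^2, with dy/dx1 = x2, dy/dx2 = x1
u_lpu = np.hypot(x2 * u1, x1 * u2)

# MCM (JCGM 101), assigning Gaussian distributions to both inputs:
M = 10**6
y = rng.normal(x1, u1, M) * rng.normal(x2, u2, M)

print(f"LPU: y = {x1 * x2:.3f}, u(y) = {u_lpu:.4f}")
print(f"MCM: y = {y.mean():.3f}, u(y) = {y.std(ddof=1):.4f}")
print("95 % coverage interval (MCM):", np.percentile(y, [2.5, 97.5]).round(3))
```

for this nearly linear model the two results agree closely; the value of such an exercise in a course is precisely the comparison (“validating lpu against mcm” in the list above).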
moreover, the training on mcm is not homogeneous across the audience: it is almost never taught in courses for the legal metrology community, a third of the time to calibration and testing laboratory personnel, half of the time to nmi employees and most of the time in courses for academia.
as a general comment, it seems there is a gap in the treatment of multivariate models, both on the side of lpu and, even more so, concerning the application of mcm. this implies that little attention is given to training on the calculation of covariances among measurands that depend on some common input quantities and are hence correlated. this seems in contrast with the fact that the main target audience (46 %) are calibration and testing laboratories, and calibration procedures often involve multivariate models. an encouraging result is that 75 % of the courses dealing with lpu for multivariate models address the problem by also teaching the corresponding mcm.
in the questionnaire, it was also possible to specify which references are used and whether any software is applied or mentioned in the course. the references mainly reported are documents of the jcgm wg1 suite, iso and oiml standards, euramet and ilac guides, as well as documents by ea, ukas, din, eurachem/citac, etc. 68 % of the courses rely on, or at least mention, use of some sw or programming language, like excel, matlab, r, labview, origin, the nist uncertainty machine and the gum workbench. among the technical topics treated on top of the standard ones (i.e., lpu and mcm), the following are also mentioned: bayesian inference, conformity assessment, linear regression, and quality control.
comments concerning the “classroom examples” are left until section 3.

2.2. software for mu evaluation
a further survey was initiated by a subset of the activity partners, i.e., inrim, npl, lne, ipq, imbih, metas and polito, with the aim of categorizing available software related to mu and summarizing the methods offered by such software to the end users. a list of software was agreed within the consortium, encompassing 50 sw for mu evaluation, coming from several sources (mainly from a wikipedia webpage [6]). 35 sw have already been analysed by the involved partners, by filling in an agreed list of characteristics/features. the software ranged from basic uncertainty calculators to quite complex, broad-scope software, and from user-friendly web applications to comprehensive collections of libraries and tools for uncertainty quantification. some of those sw are currently under analysis by the partners, considering the following characteristics:
• general information
• technical features
• adherence to jcgm 100:2008
• adherence to jcgm 101:2008
• adherence to jcgm 102:2011
for the “general information” and “technical features” items, information is reported on license, version, programming language, whether the sw is computer-based or a web application, its language(s), documentation, and evidence of verification and validation. concerning the sw so far analysed, 74 % of them are cross-platform, 85 % are computer-based (15 % web applications), 54 % provide some evidence of validation, and all are available in an english version (some also in other languages).
the distribution of the programming languages is shown in figure 5.

figure 5. programming languages.

in the questionnaire, moreover, it has to be stated whether the sw is able to handle correlated input quantities, nonlinear models, more than one output quantity (most of the analysed sw have these features, i.e., 86 %, 89 % and 57 %, respectively), complex-valued quantities, implicit models, symbolic uncertainty evaluation, repeated input observations, and input imported from previous analyses (these features, instead, are generally less well covered by the sw). information will also be given on the output results and their format.
the adherence of the implemented methods to the jcgm documents [5] is investigated in some detail. the aim is to assess the metrological relevance of each sw and its level of compliance with recognized guidelines. concerning jcgm 100:2008, the sw is checked against its ability to implement the lpu (without or with correlation among input quantities, and implementing the first- or higher-order taylor series approximation), to (analytically or numerically) calculate the sensitivity coefficients, to provide a summary of standard uncertainty components, and to calculate the effective degrees of freedom and the expanded uncertainty at a prescribed coverage probability. in this respect, the majority of the sw implement lpu based on the first-order taylor approximation of the model (71 %) and provide sensitivity coefficients (55 %) and expanded uncertainties (54 %). the remaining capabilities are less frequently addressed.
concerning the jcgm 101:2008 and jcgm 102:2011 documents, the main features under investigation are the maximum numbers of monte carlo trials and of input quantities, the gallery of available (univariate or multivariate) input probability density functions, the application of lpu to explicit or implicit (univariate or multivariate) measurement models, the ability to provide a coverage interval for the output quantity at a prescribed coverage probability (also considering probabilistically symmetric and shortest coverage intervals), to perform an adaptive monte carlo procedure, and to validate the gum uncertainty framework against the monte carlo method. as concerns the adherence to jcgm 101:2008, most of the analysed sw respect the document prescriptions on how to assign the input probability density functions (60 %) and how to calculate an estimate and the associated uncertainty from the simulated output distribution (60 %). the other features are less well covered. as concerns the adherence to jcgm 102:2011, it is evident that a large gap exists in this respect: only 20 % of the sw implement lpu for explicit multivariate models, and a meagre 6 % for implicit ones; 31 % of the sw apply a monte carlo procedure to calculate an estimate and the associated covariance matrix from the simulated multivariate distribution of the measurand.
in parallel with the above-described review of the available sw for mu evaluation, it is worth mentioning that mathmet is developing a quality management system (qms) for software, data and guidelines as one of the main outputs of the mathmet-related joint network project. relevant output is available in a dedicated publication [7] and uploaded on a mathmet webpage dedicated to quality assurance tools [8].
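to make concrete what the jcgm 102 capabilities listed above refer to, here is a small sketch (our own, for illustration; it is not taken from any of the surveyed packages) propagating two correlated inputs through a bivariate measurement model by monte carlo and extracting the estimates and the covariance matrix of the two measurands:

```python
import numpy as np

rng = np.random.default_rng(2)

# correlated inputs x = (x1, x2): estimates, standard uncertainties, correlation
x_est = np.array([1.0, 2.0])
u = np.array([0.05, 0.08])
r = 0.6                                   # correlation coefficient (assumed)
Ux = np.array([[u[0]**2,     r*u[0]*u[1]],
               [r*u[0]*u[1], u[1]**2   ]])   # input covariance matrix

# bivariate measurement model y = (x1 + x2, x1 * x2)
def model(x):
    return np.stack([x[0] + x[1], x[0] * x[1]])

# JCGM 102-style Monte Carlo propagation of distributions
M = 10**6
xs = rng.multivariate_normal(x_est, Ux, size=M).T   # shape (2, M)
ys = model(xs)

y_est = ys.mean(axis=1)   # estimates of the two measurands
Uy = np.cov(ys)           # their covariance matrix (includes the correlation)
print("y =", y_est.round(4))
print("Uy =\n", Uy.round(6))
```

the off-diagonal element of uy is exactly the covariance between measurands depending on common input quantities that, as noted in section 2.1, receives little attention in existing courses.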
3. overview of classroom examples
concerning the “classroom examples” offered in existing courses on mu, contact persons were asked to provide information about some of the main examples treated and their characteristics, comprising:
• title
• short description
• application area (calibration, testing, conformity assessment, etc.)
• metrology area (mass, length, etc.)
• approach to mu evaluation (jcgm 100, jcgm 101, etc.)
• level of difficulty (simple, medium, difficult)
• whether supporting material exists or is to be developed
so far, 69 examples have been collected from 36 training courses, of which 46 identify “calibration” as the main application area. the examples are spread over 15 different metrology areas (including “not specified”), with the top two listed as “dimensional” (18/69) and “temperature” (13/69), together accounting for almost one half of the examples. there is a focus on applying the lpu approach of jcgm 100:2008, either on its own (40/69) or in combination with other approaches (55/69) for comparison. very few examples are classified as “difficult” (4/69), with most classified as “simple” (33/69), but in this regard the classification into the different levels of difficulty is likely to be quite subjective.
it is planned to review other sources of examples used for teaching the principles of mu and for demonstrating different methods for mu evaluation. one such source is the compendium [9] of examples that was the main output of the emue project [10]. the compendium presents 41 examples from six broad application areas:
• industry and society
• quality of life
• energy
• environment
• conformity assessment
• calibration, measurement, and testing
figure 6, figure 7 and figure 8 present graphically a comparison of the examples taken from the two sources in terms of the categories of metrology area, approach (to mu evaluation) and level (of difficulty). the examples taken from the emue project are spread over 19 different metrology areas, with a more uniform distribution across them, and the top two listed are “chemistry” (8/41) and “flow metrology” (6/41). in terms of metrology area, the examples from the two sources appear complementary, perhaps reflecting in existing courses how examples from the “dimensional” and “temperature” areas are more accessible to a general audience and easier to teach, whereas the examples collected in the emue project reflect the wider interests of the partners involved in the project. there is again a focus on applying the lpu approach of jcgm 100:2008, either on its own or in combination with other approaches, but the examples from the emue project offer a wider range of approaches, including bayesian, regression and “top-down” approaches to mu evaluation. finally, a judgment about the level of difficulty of each example was made by one of the authors regarding how a “non-expert” faced with the example in a training course might perceive the example. the result was that all examples were classified as “medium” to “difficult” and so, in this regard, the examples from the two sources again can be considered complementary.

figure 6. classification according to “metrology area” for the examples taken from existing training courses (top) and developed in the emue project (bottom).
figure 7. as figure 6 but using the classification of “approach (to mu evaluation)”.
figure 8. as figure 6 but using the classification of “level (of difficulty)”.

the data collected from these different sources serve as a basis for the identification of interesting cases to be further developed in the form of classroom examples. in general, the analysis of the results, and the comparison of the different sources, will support the identification of needs not covered by existing training courses, or of deficiencies in those courses, and it will facilitate the exchange of knowledge between people teaching mu.
4. conclusions
a collaborative activity on “measurement uncertainty training” under the auspices of the european metrology network for mathematics and statistics (mathmet) is underway. this abstract reports on the progress to undertake surveys of existing training courses on mu, software for mu evaluation, and examples to support the understanding of methods for mu evaluation. it appears that an appreciable amount of material is available: 41 training courses, 69 examples, and 50 items of software. it is hoped that the knowledge assembled in this activity will help practitioners to make good choices about appropriate material to support their training needs, as well as help developers of training material to ensure good coverage of their training products and target them at user needs. current and future updated versions of the three surveys will be published on a dedicated webpage [11].

acknowledgement
the authors of this abstract wish to thank the colleagues of all the partners of the activity consortium for participating in the survey of courses, software and classroom examples. the consortium comprises [3]: ptb (coordination), cem, gum, imbih, ims sas, inrim, ipq, lne, metas, npl, smd, accredia ente italiano di accreditamento, deutsche akademie für metrologie (dam), national standards authority of ireland (nsai), politecnico di torino, university of konstanz.

references
[1] european metrology network for mathematics and statistics home page. online [accessed 27 february 2023] https://www.euramet.org/european-metrology-networks/mathmet
[2] emn mathmet starts initiative on measurement uncertainty training news story. online [accessed 27 february 2023] https://www.euramet.org/european-metrology-networks/mathmet/bugermenu/news
[3] mathmet measurement uncertainty training activity home page. online [accessed 27 february 2023] https://www.euramet.org/goto/g1-e34bee
[4] k. klauenberg, introduction to the mathmet activity mu training, presented at the joint imeko tc1-tc7-tc13-tc18 & mathmet symposium, porto, portugal, 31 august – 2 september 2022.
[5] jcgm publications: guides in metrology home page. online [accessed 27 february 2023] https://www.bipm.org/en/committees/jc/jcgm/publications
[6] list of uncertainty propagation software webpage. online [accessed 27 february 2023] https://en.wikipedia.org/wiki/list_of_uncertainty_propagation_software
[7] k. lines, j.-l. hippolyte, i. george, p. harris, a mathmet quality management system for data, software, and guidelines, acta imeko vol. 11 no. 4 (2022). doi: 10.21014/actaimeko.v11i4.1348
[8] quality assurance tools webpage. online [accessed 27 february 2023] https://www.euramet.org/european-metrology-networks/mathmet/activities/quality-assurance-tools
[9] good practice in evaluating measurement uncertainty, compendium of examples, adriaan m. h. van der veen and maurice g. cox (editors), 27 july 2021. online [accessed 27 february 2023] http://empir.npl.co.uk/emue/wp-content/uploads/sites/49/2021/07/compendium_m36.pdf
[10] examples of measurement uncertainty evaluation project home page. online [accessed 27 february 2023] http://empir.npl.co.uk/emue/
[11] mathmet measurement uncertainty training activity, for trainees measurement uncertainty training webpage. online [accessed 27 february 2023] https://www.euramet.org/european-metrology-networks/mathmet/activities/measurement-uncertainty-training-activity/for-trainees-measurement-uncertainty-training

acta imeko issn: 2221-870x december 2022, volume 11, number 4, 1 - 6

a mathmet quality management system for data, software, and guidelines

keith lines1, jean-laurent hippolyte1, indhu george1, peter harris1

1 national physical laboratory, hampton road, teddington tw11 0lw, uk

section: research paper
keywords: quality management system; mathmet; data; software; guideline
citation: keith lines, jean-laurent hippolyte, indhu george, peter harris, a mathmet quality management system for data, software, and guidelines, acta imeko, vol. 11, no. 4, article 8, december 2022, identifier: imeko-acta-11 (2022)-04-08
section editor: eric benoit, université savoie mont blanc, france
received july 18, 2022; in final form december 2, 2022; published december 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

abstract
the european metrology network for mathematics and statistics (mathmet) is creating a quality management system (qms) to ensure that research outputs in the forms of data, software and guidelines are fit for purpose, achieve a sufficient level of quality, and are consistent with the aims of national measurement institutes to provide quality-assured and trusted outputs. the essential components of the qms for all three forms of research output are discussed. on-line, interactive risk assessment tools that guide a user through the process of assigning an integrity level for the research outputs of data and software are described. examples of case studies that have been used to demonstrate the qms are indicated.
funding: the project 18net05 mathmet has received funding from the empir programme co-financed by the participating states and from the european union’s horizon 2020 research and innovation programme.
corresponding author: keith lines, e-mail: keith.lines@npl.co.uk

1. introduction
the european metrology network for mathematics and statistics (mathmet) [1] has been established to help bring a collaborative approach to addressing the needs of measurement scientists for expertise in applied mathematics, statistics, and computational tools. ever more accurate, consistent, and traceable measurements will be vital to meeting challenges such as climate monitoring, clean energy, modern-day health care and sustainability. these measurements are often underpinned by new and increasingly complex mathematical and statistical techniques that are reliant on data sets and software. fit-for-purpose data sets, software, and guidelines that meet the requirements of the national measurement institutes (nmis) and other stakeholder organisations and individuals, and that will both draw on and contribute to mathmet, will be vital to mathmet’s success.
an outline of a mathmet quality management system (qms) for research outputs in the form of data, software, and guidelines was presented at the mathematical and statistical methods for metrology virtual workshop 2021 (msmm 2021) [2]. feedback from delegates helped confirm that the iso process-based approach taken, and described below, was appropriate. this paper outlines the current version of the qms, which has benefited from the feedback from msmm 2021 and the input of other mathmet members, and is organised as follows. in section 2 the essential components of the qms for all three research outputs are described, as well as on-line risk assessment tools that guide a user through the process of assigning an integrity level for the research outputs of data and software. in section 3, some examples of case studies that are being used to refine and demonstrate the qms are indicated. the lessons learned from these case studies will be reported separately to this paper. finally, conclusions are given in section 4.

2. components of the qms
2.1. background
the qms follows a process-based approach as defined in iso 9001:2015 [3] and related standards. this approach incorporates the “plan-do-check-act” (pdca) cycle and risk-based thinking. over a million organisations are certified to iso 9001, and npl has held iso 9001 certification for more than 25 years. npl has also successfully applied tickitplus certification [4] (and its predecessor scheme tickit), which builds on iso 9001, to its software development activities.
the experience gained from applying these schemes, plus the large number of iso and ieee standards and related documents supporting them, strongly implies that this approach is the right one to take.

2.2. data and software
a quality management plan is key to the qms components for data and software. this plan lists the quality management activities needed for a particular dataset or piece of software. these activities follow a typical life cycle from requirements capture to design and development, verification, and validation, through to maintenance. review is an important activity that is carried out throughout the life cycle.
as noted in section 1, risk assessment is a key element of the qms, and risk is quantified using a value called an integrity level. the integrity level is a number between 1 and 4, where 1 indicates the lowest level of risk (for example, prototypes of software for internal use within an organisation) and 4 indicates the highest level (for example, software that is safety critical). the concept is analogous to, but should not be confused with, the safety integrity level of iec 61508 [5]. the integrity level is used to decide the quality management activities and interventions to be listed on the plan. higher integrity levels have a greater associated risk and therefore need more activities and interventions (for example, review by a third party, independent of the team that developed the data or software).
for software, the qms can include established quality procedures and templates from the mathmet members. the development of metrology software is usually a much smaller-scale exercise than would normally be addressed using such a qms. there are no large teams of developers to manage; a typical team may consist of no more than one or two people. the software may have a small number of highly specialist users, rather than an app distributed to many thousands of users with varying levels of technical expertise. however, there are issues that a qms can help manage. for example:
• such software is typically developed by metrologists rather than software engineers. it is strongly arguable that software engineering good practice should be a part of every modern-day scientist’s toolkit of skills. however, analogous to how the guidance of a numerical analyst should be sought for certain mathematical problems, there are situations in which it is strongly advisable to consult a software engineer (e.g., safety-critical software). the qms provides a framework to help make, document and review such decisions.
• non-trivial mathematics is at the heart of metrology software. even the simplest equations can become difficult to implement and maintain if an inappropriate implementation platform is selected.
• for large-scale software development projects, roles such as user and developer are distinct and held by different people. that situation is often not the case for metrology software. also, it is not always easy to define who the customer is for this software. is the customer somebody within a funding body? is the customer somebody internal to the organisation acting as a proxy for someone in an external organisation?
• lack of clarity of roles can lead to serious issues that could be prevented easily, not least finding the software after the original developers have left the organisation. if these developers also provide the service for which the software was developed (or were authors of the paper for which the software generated results) they will know where it is located.
could others find the software, and be sure the correct version has been accessed (not an older version that contains some serious bugs)?
• a related point is that some metrology software can be in use for a long time. can the correct versions of the source code and documentation, for example explaining how the equations were derived, be found 20 or more years after the initial release?
• perhaps the software itself will never be released outside of the organisation in which it was developed. however, results such as calibration certificates and research papers will be released outside of the organisation. software quality management is no less important in such situations than it is when the software itself is released.
• even the smallest piece of software must be traceable to the results it produces. the ongoing reproducibility crisis [6] would be eased if questions such as the following could always be answered easily: “which version of which script produced those results? the exact version please, not one containing subsequent modifications”, “which versions of which libraries did the script call?” and “what were the reasons these libraries were considered appropriate for this work?”. perhaps journals should be asking for the upload of scripts as well as the data the scripts processed. what may seem like tedious, unnecessary and time-consuming bureaucracy (particularly during some interesting and exciting research) could save considerably more tedium in the longer term. (a minimal sketch of such provenance capture is given at the end of this subsection.)
• following on from the above point, the provenance of packages and libraries is a key point to consider. a richly featured, but new and experimental, library may be appropriate for a prototype but not for generating certificates for customers. a proprietary closed-source package from a long-established supplier with a strong reputation for well-engineered software may be the right option. alternatively, what better guarantee of quality can there be than an open-source package that has the input of many experts in a particular field? there are often no “right” or “wrong” answers, just decisions to be made, documented and reviewed. again, the qms provides a framework to help with these tasks.
• it should never, ever be necessary to have to look at the code of even the smallest script to work out what it does. the mathematics must be documented in a way that can be independently verified without having to examine code. in some circumstances code comments, and perhaps an accompanying readme file, will be sufficient. in other circumstances more thorough documentation will be required. again, the qms provides a framework to help decide what documentation will be necessary.
in summary, for software the qms aims to minimize the chances of errors and to manage issues such as traceability and accountability, transferability, maintenance and reproducibility. the final three points need to be managed without the original developers being available. some activities listed in the quality plan are considered mandatory to ensure the software is of the required quality, and the qms provides templates to cover, for example, requirements capture. however, if an alternative template is considered more appropriate for a particular project, then this template can be used instead. in this case, it is mandatory that the reasons for such decisions are recorded.
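the following is the minimal provenance-capture sketch promised above (our own illustration under stated assumptions: the file name, function name and json format are invented for the example, not prescribed by the qms):

```python
import json
import pathlib
import platform
import subprocess
import sys
from datetime import datetime, timezone
from importlib.metadata import version

def provenance_stamp(libraries):
    """Record which script, which git commit and which library versions
    produced a set of results, so they can be recovered years later."""
    try:
        commit = subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True).strip()
    except (OSError, subprocess.CalledProcessError):
        commit = "not under git control"
    return {
        "script": str(pathlib.Path(sys.argv[0]).resolve()),
        "git_commit": commit,
        "python": platform.python_version(),
        "libraries": {lib: version(lib) for lib in libraries},
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    # written alongside the results so that the exact versions used
    # can be recovered long after the original developers have left
    stamp = provenance_stamp(["numpy"])
    pathlib.Path("results_provenance.json").write_text(json.dumps(stamp, indent=2))
```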
quality management of data is less well established than quality management of software, and developing this component of the qms was therefore, to some extent, a research activity. a key concept is that integrity levels are adapted to apply to data as well as software. also, the iso 8000 series of standards [7] applies the same pdca and risk-based paradigms to data quality management. on-line risk assessment tools, called quality assurance tools, will assist with assigning integrity levels and generating quality management plans. further details are provided in section 2.4.

2.3. guidelines

the aim of this component of the qms is to ensure a sufficient level of quality in the development, assessment, and recommendation of existing and future guidelines for mathematics and statistics in metrology. it is intended to include:
1. a rigorous review process for existing and new guidelines involving external experts and mathmet stakeholders
2. quality management activities that shall accompany and improve guidelines that will be developed in the future in projects involving one or more members of mathmet
3. a process to provide advice and feedback that will enable third parties to adapt and improve mathematical and statistical guidelines to the needs and requirements of the european metrology community and its stakeholders.

the processes applied by organisations such as iso/iec for standards development [8], eurachem for its development of guidance documents [9], and npl for its review and approval of documents, have been reviewed and used to steer the design of the qms. however, the approach taken has been to present a qms in 'skeleton' form that sets only high-level requirements on users, into which the processes adopted by individual mathmet members can comfortably fit. for example, for a future guideline, the process comprises the stages of development, review and approval, publication, and maintenance. the stage of review and approval can be iterative and can involve separate steps that focus on different aspects, such as technical correctness or presentation and style. the stage of maintenance depends on the guideline being provided with appropriate metadata allowing versions (and changes between versions) to be tracked correctly.

for both types of guidelines (future and existing), the qms involves completing a checklist comprising a set of questions, and making a recommendation based on the answers to those questions. for an existing guideline, the checklist considers:
• whether the origin of the document is an established organisation
• whether the document has been independently reviewed and approved to be issued
• whether the document comes with appropriate metadata (such as title, author, unique identifier or version number, issuer, review date, etc.)
• whether the document is adequately protected with respect to copyright and intellectual property rights
• what the language of the document is and whether it needs translation (for example, to english)
• whether the document states the targeted audience or readership
• whether the technical content of the document is relevant to the focus of mathmet on mathematics and statistics for metrology
• whether conclusions are clearly stated, appropriate and relevant
• whether complete and appropriate acknowledgments are made (for example, to originating projects and funding sources)
• whether complete, appropriate, and primary references are listed
• whether the overall presentation of the material in the document is clear and understandable.
for a future guideline, additional questions are included covering:
• whether the document is technically sound
• whether the document has undergone adequate review relating to both technical and presentational aspects
• whether notation and abbreviations have been adequately and clearly defined.

the qms will be applied to five metrology case studies identified at the outset (see, for example, section 3 and [19]). the results will be assessed in terms of effectiveness, risk assessment and quality interventions by the qms. the qms will also be presented to stakeholders to gain feedback and to understand its effectiveness in meeting their needs.

2.4. on-line risk assessment tools for data and software

as noted in section 1, integrity levels for data and software are essentially a calculation involving criticality and complexity. other factors may also need to be considered, such as the availability of suitably qualified developers. mathmet will provide on-line risk assessment tools to guide the user through the process of calculating an integrity level and generating a quality management plan; the tools will be presented to stakeholders to gain feedback. for the case of software, table 1 and table 3 list the different classifications relating to criticality of usage (cu) and complexity (cp); table 2 concerns only data and will be discussed later. the choice of the classifications, between 'not critical' (1) and 'life critical' (4) for cu and between 'very simple' (1) and 'complex' (4) for cp, is quite subjective. however, the calculation of the software integrity level (swil), as detailed in table 4, is then deterministic and undertaken automatically by the assessment tool. the user has the possibility to moderate the calculated swil, considering factors that influence the associated risk. for example, the swil might be reduced if there is an alternative means of verification, or increased if there is reliance on key staff. see table 5 for examples of moderating factors. the reasons for such moderation of the swil should be recorded. once the swil is fixed, the assessment tool automatically generates a software quality management plan identifying those activities and interventions that are mandatory, recommended and not required. table 6 lists activities related to the capture of user requirements for software. for example, for a swil of 2, documented user requirements are mandatory, and their review by the project team and (a proxy for) the customer is also mandatory, but review by an independent person is not required. in contrast, an independent review is recommended for a swil of 3 and mandatory for a swil of 4. software quality requirements are set for the stages of the software development cycle:
• capturing software functional requirements
• software design
• software coding
• verification and validation
• delivery, use and maintenance.

a minimal sketch of how the table 4 lookup, its moderation and the resulting plan entries might be implemented is given below.
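the deterministic part of the assessment is a simple table lookup. the following python sketch implements the table 4 lookup, its moderation (table 5) and the user-requirements row of table 6; it is a minimal illustration of the logic with assumed function names, not the actual on-line tool.

```python
# integrity level lookup of table 4: rows are criticality of usage (cu),
# columns are complexity (cp); both are classified from 1 to 4.
INTEGRITY_LEVEL = [
    [1, 1, 1, 1],  # cu 1
    [2, 2, 3, 4],  # cu 2
    [3, 3, 3, 4],  # cu 3
    [4, 4, 4, 4],  # cu 4
]

# quality interventions for capturing user requirements (table 6):
# 'x' = not required, 'r' = recommended, 'm' = mandatory, for swil 1..4.
USER_REQUIREMENTS_INTERVENTIONS = {
    "documented user requirements": ["m", "m", "m", "m"],
    "review by team":               ["r", "m", "m", "m"],
    "review by independent person": ["x", "x", "r", "m"],
    "review by customer or proxy":  ["m", "m", "m", "m"],
}

def integrity_level(cu, cp, moderation=0):
    """table 4 lookup, optionally moderated (e.g. -1 for an alternative
    means of verification, +1 for reliance on key staff; see table 5).
    the reasons for any moderation should be recorded separately."""
    level = INTEGRITY_LEVEL[cu - 1][cp - 1] + moderation
    return max(1, min(4, level))  # clamp to the valid range 1..4

def requirements_plan(swil):
    """interventions for the requirements-capture stage at a given swil."""
    return {activity: levels[swil - 1]
            for activity, levels in USER_REQUIREMENTS_INTERVENTIONS.items()}

swil = integrity_level(cu=2, cp=3, moderation=-1)  # 3, moderated down to 2
print(swil, requirements_plan(swil))
```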
for the case of data, the assessment tool operates in a similar manner to that for software. it takes the user through a series of questions collected into the following sections:
• dataset details
• responsibilities (in terms of data managers, data administrators, data stewards and data technicians)
• document control
• complexity and criticality, leading to the assignment of a data integrity level that can be moderated by the user
• fitness for purpose
• quality planning
• quality monitoring, control and improvement
• quality assurance
• data understandability
• metrological soundness.

table 1. classifying data and software according to the criticality of usage (cu).
cu 1, not critical: no danger of loss of income or reputation; short life, will not require maintenance in future.
cu 2, significant: potential for loss of income or reputation.
cu 3, substantial: likely to lead to loss of income or reputation.
cu 4, life critical: may result in personal injury or loss of life.

table 2. classifying data according to complexity (cp).
cp 1, very simple: commonly used datatypes; few datatypes; small amount of data; simple/inexpensive data infrastructure; simple uncertainty budget.
cp 2, simple: easy to visualise; moderate number of datatypes; moderate amount of data; intermediate data infrastructure; intermediate uncertainty budget.
cp 3, moderate: non-trivial datatypes; fair number of datatypes; large dataset; complex/expensive data infrastructure; complicated uncertainty budget.
cp 4, complex: non-trivial datatypes; combination of many non-trivial datatypes; very complex/expensive data infrastructure; very complicated uncertainty budget.

table 3. classifying software according to complexity (cp).
cp 1, very simple: elementary functionality, easy to understand; little or no control of an external system; simple mathematics.
cp 2, simple: simple functionality; straightforward control of a system; intermediate mathematics.
cp 3, moderate: large or very large programs; difficult to modify; complicated mathematics.
cp 4, complex: extremely complex functionality; complex feedback systems; very complicated mathematics.

table 4. calculating the integrity level for data (dil) and software (swil).
      cp1  cp2  cp3  cp4
cu1    1    1    1    1
cu2    2    2    3    4
cu3    3    3    3    4
cu4    4    4    4    4

table 5. moderating factors for a calculated software integrity level (swil).
decrease: alternative means of verification; modular approach; suitably trained staff available.
increase: difficult to test; reliant on key staff; inexperienced staff; ambitious timescales; ambitious requirements; new technology; novel design.
table 6. quality interventions for capturing software user requirements and their dependence on the calculated integrity level (x, r and m denote not required, recommended and mandatory, respectively).
quality requirement                               swil1  swil2  swil3  swil4
documented user requirements                        m      m      m      m
review by team                                      r      m      m      m
review by suitably qualified independent person     x      x      r      m
review by customer or proxy                         m      m      m      m

in the above, 'metrological soundness' considers whether the dataset contains measured data, results derived from measurements, simulated (measured) data, or combinations of those; whether the generation of the dataset is intended to be repeatable and reproducible (and how the repeatability and reproducibility conditions for the measurements are documented); how measurement uncertainty is evaluated and expressed; and how confidence in the generation of the dataset is demonstrated. these issues are important for datasets generated for, and to be used in, a metrological context. the calculated data integrity level (dil) determines the comprehensiveness of the data plan. the dil, in turn, depends on the risks associated with the criticality of usage (see table 1) and complexity (see table 2) of the dataset. as with software, the dil is calculated as detailed in table 4. for example, for a dil of 1, it is not necessary to provide, under the section on fitness for purpose, information about how the data life cycle will be documented, whereas such information is mandatory for a dataset having a dil of 4. considering the section on 'metrological soundness', there is no difference between datasets having different data integrity levels. for a dataset having the highest dil, the associated plan comprises information relating to 41 questions in total. figure 1 summarises the qms processes for data and software.

3. case studies

the acceptance and success of the mathmet qms will depend on its ability to address a variety of needs set both by the 'owner' of the research output as well as the user or customer at which the research output is aimed. to this end, case studies are being undertaken to support the development of the qms and help to promote it to mathmet members and stakeholders. they are chosen to illustrate the wide range of possible research outputs and are undertaken by mathmet members having different experience of using a qms. case studies focussed on the application of the qms to software, e.g., [10], [11], [12], and those focussed on data, e.g., [13], [14], [15], were described at msmm 2021 [2]. additional case studies focussed on guidelines have since been chosen to demonstrate the application of the qms to that form of research output, and include:
• a eurachem document on the use of uncertainty information in compliance assessment [16]
• a good practice guide describing methods of metrological data processing for industrial process optimization, focusing on aspects of redundancy, synchronization and feature selection applied to data affected by measurement uncertainty [17]
• best practice guides on bayesian inference for regression problems, uncertainty evaluation for computationally expensive models, and decision-making and conformity assessment [18]
• an internal mathmet document containing a glossary and ontology of terms to support the qms described here.

in [19] the application of the qms to several of these case studies by different mathmet members is presented, and the ease of use and possible pitfalls of the qms are discussed.
the lessons learned from the different case studies will be reported elsewhere.

4. conclusions

the components of a quality management system (qms), created by the european metrology network for mathematics and statistics (mathmet), have been described. the aim of the qms is to ensure that research outputs in the forms of data, software and guidelines are fit-for-purpose, achieve a sufficient level of quality, and are consistent with the aims of national measurement institutes to provide quality-assured and trusted outputs. a pragmatic approach has been taken to the development of the qms, which sets only high-level requirements on users to ensure there are no conflicts with the processes in place and adopted by individual mathmet members. a key element of the qms is a risk assessment, and on-line tools that guide a user through the process of assigning an integrity level for the research outputs of data and software have also been presented.

acknowledgement

the project 18net05 mathmet has received funding from the empir programme co-financed by the participating states and from the european union's horizon 2020 research and innovation programme. we thank the other members of the emn mathmet for their support in the development of the qms described here.

figure 1. mathmet qms process flowchart.

references
[1] mathmet: european metrology network for mathematics and statistics home page. online [accessed 01 december 2022] https://www.euramet.org/european-metrology-networks/mathmet/
[2] msmm 2021 mathematical and statistical methods for metrology. online [accessed 01 december 2022] http://www.msmm2021.polito.it/programme
[3] iso 9001: quality management systems - requirements, 2015. online [accessed 01 december 2022] https://www.iso.org/standard/62085.html
[4] tickitplus home page. online [accessed 01 december 2022] https://www.tickitplus.org/en/
[5] iec 61508: functional safety of electrical/electronic/programmable electronic safety-related systems, part 0: functional safety and iec 61508, 2005
[6] m. baker, 1,500 scientists lift the lid on reproducibility, nature, vol. 533, 2016, pp. 452-454. doi: 10.1038/533452a
[7] iso 8000-63:2019 data quality - part 63: data quality management: process measurement. online [accessed 01 december 2022] https://www.iso.org/standard/65344.html
[8] iso/iec directives and policies. online [accessed 01 december 2022] https://www.iso.org/directives-and-policies.html
[9] procedure for the development of eurachem guidance. online [accessed 01 december 2022] https://www.eurachem.org/images/stories/policies/development_of_eurachem_guidance_2020.pdf
[10] casoft: software for conformity assessment taking into account measurement uncertainty. online [accessed 01 december 2022] https://www.lne.fr/en/software/casoft
[11] met4fof: metrology for the factory of the future. online [accessed 01 december 2022] https://www.ptb.de/empir2018/met4fof/software/
[12] iso 6142-1:2015, gas analysis - preparation of calibration gas mixtures - part 1: gravimetric method for class i mixtures, iso, geneva, 2015.
[13] medalcare: metrology of automated data analysis for cardiac arrhythmia management. online [accessed 01 december 2022] https://www.ptb.de/empir2019/medalcare/home/
[14] p. wagner, n. strodthoff, r. bousseljot, w. samek, t. schaeffter, ptb-xl, a large publicly available electrocardiography dataset.
online [accessed 01 december 2022] https://physionet.org/content/ptb-xl/1.0.1/
[15] tracim: traceability for computationally-intensive metrology. online [accessed 01 december 2022] https://www.tracim.eu/
[16] eurachem: use of uncertainty information in compliance assessment, 2nd ed., 2021. online [accessed 01 december 2022] https://www.eurachem.org/index.php/publications/guides/uncertcompliance
[17] y. lo, p. harris, l. wright, k. jagan, g. kok, l. coquelin, j. zaouali, s. eichstädt, t. dorst, c. tachtatzis, i. andonovic, g. gourlay, b. xiang yong, good practice guide on industrial sensor network methods for metrological infrastructure improvement, 63 pp. doi: 10.5281/zenodo.6342744
[18] novel mathematical and statistical approaches to uncertainty evaluation. online [accessed 01 december 2022] https://www.ptb.de/emrp/2976.html
[19] g. j. p. kok, use case examples for the mathmet quality management system at vsl, imeko-mathmet symposium, porto, portugal, 31 august - 2 september 2022.

image reconstruction using a refined res-unet model for medical image retrieval
acta imeko issn: 2221-870x march 2022, volume 11, number 1, 1-6
pinapatruni rohini1, c. shoba bindu1
1 department of computer science and engineering, jntua college of engineering, anantapuramu, andhra pradesh, 515002, india

section: research paper
keywords: medical image retrieval; res-unet framework; image reconstruction; index matching
citation: pinapatruni rohini, c. shoba bindu, image reconstruction using a refined res-unet model for medical image retrieval, acta imeko, vol. 11, no. 1, article 32, march 2022, identifier: imeko-acta-11 (2022)-01-32
section editor: md zia ur rahman, koneru lakshmaiah education foundation, guntur, india
received december 25, 2021; in final form february 17, 2022; published march 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: pinapatruni rohini, e-mail: rohinilp11@gmail.com

abstract: measurement of medical data and extraction of medical images with various sensory systems have become a crucial part of diagnosing and treating various diseases. in this context, measurement technology plays a key role in diagnosis. medical experts usually use previous case studies to identify and deal with the current medical condition, and an expert is therefore required to explore a large medical database to search for images relevant to the required analysis. searching such a large database and retrieving an image efficiently becomes a very tedious task. therefore, this paper proposes a measurement-based image reconstruction approach using the refined res-unet framework for medical image retrieval (mir), to manage such crucial tasks and reduce complexity. the proposed two-stage framework consists of image reconstruction using res-unet and index similarity matching of the query image with history images. res-unet is a vanilla combination of resnet50 as an encoder, which gives latent information of the input image, and the decoder from unet, which reconstructs the image using the latent information. further, the latent features are matched against similar image features to retrieve indexed images from the medical image database. the efficacy of the proposed method was confirmed on benchmark medical image databases, namely ild and via/elcap-ct, for mir. the proposed framework outperforms the existing methods in the task of mir.

1. introduction

the proliferation of digital image measurement devices in the medical field establishes image-based diagnosis as a profound approach to acquiring relevant treatment information for healthcare applications. the measured images are very susceptible and diverse, demanding safe and efficient database storage.
however, due to the rapidly growing medical database, efficiently accessing information from the database is of utmost importance for the necessary analysis and processing of patients' records. in this context, a structured model is required to handle this huge database with effective image information retrieval, and content-based medical image retrieval (cbmir) measurement methods have proved to be an effectual way to access the required medical image from the database efficiently. cbmir follows two steps: i) feature extraction, utilizing hand-crafted techniques or learning-based models, and ii) index matching for image retrieval. existing works utilized hand-crafted techniques for the retrieval of images from the database resembling the query medical image, but such methods are not effective when processing a huge database. in past years, image information like shape, colour and texture was used as a local feature descriptor to retrieve images [1]-[5]. in [6], [7] the authors extracted features called local binary patterns (lbp), using the grey-scale information between groups of pixels, for texture classification. in [8], the local ternary patterns descriptor is proposed, an extended version of the lbp descriptor, used for face recognition; the descriptor exploits three quantization levels, improving the pixel encoding mechanism. murala et al. [9] proposed spherical and local direction-based feature descriptors for medical and natural image retrieval. a directional edge local ternary pattern based on a reference pixel is proposed by vipparthi et al. [10]. in [11], a three-stage local feature descriptor is proposed with multi-dimension and multi-direction for image retrieval; a directional mask is utilized to extract the directional edges, and lastly the image is retrieved using the maximum directional edge-based features. in further work, a reference block gradient directions-based feature descriptor is proposed for image retrieval [12].
a wavelet-based 3d circular difference pattern is proposed for medical image retrieval. threshold-based lbp and local adjacent neighbourhood average difference information are combined for medical image retrieval in [13]. previous works utilized hand-crafted feature descriptors for feature extraction in cbmir modelling. however, hand-crafted feature descriptors suffer from the limitation of relying on hand-written assumptions [14], which limits the efficiency of the overall model; these features become less effective in encoding robust and complex features. currently, learning-based feature descriptors resolve this limitation by utilizing the power of learning complex features through their different model architectures. convolutional neural networks (cnns) show remarkable results in this area because of their high learning ability, and many works have been proposed utilizing cnn architectures for diverse applications like object detection, image classification, anomaly detection, etc. [15]-[18]. this feature learning capability is also utilized by researchers in the medical domain for medical image analysis, medical image segmentation, image de-hazing, etc. [15], [19]-[22]. however, even after remarkable performance with small networks, cnn-based architectures perform badly when the depth of the network increases, giving rise to vanishing gradient issues. to resolve this limitation, a residual learning (resnet) based architecture was proposed in [23] for image classification. motivated by this, an image retrieval approach is proposed here based on image-to-image reconstruction. gans utilized with an image-to-image translation approach provide outstanding results, which has inspired researchers to use gan networks for various diverse computer vision applications like image enhancement [15], moving object segmentation [16], depth estimation [24], etc. a gan is composed of two components, a generator and a discriminator, and adversarial training includes the training of both. in [25], an inception architecture is presented which significantly improved performance at relatively low computational cost. motivated by the significant performance of adversarial learning and inception architectures in diverse fields, this paper proposes a refined network architecture utilizing a residual network for image reconstruction based on measurement technology. our network removes the skip connections, which makes the reconstructed image more robust in addition to giving high-quality reconstruction.

the paper defines the following major contributions:
1. an end-to-end refined content-based image reconstruction learning framework is proposed for medical image retrieval.
2. a novel res-unet framework is proposed for reconstruction of the input medical image, exploiting a pretrained resnet50 model with a cnn up-sampler.
3. robust features are generated using the resnet encoder for index matching and retrieval of images from the database.
4. two benchmark datasets are utilized to quantify the proposed res-unet in cbmir.

2. proposed work

a novel content-based medical image retrieval framework named the refined res-unet model is proposed for effective and efficient image retrieval, as shown in figure 1. the framework incorporates two functioning blocks: a reconstruction block and an indexing block. the reconstruction block is inspired by the standard pix2pix [26] architecture, in which the unet architecture comprises an encoder and a decoder.
the proposed framework includes the resnet50 [23] network as an encoder with five blocks. firstly, the input image is processed by the resnet50 block of the proposed model, which performs the encoding to generate the set of features, taking advantage of a self-attention process. further, the features are decoded to reconstruct the original input medical image. the accurate and robust reconstruction of the original image from the encoded features justifies treating the encoded features as a conceptual representation of the input image [27], [28]. hence, the encoded features are further processed for the index matching and retrieval task. for the retrieval of the best matches from the available database corresponding to any input query image, a conventional index matching and retrieval module is used.

figure 1. refined res-unet framework for cbmir. stage 1: reconstruct image using res-unet. stage 2: encoder resnet50 feature extraction for index matching and image retrieval.

the proposed model functioning is processed in three steps as follows: i) refined image reconstruction network, ii) robust feature generation, iii) medical image retrieval module.

2.1. refined image reconstruction network

generative adversarial networks (gans) have been deployed for various problems like object segmentation, image enhancement, image super-resolution, image reconstruction, depth estimation, etc. gans are utilized to produce synthetic data resembling the original data. the algorithmic architecture of a gan consists of two neural networks, called the generator and the discriminator: the generator module generates unique plausible examples from the sample domain, and the discriminator classifies them as real or fake content. motivated by the successful results from gans in the literature, we propose the res-unet model for image restoration with self-attention for robust feature learning. the proposed network initially encodes the input into encoded features exploiting a self-attention process; consequently, the features are decoded back to restore the original image. as shown in figure 1, the proposed model is designed with an encoder and a decoder part. for the encoder part, resnet is utilized with its 50-layer architecture named resnet50; the resnet50 [23] network is considered up to layer conv5_x. the encoder takes an image as input ($I_{256 \times 256 \times 3}$) and outputs a latent feature vector ($\mathit{Feat}_{8 \times 8 \times 2048}$). the input/output relation of the proposed encoder is defined in (1) as follows:

$\mathit{Feat}_{8 \times 8 \times 2048} = \mathrm{ResNet50}(I_{256 \times 256 \times 3})$ .  (1)

the encoder is followed by a decoder while training the model. the decoder includes five layers, each of which comprises a convtranspose layer, a batch normalization layer and a relu layer; the decoder up-samples the feature vector of the input image. the output of the encoder ($\mathit{Feat}_{8 \times 8 \times 2048}$) is fed to the decoder, which finally outputs the reconstructed image. the output from the decoder is given in (2) as follows:

$\mathit{Out} = CT_{3 \times 3}^{F,S}(\mathit{Feat})$ ,  (2)

where $CT_{3 \times 3}^{F,S}(\cdot)$ represents a stack of convtranspose layers, $F = (512, 256, 128, 64, 3)$ is the number of filters used at each layer and $S = (2, 2, 2, 2, 2)$ the corresponding strides.
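the encoder-decoder of (1) and (2) can be sketched compactly in pytorch. the following is a minimal illustration rather than the authors' implementation: the torchvision resnet50 truncated after conv5_x serves as the encoder, and five convtranspose-batchnorm-relu blocks form the decoder; the padding choices are assumptions made here so that five stride-2 blocks map the 8 × 8 × 2048 features back to 256 × 256 × 3.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class ResUNet(nn.Module):
    def __init__(self):
        super().__init__()
        # encoder: resnet50 up to conv5_x, i.e. everything except the
        # average-pooling and fully connected layers; a 256x256x3 input
        # yields an 8x8x2048 latent feature map, as in equation (1)
        backbone = resnet50(weights="IMAGENET1K_V1")  # pretrained encoder
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])

        # decoder: five convtranspose-batchnorm-relu blocks with
        # f = (512, 256, 128, 64, 3) filters and stride 2, equation (2)
        channels = [2048, 512, 256, 128, 64, 3]
        layers = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            layers += [nn.ConvTranspose2d(c_in, c_out, kernel_size=3,
                                          stride=2, padding=1,
                                          output_padding=1),
                       nn.BatchNorm2d(c_out),
                       nn.ReLU(inplace=True)]
        self.decoder = nn.Sequential(*layers)

    def forward(self, x):
        feat = self.encoder(x)            # (n, 2048, 8, 8) latent features
        return self.decoder(feat), feat   # reconstruction and features

model = ResUNet()
image = torch.randn(1, 3, 256, 256)
reconstruction, features = model(image)
loss = nn.L1Loss()(reconstruction, image)  # l1 reconstruction loss, (3) below
```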
reconstruction loss: the proposed res-unet is an unsupervised learning technique which generates the output image as an input replica, i.e., a reconstruction of the input image. hence, to train the proposed architecture for reconstruction, we utilize the l1 loss shown in (3):

$loss = \lVert I_{256 \times 256 \times 3} - \mathit{Out} \rVert_1$ ,  (3)

where $I_{256 \times 256 \times 3}$ is the input image and $\mathit{Out}$ is the reconstructed image of $I$.

2.2. robust feature generation

the accuracy of an image retrieval model highly depends on the representation of features obtained through the feature extraction method. two of the major methods for feature extraction are hand-crafted feature descriptors and learning-based feature descriptors. hand-crafted feature extractors are rule-based descriptors like ss3d [9], ltcop [29] and ltrp [34]. learning-based feature descriptors make use of the characteristics of the input image, like shape, size and texture, to extract the features (alexnet [30], resnet [23], vgg-16 [31]). the proposed res-unet model uses a learning-based feature descriptor for the image reconstruction process. figure 2 represents the performance of the model for reconstruction of the image from the abstract features of the encoder; the results justify the ability of the encoder to generate effective features from the input medical image.

figure 2. upper row: the original database image. lower row: the reconstructed image using the proposed res-unet model. ild and via/elcap-ct databases are used in columns 1-2 and columns 3-4, respectively.

hence, the output from the encoder ($\mathit{Feat}_{8 \times 8 \times 2048}$) is utilized as the robust features of the corresponding input image. further, these features are used for the index matching and retrieval tasks from the database. in figure 1, the lower part depicts the indexing and image retrieval from the extracted features of the query image.

2.3. medical image retrieval module

let $vf_Q = [vf_{Q_1}, vf_{Q_2}, \dots, vf_{Q_N}]$ define the feature vector of the input query image, and let $vf_{DB_i} = [vf_{DB_{i,1}}, vf_{DB_{i,2}}, \dots, vf_{DB_{i,N}}]$, $i = 1, 2, \dots, M$, represent the feature vectors of the $M$ database images. for retrieving from the database the images most similar to the query image, a similarity index $D$ is computed between the input query image $vf_Q$ and each image of the dataset $vf_{DB_i}$. for our model's similarity evaluation, the d1 distance is used, as most of the existing state-of-the-art methods have used the same. the distance is given in (4):

$D(Q, DB) = \sum_{n=1}^{N} \left| \dfrac{vf_{DB_{m,n}} - vf_{Q_n}}{1 + vf_{DB_{m,n}} + vf_{Q_n}} \right|$ ,  (4)

where $Q$ is the query image, $N$ is the length of the feature vector, $DB$ is the database, $vf_{DB_{m,n}}$ is the $n$th feature of the $m$th image in the database and $vf_{Q_n}$ is the $n$th feature of the query image.

3. training details

the brats-2015 [32] database is considered for training the proposed model. the database consists of 220 high-grade gliomas and 54 low-grade gliomas, and it has t1-weighted (t1), t1-weighted with gadolinium enhancing contrast (t1c), t2-weighted (t2), and fluid attenuated inversion recovery scans. 14,415 brain magnetic resonance imaging (mri) slices are taken from the entire brats-2015 database and 4155 slices are selected to train the model over 20 epochs. the time taken for training is 553 s per epoch and the batch size is 1.

4. results and discussion

this section provides an elaborated performance analysis of the proposed res-unet model and gives a comparison with the conventional hand-crafted and learning-based algorithms in the literature. for comparison, the work considered three benchmark medical image databases with different modalities, mri and computed tomography (ct).
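before turning to the results, the index-matching step of section 2.3 can be made concrete. the following numpy sketch ranks database images by the d1 distance of (4) and returns the top-k matches; the array shapes and the value of k are illustrative assumptions.

```python
import numpy as np

def d1_distance(query, database):
    """d1 distance of equation (4) between one query feature vector of
    shape (n,) and m database feature vectors of shape (m, n)."""
    ratio = (database - query) / (1.0 + database + query)
    return np.abs(ratio).sum(axis=1)  # one distance per database image

def retrieve(query, database, k=10):
    """indices of the k database images most similar to the query."""
    return np.argsort(d1_distance(query, database))[:k]

# toy example with flattened encoder features (real vectors would have
# 8 * 8 * 2048 values per image)
rng = np.random.default_rng(0)
database = rng.random((1000, 128))
query = rng.random(128)
print(retrieve(query, database, k=10))
```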
4.1. retrieval accuracy on ild database

two image sets, interstitial images and clinical data for lung disease, are examined for performing the experimental study from the integrated interstitial lung disease (ild) dataset [33]. ild is known to be the finest database used for research and study of lung disease with radiological and clinical data, and it is utilized as a referral resource to train new radiologists. the efficiency of ild diagnosis can be improved by using a less-invasive x-ray methodology, as the x-ray and ct of a particular patient are kept in the same resource. the standard ild dataset includes ct scans of the lungs of 130 patients, in which disease regions have been delineated by experienced radiologists for each ild [33]. a small part of the dataset is available with 658 regions with markings of patches/regions for different classes, such as fibrosis (187 regions), micro-nodules (173 regions), healthy (139 regions), ground glass (106 regions) and emphysema (53 regions). table 1 provides the comparison of the proposed method and existing methods on the metric average retrieval precision applied to the topmost 10 images from the ild dataset, and table 2 provides the comparison on the metric group-wise average retrieval precision on the ild dataset. the results in table 1 and table 2 prove the outperformance of the proposed model over all the conventional methods.

table 1. comparison evaluation of the proposed method and existing methods on the metric average retrieval precision applied on the topmost 10 images from the ild dataset.
method        1    2      3      4      5      6      7      8      9      10
ss3d [9]      100  83.21  76.85  72.34  69.27  66.44  64.48  62.71  61.43  60.40
ltcop [29]    100  84.95  76.29  71.77  67.44  64.79  62.57  60.83  59.14  58.13
ltrp [34]     100  85.26  77.56  71.96  68.54  65.02  62.48  60.75  59.08  57.61
mdmep [11]    100  86.17  79.43  75.87  72.58  70.09  68.32  67.21  66.18  65.4
alexnet [30]  100  88.91  83.99  79.6   76.66  74.54  72.38  71.12  69.79  68.45
resnet [23]   100  89.67  84.75  81.31  78.84  76.55  75.51  74.34  73.44  72.67
vgg-16 [31]   100  89.21  83.94  80.24  77.96  75.94  74.4   73.21  72.26  71.34
res-unet      100  98     97.33  97.25  97.20  97.00  96.85  96.75  96.66  96.60

table 2. comparison on the metric group-wise average retrieval precision on the ild dataset.
method        group 1  group 2  group 3  group 4  group 5  average
ss3d [9]      47.17    72.62    53.21    52.82    75.68    60.30
ltcop [29]    62.64    69.95    53.96    48.72    78.97    62.85
ltrp [34]     54.72    72.41    50.94    53.85    77.51    61.88
mdmep [11]    75.09    70.37    68.11    43.08    79.56    67.24
alexnet [30]  65.28    83.10    65.85    48.72    82.64    69.12
resnet [23]   71.70    88.24    61.51    46.67    85.13    70.65
vgg-16 [31]   73.96    87.49    57.17    57.44    83.22    71.86
res-unet      93.3     100      97.3     96.09    97.3     96.79

4.2. retrieval accuracy on via/i-elcap-ct database

the proposed model is evaluated for cbmir on the via/i-elcap-ct [35] dataset. this dataset is widely used for cbmir model evaluation. the dataset comprises 1000 images split into 10 categories, with 100 images per category. table 3 depicts the comparison result in terms of average retrieval precision on the topmost 10 images of the database. likewise, table 4 provides the average retrieval recall comparison between the proposed and existing methods. table 3 and table 4 show that the proposed method excels in retrieval performance: the accuracy of conventional approaches reaches up to 93 %, whereas the proposed method shows 99 % accuracy for cbmir. the produced results can be attributed to the power of the network in adversarial feature learning.

table 3. comparison evaluation of the proposed method and existing methods on the metric average retrieval precision applied on the topmost 10 images from the via/i-elcap dataset.
method        1    2      3      4      5      6      7      8      9      10
ss3d [9]      100  74.25  64.27  58.70  54.78  52.55  50.47  48.68  47.44  46.23
ltcop [29]    100  93.80  91.10  88.75  86.80  85.35  84.13  82.60  81.31  80.28
ltrp [34]     100  81.60  74.73  70.55  67.70  65.32  63.49  61.96  60.88  59.62
mdmep [11]    100  81.60  74.73  70.55  67.70  65.32  63.49  61.96  60.88  59.62
alexnet [30]  100  99.60  99.13  98.63  98.06  97.43  96.56  95.85  94.89  93.88
resnet [23]   100  98.35  96.60  94.88  93.38  91.98  90.27  88.68  87.22  85.78
vgg-16 [31]   100  94.55  90.33  86.88  84.16  81.85  79.76  77.78  75.96  74.23
res-unet      100  99     98.66  98.75  98.6   98.33  98.42  98.5   98.55  98.6

table 4. comparison evaluation of the proposed method and existing methods on the metric average retrieval rate applied on the topmost 10 images from the via/i-elcap database.
method        1   2      3      4      5      6      7      8      9      10
ss3d [9]      10  14.85  19.28  23.48  27.39  31.53  35.33  38.94  42.70  46.24
ltcop [29]    10  18.76  27.33  35.50  43.40  51.21  58.89  66.08  73.18  80.28
ltrp [34]     10  16.32  22.42  28.22  33.85  39.19  44.44  49.57  54.79  59.62
mdmep [11]    10  16.32  22.42  28.22  33.85  39.19  44.44  49.57  54.79  59.62
alexnet [30]  10  19.92  29.74  39.45  49.03  58.46  67.59  76.68  85.40  93.88
resnet [23]   10  19.67  28.98  37.95  46.69  55.19  63.19  70.94  78.50  85.78
vgg-16 [31]   10  18.91  27.10  34.75  42.08  49.11  55.83  62.22  68.36  74.23
res-unet      10  19.8   29.6   39.5   49.3   59     68.9   78.8   88.7   98.6
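the entries of tables 1-4 are precision (or recall) values among the top-k retrieved images, averaged over all queries. a minimal sketch of how such values could be computed for a single query is shown below; the label array and counts are illustrative assumptions.

```python
import numpy as np

def precision_recall_at_k(retrieved_labels, query_label, n_relevant, k):
    """precision and recall among the top-k retrieved images for one query."""
    hits = np.sum(np.asarray(retrieved_labels[:k]) == query_label)
    return hits / k, hits / n_relevant

# toy example: category labels of the 10 best-ranked database images for
# a query of category 3, in a database with 100 images per category
retrieved = [3, 3, 3, 1, 3, 3, 2, 3, 3, 3]
for k in range(1, 11):
    p, r = precision_recall_at_k(retrieved, query_label=3,
                                 n_relevant=100, k=k)
    print(f"k={k:2d}  precision={100 * p:6.2f} %  recall={100 * r:5.2f} %")
```

averaging these per-query values over all queries gives the tabulated average retrieval precision and average retrieval rate.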
5. conclusion

in this paper, a novel content-based res-unet framework is proposed that reconstructs the input medical image and performs an efficient image retrieval task. the framework operates in two phases, a training phase and an inference phase. in the training phase, the model utilizes an encoder and a decoder to reconstruct the input medical image; in our proposed work, resnet50 is utilized as the encoder to perform the encoding of feature vectors. in phase 2, we only utilize the encoder part to generate the feature vector of the input image. further, utilizing those features, similar images are retrieved from the database using a similarity metric. the performance evaluation of the proposed model is done on two benchmark datasets, ild and via/elcap-ct. the comparison results show the outperformance of the proposed model compared to the conventional methods.

references
[1] mandal murari, mallika chaudhary, santosh kumar vipparthi, subrahmanyam murala, anil balaji gonde, shyam krishna nagar, antic: antithetic isomeric cluster patterns for medical image retrieval and change detection, iet computer vision, vol. 13, no. 1, 2019, pp. 31-43. doi: 10.1049/iet-cvi.2018.5206
[2] vipparthi santosh kumar, subrahmanyam murala, anil balaji gonde, q. m. jonathan wu, local directional mask maximum edge patterns for image retrieval and face recognition, iet computer vision, vol. 10, no. 3, 2016, pp. 182-192. doi: 10.1049/iet-cvi.2015.0035
[3] vipparthi santosh kumar, subrahmanyam murala, s. k. nagar, anil balaji gonde, local gabor maximum edge position octal patterns for image retrieval, neurocomputing, vol. 167, 2015, pp. 336-345. doi: 10.1016/j.neucom.2015.04.062
[4] vipparthi santosh kumar, s. k. nagar, expert image retrieval system using directional local motif xor patterns, expert systems with applications, vol. 41, no. 17, 2014, pp. 8016-8026. doi: 10.1016/j.eswa.2014.07.001
[5] vipparthi santosh kumar, shyam krishna nagar, multi-joint histogram based modelling for image indexing and retrieval, computers & electrical engineering, vol. 40, no. 8, 2014, pp. 163-173. doi: 10.1016/j.compeleceng.2014.04.018
[6] d. g. lowe, object recognition from local scale-invariant features, proceedings of the seventh ieee international conference on computer vision, ieee, vol. 2, 1999, pp. 1150-1157. doi: 10.1109/iccv.1999.790410
[7] ojala timo, matti pietikainen, topi maenpaa, multi-resolution gray-scale and rotation invariant texture classification with local binary patterns, ieee transactions on pattern analysis and machine intelligence, vol. 24, no. 7, july 2002, pp. 971-987. doi: 10.1109/tpami.2002.1017623
[8] tan xiaoyang, bill triggs, enhanced local texture feature sets for face recognition under difficult lighting conditions, ieee transactions on image processing, vol. 19, no. 6, 2010, pp. 1635-1650. doi: 10.1007/978-3-540-75690-3_13
[9] murala subrahmanyam, q. m. jonathan wu, spherical symmetric 3d local ternary patterns for natural, texture and biomedical image indexing and retrieval, neurocomputing, vol. 149, 2015, pp. 1502-1514. doi: 10.1016/j.neucom.2014.08.042
[10] vipparthi santosh kumar, s. k. nagar, directional local ternary patterns for multimedia image indexing and retrieval, international journal of signal and imaging systems engineering, vol. 8, no. 3, 2015, pp. 137-145. doi: 10.1504/ijsise.2015.070485
[11] g. m. galshetwar, l. m. waghmare, a. b. gonde, s. murala, multi-dimensional multi-directional mask maximum edge pattern for bio-medical image retrieval, international journal of multimedia information retrieval, vol. 7, no. 4, 2018, pp. 231-239. doi: 10.1016/j.bbe.2020.07.011
[12] g. m. galshetwar, l. m. waghmare, a. b. gonde, s. murala, local energy oriented pattern for image indexing and retrieval, journal of visual communication and image representation, vol. 64, 2019, pp. 102615.
doi: 10.1016/j.jvcir.2019.102615
[13] biswas ranjit, sudipta roy, debraj purkayastha, an efficient content-based medical image indexing and retrieval using local texture feature descriptors, international journal of multimedia information retrieval, vol. 8, no. 4, 2019, pp. 217-231. doi: 10.1007/s13735-019-00176-9
[14] vipparthi santosh kumar, subrahmanyam murala, shyam krishna nagar, dual directional multi-motif xor patterns: a new feature descriptor for image indexing and retrieval, optik, vol. 126, 2015, pp. 1467-1473. doi: 10.1016/j.ijleo.2015.04.018
[15] akshay dudhane, kuldeep m. biradar, prashant w. patil, praful hambarde, subrahmanyam murala, varicolored image de-hazing, proc. of the ieee/cvf conference on computer vision and pattern recognition, seattle, wa, usa, 13-19 june 2020, pp. 4563-4572. doi: 10.1109/cvpr42600.2020.00462
[16] prashant w. patil, kuldeep m. biradar, akshay dudhane, subrahmanyam murala, an end-to-end edge aggregation network for moving object segmentation, proc. of the ieee/cvf conference on computer vision and pattern recognition, seattle, wa, usa, 13-19 june 2020, pp. 8146-8155. doi: 10.1109/cvpr42600.2020.00817
[17] kuldeep marotirao biradar, ayushi gupta, murari mandal, santosh kumar vipparthi, challenges in time-stamp aware anomaly detection in traffic videos, 2019, 8 pp. doi: 10.48550/arxiv.1906.04574
[18] kuldeep biradar, sachin dube, santosh kumar vipparthi, dearest: deep convolutional aberrant behavior detection in real-world scenarios, ieee 13th international conference on industrial and information systems (iciis), ieee, rupnagar, india, 1-2 december 2018, pp. 163-167. doi: 10.1109/iciinfs.2018.8721378
[19] e. s. kumar, c. shoba bindu, medical image analysis using deep learning: a systematic literature review, international conference on emerging technologies in computer engineering, springer, vol. 985, 2019, pp. 81-97. doi: 10.1007/978-981-13-8300-7_8
[20] r. pinapatruni, c. s. bindu, learning image representation from image reconstruction for a content-based medical image retrieval, signal, image and video processing, vol. 14, no. 7, 2020, pp. 1319-1326. doi: 10.1007/s11760-020-01670-y
[21] praful hambarde, sanjay talbar, abhishek mahajan, satishkumar chavan, meenakshi thakur, nilesh sable, prostate lesion segmentation in mr images using radiomics based deeply supervised u-net, biocybernetics and biomedical engineering, vol. 40, no. 4, 2020, pp. 1421-1435. doi: 10.1016/j.bbe.2020.07.011
[22] olaf ronneberger, philipp fischer, thomas brox, u-net: convolutional networks for biomedical image segmentation, international conference on medical image computing and computer-assisted intervention, springer, cham, 2015. doi: 10.1007/978-3-319-24574-4_28
[23] kaiming he, xiangyu zhang, shaoqing ren, jian sun, deep residual learning for image recognition, proceedings of the ieee conference on computer vision and pattern recognition, 2016, pp. 770-778. doi: 10.1109/cvpr.2016.90
[24] praful hambarde, akshay dudhane, prashant w. patil, subrahmanyam murala, abhinav dhall, depth estimation from single image and semantic prior, ieee international conference on image processing (icip), abu dhabi, united arab emirates, 25-28 october 2020, ieee, pp. 1441-1445. doi: 10.1109/icip40778.2020.9190985
[25] christian szegedy, sergey ioffe, vincent vanhoucke, alexander a. alemi, inception-v4, inception-resnet and the impact of residual connections on learning, thirty-first aaai conference on artificial intelligence, vol. 31, no. 1, 2017.
online [accessed 24 march 2022] https://ojs.aaai.org/index.php/aaai/article/view/11231
[26] phillip isola, jun-yan zhu, tinghui zhou, alexei a. efros, image-to-image translation with conditional adversarial networks, proceedings of the ieee conference on computer vision and pattern recognition, 2017, pp. 5967-5976. doi: 10.1109/cvpr.2017.632
[27] murala subrahmanyam, q. m. jonathan wu, 3d local circular difference patterns for biomedical image retrieval, international journal of multimedia information retrieval, vol. 8, no. 2, pp. 115-125. doi: 10.1016/j.neucom.2014.08.042
[28] vipparthi santosh kumar, shyam krishna nagar, integration of color and local derivative pattern features for content-based image indexing and retrieval, journal of the institution of engineers (india): series b, vol. 96, no. 3, 2015, pp. 251-263. doi: 10.1007/s40031-014-0153-5
[29] murala subrahmanyam, q. m. jonathan wu, local ternary co-occurrence patterns: a new feature descriptor for mri and ct image retrieval, neurocomputing, vol. 119, 2013, pp. 399-412. doi: 10.1016/j.neucom.2013.03.018
[30] krizhevsky alex, ilya sutskever, geoffrey e. hinton, imagenet classification with deep convolutional neural networks, advances in neural information processing systems, vol. 25, 2012, pp. 1097-1105. doi: 10.1145/3065386
[31] simonyan karen, andrew zisserman, very deep convolutional networks for large-scale image recognition, iclr 2015. online [accessed 24 march 2022] https://arxiv.org/abs/1409.1556
[32] f. isensee, p. kickingereder, w. wick, m. bendszus, k. h. maier-hein, brain tumor segmentation and radiomics survival prediction: contribution to the brats 2017 challenge, international miccai brainlesion workshop, springer, pp. 287-297. doi: 10.1007/978-3-319-75238-9_25
[33] adrien depeursinge, alejandro vargas, alexandra platon, antoine geissbuhler, pierre-alexandre poletti, henning müller, building a reference multimedia database for interstitial lung diseases, computerized medical imaging and graphics, vol. 36, no. 3, 2012, pp. 227-238. doi: 10.1016/j.compmedimag.2011.07.003
[34] murala subrahmanyam, r. p. maheshwari, r. balasubramanian, local tetra patterns: a new feature descriptor for content-based image retrieval, ieee transactions on image processing, vol. 21, no. 5, 2012, pp. 2874-2886. doi: 10.1109/tip.2012.2188809
[35] via/i-elcap database.
online [accessed 10 march 2019] http://www.via.cornell.edu/lungdb.html

editorial to selected papers from the international excellence phd school 'i. gorini'
acta imeko issn: 2221-870x december 2021, volume 10, number 4, 5
pasquale arpaia1, umberto cesaro1, francesco lamonaca2
1 university of naples federico ii, dept. of information and electrical engineering, via claudio 21, naples, 80125 (na), italy
2 university of calabria, dept. of computer science, modelling, electronic and system, via p. bucci 41c, arcavacata di rende, 87036 (cs), italy

section: editorial
citation: pasquale arpaia, umberto cesaro, francesco lamonaca, editorial to selected papers from the international excellence phd school 'i. gorini', acta imeko, vol. 10, no.
4, article 3, december 2021, identifier: imeko-acta-10 (2021)-04-03
received december 14, 2021; in final form december 14, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: umberto cesaro, e-mail: ucesaro@unina.it

dear readers,
this acta imeko special issue collects the best works presented by young researchers attending the international ph.d. school "italo gorini 2021", held in naples on 6-10 september 2021. the international ph.d. school "italo gorini 2021" is the doctoral school promoted by the italian associations "electrical and electronic measurements group" (gmee) and "mechanical and thermal measurements group" (gmmt). the school activity deals with a wide variety of issues related to measurement. the school is aimed at phd students, as well as young people from research and industry. it addresses both methodological issues, in the field of science and technology related to measurement and instrumentation, and advanced problems of practical interest. special attention is also paid to the impact of measurement on the scientific and engineering context, which is strongly influenced by the evolution of technology in different sectors. this year the "italo gorini" school received the patronage of imeko, and we are honoured to promote imeko among the young, brilliant scientists who will be the future of measurement science. the works presented in this special issue are the extended versions of those that won the "best presentation", "best scientific contribute" and "best application" awards during the school.

simone mari et al., in the paper entitled 'measurements for non-intrusive load monitoring through machine learning approaches', propose several possible approaches for non-intrusive load monitoring systems operating in real time, analysing them from the measurement point of view. the investigated measurement and post-processing techniques are illustrated and the results discussed.

emanuele buchicchio et al., in the paper 'gesture recognition of sign language alphabet with a convolutional neural network using a magnetic positioning system', introduce a system that combines sensor-based gesture acquisition and deep learning techniques for gesture recognition, providing 100 % classification accuracy. the social impact of the proposal is wide, since gesture recognition is a fundamental step to enable efficient communication for the deaf through the automated translation of sign language.

mattia alessandro ragolia et al., in the paper 'a virtual platform for real-time performance analysis of electromagnetic tracking systems for surgical navigation', propose a virtual platform for assessing the performance of electromagnetic tracking systems (emtss) for surgical navigation, showing in real time the effects of the various sources affecting the distance estimation accuracy. the implemented measurement platform provides a useful tool for supporting engineers during the design and prototyping of emtss. a particular effort was dedicated to the development of an efficient and robust algorithm, to obtain an accurate estimation of the instrument position at distances from the magnetic field generator beyond 0.5 m.
Indeed, the main goal of the paper is to improve the limited range of current commercial systems, which strongly affects the freedom of movement of the medical team.

The paper by Leila Es Sebar et al., 'A metrological approach for multispectral photogrammetry', presents the design and development of a three-dimensional reference object for the metrological quality assessment of photogrammetry-based techniques, which are typically used in the cultural heritage field. The reference object is a 3D-printed specimen with a nominal manufacturing uncertainty in the order of 0.01 mm. The object has been realised as a dodecahedron, and on each face a different pictorial preparation has been inserted. The preparations include several pigments, binders, and varnishes, representative of the materials and techniques historically used by artists.

Pasquale Arpaia, Umberto Cesaro, Guest Editors
Francesco Lamonaca, Editor-in-Chief

A virtual platform for real-time performance analysis of electromagnetic tracking systems for surgical navigation

ACTA IMEKO
ISSN: 2221-870X
December 2021, Volume 10, Number 4, 103-110

Mattia Alessandro Ragolia1, Filippo Attivissimo1, Attilio Di Nisio1, Anna M. L. Lanzolla1, Marco Scarpetta1

1 Department of Electrical and Information Engineering, Polytechnic of Bari, Via E. Orabona 4, 70125 Bari, Italy

Section: Research Paper
Keywords: electromagnetic tracking systems; image guided surgery; surgical navigation; real-time virtual platform; system design
Citation: Mattia Alessandro Ragolia, Filippo Attivissimo, Attilio Di Nisio, Anna Maria Lucia Lanzolla, Marco Scarpetta, A virtual platform for real-time performance analysis of electromagnetic tracking systems for surgical navigation, Acta IMEKO, vol. 10, no. 4, article 18, December 2021, identifier: IMEKO-ACTA-10 (2021)-04-18
Section Editors: Umberto Cesaro and Pasquale Arpaia, University of Naples Federico II, Italy
Received October 27, 2021; in final form December 5, 2021; published December 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Mattia Alessandro Ragolia, e-mail: mattiaalessandro.ragolia@poliba.it

Abstract
Electromagnetic tracking systems (EMTSs) are widely used in surgical navigation, allowing to improve the outcome of diagnosis and surgical interventions by providing the surgeon with the real-time position of surgical instruments during medical procedures. However, particular effort was dedicated to the development of efficient and robust algorithms, to obtain an accurate estimation of the instrument position for distances from the magnetic field generator beyond 0.5 m. Indeed, the main goal is to improve the limited range of current commercial systems, which strongly affects the freedom of movement of the medical team. Studies are currently being conducted to optimize the magnetic field generator configuration (both geometrical arrangement and electrical properties), since it affects tracking accuracy. In this paper, we propose a virtual platform for assessing the performance of EMTSs for surgical navigation, providing real-time results and statistics, and allowing to track instruments both in real and simulated environments. Simulations and experimental tests are performed to validate the proposed virtual platform, by employing it to assess the performance of a real EMTS. The platform offers a real-time tool to analyse EMTS components and field generator configurations, for a deeper understanding of EMTS technology, thus supporting engineers during system design and characterization.

1. Introduction
Tracking systems are widely used in many applications, providing information about the localization of a target object in a defined area. The medical field is among those benefiting most from research on tracking systems. Different kinds of technologies are used to develop such systems, depending on accuracy requirements, tracking environment, and tracking distance. The applications range from optical and inertial systems for motion tracking and rehabilitation [1]-[2] up to more complex systems for surgical navigation. The latter is a procedure that relies on tracking systems to guide the surgeon during interventions; it reduces the invasiveness of the intervention, thus improving its accuracy and safety and reducing the risk of complications and hospitalization time [3]-[6]. Surgical navigation mainly relies on optical and electromagnetic (EM) technologies [7].
Optical tracking systems are very accurate and reliable, and they are employed in many medical applications [8]-[11], but they constantly require a direct line of sight, which prevents their use in the presence of obstacles during intracorporeal tracking. Electromagnetic tracking systems (EMTSs) overcome this limitation [12]: a very small magnetic sensor, which measures the magnetic field produced by a known field generator (FG), is inserted into the surgical instrument (e.g., a flexible instrument such as an endoscope or a needle [6]), and the position of the sensor is estimated by means of a suitable algorithm. The intraoperative localization of the instruments is shown on a screen in front of the surgeon, and the anatomical area is reconstructed by merging information obtained from different medical imaging techniques, such as computed tomography, ultrasound, and nuclear magnetic resonance [13], [14], which are acquired during the pre-operative phase.

The main limitation of EM technology is the short tracking distance, which is generally no longer than 0.5 m from the FG in current commercial systems, due to the reduced amplitude of the magnetic field with distance and the high sensitivity to EM interference and magnetic field distortions, which limit tracking accuracy far from the FG [15]. Tracking distance and accuracy are crucial, and they should be taken into account during the development of a surgical navigation system. Many aspects affect system performance, and engineers and manufacturers should consider all of them, since even a small increase in accuracy or tracking distance is a valuable achievement in this field.

In this paper, we propose a virtual platform for assessing the performance of EMTSs for surgical navigation, showing in real time how the various sources of error affect the accuracy of tracking distance estimation. This platform provides a useful tool for supporting engineers during the design and prototyping of EMTSs.
The paper is structured as follows: the main sources of error in EMTSs, and the importance of knowing them during system development, are discussed in Section 2; the virtual platform, developed to provide a tool to analyse system performance during the prototyping phase, is illustrated in Section 3; in Section 4, the EMTS prototype architecture is described and the developed virtual platform is evaluated by simulated and experimental tests performed on the EMTS prototype; conclusions are drawn in Section 5.

2. Sources of error
Modelling the magnetic field and the various sources of error is crucial in many applications, allowing the implementation of ad-hoc algorithms for automated error compensation [16], [17]. EMTS tracking accuracy can be affected by several sources of error, which can be divided into static errors and dynamic errors [7], [12], [15]. Static errors occur when the sensor is placed in a given position, maintaining a fixed orientation. They are in turn classified as follows.
• Systematic errors: they are due to distortions of the magnetic field generated by i) the presence of metal objects in the surrounding environment, in which the variable field can induce eddy currents (mainly in AC systems) that generate secondary magnetic fields adding to the main magnetic field; ii) ferromagnetic materials which, immersed in the main field, orient their domains, causing a magnetization that modifies the field lines; and iii) power supply currents of the EMTS itself or of other electronic medical devices present in the operating room, which can also distort the magnetic field. These errors can be reduced by appropriate calibration techniques [18].
• Random errors (also referred to as jitter [12], [19]): these errors, mainly due to noise, reduce the repeatability of the system.
Dynamic errors change over time, and they are mainly caused by variations in external EM fields due to the movement of external objects, such as conductive, ferromagnetic, and electrical materials, which cause field distortions that are extremely difficult to compensate. The movement of the sensor itself is also a source of dynamic error, depending on its speed. It must also be considered that tracking accuracy depends on the design of the FG and on the choice of the position reconstruction algorithm. Moreover, the non-ideality of the electronic components of the tracking system itself affects the performance of the system. In fact, the generated field is never perfectly stable, due to the intrinsic limits of the FG, and the measurement and acquisition process is subject to noise, which cannot be totally eliminated. For the reduction of random and dynamic errors, suitable filtering techniques and synchronization of the sampling frequency are particularly useful [20]. The implementation of a Kalman filter can also significantly reduce random errors [21].

3. Virtual platform
The virtual platform is developed in LabVIEW® (by National Instruments Corp.), which is widely used to control and monitor industrial equipment and processes, and for the creation of test and measurement systems [22], [23]. It offers real-time feedback on tracking accuracy (Figure 1) and provides an intuitive and user-friendly interface. It is designed to be used in combination with a robot that moves the sensor and provides accurate position references. The platform is composed of six main sections, which are described in the following subsections. The functioning of the platform is illustrated in Figure 2.
The model of the EMTS is defined in an external file and imported into the platform, and the user defines the trajectory for sensor movement. Two different modalities can be used: i) in the experimental mode, the platform connects to the DAQ device and the induced signal in the magnetic sensor is acquired as it is moved by the robot along the defined trajectory; ii) in the simulation mode, the signal of the magnetic sensor is simulated by employing a model of the magnetic field. In both cases, noise can be added to the signal. Finally, the position of the sensor is estimated by means of a suitable reconstruction algorithm, providing a real-time 3D representation and error statistics.

3.1. 3D view and real-time tracking statistics
Tracking systems provide the surgeon with a real-time estimate of the sensor position, which is shown on a screen in front of the surgeon, where the patient's anatomical area is also displayed. 2D and 3D views are commonly used; the latter is more difficult to interpret, but seems to guarantee greater precision [24]. Hence, the platform provides a 3D view of tracking, where the actual and estimated positions are displayed. Moreover, real-time feedback on system performance becomes particularly useful when analysing how a system responds to different inputs: many design errors can be quickly avoided thanks to real-time feedback. Hence, real-time plots and statistics of position tracking errors are provided during experiments. In particular, the position error along each Cartesian axis, computed as the difference between the estimated position and the one provided by the robot, is shown on a graph, and its mean value and standard deviation are displayed. For example, the peak error in Figure 1 suggests a deeper exploration of the corresponding region of space. All tracking results and statistics can be easily exported for further processing in MATLAB or other software.

3.2. EMTS model import
• The number and arrangement of the transmitting coils, as well as their electrical properties, highly affect system performance [25]. Often the transmitting coils must be placed inside a well-defined space due to practical needs, such as the configuration of the clinical environment, weight limitations, or application requirements. Moreover, the tracking volume is usually proportional to the dimension of the FG (i.e., the magnetic field intensity) [12]. Hence the platform provides the import of a MAT-file (binary MATLAB® file) containing the geometrical arrangement and the electrical properties of the transmitting coils of the FG, in order to test different FG layouts, as well as the parameters of the sensor coil. In Section 4.2, two different FG configurations are compared to illustrate this functionality.
• The assessment of tracking accuracy is a mandatory step in EMTS development, and different types of protocols have been defined [12], most commonly employing phantoms such as board and cube phantoms, as well as moving phantoms to assess dynamic performance. In addition, robots are also used to move the sensor, providing accurate position references and allowing automatic and repeatable tests; on the other hand, robotic components can cause interference in the tracking volume, and they are quite expensive.
In [26], the authors used a carbon fibre rod, held by the robot gripper, with the magnetic sensor positioned at the tip, in order to distance the sensor from the metallic components of the robot. The kinematics of the simulated robot (shown in Figure 1) is based on a real robot, model RV-2FB-D from Mitsubishi, which was employed in this research to move the sensor; however, the platform provides the import of a file containing the model (i.e., the geometry of joints and links) of any robot. Both the FG and the robot are displayed in the 3D scene of the platform by means of the LabVIEW Robotics toolkit. The FG shown in Figure 1 represents the EMTS described in Section 4.3.

Figure 1. Virtual platform developed in LabVIEW, during the execution of a simulation. On the left: the model of the MELFA robot is shown during the movement; the green point is the position estimate provided by the algorithm, and the FG reference system is shown in red. On the top: the noise settings section and the trajectory definition are shown. At the bottom: real-time statistics of the position error are provided.
Figure 2. Scheme of the functioning of the virtual platform.

3.3. Reconstruction algorithm and magnetic field model
Different techniques can be used to reconstruct the position of the sensor in an EMTS based on frequency division multiplexing. In [27], a suitable interpolation algorithm was used to reconstruct the position of the sensor in a small space. The sensor is placed in $M$ different calibration positions, and the voltages from the sensor are measured; then, position estimation is based on interpolation between calibration points using Delaunay triangulation and linear interpolation. This technique requires measurements of the magnetic field in a dense grid to reach adequate accuracy, it does not allow extrapolation, and it is time-consuming; thus it can be applied only to small regions of the tracking volume. Other algorithms are based on i) a model of the magnetic field obtained by approximating the coils as magnetic dipoles, or ii) a model obtained by considering the mutual inductance between the transmitting coil and the sensor coil, which are treated as circular filaments [28]. Both models require knowledge of the geometrical parameters of the coils and of the electrical quantities (i.e., current and voltage) of the transmitting coils, but do not need as many measurements as the interpolation method. Moreover, they can be used to compute the magnetic field (and therefore the induced voltage) in the whole tracking volume, allowing experiments in a simulated environment. Hence, the platform provides the possibility to choose an arbitrary reconstruction algorithm (developed in MATLAB), or to track the sensor simultaneously with two or more reconstruction techniques, to compare their performance in different scenarios.

In this paper, to model the magnetic field produced by the FG and to reconstruct the sensor position, we employ the dipole model explained in [21], [26], [28]. It is obtained by considering the magnetic moment generated by the i-th transmitting coil, expressed as:

$$\boldsymbol{m}_{tx,i} = m_{tx,i}\,\hat{\boldsymbol{n}}_{tx,i}\,, \qquad m_{tx,i} = N_{tx,i}\,S_{tx,i}\,I_i\,, \qquad S_{tx,i} = \pi\,r_{tx,i}^2\,, \quad (1)$$

where $\hat{\boldsymbol{n}}_{tx,i}$ is the versor orthogonal to the surface $S_{tx,i}$ of the i-th transmitting coil, and $r_{tx,i}$, $N_{tx,i}$, and $I_i$ are the coil radius, the number of turns, and the rms value of the excitation current, respectively.
The subscript $i$ takes into account the differences of the real parameters among the transmitting coils [26]. The rms magnetic field generated by the i-th transmitting coil at a generic point $\boldsymbol{p}_s = [x, y, z]^{\mathrm{T}}$ is

$$\mathbf{B}_i(\boldsymbol{p}_s, I_i) = B_i^x\,\hat{\boldsymbol{x}} + B_i^y\,\hat{\boldsymbol{y}} + B_i^z\,\hat{\boldsymbol{z}} = \frac{\mu_0}{4\pi}\,\frac{m_{tx,i}}{d_i^3}\left[3\left(\hat{\boldsymbol{n}}_{tx,i}\cdot\hat{\boldsymbol{n}}_{d,i}\right)\hat{\boldsymbol{n}}_{d,i} - \hat{\boldsymbol{n}}_{tx,i}\right], \quad (2)$$

where $d_i = |\boldsymbol{d}_i|$, with $\boldsymbol{d}_i = \boldsymbol{p}_s - \boldsymbol{p}_{tx,i}$ the vector distance between $\boldsymbol{p}_s$ and the centre $\boldsymbol{p}_{tx,i}$ of the i-th transmitting coil, and $\hat{\boldsymbol{n}}_{d,i}$ its associated versor. If the magnetic flux is considered homogeneous over the surface $S_s$ of the sensing coil, the induced voltage related to the i-th coil can be expressed as

$$\tilde{v}_i = 2\pi f_i\,N_s\,S_s\,\mathbf{B}_i\cdot\hat{\boldsymbol{n}}_s\,, \quad (3)$$

where $N_s$ is the number of sensor coil turns and $\hat{\boldsymbol{n}}_s = [\cos\alpha_s\cos\beta_s,\ \sin\alpha_s\cos\beta_s,\ \sin\beta_s]^{\mathrm{T}}$ is the versor orthogonal to the sensor surface, with $\alpha_s$ and $\beta_s$ defining the orientation of the sensor coil. The position is estimated by minimizing the following cost function [29]:

$$F(\boldsymbol{\theta}, \boldsymbol{I}_{tx}) = \left\|\boldsymbol{v} - \tilde{\boldsymbol{v}}(\boldsymbol{\theta}, \boldsymbol{I}_{tx})\right\|_2^2\,, \quad (4)$$

which represents the squared error between the induced voltage $\boldsymbol{v}$ measured from the sensor coil and the voltage $\tilde{\boldsymbol{v}}$ obtained by applying (3); the latter depends on $\boldsymbol{\theta} = [\boldsymbol{p}_s^{\mathrm{T}}, \alpha_s, \beta_s]^{\mathrm{T}}$ and on the vector of the currents $\boldsymbol{I}_{tx}$. The minimum of (4), i.e. $\hat{\boldsymbol{\theta}} = \arg\min F(\boldsymbol{\theta}, \boldsymbol{I}_{tx})$, is obtained by using the Levenberg–Marquardt algorithm. For the details, see [29].

3.4. Experimental or simulation mode
The platform allows controlling and performing both simulation and experimental tests.
Simulation mode – the aforementioned models allow experiments to be carried out in a simulated environment, resulting in a valuable tool for EMTS design. It is possible to define the FG and the sensor coil (Section 3.2), the position reconstruction algorithm (Section 3.3), and custom trajectories along which to move the sensor.
Experimental mode – it is possible to define a trajectory, move the robot, and acquire data from the data acquisition (DAQ) device. The 3D scene shows the real-time movement of the robot, along with the tracked position (the green dot in Figure 1).
Hybrid mode – it is possible to import experimental data acquired during past experiments and to run a simulation test showing the tracking results, also reproducing the actual acquisition time of the related experiment.

3.5. Noise section
The voltage noise in the sensor signal highly affects tracking accuracy. Several error sources contribute to the sensor voltage noise, and two main contributions can be considered (all noise components are intended as standard deviations of rms quantities): i) the measurement and acquisition noise $\sigma_{acq}$, and ii) the FG noise $\sigma_B(\boldsymbol{p}_s, \sigma_I)$; the latter depends on the position $\boldsymbol{p}_s$ of the sensor relative to the FG and is due to the excitation current noise $\sigma_I$. The voltage noise $\sigma_v$ can be expressed as:

$$\sigma_v = \sqrt{\sigma_B^2 + \sigma_{acq}^2}\,, \quad (5)$$

where it has been assumed that $\sigma_B$ and $\sigma_{acq}$ contribute independently. Note that:
• $\sigma_{acq}$ is approximately constant in the whole working volume, since it depends on the measurement devices and on the Johnson noise of the sensor. Experimentally, $\sigma_{acq} \cong 20$ nV has been measured for each frequency component.
• $\sigma_B$ depends on the sensor pose and on the excitation currents, hence it is related to $\sigma_I$, and its contribution is higher when the sensor is closer to the FG. Moreover, $\sigma_I$ can differ between transmitting coils. Experimentally, $\sigma_I \cong 0.07$ mA has been measured as an average value among the transmitting coils. In [29], a technique to compensate the effect of $\sigma_I$ on the position error has been proposed.
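As an illustration of the dipole model (1)-(3) and of the least-squares estimation (4), the following Python snippet simulates the voltages induced by a hypothetical five-coil FG and recovers the sensor pose with SciPy's Levenberg–Marquardt solver. It is a minimal sketch, not the platform's actual LabVIEW/MATLAB implementation: all coil and sensor parameters, the poses, and the random seed are illustrative assumptions.

```python
# Minimal sketch of the dipole model (1)-(3) and the least-squares pose
# estimation (4); coil/sensor parameters below are illustrative assumptions,
# not those of the actual EMTS prototype.
import numpy as np
from scipy.optimize import least_squares

MU0 = 4e-7 * np.pi

# Hypothetical five-coil FG: centre (m), axis versor, turns, radius (m),
# rms current (A), excitation frequency (Hz).
coils = [
    dict(p=np.array([x, y, 0.0]), n=np.array([0.0, 0.0, 1.0]),
         N=100, r=0.05, I=1.0, f=1000.0 * (k + 1))
    for k, (x, y) in enumerate([(-0.1, -0.1), (0.1, -0.1), (0.0, 0.0),
                                (-0.1, 0.1), (0.1, 0.1)])
]
NS = 300                      # sensor coil turns
SS = np.pi * 0.0005**2        # sensor coil area (m^2)

def dipole_field(c, ps):
    """RMS field of one transmitting coil at point ps, eq. (2)."""
    m = c["N"] * np.pi * c["r"]**2 * c["I"]          # moment magnitude, eq. (1)
    d = ps - c["p"]
    dist = np.linalg.norm(d)
    nd = d / dist
    return MU0 / (4 * np.pi) * m / dist**3 * (3 * np.dot(c["n"], nd) * nd - c["n"])

def induced_voltages(theta):
    """Induced rms voltages for pose theta = [x, y, z, alpha, beta], eq. (3)."""
    ps, a, b = theta[:3], theta[3], theta[4]
    ns = np.array([np.cos(a) * np.cos(b), np.sin(a) * np.cos(b), np.sin(b)])
    return np.array([2 * np.pi * c["f"] * NS * SS * np.dot(dipole_field(c, ps), ns)
                     for c in coils])

# Synthetic "measurement": true pose plus acquisition noise (sigma_acq = 20 nV).
rng = np.random.default_rng(0)
theta_true = np.array([0.05, -0.02, 0.60, 0.3, 1.2])
v_meas = induced_voltages(theta_true) + rng.normal(0.0, 20e-9, len(coils))

# Minimise the cost (4) with the Levenberg-Marquardt algorithm.
fit = least_squares(lambda th: induced_voltages(th) - v_meas,
                    x0=[0.0, 0.0, 0.5, 0.0, 1.0], method="lm")
print("position error (mm):", 1e3 * np.linalg.norm(fit.x[:3] - theta_true[:3]))
```

The same structure — forward model plus iterative minimisation — underlies the reconstruction options offered by the platform, whichever field model is selected.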
The effect of these noise components must be considered during simulations. Hence, the platform includes a section to set the noise components ($\sigma_{acq}$ is a scalar, $\sigma_I$ is an $n \times 1$ vector), to be added in simulations and also during real-time experiments, to investigate how a certain source of error affects tracking accuracy. For instance, a discussion about the selection of the DAQ device depending on the noise is carried out in Section 4.1.

4. Validation
The proposed platform is suitable for the assessment of virtual EMTSs during simulation, as well as of developed EMTS prototypes. In Sections 4.1 and 4.2 we employ the platform in a practical case, showing its usefulness in assisting engineers during EMTS design and characterization, and in Section 4.3 we illustrate some tests performed on the real EMTS.

4.1. DAQ device selection
The platform can support engineers during DAQ device selection by using the noise section described in Section 3.5. When setting up the measurement chain to acquire the signal from the sensor coil, it is important to select the DAQ device according to the accuracy requirements of the system. Sampling frequency and noise are two main parameters to be considered [20], since they affect system accuracy. In particular, the noise floor indicated in the datasheet of the DAQ device is added to the induced voltage, so it directly affects position repeatability and accuracy [20]. In this way, the choice of a low-noise DAQ device can be evaluated for the purpose of improving performance. In this section, we performed some simulations: the rms induced voltage was simulated by applying (3), then adding a voltage noise component to simulate the noise floor of the DAQ device. Figure 3 shows the results, where the mean Euclidean position error over a linear trajectory (101 points at 600 mm from the FG) is plotted versus a range of selected $\sigma_{acq}$, considering a fixed current noise of $\sigma_I = 0.07$ mA. As expected, the error increases with $\sigma_{acq}$, with an approximately linear behaviour. A mean Euclidean error below 2 mm is obtained when using a DAQ device with $\sigma_{acq}$ lower than 40 nV. This information can be particularly useful when choosing components, considering the trade-off between increased cost and required accuracy.

Figure 3. Mean Euclidean position error vs. $\sigma_{acq}$, assuming $\sigma_I = 0.07$ mA.

4.2. FG configuration optimization
As said in Section 3.2, the platform allows testing different FG configurations, to evaluate the influence of the number, arrangement, and electrical properties of the transmitting coils on system performance. In this section we compare the performance of two FG configurations: one representing the EMTS prototype (Figure 1), and one flat FG composed of six coplanar transmitting coils (Figure 4). The coils of the two configurations are identical in their geometrical and electrical parameters, except for their position and orientation in space. Figure 5 shows the position error along each axis, obtained by keeping the sensor at a fixed orientation along the z-axis and moving it along the x-axis on a linear trajectory of 101 points, with a step of 1 mm, from point (x, y, z) = (-50, 0, 600) to (x, y, z) = (50, 0, 600), considering the reference system of the FGs. The rms induced voltage was simulated by applying (3), assuming an acquisition frequency of 20 Hz. Current and voltage noise of $\sigma_I = 0.07$ mA and $\sigma_{acq} = 20$ nV were added to each channel.

Figure 4. Flat FG configuration, obtained by modifying the number of transmitting coils and their position and orientation. The FG reference system is shown in red.
Figure 5. Comparison of the position error of the two FG configurations.
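Before turning to the results, the simulation loop behind both studies can be sketched as follows, reusing `coils`, `induced_voltages`, and `least_squares` from the sketch in Section 3.5. Mapping the current noise to an equivalent relative voltage perturbation is a simplification of the full $\sigma_B$ term of (5), and the trajectory and repetition parameters are illustrative only.

```python
# Sketch of the simulation loop of Sections 4.1-4.2: simulate voltages with
# (3), perturb them with current and acquisition noise, re-estimate the pose,
# and accumulate per-axis errors. Since v is proportional to I, current noise
# is applied here as a relative voltage perturbation (a simplification).
SIGMA_I, I_RMS = 0.07e-3, 1.0                      # current noise (A), assumed rms current (A)

def estimate_on_line(sigma_acq, n_pts=101, rng=None):
    rng = rng or np.random.default_rng(1)
    errs = []
    for x in np.linspace(-0.05, 0.05, n_pts):      # 101-point line at z = 600 mm
        th = np.array([x, 0.0, 0.60, 0.3, 1.2])    # tilted sensor (illustrative)
        v = induced_voltages(th)
        v_noisy = v * (1 + rng.normal(0.0, SIGMA_I / I_RMS, len(coils))) \
                  + rng.normal(0.0, sigma_acq, len(coils))
        fit = least_squares(lambda t: induced_voltages(t) - v_noisy,
                            x0=[0.0, 0.0, 0.5, 0.0, 1.0], method="lm")
        errs.append(1e3 * (fit.x[:3] - th[:3]))    # per-axis error (mm)
    return np.array(errs)

# DAQ selection (Section 4.1): mean Euclidean error vs candidate noise floors.
for s_acq in (10e-9, 20e-9, 40e-9):
    e = estimate_on_line(s_acq)
    print(f"sigma_acq = {s_acq * 1e9:.0f} nV -> "
          f"mean error = {np.linalg.norm(e, axis=1).mean():.2f} mm")
# For Section 4.2, the same loop would be repeated with a different `coils`
# list (e.g., six coplanar coils) and the per-axis errors compared.
```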
Higher accuracy can be noted for the 5-coil FG configuration, whereas the 6-coil flat FG exhibits a higher position error, in particular along the x- and y-axes. This suggests that further investigation should be performed to understand the cause of the error in that configuration, in order to avoid it during the realization of the FG.

4.3. Developed EMTS and experimental test
The results obtained from the simulations performed with the platform must be comparable with those obtained with an actual FG, in order to validate its effectiveness in assisting system designers. In collaboration with the company Masmec Biomed (Modugno, Bari, Italy), an EMTS prototype was developed to obtain an accurate estimation of the sensor pose beyond 0.5 m from the FG, thus improving on the state of the art of commercial systems [20], [26], [27], [29]. It consists of three main components (Figure 6):
i. a magnetic field generator to generate EM signals (the same shown in Figure 1);
ii. a small EM sensor coil from the Aurora system;
iii. a control unit for data acquisition and signal processing.
The FG is composed of five transmitting coils, whose arrangement minimizes mutual inductances. Each coil is powered with a sinusoidal current at a different frequency (approximately from 1 to 5 kHz), thus generating an AC magnetic field whose amplitude does not exceed 0.2 mT, which is the threshold value set in the IEEE Standard C95.1-2005. The whole magnetic flux generates an induced voltage on the EM sensor (to be inserted into the surgical instrument), which is acquired, digitized, and filtered by means of five band-pass filters, obtaining five rms voltage components related to the different excitation frequencies. These components are used to estimate the sensor position by means of a suitable reconstruction algorithm. Moreover, the current in each transmitting coil is measured using five Hall-effect sensors (LEM LA 55-P), for the purpose of i) ensuring the stability of the magnetic field by means of a current control loop, and ii) reducing the error due to variations in the magnetic field, as shown in [29]. The control software was developed in LabVIEW®, and the sensor was moved by means of an industrial robot (by Mitsubishi), which provided an accurate position reference.

An experimental test was performed to show the potential of the proposed platform when tracking a real sensor coil. The EM sensor coil was moved by the robot along the trajectory defined in Section 4.2, and the rms induced voltage was measured at a rate of 20 Hz, as suitable for real-time surgical applications (the sampling frequency of the DAQ device is set to 50 kHz, with 2500 samples, thus computing the rms value every 50 ms, i.e., at 20 Hz [20]). The same trajectory was performed on both simulated and experimental data. For the simulation, $\sigma_I = 0.07$ mA and $\sigma_{acq} = 20$ nV were considered for each channel, as quantified from the experimental data. Figure 7 shows the obtained results. The position errors obtained in the simulated and experimental cases are comparable, with mean Euclidean position errors of about 1 mm and 2 mm, respectively, which is suitable for many surgical procedures [12].
The difference is due to the approximation of the coils as magnetic dipoles, and to uncertainty in the parameters. This result validates the performance of the platform in simulating real tracking, providing a valuable tool during system design and prototyping.

5. Conclusions
Several sources of error affect EMTSs, and the high accuracy required by surgical applications is highly influenced by the design and arrangement of the transmitting coils of the FG. Many design errors can be quickly identified and avoided thanks to real-time feedback. In this paper we illustrated the main features of a virtual platform which permits analysing system performance by adding noise components and simulating error sources, so that the robustness and accuracy of the system, as well as its weaknesses, can be studied. Moreover, it can be particularly useful for system prototyping, by investigating the effects of system parameters (geometrical and electrical ones). The usefulness of the platform was demonstrated by performing simulations related to some practical cases. Finally, it was validated by performing tests on a real EMTS, obtaining a mean Euclidean position error of about 2 mm at a distance of 600 mm from the FG, comparable with the position error of 1 mm obtained in simulations, which is suitable for many surgical procedures. Further developments will concern an improved graphical user interface, the inclusion of other sources of error (magnetic field distortion, EM interference), and a dynamic system model, in order to evaluate the position error in fast-varying conditions; a Kalman filter will also be implemented to obtain smooth trajectories. Moreover, in this first version the algorithm is developed in MATLAB, but other programming languages – e.g., Python – will be considered in future versions.

Figure 6. Experimental setup for system characterization.
Figure 7. Position error from simulated (blue line) and experimental (red line) data, obtained by moving the MELFA robot along a linear trajectory at a distance of 600 mm from the FG. The sensor is aligned along the z-axis.

References
[1] L. Romeo, R. Marani, M. Malosio, A. G. Perri, T. D'Orazio, Performance analysis of body tracking with the Microsoft Azure Kinect, 2021 29th Mediterranean Conference on Control and Automation (MED), Puglia, Italy, 22-25 June 2021, pp. 572-577. DOI: 10.1109/MED51440.2021.9480177
[2] M. Parvis, S. Corbellini, L. Lombardo, L. Iannucci, S. Grassini, E. Angelini, Inertial measurement system for swimming rehabilitation, 2017 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Rochester, MN, USA, 7-10 May 2017, pp. 361-366. DOI: 10.1109/MeMeA.2017.7985903
[3] T. Peters, K. Cleary, Image-Guided Interventions: Technology and Application, Springer, Boston, MA, 2008, ISBN 978-1-4899-9733-3. DOI: 10.1007/978-0-387-73858-1
[4] E. Grimson, M. Leventon, L. Lorigo, T. Kapur, R. Kikinis, Image guided surgery, Scientific American 280(6) (1999), pp. 62-69. DOI: 10.1038/scientificamerican0699-62
[5] N. Giaquinto, M. Scarpetta, M. Spadavecchia, G. Andria, Deep learning-based computer vision for real-time intravenous drip infusion monitoring, IEEE Sensors Journal 21(13) (2021), pp. 14148-14154. DOI: 10.1109/JSEN.2020.3039009
[6] V. Portosi, A. M. Loconsole, M. Valori, V. Marrocco, I. Fassi, F. Bonelli, G. Pascazio, V. Lampignano, A. Fasano, F.
Prudenzano, Low-cost mini-invasive microwave needle applicator for cancer thermal ablation: feasibility investigation, IEEE Sensors Journal 21(13) (2021), pp. 14027-14034. DOI: 10.1109/JSEN.2021.3060499
[7] A. Sorriento, M. B. Porfido, S. Mazzoleni, G. Calvosa, M. Tenucci, G. Ciuti, P. Dario, Optical and electromagnetic tracking systems for biomedical applications: a critical review on potentialities and limitations, IEEE Reviews in Biomedical Engineering 13 (2020), pp. 212-232. DOI: 10.1109/RBME.2019.2939091
[8] X. Chen, N. Bao, J. Li, Y. Kang, A review of surgery navigation system based on ultrasound guidance, Proceedings of the IEEE International Conference on Information and Automation, Shenyang, China, June 2012. DOI: 10.1109/ICInfA.2012.6246906
[9] L. M. Galantucci, G. Percoco, F. Lavecchia, E. Di Gioia, Noninvasive computerized scanning method for the correlation between the facial soft and hard tissues for an integrated three-dimensional anthropometry and cephalometry, Journal of Craniofacial Surgery 24(3) (2013), pp. 797-804. DOI: 10.1097/SCS.0b013e31828dcc81
[10] J. Sun, M. Smith, L. Smith, L.-P. Nolte, Simulation of an optical-sensing technique for tracking surgical tools employed in computer-assisted interventions, IEEE Sensors Journal 5(5) (2005), pp. 1127-1131. DOI: 10.1109/JSEN.2005.844339
[11] F. Ezedine, J.-M. Linares, W. M. Wan Muhamad, J.-M. Sprauel, Identification of most influential factors in a virtual reality tracking system using hybrid method, Acta IMEKO 2(2) (2013), pp. 20-27. DOI: 10.21014/acta_imeko.v2i2.136
[12] A. M. Franz, T. Haidegger, W. Birkfellner, K. Cleary, T. M. Peters, L. Maier-Hein, Electromagnetic tracking in medicine – a review of technology, validation and applications, IEEE Transactions on Medical Imaging 33(8) (2014), pp. 1702-1725. DOI: 10.1109/TMI.2014.2321777
[13] F. Prada, M. Del Bene, L. Mattei, L. Lodigiani, S. DeBeni, V. Kolev, I. Vetrano, L. Solbiati, G. Sakas, F. DiMeco, Preoperative magnetic resonance and intraoperative ultrasound fusion imaging for real-time neuronavigation in brain tumor surgery, Ultraschall in der Medizin 36(2) (2015), pp. 174-186. DOI: 10.1055/s-0034-1385347
[14] G. Andria, F. Attivissimo, G. Cavone, A. M. L. Lanzolla, Acquisition times in magnetic resonance imaging: optimization in clinical use, IEEE Transactions on Instrumentation and Measurement 58(9) (2009), pp. 3140-3148. DOI: 10.1109/TIM.2009.2016888
[15] T. Koivukangas, J. P. Katisko, J. P. Koivukangas, Technical accuracy of optical and the electromagnetic tracking systems, SpringerPlus 2(90) (2013). DOI: 10.1186/2193-1801-2-90
[16] S. Goll, A. Borisov, Interactive model of magnetic field reconstruction stand for mobile robot navigation algorithms debugging which use magnetometer data, Acta IMEKO 8(4) (2019), pp. 47-53. DOI: 10.21014/acta_imeko.v8i4.688
[17] E. Petritoli, F. Leccese, L. Ciani, G. S. Spagnolo, Probe position error compensation in near-field to far-field pattern measurements, 2019 IEEE International Workshop on Metrology for AeroSpace (MetroAeroSpace), Turin, Italy, 19-21 June 2019, pp. 214-217. DOI: 10.1109/MetroAeroSpace.2019.8869674
[18] V. V. Kindratenko, A survey of electromagnetic position tracker calibration techniques, Virtual Reality 5 (2000), pp. 169-182. DOI: 10.1007/BF01409422
[19] Y. Qi, H. Sadjadi, C. T. Yeo, K. Hashtrudi-Zaad, G.
Fichtinger, Electromagnetic tracking performance analysis and optimization, 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26-30 August 2014, pp. 6534-6538. DOI: 10.1109/EMBC.2014.6945125
[20] G. Andria, F. Attivissimo, A. Di Nisio, A. M. L. Lanzolla, M. A. Ragolia, Assessment of position repeatability error in an electromagnetic tracking system for surgical navigation, Sensors 20 (2020), art. no. 961. DOI: 10.3390/s20040961
[21] F. Santoni, A. De Angelis, I. Skog, A. Moschitta, P. Carbone, Calibration and characterization of a magnetic positioning system using a robotic arm, IEEE Transactions on Instrumentation and Measurement 68(5) (2019), pp. 1494-1502. DOI: 10.1109/TIM.2018.2885590
[22] H. Shekhar, J. S. Jeba Kumar, V. Ashok, A. Vimala Juliet, Applied medical informatics using LabVIEW, International Journal on Computer Science and Engineering 2(2) (2010), pp. 198-203.
[23] F. Attivissimo, C. Guarnieri Calò Carducci, A. M. L. Lanzolla, M. Spadavecchia, An extensive unified thermo-electric module characterization method, Sensors 16(12) (2016), pp. 1-20. DOI: 10.3390/s16122114
[24] P. Catala-Lehnen, J. V. Nüchtern, D. Briem, T. Klink, J. M. Rueger, W. Lehmann, Comparison of 2D and 3D navigation techniques for percutaneous screw insertion into the scaphoid: results of an experimental cadaver study, Computer Aided Surgery 16(6) (2011), pp. 280-287. DOI: 10.3109/10929088.2011.621092
[25] M. Li, C. Hansen, G. Rose, A simulator for advanced analysis of a 5-DOF EM tracking system in use for image-guided surgery, International Journal of Computer Assisted Radiology and Surgery 12 (2017), pp. 2217-2229. DOI: 10.1007/s11548-017-1662-x
[26] F. Attivissimo, A. Di Nisio, A. M. L. Lanzolla, M. A. Ragolia, Analysis of position estimation techniques in a surgical EM tracking system, IEEE Sensors Journal 21(13) (2021), pp. 14389-14396. DOI: 10.1109/JSEN.2020.3042647
[27] G. Andria, F. Attivissimo, A. Di Nisio, A. M. L. Lanzolla, P. Larizza, S. Selicato, Development and performance evaluation of an electromagnetic tracking system for surgery navigation, Measurement 148 (2019), art. no. 106916. DOI: 10.1016/j.measurement.2019.106916
[28] G. De Angelis, A. De Angelis, A. Moschitta, P. Carbone, Comparison of measurement models for 3D magnetic localization and tracking, Sensors 17(11) (2017), art. no. 2527.
DOI: 10.3390/s17112527
[29] M. A. Ragolia, F. Attivissimo, A. Di Nisio, A. M. L. Lanzolla, M. Scarpetta, Reducing effect of magnetic field noise on sensor position estimation in surgical EM tracking, 2021 IEEE International Symposium on Medical Measurements and Applications (MeMeA), 23-25 June 2021, pp. 1-6. DOI: 10.1109/MeMeA52024.2021.9478723

Ancient metrology in architecture: a new approach in the study of the Roman bridge of Canosa di Puglia (Italy)

ACTA IMEKO
ISSN: 2221-870X
March 2022, Volume 11, Number 1, 1-7

Germano Germano'1

1 Scuola Superiore Meridionale, University of Naples Federico II, 80138, Italy

Section: Research Paper
Keywords: ancient metrology; heritage; archaeology; Roman architecture; bridge
Citation: Germano Germano', Ancient metrology in architecture: a new approach in the study of the Roman bridge of Canosa di Puglia (Italy), Acta IMEKO, vol. 11, no. 1, article 16, March 2022, identifier: IMEKO-ACTA-11 (2022)-01-16
Section Editor: Fabio Santaniello, University of Trento, Italy
Received March 7, 2021; in final form March 26, 2022; published March 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Germano Germano', e-mail: germano.germano@live.com

Abstract
The bridge of Canosa di Puglia (Italy) was originally built in the 2nd century CE to cross the Ofanto river along the Via Traiana, the route built at the behest of Emperor Trajan that connected Rome with the port of Brindisi, on the Adriatic Sea. Restorations, collapses, and architectural transformations have deeply altered its original structure over the centuries, erasing the traces of a monumental central arch. Archival and field research, conducted through various surveys, has produced new data that has allowed an update of the bridge's history. The aim of this dissertation is to show the results of research conducted with a new methodological approach to the monument, applying ancient metrology to the interpretation of its architectural evolution. This method has proven indispensable for formulating hypotheses about the original configuration of the bridge, whose central arch would turn out to be one of the widest among the bridges of Roman architecture.

1. Introduction
A metrological approach is essential when studying ancient monuments, as much as Latin is in the study of Roman civilization. The language of architecture makes use of numbers and measurements that are specific to each culture or geographical area, and their interpretation is key to developing hypotheses about the original look of an artifact and unlocking its meaning. This interpretation is often made difficult by frequent gaps in the structural stratigraphy of the monument itself.
This difficulty also lies in the use of a different construction vocabulary in the ancient world, made up of measurement systems that cannot be superimposed on those used nowadays, globally codified in the International System of Units. An interpretative criterion can therefore consist in detecting the recurrence of round numbers, or of multiples and submultiples of the numeral system of a certain culture: starting from today's metric data, comparing it with known ancient units of measurement, and cross-referencing the data to verify the existence of correspondences. This approach [1] has proven useful in the study of a masonry bridge dating back to the time of the Roman Emperor Trajan (2nd century CE), located on the Ofanto river near Canosa di Puglia, in southern Italy.

Before reaching its present conformation, the bridge is described in ancient documents as having only three arches, of which the central one, wider and higher than the others, gave the whole structure its imposing character. This characteristic is common to many Roman bridges, such as the nearby Ascoli Satriano bridge, and is due to the intent to obstruct the flow of the river as little as possible. The first (as well as the only) systematic study of the bridge dates back to 1985, when an archaeological excavation was carried out under the scientific direction of Professor Raffaella Cassano of the University of Bari [2], [3]. On that occasion, the archaeological investigation focused in particular on the study of the platea and its construction techniques. After more than thirty years, this research embraces the heritage of those studies, adding new data on the architecture of the bridge and, on the basis of these data, formulating new hypotheses with the aim of shedding new light on the millennial history of the monument [4].

2. Historical background
The history of the bridge follows the events of the road network established in this large flat area of Apulia, called Tavoliere, which thanks to its geographical characteristics was considered ideal not only for transhumance practices but also for the passage of routes towards the south of the region and therefore the East.

2.1.
The bridge in the Roman age
The original structure was erected on the occasion of the construction of the Via Traiana (Trajan Way), ordered by Emperor Trajan in 109 CE starting from the city of Benevento, to create an alternative to the Via Appia (Figure 1) that would allow a faster connection between Rome and Brindisi, the main port towards the East. It is not clear whether there was already a bridge here prior to the imperial age, for the structure was constructed on a route already in use before, the Via Minucia [5]. In any case, the presence of a well-preserved foundation platea paved with stone reveals an ex novo construction, which is consistent with the construction techniques of the imperial age. The first restoration works are documented in Roman times through inscriptions [6] that attest to repairs under Septimius Severus and Caracalla, in the Tetrarchic period (between the end of the 3rd century CE and the beginning of the 4th century CE), and in the Constantinian age. However, the epigraphs only report these operations in a generic and celebratory way, without providing any data useful to determine measurements.

2.2. The bridge in the Middle Ages
In the Middle Ages the bridge was still in use, since it was located along the Via Francigena, a road by which Christian pilgrims from all over Europe reached the ports of Puglia on their way to the Holy Land. In the Middle Ages even flocks, herds, and shepherds used it when they seasonally migrated from the heights of Abruzzo and Molise to the plains of the Tavoliere, with its milder climate, through the so-called tratturi (drovers' roads), one of which passed right through here. During this long period new works were certainly necessary, but these are not documented until 1521, as evidenced by the fragment of an inscription reused in the mausoleum of Boemondo d'Altavilla in Canosa. In 1541 a Polish traveller described it as altissimum pontem muratum ("a very high masonry bridge") [7], while the first known information on its dimensions was provided by an Italian traveller who, at the end of the 16th century, in addition to describing the bridge as ponte bellissimo fatto ("a very well-made bridge"), reported the size of the central arch: 128 palms long and 40 palms high [8].

2.3. 18th century: first survey, the collapse and the reconstruction
More than a century later, earthquakes, floods, and wear and tear would weaken the structure of the bridge, making further restoration work necessary. The institution in charge of its maintenance, the bridge belonging to the area of its domain, was the Regia Dogana delle Pecore (Sheep Customs Office), established in nearby Foggia in 1480. The documents relating to the interventions of the eighteenth century are still preserved in its archives. There it is stated that in 1749 a technical expert's report was commissioned from Francesco Delfino, in which he warned of dangers to the bridge's stability. Attached to it, he schematically drew an architectural representation of the bridge in which he indicated the possible breaking points (Figure 2). In the same document [9] he also reported the exact measurements of the arches: "[…] the main arch (is) 112 palms wide, high from the floor to the top (of) 44 palms, with a front of 5 palms, (while) the two lateral arches are wide 50 palms each, and high 25 palms".
Due to the stalling of a targeted action that would inevitably have involved large costs, which none of the parties involved wanted to bear (the Dogana, the Crown, local administrators, and landowners), the central arch collapsed on 11 February 1751. Several considerations were made by the technical experts who intervened immediately after the event, among which, in the end, the proposal of a safer reconstruction prevailed — one which, however, definitively changed the millenary aspect of the bridge. In fact, it was decided not to rebuild the central arch, but to put two smaller ones in its place, resting on a newly built central pillar, thus lowering the profile of the bridge to a height similar to that visible today.

2.4. The bridge today
The current structure, however, does not date back to these interventions: it is the result of a reconstruction carried out in the mid-twentieth century, particularly concerning the central arches, after the retreating German troops bombed the bridge in World War II [10]; only the piers and the abutments were saved. The bridge today (Figure 3) features five arches of different sizes and morphology (starting from the east: 6.50 m, 13 m, 12.10 m, 12.10 m, 13 m), resting on four piers of different sizes, ranging from a minimum of 6.2 m to a maximum of 8.4 m.

Figure 1. The route of the Via Appia (the Appian Way) and the Via Traiana (the Trajan Way), with the major cities along the road and, in red, the city of Canusium (Canosa di Puglia) near the Ofanto river.
Figure 2. Drawing of the bridge by Francesco Delfino, 1749, detail. (Archivio di Stato di Foggia)

These are composed of square blocks in isodomic work and equipped with triangular starlings and prismatic cones, upstream and downstream. The walkway above extends for a length of 170 m and a width of 4.5 m; its course is neither straight nor in correspondence with the abutments. The piers, the abutments, and the foundation platea are the only surviving elements of the original structure [3]. When the river is dry, it is also possible to see the concrete walkway built for the passage of American tanks in the last phase of World War II [10] (Figure 4). Today, the bridge is accessible only on foot, so as not to overburden the structure with the passage of vehicles, for which a new bridge was built in the 20th century a few hundred metres further upriver.

3. Method
Thanks to the diffusion of new digital technology and the constant decrease of its cost, it has been possible to develop methodologies able to detect, interpret, and preserve very important data regarding works of archaeological and architectural heritage [11], [12]. All these operations have been a fundamental support to the global process of knowledge and analysis, revealing the need for multidisciplinary expertise, both theoretical and technical, in the approach to archaeological and architectural heritage. Once the research aim had been set and the parts of the artefact considered original had been ascertained, the following steps were taken: archival research, through published sources and unstudied documents; comparison with historical photographs; on-site survey with manual and drone technology; photogrammetry operations; identification and recording of the different SSU (structural stratigraphic units) [13], [14]; 3D modelling; graphic restitution; and reconstruction hypotheses.
Leaving aside archival research, which has been dealt with in detail elsewhere [15], all the on-site operations highlighted the complexity of data collection in a context such as the fluvial one. In particular, the photographic campaign and the acquisition of metric data with the integrated survey proved difficult due to the high flow of the river and the dense vegetation. For this reason, survey operations must take into account the natural context in which they are carried out and must also be appropriately planned according to the season.

3.1. Survey with drone technology and photogrammetry
Thanks to drone technology, it was possible to overcome some of the obstacles described above and record a set of data useful for the creation of the virtual model. The drone used is a DJI Mavic 2 Pro, equipped with a high-resolution camera (78.8° shooting angle, 26.6 mm focal length, 1/2.3'' 12.7-megapixel CMOS sensor) and a GPS/GLONASS system with an accuracy error of ± 0.1 m vertical and ± 0.3 m horizontal.

Figure 3. The bridge of Canosa di Puglia. View from the west.
Figure 4. The bridge during the dry season, when the paving blocks of the Roman platea and the concrete boardwalk built in the first half of the 20th century are visible.

Proceeding with aerial photo acquisition in manual flight, three photo acquisitions were made: two in rotation around the artifact, at altitudes of 5 m and 15 m, and one top view at an altitude of 25 m. The rotational acquisition at 5 m was made with a camera inclination of 0°, i.e. with a view perpendicular to the building, in order to reduce aberrations during data processing; the one at 15 m with a camera inclination of 45°, necessary to capture the intersection between the vertical and horizontal surfaces; and finally the one at 25 m with nadiral camera inclination, for the acquisition of data from the upper part of the bridge. This produced a total of 154 photographs with a GSD (ground sample distance) of 4.55 mm/pix. Even if ground control points (GCPs) are usually necessary to guarantee that the shape of the model is geometrically correct in all three dimensions, during the acquisitions it was not necessary to use GCPs for a subsequent scaling of the model, as the measurements were taken using the architectural elements present in the structure, such as the widths of the spans and the width of the extrados, through direct survey operations and the use of a laser level and a laser rangefinder. The photogrammetry operations [16] were carried out with the help of the Agisoft PhotoScan software, through which a dense point cloud composed of 12,490,415 points was generated, from which a textured mesh composed of 396,783 polygons and 396,783 vertices was produced (Figure 5). The model was scaled according to the points detected in the campaign phases. The combined use of the survey methodologies constituted a system of verification of the overall process in the acquisition of the monument's dimensional information and its interpretation. In addition to the textured model (Figure 6) generated by the photogrammetry software, a simplified model was elaborated for the virtual reconstruction operations [17]. The former then served as a comparison and verification of the latter, and vice versa.
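As a rough cross-check of the reported GSD, the standard pinhole relation can be applied. In the sketch below, the 26.6 mm focal length given in the text is treated as a 35 mm-equivalent value (hence the assumed 36 mm sensor width), and the 4000 px image width is also an assumption; the result is therefore only indicative of the order of magnitude, given that the reported 4.55 mm/pix averages several altitudes and camera inclinations.

```python
# Rough ground-sample-distance check using the standard pinhole relation
#   GSD = sensor_width * altitude / (focal_length * image_width).
# The 26.6 mm focal length is read as a 35 mm-equivalent value (sensor width
# 36 mm assumed); the 4000 px image width is also an assumption.
def gsd_mm_per_pix(sensor_width_mm, focal_mm, altitude_m, image_width_px):
    return sensor_width_mm * altitude_m * 1000.0 / (focal_mm * image_width_px)

for h in (5.0, 15.0, 25.0):                      # flight altitudes from the text
    print(f"altitude {h:4.1f} m -> GSD = "
          f"{gsd_mm_per_pix(36.0, 26.6, h, 4000):.2f} mm/pix")
```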
3.2. Ancient metrology and data analysis
A metrological approach has been adopted in the interpretation of the data resulting from the survey, comparing them with dimensional measures mentioned in ancient sources and cross-referencing them by synchronic or diachronic conversion of the numerical data. The first case applies to historically and culturally related contexts, while the second implements a multi-layered reading of different ages. Discarding the parts ascertained as added or reconstructed, only the original elements still in situ were taken into consideration. Finally, a reconstructive hypothesis of the original morphology of the monument has been formulated, based on Roman construction and measurement methods. A certain degree of approximation is to be expected both in the reporting of the measures and, consequently, in converting them into reconstructive hypotheses. However, the margin of error is limited enough to consider the data valid for the purpose of the research.

Figure 5. a) Survey report: camera locations and image overlap; b) camera locations and error estimates (the z error is represented by the ellipse colour, the x, y errors by the ellipse shape; estimated camera locations are marked with a black dot); c) survey report: reconstructed digital elevation model.
Figure 6. Axonometric view of the bridge elaborated using drone technology and aerial photogrammetry.

While the fundamental unit of measurement used during the Roman age was the foot (pes), which closely followed the measure of its Greek counterpart, the Attic foot, corresponding to 0.296 m [18], the authors of the two reports describe the bridge using the same unit of measurement, i.e. the palmo (Table 1).

Table 1. Ancient units and their mutual correspondence.
Unit     Roman feet   Neapolitan palms   Meters
Pes      1            1.1225             0.2960
Palmo    0.8909       1                  0.2637
Canna    7.1270       8                  2.1096
Meter    3.3784       3.7922             1

As the two cases, 1584 and 1749, fall within the time span of the dominion of the Kingdom of Naples, which included the whole of southern Italy, we have to assume that they were referring to the Neapolitan palm, corresponding to 0.26367 m [19], calculated on the basis of the measurement of an oxidized bar kept in Castel Capuano in Naples and used as the governmental standard [20], according to an edict (lost) issued on 6 April 1480 by Ferdinand I of Aragon and in force until 1840. By applying a metrological approach, it is possible to derive different hypotheses about the construction phases of the bridge.

3.3. Synchronic conversion: what happened between the two key dates (1584-1749)?
A first conversion, of synchronic type, is made within the same measurement system, to hypothesize the changes between the two key dates of the sources. It is interesting to note that both measurements are multiples of 8. In the Kingdom of Naples, the unit of measurement above the palmo is the canna, which measures exactly 8 palms. Read in these multiples, the 1749 survey describes an arch span reduced from 16 to 14 canne: exactly one canna per side (2.1 m). A margin of error of 4.2 m appears too large for a width of about 30 m. One possibility is that consolidation works were necessary following one or both of the devastating earthquakes, the first in 1627 and the second in 1731 [21], which struck the Capitanata area and in particular Canosa. In this sense, the work may have involved the overlaying or covering of the inner part of the pillars with a masonry reinforcement designed on the basis of a standard building measure, i.e. one canna.
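The synchronic reading just described can be reproduced in a few lines; the constants are those given in the text, and the snippet is a minimal sketch for illustration.

```python
# Sketch of the synchronic conversion: both reported spans are exact
# multiples of the canna (8 Neapolitan palms), so the 1584 -> 1749 change
# reads as a reduction from 16 to 14 canne, i.e. one canna per side.
NEAPOLITAN_PALM_M = 0.26367      # governmental standard, in force 1480-1840
CANNA_PALMS = 8

for year, span_palms in ((1584, 128), (1749, 112)):
    canne, rem = divmod(span_palms, CANNA_PALMS)
    print(f"{year}: {span_palms} palms = {canne} canne (remainder {rem})"
          f" = {span_palms * NEAPOLITAN_PALM_M:.2f} m")

added = (128 - 112) / 2 * NEAPOLITAN_PALM_M
print(f"hypothesised masonry reinforcement per side: {added:.2f} m (one canna)")
```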
within delfino's design there are two elements protruding from the piers (figure 7), an unusual detail that supports this hypothesis: an expert surveyor would hardly have invented or exaggerated such elements, despite the evident schematization of the work.

3.4. diachronic conversion: reconstruction hypothesis
on the other side, through a diachronic conversion of the units of measurement (table 2) it is possible to read the dimensions of the monument with the same measuring standards of the roman builders. as previously said, the central arch was imposing and larger than the other two, as shown by delfino's drawing and, even if in a more simplified style, by other graphic sources preserved in the archives. therefore, if we accept the dimensions reported in the document (112 palms) and hypothesize that they are unchanged from the roman age, having assumed the lateral piers as dating back to the imperial age, we obtain a measure (29.53 m) that is, with the due margin of error, exactly corresponding to 100 roman feet. this measurement is an architectural constant of monumental roman architecture [22], present in one of the most famous monuments of the emperor trajan in rome, the column called "centenaria" for its height, and then in the aurelian column, but also verifiable in the diameter of many monuments such as mausoleums (those of the emperor augustus, cecilia metella, l. m. plancus and sempronius atratinus, and the famous pyramid tomb of caius cestius, whose side measures exactly 100 roman feet), public buildings (the hall of the palatine basilica of constantine in trier), theater orchestras (aquileia), bridges (narni, alcantara) and many others, just to mention a few [23]-[26]. the height, 44 palms, would also correspond to about 40 roman feet, again a decimal multiple, but this datum is subject to more variables, since in case of restoration or collapse the section of the arches is the part most exposed to sensitive alterations. moreover, it must be considered that the sources do not indicate whether the measurements were taken at the height of the water or of the foundation platea. given these measurements, the arc of the circle was traced tangent to the ideal segment at the level of the piers, about 3 m high, obtaining a figure that outlines a profile with double inclined slopes, based on a comparison with similar bridges, such as that of ascoli satriano on the carapelle river (2nd century ce) and the pont julien at bonnieux, in france (augustan age). the inclination of this profile corresponds to that found in the joints of the abutments (figure 8), especially on the western side, which would confirm the presence of the original outline of the bridge (figure 9). further information comes from some inscribed slabs found in cerignola [27] that could belong to the bridge. this type of inscribed slab bore the inscription relating to the work, exalting its completion, and was part of the construction program of the via traiana. also in this case, even if we do not have any information on what the original parapet looked like, the metric datum relative to the "front" (5 neapolitan palms, that is 132 cm) corresponds exactly to the height of the slab of cerignola, so it is possible that it was set along the balustrade that delimited the passage on the bridge.

table 1. ancient units and their mutual correspondence.
unit    roman feet   neapolitan palms   meters
pes     1            1.1225             0.2960
palmo   0.8909       1                  0.2637
canna   7.1270       8                  2.1096
meter   3.3784       3.7922             1

figure 7. drawing of the bridge by francesco delfino, 1749, detail of the central arch (archivio di stato di foggia). possible traces of structural changes between 1584 (a) and 1749 (b) would explain the different dimensional data of the two reports mentioned in the text.
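as a worked check of this diachronic conversion, using the values of table 1:

$112\ \text{palmi} \times 0.26367\ \text{m/palmo} = 29.53\ \text{m}, \qquad \frac{29.53\ \text{m}}{0.296\ \text{m/pes}} = 99.78 \simeq 100\ \text{roman feet},$

and analogously for the height, $44 \times 0.26367\ \text{m} = 11.60\ \text{m}$, i.e. $39.20 \simeq 40$ roman feet.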
table 2. diachronic measures conversion chart.
element of the bridge     palms   meters   roman feet
main arch (span)          112     29.53    99.78 ≃ 100
main arch (height)        44      11.60    39.20 ≃ 40
lateral arches (span)     50      13.19    44.54 ≃ 45
lateral arches (height)   25      6.59     22.27
front                     5       1.31     4.45

thanks to this large amount of details, the reconstructive hypothesis of the bridge, in the original phase of the structure and in the subsequent ones, has been elaborated according to a representation technique that, combining technical drawing with watercolor effects, is both consistent with the data and useful for dissemination (figure 10). this phase of representation, based on the processing and elaboration of the raw data obtained from the survey, requires a logical process [28] that can lead us to reconstruct an accurate and complete 3d model to be used as a support for the research. in order for the representation to be complete and intelligible, it is indeed necessary to understand the geometry and shape of the elements to be represented, as well as their reciprocal relationships [29].

4. conclusions
in an impervious context such as that of a river, challenging due to the frequent lack of sources and archaeological data, the research has shown how useful metrology has proved to be in formulating a reconstructive hypothesis with a high degree of reliability for a monument that has been strongly compromised over the centuries. beyond the theories on the exact morphology of the bridge, the research brings to light an architectural reality whose scope has been unfairly underestimated, and which places the bridge back in the history of ancient architecture, counting it among those with the longest central spans of roman times, according to a comparison with the monumental study carried out by professor vittorio galliazzo [30], which is the most important work on roman bridges, together with those of colin o'connor [31] and piero gazzola [32]. the results of this study would confirm the tendency in roman monumental architecture to use a 100-foot module, although the variables involved encourage a careful approach, in order to resist the temptation of a "metrological pareidolia".

figure 8. upper: the bridge with the geometrical construction of the hypothesized shape. lower: northwest front of the bridge, abutment. the red line shows the probable original incline still identifiable in the grade of the rows of blocks.
figure 9. graphical reconstruction of the bridge of canosa in roman times.
figure 10. illustration of the two main phases: the roman bridge (top); the bridge nowadays showing the foundation platea (bottom).

references
[1] g. germano', the roman bridge of canosa di puglia: a metrological approach, proc. of the 2020 imeko tc-4 international conference on metrology for archaeology and cultural heritage, trento, italy, 22-24 october 2020, pp. 605-610. online [accessed 20 march 2022] https://www.imeko.org/publications/tc4-archaeo2020/imeko-tc4-metroarchaeo2020-115.pdf
[2] r. cassano, canosa.
campagna di scavo 1985: il ponte romano sull'ofanto, in: le rassegne archeologiche in puglia. atti del xxv convegno di studi sulla magna grecia, taranto, 3-7 ottobre 1985, napoli, 1986, pp. 408-142. [in italian]
[3] r. cassano, il ponte sull'ofanto, in: principi, imperatori, vescovi, duemila anni di storia a canosa. r. cassano (editor). marsilio, venezia, 1992, isbn: 8-8317-5377-0, pp. 708-711. [in italian]
[4] g. germano', il ponte romano sull'ofanto: analisi delle tecniche costruttive, ipotesi di restituzione e valorizzazione, tesi di specializzazione in beni architettonici e del paesaggio, bari, italy, 2019 (unpublished results). [in italian]
[5] g. ceraudo, via gellia: una strada 'fantasma' in puglia centrale, studi di antichità 12 (2008), pp. 187-203. [in italian]
[6] m. chelotti, v. morizio, m. silvestrini, le epigrafi romane di canosa ii, edipuglia, bari, 1990, isbn: 978-88-7228-065-2. [in italian]
[7] b. bilinski, la puglia e bari nel "diario di viaggio" di jan ocieski, ambasciatore polacco nel 1541, in: la regina bona sforza tra puglia e polonia, atti del convegno (bari, 27/4/1980). ossolinum, leopoli, 1987, pp. 16-40. [in italian]
[8] p. ieva, la sepultura di re boamundo in una inedita brieve descrittione tardo-cinquecentesca, in: "unde boat mundus quanti fuerit boamundus": boemondo i di altavilla, un normanno tra occidente e oriente: atti del convegno internazionale di studio per il ix centenario della morte, canosa di puglia, 5-6-7 maggio 2011. c. d. fonseca, p. ieva (editors). società di storia patria per la puglia, bari, 2015, pp. 301-335. [in italian]
[9] m. r. tritto, i restauri settecenteschi del ponte romano di canosa di puglia, in: canosa. ricerche storiche 2004. l. bertoldi lenoci (editor). schena editore, fasano, 2005, pp. 71-100. [in italian]
[10] g. germano', bridges at war and conservation of cultural heritage: a case study from italy, in: antiquities, sites and museums under threat: cultural heritage and communities in a state of war (1939-45), proceedings of the international conference at the university of ghent (belgium, 15-16/10/2020). a. crisà, j. bourgeois (editors). brill, leiden, 2022.
[11] j. mccarthy, multi-image photogrammetry as a practical tool for cultural heritage survey and community engagement, journal of archaeological science 43 (2014), pp. 175-185. doi: 10.1016/j.jas.2014.01.010
[12] e. stylianidis, photogrammetric survey for the recording and documentation of historic buildings, springer, cham, 2020.
[13] e. fentress, c. j. goodson, patricians, monks, and nuns: the abbey of s. sebastiano, alatri, during the middle ages, archeologia medievale, xxx, 2003, pp. 67-105.
[14] r. parenti, le tecniche di documentazione per una lettura stratigrafica dell'elevato, in: archeologia e restauro dei monumenti. r. francovich, r. parenti (editors). all'insegna del giglio, firenze, 1988, pp. 249-279. [in italian]
[15] g. germano', the role of archives in the graphic restitution of monuments: the case of the roman bridge over the ofanto river in canosa di puglia, bitácora arquitectura 45 (2020), pp. 4-11. doi: 10.22201/fa.14058901p.2020.45.77630
[16] f. remondino, heritage recording and 3d modeling with photogrammetry and 3d scanning, remote sens. 3 (2011), pp. 1104-1138. doi: 10.3390/rs3061104
[17] g. germano', 3d reconstruction of a roman bridge in canosa di puglia (italy), studies in digital heritage 5(1) (2021), pp. 75-87. doi: 10.14434/sdh.v5i1.30812
[18] a. mazzi, nota metrologica, in: archivio storico lombardo. società storica lombarda, milano, 1901, p. 354. [in italian]
[19] c. salvati, misure e pesi nella documentazione storica dell'italia del mezzogiorno, l'arte tipografica, napoli, 1970, pp. 34-35. [in italian]
[20] e. lugli, the making of measure and the promise of sameness, university of chicago press, chicago, 2019, p. 20.
[21] a. rovida, r. camassi, p. gasperini, m. stucchi, italian parametric earthquake catalogue (cpti11). istituto nazionale di geofisica e vulcanologia (ingv), milano, bologna, 2011. doi: 10.6092/ingv.it-cpti11
[22] m. s. de rossi, analisi geologica ed architettonica delle catacombe romane, roma, 1864, p. 75. [in italian]
[23] a. carandini, la roma di augusto in 100 monumenti, utet, roma, 2019, isbn: 9788851169909. [in italian]
[24] c. inglese, l. paris (editors), arte e tecnica dei ponti romani in pietra, sapienza università editrice, roma, 2020, isbn: 978-88-9377-150-4. [in italian]
[25] j.-p. brun, p. munzi, s. girardot, a. r. congès, m. pierobon, un mausoleo a tumulo di età tardo-repubblicana nella necropoli settentrionale di cuma, in: dall'immagine alla storia. studi per ricordare stefania adamo muscettola. c. gasparri, r. pierobon benoit (editors). naus editoria, pozzuoli, 2010, isbn: 978-88-7478-017-4, pp. 279-302. online [accessed 20 march 2022] [in italian] https://hal.archives-ouvertes.fr/hal-01435853/document
[26] a. r. ghiotto, g. fioratto, g. furlan, il teatro romano di aquileia: lo scavo dell'aditus maximus settentrionale e dell'edificio scenico, fold&r, 495, 2021. online [accessed 20 march 2022] [in italian] http://www.fastionline.org/docs/folder-it-2021-495.pdf
[27] g. ceraudo, a proposito delle lastre iscritte dei ponti della via traiana, in: atlante tematico di topografia antica 22. l. quilici, s. quilici gigli (editors). l'erma di bretschneider, roma, 2013, isbn: 978-88-8265-772-7, pp. 143-153. [in italian]
[28] v. croce, g. caroti, a. piemonte, m. g. bevilacqua, from survey to semantic representation for cultural heritage: the 3d modeling of recurring architectural elements, acta imeko 10(1) (2021), pp. 98-108. doi: 10.21014/acta_imeko.v10i1.842
[29] l. de luca, methods, formalisms and tools for the semantic-based surveying and representation of architectural heritage, applied geomatics 6 (2014), pp. 115-139. doi: 10.1007/s12518-011-0076-7
[30] v. galliazzo, i ponti romani. canova, treviso, 1995, isbn-10: 88-85066-66-6. [in italian]
[31] c. o'connor, roman bridges, cambridge university press, 1993.
[32] p. gazzola, ponti romani, leo olschki, firenze, 1963. [in italian]

feasibility and performance analysis in 3d printing of artworks using laser scanning microprofilometry

acta imeko
issn: 2221-870x
march 2022, volume 11, number 1, 1-7

sara mazzocato1, giacomo marchioro2, claudia daffara1
1 department of computer science, university of verona, str.
le grazie 15, i-37134, verona, italy
2 department of cultures and civilisations, university of verona, v.le dell'università 4, i-37129, verona, italy

section: research paper
keywords: optical profilometry; 3d printing; conoscopic holography; surface analysis; non-destructive testing
citation: sara mazzocato, giacomo marchioro, claudia daffara, feasibility and performance analysis in 3d printing of artworks using laser scanning microprofilometry, acta imeko, vol. 11, no. 1, article 19, march 2022, identifier: imeko-acta-11 (2022)-01-19
section editor: fabio santaniello, university of trento, italy
received march 7, 2021; in final form march 22, 2022; published march 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was partially supported by the scan4reco project, european union horizon 2020 framework programme for research and innovation, grant agreement no 665091, and by the temart project, por fesr 2014-2020.
corresponding author: sara mazzocato, e-mail: sara.mazzocato@univr.it

abstract
we investigated optical scanning microprofilometry and conoscopic holography sensors as nondestructive testing and evaluation tools in archeology for obtaining an accurate 3d printed reproduction of the data. the modular microprofilometer prototype allows a versatile acquisition of different materials and shapes, producing a high-quality dataset that enables surface modelling at micrometric scales, from which a "scientific" replica can be obtained through 3d printing technologies. as an exemplar case study, an archeological amphora was acquired and 3d printed. in order to test the feasibility and the performance of the whole process chain, from the acquisition to the reproduction, we propose a statistical multiscale analysis of the surface signal of object and replica based on metrological parameters. this approach allows us to demonstrate that the accuracy of the 3d printing process preserves the range of spatial wavelengths that characterizes the surface features of interest within the technology capabilities. this work extends the usefulness of replicas from museum exposition to scientific applications.

1. introduction
3d sensors and 3d printing technologies are gaining more and more attention in many fields, from industry to medicine to cultural heritage [1]-[5], thanks also to the fact that 3d printers have become easily accessible and have gradually gained better levels of accuracy. in the field of cultural heritage, these technologies arouse interest because of the possibility of reproducing artworks or parts of them for museum exhibition, restoration and conservation purposes, haptic fruition, and other uses [6]-[8]. clearly, the first step to obtain realistic and accurate 3d printed objects is the data acquisition process. in this context, non-contact 3d optical systems play an important role in such applications, from quality control to robotics, where a remote measurement is preferable and/or the object is fragile. their capability to measure surfaces in a contact-less and noninvasive way has made them key instruments especially in the field of cultural heritage, where the surface of the object, or the 3d shape at the various scales, is the central and essential part of the artwork itself [9]-[14] and integrated diagnostics is performed [15]. each surface has an intrinsic multiscale nature that can be represented as a superimposition of a large number of spatial wavelengths. the question that naturally arises is whether (and to what extent) the 3d printing process preserves this intrinsic multiscale nature of the surface. despite the rapid growth of the use of 3d printing technologies, the accuracy of 3d printed models has not been thoroughly investigated, and not all technologies have been studied [16]-[19]. in this paper, we first present the prototype of the optical scanning microprofilometer, a scanning system that allows high-accuracy acquisition of the object [20]. the system is based on the interferometric method of conoscopic holography, which enables the acquisition of surface data with micrometric resolution.
thanks to the adaptability of the conoscopic holography sensors and the scanning setup, the system is able to measure irregular shapes, composite materials, and polychrome surfaces, thus leading to a multiscale and multimaterial approach in surface analysis [21]. here, scanning profilometry is used to acquire accurate data of an archaeological object, which are then processed in order to obtain a 3d printed replica of the object itself. thus, we have come full circle by measuring the 3d printed object and by comparing the surfaces on a metrological basis. after a brief presentation of the optical microprofilometer developed in our laboratory (section 2), we demonstrate the application in a real case study, processing the dataset acquired on an ancient amphora and obtaining a mesh file suitable for the 3d printer (section 3). in section 4 we propose a statistical and multiscale analysis of the original object and its 3d printed replica in order to assess the accuracy of the whole process, from the acquisition to the 3d printing. finally, in the concluding section, the main results are discussed.

2. optical microprofilometry
the optical microprofilometer is based on the conoscopic holography technique, which exploits interference patterns to obtain a pointwise measurement of distances with micrometric accuracy [22], [23]. in detail, a laser beam backscattered from the surface is split into ordinary and extraordinary rays after passing through an optically anisotropic crystal. the two beams share the same geometric path but have orthogonal polarization modes. the refractive index of the crystal depends on the angle of incidence of the beam and on its polarization state, determining a difference in the optical path lengths. once the two beams exit the crystal, they interfere with each other, and the resulting pattern is recorded. the analysis of the generated pattern allows information to be obtained about the distance of the sampled surface from the light source [5], [6]. conoscopic holography sensors enable contactless measurements at sub-millimeter spatial resolution with a precision down to a few micrometers on different kinds of materials, from specular reflective surfaces to diffusive ones.
the developed system has a multi-probe module including a sensor for diffusive materials, a sensor for reflective materials, and a sensor for specular or transparent surfaces (all sensors by optimet). interchangeable lenses allow acquisitions in different working ranges, i.e., the maximum scanned height, tailored to the scale of the object. this aspect is very important in profilometry applied to the variegated 3d archaeological manufacts. the different combinations of sensors and lenses allow the analysis of reflective materials with a maximum accuracy of 1 µm at a working range of 1 mm, up to a working range of 9 mm with an accuracy of 4.5 µm; for diffusive materials it is possible to achieve an accuracy of 2 µm with a working range limited to 0.6 mm, or to extend the working range up to 180 mm while maintaining a sub-millimeter accuracy of 100 µm. the surface dataset is obtained as a sequence of single-point measurements of the depth distance (z) in the x-y plane, with the probe triggered by a set of motorized stages moving the object plate, as can be seen in figure 1. the advantage is the creation of profiles and surface maps with a custom "field of view" that is only limited by the axis range. our scanning setup is composed of a motion system with linear micrometric axis stages (by pi) orthogonally mounted to form the acquisition grid (x, y axes). the axes have a maximum travel range of 300 mm and a step precision of 0.1 µm, with an accuracy of 1 µm over the entire length. the probe operates in pulse mode, receiving pulses from an external trigger sent by the scanning system: for each pulse the sensor measures the distance to the sampled object. as depicted in figure 1, the object surface is effectively measured within the working range of the depth sensor. in order to improve the capability of scanning complex shapes, a third motorized axis controlling the position of the probe along the line of sight can be added.

3. surface acquisition and 3d printing
the case study concerns a portion of an archaeological amphora (figure 2). the object was acquired with the conopoint-3 sensor with a 75 mm lens. this probe-lens coupling allows a working range of 18 mm with a stand-off distance of 70 mm and a laser spot size of 47 µm. we acquired a selected region of interest of 55.2 × 80.1 mm2 with a scanning step (x-y sampling grid) of 100 µm. the interest in this case arises from the surface geometrical structure being a superimposition of a large number of scales. the usual approach in surface metrology is to separate the surface into three main scales: the roughness, which represents the irregularities at smaller wavelengths and exhibits a random nature often related to the behaviour of the material; the waviness, i.e. the more widely spaced variation, often associated with the traces left by the tools used for the creation of the object; and the form, i.e. the 3d shape of the object. obviously, even if these features are intrinsic characteristics of the object, the acquired surface signals depend on the bandwidth of the measurement: the distribution of peaks and valleys in a surface profile is influenced by the sampling step that, together with the nyquist criterium, determines the smallest spatial structure.

figure 1. scanning setup of the microprofilometer with the specification of the working range and an exemplification of a scan line.
figure 2. part of the archaeological amphora with the scanned region highlighted.
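as a worked example of this bandwidth limit (the standard nyquist relation, not a formula given in the paper), the 100 µm scanning step used here yields

$\lambda_{\min} = 2\,\Delta x = 2 \times 100\ \mu\text{m} = 200\ \mu\text{m}, \qquad q_{\max} = \frac{2\pi}{\lambda_{\min}} \approx 31.4\ \text{mm}^{-1},$

so no surface component with a spatial wavelength shorter than about 0.2 mm can be resolved in the acquired maps.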
surface signal decomposition is of particular interest in archaeology, as the waviness pattern can be put in relation with the manufacturing tools. a polynomial fitting enables the separation of the form from the texture, obtaining the so-called conditioned surface (with zero mean), while the roughness is separated from the waviness using a gaussian filter.

3.1. 3d printing
from the acquired data we can tailor the creation of a mesh to be used for reproducing the object with 3d printing technology. most profilometers do not store the data as point clouds or meshes, so they cannot be printed directly. therefore, we developed our own tools for building a mesh from the height maps collected by the microprofilometer. in detail, from the generated grid of equally spaced points we obtain the point cloud data, with the triplets (x, y, z) representing the vertices of the mesh; we create the faces and hence a cuboid with the same dimensions as the scan, and we substitute the top face with the scan. eventually, we can programmatically create and export the mesh to an stl file (using, for example, the trimesh library [24]), the typical file format used by 3d printers, storing the triangulated surface; a sketch of this conversion follows. figure 3 shows the 3d printed object, created using stereolithography technology with a resolution of 50 µm along the z-axis of the printer. the resolution in the x-y printer plane is not specified, even though the laser spot size of 140 µm is [25]. in this kind of printing process, the object is printed layer-by-layer by polymerization of a liquid resin through exposure to laser light. each single layer in the x-y plane is created by raster-scanning the focused laser spot within the plane. the accuracy of the process also depends on the printing direction [26]. in this regard, it is worth noting that the best orientation of the model during the printing process should be the one that reflects the acquisition process. in fact, the optical microprofilometer is a line-of-sight technique that provides a height map as the result of the measurement and, like most conventional surface measurement techniques, it is not able to acquire re-entrant features (like a "c-shape" along the scan directions).

figure 3. 3d printed region of the amphora.
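a minimal python sketch of the height-map-to-mesh conversion described above, using the trimesh library named in the text; the input file name is hypothetical, and the closed cuboid base is omitted here for brevity.

import numpy as np
import trimesh

z = np.load("heightmap.npy")      # hypothetical file: (ny, nx) grid of heights in mm
ny, nx = z.shape
step = 0.1                        # 100 um scan step, in mm
xs, ys = np.meshgrid(np.arange(nx) * step, np.arange(ny) * step)
vertices = np.column_stack([xs.ravel(), ys.ravel(), z.ravel()])

# two triangles per cell of the regular x-y sampling grid
idx = np.arange(ny * nx).reshape(ny, nx)
a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
faces = np.vstack([np.column_stack([a, b, c]), np.column_stack([b, d, c])])

trimesh.Trimesh(vertices=vertices, faces=faces).export("surface.stl")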
4. surface analysis and comparison
as mentioned above, a surface can be represented as a superimposition of different frequencies; adding a process to the chain can alter the resulting surface. in particular, the 3d printing process can be seen as a filter that cuts, or badly passes, some components. therefore, the question we are interested in, which naturally arises after the printing, regards the level of accuracy of the printing process in the creation of the replica. to answer this question, the 3d printed object was scanned with the microprofilometer in the same measurement conditions, i.e. with the same probe, lens, and scan velocity. the comparative analysis of the surfaces was performed on the basis of surface metrology parameters [27]-[30]. figure 4 shows the comparison between the amplitude parameters of the original object and the 3d printed replica. in particular, the root mean square roughness sq, the average roughness sa, the skewness ssk and the kurtosis sku are evaluated. it is worth noting that the sku parameter, representing the sharpness of the surface peaks, deviates from 0.7 to 175.4, while the averaged texture amplitude sa varies from 112 µm to 110 µm, sq changes from 134 µm to 142 µm and the skewness ssk varies from 0 to -5.

figure 4. comparison of amplitude parameters between original and 3d printed object: root mean square deviation (sq), mean absolute deviation (sa), skewness (ssk) and kurtosis (sku) of the surface heights distribution.

in order to understand the differences between the surface datasets of the real object and the replica, the same signal decomposition was carried out, namely the form was separated from the texture through a second-order polynomial fitting, and then the roughness was highlighted through a gaussian filter, keeping the same parameters in both cases. the figures below (figure 5, figure 6) show the texture of the original and 3d printed objects. once the decomposition is done, the texture and the roughness are analysed using a multiscale approach [31]. this way, the texture and the roughness features are inspected as the observation scale varies. in order to gain insight into the average behaviour of the surface, the amplitude parameter sq and the extreme-value parameters sz (maximum height of the peaks), sv (maximum depth of the valleys) and st (maximum peak to valley) are calculated as a function of the evaluation length. in particular, the multiscale analysis is developed as follows: the surface is divided into square subregions of a specific side, i.e. the evaluation length, skipping a margin around the surface to avoid artefacts due to edge effects. in each subregion the aforementioned parameters are calculated, and they are then averaged over the patches; a sketch of this procedure is given at the end of this passage. plotting the texture parameters against the evaluation length gives an idea of their variation with the observation scale. here, the evaluation length starts from 2 mm (400 sampled points in each subregion) and increases by 2 mm each time. from figure 7 it emerges that the root mean square sq of the original and the 3d printed object follows a similar behaviour with the length scale. as expected, sq first increases with the evaluation length and then converges to a stable value.

figure 5. texture of the original object acquired with a scan step of 100 µm and an accuracy in the z-axis of 10 µm.
figure 6. texture of the 3d printed object acquired with a scan step of 100 µm and an accuracy in the z-axis of 10 µm.
figure 7. variation of the sq of the textures calculated at different scan sizes.

the figures below show the variation of the extreme parameters with the evaluation length. as can be seen in figure 8, the maximum peak-to-valley distance st follows a similar evolution, but it is interesting to note how the values for the 3d printed object are lower than those for the original object. in more detail, while the maximum height of the peaks sz (figure 9) has a similar trend, the maximum depth of the valleys sv (figure 10) differs. besides resolution limits, a possible explanation is the combined effect of gravity and of the washing that is performed after the 3d printing process. in fact, during the printing process the object grew upside down, and after the object was completed it was subjected to washing and post-curing processes. the washing has the goal of removing uncured resin from the surface of printed parts by simultaneously soaking and moving them in a solvent, but it is possible that some particles of liquid remain trapped within the valleys.

figure 8. variation of the peak-to-valley distances of the texture (st) calculated at different scan sizes.
figure 9. variation of the maximum peak height of the texture (sz) calculated at different scan sizes.
figure 10. variation of the maximum valley depth of the texture (sv) calculated at different scan sizes.
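a minimal python sketch of the patch-based multiscale evaluation described above; the function and parameter names are ours, and the input is assumed to be the conditioned surface with the form already removed.

import numpy as np

def multiscale_params(surface, step_mm=0.1, eval_len_mm=2.0, margin_px=20):
    n = int(round(eval_len_mm / step_mm))                    # patch side in samples
    s = surface[margin_px:-margin_px, margin_px:-margin_px]  # skip edge artefacts
    sq, sz, sv, st = [], [], [], []
    for i in range(s.shape[0] // n):
        for j in range(s.shape[1] // n):
            p = s[i*n:(i+1)*n, j*n:(j+1)*n]
            p = p - p.mean()
            sq.append(np.sqrt(np.mean(p**2)))   # rms height of the patch
            sz.append(p.max())                  # maximum peak height
            sv.append(-p.min())                 # maximum valley depth
            st.append(p.max() - p.min())        # maximum peak to valley
    return {name: float(np.mean(vals))
            for name, vals in zip(("sq", "sz", "sv", "st"), (sq, sz, sv, st))}

calling this function for increasing eval_len_mm values reproduces the kind of curves shown in figure 7 to figure 10.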
another comparative analysis was performed, in terms of frequencies, through the power spectral density function (psd). the psd is calculated as the average of the one-dimensional psds of the surface height profiles along the scan direction. in order to have a double comparison, the psd was also calculated on a second scan of the 3d printed object rotated by 180°. the psd is plotted versus the wavevectors, defined as qᵢ = 2π/λᵢ, where λᵢ is the wavelength of the surface component. figure 11 shows the power spectrum of the whole surfaces, while figure 12 shows the power spectrum of the texture, i.e. the previous surfaces once the form is removed. as can be seen, the power spectrum of the total signal shows that the information is preserved at the lower frequencies, with a variation in the behaviour of the higher frequencies at q ≈ 15 mm⁻¹ (λ ≈ 0.42 mm). this can be better analysed in the texture signal (figure 12), where the power spectra of the real and printed objects intersect. up to that point, the signal of the real object is higher, while from that point onward it emerges that the psd of the printed object is no longer informative. the roughness analysis is done once the roughness signal is decomposed from the texture. the critical issue related to the use of the gaussian filter is the choice of the cut-off wavelength. the figure below shows the variation of the sq of the roughness signal of the original and the printed object versus the cut-off wavelength. as can be seen, the trend follows a similar behaviour, with a lower sq in the replica due to the smoothing of the 3d printing process (figure 13a). as the cut-off becomes longer, the sq increases, including also the waviness contribution (figure 13b). figure 14 shows an example of the texture decomposition with a cut-off of 6 mm. it emerges that the waviness signal is well preserved, while the roughness signal highlights the smoothing effect described above.

figure 11. comparison between the average power spectra of the total surface of the original object and the 3d printed object.
figure 12. comparison between the average power spectra of the texture signals of the original object and the 3d printed object.
figure 13. variation of the sq with the cut-off wavelength of the gaussian filter: a) the roughness signal extracted with a cut-off step of 100 µm; b) the increase in sq when varying the cut-off from 1 mm to 45 mm.
figure 14. surface signal decomposition: waviness with a cut-off of 6 mm and roughness with a cut-off of 600 µm.
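a minimal python sketch of the averaged one-dimensional psd used in this comparison; the hanning window is our addition to limit spectral leakage, not a detail stated in the paper.

import numpy as np

def mean_psd(texture, step_mm=0.1):
    n = texture.shape[1]
    window = np.hanning(n)
    spectra = []
    for profile in texture:                  # one psd per scan line
        f = np.fft.rfft((profile - profile.mean()) * window)
        spectra.append(np.abs(f) ** 2)
    freq = np.fft.rfftfreq(n, d=step_mm)     # spatial frequency in 1/mm
    q = 2 * np.pi * freq                     # wavevector q = 2*pi/lambda, in mm^-1
    return q, np.mean(spectra, axis=0)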
5. conclusions
in this work we presented the application of scanning optical microprofilometry for the acquisition, the analysis, and the 3d-printed reproduction of archaeological objects. thanks to the capability of performing contact-less and full-field measurements with micrometric precision, this technique is a powerful tool for non-destructive testing and evaluation in cultural heritage. the versatile instrument configuration, based on scanning stages and conoscopic holography sensors, allows for setting optimal sampling parameters for different needs, enabling multiscale and multi-material surface measurements. the "field of view" of the x-y scanning (up to 30 × 30 cm2) was designed for acquiring microsurface data on medium-sized objects, within a z depth range that is determined by the lens (e.g. the range can vary from 0.6 mm to 180 mm while maintaining the same probe), thus allowing the surface modelling of "flat" objects like ancient coins, or of significant parts of 3d objects, in a single measuring session. reaching micrometric resolution is of fundamental importance in the acquisition and in the analysis of the material surface. in this study we focused on the measurement of an archaeological object representing a complex and exemplar case study, with sharp evidence of the three main surface signal components: shape, waviness, and roughness. we demonstrated how we obtained a high-fidelity and high-resolution 3d printed replica of the object starting from the microprofilometry dataset. moreover, we have come full circle by acquiring the printed object and assessing how the process preserves the surface signals. in order to perform this task, a multiscale analysis of the original and 3d printed object was proposed, by inspecting the in-band roughness through the power spectrum, as well as the variation of the areal amplitude parameters (sa, sq, ssk, and sku) with the observation length. it was demonstrated that the printing process is accurate, even if the replica is affected by less marked valleys and by a loss of signal in the high-frequency surface components, thus resulting in a smoothing effect. using a scanning step of 100 µm in the microprofilometer, with the conoscopic probe set to a laser spot of 47 µm and a depth accuracy of 10 µm, and using a printing resolution of 50 µm (along the printer z-axis), it is shown that, besides the form of the archaeological manufact, the surface texture is accurately acquired and 3d printed without artifacts affecting the waviness and the roughness appearance. the focus of this work was the feasibility and performance analysis in 3d printing of artworks using laser scanning microprofilometry. we have turned our attention to a 3d replica obtained with stereolithography technology and a photopolymer resin. however, the method could be used to compare the performance of different kinds of 3d printer technologies, as well as various orientation settings of the model during the printing process, or several printer-resin combinations.

acknowledgement
the work was partly supported by the scan4reco project, funded by the european union horizon 2020 framework programme for research and innovation under grant agreement no 665091, and partly by the temart project, por fesr 2014-2020.

references
[1] g. sansoni, m. trebeschi, f. docchio, state-of-the-art and applications of 3d imaging sensors in industry, cultural heritage, medicine, and criminal investigation, sensors, vol. 9, no. 1, jan. 2009, pp. 568-601. doi: 10.3390/s90100568
[2] c. balletti, m. ballarin, f. guerra, 3d printing: state of the art and future perspectives, j. cult. herit., vol. 26, jul. 2017, pp. 172-182. doi: 10.1016/j.culher.2017.02.010
[3] yu ying clarrisa choong, hong wei tan, deven c. patel, wan ting natalie choong, chun-hsien chen, hong yee low, ming jen tan, chandrakant d. patel, chee kai chua, the global rise of 3d printing during the covid-19 pandemic, nat. rev. mater., vol. 5, no. 9, sep. 2020, pp. 637-639. doi: 10.1038/s41578-020-00234-3
[4] v. m. vaz, l. kumar, 3d printing as a promising tool in personalized medicine, aaps pharmscitech, vol. 22, no. 1, jan. 2021, p. 49. doi: 10.1208/s12249-020-01905-8
[5] t. singh, s. kumar, s. sehgal, 3d printing of engineering materials: a state of the art review, mater. today proc., vol. 28, 2020, pp. 1927-1931. doi: 10.1016/j.matpr.2020.05.334
[6] r. scopigno, p. cignoni, n. pietroni, m. callieri, m. dellepiane, digital fabrication techniques for cultural heritage: a survey, comput. graph. forum, vol. 36, no. 1, jan. 2017, pp. 6-21. doi: 10.1111/cgf.12781
[7] c. balletti, m. ballarin, an application of integrated 3d technologies for replicas in cultural heritage, isprs int. j. geo-information, vol. 8, no. 6, jun. 2019, p. 285. doi: 10.3390/ijgi8060285
[8] s. mazzocato, c. daffara, experiencing the untouchable: a method for scientific exploration and haptic fruition of artworks microsurface based on optical scanning profilometry, sensors, vol. 21, no. 13, jun. 2021, p. 4311. doi: 10.3390/s21134311
[9] g. schirripa spagnolo, l. cozzella, f. leccese, fringe projection profilometry for recovering 2.5d shape of ancient coins, acta imeko, vol. 10, no. 1, mar. 2021, p. 142. doi: 10.21014/acta_imeko.v10i1.872
[10] p. dondi, l. lombardi, m. malagodi, m. licchelli, 3d modelling and measurements of historical violins, acta imeko, vol. 6, no. 3, sep. 2017, p. 29. doi: 10.21014/acta_imeko.v6i3.455
[11] r. fontana et al., three-dimensional modelling of statues: the minerva of arezzo, j. cult. herit., vol. 3, no. 4, oct. 2002, pp. 325-331. doi: 10.1016/s1296-2074(02)01242-6
[12] f. remondino, heritage recording and 3d modeling with photogrammetry and 3d scanning, remote sens., vol. 3, no. 6, may 2011, pp. 1104-1138. doi: 10.3390/rs3061104
[13] a. mironova, f. robache, r. deltombe, r. guibert, l. nys, m. bigerelle, digital cultural heritage preservation in art painting: a surface roughness approach to the brush strokes, sensors, vol. 20, no. 21, nov. 2020, p. 6269. doi: 10.3390/s20216269
[14] m. callieri et al., alchemy in 3d: a digitization for a journey through matter, in: 2015 digital heritage, sep. 2015, pp. 223-230. doi: 10.1109/digitalheritage.2015.7413875
[15] j. striova, l. pezzati, e. pampaloni, r. fontana, synchronized hardware-registered vis-nir imaging spectroscopy and 3d sensing on a fresco by botticelli, sensors, vol. 21, no. 4, feb. 2021, p. 1287. doi: 10.3390/s21041287
[16] e. george, p. liacouras, f. j. rybicki, d. mitsouras, measuring and establishing the accuracy and reproducibility of 3d printed medical models, radiographics, vol. 37, no. 5, sep. 2017, pp. 1424-1450. doi: 10.1148/rg.2017160165
[17] e. kluska, p. gruda, n. majca-nowak, the accuracy and the printing resolution comparison of different 3d printing technologies, trans. aerosp. res., vol. 2018, no. 3, sep. 2018, pp. 69-86. doi: 10.2478/tar-2018-0023
[18] r. m. carew, r. m. morgan, c. rando, a preliminary investigation into the accuracy of 3d modeling and 3d printing in forensic anthropology evidence reconstruction, j. forensic sci., vol. 64, no. 2, mar. 2019, pp. 342-352. doi: 10.1111/1556-4029.13917
[19] v. bonora, g. tucci, a. meucci, b. pagnini, photogrammetry and 3d printing for marble statues replicas: critical issues and assessment, sustainability, vol. 13, no. 2, jan. 2021, p. 680. doi: 10.3390/su13020680
[20] n. gaburro, g. marchioro, c. daffara, a versatile optical profilometer based on conoscopic holography sensors for acquisition of specular and diffusive surfaces in artworks, jul. 2017, p. 103310a. doi: 10.1117/12.2270307
[21] n. gaburro, g. marchioro, c. daffara, conoscopic laser microprofilometry for 3d digital reconstruction of surfaces with sub-millimeter resolution, in: 2017 ieee international conference on environment and electrical engineering and 2017 ieee industrial and commercial power systems europe (eeeic / i&cps europe), jun. 2017, pp. 1-4. doi: 10.1109/eeeic.2017.7977779
[22] g. y. sirat, conoscopic holography. i. basic principles and physical basis, j. opt. soc. am. a, vol. 9, no. 1, jan. 1992, p. 70. doi: 10.1364/josaa.9.000070
[23] i. álvarez, j. enguita, m. frade, j. marina, g. ojea, on-line metrology with conoscopic holography: beyond triangulation, sensors, vol. 9, no. 9, sep. 2009, pp. 7021-7037. doi: 10.3390/s90907021
[24] trimesh [computer software]. online [accessed 23 march 2022] https://trimsh.org/
[25] formlabs, form 2 tech specs. online [accessed 23 march 2022] https://formlabs.com/3d-printers/form-2/tech-specs/
[26] t. hada et al., effect of printing direction on the accuracy of 3d-printed dentures using stereolithography technology, materials (basel), vol. 13, no. 15, aug. 2020, p. 3405. doi: 10.3390/ma13153405
[27] international organization for standardization, iso 25178-1:2016, geometrical product specifications (gps) - surface texture: areal - part 1: indication of surface texture, geneva, 2016.
[28] international organization for standardization, iso 25178-2:2012, geometrical product specifications (gps) - surface texture: areal - part 2: terms, definitions and surface texture parameters, geneva, 2012.
[29] international organization for standardization, iso 25178-3:2012, geometrical product specifications (gps) - surface texture: areal - part 3: specification operators, geneva, 2012.
[30] f. blateyron, the areal field parameters, in: characterisation of areal surface texture, springer, berlin, heidelberg, 2013, pp. 15-43. doi: 10.1007/978-3-642-36458-7_2
[31] m. bigerelle, t. mathia, s. bouvier, the multi-scale roughness analyses and modeling of abrasion with the grit size effect on ground surfaces, wear, vol. 286-287, may 2012, pp. 124-135. doi: 10.1016/j.wear.2011.08.006
iomt-based biomedical measurement systems for healthcare monitoring: a review

acta imeko
issn: 2221-870x
june 2021, volume 10, number 2, 174-184

imran ahmed1, eulalia balestrieri1, francesco lamonaca2
1 university of sannio, 82100 benevento, bn, italy
2 university of calabria, 87036 arcavacata, rende (cs), italy

section: research paper
keywords: internet of things; internet of medical things; iot devices; biomedical devices; biomedical measurement systems; non-invasive medical devices
citation: imran ahmed, eulalia balestrieri, francesco lamonaca, iomt-based biomedical measurement systems for healthcare monitoring: a review, acta imeko, vol. 10, no. 2, article 24, june 2021, identifier: imeko-acta-10 (2021)-02-24
section editor: ciro spataro, university of palermo, italy
received march 6, 2021; in final form april 30, 2021; published june 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: imran ahmed, e-mail: iahmed@unisannio.it

abstract
biomedical measurement systems (bms) have provided new solutions for healthcare monitoring and the diagnosis of various chronic diseases. with a growing demand for bms in the field of medical applications, researchers are focusing on advancing these systems, including internet of medical things (iomt)-based bms, with the aim of improving bioprocesses, healthcare systems and technologies for biomedical equipment. this paper presents an overview of recent activities towards the development of iomt-based bms for various healthcare applications. different methods and approaches used in the development of these systems are presented and discussed, taking into account some metrological aspects related to the requirement for accuracy, reliability and calibration. the presented iomt-based bms are applied to healthcare applications concerning, in particular, heart, brain and blood sugar diseases as well as internal body sound and blood pressure measurements. finally, the paper provides a discussion about the shortcomings and challenges that need to be addressed along with some possible directions for future research activities.

1. introduction
biomedical measurement systems (bms) play a key role in the detection and diagnosis of various diseases, providing new solutions for healthcare monitoring and improving bioprocesses and technology for biomedical equipment. generally, bms use measurement devices to collect vital signs, such as heart rate, pulse rate and body temperature, from the human body; these vital signs are then processed by a processing unit. finally, the results are displayed to aid doctors in the diagnosis of various diseases. however, for old-fashioned bms (for example, non-portable and non-smart ultrasound machines), acquiring vital signs without disturbing the patient's routine activities is challenging. moreover, these old-fashioned bms require patients to visit the hospital for their check-up, which takes a great deal of time out of their busy lives. therefore, in order to utilise bms without disturbing routine activities, these systems must be able to acquire data in different scenarios [1].
this means not being limited to situations that require the presence of patients inside hospitals, but also covering scenarios outside them: for example, industry workers, miners and sports professionals in their working environments, as well as military personnel, and individuals in their home environment. hence, the use of bms for today's lifestyle demands that the devices belonging to these systems be compact, user friendly and comfortable for the wearer, with adequate measurement accuracy even in a harsh or complex environment [1], [3]. as a result of these emerging requirements, recent research activities have been directed towards improving bms by using the internet of things (iot) [4]-[8] and by creating a new paradigm, the internet of medical things (iomt) [1], [9]. these iomt solutions are mainly based on wearable and implantable biomedical measurement devices [12] using different sensors, such as tactile [13], silicon [14], polymer [15] and optical-based sensors [16], [17], or sensors already integrated into commonly used devices, such as smartphones [3], [18]-[23]. wearable iomt bms typically include devices such as smartwatches, armbands, glasses, smart helmets and digital hearing devices [1], [25]. today, many wearable devices are smart in the sense that they can locally process signals acquired from sensors and transmit measurement data through the network to other connected devices (on a mobile phone or through hospital systems), so that doctors can promptly monitor and analyse the patient's data to make effective decisions, especially in case of emergency [5], [8]. figure 1 shows some smart wearable and implantable devices that are used for measuring different vital signs. these devices acquire measurement data and then process and send the data to local elaboration units for further processing and for the presentation of the resulting information to clinicians or patients [12]. therefore, they are considered a substratum for the development of iomt bms. in order to stimulate research for designing innovative bms, this paper presents an overview of iomt-based bms.
it is an extended version of a previous contribution to technical committee 4 of the 2020 international measurement confederation (imeko tc4) conference held in palermo, italy [9], and takes into account further developments in measurement devices and available techniques used for different medical applications. a discussion concerning each example described is reported, including the advantages, working principles and technologies used. in addition, some general issues and challenges related to the metrological aspects of iomt-based bms are highlighted. the organisation of this paper is as follows: section 2 explains the basic architecture of iomt-based bms; section 3 presents the five main categories of existing iomt-based bms, while also introducing some important metrological issues in relation to the measurement devices used in iomt; section 4 discusses the metrological challenges for existing bms; and finally, the conclusions are presented in section 5.

2. iot-based biomedical measurement systems (iomt)
the main advantage of iomt-based bms is that these systems provide the online monitoring of a patient's health to facilitate a quick response in an emergency and to offer remote access to doctors as well as relatives and the patients themselves for monitoring targeted vital signs (blood pressure, heart rate, glucose level and so on) [26]. to this end, iomt-based bms are usually designed to offer the following features: (i) the continuous monitoring of parameters without disturbing the patient's daily routine, (ii) an alarm triggered in an emergency, and (iii) the use of low-cost measurement devices. as a consequence, the final aims of an iomt-based bms include the following: (i) a reduction in the cost of hospitalisation, (ii) the optimisation of public health costs, (iii) an increase in the independence and quality of life of older adults and (iv) an improvement in the monitoring of hospitalised and/or critical patients. the general architecture for iomt-based systems is shown in figure 2 [27]. compared with architectures [28]-[30] that are specifically designed for particular applications, such as heart disease, blood pressure and blood sugar, the system shown in figure 2 is more general and demonstrates the common components belonging to complete iot-based bms: (i) a physical layer, (ii) a data integration layer and (iii) an application service/presentation layer. in the physical layer, iomt-based bms mostly use wearable devices to measure the vital signs (heart rate, pulse rate, body temperature, blood pressure, oxygen concentration, lung contraction volume, blood sugar level, respiration rate and so on) of the subjects being monitored. these measurement data are first stored in the storage memory and then transferred to the data integration layer (figure 2) through the internet/bluetooth or any other communication protocol. in the data integration layer, the received data are processed and then sent to the application service/presentation layer. nowadays, various types of software are available to extract useful information from the measurement data. at the application service/presentation layer, the data are analysed by the doctor or experts, enabling them to take effective decisions about the disease; a minimal sketch of this three-layer flow is given below. in the following sections, some recently developed iomt-based systems applied to the diagnosis of various diseases are discussed along with some metrology-related issues.
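a minimal, illustrative python sketch of the three-layer flow; all names and thresholds are hypothetical, and a real deployment would use proper transports (e.g. bluetooth or mqtt) and secured cloud services.

from dataclasses import dataclass
from statistics import mean

@dataclass
class VitalSample:                      # physical layer: one wearable-sensor reading
    patient_id: str
    sign: str                           # e.g. "heart_rate"
    value: float
    unit: str

def integrate(samples):                 # data integration layer: aggregate raw data
    rates = [s.value for s in samples if s.sign == "heart_rate"]
    return {"heart_rate_mean": mean(rates), "n_samples": len(rates)}

def present(summary, alarm_bpm=100.0):  # application service/presentation layer
    return {"report": summary, "alarm": summary["heart_rate_mean"] > alarm_bpm}

readings = [VitalSample("p01", "heart_rate", v, "bpm") for v in (72.0, 75.0, 71.0)]
print(present(integrate(readings)))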
3. existing iomt-bms classification overview
iomt bms have various medical applications, including healthcare monitoring and the diagnosis of various diseases. based on these applications, this section classifies the existing iomt bms into five main categories: (i) iomt bms for heart disease, (ii) iomt bms for examining internal body sounds, (iii) iomt bms for blood pressure, (iv) iomt bms for brain diseases and (v) iomt bms for blood sugar disease. for each category, various examples are discussed, examining their advantages, working principles and technology usage as well as their reliability and accuracy. in the literature, it has been observed that researchers in the field of iomt bms are usually concerned with applying iot technologies to different medical applications without focusing on developing suitable solutions in terms of metrology and requirements. since iomt bms rely on various types of measurement to acquire information about the vital signs of the human body, the reliability and accuracy of these systems play a critical and essential role in their actual ability to provide correct and suitable information that can be used by doctors for their diagnoses. it is also essential that iomt-bms devices are properly calibrated to ensure that they are accurate and perform in an appropriate and timely manner. vital-sign measurements that do not have the required accuracy can result in the misdiagnosis of patients' diseases. accurate and reliable measurements not only ensure effective treatment but also save time and costs related to misdiagnosed patients. thus, the metrological characterisation and calibration of devices are very important for validating the reliability of iomt systems, and researchers must consider these characteristics, starting at the initial design phase and continuing to the final testing phase of the iomt bms. the following subsections discuss existing examples of iomt bms and related metrological issues.

figure 1. smart biomedical measurement devices: (a) smart wearable measurement devices [24], (b) smart implantable measurement devices.
figure 2. general architecture of iomt systems.

3.1. iomt bms for heart disease
the early detection of heart disease is very important for saving lives, and iomt could play a vital role in achieving this goal. in the physical layer, iomt-based systems for heart-disease detection generally take numerous measurements, such as sugar concentration levels, cholesterol levels, heart rate and pulse rate as well as other vital signs, using sensors. these measurements are usually taken by iomt devices, such as smartwatches, electrocardiogram (ecg) monitors and other ecg- or optical-sensor-based heart-monitoring medical devices. smartwatches mostly use optical sensors to scan the blood flow near the wrist to measure these vital signs, while ecg monitors use electrodes to acquire the electrical signals moving through the heart, record the strength and the timing of these signals and then display the acquired measurements in graphical form. however, there are a few limitations on measuring these vital signs due to the measurement conditions (for example, a patient sweating during the ecg measurement) and the accuracy of the measuring device [31]. once the measurements are taken, they are sent to the data integration layer through the internet and may use cloud-based servers for further processing [32].
After processing, the results are analysed by doctors in the application service/presentation layer by means of a mobile app or web page. Additionally, further algorithms based on artificial intelligence (AI) are now available and are integrated into the data integration layer in order to further aid doctors in the diagnostic process [28].

For example, in [29], an IoMT-based detection system for the monitoring of heart-related diseases, based on a deep belief network model and a higher-order Boltzmann machine, is presented. This system uses IoT devices, such as embedded sensors and a wearable watch, to measure vital signs, such as heart rate and blood pressure, and to record other physical activities. However, the authors do not provide any details on the types of sensor and wearable watch used in the collection of these data or on how accurate these measurements are. After collection, the required data are transmitted to the healthcare centre to be processed using the higher-order Boltzmann deep belief neural network (HOBDBNN), which learns the features of heart disease from previous analyses. To evaluate the disease-prediction accuracy, the system is implemented using MATLAB, and the collected data are divided into two sets: 70 % of the total data for training the network and 30 % for testing purposes. The HOBDBNN performance is evaluated by different metrics, such as sensitivity, specificity, precision and F-measure [29], which are defined as

\[ \text{Sensitivity/Recall} = \frac{\text{True positive}}{\text{True positive} + \text{False negative}}, \quad (1) \]

\[ \text{Specificity} = \frac{\text{True negative}}{\text{True negative} + \text{False positive}}, \quad (2) \]

\[ \text{Precision} = \frac{\text{True positive}}{\text{True positive} + \text{False positive}}, \quad (3) \]

\[ \text{F-measure} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}, \quad (4) \]

where a true positive is a model outcome that correctly predicts the positive class, a true negative is a correct prediction of the negative class, a false positive is an incorrect prediction of the positive class and a false negative is an incorrect prediction of the negative class. These HOBDBNN performance metrics are then compared with those of other optimised classifiers, such as genetic-algorithm-trained recurrent fuzzy neural networks, swarm-optimised convolutional neural networks with a support vector algorithm, particle-optimised feed-forward back-propagated neural networks and particle-swarm-optimised radial-basis-function networks. The results demonstrate that the performance metrics of the proposed HOBDBNN are better than those achieved using the other above-mentioned methods; the overall prediction rate of the proposed deep network is reported to be about 99.03 %.

Similarly, an ECG-based heart-disease recognition system is presented in [33]. This system measures heart data by using a commercially available device called the Pulse Sensor Amped, which consists of a simple optical heart-rate sensor with amplification and noise-cancellation hardware to collect noise-free heart-pulse readings. The collected data are transmitted wirelessly to a mobile application via an Arduino microcontroller. A monitoring algorithm is implemented in the mobile application to detect any variation from the normal heart rate, and the application also raises an alarm whenever an emergency occurs. The system is capable of predicting heart disease by using an intelligent classifier and a machine-learning algorithm, which are pre-trained using clinical data.
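As a brief aside, the metrics in equations (1)-(4) can be computed directly from confusion-matrix counts; the following minimal sketch uses invented counts, not data from [29] or [33]:

```python
# Hedged illustration of equations (1)-(4); the counts below are invented.
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute sensitivity, specificity, precision and F-measure
    from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                 # equation (1), a.k.a. recall
    specificity = tn / (tn + fp)                 # equation (2)
    precision = tp / (tp + fp)                   # equation (3)
    f_measure = (2 * precision * sensitivity
                 / (precision + sensitivity))    # equation (4)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f_measure": f_measure}

print(classification_metrics(tp=90, tn=85, fp=5, fn=10))
# sensitivity 0.90, specificity ~0.944, precision ~0.947, F-measure ~0.923
```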
For the system in [33], the authors report a 100 % detection rate for the monitoring algorithm and an 85 % correct-classification rate for the classifier.

In [34], an IoMT-based low-power cardiovascular healthcare system with cross-layer optimisation, from the sensing patch to the cloud platform, is presented. It uses a wearable ECG patch based on custom system-on-chip technology, integrated with wireless connectivity to connect with mobile devices and a cloud platform. To measure and process the ECG signals, the sensing patch needs to be placed directly on the human body. The system performance is evaluated by first checking the signal-denoising and compression capability and then evaluating the correct disease-prediction rate using the mobile device and the cloud platform. To evaluate the signal-denoising and compression capability of the proposed system, it is tested on the MIT-BIH database (an open-source dataset that provides standard investigation material for the detection of heart arrhythmia [35]). In particular, various types of noise (baseline drift, power-line noise and electromyography noise) are added to the dataset signals, and the proposed system is evaluated under three different scenarios: (i) signal denoising, (ii) signal compression and (iii) combined signal denoising and compression. In this context, several metrics are used: the denoised mean square error (MSE), the denoised signal-to-noise ratio (SNR, dB), the percentage improvement in the MSE, the percentage root mean square difference (PRD), the signal compression ratio (CR) and the quality score (QS), which are defined as

\[ \text{MSE} = \frac{1}{N_i}\sum_{n=1}^{N_i}\left[f_c(n) - f_r(n)\right]^2, \quad (5) \]

\[ \text{SNR} = 10 \log \left[ \frac{\sum_{n=1}^{N_i} f_c^2(n)}{\sum_{n=1}^{N_i}\left[f_c(n) - f_r(n)\right]^2} \right], \quad (6) \]

\[ \text{PRD} = \sqrt{ \frac{\sum_{n=1}^{N_i}\left[f_i(n) - f_r(n)\right]^2}{\sum_{n=1}^{N_i} f_i^2(n)} } \times 100\,\%, \quad (7) \]

where \(f_c(n)\) is the noiseless reference signal, \(f_r(n)\) is the reconstructed signal after denoising, \(f_i(n)\) is the input signal and \(N_i\) is the total length (number of samples) of the input signal, and

\[ \text{CR} = N_i / M_\delta, \quad (8) \]

\[ \text{QS} = \text{CR} / \text{PRD}, \quad (9) \]

where \(M_\delta\) is the number of resolved coefficients after compression.

In the case of signal denoising, the results show that the average improvement in SNR is 12.63 dB and the improvement in MSE is 94.47 %. For signal compression, the average CR is 7.89, the average PRD is 0.61 % and the average QS is 13.06. With regard to combined signal denoising and compression, the average improvements in MSE and SNR are 94.47 % and 12.63 dB, respectively, and the average CR, PRD and QS are 9.8, 6.14 % and 1.62, respectively. In this case, compressed and denoised signals are generated directly in a single iteration of the whole system, which improves the system efficiency at the cost of some signal performance (the PRD increases and the QS decreases).

To test the disease-prediction accuracy executed on the cloud platform and the mobile device, four kinds of ECG signal are analysed for arrhythmia detection [36]: (i) a normal ECG signal, (ii) a left bundle-branch block (LBBB), (iii) a right bundle-branch block (RBBB) and (iv) paced beats (PB). The five-fold cross-validation [36] results for classifying these four kinds of ECG signal using the mobile device and the cloud platform are as follows: for a normal ECG, the correct classification rate is 96 %; for LBBB it is 98 %; for RBBB it is 100 %; and for PB it is 94 %.
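A minimal numpy sketch of the denoising and compression metrics in equations (5)-(9) follows; the signals below are synthetic stand-ins, not MIT-BIH records:

```python
# Hedged sketch of equations (5)-(9); the signals are invented for illustration.
import numpy as np

def denoise_compress_metrics(f_c, f_r, f_i, m_delta):
    """f_c: clean reference, f_r: reconstructed signal, f_i: noisy input,
    m_delta: number of coefficients kept after compression."""
    n_i = len(f_i)
    mse = np.mean((f_c - f_r) ** 2)                                    # eq. (5)
    snr = 10 * np.log10(np.sum(f_c ** 2) / np.sum((f_c - f_r) ** 2))   # eq. (6)
    prd = 100 * np.sqrt(np.sum((f_i - f_r) ** 2) / np.sum(f_i ** 2))   # eq. (7)
    cr = n_i / m_delta                                                 # eq. (8)
    qs = cr / prd                                                      # eq. (9)
    return mse, snr, prd, cr, qs

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 1.2 * np.linspace(0, 10, 3600))   # toy "ECG"
noisy = clean + 0.1 * rng.standard_normal(clean.size)        # input with noise
recon = clean + 0.02 * rng.standard_normal(clean.size)       # pretend denoiser output
print(denoise_compress_metrics(clean, recon, noisy, m_delta=450))
```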
The average correct classification rate of the proposed disease-prediction system executed on the cloud platform is 97 %, calculated as the mean of the correct-detection values over all five folds [36].

3.2. IoMT BMS for internal body sounds

Auscultation is the process of examining internal body sounds, such as those from the heart, lungs or other organs, for medical diagnosis. Typically, a stethoscope is used to examine these sounds [46], which can help to detect abnormalities occurring in the human body and provide information about various diseases [46]. However, the auscultation process has some important limitations: it requires the doctor to have good hearing acuity and expertise in order to accurately detect abnormal sounds; improper placement of the stethoscope on the body results in improper acquisition of the sound; the patient should be in a relaxed position for correct sound acquisition; and the presence of background noise can affect the process. In this regard, AI-based auscultation systems have proved to be very helpful for professionals/doctors in identifying abnormal sounds [47], [48]. Moreover, IoT-based auscultation systems allow doctors to remotely monitor their patients and record the sounds, and they offer features for sharing information with other professionals to obtain an immediate second opinion [49]. In this paper, some recently developed IoMT devices related to the measurement of internal body sounds are discussed.

For example, Ekuore Pro [37] is an IoT-based wireless stethoscope that acquires and measures internal body sounds through devices placed on the body. It can be connected to a mobile app using Wi-Fi to show phonograms in real time and to keep track of the patient's medical history, which can easily be shared with professionals and doctors. However, the manufacturer has not provided any information on the accuracy of the system.

Another IoMT device for the measurement of internal body sounds currently available on the market, Stethee, is presented in [38]. It is an AI-based wireless stethoscope that uses Aida technology (an AI- and machine-learning-based solution for analysing data [39]) to analyse the heartbeat and lung sounds acquired by placing the device on the body. It uses the Stethee app, which displays clinical information such as heart rate, average systole, average diastole and respiratory rate in only 20 seconds, and it also allows the live streaming of sound data so that it can be visualised in real time for the easy evaluation of vital signs. Although the manufacturer claims that the device has a number of powerful features, information about the measurement methods used to calculate these parameters and about their accuracy is not provided.

Similarly, the IoMT-based HD Steth system [40] has been introduced by the HD Medical Group for cardiac auscultation. HD Steth is a medical stethoscope approved by the Food and Drug Administration (FDA); it is composed of integrated ECG electrodes, four microprocessors, an AI-enabled detection system for cardiovascular disease and a display screen for visualising phonocardiographs and electrocardiographs. It provides the real-time visualisation of cardiac waveforms via a Bluetooth Low Energy (BLE) mobile app, and patients can easily share data with specialists via a cloud platform for a remote diagnosis. The device has been patented with noise cancellation and smart amplification for high-fidelity auscultation.
As with the previously mentioned devices, the manufacturer has not provided any information on measurement accuracy.

Another IoT-based smart stethoscope, StethoMe [41], is a CE-certified intelligent medical device able to measure and classify abnormal sounds in the respiratory system, and it can remotely analyse other internal sounds in the body. StethoMe connects to a smartphone via Bluetooth; the patient places the device on the points indicated by the mobile app, and the recording starts automatically. The mobile app then stores the medical history in the cloud, notifies patients if there are any abnormal sounds and sends the examination results to a doctor, who can take effective decisions accordingly. It uses AI algorithms to verify the examination process, which, in the manufacturer's opinion, makes StethoMe about 29 % more accurate than a specialist in the detection and classification of abnormal sounds. However, the accuracy-evaluation process is not described by the manufacturer, so there is no information on how the 29 % figure has been obtained.

In [42], an IoMT-based wireless digital stethoscope with mobile integration for sound auscultation is presented. The system first acquires the sound signals from the human body by using a traditional stethoscope chest piece with an integrated microcontroller unit and a Bluetooth communication device. The acquired data are then processed and finally transmitted to a mobile device for recording, listening and the visual display of the sound waveforms. This system can be used for monitoring patients in remote locations, especially in quarantine units, and can also be utilised for remotely training healthcare staff by broadcasting the recorded signals. However, the authors have not provided any information on the measurement accuracy of the system.

In [43], an IoMT-based cardiac auscultation monitoring system based on wireless sensing for healthcare is presented. In this system, the cardiac-sound auscultation sensing unit consists of two main components. The first is the HKY-06B heart-sound sensor, manufactured by Huake Electronic, which converts the weak heart vibration signal into an electrical signal; it has integrated micro-sound components made from polymer materials and is capable of detecting all kinds of heart and acoustic sounds from the body's surface. The second component is the data acquisition module, consisting of CC2540 system-on-chip technology with an external antenna [44], an 8051-based microcontroller [45] and other auxiliary components; its main functions are analogue-to-digital conversion and Bluetooth transmission. It uses the BLE Bluetooth protocol to offer power efficiency and a moderate data transmission rate. The proposed system is used to monitor cardiovascular health, and the acquired information is sent to caregivers as well as medical practitioners using the IoT network and an Android mobile app. In particular, pre-processing, segmentation and clustering techniques are performed to gather any significant health information. The system also features a Hilbert-Huang transform to reduce interference signals and to help extract features of the first heart sound, S1, and the second heart sound, S2. In healthy people, S1 and S2 are produced by the closure of the atrioventricular valves and the semilunar valves, respectively.
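The segmentation in [43] relies on the Hilbert-Huang transform; as a much simpler, purely illustrative alternative (a sketch assuming scipy is available, with a synthetic burst train standing in for a real phonocardiogram), S1/S2 candidates can be located as peaks of the Hilbert envelope:

```python
# Hedged sketch: Hilbert-envelope peak picking on a synthetic "heart sound".
# This is an illustration, not the segmentation algorithm of [43].
import numpy as np
from scipy.signal import hilbert, find_peaks

fs = 2000                                        # sampling rate, Hz
t = np.arange(0, 3.0, 1 / fs)
# Synthetic phonocardiogram: short 50 Hz bursts at roughly S1/S2 instants.
pcg = np.zeros_like(t)
for onset in np.arange(0.1, 3.0, 0.8):           # an "S1" every 0.8 s
    for shift, amp in ((0.0, 1.0), (0.3, 0.6)):  # "S2" 0.3 s after S1, weaker
        idx = (t >= onset + shift) & (t < onset + shift + 0.06)
        pcg[idx] += amp * np.sin(2 * np.pi * 50 * t[idx])

envelope = np.abs(hilbert(pcg))                  # instantaneous amplitude
peaks, _ = find_peaks(envelope, height=0.4, distance=int(0.2 * fs))
print("candidate S1/S2 instants (s):", np.round(t[peaks], 2))
```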
Returning to the system in [43], the detection rates for S1 and S2 are 88.4 % and 82.7 %, respectively, and the overall detection rate of S1 and S2 for irregular heart sounds is 86.66 %, as reported in the article.

In [46], an IoMT-based smartphone auscultation system is presented. It is a low-cost stethoscope connected to a mobile phone that can record lung sounds and detect abnormal sounds in the recorded data. The system uses a support vector machine to identify the sound of wheezes and crackles by extracting features from the spectrogram of each sound signal. The system is trained using recorded data consisting of lung sounds from 155 patients suffering from wheezes or crackles, and it is validated by evaluating the performance of the detection algorithms in terms of the area under the curve (AUC). The AUC is calculated from the receiver operating characteristic (ROC) curve, which plots the true positive rate against the false positive rate, as shown in Figure 3. For the crackle-detector algorithm, the AUC value is 0.87, and for the wheeze algorithm, the AUC value is 0.71.

Figure 3. Receiver operating characteristic (ROC) curves for crackles and wheezes [46].

In [49], the IoT-based smartphone monitoring of the second-heart-sound split is presented. The heart sounds are recorded using a customised external microphone, consisting of an acoustic stethoscope and a 3.5-mm mini-plug condenser microphone with an adapter, that connects wirelessly to a mobile app to record the heartbeat. The system detects S1 and S2 by converting the recorded heartbeat signal into the frequency domain using the fast Fourier transform. S2 is then fed to a discrete wavelet transform (DWT) and a continuous wavelet transform to extract its aortic and pulmonic components. This system can be very useful for remotely monitoring S2; however, these authors have also not provided any information on the measurement accuracy of the system.

3.3. IoMT BMS for blood pressure

High blood pressure (BP) is a serious issue that affects older people as well as young adults; it is important for patients to control their BP with repeated check-ups, otherwise serious conditions such as heart failure or strokes can occur. Patients suffering from hypertension therefore need a BP check-up several times a day, and IoMT-based BP measurement systems can help to make this task easier for them.

An automatic BP measurement system using the oscillometric technique is presented in [50]. This system is capable of monitoring both the systolic and the diastolic pressure, which are used to define the arterial BP. The values are continuously updated through Wi-Fi on a remotely accessible database, where they are compared with existing data to improve the accuracy of the results. The authors state the accuracy of the system to be 7 mmHg. This accuracy has been calculated by means of standards and protocols (defined for BP measuring devices) that are based on the general consensus of several organisations, such as the US Association for the Advancement of Medical Instrumentation (AAMI), the British Hypertension Society and the European Society of Hypertension (ESH), which are active working groups on BP monitoring, as well as the International Organization for Standardization (ISO) [51]. However, these protocols and the accuracy of oscillometric BP instruments continue to be the subject of discussion in the scientific field.
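Stepping back briefly to the lung-sound detectors of [46]: AUC values such as those quoted above come from standard ROC analysis, as in this generic sketch (assuming scikit-learn is installed; the labels and scores are invented, not data from [46]):

```python
# Hedged sketch of ROC/AUC computation; labels and scores are invented.
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=200)                    # 1 = crackle present
scores = y_true * 0.6 + rng.normal(0.2, 0.3, size=200)   # imperfect detector scores

fpr, tpr, thresholds = roc_curve(y_true, scores)  # sweep the decision threshold
print("AUC =", round(auc(fpr, tpr), 3))           # area under the ROC curve
```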
As for the oscillometric BP debate, it is due, for example, to the oscillometric device's tendency to provide inaccurate measurements for certain patient groups and its proneness to noise and artifacts. Moreover, the difficulty of reproducing the adopted calibration methods [52]-[54] allows monitors to pass validation tests even when there are clinically significant differences in the estimated BP values for some individuals [52].

In [55], the Qardio Arm system is used to develop a smart BP measurement system, in which the acquired oscillometric data are transferred to a smartphone app for analysis and visualisation. The accuracy of the system has been evaluated by comparing its results with those of the Omron M3 device, which has been clinically validated according to the existing ESH international protocol. However, the same concerns about the accuracy and calibration of oscillometric BP measurement expressed for the previous device apply to the Qardio Arm system.

In [56], the Omron HeartGuide is introduced. It is an FDA-approved IoMT smartwatch for BP measurement that uses an inflatable cuff within the smartwatch bracelet. The smartwatch sends the data to the data integration layer via the internet and then on to the application service/presentation layer, where the doctor can access it in real time. The measurement accuracy of the device is about 3 mmHg, but the validation of this device has also been carried out under the protocols for BP devices, with the same limitations reported above.

Similarly, the IoMT-based Instant Blood Pressure (IBP) Auralife app is presented in [57] for BP measurement using a mobile phone and without any external hardware. IBP Auralife extracts the BP values from the photoplethysmogram (PPG) signal acquired with a flash LED light and the mobile camera. The accuracy of the system is evaluated using the AAMI/ISO 81060-2:2013 protocol to compare IBP with reference upper-arm cuffless BP monitors and oscillometric BP cuff devices. The results show that the device's mean systolic BP accuracy is 2.7 mmHg and its mean diastolic BP accuracy is 2.6 mmHg. However, the system only delivers results at this level of accuracy for individuals whose BP lies in a specific range; it is therefore not suitable for individuals whose BP falls outside the systolic range of 83-178 mmHg or the diastolic range of 58-107 mmHg. The manufacturer states that it is not recommended for medical use and is not a substitute for cuff-based or other BP monitors, because it provides only an estimate of BP.

Another device, the ASUS VivoWatch BP, is reported in [58]. This device has an ECG sensor on the back that receives an ECG signal from the wrist and an optical sensor on the front for the measurement of PPG signals from the index finger. The data are automatically processed, and the results are displayed to the user. However, the manufacturer has not provided any information on the accuracy of the device.
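Devices such as [57] and [58] start from PPG waveforms; their BP-estimation models are proprietary, but a minimal sketch of one elementary PPG feature, the pulse rate obtained from systolic peaks (synthetic signal, scipy assumed), is:

```python
# Hedged sketch: pulse-rate estimation from a synthetic PPG signal.
# The real BP-estimation models of [57]/[58] are proprietary and not shown here.
import numpy as np
from scipy.signal import find_peaks

fs = 100                                  # camera/sensor sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
ppg = (np.sin(2 * np.pi * 1.2 * t)        # 1.2 Hz fundamental -> 72 beats/min
       + 0.05 * np.random.default_rng(2).standard_normal(t.size))

peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))  # at most one peak per 0.4 s
intervals = np.diff(t[peaks])                       # beat-to-beat intervals, s
print("estimated pulse rate:", round(60 / intervals.mean(), 1), "bpm")
```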
3.4. IoMT BMS for brain diseases

Many people are affected by brain diseases, such as brain tumours, dementia, headaches, strokes, chronic head pains, Tourette's syndrome, Alzheimer's, Parkinson's and epilepsy. The development of IoMT-based BMS in the field of brain-related diseases is a promising solution for the monitoring of patients and the timely detection of such diseases. Typical devices used in IoMT-based BMS for brain-related diseases are electroencephalogram (EEG) electrodes, smartwatches, galvanic skin response sensors and cameras. For example, with EEG electrodes it is possible to measure EEG signals and thus monitor brain activity; with galvanic skin response sensors, changes in the sweat glands can be measured to monitor stress; and with cameras, it is possible to monitor the daily physical activities of patients (e.g. patients with neuro-degenerative diseases). In this context, some brain-related IoMT BMS are used to monitor and measure vital signs (for example, stress levels and EEG signals) and can generate an alarm in the case of a crisis.

In [59], an IoMT smart sensor for stress-level detection is presented. This is a novel stress-detection system called iStress. It monitors stress levels by measuring parameters such as the rate of body motion, body temperature and the level of sweat during physical activity, using temperature and humidity sensors and a three-axis accelerometer. The collected data are then processed using a neural network based on a Mamdani-type fuzzy logic controller. The proposed system is very efficient in terms of power consumption and allows the real-time remote monitoring of stress levels by transmitting the collected data to the cloud, thus helping to improve the assessment of the patient's health status. The system classifies stress into three levels: low, normal and medium. The outputs of the sensors are fed to a fuzzy logic controller designed in MATLAB to detect the stress level. The authors report that this system has a stress detection rate of 97 %.

In [60], a high-definition camera is used to analyse the motion of patients with neuro-degenerative diseases (ND). The system is based on remote video monitoring and measures the quantity and quality of two clinically relevant motor symptoms (impairment in step length and arm-swing angle). The system has been evaluated by the mean absolute error (MAE), which gives an indication of how close the measurements are to the ground truth. For this evaluation, a video of a healthy individual walking is recorded in two scenarios: (i) simply walking and (ii) sitting on a chair, then standing up and walking. The camera is set up to capture the lateral view for the correct detection of the required ND parameters. A total of 23 valid step lengths and 10 arm-swing angles are recorded across the two cases. The ground-truth measurements are marked/annotated using the Kinovea software package, which provides a set of tools to capture, slow down, study, compare, annotate and measure the technical performance in a video [61]. The authors state that the system is able to measure the ND parameters with a tolerance ranging from 2 % to 5 %.

In [62], an IoT-based system that predicts Parkinson's brain disorder is reported. The system uses wearable IoT-based deep brain simulation (DBS) to collect patient brain activity and assess the condition of cells in order to predict changes in brain functionality. DBS is a smart device that collects brain data by placing electrodes on individuals for continuous monitoring. By means of a heuristic tubu-optimised sequence modular neural network (HTSMNN), it is possible to predict the changes present in the human brain and its functions.
To validate the proposed system, the authors use the dataset in [63], which contains Parkinson's disease-related information. The performance of the system is analysed using the MAE, the MSE, precision, recall, the classification accuracy (CA) and the AUC, which are defined as

\[ \text{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left| y_i - \tilde{y}_i \right|, \quad (10) \]

\[ \text{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left( y_i - \tilde{y}_i \right)^2, \quad (11) \]

\[ \text{CA} = \frac{TP + TN}{TP + TN + FP + FN}, \quad (12) \]

where N is the number of Parkinson's features, y is the actual output and ỹ the predicted output; TP is the number of true positives, TN the true negatives, FN the false negatives and FP the false positives, while precision and recall have already been defined in equations (3) and (1), respectively. These parameters are calculated by first computing the system deviation, which identifies the errors present in the Parkinson's disease recognition process; the deviation is computed as the difference between the actual and predicted outputs. The HTSMNN deviation value is compared with those of other methods, such as particle-swarm-optimised neural networks, particle-swarm-optimised radial-basis neural networks, genetic-algorithm-based extreme machine-learning networks and tubu-optimised deep neural networks. After computing the above parameters, the authors report that the system achieves an MAE equal to 0.284, an MSE equal to 0.273 and a CA equal to 98.07 %.

In [64], an IoMT-based BMS using a deep learning approach, called Stress-Lysis, is presented for measuring stress levels. The deep learning system is developed and tested with three different datasets containing activities of daily living (ADL) and physical activities. The ADL are collected with an accelerometer and humidity and temperature sensors worn on the wrist, while a physical-activity monitoring dataset is built by collecting 18 different activities from nine volunteers wearing three inertial measurement sensors and a heart-rate monitor. The system learns the stress-level parameters obtained from the wrist band, such as skin temperature, heart rate and sweat level during physical activity. The authors validate the proposed solution by analysing the collected dataset using Python and TensorFlow. The dataset consists of 2,000 samples, of which 1,334 are used for training and 667 for testing. The results are presented in the form of a loss function and show that the correct classification rate is in the range of 98.3 % to 99.7 %.

In [10], a seizure-detection IoMT system is demonstrated using a DWT, Hjorth parameters and a k-NN classifier. The system is based on an IoMT device called Neuro-Thing, which is capable of accurately detecting seizure-related diseases. In this method, EEG electrodes are used to acquire EEG signals, which contain information on the physiological state of the brain and are used to understand and monitor brain function. The EEG signals are decomposed into sub-bands using the DWT, Hjorth parameters are then extracted from the decomposed signals, and the parameters are classified using the k-NN method. The device is capable of sending the information to the IoT cloud, where it can be accessed by doctors/physicians so that they can take effective decisions about the disease. In order to validate the proposed method, the authors perform system-level simulations in a Simulink environment, where the DWT and the k-NN classifier are implemented as user-defined functions.
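The Hjorth parameters used in [10] are simple time-domain descriptors of an EEG segment; a minimal numpy sketch (an illustration, not the implementation of [10]) is:

```python
# Hedged sketch: Hjorth parameters of a signal segment (illustration only).
import numpy as np

def hjorth_parameters(x: np.ndarray):
    """Return Hjorth activity, mobility and complexity of a 1-D signal."""
    dx = np.diff(x)                    # first derivative (finite difference)
    ddx = np.diff(dx)                  # second derivative
    activity = np.var(x)               # variance of the signal
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

rng = np.random.default_rng(3)
segment = (np.sin(2 * np.pi * 10 * np.linspace(0, 1, 256))  # toy "EEG" segment
           + 0.1 * rng.standard_normal(256))
print(hjorth_parameters(segment))
```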
The IoMT implementation is carried out using ThingSpeak, an open data platform that enables IoT applications to gather and analyse data in the cloud. In addition, the system is validated experimentally using open-source EEG data [11] to verify the classification capability of the k-NN model by calculating the sensitivity and specificity defined in equations (1) and (2). The reported results show 100 % classification accuracy for normal vs. ictal EEG and 97.9 % for normal and inter-ictal vs. ictal EEG.

3.5. IoMT BMS for blood sugar disease

Diabetes, or blood sugar disease, occurs when the human body is not able to properly process blood sugar [65]. Generally, blood sugar is measured by determining the concentration of glucose in the blood. Most devices are based on electrochemical technology and use electrochemical strips to perform the measurements. There are some limitations to obtaining accurate measurements, due to variance in strip manufacturing and the use of old or out-of-date strips; other limitations arise from environmental factors, such as temperature or altitude (in hilly areas), or from patient factors, such as improper handwashing [66]. Patients suffering from this disease usually require their blood glucose levels to be checked regularly and need to manage their diet to keep the effects of the disease under control. Recent research focuses on using IoMT to improve the sharing of measurement data with physicians and on giving prompt feedback to patients.

An IoMT system with a novel framework to measure and monitor glucose levels is presented in [65]. The system is used for remotely powered implantable glucose monitoring, in which the signal retrieved from the interaction of radio-frequency signals with biological tissue is first characterised and then monitored. A low-power Bluetooth protocol is used to transmit the measurement data to the user's mobile. However, the authors do not discuss the accuracy of the proposed system.

In [67], an IoMT-based system for glucose monitoring is presented. The article describes a non-invasive blood glucose measurement system based on optical detection and an optimised regression model. A system for light absorbance at a wavelength of 940 nm with a prediction model is designed, and the technique is validated through measurements taken from human fingertips. The evaluation is performed by comparing the achieved results with reference blood glucose concentrations obtained using an SD-Check one-touch glucometer. The results are evaluated by calculating the mean absolute difference, which is found to be about 5.82 mg/dl, while the mean absolute relative difference (MARD) is 5.20 %, the average error (AvgE) is 5.14 % and the RMSE is 7.50 mg/dl. Test samples from 43 healthy people and diabetic patients are collected for a Clarke error grid analysis, which is used to quantify the clinical accuracy of blood glucose devices against reference values [68]. The overall results show that better measurements are achieved using the proposed approach than using the non-invasive measurement methods presented in [69]-[71].

In [72], iGLU 2.0 is presented. This is a new IoMT-based wearable device for measuring blood glucose levels. The device uses infrared spectroscopy and IoMT paradigms for the remote access of data by doctors/users. An analysis of the optimised regression model is performed, and the system is validated on healthy, pre-diabetic and diabetic patients.
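Metrics such as the MARD reported in [67] (and by the iGLU device discussed next) are straightforward to compute once reference values are available; a hedged numpy sketch with invented readings follows (the exact AvgE definition used in [67]/[72] is not reproduced here):

```python
# Hedged sketch of glucose-accuracy metrics; reference/predicted values invented.
import numpy as np

def glucose_metrics(reference: np.ndarray, predicted: np.ndarray):
    """reference and predicted glucose concentrations in mg/dl."""
    abs_diff = np.abs(predicted - reference)
    mad = abs_diff.mean()                                  # mean absolute difference
    mard = 100 * (abs_diff / reference).mean()             # mean absolute relative difference, %
    rmse = np.sqrt(((predicted - reference) ** 2).mean())  # root mean square error
    return mad, mard, rmse

ref = np.array([90.0, 110.0, 150.0, 200.0, 250.0])
pred = np.array([95.0, 104.0, 158.0, 191.0, 263.0])
mad, mard, rmse = glucose_metrics(ref, pred)
print(f"MAD = {mad:.2f} mg/dl, MARD = {mard:.2f} %, RMSE = {rmse:.2f} mg/dl")
```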
Robust regression models of the serum glucose level are deployed as the measurement mechanism of this proposed solution. In particular, a total of 50 different samples of capillary glucose and 37 samples of serum glucose are taken from pre-diabetic, diabetic and healthy people for the testing and analysis of the proposed iGLU 2.0 device. The obtained results are then compared with the reference values for serum glucose obtained from a laboratory; the reference values for capillary glucose are measured using an invasive SD-Check glucometer, which is taken as the gold standard for validation purposes. For capillary blood glucose, the AvgE is found to be about 6.07 % and the MARD 6.08 %, whereas for serum glucose the AvgE is 4.88 % and the MARD 4.86 %.

In [73], a reliable IoT-based embedded healthcare system that uses the Alaris 8100 infusion pump, a Keil LPC-1768 board and the IoT cloud to monitor diabetic patients is proposed. The infusion pump delivers the medical liquid (insulin) to the patient on a timely basis, while the Keil LPC-1768 board delivers the control commands and daily patient readings and provides a secure connection layer. The system is capable of storing patient records in the cloud, and a secure hash algorithm and secure socket shell are employed to achieve the reliability components of the proposed scheme. The article reports that the system has a 99.3 % probability of continuing to operate normally, and the authors claim that the proposed design is reliable, secure and authentic in relation to security and privacy. However, the metrological aspects of the proposed system are not discussed.

4. Metrological IoMT-BMS challenges

There are several metrological issues related to IoT-based BMS that should be addressed when considering their large-scale implementation in the healthcare sector. As reported in the previous sections, several devices have been proposed by researchers in this field, but the majority of researchers do not focus on the metrological characterisation of the devices, and they provide questionable validation, calibration and/or accuracy parameters. The treatment and monitoring of diseases are based on the measurements provided by the devices used in IoMT BMS. If IoMT BMS are not investigated from a metrological perspective, there can be no certainty about their capacity to provide reliable, accurate, precise and repeatable measurements of vital signs, with the serious risk of delivering incorrect information and, therefore, of the misdiagnosis or incorrect treatment of diseases [73]. It is therefore important to consider all aspects relating to the measurement accuracy of the devices (measurement nodes) used in the system. In addition, it is essential that the measurement devices used in IoMT BMS are properly calibrated [75] using suitable, reproducible procedures and references. There should be appropriate guidelines for the common user on the calibration process of an IoMT device, or properly accredited laboratories that provide these services, either on site or remotely. In practice, "calibrations" are often attempted simply by switching the device on and off or by zeroing or resetting it, which is not recommended [73]. Moreover, because of the presence of various measurement parameters (i.e. pressure, flow, temperature, sound pressure), these devices are difficult to calibrate.
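As an illustration of the kind of basic correction that a documented calibration procedure would formalise (a generic sketch, not a procedure endorsed by any standard cited in this paper), a linear gain/offset calibration of a measurement node against a reference instrument can be written as:

```python
# Hedged sketch: generic linear (gain/offset) calibration against a reference.
# Readings are invented; a real procedure would follow a documented protocol.
import numpy as np

reference = np.array([60.0, 80.0, 100.0, 120.0, 140.0])  # reference instrument
device = np.array([63.1, 82.4, 103.0, 124.2, 144.8])     # uncalibrated readings

gain, offset = np.polyfit(device, reference, 1)           # least-squares fit

def calibrate(raw):
    return gain * raw + offset

corrected = calibrate(device)
print("max residual before:", np.abs(device - reference).max())
print("max residual after: ", np.abs(corrected - reference).max())
```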
Moreover, one of the major challenges is that there is no general consensus among the different laboratories (from different disciplines, such as pressure, flow, temperature and sound pressure) on how to regulate calibration traceability on a single platform for the different kinds of measurement system [76].

5. Conclusion

IoMT BMS play an important role in the diagnosis of diseases, such as abnormal blood pressure, heart attacks, brain tumours, Alzheimer's, Parkinson's and epilepsy, as well as in healthcare monitoring, the monitoring of disease progression and biomedical research. The rapid growth of, and increasing demand for, IoMT BMS that suit the modern lifestyle make it essential for these systems to be accurate, fast, user friendly and comfortable for the wearer and to provide stability and accuracy even in harsh environments. Based on these common requirements, scientists are trying to improve these BMS and to develop new solutions. The presented overview aims to stimulate research in this field and to offer some highlights of IoT-based BMS for specific diseases. The paper has also highlighted some important challenges related to metrology in IoMT that need to be addressed. Such issues will lay the groundwork for the development of new multidisciplinary approaches to the design of improved IoMT systems and, thus, guarantee the continuous monitoring of human health, delivering accurate and reliable measurements.

References

[1] E. Balestrieri, F. Lamonaca, I. Tudosa, F. Picariello, D. Luca Carnì, C. Scuro, F. Bonavolontà, V. Spagnuolo, G. Grimaldi, A. Colaprico, An overview on Internet of Medical Things in blood pressure monitoring, Proc. of the IEEE Int. Symp. on Medical Measurements and Applications (MeMeA), Istanbul, Turkey, 26-28 June 2019, pp. 1-6. doi: 10.1109/memea.2019.8802164
[2] D. Luca Carnì, V. Spagnuolo, G. Grimaldi, F. Bonavolontà, A. Liccardo, R. S. Lo Moriello, A. Colaprico, A new measurement system to boost the IoMT for the blood pressure monitoring, Proc. of the IEEE Int. Symp. on Measurements & Networking (M&N), Catania, Italy, 8-10 July 2019, pp. 1-6. doi: 10.1109/iwmn.2019.8805016
[3] D. L. Carnì, D. Grimaldi, P. F. Sciammarella, F. Lamonaca, V. Spagnuolo, Setting-up of PPG scaling factors for SpO2% evaluation by smartphone, Proc. of the IEEE Int. Symp. on Medical Measurements and Applications, Benevento, Italy, 15-18 May 2016, pp. 430-435. doi: 10.1109/memea.2016.7533775
[4] D. Luca Carnì, D. Grimaldi, F. Lamonaca, L. Nigro, From distributed measurement systems to cyber-physical systems: a design approach, Int. J. of Computing 16 (2017) pp. 66-73.
[5] F. Lamonaca, C. Scuro, D. Grimaldi, R. S. Olivito, P. F. Sciammarella, D. L. Carnì, A layered IoT-based architecture for a distributed structural health monitoring system, Acta IMEKO 8 (2019) pp. 45-52. doi: 10.21014/acta_imeko.v8i2.640
[6] E. Balestrieri, P. Daponte, L. D. Vito, F. Picariello, S. Rapuano, I. Tudosa, A Wi-Fi Internet-of-Things prototype for ECG monitoring by exploiting a novel compressed sensing method, Acta IMEKO 9 (2020) pp. 38-45. doi: 10.21014/acta_imeko.v9i2.787
[7] I. Tudosa, P. Daponte, L. De Vito, S. Rapuano, F. Picariello, F. Lamonaca, A survey of measurement applications based on IoT, Proc. of the Workshop on Metrology for Industry 4.0 and IoT (MetroInd 4.0 & IoT), Brescia, Italy, 16-18 April 2018, pp. 157-162. doi: 10.1109/metroi4.2018.8428335
[8] E. Balestrieri, L. D. Vito, F. Lamonaca, F. Picariello, Research challenges in measurements for Internet of Things systems, Acta IMEKO 7 (2018) pp. 82-94.
doi: 10.21014/acta_imeko.v7i4.675
[9] I. Ahmed, F. Lamonaca, Recent developments in IoMT based biomedical measurement systems: a review, Proc. of IMEKO TC4 2020, Palermo, Italy, 14-16 September 2020, pp. 23-28.
[10] M. A. Sayeed, S. P. Mohanty, E. Kougianos, H. Zaveri, A fast and accurate approach for real-time seizure detection in the IoMT, Proc. of the IEEE Int. Smart Cities Conf., Kansas City, USA, 16-19 September 2018, pp. 1-5. doi: 10.1109/isc2.2018.8656713
[11] R. G. Andrzejak, K. Lehnertz, F. Mormann, C. Rieke, P. David, C. E. Elger, Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: dependence on recording region and brain state, Phys. Rev. E 64 (2001) p. 061907. doi: 10.1103/physreve.64.061907
[12] H. C. Koydemir, A. Ozcan, Wearable and implantable sensors for biomedical applications, Annual Rev. of Analytical Chemistry 11 (2018) pp. 127-146. doi: 10.1146/annurev-anchem-061417-125956
[13] P. Saccomandi, E. Schena, C. M. Oddo, L. Zollo, S. Silvestri, E. Guglielmelli, Microfabricated tactile sensors for biomedical applications: a review, Biosensors 4 (2014) pp. 422-448. doi: 10.3390/bios4040422
[14] Y. Xu, X. Hu, S. Kundu, A. Nag, N. Afsarimanesh, S. Sapra, S. C. Mukhopadhyay, T. Han, Silicon-based sensors for biomedical applications: a review, Sensors 19 (2019) p. 2908. doi: 10.3390/s19132908
[15] J. Yunas, B. Mulyanti, I. Hamidah, M. Mohd Said, R. E. Pawinanto, W. A. F. Wan Ali, A. Subandi, A. A. Hamzah, R. Latif, B. Yeop Majlis, Polymer-based MEMS electromagnetic actuator for biomedical application: a review, Polymers 12 (2020) p. 1184. doi: 10.3390/polym12051184
[16] I. Ahmed, U. Zabit, A. Salman, Self-mixing interferometric signal enhancement using generative adversarial network for laser metric sensing applications, IEEE Access 7 (2019) pp. 174641-174650. doi: 10.1109/access.2019.2957272
[17] I. Ahmed, U. Zabit, Fast estimation of feedback parameters for a self-mixing interferometric displacement sensor, Proc. of C-CODE, Islamabad, Pakistan, 8-9 March 2017, pp. 407-411. doi: 10.1109/c-code.2017.7918966
[18] D. L. Carnì, D. Grimaldi, A. Nastro, V. Spagnuolo, F. Lamonaca, Blood oxygenation measurement by smartphone, IEEE Instrumentation & Measurement Magazine 20 (2017) pp. 43-49. doi: 10.1109/mim.2017.7951692
[19] Y. Kurylyak, K. Barbé, F. Lamonaca, D. Grimaldi, W. Van Moer, Photoplethysmogram-based blood pressure evaluation using Kalman filtering and neural networks, Proc. of the IEEE Int. Symp. on Medical Measurements and Applications (MeMeA 2013), Gatineau, QC, Canada, 4-5 May 2013, pp. 170-174. doi: 10.1109/memea.2013.6549729
[20] F. Lamonaca, K. Barbé, Y. Kurylyak, D. Grimaldi, W. V. Moer, A. Furfaro, V. Spagnuolo, Application of the artificial neural network for blood pressure evaluation with smartphones, Proc. of the IEEE Int. Conf. on Intelligent Data Acquisition and Advanced Computing Systems, Berlin, Germany, 12-14 September 2013, pp. 408-412. doi: 10.1109/idaacs.2013.6662717
[21] F. Lamonaca, D. L. Carnì, D. Grimaldi, A. Nastro, M. Riccio, V. Spagnolo, Blood oxygen saturation measurement by smartphone camera, Proc. of the IEEE Int. Symp.
on Medical Measurements and Applications (MeMeA 2015), Pisa, Italy, 7-9 May 2015, pp. 359-363. doi: 10.1109/memea.2015.7145228
[22] G. Polimeni, A. Scarpino, K. Barbé, F. Lamonaca, D. Grimaldi, Evaluation of the number of PPG harmonics to assess smartphone effectiveness, Proc. of the IEEE Int. Symp. on Medical Measurements and Applications, Lisbon, Portugal, 11-12 June 2014, pp. 433-438. doi: 10.1109/memea.2014.6860101
[23] F. Lamonaca, Y. Kurylyak, D. Grimaldi, V. Spagnuolo, Reliable pulse rate evaluation by smartphone, Proc. of the IEEE Int. Workshop on Medical Measurements and Applications (MeMeA 2012), Budapest, Hungary, 18-19 May 2012, pp. 234-237. doi: 10.1109/memea.2012.6226672
[24] H. Arantes Junqueira, Types of wearable technology. Online [accessed 21 May 2021] https://www.researchgate.net/figure/different-types-of-wearable-technology_fig5_322261039
[25] R. Nayak, L. Wang, R. Padhye, Electronic textiles for military personnel, in: Electronic Textiles: Smart Fabrics and Wearable Technology, T. Dias (editor), Woodhead Publishing, 2015, pp. 239-256.
[26] S. Vishnu, S. R. J. Ramson, R. Jegan, Internet of Medical Things (IoMT) - an overview, Proc. of ICDCS, Coimbatore, India, 5-6 March 2020, pp. 101-104. doi: 10.1109/icdcs48716.2020.243558
[27] E. Balestrieri, L. De Vito, F. Picariello, I. Tudosa, A novel method for compressed sensing-based sampling of ECG signals in medical-IoT era, Proc. of the IEEE Int. Symp. on Medical Measurements and Applications (MeMeA), Istanbul, Turkey, 26-28 June 2019, pp. 1-6. doi: 10.1109/memea.2019.8802184
[28] F. Ahmed, An Internet of Things (IoT) application for predicting the quantity of future heart attack patients, Int. J. Comput. Appl. 164 (2017) pp. 36-40. doi: 10.5120/ijca2017913773
[29] Z. Al-Makhadmeh, A. Tolba, Utilizing IoT wearable medical device for heart disease prediction using higher order Boltzmann model: a classification approach, Measurement 147 (2019) pp. 1-9. doi: 10.1016/j.measurement.2019.07.043
[30] G. Bucci, F. Ciancetta, E. Fiorucci, A. Fioravanti, A. Prudenzi, An Internet-of-Things system based on powerline technology for pulse oximetry measurements, Acta IMEKO 9 (2020) pp. 114-120. doi: 10.21014/acta_imeko.v9i4.724
[31] E. Balestrieri, P. Daponte, L. D. Vito, F. Picariello, S. Rapuano, Oscillometric blood pressure waveform analysis: challenges and developments, Proc. of the IEEE Int. Symp. on Medical Measurements and Applications (MeMeA), Istanbul, Turkey, 26-28 June 2019, pp. 1-6. doi: 10.1109/memea.2019.8802175
[32] G. Joyia, A. Farooq, S. Rehman, R. M. Liaqat, Internet of Medical Things (IoMT): applications, benefits and future challenges in healthcare domain, J. Commun. 12 (2017) pp. 240-247. doi: 10.12720/jcm.12.4.240-247
[33] A. F. Otoom, E. E. Abdallah, Y. Kilani, A. Kefaye, M. Ashour, Effective diagnosis and monitoring of heart disease, Int. J. of Software Engineering and its Applications 9 (2015) pp. 143-156. doi: 10.14257/ijseia.2015.9.1.12
[34] C. Wang, Y. Qin, H. Jin, I. Kim, J. G. D. Vergara, C. Dong, Y. Jiang, Q. Zhou, J. Li, Z. He, Z. Zou, L. R. Zheng, X. Wu, Y. Wang, A low power cardiovascular healthcare system with cross-layer optimization from sensing patch to cloud platform, IEEE Trans. on Biomedical Circuits and Systems 13 (2019) pp. 314-329. doi: 10.1109/tbcas.2019.2892334
[35] A. Goldberger, L. Amaral, L. Glass, J. Hausdorff, P. C. Ivanov, R. Mark, H. E. Stanley, PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals,
Circulation 101 (23) (2000) pp. e215-e220. doi: 10.13026/c2108f
[36] J. Brownlee, A gentle introduction to k-fold cross-validation, 23 May 2018. Online [accessed 28 June 2021] https://machinelearningmastery.com/k-fold-cross-validation/
[37] Ekuore, Ekuore Pro electronic stethoscope. Online [accessed 28 June 2021] https://www.ekuore.com/wireless-stethoscope/
[38] Medicine, Stethee Pro. Online [accessed 28 June 2021] https://www.stethee.com/
[39] TCLab, Artificial intelligence for document automation. Online [accessed 28 June 2021] https://www.tclab.it/en/aida/
[40] HD Medical Group, Intelligent stethoscope with integrated ECG. Online [accessed 28 June 2021] https://hdmedicalgroup.com/wp-content/uploads/2020/11/hd-steth-data-sheet.pdf
[41] StethoMe. Online [accessed 28 June 2021] https://stethome.com/
[42] D. K. Degbedzui, M. Tetteh, E. E. Kaufmann, G. A. Mills, Bluetooth-based wireless digital stethoscope with mobile integration, Biomedical Engineering: Applications, Basis and Communications 30 (2018) p. 1850010. doi: 10.4015/s1016237218500102
[43] H. Ren, H. Jin, C. Chen, H. Ghayvat, W. Chen, A novel cardiac auscultation monitoring system based on wireless sensing for healthcare, IEEE J. of Translational Engineering in Health and Medicine 6 (2018) pp. 1-12. doi: 10.1109/jtehm.2018.2847329
[44] Texas Instruments, 2.4-GHz Bluetooth Low Energy system-on-chip. Online [accessed 28 June 2021] https://www.ti.com/lit/ds/symlink/cc2540.pdf
[45] Farnell, Atmel 8051 microcontroller family product selection guide. Online [accessed 28 June 2021] http://www.farnell.com/datasheets/46220.pdf
[46] D. Chamberlain, R. Kodgule, Y. Thorat, V. Das, V. Miglani, D. Ganelin, A. Dalal, T. Sahasrabudhe, A. Lanjewar, R. Fletcher, Smart phone-based auscultation platform, European Respiratory J. 48 (2016) suppl. 60. doi: 10.1183/13993003.congress-2016.oa2000
[47] D. G. McNamara, Value and limitations of auscultation in the management of congenital heart disease, Pediatr. Clin. North America 37 (1990) pp. 93-113. doi: 10.1016/s0031-3955(16)36834-1
[48] G. Xavier, C. A.
Melo-Silva, C. E. V. G. D. Santos, V. M. Amado, Accuracy of chest auscultation in detecting abnormal respiratory mechanics in the immediate postoperative period after cardiac surgery, J. Bras. Pneumol. 45 (2019) pp. 1-8. doi: 10.1590/1806-3713/e20180032
[49] S. R. Thiyagaraja, J. Vempati, R. Dantu, T. Sarma, S. Dantu, Smart phone monitoring of second heart sound split, Proc. of the 36th Annual Int. Conf. of the IEEE Eng. in Medicine and Biology Society, Chicago, USA, 26-30 August 2014, pp. 2181-2184. doi: 10.1109/embc.2014.6944050
[50] A. Varghese, M. Raghvan, N. S. Hegde, N. T. Prathiba, M. Ananda, IoT based automatic blood pressure system, Int. J. of Science & Research (2015) pp. 1-3. Online [accessed 28 June 2021] https://www.ijsr.net/conf/rise2017/ijsr1.pdf
[51] G. S. Stergiou, B. Alpert, S. Mieke, R. Asmar, N. Atkins, S. Eckert, G. Frick, B. Friedman, T. Graßl, T. Ichikawa, J. P. Ioannidis, P. Lacy, R. McManus, A. Murray, M. Myers, P. Palatini, G. Parati, D. Quinn, J. Sarkis, A. Shennan, T. Usuda, J. Wang, C. O. Wu, E. O'Brien, A universal standard for the validation of blood pressure measuring devices, Hypertension 71 (2018) pp. 368-374. doi: 10.1161/hypertensionaha.117.10237
[52] E. Balestrieri, S. Rapuano, Instruments and methods for calibration of oscillometric blood pressure measurement devices, IEEE Trans. on Instrumentation and Measurement 59 (2010) pp. 2391-2404. doi: 10.1109/tim.2010.2050978
[53] E. Balestrieri, P. Daponte, S. Rapuano, Automated non-invasive measurement of blood pressure: standardization of calibration procedures, Proc. of the IEEE Int. Workshop on Medical Measurements and Applications, Ottawa, Canada, 9-10 May 2008, pp. 124-128. doi: 10.1109/memea.2008.4543012
[54] E. Balestrieri, P. Daponte, S. Rapuano, Towards accurate NIBP simulators: manufacturers' and researchers' contributions, Proc. of the IEEE Int. Symp. on Medical Measurements and Applications (MeMeA), Gatineau, Canada, 4-5 May 2013, pp. 91-96. doi: 10.1109/memea.2013.6549713
[55] Qardio. Online [accessed 28 June 2021] https://www.getqardio.com/
[56] ManualsLib, Omron HeartGuide BP8000-M instruction manual, page 67. Online [accessed 28 June 2021] https://www.manualslib.com/manual/1501475/omron-heartguide-bp8000-m.html?page=67#manual
[57] Instant Blood Pressure app. Online [accessed 28 June 2021] https://www.instantbloodpressure.com/
[58] ASUS. Online [accessed 28 June 2021] https://www.asus.com
[59] L. Rachakonda, P. Sundaravadivel, S. P. Mohanty, E. Kougianos, M. Ganapathiraju, A smart sensor in the IoMT for stress level detection, Proc. of the IEEE Int. Symp. on Smart Electronic Systems (iSES), Hyderabad, India, 17-19 December 2018, pp. 141-145. doi: 10.1109/ises.2018.00039
[60] B. Abramiuc, S. Zinger, P. H. N. de With, N. de Vries-Farrouh, M. M. van Gilst, B. Bloem, S. Overeem, Home video monitoring system for neurodegenerative diseases based on commercial HD cameras, Proc. of the 5th ICCE, Berlin, Germany, 6-9 September 2015, pp. 489-492. doi: 10.1109/icce-berlin.2015.7391318
[61] Kinovea software. Online [accessed 25 June 2021] https://www.kinovea.org/
[62] A. Ali Al Zubi, A. Alarifi, M. Al-Maitah, Deep brain simulation wearable IoT sensor device based Parkinson brain disorder detection using heuristic tubu optimized sequence modular neural network, Measurement 161 (2020) p. 107887. doi: 10.1016/j.measurement.2020.107887
[63] G. M. Mashrur E. Elahi, S. Kalra, L. Zinman, A. Genge, L. Korngut, Y.-H. Yang, Texture classification of MR images of the brain in ALS using M-CoHOG: a multi-center study, Comput.
Med. Imaging Graph. (2020) p. 101659. doi: 10.1016/j.compmedimag.2019.101659
[64] L. Rachakonda, S. P. Mohanty, E. Kougianos, P. Sundaravadivel, Stress-Lysis: a DNN-integrated edge device for stress level detection in the IoMT, IEEE Trans. on Consumer Electronics 65 (2019) pp. 474-483. doi: 10.1109/tce.2019.2940472
[65] M. Ali, L. Albasha, H. Al-Nashash, A Bluetooth low energy implantable glucose monitoring system, Proc. of the 8th European Radar Conf., Manchester, UK, 12-14 October 2011, pp. 377-380.
[66] B. H. Ginsberg, Factors affecting blood glucose monitoring: sources of errors in measurement, J. of Diabetes Science and Technology 3 (2009) pp. 903-913. doi: 10.1177/193229680900300438
[67] P. Jain, S. Pancholi, A. M. Joshi, An IoMT based non-invasive precise blood glucose measurement system, Proc. of iSES (formerly iNIS), Rourkela, India, 16-18 December 2019, pp. 111-116. doi: 10.1109/ises47678.2019.00034
[68] W. L. Clarke, D. Cox, L. A. Gonder-Frederick, W. Carter, S. L. Pohl, Evaluating clinical accuracy of systems for self-monitoring of blood glucose, Diabetes Care 10 (1987) pp. 622-628. doi: 10.2337/diacare.10.5.622
[69] K. Song, U. Ha, S. Park, J. Bae, H. Yoo, An impedance and multi-wavelength near-infrared spectroscopy IC for non-invasive blood glucose estimation, IEEE J. of Solid-State Circuits 50 (2015) pp. 1025-1037. doi: 10.1109/jssc.2014.2384037
[70] N. Demitri, A. M. Zoubir, Measuring blood glucose concentrations in photometric glucometers requiring very small sample volumes, IEEE Trans. on Biomedical Engineering 64 (2017) pp. 28-39. doi: 10.1109/tbme.2016.2530021
[71] G. Acciaroli, M. Vettoretti, A. Facchinetti, G. Sparacino, C. Cobelli, Reduction of blood glucose measurements to calibrate subcutaneous glucose sensors: a Bayesian multiday framework, IEEE Trans. on Biomedical Engineering 65 (2018) pp. 587-595. doi: 10.1109/tbme.2017.2706974
[72] A. M. Joshi, P. Jain, S. P. Mohanty, N.
Agrawal, A new wearable for accurate non-invasive continuous serum glucose measurement in IoMT frameworks, IEEE Trans. on Consumer Electronics 66 (2020) pp. 327-335. doi: 10.1109/tce.2020.3011966
[73] Z. A. Al-Odat, S. K. Srinivasan, E. M. Al-Qtiemat, S. Shuja, A reliable IoT-based embedded health care system for diabetic patients, Int. J. on Advances in Internet Technology (2019) pp. 50-60. Online [accessed 28 June 2021] http://www.iariajournals.org/internet_technology/inttech_v12_n12_2019_paged.pdf
[74] B. Karaböce, Challenges for medical metrology, IEEE Instrumentation & Measurement Magazine 23 (2020) pp. 48-55. doi: 10.1109/mim.2020.9126071
[75] E. Balestrieri, S. Rapuano, Instruments and methods for calibration of oscillometric blood pressure measurement devices, IEEE Trans. on Instrumentation and Measurement 59 (2010) pp. 2391-2404. doi: 10.1109/tim.2010.2050978
[76] M. do Céu Ferreira, The role of metrology in the field of medical devices, Int. J. Metrol. Qual. Eng. 2 (2011) pp. 135-140.
Diagnosis of rotating machine defects by vibration analysis

ACTA IMEKO
ISSN: 2221-870X
March 2023, Volume 12, Number 1, 1-6

Karim Bouaouiche1, Yamina Menasria1, Dalila Khalfa1
1 Electromechanical Engineering Laboratory, Badji Mokhtar University Annaba, B.P. 12, 23000, Algeria

Section: Research paper
Keywords: vibration signal; kurtosis; FMD; Hilbert transform; filtering
Citation: Karim Bouaouiche, Yamina Menasria, Dalila Khalfa, Diagnosis of rotating machine defects by vibration analysis, Acta IMEKO, vol. 12, no. 1, article 26, March 2023, identifier: IMEKO-ACTA-12 (2023)-01-26
Section Editor: Francesco Lamonaca, University of Calabria, Italy
Received December 27, 2022; in final form March 16, 2023; published March 2023
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by Electromechanical Engineering Laboratory funding, Algeria.
Corresponding author: Karim Bouaouiche, e-mail: karimbouaouiche@gmail.com

Abstract
This work presents the analysis of the vibration signals of a ball bearing; the signals are available on the Case Western Reserve University platform in the form of MATLAB files. An approach is proposed for the diagnosis of any rotating machine that detects the failed components in a simple way; it consists of several methods and steps, each complementary to the other. First, the signal is decomposed by the feature mode decomposition (FMD) method; then the useful modes are selected according to their kurtosis values, and their sum, called the residual of the original signal, is computed. The residual is made simple to interpret through the application of band-pass filtering and the Hilbert transform, and the analysis of the peaks of the envelope spectrum reveals the frequencies of the failed components.

1. Introduction

Rotating machines such as engines and gearboxes produce vibration characterized by a set of parameters such as amplitude, velocity and acceleration [1]. The choice of transducer depends on the frequency of the vibration: displacement sensors are used at low frequency, velocity sensors at medium frequency and accelerometers at high frequency. Vibration monitoring is the technique most used in maintenance, as it allows the detection of defects [1]. Machine monitoring methods divide into two types. The first comprises fault detection methods based on the creation of an analytical model that correlates the signature with the damage parameters, such as the hidden Markov model (HMM) and artificial neural networks (ANN) [1]. The second is the processing and extraction of the characteristics of the vibration signal in the time domain, frequency domain and time-frequency domain [1]. Data acquisition is an important step in the diagnosis of machine faults; the information collected divides into two types: event data, related to installation and repair, and monitoring data collected by measurement, such as vibration data. The step after acquisition is the analysis of the data through models, algorithms and tools, which ensures a better interpretation [2]. Signal decomposition algorithms are the most used in data analysis, e.g. EMD, EEMD and CEEMD. Empirical mode decomposition (EMD) was created by Huang in 1998, but this method presents the problem of mode mixing [3]; EMD was improved by Wu in 2009 into ensemble empirical mode decomposition (EEMD), and in 2011 by Torres into complete ensemble empirical mode decomposition (CEEMD) [3]. The most recent decomposition method, created by Yonghao Miao in 2022, is called feature mode decomposition (FMD) [4].
In this work, we propose an approach to analyse real signals of the ball bearing 6025-SKF; the experimental signals are available on the Case Western Reserve University (CWRU) platform in the form of MATLAB files (see Figure 1).

2. Method

In this study, we propose an approach that processes the vibration signals of a machine in an efficient way and gives a result that directly indicates the kinematic frequency of the failed components (Figure 1. Parts of the proposed approach.). The approach consists of two parts: the first is the experimental acquisition of the signal by a measurement chain; the second is the processing of the vibration signal by MATLAB code implementing the proposed approach, which provides the frequencies of the defects. Figure 2 illustrates the proposed approach, which consists of several steps, each complementary to the other.

Step 1: after insertion of the signal, the decomposition by FMD is performed; through the use of a finite-impulse-response (FIR) filter bank, the signal decomposes into different modes. The FMD algorithm consists of the following steps [4]:
- Enter the number of modes $n$ and the filter length $L$.
- Initialize the filter bank with $K$ filters using a Hanning window.
- Obtain the modes (filtered signals) by the following equation:

$u_k^i = x * f_k^i$,  (1)

where $u_k^i$ are the modes, $x$ is the signal, $f_k^i$ are the filter coefficients, $i$ is the iteration index and $*$ denotes convolution.
- Update the filter coefficients using the signal $x$, the modes $u$ and the estimated period $T$, chosen as the point where the autocorrelation spectrum $R_k^i$ reaches its maximum value after the zero crossing.
- Check the iteration count $i$; if it reaches the preset number of iterations, go to the next step.
- Compute the correlation coefficient $CC$ between pairs of modes to build a matrix $CC_{K\times K}$, then compute the correlated kurtosis $CK$ using the estimated period.
- Check whether the number of modes has reached the specified number $n$.
- Obtain the final modes.

Figure 3 shows the flowchart of the FMD method [4]. The input parameters of the FMD algorithm are the number of modes $n$, the filter length $L$ and the number of frequency-band segments $K$. The filter length $L$ influences the filtering performance; the FMD performance test, consisting of the variation of the correlated kurtosis $CK$ of the filtered signals as a function of $L$, defines the optimal filter-length range $L \in [30; 100]$ [4]. The parameter $K$ must be greater than $n$ to ensure decomposition; the corresponding performance test, the variation of $CK$ as a function of $K$, defines the optimal parameter interval $K \in [5; 10]$ [4].
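To make the filter-bank stage of Step 1 concrete, the following is a minimal Python sketch of the initialization and convolution of equation (1): a bank of $K$ band-pass FIR filters built from a Hanning window, each convolved with the signal to produce candidate modes. Python and the scipy functions are illustrative assumptions (the authors worked in MATLAB), and the iterative filter update and correlated-kurtosis selection of full FMD are deliberately omitted.

```python
import numpy as np
from scipy.signal import firwin, lfilter

def init_filter_bank(K, L, fs):
    """Split [0, fs/2] into K adjacent bands and build one
    Hanning-window FIR band-pass filter of length L per band."""
    edges = np.linspace(0.0, fs / 2.0, K + 1)
    bank = []
    for k in range(K):
        lo = max(edges[k], 1e-3)               # keep the edge strictly above 0
        hi = min(edges[k + 1], fs / 2.0 - 1e-3)  # and strictly below Nyquist
        taps = firwin(L, [lo, hi], pass_zero=False, window="hann", fs=fs)
        bank.append(taps)
    return bank

def filter_bank_modes(x, bank):
    """Equation (1): each mode u_k is the convolution of the
    signal x with the k-th filter f_k."""
    return [lfilter(taps, 1.0, x) for taps in bank]

# toy usage: 12 kHz sampling, as for the CWRU signals
fs = 12_000
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 157.94 * t) + 0.5 * np.random.randn(t.size)
modes = filter_bank_modes(x, init_filter_bank(K=8, L=30, fs=fs))
print(len(modes), modes[0].shape)
```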
In this study, however, we propose criteria to define the values of the FMD parameters. The number of modes equals half the number of modes obtained by EMD:

$n_{\mathrm{FMD}} = n_{\mathrm{EMD}} / 2$.  (2)

The parameter $K$ is the midpoint of the optimal interval:

$K = (5 + 10)/2 \approx 8$.  (3)

The filter length $L$ is defined according to the variation of the cross-correlation $C$ between the modes obtained by FMD and the original signal; the case with the maximum total cross-correlation value $TC$ is chosen.

Figure 2. The signal processing approach.
Figure 3. FMD algorithm.

The cross-correlation function measures the similarity between two functions and is calculated by the following formula [5]:

$C(\tau) = \int_{-\infty}^{+\infty} x(t)\, \mathrm{IMF}_i(t - \tau)\, \mathrm{d}t$,  (4)

where $i = 1, 2, \ldots, n_{\mathrm{FMD}}$ indexes the modes ($\mathrm{IMF}$) and $\tau$ is the time-shift parameter;

$TC = \sum_{i=1}^{n_{\mathrm{FMD}}} C_i(\tau)$.  (5)

Step 2: the kurtosis is a fourth-order moment used to identify non-periodic impacts in the operation of a machine; its value acts as a threshold in fault detection: when the kurtosis is greater than three, the signal is impulsive [6], whereas for a bearing in good condition the kurtosis is less than three and the signal distribution is Gaussian [7]:

$ku = \frac{1}{N}\sum_{i=1}^{N}\left(\frac{x_i - \bar{x}}{\sigma}\right)^4$,  (6)

where $\bar{x}$ is the mean and $\sigma$ the standard deviation.

Step 3: the sum of the IMFs with a kurtosis value greater than three is called the signal residual $x_{\mathrm{res}}$ [8]. After determining the residual, the kurtogram is computed to define the two pass frequencies of the digital band-pass filter: the limits of the interval with a high spectral kurtosis $K(f)$ are the pass frequencies. The kurtogram represents the variation of the spectral kurtosis as a function of the frequency $f$ and the bandwidth $\Delta f$ [9]; $K(f)$ is calculated by the following formula [9]:

$K(f) = \frac{\langle H^4(t, f)\rangle}{\langle H^2(t, f)\rangle^2} - 2$,  (7)

where $H(t, f)$ is the complex envelope of the signal at frequency $f$, estimated by the short-time Fourier transform [9].

Step 4: calculation of the envelope spectrum of the filtered signal $x_{\mathrm{resf}}$ by the Hilbert transform $H[x_{\mathrm{resf}}(t)]$ and the Fourier transform. The absolute value $E(t)$ of the analytic signal $z(t)$ is the envelope of $x_{\mathrm{resf}}$ and $E(f)$ is its spectrum; $E(t)$ and $E(f)$ are calculated by the following equations [10]:

$H[x_{\mathrm{resf}}(t)] = x_{\mathrm{resf}}(t) * \frac{1}{\pi t}$,  (8)

where $*$ is the convolution product;

$z(t) = x_{\mathrm{resf}}(t) + j\,H[x_{\mathrm{resf}}(t)]$,  (9)
$z(t) = E(t)\, e^{j\varphi(t)}$,  (10)
$x_{\mathrm{resf}}(t) = \Re\{z(t)\} = E(t)\cos(\varphi(t))$,  (11)
$E(t) = \left| x_{\mathrm{resf}}(t) + j\,H[x_{\mathrm{resf}}(t)] \right|$,  (12)
$E(f) = \int_{-\infty}^{+\infty} E(t)\, e^{-j 2\pi f t}\, \mathrm{d}t$.  (13)

Step 5: the peak is the maximum amplitude value of the envelope spectrum, found by the following formula [11]:

$Peak = \max |E(f)|$.  (14)

The relationship between the peak frequencies and the kinematic frequencies of the machine components is the result of the diagnosis, which ensures the detection of the faulty component.

3. Vibration signals

In this section we analyse real vibration signals of the ball bearing type 6025-SKF, which supports the shaft of an electric motor at the drive end and is subjected to different speeds and loads. Among the signals available on the CWRU platform, two fault signals are taken; the characteristics of each signal are given in Table 1 [12]. The fault frequencies of the bearing components are multiples of the rotation frequency in Hz with the coefficients shown in Table 2 [12].

Table 1. Vibration signals.
Signal | Load | Speed | Defect diameter | Sampling frequency
Inner race (107.mat) | 1491.4 N·m/s | 1750 rpm | 0.1778 mm | 12 kHz
Inner race (211.mat) | 1491.4 N·m/s | 1750 rpm | 0.5334 mm | 12 kHz

Table 2. Defect frequencies.
Component | Coefficient | Frequency (Hz)
Inner race | 5.4152 | 157.94
Outer race | 3.5848 | 104.55
Cage | 0.39828 | 11.61
Rolling element | 4.7135 | 137.47
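Steps 2 to 5 condense into a short numerical sketch. The Python snippet below is an illustrative translation, not the paper's MATLAB code: it keeps the modes with kurtosis above three, sums them into the residual, band-pass filters the residual with a Butterworth filter between the two pass frequencies, and computes the envelope spectrum via the Hilbert transform (equations (6) and (8)-(14)). The kurtogram-based choice of the pass band is assumed to have been made already; the band [3250, 4250] Hz in the usage comment is the one found for case 1 below.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.stats import kurtosis

def residual_from_modes(modes):
    """Steps 2-3: sum the modes whose kurtosis exceeds 3 (eq. (6));
    fisher=False gives the plain fourth-moment definition."""
    keep = [m for m in modes if kurtosis(m, fisher=False) > 3.0]
    return np.sum(keep, axis=0)

def envelope_spectrum(x, fs, band):
    """Steps 3-5: Butterworth band-pass between the two pass
    frequencies, Hilbert envelope (eq. (12)), its spectrum
    (eq. (13)) and the peak frequency (eq. (14))."""
    b, a = butter(4, [band[0], band[1]], btype="bandpass", fs=fs)
    xf = filtfilt(b, a, x)
    env = np.abs(hilbert(xf))                 # E(t)
    env = env - env.mean()                    # drop the DC line of the spectrum
    E = np.abs(np.fft.rfft(env)) / env.size   # |E(f)|
    f = np.fft.rfftfreq(env.size, 1.0 / fs)
    return f, E, f[np.argmax(E)]

# usage with the modes of the previous sketch:
# x_res = residual_from_modes(modes)
# f, E, f_peak = envelope_spectrum(x_res, fs=12_000, band=(3250, 4250))
# print(f_peak)  # compare with the fault frequencies of Table 2
```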
4. Results and discussion

4.1. Case 1

Applying the proposed approach to the inner-race fault signal (107.mat), we start with the estimation of the FMD parameters: the number of modes obtained by applying the EMD algorithm to 107.mat equals 13, so $n_{\mathrm{FMD}} \approx 6$. The variation of $TC$ as a function of $L$ is represented in Figure 4; the maximum cross-correlation value $TC = 893.8$ occurs at $L = 32$, the interval of $L$ being scanned with a step $P = 2$. The decomposition of the vibration signal 107.mat by the FMD method gives six modes, represented in Figure 5; the spectrum of each mode is illustrated in Figure 6, and the kurtosis values of each mode are shown in Table 3.

Table 3. Kurtosis values.
IMF | Kurtosis
1 | 2.1
2 | 5.8
3 | 6.3
4 | 8.8
5 | 5.7
6 | 5.0

The residual of the original signal is the sum of modes 2 to 6:

$x_{\mathrm{res}}(t) = \sum_{i=2}^{6} \mathrm{IMF}_i(t)$.  (15)

The spectrum and the time signal of $x_{\mathrm{res}}(t)$ are represented in Figure 7. The residual spectrum is complex and rich in information, as shown in Figure 7, and the interpretation of $x_{\mathrm{res}}(f)$ is therefore difficult; for this reason, band-pass filtering with a Butterworth filter is applied, the two pass frequencies of the filter being defined from the kurtogram of $x_{\mathrm{res}}(t)$. The kurtogram in Figure 8 shows a maximum spectral kurtosis $K_{\max} = 0.5$ at the centre frequency $f_c = 3750$ Hz with a bandwidth $B_w = 500$ Hz at level $l = 3.5$, so the two pass frequencies are $F_p = [3250; 4250]$ Hz. The envelope spectrum $E(f)$ of the filtered signal $x_{\mathrm{resf}}(t)$ in Figure 9 shows a peak $\max|E(f)| = 0.661$ at the frequency $f = 157.5$ Hz, a value close to the inner-race fault frequency ($F_{\mathrm{inner\ race}} \approx f$).

Figure 4. The variation of $TC$.
Figure 5. The temporal variation of the modes.
Figure 6. The spectrum of the modes.
Figure 7. The residual.
Figure 8. Kurtogram of $x_{\mathrm{res}}(t)$.

4.2. Case 2

In this case the vibration signal 211.mat is analysed by the proposed approach; the different steps are summarized as follows. The EMD decomposition of the signal yields 14 modes, and the maximum cross-correlation value $TC = 329.55$ occurs at $L = 30$, as shown in Figure 10; the FMD parameters estimated according to the proposed criteria are therefore $n_{\mathrm{FMD}} = 7$, $L = 30$, $K = 8$. The residual equals the sum of all modes (1 to 7), since all kurtosis values exceed three, as shown in Table 4:

$x_{\mathrm{res}}(t) = \sum_{i=1}^{7} \mathrm{IMF}_i(t)$.  (16)

We observe in the spectrum $x_{\mathrm{res}}(f)$ (Figure 11) a very high-amplitude peak at the inner-race fault frequency. From the kurtogram of $x_{\mathrm{res}}(t)$, the two pass frequencies are $F_p = [5375; 5875]$ Hz. In the envelope spectrum $E(f)$ (Figure 12), a high-amplitude peak is found at the rotation frequency (29 Hz) and a second peak at the inner-race fault frequency (157.5 Hz).

5. Conclusion

In this paper, we analyse two inner-race fault vibration signals of a ball bearing with a newly proposed approach, which consists of a set of signal processing methods organized in the form of steps.
From the final results of the proposed approach, we conclude that the new criteria formulated to estimate the input parameters of the FMD algorithm show good efficiency, since the number of modes and the filter length are the two essential parameters that ensure signal decomposition by FMD; these parameters also guarantee the reduction of the equality constraint between the original signal and the set of modes obtained after decomposition. The selection of useful modes through parameters such as the kurtosis is a very important step to extract the information related to defects. Envelope analysis is the last step of the proposed approach and reveals high-amplitude peaks at the fault frequency; it combines band-pass filtering, the Hilbert transform and the Fourier transform, the two pass frequencies of the filter being defined in this study by the maximum values of the spectral kurtosis in the kurtogram.

Figure 9. $E(t)$ and $E(f)$.
Figure 10. $TC$ as a function of $L$.
Figure 11. $x_{\mathrm{res}}(t)$ and $x_{\mathrm{res}}(f)$.
Figure 12. $E(t)$ and $E(f)$.

Table 4. Kurtosis values.
IMF | Kurtosis
1 | 3.5
2 | 9.7
3 | 10
4 | 10.6
5 | 10.2
6 | 10.2
7 | 16

References
[1] Deepam Goyal, B. S. Pabla, Condition based maintenance of machine tools — a review, CIRP Journal of Manufacturing Science and Technology 10 (2015), pp. 24-35. doi: 10.1016/j.cirpj.2015.05.004
[2] A. K. S. Jardine, Daming Lin, D. Banjevic, A review on machinery diagnostics and prognostics implementing condition-based maintenance, Mechanical Systems and Signal Processing 20.7 (2006), pp. 1483-1510. doi: 10.1016/j.ymssp.2005.09.012
[3] Fereshteh Yousefi Rizi, A review of notable studies on using empirical mode decomposition for biomedical signal and image processing, Signal Processing and Renewable Energy 3.4 (2019), pp. 89-113. Online [accessed 16 March 2023] https://spre.stb.iau.ir/article_669673.html
[4] Y. Miao, B. Zhang, C. Li, J. Lin, D. Zhang, Feature mode decomposition: new decomposition theory for rotating machinery fault diagnosis, IEEE Transactions on Industrial Electronics 70.2 (2022), pp. 1949-1960. doi: 10.1109/tie.2022.3156156
[5] Yang Liu, Jigou Liu, R. Kennel, Frequency measurement method of signals with low signal-to-noise-ratio using cross-correlation, Machines 9.6 (2021), 123. doi: 10.3390/machines9060123
[6] Shaopeng Liu, Shumin Hou, Kongde He, Weihua Yang, L-kurtosis and its application for fault detection of rolling element bearings, Measurement 116 (2018), pp. 523-532. doi: 10.1016/j.measurement.2017.11.049
[7] A. J. Oyobé Okassa, C. Welba, J. P. Ngantcha, Extraction of vibration signal parameters by DWT for a new approach to using kurtosis, 2021. Online [accessed 16 March 2023] https://www.researchgate.net/publication/353546792
[8] Hafida Mahgoun, Rais Elhadi Bekka, Ahmed Felkaoui, Gearbox fault diagnosis using ensemble empirical mode decomposition (EEMD) and residual signal, Mechanics & Industry 13.1 (2012), pp. 33-44. doi: 10.1051/meca/2011150
[9] J. Antoni, Fast computation of the kurtogram for the detection of transient faults, Mechanical Systems and Signal Processing 21.1 (2007), pp. 108-124. doi: 10.1016/j.ymssp.2005.12.002
[10] D. Wang, Q. Miao, X. Fan, H.-Zh. Huang, Rolling element bearing fault detection using an improved combination of Hilbert and wavelet transforms, Journal of Mechanical Science and Technology 23.12 (2009), pp. 3292-3301. doi: 10.1007/s12206-009-0807-4
[11] Mohamad Hazwan Mohd Ghazali, Wan Rahiman, Vibration analysis for machine monitoring and diagnosis: a systematic review, Shock and Vibration 2021, article ID 9469318, 25 pp. doi: 10.1155/2021/9469318
[12] Case Western Reserve University, Bearing database. Online [accessed 16 March 2023] https://engineering.case.edu/bearingdatacenter

Metrological characterization of instruments for body impedance analysis

ACTA IMEKO
ISSN: 2221-870X
September 2022, Volume 11, Number 3, 1-7

Valerio Marcotuli1, Matteo Zago2, Alex P. Moorhead1, Marco Vespasiani3, Giacomo Vespasiani3, Marco Tarabini1
1 Department of Mechanical Engineering, Politecnico di Milano, Via Privata Giuseppe La Masa 1, 20156 Milan, Italy
2 Faculty of Exercise and Sports Science, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
3 Technical Department, MetaDieta S.r.l., Via Antonio Bosio 2, 00161 Rome, Italy

Section: Research paper
Keywords: bioimpedance; body composition; measurement uncertainty; calibration; multivariate linear regression
Citation: Valerio Marcotuli, Matteo Zago, Alex P. Moorhead, Marco Vespasiani, Giacomo Vespasiani, Marco Tarabini, Metrological characterization of instruments for body impedance analysis, Acta IMEKO, vol. 11, no. 3, article 14, September 2022, identifier: IMEKO-ACTA-11 (2022)-03-14
Section Editor: Francesco Lamonaca, University of Calabria, Italy
Received October 7, 2021; in final form August 31, 2022; published September 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Valerio Marcotuli, e-mail: valerio.marcotuli@polimi.it

Abstract
Body impedance analysis (BIA) is used to evaluate the human body composition by measuring the resistance and reactance of human tissues with a high-frequency, low-intensity electric current. Nonetheless, the estimation of the body composition is influenced by many factors: body status, environmental conditions, instrumentation, and measurement procedure. This work studies the effect of the connection cables, conductive electrodes, adhesive gel, and BIA device characteristics on the measurement uncertainty. Tests were initially performed on electric circuits with passive elements and on a jelly phantom simulating the body characteristics. Results showed that the cables mainly contribute to increasing the error on the resistance measurement, while the electrodes and the adhesive introduce a negligible disturbance into the measurement chain. This paper also proposes a calibration procedure based on a multivariate linear regression to compensate for the systematic errors of BIA devices.

1. Introduction

Body composition describes the main components of the human body in terms of fat-free mass (FFM), fat mass (FM) or their ratio FFM/FM. The analysis of body composition is used in different fields, such as biology and medicine, to estimate the nutritional status, muscular volume variations and potentially even pathological status. For example, physiological aging leads to a reduction of FFM and muscular mass, while fat increases and is redistributed over the body areas [1]. Different levels of body composition (atomic, molecular, cellular, tissular and global) can be analyzed depending on the measurement method [2]. The body mass index (BMI) is a generic indicator of body composition, but it tends to give inaccurate information when subjects are highly overweight or obese; in fact, malnutrition may exist yet be masked by the high amount of fat mass [3]. One solution for measuring body composition is dual-energy X-ray absorptiometry (DXA).
DXA is an imaging technique, similar to magnetic resonance imaging (MRI), which scans the patient with two beams of X-rays of different energy (usually 40 and 70 kV). In recent years, DXA has become recognized as the "gold standard" for measuring body composition [4]. It evaluates both the global and the regional distribution of the three main body components: bone mineral content (BMC), FM and FFM. The accuracy of DXA makes it very effective in studying patient composition within specific body regions and evaluating their effect on the patient's health [5]. Unfortunately, a DXA machine is expensive ($20,000+), making it typically available only at large infrastructures such as clinics and hospitals. An alternative technique is bioelectrical impedance analysis (BIA): this employs a low-intensity alternating current (AC) at a high frequency of 50 kHz, transmitted across the body to estimate its composition based on the hydration level of the tissues [6]. BIA allows quick examinations, and it is much less expensive than DXA. Additionally, BIA is less dangerous than DXA, as it does not use X-rays, meaning it can be repeated several times with no contraindications. Nevertheless, BIA can be highly affected by many factors, such as altered hydration of the subject, measurement conditions, ethnic background, and health conditions [7]. BIA devices measure the magnitude of the impedance opposed to the current, which varies with the body anatomy. Specifically, the physical principle assumes that the body is made up of tissues with different compositions: some tissues are good conductors due to their water content, while others are insulators. The water content is inversely related to the resistance that opposes the current flow. On the other hand, cellular membranes, able to accumulate electrical charge, can be considered capacitors; their presence is directly proportional to the reactance and introduces an observable delay in the current flow. The combination of the resistance and the reactance defines the impedance. Its evaluation indicates the body hydration and provides an estimate of the nutritional state equivalent to the cellular amount. Since water is the main component of the cells and is almost absent in fat, it is possible to deduce the amount of FFM from the water content; consequently, FM is evaluated by simply subtracting the FFM from the total weight [8].
1.1. Fricke's circuit: an electrical model of the human body

The human body can be modeled as a set of resistances and capacitances connected in parallel or in series. The most common body model used in the field of BIA is Fricke's circuit, whose two parallel branches represent the intracellular and extracellular components. In this model, a high-frequency current passes through the intracellular water, while at low frequency the current passes through the extracellular space: at zero or low frequency, the current does not penetrate the cell membrane (which acts as an insulator) and flows through the extracellular medium made of water and sodium [9]. The intracellular behavior can in turn be modeled as a resistance $R_i$ (due to the water and potassium content) and a capacitive reactance $X_c$ of the cell membrane, while the extracellular behavior is described by a single resistance $R_e$, as shown in Figure 1. The total body resistance $R$ measured by a BIA instrument is a combination of the two resistances $R_i$ and $R_e$, which form the real part of a complex number [10]. Generally, the phasor and other indices such as the ratio $R_i/R_e$ can be good estimators of disease presence, nutritional status, and hydration condition [11].

Figure 1. Fricke's circuit model for body composition, consisting of two branches related to the intracellular and extracellular behaviors.

1.2. The calibration plots

The Cole-Cole plot is commonly used to visualize the electrical response of body measurements, with the resistance $R$ on the x-axis and the negative reactance $X_c$ on the y-axis. At extremely high (ideally infinite) frequency, the intracellular branch is the only one conducting, with the minimum resistance value $R_i$. At low or zero frequency, the current passes only through the extracellular space, since the cell membranes act as insulators; consequently, $R_e$ is the maximum resistance value. The relationship between the reactance $X_c$ and the total resistance $R$ of a body can be expressed by a phase angle $\varphi$ [12]. The resulting phasor, ranging from $R_i$ to $R_e$, therefore describes an arc segment, as shown in Figure 2, and all measured values lie below it. This plot can be standardized with respect to height, gender, and ethnicity to form a calibration model divided into adjacent areas contained in tolerance ellipses at 50 %, 75 %, and 95 % belonging to a certain population group (Figure 3 shows an example of a calibration model standardized by the height h) [13]. The plot is used as a calibration map by companies for converting a measurement performed with a device into body-status information [14]. If the BIA device displays low measurement accuracy, the results can be misleading.

Figure 2. Example of Cole-Cole plot of the Fricke's circuit.
Figure 3. Example of a calibration model standardized by the height (h).
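To see the Fricke model and the Cole-Cole arc of Sections 1.1-1.2 numerically, the sketch below evaluates the complex impedance of the two-branch circuit over frequency; plotting the real part against the negative imaginary part traces the arc between the low- and high-frequency resistance limits. This is illustrative Python with made-up component values, not data or code from this study.

```python
import numpy as np

def fricke_impedance(freq_hz, Re, Ri, C):
    """Complex impedance of Fricke's circuit: extracellular resistance
    Re in parallel with the series of intracellular resistance Ri and
    the membrane capacitance C."""
    w = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    Z_intra = Ri + 1.0 / (1j * w * C)
    return Re * Z_intra / (Re + Z_intra)

# illustrative component values (not from the paper)
Re, Ri, C = 700.0, 300.0, 3e-9
f = np.logspace(2, 7, 200)              # 100 Hz .. 10 MHz sweep
Z = fricke_impedance(f, Re, Ri, C)

# Cole-Cole coordinates: R on the x-axis, -Xc on the y-axis
R, minus_Xc = Z.real, -Z.imag
print(Z[np.argmin(np.abs(f - 50e3))])   # impedance near the 50 kHz working point
```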
1.3. Measurement uncertainty

The biasing factors in bioimpedance estimation can be attributed to the subject (the measurand is not constant), to the measurement protocols, and to the instrumentation [15]. In this study we investigate the possible sources of error of the BIA instrumentation, consisting of a control unit, cables, and electrodes. The control unit is composed of electronic circuitry placed in a case with one or more ports for connecting the cables. Even if protected, the circuitry is subject to thermal, electrical, and magnetic disturbances [16], [17]. The identification of these disturbances is essential for the performance of the devices and to improve competitiveness in the market. For this reason, the control unit and the accessories should be metrologically characterized through a specific test for each possible source of error [18], [19]. Moreover, if the disturbances are properly identified, a corrective calibration strategy can be applied [20], [21].

2. Materials and methods

2.1. Instrumental equipment

The instrumentation selected for this study consists of a BIA device (MetaDieta BIA), three sets of cables, three sets of electrodes from different manufacturers, a series of resistors and capacitors, and a breadboard. MetaDieta BIA (Figure 4) is an electromedical device for the evaluation of body composition, manufactured by Meteda S.r.l. (Rome, Italy). The bioimpedance is measured by placing four electrodes on the hands and feet, with a single cable connected to the main unit. The impedance value is computed from the response to a sinusoidal current of 350 μA with a frequency of 50 kHz (a de facto standard for most single-frequency BIA devices). The device has a size of 43 × 43 × 12 mm³ and a mass of 50 g; a lithium battery can supply the device for up to 14 hours in working conditions. It does not have a screen on the control unit, but it can be managed by an application running on phones, tablets, and computers via a Bluetooth connection. The device is designed to be used in clinics by physicians, nutritional biologists and qualified sanitary personnel, but also by consumers in a home environment. The application guides the user through the preparation and execution of the test measurement, then sends and stores the data on the cloud for later analysis. Measurements are processed in the cloud application, and the results can be either quantitative, for clinical personnel, or qualitative, displayed in graphs along with trends, for individual users. The additional equipment for the tests comprises three cables of the same model connecting the main unit to the four electrode clamps, and a series of electrodes from three producers: Biatrodes® by Akern S.r.l. (Firenze, Italy), BIA electrodes by RJL Systems Inc. (Clinton Twp, MI, USA), and Regal™ Resting ECG by Vermed® Inc. (Bells Falls, VT, USA).

Figure 4. Picture of the MetaDieta BIA control unit.

2.2. Proposed method

The first operation to perform with a measurement device is the metrological characterization in terms of repeatability and reproducibility, after the identification of the possible sources of error [22]. Generally, this kind of device makes use of empirical equations whose parameters are established by a calibration performed in the laboratory [23]. Since the calibration curves can assume a large set of values, the process can be simplified by studying a group of key values. This research proposes a data selection based on six values of resistance between 200 Ω and 900 Ω with a step of 140 Ω, combined with six values of reactance between 15 Ω and 115 Ω with a step of 20 Ω. These values are represented in the grid in Figure 5. To assemble a physical circuit starting from the reactance values, suitable capacitors can be identified by converting $X_c$ into a capacitance $C$ with the formula

$C = \frac{1}{2 \pi f X_c}$,  (1)

where $f$ is the frequency of the AC generated by the MetaDieta BIA device, i.e. 50 kHz. The capacitance values obtained after the conversion are therefore: 212 nF, 91 nF, 58 nF, 42 nF, 34 nF, and 28 nF.

Figure 5. Calibration grid with the 36 combinations of the selected key values.
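As a quick numeric check of equation (1), the conversion from the six selected reactance values to capacitances can be reproduced in a few lines; this Python sketch is for illustration only and is not part of the authors' procedure.

```python
import numpy as np

f = 50e3                                   # BIA excitation frequency in Hz
Xc = np.array([15, 35, 55, 75, 95, 115])   # selected reactances in ohm

C = 1.0 / (2.0 * np.pi * f * Xc)           # equation (1)
print(np.round(C * 1e9))                   # -> approx. [212., 91., 58., 42., 34., 28.] nF
```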
By combining the values of resistance and capacitance, we defined a grid of 36 combinations and evaluated the measurement repeatability and reproducibility in each condition. The procedure also allows identifying compensation functions that reduce the systematic errors affecting the reading [24].

2.3. Experimental design

The MetaDieta BIA turns on when the cables are inserted in the mini-USB port, and the connection is initiated by the application on a master device. Measurements are typically performed by placing four electrodes on the hands and feet; the electrodes are silver-plated for a low resistance and attached to the skin using an adhesive gel. However, for consistency, all the experiments were performed on laboratory instrumentation with electric circuits representing the body composition through Fricke's model, so the electrodes were included only in specific tests. The tests were performed in the MetroSpace lab of Politecnico di Milano and can be divided into:
1. preliminary tests for the metrological characterization of the MetaDieta BIA device, cables, electrodes, and adhesive gel;
2. tests for systematic error compensation based on the calibration grid in Figure 5.
A high-precision LCR meter, model LCR-819 GW Instek (Good Will Instrument Co., Ltd, Taiwan), was used as the reference system for measuring the impedance of the test components, while a multimeter, model Agilent 34401A, was used for the resistance-only measurements of the electrical components.

2.4. Preliminary tests

First, the measurement repeatability of the control unit was tested by performing 30 measurements of the resistance $R$ and reactance $X_c$ on five different electric circuits, connecting the cable clamps directly to the circuit with no other modification between one test and the next. The three different cables of the same model were tested with 30 measurements each with the LCR meter, on the same electric circuit, directly connecting the clamps of the cables. Keeping the same configuration, the effect of the electrodes was studied by applying these components, without the adhesive material, between the clamps and the electric circuit with passive elements: a total of 30 different sets of electrodes of the three manufacturers were tested, 4 electrodes per set. At the same time, the resistance $R$ of the cables and the electrodes was measured 30 times for each component with the multimeter. The variability of the electrical resistance of the electrodes was estimated by placing the multimeter terminals in two positions, on the tab and on an opposite area far from it (circled in Figure 6). The effect of the adhesive gel, which determines the interaction between the BIA device and a biological tissue, was simulated by means of a jelly phantom (Figure 7) with nominal resistance $R_{ph} = (571.2 \pm 1.2)$ Ω (C.I. = 68 %) and nominal reactance $X_{c,ph} = (75.1 \pm 1.9)$ Ω (C.I. = 68 %) [25]. For this test, 30 measurements for each manufacturer's electrodes were performed to calculate the mean values of the resistance $\bar{R}$ and reactance $\bar{X}_c$ and the corresponding standard deviations. The four electrodes were positioned at the edges of the container, one couple on the left side and the other couple on the right side, at a distance of about 30 cm.
The distance between the two electrodes of each couple was about 10 cm, as recommended by the manufacturer. This configuration, with the dominant distance (30 cm > 10 cm) between the two couples of electrodes, aimed to replicate the measurement behaviour on a human body, avoiding uncontrolled dispersion of the electric charge.

Figure 6. Area of the electrode used for measuring the resistance.
Figure 7. Preliminary test of the electrodes on a jelly phantom.

2.5. Tests for systematic error compensation

A set of 36 circuits with passive elements was built by combining components whose resistance and capacitance, collected in Table 1, match the key values of the calibration grid in Figure 5; Table 1 also includes the reactance values obtained by inverting equation (1). The resistors have a manufacturing tolerance of 0.1 %, whereas the capacitors have a tolerance of 1 %. The circuits were mounted on a breadboard, and the values read by the MetaDieta BIA device were compared to the values read by the LCR meter as references [26]. The differences between the measured and reference values allowed calculating the RMSE and checking for patterns related to systematic disturbances. Part of these disturbances was removed by adding two correction terms, $R_a$ and $X_{c,a}$, obtained by a least-squares minimization of a multivariate linear model, to the generic measurements $R$ and $X_c$, in the form

$R_{adj} = R + R_a$  (2)

and

$X_{c,adj} = X_c + X_{c,a}$,  (3)

where $R_{adj}$ and $X_{c,adj}$ are the compensated results.

Table 1. Resistances and capacitances of the selected components and the reactance values after conversion, for the calibration map experiments.
Component | 1 | 2 | 3 | 4 | 5 | 6
R in Ω | 200 | 330 | 470 | 615 | 780 | 910
C in nF | 225 | 92 | 51 | 36 | 32 | 27
Xc in Ω | 14 | 35 | 56 | 89 | 99 | 120

3. Results

3.1. Preliminary tests

The results of the repeatability test of the control unit on the 5 electric circuits, with 30 measurements performed on each circuit, are shown in Table 2: $R$ and $X_c$ are the key values chosen for the experiments, $R_{ref}$ and $X_{c,ref}$ are the reference values read by the LCR meter, and $\bar{R}$ and $\bar{X}_c$ are the mean values read by the MetaDieta BIA device, with $\sigma_R$ and $\sigma_{X_c}$ the corresponding standard deviations. The three tested cables showed a standard deviation of the resistance of $\sigma_R = 1.8$ Ω, while the standard deviation of the reactance was $\sigma_{X_c} = 0.1$ Ω. From these values, the uncertainties $u_R = \sigma_R/\sqrt{30} = 0.33$ Ω and $u_{X_c} = \sigma_{X_c}/\sqrt{30} = 0.018$ Ω (C.I. = 68 %) were evaluated (this recurring computation is sketched below). The electrodes without the adhesive gel were tested with the MetaDieta BIA device on a circuit with nominal resistance $R = (617.812 \pm 0.011)$ Ω (C.I. = 68 %) and equivalent reactance $X_c = (90.137 \pm 0.019)$ Ω (C.I. = 68 %). The mean and the standard deviation of the resistance and the reactance are reported in Table 3. The maximum standard deviations were reported by the RJL Systems electrodes, equal to $\sigma_R = 0.5$ Ω and $\sigma_{X_c} = 0.1$ Ω, with corresponding uncertainties $u_R = \sigma_R/\sqrt{30} = 0.091$ Ω and $u_{X_c} = \sigma_{X_c}/\sqrt{30} = 0.018$ Ω (C.I. = 68 %). The resistance-only measurements of the same electrodes performed with the multimeter are reported in Table 4; in this case, both the RJL Systems and the Vermed® electrodes reported a maximum standard deviation of $\sigma_R = 0.4$ Ω and an uncertainty of $u_R = \sigma_R/\sqrt{30} = 0.073$ Ω (C.I. = 68 %). The results of the last preliminary test, on the jelly phantom, are reported in Table 5.
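The repeatability figures quoted throughout this section all follow the same pattern: the sample standard deviation of 30 repeated readings and the standard uncertainty of the mean, $u = \sigma/\sqrt{n}$, quoted at a 68 % confidence interval. A minimal Python sketch, with simulated readings standing in for the actual data:

```python
import numpy as np

def repeatability(readings):
    """Sample standard deviation and standard uncertainty of the mean
    (u = sigma / sqrt(n), quoted at C.I. = 68 %)."""
    readings = np.asarray(readings, dtype=float)
    sigma = readings.std(ddof=1)
    return sigma, sigma / np.sqrt(readings.size)

# simulated stand-in for 30 repeated phantom readings (not measured data)
rng = np.random.default_rng(0)
readings = 573.2 + 0.1 * rng.standard_normal(30)
sigma, u = repeatability(readings)
print(f"sigma = {sigma:.2f} ohm, u = {u:.3f} ohm")
```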
All three electrode samples showed a standard deviation of $\sigma_R = 0.1$ Ω with an uncertainty of $u_R = \sigma_R/\sqrt{30} = 0.018$ Ω (C.I. = 68 %), whereas the Akern and Vermed® electrodes reported a standard deviation different from zero and equal to $\sigma_{X_c} = 0.1$ Ω, corresponding to an uncertainty of $u_{X_c} = \sigma_{X_c}/\sqrt{30} = 0.018$ Ω (C.I. = 68 %).

Table 2. Results of the repeatability test of the control unit on 5 electric circuits.
R in Ω | Xc in Ω | Rref in Ω | Xc,ref in Ω | R̄ in Ω | σR in Ω | X̄c in Ω | σXc in Ω
200 | 15 | 200.1 | 17.9 | 202.7 | 0.0 | 18.9 | 0.1
200 | 75 | 191.4 | 92.3 | 193.5 | 0.1 | 88.7 | 0.1
340 | 115 | 330.5 | 124.4 | 333.4 | 0.1 | 116.1 | 0.0
620 | 75 | 617.8 | 90.1 | 622.1 | 0.0 | 80.2 | 0.0
900 | 95 | 910.9 | 101.1 | 916.9 | 0.0 | 98.3 | 0.0

Table 3. Results of the repeatability test of the three producers' electrodes, without the adhesive gel, with the MetaDieta BIA device.
Manufacturer | R̄ in Ω | σR in Ω | X̄c in Ω | σXc in Ω
Akern | 619.2 | 0.2 | 91.4 | 0.0
RJL Systems | 619.5 | 0.5 | 91.4 | 0.1
Vermed® | 619.4 | 0.1 | 91.5 | 0.0

Table 4. Results of the repeatability test of the three producers' electrodes, without the adhesive gel, with the multimeter Agilent 34401A.
Manufacturer | R̄ in Ω | σR in Ω
Akern | 1.6 | 0.3
RJL Systems | 2.1 | 0.4
Vermed® | 2.1 | 0.4

Table 5. Results of the repeatability test of the three producers' electrodes, with the adhesive gel, on the jelly phantom with the MetaDieta BIA device.
Manufacturer | R̄ in Ω | σR in Ω | X̄c in Ω | σXc in Ω
Akern | 573.2 | 0.1 | 79.1 | 0.1
RJL Systems | 573.3 | 0.1 | 78.5 | 0.0
Vermed® | 573.2 | 0.1 | 78.9 | 0.1

3.2. Systematic error compensation

The measurements on the 36 electric combinations with the MetaDieta BIA device and the reference values are depicted in Figure 8. From these data, the RMSE of the 36 configurations resulted in $R_{RMSE} = 4.17$ Ω and $X_{c,RMSE} = 7.28$ Ω. The least-squares minimization of the multivariate linear regression returned the following correction terms:

$R_a = -1.592 + 0.994\,R + 0.002\,X_c + 2.45 \times 10^{-5}\,R\,X_c$  (4)

and

$X_{c,a} = -3.412 + 0.010\,R + 1.079\,X_c - 2.19 \times 10^{-5}\,R\,X_c$,  (5)

with $R$ and $X_c$ the actual values read by the BIA device. Furthermore, the multivariate linear regression reported adjusted R² values of $\bar{R}^2_R = 0.947$ for the resistance and $\bar{R}^2_{X_c} = 0.696$ for the reactance. Compensating the values in Figure 8 with the terms $R_a$ and $X_{c,a}$, the RMSE values decrease to $R_{RMSE} = 1.16$ Ω and $X_{c,RMSE} = 1.28$ Ω.

4. Discussion

The tests on the MetaDieta BIA device revealed that the cables, the silver-plated electrodes, and the gel have a negligible influence on the overall measurement chain: the cables showed uncertainties of $u_R = 3.3 \times 10^{-1}$ Ω (C.I. = 68 %) and $u_{X_c} = 1.8 \times 10^{-2}$ Ω (C.I. = 68 %), while the maximum uncertainties introduced by the electrodes were $u_R = 8.6 \times 10^{-2}$ Ω (C.I. = 68 %) and $u_{X_c} = 1.7 \times 10^{-2}$ Ω (C.I. = 68 %). The comparison between the three electrode models also showed that these elements have the same electric characteristics, for which the device performance does not change, as proved by Sanchez et al. [27]. Also, the tests of the gel on the jelly phantom did not report any significant influence, since the maximum uncertainties were $u_R = 1.7 \times 10^{-2}$ Ω (C.I. = 68 %) and $u_{X_c} = 1.7 \times 10^{-2}$ Ω (C.I. = 68 %). This means that the adhesive gel is essential for keeping the contact between the electrodes and the skin, but it does not add any relevant disturbance to the measurement process [28].
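The correction terms (4) and (5) amount to an ordinary least-squares fit on the regressors (1, R, Xc, R·Xc). A Python sketch of such a fit follows; the synthetic arrays stand in for the 36 device/reference pairs, and numpy's lstsq replaces whatever statistics package was actually used. Depending on convention, the fitted polynomial can be read either as the compensated value directly or as the additive correction of equations (2)-(3); the sketch fits the compensated value.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic stand-ins for the 36 (device reading, reference) pairs
R_ref = np.tile([200, 330, 470, 615, 780, 910], 6).astype(float)
Xc_ref = np.repeat([14, 35, 56, 89, 99, 120], 6).astype(float)
R = 1.005 * R_ref + 1.6 + rng.normal(0, 1.0, 36)   # device reads slightly high
Xc = 0.93 * Xc_ref + 3.2 + rng.normal(0, 1.0, 36)

# design matrix with intercept and the R*Xc interaction used in eqs. (4)-(5)
A = np.column_stack([np.ones(36), R, Xc, R * Xc])

coef_R, *_ = np.linalg.lstsq(A, R_ref, rcond=None)
coef_X, *_ = np.linalg.lstsq(A, Xc_ref, rcond=None)

R_adj, Xc_adj = A @ coef_R, A @ coef_X
rmse = lambda e: np.sqrt(np.mean(e ** 2))
print(rmse(R - R_ref), "->", rmse(R_adj - R_ref))  # RMSE before -> after
```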
The comparison between the reference values and the measurements with the BIA device in Figure 8 showed that the errors of the reactance and resistance tend to increase for the combinations with higher values. Nonetheless, the trend was corrected effectively by the multivariate linear regression: the two terms $R_a$ and $X_{c,a}$ decrease the errors to $R_{RMSE} = 1.16$ Ω and $X_{c,RMSE} = 1.28$ Ω. Moreover, from the expression of $R_a$ it is evident that the contribution of the read reactance is negligible; conversely, the read resistance value has a relevant influence on the compensation procedure.

5. Conclusions

BIA is an effective and valid tool to estimate body composition from a fast and safe single measurement. Nonetheless, the estimation can fail when the measurement conditions change or if the BIA device is poorly calibrated. In this paper, we evaluated the causes of variability of bioimpedance measurements. First, the equipment was metrologically characterized, showing that it does not influence the measurements significantly, with uncertainties lower than 0.35 Ω (C.I. = 68 %) for both resistance and reactance. As for the validation of BIA equations, it must be carried out against gold standards, even though these exhibit limitations due to hydration conditions, age, and ethnicity. This study proposed a calibration grid made of 36 configurations of key values. The grid allowed calculating multivariate linear models, minimizing the least-squares errors, which can be used to calibrate the MetaDieta BIA device. In the case study presented in this work, the bias error compensation reduced the RMSE from 4.2 Ω to 1.2 Ω for the resistance and from 7.3 Ω to 1.3 Ω for the reactance, with adjusted R² values of 0.947 and 0.696, respectively. Prospectively, the calibration maps can be extended to higher values, and the grid of key points can be further populated for more robust results.

References
[1] U. G. Kyle, L. Genton, D. O. Slosman, C. Pichard, Fat-free and fat mass percentiles in 5225 healthy subjects aged 15 to 98 years, Nutrition 17 (2001), pp. 534-541. doi: 10.1016/s0899-9007(01)00555-x
[2] H. C. Lukaski, Methods for the assessment of human body composition: traditional and new, Am. J. Clin. Nutr. 46 (1987), pp. 537-556. doi: 10.1093/ajcn/46.4.537
[3] A. Talluri, R. Liedtke, E. I. Mohamed, C. Maiolo, R. Martinoli, A. De Lorenzo, The application of body cell mass index for studying muscle mass changes in health and disease conditions, Acta Diabetol. 40 (2003). doi: 10.1007/s00592-003-0088-9
[4] A. Choi, J. Y. Kim, S. Jo, J. H. Jee, S. B. Heymsfield, Y. A. Bhagat, I. Kim, J. Cho, Smartphone-based bioelectrical impedance analysis devices for daily obesity management, Sensors (Switzerland) 15 (2015), pp. 22151-22166. doi: 10.3390/s150922151
[5] P. Pisani, A. Greco, F. Conversano, M. D. Renna, E. Casciaro, L. Quarta, D. Costanza, M. Muratore, S. Casciaro, A quantitative ultrasound approach to estimate bone fragility: a first comparison with dual X-ray absorptiometry, Meas. J. Int. Meas. Confed. 101 (2017), pp. 243-249. doi: 10.1016/j.measurement.2016.07.033
[6] M. Dehghan, A. T. Merchant, Is bioelectrical impedance accurate for use in large epidemiological studies?, Nutr. J. 7 (2008), pp. 1-7. doi: 10.1186/1475-2891-7-26
[7] U. G. Kyle, I. Bosaeus, A. D. De Lorenzo, P. Deurenberg, M. Elia, J. M. Gómez, B. L. Heitmann, L. Kent-Smith, J. C. Melchior, M. Pirlich, H. Scharfetter, A. M. W. J. Schols, C.
Pichard, Bioelectrical impedance analysis part II: utilization in clinical practice, Clin. Nutr. 23 (2004), pp. 1430-1453. doi: 10.1016/j.clnu.2004.09.012
[8] J. Hlubik, P. Hlubik, L. Lhotska, Bioimpedance in medicine: measuring hydration influence, J. Phys. Conf. Ser. 224 (2010), 012135. doi: 10.1088/1742-6596/224/1/012135
[9] F. Villa, A. Magnani, M. A. Maggioni, A. Stahn, S. Rampichini, G. Merati, P. Castiglioni, Wearable multi-frequency and multi-segment bioelectrical impedance spectroscopy for unobtrusively tracking body fluid shifts during physical activity in real-field applications: a preliminary study, Sensors (Switzerland) 16 (2016), pp. 1-15. doi: 10.3390/s16050673
[10] I. V. Krivtsun, I. V. Pentegov, V. N. Sydorets, S. V. Rymar, A technique for experimental data processing at modeling the dispersion of the biological tissue impedance using the Fricke equivalent circuit, Electr. Eng. Electromechanics 0 (2017), pp. 27-37. doi: 10.20998/2074-272x.2017.5.04

Figure 8. Comparison between the reference values provided by the LCR meter (blue dots) and the measurements performed with the MetaDieta BIA device (orange dots).

[11] S. Cigarrán Guldrís, Future uses of vectorial bioimpedance (BIVA) in nephrology, Nefrologia 31 (2011), pp. 635-643. doi: 10.3265/nefrologia.pre2011.oct.11108
[12] F. Savino, F. Cresi, G. Grasso, R. Oggero, L. Silvestro, The BIAgram vector: a graphical relation between reactance and phase angle measured by bioelectrical analysis in infants, Ann. Nutr. Metab. 48 (2004), pp. 84-89. doi: 10.1159/000077042
[13] R. González-Landaeta, O. Casas, R. Pallàs-Areny, Heart rate detection from plantar bioimpedance measurements, IEEE Trans. Biomed. Eng. 55 (2008), pp. 1163-1167. doi: 10.1109/tbme.2007.906516
[14] F. Ibrahim, M. N. Taib, W. A. B. Wan Abas, C. C. Guan, S. Sulaiman, A novel approach to classify risk in dengue hemorrhagic fever (DHF) using bioelectrical impedance analysis (BIA), IEEE Trans. Instrum. Meas. 54 (2005), pp. 237-244. doi: 10.1109/tim.2004.840237
[15] S. F. Khalil, M. S. Mohktar, F. Ibrahim, The theory and fundamentals of bioimpedance analysis in clinical status monitoring and diagnosis of diseases, Sensors (Switzerland) 14 (2014), pp. 10895-10928. doi: 10.3390/s140610895
[16] A. Ferrero, Measuring electric power quality: problems and perspectives, Meas. J. Int. Meas. Confed. 41 (2006), pp. 121-129. doi: 10.1016/j.measurement.2006.03.004
[17] G. M. D'Aucelli, N. Giaquinto, C. Guarnieri Calò Carducci, M. Spadavecchia, A. Trotta, Uncertainty evaluation of the unified method for thermo-electric module characterization, Meas. J. Int. Meas. Confed. 131 (2018), pp. 751-763. doi: 10.1016/j.measurement.2018.08.070
[18] M. Yang, Z. Guan, J. Liu, W. Li, X. Liu, X. Ma, J. Zhang, Research of the instrument and scheme on measuring the interaction among electric energy metrology of multi-user electric energy meters, Meas. Sensors 18 (2021), 100067. doi: 10.1016/j.measen.2021.100067
[19] E. Pittella, E. Piuzzi, E. Rizzuto, S. Pisa, Z.
Del Prete, Metrological characterization of a combined bio-impedance plethysmograph and spectrometer, Meas. J. Int. Meas. Confed. 120 (2018), pp. 221-229. doi: 10.1016/j.measurement.2018.02.032
[20] A. Ferrero, C. Muscas, On the selection of the "best" test waveform for calibrating electrical instruments under nonsinusoidal conditions, IEEE Trans. Instrum. Meas. 49 (2000), pp. 382-387. doi: 10.1109/19.843082
[21] B. Qi, X. Zhao, C. Li, Methods to reduce errors for DC electric field measurement in oil-pressboard insulation based on Kerr effect, IEEE Trans. Dielectr. Electr. Insul. 23 (2016), pp. 1675-1682. doi: 10.1109/tdei.2016.005507
[22] S. Corbellini, A. Vallan, Arduino-based portable system for bioelectrical impedance measurement, IEEE MeMeA 2014 Int. Symp. Med. Meas. Appl. Proc. (2014), pp. 4-8. doi: 10.1109/memea.2014.6860044
[23] T. Kowalski, G. P. Gibiino, J. Szewiński, P. Barmuta, P. Bartoszek, P. A. Traverso, Design, characterisation, and digital linearisation of an ADC analogue front-end for gamma spectroscopy measurements, Acta IMEKO 10 (2021) 2, pp. 70-79. doi: 10.21014/acta_imeko.v10i2.1042
[24] A. Ferrero, M. Lazzaroni, S. Salicone, A calibration procedure for a digital instrument for electric power quality measurement, IEEE Trans. Instrum. Meas. 51 (2002), pp. 716-722. doi: 10.1109/tim.2002.803293
[25] M. Peixoto, M. V. Moreno, N. Khider, Conception of a phantom in agar-agar gel with the same bio-impedance properties as human quadriceps, Sensors 21 (2021). doi: 10.3390/s21155195
[26] L. Cristaldi, A. Ferrero, S. Salicone, A distributed system for electric power quality measurement, IEEE Trans. Instrum. Meas. 51 (2002), pp. 776-781. doi: 10.1109/tim.2002.803300
[27] B. Sanchez, A. L. P. Aroul, E. Bartolome, K. Soundarapandian, R. Bragós, Propagation of measurement errors through body composition equations for body impedance analysis, IEEE Trans. Instrum. Meas. 63 (2014), pp. 1535-1544. doi: 10.1109/tim.2013.2292272
[28] T. Ouypornkochagorn, Influence of electrode placement error and contact impedance error to scalp voltage in electrical impedance tomography application, IEECON 2019, 7th Int. Electr. Eng. Congr. Proc. (2019). doi: 10.1109/ieecon45304.2019.8939016
Primary calibration of mechanical sensors with digital output for dynamic applications

ACTA IMEKO
ISSN: 2221-870X
September 2021, Volume 10, Number 3, 177-184

Benedikt Seeger1, Thomas Bruns1
1 Physikalisch-Technische Bundesanstalt, Bundesallee 100, 38116 Braunschweig, Germany

Section: Research paper
Keywords: primary calibration; MEMS; complex frequency response; digital interface; dynamic measurement
Citation: Benedikt Seeger, Thomas Bruns, Primary calibration of mechanical sensors with digital output for dynamic applications, Acta IMEKO, vol. 10, no. 3, article 24, September 2021, identifier: IMEKO-ACTA-10 (2021)-03-24
Section Editor: Francesco Lamonaca, University of Calabria, Italy
Received February 26, 2021; in final form July 9, 2021; published September 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This project 17IND12 Met4FoF has received funding from the EMPIR programme co-financed by the Participating States and from the European Union's Horizon 2020 research and innovation programme.
Corresponding author: Thomas Bruns, e-mail: thomas.bruns@ptb.de

Abstract
This article tackles the challenge of the dynamic calibration of modern sensors with integrated data sampling and purely digital output for the measurement of mechanical quantities like acceleration, angular velocity, force, pressure, or torque. Based on the established calibration methods using sine excitation, it describes an extension of the established methods and devices that yields primary calibration results for the magnitude and phase of the complex transfer function. The system is demonstrated with a focus on primary accelerometer calibrations but can easily be transferred to the other mechanical quantities. Furthermore, it is shown that the method can be used to investigate the quality and characteristics of the timing of the internal sampling of such digital-output sensors, and is thus able to gain crucial information for any subsequent phase-related measurements with such sensors.

1. Introduction

More and more applications and appliances in both the industrial and the consumer fields are integrating micromechanical sensors with a direct digital output. That means the measured value of a quantity like acceleration, air pressure or force is accessible as a binary number via a digital interface like SPI, I2C or CAN. Those values can be fed directly into control units or digital displays for further processing, without any need for traditional signal conditioning by amplifiers or sampling by discrete AD converters. This is very beneficial for the integration of measurement capabilities in all kinds of devices and systems, from simple telephones to UAVs (unmanned aerial vehicles, also known as drones) and beyond. Accordingly, the market for MEMS sensors has been booming for several years. In condition monitoring applications, e.g. on production machines, these sensors can be used to measure vibrations, also using information from the frequency domain [1], which makes dynamic calibration and absolute timestamping advantageous [2], [3]. From the metrology perspective, such sensors are rather inconspicuous as long as the focus is on static measurement.
As soon as the focus shifts onto dynamic signals, however, the timing of signal acquisition becomes highly important, and the integrated sampling and signal conditioning of such sensors create new challenges for the calibration lab. This starts with the fact that existing documentary standards and guidelines for dynamic calibration, like the current ISO 16063 series or [4], rely on the synchronised timing of the (analogue input) data acquisition channels, which is under the control of the calibration system. Recent work on the calibration of sensors for other mechanical quantities like force, torque or pressure relies on the same principles. While the calibration of the magnitude of a sensor's complex transfer function only needs the technical interfaces and a stationary excitation signal [5], the phase or group delay calibration depends strongly on either the synchronicity of the channels or other precise knowledge of the sample timing.

2. Extension to conventional primary calibration

A conventional analogue calibration system (ACS) in compliance with ISO 16063-11 or ISO 16063-21 is based on two data acquisition channels with common timing, usually provided by a central clock signal in a single data acquisition system. The two channels record the reference quantity $y_{\mathrm{ref}}$ (like acceleration, velocity, or displacement) and the calibration quantity (voltage, charge) as the output of the device under test (DUT). The clock is usually quite accurate, and the sampling of the acquired time series can be considered equidistant. Using a mono-frequent sinusoidal drive signal with angular frequency $\omega$ for the mechanical excitation (e.g. via a shaker), a linear sine approximation is used to quantify the reference input $y_{\mathrm{ref}}$ and the response $y_{\mathrm{dut}}$ of the DUT. Via linear least-squares fitting, the parameters in the following equations are determined:

$y_{\mathrm{ref}}(t_i) = A_{\mathrm{ref}} \sin(\omega t_i) + B_{\mathrm{ref}} \cos(\omega t_i) + C_{\mathrm{ref}}$,
$y_{\mathrm{dut}}(t_i) = A_{\mathrm{dut}} \sin(\omega t_i) + B_{\mathrm{dut}} \cos(\omega t_i) + C_{\mathrm{dut}}$,  (1)

where $A$ and $B$ can be considered quadrature components of each signal, while $C$ is introduced to cover any potential bias in the measurement. The complex transfer function $S$ of the DUT is then given in terms of magnitude $\hat{S}$ and phase $\Delta\varphi$ by

$S(\omega) = \hat{S}(\omega) \cdot e^{-i\,\Delta\varphi(\omega)}$ with

$\hat{S}(\omega) = \sqrt{\dfrac{A_{\mathrm{dut}}^2 + B_{\mathrm{dut}}^2}{A_{\mathrm{ref}}^2 + B_{\mathrm{ref}}^2}}$ and

$\Delta\varphi(\omega) = \arctan\!\left(\dfrac{B_{\mathrm{dut}}}{A_{\mathrm{dut}}}\right) - \arctan\!\left(\dfrac{B_{\mathrm{ref}}}{A_{\mathrm{ref}}}\right)$.  (2)
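Since the excitation frequency $\omega$ is known, the three-parameter sine approximation of equation (1) is an ordinary linear least-squares problem. The following minimal Python sketch is illustrative, not the PTB implementation; it uses arctan2 instead of the arctan written in equation (2) to avoid quadrant ambiguity.

```python
import numpy as np

def sine_fit(t, y, omega):
    """Least-squares fit of y(t) = A sin(wt) + B cos(wt) + C
    for a known angular frequency omega (three-parameter fit)."""
    M = np.column_stack([np.sin(omega * t), np.cos(omega * t), np.ones_like(t)])
    (A, B, C), *_ = np.linalg.lstsq(M, y, rcond=None)
    return A, B, C

def transfer_function(t, y_ref, y_dut, omega):
    """Equation (2): magnitude and phase of the DUT relative to the reference."""
    A_r, B_r, _ = sine_fit(t, y_ref, omega)
    A_d, B_d, _ = sine_fit(t, y_dut, omega)
    mag = np.sqrt((A_d**2 + B_d**2) / (A_r**2 + B_r**2))
    phase = np.arctan2(B_d, A_d) - np.arctan2(B_r, A_r)
    return mag, phase

# quick self-check with a synthetic 160 Hz excitation
fs, f0 = 50_000.0, 160.0
t = np.arange(0, 0.5, 1 / fs)
w = 2 * np.pi * f0
y_ref = 10.0 * np.sin(w * t)
y_dut = 9.0 * np.sin(w * t - 0.1) + 0.02   # gain 0.9, 0.1 rad lag, small bias
print(transfer_function(t, y_ref, y_dut, w))  # ~ (0.9, -0.1) with this sign convention
```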
although not self-evident from the equations, it is crucial for the correct calibration of the phase delay that the time base for the data acquisition (represented by $t_i$ in equation (1)) is the same for both channels. in figure 1, such an acs is represented on the top left. the channel for the dut is called "sync" in this schematic for reasons that will soon become clear.

2.1. sample timestamping

for the case of a digital output sensor (dos), the prerequisite last mentioned in the previous paragraph is no longer fulfilled. neither is the sample clock frequency (or the actual instance of sampling) under the control of the laboratory's acs, nor is the exact instance of sampling even known to the measurement system. a reliable phase calibration under such circumstances requires an extension of the acs in order to mitigate the lack of knowledge and control. for the rest of this section it is presumed that the dos provides a "data ready" signal, that is, a means to signal to any connected recording system that a new sample has been acquired and is ready for delivery. this signal is active whenever a new sample was acquired by the sensor. the term "sample" in relation to the dos refers to one set of values of the measurands of the dos. these values are nominally associated with the same single point in time; e.g., for a three-component accelerometer, a sample could be a tuple $(a_x, a_y, a_z)$. the complications compared to a conventional analogue sensor are threefold:
1. the data interface is digital and cannot be read out by the classic acs.
2. the precise sample clock is unknown and asynchronous in relation to the acs's internal clock.
3. the sampling instances of the dos are not necessarily equidistant.
to overcome these complications and accomplish a phase calibration of a dos with data ready output, the acs is extended with a separate digital acquisition unit (dau), as shown in the top right part of figure 1.

figure 1. the conventional analogue calibration system (acs) on the top left, extended by the digital acquisition unit (dau) on the top right. at the bottom, the respective signal traces over time as acquired by the two systems.
the core of this dau is the sampling and timestamping of the data from the digital interface of the dos related to a common time base, which it also uses for the sampling of an additional analogue channel called "sync" (orange arrows in figure 1). in the implementation used for the investigations in this publication, the sampling of dos data and sync were performed synchronously. this, however, is not a strict requirement for the procedure. whenever the dos signals the availability of a new sample via the data ready output (upper green arrow in figure 1), the dau immediately samples the sync signal ($y_\mathrm{sync}$, right orange input to the dau in figure 1), reads the digital data sample ($y_\mathrm{dos}$) from the dos interface (lower green arrow in figure 1) and marks this data tuple with a precise timestamp based on a gps-aligned clock of the microprocessor used. repeating this procedure, the dau acquires a time series of digital and analogue data with a sample clock as close to the dos internal sample clock as possible but with timing information as accurate as the external clock provided to the dau.

the relation to the reference signal for the calibration, which is still acquired with the acs (blue arrow in figure 1), is provided by a sync signal (orange arrow and acs input in figure 1), hence the naming of this channel. the sync signal is a mono-frequent sinusoidal voltage with a frequency identical to the excitation signal. it is fed into the dut channel of the acs and to the sync input of the dau. such a signal can easily be derived directly from the excitation signal or taken from the reference channel of the acs. as a consequence of this setup and procedure, the originally independent systems of the acs and the dau can now be linked in time via the common sync signal, which is sampled by both systems based on their respective time bases. this is depicted in the time traces in the lower part of figure 1, where the same sync signal (in orange) is given twice: once as acquired by the acs and secondly as acquired by the dau.

2.2. implementation details of the dau

the implementation of the dau used in this study was based on an stm32f767® microcontroller and a related development board (nucleo-f767zi®). the acquisition of the dos output and the adc was handled in a single interrupt service routine, which was triggered by the data ready pin of the dos. the timing information was generated by an internal 108 mhz counter, which, in turn, was synchronised with an externally provided pulse-per-second (pps) clock from a gps receiver. the details of the implementation are available under open access conditions [11], [12]. the uncertainty of the absolute timestamp was evaluated by measuring a reference pps signal stabilised to ptb's atomic clock with the dau. figure 2 shows the deviation of the timestamps from the expected value as determined by the dau. that deviation is mainly caused by the short-term drift of the microcontroller's phase locked loop and the intrinsic jitter of the gps's pps signal. overall, an accuracy better than 150 ns was achieved.

2.3. data analysis for sample timestamping

based on the described signal generation and data acquisition scheme, the system generates four time series of timestamped samples related to three different signal sources. the amplitudes and initial phases of these can be determined by appropriate sine approximation methods (described below). those time series are:
i. the reference signal acquired by the acs: $y_{i,\mathrm{ref}}(t_{i,\mathrm{acs}}) = \hat{y}_\mathrm{ref} \cdot \sin(\omega t_{i,\mathrm{acs}} - \varphi_\mathrm{ref})$.
ii. the sync signal acquired by the acs: $y_{i,\mathrm{sync}}(t_{i,\mathrm{acs}}) = \hat{y}_\mathrm{sync,acs} \cdot \sin(\omega t_{i,\mathrm{acs}} - \varphi_\mathrm{sync,acs})$.
iii. the sync signal acquired by the dau: $y_{k,\mathrm{sync}}(t_{k,\mathrm{dau}}) = \hat{y}_\mathrm{sync,dau} \cdot \sin(\omega t_{k,\mathrm{dau}} - \varphi_\mathrm{sync,dau})$.
iv. the dos signal acquired by the dau: $y_{l,\mathrm{dos}}(t_{l,\mathrm{dos}}) = \hat{y}_\mathrm{dos} \cdot \sin(\omega t_{l,\mathrm{dos}} - \varphi_\mathrm{dos})$.
here, the index $i$ indicates that (i) and (ii) have the same time base and are synchronously sampled (by the acs). in contrast, (iii) and (iv) do share the same time base (from the dau) but are not necessarily synchronously sampled, hence the indices $k$ and $l$. the magnitude of the transfer function of the dos is independent of the actual timing situation and can be directly calculated from (i) and (iv) as

$\hat{S}(\omega) = \dfrac{\hat{y}_\mathrm{dos}}{\hat{y}_\mathrm{ref}}$ . (3)

since (ii) and (iii) are records of the same source with only an apparent but no real delay between $\varphi_\mathrm{sync,acs}$ and $\varphi_\mathrm{sync,dau}$, the time bases of the acs and the dau, and thus the initial phases of the reference and the dos, can be linked. the phase difference between $y_\mathrm{ref}$ and $y_\mathrm{dos}$ is accordingly given by

$\Delta\varphi = \varphi_\mathrm{dos} - \varphi_\mathrm{sync,dau} + \varphi_\mathrm{sync,acs} - \varphi_\mathrm{ref}$ . (4)
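to make the bookkeeping in equations (3) and (4) explicit, here is a small illustrative python sketch (an assumption for this text, not the published analysis code) that fits the four time series and links the two time bases via the shared sync signal; the clock offset, phases and frequencies are invented demo values.

```python
import numpy as np

def phase_of(t, y, omega):
    """Initial phase phi of y = y_hat * sin(omega t - phi) via linear sine fit."""
    M = np.column_stack([np.sin(omega * t), np.cos(omega * t), np.ones_like(t)])
    (A, B, _), *_ = np.linalg.lstsq(M, y, rcond=None)
    return -np.arctan2(B, A)               # since A = cos(phi), B = -sin(phi) (scaled)

omega = 2 * np.pi * 40.0                   # excitation frequency (assumed)
t_acs = np.arange(0, 1, 1e-4)              # ACS time base
t_dau = t_acs + 3e-4                       # DAU clock offset vs. ACS (assumed 300 us)

phi_true_dos = 0.5                         # DUT phase delay to recover (assumed)
y_ref   = np.sin(omega * t_acs)                          # (i)
y_syncA = np.sin(omega * t_acs - 0.2)                    # (ii) sync seen by the ACS
y_syncD = np.sin(omega * (t_dau - 3e-4) - 0.2)           # (iii) same source, DAU base
y_dos   = np.sin(omega * (t_dau - 3e-4) - phi_true_dos)  # (iv) DOS output, DAU base

dphi = (phase_of(t_dau, y_dos, omega) - phase_of(t_dau, y_syncD, omega)
        + phase_of(t_acs, y_syncA, omega) - phase_of(t_acs, y_ref, omega))  # eq. (4)
print(dphi)  # ~0.5 rad: the clock offset cancels via the common sync signal
```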
it should be noted that the accuracy of that link (equation (4)) depends on the timestamping accuracy of the dau, as well as on the uncertainty of the sync signal (iii) sampled by the dau, and not on the precision of the dos's sample clock. the transfer function of the dau adc must be calibrated and taken into account [13]. on the contrary, the measurement scheme described above provides additional information on the precision of the dos's sample clock, as described in section 3.

figure 2. distribution of the timestamp deviations determined using the dau with reference to an atomic clock stabilised pps signal over a monitoring period of 45 hours.

2.4. series timestamping

some digital sensors/measuring systems do not provide a data ready signal for each sample but instead record a full time series of data after a given trigger signal. to calibrate such a kind of sensor (we will call it a "time series sensor", tss), precise knowledge of the relative time between the trigger and the excitation signal is needed. if the acs does not provide such a synchronised trigger signal, separate analogue trigger logic, e.g. from a digital storage oscilloscope (dso), can be used to derive and generate the trigger signal from an analogue sync signal like that mentioned in section 2.1. figure 3 shows a schematic diagram of a setup to calibrate such a tss. the acs again acquires the generator signal $y_\mathrm{sync}(t_i)$, see section 2.1, as for an analogue dut (orange arrow in figure 3 into the acs). the dso (in our case a rigol mso5072) triggers upon the zero crossing of the same analogue sync signal (orange arrow in figure 3 into the dso) and generates a trigger signal which is connected to the tss's trigger input (brown arrow in figure 3). this starts the recording of the time series $y_\mathrm{tss}(t_\mathrm{tss})$. note that due to noise, the dso does not trigger perfectly at the zero crossing of the sync signal, and there is some additional processing delay within the dso. however, the resulting actual trigger delay can be determined and corrected for, e.g. by sine approximation of the sync signal acquired simultaneously by the dso and determination of the actual trigger phase $\Delta\varphi_\mathrm{dso,triggernoise}$. to this end, in a dedicated setup, the dso is used to sample its own trigger output simultaneously with the sine wave of the sync signal which causes that trigger output. based on the two simultaneously acquired traces, the delay between the zero crossing of the sync signal (e.g. determined by sine approximation) and the rising edge of the trigger can be determined. due to inevitable noise, there will be a certain variation in the determined trigger delay. in the actual setup, the trigger signal is emitted with a mean delay of $\tau_\mathrm{dso} = 236(1)$ ns. this was previously calibrated with a rectangle generator and the feedback of the trigger output to a second input channel of the dso. the use of a rectangle generator, providing a steep rising edge, was considered more precise than the sine approximation approach applied to the sync signal. however, it requires additional equipment.

2.5. data analysis for series timestamping

subsequent to the data acquisition described, a four-parameter sine approximation (fpsa) according to [6] is used to determine the angular frequency $\omega$, amplitude $\hat{y}_\mathrm{tss}$, initial phase $\varphi_\mathrm{tss}$ and offset $C_\mathrm{tss}$ of the tss's output $y_{\mathrm{tss},\omega}(t_\mathrm{tss})$ (note that due to the operation principle of a tss, the exact time instances are unknown).
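the following sketch shows the idea behind a four-parameter sine approximation in the style of ieee 1057 [6]: a linear fit is iterated while the frequency estimate is refined via an additional linearised frequency column. this is a simplified illustration under assumed convergence conditions (initial frequency guess close to the true value), not the standard's reference implementation.

```python
import numpy as np

def fpsa(t, y, f0, iterations=12):
    """Four-parameter sine fit (IEEE-1057-style): returns A, B, C and refined omega."""
    omega = 2 * np.pi * f0
    # initial three-parameter fit at the frequency guess
    M = np.column_stack([np.sin(omega * t), np.cos(omega * t), np.ones_like(t)])
    (A, B, C), *_ = np.linalg.lstsq(M, y, rcond=None)
    for _ in range(iterations):
        # linearise around the current omega and solve for a frequency update as well
        M = np.column_stack([np.sin(omega * t), np.cos(omega * t), np.ones_like(t),
                             t * (A * np.cos(omega * t) - B * np.sin(omega * t))])
        (A, B, C, domega), *_ = np.linalg.lstsq(M, y, rcond=None)
        omega += domega
    return A, B, C, omega

# synthetic TSS record whose true frequency deviates from the initial guess
t = np.arange(0, 0.5, 1e-4)
y = 3.0 * np.sin(2 * np.pi * 100.3 * t - 0.7) + 0.05 * np.random.randn(t.size)
A, B, C, omega = fpsa(t, y, f0=100.0)
print(omega / (2 * np.pi))          # ~100.3 Hz, recovered frequency
print(-np.arctan2(B, A))            # initial phase, ~0.7 rad
```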
with these fpsa parameters, the magnitude response $\hat{S}(\omega)$ of its complex transfer function at the excitation frequency $\omega$ can be obtained directly from the amplitude of the reference signal $\hat{y}_\mathrm{ref}(\omega)$ and the amplitude of the time-series signal $\hat{y}_\mathrm{tss}(\omega)$:

$\hat{S}(\omega) = \dfrac{\hat{y}_\mathrm{tss}(\omega)}{\hat{y}_\mathrm{ref}(\omega)}$ . (5)

note that the internal sampling frequency of the tss, $f_\mathrm{s,tss}$, may differ from its specified nominal sampling frequency $f_\mathrm{s,tss,nominal}$. such differences lead to a deviation of the apparent excitation frequency determined by the fpsa as compared to the applied excitation frequency $\omega$. a reduced internal sample rate $f_\mathrm{s,tss}$ leads to an apparent increase of $\omega_\mathrm{tss}$, and accordingly an increased $f_\mathrm{s,tss}$ leads to an apparent reduction of this frequency. the actual mean internal sample rate $f_\mathrm{s,tss}$ can then be determined as follows:

$f_\mathrm{s,tss} = f_\mathrm{s,tss,nominal} \cdot \dfrac{\omega}{\omega_\mathrm{tss}}$ , (6)

provided that the excitation frequency from the acs, $\omega$, is well known. the initial phase of the tss signal, $\varphi_\mathrm{tss}$, was determined using the four-parameter approximation and is thus linked to the actual period of the excitation signal. therefore, no correction of the measured initial phase related to sample rate deviations is necessary.

figure 3. the acs with a digital storage oscilloscope (dso) as a trigger generator for the tss. the dso triggers on the zero crossing of the sync signal. the trigger output of the dso, in turn, triggers the data acquisition of the tss. the traces are the reference signal (blue), the analogue sinusoidal sync signal (orange), the trigger (brown) and the sampled time series of the tss (green).

consequently, the phase response $\Delta\varphi$ of the complex transfer function of a tss can be calculated as follows,

$\Delta\varphi = \varphi_\mathrm{tss} - \Delta\varphi_\mathrm{dso} - \Delta\varphi_\mathrm{acs}$ , (7)

using the phase difference $\Delta\varphi_\mathrm{acs}$ between the reference and sync signal obtained by the analogue calibration system (it should be noted that any additional phase delays within the acs known from conventional calibration have to be considered in the same fashion for the digitally enhanced system). consequently, the complex transfer function $S(\omega)$ is given by:

$S(\omega) = \hat{S}(\omega) \cdot \mathrm{e}^{-i\,\Delta\varphi} = \dfrac{\hat{y}_\mathrm{tss}(\omega)}{\hat{y}_\mathrm{ref}(\omega)} \cdot \mathrm{e}^{-i(\varphi_\mathrm{tss} - \Delta\varphi_\mathrm{dso} - \Delta\varphi_\mathrm{acs})}$ . (8)

3. sample rate problems and mitigation strategies

typical doss generate their internal sample rate with an internal clock unit based, e.g., on an rc oscillator. such simple clock generators are subject to drift depending on the operational temperature, supply voltage, etc. accordingly, the stability and precision of such clocks should be checked. in [7] the effect of a biased sample frequency was described, and a rather elaborate correction method was provided. the calibration procedures presented in this article are independent of the internal sample clock by design, because they make use of external gps-aligned timestamping. even more, these procedures provide the direct means for a quantitative determination of sample clock accuracy and stability. this can be achieved in at least two ways (a small sketch of both follows this list):
• direct sample interval evaluation: based on the timestamps $t_{l,\mathrm{dos}}$, the sample interval between each dos sample pair can be calculated as $t_{l+1,\mathrm{dos}} - t_{l,\mathrm{dos}}$ and compared to the nominal sample interval. this procedure and the resulting insights, of course, depend on the availability of independent, precise and stable timestamps for each sample, which makes it unfeasible for series timestamping.
• nominal frequency evaluation: to get a more general estimate of the dos sample rate precision and stability, it is sufficient to have a known, stable and precise single-frequency excitation for the dos. accordingly, this method can be applied even without the complete calibration set-up.
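as an illustration of both evaluation routes, here is a brief hedged python sketch on synthetic timestamps; the clock deviation factor, jitter model and numbers are assumptions for the example, and `fpsa` is the four-parameter fit sketched in section 2.5.

```python
import numpy as np

f_nom = 1000.0                          # nominal DOS sample rate (assumed), S/s
lam = 0.999                             # assumed clock factor: real rate = lam * f_nom
n = 2000

# timestamps as delivered by GPS-aligned sample timestamping (laboratory time base)
t_dos = np.arange(n) / (lam * f_nom) + 5e-9 * np.random.randn(n)

# direct sample interval evaluation
intervals = np.diff(t_dos)              # t_{l+1,dos} - t_{l,dos}
print(f"mean interval {intervals.mean()*1e6:.3f} us "
      f"(nominal {1e6/f_nom:.3f} us), std {intervals.std()*1e9:.1f} ns")

# nominal frequency evaluation: FPSA on the *nominal* (deviating) time base
f_vib = 80.0                            # known, precise excitation frequency
y = np.sin(2 * np.pi * f_vib * t_dos)   # what the DOS physically senses
t_nominal = np.arange(n) / f_nom        # timestamps reconstructed from the nominal rate
_, _, _, omega_fit = fpsa(t_nominal, y, f0=f_vib)
lam_est = f_vib / (omega_fit / (2 * np.pi))             # eq. (9): f_vib,dos = f_vib / lambda
print(f"lambda = {lam_est:.5f}, "
      f"real mean sample rate = {lam_est * f_nom:.2f} S/s")  # cf. eq. (10)
```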
if the internal clock of the dos deviates by a factor $\lambda$ from the laboratory clock, the dos senses an external excitation with frequency $f_\mathrm{vib}$ at an apparent frequency $f_\mathrm{vib,dos}$. the relation between the two is

$f_\mathrm{vib,dos} = \dfrac{f_\mathrm{vib}}{\lambda}$ . (9)

this is the equivalent situation to that described by equation (6) for series timestamping. the apparent frequency sensed by the dos can easily be determined by applying an fpsa [4] to the time series measured by the dos. this algorithm does not only fit the amplitude and phase of a sinusoidal signal but iteratively reduces apparent frequency deviations starting from an initial frequency guess. note that in the case of sample timestamping, the timestamps $t_l$ used for that fit have to be the nominal (deviating) timestamps calculated from the nominal sample rate of the dos. then, equation (9) yields the following for the real mean sample rate $f_\mathrm{dos,real}$ of the dos:

$f_\mathrm{dos,real} = f_\mathrm{dos,nom} \cdot \lambda$ . (10)

4. first primary calibration results

4.1. the transfer function

in order to test the above-mentioned concept, the dau was used to perform a primary calibration of the z-axis of a 6-degree-of-freedom mems sensor of the type mpu-9250, following [6]. figure 4 shows the situation in the acs, i.e. the dos mounted on the shaker's armature and the connected dau. for the determined complex transfer function, the reference acceleration was measured by laser interferometry on top of the mems's housing. a second laser was targeted at the surface of the carrier breakout board right next to the dos in order to investigate any relative motion due to the solder joints. this, however, is of no concern in this manuscript. the generator signal driving the power amplifier for the shaker was used as the sync signal to link the acs and the dau. the calibration was performed in the frequency range from 10 hz to 200 hz with single-frequency sine excitation and excitation levels between 2.5 m/s² and 100 m/s². the mpu-9250 sensor is one of those that provide a "data ready" signal whenever the internal sampling process is finished. according to the previous description, the dau acquired a sample of three acceleration values, three angular rate values and one temperature value for each trigger from the data ready signal and timestamped this set of values with an absolute time value. the sensor was running at a nominal sampling rate of 1000 s/s.

figure 4. primary calibration setup for the z-axis of an mpu-9250 sensor. the bottom half shows the dau with the stm32f7 development board in piggyback. in the top half, the mpu-9250 breakout board, mounted on a steel adapter and screwed to the armature of an se-09 shaker, is shown.

the analysis focused on the single (nominal) axis of excitation, which was the z-axis. by using the sync signal as described in section 2.3 and sine fitting of the z-axis values related to the absolute timestamps, it was possible to attain a traceable primary calibration of the magnitude and phase of the respective complex transfer function. the results are depicted in figure 5 for three repeated measurements of the same unit. a linear regression to the phase results for the lower frequencies suggests a group delay of the sensor of 8.1 ms.

4.2.
the internal sampling

in addition to this classic evaluation, the timestamping enables the evaluation of the stability of the sensor's internal sample clock and possible influencing quantities. in direct comparison to a 10 mhz reference frequency distributed within ptb and traceable to the atomic fountain clocks of ptb, the utilised gps-based timestamping revealed a mean sample interval of 996.96 µs over a measurement time of approx. 8000 s. over this time, the sample interval length yielded a standard deviation of 21 ns and a spread of 335 ns, where, admittedly, the quantisation of the individual timestamps is 10 ns due to the counting frequency of 108 mhz of the dau. the distribution of the sample intervals is shown in the histogram in figure 6. from the visual appearance, it is evident that the distribution is not gaussian and therefore not the usually supposed normal distribution. rather, certain values of the sample interval are apparently strongly favoured over others. one cause for such an effect may be found in sensor-internal digital compensation processes related to temperature. as the mpu-9250 also features an internal temperature probe, the investigation of the correlation of the sample interval versus the temperature is an obvious next step. figure 7 depicts the relation of the sample interval over the apparent temperature as measured internally. it is evident that three to four of the preferred sample durations can be associated with a certain temperature and that, in general, the average sampling frequency falls (slightly) with rising temperature. for an in-depth understanding of this peculiar relation, a detailed knowledge of the internal processes of the sensor is probably necessary. nevertheless, it remains to be noted that the application of sample timestamping with the newly presented dau could demonstrate that the statistical properties of the sampling unit of this sensor are far from the typically expected uniform or normal distribution. in order to compare this finding with some other dos, a bosch bma280 was calibrated in an identical setting, and the measured sample intervals were subjected to the same processing. figure 8 depicts the results. in this case the distribution is split into three parts with a prioritised centre part, which has three orders of magnitude more counts than the side lobes. such a shape makes the usual characterisation with a standard deviation almost meaningless. why such a peculiar distribution is generated becomes clearer on closer inspection of the time series of sample intervals given in figure 9.

figure 5. magnitude (top) and phase (bottom) of the complex transfer function of an mpu-9250 sensor measured in 3 repetitions (#1, #2, #3). the inset shows details at low frequencies.

figure 6. distribution of sample intervals of an mpu-9250 dos determined by sample timestamping. counts are on a logarithmic scale.

figure 7. relation of sample interval and sensed temperature of the mpu-9250 sensor.

as seen in the upper chart, the vast majority of the sample intervals are close to the mean value of 485 µs. however, periodically, i.e. approximately every 1.1 s or 2200 samples, there is a double spike. the close-up of one of those features in the lower chart of the figure discloses a delayed sample followed by a premature one.
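a simple way to surface such periodic double spikes from the timestamped data is to flag intervals deviating strongly from the median and inspect the spacing of the flagged events; the thresholds and spike magnitudes below are assumptions for illustration, not the measured bma280 values.

```python
import numpy as np

def find_spikes(timestamps, rel_threshold=0.05):
    """Flag sample intervals deviating more than rel_threshold from the median."""
    intervals = np.diff(timestamps)
    nominal = np.median(intervals)
    outliers = np.flatnonzero(np.abs(intervals - nominal) > rel_threshold * nominal)
    return intervals, nominal, outliers

# synthetic stream: 485 us intervals with a delayed/premature pair every 2200 samples
n, dt = 20_000, 485e-6
intervals = np.full(n, dt) + 5e-9 * np.random.randn(n)
intervals[::2200] += 80e-6        # delayed sample (assumed magnitude)
intervals[1::2200] -= 80e-6       # compensating premature sample
t = np.cumsum(intervals)

_, nominal, idx = find_spikes(t)
print(f"median interval {nominal*1e6:.1f} us, "
      f"spike events every ~{np.median(np.diff(idx[::2])):.0f} samples")  # ~2200
```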
presumably, some internal process of the digital part of the sensor, like an adaptive digital filter or an adjustable compensation, takes some excessive computation time, slowing down either the sampling process or the delivery (data ready) of the analogue quantity. in order to keep the mean sample rate constant, the delay is followed by an abridged sample interval compensating for the delay.

5. measurement uncertainty

this article is not dedicated to an in-depth discussion and evaluation of the measurement uncertainty budget associated with the new hardware and methodology. this will need more research into the various components involved and is probably worth an article of its own. nevertheless, we want to briefly address a few issues that were found during the research work so far.

5.1. magnitude of the transfer function

the magnitude calibration is a rather straightforward process for the dos and, in terms of measurement uncertainty, very similar to conventional work with ad-converted output voltages. a significant difference is, however, that as opposed to conventional set-ups, no gain adjustment by external amplifiers is possible. this results in a fixed signal resolution that is linked to the embedded adc and has to be considered in the measurement uncertainty of the dos, especially for excitations in the lower amplitude range.

5.2. phase of the transfer function

the phase of the transfer function is the actual challenge in calibration and therefore the focus of this article. internally, the measurement uncertainty of the phase relies on the quality of the embedded oscillator and some firmware aspects of the dos. this was demonstrated by the measurement analysis of the previous section 4. while the influence of a low-quality oscillator may be mitigated by precise timestamping during calibration, such an approach is usually not feasible for the typical application of a dos. here, the nominal sampling rate is assumed. in addition to these components of phase measurement uncertainty intrinsic to the dos, the hardware of the dau and the methodology add their own components. of course, the phase or group delay associated with the sampling of the sync signal by the dau has to be calibrated, as this is no longer controlled by the acs. the best estimate of this group delay can be corrected, but an uncertainty remains from this calibration. this was briefly addressed in sections 2.2 and 2.4 for the work presented here. additional phase measurement uncertainty is inflicted by the noise on the sync signal, which links the time bases of the acs and the dau. this is strongly dependent on the source available as sync signal and in particular on its signal-to-noise ratio. this noise degrades the phase accuracy on both sides, the acs and the dau. however, in the described set-up it can hardly be avoided.

6. conclusion and outlook

conventional analogue accelerometer calibration systems are not capable of calibrating modern digital output sensors. however, an extension providing the necessary digital interface and an analogue input for a suitable reference signal can extend the capabilities to the magnitude and phase calibration of the complex frequency response of such sensors. the typically independent internal sampling of the dos poses a special challenge in the calibration as well as a new source of measurement uncertainty and a new specification that has to become part of the calibration routine and the calibration result.
with two different approaches, it is possible to calibrate devices that deliver separate samples as well as devices that buffer a series of samples after a single trigger and deliver the whole series at once. provided that the calibration laboratory is equipped with a precise and stable timestamp source, like a gps-aligned clock, simple procedures lead to detailed insights concerning the internal timing of the dos and its dependencies on environmental conditions. it turns out that simple classical assumptions about characteristics like the accuracy and jitter of the internal sampling units can be misleading. however, the application of external timestamping, as demonstrated, gives a clear and traceable quantification of these properties. within the european project titled met4fof, a cross validation in terms of an interlaboratory comparison between different participants is planned. in order to address the question of consistency of the results, a further evaluation of additional measurement uncertainty components involved with the use of the dos will be included. an existing calibration system will be equipped with a gps-based absolute time base so that the phase of the transfer function can be determined without a reference signal, but by means of the absolute time. by comparison with this system, the solution presented here will be evaluated with respect to uncertainty. the intrinsic timing properties, in particular, add a new dimension to measurement uncertainty estimation. in addition, most mems accelerometers on the market are multi-component devices. the question of dynamic crosstalk may therefore add another complication, where the static (dc) crosstalk has already been studied in other publications such as [7], [8].

figure 8. distribution of sample interval lengths of a bma280 dos determined by sample timestamping. counts are on a logarithmic scale.

figure 9. sample interval length over time for the bma280 sensor. the lower plot is a zoom-in on the upper overview.

acknowledgement

this project 17ind12 met4fof has received funding from the empir programme co-financed by the participating states and from the european union's horizon 2020 research and innovation programme.

references

[1] n. helwig, e. pignanelli, a. schütze, condition monitoring of a complex hydraulic system using multivariate statistics, proc. i2mtc 2015, ieee international instrumentation and measurement technology conference, paper pps1-39, pisa, italy, may 11-14 (2015). https://ieeexplore.ieee.org/document/7151267
[2] b. seeger, th. bruns, s. eichstädt, methods for dynamic calibration and augmentation of digital acceleration mems sensors, 19th international congress of metrology, 22003, paris, france (2019). https://cfmetrologie.edpsciences.org/articles/metrology/pdf/2019/01/metrology_cim2019_22003.pdf
[3] s. eichstädt, m. gruber, a. p. vedurmudi, b. seeger, th. bruns, g. kok, toward smart traceability for digital sensors and the industrial internet of things, sensors 21(6) (2021). doi: 10.3390/s21062019
[4] guideline dkd-r 3-2: calibration of conditioning amplifiers for dynamic application, physikalisch-technische bundesanstalt, dkd, braunschweig, germany (2019). doi: 10.7795/550.20190425en
[5] g. d'emilia, a. gaspari, f. mazzoleni, e. natale, a. schiavi, calibration of tri-axial mems accelerometers in the low-frequency range – part 1: comparison among methods, j. sens. sens.
syst., 7, 245–257 (2018). doi: 10.5194/jsss-7-245-2018
[6] ieee 1057-2017, ieee standard for digitizing waveform recorders, institute of electrical and electronics engineers, new york, ny, usa (2018). online [accessed 8 september 2021] https://standards.ieee.org/standard/1057-2017.html
[7] w.-s. cheung, effects of the sample clock of digital-output mems accelerometers on vibration amplitude and phase measurements, metrologia, 57 (1) (2020). doi: 10.1088/1681-7575/ab5505
[8] iso 16063-11:1999, methods for the calibration of vibration and shock transducers – part 11: primary vibration calibration by laser interferometry, international organization for standardization, geneva, ch (1999).
[9] a. prato, f. mazzoleni, a. schiavi, traceability of digital 3-axis mems accelerometer: simultaneous determination of main and transverse sensitivities in the frequency domain, metrologia, 57 (3) (2020), p. 035013. doi: 10.1088/1681-7575/ab79be
[10] m. gaitan, j. geist, analysis and protocol for characterizing intrinsic properties of three-axis mems accelerometers using a gimbal rotated in the gravitational field, proc. of the imeko 23rd tc3, 13th tc5 and 4th tc22 international conference, helsinki, finland, 30 may – 1 june 2017, 3 pp. online [accessed 8 september 2021] https://www.imeko.org/publications/tc22-2017/imeko-tc22-2017-015.pdf
[11] software for the met4fof smartup unit v2. online [accessed 8 september 2021] https://github.com/met4fof/met4fof-smartupunit
[12] pcb layout of the met4fof interface board. online [accessed 8 september 2021] https://circuitmaker.com/projects/details/benedikt-seeger-2/met4fof-interface-board
[13] b. seeger, l. klaus, d. nordmann, dynamic calibration of digital angular rate sensors, acta imeko 9 (2020) 5, pp. 394-400. doi: 10.21014/acta_imeko.v9i5.1008

digital demodulator unit of laser vibrometer standard for in situ measurement

acta imeko december 2013, volume 2, number 2, 61 – 66, www.imeko.org

akihiro oota1, hideaki nozato1, wataru kokuyama1, yoshinori kobayashi2, osamu takano2, naoki kasai2
1 national institute of advanced industrial science and technology, central 3, 1-1-1 umezono, tsukuba, japan
2 neoark corporation, 2062-21 nakanomachi, hachioji, tokyo, japan

section: research paper
keywords: laser vibrometer; analog demodulator; digital demodulator; iso 16063-41
citation: akihiro oota, hideaki nozato, wataru kokuyama, yoshinori kobayashi, osamu takano, naoki kasai, digital demodulator unit of laser vibrometer standard for in situ measurement, acta imeko, vol.
2, no. 2, article 11, december 2013, identifier: imeko-acta-02 (2013)-02-11
editor: paolo carbone, university of perugia
received february 18th, 2013; in final form september 23rd, 2013; published december 2013
copyright: © 2013 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was financially supported by meti (ministry of economy, trade and industry), japan
corresponding author: akihiro oota, e-mail: a-oota@aist.go.jp

abstract: to establish an easy-to-use traceability system for laser vibrometers in the vibration acceleration standard, a highly accurate reference standard is required. in this study, we investigated the measurement reliability of a commercial laser vibrometer with an analog demodulator. the results confirmed a large measurement deviation from nominal sensitivity values due to the characteristics of the analog demodulator. to reduce such deviation, a high-accuracy digital demodulator, which can be utilized as a reference standard, was developed. the experimental results confirmed the improved measurement accuracy of the developed digital demodulator.

1. introduction

laser vibrometers have been frequently used for precise vibration measurements to ensure the safety of commercial products in industry. the reliability and accuracy of such measurements should be guaranteed by a traceability system, which is a series of calibrations meeting national or international metrology standards regarding the use of laser vibrometers. to establish such a traceability system for laser vibrometers, a novel calibration method has been published as a new international standard (iso 16063-41 [1]) for laser vibrometers. some demonstrations have been attempted in accordance with this standard [2-5]. laser vibrometers consist of two key components, a laser optics unit and a demodulator unit. a heterodyne or homodyne laser interferometer can be applied as the laser optics unit. the interferometry signal detected by the laser optics unit is transformed into an acceleration, velocity, or displacement signal by the demodulator unit. two kinds of demodulator unit, analog and digital types, can be applied to realize this transformation. for many years, analog demodulator units have been used to transform interferometry signals into velocity signals. these units enable the direct conversion from doppler signals to velocity signals, which can be obtained continuously over a wide velocity or frequency range. such a laser vibrometer based on doppler signals is referred to as a laser doppler vibrometer. in an analog demodulator unit, the characteristics of the electric components such as the frequency-to-voltage converter, low-pass filter, and scaling amplifier strongly affect the measurement accuracy [6]. on the other hand, the digital demodulator unit applies a new signal processing method that does not depend on the characteristics of analog components. because of the wide bandwidth of the interferometry signals, the only analog device required is an analog-to-digital converter (adc) with high speed and high resolution. in the digital demodulator unit, the measurement accuracy depends mainly on the characteristics of the adc and the calculation algorithm. therefore, the digital demodulator unit would enable users to easily achieve high measurement accuracy, in contrast with that obtained with the analog demodulator unit. however,
digital demodulator units cannot always provide continuous measurements over the wide velocity or frequency range achieved by analog demodulator units because of technical barriers such as the limited number of samples available for digital processing and the limited sampling rate [6]. in this study, the characteristics of various types of commercial laser vibrometer were investigated, and we clarify the disadvantages of the analog demodulator units used. in addition, a novel digital demodulator unit, which enables us to continuously provide output data for in situ measurement, was developed to realize high measurement reliability, and we confirmed its capability experimentally.

2. characteristics of commercial laser vibrometers with analog demodulator units

2.1. characteristics of three commercial laser vibrometers

to confirm the potential of commercial laser vibrometers, we performed the calibration of laser vibrometers and their demodulator units. three different kinds of commercial laser vibrometers with analog demodulators, denominated in this work lv-a, lv-b and lv-c, were applied to the calibration. these laser vibrometers are made by three manufacturers in japan. the primary calibration of the laser vibrometers using practical vibrations was performed in accordance with iso 16063-41 (fringe counting method and sine approximation method). figure 1 shows the experimental setup used for the primary calibration of a laser vibrometer. the laser interferometry system used as a reference standard is the homodyne laser interferometry system included in the japanese national standard for primary calibration of accelerometers. in this setup, the fringe counting method was applied for the frequency range 20 hz – 80 hz, and the sine approximation method was applied for the frequency range 100 hz – 5 khz. in addition, the demodulator units were electrically calibrated with simulated frequency-modulated signals, which are equivalent to the output signals obtained from the laser optics unit in the calibration in accordance with iso 16063-41. figure 2 shows the experimental setup used for the demodulator unit calibration. a radio frequency (rf) signal generator was applied to generate the simulated frequency-modulated signal. an r.m.s. voltmeter (or digitizer) measures the output voltage of the demodulator unit. the rf signal generator and the r.m.s. voltmeter can be calibrated through the traceability systems of the time standard and voltage standard, respectively. figure 3 shows the primary calibration results of the laser vibrometers lv-a, lv-b and lv-c, and also the electrical calibration results of their respective demodulator units. these demodulators were set to a nominal sensitivity of 10 (mm/s)/v. the calibration results of the demodulator units show very similar characteristics to the laser vibrometer calibration results. figure 4 shows the corrected primary calibration results based on the demodulator unit calibration results. although the results from the two different calibration methods had a large deviation of more than 1.0 % from the nominal sensitivities, as shown in figure 3, a small deviation of less than 0.5 % was almost achieved by performing a correction based on the demodulator unit calibration results.

figure 1. experimental setup of the primary calibration of a laser vibrometer.

figure 2. experimental setup of the demodulator unit calibration.
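to illustrate the electrical calibration principle, the sketch below simulates a frequency-modulated carrier that encodes a known sinusoidal velocity, demodulates it ideally, and compares the recovered r.m.s. output voltage with the value expected from the nominal sensitivity; the carrier frequency, sample rate and signal parameters are assumptions for the example, not the instrument settings used in the paper.

```python
import numpy as np
from scipy.signal import hilbert

lam = 632.8e-9             # He-Ne wavelength, m
fc = 10e6                  # down-converted carrier frequency, Hz (assumed)
fs = 50e6                  # simulation sample rate, Hz (assumed)
f_vib, v0 = 160.0, 10e-3   # 160 Hz vibration, 10 mm/s velocity amplitude
sens = 10e-3               # nominal sensitivity 10 (mm/s)/V, i.e. 0.01 (m/s)/V

t = np.arange(0, 25e-3, 1 / fs)                       # 4 cycles of the vibration
v = v0 * np.sin(2 * np.pi * f_vib * t)
phase = 2 * np.pi * fc * t + (4 * np.pi / lam) * np.cumsum(v) / fs
sig = np.cos(phase)                                   # simulated FM carrier

# ideal digital demodulation: instantaneous frequency of the analytic signal
inst_f = np.diff(np.unwrap(np.angle(hilbert(sig)))) * fs / (2 * np.pi)
v_rec = (inst_f - fc) * lam / 2                       # Doppler shift -> velocity
u_out = (v_rec / sens)[1000:-1000]                    # output voltage, edges trimmed

print(f"r.m.s. output {np.sqrt(np.mean(u_out**2)):.4f} V, "
      f"expected {v0 / np.sqrt(2) / sens:.4f} V")     # deviation -> sensitivity error
```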
the small deviation after correction in figure 4 may normally be attributed to the effect of the optics unit of the laser vibrometer. however, in that case all results after correction would be expected to be similar to each other. therefore, the deviation after correction would be due not so much to the effect of the optics unit as to a systematic error from components common to all demodulator unit calibrations, such as the simulated frequency-modulated signal from the rf signal generator.

2.2. linearity and frequency dependency of the laser vibrometer

the experiments mentioned above were carried out for the specific frequency series used in accelerometer calibration. however, by the basic principle of a laser vibrometer, the frequency shift of light scattered from a moving surface by the doppler effect is proportional to the velocity of the moving surface. therefore, the nonlinearity with respect to the velocity amplitude may be more significant than the frequency characteristics. in this study, both typical characteristics were evaluated in detail for lv-c. the setting of the nominal sensitivity value was fixed during the evaluation. figure 5 shows the relative deviation from nominal sensitivity obtained while the velocity amplitude is adjusted from 3 mm/s to 100 mm/s at a calibration frequency of 160 hz. nonlinearity higher than 5 % was observed in this case. figure 6 shows the frequency characteristic of lv-c and its demodulator c at a constant velocity amplitude of 10 mm/s. the relative deviation from nominal sensitivity changes by approximately 0.7 % between 20 hz and 5 khz. as the sensitivity fluctuation due to the velocity amplitude is much larger than that due to the frequency, the linearity for the velocity amplitude may be as important for in situ measurements as the sensitivity of the laser vibrometer. the evaluation results of the demodulator unit are very similar to the evaluation results of the laser vibrometer. as a result, we can conclude that the drastic fluctuation of the laser vibrometer sensitivity would be mainly due to the analog demodulator unit.

figure 3. comparison between calibrations of commercial laser vibrometers (lv) and electrical calibrations of their demodulator units. the lvs of three different manufacturers were set at a nominal sensitivity of 10 (mm/s)/v. a, b and c: different manufacturers. lv: laser vibrometer.

figure 4. correction results based on the demodulator unit calibration.

figure 5. typical nonlinearity for velocity amplitude at 160 hz for a nominal sensitivity of 10 (mm/s)/v.

figure 6.
typical frequency characteristic for a nominal sensitivity of 10 (mm/s)/v (the velocity amplitude of the sinusoidal vibration given by the vibration exciter is fixed at 10 mm/s).

consequently, to ensure the measurement reliability of the laser vibrometer in measurement ranges not covered by the primary calibration of the laser vibrometer in accordance with iso 16063-41, an appropriate electrical calibration for analog demodulator units would be very useful to extrapolate the sensitivity. however, if a more precise vibration measurement is required, the measurement accuracy of the demodulator unit should be improved.

3. development of the digital demodulator unit

3.1. preliminary comparison between the digital demodulator unit and the analog demodulator unit

in iso 16063-41, digital signal processing for the decoding of the doppler signal is required in the definition of laser vibrometer standards. therefore, to develop a higher accuracy demodulator unit, the potential of digital demodulation should be investigated by comparing it with analog demodulation. this comparison was carried out using the electrical calibration and primary calibration explained above in section 2. in this experiment, lv-c was used as the laser vibrometer with the analog demodulator unit. the optics part of lv-c can be detached from the analog demodulator unit because it was specially designed for this purpose. on the other hand, to conveniently achieve digital demodulation, a commercial signal analyzer (anritsu ms2690a) was applied. by using this instrument, i & q signals (the in-phase and quadrature-phase signal pair) can be easily obtained, and the decoding of the i & q signals is then performed by our proprietary software using labview. the signal analyzer has an adc resolution of 16 bits, a maximum sampling rate of 50 mhz, and an rf down-conversion function. in this experiment, the signal analyzer and the rf signal generator used to generate the simulated fm signal use a common sampling clock signal for their own processing. to implement a laser vibrometer with a digital demodulator unit, the optics part of lv-c was connected to the signal analyzer. figure 7 shows the evaluation results obtained from both the primary calibration and the electrical calibration. it compares the primary calibration results obtained for lv-c with the analog demodulator c, with the analog demodulator c after correction, and with the digital demodulator, as well as the electrical calibration result of only the digital demodulator unit. as a result, the potential of the digital demodulator unit to achieve higher measurement accuracy is confirmed.

3.2. development of the digital demodulator unit for in situ measurements

based on the results shown in section 3.1, a digital demodulator unit with higher accuracy for laser vibrometers is now discussed. although the application of the signal analyzer led to good results in the preliminary experiment, the signal analyzer would be unsuitable as it cannot continuously output data in real time, as required for in situ measurements. therefore, a dedicated digital demodulator unit, which has a smaller deviation than commercial analog demodulator units, and which generates continuous output in real time, was designed for in situ measurements. the developed digital demodulator unit is constructed with an adc, a digital-to-analog converter (dac), and a field-programmable gate array (fpga) at low cost.
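the i & q decoding step described in section 3.1 can be illustrated with a short sketch: the velocity is recovered by differentiating the unwrapped arctangent phase of the i/q pair. this is a generic illustration of the principle, not the authors' labview software; the sample rate, signal values and wavelength are assumed for the example.

```python
import numpy as np

def iq_to_velocity(i_sig, q_sig, fs, wavelength=632.8e-9):
    """Decode an I/Q pair from a laser interferometer into velocity.

    The instantaneous Doppler frequency is the time derivative of the
    unwrapped phase atan2(Q, I); velocity follows as v = f_doppler * lambda / 2.
    """
    phase = np.unwrap(np.arctan2(q_sig, i_sig))
    f_doppler = np.gradient(phase, 1 / fs) / (2 * np.pi)
    return f_doppler * wavelength / 2

# synthetic I/Q pair for a 10 mm/s, 160 Hz sinusoidal vibration (assumed values)
fs = 1e6
t = np.arange(0, 0.05, 1 / fs)
v_true = 10e-3 * np.sin(2 * np.pi * 160 * t)
phi = (4 * np.pi / 632.8e-9) * np.cumsum(v_true) / fs   # optical phase, carrier removed
i_sig, q_sig = np.cos(phi), np.sin(phi)

v = iq_to_velocity(i_sig, q_sig, fs)
print(np.max(np.abs(v)))   # ~0.01 m/s, the recovered velocity amplitude
```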
a he-ne laser is used as the laser source in almost all commercial laser vibrometers in japan. the driving frequency of the acousto-optic modulator (bragg cell) in the laser optics unit is fixed at 80 mhz. the maximum frequency shift due to the doppler effect, which is equivalent to the maximum measurable velocity, is set to 10 mhz to cover the velocity range below 3 m/s. thus the maximum frequency of the signal to be detected by the adc is equivalent to 90 mhz. to satisfy the nyquist sampling theorem, a maximum sampling rate of more than 200 mhz is required of the adc. to implement the measurement with higher accuracy in this study, the adc was selected to have two channel inputs, a high sampling rate of 500 ms/s, and a high resolution of 12 bits. the dac has a high sampling rate of 250 ms/s and a high resolution of 16 bits. high-speed signal processing is required to obtain the demodulation signal in as close to real time as possible. an fpga can process data faster than a dsp. therefore, an fpga was applied to achieve high-speed signal processing for demodulation at low cost. the data processing, including the digital demodulation algorithm embedded in the fpga, was developed to satisfy the requirements for the laser vibrometer standard described in iso 16063-41. additionally, the adc, dac and fpga are driven with a common clock signal. the specifications of the adc, dac and fpga are chosen to enable continuous measurements over a wide frequency range of up to several hundred khz. this digital demodulator unit can receive a carrier frequency signal of either 80 mhz or 10 mhz, which enables it to immediately process the frequency signal after down-conversion. in addition, higher accuracy measurement can be achieved by applying a high-accuracy 10 mhz external clock to this demodulator unit. figure 8 shows a photograph of the developed digital demodulator unit. the developed digital demodulator unit was evaluated using the same experimental setup as in the previous section, under the same experimental conditions as the analog demodulator unit. figure 9 and figure 10 show typical evaluation results. the experimental results showed that the relative deviation is below 0.3 % for the three frequency series in the applicable measurement range, and the time delay of the output due to the signal processing is evaluated to be about 30 μs. these results indicate that the developed digital demodulator unit has a sufficiently high measurement accuracy to be applicable as part of a laser vibrometer standard.

figure 7. preliminary comparison of digital demodulation with analog demodulation.

3.3. practical shock measurement using a laser vibrometer with the developed digital demodulator unit

the prototype of the developed digital demodulator unit must be combined with a laser optics unit to work as a laser vibrometer for in situ measurements. therefore, the performance of the laser vibrometer, which consists of the developed digital demodulator unit and a commercial laser optics unit, was demonstrated using a shock acceleration exciter.
to generate a rectilinear motion for sensing, a shock acceleration exciter, used for primary shock acceleration calibration as a national standard in japan [7], was employed. to validate the sensing capability, in this experiment a homodyne laser interferometer in a shock acceleration calibration system was used as the reference standard. figure 11 shows typical experimental results. the peak acceleration sensed by the reference standard was about 1700 m/s². from a comparison of both velocity waveforms, a slight time delay was observed in the laser vibrometer with the developed digital demodulator unit. this result is in agreement with the result for the phase shift obtained in figure 10. in addition, the waveform that was obtained indicates unstable behavior after the application of the shock acceleration. this instability may be due to the nonlinearity of the sensitivity to small velocity changes, and is a future technical issue to be investigated.

figure 8. digital demodulator unit for the laser vibrometer developed in this study: (a) overview; (b) internal view.

figure 9. typical evaluation results of the relative sensitivity deviation for the developed digital demodulator unit. dd: digital demodulator unit. ad: analog demodulator unit.

figure 10. typical evaluation results of the phase shift for the developed digital demodulator unit. dd: digital demodulator unit. ad: analog demodulator unit.

figure 11. comparison of typical measurement results of the laser vibrometer with the prototype of the developed digital demodulator unit and the shock acceleration standard with homodyne laser interferometer in nmij. dd: laser vibrometer with prototype of the developed digital demodulator unit. ref. std.: homodyne laser interferometer for the shock acceleration standard in nmij.

4. conclusions

the characteristics of laser vibrometers with analog demodulator units were investigated using three different commercial laser vibrometers. the calibration results of their demodulator units show very similar characteristics to those of the laser vibrometer calibration. most of the deviation from the nominal sensitivity is due to the characteristics of the demodulator unit used. the measurement accuracy of the laser vibrometer with a digital demodulator unit was compared with that of a laser vibrometer with an analog demodulator unit using the same laser optics unit. the results confirmed the higher measurement accuracy of the digital demodulator unit. to overcome the difficulty of realizing continuous signal processing for in situ measurement, the digital demodulator unit was developed using an fpga, an adc with a high sampling rate, and a dac with a high sampling rate. the excellent potential of the developed digital demodulator unit was experimentally confirmed.
on the other hand, we observed unstable behavior on the application of shock acceleration. this instability should be addressed to ensure greater usefulness for a laser vibrometer standard.

acknowledgement

this work was performed by aist as part of a research project aimed at supporting small and medium-sized companies. we acknowledge the financial support provided by meti (ministry of economy, trade and industry). we deeply appreciate the kind cooperation of the two main japanese manufacturers of laser vibrometers. we also wish to thank dr. h.-j. von martens, who drafted iso 16063-41, for the useful discussions.

references

[1] iso 16063-41: methods for the calibration of vibration and shock transducers – part 41: calibration of laser vibrometers, international organization for standardization (2011).
[2] h.-j. von martens, t. bruns, a. taubüner, w. wabinski, u. göbel, recent progress in accurate vibration measurements by laser techniques, 7th international conference on vibration measurements by laser techniques: advances and applications, ancona, italy, 2006, vol. 6345, pp. 634501-1-23.
[3] a. oota, t. usuda, t. ishigami, h. nozato, h. aoyama, s. sato, preliminary implementation of primary calibration system for laser vibrometer, 7th international conference on vibration measurements by laser techniques: advances and applications, ancona, italy, 2006, vol. 6345, pp. 634503-1-8.
[4] u. buehn, h. nicklich, calibration of laser vibrometer standards according to the standardization project iso 16063-41, 7th international conference on vibration measurements by laser techniques: advances and applications, ancona, italy, 2006, vol. 6345, pp. 63451f-1-9.
[5] g. p. ripper, g. a. garcia, r. s. dias, the development of a new primary calibration system for laser vibrometer at inmetro, imeko 20th tc-3, 3rd tc-16 and 1st tc-22 international conference, mérida, mexico, 2007, paper id-105.
[6] m. bauer, f. ritter, g. siegmund, high-precision laser vibrometers based on digital doppler-signal processing, 5th international conference on vibration measurements by laser techniques: advances and applications, 2002, vol. 4827, pp. 50-61.
[7] h. nozato, t. usuda, a. oota, t. ishigami, calibration of vibration pick-ups with laser interferometry part iv: development of shock acceleration exciter and calibration system, measurement science and technology, vol. 21, no. 6 (2010), paper id 065107, pp. 1-10.
first considerations on post processing kinematic gnss data during a geophysical oceanographic cruise

acta imeko issn: 2221-870x december 2021, volume 10, number 4, 10 – 16

valerio baiocchi1, alessandro bosman2, gino dardanelli3, francesca giannone4
1 dipartimento di ingegneria civile, edile ed ambientale (dicea), sapienza università di roma, via eudossiana 18, i-00184, italy
2 istituto di geologia ambientale e geoingegneria, consiglio nazionale delle ricerche (cnr-igag), rome, italy
3 department of civil, environmental, aerospace, materials engineering (dicam), university of palermo, italy
4 department of engineering, niccolò cusano university, via don carlo gnocchi 3, rome, i-00166, italy

section: research paper
keywords: gnss; bathymetry survey; rtk-lib; geoid; tyrrhenian sea
citation: valerio baiocchi, alessandro bosman, gino dardanelli, francesca giannone, first considerations on post processing kinematic gnss data during a geophysical oceanographic cruise, acta imeko, vol. 10, no. 4, article 6, december 2021, identifier: imeko-acta-10 (2021)-04-06
section editor: silvio del pizzo, university of naples 'parthenope', italy
received june 1, 2021; in final form december 6, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was supported by the italian national research council (cnr)
corresponding author: valerio baiocchi, e-mail: valerio.baiocchi@uniroma1.it

abstract: differential gnss positioning on vessels is of considerable interest in various fields of application, such as navigation aids and precision positioning for geophysical surveys or sampling purposes, especially when high resolution bathymetric surveys are conducted. however, ship positioning must be considered a kinematic survey with all the associated problems. the possibility of using high-precision differential gnss receivers in navigation is of increasing interest, also due to the very recent availability of low-cost differential receivers that may soon replace classic navigation ones based on the less accurate point positioning technique. the availability of greater plano-altimetric accuracy, however, requires an increasingly better understanding of planimetric and altimetric reference systems. in particular, the results allow preliminary considerations on the congruence between terrestrial reference systems (to which the gnss survey can easily refer) and marine reference systems (connected to the national tide-gauge network). in spite of the fluctuations due to the physiological continuous variation of the ship's attitude, the gnss plot faithfully followed the trend of the tidal variations and highlighted the shifts between the gnss plot and the tide gauges due to the different materialisation of the relative reference systems.

1. introduction

the problem of calculating the connection between on-shore and off-shore heights is still very open and debated. in fact, even though they are all heights, they refer to reference systems with non-equivalent definitions, conceptually and numerically very different, and this often makes the connection between the two types of heights complex [1]. the terrestrial altimetric reference system is generally based on the definition of a specific equipotential surface of the gravitational field, identified by the mean level of a reference tide gauge. this method has been used in italy both historically [2] and currently [3], and various national terrestrial altimetric systems have been defined; for this reason, the possibility of unifying them at a regional [4] and global [5] level is currently a subject of research. gnss systems have made it possible for the first time to obtain highly accurate plano-altimetric measurements (and therefore also elevation measurements) both on land and at sea; however, these measurements do not refer to a physical reality but to a mathematical surface, the ellipsoid of rotation. on land, ellipsoidal elevations are transformed into orthometric elevations using local geoid models [6], which are generally more accurate, or global models, which are generally less accurate [7]. the problem of elevations at sea is even more complex because the sea surface is not an equipotential surface, which would be "... characterized by uniform temperature and density and free of perturbations related to currents, winds and tides."
[8]. unfortunately, these are ideal conditions that are never found in nature, and for this reason the materialisation of altimetric reference systems at sea is profoundly different from that of terrestrial altimetric reference systems [9]. it is important to underline that the interest in this case is mainly in seafloor depth, in order to construct high-resolution digital elevation models (dems) of the seafloor, habitat mapping and bathymetric cartographies, which are mainly used for navigational, safety and scientific bathymetric surveys [10]-[12]. the elevations at sea are often referred to a conventional local zero identified with a local tide gauge; conventionally, the tide gauge's "zero" has no connection with the "zero" of the national altimetric system, and bathymetries are referred to the tide-gauge reference system for prudential reasons related to navigation and nautical charts [9]. therefore, there is no congruence between the zero of the tide gauges and the national altimetric system, nor even between the various tide gauges at a given time. it must also be taken into account that the connection between the national altimetric system and the elevations on the islands cannot be made by precision geometric levelling, but is often made by trigonometric levelling or gnss levelling corrected later with the same geoid models [13]. the interest in the correct relationship between terrestrial and tidal altimetry systems is constantly increasing, both for the growing interest in the automatic extraction of coastlines [14]-[16] and for the very recent availability of low-cost gnss receivers that can acquire in differential mode, allowing centimetric, and potentially also millimetric [17], planimetric accuracy even at sea. in this paper, the first results of a post-processing kinematic (ppk) survey performed during the "thygraf - tyrrhenian gravity flow" oceanographic campaign in the southern tyrrhenian sea, conducted on board the urania r/v (research vessel), are reported.
the aim was to make an initial comparison between the altitudes acquired by the gnss device and corrected with a geoid model and those recorded simultaneously by the tide gauges present in the area. in the paragraph "materials and methods" the instruments used and the measurements carried out are illustrated; in the paragraph "gnss data processing" the processing carried out and the different strategies used are reported; finally, in the paragraph "results and discussion" the results obtained are compared with the tidal data and the conclusions are illustrated.

2. materials and methods
during the thygraf oceanographic survey conducted on board the urania r/v from the 12th to the 19th of february 2013, the geodetic team of the scientific crew was engaged in experimenting and validating some innovative techniques of satellite survey in navigation. for this purpose, the geodetic-class gps-gnss receiver topcon legacy-e was used, and the antenna (topcon pga) was installed on the top of the ship itself, thanks to the collaboration of the ship's personnel (in figure 1, the square antenna to the right of the main mast). before departure, a survey was carried out using a total station with the aim of measuring the antenna height with respect to the waterline. the result was 18.132 m with respect to the bottom of the antenna mount. the installation height was necessary to avoid cycle slips and electromagnetic disturbances from the ship's machinery, but it certainly amplified the variability of the three-dimensional coordinates recorded, due to the continuous and physiological variation of the ship's attitude. the antenna was therefore installed on one of the highest points of the ship, where the only obstruction was the highest part of the mast. the antenna was in basic configuration, without multipath-limiting devices, because these could have increased the effect of the wind; for this first experiment it was decided not to use them. no extensive analysis of antenna effects was carried out in this first experiment. the data surveyed (acquired in double frequency with a sampling interval of one second) were subsequently post-processed with respect to the permanent stations of the new national dynamic network; these reference measurements were made available by various agencies (university of palermo, calabria region, etc.). the survey and the subsequent post-processing made it possible to evaluate the three-dimensional position of the antenna itself, statically fixed to the body of the ship throughout the oceanographic survey, with centimetre accuracy. it is important to underline that the antenna was obviously affected by all the displacements and attitude variations (heave, roll, pitch and yaw) that affected the ship during its navigation. this series of measurements allowed us to evaluate their variations with great accuracy. the gnss survey allowed the altimetric comparison between the data provided by the tidal networks (referred to a conventional local zero with respect to a local tide gauge) and the data observed in navigation (ellipsoidal heights, which must be corrected with the italgeo05 geoid model). this analysis highlights a fundamental aspect linked to the compatibility between different altimetric reference systems, in order to allow a connection between on-shore and off-shore heights.
3. gnss data processing
the gnss receiver positioned on the urania r/v (figure 1) acquired dual-frequency data with a 1 s sampling interval from the 14th to the 18th of february 2013; during the research survey the marine weather conditions were optimal. the permanent station of tropea (table 1), belonging to the permanent network of the calabria region [18], was selected for data post-processing; hourly rinex files with a sampling interval of 1 s were downloaded.

figure 1. gnss antenna and its installation on the urania ship.

3.1. rtklib
the first step of the data processing was performed with rtklib ver. 2.4.2, an open-source package for gnss positioning developed by tomoji takasu in 2007 [20], [21]. rtklib can process data from different satellite systems (gps, glonass, galileo, qzss and beidou), considering both real-time and post-processing approaches and various positioning modes: single, dgps/dgnss, kinematic, static, moving-baseline, fixed, ppp-kinematic, ppp-static and ppp-fixed. unfortunately, rtklib provides a graphical interface where it is possible to upload only two rinex files, one for the rover and a second for the base station; furthermore, the software does not manage the raw format of the topcon receiver (figure 2). therefore, to facilitate the processing operations, the data were managed with the open-source package teqc by unavco [22]. teqc is a toolkit with three main functions: translation, editing, and quality checking, from which it gets its name. for our purposes, the two functions of translation and editing were exploited: the first makes it possible to convert (translate) gnss raw receiver files into rinex format (observation and navigation files); the second makes it possible to cut or splice rinex files. the data acquired by the receiver positioned on the urania r/v, in tps format, were then converted to rinex and reorganised into daily files with the teqc software. in addition, the data from the trop permanent station, already in rinex format, were edited with teqc to create a single daily file. the output files from the teqc software were imported into rtklib, together with the precise ephemerides released by the crustal dynamics data information system (cddis) [4], and processed in kinematic mode. for doy046 (acquisition period from 15/02/13 00:00:00 to 15/02/13 23:59:59) and doy047 (acquisition period from 16/02/13 00:00:00 to 16/02/13 23:59:59), the number of fixed/float solutions (table 2) and the estimated standard deviations (table 3) are reported. unfortunately, the trop permanent station acquisitions for the period 19:00:00 - 19:59:59 were not available, so the rtklib software could only process a single-position solution (table 2). as a consequence, the coordinates related to these 2412 observations were not considered in the results (table 3), nor in any of the figures related to doy047, because of the very low and unreliable solution precision. the positions measured in kinematic mode, obtained by processing the gnss data acquired during the two days of navigation, show a similar level of precision, with a mean value of 0.005 m for the planimetric coordinates and 0.012 m for the heights.
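to make the solution statistics above reproducible, the sketch below summarises an rtklib solution file in the way tables 2 and 3 do. it is a minimal sketch, not the authors' processing chain: it assumes the default rtklib .pos output in the llh format, where header lines start with "%" and each record contains the epoch (two tokens), latitude, longitude, height, the quality flag q (1 = fixed, 2 = float, 5 = single), ns, and then the sdn, sde and sdu columns; the file name is a hypothetical placeholder.

```python
# minimal sketch: summarise an rtklib .pos file (default llh format assumed).
from collections import Counter

def summarise_pos(path):
    q_labels = {1: "fixed", 2: "float", 5: "single"}
    counts, sdu = Counter(), []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if line.startswith("%") or len(fields) < 10:
                continue                       # skip headers / short lines
            q = int(fields[5])                 # solution quality flag
            counts[q_labels.get(q, "other")] += 1
            sdu.append(float(fields[9]))       # up-component std dev (m)
    total = sum(counts.values())
    for label, n in counts.items():
        print(f"{label:7s} {n:6d} ({100 * n / total:.0f} %)")
    print(f"avg sdu {sum(sdu) / len(sdu):.3f} m, "
          f"max {max(sdu):.3f} m, min {min(sdu):.3f} m")

summarise_pos("doy046.pos")  # hypothetical file name
```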
the planimetric navigation paths of the ship are represented in gis software (qgis): on the first day (doy046) the path was mainly a round trip between the two islands of stromboli and lipari, and therefore away from the mainland. on the second day (doy047) navigation started with a long "transfer" section from the islands to the mainland in a north-west -> south-east direction, followed by navigation along the coast and some grids in specific areas. in figure 3 the navigation paths doy046 (red track) and doy047 (yellow track) and the location of the permanent gnss station trop (red triangle) are represented. the daily trend of the heights, despite high variability due to the vessel attitude, shows a periodic variation almost certainly due to the tidal effect (figure 4a). during the second day (figure 4b), a discontinuity is observed in the central hours of the day; this trend is probably due to the well-known effects of local tidal disturbance observed in the messina strait.

table 1. coordinates and characteristics of the trop station as reported on the operator's website (reference system wgs84-etrs89; epsg: 4937 [19]).
name station: trop
latitude: 38°40' 45.6525" n
longitude: 15°53' 48.2067" e
h: 100.086 m
antenna type: leiat504gg leis
receiver: leica grx1200ggpro

figure 2. rtklib options for the ppk data processing.

table 2. positions estimated and their solutions.
solution / doy046 / doy047
total: 85563 / 85233
fixed solution: 47573 (56 %) / 39609 (46 %)
float solution: 37990 (44 %) / 43212 (51 %)
single solution: 0 / 2412 (3 %)

table 3. estimated standard deviations of the positions, in m.
doy046: sdn / sde / sdu; doy047: sdn / sde / sdu
avg: 0.006 / 0.005 / 0.012; 0.005 / 0.005 / 0.012
max: 0.543 / 0.407 / 0.988; 0.637 / 0.486 / 1.065
min: 0.003 / 0.003 / 0.008; 0.003 / 0.002 / 0.008

figure 3. navigation paths on doy046 (red track) and doy047 (yellow track) and the permanent gnss station trop location (red triangle).

3.2. topcon tools
the processing of the heights measured in kinematic mode, obtained from the gnss data in rtklib, was repeated for verification with a commercial software package, topcon tools. this verification was carried out because rtklib sometimes has small "bugs" [21] and because, according to some authors, topcon tools shows very accurate results, in some cases even comparable with those of scientific software; this is probably due to the complete configurability of the processing [23]. the topcon tools package ver. 8.2.3 by topcon corporation was used for the kinematic measurements. the software allows the processing of data from different devices, such as total stations, digital levels and gnss receivers, and it has been used in several technical-scientific applications [24], [25]. topcon tools uses the modified hopfield model for the tropospheric corrections [26]. the employed positioning mode was code-based differential ("code diff"); the time range and the cut-off angle were set to 15 seconds and 10 degrees, respectively. recently, dardanelli et al. [27] showed that the hypothesis of a normal distribution is confirmed in most of the pairs and, specifically, the static vs. nrtk pair seems to achieve the best congruence, while, involving the ppp approach, the pairs obtained with the csrs software achieve better congruence than those involving the rtklib software. although the lowest congruencies seem to characterize the pairs involving rtklib, this result should not be considered a criticism of the performance of this well-known open-access program, which is undoubtedly one of the most useful gnss processing packages available, given its very straightforward applicability, also considering that our analysis is limited to a few hours of data.
the results obtained from the topcon tools processing are different from those obtained with rtklib, but the general trend, which seems to mainly reproduce the tidal effects, is very similar (figure 5a and b), as are the numerical results (table 4).

table 4. positions estimated with both packages and their estimated standard deviations, in m (top block: topcon tools; bottom block: rtklib).
topcon tools - doy046: sdn / sde / sdu; doy047: sdn / sde / sdu
avg: 0.101 / 0.062 / 0.118; 0.078 / 0.078 / 0.051
max: 0.140 / 0.106 / 0.150; 0.283 / 0.283 / 0.112
min: 0.070 / 0.032 / 0.100; 0.042 / 0.042 / 0.013
rtklib - doy046: sdn / sde / sdu; doy047: sdn / sde / sdu
avg: 0.006 / 0.005 / 0.012; 0.005 / 0.005 / 0.012
max: 0.543 / 0.407 / 0.988; 0.637 / 0.486 / 1.065
min: 0.003 / 0.003 / 0.008; 0.003 / 0.002 / 0.008

figure 4. ellipsoidal height variations during doy046 (a) and during doy047 (b).
figure 5. ellipsoidal height variations during doy046 (a) and during doy047 (b), comparison between rtklib (blue dots) and topcon tools (orange dots) results. in red is the 10-minute moving average from topcon tools and in grey that of rtklib.

4. results and discussion
the ellipsoidal heights processed with rtklib and topcon tools were then compared with the hydrometric levels of some stations belonging to the rmn (national tide-gauge network) [28]. the rmn stations close to the study area are "ginostra", "strombolicchio", "reggio calabria" and "messina" (figure 6). unfortunately, the data from the strombolicchio station are not available for the period under examination. we decided to compare the hydrometric level with the ellipsoidal height only for the tide gauge stations of ginostra and palinuro, while reggio calabria and messina were not selected because they showed a very different trend, probably due to the well-known local effects near the messina strait (figure 7). at first, the variations measured by the tide gauges were compared only in terms of trend with the heights measured by the gnss receiver on board the ship. in order to filter the heights and limit the effect of the oscillations caused by navigation, a moving average over ten-minute periods was adopted (the grey trend in the figure). as already observed, there is an apparently similar trend between the tide gauge and the averaged gnss altitudes, but also a constant shift between them. for doy046 the trend agreement continues very well for the whole day, while for doy047 there seems to be a noticeable discontinuity at a certain point. the causes may be various, but the aforementioned tidal effects near the strait are probably the main cause (figure 8). for the reasons outlined above, it was decided to continue the analysis only on doy046, which seems more significant.

figure 6. tide-gauge stations available in the study area.
figure 7. tide gauges and path of the navigation during doy046 and doy047.
figure 8. tide gauge trends and mean heights from the gnss receiver for doy046 (a) and doy047 (b); note the different origin of height of the two series.
figure 9. orthometric heights for doy046, converted using the italgeo2005 geoid model.
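the ten-minute filtering described above can be reproduced with a few lines of code. the following is a minimal sketch, assuming a 1 hz height series in a csv file with hypothetical column names "time" and "h_ell"; it is not the tool actually used by the authors.

```python
# minimal sketch of the ten-minute smoothing: a centred moving average
# over a 1 hz ellipsoidal-height series, here with pandas.
import pandas as pd

df = pd.read_csv("doy046_heights.csv", parse_dates=["time"])  # hypothetical file
df = df.set_index("time").sort_index()

# 600 samples at 1 hz = 10 minutes; a centred window limits phase lag
df["h_smooth"] = df["h_ell"].rolling(window=600, center=True,
                                     min_periods=300).mean()
```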
it is important to remember that the orthometric correction operated with a geoid model brings the elevations to an equipotential surface of the gravitational field passing through a given point of the national territory (for italy, genoa), which in general does not coincide with the sea level at another position of the national territory at a given time [29], [30]. moreover, there is a problem of connection of the various tide gauges to the national altimetric system, and conventionally the "zero" of the tide gauge has no connection with the "zero" of the national altimetric system. the ellipsoidal elevations measured with the gnss receiver were converted into orthometric elevations (h in figure 9) with respect to the national altimetric system using the geoid model italgeo2005, which is accredited with 2.5 cm of accuracy [5]. however, it should be noted that the reported accuracy is estimated on land, where geoid-ellipsoid separation values measured on altimetric benchmarks are used to improve the estimate of the geoid-ellipsoid separation itself. in our case some of the survey points are close to the coast, and therefore the estimate of the geoid-ellipsoid separation should be reliable, while for offshore points the estimate is certainly less reliable. moreover, the connection between the national and island elevation systems cannot be made by precision geometric levelling, but is often made by trigonometric or gnss levelling corrected later with the same geoid models. the elevations were then converted from ellipsoidal to orthometric using the software "geotrasformer" [31], which applies resampling algorithms to the gridded geoid-ellipsoid separation values provided by the italian military geographic institute (igmi), the official national geodetic agency that released the grids of the italgeo2005 model. to compare the surveyed data with the tide gauge information it is necessary to subtract the estimated antenna elevation (section 2) from the orthometric values themselves (gnss measurements corrected with italgeo2005); it must be considered, however, that during navigation the ship can progressively but significantly change its height due to the discharge of waste water and to fuel consumption; such variations can reach several centimetres during a campaign and may not be constant, therefore an average value during the day can still give reliable information. it was decided to take an average of the orthometric heights for the entire day to reduce the effect of the tide, obtaining an average height of 18.276 m (including the height of the pga2 antenna). considering the approximations mentioned above, a comparison was made between the heights obtained with the gnss measurements corrected with italgeo2005 and the antenna elevation (h in figure 10) and the heights reported at the same time by the two tide gauges considered significant, "ginostra" and "palinuro" (hydrostatic level in figure 10). the main trend of all three tracks (figure 10) follows the same tidal repetitions. the gap in height between the ship and tide gauge data is due to the different definition of the reference altimetric system, but the tidal effect is still predominant. there is also a systematic shift between the heights of the two tide gauges, which may be due to local reasons of a different average sea level or, more likely, to a different materialisation during the installation of the tide gauge itself, but also to an imperfect connection between the tide gauges due to the difficulty of connecting the heights of an island (ginostra) to the national altimetric system on the mainland.

figure 10. orthometric heights for doy046: comparison with the heights reported at the same time by the two tide gauges considered significant, "ginostra" and "palinuro".
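for illustration, the conversion chain applied above (gridded geoid-ellipsoid separation, then subtraction of the antenna height) can be sketched as follows. the grid array, its origin, its spacing and the function names are hypothetical placeholders standing in for the igmi italgeo2005 grids and the "geotrasformer" resampling; only the bilinear-interpolation idea and the reduction h = h_ell - n - antenna height follow the text.

```python
# minimal sketch, assuming a regular grid of geoid undulations n(lat, lon).
import numpy as np

def undulation(lat, lon, grid, lat0, lon0, step):
    """bilinearly interpolate n at (lat, lon); grid[0, 0] is the node at
    (lat0, lon0) and `step` is the spacing in degrees on both axes."""
    i, j = (lat - lat0) / step, (lon - lon0) / step
    i0, j0 = int(np.floor(i)), int(np.floor(j))
    di, dj = i - i0, j - j0
    return ((1 - di) * (1 - dj) * grid[i0, j0]
            + (1 - di) * dj * grid[i0, j0 + 1]
            + di * (1 - dj) * grid[i0 + 1, j0]
            + di * dj * grid[i0 + 1, j0 + 1])

def orthometric(h_ell, lat, lon, grid, lat0, lon0, step, antenna=18.132):
    """reduce a gnss ellipsoidal height to an orthometric height near sea level."""
    return h_ell - undulation(lat, lon, grid, lat0, lon0, step) - antenna

grid = np.full((5, 5), 40.0)  # flat 40 m undulation, purely synthetic
print(orthometric(58.3, 38.68, 15.90, grid, 38.0, 15.0, 0.5))
```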
5. conclusions and further developments
considering the fluctuations due to the physiological continuous variation of the ship's attitude, the gnss plot faithfully followed the trend of the tidal variations and highlighted the shifts between the gnss data and the tide gauges due to the different materialisation of the relative reference systems. in fact, even if the installation height amplified the variability due to the continuous variation of the ship's attitude, the average trend of the gnss plot showed a relative trend very similar to that of the neighbouring tide gauges considered significant. after the orthometric correction of the heights and the estimation of the antenna height, it was possible to compare the data also "in absolute" terms (without forgetting the different altimetric references), and this comparison showed a remarkable agreement between the heights measured by the gnss and the tide gauges, highlighting, at the same time, the effects of the different altimetric references. this experimentation highlighted the need to rethink or update marine altimetric datums, especially in view of the possible imminent diffusion of low-cost differential gnss receivers in navigation. to study the possibility of highlighting any anomalies in operation, usually due to momentary dysfunctions of the satellite segment, an innovative "multi-constellation" approach will be tested. this approach, designed by the same research group, can facilitate the detection of these anomalies while highlighting the corrections to be made. if this is verified, it would be possible to make the positioning measurements made during navigation more "robust" (i.e. less prone to errors), significantly improving their reliability. this could also be verified by a comparison with the information provided by the ship's dgps-ins navigation system.

acknowledgement
this research was funded by the national research council (cnr) and carried out in the framework of the flagship project ritmare (ricerca italiana per il mare). the authors would like to specially thank the technical and scientific crews of the thygraf mission for their support in the installation of the receiver and for their continuous support in carrying out the measurement operations and its recordings.

references
[1] e. alcaras, c. parente, a. vallario, the importance of the coordinate transformation process in using heterogeneous data in coastal and marine geographic information system, journal of marine science and engineering 8(9) (2020) 708. doi: 10.3390/jmse8090708
[2] a. mori, la cartografia ufficiale in italia e l'istituto geografico militare nel cinquantenario dell'istituto geografico militare (1872-1922); istituto geografico militare, stabilimento poligrafico per l'amministrazione della guerra: roma, italy, 1922, 425 pp., in italian.
[3] l. surace, i sistemi di riferimento geotopocartografici in italia, bollettino di geodesia e scienze affini 57 (1996) pp. 181-234, in italian.
[4] r. barzaghi, d. carrion, m. reguzzoni, g. a. venuti, a feasibility study on the unification of the italian height systems using gnss-leveling data and global satellite gravity models, international association of geodesy symposia (2016) pp. 281-288.
[5] r. barzaghi, c. i. de gaetani, b.
betti, the worldwide physical height datum project, rendiconti lincei 31 (2020) pp. 27-34.
[6] g. fastellini, f. radicioni, a. stoppini, r. barzaghi, d. carrion, new active and passive networks for a support to geodetic activities in umbria, bollettino di geodesia e scienze affini 67(3) (2008) pp. 203-227.
[7] l. e. sjöberg, m. bagherbandi, quasigeoid-to-geoid determination by egm08, earth science informatics 5(2) (2012) pp. 87-91.
[8] g. inghilleri, topografia generale, ed. utet, torino, italy (1974) 1019 pp., in italian.
[9] intergovernmental oceanographic commission (ioc), manual on sea-level measurements and interpretation, volume iv: an update to 2006, paris, intergovernmental oceanographic commission of unesco (2006) 78 pp. (ioc manuals and guides no. 14, vol. iv; jcomm technical report no. 31; wmo/td no. 1339).
[10] d. casalbore, a. bosman, d. casas, e. martorelli, d. ridente, morphological variability of submarine mass movements in the tectonically-controlled calabro-tyrrhenian continental margin (southern italy), geosciences (switzerland) 9(1) (2019) 43. doi: 10.3390/geosciences9010043
[11] e. petritoli, f. leccese, high accuracy attitude and navigation system for an autonomous underwater vehicle (auv), acta imeko 7 (2018) 2, pp. 3-9. doi: 10.21014/acta_imeko.v7i2.535
[12] e. martorelli, f. italiano, m. ingrassia, l. macelloni, a. bosman, a. m. conte, s. e. beaubien, s. graziani, a. sposato, f. l. chiocci, evidence of a shallow water submarine hydrothermal field off zannone island from morphological and geochemical characterization: implications for tyrrhenian sea quaternary volcanism, journal of geophysical research: solid earth 121(12) (2016) pp. 8396-8414. doi: 10.1002/2016jb013103
[13] istituto mareografico nazionale. online [accessed 19 may 2021] https://www.mareografico.it
[14] s. zollini, m. alicandro, m. cuevas-gonzález, v. baiocchi, d. dominici, p. m. buscema, shoreline extraction based on an active connection matrix (acm) image enhancement strategy, journal of marine science and engineering 8(1) (2020) 9. doi: 10.3390/jmse8010009
[15] e. alcaras, c. parente, a. vallario, comparison of different interpolation methods for dem production, international journal of advanced trends in computer science and engineering 8(4) (2019) pp. 1654-1659. doi: 10.30534/ijatcse/2019/91842019
[16] d. costantino, m. pepe, g. dardanelli, v. baiocchi, using optical satellite and aerial imagery for automatic coastline mapping, geographia technica 15(2) (2020) pp. 171-190. doi: 10.21163/gt_2020.152.17
[17] u. robustelli, v. baiocchi, l. marconi, f. radicioni, g. pugliano, precise point positioning with single and dual-frequency multi-gnss android smartphones, ceur workshop proceedings, 2020, 2626.
[18] regione calabria. online [accessed 19 february 2021] www.regione.calabria.it
[19] igmi, nota per il corretto utilizzo dei sistemi geodetici di riferimento all'interno dei software gis, aggiornata a febbraio 2019, in italian. online [accessed 22 november 2021] https://www.sitr.regione.sicilia.it/wp-content/uploads/nuova_nota_epsg.pdf
[20] t. takasu, rtklib: open source program package for rtk gps, foss4g 2009, tokyo, japan, november 2, 2009.
[21] p. dabove, m. piras, k. n. jonah, statistical comparison of ppp solution obtained by online post-processing services, ieee/ion position, location and navigation symposium (plans) (2016) pp. 137-143. doi: 10.1109/plans.2016.7479693
[22] unavco.
online [accessed 19 may 2021] https://www.unavco.org/software/data-processing/teqc/teqc.html
[23] topcon, topcon tools 7.3 manual. online [accessed 19 may 2021] https://www.topptopo.dk/files/manual/7010_0612_revl_topcontools7_3_rm.pdf
[24] k. dawidowicz, g. krzan, k. świątek, relative gps/glonass coordinates determination in urban areas - accuracy analysis, in proceedings of the 15th international multidisciplinary scientific geoconference sgem 2015, albena, bulgaria, 18-24 june 2015, volume 2 (2015) pp. 423-430. doi: 10.5593/sgem2015/b22/s9.053
[25] m. uradziński, m. bakuła, assessment of static positioning accuracy using low-cost smartphone gps devices for geodetic survey points' determination and monitoring, appl. sci. 10 (2020) pp. 1-22. doi: 10.3390/app10155308
[26] c. goad, l. goodman, a modified hopfield tropospheric refraction correction model, in proceedings of the fall annual meeting of the american geophysical union, san francisco, ca, usa, 12-17 december 1974.
[27] g. dardanelli, a. maltese, c. pipitone, a. pisciotta, m. lo brutto, nrtk, ppp or static, that is the question. testing different positioning solutions for gnss survey, remote sens. 13 (2021) 1406. doi: 10.3390/rs13071406
[28] istituto mareografico nazionale, rete mareografica nazionale. online [accessed 19 february 2021] https://www.mareografico.it
[29] m. pierozzi, il sistema altimetrico italiano, la livellazione, lo zero idrografico ed i riflessi in ambito portuale, in italian. online [accessed 19 may 2021] http://www.assoporti.it/sites/www.assoporti.it/files/eventiesterni/pierozzi.pdf
[30] r. barzaghi, a. borghi, d. carrion, g. sona, refining the estimate of the italian quasigeoid, bollettino di geodesia e scienze affini 3 (2007) pp. 145-160.
[31] v. baiocchi, p. camuccio, m. zagari, a. ceglia, s. del gobbo, f. purri, l. cipollini, f. vatore, development of a geographic database of a district area in open source environment, geoingegneria ambientale e mineraria 151 (2017) pp. 97-101.
estimate the useful life for a heating, ventilation, and air conditioning system on a high-speed train using failure models
acta imeko issn: 2221-870x september 2021, volume 10, number 3, 100 - 107
marcantonio catelani1, lorenzo ciani1, giulia guidi1, gabriele patrizi1, diego galar2
1 department of information engineering, university of florence, via di s. marta 3, 50139, florence (italy)
2 luleå university of technology, luleå, sweden

abstract: heating, ventilation, and air conditioning (hvac) systems are widely used to guarantee an acceptable level of occupancy comfort, to maintain good indoor air quality, and to minimize system costs and energy requirements. if failure data coming from a company database are not available, then a reliability prediction based on failure rate models and handbook data must be carried out. performing a reliability prediction provides an awareness of potential equipment degradation during the equipment life cycle. otherwise, if field data regarding the component failures are available, then classical reliability assessment techniques such as fault tree analysis and reliability block diagram should be carried out. reliability prediction of mechanical components is a challenging task that must be carefully assessed during the design of a system. for these reasons, this paper deals with the reliability assessment of an hvac system using both failure rate models for mechanical components and field data. the reliability obtained using the field data is compared with that achieved using the failure rate models in order to assess a model which includes all the mechanical parts. the study highlights how fundamental it is to analyse the reliability of a complex system by integrating both field data and mathematical models.

section: research paper keywords: reliability; diagnostic; railway engineering; failure rate; hvac; useful life citation: marcantonio catelani, lorenzo ciani, giulia guidi, gabriele patrizi, diego galar, estimate the useful life for a heating, ventilation, and air conditioning system on a high-speed train using failure models, acta imeko, vol. 10, no. 3, article 10, september 2021, identifier: imeko-acta-10 (2021)-03-10 section editor: lorenzo ciani, university of florence, italy received january 29, 2021; in final form august 2, 2021; published september 2021 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: giulia guidi, e-mail: giulia.guidi@unifi.it

1. introduction
all devices are constituted from materials that will tend to degrade with time. this materials degradation will continue until some critical device parameter can no longer meet the required specification for proper device functionality [1]-[8].
for this reason, as well as for the growing complexity of equipment and the rapidly increasing costs incurred by loss of operation and maintenance, the interest in reliability is growing in many industrial fields. generally, reliability can be assessed through different methods, such as reliability prediction, fault tree analysis, reliability block diagram, etc. (see for instance [9]-[12]). fault tree analysis (fta) [13], [14] is an analytical and deductive (top-down) method. it is an organized graphical representation of the conditions or other factors causing or contributing to the occurrence of a defined outcome, referred to as the "top event". reliability block diagram (rbd) [15], instead, is a functional diagram of all the components making up the system that shows how component reliability contributes to the failure or success of the whole system. these techniques need input data to be performed, but sometimes data are not available and need to be predicted. an accurate reliability prediction should be performed in the early stages of a development program to support the design process [16]-[21]. a reliability prediction of electronic components can be assessed following the guidelines of several handbooks, while the prediction for mechanical components is more challenging for the following reasons [16], [22]:
• individual mechanical components such as valves and gearboxes often perform more than one function, and failure data for specific applications of non-standard components are seldom available.
• failure rates of mechanical components are not usually described by a constant failure rate distribution, because of wear, fatigue and other stress-related failure mechanisms resulting in equipment degradation. data gathering is complicated when the constant failure rate distribution cannot be assumed, and individual times to failure must be recorded in addition to total operating hours and total failures.
• mechanical equipment reliability is more sensitive to loading, operating mode and utilization rate than electronic equipment reliability.
failure rate data based on operating time alone are usually inadequate for a reliability prediction of mechanical equipment.
• the definition of failure for mechanical equipment depends upon its application, and the lack of such information in a failure rate data bank limits its usefulness.
the problems listed above demonstrate the need for reliability prediction models that do not rely solely on existing failure rate data banks [23], [24]. trying to address these needs, this paper introduces a reliability assessment procedure which integrates failure rate models and field data to optimize the reliability analysis of a railway heating, ventilation and air conditioning (hvac) system. the paper uses both the fta and rbd techniques to estimate the system reliability based on realistic failure rate models for mechanical components. the rest of the paper is organized as follows: section 2 illustrates the aim of an hvac and presents the high-level taxonomy of the system under test; section 3 presents the failure rate prediction of three mechanical components (compressor, heat exchanger and blower) using failure models; section 4 shows the results of the reliability assessment carried out using the fta and rbd techniques; finally, section 5 compares the results achieved with the different techniques.

2. hvac for high-speed train
underground transport and rail systems have become more and more frequent, as they allow rapid transit times while transporting a large number of users [25]. consequently, rams (reliability, availability, maintainability and safety) analysis has become a fundamental tool during the design of railway systems [25]-[27]. the networks of high-speed trains, and also standard rails, are more and more often transferred to underground tunnels in order to mitigate the environmental impact. both applications need adequate ventilation. in metros, the influx of a large number of people and the presence of moving trains generate a reduction of oxygen and an increase in heat and pollutants. mechanical ventilation is then required to achieve the necessary air exchange and grant users of the underground train systems comfortable conditions. ventilation systems have a second and even more important purpose, that is, to guarantee safety in case of fire emergency: in order to create a safe and clean environment for escape, mechanical ventilation is activated both in tunnels and in the stations. in rail lines, the ventilation of tunnels is mainly dedicated to fire emergencies, where it is vital to keep the smoke propagation under control and create safe areas and a clear environment for the users. furthermore, efficient temperature regulation is becoming a necessity to face overcrowded carriages [28]-[30]. hvac is the best way of regulating temperature and air quality on crowded trains [31]. one of the most important guarantees that rail manufacturers should look for during the design of an air conditioning system is reliability under the actual operating conditions [28], [32]. during the design of an hvac system it is necessary to gather information about the hvac equipment and its use [33]. the taxonomy is a systematic classification of items into generic groups based on factors possibly common to several of the items (location, use, equipment subdivision, etc.). referring to figure 1, levels 1 to 5 represent a high-level categorization that relates to industries and plant application, regardless of the equipment units (see level 6) involved.
this is because an equipment unit (e.g., an air conditioning unit) can be used in many different industries and plant configurations and, for analysing the failure/reliability/maintainability of similar equipment, it is necessary to have information about the operating context. taxonomic information on these levels (1 to 5) shall be included in the database for each equipment unit as "use/location data". levels 6 to 9 are related to the equipment unit (inventory), with the subdivision into lower indenture levels corresponding to a parent-child relationship. the taxonomy of the system under test, from level 1 to level 5, is reported in table 1. the levels from 6 to 9 are very structured and include the level of the components, divided also into the part sections.

figure 1. taxonomy classification with taxonomic levels (source: iso 14224:2016 [34]).

table 1. taxonomy of the system from level 1 to level 5.
level 1 - industry: railway
level 2 - business category: high speed
level 3 - installation: s121
level 4 - unit: front car
level 5 - system: hvac system

3. failure rate models
predicting the life of a mechanical element is not easy; it requires mathematical equations to estimate the design life of mechanical components [16]. these reliability equations consider the design parameters, environmental extremes and operational stresses to predict the reliability parameters. the total failure rate of a component is the sum of the failure rates of its parts for the particular time period in question. the equations rely on a base failure rate derived from laboratory test data where the exact stress levels are known. more information about the failure rate data used in this work can be found in [19]. the most critical components of a heating, ventilation and air conditioning (hvac) system are the compressor, the heat exchanger and the blower [25], [35]. in order to estimate the failure rate of these items, the relative failure models are analysed in the following sections.

3.1. compressor model
a compressor system is made up of one or more stages. the compressor compresses the gas, increasing its temperature and pressure [16], [36]. the total compressor may comprise elements, or groups of elements, in series to form a multistage compressor, based on the change in temperature and pressure across each stage. every compressor to be analysed is characterized by a unique design and comprises many different components. according to [16] and to the compressor datasheet, the designed hvac compressor is a reciprocating-type compressor. the following equation has been obtained in order to estimate the failure rate of the actual compressor used in the considered hvac design:

$\lambda_{c} = (\lambda_{fd} \cdot C_{sf}) + \lambda_{ca} + \lambda_{be} + \lambda_{va} + \lambda_{se} + \lambda_{sh}$ , (1)

where
• $\lambda_{c}$ is the total failure rate of the compressor
• $\lambda_{fd}$ is the failure rate of the fluid driver
• $C_{sf}$ is the compressor service multiplying factor
• $\lambda_{ca}$ is the failure rate of the compressor casing
• $\lambda_{be}$ is the total failure rate of the compressor shaft bearings
• $\lambda_{va}$ is the total failure rate of the control valve assemblies
• $\lambda_{se}$ is the total failure rate of the compressor seals
• $\lambda_{sh}$ is the failure rate of the compressor shaft.
different compressor configurations, such as piston, rotary screw and centrifugal, have different parts within the total compressor, and it is important to obtain a parts list for the compressor prior to estimating its reliability.
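as a minimal sketch, equation (1) can be coded as a plain sum of part-level failure rates, with the fluid-driver term scaled by the service factor. the numeric part rates below are illustrative placeholders chosen only so that they sum to the value of equation (4); they are not the data of [19].

```python
# minimal sketch of equation (1): compressor failure rate as a sum of parts.
def compressor_failure_rate(fd, c_sf, ca, be, va, se, sh):
    """all rates in failure/h; returns the total compressor rate (eq. 1)."""
    return fd * c_sf + ca + be + va + se + sh

# illustrative placeholder part rates (failure/h) and service factor
lam_c = compressor_failure_rate(fd=5.0e-6, c_sf=1.5, ca=1.0e-6, be=2.0e-6,
                                va=3.0e-6, se=1.5e-6, sh=0.6e-6)
print(f"lambda_c = {lam_c:.2e} failure/h")  # 1.56e-05, cf. eq. (4)
```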
the failure rate of each part comprising the compressor must be determined before the entire compressor assembly failure rate, $\lambda_{c}$, can be determined. the failure rate of each part will depend on the expected operational and environmental factors that exist during compressor operation. the total failure rate of the compressor shaft bearings is:

$\lambda_{be} = \lambda_{be,b} \cdot C_{r} \cdot C_{v} \cdot C_{cw} \cdot C_{t} \cdot C_{sf} \cdot C_{c}$ , (2)

where
• $\lambda_{be}$ is the total failure rate of the bearing
• $\lambda_{be,b}$ is the base failure rate
• $C_{r}$ is the life adjustment factor for reliability
• $C_{v}$ is the multiplying factor for the lubricant
• $C_{cw}$ is the multiplying factor for the water contaminant level
• $C_{t}$ is the multiplying factor for the operating temperature
• $C_{sf}$ is the multiplying factor for the operating service conditions
• $C_{c}$ is the multiplying factor for the lubrication contamination level.
the total failure rate of the control valve assemblies is given by:

$\lambda_{va} = \lambda_{po} + \lambda_{se} + \lambda_{sp} + \lambda_{so} + \lambda_{ho}$ , (3)

where
• $\lambda_{va}$ is the total failure rate of the valve assemblies
• $\lambda_{po}$ is the failure rate of the poppet assembly
• $\lambda_{se}$ is the failure rate of the seals
• $\lambda_{sp}$ is the failure rate of the spring(s)
• $\lambda_{so}$ is the failure rate of the solenoid
• $\lambda_{ho}$ is the failure rate of the valve housing.
consequently, using the failure data illustrated in [19], it is possible to solve equations (2)-(3). then, the compressor failure rate can be estimated by integrating these results into equation (1), as follows:

$\lambda_{c} = 1.56 \cdot 10^{-5}$ failure/h . (4)

usually, the failure rates of components implemented in railway applications are expressed in failure/km or, for the sake of simplicity, in fpmk (failures per million kilometers). moreover, the duty cycle of the compressor must be taken into account in order to obtain a more accurate evaluation. consequently, considering an approximate annual distance for a high-speed train of half a million kilometers and a duty cycle of 30 %, the failure rate of the compressor becomes:

$\lambda_{compressor} = 8.22 \cdot 10^{-2}$ fpmk . (5)

the failure rate achieved above must be compared with the failure rate provided by the manufacturer of the component (merak), which is obtained by integrating field data and internal company tests:

$\lambda_{compressor,merak} = 7.79 \cdot 10^{-2}$ fpmk . (6)

comparing equations (5) and (6), it is possible to note that the failure rate obtained using the compressor model is slightly higher than the one provided by the compressor manufacturer. the several variables considered by the failure model in equation (1) produce the different results, since the merak value is based mainly on field data. figure 2 shows the reliability curves relative to the failure rates of equations (5) and (6). the blue line represents the reliability calculated with the manufacturer "merak" failure rate, while the red one is the reliability calculated using the failure model of the compressor. the curve obtained using the model decreases faster, because its failure rate is higher than the merak failure rate. anyway, the difference between the two curves is limited: at the beginning the curves are approximately equal, then they decrease with different exponential decay rates.

figure 2. reliability curves of the compressor assembly calculated through merak data and the compressor model given by equation (1).
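the failure/h to fpmk conversion used between equations (4) and (5) is simple enough to state explicitly. a minimal sketch, assuming 8760 operating hours per year, the stated annual distance of half a million kilometers and the 30 % duty cycle:

```python
# minimal sketch of the failure/h -> fpmk conversion of eqs. (4)-(5).
def to_fpmk(lambda_per_hour, duty_cycle, km_per_year=5.0e5,
            hours_per_year=8760.0):
    failures_per_year = lambda_per_hour * hours_per_year * duty_cycle
    return failures_per_year / km_per_year * 1.0e6  # failures per 1e6 km

print(to_fpmk(1.56e-5, duty_cycle=0.30))  # ~8.2e-2 fpmk, cf. eq. (5)
```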
3.2. heat exchanger model
heat exchangers are an essential part of any kind of hvac system nowadays. the main function of a heat recovery system is to increase energy efficiency by transferring heat between two gases or fluids, thus reducing energy consumption and operating costs. in heat exchangers, as the name suggests, there is a transfer of energy from one fluid to another; the two fluids are physically separated and there is no direct contact between them. there are different types of heat exchangers, such as shell and tube, u-tube, shell and coil, helical, plate, etc. the transfer of heat can be between steam and water, water and steam, refrigerant and water, refrigerant and air, or water and water. the hvac system's compressor generates heat by compressing refrigerant. this heat can be captured and used for heating domestic water. for this purpose, a heat exchanger is placed between the compressor and the condenser. the water that is to be heated is circulated through this heat exchanger with the help of a pump whenever the hvac system is on. the heat exchanger included in the system under test is composed of a tube and an expansion valve. the failure rate of a fluid conductor is extremely sensitive to the operating environment of the system in which it is installed, as compared to the design of the pipe. each application must be evaluated individually, because of the many installation, usage and maintenance variables that affect the failure rate. the failure of a piping assembly depends primarily on the connection joints, and it can be estimated with the following equation [16]:

$\lambda_{p} = \lambda_{p,b} \cdot C_{e} = 2.2 \cdot 10^{-6}$ failure/h , (7)

where
• $\lambda_{p}$ is the failure rate of the pipe assembly
• $\lambda_{p,b}$ is the base failure rate of the pipe assembly, which is $1.57 \cdot 10^{-6}$ failure/h
• $C_{e}$ is the environmental factor, equal to 1.4 in case of a railway application.
for the expansion valve, the failure rate is provided by [16] and it is $\lambda_{va} = 4.5 \cdot 10^{-6}$ failure/h. therefore, the whole failure rate of the heat exchanger is given by the sum of the failure rate of the pipe and the failure rate of the valve, each one weighted by its own duty cycle. in particular, the duty cycle of the pipe is 80 %, while the duty cycle of the valve is 30 %. consequently, the failure rate of the heat exchanger is given by:

$\lambda_{heat\ exchanger} = 3.11 \cdot 10^{-6}$ failure/h . (8)

much the same as for the compressor, the failure rate of the heat exchanger must also be converted from failure/h into fpmk. the heat exchanger failure rate in case of railway application is the following:

$\lambda_{heat\ exchanger} = 5.4 \cdot 10^{-2}$ fpmk . (9)

also in this case, the failure rate based on field data has been provided by the component manufacturer merak, and it is equal to:

$\lambda_{heat\ exchanger,merak} = 4.124 \cdot 10^{-2}$ fpmk . (10)

figure 3 shows the two different reliability curves: the blue one is related to the merak data, while the red one is related to the model results achieved in equation (9). the model curve is a pessimistic estimate also for this component, but the difference between the two reliability trends is extremely low. this is mainly due to the fact that, for a simpler element like the heat exchanger, the manufacturer data and the model lead to a similar result.

figure 3. reliability curves of the heat exchanger assembly calculated through merak data and the heat exchanger model according to equation (9).
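the duty-cycle-weighted sum of equation (8) can be verified directly from the values given in the text; a minimal sketch:

```python
# minimal sketch of eq. (8): duty-cycle-weighted sum of the pipe and
# expansion-valve failure rates given in the text.
lam_pipe, duty_pipe = 2.2e-6, 0.80    # failure/h, eq. (7)
lam_valve, duty_valve = 4.5e-6, 0.30  # failure/h, from [16]

lam_hx = lam_pipe * duty_pipe + lam_valve * duty_valve
print(f"lambda_heat_exchanger = {lam_hx:.2e} failure/h")  # 3.11e-06, cf. eq. (8)
```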
3.3. blower model
one of the most common downfalls of installed hvac systems is their inability to distribute the correct amount of air to where it is needed most. when systems are restrictive, or blowers are not powerful enough, the air simply does not make it to where it needs to go. a blower is composed of:
• an ac motor;
• two bearings;
• a fan.
the failure rate of a motor is affected by such factors as insulation deterioration, wear of sliding parts, bearing deterioration, torque, load size and type, overhung loads, thrust loads and rotational speed. the failure rate model developed is based on a fractional or integral horsepower ac-type motor. the reliability of an electric motor is dependent upon the reliability of its parts, which may include bearings, electrical windings, armature/shaft, housing, gears and brushes. failure mechanisms resulting in part degradation and the failure rate distribution (as a function of time) are considered to be independent in each failure rate model. the total motor system failure rate is the sum of the failure rates of each of the parts in the motor:

$\lambda_{motor} = \lambda_{m,b} \cdot C_{sf} + \lambda_{wi} + \lambda_{st} + \lambda_{as} + \lambda_{be} + \lambda_{gr} + \lambda_{c}$ , (11)

where
• $\lambda_{motor}$ is the total failure rate for the motor system
• $\lambda_{m,b}$ is the base failure rate of the motor
• $C_{sf}$ is the motor load service factor
• $\lambda_{wi}$ is the failure rate of the electric motor windings
• $\lambda_{st}$ is the failure rate of the stator housing
• $\lambda_{as}$ is the failure rate of the armature shaft
• $\lambda_{be}$ is the failure rate of the bearings, evaluated using equation (2) and the suitable factors
• $\lambda_{gr}$ is the failure rate of the gears
• $\lambda_{c}$ is the failure rate of the capacitor.
the bearing failure rate can be estimated following the guidelines in section 3.1. the fans are modelled according to mil-std-217f [37] by:

$\lambda_{fan} = \left[ \frac{t^{2}}{\alpha_{B}^{3}} + \frac{1}{\alpha_{W}} \right]$ failure/h , (12)

where
• $t$ is the motor operating time period
• $\alpha_{B}$ is the weibull characteristic life for the motor bearings
• $\alpha_{W}$ is the weibull characteristic life for the motor windings.
finally, the whole blower failure rate is given by the sum of the failure rates of its components:

$\lambda_{blower} = \lambda_{motor} + 2 \cdot \lambda_{bearing} + \lambda_{fan} = 1.328 \cdot 10^{-5}$ failure/h . (13)

then, considering the duty cycle of 100 % and the required conversion from failure/h into fpmk, the blower failure rate becomes:

$\lambda_{blower} = 2.33 \cdot 10^{-1}$ fpmk . (14)

the failure rate achieved by analysing field data has been provided by the component manufacturer merak, and it is illustrated in the following:

$\lambda_{blower,merak} = 7.97 \cdot 10^{-2}$ fpmk . (15)

figure 4 shows the two reliability curves: the blue one is related to the merak data and the red one is related to the model data. as for the other components, the reliability calculated through the models provides a pessimistic reliability trend with respect to the reliability calculated with the field data provided by merak. this time, the difference between the field data and the model data is quite remarkable. this could be due to the harsh operating conditions considered by the failure rate model in [16].

figure 4. reliability curves of the blower assembly calculated through merak data and the blower model as in equation (14).
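equation (12) makes the fan the only time-dependent term in the model: the bearing contribution grows quadratically with the operating time, while the winding contribution is constant. a minimal sketch, with hypothetical weibull characteristic lives (the actual values from [37] are not reported in the text):

```python
# minimal sketch of eq. (12): mil-std-217f-style fan failure rate,
# with the bearing term t^2/alpha_b^3 plus the winding term 1/alpha_w.
def fan_failure_rate(t_hours, alpha_b, alpha_w):
    """fan failure rate in failure/h at operating time t (eq. 12)."""
    return t_hours**2 / alpha_b**3 + 1.0 / alpha_w

# hypothetical characteristic lives (hours), purely illustrative
for t in (1.0e3, 1.0e4, 5.0e4):
    print(t, fan_failure_rate(t, alpha_b=5.0e4, alpha_w=1.0e6))
```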
4. reliability analysis
when data (coming from tests or from the manufacturer) are available, techniques such as fta or rbd can be used to estimate the useful life of the system.

4.1. fault tree analysis
fault tree diagrams consist of gates and events connected with lines. the and and or gates are the two most commonly used gates in a fault tree. to illustrate the use of these gates, consider two events (called "input events") that can lead to another event (called the "output event") [14], [35]. if the occurrence of either input event causes the output event to occur, then these input events are connected using an or gate. alternatively, if both input events must occur in order for the output event to occur, then they are connected by an and gate. fault tree analysis gates can also be combined to create more complex representations. in the case of the hvac system under analysis, the top event is "hvac failure" and it is caused by four different events, as illustrated in figure 5:
• "possible fire", when some events could involve a risk of fire in the railway cabin;
• "loss of emergency ventilation", when the emergency ventilation does not work;
• "loss of functions caused by a single event", when a single event causes a direct loss of all the cooling, heating and ventilation functions;
• "indirect loss of cooling, heating and ventilation", when some events independently cause a loss of the cooling, heating and ventilation functions.
the top event "hvac failure" is linked to the above-described input events through an or gate, which means that if at least one of the four input events happens the whole system fails. each of the input events in figure 5 is in turn caused by extremely complex combinations of several events. the complete fta diagram is very large and structured, and it is not possible to show it entirely; so, for the sake of simplicity, figure 5 shows only an extract of this fta.

figure 5. extract of the fta diagram for the hvac system under analysis.

the reliability trend considering the fta configuration is shown in figure 6. the curve is a decreasing exponential: it starts from unitary reliability and tends to zero. the analysis is simulated starting from 0 km up to $6 \cdot 10^{6}$ km. according to an annual forecast distance traversed of about $487 \cdot 10^{3}$ km, the simulation in terms of time covers over 12 years. at a distance of $0.5 \cdot 10^{6}$ km (approximately 1 year) the reliability is around 80 %, while after $1 \cdot 10^{6}$ km (approximately 2 years) it decreases to approximately 60 %, and then it tends to zero at $5 \cdot 10^{6}$ km (approximately 10 years). these results are justified by two reasons:
• the mechanical nature of the whole system, which contributes to a fast decrease of the reliability;
• the or gate that leads to the top event, which represents the worst-case scenario among the several ones considered during the design.

figure 6. reliability trend of the fta.

4.2. reliability block diagram
an overall system reliability prediction can be made by looking at the reliabilities of the components that make up the whole system or product. in order to construct a reliability block diagram, the reliability-wise configuration of the components must be determined. consequently, the analysis method used for computing the reliability of a system will also depend on the reliability-wise configuration of the components/subsystems. that configuration can be as simple as units arranged in a pure series or parallel configuration. there can also be systems of combined series/parallel configurations, or complex systems that cannot be decomposed into groups of series and parallel configurations. the hvac system under analysis can be described using a series of three main blocks (see figure 7):
• cooling system;
• heating system;
• ventilation system.

figure 7. reliability block diagram of the hvac system.

therefore, supposing the exponential distribution for all the items [18], [38], [39], the reliability equation of the whole system is:

$R_{sys}(t) = \mathrm{e}^{-(\lambda_{cooling}+\lambda_{heating}+\lambda_{ventilation}) \cdot t}$ . (16)
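under the exponential assumption, equation (16) is a one-liner; the sketch below evaluates the series rbd over distance instead of time, with the three block rates expressed in fpmk. the rates used here are illustrative placeholders, not the merak data.

```python
# minimal sketch of eq. (16): series rbd of cooling, heating and
# ventilation blocks, evaluated over distance (rates in fpmk,
# distance in millions of km).
import numpy as np

def series_reliability(d_mkm, lambdas_fpmk):
    """r_sys = exp(-(sum of block rates) * distance)."""
    return np.exp(-sum(lambdas_fpmk) * d_mkm)

d = np.linspace(0.0, 6.0, 13)                  # 0 .. 6e6 km
r = series_reliability(d, [0.25, 0.10, 0.12])  # cooling, heating, ventilation
for di, ri in zip(d, r):
    print(f"{di:4.1f} mkm  r = {ri:.2f}")
```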
figure 8 shows a comparison between the cooling, heating and ventilation reliability curves. the figure also shows the whole system reliability trend calculated with equation (16), illustrated using a dashed black line. the red line represents the heating system, the blue line the ventilation system and the green line the cooling system. the worst system, in reliability terms, is the cooling system, because it contains many series elements and most of them are mechanical items. the three systems are connected in a series configuration, where the component with the least reliability has the biggest effect on the system reliability. as a result, the reliability of a series system is always less than the reliability of its least reliable component; that is why the black line, representing the whole system reliability, lies below the cooling curve (the least reliable one).

figure 8. reliability trends of the cooling, heating and ventilation systems (continuous lines) compared with the hvac system reliability (dashed line).

4.3. comparison between fta and rbd results
table 2 shows the comparison of the reliability trends between the two proposed methods: reliability block diagram and fault tree analysis. the two curves are very similar, but the rbd reliability is always higher than the fta result (at every distance). the differences could be caused by:
• the different algorithms used by the software for the calculation of the reliability;
• the fact that the fta results are more complete, since they consider all the possible paths that lead to the top event and therefore also the relationships between failures.

the first column of table 2 reports the distance travelled by the train, the second the corresponding time, and the third and fourth the reliability values of the fta and rbd, respectively. the last column reports the absolute percentage difference of the two previous columns. all the rbd values are higher than the fta ones, but their difference is lower than 6 %.

table 2. reliability data of the or-type fta and the rbd.

distance (km)   time       r_fta   r_rbd   difference
0.5·10⁶         1 year     0.78    0.80    3 %
1·10⁶           2 years    0.60    0.64    4 %
2·10⁶           4 years    0.34    0.40    6 %
3·10⁶           6 years    0.18    0.24    6 %
4·10⁶           8 years    0.10    0.14    4 %
5·10⁶           10 years   0.05    0.08    3 %
6·10⁶           12 years   0.03    0.05    2 %

figure 9 shows the trend of the difference between the two curves; it illustrates that the maximum value is 6.5 %, so the difference is very low. at the beginning the difference is not remarkable (before 1,000,000 km it is lower than 4 %), then it increases, with a peak between 1,500,000 km and 2,500,000 km where the difference is 6 %. after that, it decreases slowly and reaches 2 % at 6,000,000 km. therefore, the two methods provide comparable results, and both outcomes are valid.

figure 9. absolute percentage difference between the two reliability trends achieved using the rbd and fta methods.
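the last column of table 2 can be reproduced from the tabulated reliabilities; the short check below computes the absolute difference in percentage points:

```python
import numpy as np

# reliability values copied from table 2 (or-type fta vs. rbd)
r_fta = np.array([0.78, 0.60, 0.34, 0.18, 0.10, 0.05, 0.03])
r_rbd = np.array([0.80, 0.64, 0.40, 0.24, 0.14, 0.08, 0.05])

diff = np.abs(r_rbd - r_fta) * 100  # percentage points
print(diff)  # [2. 4. 6. 6. 4. 3. 2.]: the first entry differs from the
# published 3 %, presumably because of rounding of the tabulated values.
```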
5. comparison between field data and model data
a comparison between the failure estimation of the previous paragraphs and the failure data provided by the manufacturer of the hvac has been carried out in order to investigate how the model-based failure rates affect the reliability trend of the whole system. the model-based failure rates of the compressor, blower and heat exchanger are used to calculate the whole hvac reliability together with the failure rate estimates of the other components which make up the system. figure 10 shows three reliability curves: the blue trend relates to the fta reliability, the red one is calculated with the rbd analysis, and the green one relates to the failure rates of the components calculated using the failure models. it can be noted that the model-based failure rates contribute to reducing the reliability and have an important effect on the whole system reliability. the failure rate models provide pessimistic reliability results for the three components analyzed before; consequently, their reliability curves affect the whole system reliability, producing a trend lower than the ones calculated with the manufacturer data (for both the fta and rbd techniques).

figure 10. reliability curves of the hvac assembly calculated through merak data (both fta and rbd techniques) and model data.

6. conclusion
the paper deals with a heating, ventilation and air conditioning system mounted on a high-speed train. the first part of the paper illustrates the taxonomy of the system under study. the architecture of an hvac system includes several critical components, such as a fan (blower), a heat exchanger and a compressor. a detailed study on the failure rates of the most critical hvac components is presented in this paper. the compressor, heat exchanger and blower show a model-based reliability lower than the reliability achieved using the field data provided by the manufacturer of the hvac, merak. then, the reliability of the complete hvac system has been estimated using two well-known techniques: fault tree analysis and reliability block diagram. the fta and rbd methods take the field data provided by merak as input to evaluate the system reliability over the distance travelled by the train. the final analysis shows how the model failure rates affect the whole hvac reliability by comparing the results achieved using fta and rbd with the ones obtained using the failure rate models. the model-based failure rate provides a pessimistic result because it considers every possible failure mode and failure mechanism of each subitem that makes up the component. despite this, it may not be very realistic, since it does not properly consider the real operating conditions of the system under test. quite the opposite, the reliability evaluated using the field data takes into account the real context of the hvac, but some of the failure mechanisms might not occur during the observed time interval. for these reasons, it is fundamental to analyze the reliability of such a complex system by integrating both techniques.

references
[1] g. d’emilia, a. gaspari, e. hohwieler, a. laghmouchi, e. uhlmann, improvement of defect detectability in machine tools using sensor-based condition monitoring applications, procedia cirp, vol. 67, 2018, pp. 325-331. doi: 10.1016/j.procir.2017.12.221
[2] d. capriglione, m. carratu, a. pietrosanto, p. sommella, online fault detection of rear stroke suspension sensor in motorcycle, ieee trans. instrum. meas., vol. 68, no. 5, may 2019, pp. 1362-1372. doi: 10.1109/tim.2019.2905945
[3] a. paggi, g. l. mariotti, r. paggi, f. leccese, general reliability assessment via the physics-based, in 2020 ieee 7th international workshop on metrology for aerospace (metroaerospace), jun. 2020, pp. 510-515. doi: 10.1109/metroaerospace48742.2020.9160087
[4] g. d’emilia, a. gaspari, e. natale, measurements for smart manufacturing in an industry 4.0 scenario: a case-study on a mechatronic system, in 2018 workshop on metrology for industry 4.0 and iot, apr. 2018, pp. 1-5. doi: 10.1109/metroi4.2018.8428341
[5] d. capriglione, m. carratù, a. pietrosanto, p. sommella, narx ann-based instrument fault detection in motorcycle, measurement, vol. 117, mar. 2018, pp. 304-311. doi: 10.1016/j.measurement.2017.12.026
[6] u. leturiondo, o. salgado, d. galar, estimation of the reliability of rolling element bearings using a synthetic failure rate, 2016, pp. 99-112.
[7] a. reatti, f. corti, l. pugi, wireless power transfer for static railway applications, in 2018 ieee international conference on environment and electrical engineering and 2018 ieee industrial and commercial power systems europe (eeeic / i&cps europe), jun. 2018, pp. 1-6. doi: 10.1109/eeeic.2018.8493757
[8] s. giarnetti, e. de francesco, r. de francesco, f. nanni, m. cagnetti, f. leccese, e. petritoli, g. schirripa spagnolo, a new approach to define reproducibility of additive layers manufactured components, in 2020 ieee 7th international workshop on metrology for aerospace (metroaerospace), jun. 2020, pp. 529-533. doi: 10.1109/metroaerospace48742.2020.9160076
[9] e. petritoli, f. leccese, m. botticelli, s. pizzuti, f. pieroni, a rams analysis for a precision scale-up configuration of the ‘smart street’ pilot site: an industry 4.0 case study, acta imeko 8 (2019) 2, pp. 3-11. doi: 10.21014/acta_imeko.v8i2.614
[10] m. khalil, c. laurano, g. leone, m. zanoni, outage severity analysis and ram evaluation of italian overhead transmission lines from a regional perspective, acta imeko 5 (2016) 4, pp. 73-79. doi: 10.21014/acta_imeko.v5i4.424
[11] p. liu, x. cheng, y. qin, y. zhang, z. xing, reliability analysis of metro door system based on fuzzy reasoning petri net, in lecture notes in electrical engineering, vol. 288 lnee, vol. 2, 2014, pp. 283-291.
[12] l. cristaldi, m. khalil, m. faifer, markov process reliability model for photovoltaic module failures, acta imeko 6 (2017) 4, pp. 121-130. doi: 10.21014/acta_imeko.v6i4.428
[13] iec 61025, fault tree analysis (fta), international electrotechnical commission, 2007.
[14] l. ciani, g. guidi, d. galar, reliability evaluation of an hvac ventilation system with fta and rbd analysis, 2020.
[15] iec 61078, reliability block diagram, international electrotechnical commission, 2016.
[16] nswc, handbook of reliability prediction procedures for mechanical equipment, may 2011.
[17] m. catelani, l. ciani, a. bartolini, g. guidi, g. patrizi, standby redundancy for reliability improvement of wireless sensor network, in 2019 ieee 5th international forum on research and technology for society and industry (rtsi), sep. 2019, pp. 364-369. doi: 10.1109/rtsi.2019.8895533
[18] m. rausand, a. hoyland, system reliability theory, second edition, john wiley & sons, 2004.
[19] m. catelani, l. ciani, g. guidi, d. galar, a practical solution for hvac life estimation using failure models, 2020.
[20] a. paggi, g. l. mariotti, r. paggi, a. calogero, f. leccese, prediction by means hazard rate occurrence is a deeply wrong approach, in 2017 ieee international workshop on metrology for aerospace (metroaerospace), june 2017, pp. 276-281. doi: 10.1109/metroaerospace.2017.7999580
[21] m. catelani, l. ciani, m. venzi, component reliability importance assessment on complex systems using credible improvement potential, microelectron. reliab., vol. 64, sep. 2016, pp. 113-119. doi: 10.1016/j.microrel.2016.07.055
[22] b. s. dhillon, human reliability and error in transportation systems, springer-verlag, 2007.
[23] l. ciani, g. guidi, application and analysis of methods for the evaluation of failure rate distribution parameters for avionics components, measurement, vol. 139, june 2019, pp. 258-269. doi: 10.1016/j.measurement.2019.02.082
[24] j. g. mcleish, enhancing mil-hdbk-217 reliability predictions with physics of failure methods, in 2010 proceedings of the annual reliability and maintainability symposium (rams), jan. 2010, pp. 1-6. doi: 10.1109/rams.2010.5448044
[25] m. catelani, l. ciani, g. guidi, g. patrizi, maintainability improvement using allocation methods for railway systems, acta imeko 9 (2020) 1, pp. 10-17. doi: 10.21014/acta_imeko.v9i1.733
[26] a. massaro, e. cannella, g. dipierro, a. galiano, g. d’andrea, g. malito, maintenance and testing protocols in the railway industry, acta imeko 9 (2020) 4, pp. 4-12. doi: 10.21014/acta_imeko.v9i4.718
[27] t. addabbo, a. fort, c. della giovampaola, m. mugnaini, a. toccafondi, v. vignoli, on the safety design of radar based railway level crossing surveillance systems, acta imeko 5 (2016) 4, pp. 64-72. doi: 10.21014/acta_imeko.v5i4.419
[28] a. vedavarz, s. kumar, m. i. hussain, hvac handbook of heating, ventilation, and air conditioning for design & implementation, fourth edition, industrial press inc., 2013.
[29] s. c. sugarman, hvac fundamentals, second edition, crc press: taylor & francis group, 2007.
[30] l. tanghong, x. gang, test and improvement of ventilation cooling system for high-speed train, in 2010 international conference on optoelectronics and image processing, vol. 2, nov. 2010, pp. 493-497. doi: 10.1109/icoip.2010.55
[31] c. luger, r. rieberer, multi-objective design optimization of a rail hvac co2 cycle, int. j. refrig., vol. 92, 2018, pp. 133-142. doi: 10.1016/j.ijrefrig.2018.05.033
[32] f. porges, hvac engineer handbook, eleventh edition, elsevier science & technology books, 2001.
[33] l. marjanovic-halburd, i. korolija, v. i. hanby, heating ventilating and air-conditioning (hvac) equipment taxonomy, in iir 2008 hvac energy efficiency best practice conference, 2008.
[34] international organization for standardization, iso 14224, petroleum, petrochemical and natural gas industries — collection and exchange of reliability and maintenance data for equipment, 2016.
[35] l. ciani, g. guidi, g. patrizi, a critical comparison of alternative risk priority numbers in failure modes, effects, and criticality analysis, ieee access, vol. 7, 2019, pp. 92398-92409. doi: 10.1109/access.2019.2928120
[36] i. values, compressor selection: semi-hermetic reciprocating compressors technical data (4g-30.2y): dimensions and connections, pp. 4-7, 2015.
[37] mil-hdbk-217f, military handbook: reliability prediction of electronic equipment, us department of defense, washington dc, 1991.
[38] a. birolini, reliability engineering, berlin, heidelberg: springer berlin heidelberg, 2017. doi: 10.1007/978-3-662-54209-5
[39] m. lazzaroni, l. cristaldi, l. peretto, p. rinaldi, m. catelani, reliability analysis in the design phase, in: reliability engineering, berlin, heidelberg: springer berlin heidelberg, 2011. doi: 10.1007/978-3-642-20983-3_3

case studies for the mathmet quality management system at vsl, the dutch national metrology institute
acta imeko, issn: 2221-870x, june 2023, volume 12, number 2, 1-5

gertjan kok1
1 vsl, thijsseweg 11, 2629 ja, delft, the netherlands

section: research paper
keywords: european metrology network; mathmet; quality management system; validation
citation: gertjan kok, case studies for the mathmet quality management system at vsl, the dutch national metrology institute, acta imeko, vol. 12, no. 2, article 21, june 2023, identifier: imeko-acta-12 (2023)-02-21
section editor: eric benoit, université savoie mont blanc, france
received july 11, 2022; in final form april 21, 2023; published june 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: the work presented in this article has been performed in the empir project “support for a european metrology network for mathematics and statistics” (15net05 mathmet). this project has received funding from the empir programme co-financed by the participating states and from the european union’s horizon 2020 research and innovation programme.
corresponding author: gertjan kok, e-mail: gkok@vsl.nl

abstract: the european metrology network (emn) mathmet is a network in which a large number of european national metrology institutes combine their forces in the area of mathematics and statistics applied to metrological problems. one underlying principle of such a cooperation is to have a common understanding of the ‘quality’ of software, data and guidelines. to this purpose a flexible, lightweight quality management system (qms), also referred to as quality assessment tools (qat), is under development by the emn. in this contribution the application of the qms to several use cases of different nature by vsl is presented. the benefits and usefulness of the current version of the qms are discussed from the viewpoint of a particular employee of vsl, and an outlook for possible future extensions and usage of the qms is given.

1. introduction
metrology is the science of measurement founded on the si system of units [1]. the metrological traceability of measurement results is an essential part of metrology. it is defined [2] as the property of a measurement result whereby the result can be related to a reference through a documented unbroken chain of calibrations, each contributing to the measurement uncertainty. for this chain to work properly, all constituent parts should be carefully assessed and validated, and the results of the validation should be properly documented. measurement results play a dominant role in this chain, but that does not mean that hardware and instrumentation are the only things that matter: mathematical calculations implemented in software can form an essential part of the measurement.
these mathematical calculations will almost certainly be implemented in software, which may have been validated using some reference datasets, and the whole data analysis procedure may be based on a written guideline. for complying with metrological traceability, it is therefore essential that the software, data and guidelines used are also under quality control. this requirement means that their working and content are checked for correctness, and that meta-information like version control data is properly managed. as there is worldwide cooperation within metrological applications, it is logical to organize this type of quality control at an international level. the emn mathmet [3] is therefore developing a lightweight quality management system (qms) against which the existing procedures at national metrology institutes (nmis) can be benchmarked and which can help to complement them, so that the quality of software, data and guidelines is assessed more uniformly by different nmis. a full description of this qms is presented in [4], which has recently been published in acta imeko; this article is an accompaniment to it and supports [4]. in this contribution we report on the application of the qms by vsl to several use cases concerning software, reference data and guidelines. we discuss the usefulness, advantages and disadvantages of the qms and possible pitfalls in sections 3 to 5, after shortly introducing the qms in section 2. finally, in section 6 some overall conclusions are formulated. note that all these viewpoints and conclusions are from the perspective of one employee of vsl only, relate to the particular version of the qms of march 2022, and are not necessarily shared by other nmis or by emn mathmet itself.

2. short overview of the qms for software, data and guidelines
a thorough overview of the qms for software, data and guidelines is given in [4]. in this section a summary is given, starting with some remarks regarding its scope.

2.1. goal of the qms
originally there was the idea that the emn itself would ‘recommend’ software, reference data and guidelines. assessment of these items by means of the qms would ensure that the emn mathmet recommendations meet the highest quality levels and achieve wide use and substantial impact. for various reasons this is currently not seen as realistic. one important reason is the fact that an emn is part of the larger entity euramet [5] and that the decision-making authority, responsibility and liability for such recommendations are not entirely clear.
the second reason is the scope of the emn, which is now seen as a platform to interact with stakeholders and to define future research directions, fostering collaboration and preventing duplication of work; the actual technical work should be done inside other forms of cooperation. linked to this reason is that the budget required for performing an assessment is not available within the emn itself. the qms might therefore be seen as a tool to help individual nmis assess software, data and guidelines, rather than a tool for the emn itself. performed reviews will not be published on the mathmet website but could be put on the website of an individual nmi if that nmi wished to do so. it therefore seems reasonable to assess the mathmet qms from the perspective of a single nmi. various nmis in mathmet have announced that they will indeed discuss the benefits of the mathmet qms with the people directly responsible for quality control at their respective nmis; this will be done in the near future at vsl as well. this article presents an initial assessment of the qms by the author.

2.2. qms for software
the qms for software consists of an interactive pdf file of 5 pages. based on the calculated risk level, specific fields are visible and need to be filled out. the qms for software requires that the project team provides information and evidence of documents covering the following aspects and activities:
• some meta-data;
• a risk level analysis resulting in a “software integrity level” that determines the quality interventions needed;
• user requirements;
• functional requirements;
• design;
• coding;
• verification;
• validation;
• delivery, use and maintenance.

2.3. qms for data
the qms for data consists of an interactive pdf file with 41 questions, which are again only visible if they are deemed relevant for the selected risk level. for data, the team should provide information regarding:
• general details and responsibilities;
• a risk level analysis, resulting in a “data integrity level” that determines the quality interventions needed;
• user requirements documentation and approval;
• data life cycle documentation;
• quality planning;
• quality monitoring, control and improvement;
• quality assurance;
• understandability;
• metrological soundness.

most questions to be answered are of a general nature. only the last set of questions explicitly involves some metrological aspects, and there are no questions explicitly addressing the mathematical aspects the data may have.

2.4. qms for guidelines
the qms for guidelines consists of two different checklists: one for existing guidelines and one for future guidelines. at the moment of writing of this manuscript these checklists are still out for review by the project partners, but preliminary versions have been assessed by vsl. the checklists are quite similar. they ask for information regarding:
• the organization generating the document;
• whether an independent review and approval are available;
• whether appropriate metadata are available;
• copyright and ip protection;
• language;
• mentioning of the target audience;
• relevance for mathematics and statistics in metrology and for the target audience;
• clearly stated conclusions;
• appropriate references;
• a presentation that is easy to understand.

what is noteworthy is that the qms checklist does not ask the user of the checklist to perform a thorough review of all mathematics, but rather to assess whether this has already been done (and documented!) and by whom. for some questions, e.g., regarding the presentation and conclusions, it would of course be beneficial to read through the complete document; however, these questions can also be answered with a reasonable level of confidence by reading only small parts of the document.
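to give an impression of the kind of information the software checklist collects, the sketch below renders the aspects of section 2.2 as a machine-readable record. this is purely our illustration: the actual qms is an interactive pdf form, and all field names below are hypothetical.

```python
from dataclasses import dataclass

# hypothetical machine-readable rendering of the software checklist
# aspects listed in section 2.2; not the official qms form.
@dataclass
class SoftwareQmsRecord:
    title: str
    version: str
    integrity_level: int              # from the risk level analysis
    user_requirements: str = ""       # evidence, e.g. a document reference
    functional_requirements: str = ""
    design: str = ""
    coding: str = ""
    verification: str = ""
    validation: str = ""
    delivery_use_maintenance: str = ""

    def open_items(self):
        """aspects for which no evidence has been recorded yet."""
        fields = ("user_requirements", "functional_requirements", "design",
                  "coding", "verification", "validation",
                  "delivery_use_maintenance")
        return [f for f in fields if not getattr(self, f)]

record = SoftwareQmsRecord("redundancy library", "0.1", integrity_level=2,
                           coding="git repository + review notes")
print(record.open_items())  # the quality interventions still to be done
```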
3. use case 1: qms applied to software
the usefulness of the qms has been assessed by applying it to two pieces of software.

3.1. context
the first piece of software was a library of mathematical routines written in python which can be used to take advantage of redundancy in sensor network data [6]. it was developed in the research project met4fof [7]. at a first stage a chi-squared based consistency check is performed to assess the statistical consistency of the sensor data. in the case of consistency, the measurement data are combined into a best estimate of the measurand respecting sensor uncertainties and covariances, whereas in the case of inconsistency the largest consistent subset of sensors is constructed. this is done not only for the case that the sensors directly measure values of the measurand, but also if there is a linear relationship between the vector of sensor values 𝒙 and a vector of values 𝒚 for the measurand. this vector 𝒚 reflects the availability of multiple, redundant estimates of the measurand. the relationship takes the form

$$ \boldsymbol{y} = \boldsymbol{a} + B\, \boldsymbol{x} \,, \quad (1) $$

in which 𝒂 is a vector and 𝐵 a matrix.
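the following sketch illustrates the principle of the consistency analysis described above for directly measured values; it is not the api of the met4fof-redundancy package [6], and the numbers are invented:

```python
import numpy as np
from scipy.stats import chi2

def best_estimate(x, cov, alpha=0.05):
    """weighted best estimate of a common measurand from sensor values x
    with covariance matrix cov, plus a chi-squared consistency verdict."""
    n = len(x)
    w = np.linalg.solve(cov, np.ones(n))       # cov^(-1) * 1
    u2 = 1.0 / np.ones(n).dot(w)               # variance of the estimate
    xhat = u2 * x.dot(w)                       # weighted mean
    r = x - xhat
    chi2_obs = r.dot(np.linalg.solve(cov, r))  # consistency statistic
    consistent = chi2_obs <= chi2.ppf(1.0 - alpha, df=n - 1)
    return xhat, np.sqrt(u2), consistent

x = np.array([10.10, 9.95, 10.05])             # invented sensor values
cov = np.diag([0.1, 0.1, 0.1]) ** 2            # invented uncertainties
print(best_estimate(x, cov))
# for indirect measurements the sensor values are first mapped through
# equation (1), y = a + b x, with the covariance propagated as b cov b^t.
```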
the second piece of software used to evaluate the qms was a recently developed calculation module based on the written standard iso 6142-1 [8]. this software is used at vsl in the production of certified reference materials (i.e., gas mixtures) for customers. the input to the software consists of atomic weights, chemical formulas of the mixture components, amount-of-substance fractions of the components in the parent gases and the gas mass added from each parent gas mixture to the target gas mixture based on weighing of the cylinder. the outputs of the calculation module are the amount-of-substance fractions of the components in the target gas mixture, including their uncertainties and covariances. the quality system at vsl requires that software be version controlled, documented and validated. however, there are no uniform, detailed procedures or templates for this purpose; in practice different groups assure the quality in different ways.

3.2. benefits and usefulness of the qms
the qms for software was applied to these pieces of software with the aim of assessing the qms, rather than constructing all required information that might not be readily available. the following parts of the qms for software were especially appreciated:
• the templates help to give a uniform description of the software.
• the templates help to avoid overlooking important aspects of software quality.
• at vsl there is a focus on version control, documentation and validation of software and on storing these properly. user and functional requirements as well as the software design may be lost after the software has been released. it would be good to properly control these documents as well; especially the documentation of the software design could be useful for future improvements of the software, possibly by new personnel.

the following parts of the qms for software seemed to be less appropriate for the vsl context:
• the ‘review by customer or proxy’ may need some flexible interpretation, as vsl does not sell any software to external customers. there could be a ‘vsl internal customer’ for the software, and/or the envisaged outputs of the software could be assessed against known requirements of customers. in the case of research projects that are, e.g., funded by the eu, the ‘review by customer’ is usually difficult to achieve.
• the number of up to three required reviews for some aspects is quite large and can be burdensome, especially for a small nmi like vsl.
• the document asks for requirement and design documents at the moment of filling out the form. at vsl, a gradual, agile-based approach is often used for software development. it is not so clear to the author of this paper how the qms should be used in that context: should the qms forms and all implied documentation and reviews be repeated at each ‘sprint’ (development cycle) or at each new release of the software? some more guidance and clarity would be beneficial.

as a general observation, it would be helpful if the qms indicated some examples of ict tools (preferably open source) that could be used in combination with an agile development process while assuring the traceability (in an administrative sense) of all choices made. in this way, possible requirements of the mathmet qms that may not be directly accommodated by the quality system and available software systems at an nmi could more readily be implemented at an nmi and used in the context of work related to emn mathmet.

4. use case 2: qms applied to data
in this section we assess the qms for data by applying it to some mathematical reference datasets that were generated in an earlier project.

4.1. context
in the tracim project [9] reference datasets were produced for various mathematical problems, e.g., for non-linear least squares fitting problems. the precise definitions of the computational aims, together with the datasets, were stored in a database [10] accessible from the internet for registered users. an example of such a fit problem is the determination of the best fit parameters 𝑎′, 𝑏′ and 𝑐′ of a function 𝑦 = 𝑓(𝑥, 𝑎, 𝑏, 𝑐) modelling exponential decay to 𝑛 datapoints (𝑥𝑖 , 𝑦𝑖 ), 1 ≤ 𝑖 ≤ 𝑛, such that

$$ D(a, b, c) = \sum_{i=1}^{n} \left( y_i - f(x_i, a, b, c) \right)^2 \rightarrow \min \,. \quad (2) $$

in the database [10] reference datasets are available for this and other fit problems. at vsl there is no specific quality guidance for the generation and documentation of such datasets, other than the requirements mentioned in section 3 for software.
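for illustration, the computational aim of equation (2) can be realized with a standard non-linear least-squares routine. the concrete model form a·exp(−b·x) + c below is our assumption for “exponential decay” and not necessarily the tracim definition [9], [10]:

```python
import numpy as np
from scipy.optimize import curve_fit

def f(x, a, b, c):
    """assumed exponential decay model; the exact tracim model may differ."""
    return a * np.exp(-b * x) + c

rng = np.random.default_rng(1)
x = np.linspace(0.0, 5.0, 50)
y = f(x, 2.0, 1.3, 0.5) + rng.normal(0.0, 0.01, x.size)  # synthetic data

popt, pcov = curve_fit(f, x, y, p0=(1.0, 1.0, 0.0))
print(popt)  # best-fit a', b', c' minimising the sum of squares d(a, b, c)
# a reference dataset pairs the points (x_i, y_i) with reference values
# of a', b', c' computed to high numerical accuracy.
```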
• similar to what was mentioned in the qms for software, it would be nice if more guidance could be given regarding how to implement all mentioned quality aspects by means of some ict tools, preferably open source. 5. use case 3: qms applied to guidelines in this last use case, we will discuss the application of the qms to a set of mathematical guidelines which was produced in the emrp project new04 [11], and to which vsl contributed. 5.1. context in the new04 project three best practice guides (bpgs) were produced [12]: 1. a guide to bayesian inference for regression problems (bpg1) 2. best practice guide to uncertainty evaluation for computationally expensive models (bpg2) 3. a guide to decision-making and conformity assessment (bpg3) except for the formatting of the title page, bpg1 and bpg2 are very similar in document structure. bpg3 consists of a set of four different loosely connected documents. we applied the qms checklist for existing guidelines to bpg2. this document of 84 pages provides a summary of current best practice in uncertainty evaluation for computationally expensive models. in the first part of the document the methods are explained. in the second part three case studies are presented. 5.2. benefits and usefulness of the qms the qms checklist is mainly asking some questions about the existence of specific information like ‘version number’, ‘independently reviewed’, ‘target audience’ and ‘appropriate references’. the benefit of this approach is that the assessment can be done fairly quickly without having to read, study and check the document itself. the qms checklist verifies that some formal quality criteria are fulfilled, and it is not requiring a tedious scientific review of the content. these simple checks can give a good indication of the overall care with which the document has been prepared in a very quick way, which is the main benefit of this qms for guidelines in our opinion. if the conclusion is that the document hasn’t been independently reviewed, then this job is still out to be done, but this is not directly in the scope of the qms. the application of the qms checklist to bpg2 yielded some interesting deficits. bpg2 has no version number, it doesn’t say anything about ‘copyright’, or ‘independent review’ and there are no ‘clearly stated conclusions’. the document simply ends with the last use case. this is particularly interesting, because several of the authors of bpg2 are mathmet members and even involved in the creation of the qms. the mathematical content of the bpg may be impeccable, but it doesn’t fulfil all quality metrics of the mathmet qms. 6. conclusions and outlook as over time the scope of the emn has become clearer, also the place of the qms in it has been reassessed. initial ideas about a mathmet qms that ensure that ‘emn mathmet recommendations meet the highest quality levels and achieve wide use and substantial impact’ [13] seem to have been replaced in practice by a qms that can help individual nmis with their quality assessment, at least from the perception from vsl. in this paper it has been assessed how this worked out for vsl, and which aspects of the qms proved useful and which less appropriate for the vsl context. the overall conclusion is that the collaboration within the emn on the qms gave useful insights with respect to assuring the quality of software, data and guidelines, and which aspects could matter. 
at the same time a proper assessment of the different parts and questions of the qms is needed in order to best align it with vsl’s requirements and working field; a discussion with the quality coordinators at vsl is still outstanding. when more nmis assess the mathmet qms and reflect on its implementation in nmi-specific quality procedures, the common ground of the most useful aspects of the qms will become clearer. this may lead to a next step in the development of the qms, which should lead to greater uniformity in the quality assessment of software, data and guidelines by nmis, and to a reduction of the costs to set up the system. also, there might be additional guidance on the usage of modern ict tools to assure the quality of software, data and guidelines in a more efficient way. as guaranteeing the quality of software, data and guidelines is nowadays getting more and more attention (cf. the attention paid to research papers with open software and data), the creation of a common qms framework by emn mathmet for nmis seems to come at the right moment. this and similar initiatives will help to maintain and increase the trustworthiness of services provided by nmis.

acknowledgement
we thank the npl team for leading the development of the mathmet qms and in particular peter harris for fruitful discussions and feedback on an earlier draft of this paper. we also thank the referees for their comments, which helped to improve this paper. the work presented in this article has been performed in the empir project “support for a european metrology network for mathematics and statistics” (15net05 mathmet). this project has received funding from the empir programme co-financed by the participating states and from the european union’s horizon 2020 research and innovation programme.

references
[1] bipm, si system of units. online [accessed 6 june 2023] https://www.bipm.org/en/measurement-units
[2] bipm, vim3, definition of traceability. online [accessed 6 june 2023] https://jcgm.bipm.org/vim/en/2.41.html
[3] euramet european association of national metrology institutes, european metrology network (emn) mathmet. online [accessed 6 june 2023] www.euramet.org/mathmet
[4] keith lines, jean-laurent hippolyte, indhu george, peter harris, a mathmet quality management system for data, software, and guidelines, acta imeko, vol. 11, no. 4, article 8, december 2022, pp. 1-6. doi: 10.21014/actaimeko.v11i4.1348
[5] euramet european association of metrology institutes. online [accessed 6 june 2023] www.euramet.org
[6] metrology for the factory of the future, github software repository. online [accessed 6 june 2023] https://github.com/met4fof/met4fof-redundancy
[7] empir project 17ind12 met4fof metrology for the factory of the future, 2018 – 2021. online [accessed 6 june 2023] https://www.ptb.de/empir2018/met4fof/home/
[8] iso 6142-1:2015, gas analysis — preparation of calibration gas mixtures — part 1: gravimetric method for class i mixtures, iso, geneva, 2015.
[9] emrp project new06 tracim traceability for computationally-intensive metrology, 06/2012 – 05/2015; web site. online [accessed 6 june 2023] https://www.tracim.eu
[10] tracim database with reference datasets. online [accessed 6 june 2023] http://www.tracim-cadb.npl.co.uk/
[11] emrp project new04 novel mathematical and statistical approaches to uncertainty evaluation, 08/2012 – 07/2015; web site. online [accessed 6 june 2023] https://www.ptb.de/emrp/new04-home.html
[12] emrp project new04 novel mathematical and statistical approaches to uncertainty evaluation: best practice guides. online [accessed 6 june 2023] https://www.ptb.de/emrp/2976.html
[13] project protocol of empir project 15net05 mathmet support for a european metrology network for mathematics and statistics, 2019 – 2023.

magnetic circuit optimization of linear dynamic actuators
acta imeko, issn: 2221-870x, september 2021, volume 10, number 3, 134-141

laszlo kazup1, angela varadine szarka1
1 research institute of electronics and information technology, university of miskolc, miskolc, hungary

section: research paper
keywords: magnetic brake; linear brake; magnetic circuit calculation; dynamic braking
citation: laszlo kazup, angela varadine szarka, magnetic circuit optimization of linear dynamic actuators, acta imeko, vol. 10, no. 3, article 19, september 2021, identifier: imeko-acta-10 (2021)-03-19
section editor: lorenzo ciani, university of florence, italy
received february 5, 2021; in final form april 27, 2021; published september 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: the described study was carried out as part of the efop-3.6.1-16-2016-00011 “younger and renewing university – innovative knowledge city – institutional development of the university of miskolc aiming at intelligent specialisation” project implemented in the framework of the szechenyi 2020 program. the realization of this project is supported by the european union, co-financed by the european social fund.
corresponding author: laszlo kazup, e-mail: laszlo.kazup@eiki.hu

abstract: contactless braking methods (with the capability of energy recuperation) are more and more widely used, and they replace the traditional abrasive and dissipative braking techniques. in the case of rotating motion, the method is trivial and often used nowadays, but when the movement is linear and fast alternating, there are only a few possibilities to brake the movement. the basic goal of the research project is to develop a linear braking method based on the magnetic principle, which enables the efficient and highly controllable braking of alternating movements. the frequency of the alternating movement can be in a wide range, and an aim of the research is to develop a contactless braking method for vibrating movement at as high a frequency as possible. the research includes examination and further development of possible magnetic implementations and existing methods, so that an efficient construction suitable for effective linear movement control can be created. the first problem to be solved is to design a well-constructed magnetic circuit with high air gap induction, which provides effective and good dynamic parameters for the braking devices. the present paper summarizes the magnetostatic design of “voice-coil linear actuator” type actuators and the effects of structure-related flux leakage and its compensation.

1. introduction
in the field of manufacturing, many methods of lifetime testing are used. in the case of electronic devices, stress and lifetime testing is quite simple in most cases compared to mechanical tests, in which mechanical load emulation is very difficult and expensive. for example, power tools are tested with different loads, and some of them have alternating linear movements which should be loaded. there are no fully developed contactless load emulating methods for this application; sometimes a hydraulic system is used, with low efficiency. in the case of traditional practice test methods, an operator works with the device under test (dut): he/she cuts, sands or planes different materials. this type of testing is expensive, not reliable, and less repeatable than automated test solutions. in some tests the operators were replaced by industrial robots, but the test is still expensive and dirty due to the use of real materials. the best result would be a system which can emulate the loading force without any physical contact, abrasion or dirt.
research of a braking method for fast alternating linear movements by using contactless magnetic methods is the focus of our project. this work is the second phase of the research and development project, which was started in 2008. the original goal of the project was to develop a special magnetic brake which can simulate the real operation of an electric jig saw, to replace the traditional practice test method in which operators perform the full test process by cutting different types of material such as wood, steel, aluminium, etc. the repeatability of this test method is very low, as is the reliability of the documented test circumstances, and a lot of waste and dirt was produced in the test centre. in those days a special hydraulic brake was developed in switzerland to solve this problem, but its performance, controllability and reliability were poor. the analysis of different dynamic brake constructions and methods has proved that the most reliable and efficient solution can be achieved by using a magnetic construction. the test equipment should perform a special braking characteristic which can be controlled even within a single moving period, and in the case of an electromagnetic solution, these properties can be realized. the development of this test equipment needed both practical and theoretical improvement, and the prototype testing and evaluation originated the further hypotheses. our research group aims to create a general theoretical and modelling methodology supporting the reliable practical realization of dynamically controlled magnetic fields, which can be used in many different industries to optimize the performance of voice-coil type linear actuators and brakes.
the first part of this paper includes introduction of a method for transformation of 2-dimension magnetic calculations to the cylindrical coordinate system and presents the analysis of the flux leakage and its effects to the results of calculations. the calculations are confirmed by finite element simulations, and the results are also used to correct the differences caused by the flux leakage. these calculations, simulations and corrections were solved for different shapes to realize a shapeand size independent model to calculate the correct average flux density in the air gap. the second part of paper presents the results of dynamic simulations, by which dynamic behaviour (relationship between the current and the force in dynamic cases, eddy currentand solid losses, etc.) of the voice coil-type actuator is analysed. this research is the extension of the air-gap induction distortion analysis, which work is described in paper “a. diagnostics of air-gap induction’s distortion in linear magnetic brake for dynamic applications”, [1] [3]. 2. design of cylindrical magnetic circuit with twodimensional, plane cross-section model calculations the aim of the first part of the work was to theoretically establish, develop, and validate a method that transforms the dimensions of a cylindrical symmetrical magnetic circuit of a given size into an equivalent two-dimensional cross-sectional and constant depth model. in this way, cylindrical magnetic circuits can also be calculated. in this method the cylindrical magnetic circuit is “spread out” so that the vertical (z) dimensions are leaved unchanged, and the r values are transformed into x values to create a flat-section, fixed-depth model (“cubic model”) in which the volume of each part is the same as the volume of the corresponding parts in the cylindrical model, so the two models is connected to each other by the unchanged value of the flux. however, inductions calculated in the planar model is also valid for the transformation, the calculated values correspond to the average values in the cylindrical model, since values of the magnetic induction in the cylindrical model are changing in radial direction. as a result, higher induction values are observed on the inner half of the cylindrical parts and lower on the outer half. figure 1. shows the dimensions of a typical cylindrical model, and figure 2 illustrates the corresponding x and z dimensions in a planar cross-sectional model for transformation. [6], [7] the depth d of the plane cross-section fixed-depth model should be taken so that the x dimensions of the plane model are close to the radius differences of the cylindrical model in the area most affected by the test (this is practically the air gap). thus, for practical reasons, depth dimension d is selected equal to the length of line at the air gap center circle which is as follows: 𝑑 = (𝑟2 + 𝑟3) π (1) to determine the relationship between the radius and the x values, the volume equality was used as already mentioned above, which is the following: 𝑉𝑛𝑟ℎ = 𝑉𝑛𝑥𝑦 (𝑟𝑛 2 − 𝑟𝑛−1 2) π ℎ𝑚 = 𝑥𝑛 𝑦𝑚 𝑑 . (2) since the given h and y values in the two models are the same based on a previous condition, the final correlation for the x values after transformations is as follows: 𝑥𝑛 = (𝑟𝑛 2 − 𝑟𝑛−1 2) π 𝑑 . (3) in the above relation if n = 1, then r0 is 0, assuming that inner bar (iron cores and also magnets) is cylindrical. if the inner bar has ring section, value of r0 is equal to radius of inner ring. 3. 
3. verification of the relationships by finite element simulation
to verify the transformation relationships introduced above, the transformation of a cylindrical magnetic circuit of a given size was performed. in determining the depth dimension d, the length of the central circle of the air gap was considered, which results in nearly equal air gap induction in the plane model and in the cylindrical model. the dimensions of the initial cylindrical model and of the plane model resulting from the transformation are summarized in table 1.

table 1. basic dimensions of the cylindrical model and calculated dimensions of the transformed model.

original values of the cylindrical model (mm):
r0   internal radius of the ring magnet                               40
r1   external radius of the ring magnet                               70
r2   internal radius of the air gap                                   70
r3   external radius of the air gap                                   72.5
r4   internal radius of the outer ring                                130
r5   external radius of the outer ring                                150
z1   height of the air gap                                            20
z2   height of the magnet/outer ring                                  60
z3   height of the bottom cylinder                                    20

transformed values (mm):
x1   width of the transformed magnet                                  11.23
x2   distance between the air gap and the magnet                      23.16
x3   air gap length                                                   2.49998 (~2.5)
x4   distance between the air gap and the outer mild iron part        81.71
x5   width of the outer mild iron part (transform of the outer ring)  39.3
z1   height of the air gap                                            20
z2   height of the magnet/outer mild iron part                        60
z3   height of the bottom mild iron part                              20
d    depth of the magnetic circuit                                    447.68

the next step was to determine the air gap induction based on static magnetic calculations using the dimensions of the plane model. the initial data to be determined were the main magnetic parameters of the applied soft ferromagnetic and permanent magnetic materials. these data were taken from the finite element simulation software database for better comparison. the material properties are as follows (the operating point values of the magnet were determined graphically):

permanent magnet:
• material: y25 ferrite magnet;
• residual induction: 0.378 t;
• coercive force: 153035 a/m;
• relative permeability in the operating-point section of the demagnetization curve: 1.9.

soft steel parts:
• material: 1010 low-carbon soft steel;
• relative permeability: ~10³;
• maximum induction: 1 t (in the linear part of the b-h curve).

the combined relative permeability of the permanent magnet and the air gap is three orders of magnitude smaller than the relative permeability of the iron body; therefore the reluctance of the soft iron was neglected in the calculations. in this phase of the research the magnetic calculations focused on the permanent magnet and the air gap. based on the above-described initial data, the main steps of the calculations were as follows.

1.) determining the operating point of the magnet by the equation of the air gap line (based on the data of the transformed geometry):

$$ H_m = -\frac{1}{\mu_0} \cdot \frac{x_1}{y_1} \cdot \frac{x_3}{y_2} \cdot B_m \,, \qquad H_m = -7000\ \mathrm{A/m} \,, \quad B_m = 0.3726\ \mathrm{T} \,. \quad (4) $$

2.) determination of the flux:

$$ \Phi = B \cdot A = B_m \cdot A_m = B_m \cdot x_1 \cdot d = 1.873 \cdot 10^{-3}\ \mathrm{Wb} \,. \quad (5) $$

3.) determination of the air gap induction:

$$ B = \frac{\Phi}{A} \ \rightarrow \ B_\delta = \frac{\Phi}{A_\delta} = \frac{\Phi}{y_1\, d} = 0.2092\ \mathrm{T} \,. \quad (6) $$

after performing the calculations, both the original cylindrical and the transformed magnetic circuits were simulated with the femm 4.0 finite element simulation software [7]. the simulation results and the calculated values are summarized in table 2.

table 2. comparison of the calculated and the simulated results.

parameter   calculated value   simulated (cylindrical model)   simulated (plane model)
Bm          0.3726 t           0.3739 t                        0.3744 t
Φm          1.873·10⁻³ wb      1.879·10⁻³ wb                   1.886·10⁻³ wb
Bδ          0.2092 t           0.1509 t                        0.138 t
Φδ          1.873·10⁻³ wb      1.351·10⁻³ wb                   1.24·10⁻³ wb

in the simulation, the air gap induction value was approx. 75 % of the calculated one (the reason is the leakage flux around the air gap). in spite of this, the average value of the induction in the magnet and the flux of the magnetic circuit showed only a very small difference from the calculated data. according to these results we can state that the model and the transformation provided good results and can be used in the next stage of the research. most of the differences from the simulated values (the finite element simulation serves as the validation of the calculations in this stage) are due to the leakage flux, the correction of which needs further studies. although the flux in the air gap is likewise only about 72 % of the calculated value, the simulation gives back the originally calculated flux value when the leakage induction lines are included.
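the calculation steps above can be reproduced numerically. the sketch below solves the intersection of the air gap (load) line of equation (4) with the linear demagnetization curve b = b_r + μ_m·μ_0·h, which is our reconstruction of the graphical operating point determination, and then evaluates equations (5) and (6):

```python
import numpy as np

mu0 = 4e-7 * np.pi
x1, x3 = 11.23e-3, 2.5e-3   # magnet width and air gap length (m), table 1
y1, y2 = 20e-3, 60e-3       # air gap height and magnet height (m)
d = 447.68e-3               # depth of the plane model (m)
b_r, mu_m = 0.378, 1.9      # y25 ferrite data used in the paper

# load line of equation (4): b_m = -mu0 * (y1 * y2) / (x1 * x3) * h_m,
# intersected with the linear demagnetization curve b = b_r + mu_m*mu0*h
k = mu0 * y1 * y2 / (x1 * x3)
h_m = -b_r / (k + mu_m * mu0)
b_m = -k * h_m
phi = b_m * x1 * d          # equation (5)
b_gap = phi / (y1 * d)      # equation (6)
print(h_m, b_m, b_gap)
# ~ -6.7e3 a/m, ~0.36 t and ~0.20 t: close to the graphically determined
# -7000 a/m, 0.3726 t and 0.2092 t reported above.
```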
the next paragraphs show that, with flux leakage compensation, the calculations approximate the simulated results very closely.

4. calculation of the magnetic circuit for required air gap induction and air gap depth
while the previous calculations illustrated the transformation of a magnetic circuit with given dimensions, in practice, when developing a so-called “voice-coil actuator”, a much more common problem is adjusting the dimensions of the structure, especially the dimensions of the magnet to be used, to a given air gap height and air gap induction value. in such a case, the minimum air gap diameter at which the given induction can be realized at the specified air gap height has to be defined. also, if the diameter of the air gap is fixed, the feasibility of the desired induction with the given permanent magnet type at the specified sizes should be checked. in addition, the effect of the leakage flux must be considered when calculating these data (the study and correction of the leakage is discussed in section 6).

4.1. calculation steps
1.) definition of the operating point values of the magnets. the real demagnetization curve of permanent magnets is linear in a relatively wide range and shows nonlinearity only near the coercive force. therefore, the operating point of the magnet should be defined to provide the maximum value of the product b·h, which in practice is at the point Bm = Br/2.

2.) determination of the cross-section of the magnets perpendicular to the flux. the air gap flux can be determined from the air gap cross-section and the desired induction. since the air gap flux and the flux of the magnet are theoretically the same, the cross-section of the magnet can be determined from the equation Φ = B·A. the value of the radius of this surface must be checked to ensure that it is smaller than the inner circle line of the air gap (the check can also be done in the plane model, in which case the x value at the beginning of the air gap must be greater than or equal to the transformed magnet width x).
if the evaluation shows that the desired induction is not feasible at the given air gap circle, the air gap diameter must be chosen higher or, if that is not possible, the magnet can be used at a different (higher) induction than the optimal operating point, which results in an increase in the length of the magnet (see point 3). in practice, the air gap flux and the flux of the magnet do not match due to leakage, so the correction described later should be applied.

3.) determination of the optimal length of the permanent magnets. since the reluctance of the iron body is neglected according to an earlier condition, the equation ∮H dl = 0 for the magnetic circuit can be written as follows (without considering the leakage correction):

$$ H_\delta\, \delta + H_m\, y_m = 0 \,, \qquad \frac{B_m}{\mu_0\, \mu_m}\, y_m = -\frac{B_\delta}{\mu_0}\, \delta \,, \quad (7) $$

where μm is the relative permeability of the permanent magnet (typically between 1 and 2), Bm is the operating point induction of the permanent magnet, ym is the length of the permanent magnet to be defined, Bδ is the required induction and δ is the length of the air gap. the calculation example shows that the required air gap induction can be achieved without operating the magnet at the operating point: the demagnetizing field strength is then less than the operating point field strength, but in this case the magnet length must be greater than the optimal value for the equation to be satisfied. if the goal is to use a permanent magnet with the smallest possible volume (and at the same time the lowest cost), it is advisable to use the optimal dimensions determined by the operating point. to achieve this, custom-made permanent magnets are necessary in practice; experience shows that when using commercial permanent magnets from catalogues, some compromises have to be accepted.

4.) determination of the cross-section of the soft iron body. based on the data sheets of various commonly used soft iron materials, we can state that their b-h curve is linear up to approx. 1 t, so it is not recommended to design above this value; otherwise, especially near saturation, the relative permeability of the iron decreases, and in this case the reluctance of the given section is no longer negligible [6]. the cross-sectional dimensions can be calculated from the previously determined flux value and the maximum induction.
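the sizing steps can be condensed into a short routine. the sketch below follows steps 1-3 under the paper's simplifications (iron reluctance neglected, leakage not yet corrected); the example inputs are hypothetical requirements, reusing the y25 ferrite data:

```python
import numpy as np

def size_magnet(b_gap, gap_area, delta, b_r, mu_m):
    """returns the magnet cross-section a_m and length y_m needed for a
    required air gap induction b_gap in a gap of area gap_area and
    length delta (iron reluctance and leakage neglected)."""
    b_m = b_r / 2.0                      # step 1: (b*h)max operating point
    phi = b_gap * gap_area               # air gap flux
    a_m = phi / b_m                      # step 2: magnet cross-section
    y_m = mu_m * (b_gap / b_m) * delta   # step 3: from equation (7)
    return a_m, y_m

# hypothetical requirement: 0.2 t in a 2.5 mm gap of 20 mm x 447.68 mm
a_m, y_m = size_magnet(0.2, 20e-3 * 447.68e-3, 2.5e-3, 0.378, 1.9)
print(a_m, y_m)  # ~9.5e-3 m^2 and ~5.0e-3 m for this example
```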
The conclusion is that if the dimensions of the external magnet are chosen so that the magnetic field strength inside it is close to zero, the magnet ring behaves as a "flux conductor", like an iron, albeit with low relative permeability. This operating state is also characteristic of soft iron materials near saturation, where significant flux leakage occurs; in the case of permanent magnets, however, the leakage is minimal. The result is better dynamic parameters, thanks to the reduced inductance of the moving coil and the reduced eddy current losses, while the performance of the correctly dimensioned working magnet is not degraded. If this solution is used for sizing the magnetic circuit, the last design step is modified as follows: the cross section of the external magnet ring must be determined such that the induction in it is equal to the residual induction of the magnet. For practical reasons, it is recommended to choose the length of the external magnet equal to the length of the inner magnet (this simplifies the construction). While experience shows that neodymium-iron-boron (or, at higher operating temperatures, samarium-cobalt) is the most suitable material for the internal working magnet due to its high energy density, conventional strontium ferrite magnets can also be used for the external magnet ring serving as a flux conductor. They are cheaper than the two magnet types listed above and, since they are external elements, their relatively large size is not constrained by the critical parameters affecting the moving mass, such as the diameter of the moving coil.

6. Analysis and correction of flux leakage

The leakage of magnetic induction lines in a magnetic circuit with an air gap is a complex problem that depends on several design and operating parameters. Practical experience shows that, in the optimal situation, more than 90 % of the leakage flux is concentrated around the air gap; however, when permanent magnets operate away from the optimal operating point, or when soft iron sections operate near saturation, a significant part of the induction lines may close outside the magnetic circuit. Several guideline documents for leakage estimation, including air-gap leakage calculations, are available from permanent magnet manufacturers to magnetic circuit designers. In these documents, the air gap and the field around it are divided into several regions depending on the type of magnetic circuit, which can include semicylindrical, semispherical, quarter-spherical and/or prismatic regions, and the magnetic reluctances of these parallel field regions are defined using empirical formulas. However, these definitions only help in the design of certain frequently used magnetic circuit structures [9], [10]. In general cases, the effects of leakage in an arbitrary magnetic circuit can be estimated most accurately by finite element simulation, which requires exact and accurate information about the geometric model and the characteristics of the applied materials. In order to make the method outlined earlier applicable in practice to the design of "voice-coil actuator" type electromagnetic actuators, the expected level of leakage flux must be estimated correctly. As a first step, we have worked out an algorithm for automated static magnetic calculations. The calculations are based on the basic magnetostatic calculations presented in Section 4.
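Those basic sizing relations (operating point, cross section from $\Phi = B \cdot A$, magnet length from equation (7), and the flux conductor rule just given) are simple enough to sketch in code. The following Python fragment is only a minimal illustration with assumed input values, not the authors' calculation tool:

```python
def size_magnet(b_r, mu_m, b_gap, a_gap, delta):
    """Sizing steps 1)-3): operating point, magnet cross section,
    magnet length. Iron reluctance and leakage are neglected here;
    the leakage correction of Section 6 must be applied afterwards."""
    b_m = b_r / 2.0                     # 1) operating point at max B*H
    phi = b_gap * a_gap                 # 2) air gap flux, Phi = B * A
    a_m = phi / b_m                     #    magnet cross section (equal fluxes)
    y_m = mu_m * (b_gap / b_m) * delta  # 3) magnet length from equation (7)
    return b_m, phi, a_m, y_m

def flux_conductor_cross_section(phi, b_r_ext):
    """External magnet ring used as a flux conductor: its cross section
    is chosen so that the induction in it equals its residual induction,
    i.e. the field strength inside it stays close to zero."""
    return phi / b_r_ext

# illustrative inputs (assumed, not the paper's design data):
# ferrite-like magnet with B_r = 0.38 T, 2.5 mm air gap, B_gap = 0.21 T
b_m, phi, a_m, y_m = size_magnet(b_r=0.38, mu_m=1.1,
                                 b_gap=0.21, a_gap=9.0e-3, delta=2.5e-3)
a_ext = flux_conductor_cross_section(phi, b_r_ext=0.38)
print(f"B_m = {b_m:.3f} T, Phi = {phi * 1e3:.3f} mWb")
print(f"A_m = {a_m * 1e4:.1f} cm^2, y_m = {y_m * 1e3:.2f} mm, "
      f"A_ext = {a_ext * 1e4:.1f} cm^2")
```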
The main input parameters of this algorithm are the residual induction of the applied magnet types (internal and external), their coercive force, the height of the air gap, and the radii of its internal and external sections. The required air gap induction is given by three values: a start value, a stop value, and the number of steps. The algorithm generates a table of the most important geometric parameters of the parametric simulation sequences, namely:
r0: the inner radius of the inner magnet;
r1: the outer radius of the inner magnet, equal to the inner radius of the air gap;
r2: the inner radius of the outer magnet, equal to the outer radius of the air gap;
r3: the outer radius of the outer magnet.
Several simulation series were performed to analyse the effect of flux leakage and to determine a correction relationship. These simulations were built using the ANSYS Maxwell 2D finite element simulation software [9], with axisymmetric models. In the first simulation, a magnetic circuit according to Figure 1 was used, in which the radius of the central circle of the air gap is 70 mm and the height of the air gap is 20 mm. The start value of the induction is 0.1 T and the stop value is 1 T, in 0.05 T steps; the 1 T stop value is the maximum allowed by the given air gap centre circle radius. Using the resulting geometric dimensions, a parametric simulation was prepared to investigate the difference, caused by flux leakage, between the average induction value observed in the air gap and the initial air gap induction value. In the simulation, the deviation of the working point from the optimal operating point was examined for the internal magnets (operating point induction of 0.67 T for N35-type magnets), and the deviation of the average induction from the residual induction value (Br = 0.38 T) was analysed for the external magnets. The cross sections of the soft iron elements of the construction were set large enough to avoid operation at or close to saturation. The initial values and the results of the simulation are summarized in Table 3. The simulation results show a slight increase (from about 71 % to 78 %) in the ratio of simulated to required air gap induction as the required air gap induction is varied from 0.1 T to 1 T. Examining the simulation results, we can observe that the operating point of the working magnet on the B–H demagnetization curve always shifts to the right of the ideal operating point. This phenomenon is caused by a modelling error: the computational models consider neither the alternative reluctance paths caused by leakage nor the real magnetization curve of the body. However, the difference is small enough in practice to be neglected in the general case. For the same reason, the operating point of the external magnets also differs slightly from H = 0. The next step of the work was to determine a correction factor for the required initial air gap induction, yielding corrected geometric parameters at which the simulated air gap induction equals the originally required (uncorrected) air gap induction. The correction relationship determined from the simulation results is expressed by the following equation:

$B_{\delta,\mathrm{korr}} = 1.281 \cdot B_\delta + 0.018$ .   (8)

Including this correction in the original algorithm, the parametric simulation was repeated with start and stop values of 0.1 T and 0.75 T; the upper value was selected so that the corrected input induction does not exceed the maximum feasible 1 T. The results are shown in Table 4.
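Equation (8) and the parametric sweep inputs (start value, stop value, number of steps) are straightforward to express in code. The sketch below implements equation (8) as printed and reproduces the sweep used for Table 4; the function names are illustrative:

```python
def corrected_induction(b_req):
    """Leakage correction of equation (8): the corrected initial air
    gap induction fed to the geometric calculation, so that the
    simulated air gap induction matches the required value b_req."""
    return 1.281 * b_req + 0.018

def induction_sweep(start, stop, steps):
    """Required-induction sequence defined by start value, stop value
    and number of steps, as in the algorithm's input description."""
    width = (stop - start) / (steps - 1)
    return [start + i * width for i in range(steps)]

# sweep used for Table 4: 0.1 T to 0.75 T in 0.05 T steps (14 points)
for b_req in induction_sweep(0.10, 0.75, 14):
    print(f"required {b_req:.2f} T -> "
          f"corrected input {corrected_induction(b_req):.3f} T")
```

Note that the corrected inputs listed in Table 4 deviate slightly from the raw output of equation (8), which suggests that the published coefficients are rounded.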
7. The effect of permanent magnets used as flux conductors on dynamic behaviour

Experience has shown that the method discussed in Section 5, i.e. the use of permanent magnets instead of mild steel in the outer ring of cylindrical magnetic circuits, improves the dynamic behaviour of such magnetic actuators. This improvement is detectable in both permanent magnet and electromagnetically excited constructions. Due to the structure of the above-mentioned constructions, the magnetic field created by the vertically arranged voice coil current also has an impact on the magnetic field of the stator (the stationary part of the actuator, i.e. the overall magnetic circuit).

Table 3. Comparison of the calculated and the simulated results.

Required        Simulated       Ratio simulated    Operating point       Operating point
air-gap         air-gap         vs. required       induction of the      induction of the
induction (T)   induction (T)   induction (%)      inner magnet (T)      external magnet (T)
0.1             0.071           71.00              0.793                 0.335
0.15            0.109           72.66              0.784                 0.344
0.2             0.147           73.50              0.779                 0.351
0.25            0.185           74.00              0.774                 0.356
0.3             0.223           74.33              0.770                 0.362
0.35            0.261           74.57              0.767                 0.367
0.4             0.299           74.75              0.764                 0.371
0.45            0.338           75.11              0.761                 0.375
0.5             0.376           75.20              0.758                 0.379
0.55            0.415           75.45              0.755                 0.382
0.6             0.453           75.50              0.752                 0.386
0.65            0.492           75.69              0.750                 0.389
0.7             0.532           76.00              0.746                 0.393
0.75            0.571           76.13              0.743                 0.395
0.8             0.611           76.37              0.740                 0.399
0.85            0.651           76.59              0.736                 0.402
0.9             0.692           76.88              0.732                 0.405
0.95            0.734           77.26              0.727                 0.409
1               0.778           77.80              0.717                 0.413

Previous tests and simulations in the field have verified that in certain applications, when a high current flows through the voice coil, the magnetic field created by the coil distorts the originally homogeneous magnetic field of the air gap. The distortion of the air gap induction is particularly high when the voice coil is excited with direct current. As a result, since the length of the voice coil is finite, during operation the force applied to the turns may exceed the mechanical limits the voice coil construction was originally designed to withstand. Thus, if the actuator is designed to exert a constant or slowly changing high force, this distortion must be taken into consideration when developing the construction of the voice coil. Figure 3 illustrates the character of the air gap induction distortion in the case of direct current control. Furthermore, the tests have revealed that, since in most cases the mild steel parts of such actuators are made of solid mild steel units, locally induced eddy currents compensate the distortion of the air gap induction for rapidly changing, dynamic voice coil currents. As indicated by tests on the first, electromagnetically excited prototype of the braking system to be developed, even for current components of 20 Hz, the distortion of the air gap induction decreases considerably compared to excitation with direct current. The results of a series of such tests are shown in Figure 4. On the other hand, later finite element simulations of further developed magnetic constructions have revealed another problem: the momentary self-inductance of the voice coil is tied to the reluctance of the magnetic ring of the stator. For dynamic operation, it is essential that the self-inductance of the voice coil be as low as possible in order to achieve an adequate impulse response.
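The link between coil self-inductance and circuit reluctance noted here is the textbook relation $L = N^2 / R_m$. The following sketch only illustrates the scale of the effect; the turn count and reluctance values are assumptions chosen to mirror the approximately 13 µH and roughly 100× figures reported below, not data from the paper:

```python
def coil_inductance(n_turns, reluctance):
    """Self-inductance of a coil linked to a magnetic circuit,
    L = N^2 / R_m: a high-reluctance outer ring (permanent magnet
    flux conductor, mu_r close to 1) gives a much lower inductance
    than a low-reluctance mild steel ring (mu_r >> 1)."""
    return n_turns ** 2 / reluctance

# assumed values: same coil, reluctance 100x lower with a mild steel ring
l_pm = coil_inductance(n_turns=20, reluctance=3.1e7)  # permanent magnet ring
l_fe = coil_inductance(n_turns=20, reluctance=3.1e5)  # mild steel ring
print(f"L_pm = {l_pm * 1e6:.1f} uH, L_fe = {l_fe * 1e6:.0f} uH "
      f"(ratio {l_fe / l_pm:.0f}x)")
```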
When the excitation is provided by NdFeB permanent magnets, the self-inductance of the voice coil in the construction examined (a magnetic ring with an air gap of 55 mm mean diameter) is relatively low, approximately 13 µH, which may be adequate from the point of view of dynamic behaviour. In this case, finite element simulations have shown, to a fairly good approximation, a linear relation between the current in the coil and the force generated in the coil. This is caused by the high reluctance of the magnetic ring in the environment of the voice coil: due to the character of the demagnetization curve of a permanent magnet, the magnetic field of the voice coil can considerably influence the magnitude and direction of the induction in permanent magnets only at excitation values much higher than the optimal operational value at the bias point.

Table 4. Comparison of the calculated and the simulated results.

Required        Corrected value of induction    Simulated
induction (T)   for simulation input (T)        induction (T)
0.1             0.139                           0.106
0.15            0.205                           0.154
0.2             0.271                           0.202
0.25            0.337                           0.251
0.3             0.402                           0.3
0.35            0.468                           0.35
0.4             0.533                           0.399
0.45            0.598                           0.449
0.5             0.663                           0.499
0.55            0.728                           0.55
0.6             0.792                           0.6
0.65            0.857                           0.652
0.7             0.919                           0.705
0.75            0.984                           0.759

Figure 3. The cross-section of the 2-D model.
Figure 4. Average AC component of the air gap induction as a function of frequency in a voice-coil type linear actuator excited by DC current.
Figure 5. The simulation model of an improved construction of an NdFeB-magnet based voice coil actuator (brake), including permanent magnets in the outer ring.

However, further dynamic simulations indicate that with the flux conductor solution discussed in the previous section, the self-inductance of the voice coil can be decreased even further (by about 10 %–12 %). Figure 5 shows the aforementioned construction, while the relevant finite element simulation results are presented in Figure 6 and Figure 7. The dynamic analysis of the permanent magnet construction has shown that this structure can have excellent dynamic behaviour even without the extra magnets. However, in the case of electromagnetic excitation, the entire magnetic ring is by default made of mild steel of high relative permeability, owing to which the reluctance of the magnetic ring is low and the field generated locally by the voice coil current causes greater distortion in the original magnetic field of the stator supplied with excitation current. The results of a finite element simulation of an electromagnetically excited construction of the same size and with identical voice coil dimensions indicate that in this case the self-inductance of the voice coil is about 100 times higher than in the NdFeB magnet-based construction, and that during excitation it varies considerably and non-linearly around its average value over time. This simulation result is shown in Figure 8. The reason is that, to obtain the greatest possible air gap induction, the greater part of the metallic body is magnetized almost to full saturation, and in certain sections the magnetic field of the voice coil temporarily shifts the operating point to the non-linear part of the magnetization curve.
This results in a non-linear relation between the voice coil current and the generated force which, together with the high inductance, impairs the dynamic behaviour of the construction. If certain parts of the outer ring of the stator consist of permanent magnets with the operating point settings defined in the previous section, this non-linearity and the inductance of the voice coil can be decreased significantly, thus improving the dynamic behaviour. Consequently, the solution discussed in the previous section is capable of improving the dynamic behaviour of a so-called "voice-coil actuator" by decreasing the inductance of the voice coil and its non-linear nature, especially if the magnetic ring of the stator is excited by an electromagnet.

Figure 6. Moving coil inductance of the original construction (current flow starts at 50 ms).
Figure 7. Moving coil inductance of the improved construction (current flow starts at 50 ms).
Figure 8. Dynamic finite element simulation of a construction excited by an electromagnet.

8. Conclusions and outlook

The results of the research show that air gap induction correction is an effective method for calculating the geometry of the magnets and for checking the real air gap induction of the calculated geometry in magnetic circuits of so-called "speaker-type" voice coil actuators. The accuracy of the air gap induction simulation can be increased by considering further construction details, such as joint deviations or detailed material properties. Differences between the magnetic properties of the real soft magnetic material and the simulated material may also cause some simulation error. By converting cylindrical, axially symmetrical magnetic constructions to a plane model through defined geometric transformations, the induction and field strength values of the magnetic circuit's sections can be determined with acceptable accuracy. What counts as acceptable accuracy depends strongly on the compensation capacity of the control system to be used, so checking and correcting the calculations by finite element simulation remains useful. Validation of the developed calculation and simulation methods is in progress: a prototype has been designed and built with the geometric dimensions defined by the described methods, and tests will be performed in the near future. The validated methods will be used for the development and optimization of industrial testing processes.

Acknowledgement

This research was supported by the European Union and the Hungarian State, co-financed by the European Regional Development Fund in the framework of the GINOP-2.3.4-15-2016-00004 project, aimed at promoting cooperation between higher education and industry.

References

[1] A. Váradiné Szarka, Linear magnetic break of special test requirements with dynamic performance, Journal of Electrical and Electronics Engineering, vol. 3, no. 2, 2010, pp. 237–240. Online [accessed 2 September 2021]: https://www.ingentaconnect.com/content/doaj/18446035/2010/00000003/00000002/art00031
[2] C. H. Chen, A. K. Higgins, R. M. Strnat, Effect of geometry on magnetization distortion in closed-circuit magnetic measurements, Journal of Magnetism and Magnetic Materials, vol. 320, no. 9, 2008, pp. 1597–1656. DOI: 10.1016/j.jmmm.2008.01.035
[3] L. Kazup, A. Váradiné Szarka, Diagnostics of air-gap induction's distortion in linear magnetic brake for dynamic applications, XXI IMEKO World Congress on Measurement in Research and Industry, Prague, Czech Republic, 30 August – 4 September 2015, pp. 905–908. Online [accessed 2 September 2021]: https://www.imeko.org/publications/wc-2015/imeko-wc-2015-tc7-190.pdf
[4] G. Kovács, M. Kuczmann, Simulation of a developed magnetic flux leakage method, Pollack Periodica, vol. 4, no. 2, 2009, pp. 45–56. DOI: 10.1556/pollack.4.2009.2.5
[5] M. Kuczmann, Nonlinear finite element method in magnetism, Pollack Periodica, vol. 4, no. 2, 2009, pp. 13–24. DOI: 10.1556/pollack.4.2009.2.2
[6] Nicola A. Spaldin, Magnetic Materials: Fundamentals and Device Applications, Cambridge University Press, 2003, ISBN 9780521016582.
[7] Heinz E. Knoepfel, Magnetic Fields, John Wiley & Sons, 2008, ISBN 3527617426.
[8] FEMM – Finite Element Method Magnetics – homepage. Online [accessed 2 September 2021]: https://www.femm.info/wiki/homepage
[9] ANSYS Maxwell – product homepage. Online [accessed 2 September 2021]: https://www.ansys.com/products/electronics/ansys-maxwell
[10] Magnetic circuit design guide – TDK Tech Notes. Online [accessed 2 September 2021]: https://product.tdk.com/en/products/magnet/technote/designguide.html
[11] Design of magnetic circuits – Tokyo Ferrite. Online [accessed 2 September 2021]: https://www.tokyoferrite-ho.co.jp/en/wordpress/wp-content/uploads/2017/03/technical_02.pdf

Acta IMEKO: gaining momentum
Paolo Carbone
Department of Electronics and Information Engineering, University of Perugia, via G. Duranti, 93 – 06125 Perugia, Italy

Citation: Paolo Carbone, Acta IMEKO: gaining momentum, Acta IMEKO, vol. 2, no. 1, article 2, August 2013, identifier: IMEKO-ACTA-02(2013)-01-02
Editor: Paolo Carbone, University of Perugia, Italy
Copyright: © 2013 IMEKO. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Paolo Carbone, e-mail: paolo.carbone@unipg.it

1. Introduction

After the journal kick-off last year, much has been done to produce this issue: many new people were involved in processing incoming papers, and an improved software infrastructure now guarantees the sustainability of all operations. The publication process has been fine-tuned, and many new papers are currently entering the production process. Acta IMEKO publishes extended papers that have previously been presented at workshops, congresses and symposia organized by IMEKO worldwide.
It offers the opportunity to publish, in improved and extended form, the outcomes of research activities presented and discussed at conferences. The breadth of the topics treated within IMEKO by its 24 technical committees guarantees full coverage of the subjects of interest in metrology and of their applications. So, if you happen to present your scientific work at an IMEKO-sponsored event, consider Acta IMEKO as the medium for the extended version of your paper. The open access nature of this publication guarantees maximum availability of your results, addressing the entire scientific world as a possible stakeholder. As the first editor-in-chief of this scientific journal, I am very proud to introduce this issue, full of interesting contributions by scientists in the areas of measurement of electrical quantities (TC4), education and training in measurement and instrumentation (TC1), measurement science (TC7), and measurements in biology and medicine (TC13). The papers refer to two major IMEKO events that took place recently in Korea and Germany. Details on these symposia are provided in the introductions by the section editors of this issue, Ján Šaliga, Dušan Agrež and Gerhard Linß. Many people have contributed to this issue under the supervision of the IMEKO vice-president for publications, Paul Regtien. Beyond the valuable contributions of all the reviewers involved, Francisco Alegria, Pedro Ramos, Sergio Rapuano and Dirk Röske in particular did an excellent and effective job in making this publication possible. Thank you to all of them. Keep in touch with our journal and enjoy its scientific content.

Introduction to the special section of the 2019 APMF, the Asia Pacific Measurement Forum on mechanical quantities
Koji Ogushi1, Momoko Kojima1
1 National Metrology Institute of Japan, National Institute of Advanced Industrial Science and Technology, Tsukuba Central 3, 1-1-1 Umezono, Tsukuba, 305-8563 Ibaraki, Japan

Section: Editorial
Citation: Koji Ogushi, Momoko Kojima, Introduction to the special section of the 2019 APMF, the Asia Pacific Measurement Forum on mechanical quantities, Acta IMEKO, vol. 10, no. 1, article 2, March 2021, identifier: IMEKO-ACTA-10 (2021)-01-02
Editor: Francesco Lamonaca, University of Calabria, Italy
Received March 17, 2021; in final form March 17, 2021; published March 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding authors: Koji Ogushi, e-mail: kji.ogushi@aist.go.jp; Momoko Kojima, e-mail: m.kojima@aist.go.jp

Dear Readers,

Measurement technology for mass, force and torque constitutes an integral part of the intellectual infrastructure for a diverse range of human activities, such as quality and safety assurance of industrial products, fair trade, energy saving, and environmental protection. The Asia Pacific Symposium on Measurement of Mass, Force and Torque (APMF), since its initiation in 1992, has been offering participants the opportunity to exchange the latest information on R&D in these fields.
It has grown steadily into a not-to-be-missed event for metrologists, researchers, and engineers, especially those actively working in the Asia-Pacific region. The name was changed from "Asia Pacific Symposium on Measurement of Mass, Force and Torque (APMF)" to "Asia Pacific Measurement Forum on Mechanical Quantities (APMF)" in 2017, and the scope of APMF activities was extended to further mechanical quantities such as density, hardness, pressure, vacuum, and others. APMF 2019 was held in Niigata, Japan, from 17 to 21 November 2019. It was sponsored by the Society of Instrument and Control Engineers (SICE), co-sponsored by the International Measurement Confederation (IMEKO) and Niigata University, and organized by the National Metrology Institute of Japan (NMIJ), a division of the National Institute of Advanced Industrial Science and Technology (AIST). More than 140 participants from 14 countries and economies contributed to the successful forum, presenting 71 scientific papers. In this special issue, you can find four articles selected by the international program committee of the APMF and considered worthy of publication in the Acta IMEKO journal after peer review. The papers included in the special section show recent progress of research on measurements in the fields of mass, pressure, and flow in the Asia-Pacific region. We briefly introduce these papers as follows. In the field of mass metrology, one paper was selected: Yuhsin Wu and her colleagues presented the combined X-ray fluorescence (XRF) / X-ray photoelectron spectroscopy (XPS) surface analysis system for quantitative surface layer analysis of Si spheres, developed at CMS/ITRI to realize the new kilogram definition by the X-ray crystal density (XRCD) method; PTB cooperated with CMS by transferring the know-how and technology of the XRCD method. In the field of pressure metrology, two papers are included in the issue. Ahmed S. Hashad and the PTB team reported the evaluation of the PTB force-balanced piston gauge (FPG), a non-rotating piston gauge: they compared the FPG with three different PTB pressure standards ranging from 3 Pa to 15 kPa and confirmed the theoretically obtained effective area. The other paper, by Hideaki Yamashita, concerns the improvement of Yokogawa's silicon resonant pressure transducer: the characteristics of the pressure sensors were excellently refined based on the calibration results from NMIJ and Yokogawa, with the aim of using them as transfer standards in future key comparisons. In the flow field, Masanao Kaneko numerically investigated the effect of a single groove on the flow behaviour and loss generation in a linear compressor cascade; the analysis was performed for different tip clearances and will be beneficial for the improvement of compressor aerodynamic performance in the future. We are deeply grateful to all contributors, editors, authors, and reviewers who made this issue possible, and we hope you will enjoy reading this special section.
Koji Ogushi, Momoko Kojima
Guest Editors

Development and metrological characterization of cement-based elements with self-sensing capabilities for structural health monitoring purposes
Gloria Cosoli1, Alessandra Mobili2, Elisa Blasi2, Francesca Tittarelli2,3, Milena Martarelli1, Gian Marco Revel1
1 Dip. di Ingegneria Industriale e Scienze Matematiche, Università Politecnica delle Marche, via Brecce Bianche snc, 60131 Ancona, Italy
2 Dip. di Scienze e Ingegneria della Materia, dell'Ambiente ed Urbanistica, Università Politecnica delle Marche, via Brecce Bianche snc, 60131 Ancona – INSTM Research Unit, Italy
3 Istituto di Scienze dell'Atmosfera e del Clima, Consiglio Nazionale delle Ricerche, via Gobetti 101, 40129 Bologna, Italy

Section: Research paper
Keywords: structural health monitoring; piezoresistivity; self-sensing materials; resilience; metrological characterization
Citation: Gloria Cosoli, Alessandra Mobili, Elisa Blasi, Francesca Tittarelli, Milena Martarelli, Gian Marco Revel, Development and metrological characterization of cement-based elements with self-sensing capabilities for structural health monitoring purposes, Acta IMEKO, vol. 12, no. 2, article 31, June 2023, identifier: IMEKO-ACTA-12 (2023)-02-31
Section editors: Alfredo Cigada, Politecnico di Milano, Italy; Andrea Scorza, Università degli Studi Roma Tre, Italy; Roberto Montanini, Università degli Studi di Messina, Italy
Received November 30, 2022; in final form April 22, 2023; published June 2023
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This research activity was carried out within the framework of the RECity project "Resilient City – Everyday Revolution", identification code ARS01_00592.
Corresponding author: Gloria Cosoli, e-mail: g.cosoli@staff.univpm.it

Abstract
Mortar specimens containing conductive additions (i.e., biochar and recycled carbon fibres, both alone and together, and graphene nanoplatelets) were characterized from a metrological point of view. Their piezoresistive capability was evaluated, exploiting the 4-electrode Wenner's method to measure electrical impedance in alternating current (AC); in this way, both material and electrode-material polarization issues were avoided.
The selected mix-design was used to manufacture scaled concrete beams serving as demonstrators. Additionally, FEM-based models were realized for a preliminary analysis of the modal parameters that will be investigated through impact tests conducted after different loading tests, simulating potential seismic effects. The results show that the combined use of recycled carbon fibres and biochar provides the best performance in terms of piezoresistivity (with a sensitivity of 0.109 (µm/m)⁻¹ vs 0.003 (µm/m)⁻¹ of the reference mortar). Conductive additions improve the signal-to-noise ratio (SNR) and increase the material electrical conductivity, providing suitable tools to develop a distributed sensor network for structural health monitoring (SHM). Such a monitoring system could be exploited to enhance the resilience of strategic structures and infrastructures towards natural hazards. A homogeneous distribution of conductive additions during casting is fundamental to enhance the measurement repeatability; in fact, both the intrinsic properties of concrete and the curing effect (hydration phenomena, which increase electrical impedance) cause high variability.

1. Introduction

The life cycle of cement-based structures can be optimized through proper measurements, in particular in the field of structural health monitoring (SHM). Indeed, the advantage of continuous monitoring over periodic inspections is unquestionable, as the latter can occur when it is already too late to intervene effectively. Adequate intervention strategies can be adopted if measurement results timely highlight potential damage in a structure/infrastructure [1]. Accordingly, it is possible to minimize management costs [2], which grow with the time elapsed between damage occurrence and intervention (de Sitter's law [3]). In this way, public administrations can prioritize interventions on the structures and infrastructures identified through proper SHM technologies combined with an early warning system [4]. Sensors play a pivotal role in this field [5], especially if they have Internet of Things (IoT) capabilities that enable data sharing, cloud services for computation (also with artificial intelligence – AI – technologies), and remote monitoring systems [6]. The best solution is probably represented by distributed sensor networks, able to gather data at many locations within the same structure, thus mapping the whole system and highlighting possible criticalities in almost real time [7]. Moreover, the collected data can be exploited to build a dedicated database, which can be used to feed AI and machine learning (ML) algorithms for both prediction and classification purposes [8]–[10]. Furthermore, these data can be interfaced with the structure's BIM (Building Information Model), hence keeping track of all the changes occurring over the years [11]. Non-destructive techniques (NDTs) are among the most used systems for SHM, since they allow significant parameters to be measured without taking samples to be analysed in the laboratory. Standard sensors can be employed, such as accelerometers, load cells, inclinometers, GPS, environmental sensors, and also non-contact systems (e.g. laser-based sensors) [12]–[14]. The spatial resolution of the measurement results clearly depends on the sensor positioning; as mentioned above, distributed sensor networks are particularly relevant in this field, since they allow an actual mapping of the whole structure to be monitored, with a level of detail related to the final aims of the monitoring. In this context, the cost of the hardware should also be properly considered, being almost proportional to the number of nodes. Even better results can be achieved by combining monitoring and inspection procedures; the latter can also be performed with advanced techniques, such as sensorized drones (also known as unmanned aerial vehicles, UAVs [15]). Among the possible sensors, UAVs can carry onboard environmental sensors for air quality assessment [16] or high-resolution multispectral vision-based systems to detect possible structural damage or degradation phenomena (e.g., caused by prolonged exposure to an aggressive environment, such as chloride-rich solutions/aerosols).
When an inspection identifies an event of relevance, further tests can be planned, for example to assess the severity of the phenomenon (e.g., crack aperture) [17], [18]. What is more, scanning a structure with a vision-based system embedded on a UAV allows a 3D model to be obtained, showing the position and severity of the identified defects or damages. In recent years, SHM sensors have often been coupled to self-sensing materials [19] (even better if eco-compatible and sustainable, such as by-products or recycled materials). In this way, it is possible to develop distributed sensor networks with IoT capabilities, able to continuously gather data from remote buildings and infrastructures. This is particularly relevant for critical structures, which should always remain operational, even after a catastrophic event, when the management of aftershock emergencies is pivotal. In this context, monitoring systems feeding early warning systems are very important to ensure public safety [13], [20]. Self-sensing materials confer many capabilities on the structure, easing its monitoring: the structure perceives its own health status [21], being able not only to sense external loads (i.e., phenomena related to the piezoresistive capacity), but also to detect the penetration of contaminants or to identify defects and cracks. Many materials have recently been applied to this aim, both in the form of fibres and of fillers; among others, steel fibres [22], carbon fibres (both virgin and recycled), nickel powder [23], carbon nanotubes [24], graphene [25], graphite [26], foundry sand [27], carbon black [28], char, and biochar [29] can be mentioned. In a green and circular economy perspective, particular interest has recently been shown in recycled materials and by-products with potential self-sensing capabilities; this strategy allows not only reusing materials, but also limiting production costs. For example, in the European project ENDURCRETE (New Environmental Friendly and Durable Concrete, integrating industrial by-products and hybrid systems, for civil, industrial, and offshore applications, GA n° 760639, http://www.endurcrete.eu/), some of the present authors developed self-sensing mix-designs for mortar and concrete including carbon-based additions, namely recycled carbon fibres and char or biochar. A patent (https://www.knowledge-share.eu/en/patent/eco-friendly-and-self-sensing-mortar/) has also been granted on this invention, together with the related measurement system for electrical impedance ("Eco-compatible and self-sensing mortar and concrete compositions for manufacturing reinforced and non-reinforced constructive elements, related construction element and methods for the realization of self-monitorable building structures", patent n° 102020000022024). This can be exploited for SHM purposes in self-monitorable structures, whose life cycle is optimised thanks to the continuous assessment of their health status.
The activities of the ENDURCRETE project are being followed up within the framework of the national project RECity (Resilient City – Everyday Revolution, PON R&I 2014-2020, identification code: ARS01_00592, http://www.ponricerca.gov.it/media/396378/ars01_00592-decreto-concessione-prot369_10feb21.pdf), whose objective is to realize a multimodal monitoring system (as modular and interoperable as possible) that can enhance the resilience of critical structures/infrastructures to natural hazards, along with the resilience of energy distribution systems. Among natural threats, it is worth mentioning earthquakes and landslides (which can also interact with each other). In Italy, particular attention is paid to seismic risk: Italy is a seismic area, where significant earthquakes often occur in different regions, such as the seismic crater of central Italy. From the perspective of the seismic protection of buildings and infrastructures, on the one hand, the strains caused by external loads should be assessed to analyse possible structural damage; on the other hand, the analysis of vibrations is pivotal to characterise the dynamic behaviour of the whole structure and to identify possible criticalities. Lacanna et al. [30] considered a bell tower (Giotto's bell tower, Firenze, Italy) and studied its dynamic response through the combination of operational modal analysis and seismic interferometry; they evaluated frequencies, mode shapes, and seismic wave velocity, using a seismic sensor network capable of promptly identifying structural damage, and the results showed that the analysed bell tower is a dispersive structure with bending deformation. Induced vibrations represent a great concern for concrete-based lifelines, such as bridges [31]: hence, the control and mitigation of vibrations is fundamental. In the context of seismic monitoring, accelerometers are the most common sensors; as an example, Oliveira and Alegre [32] applied accelerometers to the monitoring of dams and were thus able to describe natural frequencies, mode shapes, and the seismic response over time. Indeed, in a seismic context SHM surely plays a pivotal role, even more so when monitoring is accompanied by an early warning system based on the measured signals. In this way, public administrations can be supported in the definition of decision-making strategies, which is essential for risk management and for the prioritization of emergency interventions. In the RECity project, the authors of this paper utilize data-fusion strategies together with artificial intelligence (AI) technologies to extract meaningful parameters related to the structural health status of a structure. This information can be exploited to set up an early warning system, promptly highlighting critical situations that should be timely considered, enabling efficient intervention. Furthermore, AI algorithms will also be useful for prediction purposes, made possible through the ingestion of long time-series data for model training. Finally, the RECity project aims to valorise good practices for resilience, supporting the citizens' community during emergency situations. To this aim, the data gathered through the monitoring system will be shared on dedicated platforms with user-friendly interfaces, thus allowing the creation of a formed, informed, trained, and active community, which is aware of the city status and has a proper sense of community.
This paper presents the results of the metrological characterization of different types of mortar with self-sensing capabilities, embedding sensing electrodes for electrical impedance measurement, thus proving their piezoresistive ability. The results were compared to standard measurements performed with traditional strain gages. Moreover, this work reports the first results of the monitoring of demonstrative scaled prototypes realized with the best performing conductive additions in terms of self-sensing capability. In the near future, these prototypes will be subjected to loading tests and analysed in terms of dynamic response, hence demonstrating their potential for application within the monitoring platform, especially in a seismic context. Then, durability tests will be carried out and data will be acquired over a long period, thus collecting data useful for the training of AI algorithms in view of an early warning system. Indeed, this is pivotal to enhance the resilience of structures and infrastructures to natural hazards (like earthquakes). The paper is organised as follows: the materials and methods are reported in Section 2, Section 3 shows the results, and the authors give their discussion and conclusions in Section 4, together with possible future developments.

2. Materials and methods

The main aim of the RECity project is to develop a flexible and interoperable platform for the collection of multimodal signals and their sharing on a cloud to deliver different services (e.g., data processing and the exploitation of AI algorithms for setting up early warnings and deriving significant indices for SHM purposes). The potential of this platform is relevant for managing emergencies and adopting adequate policies in the seismic context, thus improving the resilience of structures and infrastructures, especially in the case of critical constructions. Data from different sensors will be gathered from the project demonstrators, which are described in the following sections. The pipeline related to the platform is reported in Figure 1. The collected data will be stored both in a dedicated database and in the FIWARE ecosystem (https://www.fiware.org/); FIWARE can deal with different types of data and can also merge new and existing data models, in view of developing smart cities and systems capable of sharing information and knowledge with diverse stakeholders (e.g., institutions and public decision-makers, but also common citizens). In this way, the researchers' community can raise awareness of the city's structures and infrastructures, hence also improving resilience towards possible emergency situations.

2.1. Metrological characterization of the piezoresistive capacity

In a preliminary phase of the RECity project, the authors considered mortar specimens to identify the best mix-design in terms of piezoresistive capacity. Different types of conductive additions were employed, and their behaviour was evaluated under laboratory conditions. These tests were performed to select the best performing carbon-based additions to be used for the casting of the project demonstrators (concrete beams), which will be further detailed. Indeed, those concrete specimens will be subjected to both loading tests and vibrational analyses, simulating the effect of a seismic event. The RECity platform for SHM will include, among others, electrical impedance sensors and accelerometers; in particular, the signals will be acquired through a low-cost system, the EVAL-AD5940 board (by Analog Devices).
Within the context of the piezoresistivity tests on mortar specimens, electrical impedance signals were compared with the deformation measured by traditional strain gages, to characterize these self-sensing materials in terms of metrological performance; this performance is certainly affected by the conductive additions included in the mix-design. Prismatic mortar specimens (40 mm × 40 mm × 160 mm, Figure 2) were realized according to 5 different mix-designs:
• reference mortar specimens (REF), without conductive additions, to be considered as the reference mixture;
• biochar-based mortar (BCH); the by-product was provided by RES (Reliable Environmental Solutions) in pellet form, then ground and sieved at 75 µm before addition to the mix-design (0.5 vol.%), in order to facilitate its distribution within the mixture;
• mortar containing 6-mm long recycled carbon fibres (RCF); the fibres were supplied by Procotex Belgium SA and were obtained by mixing carbon fibres of different origins and graphite from pure carbon fibre coils; a mean density of 1.85 g/cm³ was considered and the fibres were added at 0.05 vol.%;
• mortar with both biochar and recycled carbon fibres (BCH+RCF), at the same dosages used in the BCH and RCF specimens (i.e., 0.5 vol.% and 0.05 vol.%, respectively);
• mortar manufactured with graphene nanoplatelets (GNP), having a thickness of 6-8 nm and a size lower than 5 µm, at 0.5 vol.%; in particular, Pentagraf 30 graphene nanoplatelets (produced by Pentachem S.r.l.) were used, whose specific surface area (measured with the BET adsorption method, Brunauer–Emmett–Teller) was equal to 30 m²/g.

Figure 1. Scheme of data acquisition from demonstrators, sharing, processing and results.

To manufacture the mortar specimens, Portland cement (CEM II/A-LL) and a calcareous sand (0-8 mm) as fine aggregate were used, the latter mixed with 5.5 wt.% water to reach saturated surface dried (s.s.d.) conditions. The water/cement ratio (w/c) was the same for all the mortar specimens and equal to 0.55 by mass, whereas the aggregate/cement ratio (a/c) was equal to 3 by mass. The mortar workability (x, measured according to the UNI EN 1015-3 standard) was equal to 140 mm ≤ x ≤ 200 mm (i.e., plastic workability). The mix-designs of each type of mortar are reported in Table 1. To characterize the mortars in terms of piezoresistive capacity, electrical impedance measurements have to be carried out. Hence, 4 stainless steel rods (diameter: 3 mm; length: 40 mm; inter-electrode distance: 12 mm), acting as electrodes, were embedded (for half of their length) along the specimen centreline, in order to exploit the Wenner's configuration method [33] in alternating current (AC, in particular with a measurement frequency higher than 1 kHz), hence avoiding both electrode-interface and material polarization effects, which would affect the measurement result with a significant uncertainty contribution. After the casting phase, the specimens were cured in a temperature (T) and relative humidity (RH) controlled environment (T = (20 ± 1) °C; RH = (95 ± 5) %) for 7 days, wrapped in plastic sheets; then, they were left at T = (20 ± 1) °C and RH = (50 ± 5) % without any cover. During the curing phase, the mortar specimens were regularly monitored in terms of both mechanical strength and electrical impedance, with measurements carried out at 2, 7, and 28 days.
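For reference, the Wenner configuration converts the measured impedance into an apparent resistivity via the textbook relation ρ = 2π·a·Re(Z). The sketch below illustrates this relation only; it assumes the semi-infinite-medium formula, whereas a 40 mm × 40 mm × 160 mm prism requires a geometry-specific cell constant, so this is not the authors' actual processing chain:

```python
import math

def wenner_resistivity(z_re, spacing):
    """Apparent resistivity from a Wenner 4-electrode measurement:
    current through the outer electrodes, potential drop across the
    inner ones, rho = 2 * pi * a * Re(Z). Textbook formula for a
    semi-infinite medium; a prismatic specimen needs a dedicated
    cell constant instead of 2 * pi * a."""
    return 2 * math.pi * spacing * z_re

# illustrative values: 12 mm electrode spacing, 500 ohm measured Re(Z)
rho = wenner_resistivity(z_re=500.0, spacing=12e-3)
print(f"apparent resistivity = {rho:.1f} ohm*m")
```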
Compressive strength was assessed with a hydraulic press (Galdabini, with a maximum applicable load of 200 kN), considering the average value obtained on 3 dedicated specimens of the same type. Electrical impedance, on the other hand, was assessed through the electrical impedance spectroscopy method, employing a potentiostat/galvanostat (Metrohm) with a 4-electrode configuration (Figure 3) on the prismatic mortar specimens with embedded electrodes.

Table 1. Mortar mix-designs (kg/m³).

Mix       Cement  Water  Sand   RCF  BCH  GNP
REF       499     274    1497   –    –    –
RCF       499     274    1496   1    –    –
BCH       496     273    1489   –    10   –
BCH+RCF   496     273    1488   1    10   –
GNP       496     273    1489   –    –    10

Figure 2. Geometry of the sensorized mortar specimen (the strain gage and the electrical impedance electrodes can be appreciated in the figure).
Figure 3. Wenner's method applied on a mortar specimen: electric current is applied between the external electrodes (i.e. WE and CE) and the corresponding electric potential drop is measured between the internal ones (i.e. S and RE).

The loading tests were performed after the completion of the curing phase. A mechanical press (ZwickRoell Z050) was employed to apply a maximum load equal to 11.5 kN. The value of the applied load was set so as to remain within the elastic range of the material [34], which was measured on the REF mortar during curing; in particular, 20 % of the compressive strength obtained for the REF specimen was chosen. Hence, the formation of cracks was avoided, as well as any alteration of the specimen's mechanical properties. Each specimen was subjected to 5 loading cycles per test, and the test was repeated three times in different weeks (over a time interval of 8 weeks), for a total of 15 loading cycles per specimen. With this test protocol, the variability due to hydration phenomena also plays a relevant role in the measured electrical impedance. The complete test setup is reported in Figure 4, where it is possible to observe the ZwickRoell mechanical press, equipped with a load cell (full scale: 50 kN), used to load the mortar specimen, on which a strain gage specific for cement-based materials (HBK, net grid length: 50 mm) was installed. The Spider8 system by HBM was employed to acquire the strain gage signals, adopting a half-bridge configuration to compensate possible external disturbances. Moreover, a preliminary test was performed to compare the results obtained with half- and full-bridge configurations; to this aim, a BCH specimen was subjected to 5 repeated loading cycles and the strain was assessed with both Wheatstone's bridge configurations. Data analysis focused on the real part of the electrical impedance, which in the literature is the quantity most associated with the structural conditions of the material [35].

2.2. Design and realization of the project demonstrators

To verify the behaviour of self-sensing materials in a seismic context, the authors designed loading tests and vibrational analyses on scaled demonstrators. In particular, 1:5 scaled reinforced concrete beams (10 cm × 10 cm × 50 cm) were planned in detail, both in terms of materials and embedded sensors.
For the former, the best conductive additions and dosages resulting from the piezoresistivity tests were chosen; for the latter, different types of sensors were selected, namely:
• electrical impedance sensors, which are fundamental to demonstrate the piezoresistive capacity of the concrete elements;
• accelerometers, mounted on specific bases fixed on the upper specimen surface to measure the dynamic response of the structure to external excitation (provided with an impact hammer, as described in detail below);
• sensors for the monitoring of the rebar free corrosion potential, useful for the early detection of concrete deterioration such as cracking or water penetration [38] (Cosmonet – Concrete Structures Monitoring Network, Università Politecnica delle Marche, patent n° 0001364988); indeed, after the loading and vibrational tests, the specimens will be subjected to accelerated degradation tests, and the presence of cracks generated during loading could ease the penetration of contaminants into the material.
The geometry of the prototype is reported in Figure 5; 20 degrees of freedom are foreseen for the measurement of the specimen's dynamic response (stainless-steel washers are fixed on the specimen upper surface with a bicomponent acrylic resin, so that the accelerometers can easily be installed with beeswax during the experimental modal analysis), whereas the excitation point is set on the specimen centreline. The specimen has a reinforcing steel rebar at its centre, where different sensors are placed:
• electrode arrays for electrical impedance measurement;
• a pseudo-reference electrode for the measurement of the rebar free corrosion potential (Cosmonet sensor [36]).
The impact test will be carried out with a sensorized impact hammer (PCB 086 B04), equipped with a load cell for the measurement of the applied force. In this way, the possible effects of an earthquake are simulated, including cracking phenomena linked to external forces acting on the structural element. Hence, the dynamic response is evaluated at time 0 (as-is conditions) and after each applied load, to observe possible modifications in the modal parameters of the element.

Figure 4. Measurement setup for piezoresistivity tests: loads are applied on the tested mortar specimen through a mechanical press; strain and electrical impedance are measured by means of a Wheatstone's bridge and a galvanostat/potentiostat, respectively.
Figure 5. Geometry of the demonstrator prototype with sensors, namely the electrode array for electrical impedance measurement, the electrode for the measurement of the rebar free corrosion potential (i.e. Cosmonet sensor), as well as the excitation and acceleration measurement points for the impact test.
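The modal parameters mentioned above are typically extracted from frequency response functions (FRFs) estimated from the hammer force and the accelerometer signals. The following sketch shows the standard H1 estimator applied to synthetic signals; it is a generic illustration, not the project's processing code:

```python
import numpy as np

def h1_frf(force, accel, fs, nfft=4096):
    """H1 frequency response function estimator for impact testing:
    H1(f) = S_fa(f) / S_ff(f), the cross-spectrum of force and
    acceleration over the auto-spectrum of the force, averaged over
    repeated hammer hits to reduce the effect of response noise.

    force, accel -- arrays of shape (n_hits, n_samples)
    fs           -- sampling frequency in Hz
    """
    f = np.fft.rfftfreq(nfft, d=1.0 / fs)
    F = np.fft.rfft(force, n=nfft, axis=1)
    A = np.fft.rfft(accel, n=nfft, axis=1)
    s_ff = np.mean(F * np.conj(F), axis=0)  # averaged force auto-spectrum
    s_fa = np.mean(A * np.conj(F), axis=0)  # averaged cross-spectrum
    return f, s_fa / s_ff

# synthetic example: 5 hits on a lightly damped system resonating at 400 Hz
fs, n = 5000, 4096
t = np.arange(n) / fs
rng = np.random.default_rng(0)
hits = np.array([np.exp(-t / 0.001) * a for a in rng.uniform(0.8, 1.2, 5)])
resp = np.array([np.exp(-30 * t) * np.sin(2 * np.pi * 400 * t) * a
                 for a in rng.uniform(0.8, 1.2, 5)])
f, h1 = h1_frf(hits, resp, fs)
print(f"resonance estimated at {f[np.argmax(np.abs(h1))]:.0f} Hz")
```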
2.2.1. Manufacturing of the concrete demonstrators

Nine concrete specimens were manufactured, as follows:
• 3 sensorized concrete beams, to be subjected to loading tests and vibrational analyses;
• 3 sensorized concrete beams, to be kept undamaged (i.e., reference specimens);
• 3 non-sensorized concrete beams, to be subjected to loading tests and vibrational analyses; these serve to evaluate the effect of the embedded sensors, which represent discontinuities in the material.
To manufacture the concrete specimens, Portland cement (CEM II/A-LL) and a calcareous sand (0-4 mm) as fine aggregate were used, whereas intermediate (5-10 mm) and coarse (10-15 mm) river gravels were used as coarse aggregates. BCH and RCF were added at 0.5 vol.% and 0.05 vol.% of the total, respectively, as conductive additions (having proved to be the best performing additions in terms of piezoresistive capability, see Section 3.1.2). The w/c ratio was set at 0.50 by mass to reach the S5 workability class. The mix-design is reported in Table 2.

Table 2. Concrete mix-design (kg/m³).

Cement               470
Water                235
Air (%)              2.5
Sand                 795
Intermediate gravel  321
Coarse gravel        476
RCF                  1
BCH                  10

The casting phase was carried out using a concrete mixer; at first, the solid components were mixed together for 8 minutes, then water was added and mixing continued for an additional 10 minutes. To manufacture the sensorized/non-sensorized reinforced specimens, the fresh mix was poured into prismatic moulds (10 cm × 10 cm × 50 cm); in addition, cubic specimens (side: 10 cm) were realized for compressive strength tests (performed at 1, 7, and 28 days according to the EN 12390-3 standard). Moreover, flexural strength was also assessed on dedicated 10 cm × 10 cm × 50 cm non-sensorized reinforced specimens, according to EN 12390-5.

2.2.2. FEM numerical model

In order to carry out a preliminary analysis of the modal behaviour, numerical simulations were made in the COMSOL Multiphysics® environment, exploiting the finite element method (FEM). In particular, the designed concrete beam was simulated in different configurations:
• scaled (10 cm × 10 cm × 50 cm) and life-size (50 cm × 50 cm × 250 cm) concrete specimens, without reinforcing rebar;
• scaled (10 cm × 10 cm × 50 cm) concrete specimen, with reinforcing rebar;
• scaled (10 cm × 10 cm × 50 cm) concrete specimen, with reinforcing rebar and embedded sensors.
Indeed, the embedded sensors represent discontinuities inside the specimen; analogously, the plastic tubes used for installing the reinforcement rebar and the sensors influence the modal parameters of the structural element. Hence, these preliminary models help to better understand the behaviour of the element in the loading and vibrational tests, as well as to identify the natural frequencies of interest and the related mode shapes in both sensorized and non-sensorized beams. The geometry of the sensorized and non-sensorized reinforced concrete models is reported in Figure 6 and Figure 7, respectively.

Figure 6. Geometry of the FEM model related to the scaled sensorized concrete specimen.
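As a sanity check on such FEM models, the first bending modes of the plain (unreinforced, non-sensorized) beam can be estimated analytically. The sketch below uses the free-free Euler-Bernoulli formula with assumed concrete properties (E = 30 GPa, ρ = 2400 kg/m³, not values from the paper); for a stubby beam like this one (length-to-depth ratio of 5), shear deformation and rotary inertia make the true frequencies noticeably lower:

```python
import math

def free_free_beam_frequencies(E, rho, b, h, L, n_modes=3):
    """First bending natural frequencies of a free-free Euler-Bernoulli
    beam: f_n = (lambda_n^2 / (2*pi*L^2)) * sqrt(E*I / (rho*A)).
    Only a rough analytical check of FEM results, not a replacement."""
    lambdas = [4.730, 7.853, 10.996]  # free-free boundary condition roots
    I = b * h ** 3 / 12.0             # second moment of area
    A = b * h                         # cross-sectional area
    c = math.sqrt(E * I / (rho * A))
    return [(lam ** 2 / (2 * math.pi * L ** 2)) * c
            for lam in lambdas[:n_modes]]

# assumed concrete properties, 10 cm x 10 cm x 50 cm beam
for i, f in enumerate(free_free_beam_frequencies(30e9, 2400,
                                                 0.10, 0.10, 0.50), 1):
    print(f"bending mode {i}: ~{f:.0f} Hz")
```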
3.1. monitoring and piezoresistivity tests on mortar specimens
in this section, the results related to the monitoring of mortar specimens during the curing phase (in terms of both mechanical strength and electrical impedance) and to the piezoresistivity tests are reported, together with the comparison between half-bridge and full-bridge configurations for the strain measurement.
3.1.1. monitoring during curing phase
the results in terms of compressive strength and real part of electrical impedance are reported in figure 8 and figure 9, respectively. it is possible to observe an increase in both values, as expected during curing: since hydration of the material is occurring, electrical impedance and mechanical strength increase [37]. furthermore, the mechanical strength at 28 days is enhanced by the addition of rcf to the mortar mix-design, passing from 36 mpa (ref specimen) to 40 mpa and 43 mpa for the bch+rcf and rcf specimens, respectively. thus, it can be stated that the addition of rcf improves the mechanical performance of the material, thanks to the presence of carbon micro-particles on their surface acting as nucleation points for the formation of c–s–h crystals [38], [39]. the self-sensing properties should also be enhanced, given that rcf contribute to decreasing the material electrical resistivity; in particular, a decrease of 85 % and 92 % is obtained for the rcf and bch+rcf mixtures, respectively. in this way, it is possible to exploit relatively low-cost sensors for the monitoring of electrical impedance, hence enabling the realization of multi-mode sensor networks for shm purposes [2]. observing figure 9, it is possible to notice relevant differences in terms of electrical impedance among the diverse mix-designs. this was expected and is attributable to the different conductive additions employed to realize the mortar specimens. indeed, rcf significantly decrease the electrical resistivity of the final material, thanks to their good electrical conductivity properties.
3.1.2. piezoresistivity tests
preliminary tests were carried out to compare the results obtainable with a half-bridge and a full-bridge configuration for the measurement of the strain of a mortar specimen (namely a bch specimen) subjected to cyclic loading tests. the results provided a repeatability range of 18 µε and 4 µε for the half-bridge and full-bridge configurations, respectively; moreover, a repeatability deviation of 7 µε and 2 µε was obtained in the two cases, respectively. given that these values are acceptable for the in-field application of interest, the half-bridge configuration was selected for the rest of the piezoresistivity tests. indeed, it is an optimal compromise between metrological performance (accuracy and sensitivity, i.e. gage factor), ease of installation and cost (also in view of the realization of distributed sensor networks).
figure 7. geometry of the fem model related to the scaled non-sensorized concrete specimen.
figure 8. mechanical strength of mortar specimens during the curing phase.
figure 9. real part of electrical impedance of mortar specimens during the curing phase.
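the gap between the half- and full-bridge repeatability figures above is consistent with the nominal bridge sensitivities: with the same gauges, a full bridge doubles the output of a half bridge (besides improving thermal compensation). the sketch below is purely illustrative; the gage factor and excitation voltage are assumed values, not taken from the paper.

GF = 2.0    # typical metal-foil gage factor (assumed)
V_EX = 5.0  # bridge excitation in volts (assumed)

def bridge_output_uv(strain, active_arms):
    """linearized wheatstone-bridge output in microvolts for n active arms
    (2 = half bridge, 4 = full bridge), assuming ideal gauge placement."""
    return active_arms * GF * strain * V_EX / 4 * 1e6

eps = 200e-6  # 200 um/m, the order of the strains reported in table 3
print(bridge_output_uv(eps, 2))  # half bridge: 1000 uV
print(bridge_output_uv(eps, 4))  # full bridge: 2000 uV, twice the signal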
the results of the piezoresistivity tests are reported in table 3; in particular, the mean (µ) and standard deviation (σ) values are reported for the applied maximum loading force (fmax), the maximum strain (εmax), the variation of the real part of electrical impedance (∆zre) and the related electrical impedance at time 0 (zre_t0), and the sensitivity of the electrical impedance real part towards strain. results are reported for all the tested mix-designs and are averaged on the 15 loading cycles applied to each specimen. as expected, quite high values of standard deviation were obtained for electrical impedance. they can be attributed to the ageing process of the specimens (causing the material hydration; tests were performed in a time span of 8 weeks), which on the other hand also causes significant variations in terms of mechanical elasticity, reflected in high standard deviation values for the strain parameter. the sensitivity of electrical impedance towards strain (and, hence, towards the external load) is improved by conductive additions. sensitivity passes from 0.003 µε⁻¹ for the ref mortar specimen to 0.109 µε⁻¹ for the bch+rcf one; in this case, in fact, the lower electrical resistivity leads to a higher percentage variation of electrical impedance. for the sake of completeness, it should be noted that rcf alone do not provide the same performance to mortar, at least at the considered load values; for this reason, biochar plays a key role in the provision of piezoresistive properties. moreover, high variability is observed also in the response of self-sensing materials in terms of electrical impedance variations; for example, considering the bch+rcf mortar, a standard deviation of 26.46 ω for a mean value of 70.26 ω is reported for the ∆zre quantity. this could be due both to hydration phenomena occurring over time (thus changing the material morphology and composition) and to the fact that cement-based materials (e.g., mortar and concrete) are inhomogeneous by definition. for this reason, significant variability could be observed also among specimens manufactured according to the same mix-design. in any case, the variations of the real part of electrical impedance mirror quite well the applied load and, hence, the strain of the specimen. an example of this behaviour is reported in figure 10 for the bch+rcf specimen (5 loading cycles are considered); in particular, it is possible to observe that zre decreases with increasing applied load, since compression causes a decrease of the specimen length and, hence, of the sensing volume interested by the sensing electrodes. however, a low-to-moderate strength of linear correlation was evidenced for all the tested mortar specimens, with the exception of the bch+rcf mortar, for which the pearson's correlation coefficient was equal to 0.8 (figure 11).
table 3. results obtained for the different mix-designs (reported as mean (standard deviation)).
mix-design | fmax in n | εmax in µm/m | ∆zre in ω | zre_t0 in ω | sensitivity in (µm/m)⁻¹
ref | 11516.20 (6.25) | 157 (73) | 23.16 (18.25) | 5673.10 (3572.57) | 0.003 (0.002)
rcf | 11512.16 (2.05) | 181 (78) | 0.92 (0.33) | 203.14 (24.71) | 0.004 (0.003)
bch | 11511.57 (2.11) | 217 (119) | 22.25 (15.30) | 5725.73 (6071.60) | 0.008 (0.009)
bch+rcf | 11512.76 (1.26) | 155 (49) | 70.26 (26.46) | 419.07 (71.15) | 0.109 (0.026)
gnp | 11511.88 (2.04) | 226 (88) | 9.86 (8.08) | 6605.27 (4999.12) | 0.001 (0.000)
figure 10. results in terms of loading force (f, top), strain (ε, centre), and real part of electrical impedance (zre, bottom): example for the bch+rcf mortar specimen.
figure 11. evaluation of the linear correlation between the real part of electrical impedance (zre) and strain; the red line is the interpolating line (rcf+bch mortar specimen).
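the sensitivity figures in table 3 are consistent with reading sensitivity as the percentage variation of zre per unit strain; the snippet below (our interpretation, offered as a plausible reconstruction rather than the authors' exact formula) reproduces the reported orders of magnitude and also computes the pearson's coefficient shown in figure 11.

import numpy as np

def sensitivity(delta_z, z_t0, eps_max):
    """percentage variation of zre per um/m of strain, in (um/m)^-1."""
    return (delta_z / z_t0) * 100 / eps_max

print(sensitivity(70.26, 419.07, 155))   # bch+rcf: ~0.108, table 3 reports 0.109
print(sensitivity(23.16, 5673.10, 157))  # ref:     ~0.003, as in table 3

def pearson_r(z_re, strain):
    """strength of the linear correlation between zre and strain (figure 11)."""
    return np.corrcoef(z_re, strain)[0, 1]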
3.2. the concrete beam project demonstrators
in this section, preliminary results related to the concrete beams manufactured as recity project demonstrators (figure 12) are reported.
3.2.1. mechanical strength
the compressive strength measured on dedicated specimens is reported in figure 14. as expected, the compressive strength increases over time, reaching an average value of 40 mpa at 28 days, with a standard deviation of 1 mpa. concerning the flexural strength, an average value of 14 mpa was obtained, with a standard deviation of 1 mpa.
3.2.2. monitoring of electrical impedance during curing
the electrical impedance data (in particular, in terms of the real part, zre) are reported in figure 15. as expected, zre increases over time; only one specimen (i.e., c) is quite different from the others, which may be due to some particularly big aggregates present within the sensing volume.
3.2.3. fem numerical model
the results obtained from the scaled and life-size non-reinforced beams show that the natural frequencies vary together with the scaling factor; in particular, the natural frequencies of the scaled beam are 5 times those of the life-size element. for example, considering the first mode shape (figure 13), the natural frequency is estimated at 241 hz for the life-size structure (fn,real) and at 1205 hz for the scaled beam (fn,scaled); this means that fn,scaled is approximately 5 times fn,real. for this reason, it is necessary to evaluate the effects of a seismic event at frequencies higher than those typical of an earthquake, which are in the range of 1-10 hz [40], [41]. the reinforcing rebar seems not to influence the natural frequencies of the concrete beam, at least at frequencies up to 4000 hz, which will be the spectral range considered in the experimental modal analysis; this means that the geometry of the rebar is not particularly influential in terms of the element rigidity. however, the presence of the external tubes modifies the dynamic behaviour of the structural element; in particular, the nodal lines of the first mode shape (figure 16) move onto the tubes themselves and the related natural frequency increases up to 1529 hz (approximately +27 %). this means that the structural element is slightly stiffer because of the embedded components, which also influence the deformation, as well as making the specimen less homogeneous. considering the second mode shape, it is possible to observe that the presence of the plastic tubes introduces two additional nodal lines located on the tubes themselves, even if the associated natural frequency is almost the same (i.e., 2841 hz for the non-sensorized specimen, figure 17, against 2846 hz for the sensorized one, figure 18).
figure 16. first mode shape for the scaled sensorized beam (natural frequency: 1529 hz).
figure 17. second mode shape for the scaled non-sensorized beam (natural frequency: 2841 hz).
figure 18. second mode shape for the scaled sensorized beam (natural frequency: 2846 hz).
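the factor-5 frequency scaling can be checked analytically: for a uniform geometric scale factor s, the bending natural frequencies of a prismatic beam scale as s, since fn is proportional to sqrt(EI/(ρA))/L². the sketch below (ours, not the paper's comsol model) uses the euler-bernoulli free-free beam formula with assumed concrete properties; it overestimates the absolute frequency of the stubby life-size beam (≈290 hz versus the 241 hz from fem, which accounts for shear and 3-d effects), but the ratio between the scaled and life-size beams is exactly 5.

import math

E, RHO = 30e9, 2400.0  # pa, kg/m3 (assumed concrete properties)
LAMBDA1 = 4.730        # first free-free bending eigenvalue

def f1_free_free(b, h, length):
    """first bending natural frequency (hz) of a free-free prismatic beam."""
    inertia, area = b * h**3 / 12, b * h
    return (LAMBDA1**2 / (2 * math.pi)) * math.sqrt(
        E * inertia / (RHO * area * length**4))

f_life = f1_free_free(0.5, 0.5, 2.5)    # life-size 50 x 50 x 250 cm beam
f_scaled = f1_free_free(0.1, 0.1, 0.5)  # 1:5 scaled 10 x 10 x 50 cm beam
print(f_life, f_scaled, f_scaled / f_life)  # the ratio is exactly 5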
figure 12. example of the manufactured concrete beam demonstrators; tubes for installing the steel rebars and sensors are visible at the specimen extremities, whereas the stainless-steel washers for the installation of the accelerometers for modal analysis can be observed on the specimen top face.
figure 13. first mode shape for the life-size beam (natural frequency: 241 hz, corresponding to 1205 hz for the 1:5 scaled beam).
figure 14. compressive strength of concrete specimens during the curing phase.
figure 15. real part of electrical impedance measured during curing on the 6 sensorized specimens.
4. discussion and conclusions
this paper introduced the monitoring platform being developed in the framework of the recity project (identification code: ars01_00592); in particular, the resilience of cement-based structures against seismic events is considered in the presented research activities. at first, the authors investigated different mix-designs of mortar in terms of piezoresistive capability; hence, sensorized concrete beams were designed and realized with the best mix-design, to serve as the project demonstrators. preliminary fem numerical models were realized to analyse the modal parameters of the structural elements and the effect of the discontinuities represented by the embedded sensors. the results show that carbon-based conductive additions in the form of filler and fibres (namely biochar, bch, and recycled carbon fibres, rcf) allow the best performance in terms of sensitivity to external loads to be obtained. in particular, the measured electrical impedance shows a trend mirroring that of the applied loads and, consequently, of the strain induced in the specimen. in this way, an electrical quantity (electrical impedance) reflects the behaviour of a mechanical quantity (strain); hence, a sensor with self-sensing capacities is obtained. the metrological characterization of the phenomenon is pivotal and evidences the key role played by the type of conductive materials added to the mix-design. in fact, conductive additions have a twofold role: on the one hand, they decrease the material electrical resistivity, thus improving the circulation of electric current and easing the electrical impedance measurement; on the other hand, they improve the quality of the electric signal, decreasing the noise effect and, thus, enhancing the signal-to-noise ratio (snr). the best performance in terms of piezoresistive capability was obtained with the mix-design containing both bch and rcf, resulting in the highest sensitivity towards strain; in particular, the average sensitivity of the bch+rcf mortar was equal to 0.109 (µm/m)⁻¹, against 0.003 (µm/m)⁻¹ for the ref specimen. for this reason, these types and dosages of conductive additions were chosen for the realization of the scaled concrete beams serving as demonstrators of the recity project. furthermore, the selected conductive additions are green, sustainable by-products, so they can also be fruitfully exploited in view of an environmentally friendly circular economy. it is worth underlining that a homogeneous distribution of the conductive additions is fundamental. indeed, cement-based materials are inhomogeneous by definition; hence, the distribution of components during the casting phase is pivotal. moreover, the electrical impedance measurement is local and is related to a limited sensing volume (depending on the inter-electrode spacing, chosen according to the wenner's method in the 4-electrode ac configuration). thus, it would be fundamental to optimize the manufacturing procedure to enhance the metrological performance of sensors based on self-sensing materials, especially in terms of measurement repeatability. furthermore, it should also be considered that electrical impedance depends not only on the external loads, but also on several different variables, such as environmental parameters (temperature and relative humidity), damages and cracks, penetration of contaminants, carbonation phenomena, etc.
for this reason, electrical impedance should be analysed not in absolute values, but in terms of trend variations, so as to be able to detect unexpected peaks or variations (differing from the normal daily changes [2]) that require ad hoc investigations (e.g. specific inspections). electrical impedance measurements can provide a lot of information on the structure health status and boundary conditions, resulting particularly suitable for data fusion techniques used in view of extracting meaningful indicators in the context of shm. the sensing electrodes used for electrical impedance assessment could somehow substitute the traditional strain gages, which are much more expensive and difficult to install, besides being more delicate and requiring a more sophisticated acquisition circuit (i.e., a wheatstone's bridge). in the future, the realized concrete specimens will be subjected to loading tests with increasing load values (starting from 50 % of the concrete flexural strength up to the failure load), in order to progressively drive cracking phenomena. modal analysis will be performed on the specimen as-is (time 0) and just after the execution of each load test. in this way, it will be possible to evaluate the effects of external loads and cracks on the modal parameters of the element, representing its "footprint". both variations of natural frequencies and changes in the mode shapes or mode curvatures will be analysed, with the objective of both detecting the cracking onset and assessing the severity of the damage. moreover, vision-based techniques will be exploited for the detection and the quantitative assessment of cracking phenomena; an automated measurement system developed within the framework of the endurcrete european project will be exploited to this aim. moreover, after the execution of the loading tests and the related experimental modal analysis, all the concrete specimens will be subjected to accelerated durability tests, in particular to exposure in water solutions. the aim is to evaluate how damages caused by seismic events can impact the material durability. in the recity project, the electrical impedance data will be combined with the signals measured by means of standard transducers; they will also be exploited together with the modal parameters coming from vibrational analyses, thus contributing to characterizing a cement-based structure from a broader perspective. in fact, the multi-domain information, properly analysed through ai-based algorithms, can support decision-making processes and management procedures regarding critical structures. this allows the interventions needed to guarantee community safety and wellbeing to be prioritized, also enhancing the resilience towards natural hazards and emergency situations. moreover, the recity platform will enable data sharing, so as to raise the community awareness on these aspects, making a society not only informed but also formed and active in the management of the (smart) city structures in an environment that inevitably becomes more and more urbanized.
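as a minimal illustration of the trend-based analysis advocated above, the sketch below (ours; the window length and threshold are arbitrary placeholders) flags zre samples that depart from a rolling baseline by more than k robust standard deviations, so that slow daily variations pass while abrupt changes trigger an inspection.

import numpy as np

def flag_anomalies(z_re, window=96, k=4.0):
    """indices where zre departs from its rolling median by more than
    k robust sigmas (window in samples, e.g. one day of measurements)."""
    z_re = np.asarray(z_re, dtype=float)
    flagged = []
    for i in range(window, len(z_re)):
        ref = z_re[i - window:i]
        med = np.median(ref)
        mad = np.median(np.abs(ref - med)) * 1.4826  # robust sigma estimate
        if mad > 0 and abs(z_re[i] - med) > k * mad:
            flagged.append(i)
    return flagged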
acronyms
ac - alternating current
ai - artificial intelligence
bch - biochar
bim - building information model
fem - finite element method
gnp - graphene nanoplatelets
ml - machine learning
ndt - non-destructive technique
rcf - recycled carbon fibres
rh - relative humidity
shm - structural health monitoring
snr - signal-to-noise ratio
ssd - saturated surface dried
uav - unmanned aerial vehicle
w/c - water-to-cement ratio
acknowledgement
this research activity was carried out within the framework of the recity project "resilient city – everyday revolution" (recity) – pon r&i 2014-2020 e fsc "avviso per la presentazione di ricerca industriale e sviluppo sperimentale nelle 12 aree di specializzazione individuate dal pnr 2015-2020", identification code ars01_00592.
references
[1] dipartimento della protezione civile, indicazioni operative per il raccordo e il coordinamento delle attività di sopralluogo tecnico speditivo, italy, 2018, p. 29. online [accessed 22 june 2023] [in italian] https://www.lazio.beniculturali.it/wp-content/uploads/2021/03/dpc-indicazioni-operative.pdf
[2] n. giulietti, p. chiariotti, g. cosoli, g. giacometti, l. violini, a. mobili, g. pandarese, f. tittarelli, g. m. revel, continuous monitoring of the health status of cement-based structures: electrical impedance measurements and remote monitoring solutions, acta imeko, vol. 10, no. 4, dec. 2021, pp. 132-139. doi: 10.21014/acta_imeko.v10i4.1140
[3] w. r. de sitter, costs of service life optimization 'the law of fives', comité euro-international du béton, 1984, pp. 131-134.
[4] m. p. limongelli, monitoraggio strutturale per la protezione sismica del patrimonio edilizio, tra prev. e cura la prot. del patrim. edil. dal rischio sism., 2013, pp. 36-48. [in italian]
[5] s. das, p. saha, a review of some advanced sensors used for health diagnosis of civil engineering structures, measurement, vol. 129, dec. 2018, pp. 68-90. doi: 10.1016/j.measurement.2018.07.008
[6] f. lamonaca, c. scuro, p. f. sciammarella, r. s. olivito, d. grimaldi, d. l. carnì, a layered iot-based architecture for a distributed structural health monitoring system, acta imeko, vol. 8, no. 2, jun. 2019, pp. 45-52. doi: 10.21014/acta_imeko.v8i2.640
[7] c. scuro, d. l. carnì, f. lamonaca, r. s. olivito, g. milani, preliminary study of an ancient earthquake-proof construction technique monitoring by an innovative structural health monitoring system, acta imeko, vol. 10, no. 1, mar. 2021, pp. 47-56. doi: 10.21014/acta_imeko.v10i1.819
[8] w. dong, y. huang, b. lehane, g. ma, an artificial intelligence-based conductivity prediction and feature analysis of carbon fiber reinforced cementitious composite for non-destructive structural health monitoring, eng. struct., vol. 266, sep. 2022, p. 114578. doi: 10.1016/j.engstruct.2022.114578
[9] s. meisenbacher, m. turowski, k. phipps, m. rätz, d. müller, v. hagenmeyer, r. mikut, review of automated time series forecasting pipelines, wiley interdiscip. rev. data min. knowl. discov., feb. 2022. doi: 10.1002/widm.1475
[10] sh. k. baduge, s. thilakarathna, j. sh. perera, m. arashpour, p. sharafi, b. teodosio, a. shringi, p. mendis, artificial intelligence and smart vision for building and construction 4.0: machine and deep learning methods and applications, autom. constr., vol. 141, sep. 2022, p. 104440. doi: 10.1016/j.autcon.2022.104440
[11] y. celik, i. petri, m. barati, blockchain supported bim data provenance for construction projects, comput. ind., vol. 144, 2023, p. 103768. doi: 10.1016/j.compind.2022.103768
[12] v. r. gharehbaghi, e. noroozinejad farsangi, m. noori, t. y. yang, sh. li, a. nguyen, ch. málaga-chuquitaype, p. gardoni, s. mirjalili, a critical review on structural health monitoring: definitions, methods, and perspectives, arch. comput. methods eng., vol. 29, no. 4, 2022, pp. 2209-2235. doi: 10.1007/s11831-021-09665-9
[13] a. sofi, j. jane regita, b. rane, h. h. lau, structural health monitoring using wireless smart sensor network - an overview, mech. syst. signal process., vol. 163, jan. 2022, p. 108113. doi: 10.1016/j.ymssp.2021.108113
[14] p. kot, m. muradov, m. gkantou, g. s. kamaris, k. hashim, d. yeboah, recent advancements in non-destructive testing techniques for structural health monitoring, appl. sci., vol. 11, no. 6, mar. 2021, p. 2750. doi: 10.3390/app11062750
[15] k. máthé, l. buşoniu, vision and control for uavs: a survey of general methods and of inexpensive platforms for infrastructure inspection, sensors, vol. 15, no. 7, jun. 2015, pp. 14887-14916. doi: 10.3390/s150714887
[16] r. rani hemamalini, r. vinodhini, b. shanthini, p. partheeban, m. charumathy, k. cornelius, air quality monitoring and forecasting using smart drones and recurrent neural network for sustainable development in chennai city, sustain. cities soc., vol. 85, oct. 2022, p. 104077. doi: 10.1016/j.scs.2022.104077
[17] d. choi, w. bell, d. kim, j. kim, uav-driven structural crack detection and location determination using convolutional neural networks, sensors, vol. 21, no. 8, apr. 2021, p. 2650. doi: 10.3390/s21082650
[18] s. sankarasrinivasan, e. balasubramanian, k. karthik, u. chandrasekar, r. gupta, health monitoring of civil structures with integrated uav and image processing system, procedia comput. sci., vol. 54, 2015, pp. 508-515. doi: 10.1016/j.procs.2015.06.058
[19] b. han, s. ding, x. yu, intrinsic self-sensing concrete and structures: a review, measurement, vol. 59, jan. 2015, pp. 110-128. doi: 10.1016/j.measurement.2014.09.048
[20] c. rainieri, g. fabbrocino, e. cosenza, integrated seismic early warning and structural health monitoring of critical civil infrastructures in seismically prone areas, vol. 10, no. 3, jun. 2010, pp. 291-308. doi: 10.1177/1475921710373296
[21] d. d. l. chung, electrically conductive cement-based materials, vol. 16, no. 4, may 2015, pp. 167-176. doi: 10.1680/adcr.2004.16.4.167
[22] c. g. berrocal, k. hornbostel, m. r. geiker, i. löfgren, k. lundgren, d. g. bekas, electrical resistivity measurements in steel fibre reinforced cementitious materials, cem. concr. compos., vol. 89, may 2018, pp. 216-229. doi: 10.1016/j.cemconcomp.2018.03.015
[23] b. g. han, b. z. han, x. yu, experimental study on the contribution of the quantum tunneling effect to the improvement of the conductivity and piezoresistivity of a nickel powder-filled cement-based composite, smart mater. struct., vol. 18, no. 6, 2009, p. 065007. doi: 10.1088/0964-1726/18/6/065007
[24] m. s. konsta-gdoutos, c. a. aza, self sensing carbon nanotube (cnt) and nanofiber (cnf) cementitious composites for real time damage assessment in smart structures, cem. concr. compos., vol. 53, 2014, pp. 162-169. doi: 10.1016/j.cemconcomp.2014.07.003
[25] y. suo, h. xia, r. guo, y. yang, study on self-sensing capabilities of smart cements filled with graphene oxide under dynamic cyclic loading, j. build. eng., vol. 58, oct. 2022, p. 104775. doi: 10.1016/j.jobe.2022.104775
[26] m. chen, p. gao, f. geng, l. zhang, h. liu, mechanical and smart properties of carbon fiber and graphite conductive concrete for internal damage monitoring of structure, constr. build. mater., vol. 142, 2017, pp. 320-327. doi: 10.1016/j.conbuildmat.2017.03.048
[27] a. mobili, a. belli, ch. giosuè, m. pierpaoli, l. bastianelli, a. mazzoli, m. l. ruello, t. bellezze, f. tittarelli, mechanical, durability, depolluting and electrical properties of multifunctional mortars prepared with commercial or waste carbon-based fillers, constr. build. mater., vol. 283, 2021, p. 122768. doi: 10.1016/j.conbuildmat.2021.122768
[28] a. o. monteiro, p. b. cachim, p. m. f. j. costa, self-sensing piezoresistive cement composite loaded with carbon black particles, cem. concr. compos., vol. 81, 2017, pp. 59-65. doi: 10.1016/j.cemconcomp.2017.04.009
[29] a. mobili, g. cosoli, n. giulietti, p. chiariotti, g. pandarese, t. bellezze, g. m. revel, f. tittarelli, effect of gasification char and recycled carbon fibres on the electrical impedance of concrete exposed to accelerated degradation, sustainability, vol. 14, no. 3, 2022, p. 1775. doi: 10.3390/su14031775
[30] g. lacanna, m. ripepe, m. coli, r. genco, e. marchetti, full structural dynamic response from ambient vibration of giotto's bell tower in firenze (italy), using modal analysis and seismic interferometry, ndt e int., vol. 102, mar. 2019, pp. 9-15. doi: 10.1016/j.ndteint.2018.11.002
[31] s. elias, r. rupakhety, d. de domenico, s. olafsson, seismic response control of bridges with nonlinear tuned vibration absorbers, structures, vol. 34, dec. 2021, pp. 262-274. doi: 10.1016/j.istruc.2021.07.066
[32] s. oliveira, a. alegre, seismic and structural health monitoring of dams in portugal, 2019, pp. 87-113. doi: 10.1007/978-3-030-13976-6_4
[33] f. wenner, a method for measuring earth resistivity, j. washingt. acad. sci., vol. 5, no. 16, 1915, pp. 561-563.
[34] o. galao, f. j. baeza, e. zornoza, p. garcés, strain and damage sensing properties on multifunctional cement composites with cnf admixture, cem. concr. compos., vol. 46, 2014, pp. 90-98. doi: 10.1016/j.cemconcomp.2013.11.009
[35] g. cosoli, a. mobili, n. giulietti, p. chiariotti, g. pandarese, f. tittarelli, t. bellezze, n. mikanovic, g. m. revel, performance of concretes manufactured with newly developed low-clinker cements exposed to water and chlorides: characterization by means of electrical impedance measurements, constr. build. mater., vol. 271, 2021, p. 121546. doi: 10.1016/j.conbuildmat.2020.121546
[36] f. tittarelli, a. mobili, p. chiariotti, g. cosoli, n. giulietti, a. belli, g. pandarese, t. bellezze, g. m. revel, cement-based composites in structural health monitoring, aci symp. publ., vol. 355, 2022, pp. 133-150.
[37] m. collepardi, the new concrete, 2010. online [accessed 31 january 2022] https://www.libreriauniversitaria.it/new-concrete-collepardi-mario-tintoretto/libro/9788890377723
[38] a. mobili, g. cosoli, t. bellezze, g. m. revel, f. tittarelli, use of gasification char and recycled carbon fibres for sustainable and durable low-resistivity cement-based composites, j. build. eng., vol. 50, 2022, p. 104237. doi: 10.1016/j.jobe.2022.104237
[39] m. mastali, a. dalvand, a. sattarifard, the impact resistance and mechanical properties of the reinforced self-compacting concrete incorporating recycled cfrp fiber with different lengths and dosages, compos. part b, vol. 112, 2017, pp. 74-92. doi: 10.1016/j.compositesb.2016.12.029
[40] s. takemura, m. kobayashi, k. yoshimoto, prediction of maximum p- and s-wave amplitude distributions incorporating frequency- and distance-dependent characteristics of the observed apparent radiation patterns, earth, planets sp., vol. 68, no. 1, dec. 2016, pp. 1-9. doi: 10.1186/s40623-016-0544-8
[41] p. tosi, p. sbarra, v. de rubeis, earthquake sound perception, geophys. res. lett., vol. 39, no. 24, p. 24301. doi: 10.1029/2012gl054382
acta imeko, issn: 2221-870x, december 2013, volume 2, number 2, 86-90
establishing a metrological infrastructure and traceability of electrical power and energy in the r. macedonia
ljupco arsov, marija cundeva-blajer
ss. cyril and methodius university, faculty of electrical engineering and information technologies-skopje, ruger boskovic b.b., pob 574, 1000 skopje, r. macedonia
section: technical note
keywords: metrological infrastructure; power and energy measurements; calibrations; standards
citation: ljupco arsov, marija cundeva-blajer, establishing metrology infrastructure and traceability of electrical power and energy in r. macedonia, acta imeko, vol. 2, no. 2, article 15, december 2013, identifier: imeko-acta-02 (2013)-02-15
editor: paolo carbone, university of perugia
received april 12, 2013; in final form december 2, 2013; published december 2013
copyright: © 2013 imeko.
this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was supported by the faculty of electrical engineering and information technologies-skopje.
corresponding author: marija cundeva-blajer, e-mail: mcundeva@feit.ukim.edu.mk
abstract
in the paper, the current state and the establishment of a metrological infrastructure and traceability of the measurements of electrical power and energy are elaborated, i.e. the creation of conditions for unity of power and energy measurement results, international comparability of the results, and measurements which ensure fair trade and consumers' protection. beside the legal aspects, other features are also discussed, such as the needs for calibration and verification in the field of electrical power and energy, the participants in the chain of measurements and trade with electrical energy, and the organization, infrastructure, methods and systems of calibration and verification. an organization and certain documents which will contribute to the establishment of a system in accordance with the international standards and practice, as well as to traceability and fair trade, are proposed.
1. introduction
after the r. macedonia gained independence, the process of creating a metrological system and an infrastructure necessary for the establishment of measurement standards and metrological traceability began. the development started with the creation of its own legislation, institutions, equipment and infrastructure for the application of legal and industrial metrology. for the measurement of electrical energy and power, in the frame of legal metrology, nothing was inherited from the former yugoslav system. however, the significance of these measurements imposes quick measures to be undertaken, because of the large needs and quantities, the large finances connected to the trade of electrical energy, and the need for consumers' protection and for ensuring conditions of fair trade. the adequacy of these solutions, the need for their upgrading and improvement, as well as the model which would create conditions for fair trade and the protection of electrical energy consumers, are further discussed.
2. current state of the art
the field of measurements of electrical energy and power is legally regulated by the law on metrology from 2002 [1]. beside this law, a set of rulebooks [2]-[4] has been adopted. currently, there is a process of transposition of the eu documents in the field of metrology which are connected to certain types of measuring instruments (sector directives) [5]. through the law on metrology [1], as one of the basic objectives in the legal regulation of the measurements of electrical energy, the condition of ensuring fair trade is posed. fair trade is to be ensured by exact measurements of the electrical energy, with legally approved instruments in compliance with the rulebook on measuring instruments [3], calibrated and verified, and with measurements traceable to the national standard. the traceability requirement, defined in article 3 of the law on metrology [1], although not explicitly stated, should ensure measurement traceability to the national standard of the r. macedonia, to national standards of other states and to international standards.
by taking into account that the trade of electrical energy is not only internal, but to a large part international, it is necessary to ensure traceability and comparability of the results not only on a national, but also on an international level. the same also addresses the usage of electricity meters, which should comply with the specifications of the domestic regulation [3] as well as with the european regulation [5]. from the aspect of legal regulation, this requirement is fulfilled through transposition of the eu directives and harmonization of the national laws and regulations with the eu directives in the field of metrology. in respect to the compliance of the electricity meters with the requirements of the regulation, which is declared by affixing the "ce" mark, and which presumes the existence of a notified body which will participate in the procedure of approval and marking, it is unclear how the domestic producers could address this question, since the r. macedonia is not a member of the eu. according to the law on metrology [1], in order to ensure the conditions for fair trade, the electricity meters should be verified, which is done by legal and physical entities selling electrical energy, and formally by the bureau of metrology of the r. macedonia. if the requirements of the international standard for bodies performing inspection [6] and of the international standard for testing and calibration laboratories [7] are taken into account, this solution, inherited from the former system, is in contradiction with the requirements of these standards in respect to the independence, impartiality, integrity and confidentiality of the bodies performing verification (control), e.g. calibration. this has become especially obvious after the process of privatization of the distribution of electrical energy in 2006. control performed by an interested body is in conflict with the requirement for independent, impartial and confidential control of the electrical energy measurements. the market of electrical energy in the r. macedonia is not well developed yet. the main participants in the trade of electrical energy are evn-macedonia, mepso, elem, licensed trade houses, as well as the large industrial consumers (mak-steel skopje, feni industry kavadarci, mital steel-skopje etc.), light industry and the households. the bureau of metrology of the r. macedonia is in charge of ensuring the unity of the measurements of electrical energy, the national standard of electrical energy, type approvals of the electricity meters produced according to the legal requirements, periodical verification/calibration, metrological surveillance of the electricity meters, and measurement traceability to the national standard. the bureau of metrology of rm, with the help of the eu, has formed and partially equipped its laboratories in the last years. however, in the laboratory for electrical energy there is no electrical power standard, so the bureau is not in a position to ensure traceability of the electrical energy measurements, i.e. it is not in a position to calibrate standards and instruments for electrical power. the legal requirement for verification of the electricity meters is realized through verification (calibration) of electricity meters in the evn-macedonia laboratory for verification of electricity meters, which has at its disposal 6 verification systems emh/mte equipped with a power standard of accuracy class 0.05.
after the control/verification by evn-macedonia, the bureau of metrology, on the basis of the control results, formally performs the verification and sealing. the bureau of metrology performs the metrological surveillance and verification through its department for verification of electricity meters. however, the methods of surveillance of the verification (metrological control), in terms of the equipment for control, the sampling frequency, the number of samples and the application of proper statistical methods according to the standards (iec 62058-11 [8], iec 62058-21 [9], iec 62058-31 [10]), do not give enough confidence about possible deviations from the requirements of the rulebook on measuring instruments [3]. the fee for verification of electricity meters which is paid to the bureau of metrology of rm (7 euro for three-phase electricity meters and 4 euro for single-phase meters) could be used for a significant improvement of the equipment and of the other preconditions and resources for confident metrological control, as well as for ensuring traceability to the national and international standards [11], [12]. the electrical energy consumption in the r. macedonia, like in other countries, is connected to the standard of living and is in constant increase. according to the state statistical office, in the r. macedonia 82.6 % (2009) and 83.6 % (2010) of the gross domestic consumption of electrical energy, out of 8 265 837 mwh (2009) and 8 677 969 mwh (2010), respectively, was of domestic production, and around 17 % was imported [13], [14]. the biggest consumers of electricity in 2010 were the households with a share of 37.3 %, the industrial sectors (energy sector plus industry) with 25.3 %, and the other sectors with 17.7 % of the gross national electricity consumption. own consumption (in production, transmission and distribution) of electricity in 2010 was 5 %, while distribution losses were 14.7 % of the gross national electricity consumption [14]. all this energy is measured at the level of system interconnections, at the level of large consumers, at the level of industry, and at the level of small consumers and households, at different voltage levels, at different locations, and with different types of electricity meters for direct and indirect electrical energy measurements. currently, in the r. macedonia approximately 800 000 electricity meters in the households and 7500 meters in the industry are in usage. the electricity meters used in the households of rm were to a large extent three-phase meters of accuracy class 2, produced by iskra-kranj, slovenia, and to a smaller extent of domestic production by video inzenering-ohrid (under licence of siemens) and energetika-vds strumica (own development). the electricity meters used in the industry are mainly for indirect measurements, of accuracy classes 1 and 0.5, while the electricity meters at the system interconnections are of accuracy class 0.1. roughly, it can be estimated that these meters are used with approximately 30 000 instrument transformers of accuracy class 0.5. according to the law on metrology, first, periodical and, if necessary, extra verification of the electricity meters is foreseen. the legal period for periodical verification of the electricity meters is 10 years, which imposes that approximately 80 000 meters should be verified annually.
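a periodical verification of this kind essentially checks the meter error against the maximum permissible error of its accuracy class; the sketch below is a schematic illustration only (the limits used are the nominal class indices at reference conditions, whereas the permissible errors actually prescribed by the standards depend on the load point and the power factor).

CLASS_LIMITS = {"2": 2.0, "1": 1.0, "0.5s": 0.5, "0.2s": 0.2}  # percent

def meter_error_percent(e_meter_kwh, e_ref_kwh):
    """relative error of the meter under test against the reference standard."""
    return (e_meter_kwh - e_ref_kwh) / e_ref_kwh * 100

def passes_verification(e_meter_kwh, e_ref_kwh, accuracy_class="2"):
    err = meter_error_percent(e_meter_kwh, e_ref_kwh)
    return abs(err) <= CLASS_LIMITS[accuracy_class]

print(passes_verification(10.15, 10.00, "2"))  # 1.5 % error -> True
print(passes_verification(10.25, 10.00, "2"))  # 2.5 % error -> False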
according to the data of the bureau of metrology [15], the number of electricity meter verifications performed by the bom in the last years is given in table 1.
table 1. annual verifications of electricity meters performed by bom.
year: 2007 | 2008 | 2009 | 2010 | 2011
verified meters: 101 400 | 99 800 | 231 900 | 180 000 | 120 000
these figures comprise the number of first verifications performed by the meter producers (foreign producers), as well as the number of electricity meters periodically verified by evn-macedonia. the legal period of verification of instrument transformers is 5 years, so the annual number of verified instrument transformers should be 6000. all the meters used for billing electrical energy must have a type approval and a verification. the type approval means compliance of the meter type with the technical standards and the legal regulation. in the procedure of type approval, about 30 different tests are performed, including testing of the electrical and metrological characteristics, isolation tests, mechanical tests, as well as some newer tests: software validation, life cycle and confidence. practically, it is impossible to test each meter; therefore, samples of the newly developed meters are taken, and if all the tests are satisfying, the type approval is issued. the meters identical to the approved ones are considered to comply with the standards. according to the law, each electricity meter used for billing of electrical energy must be verified. for verification of the meters and instrument transformers a limited set of tests is required, which is mainly testing of the meter error at different working conditions.
3. establishing of legal requirements, traceability and confidence of electrical power measurements
the current legislation which regulates the measurements of electrical energy in the r. macedonia is discussed and referred to in section 2. if this regulation is compared with the regulation for measurement of electrical energy used in the eu countries, it can be noticed that a transposition of the mid directive [5] has been done, but for full coverage of the field of measurements of electrical energy and power it is necessary to apply the european standards en 50470-1 [16] and en 50470-3 [17] for classes a, b and c (classes a, b and c correspond to accuracy classes 2, 1 and 0.5, respectively). the standards iec 62052-11 [18], iec 62053-21 [19] and iec 62053-22 [20], for classes 2, 1, 0.5, 0.5s and 0.2s, should be applied too. one of the possible problems in the legal harmonization would be ensuring the independence, impartiality, integrity and confidence of the bodies performing type testing, calibration and verification. therefore, changes in the law on metrology [1] and in the rulebook [4] are necessary. the goal of these changes is to enable the activity of independent accredited and authorized bodies for control of electricity meters, independent accredited calibration laboratories, and independent services for electricity meters. one of the possible options for the organization of the legal metrology of electricity meters is given in figure 1. the verification of the electricity meters and the instrument transformers in an authorized inspection body would be realized according to the sequence in figure 2. the traceability of the electrical energy measurements to national and international standards could be realized through the three-level hierarchy chain shown in figure 3. for the realization of the traceability chain it is necessary to create a national laboratory which will be equipped with a national primary standard for electrical energy.
this laboratory should fulfil the requirements of the standard iso 17025 [7], as well as participate in international comparisons with other national laboratories, i.e. calibrate its standard at national laboratories of other states holding standards of a higher accuracy class. this national laboratory and the national standard could be within the frame of the bureau of metrology of the r. macedonia, but other solutions are also possible. the laboratory can be created where staff, equipment and space for such a laboratory already exist. according to the practice in most countries, the calibration laboratories for electricity meters would serve the industry and other owners of instruments for electrical energy and power for calibration and type testing, and should be independent laboratories which fulfil the requirements of the standard iso 17025 [7].
figure 1. scheme-proposal on the legal metrology of measurements of electrical energy (ministry of economy of rm and bom; application, approval, authorization and notification flows among producers and importers, an independent control body for the verification of electricity meters and instrument transformers, services, power companies and consumers).
one of the important conditions for ensuring fair trade of electrical energy is the creation of an independent and confident control/inspection body for verification of electricity meters. according to the international standard for control bodies iso 17020 [6], such bodies of type a should fulfil the following requirements:
1. the inspection body and its staff shall be independent of the parties involved, i.e. shall not be the designer, manufacturer, supplier, installer, purchaser, owner, user or maintainer of the items which are inspected, nor the authorized representative of any of these parties.
2. the inspection body and its staff shall not be engaged in any activities that may conflict with their independence of judgement and integrity in relation to their inspection activities. in particular, they shall not become directly involved in the design, manufacture, supply, installation, use or maintenance of the items inspected, or of similar competitive items.
3. all interested parties shall have access to the services of the inspection body. there shall not be undue financial or other conditions. the procedures under which the body operates shall be administered in a non-discriminatory manner.
beside the organizational, management and documentation requirements, the inspection body must be equipped with competent and responsible staff, who will respect the inspection procedures, criteria and ethics, as well as with proper equipment, which must be calibrated with traceability to the national electrical energy standard. the inspection body should have at its disposal proper procedures and protocols of testing, procedures for processing of the results, as well as statistical indicators for the pool of verified meters. for the activities of this inspection body, beside the standard iso 17020 and the national legislation, there are further international standards and international guides.
4. conclusions
the current state and the importance of the measurements of electrical energy for billing and other purposes require quick measures for harmonization and upgrading of the macedonian system of legal and industrial metrology for electrical energy. it is necessary to establish a national standard of electrical energy and a national laboratory for electrical energy, as well as traceability of the electrical energy measurements to them. it is also necessary to create independent competent calibration and testing laboratories, i.e. independent, impartial and confident inspection bodies for control/verification of electricity meters, in compliance with the international standards iso 17025 and iso 17020, the standards for electricity meters, and the practice in the other countries in europe and the world. the proposed schemes for practicing the legal metrology and establishing the traceability chain in the measurements of electrical energy are possible options which should be further elaborated by taking into account all the aspects: the legal aspects, the current state, staff and technical requirements, as well as the economic aspects.
figure 2. verification process (review and preparation, testing, assessment, then sealing or a document for refusal, and return/delivery).
figure 3. traceability chain of the electrical energy measurements (international comparisons and calibrations; national laboratory of electrical energy with the primary national standard, class 0.01; accredited laboratory for electrical energy with standards of class 0.05; inspection body for verification of electricity meters with standards of class 0.05; consumers' electricity meters of classes 2, 1, 0.5s and 0.2s).
references
[1] law on metrology, official gazette of r. macedonia, no. 55/02, 84/07 and 120/09.
[2] rulebook of the definitions, nomenclature and symbols, the scope, application and obligation for usage and writing of the legal measurement units, official gazette of r. macedonia, no. 104/2007.
[3] rulebook on measuring instruments, official gazette of r. macedonia, no. 17/10.
[4] rulebook on determination of the categories and types of measuring instruments for which the verification is obligatory, procedures of verification, deadlines of periodical verification and the categories and types of measuring instruments on which an authorization for verification can be obtained, official gazette of r. macedonia, no. 102/2007.
[5] directive 2004/22/ec on measuring instruments (mid), official journal of the european union, 2004.
[6] en iso 17020, general criteria for the operation of various types of bodies performing inspection, cenelec, brussels, 2004.
[7] en iso/iec 17025, general requirements for the competence of testing and calibration laboratories, cenelec, brussels, 2005.
[8] iec 62058-11, electricity metering equipment (ac) - acceptance inspection, part 11: general acceptance inspection methods, international electrotechnical commission, geneva, 2008.
[9] iec 62058-21, electricity metering equipment (ac) - acceptance inspection, part 21: particular requirements for electromechanical meters for active energy (classes 0,5, 1 and 2), international electrotechnical commission, geneva, 2008.
[10] iec 62058-31, electricity metering equipment (ac) - acceptance inspection, part 31: particular requirements for static meters for active energy (classes 0,2 s, 0,5 s, 1 and 2), international electrotechnical commission, geneva, 2008.
[11] decision on the amount and form of payment of the fee for services of the bureau of metrology and the authorized legal entity, official gazette of r. macedonia, no. 51/2004, no. 64/2008, no. 121/2010.
[12] press release no. 0302-864/1 of the bureau of metrology of r. macedonia from 1.03.2010.
[13] press release no. 6.1.10.83, state statistical office of r. macedonia from 02.12.2010.
[14] press release no. 6.1.11.91, state statistical office of r. macedonia from 30.11.2011.
[15] strategic plan for the development of the bureau of metrology and the metrological infrastructure of r. macedonia 2010-2012, bureau of metrology of r. macedonia, 2010.
[16] cen en 50470-1, electricity metering equipment (ac) - general requirements, tests and test conditions. metering equipment (class indices a, b and c), cenelec, brussels, 2006.
[17] cen en 50470-3, electricity metering equipment (ac) - part 3: particular requirements - static meters for active energy (class indices a, b and c), cenelec, brussels, 2006.
[18] iec 62052-11, electricity metering equipment (ac) - general requirements, tests and test conditions - part 11: metering equipment, international electrotechnical commission, geneva, 2003.
[19] iec 62053-21, electricity metering equipment (ac) - particular requirements - part 21: static meters for active energy (classes 1 and 2), international electrotechnical commission, geneva, 2003.
[20] iec 62053-22, electricity metering equipment (ac) - particular requirements - part 22: static meters for active energy (classes 0.2 s and 0.5 s), international electrotechnical commission, geneva, 2003.
its fundamental objectives are the promotion of international interchange of scientific and technical information in the field of measurement, and the enhancement of international co-operation among scientists and engineers from research and industry. addresses principal contact: paul p. l. regtien measurement science consultancy (msc) julia culpstraat 66 7558 jb hengelo (ov) the netherlands email: paul@regtien.net acta imeko attn. dr. dirk röske physikalisch-technische bundesanstalt (ptb) bundesallee 100, 38116 braunschweig germany support contact dirk röske email: dirk.roeske@ptb.de multipath routing protocol based on backward routing with optimal fuzzy logic in medical telemetry systems for physiological data measurements acta imeko issn: 2221-870x march 2022, volume 11, number 1, 1 -8 acta imeko | www.imeko.org march 2022| volume 11 | number 1 | 1 multipath routing protocol based on backward routing with optimal fuzzy logic in medical telemetry systems for physiological data measurements suryadevara kranthi1, pallavi deshpande2, shadab pasha khan3, amedapu srinivas4 1 department of it, v r siddhartha engineering college, vijayawada-520007, andhra pradesh, india 2 department of electronics and telecommunication, bharati vidyapeeth (deemed to be university) college of engineering, pune-411043, maharastra, india 3 department of it, oriental institute of science & technology, bhopal-462021, madya pradesh, india 4 department of cse, sreenidhi institute of science and technology, hyderabad-501301, telangana, india section: research paper keywords: authentication; quality of service; mobile ad hoc network; multipath routing; optimal fuzzy logic citation: suryadevara kranthi, pallavi deshpande, shadab pasha khan, amedapu srinivas, multipath routing protocol based on backward routing with optimal fuzzy logic in medical telemetry systems for physiological data measurements, acta imeko, vol. 11, no. 1, article 23, march 2022, identifier: imekoacta-11 (2022)-01-23 section editor: md zia ur rahman, koneru lakshmaiah education foundation, guntur, india received november 20, 2021; in final form february 22, 2022; published march 2022 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: suryadevara kranthi, e-mail: kranthisri41@gmail.com 1. introduction a mobile ad hoc network (manet) has several sensor nodes [1] that are connected without central infrastructure via wireless media. to execute a fundamental routing activity like transferring data packets from one source to a location, routing techniques such as dynamic basis [2] and ad hoc remote vector [3] were devised. the manet properties should be taken into account in routing methods. the fundamental requirements of a manet are data dependability, multipath selection [4] and security [5] that improve network presentation. to attain this purpose many of the investigations have been developed. one of the important manet functionalities is on-demand routing [6]. in an ad hoc setting, the routing arrangements [7] must be confident, resilient and flexible. dynamic topology and node connection failure restricts the routing function. the mobility of the node makes it harder to route because it causes the nodes' connection failure. 
this frequent link failure leads to expensive routing and topology management, degrades the reliability of data transfer and reduces network efficiency. link failure in a manet therefore becomes a serious issue. furthermore, such a link failure often triggers a cascade of further route failures. as a result, data transmission reliability is diminished; the lower the packet delivery ratio, the greater the end-to-end delay. measuring data packets in a manet makes the control-message overhead expensive and reduces routing performance. this is, in turn, an important consideration for physiological data measurement and transmission via cognitive-radio-based networks. therefore, the selection of an alternative path, or the creation of several paths, is vital to ensure that data transfer remains reliable when a link fails. there is also considerable interest in loop-free paths for determining an ideal path among the several routes between the source and destination of a network. in order to improve the reliability of data transmission, multipath routing is constructed in a manet [8], providing a load balance between the nodes. using several disjoint paths, the data are sent in parallel and the delivery rate is greatly improved. the problems of scalability, confidentiality, integrity and network lifetime are addressed in multipath routing systems [9]. the dependability of data communication in a manet is ensured by multipath [10] routing between source and destination.
existing multipath routing systems in a manet suffer from flooding, empty neighbourhoods, flat management, widespread data, high energy consumption, interference and load-balancing concerns. efficient multipath routing strategies are therefore presented to tackle one or more of these problems. existing multipath routers also perform poorly under dynamic environment changes and frequent path failures [11], [12], and they produce routing overhead in the network. this routing overhead takes up a significant part of the network bandwidth and quickly exhausts the power of the mobile nodes. therefore, a reliable multipath protocol is essential in order to limit a mobile node's participation in the route discovery phase while maintaining dependable data communication at minimal overall cost [13]. multipath routing protocols have been continuously improved for a decade by updating the demand for quality of service (qos) support. in fact, a multipath routing protocol may compare the available node resources along various paths, such as the available bandwidth, battery level, etc., and choose the optimum path for data to satisfy qos. today, manet path selection parameters include not only the available path node resources but also path stability. researchers are looking for different parameters to measure path stability effectively. however, adaptive link-state routing protocols that change quickly to enable qos have not yet been fully researched in the scenario of high-speed movements of resource-limited nodes. finding adequate criteria for path stability is also harder. the major goal of this study is the development of a secure adjacent position trust verification protocol for the qos performance of high-speed manets, evaluated by measuring the packet delivery ratio. a backward routing technique is devised in this routing protocol to determine the trusted nodes. in this work, an optimal fuzzy [14], [15] system is used to improve the qos routing procedure. in the fuzzy logic system, the optimal rule generation is identified by using the hybrid bat algorithm (hbat) technique, and its efficiency is compared with other optimization techniques. the paper is structured as follows: a literature review is presented in section 2. section 3 delivers an explanation of the proposed methods. section 4 provides validation of the proposed methods against current methodologies and measures the packet delivery ratio. finally, section 5 presents the conclusion of this study and its future work.

2. literature review
this section mainly examines several routing methods that deal with changes in network topology and analyses their quality of service. gomes et al. [16] introduce an industrial wireless sensor network (wsn) link quality estimator, in which a dedicated wsn node assesses connection quality depending on the received signal strength. the estimation of link stability by distance alone, however, is not adequate for high-speed scenarios, such as nodes that move with an average speed of 20 m/s, since node speed is reflected in qos performance through path lifetime. long-lifetime paths can give superior qos due to the long-term stability of the broadcast path. to improve qos, a tree-based cross-layer multicast routing (clmr) protocol is proposed in [17]. it takes advantage of data on battery life, physical-layer bandwidth, routing-layer stability and application-layer overhead to update the tree and then calculates a cost function based on the attributes of each layer.
to develop robust routing pathways, clmr selects low-cost nodes. a topology-change-adaptive ad hoc multipath distance vector protocol is developed in [18], which aims to minimise data traffic by utilising qos. the protocol's limits are that it does not function well in dynamic arrangements requiring both path stability and node density. in general, this protocol improves a little, whereas other protocols do far better in many circumstances. the notion of finding an efficient route with the least energy consumption and the shortest distance was proposed for the fitness-function adaptive ad hoc on-demand multipath distance vector protocol (ff-aomdv) [19]. this protocol employed aomdv, and, in the event of any breakdown or break in connection, the broadcast is performed over the alternative route with the shortest path in the routing table. the ff-aomdv model took into account few qos characteristics, which means that its performance is not as high as aomdv, while the network improvement is very limited. the authors of [20] devised a routing technique that improves the network's quality with a genetic algorithm (ga). this work took into account several circumstances, including mobility speed and node failures. compared with other protocols, the network performance was enhanced; however, the energy consumption issue was not taken into account. in [21], an energy-efficient congestion control technique was proposed for the multipath transmission control protocol. the results show that this algorithm works better than the plain multipath transmission control protocol because of its reduced energy use and improved throughput. the suggested approach did not, however, take into account the random loss regularly occurring in wireless connections in lossy networks; packet loss therefore shrinks the congestion window and degrades data performance as if it were caused by congestion. a routing procedure that takes into account energy efficiency and certain qos routing factors was proposed by the authors in [22]. the method relies on the transmission of topological information to the whole network in order to update the nodes' qos state. this influx of information raises the traffic overhead, especially in big networks. a protocol known as the dynamic energy ad hoc on-demand distance vector (aodv) protocol was suggested in [23]. its major goal is to reduce the time required to transmit packets, reduce energy consumption and maximise the life of the network. it determines the shortest-distance path and selects intermediate nodes with high residual power and network authority. in the case of any unexpected link break or low-energy nodes during packet delivery, this procedure gives external energy to nodes. this enhances the reliability of the route and the lifespan. this external energy supply can also be considered to minimise the cost of the network. the authors of [24] proposed a new protocol called congestion-aware clustering and routing to mitigate congestion in wsns. congestion control and model-based bandwidth estimation were presented on the sender side for tcp traffic via contention-free time-division multiple-access-based manets [25]. both protocols offer acceptable routing options based on congestion avoidance but do not take account of the random loss that would unduly narrow the congestion window. masood ahmad et al. [26] introduced a clustering technique based on the honeybee and genetic algorithms.
the overhead of topology maintenance is reduced by this structure. dynamic solutions with improved quality are provided by this combined algorithm; therefore, stable and balanced clusters are formed. however, multipath routing performed by this technique has a high energy consumption. the energy issues and the workload of the cluster head (ch) are addressed by amutha et al. [27] by developing a cluster-manager-based ch selection scheme. the ch executes the packet transmission between sensor nodes in the network. low bandwidth use, reliable throughput and low energy are provided by this technique. however, a major problem in this study is the use of an extra mechanism for the selection process, which automatically leads to low bandwidth.

2.1. problem identification
manet's volatility and limited assets make it very difficult to communicate information via such networks.
– because of the inconsistent wireless channel, integrated device shortages, channel contention, portability, node control and confirmation devices, assuring qos for manet applications is also highly complex.
– the manet mobile nodes do not provide adequate power and risk repeated node failures, resulting in a variety of network topologies, network partitions, packet loss problems and the lowest signal quality.
– the current key management strategy for addressing a misbehaving node does not successfully deprive users of creating fake identities or of stealing the identity of individuals who do not participate in network activities.
– moreover, manets generally impose fresh limitations on problems related to qos compared with wire-based networks, owing to the expected dynamic behaviour of the respective networks and their constrained resources.

3. proposed methodology
in this study, a new optimal multipath routing protocol is provided for manet to improve qos, where network failures are reduced and the information of individuals who do not participate in the network activities is not used. the suggested model has four phases: the trust model for routing, multipath routing, the optimal rule and data transmission. the phases of the proposed model are shown in figure 1.

figure 1. proposed model schematic diagram.

the secure adjacent position trust verification protocol (saptv) is considered here for the analysis of the trust value process. in our trust model, each node maintains a trust value for each of its neighbours. the ad hoc on-demand distance vector backward routing (aodv-br) protocol is employed for the forwarding of confidential packets in the multipath routing process. if source s wishes to connect with destination d, the route delivery is initiated by the source at that time. the optimal fuzzy system is used to improve the qos routing procedure. the rules are made in a fuzzy source- and target-based system. an inspired optimization model is proposed to optimize these rules.

3.1. trust model for multipath routing
the source node begins the network-wide flood with the distribution of the route query packet, the most important waiting packets being the route replies. the simulation process, the saptv protocol, is now described. the data clustering updates the entry value. as demonstrated in (1), the trust measure is then taken into account:

$$T_{v1} \sim T_{v2} = 1 - (1 - T_{v2})^{T_{v1}}, \quad (1)$$

where $T_{v1}$ is the suggested (recommended) trust value and $T_{v2}$ is the direct trust value. the values are between 0 and 1 (a minimal sketch of this rule is given at the end of this subsection). this technique specifies an explicit trust relationship as the trust suggestion among two nodes in the same collection, but trust references are regarded, by design, as a confidence association among nodes in a diverging collection. the trust warning for paths is always active if the measure is less than the trust value.
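to make the combination rule concrete, the following minimal python sketch implements eq. (1); the function name and the example values are illustrative and not part of the protocol specification.

def combine_trust(t_suggested: float, t_direct: float) -> float:
    """eq. (1): fuse a suggested (recommended) trust value with a
    directly observed one; both inputs and the result lie in [0, 1]."""
    return 1.0 - (1.0 - t_direct) ** t_suggested

# example: a strong recommendation (0.9) applied to a moderate direct
# observation (0.5) yields a combined trust of roughly 0.46
print(combine_trust(0.9, 0.5))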
3.2. multipath routing (mr) process
in order to ensure the source-destination route, the routing protocols must show significant energy-efficiency levels. in this respect, a number of energy-efficient routing algorithms already exist. the protocols in this document propose that a considerable effort be made to travel from the origin to the endpoint node on the most energy-efficient path. a new routing method for manet, called aodv-br, is developed that identifies the least residual node energy along the paths and ranks the paths in descending order of nodal residual energy. figure 2 provides a block diagram of the optimal fuzzy logic (fl) method.

figure 2. fuzzy logic process.

after several interactions, the connection quality of a route is likely to be significantly diminished. thus, the hbat algorithm, which works admirably efficiently, selects the best way among the candidate routes in the network.

3.2.1. aodv-br protocol
the response messages and data packets are checked, and local repair is conducted in aodv-br via a backup route. the source and endpoint nodes have the ability to change the data in aodv-br. thus, with the assistance of this strategy, the best overheard backup route is taken up. when a data packet is dropped, it is transmitted via a substitute path in aodv-br.

3.3. routing process on optimal fuzzy logic
in this study, qos is improved by providing the shortest route using the optimal fuzzy logic (ofl) algorithm, where hbat is developed for optimal rule generation in the ofl algorithm. initially, the description of ofl is provided: fl consists of four phases: fuzzification, rule generation, membership and defuzzification. the values of a number of rules are properly translated into linguistic variables, which in fuzzy logic describe the strength of agreement with the arithmetic value of the targets. three separate kinds of trust values, namely friends, acquaintances and strangers, are characterised correspondingly by three fuzzy sets (high, medium and low).

3.3.1. fuzzification
the crisp value of any input is mapped onto a fuzzy set to describe its degree of membership. fuzzification thus denotes the ability of the system to convert a real scalar quantity into a fuzzy value. a fuzzy subset of a collection x is represented by a mapping a: x → l, where l is the interval [0, 1], otherwise known as the membership function.

3.3.2. membership function
this function can take many forms; to keep membership evaluation simple, piecewise-linear membership functions, built from straight lines, are applied here.

3.3.3. rule generation process
the collection of if-then rules builds on the knowledge of a human expert to capture the preferred behaviour of the scheme. the rules can be maintained with the aim of achieving the full functionality of the system. sample fuzzy rules are presented in table 1, and a small sketch encoding this rule base follows the table.

table 1. fuzzy rules.

s.no | mobility | trust value | delay  | route
1    | low      | low         | low    | below optimal
2    | medium   | low         | medium | below optimal
3    | high     | high        | low    | sub optimal
4    | high     | medium      | low    | sub optimal
5    | low      | high        | low    | low optimal
6    | low      | high        | low    | optimal
7    | medium   | medium      | low    | optimal
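the rule base of table 1 can be encoded directly as a lookup, as in the python sketch below; the list structure and the fallback label are our own choices, and rules 5 and 6 share an antecedent exactly as printed in the table.

# fuzzy rule base copied from table 1: (mobility, trust value, delay) -> route.
# rules 5 and 6 share the same antecedent as printed; the first match wins here.
RULES = [
    (("low",    "low",    "low"),    "below optimal"),
    (("medium", "low",    "medium"), "below optimal"),
    (("high",   "high",   "low"),    "sub optimal"),
    (("high",   "medium", "low"),    "sub optimal"),
    (("low",    "high",   "low"),    "low optimal"),
    (("low",    "high",   "low"),    "optimal"),
    (("medium", "medium", "low"),    "optimal"),
]

def route_quality(mobility: str, trust: str, delay: str) -> str:
    for antecedent, route in RULES:
        if antecedent == (mobility, trust, delay):
            return route
    return "below optimal"   # conservative fallback for unlisted combinations

print(route_quality("medium", "medium", "low"))  # optimal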
3.4. data transmission
after receipt of the reply message from the destination, the data packet is sent to the destination node. over several transmissions along the preferred paths, the link quality is likely to degrade. thus, because of the loss of link quality, the source node may not receive any acknowledgements from the target node. the network then opts for an ideal path among the candidate routes. the best way to reach the destination should be energy-efficient and must be the shortest. therefore, in order to identify the shortest route in an optimal way, this study uses the hbat algorithm in ofl, which is explained as follows.

3.4.1. bat algorithm
the bat algorithm (bat), which was proposed by xin-she yang in 2010, is a swarm-intelligence algorithm. in the algorithm, the search process is motivated by bats and their ability to search for and locate prey. bats emit sound pulses, can distinguish between food and predators, and estimate distances using the reflected sound waves. the following equation (2) is used to update the location of the bats (solutions) during the optimization procedure:

$$x_i^t = x_i^{t-1} + v_i^t, \quad (2)$$

where the current solution is signified by $x_i^{t-1}$, the new, updated position of the $i$-th solution at iteration $t$ is represented by $x_i^t$ and the velocity is represented by $v_i^t$. the velocity at time step $t$ is designed as follows in (3):

$$v_i^t = v_i^{t-1} + (x_i^{t-1} - x_*) f_i, \quad (3)$$

where $x_*$ denotes the current best global location and $f_i$ specifies the $i$-th bat frequency. the solution frequency is drawn uniformly from the specified range, from the minimum to the maximum frequency, and is evaluated as follows in (4):

$$f_i = f_{\min} + (f_{\max} - f_{\min})\,\beta, \quad (4)$$

where $f_{\min}$ and $f_{\max}$ represent the minimum and maximum frequency, respectively, and $\beta \in [0, 1]$ is a random number. the local random walk is defined as follows in (5):

$$x_{\text{new}} = x_{\text{old}} + \epsilon A^t, \quad (5)$$

where $\epsilon$ is a scaling factor with random values between 0 and 1, and $A^t$ is the average loudness of all bats. the loudness and pulse rate are updated as follows, after prey is found by a bat, as shown in (6) and (7):

$$A_i^t = \alpha A_i^{t-1}, \qquad r_i^t = r_i^0 \left[1 - \mathrm{e}^{-\gamma t}\right], \quad (6)$$

$$A_i^t \to 0, \qquad r_i^t \to r_i^0, \quad \text{as } t \to \infty, \quad (7)$$

where $A_i^t$ denotes the loudness of the $i$-th bat at iteration $t$ and $r$ is the pulse emission rate. the values of $\alpha$ and $\gamma$ are constant. a small sketch of these update rules follows.
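the update rules of eqs. (2)-(7) can be written compactly as below; the constants mirror table 2, while the objective, bounds and usage are placeholders, so this is a sketch rather than the authors' implementation.

import numpy as np

rng = np.random.default_rng(seed=0)
F_MIN, F_MAX = 0.0, 1.0    # frequency bounds of eq. (4) (table 2)
ALPHA, GAMMA = 0.9, 0.9    # loudness/pulse-rate constants of eq. (6)

def bat_move(x, v, x_star, loudness_mean):
    """one bat move: frequency (4), velocity (3), position (2) and the
    local random walk of eq. (5) around the new position."""
    beta = rng.random()
    f = F_MIN + (F_MAX - F_MIN) * beta            # eq. (4)
    v_new = v + (x - x_star) * f                  # eq. (3)
    x_new = x + v_new                             # eq. (2)
    x_local = x_new + rng.uniform(-1.0, 1.0, x.shape) * loudness_mean  # eq. (5)
    return x_new, v_new, x_local

def decay(loudness, pulse_rate_0, t):
    """loudness and pulse-emission updates of eqs. (6)-(7)."""
    return ALPHA * loudness, pulse_rate_0 * (1.0 - np.exp(-GAMMA * t))

# toy usage on a two-dimensional search space
x, v, x_star = rng.uniform(-5, 5, 2), np.zeros(2), np.zeros(2)
x, v, x_local = bat_move(x, v, x_star, loudness_mean=1.0)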
3.4.2. hybridized bat algorithm
the exploitation process is strong in the original bat algorithm, but it gets caught in local optima when exploring the search space. in this paper, the hybridized bat algorithm is used to improve the exploration phase of the bat algorithm with the bee search of the artificial bee colony algorithm; in this way, it is prevented from getting trapped in local optima. the initial solution population is generated randomly within the lower and upper limits as in (8):

$$x_{i,j} = lb_j + \text{rand} \cdot (ub_j - lb_j), \quad (8)$$

where $lb_j$ and $ub_j$ refer to the lower and upper bounds, rand is a uniformly distributed random number, rand ∈ [0, 1], and $x_{i,j}$ is a solution whose elements are indexed by $j$. after the generation of the starting population, the fitness value is calculated for each individual in the population as shown in (9):

$$F_{x_i} = \begin{cases} \dfrac{1}{f_{x_i}} & \text{if } f_{x_i} \ge 0 \\ 1 + |f_{x_i}| & \text{otherwise,} \end{cases} \quad (9)$$

where the fitness of the $i$-th individual is denoted by $F_{x_i}$ and $f_{x_i}$ is the objective function value of the $i$-th individual. table 2 shows the parameters used in the proposed algorithm. two alternative mechanisms improve the exploration of the search space. the iteration counter (t) determines whether the search procedure of the bat algorithm or the onlooker-bee mechanism will be used. at each even step, the individuals move and update their position in accordance with eq. (2); however, if the value of t is odd, the onlooker technique uses the following equation (10):

$$x_{i,j}^t = x_{i,j}^{t-1} + \text{rand} \cdot (x_{i,j}^{t-1} - x_{k,j}^{t-1}), \quad (10)$$

where the new solution at time step $t$ is denoted by $x_{i,j}^t$, the $j$-th element of the $i$-th individual is denoted by $x_{i,j}^{t-1}$, $x_{k,j}^{t-1}$ denotes the $k$-th neighbouring solution and rand ∈ [0, 1]. the random walk of the solution according to (5) exploits the promising area. the selection probability, expressed as follows in (11), determines whether the newly produced solution is accepted or discarded:

$$p_i = \frac{F_{\text{avg}}}{\sum_{i=1}^{n} F_{x_i}}, \quad (11)$$

where $p_i$ denotes the selection probability and $F$ the fitness value.

table 2. hybridized bat algorithm control factors.

notation | parameter                 | value
n        | population size           | 20
γ        | constant parameter        | 0.9
a_min    | constant minimum loudness | 1
α        | constant parameter        | 0.9
a_0      | maximum initial loudness  | 100
f_min    | minimum frequency         | 0
f_max    | maximum frequency         | 1
maxiter  | maximum iteration         | 30

algorithm 1. hybridized bat algorithm

objective function f(x)
initialize the population of bats and the values of v_i, r_i and a_i; define the pulse frequency f_i at x_i and the maximum iteration (maxiter), and set the iteration counter (t) to 0
while t < maxiter do
    for i = 1 to n (each of the n individuals in the population) do
        if t is even then
            calculate the velocity and frequency values using (3) and (4)
            perform the bat search procedure using (2)
        else
            perform the onlooker search procedure using (10)
        end if
        if rand > r_i then
            select the fittest solution
            perform the random walk process using (5)
        end if
        if p_i < a_i and f(x_i) < f(x_*) then
            accept the newly generated solution
            reduce a_i and increase r_i using (6)
        end if
    end for
    find and save the current best solution x_*
end while
return the best solution

a compact, runnable sketch of this loop is given below.
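in the rendering of algorithm 1 below, the sphere objective, the bounds and the greedy acceptance step (used in place of the probabilistic test of eq. (11)) are our own simplifications; the fitness uses the 1/(1 + f) form common in the artificial-bee-colony literature to avoid the division by zero in the printed 1/f branch of eq. (9).

import numpy as np

rng = np.random.default_rng(seed=1)
N, DIM, MAX_ITER = 20, 2, 30     # population size and iterations (table 2)
LB, UB = -5.0, 5.0               # placeholder search bounds for eq. (8)

def objective(x):                # stand-in objective f(x)
    return float(np.sum(x ** 2))

def fit(fx):                     # eq. (9), with the safer 1/(1 + f) variant
    return 1.0 / (1.0 + fx) if fx >= 0 else 1.0 + abs(fx)

# eq. (8): random initial population inside the bounds
X = LB + rng.random((N, DIM)) * (UB - LB)
V = np.zeros((N, DIM))
best = X[min(range(N), key=lambda i: objective(X[i]))].copy()

for t in range(MAX_ITER):
    for i in range(N):
        if t % 2 == 0:           # even iteration: bat search, eqs. (2)-(4)
            f = rng.random()     # f_min = 0, f_max = 1
            V[i] = V[i] + (X[i] - best) * f
            cand = X[i] + V[i]
        else:                    # odd iteration: onlooker search, eq. (10)
            k, j = rng.integers(N), rng.integers(DIM)
            cand = X[i].copy()
            cand[j] += rng.random() * (X[i][j] - X[k][j])
        cand = np.clip(cand, LB, UB)
        if fit(objective(cand)) > fit(objective(X[i])):  # greedy acceptance
            X[i] = cand
        if objective(X[i]) < objective(best):
            best = X[i].copy()

print("best solution:", best, "objective:", objective(best))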
4. results and discussion
in a series of simulated saptv scenarios, testing is performed with the proposed saptv-hbat algorithm and the existing saptv particle swarm optimization, whale optimization algorithm (woa) and bat algorithms. a c++-based ns-2 simulator implements the planned algorithm, and the number of nodes, node mobility, traffic load and pause time are varied over several simulation scenarios. table 3 lists the simulation parameters.

table 3. simulation structure.

parameter              | value
packet size            | 256 bytes
pause time             | 0–600 s
transmission range (r) | 250 m
data size              | 5 mbytes
data rate              | 2 mbits/s
simulation period      | 900 s
traffic load           | 1 to 5 packets
traffic type           | cbr
sum of nodes           | 100–150
initial energy         | 180 j
maximum node speed     | 10 m/s
network area           | 1000 m × 1000 m

nodes are randomly positioned within a 1000-metre radius in all circumstances. every mobile node's maximum transmission range is 250 m. nodes move with speeds in the range [0, 10] m/s, based on the random waypoint mobility model. in this mobility concept, all nodes migrate to a new destination point and remain there for a certain duration termed a pause. the channel is 2 mbps, and the ieee 802.11 power save mode is used by the mac protocol. the window size and the beacon interval of the ad hoc traffic indication mode are 0.05 s and 0.25 s. each simulation is performed for 900 s. the two-ray ground model is employed for propagation. average control messages, power consumption, packet delivery ratio (pdr), network lifetime and delay are involved in the investigation. simulation outcomes were obtained over 10 runs to achieve a stable state.

initially, the energy consumption of each model is tested, and the graphical results are shown in figure 3.

figure 3. graphical representation of the proposed model in terms of energy consumption.

the particle swarm optimization (pso) algorithm, woa, bat and the proposed hbat with saptv consumed 4.9 j, 3.7 j, 3.5 j and only 2.4 j, respectively, when the number of nodes is 120. the energy consumption increases as the number of nodes grows; for instance, with 150 nodes, pso, woa, bat and the proposed hbat with saptv consumed 5.8 j, 4.8 j, 3.9 j and only 2.8 j. the reduction for the proposed method is due to a decrease in energy, hops and traffic consumption in data transmission over the selected path. moreover, energy is proportional to the hop count; traffic loads hence reduce energy consumption when hop numbers are optimised. the packet delivery ratio for the proposed saptv-hbat protocol is shown in figure 4.

figure 4. graphical representation of the proposed saptv-hbat on the basis of packet delivery ratio.

it is observed that the proposed saptv-hbat has a better pdr than saptv-pso, saptv-woa and saptv-bat. the pdr increases as the number of nodes grows. with 100 nodes, pso, woa, bat and the proposed hbat with saptv achieved pdrs of 65, 79, 87 and 96. in addition, these techniques achieved pdrs of 72, 80, 88 and 97 with 130 nodes. the increased number of nodes ensures that the optimal route from the source to the destination is selected. the number of backups and the duration, i.e., the path lifetime, also increase. figure 5 displays the average packet latency of the proposed system for various network sizes.

figure 5. graphical representation of the proposed saptv-hbat in terms of average packet latency.

when the number of nodes is 110, the average packet latency of saptv-pso, saptv-woa, saptv-bat and saptv-hbat is 0.66, 0.61, 0.47 and 0.42. the same techniques achieved average packet latencies of 0.91, 0.81, 0.67 and 0.54 when the node count reaches 130. with 150 nodes, the existing pso and woa techniques with saptv reach a high packet latency of nearly 1.2 to 1.3, where bat achieves 0.93 and the proposed hbat with saptv achieved an average packet latency of 0.61. compared with the other approaches, the proposed saptv-hbat is observed to have less delay. the growing number of paths between nodes results in the selection of the best path with fewer hop counts and less node sharing. therefore, this approach has a reduced latency and uses bandwidth better. figure 6 displays the average number of control packets for the saptv-hbat design for various network sizes.

figure 6. graphical representation of the proposed saptv-hbat in terms of the average number of control packets.
when the number of nodes is 100, the average number of control packets of pso, woa, bat and hbat with saptv is 1150, 1020, 950 and 830. the same techniques produced 1250, 1158, 1059 and 950 control packets on average when the node count is 120. for 140 and 150 nodes, the existing pso and woa techniques with saptv reach a high control-packet count of nearly 1400 to 1500, where bat reaches nearly 1200 to 1300 and the proposed hbat produced 1120 and 1210 control packets on average. this shows that the number of control packets increases with the growth in the number of nodes. the hbat optimization approach used to locate paths from source to destination is employed in a multipath network. figure 7 shows the study of network lifetime.

figure 7. graphical representation of the proposed saptv-hbat on the basis of network lifetime.

the lifetime of the network is measured from the start of the data transmission on a particular path until any of the nodes on the path is determined to be dead. when the number of nodes is 100, the average network lifetime of pso, woa, bat and hbat with saptv is 7, 9, 12 and 14. the same techniques achieved average network lifetimes of 8, 11, 13 and 15 when the node count is 110. the average network lifetime of saptv-pso, saptv-woa, saptv-bat and saptv-hbat is 13, 15, 17 and 19 when the node count is 130. for 140 and 150 nodes, the existing pso and woa techniques reach a lower network lifetime of nearly 14 to 17, where bat achieves nearly 18 and 20 and the proposed hbat achieved average path lifetimes of 21 and 23. the network lifetime of the proposed saptv-hbat is better than all the others because the path selection is based on a multi-objective function. the data transmission will not involve nodes with excessive power consumption and traffic backlog, since high traffic loads lead to packet loss and needless energy use. hbat in saptv also displays a better network lifetime than pso, woa and bat due to the consideration of trustworthy routes.

5. conclusion
a manet is an autonomous mobile-node approach. qos protocols that can adapt to quick topology modifications would assist high-speed manet network applications. a route may lose its link quality after a number of broadcasts. in this work, a multipath routing system for the dependable exchange of data was suggested for qos. ofl is developed to find the shortest route in multipath routing, where the optimal rules are generated by hbat. the performance of the proposed model is compared with existing optimization approaches in terms of network lifetime, pdr, energy consumption, average control packets and latency. the proposed protocol can be employed as an effective solution for high-speed manets with qos requirements and resource limitations, such as multimedia vehicle-to-vehicle communication, mobile video surveillance systems, etc.
in the context of previous energy-sensitive routing protocols, the energy usage of the suggested routing protocol was quite small. as future work, the authors will further examine the development of high-speed routing protocols that take into account both trajectory stability and node density. furthermore, the next research will advance this direction by implementing effective protocols, maximising performance and qos, and lowering communication delays.

references
[1] l. ciani, a. bartolini, g. guidi, g. patrizi, a hybrid tree sensor network for a condition monitoring system to optimise maintenance policy, acta imeko, 9(1) (2020), pp. 1-7. doi: 10.21014/acta_imeko.v9i1.732
[2] r. desai, b. p. patil, d. p. sharma, routing protocols for mobile ad hoc network: a survey and analysis, indonesian journal of electrical engineering and computer science, 7(3) (2017), pp. 795-801. doi: 10.11591/ijeecs.v7.i3.pp795-801
[3] s. yan, y. chung, improved ad hoc on-demand distance vector routing (aodv) protocol based on blockchain node detection in ad hoc networks, international journal of internet, broadcasting and communication, 12(3) (2020), pp. 46-55. doi: 10.7236/ijibc.2020.12.3.46
[4] a. sulthana, s. s. mirza, an efficient kalman noise canceller for cardiac signal analysis in modern telecardiology systems, ieee access, 6 (2018), pp. 34616-34630. doi: 10.1109/access.2018.2848201
[5] d. peters, p. scholz, f. thiel, software separation in measuring instruments through security concepts and separation kernels, acta imeko, 7(1) (2018), pp. 13-19. doi: 10.21014/acta_imeko.v7i1.510
[6] s. putluri, s. y. fathima, cloud-based adaptive exon prediction for dna analysis, healthcare technology letters, 5(1) (2018), pp. 25-30. doi: 10.1049/htl.2017.0032
[7] a. boukerche, b. turgut, n. aydin, m. z. ahmad, l. boloni, d. turgut, routing protocols in ad hoc networks: a survey, computer networks, 55(13) (2011), pp. 3032-3080. doi: 10.1016/j.comnet.2011.05.010
[8] f. noorbasha, m. manasa, r. t. gouthami, s. sruthi, d. h. priya, n. prashanth, m. z. ur rahman, fpga implementation of cryptographic systems for symmetric encryption, journal of theoretical and applied information technology, 95(9) (2017), pp. 2038-2045.
[9] s. chakraborty, s. chakraborty, s. nandi, s. karmakar, fault resilience in sensor networks: distributed node-disjoint multi-path multi-sink forwarding, journal of network and computer applications, 57 (2015), pp. 85-101. doi: 10.1016/j.jnca.2015.07.014
[10] p. v. v. kishore, a. s. c. s. sastry, double technique for improving ultrasound medical images, journal of medical imaging and health informatics, 6(3) (2016), pp. 667-675. doi: 10.1166/jmihi.2016.1743
[11] m. sedrati, a. benyahia, multipath routing to improve quality of service for video streaming over mobile ad hoc networks, wireless personal communications, 99 (2018), pp. 999-1013. doi: 10.1007/s11277-017-5163-6
[12] h. a. ali, m. f. areed, d. i. elewely, an on-demand power and load-aware multi-path node-disjoint source routing scheme implementation using ns-2 for mobile ad-hoc networks, simulation modelling practice and theory, 80 (2018), pp. 50-65. doi: 10.1016/j.simpat.2017.09.005
[13] m. b. channappagoudar, p. venkataram, performance evaluation of mobile agent-based resource management protocol for manets, ad hoc networks, 36 (2016), pp. 308-320. doi: 10.1016/j.adhoc.2015.08.008
[14] e. petritoli, f. leccese, precise takagi-sugeno fuzzy logic system for uav longitudinal stability: an industry 4.0 case study for aerospace, acta imeko, 9(4) (2020), pp. 106-113. doi: 10.21014/acta_imeko.v9i4.723
[15] e. benoit, fuzzy scales for the measurement of color, acta imeko, 3(3) (2014), pp. 57-62. doi: 10.21014/acta_imeko.v3i3.89
[16] s. s. mirza, m. z. ur rahman, efficient adaptive filtering techniques for thoracic electrical bio-impedance analysis in health care systems, journal of medical imaging and health informatics, 7(6) (2017), pp. 1126-1138. doi: 10.1166/jmihi.2017.2211
[17] d. chander, r. kumar, qos enabled cross-layer multicast routing over mobile ad hoc networks, procedia computer science, 125 (2018), pp. 215-227. doi: 10.1016/j.procs.2017.12.030
[18] a. taha, r. alsaqour, m. uddin, m. abdelhaq, t. saba, energy efficient multipath routing protocol for mobile ad-hoc network using the fitness function, ieee access, 5 (2017), pp. 10369-10381. doi: 10.1109/access.2017.2707537
[19] w. wang, x. wang, d. wang, energy efficient congestion control for multipath tcp in heterogeneous networks, ieee access, 6 (2018), pp. 2889-2898. doi: 10.1109/access.2017.2785849
[20] n. muruganantham, h. el-ocla, routing using genetic algorithm in a wireless sensor network, wireless personal communications, 111(4) (2020), pp. 2703-2732. doi: 10.1007/s11277-019-07011-8
[21] w. jabbar, w. saad, m. ismail, meqsa-olsrv2: a multicriteria-based hybrid multipath protocol for energy-efficient and qos-aware data routing in manet-wsn convergence scenarios of iot, ieee access, 6 (2018), pp. 76546-76572. doi: 10.1109/access.2018.2882853
[22] t. k. saini, s. c. sharma, prominent unicast routing protocols for mobile ad hoc networks: criterion, classification, and key attributes, ad hoc networks, 89 (2019), pp. 58-77. doi: 10.1016/j.adhoc.2019.03.001
[23] j. deepa, j. sutha, a new energy based power aware routing method for manets, cluster computing, 22(s6) (2019), pp. 13317-13324. doi: 10.1007/s10586-018-1868-x
[24] m. farsi, m. badawy, m. moustafa, h. a. arafat, y. abdulazeem, a congestion-aware clustering and routing (ccr) protocol for mitigating congestion in wsn, ieee access, 7 (2019), pp. 105402-105419. doi: 10.1109/access.2019.2932951
[25] p. pal, s. tripathi, c. kumar, bandwidth estimation in high mobility scenarios of ieee 802.11 infrastructure-less mobile ad hoc networks, international journal of communication systems, 32(15) (2019), p. e4080. doi: 10.1002/dac.4080
[26] m. ahmad, a. hameed, f. ullah, i. wahid, s. u. rehman, h. a. khattak, a bio-inspired clustering in mobile ad-hoc networks for internet of things based on honeybee and genetic algorithm, journal of ambient intelligence and humanized computing, 11(1) (2020), pp. 4347-4361. doi: 10.1007/s12652-018-1141-4
[27] s. amutha, b. kannan, m. kanagaraj, energy-efficient cluster-manager-based cluster head selection technique for communication networks, international journal of communication systems, 34(5) (2021), p. e4741. doi: 10.1002/dac.4427

vision-based reinforcement learning for lane-tracking control

acta imeko issn: 2221-870x september 2021, volume 10, number 3, 7-14

andrás kalapos1, csaba gór2, róbert moni3, istván harmati1
1 bme, dept. of control engineering and information technology, budapest, hungary
2 continental adas ai, budapest, hungary
3 bme, dept. of telecommunications and media informatics, budapest, hungary

section: research paper
keywords: artificial intelligence, machine learning, mobile robot, reinforcement learning, simulation-to-reality, transfer learning
citation: andrás kalapos, csaba gór, róbert moni, istván harmati, vision-based reinforcement learning for lane-tracking control, acta imeko, vol. 10, no. 3, article 4, september 2021, identifier: imeko-acta-10 (2021)-03-04
section editor: bálint kiss, budapest university of technology and economics, hungary
received january 17, 2021; in final form september 22, 2021; published september 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: andrás kalapos, e-mail: andras.kalapos.research@gmail.com

abstract
the present study focused on vision-based end-to-end reinforcement learning in relation to vehicle control problems such as lane following and collision avoidance. the controller policy presented in this paper is able to control a small-scale robot to follow the right-hand lane of a real two-lane road, although its training has only been carried out in a simulation. this model, realised by a simple convolutional network, relies on images of a forward-facing monocular camera and generates continuous actions that directly control the vehicle. to train this policy, proximal policy optimization was used, and to achieve the generalisation capability required for real performance, domain randomisation was used. a thorough analysis of the trained policy was conducted by measuring multiple performance metrics and comparing these to baselines that rely on other methods. to assess the quality of the simulation-to-reality transfer learning process and the performance of the controller in the real world, simple metrics were measured on a real track and compared with results from a matching simulation. further analysis was carried out by visualising salient object maps.

1. introduction
reinforcement learning has been used to solve many control and robotics tasks. however, only a handful of papers have been published that apply this technique to end-to-end driving [1]-[7], and even fewer studies have focused on reinforcement-learning-based driving, trained only in simulations and then applied to real-world problems. generally, bridging the gap between simulation and the real world is an important transfer-learning problem related to reinforcement learning, and it is an unresolved task for researchers. mnih et al. [1] proposed a method to train vehicle controller policies that predict discrete control actions based on a single image of a forward-facing camera. jaritz et al. [2] used wrc6, a realistic racing simulator, to train a vision-based road-following policy. they assessed the policy's generalisation capability by testing it on previously unseen tracks and on real driving videos in an open-loop configuration, but their work did not extend to an evaluation on real vehicles in closed-loop control. kendall et al. [3] demonstrated real-world driving by training a lane-following policy exclusively on a real vehicle under the supervision of a safety driver. shi et al.
[4] presented research that involved training reinforcement learning agents in duckietown, in a similar way to that presented here; however, the focus was mainly on presenting a method that explained the reasoning behind the trained agents rather than on the training methods. also similar to the present study, balaji et al. [5] presented a method for training a road-following policy in a simulator using reinforcement learning and tested the trained agent in the real world, yet their primary contribution is the deepracer platform rather than an in-depth analysis of the road-following policy. almási et al. [7] also used reinforcement learning to solve lane following in the duckietown environment, but their work differs from the present study in the use of an off-policy reinforcement learning algorithm (deep q-networks (dqns) [8]); in this study an on-policy algorithm (proximal policy optimization [9]) is used, which achieves significantly better sample efficiency and shorter training times. another important difference is that almási et al. applied hand-crafted colour-threshold-based segmentation to the input images, whereas the method presented here takes the 'raw' images as inputs, which allows for a more robust real performance. this paper is an extended version of the authors' original contribution [10]. it includes the results of the 5th ai driving olympics [11] and aims to improve the description of the methods. in both works, vision-based end-to-end reinforcement learning relating to vehicle control problems is studied, and a solution is proposed that performs lane following in the real world, using continuous actions, without any real data provided by an expert (as in [3]). also, validation of the trained policies in both the real and simulated domains is conducted. the training and evaluation code for this paper is available on github (https://github.com/kaland313/duckietown-rl, accessed 23 september 2021).

2. methods
in this study, a neural-network-based controller was trained that takes images from a forward-looking monocular camera and produces control signals to drive a vehicle in the right-hand lane of a two-way road.
the vehicle to be controlled was a small differential-wheeled mobile robot, a duckiebot, which is part of the duckietown ecosystem [11], a simple and accessible platform for research and education on mobile robotics and autonomous vehicles. the primary objective was to travel as far as possible within a given time without leaving the road. lane departure was allowed but not preferred. although the latest version of the duckiebot is equipped with wheel encoders, for this method, the vehicle was solely reliant on data from the robot's forward-facing monocular camera.

2.1. reinforcement learning algorithm
in reinforcement learning, an agent interacts with the environment by taking an action $a_t$; the environment then returns an observation $s_{t+1}$ and a reward $r_{t+1}$. the agent computes the next action $a_{t+1}$ based on $s_{t+1}$, and so on. the policy is the parametric controller of the agent, and it is tuned during reinforcement learning training. sequences of actions, observations and rewards ($\tau$ trajectories) are used to train the parameters of the policy to maximise the expected reward over a finite number of steps (agent-environment interactions). for vehicle control problems, the actions are the signals that control the vehicle, such as the steering and throttle, and the observations are the sensor data relating to the environment of the vehicle, such as camera or lidar data or higher-level environment models. in this research, the observations were images from the robot's forward-facing camera, and the actions were the velocity signals for the two wheels of the robot.

policy optimisation algorithms are on-policy reinforcement learning methods that optimise the parameters of the policy $\pi_\theta(a_t|s_t)$ based on the actions $a_t$ and the reward $r_t$ received for them; $\theta$ denotes the trainable parameters of the policy. on-policy reinforcement learning algorithms optimise the policy $\pi_\theta(a_t|s_t)$ based on trajectories in which the actions have been computed by $\pi_\theta(a_t|s_t)$. in contrast, off-policy algorithms (such as dqns [8]) compute actions based on an estimate of the action-value function of the environment, which they learn using data from a large number of (earlier) trajectories, making these algorithms less stable in some environments. in policy optimisation algorithms, the policy $\pi_\theta(a_t|s_t)$ is stochastic, and in the case of deep reinforcement learning, it is implemented by a neural network, which is updated using a gradient method. the policy is stochastic because, instead of computing the actions directly, the policy network predicts the parameters of a probability distribution (see $\mu$ and $\sigma$ in figure 1) that is sampled to acquire the predicted actions $\tilde{a}_t$ (here, predicted refers to this action being predicted by the policy). in the present study, to train the policy, the proximal policy optimization algorithm [9] was used because of its stability, sample-complexity and ability to take advantage of multiple parallel workers. proximal policy optimization performs the weight updates using a special loss function to keep the new policy close to the old, thereby improving the stability of the training. two loss functions were proposed by schulman et al. [9]:

$$\mathfrak{L}^{\mathrm{clip}}(\theta) = \hat{\mathbb{E}}\left[\min\left(\rho_t(\theta)\hat{A}_t,\ \mathrm{clip}(\rho_t(\theta),\, 1-\epsilon,\, 1+\epsilon)\,\hat{A}_t\right)\right], \quad (1)$$

$$\mathfrak{L}^{\mathrm{klpen}}(\theta) = \hat{\mathbb{E}}\left[\rho_t(\theta)\hat{A}_t - \beta\, \mathrm{KL}\left[\pi_{\theta_{\mathrm{old}}}(\cdot|s_t),\, \pi_\theta(\cdot|s_t)\right]\right], \quad (2)$$

where clip(·) and kl[·] refer to the clipping function and the kullback-leibler (kl) divergence, respectively, while $\hat{A}$ is calculated as the generalised advantage estimate [12]. in these loss functions, $\epsilon$ is usually a constant in the [0.1, 0.3] range, while $\beta$ is an adaptive parameter, and

$$\rho_t(\theta) = \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t|s_t)}. \quad (3)$$

an open-source implementation of proximal policy optimization from rllib [13] was used, which performs the gradient updates based on the weighted sum of these loss functions. the pseudo code and additional details for the algorithm are provided in the appendix. a minimal sketch of the clipped loss of eq. (1) follows.
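as an illustration, the clipped surrogate of eq. (1) can be computed from logged action probabilities and advantage estimates as in this small numpy sketch (not the rllib implementation itself):

import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """negative clipped surrogate of eq. (1); rho_t of eq. (3) is the
    probability ratio between the new and old policies."""
    rho = np.exp(logp_new - logp_old)                      # eq. (3)
    unclipped = rho * advantages
    clipped = np.clip(rho, 1.0 - eps, 1.0 + eps) * advantages
    return -np.mean(np.minimum(unclipped, clipped))        # maximise -> minimise

# toy check: identical policies give rho = 1, so the loss is -mean(advantage)
adv = np.array([1.0, -0.5, 0.2])
logp = np.log(np.array([0.3, 0.5, 0.2]))
print(ppo_clip_loss(logp, logp, adv))   # approximately -0.2333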
2.2. policy architecture
the controller policy was realised by a shallow (4-layer) convolutional neural network. both the policy and the value network used the architecture presented by mnih et al. [1], with the only difference being the use of linear activation in the output of the policy network. no weights were shared between the policy and the value network. this policy is considered to be end-to-end because the only learning component is the neural network, which directly computes actions based on observations from the environment. some pre- and post-processing was applied to the observations and actions, but these only performed very simple transformations (explained in the next paragraph and section 2.3). the aim of these pre- and post-processing steps was to transform the observations $s_t$ and actions $a_t$ into representations that enabled faster convergence without losing any important features in the observations or restricting necessary actions.

figure 1. illustration of the policy architecture with the notations used. the agent is represented jointly by the 'policy network' and 'sampling action distribution' blocks; $s_t$: 'raw' observation, $\tilde{s}_t$: pre-processed observation, $\tilde{a}_t$: predicted action, $a_t$: post-processed action.

the policy was stochastic, and the output of the neural network therefore produced the $\mu$ and $\log\sigma$ parameters of a multivariate diagonal normal distribution. during training, this distribution was sampled to acquire the $\tilde{a}_t$ actions, which improved the exploration of the action space. during evaluations, the sampling step was skipped by using the predicted mean value $\mu$ as the policy output $\tilde{a}_t$. the input of the policy network consisted of the last three observations (images) scaled, cropped and stacked (along the depth axis). the observations returned by the environment ($s_t$ in figure 1) were 640 × 480 (width, height) rgb images, the top third of which mainly showed the sky and was therefore cropped. the cropped images were then scaled down to 84 × 84 resolution (note the uneven scaling) and stacked along the depth axis, resulting in 84 × 84 × 9 input tensors ($\tilde{s}_t$ in figure 1). the last three images were stacked to provide the policy with information about the robot's speed and acceleration. a sketch of this preprocessing follows.
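the observation pipeline described above can be sketched as follows; opencv is assumed for resizing, and the buffer padding at episode start is our own choice:

from collections import deque
import numpy as np
import cv2

frame_buffer = deque(maxlen=3)   # holds the last three processed frames

def preprocess(obs_rgb: np.ndarray) -> np.ndarray:
    """480x640x3 camera image -> 84x84x9 stacked policy input."""
    height = obs_rgb.shape[0]
    cropped = obs_rgb[height // 3:, :, :]        # drop the top third (sky)
    scaled = cv2.resize(cropped, (84, 84))       # note the uneven scaling
    frame_buffer.append(scaled)
    while len(frame_buffer) < 3:                 # pad at the episode start
        frame_buffer.append(scaled)
    return np.concatenate(list(frame_buffer), axis=2)   # 84x84x9 tensor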
multiple action representations were experimented with (see section 2.3). based on these representations, the policy output $\tilde{a}_t$ is either an action vector of two elements or a scalar value that controls the vehicle.

2.3. action representations
the action mapping step transformed the predicted actions $\tilde{a}_t$, which could be implemented using many representations, into wheel velocities $a_t = [\omega_l, \omega_r]$ (see figure 1). the vehicle to be controlled was a differential-wheeled robot; the most basic action representation was therefore to directly compute the angular velocities of the two wheels as continuous values in the range $\omega_{l,r} \in [-1; 1]$ (where 1 and −1 correspond to forward and backward rotation at full speed). however, this action space allowed for actions that were not necessary for the manoeuvres examined in this paper. moreover, as the reinforcement learning algorithm ruled out unnecessary actions, exploration of the action space was potentially made more difficult, and the number of steps required to train an agent was therefore increased. several methods can be used to constrain and simplify the action space, such as discretisation, clipping some actions or mapping to a lower-dimensional space. most previous studies [1], [2], [5], [7] have used discrete action spaces, thus the neural network in these policies selected one from a set of hand-crafted actions (steering, throttle combinations), while kendall et al. [3] utilised continuous actions, as has been done in this study. in order to test the reinforcement learning algorithm's ability to address general tasks, multiple action mappings and simplifications of the action space were experimented with. these are described in the following paragraphs and sketched in code after the list.

wheel velocity: wheel velocities were a direct output of the policy; $a_t = [\omega_l, \omega_r] = \tilde{a}_t$, therefore $\omega_{l,r} \in [-1; 1]$.

wheel velocity, positive only: only positive wheel velocities were allowed because only these were required to move forward. values predicted outside the $\omega_{l,r} \in [0; 1]$ interval were clipped: $a_t = [\omega_l, \omega_r] = \mathrm{clip}(\tilde{a}_t, 0, 1)$.

wheel velocity, braking: wheel velocities were still only able to fall within the $\omega_{l,r} \in [0; 1]$ interval, but the predicted values were interpreted as the amount of braking from the maximum speed. the main differentiating factor from the 'positive only' option was the bias towards moving forward at full speed: $a_t = [\omega_l, \omega_r] = \mathrm{clip}(1 - \tilde{a}_t, 0, 1)$.

steering: a scalar value was predicted and continuously mapped to combinations of wheel velocities. the scalar value 0.0 corresponds to moving straight (at full speed), while −1.0 and 1.0 refer to turning left or right with one wheel completely stopped and the other going at full speed. intermediate values are computed using linear interpolation between these values. the speed of the robot is always maximal for a particular steering value. the formula that implements this action mapping is $a_t = [\omega_l, \omega_r] = \mathrm{clip}([1 + \tilde{a}_t, 1 - \tilde{a}_t], 0, 1)$.
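written out directly, the four mappings look as follows; `a` denotes the raw network output (a pair for the first three mappings and a scalar for steering):

import numpy as np

def wheel_velocity(a):            # direct wheel velocities in [-1, 1]
    return np.asarray(a)

def wheel_velocity_positive(a):   # forward motion only: clip to [0, 1]
    return np.clip(a, 0.0, 1.0)

def wheel_velocity_braking(a):    # prediction read as braking from full speed
    return np.clip(1.0 - np.asarray(a), 0.0, 1.0)

def steering(a):                  # scalar -> (left, right) wheel velocities
    return np.clip(np.array([1.0 + a, 1.0 - a]), 0.0, 1.0)

print(steering(0.0))   # [1. 1.] -> straight ahead at full speed
print(steering(1.0))   # [1. 0.] -> hard right turn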
a term proportional to the angular velocity of the faster moving wheel was also added to encourage fast motion. this reward was calculated as 𝑟 = 𝜆ψ 𝑟ψ(𝛹,𝑑) + λ𝑣 𝑟𝑣(𝜔𝑙,𝜔𝑟), where 𝑟ψ(⋅),𝑟𝑣(⋅) are the orientation and velocity-based components (explained below), while the 𝜆ψ,𝜆𝑣 constants scale these to [-1,1]. 𝛹,𝑑 are the orientation and lateral error from the desired trajectory, which is the centreline of the right-hand lane (see figure 2 a). the orientation-based term was calculated as 𝑟ψ(𝛹,𝑑) = λ(𝛹𝑒𝑟𝑟) = λ(𝛹 − 𝛹des(𝑑)), where 𝛹des(𝑑) is the desired orientation calculated using the lateral distance from the desired trajectory (see figure 2 b for the illustration of 𝛹des(𝑑)). the λ function achieves the promotion of the |𝛹𝑒𝑟𝑟| < 𝜑 error, while an error larger than 𝜑 leads to a small negative reward (a plot of λ(𝑥) is shown in figure 2 c): figure 2. explanation of the proposed orientation reward: (a) explains 𝛹,d, (b) shows how the desired orientation depends on the lateral error, (c) shows the λ(𝑥) function and (d) provides some examples of desired configurations. acta imeko | www.imeko.org september 2021 | volume 10 | number 3 | 10 λ(𝑥) = { 1 2 + 1 2 cos(π 𝑥 𝜑 ) if − 1 ≤ 𝑥 ≤ 1 (1 − | 𝑥 𝜑 |) otherwise , (4) where the ε ∈ [10−1,10−2] and 𝜑 = 50° hyperparameters are selected arbitrarily. the velocity-based component was calculated as 𝑟𝑣(𝜔𝑙,𝜔𝑟) = max(𝜔𝑙,𝜔𝑟) to reward an equally high-speed motion in both straight and curved sections. in the curved sections, only the outer wheel was able to rotate at maximal speed, while on a straight road, both wheels were able to do so. 2.5. simulation-to-reality transfer to train the agents, an open-source simulation of the duckietown environment was used [14]. this simulation models certain physical properties of the real environment accurately (dimensions of the robot, camera parameters, dynamic properties, etc.), but several other effects (textures, objects at the side of the road) and light simulation are less realistic (e.g. compared to modern computer games). these inaccuracies create a gap between simulation and reality that makes it challenging for any reinforcement learning agent to be trained only in simulation but operate in reality. to bridge the simulation-to-reality gap and to achieve the generalisation capability required for real performance, domain randomisation was used. this involves training the policy in many different variants of a simulated environment by varying lighting conditions, object textures, the camera, vehicle dynamics parameters and road structures (see figure 3 for examples of domain randomised observations). in addition to the ‘built-in’ randomisation options of gym-duckietown, this study used a diverse set of maps to train on in order to further improve the agent's generalisation capability. 2.6. collision avoidance collision avoidance with other vehicles greatly increases the complexity of the lane-following task. these problems can be solved in different ways, for example, by overtaking or following at a safe distance. however, the sensing capability of the vehicle and the complexity of the policy determine the solution it can learn. images from the forward-facing camera of a duckiebot only have a 160 ° horizontal field of view; therefore, the policy controlling the vehicle has no information about objects moving next to or behind the robot. 
2.5. simulation-to-reality transfer

to train the agents, an open-source simulation of the duckietown environment was used [14]. this simulation models certain physical properties of the real environment accurately (dimensions of the robot, camera parameters, dynamic properties, etc.), but several other effects (textures, objects at the side of the road) and the light simulation are less realistic (e.g. compared to modern computer games). these inaccuracies create a gap between simulation and reality that makes it challenging for any reinforcement learning agent to be trained only in simulation but operate in reality. to bridge the simulation-to-reality gap and to achieve the generalisation capability required for real performance, domain randomisation was used. this involves training the policy in many different variants of a simulated environment by varying lighting conditions, object textures, the camera, vehicle dynamics parameters and road structures (see figure 3 for examples of domain-randomised observations). in addition to the 'built-in' randomisation options of gym-duckietown, this study used a diverse set of maps to train on in order to further improve the agent's generalisation capability.

figure 3. examples of domain randomised observations.

2.6. collision avoidance

collision avoidance with other vehicles greatly increases the complexity of the lane-following task. these problems can be solved in different ways, for example, by overtaking or following at a safe distance. however, the sensing capability of the vehicle and the complexity of the policy determine the solution it can learn. images from the forward-facing camera of a duckiebot only have a 160° horizontal field of view; therefore, the policy controlling the vehicle has no information about objects moving next to or behind the robot. for simplicity, in this study, the same convolutional network was used for collision avoidance as for lane following, which does not feature a long short-term memory cell or any other sequence-modelling component (in contrast to [2]). for these reasons, it is unable to plan long manoeuvres, such as overtaking, which also requires side vision to check when it is safe to return to the right-hand lane. the policy was therefore trained in situations where there was a slow vehicle ahead, and the agent had to learn to perform lane following at full speed until it had caught up with the vehicle in front, at which point it had to reduce its speed and maintain a safe distance to avoid collision. in these experiments, the wheel velocity braking action representation was used as the policy's output because this allowed the agent to slow down or even stop the vehicle if necessary (unlike the steering action). both the orientation and the distance travelled reward functions were used to train agents for collision avoidance. the former was supplemented with a term that promoted collision avoidance, while the latter was used unchanged. the simulation used provided a 𝑝coll penalty if the safety circles around the two vehicles overlapped. the 𝑟coll reward component that promoted collision avoidance was calculated using this penalty. if the penalty decreased because the robot was able to increase its distance from an obstacle, the reward term was proportional to the change in penalty; otherwise, it was 0:

$$r_{coll} = \begin{cases} -\lambda_{coll} \cdot \Delta p_{coll} & \text{if } \Delta p_{coll} < 0 \\ 0 & \text{otherwise.} \end{cases} \quad (5)$$

this term was added to the orientation reward, and it aimed to encourage the policy to increase the distance from the vehicle ahead if it got too close. collisions themselves were only penalised by terminating the episode without giving any negative rewards.
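a direct transcription of equation (5) (with an illustrative λcoll, which is not specified in the text):

```python
def collision_reward(p_coll_prev, p_coll, lam_coll=1.0):
    """equation (5): positive reward only while the penalty is decreasing,
    i.e. while the robot is increasing its distance from the obstacle."""
    delta_p = p_coll - p_coll_prev
    return -lam_coll * delta_p if delta_p < 0 else 0.0
```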
2.7. evaluation

to assess the performance of the reinforcement learning-based controller, multiple performance metrics were measured in the simulation and compared against two baselines, one using a classical control theory approach and the other being human driving.

survival time (𝑡survive) in s: the time until the robot left the road, or the duration of an evaluation episode.

distance travelled in ego-lane (𝑠ego) in m: the distance travelled along the right-hand lane within a fixed time period. only longitudinal motion was counted; tangential movement therefore counted the most towards this metric.

distance travelled both lanes (𝑠both) in m: both the distance travelled along the right-hand lane within a fixed time period and sections where the agent moved into the oncoming lane counted towards this metric.

lateral deviation (𝑑𝑑) in m·s: lateral deviation from the lane's centreline, integrated over the time of an episode.

orientation deviation (𝑑ψ) in rad·s: the robot orientation's deviation from the tangent of the lane centreline, integrated over the time of an episode.

time outside ego-lane (𝑡out) in s: time spent outside the ego-lane.

figure 4. a) test track used for simulated reinforcement learning and baseline evaluations; b) and c) real and simulated test track used for the evaluation of the simulation-to-reality transfer.

even though duckietown is intended to be a standardised platform, it is still under development, and the official evaluation methods and baselines have not been adopted widely in the research community. the ai driving olympics provided a great opportunity to benchmark the solution presented here against others; however, the methods behind these solutions have not yet been published in the scientific literature. for this reason, this method was analysed primarily by comparing it with baselines that could be evaluated under the same conditions. the classical control theory baseline relies on information about the robot's relative location and orientation with respect to the centreline of the lane, which is available in the simulator. this baseline works by controlling the robot to orient itself towards a point ahead on its desired path and calculating wheel velocities using a proportional-derivative (pd) controller based on the orientation error of the robot. the parameters of this controller were hand-tuned to achieve a sufficiently good performance, but more advanced control schemes could offer better results. in many reinforcement learning problems (e.g. the atari 2600 games [15]), the agents are compared to human baselines. motivated by this benchmark, a method to measure how well humans are able to control duckiebots was proposed and then used as a baseline. the values shown in table 1 were recorded by controlling the simulated robot using the arrow keys on a keyboard (therefore via discrete actions), while the observations seen by the human driver were very similar to the observations of the reinforcement learning agent.

2.8. methods to improve results at the ai driving olympics

the agents in this study were trained to solve autonomous driving problems in the duckietown environment and not to maximise scores at the ai driving olympics. therefore, some hyperparameters and methods had to be modified to match the competitions' evaluation procedures. it was found that training on lower frame rates (0.1 s step time) improved the scores even though the evaluation simulation was stepped more frequently. in addition, implementing the same motion blur simulation that was applied in the official evaluation improved the results significantly compared with agents that were trained on non-blurred observations.

3. results

3.1. simulation

even though multiple papers have demonstrated the feasibility of training vision-based driving policies using reinforcement learning, adapting to a new environment still poses many challenges. due to the high dimensionality of the image-like observations, many algorithms converge slowly and are very sensitive to hyperparameter selection. the method presented in this study, using proximal policy optimization, is able to converge to good lane-following policies within 1 million timesteps thanks to the sample efficiency of the algorithm. this training takes 2–2.5 hours on five cores of an intel xeon e5-2698 v4 2.2 ghz cpu and an nvidia tesla v100 gpu if 16 parallel environments are used.

3.1.1. comparison against baselines

table 1 compares the reinforcement learning agent from this study with the baselines. the performance of the trained policy is comparable to the classical control theory baseline as well as to how well humans are able to control the robot in the simulation. most metrics indicate similarly good or equal performance, even though the pd-controller baseline relies on high-level data such as position and orientation error rather than images.
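for concreteness, the pd baseline described in section 2.7 might look like the following sketch; the gains here are illustrative, not the hand-tuned values used for the baseline:

```python
def pd_baseline(psi_err, prev_psi_err, dt, k_p=2.0, k_d=0.1):
    """pd control on the orientation error towards a look-ahead point,
    mapped to wheel velocities like the steering action of section 2.3."""
    s = k_p * psi_err + k_d * (psi_err - prev_psi_err) / dt
    s = max(-1.0, min(1.0, s))            # clamp the steering value
    w_l = max(0.0, min(1.0, 1.0 + s))
    w_r = max(0.0, min(1.0, 1.0 - s))
    return w_l, w_r
```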
3.1.2. comparison against other solutions at the ai driving olympics

table 2 shows the top-ranking solutions of the simulated lane-following (validation) challenge at the 5th ai driving olympics. all top-performing solutions were able to control the robot reliably in the simulation for the duration of an episode (60 s); however, the distances travelled were different. the method in this study is able to control the robot reliably at the highest speed; it therefore achieves the highest distance-travelled value while also showing good lateral deviation and rarely departing from the ego-lane.

table 1. comparison of the reinforcement learning agent with two baselines in simulation.

mean metrics over 5 episodes | rl agent | pd baseline | human baseline
survival time in s ↑ | 15 | 15 | 15
distance travelled both lanes in m ↑ | 7.1 | 7.6 | 7.0
distance travelled ego-lane in m ↑ | 7.0 | 7.6 | 6.7
lateral deviation in m·s ↓ | 0.5 | 0.5 | 0.9
orientation deviation in rad·s ↓ | 1.5 | 1.1 | 2.8

table 2. comparing the method in this study with other solutions at the ai driving olympics.

author | 𝑡survive in s ↑ | 𝑠ego in m ↑ | 𝑑𝑑 in m·s ↓ | 𝑡out in s ↓
a. kalapos [10], [16] | 60 | 30.38 | 2.65 | 0
a. béres [16] | 60 | 29.14 | 4.10 | 1.4
m. tim [16] | 60 | 28.52 | 3.45 | 0.4
a. nikolskaya | 60 | 24.80 | 3.15 | 1.6
r. moni [16] | 60 | 18.60 | 1.78 | 0
z. lorincz [16] | 60 | 18.6 | 3.5 | 0.8
m. sazanovich | 60 | 16.12 | 4.35 | 3.4
r. jean | 60 | 15.5 | 3.28 | 0
y. belousov | 60 | 14.88 | 5.41 | 9.8
m. teng | 60 | 11.78 | 2.92 | 0
p. almási [7], [16] | 60 | 11.16 | 1.32 | 0

3.1.3. action representation and reward shaping

experiments with different action representations show that constrained and preferably biased action spaces allow convergence to good policies (wheel velocity braking and steering), whereas more general action spaces (wheel velocity and its clipped version) only converge to inferior policies within the same number of steps (see figure 5). the proposed orientation-based reward function also leads to as good a final performance as one that 'trivially' rewards based on the distance travelled; however, the latter seems to perform better on the more general action representations (because policies using these action spaces and trained with the orientation reward do not learn to move fast).

figure 5. learning curves for the reinforcement learning agent with different action representations and reward functions: a) orientation reward, b) distance travelled reward.

3.2. real-world driving

to measure the quality of the transfer learning process and the performance of the controller in the real world, performance metrics that were easily measurable both in reality and simulation were selected. these were recorded in both domains in matching experiments and compared against each other. the geometry of the tracks, the dimensions and the speed of the robot were simulated accurately in order to evaluate the robustness of the policy against all the inaccurately simulated effects and those that were not simulated. using this method, policies trained in the domain-randomised simulation were tested as well as those that were trained only in the 'nominal' simulation. this allows for the evaluation of the transfer learning process and highlights the effects of training with domain randomisation. the real and simulated versions of the test track used in this analysis are shown in figure 4 b and figure 4 c. during real evaluations, it was generally found that under ideal circumstances (no distracting objects at the side of the road and good lighting conditions), the policy trained in the 'nominal' simulation was able to drive reasonably well. however, training with domain randomisation led to a more reliable and robust performance in the real world.
table 3 shows the quantitative results of this evaluation. the two policies seemed to perform equally well when compared based on their performance in the simulation. however, metrics recorded in the real environment show that the policy trained with domain randomisation performed almost as well as in the simulation, while the other policy performed noticeably worse. the lower distance travelled ego-lane metric of the domain-randomised policy can be explained by the fact that the vehicle tended to drift into the left-hand lane at sharp turns but returned to the right-hand lane afterwards, while the nominal policy usually made more serious mistakes. note that in these experiments the orientation-based reward and the steering action representation were used, as this configuration learns to control the robot in the minimum number of steps and the shortest training time.

table 3. evaluation results of the reinforcement learning agent in the real environment and in matching simulations.

eval. domain | mean metrics over 6 episodes | domain rand. policy | nominal policy
real | survival time in s ↑ | 54 | 45
real | distance travelled both lanes in m ↑ | 15.6 | 11.4
real | distance travelled ego-lane in m ↑ | 7.0 | 8.4
sim. | survival time in s ↑ | 60 | 60
sim. | distance travelled in m ↑ | 15.5 | 15.0

an online video demonstrates the performance of the trained agent from this study: https://youtu.be/kz7ywemg1is (accessed 23 september 2021). an important limitation of the method presented in this study is that during real evaluations, the speed of the robot had to be decreased to half of the simulated value. the policy evaluations were executed on a pc connected to the robot via wireless lan; therefore, the observations and the actions were transmitted between the two devices at every step. this introduced delays in the order of 10–100 ms, making the control loop unstable when the robot was moving at full speed. at half speed, however, stable operation was achieved. it was noticed that models trained with motion blur and longer step times for the ai driving olympics performed more reliably in the real world regardless of whether they used domain randomisation. however, further analysis and retraining of these agents multiple times is needed to firmly support these presumptions.

3.3. collision avoidance

figure 6 demonstrates the learned collision avoidance behaviour. in the first few seconds of the simulation, the robot controlled by the reinforcement learning policy accelerates to full speed. then, as it approaches the slower, non-learning robot, it reduces its speed and maintains an approximately constant distance from the vehicle ahead (see figure 6). from the simple, fully convolutional network of this policy, learning, planning and executing a more complex behaviour, such as overtaking, cannot be expected.

figure 6. sequence of robot positions in a collision avoidance experiment with a policy trained using the modified orientation reward: a) 𝑡 = 0 s, b) 𝑡 = 6 s, c) 𝑡 = 8 s, d) 𝑡 = 24 s, e) approximate distance between the vehicles (panel annotations: initial positions, catching up, following the vehicle ahead). after 𝑡 = 6 s, the controlled robot follows the vehicle in front at a short but safe distance until the end of the episode (the approximate distance is calculated as the distance between the centre points of the robots minus the length of a robot).

table 4 shows that training with both reward functions leads to functional lane-following behaviour. however, the non-maximal survival time values indicate that neither of the policies is capable of performing lane following reliably in the presence of an obstacle robot for 60 s. all metrics in table 4 indicate that the modified orientation reward leads to better lane-following metrics than the simpler distance travelled reward.

table 4. evaluation results of policies trained for collision avoidance with different reward functions.

mean metrics over 15 episodes | distance travelled | orientation + 𝑟coll
survival time (max. 60) in s ↑ | 46 | 52
distance travelled both lanes in m ↑ | 22.5 | 22.9
distance travelled ego-lane in m ↑ | 22.7 | 23.1
lateral deviation in m·s ↓ | 1.9 | 1.6
orientation deviation in rad·s ↓ | 6.3 | 5.8

it should be noted that these metrics were mainly selected to evaluate the lane-following capabilities of an agent; a more in-depth analysis of collision avoidance with a vehicle in front calls for more specific metrics. an online video demonstrates the performance of the agent trained in this study: https://youtu.be/8gqauvty1po (accessed 23 september 2021).
3.4. salient object maps

visualising which parts of the input image contribute the most to a particular output (action) is important because it provides some explanation of the network's inner workings. figure 7 shows salient object maps in different scenarios generated using the method proposed in [17]. all of these images indicate high activations on lane markings, which is expected.

figure 7. salient objects highlighted on observations in different domains and tasks: a) simulated, b) real, c) collision avoidance. blue regions represent high activations throughout the network.

4. conclusions

this work presented a solution to the problem of complex, vision-based lane following in the duckietown environment, using reinforcement learning to train an end-to-end steering policy capable of simulation-to-real transfer learning. it was found that the training is sensitive to problem formulation, such as the representation of actions. this study has demonstrated that, by using domain randomisation, a moderately detailed and accurate simulation is sufficient for training end-to-end lane-following agents that operate in a real environment. the performance of these agents was evaluated by comparing some basic metrics in matching real and simulated scenarios. agents were also successfully trained to perform collision avoidance in addition to lane following. finally, salient object visualisation was used to give an illustrative explanation of the inner workings of the policies in both the real and simulated domains.

acknowledgement

we would like to show our gratitude to professor bálint gyires-tóth (bme, dept. of telecommunications and media informatics) for his assistance and comments on the progress of our research. the research reported in this paper and carried out at the budapest university of technology and economics was supported by continental automotive hungary ltd. and the 'tkp2020, institutional excellence programme' of the national research development and innovation office in the field of artificial intelligence (bme ie-mi-sc tkp2020).

references

[1] v. mnih, a. p. badia, m. mirza, a. graves, t. lillicrap, t. harley, d. silver, k. kavukcuoglu, asynchronous methods for deep reinforcement learning, proc. of the international conference on machine learning, new york, united states, 19–24 june 2016, pp. 1928-1937.

[2] m. jaritz, r. de charette, m. toromanoff, e. perot, f. nashashibi, end-to-end race driving with deep reinforcement learning, proc. of the ieee international conference on robotics and automation (icra), brisbane, australia, 21–25 may 2018, pp. 2070-2075.

[3] a. kendall, j. hawke, d. janz, p. mazur, d. reda, j. allen, v. lam, a. bewley, a. shah, learning to drive in a day, proc. of the international conference on robotics and automation (icra), montreal, canada, 20–24 may 2019, pp. 8248-8254.
[4] w. shi, s. song, z. wang, g. huang, self-supervised discovering of causal features: towards interpretable reinforcement learning, 2020. online [accessed 3 august 2020] https://arxiv.org/abs/2003.07069

[5] b. balaji, s. mallya, s. genc, s. gupta, l. dirac, v. khare, g. roy, t. sun, y. tao, b. townsend, e. calleja, s. muralidhara, d. karuppasamy, deepracer: educational autonomous racing platform for experimentation with sim2real reinforcement learning, 2019. online [accessed 13 april 2020] https://arxiv.org/abs/1911.01562

[6] m. szemenyei, p. reizinger, attention-based curiosity in multi-agent reinforcement learning environments, proc. of the international conference on control, artificial intelligence, robotics & optimization (iccairo), majorca island, spain, 3–5 may 2019, pp. 176-181.

[7] p. almási, r. moni, b. gyires-tóth, robust reinforcement learning-based autonomous driving agent for simulation and real world, proc. of the international joint conference on neural networks (ijcnn), glasgow, united kingdom, 19–24 july 2020, pp. 1-8.

[8] v. mnih, k. kavukcuoglu, d. silver, a. graves, i. antonoglou, d. wierstra, m. riedmiller, playing atari with deep reinforcement learning, 2013. online [accessed 13 april 2020] https://arxiv.org/abs/1312.5602

[9] j. schulman, f. wolski, p. dhariwal, a. radford, o. klimov, proximal policy optimization algorithms, 2017. online [accessed 2 december 2019] https://arxiv.org/abs/1707.06347

[10] a. kalapos, c. gór, r. moni, i. harmati, sim-to-real reinforcement learning applied to end-to-end vehicle control, proc. of the 23rd international symposium on measurement and control in robotics (ismcr), budapest, hungary, 15–17 october 2020, pp. 1-6.

[11] j. zilly, j. tani, b. considine, b. mehta, a. f. daniele, m. diaz, g. bernasconi, c. ruch, j. hakenberg, f. golemo, a. k. bowser, m. r. walter, r. hristov, s. mallya, e. frazzoli, a. censi, l. paull, the ai driving olympics at neurips, 2018. online [accessed 13 april 2020] https://arxiv.org/abs/1903.02503

[12] j. schulman, p. moritz, s. levine, m. jordan, p. abbeel, high-dimensional continuous control using generalized advantage estimation, proc. of the international conference on learning representations (iclr), san juan, puerto rico, 2–4 may 2016, 14 pp. online [accessed 23 september 2021] http://arxiv.org/abs/1506.02438

[13] e. liang, r. liaw, r. nishihara, p. moritz, r. fox, k. goldberg, j. gonzalez, m. jordan, i. stoica, rllib: abstractions for distributed reinforcement learning, proc. of the international conference on machine learning, stockholm, sweden, 10–15 july 2018, pp. 3053-3062.

[14] m. chevalier-boisvert, f. golemo, y. cao, b. mehta, l. paull, duckietown environments for openai gym, 2018. online [accessed 15 january 2021] https://github.com/duckietown/gym-duckietown

[15] m. g. bellemare, y. naddaf, j. veness, m. bowling, the arcade learning environment: an evaluation platform for general agents, j. artif. intell. res. 47 (2013), pp. 253-279. doi: 10.1613/jair.3912

[16] r. moni, a. kalapos, a. béres, m. tim, p. almási, z. lőrincz, pia project achievements at aido5, 2020. online [accessed 15 january 2021] https://medium.com/@smartlabai/pia-project-achievements-at-aido5-a441a24484ef
[17] m. bojarski, p. yeres, a. choromanska, k. choromanski, b. firner, l. d. jackel, u. muller, explaining how a deep neural network trained with end-to-end learning steers a car, 2017. online [accessed 15 april 2020] https://arxiv.org/abs/1704.07911

appendix: proximal policy optimization

the pseudo code for proximal policy optimization (ppo) is as follows:

algorithm: ppo, actor-critic style (based on [9])
input: initial policy with 𝜃0 parameters and initial value function estimator with 𝜙0 parameters
for iteration = 1, 2, ... do
    for actor = 1, 2, ..., 𝑁 do
        run π𝜃old in the environment for 𝑇 timesteps to collect the τ𝑖 trajectory
        compute advantage estimates 𝐴̂1, …, 𝐴̂𝑇 based on the current value function
    end
    optimise 𝔏clip(𝜃) + 𝔏klpen(𝜃) with respect to 𝜃, for 𝐾 epochs and minibatch size 𝑀 ≤ 𝑁𝑇
    fit the value function estimate by regression on the mean-squared error
    𝜃old ← 𝜃, 𝜙old ← 𝜙
end

the 𝛽 adaptive parameter mentioned in section 2.1 is updated according to the following rule:

$$\beta \leftarrow \begin{cases} \beta / 2, & \text{if } d < d_{targ}/1.5 \\ \beta \times 2, & \text{if } d > d_{targ} \times 1.5, \end{cases} \quad (6)$$

where 𝑑targ is a hyperparameter and 𝑑 is the kl-divergence of the old and the updated policy:

$$d = \hat{E}\left[KL\left[\pi_{\theta_{old}}(\cdot \mid s_t),\, \pi_\theta(\cdot \mid s_t)\right]\right]. \quad (7)$$

the 𝐴̂𝑡 generalised advantage estimate [12] is calculated as

$$\hat{A}_t = \sum_{l=0}^{\infty} (\gamma\lambda)^l \,\delta^V_{t+l} \quad (8)$$

$$\delta^V_t = r_t + \gamma\, V(s_{t+1}) - V(s_t), \quad (9)$$

where 𝑉(𝑠𝑡) and 𝑉(𝑠𝑡+1) are the value function estimates calculated by the value network at steps 𝑡 and 𝑡 + 1, 𝛾 is the discount factor, and 𝜆 is a hyperparameter of the generalised advantage estimate. to assure reproducibility, the hyperparameters of the algorithm are provided in table 5.

table 5. hyperparameters of the algorithm. the description of some parameters is from the rllib documentation [13].

description | value
number of parallel environments | 𝑁 = 16
learning rate | α = 5 × 10−5
discount factor for return calculation | 𝛾 = 0.99
𝜆 parameter for the generalised advantage estimate | 𝜆 = 0.95
ppo clip parameter | ϵ = 0.2
sample batch size | 𝑇 = 256
sgd minibatch size | 𝑀 = 128
number of epochs executed in every iteration | 𝐾 = 30
target kl-divergence for the calculation of 𝛽 | 𝑑targ = 0.01
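a minimal numpy sketch of the generalised advantage estimate (truncated at the end of the batch rather than summed to infinity) and of the β update rule of equation (6), using the hyperparameter values of table 5:

```python
import numpy as np

def gae(rewards, values, gamma=0.99, lam=0.95):
    """equations (8)-(9); `values` holds value estimates for steps 0..T."""
    T = len(rewards)
    adv = np.zeros(T)
    acc = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # eq. (9)
        acc = delta + gamma * lam * acc                         # eq. (8), truncated
        adv[t] = acc
    return adv

def update_beta(beta, d_kl, d_targ=0.01):
    """adaptive kl coefficient, equation (6)."""
    if d_kl < d_targ / 1.5:
        return beta / 2.0
    if d_kl > d_targ * 1.5:
        return beta * 2.0
    return beta
```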
measurements of virial coefficients of helium, argon and nitrogen for the needs of static expansion method

acta imeko issn: 2221-870x june 2022, volume 11, number 2, 1 4

sefer avdiaj1,2, yllka delija1
1 department of physics, university of prishtina, prishtina 10000, kosovo
2 nanophysics, outgassing and diffusion research group, nanoalb-unit of albanian nanoscience and nanotechnology, 1000 tirana, albania

abstract
generally, there are three primary methods in vacuum metrology: the mercury manometer, the static expansion method, and the continuous expansion method. for pressures below 10 pa, the idea of the primary standard is that the gas is measured precisely at a pressure as high as possible, and then the gas is expanded into bigger volumes; this allows the expanded pressure to be calculated. an important parameter that needs to be taken care of in primary vacuum calibration methods is the compressibility factor of the working gas. the influence of virial coefficients on the realization of primary standards in vacuum metrology, especially in the realization of the static expansion method, is very important. in this paper we present the measured data for the virial coefficients of three gases, helium, argon and nitrogen, measured at room temperature and in a pressure range from 3 kpa to 130 kpa. the dominating term due to real gas properties arises from the second virial coefficient. the influence of higher orders of virial coefficients drops rapidly with lower pressure, particularly for gas pressures lower than one atmosphere. hence, in our calculation, the real gas series was used with the first and second virial coefficients but not with higher-order virial coefficients.

section: research paper
keywords: real gases; virial equation of state; compressibility factor; virial coefficients
citation: sefer avdiaj, yllka delija, measurements of virial coefficients of helium, argon and nitrogen for the needs of static expansion method, acta imeko, vol. 11, no. 2, article 26, june 2022, identifier: imeko-acta-11 (2022)-02-26
section editor: sabrina grassini, politecnico di torino, italy
received august 4, 2021; in final form march 15, 2022; published june 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: yllka delija, e-mail: delijayllka@gmail.com

1. introduction

the deviations of real gases from ideal gas behaviour are best seen by determining the compressibility factor [1]:

$$Z = \frac{P V_m}{R T}, \quad (1)$$

where 𝑃 is the pressure, 𝑉𝑚 (= 𝑉/𝑛) is the molar volume, 𝑇 is the temperature, and 𝑅 is the universal gas constant. at low pressures and high temperatures, the ideal gas law can adequately predict the behaviour of natural gas. however, at high pressures and low temperatures, the gas deviates from ideal gas behaviour and is described as a real gas. this deviation reflects the character and strength of the intermolecular forces between the particles making up the gas. several equations of state have been suggested to account for the deviations from ideality. a very handy expression that allows for deviations from ideal behaviour is the virial equation of state [2]. this is a simple power series expansion in either the inverse molar volume, 1/𝑉𝑚:

$$Z = \frac{p V_m}{R T} = A + \frac{B}{V_m} + \frac{C}{V_m^2} + \cdots, \quad (2)$$

or the pressure, 𝑝:

$$Z = \frac{p V_m}{R T} = A' + B' p + C' p^2 + \cdots, \quad (3)$$

with 𝐴, 𝐴′ the first virial coefficients, 𝐵, 𝐵′ the second virial coefficients and 𝐶, 𝐶′ the third virial coefficients. all virial coefficients are temperature-dependent. according to statistical mechanics, the first virial coefficient is related to one-body interactions, the second virial coefficient is related to two-body interactions and the higher virial coefficients are related to multi-body interactions [1], [3]. the main goal of this experiment is to determine values for the second virial coefficient 𝐵 of the gases helium, argon and nitrogen at room temperature. the second virial coefficient has a theoretical relationship with the intermolecular forces between a pair of molecules and can therefore provide quantitative information on these forces. the results will serve the scientific community in the field of metrology for the most accurate measurement standards in vacuum metrology; in atomic physics they will show the level of interaction of atoms and molecules of certain gases; and from the chemical point of view, knowledge is gained about the behaviour of gases at different pressures. in this way, it is possible to obtain information about the limit of the transition from 'ideal gas' to 'real gas' and vice versa.
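as a small illustration of equations (1) and (2) truncated after the second coefficient (the numeric values in the comment are taken from table 2 later in the paper):

```python
R = 8.314462618  # universal gas constant in J/(mol K)

def z_measured(p, v_m, t):
    """compressibility factor from measured p, v_m, t, equation (1)."""
    return p * v_m / (R * t)

def z_truncated(v_m, a, b):
    """virial series truncated after the second virial coefficient, equation (2)."""
    return a + b / v_m

# helium at 296 K: a = 1.00120 and b = 1.16e-5 m^3/mol (table 2), so at
# v_m = 0.02 m^3/mol the series gives z = 1.00120 + 1.16e-5 / 0.02
```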
2. experimental setup

a schematic illustration of the experimental setup is shown in figure 1. our system comprises five chambers (𝑉0, 𝑉1, 𝑉2, 𝑉3 and 𝑉4) of significantly different sizes, the volumes of which have been determined by using the gas expansion method. vacuum isolation valves are installed between the chambers, and a turbomolecular pump (hicube eco) is connected to valve 5. a capacitance diaphragm gauge (cdg025d) is attached to the chamber 𝑉0 to measure the precise pressure before and after expansion. the whole experiment was performed under well-controlled ambient temperature in the air-conditioned laboratory. in order to reduce temperature drifts and transient local heating during the working day, we kept all electrical equipment (pump and lights) switched on at all times (24 h/day). the average temperature was 296 k and the pressure range was from 3 kpa to 130 kpa [4].

figure 1. schematic illustration of the experimental setup. valves 1, 2, 3 and 4 are installed between the chambers 𝑉0, 𝑉1, 𝑉2, 𝑉3 and 𝑉4. the capacitance diaphragm gauge (cdg) is attached to the chamber 𝑉0. a black rubber hose connects the gas regulator to valve 0 and a flexible metal hose is used from valve 5 to the turbomolecular vacuum pump.

3. volume determination

to determine the volumes of the vacuum chambers, we must know the value of one of the volumes. we measured the geometrical dimensions of the known volume 𝑉0 by using a calibrated caliper. a cylinder with radius 𝑟 and height ℎ has a volume given by 𝑉0 = π 𝑟²ℎ. using the dimensions of the volume 𝑉0, we calculated a mean value of 𝑉0 = 0.71 l with an associated uncertainty of 0.14 % for coverage factor 𝑘 = 1. all the uncertainties throughout this paper are given for coverage factor 𝑘 = 1 [1], [4]. to determine the volumes 𝑉1, 𝑉2, 𝑉3 and 𝑉4, we then used the static expansion method. in this method, the pressures before and after the expansion are used for determining the expansion ratio.
the gas is first enclosed in the smaller volume, then it is allowed to enter the larger volumes to expand under nearly perfect isothermal conditions [5]. this procedure is applicable only if the pressure after expansion can be measured with about the same accuracy as the pressure before expansion. argon was used as the gas for this type of calibration, although helium and nitrogen could also be used. the purity of the argon gas was 5.0 n (99.999 %). the mean values, standard deviations and uncertainties of the volumes of the five vacuum chambers are given in table 1. the measurement uncertainty depends on the ratio of volumes (pressures before and after expansions) [6].

4. compressibility factor

in this section, an analytical approach for calculating the compressibility factor of gases is presented. in this experiment, the gas was expanded four times into different volumes. the experimental procedure can be described as follows. in the beginning, all valves are opened and the entire system is pumped down to less than 10−5 pa. valves 2, 3, 4 and 5 are closed, valves 0 and 1 are opened and gas comes out very quickly into 𝑉0 + 𝑉1. the regulator knob is adjusted to fill the system to a pressure of a little over 130 kpa. the pressure is monitored for a few minutes until it is stable, then it is recorded as 𝑝1; the ambient temperature 𝑇 is also recorded. then valve 2 is opened and the gas expands into 𝑉0 + 𝑉1 + 𝑉2. again, after equilibrium, the pressure is recorded as 𝑝2. then valve 3 is opened and the gas expands into 𝑉0 + 𝑉1 + 𝑉2 + 𝑉3. when equilibrium is reached, the pressure is recorded as 𝑝3. for the final measurement, valve 4 is opened and the gas expands into all volumes 𝑉0 + 𝑉1 + 𝑉2 + 𝑉3 + 𝑉4. after the gas has expanded into the entire system, the pressure drops below 3 kpa and is recorded as 𝑝4. this procedure was repeated nine times, and the equilibrated pressures were recorded at each stage [7], [8]. for the pressure measurements, we used a cdg025d with an uncertainty of 0.2 % [9]. once the pressure at each gas transfer is measured, and the volumes and temperature are also known, we need only the number of moles of gas to find its compressibility factor 𝑍 directly. from the general gas equation, we obtain the expression for the amount of substance:

$$n = \frac{p\,V}{T} \cdot \frac{T_0}{P_0\,V_0}, \quad (4)$$

where 𝑇0 is the standard temperature, 𝑃0 is the standard pressure and 𝑉0 is the molar volume at stp [1]. after we obtained the complete data set 𝑃, 𝑉, 𝑇 and 𝑛, the compressibility factor was calculated by using equation (1). plots of the compressibility factor (𝑍) as a function of inverse molar volume (1/𝑉𝑚) for (a) helium, (b) argon and (c) nitrogen are presented in figure 2. for these real gases, we notice that the shapes of the curves look a little different for each gas, and most of the curves only approximately resemble the ideal gas line at 𝑍 = 1 over a limited pressure range [10]-[12].
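a small sketch of this bookkeeping, covering equations (1) and (4); the stp convention (273.15 k, 101 325 pa) is an assumption here, since the paper does not state which reference values were used:

```python
R = 8.314462618             # J/(mol K)
T0, P0 = 273.15, 101325.0   # assumed stp reference values
V0_MOLAR = R * T0 / P0      # molar volume at stp, about 0.0224 m^3/mol

def amount_of_substance(p, v, t):
    """equation (4); with v0 = r*t0/p0 this reduces to the general gas equation."""
    return (p * v / t) * (T0 / (P0 * V0_MOLAR))

def compressibility(p, v, t, n):
    """equation (1) with v_m = v / n."""
    return p * v / (n * R * t)
```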
table 1. mean value, standard deviation and uncertainty of the volumes.

volume | mean | 𝛿 | 𝑢
𝑉0 (l) | 0.7100 | 9.61 · 10−4 | 1.36 · 10−3
𝑉1 (l) | 0.0668 | 5.82 · 10−5 | 8.71 · 10−4
𝑉2 (l) | 0.7879 | 2.04 · 10−3 | 2.59 · 10−3
𝑉3 (l) | 0.1652 | 2.82 · 10−3 | 1.71 · 10−2
𝑉4 (l) | 0.5416 | 5.48 · 10−3 | 1.01 · 10−2

5. results

5.1. calculation of virial coefficients

at low pressures, e.g. below two atmospheres, the third and all higher virial terms may usually be neglected; hence, the equation of state (2) becomes:

$$Z = \frac{p\,V}{n\,R\,T} = A + \frac{B}{V_m} + \cdots \quad (5)$$

using spreadsheet software such as excel, we plotted 𝑍 as a function of 1/𝑉𝑚 for each trial. for each gas, some curvature is observed, and we determined values for both the first (𝐴) and second (𝐵) virial coefficients. using the linest function in excel, we determined the values and standard deviations of the slope and y-intercept in a linear fit. by properly combining the values of 𝐵 and their uncertainties from multiple trials, an average value of this coefficient and its standard deviation were calculated for each gas [1], [3]. the mean values, standard deviations and uncertainties of the first and second virial coefficients for these three gases are summarized in table 2. figure 3 shows the values of the second virial coefficient for (a) helium, (b) argon and (c) nitrogen at room temperature.

table 2. mean value, standard deviation and uncertainty of the 𝐴 and 𝐵 coefficients for helium, argon and nitrogen.

gas | coefficient | mean | 𝛿 | 𝑢
he | 𝐴 | 1.00120 | 1.26 · 10−5 | 4.22 · 10−6
he | 𝐵 (m³/mol) | 1.16 · 10−5 | 2.33 · 10−7 | 7.78 · 10−8
ar | 𝐴 | 0.99871 | 1.63 · 10−4 | 5.42 · 10−5
ar | 𝐵 (m³/mol) | −1.61 · 10−5 | 3.75 · 10−7 | 1.24 · 10−7
n2 | 𝐴 | 0.9998 | 3.75 · 10−5 | 1.25 · 10−5
n2 | 𝐵 (m³/mol) | −5.1 · 10−6 | 7.25 · 10−8 | 2.41 · 10−8

figure 2. the compressibility factor, 𝑍, as a function of inverse molar volume (1/𝑉𝑚) for (a) helium, (b) argon and (c) nitrogen.

figure 3. values of the 𝐵 coefficient for (a) helium, (b) argon and (c) nitrogen at 296 k.
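the linest step can be reproduced with a least-squares straight line; a minimal sketch (numpy, with illustrative data handling):

```python
import numpy as np

def fit_virial(inv_vm, z):
    """fit z = a + b * (1/v_m), mirroring the excel linest step; returns the
    intercept a (first virial coefficient) and the slope b (second virial
    coefficient, in m^3/mol when 1/v_m is given in mol/m^3)."""
    b, a = np.polyfit(np.asarray(inv_vm), np.asarray(z), 1)
    return a, b
```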
6. discussion

in table 3, the experimental values obtained in this work for the second virial coefficients are compared with values from the literature [1], [13]-[17]. it can be seen that the experimental results show good agreement with the data in the literature.

table 3. comparison between experimental values of second virial coefficients and literature values; 𝐵 in 10−6 m³/mol.

source | 𝑇 (k) | he | ar | n2
literature | 293 | 11.2 [14] | −16.9 [14] | −6.1 [14]
literature | 296 | 11.7 [13] | −16.0 [13] | −5.0 [13]
literature | 298 | 11.8 [15] | −15.8 [16] | −4.5 [16]
literature | 300 | 11.9 [17] | −15.7 [1] | −4.7 [1]
this work | 296 | 11.6 | −16.1 | −5.1

according to the above results, it is possible to make the following considerations:
a. the compressibility factor of helium is greater than 1, which shows that repulsive forces dominate between molecules and atoms, whereas for argon and nitrogen the value of 𝑍 is lower than 1, in which case the attractive forces are stronger (figure 2).
b. at room temperature, the second virial coefficient is positive for helium and negative for argon and nitrogen (figure 3).

7. conclusions

based on the obtained experimental data, the pressure that is generated in the primary pressure standard for these initial pressure values must be corrected at most by up to 0.005 %, since the deviation of the real gas from ideal behaviour is very small. the gas with the smallest deviation from ideal behaviour was nitrogen, with a correction of about 0.001 %; based on these results, we are going to use nitrogen in future calibration procedures.

acknowledgement

the present work was partly financed by the ministry of education, science and technology of the republic of kosovo (mest) for the project: developing of primary metrology in the republic of kosovo - measurement of the compressibility factor of noble gases and nitrogen. most of the experimental equipment was donated by the u.s. embassy in kosovo.

references

[1] j. m. h. levelt sengers, m. klein, j. s. gallagher, pressure-volume-temperature relationships of gases; virial coefficients, national bureau of standards, washington d.c. (1971). online [accessed 14 march 2022] https://apps.dtic.mil/sti/citations/ad0719749

[2] t. d. varberg, a. j. bendelsmith, k. t. kuwata, measurement of the compressibility factor of gases: a physical chemistry laboratory experiment, journal of chemical education (2011), art. 1591. doi: 10.1021/ed100788r

[3] p. j. mcelroy, r. battino, m. k. dowd, compression-factor measurements on methane, carbon dioxide, and (methane + carbon dioxide) using a weighing method, j. chem. thermodynamics 21(12) (1989), pp. 1287-1300. doi: 10.1016/0021-9614(89)90117-1

[4] s. avdiaj, j. šetina, b. erjavec, volume determination of vacuum vessels by gas expansion method, metrology society of india, mapan 30 (2015), pp. 175-178. doi: 10.1007/s12647-015-0137-1

[5] a. kumar, v. n. thakur, h. kumar, characterization of spinning rotor gauge-3 using orifice flow system and static expansion system, acta imeko 9(5) (2020), p. 325. doi: 10.21014/acta_imeko.v9i5.993

[6] c. mauricio villamizar mora, j. j. duarte franco, v. j. manrique moreno, c. e. garcía sánchez, analysis of the mathematical modelling of a static expansion system, acta imeko 10(3) (2021), pp. 185-191. doi: 10.21014/acta_imeko.v10i3.1061

[7] k. jousten, handbook of vacuum technology, wiley-vch, germany (2016), isbn 978-3-527-41338-6, pp. 710-715. doi: 10.1002/9783527688265.ch2

[8] j. w. moore, c. l. stanitski, p. c. jurs, chemistry: the molecular science, brooks cole, cengage learning; 4th edition (march 5, 2010), isbn-10: 1-4390-4930-0, pp. 440-450.

[9] inficon sky cdg025d capacitance diaphragm gauge on eurovacuum. online [accessed 20 april 2022] https://eurovacuum.eu/products/gauges/cdg025d/

[10] h. boschi-filho, c. c. buthers, second virial coefficient for real gases at high temperature, j. phys. chem. 73(3) 1969, pp. 608-615, art. 380. doi: 10.1021/j100723a022

[11] r. balasubramanian, x. ramya rayen, r. murugesan, second virial coefficients of noble gases, international journal of science and research (ijsr), 6(10) (2017). online [accessed 14 march 2022] https://www.ijsr.net/archive/v6i10/art20177192.pdf

[12] c. gaiser, b. fellmuth, highly-accurate density-virial-coefficient values for helium, neon, and argon at 0.01 °c determined by dielectric-constant gas thermometry, the journal of chemical physics 150 (2019), art. no. 134303. doi: 10.1063/1.5090224

[13] b. schramm, e. elias, l. kern, gh. natour, a. schmitt, c. weber, precise measurements of second virial coefficients of simple gases and gas mixtures in the temperature range below 300 k, ber. bunsenges. phys. chem. 95 (1991), pp. 615-621. doi: 10.1002/bbpc.19910950513

[14] a. hutem, s. boonchui, numerical evaluation of second and third virial coefficients of some inert gases via classical cluster expansion, journal of mathematical chemistry 50 (2012), pp. 1262-1276. doi: 10.1007/s10910-011-9966-5
[15] e. bich, r. hellmann, e. vogel, ab initio potential energy curve for the helium atom pair and thermophysical properties of the dilute helium gas. ii. thermophysical standard values for low-density helium, molecular physics 105(23-24) (2007). doi: 10.1080/00268970701730096

[16] d. w. rogers, concise physical chemistry, wiley; 1st edition, march 2011, pp. 18-34, isbn: 978-0-470-52264-6.

[17] d. white, t. rubin, p. camky, h. l. johnston, the virial coefficients of helium from 20 to 300 k, j. phys. chem. 64(11) 1960, pp. 1607-1612, art. 774. doi: 10.1021/j100840a002

tracking and viewing modifications in digital calibration certificates

acta imeko issn: 2221-870x march 2023, volume 12, number 1, 1 7

vashti galpin1, ian smith2, jean-laurent hippolyte2
1 school of informatics, university of edinburgh, united kingdom
2 national physical laboratory, united kingdom

abstract
trust in current and historical calibration data is crucial. the recently proposed xml schema for digital calibration certificates (dccs) provides machine-readability and a common exchange format to enhance this trust. we present a prototype web application developed in the programming language links for storing and displaying a dcc using a relational database. in particular, we leverage the temporal database features that links provides to capture different versions of a certificate and inspect differences between versions. the prototype is the starting point for developing software to support dccs and the data with which they are populated, and it has underlined that dccs are the tip of the iceberg in automating the management of digital calibration data, activity that includes data provenance and tracking of modifications.

section: research paper
keywords: digital calibration certificates, provenance, temporal databases, xml, prototype
citation: vashti galpin, ian smith, jean-laurent hippolyte, tracking and viewing modifications in digital calibration certificates, acta imeko, vol. 12, no. 1, article 11, march 2023, identifier: imeko-acta-12 (2023)-01-11
section editor: daniel hutzschenreuter, ptb, germany
received november 20, 2022; in final form march 20, 2023; published march 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was supported by erc consolidator grant skye [grant number 682315] and by an iscf metrology fellowship grant provided by the uk government's department for business, energy and industrial strategy (beis).
corresponding author: vashti galpin, e-mail: vashti.galpin@ed.ac.uk

1. introduction

information relating to the calibration of an instrument or artefact is captured in documents referred to as calibration certificates. these documents are typically provided either as physical paper-based documents or in electronic form, for example, in archivable portable document format (pdf-a). a downside of both approaches is that the information they contain is not machine-readable, that is, it is not available in a form that can be read and processed by computer.
using the information therefore requires, to some degree, human involvement and as such is prone to error, for example, arising from the transcription of numerical information from paper to computer. recent initiatives have considered the replacement of paper-based and electronic calibration certificates by fully machine-readable calibration certificates, referred to as 'digital calibration certificates', often abbreviated to 'dccs'. the european metrology programme for innovation and research (empir) [1] funded the 'smartcom' joint research project (2018-2021) [2], which developed a framework for dccs. the framework allows the key calibration certificate components of administrative data and measurement results to be stored. the recipient of a dcc will have, or be able to develop, software to ingest and use the information stored therein. ensuring the integrity and longevity of current and historical calibration data is essential for calibration laboratories to maintain long-term trusted relationships with their customers. it is also a requirement of iso/iec 17025 accreditation for these laboratories [3], along with the preservation of information about the methods, processes, software, equipment, and staff involved in the calibration task. this accompanying information, which is essential to trace and potentially repeat the conditions under which the calibration is performed, is more generally known as 'provenance information' [4]. laboratories must be able to track and compare the differences between successive calibrations of the same artefact in data (for example, newly observed measurement results) or in provenance (for example, a different employee now signs off the calibration results). since dccs are containers for the trusted exchange of calibration data, their potential to increase the productivity of laboratories relies on automated, integrated, and time-based curation of this data and associated provenance. the structure of this paper is as follows: we provide background on dccs before we go on to describe our prototype. we discuss the choices made in the development of the prototype and illustrate its usage with a screenshot. we conclude with ideas for further work. this paper is an extended version of an earlier conference paper [5].

2. digital calibration certificates

the structure of a dcc [6] reflects the information that is required by iso/iec 17025 [3] for reporting calibration results. a dcc comprises two compulsory sections.
the first compulsory section contains the administrative information, including that which is generally presented on the first page of a paper-based certificate. the second compulsory section contains the measurement results. the measurement results section itself relies upon a framework, referred to as the 'd-si', developed specifically for the storage of measurement data. the framework ensures that quantity values, units of measurement and uncertainty information can all be represented. a dcc may also include two optional sections. the first optional section contains information presented purely for humans, for example, calibration-specific data sheets, and other auxiliary, machine-interpretable data, for example, relational tables. the second optional section contains an encoded version of a human-readable version of the certificate. the d-si and dcc frameworks specify, for example, how numerical values and date and time information should be presented, and they use the bipm si brochure [7] and the siunitx package for latex [8] as the basis for the provision of units of measurement. the d-si and the dcc may be implemented in a language chosen by the user, for example, extensible markup language (xml) or javascript object notation (json). the smartcom project has developed and made available an xml schema for the d-si framework [9], while the physikalisch-technische bundesanstalt (ptb), the national metrology institute (nmi) of germany, has made available an xml schema for the dcc [10]. demonstrators have been developed by nmis using excel and python [11], [12] and as a web application (gemimeg) [13] to illustrate the use of these schemas. a different approach considers embedding calibration data in pdf calibration certificates to support machine-readability [14]. nmis have also been investigating good practice for dccs [15] and usage in specific domains [16], [17]. developers of commercial software for calibration are also starting to support the use of dccs [18], [19]. the first imeko tc6 m4dconf, held in berlin in 2022, focussed on the digital transformation of metrology, and many submissions considered dccs [20]. we are aware that the frameworks (and related schemas) may be subject to future updates, for example, if the bipm si brochure is updated, or if the requirements for the contents of a calibration certificate change. in the approach we have taken, dealing with xml schema changes is straightforward; in fact, such a change occurred during our development when a newer version of the dcc schema became available and we discovered that the dummy certificate with which we were working was missing a compulsory xml element.

3. the dcc prototype

our prototype application brings together concepts from metrology (specifically, the xml schema for dccs) and academic research in databases and programming languages, and it enables the transfer of research ideas from academia. on the computer science research side, there are two distinct concepts, temporal databases and a technique for storing xml documents in a relational database, that are combined using the links programming language, a tierless language that supports language-integrated query and provides a single language for the development of web applications with database back-ends [21], [22]. we now consider these three components in detail: links, temporal databases, and the storage of xml documents; we then consider some details of the implementation of the application.
3.1. the links programming language

links is a strongly and statically typed functional programming language that is cross-tier (or tierless): it removes the need for the developer to write javascript or a particular database query language (in this case, postgresql [23]) for the different tiers of a web application providing data from a database. in particular, links assists in the writing of correct software by providing language-integrated query, which allows the programmer to write high-level queries that are transformed into correct sql queries with known performance for execution on the database. furthermore, the type-checking of variables and functions before program execution reduces the likelihood of run-time errors. figure 1 illustrates the links architecture. the links interpreter takes the links code and generates correct html and javascript to run in the user's browser, and to support the exchange of data between the browser and the links program, thus allowing an interactive webpage. the interpreter also generates correct sql queries, which are passed to a database server for execution; the server responds with the results. using this approach, data-intensive operations can take place on the database server, exploiting the fact that database systems such as postgresql employ efficient data structures and algorithms for queries. in particular, indexes on key values can make search operations very efficient. links provides syntax to specify database tables as well as comprehension syntax to iterate over each row of a database table. we use the mvu (model-view-update) library [24] for the development of the web interface as it provides functions to use the elements of the bootstrap toolkit for website development [25], thereby affording a higher level of abstraction by which to achieve an appealing web frontend.

figure 1. the links architecture.

3.2. temporal databases

temporal databases capture when a piece of data is valid. in the case of a relational database management system (rdbms), we are interested in the time period for which a row in a database table is valid, and the inclusion of additional fields (columns) allows this time period to be recorded [26], [27]. we are interested in the provenance of dccs and hence we want to capture when data changes in the database, namely transaction-time information. temporal databases can also record valid-time information, which is information about when something is true in the real world. figure 2 demonstrates the difference between an update in a standard relational database and one in a temporal database supporting transaction time. in the standard case, the new data overwrites the old data and there is no record of the fact that there was a previous value for the field. in a temporal database, two additional fields capture information about when the data was changed, permitting queries such as 'what were the values on this day last year?', 'has this field ever had a different value?', and 'which fields have the most frequent changes?'. in the example, the date fields are given informally; however, in the database, the fields are of sql data type timestamp with time zone and cannot be null. in terms of performance in the dcc scenario, since updates are not frequent occurrences, the additional database operations required in the temporal case will not impose a significant overhead. links has recently been extended with temporal features that have been demonstrated in the context of curated scientific databases [28], and we use these features in the development of our prototype.
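as a rough illustration of a transaction-time update (plain sql driven from python here, rather than links; the table and column names are simplified from those used in section 3.3, and a far-future timestamp stands in for 'currently valid' so that the columns are never null):

```python
import sqlite3
from datetime import datetime, timezone

FOREVER = "9999-12-31T00:00:00+00:00"   # sentinel for 'still valid'

def temporal_update(conn, dde, new_text):
    """close the currently valid row and insert the new version,
    instead of overwriting the value in place."""
    now = datetime.now(timezone.utc).isoformat()
    cur = conn.cursor()
    cur.execute("UPDATE node SET tt_to = ? WHERE dde = ? AND tt_to = ?",
                (now, dde, FOREVER))
    cur.execute("INSERT INTO node (dde, text, tt_from, tt_to) VALUES (?, ?, ?, ?)",
                (dde, new_text, now, FOREVER))
    conn.commit()

def value_at(conn, dde, when):
    """what was the value of this node at a given time?"""
    cur = conn.execute(
        "SELECT text FROM node WHERE dde = ? AND tt_from <= ? AND ? < tt_to",
        (dde, when, when))
    return cur.fetchone()
```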
3.3. xml documents

for this paper, we have used a synthetic dcc in xml, containing dummy administrative data and measurement results, formatted according to the xml schema [10] made available by ptb. xml is a tree-structured language, and an xml document is an (inverted) tree of nodes starting from a root node. each node can have any number of child nodes; nodes without children are called leaves. each node in an xml document is either an element or a text node (a string of characters). an element consists of an opening tag and a closing tag (or a combined opening and closing tag).

figure 2. non-temporal update of a field versus a temporal update.

figure 3. an example dcc in xml (text nodes such as gb, en and laboratory digital thermometer are shown by white text on a dark blue background, attributes by black text on a grey background and the other components are xml elements defined by opening and closing tags; the ellipses indicate where elements have been omitted).
the dde algorithm determines this unique label (referred to as dde) which consists of a sequence of integers (the nodes can be labelled with negative numbers in certain cases) separated by dots – so-called dot decimal notation. a total order is defined for the labels (which differs from the standard ordering over dot-decimal notation), and this ordering can be used to reconstruct the nesting of the xml document as well as to determine relationships between nodes such as parent, child, and sibling. the main table stores nodes and is similar to the one illustrated in the right-hand side of figure 2. its schema is (dde, tag, text, path, tt-from, tt-to). each row has the dde value for the node as the key attribute. if the node is an element, the element tag will appear in the tag column, otherwise if it is a text node, its text string will appear in the text column (one of these two columns is always empty). the path column stores the list of the element tags followed from the root element to the parent of the node. the last two columns tt-from and tt-to store the timestamps that describe when the node is valid. because the dde value captures the relationships between nodes, there is no need to explicitly refer to other nodes in a row of this table. the second table contains attribute information and has the schema (dde, attr, value, tt-from, tt-to). in an element, each attribute can only appear at most once, hence the attribute name and the dde value are sufficient to uniquely identify the attribute’s value. consider the following xml fragment: text<\el2> <\el3> <\el1>. the dde algorithm will generate the content shown in tables 1 and 2. for this example, the length of each segment of the key is 3 characters. in the implementation, this length is a parameter, and it can be set appropriately for document size. as can be seen from this explanation, the actual details of the xml schema are not used for the database schema – there is no table for customer details, for example. by using the dde ordering with the appropriate links code to specify joins over the relations, documents can be stored, retrieved and displayed, as shown by our implementation. 3.4. implementation the prototype provides an interface for viewing a single dcc. straightforward extensions include the ability to edit the dcc and to validate that it adheres to the dcc and d-si xml schemas. further obvious extensions are the ability to work with many dccs, to compare different dccs, and to support the signing of dccs. we are interested in provenance beyond tracking changes in the data, such as recording which person and what software made the changes. in the longer term, we will require systems that support the whole calibration workflow including automated collection and processing of machine-readable calibration data. using the interface, it is possible to view the current state of the document, the state at a previously specified date and time, and to compare two versions at two specified dates and times. it is possible to hide or show specific subtrees of the document, and when doing the comparison, to see the subtrees where changes have been made. the change analysis option allows the user to see a history of all changes over the life of a dcc. figure 4 shows how changes are highlighted as well as cumulative totals of insertions and changes for each tag and text node in the document. there are various options to view subtrees for a specific tab or to control the display of the whole document. 
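The following is a simplified sketch of how Dewey-style labels alone recover document order and parent/child relationships; it is ours, not the prototype's Links code, and the full DDE order, including the negative components that avoid relabelling on insertion, is glossed over.

    # Simplified sketch of storing nodes keyed by Dewey-style labels and
    # recovering structure from the label alone. Real DDE also supports
    # insertion without relabelling via special (negative) components,
    # which this sketch glosses over. Data mirrors Tables 1 and 2.
    node_table = [
        # (dde, tag, text, path)
        ("001",         "el1", None,   "\\"),
        ("001.001",     "el2", None,   "\\el1\\"),
        ("001.001.001", None,  "text", "\\el1\\el2\\"),
        ("001.002",     "el3", None,   "\\el1\\"),
    ]

    def key(dde):
        """Each dot-separated segment is compared numerically."""
        return tuple(int(seg) for seg in dde.split("."))

    def is_parent(p, c):
        """p is the parent of c iff c's label extends p's by one segment."""
        kp, kc = key(p), key(c)
        return len(kc) == len(kp) + 1 and kc[:len(kp)] == kp

    # Document order falls out of sorting by the label key alone:
    for dde, tag, text, path in sorted(node_table, key=lambda r: key(r[0])):
        depth = dde.count(".")
        print("  " * depth + (tag if tag else repr(text)))

    print(is_parent("001", "001.001"))      # True
    print(is_parent("001", "001.001.001"))  # False (grandchild)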
The figure also shows how details of changes are displayed. (Note that this screenshot is of a different DCC to the one presented in Figure 3.) The Links code for the prototype is available at https://github.com/vcgalpin/dcc-xml-temporal [31].

Table 1. Example node table.
dde           tag    text    path         tt-from   tt-to
001           el1            \            …         …
001.001       el2            \el1\        …         …
001.001.001          text    \el1\el2\    …         …
001.002       el3            \el1\        …         …

Table 2. Example attribute table.
dde       attr     value            tt-from   tt-to
001       attr1    a value          …         …
001.002   attr2    another value    …         …

As mentioned above, a more complete application should be able to validate against the DCC and D-SI XML schemas. The level of validation that is provided by these schemas is very high but not complete. In some cases, patterns are used to ensure that values are within bounds; for example, the si:probabilityValueType [9] has an associated pattern.

Spherical-wave modes with n > k a are cut off and will not contribute in the far field; because of this, the series can be truncated:

$N_m = 2 \sum_{n=1}^{N} (2 n + 1) = 2 (N^2 + 2 N)$, (4)

where Nm is the total number of modes. Each coefficient has an independent real and imaginary part. With this expression, an upper bound can be found. Directivity is the ratio of the power radiated in a given direction (at a given angle and distance) to the total radiated power divided by the total solid angle. The Cauchy-Schwarz inequality yields a sum over the number of modes (Nm), which is reduced by a factor of two. The resulting expression, once k a is substituted for N, is:

$D_{\max} \approx \begin{cases} 3, & k a \le 1 \\ (k a)^2 + 2 k a, & k a > 1 \end{cases}$ (5)

The only problem here is that Dmax is an overestimation of the directivity of unintentional antennas. A better estimate can be derived by assuming that the spherical-wave coefficients are independent random variables. For unintentional emitters, the means of the co- and cross-polarized terms will be equal, so the mean value of D is 1:

$\overline{D}_{co} = \overline{D}_{cross} = 1/2 .$ (6)

Figure 3. Equivalent circuit of a three-phase Y-connected BLDC motor [14].
Figure 4. Calculated (a) resistance and (b) inductance of stator windings per phase for common mode and differential mode [9].

The field components of expression (1) are the following:

$|E_\theta(r, \theta, \vartheta)|^2 = \frac{\eta}{4 \pi r^2} \left| \sum Q^{(3)}_{smn} K_{smn}(\theta, \vartheta) \cdot \hat{e}_{\theta,smn} \right|^2$ (7)

$|E_\vartheta(r, \theta, \vartheta)|^2 = \frac{\eta}{4 \pi r^2} \left| \sum Q^{(3)}_{smn} K_{smn}(\theta, \vartheta) \cdot \hat{e}_{\vartheta,smn} \right|^2$ (8)

As mentioned earlier, the real and imaginary parts of $Q^{(3)}_{smn}$ are independent and Gaussian distributed with zero mean. The directivities Dco and Dcross are similarly distributed. The expected value of the maximum over Ns samples of a χ² distribution with two degrees of freedom is:

$\overline{D}_{co,\max} = \overline{D}_{co} \sum_{n=1}^{N_s} \frac{1}{n} .$ (9)

This summation can be approximated, and substituting the expected value of Dco from (6) yields expression (10):

$\overline{D}_{co,\max} \approx \frac{1}{2} \left[ 0.577 + \ln(N_s) + \frac{1}{2 N_s} \right]$ (10)

For k a > 1, Ns = 2 Nm with N = k a, where Nm is given by expression (4). For k a = 1 (Ns = 12), (10) yields $\overline{D}_{co,\max} \approx 1.55$. A physical interpretation of Ns = 12 for k a ≤ 1 is that the real and imaginary parts of six dipole moments (three electric and three magnetic) yield 12 independent source contributions. The directivity estimate is also very near the directivity of a single short dipole (D = 1.5).
Thus, using 1.55 for electrically small emitters in (10) results in a continuous function and should give good estimates of directivity:

$\overline{D}_{\max} \approx \begin{cases} 1.55, & k a \le 1 \\ \frac{1}{2} \left[ 0.577 + \ln\left(4 (k a)^2 + 8 k a\right) + \frac{1}{8 (k a)^2 + 16 k a} \right], & k a > 1 \end{cases}$ (11)

The main difference between the intentional-emitter upper bound given by (5) and the unintentional-emitter expected value given by (11) is that the upper bound increases rapidly, as the square of the electrical size k a, while the expected value increases only as the natural logarithm (ln) of the electrical size k a. The estimated, measured and simulated total radiated power ratios for an example EUT are compared in [3]. Figure 5 shows that the measurement result and the simulation result are in close agreement; the theoretical estimate is also adequately good. This means that the estimate can be used instead of measurement and simulation, but if higher precision is needed, they cannot be ignored. The theory also shows that, in the earlier design stages, the estimate alone can be enough to determine the overall EMC performance. This can further reduce costs, because we do not need actual measurements or time-consuming simulation processes to obtain antenna-like properties (a numerical sketch of estimates (5) and (11) is given below). On the other hand, a large and complex device containing many different parts is usually made of individual components whose EMC properties are measured by the manufacturer (for example, in the automotive industry, every electronic component should pass EMC restrictions before being built into the actual vehicle), so this type of estimation or simulation process can be skipped [18].

6. Antenna arrays

At this point, we should have all the parameters needed to handle our individual devices as antennas. Measurements, simulations, and estimations must give enough information about the product. When assembling the final, complex device made of the individual devices, we can treat it like an antenna array. A classical antenna array (often called a 'phased array') is a set of two or more antennas whose signals are combined or processed to achieve improved performance over that of a single antenna. These types of antennas can be used to increase overall gain, maximize the signal-to-interference-plus-noise ratio, steer the array so that it is most sensitive in a given direction, cancel out interference, and so on [19]. These applications (some of them with reciprocity) are also the basis of the decrease in the overall EMC performance of a complex system. In a phased array it is not so hard to add the different antennas together, because they usually consist of similar elements arranged symmetrically [4], [20]. Classical antenna arrays are made for specific purposes, but in our case the different devices, according to their operation, orientation, and distance from each other, act like transmitters but also like receivers [21]. If we choose one operating mode in which we handle all of them as receivers, we can calculate the EMI properties. Next to the weighting, we need to include distance and directivity factors to get accurate results. We can calculate the overall power of the radiated electromagnetic waves if we handle all of them like transmitters. And finally, if we want a mixed-operation calculation, because we know that some of the individual devices work more like transmitters (they emit more EM waves than they are immune to) and some of them work more like receivers (they are sensitive to EM waves but do not radiate as much), we need to calculate them one by one.
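As a quick numerical illustration of the two directivity estimates, the following sketch evaluates the upper bound (5) and the expected value (11) for a few electrical sizes; it is a direct transcription of those formulas, with no assumptions beyond them.

    # A small numerical sketch of the directivity estimates quoted above:
    # the intentional-emitter upper bound (5) and the unintentional-emitter
    # expected value (11), both as functions of electrical size ka.
    import math

    def d_max_upper(ka):
        """Upper bound (5) for intentional emitters."""
        return 3.0 if ka <= 1 else ka**2 + 2*ka

    def d_max_expected(ka):
        """Expected maximum directivity (11) for unintentional emitters."""
        if ka <= 1:
            return 1.55
        ns = 4*ka**2 + 8*ka          # Ns = 2*Nm with N = ka, Nm from (4)
        return 0.5*(0.577 + math.log(ns) + 1.0/(2*ns))

    for ka in (0.5, 1, 2, 5, 10):
        print(f"ka={ka:5}: bound={d_max_upper(ka):8.2f}  expected={d_max_expected(ka):6.2f}")

Running this shows the bound growing quadratically while the expected value grows only logarithmically, which is exactly the difference discussed above.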
This one-by-one method is like superposition in circuit theory, but with antennas [18]. The weighting, antenna factors, distances, and directivity properties must stay, but the devices can be "switched off" one by one; in each round we calculate the effect on all the others, and finally we sum up the results of the rounds. In each iteration, it is possible to inspect the effect of only one device on the others, or to select a group of devices over several iterations. This yields a result similar to taking the complex system, putting it in an EMC chamber and measuring; after one complete measurement, we switch off some devices, or leave only a small group of devices on, and measure the EMC again. The cost of calculation alone and the cost of physical measurement cannot be compared [22].

Figure 5. The estimated (theory), measured, and simulated ratio for an example EUT [3].

7. Possible errors

This general theory is based on devices as antennas, so one thing that must be mentioned is conducted EMC [23]. It is acceptable to measure the connecting wires, but a real-life application may use cables that differ in length, diameter, shielding properties, etc. Although we know the radiation pattern and the radiated immunity of the product, if we want proper simulation results, we must inspect the connecting wires as well. This is not the most important issue in all applications but, for example, in the automotive industry, where many wires run inside a metal body crowded with electronics, the problem must be addressed [24], [25]. Another possible factor is the passive, shielding-like parts in the complex system. Mountings, covers, and design elements made from metal can easily distort the results. If we account for them like the "active" devices, the risk of error decreases, but this procedure greatly enlarges the number of devices. The less precise the properties we use, the less precise the results we get: if EMC measurements or proper simulations are not available, we can estimate the performance of the device, but the more cases we calculate with estimations, the more the results may fall short of our expectations in precision.

8. Conclusions and outlook

This paper provided a theory for handling devices as antennas to easily calculate EMC properties when several devices are used close together. The individual steps are commonly used but, until this point, we have not found this combined procedure. The scenario arises easily in electrical cabinets, in vehicles, or in our homes, because we use more and more electronic devices. To obtain results, we need antenna-like properties of the device, such as overall gain, frequency characteristic, directivity, and other factors. These can be obtained from measurements in a proper EMC chamber, from 3D simulations and calculations, or from estimations. With these, we must calculate the final performance of this antenna array with different methods, depending on which performance parameter we need. The whole concept is illustrated in Figure 6. Further research on this theory must be done, because the whole process (in ideal circumstances) can easily cut costs in design and EMC testing. Also, to prove the concept, example systems should be built to try the different approaches.

Figure 6. An illustration showing the whole concept.

References

[1] R. G. Jobava, A. L. Gheonjian, J. Hippeli, G. Chiqovani, D. D. Karkashadze, F. G. Bogdanov, B. Khvitia, A. G. Bzhalava, Simulation of low-frequency magnetic fields in automotive EMC problems, IEEE Transactions on Electromagnetic Compatibility 56(6) (2014), pp. 1420-1430. doi: 10.1109/TEMC.2014.2325134
[2] A.
Barchanski, EMC simulation of consumer electronic devices, High Frequency Electronics, July 2013.
[3] P. F. Wilson, D. A. Hill, C. L. Holloway, On determining the maximum emissions from electrically large sources, IEEE Trans. Electromagn. Compat. 44(1) (2002), pp. 79-86. doi: 10.1109/15.990713
[4] C. A. Balanis, Antenna Theory, Analysis and Design, John Wiley & Sons, Inc., 1982. ISBN: 9780471606390
[5] J. Glimm, K. Münter, R. Pape, M. Spitzer, New results of antenna calibration in a single-antenna set-up, XVI IMEKO World Congress, Vienna, Austria, 25-28 Sept. 2000, pp. 1-3. Online [accessed 17 August 2021] https://www.imeko.org/publications/wc-2000/imeko-wc-2000-unc-p507.pdf
[6] F. Leferink, High intensity electromagnetic field generation using a transportable reverberation chamber, URSI Gen. Assem. Int. Symp., Ghent, Belgium, 9-16 Aug. 2008. Online [accessed 17 August 2021] https://www.ursi.org/proceedings/procga08/papers/e01p6.pdf
[7] M. Borsero, A. D. Chiara, C. Pravato, A. Sona, M. Stellini, A. Zuccato, Considerations about radiated emission tests in anechoic chambers that do not fulfil the NSA requirements, Proc. 16th IMEKO TC4 Int. Symp. and 13th TC21 Int. Workshop on ADC Modelling and Testing, Florence, Italy, 22-24 Sept. 2008, pp. 825-830. Online [accessed 3 September 2021] https://www.imeko.org/publications/tc4-2008/imeko-tc4-2008-160.pdf
[8] F. G. Awan, N. M. Sheikh, J. Gohar, Simulation of a parametric model for interference cancellation in open space EMC measurement, Proc. 2010 Seventh International Conference on Information Technology: New Generations, Las Vegas, NV, USA, 12-14 April 2010, pp. 924-928. doi: 10.1109/ITNG.2010.197
[9] K. Maki, H. Funato, L. Shao, Motor modeling for EMC simulation by 3-D electromagnetic field analysis, Proc. 2009 IEEE Int. Electr. Mach. Drives Conf. IEMDC '09, Miami, FL, USA, 3-6 May 2009, pp. 103-108. doi: 10.1109/IEMDC.2009.5075190
[10] Padmaraja Yedamale, Brushless DC (BLDC) motor fundamentals, Microchip, p. 20, 2004.
[11] A. Cataldo, G. Monti, E. De Benedetto, G. Cannazza, L. Tarricone, L. Catarinucci, On the use of a reliable low-cost set-up for characterization measurements of antennas, Proc. 16th IMEKO TC4 Int. Symp. and 13th TC21 Int. Workshop on ADC Modelling and Testing, Florence, Italy, 22-24 Sept. 2008, pp. 62-65. Online [accessed 3 September 2021] https://www.imeko.org/publications/tc4-2008/imeko-tc4-2008-069.pdf
[12] D. Giordano, L. Zilberti, M. Borsero, R. Forastiere, W. Wang, Validation of numerical methods for electromagnetic dosimetry through near-field measurements, Acta IMEKO 4(1) (2016),
pp. 90-96. doi: 10.21014/acta_imeko.v4i1.169
[13] D. Giordano, L. Zilberti, M. Borsero, R. Forastiere, W. Wang, Experimental set-up for the validation of numerical methods in electromagnetic dosimetry, 19th IMEKO TC4 Symp. Meas. Electr. Quant. and 17th Int. Workshop on ADC/DAC Modelling and Testing, Barcelona, Spain, 18-19 July 2013, pp. 549-554. Online [accessed 17 August 2021] https://www.imeko.org/publications/tc4-2013/imeko-tc4-2013-130.pdf
[14] M. Gries, B. Mirafzal, Permanent magnet motor-drive frequency response characterization for transient phenomena and conducted EMI analysis, Proc. 2008 Twenty-Third Annual IEEE Applied Power Electronics Conference and Exposition, Austin, TX, USA, 24-28 Feb. 2008, pp. 1767-1775. doi: 10.1109/APEC.2008.4522966
[15] P. F. Wilson, Radiation patterns of unintentional antennas: estimates, simulations, and measurements, Proc. 2010 Asia-Pacific Symp. Electromagn. Compat. APEMC 2010, Beijing, China, 12-16 April 2010, pp. 985-989. doi: 10.1109/APEMC.2010.5475702
[16] G. Koepke, D. Hill, J. Ladbury, Directivity of the test device in EMC measurements, IEEE Int. Symp. Electromagn. Compat. 2 (2000), pp. 535-540. doi: 10.1109/ISEMC.2000.874677
[17] J. E. Hansen (ed.), Spherical Near-Field Antenna Measurements, Herts, United Kingdom, The Institution of Engineering and Technology, 2008. ISBN: 086341110X. doi: 10.1049/PBEW026E
[18] B. A. Austin, A. P. C. Fourie, Characteristics of the wire biconical antenna used for EMC measurements, IEEE Trans. Electromagn. Compat. 33(3) (1991), pp. 179-187. doi: 10.1109/15.85131
[19] antenna-theory.com, Antenna arrays (phased arrays). Online [accessed 17 August 2021] http://www.antenna-theory.com
[20] X. Wang, Z. Peng, K.-H. Lim, J.-F. Lee, Multisolver domain decomposition method for modeling EMC effects of multiple antennas on a large air platform, IEEE Trans. Electromagn. Compat. 54(2) (2012), pp. 375-388. doi: 10.1109/TEMC.2011.2161871
[21] M. T. Ma, B. K. Singaraju, Theory and Applications of Antenna Arrays, New York, John Wiley & Sons Inc., 1975. ISBN: 9780471557951
[22] S. Frei, R. G. Jobava, D. Topchishvili, Complex approaches for the calculation of EMC problems of large systems, Proc. 2004 International Symposium on Electromagnetic Compatibility, Silicon Valley, CA, USA, 9-13 Aug. 2004, pp. 826-831. doi: 10.1109/ISEMC.2004.1349929
[23] A. M. Silaghi, A. P. Buta, M. Silviu Baderca, A. De Sabata, Methods for reducing conducted emissions levels, Proc. 22nd IMEKO TC4 Int. Symp. and 20th Int. Workshop on ADC Modelling and Testing, Iasi, Romania, 14-15 Sept. 2017, pp. 352-355. Online [accessed 17 August 2021] https://www.imeko.org/publications/tc4-2017/imeko-tc4-2017-069.pdf
[24] G. Chavka, M. Sadowski, N. Litwinczuk, M.
Garbaruk, Structure and EMC simulation of vehicle radiocommunication base station, Proc. IEEE 6th International Symposium on Electromagnetic Compatibility and Electromagnetic Ecology, St. Petersburg, Russia, 21-24 June 2005, pp. 111-115. doi: 10.1109/EMCECO.2005.1513077
[25] D. Senic, A. Sarolic, Simulation of a shipboard VHF antenna radiation pattern using a complete sailboat model, Proc. SoftCOM 2009, 17th International Conference on Software, Telecommunications and Computer Networks, Hvar, Croatia, 24-26 Sept. 2009, pp. 65-69. Online [accessed 17 August 2021] https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5306818

Characterisation of glue behaviour under thermal and mechanical stress conditions

ACTA IMEKO, ISSN: 2221-870X, December 2021, Volume 10, Number 4, 162-168

Gianluca Caposciutti1, Bernardo Tellini1, Alfredo Cigada2, Stefano Manzoni2
1 Dip. di Ingegneria dell'Energia, dei Sistemi, del Territorio e delle Costruzioni, Università di Pisa, Largo Lucio Lazzarino, 56125 Pisa, Italy
2 Dip. di Meccanica, Politecnico di Milano, Via La Masa 1, 20156 Milano, Italy

Section: Research paper
Keywords: temperature cycles; glue bonding; mechanical stress; thermal stabilization
Citation: Gianluca Caposciutti, Bernardo Tellini, Alfredo Cigada, Stefano Manzoni, Characterization of glue behaviour under thermal and mechanical stress conditions, Acta IMEKO, vol. 10, no. 4, article 23, December 2021, identifier: IMEKO-ACTA-10 (2021)-04-23
Section editors: Roberto Montanini, Università di Messina and Alfredo Cigada, Politecnico di Milano, Italy
Received July 29, 2021; in final form November 30, 2021; published December 2021
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Gianluca Caposciutti, e-mail: gianluca.caposciutti@ing.unipi.it

Abstract: New low-cost measuring devices require that the box housing and the electronics have costs aligned with the sensing system. Nowadays, metallic clips and/or glue are commonly used to fix the electronics to the box, thus transmitting the motion of the structure to the sensing element. However, these systems may undergo daily or seasonal thermal cycles, and the combined effect of thermal and mechanical stress can introduce significant uncertainties into the evaluation of the measurand. To study these effects, we prepared parallel-plate capacitors using glue as the dielectric material. We used different types of fixing and sample assembly to separate the effects of glue softening on the capacitor active area and on the plate distance. We then assessed the sample modification by measuring the capacitance variation during controlled temperature cycles. We explored possible non-linear behaviour of the capacitance vs. temperature and possible effects of thermal cycles on the glue geometry. Further work is still needed to properly assess the nature of this phenomenon and to study the effect of mechanical stress on the samples' capacitance.

1. Introduction

Attention to structural health monitoring (SHM) systems, which monitor possible infrastructure damage, is growing a lot these days [1]; unfortunately, the same attention is not being paid to metrology issues concerning the quality of measurements and their reliability. New low-cost measurement nodes monitoring the structure's motion also require that the box housing the electronics, and the elements fixing the boxes to the structure, have costs aligned with those of the sensing nodes. The most common solution today is the use of plastic boxes granting the required IP protection grade; these are fixed to the concrete structure by means of dowels and/or glue. Also, the boards hosting the sensors, microcontrollers and the needed electronics, today designed with a flat bottom to grant the same motion of the box and the sensor (preventing the dynamic behaviour of the board from affecting measurements), are connected to the enclosing box by means of metallic clips or glue. Thus, there is a chain between the structure surface and the sensing element with many interfaces, influencing the metrological performance of the whole system.

The presence of these interfaces can have significant consequences on the measurements, especially when MEMS clinometers and accelerometers are taken into consideration. Indeed, their working principle is based on the estimation of the angle between the sensing axis and the gravity vector. With the sensor glued to the enclosing box, when the temperature changes (e.g., due to usual environmental thermal shifts), the glue can change its behaviour and geometrical layout. The latter aspect implies that a significant systematic error can affect the readout, e.g., by introducing a temperature-dependent offset, while this effect must be avoided as far as possible to preserve the quality of the measurement. Such a goal can be achieved by properly choosing the type of glue through a study of its thermo-mechanical behaviour. In this paper, this is accomplished by capacitive measurements on tailored set-ups, as outlined below. Sensors fixed to a structure for monitoring purposes, for instance on a bridge, can undergo important temperature changes, very high during summer and very low during winter. In order to better define the glue behaviour, a reference to capacitive displacement sensors has been adopted for a thorough understanding of the temperature-related phenomena. Capacitive displacement sensors are those exhibiting the highest sensitivity (up to hundreds of volts per millimetre), so it has been decided to use the glue as a dielectric interposed between two metallic plates: even the smallest change in the plate distance due to temperature can therefore be easily detected.
Capacitance measurements have therefore been carried out during temperature cycles, trying to separate any change in the glue/dielectric features from a real change of the plate distance: the latter is the main concern for SHM issues. According to theory, the capacitance between two conductors is defined as the ratio between the charge on one conductor and the potential difference between them. Such a parameter depends on the geometry of the conductors, on their relative distances, and on the characteristics of the interposed dielectric medium. Apart from simple geometries and ideal dielectric media, an accurate calculation and measurement of the capacitance is a difficult task. For slowly varying fields, the electric permittivity ε is a macroscopic constitutive parameter which relates the macroscopic electric flux density $\vec{D}$ and electric field $\vec{E}$ [2], [3]. A comprehensive discussion of the physics of frequency dispersion and of the effective time constants in dielectric media remains a complex issue, as does a clear operative meaning of the measurement of ε in static conditions [4]. More generally, the properties of the dielectric material can vary as a function of several other parameters. For instance, temperature and mechanical stress generally play an important role and should be properly taken into account for a valid description [5], [6]. Regarding temperature: the number of polarizable ions per unit volume is a direct consequence of volume expansion; the polarizability of the ions depends on their thermal energy, which also influences the number of defects and the disorder in the material, and the effective relaxation time. The total stress on a dielectric material also affects its physical properties [2], [7], for which three different contributions are identified: mechanical stress, Maxwell stress, and an electrostriction component. In more detail, in the absence of an electric field, the material (assumed to be in the elastic regime) obeys Hooke's law [8]. Moreover, it has been shown that the Maxwell stress in solid dielectrics, such as many polymers, directly affects the material stress status and causes molecular deformation, thus modifying the dielectric and electrical properties of the material [9].

The main purpose of the current work is to characterize the effect of the glue bonding between a sensor, or its housing, and the target surface. Particular attention is given to the thermal behaviour of the glue, which may significantly affect the sensor readout through bonding deformation, e.g., in the case of a tilt- or strain-sensitive element. Indeed, a temperature variation can lead to deformation of the glue, possibly altering the relative distance between the plates and affecting the reference position of a sensor with respect to the base surface. To this aim, the mechanical and thermal variation of the glue is studied by using an electrical model of the system. In a first approximation, the bonding between the sensor (or its housing) and the target surface can be modelled as a flat-plate capacitor, where the glue plays the role of the dielectric medium.

The paper is structured as follows. Section 2 shows the model used in this study, and Section 3 reports the experimental analysis performed. Section 4 shows the results obtained with the devices described in Section 3 and, finally, some conclusions are drawn in Section 5.
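As an illustration of the frequency dispersion mentioned above, the classical single-relaxation-time Debye expression (one of the models recalled in Section 2) can be evaluated as follows; the parameter values are invented for the example and are not the paper's material data.

    # Generic illustration of frequency dispersion in a dielectric using the
    # classical single-relaxation-time Debye model. All parameter values are
    # made up for illustration only, not measured properties of the glue.
    import cmath

    def debye_permittivity(f, eps_static, eps_inf, tau):
        """Complex relative permittivity: eps_inf + (eps_s - eps_inf)/(1 + j*2*pi*f*tau)."""
        return eps_inf + (eps_static - eps_inf) / (1 + 1j * 2 * cmath.pi * f * tau)

    # Illustrative values: static permittivity 4.0, high-frequency limit 2.5,
    # relaxation time 1 microsecond (a plausible order for slow polar processes).
    for f in (20, 1e3, 1e5, 2e6):   # the LCR meter range used later is 20 Hz - 2 MHz
        eps = debye_permittivity(f, 4.0, 2.5, 1e-6)
        print(f"f = {f:>9.0f} Hz   eps' = {eps.real:5.3f}   eps'' = {-eps.imag:5.3f}")

The real part of the permittivity decreases with frequency in this model, which is consistent with the decrease of the measured capacitance with frequency reported in Section 4.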
2. Modelling

Let us assume a capacitor having parallel plane electrodes with surface S, separation distance d, and a dielectric medium with constant permittivity ε. According to theory, for a slowly-varying and uniform electric field distribution between the two electrodes, the capacitance C between them reads:

$C = \varepsilon \frac{S}{d}$ (1)

It is worth pointing out that for finite electrodes the surface charge distribution is not uniform [10], and an accurate estimation of the field distribution is not straightforward, due to the fringing-field effect and possibly divergent field values at the electrode edges. Metrological institutes commonly adopt guard rings as a method for the mitigation of such an effect [11], [12]. An accurate characterization of the dielectric material is based on a high degree of control of the electric field distribution and, in analogy with the characterization of the magnetic properties of a material, we can more properly speak of characterizing the capacitance of the sample [13]. As a consequence, we assume for our analysis the following capacitance model:

$C(\varepsilon, S, d, f, T) = \varepsilon(f, T) \frac{S(T)}{d(T)}$, (2)

for which we refer to capacitors made of two parallel metal disks, filled by a glue substance as the interposed dielectric material, with capacitance C. In equation (2), we make explicit the dependence of the capacitance parameters on the frequency f and the temperature T. Thus, as mentioned in the introduction, ε generally presents frequency dispersion and depends on the temperature. The frequency dispersion represents a variation of ε vs. frequency and, following a classical approach, it is described by spring-mass models; for a review of this topic, Clausius-Mossotti's and Debye's expressions, and their modified versions, are recognized models of this phenomenon. The temperature dependence of the dielectric properties finds a general description for relatively rarefied media, such as the description of orientation polarization based on Maxwell-Boltzmann statistics. On the other hand, such a relationship can vary significantly depending on the temperature range, structure, and physical status of the material.

A conceptual scheme of the modelled configuration is shown in Figure 1. The values of d and S are constant in the ideal configuration. However, these parameters vary with possible material contraction and dilatation caused by temperature variations. Furthermore, glue softening can modify the material structure, influencing its dielectric and mechanical properties (e.g., viscosity reduction when the temperature increases). Thus, as made explicit in equation (2), the geometry of the system and the capacitance C are expected to vary with temperature. As a consequence of such behaviours, the capacitor plates can get closer to each other under the effect of their weight and the electrostatic forces, thus reducing their relative distance. Moreover, a viscosity reduction of the glue can let it slide over the plates, thus varying the geometry of the sample. These two main phenomena, together with the temperature characteristic proper to the dielectric material, generate a complex behaviour of the sample capacitance as a function of temperature. As an example, when raising the temperature, the glue softening might reduce d, determining an increase of C according to equation (2); moreover, the possibility of glue sliding also contributes to the variation of C at high temperatures.
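To make equation (2) concrete, the following is a minimal numeric sketch (not the authors' code; the permittivity value is invented, the geometry loosely follows the samples described in Section 3) showing how C responds to the two geometric effects just discussed.

    # A minimal numeric sketch of equation (2): how the capacitance responds
    # when temperature changes the plate distance d and the active area S.
    # The relative permittivity (4.0) is an illustrative assumption.
    import math

    EPS0 = 8.854e-12  # vacuum permittivity, F/m

    def capacitance(eps_r, area_m2, dist_m):
        """Parallel-plate model C = eps * S / d (equation (1))."""
        return eps_r * EPS0 * area_m2 / dist_m

    area = math.pi * 0.020 ** 2                           # 40 mm glue disk
    print(capacitance(4.0, area, 1.0e-3) * 1e12, "pF")    # nominal d = 1 mm
    # Glue softening lets the plates approach: d shrinks, C rises
    print(capacitance(4.0, area, 0.9e-3) * 1e12, "pF")
    # Glue sliding outward increases the active area at fixed d: C rises too
    print(capacitance(4.0, area * 1.1, 1.0e-3) * 1e12, "pF")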
After the heating process, the values of d and S can eventually stabilize and, in a cooling stage, the behaviour of C(T) might follow a different curve, thus showing hysteresis with temperature. This complex phenomenon is expected to be more evident at high temperatures, and it may exhibit some relaxation toward a stable condition after a few thermal cycles. To carry out a consistent characterization of C, we built several capacitor samples with specific features to distinguish the above-mentioned effects. Firstly, the overall characteristic C(T) is evaluated by using a parallel-face capacitor with circular electrodes. The geometry of the device is left free to evolve as a function of temperature. The dielectric glue is positioned at the centre of the plates, far from the edges of the electrodes, thus reducing the fringing-field effect at the electrode edge. A second sample is built like the first one, but keeping d fixed at a minimum value by using ad-hoc glass spacers. The use of glass spacers with a low thermal dilatation coefficient allows the electrode distance variation as a function of temperature to be neglected. However, in such a configuration, the glue can still slide through the inter-electrode region and finally affect the geometry and the active volume of the dielectric medium of the capacitor. A third sample is made with glass spacers and by covering the border of the electrodes with dielectric glue. In this last case, the geometrical variation of the glue is expected not to significantly affect the material within the plates, thus further reducing the possible effective variation of both S(T) and d(T). Temperature cycles are then applied to the different samples, and the behaviour of their capacitance is assessed at different operating frequencies.

3. Experimental setup

Different capacitor samples are built in a plain-face configuration, adopting circular plates of 60 mm diameter and 3 mm thickness made of aluminium. A layer of glue is placed at the centre of the capacitor as the dielectric. In particular, the glue is placed in a 40 mm internal disk using purpose-made PLA spacers, which also keep the plates at the desired distance d of 1 mm, 1.4 mm, and 2.8 mm, respectively. The central glue positioning reduces as much as possible the edge effect due to the electrode borders. The distance d of the obtained samples can be further fixed to a minimum extent using 0.2 mm thick glass spacers, placed at about 120° angular distance around the capacitor. The spacer material is chosen to present a negligible thermal dilatation coefficient with respect to the aluminium of the electrodes. The capacitor forming process and a final sample are shown in Figure 2. The capacitors made as described, with d = 1 mm, d = 1.4 mm, and d = 2.8 mm, are named C1, C2, and C3, respectively. The second type of capacitor, named C4, is built with a distance d of 1.4 mm, kept by seven glass spacers of 0.2 mm thickness each, placed as in the previous devices. However, in this case, the glue is placed to fill the whole plates and their borders. Moreover, the upper circular plate has 60 mm diameter and 3 mm thickness, while the bottom plate has 180 mm diameter and 1 mm thickness. The sample C4 is shown in Figure 3.

Figure 1. Scheme of the modelled configuration.
Figure 2. Spacers used for glue deposition (a) and final spacers for glue hardening (b); internal glue disk area (c); final capacitor assembly (d) and its main scheme (e).
Figure 3. Capacitor sample C4 with glue covering the plate border (a) and its scheme (b).

To evaluate the samples' capacitance variation as a function of temperature, the devices undergo thermal cycling between -70 °C and +70 °C. The samples are placed in a GenViro 060LC climate chamber and cycled with a maximum heating rate of 7 °C/h. Such a low value is necessary to ensure thermal homogeneity of the sample and to provide symmetric cycles during heating and cooling, thus limiting the influence of the thermal dynamics on the capacitance of the sample. The temperature is monitored on the top plate and in the climate chamber environment by using two RTD 1/10 DIN Pt100 sensors, whose signals are acquired through an NI9219 board mounted on an NI9178 chassis. The capacitance C and the conductance G of the samples are measured using an E4980A LCR meter in the 20 Hz - 2 MHz range. Proper compensation of the connecting wires is made before each test. The model used for calculating C and G is a parallel connection of the capacitance C and a resistance whose value is equal to the reciprocal of G. Instrument management and data acquisition are performed through LabVIEW software made on purpose. The scheme of the experimental setup is shown in Figure 4.

4. Results and discussion

The frequency response of the sample C1 at a temperature of (24.0 ± 0.9) °C is reported in Figure 5. Figure 5 shows that the capacitance C decreases with frequency, while G increases with it. C and G are of the order of 100 pF and 100 to 300 nS, respectively. Data for G above 20 kHz are not reported due to the high uncertainty in their values. In particular, a significant reduction of the capacitance occurs at around 20 Hz. In liquids, low excitation frequencies, typically of the order of some tenths of hertz, can cause ionic transport and layering phenomena, thus producing a double-layer capacitance which largely increases the value of C. Besides, the C and G isothermal curves are consistent with the recent literature regarding polymers and amorphous materials for the studied frequency range [14].

The capacitance values obtained for C1, C2 and C3 at (27.2 ± 0.6) °C are reported with respect to the value of C3 in Figure 6. Figure 6 shows that the capacitance values vary as a function of 1/d: indeed, C1/C3 and C2/C3 agree with d3/d1 and d3/d2, according to the model shown in equations (1) and (2). However, for frequencies of the order of a tenth of hertz, the capacitance shows a different behaviour, as discussed in relation to Figure 5a.

In Figure 7, we show C1 vs. T at 10 kHz, with no glass spacers used. The C1 device (d = 1 mm) undergoes a quick cooling to -70 °C, where a steady-state temperature is maintained for 10 hours. The initial thermal settlement is used to make the temperature uniform between the environment and the sample, and to reduce thermal gradients within the dielectric material.

Figure 4. Scheme of the experimental setup.
Figure 5. Capacitance C (a) and conductance G (b) as a function of the analysis frequency for C1. Uncertainties are given with a 95% confidence interval.
Figure 6. Capacitance ratio at around 30 °C for the C1, C2 and C3 capacitors as a function of the frequency.

The first cooling branch in Figure 7b shows that, when the temperature is
constant, the capacitance varies slightly, increasing its value by around 3% (i.e., from 66 pF to 68 pF). This variation can be attributed to some stabilization phenomena. The next cycles clearly show a hysteresis in the C-T plot on the high-temperature side. In the present case, the hysteresis could be generated by the mechanical instability of the sample, as a combination of glue softening, collapsing, and sliding, together with the characteristic evolution of the permittivity as a function of temperature proper to the dielectric material. However, at this stage of the study, the hysteresis could also derive from more complex phenomena involving the physical characteristics of the dielectric material and depending on the behaviour of the permittivity vs. T [15], [16].

Figure 8 shows the capacitance behaviour of the C1, C2 and C3 samples with the distance d fixed at its minimum, under the thermal cycles shown in Figure 7a. Results are provided for a frequency of 10 kHz. Figure 8 shows that hysteretic behaviour occurs in all the different samples, with no significant difference with respect to the case without spacers shown in Figure 7. Furthermore, the samples C2 and C3 present a significant instability in the measurement of C above 40 °C. This effect can be attributed to a major impact of the glue softening on the effective capacitor geometry. Moreover, in the cases shown in Figures 8b and 8c, the second cycles (i.e., the second cooling and heating branches) present a shifted and lower value of C, indicating possible glue settlement. Following these observations, the test is repeated on sample C3 while stabilizing the sample at +70 °C for 20 hours. The temperature profile over time and the results in terms of capacitance at 10 kHz vs. temperature are shown in Figure 9. In Figure 9, the hysteretic behaviour is significantly reduced and barely observable, while the measurement instability disappeared. This evidence highlights that the observed phenomena are determined by thermo-mechanical effects on the sample geometry; moreover, the high-temperature treatment process (red path in Figure 9) contributes to significantly mitigating these effects.

Figure 7. Measured temperature profile (a) and behaviour of the C1 sample (d = 1 mm) (b) with no fixing of the geometric sample parameters. Uncertainties are given with a 95% confidence interval through the coloured strips.
Figure 8. Behaviour of the C1 (a), C2 (b) and C3 (c) capacitance as a function of temperature during thermal cycling with fixed minimum plate distance and stabilization at -70 °C. Uncertainties are given with a 95% confidence interval through the coloured strips.

The last test is carried out on sample C4, where the dielectric glue is placed over the electrodes and their borders, thus reducing the effect of glue softening on the active region between the electrodes. A 20-hour stabilization is performed at +70 °C, and several cycles are run between +70 °C and -70 °C to further observe the capacitance behaviour. The results are shown in Figure 10, together with the performed temperature cycles. Moreover, Figure 11 shows the capacitance behaviour as a function of temperature for the 7th cycle (i.e., the last performed cycle) at different operating frequencies.
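One simple way to quantify the hysteresis discussed above is the maximum gap between the heating and cooling branches of C(T); the following post-processing sketch (with made-up data, not the measured curves, and not the authors' analysis procedure) illustrates the idea.

    # Quantifying C-T hysteresis as the maximum gap between the heating and
    # cooling branches. Illustrative sketch with made-up data, not the
    # authors' analysis procedure or their measured curves.
    import numpy as np

    def hysteresis_width(t_heat, c_heat, t_cool, c_cool, n=200):
        """Max |C_heating(T) - C_cooling(T)| over the common temperature span."""
        lo = max(t_heat.min(), t_cool.min())
        hi = min(t_heat.max(), t_cool.max())
        grid = np.linspace(lo, hi, n)
        # np.interp needs increasing x; sort each branch by temperature
        ih, ic = np.argsort(t_heat), np.argsort(t_cool)
        gap = np.interp(grid, t_heat[ih], c_heat[ih]) - np.interp(grid, t_cool[ic], c_cool[ic])
        return np.abs(gap).max()

    # Made-up branches in pF over -70..+70 degC, diverging above 40 degC:
    t = np.linspace(-70, 70, 50)
    c_heating = 66 + 0.02 * (t + 70)
    c_cooling = c_heating + 1.5 * np.clip(t - 40, 0, None) / 30
    print(f"hysteresis width: {hysteresis_width(t, c_heating, t, c_cooling):.2f} pF")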
Figure 10 shows that the first three cycles exhibit an accommodation behaviour. However, the capacitance-temperature curve does not present any significant difference among the cycles after the 4th thermal cycle. Moreover, the capacitance hysteresis practically disappears at this level of approximation, as also supported by Figure 11, for all the frequencies studied, and the C-T curve appears mainly monotone in the studied temperature range. The growth of C with temperature is consistent with the recent literature regarding polymers and amorphous materials [14]. According to the presented results, the hysteresis shown in Figure 7, Figure 8, and Figure 9 is significantly reduced by adopting the thermal treatment and the C4 configuration. Possible thermal-dynamic and thermo-mechanical phenomena could still affect the hysteretic behaviour of the capacitance; further experiments aimed at investigating such behaviour are in progress. As discussed in the previous section, it is worth highlighting that Figure 11 reports the behaviour of C(T) for the C4 sample, for which the geometry variation as a function of temperature can be practically neglected. Therefore, the results in Figure 11 can represent the behaviour of the permittivity ε(T) according to equation (2).

5. Conclusions

The accurate positioning and referencing of sensors, such as strain or tilt sensors, is crucial for the proper evaluation of the measurand. In many cases, the sensor and the measuring body are glue-bonded together, ensuring a solid and cheap fixing. On the other hand, the time stability of the bonding may be an issue, especially when temperature cycles and the forces into play can affect the glue's physical and geometrical features. Therefore, a thermal and mechanical analysis of a glue bonding is made via an electrical measurement technique, by studying the behaviour of the capacitance of a device using glue as the dielectric material. Several types of fixing and pre-processing are used to separate the effects of the temperature cycles from the glue geometry variation. To assess such behaviour, the sample capacitance was analysed as a function of the electric field frequency and the environment temperature. It turned out that, when no geometrical fixing is used, temperature cycles cause hysteretic behaviour of the measured capacitance. When the capacitor geometry is fixed independently of the temperature, and after thermal stabilization of the dielectric, the hysteresis disappears, and only a monotone C-T behaviour remains for all the tested frequencies. Therefore, we have shown that the glue deformation is possibly responsible for the hysteretic behaviour of the capacitance. Although this can be kept under control by, e.g., proper thermal stabilization of the glue, an underestimation of this occurrence can lead to significant systematic errors in the evaluation of measurands through glue-fixed sensors.

Figure 9. Measured temperature profile (a) and behaviour of the C3 sample (d = 2.8 mm) (b) with fixed minimum plate distance and stabilization at +70 °C. Uncertainties are given with a 95% confidence interval through the coloured strips.
Figure 10. Measured temperature profile (a) and behaviour of the C4 sample (d = 1.4 mm) (b) with fixed minimum plate distance and stabilization at +70 °C. Uncertainties are given with a 95% confidence interval through the coloured strips.

References

[1] D. W. Ha, H. S.
Park, S. W. Choi, Y. Kim, A wireless MEMS-based inclinometer sensor node for structural health monitoring, Sensors 13 (2013). doi: 10.3390/s131216090
[2] L. D. Landau, E. M. Lifshitz, The Classical Theory of Fields, 1971.
[3] J. D. Jackson, Classical Electrodynamics, third edition, New York, Wiley, 1999. Online [accessed 15 December 2021] https://search.library.wisc.edu/catalog/999849741702121
[4] M. Bologna, B. Tellini, Remarks on the measurement of static permittivity through a classical description, Prog. Electromagn. Res. C 33 (2012), pp. 95-108.
[5] M. R. Mahboob, Z. H. Zargar, T. Islam, A sensitive and highly linear capacitive thin film sensor for trace moisture measurement in gases, Sensors Actuators B Chem. 228 (2016), pp. 658-664. doi: 10.1016/j.snb.2016.01.088
[6] A. G. Cockbain, P. J. Harrop, The temperature coefficient of capacitance, J. Phys. D. Appl. Phys. 1 (1968), pp. 1109-1115. doi: 10.1088/0022-3727/1/9/302
[7] H. Y. Lee, Y. Peng, Y. M. Shkel, Strain-dielectric response of dielectrics as foundation for electrostriction stresses, J. Appl. Phys. 98 (2005), 74104. doi: 10.1063/1.2073977
[8] Y. M. Shkel, N. J. Ferrier, Electrostriction enhancement of solid-state capacitance sensing, IEEE/ASME Trans. Mechatronics 8 (2003), pp. 318-325. doi: 10.1109/TMECH.2003.816805
[9] J. Crine, Influence of electro-mechanical stress on electrical properties of dielectric polymers, IEEE Trans. Dielectr. Electr. Insul. 12 (2005), pp. 791-800. doi: 10.1109/TDEI.2005.1511104
[10] M. Dhamodaran, R. Dhanasekaran, S. Ammal, Evaluation of the capacitance and charge distribution of metallic objects by electrostatic analysis, Journal of Scientific & Industrial Research, vol. 75, 2016, pp. 552-556.
[11] W. C. Heerens, F. C. Vermeulen, Capacitance of Kelvin guard-ring capacitors with modified edge geometry, J. Appl. Phys. 46 (1975), pp. 2486-2490. doi: 10.1063/1.322234
[12] S. Dado, Capacitive sensors with pre-calculable capacitance, Transactions on Electrical Engineering, vol. 2, 2013.
[13] Fausto Fiorillo, Characterization and Measurement of Magnetic Materials, Academic Press, 2005.
[14] J. D. Menczel, R. B. Prime, Thermal Analysis of Polymers: Fundamentals and Applications, 2009.
[15] A. Bousseksou, G. Molnár, P. Demont, J. Menegotto, Observation of a thermal hysteresis loop in the dielectric constant of spin crossover complexes: towards molecular memory devices, J. Mater. Chem. 13 (2003), pp. 2069-2071. doi: 10.1039/b306638j
[16] S. Saadaoui, O. Fathallah, H. Maaref, Fermi level pinning, capacitance hysteresis, tunnel effect, and deep level in AlGaN/GaN high-electron-mobility transistor, Superlattices Microstruct. 156 (2021), 106959. doi: 10.1016/j.spmi.2021.106959

Figure 11.
Behaviour of the C4 sample (d = 1.4 mm) at different temperatures and excitation frequencies.

Results of study of quantization and discretization error of digital tachometers with encoder

ACTA IMEKO, ISSN: 2221-870X, June 2023, Volume 12, Number 2, 1-6

Vasyl Kukharchuk1, Oleksandr Vasilevskyi2, Volodymyr Holodiuk1
1 Vinnytsia National Technical University, 95 Khmelnitsky Shose Str., 21021 Vinnytsia, Ukraine
2 University of Texas at Austin, Austin, Texas, USA

Section: Research paper
Keywords: angular velocity; encoder; quantization; sampling; microprocessor tachometer; quantization and sampling error equation; "adjoining interval"
Citation: Vasyl Kukharchuk, Oleksandr Vasilevskyi, Volodymyr Holodiuk, Results of study of quantization and discretization error of digital tachometers with encoder, Acta IMEKO, vol. 12, no. 2, article 19, June 2023, identifier: IMEKO-ACTA-12 (2023)-02-19
Section editor: Francesco Lamonaca, University of Calabria, Italy
Received January 20, 2023; in final form February 19, 2023; published June 2023
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Vasyl Kukharchuk, e-mail: bkuch@ukr.net

Abstract: The analysis of angular velocity measuring channels with an encoder given by the authors made it possible, for the first time, to obtain an equation for estimating the quantization and sampling error for an exponential mathematical model describing the transient operation of electrical machines. The components of the mathematical model of this dynamic error are the sampling step and the derivative, which characterizes the rate of change of the measured value over time. It was found that the quantization and sampling errors depend significantly on the resolution Z of the encoder. Moreover, an increase in Z leads to a decrease in the sampling error, but the relative quantization error increases. To reconcile these error components, laws are derived for changing the resolution Z of the encoder adaptively to the dynamic properties of the change in angular velocity over time. It is proved that, to ensure the maximum speed of measuring the angular velocity during the transient process, it is advisable to implement the proposed method of changing the encoder resolution, adaptive to the transient's dynamic properties, on the internal timers of the microcontroller, and that the quantization of the informative periods proportional to the measured angular velocity should be carried out in "adjoining intervals".

1. Introduction

Currently, to intensify the testing of electric machines, the vast majority of research is focused on accelerating the tests carried out in the no-load experiment [1]-[4]. The main characteristic here is the transient characteristic (the angular velocity varying over time, n(t)), which is obtained in the dynamic mode of operation of the measurement object (an electric machine) with a practically zero moment of resistance on its shaft (Mc ≈ 0). To ensure the maximum number of measured values of the angular velocity n during the transient process (3-5)τ, most researchers [5]-[8] have focused on digital measuring channels with encoders, whose main components are the following: the measurement object OB, the coupling shaft MC, the encoder E, the shaper F, the device for period selection T, the generator G of the reference frequency f0, the "AND" logic gate for quantization of the period Tx, the binary counter CT2, the programmable interface PI (parallel or serial), and the microprocessor system MPS.

Figure 1. Generalized structural diagram of the angular velocity measuring channel.

In the generalized structural diagram shown in Figure 1, the hardware sequence of measurement data transformations is, in the vast majority of cases [9]-[12], carried out as follows.

1. The encoder E converts the non-electrical quantity, the angular velocity n, into a sequence of electrical signals whose frequency is determined as

$f_x = \frac{n Z}{60}$, (1)

where Z is the resolution of the encoder.

2. The shaper F forms, from the quasi-sinusoidal encoder output signals, pulses of rectangular shape whose logic levels correspond to the levels of TTL logic.

3. The counting trigger T selects, from the output frequency signals fx of the encoder, the informative periods Tx proportional to the angular velocity n.

4. In the logical "AND" gate, quantization takes place [10] by comparing the measured physical value of the period Tx with the reference period T0. As a result of such a comparison, the conversion equation for the frequency measurement channel of instantaneous values is obtained, given by the transformation function

$N = \frac{T_x}{T_0} = T_x f_0 = \frac{f_0}{f_x}$. (2)

5. The binary counter CT2 carries out the procedure of counting the number N of reference periods T0 that were quantized within the period Tx.

6. The programmable interface provides the transfer of the binary codes of the number of pulses N[00...15] from the parallel outputs of the binary counter CT2 to the accumulator of the MPS microprocessor system, whose main components are the MCU microcontroller, RAM, and permanent ROM memory. The exchange of measurement information between the programmable interface and the microprocessor system accumulator is implemented in program mode, interrupt mode, or direct memory access.

7. An array of measurement information about the transient characteristic of the measurement object is accumulated and memorized in the operational RAM of the MPS in the form of binary codes N, proportional to the instantaneous values of the periods Tx of the frequency fx from the output of the encoder E.

8. To present the measured information in angular velocity values, the conversion equation for this type of non-electrical quantity measuring channel is obtained by substituting the value of fx from equation (1) into (2); this unambiguously links the input angular velocity nx with the output value, the number of pulses N at the digital outputs of the binary counter CT2:

$N = \frac{60 f_0}{n_x z}$. (3)
9. from the conversion equation (3), the array of instantaneous angular velocity values is calculated by software:

n_x = 60 · f_0 / (N · z) . (4)

the imperfection of the given approach is explained by the shortcomings [13]-[17] inherent in digital measuring devices whose circuitry is implemented according to a rigid ("hard-wired") control scheme.

2. measuring channels with microprocessor control
the duality of the hardware and software implementation of microprocessor devices (figure 2) potentially provides greater flexibility and adaptability to the dynamic properties of measurement objects [10], [16], [18], which significantly expands their functional capabilities and improves their static and dynamic metrological characteristics. the disadvantage of this development direction of microprocessor tachometers is the quantization of only even or only odd periods T_x. in such "quasi-instantaneous" devices, measurement information is obtained not in each period but every other period, which does not allow the transient characteristic n(t) to be reproduced with the required accuracy: it is impossible to determine the numerical values of the amplitude and duration of synchronous dips in angular velocity, to differentiate the experimentally obtained numerical values of the periods in order to construct the functional dependence of the change in acceleration over time, or to indirectly determine the value of the dynamic moment, the dynamic mechanical characteristic and the phase "portrait" of the measurement object.

the solution to the problems highlighted above is the quantization [10], [12], [19]-[21] of each even and each odd period T_x (figure 3), which are proportional to the instantaneous value of the angular velocity; this is implemented by the hardware and software of the microcontroller. it is advisable to implement this quantization method in "adjacent" intervals on two internal timers of the microcontroller; the analysis of the metrological characteristics below determines the parameter whose knowledge ensures the adaptation of the channel to the dynamic properties of the measurement object.

3. main metrological characteristics
for further research, as the initial mathematical model of the angular velocity of electric machines, we use a trivial (figure 4) exponential model

n(t) = Ω · (1 − e^(−t/τ)) , (5)

where Ω is the synchronous speed of the electric motor rotation and τ is the electromechanical time constant.

as a result of the interaction of the hardware and software of the microcontroller, the analogue value n, which has an infinite number of values in the specified measurement range (from n_min to n_max), is replaced by a limited number of its instantaneous values; because the sampling step T_d ≠ 0, a discretization error occurs [22]:

Δ_d(t) = (1/2) · T_d · dn/dt . (6)

thus, a mathematical model for estimating the sampling error is obtained. the angular acceleration of the shaft of the measurement object in the transient mode of its operation is determined as

ε(t) = dn/dt = (Ω/τ) · e^(−t/τ) . (7)

the sampling step for a digital tachometer with microprocessor control is determined as follows [10]: T_d = t_adc + t_fl + t_dr. here, t_adc is the duration of the analog-to-digital conversion, which is equal to the measured period (t_adc = T_x), t_fl is the time required to execute the flag-waiting subroutine, and t_dr is the execution time of the software driver.

figure 2. generalized structural diagram of the microprocessor tachometer.
figure 3. illustration of quantization in "adjacent" intervals.
considering the fact that, during the measurement of the angular velocity in the "adjacent" intervals, the flag-waiting subroutine and the software driver are executed after the completion of the quantization of T_x, we obtain

T_d = t_adc = T_x . (8)

in this regard, the sampling frequency [23] of the angular velocity measuring channel is defined as

f_d = 1 / T_d = 1 / T_x . (9)

now present the encoder conversion equation (1) in the following form:

f_d = n(t) · z / 60 (10)

and the discretization step, respectively, as

T_d(t) = 60 / (n(t) · z) = 60 / (Ω · (1 − e^(−t/τ)) · z) . (11)

substituting (7), (8), (11) into (6), we obtain a mathematical model for estimating (figure 6) the sampling error of the digital angular velocity measurement channel in "adjacent" intervals:

Δ_d(t) = (1/2) · T_d · dn/dt = 30 · e^(−t/τ) / (τ · z · (1 − e^(−t/τ))) . (12)

to analyse the relative quantization error, we first obtain the transfer function of the microprocessor tachometer:

N(t) = f_0 / f_x = 60 · f_0 / (Ω · (1 − e^(−t/τ)) · z) . (13)

considering (13), the mathematical model for estimating the quantization error in the transient mode of operation of the measurement object takes the form

δ_q(t) = (1/N) · 100 % = 5 · Ω · (1 − e^(−t/τ)) · z / (3 · f_0) . (14)

analysis of the quantization error equation (figure 6) shows that its reduction can be ensured by two methods: 1. reducing the resolution z of the encoder; 2. increasing the quantization frequency f_0 of the quartz resonator g. the second approach has the limitation f_0 ≤ f_gr: the quantization frequency is limited by the limit frequency of the crystal resonator of the microcontroller mcu. in turn, a decrease in the resolution z of the encoder leads to an increase in the sampling error (figure 6), which is not acceptable for dynamic measurements.

the following conclusion can be drawn from the above graphic dependences of the quantization and discretization errors: both errors significantly depend on the resolution z of the encoder, and an increase in z decreases the sampling error but increases the relative quantization error. to reconcile these error components, it is necessary to obtain laws of change of the encoder resolution z adapted to the law of the angular velocity change in time.

using (12), we obtain the law of change of the encoder resolution in the process of increasing the angular velocity from 0 to Ω, compliance with which ensures that the dynamic sampling error does not exceed the normalized value Δ_d ≤ Δ_dn:

z_d(t) = 30 · e^(−t/τ) / (Δ_dn · τ · (1 − e^(−t/τ))) . (15)

similarly, from (14) we obtain the law of change of the encoder resolution whose implementation ensures the normalization δ_q ≤ δ_qn of the quantization error:

z_q(t) = 3 · δ_qn · f_0 / (5 · Ω · (1 − e^(−t/τ))) . (16)

graphic dependences of the change in encoder resolution z = f(t) during dynamic measurements of the angular velocity n = f(t) in the transient process of the measurement object are shown in figure 5.

figure 4. dependence n = f(t) for τ = 0.5 s and Ω = 1500 rpm.
figure 5. laws of change of the encoder resolution.
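the error models (12), (14) and the resolution laws (15), (16) are straightforward to evaluate numerically. the following python sketch reproduces these curves for the example parameters used later in the paper (Ω = 1500 rpm, τ = 0.5 s, f_0 = 8 mhz, z = 65536); it is an illustrative sketch only, and all function and variable names are ours, not part of the original work:

import numpy as np

OMEGA = 1500.0   # synchronous speed, rpm
TAU = 0.5        # electromechanical time constant, s
F0 = 8e6         # sample (quantization) frequency, hz
Z = 65536        # encoder resolution

def n_of_t(t):
    # eq. (5): exponential model of the transient angular velocity, rpm
    return OMEGA * (1.0 - np.exp(-t / TAU))

def sampling_error(t, z=Z):
    # eq. (12): dynamic discretization (sampling) error, rpm
    e = np.exp(-t / TAU)
    return 30.0 * e / (TAU * z * (1.0 - e))

def quantization_error(t, z=Z, f0=F0):
    # eq. (14): relative quantization error, percent
    return 5.0 * OMEGA * (1.0 - np.exp(-t / TAU)) * z / (3.0 * f0)

def z_for_sampling_limit(t, delta_dn=0.2):
    # eq. (15): resolution law keeping the sampling error below delta_dn rpm
    e = np.exp(-t / TAU)
    return 30.0 * e / (delta_dn * TAU * (1.0 - e))

def z_for_quantization_limit(t, delta_qn=0.5, f0=F0):
    # eq. (16): resolution law keeping the quantization error below delta_qn %
    return 3.0 * delta_qn * f0 / (5.0 * OMEGA * (1.0 - np.exp(-t / TAU)))

plotting these functions over, say, t = 0.05 s to 3 s reproduces the opposite trends discussed above: the sampling error falls and the quantization error grows with increasing z.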
to present the quality of measurement results based on the concept of measurement uncertainty, which is currently recommended by international standards [24]-[28], the following method of calculating the standard uncertainty of type b arising from the discretization error, u_Bd(t), is proposed, under the assumption of a normal distribution law of the components of the digital angular velocity measurement channel:

u_Bd(t) = Δ_d(t) / k_p = (1/2) · T_d · (dn/dt) · k_p^(−1) , (17)

where k_p is the coverage coefficient, which for a normal distribution law is taken equal to 1.96 at a confidence level of p = 0.95 [29], [30], and Δ_d(t) is the discretization error of the digital angular velocity measurement channel.

substituting the maximum value of the sampling error Δ_dn ≤ 0.2 rpm (figure 6, b) into equation (17), we obtain the standard sampling uncertainty of type b, which is equal to u_Bd(t) = 0.1 rpm for a confidence level of 0.95. the relative standard uncertainty of discretization can be estimated by the expression [29], [31]

ũ_d = u_Bd(t) / n(t) · 100 % = 0.1 / 1500 · 100 % ≈ 0.01 % . (18)

as follows from the conducted studies of the quantization error (figure 6, a), its value does not exceed 0.5 % at an angular speed of 1500 rpm. the standard uncertainty of type b due to the presence of the quantization error [32], assuming its uniform distribution law, is calculated using the expression

u_Bq(t) = δ_q(t) / (100 % · √12) · n_max(t) = (0.5 % / 346 %) · 1500 rpm = 2.17 rpm . (19)

for the given measurement object (for example, a three-phase asynchronous motor uad-34 with a nominal rotation speed Ω = 1500 rpm and τ = 0.5 s, a membrane-type coupling, an lir-120a encoder (z = 65536) and a development board based on the dual-core 32-bit tms320f28379d microcontroller, containing 1 mb of flash memory and 128 kb of ram and having a sample frequency f_0 = 8 mhz formed from the clock frequency of a quartz resonator), a method of normalizing the discretization error [22] through Z_d(t) and the quantization error [33], [34] through Z_q(t) is proposed, in which the encoder resolution z is changed in accordance with the time coordinate t (in seconds, with τ = 0.5 s substituted) of the transient process of the measurement object:

Z_d(t) = 300 · e^(−2t) / (1 − e^(−2t)) (20)

Z_q(t) = 1600 / (1 − e^(−2t)) . (21)

a microcontroller such as the tms320f28379d has twelve 32-bit general-purpose timers in its structure, which is quite enough to carry out this kind of dynamic measurement with adaptation of the encoder resolution z to the dynamic properties of the measurement object: timer T0 is programmed as a real-time counter that generates a control signal at its output every 0.01 s, according to which the binary code of the division coefficient k is loaded into the counter of timer T1; in this way, every 0.01 s timer T1 forms at its output a frequency signal f_x/k, corresponding to the resolution Z_d(t) or Z_q(t) according to (20) or (21); in timer T2, the even periods T_x of the frequency signal f_x/k from the direct output of timer T1 are quantized; in timer T3, the odd periods T_x of the frequency signal f_x/k from the inverse output of timer T1 are quantized. therefore, the method of quantization in "adjacent" intervals takes place in the two timers T2 and T3 of the microcontroller, and the even and odd periods T_x are quantized between them.
quantization proceeds by counting the periods T_0 of the sample frequency f_0 of the quartz resonator, first from the leading edge to the trailing edge of the even period T_x in the second timer T2, and then from the leading edge to the trailing edge of the odd period T_x in the third timer T3. thus, each of the programmable timers of the microcontroller works in two modes: quantization (from rising edge to falling edge) and transfer and memorization of measurement information (from falling edge to rising edge). additional speed of the proposed method is provided by the transfer and recording of the measured information to the ram in dma (direct memory access) mode, without the participation of the computing core, freeing its resources for other, higher-priority tasks during the measurement process.

this hardware and software implementation of quantization in "adjacent" intervals ensures maximum speed of measurement and guarantees sufficient time for transferring and memorizing a large volume of quantization results in the mps ram. the implementation of the method of normalizing the discretization error [22] through Z_d(t) or the quantization error [23], [34] through Z_q(t), by changing the resolution z of the encoder in the real-time measuring mode of operation, allows the accuracy of dynamic angular velocity measurements to be increased.

figure 6. quantization (a) and discretization (b) errors.
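the adjacent-interval scheme can be summarized with a small behavioural simulation. the python sketch below is not firmware for the tms320f28379d; it only illustrates, under our own naming, how two alternating counters quantize every period and how the counter codes map back to speed via equation (4):

F0 = 8e6   # sample frequency of the quartz resonator, hz

def quantize_adjacent_intervals(edge_times):
    # edge_times: times (s) of successive informative edges of f_x/k.
    # timers t2 and t3 alternately count sample periods t0 = 1/f0
    # between consecutive edges, so no period is skipped.
    codes_t2, codes_t3 = [], []
    for k in range(len(edge_times) - 1):
        n_code = int(round((edge_times[k + 1] - edge_times[k]) * F0))
        (codes_t2 if k % 2 == 0 else codes_t3).append(n_code)
    return codes_t2, codes_t3

def speed_from_code(n_code, z):
    # eq. (4): angular velocity, rpm, recovered from the counter code
    return 60.0 * F0 / (n_code * z)

in a real implementation the two counters run in hardware and the codes are moved to ram by dma, as described above; the simulation only shows that quantizing even periods in one counter and odd periods in the other leaves no dead time between conversions.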
4. conclusions
the analysis of well-known digital angular velocity measurement channels with an encoder made it possible to solve two extremely important metrological tasks: increasing the accuracy and the speed of dynamic measurements of the angular velocity of a measurement object operating in real time.

for the first time, an equation for estimating the quantization and discretization errors was obtained for an exponential mathematical model that describes the transient process of electric machine operation. the components of the mathematical model of these dynamic errors are the quantization and discretization steps and the derivative, which characterizes the rate of change of the measured quantity over time. it is proved that the quantization and discretization errors significantly depend on the encoder resolution z; moreover, an increase in z leads to a decrease in the sampling error, but the relative quantization error increases. in order to reconcile these error components, laws of change of the encoder resolution z were obtained, which make it possible to adapt to the dynamic properties of the change in angular velocity over time. for the selected measurement object (for example, a three-phase asynchronous machine uad-34 with a nominal rotation speed Ω = 1500 rpm and τ = 0.5 s, a membrane-type coupling, an lir-120 encoder (z = 65536) and the dual-core 32-bit tms320f28379d microcontroller), methods of normalizing the sampling error Δ_dn and the quantization error δ_qn are proposed.

the normalization of these error components is carried out in the process of dynamic measurements of the angular velocity n during the transient process of the measurement object by changing the resolution z of the encoder, thus ensuring the fulfilment of the condition Δ_d ≤ Δ_dn or δ_q ≤ δ_qn.

it was established that, in order to ensure the maximum speed of angular velocity measurement during the transient process of the measurement object, it is advisable to implement the proposed method of changing the encoder resolution, adaptive to its dynamic properties, on the internal timers of the microcontroller, and to carry out the quantization of the informative periods, proportional to the measured angular velocity, in "adjacent" intervals.

references
[1] m. zhao, j. lin, health assessment of rotating machinery using a rotary encoder, ieee transactions on industrial electronics, vol. 65, no. 3, march 2018, pp. 2548-2556.
doi: 10.1109/tie.2017.2739689
[2] h. li, g. cheng, a comparative study of speed estimation methods for motor rotor, in: y. jia, w. zhang, y. fu, z. yu, s. zheng (eds), proc. of the 17th chinese intelligent systems conference, fuzhou, china, 16-17 october 2021, lecture notes in electrical engineering, vol. 803, springer, singapore.
doi: 10.1007/978-981-16-6328-4_86
[3] v. v. kukharchuk, elements of the control theory of electric machines dynamic parameters: monograph, vinnytsia: universum-vinnytsia, 1998, 125 p.
[4] yamini nagaratnam, sudanthiraveeran rooban, a modified truncation and rounding-based scalable approximate multiplier with minimum error measurement, acta imeko, vol. 11, no. 2, article 37, june 2022, pp. 1-6.
doi: 10.21014/acta_imeko.v11i2.1245
[5] y. vázquez-gutiérrez, d. l. o'sullivan, r. c. kavanagh, small-signal modeling of the incremental optical encoder for motor control, ieee transactions on industrial electronics, vol. 67, no. 5, may 2020, pp. 3452-3461.
doi: 10.1109/tie.2019.2916307
[6] v. v. kukharchuk, y. g. vedmitskyi, measurement of rotational movement parameters of electromechanical energy converters in transient operation mode: monograph, vinnytsia: universum-vinnytsia, 2018, 155 p.
[7] yi huang, clemens gühmann, wireless sensor network for temperatures estimation in an asynchronous machine using a kalman filter, acta imeko, vol. 7, no. 1, article 3, march 2018, pp. 5-12.
doi: 10.21014/acta_imeko.v7i1.509
[8] v. v. kukharchuk, v. s. holodiuk, a tool for dynamic measurements of electric machines rotary motion parameters in transient operation modes, integrated intelligent robotic complexes, iirtk-2021 13th int. scientific and practical conference, kyiv, ukraine, 18-19 may 2021, p. 87.
[9] n. k. boggarpu, r. c. kavanagh, new learning algorithm for high-quality velocity measurement and control when using low-cost optical encoders, ieee transactions on instrumentation and measurement, vol. 59, no. 3, 2010, pp. 565-574.
doi: 10.1109/tim.2009.2025064
[10] v. o. podzharenko, v. v. kukharchuk, measurement and computer-measuring equipment: study guide, 1991, 240 p.
[11] giulio d'emilia, antonella gaspari, emanuela natale, dynamic calibration uncertainty of three-axis low frequency accelerometers, acta imeko, vol. 4, no. 4, art. 14, december 2015, pp. 75-81.
doi: 10.21014/acta_imeko.v4i4.239
[12] v. v. kukharchuk, fundamentals of metrology and electrical measurements, outline of lectures, part ii, vinnytsia: vntu, 2020, 155 p.
[13] alexander schuler, albert weckenmann, tino hausotte, enhanced measurement of high aspect ratio surfaces by applied sensor tilting, acta imeko, vol. 3, no. 3, art. 6, september 2014, pp. 22-27.
doi: 10.21014/acta_imeko.v3i3.124
[14] v. v. kukharchuk, v. y. kucheruk, analysis of dynamic properties of tachometric converters, technical electrodynamics, 2000, part 1, pp. 103-107.
[15] j. zhu, t. hayashi, a. nishino, k. ogushi, new microforce generating machine using electromagnetic force, acta imeko, vol. 9, no. 5, december 2020, pp. 109-112.
doi: 10.21014/acta_imeko.v9i5.950
[16] v. y. kucheruk, v. v. kukharchuk, analysis and practical implementation of a microprocessor tool for measuring the angular speed of rotation of electric machines, visnyk vpi, 1995, number 2, pp. 12-16.
[17] l. pecly, k. hashtrudi-zaad, model transformation for enhanced parameter identification of linear dynamic systems, 2020 ieee conference on control technology and applications (ccta), montreal, qc, canada, 24-26 august 2020, pp. 242-247.
doi: 10.1109/ccta41146.2020.9206281
[18] v. v. kukharchuk, s. v. pavlov, v. s. holodiuk, v. e. kryvonosov, k. skorupski, a. mussabekova, g. karnakova, information conversion in measuring channels with optoelectronic sensors, sensors 2022, 22, 271.
doi: 10.3390/s22010271
[19] m. kaliuzhnyi, generalizing the sampling theorem for a frequency-time domain to sample signals under the conditions of a priori uncertainty, eastern-european journal of enterprise technologies, 3(9), 111, 2021, pp. 6-15.
doi: 10.15587/1729-4061.2021.235844
[20] b. k. ashley, u. hassan, frequency-time domain (ftd) impedance data analysis to improve accuracy of microparticle enumeration in a microfluidic electronic counter, 2021 43rd annual int. conference of the ieee engineering in medicine & biology society (embc), mexico, 01-05 november 2021, pp. 1201-1204.
doi: 10.1109/embc46164.2021.9630635
[21] m. morawiec, a. lewicki, speed observer structure of induction machine based on sliding super-twisting and backstepping techniques, ieee transactions on industrial informatics, vol. 17, no. 2, feb. 2021, pp. 1122-1131.
doi: 10.1109/tii.2020.2974507
[22] v. v. kukharchuk, v. f. hraniak, s. sh. katsyv, v. s. holodiuk, torque measuring channels: dynamic and static metrological characteristics, informatyka, automatyka, pomiary w gospodarce i ochronie środowiska, 10(3), 2020, pp. 82-85.
doi: 10.35784/iapgos.2080
[23] v. v. kukharchuk, v. s. holodiuk, results of studies of the quantization and discretization error of digital tachometers with an encoder, integrated intelligent robotic complexes (iirtk-2022), 15th int. scientific and practical conference, may 17-18 2022, kyiv, ukraine, k.: nau, 2022, 241 p. (a collection of theses), pp. 98-100.
[24] o. vasilevskyi, m. koval, s. kravets, indicators of reproducibility and suitability for assessing the quality of production services, acta imeko, vol. 10, no. 4, art. 11, december 2021, pp. 54-61.
doi: 10.21014/acta_imeko.v10i4.814
[25] bipm, iec, ifcc, iso, iupac, iupap and oiml, evaluation of measurement data - guide to the expression of uncertainty in measurement, joint committee for guides in metrology, bureau international des poids et mesures, jcgm 100:2008, 2008, 120 p.
[26] iso/iec guide 98-1:2009, uncertainty of measurement - part 1: introduction to the expression of uncertainty in measurement (iso, switzerland, 2009).
[27] oleksandr vasilevskyi, assessing the level of confidence for expressing extended uncertainty through control errors on the example of a model of a means of measuring ion activity, acta imeko, vol. 10, no. 2, art. 27, june 2021, pp. 199-203.
doi: 10.21014/acta_imeko.v10i2.810
[28] iso/iec 17025:2005, general requirements for the competence of testing and calibration laboratories (iso, switzerland, 2005).
[29] o. m. vasilevskyi, metrological characteristics of the torque measurement of electric motors, international journal of metrology and quality engineering, 8, 7, 2017, 9 p.
doi: 10.1051/ijmqe/2017005
[30] r. trishch, o. nechuiviter, k. dyadyura, o. vasilevskyi, i. tsykhanovska, m. yakovlev, qualimetric method of assessing risks of low quality products, mm science journal, 2021, october, pp. 4769-4774.
doi: 10.17973/mmsj.2021_10_2021030
[31] o. vasilevskyi, p. kulakov, d. kompanets, o. lysenko, v. prysyazhnyuk, w. wójcik, d. baitussupov, a new approach to assessing the dynamic uncertainty of measuring devices, proc. spie 10808, photonics applications in astronomy, communications, industry, and high-energy physics experiments 2018, 108082e, 2018, 8 p.
doi: 10.1117/12.2501578
[32] o. kupriyanov, r. trishch, d. dichev, t. bondarenko, mathematical model of the general approach to tolerance control in quality assessment, in: v. tonkonogyi, v. ivanov, j. trojanowska, g. oborskyi, i. pavlenko (eds), advanced manufacturing processes iii, interpartner 2021, lecture notes in mechanical engineering, springer, cham, pp. 415-423.
doi: 10.1007/978-3-030-91327-4_41
[33] o. vasilevskyi, v. kucheruk, v. bogachuk, k. gromaszek, w. wójcik, s. smailova, n. askarova, the method of translation additive and multiplicative error in the instrumental component of the measurement uncertainty, proc. spie 10031, photonics applications in astronomy, communications, industry, and high-energy physics experiments 2016, 1003127, 2016, 11 p.
doi: 10.1117/12.2249195
[34] vasilevskyi o. m., kulakov p. i., ovchynnykov k. v., didych v. m., evaluation of dynamic measurement uncertainty in the time domain in the application to high speed rotating machinery, international journal of metrology and quality engineering, 8, 2017, 25.
doi: 10.1051/ijmqe/2017019

the results of atmospheric parameters measurements in the millimeter wavelength range on the radio astronomy observatory "suffa plateau"

acta imeko issn: 2221-870x june 2023, volume 12, number 2, 1 - 5

dilshod raupov1,2, s. ilyasov1,2, g. i. shanin3
1 ulugh beg astronomical institute, uzbek academy of sciences, tashkent, uzbekistan
2 jizzah state pedagogical university, jizzah, uzbekistan
3 radio observatory rt-70, uzbek academy of sciences, tashkent, uzbekistan

section: research paper

keywords: astroclimate; turbulence; transparency windows; atmospheric absorption; precipitated water; radio range; millimeter wave; neper

citation: dilshod raupov, s. ilyasov, g. i. shanin, the results of atmospheric parameters measurements in the millimeter wavelength range on the radio astronomy observatory "suffa plateau", acta imeko, vol. 12, no. 2, article 18, june 2023, identifier: imeko-acta-12 (2023)-02-18

section editor: francesco lamonaca, university of calabria, italy

received december 12, 2022; in final form january 30, 2023; published june 2023

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

corresponding author: dilshod raupov, e-mail: dilshod@astrin.uz

abstract
the results of measurements of atmospheric absorption and the amount of precipitated water on the suffa plateau for the period from january 2015 to november 2020 are presented. the measurements of atmospheric parameters in the 2 and 3 mm ranges of the radio wave spectrum were carried out using the miap-2 radiometer. the results of more than six years of measurements show that, on the suffa plateau, the atmospheric parameters in the above ranges remain fairly stable. the median values of the atmospheric absorption and of the amount of precipitated water over the entire observation period were 0.14 and 0.12 nep and 5.91 and 9.83 mm, respectively, for the 2 and 3 mm ranges.

1. introduction
recently, thanks to the rapid development of microwave technology, the creation of adaptive large-aperture radio telescopes and the possibility of combining them into a system of ground-based and ground-space interferometers with very long baselines has gained considerable interest. in fact, this represents a real possibility to fully realize the advantage of the millimetre range in solving fundamental problems of cosmology, as well as a number of applied problems, at a completely new level. at the same time, such a rapid technological breakthrough in the development of microwave technology inevitably required the solution of new problems in a detailed study of the parameters of the medium in which mm-range radio waves propagate.

apart from the insignificant attenuation in space plasma due to scattering and absorption, the significant factor affecting the propagation of mm waves from the radiation source to a terrestrial observer is the distortion introduced by the earth's atmosphere. the earth's atmosphere absorbs electromagnetic radiation at most wavelengths; however, there are frequency bands (radio windows) in which the atmosphere is substantially transparent. such a radio window in the mm region of the spectrum is the region from 1 to 15 mm, in which several absorption lines of oxygen and water vapor are located.
a large number of papers and monographs have been devoted to theoretical studies of the molecular absorption of mm waves. in these papers, it is emphasized that, in theoretical absorption calculations, special attention should be paid to the feasibility of the physical approximations used and to the adequacy of the description of the corresponding processes. from this point of view, interesting results were obtained in [1], where it was possible to achieve agreement between the theory of molecular absorption in water vapor in transparency windows and experimental data by taking into account the duration of molecular collisions in the framework of the memory function method.

noteworthy are the calculations of absorption in water vapor and in molecular oxygen [2], [3] using natural (balloon-borne) and laboratory measurements [4], taking into account the influence of the earth's magnetic field on the characteristics of the propagation of millimeter waves in the region λ = 5 mm. however, in theoretical calculations, there is still the problem of correct comparison of the theory of molecular absorption of millimeter waves with experimental and laboratory data, as well as with the insufficiently studied statistical characteristics of the fluctuations of atmospheric meteorological parameters.

recently, interest has been growing in the construction of radio models of the atmosphere, with the help of which it would be possible to determine quickly (and with fair reliability) the characteristics of the propagation of millimeter waves along paths of various orientations and lengths. in the radio model constructed in [5], the input parameters are meteorological elements (pressure, temperature, humidity), the concentration of water particles suspended in the air, and the intensity of rain. the model makes it possible to calculate the influence of the attenuating and refractive characteristics of the medium on the propagation of radio waves in various atmospheric conditions. however, the authors note the need to refine and supplement the model and, first of all, the need for much more thorough meteorological support.

recently, scientists have paid much attention to the study of the influence of atmospheric parameters on the characteristics of microwave radio waves, both in areas that are promising for the installation of large radiometric systems and at operating complexes. one such radiometric complex, being created on the chajnantor plateau in the central andes (chile), is alma (atacama large millimeter array) [6], [7]. another potential site for installing mm-submm telescopes is antarctica, where the dome-c observatory is being created on a plateau at an altitude of 2835 m [8]-[10].
detailed studies of the transparency and stability of the atmosphere in this range were also carried out at mauna kea (hawaiian islands, usa) [11], where submillimeter telescopes (in particular, the sma, submillimeter array) are installed. radio transparency studies for mm-submm astronomy were also carried out at the indian astronomical observatory hanle in the himalayas in 2000-2003 [12], at pampa la bola (8 km ne of chajnantor) [13], and at rio frio [14]. water in its different phases and mixtures is a key element of the climate system; long-term chemical and isotopic changes of oceanic and atmospheric water mixtures must be monitored regularly and precisely [15].

the study of atmospheric absorption at the radio astronomy observatory "suffa plateau" (uzbekistan) has been carried out from 2015 to 2020 in the atmospheric transparency windows of 2 and 3 mm using the miap-2 measuring complex. a detailed description of the radiometer, its functional diagram, the basic principles of measurements and calculations, and estimates of permissible errors are given in [16].

2. characteristics of the measuring complex
the miap-2 measuring complex is a radiometric system that includes two radiometers for the frequencies 84-99 ghz (λav = 3 mm) and 132-148 ghz (λav = 2 mm), a turntable and a control, acquisition and processing system based on a personal computer.

the receiver in the (84-99) ghz range is made according to the direct amplification scheme with detection at the fundamental frequency. the modulator at the input of the receiver is made on the basis of chains of series-parallel connected schottky-barrier diodes installed in the waveguide of the main section. the modulator control and the synchronous data acquisition are carried out using a pc via the usb-4716 module. the modulation frequency is about 36 hz. the noise temperature of the receiver is 1300 k.

the solid-state receiver of the radiometer in the (132-148) ghz range is made according to the superheterodyne scheme and includes a local oscillator based on a gunn diode, a balanced mixer on schottky-barrier diodes and an if amplifier in the (4-8) ghz range. the modulator is similar to the one described above; it is also controlled, and data are collected, via the usb-4716 module. the modulation frequency is also about 36 hz. the noise temperature of the receiver is 6300 k.

the turntable consists of a support frame for mounting the radiometer module, a swivel mirror, and a mirror drive system based on a stepper motor. the weather protection housing is made of stainless steel and has a radio-transparent ptfe window. the observation angle can be varied from 0° to 90°. the drive is controlled according to a given program via the usb-4716 module. the control, data collection and processing system ensures fully automatic operation of the complex in the mode of cyclic observations at a given time.

both radiometers are equipped with lens antennas with a conical feed. the lenses are made of teflon and are limited on one side by a hyperbolic surface and are flat on the other. each of the boundary surfaces carries an antireflection structure in the form of periodic circular concentric grooves ("corrugations"), providing a reflection coefficient of no more than 0.5 % over the entire operating range of each of the radiometers. the feeds are in the form of circular cones with a break. the antenna beam width at the half-power level in both bands is about 2.5°.
measurements of the atmospheric absorption of radio waves in the millimetre range by the method of vertical cuts were described in [17]. the method is based on measuring the intrinsic thermal radiation of the atmosphere, comparing the brightness temperature increments of two parts of the atmosphere at different zenith angles relative to the known temperature of a certain reference region. in spite of all its advantages, this method is limited by the choice of one or another model of the structure of the atmosphere. in particular, it is assumed that the earth's atmosphere is isothermal in horizontal coordinates, which can lead to errors associated with drifting temperature inhomogeneities of the surface layers of the atmosphere.

figure 1. location of the suffa plateau (λ = 65°26', ϕ = 39°37', h = 2500 m).

the measurement principle is illustrated in figure 2. the left side shows a typical temperature distribution in the troposphere with respect to height (the so-called temperature profile); in the middle is a typical height distribution of humidity. these profiles are subject to significant seasonal and daily changes; from them it is possible to calculate the amount of deposited water and extrapolate it to absorption. on the right are the zenith angles (that is, the numbers of atmospheres) by which the radiometer approximates the brightness temperature profile.

before starting, the device is set to a strictly horizontal position according to the hydraulic level. the measurement cycle begins with the rotation of the mirror, guided by a photo sensor, exactly to the "zenith" direction. next, the mirror is moved using a stepper motor (the "step" value is 0.72 degrees) to a certain angle, the brightness temperature of the "spot" of the sky in the direction of observation is measured, the mirror moves to the next angle, the next measurement is made, and so on. the signal averaging time can be changed from 1 s (with a "calm" atmosphere) to tens of seconds. the signal enters the computer through the analog-to-digital converter (adc). next, a system of two (for measurements at two angles) or five (for measurements at five angles) equations is solved. the calculated value of atmospheric absorption is displayed on the graph. the cycle lasts about a minute, then the mirror is brought to its original position, the calculated transparency value is entered into the database and marked on the graph, after which the device is ready for the next measurement cycle. the time interval between cycles also varies widely, depending on the task set by the observer.

with the help of the control program of the complex, it is possible to select any observation angles in the range from 0° (zenith) to 90° (horizon). however, this choice should take into account both the sensitivity of the device (it is necessary that the "neighbouring" angles have a difference in brightness temperatures distinguishable by the device) and some features of its design. the point is that the error introduced into the measurements, associated with the finite width of the radiation pattern and a fairly wide bandwidth, has a complex dependence on the observation angle; an appropriate selection of angles can minimize this error. when processing the results from two angles, these angles are 60.5° and 76.3° (corresponding to approximately two and four atmospheric thicknesses traversed by the signal).
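the two-angle reduction can be illustrated with a short calculation. the closed form below assumes an isothermal atmosphere, a brightness temperature t_b(θ) = t_eff · (1 − exp(−τ_0 · m)) with air mass m = sec(θ), and treats the second air mass as twice the first, which holds only approximately for 60.5° and 76.3°; it is a python sketch of the general method cited from [17], not the exact processing of the miap-2 software:

import numpy as np

def zenith_opacity_two_angles(tb1, tb2, theta1_deg=60.5, theta2_deg=76.3):
    # air masses at the two observation angles (approximately 2 and 4)
    m1 = 1.0 / np.cos(np.radians(theta1_deg))
    m2 = 1.0 / np.cos(np.radians(theta2_deg))
    assert abs(m2 / m1 - 2.0) < 0.1, "closed form assumes m2 = 2*m1"
    # with m2 = 2*m1 the effective temperature cancels out:
    # tb2 / tb1 = 1 + exp(-tau0 * m1)
    x = tb2 / tb1 - 1.0
    return -np.log(x) / m1   # zenith opacity tau0, in neper

for example, hypothetical brightness temperatures of 40 k and 68 k at the two angles would give tau0 ≈ -ln(0.7)/2.03 ≈ 0.18 nep; multiplying by 8.686 converts the result to db.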
when processing five angles, the most rational choice is: 60.5°, 76.3°, 81.4°, 84.2°, 88.6°; the last angle is as close as possible to the horizon. close absorption values obtained by processing the results of measurements at 2 and 5 angles indicate the normal operation of the device, the absence of interference in the form of clouds or ground objects (at the "lowest" angle) in the direction of observation, and a "calm" state of the atmosphere.

after processing each cycle of measurements, we directly obtain the value of the atmospheric absorption at the zenith, expressed in neper (1 nep = 8.686 db), i.e. the so-called optical thickness of the atmosphere. this value is subject to significant seasonal and daily changes, which is primarily due to changes in the total content of water vapor in the atmosphere. at the end of the observation session, we obtain a chronology of absorption changes in the two bands during the observation time. any observation period can be chosen, and round-the-clock monitoring of radio transparency is possible.

the total absorption of radio waves in the specified range includes absorption by oxygen molecules, by the water vapor contained in the atmosphere, and absorption in clouds. in clear weather, the third component naturally vanishes. oxygen absorption varies little for a particular observation site over time due to the relatively constant oxygen content of the atmosphere; it is almost constant and depends only on the height of the observation site. thus, the change in radio transparency is mainly due to a change in the amount of water vapor in the atmosphere or, in other words, a change in the amount of precipitated water.

3. observational statistics
a significant array of measurements of atmospheric absorption in the mm spectral range has been accumulated on the suffa plateau for the period from january 2015 to november 2020. as an example, the full time series of atmospheric absorption obtained on the suffa plateau in the 2 mm radio wave band over the above period is shown in figure 3. some gaps in the observation data are due to technical reasons. as can be seen from the figure, the value of atmospheric absorption remains stable over the entire observation period. a similar trend is also observed in the values of atmospheric absorption and the amount of atmospheric deposited water in the 2 and 3 mm ranges. it should be noted that equipment malfunctions appeared in 2018, and the data obtained were very different from the previous values of the atmospheric parameters; therefore, when calculating atmospheric parameters, the data of 2018 were discarded.

figure 2. the principle of measurement by the method of vertical cuts. the left side shows a typical temperature distribution in the troposphere with respect to height (the so-called temperature profile); in the middle, a typical height distribution of humidity. these profiles are subject to significant seasonal and daily changes; from them it is possible to calculate the amount of precipitated water and extrapolate it to absorption. on the right are the zenith angles (that is, the numbers of atmospheres) by which the radiometer approximates the brightness temperature profile [16].

figure 3. the atmospheric absorption time series in the 2 mm range obtained on the suffa plateau from january 2015 to november 2020.
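the statistics reported below (the medians and the cumulative distributions of figure 4) reduce to elementary operations on the accumulated series. a minimal sketch, with the 2018 data discarded as just described and with all names being illustrative:

import numpy as np

def absorption_statistics(dates, tau, discard_year=2018):
    # drop the year with equipment malfunctions, then compute the
    # median and the empirical cumulative distribution of the opacity
    keep = np.array([d.year != discard_year for d in dates])
    tau = np.asarray(tau, dtype=float)[keep]
    x = np.sort(tau)
    cdf = np.arange(1, x.size + 1) / x.size
    return np.median(tau), x, cdf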
the statistical distribution of the atmospheric absorption and of the amount of precipitated water is shown in figure 4. the median value of the atmospheric absorption over the entire observation period was 0.13 and 0.11 nep for the 2 mm and 3 mm radio ranges, respectively. the median value of the atmospheric amount of precipitable water for the entire period of observations was 4.96 mm in the 2 mm radio band and 9.25 mm in the 3 mm band.

the seasonal trends are shown in table 1, which reports the average monthly values of atmospheric absorption (abs) and of the amount of precipitable water vapor (pwv) computed after merging the different years. as can be seen from the data, in the 2015-2020 period the average monthly values of atmospheric absorption and of the amount of deposited water on the suffa plateau remain fairly stable. the main range of changes in atmospheric absorption and in the amount of deposited water lies within 0.11-0.19 nep and 4.84-7.40 mm for the 2 mm range of the radio wave spectrum and, for the 3 mm range, within 0.11-0.14 nep and 8.81-13.38 mm, respectively.

for a visual representation of the statistics of the atmospheric parameters and their trend of change, the seasons were chosen according to the weather conditions: november, december, january and february belong to the winter season; the transitional season includes march, april, september and october; the summer season includes may, june, july and august. for the 3 mm range, in the winter season the average value of atmospheric absorption is less than 0.10 nep, in the transitional season it is less than 0.12 nep, and in the summer season it ranges from 0.12 to 0.15 nep. for the 2 mm range, in the winter season the average value of atmospheric absorption is less than 0.12 nep, in the transitional season it is less than 0.14 nep, and in the summer season it ranges from 0.14 to 0.15 nep.

the temporal dynamics of the precipitated water in january and july are characteristic of the extreme climate conditions. the average value of deposited water in january is about 4.84 mm for the 2 mm range and 10.0 mm for the 3 mm range, and in july 7.40 and 13.38 mm for the 2 and 3 mm ranges, respectively. the diurnal variations of the deposited water in the summer period are more significant than in winter. on some nights in december and january, the amount of precipitated water drops to a minimum of about 2 mm; in summer it rises to 12 mm.

4. conclusions
based on this study, it can be concluded that over a six-year period of time the atmospheric parameters on the suffa plateau remain fairly stable. the values of atmospheric absorption and deposited water presented here correspond to the values of the entire thickness of the atmosphere at the zenith. measurements of the atmospheric parameters on the suffa plateau showed that the value of atmospheric absorption at the zenith, sometimes for several days, was within 0.06-0.08 nep at a wavelength of 3 mm and within 0.08-0.10 nep at a wavelength of 2 mm. at shorter time intervals, for several hours, the absorption of the 3 mm wave sometimes drops to 0.06-0.08 nep, and at a wavelength of 2 mm to 0.05-0.06 nep; such cases occur in the winter. the amount of precipitated water for several days is then in the range of 1.6-2.0 mm. these values can be converted to parameters at any angle, as well as extrapolated to any height, taking into account the standard atmosphere model.

references
[1] yu. p. kalmykov, s. v.
titov, application of the method of memory functions to calculate the rotational absorption spectrum of water vapour, radio engineering and electronics, 33(1) (1989), pp. 13-20.

figure 4. statistical distribution of atmospheric absorption (top) and amount of deposited water (bottom) on the suffa plateau, calculated for the radio wave bands of 2 and 3 mm, from january 2015 to november 2020. the y-axis of the histogram is on the right, the cumulative distribution is on the left.

table 1. monthly average values of atmospheric absorption and integral amount of deposited water from january 2015 to december 2020.

months      absorption 3 mm (nep)   absorption 2 mm (nep)   pwv 3 mm (mm)   pwv 2 mm (mm)
january     0.11                    0.11                    8.81            4.84
february    0.12                    0.14                    10.44           5.05
march       0.12                    0.15                    11.42           5.89
april       0.13                    0.16                    12.22           6.22
may         0.14                    0.18                    12.82           7.21
june        0.14                    0.19                    13.26           7.27
july        0.14                    0.17                    13.38           7.40
august      0.14                    0.16                    13.25           7.28
september   0.14                    0.14                    12.57           7.09
october     0.13                    0.14                    12.05           6.62
november    0.12                    0.12                    11.11           5.56
december    0.12                    0.11                    9.25            4.69
mean        0.12                    0.14                    11.30           5.91

[2] yu. p. kalmykov, s. v. titov, on the absorption spectrum of molecular oxygen in the o-hz frequency range, radio physics 32(8) (1989), pp. 933-944.
[3] a. r. l. yasmin, theoretical modeling of microwave absorption by water vapour, appl. optics 29(13) (1990), pp. 1979-1983.
doi: 10.1364/ao.29.001979
[4] a. a. vlasov, e. n. kadygrov, e. a. kuklin, laboratory measurements of the absorption of 5 mm radio waves in atmospheric oxygen, xvi all-union conference on radio wave propagation, abstracts of reports, kharkov, soviet union, khpi, 1990, part 2, p. 35.
[5] a. a. vlasov, e. n. kadygrov, a. n. shaposhnikov, selection of the model for calculating the o2 absorption coefficient for determining the temperature profile of the atmosphere from satellite microwave measurements, earth exploration from space, 1 (1990), pp. 36-39.
[6] s. j. e. radford, the atacama large millimeter array: observing the distant universe, in observing dark energy, asp conf. ser. 339, s. c. wolff, t. r. lauer (eds.), san francisco: asp, usa, 2005, isbn: 1-58381-206-7, p. 177.
[7] s. j. e. radford, alma site and configurations, national radio science meeting, washington, d. c., usa, national academy of sciences, 2002, p. 364.
[8] p. g. calisse, m. c. b. ashley, m. g. burton, m. a. phillips, j. w. v. storey, s. j. e. radford, j. b. peterson, submillimeter site testing at dome c, antarctica, in third international workshop on astrophysics at dome c, pasa, 21(3) (2004), pp. 256-263.
doi: 10.1071/as03018
[9] p. g. calisse, m. c. b. ashley, m. g. burton, j. s. lawrence, t. travouillon, j. b. peterson, m. a. phillips, s. j. e. radford, j. w. v. storey, dome c, antarctica: the best accessible submillimeter site on the planet?, the dense interstellar medium in galaxies, springer proceedings in physics 91, eds. pfalzner, s., kramer, c., staubmeier, c., heithausen, a. (berlin: springer, isbn: 3-540-21254-x), 2004, pp. 353-356.
doi: 10.1007/978-3-642-18902-9_64
[10] p. g. calisse, m. c. b. ashley, m. g. burton, j. r. lawrence, m. a. phillips, j. w. v. storey, s. j. e. radford, j. b. peterson, new submillimeter site testing results from dome c, antarctica, astronomy in antarctica, ed. burton, m., highlights of astronomy, 13, 25th ga of the iau, ss 2, 18 july 2003, sydney, australia, meeting abstract, p. 33.
[11] j. b. peterson, s. j. e. radford, p. a. r. ade, r. a. chamberlin, m. j. o'kelly, k. m. peterson, e.
schartman, stability of the submillimeter brightness of the atmosphere above mauna kea, chajnantor and the south pole, pasp 115 (2003), art. 383.
doi: 10.1086/368101
[12] p. g. ananthasubramanian, s. yamamoto, t. p. prabhu, d. angchuk, measurements of 220 ghz atmospheric transparency at iao, hanle, during 2000-2003, bull. astr. soc. india, 32(2) (2004), pp. 99-111.
[13] s. matsushita, h. matsuo, j. r. pardo, s. j. e. radford, fts measurements of submillimeter-wave atmospheric opacity at pampa la bola ii: supra-terahertz windows and model fitting, publications of the astronomical society of japan 51 (1999), art. no. 603.
doi: 10.1093/pasj/51.5.603
[14] m. a. holdaway, m. ishiguro, s. m. foster, r. kawabe, k. kohno, f. n. owen, s. j. e. radford, m. saito, comparison of rio frio and chajnantor site testing data, mma memo #152.
[15] r. feistel, salinity and relative humidity: climatological relevance and metrological needs, acta imeko 4(4) (2015), pp. 57-61.
doi: 10.21014/acta_imeko.v4i4.216
[16] g. m. bubnov, yu. n. artemenko, v. f. vdovin, d. b. danilevsky, i. i. zinchenko, v. i. nosov, p. l. nikiforov, g. i. shanin, d. a. raupov, the results of astroclimate observations in the short-wavelength interval of the millimeter-wave range on the suffa plateau, radio physics and quantum electronics 59(8-9) (2017), pp. 763-771.
doi: 10.1007/s11141-017-9745-7
[17] a. g. kislyakov, on the distribution of the absorption of radio waves in the atmosphere according to its own radio emission ii, radio engineering and electronics 13(7) (1968), pp. 1161-1168.

measurements for non-intrusive load monitoring through machine learning approaches

acta imeko issn: 2221-870x december 2021, volume 10, number 4, 90 - 96

giovanni bucci1, fabrizio ciancetta1, edoardo fiorucci1, simone mari1, andrea fioravanti1
1 university of l'aquila, piazzale e. pontieri 1, 67100 l'aquila, italy

section: research paper

keywords: non-intrusive load monitoring (nilm); energy management; deep learning (dl); sweep frequency response analysis (sfra)

citation: giovanni bucci, fabrizio ciancetta, edoardo fiorucci, simone mari, andrea fioravanti, measurements for non-intrusive load monitoring through machine learning approaches, acta imeko, vol. 10, no. 4, article 16, december 2021, identifier: imeko-acta-10 (2021)-04-16

section editors: umberto cesaro and pasquale arpaia, university of naples federico ii, italy

received october 12, 2020; in final form december 6, 2021; published december 2021

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: simone mari, e-mail: simone.mari@graduate.univaq.it

abstract
the topic of non-intrusive load monitoring (nilm) has seen a significant increase in research interest over the past decade, which has led to a significant increase in the performance of these systems. nowadays, nilm systems are used in numerous applications, in particular by energy companies that provide users with an advanced service for managing their consumption. these systems are mainly based on artificial intelligence algorithms that disaggregate the energy by processing the absorbed power signal over more or less long time intervals (generally from fractions of an hour up to 24 h). less attention has been paid to the search for solutions that allow non-intrusive monitoring of the load in (almost) real time, that is, systems that make it possible to determine the variations in loads in extremely short times (seconds or fractions of a second). this paper proposes possible approaches for non-intrusive load monitoring systems operating in real time, analysing them from the point of view of measurement. the measurement and post-processing techniques used are illustrated and the results discussed. in addition, the work discusses the use of the results obtained to train machine learning algorithms that allow the conversion of the measurement results into information useful to the user.

1. introduction
nowadays, economic development has led to a steady increase in the demand for electricity and related advanced services. companies are therefore moving towards the conversion of their traditional electrical systems into smart systems. one of the advantages that can be achieved is an efficient use of energy, through the programming of loads and users' awareness of their consumption. to this end, it is necessary to know the status and consumption of the various loads powered by the system. this can be achieved through intrusive monitoring, i.e. by installing individual sensors for each load, or through non-intrusive monitoring, i.e. by measuring the total power absorbed by the system and deducing the contributions of the individual loads from it through the use of specific algorithms. in this second case, an extremely simple and compact measurement system is obtained, at the expense of greater complexity from the point of view of processing [1]. these non-intrusive monitoring systems, however, have proven effective in a wide range of applications, which go beyond energy management alone [2].

non-intrusive load monitoring (nilm) systems are used successfully in many applications, including demand response programs, where consumers can generate profits based on their flexibility [3], [4]. other applications are anomaly detection, to detect malfunctions based on the profiles of the power absorbed by individual loads [5], and condition-based maintenance, which has allowed the creation of monitoring systems capable of helping operators in maintenance planning [6], [7]. finally, ambient assisted living is also very important: here the nilm system monitors the switching on and off of household appliances to infer the position and activities of people, detecting the space-time context and, therefore, the activities of daily life of the subject [8]-[10].

the first nilm system was proposed by g. hart in 1985 [11]. this algorithm was based on the detection of the edges in the aggregate power profile, followed by a clustering operation and a subsequent matching based on the value of the absorbed power and on the on and off times.
clearly this approach, while being functional in certain situations, showed significant limitations, as a multi-state appliance had to be managed as a set of distinct on/off appliances. conversely, continuously variable appliances and appliances with permanent consumption could not be detected correctly. it also included strong manual feature extraction requirements. subsequently, other algorithms based on combinatorial optimization were proposed [12], whose main assumption is that each load can be in one of a reduced number k of states, each associated with a different energy consumption. the goal of the algorithm is to assign states to the household appliances in such a way as to minimize the difference between the aggregate power reading and the sum of the energy consumptions of the different loads. in the last decade, thanks to the increase in available computing power, attention has shifted to artificial intelligence algorithms, such as hidden-state markov chains [13]-[15] and deep learning models [16]-[18]. in particular, the use of deep learning algorithms has overcome many of the limits that characterized previous methods, thus allowing measurement systems to adapt to homes never analysed during the training phase. furthermore, in terms of accuracy, systems based on convolutional neural networks have surpassed other state-of-the-art methods, such as those based on factorial hidden markov models [19].

however, the state of the art of these systems is represented almost exclusively by monitoring systems that process signals over time intervals of the order of hours; consequently, the resulting feedback is not in real time. instead, it is often necessary to know in real time the status changes of the monitored loads. non-intrusive load monitoring systems are therefore required that are capable of recognizing the different powered devices based on signal processing over intervals of seconds or fractions of a second. in this paper, two approaches for the recognition of electrical loads in real time are presented. the first is a passive measurement system, based on the acquisition and processing of the current absorbed by the system. the second is an active measurement system, based on the measurement of the response to a variable frequency signal injected into the system.

2. nilm system based on passive measurements
a first attempt was made by creating a recognition system for electrical loads based on the analysis of the total absorbed current. a system of this type allows a low-cost and galvanically isolated measuring system to be obtained. in steady-state conditions, the absorbed current does not provide sufficient information to characterize a wide range of different loads, because the waveform of the current absorbed by domestic loads hardly ever has a significant harmonic content; therefore, the only considerations that can be made are based on differences in amplitude. it was consequently decided to characterize the loads on the basis of their transient characteristics. previous studies have tried to create nilm systems based on transient characteristics [20]-[23], but all have limited the analysis to a reduced number of loads and to particular cases.
in this study, on the other hand, tests were conducted on signals acquired from a test plant in which five commonly used household appliances were activated and deactivated; above all, the performance was also evaluated on the building-level fully-labeled public dataset for electricity disaggregation (blued) [24]. first, the rms value of the current is calculated by processing the acquired raw current with a sliding-window technique, as follows:

$I_{\mathrm{rms}}(k) = \sqrt{\frac{1}{N} \sum_{n=k}^{k+(N-1)} i(n)^2}$ , (1)

where $k$ is the $k$-th measured current sample, $N$ is the number of samples per cycle, $i(n)$ is the sampled signal, and $n$ is the summation index. the rms value is then differentiated, and the resulting signal $I'_{\mathrm{rms}}(k)$ is an impulsive signal in which each pulse corresponds to a change in state of one of the powered loads; an example is shown in figure 1:

$I'_{\mathrm{rms}}(k) = I_{\mathrm{rms}}(k) - I_{\mathrm{rms}}(k-1)$ . (2)

the position of a pulse in the differentiated signal identifies the moment in which a certain event occurred. in this way, the information relating to the steady state is filtered out and only the information relating to the transients is kept. this impulsive signal is then processed with the short-time fourier transform (stft), through the following known transformation [25], [26]:

$STFT(m, \omega) = \sum_{n=-\infty}^{\infty} I'_{\mathrm{rms}}(n)\, w(n-m)\, \mathrm{e}^{-\mathrm{j} \omega n}$ . (3)

each change in the states of the powered loads is characterized and discriminated on the basis of the spectral content of the differentiated signal.

figure 1. variation with time of the rms current $I_{\mathrm{rms}}(k)$ (top) and its derivative $I'_{\mathrm{rms}}(k)$ (bottom).

the current is processed cyclically in 1-second acquisition intervals, following the described procedure. each acquisition slot is processed (to calculate the rms value and its derivative) with an overlap of 500 ms, which ensures a correct analysis also of transient events that may be fragmented across two successive slots. the stft is implemented by processing 10-cycle (200 ms) windows with an overlap of 4/5 of the processing window. this results in a spectrogram with 101 points in frequency at 26 different instants of time. to take into account the sign of the change (switching on, switching off or passing to a different consumption state), the spectrogram is multiplied by the sign of the cumulative sum, evaluated on the rms signal as follows:

$S_N = \sum_{n=1}^{N} \left( I_{\mathrm{rms}}(n) - I_{\mathrm{rms}}(n-1) \right)$ , (4)

where $I_{\mathrm{rms}}(n)$ is the rms value of the current described in (1), $N$ is the number of samples, and $S_N$ is the value of the cumulative sum. the final signal $S(i,j)$ can be obtained in the form of a 101 × 26 matrix, as follows:

$S(i,j) = STFT(m,\omega) \cdot \mathrm{sgn}(S_N) = \sum_{n=-\infty}^{\infty} I'_{\mathrm{rms}}(n)\, w(n-m)\, \mathrm{e}^{-\mathrm{j}\omega n} \cdot \mathrm{sgn}\!\left( \sum_{n=1}^{N} \left( I_{\mathrm{rms}}(n) - I_{\mathrm{rms}}(n-1) \right) \right)$ . (5)

an example of the spectrogram obtained from this procedure is shown in figure 2. this spectrogram is used as input to a neural network which provides a response every 500 ms, indicating the presence or absence of events in the signal and the type of device involved.

2.1. the adopted artificial neural network
the deduction of the loads, starting from the spectrogram described above, is traced back to a multiclass classification problem, that is, a single unique label must be associated with each spectrogram. analysing the current with such a small sliding window (1 second with 0.5 second overlap) makes it reasonable to assume that within a single window there is no change of state of more than one load.
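a minimal sketch of the feature-extraction chain of equations (1)-(5) is given below, assuming numpy/scipy; the sampling parameters follow the set-up described in this paper, while the exact windowing details of the original implementation may differ:

```python
import numpy as np
from scipy.signal import stft

FS = 10_000          # sampling frequency in hz (as in the described setup)
N = FS // 50         # samples per 50 hz mains cycle = 200

def rms_sliding(i_raw):
    """per-sample sliding-window rms over one mains cycle, eq. (1)."""
    i2 = np.convolve(i_raw**2, np.ones(N) / N, mode="valid")
    return np.sqrt(i2)

def signed_spectrogram(i_raw):
    """differentiate the rms signal (eq. 2), compute the stft (eq. 3)
    and apply the sign of the cumulative sum (eqs. 4-5)."""
    i_rms = rms_sliding(i_raw)
    d = np.diff(i_rms)                          # eq. (2)
    # 10-cycle (200 ms) windows with 4/5 overlap -> hop of 2 cycles
    f, t, Z = stft(d, fs=FS, nperseg=10 * N, noverlap=8 * N)
    s_n = np.sign(np.sum(d))                    # eq. (4)
    return np.abs(Z) * s_n                      # eq. (5), signed magnitude

# example with a synthetic amplitude step at t = 1 s
t_ax = np.arange(2 * FS) / FS
i_raw = np.sin(2 * np.pi * 50 * t_ax) * (1 + 4 * (t_ax > 1.0))
S = signed_spectrogram(i_raw)
print(S.shape)
```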
artificial neural networks (anns) are an example of algorithms that natively support multiclass classification problems. in this work, a particular ann type, namely the convolutional neural network (cnn), is adopted [27] because of its capability of processing complex inputs such as multidimensional arrays. more specifically, cnns are designed to exploit the intrinsic properties of some two-dimensional data structures, in which there is a correlation between spatially close elements (local connectivity). the proposed system [28] includes different layers: an input layer (for signal loading), three groups of layers, each consisting of convolution, relu, and max-pooling layers (for feature extraction from the input), and a group of flatten, fully connected, and softmax layers, which uses the data from the convolution layers to generate the output.

2.2. the proposed system setup
the proposed measurement system uses an agilent u2542a data acquisition module with a 16-bit resolution. the current signal was acquired using a ta sct-013 current transducer and the sampling frequency was set to 10 khz. the cnn was implemented on a desktop computer (based on the windows 10 x64 operating system) using the open-source python 3.7 from anaconda. tests were conducted on signals acquired directly from a real system, in order to have flexibility both as regards the sampling frequency and for the generation of multiple events. other tests were conducted on signals belonging to the public blued dataset, which features 34 different types of devices. the proposed measurement system was installed on a test system designed to generate the electrical loads produced by domestic users, as part of the research project "non-intrusive infrastructures for monitoring loads in residential users". the system, which is located in the electrical engineering laboratory of the university of l'aquila (italy), allows the generation of electrical loads individually or simultaneously. these loads correspond to those generated by the most common household appliances and are integrated in a structure similar to that of a residential building, in order to reproduce the real signal conditioning and measurement problems.

2.3. the obtained results
the performance of the nilm system was assessed by conducting acquisitions during which various loads were turned on and off, for a total of over 519 events. next, blued, a public dataset on residential electricity usage, was used. this dataset includes voltage and current measurements for a single-family house in the united states, sampled at 12 khz for an entire week. regarding nilm systems, no standard and consolidated techniques can be found in the literature to evaluate the performance of event detectors. since the purpose of a nilm system is to disaggregate the consumption of each of the devices in question, the performance was analysed to verify the achievement of these objectives, which in summary are the correct identification and classification of the events. these parameters were obtained using the numbers of true positives (tp), false positives (fp), true negatives (tn), and false negatives (fn). in addition, the accuracy was assessed as follows:

$\mathrm{Accuracy}\,\% = \frac{\text{correct matches}}{\text{total possible matches}} \cdot 100\,\%$ . (6)

the obtained results are summarized in table 1.

figure 2. spectrogram obtained during the switch-on of a microwave oven.
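to make the architecture outlined in section 2.1 concrete, a minimal keras sketch is given below; the filter counts, kernel sizes and number of output classes are illustrative assumptions, not the values of the published model, while the 101 × 26 input matches the spectrogram dimensions given above:

```python
# minimal sketch of a cnn of the kind described in section 2.1
# (three conv/relu/max-pool groups followed by flatten/dense/softmax);
# layer sizes are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 6  # e.g. five appliances plus a "no event" class (assumed)

model = models.Sequential([
    tf.keras.Input(shape=(101, 26, 1)),          # signed spectrogram
    layers.Conv2D(16, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # one label per window
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```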
3. nilm system based on active measurements
subsequently, an attempt was made to recognize the loads powered by an electrical system through sweep frequency response analysis (sfra). sfra is a non-destructive diagnostic technique that detects the displacement and deformation of windings, among other mechanical and electrical failures, in power and distribution transformers. sfra proceeds by applying a sinusoidal voltage signal of constant amplitude and variable frequency between one terminal of the bipole under test and ground. the response is measured between the other terminal of the bipole and ground. both the input and output signals are acquired and processed. the result is the transfer function (tf) of the bipole over a wide frequency range; a failure is detected when a change in the tf is observed. the possibility of using these traces to identify which devices are powered at the time of the measurement was evaluated [29]. the basic idea is to detect a change in the load starting from the change in the measured tf. for this purpose, a variable-frequency sinusoidal signal is applied between the terminal of the power phase conductor and ground, by means of the instrumentation shown in figure 3; both the applied input signal and the output signal between the neutral conductor terminal and ground are then measured and processed. the test instrument generates a sinusoidal input signal of constant amplitude (a few volts) and of frequency variable in the range between 10 khz and 1.5 mhz. the results obtained on the electrical system are analysed on a temporal basis, comparing them with those previously obtained on the same system. the measurement techniques follow the iec 60076-18 standard [30], which regulates the test execution methods, the characteristics of the instruments used, the connection methods and the analysis of the results. figure 4 shows the signatures obtained for the different types of loads. tests were also conducted in order to evaluate the ability to discriminate, through these traces, different loads when they are powered simultaneously. from figure 5 it is possible to see that these traces allow one to discriminate whether the heater is powered individually or in combination with other loads.

3.1. the machine learning approaches
in order to translate these traces into information useful for the users, the problem was formulated as a multi-label classification problem. this is a variant of the classification problem in which multiple labels (or multiple classes) may be assigned to each instance. multi-label classification is a generalization of multiclass classification, which is the single-label problem of categorizing instances into precisely one of more than two classes. in the multi-label problem there is no constraint on how many of the classes an instance can be assigned to. the problem was initially addressed with an ann [31] similar to the one described in the previous section, with good results. however, a limitation of anns is the large amount of training data required, which makes it difficult to apply them in real cases. an attempt was therefore made to use another machine learning algorithm, the support vector machine (svm). the svm is one of the most popular artificial intelligence algorithms and is a supervised learning algorithm used primarily for solving classification problems.
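as detailed in the remainder of this section, the multi-label problem is handled here with one binary svm per appliance, each fed with the 320-point sfra trace; a minimal scikit-learn sketch under these assumptions follows (the training data below are random placeholders, not real sfra measurements):

```python
# one binary svm (polynomial kernel) per appliance, each deciding whether
# its appliance is present in a 320-point sfra trace; a sketch with
# placeholder data, not the authors' trained models.
import numpy as np
from sklearn.svm import SVC

APPLIANCES = ["lamp", "hairdryer", "induction hob", "heater"]
rng = np.random.default_rng(0)

# placeholder training set: 90 traces of 320 points and binary labels
X_train = rng.normal(size=(90, 320))
y_train = {a: rng.integers(0, 2, size=90) for a in APPLIANCES}

classifiers = {a: SVC(kernel="poly", degree=3).fit(X_train, y_train[a])
               for a in APPLIANCES}

def powered_appliances(trace):
    """return the set of appliances detected in one sfra trace."""
    x = np.asarray(trace).reshape(1, -1)
    return {a for a, clf in classifiers.items() if clf.predict(x)[0] == 1}

print(powered_appliances(rng.normal(size=320)))
```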
unlike generic classification algorithms, which discriminate on the basis of characteristics common to each class, the svm focuses on the samples that are most similar to each other but belong to different classes, which are therefore the samples that are hardest to discriminate. on the basis of these samples, the algorithm constructs an optimal hyperplane capable of separating them, which can then be used to discriminate new samples. these samples are called support vectors, because they are the only samples that support the creation of the model, while all the other samples play no role.

table 1. scores achieved with the acquired signal and the blued dataset.
             acquired signal   blued dataset
precision    0.981             0.998
recall       0.998             0.998
f1-score     0.989             0.998
accuracy %   98.0%             87.9%

figure 3. instrumentation used for the sfra.
figure 4. sfra tests of different household appliances.
figure 5. sfra tests with simultaneous loads powered.

in a two-dimensional case, where the examples to be classified are defined by only two characteristics, the optimal hyperplane reduces to a straight line, as shown in figure 6. the algorithm searches for the line that maximizes the margin between the two examples indicated as support vectors. if it is not possible to separate the classes with a straight line, as in the case of non-linear classification problems, the algorithm uses the kernel trick [32]. in particular, a polynomial kernel was chosen for this work, thus examining not only the given characteristics of the input samples to determine their similarity, but also their combinations. in the case of the proposed nilm system, the problem is obviously not two-dimensional: the sfra measuring system returns an array of 320 points, which represents the transfer function at the different frequency values. therefore the svm has 320 inputs, and consequently the dimension of the problem is also 320. to solve the problem, i.e. to identify which devices are powered starting from the result of the sfra measurement, four svm classifiers are used, each of which performs a binary classification, identifying the presence or absence of the device associated with it.

3.2. the obtained results
unlike the nilm system based on passive measurements, in this case there is no public dataset with the necessary characteristics, i.e. there is no public dataset of measurements obtained through the sfra technique. therefore, the performance evaluation was made solely on the basis of our acquired measurements. as for the previously described system, the same test parameters were used in this case. the proposed algorithm was subjected to different scenarios, each for a certain number of tests, in which the different appliances were powered individually or simultaneously. since, as already explained above, each appliance has an associated svm algorithm that reveals its presence or absence, the performance of the four algorithms was assessed individually. to allow a comparison with the algorithms developed by other researchers, the precision, recall, f1-score and accuracy during classification were evaluated [33]. the results are shown in table 2. as far as the ann is concerned, the evaluation measures for a multiclass, hence single-label, classification problem are generally different from those for the multi-label case. in single-label classification we can use simple metrics such as precision, recall, and accuracy [34].
however, in multi-label classification an incorrect classification is no longer an outright error, as a prediction containing a subset of the actual classes is certainly better than a prediction containing none of them; i.e. correctly predicting two of the four labels is better than predicting no labels at all. to evaluate the performance of a multi-label classifier we have to average over the classes. there are two different methods of doing this, called micro-averaging and macro-averaging [35]. the macro-average computes the metric independently for each class and then takes the average, hence treating all classes equally, whereas the micro-average aggregates the contributions of all classes to compute the average metric. in a multi-label classification setup, the micro-average is preferable if there is a suspicion of class imbalance (i.e. the possibility of having many more examples of one class than of the other classes). in the cases under examination this problem does not exist, as the examples used for training and testing are sufficiently uniform, so micro-average and macro-average can both be considered reliable.

$\mathrm{Precision}_{\text{micro-averaging}} = \frac{\sum_{n=1}^{N} TP_n}{\sum_{n=1}^{N} (TP_n + FP_n)}$ (7)

$\mathrm{Recall}_{\text{micro-averaging}} = \frac{\sum_{n=1}^{N} TP_n}{\sum_{n=1}^{N} (TP_n + FN_n)}$ (8)

$F1\text{-}score_{\text{micro-averaging}} = \frac{2 \times \mathrm{Precision}_{\text{micro-averaging}} \times \mathrm{Recall}_{\text{micro-averaging}}}{\mathrm{Precision}_{\text{micro-averaging}} + \mathrm{Recall}_{\text{micro-averaging}}}$ (9)

$\mathrm{Precision}_{\text{macro-averaging}} = \frac{\sum_{n=1}^{N} \mathrm{Precision}_n}{N}$ (10)

$\mathrm{Recall}_{\text{macro-averaging}} = \frac{\sum_{n=1}^{N} \mathrm{Recall}_n}{N}$ (11)

$F1\text{-}score_{\text{macro-averaging}} = \frac{2 \times \mathrm{Precision}_{\text{macro-averaging}} \times \mathrm{Recall}_{\text{macro-averaging}}}{\mathrm{Precision}_{\text{macro-averaging}} + \mathrm{Recall}_{\text{macro-averaging}}}$ (12)

figure 6. representation of a linear classification problem (top) and a non-linear classification problem (bottom), in which the samples are defined by only two features.

table 2. achieved scores with the svm.
            svm lamp   svm hairdryer   svm induction hob   svm heater
tp          42         200             200                 200
fp          0          0               0                   2
tn          400        250             250                 248
fn          8          0               0                   0
precision   1          1               1                   0.99
recall      0.84       1               1                   1
f1-score    0.91       1               1                   0.99

table 3. comparison between svm and ann.
            svm                                 ann
            micro-averaging   macro-averaging   micro-averaging   macro-averaging
precision   0.99              0.99              0.94              0.91
recall      0.98              0.96              0.99              0.99
f1-score    0.98              0.97              0.96              0.95

table 3 shows the scores achieved according to the two criteria, both with the ann and with the svm.

4. conclusions and final remarks
in this paper a brief overview of the state of the art of nilm systems has been presented. two different types of systems for the real-time identification of electrical loads, based on different measurement techniques, were then presented. both systems showed excellent identification performance. more in detail, the first system, based on the spectrogram analysis of the rms current through the cnn, showed excellent performance both on the acquired measurements and on those available in the blued dataset, reaching f1-scores of 0.989 and 0.998, respectively, and accuracies of 98.0% and 87.9%, respectively. the greatest difficulty encountered in the classification phase with the blued dataset is attributable to the significantly greater number of devices that the network is required to recognize, compared to those used for the acquired measurements. furthermore, the value obtained for the f1-score is higher than that obtained by other systems using the same dataset, such as those proposed in [36] (0.915) and [37] (0.932). traditional nilm systems perform the load classification based on the analysis of quantities also related to voltage (e.g. analysis in the p-q or v-i plane [38]).
the proposed system has the advantage of measuring only the overall current in a house; as a result, the complexity of the processing system is reduced. another advantage is that the measuring system can be implemented as a galvanically isolated, low-cost system, using a clamp current transducer. the second proposed system, based on the analysis of the traces provided by the sfra, also showed excellent performance. the traces were initially processed through an artificial neural network similar to that used for the previous system, reaching an f1-score of 0.96. in order to reduce the number of training examples needed, it was decided to use a support vector machine. despite a significant reduction in the examples needed for training (from over 2000 to 90), the f1-score achieved with this second machine learning structure was even higher than that obtained with the artificial neural network. a system of this type is particularly interesting as it allows the creation of a plug-in solution that can be installed in any domestic, industrial or commercial environment. furthermore, the detection technique takes into account the physical characteristics of the household appliances and the resulting transfer function. consequently, the identification of multi-state or continuously variable appliances is simplified, compared to processing time-varying signals such as real power, current, etc.

references
[1] g. bucci, f. ciancetta, e. fiorucci, s. mari, load identification system for residential applications based on the nilm technique, 2020 ieee international instrumentation and measurement technology conference (i2mtc), 25-28 may 2020, pp. 1-6. doi: 10.1109/i2mtc43012.2020.9128599
[2] g. bucci, f. ciancetta, e. fiorucci, s. mari, a. fioravanti, state of art overview of non-intrusive load monitoring applications in smart grids, measurement: sensors 18 (2021), art. no. 100145. doi: 10.1016/j.measen.2021.100145
[3] a. lucas, l. jansen, n. andreadou, e. kotsakis, m. masera, load flexibility forecast for dr using non-intrusive load monitoring in the residential sector, energies 12(14) (2019), art. no. 2725. doi: 10.3390/en12142725
[4] w. schneider, f. campello de souza, non-intrusive load monitoring for smart grids, technical report, dell emc, 2018.
[5] h. rashid, p. singh, v. stankovic, l. stankovic, can non-intrusive load monitoring be used for identifying an appliance's anomalous behaviour?, applied energy 238 (2019), pp. 796-805. doi: 10.1016/j.apenergy.2019.01.061
[6] d. green, t. kane, s. kidwell, p. lindahl, j. donnal, s. leeb, nilm dashboard: actionable feedback for condition-based maintenance, ieee instrumentation & measurement magazine 23(5) (2020), pp. 3-10. doi: 10.1109/mim.2020.9153467
[7] a. aboulian et al., nilm dashboard: a power system monitor for electromechanical equipment diagnostics, ieee transactions on industrial informatics 15(3) (2019), pp. 1405-1414. doi: 10.1109/tii.2018.2843770
[8] c. belley, s. gaboury, b. bouchard, a. bouzouane, an efficient and inexpensive method for activity recognition within a smart home based on load signatures of appliances, pervasive and mobile computing 12 (2014), pp. 58-78. doi: 10.1016/j.pmcj.2013.02.002
[9] n. noury, m. berenguer, h. teyssier, m. bouzid, m. giordani, building an index of activity of inhabitants from their activity on the residential electrical power line, ieee transactions on information technology in biomedicine 15(5) (2011), pp. 758-766. doi: 10.1109/titb.2011.2138149
[10] x. zhang, t. kato, t. matsuyama, learning a context-aware personal model of appliance usage patterns in smart home, 2014 ieee innovative smart grid technologies asia (isgt asia), kuala lumpur, malaysia, 20-23 may 2014, pp. 73-78. doi: 10.1109/isgt-asia.2014.6873767
[11] g. hart, prototype nonintrusive appliance load monitor, mit energy laboratory technical report and electric power research institute technical report, 1985.
[12] g. w. hart, nonintrusive appliance load monitoring, proc. ieee 80(12) (1992), pp. 1870-1891. doi: 10.1109/5.192069
[13] j. z. kolter, m. j. johnson, redd: a public data set for energy disaggregation research, 2011.
[14] o. parson, s. ghosh, m. weal, a. rogers, non-intrusive load monitoring using prior models of general appliance types, twenty-sixth aaai conference on artificial intelligence, toronto, canada, 22-26 july 2012.
[15] m. zhong, n. goddard, c. sutton, signal aggregate constraints in additive factorial hmms, with application to energy disaggregation, in: advances in neural information processing systems 27, z. ghahramani, m. welling, c. cortes, n. d. lawrence, k. q. weinberger (eds.), curran associates, inc., 2014, pp. 3590-3598.
[16] j. kelly, w. knottenbelt, neural nilm: deep neural networks applied to energy disaggregation, 2nd acm international conference on embedded systems for energy-efficient built environments (buildsys '15), association for computing machinery, new york, ny, usa, pp. 55-64. doi: 10.1145/2821650.2821672
[17] z. jia, l. yang, z. zhang, h. liu, f. kong, sequence to point learning based on bidirectional dilated residual network for non-intrusive load monitoring, international journal of electrical power & energy systems 129 (2021), art. no. 106837. doi: 10.1016/j.ijepes.2021.106837
[18] g. bucci, f. ciancetta, e. fiorucci, s. mari, a. fioravanti, multi-state appliances identification through a nilm system based on convolutional neural network, 2021 ieee instrumentation and measurement technology conference (i2mtc 2021), 17-21 may 2021. doi: 10.1109/i2mtc50364.2021.9460038
[19] c. zhang, m. zhong, z. wang, n. goddard, c. sutton, sequence-to-point learning with neural networks for non-intrusive load monitoring, aaai conf. artif. intell., new orleans, la, usa, feb. 2018, pp. 2604-2611.
[20] h. chang, k. chen, y. tsai, w. lee, a new measurement method for power signatures of nonintrusive demand monitoring and load identification, ieee transactions on industry applications 48(2) (2012), pp. 764-771. doi: 10.1109/tia.2011.2180497
[21] y. lin, m. tsai, development of an improved time-frequency analysis-based nonintrusive load monitor for load demand identification, ieee transactions on instrumentation and measurement 63(6) (2014), pp. 1470-1483. doi: 10.1109/tim.2013.2289700
[22] h. chang, k. lian, y. su, w. lee, power-spectrum-based wavelet transform for nonintrusive demand monitoring and load identification, ieee transactions on industry applications 50(3) (2014), pp. 2081-2089. doi: 10.1109/tia.2013.2283318
[23] s. b. leeb, s. r. shaw, j. l. kirtley, transient event detection in spectral envelope estimates for nonintrusive load monitoring, ieee transactions on power delivery 10(3) (1995), pp. 1200-1210. doi: 10.1109/61.400897
[24] k. anderson, a. ocneanu, d. benitez, d. carlson, a. rowe, m. bergés, blued: a fully labeled public dataset for event-based non-intrusive load monitoring research, 2nd kdd workshop on data mining applications in sustainability (sustkdd), beijing, china, aug. 2012, pp. 1-5.
[25] w. yuegang, j. shao, x. hongtao, non-stationary signals processing based on stft, 8th international conference on electronic measurement and instruments, xi'an, china, 2007, pp. 3-301–3-304. doi: 10.1109/icemi.2007.4350914
[26] s. zhang, d. yu, s. sheng, a discrete stft processor for real-time spectrum analysis, apccas 2006 ieee asia pacific conference on circuits and systems, singapore, 4-7 dec. 2006, pp. 1943-1946. doi: 10.1109/apccas.2006.342241
[27] s. albawi, t. a. mohammed, s. al-zawi, understanding of a convolutional neural network, international conference on engineering and technology (icet), antalya, turkey, 21-23 aug. 2017, pp. 1-6. doi: 10.1109/icengtechnol.2017.8308186
[28] f. ciancetta, g. bucci, e. fiorucci, s. mari, a. fioravanti, a new convolutional neural network-based system for nilm applications, ieee transactions on instrumentation and measurement 70 (2021), art. no. 1501112. doi: 10.1109/tim.2020.3035193
[29] a. fioravanti, a. prudenzi, g. bucci, e. fiorucci, f. ciancetta, s. mari, non-intrusive electrical load identification through an online sfra based approach, 2020 international symposium on power electronics, electrical drives, automation and motion (speedam), sorrento, italy, 24-26 june 2020, pp. 694-698. doi: 10.1109/speedam48782.2020.9161856
[30] iec 60076-18:2012, power transformers – part 18: measurement of frequency response.
[31] g. bucci, f. ciancetta, e. fiorucci, s. mari, a. fioravanti, deep learning applied to sfra results: a preliminary study, 7th international conference on computing and artificial intelligence (iccai 2021), tianjin, china, 23-26 april 2021, pp. 302-307. doi: 10.1145/3467707.3467753
[32] t. hofmann et al., kernel methods in machine learning, ann. statist. 36(3) (2008), pp. 1171-1220. doi: 10.1214/009053607000000677
[33] g. bucci, f. ciancetta, e. fiorucci, s. mari, a. fioravanti, a non-intrusive load identification system based on frequency response analysis, 2021 ieee international workshop on metrology for industry 4.0 & iot (metroind4.0&iot), 7-9 june 2021, pp. 254-258. doi: 10.1109/metroind4.0iot51437.2021.9488472
[34] s. makonin, f. popowich, nonintrusive load monitoring (nilm) performance evaluation, energy efficiency 8(4) (2014), pp. 809-814. doi: 10.1007/s12053-014-9306-2
[35] o. koyejo, n. natarajan, p. k. ravikumar, i. s. dhillon, consistent multilabel classification, proc. nips, 2015, pp. 3321-3329.
[36] m. a. peng, h. lee, energy disaggregation of overlapping home appliance consumptions using a cluster splitting approach, sustainable cities and society 43 (2018), pp. 487-494. doi: 10.1016/j.scs.2018.08.020
[37] k. jain, s. s. ahmed, p. sundaramoorthy, r. thiruvengadam, v. vijayaraghavan, current peak based device classification in nilm on a low-cost embedded platform using extra-trees, ieee mit undergraduate research technology conference (urtc), cambridge, ma, usa, nov. 2017, pp. 1-4. doi: 10.1109/urtc.2017.8284200
[38] t. hassan, f. javed, n. arshad, an empirical investigation of v-i trajectory-based load signatures for non-intrusive load monitoring, ieee transactions on smart grid 5(2) (2014), pp. 870-878. doi: 10.1109/pesgm.2014.6938824

improved uncertainty evaluation for a long distance measurement by means of a temperature sensor network
acta imeko, issn: 2221-870x, march 2023, volume 12, number 1, pp. 1-6

gertjan kok1, federica gugole1, aaron seymour1, richard koops1
1 vsl b.v., thijsseweg 11, 2629 ja delft, the netherlands

section: research paper
keywords: sensor network; uncertainty; interferometry; temperature; refractive index
citation: gertjan kok, federica gugole, aaron seymour, richard koops, improved uncertainty evaluation for a long distance measurement by means of a temperature sensor network, acta imeko, vol. 12, no. 1, article 14, march 2023, identifier: imeko-acta-12 (2023)-01-14
section editor: daniel hutzschenreuter, ptb, germany
received november 19, 2022; in final form february 14, 2023; published march 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this project (17ind12 met4fof) has received funding from the empir programme co-financed by the participating states and from the european union's horizon 2020 research and innovation programme.
corresponding author: gertjan kok, e-mail: gkok@vsl.nl

abstract
the aim of the research described in this paper was to determine more accurately the measurement uncertainty of an interferometric long distance measurement. the chosen method was to install a temperature sensor network in the laboratory to measure the ambient temperature profile in more detail, in order to determine more accurately the local values of the refractive index along the path travelled by the laser light. the experimental measurements were supplemented by a theoretical analysis of the mathematical model being used. the outcome of the performed work was that the claimed measurement uncertainty of the distance measurement, which is based on using only 5 temperature sensors, was justified. however, if a lower uncertainty were needed in the future, a sensor network like the one that was temporarily installed would be needed. during the measurement campaign an offset in the mean temperature of 0.2 °c was found, which was equal to the maximum allowed bias in view of the claimed uncertainty for the long distance measurement. at a more general level, it was concluded that such sensor networks provide a useful new tool to increase the understanding of other measurements, to validate assumptions and to optimize existing measurements.

1. introduction
one of the many aspects of the digital transformation is that sensor networks come at an affordable cost, are relatively easy to set up, and can measure with good accuracy once properly calibrated. as such they can be helpful to support other measurements by giving a greater understanding of the set-up, validating assumptions and improving uncertainty calculations. examples of sensor networks supplementing high-grade measurements are sensor networks for measuring outdoor air quality [1] and for noise measurements [2]. sensor networks are also advantageously being used in agriculture [3] and in aerospace applications [4]. this paper deals with a long-distance interferometric measurement, which is supported by means of a sensor network for measuring the ambient temperature.
various measurement schemes exist for interferometric distance measurements [5] and some of them have been implemented at vsl [6], the dutch national metrology institute (nmi). in this paper we consider the classical fringe-counting michelson interferometer [5]. the research question addressed by this paper was to validate a particular assumption that is currently used in the uncertainty calculation for interferometric measurements over long distances. this assumption was that the inhomogeneity of the air temperature is small enough that the current approach of using 5 temperature sensors over a total distance of 50 meters is sufficient. this assumption and the meaning of 'sufficient' will be made quantitative in section 2. in order to be able to validate this assumption, two activities were undertaken. the first activity was the extension of the mathematical model describing the measurement from considering only the mean temperature over a long-distance path to accounting for the temperature profile along the path. this naturally led to additional theoretical questions regarding the exact shape of the light path, which were addressed by performing simulations, see section 2. secondly, we installed a sensor network with 51 calibrated temperature sensors in vsl's 50 meter long climatised corridor in which highly accurate long distance measurements are performed. the knowledge of the air temperature along the path is essential for this application due to its effect on the refractive index of air. with the help of the temperature sensor network the uncertainty evaluation of the long distance measurement was improved. this work was performed in the empir project "metrology for the factory of the future", in a task dedicated to redundant measurements of ambient conditions [7]. in this project several aspects related to sensor networks were studied, like redundancy aspects [8] or the effect of synchronization errors [9], based on various case studies in industrial testbeds. the outline of this paper is as follows. in section 2 the problem is formulated with more mathematical detail, followed by a detailed presentation of the sensor network in section 3.
in section 4 the results of both the simulations and the measurements are presented. in the last section the overall conclusions are summarized.

2. problem description
2.1. measurement problem and approximations
the distance measurement takes place in a climatised laboratory in which the temperature is kept between 19.5 °c and 20.5 °c. in the set-up, distances up to 50 m are measured by means of a michelson interferometer with a laser light source with a vacuum wavelength $\lambda_{\mathrm{vac}}$ = 633 nm. a measurement is performed by moving an optical target (retroreflector) from the initial position to the final position. the optical target is mounted on a cart which moves smoothly in a straight line on a rail, such that the electronics can count the number of fringes $m$ (not necessarily an integer). to convert this number to a measured distance $D$, it needs to be divided by two and multiplied by the actual wavelength in air $\lambda$. this wavelength is derived from the vacuum wavelength by division by the refractive index $n$. the refractive index is evaluated using edlén's formula [10] at the laser wavelength $\lambda_{\mathrm{vac}}$, the measured mean air pressure, the measured mean relative humidity and the measured mean air temperature $T_0$. finally, using these mean values, the resulting estimate $D_0$ of the distance is calculated by

$D_0 = \frac{m}{2} \frac{\lambda_{\mathrm{vac}}}{n(T_0)}$ . (1)

the claimed expanded uncertainty for $D_0$ with coverage factor $k$ = 2 is 1 ppm relative; in absolute terms this amounts to 50 µm at a measured distance of 50 m. equation (1) is based on the following assumptions:
1. it is assumed that the non-linearity of the function $n(T)$ is weak and that it is sufficient to evaluate $n(T)$ at the mean temperature value only.
2. due to local temperature variations, the light is refracted and does not travel in a perfectly straight line but on a curved path, leading to an overestimate of the distance. it is assumed that this effect is small.
3. it is assumed that the mean temperature can be estimated sufficiently well by measuring spot temperatures at only 5 positions.
the validity of assumptions 1 and 2 has been analysed in a theoretical way and the results are presented in sections 4.1 and 4.2. the validity of assumption 3 was assessed by installing a dedicated temperature sensor network; this network is presented in more detail in the next section. if assumptions 1 and 3 are not fulfilled, a more complex model of the form

$D_1 = \frac{m}{2}\, \lambda_{\mathrm{vac}} \Big/ \left( \frac{1}{L} \int_0^L n(T(x))\, \mathrm{d}x \right)$ (2)

is needed, in which the full variation of the refractive index over the travel path of the laser light is considered. here $L$ denotes the nominal distance and $T(x)$ the temperature at position $x$ along the path. in optics, the integral quantity in equation (2) is called the optical path length. if assumption 2 is not verified, then the model becomes even more complex, as the measured distance depends on a three-dimensional profile of the refractive index; in that case a map of the temperature in three-dimensional space would be required.

2.2. target uncertainty
in generic terms, an additional uncertainty contribution is not considered significant if it contributes less than about 1/5 of the current combined uncertainty. this is because the new combined uncertainty including the additional uncertainty component will have essentially the same size, as $\sqrt{1^2 + 0.2^2} \approx 1$.
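as a numerical illustration of how the two models behave, the sketch below compares equations (1) and (2) for a synthetic temperature profile; the refractive-index model is a toy linearization with a small quadratic term (assumed for illustration), not the full edlén formula:

```python
# numerical sketch comparing the simple estimate of eq. (1) with the
# path-integral model of eq. (2) for a synthetic temperature profile.
import numpy as np

LAMBDA_VAC = 633e-9      # vacuum wavelength in m
L = 50.0                 # nominal distance in m

def n_air(t_c):
    """toy refractive index of air vs temperature in °c (assumed model,
    not edlén's formula): roughly -1e-6 per °c around 20 °c, with a
    small quadratic term so that the non-linearity is visible."""
    dt = t_c - 20.0
    return 1.00027 - 1.0e-6 * dt + 2.0e-9 * dt**2

x = np.linspace(0.0, L, 1001)
t_profile = 20.0 + 0.3 * np.sin(2 * np.pi * x / L)   # synthetic profile
n_bar = n_air(t_profile).mean()                      # (1/L) * integral of n

m = 2 * L * n_bar / LAMBDA_VAC        # fringe count for a target at exactly L
d0 = m / 2 * LAMBDA_VAC / n_air(t_profile.mean())    # eq. (1), mean temperature
d1 = m / 2 * LAMBDA_VAC / n_bar                      # eq. (2), path integral

print(f"d0 - d1 = {(d0 - d1) * 1e9:.2f} nm")  # a few nanometres: negligible
```

for a smooth profile within the ±0.5 °c band of the laboratory, the difference between the two estimates stays at the nanometre level, consistent with the worst-case analysis reported in section 4.1.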
with this 1/5 criterion in mind, we now assess what the maximum measurement bias of the mean ambient temperature can be without significantly affecting the combined uncertainty of the long distance measurement. the claimed combined expanded uncertainty at 50 m is 50 µm ($k$ = 2). in view of the discussion of the last paragraph, the expanded uncertainty or maximum bias in the measured mean temperature should not contribute more than 1/5 of 50 µm, which is 10 µm. using edlén's formula, it can be calculated that a change of 1 °c in temperature induces a change in the calculated refractive index of slightly less than 1 ppm, which results in a change of the calculated distance of 48 µm at a total measured distance of 50 m by means of equation (1). this indicates that the expanded uncertainty of the mean temperature should not be more than

$\frac{10}{48} \cdot 1\ \text{°c} \approx 0.21\ \text{°c}$ . (3)

for a potential systematic bias in the measured temperature we use the same threshold of 0.21 °c. the goal of the sensor network is to assess whether the measurement bias when determining the mean temperature using only 5 temperature sensors, compared to the improved estimate when using 51 sensors, lies below this threshold. if this is indeed the case, the usual procedure using only 5 temperature sensors can be retained, without a need to increase the claimed uncertainty to account for a larger than expected uncertainty in the measured mean temperature.

3. temperature sensor network
3.1. construction of the network
fifty-one temperature sensors were assembled in house in order to get the lowest measurement uncertainty. the sensing part consisted of a 10 kω ntc thermistor placed in series with a resistance of 12 kω. the communicating part was formed by texas instruments cc2531 usb zigbee modules, communicating wirelessly at a frequency of 2.4 ghz. the voltage over the thermistor was measured by an analogue voltage input of the module. to enhance the stability of the network, 5 additional cc2531 chips with external antennae were used as routers. whereas the sensor nodes had only a meandered antenna on the chip itself, the external antennae more than tripled the link quality. the routers were used to repeat the network signal over the full distance of the measurement. to supply power to the nodes and routers, a long wire with usb plugs at 1 meter spacing was used, as this seemed more practical than 51 individual battery packs. a texas instruments cc1352p chip was used as the coordinator of the network. finally, a raspberry pi 3 model a+ was used to run the zigbee server, to which the voltage readings were also logged in real time. figure 1 shows a photo of a temperature sensor mounted below the rail of the interferometric measurement cart set-up, together with a router node with an external antenna. the total cost of the hardware was around 1000 eur.

3.2. calibration of the sensors
the sensors were calibrated in the following way. the sensors, together with a set of traceable reference sensors, were mounted on an aluminium block which could be brought to a desired temperature by means of a water-based heating and cooling device. the block and sensors were well isolated from the environment by means of an insulating box. the digital voltage output, which was transmitted wirelessly, was calibrated and a calibration function relating voltage output to temperature was established. calibration of all sensors took place before and after the measurement campaigns.
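a minimal sketch of such a calibration fit is given below, assuming a low-order polynomial model between the node's voltage reading and the reference temperature; the calibration points are invented for illustration and the actual calibration function used at vsl is not specified here:

```python
# least-squares fit of a calibration function temperature = f(voltage)
# from paired readings of a sensor node and a traceable reference.
import numpy as np

# example calibration points (invented): node voltage in v, reference in °c
v_node = np.array([1.52, 1.48, 1.44, 1.40, 1.36, 1.32])
t_ref  = np.array([18.0, 19.0, 20.0, 21.0, 22.0, 23.0])

coeffs = np.polyfit(v_node, t_ref, deg=2)     # quadratic calibration curve
calibrate = np.poly1d(coeffs)

residuals = t_ref - calibrate(v_node)
print("fit residuals (°c):", np.round(residuals, 4))
print("t at 1.42 v:", round(float(calibrate(1.42)), 3), "°c")
```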
after calibration, the expanded uncertainty of each of the sensors was 0.04 °c ($k$ = 2), mainly limited by the resolution of the digital voltage output of the sensors. this uncertainty includes a component for the repeatability over two days, but does not include a component for the expected drift of the sensors over the entire measurement period.

3.3. software
to operate the zigbee network by means of the raspberry pi, two pieces of software were used. the zigbee2mqtt software [11] was used to start the zigbee server, and the mosquitto software [12] provides the mqtt messaging protocol which was used to interface with the coordinator. together, these two pieces of software allow the text message transmissions containing the voltage readings from the adcs of each of the sensor nodes to be stored on the raspberry pi. appropriate firmware was installed on the sensor nodes [13] and on the coordinator node [14].

3.4. test procedure
the main aim of the measurement campaign was to gain insight into the overall temperature profile along the corridor and, specifically, to assess whether the mean value of the 5 climate system sensors was sufficiently close to the mean temperature of the 51 sensor network sensors, having in mind the numerical target required for the interferometric distance measurement as stated in equation (3). in order to get some further insights, a test plan with the following five test cases was defined:
1. undisturbed profile with nobody present in the corridor;
2. person sitting at a fixed spot in the corridor;
3. person walking around in the corridor all the time;
4. person walking around in the corridor and then leaving the corridor;
5. profile during a long distance measurement with the moving cart.
all sensors were calibrated in week 1. then the test plan was executed in week 2 and repeated in week 3. in week 4 all sensors were recalibrated in order to assess their drift.

4. results
in this section the results are presented. they are ordered according to the assumptions to be verified as listed in section 2.1.

4.1. assumption 1 on the non-linearity of the refractive index formula
the effect of the neglected non-linearity of the refractive index as a function of temperature in equation (1) was analysed using a worst-case approach. if the mean temperature is 20.0 °c, then the worst case is that half of the path length is at 19.5 °c and the other half at 20.5 °c. the effective average refractive index in this case amounts to $(n(19.5\ \text{°c}) + n(20.5\ \text{°c}))/2$, which has to be compared with the value $n(20.0\ \text{°c})$ obtained by first averaging the temperatures instead of averaging the refractive indices, the latter being what is currently used. the difference is only 6·10⁻¹⁰, both in an absolute and in a relative sense, as $n \approx 1.0003$. the measurement error induced by averaging temperatures, as done in equation (1), instead of averaging refractive indices therefore amounts to only -6·10⁻¹⁰ × 50 m = -0.03 µm at a measurement distance of 50 m, which is very minor. the non-linearity of the refractive index dependence on temperature can therefore be neglected, and for the uncertainty calculation of the interferometric long distance measurement it is sufficient to focus on the correct determination of the mean temperature.

figure 1. the horizontal black parts are the rails for the cart used in the interferometer set-up. in the red circle at the lower left corner the sensing part of a temperature sensor node is visible. in the yellow circle in the top right corner a router node with external antenna is visible, connected to the wire supplying power by means of a usb connector.

4.2. assumption 2 on curved light paths
if a light beam travels from one medium into another medium with a different refractive index (e.g., air with a different temperature), it changes its direction of propagation following snell's law [15]:

$n_1 \sin \theta_1 = n_2 \sin \theta_2$ , (4)

where $n_i$ and $\theta_i$ are respectively the refractive index and the angle of the propagation direction of the beam to the surface normal in medium $i$ = 1, 2. the magnitude of the direction change $\theta_2 - \theta_1$ thus depends on the difference of the refractive indices and on the angle of incidence $\theta_1$. equation (1) is based on the assumption that the light travels in a perfectly straight line without any curvature due to refraction. to assess how much this simplification affects the final measurement result $D_0$, two-dimensional curves were simulated by modelling the temperature distribution as a one-dimensional gaussian process with a squared exponential kernel of the form

$\sigma_T^2 \exp\!\left( - \frac{(x_1 - x_2)^2}{2 \ell^2} \right)$ . (5)

in equation (5), the parameters $x_1$ and $x_2$ denote the horizontal positions of points 1 and 2 (horizontal distance from the laser emitting the light), $\ell$ the characteristic length-scale of the kernel, and $\sigma_T$ the standard deviation of the temperature. this kernel defines the correlation strength between the temperatures at different positions in the corridor. the normal vectors of the interfaces between two neighbouring zones of uniform temperature were chosen randomly. in reality the situation is probably more favourable than this, as the interfaces will be more similar to planes perpendicular to the propagation direction of the light and not fully randomly oriented. in figure 2, a schematic sketch of the simulated temperature profile together with the light path is shown. in our simulation we used 1000 zones of uniform temperature over a distance of 50 m. only the geometric path length was assessed; the effect of different actual wavelengths due to changes in the refractive index in different path sections is not included in the results of this section, as this latter effect is covered by the results of sections 4.1 and 4.3. it was found that the maximum path increase depended significantly on the used parameter values. as a reasonable choice for the kernel parameters, the characteristic length-scale of correlation was set to $\ell$ = 1 m and the standard deviation of the temperature was set to $\sigma_T$ = 0.25 °c, as the climate system kept the laboratory temperature between 19.50 °c and 20.50 °c. for this choice the simulated increase in path length remained below 1 µm, and is thus negligible. in figure 3, an example of a simulated path of the laser beam is shown. note nevertheless that one can mathematically construct worst-case temperature profiles with worst-case normal vectors in which the light travels an almost arbitrary path of arbitrarily large length; e.g., the light can change direction by 180°. for this to happen, the normal vectors should be chosen close to vertical and the temperature differences between zones should be chosen as large as possible.

figure 2. two-dimensional simulation of curved light paths due to refraction effects. the vertical curvy black dashed lines delimit zones with different temperatures. the slanted orange lines indicate the assumed direction of the local interface between these zones. the light is assumed to travel from the red spot on the left to the green spot on the right and back.

figure 3. blue: simulated curved path of the light using 1000 zones of 50 mm width in which the temperature was assumed to be uniform. red: temperature profile over the distance of 50 m. not shown are the angles defining the interfaces between the zones, which are completely random.

4.3. assumption 3 on the measured overall mean temperature
the test procedure with the temperature sensor network as specified in section 3.4 was executed twice, in two consecutive weeks.
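before presenting the network results, the curved-path simulation of section 4.2 can be sketched as follows; this is a toy 2-d version in which the refractive-index model and the interface-tilt distribution are assumptions for illustration, not the exact published simulation:

```python
# draw a temperature profile from a gaussian process with the squared
# exponential kernel of eq. (5), assign a refractive index per zone,
# and propagate a ray through randomly tilted interfaces via eq. (4).
import numpy as np

rng = np.random.default_rng(1)
L, n_zones = 50.0, 1000
ell, sigma_t = 1.0, 0.25                   # kernel parameters from the text

x = np.linspace(0.0, L, n_zones)
k = sigma_t**2 * np.exp(-(x[:, None] - x[None, :])**2 / (2 * ell**2))
t = 20.0 + np.linalg.cholesky(k + 1e-6 * np.eye(n_zones)) @ rng.normal(size=n_zones)

def n_air(t_c):
    """toy linearized refractive index (assumed), about -1e-6 per °c."""
    return 1.00027 - 1.0e-6 * (t_c - 20.0)

n = n_air(t)
theta = 0.0          # ray angle w.r.t. the corridor axis
path = 0.0           # accumulated geometric path length
dx = L / n_zones
for i in range(n_zones - 1):
    path += dx / np.cos(theta)                     # length inside zone i
    tilt = rng.normal(scale=0.2)                   # random interface tilt (rad)
    s = n[i] / n[i + 1] * np.sin(theta - tilt)     # snell's law, eq. (4)
    theta = np.arcsin(np.clip(s, -1, 1)) + tilt    # back to axis coordinates
path += dx / np.cos(theta)

print(f"geometric path excess: {(path - L) * 1e6:.6f} µm")
```

with refractive-index contrasts of the order of 10⁻⁷ between zones, the accumulated angular deviation stays in the microradian range and the path excess is far below 1 µm, in line with the conclusion stated above.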
the main points of attention during the data analysis were the changes of the measurement values of the individual sensors over time, the spatial temperature profile of the mean sensor values in the corridor, and the comparison of the values measured by the 5 sensors of the fixed installed climate system and the 5 sensors of the sensor network that were in the closest proximity. after the first test campaign it turned out that some of the 51 sensors reported readings only sporadically, or not at all. therefore fewer sensors were used during the second campaign: in the first campaign data from 40 sensors were available, whereas in the second campaign data from 30 sensors were used. the temperature traces of single sensors over time were generally very stable, although incidentally some higher and/or lower values were measured. in figure 4, the temperature profile in space for test case 5, simulating a real measurement, is shown. as some measurement instruments and computers are present near the start of the corridor, the temperature is slightly higher in that region. the temperature profiles for the other cases look similar. we will first present the conclusions for the various test cases as defined in section 3.4, before addressing our main question on the uncertainty of the long distance measurement induced by the non-constant temperature profile.

it turned out that the measurement results for the various test cases were not significantly different, as compared with the unperturbed case 1. in test case 2, a person sitting between two sensors did not cause any significant perturbation in the temperature measurements of the sensors. in test cases 3 and 4, a person walking around the lab caused significant perturbations in the sensors' readings during the first execution but not during the second; the results on this aspect were therefore inconclusive. during test case 5, a battery-powered temperature sensor was mounted on the moving cart and its measured values were compared with the values of the non-moving sensors mounted closest by at the moment when the cart passed. the differences between the moving sensor and the non-moving sensor network sensors varied from 0.05 °c to a few tenths of a °c, with a maximum of even 0.6 °c, which is higher than expected. furthermore, the values of the 5 fixed installed climate system sensors were compared with the values of the nearest sensors of the sensor network, both placed on the 50 meter long granite support for the interferometric measurement set-up. two of the five climate sensors agreed well with the closest sensor network sensors within the mutual uncertainties, whereas a third sensor agreed part of the time.
two sensors didn't agree, and differences of 0.15 to 0.20 °c were present between the mean values of the climate system sensors and the sensor network sensors. the differences can be due to whether or not a sensor hangs in free air, or to locally warmer air flows (one such flow was detected close to an electrical device mounted halfway along the corridor). this indicates that the location of the sensors can be an important factor when measuring the temperature of the lab. in figure 5, the temperature plots over time of two climate sensors are shown together with the plots of the nearest sensor network sensors.

we now turn to the results for the main purpose of our measurement campaigns, which was to determine whether the mean temperature of the corridor was measured sufficiently well, with a bias below 0.21 °c. in table 1, the mean temperatures as measured by the climate system and by the sensor network are shown, including their uncertainties and their differences. the uncertainty of both the climate system and the sensor network was dominated by a systematic part due to the calibration, supplemented by a part for the variation during the measurement. based on the recalibration of the sensor network sensors after the measurement campaigns, a standard uncertainty for sensor drift of 0.05 °c was added per sensor. however, as this was a random contribution, it averaged out when calculating the mean temperature and did not affect the uncertainty of the mean value. as can be seen from table 1, the difference between the mean corridor temperature as measured by the sensor network and by the climate system was at most 0.25 °c, with an expanded uncertainty of 0.06 °c. when using the expanded measurement uncertainty as a tolerance, the threshold of 0.21 °c of equation (3) was not exceeded. it was therefore decided that it is not needed to improve the temperature measurement in the corridor for this application in a permanent way, nor to increase the claimed uncertainty. however, the results also indicated that if the claimed uncertainty were to be reduced in the future, a better understanding of the temperature profile would be needed. this could then be accomplished by the sensor network, albeit that its potential drift over time would need a more careful study.

figure 4. temperature profile in space for test case 5, simulating a real measurement, for the two repetitions a (top) and b (bottom). the circular marker is at the mean value and the error bars are at plus/minus one standard deviation. the lines above and below connect the maximum and minimum measured values during the entire time span.

table 1. mean temperature and expanded uncertainty (k = 2) in parentheses as measured by the climate system and by the sensor network, and their differences, as measured in the 5 test cases and the two test campaigns a and b.
case number   climate system / °c   sensor network / °c   difference / °c
1-a           19.91 (0.06)          19.67 (0.03)          0.24 (0.06)
1-b           19.89 (0.06)          19.73 (0.03)          0.16 (0.06)
2-a           19.92 (0.06)          19.69 (0.03)          0.24 (0.06)
2-b           19.93 (0.06)          19.75 (0.03)          0.18 (0.06)
3-a           19.94 (0.06)          19.70 (0.03)          0.25 (0.06)
3-b           19.91 (0.06)          19.76 (0.03)          0.15 (0.06)
4-a           19.91 (0.06)          19.68 (0.03)          0.23 (0.06)
4-b           19.90 (0.06)          19.75 (0.03)          0.15 (0.06)
5-a           19.93 (0.06)          19.73 (0.02)          0.20 (0.06)
5-b           19.92 (0.06)          19.76 (0.03)          0.16 (0.06)

figure 5. measured temperature profile over time for two sensors of the climate system and the sensors of the sensor network installed at the closest position, for test case 5-a. for the climate system sensor displayed in the upper plot there is a difference of about 0.15 °c, whereas for the measurement at the bottom there is complete agreement.

5. conclusions
the digital transformation makes it economically and practically possible to perform a larger number of measurements on a much finer spatial grid by means of sensor networks. this can help to better monitor the ambient conditions of a measurement set-up and, as a consequence, potentially reduce its measurement uncertainty. in this contribution a sensor network with 51 sensors for measuring the ambient temperature in a corridor used for interferometric long distance measurements was presented. this network enabled the analysis of the temperature profile in much more detail than was possible until recently, together with a more accurate measurement of the mean temperature. simulation results showed that it is sufficient to focus on the mean temperature only, and that the effect of a non-straight optical path due to refraction effects is negligible. the measurement results using the network showed that the mean temperature measured by the 5 climate system sensors was on average about 0.2 °c higher than the more accurate measurement by the sensor network. based on this result it was decided that the current practice with only five temperature measurements and a claimed relative measurement uncertainty of 1 ppm is valid. in future work the network can be used in other applications where temperature measurement is of critical importance. furthermore, employment and recalibration at longer time scales will give more information about the metrological stability of such sensor networks.

acknowledgements
this project (17ind12 met4fof) has received funding from the empir programme co-financed by the participating states and from the european union's horizon 2020 research and innovation programme. we thank the independent reviewers for their comments, which helped to significantly improve the paper.

references
[1] d. r. peters, o. a. m. popoola, r. l. jones, n. a. martin, j. mills, e. r. fonseca, a. stidworthy, e. forsyth, d. carruthers, m. dupuy-todd, f. douglas, k. moore, r. u. shah, l. e. padilla, r. a. alvarez, evaluating uncertainty in sensor networks for urban air pollution insights, atmos. meas. tech. 15 (2022), pp. 321-334. doi: 10.5194/amt-15-321-2022
[2] l. luo, h. qin, x. song, m. wang, h. qiu, z. zhou, wireless sensor networks for noise measurement and acoustic event recognitions in urban environments, sensors 20(7) (2020), art. no. 2093. doi: 10.3390/s20072093
[3] l. maiolo, d. polese, advances in sensing technologies for smart monitoring in precise agriculture, proc. of the 10th int. conference on sensor networks (sensornets 2021), virtual, 9-10 february 2021, pp. 151-158. online [accessed 26 february 2023] https://www.scitepress.org/publishedpapers/2021/104154/104154.pdf
[4] f. leccese, m. cagnetti, s. sciuto, a. scorza, k. torokhtii, e. silva, analysis, design, realization and test of a sensor network for aerospace applications, proc. of the 2017 ieee int. instrumentation and measurement technology conference (i2mtc 2017), turin, italy, 22-25 may 2017, pp. 1-6. doi: 10.1109/i2mtc.2017.7969946
[5] p. hariharan, basics of interferometry, second edition, elsevier, 2007. isbn 978-0-12-373589-8.
aeronautic pilot training and augmented reality

acta imeko
issn: 2221-870x
september 2021, volume 10, number 3, 66 - 71

simone keller füchter1, mário sergio schlichting1, george salazar2

1 university of estácio de santa catarina, pesquisa produtividade program, av. leoberto leal, 431, são josé, santa catarina, brazil
2 nasa, national aeronautics and space administration, johnson space center, 2101 nasa parkway, houston, 77058, texas, usa

section: research paper
keywords: augmented reality; virtual training; flight panel; pilot training
citation: simone keller füchter, mário sergio schlichting, george salazar, aeronautic pilot training and augmented reality, acta imeko, vol. 10, no. 3, article 11, september 2021, identifier: imeko-acta-10 (2021)-03-11
section editor: bálint kiss, budapest university of technology and economics, hungary
received january 15, 2021; in final form september 14, 2021; published september 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: simone keller füchter, e-mail: simonekf.2011@gmail.com
abstract
a pre-flight checklist requires in-depth technical knowledge of the aircraft, including its dashboard, avionics, instruments, functions and cabin layout. to obtain up-to-date certification, students training to be a pilot or advanced pilot must be completely familiar with each instrument and its position on the flight panel. every second spent searching for the location of an instrument, switch or indicator can waste time, resulting in a poor start-up procedure and possibly a safety hazard. the objective of this research was to obtain preliminary data to determine whether the use of augmented reality as a human interface for training can help pilots improve their skills and learn new flight panel layouts of different aircraft. the methodology used was the human-centred design method, a multidisciplinary process that involves various actors who collaborate in terms of design skills, including individuals such as flight instructors, students and pilots. a mobile/tablet application prototype was created with sufficient detail on the flight panel of a cessna 150, an aircraft used in training flights at the aero club of santa catarina. the tests were applied in brazil, and the results indicated good response and acceptance from the users.

1. introduction

various materials are used for pilot training on the equipment found in different types of aircraft. traditionally, the aircraft manual has been an important source of information on each part of the aircraft's structure, as well as its mechanical, electronic and digital components. while the operating manual remains the main learning resource, a number of multimedia resources have been added for better visualisation of certain tasks and commands during the panel checklist. the array of available printed books can also be a good knowledge resource for students aspiring to be private pilots, also known as pps. this pilot category involves certification that allows the individual to fly an airplane and carry baggage and passengers without being paid or hired [1]. videos, photographs, printed posters, flight simulators and virtual reality glasses can also be used to help learn [2] and memorise the components of the panels. each resource has its advantages and disadvantages. for example, the simulator is fairly complete and close to reality but is not available 24 hours a day for everyone, while its cost per hour of use is relatively high since it involves sophisticated equipment. meanwhile, while videos present a great deal of detail, they do not involve interaction, and in an effort to find the information one is looking for, it is sometimes necessary to watch a long video to be able to visualise a specific procedure. software such as microsoft's flight simulator [3] and x-plane [4], among others, has a wide base of information and procedures as well as high-quality graphics. however, the purpose of the present work is to create a prototype app for smartphones that is easy to use and is specifically focused on familiarising the pilot with the panel and the respective checklists. thus, the learning and memorisation of a new aircraft, as well as the attendant training, can be faster, bringing great savings and, above all, enhanced flight safety. checklists are a critical part of preparing for a flight. apps such as that presented in this paper can be used anywhere without the need for computers, screens, joysticks or any other hardware; in fact, all that is required is a smartphone. the app can be used, for example, before the pilot goes into a flight simulator. indeed, the app is not intended for use as a kind of flight simulator, but as a specific and cheap tool to help pilots or students memorise the flight panel components and study them continuously, be it at home, in open spaces or even in rooms without an internet connection.
indeed, the main objective of this work was to attempt to represent, as realistically as possible, the details of the flight panel without the need for computers, in a cost-effective and efficient way, using readily available technology.

2. methodology

the methodology used for this research with augmented reality (ar) training is the human-centred design (hcd) method. this method is a multidisciplinary process that involves various stakeholders who collaborate in terms of design skills, including the individuals who specifically pertain to the process [5], such as flight instructors, pilots and students. interactivity is a key point in this field and comes in the form of continuous testing and evaluation of the ar system and the related concepts. the hcd method is a key component of the human-systems integration (hsi) process, which is defined as an interdisciplinary technical and management process that integrates human considerations within and across all system elements [6]. in this context, iso 13407:1999, part of ics 13.180 on ergonomics, has been withdrawn and revised as iso 9241-210:2019, ergonomics of human-system interaction, part 210: human-centred design for interactive systems [7]. the hsi domains define (a) how human capabilities or limitations impact the hardware and software and, conversely, (b) how the system's software, hardware and environment influence human performance [8]. meanwhile, there are four principles of hcd [9]:

• function allocation between user and technology
• design iteration
• multidisciplinary design
• active involvement of users and a clear understanding of user and task requirements.

these principles are crucial for the concept of this research since they are focused on the ergonomics, comfort and acceptance of the pilots and students. for this study, a prototype mobile app related to ar was built based on the cessna 150 aircraft panel, which is used for the training of new pps in a number of flying clubs in brazil, specifically here, the aero club of santa catarina. this prototype provides the pilot with the opportunity to familiarise themselves with the panel of a new plane that he/she may fly, accessing it on their smartphone anytime, anywhere, and viewing the panel in a highly realistic way. furthermore, and perhaps more importantly, the pilot can use the main checklist procedures in 3d view, which enables a better understanding of certain movements of the push/pull, rotate and other types.

3. augmented reality

the technology pertaining to ar involves overlaying virtual components on a real-world environment, with users viewing the result on a specific device. for the most part, virtual objects are added to the real world in real time during the user experience [10]. to connect with this technology, digital devices such as smartphones, tablets and virtual reality glasses (e.g., microsoft hololens) [11] can be used.
the difference between ar and virtual reality (vr) is that the latter creates a completely synthetic and artificial world in which the user is fully immersed, while in the former, the user can observe a real environment with virtual objects overlaid. with ar, the pilot can observe a virtual flight panel in front of them and, using a smartphone, can visualise and study each component before moving on to a real physical aircraft. the combination of ar and a smartphone provides a useful tool for training, with the professional able to practice in a manufacturing environment [9], a garage or a hangar. when used correctly, ar technology can present a real environment containing virtual objects that mimic real-time applications [12]. however, the future of ar depends on huge innovations in all fields [13], and this paper presents a new way of learning and training aircraft-related checklist panels.

3.1. augmented reality as a training tool

this technology is used in various industries [14], military environments and schools, and is highly useful in the health field, making it possible to visualise the inside of a human body. furthermore, this type of digital interface is suitable for the new generations comfortable with the devices used for vr and ar [15]. when combined with gamification, educational technologies can be improved, largely because the new generations are open to experimenting with new virtual competencies and stimuli and are highly motivated to win [16]. the same idea can be applied to ar, that is, using challenges similar to video games to promote learning and pleasure, as explained in terms of the dopamine cycle, which pertains to challenges, achievement and pleasure [17]. a recent study conducted on a multi-national european sample of pilots regarding the use of ar and gamification [11] demonstrated that 72.25 % of the female pilots and 56.25 % of the male pilots considered it satisfactory in terms of successfully finishing a task, while, overall, 70.74 % of the pilots regarded the feedback received for corrective actions as satisfactory. this demonstrates that interactivity is important for users. for this reason, the third step of the proof of concept presented in this paper was created to clarify the number of possibilities this prototype can offer.

3.2. augmented reality and aviation

according to the aviation instructor's handbook [18], vr is already part of pilot training, while ar is not mentioned. therefore, this field requires more research and more applications to find new ways to use this tool, such as external inspection, maintenance, procedures using the flight panel and various other aspects [19], [20], [21].

4. the prototype

in this paper, a life-size prototype was created and tested by six users made up of flight instructors, pilots and students, who applied the visualisation of a virtual flight panel employing ar on their smartphones or tablets. bringing together the information from the manual, the experience of the instructors and pilots who were studying how to use different aircraft, and the proofs of concept, various evaluations were created, as shown in figure 1. a full description follows.

4.1. preliminary proof of concept

the prototype was built in unity 3d, a game development platform [22], using vuforia [23] as the ar technology. here, a printed marker in banner size (a0) was used, as shown in figure 2.
after the first test, a small 'pocket' marker was printed, as shown in figure 3. the app was developed after several meetings with the instructors and several practice flights with the students. the assessment was based on how pilots perform a checklist before departure in the aircraft. a sketch was shown with the first placement of the instruments and buttons. the size of the panel was presented, and the quality of the generated 3d image was tested. the result exhibited good aesthetic quality and high definition but still presented little interactivity. after the first phase, an enhancement was applied in which various animations were created and placed in the first menu item, the idea being to present each item in the manual according to the order in which it appears from left to right, following the checklist found in the manual. for each instrument or button displayed, a green highlight appears, as shown in figure 4. this step was not planned with this type of indication, but with the feedback given in the meetings and discussions, we determined that the first step in the study of the checklist was not to start with the checklist steps. in fact, it was more interesting to present all the instruments and buttons first before moving on to the checklist, using various animations of objects and instruments. the prototype, which ended up acquiring the unexpected new feature described above, now featured the item in the menu that makes it possible to understand not only the names and functions of the buttons but also the kind of movement each performs. if this movement is a turn, it is evidenced in three dimensions: whether it is, for example, 45° or 90°, whether it is a push/pull-type button, or whether it is an on/off switch. as such, it is possible to closely visualise the movement of the mixture control, which consists of a button and a lever that must be pressed at the same time. therefore, in this second proof of concept, the goal was to demonstrate the panel in a better way than the pictures in the aircraft manual, where it is difficult to clearly observe the controls' design, format and movements, as shown in figure 5. in short, an aircraft manual presents 2d images, making it difficult to visualise the array of controls and understand whether each is a rotate, push/pull or press button. thus, the goal was to demonstrate the panel in high definition and to include animation and interaction, as shown in figure 6. this app is very specific to a checklist approach and the panel components, which makes it different from other applications such as x-plane. the focus of our app includes a didactical explanation for each instrument. the cessna 150 starting engine checklist [24] was used here, as shown in figure 7.

figure 1. proof of concept.
figure 2. a life-size panel that can be viewed at home using a smartphone and a banner-printed marker.
figure 3. a small-size 'pocket' panel that can be visualised anywhere, anytime in a ubiquitous approach.
figure 4. each instrument and control is presented to the user.
figure 5. typical images presented in aircraft manuals, where full visualisation can be difficult (cessna 150 aircraft manual).

4.2. enhanced proof of concept

at this point, greater interaction with the user was pursued, with the same checklist presented and the user asked to perform the movements of the lower control panel. it will only be possible to start the aircraft (start the engine) in this ar simulation if all the buttons have been operated with the correct movement and in the correct order, as sketched below. at the end of this step, the engine is started, and audio will confirm the complete verification of the checklist in addition to a message.
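the engine-start logic just described is essentially a sequence validator. the following is a minimal python sketch of that logic under illustrative assumptions: the control names and movements are invented for the example, and the actual prototype is implemented in unity 3d, not in python.

```python
# minimal sketch of the engine-start gamification logic: the engine only
# starts when every control has been operated with the correct movement
# and in the prescribed order. the list below is illustrative, not the
# actual cessna 150 starting engine checklist.

EXPECTED = [                       # (control, required movement), in order
    ("mixture", "push"),
    ("carburetor_heat", "push"),
    ("master_switch", "on"),
    ("ignition", "rotate_start"),
]

class ChecklistValidator:
    def __init__(self, expected):
        self.expected = expected
        self.step = 0

    def act(self, control, movement):
        """register a user action and return feedback for the ar overlay."""
        if self.step >= len(self.expected):
            return "checklist already complete"
        want_ctrl, want_move = self.expected[self.step]
        if (control, movement) != (want_ctrl, want_move):
            self.step = 0              # wrong action: restart the sequence
            return f"wrong action, expected {want_move} on {want_ctrl}"
        self.step += 1
        if self.step == len(self.expected):
            return "engine started"    # reward: start engine, play audio
        return "ok, next item"

validator = ChecklistValidator(EXPECTED)
for action in EXPECTED:
    print(validator.act(*action))
```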
following discussions with all those involved, it was understood that these features would bring gamification to the app, as the pilot feels challenged 'to hit' the movements and finish the checklist perfectly, and as a reward, the engine will start. this stage is still under development, but its concept and technologies have already been finalised. flight instructors, pilots and students were involved in this experiment.

5. the experience

the prototype was tested by students, pilots and instructors. the experience involved the following steps:

• explaining the objectives of the study;
• obtaining the participants' consent;
• reading the orientation on how to use the app;
• using the app in front of a banner as a marker for the ar. the user can be standing or seated and can move in front of the panel;
• using the app with a banner marker in a0 size and with a little marker printed on half an a4 page. in the first case, the user can see the ar panel in full scale (life-size). in the second case, the user can move the little marker using their hands and manipulate the panel position so that it can be visualised comfortably;
• filling out a questionnaire for reflection and sharing points of view.

the tests were performed with instructors, as can be seen in figure 8, while the experience also involved students preparing to gain their pp license, as shown in figure 9.

6. results

the results demonstrated that the virtual 3d model is highly realistic and will prove useful for the pilot through the feature of simulating failures in the instruments, in order to check whether the pilot had paid attention to the flight indicators or even whether the aircraft has deficiencies in the human interface design, which should be corrected or controlled during the flight. the prototype features various animated controls and items that promote interactive and complex tasks in different situations. the app can help the pilot to be more confident, faster and more secure when flying. meanwhile, ensuring that less time is spent on the checklist in a real aircraft or a flight simulator will reduce the costs of the process and increase safety through ar training. furthermore, the use of ar improves the pilot's situational awareness (s-a) [25] in perceiving, comprehending and projecting future actions in scenarios in which s-a and the system's mental models are important for minimising human error. table 1 presents example information and comments that allowed us to conclude how useful this application can be for pilots and students, and even for companies that need to improve their training resources.

figure 6. in an ar app, it is possible to identify that a specific control is not a button but a lever. the animation clearly shows how to execute the procedure.
figure 7. the checklist items used in this experience (cessna 150 aircraft manual).
figure 8. flight instructor testing the ar app.
figure 9. student testing the ar app.

7. conclusion

this was merely a preliminary study, and more students, pilots and instructors will be invited to participate in future tests of the system.
in addition, flight engineers and physicians will also become involved so as to increase the diversity in the evaluation of the system as part of the hcd method. it will be important to ensure that this application goes beyond the data and information that can be found in aircraft manuals and the existing procedures. it will also be crucial to improve the app such that it can capture and share the knowledge and background of qualified pilots, instructors and all stakeholders who work with novice pilots. this will help promote a complete experience, improve security, ensure better-quality flights and good tools, and will present mutually beneficial teaching methods, transferring the knowledge and experience of flying. unquestionably, this study can be useful for the aircraft industries and for professionals such as those operating in the medical industry, where medical practitioners can receive more information to support the human cognitive and health fields in terms of, for example, medical operation procedures. essentially, the ar method is independent of the field of application.

table 1. users' comments.

profile/function: flight instructor/coordinator
  comments: 'the app has great potential, not only for new students studying to get their private pilot license, but also for experienced pilots when they want to upgrade and get their jet license, for example'.
  pros: 'easy, it can be replicated to other airplanes, high definition'.
  cons: 'checklist could have all items, it could be more complete'.

profile/function: flight instructors
  comments: 'cool. we can see in detail'.
  pros: 'it is a good way to learn about the instruments and controls. good to memorise'.
  cons: 'the prototype is just for android. various instructors use iphone'.

profile/function: students
  comments: 'there are illustrations in the manuals that do not allow us to understand the real movement that is made when activating a certain switch, lever, button, or selector. with ar, it is possible to visualise things as if the panel were really in front of us, and the items are easily seen'. 'it would have been great to have had an app like this one when i was preparing myself to be a pilot and upgrading my licenses and airplanes'.
  pros: 'functional, detailed, easy'.
  cons: 'it could include more procedures'.

profile/function: students
  comments: 'i found it very realistic and i liked the part where i could have it at home and use the panel whenever i wanted'. 'super. i can study at home and see the panel in detail. when i use this airplane (i will be a new student) it will be familiar to me'. 'nowadays, i'm learning to fly in an aeroboeiro aircraft, i would like to have one of these for this airplane'.
  pros: 'cool, portable, useful in training'.
  cons: 'the prototype is just for android. various students use iphone; the screen is too small to click on it'.

profile/function: aerospace physician
  comments: 'brilliant project'.
  pros: 'useful and safe in training'.

acknowledgements

s. k. füchter thanks the aero club of santa catarina, the instructors claudia thofehrn and thiago bortholotti, the pilots and the students. we also thank dr orly pedroso, physician specialist in aerospace medicine, development in health, astronaut, and increased vr in cosmic space. he is accredited by the national civil aviation agency (anac), brazil; postgraduate in the course for aerospatiale medical examiners; and nasa, johnson space center, houston, texas, usa: physiology and aerospatiale medicine course. s. k. füchter also thanks the university of estácio de santa catarina center and the programa pesquisa produtividade (ppp).

references

[1] i. n. gleim, g. w. gleim, private pilot faa knowledge test (1st ed.), university station, 2011, p. 2.
[2] j. tustain, tudo sobre realidade virtual & fotografia 360°, são paulo: editora senac, 2019.
[3] microsoft, microsoft flight simulator, xbox. online [accessed 18 september 2020] http://xbox.com/games/microsoft-flight-simulator
[4] laminar research, x-plane 11, x-plane. online [accessed 18 september 2020] https://www.x-plane.com
[5] international standards organisation (iso), iso 13407, human-centered design processes for interactive systems. online [accessed 15 september 2020] https://www.iso.org/obp/ui/#iso:std:iso:13407:en
[6] j. s. martinez, n. schoenstein, g. salazar, t. m. swarmer, h. silva, n. russi-vigoya, a. baturoni-cortez, j. love, d. wong, r. walker, implementation of human-system integration workshop at nasa for human spaceflight, 70th international astronautical congress iac, washington d.c., usa, 21-25 october 2019, 11 pp. online [accessed 20 september 2020] https://ntrs.nasa.gov/api/citations/20190032353/downloads/20190032353.pdf
[7] iso international organisation for standardisation, documentation. online [accessed 25 september 2020] https://www.iso.org/standard/52075.html
[8] nasa/sp-2015-3709, human systems integration (hsi) practitioner's guide, national aeronautics and space administration, lyndon b. johnson space center, houston, texas, 2015. online [accessed 18 december 2020] https://ntrs.nasa.gov/api/citations/20150022283/downloads/20150022283.pdf
[9] nasa/tp-2014-218556, human integration design processes (hidp), national aeronautics and space administration, international space station program, johnson space center, houston, texas, usa, 2014. online [accessed 18 december 2020] https://www.nasa.gov/sites/default/files/atoms/files/human_integration_design_processes.pdf
[10] p. milgram, f. kishino, a taxonomy of mixed reality visual displays, ieice trans. inform. syst. 77 (1994), pp. 1321-1329.
[11] h. schaffernak, potential augmented reality application areas for pilot education: an exploratory study, educ. sci. 10 (2020), p. 86. doi: 10.3390/educsci10040086
[12] c. kirner, r. tori, introdução à realidade virtual, realidade misturada e hiper-realidade, in: realidade virtual: conceitos, tecnologia e tendências, 1st ed., são paulo: senac, 2004, v. 1.
[13] p. cipresso, i. a. giglioli, m. a. raya, g. riva, the past, present, and future of virtual and augmented reality research, front. psychol. (2018). doi: 10.3389/fpsyg.2018.02086
[14] s. füchter, t. pham, a. perecin, l. ramos, a. k. füchter, m. schlichting, o uso do game como ferramenta de educação e sensibilisação sobre a reciclagem de lixo, revista educação e cultura contemporânea 13(31) (2016), pp. 56-81. doi: 10.5935/2238-1279.20160022
[15] l. brown, the next generation classroom: transforming aviation training with augmented reality, in: proceedings of the national training aircraft symposium (ntas), embry-riddle aeronautical university, daytona beach, fl, usa, 14-16 august 2017. online [accessed 8 january 2021] https://commons.erau.edu/ntas/2017/presentations/40/
[16] s. k. füchter, m. s. schlichting, utilisação da realidade virtual voltada para o treinamento industrial, revista científica multidisciplinar núcleo do conhecimento, ano 03, 10(07) (2019), pp. 113-120.
[17] g. zichermann, j. linder, the gamification revolution: how leaders leverage game mechanics to crush the competition, new york: mcgraw-hill, 2013.
[18] aviation instructor's handbook, department of transportation, federal aviation administration, flight standards service, oklahoma city, ok, usa, 2008.
[19] s. k. füchter, augmented reality in aviation for pilots and technicians, october 2018 xr community of practice, johnson space center - nasa, houston, 2018.
[20] s. k. füchter, pre-flight inspection and briefing for training aircraft pilots using augmented reality, ieee galveston bay section, joint meeting of yp group and new i&s section chapter, university of houston clear lake, houston, tx, usa, 2018.
[21] s. k. füchter, augmented reality as a tool for technical training, galveston bay section meeting: the institute of electrical and electronics engineers (ieee), gilruth center nasa-jsc, 17 august 2017. online [accessed 18 december 2020] https://site.ieee.org/gb/files/2017/06/ieee-gbs-081717.pdf
[22] unity, documentation. online [accessed 18 september 2020] https://learn.unity.com/
[23] ptc, vuforia: market-leading enterprise ar. online [accessed 22 september 2020] https://www.ptc.com/en/products/vuforia
[24] cessna pilot's operating handbook, 150 commuter cessna model 150m, 1977.
[25] m. r. endsley, toward a theory of situation awareness in dynamic systems, human factors journal 37(1) (1995), pp. 32-64.
macro x-ray fluorescence analysis of xvi-xvii century italian paintings and preliminary test for developing a combined fluorescence apparatus with digital radiography

acta imeko
issn: 2221-870x
march 2022, volume 11, number 1, 1 - 8

leandro sottili1,2, laura guidorzi1,2, alessandro lo giudice1,2, anna mazzinghi3,4, chiara ruberto3,4, lisa castelli4, caroline czelusniak4, lorenzo giuntini3,4, mirko massi4, francesco taccetti4, marco nervo5, rodrigo torres6, francesco arneodo6, alessandro re1,2

1 dipartimento di fisica, università degli studi di torino, via pietro giuria 1, 10125 torino, italy
2 istituto nazionale di fisica nucleare (infn), sezione di torino, via pietro giuria 1, 10125 torino, italy
3 dipartimento di fisica e astronomia, università degli studi di firenze, via giovanni sansone 1, sesto fiorentino, 50019 firenze, italy
4 istituto nazionale di fisica nucleare (infn), sezione di firenze, via giovanni sansone 1, sesto fiorentino, 50019 firenze, italy
5 centro conservazione e restauro "la venaria reale", piazza della repubblica, venaria reale, 10078 torino, italy
6 new york university abu dhabi, division of science, p.o. box 129188, saadiyat island, abu dhabi, united arab emirates

section: research paper
keywords: ma-xrf; digital radiography; pigments identification; paintings
citation: leandro sottili, laura guidorzi, alessandro lo giudice, anna mazzinghi, chiara ruberto, lisa castelli, caroline czelusniak, lorenzo giuntini, mirko massi, francesco taccetti, marco nervo, rodrigo torres, francesco arneodo, alessandro re, macro x-ray fluorescence analysis of xvi-xvii century italian paintings and preliminary test for developing a combined fluorescence apparatus with digital radiography, acta imeko, vol. 11, no. 1, article 6, march 2022, identifier: imeko-acta-11 (2022)-01-06
section editor: fabio santaniello, university of trento, italy
received march 7, 2021; in final form december 13, 2021; published march 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this project has received funding from: the european union's horizon 2020 research and innovation programme under the marie skłodowska-curie grant agreement no 754511 (phd technologies driven sciences: technologies for cultural heritage - t4c); infn-chnet and compagnia di san paolo (nexto project, progetti di ateneo 2017).
corresponding author: alessandro lo giudice, e-mail: alessandro.logiudice@unito.it

abstract
using portable instruments for the preservation of artworks in heritage science is more and more common. among the available techniques, macro x-ray fluorescence (ma-xrf) and digital radiography (dr) play a key role in the field, and a number of ma-xrf scanners and radiographic apparatuses have therefore been developed for this scope. recently, the infn-chnet group, the network of the infn devoted to cultural heritage, has developed a ma-xrf scanner for in-situ analyses. the instrument is fully operative, and it has already been employed in museums, conservation centres and outdoor fieldwork. in the present paper, the ma-xrf analyses conducted with the instrument on four italian artworks undergoing conservation treatments at the conservation centre ccr "la venaria reale" are presented. results of the preliminary test to combine dr with ma-xrf in a single apparatus are also shown.

1. introduction

nowadays, the use of non-destructive, non-invasive x-ray based techniques is well established in heritage science for the analysis and conservation of artworks [1]-[3]. the x-ray fluorescence (xrf) technique plays a fundamental role, since it provides information on the elemental composition of painted surfaces, contributing to identifying the materials employed in artworks. whenever xrf is combined with scanning capability over macroscopic surfaces, the technique is indicated as macro x-ray fluorescence (ma-xrf) [4]. conversely, due to the impossibility of transporting most artworks into a laboratory for scientific analyses, e.g., because of their preciousness or considerable weight, an important class of instruments is made up of portable and transportable scanners [5]. a number of ma-xrf scanners are nowadays in use in heritage science, both commercial [6] and built in-house [7]-[9]. despite the high analytical capabilities of the ma-xrf technique, it is worth underlining the importance of a thorough multi-analytical approach for a better comprehension of the artworks. another well-established, non-destructive, non-invasive and transportable x-ray technique is digital radiography (dr), whose potentialities are widely known [10] as a tool for conservators and art historians [11]. it is frequently used in combination with ma-xrf by means of a dedicated instrument for more complete information on artworks, as in the case of paintings on canvas and on wooden panels [12], [13]. however, the possibility of employing a single apparatus integrating xrf and dr has not yet been well investigated [14].
the advantage would be to have a single x-ray tube for a straightforward combined analysis of the same area. in this work, the ma-xrf scanner [15] developed in-house by the cultural heritage network of the national institute of nuclear physics (infn-chnet) was used to analyse xvi-xvii century paintings under conservation at the centro conservazione e restauro (ccr) "la venaria reale" [16], located near torino. to date, the infn-chnet network gathers 18 local divisions, 4 italian partners, among which the ccr "la venaria reale", a second-level node in the network, and international partners such as the new york university of abu dhabi (uae) [17]. moreover, a flat panel detector for dr, coupled with a mini x-ray tube to be used in a modified version of the infn-chnet ma-xrf scanner, was tested on a painting on canvas. the information obtained by means of elemental mapping and radiography was combined for a better comprehension of the realisation of the artwork.

2. experimental set-up

for the measurements presented in this paper, two set-ups were used: the ma-xrf scanner developed by the infn-chnet group for compositional information, and a mini x-ray tube combined with a flat panel detector for dr, which will be added in a modified version of the ma-xrf scanner in the near future.

2.1. the infn-chnet ma-xrf scanner

the infn-chnet ma-xrf scanner (figure 1) is a compact (60 × 50 × 50 cm3) and lightweight (around 10 kg) instrument. its main parts are the measuring head, a three-axis motor stage and a case containing all the electronics for acquisition and control. the measuring head is composed of an x-ray tube (moxtek©, 40 kv maximum voltage, 0.1 ma maximum anode current, 4 w maximum power, mo anode) with a brass collimator (typically 800 µm in diameter), a silicon drift detector (amptek© xr100 sdd, 50 mm2 effective active surface, 12.5 µm thick be window) and a telemeter (keyence ia-100). the motor stage (physik instrumente©, travel ranges 30 cm horizontally (x axis), 15 cm vertically (y axis) and 5 cm in the z direction) holding the measuring head is screwed onto the carbon-fibre case. the typical operating voltage is around 30 kv. signals are collected with a multi-channel analyser (model caen dt5780), and the whole system is controlled by a laptop. the control-acquisition-analysis software was developed within the infn-chnet network and allows both on-line and off-line analysis. the output of the acquisition process is a file containing the scanning coordinates and, for each position, the spectrum acquired. for each map, a single element can be selected and shown in the scanned area, or in a part of it. using the raw data, the relative intensities of each element are shown in grey scale, in which the maximum intensity is white and the lowest is black. the scan is carried out along the x axis, and a step size of typically 1 mm is set on the y axis, resulting in a pixel size of 1 mm2. a complete review of the instrument can be found in [15]. the instrument has already been used for a number of different applications, i.e. paintings [18]-[21], illuminated manuscripts [22], coins [23], ceramics [24], and furniture [25].
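to make the map-generation step concrete, the following is a minimal python sketch, assuming the acquisition file has already been parsed into a dictionary of per-pixel spectra; the channel window for the chosen element and the synthetic data are illustrative assumptions, not the actual infn-chnet software.

```python
# minimal sketch: build a grey-scale elemental map by summing, for every
# scan position, the counts in the energy window of the chosen element,
# then rescaling so that the maximum intensity is white (255).
import numpy as np

def element_map(spectra, shape, ch_lo, ch_hi):
    """spectra: dict mapping (x, y) to a 1-d array of counts per channel."""
    img = np.zeros(shape)
    for (x, y), spectrum in spectra.items():
        img[y, x] = spectrum[ch_lo:ch_hi].sum()
    if img.max() > 0:
        img = 255.0 * img / img.max()
    return img.astype(np.uint8)

# e.g. channels around the fe k-alpha line (6.40 kev), assuming an
# illustrative 10 ev/channel calibration; synthetic data for the demo
rng = np.random.default_rng(0)
spectra = {(x, y): rng.poisson(5, 2048) for x in range(70) for y in range(30)}
fe_map = element_map(spectra, shape=(30, 70), ch_lo=630, ch_hi=650)
```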
2.2. the digital radiography set-up

structural information on artworks can be obtained by a radiographic approach. although a radiography could in principle be carried out using the same x-ray tube employed in the present infn-chnet ma-xrf apparatus, for future applications a modified version with a different source will be considered. considering the larger distance from the object needed to obtain a radiography compared with xrf maps, and the thickness of the artworks to be passed through, an x-ray tube with a slightly higher voltage and power was used. in particular, the measurements were made with a moxtek© 60 kv x-ray tube (1 ma maximum anode current, 12 w maximum power, 0.4 mm diameter nominal focal spot size, rh anode). if not collimated, it generates a 20 cm diameter beam at about 25 cm of distance. as the present ma-xrf apparatus only has a 5 cm z travel range, the future version will be capable of a translation in z of up to 30 cm to avoid handling the artwork between xrf and dr measurements. for x-ray imaging, a shad-o-box hs detector by teledyne, model 6k, was selected. the detector contains a large active area (11.4 cm × 14.6 cm) that is fully covered by the x-ray beam at 25 cm of distance from the source; the pixel size is 49.5 µm and the maximum integration time 65 s. the video signal is digitised to 14 bits, reassembled within the camera's fpga, and then transferred to a computer via a high-speed gigabit ethernet interface. the cmos sensor inside the detector contains a direct-contact csi scintillator, which converts x-ray photons into visible light that is sensed by the cmos photodiodes. a thin graphite cover protects the sensor from accidental damage as well as from ambient light. the shad-o-box hs camera also contains lead and steel shielding to protect its electronics from x-ray radiation. the cameras are sensitive to x-ray energies as low as 15 kev and may be used with generators up to 225 kvp. the detector, which has already been used for x-ray imaging with conventional tubes [26], is part of the nexto project, which has the aim of integrating ma-xrf, dr and x-ray luminescence (xrl) [27] in a single portable instrument.

figure 1. infn-chnet ma-xrf scanner placed in front of the panel painting madonna e i santi by cristoforo roncalli, known as il pomarancio.
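before interpretation, raw flat-panel frames are commonly corrected with dark and flat-field reference images; the sketch below shows this standard correction in python as an illustration, not the vendor's own calibration routine.

```python
# minimal sketch of a standard dark/flat-field correction for flat panel
# frames: corrected pixel = (raw - dark) / (flat - dark), which removes
# the fixed-pattern offset and the per-pixel sensitivity variation.
import numpy as np

def flat_field_correct(raw, dark, flat, eps=1e-6):
    gain = flat.astype(float) - dark          # per-pixel sensitivity
    return (raw.astype(float) - dark) / np.maximum(gain, eps)

# usage with small synthetic frames (a real frame of this detector is
# much larger, with a 14-bit dynamic range)
rng = np.random.default_rng(1)
dark = rng.normal(100, 2, (64, 64))           # offset frame, no x-rays
flat = dark + rng.normal(8000, 50, (64, 64))  # open-beam reference frame
raw = dark + 0.5 * (flat - dark)              # object absorbing ~50 %
img = flat_field_correct(raw, dark, flat)     # ~0.5 everywhere
```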
3. applications at the ccr "la venaria reale"

in this section, different applications of the instrumentation on paintings are presented. the works of art represent case studies from italian central regions of different periods (from the beginning of the 16th to the beginning of the 17th century). they were analysed during conservation processes carried out at the ccr "la venaria reale" [28]. for the ma-xrf measurements, a collimator of 800 µm diameter was used, the vertical step was set to 1 mm, and the scanning speed to 3 mm/s. furthermore, the keyence ia-100 telemeter was switched on to maintain the sample distance during the scanning process.

3.1. madonna di san rocco by francesco sparapane

the first painting presented is the oil on panel madonna di san rocco, depicting the virgin with the child, saint antonio from padua and saint rocco, by francesco sparapane (preci, umbria region, 1530 ca.). the importance of the work of art is related to the lack of documented paintings by the author; its study therefore represents a key feature for understanding the painting technique of the artist [29]. the ma-xrf measurements were conducted on two areas, as shown in figure 2, from which a number of maps were created. the source voltage was set to 30 kv and its anode current to 20 µa. the maps around saint rocco (figure 3) show the use of lead white, most likely due to the imprimatur layer and as a proper pigment in the flesh tones. from the map of copper, the green part of the hat was realised with copper-based compounds [30]. the presence of tin is also detected in this same region, although in moderate amounts, and might be due to the use of lead-tin yellow in mixture with the copper-based pigment. a more precise identification of the material cannot be made with the xrf technique alone: for instance, it is not possible to distinguish between a mixture of tin-based yellow with malachite rather than with azurite [31]. the shadows of the flesh tones were realised with ochre-earths, as can be inferred from the match between the iron and manganese maps [32]. corresponding to the red tone in the cheeks, a high signal of mercury is present, most likely due to the use of vermilion-cinnabar (hgs) [33]. furthermore, calcium is present in the strings of the hat, the dark strips and the eyes, which may indicate the use of bone black for darkening [34], while manganese in the same areas may indicate the use of manganese black [35]. the halo was made with gold (figure 3), while the corresponding presence of calcium and iron is probably due to a calcium/iron-based preparation, as discussed in [36]. the second area, around the upper part of the head of s. antonio (figure 4), presents similar results. however, a marked difference is related to the sky, which is made with a copper-based compound (most likely azurite [31]) with a glaze realised with smalt, a material rarely used as a pigment from the 15th century and widespread from the 17th century onwards. concisely, smalt is a blue potash glass (thus characterised also by the presence of potassium and aluminium) in which the chromophore is cobalt; it usually contains impurities of, among others, bismuth when produced after 1520 [37]. its presence is thus hypothesised from the maps of the corresponding elements. a similar palette was probably used for the sky in the first area; however, due to the bad conservation condition, only traces of the characteristic elements are present in the maps of copper and cobalt.

figure 2. painting madonna con bambino e s. antonio e s. rocco by francesco sparapane. the scanned areas are indicated in the white boxes.
figure 3. ma-xrf maps of area 1 of the madonna di san rocco by francesco sparapane (size 280 mm × 70 mm).
figure 4. ma-xrf maps of area 2 of the madonna di san rocco by francesco sparapane (size 120 mm × 70 mm).
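the element-to-pigment reasoning used in this section can be summarised as a simple lookup from detected elements to candidate pigments. the sketch below is illustrative only: the table of hypotheses is an assumption distilled from the discussion above, not an exhaustive database, and it deliberately keeps the ambiguities that xrf alone cannot resolve.

```python
# minimal sketch: map sets of detected elements to candidate pigments,
# mirroring the interpretative step of the text (illustrative table).
PIGMENT_HYPOTHESES = {
    frozenset({"Pb"}): ["lead white (also imprimatur)"],
    frozenset({"Pb", "Sn"}): ["lead-tin yellow"],
    frozenset({"Hg"}): ["vermilion-cinnabar"],
    frozenset({"Fe", "Mn"}): ["ochre-earths"],
    frozenset({"Cu"}): ["copper-based pigment (azurite or malachite, not distinguishable by xrf)"],
    frozenset({"Co", "K", "Bi"}): ["smalt (bi impurities suggest production after 1520)"],
    frozenset({"Au"}): ["gold (gilding)"],
}

def hypotheses(detected):
    """return pigment hypotheses whose key elements are all detected."""
    found = []
    for elements, names in PIGMENT_HYPOTHESES.items():
        if elements <= set(detected):
            found.extend(names)
    return found or ["no hypothesis from xrf alone"]

print(hypotheses({"Cu", "Sn", "Pb"}))   # e.g. the green hat of saint rocco
```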
3.2. madonna con bambino e santi by pomarancio

the oil on canvas madonna con bambino e santi by cristoforo roncalli, known as il pomarancio, was made in the first decade of the 17th century, and it is placed in the santa maria argentea church in norcia (umbria region, italy). the virgin and the child are depicted with the saints eutizio, fiorenzo, santolo, and spes. the focus of the analysis was the painting palette used in the flesh tones by the author [38], of which two representative areas were scanned, as shown in figure 5. the source voltage was set to 28 kv and its anode current to 20 µa. the maps realised in the first area (figure 6) show the use of lead white for the flesh tones and the book, while a high signal of iron is present corresponding to the shadows. the blue cope of saint fiorenzo shows an intense signal of copper, due to a copper-based compound, leading to the hypothesis of azurite [31]. the darkest colour is related to a high signal of calcium, which, by means of the xrf technique alone, cannot lead to a precise hypothesis on the material used. furthermore, the presence of tin was detected in the squiggles, as well as a higher intensity of lead, most likely due to the use of lead-tin yellow [32]. the second area (figure 7) shows a different composition: the map of mercury matches the hand, leading to the hypothesis of vermilion-cinnabar for the glove. as opposed to the previous area, the map of iron does not show an intense signal in the hand of saint spes. the main signal of iron comes from the stick and the cope, in correspondence with the yellow colour. by comparing the maps of iron, manganese, mercury, and tin, it can be noted that all of them are present in the crosier, even if tin and mercury are in the highlights, whereas manganese and iron are in the shadows. this result can be explained by the use of vermilion-cinnabar mixed with lead-tin yellow [32] in the highlights, and the use of ochre-earths [32] in the shading. iron, manganese, and tin are also present in the yellow cope. furthermore, iron and manganese are present in the green medallion, which presents a strong signal from copper, related to copper-based pigments. from the map of copper, it may be seen that all the green colours in the area are related to its presence. however, as for the hat of saint rocco seen in the previous section, it is not possible to draw a conclusion on the material used.

figure 5. painting madonna con bambino e i santi by il pomarancio. the scanned areas are indicated in white boxes. the saints are, from left, s. eutizio, s. fiorenzo, s. santolo, and s. spes.
figure 6. ma-xrf maps of area 1 of madonna con bambino e i santi by il pomarancio (size 130 mm × 110 mm).
figure 7. ma-xrf maps of area 2 of madonna con bambino e i santi by il pomarancio (size 150 mm × 137 mm).

3.3. adorazione dei magi by sante peranda

the oil on canvas adorazione dei magi by sante peranda (figure 8) is dated around the first decade of the 17th century. in this case, the interest was focused on the blue colours. the measurements were conducted in three areas: one on the robe of the virgin, one behind the magus on the far left wearing the white dress, and the last one behind the kneeling magus. the composition detected is different for each area. the source voltage was set to 28 kv and its anode current to 30 µa. in the first area, the virgin's robe (figure 9), cobalt is present. as in the previous sections, this may suggest the use of smalt as the blue pigment [37].
this is due to the different absorption for different x-ray energies (the lower is the energy and the higher is the absorption), therefore the comparison of the two maps suggests that lead white was used for the imprimatur, as well as for the white robe of the magus on the far left. a different composition is detected in the last blue area (figure 11). in this case a strong signal of copper is present, whereas no presence of cobalt was detected. for this reason, conversely to the previous cases, the use of azurite can be hypothesised for the blue tone in the area. it is also clearly visible from the map of copper its presence beneath the yellow robe, probably due to a pentimento in the back of the kneeling magus. the last hypothesis can be also supported by the maps of lead, in which its use up on the copper can be hypothesised by the detection of the m-line only in the region of the robe, whereas the rest of the area shows only the l-lines of lead. the yellow robe shows the presence of iron and lead, which may suggest a combined use of white lead and yellow ochreearths [31]. the hair of the servant present in the area shows a signal of iron and manganese, probably due to the employment of ochreearths. 3.4. madonna con bambino ed i santi crescentino e donnino by timoteo viti the last painting presented is the madonna con bambino ed i santi crescentino e donnino by timoteo viti (figure 12), dated between 1500 and 1510. the work is a tempera on canvas, its size is 168 cm × 165 cm. the painting presented bad conservation conditions on the areas around the faces of the virgin and the child. the painting technique is tempera magra [40], in which the binder tends to be absorbed by the preparatory layer. moreover, the application of a protective varnish was not envisaged, leaving the paint in direct contact with the external environment. figure 8. adorazione dei magi by sante peranda. the scanned areas are indicated in white boxes. figure 9. maps of area 1 in the robe of the virgin in figure 8 (size 100 mm × 100 mm). figure 10. maps of area 2 in the robe of the magus wearing the white dress in figure 8 (size 85 mm × 40 mm). figure 11. maps of area 3 in the blue robe behind the kneeling magus in figure 8 (size 70 mm × 65 mm). acta imeko | www.imeko.org march 2022 | volume 11 | number 1 | 6 the source voltage was set to 28 kv and the anode current to 30 µa. the maps of the area around the child’s head are presented in figure 14. as can be seen from figure 13, the flesh tones are characterised by a strong signal of calcium, whereas no evidence of the use of lead was detected. moreover, the shading was made most likely with ochre-earths, according to the map of iron. in addition, the highlights of the mouth and the cheeks show a signal of mercury, most likely due to the use of cinnabarvermilion. the high presence of calcium can be explained with the use of white of san giovanni (white lime) pigment or other calcium-based compounds [41]. furthermore, by creating a spectrum in the area of the face, and comparing it with a spectrum obtained outside (figure13), it can be noted a higher intensity of the 2.0 kev line compared with the kα of calcium, that can be explained with the presence of phosphorus in the flesh tone. this may be explained with the presence of bone black, a pigment used for shading [34]. the signal of lead is present in the hair of the child. furthermore, the match of the spatial distribution of tin with lead may indicate the use of lead-tin yellow for it. 
3.4. madonna con bambino ed i santi crescentino e donnino by timoteo viti

the last painting presented is the madonna con bambino ed i santi crescentino e donnino by timoteo viti (figure 12), dated between 1500 and 1510. the work is a tempera on canvas, and its size is 168 cm × 165 cm. the painting presented bad conservation conditions in the areas around the faces of the virgin and the child. the painting technique is tempera magra [40], in which the binder tends to be absorbed by the preparatory layer. moreover, the application of a protective varnish was not envisaged, leaving the paint in direct contact with the external environment. the source voltage was set to 28 kv and the anode current to 30 µa. the maps of the area around the child's head are presented in figure 14. as can be seen, the flesh tones are characterised by a strong signal of calcium, whereas no evidence of the use of lead was detected. moreover, the shading was most likely made with ochre-earths, according to the map of iron. in addition, the highlights of the mouth and the cheeks show a signal of mercury, most likely due to the use of cinnabar-vermilion. the high presence of calcium can be explained by the use of the white of san giovanni (lime white) pigment or other calcium-based compounds [41]. furthermore, by creating a spectrum in the area of the face and comparing it with a spectrum obtained outside (figure 13), a higher intensity of the 2.0 kev line compared with the kα of calcium can be noted, which can be explained by the presence of phosphorus in the flesh tone. this may be explained by the presence of bone black, a pigment used for shading [34]. the signal of lead is present in the hair of the child. furthermore, the match of the spatial distribution of tin with lead may indicate the use of lead-tin yellow for it. the landscape of the background was realised with copper-based compounds mixed with ochre-earths, while the halo was realised with gold. in addition to the ma-xrf measurements, a radiographic investigation was carried out in the same area using the set-up described in section 2.2. the voltage was set to 20 kv, the anode current to 0.6 ma and the integration time to 2 seconds. as shown in figure 15, for example in jesus christ's hair, the image is more detailed than the ma-xrf map: this allows the visualisation of the warp and weft threads of the canvas. moreover, it can be observed to match the ma-xrf map distributions of the heavy metals (pb, sn, au, cu), and only partially the distribution of calcium. this result is due to the very thin thickness of the painting layer, typical of the tempera magra painting technique.

figure 12. madonna con bambino e i santi crescentino e donnino by timoteo viti. the results presented are from the area in the white box.
figure 13. comparison between two spectra, one obtained by selecting an area inside the face (black) and one outside (red).
figure 14. ma-xrf maps of the area around the face of jesus christ in the madonna ed i santi crescentino e donnino painting (size 140 mm × 110 mm).
figure 15. radiography of the area around jesus christ's face.
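the spectral comparison of figure 13 amounts to averaging the spectra of two pixel regions and comparing the intensity around 2.0 kev (the phosphorus kα region) with the calcium kα line at 3.69 kev. the following minimal python sketch assumes per-pixel spectra and a 10 ev/channel calibration, both of which are illustrative assumptions.

```python
# minimal sketch: average the per-pixel spectra of two regions and
# compare the p (around 2.0 kev) to ca k-alpha (3.69 kev) line ratio.
import numpy as np

def region_spectrum(spectra, pixels):
    """mean spectrum over a set of (x, y) pixels."""
    return np.mean([spectra[p] for p in pixels], axis=0)

def line_ratio(spectrum, ch_p=(195, 205), ch_ca=(364, 374)):
    # windows assume 10 ev/channel: ~2.0 kev and ~3.69 kev regions
    return spectrum[slice(*ch_p)].sum() / spectrum[slice(*ch_ca)].sum()

# synthetic demo data; a higher p/ca ratio inside the face than outside
# would support the bone-black hypothesis
rng = np.random.default_rng(2)
spectra = {(x, y): rng.poisson(3, 2048).astype(float)
           for x in range(10) for y in range(10)}
face = [(x, y) for x in range(5) for y in range(5)]
background = [(x, y) for x in range(5, 10) for y in range(5, 10)]
print(line_ratio(region_spectrum(spectra, face)),
      line_ratio(region_spectrum(spectra, background)))
```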
this project has received funding from the european union's horizon 2020 research and innovation programme under the marie skłodowska-curie grant agreement no 754511 (phd technologies driven sciences: technologies for cultural heritage - t4c). the authors wish to warmly thank marco manetti of the infn-fi for his invaluable technical support, the students giulia corrada, francesca erbetta, daniele dutto and giulia dilecce and their supervisors, prof.ssa gianna ferraris di celle, prof. alessandro gatti, prof. bernadette ventura and prof. antonio iaccarino idelson, and the staff of the ccr "la venaria reale" for their support.

references
[1] v. gonzalez, m. cotte, f. vanmeert, w. de nolf, k. janssens, x-ray diffraction mapping for cultural heritage science: a review of experimental configurations and applications, chem. eur. j., 2020, 26, 1703. doi: 10.1002/chem.201903284
[2] m. p. morigi, f. casali, radiography and computed tomography for works of art, in handbook of x-ray imaging: physics and technology, editor p. russo, boca raton: crc press, 2018, pp. 1185-1210.
[3] f. p. romano, c. caliri, p. nicotra, s. di martino, l. pappalardo, f. rizzo, h. c. santos, real-time elemental imaging of large dimension paintings with a novel mobile macro x-ray fluorescence (ma-xrf) scanning technique, j. anal. atomic spectrom., 32 (4) (2017), pp. 773-781. doi: 10.1039/c6ja00439c
[4] p. ricciardi, s. legrand, g. bertolotti, k. janssens, macro x-ray fluorescence (ma-xrf) scanning of illuminated manuscript fragments: potentialities and challenges, microchemical journal, issn 0026-265x, volume 124, 2016, pp. 785-791. doi: 10.1016/j.microc.2015.10.020
[5] m. alfeld, k. janssens, j. dik, w. de nolf, g. van der snickt, optimization of mobile scanning macro-xrf systems for the in situ investigation of historical paintings, j. anal. at. spectrom., 26 (2011), pp. 899-909. doi: 10.1039/c0ja00257g
[6] m. alfeld, j. vaz pedroso, m. van eikema hommes, g. van der snickt, g. tauber, j. blass, m. haschke, k. erler, j. dik, k. janssens, a mobile instrument for in situ scanning macro-xrf investigation of historical paintings, journal of analytical atomic spectrometry, 28, 2013, pp. 760-767. doi: 10.1039/c3ja30341a
[7] e. pouyet, n. barbi, h. chopp, o. healy, a. katsaggelos, s. moak, r. mott, m. vermeulen, m. walton, development of a highly mobile and versatile large ma-xrf scanner for in situ analyses of painted work of arts, x-ray spectrom. 2020; pp. 1-9. doi: 10.1002/xrs.3173
[8] s. a. lins, e. di francia, s. grassini, g. gigante, s. ridolfi, ma-xrf measurement for corrosion assessment on bronze artefacts, 2019 imeko tc4 international conference on metrology for archaeology and cultural heritage, metroarchaeo 2019, 2019, pp. 538-542. online [accessed 11 march 2022] https://www.imeko.org/publications/tc4-archaeo-2019/imeko-tc4-metroarchaeo-2019-105.pdf
[9] e. ravaud, l. pichon, e. laval, v. gonzalez, development of a versatile xrf scanner for the elemental imaging of paintworks, appl. phys. a 122, 17 (2016). doi: 10.1007/s00339-015-9522-4
[10] j. lang, a. middleton, radiography of cultural material, 2nd edition, elsevier, oxford, 2005, isbn 978-0-08-045560-0
[11] d. graham, t. eddie, x-ray techniques in art galleries and museums, a. hilger (editor), bristol, 1985, isbn 10: 0852747829
[12] m. alfeld, j. a. c. broekaert, mobile depth profiling and subsurface imaging techniques for historical paintings - a review, spectrochimica acta part b 88, 2013, pp. 211-230. doi: 10.1016/j.sab.2013.07.009
[13] m. alfeld, l. de viguerie, recent developments in spectroscopic imaging techniques for historical paintings - a review, spectrochimica acta part b, 2017, pp. 81-105. doi: 10.1016/j.sab.2017.08.003
[14] a. shugar, j. j. chen, a. jehle, x-radiography of cultural heritage using handheld xrf spectrometers, x-ray spectrom. 21 (2017), pp. 311-318. doi: 10.1002/xrs.2947
[15] f. taccetti, l. castelli, c. czelusniak, n. gelli, a. mazzinghi, l. palla, c. ruberto, c. censori, a. lo giudice, a. re, d. zafiropulos, f. arneodo, v. conicella, a. di giovanni, r. torres, f. castella, n. mastrangelo, d. gallegos, m. tascon, f. marte, l. giuntini, a multi-purpose x-ray fluorescence scanner developed for in situ analysis, rendiconti lincei, scienze fisiche e naturali, 2019, 30, pp. 307-322. doi: 10.1007/s12210-018-0756-x
[16] centro conservazione e restauro la venaria reale, online [accessed 11 march 2022] www.centrorestaurovenaria.it/en
[17] cultural heritage network of the italian national institute for nuclear physics, online [accessed 11 march 2022] http://chnet.infn.it/en/who-we-are-2/
[18] c. ruberto, a. mazzinghi, m. massi, l. castelli, c. czelusniak, l. palla, n. gelli, m. bettuzzi, a. impallaria, r. brancaccio, e. peccenini, m. raffaelli, imaging study of raffaello's "la muta" by a portable xrf spectrometer, microchemical journal, 2016, volume 126, pp. 63-69. doi: 10.1016/j.microc.2015.11.037
[19] m. vadrucci, a. mazzinghi, b. sorrentino, s. falzone, characterisation of ancient roman wall-painting fragments using non-destructive iba and ma-xrf techniques, x-ray spectrom. 2020; 49: pp. 668-678. doi: 10.1002/xrs.3178
[20] a. dal fovo, a. mazzinghi, s. omarini, e. pampaloni, c. ruberto, j. striova, r. fontana, non-invasive mapping methods for pigments analysis of roman mural paintings, journal of cultural heritage, volume 43, 2020, pp. 311-318. doi: 10.1016/j.culher.2019.12.002
[21] a. mazzinghi, c. ruberto, l. castelli, c. czelusniak, l. giuntini, p. a. mandò, f. taccetti, ma-xrf for the characterisation of the painting materials and technique of the entombment of christ by rogier van der weyden, applied sciences, 2021; 11(13): 6151. doi: 10.3390/app11136151
[22] a. mazzinghi, c. ruberto, l. castelli, p. ricciardi, c. czelusniak, l. giuntini, p. a. mandò, m. manetti, l. palla, f. taccetti, the importance of being little: ma-xrf on manuscripts on a venetian island, x-ray spectrom, 2020; pp. 1-7. doi: 10.1002/xrs.3181
[23] v. lazic, m. vadrucci, f. fantoni, m. chiari, a. mazzinghi, a. gorghinian, applications of laser-induced breakdown spectroscopy for cultural heritage: a comparison with x-ray fluorescence and particle induced x-ray emission techniques, spectrochimica acta part b: atomic spectroscopy, volume 149, 2018, pp. 1-14. doi: 10.1016/j.sab.2018.07.012
[24] s. m. e. mangani, a. mazzinghi, p. a. mandò, s. legnaioli, m. chiari, characterisation of decoration and glazing materials of late 19th-early 20th century french porcelain and fine earthenware enamels: a preliminary non-invasive study, eur. phys. j. plus, 2021, 136 (10). doi: 10.1140/epjp/s13360-021-02055-x
[25] l. sottili, l. guidorzi, a. mazzinghi, c. ruberto, l. castelli, c. czelusniak, l. giuntini, m. massi, f. taccetti, m. nervo, s. de blasi, r. torres, f. arneodo, a. re, a. lo giudice, the importance of being versatile: infn-chnet ma-xrf scanner on furniture at the ccr "la venaria reale", applied sciences 2021; 11(3): 1197. doi: 10.3390/app11031197
[26] l. vigorelli, a. lo giudice, t. cavaleri, p. buscaglia, m. nervo, p. del vesco, m. borla, s. grassini, a. re, upgrade of the x-ray imaging set-up at ccr "la venaria reale": the case study of an egyptian wooden statuette, proceedings of 2020 imeko tc4 international conference on metrology for archaeology and cultural heritage, trento, italy, october 22-24, 2020. online [accessed 11 march 2022] https://www.imeko.org/publications/tc4-archaeo-2020/imeko-tc4-metroarchaeo2020-118.pdf
[27] a. re, m. zangirolami, d. angelici, a. borghi, e. costa, r. giustetto, l. m. gallo, l. castelli, a. mazzinghi, c. ruberto, f. taccetti, a. lo giudice, towards a portable x-ray luminescence instrument for applications in the cultural heritage field, eur. phys. j. plus, 2018, 133: 362. doi: 10.1140/epjp/i2018-12222-8
[28] l. sottili, l. guidorzi, a. mazzinghi, c. ruberto, l. castelli, c. czelusniak, l. giuntini, m. massi, f. taccetti, m. nervo, a. re, a. lo giudice, infn-chnet meets ccr la venaria reale: first results, 2020 imeko tc4 international conference on metrology for archaeology and cultural heritage 2020, 2020, pp. 507-511. online [accessed 11 march 2022] https://www.imeko.org/publications/tc4-archaeo-2020/imeko-tc4-metroarchaeo2020-096.pdf
[29] d. dutto, la madonna di san rocco di francesco sparapane: problemi conservativi e intervento di restauro di un dipinto su tavola del xvi secolo proveniente dalla valnerina, msc thesis, master's degree programme in conservation and restoration for cultural heritage, university of torino, torino, 2018
[30] n. eastaugh, pigment compendium: a dictionary and optical microscopy of historical pigments, amsterdam: butterworth-heinemann (editor), 2008.
[31] artists' pigments: a handbook of their history and characteristics, vol. 2, editor ashok roy, national gallery of art, washington archetype publications, london, 1993
[32] c. seccaroni, p. moioli, fluorescenza x: prontuario per l'analisi xrf portatile applicata a superfici policrome, nardini editore, firenze, 2002.
[33] r. j. gettens, r. l. feller, w. t. chase, vermilion and cinnabar, studies in conservation 17, no. 2, 1972, pp. 45-69. doi: 10.2307/1505572
[34] artists' pigments: a handbook of their history and characteristics, vol. 4, editor b. h. berrie, national gallery of art, washington archetype publications, london, 2007
[35] m. spring, r. grout, r. white, 'black earths': a study of unusual black and dark grey pigments used by artists in the sixteenth century, national gallery technical bulletin, 2003, 24, pp. 96-114.
online [accessed 11 march 2022] https://www.jstor.org/stable/42616306
[36] i. c. a. sandu, l. u. afonso, e. murta, m. h. de sa, gilding techniques in religious art between east and west, 14th-18th centuries, int. j. of conserv. sci., 1 (2010), pp. 47-62.
[37] b. h. berrie, mining for color: new blues, yellows, and translucent paint, early science and medicine, volume 20, issue 4-6, 2015, pp. 308-334. doi: 10.1163/15733823-02046p02
[38] f. erbetta, restaurare dopo il terremoto: il dipinto olio su tela madonna con bambino e santi del pomarancio dalla chiesa di santa maria argentea di norcia, msc thesis, master's degree programme in conservation and restoration for cultural heritage, university of torino, torino, 2018
[39] j. kirby, d. saunders, fading and colour change of prussian blue: methods of manufacture and the influence of extenders, natl gallery tech bull., 2004, 25, pp. 73-99.
[40] g. corrada, studio interdisciplinare del dipinto a tempera magra su tela madonna con bambino e i santi crescentino e donnino, timoteo viti, msc thesis, master's degree programme in conservation and restoration for cultural heritage, university of torino, torino, 2013
[41] s. rinaldi, la fabbrica dei colori: pigmenti e coloranti nella pittura e nella tintoria, roma, il bagatto, 1986
[42] l. guidorzi, f. fantino, e. durisi, m. ferrero, a. re, l. vigorelli, l. visca, m. gulmini, g. dughera, g. giraudo, d. angelici, e. panero, a. lo giudice, age determination and authentication of ceramics: advancements in the thermoluminescence dating laboratory in torino (italy), acta imeko, vol 10 (2021), no 1, pp. 32-36. doi: 10.21014/acta_imeko.v10i1.813
[43] e. di francia, s. grassini, g. ettore gigante, s. ridolfi, s. a. barcellos lins, characterisation of corrosion products on copper-based artefacts: potential of ma-xrf measurement, acta imeko, vol 10 (2021), no 1, pp. 136-141. doi: 10.21014/acta_imeko.v10i1.859
[44] a. impallaria, f. evangelisti, f. petrucci, f. tisato, l. castelli, f. taccetti, a new scanner for in situ digital radiography of paintings, applied physics a, 122, 12, 2016. doi: 10.1007/s00339-016-0579-5
[45] c. czelusniak, l. palla, m. massi, l. carraresi, l. giuntini, a. re, a. lo giudice, g. pratesi, a. mazzinghi, c. ruberto, l. castelli, m. e. fedi, l. liccioli, a. gueli, p. a. mandò, f. taccetti, preliminary results on time-resolved ion beam induced luminescence applied to the provenance study of lapis lazuli, nucl. instrum. methods phys. res. b 2016, 371, pp. 336-339. doi: 10.1016/j.nimb.2015.10.053
[46] l. palla, c. czelusniak, f. taccetti, l. carraresi, l. castelli, m. e. fedi, l. giuntini, p. r. maurenzig, l. sottili, n. taccetti, accurate on-line measurements of low fluences of charged particles, eur. phys. j. plus 2015, 130.
doi: 10.1140/epjp/i2015-15039-y

pmma-coated fiber bragg grating sensor for measurement of ethanol in liquid solution: manufacturing and metrological evaluation

acta imeko issn: 2221-870x june 2021, volume 10, number 2, 133-138

antonino quattrocchi1, roberto montanini1, mariangela latino2, nicola donato1
1 department of engineering, university of messina, c.da di dio, 98166 messina, italy
2 department of mift, university of messina, viale stagno d'alcontres 31, 98166 messina, italy

section: research paper
keywords: fbg sensors; dip coating; ethanol sensors; concentration evaluation in liquids; pmma coating
citation: antonino quattrocchi, roberto montanini, mariangela latino, nicola donato, pmma-coated fiber bragg grating sensor for measurement of ethanol in liquid solution: manufacturing and metrological evaluation, acta imeko, vol. 10, no. 2, article 19, june 2021, identifier: imeko-acta-10 (2021)-02-19
section editor: ciro spataro, university of palermo, italy
received january 18, 2021; in final form april 30, 2021; published june 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: antonino quattrocchi, e-mail: antonino.quattrocchi@unime.it

abstract
the sensitivity of fiber bragg gratings (fbgs) to the variation of a specific parameter can be improved by choosing a suitable coating material. this paper explores the possibility of measuring the concentration of ethanol in aqueous solutions by using poly(methyl methacrylate) (pmma) as the coating material of a single-mode fbg. the basic measurement principle relies on the absorption of liquid ethanol, which produces a controlled swelling of the coating, straining the underlying bragg grating. the pmma coating was deposited on the fbg by means of a specifically designed dip-coating test bench, able to accurately control the thickness of the applied layer. the metrological performance of the developed pmma-coated fbg ethanol sensor has been assessed for ethanol concentrations in the range of 0-38 % in distilled water at a constant temperature of 25 °c. the prototype shows good repeatability and better sensitivity when compared to a traditional fbg, especially at low ethanol concentrations. the dynamic behavior of the sensor (which depends on the kinetics of the absorption mechanism) has been assessed as well, showing a response time of about 290 s, while the recovery time (660 s) was more than twice as long.

1. introduction

fiber bragg grating (fbg) sensing is a mature technology that has been used so far to develop measurement solutions for a wide range of applications. fbgs are commonly employed for structural health monitoring, as well as to fabricate distributed sensing devices used in different fields such as medicine and biomedical research, automotive, aerospace, maritime and civil engineering, and many others. fbgs are largely preferred over other types of optical sensors [1] thanks to their unique advantages: they present small size, immunity from electromagnetic disturbances, good resistance in terms of temperature and humidity and, above all, they are wavelength encoded and chemically inert. for these reasons, they often represent the favorite choice for applications in critical environments, e.g. explosive or radioactive ones [2].
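as background for what follows, the underlying transduction relation (not spelled out in the paper, but standard for fbgs) is the bragg condition, λ_b = 2 n_eff Λ. the sketch below evaluates it; the effective index and grating period are illustrative assumptions, chosen only to land near the baseline wavelength reported later in the paper (~1554.7 nm).

```python
# bragg condition for a fibre grating: lambda_b = 2 * n_eff * period.
# n_eff and the grating period below are illustrative assumptions, not values
# stated in the paper.
n_eff = 1.447        # typical effective index of a silica single-mode core (assumed)
period_nm = 537.2    # grating period in nm (assumed)

lambda_b = 2.0 * n_eff * period_nm
print(f"bragg wavelength: {lambda_b:.1f} nm")   # ~1554.7 nm

# straining the grating (e.g. by coating swelling) changes both the period and
# n_eff, shifting lambda_b: this shift is the measured quantity in the paper.
```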
the sensitivity of a fbg to the variation of a specific parameter can be improved by choosing a suitable coating material. the basic principle relies on the swelling of the coating material, which eventually induces a change in the grating period and in the refractive index along the fiber length. several types of materials and deposition techniques can be used to create a fiber coating for a specific application. fbg-based humidity sensors have been realized by dip coating the fiber in a polyamide solution [3], by depositing a layer of graphene oxide on the fiber surface [4] and by embedding the fbg in a steel and carbon fiber reinforced polymer [5]. montanini et al. [6] compared results obtained on two polymer-coated fbg humidity sensors, using poly(methyl methacrylate) (pmma) and poly(vinyl acetate) (pvac) as sensing materials. the calibration of the two sensors was carried out in the 20-70 %rh range at three different temperatures (15 °c, 30 °c and 45 °c) by a two-pressure humidity generator. the pmma-coated sensor prototype displayed overall much better performance when compared with the pvac-coated one, although it manifested a higher temperature sensitivity. fbg-based cryogenic thermometers, instead, were developed using metal coatings (copper, zinc, tin and lead) applied to the fiber by electroplating and casting [7], by bonding the fbg onto teflon substrates [8] and by placing a layer of ormocer on the fiber surface by gravity-coating [9]. another specific application was developed by campopiano et al. [10]: the authors produced several types of hydrophones using plastic-coated fbgs, and evaluated how the use of polymers, which exhibit great variability in terms of obtainable geometries and acoustic and mechanical properties, guarantees the achievement of customized performances. an important field of application is the chemical industry. cong et al. [11] proposed an optical salinity sensor based on a hydrogel-coated fbg, whose detection mechanism relies on the mechanical stress induced in the coating material when water comes out of it.
the main limitation of this device lies in its slow response time, due to the balance between the forces acting at the intermolecular level of the coating material. aldaba et al. [12] investigated a ph sensor consisting of a polyaniline layer polymerized on the surface of a fbg. in this case, ph detection occurs thanks to the interaction between the optical absorption spectrum of polyaniline, which is sensitive to ph, and the bragg grating. the device exhibited fast response, biochemical compatibility, temperature independence and long-term stability. dinia et al. [13] proposed a multifunctional fbg ph sensor, covered with a ph-sensitive hydrogel, to monitor the ph of rain on critical and prestigious monuments.

ethanol is a widely used chemical in many fields, such as medicine, food processing and automotive [14]. however, because of its high volatility and flammability, ethanol can be dangerous for living organisms and, therefore, it is very important to have effective detection systems available [15]. several sensor typologies exist, based on different transduction effects, spanning from resistive sensors [16] to electrochemical [17], gas and resonant ones [18], [19]. in contrast, a fbg represents an intrinsically safe device, which can be profitably used to measure ethanol concentration either in the vapor or in the liquid phase. morisawa et al. [20] analyzed a plastic optical fiber sensor, coated with a layer of novolac resin, for detecting alcohol vapors, including ethanol. they showed that the transmitted light intensity is a function of the coating thickness, of the exposure time, of the alcohol type and of the vapor pressure. latino et al. [21] presented some preliminary results concerning the assessment of low concentrations of ethanol (0-3 %) in aqueous vapor, using a fbg coated with thin pmma layers of different thicknesses. in this case, the pmma was distributed onto the fbg surface with a simple dip-coating technique. coradin et al. [22] evaluated the metrological performance of four fbgs, assembled in several configurations, for the analysis of water-ethanol mixtures. raikar et al. [23] employed a ge-b doped fbg to show that the wavelength shift decreases as the liquid solution concentration increases from 0 % to 50 %. arasu et al. [24] developed a fbg coated with a thin gold film deposited by sputtering; their results showed that the concentration of ethanol in water, from 0 % to 100 %, is better assessed by measuring the change in absorbance with the thickest gold layer. recently, kumar et al. [25] analyzed a graphene oxide-coated fbg for ethanol detection in petrol. this sensor was manufactured by covering a partially chemically etched fbg with a layer of oxidized graphite powder by means of a drop-casting technique. the detection of the ethanol proportion in petrol, from 0 to 100 %, was investigated and a calibration curve was obtained. the main result shows that the graphene oxide-coated fbg allows a significant enhancement (by a factor of ten) in the detection of ethanol in comparison with an un-coated fbg.

in this paper, which is an extended version of the one presented at the 24th imeko tc4 international symposium and the 22nd international workshop on adc and dac modeling and testing (imeko tc4 2020) [26], starting from our previous experience gained with pmma-coated fbgs designed for sensing ethanol concentration in aqueous vapor [21], we aim to explore the possibility of measuring the concentration of ethanol in liquid solutions too.
the performance of the pmma-coated fbg has been investigated by evaluating its response to ethanol concentration at a constant temperature of 25 °c; a calibration curve was obtained and compared with that of a traditional fbg. the main metrological features of the prototype, in terms of sensitivity and response/recovery time, have also been assessed and discussed.

2. material and methods

2.1. fbg ethanol sensor manufacturing

the fbg ethanol sensor has been realized by recoating a single-mode silica fbg with a thin layer of pmma. the device has an active bragg grating length of about 14 mm and a total thickness of (0.44 ± 0.03) mm. this coating thickness value was chosen as the best balance between sensing performance and stability. the fbg is of the smf-28 type with an outer diameter of 250 µm, comprising a 9 µm core, a 125 µm cladding and an acrylate coating up to the 250 µm outer diameter; it has a reflectivity > 90 % and a full-width at half maximum (fwhm) < 0.30 nm. figure 1 reports a magnification of the sensor structure.

figure 1. fbg ethanol sensor with pmma sensing layer.

the measurement principle relies on the selective absorption of ethanol by the pmma coating layer. this interaction causes the swelling of the sensing polymer and a consequent mechanical effect on the fbg, which produces a wavelength shift related to the ethanol concentration in the liquid solution. in order to develop the fbg ethanol sensor, the pre-existing coating of the single-mode optical fiber was removed; then, the pmma deposition was carried out by a controlled dip-coating procedure. the fbg was dipped under controlled conditions (constant dipping/rising velocity) in a solution of 560 mg of pmma and 3 ml of acetone. the block diagram of the deposition system is depicted in figure 2. it consisted of a precision dc servo motor, equipped with an agilent e3632a power supply, an adaptor, and a stripe holder where the sample was held. by modulating the power delivered to the servo motor, it was possible to control the rising/dipping rate of the fbg in the pmma-acetone solution. figure 3 reports the relationship between the voltage applied to the servo motor and the resulting dipping rate, showing a clear linear trend. hence, by simply varying the rising/dipping rate, it was possible to accurately control the thickness of the coating layer.

figure 2. block diagram of the deposition setup.
figure 3. relationship between the supply voltage to the dc servo motor and the resulting dipping rate (linear fit: velocity (mm/min) = -0.0870 + 1.8811 · voltage (v), r² = 0.9982).

2.2. measurement setup

the measurement setup consisted of an optical spectrum analyzer (osa) (micron optics mod. si 720) with ± 3 pm accuracy, wired to the fbg ethanol sensor and to a pc. a dedicated graphical user interface, developed in the labview™ environment, was implemented to display and store the data in real time, using a national instruments gpib-usb-hs ieee-488.2 communication interface. the fbg ethanol sensor was placed in a cylindrical metallic container, filled with the target solution of distilled water and ethanol. the temperature of the metallic container was kept constant through a thermostatic bath, with the aim of minimizing the temperature dependence of the fbg and better relating the sensor response to the ethanol concentration. figure 4 illustrates a block diagram of the measurement setup.

figure 4. block diagram of the measurement setup.
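as a sketch of how the deposition bench could be driven in practice, the snippet below inverts the linear voltage-rate characteristic of figure 3 to obtain the supply voltage for a target dipping rate. the fit coefficients are those reported in the paper; the target rates are arbitrary examples.

```python
# inverting the linear fit of figure 3 (rate = -0.0870 + 1.8811 * voltage) to
# obtain the supply voltage needed for a target dipping rate. the coefficients
# come from the paper's fit; the example target rates are arbitrary.
A, B = -0.0870, 1.8811   # intercept (mm/min) and slope (mm/min per volt)

def voltage_for_rate(rate_mm_min: float) -> float:
    """supply voltage (v) giving the requested dipping rate (mm/min)."""
    return (rate_mm_min - A) / B

for rate in (4.0, 8.0, 12.0):
    print(f"{rate:4.1f} mm/min -> {voltage_for_rate(rate):.2f} v")
```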
3. results

3.1. response of the fbg ethanol sensor

figure 5 shows the power spectra of the fbg ethanol sensor obtained for three different concentrations of ethanol in water at a constant temperature of 25 °c. the peak of each curve shifts to the right as the ethanol concentration increases, while the maximum amplitude of the power can be considered sufficiently constant. this highlights how the interaction between the pmma coating and ethanol results only in a shift of the bragg grating wavelength, without substantial modification of the shape of the reflected signal. hence, the fbg sensor response at constant temperature can be defined considering the relative shift of the bragg wavelength ∆λ with respect to the baseline wavelength in distilled water λ_H2O (i.e. 0 % v/v of ethanol, reference temperature t_0 of 25 °c):

∆λ = λ_r − λ_H2O ,   (1)

where λ_r is the actual wavelength of the fbg sensor as it is exposed to a definite concentration of ethanol in distilled water.

figure 5. power spectra of the fbg ethanol sensor for different concentrations of ethanol in pure water (h2o, 2.78 %, 14.63 % and 32.69 %; λ_H2O = 1554.99 nm; coating: pmma; t = 25 °c).

figure 6 displays the fbg response as it is repeatedly immersed first in distilled water and then in a water/ethanol solution of 7.89 % v/v at a constant temperature of 25 °c. the obtained data highlight a large variation of the fbg wavelength: in air, the wavelength λ_0 is 1554.73 nm and it rapidly increases to 1554.99 nm (λ_H2O) as the fbg ethanol sensor is immersed in distilled water. this occurs because the pmma layer hydrates in contact with water and begins to expand until a steady-state condition is reached. as the sensor is exposed to the water/ethanol solution, ethanol is absorbed by the pmma layer and the swelling process restarts until a new steady-state condition is reached. the described interaction between pmma and ethanol is reversible, since the baseline is recovered when the sensor is exposed again to distilled water. a proper calibration procedure therefore allows the evaluation of the ethanol concentration over the whole characterization range.

figure 6. response of the fbg ethanol sensor in air, in distilled water and when exposed to a fixed ethanol concentration (7.89 % v/v).

figure 7 presents an example of data recorded in real time, keeping the temperature constant at 25 °c and varying the ethanol concentration from 2.78 % v/v to 5.40 % v/v, triggering the osa on the maximum marker. even though the fbg ethanol sensor response is subject to a transient before reaching the steady-state condition, it is evident that a greater value of λ_r is correlated with an increase of the ethanol concentration. furthermore, the measurement process is reversible, since the acquired wavelength coincides with the baseline one (λ_H2O = 1554.99 nm) when the fbg ethanol sensor is placed back into distilled water.

figure 7. wavelength shifts after exposure of the fbg ethanol sensor to different concentrations of ethanol.

figure 8 shows the response of the fbg ethanol sensor to progressively increasing ethanol concentrations (from 13 % to 38 % v/v) at a constant temperature of 25 °c. it can be clearly seen how an increase in ethanol concentration leads to a corresponding increase of λ_r over the whole investigated ethanol range.

figure 8. response of the fbg ethanol sensor at progressive ethanol concentrations.
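a minimal sketch of how λ_r and eq. (1) could be evaluated from a sampled reflection spectrum follows. the spectrum below is synthetic (a gaussian peak on a flat floor); only the baseline wavelength comes from the paper, and the centroid-around-maximum estimator is an assumption (the authors simply triggered the osa on its maximum marker).

```python
import numpy as np

# estimate the bragg wavelength from a reflected-power spectrum by a centroid
# around the maximum, then apply eq. (1). the spectrum is synthetic.
wl = np.linspace(1553.0, 1558.0, 2001)                        # wavelength axis (nm)
power = -40.0 + 38.0 * np.exp(-((wl - 1555.06) / 0.12) ** 2)  # dbm, synthetic peak

i_max = int(np.argmax(power))
win = slice(max(i_max - 25, 0), i_max + 25)            # narrow window around the peak
lin = 10.0 ** (power[win] / 10.0)                      # dbm -> mw for weighting
lambda_r = float(np.sum(wl[win] * lin) / np.sum(lin))  # centroid estimate of the peak

LAMBDA_H2O = 1554.99                                   # baseline in distilled water (nm)
delta = (lambda_r - LAMBDA_H2O) * 1e3                  # eq. (1), expressed in pm
print(f"lambda_r = {lambda_r:.3f} nm, delta_lambda = {delta:.1f} pm")
```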
3.2. metrological evaluation of the fbg ethanol sensor

in this section, the metrological performance of the fbg ethanol sensor is evaluated in terms of repeatability and calibration curve. figure 9 reports the repeatability of the fbg ethanol sensor when it is exposed to three pulses of 7.89 % v/v of ethanol, each followed by distilled water, at a constant temperature of 25 °c. the repeatability was estimated at an ethanol concentration of 7.89 % v/v and for the distilled water (baseline): the peak wavelength shift has a mean value of 0.075 nm with a standard deviation of 0.005 nm for the ethanol exposures, and a mean value of 0.071 nm with a standard deviation of 0.003 nm for the baseline. therefore, the peak values of λ_r and λ_H2O show good repeatability and reversibility.

figure 9. repeatability of the fbg ethanol sensor exposed to a fixed ethanol concentration of 7.89 % v/v.

figure 10 a) compares the calibration curves of the fbg and of the pmma-coated fbg (i.e. the fbg ethanol sensor) for ethanol concentrations ranging from 0 to 38 % v/v at a constant temperature of 25 °c. a linear trend can be seen for the fbg vs. ethanol concentration; this is probably due to ethanol absorption by the acrylate coating of the fbg itself. on the other hand, the pmma-coated fbg exhibits a more complex behavior: although a linear branch can be observed at low concentration values, a polynomial curve is needed to fit the data over the whole ethanol range. such a nonlinear trend could be caused by the binding forces between the pmma molecules, which could limit the swelling of the coating layer (i.e. the ethanol absorption) above a given threshold. however, the sensitivity of the pmma-coated sensor is much greater (by more than one order of magnitude) than that of the fbg. figure 10 b) reports the residuals of both fitting curves.

figure 10. a) comparison between the calibration curves of the fbg (linear fit: ∆λ = -0.2244 + 0.6389 c, r² = 0.9991) and of the pmma-coated one (polynomial fit: ∆λ = 6.2643 + 9.2300 c - 0.1169 c², r² = 0.9932), with ∆λ in pm and the ethanol concentration c in % v/v; b) residual analysis of the fittings.
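the sketch below reproduces the two fits of figure 10 a) with numpy. the sample points are generated directly from the published coefficients (so the fits simply recover them); with real (concentration, shift) data the same two polyfit calls would apply unchanged.

```python
import numpy as np

# calibration fits as in figure 10 a): a degree-1 fit for the bare fbg and a
# degree-2 fit for the pmma-coated one. sample data are synthesised from the
# published coefficients and are therefore noiseless.
c = np.linspace(0.0, 38.0, 12)                      # ethanol concentration (% v/v)
dl_fbg = -0.2244 + 0.6389 * c                       # bare fbg shift (pm), published fit
dl_pmma = 6.2643 + 9.2300 * c - 0.1169 * c ** 2     # pmma-coated shift (pm), published fit

lin = np.polyfit(c, dl_fbg, 1)    # -> [slope, intercept]
quad = np.polyfit(c, dl_pmma, 2)  # -> [a2, a1, a0]
print("fbg linear fit:     slope=%.4f pm per %% v/v, intercept=%.4f pm" % (lin[0], lin[1]))
print("pmma quadratic fit: a2=%.4f, a1=%.4f, a0=%.4f" % tuple(quad))

# the local sensitivity of the coated sensor is the derivative of the quadratic:
sens = np.polyval(np.polyder(quad), 5.0)            # evaluated at 5 % v/v
print("pmma-coated sensitivity at 5 %% v/v: %.2f pm per %% v/v" % sens)
```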
figure 11 displays the first derivatives of the calibration curves, for both the fbg and the pmma-coated one. these curves give the sensitivity of the devices, highlighting that the pmma coating leads to a significant increase of the fbg sensitivity to ethanol. it is also noteworthy that the sensitivity of the pmma-coated fbg is not constant, but strongly depends on the specific ethanol concentration, whereas the sensitivity of the fbg can be considered constant. furthermore, the acrylate coating of the fbg has a considerably lower absorption of ethanol compared to the pmma coating, as already highlighted by figure 10 a).

figure 11. sensitivity vs. ethanol concentration for the fbg (linear fit: y = -0.0001 + 0.6479 c) and for the pmma-coated one (linear fit: y = -0.1993 + 8.2098 c).

3.3. behavior of the fbg ethanol sensor

the fbg ethanol sensor exhibits a dynamic behavior that is a function of the kinetics of the swelling phenomenon of the pmma layer. the typical values of the response and recovery times are identified in figure 12 for an ethanol concentration of 7.89 % v/v at a temperature of 25 °c. these parameters are calculated as the time to reach 90 % of the final value when the sensor is exposed to ethanol, and the time to reach 90 % of the baseline wavelength when the sensor is exposed to distilled water, respectively. the response time is 290 s, while the recovery time is 660 s. considering these results, it can be noted that the sensing of ethanol (i.e. the absorption process) is faster than the desorption one (the process of reaching the baseline wavelength again in the presence of distilled water only).

figure 12. response and recovery time of the fbg ethanol sensor exposed to a fixed ethanol concentration (7.89 % v/v).
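a short sketch of the 90 % criterion used in section 3.3 follows. the wavelength trace is a synthetic exponential whose time constant was chosen so that the 90 % time lands near the reported 290 s; real λ_r samples from the osa would replace it.

```python
import numpy as np

# response time as the time to cover 90 % of the total wavelength excursion
# after an ethanol step, mirroring the criterion of section 3.3. the trace is
# synthetic (time constant chosen so the 90 % time is near the reported 290 s).
t = np.arange(0.0, 1200.0, 1.0)                    # time (s)
lam0, lam_inf = 1554.99, 1555.06                   # baseline and steady state (nm)
lam = lam_inf + (lam0 - lam_inf) * np.exp(-t / 126.0)

def time_to_fraction(t, y, frac=0.9):
    """first time at which y covers `frac` of its total excursion."""
    target = y[0] + frac * (y[-1] - y[0])
    idx = np.argmax(y >= target) if y[-1] > y[0] else np.argmax(y <= target)
    return t[idx]

print(f"response time (90 %): {time_to_fraction(t, lam):.0f} s")
```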
4. conclusions

this paper describes the manufacturing process and the metrological characterization of a fbg-based ethanol sensor obtained by covering the bragg grating with a layer of pmma. the obtained device is able to evaluate different ethanol concentrations in distilled water at a constant temperature of 25 °c. in fact, the measurement data have shown a clear shift in the wavelength of the bragg grating, strictly related to the increase in ethanol concentration. the fbg ethanol sensor identifies specific and progressive concentrations of ethanol, showing good repeatability. its calibration curve follows a polynomial trend that describes the whole range of considered concentrations (from 0 % to 38 % v/v of ethanol in distilled water). furthermore, compared to the calibration curve of the bare fbg, a better sensitivity has been achieved, especially at low ethanol concentrations. finally, the dynamic behavior for a fixed ethanol concentration (7.89 % v/v) has also been assessed: the response time (290 s) is less than half of the recovery one (660 s). future work will be focused on the cross-sensitivity investigation of the analyzed pmma-coated fbg, in order to evaluate the selectivity of the device to other chemical compounds.

references
[1] g. allevi, l. capponi, p. castellini, p. chiariotti, f. docchio, f. freni, r. marsili, m. martarelli, r. montanini, s. pasinetti, a. quattrocchi, r. rossetti, g. rossi, g. sansoni, e. p. tomasini, investigating additive manufactured lattice structures: a multi-instrument approach, ieee transactions on instrumentation and measurement, 69 (2020) n. 8932617, pp. 2459-2467. doi: 10.1109/tim.2019.2959293
[2] j. k. sahota, n. gupta, d. dhawan, fiber bragg grating sensors for monitoring of physical parameters: a comprehensive review, optical engineering, 59 (2020) n. 060901. doi: 10.1117/1.oe.59.6.060901
[3] t. l. yeo, t. sun, k. t. v. grattan, d. parry, r. lade, b. d. powell, characterization of a polymer-coated fibre bragg grating sensor for relative humidity sensing, sensors and actuators b: chemical, 110 (2005), pp. 148-156. doi: 10.1016/j.snb.2005.01.033
[4] y. wang, c. shen, w. lou, f. shentu, c. zhong, x. dong, l. tong, fiber optic relative humidity sensor based on the tilted fiber bragg grating coated with graphene oxide, applied physics letters, 109 (2016) n. 031107. doi: 10.1063/1.4959092
[5] a. montero, g. aldabaldetreku, g. durana, i. jorge, i. s. de ocáriz, j. zubia, influence of humidity on fiber bragg grating sensors, advances in materials science and engineering, (2014) id. 405250. doi: 10.1155/2014/405250
[6] r. montanini, m. latino, n. donato, g. neri, comparison between pmma and pvac coated fiber bragg grating sensors for relative humidity measurements, proc. of 22nd international conference on optical fiber sensors ofs2012, beijing, china, 8421, 2012, n. 842145. doi: 10.1117/12.970279
[7] c. lupi, f. felli, a. brotzu, m. a. caponero, a. paolozzi, improving fbg sensor sensitivity at cryogenic temperature by metal coating, ieee sensors journal, 8 (2008), pp. 1299-1304. doi: 10.1109/jsen.2008.926943
[8] t. mizunami, h. tatehata, h. kawashima, high-sensitivity cryogenic fibre-bragg-grating temperature sensors using teflon substrates, measurement science and technology, 12 (2001) n. 914. doi: 10.1088/0957-0233/12/7/329
[9] t. habisreuther, e. hailemichael, w. ecke, i. latka, k. schroder, c. chojetzki, k. schuster, m. rothhardt, r. willsch, ormocer coated fiber-optic bragg grating sensors at cryogenic temperatures, ieee sensors journal, 12 (2011), pp. 13-16. doi: 10.1109/jsen.2011.2108280
[10] s. campopiano, a. cutolo, a. cusano, m. giordano, g. parente, g. lanza, a. laudati, underwater acoustic sensors based on fiber bragg gratings, sensors, 9 (2009), pp. 4446-4454. doi: 10.3390/s90604446
[11] j. cong, x. zhang, k. chen, j. xu, fiber optic bragg grating sensor based on hydrogels for measuring salinity, sensors and actuators b: chemical, 87 (2002), pp. 487-490.
doi: 10.1016/s0925-4005(02)00289-7
[12] a. l. aldaba, á. gonzález-vila, m. debliquy, m. lopez-amo, c. caucheteur, d. lahem, polyaniline-coated tilted fiber bragg gratings for ph sensing, sensors and actuators b: chemical, 254 (2018), pp. 1087-1093. doi: 10.1016/j.snb.2017.07.167
[13] l. dinia, f. mangini, m. muzi, f. frezza, fbg multifunctional ph sensor - monitoring the ph rain in cultural heritage, acta imeko, 7 (2018) 3, pp. 24-30. doi: 10.21014/acta_imeko.v7i3.560
[14] s. brusca, r. lanzafame, a. marino cugno garrano, m. messina, on the possibility to run an internal combustion engine on acetylene and alcohol, energy procedia, 45 (2014), pp. 889-898. doi: 10.1016/j.egypro.2014.01.094
[15] s. brusca, s. l. cosentino, f. famoso, r. lanzafame, s. mauro, m. messina, p. f. scandura, second generation bioethanol production from arundo donax biomass: an optimization method, energy procedia, 148 (2018), pp. 728-735. doi: 10.1016/j.egypro.2018.08.141
[16] a. mirzaei, k. janghorban, b. hashemi, m. bonyani, s. g. leonardi, g. neri, highly stable and selective ethanol sensor based on α-fe2o3 nanoparticles prepared by pechini sol-gel method, ceramics international, 42 (2016), pp. 6136-6144. doi: 10.1016/j.ceramint.2015.12.176
[17] m. tomassetti, r. angeloni, m. castrucci, e. martini, l. campanella, ethanol content determination in hard liquor drinks, beers, and wines, using a catalytic fuel cell. comparison with other two conventional enzymatic biosensors: correlation and statistical data, acta imeko, 7 (2018) 2, pp. 91-95. doi: 10.21014/acta_imeko.v7i2.444
[18] a. gullino, m. parvis, l. lombardo, s. grassini, n. donato, k. moulaee, g. neri, employment of nb2o5 thin-films for ethanol sensing, proc. of 2020 ieee international instrumentation and measurement technology conference (i2mtc), dubrovnik, croatia, 2020, pp. 1-6. doi: 10.1109/i2mtc43012.2020.9128457
[19] g. gugliandolo, d. aloisio, s. g. leonardi, g. campobello, n. donato, resonant devices and gas sensing: from low frequencies to microwave range, proc. of 14th international conference on advanced technologies, systems and services in telecommunications (telsiks), niš, serbia, 2019, pp. 21-28. doi: 10.1109/i2mtc43012.2020.9128457
[20] m. morisawa, y. amemiya, h. kohzu, c. x. liang, s. muto, plastic optical fibre sensor for detecting vapour phase alcohol, measurement science and technology, 12 (2001) n. 877. doi: 10.1088/0957-0233/12/7/322
[21] m. latino, r. montanini, n. donato, g. neri, ethanol sensing properties of pmma-coated fiber bragg grating, procedia engineering, 47 (2012), pp. 1263-1266. doi: 10.1016/j.proeng.2012.09.383
[22] f. k. coradin, g. r. possetti, r. c. kamikawachi, m. muller, j. l. fabris, etched fiber bragg gratings sensors for water-ethanol mixtures: a comparative study, journal of microwaves, optoelectronics and electromagnetic applications, 9 (2010), pp. 131-143. doi: 10.1590/s2179-10742010000200007
[23] u. s. raikar, v. k. kulkarni, a. s. lalasangi, k. madhav, s. asokan, etched fiber bragg grating as ethanol solution concentration sensor, optoelectronics and advanced materials, 1 (2007), pp. 149-151.
[24] p. arasu, a. s. m. noor, a. a. shabaneh, s. h. girei, m. a. mahdi, h. n. lim, h. a. abdul rashid, m. h. yaacob, absorbance properties of gold coated fiber bragg grating sensor for aqueous ethanol, journal of the european optical society - rapid publications, 9 (2014) n. 14018. doi: 10.2971/jeos.2014.14018
[25] p. kumar, s. kumar, j. kumar, g. s. purbia, o. prakash, s. k. dixit, graphene-oxide-coated fiber bragg grating sensor for ethanol detection in petrol, measurement science and technology, 31 (2019) n. 2. doi: 10.1088/1361-6501/ab2d63
[26] a. quattrocchi, r. montanini, m. latino, n. donato, development and characterization of a fiber bragg grating ethanol sensor for liquids, proc. of 24th imeko tc4 international symposium and 22nd international workshop on adc and dac modelling and testing (imeko tc4 2020), palermo, italy, 2020, pp. 55-59. online [accessed 20 june 2021] https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-11.pdf

3d shape measurement techniques for human body reconstruction

acta imeko issn: 2221-870x june 2022, volume 11, number 2, 1-8

iva xhimitiku1, giulia pascoletti2, elisabetta m. zanetti3, gianluca rossi3
1 centro di ateneo di studi e attività spaziali g. colombo (cisas), university of padua, via venezia 15, 35131 padua, italy
2 department of mechanical and aerospace engineering (dimeas), politecnico di torino, corso duca degli abruzzi 24, 10129 turin, italy
3 department of engineering, university of perugia, via g. duranti 93, 06125, perugia, italy

section: research paper
keywords: 3d scanning techniques; non-contact measurement; low-cost technology; customised orthopaedic brace; non-collaborative patient; multimodal approach; 3d printing
citation: iva xhimitiku, giulia pascoletti, elisabetta m. zanetti, gianluca rossi, 3d shape measurement techniques for human body reconstruction, acta imeko, vol. 11, no. 2, article 33, june 2022, identifier: imeko-acta-11 (2022)-02-33
section editor: francesco lamonaca, university of calabria, italy
received december 20, 2021; in final form march 15, 2022; published june 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: iva xhimitiku, e-mail: vaxhimitiku@gmail.com

abstract
in this work the performances of three different techniques for 3d scanning have been investigated. in particular, two commercial tools (a smartphone camera and the ipad pro lidar) and a structured light scanner (go!scan 50) have been used for the analysis. first, two different subjects were scanned with the three techniques and the obtained 3d models were analysed in order to evaluate the respective reconstruction accuracy. a case study involving a child was then considered, with the main aim of providing useful information on the performance of scanning techniques for clinical applications, where boundary conditions are often challenging (i.e., a non-collaborative patient). finally, a full procedure for the 3d reconstruction of a human shape is proposed, in order to set up a helpful workflow for clinical applications.

1. introduction

biomedical applications often require device customization: as is well known, no patient is identical to another, and this is even truer when referring to pathologic conditions.
in the past, customization has often been sacrificed in favour of manufacturability; however, with the advent of 3d printing [1], this shortcoming is being overcome [2], [3], and more and more emphasis is being given to the necessity of providing fast and accurate systems to obtain the geometry of the whole body [4], [5], [6] or of specific body segments [7]. traditional techniques are based on plaster moulds and are affected by some major limitations such as: invasiveness, the need to keep the patient still for the curing time [8], limited accuracy (over 15 mm, according to [9], [10]), and the impossibility of acquiring undercut geometries. more recently, and as a viable alternative, various non-contact instruments have been developed in order to perform digital scanning [11], [12], [13], and the respective performances have been extensively reported in the literature [14], [15]. however, the application introduced in this work was somewhat peculiar due to the young age of the patient [16], which added some requirements to the scanning methodology: a time limit for performing the whole acquisition, and the possibility of compensating motions, since the patient was not collaborative due to his young age [17], [18]. the final aim was to obtain the 3d geometry of his trunk in order to gather input data for brace design [19]. prior attempts had been made with traditional moulding techniques and they did not succeed, due to frequent patient movements [20], [21], [22]. a specific methodology has been developed, tested and discussed here, based on a multimodal approach [23] in which the benefits of different scanning techniques are merged in order to optimise the final result.

in section 2, three common scanning techniques are briefly described, reporting their specifications and highlighting the respective advantages and disadvantages in relation to their application to human body scanning. these technologies are photogrammetry, light detection and ranging (lidar) and structured light scanning. the performances of these shape measurement techniques have been assessed by reconstructing the torso of two adults (one male and one female); the main objective of this first analysis was to evaluate the performances of two low-cost tools [16], [24], [25] (smartphone camera and ipad pro lidar) in relation to the accurate reconstruction obtainable with the structured light scanner, used as the reference measurement system [26].
once the performances of these tools had been defined under 'ideal' scanning conditions (a collaborative subject able to maintain a position throughout the scanning process), the same techniques were used to obtain a set of 3d scans of a 4-year-old boy's torso, at an orthopaedic laboratory (officina ortopedica semidoro srl, perugia, italy). for both these analyses, the process of 3d reconstruction and structure extraction is described in detail in section 5. the accuracy of and correlation among the geometries reconstructed with the different visual devices are evaluated and discussed, and the bias given by a non-collaborative patient is illustrated, leading to the introduction of a new methodology based on a multimodal approach, whose benefits are outlined and quantified. in section 6, it is demonstrated how this methodology can be applied in orthopaedics [1], [8], [11], even on the least collaborative patients, making it possible to obtain body scans where the alternative approach based on plaster of paris moulds would fail or would result in lower accuracy and longer execution times.

2. background

there are several techniques to perform body scans; herein, photogrammetry, structured light and lidar will be considered (figure 1).

a) photogrammetry (ph): an imaging method used to capture pictures of objects from different perspectives with calibrated cameras. the feature points obtained from overlapping images are used to calculate the shooting position through specific algorithms which automatically identify chromatic analogies between two images [21], [27], [28].

b) lidar: recent developments in commercial devices such as smartphones and tablets have enabled very fast scanning with lidar. this technique is based on time-of-flight measurements [29], that is, the time taken by an object, a particle or a wave to travel a certain distance. more specifically, lidar emits a pulsed or modulated light signal and measures the time difference in the returning wavefront [25], [30]; this allows the distance to be estimated from the signal propagation speed, as illustrated by the sketch after this section.

c) structured light scanner (sl): this technology is based on the projection of a known light pattern (grid) on the object; according to the deformation of this projected grid on the curved surface of the object, its geometry is reconstructed. moreover, triangulation is used for the location of each point on the object, thanks to two cameras placed at known angles.

these measurement techniques have some specific advantages over contact measuring techniques, such as fast acquisition, high accuracy, and minimal invasiveness. depending on the application, some specifications may become more relevant than others. with reference to clinical applications, in some cases high resolution and accuracy must be prioritised, while in other cases a good representation of colour and structure is mandatory.

figure 1. instruments and operation scheme: a) structured light; b) photogrammetry; c) lidar.
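a worked sketch of the time-of-flight relation underlying lidar ranging (section 2 b) follows: the emitted pulse travels to the target and back, so distance = c · ∆t / 2. the round-trip times below are illustrative values, not measurements from the ipad sensor.

```python
# time-of-flight ranging as used by lidar: distance = c * delta_t / 2, where
# delta_t is the round-trip time of the light pulse. the times below are
# illustrative examples only.
C = 299_792_458.0  # speed of light (m/s)

for dt_ns in (6.67, 20.0, 33.3):            # round-trip times in nanoseconds
    d = C * dt_ns * 1e-9 / 2.0
    print(f"round trip {dt_ns:5.2f} ns -> distance {d:.2f} m")
# a 5 m range (the ipad's stated limit) corresponds to a round trip of ~33 ns,
# which is why photon-level timing electronics are needed.
```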
3. instruments

3.1. photogrammetry

for this application a commercial smartphone, a redmi note 10 with an average cost of 190 €, was used. it is equipped with a dual camera system intended for commercial use. its technology includes a digital stabilizer, a 30 fps video speed and 4k (2160p) video recording. this tool was paired with the zephyr software (3d flow, v. aerial 3.1) in order to obtain a 3d reconstruction. this software uses structure-from-motion (sfm) algorithms for the photogrammetric processing of digital images to create 3d spatial data.

3.2. structured light scanner

the go!scan 50 (15 k€) is a high-speed hand-held scanner based on structured light. in total, this scanner uses three cameras positioned at various angles and depths. in the centre of the device, an rgb camera is installed, surrounded by led flashlights to capture textures without the need for a special light setup. the scanner works at a rate of 550000 measurements per second, covering a scanning area of 380 × 380 mm² with a resolution of 0.5 mm and a point accuracy up to 0.1 mm. a lamp guidance system helps to set the scanning distance between 0.3 m and 3.0 m. the surface is captured while moving the hand-held scanner over the object. moreover, it is possible to reduce the noise arising from movement by setting the appropriate parameters in the acquisition software (vx elements by creaform, v. 0.9) [31]. the go!scan 50 is the only certified instrument among those used for this work; for this reason, the respective reconstructed 3d geometries have been considered the most accurate replicas of the actual torso shape and have been used as the reference to evaluate the reconstruction accuracy of the other techniques [32], [33].

3.3. lidar

the ipad pro lidar scanner is a pulsed laser able to capture surroundings up to 5 m away through photon-level readings. since it works on time of flight, the time required for data acquisition is strictly related to the speed of light and to the distance. apple inc. itself does not specify the accuracy of the respective technologies or hardware [25]. this tool allows objects to be scanned and the scans to be exported as 3d textured cad models. the scanning resolution used for our applications was 0.2 mm.

the scanning time for a particular subject varies from operator to operator, since using each scanner is an acquired skill. in general, scanning can take about 15 min, depending on the desired accuracy level of the resulting scan [14]. as a rule of thumb, the fastest technique is lidar scanning and the slowest one is the sl system. subject comfort is comparable among the reviewed scanners.

4. methodology

in the first part of this work, the reconstruction accuracy of the considered scanning techniques was investigated through the trunk reconstruction of a male and a female human subject. the scanning results coming from this analysis have been used as a reference for technique comparison, since the subjects can be considered in a stable configuration, with the exception of the intrinsic deformability of the trunk (micro-movements due to breathing). in the following step, the same analysis was repeated on a 4-year-old child's trunk, adding one more bias given by the subject's macro-movements.

ph is characterised by a timed video acquisition of about 50 s for each subject. using the zephyr software, the geometry of the torso was reconstructed in a total of 7 h with high software settings: up to 15000 keypoints per image and the "pairwise image matching" setting on at least 10 images. keypoints are specific points in a picture that zephyr can understand and recognize in different pictures; a minimal stand-in for this matching step is sketched below. the matching stage depth (pairwise image matching) controls how many pairwise image matchings to perform: usually, the more the better; however, this comes at a computational cost.
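the following sketch illustrates keypoint detection and pairwise matching, the stage that sfm packages such as zephyr automate internally. opencv's orb detector is used purely as a stand-in (zephyr's own algorithms are proprietary), and the image file names are hypothetical.

```python
import cv2

# keypoint detection and brute-force pairwise matching between two views,
# illustrating the first stage of a structure-from-motion pipeline. the image
# files are hypothetical placeholders for two frames of the torso video.
img1 = cv2.imread("view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_02.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=15000)         # cap comparable to the zephyr setting
kp1, des1 = orb.detectAndCompute(img1, None)  # keypoints + binary descriptors
kp2, des2 = orb.detectAndCompute(img2, None)

# hamming-distance matching with cross-check, sorted by descriptor distance;
# in a full sfm pipeline these matches feed pose estimation and triangulation.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(kp1)} / {len(kp2)} keypoints, {len(matches)} cross-checked matches")
```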
the shape complexity and the macro-movements led to sudden changes of curvature, making the reconstruction difficult and thus resulting in missing parts and loss of detail. the mesh obtained from the scan performed with the go!scan 50 required longer manual processing times, given the computational heaviness due to the high resolution. the scan parameters were set directly in the vx elements software according to the manufacturer's indications, with a resolution of 2 mm. targets, semi-rigid positioning, and natural features were used as placement parameters. the acquisition and the data processing each required an average time of 15 min under ideal conditions (collaborative subject). scanning with the ipad is the fastest technique. accurate colour information (texture) can be obtained from the two rear cameras, whose images are managed by proprietary algorithms. output meshes are of low quality, due to the limited number of triangles used for the surface discretisation. for both the adult and child scanning analyses, the procedure consists of four main steps: 1) scanning: trunk acquisition required the scanners to rotate around the subject; adhesive circular reference targets with a diameter of 10 mm were used in order to facilitate the alignment and matching between scans in the post-processing phase (figure 2 b); these targets were positioned over the trunk considering that, as is well known, at least three tie-points must be present in two neighbouring scans in order to allow their alignment. 2) geometry reconstruction (post-processing): post-processing was performed using the geomagic studio software (3d system, v. 12) [34]. sometimes, data acquisition results in more than one point cloud; hence, these point clouds have to be registered and then merged in order to obtain one single cloud. a cleaning phase follows, where spurious points are eliminated; these points are generated by environmental noise, by subject motion, or by the camera resolution being close to the size of geometric details. a triangulated mesh is then generated and smoothed to obtain a more regular geometry. the smoothing phase must be performed carefully in order to avoid losing relevant information. finally, the mesh is edited to remove double vertices, discontinuities of the faces' normals, holes and internal faces, so obtaining a manifold geometry. at the end of the editing, the mesh is optimised to reduce the number of triangles. 3) comparison among measurement techniques: first of all, the scanners' performances were evaluated in terms of the time required to obtain the final geometry. the geometries were then compared through a fully automated operation, performed by dedicated software (geomagic studio). it should be noted that meshes coming from different scans are not iso-topological [25], which can make this operation more critical, in addition to the 288389, 431000 and 158000 triangles to be processed for the male, female and child torso, respectively. more in detail, for the adults' scans, the reference geometry obtained from the go!scan 50 was compared to the output geometries from ph and from lidar, analysing the distribution of distances both before and after mesh filtering. a software-coded mapping analysis between pairwise scans was performed: the results of this analysis are the standard deviation of the statistical distribution of the shortest distance between two scans, along with the mean value of this distance. this is a signed analysis; for this reason, in the following, positive and negative values of the mean distance will be provided, representing deviations towards the outer or the inner scanned volume, respectively (figure 3).
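as an illustration of this mapping analysis, the following python sketch computes the distribution of shortest point-to-point distances between two scans and its statistics. it is a minimal, unsigned approximation of the software-coded analysis described above (the commercial tool also uses surface normals to sign the distances); the file names and the 20 mm outlier threshold are assumptions for the example.

```python
import numpy as np
from scipy.spatial import cKDTree

# hypothetical point clouds exported from two scans (n x 3 arrays of xyz, in mm)
reference = np.loadtxt("sl_scan.xyz")   # e.g. go!scan 50 output, used as reference
test = np.loadtxt("lidar_scan.xyz")     # scan to be compared

# shortest distance from each test point to the reference cloud
tree = cKDTree(reference)
distances, _ = tree.query(test)

# points farther than the threshold are treated as outliers, as in the paper
threshold_mm = 20.0
inliers = distances[distances <= threshold_mm]

print(f"mean distance: {inliers.mean():.2f} mm")
print(f"std deviation: {inliers.std():.2f} mm")
print(f"distant points: {100.0 * (distances > threshold_mm).mean():.0f} %")
```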
for the child torso (figure 4), this deviation analysis was performed twice. in the first instance, the lidar scans were compared, analysing the deviation distribution at different threshold levels (10, 20, 80, 120 and 180 mm), where the threshold parameter represents the distance value (in mm) beyond which mesh points are considered outliers. this analysis was performed because three lidar scans were obtained: a full-body scan (longer acquisition time) and two partial body scans (shorter acquisition time). over the acquisition time of the two partial scans, the subject's torso could reasonably be considered still, while the full-body scan, due to the longer acquisition time, was more biased by macro-movements. figure 2. data obtained by acquisition with the three instruments; a) body reconstruction of a female and a male (red and blue); b) marker details. the multimodal approach used for the child torso consists in the reconstruction of the 3d geometry using the two partial sl scans after their alignment with the lidar scan, which was used as a reference for the global alignment, since lidar was the only technique which allowed obtaining a full-body scan. the deviation analysis among the lidar models allowed quantifying the macro-movements of the full scan and the respective reconstruction uncertainty for the final torso 3d model, if this scan were used as a reference for the positioning of the go!scan 50 model. 4) measurement: prior to the subjects' trunk scanning, some main measurements were taken with a seamstress meter in order to have a reference when checking the scale of the scanned geometry. 5. results 5.1. adult subjects scan the scans from the sl scanner are the most accurate and are certified; therefore, as aforementioned, they were used as a reference. for ph, it was possible to reconstruct only a portion of the surface for the female subject, while the meshes related to the male test resulted without detail: 94 % of points were too far from the sl points to be used for the calculation of the geometric deviation (figure 3). with reference to the female subject, 57 % of points resulted to be too far from the sl model. this is due to the male subject's movement and to the colour and reflection of the clothes. the best matching points were located at the back of the torso. the lidar scans were more complete: only 17 % of points had to be discarded for the female subject and 38 % for the male one (table 1). in terms of the number of triangles, which is closely related to the geometry accuracy, the sl scan gave a total of 350614 triangles, ph resulted in 14430 triangles, and lidar provided 37792 triangles for the male subject and 46811 triangles for the female subject. two reference points were tracked through apposite markers (figure 2 b). their distance was equal to 100 mm for the male subject and 90 mm for the female subject. in the male subject, this distance was evaluated as 98.6 mm with sl (1.49 % uncertainty) and 109 mm with lidar (9 % uncertainty). with reference to the female subject, the respective distance was evaluated as 89.9 mm with sl (1.11 % uncertainty) and 104 mm with lidar (15.5 % uncertainty).
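the scale check on the marker pairs reduces to a relative-deviation computation; a minimal python sketch, applied to the lidar marker distances reported above, is given below (the helper name is illustrative, not from the source).

```python
def relative_deviation(measured_mm: float, reference_mm: float) -> float:
    """percentage deviation of a scanned marker distance from the tape-measured reference."""
    return 100.0 * abs(measured_mm - reference_mm) / reference_mm

# marker distances from the adult scans (references measured with the seamstress meter)
print(relative_deviation(109.0, 100.0))  # male subject, lidar: 9.0 %
print(relative_deviation(104.0, 90.0))   # female subject, lidar: ~15.6 %, reported as 15.5 %
```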
5.2. young boy's scans the following information was obtained: a) 1 ph scan, with partial coverage of the subject's trunk, obtained in 18 s with 110244 triangles (referred to as 'ph' in the following); b) 3 lidar scans: one full-body scan (biased by the movement of the subject) with 8420 triangles, and two partial scans of the left (4310 triangles) and right side (4303 triangles), minimally affected by the child's movements. these scans required 4 s and 10 s for the left and the right side, and 20 s for the full trunk. in the following, these scans are referred to as lidar '1', '2' and '3' (left, right and full-body, respectively), according to their position in the scanning sequence (figure 5); c) 2 partial sl scans from the go!scan 50: these are much more accurate (47294 triangles) and required about 5 min for the back side and 4 min for the front side. ph failed to reconstruct the trunk because the legs were the only still part of the child's body (figure 4 b). figure 3. example of distances' distributions between the outputs obtained with the sl and lidar instruments: male torso. figure 4. a) young child torso and detail of the scan output given by b) ph, c) lidar and d) sl techniques.

table 1. comparison of distances' distributions between lidar and sl scans, for the adult case.
pairwise comparison between techniques | reference scan | max (mm) | mean +/- (mm) | dev. std. (mm) | distant points (%)
sl_lidar (female subject) | sl | 20 | 7.61/5.42 | 8.63 | 17
sl_ph (female subject) | sl | 20 | 12.52/11.06 | 13.14 | 57
sl_lidar (male subject) | sl | 20 | 6.00/7.00 | 9.00 | 38
sl_ph (male subject) | sl | 20 | 15.00/14.00 | 16.00 | 94

5.2.1. analysis of lidar results in figure 6, a detail of the lidar scans alignment is shown. the three scans were compared through three pairwise combinations, varying the threshold distance: the distribution of distances between two scans was obtained on a limited set of points, whose distance lay below the given threshold value. this threshold was varied in order to assess its influence on the final results (figure 7). according to figure 8 a, 10 mm or 20 mm threshold values have to be chosen in order to keep the standard deviation below 60 mm. however, a 10 mm threshold would produce too high a percentage of outliers, as shown in figure 8 b. therefore, a threshold value equal to 20 mm was chosen: it represents the trade-off between the standard deviation and the percentage of retained points. having chosen the 20 mm threshold as reference, the mean values and the standard deviations of the distances among the lidar scans were analysed. these values are reported in table 2; they show that the minimum deviation is associated with the lidar 2 (the fastest scan) versus lidar 3 comparison and with the lidar 3 (the only full-body scan available) versus lidar 1 comparison (bolded values in table 2). for this reason, lidar 3 was chosen as the reference for the following alignment procedure in the multimodal scans. 5.2.2. lidar versus structured light the scans from sl were considered as a reference, since the respective scanner is certified and this technique is known to be the most accurate [33]. an optimised geometric alignment was performed by the geomagic studio software, which is based on iterative closest point algorithms. figure 9 shows the displacement between the scans after alignment. the sections are evaluated by a level-curves measurement tool that returns the circumferences of the trunk.
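the alignment step itself can be illustrated with the open-source open3d library (geomagic studio is the tool actually used in this work); the sketch below performs a minimal point-to-point icp registration under assumed file names, with the full-body lidar scan as the fixed reference.

```python
import numpy as np
import open3d as o3d

# hypothetical exports: a partial sl scan to be aligned onto the full-body lidar reference
source = o3d.io.read_point_cloud("sl_partial_scan.ply")
target = o3d.io.read_point_cloud("lidar_full_scan.ply")

# iterative closest point, point-to-point variant; 20 mm maximum correspondence distance
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=20.0,  # same order as the outlier threshold used above
    init=np.identity(4),               # a coarse pre-alignment (e.g. from the markers) goes here
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

source.transform(result.transformation)  # apply the estimated rigid transformation
print("fitness:", result.fitness, "inlier rmse (mm):", result.inlier_rmse)
```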
three combinations have been studied: the three scans from lidar were compared to both sl scans (figure 10). the maximum standard deviation resulted equal to 6.93 mm, with mean values equal to +6.32 mm and -6.36 mm (where positive and negative values represent deviations towards the outer or inner scanned volume, respectively), obtained from the sl-lidar 3 combination, corresponding to the overlap between the full lidar scan and the sl scans (reference). on the other hand, the minimum standard deviation is given by the overlap of the fastest lidar scan (lidar 2) and both partial sl scans (references); the standard deviation in this case is 6.65 mm, with mean values equal to +5.71 mm and -5.49 mm (table 3). as noted, the full-body lidar scan (lidar 3) has the closest values to both sl scans, and it is the best suited to replicate the actual back shape and to be used as reference for the alignment of the sl scans. figure 5. lidar acquisition: a) right side scan, referred to as 'lidar 1'; b) left side scan (without movement), referred to as 'lidar 2'; c) total body scan (with movement), referred to as 'lidar 3'. figure 6. example of lidar scans alignment: a) point selection for alignment; b) alignment; c) top view of the alignment. figure 7. example of a pairwise comparison between two lidar scans with a threshold value of 20 mm. green indicates areas with below-threshold distances, red areas lie above the reference surface, blue ones below it, and grey areas are out of range (outliers). figure 8. trend of a) the standard deviation and b) the percentage of outliers versus the threshold value, for the different pairwise comparisons. 5.2.3. multimodal space procedure the full-body scan obtained from lidar was used as reference for the positioning of both sl scans (chest and back), while ph provided an incomplete result, which could not be merged to obtain a full trunk scan. looking at the lidar results, the lines of movement can be outlined in the texture scans (figure 11). the two sl scans were overlapped on the 3d lidar full scan, and in the next step a topological optimisation of the trunk was performed with 3-matic (materialise, v. 12) [35], a software used for clinical applications (figure 12). finally, a comparison between the actual trunk measurements (circumferences at chest and waist level) and the corresponding measurements taken on the reconstructed geometry was performed, resulting in a difference of 5.9 mm (1.25 %) at the waist level and of 8.2 mm (1.64 %) at the chest level (figure 13). the plaster mould accuracy considered acceptable for medical applications is above 15 mm [9], [10]; the uncertainty of the reconstruction obtained with this multimodal non-contact measurement methodology is within this limit, the maximum uncertainty being 8.2 mm. figure 9. a) level curves (blue curves) for the distance evaluation between lidar and sl scans; b) example of the lidar 3 to sl scan alignment. figure 10. example of the distances' distribution between the sl and lidar 3 scans. figure 11. lidar movement texture detail. upper row: side view; lower row: back view. column a): lidar 1; column b): lidar 2; column c): lidar 3. figure 12. reconstruction of the torso in 3-matic (materialise), using the full lidar scan as a reference. figure 13. reconstruction of the torso and measurement of a) circumferences at chest and b) waist levels; c) intersection between a horizontal plane and the model; d) level curves to be measured.

table 2. comparison of distances' distributions between lidar scans for the child case.
pairwise comparison between three lidar scans | reference scan | max (mm) | mean +/- (mm) | dev. std. (mm) | distant points (%)
lidar 1 (young boy) | lidar 2 | 20 | 7.60/9.54 | 10.00 | 57
lidar 2 (young boy) | lidar 3 | 20 | 8.27/7.52 | 9.16 | 44
lidar 3 (young boy) | lidar 1 | 20 | 6.05/5.97 | 7.90 | 13

table 3. comparisons between structured light and lidar scans for the child.
pairwise comparison between techniques | reference scan | max (mm) | mean +/- (mm) | dev. std. (mm) | distant points (%)
sl_lidar 1 | sl | 20 | 6.73/5.83 | 6.68 | 55
sl_lidar 2 | sl | 20 | 5.71/5.49 | 6.65 | 75
sl_lidar 3 | sl | 20 | 6.32/6.93 | 6.93 | 62

6. discussion all the instruments (photogrammetry, structured light scanner and lidar) proved able to capture the trunk geometry of a still patient. when the results coming from all three instruments were compared to those coming from the traditional technique based on plaster moulding, they proved to be more accurate, with the advantage of producing a digital, editable model; the structured light scanner produced the most accurate results. when a non-collaborative patient is considered, new specifications must be taken into account, such as the time required for scanning the whole geometry and the robustness of the reconstruction algorithms. as a result, lidar proved to be the only technique able to provide a full scan, thanks to its lowest acquisition time. however, its accuracy was quite low and lidar could not be used alone; nevertheless, it could be used as reference for the registration of the structured light scans, so removing the major source of noise in sl, that is, the non-collaborative patient's movement. from this, it can be pointed out that a multimodal methodology was needed in order to overcome the limited accuracy of lidar, recovering information from the partial scans obtained with sl. the whole methodology has been set up and tested with encouraging results: the final outcome has an acceptable accuracy (8.2 mm), in a case where the only alternative would be taking a limited number of measurements on the non-collaborative patient's body. compared to plaster moulding, the accuracy is greatly improved (8.2 mm against 15 mm), and the bias given by the compressibility of the dermal tissue [36], [37] is totally absent. once the scans were cleaned, simplified and merged, the standard triangulation language (stl) model was exported and 3d printed, to evaluate the viability of this workflow for producing a customised brace. finally, the brace was manufactured with the traditional method on the 3d printed volume, without any contact with the subject (figure 14), after having been virtually tested through mock-up techniques [38]. figure 14. a) 3d printed trunk; b) plaster mould built on the printed model; c) plaster realisation; d) final model. 7. conclusions in this work, a multimodal scanning approach was proposed. the uncertainty given by movement was analysed and compensated. a full procedure for the reconstruction of the 3d external shape was developed by integrating different 3d measurement techniques. the shape of the torso of a child was finally measured, 3d printed and used for the creation of a patient-specific brace. future developments will focus on combining fast and low-cost techniques and algorithms with low-cost measurement systems for orthopaedic applications, in order to improve the measurement technique without the need for high-performance tools. acknowledgement this research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. references [1] d. f. redaelli, v. abbate, f. a. storm, a. ronca, a. sorrentino, c. de capitani, e. biffi, l. ambrosio, g. colombo, p.
fraschini, 3d printing orthopedic scoliosis braces: a test comparing fdm with thermoforming, int. j. adv. manuf. technol. 111(5-6) (2020), pp. 1707-1720. doi: 10.1007/s00170-020-06181-1 [2] m. calì, g. pascoletti, a. aldieri, m. terzini, g. catapano, e. m. zanetti, feature-based modelling of laryngoscope blades for customized applications, advances on mechanics, design engineering and manufacturing 3 (2021), pp. 206-211. doi: 10.1007/978-3-030-70566-4_33 [3] m. calì, g. pascoletti, m. gaeta, g. milazzo, r. ambu, a new generation of bio-composite thermoplastic filaments for a more sustainable design of parts manufactured by fdm, appl. sci. 10(17) (2020), pp. 1-13. doi: 10.3390/app10175852 [4] s. grazioso, m. selvaggio, g. di gironimo, design and development of a novel body scanning system for healthcare applications, int. j. interact. des. manuf. 12(2) (2018), pp. 611-620. doi: 10.1007/s12008-017-0425-9 [5] f. remondino, 3-d reconstruction of static human body shape from image sequence, comput. vis. image underst. 93(1) (2004), pp. 65-85. doi: 10.1016/j.cviu.2003.08.006 [6] j. tong, j. zhou, l. liu, z. pan, h. yan, scanning 3d full human bodies using kinects, ieee trans. vis. comput. graph. 18(4) (2012), pp. 643-650. doi: 10.1109/tvcg.2012.56 [7] n. tokkari, r. m. verdaasdonk, n. liberton, j. wolff, m. den heijer, a. van der veen, j. h. klaessens, comparison and use of 3d scanners to improve the quantification of medical images (surface structures and volumes) during follow up of clinical (surgical) procedures, adv. biomed. clin. diagnostic surg. guid. syst. xv 10054 (2017), p. 100540z. doi: 10.1117/12.2253241 [8] p. andrés-cano, j. a. calvo-haro, f. fillat-gomà, i. andrés-cano, r. perez-mañanes, role of the orthopaedic surgeon in 3d printing: current applications and legal issues for a personalized medicine, rev. esp. cir. ortop. traumatol. 65(2) (2021), pp. 138-151. doi: 10.1016/j.recot.2020.06.014 [9] m. farhan, j. z. wang, p. bray, j. burns, t. l. cheng, comparison of 3d scanning versus traditional methods of capturing foot and ankle morphology for the fabrication of orthoses: a systematic review, j. foot ankle res. 14(1) (2021), pp. 1-11. doi: 10.1186/s13047-020-00442-8 [10] w. clifton, m. pichelmann, a. vlasak, a. damon, k. refaey, e. nottmeier, investigation and feasibility of combined 3d printed thermoplastic filament and polymeric foam to simulate the cortiocancellous interface of human vertebrae, sci. rep. 10(1) (2020), pp. 1-9. doi: 10.1038/s41598-020-59993-2 [11] j. c. rodríguez-quiñonez, o. yu. sergiyenko, l. c. basaca-preciado, v. v. tyrsa, a. g. gurko, m. a. podrygalo, m. rivas lopez, d. hernandez-balbuena, optical monitoring of scoliosis by 3d medical laser scanner, opt. lasers eng. 54 (2014), pp. 175-186. doi: 10.1016/j.optlaseng.2013.07.026 [12] d. g. chaudhary, r. d. gore, b. w. gawali, inspection of 3d modeling techniques for digitization, int.
j. comput. sci. inf. secur. (ijcsis) 16(2) (2018), pp. 8-20. [13] p. dondi, l. lombardi, m. malagodi, m. licchelli, 3d modelling and measurements of historical violins, acta imeko 6(3) (2017), pp. 29-34. doi: 10.21014/acta_imeko.v6i3.455 [14] c. boehnen, p. flynn, accuracy of 3d scanning technologies in a face scanning scenario, proc. int. conf. 3-d digit. imaging model. 3dim, 2005, pp. 310-317. doi: 10.1109/3dim.2005.13 [15] p. treleaven, j. wells, 3d body scanning and healthcare applications, computer 40(7) (2007), pp. 28-34. doi: 10.1109/mc.2007.225 [16] a. ballester, e. parrilla, a. piérola, j. uriel, c. perez, p. piqueras, b. nácher, j. vivas, s. alemany, data-driven three-dimensional reconstruction of human bodies using a mobile phone app, int. j. digit. hum. 1(4) (2016), p. 361. doi: 10.1504/ijdh.2016.10005376 [17] m. pesce, l. m. galantucci, g. percoco, f. lavecchia, a low-cost multi camera 3d scanning system for quality measurement of non-static subjects, procedia cirp 28 (2015), pp. 88-93. doi: 10.1016/j.procir.2015.04.015 [18] j. conkle, k. keirsey, a. hughes, j. breiman, u. ramakrishnan, p. s. suchdev, r. martorell, a collaborative, mixed-methods evaluation of a low-cost, handheld 3d imaging system for child anthropometry, matern. child nutr. 15(2) (2019), pp. e12686. doi: 10.1111/mcn.12686 [19] i. molnár, l. morovič, design and manufacture of orthopedic corset using 3d digitization and additive manufacturing, iop conf. ser. mater. sci. eng. 448(1) (2018), pp. 1-7. doi: 10.1088/1757-899x/448/1/012058 [20] f. remondino, a. roditakis, 3d reconstruction of human skeleton from single images or monocular video sequences, lect. notes comput. sci. (including subser. lect. notes artif. intell. lect. notes bioinformatics) 2781 (2003), pp. 100-107. doi: 10.1007/978-3-540-45243-0_14 [21] j. a. beraldin, basic theory on surface measurement uncertainty of 3d imaging systems, three-dimensional imaging metrol. 7239 (2009), p. 723902. doi: 10.1117/12.804700 [22] v. rudat, p. schraube, d. oetzel, d. zierhut, m. flentje, m. wannenmacher, combined error of patient positioning variability and prostate motion uncertainty in 3d conformal radiotherapy of localized prostate cancer, int. j. radiat. oncol. biol. phys. 35(5) (1996), pp. 1027-1034. doi: 10.1016/0360-3016(96)00204-0 [23] j. a. torres-martínez, m. seddaiu, p. rodríguez-gonzálvez, d. hernández-lópez, d. gonzález-aguilera, a multi-data source and multi-sensor approach for the 3d reconstruction and web visualization of a complex archaeological site: the case study of 'tolmo de minateda', remote sens. 8(7) (2016), pp. 1-25. doi: 10.3390/rs8070550 [24] l. barazzetti, l. binda, m. scaioni, p. taranto, photogrammetric survey of complex geometries with low-cost software: application to the 'g1' temple in myson, vietnam, j. cult. herit. 12(3) (2011), pp. 253-262. doi: 10.1016/j.culher.2010.12.004 [25] m. vogt, a. rips, c. emmelmann, comparison of ipad pro®'s lidar and truedepth capabilities with an industrial 3d scanning solution, technologies 9(2) (2021) 25, pp. 1-13. doi: 10.3390/technologies9020025 [26] i. xhimitiku, g. rossi, l. baldoni, r. marsili, m. coricelli, critical analysis of instruments and measurement techniques of the shape of trees: terrestrial laser scanner and structured light scanner, in 2019 ieee international workshop on metrology for agriculture and forestry (metroagrifor 2019) proceedings, oct. 2019, pp. 339-343. doi: 10.1109/metroagrifor.2019.8909215 [27] m. lo brutto, g.
dardanelli, vision metrology and structure from motion for archaeological heritage 3d reconstruction: a case study of various roman mosaics, acta imeko 6(3) (2017), pp. 35-44. doi: 10.21014/acta_imeko.v6i3.458 [28] c. buzi, i. micarelli, a. profico, j. conti, r. grassetti, w. cristiano, f. di vincenzo, m. a. tafuri, g. manzi, measuring the shape: performance evaluation of a photogrammetry improvement applied to the neanderthal skull saccopastore 1, acta imeko 7(3) (2018), pp. 79-85. doi: 10.21014/acta_imeko.v7i3.597 [29] s. logozzo, a. kilpelä, a. mäkynen, e. m. zanetti, g. franceschini, recent advances in dental optics part ii: experimental tests for a new intraoral scanner, opt. lasers eng. 54 (2014), pp. 187-196. doi: 10.1016/j.optlaseng.2013.07.024 [30] d. marchisotti, p. marzaroli, r. sala, m. sculati, h. giberti, m. tarabini, automatic measurement of hand dimensions using consumer 3d cameras, acta imeko 9(2) (2020), pp. 75-82. doi: 10.21014/acta_imeko.v9i2.706 [31] m. di, m. di, piattaforma software 3d completamente integrata, no. mi, 2020 [in italian]. [32] l. ma, t. xu, j. lin, validation of a three-dimensional facial scanning system based on structured light techniques, comput. methods programs biomed. 94(3) (2009), pp. 290-298. doi: 10.1016/j.cmpb.2009.01.010 [33] a. cuartero, study of uncertainty and repeatability in structured-light 3d scanners, no. 2. [34] 3d systems, presentation of the geomagic wrap 3d scanning software, 2021. online [accessed 26 april 2022] https://de.3dsystems.com/software/geomagic-wrap [35] materialise, 3-matic, version 14.0 - reference guide, april 2019. online [accessed 26 april 2022] https://help.materialise.com/131470-3-matic/3-matic-140-user-manual [36] m. terzini, c. bignardi, c. castagnoli, i. cambieri, e. m. zanetti, a. l. audenino, ex vivo dermis mechanical behavior in relation to decellularization treatment length, open biomed. eng. j. 10 (2016), pp. 34-42. doi: 10.2174/1874120701610010034 [37] m. terzini, c. bignardi, c. castagnoli, i. cambieri, e. m. zanetti, a. l. audenino, dermis mechanical behaviour after different cell removal treatments, med. eng. phys. 38(9) (2016), pp. 862-869. doi: 10.1016/j.medengphy.2016.02.012 [38] e. m. zanetti, c. bignardi, mock-up in hip arthroplasty preoperative planning, acta bioeng. biomech. 15(3) (2013), pp. 123-128.
doi: 10.5277/abb130315 comparison of machine learning techniques for soc and soh evaluation from impedance data of an aged lithium ion battery acta imeko issn: 2221-870x june 2021, volume 10, number 2, 80-87 davide aloisio1, giuseppe campobello2, salvatore gianluca leonardi1, francesco sergi1, giovanni brunaccini1, marco ferraro1, vincenzo antonucci1, antonino segreto2, nicola donato2 1 institute of advanced energy technologies "nicola giordano", national research council of italy, salita s. lucia sopra contesse 5, 98126 messina, italy 2 university of messina, department of engineering, c.da di dio, vill. s. agata, 98166 messina, italy abstract: state of charge estimation and ageing evolution of lithium ion (li-ion) batteries are key points for their massive application in the market. however, the battery behaviour is very complex to understand, because many parameters act in determining the ageing evolution. therefore, traditional analytical models employed for this purpose are often affected by inaccuracy. in this context, machine learning techniques can provide a viable alternative to traditional models and a useful tool to characterise the battery behaviour. in this work, different machine learning techniques were applied to model the impedance evolution over time of an aged cobalt-based li-ion battery, cycled under a stationary frequency regulation profile for grid applications. the different ml techniques were compared in terms of accuracy in determining the state of charge and the state of health over the battery ageing phenomena. experimental results showed that ml based on the random forest algorithm can be profitably used for this purpose. section: research paper keywords: machine learning; electrochemical impedance spectroscopy (eis); lithium-ion battery; state of charge; state of health citation: davide aloisio, giuseppe campobello, salvatore gianluca leonardi, francesco sergi, giovanni brunaccini, marco ferraro, vincenzo antonucci, antonino segreto, nicola donato, comparison of machine learning techniques for soc and soh evaluation from impedance data of an aged lithium ion battery, acta imeko, vol. 10, no. 2, article 12, june 2021, identifier: imeko-acta-10 (2021)-02-12 section editor: ciro spataro, university of palermo, italy received january 18, 2021; in final form april 29, 2021; published june 2021 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. funding: this work was funded by the italian ministry of economic development under the programme "ricerca di sistema", project electrochemical storage. corresponding author: davide aloisio, e-mail: aloisio@itae.cnr.it 1. introduction as is well known, machine learning (ml) is a subfield of computing, an artificial intelligence (ai) technique that provides machines with the ability to learn from field data without explicit programming [1].
in particular, ml can be really useful in applications that try to extract some information or unknown properties ('features') from a dataset (usually called the 'training set') coming from data warehouses or data lakes. the information extracted from this kind of data analysis can be used to develop prediction models for system behaviour (subject to certain operating conditions and under some constraints). the characterisation of battery behaviour is quite complex to describe through analytical models, mainly because many parameters act in determining the ageing evolution (e.g. charge and discharge current rates, operating temperature, depth of discharge (dod) reached, state of charge (soc) during rest periods, and so on). the combination of the aforementioned parameters makes these systems hard to model via analytical equations. this is particularly evident for li-ion batteries, for which it is more difficult to describe the electrochemical processes with analytical equations, due to the nonlinearities present in their behaviour. analytical models require, in addition to input data on the actual working conditions (current, temperature, etc.), the knowledge of many parameters (geometry, density and porosity of materials, etc.); these data are not always available or easily measurable, and can vary over time (e.g. due to ageing). therefore, analytical models can be affected by inaccuracy. in this context, ml techniques represent a viable alternative and a useful tool for modelling battery behaviour. ml algorithms learn directly from experimental data, reducing the complexity of modelling, usually due to the high number of parameters and empirical adjustments needed. in addition, according to the recent literature, the application of ml techniques to the prediction of the ageing of li-ion batteries shows errors in the range between 0.5% and 5.5% [2]-[4]. this range of accuracy is considered a good compromise among algorithm complexity, effort spent on model development, and reliability of the results. many physical and electrical parameters are characteristic of the chemical reactions inside li-ion batteries; therefore, these relations could be used as tools for battery state modelling [5], [6]. typically, these features are derived from charging and discharging curves, since typical battery management systems (bms) are able to collect current-voltage data.
hence, these are the most commonly used parameters for real-time battery monitoring [7], [8]. various approaches based on the use of different parameters have been proposed in the literature to train machine learning models. among the battery parameters, single-point terminal voltage, current, temperature, charge/discharge profiles [2], [9], [10] or their geometrical characteristics [11] have been employed for this purpose. however, much more information about the status of the battery can be extracted from the impedance spectra recorded by means of electrochemical impedance spectroscopy (eis) [12]. indeed, the impedance spectrum of a lithium cell contains rich information on all material properties, interfacial phenomena and electrochemical reactions. from a practical point of view, many of these can be extrapolated from the nyquist diagram, in which the negative imaginary part of the impedance is plotted against the real one for each investigated frequency (of solicitation). in the case of li-ion batteries, the nyquist diagram consists of four distinct regions, typically belonging to the frequency range between 10 mhz and 10 khz [13]. in the low frequency region, an almost linear trend in the nyquist plot is representative of the solid diffusion of lithium ions through the electrode material. in the medium-high frequency range, one or more semicircles usually represent the impedance of either charge transfer phenomena or passivation layers on the electrode surfaces (solid electrolyte interphase, sei). the intersection of the impedance spectrum with the real axis (pure ohmic impedance) represents the cell internal resistance. finally, the high frequency region is representative of inductive phenomena. since each of these phenomena is strictly related to the temperature, the soc and the state of health (soh) of the cell, the analysis of impedance data can be used to monitor the status of the battery [14]. however, due to the large number of data involved in a single eis spectrum and the amount of information it can contain, the use of conventional data analysis methods may be difficult. also, because of the difficulties in measuring the impedance while the cells are active, eis is not widely used [15]. to overcome this drawback, increasing attention is being paid to the implementation of ml approaches, either to aid the fitting of the parameters of equivalent circuits able to describe the battery impedance [16], or to directly model the entire impedance spectrum [17]. in this paper, ml is addressed to identify possible methodologies to estimate the soc and soh of a li-ion battery from eis data, mainly aiming at developing a feasible model easy to integrate in a battery management system (bms). the implementation in bmss of techniques able to extend the batteries' useful life, estimating the possible replacement time (estimation of the remaining useful life, or rul), is considered a key research activity in the field [18]. in section 2, some state-of-the-art ml techniques applied to soh and rul estimation are reviewed. section 3 describes the experimental procedures employed to age the li-ion cell, the main parameters extrapolated to create the dataset for the algorithm, and the methods for their collection. section 4 describes the methodology used to carry out the first selection of the ml algorithm and the validation of the model. section 5 presents the main results related to the use of different classifiers to model both the soc and the capacity loss of the li-ion cell.
finally, in section 6, the main observations are summarised. 2. ml algorithms for state of health (soh) and remaining useful life (rul) evaluation: a brief review thanks to the remarkable computational capabilities of today's systems, learning algorithms applied to large quantities of data have often become the preferred approach in the search for and identification of complex system behaviour, and therefore represent a valid tool for the soh estimation of batteries. in these techniques, a large amount of data, constituted by the main battery parameters, is collected continuously up to the end of the battery life. the analysis of the battery-life dataset, performed by learning algorithms, allows extracting non-linear relationships among the various parameters. the knowledge derived from this kind of information can allow a careful management of the battery, helping to extend its useful life and giving reliable predictions on possible replacement times, with an obvious positive impact on costs and investments. ml techniques such as fuzzy logic (fl), support vector machines (svm) and artificial neural networks (ann) have been extensively applied to the estimation of the health of batteries, and a brief review can be found in [3]. in most cases, soh is estimated by determining the battery capacity and internal resistance, parameters strictly related to soh, from the analysis of the behaviour of input variables (current, temperature, voltage, etc.). an application of fuzzy logic with a potential use in portable devices is reported in [19], where the electrochemical impedance spectroscopy (eis) technique was used for the creation of the dataset. however, improper hypotheses in the fuzzy rules [3] and a reduced set of observations can lead to substantial errors. the support vector machine is a regression algorithm which converts nonlinearities in a lower-dimensional space into a linear model developed in a higher-dimensional one [20]. examples of the application of this technique to soh are reported in [21]-[25]. in particular, in [25] an online method for soh estimation was developed, determining the support vectors by means of pieces of charging curves. a soh error of less than 2% for 80% of all the cases was achieved for commercial nmc li-ion batteries. the accuracy of the results is strongly dependent on the noise and on the operational conditions; hence, other data manipulation techniques (particle filters, bayesian techniques) have to be used in conjunction with svm to increase the robustness of the estimation [26], thus increasing the complexity of the implementation. the relevance vector machine (rvm) is suggested as a possible improvement of this approach in [20]. artificial neural networks (anns) are probably the most used approach, inspired by the biological functioning of the human brain, for modelling nonlinear systems. soh estimation using an independently recurrent neural network (indrnn) was realised in [3]. here, soh was predicted accurately, with a root mean square error (rmse) of 1.33% and a mean absolute error (mae) of 1.14%. the main limitation is the need for a detailed analysis of the experimental dataset. different chemistries can require a precise identification and understanding of the input parameters. in [27], an improved neural network method based on the combination of lstm (long short-term memory) and pso (particle swarm optimisation) was developed.
the methodology proposed there uses some additional techniques in each part of the learning process, such as pso for the optimisation of the weights, dynamic incremental learning for soh model updating, and the ceemdan method to denoise raw data, with the aim of increasing the accuracy of the model [27]. another hybrid approach can be found in [28], where the false nearest neighbour method was used in conjunction with a mixed lstm and convolutional neural network (cnn) as a solution to unreliable sliding window sizes, a problem commonly present in data-driven rul evaluation approaches. the complexity and topology of the anns used in these works is actually classified as deep learning, an evolution of the machine learning concept coined for neural networks which exploits the concept of the multilayer perceptron (mlp). a comparison of deep learning with different other common techniques, showing its potential and the advantages of data-driven approaches, was presented in [4]. the outcomes showed the goodness of deep neural networks (dnn), which are suitable when high accuracy is needed. however, this technique too is not easy to implement, due to the higher computational complexity and resources needed [4]. many other techniques and approaches can be found in the literature. although out of the scope of the present work, the goal of a possible implementation in a bms suggests the choice of low-complexity approaches, to reduce the computational resources needed and thus lead to lower energy consumption [29]. a possible alternative is given by random forest algorithms. they generally use reduced computational resources, and thus can be preferable in comparison to the other techniques analysed, based on svm and nn. in general, the response of linear regressors or random forests is faster than that of complex models and is easily interpreted. however, it has to be underlined that the accuracy of random forest models is related to the number and size of the trees, and therefore to the availability of memory [1], [30]. 3. dataset collection and creation the present work was aimed at the development of a method to identify the degradation level induced by the use of li-ion batteries in a primary frequency regulation (fr) service. more precisely, the activity was focused on the identification of the main parameters indicating the state of battery degradation. for this purpose, cylindrical-type 18650 li-ion cells (table 1) were cycle-aged according to a test profile extrapolated from the standard iec 61427-2 [31]. the standard profile requires that the storage system is able to provide symmetrical charging and discharging phases at constant powers of 500 kw and 1000 kw, respectively, within a voltage range of 400-600 v. therefore, the profile was adapted to the characteristics of the single cell. moreover, in order to enhance the degradation of the cell (thus limiting the overall duration required for data collection), the fr ageing tests were accelerated by operating at an ambient temperature of 45 °c. in fact, the degradation processes of li-ion batteries are accelerated by temperature increases [32]. the ageing tests were performed with a dual-channel bitrode ftv1 battery cycler. in addition, the cell was tested under a temperature-controlled atmosphere in an angelantoni discovery dm 340 bt climatic chamber. the fr ageing profile, with the actual power steps imposed on the cell, is shown in figure 1. the full ageing protocol consisted of a first charge of the cell up to 100% soc and then the execution of the fr profile.
once the cell reached the lower voltage cut-off threshold (discharged), a recharge up to 100% soc was performed and then the cycle was restarted. the ageing level was defined in terms of the residual capacity retained by the cell. this information was obtained from periodic check-ups carried out on the cell, approximately every 10 days of operation. the parametric check-ups of the cell comprised the extraction of the residual capacity and impedance evaluations by means of the eis technique. both analyses were carried out with a high-reliability autolab 302n potentiostat/galvanostat (whose potential accuracy and current accuracy are both ±0.2% of the full-scale value). it is worth noting that, due to the instrument's calibration and performance, the measurements were considered reliable and having no impact of uncertainty on the model; the robustness of the model will be investigated in a future work. the capacity tests, constituted by a galvanostatic discharge at nominal c-rate and room temperature, allowed extrapolating the characteristic parameters indicative of the soh of the cell. the recorded discharge curves at the begin of life (bol) and at different soh levels are reported in figure 2a. in particular, the residual capacity (cd) and the residual energy (ed) were collected and used as output variables of the database. the value of cd was obtained by integrating the actual current (id) between the begin of discharge (t0) and the end of discharge (tf), within the upper and lower voltage cut-off limits:

$C_d = \int_{t_0}^{t_f} I_d(t)\,\mathrm{d}t$ . (1)

the quantity ed was obtained by integrating the actual power (pd) between the begin of discharge (t0) and the end of discharge (tf), within the upper and lower voltage cut-off limits:

$E_d = \int_{t_0}^{t_f} P_d(t)\,\mathrm{d}t$ , (2)

where $P_d(t) = V(t) \cdot I_d(t)$, with $V(t)$ and $I_d(t)$ representing the instantaneous values of voltage and current, respectively.

table 1. main characteristics of the tested li-ion cell.
description | value
nominal voltage | 3.7 v
nominal capacity | 1.1 ah
max charge current | 4 a
max discharge current | 10 a
maximum voltage | 4.2 v
minimum voltage | 2.5 v
discharge temperature | -30 ÷ 60 °c
charge temperature | 0 ÷ 60 °c
chemistry | licoo2-linicomno2/graphite

figure 1. power profile used to age the battery according to a frequency regulation profile extrapolated from the international standard iec 61427-2.

the soh levels were defined as the capacity loss of the cell identified during each parametric check-up. as input variables of the algorithm, the complex impedance values were collected at different frequencies and soh levels of the cell. such information came from the eis analyses carried out in correspondence of the parametric check-ups. to create the database, the impedance of the cell was recorded at different socs (100%, 75%, 50%, 25%, 0%) at bol and every ten days of operation under the fr cycle, until a loss of capacity (closs) of about 8% was reached. the loss of capacity (closs, as an effect of ageing) was used as the indicative parameter of the cell soh. nyquist plots of the impedance recorded at the different socs at bol and at five different soh levels are reported in figure 2b. the impedance was recorded in the frequency range between 10 mhz and 10 khz, with ten points per decade, which leads to 61 values for each soc. finally, the dataset used for the case study consists of 1830 impedance measurements.
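on sampled data, equations (1) and (2) reduce to a numerical integration of the logged current and power; the sketch below is a minimal illustration with numpy, where the column layout of the hypothetical discharge log (time in s, voltage in v, current in a) is an assumption.

```python
import numpy as np

# hypothetical discharge log: columns = time (s), voltage (V), discharge current (A)
log = np.loadtxt("discharge_log.csv", delimiter=",")
t, v, i_d = log[:, 0], log[:, 1], log[:, 2]

# eq. (1): residual capacity, trapezoidal integration of the current over the discharge
c_d_ah = np.trapz(i_d, t) / 3600.0       # A*s -> Ah

# eq. (2): residual energy, integration of the instantaneous power p_d(t) = v(t) * i_d(t)
e_d_wh = np.trapz(v * i_d, t) / 3600.0   # W*s -> Wh

print(f"residual capacity C_d = {c_d_ah:.3f} Ah")
print(f"residual energy E_d = {e_d_wh:.3f} Wh")
```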
table 2 contains some statistical information on the dataset used.

table 2. statistical data of the used dataset.
 | f (hz) | re{z} (ω) | im{z} (ω) | soc (%) | c_loss (%)
range min-max | 0.01-10000 | 0.041-0.207 | -0.072-0.067 | 0-100 | 0-8.27
mean | 797 | 0.0762 | 0.0019 | 50.00 | 4.54
std | 1951.6 | 0.0212 | 0.0175 | 35.36 | 2.69

figure 2. a) discharge curves for the extrapolation of the output variables; b) nyquist plot of the impedance used as input variables of the database.

4. methodology the above-mentioned dataset has been used to test various classification and regression techniques through the scikit-learn tool [33], an open-source library for machine learning developed in python. among them, k-nearest neighbours (knn), linear discriminant analysis (lda), gaussian naive bayes (gnb), support vector classification (svc), decision tree (dt), linear regressor, lasso, ridge and random forest were considered for performance comparison. in order to avoid an influence of the particular partitioning on the results, a cross-validation technique was also used: in this phase, the original dataset was partitioned into 5 subsets (folds) used for testing and training. in the case of the regressors, the values of the mae and of the determination coefficient (r2) were calculated for each round. similarly, the accuracy (acc) was measured for the classifiers. the models were then compared on the basis of the average values of the aforementioned metrics obtained in the 5 validation rounds. the standard deviation (std) of the same metrics was also determined, which provides information on the robustness of the model (in fact, lower values of std generally correspond to more robust models). 5. results 5.1. data analysis first, the correlation coefficients were analysed to investigate the relationships among the impedance measurements and the corresponding soc and closs values. the correlation coefficients of soc and closs, specifically achieved for the fr cycle, are summarised in table 3 for both the rectangular and the polar form of the impedance. the analysis of the correlation coefficients shows that the highest correlation value is between the closs measurements and the real part of the impedance (re(z)), for which a correlation coefficient of 0.471 was obtained. it is also possible to observe that the correlation coefficient obtained between closs and the impedance modulus (abs(z)) is just slightly smaller (0.456). the similarity between these two correlation coefficients suggests that, for the purpose of closs modelling, it is possible to use either the modulus or the real part of the impedance. in the case of soc, the highest correlation value is obtained with the impedance phase values (arg(z)), for which a correlation coefficient of 0.337 was obtained. the rectangular coordinate values, on the other hand, are uncorrelated to soc. therefore, it can be assumed that, for soc modelling, the phase values of the impedance are the most useful, at least for this set of data. as a consequence, it is to be expected that machine learning algorithms will perform better with the use of impedance values represented in polar coordinates rather than rectangular ones. the above analysis was repeated considering only the eis impedance data corresponding to frequency values lower than 350 hz. henceforward, in this work, we will refer to this data set as filtered data.
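a minimal pandas sketch of this analysis is given below; the csv file name and the column names are assumptions, and the polar features are derived from the rectangular ones before computing the (pearson) correlation matrix and applying the 350 hz filter.

```python
import numpy as np
import pandas as pd

# hypothetical dataset: one row per eis point, columns f (Hz), re_z, im_z (ohm), soc (%), c_loss (%)
df = pd.read_csv("eis_dataset.csv")

# polar representation derived from the rectangular one
df["abs_z"] = np.hypot(df["re_z"], df["im_z"])
df["arg_z"] = np.arctan2(df["im_z"], df["re_z"])

# pearson correlation of each impedance feature with soc and c_loss (cf. tables 3 and 4)
features = ["re_z", "im_z", "abs_z", "arg_z"]
print(df[features + ["soc", "c_loss"]].corr().loc[features, ["soc", "c_loss"]])

# 'filtered data': keep only the points below 350 Hz and repeat the analysis
filtered = df[df["f"] < 350.0]
print(filtered[features + ["soc", "c_loss"]].corr().loc[features, ["soc", "c_loss"]])
```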
indeed, as also reported in [34], where a similar lithium cell was used, the most important features induced by ageing on the physico-chemical processes were observed only for the negative imaginary part of the impedance, which, in our case, matches the selected filtered frequency range. also, it is well known that eis at moderate and high frequencies is strongly dependent on the experimental setup and cables, thus leading to measurement errors and scattered data [35]. accordingly, the comparison of the correlation coefficients reported in table 3 and table 4 (for original and filtered data, respectively) reveals a marked improvement in the soc correlation when only low frequency measures are considered. in particular, in the case of the filtered data, a correlation coefficient of 0.706 was obtained between the soc and the impedance phase, which is considerably higher in comparison to the value of 0.337 obtained for the original dataset. as a consequence, machine learning algorithms are expected to provide higher performance when trained with the filtered dataset. 5.2. comparison of machine learning algorithms the performances of several machine learning classifiers and regressors were evaluated and compared. among them, k-nearest neighbours (knn), linear discriminant analysis (lda), gaussian naive bayes (gnb), support vector classification (svc) and decision tree (dt) were considered as representative classification algorithms. the aforementioned algorithms were compared in terms of the accuracy achieved on both the original dataset and the filtered dataset, using a cross-validation method on 5 folds. table 5 shows the average values and the standard deviation of the accuracy obtained for the above classifiers in the case of soc prediction, obtained by training the algorithms with the original dataset. it is possible to observe how the use of the polar representation leads to an improvement in accuracy for all classifiers, with an increase between 40% and 270%, depending on the classifier. for both representations (rectangular/polar), the decision tree (dt) exhibits the best performance, obtaining an average accuracy equal to 0.915 with the polar representation. this analysis was repeated considering the filtered dataset, i.e. removing the high frequency points from the original dataset. as can be seen from table 6, filtering improves the accuracy of almost all classifiers (for the sake of clarity, in table 6 only the results for polar coordinates are reported). for a better comparison, figure 3 shows a box plot with the median value (orange line), the quartiles, and the range of the accuracy values (minimum and maximum) for the algorithms trained with the filtered (figure 3b) and the original (figure 3a) datasets. from the comparison between figure 3a and figure 3b, it can be observed that for the lda classifier the use of the filtered values leads to a marked improvement in performance. moreover, in the case of dt, in addition to an increase of the average accuracy value, there is also a significant reduction of the data dispersion, which justifies the reduction of the standard deviation in table 6 obtained in the case of the filtered data. similar considerations can be carried out using the f1 metric for comparison purposes [36]. in fact, as shown in figure 4, where the macro-averaged f1 score obtained for the same classifiers (for both the filtered and the unfiltered dataset) is reported, the dt classifier achieves better results even when considering the macro-averaged f1 metric instead of the accuracy.
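the comparison described above can be reproduced with a few lines of scikit-learn; the following sketch uses the library's default hyperparameters, as in the paper, while the feature matrix x and the soc labels y are random placeholders standing in for the real dataset.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# placeholders for the real impedance features (e.g. abs(z), arg(z) per point) and soc labels
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = rng.choice([0, 25, 50, 75, 100], size=200)

classifiers = {
    "knn": KNeighborsClassifier(),
    "lda": LinearDiscriminantAnalysis(),
    "gnb": GaussianNB(),
    "svc": SVC(),
    "dt": DecisionTreeClassifier(),
}

for name, clf in classifiers.items():
    # 5-fold cross-validation, as in the methodology section
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    f1 = cross_val_score(clf, X, y, cv=5, scoring="f1_macro")
    print(f"{name}: acc = {acc.mean():.3f} (std {acc.std():.3f}), f1_macro = {f1.mean():.3f}")
```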
finally, figure 5 shows the confusion matrix obtained for the dt classifier in the case of the filtered dataset, for an 80/20 split, i.e. with 80% of the data used for training and 20% for testing. a total of 272 soc values were tested, and only 16 of them were wrongly classified, thus obtaining an accuracy on the specific test set of 94.31%. therefore, the achieved classification can be effectively used to evaluate the state of charge of the battery starting from the impedance and, in particular, to predict when the state of charge is below 50%. it is worth mentioning that the choice of using classifiers instead of regressors is related to the specific application. in a few cases, in fact, classifiers able to simply detect discrete values of soc can be useful for detecting when specific critical threshold levels have been reached, e.g. the 20% capacity reduction commonly used for automotive applications. it is worth noting that, in the previous analysis, the default scikit-learn values were used for all classifiers, i.e. all classifiers were applied without any optimisation. this fact partially justifies why most classifiers exhibit poor performances. in addition, it is well known that lda, like other linear classifiers and regressors such as ridge and lasso, adapts well to linear models, while the dependence of soc on the impedance curves is not linear. nevertheless, it is generally better to test and compare them, due to their lower computational complexity. therefore, a similar analysis was carried out for linear, lasso, elastic, ridge, gradient boosting, ada boost and random forest regressors, with the main difference that, in the case of the regressors, performance was measured in terms of mae and determination coefficient (r2). the regressor with the best performance in terms of both r2 and mae was the random forest. the distributions of the values predicted by the random forest regressor, when trained with the filtered data, are reported in figure 6a and figure 6b for the modelling of soc and closs, respectively. figure 6 also reports the average values and standard deviations of r2 and mae. in particular, in the case of soc, an average value of r2 of 0.98 was achieved (see figure 6a). in comparison to [37], which considered unfiltered data, a significant reduction of the mae was obtained, from 2.65 to 1.87.

table 3. correlation matrix for the impedance measures, evaluated on the original/unfiltered data.
 | re{z} (ω) | im{z} (ω) | abs{z} (ω) | arg{z} (ω)
closs | +0.471 | +0.044 | +0.456 | -0.002
soc | -0.166 | -0.103 | -0.170 | -0.337

table 4. correlation matrix for the impedance measures, evaluated only on the low-frequency data.
 | re{z} (ω) | im{z} (ω) | abs{z} (ω) | arg{z} (ω)
closs | +0.477 | +0.119 | +0.458 | -0.001
soc | -0.213 | -0.1239 | -0.215 | -0.706

table 5. accuracy of some classifiers used for modelling the soc starting from the unfiltered data, in rectangular and polar representation.
classifier | representation of z | mean | std
lda | rectangular | 0.234 | 0.027
lda | polar | 0.333 | 0.045
gnb | rectangular | 0.196 | 0.020
gnb | polar | 0.370 | 0.053
svc | rectangular | 0.192 | 0.014
svc | polar | 0.380 | 0.068
knn | rectangular | 0.222 | 0.072
knn | polar | 0.383 | 0.088
dt | rectangular | 0.329 | 0.020
dt | polar | 0.915 | 0.047

table 6. accuracy of some classifiers used for modelling the soc in polar representation, for filtered and unfiltered data.
classifier | mean (original data) | std (original data) | mean (filtered data) | std (filtered data)
lda | 0.333 | 0.045 | 0.602 | 0.192
gnb | 0.370 | 0.053 | 0.397 | 0.027
svc | 0.380 | 0.068 | 0.374 | 0.066
knn | 0.383 | 0.088 | 0.392 | 0.084
dt | 0.915 | 0.047 | 0.938 | 0.024
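a sketch of this regression study, including the parameter sweep discussed in the next subsection, is shown below; the feature matrix x and the target y (soc or closs) are random placeholders, and the (n_estimators, max_depth) pairs mirror some of those reported in tables 7 and 8.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_validate

# placeholders for the real impedance features and the regression target
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = rng.uniform(0, 100, size=200)   # e.g. soc in %, or c_loss in %

for n_estimators, max_depth in [(10, 5), (10, 30), (100, 30), (100, 50)]:
    rf = RandomForestRegressor(n_estimators=n_estimators, max_depth=max_depth, random_state=0)
    # 5-fold cross-validation scoring both r2 and mae, as in the paper's methodology
    scores = cross_validate(rf, X, y, cv=5, scoring=("r2", "neg_mean_absolute_error"))
    r2 = scores["test_r2"].mean()
    mae = -scores["test_neg_mean_absolute_error"].mean()
    print(f"n_estimators={n_estimators}, max_depth={max_depth}: r2={r2:.2f}, mae={mae:.2f}")
```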
in addition, as discussed in the following subsection, the use of filtered data leads to models with lower complexity.

5.3. analysis of the random forest parameters

different trade-offs between the performance and the complexity of machine learning algorithms can be obtained by properly tuning the related parameters. in the specific case of the random forest, the most important parameters impacting both performance and overall complexity are the number of trees (n_estimators) and the maximum depth of the trees (max_depth). generally, increasing one or both of these parameters improves performance at the cost of greater complexity and estimation time. table 7 shows the r2 and mae metrics obtained with the random forest for some combinations of n_estimators and max_depth on the original, i.e. unfiltered, dataset. it is possible to observe that the r2 and mae metrics are mostly affected by the max_depth parameter. in particular, a maximum r2 of 0.97 is achieved by setting max_depth = 30; higher values increase the computational complexity without significant performance advantages. as regards the other parameter investigated (n_estimators), there is no substantial difference in the r2 and mae values obtained by fixing max_depth = 30 and using n_estimators values higher than 100. this analysis leads to the conclusion that, in the case of unfiltered data, the optimal values of the random forest parameters that maximise performance are max_depth = 30 and n_estimators = 100, which are the parameters used in [37]. the same analysis was conducted for the filtered data, and the related results are summarised in table 8. in this case, better results are achieved even with lower values of the parameters. for instance, the performance obtained using filtered data with max_depth = 10 and n_estimators = 10 is better than that obtained using unfiltered data with max_depth = 30 and n_estimators = 100. thus, by training the algorithm with filtered data, we obtained models with both better performance and lower complexity.

table 7. r2 and mae values obtained by the random forest technique varying the parameters n_estimators and max_depth (on unfiltered data); standard deviations in parentheses.

n_estimators   max_depth   r2            mae
10             5           0.79 (0.01)   10.47 (0.29)
10             10          0.93 (0.01)   4.49 (0.38)
10             30          0.96 (0.01)   3.32 (0.50)
10             50          0.96 (0.01)   3.34 (0.26)
25             30          0.97 (0.01)   3.11 (0.24)
50             50          0.97 (0.01)   3.02 (0.34)
100            5           0.80 (0.01)   10.43 (0.24)
100            10          0.93 (0.01)   4.43 (0.29)
100            30          0.97 (0.01)   3.02 (0.36)
100            50          0.97 (0.01)   3.03 (0.34)
1000           30          0.97 (0.01)   2.99 (0.30)

table 8. r2 and mae values obtained by the random forest technique varying the parameters n_estimators and max_depth (on filtered data); standard deviations in parentheses.

n_estimators   max_depth   r2            mae
10             5           0.95 (0.01)   4.62 (0.30)
10             10          0.98 (0.00)   1.94 (0.22)
10             30          0.98 (0.00)   1.94 (0.22)
10             50          0.98 (0.00)   1.84 (0.29)
25             30          0.98 (0.00)   1.89 (0.15)
50             50          0.98 (0.00)   1.91 (0.18)
100            5           0.95 (0.01)   4.50 (0.20)
100            10          0.98 (0.00)   1.89 (0.13)
100            30          0.98 (0.00)   1.80 (0.21)
100            50          0.98 (0.00)   1.87 (0.22)
1000           30          0.98 (0.00)   1.83 (0.18)

figure 3. accuracy of the machine learning algorithms on the soc estimation: a) unfiltered data, b) filtered data.
figure 4. f1 metric results for a) unfiltered and b) filtered data.
figure 5. confusion matrix of the dt classifier.

6. conclusions

starting from impedance measurements, different machine learning techniques were analysed as predictors of the state of charge and the loss of capacity of a lithium battery subjected to a frequency-regulation profile for grid applications. according to the results, the following conclusions can be drawn:
• for the training of machine learning techniques, the use of impedance values expressed in polar form is to be preferred;
• decision trees and random forests provided superior performance compared to the other machine learning techniques analysed;
• using low-frequency data for training the random forest regressor improved performance in terms of r2 and mae for both state of charge and capacity loss prediction, and largely reduced the overall complexity.

acknowledgement

special thanks to the italian ministry of economic development for funding this activity.

references

[1] g. hackeling, mastering machine learning with scikit-learn, packt publishing, 2014.
[2] l. ren, l. zhao, s. hong, s. zhao, h. wang, l. zhang, remaining useful life prediction for lithium-ion battery: a deep learning approach, ieee access 6 (2018), pp. 50587-50598. doi: 10.1109/access.2018.2858856
[3] p. venugopal, state-of-health estimation of li-ion batteries in electric vehicle using indrnn under variable load condition, energies 12(22) (2019), art. 4338. doi: 10.3390/en12224338
[4] p. khumprom, n. yodo, a data-driven predictive prognostic model for lithium-ion batteries based on a deep learning algorithm, energies 12(4) (2019), art. 660. doi: 10.3390/en12040660
[5] j. meng, g. luo, m. ricco, m. swierczynski, d. i. stroe, r. teodorescu, overview of lithium-ion battery modeling methods for state-of-charge estimation in electrical vehicles, applied sciences 8(5) (2018), art. 659. doi: 10.3390/app8050659
[6] c. lin, a. tang, w. wang, a review of soh estimation methods in lithium-ion batteries for electric vehicle applications, energy procedia 75 (2015), pp. 1920-1925. doi: 10.1016/j.egypro.2015.07.199
[7] c. weng, y. cui, j. sun, h. peng, on-board state of health monitoring of lithium-ion batteries using incremental capacity analysis with support vector regression, journal of power sources 235 (2013), pp. 36-44. doi: 10.1016/j.jpowsour.2013.02.012
[8] r. r. richardson, c. r. birkl, m. a. osborne, d. a. howey, gaussian process regression for in situ capacity estimation of lithium-ion batteries, ieee transactions on industrial informatics 15(1) (2019), pp. 127-138. doi: 10.1109/tii.2018.2794997
[9] x. xu, n. chen, a state-space-based prognostics model for lithium-ion battery degradation, reliability engineering and system safety 159 (2017), pp. 47-57. doi: 10.1016/j.ress.2016.10.026
[10] m. a. patil, p. tagade, k. s. hariharan, s. m. kolake, t. song, t. yeo, s. doo, a novel multistage support vector machine based approach for li ion battery remaining useful life estimation, applied energy 159 (2015), pp. 285-297. doi: 10.1016/j.apenergy.2015.08.119
[11] c. lu, l. tao, h. fan, li-ion battery capacity estimation: a geometrical approach, journal of power sources 261 (2014), pp. 141-147. doi: 10.1016/j.jpowsour.2014.03.058
[12] d. i. stroe, m. swierczynski, a. i. stan, v. knap, r. teodorescu, s. j. andreasen, diagnosis of lithium-ion batteries state-of-health based on electrochemical impedance spectroscopy technique, 2014 ieee energy conversion congress and exposition (ecce), pittsburgh, pa, 14-18 september 2014, pp. 4576-4582. doi: 10.1109/ecce.2014.6954027
[13] d. andre, m. meiler, k. steiner, c. wimmer, t. soczka-guth, d. u. sauer, characterization of high-power lithium-ion batteries by electrochemical impedance spectroscopy. i. experimental investigation, journal of power sources 196(12) (2011), pp. 5334-5341. doi: 10.1016/j.jpowsour.2010.12.102
[14] f. huet, a review of impedance measurements for determination of the state-of-charge or state-of-health of secondary batteries, journal of power sources 70(1) (1998), pp. 59-69. doi: 10.1016/s0378-7753(97)02665-7
[15] i. masmitjà rusinyol, j. gonzález, g. masmitjà, s. gomáriz, j. del-río-fernández, power system of the guanay ii auv, acta imeko 4(1) (2015), pp. 35-43. doi: 10.21014/acta_imeko.v4i1.161
[16] s. buteau, j. r. dahn, analysis of thousands of electrochemical impedance spectra of lithium-ion cells through a machine learning inverse model, journal of the electrochemical society 166(8) (2019), art. a1611. doi: 10.1149/2.1051908jes
[17] y. zhang, q. tang, y. zhang, j. wang, u. stimming, a. a. lee, identifying degradation patterns of lithium ion batteries from impedance spectroscopy using machine learning, nature communications 11 (2020), art. 1706. doi: 10.1038/s41467-020-15235-7
[18] f. liu, x. liu, w. su, h. lin, h. chen, m. he, an online state of health estimation method based on battery management system monitoring data, international journal of energy research 44(8) (2020), pp. 6338-6349. doi: 10.1002/er.5351
[19] p. singh, r. vinjamuri, x. wang, d. reisner, design and implementation of a fuzzy logic-based state-of-charge meter for li-ion batteries used in portable defibrillators, journal of power sources 162(2) (2006), pp. 829-836. doi: 10.1016/j.jpowsour.2005.04.039
[20] s. b. sarmah, p. kalita, a. garg, x.-d. niu, x.-w. zhang, x. peng, d. bhattacharjee, a review of state of health estimation of energy storage systems: challenges and possible solutions for futuristic applications of li-ion battery packs in electric vehicles, journal of electrochemical energy conversion and storage 16(4) (2019), art. 040801. doi: 10.1115/1.4042987
[21] a. nuhic, t. terzimehic, t. soczka-guth, m. buchholz, k. dietmayer, health diagnosis and remaining useful life prognostics of lithium-ion batteries using data-driven methods, journal of power sources 239 (2013), pp. 680-688. doi: 10.1016/j.jpowsour.2012.11.146
[22] z. chen, m. sun, x. shu, r. xiao, j. shen, online state of health estimation for lithium-ion batteries based on support vector machine, applied sciences 8(6) (2018), art. 925. doi: 10.3390/app8060925
[23] v. klass, m. behm, g. lindbergh, a support vector machine-based state-of-health estimation method for lithium-ion batteries under electric vehicle operation, journal of power sources 270 (2015), pp. 262-272. doi: 10.1016/j.jpowsour.2014.07.116
[24] j. meng, l. cai, g. luo, d.-i. stroe, r. teodorescu, lithium-ion battery state of health estimation with short-term current pulse test and support vector machine, microelectronics reliability 88-90 (2018), pp. 1216-1220. doi: 10.1016/j.microrel.2018.07.025
[25] x. feng, c. weng, x. he, x. han, l. lu, d. ren, m. ouyang, online state-of-health estimation for li-ion battery using partial charging segment based on support vector machine, ieee transactions on vehicular technology 68(9) (2019), pp. 8583-8592. doi: 10.1109/tvt.2019.2927120
[26] m. berecibar, i. gandiaga, i. villarreal, n. omar, j. van mierlo, p. van den bossche, critical review of state of health estimation methods of li-ion batteries for real applications, renewable and sustainable energy reviews 56 (2016), pp. 572-587. doi: 10.1016/j.rser.2015.11.042
[27] j. qu, f. liu, y. ma, j. fan, a neural-network-based method for rul prediction and soh monitoring of lithium-ion battery, ieee access 7 (2019), pp. 87178-87191. doi: 10.1109/access.2019.2925468
[28] g. ma, y. zhang, c. cheng, b. zhou, p. hu, y. yuan, remaining useful life prediction of lithium-ion batteries based on false nearest neighbors and a hybrid neural network, applied energy 253 (2019), art. 113626. doi: 10.1016/j.apenergy.2019.113626
[29] r. la rosa, a. y. s. pandiyan, c. trigona, b. andò, s. baglio, an integrated circuit to null standby using energy provided by mems sensors, acta imeko 9(4) (2020), pp. 144-150. doi: 10.21014/acta_imeko.v9i4.741
[30] g. campobello, d. dell'aquila, m. russo, a. segreto, neurogenetic programming for multigenre classification of music content, applied soft computing 94 (2020), art. 106488. doi: 10.1016/j.asoc.2020.106488
[31] international standard iec 61427-2, secondary cells and batteries for renewable energy storage - general requirements and methods of test - part 2: on-grid applications, 2015.
[32] s. ma, m. jiang, p. tao, c. song, j. wu, j. wang, t. deng, w. shang, temperature effect and thermal impact in lithium-ion batteries: a review, progress in natural science: materials international 28(6) (2018), pp. 653-666. doi: 10.1016/j.pnsc.2018.11.002
[33] f. pedregosa, g. varoquaux, a. gramfort, v. michel, b. thirion, o. grisel, m. blondel, p. prettenhofer, r. weiss, v. dubourg, scikit-learn: machine learning in python, the journal of machine learning research 12 (2011), pp. 2825-2830. online [accessed 09 june 2021] http://jmlr.org/papers/v12/pedregosa11a.html
[34] v. j. ovejas, impedance characterization of an lco-nmc/graphite cell: ohmic conduction, sei transport and charge-transfer phenomenon, batteries 4(3) (2018), art. 43. doi: 10.3390/batteries4030043
[35] t. f. landinger, g. schwarzberger, a. jossen, a novel method for high frequency battery impedance measurements, ieee international symposium on electromagnetic compatibility, signal & power integrity (emc+sipi), new orleans, la, usa, 22-26 july 2019, pp. 106-110. doi: 10.1109/isemc.2019.8825315
[36] m. l. zhang, z. h. zhou, a review on multi-label learning algorithms, ieee transactions on knowledge and data engineering 26(8) (2014), pp. 1819-1837. doi: 10.1109/tkde.2013.39
[37] d. aloisio, g. campobello, s. g. leonardi, a. segreto, n. donato, a machine learning approach for evaluation of battery state of health, 24th imeko tc4 international symposium and 22nd international workshop on adc and dac modelling and testing, palermo, italy, 14-16 september 2020, pp. 129-134. online [accessed 09 june 2021] https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-25.pdf
development of a contactless operation system for radiographic consoles using an eye tracker for severe acute respiratory syndrome coronavirus 2 infection control: a feasibility study

acta imeko, issn: 2221-870x, june 2022, volume 11, number 2, pp. 1-8

mitsuru sato1, mizuki narita2, naoya takahashi1, yohan kondo1, masashi okamoto1, toshihiro ogura2
1 department of radiological technology, school of health sciences, niigata university, niigata, japan
2 department of radiology, gunma prefectural college of health sciences, gunma, japan

section: research paper
keywords: infection control; sars-cov-2; eye-tracking manipulation; contactless device; radiographic console
citation: mitsuru sato, mizuki narita, naoya takahashi, yohan kondo, masashi okamoto, toshihiro ogura, development of a contactless operation system for radiographic consoles using an eye tracker for severe acute respiratory syndrome coronavirus 2 infection control: a feasibility study, acta imeko, vol. 11, no. 2, article 38, june 2022, identifier: imeko-acta-11 (2022)-02-38
section editor: francesco lamonaca, university of calabria, italy
received march 29, 2022; in final form june 8, 2022; published june 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: mitsuru sato, e-mail: mitu-sato@clg.niigata-u.ac.jp

abstract
sterilization of medical equipment in isolation wards is essential to prevent the transmission of severe acute respiratory syndrome coronavirus 2 (sars-cov-2) infection. in particular, the radiographic console of portable x-ray machines requires frequent disinfection because it is regularly moved; this requires considerable infection control effort as the number of patients with coronavirus disease 2019 (covid-19) increases. this study aimed to evaluate the application of a system facilitating noncontact operation of radiographic consoles for patients with covid-19 in order to reduce the need for frequent disinfection. we developed a noncontact operation system for radiographic consoles that uses a common eye tracker. we compared calibration errors between the with and without face shield conditions. moreover, console operation by 41 participants was investigated. the calibration error of the eye tracker did not differ significantly between the two face shield conditions. all observers (n = 41) completed the console operation. pearson's correlation coefficient analysis showed a strong correlation (r = 0.92, p < 0.001) between the average operation time and the average number of misoperations. our eye-tracker-based system can be applied even if the operator uses a face shield. thus, its application is important in preventing the transmission of infection.

1. introduction

an outbreak of severe acute respiratory syndrome coronavirus (sars-cov-2) infection occurred in wuhan, china, in december 2019 [1]-[5]. since then, the virus has been transmitted worldwide, and consequently, the world health organization declared it a pandemic on march 11, 2020. this infection is primarily transmitted via droplets and contact routes [6]-[8].
currently, infection control measures, including social distancing, the use of masks or face shields, and frequent handwashing and disinfection, are important [9], [10], particularly in the medical field. hospitals with isolation wards for patients with coronavirus disease 2019 (covid-19) are taking various measures to prevent infection transmission. the environment where patients with covid-19 are treated is zoned into clean (cold zone), intermediate (warm zone), and unclean (hot zone) areas [11]-[13] to prevent hospital-wide infection. however, it is difficult to completely control infection despite such isolation measures [14]. chest radiography is important for the management of covid-19, and portable x-ray machines are used in isolation wards. in most hospitals, after imaging of patients in the isolation ward is completed, all areas that may have come in contact with the patient or been exposed to droplets, including the device, the flat panel detector, and the imaging console attached to the flat panel detector system, should be disinfected [13]. however, medical staff can be mentally and physically exhausted during a pandemic, making it difficult to perform appropriate disinfection. moreover, disinfection may not have been strictly practiced even prior to the pandemic [15]-[17]. when several patients with covid-19 undergo imaging, the console should be disinfected to prevent secondary infections, such as those due to methicillin-resistant staphylococcus aureus and vancomycin-resistant enterococci, even if imaging was conducted in the same ward. nevertheless, it is difficult to disinfect a plastic bag covering a complex-structured medical device while wearing personal protective equipment. therefore, the radiographic console, which is touched frequently, can be a source of infection [18]. reducing the frequency of touching the imaging equipment, which can be achieved using contactless input devices, is important in addressing these issues. currently, several contactless devices are available.
however, their use for protection against sars-cov-2 infection has not been reported. previous studies have assessed the use of contactless input devices to operate medical devices without touching them in the clinical setting [19]-[24]. such devices are effective in maintaining sterile rooms [19]. therefore, this study aimed to assess the use of an eye tracker, which does not require body movement, as a contactless input device. image display systems have been successfully manipulated via eye tracking during interventional radiology, thereby allowing images to be paged and magnified using the observer's eye movements alone [25]. in this study, we applied such technology to develop a radiographic console operation system for infection control during portable x-ray imaging of patients with covid-19. face shields are used as personal protective equipment in the management of patients with covid-19, and they create an obstruction between the eye tracker and the eyes. therefore, we evaluated our operating system by assessing the calibration errors with and without a face shield, the average time required for console operation, and the average number of misoperations.

2. material and methods

2.1. development of a contactless operation system using an eye tracker

in this study, we used a tobii pceye mini (tobii, stockholm, sweden) as the eye tracker for our contactless operation system (figure 1). this small and lightweight device has the following measurements: width, 169.5 mm; height, 17.8 mm; thickness, 12.4 mm; and weight, 59 g. it can be easily installed on the radiographic console used in portable x-ray systems. the usable distance from the eye detector ranges from 45 to 85 cm, the sampling rate of the eye tracker is 60 hz, and the recommended screen size is up to 19 in. we used a computer with the following specifications: windows 10 home 64 bit, intel core i7-6700hq central processing unit, and nvidia geforce gtx 960m; its screen size was 17 in (monitor size: width, 38.4 cm; height, 21.6 cm). the eye tracker could be easily used with a universal serial bus connection; however, it requires prior calibration. the system provided with the tobii pceye mini was used for calibration. this system includes a function that informs the observer whether their position is appropriate for proper calibration. in addition, instructions are displayed on the screen regarding the locations to be gazed at during calibration so that the process is facilitated. specifically, we entered the gaze detection range and gazed at seven points on the screen, i.e., centre, upper right, upper centre, upper left, lower right, lower centre, and lower left. the operator's gaze point was calibrated with the pupil centre corneal reflection method, as implemented in the eye tracker, by measuring the corneal reflection point of the irradiated infrared light and the position of the pupil while gazing at each point [26]. we developed a contactless operation system for radiographic consoles that used an eye tracker to prevent the transmission of sars-cov-2 infection. we used microsoft visual studio (microsoft, redmond, wa, usa) as the integrated development environment, c# (microsoft), and the nuget package tobii.interaction v.0.7.3 (tobii core sdk, tobii). the principle of the operation was based on the characteristics of eye movement. the two basic types of eye movement are the saccade, a quick movement of the gaze, and fixation, the maintenance of the gaze on the same spot [27], [28].
the vector was calculated as the amount of gaze point movement on the screen over 0.02 s to detect the fixation state. in this study, we considered the gaze to be in the saccade state if the amount of movement exceeded 200 pixels (at 0.02 cm per pixel, approximately 4 cm); a movement of 200 pixels in 0.02 s can be considered a saccade [25]. if the amount of movement was below this threshold, the gaze was considered to be in the fixation state. the only commands required to operate the radiographic console were moving the cursor and clicking. therefore, we developed a method to move the cursor in accordance with the movement of the gaze point and to click when the fixation state is reached. figure 2 shows the use of the console operation system for imaging. in total, 41 students of radiological technology participated, and they were briefed in advance about the operation system. table 1 shows the characteristics of the observers.

2.2. considerations of the system

because eye tracking may not be possible when a face shield is worn, the developed system was tested with and without a face shield. we used a face shield (logi meister, osaka, japan) made of polyethylene terephthalate with the following measurements: height, 22 cm; width, 33 cm; and thickness, 0.25 mm. the distance from the eyeball to the face shield is approximately 4 cm (figure 3). we performed (1) a comparison of calibration errors between the two face shield conditions and (2) an analysis of console operation. in experiment 1, calibration of the eye detector was performed with and without a face shield. subsequently, the error between the actual gaze point coordinates and the detected gaze point coordinates on the screen was measured at the following nine points on the monitor: top left, top, top right, left, centre, right, bottom left, bottom, and bottom right. a gazing point was displayed on the screen to measure the error (figure 4). the points at the four corners of the screen were displayed at a distance of 150 pixels each in the x and y coordinates from the screen edge (resolution: 1920 pixels × 1080 pixels) to assess the calibration error as close to the periphery as possible. the other five points were placed at the centre of the x and y coordinates. the coordinates of the mouse cursor while the operator was gazing at each point were measured five times, and the measurements were averaged to reduce the measurement error due to nystagmus, a cyclic and involuntary oscillatory movement of the eyeball; in this way, the effect of physiological nystagmus was reduced. the distance was calculated from the obtained coordinates. the calibration error was defined as the distance between the actual gaze point, i.e. the coordinates of the location gazed at by the observer, and the detected point coordinates, for each of the nine points.

figure 1. overview of the tobii pceye mini (tobii, stockholm, sweden).
figure 2. operation of the radiographic console used in isolation wards. real-time gaze analysis allows operations required in conducting examinations, such as clicking buttons based on gaze duration.
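for illustration only, the gaze-to-click logic described above can be sketched as follows; this is not the authors' c# implementation, and the dwell time required before a click (dwell_steps) is a hypothetical parameter that the paper does not specify.

```python
# a minimal sketch of the fixation/saccade logic: the gaze displacement over
# each 0.02 s window is compared against a 200-pixel threshold (about 4 cm on
# this screen); below the threshold the gaze is treated as a fixation.
import math

SAMPLE_PERIOD_S = 0.02      # displacement window used in the paper
SACCADE_THRESHOLD_PX = 200  # 0.02 cm per pixel -> approximately 4 cm

def classify_gaze(prev_xy, curr_xy):
    """return 'saccade' or 'fixation' for one 0.02 s gaze step."""
    dx = curr_xy[0] - prev_xy[0]
    dy = curr_xy[1] - prev_xy[1]
    displacement = math.hypot(dx, dy)
    return "saccade" if displacement > SACCADE_THRESHOLD_PX else "fixation"

def run(gaze_samples, dwell_steps=25):
    """move the cursor with the gaze and click after a sustained fixation.

    gaze_samples: iterable of (x, y) screen coordinates at 0.02 s intervals.
    dwell_steps: hypothetical number of consecutive fixation steps required
    before a click is issued (not stated in the paper).
    """
    prev, fixed = None, 0
    for xy in gaze_samples:
        if prev is not None:
            if classify_gaze(prev, xy) == "fixation":
                fixed += 1
                if fixed == dwell_steps:
                    print(f"click at {xy}")   # placeholder for a real click event
                    fixed = 0
            else:
                fixed = 0                     # a saccade resets the dwell counter
        print(f"cursor -> {xy}")              # placeholder for cursor movement
        prev = xy
```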
table 1. characteristics of the participants.

observer   eye condition
1          bare
2          with glasses
3          with soft contact lenses
4          with soft contact lenses
5          with soft contact lenses
6          with soft contact lenses
7          with soft contact lenses
8          with soft contact lenses
9          with soft contact lenses
10         with soft contact lenses
11         with soft contact lenses
12         with glasses
13         with soft contact lenses
14         with soft contact lenses
15         with soft contact lenses
16         bare
17         with soft contact lenses
18         bare
19         with soft contact lenses
20         with glasses
21         with soft contact lenses
22         with soft contact lenses
23         with soft contact lenses
24         with soft contact lenses
25         bare
26         bare
27         bare
28         with soft contact lenses
29         with soft contact lenses
30         with soft contact lenses
31         with soft contact lenses
32         with soft contact lenses
33         with soft contact lenses
34         with soft contact lenses
35         with soft contact lenses
36         with soft contact lenses
37         with soft contact lenses
38         with glasses
39         with soft contact lenses
40         with soft contact lenses
41         with soft contact lenses

figure 3. image of the face shield made of polyethylene terephthalate (logi meister, osaka, japan) used in this study.

in experiment 2, the developed system was used to evaluate the operability of a console simulating the radiographic console attached to the flat panel detector system while the operator wore a face shield. this console simulated one used in the clinical setting (console advance dr-id300cl; fujifilm, tokyo, japan) and was divided into (1) the patient selection screen (figure 5 a), (2) the radiographic item confirmation screen (figure 5 b), and (3) the radiographic screen (figure 5 c). the clicked locations are listed below:
• (1) patient selection screen: select a patient from the list (figure 5 a-1); select button (figure 5 a-2)
• (2) radiographic item confirmation screen: start examination button (figure 5 b-3)
• (3) radiographic screen: select the radiographic item (figure 5 c-4); re-imaging process (figure 5 c-5); add the same type of imaging (figure 5 c-6); click the end button (figure 5 c-7)
these buttons measured 26 cm × 1 cm, 3 cm × 1 cm, 4 cm × 2 cm, 10 cm × 2 cm, 2 cm × 2 cm, 2 cm × 2 cm, and 4 cm × 4 cm, respectively. the experimental procedure was based on the actual examination procedure, and the console was operated in the order of patient selection, confirmation of radiographic items, and operation of the radiographic screen (figure 6). the time between the start of the operation and the completion of clicking the end button was measured. the number of clicks on the screen was recorded, as was the number of clicks caused by accidental eye pauses. the procedure mentioned above was performed five times with each observer.
2.3. statistical analysis

the data obtained in experiment 1 were the calibration errors at each of the nine positions under the two face shield conditions. we used the paired t test to investigate whether significant differences existed in the average calibration error of all observers, and whether significant differences existed in the average calibration error at each point for all observers. a value of p < 0.05 was considered statistically significant. the average operation time and the average number of misoperations for the radiographic console were obtained in experiment 2. moreover, we investigated whether the calibration errors obtained from experiment 1 were correlated with the average operation time and the average number of misoperations. furthermore, the correlation between the average operation time and the average number of misoperations was evaluated via pearson's product-moment correlation coefficient analysis. the results range from −1.0 to 1.0, with −1.0 and 1.0 representing a perfect negative and positive correlation, respectively. an absolute value of < 0.2 was defined as almost no correlation; 0.2-0.4, weak correlation; 0.4-0.7, medium correlation; and ≥ 0.7, strong correlation. a p value of < 0.05 was considered to indicate a significant correlation.

figure 4. illustration of the nine measurement points evaluated in this study. the calibration error was measured by calculating the difference in the coordinates between the detection and actual gaze points.
figure 5 a. components of the radiographic console used in this study (patient selection). the numerals in boldface indicate the button click steps.
figure 5 b. components of the radiographic console used in this study (radiographic item confirmation). the numerals in boldface indicate the button click steps.
figure 5 c. components of the radiographic console used in this study (radiographic screens). the numerals in boldface indicate the button click steps.

3. results

3.1. experiment 1

the average ± standard deviation (sd) calibration errors at all points for all observers were 1.22 ± 0.94 cm with a face shield and 1.19 ± 0.79 cm without a face shield (figure 7). no significant difference between the two face shield conditions was observed. data on the average calibration error at each point for all observers are shown in figure 8. the nine measurement points corresponded to top left, top, top right, left, centre, right, bottom left, bottom, and bottom right. only measurement point 7, which represented the bottom left, had a significantly larger calibration error in the no face shield condition. data on the average calibration error over all points for each observer are shown in figure 9. there was no tendency for either the with or the without face shield condition to be consistently larger, although significant differences were observed for 16 of the 41 observers.

3.2. experiment 2

although students and not radiologists participated in this study, they were able to operate the console easily. there was no significant difference between the calibration error of the eye tracker with and without the face shield. the results of the experiment revealed that all observers (n = 41) were able to operate the console. the average operation time was 37.89 ± 24.22 s, and the average number of misoperations was 5.4 ± 4.1. pearson's product-moment correlation coefficient analysis found a very strong positive correlation (r = 0.92, p < 0.001) between the average time required to complete the operation and the average number of misoperations (figure 10 a). it found no correlation between the average operation time and the calibration error (r = 0.24, p = 0.13) (figure 10 b), nor between the average number of misoperations and the calibration error (r = 0.28, p = 0.08) (figure 10 c).

figure 6. overall scheme of the experimental procedure.
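as a concrete, non-authoritative illustration of the analysis described in section 2.3, the sketch below computes per-point calibration errors, a paired t test between the two face shield conditions, and pearson's correlation; all arrays are placeholder data standing in for the measured values.

```python
# a minimal sketch, assuming placeholder arrays rather than the measured data:
# calibration error as the euclidean distance between gazed and detected
# coordinates averaged over five repeats, then the two statistical tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# placeholder gaze data: 41 observers x 9 points x 5 repeats x (x, y) in cm
detected = {c: rng.random((41, 9, 5, 2)) * 30 for c in ("shield", "no_shield")}
actual = rng.random((9, 2)) * 30                 # true coordinates of the 9 points

errors = {}
for cond, det in detected.items():
    mean_xy = det.mean(axis=2)                   # average the 5 repeats (nystagmus)
    errors[cond] = np.linalg.norm(mean_xy - actual, axis=-1)  # 41 x 9 distances

# paired t test on the per-observer mean calibration error
t, p = stats.ttest_rel(errors["shield"].mean(axis=1),
                       errors["no_shield"].mean(axis=1))
print(f"paired t test: t = {t:.2f}, p = {p:.3f}")

# pearson correlation between operation time and number of misoperations
op_time = rng.random(41) * 60 + 20               # placeholder per-observer data
misops = rng.random(41) * 10
r, p = stats.pearsonr(op_time, misops)
print(f"pearson: r = {r:.2f}, p = {p:.3f}")
```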
4. discussion

no previous studies have utilised an eye tracker for infection control against sars-cov-2. previous studies have described methods to manipulate image display systems using motion sensors [19]-[24], but there are very few cases in which manipulation was achieved using an eye tracker; in this respect, the present study is state-of-the-art. furthermore, the approach is versatile and useful because it can be applied not only to sars-cov-2 but also to infection control against other viruses and pathogenic bacteria. the method proposed in this study can not only reduce the risk of contact infection but also save supplies and improve the time efficiency of disinfection. the mean calibration errors at all points for all observers did not significantly differ with and without the use of a face shield. forty-one observers participated in this study, a relatively large number. one of the primary characteristics of eye detectors is that the four corners of the screen are prone to errors during calibration. in this study, we followed the guidance of the calibration system attached to the eye tracker, kept the distance between the computer screen and the observer constant, and calibrated the system in the same geometric arrangement. however, because the pupil centre corneal reflection method detects the shape of the eyeball and the position of the reflection of infrared light on the eyeball, even a slight shift in position may have affected the calibration error. nevertheless, poor operability at the four corners of the screen is not an uncommon outcome. in such a case, operability can be improved by placing the buttons closer to the centre and making them appear in response to gazing time or eye movement. the larger the calibration error, the larger the gap between the gazed position and the coordinates of the mouse cursor. therefore, there was a possibility of accidental clicking on a position other than a button owing to calibration errors. moreover, the larger the calibration error, the more difficult it was to align the click position with the gaze point, which could increase the average operation time. pearson's product-moment correlation coefficient analysis showed a very strong correlation between the average operation time and the average number of misoperations. however, there was no correlation between the average calibration error and either the average operation time or the average number of misoperations. although it is natural for the operation time to increase with the number of misoperations, because the average calibration error does not correlate with the average number of misoperations, it is possible that a small calibration error, such as that in this study, does not have a significant impact on usability. nevertheless, when we obtained feedback on the system's operability from the observers after the experiment, several stated that the buttons were difficult to operate because of their small size, and that they were able to click on a button by moving their eyes by the amount of the shift when they clicked on a position different from the one they were looking at. therefore, it is possible to reduce the effect of calibration error through the operator's effort, although the operation time will tend to increase because of misoperations.

figure 7. comparison of calibration errors between the use and nonuse of a face shield at all measurement points for all observers.
figure 8. calibration errors according to measurement point.
figure 9. calibration errors of all measurement points according to observer. significant differences were detected between some observers.
despite the system's potential, its user interface needs to be improved so that it can be operated as easily as current touch pad or touch panel interfaces in clinical use. in particular, the average operation time (20-80 s) was longer than that of the same operation performed with a mouse (approximately 10 s). the eye tracker was affected by the calibration error, and the detection results showed coordinates slightly different from the actual gaze point. therefore, even a calibration error of a level that would not be problematic in studies such as gaze analysis can be problematic in cases such as this one, which require detailed button operation of the imaging console. we considered this to be a result of the small size of the buttons: if the button size is smaller than the calibration error, the button can be difficult to operate (figure 11). this study showed that it is possible to operate imaging consoles while wearing a face shield. although it is difficult to introduce this technology to the clinical setting immediately, it can be used in clinical practice in the near future because the usability of the radiographic console can be improved simply by increasing the size of the buttons. furthermore, the observers in this study were students; the system we developed is therefore useful in that it can be operated by observers who are not familiar with radiographic consoles. taken together, the proposed manipulation method using an eye tracker for infection control is not yet ready for clinical use; however, because it can be used even when a face shield is worn, its clinical application is feasible with improvements to the radiographic console. it is necessary to improve the eye tracker and the user interface (ui) in order to use the method in actual clinical situations, and the first priority should be improvement of the ui. as mentioned above, the influence of calibration error can be reduced by increasing the button size. although the observers determined their operating position according to the system's instructions during the experiment, operation was possible even if the observer's position moved slightly. however, there were cases in which the gazing position could not be detected correctly when the operating position differed significantly from the position at calibration. as mentioned above, the usable distance of the eye tracker ranges from 45 to 85 cm, which is approximately equal to the distance at which touch panel and/or mouse operation is performed; therefore, it is usable within the range of normal use. it was possible that the eye tracker would be unusable in the presence of natural light. however, the experiments in this study were conducted in a laboratory with natural light and under fluorescent lighting, in other words, in an environment similar to a hospital room. in addition, eye trackers have been used in hospital rooms in previous studies, although these did not involve directly operating computers [29], [30].

figure 10. a) - c) results of pearson's product-moment correlation coefficient analysis of the manipulation experiment data obtained for each observer.
figure 11. effect of button size on the difficulty of operating our contactless eye-tracker system. if the button size is larger than the calibration error, misoperation can be reduced when the button is pressed.
therefore, we believe that the system in this study can be used without problems in the isolation wards where it is intended to be used.

5. conclusions

this study demonstrated the feasibility of a contactless operation system for radiographic consoles using an eye tracker for sars-cov-2 infection control. the system developed in this study is useful even when the operator wears a face shield. however, because of its long average operation time, a radiographic console designed around an eye tracker should be developed for daily clinical use. our proposed method can thus be useful for controlling not only sars-cov-2 infection but also other infectious conditions in the future.

6. acknowledgement

the authors thank the students of gunma prefectural college of health sciences who participated as study observers for their assistance in the measurements, as well as haruka hirasawa and madoka hasegawa of gunma prefectural college of health sciences for their helpful assistance.

7. research ethics and patient consent

all procedures involving human participants were performed in accordance with the ethical standards of the relevant institutional and/or national research committee and the declaration of helsinki and its later amendments or comparable ethical standards. informed consent was obtained from all participants. this research, which involves the development of a medical device operation system, specifically an image display system, using a contactless device, was approved by the ethical review committee of gunma prefectural college of health sciences (approval no.: 2020-16). this work was not previously published in part or in its entirety.

8. declaration of conflict of interests

the authors declare that there is no conflict of interest.

9. funding

this research did not receive any specific grant from any funding agency in the public, commercial, or not-for-profit sector.

references

[1] c. huang, y. wang, x. li, (another 26 authors), clinical features of patients infected with 2019 novel coronavirus in wuhan, china, lancet 395 (2020), pp. 497-506. doi: 10.1016/s0140-6736(20)30183-5
[2] h. shi, x. han, n. jiang, y. cao, o. alwalid, j. gu, y. fan, c. zheng, radiological findings from 81 patients with covid-19 pneumonia in wuhan, china: a descriptive study, lancet infect dis. 20 (2020), pp. 425-434. doi: 10.1016/s1473-3099(20)30086-4
[3] x. yang, y. yu, j. xu, (another 13 authors), clinical course and outcomes of critically ill patients with sars-cov-2 pneumonia in wuhan, china: a single-centered, retrospective, observational study, lancet respir med. 8 (2020), pp. 475-481. doi: 10.1016/s2213-2600(20)30079-5
[4] n. chen, m. zhou, x. dong, (another 11 authors), epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in wuhan, china: a descriptive study, lancet 395 (2020), pp. 507-513. doi: 10.1016/s0140-6736(20)30211-7
[5] j. shigemura, r. j. ursano, j. c. morganstein, m. kurosawa, d. m. benedek, public responses to the novel 2019 coronavirus (2019-ncov) in japan: mental health consequences and target populations, psychiatry clin neurosci. 74 (2020), pp. 281-282. doi: 10.1111/pcn.12988
[6] j. a. otter, c. donskey, s. yezli, s. t. douthwaite, transmission of sars and mers coronaviruses and influenza virus in healthcare settings: the possible role of dry surface contamination, j hosp infect. 92 (2016), pp. 235. doi: 10.1016/j.jhin.2015.08.027
[7] a. wilder-smith, c. j. chiew, v. j. lee, can we contain the covid-19 outbreak with the same measures as for sars?, lancet infect dis. 20 (2020), pp. 102. doi: 10.1016/s1473-3099(20)30129-8
[8] p. pena, j. morais, a. q. gomes, et al., sampling methods and assays applied in sars-cov-2 exposure assessment, sci total environ. 775 (2021), pp. 145903. doi: 10.1016/j.scitotenv.2021.145903
[9] l. morawska, j. w. tang, w. bahnfleth, c. viegas, how can airborne transmission of covid-19 indoors be minimised?, environ int. 142 (2020), pp. 105832. doi: 10.1016/j.envint.2020.105832
[10] world health organization, epidemic-prone and pandemic-prone acute respiratory diseases. summary guidance: infection prevention & control in health-care facilities, geneva: world health organization, 2007. online [accessed 22 december 2021] https://apps.who.int/iris/handle/10665/69793
[11] f. ogawa, h. kato, k. sakai, k. nakamura, m. ogawa, m. uchiyama, k. nakajima, y. ohyama, t. abe, i. takeuchi, environmental maintenance with effective and useful zoning to protect patients and medical staff from covid-19 infection, acute med surg. 7 (2020), pp. 536. doi: 10.1002/ams2.536
[12] k. mimura, h. oka, m. sawano, a perspective on hospital-acquired (nosocomial) infection control of covid-19: usefulness of spatial separation between wards and airborne isolation unit, j breath res. 15 (2021), pp. 042001. doi: 10.1088/1752-7163/ac1721
[13] p. an, y. ye, m. chen, y. chen, w. fan, y. wang, management strategy of novel coronavirus (covid-19) pneumonia in the radiology department: a chinese experience, diagn interv radiol. 26 (2020), pp. 200. doi: 10.5152/dir.2020.20167
[14] world health organization, infection prevention and control of epidemic- and pandemic-prone acute respiratory infections in health care, geneva: world health organization, 2014. online [accessed 22 december 2021] http://apps.who.int/iris/bitstream/10665/112656/1/9789241507134_eng.pdf?ua=1
[15] d. pittet, improving compliance with hand hygiene in hospitals, infect control hosp epidemiol. 21 (2000), pp. 381-386. doi: 10.1086/501777
[16] e. girou, f. oppein, handwashing compliance in a french university hospital: new perspective with the introduction of hand-rubbing with a waterless alcohol-based solution, j hosp infect. 48 (2001), pp. 55-57. doi: 10.1016/s0195-6701(01)90015-5
[17] d. pittet, compliance with hand disinfection and its impact on hospital-acquired infections, j hosp infect. 48 (2001), pp. 40-46. doi: 10.1016/s0195-6701(01)90012-x
[18] r. pintaric, j. matela, s. pintaric, suitability of electrolyzed oxidizing water for the disinfection of hard surfaces and equipment in radiology, j environ health sci eng. 13 (2015), 6 p. doi: 10.1186/s40201-015-0160-8
[19] j. h. tan, c. chao, m. zawaideh, a. c. roberts, t. b. kinney, informatics in radiology: developing a touchless user interface for intraoperative image control during interventional radiology procedures, radiographics 33 (2013), pp. 61-70. doi: 10.1148/rg.332125101
[20] g. c. s. ruppert, l. o. reis, p. h. j. amorim, t. f. de moraes, j. v. lopes da silva, touchless gesture user interface for interactive image visualization in urological surgery, world j urol. 30 (2012), pp. 687-691. doi: 10.1007/s00345-012-0879-0
[21] m. g. jacob, j. p. wachs, r. a. packer, hand-gesture-based sterile interface for the operating room using contextual cues for the navigation of radiological images, j am med inform assoc 20 (2013), pp. 183-186. doi: 10.1136/amiajnl-2012-001212
[22] t. ogura, m. sato, y. ishida, n. hayashi, k. doi, development of a novel method for manipulation of angiographic images by use of a motion sensor in operating rooms, radiol. phys technol. 7 (2014), pp. 228-234. doi: 10.1007/s12194-014-0259-0
[23] m. sato, t. ogura, y. yasumoto, et al., development of an image operation system with a motion sensor in dental radiology, radiol phys technol. 8 (2015), pp. 243-247. doi: 10.1007/s12194-015-0313-6
[24] a. mewes, b. hensen, f. wacker, c. hansen, touchless interaction with software in interventional radiology and surgery: a systematic literature review, int j comput. assist radiol. surg. 12 (2017), pp. 291-305. doi: 10.1007/s11548-016-1480-6
[25] m. sato, m. takahashi, h. hoshino, t. terashita, n. hayashi, h. watanabe, t. ogura, development of an eye-tracking image manipulation system for angiography: a comparative study, acad radiol (2020), pp. 1-10. doi: 10.1016/j.acra.2020.09.027
[26] tobii, how do tobii eye trackers work? learn more with tobii pro, stockholm: tobii, 2015. online [accessed 22 december 2020] https://www.tobiipro.com/learn-and-support/learn/eye-tracking-essentials/how-do-tobii-eye-trackers-work/
[27] k. holmqvist, m. nyström, r. andersson, r. dewhurst, h. jarodzka, eye tracking. a comprehensive guide to methods and measures, uk: oxford university press, 2011, isbn: 9780199697083, pp. 1-560.
[28] a. van der gijp, c. j. ravesloot, h. jarodzka, m. f. van der schaaf, i. c. van der schaaf, j. p. j. van schaik, th. j. ten cate, how visual search relates to visual diagnostic performance: a narrative systematic review of eye-tracking research in radiology, adv health sci educ theory pract. 22 (2017), pp. 765-787. doi: 10.1007/s10459-016-9698-1
[29] r. bates, m. donegan, h. o. istance, j. p. hansen, k.-j. räihä, introducing cogain: communication by gaze interaction, univ access inf soc 6 (2007), pp. 159-166. doi: 10.1007/s10209-007-0077-9
[30] m. debeljak, j. ocepek, a. zupan, eye controlled human computer interaction for severely motor disabled children, computers helping people with special needs, icchp 2012, lecture notes in computer science 7383 (2012), pp. 153-156.
jelly-z: twisted and coiled polymer muscle actuated jellyfish robot for environmental monitoring

acta imeko, issn: 2221-870x, september 2022, volume 11, number 3, pp. 1-7

pawandeep singh matharu1, akash ashok ghadge1, yara almubarak2, yonas tadesse1,3,4,5
1 humanoids, biorobotics, and smart systems laboratory, mechanical engineering department, the university of texas at dallas, richardson, tx 78705, usa
2 sorobotics laboratory, mechanical engineering department, wayne state university, detroit, mi 48202, usa
3 biomedical engineering department, the university of texas at dallas, richardson, tx 78705, usa
4 electrical and computer engineering department, the university of texas at dallas, richardson, tx 78705, usa
5 alan g. macdiarmid nanotech institute, the university of texas at dallas, richardson, tx 78705, usa

section: research paper
keywords: artificial muscles; underwater robots; biomimetics; computer vision; jellyfish; smart materials; tcp
citation: pawandeep singh matharu, akash ashok ghadge, yara almubarak, yonas tadesse, jelly-z: twisted and coiled polymer muscle actuated jellyfish robot for environmental monitoring, acta imeko, vol. 11, no. 3, article 6, september 2022, identifier: imeko-acta-11 (2022)-03-06
section editor: zafar taqvi, usa
received february 27, 2022; in final form august 25, 2022; published september 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was partially supported by the office of naval research, usa.
corresponding author: pawandeep singh matharu, e-mail: pawandeep.matharu@utdallas.edu

abstract
silent underwater actuation and object detection are desired for certain applications in environmental monitoring. however, several challenges need to be faced when simultaneously addressing actuation and vision-based object detection. this paper presents a swimming underwater soft robot inspired by the moon jellyfish (aurelia aurita) species and other similar robots; this robot, however, uniquely utilizes novel artificial muscles and incorporates a camera for visual information processing. the actuation characteristics of the novel artificial muscles in water are presented, and these can be used for other applications as well. the bio-inspired robot, jelly-z, has the following characteristics: (1) the integration of three 60 mm-long twisted and coiled polymer fishing line (tcpfl) muscles in a silicone bell to achieve the contraction and expansion motions for swimming; (2) a jevois camera mounted on jelly-z to perform object detection while swimming using a pre-trained neural network; (3) a total weight of 215 g with all components, and the ability to swim 360 mm in 63 seconds. the present work shows, for the first time, the integration of camera detection and tcpfl actuators in an underwater soft jellyfish robot, and the associated performance characteristics. when fully developed, this kind of robot can be a good platform for monitoring the aquatic environment, either by detecting objects and estimating the percentage of similarity to a pre-trained network or by mounting sensors to monitor water quality.

1. introduction

in recent times, dealing with inhospitable environments has become inevitable. the advancement of technology and the increase in demand have forced humans to consider exploration, operation, and data collection in places that are nearly impossible for them to operate in unless equipped with expensive and, in some cases, heavy protective equipment. human activities are particularly restrained in the underwater medium, where humans face environmental limitations such as extreme temperatures (4 °c to 1 °c), radiation, and extreme pressure at depths of over 10,000 ft, as well as physical limitations such as the time that can be spent underwater, size, and danger from underwater creatures.
adaptability is needed to deal with the numerous obstacles presented to humans in an aquatic environment; thus, the integration of underwater soft robotics is highly critical and essential for the success of human exploration of the ocean. the development of any soft robot requires improvement of its locomotion, size, weight, flexibility, and control. combining soft materials and artificial muscles along with sensors is the steppingstone of the next generation of smart biomimetic robots, making them highly compliant. this part of the development stage is critical and requires many iterations and much analysis. the key challenges now are control, movement, and stiffness modulation; control of stiffness is highly essential in soft robots [1]-[4]. recently, researchers have also taken advantage of soft materials to explore the mariana trench (at a depth of 11,034 m underwater) [5]. many works have been presented that replicate the movement and behaviour of animals using synthetic materials to achieve robots with a high degree of freedom and variable stiffness, such as 3d printed musculoskeletal joints [6], [7], biomimetic octopus-like tentacles [8]-[10], robotic fish [11]-[13], and, most importantly, jellyfish-like robots [14]-[16]. the jellyfish is considered the most efficient swimmer in the ocean [17]: with its highly flexible and deformable bell, it can propel itself long distances while exerting very little energy. robertson et al. presented a robot with jet propulsion similar to that of the jellyfish, inspired by scallops [18]. origami-inspired robots have also been investigated [19]. others have presented a "soft growing" robot that can be controlled and actuated with a pneumatic actuation mechanism [20].
reviews on the challenges of maintaining linear assets have shown that effectively utilizing autonomous robots can reduce maintenance costs, human involvement, etc. [21]. however, the research focus of our work is on the jellyfish, for its geometrical simplicity and its swimming advantages. many have also attempted to integrate artificial muscles in various configurations to achieve a desired biomimetic jellyfish design. some include jellyfish-like robots actuated by shape memory alloys (smas) [14], [22], twisted and coiled polymers (tcps) [15], dielectric elastomers (des) [23], [24], pneumatics [25], ionic polymer metal composites (ipmcs) [26], and hydrogen fuel [16]. one aspect that is missing in the literature is the integration of sensors and of object detection capabilities underwater, except for katzschmann et al. [13] and lm-jelly, which utilizes a magnetic field and an electromagnetic actuator [27]. haines et al. showed that twisted and coiled polymer muscles made from fishing line (tcpfl) can be very promising for robotic applications [28]. these muscles can exhibit large displacements in response to heating, which considerably decreases the stiffness of the tcp muscles [29]. the thermally induced untwisting of the fibre in the coiled structure allows for tensile and torsional actuation in tcp muscles [30]. wu et al. [31] developed a novel mandrel-coiled tcpfl muscle for actuating a musculoskeletal system and showed that these types of actuators can be used for other soft robotic applications. however, mandrel-coiled tcpfl actuators have low blocking force and high strain; hence, taking into account the application considered in this work, we fabricated self-coiled tcpfl actuators [6], [10] with large blocking force and enough displacement to actuate the soft robot. we seek to present a fully functional swimming underwater robot, jelly-z, that can be used for applications such as underwater monitoring and data collection. to this extent, in this work, we present jelly-z, shown in figure 1, inspired by the geometry of the moon jellyfish and other similar robots, but actuated by twisted and coiled polymer fishing line muscles. it is the first jellyfish-like soft robot that is both equipped with a camera with object detection capabilities and actuated by artificial muscles. these two features make this robot stand out, as it attempts to address the fundamental problems in the actuation, design, and use of vision systems of soft robots to be deployed for eco-friendly underwater missions. the main features of this robot are: • it mimics a bow (jellyfish bell) and string (tcpfl) arrangement for the actuation mechanism. • it swims underwater noiselessly and without vibration. • it is easy to fabricate and lightweight. • it is equipped with a camera for surveillance and object detection underwater. the highlights of the paper are as follows. first, the detailed design and fabrication of jelly-z, using tcpfl to mimic the movement of the moon jelly, is shown. second, the fabrication process and characterization results of twisted and coiled polymer fishing line muscles (tcpfl) in an underwater environment are presented. third, a successful vertical swimming experiment of the soft jelly-z robot is described, including swimming analysis and object detection results.
the contribution of this work in measurement and estimation is a small bio-inspired underwater soft robot, equipped with a small camera, that utilizes unique artificial muscles (silent in actuation, easily manufacturable, and with sufficient actuation properties) for swimming. it is able to detect different kinds of objects in its surroundings while swimming in water, estimating the percentage of similarity to the objects the camera/robot is trained for. in addition, we provide the characteristics of the artificial muscles (tcpfl) in water, which can be used for other similar applications.

2. design and fabrication of jelly-z

the main body of jelly-z is fabricated from a round silicone bell (diameter 130 mm). spring steels are added to provide stiffness to the attached artificial muscles [15], [32]. the robot also carries a jevois smart camera and a piece of foam for buoyancy. a rendered image of the assembled robot is shown in figure 1(a and b). figure 1. (a) cad design showing major components of the jelly-z robot, (b) top view.

2.1. assembly of jelly-z robot

unlike many rigid and complex underwater robots and rovs, jelly-z can be fully assembled in six simple steps. first, prepare the steel springs: the 120-micron steel spring sheet is cut into 120 mm x 10 mm strips; figure 2(1) shows a snapshot of one of the steel springs used in the robot, and a total of six strips are required. second, since the tcpfl muscles are directly attached to the steel strips by crimping, insulating tape is added to prevent any current transfer between the actuating muscles and the steel strips; moreover, electrical wires are placed between the tape and the steel strips to keep them fixed within the robot, as shown in figure 2(2). third, a 3d-printed (abs plastic) mould (figure 2(3)) is used to fabricate the silicone bell; the mould is prepared by cleaning its surface and spraying it with a non-stick compound. fourth, align the steel strips in the designed directions, as shown in figure 2(4). fifth, pour the silicone mix (ecoflex 00-10, 1:1 ratio of part a and part b) into the mould (figure 2(5)) and allow it to cure for a minimum of four hours. finally, integrate the tcpfl muscles into the bell by attaching them to the steel springs; the artificial muscles must be stretched (pre-stressed) to create gaps between each pitch, which allows them to contract while heating. then attach both the foam and the camera on top of the robot, as shown in figure 2(6). the foam is used to set the neutral buoyancy of jelly-z, which was calculated from its experimental volume (a numerical sketch of this balance is given below). the camera is also sealed and waterproofed before attaching.
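the balance behind the buoyancy trim just mentioned can be sketched numerically. the minimal python example below assumes an illustrative displaced volume of 200 cm³ and a closed-cell foam density of 50 kg/m³; neither value is reported in the paper, and only the 215 g total mass is taken from the text.

# neutral-buoyancy trim sketch; displaced volume and foam density are
# assumed values for illustration only, not figures from the paper.
RHO_WATER = 1000.0  # kg/m^3, fresh water
RHO_FOAM = 50.0     # kg/m^3, assumed closed-cell foam

def foam_volume_m3(robot_mass_kg, robot_volume_m3):
    """volume of foam that makes the assembly neutrally buoyant.

    balance: (m_robot + rho_foam * V_foam) = rho_water * (V_robot + V_foam)
    """
    v = (robot_mass_kg - RHO_WATER * robot_volume_m3) / (RHO_WATER - RHO_FOAM)
    if v < 0:
        raise ValueError("assembly is already positively buoyant")
    return v

# 215 g total mass is from the paper; 200 cm^3 displaced volume is assumed
print(f"{foam_volume_m3(0.215, 200e-6) * 1e6:.1f} cm^3 of foam needed")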
2.2. fabrication of twisted and coiled polymer fishing line (tcpfl) muscles

the fabrication process of tcpfl actuators is simple and scalable, and it allows the user to easily manipulate the actuator's properties, such as resistance, diameter, and length. this fabrication process was presented in detail in our previous work [6], [7], [10]. it is done in-house using a setup that includes two stepper motors, a controller, a power supply, and a computer. the fabrication consists of four major steps: (1) inserting a full twist in the fishing line fibre; (2) incorporating the nichrome wire; (3) coiling the wire and nichrome together; and (4) annealing in an oven so that the actuator maintains its coiled structure. to ensure that all the actuators perform and behave the same, two long fishing line actuators were fabricated and later cut into the desired shorter lengths for the jellyfish; in this case, two 130 mm fishing line actuators were made and cut into shorter 60 mm lengths. the precursor fibre is an 80-lb nylon 6,6 monofilament (0.8 mm in diameter) purchased from eagle claw. the conductive nichrome wire is 160 µm in diameter, purchased from mcmaster-carr. the coiling speed was kept at 150 rpm. figure 3 shows the fabrication setup. stepper motor 1 (sm1), located at the top, is used to insert the twist and coil the fibres. stepper motor 2 (sm2) is used to guide the coiling of the nichrome wire along the length of the fishing line fibre. the speed of sm2 (150 rpm) is critical, as it controls the pitch and the amount of nichrome being incorporated, which affects the final electrical resistance of the actuator.

3. isotonic testing of tcpfl in an underwater environment

3.1. characterization setup

isotonic testing is one of the most important characterization processes for identifying how the muscle behaves when it is heated and then cooled under a constant load. this test can be performed both in air and in an underwater environment to fully mimic the muscle's true actuation condition. the setup, shown in figure 4 (left), includes a power supply for joule heating, an ni daq 9219 to record the temperature change using thermocouples, and a camera along with the tracker physics program to measure the displacement in an underwater testing environment.

3.2. characterization results

figure 5 presents the results of the characterization experiments conducted for the tcpfl muscles in water. figure 4 (right) shows a zoomed-in snapshot of the tcpfl in its unloaded and loaded states. the length of the muscle is 60 mm (the same as in the robot), the diameter d is 2.5 mm and the resistance r is 60 ω. the aim of this experiment is to test the effect of different input currents on the tcpfl actuator in an underwater environment. the properties of the tcpfl muscles in water are shown in table 1. figure 2. schematic showing the fabrication steps of the jelly-z robot. figure 3. schematic diagram of the tcpfl muscle fabrication process: (left) twisting process, (middle) nichrome incorporation process and (right) coiling process; the twisting and coiling protocol is similar to wu et al. [7], hamidi et al. [6] and almubarak et al. [10]. figure 4. (left) isotonic test experimental setup; (right) zoomed-in image of the tcp fishing line muscle, without and with pre-stress. a 500 g weight is attached to the free end of the tcpfl muscle while the other (top) end is fixed in a glass cylinder filled with 5.5 gallons of water. copper wires are connected to both ends of the muscle and to a power supply. the thermocouple is directly connected to the actuator on one end and to the ni daq 9219 on the other. lastly, a camera is placed at the free end of the muscle to record the displacement. a labview program collects and saves all the data at a frequency of 10 hz for analysis by the user. the open-source tracking program (tracker) captures the actuation displacement from the recording, and the data are plotted in matlab 2021b as shown in figure 5. the maximum temperature and actuation strain are measured experimentally at three different input currents (0.45 a, 0.55 a, 0.75 a), as shown in figure 5 (a).
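as a worked illustration of this data reduction, the short python sketch below converts a tracker displacement into actuation strain and lists the nominal joule-heating power at the three set currents. the 4.2 mm displacement is an assumed sample value, and the nominal cold resistance of 60 ω is used, so these figures differ somewhat from the measured hot-state voltages reported next.

# strain and nominal joule heating for the isotonic tests (a sketch); the
# 60 mm length and 60 ohm resistance are from the text, the 4.2 mm
# displacement is an assumed sample value, not a reported datum.
L0_MM = 60.0   # muscle length
R_OHM = 60.0   # nominal cold resistance

def strain_percent(displacement_mm, l0_mm=L0_MM):
    """actuation strain referred to the initial muscle length."""
    return 100.0 * displacement_mm / l0_mm

for i_amps in (0.45, 0.55, 0.75):
    # resistance drifts with temperature during actuation, so the measured
    # voltages in figure 5(b) differ from these nominal values
    print(f"I = {i_amps:.2f} A -> P = {i_amps**2 * R_OHM:5.1f} W (nominal)")

print(f"~{strain_percent(4.2):.0f} % strain for a 4.2 mm contraction")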
the maximum actuation strain is noted at ~7 % (figure 5 (d)), while the highest temperature reached is ~62 °c (figure 5 (c)). the highest voltage consumption is ~65 v (figure 5 (b)), which puts the peak power consumption at ~48 w.

4. jelly-z swimming experiment

for efficient propulsion of the robot, pressure gradients are generated during each contraction and relaxation cycle across the bell margin of the jelly-z bot, producing an upward movement. during each relaxation cycle there is a slight sinking of the robot due to the reverse movement of the bell. figure 6 shows the total vertical distance jelly-z swam in a 70-gallon fish tank (92 cm × 58.5 cm × 46 cm). the jellyfish robot takes 21 actuation cycles to swim 360 mm vertically in 63 seconds. a camera mounted on a tripod, working at 60 fps, is used to track jelly-z while swimming. the open-source tracking program (tracker physics) is employed to measure the distance travelled by the robot, and the data are plotted in matlab 2021b. the background stripes are taken as a measurement reference, as each layer (white and black in figure 6(a)) is 8 mm wide. three muscles of 60 mm length (60 ω resistance each) are used to make the robot swim, with an input current of 1.8 a for 1.5 s of heating (contraction cycle) and 1.5 s of cooling (relaxation cycle, 0 a). the velocity of the robot is slower for the first 10 cycles of operation and increases as the robot approaches the surface of the water; this is due to the air bubbles that form underneath the bell as the water reacts with the high power supplied to the muscles. figure 6(a) and figure 6(b) show the full assembly of the jelly-z robot in front and bottom perspective views. figure 6(c) (i, ii, iii) shows zoomed-in graphs of the first 3 cycles: (i) current, (ii) velocity (the velocity amplitude for each actuation cycle reaches ~20 mm/s), (iii) displacement, for a 0.33 hz actuation frequency (duty cycle 50 %). the total electrical power used to actuate jelly-z is 109.8 w, whereas these muscles generally consume about one third of this power in air. the difference is due to the thermal energy lost to the surrounding water; hence, to reach the actuation temperature for muscle movement, the input power is higher in water than in air. water as a medium also allows rapid cooling of the muscles. as this is the first application of tcpfl muscles shown in water, more research must be conducted to reduce the heat lost to the surrounding water medium, and thus to reduce the input power. a bookkeeping sketch of these swim-test figures is given below.
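the swim-test numbers above can be cross-checked with a few lines of python. this sketch uses the reported 360 mm / 63 s / 21 cycles, and infers a heating voltage of 61 v from the stated 109.8 w at 1.8 a; the figure 6 caption quotes 60 v, so this value is an approximation rather than a reported measurement.

# bookkeeping for the swim test (a sketch using the reported values; the
# 61 v figure is inferred from 109.8 w at 1.8 a, not quoted in the paper)
distance_mm, time_s, n_cycles = 360.0, 63.0, 21

mean_speed = distance_mm / time_s   # ~5.7 mm/s average ascent speed
period_s = time_s / n_cycles        # 3 s: 1.5 s heating + 1.5 s cooling
duty = 1.5 / period_s               # 50 % duty cycle at 0.33 hz

p_heating = 1.8 * 61.0              # electrical power while heating
p_average = p_heating * duty        # duty-cycle-averaged input power

print(f"mean speed {mean_speed:.1f} mm/s, f = {1/period_s:.2f} hz, "
      f"heating power {p_heating:.1f} w, average {p_average:.1f} w")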
5. underwater object detection

the experimental setup used to test the design consists of a large fish tank (920 mm x 585 mm x 460 mm) made of transparent glass, a dc power supply unit, a standard laptop, a fully assembled jelly-z prototype fitted with a jevois smart camera, and the appropriate wiring/cables for the connections. for this setup, we used the jevois a-33 smart camera by jevois smart machine vision (jsmv), weighing ~17 g (figure 7, bottom (a)). the camera (hardware specifications given in table 2) is mounted on top of the jelly-z bot, as shown in the full assembly in figure 7 (top right). it combines a camera sensor, an embedded quad-core computer, and a usb video link in a tiny package. the advantage of using the jsmv can be explained with the help of the schematic in figure 7 (bottom, (b)): a standard camera displays the output graphics without any processing and leaves the analysis of the data to the receiver, while the jsmv includes a processing unit that processes the video to interpret its contents and provide instant results to the receiver. the hardware specifications are given in table 2 [34].

figure 5. experimental characterization of the tcp fishing line muscle in an underwater environment with a two-step input sequence and a pre-stress loading of 500 g, for 3 different input currents (0.45 a, 0.55 a, 0.75 a): (a) current vs. time of the two-step power input; time-domain plots for the two-step power input: (b) output voltage, (c) temperature, (d) strain.

table 1. twisted and coiled polymer fishing line actuator mechanical and electrical properties.
material: nylon (6,6) fishing line
type of actuation: electrothermal
type of resistance wire: nichrome (nickel 70 %, chromium 30 %)
nichrome temperature coefficient of resistance (1/°c): 579 · 10⁻⁶ [33]
fishing line diameter (mm): 0.8
nichrome diameter (µm): 160
length of the actuator after coiling (mm): 60
diameter of the actuator after coiling (mm): 2
mass of the actuator (kg): 0.4 · 10⁻³
resistance (ω): 60
heating time (s): 5
cooling time (s): 25
duty cycle (%): 16.6
actuation frequency (hz): 0.33
free strain in water (%): ~7
blocking force in water (n): 5.88
current in water (a): 0.45-0.75
voltage (v) / power in water (w): 50 v / 30 w

the jsmv does not come with waterproofing and is not suitable for underwater applications as delivered. so, in order to make the camera waterproof, we embedded the camera unit inside a silicone rubber mould, as can be seen in figure 7 (b). this allows the camera to function underwater; however, the camera unit has a cooling fan that had to be removed in the process, which limits underwater operation to about 2 min due to overheating. the software includes pre-loaded object detection frameworks such as tensorflow, yolo darknet, a matlab module and python object recognition. for this setup, we used yolo darknet. the standard yolo module provided by jsmv detects up to 1000 different types of objects using a deep neural network, darknet, an open-source neural network framework written in 'c' and 'cuda' [35]. figure 7(c) shows the architecture of the yolo framework. yolo's architecture is very similar to an fcnn (fully connected neural network): a neural network that applies multiple convolutional layers to extract features from images and create learning models. the max-pool layers sub-sample the image features for each of the smaller segments of the image, and the model is trained accordingly. many such layers are implemented in the neural network of yolo; other layers, like the fully connected layer, combine the weights associated with the feature properties of the image [36].

figure 6. (top) underwater swimming analysis for the jelly-z robot at 1.8 a input current / 60 v, 50 % duty cycle, for an actuation frequency of 0.33 hz, at different time intervals: (a) front perspective view, (b) bottom perspective view; (bottom) (c, i-iii) input current, velocity profile and distance vs. time graphs for the first three cycles.
table 2. hardware specifications of the jevois a-33 smart camera.
weight: 17 g
size: 28 cm³
processor: allwinner a33 quad-core arm cortex-a7 @ 1.34 ghz with vfpv4 and neon, and a dual-core mali-400 gpu supporting opengl-es 2.0
memory: 256 mb ddr3 sdram
camera sensor: 1.3 mp camera with sxga (1280 x 1024) at up to 15 fps (frames/s)
hardware serial port: 5 v or 3.3 v (selected through the vcc-io pin) micro serial port connector to communicate with arduino or other embedded controllers
power: 3.5 w maximum from the usb port; requires a usb 3.0 port or a y-cable to two usb 2.0 ports
led: one two-colour led; green: power is good; orange: power is good and the camera is streaming video frames

figure 7. (a) underwater object detection setup. (b) waterproofing of the jevois camera. (c) autonomous object detection in the jevois camera vs. a standard camera. (d) the cnn architecture of the yolo deep learning model. (e) experimental results of object detection from the jevois camera: (i) 65 % human, (ii) 63 % laptop, (iii) 76 % human, (iv) multiple humans at 57 % & 35 %.

the architecture divides the sample image into a grid of size s×s, known as a residual block. each grid cell is responsible for detecting the object centred within it and predicts bounding boxes together with their confidence scores; if there is no object in a grid cell, its confidence score remains zero. each bounding box provides five predictions, (x, y, w, h) and a confidence score, where (x, y) represents the centre of the box and (w, h) its dimensions. predicted bounding boxes are matched to the true boxes of the objects using intersection over union (iou), which discards unnecessary bounding boxes that do not match the objects' characteristics (such as height and width); the final detections are the unique bounding boxes that correctly fit the objects [37] (a small sketch of this overlap test is given at the end of this section). the results of the object detection experiment can be seen in figure 7(e). the camera mounted on the jelly-z bot, while submerged under water, can detect humans passing in front of the fish tank; it was also able to identify a laptop during the experiment. to test the detection of multiple objects, two people walked next to the fish tank and, as seen in figure 7(e), the camera was able to distinguish between the two people and detect them both separately. the capabilities of underwater detection are enormous, with applications in both civil and military domains.
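as promised above, a minimal python sketch of the intersection-over-union test follows. it is a generic illustration of the (x, y, w, h) box parametrization described in the text, not code taken from the jevois or darknet sources.

# intersection-over-union for two boxes given as (x_center, y_center, w, h),
# the yolo box parametrization described above (a generic sketch)
def iou(box_a, box_b):
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # convert centre/size to corner coordinates
    ax1, ay1, ax2, ay2 = ax - aw / 2, ay - ah / 2, ax + aw / 2, ay + ah / 2
    bx1, by1, bx2, by2 = bx - bw / 2, by - bh / 2, bx + bw / 2, by + bh / 2
    # intersection rectangle (zero if the boxes do not overlap)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# boxes with high overlap can be merged or suppressed (nms-style filtering)
print(iou((0.5, 0.5, 0.4, 0.4), (0.55, 0.5, 0.4, 0.4)))  # ~0.78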
6. conclusions and future work

we presented a fully functional underwater jellyfish-like robot, jelly-z. this work shows the first implementation of actuation solely by self-coiled twisted and coiled polymer fishing line artificial muscles (tcpfl), together with the integration of an object detection system, in a soft underwater robot. we showed the unique design of jelly-z, inspired by the moon jellyfish, and its actuation mechanism using tcps. this design iteration enabled the integration of three tcpfl muscles for the bell contraction and relaxation motions that allow swimming. jelly-z achieved swimming at a velocity of 5.7 mm/s, travelling 360 mm vertically in 63 s; it could generate an instantaneous velocity of ~20 mm/s per cycle while carrying its own weight of 215 g. it is to be noted that the vertical swimming of this robot can be controlled by turning off the actuation, which allows jelly-z to sink under the action of its own weight. moreover, we presented the fabrication of the tcpfl and its underwater isotonic testing. the tcpfl can actuate up to ~7 % underwater at a power of 109 w while carrying a load of 500 g, roughly 1,000 times heavier than its own weight. the tcp muscles are manufactured in-house, and all the muscle integration processes are simple. some of the challenges still to improve upon are the life cycle of these (tcpfl) artificial muscles in water, the high power consumption of the muscles, and the operating time of the jevois camera when it is underwater; these aspects need further work before deployment can be considered. as for the system's behaviour in non-ideal conditions, a control system will need to be developed, equipped with an internal gps and a 3d imu, to help the robot maintain its position when it is pushed or exposed to strong underwater currents. autonomous underwater station keeping is beyond the scope of the presented work, but it is worth considering in future projects and applications. we also equipped jelly-z with the jevois smart camera and tested it for object and human detection in an underwater environment using the yolo darknet object detection algorithm. future work will include theoretical modelling and comparison with the experimental characterization results for the tcpfl actuators. improving the efficiency of the tcpfl actuators is another major task for the future. water tunnel tests have to be conducted to study the generation of vortices while the robot is swimming during the contraction and relaxation motions of each cycle. the thrust force generated must be characterized, and corresponding simulations carried out from the linear robot motion, to improve the efficiency of the robot design. work also has to be done on enhancing underwater imaging [37] and object detection capabilities for practical applications. the work presented here is an attempt towards developing a better design for a life-like soft robotic jellyfish with no motors or rigid components for actuation.

acknowledgement

the authors would like to thank dr. ray baughman for his valuable input on the design. we would like to thank the office of naval research for partially supporting this work.

supplementary

a supplementary video is available at the hbs youtube channel; in particular, a combined video showing the structure of the robot, the actuation characteristics and the object detection: https://youtu.be/2mhchqribjo.

references

[1] e. g. dos santos, h. richter, design and analysis of novel actuation mechanism with controllable stiffness, actuators 8(1) (2019), art. no. 12, pp. 1-14. doi: 10.3390/act8010012
[2] t. du, j. hughes, s. wah, w. matusik, d. rus, underwater soft robot modeling and control with differentiable simulation, ieee robotics and automation letters 6(3) (2021), pp. 4994-5001. doi: 10.1109/lra.2021.3070305
[3] b. lu, c. zhou, j. wang, y. fu, l. cheng, m. tan, development and stiffness optimization for a flexible-tail robotic fish, ieee robotics and automation letters 7(2) (2021), pp. 834-841. doi: 10.1109/lra.2021.3134748
[4] y. yang, y. li, y. chen, principles and methods for stiffness modulation in soft robot design and development, bio-design and manufacturing 1(1) (2018), pp. 14-25. doi: 10.1007/s42242-018-0001-6
[5] l. li, x. zheng, r. mao, g. xie, energy saving of schooling robotic fish in three-dimensional formations, ieee robotics and automation letters 6(2) (2021), pp. 1694-1699. doi: 10.1109/lra.2021.3059629
[6] a. hamidi, y. almubarak, y. tadesse, multidirectional 3d-printed functionally graded modular joint actuated by tcpfl muscles for soft robots, bio-design and manufacturing 2(4) (2019), pp. 256-268. doi: 10.1007/s42242-019-00055-6
[7] l. wu, i. chauhan, y. tadesse, a novel soft actuator for the musculoskeletal system, advanced materials technologies 3(5) (2018), art. no. 1700359, pp. 1-8. doi: 10.1002/admt.201700359
[8] m. calisti, m. giorelli, g. levy, b. mazzolai, b. hochner, c. laschi, p. dario, an octopus-bioinspired solution to movement and manipulation for soft robots, bioinspiration & biomimetics 6(3) (2011), art. no. 036002. doi: 10.1088/1748-3182/6/3/036002
[9] j. jiao, sh. liu, h. deng, yu. lai, f. li, t. mei, h. huang, design and fabrication of long soft-robotic elastomeric actuator inspired by octopus arm, 2019 ieee international conference on robotics and biomimetics (robio), 06-08 december 2019, dali, china, pp. 2826-2832. doi: 10.1109/robio49542.2019.8961561
[10] y. almubarak, m. schmutz, m. perez, s. shah, y. tadesse, kraken: a wirelessly controlled octopus-like hybrid robot utilizing stepper motors and fishing line artificial muscle for grasping underwater, international journal of intelligent robotics and applications 6 (2022), pp. 543-563. doi: 10.1007/s41315-021-00219-7
[11] z. chen, s. shatara, x. tan, modeling of biomimetic robotic fish propelled by an ionic polymer-metal composite caudal fin, ieee/asme transactions on mechatronics 15(3) (2010), pp. 448-459. doi: 10.1109/tmech.2009.2027812
[12] r. zhang, z. shen, z. wang, ostraciiform underwater robot with segmented caudal fin, ieee robotics and automation letters 3(4) (2018), pp. 2902-2909. doi: 10.1109/lra.2018.2847198
[13] r. k. katzschmann, j. delpreto, r. maccurdy, d. rus, exploration of underwater life with an acoustically controlled soft robotic fish, science robotics 3(16) (2018), pp. 108-116. doi: 10.1126/scirobotics.aar3449
[14] y. almubarak, m. punnoose, n. x. maly, a. hamidi, y. tadesse, kryptojelly: a jellyfish robot with confined, adjustable pre-stress, and easily replaceable shape memory alloy niti actuators, smart materials and structures 29(7) (2020), art. no. 075011. doi: 10.1088/1361-665x/ab859d
[15] a. hamidi, y. almubarak, y. m. rupawat, j. warren, y. tadesse, poly-saora robotic jellyfish: swimming underwater by twisted and coiled polymer actuators, smart materials and structures 29(4) (2020), art. no. 045039. doi: 10.1088/1361-665x/ab7738
[16] y. tadesse, a. villanueva, c. haines, d. novitski, r. baughman, s. priya, hydrogen-fuel-powered bell segments of biomimetic jellyfish, smart materials and structures 21(4) (2012), art. no. 45013. doi: 10.1088/0964-1726/21/4/045013
[17] j. h. costello, s. p. colin, j. o. dabiri, b. j. gemmell, k. n. lucas, k. r. sutherland, the hydrodynamics of jellyfish swimming, annual review of marine science 13 (2021), pp. 375-396. doi: 10.1146/annurev-marine-031120-091442
[18] m. a. robertson, f. efremov, j. paik, roboscallop: a bivalve inspired swimming robot, ieee robotics and automation letters 4(2) (2019), pp. 2078-2085. doi: 10.1109/lra.2019.2897144
[19] z. yang, d. chen, d. j. levine, c. sung, origami-inspired robot that swims via jet propulsion, ieee robotics and automation letters 6(4) (2021), pp. 7145-7152. doi: 10.1109/lra.2021.3097757
[20] s. grazioso, a. tedesco, m. selvaggio, s. debei, s. chiodini, towards the development of a cyber-physical measurement system (cpms): case study of a bioinspired soft growing robot for remote measurement and monitoring applications, acta imeko 10(2) (2021), pp. 104-110. doi: 10.21014/acta_imeko.v10i2.1123
[21] d. seneviratne, l. ciani, m. catelani, d. galar, smart maintenance and inspection of linear assets: an industry 4.0 approach, acta imeko 7(1) (2018), pp. 50-56. doi: 10.21014/acta_imeko.v7i1.519
[22] a. villanueva, c. smith, s. priya, a biomimetic robotic jellyfish (robojelly) actuated by shape memory alloy composite actuators, bioinspiration & biomimetics 6(3) (2011), art. no. 036004. doi: 10.1088/1748-3182/6/3/036004
[23] c. christianson, c. bayag, g. li, s. jadhav, a. giri, ch. agba, t. li, m. t. tolley, jellyfish-inspired soft robot driven by fluid electrode dielectric organic robotic actuators, frontiers in robotics and ai 6 (2019), art. no. 126, pp. 1-11. doi: 10.3389/frobt.2019.00126
[24] t. cheng, g. li, y. liang, m. zhang, b. liu, t.-w. wong, j. forman, m. chen, g. wang, y. tao, t. li, untethered soft robotic jellyfish, smart materials and structures 28(1) (2018), art. no. 015019. doi: 10.1088/1361-665x/aaed4f
[25] j. frame, n. lopez, o. curet, e. d. engeberg, thrust force characterization of free-swimming soft robotic jellyfish, bioinspiration & biomimetics 13(6) (2018), pp. 64001-64001. doi: 10.1088/1748-3190/aadcb3
[26] s.-w. yeom, i.-k. oh, a biomimetic jellyfish robot based on ionic polymer metal composite actuators, smart materials and structures 18(8) (2009), art. no. 085002. doi: 10.1088/0964-1726/18/8/085002
[27] j. ye, y.-ch. yao, j.-y. gao, s. chen, p. zhang, l. sheng, j. liu, lm-jelly: liquid metal enabled biomimetic robotic jellyfish, soft robotics, 2022 (in press). doi: 10.1089/soro.2021.0055
[28] c. s. haines, m. d. lima, n. li, g. m. spinks, j. foroughi, j. d. w. madden, s. h. kim, sh. fang, m. jung de andrade, f. göktepe, ö. göktepe, s. m. mirvakili, s. naficy, x. lepró, j. oh, m. e. kozlov, s. j. kim, x. xu, b. j. swedlove, g. g. wallace, r. h. baughman, artificial muscles from fishing line and sewing thread, science 343(6173) (2014), pp. 868-872. doi: 10.1126/science.1246906
[29] y. tadesse, l. wu, f. karami, a. hamidi, biorobotic systems design and development using tcp muscles, proc. spie 10594, electroactive polymer actuators and devices (eapad) xx, 1059417 (2018). doi: 10.1117/12.2300943
[30] j. a. lee, n. li, c. s. haines, k. j. kim, x. lepró, r. ovalle-robles, s. j. kim, r. h. baughman, electrochemically powered, energy-conserving carbon nanotube artificial muscles, advanced materials 29(31) (2017), art. no. 1700870. doi: 10.1002/adma.201700870
[31] l. wu, i. chauhan, y. tadesse, a novel soft actuator for the musculoskeletal system, advanced materials technologies 3(5) (2018), art. no. 1700359, pp. 1-8. doi: 10.1002/admt.201700359
[32] y. almubarak, m. punnoose, n. x. maly, a. hamidi, y. tadesse, kryptojelly: a jellyfish robot with confined, adjustable pre-stress, and easily replaceable shape memory alloy niti actuators, smart materials and structures 29(7) (2020), art. no. 075011. doi: 10.1088/1361-665x/ab859d
[33] matweb, nichrome 70-30 medium temperature resistor material. online [accessed 23 august 2022] http://www.matweb.com
[34] laurent itti, jevois smart machine vision camera, 2016. online [accessed 23 august 2022] http://jevois.org/
[35] j. du, understanding of object detection based on cnn family and yolo, journal of physics: conference series 1004 (2018), art. no. 012029. doi: 10.1088/1742-6596/1004/1/012029
[36] m. asyraf, i. isa, m. marzuki, s. sulaiman, c. hung, cnn-based yolov3 comparison for underwater object detection, journal of electrical & electronic systems research 18 (2021), pp. 30-37. doi: 10.24191/jeesr.v18i1.005
[37] w. gai, y. liu, j. zhang, g. jing, an improved tiny yolov3 for real-time object detection, systems science & control engineering 9(1) (2021), pp. 314-321. doi: 10.1080/21642583.2021.1901156

first experimental tests on the prototype of a capacitive oil level sensor for aeronautical applications

acta imeko issn: 2221-870x march 2023, volume 12, number 1, 1-6

filippo attivissimo1, francesco adamo1, luisa de palma1, daniel lotano1, attilio di nisio1
1 dept. of electrical and information engineering, polytechnic university of bari, via edoardo orabona 4, i-70126 bari, italy

section: research paper
keywords: level sensor; cls; helical; airborne; capillary effect; conditioning circuit
citation: filippo attivissimo, francesco adamo, luisa de palma, daniel lotano, attilio di nisio, first experimental tests on the prototype of a capacitive oil level sensor for aeronautical applications, acta imeko, vol. 12, no. 1, article 27, march 2023, identifier: imeko-acta-12 (2023)-01-27
section editor: francesco lamonaca, university of calabria, italy
received february 3, 2023; in final form february 16, 2023; published march 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this research was funded by the italian ministry of university and research (mur) under the "pon r&i 2014-2020 – area di specializzazione 'aerospazio'" programme, project "further".
corresponding author: francesco adamo, e-mail: francesco.adamo@poliba.it

abstract: in this paper, the design, the prototyping, and the first results of experimental tests of a capacitive oil level sensor (cls) intended for aeronautical applications are described. due to potentially high vibrational stresses and the presence of high electromagnetic interference (emi), the working conditions on aircraft can be considered quite harsh; hence, both the sensing part and the conditioning circuit must meet strict constraints. for this reason, in the design phase great attention has also been paid to the mechanical characteristics of the probes. all the design aspects are exposed, the main advantages with respect to alternative level sensing techniques are discussed, and the preliminary experimental results of sensitivity, linearity, hysteresis, and settling time tests are presented and commented on.

1. introduction

aircraft require highly reliable level monitoring for fluids such as the lubricant oils for the main engine as well as for other mechanical subsystems. these levels can be sensed using microwave sensors [1]-[4], tdr-based (time domain reflectometry) sensors [5], [6], optical sensors, and many other methods. however, these techniques usually require very complex and expensive signal conditioning hardware. moreover, it is important to note that level monitoring in aeronautical applications requires a continuous measurement over the entire dynamic range. a capacitive level sensor (cls) perfectly fits this requirement, because its output is not only continuous but even highly linear. a cls carries further benefits: it needs limited planned maintenance, has a long mtbf (mean time between failures) and, most importantly, requires low-cost conditioning electronics. capacitive sensors are commonly used in a large variety of industrial applications for measuring several parameters, e.g., pressures, mechanical strains, displacements, ecg signals, facial movements, sound levels, fluid levels, object presence, and many others [7]-[18]. this paper investigates a novel approach to the design of a capacitive sensor for level sensing, with the proposal of a new geometry for the sensing probes. the design is presented in section 2, starting from the work presented in [19]; several problems are discussed, such as vibrational stresses and capillarity effects of the fluid in the gap between the probes. in section 3, the details of the conditioning circuit (already presented in the previous paper [20]), the readout electronics and the test methodology are discussed. finally, in section 4 the preliminary experimental results of the characterization of the prototypes are provided.

2. sensor probes design

to develop a new sensor for lubricant oil level monitoring for aerospace applications, a first look at "standard" or "conventional design" cls probes is needed. for reference purposes, we have considered the commercially available cls model 8tj209 from ametek aerospace [21]; it is made of two cylindrical and concentric metallic plates with a gap of about 4.3 mm and has a declared sensitivity of about 8 pf/inch ≅ 315 pf/m.
due to the relatively wide gap between the plates, the dynamic response to a step variation of the liquid level is of the second order, and thus prone to oscillations, as demonstrated in [22] and explained in the following. the aim of the novel design presented here is to increase the sensitivity and, at the same time, to improve the dynamic response by reducing the overshoot and the settling time. the sensitivity of a cls is simply defined as the first partial derivative of the output capacitance with respect to the liquid level: $S \triangleq \partial C / \partial h$. in working conditions these kinds of clss can be seen as the parallel connection of two capacitors, the first corresponding to the section immersed in the liquid and the second to the remaining section where the dielectric is air; the total output capacitance is therefore given by

$C = \frac{C_0}{L} \left[ \varepsilon_{r,liq} \, h + (L - h) \right]$ , (1)

where $h$ is the liquid level, $L$ is the length of the probe, $\varepsilon_{r,liq}$ is the relative permittivity of the sensed liquid and, finally, $C_0$ is the "dry capacitance" of the sensor, i.e. the capacitance shown at the output terminals when the fluid level is $h = 0$. for simplicity, the relative permittivity of air has been assumed to be one. following the definition of sensitivity, we can write

$S = \frac{C_0}{L} \left( \varepsilon_{r,liq} - 1 \right)$ . (2)

equations (1) and (2) are generic and can be adapted to clss with concentric cylindrical armatures simply by considering that, for this kind of geometry, the dry capacitance per unit length $C_0 / L$ is given by

$\frac{C_0}{L} = \frac{2 \pi \varepsilon_0}{\ln \frac{r_e}{R_i}} = \frac{2 \pi \varepsilon_0}{\ln \frac{1}{1 - w / r_e}}$ , (3)

where $\varepsilon_0$ is the vacuum/air permittivity, $r_e$ is the inner radius of the external electrode, $R_i$ is the outer radius of the internal electrode and $w = r_e - R_i$ is the gap between the electrodes. from equations (2) and (3) it is easy to see that the sensitivity $S$ increases when the ratio $w / r_e$ decreases or, in other words, when the gap $w$ between the electrodes is narrowed. narrowing the gap also has the additional effect of improving the dynamic response of the sensor, as the mechanical behaviour of the liquid between the plates of the probe changes. in fact, numerical simulations for decreasing values of $w$ show that the dynamic response to a step input level changes from a second-order one to a first-order approximation when $w \le 1.35$ mm [22]. this result has been obtained by modelling the system as communicating vessels, described by the second-order non-linear equation deeply investigated as equation (12) in the mentioned manuscript; the analysis takes into account several pressure terms, such as the hydrostatic pressure, the electrostatic pressure, the pressure drop due to friction losses and so on. however, some negative aspects must also be carefully considered: i) with such a marked gap-width reduction, a significant capillarity rise starts to occur, and ii) the oil is held on the plates for a longer period due to its viscosity. the capillarity effect can be numerically modelled and addressed by engraving slits into at least one of the concentric probes, so that no column above the bath level can build up in the channel. two kinds of contour have been considered for the slits, as shown in figure 1: helicoidal and straight. both were carefully modelled and studied in comsol multiphysics® to calculate their physical characteristics, in particular their sensitivities [22].
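a short python sketch of equations (1)-(3) follows. the 1.35 mm gap is the threshold quoted in the text, while the 15 mm inner radius of the external electrode is an assumption chosen only to illustrate the formulas; with these values the model lands near the ~720 pf/m simulated sensitivities reported in table 1.

# numerical sketch of equations (1)-(3) for the concentric-cylinder probe;
# the 15 mm external-electrode radius is an assumed illustrative value
import math

EPS0 = 8.854e-12   # F/m, vacuum permittivity
EPS_OIL = 2.2      # relative permittivity assumed for the oil

def c0_per_length(r_e, w):
    """equation (3): dry capacitance per unit length, F/m."""
    return 2 * math.pi * EPS0 / math.log(1.0 / (1.0 - w / r_e))

def capacitance(h, length, c0_L, eps_liq=EPS_OIL):
    """equation (1): output capacitance at liquid level h (lengths in m)."""
    return c0_L * (eps_liq * h + (length - h))

def sensitivity(c0_L, eps_liq=EPS_OIL):
    """equation (2): dC/dh, F/m."""
    return c0_L * (eps_liq - 1.0)

c0_L = c0_per_length(r_e=15e-3, w=1.35e-3)
print(f"C0/L = {c0_L * 1e12:.0f} pF/m, S = {sensitivity(c0_L) * 1e12:.0f} pF/m")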
the mechanical behaviour of these new kinds of electrodes was also carefully studied. in the typical operating conditions aboard helicopters, the vibrational stresses are very significant and, if not modelled and counteracted, the resulting behaviour of the sensor could be undetermined: in the presence of intense mechanical vibrations the electrodes are set in relative motion to each other, so capacitance fluctuations and, at least potentially, even short circuits can occur. the highest vibration frequencies occur on helicopters with many blades in the main rotor. as an example, consider a rotor with 7 blades spinning at 225 rpm; in these conditions, the fundamental excitation frequency is $f = 225 \cdot 7 / 60 \approx 26$ hz. it is then important that the vibrational modes of the proposed probes have higher frequencies than the ones occurring on the helicopter where the sensor is used. for our analysis, only the flexural vibrations were considered, because the longitudinal and torsional ones do not alter the relative position of the electrodes, due to the mechanical constraints [22]. because of this constraint, as well as of the chemically highly aggressive environment in which the sensor must work, we chose to make the probe prototypes in marine-grade stainless steel (aisi 316); materials like copper or bronze were excluded a priori due to their poor resistance to corrosion phenomena [23], [24]. performing a fem (finite element method) analysis of the modes of vibration, the probe with the longitudinal slit shows its lowest resonant frequencies at 67 hz for the internal electrode and 83 hz for the external electrode, while the resonant frequencies for the helicoidal slit probe are 37 hz and 44 hz respectively. hence, both probes have higher modes than the vibrations on the helicopter, so no resonances will occur in working conditions.
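this margin check can be written out explicitly; the python sketch below uses only the figures quoted above (7 blades at 225 rpm and the fem modal frequencies).

# blade-pass excitation versus the probes' first flexural modes (a sketch
# using the figures quoted in the text)
def blade_pass_hz(rpm, n_blades):
    """fundamental excitation frequency of an n-blade rotor."""
    return rpm * n_blades / 60.0

f_exc = blade_pass_hz(225, 7)   # ~26 hz for the 7-blade example
first_modes_hz = {"straight slit": (67, 83), "helicoidal slit": (37, 44)}
for probe, (f_int, f_ext) in first_modes_hz.items():
    margin = min(f_int, f_ext) / f_exc
    print(f"{probe}: modes {f_int}/{f_ext} hz, margin x{margin:.1f} "
          f"over {f_exc:.2f} hz excitation")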
after these preliminary considerations, three different probes were manufactured and tested: one with a helicoidal slit, one with a straight longitudinal slit, and a third one with an internal electrode without any cut and an external electrode with the same helical slit as the first.

table 1. sensitivities of the proposed cls calculated in comsol multiphysics®, compared with the corresponding sensitivity of an uncut probe.
slit contour / sensitivity (pf/m)
no slit: 725
helicoidal: 718
straight: 723

figure 1. electrode prototypes used in the experiments: a) without slits, b) with helical slits and c) with a longitudinal straight slit.

3. experimental setup and methods

the purposely designed and prototyped readout electronics, as well as the methodology followed for the experiments and the sampling of the data, are explained in the following, together with a look at the mechatronic setup used for sliding the probes in and out of the oil.

3.1. conditioning circuit

one of the most common and immediate ways to measure the output capacitance of a capacitive sensor is to insert it in a low-pass series rc filter and then measure its time constant in response to a step voltage input stimulus. in such a circuit, the current is given by

$i(t) = \frac{V}{R} \, e^{-t/\tau}$ , (4)

where $\tau = R \, C$ is the time constant. hence, the capacitance can be calculated by measuring the time needed to charge the capacitor through the resistor from a given start voltage to an end voltage. unfortunately, this method is unreliable for two main reasons: 1) the effective resolution of low-cost adcs, or even of the adcs integrated into typical microcontrollers, is too low (typically less than 12 bits) to achieve the requested measurement accuracy, and 2) the time needed to complete a measurement can be too long. in this work a different approach was pursued: we used an lc tank oscillator circuit and measured the frequency shift due to capacitance variations. a parallel lc oscillator has a well-known resonant oscillation frequency at which the capacitor reactance matches the inductor reactance:

$X_{C_{tot}} = \frac{1}{2 \pi f \, C_{tot}} = 2 \pi f L = X_L$ , (5)

where $X_{C_{tot}}$ and $X_L$ are the capacitor and the inductor reactance respectively, $C_{tot}$ is the total capacitance in the circuit, i.e. the sum of a fixed capacitance $C$ and of the sensor's capacitance $C_x$ (to improve the measurement accuracy, the parasitic capacitances $C_{par}$ due to the printed circuit board (pcb) traces and to the connecting cables must also be included), $L$ is the inductance in the oscillator circuit and $f$ is the resonant frequency. equation (5) can be rewritten with the resonant frequency on the left-hand side as

$f = \frac{1}{2 \pi \sqrt{L \, C_{tot}}}$ . (6)

hence the sensor's capacitance $C_x$ can be calculated from (6) as

$C_x(f) = \frac{1}{L \, (2 \pi f)^2} - (C + C_{par})$ . (7)

the tank oscillator was implemented using standard commercial values for the inductor $L$ and capacitor $C$, namely $L = 10$ μh and $C = 270$ pf. the pcb parasitic capacitance was measured to be $C_{par} \cong 13$ pf, i.e. a value well in line with typical mean values of capacitance on a pcb. if we assume that the sensor's capacitance spans the range 500 pf – 1000 pf, the expected frequency span is about 393 khz.
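the span quoted above can be verified numerically from equations (6) and (7); the python sketch below uses the stated component values and the assumed 500 pf - 1000 pf sensor range.

# check of equations (6)-(7) with the component values given above
import math

L_TANK = 10e-6     # 10 uH tank inductor
C_FIXED = 270e-12  # fixed tank capacitor
C_PAR = 13e-12     # measured pcb parasitics

def resonant_freq(c_x):
    """equation (6), with C_tot = C + C_par + C_x."""
    return 1.0 / (2 * math.pi * math.sqrt(L_TANK * (C_FIXED + C_PAR + c_x)))

def sensor_capacitance(f):
    """equation (7): invert a measured frequency back to C_x."""
    return 1.0 / (L_TANK * (2 * math.pi * f) ** 2) - (C_FIXED + C_PAR)

f_hi, f_lo = resonant_freq(500e-12), resonant_freq(1000e-12)
print(f"span = {(f_hi - f_lo) / 1e3:.0f} khz")   # ~393 khz, as stated
assert abs(sensor_capacitance(f_hi) - 500e-12) < 1e-15  # round trip check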
3.2. measurement system

the sensor readout is performed by the fdc2214 by texas instruments, a specialized capacitance-to-frequency converter (cfc) integrated circuit (ic) [25]-[27]. the fdc2214 was selected among many alternative ics because it has 4 independent input channels with a full-scale range (fsr) of up to 250 nf, a nominal resolution of 28 bits, an rms noise of 0.3 ff, and an output data rate of 4.08 ksa/s. the prototype of the pcb with the fdc2214 and the minimum electronics needed to interface it with an external microcontroller is shown in figure 2. the fdc2214 communicates with the external microcontroller through an i2c (inter-integrated circuit) interface. to interface it with a pc, we used a simple arduino uno development system with an atmega328p mcu (micro controller unit) by microchip corp. measurement data are acquired from the fdc2214 by the mcu and transferred to the pc, where they are processed in mathworks matlab. to simplify the test procedures, the probes being characterized were tied to a high-precision linear actuator and suspended over a 125 mm diameter pvc pipe filled with lubricant oil (figure 3). the probes were mounted on the moving cart of the system, driven by a closed-loop controlled stepper motor and a high-precision trapezoidal screw. this setup allowed the positioning of the sensor with a nominal resolution of 50 µm. figure 2. the prototype of the sensor readout electronics based on the fdc2214. figure 3. the mechatronic system developed for the automated tests on the cls prototypes; the four main parts of the system are evidenced: readout electronics, atmega328p on arduino, closed-loop stepper motor driver, and probe under test. for the sake of completeness, it must be said that this setup was preferred to one in which the sensor probes are set in position in a tank and a pump is used to fill and empty the tank with the lubricant oil, due to the limited maximum flow of the available pumps. the lubricant oil used for the tests is an sae 15w40 standard semi-synthetic one by viskoil®; its characteristics were deeply studied and described in [28]. some assumptions were made for the experimental tests: the room temperature in the laboratory was considered constant, and the lubricant is not worn, as it has not been working in a motor, so there is no contamination by water, coolant, or metallic debris. from [28], the dielectric constant reduces by only 1.5 % when the temperature rises from 60 °c to 120 °c; so, to a first approximation, its changes can be considered linear in the working temperature range of a typical high-performance aeronautical motor (about 100 °c). in the following, the relative dielectric constant is assumed to be $\varepsilon_{r,oil} = 2.2$. in perspective, the effects of temperature on the dielectric constant could also be measured and compensated by taking advantage of one of the 3 residual channels available on the fdc2214; in fact, it can be used to measure the capacitance of a reference capacitor with known geometry permanently and fully immersed in the same oil tank where the cls is installed.

3.3. test methods

for each sensor, the sensitivity, hysteresis, rise time, and root mean square non-linearity error (rmse) have been evaluated to carry out a comparison. hysteresis experiments were done with three immersion and extraction cycles of the probes, with increasing pause times between the moving and the sampling phases. the experiments were performed over a full run of 400 mm in 10 mm steps; for each step, after a delay time (if added), 100 samples were taken in a time interval of about 2 s. the mean absolute hysteresis is calculated as the mean of the differences between the immersion and extraction measures. the sensitivity is calculated by fitting the 10 s cycle data with a first-degree polynomial function. the rise time experiments were performed by measuring the sensor's output capacitance for a minimum of 20 s after rapidly dipping the probe into, or extracting it from, the oil. the time constant $\tau_{63}$ was taken as the time needed, from the moment the sensor stopped, for its output capacitance to cover 63 % of the step to its settled value; the settling time was then assumed to be $t_r = 5 \, \tau_{63}$. the same sampled data were also used to calculate the sensitivity of the sensor and the rmse, both for the immersion and for the extraction movements.
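the data reduction of this section can be sketched in a few lines of python; the arrays below are synthetic stand-ins (the real sampled data are not reproduced here), so only the structure of the computation is meaningful.

# sketch of the section 3.3 data reduction on synthetic stand-in data
import numpy as np

def mean_abs_hysteresis(c_immersion_pf, c_extraction_pf):
    """mean absolute immersion/extraction difference, point by point."""
    return np.mean(np.abs(np.asarray(c_extraction_pf) -
                          np.asarray(c_immersion_pf)))

def sensitivity_pf_per_mm(levels_mm, c_pf):
    """first-degree polynomial fit of capacitance versus level."""
    slope, _intercept = np.polyfit(levels_mm, c_pf, 1)
    return slope

levels = np.arange(0.0, 401.0, 10.0)   # 400 mm run in 10 mm steps
c_up = 300.0 + 0.9 * levels            # synthetic immersion branch
c_down = c_up + 1.2                    # synthetic extraction branch
print(sensitivity_pf_per_mm(levels, c_up), mean_abs_hysteresis(c_up, c_down))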
4. results

according to the hysteresis experiments, the sensor with mixed electrodes, i.e. the probe with the internal plate without slits and the external plate with helicoidal slits, showed overall the least residual non-linearity. in this respect, when given enough time for the settling of the output capacitance, the longitudinal slit probe appears to be the worst-performing one. the sensitivity shown by all the sensors is about 25 % higher than that expected from the simulations run in comsol multiphysics®. the highest sensitivity was achieved with the helicoidal slits probe; this is not in accordance with the physics and could be due to a misalignment of the plates or to small defects of the manufacturing process. other prototypes of the electrodes will be prepared, with different and more accurate manufacturing processes, to study this phenomenon. with respect to the rise time evaluation experiments, the main issues arose with the helicoidal probe; in this case we observed a very high $\tau_{63}$ time constant since i) the gap between the two electrodes is very narrow and this, as already explained, increases the time needed for the oil to rise and fall in the gap, and ii) the slits are narrow too, allowing a thin membrane of oil to form in them; hence, the oil is held on the electrodes for a longer time. moreover, the time constants are different for increasing and decreasing oil levels; indeed, the time constant is higher for the extraction paths due to the oil retention phenomenon. in this case, the vertical slit electrodes performed best, even though the results are comparable to those of the mixed electrodes. finally, a preliminary look was given to the dynamic performance of the sensors. the residual non-linearity rmse is comparable for all three probes, and the results for immersion and extraction are similar too. however, when fitting the sampled data and calculating the sensitivity, a clear difference between the extraction and immersion characteristics arises, with the sensitivity for immersion being lower than that for extraction.

figure 4. results of the hysteresis experiment on the probes with helicoidal slits: a) linear fit and b) residuals.

table 2. comparison of the performances of the cls with the three different kinds of electrodes.
probe / sensitivity (pf/mm) / max |nle|(*) (pf) / max |hysteresis| (pf) / rise time (s)
helicoidal: 0.949 / 4.59 / 2.57 / 4.85
mixed: 0.888 / 1.93 / 1.24 / 3.01
straight: 0.879 / 11.58 / 5.10 / 3.01
(*) nle = non-linearity error

5. conclusions

in this manuscript, the authors presented a novel design strategy for oil-level monitoring aboard aircraft. after a first in-depth study of a novel geometry for the probes, with accurate numerical simulations of the electrical aspects, the effects of mechanical stresses on the electrodes were also modelled and addressed. the probes were prototyped and tested using an in-house developed, pc-based mechatronic system and readout electronics. the tests showed positive results, in line with the behaviour predicted through simulations. in particular, the proposed solution proved superior to the other level sensing technologies currently used in the industry: concerning the sensitivity, the proposed sensor showed an improvement of more than three times with respect to other commercial clss. furthermore, the issue of the second-order response to a step change in the oil level was overcome, with an improvement in the settling time too. even the comparison with an optical level sensor is promising, as the readout electronics are simpler, cheaper and, most importantly, more reliable. the readout electronics were prototyped in a small series at a cost of about 50 eur/unit, while the cost of a single set of probes of the cls is around 1,500 eur, due to the complexity of the manufacturing process. of course, both these cost components could be reduced in mass production. however, it can be estimated that, even with a limited number of units, the overall cost of the proposed system could be lower than that of the corresponding optical or microwave solutions.
acknowledgement

this study has been developed in close cooperation with an international aerospace company within the project further ("future evolutionary technologies for hybrid electric aircrafts"), funded by the italian ministry of university and research (mur) through the research program pon r&i 2014-2020.

references

[1] m. vogt, c. schulz, c. dahl, i. rolfes, m. gerding, an 80 ghz radar level measurement system with dielectric lens antenna, 2015 16th international radar symposium (irs), dresden, germany, 24-26 june 2015, pp. 712-717. doi: 10.1109/irs.2015.7226222
[2] kunde santhosh kumar, a. bavithra, m. ganesh madhan, a cavity model microwave patch antenna for lubricating oil sensor applications, materials today: proceedings, vol. 66, part 8, 2022, pp. 3446-3449. doi: 10.1016/j.matpr.2022.06.136
[3] a. m. loconsole, v. v. francione, v. portosi, o. losito, m. catalano, a. di nisio, f. attivissimo, f. prudenzano, substrate-integrated waveguide microwave sensor for water-in-diesel fuel applications, applied sciences, vol. 11, 2021, no. 21, art. no. 10454. doi: 10.3390/app112110454
[4] g. andria, f. attivissimo, s. m. camporeale, a. di nisio, p. pappalardi, a. trotta, design of a microwave sensor for measurement of water in fuel contamination, measurement, vol. 136, 2019, pp. 74-81. doi: 10.1016/j.measurement.2018.12.076
[5] m. scarpetta, m. spadavecchia, f. adamo, m. a. ragolia, n. giaquinto, detection and characterization of multiple discontinuities in cables with time-domain reflectometry and convolutional neural networks, sensors, vol. 21, 2021, no. 23, 8032, 13 pp. doi: 10.3390/s21238032
[6] m. scarpetta, m. spadavecchia, g. andria, m. a. ragolia, n. giaquinto, analysis of tdr signals with convolutional neural networks, proc. of ieee int. instrumentation and measurement technology conference (i2mtc), glasgow, united kingdom, 17-20 may 2021, pp. 1-6. doi: 10.1109/i2mtc50364.2021.9460009
[7] x. x. liu, k. peng, z. chen, h. pu, z. yu, a new capacitive displacement sensor with nanometer accuracy and long range, ieee sensors journal, vol. 16, 2016, no. 8, pp. 2306-2316. doi: 10.1109/jsen.2016.2521681
[8] m.-y. cheng, c.-l. lin, y.-t. lai, y. j. yang, a polymer-based capacitive sensing array for normal and shear force measurement, sensors, vol. 10, 2010, no. 11, pp. 10211-10225. doi: 10.3390/s101110211
[9] a. ueno, y. akabane, t. kato, h. hoshino, s. kataoka, y. ishiyama, capacitive sensing of electrocardiographic potential through cloth from the dorsal surface of the body in a supine position: a preliminary study, ieee transactions on biomedical engineering, vol. 54, 2007, no. 4, pp. 759-766. doi: 10.1109/tbme.2006.889201
[10] v. rantanen, p. kumpulainen, h. venesvirta, j. verho, o. špakov, j. lylykangas, a. vetek, v. surakka, j. lekkala, capacitive facial activity measurement, acta imeko, vol. 2, 2013, no. 2, pp. 78-85. doi: 10.21014/acta_imeko.v2i2.121
[11] m. a. ragolia, a. m. lanzolla, g. percoco, g. stano, a. di nisio, thermal characterization of new 3d-printed bendable, coplanar capacitive sensors, sensors, vol. 21, 2021, art. no. 6324. doi: 10.3390/s21196324
[12] g. stano, a. di nisio, a. m. lanzolla, m. a. ragolia, g. percoco, additive manufacturing for capacitive liquid level sensors, the international journal of advanced manufacturing technology, vol. 123, no. 7, 2022, pp. 2519-2529. doi: 10.1007/s00170-022-10344-7
[13] q. yang, a. j. yu, j. simonton, g. yang, y. dohrmann, z. kang, y. li, j. mo, f.-y. zhang, an inkjet-printed capacitive sensor for water level or quality monitoring: investigated theoretically and experimentally, journal of materials chemistry a, vol. 5, 2017, pp. 17841-17847. doi: 10.1039/c7ta05094a
analysis of peak-to-average power ratio in filter bank multicarrier with offset quadrature amplitude modulation systems using partial transmit sequence with shuffled frog leap optimization technique

acta imeko issn: 2221-870x march 2022, volume 11, number 1, 1-5

ch. thejesh kumar1, amit bindaj karpurapu2, mayank mathur3
1 department of ece, raghu institute of technology, visakhapatnam-531162, andhra pradesh, india
2 department of ece, swarnabharathi institute of science and technology, khammam-507002, telangana, india
3 university of technology, jaipur-303903, rajasthan, india

section: research paper
keywords: fbmc-oqam; pts; leap frog optimization; papr; ber
citation: ch. thejesh kumar, amit bindaj karpurapu, mayank mathur, analysis of peak-to-average power ratio in filter bank multicarrier with offset quadrature amplitude modulation systems using partial transmit sequence with shuffled frog leap optimization technique, acta imeko, vol. 11, no.
1, article 29, march 2022, identifier: imeko-acta-11 (2022)-01-29
section editor: md zia ur rahman, koneru lakshmaiah education foundation, guntur, india
received november 29, 2021; in final form march 5, 2022; published march 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: ch. thejesh kumar, e-mail: thejeshkumar.ch@gmail.com

abstract
because of its low adjacent channel leakage ratio, the filter bank multicarrier with offset quadrature amplitude modulation (fbmc-oqam) has attracted the attention of many researchers in recent times. however, the high peak-to-average power ratio (papr) has a detrimental influence upon the fbmc system's energy efficiency. we study papr reduction of fbmc-oqam signals using the partial transmit sequence (pts) method in this work. pts with shuffled frog leap phase optimization is proposed in this paper to reduce the large papr, which is the major disadvantage of the fbmc-oqam system. according to the simulation findings, the suggested approach has a considerably superior papr performance, reducing the papr by about 2 db at a complementary cumulative distribution function of 10^-3 when compared to the traditional pts method, with a much lower computational complexity than previous methods. the experimental parameters are measured, and the results are evaluated using the matlab tool.

1. introduction
orthogonal frequency division multiplexing (ofdm) emerged as a suitable multicarrier modulation technique for modern wireless communications [1]. it provides effective use of the spectrum while minimizing inter-symbol interference, usually by inserting a guard interval to cope with the multipath channel. adding the guard interval, however, reduces the spectral and power efficiency. in order to tackle this issue, the filter bank multicarrier with offset quadrature amplitude modulation (fbmc-oqam) technique has been proposed. the technique improves spectral efficiency, although this comes at the cost of computational complexity. further, it has a noticeable impact in reducing out-of-band power leakage [2]. fbmc-oqam is being evaluated as a contender for the 5g mobile communication standard because of benefits like excellent time and frequency localisation; other significant features are the minimization of out-of-band emission and resistance to phase noise [3]. in the fbmc-oqam approach [4], a prototype filter is used to relax the orthogonality of subcarriers, which is often exploited to avoid the guard interval insertion; as a result, greater spectral efficiency can be achieved. citing the recent developments in wireless communication, the demand for mobile and broadband services in 5g is ever-growing. in this line, the fbmc-oqam approach can be considered an effective alternative to support the ofdm technique [5]. the significant problem persistent in ofdm and fbmc-oqam is the high peak-to-average power ratio (papr).
as a result of this issue, greater non-linear signal distortion develops at the output of the power amplifier, resulting in a severe degradation of the bit-error rate (ber) performance. to address this issue, various papr reduction techniques for multicarrier modulation systems have been developed, including partial transmit sequence (pts) [6], selective mapping [7], and others. the rest of the manuscript is organized as follows: section 2 provides information on papr reduction techniques; the algorithm is explained in section 3, while the application is demonstrated in section 4; the simulated results, computations and discussions are presented in section 5; overall conclusions are given in section 6.

2. papr reduction methods
initially, papr reduction was considered an essential step in communications, especially in applications like radar and speech synthesis. typical radar systems have several limitations in terms of peak power; this is a very common issue in every communication system. similarly, speech synthesis applications suffer degradation because they are severely bounded by peak power: the larger the signal peaks, the harsher the synthesized machine voice sounds. these effects should be avoided, especially while handling human speech. when a multicarrier-based communication system is considered, the primary issue to deal with is the inherent papr; hence, papr reduction can be considered an essential part of improving the quality of the communication. several schemes and methods have been proposed in due course of time to handle the papr reduction process with ease. it is essential to mention that the papr should be dealt with through a non-complex procedure, and for ofdm systems several papr reduction schemes precisely follow this rule. some of the most successful schemes so far are clipping, tone injection and tone reservation; other techniques based on constellations and transmit sequences are also popular, along with techniques like selective mapping and block coding. a comparative study of the techniques mentioned above gives a deep insight into the potential application and strength of each method. one of the most common papr reduction methods is the pts method, which is characterized as a distortion-less approach. the minimal papr value in the pts technique is obtained by multiplying the data signal cluster of each data symbol by the most suitable available phase combination. while demodulating, the corresponding side information provided by the transmitter is essential. addressing this, a solution to the papr enhancement issue has been provided specifically for the fbmc-oqam system, which can outperform the standard pts technique in terms of papr performance while reducing the computational cost of optimizing the phase combination. the salient characteristics are that the large papr is minimized using the overlapping-pts technique [8], [9] and the phase combination is optimised utilizing the shuffled frog leap (sfl) phase optimization. as in the pts technique, the receiver needs the transmitter to provide the relevant side information, which is used for the subsequent data demodulation.

3. shuffled frog leap optimization
the sfl algorithm (sfla) belongs to the class of meta-heuristic algorithms. it performs a learned heuristic approach for searching for a solution.
especially, it performs a combinatorial optimization employing several heuristic functions, which are non-complex mathematical models. the evolution of memes is the inspiration behind the structure of the algorithm: it mimics the way frogs exchange information among themselves. the sfla is a hybrid of deterministic and random methods. as in particle swarm optimization, the deterministic part enables the algorithm to effectively employ the response-surface data, which has often been the driving source for the heuristic search, while the random elements provide the search pattern's flexibility and resilience. like any other evolutionary algorithm, the initial step is to choose the population; this refers to the number of frogs, which are randomly generated and collectively constitute the whole swamp. the population is classified into multiple parallel communities, referred to as memeplexes, and independent evolution of the individuals is encouraged so as to search the space in multiple directions. the frogs inside each memeplex are infected by the ideas of other frogs, resulting in memetic development. memetic evolution increases the quality of an individual's meme and improves the individual frog's performance toward a goal. to keep the infection process competitive, frogs with better memes (ideas) must contribute more to the creation of new ideas than frogs with weak ideas; using a triangular probability distribution to choose frogs gives superior ideas a competitive edge. during evolution, frogs change their memes, usually taking into account the best available information, which is normally obtained from the memeplex but in many cases can be fetched from the whole population based on quality. this change refers to a jump, whose magnitude is the step size, and it yields the new meme relative to the updated position of the frog. every frog is reintroduced to the community once it has improved its status, so the knowledge acquired from a position shift is instantly available for further improvement. this rapid access to fresh information distinguishes this technique from the genetic algorithm, in which the whole population is affected at once; in the sfla, every frog in the population is considered a potential solution, which depicts the concept of idea dissemination. this can be visualised as a team of innovators comparatively developing a concept, or as an engineer who continuously works towards bettering a design.

3.1. sfla steps
step 1. initialization: selection of m and n, where m refers to the number of memeplexes [9] and n denotes the number of frogs available in every memeplex. the overall swamp size is computed as F = m × n.
step 2. generate a virtual population.
step 3. sort the frogs based on their ranks, keeping the best on top.
step 4. partition the frogs into memeplexes: partition array X into m memeplexes Y1, Y2, …, Ym, each containing n frogs, such that

$Y_k = \left\{ U(j)_k, f(j)_k \;\middle|\; U(j)_k = U(k + m(j-1)),\; f(j)_k = f(k + m(j-1)),\; j = 1, \ldots, n;\; k = 1, \ldots, m \right\}$ .   (1)

for instance, if m = 3, rank 1 goes to memeplex 1, rank 2 is assigned to memeplex 2, rank 3 is awarded to memeplex 3, the 4th rank is designated to memeplex 1 again, and the process goes on.
step 5. memetic evolution within each memeplex: evolve every memeplex Y_k, k = 1, …, m, in accordance with the sfla.
step 6. shuffle all the memeplexes: after the finite iterative process, replace Y1, …, Ym into X, such that X = {Y_k, k = 1, …, m}; then sort X in decreasing order of the performance index and update the position of the best frog P_x.
step 7. check whether convergence is achieved: terminate once convergence is witnessed, otherwise go to step 3 again. the termination criterion can be the computational time, the number of iterations or convergence itself, where convergence refers to the 'best memetic pattern' remaining unchanged; based on this, the evaluation of the objective function takes place.
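a minimal python sketch of steps 2-4 and the rank-based partition of eq. (1) follows; the toy performance index and the population sizes are illustrative assumptions, not the configuration used in the paper.

import numpy as np

m, n = 3, 5                          # m memeplexes, n frogs per memeplex
F = m * n                            # overall swamp size, F = m x n
rng = np.random.default_rng(0)

# step 2: generate a virtual population of frogs (candidate solutions)
U = rng.uniform(-1.0, 1.0, (F, 4))
f = -np.sum(U ** 2, axis=1)          # toy performance index: higher is better

# step 3: sort so that the best frog is on top
order = np.argsort(f)[::-1]
U, f = U[order], f[order]

# step 4 / eq. (1): the frog of rank k + m*(j - 1) goes into memeplex k
memeplexes = [[(U[k + m * j], f[k + m * j]) for j in range(n)] for k in range(m)]
# with m = 3: ranks 1, 4, 7, ... -> memeplex 1; ranks 2, 5, 8, ... -> memeplex 2; etc.
print([len(Y) for Y in memeplexes])  # -> [5, 5, 5]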
4. papr reduction using pts-sfl
4.1. papr
papr is the ratio of the peak power of a multicarrier signal to its average power. as the papr value rises, the high-power amplifier (hpa) is pushed towards its saturation area, which is critical since any additional increase in signal amplitude forces the hpa to operate in its non-linear region, resulting in signal distortion. the reduction methods in ofdm systems are applied separately to each symbol, and the papr of each symbol is computed independently; the overlapping of data blocks is thus not considered, which is not appropriate for the fbmc/oqam signal. to compute the papr correctly, the overlapping of the symbol blocks must be taken into account [10]. the papr of pure fbmc/oqam in db can be expressed as follows:

$\mathrm{PAPR}(\mathrm{dB}) = 10 \log_{10} \dfrac{\max_{(m-1)T \le t \le mT} |s_m(t)|^2}{\mathrm{E}\left[|s(t)|^2\right]}$ ,   (2)

where the numerator represents the peak power in the duration of the input m-th data block, (m − 1)T ≤ t ≤ mT, and the denominator E[|s(t)|^2] represents the average power of the fbmc signal.

4.2. pts
in the conventional pts method, an input block is divided into V sub-blocks. each sub-block is zero-padded to create a vector of length N, and an inverse fast fourier transform is performed separately for each sub-block, significantly increasing the computational complexity. adjacent, interleaving and pseudo-random are some of the widely used partitioning schemes [11]; the adjacent method is used in the proposed method, owing to its simplicity and effectiveness. from figure 1, it can be observed that the resulting sequences are optimized by phase rotation factors b = [b_1, b_2, …, b_V], where $b_v = \mathrm{e}^{j 2 \pi v / W}$ and v = 0, 1, …, W − 1, to create symbol candidates referred to as partial transmit sequences. this operation results in a variation of the peak values of the signal candidates, and the one with the minimum papr is chosen for transmission. the total number of signal candidates depends on V and W, where W is the number of phase factors allowed for a single sub-block. the optimum phase factor vector that reduces the papr is obtained as follows [12]-[17]:

$\{\tilde{b}_1, \tilde{b}_2, \ldots, \tilde{b}_V\} = \arg\min_{[b_1, \ldots, b_V]} \max_{0 \le t \le T} \left| \sum_{v=1}^{V} s_m^v b_v \right|^2$ .   (3)

the transmit signal after reducing the papr can be expressed as follows:

$\tilde{s}_m(t) = \sum_{v=1}^{V} s_m^v \tilde{b}_v$ .   (4)

direct application of this method to each fbmc symbol separately is not effective: as the symbols overlap, the parts where the symbols overlap will exhibit peak regrowth, increasing the papr again [18].
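to make eqs. (2)-(4) concrete, the sketch below computes the papr of a block and runs an exhaustive pts phase search over an adjacent partition; the toy random block, the V = 4 sub-blocks and the W = 4 phase alphabet are illustrative assumptions rather than the simulation settings of section 5.

import numpy as np
from itertools import product

def papr_db(s):
    # eq. (2): peak instantaneous power over average power, in db
    p = np.abs(s) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(0)
N, V, W = 256, 4, 4                          # block length, sub-blocks, phase alphabet
X = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # toy frequency-domain block

# adjacent partitioning into V zero-padded sub-blocks, one ifft per sub-block
sub = np.zeros((V, N), dtype=complex)
for v in range(V):
    sub[v, v * N // V:(v + 1) * N // V] = X[v * N // V:(v + 1) * N // V]
time_sub = np.fft.ifft(sub, axis=1)

# eq. (3): exhaustive search over the W**V candidate phase vectors
phases = np.exp(2j * np.pi * np.arange(W) / W)
best_papr, best_b = np.inf, None
for b in product(phases, repeat=V):
    cand = np.tensordot(np.array(b), time_sub, axes=1)   # eq. (4): weighted sum
    if papr_db(cand) < best_papr:
        best_papr, best_b = papr_db(cand), b

print(f"original papr = {papr_db(time_sub.sum(axis=0)):.2f} db, "
      f"pts papr = {best_papr:.2f} db")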
4.3. pts-sfl
it is evident that the proposed method yields excellent papr reduction, with a performance that reports lower computational complexity [19]-[22]. the sfl phase optimization is performed according to the algorithm discussed in section 3. the proposed block diagram is shown in figure 1.

figure 1. the proposed pts-sfl technique.

5. results and discussion
following the application of the sfla to the problem of papr reduction, the simulation-based experiment has been carried out and the results are presented in this section. the experimental computations provide deep insights into the concept of papr reduction; clearly, the introduction of the pts-sfl and its effect on the papr are the subject of the study. the probabilistic approach and the computations based on the complementary cumulative distribution function (ccdf) are performed as a part of the simulation, as this parameter has a direct impact on the papr. the results demonstrate various papr reduction techniques and are compared with the proposed model. since the signals are optimized independently in consideration of the overlapping signals, the complexity of the algorithm can be reduced by using fewer combinations of phase factors. herein, we explore the effects of different combinations of phase factors on the performance of the proposed technique. figure 2 shows the ccdf curves for the proposed technique without clipping and with four sub-blocks; it can be seen that the papr is around 8.1 db using the sfl optimization technique. further, 8 sub-blocks are considered, and the papr results are evaluated and shown in figure 3. by considering V = 8, the threshold papr obtained using pts is 7.5 db, using segment-based optimization the papr value is around 7.4 db, and using the ant colony optimization technique it is around 7.1 db. the papr value obtained using sfl optimization is 6.5 db, which is almost a 38 % reduction of papr compared to the original fbmc model. figure 4 shows that for different V values the proposed sfla performs better compared to other existing techniques. the performance in terms of ber is shown in figure 5: at an snr of 10 db, the ber is much lower for the proposed method than for the clipped signal, the original fbmc, and the other two techniques, i.e., satin bowerbird optimization (sbo) and ant colony optimization (aco). the bit error rate values at an snr of 10 db are tabulated in table 1.

figure 2. comparison of papr for different techniques.
figure 3. comparison of papr with respect to threshold values.
figure 4. comparison of papr for v = 4 and v = 8 using different methods.
figure 5. comparison of ber with respect to snr.

table 1. ber obtained using different techniques at an snr of 10 db (s: surge of information).
technique | ber
clipping at s = 90 | 10^-2
clipping at s = 30 | 10^-2.8
clipping at s = 15 | 10^-3
fbmc original | 10^-3.4
sbo | 10^-3.45
aco | 10^-3.6
proposed sfla | 10^-3.8
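the ccdf curves of figures 2-4 report the probability that the papr exceeds a given threshold papr0; a minimal sketch of this empirical estimate follows, where the papr sample array is a stand-in for values collected over many simulated fbmc blocks.

import numpy as np

def ccdf(papr_db_samples, thresholds_db):
    # empirical pr(papr > papr0) for each threshold papr0
    samples = np.asarray(papr_db_samples)
    return np.array([(samples > t).mean() for t in thresholds_db])

# illustrative papr samples; in practice one value per simulated fbmc block
samples = 8.0 + np.random.default_rng(1).standard_normal(10_000)
thresholds = np.arange(6.0, 11.5, 0.5)
for t, p in zip(thresholds, ccdf(samples, thresholds)):
    print(f"papr0 = {t:4.1f} db -> ccdf = {p:.4f}")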
6. conclusions
in this paper, the technique of overlapping-pts has been formulated and demonstrated. the technique has been successfully simulated and applied to the problem of papr reduction in the considered system. the proposed technique features reduced computational complexity, and the phase optimization is achieved in the perceived technique using the shuffled frog leap phase optimization. the analysis of the simulation reports yields the insight that the technique achieves dominant papr performance: it can suppress the papr by at least 2 db, with the suppression reported in the experimentation averaging about 38 %. significantly, the papr improvement does not degrade the ber.

references
[1] t. hwang, c. yang, g. wu, s. li, g. y. li, ofdm and its wireless applications: a survey, ieee transactions on vehicular technology, 58(4) (2009), pp. 1673-1694. doi: 10.1109/tvt.2008.2004555
[2] h. q. wei, a. schmeink, comparison and evaluation between fbmc and ofdm systems, proc. of international itg workshop on smart antennas, ilmenau, germany, 3-5 march 2015, pp. 1-7.
[3] p. banelli, s. buzzi, g. colavolpe, a. modenini, f. rusek, a. ugolini, modulation formats and waveforms for 5g networks: who will be the heir of ofdm: an overview of alternative modulation schemes for improved spectral efficiency, ieee signal processing magazine, 31(26) (2014), pp. 80-93. doi: 10.1109/msp.2014.2337391
[4] s. frank, filterbank based multi carrier transmission (fbmc) – evolving ofdm, proc. of european wireless conference, lucca, italy, 12-15 april 2010, pp. 1051-1058. doi: 10.1109/ew.2010.5483518
[5] f. schaich, t. wild, waveform contenders for 5g – ofdm vs fbmc vs ufmc, proc. of international symposium on communications, control and signal processing, athens, greece, 21-23 may 2014, pp. 457-460. doi: 10.1109/isccsp.2014.6877912
[6] y. a. jawhar, l. audah, m. a. taher, k. n. ramli, n. s. m. shah, m. musa, m. s. ahmed, a review of partial transmit sequence for papr reduction in the ofdm systems, ieee access, 7 (2019), pp. 18021-18041. doi: 10.1109/access.2019.2894527
[7] a. mohammed, s. hussein, r. amr, a. saleh, a novel iterative-slm algorithm for papr reduction in 5g mobile fronthaul architecture, ieee photonics journal, 11(1) (2019), pp. 1-12. doi: 10.1109/jphot.2019.2894986
[8] n. shi, w. shouming, a partial transmit sequences based approach for the reduction of peak-to-average power ratio in fbmc system, proc. of wireless and optical communications conf., 2016, pp. 1-3. doi: 10.1109/wocc.2016.7506550
[9] muzaffar eusuff, kevin lansey, fayzul pasha, shuffled frog-leaping algorithm: a memetic meta-heuristic for discrete optimization, engineering optimization, 38(2) (2006), pp. 129-154. doi: 10.1080/03052150500384759
[10] n. al harthi, z. zhang, d. kim, s. choi, peak-to-average power ratio reduction method based on partial transmit sequence and discrete fourier transform spreading, electronics, 10 (2021), 642. doi: 10.3390/electronics10060642
[11] y. a. jawhar, l. audah, m. a. taher, k. n. ramli, n. s. m. shah, m. musa, m. s. ahmed, a review of partial transmit sequence for papr reduction in the ofdm systems, ieee access, 7 (2019), pp. 18021-18041. doi: 10.1109/access.2019.2894527
[12] h. wang, x. wang, l. xu, w. du, hybrid papr reduction scheme for fbmc/oqam systems based on multi data block pts and tr methods, ieee access, 4 (2016), pp. 4761-4768. doi: 10.1109/access.2016.2605008
[13] l. yang, r. s. chen, y. m. siu, k. k. soo, papr reduction of an ofdm signal by use of pts with low computational complexity, ieee trans. broadcast., 52(1) (2006), pp. 83-86. doi: 10.1109/tbc.2005.856727
[14] p. boonsrimuang, k. mori, t. paungma, h. kobayashi, proposal of improved pts method for ofdm signal, 18th ieee int. symp. on personal, indoor and mobile radio communications, 2007, pp. 1-5. doi: 10.1109/pimrc.2007.4393989
[15] s. j. ku, c. l. wang, c. h. chen, a reduced-complexity pts-based papr reduction scheme for ofdm systems, ieee trans. wireless commun., 9(8) (2010), pp. 2455-2460. doi: 10.1109/twc.2010.062310.100191
[16] j. hou, j. ge, j. li, peak-to-average power ratio reduction of ofdm signals using pts scheme with low computational complexity, ieee trans. broadcast., 57(1) (2011), pp. 143-148. doi: 10.1109/tbc.2010.2079691
[17] l. j. cimini, n. r. sollenberger, peak-to-average power ratio reduction of an ofdm signal using partial transmit sequences with embedded side information, ieee global telecommunications conf., 2 (2000), pp. 746-750. doi: 10.1109/glocom.2000.891239
[18] c. c. feng, c. y. wang, c. y. lin, y. h. hung, protection and transmission of side information for peak-to-average power ratio reduction of an ofdm signal using partial transmit sequences, 58th ieee vehicular technology conf., 4 (2003), pp. 2461-2465. doi: 10.1109/vetecf.2003.1285976
[19] a. d. s. jayalath, c. tellambura, side information in par reduced pts-ofdm signals, 14th ieee int. symp. on personal, indoor and mobile radio communications, 1 (2003), pp. 226-230. doi: 10.1109/pimrc.2003.1264266
[20] paweł kwiatkowski, digital-to-time converter for test equipment implemented using fpga dsp blocks, measurement, 177 (2021), pp. 1-11. doi: 10.1016/j.measurement.2021.109267
[21] andrás kalapos, csaba gór, róbert moni, istván harmati, vision-based reinforcement learning for lane-tracking control, acta imeko, 10(3) (2021), pp. 7-14. doi: 10.21014/acta_imeko.v10i3.1020
[22] eulalia balestrieri, luca de vito, francesco picariello, sergio rapuano, ioan tudosa, a review of accurate phase measurement methods and instruments for sinewave signals, acta imeko, 9(2) (2020), pp. 52-58. doi: 10.21014/acta_imeko.v9i2.802

acta imeko december 2013, volume 2, number 2, 96

section: advertisement
arctic metrology
centre for metrology and accreditation – mikes
mikes-kajaani, tehdaskatu 15, puristamo 9p19, fi-87100 kajaani, finland

staff of mikes-kajaani (from left): sauli kilponen, petri koponen, kari kyllönen, jani korhonen, timo nissilä and aimo pusa. second from right: dr. rainer engel from the ptb, germany.

mikes-kajaani, the world's northernmost national standards laboratory located near the arctic circle, has integrated well into the local research community, and serves both national and international customers with calibrations and special services.
premises in the newly designed, renovated building have proper temperature, vibration and humidity control as well as precisely determined g-values. the group running the facility is built up from eight local experts. force, torque and heavy masses are in full service and serve industry in its calibration needs. three different water flow calibration rigs have also been assembled and are in service. mikes is a partner in the joint research centre cemis (centre for measurement and information systems; the umbrella organisation of measurement technology in kajaani) together with the universities of oulu and jyväskylä, kajaani university of applied sciences, and vtt technical research centre of finland. cemis specialises in research and training in the field of measurement and information systems. in this cooperation, mikes has been active in applied research varying from optical-emission-based measurement methods for heavy metals in water to ski base topography measurements. new thinking on reliable measurement results is implemented in every project. mikes-kajaani is also participating in a european metrology research project (emrp) aiming for better measurements of larger forces.

calibration services at mikes-kajaani:
unit | range | relative uncertainty (k = 2) | remarks
force | 1 n – 1.1 mn | 2·10^-5 … 1·10^-4 | 8 devices
torque | 0.1 nm – 20 knm | 2·10^-5 … 5·10^-4 | 7 devices
mass | 50 kg – 2000 kg | 2·10^-6 | 3 devices
water flow | 0.2 l/s – 750 l/s | 0.03 % … 0.5 % | gravimetric or reference
consistency | 0 – 12 % | 0.2 % | flow rate 0.5 m/s – 4.0 m/s (consistency dependent)

contact information: group manager petri koponen, tel. +358 29 5054 453, firstname.lastname@mikes.fi, www.mikes.fi

real-time power quality measurement audit of government building – a case study in compliance with ieee 1159

acta imeko issn: 2221-870x march 2023, volume 12, number 1, 1-9

d. v. n. ananth1, naeem hannoon2, mohamed shahriman bin mohamed yunus3, mohd hanif bin jamaludin3, v. v. s. s. sameer charavarthy4, p. s. r. chowdary4
1 department of electrical and electronics engineering, raghu institute of technology, visakhapatnam-531162, andhra pradesh, india
2 faculty of electrical engineering, uitm, 40450 shah alam, malaysia
3 centre of excellence for engineering & technology, melaka, malaysia
4 department of electronics and communication engineering, raghu institute of technology, visakhapatnam-531162, andhra pradesh, india

section: research paper
keywords: ieee 1159 compliance; power quality measurement; switchgear; fluke device; facts devices; custom-power devices; power auditing
citation: d. v. n. ananth, naeem hannoon, mohamed shahriman bin mohamed yunus, mohd hanif bin jamaludin, v. v. s. s. sameer charavarthy, p. s. r. chowdary, real-time power quality measurement audit of government building – a case study in compliance, acta imeko, vol. 12, no. 1, article 21, march 2023, identifier: imeko-acta-12 (2023)-01-21
section editor: francesco lamonaca, university of calabria, italy
received may 2, 2022; in final form february 7, 2023; published march 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: d. v. n. ananth, e-mail: dvnananth1@gmail.com
abstract
power quality (pq) measurements and auditing play a vital role in smart grid applications and in industrial safety and reliability. the major electrical pq characteristics and parameters, like voltage sags/swells, harmonic distortion, voltage unbalance, voltage variation & flicker, and supply interruptions, are studied with the aim of maintaining the international or national standards. the electrical characteristics are studied and analyzed for single-phase and poly-phase systems according to the ieee 1159 recommended practice. the significant pq constraints and parameters, like electromagnetic interference phenomena deviating from regular operation due to load equipment or source-to-load interaction, are described. further, different pq monitoring devices, their roles, and applications are discussed. in this paper, power quality monitoring of equipment using a fluke instrument and accessories to identify the equipment performance, audit, and pq issues is discussed practically for two panel-board switchgear units as a case study in malaysia. here, trends in voltage and current changes with current imbalances were observed at 8 o'clock in the morning of 1st may 2019 and at 9 o'clock in the evening of 7th may 2019.

1. introduction
power quality (pq) plays a vital role in utility systems, with emphasis on a) continuity of the delivered power supply and b) quality or purity of the voltage [1], [2]. pq problems can be defined as deviations in voltage, current or frequency that result in failure or misoperation of electrical equipment or compromise the integrity of the power supplied to the system. poor-pq 'events' such as voltage sags, swells, transients, harmonics, unbalance, interruptions, flickering, etc., are the various pq issues [3]-[5]. it is crucial to perform a power quality audit to identify any pq problems in a system. the audit helps to understand which types of equipment and devices in an industrial plant are more sensitive, and the associated issues they face due to current and voltage disturbances. many commercial pq monitoring instruments have sampling rates of 256 samples per cycle, since the majority of pq events have frequency contents below 5 khz, as observed in [6]. the availability of high-end instruments to capture infrequent very-high-frequency events is limited due to technical and economic hurdles. pq disturbances are mainly classified as short-term and long-term variations: events caused by higher-load switching devices like drives are short-term, while disturbances such as over-voltages and short-voltage faults come under long-term [7]. there are specific standards that are followed to maintain equipment and personnel safety, and they depend on the country and the industry [8]-[9]. some associations accountable for developing pq standards comprise the institute of electrical and electronics engineers (ieee), the international electrotechnical commission (iec), the national electrical manufacturers association (nema), the national fire protection association (nfpa), the national environmental policy act (nepa), and the american national standards institute (ansi), etc. [10]. the true value of any pq monitoring program lies in its ability to analyse and interpret voluminous raw data and generate actionable information to prevent pq problems or improve the overall power quality performance.
ieee 1100-1999 and ieee 1159 are among the finest sources for describing pq concerns and providing solutions to equipment and system difficulties. pq refers to the powering and grounding of electronic equipment in a way that is appropriate for the device's functioning and compatible with the property's wiring system and other connected equipment. the recommended practice for powering and grounding is ieee std. 1100-1999, and that for monitoring electrical power quality is ieee std. 1159-2009 [11]. the iec also provides a platform for companies and industries to discuss and develop the required standards, like iec std. 61000-2-1, electromagnetic compatibility (emc): description of the environment for low-frequency conducted disturbances and signalling in public power supply systems [11]. the standard definition in ieee std. 1159-2009 is "any power problem manifested in voltage, current, or frequency deviations that results in failure or misoperation of customer equipment" due to deficient power quality [12]. there is research on ieee std. 1159, and different authors have offered notable remarks on maintaining the standards for equipment and personnel safety. the s-transform method used as input to a hybrid classifier of pq disturbances shows pleasing results of efficient recognition and classification with the proposed real-time method [13]. an s-transform-based semi-supervised approach is discussed to solve the classification problem without the necessity of sample data: first, pq disturbance detection using the s-transform is chosen to extract enough time-frequency characteristics of pq events into the s-matrix for later categorization under noise; in the next step, seven to eight binary classifiers are created with a normalized rule to recognize these disturbances; finally, the multiple pq disturbances are classified using this method [14]. a discrete wavelet transform (dwt) based technique, with a sampling rate of 15 to 18 khz and de-noising, classifies pq features with an accuracy of 98.18 % as contrasted with the generalized wavelet algorithm, although it requires signals with a superior signal-to-noise ratio [15]. many practical types of equipment like flexible ac transmission system (facts) devices improve the overall pq standards and norms for companies and industries. case studies with two types of facts devices, the distribution static compensator (dstatcom) and the static var compensator (svc), were conducted in india in a real-time environment on the distribution power system; this helped to exemplify the modelling technique and the efficacy of these facts and custom-power devices in curtailing financial losses [16]. another case study used a dvr for voltage sag mitigation with a conventional pi controller [17]; here, better dynamic restoration under the voltage disturbance and the ability of post-fault recovery are observed on an extensive electrical distribution system. appropriate evaluation techniques are verified for the outcomes without and with compensation devices, and the role of heat-maps is evaluated. in a similar way, soft computing-based techniques are alternatives to conventional techniques for pq parameter enhancement, covering voltage regulation, power factor control, harmonics, dynamic loads, and non-linear loads.
the soft-computing techniques for pq applications include the particle swarm optimization (pso) algorithm [18], security- and cost-optimal allocation of multiple facts devices using multi-objective pso [19], and a few advanced non-linear methods like pq theory and a fractional-order pid controller in a dpfc for pq improvement [20]. the pq performance using facts and renewable energy sources (res), with technical and economic evaluation, is discussed in [21], [22]. the payback period is calculated for the application of facts devices in a case study in the metallurgical industry [23]. a malaysian case study on distributed generation reviews the current energy status, grid-interconnection pq issues, and implementation constraints [24]. in this paper, carbon emissions, res, the electrical pq standards and regulations of different countries, additional facts and custom-power devices, and techniques for pq enhancement are reviewed exclusively and in detail. the major objectives of the present work are to record and interpret raw data into useful information, analyse the pq measurement results and understand the characteristics of pq variations on the power supply. many pq events are organized worldwide to improve the pq standards for companies and industry [25]-[28]; these are done by the american power laboratory, the canadian electrical associations, unipede europe, and many standards organizations. books discuss various pq issues, harmonic analysis, sources, distortion and monitoring; equipment overheating, motor failure, capacitor failure, and incorrect power metering are examples of issues arising from harmonic distortion [29]-[30]. measurements are made of the components of a capacitor-start, capacitor-run single-phase induction motor with closed or semi-closed rotor slots, and a model is developed for the examination of the machine's performance in the frequency domain under the effect of harmonic voltages [31]. according to experimental and analytical studies of the effects of voltage and current harmonics on induction machines, transformers, appliances, and relays, the current versions of the ieee and iec standards are too restrictive for low-frequency voltage and current (integer) harmonics as they apply to residential power systems. it is advised that flexible, rather than strict, rules be set up, such that various harmonic spectra be allowed that cause the same extra temperature rise in transformers and induction machines [32]-[33]. power quality factors are evaluated using a variety of computer algorithms, each with unique benefits; in order to compare the perceived power quality of complicated electric systems, one study improved the curve fitting algorithm (cfa), which has excellent accuracy in the assessment of the signal's power quality characteristics [34]. the tracking of pq metrics by the biggest italian telecommunications company took place inside four medium/low-voltage transformer rooms of telecommunication power substations; the tests were carried out using a suitable home-made apparatus that sampled the mains' three phases and neutral lines over the course of around two years, and the results are presented [35]-[36].
in this paper, the objective is not to describe signal processing or artificial intelligence methods, but rather to discuss challenges and the probable application of signal processing techniques in turning raw pq measurement data into a much more valuable commodity: knowledge and information to improve pq performance. section 2 of the paper presents the research methodology with the field-measurement approach, while sections 3 and 4 provide a description of the case study background, with the equipment installation and erection locations, and the results and analysis of the proposed method with the fluke measuring device used to analyse raw pq measurement data. the applications described provide the basis for research efforts done at the centre of excellence for engineering & technology (create), malaysia (many of which are under way around the world) to identify new and improved methods for data analysis and for drawing important conclusions from the measurement data. the main key research queries explored are:
• what affects the pq in the switchgear terminal unit?
• what existing standards help in pq improvements in malaysia?
• what is the solution/enhancement for the current imbalance and other pq issues?
in this regard, the explicit research objectives on pq events are:
1. analyse the pq events and their impact on the supply system and sensitive loads;
2. explore the precise pq events distressing the create switchgear terminal unit and the distribution units;
3. perform a pilot survey on pq events to review the satisfaction level with the services available;
4. monitor the pq events and suggest suitable monitoring instruments.

2. research methodology (field measurement)
the pq monitoring and analysis in a practical environment using the field-measurement method is shown in the flowchart representation of figure 1a, and a closed-loop circular diagram representation is shown in figure 1b. in this diagram, we can observe that the equipment location at one point or multiple points has to be chosen initially. later, the pq monitoring devices are installed based on voltage and power specifications. the data is stored and retrieved for the pq analysis, and any deviations from typical values under current unbalance or voltage disturbances are analysed and recorded. after this, the pq equipment and devices are moved to other switchboards, and the process is repeated for auditing and measuring. finally, characteristics tables are drawn, deviations in parameters are shown in tables, and graphs are stored for the records. figure 1a and figure 1b depict how the power quality problems are analysed and classified. this paper focuses on the field-measurement analysis technique to analyse pq waveforms and signals like voltage and current. with the installation of the pq measuring devices, the pq events are recorded, summarized and analysed to understand the sequence of each event, and the influence on the nearest, farthest and sensitive loads is examined in detail. the field-measurement-based auditing technique with a fluke-like pq measuring device is very popular for understanding pq events. this method presents a good time resolution for a narrow window, while a large window is useful for good frequency resolution. in addition, this method has advantages: the dependency on system parameters is decreased, mathematical modelling and complex erections are reduced, it is therefore easy to implement, and not all measuring devices are necessary.
this paper presents the development of a pq monitoring system using the field-measurement analysis technique, which is an auditing approach based on pq measuring devices. from the fluke device, parameters of the power quality signals are estimated, such as the rms and fundamental values, total harmonic distortion (thd), current to neutral and current through ground, for both voltage and current. then, characteristics of the signals are measured using these devices, and calculations are made from the signal parameters based on ieee std. 1159-2009, as shown in the flowchart in figure 1a. from the time illustration, pq parameters are measured, such as the instantaneous rms voltage, rms fundamental voltage, total waveform distortion, total harmonic distortion and total non-harmonic distortion. then, characteristics of the power quality signals are calculated from the signal parameters and are used for signal classification. the signal parameters and characteristics and their influence on the system are studied in detail in the further sections.

figure 1. a) flowchart showing the practical pq monitoring equipment installation and b) process flow schematic of the closed-loop operation for pq analysis.
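as a sketch of the signal parameters named above, the following python fragment estimates the rms value and the thd of a sampled waveform; the 50 hz test signal with 5 % fifth-harmonic content and the 256 samples-per-cycle rate are illustrative assumptions, not recorded fluke data.

import numpy as np

fs, f0 = 12800, 50                        # 256 samples per cycle at 50 hz
t = np.arange(10 * fs // f0) / fs         # ten cycles of samples
v = 230 * np.sqrt(2) * (np.sin(2 * np.pi * f0 * t)
                        + 0.05 * np.sin(2 * np.pi * 5 * f0 * t))

v_rms = np.sqrt(np.mean(v ** 2))          # rms value of the whole waveform

# harmonic peak amplitudes from the fft bins at integer multiples of f0
spec = np.fft.rfft(v) / len(v)
k0 = int(round(f0 * len(v) / fs))         # bin index of the fundamental
harm = 2 * np.abs(spec[k0::k0])           # peak amplitudes of h1, h2, h3, ...
thd = np.sqrt(np.sum(harm[1:] ** 2)) / harm[0]   # thd relative to the fundamental

print(f"rms = {v_rms:.1f} v, thd = {100 * thd:.2f} %")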
based on the ieee 1159 compliance, table 2 describes the range and nominal operating conditions constraints. the power interruption must be more than 1 minute of time with voltage less than 10 % of its nominal value of 230 v (< 23 v). the over-voltage is defined as voltage greater than 10 % and under-voltage is less than 6 % of nominal 230 v. the voltage unbalance is referred to 2 % change in any phase voltage from the nominal value and current unbalance is 30 % of operating load current. the highest neutral a) b) c) d) e) figure 2. a) layout of the indoor switchgear board with the pq equipment erection, b) fluke pq based components used in our case-study, c) the indoor switchgear board at pc-1 ready for erection, d) fluke pq based components erected and ready for testing, e) point of measurement-1 erection unit. table 1. model and type of the pq equipment. meter used purpose model no. fluke 1750 power recorder power quality analyzer 1750 multi clamp leaker measurement of leakage currents 140 fluke true rms multimeter voltage measurement 117 fluke clamp meter current measurement 376 table 2. categories and compliance standards with ieee 1159. categories compliance ieee 1159 power interruption more than 1 minute, 10 % (23 v) overvoltage > 253 v (+10 %) under-voltage < 216.2 v (-6 %) voltage unbalance 2 % current unbalance 30 % high neutral to earth voltage 3 v ne impulse > 20 v transient voltage dip (sag) 23 v – 207 v voltage swell > 253 v voltage fluctuations 6 % thd (voltage) 5 % thd (current) 3 % dc offset 0.10 % acta imeko | www.imeko.org march 2023 | volume 12 | number 1 | 5 (n) to earth (e) voltage must be within 3 v and the impulse voltage is said to be greater than 20 v between these ne terminals. the voltage sag is defined for a nominal voltage of 230 v is between 23 to 207 volts and swell if the voltage is greater than 253 volts. the voltage fluctuation is said be when there is ±6 % variation in voltage range from 230 volts and this repeats for every millisecond of time with the nominal voltage. the maximum allowable total harmonic distortion (thd) voltage is 5 % and current is 3 % and the dc offset or dc component on the ac supply should be less than 0.10 %. these are the allowable ranges for the standard ieee 1159/ 2009. the pq event under voltage-time stamp is shown in the pictorial form in figure 3. here, 90-110 % of the voltage event magnitude on the y-axis represents regular operation, and 100 % is ideal voltage. the voltage between 0 to 90 % is under-voltage and beyond 110 % is over-voltage. in the x-axis, the time up to 0.5 cycles considered to be notch or transient for under-voltage and over-voltage. a momentary fault is from 0.5 cycles to 3s, and to 1 minute is called the temporary fault, or beyond 1 minute is called sustained interruption. after discussing the practical layout and basic background on the work, results are discussed based on practical case studies in the next section. 4. results and discussion in this section, different case studies are studied and analysed at (emsb1 point)-voltage values based on the practical reading. here, voltage waveforms and data recorded are explored in the first case under steady-state analysis. later in the second case, load variation is dealt and in the last case, under very light load conditions is analysed. 4.1. case 1: voltage flow diagram of the three phases under normal operating conditions the three phases voltage distribution at emsb discussed in figure 2 switchgear terminal location is shown in figure 4. 
4. results and discussion
in this section, different case studies are studied and analysed at the emsb-1 point based on the practical voltage readings. the recorded voltage waveforms and data are explored in the first case under steady-state analysis; the second case deals with load variation; and the last case analyses very light load conditions.

4.1. case 1: voltage flow diagram of the three phases under normal operating conditions
the three-phase voltage distribution at the emsb switchgear terminal location discussed in figure 2 is shown in figure 4. here, the rms voltage variation in all three phases, shown as the red, blue, and black waveforms, is between 240 and 248 volts under regular operation with current imbalance in an industrial/company location. voltage surges with amplitudes of 221.8 v and 219.4 v were observed on the fluke meter because of load shifting, as seen in the figure. this current imbalance leads to voltage sag-like behaviour, but within limits. table 3 shows the deviation from the expected value, with maximum and minimum values in the three different phases and at the neutral point at various times from 1st may to 6th may 2019.

figure 4. rms voltage of the three phases recorded in may 2019 at emsb-1 at the point of connection-1.

table 3. maximum and minimum rms voltage of the three phases and neutral from 1st may to 6th may 2019.
phase | max (rms) | time | min (rms) | time
v rms avg l1n | 246.20 v | 01/05/2019 08:00:00 | 233.59 v | 03/05/2019 10:50:00
v rms avg l2n | 246.37 v | 01/05/2019 07:30:00 | 234.06 v | 03/05/2019 10:50:00
v rms avg l3n | 247.47 v | 01/05/2019 07:30:00 | 234.93 v | 03/05/2019 10:50:00
v rms avg ng | 2.91 v | 06/05/2019 07:10:00 | 1.29 v | 03/05/2019 17:10:00

the system can also be utilized for continuous sampling as well as for collecting and recording the real-time signal; the data is then analysed and stored in a database, and the system can be set to store data into the computer automatically at specific times. figure 8 shows examples of data stored in notepad using this system. based on the values measured using the system, the results obtained are comparable to the values measured using the existing equipment, which is the fluke power quality analyser. the development of the real-time power quality monitoring system is shown in figure 2a to figure 2f. the system is capable of performing all standard power line measurements, such as voltage (rms), current (rms), frequency, real power, reactive power, apparent power and power factor, which are also plotted in graphs as shown in figure 4. based on table 3, the significant observations are that the maximum average rms voltages vl1n, vl2n and vl3n at p1 are 246.20 v (7.04 %), 246.37 v (7.12 %), and 247.47 v (7.60 %), respectively, referred to a nominal voltage of 230 v. the average rms vng is 2.91 v, which is lower than the maximum requirement (3 v). the voltage fluctuation is 5.45 %, which complies with the standard requirement (< 6 %) for a trend of 10 minutes of data.
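the percentages quoted alongside the voltages in table 3 (and in the later tables) are deviations from the 230 v nominal; a one-line check, with the l1n maximum as the worked example, reproduces them.

def deviation_pct(v_rms, v_nom=230.0):
    # signed deviation of a reading from the nominal voltage, in percent
    return 100.0 * (v_rms - v_nom) / v_nom

print(f"{deviation_pct(246.20):.2f} %")   # -> 7.04 %, matching table 3 for l1n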
the maximum and minimum 10-minute trend data for the currents at emsb-1 (p1) are recorded for the analysis. the maximum average rms currents l1, l2 & l3 at p1 are 158.81 a, 181.25 a and 158.51 a, respectively. the current unbalance of 14.97 % is lower than the maximum permissible value (< 30 %). the foremost concern for the malaysian electrical power system is to sustain a reliable power supply and to maintain an almost constant voltage under variable load conditions. the present real-time study plays a vital role in understanding the effects on each line or phase of the power system and the timing at which they occur; unbalance, neutral current concerns and the grounding technique are the main factors observed in this case study.

figure 5. rms voltage of the three phases at emsb-1 at the point of connection-1 under various loading conditions.

table 4. maximum and minimum rms current of three phases and neutral from 30th april to 8th may 2019 under current imbalance operation observed at emsb-1.
a rms avg l1n: max 158.81 a (07/05/2019 07:40:00), min 51.54 a (01/05/2019 10:30:00)
a rms avg l2n: max 181.25 a (01/05/2019 21:10:00), min 50.86 a (30/04/2019 18:40:00)
a rms avg l3n: max 158.51 a (08/05/2019 00:20:00), min 53.58 a (01/05/2019 10:20:00)
a rms avg n: max 1.57 a (01/05/2019 04:00:00), min 1.41 a (01/05/2019 09:40:00)
a rms avg g: max 0.12 a (01/05/2019 08:30:00), min 0.07 a (30/04/2019 17:20:00)
% a unbalance: max 14.94 (03/05/2019 19:10:00), min 2.40 (06/05/2019 07:20:00)

4.3. voltage flow diagram of the three phases under very light load conditions

in this case, only light loads, such as lighting, are applied, and the voltage fluctuations are analysed. a light load in general results in an increase in voltage, which therefore has to be regulated back towards the nominal value. the voltage behaviour at emsb-2 is analysed on a 10-minute time scale between 1st and 5th may 2019 in this case study. the voltage varies from 232 v to 248 v under light-load usage; the voltage rms parameters recorded at db ha/g/p/e (p2) for all three phases, neutral and ground, together with the rms currents of the l1, l2 and l3 phases, are shown in figure 6 and table 5.

figure 6. rms voltage of the three phases at emsb-1 at the point of connection-1 under very light load conditions.

a normal rms current of 18 a in the three phases and a neutral current of 6 a are observed at emsb-1, as summarized in figure 7 and table 6. in this case study, the 10-minute trend of the voltage at the point of connection (p2) is analysed and shown in figure 7. the maximum average rms voltages vl1n, vl2n & vl3n at p2 are 245.07 v (6.55 %), 245.30 v (6.65 %) and 247.59 v (7.65 %), respectively, referred to a nominal voltage of 230 v. the average rms vng is 3.5 v, which is higher than the maximum requirement (3 v). the voltage fluctuation is 7.2 %, which is higher than the standard ieee 1159 requirement (< 6 %). by managerial control, the harmonic spread among areas and the system unbalance are dealt with, and a general pq improvement based on the proposed field measurement is obtained. the maximum and minimum of the 10-minute trend data for the currents at db ha/g/p/e at the point of connection (p2) are analysed in this case study.
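the 10-minute maximum/minimum trends reported in tables 3 to 6 can be derived from raw rms samples by simple resampling. a minimal sketch, assuming pandas, a datetime-indexed series and fabricated sample values (all names are illustrative):

```python
# a minimal sketch of 10-minute max/min trend extraction; data is fabricated
import pandas as pd

# two hours of 1 s rms current samples for one phase (synthetic values)
idx = pd.date_range("2019-05-01 00:00:00", periods=7200, freq="s")
samples = pd.Series(range(7200), index=idx, name="a_rms_l1") % 160 + 40.0

# 10-minute trend of maxima and minima, as reported in the tables
trend = samples.resample("10min").agg(["max", "min"])
print(trend.head())
print("overall max:", trend["max"].max(), "at", trend["max"].idxmax())
```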
table 5. maximum and minimum rms voltage of three phases and neutral from 1st may to 7th may 2019 under light load operation at emsb-1.
v rms avg l1n: max 245.07 v (01/05/2019 08:30:00), min 232.15 v (03/05/2019 10:50:00)
v rms avg l2n: max 245.30 v (01/05/2019 08:30:00), min 232.86 v (03/05/2019 10:50:00)
v rms avg l3n: max 247.59 v (01/05/2019 07:30:00), min 234.05 v (03/05/2019 10:50:00)
v rms avg ng: max 3.50 v (02/05/2019 08:00:00), min 1.56 v (07/05/2019 19:30:00)

figure 7. rms current for the three phases at the db ha/g/p/e point (current values at emsb-2) from 30th april to 4th may 2019.

the maximum average rms currents l1, l2 & l3 at p2 are 1.11 a, 2.94 a and 7.04 a. the current unbalance is 54.85 %, which is higher than the limit required by ieee 1159 (< 30 %). the final summarized form of the computer and business equipment manufacturer's association (cbema) curve methodology (field measurement) is described in figure 8. in this graph, the y-axis represents the voltage deviation and the x-axis denotes the time scale. it is observed that 289 under-voltage disturbances with durations of up to 8.3 ms occurred in one year, from may 2019 to april 2020. also, 98 under-voltages (sags), 18 interruptions, 164 over-voltages (swells) and 19 transients are observed under the cbema curve. hence, this type of analysis with pq equipment, together with chart-based analysis using the cbema curve, aids in understanding the pq disturbances and in formulating measures for the company/industry based on the ieee 1159 standards.
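a cbema-style screening can be sketched as a piecewise tolerance envelope over (voltage magnitude, duration) pairs. the breakpoints below are simplified placeholders for illustration only; they are not the published cbema/itic curve, and all names are illustrative:

```python
# a minimal sketch of envelope-based event screening; breakpoints are assumptions
# each entry: (max duration in seconds, lower % of nominal, upper % of nominal)
ENVELOPE = [
    (0.02, 0.0, 200.0),            # sub-cycle transients: very wide tolerance
    (0.5, 70.0, 140.0),            # short events
    (10.0, 80.0, 120.0),           # medium events
    (float("inf"), 90.0, 110.0),   # steady state: +/-10 %
]

def within_envelope(magnitude_pct: float, duration_s: float) -> bool:
    """true if the event lies inside the (simplified) tolerance envelope."""
    for max_dur, lo, hi in ENVELOPE:
        if duration_s <= max_dur:
            return lo <= magnitude_pct <= hi
    return False

# events as (percent of nominal voltage, duration in seconds)
for mag, dur in [(65.0, 0.0083), (87.0, 30.0), (113.0, 120.0)]:
    print(mag, dur, "inside" if within_envelope(mag, dur) else "outside")
```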
consequently, it is observed that the pq problems of the create electrical medium voltage switch board-1 (emsb-1) and emsb-2 system require more concentrated attention, as the pq condition in one sub-area, one sub-station or one electrical terminal will definitely influence the pq level of the other electrical terminals due to unbalance, neutral current flow and harmonic propagation. the proposed method is recommended, and fluke or other pq devices, with the necessary testing equipment, should be installed at the electrical sub-system or main point where the sensitive loads are connected. the pq assessment practices are observed with the auditing technique under the existing unbalance and sudden load-varying conditions.

in this paper, a real-time monitoring system for power quality signals is developed using the field measurement method. the system performs power line measurements such as voltage and current in root mean square (rms), real power, apparent power, reactive power, frequency and power factor. from the tfr of the power quality signal, parameters of the signal are estimated, such as the rms voltage, rms fundamental voltage, total waveform distortion, total harmonic distortion and total non-harmonic distortion. based on the monitoring tests, the system exhibits excellent accuracy of calculation, with a low percentage error compared to the existing equipment.

5. conclusions

a real-time power auditing scheme with field measurement is proposed in this paper to advance the pq of a sub-station whose pq requirements and challenges differ. the voltage fluctuation rate at db ha/g/p/e is 7.2 %, which is higher than the 6 % allowed by ieee 1159 and is attributed to the high impedance of the installation. voltage unbalance, sudden current changes with load variation, and harmonics are observed using pq measurement devices such as the fluke instruments; auditing is carried out, and the proposed technique is then applied. the high neutral-to-ground voltage (average vng rms) at db ha/g/p/e is more than 3 v (3.5 v at the site), which is attributed to intermittent termination or loose contacts in the grounding system. high vng impulse activity is observed at the installation, which is attributed to intermittent single line-to-ground faults and can affect electrical equipment. from the real-time observations it is found that the efficiency of power distribution in an industrial network can be increased with proper electrical auditing, and that a solution for harmonics and unbalance is needed from the power sub-station to the load point, as these mainly influence sensitive loads such as computers, scada systems and small lighting systems. in addition, the proposed technique is useful for enhancing the pq of existing sensitive load buses (or cps) without installing new active power filters or external devices on the respective buses, because the incorporated pq upgrading of the sub-system is addressed with the available devices. pq should also be adopted as part of a preventive maintenance program. this type of data analysis, done on the create electrical medium voltage switch boards (emsb-1 and emsb-2), practically helps in studying the environmental, economic and engineering impacts on the power systems. the main contribution of the proposed technique is the effective deployment of an array of available pq devices and advanced cbema methods in the electrical power auditing process, so that the effect of one parameter on another, the grid influence, and the roles of neutral and ground are understood and help in pq improvement. to this aim, pq events are characterized and suitable monitoring instruments are suggested in this paper. the effects of light load, current imbalance and standard operating conditions also considerably influence the system's voltage deviations. because of harmonic propagation, current unbalance and the mutual effects of unbalances between the sub-stations and the load points, the loading order of each sub-station is of vast significance; this concern is addressed by using a scheduling framework, and regular power auditing is observed with the proposed process. if non-linear and unbalanced loads are considered, the impact of the voltage deviations is very high, which is to be carefully investigated in future studies. the cost of the pq devices, their location, the type of loading and the disturbances play a vital role in pq improvement.

table 6. maximum and minimum rms current of three phases and neutral from 30th april to 8th may 2019 under light load operation observed at emsb-2.
a rms avg l1: max 1.11 a (03/05/2019 11:20:00), min 0.55 a (02/05/2019 10:30:00)
a rms avg l2: max 2.94 a (30/04/2019 17:00:00), min 2.31 a (01/05/2019 11:00:00)
a rms avg l3: max 7.04 a (02/05/2019 16:10:00), min 5.84 a (04/05/2019 07:50:00)
a rms avg n: max 6.68 a (30/04/2019 17:00:00), min 5.60 a (04/05/2019 18:30:00)
a rms avg g: max 0.16 a (30/04/2019 17:00:00), min 0.05 a (04/05/2019 18:00:00)
% a unbalance: max 54.85 (02/05/2019 14:40:00), min 45.58 (04/05/2019 05:10:00)

figure 8. computer and business equipment manufacturer's association (cbema) curve methodology (field measurement).
editorial to selected papers from imeko tc1-tc7-tc13-tc18 joint symposium and mathmet (european metrology network for mathematics and statistics) workshop 2022

acta imeko issn: 2221-870x december 2022, volume 11, number 4, 1 2

eric benoit1 1 université savoie mont blanc, polytech'savoie, lab. d'informatique, systèmes, traitement de l'information et de la connaissance (listic), b.p. 80439 annecy-le-vieux cedex 74944, france section: editorial citation: eric benoit, editorial to selected papers from imeko tc1-tc7-tc13-tc18 joint symposium and mathmet (european metrology network for mathematics and statistics) workshop 2022, acta imeko, vol. 11, no. 4, article 2, december 2022, identifier: imeko-acta-11 (2022)-04-02 received december 28, 2022; in final form december 28, 2022; published december 2022 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: eric benoit, e-mail: eric.benoit@univ-smb.fr

dear readers, this issue includes a first selection of papers from the imeko tc1-tc7-tc13-tc18 joint symposium and mathmet (european metrology network for mathematics and statistics) workshop "cutting-edge measurement science for the future", held in porto in august 2022. due to the pandemic, these four imeko communities had not been able to meet face to face since 2019, and the enthusiasm of all the authors to present their studies definitely contributed to the success of this event. for this reason, a second selection will be presented in a future issue. three papers related to education and training in measurement and instrumentation are presented in this issue.
jakub svatos and jan holub explored the impact of the pandemic on measurement education with the paper "how the covid-19 changed the hands-on laboratory classes of electrical measurement" [1]. dominik pražák et al. present "a training centre for intraocular pressure metrology" [2]. in another field of measurement education and training, raik illmann, maik rosenberger and gunther notni expose their "training program for the metric specification of imaging sensors" [3]. the measurement of biophysical properties is a topic shared by several teams during the symposium. it is considered first by francesco crenna et al. with their paper on "biomechanics in crutch assisted walking" [4], and then by jakub wagner and roman z. morawski with a "spatiotemporal analysis of human gait, based on feet trajectories estimated by means of depth sensors" [5]. finally, this first selection ends with 2 papers on knowledge management considerations related to measurement science: in "contrasting roles of measurement knowledge systems in confounding or creating sustainable change" [6], william p. fisher compares scientific modelling and statistical modelling in the context of promoting sustainable change; in "a mathmet quality management system for data, software, and guidelines", keith lines et al. present the essential components of the mathmet qms, which has the potential to become a reference in the measurement community [7]. i hope you will enjoy your reading. eric benoit, section editor

references
[1] jakub svatos, jan holub, how the covid-19 changed the hands-on laboratory classes of electrical measurement, acta imeko 11 (2022) 4, pp. 1-7. doi: 10.21014/actaimeko.v11i4.1301
[2] dominik pražák, vítězslav suchý, markéta šafaříková-pštroszová, kateřina drbálková, václav sedlák, šejla ališić, anatolii bescupscii, vanco kacarski, a training centre for intraocular pressure metrology, acta imeko 11 (2022) 4, pp. 1-7. doi: 10.21014/actaimeko.v11i4.1300
[3] raik illmann, maik rosenberger, gunther notni, training program for the metric specification of imaging sensors, acta imeko 11 (2022) 4, pp. 1-6. doi: 10.21014/actaimeko.v11i4.1361
[4] francesco crenna, matteo lancini, marco ghidelli, giovanni b. rossi, marta berardengo, biomechanics in crutch assisted walking, acta imeko 11 (2022) 4, pp. 1-5. doi: 10.21014/actaimeko.v11i4.1328
[5] jakub wagner, roman z. morawski, spatiotemporal analysis of human gait, based on feet trajectories estimated by means of depth sensors, acta imeko 11 (2022) 4, pp. 1-7. doi: 10.21014/actaimeko.v11i4.1349
[6] william p. fisher, jr., contrasting roles of measurement knowledge systems in confounding or creating sustainable change, acta imeko 11 (2022) 4, pp. 1-6. doi: 10.21014/actaimeko.v11i4.1330
[7] keith lines, jean-laurent hippolyte, indhu george, peter harris, a mathmet quality management system for data, software, and guidelines, acta imeko 11 (2022) 4, pp. 1-6. doi: 10.21014/actaimeko.v11i4.1348
the experience gained from implementing an iso 56000-based innovation management system

acta imeko issn: 2221-870x june 2023, volume 12, number 2, 1 6

tzvetelin gueorguiev1 1 university of ruse "angel kanchev", 8 studentska street, 7004 ruse, bulgaria section: research paper keywords: innovation management system; iso 56000 series; intellectual property management; best practice citation: tzvetelin gueorguiev, the experience gained from implementing an iso 56000-based innovation management system, acta imeko, vol. 12, no. 2, article 37, june 2023, identifier: imeko-acta-12 (2023)-02-37 section editor: leonardo iannucci, politecnico di torino, italy received january 29, 2023; in final form may 27, 2023; published june 2023 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: tzvetelin gueorguiev, e-mail: tzgeorgiev@uni-ruse.bg

abstract: the aim of this paper is to share the experience gained from an early adoption of the iso 56000 series of standards for innovation management systems. the university of ruse is among the first universities in bulgaria to implement an iso 9001 quality management system. later, this system was updated and extended with elements of iso 21001:2018, management systems for educational organizations. since 2020, the technology transfer and intellectual property centre (ttipc) at the university of ruse has implemented a number of requirements and guidelines of the iso 56000 series of standards. the foundation of the integrated management system is iso 56002:2019, with guidance for innovation management. the principles for innovation management from iso 56000:2020 are being followed while implementing key methods and tools from iso 56003:2019 for innovation partnership, iso/tr 56004:2019 for innovation management assessment, iso 56005:2020 for intellectual property management, and iso 56006:2021 for strategic intelligence.

1. introduction

innovations are a key factor for organizational success. the ever more frequent use of the word 'innovation' threatens to cause a lack of sensitivity towards its true essence. in a way, this is similar to the use of the word 'quality', which now represents anything meaning perfection, excellence, conformity, satisfaction, etc. the iso 9000 series of standards is globally recognized as a model of a successful management system. the fundamentals, vocabulary and quality management principles are established in iso 9000:2015. the requirements for a quality management system are specified in iso 9001:2015. guidance on the implementation of the iso 9001 requirements, together with some examples, is given in iso/ts 9002:2016. guidance for achieving sustained success of the organization is given in iso 9004:2018; this standard also contains a 5-level self-assessment tool for various quality management processes, and its clause 11.4 defines the guidance for processes related to innovation. in time, the upward trend of iso 9001-certified quality management systems has somewhat slowed down, giving way to other management systems, such as information security, energy and, more specifically, innovation management systems [1]. this is reaffirmed by lines et al., along with highlighting the importance of the process approach and the "plan-do-check-act" (pdca) cycle for all management system standards published by the international organization for standardization, iso [2]. the pdca cycle and innovations are also part of the standard iso 21001:2018. the factor which practically makes the difference in quality management systems is the rate of improvement, be it incremental or continual. any innovation may be considered to be an improvement, but not every improvement is innovative. this paper presents an overview of iso 56000-based innovation management systems in order to better understand the significance of such management systems and the perceived benefits of their implementation. the current state of the iso 56000 series of standards for innovation management systems is discussed in section 2. in section 3, some examples of the implementation of guidance standards from the iso 56000 series at the university of ruse are described. finally, in the concluding section, some of the proven practices for innovation management are summarized.

2. the iso 56000 series of standards

the need for a specific standard for innovation management systems became evident at the turn of the 21st century. a detailed chronology of the development of innovation management standards is presented by da silva [3]. in 2008, the european committee for standardization (cen) created a specific technical committee on innovation management, cen/tc 389, which published the first set of international standards on innovation management, the cen/ts 16555 series. this initiative was continued by the international organization for standardization, which established a similar technical committee, iso/tc 279 "innovation management". until january 2023, iso has published six standards for innovation management as part of the iso 56000 series:
• iso 56000:2020 innovation management – fundamentals and vocabulary [4];
• iso 56002:2019 innovation management – innovation management system – guidance [5];
• iso 56003:2019 innovation management – tools and methods for innovation partnership – guidance [6];
• iso/tr 56004:2019 innovation management assessment – guidance [7];
• iso 56005:2020 innovation management – tools and methods for intellectual property management – guidance [8];
• iso 56006:2021 innovation management – tools and methods for strategic intelligence management – guidance [9].
four additional iso standards, plus an update of iso 56000:2020, are at different stages of development:
• iso/awi 56000 innovation management – fundamentals and vocabulary (an update of the standard iso 56000:2020);
• iso/cd 56001 innovation management – innovation management system – requirements [10];
• iso/fdis 56007 innovation management system – tools and methods for managing opportunities and ideas – guidance [11];
• iso/dis 56008 innovation management – tools and methods for innovation operation measurements – guidance [12];
• iso/cd ts 56010 innovation management – illustrative examples of iso 56000 [13].
the similarity of the concepts behind iso 9000 and iso 56000, and behind iso/ts 9002 and iso 56002, is noticeable. yet, the approach applied for developing the innovation management standards is substantially different.
instead of first publishing the standard with requirements for innovation management systems, iso 56001, and then giving guidance on its implementation, iso/tc 279 has chosen a more customer-friendly approach. this "innovation" allows potential developers, implementers and auditors of innovation management systems to establish their systems based on the guidance and supporting standards. only when sufficient practice in maintaining such systems is in place will the requirements standard be used to audit, and possibly certify, conformance to iso 56001.

2.1. iso 56000:2020 innovation management – fundamentals and vocabulary

this standard was prepared by iso/tc 279 "innovation management" with the purpose of clarifying the terminology, concepts and principles to be used in the whole iso 56000 series of standards. 8 groups of terms are defined in clause 3. they are related to innovation, organization, objective, knowledge, intellectual property, innovation initiative, performance, and assessment. the alignment of these terms and definitions with the oslo manual and with the definitions of intellectual property in trips (agreement on trade-related aspects of intellectual property rights) and wipo (world intellectual property organization) is made clear in annex b of iso 56000:2020. in addition to clarifying the impact of innovations, clause 4 outlines the fundamental concepts and the 8 innovation management principles:
• realization of value;
• future-focused leaders;
• strategic direction;
• culture;
• exploiting insights;
• managing uncertainty;
• adaptability;
• systems approach.
each of these principles is detailed using a structure of 4 elements: statement, rationale, key benefits, and possible actions. this allows the innovation managers in the organization to have a clear purpose (the statement), some helpful information for raising awareness and achieving motivation of the interested parties (rationale and key benefits), as well as an outline of an action plan for establishing the foundations of a solid innovation management system (possible actions).

2.2. iso 56002:2019 innovation management system – guidance

this standard follows the high-level structure of annex sl. this makes the clauses of iso 56002, parts of the text, and some terms and definitions identical to the ones used in other management system standards, more specifically iso 9001 for quality management systems. some elements are added to the "backbone" of annex sl, such as: innovation culture, innovation vision, innovation strategy, innovation portfolios, etc. in clause 7 "support", important additions are: tools and methods, strategic intelligence management, and intellectual property management. these elements are further explained and extensively discussed in the supporting standards of the iso 56000 series. new structural elements in clause 8 "operations" are the innovation initiatives and the innovation processes, comprising identifying opportunities, creating concepts, validating concepts, developing solutions, and deploying solutions. these steps of the innovation process are also aligned with the steps of the pdca cycle, just like the overall arrangement of the clauses of iso 56002. the role of metrology is clearly outlined in clause 9 "performance evaluation", and more specifically in 9.1 "monitoring, measurement, analysis and evaluation". clause 10 "improvement" completes the pdca cycle with deviation, nonconformity and corrective action, and continual improvement of the innovation management system.
this general analysis of the standard iso 56002:2019 should not be misleading: an in-depth analysis of the content of each clause shows a specific focus on innovation management which cannot be found in other management system standards.

2.3. iso 56003:2019 tools and methods for innovation partnership

innovations are often the result of teamwork and partnerships. the guidance of iso 56002:2019 refers to partners on a number of occasions, such as:
• a tool for managing uncertainty and risk;
• an element of the context of the organization;
• a key interested party;
• an element to consider when determining the scope of the innovation management system;
• a possibility for collaboration (clause 4.4.3);
• an outsourcing opportunity;
• a source of knowledge, competence, finances, and infrastructure;
• a recipient and a source of information in the communication process;
• a source and a user of intellectual property rights;
• a counterpart in innovation initiatives and processes.
the core of this standard can be found in clauses 4 to 8 and includes:
• the innovation partnership framework (clause 4);
• deciding whether or not to enter an innovation partnership (clause 5 and annexes a and d);
• selecting internal and external partners by generating a long list, then a short list of potential partners, and ultimately deciding based on objective criteria (clause 6 and annex b);
• partnership alignment and the signing of a non-disclosure agreement (nda), a memorandum of understanding, a letter of intent or other legally binding agreements (clause 7 and annex c), and
• managing the interactions between the partners throughout the lifetime of the innovation partnership (clause 8 and annex d).
all the annexes provide guidance on the applicable tools and methods for the clauses of iso 56003:2019. annex e presents guidance on the monitoring, measurement, analysis and evaluation of a set of quantitative and qualitative innovation performance indicators, and links them to the performance evaluation criteria set out in clause 9.1 of iso 56002:2019.

2.4. iso/tr 56004:2019 innovation management assessment

this standard continues the measurement of the innovation performance indicators. as outlined in clause 4, this is done in order to:
• gain a better understanding of innovation management;
• determine the performance of the current innovation management;
• meet internal and/or external requirements, and
• improve the performance and increase the value of the organization.
in clause 5 "choosing the innovation management assessment (ima) approach", organizations can find different ima approaches, examples of quantitative and qualitative measures that serve as performance criteria for innovation management, as well as guidance on the type, quality and format of the ima outputs.
clause 6 “the ima process” visualizes the rest of the clauses and how they interconnect within the plan-do-check-act cycle: • prepare (plan) the ima (clause 7)aligning the strategic intent and the scope of the ima, defining the design, the expected results and the performance metrics of the ima, clarifying the resources needed and the organization’s ability and willingness to change, and setting up the ima; • conduct (do) the ima (clause 8)setting up the necessary tools, collecting quantitative and qualitative data, analysing data and identifying gaps in innovation management and ima; • conclude (check) the ima (clause 7)documenting ima findings, structuring the ima report content, communicating findings to relevant top management and interested parties, and recommending actions for improvement; • improvement (act) of the ima itselfdetermining and implementing a roadmap for enhancing future imas. about half of the iso 56003:2019 standard is devoted to the 7 principles that facilitate the design and implementation of the ima (annex a), and visual examples how to present the aggregated results (dashboard and radar diagram) and detailed results (histogram, bar chart, benchmarking of key performance indicators, scoreboard, etc.) from the ima (annex b). 2.5. iso 56005:2020 intellectual property management innovation is intrinsically related to intellectual property (ip) whether or not it is protected by a patent, utility model, trademark, industrial design or other type of intellectual property rights (ipr). the effective and efficient management of ip and ipr can substantially increase the innovative potential of the organization by improving the competence of its people and enhancing organizational knowledge. this standard builds on the foundations set in clause 7.8 “intellectual property management” of iso 56002:2019 and supports the innovation strategy of the organization. the ip management framework (clause 4) encompasses: the context of the organization, the commitment of top management, leaders and ip managers, the innovative culture, the knowledge, competence, education and training of human capital, the financial and legal considerations. clause 5 of iso 56005:2020 defines the interrelationship between the business strategy, the innovation strategy, and the ip strategy of the organization. clause 6 “ip management in the innovation process” has identical structure to that of clause 8.3 “innovation process” in iso 56002:2019, also aligned with the pdca cycle: 1) general; 2) identify opportunities (plan); 3) create concepts (do); 4) validate concepts (check); 5) develop solutions (act); 6) deploy solutions (act). this sequence of steps in the innovation process is logical but by no means it is linear. whatever the case, this process is proven to lead to more efficient innovations by streamlining the efforts of researcher teams and innovators, and innovation partnerships. by combining elements from “the golden circle” (why? – how? – what?) and sipoc diagrams (suppliers – inputs – process steps – outputs – customers) for each of the abovementioned steps, clauses from 6.2 to 6.6 provide detailed guidance on: • why this is important for ip management? acta imeko | www.imeko.org june 2023 | volume 12 | number 2 | 4 • what are the necessary inputs to be considered in the innovation process? • how should this be done throughout the lifecycle of the innovation initiative? • the outputs resulting from these activities. 
in most cases, the outputs from the previous step(s) should serve as inputs for the next step(s) in the innovation process. the iso 56005:2020 supports the implementation of clause 6 by providing 6 annexes with tools and methods for: • invention record and disclosure (annex a); • ip generation, acquisition and maintenance (annex b); • ip search (annex c); • ipr evaluation (annex d); • ip risk management (annex e); • ip exploitation (annex f). 2.6. iso 56006:2021 strategic intelligence management clause 7.7 “strategic intelligence management” of iso 56002:2019 states that “strategic intelligence can include activities to acquire, collect, interpret, analyse, evaluate, apply, and deliver to, or share between, decision-makers and other relevant interested parties, the necessary data, information, and knowledge.” [5]. this information shall be communicated to the top management of the organization in order to align it to the strategic direction, to anticipate and manage change, and to navigate the organization successfully into a volatile, uncertain, complex and ambiguous (vuca) environment. the standard iso 56006:2021 shares the same 8 innovation management principles as iso 56000:2020 and adds aspects of strategic intelligence to the statement of each principle. clause 3 “terms and definitions” has two elements3.1 “intelligence”, and 3.2 “strategic intelligence”. when the two terms are combined, their definitions would convey the following meaning of strategic intelligence: the result of gathering, analysing and interpreting data (related to market, technology, competition, intellectual property or business), information and knowledge directed to top management with recommendations to make decisions impacting the vision, strategy, policy and objectives as well as innovation activities of the organization. clause 4 presents the fundamentals of strategic intelligence such as: purpose, needs, core process, timing, expected outcomes, and essential support in terms of infrastructure and competencies. the strategic intelligence cycle follows the diki model: data – information – knowledge – intelligence. this model is presented in more detail in clause 5 and consists of the following steps: • framingdefining the criteria and scope for intelligence generation; • data gathering and analysiswith main outcome information, or analysed data; • interpretationwith outcome knowledge, or interpreted information; • recommendation, based on knowledge with outcome intelligence, i.e., communicated knowledge. clause 6 “intelligence communication” consists of 6.1 “recommendations to top management”, and 6.2 “documentation, communication and distribution control”. the other standards of the iso 56000 series are at different stages of development. undoubtedly, upon their publication, they will have a major impact on the understanding and implementation of innovation management systems. 3. implementation of iso 56000 guidance at the university of ruse the implementation of management systems in higher educational organizations began by adapting iso 9001 requirements to the actual educational environment. naturally, the pre-existing management systems and the international and national regulatory documents served as bases for the additional requirements. the technology transfer and intellectual property centre (ttipc) at the university of ruse “angel kanchev” is integrating elements of innovation management systems. 
this improvement of the existing quality management system integrates the requirements of iso 9001:2015 and iso 21001:2018 with the guidance of iso 56002:2019. the process started in 2020 with the election of the current manager of the ttipc. the value of the books 'iso 56000: building an innovation management system: bring creativity and curiosity to your qms' by peter merrill [14] and 'managing innovation: integrating technological, market and organizational change' by joe tidd and john bessant [15] cannot be overestimated. the implementation of the iso 56000 series of standards encompasses the elements described in sections 3.1 to 3.8.

3.1. updated innovation vision, mission, strategy, policy, and objectives

these strategic documents, specified in iso 56002:2019, are developed by the manager of the ttipc and approved by the academic council and the rector of the university of ruse. the innovation mission of the ttipc is to promote the protection of the intellectual property of the researchers at the university of ruse, and to expand the opportunities for the realization of modern technologies by partners of the university of ruse. the innovation vision is to ensure that the university of ruse is established as a regional leader in the realization of its intellectual property. the innovation strategy being followed is to determine the existing opportunities for progress and to seek out "burning issues" that require innovative solutions, which can be developed by multidisciplinary teams of researchers at the university of ruse and its partner organizations. the innovation policy states that the top management is committed to achieving the status and recognition of a research university while always striving for continual improvement of the innovation management system. the innovation objectives are consistent with the innovation policy. there is a requirement for all professors and research staff to participate in research projects, as well as to develop and register ip, with a stronger focus on international and bulgarian patents, utility models, and submitted applications for ipr protection.

3.2. streamlined innovation management process

prior to 2020, there were 3 different centres focusing on different aspects of what is now the ttipc. these were:
• the centre for intellectual property;
• the centre for technology transfer;
• the danube transfer centre.
by using the guidance of iso 56002:2019 and iso 56005:2020, the ttipc now implements a series of innovation management processes that encompass the identification of opportunities, the creation and validation of concepts, and the development and deployment of solutions. since 2022, the prevention and monitoring of infringement of ip and copyright have been added to the scope of activities of the ttipc by implementing the system
an added benefit from the innovation portfolio is that it is used to raise interested parties’ awareness of the ipr of the university and its researcher community, and last but not leastto improve the visibility of innovators and their success stories. 3.4. ipr database the initial stages of ipr management can be characterised as utterly basic and passive. there used to a very general list of patents and utility models with no specific information. the newly created and continuously updated database of ipr includes the following elements: • type of ipr with colour coding for different ipr; • date and number of the application; • date and number of the registration by the patent office; • ipr owner(s) and/or inventor(s); • period of validity, including a calendar of the dates of validity allowing to have objective evidence (a scanned certificate) on any given date (see figure 2); • payment status. the ipr database is used as an input for reporting the performance of the innovation management system at top management reviews and/or responding to external inquiries. 3.5. assessments of the performance of the innovation management system these assessments are done periodically depending on the level of reporting. the ttipc presents quarterly reports to the director and the managing council of the research & development centre (r&dc) at the university of ruse. in addition, the detailed annual report contains all the major activities and key performance indicators of the innovation management system. this is done in line with: • the criteria and procedures of the ministry of education and science, and the national evaluation and accreditation agency; • the guidance of iso/tr 56004:2019 about performance criteria (clause 5.2.1) and performance metrics (clause 7.4); • the internal regulations of the university of ruse “angel kanchev” [16], [17]. the continual improvement of the implemented innovation management system is also assessed for positive change in the maturity level of the system [18]. the performance is communicated to interested parties by aligning the innovation management system with the iso 56000 series of standards and the united nations sustainable development goals [19]. 3.6. innovation partnerships the innovation portfolio has made the application of the guidance of iso 56003:2019 a priority. some teams of innovators have existed prior to the implementation of the innovation management system. they generally comprised of researchers from the same scientific field. rarely they involved inventors from other departments and faculties, and even less oftencollaborators from external organizations. these partnerships generate better overall outcomes and outputs. the implementation of iso 56003:2019 at the university of ruse is presented in [20]. initially, the long list of potential partners was created based on the information contained in the innovation portfolio, and the discussion with interested parties. using the criteria, described in iso 56003:2019, the long list was reduced. the result of the partnership selection has produced an interdisciplinary team of researchers from 3 faculties, 5 different departments, phd, msc and bsc students, and an external practitioner. 3.7. management of strategic intelligence iso 56006:2021 is published as a bulgarian standard on 23 march 2022 but is not available in bulgarian. this fact hinders its smooth implementation. to avoid waiting for an official translation, the ttipc has made a working document based on the guidance of this standard. 
3.5. assessments of the performance of the innovation management system

these assessments are done periodically, depending on the level of reporting. the ttipc presents quarterly reports to the director and the managing council of the research & development centre (r&dc) at the university of ruse. in addition, the detailed annual report contains all the major activities and key performance indicators of the innovation management system. this is done in line with:
• the criteria and procedures of the ministry of education and science, and of the national evaluation and accreditation agency;
• the guidance of iso/tr 56004:2019 on performance criteria (clause 5.2.1) and performance metrics (clause 7.4);
• the internal regulations of the university of ruse "angel kanchev" [16], [17].
the continual improvement of the implemented innovation management system is also assessed in terms of positive change in the maturity level of the system [18]. the performance is communicated to interested parties by aligning the innovation management system with the iso 56000 series of standards and the united nations sustainable development goals [19].

3.6. innovation partnerships

the innovation portfolio has made the application of the guidance of iso 56003:2019 a priority. some teams of innovators existed prior to the implementation of the innovation management system. they generally comprised researchers from the same scientific field; rarely did they involve inventors from other departments and faculties, and even less often collaborators from external organizations. partnerships generate better overall outcomes and outputs. the implementation of iso 56003:2019 at the university of ruse is presented in [20]. initially, the long list of potential partners was created based on the information contained in the innovation portfolio and on discussions with interested parties. using the criteria described in iso 56003:2019, the long list was reduced. the result of the partnership selection was an interdisciplinary team of researchers from 3 faculties and 5 different departments, phd, msc and bsc students, and an external practitioner.

3.7. management of strategic intelligence

iso 56006:2021 was published as a bulgarian standard on 23 march 2022 but is not available in bulgarian, a fact that hinders its smooth implementation. to avoid waiting for an official translation, the ttipc has made a working document based on the guidance of this standard. it is communicated to the vice-deans in charge of research so that strategic intelligence from various fields can support the decision-makers in the university.

3.8. additional innovation management initiatives and activities

some of the activities arising from the integrated quality, educational and innovation management system, which has placed the university of ruse among the top research institutions in bulgaria, are:
• active membership in professional organizations, such as ispim, the international society for professional innovation management, and its special interest groups (sigs);
• participation in conferences, seminars, webinars and workshops aiming to exchange best practices;
• organizing seminars to obtain buy-in from internal and external interested parties;
• promoting best practices by making interviews with successful innovators;
• participation in promotional events such as the annual innovative youth expo (see figure 3), etc.

4. conclusions

the experience of the university of ruse with the innovation management system shows that the best results and performance of the management system are achieved when the organization:
• carefully considers the current context and emerging trends and technologies, as well as the requirements of relevant interested parties;
• successfully manages the competence, awareness and involvement of key representatives of top management, researchers, professors, laboratory technicians and administrative staff;
• actively engages in professional organizations, projects and trainings;
• aligns its strategic direction with international and national regulatory documents and frameworks in a timely manner;
• embraces change and creates its own future.
the guidance, tools and methods of the iso 56000 series of standards are creatively implemented and have contributed to a significant improvement of the innovation management system.

references
[1] the iso survey of management system standard certifications 2021, 24 january 2023. online [accessed 28 january 2023] https://www.iso.org/home.isodocumentsdownload.do?t=9z0jvmcb1svkz9z0zejqmegpesczmgl_ggsogk7yes3dbsznygu5-tlkaplnmw68&csrftoken=sr21-gstr-4lvn-qg1k-negg-s2pa-vmfo-txer
[2] k. lines, j.-l. hippolyte, i. george, p. harris, a mathmet quality management system for data, software, and guidelines, acta imeko, vol. 11, no. 4, (2022), pp. 1-6. doi: 10.21014/actaimeko.v11i4.1348
[3] s. b. da silva, improving the firm innovation capacity through the adoption of standardized innovation management systems: a comparative analysis of the iso 56002:2019 with the literature on firm innovation capacity, international journal of innovation – iji, 9(2), (2021), pp. 389-413. doi: 10.5585/iji.v9i2.19273
[4] iso 56000:2020, innovation management – fundamentals and vocabulary.
[5] iso 56002:2019, innovation management – innovation management system – guidance.
[6] iso 56003:2019, innovation management – tools and methods for innovation partnership – guidance.
[7] iso/tr 56004:2019, innovation management assessment – guidance.
[8] iso 56005:2020, innovation management – tools and methods for intellectual property management – guidance.
[9] iso 56006:2021, innovation management – tools and methods for strategic intelligence management – guidance.
[10] iso/cd 56001, innovation management – innovation management system – requirements. online [accessed 28 january 2023] https://www.iso.org/standard/79278.html
[11] iso/fdis 56007, innovation management system – tools and methods for managing opportunities and ideas – guidance. online [accessed 28 january 2023] https://www.iso.org/standard/75068.html
[12] iso/dis 56008, innovation management – tools and methods for innovation operation measurements – guidance. online [accessed 28 january 2023] https://www.iso.org/standard/78485.html
[13] iso/cd ts 56010, innovation management – illustrative examples of iso 56000. online [accessed 28 january 2023] https://www.iso.org/standard/80612.html
[14] p. merrill, iso 56000: building an innovation management system: bring creativity and curiosity to your qms, quality press, milwaukee, 2020, isbn 978-1-951058-26-9.
[15] j. tidd, j. r. bessant, managing innovation: integrating technological, market and organizational change, wiley, hoboken, 2021, isbn 978-1-119-71319-7.
[16] tz. gueorguiev, b. sakakushev, b. evstatiev, improving educational management systems by integrating quality and innovations, 59th annual scientific conference of angel kanchev university of ruse and union of scientists – ruse "new industries …", ruse, bulgaria, 2020, pp. 27-32. online [accessed 28 january 2023] https://conf.uni-ruse.bg/bg/docs/cp20/9.1/9.1-3.pdf
[17] tz. gueorguiev, assessment of innovation management and intellectual property, intellectual property in universities – new horizons of academic dialogue, sofia, bulgaria, 2022, pp. 217-230 [in bulgarian]. online [accessed 28 january 2023] https://unyka.unibit.bg/images/pdf/2022-1/15-c-georgiev.pdf
[18] tz. gueorguiev, continual improvement and innovations, 6th int. conf. on advances in mechanical engineering (icame 2021), istanbul, turkey, 20-22 october 2021, pp. 320-325.
[19] tz. gueorguiev, innovation management systems – reality and perspectives, international scientific journal "innovations", vol. 9 (2021), issue 2, pp. 48-50. online [accessed 28 january 2023] https://stumejournals.com/journals/innovations/2021/2/48.full.pdf
[20] tz. gueorguiev, the role of innovation partnerships in the creation of intellectual property, 61st annual scientific conference of angel kanchev university of ruse and union of scientists – ruse "new industries, digital economy, society – projections of the future v", ruse, bulgaria, 2022, pp. 11-16. online [accessed 28 january 2023] https://conf.uni-ruse.bg/bg/docs/cp22/9.1/9.1-1.pdf

figure 3. innovative youth expo certificate.
an iot-based handheld environmental and air quality monitoring station

acta imeko issn: 2221-870x, september 2023, volume 12, number 3, 1 – 8

m. n. m. aashiq1, w. t. c. c. kurera1, m. g. s. p. thilekaratne1, a. m. a. saja1, m. r. m. rouzin1, navod neranjan2, hayti yassin2
1 faculty of engineering, south eastern university of sri lanka, sri lanka
2 faculty of integrated technologies, university brunei darussalam, brunei darussalam

section: research paper

abstract: weather and air quality play an important role in determining environmental pollution. fluctuations of these parameters not only cause environmental pollution but can also severely harm human health. with the emergence of the internet of things (iot), sensor-based weather devices with easy observation facilities started to develop. in this regard, this study focused on developing an iot-based portable weather monitoring gadget that can measure the weather and air parameters most often required in day-to-day life. the proposed system is capable of measuring temperature, pressure, humidity, altitude, pm2.5 and pm10 levels, voc, and co level. it consists of a portable display and a mobile app with the thingspeak cloud platform. further, the system has wi-fi and gsm connections to communicate data. a mobile application was developed to monitor, in real time, the readings stored in the cloud platform. the developed hardware was carefully calibrated at the national meteorological department to make sure our system is practically usable. compared to existing models, our prototype is handheld, easy to install without trained technicians, and easy to maintain. it is also possible to access the data from anywhere in the world through wi-fi connectivity and to perform data visualization and analysis. on the other hand, it is very difficult to find a single, portable, low-cost device that collects all these parameters together. nonetheless, our prototype has the potential of connecting with multiple similar devices to create a larger iot network grid. overall, according to the latest literature, our proposed product offers a combination of weather and air quality parameters in a portable, handheld size, with low cost and low power consumption, which other devices do not.

keywords: environmental monitoring; air quality; internet of things; cloud; sensor network

citation: m. n. m. aashiq, w. t. c. c. kurera, m. g. s. p. thilekaratne, a. m. a. saja, m. r. m. rouzin, navod neranjan, hayti yassin, an iot-based handheld environmental and air quality monitoring station, acta imeko, vol. 12, no. 3, article 6, september 2023, identifier: imeko-acta-12 (2023)-03-06
section editor: susanna spinsante, grazia iadarola, polytechnic university of marche, ancona, italy
received february 27, 2023; in final form june 11, 2023; published september 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: m. n. m. aashiq, e-mail: aashiqmnm@gmail.com

1. introduction
in the early days, people used large devices to measure a single weather parameter. weather stations used to be placed at a location to collect and analyse information and to make predictions about future weather changes. in the past, the devices were large, complex, and analogue; then, simpler and less power-consuming digital systems were introduced [1]. the collection and storage of data with analogue and digital devices was a tedious and time-consuming activity, as considerable manual involvement was needed. the need to collect real-time data and store it led researchers to pay more attention to obtaining accurate and fast weather parameters over a large area using simple on-set devices [2]. these are automated devices that measure and record weather information using sensors.
with the real-time data monitoring ability of iot, these devices become more reliable; hence, iot is considered the modern state-of-the-art method for measuring weather and air quality parameters. overall, iot technology provides the capability to monitor, collect and exchange data using devices consisting of sensors, actuators, controllers, etc. the main reason for using iot in smart weather monitoring is the vast range of options that this technology offers. on the other hand, iot enables the connection of a larger number of sensors to a cloud network using various connectivity methods, chosen according to the connection range. to date, iot is considered the best way to collect, maintain, and analyse large amounts of data [3].

this study aimed to design a small portable weather monitoring device suitable for environments such as schools, laboratories, and industries to monitor real-time parameters. the device can provide all the required environmental and air quality parameters at once, within a reasonable time range and with the possibility to customize the data acquisition. hence, this device could be used to create a nationwide network collecting the required weather and air quality parameters, accessible from any place. overall, the weather station obtains eight important parameters at once with easy monitoring methods. the system could be used in agricultural and field monitoring activities, environmental control activities in places such as greenhouses, environmental and air pollution monitoring, damage control activities, climate change studies, etc.
further, this network would generate big data which could be used for machine learning research. this system could also be used as a home automation control device, in which home automation sensors and actuators are controlled based on the weather and air quality readings obtained. additionally, air pollution causes severe injuries to human health, which can ultimately result in death [4]. hence, this system could be used to monitor the air quality parameters in the close living environment and to take countermeasures accordingly to avoid life-threatening risks.

the rest of the paper is organised as follows. section 2 describes previous work, section 3 lists the system requisites, sections 4 and 5 explain the hardware and software layouts, section 6 analyses the results, and, finally, section 7 concludes our work.

2. related work
in general, smart weather stations are automated systems designed by connecting one or more sensors to a microcontroller. with the ability of iot to connect devices to a cloud network using wireless communication, the data can be accessed from any place in the world. in [2], an iot-based system was designed using an lpc1768 microcontroller and four sensors to measure temperature, pressure, relative humidity, and light intensity; the data was collected and sent to a base station using an esp8266 wi-fi module. in [3], a system was implemented using an arduino uno board, a dht11 sensor, and an esp8266 wi-fi module, which transmits data to the open iot api service thingspeak, where the data can be stored and analysed. besides weather monitoring, air quality measuring devices have also been designed using an arduino uno and mq135 and mq7 sensors [5]. there are also systems designed around a raspberry pi [6] to monitor temperature, humidity, pm2.5 and pm10 concentrations, and the air quality index. later projects used arduino-uno-based systems [7] to measure even gas content and earthquake information, in combination with moisture, temperature, humidity, and rainfall parameters. there are even single sensors, such as the mq135, able to measure carbon dioxide, sulphur dioxide, nitrogen dioxide, smoke, and lpg gas [8].

there are many challenges related to iot-based systems, such as complex topology design, privacy and security, power backup, and high memory requirements. in [9], the researchers designed an indoor and outdoor pollution monitoring system using an esp8266 node mcu controller, an sds021 sensor, and a ze07-co sensor; the system is low-cost and has a simple topology. further, gsm/gprs modules can also be used in iot systems to build mobile communication systems [10], [11]. overall, these systems used an lcd to display real-time information and wireless modules such as the esp8266 to connect the system to the internet; the data was then transferred to a remote server, to an iot platform such as thingspeak, or to a mobile application [12]. wireless sensor networks support self-configuration and reconfiguration and can adapt well to mobility as well as to remote control [13]. besides web-based technologies, messaging technologies such as mqtt can be used in iot systems [14]; this helps contain system migration complexity and makes it easier to build a distributed information system. a minimal sketch of this messaging approach is given below.
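as an illustration only, the following sketch shows the publishing side of such an mqtt-based design, assuming an esp8266 board, the widely used pubsubclient arduino library, and a hypothetical broker host and topic name (none of these details come from the cited systems).

```cpp
// illustrative mqtt publisher for an esp8266-based weather node.
// broker host, credentials, and topic are hypothetical placeholders.
#include <ESP8266WiFi.h>
#include <PubSubClient.h>

WiFiClient espClient;
PubSubClient mqtt(espClient);

void setup() {
  WiFi.begin("my-ssid", "my-password");        // join the local wi-fi network
  while (WiFi.status() != WL_CONNECTED) delay(500);
  mqtt.setServer("broker.example.org", 1883);  // standard unencrypted mqtt port
}

void loop() {
  if (!mqtt.connected()) mqtt.connect("weather-node-01");
  float temperatureC = 28.5;                   // placeholder for a real sensor reading
  char payload[16];
  dtostrf(temperatureC, 4, 1, payload);        // format the float as text
  mqtt.publish("station/temperature", payload);
  mqtt.loop();                                 // service the mqtt connection
  delay(30000);                                // publish every 30 s
}
```

because the broker relays messages, a display unit or a server can subscribe to station/temperature without knowing anything about the publishing node, which is the decoupling benefit referred to above.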
in the global market, there are iot-based weather devices that can measure only a few parameters, such as temperature, humidity, and pressure, and some on-set devices that can be used as domestic or industrial weather stations. many of them are powered by batteries and use solar power systems as a backup supply. in [15], the authors describe an iot-based microclimate monitoring weather station that can be installed in any place and easily modified to suit any environment. the data is transmitted to the cloud at a fixed rate using a mobile communication network. a microprocessor, a local storage unit, a communication module, an lcd display, a power controller, and a sensor panel comprise the system, but it carries only temperature and humidity sensors and therefore provides only those two parameters. further, [16] reports work carried out by the international water management institute (iwmi) to upgrade the climate monitoring systems in sri lanka, with the aim of designing and programming a device using arduino and open-source software and hardware. the aim of this equipment was to monitor the variability of rainfall. the system was able to measure parameters such as wind speed, pressure, humidity, water pollutants, wind direction, rainfall, and water level. the device consists of an arduino lakduino board, a weather shield, a gps receiver, a data logger, an anemometer, and a rain gauge with a solar-powered rechargeable battery. references [17]-[19] address similar devices used in environmental and agricultural monitoring applications.

however, most of the existing devices were developed for either weather monitoring or air quality monitoring separately. only recent research has started to incorporate both types of devices [20]-[29], but a wide combination of parameters is still left to include. our objective was to measure multiple parameters with a single device, given that they are needed by environmental researchers for machine learning-based forecasting models [30]-[37]. moreover, most of the existing works did not indicate the calibration procedure and the results obtained from the calibration report, which calls into question the reliability of the hardware for practical use. most researchers carried out laboratory tests that show variations in the readings by suddenly creating some change in the atmosphere and checking whether the device readings changed or not. apart from that, sensor components are continuously improved and enhanced with the latest technologies, leading to better performance and lower power consumption. the majority of previous works were carried out with the raspberry pi, a more complicated device compared to the arduino uno because of its programming structure and the nature of the raspbian operating system [3].

in table 1 we compare the existing models with the one we propose in this study. based on the cost of the equipment, we have also categorized whether the cost of developing the setup is low, moderate, or high. based on the comparison with the literature, it is evident that our proposed setup provides weather, air quality, and environmental monitoring facilities at a low cost, with cloud and mobile connection facilities.
3. system requisites
in order to build the hardware and software platforms, we needed microcontrollers, weather-parameter sensors, air-pollution monitoring sensors, a 3d printer, a wi-fi module, a gsm module, a display unit, a cloud platform for data storage, data visualization, and data analysis, an android application development kit, and, finally, a power source for the continuous operation of the system.

4. system hardware
the proposed portable smart weather monitoring station is designed to measure both weather and air quality parameters using an iot-based system. the system has an arduino uno board as the main microcontroller and a node mcu with an esp8266 wi-fi module. the device can measure temperature, humidity, pressure, pm2.5, pm10, o3, co, and voc levels. the data is sent to the thingspeak web platform and to a portable display over a wi-fi connection. the data can be accessed and analysed on the thingspeak iot platform or through a mobile application or web server for other activities. the system also has a gsm connection to be used in the absence of wireless connectivity. the next subsections present a brief description of the components of the system.

4.1. microcontroller
the microcontroller stage consists of an arduino uno and a node mcu board. the arduino uno is an open-source microcontroller board based on the microchip atmega328 microcontroller. it is a board with 14 digital i/o pins, powered by a usb cable or by a 9 v external battery. the arduino uno board accepts an input voltage range of 7–20 v and can be programmed using the arduino ide. the node mcu board is used as a supporting microcontroller, and its esp8266 wi-fi module provides the wireless connectivity required by the device.

4.2. sensors
we selected the sensors according to three main features, necessary to address our main objective: i) low power consumption, ii) small dimensions, and iii) low cost. the aht10 was selected for the temperature and humidity measurements: it has a small volume, low power consumption, and good anti-interference capability, and the maximum voltage required for its operation is 3.6 v dc. the mq-7 has high sensitivity to carbon monoxide gas and can detect co in the 10–500 ppm range; it also complies with our three major concerns and provides suitable digital outputs. the mq-7 was interfaced with the arduino uno controller; furthermore, its conductance is low in clean air. the bmp280 was selected as the pressure sensor due to its robustness, high reliability, and digital outputs, along with the above three primary concerns; it can be interfaced with mobile devices, gps modules, and even watches. the bmp280 was connected to the node mcu controller, since the arduino uno was fully loaded with the previous sensors. the zp07-mp503 is a volatile organic compound detection sensor that is sensitive to formaldehyde, alcohol, cigarette smoke, ammonia, benzene, hydrogen, essence, etc. it is also very cost-effective, has a long life, and is calibrated before shipment. this sensor was connected to our main arduino uno controller and can be operated with 5 v dc power. however, it is not advisable to expose the module to high concentrations of organic gases for extended periods. a minimal sketch of how two of these sensors can be read is given below.
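as an illustration only, the following arduino-style sketch shows one way the aht10 and mq-7 could be read, assuming the adafruit ahtx0 library and the mq-7 analogue output on pin a0 (library choice and pin wiring are assumptions, not part of the original design).

```cpp
// illustrative reading of the aht10 (i2c) and the mq-7 (analogue output on a0).
#include <Wire.h>
#include <Adafruit_AHTX0.h>

Adafruit_AHTX0 aht;

void setup() {
  Serial.begin(9600);
  if (!aht.begin()) {                  // initialise the aht10 over i2c
    Serial.println("aht10 not found");
    while (true) delay(10);
  }
}

void loop() {
  sensors_event_t humidity, temp;
  aht.getEvent(&humidity, &temp);      // one combined humidity/temperature reading
  int coRaw = analogRead(A0);          // raw mq-7 value; converting it to ppm
                                       // requires the sensor's calibration curve
  Serial.print("t = ");
  Serial.print(temp.temperature);
  Serial.print(" degc, rh = ");
  Serial.print(humidity.relative_humidity);
  Serial.print(" %, co raw = ");
  Serial.println(coRaw);
  delay(30000);                        // match the 30 s update interval of the system
}
```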
the pms5003 particulate matter sensor relies on the laser scattering principle: a laser is used to radiate the suspended particles in the air, the scattered light is collected at a specific angle, and the variation of the scattered light over time is acquired. finally, using mie theory [38], a microprocessor calculates the equivalent particle diameter as well as the number of particles of different sizes per unit volume. this sensor is linked to the node mcu, since the arduino uno connections were already in use, and it requires around 5 v dc for its continuous operation. after the complete connection and configuration of all the sensors, two outer cases were printed using the 3d printer: one for the main equipment and one for the digital display unit. table 2 lists the details of the sensors that were used.

table 2. specification of the sensors.
module/sensor   parameter                    uncertainty / sensitivity       range of measurement
aht10           humidity, temperature        ±2 % rh; ±0.3 °c                0 … 100 % rh; −40 … +85 °c
pms5003         pm2.5, pm10                  ±10 µg/m³                       0.3 … 1.0, 1.0 … 2.5, 2.5 … 10
bmp280          pressure, temperature        ±1 hpa; ±1.0 °c                 300 … 1100 hpa; −40 … +85 °c
mq-131          ozone                        rs(in air)/rs(200 ppm) ≥ 2      10 … 1000 ppb
mq-7            carbon monoxide              rs(in air)/rs(100 ppm) ≥ 5      10 … 500 ppm
zp07-mp503      volatile organic compounds   ≤ 1 %/year                      –

the aht10, pms5003, and bmp280 sensors are characterised by an uncertainty value, which indicates the maximum deviation from the actual reading and can be considered a worst-case uncertainty (maximum error). the mq-7 and zp07-mp503, on the other hand, are characterised by a sensitivity value, which defines the smallest physical input required to produce a discernible output change or, equivalently, the input change needed to produce a standardised output change. figure 1 depicts the block diagram of the hardware implementation, figure 2 shows the internal and external physical organization of the device, and a detailed wiring and schematic diagram is given in figure 3.

figure 1. block diagram.
figure 2. enclosure and wiring of the hardware system, including the sensors aht10, mq-7, voc (zp07-mp503), bmp280, esp8266, pms5003, and the battery pack unit.
figure 3. detailed wiring and schematic diagram of the device showing the sensor interfaces to the arduino uno and node mcu controllers.

table 1. comparison of similar research (✓ = feature provided; cost: l = low, m = moderate, h = high).
reference   weather      air quality   environment   cloud     mobile         cost
            monitoring   monitoring    monitoring    enabled   connectivity
[2]         ✓            –             ✓             –         –              l
[3]         ✓            –             –             ✓         –              l
[4]         –            ✓             –             ✓         –              l
[9]         –            ✓             –             ✓         –              l
[10]        ✓            ✓             –             –         –              l
[20]        ✓            –             –             –         –              l
[23]        –            ✓             –             –         –              m
[26]        –            ✓             –             –         –              l
[28]        ✓            –             –             ✓         ✓              h
[29]        ✓            –             –             –         –              h
our work    ✓            ✓             ✓             ✓         ✓              l

4.3. wireless connectivity
a wireless connection was established using the esp8266 wi-fi module. this connection is used to send data to the thingspeak channel and to the portable display unit. the wi-fi module implements the full tcp/ip stack; when it is connected to the internet, it obtains an ip address automatically (dynamically). after the wi-fi connection is established, the equipment readings can be transmitted to the thingspeak cloud, where real-time monitoring, data visualization, and data analysis are possible in one place. the module requires only 3.3 v dc for its operation and is inexpensive; it works under the regular 2.4 ghz (802.11 b/g/n) wi-fi standard. this wi-fi module is connected to the node mcu controller. an illustrative sketch of this update path is given at the end of this section.

4.4. portable display unit
a display unit with wi-fi connectivity was designed along with the prototype to show the sensors' data. a node mcu with an esp8266 wi-fi module and a 0.96-inch oled display are used in the portable display. a lithium battery powers the device; the required power supply for this unit is provided by the arduino uno in the system.
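as a concrete illustration of the update path described in section 4.3, the sketch below sends two readings to a thingspeak channel through the platform's standard rest update endpoint; the write api key and field assignments are placeholders, not the real channel configuration.

```cpp
// illustrative thingspeak update from an esp8266/node mcu.
// the write api key below is a placeholder, not the real channel key.
#include <ESP8266WiFi.h>
#include <ESP8266HTTPClient.h>

const char* kWriteKey = "XXXXXXXXXXXXXXXX";   // hypothetical write api key

void sendToThingSpeak(float tempC, float rh) {
  WiFiClient wifi;
  HTTPClient http;
  // one http get per update; thingspeak maps field1..field8 to channel fields
  String url = String("http://api.thingspeak.com/update?api_key=") + kWriteKey +
               "&field1=" + String(tempC, 1) +
               "&field2=" + String(rh, 1);
  http.begin(wifi, url);
  int status = http.GET();                    // thingspeak replies with the entry id
  http.end();
  (void)status;                               // a real sketch would check for errors
}
```

note that thingspeak rate-limits channel updates (every 15 s on the free tier), which is compatible with the 30 s update interval used by the presented system.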
5. system software
software is critical to the integration and operation of our hardware design, and there are two software components in the design. the first drives the operation of the hardware components, including the sensors; it was implemented through microcontroller programming in the arduino ide. the second is the android-based graphical user interface (gui), which can be used on multiple mobile devices to obtain the weather and air quality readings.

5.1. thingspeak channel
a thingspeak channel was created using a commercial license; it contains 8 fields to display the temperature, humidity, pressure, relative altitude, voc, co, pm2.5, and pm10 levels. table 3 describes the channel information of the obtained account. thingspeak uses traditional http/https connectivity via the internet; this cloud-based analytics platform is used to gather, view, and analyse live data streams. figure 4 shows the dashboard visualization layout of the real-time data values.

table 3. thingspeak channel information.
channel name:  quick weather
channel id:    1675898
author:        spt4725
access:        public / private

figure 4. real-time visualization from the thingspeak channel.

5.2. mobile application
an android app with a graphical user interface (gui) was designed using android studio to enable users to retrieve the data from the thingspeak platform easily. the application is flexible and efficient to use, and the user authentication features let users log in with their username and password; mysql is used to store the required details. android studio uses a javascript object notation (json) parser to send get requests to the thingspeak channel and collect the necessary data through the rest api request methods. figure 5 illustrates the login page and the parameter selection menu of the developed mobile application.

figure 5. mobile application.
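the rest read request used by the app is plain http and can be issued from any client. purely as an illustration (this is not the app's actual code), the sketch below fetches the latest entry of the channel from table 3, assuming the channel is publicly readable (a private channel would additionally require an api_key parameter).

```cpp
// illustrative read of the latest thingspeak entry (channel id from table 3).
#include <ESP8266WiFi.h>
#include <ESP8266HTTPClient.h>

void readLatestFeed() {
  WiFiClient wifi;
  HTTPClient http;
  // ?results=1 limits the json response to the most recent entry
  http.begin(wifi, "http://api.thingspeak.com/channels/1675898/feeds.json?results=1");
  if (http.GET() == HTTP_CODE_OK) {
    Serial.println(http.getString());  // json document carrying field1 … field8
  }
  http.end();
}
```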
according to those data, the error calculation and plotting are done for a selected time interval using the obtained data. data was recorded from morning 10.00 am until afternoon 4.00 pm at 30-minute intervals. figure 6 shows the humidity variation, figure 7 shows the pressure variation, and figure 8 shows the figure 4. real-time visualization from the thingspeak channel figure 5. mobile application. figure 6. rh measurements from 10.00 am to 4.00 pm: comparison between the presented device and the reference device. figure 7. pressure variation from 10.00 am to 4.00 pm between our device and the calibration reference device. figure 8. temperature measurements from 10.00 am to 4.00 pm: comparison between the presented device and the reference device. acta imeko | www.imeko.org september 2023 | volume 12 | number 3 | 6 temperature variation. relative humidity error varies from 0.32 % to 5.92 %. but two outlier values caused errors of 9.39 % and 11.15 %. this might have occurred due to some temporary malfunctioning of the sensor at that instance. the error of the pressure parameter varies between 0.38 hpa and 2.03 hpa. similarly, temperature error values are deviating from 0.10 °c and 4.79 °c values. deviation values for humidity, pressure, and temperature parameters are shown in detail in table 4. figure 9 and figure 10 are showing similar graphs for pm2.5, and pm10 reading values. co level readings are very close to the standard reading values with less than a 5 % error percentage. pm10 sensor also indicates the lower uncertainty values close to the standard reading with less than 7 % of error values. pm2.5 sensor is also showing a lesser deviation from the actual reading values. in many instances error percentage is 0 %, but in a few instances, they are reaching 10 % to 11 % with a little higher deviation. in any way, the overall accuracy values are good in comparison with the standard values. table 5 shows the final summary of the sensor accuracy values in the mean absolute percentage error (mape) metric. it is obvious from the summary table that the pressure sensor, co sensor, pm2.5, and pm10 sensor are having reliable reading values. but the temperature sensor and pressure sensor should be replaced or rechecked. these variations might have occurred due to various reasons such as there is always a fluctuation range provided in any type of sensor as indicated in table 2. manufacturing variation, environmental effects over time, the presence of signal-to-noise ratio, improper installation practices while using very thin and sensitive cables, discontinuity in reading the values (temporary malfunction) or non-responsiveness, sporadic events, and some human planned attacks such as malicious attacks, and tampering are some of the potential reasons for getting the error values [40], [41]. calibration is helpful in removing the structure errors of the sensors. by using the error values which are shown in figure 6, figure 7, figure 8, and table 3, the characteristic curves could be obtained for correcting the reading values. this curve would be linear in the ideal situation only, most of the time, it will be non-linear. singlepoint, two-point, and multi-point adjustments are done for nonlinear cases [40]. 7. conclusion this paper describes an iot-based environmental and air quality monitoring station that could be a part of a larger weather data acquisition network. 
7. conclusion
this paper describes an iot-based environmental and air quality monitoring station that could be part of a larger weather data acquisition network. such a platform could be used to collect and analyse the readings of various parameters at multiple geographical locations, which could help capture the small variations of the environmental and air quality parameters among villages and cities. the designed prototype consists of 5 sensors and a wi-fi connection: we connected the bmp280, aht10, mq-7, pm2.5 (pms5003), and voc sensors, which give a total of 8 parameters, namely temperature, humidity, pressure, co level, altitude, pm2.5 level, pm10 level, and voc level. we also aimed to design very small, handheld equipment at very low cost, and we succeeded in doing so; with market products, multiple devices would be needed, some of which are very large and require trained technicians for installation. this is the main novelty of our research study. the wi-fi connection is set up using the esp8266 on the node mcu board. the parameters are successfully updated on the thingspeak platform at 30-second intervals, and the data can also be observed on the portable handheld display unit, which is updated every minute over wi-fi, or in the mobile app. this system is a crucial step in understanding the creation and execution of iot applications and serves as a basis for several advancements in building centralized climate control entities. our main contribution to the existing literature is a handheld weather station with air quality measurements that could help in building a larger network. moreover, most previous researchers did not provide calibration details, which calls the reliability of their work into question; our calibration results give a good indication of the practical usability of our equipment. on the other hand, this system could help the authorities take the necessary actions to control environmental and air pollution, ultimately protecting people from severe, life-threatening health effects. a continuous power supply is another crucial factor for such iot devices.
since the main goal of this research was to build handheld-size equipment, carrying large power resources along with the gadget is not desirable; hence, we used a power bank with a rechargeable battery. 24-hour continuous operation is not possible with our equipment, since it must be switched off while the battery is recharging: this is one of the limitations of our work. much research could be carried out to design very efficient, handheld power supplies for such systems, and this could be explored in the future. moreover, the temperature-humidity sensor, the aht10, produced unreliable results, with a mape above 5 %; this should be checked and rectified. we can test this by using different types of the latest iot sensor models and comparing the results. it is a real challenge to get guidance from previous studies, since the majority of previous researchers have not published their calibration and comparison results. further, we should make sure that no noise effects or signal interference are generated in the environment. on the other hand, the other sensors produced extremely good results.

acknowledgment
we acknowledge the contribution of the sri lanka meteorological department in helping us to calibrate our equipment. their support of this work is highly appreciated.

references
[1] w. y. yi, k. m. lo, t. mak, k. s. leung, y. leung, m. l. meng, a survey of wireless sensor network based air pollution monitoring systems, sensors, vol. 15, no. 12, dec. 2015, art. no. 12. doi: 10.3390/s151229859
[2] s. r. shinde, a. h. karode, s. r. suralkar, review on iot based environment monitoring system, int. j. electron. commun. eng. technol. ijecet, vol. 8, no. 2, apr. 2017, pp. 103–108. online [accessed 26 july 2023] https://iaeme.com/masteradmin/journal_uploads/ijecet/volume_8_issue_2/ijecet_08_02_014.pdf
[3] s. zafar, g. miraj, r. baloch, d. murtaza, k. arshad, an iot based real-time environmental monitoring system using arduino and cloud service, eng. technol. appl. sci. res., vol. 8, no. 4, aug. 2018, pp. 3238–3242. doi: 10.48084/etasr.2144
[4] i. manisalidis, e. stavropoulou, a. stavropoulos, e. bezirtzoglou, environmental and health impacts of air pollution: a review, front. public health, vol. 8, 2020. online [accessed 26 april 2023] https://www.frontiersin.org/articles/10.3389/fpubh.2020.00014
[5] k. b. k. sai, s. r. subbareddy, a. k. luhach, iot based air quality monitoring system using mq135 and mq7 with machine learning analysis, scalable comput. pract. exp., vol. 20, no. 4, dec. 2019, art. no. 4. doi: 10.12694/scpe.v20i4.1561
[6] f. j. j. joseph, iot based weather monitoring system for effective analytics, int. j. eng. adv. technol. ijeat, vol. 8, no. 4, apr. 2019, pp. 311–315. online [accessed 26 july 2023] https://www.ijeat.org/wp-content/uploads/papers/v8i4/d6100048419.pdf
[7] r. deekshath, p. dharanya, k. r. dimpil kabadia, g. deepak dinakaran, s. shanthini, iot based environmental monitoring system using arduino uno and thingspeak, int. j. sci. technol. eng., vol. 4, no. 9, mar. 2018, pp. 68–75. online [accessed 26 july 2023] http://www.ijste.org/articles/ijstev4i9025.pdf
[8] p. pal, r. gupta, s. tiwari, a. sharma, iot based air pollution monitoring system using arduino, int. res. j. eng. technol. irjet, vol. 04, no. 10, oct. 2017, pp. 1137–1140. online [accessed 26 july 2023] https://www.irjet.net/archives/v4/i10/irjet-v4i10207.pdf
[9] v. barot, v. kapadia, s. pandya, qos enabled iot based low cost air quality monitoring system with power consumption optimization, cybern. inf. technol., vol. 20, no. 2, jun. 2020, pp. 122–140. doi: 10.2478/cait-2020-0021
[10] h. n. shah, z. khan, a. a. merchant, m. moghal, a. shaikh, p. rane, iot based air pollution monitoring system, int. j. sci. eng. res., vol. 9, no. 2, feb. 2018, pp. 62–66. online [accessed 26 july 2023] https://www.ijser.org/researchpaper/iot-based-air-pollution-monitoring-system.pdf
[11] s. b. kamble, p. r. p. rao, a. s. pingalkar, g. s. chayal, iot based weather monitoring system, ijariie, vol. 3, no. 2, 2017, pp. 2886–2891. online [accessed 26 july 2023] http://ijariie.com/adminuploadpdf/iot_baesd_weather_monitoring_system_ijariie4557.pdf
[12] k. okokpujie, e. noma-osaghae, o. modupe, s. john, o. oluwatosin, a smart air pollution monitoring system, int. j. civ. eng. technol. ijciet, vol. 9, no. 9, sep. 2018, pp. 799–809. online [accessed 26 july 2023] https://core.ac.uk/download/pdf/162043864.pdf
[13] c. v. saikumar, m. reji, p. c. kishoreraja, iot based air quality monitoring system, int. j. pure appl. math., vol. 117, no. 9, 2017, pp. 53–57. online [accessed 26 july 2023] https://acadpubl.eu/jsi/2017-117-8-10/articles/9/10.pdf
[14] y.-c. tsao, y. t. tsai, y.-w. kuo, c. hwang, an implementation of iot-based weather monitoring system, 2019 ieee international conferences on ubiquitous computing & communications (iucc) and data science and computational intelligence (dsci) and smart computing, networking and services (smartcns), shenyang, china, 21-23 oct. 2019, pp. 648–652. doi: 10.1109/iucc/dsci/smartcns.2019.00135
[15] m. f. m. firdhous, b. h. sudantha, {cloud, iot}-powered smart weather station for microclimate monitoring, indones. j. electr. eng. comput. sci., vol. 17, no. 1, jan. 2020, art. no. 1. doi: 10.11591/ijeecs.v17.i1.pp508-515
[16] t. s. gunawan, y. m. s. munir, m. kartiwi, h. mansor, design and implementation of portable outdoor air quality measurement system using arduino, int. j. electr. comput. eng. ijece, vol. 8, no. 1, feb. 2018, art. no. 1. doi: 10.11591/ijece.v8i1.pp280-290
[17] j. jo, b. jo, j. kim, s. kim, w. han, development of an iot-based indoor air quality monitoring platform, j. sens., vol. 2020, jan. 2020, p. e8749764. doi: 10.1155/2020/8749764
[18] a. kulkarni, d. mukhopadhyay, internet of things based weather forecast monitoring system, indones. j. electr. eng. comput. sci., vol. 9, no. 3, mar. 2018, pp. 555–557. doi: 10.11591/ijeecs.v9.i3.pp555-557
[19] n. a. zakaria, z. zainal, n. harum, l. chen, n. saleh, f. azni, wireless internet of things-based air quality device for smart pollution monitoring, int. j. adv. comput. sci. appl., vol. 9, no. 11, 2018, pp. 65–69. doi: 10.14569/ijacsa.2018.091110
[20] a. lópez-vargas, m. fuentes, m. v. garcía, f. j. muñoz-rodríguez, low-cost datalogger intended for remote monitoring of solar photovoltaic standalone systems based on arduino™, ieee sens. j., vol. 19, no. 11, jun. 2019, pp. 4308–4320. doi: 10.1109/jsen.2019.2898667
[21] k. r. mallires, d. wang, v. v. tipparaju, n. tao, developing a low-cost wearable personal exposure monitor for studying respiratory diseases using metal–oxide sensors, ieee sens. j., vol. 19, no. 18, sep. 2019, pp. 8252–8261. doi: 10.1109/jsen.2019.2917435
[22] m. taştan, h. gökozan, real-time monitoring of indoor air quality with internet of things-based e-nose, appl. sci., vol. 9, no. 16, jan. 2019, art. no. 16. doi: 10.3390/app9163435
[23] w. sun, l. deng, g. wu, l. wu, p. han, y. miao, b. yao, atmospheric monitoring of methane in beijing using a mobile observatory, atmosphere, vol. 10, no. 9, sep. 2019, art. no. 9. doi: 10.3390/atmos10090554
[24] t. glass, s. ali, b. parr, j. potgieter, f. alam, iot enabled low cost air quality sensor, 2020 ieee sensors applications symposium (sas), kuala lumpur, malaysia, 09-11 mar. 2020, pp. 1–6. doi: 10.1109/sas48726.2020.9220079
[25] m. alvarez-campana, g. lópez, e. vázquez, v. a. villagrá, j. berrocal, smart cei moncloa: an iot-based platform for people flow and environmental monitoring on a smart university campus, sensors, vol. 17, no. 12, dec. 2017, art. no. 12. doi: 10.3390/s17122856
[26] k. n. genikomsakis, n.-f. galatoulas, p. i. dallas, l. m. candanedo ibarra, d. margaritis, c. s. ioakimidis, development and on-field testing of low-cost portable system for monitoring pm2.5 concentrations, sensors, vol. 18, no. 4, apr. 2018, art. no. 4. doi: 10.3390/s18041056
[27] i. gryech, y. ben-aboud, b. guermah, n. sbihi, m. ghogho, a. kobbane, moreair: a low-cost urban air pollution monitoring system, sensors, vol. 20, no. 4, jan. 2020, art. no. 4. doi: 10.3390/s20040998
[28] m. a. carlos-mancilla, l. f. luque-vega, h. a. guerrero-osuna, g. ornelas-vargas, y. aguilar-molina, l. e. gonzález-jiménez, educational mechatronics and internet of things: a case study on dynamic systems using meiot weather station, sensors, vol. 21, no. 1, jan. 2021, art. no. 1. doi: 10.3390/s21010181
[29] p. b. leelavinodhan, m. vecchio, f. antonelli, a. maestrini, d. brunelli, design and implementation of an energy-efficient weather station for wind data collection, sensors, vol. 21, no. 11, jan. 2021, art. no. 11. doi: 10.3390/s21113831
[30] b.-y. kim, y.-k. lim, j. w. cha, short-term prediction of particulate matter (pm10 and pm2.5) in seoul, south korea using tree-based machine learning algorithms, atmospheric pollut. res., vol. 13, no. 10, oct. 2022, p. 101547. doi: 10.1016/j.apr.2022.101547
[31] f.-j. chang, l.-c. chang, c.-c. kang, y.-s. wang, a. huang, explore spatio-temporal pm2.5 features in northern taiwan using machine learning techniques, sci. total environ., vol. 736, sep. 2020, p. 139656. doi: 10.1016/j.scitotenv.2020.139656
[32] j. ma, z. yu, y. qu, j. xu, y. cao, application of the xgboost machine learning method in pm2.5 prediction: a case study of shanghai, aerosol air qual. res., vol. 20, no. 1, 2020, pp. 128–138. doi: 10.4209/aaqr.2019.08.0408
[33] n. a. f. k. zaman, k. d. kanniah, d. g. kaskaoutis, m. t. latif, evaluation of machine learning models for estimating pm2.5 concentrations across malaysia, appl. sci., vol. 11, no. 16, jan. 2021, art. no. 16. doi: 10.3390/app11167326
[34] m. zamani joharestani, c. cao, x. ni, b. bashir, s. talebiesfandarani, pm2.5 prediction based on random forest, xgboost, and deep learning using multisource remote sensing data, atmosphere, vol. 10, no. 7, jul. 2019, art. no. 7. doi: 10.3390/atmos10070373
[35] w. n. shaziayani, a. z. ul-saufie, s. mutalib, n. mohamad noor, n. s. zainordin, classification prediction of pm10 concentration using a tree-based machine learning approach, atmosphere, vol. 13, no. 4, apr. 2022, art. no. 4. doi: 10.3390/atmos13040538
[36] j. kujawska, m. kulisz, p. oleszczuk, w. cel, machine learning methods to forecast the concentration of pm10 in lublin, poland, energies, vol. 15, no. 17, jan. 2022, art. no. 17. doi: 10.3390/en15176428
[37] i. yeo, y. choi, y. lops, a. sayeed, efficient pm2.5 forecasting using geographical correlation based on integrated deep learning algorithms, neural comput. appl., vol. 33, no. 22, nov. 2021, pp. 15073–15089. doi: 10.1007/s00521-021-06082-8
[38] g. iadarola, a. poli, s. spinsante, compressed sensing of skin conductance level for iot-based wearable sensors, 2022 ieee int. instrumentation and measurement technology conf. (i2mtc), ottawa, on, canada, 16-19 may 2022, pp. 1-6. doi: 10.1109/i2mtc48687.2022.9806516
[39] j. gajda et al., design and accuracy assessment of the multi-sensor weigh-in-motion system, 2015 ieee int. instrumentation and measurement technology conf. (i2mtc), pisa, italy, 11-14 may 2015, pp. 1036–1041. doi: 10.1109/i2mtc.2015.7151413
[40] g. iadarola, a. poli, s. spinsante, analysis of galvanic skin response to acoustic stimuli by wearable devices, 2021 ieee int. symposium on medical measurements and applications (memea), lausanne, switzerland, 23-25 june 2021, pp. 1-6. doi: 10.1109/memea52024.2021.9478673
[41] p. daponte, l. de vito, g. iadarola, s. rapuano, experimental characterization of a rf mixer for wideband data acquisition systems, 2017 ieee int. instrumentation and measurement technology conf. (i2mtc), turin, italy, 22-25 may 2017, pp. 1-6. doi: 10.1109/i2mtc.2017.7969884
introductory notes for the acta imeko thematic issue on measurement systems and instruments based on iot technologies for health

acta imeko issn: 2221-870x, september 2023, volume 12, number 3, 1 – 2

grazia iadarola1, susanna spinsante1, francesco lamonaca2
1 department of information engineering (dii), polytechnic university of marche, via brecce bianche 12, 60131, ancona, italy
2 department of computer science, modeling, electronics and systems engineering (dimes), university of calabria, 87036, arcavacata di rende, italy

section: editorial
citation: grazia iadarola, susanna spinsante, francesco lamonaca, introductory notes for the acta imeko thematic issue on measurement systems and instruments based on iot technologies for health, acta imeko, vol. 12, no. 3, article 2, september 2023, identifier: imeko-acta-12 (2023)-03-02
received june 28, 2023; in final form june 30, 2023; published september 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: grazia iadarola, e-mail: g.iadarola@staff.univpm.it

dear readers,
the internet of things (iot) has revolutionized many services people rely on in everyday life, and the healthcare sector is no exception: there, the impact of iot is growing rapidly. indeed, the use of iot technologies helps healthcare by connecting patients with medical staff more easily for continuous, long-term monitoring, through smart data management as well as innovative wearables. with these premises, we are delighted to present to you this new thematic issue, "measurement systems and instruments based on iot technologies for health". the papers accepted for this issue consider, from different points of view, the importance of measurement systems and instruments for iot applications related to health. common threads run through these papers, from low-cost monitoring to wireless interfaces, both fundamental for iot. an important theme underlined by these papers is air quality monitoring, which should be carried out in everyday life to avoid serious health risks.
a first example of low-cost monitoring for an iot application related to health is presented in [1], where a low-cost monitoring system is proposed for skin conductance signals. the comparison with the signals provided by reference desktop equipment points out the ability of the low-cost system to provide the same relevant information for stimulus detection, despite its simpler design and hardware limitations: the increases of both baseline and peaks after stimulation are indeed detected by the proposed low-cost system.

a low-cost system for air quality is then described in [2]. the system monitors co2 and volatile organic compounds (vocs) inside protective equipment such as ffp2 masks (commonly adopted during the covid-19 pandemic), aggregates data over a 15-minute window, and calculates average values for each measured parameter. by comparing the average values to reference thresholds, the monitoring system can suggest removing the mask when necessary. an innovative aspect is the personalized monitoring of exhaled breath, as customized and reliable information is provided to doctors thanks to the integration of removable memories.

air quality in living environments or outdoors is also discussed in [3], where a portable monitoring station is presented that measures parameters such as co2 and vocs in the presence of pollutants. the portable station acquires a combination of weather and air quality parameters with low cost and reduced power consumption. data connectivity is ensured by wireless interfaces, and the portable station can be part of a network for distributed monitoring and alert delivery in case of critical weather and air conditions.

similarly, wireless networks can be exploited for indoor location identification and tracking. in [4], a method for remote rehabilitation is presented that requires only access to wi-fi points (routers), without the hotspot mode on user mobile devices. wi-fi access points with non-standard firmware are proposed as the measuring equipment to determine, in real time, the coordinates of the patient's location within a medical institution.

finally, the paper [5] presents a mechatronic automatism based on near-field communication that allows the identification of user garments from sensor data, to improve the quality of life of fragile people, such as blind or disabled ones. with the help of an integrated interface managing the requests from the user, a proper algorithm classifies the garments depending on their predominant colour.

we hope you will enjoy your reading.
grazia iadarola, susanna spinsante and francesco lamonaca, guest editors

references
[1] g. iadarola, v. bruschi, s. cecchi, n. dourou, s. spinsante, low-cost monitoring for stimulus detection in skin conductance, acta imeko, vol. 12, no. 3, pp. 1-6. doi: 10.21014/actaimeko.v12i3.1540
[2] f. ruffa, m. lugarà, g. fulco, c. de capua, an iot measurement system for a tailored monitoring of co2 and total volatile organic compounds inside face masks, acta imeko, vol. 12, no. 3, pp. 1-8. doi: 10.21014/actaimeko.v12i3.1536
[3] m. n. m. aashiq, w. t. c. c. kurera, m. g. s. p. thilekaratne, a. m. a. saja, m. r. m. rouzin, n. neranjan, h. yassin, an iot-based handheld environmental and air quality monitoring station, acta imeko, vol. 12, no. 3, pp. 1-8. doi: 10.21014/actaimeko.v12i3.1487
[4] o. tohoiev, determining the location of patients in remote rehabilitation by means of a wireless network, acta imeko, vol. 12, no. 3, pp. 1-5. doi: 10.21014/actaimeko.v12i3.1495
[5] l. silva, d. rocha, f. soares, j. sena esteves, v. carvalho, automatic system to identify and manage garments for blind people, acta imeko, vol. 12, no. 3, pp. 1-10. doi: 10.21014/actaimeko.v12i3.1490

introductory notes for the acta imeko first issue 2023, special issue on metrology and digital transformation

acta imeko issn: 2221-870x, march 2023, volume 12, number 1, 1 – 3

jon bartholomew1, daniel hutzschenreuter2, siniša prugovečki3, cristian zet4
1 emirates metrology institute, abu dhabi quality and conformity council, p.o. box 853, abu dhabi, united arab emirates
2 department 9.4 metrology for the digital transformation, physikalisch-technische bundesanstalt (ptb), bundesallee 100, 38116 braunschweig, germany
3 lorisq inc., 99 wall st., new york 10005, u.s.a. / metroteka d.o.o., kreše golika 3, 10000 zagreb, croatia
4 gheorghe asachi technical university of iasi, faculty of electrical engineering, 23 d. mangeron blvd, 700050 iasi, romania

section: editorial
citation: daniel hutzschenreuter, introductory notes for the acta imeko first issue 2023, special issue on metrology and digital transformation, acta imeko, vol. 12, no. 1, article 2, march 2023, identifier: imeko-acta-12 (2023)-01-02
received march 22, 2023; in final form march 30, 2023; published march 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: francesco lamonaca, e-mail: editorinchief.actaimeko@hunmeko.org

dear readers,
the transition of metrology into the emerging digital era has grown to become one of the key challenges of the 21st century for metrologists worldwide. we already know that its highly cross-functional and interdisciplinary nature will lead to many major changes in the field, where not only expertise in measurement science but also skills in digital data, digital infrastructures, and machine-driven processes will play an important role. imeko, accompanied by acta imeko, is playing an important role in interconnecting experts around the world to share knowledge and foster the transition into the digital future. as a result, imeko tc6 was started in 2021 [1]. the aim of tc6 is to develop, organise and disseminate fundamental concepts of measurement science that relate to digitalization and digital transformation in science, industry, and society. the tc promotes the accumulation and curation of knowledge in various forms relating to the digitalization of measurement methodologies and measurement outcomes. its purpose is to provide a robust body of knowledge to support digital transformation whenever measurement is involved.
Digitalization's multidisciplinary nature is expected to overlap with other IMEKO groups' interests; TC6 therefore encourages collaborations and joint activities with other TCs and partner organizations around the globe. On 19 - 21 September 2022, TC6 organized the first International Conference on Metrology and Digital Transformation, the M4DConf 2022. With over 50 presentations and more than 200 participants from around the world, the conference received great attention. Highlights were the sessions by patron organisations: a session organized by the International Committee for Weights and Measures (CIPM) on the digital transformation of the International System of Units, a session organized by the International Organization of Legal Metrology (OIML), a session on metrological traceability in digital applications by IMEKO TC8, as well as sessions supported by EURAMET and EUROLAB. With further support by APMP, GULFMET and SIM, many more high-level sessions were held, providing a broad spectrum of hands-on contributions. Topics included: pathways to digital transformation for small and emerging organizations, representation and use of FAIR (findable, accessible, interoperable, reusable) metrological information, digital infrastructures and technologies, metrology for advanced and sustainable manufacturing, and applications of artificial intelligence methods. In almost 50 % of all presentations, the development and use of machine-readable digital certificates in metrology was presented as an important technology. On behalf of IMEKO TC6 and the editors, it is a delight that this special issue of Acta IMEKO is dedicated to the 16 best contributions on digitalization in metrology from M4DConf 2022. It would not have been possible without the dedication of the authors, reviewers, conference committee members, editors and the exceptional support of the Acta IMEKO team (Francesco Lamonaca, Dirk Röske and the copy editors). The usability and interoperability of the conventional quantity-unit system forms the basis for reliable metrological data representation and exchange between both humans and machines. While today's analogue systems are suitable for human users, a direct translation into digital formats poses difficulties for digitalization. The metrology-information layer (M-layer) is a new approach from research to overcome these limitations by providing profound metadata describing quantity kinds (aspects), units and scales, to disambiguate the transition between unit systems. Blair Hall and Mark Kuster have made a major contribution to the development of the M-layer concept. The paper by Mark Kuster [2] reports work toward developing the M-layer's current abstract conceptualization into a concrete model, working prototype, and demonstration software, with the eventual goal to create a FAIR resource. Sensor networks (SN) are becoming an inherent part of many upcoming technologies, including smart cities, smart grids, complex process monitoring in many industries, autonomous driving, healthcare and many other applications. Martin Koval et al. [3] describe the general approach for SNs in their paper and deal with the principal components of SN architecture, plus the opportunities for the implementation of current and new technologies. Together with other technologies, such as AI and the utilization of big data, the SN is becoming an important tool for the optimization of many processes.
The SN can help to push the limits in metrology with new, effective AI algorithms, or help to solve challenges in uncertainty evaluation. Moreover, SNs as measuring systems underpin many developments in digital transformation, with applications ranging from regulated utility networks to the low-cost Internet of Things (IoT). The metrological assessment of sensor networks necessitates a fundamental revision of calibration, uncertainty propagation and performance assessment, and new approaches for information and data handling regarding the individual sensors and their interactions in the network, to allow a systems metrology approach to be established. The contribution by Sascha Eichstädt et al. [4] introduces some initial findings from recent research and gives an outlook on future developments. Early in 2022, the National Institute of Standards and Technology embarked on a pilot project to produce digital calibration reports and digital certificates of analysis for reference materials. William Dinis Camara et al. [5] present a paper reporting on the progress of the effort for reference material certificates and discuss some of the challenges and solutions in creating digital certificates. Challenges include the diverse and complex information presently contained in certificates, conversion of values to non-SI units of measurement to match the needs of stakeholders, format updates necessary for machine generation, and the wide variety of reference materials offered. In automated digital systems and environments such as Industry 4.0, measurement data, including the calibration information, will need to flow through the whole process chain in digital format. The study presented by Juho Nummiluikki et al. [6] demonstrates a fully digitalized environment for calibration data generation, transfer, and usage in a proof-of-concept (PoC) project. It is the outcome of a collaborative approach showing an exchange of calibration information by DCCs from a national metrology institute through an accredited laboratory to end-users of calibrated measuring instruments. Digital technologies have been proving their usefulness in instrumentation and measurement for many years. The ability to use software in measuring instruments became common practice, bringing the advantage of signal and information processing and interfacing. Instruments can be calibrated and verified in an automated calibration system that can perform the whole process without operator interference. The paper by Cristian Zet et al. [7] presents the possibility of joining an automated calibration system and the creation of DCCs with blockchain technology for storing and validating data. Demonstrated benefits are the digital traceability of DCC content, by preservation of the full history of calibration information in a digital wallet, and mechanisms to protect data from change. Degradation of temperature sensors in harsh environments, such as high temperature, contamination, vibration, and ionising radiation, causes a progressive loss of accuracy that is not apparent. New developments to overcome the problem of 'calibration drift' include self-validating thermocouples and embedded phase-change cells, which self-calibrate in situ by means of a built-in temperature reference, and practical primary thermometers, such as the Johnson noise thermometer, which measure temperature directly and do not suffer from calibration drift.
All these developments provide measurement assurance, which is an essential part of digitalization, ensuring that sensor output is always 'right' and providing essential 'points of truth' in a sensor network. Jonathan Pearce et al. [8] present an overview of state-of-the-art developments, giving an excellent insight into how data quality can benefit from combining new measurement concepts with digital technologies. Sensor networks could provide useful new tools for laboratories to increase the understanding of other measurements, and to validate assumptions on, and optimize, existing measurements. A practical application for a more accurate determination of the measurement uncertainty of an interferometric long-distance measurement is presented in the work by Gertjan Kok et al. [9]. A network with five temperature sensors was installed in a laboratory to measure a more detailed profile of the ambient temperature. It aims at a more accurate determination of values for the refractive index on the path travelled by the laser light. During the measurement campaign an offset in the mean temperature of 0.2 °C was found, which was equal to the maximum allowed bias in view of the claimed uncertainty for the long-distance measurement. In the framework of the EMPIR project ComTraForce, a digital twin (DT) concept of a force measurement device was developed. The DT aims to cover static, continuous, as well as dynamic calibration processes, preserving data quality and collecting calibration data for improved decision-making. To illustrate the DT concept, a prototype realization for static and continuous force calibration processes was developed, involving a simulation with ANSYS engineering software. Oksana Baer et al. [10] report on the current progress of the work in their paper. It is focused on the data connection between a physical device and the DT. The DT model is validated using traceable measurements. In recent years, the need for better data science and data engineering has raised many challenges for metrology concerning the increasing amount of data that needs to be made available in 'appropriate' ways. To ensure findability, accessibility, interoperability, and reusability (FAIR) of digital resources, digital objects as a synthesis of data and metadata with persistent and unique identifiers should be used. In this context, the FAIR data principles formulate requirements that research data and, ideally, also industrial data should fulfil to make full use of them, particularly when machine learning or other data-driven methods are under consideration. In the contribution by Tanja Dorst et al. [11], the process of providing scientific data of an industrial testbed in a traceable and FAIR manner is documented as a descriptive example. 'Data metrology', i.e., the evaluation of data quality and its fitness for purpose, is an inherent part of many disciplines including physics and engineering. In other domains, such as life sciences, health, and pharmaceutical manufacturing, these tools are often added as an afterthought, if considered at all. The use of data-driven decision-making and the advent of machine learning in these industries has created an urgent demand for harmonized, high-quality, and instantly available datasets across domains. FAIR principles alone do not guarantee that data is fit for purpose.
Issues such as missing data and metadata, or insufficient knowledge of measurement conditions or data provenance, are well known and can be addressed by applying metrological concepts to data preparation to increase confidence. A showcase of life science and healthcare projects where data metrology has been used to improve data quality is presented by Paul M. Duncan et al. [12]. Digitalization will also affect the ways metrologists handle data in their everyday work. An example of a modern approach to ease annotating, archiving, retrieving, and searching measurement data from a large-scale data archival system is described by Frederic Brochu et al. [13]. Their tool extends and simplifies the interaction with their database and is implemented in popular scientific applications used for data analysis, namely MATLAB and Python. It allows scientists to execute complex interactions with the database for data curation and retrieval tasks in a few simple lines of accessible, templated code. Shifting from paper-based data archives to purely digital ones raises a need for proper tracing of changes in data records without accidentally deleting or overwriting content. Vashti Galpin et al. [14] have developed a framework allowing traceable updates in relational databases. They present a prototype web application, developed in the programming language Links, for storing and displaying DCCs using a relational database. Their work leverages the temporal database features that Links provides to capture different versions of a certificate and inspect differences between versions. Modern programming language type systems help programmers write correct software and, furthermore, help them write the software they intended to write. Such type systems could be a powerful tool in the digitalization of metrology. By exploiting advances in dependent type systems, it is possible to strengthen the ability of software to reason about the dimensional correctness of metrology data and to bridge the gap between human-readable semantic specifications of data and the actual code representing it in a specific programming environment. Conor McBride et al. [15] show in their paper how expressive types can be used to encode dimension and unit-of-measurement information, which can be used to avoid dimensional mistakes and guide software construction. Automatic creation of source code is considered to further help eliminate a whole class of potential bugs in software. Introducing new information systems in organisations is often disruptive for individual departments. Alexander Oppermann et al. [16] show how the application of an operation-layer concept can allow departments to maintain their existing processes and infrastructures and still harmonise data through the use of uniform representational state transfer interfaces. The automatic data transfer implemented reduces the workload for employees, increases the productivity, integrity and availability of data, and greatly reduces the susceptibility to errors.

We hope you will enjoy your reading.
Jon Bartholomew, Daniel Hutzschenreuter, Siniša Prugovečki and Cristian Zet, Guest Editors for the special issue on metrology and digital transformation

References
[1] IMEKO, IMEKO TC6 homepage. Online [accessed 24 March 2023] https://www.imeko.org/index.php/tc6-homepage
[2] Mark J. Kuster, Toward a metrology-information layer for digital systems, Acta IMEKO, vol. 12 (2023) no. 1, pp. 1-4.
doi: 10.21014/actaimeko.v12i1.1416
[3] Martin Koval et al., General sensors network application approach, Acta IMEKO, vol. 12 (2023) no. 1, pp. 1-5. doi: 10.21014/actaimeko.v12i1.1409
[4] Sascha Eichstädt et al., Fundamental aspects in sensor network metrology, Acta IMEKO, vol. 12 (2023) no. 1, pp. 1-6. doi: 10.21014/actaimeko.v12i1.1417
[5] William Dinis Camara et al., Digital NIST: an examination of the obstacles and opportunities in the digital transformation of NIST's reference materials, Acta IMEKO, vol. 12 (2023) no. 1, pp. 1-4. doi: 10.21014/actaimeko.v12i1.1403
[6] Juho Nummiluikki et al., Digital calibration certificate in an industrial application, Acta IMEKO, vol. 12 (2023) no. 1, pp. 1-6. doi: 10.21014/actaimeko.v12i1.1402
[7] Cristian Zet et al., Automated calibration and DCC generation system with storage in private permissioned blockchain network, Acta IMEKO, vol. 12 (2023) no. 1, pp. 1-7. doi: 10.21014/actaimeko.v12i1.1414
[8] Jonathan Pearce et al., Progress towards in-situ traceability and digitalization of temperature measurements, Acta IMEKO, vol. 12 (2023) no. 1, pp. 1-6. doi: 10.21014/actaimeko.v12i1.1386
[9] Gertjan Kok et al., Improved uncertainty evaluation for a long distance measurement by means of a temperature sensor network, Acta IMEKO, vol. 12 (2023) no. 1, pp. 1-6. doi: 10.21014/actaimeko.v12i1.1411
[10] Oksana Baer et al., Digital twin concept of a force measuring device based on the finite element method, Acta IMEKO, vol. 12 (2023) no. 1, pp. 1-5. doi: 10.21014/actaimeko.v12i1.1404
[11] Tanja Dorst et al., A case study on providing FAIR and metrologically traceable data sets, Acta IMEKO, vol. 12 (2023) no. 1, pp. 1-6. doi: 10.21014/actaimeko.v12i1.1401
[12] Paul M. Duncan et al., Metrology for data in life sciences, healthcare and pharmaceutical manufacturing: case studies from the National Physical Laboratory, Acta IMEKO, vol. 12 (2023) no. 1, pp. 1-5. doi: 10.21014/actaimeko.v12i1.1406
[13] Frederic Brochu et al., A tool for curating and searching databases proving traceable analysis of data and workflows, Acta IMEKO, vol. 12 (2023) no. 1, pp. 1-6. doi: 10.21014/actaimeko.v12i1.1408
[14] Vashti Galpin et al., Tracking and viewing modifications in digital calibration certificates, Acta IMEKO, vol. 12 (2023) no. 1, pp. 1-7. doi: 10.21014/actaimeko.v12i1.1407
[15] Conor McBride et al., Measuring with confidence: leveraging expressive type systems for correct-by-construction software, Acta IMEKO, vol. 12 (2023) no. 1, pp. 1-5. doi: 10.21014/actaimeko.v12i1.1412
[16] Alexander Oppermann et al., Digital transformation: towards a cloud native architecture for highly automated and event driven processes, Acta IMEKO, vol. 12 (2023) no. 1, pp. 1-6.
doi: 10.21014/actaimeko.v12i1.1410

Multilayer feature fusion using covariance for remote sensing scene classification

Acta IMEKO
ISSN: 2221-870X
March 2022, volume 11, number 1, 1 - 8

Multilayer feature fusion using covariance for remote sensing scene classification

S. Thirumaladevi1, K. Veera Swamy2, M. Sailaja3
1 ECE Department, Jawaharlal Nehru Technological University, Kakinada 533003, Andhra Pradesh, India
2 ECE Department, Vasavi College of Engineering, Ibrahimbagh, Hyderabad 500 031, Telangana, India
3 ECE Department, Jawaharlal Nehru Technological University, Kakinada 533003, Andhra Pradesh, India

Section: Research paper
Keywords: feature extraction; pre-trained convolutional neural networks; support vector machine; scene classification
Citation: S. Thirumaladevi, K. Veera Swamy, M. Sailaja, Multilayer feature fusion using covariance for remote sensing scene classification, Acta IMEKO, vol. 11, no. 1, article 33, March 2022, identifier: IMEKO-ACTA-11 (2022)-01-33
Section editor: Md Zia Ur Rahman, Koneru Lakshmaiah Education Foundation, Guntur, India
Received December 25, 2021; in final form February 20, 2022; published March 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: S. Thirumaladevi, e-mail: thirumaladeviece1@gmail.com

Abstract: Remote sensing images are obtained by electromagnetic measurement from the terrain of interest. In high-resolution remote sensing imagery, extraction measurement technology plays a vital role. Scene classification is an interesting and challenging problem owing to the similarity of image structures, and the available HRRS image data sets are all small. Training new convolutional neural networks (CNNs) on small data sets is prone to overfitting and poor attainability. To overcome this situation, we use the features produced by pre-trained convolutional nets and train an image classifier on those features. To retrieve informative features from the images, we use the existing AlexNet, VGG16 and VGG19 frameworks as feature extractors. To further increase classification performance, we make an innovative contribution: the fusion of multilayer features by means of covariance. First, a pre-trained CNN model is used to extract multilayer features. The features are then stacked, with downsampling used to stack features of different spatial dimensions together, and the covariance of the stacked features is calculated. Finally, the resulting covariance matrices are employed as features in a support vector machine classification. The experiments were conducted on two challenging data sets, UC Merced and SIRI-WHU. The proposed stacked covariance method consistently outperforms the corresponding pre-trained CNN scene classification methods, improving accuracy by an average of 6 % and 4 %, respectively.

1. Introduction

Remote sensing scene categorization has received a lot of attention recently and can be utilized in a variety of practical applications, such as urban planning, defence and space applications, in which measurement technology plays a key role [1]. On the other hand, it is a difficult challenge, since scene images often have complicated spatial structures with great intra-class and slight inter-class variability. To solve this problem, numerous strategies for scene classification have been advised in recent years [2], recently inspired by the tremendous achievements of convolutional neural networks (CNNs) in the computer vision field [3]. Deep neural networks have gained prominence in the remote sensing community due to their exceptional performance, particularly in scene classification and computer vision applications [4]. Developing a deep CNN model from scratch, on the other hand, frequently necessitates a large amount of training data, whereas the available remote sensing scene image data sets are typically small.
Deep CNN models have a high degree of generalization on a wide range of tasks (e.g., scene classification and object detection [6]) because they are commonly trained on ImageNet [5], which contains millions of images. In this context, the idea of using off-the-shelf pre-trained CNN models, for example AlexNet [7], Visual Geometry Group (VGG) 16 [8], and VGG19, as feature extractors for remote sensing scene categorization has gained traction. The success is due to these models representing images with a hierarchical architecture that can extract more representative features, and the categorization performance these models achieve is excellent. Hu et al. [9] looked into two scenarios for using a pre-trained CNN model (VGG16). In the first scenario, the final few fully connected layers are portrayed as final image attributes for scene classification. In the second case, the final convolutional layer's feature maps are encoded to represent the input image using a standard method of feature encoding, such as the improved Fisher kernel [10]. The support vector machine (SVM) is used as the final classifier in both cases. To improve efficacy, the features extracted from multiple CNNs for the same image were combined by Xue et al. [11] for classification. For feature fusion, Sun et al. [12] used the gated bidirectional connection method. In [13], the image is represented by combining the last two fully connected (FC) layers of a CNN model. Here we propose an innovative method, called the stacked covariance (SC) strategy, to fuse features from different layers of a pre-trained CNN to classify remote sensing scenes. In the first phase, a pre-trained CNN model is used to extract multilayer features and concatenate them.
The covariance approach is used to aggregate the concatenated feature vectors extracted from different layers. In contrast to traditional strategies, which only use first-order statistics to integrate feature vectors, the proposed strategy allows the use of second-order statistical information, so more representative features can be learned. The features are then stacked and their covariance is calculated; finally, an SVM classifier is used for classification, improving the classification performance. The rest of the paper is organized as follows. Section 2 explains the intended scene classification framework and the novel aspects of our proposed technique. Section 3 contains the full experimental results for two data sets, and Section 4 concludes the work with some observations.

2. Proposed technique description

The process of transforming the raw image into numerical features that can be processed while retaining the original information is referred to as feature extraction. With the upsurge of deep learning, the first layers of deep networks have largely replaced hand-crafted feature extraction, particularly for image data. Pre-trained networks with hierarchical architecture can extract a large number of features from an image, which is thought to convey additional information that can be put to much better use to increase categorization accuracy. The learned image characteristics are first retrieved from a pre-trained convolutional neural network and then used to train an image classifier. All pre-trained CNNs require fixed-size input images; we therefore specify the desired image size, create augmented image datastores, and use these datastores as input arguments to the activations, so that the training and test images are automatically resized before they are submitted to the network. We remove the pre-trained CNN's last FC layer (fc8) and consider the rest as a fixed feature extractor: we feed an input scene image into the CNN, generate a d-dimensional activation vector from the first or second FC layer, and use this vector as a global feature representation of the input image. Finally, the d-dimensional features are used to train a linear SVM classifier for scene classification. Figure 1 shows an illustration of this. To improve the classification accuracy further, we propose a modified pre-trained network design that combines information from several convolutional layers. The shallower levels of a CNN model are more likely to represent low-level visual components (such as edges), whereas the deeper layers exemplify more abstract information in the images. Furthermore, in certain computer vision applications, combining different levels, from shallow to deep, can provide state-of-the-art performance, meaning that merging different layers of a CNN can be very helpful. Our proposed approach uses a similar strategy to take advantage of the information held by multiple layers; this is represented in Figure 2. Here, the convolutional layers of the last three blocks of the pre-trained networks are adopted and the features extracted from these layers are concatenated, namely "conv3", "conv4", "conv5" in the case of AlexNet and "conv3-3", "conv4-3", and "conv5-3" in the case of VGG16 and VGG19. Different convolutional layers predominantly have distinctive spatial dimensions, so they cannot be directly concatenated. To address this issue, downsampling with bilinear interpolation is used in conjunction with channel-wise average fusion, as sketched below.
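A minimal sketch of this extraction step is given below, assuming torchvision's ImageNet-pretrained AlexNet as the backbone. The hooked layer indices, the target grid size S, and the plain channel stacking (standing in for the paper's channel-wise average fusion, whose exact grouping is not detailed here) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F
from torchvision import models
from torchvision.transforms import functional as TF

model = models.alexnet(weights="IMAGENET1K_V1").eval()
taps = {}  # filled by forward hooks with the conv3/conv4/conv5 activations
for name, idx in [("conv3", 6), ("conv4", 8), ("conv5", 10)]:
    model.features[idx].register_forward_hook(
        lambda mod, inp, out, k=name: taps.__setitem__(k, out))

def stacked_features(image, S=13):
    """Image tensor (3, H, W) -> stacked multilayer feature set (D, S, S)."""
    # AlexNet input size; ImageNet mean/std normalization omitted for brevity.
    x = TF.resize(image, [227, 227]).unsqueeze(0)
    with torch.no_grad():
        model(x)                                   # the hooks fill `taps`
    # Bilinear downsampling brings every layer to a common S x S grid,
    # after which the layers are stacked along the channel dimension.
    maps = [F.interpolate(taps[k], size=(S, S), mode="bilinear",
                          align_corners=False)
            for k in ("conv3", "conv4", "conv5")]
    return torch.cat(maps, dim=1).squeeze(0)       # here D = d1 + d2 + d3
```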
The obtained features are reformed into a matrix along the channel dimension and aggregated using covariance. The proposed technique is described below. A CNN model is a collection of functions in which each function $f_n$ takes data samples $X_n$ and a filter bank $b_n$ as inputs and outputs $X_{n+1}$, where $n = 1, 2, \dots, N$ and $N$ is the number of layers:

$F(X) = f_N(\cdots f_2(f_1(X; b_1); b_2) \cdots ; b_N)$ .  (1)

For a pre-trained CNN model, the filter banks $b_n$ were learned from a large data collection. The multilayer features are retrieved from an input image $X$ as $L_1 = f_1(X; b_1)$, $L_2 = f_2(L_1; b_2)$, and so on. As pre-trained models, AlexNet, VGG16, and VGG19 are employed in this paper, and the features produced by the convolutional layers of the last three blocks of the pre-trained networks are adopted and utilized. Different convolutional layers typically have different spatial dimensions, so they cannot be concatenated directly: direct concatenation is not allowed when conv3 has $L_1 \in \mathbb{R}^{h_1 \times w_1 \times d_1}$, conv4 has $L_2 \in \mathbb{R}^{h_2 \times w_2 \times d_2}$, and conv5 has $L_3 \in \mathbb{R}^{h_3 \times w_3 \times d_3}$. Downsampling with bilinear interpolation, together with channel-wise average fusion, is used to solve this problem. By downsampling, the three pre-processed convolutional layers are brought to a common $d$-dimensional channel representation, channel-wise average fusion is performed, and the stacked feature set is acquired as $L = [L_1, L_2, L_3] \in \mathbb{R}^{S \times S \times D}$, where $D = 3d$ and $S$ is the predefined down-sampled spatial dimension.

Figure 1. Classification using a single-layered pre-trained CNN as a feature extractor.
Figure 2. Classification using a stacked multilayer pre-trained CNN as a feature extractor.

The covariance-based pooling can be written as [14]

$P = \frac{1}{N-1} \sum_{i=1}^{N} (Y_i - \mu)(Y_i - \mu)^{\mathrm{T}} \in \mathbb{R}^{D \times D}$ ,  (2)

where $[Y_1, Y_2, \dots, Y_N] \in \mathbb{R}^{D \times N}$ is the vectorization of $L$, $N = S^2$ and $\mu = (1/N) \sum_{i=1}^{N} Y_i \in \mathbb{R}^{D}$. The covariance between two separate feature maps is represented by the off-diagonal entries of $P$, while the variance of each map is represented by the diagonal entries. This method incorporates covariance (i.e., second-order statistics) to produce a more compact and discriminative representation: each entry of the covariance matrix expresses the correlation between two distinct feature maps, which is an easy way to merge complementary information from different feature maps. The suggested method thus differs from existing pre-trained CNN-based algorithms: by concatenating the CNN's individual convolutional features (from shallow to deep layers), feature maps from several layers are merged, and as a result the suggested technique performs much better in terms of categorization. Furthermore, because covariance matrices do not lie in Euclidean space, they cannot be processed by the SVM directly. A covariance matrix can, however, be mapped into Euclidean space using the matrix logarithm operation [15]:

$\hat{P} = \operatorname{logm}(P) = U \log(\Sigma)\, U^{\mathrm{T}} \in \mathbb{R}^{D \times D}$ ,  (3)

where $P = U \Sigma U^{\mathrm{T}}$ is the eigendecomposition of $P$.
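The pooling and mapping of eqs. (2)-(3) can be sketched compactly, assuming numpy/scipy; the small diagonal ridge added before the logarithm is a numerical-stability assumption, not part of the paper's formulation.

```python
import numpy as np
from scipy.linalg import logm

def log_cov_feature(L):
    """L: stacked features as a numpy array (D, S, S) -> log-covariance (D, D)."""
    D = L.shape[0]
    Y = L.reshape(D, -1)          # vectorization: columns Y_i, N = S^2
    P = np.cov(Y)                 # eq. (2): (D, D) with 1/(N-1) scaling
    P += 1e-6 * np.eye(D)         # tiny ridge so the matrix log is well defined
    return logm(P).real           # eq. (3): U log(Sigma) U^T
```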
The preceding operations are carried out on both the training and the testing samples. The training set $\{(V_i, S_i)\}$, $i = 1, 2, \dots, n$, where $V_i$ are the log-covariance features, $S_i$ the corresponding labels and $n$ the number of training samples, is exploited to train an SVM model as

$\min_{a, b, \zeta} \left\{ \tfrac{1}{2} \|a\|^2 + C \sum_i \zeta_i \right\}$ subject to $S_i \left( \varphi(V_i)^{\mathrm{T}} a + b \right) \ge 1 - \zeta_i$, $\zeta_i > 0$, $i = 1, 2, \dots, n$ ,  (4)

where $a$ and $b$ are the parameters of the linear classifier, $\varphi(\cdot)$ is the mapping function and $\zeta_i$ are positive slack variables that cope with outliers in the training set. With the linear kernel $k(V_i, V_j) = V_i^{\mathrm{T}} V_j$, the decision function is

$f(x) = \operatorname{sgn} \left( \sum_{i=1}^{n} S_i \lambda_i \, k(V_i, V) + b \right)$ .  (5)
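The training stage of eqs. (4)-(5) corresponds to an off-the-shelf linear SVM. The sketch below assumes scikit-learn's LinearSVC and hypothetical arrays train_logcovs, train_labels and test_logcovs holding the log-covariance matrices and labels; vectorizing only the upper triangle of each symmetric matrix avoids duplicated entries.

```python
import numpy as np
from sklearn.svm import LinearSVC

def to_vector(P_hat):
    """Symmetric (D, D) log-covariance -> feature vector of length D(D+1)/2."""
    iu = np.triu_indices(P_hat.shape[0])
    return P_hat[iu]

# train_logcovs / train_labels / test_logcovs are hypothetical placeholders.
X_train = np.stack([to_vector(p) for p in train_logcovs])
clf = LinearSVC(C=1.0).fit(X_train, train_labels)   # the linear SVM of eq. (4)
pred = clf.predict(np.stack([to_vector(p) for p in test_logcovs]))
```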
3. Experimental results analysis and discussion

3.1. Experimental data sets

We ran tests on two challenging remote sensing scene image data sets to see how well the suggested approach performs. 1) UC Merced land use data set [16]: 2100 pictures are classified into 21 scene groups in the UC Merced land use (UC) [17] data set. Each class consists of 100 images in RGB space with a size of 256 × 256 pixels, and each image has a one-foot pixel resolution. Figure 3 depicts sample images from each class. Some categories (forest and sparse residential, for example) exhibit a significant level of inter-class similarity, making the UC data set a difficult one to work with. 2) SIRI-WHU [18]: this data set was obtained from Google Earth (Google Inc.) and covers urban regions in China. There are 12 classes in the data set. Each class has 200 photos, each cropped to 200 × 200 pixels and with a spatial resolution of 2 meters. In this study, 80 % of the samples of the SIRI-WHU [19] Google data set were chosen for training, while the remaining samples were kept for testing. Sample images of the SIRI-WHU data set are shown in Figure 4.

Figure 3. Land-use categories of 21 example classes of the UC Merced data set: a) agricultural13, b) airplane19, c) baseballdiamond3, d) beach33, e) buildings21, f) chaparral13, g) denseresidential40, h) forest23, i) freeway23, j) golfcourse41, k) harbour31, l) intersection3, m) mediumresidential12, n) mobilehomepark12, o) overpass45, p) parkinglot32, q) river32, r) runway26, s) sparseresidential64, t) storagetanks54, u) tenniscourt16.
Figure 4. Example class representation of the SIRI-WHU data set: a) agriculture1, b) commercial50, c) harbor64, d) idle_land76, e) industrial111, f) meadow120, g) overpass15, h) park29, i) pond37, j) residential1, k) river106, l) water97.

3.2. Experimental setup

In our approach, multilayer features are extracted using three well-known pre-trained CNN models: AlexNet [7], VGG-19 [8], and VGG-16 [8]. For VGG-16 and VGG-19, the three convolutional layers "conv3-3", "conv4-3" and "conv5-3" are used; for AlexNet, the three convolutional layers "conv3", "conv4" and "conv5" are used. For feature extraction, the scene images are resized to the size of the input layer, i.e. 227 × 227 × 3 in the case of AlexNet and 224 × 224 × 3 for VGG-16 and VGG-19. Both model families are trained on ImageNet. For illustration purposes, the UC data set and the SIRI-WHU data set are used, with 80 % training samples and 20 % testing samples selected. The most frequently used image classification assessment criteria are the overall accuracy (OA), the confusion matrix and the F1-score. Confusion matrix: a special matrix commonly used to visualize the output; each column represents the estimated category, while each row signifies the authentic category, so evaluation is relatively simple. Overall accuracy (OA): the number of appropriately categorized images divided by the total number of images in the data set, regardless of which class they belong to. F1-score: a metric used to determine how accurate a test is, computed as the harmonic mean of the test's precision and recall. For a classification problem with $M$ cases, comprising $P$ positive instances and $N$ negative instances, there are four types of cases, based on the combination of real and predicted categories: true positives ($TP$), false positives ($FP$), true negatives ($TN$), and false negatives ($FN$). $TP$ represents a positive sample that is predicted to be positive, while $FN$ represents a positive sample that is predicted to be negative, so that $P = TP + FN$ is the total number of positive samples. Similarly, $TN$ denotes the number of negative cases that are identified as negative, while $FP$ denotes the number of negative instances that are predicted to be positive; thus, $N = TN + FP$ is the total number of negative samples. The accuracy is the fraction of correct instances:

$\mathrm{Accuracy} = \frac{TP + TN}{TP + FN + TN + FP}$ .  (6)

The precision is the fraction of actually positive instances among all cases predicted to be positive:

$\mathrm{Precision} = \frac{TP}{TP + FP}$ .  (7)

The recall is the fraction of all positive samples that are predicted to be positive:

$\mathrm{Recall} = \frac{TP}{TP + FN}$ .  (8)

The F1-score is a comprehensive evaluation indicator combining precision and recall:

$F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$ .  (9)
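Equations (6)-(9) transcribe directly into code; a minimal sketch, assuming the binary counts have already been read off the confusion matrix:

```python
def metrics(tp, fp, tn, fn):
    """Return (accuracy, precision, recall, f1) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fn + tn + fp)           # eq. (6)
    precision = tp / (tp + fp)                           # eq. (7)
    recall = tp / (tp + fn)                              # eq. (8)
    f1 = 2 * precision * recall / (precision + recall)   # eq. (9)
    return accuracy, precision, recall, f1
```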
The confusion matrices, along with accuracy, are shown in Figure 5. The first column depicts the case in which a single fully connected layer is used as the final feature extractor for scene classification, whereas the second column depicts the proposed SC-based network classification. Experiments on the UC data set revealed that the OA of the pre-trained AlexNet is 79.76 %, of VGG-19 is 81.19 %, and of VGG-16 is 83.81 %, while using SC and combining the last three conv layers increases accuracy to 85 %, 87.14 %, and 88.33 %, respectively. The proposed method accomplishes perfect classification performance on the majority of classes, such as agricultural, beach, chaparral, forest, harbour, parking lot and runway; the improved classes are buildings, dense residential, baseball diamond and tennis court; and on average a 6 % accuracy increase is obtained in the case of UCM. Similar results were obtained in the experiments conducted on the SIRI-WHU data set. The confusion matrices for the fully connected layer treated as the final feature extractor and for the proposed SC-based classification are shown in Figure 6: the single-layer OA of AlexNet is 86.52 %, of VGG-19 is 87.6 %, and of VGG-16 is 88.04 %, while the proposed strategy increases the accuracy to 90 %, 91.08 %, and 92.60 %, respectively. In most classes the suggested technique achieves optimal classification performance (e.g., agriculture, commercial, harbour, meadow), the improved classes are industrial, overpass and pond, and overall a 4 % accuracy increase is obtained; comparison graphs are shown in Figure 7. The related comparison results for the two data sets are shown in Table 1; the proposed scenario shows a clear improvement in OA when several conv layers are combined.

Figure 5. Confusion matrix of the UC Merced data set using three pre-trained networks. First column corresponding to single-layered: a) AlexNet, b) VGG19, c) VGG16; second column corresponding to multilayered fusion: d) SC-AlexNet, e) SC-VGG19, f) SC-VGG16.
Figure 6. Confusion matrix of the SIRI-WHU data set using three pre-trained networks. First column corresponding to single-layered: a) AlexNet, b) VGG19, c) VGG16; second column corresponding to multilayered fusion: d) SC-AlexNet, e) SC-VGG19, f) SC-VGG16.

Table 1. Comparison results for the two data sets, UCM and SIRI-WHU (overall accuracy %, 80 % training).

| Network/method | UCM: pre-trained network, FC7 as feature extractor | UCM: proposed SC network as feature extractor | SIRI-WHU: pre-trained network, FC7 as feature extractor | SIRI-WHU: proposed SC network as feature extractor |
|---|---|---|---|---|
| AlexNet | 79.76 | 85 | 86.52 | 90 |
| VGG-VD19 | 81.19 | 87.14 | 87.60 | 91.08 |
| VGG-VD16 | 83.81 | 88.33 | 88.04 | 92.60 |

As illustrated in Figure 8, which shows the F1 scores of the improved classes of the UCM data set, a considerable number of classes show noticeable improvement with the proposed strategy: agricultural, beach, harbour and runway reach 100 %, and the dense residential class improves by approximately 40 %. Likewise, Figure 9 shows the corresponding F1 scores of the pre-trained single-layered and proposed networks on the SIRI-WHU data set. As can be witnessed, the proposed strategy exhibits obvious improvements in most of the classes: for example, on the SIRI-WHU data set, water reaches 100 % and the harbour, idle_land, industrial, overpass and park classes reach above 90 %.

Figure 7. Comparison of pre-trained networks and proposed SC framework-based networks on the a) UC Merced (UCM) and b) SIRI-WHU data sets.
Figure 8. Comparison between F1 scores of UC Merced data set improved classes with pre-trained and proposed framework networks.
Figure 9. Comparison between F1 scores of SIRI-WHU data set improved classes with pre-trained and proposed framework networks.

4. Conclusion

In this research, we present stacked covariance, a new technique for fusing image features from multiple layers of a CNN for scene categorization using remote sensing data. Feature extraction is performed initially with a pre-trained CNN model, followed by feature fusion with covariance in the presented SC-based classification framework. More discriminative features are recovered for classification because the proposed scenario takes second-order statistics into account: each feature represents the covariance of two distinct feature maps, and these features are applied to an SVM for classification. Our suggested SC method's effectiveness is validated by extensive comparison with state-of-the-art methodologies on two publicly accessible remote sensing image scene categorization data sets. We observe that, using the proposed SC technique, the accuracy attained for most classes shows obvious enhancements, indicating that this is a viable improvement strategy.

References
[1] L. Fang, N. He, S. Li, P. Ghamisi, J. A. Benediktsson, Extinction profiles fusion for hyperspectral images classification, IEEE Trans. Geosci. Remote Sens., vol. 56, no. 3, 2018, pp. 1803-1815. doi: 10.1109/tgrs.2017.2768479
[2] X. Bian, C. Chen, L. Tian, Q. Du, Fusing local and global features for high-resolution scene classification, IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 10, no. 6, Jun. 2017, pp. 2889-2901. doi: 10.1109/jstars.2017.2683799
[3] X. Lu, B. Wang, X. Zheng, X. Li, Exploring models and data for remote sensing image caption generation, IEEE Trans. Geosci. Remote Sens., vol. 56, no. 4, Apr. 2018, pp. 2183-2195.
doi: 10.1109/tgrs.2017.2776321
[4] B. M. Reddy, M. Zia Ur Rahman, Analysis of SAR images using new image classification methods, International Journal of Innovative Technology and Exploring Engineering, vol. 8, no. 8, 2019, pp. 760-764.
[5] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, L. Fei-Fei, ImageNet: a large-scale hierarchical image database, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Miami, FL, USA, 20-25 June 2009, pp. 248-255. doi: 10.1109/cvpr.2009.5206848
[6] R. Girshick, J. Donahue, T. Darrell, J. Malik, Rich feature hierarchies for accurate object detection and semantic segmentation, in Proc. IEEE Int. Conf. Comput. Vis., Columbus, OH, USA, 23-28 June 2014, pp. 580-587. doi: 10.1109/cvpr.2014.81
[7] Y. Wang, C. Wang, L. Luo, Z. Zhou, Image classification based on transfer learning of convolutional neural network, Chinese Control Conference (CCC), Guangzhou, China, 27-30 July 2019, pp. 7506-7510. doi: 10.23919/chicc.2019.8865179
[8] K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition, 3rd IAPR Asian Conference on Pattern Recognition (ACPR), Kuala Lumpur, Malaysia, 3-6 November 2015, pp. 1-13. doi: 10.1109/acpr.2015.7486599
[9] S. Tammina, Transfer learning using VGG-16 with deep convolutional neural network for classifying images, International Journal of Scientific and Research Publications (IJSRP), vol. 9, no. 10, 2019, pp. 143-150. doi: 10.29322/ijsrp.9.10.2019.p9420
[10] S. Putluri, M. Z. Ur Rahman, S. Y. Fathima, Cloud-based adaptive exon prediction for DNA analysis, Healthcare Technology Letters, vol. 5, no. 1, 2018, pp. 25-30. doi: 10.1049/htl.2017.0032
[11] Y. Bi, B. Xue, M. Zhang, Genetic programming with image-related operators and a flexible program structure for feature learning in image classification, IEEE Transactions on Evolutionary Computation, vol. 25, no. 1, 2020, pp. 87-101. doi: 10.1109/tevc.2020.3002229
[12] H. Sun, S. Li, X. Zheng, X. Lu, Remote sensing scene classification by gated bidirectional network, IEEE Trans. Geosci. Remote Sens., vol. 58, no. 1, 2019, pp. 82-96. doi: 10.1109/tgrs.2019.2931801
[13] G. Fiori, F. Fuiano, A. Scorza, J. Galo, S. Conforto, S. A. Sciuto, A preliminary study on an image analysis based method for lowest detectable signal measurements in pulsed wave Doppler ultrasounds, Acta IMEKO, vol. 10, no. 2, 2021, pp. 126-132. doi: 10.21014/acta_imeko.v10i2.1051
[14] L. Fang, N. He, S. Li, A. J. Plaza, J. Plaza, A new spatial-spectral feature extraction method for hyperspectral images using local covariance matrix representation, IEEE Trans. Geosci. Remote Sens., vol. 56, no. 6, Jun. 2018, pp. 3534-3546.
doi: 10.1109/tgrs.2018.2801387
[15] V. Arsigny, P. Fillard, X. Pennec, N. Ayache, Geometric means in a novel vector space structure on symmetric positive-definite matrices, SIAM J. Matrix Anal. Appl., vol. 29, no. 1, 2007, pp. 328-347. doi: 10.1137/050637996
[16] UC Merced data set. Online [accessed December 2019] http://weegee.vision.ucmerced.edu/datasets/landuse.html
[17] I. M. E. Zaragoza, G. Caroti, A. Piemonte, The use of image and laser scanner survey archives for cultural heritage 3D modelling and change analysis, Acta IMEKO, vol. 10, no. 1, 2021, pp. 114-121. doi: 10.21014/acta_imeko.v10i1.847
[18] SIRI-WHU data set. Online [accessed August 2020] http://www.lmars.whu.edu.cn/prof_web/zhongyanfei/num/google.html
[19] Y. Liu, Y. Zhong, F. Fei, Q. Zhu, Q. Qin, Scene classification based on a deep random-scale stretched convolutional neural network, Remote Sensing, vol. 10, no. 3, Apr. 2018. doi: 10.3390/rs10030444

Two-functional µBIST for testing and self-diagnosis of analog circuits in electronic embedded systems

Acta IMEKO
December 2014, volume 3, number 4, 10 - 16

Dariusz Załęski, Romuald Zielonko
Gdansk University of Technology, Faculty of Electronics, Telecommunications and Informatics, Department of Metrology and Optoelectronics, ul. Gabriela Narutowicza 11/12, 80-233 Gdansk, Poland

Section: Research paper
Keywords: electronic embedded systems; built-in self-testers (BIST); functional testing; analog fault diagnostics
Citation: Dariusz Załęski, Romuald Zielonko, Two-functional µBIST for testing and self-diagnosis of analog circuits in electronic embedded systems, Acta IMEKO, vol. 3, no. 4, article 4, December 2014, identifier: IMEKO-ACTA-03 (2014)-04-04
Editor: Paolo Carbone, University of Perugia
Received October 10th, 2013; in final form October 10th, 2013; published December 2014
Copyright: © 2014 IMEKO. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: (none reported)
Corresponding author: Romuald Zielonko, e-mail: zielonko@eti.pg.gda.pl

Abstract: The paper concerns the testing of analog circuits and blocks in mixed-signal electronic embedded systems (EESs), using the built-in self-test (BIST) technique. An integrated, two-functional, embedded microtester (µBIST), based on the reuse of signal blocks already present in an EES, such as microprocessors, memories, ADCs and DACs, is presented. The novelty of the µBIST solution is its extended functionality: it can perform two testing functions, functional testing and fault diagnosis at the level of localization of a faulty element. For functional testing the complementary signals (CSs) technique and for fault diagnosis the simulation-before-test (SBT) vocabulary technique have been used. In the fault vocabulary, graphical signatures in the form of identification curves in multidimensional spaces have been applied.

1. Introduction

Electronic embedded systems (EESs), with an embedded intelligent unit in their structure (usually in the form of a microcontroller), are widely used in many branches of industry, technology and science. The dominant group of EESs are mixed-signal systems. A common feature of these systems is the presence of analog circuits, since there must always be an interface to the real world.
Mixed-signal EESs bring new challenges to the problem of testing analog circuits and blocks, because there is a lack of general test methods and strategies. The main direction of development in the testing of analog circuits in EESs is the built-in self-test (BIST) technique. In recent years many specific BIST solutions dedicated to concrete circuits have been reported: oscillation-based BIST (OBIST) [1], digital reuse [2], histogram-based BIST [3], [4], BIST dedicated to fully-differential stages [5], [6], ΣΔ BIST [7], [8] and ADC BIST for AD converters. The common feature of these solutions is hardware excess, i.e. the necessity to build overhead hardware into the EES's structure, which introduces additional costs. One promising way to reduce the hardware overhead and testing costs is the reuse of signal blocks already present in the EES (such as processors and memories) for BIST creation. The new generations of microcontrollers (e.g. AT91SAM, ADuC814) have hardware resources (ADCs, DACs, timers, counters, analog comparators) that allow the generation of stimulating signals and the measurement of responses of a circuit under test (CUT), as well as the computing power to realize testing procedures. Therefore, the hardware and software resources of modern microcontrollers are sufficient for the creation of microBISTs (µBISTs). These resources are sufficient, but at the same time rather modest; thus special signals and procedures adequate to microcontroller resources must be used. A µBIST solution for functional testing based on shape-designed complementary signals (CSs) [9], [10], [11] has been proposed by the authors in [12]. The disadvantage of this solution is its limited functionality, restricted to functional testing only. The novelty of the solution presented in this paper is the extended functionality of the µBIST, including both functional and diagnostic testing. In the proposed solution (named the integrated, two-functional µBIST), the complementary signals technique has been used for functional testing and the SBT (simulation before test) technique for fault diagnosis. The paper is organized as follows: Section 2 presents the CS method for functional testing, Section 3 describes the vocabulary SBT technique for diagnostic testing, and Section 4 presents an experimental realization of the integrated two-functional µBIST and its verification in an exemplary EES.

2. Description and investigation of the CS method for functional testing

The essence of this method is the stimulation of the CUT by a special shape-designed CS signal, whose parameters are matched to the poles of the CUT's transfer function.
Two kinds of CS signals have been investigated in [14]: Ai-parameter CSs (with variable impulse amplitude Ai) and ti-parameter CSs (with variable impulse width ti). A typical example of a ti-parameter CS signal is shown in Figure 1a, and the testing principle is explained in Figure 1b. The first impulse of the CS signal drives the CUT into an initial state yt0(t), whereas the remaining impulses yt1(t), yt2(t) compensate it, in the manner shown in the figure, so that the CUT response y(te) reaches a zero state at the signal end time te. Such a situation takes place when the CUT is in its nominal state. In a faulty state, when the transfer function poles deviate from their nominal values, the CUT response after the end of stimulation is not compensated (y(te) ≠ 0). The sample value of the CUT response y(te) depends on the deviation of the transfer function parameters from the nominal state, so it can be used for functional testing of the CUT transfer function. The CS method has been widely investigated, at first via simulations and then via physical experiments, on the example of the testing of a 4th order Butterworth low-pass filter (LPF). The structure of the filter, as well as its nominal functional parameters and component values, are shown in Figure 2. The filter was composed of two biquad sections realized on the integrated circuit UAF42. The transfer function of the 4th order Butterworth LPF is described by the following formula:

$G_{IV}(s) = \frac{\omega_{n1}^2 \, \omega_{n2}^2}{\left(s^2 + \frac{\omega_{n1}}{Q_1} s + \omega_{n1}^2\right) \left(s^2 + \frac{\omega_{n2}}{Q_2} s + \omega_{n2}^2\right)}$ ,  (1)

where ωn1, ωn2 are the filter cut-off angular frequencies and Q1, Q2 are the quality factors of the 1st and 2nd sections. The filter was stimulated by a matched ti-parameter CS signal with 1 V impulses and the following ti parameters: t0 = 15.916 ms, t1 = 37.393 ms, t2 = 57.185 ms, t3 = 70.397 ms, t4 = te = 75.027 ms. The investigation results of the influence of deviations of both the quality factors Q1, Q2 and the angular frequencies ωn1, ωn2 on the CUT's output signal are presented in Figure 3.

Figure 1. An example of the 3rd order ti-parameter CS (a) and the CUT responses yti(t) to each impulse of the ti-parameter CS and its resultant response y(t) (b).
Figure 2. Schematic diagram of the investigated 4th order Butterworth low-pass filter and its component values. Component values of the 1st section: RG1 = 52.22 kΩ, RQ1 = 83.56 kΩ, RF11, RF12 = 15.28 kΩ, C11a = 1052 nF, C12a = 1054 nF. Component values of the 2nd section: RG2 = 51.84 kΩ, RQ2 = 17.43 kΩ, RF21, RF22 = 14.95 kΩ, C21a = 1040 nF, C22a = 1041 nF. Functional parameters: fc = 10 Hz, ωc = 62.8319 rad/s, Q1 = 0.54119, Q2 = 1.30655, s1,2 = -58.0491 ± 24.0447i, s3,4 = -24.0447 ± 58.0491i.
Figure 3. Responses y(te) = f(ΔQi/Qi) and y(te) = f(Δωni/ωni) of the tested 4th order LPF as a function of the Q1 and Q2 factor deviations (a) and the ωn1 and ωn2 angular frequency deviations (b).

It is seen from the plots shown in the figure that the relations y(te) = f(ΔQi/Qi) and y(te) = f(Δωni/ωni) are convenient to implement. The y(te) = f(ΔQi/Qi) relations are linear in a wide range of ΔQi/Qi deviation (±50 %); a simulation sketch of the test principle is given below.
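A minimal sketch of the CS test principle, assuming scipy: the two-biquad filter of eq. (1), with the quoted ωc, Q1 and Q2, is driven by a piecewise-constant stimulus switching at the quoted instants t0..te, and the single sample y(te) is read out. The alternating ±1 V polarity pattern is an illustrative assumption; the exact CS construction follows [14], and with a properly matched CS the nominal response compensates to zero at te.

```python
import numpy as np
from scipy import signal

def butter4(wc=62.8319, Q1=0.54119, Q2=1.30655):
    """Eq. (1): two cascaded biquads sharing the cut-off wn1 = wn2 = wc."""
    den = np.polymul([1.0, wc / Q1, wc**2], [1.0, wc / Q2, wc**2])
    return ([wc**4], den)

def y_at_te(Q1=0.54119):
    t = np.linspace(0.0, 75.027e-3, 8000)
    edges = [0.0, 15.916e-3, 37.393e-3, 57.185e-3, 70.397e-3]
    seg = np.searchsorted(edges, t, side="right") - 1
    u = np.where(seg % 2 == 0, 1.0, -1.0)   # assumed +/-1 V polarity pattern
    _, y, _ = signal.lsim(butter4(Q1=Q1), u, t)
    return y[-1]                            # the sample y(te), te = 75.027 ms

# Shift of y(te) caused by a +20 % Q1 deviation relative to the nominal case:
delta = y_at_te(Q1=1.2 * 0.54119) - y_at_te()
```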
The testing sensitivity is about 0.8 mV/1 %. The y(te) = f(Δωni/ωni) relations are slightly nonlinear, and the testing range for positive deviations of Δωni/ωni is limited to 25 %. The testing sensitivities are not equal for the different biquad sections of the filter: for the 1st section the sensitivity is about 0.5 mV/1 % and for the 2nd section it is about 1 mV/1 %. In all investigated cases the testing sensitivities are high enough for CUT functional testing. The CUT's output response samples y(te) are easy to measure using the AD converter available in the microcontroller ADuC814, chosen for the realization of the two-functional µBIST. The hardware resources of this microcontroller (12-bit ADC and DACs and 3 × 16-bit programmable counters) are good enough for the generation of CS signals with adequate precision and for accurate measurement of the CUT's output responses. In [14] the metrological properties of the Ai-parameter and ti-parameter CSs have been widely investigated and compared. The following conclusions should be emphasized: both signals have similar sensitivities to deviations of the CUT functional parameters; more predestined for application in the µBIST are ti-parameter CSs, as they are shorter, easier to generate using microcontroller resources, and can be optimized to the CUT structure by matching the width of the 1st impulse. The most useful is a unipolar ti-parameter CS. Generally, we can conclude that unipolar ti-parameter CS signals are suitable for application in a µBIST for the functional testing of analog circuits and blocks in EESs, which appears as the first function of the integrated µBIST. The CS method is useful in the low frequency range, up to about 10 kHz.

3. Description and investigation of the SBT diagnostic method

For the diagnostic testing of analog circuits in EESs, which appears as the second function of the integrated µBIST, the vocabulary SBT method has been chosen, with graphical fault signatures in the form of identification curves in multidimensional measurement spaces. Such a form of the fault vocabulary has two advantages: identification curves can be scaled, thus making fault localization and also identification possible; and it is possible to introduce an additional distinctive feature of the measurement signal to increase the dimension of the measurement space, and in consequence to rarefy the curves in the space. As a result, it enables an increase in the distinctivity of fault localization and a decrease in the fault-masking effect of parameter tolerances. We investigated this method using 2D and 3D measurement spaces based on the samples u1(t1), u2(t2), ..., uk(tk) of a CUT output response at properly chosen time moments, taking into account two cases: toleranceless CUTs and CUTs with real tolerances of elements. At first, for simplicity, we explain this method on the example of the fault diagnosis of the 2nd order Butterworth LPF (Figure 4), without taking tolerances into account. For this filter, on the basis of 2 or 3 samples of its output response to a single square-impulse stimulation, we can obtain the fault vocabulary in the form of a family of identification curves on a plane (for 2 samples u1(t1), u2(t2)) or in 3D space (for 3 samples u1(t1), u2(t2), u3(t3)), as shown in Figure 5, for a wide range of parameter change: 0.1 pi nom ≤ pi ≤ 10 pi nom.

Figure 4. The 2nd order low-pass filter (R1 = R2 = 10 kΩ, C1 = 70.44 nF, C2 = 146.35 nF).
Figure 5. The families of identification curves on the plane (a) and in the 3D space (b).
Figure 6. The identification curves for the range 0.5 pi nom ≤ pi ≤ 1.5 pi nom.
Figure 7. The circuit diagram of the 3rd order Butterworth LPF (a) (R1, R2, R3 = 5 kΩ, C1 = 44.5 nF, C2 = 110 nF, C3 = 6.42 nF, fc = 1 kHz) and families of dispersed identification curves (belts) in the 3D space (b).
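Building one such identification curve can be sketched as follows, assuming scipy. The Sallen-Key component-to-parameter mapping used here is an assumption (it reproduces the Butterworth Q ≈ 0.72 for the quoted component values; the exact topology is that of Figure 4), and the sampling instants t1..t3 are illustrative choices.

```python
import numpy as np
from scipy import signal

R1 = R2 = 10e3
C1, C2 = 70.44e-9, 146.35e-9

def response_samples(r1=R1, r2=R2, c1=C1, c2=C2,
                     t_samples=(1e-3, 2e-3, 3e-3)):
    """Simulate the square-impulse response and return u1(t1), u2(t2), u3(t3)."""
    w0 = 1.0 / np.sqrt(r1 * r2 * c1 * c2)          # assumed Sallen-Key mapping
    Q = np.sqrt(r1 * r2 * c1 * c2) / (c1 * (r1 + r2))
    sys = ([w0**2], [1.0, w0 / Q, w0**2])
    t = np.linspace(0.0, 5e-3, 2000)
    u = np.where(t < 1e-3, 1.0, 0.0)               # single 1 ms square impulse
    _, y, _ = signal.lsim(sys, u, t)
    return np.interp(t_samples, t, y)

# Identification curve for R1: 50 interpolation points in the 3-D space,
# obtained by sweeping R1 over 0.5..1.5 of its nominal value.
curve_R1 = np.array([response_samples(r1=k * R1)
                     for k in np.linspace(0.5, 1.5, 50)])
```

Repeating the sweep with the other components perturbed randomly within their tolerances (Monte Carlo) yields the dispersion around each interpolation point, from which the scaling factor of the belt or snake model is estimated.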
obtain the fault vocabulary in the form of a family of identification curves on a plane (for 2 samples u1(t1), u2(t2)) or in 3D space (for 3 samples u1(t1), u2(t2), u3(t3)), as shown in Figure 5, for a wide range of parameter changes: 0.1 pi,nom ≤ pi ≤ 10 pi,nom. In practice, soft faults are diagnosed in narrower ranges: ±50% (0.5 pi,nom ≤ pi ≤ 1.5 pi,nom) or even ±25%. The family of scaled identification curves in the range ±50% is shown in Figure 6. In this range, as can be seen, the curves are smooth and easy to interpolate point-wise. For the classification of such fault signatures, a simple conventional point-distance classifier can be used. The required memory size for a fault vocabulary is moderate; for example, for a fault vocabulary in 3D space with 10 identification curves and 50 interpolation points each, the memory size is about 3750 bytes.

The toleranceless version of the SBT method can be useful for the fault diagnosis of circuits with small parameter tolerances (<0.5%), so it seems to be of little use in practice. In practice, analog circuits have higher tolerances (2%-5% in real circuits), which cause dispersion of the identification curves in the measurement space. The dispersed curves form a kind of identification belt in the 2D space, and in the 3D space they form identification snakes, as shown in Figure 7. Such families can be obtained from simulations using the Monte Carlo method. Identification belts and snakes in the general case have variable cross sections, but for moderate CUT tolerances (2%) they can be assumed constant. The identification belts and snakes can be used for soft fault diagnosis, but a more sophisticated description of them as graphic signatures in the fault vocabulary is needed, and an appropriate classifier fitted to these signatures must be used.

Figure 7. The circuit diagram of the 3rd-order Butterworth LPF (R1, R2, R3 = 5 kΩ, C1 = 44.5 nF, C2 = 110 nF, C3 = 6.42 nF, fc = 1 kHz) (a) and families of dispersed identification curves (belts) in the 3D space (b).

For the modeling of the identification belts and snakes and their description in a vocabulary, we have used the models shown in Figure 8, for the 2D and 3D spaces. The belts are modeled by a set of circles of radius ρ around the interpolation points Pi of the nominal identification curve; sets of spheres of radius ρ around the interpolation points model the identification snakes. They are described by two factors: the coordinates of the interpolation points of the nominal identification curve, and the scaling factor ρ, which characterizes the dispersion of the curve at these points. Different versions of the method were investigated, but for implementation in the µBIST based on the ADuC814 microcontroller we have used the 3D version of the method with the following assumptions: all identification snakes have a constant scaling factor ρ = 2σ, where σ is the standard deviation, and the distance between the interpolation points Pi is 1.5ρ = 3σ. The confidence coefficient α depends on the distance between adjacent interpolation points Pi. For a distance equal to 3σ, based on the drawing in Figure 9, which shows a piece of the identification belt model in 2D space, the value of the confidence coefficient has been estimated as α = 0.954 − 0.035 ≈ 0.92 [14]. This value is sufficient for using the model in practice. The testing procedure is simple and is based on sequentially checking the distance between the measurement point Pm and the interpolation points Pi of the identification curves with the use of a conventional point-distance classifier.
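The point-distance classification step can be summarized in a few lines of Python. The sketch below is illustrative only: the fault vocabulary, the curve shapes and the radius are made-up placeholders, whereas the real µBIST of [14] stores the interpolation points Pi and the scaling factor ρ = 2σ in the microcontroller memory.

```python
# Sketch of the point-distance classifier over identification snakes in the 3D
# measurement space (u1(t1), u2(t2), u3(t3)). A measurement point Pm is
# assigned to the curve whose interpolation point Pi lies closest, provided
# the distance does not exceed the scaling radius rho.
import numpy as np

def classify(pm, vocabulary, rho):
    """pm: measured samples, shape (3,); vocabulary: {fault_label: (N,3) Pi}."""
    best_label, best_dist = None, np.inf
    for label, points in vocabulary.items():
        d = np.min(np.linalg.norm(points - pm, axis=1))  # nearest Pi on curve
        if d < best_dist:
            best_label, best_dist = label, d
    return (best_label, best_dist) if best_dist <= rho else (None, best_dist)

# toy vocabulary: two identification curves, 50 interpolation points each
t = np.linspace(0.0, 1.0, 50)
vocab = {
    "C1 drift": np.stack([t, t**2, 0.5 * t], axis=1),
    "R2 drift": np.stack([t, 0.3 * t, t**1.5], axis=1),
}
label, dist = classify(np.array([0.52, 0.26, 0.27]), vocab, rho=0.05)
print(label, round(dist, 4))   # -> "C1 drift" within the 2-sigma radius
```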
The description of the identification snakes in the 3D fault vocabulary (using the coordinates of the interpolation points Pi and the scaling factor ρ) for a CUT with 10 elements requires 3000 bytes for a fault range of ±50%. In the testing procedure, 9 operations must be performed to check each interpolation point Pi of an identification curve in the fault vocabulary; thus, about 5000 operations are required for testing a 10-element CUT. This can easily be realized in an acceptable time of <0.1 seconds. Based on the simulation investigations, we can conclude that the real-tolerance version of the SBT method can be easily realized in the ADuC814 microcontroller, but it needs experimental verification in a physically realized µBIST.

Figure 8. Models of identification belts and snakes in 2D (a) and 3D (b) spaces.

Figure 9. A piece of the identification belt model for the estimation of the confidence coefficient α.

Figure 10. The block diagram of the realized µBIST.

4. Experimental realization and verification of an integrated two-function µBIST

On the basis of the hardware and software resources of the Analog Devices microcontroller ADuC814, an integrated µBIST for the functional and fault diagnostic testing of analog circuits in EESs has been physically realized and investigated in an experimental embedded system. The structure of the µBIST is shown in Figure 10 and its circuit diagram in Figure 11. Almost all of the additional elements surrounding the ADuC814 chip are present in every EES and are necessary for the proper functioning of the microcontroller. The microcontroller was programmed with software realizing two testing procedures: functional testing (using unipolar ti-parameter CS signals) and diagnostic testing using the SBT method in its 3D version. The µBIST was investigated experimentally on the basis of the physical testing of some real analog circuits, mainly filters. Different variants of the CS method for functional testing and of the SBT method for fault diagnosis were experimentally investigated. Various deviations of the Qi and ωni parameters and various values of soft faults for each pi element were physically entered into real CUTs. The experimental results were compared with the simulation ones. For the CS method of functional testing, a comparison between experimental and simulation results is shown in Figure 12.

Figure 11. The circuit diagram and a picture of the realized µBIST.

Figure 12. The simulated and measured CUT responses y(tE) = f(ΔQi/Qi) as a function of Q1 (a) and Q2 (b) deviations.

Figure 13. The simulated and measured CUT responses y(tE) = f(Δωni/ωni) as a function of ωn1 (a) and ωn2 (b) deviations.

Figure 14. The testing results on the background of the family of identification curves for the 2nd-order LPF.
In the simulation and experimental investigations, the same CS signal level of 1 V was used. Figure 12 shows the experimental (blue curve) and simulated (red curve) plots y(tE) = f(ΔQ1/Q1) and y(tE) = f(ΔQ2/Q2), concerning the dependence of the CUT response signal on the Q1 and Q2 deviations. Analogous plots of y(tE) = f(Δωn1/ωn1) and y(tE) = f(Δωn2/ωn2), concerning the deviations of ωn1 and ωn2, are shown in Figure 13. As seen in both figures, the experimental results are very close to the simulated ones. The experimental and simulated plots have the same shapes and are very close, especially in the range of ±10% deviations. Larger differences outside this range do not matter in practice, because the relations between the CUT response and the parameter deviations can be modeled on the basis of experimental data. In the simulations and experimental investigations of this method, CS signals with a level of 1 V were used; for such signals the threshold level of functional testing is at the level of a few percent for the ΔQi/Qi and Δωni/ωni deviations. In practice, CUT stimulation with a higher CS level, up to 5 V, can be used. This makes it possible to obtain a lower threshold level for both parameter deviations ΔQi/Qi and Δωni/ωni, below 1%. The convergence of the simulation and experimental results confirms the previous conclusions from the simulation investigations. The method can be used in a frequency range up to 10 kHz.

For the SBT method of fault diagnostics, the 2nd-order LPF shown in Figure 4 was experimentally investigated. Figure 14 shows the results of the investigations (in the form of crosses) in comparison with the simulated identification curves. As can be seen, in this case the experimental results and the simulated curves are very close over the wide investigated range 0.1 pi,nom ≤ pi ≤ 10 pi,nom. This example and the other investigations presented in [14] confirm the usefulness of this SBT diagnostic method for application in a µBIST. It can be utilized for fault localization in analog circuits with a moderate number of components (about 10), such as filters, equivalent circuits of signal transmission lines, or non-electrical objects (e.g. sensors) modeled in the form of multi-element two-terminal networks or two-ports. The method makes the localization of single soft faults and ambiguity groups possible, as well as the detection of multiple faults.

On the basis of both of the above methods of functional and diagnostic testing, the final version of the two-functional µBIST has been programmed in the ADuC814 microcontroller. In both methods, unipolar input signals with an extended impulse level of 5 V have been used. Some µBIST properties are given below: the threshold level for functional testing deviations ΔQi/Qi is 1%, and for deviations Δωni/ωni it is about 0.8%; the mean functional testing time of the 4th-order LPF is about 90 ms; the mean time of the diagnostic testing of the same filter is about 600 ms; the required microcontroller memory size is 476 bytes for functional testing and 2469 bytes for diagnostic testing (2945 bytes in total). There are many possibilities for improving the µBIST, mainly by using more advanced 16- or 32-bit microcontrollers and a neural-network classifier of fault signatures. Recently, a neural classifier with two-central basis functions has been elaborated and reported in [16]; it is well suited to application in BISTs. We are going to implement this classifier in the new version of the integrated BIST.

5.
Conclusions

In this paper, a new concept, methodology and solution for an integrated, two-functional BIST, dedicated to the testing of the analog parts of electronic embedded systems, have been presented. The BIST integrates two functions: functional testing and fault diagnosis at the level of the localization of a soft faulty component. The BIST has been constructed on the basis of reusing a microcontroller and other blocks already present in every EES. We showed that contemporary microcontrollers, such as the ADuC814, have sufficient hardware and software resources for designing BISTs for the functional and diagnostic testing of linear analog circuits in the low frequency range (0.1 Hz - 10 kHz), such as filters, signal transmission paths, or some non-electrical objects (e.g. sensors) modeled by electrical schemes. There are prospects for the further improvement of the presented BIST, using 16- or 32-bit microcontrollers and a neural classifier.

References

[1] W. Toczek, "An oscillation-based built-in test scheme with AGC loop", Measurement, vol. 41, no. 2, pp. 160-168, 2008.
[2] A. Q. Andrade, "Test planning for mixed-signal SOCs and analog BIST: a case study", IEEE Transactions on Instrumentation and Measurement, vol. 54, no. 4, 2005.
[3] W. Toczek, M. Kowalewski, R. Zielonko, "Histogram-based feature extraction technique applied for fault diagnosis of electronic circuits", Proceedings of the 10th Int. Conference on Technical Diagnosis, IMEKO TC-10, pp. 27-32, Budapest, 2005.
[4] J. Ren, H. Ye, "A novel linear histogram BIST for ADC", Ninth International Conference on Solid-State and Integrated-Circuit Technology, pp. 2099-2102, Beijing, 2008.
[5] H. G. D. Stratigopoulos, "An adaptive checker for fully differential analog code", IEEE Journal of Solid-State Circuits, vol. 41, no. 6, pp. 1421-1429, 2006.
[6] W. Toczek, "Self-testing of fully differential multistage circuits using common-mode excitation", Microelectronics Reliability, pp. 1890-1899, 2008.
[7] J. L. Huang, K. T. Cheng, "A sigma-delta modulation based BIST scheme for mixed-signal circuits", Proceedings of the 2000 Asia and South Pacific Design Automation Conference, pp. 605-610, Japan, 2000.
[8] W. Toczek, "Analog fault signature based on sigma-delta modulation and oscillation test methodology", Metrology and Measurement Systems, vol. XI, no. 4, pp. 363-375, 2004.
[9] H. H. Schreiber, "Fault dictionary based upon stimulus design", IEEE Transactions on Circuits and Systems, vol. 26, no. 7, pp. 529-537, 1979.
[10] B. Bartosiński, R. Zielonko, "New classes of complementary signals", Electronics Letters, vol. 23, no. 9, pp. 433-434, Apr. 1987.
[11] B. Bartosiński, R. Zielonko, "Application of complementary measuring signals to testing of analog circuits", Third International Symposium on Methods and Models in Automation and Robotics, Poland, pp. 527-532, 1996.
[12] D. Załęski, B. Bartosiński, R. Zielonko, "Application of complementary signals in built-in self testers for mixed-signal embedded electronic systems", IEEE Transactions on Instrumentation and Measurement, vol. 59, 2010, pp. 345-352.
[13] Z. Czaja, R. Zielonko, "Fault diagnosis in electronic circuits based on bilinear transformation in 3-D and 4-D spaces", IEEE Transactions on Instrumentation and Measurement, vol. 52, no. 1, pp. 97-102, 2003.
[14] D. Załęski, "The autodiagnostics of electronic embedded systems", PhD dissertation, Gdansk University of Technology, 2013 (in Polish).
[15] Z. Czaja, D. Załęski, "Implementation of an input-output method of diagnosis of analog electronic circuits in embedded systems", Proceedings of the 10th IMEKO TC10 International Conference on Technical Diagnostics, pp. 145-150, Budapest, 9-10 June 2005.
[16] M. Kowalewski, "The tolerance-proof vocabulary methods of fault diagnostic in electronic circuits using dedicated faults classifier with neural network", PhD dissertation, Gdansk University of Technology, 2012 (in Polish).

Non-destructive investigation of the kyathos (6th-4th centuries BCE) from the necropolis Volna 1 on the Taman Peninsula by neutron resonance capture and X-ray fluorescence analysis

ACTA IMEKO, ISSN: 2221-870X, September 2022, Volume 11, Number 3, 1-6

Nina Simbirtseva1,2, Pavel V. Sedyshev1, Saltanat Mazhen1,2, Almat Yergashov1,2, Andrei Yu. Dmitriev1, Irina A. Saprykina3, Roman A. Mimokhod3

1 Frank Laboratory of Neutron Physics, Joint Institute for Nuclear Research, Dubna, Russia
2 Institute of Nuclear Physics, Almaty, 050032, the Republic of Kazakhstan
3 Institute of Archaeology of the Russian Academy of Sciences, Moscow, Russia

Section: research paper

Keywords: neutron resonance capture analysis; non-destructive neutron analysis; XRF analysis

Citation: Nina Simbirtseva, Pavel V. Sedyshev, Saltanat Mazhen, Almat Yergashov, Andrei Yu. Dmitriev, Irina A. Saprykina, Roman A. Mimokhod, Non-destructive investigation of the kyathos (6th-4th centuries BCE) from the necropolis Volna 1 on the Taman Peninsula by neutron resonance capture and X-ray fluorescence analysis, ACTA IMEKO, vol. 11, no. 3, article 20, September 2022, identifier: IMEKO-ACTA-11 (2022)-03-20

Section editor: Francesco Lamonaca, University of Calabria, Italy

Received March 5, 2021; in final form August 31, 2022; published September 2022

Copyright: this is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Corresponding author: Nina V. Simbirtseva, e-mail: simbirtseva@jinr.ru

Abstract: The method of neutron resonance capture analysis (NRCA) is currently being developed at the Frank Laboratory of Neutron Physics. The analysis determines the elemental and isotope compositions of objects non-destructively, which makes it a suitable measurement tool for artefacts without sampling. NRCA is based on the registration of neutron resonances in radiative capture and the measurement of the yield of reaction products in these resonances. The potential of NRCA at the Intense Resonance Neutron Source facility is demonstrated on the investigation of a kyathos (6th-4th centuries BCE) from the necropolis Volna 1 on the Taman Peninsula. In addition, X-ray fluorescence analysis was applied to the same archaeological object. The element composition determined by NRCA and the XRF data are in agreement.

1. Introduction

Neutron resonance capture analysis (NRCA) is known as a technique for the non-destructive investigation of various objects, including archaeological artifacts and objects of cultural heritage. The induced activity in experiments with bronze artefacts is practically absent, which has been evaluated in several works, one of which can be found in Ref. [1]. The method relies on the registration of neutron resonances in radiative capture and the measurement of the yield of reaction products in these resonances. The energy positions of the resonances give information about the isotope and elemental composition of an object, and the area under the resonances can be used to calculate the number of nuclei of the element or isotope. NRCA is applied for various studies at different institutes and sources, such as the GELINA pulsed neutron source of the Institute of Reference Materials and Measurements of the Joint Research Centre (Geel, Belgium) [2], the ISIS pulsed neutron and muon source in the United Kingdom [3], and the J-PARC pulsed neutron source in Japan [4].
At present, NRCA is also used at the Frank Laboratory of Neutron Physics (FLNP) of the Joint Institute for Nuclear Research (JINR), Dubna, Russia [5]. The experiments are carried out at the Intense Resonance Neutron Source (IREN) facility [6]-[8] with a multi-sectional liquid scintillator detector (210 litres), which is used for the registration of prompt gamma-quanta [9]. One of these experiments at the IREN facility was carried out on the kyathos transferred by the Institute of Archaeology of the Russian Academy of Sciences.

2. The kyathos

In 2016-2018, a Sochi expedition group of the Institute of Archaeology of the Russian Academy of Sciences (IA RAS), under the leadership of Roman A. Mimokhod [10], conducted excavations of an antique town soil necropolis, Volna 1, on the Taman Peninsula (Figure 1). The necropolis is dated from the middle / second quarter of the 6th century BC to the beginning of the 3rd century BC; the main period of its use dates back to the second half of the 6th-5th centuries BC. The burials of the Volna 1 necropolis were supposedly left by the Greek and barbarian population; the earliest burials may have been left by settlers who arrived from the territory of Magna Graecia (children's burials in amphorae, burials in the "rider's pose", etc.). The burial ground Volna 1 is an important monument for studying the problem of Greek-barbarian relations on the territory of the borderland of the northern Black Sea region, and the clash and interaction of two different ethnocultural layers of the population. The social structure and the economic and political position of the first Greek colonies on the territory of the Bosporus depended on the nature of these contacts, their duration and strength, and the development of adaptive mechanisms. This interaction process was extremely complex, multifaceted and contradictory. On the one hand, with the emergence of the Greek colonies, confrontational relations with the local population inevitably developed; on the other hand, the Greek colonists and the local population influenced each other. This process was superimposed on the specificity and uniqueness of the northern Black Sea region, where for a long time societies with fundamentally different economic structures and levels of development of social, political and economic life came into contact. More than 2,000 burials have been uncovered within the boundaries of the necropolis.
Anthropological material, representative and significant for the history and archaeology of the northern Black Sea region, was obtained. The collection of items obtained during the excavations is represented by ceramic materials, including the production of the ceramic centres of ancient Greece (container amphorae, kylikes, skyphoi, drinking bowls, lekythoi, askoi, etc.), Phoenician glass, weapons (spears, swords, arrows), protective weapons (full armour, a helmet of the Corinthian type), jewellery (bronze, silver and gold earrings and rings, etc.), coins and other categories of burial objects. All this testifies to the fact that the Volna 1 burial ground is a city necropolis, which places it in the category of the most prestigious necropolises of the Bosporan Kingdom. The necropolis was associated with the settlement of the same name, which it adjoins from the north. In the settlement, which existed from the pre-Greek period to the 3rd century BC, the systems of urban planning and stone house-building were studied. The fact that it is not a rural settlement but a polis is evidenced by a number of prestigious finds, including, for example, a ceramic mask, which most likely indicates the presence of a theatre in the city. In the burials, objects rare for the territory of the northern Black Sea region were found: a bronze prosthesis with a wooden support structure for a leg, an iron plate armour, a bronze Corinthian helmet of the "Hermione" type, musical instruments (cithara, lyre), a wreath on a gilded bone base with bronze petals and gold beads, as well as a series of kyathoi, ancient Greek vessels for pouring wine [11]. In total, there are 17 burials from the materials of the excavations of IA RAS in which bronze kyathoi or their fragments were found. Their planigraphy is illustrative: burials with these items are located in the early section of the necropolis, in the northwestern part (Figure 2).

Figure 1. Location of the Volna 1 necropolis.

Figure 2. The scheme of the necropolis from the 2017-2018 excavations and the location of the burials with kyathoi.

In the excavations of the IA RAS in 2016, in the area located to the south, kyathoi were not found. Burial 656, in which the item considered in this article was found (Figure 3, 1), was paired: one skeleton belonged to a man of 45-55 years, the second to a woman of 20-25 years. The burial was made in a box built of mud blocks. The dead were laid on a kline, a wooden bed, from the legs of which characteristic recesses remained in the grave. The burial had pronounced military attributes: fragments of an iron sword were found near the male skeleton, and an accompanying burial of a horse was made next to the pit. The kyathos was found in the filling of the burial pit; its original position is unclear. However, the grave contains vessels that are clearly associated with wine drinking: a skyphos (Figure 3, 3) and a kylix (Figure 3, 4). The kyathos, as an item for scooping and pouring wine, complements this set. The volume of the scoop was 0.045 litres, that is, a quarter of a sextarius. In contrast to various similar finds made of clay (their production from clay began at the end of the 6th century BC), the kyathoi found on the territory of the necropolis Volna 1 were made of metal, which moves them into the category of special objects. Presumably, these kyathoi belong to the Greek imports that entered the northern Black Sea region along with the Greek colonists.
3. NRCA experiment

The investigations were carried out at the IREN facility, the main part of which is a linear electron accelerator. The facility parameters were: average electron energy ~60 MeV, peak current ~1.5 A, electron pulse width ~100 ns, and repetition rate 25 Hz. The neutron-producing target is made of a tungsten-based alloy and is a cylinder 40 mm in diameter and 100 mm in height, placed within an aluminium can 160 mm in diameter and 200 mm in height. Distilled water circulates inside the can, providing target cooling and neutron moderation; the water layer thickness in the radial direction is 50 mm. The total neutron yield was about 3·10^11 s^-1. The measurements were carried out at the 58.6 m flight path of the 3rd channel of IREN. The big liquid scintillator detector was used for the registration of the γ-quanta, with the sample placed inside the detector. The neutron flux was permanently monitored by the SNM-17 neutron counter. The signals from the detector and the monitor counter were simultaneously fed to two independent inputs of a time-to-digital converter (TDC). The measurements with the sample lasted about 136 hours. The resonance energies were determined according to the formula

E = 5227 · L² / t²,   (1)

where t is the time of flight in microseconds, L is the flight path in metres, and E is the kinetic energy of the neutron in eV. The resonances of silver, tin, copper and arsenic were identified in the time-of-flight spectrum (Figure 4, Figure 5) [12], [13].

Figure 3. Volna 1. Items from burial 656: 1 - kyathos, 2 - bowl, 3 - skyphos, 4 - kylix, 5 - lekythos.

Figure 4. Part of the time-of-flight spectrum of (n,γ) reactions on the kyathos material.

Figure 5. Part of the time-of-flight spectrum of (n,γ) reactions on the kyathos material.
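For readers who want to reproduce the time-of-flight/energy conversion of eq. (1), a minimal Python sketch follows. The example resonance energies are illustrative (the 5.19 eV line of 109Ag is a textbook value), not a list of the resonances actually used in this work.

```python
# Sketch of eq. (1): E = 5227 * L^2 / t^2, with t in microseconds, L in metres
# and E in eV (5227 = 72.3^2, the usual neutron time-of-flight constant).
L = 58.6  # flight path of the 3rd channel of IREN [m]

def tof_to_energy_ev(t_us: float, L_m: float = L) -> float:
    """Kinetic energy in eV of a neutron covering L_m metres in t_us µs."""
    return 5227.0 * L_m**2 / t_us**2

# Inverting eq. (1) predicts where a resonance appears on the TOF axis;
# the energies below are illustrative examples only.
for e_ref in (5.19, 16.3, 47.8):   # e.g. 5.19 eV is the well-known 109Ag line
    t = L * (5227.0 / e_ref) ** 0.5
    print(f"E = {e_ref:6.2f} eV  ->  t ≈ {t:8.1f} µs")
```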
The measurements with standard samples of the identified elements were made in addition to the measurement with the investigated sample. Parts of the time-of-flight spectra of the (n,γ) reactions on the material of the standard silver, tin, copper and arsenic samples are shown in Figures 6 to 9.

4. Data analysis and results

Five resonances of tin, two resonances of copper, one resonance of silver and one resonance of arsenic were selected during the analysis of the experimental data. Only well-resolved resonances, without overlapping, with sufficient statistics and unambiguously known parameters were analysed. The sum of the detector counts in a resonance is expressed by the formula

ΣN = f(E0) · S · t · εγ · (Γγ/Γ) · A,   (2)

where f(E0) is the neutron flux density at the resonance energy E0, S is the sample area, t is the measuring time, εγ is the detection efficiency of the detector for radiative capture, and Γγ, Γ are the radiative and total resonance widths;

A = ∫ from E1 to E2 of [1 − T(E)] dE   (3)

is the resonance area on the transmission curve, where E1, E2 are the initial and final values of the energy range near the resonance;

T(E) = e^(−n·σ(E))   (4)

is the energy dependence of the neutron transmission by the sample, σ(E) is the total cross section at this energy with Doppler broadening, and n is the number of isotope nuclei per unit area. The value A was determined from the experimental data for the investigated sample by the formula

Ax = (ΣNx · Ms · Ss) / (ΣNs · Mx · Sx) · As.   (5)

Here, ΣNx, ΣNs are the counts under the resonance peak of the investigated and standard samples, Sx, Ss are the areas of the investigated and standard samples, and Mx, Ms are the numbers of monitor counts during the measurements of the investigated and standard samples. We used a program written according to the algorithm given in [14] for the calculation of the As (resonance area on the transmission curve of the standard sample) and nx (number of isotope nuclei per unit area of the investigated sample) values. This procedure is shown schematically in Figure 10. The As value was calculated by means of the known resonance parameters and the n value of the standard sample; the nx value was then determined from the Ax value of the investigated sample.

Figure 6. Part of the time-of-flight spectrum of (n,γ) reactions of the standard silver sample.

Figure 7. Part of the time-of-flight spectrum of (n,γ) reactions of the standard tin sample.

Figure 8. Part of the time-of-flight spectrum of (n,γ) reactions of the standard copper sample.

Figure 9. Part of the time-of-flight spectrum of (n,γ) reactions of the standard arsenic sample.

Figure 10. Dependence of the value A on the number of nuclei and the resonance parameters, taking into account Δ (the Doppler effect) [15].

The first measurement with the kyathos [16] had not shown satisfactory results for copper and arsenic, so it was decided to repeat the measurement, taking into account the features of the archaeological object and of the neutron flux. The kyathos is a relatively long object (about 16 cm) with an uneven thickness distribution (a handle and a bucket). Since the neutron flux intensity decreases from the centre to the edge of the beam, it makes sense to measure the investigated object in parts, placing each part at the centre of the beam line. This experiment was carried out, and two spectra were obtained (with the handle and with the bucket separately), which were summed taking into account the monitoring coefficients (Figure 4, Figure 5). The analysis results are presented in Table 1.

Table 1. The results of measurements with the kyathos by NRCA (the bulk).
| No. | Element | Mass (g) | Weight (%) |
| 1 | Cu | 59.7 ± 3.9 | 68.8 ± 4.5 |
| 2 | Sn | 5.29 ± 0.23 | 6.10 ± 0.26 |
| 3 | As | 0.1892 ± 0.0081 | 0.2179 ± 0.0094 |
| 4 | Ag | 0.0131 ± 0.0014 | 0.0151 ± 0.0016 |

NRCA has limitations related to the neutron flux intensity currently available at the IREN facility, to the type of detector system used in an experiment, and to the matrix of elements in the investigated object. Besides that, the method has low sensitivity to elements lighter than iron and to elements with atomic masses close to magic mass numbers (bismuth, lead, etc.). Additional measurements were therefore carried out by X-ray fluorescence (XRF) at 4 points of the archaeological object by means of a portable spectrometer 5i Tracer (Bruker) (Table 2).

Table 2. The results of measurements with the kyathos by XRF, averaged over four points (the surface).
| No. | Element | Weight (%) |
| 1 | Cu | 61.63 ± 0.31 |
| 2 | Si | 21.57 ± 0.15 |
| 3 | Al | 7.02 ± 0.19 |
| 4 | Sn | 4.58 ± 0.25 |
| 5 | Fe | 4.254 ± 0.075 |
| 6 | Pb | 0.684 ± 0.073 |
| 7 | Ti | 0.327 ± 0.057 |
| 8 | As | 0.050 ± 0.017 |

The element composition of the kyathos determined by NRCA and the XRF data are in agreement; some differences in wt% can be well understood due to the different nature of the analyses. XRF analysis determines the elemental composition on the surface, whereas NRCA does not measure the element and isotope compositions just on the surface but rather in the bulk (through the whole volume of the object), so the analytical results are not significantly affected by surface corrosion.
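A small sketch of eq. (5) may help clarify how the standard-sample measurement scales to the investigated sample. All numbers below are placeholders, not measured values from this experiment.

```python
# Sketch of eq. (5): scaling the resonance area A_s of a standard sample to
# the investigated sample using peak counts, monitor counts and sample areas.
def resonance_area_x(sum_n_x, sum_n_s, m_x, m_s, s_x, s_s, a_s):
    """A_x = (ΣN_x · M_s · S_s) / (ΣN_s · M_x · S_x) · A_s  (eq. 5)."""
    return (sum_n_x * m_s * s_s) / (sum_n_s * m_x * s_x) * a_s

# placeholder inputs: counts under the peak, monitor counts, areas [cm^2]
a_x = resonance_area_x(sum_n_x=12500, sum_n_s=30200,
                       m_x=1.8e6, m_s=2.0e6,
                       s_x=24.0, s_s=25.0, a_s=0.42)
print(f"A_x ≈ {a_x:.3f} eV")  # input to the n_x determination of [14]
```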
5. Conclusion

NRCA carried out at the Intense Resonance Neutron Source (IREN) facility on the kyathos from the necropolis Volna 1 on the Taman Peninsula showed that this method delivers satisfactory results. The mass of the kyathos, determined by weighing, is 86.7 g. According to the NRCA results, the total mass of the determined elements coincides with the kyathos mass within the margin of error, considering also the presence of silicon, aluminium and iron on the surface of the object. The element composition of the kyathos determined by NRCA and XRF is in agreement. The results obtained confirm the use of tin bronze to make the investigated kyathos. The recorded presence of tin in the composition of the alloy and its quantitative characteristics, along with the established presence of arsenic, refer us to the type of alloy characteristic of archaic Greece [17]. The presence of such elements as arsenic and silver points to the conclusion that the copper was obtained from polymetallic ores. NRCA allows not only the identification of the elemental and isotopic composition of the sample but also makes it possible to determine the amounts of elements and isotopes in the whole volume of the object. The method is non-destructive, and the induced activity of the bronze samples is practically absent. All this makes it promising for the study of archaeological artifacts and objects of cultural heritage. Although the number of facilities that can provide suitable neutron beams is limited, NRCA might be a useful additional analysing technique.

Acknowledgement

The authors express their gratitude to the staff of the IREN facility, to A. P. Sumbaev, the head of the development of the facility, and to V. V. Kobets, the head of Sector No. 5 of the Scientific and Experimental Injection and Ring Division of the Nuclotron (Veksler and Baldin Laboratory of High Energy Physics), for supporting the uninterrupted operation of the facility during the measurements.

References

[1] H. Postma, M. Blauw, P. Bode, P. Mutti, F. Corvi, P. Siegler, Neutron-resonance capture analysis of materials, Journal of Radioanalytical and Nuclear Chemistry 248(1) (2001), pp. 115-120. DOI: 10.1023/a:1010690428025
[2] H. Postma, P. Schillebeeckx, Neutron resonance analysis, in Neutron Methods for Archaeology and Cultural Heritage. Neutron Scattering Applications and Techniques, ed. by N. Kardjilov and G. Festa (Springer, Cham, 2017), pp. 235-283.
[3] G. Gorini (Ancient Charm collab.), Ancient Charm: a research project for neutron-based investigation of cultural-heritage objects, Nuovo Cim. C 30, 2007, pp. 47-58. DOI: 10.1393/ncc/i2006-10035-9
[4] H. Hasemi, M. Harada, T. Kai, T. Shinohara, M. Ooi, H. Sato, K. Kino, M. Segawa, T. Kamiyama, Y. Kiyanagi, Evaluation of nuclide density by neutron transmission at the NOBORU instrument in J-PARC/MLF, Nucl. Instrum. Methods Phys. Res., Sect. A 773 (2015), pp. 137-149. DOI: 10.1016/j.nima.2014.11.036
[5] N. V. Bazhazhina, Yu. D. Mareev, L. B. Pikelner, P. V. Sedyshev, V. N. Shvetsov, Analysis of element and isotope composition of samples by neutron spectroscopy at the IREN facility, Physics of Particles and Nuclei Letters 12 (2015), pp. 578-583. DOI: 10.1134/s1547477115040081
[6] O. V. Belikov, A. V. Belozerov, Yu. Becher, Yu. Bulycheva, A. A. Fateev, A. A. Galt, A. S. Kayukov, A. R. Krylov, V. V. Kobetz, P. V. Logachev, A. S. Medvedko, I. N. Meshkov, V. F. Minashkin, V. M. Pavlov, V. A. Petrov, V. G. Pyataev, A. D. Rogov, P. V. Sedyshev, V. G. Shabratov, V. A. Shvec, V. N. Shvetsov, A. V. Skrypnik, A. P. Sumbaev, A. V. Ufimtsev, V. N. Zamrij, Physical start-up of the first stage of IREN facility, Journal of Physics: Conf. Ser. 205, 2010, 012053. DOI: 10.1088/1742-6596/205/1/012053
[7] O. V. Belikov, A. V. Belozerov, Yu. Becher, Yu. Bulycheva, A. A. Fateev, A. A. Galt, A. S. Kayukov, A. R. Krylov, V. V. Kobetz, P. V. Logachev, A. S. Medvedko, I. N. Meshkov, V. F. Minashkin, V. M. Pavlov, V. A. Petrov, et al., Physical start-up of the first stage of IREN facility, J. Phys.: Conf. Ser. 205 (2019), art. no. 012053. DOI: 10.1088/1742-6596/205/1/012053
[8] E. A. Golubkov, V. V. Kobets, V. F. Minashkin, K. I. Mikhailov, A. N. Repkin, A. P. Sumbaev, K. V. Udovichenko, V. N. Shvetsov, The first results of the commissioning of the second accelerating section of the LUE-200 accelerator of the IREN installation, Soobshch. OIYaI R9-2017-77 (Dubna, OIYaI, 2017).
[9] H. Maletsky, L. B. Pikelner, K. G. Rodionov, I. M. Salamatin, E. I. Sharapov, Detector of neutrons and gamma rays for work in the field of neutron spectroscopy, Communication of JINR 13-6609 (Dubna, JINR, 1972), pp. 1-15 (in Russian).
[10] R. A. Mimokhod, N. I. Sudarev, P. S. Uspensky, The necropolis Volna 1 (2017) (Krasnodar Territory, Taman Peninsula), Rescue Archaeological Research Materials 25 (2018), pp. 220-231.
[11] R. A. Mimokhod, N. I. Sudarev, P. S. Uspensky, Necropolis Volna-1 on the Taman Peninsula (2019), New Archaeological Projects. Recreating the Past. To the 100th Anniversary of Russian Academic Archeology, ed. by M. Makarov, IA RAN, 2019, pp. 80-83.
[12] S. F. Mughabghab, Neutron Cross Sections, Neutron Resonance Parameters and Thermal Cross Sections, Academic Press, New York, 1984, ISBN: 9780125097017.
[13] S. I. Sukhoruchkin, Z. N. Soroko, V. V. Deriglazov, Low Energy Neutron Physics, Landolt-Börnstein, Vol. I/16B, Berlin: Springer Verlag, 1998, ISBN: 3540608575.
[14] V. N. Efimov, I. I. Shelontsev, Calculation for graphs of determining the parameters of neutron resonances by the transmission method, Communications of the JINR P-641 (Dubna, 1961), pp. 1-19 (in Russian).
[15] P. V. Sedyshev, N. V. Simbirtseva, A. M. Yergashov, S. T. Mazhen, Yu. D. Mareev, V. N. Shvetsov, M. G. Abramzon, I. A. Saprykina, Determining the elemental composition of antique coins of Phanagorian treasure by neutron spectroscopy at the pulsed neutron source IREN in FLNP JINR, Physics of Particles and Nuclei Letters 17(3), pp. 389-400, ISSN: 1547-4771. DOI: 10.1134/s1547477120030139
[16] N. V. Simbirtseva, P. V. Sedyshev, S. T. Mazhen, A. M. Yergashov, I. A. Saprykina, R. A. Mimokhod, Preliminary result of investigation of element composition of kyathos (6th-4th centuries BCE) from the necropolis Volna 1 on the Taman Peninsula by neutron resonance capture analysis, IMEKO TC4 International Conference on Metrology for Archaeology and Cultural Heritage, 22-24 October 2020, Trento, Italy, Proceedings, 2020. Online [accessed 24 September 2022]: https://www.imeko.org/publications/tc4-archaeo-2020/imeko-tc4-metroarchaeo2020-073.pdf
[17] S. Orfanou, Early Iron Age Greek copper-based technology: votive offerings from Thessaly, Institute of Archaeology, UCL, thesis submitted for PhD in Archaeology/Archaeometallurgy, 2015, p. 87, tab. 2.2.
Online [accessed 28 August 2022]: https://discovery.ucl.ac.uk/id/eprint/1471577/1/orfanou_orfanou%202015%20early%20iron%20age%20greek%20copper-based%20technology.pdf

The importance of sound velocity determination for bathymetric survey

ACTA IMEKO, ISSN: 2221-870X, December 2021, Volume 10, Number 4, 46-53

Pier Paolo Amoroso1, Claudio Parente2

1 International PhD Programme "Environment, Resources and Sustainable Development", Department of Science and Technology, Parthenope University of Naples, Centro Direzionale, Isola C4, 80143 Naples, Italy
2 Department of Science and Technology, Parthenope University of Naples, Centro Direzionale, Isola C4, 80143 Naples, Italy

Section: research paper

Keywords: bathymetric survey; single beam; multibeam; sound velocity in water; depth measurements

Citation: Pier Paolo Amoroso, Claudio Parente, The importance of sound velocity determination for bathymetric survey, ACTA IMEKO, vol. 10, no. 4, article 10, December 2021, identifier: IMEKO-ACTA-10 (2021)-04-10

Section editor: Silvio Del Pizzo, University of Naples 'Parthenope', Italy

Received May 1, 2021; in final form December 6, 2021; published December 2021

Copyright: this is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: this work was supported by Parthenope University of Naples, Italy.

Corresponding author: Claudio Parente, e-mail: claudio.parente@uniparthenope.it

Abstract: Bathymetric surveys are carried out whenever there is a need to know the exact morphological trend of the seabed. For the correct operation of the echo sounder, which uses the principle of acoustic waves to scan the bottom and determine the depth, it is important to accurately determine the sound velocity in water, as it varies according to specific parameters (density, temperature, and pressure). In this work, we analyse the role of sound velocity determination in bathymetric surveys and its impact on the accuracy of depth measurement. The experiments are conducted on a data set provided by the "Istituto Idrografico della Marina Militare Italiana" (IIM), the official hydrographic office for Italy, and acquired in the Ligurian Sea. In our case, the formulas of Chen & Millero (UNESCO), Medwin, and Mackenzie were applied. The introduction of errors in the chemical-physical parameters of the water column (temperature, pressure, salinity, depth), simulating inaccurate measurements, produces considerable impacts on the sound velocity determination and consequently a decrease in the accuracy of the depth values. The results remark the need to use precise probes and accurate procedures to obtain reliable depth data.

1. Introduction

As reported in Huet (2009), "hydrography is the branch of applied sciences which deals with the measurement and description of the physical features of oceans, seas, coastal areas, lakes and rivers, as well as with the prediction of their evolution, for the primary purpose of safety of navigation and all other marine activities, including economic development, security and defence, scientific research, and environmental protection" [1]. According to the IHO, the International Hydrographic Organization [2], a hydrographic survey can be defined as the survey of an area of water, but in modern use it includes many other objectives, such as measurements of tides and currents and the determination of the physical and chemical properties of water. The main objective is to obtain the essential data for the production of nautical charts, with particular interest in the characteristics that may influence navigation [3].
In addition, hydrographic surveys aim to acquire the information necessary for marine navigation products and for the management, engineering, and science of coastal areas. Bathymetric surveys belong to the family of hydrographic surveys and are carried out whenever there is a need to know precisely the morphological trend of the seabed [4], [5]. They are therefore preliminary to the realization of maritime and river works, and in the case of existing works they are indispensable for the continuous verification of water heads and dredging volumes [6]. Generally, hydrographic surveys are carried out using a vessel equipped with a precision echo sounder, which uses the principle of acoustic waves to sound the bottom and determine the depth. An accurate determination of the vessel position and attitude, as well as the correct functioning of the echo sounder, are both fundamental for the quality of the survey results. Using satellite techniques, differential corrections with code measurements allow accuracies estimated at a few metres.
the substantial difference is that the single beam emits a single sound impulse, while the multibeam emits a beam of sound impulses, which allow to obtain a greater and more detailed acquisition of the seabed [12]. the single beam and multibeam are basic instruments of acoustical oceanography, the discipline that describes the role of the ocean as an acoustic medium by relating oceanic properties to the behaviour of underwater acoustic propagation, noise, etc [13]. the sound velocity in a medium is mostly influenced by the medium itself [14], [15], so it is affected by the conditions of the sea-bottom boundaries as well as by the variation of the chemical-physical parameters of the water volume [16]. in fact, the sound velocity in seawater is defined as a function of the isothermal compressibility, the ratio of specific heats of seawater at constant pressure and constant volume, and the density of seawater [17]. particularly, the sound velocity in sea water increases with an increase in temperature, salinity or pressure [18]. temperature decreases from the sea-surface to the seabed, but there are different local variations. the sound velocity profile is very variable near the surface according to the seasons and the hours of the day, due to the heat exchange with the atmosphere, which modifies the temperature and salinity of the sea [16]. if temperature is constant, the velocity sound increases with depth due to the pressure gradient [19]. normally in literature, the average value of the sound velocity in water is accepted as 1500 m/s, calculated taking as reference the nominal conditions of the water, characterized by a temperature of 0 ° c, a salinity of 35 ppt (parts per thousand) and a pressure of 760 mmhg [20], [21]. this average value, however, can oscillate according to the characteristics of the water, varying between 1387 m/s and 1529 m/s [17]. local velocity measurements are quite difficult to perform accurately, whereas its constitutive parameters are more easily quantified. particular probes, bar check method and empirical formulas, permit to determine the sound velocity. empirical formulas require physical and chemical parameters, such as depth (d), temperature (t), salinity (s), pressure (p). these parameters can be measured with different types of instruments. there are many different formulas available to calculate the sound velocity in water, and the most popular and accurate are chen & millero (1977) [22]-[25], del grosso (1974) [26]-[28], mackenzie (1981) [29]-[31] and medwin (1975) [32], [33]. 2. sound velocity determination the determination of the sound velocity in water can be obtained through direct or indirect measurements. among the systems commonly used to measure the sound velocity in situ, we find the depth velocimeter, which directly measures the sound velocity for a high frequency wave transmitted over an accurately regulated distance. among the systems commonly used to determine the sound velocity in indirect way, we find specific probes capable of measuring chemical-physical parameters of water as input data. for example, the bathythermograph or the xbt probe [34], [35] measures the water temperature only as a function of depth; to deduce the sound velocity, it is necessary to have the salinity data independently, so it is measured simultaneously by a conductivity meter, integrated in the same device. 
The echo sounder is calibrated for water temperature and salinity, or directly at a known depth using the "bar check" method (measurement of the immersion depth of a metal bar or disc lowered below the transducer and suspended on a graduated cable). This method consists in immersing, under the echo sounder transducer, a plate with a square base of 60 cm edge, supported by a chain with centimetre subdivisions. Dive depths are generally set at 5 m, 10 m, 15 m and 20 m [36]. One then adjusts the sound velocity setting in the instrument, proceeding with the measurements until the correct depth value is obtained. This operation is repeated for the various control depths, at least twice at each depth, and an arithmetic mean of the obtained sound velocity values is calculated and set on the instrument; some systems, however, use all the values read at the various depths. This method is conveniently carried out in shallow water with low currents; it gives a mean velocity down to the observed depth and simultaneously checks the calibration of the sounder [37].

When indirect measurements are carried out and the chemical-physical parameters of water are available, formulas that permit the calculation of the sound velocity must be applied. Different formulas are used depending on the depth values, i.e.:
1. Chen & Millero is used only for depths less than 1000 m;
2. Del Grosso is used only for depths greater than 1000 m;
3. Mackenzie is used for quick calculations in ocean waters up to 8000 m depth;
4. Medwin is used for quick calculations in ocean waters up to 1000 m depth [38].

Every empirical formula is characterized by a range of validity of the physical and chemical parameters. Normally, depths are measured in metres, temperatures in °C, salinity in ppt (parts per thousand) and pressure in bar. A brief description of each formula is reported below.

The Chen & Millero formula is denominated the international algorithm adopted by UNESCO; it is characterized by more accurate models than the others, and the equation is the following:

c = Cw(T, P) + A(T, P)·S + B(T, P)·S^(3/2) + D(T, P)·S²,   (1)

where

Cw(T, P) = C00 + C01·T + C02·T² + C03·T³ + C04·T⁴ + C05·T⁵ + (C10 + C11·T + C12·T² + C13·T³ + C14·T⁴)·P + (C20 + C21·T + C22·T² + C23·T³ + C24·T⁴)·P² + (C30 + C31·T + C32·T²)·P³,   (2)
unlike chen & millero and del grosso, mackenzie uses depth in the formula for velocity calculation. the formula is: 𝑐(𝐷, 𝑆, 𝑇) = 1448.96 + 4.591𝑇 − 5.304 ∙ 10−2𝑇2 + +2.374 ∙ 10−4𝑇3 + 1.340(𝑆 − 35) + 1.630 ∙ 10−2𝐷 + 1.675 ∙ 10−7𝐷2 − +1.25 ∙ 10−2𝑇(𝑆 − 35) − 7.139 ∙ 10−13𝑇𝐷3. (11) this formula is valid for temperature values included in the range 2 °c < t < 30 °c, salinity values included in the range 25 ppt < s < 40 ppt, and depth values included in the range 0 m < d < 8000 m [38]. medwin is the simplest formula, and it is given as: 𝑐 = 1449.2 + 4.6 𝑇 − 0.055 𝑇2 + 0.00029 𝑇3 + (1.34 − 0.010 𝑇)(𝑆 − 35) + 0.016 𝐷 . (12) this formula, instead, is valid for temperature values included in the range 0 °c < t < 35 °c, salinity values included in the range 0 ppt < s < 40 ppt and depth values included in the range 0 m < d < 1000 m. 3. applications 3.1. hydrographic data the “istituto idrografico della marina militare italiana (iim)” provided the data set for this work. the ship used for the survey is the nave magnaghi, a hydro-oceanographic ship. the hydrographic activity carried out by the unit is concretized in the realization of port, coastal and offshore reliefs through sounding operations, research of minimum depths and wrecks, determination of the topography of the coastline and port works, study of the nature of the seabed etc. the sampling point is localized near the ligurian coast named “riviera di levante” and has the following coordinates referred to world geodetic system 84 (wgs-84): φ = 43°55’17’’.53 n, λ = 9°40’55’’.4 e (figure 1). this type of data set was obtained by using the "ctd idronaut-ocean seven 316 plus" probe which measures chemicalphysical parameters of water providing in output the values of pressure, temperature, conductivity, salinity, depth and velocity. particularly, data are provided about every meter of depth, in order to be able to determine the correct vertical profile of the sound velocity. the analysed depths range from 0.99 m to 324.12 m, the pressure values are provided by the probe in db (decibar), the temperature in degrees celsius (° c) and the salinity in parts per thousand (ppt). as reported in the probe technical specification document, pressure and temperature are directed measured as well as other parameters e. g. conductivity and ph. table 1. coefficients of chen & millero formula. a00 1.389 b00 -0.01922 c22 2.60e-08 a01 -0.01262 b01 -4.40e-05 c23 -2.50e-10 a02 7.16e-05 b10 7.36e-05 c24 1.04e-12 a03 2.01e-06 b11 1.79e-07 c30 -9.80e-09 a04 -3.20e-08 c00 1402.388 c31 3.85e-10 a10 9.47e-05 c01 5.03711 c32 -2.40e-12 a11 -1.30e-05 c02 -0.05809 d00 0.001727 a12 -6.50e-08 c03 0.000334 d10 -8.00e-06 a13 1.05e-08 c04 -1.50e-06 a14 -2.00e-10 c05 3.15e-09 a20 -3.90e-07 c10 0.153563 a21 9.10e-09 c11 0.00069 a22 -1.60e-10 c12 -8.20e-06 a23 7.99e-12 c13 1.36e-07 a30 1.10e-10 c14 -6.10e-10 a31 6.65e-12 c20 3.13e-05 a32 -3.40e-13 c21 -1.70e-06 table 2. coefficients of del grosso formula. c000 1402.392 ct1 5.01e+00 ct2 -5.51e-02 ct3 2.22e-04 cs1 1.33e+00 cs2 1.29e-04 cp1 0.1560592 cp2 2.45e-05 cp3 -8.83e-09 cst -1.28e-02 ctp 6.35e-03 ct2p2 2.66e-08 ctp2 -1.59e-06 ctp3 5.22e-10 ct3p -4.38e-07 cs2p2 -1.62e-09 cst2 9.69e-05 cs2tp 4.86e-06 cstp -3.41e-04 acta imeko | www.imeko.org december 2021 | volume 10 | number 4 | 49 other parameters such as salinity and water density are obtained using specific algorithms available in literature [40]. 
the probe is characterized by the following accuracy values for direct measurements: 5.00e-04 bar for pressure; 2.00e-03 °c for temperature. in consideration of the adopted approach of the indirect measurement of the salinity, the accuracy is better than 0.005 ppt. 3.2. results in this work, we proceeded with the calculation of the sound velocity in water using three of the formulas previously described (del grosso formula is not applicable because the analyzed depths are less than 1000 m). table 3 shows a selection of data supplied by “idronaut ctd” acquired at different depths along the investigated water column, as well as the relative sound velocity values cu, cme, cma, calculated using respectively the unesco formula, the medwin formula and the mackenzie formula. table 4 shows the statistics (mean, standard deviation, minimum and maximum values) of sound velocity values supplied by the adopted formulas. the sound velocity calculated with the unesco formula oscillates between a minimum equal to 1508.988 m/s and a maximum equal to 1533.994 m/s, with an average value equal to 1513.543 m/s. the velocity calculated with the medwin formula oscillates between a minimum equal to 1509.142 m/s and a maximum equal to 1534.026 m/s, with an average value of 1513.636 m/s. finally, the sound velocity calculated with the mackenzie formula oscillates between a minimum equal to 1509.455 m/s and a maximum equal to 1534.676 m/s, with an average value equal to 1514.043 m/s. in the table 5, the statistical values of the residuals produced by the comparison between the adopted formulas are reported. once the sound velocity was calculated, a systematic error was introduced in each chemical-physical parameter of the water, i.e. temperature, salinity, pressure and depth, in order to evaluate the impact on the determination of the sound velocity. by systematic error we mean, in this case, an incorrect sampling of the parameters in question, simulating probe or human errors during the measurement. the introduced errors are: 0.1, 0.5 and 1 °c figure 1. localization of the sample point: visualization on the italy map (upper) and on sentinel-2 satellite image of the ligurian sea (lower). table 3. a selection of values of pressure, temperature and salinity supplied by ctd probe and corresponding velocity of the sound in water calculated by using unesco, medwin and mackenzie formulas. p (bar) d (m) t (°c) s (ppt) cu (m/s) cme (m/s) cma (m/s) 0.1 0.990 23.397 38.1408 1533.889 1533.924 1534.569 0.2 1.980 23.396 38.147 1533.910 1533.945 1534.591 0.3 2.980 23.398 38.151 1533.934 1533.968 1534.615 0.4 3.970 23.397 38.154 1533.953 1533.987 1534.635 0.5 4.960 23.398 38.156 1533.974 1534.006 1534.655 0.6 5.950 23.397 38.160 1533.994 1534.026 1534.676 0.7 6.940 23.389 38.163 1533.994 1534.024 1534.676 0.8 7.940 23.380 38.166 1533.990 1534.020 1534.672 0.9 8.930 23.369 38.171 1533.987 1534.016 1534.669 1 9.920 23.361 38.176 1533.988 1534.017 1534.671 5.1 50.580 16.426 38.046 1515.445 1515.598 1515.984 10.1 100.170 14.034 38.180 1508.991 1509.145 1509.458 20.2 200.280 13.944 38.519 1510.773 1510.865 1511.252 32.7 324.120 14.007 38.660 1513.210 1513.216 1513.672 table 4. statistics of sound velocity values supplied by the adopted formulas. cu (m/s) cme (m/s) cma (m/s) mean 1513.543 1513.636 1514.043 st. dev. 6.686 6.666 6.747 min 1508.988 1509.142 1509.455 max 1533.994 1534.026 1534.676 table 5. statistical values of the residuals produced by the comparison between the adopted formulas. 
cme – cu (m/s) cma – cu (m/s) cma – cme (m/s) mean 0.094 0.501 0.407 st. dev. 0.050 0.063 0.087 rmse 0.106 0.505 0.416 min 0.006 0.462 0.312 max 0.165 0.694 0.666 acta imeko | www.imeko.org december 2021 | volume 10 | number 4 | 50 for temperature, 0.1, 0.5 and 1 ppt for salinity, 0.1, 0.5 and 1 bar for pressure, 0.1 0.5 and 1 m for depth. however, those errors are worse than the accuracy values reported for the probe in subchapter 3.1, so to simulate combination of unfavourable environmental situations and poorly accurate operations. we then proceeded with the calculation of the residuals obtained by comparing the values of sound velocity produced by systematic error with the initial values of sound velocity considered as a reference. subsequently, the statistical values (mean, standard deviation, rmse, minimum and maximum values) are calculated for each formula, so to define the impact of the systematic errors on the accuracy of sound velocity determination. table 6, table 7, table 8 and table 9 show the statistical values of the residual produced by the injection of systematic errors in, respectively, the temperature, salinity, pressure and depth values in each formula. we finally proceeded with the calculation of the statistical values of the residuals generated by the worst possible combination of the systematic errors on temperature and salinity. the results of this calculation are shown in the table 10. in the end, the estimate of the errors on the depths was calculated. to provide the reader with an idea of this error dimension, average values of sound velocity in water were taken, calculated using the three formulas, every 100 m, 200 m and a single average value over the entire column of water, both with and without systematic errors. figure 2 shows the variability of the sound velocity in the case of unesco formula application, when we introduced systematic errors on temperature (0.5 °c), salinity (0.5 ppt) and both (0.5 °c-0.5 ppt). table 11 shows the rmse values of the differences between the known and the calculated depths by using different sound table 6. statistical values of the residuals produced by systematic errors for the temperature (0.1-0.5-1 °c) introduced in the adopted formulas. mean st. dev. rmse min max δc u (m/s) (0.1 °c) 0.311 0.020 0.312 0.249 0.321 δc me (m/s) (0.1 °c) 0.310 0.021 0.311 0.247 0.320 δc ma (m/s) (0.1 °c) 0.315 0.021 0.315 0.250 0.325 δc u (m/s) (0.5 °c) 1.549 0.102 1.553 1.239 1.597 δc me (m/s) (0.5 °c) 1.542 0.104 1.546 1.227 1.591 δc ma (m/s) (0.5 °c) 1.565 0.107 1.569 1.240 1.616 δc u (m/s) (1 °c) 3.079 0.203 3.085 2.461 3.174 δc me (m/s) (1 °c) 3.063 0.207 3.070 2.437 3.161 δc ma (m/s) (1 °c) 3.110 0.213 3.117 2.463 3.210 table 7. statistical values of the residuals produced by systematic errors for the salinity (0.1-0.5-1 ppt) introduced in the adopted formulas. mean st. dev. rmse min max δc u (m/s) (0.1 ppt) 0.117 0.002 0.117 0.109 0.118 δc me (m/s) (0.1 ppt) 0.119 0.003 0.119 0.111 0.120 δc ma (m/s) (0.1 ppt) 0.133 0.000 0.133 0.133 0.133 δc u (m/s) (0.5 ppt) 0.584 0.012 0.584 0.547 0.590 δc me (m/s) (0.5 ppt) 0.594 0.013 0.594 0.553 0.600 δc ma (m/s) (0.5 ppt) 0.664 0.000 0.664 0.664 0.664 δc u (m/s) (1 ppt) 1.169 0.025 1.169 1.094 1.181 δc me (m/s) (1 ppt) 1.188 0.027 1.189 1.106 1.201 δc ma (m/s) (1 ppt) 1.328 0.000 1.328 1.328 1.328 table 8. statistical values of the residuals produced by systematic errors for the pressure (0.1-0.5-1 bar) introduced in the adopted formulas. mean st. dev. 
rmse min max δc u (m/s) (0.1 bar) 0.017 0.000 0.017 0.017 0.017 δc me (m/s) (0.1 bar) δc ma (m/s) (0.1 bar) δc u (m/s) (0.5 bar) 0.083 0.000 0.083 0.083 0.083 δc me (m/s) (0.5 bar) δc ma (m/s) (0.5 bar) δc u (m/s) (1 bar) 0.166 0.001 0.166 0.166 0.167 δc me (m/s) (1 bar) δc ma (m/s) (1 bar) table 9. statistical values of the residuals produced by systematic errors for the depth (0.1-0.5-1 m) introduced in the adopted formulas. mean st. dev. rmse min max δc u (m/s) (0.1 m) δc me (m/s) (0.1 m) 0.002 0.000 0.002 0.002 0.002 δc ma (m/s) (0.1 m) 0.002 0.000 0.002 0.002 0.002 δc u (m/s) (0.5 m) δc me (m/s) (0.5 m) 0.008 0.000 0.008 0.008 0.008 δc ma (m/s) (0.5 m) 0.008 0.000 0.008 0.008 0.008 δc u (m/s) (1 m) δc me (m/s) (1 m) 0.016 0.000 0.016 0.016 0.016 δc ma(m/s) (1 m) 0.016 0.000 0.016 0.016 0.016 table 10. statistical values of the residuals produced by combination of systematic errors for temperature and salinity (0.1 °c-0.1 ppt, 0.5 °c-0.5 ppt, 1 °c-1 ppt), introduced in the adopted formulas. mean st. dev. rmse min max δc u (m/s) (0.1 °c-0.1 ppt) 0.428 0.023 0.429 0.358 0.439 δc me (m/s) (0.1 °c-0.1 ppt) 0.429 0.024 0.429 0.357 0.440 δc ma (m/s) (0.1 °c-0.1 ppt) 0.447 0.021 0.448 0.382 0.458 δc u (m/s) (0.5 °c-0.5 ppt) 2.131 0.114 2.134 1.784 2.185 δc me (m/s) (0.5 °c-0.5 ppt) 2.134 0.117 2.137 1.777 2.189 δc ma (m/s) (0.5 °c-0.5 ppt) 2.229 0.107 2.232 1.904 2.280 δc u (m/s) (1 °c-1 ppt) 4.237 0.227 4.244 3.547 4.344 δc me (m/s) (1 °c-1 ppt) 4.242 0.233 4.248 3.533 4.352 δc ma (m/s) (1 °c-1 ppt) 4.437 0.213 4.442 3.790 4.538 figure 2. impact on the sound velocity in water caused by systematic errors on t, s and t-s. acta imeko | www.imeko.org december 2021 | volume 10 | number 4 | 51 velocity values, those obtained from different approaches, i.e. using unesco formula (rmseu), medwin formula (rmseme) and mackenzie formula (rmsema), averaging the sound velocity values on sections (100 m) or on the entire water column, injecting errors in temperature (0.1-0.5-1 °c), salinity (0.1-0.5-1 ppt) or both. 3.3. discussions the results highlight how the values of sound velocity in water obtained by means of the three formulas are like each other. in particular, as reported in table 5, it is possible to see how the unesco formula and the medwin formula are very similar to each other, as they show a very small rmse, while the mackenzie formula, compared to the other two, approximates the sound velocity values differently in the same conditions. from the results shown in the previous tables, temperature and salinity are the parameters that have the greatest effect on the determination of the sound velocity in water. in fact, the introduction of systematic errors on them produces impacts greater than depth and pressure. as regards the temperature, the parameter that has the greatest influence compared to the others, there is an increase in the rmse value, in particular with the presence of an error of 1 °c, an rmse equal to 3.088 m/s is obtained, as shown in table 6. these results are approximately similar for all formulas. for salinity, another parameter with strong influence, as shown in table 7, we can see that the results obtained by using the unesco formula and the medwin formula are very similar, while regarding the mackenzie formula, we can see how an important variation of the velocity values is obtained, but this remains constant along the water column. this result is highlighted by the standard deviation values. 
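the error-injection procedure described above can be sketched as follows (an illustrative python fragment, not the authors' code; `sound_speed_mackenzie` is the hypothetical function sketched after equations (11)-(12), and t, s, d stand for the measured ctd profile arrays).

```python
import numpy as np

def residual_stats(c_ref, c_biased):
    """mean, st. dev., rmse, min and max of the absolute residuals (m/s)."""
    r = np.abs(c_biased - c_ref)
    return {"mean": r.mean(), "st. dev.": r.std(ddof=1),
            "rmse": np.sqrt(np.mean(r**2)), "min": r.min(), "max": r.max()}

def inject_temperature_error(t, s, d, dt, model):
    """recompute the profile with a constant bias dt on the temperature."""
    return model(t + dt, s, d), model(t, s, d)

# example usage (t, s, d are the ctd profile arrays):
# c_biased, c_ref = inject_temperature_error(t, s, d, 0.5, sound_speed_mackenzie)
# print(residual_stats(c_ref, c_biased))   # cf. table 6, rows (0.5 °c)
```

the same pattern applies to salinity, pressure and depth biases, and to the combined temperature-salinity case of table 10.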
for the pressure and the depth the situation is the opposite, as they are the parameters that have the smallest effects on the determination of the velocity; in fact, the impact produced by the introduction of systematic errors can be considered almost negligible. for the pressure, we have an rmse much smaller than the ones seen previously: it can be noted that even in the presence of a systematic error equal to 1 bar, the rmse is almost negligible, showing a value of 0.16 m/s, as shown in table 8. finally, for the depth, as shown in table 9, we get the smallest values of rmse. also for these two parameters, the formulas show similar values. in the case of the worst combination, generated by the simultaneous introduction of systematic errors on temperature and salinity, table 10 shows very significant results. the incorrect determination of these parameters, combined with each other, leads to an rmse ranging from 0.428 m/s (for a systematic error of 0.1 °c - 0.1 ppt) up to 4.248 m/s (for a systematic error of 1 °c - 1 ppt). also in this case, the values obtained with the formulas of unesco and medwin are similar, while the mackenzie formula shows a slight increase in the obtained values. in the end, table 11 shows the rmse values for the depth associated with the resulting velocity values. in particular, it can be noted that, as more velocity values are taken along the water column, the rmse value tends to decrease. in fact, for a single velocity value over the entire water column, an rmse equal to 0.260 m is obtained, while for velocity values taken every 100 m, an rmse equal to 0.110 m is obtained. we can also note that a systematic error on the velocity naturally leads to an error on the depth: even taking multiple velocity values along the water column, an rmse equal to 0.560 m is generated in the worst possible case, i.e. in the presence of a systematic error on both temperature and salinity.

4. conclusions

bathymetric surveys are typically carried out using techniques that exploit the propagation of acoustic waves in water. therefore, the correct determination of the sound velocity in water is of fundamental importance. it was found that an error on the chemical-physical parameters of the water (temperature, pressure, salinity, depth), due to an inaccurate instrument calibration or, rather, to the use of a wrong model for one of them, can significantly impact the sound velocity determination. in particular, this article provides a measurement of the errors that can be produced. for our application, the formulas of unesco, medwin and mackenzie have been taken into consideration, and systematic errors on the four parameters have been simulated. the inaccuracy of temperature and salinity measurements produces the greatest effects. particularly, the results remark that the sound velocity is very sensitive to the variation of the temperature. an error on the determination of the sound velocity in water leads to a non-negligible error on the depth. it has been seen that using a single sound velocity value over the entire water column, affected by a combination of systematic errors for temperature and salinity, generates errors that can reach about 0.5 m. of course, taking more velocity values as references along the water column allows the depth of the bottom to be determined more accurately, but even in this case non-negligible errors may occur.
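to see why a bias on the sound velocity maps almost linearly into a depth error, consider the simple single-beam relation d = c·t/2, with t the two-way travel time. the following back-of-the-envelope sketch (python; not from the paper, with numbers taken from table 4 and table 10 purely for illustration) reproduces the order of magnitude of the depth errors reported above.

```python
# a velocity bias dc produces a depth error of roughly (dc / c) * d,
# since the echo sounder converts the measured travel time with the
# assumed velocity. illustrative numbers only.

c_ref = 1513.5   # reference mean sound velocity, m/s (cf. table 4)
dc = 2.134       # velocity rmse for a 0.5 °c - 0.5 ppt bias (table 10)
depth = 324.0    # deepest analysed point of the profile, m

t_two_way = 2 * depth / c_ref                 # travel time actually measured
depth_biased = (c_ref + dc) * t_two_way / 2   # depth computed with biased velocity
print(depth_biased - depth)                   # ≈ 0.46 m depth error
```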
it is possible to conclude that the sound velocity in water represents a very important parameter in bathymetric surveys, and therefore it must necessarily be determined with the highest possible accuracy. our experiments do not permit evaluation of the best method for the indirect measurement of the sound velocity in water: the paper highlights the impact of an inaccurate determination of temperature, pressure and salinity on the bathymetric survey results. the extremely precise probes for direct measurement that are now available are to be preferred in order to improve the depth determination.

table 11. rmse values of the differences between the known and the calculated depths using sound velocity values derived from different approaches.
velocity values taken on: | rmseu (m) | rmseme (m) | rmsema (m)
entire water column | 0.260 | 0.250 | 0.263
every 100 m | 0.111 | 0.107 | 0.121
entire water column (0.1 °c) | 0.246 | 0.236 | 0.243
entire water column (0.5 °c) | 0.245 | 0.243 | 0.248
entire water column (1 °c) | 0.353 | 0.356 | 0.360
every 100 m (0.1 °c) | 0.116 | 0.116 | 0.128
every 100 m (0.5 °c) | 0.216 | 0.224 | 0.229
every 100 m (1 °c) | 0.388 | 0.397 | 0.401
entire water column (0.1 ppt) | 0.254 | 0.244 | 0.251
entire water column (0.5 ppt) | 0.237 | 0.229 | 0.234
entire water column (1 ppt) | 0.235 | 0.231 | 0.239
every 100 m (0.1 ppt) | 0.111 | 0.109 | 0.123
every 100 m (0.5 ppt) | 0.130 | 0.134 | 0.148
every 100 m (1 ppt) | 0.178 | 0.188 | 0.207
entire water column (0.1 °c-0.1 ppt) | 0.241 | 0.233 | 0.239
entire water column (0.5 °c-0.5 ppt) | 0.276 | 0.277 | 0.286
entire water column (1 °c-1 ppt) | 0.470 | 0.477 | 0.497
every 100 m (0.1 °c-0.1 ppt) | 0.121 | 0.123 | 0.134
every 100 m (0.5 °c-0.5 ppt) | 0.279 | 0.290 | 0.302
every 100 m (1 °c-1 ppt) | 0.526 | 0.538 | 0.560

acknowledgement
this work synthesizes the results of experiments executed within a research project performed in the laboratory of geomatics, remote sensing and gis of the "parthenope" university of naples. we would like to thank the technical staff for their support.

references
[1] m. huet, marine spatial data infrastructure, an iho perspective: data, products, standards and policies, international hydrographic bureau, monaco, 2009.
[2] m. j. umbach, hydrographic manual, department of commerce, national oceanic and atmospheric administration, national ocean survey, 20 (2), 1976.
[3] r. m. alkan, n. o. aykut, evaluation of recent hydrographic survey standards, proc. of the 19th international symposium on modern technologies, education and professional practice in geodesy and related fields, 2009, pp. 116-130.
[4] j. v. gardner, p. dartnell, l. a. mayer, j. e. h. clarke, geomorphology, acoustic backscatter, and processes in santa monica bay from multibeam mapping, marine environmental research, 56 (1-2), 2003, pp. 15-46. doi: 10.1016/s0141-1136(02)00323-9
[5] t. a. kearns, j. breman, bathymetry - the art and science of seafloor modeling for modern applications, ocean globe, 2010, pp. 136.
[6] l. m. pion, j. c. m. bernardino, dredging volumes prediction for the access channel of santos port considering different design depths, transnav international journal on marine navigation and safety of sea transportation, 12, 2018. doi: 10.12716/1001.12.03.09
[7] d. popielarczyk, rtk water level determination in precise inland bathymetric measurements, proc. of the 25th international technical meeting of the satellite division of the institute of navigation (ion gnss 2012), september 2012, pp. 1158-1163.
[8] manual on hydrography, publication c-13, 1st edition, international hydrographic bureau, may 2005.
[9] g. b. mills, international hydrographic survey standards, the international hydrographic review, 1998.
[10] g. antonelli, f. arrichiello, a. caiti, g. casalino, d. de palma, g. indiveri, m. razzanelli, l. pollini, e. simetti, isme activity on the use of autonomous surface and underwater vehicles for acoustic surveys at sea, acta imeko, 7 (2), 2018, pp. 24-31. doi: 10.21014/acta_imeko.v7i2.539
[11] c. parente, a. vallario, interpolation of single beam echo sounder data for 3d bathymetric model, int. j. adv. comput. sci. appl., 10, 2019, pp. 6-13. doi: 10.14569/ijacsa.2019.0101002
[12] i. parnum, j. siwabessy, a. gavrilov, m. parsons, a comparison of single beam and multibeam sonar systems in seafloor habitat mapping, proc. 3rd int. conf. and exhibition of underwater acoustic measurements: technologies & results, nafplion, greece, june 2009, pp. 155-162.
[13] h. medwin, sounds in the sea: from ocean acoustics to acoustical oceanography, cambridge university press, 2005.
[14] j. w. s. rayleigh, theory of sound, 2 (255-266), macmillan, london, 1894.
[15] d. agrez, s. begus, evaluation of pressure effects on acoustic thermometer with a single waveguide, acta imeko, 7 (4), 2019, pp. 42-47. doi: 10.21014/acta_imeko.v7i4.576
[16] x. lurton, an introduction to underwater acoustics: principles and applications, springer science & business media, 2002.
[17] p. c. etter, underwater acoustic modeling and simulation, crc press, 2018. doi: 10.1201/9781315166346
[18] l. d. talley, descriptive physical oceanography: an introduction, academic press, 2011. doi: 10.1016/b978-0-7506-4552-2.10001-0
[19] introduction to sonar, bureau of naval personnel navy training course, 2nd ed., washington, d.c.: u.s. bureau of naval personnel, 1963.
[20] a. e. ingham, hydrography for the surveyor and engineer, 3rd edn., blackwell scientific publications, london, 1992, pp. 132.
[21] s. jamshidi, m. n. abu bakar, an analysis on sound speed in seawater using ctd data, journal of applied sciences, 10 (2), 2010, pp. 132-138. doi: 10.3923/jas.2010.132.138
[22] c. t. chen, r. a. fine, f. j. millero, the equation of state of pure water determined from sound speeds, the journal of chemical physics, 66 (5), 1977, pp. 2142-2144. doi: 10.1063/1.434179
[23] c. t. chen, f. j. millero, the equation of state of seawater determined from sound speeds, journal of marine research, 36 (4), 1978, pp. 657-691.
[24] f. j. millero, c. t. chen, a. bradshaw, k. schleicher, a new high-pressure equation of state for seawater, deep sea research part a. oceanographic research papers, 27 (3-4), 1980, pp. 255-264. doi: 10.1016/0198-0149(80)90016-3
[25] f. j. millero, history of the equation of state of seawater, oceanography, 23 (3), 2010, pp. 18-33. doi: 10.5670/oceanog.2010.21
[26] v. a. del grosso, c. w. mader, speed of sound in pure water, the journal of the acoustical society of america, 52 (5b), 1972, pp. 1442-1446. doi: 10.1121/1.1913258
[27] f. filiciotto, g. buscaino, the role of sound in the aquatic environment, ecoacoustics, 2017, pp. 61-79. doi: 10.1002/9781119230724.ch4
[28] l. li, t. wang, l. yang, c. gu, h. liang, modeling of sound speed in uwasn, 2018 ieee 15th international conference on networking, sensing and control (icnsc), ieee, march 2018, pp. 1-6. doi: 10.1109/icnsc.2018.8361308
[29] c. c. leroy, development of simple equations for accurate and more realistic calculation of the speed of sound in seawater, the journal of the acoustical society of america, 46 (1b), 1969, pp. 216-226. doi: 10.1121/1.1911673
[30] k. v. mackenzie, discussion of seawater sound-speed determinations, the journal of the acoustical society of america, 70 (3), 1981, pp. 801-806. doi: 10.1121/1.386919
[31] k. v. mackenzie, nine-term equation for sound speed in the oceans, the journal of the acoustical society of america, 70 (3), 1981, pp. 807-812. doi: 10.1121/1.386920
[32] h. medwin, speed of sound in water: a simple equation for realistic parameters, the journal of the acoustical society of america, 58 (6), 1975, pp. 1318-1319. doi: 10.1121/1.380790
[33] c. c. leroy, s. p. robinson, m. j. goldsmith, a new equation for the accurate calculation of sound speed in all oceans, the journal of the acoustical society of america, 124 (5), 2008, pp. 2774-2782. doi: 10.1121/1.2988296
[34] r. h. heinmiller, c. c. ebbesmeyer, b. a. taft, d. b. olson, o. p. nikitin, systematic errors in expendable bathythermograph (xbt) profiles, deep sea research part a. oceanographic research papers, 30 (11), 1983, pp. 1185-1196. doi: 10.1016/0198-0149(83)90096-1
[35] l. cheng, j. abraham, g. goni, t. boyer, s. wijffels, r. cowley, v. gouretski, f. reseghetti, s. kizu, s. dong, f. bringas, m. goes, l. houpert, j. sprintall, j. zhu, xbt science: assessment of instrumental biases and errors, bulletin of the american meteorological society, 97 (6), 2016, pp. 924-933. doi: 10.1175/bams-d-15-00031.1
[36] m. j. langland, bathymetry and sediment-storage capacity change in three reservoirs on the lower susquehanna river, 1996-2008, reston, va: us geological survey, 2009. doi: 10.3133/sir20095110
[37] c. d. maunsell, the speed of sound in water, canadian acoustics, 4 (3), 1976, pp. 2-4.
[38] k. h. talib, m. y. othman, s. a. h. sulaiman, m. a. m. wazir, a. azizan, determination of speed of sound using empirical equations and svp, ieee 7th international colloquium on signal processing and its applications, ieee, march 2011, pp. 252-256. doi: 10.1109/cspa.2011.5759882
[39] r. m. alkan, y. kalkan, n. o. aykut, sound velocity determination with empirical formulas and bar check, proc. of the 23rd fig congress, munich, october 2006.
[40] n. p. fofonoff, r. c. millard jr., algorithms for computation of fundamental properties of seawater, unesco technical papers in marine science, 44, 1983.
application of wearable eeg sensors for indoor thermal comfort measurements

acta imeko
issn: 2221-870x
december 2021, volume 10, number 4, 214 - 220

silvia angela mansi1, ilaria pigliautile2, camillo porcaro3,4, anna laura pisello2, marco arnesano1
1 università telematica ecampus, 22060 novedrate (co), italy
2 ciriaf - centro interuniversitario sull'inquinamento e l'ambiente mauro felli, dipartimento di ingegneria, università di perugia, 06125 perugia, italy
3 department of neuroscience and padova neuroscience center (pnc), university of padova, padova, italy
4 institute of cognitive sciences and technologies (istc) national research council (cnr), rome, italy

section: research paper
keywords: thermal comfort measurement; electroencephalography (eeg); wearable sensors; signal processing
citation: silvia angela mansi, ilaria pigliautile, camillo porcaro, anna laura pisello, marco arnesano, application of wearable eeg sensors for indoor thermal comfort measurements, acta imeko, vol. 10, no. 4, article 33, december 2021, identifier: imeko-acta-10 (2021)-04-33
section editors: carlo carobbi, university of florence, gian marco revel, università politecnica delle marche and nicola giaquinto, politecnico di bari, italy
received october 7, 2021; in final form december 6, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was supported by the italian ministry of research through the next:com (prot. 20172fsch4) "towards the next generation of multiphysics and multidomain environmental comfort models: theory elaboration and validation experiment" project, within the prin 2017 program.
corresponding author: marco arnesano, e-mail: marco.arnesano@uniecampus.it

abstract: multidomain comfort theories have been demonstrated to interpret human thermal comfort in buildings by employing human-centered physiological measurements coupled with environmental sensing techniques. thermal comfort has been correlated with brain activity through electroencephalographic (eeg) measurements. however, the application of low-cost wearable eeg sensors for measuring thermal comfort has not been thoroughly investigated. wearable eeg devices provide several advantages in terms of reduced intrusiveness and application in real-life contexts. however, they are prone to measurement uncertainties. this study presents results from the application of an eeg wearable device to investigate changes in the eeg frequency domain at different indoor temperatures. twenty-three participants were enrolled, and the eeg signals were recorded at three ambient temperatures: cold (16 °c), neutral (24 °c), and warm (31 °c). then, the analysis of brain power spectral densities (psds) was performed, to investigate features correlated with thermal sensations. statistically significant differences of several eeg features, measured on both frontal and temporal electrodes, were found between the three thermal conditions. results bring to the conclusion that wearable sensors could be used for eeg acquisition applied to thermal comfort measurement, but only after a dedicated signal processing to remove the uncertainty due to artifacts.

1. introduction

human-building interaction is an essential subject studied in the last decades, aimed at ensuring the occupants' wellbeing and accounting for the correlated influence on buildings' energy consumption. an important aspect of the built environment is indoor thermal comfort and its impact on occupants. from the 1970s, different thermal comfort models have been introduced. first, the two-node model [1], developed at the j. b. pierce foundation, was presented; it represents the human body as two layers, skin and core, each layer being a heat transfer node with thermal physiological parameters controlled by energy and mass conservation laws. then, fanger [2] proposed the predicted mean vote (pmv) model to predict the thermal comfort of a large sample of individuals. in the last 3 decades, the adaptive model has been widely used [3]. it is based on findings from the simultaneous collection of data on the thermal environment and the thermal response of subjects, to determine the indoor thermal states, and the influencing parameters, that satisfy occupants' sensations.
the resulting adaptive models provide experimental relationships between the thermal comfort temperature and the outdoor air temperature. however, none of those models encompasses the personal physiological and psychophysical influence on the individual thermal perception. the concept of the personal comfort model has been introduced as a novel approach to predict individual-specific thermal comfort based on the measurements of environmental quantities, occupants' behaviour, and physiological responses. the usage of different physiological signals has been discussed in the literature, in the field of thermal comfort measurements. four of them have been identified as highly correlated with the perceived thermal comfort: the electroencephalogram (eeg), the electrocardiogram (ecg), the skin temperature (st), and the galvanic skin response (gsr), being part of different processes involved in the thermoregulatory system activities [4]. several studies evaluated the thermal status of individuals by acquiring different physiological signals simultaneously, but analysed them separately [5]. only a few of those provide a regression equation that relates the physiological parameters to each other [6]. in addition, wearable sensors, for measuring real-time physiological signals, have been assessed as a promising technology in the field of personal thermal comfort estimation [7]. among the physiological signals measurable via wearable devices, the eeg has attracted interest, in the field of thermal comfort, for the possibility to monitor changes in the human physiological responses in real-time [8]. generally, the brain's electrical activity changes in response to the process of perception and cognition of environmental stimuli. the eeg represents the measure of the voltage fluctuations resulting from ionic currents within the neurons of the brain [9]. brainwaves are detected using sensors placed on the scalp according to the 10-20 system [10]. they are divided into frequency bands.
each of those corresponds to a particular state of mind. in general, delta (0.1 hz – 4 hz) is associated with deep sleep, theta waves (4 hz – 7.5 hz) are related to consciousness sleep towards drowsiness, alpha waves (7.5 hz – 12 hz) are the prominent rhythm in relaxing and passive attention activities, beta (12 hz – 30 hz) is associated with active thinking, and gamma waves (30 hz – 45 hz) are prominent during high mental activities. recent studies showed how the eeg power spectral densities (psds) are influenced by changes in the environmental temperature. lv and colleagues [11] correlated the eeg frequency bands to neutral and warm air temperatures. they showed higher delta-band activity in the warm condition. yao and colleagues [10] measured eeg signals from 20 subjects exposed to low, neutral, and high temperatures, showing that the relative eeg power of the beta band was significantly higher in the cold and warm environments compared with the neutral condition. lim et al. [12] found a connection between the eeg and thermal comfort: the alpha/beta ratio (rab) increased in a comfortable environment, while the opposite trend was shown by the relative beta (rb). other studies [12], [17] revealed how eeg frequencies change in conjunction with body temperature variations and how the ambient temperature influences the eeg psds. son et al. [13] investigated the correlation between psychological and physiological measures to evaluate thermal comfort. their results showed an increase of the theta band and a decrease of the beta band according to the thermal pleasure. wu et al. [14] classified thermal comfort under different conditions, showing an increase of the delta power and a decrease of the beta power in a warm environment. zhu [15] examined changes in the eeg responses during cognitive activity at different air temperatures. findings indicated a high value of relative delta and low values of relative theta, alpha, and beta at high temperatures. in the above-mentioned eeg studies, measurements were performed using a traditional cap or medical devices; these devices can provide a good quality of data, but they require a long application time and are often perceived as unpleasant by the users. moreover, they are not applicable in a real-life context. such issues could be solved by the advent of wearable sensors. they offer several advantages: they are low cost and simple to use, their comfortable design allows the application time to be reduced and, more importantly, they considerably attenuate the intrusiveness of the measurements, making the experimental sessions less unpleasant for the participants. however, they are strongly prone to collecting environmental noise and artifacts due to subject movements (e.g., eyeblinks, muscular artifacts), which means that the acquired data need to be processed before they become reliable for thermal comfort measurement [21]. several studies performed a metrological characterization of wearable devices for testing their accuracy. arpaia et al. [8] proposed a human stress detection method based on the eeg signal acquired by a highly wearable single-channel instrument. the results of their study demonstrated that the four standard machine learning (ml) classifiers used reached more than 90 % accuracy in distinguishing the stress conditions of participants. in another study, arpaia and colleagues [16] presented the calibration and the metrological characterization of a low-cost wearable device (olimex eeg-smt).
preliminary calibration results showed good linearity; however, a magnitude error of around 8 %, with a dependence on frequency, was detected. in general, studies revealed how commercial low-cost wearable devices used in conjunction with ml classifiers in an experimental context can reach an accuracy between 83.3 % and 99.1 % [17]. thus, findings revealed that eeg wearable sensors can provide the required accuracy for the classification of human mental states. this paper presents the application of eeg wearable sensors for thermal comfort measurement. the experimental protocol, the signal processing procedure, and the statistical analysis are illustrated, together with the results from the measurement campaign performed in a controlled environment. the results demonstrate the feasibility of the proposed approach, which could be used to build personalized comfort models based on eeg measurements.

2. materials and methods

2.1. eeg data collection and processing

2.1.1 eeg measurement device

in this study, the eeg signal acquisition was done using a commercial wearable device: the interaxon muse headband [18]. the reference electrode fpz (cms/drl) is located on the forehead; the input electrodes are two frontal (left and right of the reference: af7, af8, silver made) and two posterior, above each ear (tp9 and tp10, conductive silicone rubber) (figure 1). the device acquires signals at a 256 hz sampling frequency. raw eeg data were collected using the muse application [19], paired with a smartphone through bluetooth low energy (ble).

figure 1. a) muse 2 headband sensors overview. b) top-down view of the eeg electrode positions on the subject's head.

however, the reliability of muse can be questioned. the limited number of electrodes can preclude the multi-network evaluation from being focused on a specific area of the brain. the frontal electrodes are more prone to collect eye-blink and movement artifacts, which can disrupt the measurement of the actual brain waves [4]. dry electrodes may also be more prone to result in discomfort over time and pose a higher risk of misplacement on the forehead, resulting in a lower accuracy of the acquired signal. in addition, certain head shapes, head sizes, and hairstyles make data collection difficult, since poor contact with the head surface does not allow a proper data acquisition [20], [21]. nevertheless, many studies demonstrated its ability to be applied in research experimental contexts. krigolson et al. [18] demonstrated that muse can be used successfully for event-related potential (erp) study applications. they tested the reliability of erp data collected with muse using a resampling analysis, obtaining reliable erp components (especially the n200) with a minimal number of participants. youssef et al. [22], in their study on lie detection using muse, showed great success in their experimental purpose. ratti and colleagues [23] compared eeg medical devices with the consumer muse portable device. this study demonstrated that the muse psds were similar to those of medical systems, but with a higher variation (the power spectral ratio was between 0.975 and 1.025 for medical equipment, and between 1.125 and 1.225 for muse). this broadband increase in the power spectrum of muse data may reflect artifacts in the data recorded by dry electrodes. however, muse is simple to set up, and its application is quick (less than 10 min) and simple, which is significantly convenient for self-help applications.
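as an illustration, a recording collected in this way can be loaded for offline analysis as follows. this is a hypothetical sketch: the file name and the column names are assumptions (they may differ between versions of the export app) and do not come from the paper.

```python
import pandas as pd

# load a raw muse recording, assuming it was exported as a csv file by
# the companion app [19]; column names are assumptions for illustration.
fs = 256  # muse sampling frequency, hz

rec = pd.read_csv("muse_recording.csv")
channels = ["RAW_TP9", "RAW_AF7", "RAW_AF8", "RAW_TP10"]
eeg = rec[channels].to_numpy().T      # shape: (4 channels, n samples)
print(eeg.shape, "->", eeg.shape[1] / fs, "s of signal")
```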
2.1.2 eeg data processing

eeg recording is prone to collect noise and physiological artifacts, such as eye blinking and movements, and non-physiological artifacts such as electrical interference. therefore, it is essential to apply processing and denoising to the recorded eeg data. based on the literature background, a custom processing code was implemented to filter and isolate the signal of interest: a notch filter was used for removing the power line noise (50/60 hz), a high-pass filter at 0.1 hz to remove the dc offset and low-frequency skin potential artifacts, and a low-pass filter at 45 hz to remove high-frequency noise [24]. the independent component analysis (ica), which decomposes the signal into maximally independent components and artifact components, was applied. after a visual inspection, the eyeblink component was removed, and the data were reconstructed (figure 2).

figure 2. normalized eeg data before eyeblink component removal; corrected eeg data after eyeblink component removal and all the components estimated by ica are reported. component 3 was rejected.

epochs of 2000 ms with 1000 ms overlap were extracted from the artifact-free continuous data. then the power spectral density (psd) was computed using a pwelch function with a 256-sample hamming window, with 50 % overlap. the output was normalized using the z-score normalization method.

2.1.3 eeg features extraction

to establish the correlation between the eeg frequency bands and the subjective thermal sensation, retrieved through a questionnaire, the features extraction was performed. the output of the pre-processing step represents the five major brain waves in the different frequency ranges: delta, theta, alpha, beta, and gamma waves. once the five brain waves were computed, the main eeg features were calculated, based on their relevance in the context of thermal comfort assessment using eeg.

2.2. experimental campaign

2.2.1 participants

twenty-three healthy volunteers were enrolled for the experiment. they were informed about the experimental protocol and the data management. collected data were anonymized. two experimental sessions were conducted, during wintertime from january to february 2021, and during summertime in july 2021. the selected group included 9 males and 14 females. all volunteers were local students at the university of perugia, where the experiment took place. none of them had a pathological history. personal information was collected with a survey filled in by all subjects at the end of each test. table 1 summarizes the information about the participants involved in the campaigns: range of ages, mean and standard deviation (std) of height and weight. the average clothing thermal insulation was 1.1 clo and 0.43 clo, typical for wintertime and summertime respectively. the metabolic rate of participants was 1.1 met according to standard iso 7730 [25]. the second part of the survey aims at collecting data about thermal perception. in particular, the sensation vote for each thermal condition is given through a 5-point scale going from -2 to 2, where 0 corresponds to neutrality.

table 1. details of the subjects participating in the experiments.
total number of subjects: 23 (9 males and 14 females)
age (min-max): (27 - 32) years
height (mean ± std): (162 ± 12) cm
weight (mean ± std): (50.7 ± 15.3) kg

2.2.2 experimental setup: the controlled test room

the experiments were carried out in the next.room (4 × 4 × 2.7 m³), a novel test room built up at the engineering campus of perugia university (italy) for human-comfort studies.
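as a side note to the processing chain of section 2.1.2 above, the filtering, epoching and psd steps can be sketched with standard scipy tools as follows. this is a minimal illustrative sketch, not the authors' custom code: the filter orders are assumptions, and the ica-based eyeblink rejection is only indicated, since it requires visual inspection of the components.

```python
import numpy as np
from scipy import signal

fs = 256  # sampling frequency, hz

def preprocess(eeg, fs=256, line_freq=50.0):
    """notch at the power-line frequency, 0.1 hz high-pass, 45 hz low-pass."""
    b_notch, a_notch = signal.iirnotch(line_freq, Q=30.0, fs=fs)
    x = signal.filtfilt(b_notch, a_notch, eeg, axis=-1)
    sos_hp = signal.butter(4, 0.1, btype="highpass", fs=fs, output="sos")
    x = signal.sosfiltfilt(sos_hp, x, axis=-1)
    sos_lp = signal.butter(4, 45.0, btype="lowpass", fs=fs, output="sos")
    return signal.sosfiltfilt(sos_lp, x, axis=-1)

# (the ica-based eyeblink removal, e.g. with sklearn.decomposition.FastICA,
#  would take place here: decompose, inspect, drop the blink component,
#  and reconstruct the data, as described in section 2.1.2.)

def epoch(x, fs=256, length_s=2.0, overlap_s=1.0):
    """2000 ms epochs with 1000 ms overlap, from a (channels, samples) array."""
    step = int((length_s - overlap_s) * fs)
    n = int(length_s * fs)
    return np.stack([x[:, i:i + n] for i in range(0, x.shape[1] - n + 1, step)])

def psd_zscore(epochs, fs=256):
    """welch psd (256-sample hamming window, 50 % overlap), z-scored over epochs."""
    f, pxx = signal.welch(epochs, fs=fs, window="hamming",
                          nperseg=256, noverlap=128, axis=-1)
    z = (pxx - pxx.mean(axis=0)) / pxx.std(axis=0)
    return f, z
```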
the next.room ambient temperature was controlled through the installed heating, ventilation & air conditioning (hvac) system, based on a heat pump with an inverter and providing four levels of fan speed. the room has a window that was shaded during all the tests of the experimental campaign described here, while the internal level of illuminance was kept constant thanks to the installed artificial lighting system. the environmental parameters inside the next.room were continuously monitored by means of a fixed microclimate station located in the centre of the room and the associated data-logging system [26]. table 2 provides information about the sensors installed in the test room, their accuracies, and the environmental boundaries monitored during the cold (16 °c), neutral (24 °c), and warm (31 °c) tests, accounting for both the winter and summertime series.

table 2. technical information of the sensors for environmental parameters monitoring.
sensor | environmental parameter | accuracy
thermal-hygrometer | air temperature, °c; relative humidity, % | ± 0.1 °c; ± 1.5 %
black globe radiant temperature sensor | mean radiant temperature, °c | ± 0.15 °c
hotwire anemometer | air velocity, m/s | ± 0.05 m/s
co2 sensor | co2 concentration, ppm | ± 50 ppm (+ 2 %)
luxmeter | illuminance, lx | ± 5 %

2.2.3 experimental setup: environmental conditions

during the experiments (both winter and summer seasons) the air velocity was always below 0.1 m/s and the relative humidity of the air was not independently controlled, while the air temperature was set at 16 °c (cold), 24 °c (neutral), and 31 °c (warm), according to the thermal sensation perceived by the participants collected with the survey (table 3).

table 3. mean values and std of the environmental parameters monitored in the climate chamber during the experimental sessions.
measured parameters | cold | neutral | warm
air temperature, °c | 16.7 ± 0.3 | 24.3 ± 0.5 | 31.4 ± 0.5
relative humidity, % | 25.9 ± 0.5 | 20.1 ± 0.2 | 18.2 ± 0.4
air velocity, m/s | 0.1 ± 0.02 | 0.07 ± 0.04 | 0.09 ± 0.03
mean radiant temperature, °c | 16.4 ± 0.6 | 23.9 ± 0.5 | 30.2 ± 0.4
co2 concentration, ppm | 487 ± 12 | 492 ± 4 | 502 ± 6
illuminance, lx | 287 ± 22 | 290 ± 25 | 281 ± 23

a specific schedule for the experiment was adopted. the measurements were done from 10:00 a.m. to 1:00 p.m. and from 3:00 p.m. to 6:30 p.m. subjects were asked to sit down and keep relaxed; no activity was allowed. the experimental procedure was carried out sequentially for 2 days for each thermal condition, in both seasons. each experiment lasted 20 min: 15 min for thermal adaptation (a previous study affirmed that people need 15 min to adapt to a new environment [27]) and 5 min for recording data (figure 3).

figure 3. measurements of eeg signals with the muse headband interaxon in the climate chamber. a facial mask was not used during the experiments.

3. results

the capability of discriminating the cold, warm and neutral sensations starting from the portable eeg measurements was evaluated in terms of the statistical significance between brain wave features. the eeg features were divided into three groups (cold, neutral, and warm) according to the thermal sensation scores expressed by the participants in the questionnaire. the mean (𝑓)̅ and the standard uncertainty of the mean (𝑢𝑓) of each feature were then calculated according to [28]. the normality of each group of features was evaluated with the shapiro test [29]. given that all groups presented a non-gaussian distribution, the statistical significance was determined with the non-parametric kruskal-wallis test [30]. table 4 reports the features that turned out to provide a significant statistical difference (p-value < 0.05).

table 4. mean feature (𝑓)̅, standard uncertainty of the mean (𝑢𝑓), h-statistic, and the significant p-value (p-value < 0.05) for each of the statistically significant eeg features.
eeg features | 𝑓 ̅ ± 𝑢𝑓 (cold) | 𝑓 ̅ ± 𝑢𝑓 (neutral) | 𝑓 ̅ ± 𝑢𝑓 (warm) | h-statistic | p-value
alpha beta ratio tp10 | 1.2 ± 0.08 | 1.3 ± 0.07 | 1.3 ± 0.07 | 3.3 | 0.04
alpha beta product tp9 in db | 1.3 ± 0.12 | 1.5 ± 0.14 | 1.6 ± 0.10 | 8.1 | 0.02
alpha beta product af8 in db | 0.1 ± 0.05 | 0.1 ± 0.03 | 0.1 ± 0.02 | 8.2 | 0.02
beta af8 in db | 0.4 ± 0.10 | 0.3 ± 0.07 | 0.1 ± 0.06 | 13.9 | 0.001
beta tp9 in db | 0.5 ± 0.05 | 0.0 ± 0.09 | 0.4 ± 0.5 | 5.3 | 0.006
gamma af7 in db | 0.1 ± 0.09 | 0.1 ± 0.05 | -0.1 ± 0.09 | 6.7 | 0.03
gamma af8 in db | 0.1 ± 0.1 | 0.0 ± 0.06 | -0.1 ± 0.07 | 12.4 | 0.002
gamma tp10 in db | 0.2 ± 0.06 | 0.2 ± 0.02 | 0.1 ± 0.04 | 7.9 | 0.02
gamma tp9 in db | 0.2 ± 0.07 | 0.1 ± 0.05 | 0.1 ± 0.05 | 8.9 | 0.01
relative alpha tp9 | 0.3 ± 0.17 | 0.3 ± 0.02 | 0.3 ± 0.07 | 10.3 | 0.006
relative beta af8 | 0.3 ± 0.06 | 0.3 ± 0.05 | 0.1 ± 0.07 | 16.9 | < 0.001
relative gamma tp10 | 0.1 ± 0.02 | 0.1 ± 0.01 | 0.1 ± 0.02 | 6.1 | 0.04
relative gamma tp9 | 0.1 ± 0.03 | 0.0 ± 0.02 | 0.0 ± 0.03 | 4.8 | 0.09
relative theta af8 | 0.1 ± 0.06 | 0.1 ± 0.05 | 0.2 ± 0.07 | 9.2 | 0.01
temporal asymmetry alpha in db | 0.1 ± 0.03 | 0.0 ± 0.02 | 0.0 ± 0.02 | 4.5 | 0.01
temporal asymmetry delta in db | 0.1 ± 0.03 | 0.0 ± 0.03 | 0.0 ± 0.02 | 6.7 | 0.03
temporal asymmetry theta in db | 0.1 ± 0.03 | 0.0 ± 0.4 | 0.0 ± 0.03 | 8.7 | 0.01
theta beta ratio af8 | -0.3 ± 0.71 | 0.1 ± 0.24 | 2.3 ± 2.31 | 7.2 | 0.02

the results showed how brain activities were altered by the thermal sensation perceived by the occupants at the different environmental temperatures they were exposed to. in particular, the outcomes revealed that the eeg features connected to the high-frequency bands, such as beta and gamma for both frontal and temporal electrodes, tended to decrease with a warm sensation. instead, features that express the mean power of the low-frequency bands registered an opposite trend (alpha-beta ratio af8, relative alpha tp9, theta beta ratio af8). the capability of a feature to discriminate between two different thermal conditions was further investigated with a post-hoc analysis based on the dwass-steel-critchlow-fligner pairwise comparison test [31]. the results from the test are reported in table 5.

table 5. dwass-steel-critchlow-fligner pairwise comparison between cold (c), neutral (n), and warm (w) thermal sensation results (* for p-value < 0.05).
eeg features | c-n (p-value) | c-w (p-value) | n-w (p-value)
alpha beta ratio tp10 | 0.3 | 0.1 | 0.7
alpha beta product tp9 in db | 0.2 | 0.01* | 0.6
alpha beta product af8 in db | 0.5 | 0.02* | 0.1
beta af8 in db | 0.5 | 0.001* | 0.03*
beta tp9 in db | 0.1 | 0.02* | 0.9
gamma af7 in db | 0.2 | 0.04* | 0.6
gamma af8 in db | 0.5 | 0.003* | 0.04*
gamma tp10 in db | 0.5 | 0.02* | 0.3
gamma tp9 in db | 0.1 | 0.2 | 0.8
relative alpha tp9 | 0.1 | 0.004* | 0.5
relative beta af8 | 0.9 | 0.004* | 0.001*
relative gamma tp10 | 0.4 | 0.04* | 0.6
relative gamma tp9 | 0.2 | 0.02* | 0.3
relative theta af8 | 0.6 | 0.2 | 0.007*
temporal asymmetry alpha in db | 0.5 | 0.02* | 0.2
temporal asymmetry delta in db | 0.3 | 0.02* | 0.6
temporal asymmetry theta in db | 0.6 | 0.01* | 0.2
theta beta ratio af8 | 0.2 | 0.03* | 0.4
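before moving to the discussion of these results, the statistical chain just described (shapiro normality check, kruskal-wallis omnibus test, pairwise post-hoc comparison) can be sketched as follows. note that the dwass-steel-critchlow-fligner test is not available in scipy, so the pairwise step below uses the mann-whitney u test as a simpler stand-in (the original test is provided, e.g., by the scikit-posthocs package); the data are random placeholders, not the study's measurements.

```python
from itertools import combinations
import numpy as np
from scipy import stats

# placeholder data standing in for one eeg feature per sensation group
rng = np.random.default_rng(0)
feature = {"cold": rng.normal(0.4, 0.1, 30),
           "neutral": rng.normal(0.3, 0.1, 30),
           "warm": rng.normal(0.1, 0.1, 30)}

# 1) shapiro-wilk normality check per group
for name, values in feature.items():
    w, p = stats.shapiro(values)
    print(f"{name}: shapiro p = {p:.3f}")

# 2) non-parametric omnibus test (kruskal-wallis h)
h, p = stats.kruskal(*feature.values())
print(f"kruskal-wallis: h = {h:.2f}, p = {p:.4f}")

# 3) pairwise post-hoc; mann-whitney u as a simple stand-in for the
#    dwass-steel-critchlow-fligner test used in the paper
for a, b in combinations(feature, 2):
    u, p = stats.mannwhitneyu(feature[a], feature[b])
    print(f"{a}-{b}: p = {p:.4f}")
```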
the pairwise comparison results demonstrated that all the features (except for the alpha-beta ratio tp9, beta tp10, and gamma tp9) were different between the cold and warm thermal sensations. beta af8, gamma af8, and relative beta af8 also showed differences between the neutral and warm conditions, as shown in figure 4. none of the measured features showed differences between the cold and neutral thermal sensations.

figure 4. the main representative kruskal-wallis results. mean (𝑓)̅ and standard uncertainty of the mean (𝑢𝑓) in each condition are shown.

in general, the eeg measurements performed with the portable device showed a correlation with the thermal sensations in terms of an increase or decrease of the power of the brain waves. the warm sensation could be correlated to an increase of the alpha and theta waves, indicating that subjects exposed to a warm environment tend to be less concentrated and unable to keep focused. on the other hand, in cold conditions there is an increase in the activity of the beta and gamma waves, which are the main brain waves connected to high mental activity. at the same time, high levels of beta and gamma can be synonyms of high stress, indicating that the cold condition is perceived as more stressful than the warm one. the results obtained with the muse turned out to be aligned with the state of the art concerning thermal comfort measurements based on eeg data.

4. conclusions

as is already known, the prominence of an eeg frequency band is correlated to a certain type of mental state. for example, a high power of the gamma waves corresponds to high mental activity; vice versa, a dominant alpha corresponds to relaxed conditions. some studies revealed that the eeg theta waves increased while the beta waves decreased with a comfortable thermal state; others showed that high values of the theta band correspond to a high state of arousal, and vice versa. all those studies were performed with medical and non-portable devices, providing accurate eeg measurements but with poor wearability, and are not applicable in real-life situations. for this reason, the proposed study aims at demonstrating whether a low-cost portable eeg device (muse) could be used to perform thermal comfort measurements, providing better wearability but with a lower measurement accuracy. the experiments were performed in a controlled environment, where the eeg was measured on 23 subjects exposed to warm, neutral, and cold conditions.
considering the different thermal sensations perceived by subjects exposed to the same environmental condition, before performing the statistical analysis the physiological features were divided and classified, according to the thermal sensation questionnaire-based results, into cold,
the presence of dry electrodes facilitates the recording procedure but, the absence of a conductivity gel increases the electrical impedance between the dry electrodes and the skin, making the devices more prone to register artifacts, such as muscular movements and eyeblink, with the risk of reducing the signal-to-noise ratio. therefore, signal processing for data cleaning and artifacts removal, based on the approach proposed in this paper, is required. in addition, the usage of water solution, to improve the conductivity of muse electrodes during its application, could introduce a further source of uncertainty in the case of longlasting experiments. the water evaporation can lead to drifts of the measured voltage, resulting in a misleading features interpretation. for longer experiments, future investigations are required to estimate the measurement uncertainty in the case of the usage of portable eeg sensors for real-life applications. nevertheless, for future developments, the idea to study the eeg signal in conjunction with other physiological signals, such as ecg, st, and gsr has been considered. this approach could allow the generation of comprehensive knowledge about the physiological response of the mechanisms involved in the human thermoregulatory system. new experimental campaigns will be conducted, increasing the number of samples to give consistency to the statistical results. an investigation about how the gender and age of subjects can affect thermal perception will be considered, as an additional aspect of the study. the final scope is to identify the most relevant physiological features to be used as input for the creation of predictive models, based on ml techniques, capable of thermal comfort level classification. acknowledgment the research has been co-founded by the italian ministry of research through the next:com (prot.20172fsch4) ‘’towards the next generation of multiphysics and multidomain environmental comfort models: theory elaboration and validation experiment’’ project, within the prin 2017 program. references [1] q. zhao, z. lian, d. lai, thermal comfort models and their developments: a review, energy built environ., vol. 2, no. 1, pp. 21–33, jan. 2021. doi: 10.1016/j.enbenv.2020.05.007 [2] p. o. fanger, assessment of man’s thermal comfort in practice, br. j. ind. med., vol. 30, no. 4, 1973, pp. 313–324. doi: 10.1136/oem.30.4.313 [3] j. f. nicol, m. a. humphreys, adaptive thermal comfort and sustainable thermal standards for buildings, energy build., vol. 34, no. 6, pp. 563–572, 2002. doi: 10.1016/s0378-7788(02)00006-3 [4] silvia angela mansi, giovanni barone, cesare forzano, ilaria pigliautile, maria ferrara, anna laura pisello, marco arnesano, measuring human physiological indices for thermal comfort assessment through wearable devices: a review, meas. j. int. meas. confed., vol. 183, p. 109872, 2021. doi: 10.1016/j.measurement.2021.109872 [5] y. yao, z. lian, w. liu, q. shen, experimental study on physiological responses and thermal comfort under various ambient temperatures, physiol. behav., 2008. doi: 10.1016/j.physbeh.2007.09.012 [6] j. gwak, m. shino, k. ueda, m. kamata, an investigation of the effects of changes in the indoor ambient temperature on arousal level, thermal comfort, and physiological indices, appl. sci., 2019. doi: 10.3390/app9050899 [7] s. liu, s. schiavon, h. prasanna das, m. jin, c. j. spanos, personal thermal comfort models with wearable sensors, build. environ., vol 162, 2019. doi: 10.1016/j.buildenv.2019.106281 [8] p. arpaia, n. 
the presence of dry electrodes facilitates the recording procedure, but the absence of a conductivity gel increases the electrical impedance between the dry electrodes and the skin, making the device more prone to record artifacts, such as muscular movements and eyeblinks, with the risk of reducing the signal-to-noise ratio. therefore, signal processing for data cleaning and artifact removal, based on the approach proposed in this paper, is required. in addition, the usage of a water solution to improve the conductivity of the muse electrodes during its application could introduce a further source of uncertainty in the case of long-lasting experiments. the water evaporation can lead to drifts of the measured voltage, resulting in a misleading interpretation of the features. for longer experiments, future investigations are required to estimate the measurement uncertainty in the case of the usage of portable eeg sensors for real-life applications. nevertheless, for future developments, the idea of studying the eeg signal in conjunction with other physiological signals, such as ecg, st, and gsr, has been considered. this approach could allow the generation of comprehensive knowledge about the physiological response of the mechanisms involved in the human thermoregulatory system. new experimental campaigns will be conducted, increasing the number of samples to give consistency to the statistical results. an investigation of how the gender and age of subjects can affect thermal perception will be considered as an additional aspect of the study. the final scope is to identify the most relevant physiological features to be used as input for the creation of predictive models, based on ml techniques, capable of thermal comfort level classification.

acknowledgment
the research has been co-funded by the italian ministry of research through the next:com (prot. 20172fsch4) "towards the next generation of multiphysics and multidomain environmental comfort models: theory elaboration and validation experiment" project, within the prin 2017 program.

references
[1] q. zhao, z. lian, d. lai, thermal comfort models and their developments: a review, energy built environ., vol. 2, no. 1, jan. 2021, pp. 21-33. doi: 10.1016/j.enbenv.2020.05.007
[2] p. o. fanger, assessment of man's thermal comfort in practice, br. j. ind. med., vol. 30, no. 4, 1973, pp. 313-324. doi: 10.1136/oem.30.4.313
[3] j. f. nicol, m. a. humphreys, adaptive thermal comfort and sustainable thermal standards for buildings, energy build., vol. 34, no. 6, 2002, pp. 563-572. doi: 10.1016/s0378-7788(02)00006-3
[4] s. a. mansi, g. barone, c. forzano, i. pigliautile, m. ferrara, a. l. pisello, m. arnesano, measuring human physiological indices for thermal comfort assessment through wearable devices: a review, meas. j. int. meas. confed., vol. 183, 2021, p. 109872. doi: 10.1016/j.measurement.2021.109872
[5] y. yao, z. lian, w. liu, q. shen, experimental study on physiological responses and thermal comfort under various ambient temperatures, physiol. behav., 2008. doi: 10.1016/j.physbeh.2007.09.012
[6] j. gwak, m. shino, k. ueda, m. kamata, an investigation of the effects of changes in the indoor ambient temperature on arousal level, thermal comfort, and physiological indices, appl. sci., 2019. doi: 10.3390/app9050899
[7] s. liu, s. schiavon, h. prasanna das, m. jin, c. j. spanos, personal thermal comfort models with wearable sensors, build. environ., vol. 162, 2019. doi: 10.1016/j.buildenv.2019.106281
[8] p. arpaia, n. moccaldi, r. prevete, i. sannino, a. tedesco, a wearable eeg instrument for real-time frontal asymmetry monitoring in worker stress analysis, ieee trans. instrum. meas., vol. 69, no. 10, 2020, pp. 8335-8343. doi: 10.1109/tim.2020.2988744
[9] j. satheesh kumar, p. bhuvaneswari, analysis of electroencephalography (eeg) signals and its categorization - a study, procedia engineering, vol. 38, 2012. doi: 10.1016/j.proeng.2012.06.298
[10] u. herwig, p. satrapi, c. schönfeldt-lecuona, using the international 10-20 eeg system for positioning of transcranial magnetic stimulation, brain topogr., vol. 16, no. 2, dec. 2003, pp. 95-99. doi: 10.1023/b:brat.0000006333.93597.9d
[11] b. lv, c. su, l. yang, t. wu, effects of stimulus mode and ambient temperature on cerebral responses to local thermal stimulation: an eeg study, int. j. psychophysiol., vol. 113, mar. 2017, pp. 17-22. doi: 10.1016/j.ijpsycho.2017.01.003
[12] j. r. lim, g. h. baek, e. jeon, analysis of the correlation between thermal sensations and brain waves via eeg measurements, 2018.
[13] y. j. son, c. chun, research on electroencephalogram (eeg) to measure thermal pleasure in thermal alliesthesia in temperature step-change environment, indoor air, vol. 28, 2018. doi: 10.1111/ina.12491
[14] m. wu, h. li, h. qi, using electroencephalogram to continuously discriminate feelings of personal thermal comfort between uncomfortably hot and comfortable environments, indoor air, vol. 30, no. 3, may 2020, pp. 534-543. doi: 10.1111/ina.12644
[15] m. zhu, w. liu, p. wargocki, changes in eeg signals during the cognitive activity at varying air temperature and relative humidity, j. expo. sci. environ. epidemiol., vol. 30, 2020, pp. 285-298. doi: 10.1038/s41370-019-0154-1
[16] p. arpaia et al., metrological characterization of a low-cost electroencephalograph for wearable neural interfaces in industry 4.0 applications, 2021 ieee international workshop on metrology for industry 4.0 & iot (metroind4.0&iot), rome, italy, 7-9 june 2021. doi: 10.1109/metroind4.0iot51437.2021.9488445
[17] j. larocco, m. d. le, d. g. paeng, a systemic review of available low-cost eeg headsets used for drowsiness detection, front. neuroinform., vol. 14, oct. 2020, p. 42. doi: 10.3389/fninf.2020.553352
[18] o. e. krigolson, c. c. williams, a. norton, c. d. hassall, f. l. colino, choosing muse: validation of a low-cost, portable eeg system for erp research, front. neurosci., 2017. doi: 10.3389/fnins.2017.00109
[19] mind monitor. online [accessed 16 december 2021] https://mind-monitor.com/
[20] e. s. kappenman, s. j. luck, the effects of electrode impedance on data quality and statistical significance in erp recordings, psychophysiology, vol. 47, no. 5, 2010, pp. 888-904. doi: 10.1111/j.1469-8986.2010.01009.x
[21] x. wang, d. li, c. c. menassa, v. r. kamat, investigating the effect of indoor thermal environment on occupants' mental workload and task performance using electroencephalogram, build. environ., vol. 158, 2019, pp. 120-132. doi: 10.1016/j.buildenv.2019.05.012
[22] a. e. youssef, h. t. ouda, m. azab, muse: a portable cost-efficient lie detector, 2018 ieee 9th annual information technology, electronics and mobile communication conference, iemcon 2018, 2019. doi: 10.1109/iemcon.2018.8614795
[23] e. ratti, s. waninger, c. berka, g. ruffini, a. verma, comparison of medical and consumer wireless eeg systems for use in clinical trials, front. hum. neurosci., 2017. doi: 10.3389/fnhum.2017.00398
[24] w. peng, eeg preprocessing and denoising, eeg signal process. featur. extr., jan. 2019, pp. 71-87. doi: 10.1007/978-981-13-9113-2_5
[25] iso 7730:2005 - ergonomics of the thermal environment - analytical determination and interpretation of thermal comfort using calculation of the pmv and ppd indices and local thermal comfort criteria.
[26] i. pigliautile, s. casaccia, n. morresi, m. arnesano, a. l. pisello, g. m. revel, assessing occupants' personal attributes in relation to human perception of environmental comfort: measurement procedure and data analysis, build. environ., 2020. doi: 10.1016/j.buildenv.2020.106901
[27] a. ghahramani, g. castro, b. becerik-gerber, x. yu, infrared thermography of human face for monitoring thermoregulation performance and estimating personal thermal comfort, build. environ., vol. 109, 2016, pp. 1-11. doi: 10.1016/j.buildenv.2016.09.005
[28] iso, evaluation of measurement data - guide to the expression of uncertainty in measurement, int. organ. stand., geneva, 2008, p. 134. online [accessed 16 december 2021] https://www.bipm.org/en/publications/guides
[29] m. brzezinski, the chen-shapiro test for normality, stata j., vol. 12, no. 3, sep. 2012, pp. 368-374. doi: 10.1177/1536867x1201200302
[30] e. ostertagová, o. ostertag, j. kováč, methodology and application of the kruskal-wallis test, appl. mech. mater., vol. 611, 2014, pp. 115-120. doi: 10.4028/www.scientific.net/amm.611.115
[31] d. e. critchlow, m. a. fligner, on distribution-free multiple comparisons in the one-way analysis of variance, vol. 20, no. 1, jan. 2007, pp. 127-139. doi: 10.1080/03610929108830487
probability theory as a logic for modelling the measurement process

acta imeko, issn: 2221-870x, june 2023, volume 12, number 2, 1–5

giovanni battista rossi1, francesco crenna1, marta berardengo1
1 measurement and biomechanics lab – dime – università degli studi di genova, via opera pia 15 a, 16145 genova, italy

section: research paper
keywords: measurement theory; epistemology; philosophy of probability; measurement modelling; probabilistic models
citation: giovanni battista rossi, francesco crenna, marta berardengo, probability theory as a logic for modelling the measurement process, acta imeko, vol. 12, no. 2, article 13, june 2023, identifier: imeko-acta-12 (2023)-02-13
section editor: eric benoit, université savoie mont blanc, france
received july 2, 2022; in final form march 16, 2023; published june 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: giovanni battista rossi, e-mail: g.b.rossi@unige.it

abstract: the problem of the nature of probability has been drawn to the attention of the measurement community in the comparison between the frequentist and the bayesian views, in the expression and the evaluation of measurement uncertainty. in this regard, it is here suggested that probability can be interpreted as a logic for developing models of measurement capable of accounting for uncertainty. this contributes to regarding measurement theory as an autonomous discipline, rather than a mere application field for statistics. following a previous work in this line of research, where only measurement representations, through the various kinds of scales, were considered, here the modelling of the measurement process is discussed and the validity of the approach is confirmed, which suggests that the vision of probability as a logic could be adopted for the entire measurement theory. with this approach, a deterministic model can be turned into a probabilistic one by simply shifting from a deterministic to a probabilistic semantic.

1. introduction

the problem of the nature of probability has been a topic in measurement science for over twenty years, as a part of the debate on the expression and evaluation of measurement uncertainty, significantly raised by the publication of the guide to the expression of uncertainty in measurement (gum) [1] and by its long-lasting and still ongoing revision process [2]. in the debate, the opposition between the bayesian and the frequentist schools of thought in statistics soon emerged, which involves the consideration of the nature of probability. in this regard, some authors pursue an explicit adoption of a bayesian paradigm for the overall context of uncertainty evaluation [3], [4]; others instead suggest maintaining a more open attitude [5], [6], even when expressing a preference for the bayesian view [7], or including the frequentist approach [8], when appropriate. here the focus is put on measurement modelling, and probability is regarded as a logical and mathematical tool for developing such models, in such a way as to account for uncertainty. alternative choices could be made, for example those based on the evidence theory [9], [10], but here only probability is discussed and investigated. measurement modelling has recently been the subject of investigation, not only in respect to practical issues [11], but also to theoretical and foundational aspects [12], [13].
yet the "nature" of probability in such modelling seems not to have been discussed explicitly, which is instead the goal of this communication. basically, it is here suggested that probability can be regarded as an appropriate logic for developing models of measurement when uncertainty must be accounted for. therefore, in section 2 deterministic measurement modelling will be addressed first. then, in section 3, the logical approach to probability here proposed will be presented. its application to probabilistic measurement modelling will be addressed in section 4, and conclusions will be drawn in section 5.

2. deterministic measurement modelling

2.1. generic modelling issues

it is here suggested that probability can be understood as a logic for developing measurement models. the notion of model thus needs reviewing. to establish some terminology, let us consider a system as a set of entities with relations among them [14]. a model can be thus understood as an abstract system, capable of describing a class of real systems. for example, if we consider the height of the inhabitants of a generic town, the model, 𝑀, can be expressed by a function, ℎ: 𝑈 → 𝑋, that associates with each inhabitant his/her height, on a proper height scale. for maximum simplicity, in the following illustrative examples height will be considered as a purely ordinal property, and 𝑋 the set of the numbers expressing height on an ordinal scale. therefore, the model can be synthetically expressed by the triple:
𝑀 = (𝑈, 𝑋, ℎ). (1)
let us now introduce the distinction between deterministic and probabilistic models. a typical statement related to model 𝑀 is:
ℎ(𝑢) = 𝑥, (2)
with 𝑢 ∈ 𝑈 and 𝑥 ∈ 𝑋. yet the truth of this statement is undefined till a specific town, 𝑇, is considered. with reference to 𝑇, instead, if 𝐴 denotes the set of its inhabitants, 𝑋𝐴 the set of their height values and ℎ𝐴 the corresponding height function, the model is now specialised to 𝑇, that is:
𝑀(𝑇) = (𝐴, 𝑋𝐴, ℎ𝐴). (3)
suppose for example that in 𝑇 there are just 3 inhabitants, 𝐴 = {𝑎, 𝑏, 𝑐}, that 𝑋 = {1,2}, and ℎ𝐴 = {(𝑎, 2), (𝑏, 1), (𝑐, 1)}; then
(𝐴, 𝑋𝐴, ℎ𝐴) = ({𝑎, 𝑏, 𝑐}, {1,2}, {(𝑎, 2), (𝑏, 1), (𝑐, 1)}). (4)
the structure in equation (4) provides a semantic, that is, a criterion of truth for the deterministic model 𝑀, since it allows us to ascertain the truth of any statement involved in the model.
for example, ℎ(𝑎) = 2 is true, whilst ℎ(𝑏) = 2 is false. the general truth criterion is thus, for town 𝑇, 𝑢 ∈ 𝐴, 𝑥 ∈ 𝑋𝐴:
ℎ(𝑢) = 𝑥 ↔ (𝑢, 𝑥) ∈ ℎ𝐴. (5)
let us call 𝑇 an instance of the model 𝑀: then a model is deterministic if, for any instance of the model, all the statements concerning the model are either true or false. conversely, we will call probabilistic a model where, for at least one of its instances, there is at least one statement concerning the model whose state of truth cannot be ascertained, but only a probability can be assigned to it. the transition from a deterministic to a probabilistic description will be discussed in section 3.

2.2. modelling the measurand

measurement modelling concerns both the measurand and the measurement process. the modelling of the measurand aims at ensuring that the property of interest can be measured, and it is thus closely related to the measurability issue [15]. at a foundational level, this implies assuming that the quantity1 under consideration can be measured on an appropriate measurement scale, i.e., that it possesses the required empirical properties. for example, (empirical) order is required for an ordinal scale, whilst order and difference are needed in the case of an interval scale. at a more operational level, modelling the measurand may account for the interactions it has with the environment and with the measuring system, to ensure that they do not hinder measurement, to compensate for them, if possible, and to account for them in the uncertainty budget.

1 "quantity" here stands for "measurable property".

here only the first aspect, that is, the possession of proper empirical properties, is briefly discussed. this is typically the scope of the so-called representational theory, and it can be summarised by one or more representation theorems. for example, taking again the case of the height of persons, still considering it as an ordinal property, an operation of empirical comparison needs to be considered, which allows us to determine, for any pair of persons, 𝑢 and 𝑣, whether 𝑢 is taller than 𝑣, 𝑢 ≻ℎ 𝑣, or 𝑣 is taller than 𝑢, 𝑣 ≻ℎ 𝑢, or they are equally tall, 𝑢 ∼ℎ 𝑣. one such operation, provided that it is transitive, ensures that the function "height of persons", introduced in the previous section, exists. the corresponding model is now:
𝑀′ = (𝑈, 𝑋, ≽, 𝑚), (6)
where ℎ, height, has been replaced by the more general symbol 𝑚, "measure", and the subscript ℎ has been dropped accordingly. then, the corresponding representation theorem reads:
𝑢 ≽ 𝑣 ↔ 𝑚(𝑢) ≥ 𝑚(𝑣). (7)
yet, although the existence of the function ℎ is mathematically ensured by the properties of the empirical relation ≽ℎ, its actual experimental determination requires a measurement process, which is to be modelled now.
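to make the representational step concrete, the following minimal sketch (in python; the subjects and the empirical order are illustrative assumptions, not data from the paper) builds an ordinal measure from a transitive empirical comparison and checks the representation theorem (7):

```python
# minimal sketch of the representation theorem (7); the data are invented.
subjects = ["a", "b", "c"]
strictly_taller = {("a", "b"), ("a", "c")}      # a > b, a > c; b ~ c

def weakly_taller(u, v):
    # u >= v holds when u > v, or when v does not strictly dominate u
    return (u, v) in strictly_taller or (v, u) not in strictly_taller

def m(u):
    # any order-preserving assignment works; here: 1 + number of
    # subjects that u strictly dominates
    return 1 + sum((u, v) in strictly_taller for v in subjects)

# equation (7): u >= v if and only if m(u) >= m(v)
assert all(weakly_taller(u, v) == (m(u) >= m(v))
           for u in subjects for v in subjects)
```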
2.3. modelling the measurement process

for modelling the measurement process, the approach proposed in reference [16] is here followed and only very briefly recalled. it is suggested that measurement can be parsed in two phases, called observation and restitution. in the observation phase the "object"2 carrying the property to be measured interacts with the measurement system, in such a way that an observable output, called the instrument indication, is produced, based on which a measurement value can be assigned to the measurand. the successive phase, where the result is produced, based on the instrument indication and accounting for calibration results (calibration curve), is here called restitution. this approach is essentially in agreement with others recently proposed in the literature [17]-[20].

2 note that here the term "object" has to be understood as "the carrier of the property to be measured", irrespectively of it being a concrete object, like a workpiece, or an event, such as a sound or a shock, or even a person, in the case of psychometrics [13].

for example, in the case of persons' height, the measuring device may consist of a platform, on which the subject to be measured must stand erect, and of an ultrasonic sensor, placed at a fixed height over the head of the subject. the instrument generates a signal whose intensity, 𝑦, is proportional to the distance of the sensor from the top of the head of the subject, which constitutes the instrument indication. let us call 𝜑 the function that describes this phase: thus, if 𝑎 is the object to be measured,
𝑦 = 𝜑(𝑎). (8)
calibration requires the pre-constitution of a reference (measurement) scale, 𝑅 = {(𝑠1, 𝑥1), (𝑠2, 𝑥2), …, (𝑠𝑛, 𝑥𝑛)}, which includes a set of standards, 𝑆 = {𝑠1, 𝑠2, …, 𝑠𝑛}, and their corresponding measurement values, 𝑋 = {𝑥1, 𝑥2, …, 𝑥𝑛}. calibration can be done by inputting such standards to the measuring system and recording their corresponding indications, thus forming the function 𝜑𝑠 = {(𝑠1, 𝑦1), (𝑠2, 𝑦2), …, (𝑠𝑛, 𝑦𝑛)}, which is a subset of 𝜑, defined above. based on this information, it is possible to obtain a calibration function, 𝑓 = {(𝑥1, 𝑦1), (𝑥2, 𝑦2), …, (𝑥𝑛, 𝑦𝑛)}, that establishes a correspondence between the value of each standard and the corresponding output of the measuring device. calibration allows us to perform measurement, since, once the instrument indication has been obtained, it is possible to assign to the measurand, in the restitution phase, the value of the standard that would have produced the same output, that is
$\hat{x} = f^{-1}(y) \triangleq g(y)$. (9)
lastly, we obtain a description of the overall measurement process by combining observation and restitution:
$\hat{x} = \gamma(a) = g(\varphi(a)) = f^{-1}(\varphi(a))$. (10)
this equation constitutes a basic deterministic model of the measurement process. in [20], a more detailed model was presented, where the generation of the instrument indication was more deeply investigated. yet the structure of that model is compatible with the one just recalled, which will be used in the following, for the sake of simplicity.
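as a minimal illustration of equations (8)-(10), the sketch below (python; the calibration table and its numbers are invented for the example) stores the calibration function 𝑓 as a table and implements restitution as its inversion:

```python
# minimal sketch of equations (8)-(10); the calibration table is invented.
# f maps the value of each standard to the indication it produces; the
# indication decreases with height, as for the ultrasonic sensor above.
f = {1: 12.0, 2: 7.5}             # x (height value) -> y (indication)
g = {y: x for x, y in f.items()}  # restitution g = f^{-1}, equation (9)

def phi(x_a):
    """observation, equation (8): an ideal instrument reproduces f on the
    value x_a carried by the object."""
    return f[x_a]

def gamma(x_a):
    """overall measurement, equation (10): x_hat = g(phi(a))."""
    return g[phi(x_a)]

assert gamma(2) == 2              # the assigned value recovers the true one
```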
let us now show how this model can be turned into a probabilistic one by just shifting from a deterministic to a probabilistic semantic, which is the main goal of this communication. prior to doing so, the present approach to considering probability theory as a logic must be presented, with a special focus on the notion of probabilistic functions, with the associated operations of inversion and composition, which are necessary for treating equation (10).

3. probability as a logic for models formulated through first-order languages

3.1. probabilistic semantic

let us consider models formulated in a first-order language, 𝑳, that is, a language whose elementary propositions concern properties of, or relations among, individuals, whilst more complex ones can be formed by combining the elementary ones through logical operators, such as conjunction, ∧, disjunction, ∨, or negation, ¬ [21]. such a language is rich enough for our purposes, as will appear in the following. once a statement is made, it is of interest to assess its truth or falsity. this is the object of semantics, and the basis of a deterministic semantic for statements of our interest has already been presented in section 2.1. the purpose of the proposed theory is to replace a deterministic semantic with a probabilistic one [22].

as we have seen, a deterministic model, 𝑀𝑑, may be expressed by a structure, 𝐻 = (𝐶, 𝑅), where 𝐶 = 𝐴1 × 𝐴2 × … × 𝐴𝑝 is a cartesian product of sets and 𝑅 = (𝑅1, 𝑅2, …, 𝑅𝑞), where each 𝑅𝑖 is an 𝑚𝑖-ary relation on 𝐶, expressed in the language 𝑳. the truth of a generic statement, 𝜙, concerning 𝑀𝑑, can be assessed in the following way:
• if 𝜙 is an elementary proposition, it is true if for some 𝑅𝑖 ∈ 𝑅, 𝜙 ∈ 𝑅𝑖;
• if instead it is a combination of elementary propositions, through logical operators, it is true if it satisfies the truth condition of the operators combined with the truth state of the elementary propositions involved.

a probabilistic model, 𝑀𝑝, instead is constituted by a finite collection of structures, 𝐸 = {𝐻1, 𝐻2, … 𝐻𝑀}, all associated to the same collection of sets, 𝐶, and a probability distribution 𝑃(𝐻𝑖) over 𝐸, such that
$P(E) = \sum_{i=1}^{M} P(H_i) = 1$. (11)
such structures constitute a set of possible realisations of the same basic underlying structure and are sometimes suggestively called "possible worlds". then, the probability of any statement 𝜙 associated to 𝑀𝑝 is
$P(\phi) = P\{H \in E \mid \phi\} = \sum_{H_i \in E|\phi} P(H_i)$, (12)
where 𝐻 ∈ 𝐸|𝜙 denotes a structure where 𝜙 is true, {𝐻 ∈ 𝐸|𝜙} is the subset of 𝐸 that includes all the structures in which 𝜙 is true, and the sought probability is the sum of the probabilities of such structures. to apply this approach to measurement, its application to probabilistic m-ary relations and to probabilistic functions must be investigated, with a special focus on probabilistic inversion.

3.2. probabilistic relations

if 𝑅(𝑢1, 𝑢2, … 𝑢𝑚) is an m-ary relation and 𝐸 is a finite collection of structures, 𝐻𝑖 = (𝐶, 𝑅𝑖), where the truth of 𝑅 can be ascertained, we obtain:
$P(R(a_1, a_2, \ldots, a_m)) = P\{H \in E \mid R(a_1, a_2, \ldots, a_m)\} = \sum_{H_i \in E|R(a_1, a_2, \ldots, a_m)} P(H_i)$. (13)
probabilistic relations were treated in detail in reference [22] and thus are not pursued further here.
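the possible-worlds semantic of equations (11) and (12) can be stated compactly in code. in this sketch (python; the two worlds and their probabilities are invented for illustration), a statement is any predicate over a structure:

```python
# minimal sketch of equations (11)-(12); the worlds are invented.
worlds = [
    ({("a", 2), ("b", 1)}, 0.7),   # H1: h = {(a,2),(b,1)}, P(H1) = 0.7
    ({("a", 2), ("b", 2)}, 0.3),   # H2: h = {(a,2),(b,2)}, P(H2) = 0.3
]
assert abs(sum(p for _, p in worlds) - 1.0) < 1e-12   # equation (11)

def prob(statement):
    """equation (12): P(phi) = sum of P(H_i) over worlds where phi is true."""
    return sum(p for h, p in worlds if statement(h))

print(prob(lambda h: ("b", 2) in h))   # P(h(b) = 2) -> 0.3
print(prob(lambda h: ("a", 2) in h))   # P(h(a) = 2) -> 1.0
```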
3.3. probabilistic functions

considering a function 𝑓: 𝐴 → 𝐵, the associated structure is 𝐻 = (𝐴 × 𝐵, 𝑓), and the generic statement 𝑣 = 𝑓(𝑢) denotes a binary relation on 𝐴 × 𝐵 such that ∀𝑢 ∈ 𝐴, ∃𝑣 ∈ 𝐵 (𝑣 = 𝑓(𝑢)), and ∀𝑢 ∈ 𝐴 ∀𝑣, 𝑧 ∈ 𝐵 (𝑣 = 𝑓(𝑢) ∧ 𝑧 = 𝑓(𝑢) → 𝑣 = 𝑧). let us then consider a finite collection, 𝐸, of such structures and an associated probability distribution on 𝐸. then the probability that the above statement holds true for a pair (𝑎, 𝑏), 𝑎 ∈ 𝐴, 𝑏 ∈ 𝐵, can be calculated by:
$P(f(a) = b) = P\{H \in E \mid f(a) = b\} = \sum_{H_i | f(a) = b} P(H_i)$. (14)

3.4. probabilistic inversion

consider now the probabilistic inverse of the function 𝑓 in the previous subsection, i.e., 𝑔: 𝐵 → 𝐴. let us consider first the possibility of calculating directly the probability associated to each value of 𝑔 from the knowledge of the corresponding direct function 𝑓, through the very definition of the inverse function, by establishing the following rule:
$P(g(b) = a) \propto P\{H \in E \mid f(a) = b\} = \sum_{H_i | f(a) = b} P(H_i)$. (15)
after imposing the closure condition $\sum_{u \in A} P(g(b) = u) = 1$, we obtain the rule:
$P(g(b) = a) = \dfrac{\sum_{H_i | f(a) = b} P(H_i)}{\sum_{u \in A} P(f(u) = b)}$. (16)
let us briefly discuss the relationship between probabilistic inversion, as here presented, and the bayes-laplace rule. to do that, let now 𝑢 and 𝑣 be two variables that denote generic elements of 𝐴 and 𝐵, respectively, and let 𝑎 and 𝑏 be two specific elements of 𝐴 and 𝐵, respectively. then we can form the atomic statements 𝜙 = (𝑢 = 𝑎) and 𝜓 = (𝑣 = 𝑏), which mean, for example, that, in some circumstance, the element 𝑎 ∈ 𝐴 occurred and the element 𝑏 ∈ 𝐵 occurred. then the function 𝑓 induces a conditional probability measure on 𝐴 × 𝐵, defined by:
$P(\psi|\phi) = P((v = b) \mid (u = a)) = P(b = f(a))$. (17)
then the (inverse) conditional probability 𝑃(𝜙|𝜓) equals the probability of the inverse function, and can be calculated through the bayes-laplace rule, with a uniform prior, that is
$P(\phi|\psi) = P((u = a) \mid (v = b)) = \dfrac{P((v = b) \mid (u = a))}{P(v = b)} = P(a = g(b))$. (18)
therefore, in this context, the bayes-laplace rule can be interpreted as a procedure for calculating the inverse of a probabilistic function. consequently, its use in measurement can be presented just as a step in measurement modelling, as will be shown in the next section, without taking any commitment to bayesian statistics, with its philosophical and epistemological implications [23].

3.5. composition of probabilistic functions

lastly, let 𝑓, 𝑔, and ℎ be three probabilistic functions, 𝑓: 𝐴 → 𝐵, 𝑔: 𝐵 → 𝐶 and ℎ: 𝐴 → 𝐶, where for 𝑢 ∈ 𝐴, ℎ(𝑢) = 𝑔(𝑓(𝑢)). then the probability of statements concerning ℎ can be assessed through the rule:
$P(w = h(u)) = \sum_{v \in B} P(w = g(v))\, P(v = f(u))$, (19)
where 𝑤 ∈ 𝐶.
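equations (16) and (19) translate directly into operations on conditional distributions. the sketch below (python; the dict-based representation is an implementation choice made here, not the authors') represents a probabilistic function 𝑓 as the family of distributions 𝑃(𝑓(𝑥) = 𝑦), and derives its inverse and compositions:

```python
# minimal sketch of equations (16) and (19); a probabilistic function is a
# dict mapping each input x to the distribution {y: P(f(x) = y)}.
def invert(f):
    """equation (16): P(g(b) = a) = P(f(a) = b) / sum_u P(f(u) = b)."""
    ys = {y for dist in f.values() for y in dist}
    return {y: {x: f[x].get(y, 0.0) / sum(f[u].get(y, 0.0) for u in f)
                for x in f}
            for y in ys}

def compose(g, f):
    """equation (19): P(h(u) = w) = sum_v P(g(v) = w) * P(f(u) = v)."""
    ws = {w for dist in g.values() for w in dist}
    return {u: {w: sum(g[v].get(w, 0.0) * pv for v, pv in f[u].items())
                for w in ws}
            for u in f}

f = {1: {1: 0.8, 2: 0.2}, 2: {1: 0.1, 2: 0.9}}
print(invert(f)[1])   # {1: 0.888..., 2: 0.111...}, i.e. (8/9, 1/9)
```

with a uniform prior, invert is exactly the bayes-laplace step of equation (18): the column-wise renormalisation plays the role of the denominator 𝑃(𝑣 = 𝑏).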
let us now apply the above rules to the probabilistic modelling of measurement processes.

4. probability as a logic for measurement modelling

4.1. modelling the measurand

in section 2.2 a deterministic model was developed, based on equations (6) and (7). such a model implies that the empirical relations appearing in it are uncertainty-free. if, instead, the intrinsic uncertainty of the measurand, which basically corresponds to the "definitional uncertainty" in the vim, needs to be considered, such a model must be turned into a probabilistic one. this can be done by applying equation (13) to equation (7), which ultimately yields
𝑃(𝑢 ≽ 𝑣) = 𝑃(ℎ(𝑢) ≥ ℎ(𝑣)), (20)
as proved in reference [22], and similar results can be obtained for all the scales of practical interest.

4.2. modelling the measurement process

the overall modelling of the measurement process has been outlined in section 2.3, where it was suggested that the overall measurement process can be described by the measurement function 𝛾: 𝐴 → 𝑋, characterised by equation (10). therefore, a proper structure for the measurement process is
𝑀″ = (𝐴 × 𝑌 × 𝑋, 𝜑, 𝑓, 𝛾). (21)
yet, this description does not include the modelling of the measurand and does not allow accounting for the associated intrinsic or definitional uncertainty, as previously discussed. this is acceptable in practice when such uncertainty is considered negligible. yet in the general case, models 𝑀′ and 𝑀″ must be merged, yielding (for a purely ordinal quantity) the structure:
𝑁 = (𝐴 × 𝑌 × 𝑋, ≽, 𝑚, 𝜑, 𝑓, 𝛾). (22)
as anticipated, this overall model can be interpreted either as deterministic or probabilistic, after interpreting the relations, variables and/or functions involved accordingly. recalling the previously presented equations, we obtain for a generic probabilistic statement concerning the measurement function 𝛾, in model 𝑀″:
$P(\hat{x} = \gamma(a)) = P\!\left(\hat{x} = f^{-1}(\varphi(a))\right) = \sum_{y} \dfrac{P(y = f(\hat{x}))}{\sum_{w} P(y = f(w))}\, P(y = \varphi(a))$. (23)
on the other hand, if we want to account for intrinsic uncertainty also, we should refer to model 𝑁 and consider 𝑚 as a probabilistic function as well. note, in this regard, that the function 𝜑: 𝐴 → 𝑌 only depends (at least ideally) on the way in which the object 𝑎 realises and manifests the quantity, 𝑥, of interest. let us call it 𝑥𝑎 = 𝑚(𝑎). therefore,
𝑦 = 𝜑(𝑎) = 𝑓(𝑚(𝑎)). (24)

4.3. a very simple numerical illustrative example

let us finally illustrate the entire procedure by a very simple numerical example, concerning the (purely ordinal) height of three subjects, call them john (𝑎), paul (𝑏) and evelyn (𝑐). suppose john is definitely taller than the other two, so that 𝑃(𝑎 ≻ 𝑏) = 𝑃(𝑎 ≻ 𝑐) = 1.0. let instead paul be almost as tall as evelyn, with 𝑃(𝑏 ∼ 𝑐) = 0.6, 𝑃(𝑏 ≻ 𝑐) = 0.1 and 𝑃(𝑐 ≻ 𝑏) = 0.3. then it is easy to check that a proper function 𝑚: {𝑎, 𝑏, 𝑐} → {1,2} will have3:
𝑃(𝑚(𝑎) = 1) = 0.0; 𝑃(𝑚(𝑎) = 2) = 1.0;
𝑃(𝑚(𝑏) = 1) = 0.9; 𝑃(𝑚(𝑏) = 2) = 0.1;
𝑃(𝑚(𝑐) = 1) = 0.7; 𝑃(𝑚(𝑐) = 2) = 0.3.
let us now consider the calibration function, 𝑓: 𝑋 → 𝑋, with 𝑋 = {1,2}, and let the probability of 𝑓 be such that:
𝑃(𝑓(1) = 1) = 0.8; 𝑃(𝑓(1) = 2) = 0.2;
𝑃(𝑓(2) = 1) = 0.1; 𝑃(𝑓(2) = 2) = 0.9.
then the probability of the inverse function 𝑔 is such that:
𝑃(𝑔(1) = 1) = 8/9; 𝑃(𝑔(1) = 2) = 1/9;
𝑃(𝑔(2) = 1) = 2/11; 𝑃(𝑔(2) = 2) = 9/11.
the observation function 𝜑 is obtained by composing 𝑓 and 𝑚, according to equation (19), which yields:
𝑃(𝜑(𝑎) = 1) = 0.10; 𝑃(𝜑(𝑎) = 2) = 0.90;
𝑃(𝜑(𝑏) = 1) = 0.73; 𝑃(𝜑(𝑏) = 2) = 0.27;
𝑃(𝜑(𝑐) = 1) = 0.59; 𝑃(𝜑(𝑐) = 2) = 0.41.
lastly, the measurement function 𝛾 results from the composition of 𝑔 with 𝜑, yielding:
𝑃(𝛾(𝑎) = 1) = 0.251; 𝑃(𝛾(𝑎) = 2) = 0.749;
𝑃(𝛾(𝑏) = 1) = 0.698; 𝑃(𝛾(𝑏) = 2) = 0.302;
𝑃(𝛾(𝑐) = 1) = 0.599; 𝑃(𝛾(𝑐) = 2) = 0.401.

3 see reference [22] for additional details on this representational side of the question.
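the example of section 4.3 can be checked mechanically; the self-contained sketch below (python) applies the inversion rule (16) and the composition rule (19) to the distributions above, and reproduces the reported values (𝛾(𝑎) comes out as (0.2525, 0.7475), very close to the reported (0.251, 0.749)):

```python
# reproduces the numbers of section 4.3; inversion is equation (16),
# composition is equation (19).
m = {"a": {1: 0.0, 2: 1.0}, "b": {1: 0.9, 2: 0.1}, "c": {1: 0.7, 2: 0.3}}
f = {1: {1: 0.8, 2: 0.2}, 2: {1: 0.1, 2: 0.9}}

# inverse calibration function g, equation (16): column-wise renormalisation
g = {y: {x: f[x][y] / sum(f[u][y] for u in f) for x in f} for y in (1, 2)}

# observation phi = f o m and measurement gamma = g o phi, equation (19)
phi = {s: {y: sum(f[x][y] * p for x, p in m[s].items()) for y in (1, 2)} for s in m}
gamma = {s: {x: sum(g[y][x] * p for y, p in phi[s].items()) for x in (1, 2)} for s in m}

print(g[1])        # {1: 0.888..., 2: 0.111...} = (8/9, 1/9)
print(phi["b"])    # {1: 0.73, 2: 0.27}
print(gamma["c"])  # {1: 0.599..., 2: 0.400...}
```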
5. conclusion

the problem of the interpretation of probability in measurement has been considered, and it was suggested to regard probability theory as a logic for developing probabilistic models. a remarkable feature of this approach is that, after modelling measurement through the relations holding among the transformations involved, the model can be treated as either deterministic or probabilistic, depending upon the chosen semantic. alternative approaches can be considered, such as fuzzy logic or possibility theory [9], [10]. all these approaches have their merits and limitations, and the choice may be made depending upon the assumptions made in the development of the model. the logicistic approach here developed may overcome some reservations about probability theory, related to the limits of the frequentistic and the subjectivistic approaches, and may thus contribute to a wider use of the probabilistic approach.

references
[1] bipm, iec, ifcc, iso, iupac, iupap, oiml, guide to the expression of uncertainty in measurement, iso, geneva, switzerland, 1993; corrected and reprinted 1995, isbn 92-67-10188.
[2] w. bich, m. cox, c. michotte, towards a new gum – an update, metrologia, 53, 2016, s149–s159. doi: 10.1088/0026-1394/53/5/s149
[3] w. bich, from errors to probability density functions. evolution of the concept of measurement uncertainty, ieee trans. instr. meas., 61, 2012, pp. 2153–2159. doi: 10.1109/tim.2012.2193696
[4] i. lira, the gum revision: the bayesian view toward the expression of measurement uncertainty, european journal of physics, 37, 2016, 025803. doi: 10.1088/0143-0807/37/2/025803
[5] d. r. white, in pursuit of a fit-for-purpose uncertainty guide, metrologia, 53, 2016, s107–s124. doi: 10.1088/0026-1394/53/4/s107
[6] a. possolo, a. l. pintar, plurality of type a evaluations of uncertainty, metrologia, 54, 2017, pp. 617–632. doi: 10.1088/1681-7575/aa7e4a
[7] a. possolo, c. elster, evaluating the uncertainty of input quantities in measurement models, metrologia, 51, 2014, pp. 339–353. doi: 10.1088/0026-1394/51/3/339
[8] w. f. guthrie, h.-k. liu, a. l. rukhin, b. toman, c.-m. jack wang, n.-f. zhang, three statistical paradigms for the assessment and interpretation of measurement uncertainty, in f. pavese, a. b. forbes (eds.), data modelling for metrology and testing in measurement science, boston, birkhauser, 2009.
[9] e. benoit, expression of uncertainty in fuzzy scales based measurements, measurement, 46, 2013, pp. 3778–3782. doi: 10.1016/j.measurement.2013.04.006
[10] s. salicone, m. prioli, measuring uncertainty within the theory of evidence, switzerland, springer, 2018.
[11] jcgm gum-6, developing and using measurement models, 2020.
[12] l. pendrill, quality assured measurement, switzerland, springer, 2019.
[13] l. mari, m. wilson, a. maul, measurement across the sciences, switzerland, springer, 2021.
[14] a. backlund, the definition of system, kybernetes, 29, 2000, pp. 444–451. doi: 10.1108/03684920010322055
[15] g. b. rossi, measurability, measurement, 40, 2007, pp. 545–562. doi: 10.1016/j.measurement.2007.02.003
[16] g. b. rossi, toward an interdisciplinary probabilistic theory of measurement, ieee trans. instrumentation and measurement, 61, 2012, pp. 2097–2106. doi: 10.1109/tim.2012.2197071
[17] k. d. sommer, b. r. l. siebert, systematic approach to the modelling of measurement for uncertainty evaluation, metrologia, 43, 2006, s200–s210. doi: 10.1088/1742-6596/13/1/052
[18] a. giordani, l. mari, a structural model of direct measurement, measurement, 145, 2019, pp. 535–550. doi: 10.1016/j.measurement.2019.05.060
[19] r. z. morawski, an application-oriented mathematical meta-model of measurement, measurement, 46, 2013, pp. 3753–3765. doi: 10.1016/j.measurement.2013.04.004
[20] g. b. rossi, f. crenna, a formal theory of the measurement system, measurement, 116, 2018, pp. 644–651. doi: 10.1016/j.measurement.2017.10.062
[21] g. rigamonti, corso di logica, torino, boringhieri, 2005. [in italian]
[22] g. b. rossi, f. crenna, a first-order probabilistic logic with application to measurement representations, measurement, 79, 2016, pp. 251–259. doi: 10.1016/j.measurement.2015.04.024
[23] g. b. rossi, f. crenna, beyond the opposition between the bayesian and the frequentistic views in measurement, measurement, 151, 2020, 107157. doi: 10.1016/j.measurement.2019.107157
a 3d-printed soft orthotic hand actuated with twisted and coiled polymer muscles triggered by electromyography signals

acta imeko, issn: 2221-870x, september 2022, volume 11, number 3, 1–8

irfan zobayed1,2, drew miles1, yonas tadesse1,2,3,4
1 humanoid, biorobotics, and smart systems (hbs) lab, mechanical engineering department, the university of texas at dallas
2 biomedical engineering department, the university of texas at dallas
3 electrical and computer engineering department, the university of texas at dallas
4 alan g. macdiarmid nanotech institute, the university of texas at dallas

section: research paper
keywords: actuators; soft robotics; emg; rehabilitation measurement instrument; powered hand orthosis
citation: irfan zobayed, drew miles, yonas tadesse, a 3d-printed soft orthotic hand actuated with twisted and coiled polymer muscles triggered by electromyography signals, acta imeko, vol. 11, no. 3, article 9, september 2022, identifier: imeko-acta-11 (2022)-03-09
section editor: zafar taqvi, usa
received march 15, 2022; in final form september 23, 2022; published september 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: yonas tadesse, e-mail: yonas.tadesse@utdallas.edu

abstract: various wearable robotic hands, prosthetic hands, and orthotic exoskeletons developed in the last decade aim to rehabilitate patients whose daily quality of life is affected by hand impairments – however, a majority of these devices are controlled by bulky, expensive, noisy, and uncomfortable actuators. twisted and coiled polymer (tcp) muscles are novel smart actuators that address these key drawbacks. they have been utilized in soft robotics, hand orthosis exoskeletons, and powered hand orthotic devices; they are also lightweight, high-performance, and inexpensive to manufacture. previously, tcp muscles have been controlled via power supplies with mechanical switches that are not portable, making them unfeasible for long-term applications. in this work, a portable control system for tcp muscles is presented: electromyography (emg) signals are captured through electrodes placed on the arm of the user and processed through a channel of electrical components to actuate 4-ply tcp muscles, which is demonstrated on a 3d-printed soft orthotic hand. with portable emg control, orthotic devices can become more independently accessible to the user, making these devices novel instruments for measuring, aiding, and expediting the progress of hand impairment rehabilitation.

1. introduction

one of the major groups of patients who experience an overall decrease in hand function and ability are those who suffer from ischemic stroke, which makes up the majority of all strokes suffered worldwide. in fact, the number of strokes that occur annually worldwide is approximately 17 million, making stroke one of the most common causes of death in many countries, often trailing only coronary artery disease [1]. specifically in the united states, strokes are the fifth leading cause of death, causing more than 125,000 deaths annually and leaving long-lasting effects on nearly one million more patients [2]. alongside the decreased physical ability of patients, strokes also take a heavy economic toll, due to health care services, medications, loss of productivity, and required rehabilitation, which not only burdens patients financially but can cost the economy billions of dollars [1]. worldwide, approximately 50 to 60 million people suffer from some form of hand impairment after suffering not only a stroke but also spinal cord injury (sci) or skeletal muscle atrophy. this often results in a loss of independence, leading to a decrease in the quality of activities in daily life for such patients [3].
the prevalence of upper-extremity neuromotor impairments because of ischemic strokes and sci is statistically inevitable – when combined with the economic challenges that come when attempting to manage a stroke, there is an overwhelming issue of access to affordable treatment for any related impairments. performing rehabilitation on a patient as soon as possible after suffering a stroke gives the best chance of regaining any lost hand functionality and dexterity [4]. however, traditional methods of rehabilitation are not only expensive but also not possible to perform unless patients are fully conscious and at a minimum state of health post-stroke or injury, which is very uncommon in most cases. to tackle the expenses and lack of accessibility that come with rehabilitation for hand impairments, medical robotics have been a refreshing addition to the few options patients have when selecting their path to recovery post-stroke. specifically, the introduction of powered hand orthotic exoskeletons has made great strides in easing the difficulties that beset neuromotor rehabilitation from both an engineering and a clinical perspective. such devices improve the quality and efficiency of the current rehabilitation process and expedite it, helping patients retain maximal motor function within the critical period post-stroke. design parameters for hand exoskeleton devices are imperative to the user experience as well, including a focus on key mechanical aspects that relate to hand anatomy, user comfort, effective force transmission via an actuator, and affordability for the common consumer [5]. the critical combination of such parameters within an exoskeleton device must not only aim to meet user demands but also the demands of the industry. one of the current industry leaders in assistive hand orthotic devices is the myopro device developed by myomo inc. – electromyography (emg) signals are captured by electrodes upon muscle contractions and filtered through the patented software onboard the device itself.
the filtered signal then triggers the actuators, which are motors, within the device [6]. unique due to its combination of portability, self-control mechanism based on processing emg signals, and minimalist look, the myopro device exhibits the key combination of design parameters within an exoskeleton device. although the device can exhibit the dexterity and function of a human hand, the cost to purchase it is nearly $80,000 after insurance, which is unaffordable for the majority of patients who have hand motor impairments. another industry-leading alternative assistive hand orthotic device is the nasa roboglove, a humanoid robotic device designed to perform and enhance human-scale work, which is an alternative approach to the perspective of rehabilitative applications for stroke patients [7]. the device was developed in a partnership between general motors and nasa in an attempt to improve and assist healthy humans and their hand mechanical capabilities in applications that may require it, such as performing high-intensity work that requires strength in the international space station (iss) or other zero-gravity environments. although innovative in its design and application intentions, the roboglove is not available to average consumers for purchase, further justifying the need for accessible and affordable hand orthotic devices. as a result of both the need for affordable soft robotic stroke rehabilitation alternatives and research into accessible, optimized combinations of critical parameters for building exoskeletons, there have been multiple attempts to build these devices. over recent years, there have been many attempts at developing reliable hand orthotic devices, including 165 separate iterations of dynamic hand orthoses studied and documented by bos et al., who sifted through 296 different sources of literature, with 109 of the devices developed since 2011. bos found that a majority of dynamic hand orthoses are currently powered by dc motors, that devices are powered with traditional power supplies, and that they trigger actuation via manual transmission, force, pressure, pre-tension loads, and in some advanced cases, emg signals [8]. several hand orthotic devices are able to perform some of the key functionalities presented by the myopro and roboglove. these hands are much more affordable but are not commercially available. developed by butzer et al. is the relab tenoexo, which actuates spring pre-loaded tension mechanisms via a dc motor triggered by processed emg signals [3]. a similar powered hand orthotic device, developed by park et al. at columbia, is also actuated by dc motors triggered via emg signals [9]. another implementation of a hand orthotic device, partially 3d printed with pla filament, was developed by yoo et al.; it is also actuated by a dc motor and is significantly cheaper than other similar devices. an overwhelming majority of the devices mentioned, as well as other similar devices, utilize some form of a traditional or bulky actuator or emg triggering for the actuation of the device [3], [8], [10]-[14]. traditional actuators, although affordable, are bulky, noisy, and not user-friendly. a cost-effective, high-performance, and lightweight soft artificial actuator called twisted and coiled polymer (tcp) muscles was developed by haines et al. in
2014, who introduced a new class of smart actuators that have high power-to-weight ratios and also alleviate many of the issues found with traditional actuators [15]. tcp muscles are fabricated from precursor-fiber conductive multifilament silver-coated nylon 6,6 sewing thread and can exhibit linear strokes as high as 50 %; they are easy to both fabricate and manufacture. the first-ever soft robotic powered hand orthotic device to utilize this new class of smart actuators was the igrab, a 3d-printed hand orthotic exoskeleton developed by saharan et al. that is actuated via 2-ply tcp muscles [16]. like tcp muscles, other types of smart actuators exhibit some similar key thermomechanical and electrical properties, including shape memory alloy (sma) and fishing line muscles. both are suitable for various applications given their unique mechanical properties. smas are a promising alternative to traditional actuators due to their shape-retention abilities when actuating [17]. furthermore, their muscle-like properties and the various methods of cooling them for faster movement prove their relevance in robotic applications [18]. fishing line muscles (tcpfl) are similar to silver-coated tcp (tcpag) muscles in terms of fabrication and manufacturing, but they are developed from a monofilament nylon 6 fishing line rather than a thread, allowing them to produce higher actuation in particular cases, as has been demonstrated in underwater soft robotics [19]. the new class of smart actuators introduced by haines et al. exhibits excellent thermomechanical properties in various soft robotic applications and addresses most of the key issues found within current hand orthotic exoskeleton devices, including affordability and accessibility, as well as user experience, bulky designs, and noisy operation. however, actuation of these muscles was performed manually via a traditional power supply – hence, there is no portable or user-focused method of triggering actuation for them, specifically tcp muscles. as shown in earlier works related to hand orthotic devices, emg-triggered actuation of dc motors was evaluated and applied to devices, yet no such control exists for tcp muscles. traditional emg signal actuation methods utilize advanced signal processing and pattern recognition, as discussed by jafarzadeh et al. [20], who introduce a novel method of emg signal processing via deep learning with convolutional neural networks (cnn) for more accurate signal processing. a derivation of emg signal processing from the cnn can be implemented for accurate emg results to trigger tcp muscle actuation in a soft robotic application. in the following study, a novel method of portable tcp muscle control is introduced, specifically triggering the actuation of these novel smart actuators via processed emg signals on a 3d-printed soft robotic hand and gathering data to evaluate the performance of the system.

2. methods & materials

the following section addresses the methods used to perform the experimentation and implementation of emg control of novel artificial tcp muscles for the control of a soft robotic, hand orthotic exoskeleton as a complete system, as depicted in figure 1.

figure 1. (a) simplified emg signal algorithm block diagram (blue arrows) and (b) hardware diagram (orange arrows) for the proposed method to control orthotic hands, such as the igrab hand orthotic device by saharan et al. [16], via raw emg signals.
2.1. tcp muscle manufacturing

twisted and coiled polymer (tcp) muscles are made of precursor-fiber conductive multifilament silver-coated nylon 6,6 sewing thread – they can be made as 2-ply, 4-ply, or 6-ply strands, each depicted in figure 2b. the thread can be purchased commercially from shieldex trading inc. in both small cones and large spools. thicker tcp muscles require a larger current to actuate, as resistance has a direct correlation with the length and ply of the muscle [21]. the fabrication of the tcp muscles is prepared in a similar way as described in saharan et al. [17]. briefly, a 170 cm long, 200 µm diameter silver-coated nylon fiber strand is cut from the cone or spool – a 2-ply tcp muscle needs 1 strand of fiber; 2 and 3 strands are needed for 4-ply and 6-ply, respectively. the strands are connected at each end (2-ply muscle strands are not connected since they are a single thread); one end is then attached to a 450 rpm motor and the other end to a pre-set load to undergo the twist insertion method, as seen in figure 2a. after the fiber strands begin to coil, plying is performed by folding the coiled muscle in half, which twists in the opposite direction upon release and is then crimped with a size 6 or 8 gold-plated ring terminal to maintain its shape and tension. the pre-set loads are 175 g, 350 g, and 525 g for 2-ply, 4-ply, and 6-ply muscles, respectively. for the twisted and plied fibers to actuate properly, electrothermal annealing and training of the tcp muscles are required – each muscle underwent pulse actuation cycles while holding pre-set loads slightly heavier than the loads used for fabrication: 300 g, 600 g, and 900 g for 2-ply, 4-ply, and 6-ply, respectively. the exact cycle type, electrical specifications, duration, and repetition that each muscle undergoes are explicitly listed in table 1.

table 1. tcp training cycle specifications.

figure 2. (a) fabrication set-up for tcp muscles; fibers spun on a dc motor at 450 rpm with a pre-tension load at the bottom, (b) examples of 2-ply, 4-ply, and 6-ply tcp at 10x scale, and (c) characterization setup for tcp muscles; the power supply sends current values from steps 3 and 4 of table 1; the ni daq device derives temperature from k-type thermocouples, displacement, current, and voltage to calculate the actuation strain and power consumption.

tcp muscles elongate during this process due to the strain placed on the muscle to retain its mechanical properties; the two 4-ply tcp muscles used for this study are ~21.0 cm long post-training. to characterize the tcp muscles manufactured through the fabrication, training, and annealing phases, the characterization setup, as depicted in figure 2c, gathers key thermomechanical properties of each muscle placed within it. a single trained tcp muscle is secured onto the setup platform and has a pre-tension weight added by a defined load (300 g, 600 g, and 900 g for 2-ply, 4-ply, and 6-ply muscles, respectively). a benchtop power supply then applies a constant current to the muscle, which produces a variable voltage due to the varying resistance of each muscle, hence actuating the tcp muscle. a laser sensor is mounted and secured onto the setup to calculate the displacement of the muscle in real time when it is actuated. two thermocouples (omega, k-type, 36 awg) are attached at two different points on the muscle to follow thermal changes throughout the actuation and cooling process of the muscle. all data from the thermocouples, laser sensor, voltage, and current are captured by a national instruments (ni) rio data acquisition (daq) device, which consists of an analog output module (ni-9263), a differential analog input module (ni-9219), a voltage input module (ni-9201), and another single-channel analog input module (ni-9221). the data is then processed in a ni labview application and output to a spreadsheet, whose data is plotted and visualized by an automated matlab script for muscle characterization. the data points that are assessed are the actuation strain, temperature, current, voltage, and power of each tcp muscle that is manufactured.
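the quantities extracted by the matlab script follow directly from the logged channels; the sketch below (python rather than the authors' matlab, with invented sample arrays standing in for the real daq logs) shows how the actuation strain and electrical power are computed:

```python
import numpy as np

# illustrative stand-in for the logged channels (the real data come from
# the ni daq device); units: mm, v, a.
displacement_mm = np.array([0.0, 4.1, 9.2, 8.8])   # laser-sensor reading
voltage_v = np.array([0.0, 7.9, 8.2, 8.1])
current_a = np.full(4, 1.5)                        # constant-current drive

muscle_length_mm = 167.0                           # the 16.7 cm 4-ply sample
strain_pct = 100.0 * displacement_mm / muscle_length_mm
power_w = voltage_v * current_a

print(strain_pct.max())   # ~5.5 %, cf. figure 3a
print(power_w.max())      # ~12.3 w, cf. the ~12.8 w peak in figure 3c
```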
actuation results for 4-ply and 6-ply muscles are shown in figure 3. all fabrication, training, annealing, and characterization of the tcp muscles were performed in the humanoid, biorobotics, and smart systems (hbs) laboratory of the mechanical engineering department at the university of texas at dallas. for the following article, two fully trained and characterized 4-ply tcp muscles were utilized.

figure 3. example characterization data of a 16.7 cm 4-ply and a 17.1 cm 6-ply muscle at 600 g for 50 seconds. the current is not depicted since it is a constant input of 1.5 a and 2.1 a for 4-ply and 6-ply, respectively. (a) the strain peaks at 5.5 % for 4-ply and 6.4 % for 6-ply; (b) the voltage peaks at 8.2 v for 4-ply and 8.0 v for 6-ply; (c) the power used peaks at 12.8 w for 4-ply and 16.7 w for 6-ply; (d) the temperature peaks at 129 °c for 4-ply and 140 °c for 6-ply. the number of plies has a direct relationship to the maximum strain a muscle can exhibit.

2.2. emg signal algorithm

the algorithm depicted in figure 1a captures raw emg signals from the electrodes, then filters excess noise and amplifies the raw filtered signal. the input signal then passes through threshold parameters from the electrodes that must be satisfied – these values must occur within a certain period of time, based on actuation frequencies, and lie between 2 and 3.5 mv for a single finger or between 3 and 4.5 mv for two fingers. once the signal satisfies the threshold, a scaled output signal is generated to trigger the controller board, which communicates with the driver board to send a voltage signal to actuate the 4-ply tcp muscles on the robotic hand.
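the thresholding logic of section 2.2 can be sketched as follows (python; the function names, sampling rate, hold time, and the order in which the overlapping windows are checked are illustrative assumptions, not a reproduction of the firmware running on the controller):

```python
# minimal sketch of the section 2.2 thresholding; the mv windows are taken
# from the text and the 0.35 s persistence from section 3; everything else
# is an illustrative assumption.
SINGLE_MV = (2.0, 3.5)       # filtered-signal window for one finger
DUAL_MV = (3.0, 4.5)         # filtered-signal window for two fingers
FS_HZ = 200                  # assumed sampling rate
HOLD_S = 0.35                # signal must persist before triggering

def classify(mv):
    """map one filtered sample (mv) to a tentative command; the two
    windows overlap, so the dual-finger window is checked first here."""
    if DUAL_MV[0] <= mv <= DUAL_MV[1]:
        return "dual"
    if SINGLE_MV[0] <= mv <= SINGLE_MV[1]:
        return "single"
    return None

def trigger(samples_mv):
    """return a command once it persists for HOLD_S seconds."""
    needed, run, last = int(HOLD_S * FS_HZ), 0, None
    for mv in samples_mv:
        cmd = classify(mv)
        run = run + 1 if cmd is not None and cmd == last else (1 if cmd else 0)
        last = cmd
        if cmd and run >= needed:
            return cmd
    return None
```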
2.3. emg control setup

the emg control setup, depicted in figure 4, is composed of four main components: the (1) signal acquisition board, (2) power driver control board, (3) sensor data board, and (4) the nvidia jetson tx2 module controller. for testing purposes, not all board components were connected. the signal acquisition board, seen in figure 4d, has a total of eight inputs for a maximum of eight electrode pairs and one reference electrode – each electrode pair monitors a different muscle or grouping of muscles. these electrode pairs take raw emg signals from the subject and send them into the myoware muscle sensor (pn# sen-13723), seen in figure 4b and figure 5a, which collects the raw emg signals and inputs the signal into the board. the myoware sensor acts between the electrodes and the signal acquisition board. the power control board, seen in figure 4e, regulates the voltage and current input from the battery to power the entire system safely, as well as outputs the voltage to actuate the muscles. the board has a voltage rating of around 20 v. the operational amplifiers and dac chips can output a maximum of 5.7 v from each terminal. the board can be fully powered by an operational voltage of only 6 v to output an accumulated 45.6 v through all eight terminals. the board has a negligible current rating relative to the current it uses. for experimental purposes, a benchtop power supply was utilized in this study. the sensor data board, seen in figure 4c, has a total of eight analog inputs for sensor voltage signals for multiple purposes, including data and performance analytics as well as the implementation of future feedback control systems. all three boards were designed in eagle and manufactured by a local pcb vendor. each board is directly connected to the nvidia jetson tx2, seen in figure 4a, which acts as the controller and processor for the input data from the sensor data board and power board. the controller was selected for its ability to filter copious quantities of data within the emg processing algorithm for multiple fingers and to output proper commands to the power control board, which outputs the necessary voltage and current to move the hand orthotic device from its pre-trained position to the desired position, seen in figure 5e and figure 5f.

figure 4. the components that power the emg-triggered actuation system consist of the (a) nvidia jetson tx2, (b) myoware muscle sensor, (c) sensor data board, (d) signal acquisition board, (e) power control board, and (f) a 3d-printed orthotic hand, which is actuated by 4-ply tcp muscles connected by a commercial fishing line tendon to each of the index and ring fingers.

2.4. orthotic hand

depicted in figure 4f, the 3d-printed orthotic hand is a prototype developed in-house and was set up as a testing apparatus for experimental purposes. the 3d-printed hand orthotic device is made of thermoplastic polyurethane (tpu) and is actuated by two 21 cm post-trained 4-ply tcp muscles, each connected to the index finger and ring finger of the orthotic hand by a commercial fishing line that mimics the anatomical functionality of a tendon in the human hand.

figure 5. (a) electrode placement used for emg-controlled actuation in relation to key anatomical muscle structure of the anterior left forearm for clear signal reception; (b) an open hand, which is the rest position for the user, will not actuate the hand and keeps the (c) device in its pre-trained rest position; (d) a curled or closed hand by the user triggers a signal to actuate the tcp muscles, which (e) actuates the ring finger or (f) both the ring and index finger.

2.5. test subject

to trigger actuation of the hand orthotic device via emg signals, a verified subject was required; hence, institutional review board (irb) approval was applied for and granted. electrodes from the emg control setup were placed on the anterior surface of the subject's arm at the base of the forearm, distal to the elbow joint, with a reference on the medial bony protrusion of the elbow. the muscle of primary interest is the flexor digitorum profundus, shown in figure 5a.
3. results

the following section presents the results of the experimentation conducted for the actuation of the single-finger and dual-finger setups, as well as a brief overview of the muscle performance and potential areas of experimental improvement for the data seen in figure 6.

figure 6. the emg acquisition signals depicted are a result of the raw emg signals captured by the three electrodes on the myoware muscle sensor, which were filtered and adjusted by the controller; (a) filtered emg signals and (b) unfiltered emg signals.

3.1. single finger actuation

depicted in figure 6a is the acquisition of the filtered emg signal, denoted by the red triangle line, for a single-finger setup. initially reading a voltage of 1.72 mv from the electrode while the user was in the rest position (seen in figure 5b), the user then closed their hand (seen in figure 5d), yielding a spike in voltage to 3.42 mv for a duration of 0.35 seconds and causing the orthotic hand to also close before returning to a steady voltage of around 1.72 mv in the rest position. the full flexion yielded by the actuation strain is depicted by the index finger seen on the orthotic hand in figure 5e. the output voltage from the power control board to the muscle was ~5.8 v, which was expected given the electrical specifications of the board and was verified with a multimeter. in figure 6b, the acquisition of the single-finger unfiltered emg signal processed by the myoware muscle sensor is depicted, showing a similar trend to that of the filtered signal.

3.2. dual finger actuation

portrayed in figure 6a is the acquisition of the filtered emg signal for a dual-finger setup, denoted by the blue circle line. similar to the single-finger case, the initial voltage when the user's hand was at rest was 1.78 mv before the user closed their hand, which then yielded a spike in the filtered voltage to 3.42 mv for a duration of 0.35 seconds before returning to a steady voltage of around 1.74 mv. the full flexion of the two fingers, observed on the index finger and ring finger simultaneously, is depicted in figure 5f; both fingers were connected via the same setup seen in the single-finger case. in figure 6b, the acquisition of the dual-finger unfiltered emg signal processed by the myoware muscle sensor is depicted, showing a similar trend to that of the filtered signal.

3.3. signal comparison

when plotted side by side, the filtered emg signals acquired from the electrode on the user, seen in figure 6a, depict similar shapes and trends – the same can be said for the side-by-side comparison of the unfiltered signals depicted in figure 6b. when comparing the statistical data discussed earlier in this section, there are similar voltage levels from the electrode, which should be expected given that the same hand gestures were performed. this verifies that the emg input is acquired correctly by the signal acquisition board and converted properly by the microprocessor, because the output voltage from the power control board was able to reach peak voltage for maximum actuation strain and flexion. it is important to note that during the rest phases (before the initial spike of the voltage at 0.4 s and 0.35 s for the single- and dual-finger setups, respectively), the emg voltage of the dual-finger setup shows a very slight variance when compared to that of the single-finger setup – this is because two output channels are opened from the power control board for the actuation of the orthotic hand. as more channels are opened, the filtering of the acquisition signal from the electrodes becomes slightly weaker, causing the variation seen for the dual-finger actuation.
although there is the presence of a slight variance, it is not enough to determine a significant impact on the acquisition and filtering of the signals. furthermore, the signals acquired are solely that of the subject approved via the irb process and no one else, as emg signals can vary amongst each individual, commonly ranging between 1 mv to 10 mv, which is comfortably within the range of acquisition seen in the study [22]. additional subject tests will optimize the current acquisition and filtration algorithm. 3.4. response time the single finger actuation began 0.4 seconds from the initial start of the emg signal capture whereas the dual finger actuation began 0.05 seconds earlier at 0.35 seconds. although the figure 6. the emg acquisition signals depicted are a result of the raw emg signals captured by the three electrodes on the myoware muscle sensor, which were filtered and adjusted by the controller; (a) filtered emg signals and (b) unfiltered emg signals. acta imeko | www.imeko.org september 2022 | volume 11 | number 3 | 7 response time is faster as the number of open channels increase, studies with more actuated fingers should be performed to validate this relationship. further improvements to the current raw signal capture methods can also improve response time. however, the response time of the tcp muscle is slower due to the time it needs to cool, which is a typical behaviour of thermal muscles (see figure 3a). 3.5. performance the muscles perform with a delayed actuation producing an assistive push to the orthotic hand, approximately 0.4 seconds for the single finger actuation and 0.35 seconds for the dual finger actuation. through coupling with muscles on the posterior side of the arm, movement will become slower, smoother, and have a higher degree of control. drawing from the conclusions seen in previous studies [17], the utilization of 2-ply muscles instead of 4-ply would increase the speed of actuation with lesser power consumption but might not be strong enough to further develop into a compact device. likewise, 6-ply muscles would have the necessary strength (as seen by the example muscles in figure 3) to actuate but will require greater than the 5.7 volts provided in the current experiment. furthermore, testing more channels might allow for further insight into variations of the emg signal acquisition, which can then be addressed by updating the current processing and filtering algorithm. 4. discussion tcp muscles are a quality candidate for an effective low-cost orthotic device. the muscles used in this device were of short length without a pulley system, hence resulting in a much shorter strain per actuation with a higher degree of precision and control. if the individual muscles were combined with many others similar to that of human muscles, smoother and more powerful motion could be obtained. the performance of these muscles is also affected by the power limitations of the board. while tcp muscles perform optimally at around 6-13 v depending on the specific ply, the highest deliverable voltage with the current setup design is approximately 5.7 v [20]. sma muscles can perform well at the provided voltage but increase the production cost greatly and potential hysteresis during the cooling process can also slow down the efficiency of actuation. fishing line muscles are another powerful soft artificial muscle that actuates based on temperature changes. 
the heating and cooling of these muscles can be achieved through water or fans, depending on the size and containment of the system [17], [23]. however, the amount of actuation currently produced is adequate for assistive motion, and the present study supports that this motion can be increased temporarily for the rehabilitation phase of product use. an important distinction is the degree and type of care needed for rehabilitation versus assistance in daily activities. robotic devices have been used quite extensively in research studies for motor function rehabilitation over the past twenty years. these training periods typically last around one hour and are conducted with at least one day between each session [24]. assistive devices, by contrast, are designed to be worn during all hours of daily life to assist with essential tasks and functions. once a person's functionality is restored to their individual peak, an assistive device works to fill the gap between their current functionality and their pre-incident functionality [25]; such devices can potentially become more accessible to patients as more iterations of portable and soft orthotic hand devices are investigated. the performance of this device in its current form has not yet been determined through standard human subject tests. this is appropriate, as the initial tests verified the efficacy of the device and the need for further evaluation through accredited existing tests. the fugl-meyer test is a classic functionality test that examines the degree of functionality that a person possesses in their upper extremities [26]. when it is performed with and without an assistive device, the respective scores can be compared to assess the effectiveness of the device. additional tests such as the action research arm test (arat), the wolf motor test, and the box and block test can provide a more holistic characterization of soft robotic assistive device performance [27]-[29]. emg-controlled actuation, through soft or hard actuators, requires precise processing and filtering of the emg signal in real time. the methods required for such processing depend on the system used for emg acquisition, the muscles being monitored, and the sensor used. emg acquisition from the flexor digitorum profundus results in desirable levels of control for hand orthotic devices [30]. this could be improved through the addition of a complementary sensor placed on the posterior surface of the forearm in the same position as the original sensor. likewise, the addition of further processing techniques can serve as an alternative to additional electrode placement [31]. such research can build on the present study, potentially enabling the development of novel solutions tailored to both the affordability and the accessibility of the device for patients, as well as a thorough user experience.
5. conclusion
loss of hand movement or functionality is a widespread problem in the world today. the orthotic hand discussed would restore functionality to the user at a fraction of the cost of currently available systems. furthermore, it can be utilized as a measurement tool for rehabilitative progress for patients who are trying to regain functionality in their hand. with human subject trials, the device can be further improved for better portability and convenience. additional testing will also yield more emg data, allowing for more precise control of the device.
integrating existing gesture-based emg databases with our current algorithms will allow for complex mechanical movements of the orthotic device [32]. this same technology applies well not only to low-cost prosthetics and rehabilitative robotics for trans-ulnar amputees, but also to assistive devices for general use by able-bodied individuals [33]. with modifications, this technology could have useful applications for restoring functionality in those with parkinson's disease or those who suffer from foot drop. the orthotic device discussed has immense potential to positively impact the daily lives of thousands of individuals across the world, not only from a functionality perspective but also as an instrument to measure rehabilitative progress.
6. funding
the following research was supported by an internal research enhancement source.
7. comments
the authors declare no conflicts of interest. this paper is the full version of the presentation "igrab: novel 3d printed soft orthotic hand triggered by emg signals" presented at the tc17 special events conference in october 2021.
references
[1] b. r. french, r. s. boddepalli, r. govindarajan, acute ischemic stroke: current status and future directions, mo. med., vol. 113, no. 6, pp. 480–486, 2016. online [accessed 27 september 2022] https://www.ncbi.nlm.nih.gov/pmc/articles/pmc6139763/
[2] s. l. murphy, k. d. kochanek, j. xu, e. arias, mortality in the united states, 2020, 2021. online [accessed 27 september 2022] https://pubmed.ncbi.nlm.nih.gov/34978528/
[3] t. bützer, o. lambercy, j. arata, r. gassert, fully wearable actuated soft exoskeleton for grasping assistance in everyday activities, soft robot., vol. 8, no. 2, pp. 128–143, 2021. doi: 10.1089/soro.2019.0135
[4] p. langhorne, j. bernhardt, g. kwakkel, stroke rehabilitation, the lancet, vol. 377, no. 9778, pp. 1693–1702, 2011. doi: 10.1016/s0140-6736(11)60325-5
[5] m. sarac, m. solazzi, a. frisoli, design requirements of generic hand exoskeletons and survey of hand exoskeletons for rehabilitation, assistive, or haptic use, ieee trans. haptics, vol. 12, no. 4, pp. 400–413, 2019. doi: 10.1109/toh.2019.2924881
[6] j. p. mccabe, d. henniger, j. perkins, m. skelly, c. tatsuoka, s. pundik, feasibility and clinical experience of implementing a myoelectric upper limb orthosis in the rehabilitation of chronic stroke patients: a clinical case series report, plos one, vol. 14, no. 4, p. e0215311, 2019. doi: 10.1371/journal.pone.0215311
[7] c. a. ihrke, l. b. bridgwater, d. r. davis, d. m. linn, e. a. laske, k. g. ensley, j. h. lee, roboglove - a robonaut derived multipurpose assistive device, 2014.
[8] r. a. bos, c. j. w. haarman, t. stortelder, k. nizamis, j. l. herder, a. h. a. stienen, d. h. plettenburg, a structured overview of trends and technologies used in dynamic hand orthoses, j. neuroengin. rehabil., vol. 13, no. 1, 2016, pp. 1-25. doi: 10.1186/s12984-016-0168-z
[9] s. park, m. fraser, l. m. weber, c. meeker, l. bishop, d. geller, j. stein, m. ciocarlie, user-driven functional movement training with a wearable hand robot after stroke, ieee trans. neural syst. rehabil. eng., vol. 28, no. 10, pp. 2265–2275, 2020. doi: 10.1109/tnsre.2020.3021691
[10] l. randazzo, i. iturrate, s. perdikis, j. del r. millán, mano: a wearable hand exoskeleton for activities of daily living and neurorehabilitation, ieee robot. autom. lett., vol. 3, no. 1, pp. 500–507, 2017.
[11] t. shahid, d. gouwanda, s. g. nurzaman, moving toward soft robotics: a decade review of the design of hand exoskeletons, biomimetics, vol. 3, no. 3, p. 17, 2018. doi: 10.3390/biomimetics3030017
[12] t. du plessis, k. djouani, c. oosthuizen, a review of active hand exoskeletons for rehabilitation and assistance, robotics, vol. 10, no. 1, p. 40, 2021.
[13] n. secciani, c. brogi, m. pagliai, f. buonamici, f. gerli, f. vannetti, m. bianchini, y. volpe, a. ridolfi, wearable robots: an original mechatronic design of a hand exoskeleton for assistive and rehabilitative purposes, front. neurorobotics, vol. 15, 2021. doi: 10.3389/fnbot.2021.750385
[14] p. heo, g. m. gu, s. lee, k. rhee, j. kim, current hand exoskeleton technologies for rehabilitation and assistive engineering, int. j. precis. eng. manuf., vol. 13, no. 5, pp. 807–824, 2012.
[15] c. s. haines (+ 20 authors), artificial muscles from fishing line and sewing thread, science, vol. 343, no. 6173, pp. 868–872, 2014. doi: 10.1126/science.1246906
[16] l. saharan, m. j. de andrade, w. saleem, r. h. baughman, y. tadesse, igrab: hand orthosis powered by twisted and coiled polymer muscles, smart mater. struct., vol. 26, no. 10, p. 105048, 2017.
[17] l. wu, m. jung de andrade, r. rome, c. haines, m. d. lima, r. h. baughman, y. tadesse, nylon-muscle-actuated robotic finger, in active and passive smart structures and integrated systems 2015, 2015, vol. 9431, pp. 154–165. doi: 10.1117/12.2084902
[18] y. tadesse, n. thayer, s. priya, tailoring the response time of shape memory alloy wires through active cooling and pre-stress, j. intell. mater. syst. struct., vol. 21, no. 1, pp. 19–40, 2010.
[19] y. almubarak, m. schmutz, m. perez, s. shah, y. tadesse, kraken: a wirelessly controlled octopus-like hybrid robot utilizing stepper motors and fishing line artificial muscle for grasping underwater, 2021. doi: 10.21203/rs.3.rs-186985/v1
[20] m. jafarzadeh, d. c. hussey, y. tadesse, deep learning approach to control of prosthetic hands with electromyography signals, in 2019 ieee international symposium on measurement and control in robotics (ismcr), 2019, pp. a1-4. doi: 10.1109/ismcr47492.2019.8955725
[21] i. m. zobayed, y. tadesse, igrab: mechanical characterization of 4-ply and 6-ply tcp muscle actuators for hand orthotic exoskeleton, biomedical engineering society (bmes) annual meeting (poster), october 6-9, orlando, fl.
[22] m. b. i. reaz, m. s. hussain, f. mohd-yasin, techniques of emg signal analysis: detection, processing, classification and applications, biol. proced. online, vol. 8, no. 1, pp. 11–35, 2006.
[23] l. saharan, y. tadesse, a novel design of thermostat based on fishing line muscles, in asme international mechanical engineering congress and exposition, 2016, vol. 50688, p. v014t07a019. doi: 10.1115/imece2016-67298
[24] a. c. lo (+ 19 authors), robot-assisted therapy for long-term upper-limb impairment after stroke, n. engl. j. med., vol. 362, no. 19, pp. 1772–1783, 2010. doi: 10.1056/nejmoa0911341
[25] b. radder (+ 9 authors), a wearable soft-robotic glove enables hand support in adl and rehabilitation: a feasibility study on the assistive functionality, j. rehabil. assist. technol. eng., vol. 3, p. 2055668316670553, 2016. doi: 10.1177/2055668316670553
[26] d. j. gladstone, c. j. danells, s. e. black, the fugl-meyer assessment of motor recovery after stroke: a critical review of its measurement properties, neurorehabil. neural repair, vol. 16, no. 3, pp. 232–240, 2002.
[27] m. mcdonnell, action research arm test, aust j physiother, vol. 54, no. 3, p. 220, 2008. doi: 10.1016/s0004-9514(08)70034-5
[28] e. taub, d. m. morris, j. crago, wolf motor function test (wmft) manual, birm. univ. ala. ci ther. res. group, pp. 1–31, 2011.
[29] t. platz, c. pinkowski, f. van wijck, i.-h. kim, p. di bella, g. johnson, reliability and validity of arm function assessment with standardized guidelines for the fugl-meyer test, action research arm test and box and block test: a multicentre study, clin. rehabil., vol. 19, no. 4, pp. 404–411, 2005. doi: 10.1191/0269215505cr832oa
[30] m. mulas, m. folgheraiter, g. gini, an emg-controlled exoskeleton for hand rehabilitation, in 9th international conference on rehabilitation robotics (icorr 2005), 2005, pp. 371–374. doi: 10.1109/icorr.2005.1501122
[31] h.-j. yoo, s. lee, j. kim, c. park, b. lee, development of 3d-printed myoelectric hand orthosis for patients with spinal cord injury, j. neuroengineering rehabil., vol. 16, no. 1, pp. 1–14, 2019. https://jneuroengrehab.biomedcentral.com/articles/10.1186/s12984-019-0633-6
[32] n. j. jarque-bou, m. vergara, j. l. sancho-bru, v. gracia-ibáñez, a. roda-sales, a calibrated database of kinematics and emg of the forearm and hand during activities of daily living, sci. data, vol. 6, no. 1, pp. 1–11, 2019. https://www.nature.com/articles/s41597-019-0285-1
[33] s. park, c. meeker, l. m. weber, l. bishop, j. stein, m. ciocarlie, multimodal sensing and interaction for a robotic hand orthosis, ieee robot. autom. lett., vol. 4, no. 2, pp. 315–322, 2018.
remote video monitoring for offshore sea farms: reliability and availability evaluation and image quality assessment via laboratory tests
acta imeko issn: 2221-870x december 2021, volume 10, number 4, 17 - 24
david baldo1, gabriele di renzone1, ada fort1, marco mugnaini1, giacomo peruzzi1, alessandro pozzebon2, valerio vignoli1
1 department of information engineering and mathematics, university of siena, via roma 56, 53100 siena, italy
2 department of information engineering, university of padova, via gradenigo 6/b, 35131 padova, italy
section: research paper
keywords: video monitoring; offshore; reliability; image quality; climatic chamber
citation: david baldo, gabriele di renzone, ada fort, marco mugnaini, giacomo peruzzi, alessandro pozzebon, valerio vignoli, remote video monitoring for offshore sea farms: reliability and availability evaluation and image quality assessment via laboratory tests, acta imeko, vol. 10, no. 4, article 7, december 2021, identifier: imeko-acta-10 (2021)-04-07
section editor: silvio del pizzo, university of naples 'parthenope', italy
received june 1, 2021; in final form december 6, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this research was funded by regione toscana, project seafactory, por fesr 2014–2020 bando n.2: progetti strategici di ricerca e sviluppo delle mpmi.
corresponding author: alessandro pozzebon, e-mail: alessandro.pozzebon@unipd.it
abstract
in this article, the availability and reliability of a remote video monitoring system for offshore sea farming plants are studied and tested in the laboratory. the purpose of the system is to provide a video surveillance infrastructure to supervise breeding cages, along with the fish inside them, in order to counter undesired phenomena such as fish poaching and cage damage. the system is installed on a cage floating structure: it is mainly composed of an ip camera controlled by a raspberry pi zero, which is the core of the system. images are streamed through a 3g/4g dongle, while the overall system is powered via two photovoltaic panels charging a backup battery. simulations are carried out considering two seasonal functioning periods (i.e., winter and summer): each of them is characterised by temperature trends defined according to the average temperatures of the system deployment site, 8 km offshore from the city of piombino, italy. in order to optimise power consumption without hindering the application scenario requirements, the system operates according to a duty cycle of 2 minutes out of 15 (i.e., 8 minutes of operation per hour). the performances of the system are then tested in the laboratory exploiting a climatic chamber so as to simulate different environmental conditions: variations in image quality are then analysed in order to identify possible dependencies on critical situations related to specific temperature and relative humidity values and to the presence of salt in the air.
1. introduction
fish farming has undergone massive growth for years now, owing to several causes. primarily, farmed seafood can contribute to the fight against world hunger without entailing an increase in costs, thus proving to be a cheap and valuable alternative for the global food supply. furthermore, the quality of farmed seafood can be easily certified, since the complete fish lifetime can be traced throughout the breeding cycle. the expected growth of this sector is also supported by several studies: a prediction on the worldwide fishing market [1] foresees that 62 % of fish for human consumption will be produced via aquaculture by 2030. in addition, the food and agriculture organization (fao) forecast an expansion of aquaculture production of up to 58 % by 2022 [2]. concerning europe, a future gap between fish demand and supply has been predicted [3]; therefore, fostering fish farming may prove to be an advisable initiative. finally, in italy aquaculture includes more than 800 companies, most of which operate in the mediterranean sea, where more than 5000 plants are located.
offshore sea farms need to rely on surveillance systems so as to counter undesired phenomena such as fish poaching and breeding cage damage. therefore, in this paper an autonomous remote video monitoring system for offshore sea farms is presented, along with simulations and laboratory tests whose outcomes are exploited to study its availability and reliability, to assess performance variations, and to evaluate its overall effectiveness. the system is designed around off-the-shelf components and for energy efficiency, since it is powered via an energy harvesting system (i.e., two photovoltaic panels and a backup battery). eventually, the system is installed on a cage floating structure. since environmental conditions may severely affect the reliability of the system in terms of quality of the acquired images, tests are performed exploiting a climatic chamber, thus making it possible to simulate different environmental conditions in terms of temperature and humidity. the system behaviour at extremely high and low temperatures, as well as at high humidity levels, was tested, analysing the possible degradation of the images: in particular, the environmental parameters for the various tests were identified considering the meteorological conditions of the final deployment site, as well as those of any general marine site in the temperate climate zone. this paper is an extension of [4] and is structured as follows. some related works are reported in section 2, while section 3 shows the video monitoring system architecture. the reliability configurations on which the simulations are carried out are outlined in section 4, and the simulation results are presented in section 5. section 6 is devoted to the description of the laboratory test setup, while in section 7 the test results are presented and discussed. eventually, section 8 points out remarks and conclusions.
2. related works
autonomous systems for the monitoring of fish behaviour within offshore sea farms during feeding phases were reviewed in [5]: among the various enabling technologies and processing techniques, the use of video recordings through ad-hoc systems and cameras was highlighted, showing their feasibility. video monitoring systems deployed in marine contexts are mainly designed for coastal safeguarding, for instance to assess erosion [6]-[9], rather than for sea farm surveillance. such systems make use either of standard ip cameras [6], [7], like the one within the system presented in this paper, or of embedded cameras directly controlled by a single-board computer such as a raspberry pi [8], which is the control unit of the system that will be presented in the following section. similarly, remote video monitoring systems may also be exploited to assess post-storm recovery of beaches. to this end, in [9] such a system was developed and tested in australia. therein, a video monitoring framework was employed to assess the erosion levels of sandy coastlines that experienced severe storms. at the same time, the same facility was used to evaluate the recovery level after storms, highlighting adequate system reliability and effectiveness. nonetheless, these sorts of monitoring infrastructures are usually installed ashore, thus easing, for instance, data transmission (which can be accomplished without resorting to wireless solutions) and system maintenance.
however, the literature also includes works implementing video monitoring systems installed on offshore buoys within breeding plants: in [10], cameras are set up within the fish cages, while video streaming is ensured by a radio frequency system installed on board the buoy. similarly, the works reported in [11], [12] extend such a surveillance system. as far as the application scenario is concerned, [13] shares a similar context with this paper, since the authors propose a remote monitoring system for offshore oceanic sea farms made of floating cages. the system includes cameras, sonars, and telemetry devices, and it is installed on cage feed buoys (similar to the one that will be introduced later on). unfortunately, though, images are not streamed ashore, because they are recorded and stored on board a boat. on the other hand, marine video monitoring systems are additionally devised to operate underwater, to fulfil submarine investigation and exploration purposes [14], as well as for aquaculture ponds characterised by turbid water [15]. concerning video monitoring systems in a broad sense (i.e., systems employed in contexts other than the marine one) that rely on photovoltaic energy harvesting and a backup battery, the works in [16], [17] confirm the suitability of such a technique, thus underlining the potential effectiveness of the solution proposed in this paper. image quality may be judged by means of several metrics measuring the degree of similarity between two pictures; indeed, camera capabilities can be objectively evaluated by resorting to such metrics. this paper makes use of a well-known technique, introduced later, which has received wide approval in the academic community. however, for the sake of completeness, some newer approaches are also cited below to provide readers with additional references. for instance, in [18] image quality assessment is performed from a vectorial perspective by making recourse to the vector root mean squared error, which offers an alternative that addresses the main limitations of mean-squared-error-based techniques (e.g., in noise cancellation or detail preservation). on the other hand, image quality assessment may also be accomplished by means of machine learning techniques; in particular, [19] tackles the problem by falling back on deep neural networks, which can capture high-order statistics of image feature descriptors and store them into a dictionary, resulting in a deep dictionary encoding network. finally, images captured by remote video monitoring systems may suffer from artifacts stemming from equipment wear as well as from bad weather. in order to enhance image quality, it is desirable to set up methods for mitigating such shortcomings. therefore, in [20] a solution is proposed by establishing an algorithm for dehazing images, which can be applied to a myriad of contexts, especially harsher ones such as offshore environments.
3. system architecture
the block diagram of the video monitoring system is depicted in figure 1; the system is composed of 3 main building blocks (i.e., power supply, control and communications, and camera). the power supply block contains 2 photovoltaic panels providing 20 w each, whose task is both to power up the whole system and to recharge a 12 v, 25 ah lead-acid backup battery through a solar charge controller.
the power supply block is composed only of off-the-shelf components: this decision was made so as to develop the prototype as quickly as possible and test its effectiveness. the core of the power supply system is the solar charge controller. it is responsible for correctly powering up the control and communications block and the camera as a function of both the battery charge level and the power coming from the photovoltaic panels. indeed, whenever the panels are exposed to enough sunlight, the solar charge controller manages the harvested energy to run the whole system as well as to recharge the backup battery. on the contrary, whenever the harvested energy is scant (e.g., during the night), the solar charge controller draws energy from the battery to supply the system. the photovoltaic panels are fundamental for the long-term functioning of the system. indeed, the prototype is installed offshore and is supposed to operate for at least a 6-month timespan, while the backup battery alone only ensures 48 hours of autonomy. this additionally highlights the low-power nature of the system components (introduced below), along with the effectiveness of the duty-cycling functioning policy (addressed in the next section). however, a complete battery discharge is extremely unlikely, since it would be the consequence of a prolonged period of darkness. the control and communications block is the core of the system, since it manages the duty cycling of the camera along with image capture and transmission to a remote server, while minimizing power consumption by activating the internal elements only for the minimum amount of time needed. this workflow is obtained with the following off-the-shelf components:
• a dc-dc converter, which conditions the power supply coming from the power supply block so as to correctly power each of the system elements;
• a raspberry pi zero, the control unit of the system, running python scripts that manage both the camera and the duty cycling of all the other components;
• a raspberry pi relay board, containing relay switches directly controlled by the raspberry pi so as to turn the camera and the other system elements on and off whenever they are needed, and only for the strictly necessary time, in order to limit the overall power consumption;
• a 3g/4g dongle, which provides the internet connectivity exploited to send the captured images and the debug logs to a remote server for diagnostics;
• a reset switch, which performs a daily hardware reset of the whole system, acting as a sort of long-term watchdog timer to overcome software issues or unexpected behaviours.
the camera is an off-the-shelf outdoor ip camera produced by hikvision, especially designed to withstand marine environments. all the elements composing the control and communications block are housed within an ip56 box, while the complete system is mounted on a support pole (see figure 2), which is installed offshore on a breeding cage floating structure (see figure 3).
4. reliability configurations
the application scenario for this video monitoring system does not require real-time image streaming. in particular, only snapshots on a regular basis (i.e., one every 15 minutes) are required. therefore, in order to meet the functioning requirements, the system operates for a time span of 2 minutes every quarter of an hour, within which the picture is taken and remotely sent via the internet; a minimal sketch of such a duty-cycling routine is given below.
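the following python sketch illustrates the 15-minute duty cycle just described. the relay pin, camera boot delay, snapshot url and server endpoint are hypothetical placeholders, not the actual configuration of the deployed prototype, which is not reproduced here.

```python
# minimal sketch of the 2-minutes-out-of-15 duty cycle run on the raspberry pi.
# pin number, urls and boot delay are hypothetical placeholders.
import time
import requests
import RPi.GPIO as GPIO

RELAY_PIN = 17                                       # hypothetical bcm pin
SNAPSHOT_URL = "http://192.168.1.64/snapshot.jpg"    # hypothetical camera url
SERVER_URL = "https://example.org/upload"            # hypothetical server
PERIOD_S, ACTIVE_S = 15 * 60, 2 * 60

GPIO.setmode(GPIO.BCM)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

while True:
    start = time.time()
    GPIO.output(RELAY_PIN, GPIO.HIGH)        # power the camera via the relay
    time.sleep(30)                           # let the camera boot (assumed)
    try:
        image = requests.get(SNAPSHOT_URL, timeout=10).content
        requests.post(SERVER_URL, files={"image": image}, timeout=30)
    except requests.RequestException as exc:
        print("capture/upload failed:", exc)  # would go to the debug log
    # keep the active window at 2 minutes, then power everything down
    time.sleep(max(0, ACTIVE_S - (time.time() - start)))
    GPIO.output(RELAY_PIN, GPIO.LOW)
    time.sleep(PERIOD_S - ACTIVE_S)          # idle for the remaining 13 minutes
```

the daily hardware reset performed by the reset switch would act independently of this loop, recovering the system even if the software hangs.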
in so doing, the system is active for only 8 minutes per hour (i.e., 192 minutes per day), thus also optimizing power consumption. as regards the weather conditions the system is exposed to, two seasonal functioning periods are identified, on which the availability and reliability simulations are carried out:
• winter, made up of three 8-hour time slots, repeated daily, characterised by temperatures of 5 °c, 10 °c and 15 °c;
• summer, made up of three 8-hour time slots, repeated daily, respectively characterised by temperatures of 20 °c, 25 °c and 30 °c.
such temperatures are considered because of the future system deployment scenario: 8 km offshore from the city of piombino, italy. the simulation parameters are summarised in figure 4, while the availability and reliability simulation schemes are shown in figure 5 [21]-[24]. the mil hdbk 217f database was selected to evaluate individual component failure rates at different temperatures, considering a naval unsheltered (nu) environment. unfortunately, such parameters are usually not available from component producers; therefore, a conservative approach based on the worst-case figures given by the mentioned database was followed.
5. simulation results
as shown in figure 6, the simulations were performed by means of the blocksim software (by reliasoft) on the two scenarios previously described. the upper plot shows the availability over time (considering a restoration time of 48 hours), that is, the ratio between the mean time between failures and the sum of that time and the mean time to restoration (mtbf/(mtbf+mttr)); the bottom plot shows the reliability trend over time, taking as failure rates the inverses of the figures cited in the table of figure 4. both results show that the summer period yields lower final values than the winter one. such results agree with what is expected from the theory behind the mil hdbk 217f database, where temperature-based degradation, under a fixed environment, takes place.
figure 1. video monitoring system block scheme.
figure 2. realization of the video monitoring system: photovoltaic panels (left) and pole with camera and ip56 box containing the electronics (right).
to simulate the availability model, a fixed 48 h repair time was considered; a small numeric illustration of the resulting availability figure is given below.
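as a toy illustration, assuming purely hypothetical mtbf figures (the real ones derive from the mil hdbk 217f failure rates summarised in figure 4), the steady-state availability can be computed as follows.

```python
# steady-state availability a = mtbf / (mtbf + mttr), with the fixed 48 h
# restoration time used in the simulations; mtbf values are hypothetical.
MTTR_H = 48.0

def availability(mtbf_h: float, mttr_h: float = MTTR_H) -> float:
    return mtbf_h / (mtbf_h + mttr_h)

for season, mtbf_h in {"winter": 200_000.0, "summer": 120_000.0}.items():
    print(f"{season}: a = {availability(mtbf_h):.6f}")
```

even with a generous repair time, the availability stays close to one as long as the mtbf is several orders of magnitude larger than the mttr, which is consistent with the trends of figure 6.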
of course, due to the very small number of electronic components and to the low system complexity, the system has to be tested especially for wintertime over long periods, since the overall mtbf is considerably high at those temperatures. moreover, both the selected duty cycle for system exploitation and the power consumption reduction limit the internal temperature rise, keeping it almost constant, without significant degradation rates apparently induced on the individual electronics. the simulations do not include condensation due to temperature excursions; therefore, the rusting due to salty environment condensation is not considered in the results. the overall system is housed within an ip56 box presenting holes to which siphons are connected so as to allow heat dissipation via air circulation. however, this also implies that salty air enters the box, coming into contact with the system components. this is far from being an optimal solution, albeit sufficient for the prototype. indeed, the prototype is supposed to operate for just 6 months, during which damage due to saltiness should be limited. consequently, salty air from the sea is assumed to be present around the electronics, especially if maintenance activities are performed. hence, such an environment is challenging mainly for junctions and soldering rather than for single-component performance. in the future, in case of failure, a root cause analysis could be applied to single assets to verify whether this hypothesis is confirmed or not. the system availability without the charging station is not taken into consideration, because the system would not work long enough to gather the requested data; it is thus excluded from the simulations.
6. experimental setup
following the simulations concerning the availability and reliability of the system, a set of tests was planned and performed in the laboratory, with the aim of identifying possible negative effects on the acquired images due to the severe variations in environmental conditions that the system, and the video camera in particular, is subject to. in order to expose the system to different environmental conditions, the control and communications block, as well as the camera itself, were detached from the energy harvesting system and positioned inside a climatic chamber, whose size did not allow the insertion of the complete system. indeed, an acs angelantoni hygros 250 environmental test chamber, whose internal volume is 600 × 535 × 700 mm³ (w × d × h), was used to perform the tests. the climatic chamber is characterised by a temperature range from -40 °c to +180 °c, and a relative humidity range from 10 % to 98 % in the temperature range from +5 °c to +95 °c. it is important to point out that the achievable relative humidity is governed by the psychrometric principle; therefore, the minimum relative humidity value strictly depends on the temperature. the minimum relative humidity values rh_min are as follows:
• rh_min floating for t ≤ 0 °c;
• rh_min = 55 % at t = +10 °c;
• rh_min = 30 % at t = +20 °c;
• rh_min = 17 % at t = +30 °c;
• rh_min = 10 % for t ≥ +40 °c;
where t is the temperature value.
figure 3. autonomous remote video monitoring system prototype installed offshore on a cage floating structure.
figure 4. simulation parameters.
figure 5. availability and reliability simulation schemes: winter and summer working periods.
the tests were performed following the subsequent approach. the climatic chamber was programmed so as to simulate maritime weather, which has a relative humidity value around 80 %; therefore, it performed the following temperature and relative humidity pairs, which for the sake of simplicity are numbered from the first to the last executed test:
1. t = −10 °c with rh floating;
2. t = 0 °c with rh floating;
3. t = +10 °c with rh = 80 %;
4. t = +20 °c with rh = 80 %;
5. t = +30 °c with rh = 80 %;
6. t = +40 °c with rh = 80 %;
7. t = +50 °c with rh = 80 %.
the whole system was positioned inside the climatic chamber and cycled through each temperature-humidity pair. moreover, 300 ml of a saturated solution of salt in water was placed alongside the system to simulate the real conditions of the final deployment site. for the sake of completeness, it must be noted that the battery and the 3g/4g dongle were excluded from the test.
the battery was completely removed, since lead-acid batteries are subject to breakdown when extreme and abrupt temperature changes occur; thus, the system was mains-powered through a cable gland. the dongle, instead, was positioned outside the climatic chamber, since the chassis of the hygros 250 is made of metal (except for the porthole) and would compromise the connection capability of the dongle. the test setup can be seen in figure 7. the camera was fixed to the climatic chamber to avoid misplacement of the pictures while the tests were in progress. since the aim of the tests was to identify possible image degradations, a control image, shown in figure 8, was selected. this image was chosen with the aim of comparing the details and chromatic differences that may occur when the camera is subjected to different environments. the control image was positioned at the porthole, inside the climatic chamber; then, a picture of the control image was taken at a temperature of 25 °c and a relative humidity of 50 %, with the light of the climatic chamber switched on. the test started by setting the climatic chamber as specified in the aforesaid numbered list. as soon as the climatic chamber reached the defined temperature and relative humidity pair, the whole system was held at those levels for 10 minutes so as to allow the electronics inside the climatic chamber to reach a steady state. at this point the inner light was switched on, the system was powered up, and 500 pictures were taken at a rate of approximately one picture every two seconds. the tests and the related results are presented in the following section.
figure 6. availability and reliability simulation results during winter and summer working periods.
figure 7. test setup: the control image fixed at the porthole can be seen on the left, while the camera and the system are positioned inside the climatic chamber together with the glass containing the saturated solution of salt and water.
7. test results and discussion
the tests aimed at assessing the performances of the video surveillance monitoring system, under different environmental conditions, against two different benchmarks: picture acquisition time and picture quality. acquisition times were directly sampled by the video monitoring system. then, for each experimental set, those times were averaged: the results are reported in figure 9. the maximum mean acquisition time (i.e., 1173 ms) was experienced at -10 °c, while the minimum one (i.e., 1126 ms) was recorded at 30 °c and 80 % relative humidity. although the mean acquisition time trend is not constant, it can be considered practically so, since the difference between the maximum and the minimum is just 47 ms, thus underlining that no significant performance variation is present. the assessment of the variation of picture quality was carried out by resorting to the multiscale structural similarity (ms-ssim) index for image quality [25]. this method compares two images and provides as output an index (i.e., the ms-ssim) representing how similar the pictures are: the closer the ms-ssim is to 1, the more similar the images; if the ms-ssim for two images is 1, they are identical. therefore, each of the pictures taken by the camera during the experimental sets was compared to the control one by means of the ms-ssim, and the resulting ms-ssim values were averaged for each experimental set; a simplified sketch of this comparison is given below.
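the comparison can be sketched as follows. this is a simplified multi-scale variant that combines single-scale ssim over dyadic downsamplings with the scale weights of [25], not the exact ms-ssim formulation used in the study, and the file names are hypothetical placeholders.

```python
# simplified multi-scale ssim between each captured picture and the control
# image: single-scale ssim is computed on dyadic downsamplings and combined
# with the scale weights of [25]. rgb input is assumed; file names are
# hypothetical placeholders.
import numpy as np
from skimage.color import rgb2gray
from skimage.io import imread
from skimage.metrics import structural_similarity
from skimage.transform import rescale

WEIGHTS = (0.0448, 0.2856, 0.3001, 0.2363, 0.1333)  # scale weights from [25]

def ms_ssim_like(img_a, img_b, weights=WEIGHTS):
    a, b = rgb2gray(img_a), rgb2gray(img_b)
    scores = []
    for _ in weights:
        # clamp to avoid negative bases in the geometric mean below
        scores.append(max(structural_similarity(a, b, data_range=1.0), 1e-6))
        a = rescale(a, 0.5, anti_aliasing=True)
        b = rescale(b, 0.5, anti_aliasing=True)
    return float(np.prod(np.power(scores, weights)))  # weighted geometric mean

control = imread("control_25c_50rh.png")                 # hypothetical name
shots = [imread(f"set_4/img_{i:03d}.png") for i in range(500)]
print("mean index:", np.mean([ms_ssim_like(control, s) for s in shots]))
```

averaging the per-picture indices over the 500 shots of a set yields a single figure per (temperature, humidity) pair, which is how the per-set means discussed next can be obtained.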
the mean ms-ssim trend is reported in figure 10, while figure 11 shows the control picture along with one picture from each of the experimental sets. although all the pictures seem to look the same, as the mean ms-ssim trend shows, they slightly differ. as expected, the maximum mean ms-ssim (i.e., 0.799) was experienced at 20 °c and 80 % relative humidity: indeed, the control picture was taken at room temperature, therefore the most significant environmental variation was the relative humidity content. on the other hand, the minimum mean ms-ssim (i.e., 0.685) was recorded at -10 °c. qualitatively, the system performed better at higher temperatures; indeed, a meaningful discrepancy between the lower-temperature experimental sets and the higher-temperature ones took place. this behaviour can be justified by the mechanical stress induced on the camera infrastructure. the electronics cannot be responsible for picture variability under temperature and humidity stress for two reasons. the first is that the timing and temperature range of the experiments are so limited that the electronics are not likely to be affected. the second is that relative humidity can affect the optical transparency, and the plastic support can be deformed in such a temperature range. therefore, it is likely that under such testing conditions the plastic support holding the camera optics is affected, influencing the camera acquisition and therefore the images.
8. conclusions
the aim of this paper was to describe the architecture of an autonomous video monitoring system to be employed for the remote control of offshore breeding cages in aquaculture plants. the proposed solution is characterised by energy self-sufficiency thanks to an energy harvesting system based on the use of photovoltaic panels. in order to demonstrate the usability of the system, as a first step simulations were performed to validate its reliability and availability. then, a set of tests was carried out, exploiting a climatic chamber, so as to identify possible image degradations due to different environmental conditions. the results prove that the system can be successfully employed in the proposed application scenario for both winter and summer environmental settings, while the degradations occurring under extreme climatic conditions are still compliant with the video monitoring purpose of the whole system. the current analysis does not include considerations on the possible rusting of soldering, junctions and connectors caused by the operating scenario, which should be included in the future in order to perform a complete system reliability and availability analysis. moreover, while the tests demonstrated the reliability of the image acquisition procedure, a long-term field test, performed on site, is expected to be carried out in the near future in order to study the behaviour of the system in its real deployment scenario.
figure 8. image used for the tests.
figure 9. mean acquisition times.
figure 10. mean ms-ssim trend.
acknowledgement
the authors would like to thank agroittica toscana srl for its support in all the field test activities and alta industries srl for its support for the tests carried out in the climatic chamber.
references
[1] m. kobayashi, s. msangi, m. batka, s. vannuccini, m. m. dey, j. l. anderson, fish to 2030: the role and opportunity for aquaculture, aquaculture economics & management 19 (3) (2015), pp. 282-300. doi: 10.1080/13657305.2015.994240
[2] a. fredheim, t. reve, future prospects of marine aquaculture, proceedings of oceans 2018 mts/ieee, charleston, sc, usa, 22-25 october 2018, pp. 1-8. doi: 10.1109/oceans.2018.8604735
[3] l. angulo aranda, p. salamon, m. banse, m. van leeuwen, future of aquaculture in europe: prospects under current conditions, proceedings of xv eaae congress, "towards sustainable agri-food systems: balancing between markets and society", parma, italy, 29 august-1 september 2017.
[4] d. baldo, a. fort, m. mugnaini, g. peruzzi, a. pozzebon, v. vignoli, reliability and availability evaluation of an autonomous remote video monitoring system for offshore sea farms, proceedings of 2020 imeko tc-19 international workshop on metrology for the sea, naples, italy, 5-7 october 2020. online [accessed 10 december 2021] https://www.imeko.org/publications/tc19-metrosea2020/imeko-tc19-metrosea-2020-04.pdf
[5] d. li, z. wang, s. wu, z. miao, l. du, y. duan, automatic recognition methods of fish feeding behaviour in aquaculture: a review, aquaculture, 528 (15) (2020), 735508. doi: 10.1016/j.aquaculture.2020.735508
[6] r. taborda, a. silva, cosmos: a lightweight coastal video monitoring system, computers & geosciences, 49 (2012), pp. 248-255. doi: 10.1016/j.cageo.2012.07.013
[7] n. valentini, a. saponieri, l. damiani, a new video monitoring system in support of coastal zone management at apulia region, italy, ocean & coastal management, 142 (2017), pp. 122-135. doi: 10.1016/j.ocecoaman.2017.03.032
[8] r. archetti, m.g. gaeta, f. addona, l. cantelli, c. romagnoli, f. sistilli, g. stanghellini, coastal vulnerability assessment through low-cost video monitoring: the case of riccione, proceedings of scacr 2019 international short course/conference on applied coastal research, 2019.
[9] k. d. splinter, d. r. strauss, r. b. tomlinson, assessment of post-storm recovery of beaches using video imaging techniques: a case study at gold coast, australia, ieee transactions on geoscience and remote sensing, 49 (12) (2011), pp. 4704-4716. doi: 10.1109/tgrs.2011.2136351
[10] b. fullerton, m.r. swift, s. boduch, o. eroshkin, g. rice, design and analysis of an automated feed-buoy for submerged cages, aquacultural engineering, 32 (1) (2004), pp. 95-111. doi: 10.1016/j.aquaeng.2004.03.008
[11] s.j. boduch, j.d. irish, aquaculture feed buoy control-part 1: system controller, proceedings of oceans 2006, boston, ma, usa, 18-21 september 2006. doi: 10.1109/oceans.2006.307132
[12] j. d. irish, s. j. boduch, aquaculture feed buoy control-part 2: telemetry, data handling and shore-based control, proceedings of oceans 2006, boston, ma, usa, 18-21 september 2006. doi: 10.1109/oceans.2006.307133
[13] a. p. m. michel, k. l. croff, k. w. mcletchie, j. d. irish, a remote monitoring system for open ocean aquaculture, proceedings of oceans '02 mts/ieee, biloxi, mi, usa, 29-31 october 2002. doi: 10.1109/oceans.2002.1192017
[14] h. wang, w. cai, j. yang, q. chen, design of hd video surveillance system for deep-sea biological exploration, proceedings of 2015 ieee 16th international conference on communication technology (icct), hangzhou, china, 18-20 october 2015, pp. 908-911. doi: 10.1109/icct.2015.7399971
[15] c.c. hung, s.c. tsao, k.h. huang, j.p. jang, h.k. chang, f.c. dobbs, a highly sensitive underwater video system for use in turbid aquaculture ponds, scientific reports, 6 (1) (2016), pp. 1-7. doi: 10.1038/srep31810
[16] l. qi-an, l. peng, f. gui-zeng, l. ping, l. chang-lin, z. jun-xiao, the solar-powered module design of wireless video monitoring system, energy procedia, 17 (2012), pp. 1416-1424. doi: 10.1016/j.egypro.2012.02.261
[17] k. abas, k. obraczka, l. miller, solar-powered, wireless smart camera network: an iot solution for outdoor video monitoring, computer communications, 118 (2018), pp. 217-233. doi: 10.1016/j.comcom.2018.01.007
[18] a. de angelis, a. moschitta, f. russo, p. carbone, a vector approach for image quality assessment and some metrological considerations, ieee transactions on instrumentation and measurement, 58 (1) (2008), pp. 14-25. doi: 10.1109/tim.2008.2004982
figure 11. examples of acquired images for each test phase.
[19] q. jiang, w. gao, s. wang, g. yue, f. shao, y. s. ho, s. kwong, blind image quality measurement by exploiting high-order statistics with deep dictionary encoding network, ieee transactions on instrumentation and measurement, 69 (10) (2020), pp. 7398-7410. doi: 10.1109/tim.2020.2984928
[20] z. zhu, h. wei, g. hu, y. li, g. qi, n. mazur, a novel fast single image dehazing algorithm based on artificial multi-exposure image fusion, ieee transactions on instrumentation and measurement, 70 (2020), pp. 1-23. doi: 10.1109/tim.2020.3024335
[21] g. ceschini, m. mugnaini, a. masi, a reliability study for a submarine compression application, microelectronics reliability, 42 (9-11) (2002), pp. 1377–1380. doi: 10.1016/s0026-2714(02)00153-1
[22] m. catelani, l. ciani, m. mugnaini, v. scarano, r. singuaroli, definition of safety levels and performances of safety: applications for an electronic equipment used on rolling stock, proceedings of 2007 ieee instrumentation & measurement technology conference imtc, warsaw, poland, 1-3 may 2007, 4258348. doi: 10.1109/imtc.2007.379086
[23] m. mugnaini, m. catelani, g. ceschini, a. masi, f. nocentini, pseudo time-variant parameters in centrifugal compressor availability studies by means of markov models, microelectronics reliability, 42 (9-11) (2002), pp. 1373–1376. doi: 10.1016/s0026-2714(02)00152-x
[24] a. fort, f. bertocci, m. mugnaini, v. vignoli, v. gaggii, a. galasso, m. pieralli, availability modeling of a safe communication system for rolling stock applications, proceedings of 2013 ieee international instrumentation and measurement technology conference (i2mtc), minneapolis, mn, usa, 6-9 may 2013, pp. 427–430. doi: 10.1109/i2mtc.2013.6555453
[25] z. wang, e. p. simoncelli, a. c. bovik, multiscale structural similarity for image quality assessment, proceedings of the thirty-seventh asilomar conference on signals, systems & computers, pacific grove, ca, usa, 9-12 november 2003, pp. 1398-1402. doi: 10.1109/acssc.2003.1292216
a metrological approach for multispectral photogrammetry
acta imeko issn: 2221-870x december 2021, volume 10, number 4, 111 - 116
leila es sebar1, luca lombardo2, marco parvis2, emma angelini1, alessandro re3,4, sabrina grassini1
1 dipartimento di scienza applicata e tecnologia, politecnico di torino, corso duca degli abruzzi 24, 10129, turin, italy
2 dipartimento di elettronica e telecomunicazioni, politecnico di torino, corso duca degli abruzzi 24, 10129, turin, italy
3 dipartimento di fisica, università degli studi di torino, via pietro giuria 1, 10125, turin, italy
4 infn, sezione di torino, via pietro giuria 1, 10125, turin, italy
section: research paper
keywords: photogrammetry; multispectral imaging; reference object; metrology; cultural heritage
citation: leila es sebar, luca lombardo, marco parvis, emma angelini, alessandro re, sabrina grassini, a metrological approach for multispectral photogrammetry, acta imeko, vol. 10, no. 4, article 19, december 2021, identifier: imeko-acta-10 (2021)-04-19
section editors: umberto cesaro and pasquale arpaia, university of naples federico ii, italy
received november 4, 2021; in final form december 6, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: leila es sebar, e-mail: leila.essebar@polito.it
abstract
this paper presents the design and development of a three-dimensional reference object for the metrological quality assessment of photogrammetry-based techniques, for application in the cultural heritage field. the reference object was 3d printed, with a nominal manufacturing uncertainty of the order of 0.01 mm. the object was realized as a dodecahedron, and in each face a different pictorial preparation was inserted. the preparations include several pigments, binders, and varnishes, so as to be representative of the materials and techniques used historically by artists. since the reference object's shape, size and uncertainty are known, it is possible to use this object as a reference to evaluate the quality of a 3d model from the metric point of view. in particular, verification of dimensional precision and accuracy is performed using the standard deviation of measurements acquired on the reference object and on the final 3d model. in addition, the object can be used as a reference for uv-induced visible luminescence (uvl) acquisition, since the materials employed are uv-fluorescent. results obtained with visible-reflected and uvl images are presented and discussed.
1. introduction
in the last few years, digitalization techniques and related 3d imaging systems have acquired major importance in several fields, such as industry, medicine, civil engineering, architecture, and cultural heritage. for the last of these, in particular, such technologies can provide multiple contributions, in terms of conservation, data archiving, enhancement, and web sharing [1]-[3]. the existing three-dimensional imaging systems, which acquire measurements through light waves, can be discriminated on the basis of the ranging principle employed [4]. among the several techniques, photogrammetry is a remote, image-based technique that has become widely diffused. in particular, this technique allows for the collection of reliable 3d data of an object, regarding its surface (color and texture) and its geometrical features, without requiring any mechanical interaction with the object itself [5]. indeed, a 3d model is constructed starting from digital images of the object, leading to the creation of its virtual replica. with the increasing diffusion of digitalization techniques and the growing number of users aiming to create 3d models, several concerns have been raised about the results that can be achieved. therefore, even though digitalization practices are widely diffused and can provide realistic replicas of an object, the factors that impact the uncertainty of the final 3d models are numerous, and they must be further investigated.
some authors have summarized the most important factors that affect the uncertainty of 3d imaging and modeling systems [4], [6]. nevertheless, the evaluation of the precision and accuracy of 3d models has not been supported by internationally recognized standards, which are of major importance to avoid archiving and sharing wrong information [7]. some publications have presented different test artifacts or new systems that could be used to test the performance of the photogrammetry approach [6], [8]. in some cases, the accuracy of a final model is determined by comparing the results with some reference data acquired with active systems such as laser scanners [8]-[10]. otherwise, the results are evaluated on the basis of statistical parameters generated by the employed reconstruction software [11]. nevertheless, there is no unique and recognized way to define the quality of a reconstructed model. this paper presents preliminary but promising results, achieved through the design and realization of a low-cost 3d printed reference object specifically designed for assessing the position accuracy and dimensional uncertainty of photogrammetric reliefs. moreover, the reference object, realized in collaboration with the "centro conservazione e restauro la venaria reale", was created with special insets in which different pictorial preparations were inserted, in order to be representative of the materials and techniques used historically by artists. the proposed object could therefore also serve as a reference sample for multispectral imaging applications, a widely diffused 2d technique for the characterization and identification of historic-artistic materials [12], [13]. generally, photogrammetry and multispectral imaging are applied as separate techniques, but their combined application is becoming more and more frequent in the cultural heritage field. indeed, this approach exploits the benefit of mapping multispectral imaging data onto 3d models for complete documentation of the conservation state of an object [14], [15]. even though several reference objects can be used in different fields [16]-[18], in this case a specific reference object has to be employed.
the reference object was tested using a photogrammetric measuring system that allows the acquisition of both visible-reflected (vis) and uv-induced luminescence (uvl) images. in particular, the experimental setup is composed of an ad-hoc modified digital camera capable of working in a wide spectral range (350-1100 nm), several different lighting sources and filters, and an automatic rotating platform. meshroom [19], an open-source software, was used to perform the photogrammetric reconstruction of the reference object. the obtained results were then compared to the physical 3d object in order to estimate the accuracy of the final 3d replica. in addition, a comparison of two different approaches for the realization of the uvl model is presented.
2. 3d reference object
the reference object consists of a 3d printed polymeric dodecahedron. figure 1 shows the prototype, which was designed with the wings 3d software [20]. the reference object was then realized with a projet 2500 plus (3d systems) printer, employing the visijet® m2r-gry resin. the printer allows obtaining an object with a nominal uncertainty of the order of 0.01 mm. the reference object was designed to be suitable for a photogrammetry survey; indeed, its shape was purposely created for achieving specific information regarding the geometrical accuracy of the reconstruction. furthermore, the object was designed to have twelve pentagonal slots, in which different pigment preparations can be inserted. twelve different pigments were chosen to be representative of the principal artistic materials; all the pigments employed are provided by kremer pigmente gmbh & co. kg [21]. in particular, the pigments employed are lead white, white barium sulfate, bone black, magnetite black, raw sienna italian, lead-tin yellow, minium, lac dye, azurite, malachite, verdigris, and lapis lazuli. the twelve painting preparations were realized in order to reproduce the techniques employed in real historical artifacts. therefore, each one consists of several consecutive layers: support, preparation layer, underdrawings, pictorial layer, and varnishes. the preparation layer is made of stucco, realized by adding gypsum to saturation in a solution of water and animal glue (14:1 ratio by weight). then, three different underdrawings were applied directly on top of the stucco layer; the materials employed are charcoal, sanguine, and iron gall ink. these first two layers, namely the stucco preparation and the underdrawings, are the same for all twelve mock-ups. each of the twelve sections was designed to host one single pigment/dye in nine different combinations. indeed, each section was divided into three subsections, based on the binder employed: arabic gum, egg tempera, and linseed oil. each of these subsections is then further divided into three parts: one with a historical varnish (i.e., mastic), one with a modern one, and one left unprotected. this choice is of particular interest for the uvl imaging techniques, because it allows discriminating between the fluorescence of the different pictorial preparations with and without varnish.
figure 1. drawing of the reference object realized by means of the wings 3d software.
figure 2. on the left: top view and scheme of the pictorial preparation. on the right: the proposed reference object.
3. 3D acquisition system

The system employed to acquire the images of the reference object is composed of a modified digital camera, a set of suitable light sources, and an automatic rotating platform.

3.1. Acquisition setup

The images were acquired with a Fujifilm X-T30 digital camera coupled with a Minolta MC Rokkor-PF 50 mm f/1.7 lens. The camera is modified to be suitable for ultraviolet-visible-infrared photography, in the range from 350 nm to 1100 nm. For both VIS and UVL measurements the camera was equipped with a Hoya IR UV-cut filter and a Schott BG40 filter. The image acquisition was performed in a room where absolute darkness could be achieved, in order to avoid any possible interference from unwanted light sources. Filtered 365 nm UV-A LED sources were employed for the acquisition of the UVL images, while standard halogen lamps were used for the VIS images. Table 1 reports the main parameters employed in the image acquisition, and figure 3 shows the complete acquisition setup [22].

The acquisition system is completed by the rotating platform, which allows images of the object to be taken automatically at specified rotation angles. The platform is composed of a circular rotating plate that hosts the object. The plate is connected to a stepper motor (type NEMA 17) through a suitable gearbox (1:18 ratio) in order to increase the torque and the angular resolution and to reduce the rotation speed. A stepper motor driver chip (A4988, Allegro MicroSystems) is used to drive the motor and to move the platform to specified angular positions with a resolution of 0.1°. The platform is controlled by an Arduino Uno development board connected to a computer, where a dedicated application allows the user to set up all the acquisition parameters (such as image number, angle, and speed) and to carry out some basic pre-processing on the acquired images. Furthermore, the platform features a camera-shot trigger output which, connected to the camera, allows the platform to automatically trigger the camera shot at each of the specified object positions. This greatly simplifies the image acquisition procedure, dramatically reducing manual intervention; a minimal sketch of the underlying angle-to-step conversion is given at the end of this section.

3.2. Data processing

The reconstruction of the 3D models was performed by means of Meshroom (version 2021.1.0). This software has an embedded feature, called live reconstruction, that allows images to be imported directly while they are acquired and the previous structure-from-motion coverage to be augmented in an iterative process. In this study, the images were added iteratively in groups of four per step. The first block of images was acquired frontally with respect to the artifact, with an angular step of 15°. Subsequently, two additional sets of images were acquired after flipping the artifact onto different sides, in order to improve the reconstruction of all the faces and their details. Hence, a total of 72 reflected VIS images plus 72 UVL images were collected and processed with a standard pipeline in Meshroom. In particular, the following steps were carried out: camera initialization, natural feature extraction, image matching, feature matching, structure from motion (SfM) creation, dense scene preparation, depth map estimation, depth map filtering, meshing, mesh filtering, and texturing. The reconstruction parameters were all set to default values.

Figure 3. Photogrammetry system for multispectral image acquisition.
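As an illustration of the positioning arithmetic of the platform described in section 3.1, the following minimal Python sketch converts a requested rotation angle into motor steps. The 1:18 gearbox ratio and the 0.1° resolution are taken from the text; the motor's 200 full steps per revolution and the driver call names are assumptions (200 × 18 = 3600 steps per revolution is indeed consistent with the stated 0.1° resolution).

```python
# Hypothetical angle-to-step conversion for the rotating platform.
MOTOR_STEPS_PER_REV = 200          # assumption: typical NEMA 17 full-step count
GEARBOX_RATIO = 18                 # 1:18 reduction, from the text
STEPS_PER_REV = MOTOR_STEPS_PER_REV * GEARBOX_RATIO   # 3600 steps -> 0.1 deg/step

def steps_for_angle(angle_deg: float) -> int:
    """Number of motor steps corresponding to a platform rotation angle."""
    return round(angle_deg * STEPS_PER_REV / 360.0)

# One revolution imaged in 24 shots, i.e. a 15 deg angular step (150 motor steps):
for position in range(24):
    n_steps = steps_for_angle(360.0 / 24)
    # platform.move(n_steps)        # hypothetical call to the motor driver
    # platform.trigger_camera()     # hypothetical use of the trigger output
```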
Figure 4. From left to right: 3D model from reflected VIS images; 3D model obtained after re-texturing the reflected-VIS model with UVL images; 3D model obtained using UVL images directly.

Table 1. The employed acquisition parameters.
Parameter | Value
Image size | 6240 × 4160
Sensor size/type | 23.5 mm × 15.6 mm (APS-C) / X-Trans CMOS
Effective pixels | 26 megapixels
Image format | .RAF
ISO | 200
Focal length | 50 mm
Aperture | f/16
Shutter speed | 2.0 s
Acquired images | 72 (3 revolutions of 24 images)

The aforementioned procedure was applied to reconstruct models from both VIS and UVL images; two different models were thus obtained. Nevertheless, Meshroom also allows a 3D model to be textured using a set of images different from the one used to generate the point cloud and the mesh: it is possible to duplicate the computed dense scene node and import a new folder of images, so that the software generates a texture for the model from the new image set. In order to test this feature, the procedure was applied to the VIS model, re-texturing it with the UVL images. The three textured 3D models were exported in the OBJ format and properly scaled by employing the open-source software Wings 3D. To scale each model, three different dimensions of the object were measured, and the mean scaling factor was computed. Figure 4 shows an image of the final models.

4. Experimental validation

In order to assess the uncertainty of the 3D models reconstructed from reflected VIS images, several distances on the real artifact were measured with a caliper and compared with the same distances on the 3D models. Figure 5 shows the measured distances, which are distributed all around the artifact. The distances were chosen in correspondence with the edges and between opposite faces of the artifact, since they can be easily measured. The measurements on the artifact were collected with a 1/20 mm caliper, whereas the software Wings 3D was employed to measure the corresponding distances on the virtual replica. The model uncertainty was estimated according to the difference $\delta$ reported in (1):

$\delta = |D_r - D_m|$ ,  (1)

where $D_r$ is the distance on the reference object and $D_m$ is the corresponding distance on the 3D model. In addition, the relative uncertainty was calculated as shown in (2):

$\varepsilon = \frac{|D_r - D_m|}{D_r} \times 100\,\%$ .  (2)

Finally, the overall standard deviation was evaluated, as in (3):

$\sigma = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\delta_i^2}$ .  (3)

A numerical sketch of this validation computation is given at the end of this section.

Table 2. Uncertainty estimation of the 3D model from VIS images. The measurements on the reference object, on the VIS model, and on the UVL model are reported. $\delta$ indicates the difference between the measurements (equation (1)), and $\varepsilon$ is the relative uncertainty (equation (2)).
Dimension | Reference object (mm) | VIS 3D model (mm) | δ (mm) | ε (%) | UVL 3D model (mm) | δ (mm) | ε (%)
a | 44.2 | 44.0 | 0.23 | 0.52 | 44.1 | 0.13 | 0.30
b | 44.3 | 44.5 | 0.19 | 0.43 | 44.8 | 0.31 | 0.70
c | 44.1 | 44.2 | 0.11 | 0.25 | 44.7 | 0.491 | 1.11
d | 99.0 | 98.7 | 0.31 | 0.31 | 100.6 | 1.91 | 1.94
e | 71.4 | 71.2 | 0.25 | 0.35 | 71.6 | 0.45 | 0.63
f | 44.3 | 44.0 | 0.35 | 0.79 | 44.8 | 0.85 | 1.93
g | 44.3 | 44.6 | 0.33 | 0.74 | 44.6 | 0.03 | 0.07
h | 44.4 | 44.5 | 0.08 | 0.18 | 44.8 | 0.32 | 0.72
i | 121.6 | 122.2 | 0.55 | 0.45 | 125.3 | 3.15 | 2.58
l | 71.4 | 71.4 | 0.05 | 0.07 | 72.0 | 0.65 | 0.91
m | 71.2 | 71.3 | 0.14 | 0.20 | 71.4 | 0.06 | 0.08
n | 98.8 | 98.9 | 0.12 | 0.12 | 100.8 | 1.88 | 1.90
o | 115.0 | 114.9 | 0.14 | 0.12 | 116.7 | 1.84 | 1.60
p | 71.2 | 70.5 | 0.67 | 0.94 | 71.2 | 0.67 | 0.95
q | 114.5 | 114.8 | 0.34 | 0.30 | 116.6 | 1.76 | 1.53
r | 70.8 | 70.9 | 0.19 | 0.27 | 71.8 | 0.86 | 1.21
s | 70.8 | 70.5 | 0.27 | 0.38 | 71.9 | 1.37 | 1.94
t | 70.9 | 70.9 | 0.03 | 0.04 | 71.7 | 0.77 | 1.09
u | 44.2 | 44.2 | 0.06 | 0.14 | 44.5 | 0.29 | 0.66
v | 70.9 | 70.8 | 0.02 | 0.03 | 71.9 | 1.07 | 1.51

Figure 5. Original VIS images of the reference object with the validation measurement distances.

Regarding the multispectral reconstruction, two different approaches were tested. One UVL model was obtained by re-texturing the mesh already obtained for the VIS model. In this case the coordinates of the measured points do not change, and the UVL model is perfectly superimposable on the VIS one; therefore, there is no need to perform a metric evaluation of this model. The second UVL model was instead reconstructed starting directly from the UVL images. To estimate the quality of this model, the procedure previously presented was applied: the differences between the distances measured on the UVL 3D model and those measured on the reference object were computed. Table 2 reports the distance differences δ in mm and the corresponding relative errors ε.

On the basis of these results, the reconstructed models are quite reliable, with maximum dimensional uncertainties lower than 1 % for the visible model and lower than 2 % for the UV model. The average uncertainties are lower still, reaching about 0.33 % and 1.17 % for the visible and UV models, respectively, while the standard deviations σ of the differences between the real object and the reconstructed models are about 170 µm and 690 µm, respectively. The higher uncertainty obtained for the UV model is probably due to a higher colour uniformity of the acquired images, which affected the reconstruction process. The reconstruction accuracy can therefore probably be improved by tuning the image acquisition procedure and the reconstruction parameters.
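The following minimal Python sketch implements the validation metrics of equations (1)-(3), using the first three rows of Table 2 as input. The small differences with respect to the printed δ values (e.g. 0.20 vs. 0.23 for dimension a) suggest that the tabulated δ and ε were computed from unrounded measurements.

```python
import numpy as np

# Distances for dimensions a-c of Table 2 (reference object vs. VIS model), in mm.
d_ref   = np.array([44.2, 44.3, 44.1])
d_model = np.array([44.0, 44.5, 44.2])

delta = np.abs(d_ref - d_model)                         # eq. (1), mm
eps   = delta / d_ref * 100.0                           # eq. (2), %
sigma = np.sqrt(np.sum(delta ** 2) / (delta.size - 1))  # eq. (3), mm

print(delta, eps, sigma)
```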
5. Conclusions

This paper presented the design and development of an artifact that can be used as a metric reference object for assessing the accuracy and dimensional uncertainty of 3D models obtained through photogrammetry. The object is 3D printed and has twelve insets in which several pictorial preparations were inserted; the object is therefore also suitable as a reference for multispectral imaging. The reference object was employed to test the photogrammetric measurement system from a metric point of view. A comparison between several distances acquired both on the mechanical reference object and on the reconstructed VIS 3D model was carried out: the maximum dimensional uncertainty is lower than 1 % and the average uncertainty is about 0.33 %. Moreover, the reconstruction of the model from UVL images was performed using two different approaches, and the obtained results were compared with the real object. From this comparison, it is possible to state that the approach involving the creation of a VIS model and its subsequent re-texturing with UVL images achieved the best results.

Acknowledgement

The authors would like to acknowledge Dr. Paola Buscaglia from Centro Conservazione e Restauro "La Venaria Reale" for the support related to the realization of the pictorial preparations.

References
[1] M. Russo, F. Remondino, G. Guidi, Principali tecniche e strumenti per il rilievo tridimensionale in ambito archeologico, Archeologia e Calcolatori, 22 (2011) (in Italian), pp. 169-198, ISSN 1120-6861.
[2] L. Es Sebar, L. Iannucci, C. Gori, A. Re, M. Parvis, E. Angelini, S. Grassini, In-situ multi-analytical study of ongoing corrosion processes on bronze artworks exposed outdoors, Acta IMEKO 10(1) (2021), pp. 241-249. DOI: 10.21014/acta_imeko.v10i1.894
[3] I. M. E. Zaragoza, G. Caroti, A. Piemonte, The use of image and laser scanner survey archives for cultural heritage 3D modelling and change analysis, Acta IMEKO 10(1) (2021), pp. 114-121. DOI: 10.21014/acta_imeko.v10i1.847
[4] J. A. Beraldin, M. Rioux, L. Cournoyer, F. Blais, M. Picard, J. Pekelsky, Traceable 3D imaging metrology, Proc. SPIE Videometrics IX 6491 (2007). DOI: 10.1117/12.698381
[5] T. Schenk, Introduction to Photogrammetry, The Ohio State University, Columbus, 2005, 106. Online [accessed 3 December 2021]: https://www.mat.uc.pt/~gil/downloads/introphoto.pdf
[6] J. A. Beraldin, F. Blais, S. El-Hakim, L. Cournoyer, M. Picard, Traceable 3D imaging metrology: evaluation of 3D digitizing techniques in a dedicated metrology laboratory, Proc. 8th Conference on Optical 3-D Measurement Techniques, July 9-12, 2007, Zurich, Switzerland, pp. 310-318.
[7] I. Toschi, A. Capra, L. De Luca, J. A. Beraldin, On the evaluation of photogrammetric methods for dense 3D surface reconstruction in a metrological context, ISPRS Technical Commission V Symposium, WG1 2(5) (2014), pp. 371-378.
[8] G. J. Higinio, B. Riveiro, J. Armesto, P. Arias, Verification artifact for photogrammetric measurement systems, Optical Engineering 50(7) (2011), art. 073603. DOI: 10.1117/1.3598868
[9] C. Buzi, I. Micarelli, A. Profico, J. Conti, R. Grassetti, W. Cristiano, F. Di Vincenzo, M. A. Tafuri, G. Manzi, Measuring the shape: performance evaluation of a photogrammetry improvement applied to the Neanderthal skull Saccopastore 1, Acta IMEKO 7(3) (2018). DOI: 10.21014/acta_imeko.v7i3.597
[10] A. Koutsoudis, B. Vidmar, G. Ioannakis, F. Arnaoutoglou, G. Pavlidis, C. Chamzas, Multi-image 3D reconstruction data evaluation, Journal of Cultural Heritage 15(1) (2014), pp. 73-79. DOI: 10.1016/j.culher.2012.12.003
[11] A. Calantropio, M. P. Deseilligny, F. Rinaudo, E. Rupnik, Evaluation of photogrammetric block orientation using quality descriptors from statistically filtered tie points, International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences 42(2) (2018).
[12] J. Dyer, G. Verri, J. Cupitt, Multispectral imaging in reflectance and photo-induced luminescence modes: a user manual, British Museum, 2013.
[13] A. Cosentino, Identification of pigments by multispectral imaging; a flowchart method, Heritage Science 2(8) (2014). DOI: 10.1186/2050-7445-2-8
[14] S. B. Hedeaard, C. Brøns, I. Drug, P. Saulins, C. Bercu, A. Jakovlev, L. Kjær, Multispectral photogrammetry: 3D models highlighting traces of paint on ancient sculptures, DHN (2019), pp. 181-189.
[15] E. Nocerino, D. H. Rieke-Zapp, E. Trinkl, R. Rosenbauer, E. M. Farella, D. Morabito, F. Remondino, Mapping VIS and UVL imagery on 3D geometry for non-invasive, non-contact analysis of a vase, International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 42(2) (2018), pp. 773-780. DOI: 10.5194/isprs-archives-XLII-2-773-2018
[16] M. Parvis, S. Corbellini, L. Lombardo, L. Iannucci, S. Grassini, E. Angelini, Inertial measurement system for swimming rehabilitation, 2017 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Rochester, MN, USA, 8-10 May 2017, pp. 361-366. DOI: 10.1109/MeMeA.2017.7985903
[17] A. Gullino, M. Parvis, L. Lombardo, S. Grassini, N. Donato, K. Moulaee, G. Neri, Employment of Nb2O5 thin-films for ethanol sensing, 2020 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Dubrovnik, Croatia, May 25-28, 2020, pp. 1-6. DOI: 10.1109/I2MTC43012.2020.9128457
[18] L. Iannucci, L. Lombardo, M. Parvis, P. Cristiani, R. Basseguy, E. Angelini, S. Grassini, An imaging system for microbial corrosion analysis, 2019 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Auckland, New Zealand, May 20-23, 2019, pp. 1-6. DOI: 10.1109/I2MTC.2019.8826965
[19] AliceVision, Meshroom 3D reconstruction software. Online [accessed 3 December 2021]: https://alicevision.org/#meshroom
[20] Wings 3D. Online [accessed 3 December 2021]: http://www.wings3d.com
[21] Kremer Pigmente. Online [accessed 3 December 2021]: https://www.kremer-pigmente.com/en
[22] L. Es Sebar, S. Grassini, M. Parvis, L. Lombardo, A low-cost automatic acquisition system for photogrammetry, 2021 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), 2021, pp. 1-6. DOI: 10.1109/I2MTC50364.2021.9459991

An informed type A evaluation of standard uncertainty valid for any sample size greater than or equal to 1

ACTA IMEKO, ISSN: 2221-870X, June 2022, Volume 11, Number 2, 1-5

Carlo Carobbi(1)
(1) Department of Information Engineering, Università degli Studi di Firenze, Via Santa Marta 3, 50139 Firenze, Italy

Section: Research paper
Keywords: measurement uncertainty; type A evaluation; pooled variance; Bayesian inference; informative prior
Citation: Carlo Carobbi, An informed type A evaluation of standard uncertainty valid for any sample size greater than or equal to 1, Acta IMEKO, vol. 11, no. 2, article 29, June 2022, identifier: IMEKO-ACTA-11 (2022)-02-29
Section editor: Francesco Lamonaca, University of Calabria, Italy
Received October 1, 2021; in final form February 23, 2022; published June 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Carlo Carobbi, e-mail: carlo.carobbi@unifi.it

Abstract: An informed type A evaluation of standard uncertainty is derived here on the basis of Bayesian analysis. The result is mathematically simple, easily interpretable, applicable both in the theoretical framework of the Guide to the Expression of Uncertainty in Measurement (propagation of standard uncertainties) and in that of Supplement 1 to the Guide (propagation of distributions), and valid for any size, greater than or equal to 1, of the sample of present observations. The evaluation consistently addresses prior information in the form of the sample variance of a series of recorded experimental observations and in the form of an educated guess based on an expert's experience. It turns out that the distinction between type A and type B evaluation is, in this context, contrived.

1. Introduction

The quantification of the type A uncertainty contribution in the case of a small sample ($n = 1, 2, 3$) is a subject of research and passionate debate in Working Group 1 of the Joint Committee for Guides in Metrology (JCGM WG1), the standards working group involved in the maintenance and development of the Guide to the Expression of Uncertainty in Measurement (GUM, [1]) and its supplements. The topic is so deeply felt that, at the end of 2019, the "JCGM WG1 workshop on type A evaluation of measurement uncertainty for a small set of observations" was held at the Bureau International des Poids et Mesures (BIPM, Sèvres, Paris). The problem arose following the negative reaction to the Committee Draft (CD) of the revision of the GUM, circulated at the end of 2014 [2]. One of the most criticized issues of the draft of the "new GUM" is the type A evaluation of uncertainty based on the use of a Student's t probability density function having $n-1$ degrees of freedom, shifted by the mean $\bar{y}$ of the $n$ observations $y_i$, $i = 1, 2, \ldots, n$, and scaled by the standard deviation of the mean $s/\sqrt{n}$, where

$\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i$  (1)

and

$s^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2$ .  (2)

By following this approach, the type A evaluation of standard uncertainty is

$u(y) = \sqrt{\frac{n-1}{n-3}}\,\frac{s}{\sqrt{n}}$ ,  (3)

which is not valid for a sample having a size of less than $n = 4$. Such a solution originates from a Bayesian approach to inference, where improper priors (Jeffreys priors) are adopted for the mean $\mu$ and variance $\sigma^2$ parameters of the parent normal probability density function (pdf), i.e.

$p(\mu) \sim \text{const.}$  (4)

and

$p(\sigma^2) \sim p_0(\sigma^2)$ ,  (5)

where $p_0(\sigma^2)$ represents the improper prior adopted for $\sigma^2$, namely

$p_0(\sigma^2) \propto \frac{1}{\sigma^2}$ .  (6)

Note that the information conveyed by these priors is only that strictly relevant to the character of the two parameters: $\mu$ is a location parameter and $\sigma^2$ is a scale parameter. In contrast, practitioners in testing and calibration have much richer information about the variability of the measurement process than is represented by (6).
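A direct implementation of eq. (3) makes its limitation evident. The sketch below, with invented observation values, raises an error for $n < 4$, which is precisely the issue debated in JCGM WG1.

```python
import numpy as np

def type_a_small_sample(y):
    """Type A standard uncertainty of eq. (3); undefined for n < 4."""
    y = np.asarray(y, dtype=float)
    n = y.size
    if n < 4:
        raise ValueError("eq. (3) requires n >= 4")
    s = np.std(y, ddof=1)                       # sample standard deviation, eq. (2)
    return np.sqrt((n - 1) / (n - 3)) * s / np.sqrt(n)

print(type_a_small_sample([10.1, 9.8, 10.0, 10.3]))  # fine
# type_a_small_sample([10.1, 9.8])                   # raises ValueError
```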
The Bayesian approach is the one followed by Supplement 1 to the GUM (GUMS1, [3]), and the intent of JCGM WG1 was precisely to align the GUM with GUMS1 by attributing the same Student's t probability density to a sample of repeated observations. The problem is that, by doing so, it is possible to propagate the distributions (as foreseen by GUMS1) but it is not possible to propagate the standard uncertainties (as foreseen by the GUM) if the sample size is less than $n = 4$. This is generally not acceptable (e.g., in destructive testing), particularly if implemented as a standard (mandatory) method. The GUM and GUMS1 approaches are therefore inconsistent: they produce substantially different results when random variability is a significant contribution to measurement uncertainty and the number of measurements used for its estimate is low [4]. JCGM WG1 has seemingly not yet identified a way out of the inconsistency between the GUM and GUMS1.

Both frequentists and Bayesians can agree on the fact that the estimate of the average value obtainable from such a small sample is not very reliable. In favour of the Bayesian approach to inference, one can observe that there is no other way to enrich the estimate than to use prior information on the variability of the measurement process to complement the meagre experimental observation. In this sense, a Bayesian approach is useful because, differently from the frequentist approach, it provides a method for combining prior information with experimental observation.

From the applicative point of view, these concepts are relevant to the evaluation of measurement repeatability. Measurement repeatability quantifies the variability of measurement results obtained under specified repeatability conditions, and it is an essential contribution to measurement uncertainty in every field of experimental activity. In the context of testing and calibration, if a stable item is re-tested or re-calibrated, the new measurement results are expected to be compatible with the old ones; two distinct operators should provide compatible measurement results when testing or calibrating the same item. Measurement repeatability is thus a reference for the qualification of personnel, and monitoring it contributes to assuring the validity of test and calibration results. In an accreditation regime [5], measurement repeatability must be kept under statistical control: periodic assessments are carried out by the accreditation body, aimed at verifying, through an appropriate experimental check, the robustness of the estimate of measurement repeatability; see [6], equation (6), p. 5 (in Italian), and [7], clause 6.6.3.

The GUM provides the type A evaluation of standard uncertainty as the tool to quantify measurement repeatability. Type A evaluation is based on a frequentist approach, thus implying that information on the quality of the estimate of measurement uncertainty must be conveyed to the user; this is done in terms of effective degrees of freedom. GUMS1 adopts a knowledge-based (as opposed to frequentist) approach to model measurement repeatability: the quality of the estimate of measurement uncertainty is accounted for by the available prior knowledge, which eventually determines the width of the coverage interval. The use of numerical methods for professional (accredited) evaluation of measurement uncertainty is expected to increase in the future.
Indeed, the GUMS1 numerical method, which is based on the propagation of probability distributions, accounts for possible non-linearity of the measurement model, is simple, is less prone to mistakes (partial derivatives are not required), and provides all the available information about the measurand in terms of its probability distribution. Further, the use of numerical methods is practically unavoidable when the measurement model is complex and/or the measurand is an ensemble of scalar quantities (a vector). At the other extreme, the analytical method (based on the law of propagation of uncertainty) is consolidated and is the one predominantly adopted nowadays; a further point of strength of the analytical method is its great pedagogical value. Achieving consistency between the analytical and numerical approaches to measurement uncertainty quantification is therefore desirable, since both have points of strength and are expected to coexist in the future.

What is proposed here is a knowledge-based approach to the type A evaluation of measurement uncertainty and, specifically, of measurement repeatability. An estimate of the repeatability of a measurement system may be available that is representative of its performance in testing. This knowledge may be derived from:
• systematic recording of periodic verifications of the measurement system;
• analysis and quantification of the individual sources of variability in the measurement chain;
• normative references (for standard measurement systems used in testing);
• information from manufacturers of measuring instruments;
• experience with the specific measurement chain or similar ones.

As in GUMS1, use is made here of Bayesian inference, since it provides a straightforward method to incorporate prior knowledge. Differently from the GUMS1 Bayesian approach, here an informative prior pdf is assigned to $\sigma^2$. To obtain analytical results, useful in the framework of the law of propagation of uncertainty, a normal probability model is assumed with a non-informative prior pdf for the mean and a conjugate prior pdf for the variance. In section 2 the theoretical approach is described, and in subsection 2.1 it is compared with another approach [8] previously presented in the scientific literature and proposed by a member of JCGM WG1. In section 3 the theoretical results are applied to a practical case, based on the experience of the author as an assessor of accredited testing laboratories. Conclusions follow in section 4. Finally, an appendix is devoted to the mathematical derivations supporting the results presented in section 2.

2. Type A evaluation when prior information is available

By prior information we mean here information on the variability of the measurement process obtained before a certain test (or calibration) is carried out. Let us consider the case in which the a priori information consists of a relatively long series of experimental observations. The important hypothesis that must be verified is that the previous experimental observations were obtained under repeatability conditions that are representative of those occurring during the test, both as regards the measurement system and the measurand. If this is not verified, the a priori information is not valid to represent the variability observed during the test.
This hypothesis is necessarily realized by following an experimental procedure based on physical modelling, aimed at identifying the causes of the variability and at limiting its effects. It is the experimenter's task to ensure that the hypothesis is verified in practice.

In mathematical terms, the Bayesian inference is made on the mean value $\mu$ and the variance $\sigma^2$ of a Gaussian pdf, assuming an improper uniform pdf for $\mu$ and a scaled inverse $\chi^2$ pdf [9], table A.1, p. 576, for $\sigma^2$. The choice of the improper uniform pdf for $\mu$ is justified by the desire to avoid introducing an a-priori bias on the best estimate of the measurand value, which in this way depends solely on the experimental observations obtained during the test. The choice of the scaled inverse $\chi^2$ pdf for $\sigma^2$ is justified by the desire to incorporate prior information while retaining the well-known Student's t as the posterior pdf of $\mu$ [9], section 3.3, p. 67. The parameters of the scaled inverse $\chi^2$ pdf are the prior variance $\sigma_0^2$ and the associated degrees of freedom $\nu_0$. Another advantage stemming from the use of the scaled inverse $\chi^2$ pdf is the immediate physical interpretation of the degrees of freedom $\nu_0$ as the number of measurements that were necessary to derive the prior estimate $\sigma_0^2$, minus 1. At the same time, $\nu_0$ can be linked to the degree of credibility attributed to $\sigma_0^2$ as an estimate of $\sigma^2$, as is demonstrated here through the use of (11).

With this choice of the prior pdfs (see the appendix for the derivation) we obtain, for the posterior marginal pdf of $\mu$, a Student's t pdf with degrees of freedom

$\nu_n = \nu_0 + (n-1)$ ,  (7)

shifted in

$\mu_n = \bar{y}$  (8)

and with scaling factor $\sigma_n^2/n$, where

$\sigma_n^2 = \frac{\nu_0\,\sigma_0^2 + (n-1)\,s^2}{\nu_0 + (n-1)}$ .  (9)

According to this approach, the type A evaluation of standard uncertainty will be

$u(y) = \sqrt{\frac{\nu_n}{\nu_n - 2}}\,\frac{\sigma_n}{\sqrt{n}}$ .  (10)

We observe from (7) that the degrees of freedom $\nu_0$ of the prior evaluation of variability, $\sigma_0$, add up to the degrees of freedom $n-1$ with which the variability $s$ is evaluated during testing. The result is valid if the assumption that repeatability conditions are kept the same both in the prior investigation and in testing is verified. The estimate (8) is determined by the repeated observations obtained during the testing phase, because a constant and improper prior pdf for $\mu$ has been chosen. The result (9) is particularly simple and convincing: the variance $\sigma_n^2$, which quantifies the variability of the measurement process, is the result of pooling the prior variance $\sigma_0^2$ and the sample variance observed in testing, $s^2$, through a weighted average, the weights being the corresponding degrees of freedom. The type A evaluation of standard uncertainty passes from (3), in the absence of prior information, to (10), which is valid also for $n = 1$ provided that $\nu_0 \geq 3$.

The following consideration is also of interest. The prior information about the variability of the measurement process may be derived, for example, from the assessment of an expert. A simple form of this prior information is a best estimate $\sigma_0$ and a quantile $\sigma_\alpha$ that the expert judges to be exceeded with a small probability $\alpha$. A link can be established among $\sigma_\alpha$, $\alpha$ and $\nu_0$ for a given $\sigma_0$. This can be done through the cumulative distribution function of the scaled inverse $\chi^2$ prior of $\sigma^2$ evaluated at $\sigma_\alpha^2$, namely

$\frac{\Gamma\!\left(\frac{\nu_0}{2},\,\frac{\nu_0\,\sigma_0^2}{2\,\sigma_\alpha^2}\right)}{\Gamma\!\left(\frac{\nu_0}{2}\right)} = 1 - \alpha$ ,  (11)

where $\Gamma(a, z) = \int_z^{\infty} t^{a-1}\,\mathrm{e}^{-t}\,\mathrm{d}t$ is the upper incomplete gamma function with parameters $a$ and $z$, and $\Gamma(\cdot)$ is the gamma function. If $\sigma_0$ is known, then (11), for any given $\alpha$, implicitly provides a value for $\nu_0$.
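Equation (11) can be solved for $\nu_0$ numerically. The sketch below uses SciPy's regularized upper incomplete gamma function, gammaincc(a, z) = Γ(a, z)/Γ(a), and a root finder; the bracketing interval is an assumption chosen wide enough for practical cases. For example, $\sigma_\alpha/\sigma_0 = 2.5$ with $\alpha = 5$ % yields $\nu_0 \approx 4$, as can also be read from figure 1.

```python
from scipy.special import gammaincc    # regularized upper incomplete gamma
from scipy.optimize import brentq

def nu0_from_quantile(sigma0, sigma_alpha, alpha):
    """Solve eq. (11) for the prior degrees of freedom nu0."""
    f = lambda nu0: gammaincc(
        nu0 / 2.0, nu0 * sigma0 ** 2 / (2.0 * sigma_alpha ** 2)) - (1.0 - alpha)
    return brentq(f, 0.1, 200.0)       # assumed bracket for the root search

print(nu0_from_quantile(1.0, 2.5, 0.05))   # ~3.7, i.e. approximately 4
```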
This relationship can be represented through a plot such as the one in figure 1. Note from figure 1 that the larger $\sigma_\alpha/\sigma_0$ is, the smaller $\nu_0$ is for a given $\alpha$, and the smaller $\alpha$ is for a given $\sigma_\alpha/\sigma_0$, the larger $\nu_0$ is. The idea of pooling prior variability is not new in the context of the GUM: it is briefly mentioned in clause 6.4.9.6 of GUMS1 and in 9.2.6 of the CD of the GUM revision [2].

2.1. Comparison with the type A evaluation obtained truncating the improper prior for σ²

In a recent paper [8], Cox and Shirono propose a solution to the problem of the type A evaluation in the case of a small sample, where $\sigma_t$ is an upper bound (truncation) value for the improper prior of $\sigma^2$, i.e.

$p(\sigma^2) \sim \begin{cases} \dfrac{1}{\sigma^2} & 0 < \sigma^2 \leq \sigma_t^2 \\[4pt] 0 & \text{otherwise} \end{cases}$ .  (12)

The prior pdf of $\mu$ is, in [8] as in this work, a constant improper prior. By following [8], the type A evaluation of standard uncertainty can be expressed as $\lambda\,s/\sqrt{n}$, where $\lambda$, given in (13) of [8], is a non-negative function of $s$, $n$ and $\sigma_t$, defined for $n \geq 2$ through a ratio of upper incomplete gamma functions. As shown in figure 2, it turns out that $\lambda s \leq \sigma_t$ even when $\sigma_t \leq s$ and $n$ is arbitrarily large. This is problematic because, when the observed variability is more credible (larger number of degrees of freedom) than the prior knowledge of variability, the observed variability, not its prior estimate, should dominate the type A evaluation. In other words, setting an upper bound on $\sigma^2$ is acceptable provided that irrefutable evidence of an upper truncation value is available; otherwise, setting a large value with an associated small probability of being exceeded is a more cautious approach. Another limitation of the approach in [8] is that necessarily $n \geq 2$ (see (13); $\lambda = 0$ if $n = 1$), while, according to the solution proposed here, the case $n = 1$ is also tractable.

3. Application in the context of accreditation to ISO/IEC 17025

National accreditation bodies require the evaluation of the measurement repeatability of the test methods in the scope of accreditation. Such evaluation is carried out by testing laboratories through the periodic recording of measurement results obtained under conditions representative of actual testing. An estimate $\sigma_0$ with corresponding degrees of freedom $\nu_0$ is thus obtained. How can this prior knowledge be incorporated into the test outcome? We here provide a numerical example in the context of electromagnetic compatibility (EMC) testing.

Suppose that the estimate of the non-repeatability of the radiated emission measurement chain is $\sigma_0 = 0.8$ dB with $\nu_0 = 9$. Testing two times ($n = 2$), an absolute deviation between the measured values of 1.5 dB is obtained, hence $s = 1.5/\sqrt{2}$ dB $= 1.06$ dB. By pooling the standard deviations $\sigma_0$ and $s$ we have $\nu_n = (n-1) + \nu_0 = 1 + 9 = 10$, $\sigma_n = 0.76$ dB from (9), and, from (10), $u = \sqrt{10/8}\,\cdot 0.76/\sqrt{2} = 0.60$ dB.

As a second example, consider the case where an expert in the specific test method provides a guess $\sigma_0 = 1$ dB, based on experience with similar test systems. The expert is also confident that, with a low probability $\alpha = 5$ %, $\sigma$ exceeds $\sigma_\alpha = 2.5$ dB. This state of knowledge corresponds to approximately (see figure 1) $\nu_0 = 4$, from which $\nu_n = 5$ (instead of 10, as in the previous example), $\sigma_n = 0.86$ dB (instead of 0.76 dB), and $u = 0.78$ dB (instead of 0.60 dB).
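The pooled evaluation used in the examples above can be implemented in a few lines. The sketch below implements eqs. (7)-(10) as reconstructed in section 2; the observation values and the prior are invented for illustration, and the evaluation remains defined even for a single observation, provided that $\nu_0 \geq 3$.

```python
import numpy as np

def informed_type_a(y, sigma0, nu0):
    """Informed type A evaluation of eqs. (7)-(10)."""
    y = np.asarray(y, dtype=float)
    n = y.size
    s2 = np.var(y, ddof=1) if n > 1 else 0.0      # sample variance; none for n = 1
    nu_n = nu0 + (n - 1)                           # eq. (7)
    sigma_n2 = (nu0 * sigma0 ** 2 + (n - 1) * s2) / nu_n    # eq. (9), pooled variance
    u = np.sqrt(nu_n / (nu_n - 2.0)) * np.sqrt(sigma_n2 / n)  # eq. (10), needs nu_n > 2
    return y.mean(), u                             # best estimate (8) and uncertainty

print(informed_type_a([10.2], sigma0=0.1, nu0=9))  # valid even for n = 1
```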
4. Conclusions

Reliable statistical techniques to incorporate prior knowledge into the so-called "type A" evaluation of standard uncertainty should be identified in order to make the evaluation more robust in the case of a small sample. The use of these statistical techniques should be promoted and confidently accepted in accredited testing, provided that competence requirements are fulfilled. GUMS1 already provides such a tool by pooling the prior variance and the sample variance. A Bayesian derivation of the GUMS1 pooled variance has been illustrated here, along with a more flexible interpretation aimed at addressing expert knowledge as a useful source of reliable information.

Figure 1: Plots of the degrees of freedom $\nu_0$ as a function of the ratio $\sigma_\alpha/\sigma_0$, obtained by solving the implicit equation (11) for three values of the probability $\alpha$ (see the legend).
Figure 2: Plots of $\lambda s$ as a function of $s$ for selected values of $n$ and $\sigma_t$ (see the legend). Note that $\lambda s \leq \sigma_t$ for any value of $s$ and for any value of $n$.

According to the results described in this work, there is no need to distinguish between type A and type B evaluations, since a homogeneous mathematical treatment is used to address prior information about variability (whether it originates from experimental evidence or from an expert's experience) and its pooling with the present observations.

The main ideas and results in this work were presented by the author during the 2019 JCGM WG1 workshop mentioned in the introduction. I would like to acknowledge that, during the same workshop, Anthony O'Hagan (Emeritus Professor, University of Sheffield) also proposed the use of the scaled inverse $\chi^2$ pdf to solve the problem of the type A evaluation in the case of a small sample size. His formulation of the solution (still unpublished) was different from mine, but it is remarkable that two researchers with completely different backgrounds arrived at similar proposals.

References
[1] GUM: BIPM, IEC, IFCC, ILAC, ISO, IUPAC, IUPAP and OIML, Guide to the Expression of Uncertainty in Measurement, JCGM 100:2008, GUM 1995 with minor corrections.
[2] JCGM 100 201x CD (Committee Draft), Evaluation of measurement data - Guide to uncertainty in measurement, circulated in December 2014.
[3] GUMS1: BIPM, IEC, IFCC, ILAC, ISO, IUPAC, IUPAP and OIML, Supplement 1 to the 'Guide to the Expression of Uncertainty in Measurement' - Propagation of distributions using a Monte Carlo method, JCGM 101:2008.
[4] W. Bich, M. Cox, R. Dybkaer, C. Elster, Revision of the 'Guide to the Expression of Uncertainty in Measurement', Metrologia 49 (2012), pp. 702-705. DOI: 10.1088/0026-1394/49/6/702
[5] ISO/IEC 17025, Conformity assessment - General requirements for the competence of testing and calibration laboratories, Int. Org. Standardization, Geneva, Switzerland (2017).
[6] SINAL DT-0002/6, Guida al calcolo della ripetibilità di un metodo di prova ed alla sua verifica nel tempo, rev. 0, dicembre 2007.
[in Italian]
[7] B. Magnusson, U. Örnemark (eds.), Eurachem Guide: The Fitness for Purpose of Analytical Methods - A Laboratory Guide to Method Validation and Related Topics, 2nd ed., 2014. ISBN 978-91-87461-59-0. Online [accessed 22 April 2022]: https://www.eurachem.org/index.php/publications/guides/mv
[8] M. Cox, T. Shirono, Informative Bayesian type A uncertainty evaluation, especially applicable to a small number of observations, Metrologia 54 (2017), pp. 642-652. DOI: 10.1088/1681-7575/aa787f
[9] A. Gelman, A. Vehtari, J. B. Carlin, H. Stern, D. B. Dunson, D. B. Rubin, Bayesian Data Analysis, third edition, CRC Press, 2014, ISBN 9781439840955.

Appendix

We here derive the marginal posterior pdf of $\mu$, given the prior information in terms of the prior pdfs of $\mu$ and $\sigma^2$ and the set of observations $y_i$, $i = 1, 2, \ldots, n$. A uniform prior pdf is assigned to $\mu$,

$p(\mu) \sim \text{const.}$ ,  (14)

while the prior of $\sigma^2$ is a scaled inverse $\chi^2$ pdf with prior variance $\sigma_0^2$ and associated degrees of freedom $\nu_0$,

$\sigma^2 \sim \text{Inv-}\chi^2\!\left(\nu_0, \sigma_0^2\right)$ .  (15)

$\mu$ and $\sigma^2$ are a-priori independent; the joint prior pdf of $\mu$ and $\sigma^2$ is therefore, from (14) and (15),

$p(\mu, \sigma^2) \propto \left(\sigma^2\right)^{-\left(\frac{\nu_0}{2}+1\right)} \exp\!\left(-\frac{\nu_0\,\sigma_0^2}{2\,\sigma^2}\right)$ .  (16)

The likelihood of the observations is easily obtained as [9]

$l(\mu, \sigma^2; \mathbf{y}) \propto \left(\sigma^2\right)^{-\frac{n}{2}} \exp\!\left(-\frac{(n-1)\,s^2 + n\,(\bar{y}-\mu)^2}{2\,\sigma^2}\right)$ ,  (17)

where $\mathbf{y}$ is a vector representing the set of observations $y_i$, $i = 1, 2, \ldots, n$. By Bayes' theorem, the joint posterior pdf of $\mu$ and $\sigma^2$ is given by

$p(\mu, \sigma^2 \mid \mathbf{y}) \propto l(\mu, \sigma^2; \mathbf{y})\, p(\mu, \sigma^2)$ .  (18)

Substituting (16) and (17) into (18) and marginalizing with respect to $\sigma^2$, it is readily obtained that

$p(\mu \mid \mathbf{y}) \propto \left[1 + \frac{n\,(\mu - \bar{y})^2}{\nu_0\,\sigma_0^2 + (n-1)\,s^2}\right]^{-\frac{\nu_n + 1}{2}}$ ,  (19)

where $p(\mu \mid \mathbf{y})$ represents the marginal posterior pdf of $\mu$. It is evident from (19) that $p(\mu \mid \mathbf{y})$ is a Student's t pdf shifted in $\bar{y}$ and scaled by $\sigma_n^2/n$, where $\sigma_n^2$ is given by (9).

Measurements of helium permeation in Zerodur glass used for the realisation of quantum pascal

ACTA IMEKO, ISSN: 2221-870X, June 2022, Volume 11, Number 2, 1-4

Ardita Kurtishaj(1), Ibrahim Hameli(1), Arber Zeqiraj(2), Sefer Avdiaj(1)
(1) Department of Physics, University of Prishtina "Hasan Prishtina", 10000 Prishtinë, Kosovo
(2) Department of Materials and Metallurgy, University of Mitrovica "Isa Boletini", 40000 Mitrovicë, Kosovo

Section: Research paper
Keywords: permeation; helium; diffusion; vacuum; metrology
Citation: Ardita Kurtishaj, Ibrahim Hameli, Arber Zeqiraj, Sefer Avdiaj, Measurements of helium permeation in Zerodur glass used for the realisation of quantum pascal, Acta IMEKO, vol. 11, no. 2, article 27, June 2022, identifier: IMEKO-ACTA-11 (2022)-02-27
Section editor: Sabrina Grassini, Politecnico di Torino, Italy
Received August 3, 2021; in final form March 1, 2022; published June 2022
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Sefer Avdiaj, e-mail: sefer.avdiaj@uni-pr.edu

Abstract: In the new optical pressure standard, ultra-low expansion glass cavities were proposed to measure helium refractivity for a new realisation of the unit of pressure, the pascal. However, it was noticed that the use of this type of material causes some difficulties; one of the main problems of ULE glass is its pumping effect for helium. Therefore, Zerodur glass was proposed instead of ULE as a material for the cavity. This proposal was made by the vacuum metrology team of the Physikalisch-Technische Bundesanstalt (PTB) within the QuantumPascal project. In order to calculate the flow of helium gas through Zerodur glass, one has to know the permeation constant K; moreover, modelling the time dependence of the flow requires knowledge of the diffusion constant D as well. The relation between them is given by K = S · D, where S is the solubility of helium in the glass. In our work we measured the permeation of helium gas in Zerodur. Measurements were performed in the temperature range 80 °C - 120 °C. Based on our results, we consider that the Zerodur material has the potential to be used as cavity material for the new quantum standard of pressure.
1. Introduction

Pressure is traditionally defined as force per unit area; therefore, to realise the unit of pressure, the pascal, the most obvious method is to apply a known force to a known surface. Essentially, this is how pressure has been measured since 1640, when Evangelista Torricelli invented the mercury barometer [1]. Nonetheless, since pressures in the vacuum range do not exert large forces, it becomes more convenient to formulate the pascal as an amount of energy per unit volume [2]. Consequently, at low pressures the pascal is realized through the ideal gas law, utilizing the optical measurement of gas density [3]. One of the methods serving this purpose relies on Fabry-Perot optical cavities for the measurement of the refractivity of the gas being used [4], [5]. Cavities made of ultra-low expansion (ULE) glass were initially proposed to measure helium refractivity for the new realisation of the pascal [1], [4], [6]. However, the use of this material has shown some difficulties, as reported in references [2] and [7]; one of these difficulties is the permeability of the cavity material to helium. For this reason, the 18SIB04 QuantumPascal EMPIR project "Towards quantum-based realisations of the pascal" proposed testing Zerodur as a potential cavity material. To assess whether Zerodur is more suitable than ULE glass, different studies are being carried out, such as the one reported in reference [8]. This evaluation requires the modelling of the gas transport dynamics in the material, and such modelling in turn requires knowledge of the diffusion and permeability coefficients. Therefore, as collaborators in the above-mentioned project, we have studied the permeability of Zerodur to helium. The measurements were performed in the temperature range 27 °C - 120 °C; determined values of the helium permeability of the Zerodur sample are given in the temperature range 80 °C - 120 °C.

2. Materials and methods

The vacuum system in which the measurements were made consists of two separate volumes (a high-pressure volume V1 and a low-pressure volume V2 + V3, as shown in figure 1) having the sample wall in common. Each of these volumes has its own vacuum pump, vacuum gauge and valves.
In this setup, gas diffuses through the thin sample wall directly into the chamber where the quadrupole mass spectrometer is mounted. The vacuum chamber material is stainless steel; the valves used are KF valves with a stainless-steel body and elastomer seals, whereas the pumps used in this work are Pfeiffer turbopumps. With this vacuum system, we investigated the diffusion of helium in a Zerodur sample. The sample (a squared plate with a thickness of 0.2 cm and an area of 2.27 cm²) was received from the Physikalisch-Technische Bundesanstalt in Berlin (charge number "105080201"). Aluminium joints, ISO-KF flanges and elastomer O-rings were used to mount the sample into the system. The O-rings were used to prevent He permeation and were placed on both sides of the sample, whereas the KF flanges were placed next to the O-rings and then tightened using the aluminium joints.

To regulate and control the temperature we used an HTC-5500/5500Pro temperature control unit and heating tapes. The sample was wrapped several times with heating tapes, which were then wrapped with aluminium foil. Such sample insulation provided temperature regulation within about ±1 °C. Temperature changes were recorded using a thermocouple connected to the LabVIEW software. Before running measurements with the investigated sample, a leak test was performed to make sure there was no He leakage through the elastomer parts of the system.

During the measurements, the following procedure was pursued. The high-pressure chamber and the vacuum chamber were evacuated with the aid of two turbomolecular pumps. When the vacuum chamber had been pumped down to 10⁻⁴ Pa, helium gas was admitted to the high-pressure side of the mounted sample, at a pressure of 1.32·10⁵ Pa and a temperature of 27.1 °C. Data on the He partial pressure and on the chemical composition of the gas species in the vacuum system were obtained with Pfeiffer's PrismaPro QMG 250 F1 quadrupole mass spectrometer (QMS); data recording and analysis were done using the QMS software PV MassSpec. The QMS was factory calibrated; nonetheless, its calibration had expired by the time we conducted the study.

To investigate helium diffusion through the Zerodur sample we used two main methods. The first method aimed to determine the permeation, diffusion and solubility coefficients by recording the He partial-pressure increase in the vacuum chamber versus time, until a steady state is achieved. After the steady state is reached, the plot of the permeated amount of substance versus time is a straight line; the intercept of this straight line with the time axis can be used to calculate the diffusion coefficient D [9]. The permeation coefficient K can be determined from the helium gas flow in the steady state, the known thickness and area of the sample, and the known pressure in the high-pressure chamber, as described in reference [9]. The relation between the diffusion and permeation coefficients is given by K = S · D, where S is the solubility of He in the glass; therefore, with the diffusion and permeation coefficients known, the solubility coefficient can be calculated as well. A sketch of this first (time-lag) evaluation is given below. The second method is a modification of the so-called accumulation method, which is described in detail further on.
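The following minimal Python sketch illustrates the first evaluation, with invented steady-state data; the sample thickness, area and feed pressure are from the text. The relation D = l²/(6·t_lag) is the classical time-lag result for diffusion through a plane sheet (see, e.g., [11]) and is assumed here to be the intercept relation referred to above.

```python
l  = 0.2e-2     # sample thickness, m (0.2 cm, from the text)
A  = 2.27e-4    # sample area, m^2 (2.27 cm^2, from the text)
dp = 1.32e5     # He pressure difference across the sample, Pa (vacuum downstream)

t_lag = 3.0e5   # hypothetical time-axis intercept of the steady-state line, s
q_ss  = 1.0e-15 # hypothetical steady-state throughput, Pa*m^3/s

D = l ** 2 / (6.0 * t_lag)   # diffusion coefficient from the time lag, m^2/s
K = q_ss * l / (A * dp)      # permeation coefficient from the steady-state flow
S = K / D                    # solubility, from K = S * D
print(D, K, S)
```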
Following the two methods, our study lasted 70 days. From the first day to the twelfth day the sample was kept at a temperature of 27 °C; for the next nineteen days the temperature was raised to 50 °C; for fourteen consecutive days it was then changed to 80 °C, then for the next nine days to 110 °C, the next seven days to 115 °C and the last nine days to 120 °C. To reduce the noise in the recorded signal, the dwell time was changed from 32 ms to 1024 ms on the 49th day of the study. Also, owing to the reduction of the He pressure in the high-pressure volume, on the 66th day of the study the pressure was increased from 8.87·10⁴ Pa to 1.31·10⁵ Pa. On the 69th day of the study the He gas was pumped out of the high-pressure volume, in order to finally record the changes in the He signal.

3. Results and discussion

Modified accumulation method

With the previously described procedure, the He signal was recorded after its permeation through the Zerodur sample. The graphs presented in figure 2 display some of the measurements of this signal (representing the partial pressure of He, in Pa) at different times and temperatures during the study. As can be noticed from these graphs, even with the increased dwell time the recorded He signal was still very low. This may be because the He permeability of the Zerodur material is so small that it cannot be recorded with our experimental scheme. Therefore, we also conducted measurements with a modified version of the accumulation method [10].

Figure 1. Schematic representation of the vacuum system used to study the permeability of the Zerodur material to helium gas.

To record measurements by this method, the valve separating the pump from the vacuum system (valve 3 in figure 1) is closed. This enables the accumulation of the helium gas that has permeated the sample and allows a longer time for the QMS to record the signal before the gas is pumped away. While keeping valve 3 closed, the total pressure is monitored with the full-range gauge (gauge 5, figure 1); as soon as the pressure reaches a value of 10⁻² Pa the valve is opened, to prevent damage to the QMS filament. This procedure was repeated several times, at different times and temperatures. In the meantime, we conducted measurements with the same method while keeping valve 2 closed (the valve separating volume V2 from volume V3 in figure 1). This prevents the gas that permeates the Zerodur sample from gliding into the vacuum chamber where the QMS is located, and it allows us to compare the signals recorded in the two states: if the signal value drops after closing valve 2, it indicates that the recorded helium signal is actually the one permeating the sample.

Since no significant values of the helium signal were recorded up to the temperature of 80 °C, the results of the measurements conducted with the accumulation method are presented for the temperatures of 80 °C, 110 °C, 115 °C and 120 °C. An example of the recorded measurements is shown in figure 3: the first three measurements represent the helium signal recorded by the accumulation method with valve 2 open, while the last three indicate the helium signal recorded with valve 2 closed. Thus, by comparing the measurements recorded with valve 2 open or closed, the helium gas flow can be determined. With the known gas flow, the permeability coefficient can be determined using the known mathematics of the problem, which has been discussed in many articles and books, such as [11], [12], and [13].
Here we refer to reference [9], where the gas flow q through a plane sample with pressure gradient Δp, cross-sectional area A and thickness l is given by

$q = K\,A\,\frac{p - p_0}{l}$ ,  (1)

where K is the permeability coefficient, p is the helium pressure in the high-pressure volume and p₀ is the helium partial pressure in the low-pressure volume. As the low-pressure side of our sample is kept at high vacuum, the partial pressure of helium on this side was assumed to be zero. The volume of the low-pressure part of the system was determined by the gas expansion method [14], while the sample surface area and thickness were determined geometrically. The values of the permeability coefficient obtained at different temperatures and at different times during the study are listed in table 1; a sketch of the evaluation from accumulation data is given below.

Figure 2. Recorded He signal at different times and temperatures during the study. p(CDG) indicates the He pressure in the high-pressure volume, measured with the capacitance diaphragm gauge (CDG). In addition to the date of measurement recording, the start time of registration and the dwell time are indicated as well.
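The Python sketch below shows how a permeability value can be extracted from accumulation data. It assumes that the net helium flow is obtained as the rate of partial-pressure rise in the known low-pressure volume (corrected by comparing the valve-2-open and valve-2-closed recordings); all numerical inputs are invented, chosen so that K comes out near the tabulated 80 °C value.

```python
V_lp = 1.0e-3     # hypothetical low-pressure volume (gas expansion method [14]), m^3
dpdt = 1.7e-12    # hypothetical net He partial-pressure rise during accumulation, Pa/s
l, A = 0.2e-2, 2.27e-4   # sample thickness (m) and area (m^2), from the text
p_he = 1.32e5     # He pressure on the high-pressure side, Pa (p0 assumed zero)

q = V_lp * dpdt             # gas throughput, Pa*m^3/s
K = q * l / (A * p_he)      # eq. (1) rearranged for K, m^2/s
print(K * 1.0e4)            # in cm^2/s: ~1.1e-15, close to the 80 degC table entry
```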
4. Conclusions

After 70 days of studying the helium permeability of the Zerodur material, the stationary state of the diffusion process was not recorded: the helium signal is so weak that it cannot be fully recorded with our vacuum system. Since it was not possible to analyse the stationary state of helium diffusion in the Zerodur material, the time constant was not recorded and, consequently, the diffusion coefficient of helium in Zerodur was not determined. On the other hand, with the accumulation method we determined the helium permeability coefficient of the Zerodur material at different temperatures. To our knowledge, there are no data in the literature concerning the helium permeability of Zerodur. From the obtained results we estimate that the value of this coefficient is quite low and that it does not change much with increasing temperature. For this reason, and because the time constant of this material is observed to be long, we can conclude that the Zerodur material has the potential to be used as cavity material for the quantum standard of pressure measurement. Overcoming other problems regarding the performance of refractometers is among the objectives of the QuantumPascal EMPIR project.

Acknowledgement

The research equipment in our laboratory was financed by the USA Embassy in Kosovo and the Ministry of Education, Science and Innovation of Kosovo. We also thank Prof. Lars Westerberg from Uppsala University for providing us with some of the vacuum components used in this research.

Table 1. Determined values of the permeability coefficient of helium gas in the Zerodur sample, for different temperatures.
Date and time of commencement of data registration | Temperature T in °C | Permeation coefficient K in cm²/s
31.08.2020 12:51h | 80 | 1.12·10⁻¹⁵
05.09.2020 18:24h | 110 | 1.30·10⁻¹⁴
08.09.2020 17:20h | 110 | 1.59·10⁻¹⁴
09.09.2020 17:18h | 115 | 2.09·10⁻¹⁴
12.09.2020 15:27h | 115 | 5.20·10⁻¹⁴
19.09.2020 15:54h | 120 | 6.47·10⁻¹⁴

Figure 3. An example of the data obtained with the accumulation method at a temperature of 110 °C. The first three measurements represent the helium signal recorded by the accumulation method with valve 2 open, while the last three show the helium signal recorded with valve 2 closed.

References
[1] J. Hendricks, Quantum for pressure, Nat. Phys. 14(1) (2018), p. 100. DOI: 10.1038/nphys4338
[2] K. Jousten, J. Hendricks, D. Barker, K. Douglas, S. Eckel, P. Egan, J. Fedchak, J. Flügge, Ch. Gaiser, D. Olson, J. Ricker, T. Rubin, W. Sabuga, J. Scherschligt, R. Schödel, U. Sterr, J. Stone, G. Strouse, Perspectives for a new realization of the pascal by optical methods, Metrologia 54(6) (2017), pp. S146-S161. DOI: 10.1088/1681-7575/aa8a4d
[3] K. Jousten, A unit for nothing, Nat. Phys. 15(6) (2019), p. 618. DOI: 10.1038/s41567-019-0530-8
[4] P. F. Egan, J. A. Stone, J. H. Hendricks, J. E. Ricker, G. E. Scace, G. F. Strouse, Performance of a dual Fabry-Perot cavity refractometer, Opt. Lett. 40(17) (2015), p. 3945. DOI: 10.1364/OL.40.003945
[5] Z. Silvestri, D. Bentouati, P. Otal, J. P. Wallerand, Towards an improved helium-based refractometer for pressure measurements, Acta IMEKO 9(5) (2020), pp. 305-309. DOI: 10.21014/acta_imeko.v9i5.989
[6] J. Scherschligt, J. A. Fedchak, Z. Ahmed, D. S. Barker, K. Douglass, S. Eckel, E. Hanson, J. Hendricks, N. Klimov, T. Purdy, J. Ricker, R. Singh, J. Stone, Quantum-based vacuum metrology at NIST, arXiv, 2018, pp. 1-49. DOI: 10.1116/1.5033568
[7] S. Avdiaj, Y. Yang, K. Jousten, T. Rubin, Note: Diffusion constant and solubility of helium in ULE glass at 23 °C, J. Chem. Phys. 148(11) (2018), pp. 3-5. DOI: 10.1063/1.5019015
[8] J. Zakrisson, I. Silander, C. Forssén, Z. Silvestri, D. Mari, S. Pasqualin, A. Kussicke, P. Asbahr, T. Rubin, O. Axner, Simulation of pressure-induced cavity deformation - the 18SIB04 QuantumPascal EMPIR project, Acta IMEKO 9(5) (2020), pp. 281-286. DOI: 10.21014/acta_imeko.v9i5.985
[9] B. Sebok, M. Schülke, F. Réti, G. Kiss, Diffusivity, permeability and solubility of H2, Ar, N2, and CO2 in poly(tetrafluoroethylene) between room temperature and 180 °C, Polym. Test. 49 (2016), pp. 66-72. DOI: 10.1016/j.polymertesting.2015.10.016
[10] Technical specification ISO/TS - procedures to measure and report, 2018.
[11] J. Crank, The Mathematics of Diffusion, Oxford University Press, London and New York, 1975.
[12] V. O. Altemose, Helium diffusion through glass, J. Appl. Phys. 32(7) (1961), pp. 1309-1316. DOI: 10.1063/1.1736226
[13] F. J. Norton, Helium diffusion through glass, J. Am. Ceram. Soc. 36(3) (1953), pp. 90-96. DOI: 10.1111/j.1151-2916.1953.tb12843.x
[14] S. Avdiaj, J. Setina, B. Erjavec, Volume determination of vacuum vessels by gas expansion method, MAPAN J. Metrol. Soc. India 30(3) (2015), pp. 175-178. DOI: 10.1007/s12647-015-0137-1

High-speed multichannel impedance measuring system

ACTA IMEKO, July 2012, Volume 1, Number 1, 36-41

Marco J. da Silva, Eduardo N. dos Santos, Tiago P. Vendruscolo
department of electrical engineering, universidade tecnológica federal do paraná, av. sete de setembro 3165, 80230-901 curitiba-pr, brazil

keywords: electrical impedance; multichannel; high-speed measurement; fluid distribution measurement; planar sensor

citation: marco j. da silva, eduardo n. dos santos, tiago p. vendruscolo, high-speed multichannel impedance measuring system, acta imeko, vol. 1, no. 1, article 9, july 2012, identifier: imeko-acta-01(2012)-01-09

editor: pedro ramos, instituto de telecomunicações and instituto superior técnico/universidade técnica de lisboa, portugal

received january 24th, 2012; in final form june 1st, 2012; published july 2012

copyright: © 2012 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited

funding: this work was partially funded by bg group, brazil and petrobras, brazil

corresponding author: marco j. da silva, e-mail: mdasilva@utfpr.edu.br

1. introduction

impedance sensors, in which the measurand causes a variation of an electrical characteristic such as resistance or capacitance, have found widespread use in industrial applications, mainly due to their simplicity, low fabrication costs and robustness [1,2]. impedance measurement is a common tool for the characterization of the electrical properties of materials and substances, in which measurement times of seconds to minutes are used to achieve high measurement accuracy in chemical analysis [3,4]. however, in industrial applications, measuring times in the range of microseconds to milliseconds are required in order to investigate dynamic processes, e.g. the mixing of substances in chemical reactors, or multiphase flow in pipelines [5]. accuracy requirements in such applications are less critical, since the substances involved, and consequently their electrical properties, are known a priori. in addition, multipoint impedance measurement is also often required in order to obtain spatial or distributed information, for instance in the imaging of multiphase flows [6,7] or for the measurement of distributed sensors [8-14]. hitherto used systems for multichannel measurement are limited to the evaluation of a single electric parameter such as resistance or capacitance [7-14]. commercial measuring instruments such as lcr meters can only reach repetition rates of a few measurements per second and are therefore not suitable for the high-speed measurements which are necessary in some industrial applications, such as the investigation of fluid flow. in recent years some efforts have been made by applying dual-modality measuring techniques for enhancing the range of application, in which two different sensing techniques are used to distinguish more than two substances. furthermore, the use of single-point electrical impedance/admittance measurements, i.e. the measurement of both real and imaginary parts (or amplitude and phase), has been reported in the past [15-18]. in this paper, we introduce a novel high-speed multichannel electronics able to simultaneously determine the conductive and the capacitive component of a multipoint sensor which is arranged in a matrix-like form, such as wire-mesh sensors [6,7] or planar array sensors [12-14].
the circuits are based on classical impedance measurement techniques, and the basic idea of the two-component evaluation is to apply an excitation signal composed of two distinct frequencies. based on amplitude measurements only and spectral analysis of the data, it is possible to determine the resistive and capacitive components at each sensing point. multichannel interrogation is achieved by a multiplexed excitation-sensing scheme.

abstract: in this paper, a novel high-speed multichannel impedance measuring system is presented. the measurements are based on simultaneous excitation with two distinct frequencies to interrogate the multiple sensing points of a given sensor. received signals are analogue-to-digital converted (with a daq card) and the amplitudes of each frequency are determined using an fft implemented in labview. the capacitive and conductive parts of the impedance are calculated based on amplitude measurements. the developed system can operate 8 transmitter and 8 receiver electrodes at a frame repetition frequency of up to 781 hz, i.e. single channels are sampled at 6,248 hz. the system has been evaluated by measuring reference components. deviations from reference values are below 10 %, which is satisfactory considering the fast repetition frequency of the measurements. the developed system was applied to visualize the fluid distribution over the surface of a planar multipoint sensor. two different liquids (oil and water) and air were evaluated and their spatial distribution over the sensor's surface was correctly visualized.

in section 2 the basic measuring principle is first described and then the multichannel electronics is presented. the new electronics is evaluated in section 3. an application example is given in section 4, in which a planar array sensor is used to visualize the spatial distribution of different fluids over the sensor surface. the main achievements are summarized at the end.

2. system development

2.1. one-channel circuit analysis

for impedance measurements we chose the circuit synonymously known as transimpedance amplifier, auto-balancing bridge or current-voltage converter, as depicted in figure 1. since the developed system is meant to be used to identify different fluids in an approach similar to [13], the electric model that better represents the characteristics of a fluid is a parallel rc circuit [19, 20]; the unknown impedance was therefore assumed to be a parallel connection of a resistor and a capacitor. in figure 1, vi is the excitation voltage, zx represents the unknown impedance and cf along with rf the feedback network. furthermore, cs1 and cs2 represent the stray capacitances to ground which are caused, for instance, by the cables used to connect the circuit with a sensor. in principle these stray capacitances have no influence on the circuit, since cs1 is directly driven by the source voltage and cs2 is virtually grounded by the opamp. the impedance is determined by measuring the voltage at the opamp output. assuming that the opamp is ideal, the complex-valued output voltage vo is given by

$$ \mathbf{v}_o = -\,\mathbf{v}_i \, \frac{g_x + \mathrm{j}\omega c_x}{g_f + \mathrm{j}\omega c_f} \,, \qquad (1) $$

where ω = 2πf, f is the frequency of the sinusoidal excitation signal and j is the imaginary unit (j² = −1). the bode diagram for the amplitude of equation (1), using typical r and c values, is shown in figure 2. two plateaus can be readily identified – one at low frequencies and the other at high frequencies.
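a minimal numerical sketch of the magnitude of equation (1), with assumed component values similar to those of figure 2, makes the two plateaus visible:

```python
import numpy as np

def gain_magnitude(f, gx, cx, gf, cf):
    """|vo/vi| of the auto-balancing bridge, following equation (1)."""
    w = 2 * np.pi * f
    return np.abs((gx + 1j * w * cx) / (gf + 1j * w * cf))

f = np.logspace(3, 8, 6)        # 1 khz to 100 mhz
gx, cx = 1e-5, 2.2e-12          # example measurand: 100 kohm parallel to 2.2 pf
gf, cf = 1e-5, 10e-12           # example feedback network: 100 kohm, 10 pf

for fi, g in zip(f, gain_magnitude(f, gx, cx, gf, cf)):
    print(f"{fi:10.3e} hz -> |vo/vi| = {g:.3f}")
# low-frequency plateau -> gx/gf = 1.0; high-frequency plateau -> cx/cf = 0.22
```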
knowing that a capacitor works as an open circuit in dc, the smaller the frequency, the lower the influence of the capacitive part, that is, the impedance becomes purely resistive. on the other hand, the higher the frequency, the higher the influence of the capacitive part and the lower the relative contribution of the resistive part. the magnitudes of the two plateaus are given by the quotients gx/gf and cx/cf, which are obtained by taking the limits for f → 0 and f → ∞ of the modulus of equation (1), in the form

$$ \left| \frac{\mathbf{v}_o}{\mathbf{v}_i} \right| = \frac{\sqrt{g_x^2 + (2\pi f\, c_x)^2}}{\sqrt{g_f^2 + (2\pi f\, c_f)^2}} \,. \qquad (2) $$

a simplified way to view the behaviour of an auto-balancing bridge is the graph shown in figure 3, which also shows the operational amplifier frequency response. in principle any two frequencies may be chosen for determining the unknown components. the simplest choice, however, is to select two frequencies located exactly on each plateau. hence, the two unknown parameters are found by

$$ \left| \frac{\mathbf{v}_o}{\mathbf{v}_i} \right|_{f\,\text{low}} = \frac{g_x}{g_f} \qquad (3) $$

and

$$ \left| \frac{\mathbf{v}_o}{\mathbf{v}_i} \right|_{f\,\text{high}} = \frac{c_x}{c_f} \,. \qquad (4) $$

since the resistance r is directly proportional to the conductivity of the material, and the capacitance c is directly proportional to the electric permittivity of the material, the measurement of the resistance and capacitance is an indication of the electrical conductivity and permittivity of the substance in question. substances can thus be distinguished from each other and correctly identified based on these measurements.

2.2. multichannel system

the developed multichannel system is schematically shown in figure 4. the developed hardware is basically divided into four blocks. the first is the transmitter printed-circuit board (pcb), on which there are two direct digital synthesizers (ad9833 from analog devices) for generating the two sinusoidal signals at different frequencies. the direct digital synthesizer (dds) is a programmable integrated circuit which can generate signals up to 12.5 mhz. the dds are programmed via the employed data acquisition card (pcie-7841r from national instruments) using a serial peripheral interface (spi) and are user-defined via software developed in labview. as described in the previous section, two frequencies are sufficient for the evaluation of the parameters r and c. in this way, a signal composed of two single frequencies is generated by summing together the signals of two separate dds circuits by means of an adder circuit using operational amplifiers. the excitation signal (containing two frequencies) is connected to up to eight excitation electrodes with the help of analogue switches (adg1434). the non-selected channels are grounded in order to allow for a multiplexed excitation scheme. buffers (lmh6722) at the output assure that the excitation signals have low impedance. the signal is then connected to the transmitter electrodes of a given sensor.

figure 1. practical circuit for measuring impedances formed as a parallel circuit of a capacitor and a resistor.

figure 2. frequency response for component values cf = 10 pf, gf = 10 µs (100 kω); cx and gx (rx) are indicated in the plots.

figure 3. asymptotic and simplified frequency response of a practical circuit for impedance measurement, considering the opamp non-ideal frequency response.
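the two-frequency evaluation can be sketched as follows, assuming an already-digitised output record and known (calibrated) excitation amplitudes; the record length, tone frequencies and component values are illustrative, not the authors' code.

```python
import numpy as np

fs, n = 200e3, 32                  # sampling rate and record length per channel
f_low, f_alias = 75e3, 25e3        # low tone; the 1.825 mhz tone aliases to 25 khz

def tone_amplitude(x, f_tone, fs):
    """amplitude of a single tone in record x, read off the fft bin."""
    spectrum = np.fft.rfft(x) / len(x)
    k = int(round(f_tone * len(x) / fs))
    return 2 * np.abs(spectrum[k])

# synthetic record standing in for the digitised opamp output voltage
t = np.arange(n) / fs
x = 0.8 * np.sin(2 * np.pi * f_low * t) + 0.2 * np.sin(2 * np.pi * f_alias * t)

vi_low = vi_high = 1.0             # assumed excitation amplitudes after calibration
gf, cf = 1e-5, 4.7e-12             # example feedback network: 100 kohm, 4.7 pf

gx = gf * tone_amplitude(x, f_low, fs) / vi_low      # equation (3)
cx = cf * tone_amplitude(x, f_alias, fs) / vi_high   # equation (4)
print(f"gx = {gx:.3e} s, cx = {cx:.3e} f")
```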
the receiver electrodes of the sensor are connected to the receiver board, where transimpedance amplifiers (figure 1) convert the signals into proportional voltages, as described in section 2.1; the operational amplifier used is the opa656 (500 mhz unity-gain bandwidth) with feedback components of 4.7 pf and 100 kω. the voltage signals from the opamps are a/d-converted by the acquisition card. the digitised signals are processed in the host pc with a program created in labview. the amplitudes of the excitation signals are determined using fast fourier transform (fft) processing tools. in order to allow for a fast response, each channel is sampled at 200 khz (the maximum possible sampling frequency with simultaneous sampling) and 32 samples are processed. since 8 transmitter electrodes must be activated to obtain one single image, a frame rate of up to 781 hz is possible for sensors arranged in an 8 × 8 matrix configuration. although the pcie-7841r analogue input channels have a bandwidth (−3 db) of 1 mhz, the high-frequency sinusoidal signal was set to 1.825 mhz in order to reach the capacitance plateau, as schematically shown in figure 3. the voltage signal at 1.825 mhz is attenuated by −6.45 db. the low-frequency signal used was 75 khz. since the sampling frequency is 200 khz (nyquist frequency = 100 khz), the 1.825 mhz signal is undersampled, appearing as a 25 khz component in the fft spectrum, as schematically shown in figure 5. though undersampled and attenuated, the amplitude of the high-frequency component can be correctly evaluated after calibration measurements, i.e. measurements with known impedances or fluids.

3. system evaluation

the developed system was evaluated to verify the performance of its step and frequency responses. further, the accuracy of the developed system was evaluated by measuring different impedances with known values consisting of a parallel rc circuit, as described below.

3.1. frequency response

a sinusoidal voltage signal swept in frequency from 2 khz to 20 mhz was used to test the frequency response of the auto-balancing bridge circuit. the signal was applied to the input of a known rc circuit (measurand) which was connected to the auto-balancing bridge circuit. the output signal of the receiver module was measured with an oscilloscope. thus, we surveyed the frequency response curves for four different combinations of resistors and capacitors with values similar to the ones in figure 2, for comparison. the obtained frequency responses are depicted in figure 6. the two plateaus can be clearly seen in the experimental results. also, the shape of the curves fits very well with the theoretical response, as anticipated in section 2.1. only the absolute measured values (figure 6) slightly deviate from the theoretical ones (figure 2). this fact implies that the circuit input-output response must be adjusted by using some reference components.

figure 4. block diagram of the developed multichannel impedance measuring system.

figure 5. schematic representation of the amplitude spectrum obtained via fft analysis.

figure 6. frequency response in circuit for component values cf = 10 pf, gf = 10 µs (100 kω); cx and gx (rx) are indicated in the plots.
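the undersampling arithmetic mentioned above can be checked in a few lines: a tone sampled at a rate fs appears folded into the band [0, fs/2].

```python
def alias_frequency(f_signal, fs):
    """frequency at which a tone appears after sampling at rate fs."""
    f = f_signal % fs              # wrap into [0, fs)
    return min(f, fs - f)          # fold into [0, fs/2]

print(alias_frequency(1.825e6, 200e3))   # -> 25000.0 (hz)
```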
3.2. step response

in order to verify the maximum achievable repetition rate in a multichannel system, the step response of the auto-balancing bridge circuit was evaluated. the step response was obtained using a waveform generator capable of amplitude modulation (agilent 33220a), in which a carrier sine wave of 1 mhz was modulated with a square-wave signal of 50 khz. an oscilloscope at the output shows the resulting waveform, as depicted in figure 7. in this experiment the rc circuit used was a 2.2 pf capacitance in parallel with a 1 mω resistance. as shown in figure 7, the time response of the carrier (1 mhz) is almost instantaneous for the time scale used, so that the system can be properly used in the intended kilohertz repetition range with no loss in measurement accuracy.

3.3. static impedance measurement

in order to evaluate the accuracy of the measuring circuit, different combinations of known resistors and capacitors were measured for a single transmitter-receiver pair, which is representative of all channels. the excitation amplitude of the two components was kept constant, and the amplitude of the output signals was determined via a 32-point fft at a repetition rate of 6.248 khz (single-channel measurement), as described earlier. twenty-two combinations of resistors and capacitors were measured, with values between 100 kω and 2.2 mω and between 1 pf and 10 pf. the parity plots of measured (equations (3) and (4)) and reference values are shown in figures 8 and 9. deviations from reference values are below 10 %, which is satisfactory considering the fast repetition frequency. higher accuracy may be achieved by lowering the repetition frequency of measurements, thus decreasing random errors.

4. example of an application

in this section, the developed electronics is used to operate a planar array sensor. the sensor is composed of a number of identical sensing elements which are individually interrogated by the electronics. as an example of application, the spatial distribution of two different liquids and of air is visualized based on their electrical properties.

4.1. planar array sensor

the sensor employed in this section is composed of 256 single interdigital sensing structures which are multiplexed in a matrix with 16 driver (rows) and 16 sensing electrodes (columns). figure 10 depicts the sensor. since the developed electronics can handle 8 × 8 electrodes, only one quarter of the sensor was interrogated (region indicated by the yellow square in figure 10).

4.2. data processing

since the resistance r is directly proportional to the conductivity of the material, and the capacitance c is directly proportional to the electric permittivity of the material, it is possible to show from equations (3) and (4) that the amplitudes of the voltage measured on each plateau are proportional to the electrical properties of the fluid at the sensing elements of a planar sensor. for simplicity we will define vκ as the voltage of the low-frequency component and vε as the voltage of the high-frequency component. the measured output voltages vκ correspond to the conductivity value κ at the crossing points according to

$$ v_\kappa = k\,\kappa \,, \qquad (5) $$

where k is a proportionality factor which depends on electronic circuit and sensor constants. in this way, the individual substance present over a sensing point can be identified by evaluating the output voltage vκ, and thus an electrically conducting and a non-conducting fluid (e.g. air and water) can be distinguished. in order to discriminate two non-conducting fluids like air and oil, the capacitance or the permittivity of a sensing element must be evaluated.
the voltage vε is proportional to the relative permittivity εr of the fluid present at a crossing point according to

$$ v_\varepsilon = a\,\varepsilon_r + b \,, \qquad (6) $$

where a and b are constants that encompass the specific parameters of the electronics and sensor. repeated successive activation of all transmitter electrodes and measurement of the currents for all receiver channels gives a three-dimensional data matrix with electrical voltage values denoted by v(i,j,k), which corresponds either to conductivity or to permittivity distributions over the pipe's cross section. here, i and j are the spatial indices (corresponding to the wire numbers 1 to 8) and k is the temporal index of each image. equations (5) and (6) hold for every sensing element in the matrix of a planar sensor. a calibration of all transmitter-receiver pairs is necessary to obtain accurate output readings. calibration may be performed by measuring the output voltages for two known substances (for instance water and air) and calculating the parameters k(i,j) or a(i,j) and b(i,j) for each transmitter-receiver channel.

figure 7. step response of the circuit for a modulation rate of 50 khz.

figure 8. parity plot of measured and reference values of capacitance. the dotted lines show the ±10 % deviation from the ideal line.

figure 9. parity plot of measured and reference values of conductance. the dotted lines show the ±10 % deviation from the ideal line.

4.3. substance distribution measurement

in order to show the imaging capability of the spatial distribution of a substance over the sensor surface, a simple experiment was conducted. tap water and silicone oil were intentionally distributed as a two-column arrangement over the sensor surface, as illustrated in figure 11. the electrical conductivity of the water, measured with a conductivity meter, was 398 μs/cm. the conductivity of oil can be neglected. reference relative permittivities of air, oil and water are 1, 2.8 and 80, respectively [19, 20]. prior to the start of the experiment a two-point calibration was realized with air and water as references. in this test, 100 frames were acquired and averaged. figure 12 shows the resulting images, in which a logarithmic colour scale for the permittivity and a linear colour scale for the conductivity values was used. all three substances involved (air, oil and water) can be clearly recognized in the permittivity image, while in the conductivity image only the conducting phase is distinguished from the non-conducting one, as expected. comparison via visual inspection of the distribution shows good agreement with the measurements. the measured values of conductivity and permittivity for the sensing elements at row number 4 (middle line of the images) were taken and plotted in figure 13, in order to show quantitative data.

figure 10. photograph of the planar array sensor employed for substance distribution measurement. the region in the sensor indicated by the dotted yellow square is the interrogating area (8 × 8 single sensors).

figure 11. distribution of substances (water and oil) imposed over the surface of the planar sensor.

figure 12. measured distributions of electrical permittivity and conductivity.

figure 13. values of measured electrical conductivity and permittivity taken from sensors at line number 4 (central line of the interrogating area).
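before the quantitative results are discussed, a hedged sketch of the two-point calibration implied by equations (5) and (6): reference readings for air and water fix k, a and b per crossing point. the voltage values below are illustrative assumptions, not measured data.

```python
def calibrate(v_kappa_water, kappa_water, v_eps_air, v_eps_water,
              eps_air=1.0, eps_water=80.0):
    """constants of equations (5) and (6) from two reference substances."""
    k = v_kappa_water / kappa_water                        # equation (5), water only
    a = (v_eps_water - v_eps_air) / (eps_water - eps_air)  # equation (6), two points
    b = v_eps_air - a * eps_air
    return k, a, b

k, a, b = calibrate(v_kappa_water=1.0, kappa_water=398.0,  # conductivity in us/cm
                    v_eps_air=0.05, v_eps_water=1.20)
# inverting equation (6): an oil-like reading maps to a relative permittivity near 3
print((0.08 - b) / a)
```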
a logarithmic scale for both permittivity and conductivity is used for comparison purposes. it can be seen that the conductivity values of air and oil are very low (around $10^{-2}$ μs/cm) and the conductivity value of water is about 400 μs/cm, as expected. the permittivity values for air, oil and water are in good agreement with the reference ones. all three substances can be properly distinguished from each other based on their electrical properties.

5. conclusions

a novel measuring system for multipoint impedance measurement of capacitive and resistive components was presented and tested. the impedance is determined based on the measurement of the amplitudes at two distinct frequencies. the system was evaluated, showing appropriate accuracy and fast response. the imaging capability of the multichannel system was explored by operating a planar array sensor. conductivity and permittivity images are in good agreement with the distribution imposed over the sensor surface. further work will focus on data fusion of the two measured parameters and on application to the imaging of dynamic flow.

acknowledgement

the authors acknowledge financial support from petrobras, brazil and bg group, brazil for partially funding this project. e. n. santos thanks prh10/anp/petrobras for a doctoral degree scholarship and t. p. vendruscolo thanks ibp and bg group, brazil for a master degree scholarship.

references

[1] r. pallas-areny, j. g. webster, sensors and signal conditioning, 2nd ed., new york: wiley, 2001, isbn 0471332321.
[2] agilent, "agilent impedance measurement handbook", application note, agilent technologies, 2009.
[3] j. r. macdonald, w. b. johnson, "fundamentals of impedance spectroscopy", in: impedance spectroscopy, j. r. macdonald (ed.), new york: john wiley & sons, 1987, pp. 1-26, isbn 0471831220.
[4] u. kaatze, y. feldman, "broadband dielectric spectrometry of liquids and biosystems", measurement science and technology 17, 2006, pp. r17-r35.
[5] h. mccann, d. m. scott, process imaging for automatic control, boca raton, fl: crc press, 2005, isbn 0824759206.
[6] m. j. da silva, e. schleicher, u. hampel, "capacitance wire-mesh sensor for fast measurement of phase fraction distributions", measurement science and technology 18, 2007, pp. 2245-2251.
[7] h. m. prasser, a. böttger, j. zschau, "a new electrode-mesh tomograph for gas-liquid flows", flow measurement and instrumentation 9, 1998, pp. 111-199.
[8] t. hermes, m. bühner, s. bücher, c. sundermeier, c. dumschat, m. borchardt, k. cammann, m. knoll, "an amperometric microsensor array with 1024 individually addressable elements for two-dimensional concentration mapping", sensors and actuators b: chemical 21, 1994, pp. 33-37.
[9] r. s. saxena, r. k. bhan, a. aggrawal, "a new discrete circuit for readout of resistive sensor arrays", sensors and actuators a: physical 149, 2009, pp. 93-99.
[10] a. v. mamishev, k. sundara-rajan, f. yang, y. du, m. zahn, "interdigital sensors and transducers", proceedings of the ieee 92, 2004, pp. 808-845.
[11] m. j. da silva, e. schleicher, u. hampel, "advanced wire-mesh sensor technology for fast flow imaging", in: ieee international workshop on imaging systems and techniques 2009, shenzhen, proceedings of ist 2009, pp. 253-257.
[12] m. j. da silva, t. sühnel, e. schleicher, r. vaibar, d. lucas, u. hampel, "planar array sensor for high-speed component distribution imaging in fluid flow applications", sensors 7, 2007, pp. 2430-2445.
[13] s. thiele, m. j. da silva, u. hampel, "capacitance planar array sensor for fast multiphase flow imaging", ieee sensors journal 9, 2009, pp. 533-540.
[14] m. damsohn, h. m. prasser, "high-speed liquid film sensor with high spatial resolution", measurement science and technology 20, 2009, 114001.
[15] e. dykesteen, a. hallanger, e. hammer, e. samnoy, r. thorn, "non-intrusive three-component ratio measurement using an impedance sensor", journal of physics e: scientific instruments 18, 1985, pp. 540-544.
[16] f. garcía-golding, m. giallorenzo, n. moreno, v. chang, "sensor for determining the water-content of oil-in-water emulsion by specific admittance measurement", sensors and actuators a 47, 1995, pp. 337-341.
[17] m. j. da silva, e. schleicher, u. hampel, "a novel needle probe based on high-speed complex permittivity measurements for investigation of dynamic fluid flows", ieee transactions on instrumentation and measurement 56, 2007, pp. 1249-1256.
[18] a. kimoto, t. kitajima, "an optical, electrical and ultrasonic layered single sensor for ingredient measurement in liquid", measurement science and technology 21, 2010, 035204.
[19] m. j. da silva, impedance sensors for fast multiphase flow measurement and imaging, tud press, dresden, 2008, isbn 978-3940046994.
[20] d. r. lide, crc handbook of chemistry and physics, 85th ed., boca raton, fl: crc press, 2005, pp. 6-155 to 6-177, isbn 978-0849304859.

acta imeko issn: 2221-870x december 2022, volume 11, number 4, 1-7

spatiotemporal analysis of human gait, based on feet trajectories estimated by means of depth sensors

jakub wagner1, roman z. morawski1

1 warsaw university of technology, faculty of electronics and information technology, institute of radioelectronics and multimedia technology, nowowiejska 15/19, 00-665 warsaw, poland

section: research paper

keywords: gait analysis; gait asymmetry; healthcare; depth sensor; measurement data processing

citation: jakub wagner, roman z. morawski, spatiotemporal analysis of human gait, based on feet trajectories estimated by means of depth sensors, acta imeko, vol. 11, no. 4, article 9, december 2022, identifier: imeko-acta-11 (2022)-04-09

section editor: eric benoit, université savoie mont blanc, france

received july 19, 2022; in final form november 4, 2022; published december 2022

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

funding: this work was supported by the institute of radioelectronics and multimedia technology at the faculty of electronics and information technology, warsaw university of technology.

corresponding author: jakub wagner, e-mail: jakub.wagner@pw.edu.pl

abstract: this paper addresses a methodology for the analysis of human gait, based on the data acquired by means of depth sensors. this methodology is dedicated to healthcare-related applications and involves the identification of the phases of the gait cycle by thresholding estimates of velocities of the examined person's feet. in order to assess its performance, a series of experiments was carried out using a reference gait-analysis system based on a pressure-distribution-measurement platform. an original method for quantifying gait asymmetry, based on a quasi-correlation between the feet trajectories, was also proposed and tested experimentally. the results of the reported experiments seem to promise a high applicability potential of the considered methodology.

1. introduction

the analysis of human gait is a fertile research field with various applications related to healthcare, in particular to the monitoring of elderly persons. gait analysis can be used, e.g., for estimating the risk of falling [1], diagnosing cognitive impairments [2] and optimising the rehabilitation after a stroke [3]. such analysis may involve the estimation of [4]: – angles between body segments, – ground-reaction forces, – electromyographic signals, – spatiotemporal gait parameters.
this paper is focused on the estimation of the latter, and in particular of [5]: – length and duration of steps and strides, – walking speed, – cadence (or the number of steps per minute), – duration of the phases of a gait cycle. the gait cycle is the time interval encompassing the repeatable pattern of movement performed during walking, i.e. a single stride or two steps (one by each foot). within the gait cycle, the so-called swing phase and stance phase can be identified. these phases correspond to the time intervals when a given foot is moving or resting on the floor, respectively. the duration of the so-called double-support phase, during which both feet contact the floor, is also of interest for healthcare practitioners [2]. important information, useful from the healthcare perspective, can be extracted from the average values of the aforementioned parameters and from some indicators characterising their variability [6]. estimates of those parameters can also be used for obtaining information about gait asymmetry, which is considered particularly useful in the treatment of stroke-induced hemiplegia [7]. the gait asymmetry is typically quantified by comparing the values of some parameters characterising the left and right sides of the body, e.g. by computing the ratio between the duration of the stance phases of the two feet [3]. the spatiotemporal gait parameters can be estimated by a trained clinician using a stopwatch; the simplicity and low cost of such an examination method are behind its prevalence, although its accuracy and repeatability are quite limited [8]. a technological solution that allows for more reliable analysis of gait, relatively widespread in research laboratories and clinical facilities, involves the use of platforms or treadmills equipped with pressure sensors. even more detailed and useful information can be obtained by means of optoelectronic systems based on multiple video cameras, tracking the motion of special markers attached to the person's body; however, such systems are quite expensive and need to be installed in a large room. the last decade has brought about the development of other gait-analysis techniques based on various types of sensors, including wearable sensors, which seem to be applied most frequently. this paper is devoted to a technique whose applicability potential has not yet been fully explored, viz. a technique based on depth sensors. in the last decade, the possibility of using depth sensors for the analysis of human gait has attracted considerable interest from the scientific community.
this interest seems to be justified by the facts that 1) such sensors are relatively inexpensive and commercially available, and 2) the acquisition of data representative of human movement can be quite fast and convenient, viz. it can be performed in the natural conditions of overground walking, without requiring the person to wear any devices or markers on the body or clothes. moreover, the data acquired by means of such sensors convey quite detailed and rich information about human movement, allowing for both its spatiotemporal and kinematic analysis. such sensors do not seem, however, as reliable – in terms of the attainable measurement uncertainty – as marker-based optoelectronic systems or pressure-measuring platforms. furthermore, unlike wearable sensors, depth sensors have a limited field of view to which the examination must be confined, and their reliability depends – to some extent – on the angle at which the examined person is observed. given their advantages and disadvantages with respect to other technologies applicable for gait analysis, the depth sensors are often considered potentially useful for rapid screening of patients prior to more detailed diagnostic procedures. the gait-analysis systems based on depth sensors may, therefore, become quite common in clinical practice and in-home monitoring; the development of suitable data-processing methods and research aimed at assessing the validity of those methods is thus necessary [9]. a recent review of issues related to the development of gait-analysis systems based on depth sensors and other techniques can be found in [10], chapter 8. a depth sensor typically consists of a projector and a camera, both operating in the infrared range of electromagnetic radiation. the data acquired by means of such a sensor represent its distance to the objects present in its field of view. those data are organised in sequences of so-called depth images. in a depth image, each pixel represents the three-dimensional position of a point belonging to a surface reflecting infrared radiation. an exemplary depth image is shown in figure 1a. what makes the depth sensors particularly promising for gait analysis is the existence of algorithms for automatic identification of human silhouettes and for the localisation of various anatomical landmarks, such as the head, the feet, selected joints etc. such an algorithm is implemented in one of the most popular devices comprising depth sensors, viz. the microsoft kinect device [11], including its kinect v2 model, which has been used in the experiments reported in this paper. similar algorithms, which can be used for processing data from other depth sensors, are available commercially. an exemplary human silhouette, identified using the algorithm implemented in the kinect v2 device, is shown in figure 1b. various approaches to the application of depth sensors in systems for gait analysis have been reported in the last decade. the difficulties in developing such systems are mainly related to the uncertainty of localisation of the feet and other anatomical landmarks, and to the limitations of the depth sensors' field of view. the techniques proposed so far differ in terms of the examination setup (which may involve the use of one or more depth sensors, a treadmill, wearable devices etc.), the data-processing methods used for the identification of the gait-cycle phases, and other aspects [12]-[14].
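for intuition, a minimal sketch of the geometry behind a depth image: under a standard pinhole-camera model, each pixel with a valid depth reading back-projects to a 3-d point. the intrinsic parameters below are assumptions for illustration, not the kinect v2's actual calibration.

```python
def depth_pixel_to_point(u, v, depth_m, fx, fy, cx, cy):
    """back-project pixel (u, v) with depth depth_m (metres) to camera
    coordinates (x, y, z), using assumed pinhole intrinsics."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m

# illustrative intrinsics for a 512 x 424 depth image
print(depth_pixel_to_point(300, 200, 2.5, fx=365.0, fy=365.0, cx=256.0, cy=212.0))
```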
the methodology for spatiotemporal gait analysis considered in this work consists in obtaining some estimates of feet positions using a depth sensor, followed by processing them by means of a procedure which comprises the following operations: – estimation of feet velocities; – identification of the swing and stance phases by thresholding feet velocities; – estimation of selected spatiotemporal gait parameters; and – computation of indicators of gait asymmetry. the above-listed operations are described in more detail in section 2. section 3 is devoted to the description of the experiments that were carried out in order to assess the applicability potential of the considered methodology for gait analysis. the results of these experiments are presented in section 4 and discussed in section 5, where conclusions are drawn. the following acronyms appear in the rest of this paper: – tvr – total variation regularisation; – str – stance time ratio; – gct – gait cycle time; – spm – steps per minute.

2. data processing procedure

2.1. estimation of feet velocities

the estimates of the positions of feet, obtained by means of a kinect v2 device, are subject to non-negligible measurement uncertainty. among the 25 anatomical landmarks whose positions can be estimated using the algorithm implemented in that device, the feet and ankles are usually localised with the least accuracy [15]. that algorithm performs poorly in distinguishing a foot from the ground when the foot is resting; hence, the estimates of feet positions are more accurate at the moments when the feet are off the ground [16]. the antero-posterior component of those estimates (i.e. the one corresponding to the walking direction) is more accurate than the vertical one and the medio-lateral one; furthermore, those estimates are more accurate if the person is walking rather than when the range of movement of the feet is small (e.g. in sit-to-stand tests) [15].

figure 1. a depth image in which brighter pixels represent larger distance from the sensor (a); a human silhouette, identified in that image by means of the kinect device (b).

when compared with the reference data acquired by means of marker-based optoelectronic movement-analysis systems, the foot position estimates obtained by means of kinect v2 devices have turned out to be somewhat biased [17]. this is, however, of lesser practical importance for the gait-analysis methodology proposed here, because this methodology refers to the differences among the estimates of the positions of feet rather than to their absolute values. on the other hand, those estimates are subject to random errors. the mean euclidean distance between those estimates and the corresponding reference values has been reported to be ca. 8.0 cm [18]. although such a distance may be considered negligible in various applications, in the case of the proposed methodology it could – if not remedied – significantly distort the results of data processing; that is because this methodology involves the numerical differentiation of sequences of foot position estimates, and differentiation is numerically ill-conditioned, which means that small errors can be very significantly amplified in its course. an exemplary foot trajectory and the results of its differentiation by means of the central-difference method, without any prior denoising, are shown in figure 2.
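the ill-conditioning can be made concrete with a toy computation; the frame rate and the assumed magnitude of the random errors below are illustrative, of the order reported above.

```python
import numpy as np

fs = 30.0                               # assumed kinect v2 frame rate, hz
t = np.arange(0, 5, 1 / fs)
position = 1.2 * t                      # idealised antero-posterior trajectory
rng = np.random.default_rng(0)
noisy = position + rng.normal(0.0, 0.05, t.size)   # ~5 cm random errors

velocity = np.gradient(noisy, 1 / fs)   # central differences, no denoising
print(np.std(velocity - 1.2))           # ~1 m/s of noise on a 1.2 m/s signal
```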
as discussed in the following subsections, the visible abrupt changes in the estimates of foot velocity would hinder the identification of the stance and swing phases according to the proposed methodology. in order to prevent this effect, the foot position estimates were denoised prior to their numerical differentiation. in the context of the analysis of human movement, the technique most commonly used for denoising depth-sensor-based estimates of positions of anatomical landmarks is low-pass filtering by means of a butterworth filter of order 2, 3 or 4 with a cut-off frequency from the range [2, 10] hz (cf., e.g., [13], [18], [19]). in this study, this denoising technique was compared with two other techniques, less frequently considered in this context: the savitzky-golay filter and the total-variation regularisation technique (the tvr technique). filtering by means of a savitzky-golay filter is equivalent to approximating the data – within a moving window – by means of a fixed-degree polynomial [20]; both the length of the moving window and the degree of the approximating polynomial need to be optimised empirically. this technique is particularly useful for denoising data representative of a smooth signal – i.e. a signal adequately modelled by a mathematical function which has some continuous derivatives – while, at the same time, preserving the width and height of the peaks present in that signal [21]. the savitzky-golay filter is a low-pass filter whose cut-off frequency increases with the increasing polynomial degree and with the decreasing window length. unlike the butterworth filter, the savitzky-golay filter is characterised by a linear phase response. it has a similarly flat passband but less attenuation in the stopband than the butterworth filter [22]. the frequency characteristics of exemplary filters of both kinds are presented in figure 3, and their impulse responses in figure 4. the tvr technique is well-suited for processing piecewise-linear rather than smooth signals [23]. it seems potentially useful for denoising the estimates of the antero-posterior position of a foot during walking, because that position is constant during the stance phase and changes approximately linearly during the swing phase. this technique is characterised by a single scalar regularisation parameter whose value needs to be optimised empirically. if the savitzky-golay filter or the tvr technique is applied, the estimates of the derivative can be obtained directly, without computing the denoised sequence of data. however, in the experiments reported herein, better results were obtained by computing the velocity estimates using the central-difference method on the basis of the denoised position estimates.

figure 2. an exemplary sequence of data acquired by means of a kinect v2 device, representative of the antero-posterior position of the left foot of a person who was walking toward that device (a), and the results of numerical differentiation of that sequence using the central-difference method (b).

figure 3. the frequency characteristics of the savitzky-golay filter with a 19-sample window and 2-degree approximating polynomial, and of the butterworth filter of order 2 with the cut-off frequency 1.4 hz.

figure 4. the impulse response of the savitzky-golay filter with a 19-sample window and 2-degree approximating polynomial (a), and of the butterworth filter of order 2 with the cut-off frequency 1.4 hz (b).
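a sketch of the denoise-then-differentiate step, using scipy's butterworth and savitzky-golay filters with parameter values like those considered here; this is an illustration under assumed data, not the authors' implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt, savgol_filter

fs = 30.0                                      # assumed frame rate, hz

def velocity_butter(pos, fs, order=2, fc=1.4):
    """low-pass with a butterworth filter, then central differences."""
    b, a = butter(order, fc / (fs / 2))
    return np.gradient(filtfilt(b, a, pos), 1 / fs)

def velocity_savgol(pos, fs, window=19, degree=2):
    """smooth with a savitzky-golay filter, then central differences."""
    return np.gradient(savgol_filter(pos, window, degree), 1 / fs)

t = np.arange(0, 5, 1 / fs)
pos = 1.2 * t + np.random.default_rng(1).normal(0, 0.05, t.size)
print(np.std(velocity_butter(pos, fs) - 1.2))  # far below the raw ~1 m/s
print(np.std(velocity_savgol(pos, fs) - 1.2))
```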
a recent review of methods of numerical differentiation, which can be used in this context, can be found in [10], chapter 5. exemplary estimates of feet velocities, obtained by means of the three above-described denoising techniques, are shown in figure 5.

2.2. identification of gait-cycle phases

the swing phase and the stance phase were identified as the time intervals in which the velocity of the examined foot is above or below – respectively – an empirically selected threshold. two variants of threshold values were considered here: – a fixed absolute value; – a value relative to the maximum velocity estimate present in the set of data under analysis. exemplary results of this operation are shown in figure 6.

2.3. estimation of spatiotemporal gait parameters

the length and duration of the steps were estimated according to the definitions provided in the documentation of the zebris fdm gait analysis system [24]. the estimates of the duration of the left and right stance phase and the double-support phase were divided by the duration of the gait cycle. the average walking speed was estimated by dividing the total distance travelled during the experiment by the total walking time. the cadence was estimated by computing the inverse of the duration of the steps, averaged over all observed left and right steps. the estimates of each spatiotemporal gait parameter, differing from their median value by more than an empirically selected threshold value, were identified as outliers and removed from the set of results. for brevity, some spatiotemporal gait parameters whose values can be inferred from the values of the parameters mentioned above – such as the stride length or the duration of the swing phase – are omitted here.

2.4. quantification of gait asymmetry

the following indicator – which, to the best of the authors' knowledge, has not been considered in any previous studies – can be used for quantifying gait asymmetry:

$$ r_{lr} = \max_{\tau} \frac{\int_0^T v_l(t)\, v_r(t+\tau)\, \mathrm{d}t}{\sqrt{\int_0^T v_l^2(t)\, \mathrm{d}t \int_0^T v_r^2(t)\, \mathrm{d}t}} \,, \qquad (1) $$

where $v_l(t)$ and $v_r(t)$ denote the horizontal speed of the left and right foot – respectively – at a time instant t belonging to the time interval under analysis [0, T]. this indicator can be interpreted as the maximum of a quasi-correlation between the trajectories of both feet; its values close to 1 indicate near-perfect symmetry, and smaller values – lesser symmetry. the stance time ratio (str), defined as follows:

$$ str = \begin{cases} st_l / st_r & \text{if } st_l / st_r \in [0, 1] \\ st_r / st_l & \text{otherwise} \end{cases} \qquad (2) $$

– where $st_l$ and $st_r$ denote the stance time of the left and right foot, respectively – was used as a reference indicator of gait asymmetry [3].

3. experimentation programme

a set of experiments was completed in order to assess the applicability potential of the considered methodology for spatiotemporal gait analysis. the data representative of the human gait were acquired by means of a kinect v2 device and a zebris fdm pressure-measurement platform simultaneously. three persons walked over the platform 33 times in total. the depth sensor's line of sight coincided with the walking direction. the sketch of the experimental setup is shown in figure 7.

figure 5. the estimates of the feet velocities, obtained by means of numerical differentiation of some exemplary feet trajectories denoised using the butterworth filter (a), the savitzky-golay filter (b) and the tvr technique (c).
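a sketch of the two asymmetry indicators of equations (1) and (2), with discrete sums standing in for the integrals and synthetic speed profiles standing in for measured data:

```python
import numpy as np

def r_lr(v_left, v_right):
    """maximum of the normalised quasi-correlation of equation (1),
    taken over all discrete time shifts."""
    norm = np.sqrt(np.sum(v_left**2) * np.sum(v_right**2))
    corr = np.correlate(v_left, v_right, mode="full")
    return np.max(corr) / norm

def stance_time_ratio(st_left, st_right):
    """stance time ratio of equation (2), always in (0, 1]."""
    ratio = st_left / st_right
    return ratio if ratio <= 1.0 else 1.0 / ratio

t = np.linspace(0, 4, 400)
v_l = np.clip(np.sin(2 * np.pi * t), 0, None)          # left-foot speed lobes
v_r = np.clip(np.sin(2 * np.pi * t + np.pi), 0, None)  # right foot, half cycle later
print(r_lr(v_l, v_r))                                  # close to 1: symmetric gait
print(stance_time_ratio(0.62, 0.64))
```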
figure 6. the exemplary results of the identification of the gait-cycle phases by thresholding the estimates of the foot velocity.

figure 7. sketch of the experimental setup.

subsets of data from the depth sensor, representative of the 2.8 m displacement which comprised the space covered by the platform – with a certain margin – were extracted. according to the examination procedure associated with the zebris platform, the examined persons' passages across the platform were grouped into triplets and the results corresponding to each triplet of passages were averaged; thus, 11 pairs of the estimates of each parameter were obtained (i.e. the pairs comprising one depth-sensor-based estimate and one zebris-platform-based estimate). the mean error and the standard deviation of errors of the depth-sensor-based estimates were evaluated by treating the zebris-platform-based estimates as the reference. the data-processing software associated with the zebris platform was used for evaluating the standard deviation of the zebris-platform-based estimates of spatiotemporal gait parameters. the cycle of data processing was repeated six times, according to six variants of the procedure described in section 2 – the variants being the combinations of the three denoising techniques (the butterworth filter, the savitzky-golay filter and the tvr technique) and two options of velocity threshold values (absolute or relative to the maximum velocity estimate).

4. results

the values of the parameters of the data-processing methods, optimised empirically in such a way as to minimise the differences between the depth-sensor-based and zebris-platform-based estimates of the spatiotemporal gait parameters, are presented in table 1. the selected values of the parameters of the savitzky-golay filter correspond to the cut-off frequency of 1.53 hz. the mean errors and standard deviations of the estimates of spatiotemporal gait parameters are presented in table 2. the use of the absolute velocity threshold consistently yielded better results than the use of the relative threshold in all cases; hence, for brevity, only the results obtained using the absolute threshold are presented in table 2. the values of the indicators of gait asymmetry $r_{lr}$ and str, determined for all the recorded passages (the latter – on the basis of the data from both the depth sensor and the zebris platform), are shown in figure 8. the reported experiments involved only healthy persons whose gait was quite symmetric; thus, all these values are close to 1. to further assess the informative value of the proposed indicator $r_{lr}$, another experiment, including emulation of asymmetric gait, was performed. during this experiment, a single healthy person walked first naturally and next – making fast steps with the right foot and slow steps with the left foot. the values $r_{lr} = 1.00$ and $str = 0.99$ were obtained for natural gait, whereas $r_{lr} = 0.86$ and $str = 0.77$ – for emulated asymmetric gait. the results of this experiment are illustrated in figure 9.

5. discussion and conclusion

all three denoising techniques, introduced in section 2.1, allowed for obtaining estimates of spatiotemporal gait parameters subject to quite similar uncertainty. in the case of the step times, the tvr technique yielded estimates more biased but less dispersed than those of the low-pass filters.
in the cases of other parameters, differences in the uncertainty indicators could be noticed among the different denoising techniques, but the reported results do not justify the indication of any of them as capable of providing the most reliable results. the absolute (rather than the relative) value of the velocity threshold can be recommended for further study.

table 1. empirically selected values of the parameters of data-processing methods in six variants of the procedure for estimation of spatiotemporal gait parameters.

butterworth filter | absolute threshold | relative threshold
order | 2 | 4
cut-off frequency | 1.4 hz | 3.6 hz
velocity threshold | 1.7 m/s | 40 %

savitzky-golay filter | absolute threshold | relative threshold
window length | 19 samples | 19 samples
polynomial degree | 2 | 2
velocity threshold | 1.7 m/s | 42 %

tvr technique | absolute threshold | relative threshold
regularisation parameter | 8 · 10⁻⁴ | 1 · 10⁻³
velocity threshold | 1.6 m/s | 50 %

table 2. mean values and standard deviations of the errors corrupting the estimates of spatiotemporal gait parameters, obtained by means of the depth sensor, and the standard deviations of the corresponding reference estimates obtained by means of the zebris platform; the symbols l and r indicate the left and right side of the body; gct is the acronym of gait-cycle time; spm – of steps per minute. in the first two groups, the three values correspond to the butterworth filter, the savitzky-golay filter and the tvr technique, respectively.

parameter | mean error | standard deviation of errors | standard deviation of zebris estimates
step time (l) | −0.004 s / −0.004 s / −0.009 s | 0.019 s / 0.022 s / 0.016 s | 0.009 s
step time (r) | 0.000 s / 0.001 s / 0.008 s | 0.020 s / 0.023 s / 0.021 s | 0.013 s
step length (l) | −1.4 cm / −0.8 cm / −2.1 cm | 1.8 cm / 3.4 cm / 3.2 cm | 1.4 cm
step length (r) | −0.4 cm / −0.9 cm / 0.1 cm | 3.8 cm / 3.6 cm / 4.1 cm | 1.5 cm
stance time (l) | 0.1 / 0.9 / −0.5 % gct | 2.5 / 2.1 / 2.3 % gct | 0.5 % gct
stance time (r) | −0.2 / −0.3 / −2.1 % gct | 1.0 / 1.1 / 2.1 % gct | 0.8 % gct
double-support time | 0.6 / 0.9 / −1.9 % gct | 4.1 / 2.6 / 3.6 % gct | 0.8 % gct
walking speed | −1.4 / −1.4 / −1.4 cm/s | 2.9 / 2.9 / 2.9 cm/s | 5.1 cm/s
cadence | 0.70 / 0.54 / 0.31 spm | 2.25 / 2.37 / 2.12 spm | 2.46 spm

the mean differences between the depth-sensor-based estimates of spatiotemporal gait parameters and the corresponding reference values were quite small, whereas the standard deviations of the depth-sensor-based estimates were somewhat larger than those of the reference values but within the same order of magnitude. these results indicate a high applicability potential of the considered methodology for gait analysis: the somewhat larger uncertainty of the estimates may be justified by the considerably smaller cost and complexity of the examination setup and procedure, if compared to the use of the zebris platform. moreover, the considered methodology enables one to quantify gait asymmetry. in the experiments involving healthy persons, the values of all considered indicators of gait asymmetry – presented in figure 8 – are, as expected, close to 1; the dispersion of the values of the proposed indicator $r_{lr}$, defined by equation (1), is the smallest. in the experiment including the emulation of asymmetric gait – whose results are presented in figure 9 – significantly different values of the indicators were obtained for symmetric and asymmetric gait.
the aforementioned experimental results suggest that the proposed indicator may become quite useful in clinical practice; however, more experimental work is needed to reliably assess its capability to properly characterise the degree of gait asymmetry. the following practical considerations can be made based on the experiments: – the value of the foot velocity threshold has a significant impact on the uncertainty of the estimates of spatiotemporal gait parameters; in most experiments, the values 1.6-1.7 m/s have yielded the best results. the choice of the type of the low-pass filter used for denoising the foot position estimates does not seem to significantly affect the results, but the values of that filter's parameters need to be selected carefully. – the best results can be obtained if the examined person walks towards the depth sensor along its line of sight; if the walking direction is not parallel to that line, the feet occlude each other from time to time and, consequently, the reliability of the obtained results is significantly reduced. – certain kinds of the examined person's clothing, such as skirts or wide trousers, significantly hinder the localisation of the feet on the basis of depth-sensor data, making it impossible to obtain reliable estimates of spatiotemporal gait parameters. the authors' plans for future work include: – the implementation and testing of other data-processing methods aimed at identifying gait-cycle phases, including the methods based on the distances between the examined person's knees [13] and on the vertical oscillations of that person's centre of mass [14]; – the experiments aimed at assessing the uncertainty of identification of gait-cycle phases, involving the use of a reference optoelectronic gait-analysis system (which, in contrast to the zebris platform, is capable of providing not only the reference values of spatiotemporal gait parameters, but also the reference three-dimensional trajectories of the feet); – the experiments involving persons whose ability to walk is impaired, in particular – whose gait is significantly asymmetrical.

acknowledgement

the authors express their sincere gratitude to dr. katarzyna kaczmarczyk and dr. michalina błażkiewicz from the university of physical education in warsaw for their help in the acquisition of the data used for the experimentation reported in this paper.

references

[1] g. allali, c. p. launay, h. m. blumen, m. l. callisaya, a.-m. de cock, r. w. kressig, v. srikanth, j.-p. steinmetz, j. verghese, o. beauchet, the biomathics consortium, falls, cognitive impairment, and gait performance: results from the good initiative, journal of the american medical directors association 18 (2017), pp. 335-340. doi: 10.1016/j.jamda.2016.10.008
[2] i. mulas, v. putzu, g. asoni, d. viale, i. mameli, m. pau, clinical assessment of gait and functional mobility in italian healthy and cognitively impaired older persons using wearable inertial sensors, aging clinical and experimental research 33 (2021), pp. 1853-1864. doi: 10.1007/s40520-020-01715-9
[3] f. parafita, p. ferreira, d. raab, p. flores, h. hefter, m. siebler, a. kecskeméthy, evaluating balance, stability and gait symmetry of stroke patients using instrumented gait analysis techniques, proc. of the 4th joint international conference on multibody system dynamics, montréal, canada, 2016.
figure 8. the values of the indicators of gait asymmetry, obtained in the experiment on only healthy persons whose gait is quite symmetric; the white horizontal lines indicate the median values; the boundaries of the blue rectangles indicate the first and third quartiles; the black whiskers indicate the minimum and maximum values.

figure 9. the estimates of the speed of the feet, shifted in time so as to maximise the quasi-correlation defined by equation (1), obtained for quite symmetric gait (a) and quite asymmetric gait (b), and the corresponding values of the indicators of gait asymmetry.

[4] r. baker, measuring walking: a handbook of clinical gait analysis, mac keith press, london, united kingdom, 2013, isbn 978-1-908316-66-0. doi: 10.1016/j.jbiomech.2013.08.017
[5] j. h. hollman, e. m. mcdade, r. c. petersen, normative spatiotemporal gait parameters in older adults, gait & posture 34 (2011), pp. 111-118. doi: 10.1016/j.gaitpost.2011.03.024
[6] f. pieruccini-faria, m. montero-odasso, j. m. hausdorff, gait variability and fall risk in older adults: the role of cognitive function, in: falls and cognition in older persons: fundamentals, assessment and therapeutic options, m. montero-odasso and r. camicioli (editors), springer, cham, 2020, isbn 978-3-030-24233-6, pp. 107-138. doi: 10.1007/978-3-030-24233-6_7
[7] m. c. chang, b. j. lee, n.-y. joo, d. park, the parameters of gait analysis related to ambulatory and balance functions in hemiplegic stroke patients: a gait analysis study, bmc neurology 21 (2021), p. 38. doi: 10.1186/s12883-021-02072-4
[8] c. ridao-fernández, e. pinero-pinto, g. chamorro-moriana, observational gait assessment scales in patients with walking disorders: systematic review, biomed research international (2019), p. 2085039. doi: 10.1155/2019/2085039
[9] r. a. clark, b. f. mentiplay, e. hough, y. h. pua, three-dimensional cameras and skeleton pose tracking for physical function assessment: a review of uses, validity, current developments and kinect alternatives, gait & posture 68 (2019), pp. 193-200. doi: 10.1016/j.gaitpost.2018.11.029
[10] j. wagner, p. mazurek, r. z. morawski, non-invasive monitoring of elderly persons: systems based on impulse-radar sensors and depth sensors, springer nature, cham, switzerland, 2022, isbn 978-3-030-96008-7. doi: 10.1007/978-3-030-96009-4
[11] j. shotton, a. fitzgibbon, a. blake, a. kipman, m. finocchio, b. moore, t. sharp, real-time human pose recognition in parts from a single depth image, in proc. of the ieee conference on computer vision and pattern recognition, washington, usa, 2011, pp. 1297-1304. doi: 10.1109/cvpr.2011.5995316
[12] a. hynes, s. czarnuch, m. c. kirkland, m. ploughman, spatiotemporal gait measurement with a side-view depth sensor using human joint proposals, ieee journal of biomedical and health informatics 25 (2021), pp. 1758-1769. doi: 10.1109/jbhi.2020.3024925
[13] e. auvinet, f. multon, v. manning, j. meunier, j. p. cobb, validity and sensitivity of the longitudinal asymmetry index to detect gait asymmetry using microsoft kinect data, gait & posture 51 (2017), pp. 162-168. doi: 10.1016/j.gaitpost.2016.08.022
[14] a. dubois, f. charpillet, measuring frailty and detecting falls for elderly home care using depth camera, journal of ambient intelligence and smart environments 9 (2017), pp. 469-481. doi: 10.3233/ais-170444
[15] k. otte, b. kayser, s. mansow-model, j.
verrel, f. paul, a. u. brandt, t. schmitz-hübsch, accuracy and reliability of the kinect version 2 for clinical measurement of motor function, plos one 11 (2016), p. e0166532. doi: 10.1371/journal.pone.0166532
[16] q. wang, g. kurillo, f. ofli, r. bajcsy, evaluation of pose tracking accuracy in the first and second generations of microsoft kinect, in proc. of the 2015 international conference on healthcare informatics, dallas, usa, 2015, pp. 380–389. doi: 10.1109/ichi.2015.54
[17] d. geerse, b. coolen, d. kolijn, m. roerdink, validation of foot placement locations from ankle data of a kinect v2 sensor, sensors 17 (2017), p. 2301. doi: 10.3390/s17102301
[18] j. a. albert, v. owolabi, a. gebel, c. m. brahms, u. granacher, b. arnrich, evaluation of the pose tracking performance of the azure kinect and kinect v2 for gait analysis in comparison with a gold standard: a pilot study, sensors 20 (2020), p. 22. doi: 10.3390/s20185104
[19] c. ferraris, v. cimolin, l. vismara, v. votta, g. amprimo, r. cremascoli, m. galli, r. nerino, a. mauro, l. priano, monitoring of gait parameters in post-stroke individuals: a feasibility study using rgb-d sensors, sensors 21 (2021), p. 5945. doi: 10.3390/s21175945
[20] r. w. schafer, what is a savitzky-golay filter?, ieee signal processing magazine 28 (2011), pp. 111–117. doi: 10.1109/msp.2011.941097
[21] a. savitzky, m. j. e. golay, smoothing and differentiation of data by simplified least squares procedures, analytical chemistry 36 (1964), pp. 1627–1639. doi: 10.1021/ac60214a047
[22] r. w. schafer, on the frequency-domain properties of savitzky-golay filters, in proc. of the 2011 digital signal processing and signal processing education meeting, sedona, usa, 2011, pp. 54–59. doi: 10.1109/dsp-spe.2011.5739186
[23] j. wagner, r. z. morawski, gait-analysis-oriented processing of one-dimensional data with total-variation regularisation, measurement: sensors 18 (2021), p. 100287. doi: 10.1016/j.measen.2021.100287
[24] zebris medical gmbh, zebris fdm 1.16.x software manual. online [accessed 12 december 2022] https://www.zebris.de/en/service/downloads
acta imeko july 2012, volume 1, number 1, 42–48 www.imeko.org
characterization of the isdb-tb critical spectrum mask
pablo n. de césare, marcelo tenorio
national institute of industrial technology (inti), radiocommunication laboratory, av gral. paz 5445, san martín, buenos aires, argentina
keywords: isdb-t, spectrum mask, intermodulation, spectrum analyzer
citation: pablo n.
de césare, marcelo tenorio, characterization of the isdb-tb critical spectrum mask, acta imeko, vol. 1, no. 1, article 10, july 2012, identifier: imeko-acta-01(2012)-01-10
editor: pedro ramos, instituto de telecomunicações and instituto superior técnico/universidade técnica de lisboa, portugal
received january 9th, 2012; in final form july 6th, 2012; published july 2012
copyright: © 2012 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: no information available
corresponding author: pablo de césare, e-mail: decesare@inti.gob.ar
abstract: non-linearity in the isdb-t transmission chain causes intermodulation products that widen the spectrum emission and should be taken into account when assigning frequency channels. the characterization of the transmission spectrum mask is one of the most important measurements in order to achieve the best use of the electromagnetic spectrum. the use of the critical mask defined in [1] allows the allocation of co-site adjacent channels for an efficient use of the electromagnetic spectrum, which is a finite and limited resource. this paper discusses different test procedures using spectrum analyzers and dedicated digital tv analyzers in order to measure the isdb-t transmission mask.
1. introduction
the transmission spectrum of the integrated services digital broadcasting-terrestrial (isdb-t) signal [1] consists of 13 successive ofdm segments. the nature of ofdm modulation is to send the information parallelized in thousands of modulated carriers very close to each other, obtained through an ifft algorithm. these carriers are then amplified by a power amplifier chain with a non-linear behaviour that generates intermodulation products, which increase the out-of-band emission, interfering with the adjacent channels and also degrading signal quality due to inter-carrier interference. reducing out-of-band emissions demands the incorporation of an external filter at the output of the power transmitter to eliminate the intermodulation products generated [2][3]. as defined in the brazilian digital terrestrial television standard abnt nbr 15601, there are three different spectrum masks: non-critical, sub-critical and critical. the difference between them is the attenuation of the out-of-band emission, the critical mask being the most difficult to achieve and to test. transmission spectrum masks are commonly tested with spectrum analysers, but in isdb-t the requirements are so high that more sophisticated measurement test methods must be incorporated into digital tv analysers. in this paper an alternative methodology for the transmission spectrum mask measurement is proposed and compared with a multi-purpose spectrum analyser and other dedicated digital tv analysers; the characteristics of the required test equipment are described and the uncertainty calculation for the proposed method is included.
2. test analysis
2.1. isdb-t signal analysis
isdb-t modulation can be configured to operate in three different modes: mode 1 or 2k, mode 2 or 4k and mode 3 or 8k, which modify the number of carriers per ofdm segment. the frequency bandwidth shall be 5.7 mhz, the ofdm carrier bandwidth being 5.572 mhz with 4 khz spacing between carrier frequencies in mode 1. this bandwidth applies regardless of the chosen mode, and is selected to ensure that the 5.610 mhz bandwidth has a gap, where each carrier of the uppermost and lowermost frequencies in the 5.572 mhz bandwidth includes 99 % of the energy. according to the standards abnt nbr 15601 and arib std-b31, if the spectrum mask is tested with a resolution bandwidth of 10 khz, the power density is reduced by

10 log10(5.572 mhz / 10 khz) = 27.46 db. (1)

abnt nbr 15601 table 41 defines the limits of the out-of-band emission, indicating the attenuations in relation to the transmitter average power.
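the channel-power-to-power-density correction in equation (1) is easy to verify numerically; the snippet below is a minimal python sketch (the function name is ours, not from the paper):

```python
import math

def power_density_correction_db(occupied_bw_hz: float, rbw_hz: float) -> float:
    """correction between total channel power and the power density
    measured in one resolution bandwidth, as in equation (1)."""
    return 10.0 * math.log10(occupied_bw_hz / rbw_hz)

# isdb-t: 5.572 mhz occupied bandwidth measured with a 10 khz rbw
print(round(power_density_correction_db(5.572e6, 10e3), 2))  # -> 27.46
```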
the final purpose of this test is to characterize the power density distribution; thus, the limits given in [1] should be specified as attenuation in relation to the power density at 10 khz, as shown in figure 1.
2.2. two tone intermodulation phenomena
a two-tone intermodulation phenomenon can be observed when two cw sources are combined at the input of an amplifier and their frequencies are inside the passband of the device being tested. nonlinearities in the amplifier will lead to intermodulation products of the form (n·f1 − m·f2). the components (2f1 − f2), (2f2 − f1) and (3f1 − 2f2), (3f2 − 2f1) are known respectively as third- and fifth-order intermodulation products, and their frequencies and levels are close to the fundamental tones f1 and f2. higher-order products are generally negligible in comparison. this phenomenon is also present in the test equipment and is one of the limits of the available dynamic range. the input mixer of a spectrum analyser is a non-linear device, so it always generates distortion by itself. most spectrum analysers use diode mixers, whose current can be expressed as

i = i_s · (e^(q·v/(k·t)) − 1), (2)

where i_s is the diode's saturation current, q is the electron charge (1.60×10^−19 c), v is the instantaneous voltage, k is boltzmann's constant (1.38×10^−23 j/k) and t is the temperature in kelvin. expanding into a power series leads to

i = i_s · (k1·v + k2·v^2 + k3·v^3 + …). (3)

consider that two tones and the local oscillator are input into the mixer. in this case the input voltage is given by

v = v_lo·sin(ω_lo·t) + v1·sin(ω1·t) + v2·sin(ω2·t). (4)

the following unwanted mixing products are then also generated at the mixer output:

(k4·v_lo·v1^2·v2 / 8)·cos([ω_lo − (2ω1 − ω2)]·t) and (k4·v_lo·v1·v2^2 / 8)·cos([ω_lo − (2ω2 − ω1)]·t), (5)

where

k4 = (1/4!)·(q/(k·t))^4. (6)

if v1 and v2 have the same amplitude, their products can be considered as cubic terms. the third-order products rise at a rate of 3 db/db, as shown in figure 2.
2.3. instrument available dynamic range
dynamic range is the ratio, expressed in db, of the largest to the smallest signals simultaneously present at the input of the instrument that allows measurement of the smaller signal to a given degree of uncertainty. three factors limit the dynamic range: internal noise, internal intermodulation performance and the phase noise of the local oscillator. the instrument internally generates noise and distortion that affect the measurement's accuracy. the spectrum analyser's noise floor is described by the displayed average noise level (danl), and signals below this level cannot be seen.
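to make the intermodulation bookkeeping of section 2.2 concrete, the following illustrative sketch (ours, not from the paper) lists the third- and fifth-order product frequencies for two tones:

```python
def odd_order_im_products(f1: float, f2: float):
    """third- and fifth-order two-tone intermodulation frequencies (hz)."""
    return {
        "third": (2 * f1 - f2, 2 * f2 - f1),
        "fifth": (3 * f1 - 2 * f2, 3 * f2 - 2 * f1),
    }

# two tones 1 mhz apart inside a hypothetical uhf channel
print(odd_order_im_products(473e6, 474e6))
# the third-order products land at 472 and 475 mhz, right next to the tones,
# which is why they cannot be filtered out and limit the dynamic range
```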
the input attenuator strongly affects the sensitivity of the analyser to display low-level signals, by attenuating the input signal and reducing the signal-to-noise ratio (snr). the resolution bandwidth also affects the snr, since the total noise power is determined by the width of the if filter. the displayed noise level varies following the relation

n = 10 log10(rbw2 / rbw1). (7)

since the danl is often referred to a specific if-filter bandwidth, it is easy to obtain the displayed noise for any if-filter bandwidth. the isdb-t spectrum mask measurement defines a 10 khz if-filter. the unwanted distortion products generated by the input mixer fall at the same frequencies as the distortion products under measurement in the isdb-t input signal. the effect of the intermodulation can be seen as a spectral growth, usually described as "spectrum shoulders", as can be seen in the top trace of figure 3. the phase relationship generates additions and cancellations between the internally generated intermodulation products and the signal that we want to measure. the lack of knowledge of the phase relationship between them is a source of uncertainty that also depends on their amplitude relationship through the relation

error_internal intermodulation = 20 log10(1 + 10^(−d/20)), (8)

where d is the difference in db between the amplitude of the intermodulation products generated internally and the amplitude of the intermodulation of the incoming signal. the probability distribution can be assumed to be rectangular. the third-order performance is given by the third-order intercept point (toi) and represents the mixer level at which the internally generated intermodulation would be equal to the fundamental tones. this point is obtained by sweeping the levels of the fundamental tones f1 and f2 and observing the levels of the third-order components as a function of the input power level. the dynamic range can be plotted as a function of the mixer input signal level. low levels reduce the snr but cause lower intermodulation products; high levels lead to more noise immunity but increase the intermodulation products. figure 4 shows the hp8564 [4] available dynamic range for the isdb-t spectrum mask, where 78 db is the best condition, obtained for an input level of −27 dbm. as was shown in figure 1, the required dynamic range for the transmission spectrum mask is obtained by adding the total-channel-power to power-density correction (27.46 db) and the highest attenuation defined in [1] (97 db):

required dynamic range = 27.46 db + 97 db = 124.46 db. (9)

at present there is no available spectrum analyser with a dynamic range of 124.46 db. most digital tv analysers have been equipped to perform the spectrum transmission mask test; however, they have neither the better toi level nor the lower danl required. they only incorporate the facility to add the spectrum emission with the filter mask response. table 1 gives a comparison between a spectrum analyser hp8564 [4] and the digital tv analysers anritsu ms8911b [5] and rohde & schwarz eth [6] and etl [7].
figure 1. isdb-t critical mask.
figure 2. two-tone intermodulation.
figure 3. spectral growth caused by the input mixer level.
3. test procedure
3.1.
using a spectrum analyser
to solve the excessive dynamic range requirement, out-of-band emissions can be tested by measuring the isdb-t signal before the mask filter and the filter response separately, and then multiplying both curves (or adding them if they are expressed in db). figure 5a shows the filter characterization block diagram, using a tracking generator synchronized with a spectrum analyser with the same settings defined in abnt nbr 15601. the filter characterization should be saved in an internal trace memory, as figure 5b shows, where it is necessary that the top of the trace has the same value as the reference level of the spectrum analyser. figure 6 shows one of the possible block diagrams of the isdb-t signal measurement system. the directional coupler before the mask filter must be previously characterized.
figure 4. hp8564 dynamic range.
figure 5. filter characterization: a) block diagram and b) characterization trace.
figure 6. isdb-t characterization block diagram.
table 1. instruments' specifications.
to obtain the best dynamic range it is necessary to add internal and external attenuation. insufficient attenuation will cause intermodulation products close to the signal under test, and an excessive one will increase the noise level. in practice it is desirable to adjust the attenuation in 1 db steps until the best dynamic range is obtained. most spectrum analysers have many functions, like the addition and subtraction of traces. the peak power density should have the same value as the reference level of the spectrum analyser. when emissions cannot be measured due to the dynamic range, it is possible to recall the filter curve from an auxiliary trace and add it to the isdb-t spectrum obtained before the mask filter, as shown in figure 7. filter correction is very important for frequencies beyond ±3 mhz from the centre frequency. within an offset of ±3 mhz from the centre frequency the available dynamic range should be enough to determine the compliance of the transmission spectrum mask, and results with lower uncertainties can be obtained applying the filter corrections only outside these limits, with the configuration shown in figure 6. within ±3 mhz from the centre frequency, lower uncertainties are obtained with the block diagram shown in figure 8, without any correction of the filter transfer function.
3.2. using dedicated dtv analysers
when using the anritsu ms8911b with option 030 for isdb-t analysis, a pre-loaded default filter response can be used. a comparison between the ms8911b pre-loaded default filter and that of a commercial rfs filter [3] is shown in figure 9. the eth tv analyser from rohde & schwarz has an integrated tracking generator for filter characterization that allows saving the response to an internal trace. the etl tv analyser from rohde & schwarz also has an integrated tracking generator for filter characterization, but the filter response should be manually entered into a register.
4. uncertainty analysis
4.1. uncertainty due to amplitude accuracy
when the signal under test is applied to the instrument, the first source of uncertainty is given by impedance mismatch, which causes the incident and reflected signals to add constructively or destructively. this causes the signal received by the analyser to be larger or smaller than the original one. this measurement is made with the same attenuator settings, so the input attenuation switching uncertainty does not need to be considered.
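the magnitude of this constructive/destructive effect can be bounded with the textbook relation 20·log10(1 ± Γg·Γl); the sketch below is ours, with hypothetical reflection coefficients:

```python
import math

def mismatch_error_limits_db(gamma_g: float, gamma_l: float):
    """limits of the level error caused by generator/load mismatch;
    the incident and reflected waves may add constructively or destructively."""
    m = gamma_g * gamma_l
    return 20.0 * math.log10(1.0 + m), 20.0 * math.log10(1.0 - m)

# hypothetical reflection coefficients of source and analyser input
print([round(x, 3) for x in mismatch_error_limits_db(0.05, 0.10)])
# -> [0.043, -0.044]; in db this error is u-shaped, which is why
# table 2 assigns the mismatch contribution a divisor of sqrt(2)
```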
the input signal is mixed with the local oscillator, and their flatness contributes to the frequency response uncertainty. spectrum analysers also have a band switching uncertainty, but in measurements with a 30 mhz span it does not usually need to be added. after the input signal is converted to an if, it passes through the if gain amplifier and if attenuator that reference the input signal amplitudes to the reference level. reference level accuracy and resolution bandwidth switching do not add uncertainty in relative measurements such as the spectrum mask, but the display scale fidelity should be included. table 2 summarizes the sources of uncertainty in relative measurements using a spectrum analyser. uncertainty due to impedance mismatch can be calculated in the same way as in power measurements [8][9]: the systematic error is 1 − Γl², while the limits of mismatch uncertainty are ±2·Γg·Γl, caused by the lack of knowledge of the phase difference between Γg and Γl. in most cases, uncertainty due to mismatch is relatively small.
figure 7. transmission spectrum mask result.
figure 8. isdb-t characterization block diagram within ±3 mhz from the centre frequency.
figure 9. anritsu ms8911b default filter comparison (rfs 8pxwe characterization vs. anritsu ms8911b default filter characteristic, normalized to −27.4 db).
table 2. amplitude uncertainty in relative measurements.
source of uncertainty | probability distribution | divisor
impedance mismatch | u-shape | √2
frequency response | rectangular | √3
display scale fidelity | rectangular | √3
4.2. uncertainty due to filter characterization
following the connections described in figure 5a, the filter characterization uncertainty includes the same sources described in table 2, in addition to the frequency response of the tracking generator and the mismatch at the input and output ports. this implies that the filter response trace, as figure 5b shows, is obtained with a given level of uncertainty that is carried through to the final result. the attenuation of a filter inserted between a generator and a spectrum analyser that are not perfectly matched [8] has a standard deviation of mismatch, m [db], which can be approximated by

m = (8.686/√2) · [Γg²·s11f² + Γl²·s22f² + Γg²·Γl²·s21f⁴]^0.5 [db], (10)

where s11f and s21f can be obtained from the filter specifications [3]. table 3 shows the uncertainty sources of the filter characterization measurement.
5. results
the device under test and the instruments used during the test are shown in table 4.
5.1. measuring the transmission spectrum mask after the filter, within ±3.5 mhz offset from the centre frequency, using a spectrum analyser
figure 10a shows the transmission mask measurement performed as figure 8 describes. figure 10b represents the same information as a function of the frequency offset from the channel centre frequency. the uncertainty budget is shown in table 6. the internal intermodulation error shown in table 5 is obtained using the spectrum analyser settings and the calculated available dynamic range.
table 3. amplitude uncertainty in filter characterization.
source of uncertainty | probability distribution | divisor
uncertainty for filter mismatch | normal | 1
generator amplitude linearity | rectangular | √3
display scale fidelity | rectangular | √3
frequency response | rectangular | √3
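equation (8) gives the additive bound of the internal-intermodulation error used later in tables 5 and 8; the small sketch below (ours) evaluates both the additive and the cancellation bound for the two values of d used in the paper:

```python
import math

def internal_im_error_bounds_db(d_db: float):
    """error bounds caused by internal intermodulation, cf. equation (8);
    the internal products may add to or cancel the incoming ones,
    d being their amplitude difference in db."""
    r = 10.0 ** (-d_db / 20.0)
    return 20.0 * math.log10(1.0 + r), 20.0 * math.log10(1.0 - r)

print([round(x, 2) for x in internal_im_error_bounds_db(18.0)])  # [1.03, -1.17]
print([round(x, 2) for x in internal_im_error_bounds_db(22.0)])  # [0.66, -0.72]
# consistent with the ≈1.1 db (d = 18 db) and ≈0.7 db (d = 22 db)
# figures reported in tables 5 and 8
```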
table 4. devices under test and test instruments.
isdb-t transmitter | nec dtl-10/1r6p 1 kw uhf digital tv transmitter system
output filter | rfs 8pxwe
spectrum analyser | hp8564e
sweep generator | r&s smp02
variable attenuator, 1 db step | hp84904l
table 5. internal intermodulation error calculation.
analyzer parameter | value
reference level [dbm] | −26.7
internal attenuator [db] | 30
power/power density [db] | 27.46
mixer level [dbm] | −29.24
available dynamic range [db] | −78
d [db] | 18
error_internal intermodulation [db] | 1.1
table 6. transmission mask measurement uncertainty for ±3.5 mhz offset from the centre frequency.
source of uncertainty | value [db] | probability distribution | divisor | ui [db] | veff
impedance mismatch | 0.12 | u-shape | √2 | 0.08 |
frequency response | 0.80 | rectangular | √3 | 0.46 |
display scale fidelity | 0.85 | rectangular | √3 | 0.49 |
internal intermodulation | 1.10 | rectangular | √3 | 0.64 |
combined standard uncertainty | | normal | | 0.93 | >1000
expanded uncertainty | | k = 2 | | 1.86 | >1000
figure 10. critical mask measurement within ±3.5 mhz from the channel centre frequency: a) absolute frequency, b) offset from the centre frequency.
5.2. uncertainty budget
the uncertainty budget for the transmission mask can be seen in table 6 for the case of ±3.5 mhz offset from the centre frequency.
5.3. measuring the transmission spectrum mask before the filter, beyond ±3.5 mhz offset from the centre frequency, using a spectrum analyser
for testing the transmission spectrum mask at frequency offsets higher than 3.5 mhz, two measurements must be done: the filter characterization and the spectrum emission measurement before the filter. the results of the filter characterization measurement, following the connections described in figure 5a, are shown in figure 11, and the uncertainty budget in table 7.
5.4. filter characterization uncertainty budget
figure 12 shows the filter characterization trace, the spectrum emission trace measured before the filter, and the result obtained as the addition of these two traces. the results are compared with the limits of the critical mask defined in [1].
table 7. filter characterization uncertainty.
source of uncertainty | value [db] | probability distribution | divisor | ui [db] | veff
filter impedance mismatch | 0.31 | normal | 1 | 0.31 |
generator amplitude linearity | 0.60 | rectangular | √3 | 0.35 |
frequency response | 0.80 | rectangular | √3 | 0.46 |
display scale fidelity | 0.85 | rectangular | √3 | 0.49 |
combined standard uncertainty | | normal | | 0.74 | >1000
expanded uncertainty | | k = 2 | | 1.49 | >1000
table 8. internal intermodulation error calculation.
analyzer parameter | value
mixer level [dbm] | −29.24
available dynamic range [db] | −78
d [db] | 22
error_internal intermodulation [db] | 0.7
table 9. transmission mask measurement uncertainty.
source of uncertainty | value [db] | probability distribution | divisor | ui [db] | veff
impedance mismatch | 0.12 | u-shape | √2 | 0.08 |
frequency response | 0.80 | rectangular | √3 | 0.46 |
display scale fidelity | 0.85 | rectangular | √3 | 0.49 |
internal intermodulation | 0.7 | rectangular | √3 | 0.20 |
filter characterization | 0.74 | normal | 1 | 0.74 | >1000
combined standard uncertainty | | normal | | 1.11 | >1000
expanded uncertainty | | k = 2 | | 2.22 | >1000
figure 11. filter characterization measurement.
figure 12. transmission spectrum mask measurement results.
figure 13. transmission spectrum mask measurement results obtained with the anritsu ms8911b.
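the budget arithmetic in table 6 can be reproduced with a few lines of python (ours): the root-sum-of-squares of the divided contributions gives the combined standard uncertainty, and k = 2 the expanded one:

```python
import math

DIVISOR = {"u-shape": math.sqrt(2), "rectangular": math.sqrt(3), "normal": 1.0}

# (source, value in db, assumed distribution), as in table 6
budget = [
    ("impedance mismatch", 0.12, "u-shape"),
    ("frequency response", 0.80, "rectangular"),
    ("display scale fidelity", 0.85, "rectangular"),
    ("internal intermodulation", 1.10, "rectangular"),
]

u_c = math.sqrt(sum((v / DIVISOR[d]) ** 2 for _, v, d in budget))
print(round(u_c, 2), round(2.0 * u_c, 2))  # -> 0.93 1.86, as in table 6
```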
5.5. uncertainty due to internal intermodulation
to measure the isdb-t signal before the filter, the available intermodulation margin is d = 22 db. table 8 shows the spectrum analyser parameters and the resulting error generated by the internal intermodulation.
5.6. measuring the transmission spectrum using the anritsu ms8911b
the results of the critical mask transmission measurements using the anritsu ms8911b are shown in figure 13, where the filter characterization measurement described before was replaced by the pre-loaded default filter response.
6. conclusions
it is possible to reduce the measurement uncertainty of the transmission spectrum mask within ±3.5 mhz from the centre frequency using the available dynamic range, when the internal intermodulation error is relatively low. beyond ±3.5 mhz from the centre frequency, where the requirement of dynamic range exceeds the available one, a filter response correction is applied and the results are obtained by adding the mask filter response to the isdb-t signal measured before it. in this case the uncertainty is higher, due to the contribution of the filter characterization uncertainty. isdb-t testers have similar performance to multi-purpose spectrum analysers for testing the transmission spectrum mask. the use of a pre-loaded default filter response gives a good approximation of the final measurement result, but the results obtained should not be used for approval tests, the mask filter characterization being the more accurate method. digital tv analysers incorporate facilities to add the mask filter response to the isdb-t signal, but their main advantage is the baseband analysis, allowing measurements like modulation error ratio (mer), echo pattern, i-q imbalance, and bit error rate performance that characterize the signal quality.
references
[1] abnt nbr 15601a – digital terrestrial television – transmission system (in portuguese), associação brasileira de normas técnicas, 2007. isbn: 978-85-07-00539-1.
[2] hossein yamini and vijaya kumar devabhaktuni, "cad of dual-mode elliptic filters exploiting segmentation", 36th european microwave conference, 10-15 sept. 2006, pp. 560-563.
[3] rfs, technical data sheet, pxw series.
[4] agilent 8560 e-series spectrum analysers, product specification.
[5] anritsu ms8911b, product specification.
[6] rohde & schwarz eth, product specification.
[7] rohde & schwarz etl, product specification.
[8] i.a. harris, f.l. warner, "re-examination of mismatch uncertainty when measuring microwave power and attenuation", iee proceedings, part h, microwaves, optics and antennas, vol. 128, pt. h, no. 1, february 1981, pp. 35-41.
[9] glenn f. engen, "microwave circuit theory and foundations of microwave metrology", iet 1992, isbn 0-86341-287-4, pp. 33-83.
acta imeko february 2015, volume 4, number 1, 82–89 www.imeko.org
fpga-based real time compensation method for medium voltage transducers
gabriella crotti 1, daniele gallo 2, domenico giordano 1, carmine landi 2, mario luiso 2
1 inrim, strada delle cacce 91, 10135 torino, italy
2 department of industrial and information engineering, second university of naples, via roma 29, 81031 aversa (ce), italy
section: research paper
keywords: voltage transducer, voltage measurement, power system measurement, power quality measurement, optimization
citation: gabriella crotti, daniele gallo, domenico giordano, carmine landi, mario luiso, fpga-based real time compensation method for medium voltage transducers, acta imeko, vol. 4, no. 1, article 13, february 2015, identifier: imeko-acta-04 (2015)-01-13
editor: paolo carbone, university of perugia
received december 31st, 2013; in final form november 12th, 2014; published february 2015
copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was supported by measurement science consultancy, the netherlands
corresponding author: mario luiso, e-mail: mario.luiso@unina2.it
abstract: with the increase of the distributed power connected to the medium voltage networks, capillary monitoring of the power quality becomes essential. this entails the spread of transducers with suitable frequency bandwidths, as required by the relevant standards. the paper describes a real time compensation method for the extension of the frequency bandwidth of medium voltage dividers whose performances do not allow measurements over a wide frequency range. this approach will contribute to keeping the costs of this innovation low.
1. introduction
the growing need of power quality analysis in medium voltage (mv) grids, required by the increasing diffusion of distributed generation sources, has led to the employment of transducers and measurement techniques able to convert and acquire voltage signals with improved accuracy. many applications require the use of transducers with a wide frequency bandwidth ([1]-[3]), like power quality measurements ([4]), measurements on photovoltaic plants ([5]), and energy and power meter calibration ([6]). for this purpose, voltage dividers, which show a higher frequency bandwidth and better linearity than conventional voltage transformers, are becoming the most used transducers in power quality assessments. however, disadvantages like a higher dependency of the divider transfer function on environmental conditions (temperature, proximity effects) are introduced. the proximity effects can be reduced by shielding the device ([7]); this produces an increase of the stray capacitive coupling among the elements, whose strength varies with the insulation medium. many dividers, for economic and safety reasons, make use of resin as an insulating medium, whose electric characteristics can strongly depend on temperature, voltage and frequency ([8], [9]). with the aim of improving the performances of these dividers, so allowing their application to grids with severe electrical and environmental conditions, like railway feed systems and on-board mv circuits, a real time compensation method is proposed. in the scientific literature several papers face the issue of characterizing and compensating measurement instrument transformers ([10]-[16]). most of the presented techniques perform compensation of measuring transformers only at industrial frequency; moreover, to the best of the authors' knowledge, none of them applies the compensation to transducers intended for medium voltage power systems. so, in this paper, an improved technique for compensating voltage transducers in a wide frequency range is presented.
it is based on the identification of a digital filter, with a frequency response equal to the inverse of that of the divider, which is executed on a field programmable gate array (fpga) equipped with a/d and d/a converters. as an application, the transfer function for the frequency compensation of two mv dividers is identified. the two chosen dividers have strongly different frequency behaviours; therefore the application of the proposed technique to the compensation of their frequency responses represents an interesting test bench. the paper is organized as follows. in section 2 there is a description of the operation of the two dividers. in section 3 the measurement setup used to characterize the dividers is presented. section 4 deals with the compensation technique. section 5 presents the optimization results and finally section 6 shows the experimental results related to the compensated dividers.
2. divider features
the compensation technique is applied to two dividers with different frequency behaviour. one is a pure resistive divider (rd) and the other is a resistive-capacitive divider (rcd). the highest voltage for equipment (um) is 24 kv for both. the resistive divider has a rated scale factor knom of 10000 and a 30 mω high voltage arm. the rcd, whose knom is 1000, is made, for the mv arm, of a series of 20 cells. each cell consists of a 10 mω non-inductive thin film resistor ([8], [9]) and a 1.5 nf ceramic class 1 capacitor, parallel connected. for the low voltage arm an equivalent resistance of 155 kω and 116.4 nf ceramic capacitors are used. the mv cells are disposed in four layers, each one made of five cells, which are series connected following an opposite path to minimize the stray inductances. a two-section cylindrical shield allows the control of the capacitive coupling due to the surrounding structures (figure 1). both dividers are resin insulated, with a consequent downgrade of their performances. the transfer functions of the two dividers show strongly different dynamic behaviours (figures 2 and 3). the scale factor and phase error of the rd (figure 2) have an overall variation of 30 db and about 1.2 rad respectively over 4 decades, while the rcd (figure 3) shows 12 % and 100 mrad of variation over the same frequency range. the markedly different frequency behaviour of the two dividers represents an interesting test bench for the compensation technique; in fact the identification of the parameters of the compensation transfer function requires a growing effort from the rd to the rcd.
3. divider characterization
the complex frequency response of a voltage divider with a scale factor higher than 1000 requires a characterised measurement system able to compare voltage phasors whose amplitudes differ by three orders of magnitude.
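before moving to the characterization setup, a rough illustration of why the response of such a divider varies with frequency may be useful: when the time constants of the two arms are not matched, the scale factor drifts. the sketch below is ours, with purely illustrative component values, not those of the dividers described above:

```python
import numpy as np

def rc_divider_response(f, r_hv, c_hv, r_lv, c_lv):
    """complex ratio v_out/v_in of a simple two-arm rc divider,
    each arm being a parallel r-c impedance."""
    w = 2j * np.pi * np.asarray(f)
    z_hv = r_hv / (1 + w * r_hv * c_hv)
    z_lv = r_lv / (1 + w * r_lv * c_lv)
    return z_lv / (z_hv + z_lv)

f = np.logspace(1, 5, 5)  # 10 hz .. 100 khz
# matched arm time constants would give a flat ratio; the slight mismatch
# chosen here makes the scale factor drift with frequency
h = rc_divider_response(f, r_hv=200e6, c_hv=75e-12, r_lv=155e3, c_lv=90e-9)
print(np.abs(h))
```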
this comparison can be performed by means of digitisers coupled with an attenuator probe or a reference divider with a high scale factor and a frequency response up to at least 1 mhz. as an alternative, to perform the frequency sweep at low voltage (e.g. hundreds of volts) without making use of a reference voltage transducer, two agilent 3458 multimeters (dmms) are employed as digitizers. they have full scale ranges from 100 mv to 1000 v and a frequency bandwidth up to 12 mhz, depending on the selected digitizing technique. the synchronization between the two multimeters is provided by an external trigger supplied by a waveform generator. before introducing the measurement circuit, some aspects of the dmm digitizing methods are discussed in the following.
figure 1. resistive-capacitive shielded divider during the assembly step.
3.1. multimeter digitizing methods
the multimeters provide three digitizing methods: dcv, direct sampling and subsampling. two of them, dcv and sub-sampling, are implemented for the divider frequency characterisation. with the dcv technique, the signal is acquired just in dc mode, and the sample is obtained by integration of the input signal over a specified integration time interval, which can be varied from 500 ns up to 1 s. the technique offers speed and resolution tradeoffs from 18 bits (5½ digits) at 6 khz to 16 bits (4½ digits) at 100 khz, as well as a high input impedance. however, the maximum sample rate of 100 khz, associated with a bandwidth limited to 30 khz for the 100 v and 1000 v full scales, makes this digitizing technique not adequate to perform the frequency analysis up to 100 khz, but it ensures a high magnitude and phase accuracy up to tens of hertz. the sub-sampling technique employs a track and hold (t&h) circuit which performs the signal integration over a very small interval (2 ns) and holds the integrated sample during the slower a/d conversion. the required a/d conversion time limits the actual sample frequency to 50 khz, but thanks to the subsampling algorithm the equivalent sampling frequency reaches 100 mhz. the bandwidth at 100 v peak full scale is limited to 1·10^8 v·hz, which means that at 100 v the bandwidth drops to 1 mhz. table 1 summarizes the frequency bandwidth, the input impedance and the best accuracy for the employed scales and digitizing methods.
3.2. measurement set-up
the measurement set-ups for the subsampling and dcv digitising methods are shown in figures 4a and 4b respectively. in both cases a waveform generator gives the trigger signal. for the dcv mode, a 5 v square wave signal triggers the simultaneous acquisition of each sample by the two multimeters, and the applied voltage is given by a power amplifier (dc – 500 khz, 120 v) supplied by a calibrator. when the subsampling technique is used (a picture of this configuration is shown in figure 5), the output of the waveform generator supplies the power amplifier and the available sync output triggers the two multimeter acquisition sequences. since with this approach the multimeter time base is used, errors can be introduced because of slight differences in the two timebases.
figure 2. frequency behaviour of rd divider between 20 hz and 90 khz (a) and zoom between 20 hz and 300 hz (b).
figure 3. frequency behaviour of rcd divider between 10 hz and 90 khz (a) and zoom between 10 hz and 300 hz (b).
table 1. multimeter specifications.
mode | peak full scale | bandwidth | input impedance | best accuracy
dcv | 100 mv | 80 khz | >10^10 ω | 0.00005 – 0.01 %
dcv | 1 v | 150 khz | >10^10 ω |
dcv | 100 v | 30 khz | >10 mω |
dcv | 1000 v | 30 khz | >10 mω |
sub-sampling | 100 mv | 12 mhz | 1 mω // 144 pf | 0.02 %
sub-sampling | 1 v | 12 mhz | 1 mω // 144 pf |
sub-sampling | 100 v | 12 mhz (1 mhz at 100 v) | 1 mω // 144 pf |
for this purpose, a filter can be adopted, whose frequency response, hd(f), should be exactly given by: (2) for any frequency, f, in the range of interest. the analog implementation of transfer function (2) is not easily practicable and it can lead to acceptable results only if applied to a very limited frequency range. better results can be obtained with digital filtering. obviously, for a real-time compensation, a digital processor has to implement such digital filtering. in the following the procedure for identifying the filter is described, while the hardware which executes it in real time is shown in the next sections. two main implementations for digital filters exist: fir and iir. fir filters are relatively simple to compute and inherently stable but their main drawback, compared with   a)    b)  figure 4. measurement set‐up for sub‐sampling digitizing method (a) and  for dcv method (b).  figure 5. measurement set‐up for the subsampling configuration. broadband amplifier dmm1 dmm2 voltage divider input output gpib controller sync out input wavefor generator output trigger calibrator outputtrigger output waveform generator input outsync controller gpib output input voltage divider dmm2 dmm1 broadband amplifier waveform multimeters broad band amplifier waveform generator acta imeko | www.imeko.org  february 2015 | volume 4 | number 1 | 86  those of iir filters, is that they may need a large number of coefficients to approximate a desired response; moreover they can only introduce a delay in phase frequency response. this makes them ineffective for the aim of this paper. an iir filter is generally modeled by a transfer function in the z-domain that can be written as: ⋯ ⋯ . (3) with this approach, filter design requires the choice of the best values for parameters a1, …, an and b0, b1, …, bm so that the transfer function of the filter approximates a desired frequency characteristic. the problem of choosing the best coefficients can be formulated, from a mathematical point of view, as an inverse problem ([18], [19]) and solved by adopting optimization techniques ([16], [17]). an objective function, describing the difference among desired frequency response and obtained values has to be defined and minimized by an optimization algorithm. the choice of the objective (or cost) function affects the optimality and the computational complexity of the solution. a robust design should account for filter response over its entire bandwidth. as illustrated in the previous section, the automated station used for divider characterization operates in the range of dc 100 khz: therefore, the best filter is identified in this range. anyway, identifying a filter, assigning frequency response data over its entire bandwidth, is a more complicated task, since its transfer function has fewer degrees of freedom. an efficient solution has been found choosing different expressions for filter transfer function and objective function. as it is said in [20], if the filter transfer function, h(z), is factorized in second order sections (sos), the frequency response is less sensitive to changes in coefficient values. this factorization can be expressed as: ∏ , , , , (4) in addition, for the objective function ([21]), the following expression is used: ∑ ∙ |log , log | ∙ log log (5) where p is the vector of the 4n+1 variables of (4), m is the number of the frequency points involved in the identification procedure and w is the vector of the weights. 
the cost function (5) weighs the ratio, rather than the difference, between the model frequency response and the frequency response data at each frequency. in addition, a logarithmically spaced frequency interval has been used. practically it is like including bode’s concept in the cost function, in fact, it weighs the difference between bode diagrams of the model frequency response and the frequency response data. it is important to note that this exactly addresses the real problem as what is really requested, it is to minimize the difference between the bode diagrams obtained by the model and the experimental data. the optimization problem here studied has a non-linear objective function with 4n+1 independent variables. therefore, the research space should be r4n+1, with r the whole set of real numbers. nevertheless, this interval can be reduced adopting some constraints on solution characteristics. the constraints divide the research space into feasible and infeasible regions with remarkable reduction of computational burden ([22]). considering this a constraint results in filter stability: the poles of the digital filter must have a modulus smaller than one, so nonlinear inequality constraints are imposed. in order to numerically study equation (5), a hybrid scheme based on the combination of a stochastic and deterministic approach has been adopted ([23], [24]). the two approaches are used in a combined way to take advantages of their complementary characteristics. in fact, the deterministic approach is the fastest way to work out a solution but the quality of the results strongly depends on the choice of the starting point. non-deterministic approaches do not depend on the initial choice and they are usually slow in finding out optimal solutions. starting from these considerations, an initial exploration of space of solutions is made by a genetic algorithm having a population size greater than the number of coefficients chosen as target. then, the obtained values are used as initial points to run a constrained deterministic approach based on the sequential quadratic programming (sqp) ([22]) to find out the optimal solution. the sqp algorithm was preferred over simpler algorithms (such as zero-order methods) taking into account the information about the derivative of the objective function and, in addition, to include in a direct way the above mentioned constraints. the described procedure has two parameters, which can be arbitrarily chosen before solution research starts: the sampling frequency and the number of soss. since the frequency response of the filter is strictly related to the sampling frequency, its value has to be carefully chosen. in fact, if the chosen sampling frequency differs from the actual sampling frequency of the utilized fpga boards, then the actual frequency response of the executed filter differs from the one found in the identification procedure. for the case at hand, the utilized fpga boards (described in detail in the next section) have a sampling period resolution equal to 1 µs, while the timebase accuracy is equal to ±100 ppm with a peak-to-peak jitter of 250 ps. according to such considerations, taking into account the frequency range involved in the compensation procedure, the sampling frequency has been chosen equal to 200 khz. the number of soss should be fixed very carefully because it is directly proportional to the number of independent variables of the objective function and to the complexity of the obtained filter. 
typically, better results are obtained by increasing the filter order. anyway, it is fundamental to keep the filter computational burden low. for this reason, the procedure is repeated a certain number of times, varying at each run the number of soss, in order to find the best value for it. the identification procedure is made in this way: first of all, hd(f), defined in (2), is constructed in a numerical way, linearly interpolating the experimental data obtained from calibration. the optimization algorithm runs in two nested loops, varying the number of soss; in the inner loop the procedure is repeated a certain number of times. this is required because the utilized hybrid optimization technique includes a stochastic algorithm, which returns different results at every run. the number of frequency points is chosen equal to four times the total number of variables [25]. among the solutions referring to the same number of soss and coming from the inner loop, the solution that minimizes the cost function is chosen.
5. optimization results
the data available from the characterization of the uncompensated dividers are used in the optimization procedure, in order to find the digital filters that, minimizing ratio error and phase displacement, could compensate the divider frequency response. as previously said, the sampling frequency of the digital filter is chosen equal to 200 khz. the number of soss has been chosen in the range 1–5. the reason for this choice is that higher orders would require a high computational burden, not compatible with the hardware used for implementing the compensation filter. the inner loop is repeated 10 times. to evaluate the filter's performance over the whole input frequency range, ratio errors and phase displacements have to be evaluated before and after the filter introduction. those before the filter introduction are reported in (6) and (7), while those after the filter introduction are in (8) and (9), where h(f) is the frequency response of the implemented filter and r(f) and φ(f) are defined in (1):

Δε(f) = 100 · (knom · r(f) − 1) (6)
Δφ(f) = φ(f) (7)
Δε*(f) = 100 · (knom · |r(f) · e^(jφ(f)) · h(f)| − 1) (8)
Δφ*(f) = φ(f) + arg h(f) (9)

starting from the definitions (6)-(9), the two indices in (10) and (11) have been used for characterizing the improvements introduced by the filter: they describe the relative mean quadratic improvements in ratio error and phase displacement, respectively:

i_r = ∑_k Δε(f_k)² / ∑_k Δε*(f_k)² (10)
i_φ = ∑_k Δφ(f_k)² / ∑_k Δφ*(f_k)² (11)

i_r and i_φ are shown in figure 6, for the rd divider, and in figure 7, for the rcd divider, as functions of the number of soss. for the rd divider, the improvements are about 31.4
and 22.8 for the ratio error and the phase displacement, respectively. for the rcd divider, the improvements are about 161.4 and 1.4 for the ratio error and the phase displacement, respectively. table 2 and table 3 show the coefficients of the best compensating filters for, respectively, the rd and the rcd divider; the coefficients refer to the standard representation of an iir filter, that is, the form (3).
figure 6. compensation improvement for rd divider.
figure 7. compensation improvement for rcd divider.
table 2. coefficients of the best compensating filter for the rd divider, expressed referring to the filter representation in (3).
b0 = 626.80 | b1 = 749.95 | b2 = −415.10 | b3 = −230.50 | b4 = 35.625 | b5 = −4.3313 | b6 = −6.7867
a1 = −0.46864 | a2 = −0.96937 | a3 = 0.50844 | a4 = 0.0063371 | a5 = −0.0026524 | a6 = 0.00017674
table 3. coefficients of the best compensating filter for the rcd divider, expressed referring to the filter representation in (3).
b0 = 1111.8 | b1 = −615.8 | b2 = −1090.4 | b3 = 626.8 | b4 = −9.1506
a1 = −0.53772 | a2 = −0.98070 | a3 = 0.54920 | a4 = −0.0078201
6. experimental results
in order to perform the experimental verification of the filters designed in the previous section, a compensating device has been implemented with a real-time digital processor, opportunely equipped with analog-to-digital and digital-to-analog converters. in the case in question, an fpga board has been used. its features are: 1) xilinx virtex-ii 1 megagate fpga with 16 bufgmuxs, 324 external iobs, 227 loced iobs, 40 mult18x18s, 40 ramb16s, 5120 slices [26]; 2) clock rate equal to 40 mhz; 3) 96 digital lines; 4) 8 analog inputs, independent sampling rates up to 200 khz, 16-bit resolution, ±10 v; 5) 8 analog outputs, independent update rates up to 1 mhz, 16-bit resolution, ±10 v. the block scheme of the compensated divider is shown in figure 8, where vin is the input voltage, vout the divider output voltage, vout,k the sampled version of vout, vout,k* the filtered version of vout,k, and vout* the analog version of vout,k*, which is the output of the compensated divider. in this way, the compensated divider continues to be an analog device, thus offering the possibility of being employed in whatever measuring system.
figure 8. block scheme of the compensated divider.
figure 9. a) inverse frequency response of the rd divider and frequency response of its compensating filter; b) ratio error and phase displacement of the compensated rd divider.
figure 10. a) inverse frequency response of the rcd divider and frequency response of its compensating filter; b) ratio error and phase displacement of the compensated rcd divider.
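the coefficients in tables 2 and 3 can be exercised directly offline; the sketch below (ours, using scipy as a stand-in for the fpga implementation) loads the rcd filter in the direct form (3), inspects the pole moduli that the stability constraint of section 4 requires to be below one, and filters a hypothetical sampled divider output at the 200 khz design rate:

```python
import numpy as np
from scipy import signal

# table 3 coefficients (rcd divider), direct form (3)
b = [1111.8, -615.8, -1090.4, 626.8, -9.1506]
a = [1.0, -0.53772, -0.98070, 0.54920, -0.0078201]

# stability check: section 4 constrains all pole moduli to be < 1;
# with the rounded published coefficients one pole lies very close to
# the unit circle, so this is worth verifying before deployment
print(np.abs(np.roots(a)).max())

# run the compensation filter on hypothetical divider-output samples
fs = 200e3
t = np.arange(0, 0.01, 1 / fs)
v_out_k = 1e-3 * np.sin(2 * np.pi * 50 * t)   # 50 hz tone, 1 mv amplitude
v_out_star = signal.lfilter(b, a, v_out_k)    # compensated samples
```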
luiso, "advanced instrument for field calibration of electrical energy meters", ieee transactions on instrumentation and measurement, volume 58, issue 3, march 2009, pp. 618-625, issn: 0018-9456.
[7] j. merev, o. yilmaz, o. kalenderli, "selecting resistors for high voltage divider", in proc. xiiith international symposium on high voltage engineering, delft, the netherlands, aug. 2003.
[8] g. crotti, d. giordano, a. sardi, "design of a rc medium voltage divider for on-site calibration", in: proceedings of the 16th international symposium on high voltage engineering, cape town, south africa, august 2009, pp. 210-214, isbn/issn: 9780620445849.
[9] g. crotti, d. giordano, a. sardi, "development and use of a medium voltage rc divider for on-site calibration", in: international workshop on applied measurements for power systems (amps), 2011 ieee, aachen, germany, september 2011, pp. 53-57, isbn/issn: 9781612849454.
[10] p. daponte, "electronically compensated current transformer modeling", measurement, 15 (4), pp. 213-222, jul 1995.
[11] n. locci, c. muscas, "a digital compensation method for improving current transformer accuracy", ieee transactions on power delivery, volume 15, issue 4, oct. 2000, pp. 1104-1109.
[12] n. locci, c. muscas, "hysteresis and eddy currents compensation in current transformers", ieee transactions on power delivery, volume 16, issue 2, april 2001, pp. 154-159.
[13] d. slomovitz, "electronic system for increasing the accuracy of in-service instrument-current transformers", ieee transactions on instrumentation and measurement, volume 52, issue 2, april 2003, pp. 408-410.
[14] a. baccigalupi, a. liccardo, "low-cost prototype for the electronically compensation of current transformers", ieee sensors journal, volume 9, issue 6, june 2009, pp. 641-647.
[15] a. cataliotti, d. di cara, a. e. emanuel, s. nuccio, "a novel approach to current transformer characterization in the presence of harmonic distortion", ieee transactions on instrumentation and measurement, volume 58, issue 5, may 2009, pp. 1446-1453.
[16] d. gallo, c. landi, m. luiso, "large bandwidth compensation of current transformers", proceedings of ieee international instrumentation and measurement technology conference i2mtc 2009, singapore, may 5-7, 2009, e-isbn: 978-1-4244-3353-7.
[17] d. gallo, c. landi, m. luiso, "compensation of current transformers by means of field programmable gate array", metrology and measurement systems, volume xvi, number 2/2009, pp. 279-288, index 330930, issn 0860-8229, www.metrology.pg.gda.pl
[18] j. kaipio, e. somersalo, statistical and computational inverse problems, ser. applied mathematical sciences, new york: springer-verlag, 2005.
[19] k. deb, multiobjective optimization using evolutionary algorithms, hoboken, nj: wiley, 2001.
[20] a. v. oppenheim, r. w. schafer, digital signal processing, englewood cliffs, nj: prentice-hall, 1975.
[21] a. banos, f. gomez, "parametric identification of transfer functions from frequency response data", iet computing & control engineering journal, volume 6, issue 3, june 1995, pp. 137-144.
[22] t. f. coleman, y. li, "an interior, trust region approach for nonlinear minimization subject to bounds", siam j. optim., vol. 6, no. 2, pp. 418-445, may 1996.
[23] j. h. holland, "genetic algorithms", sci. amer., vol. 267, pp. 66-73, 1992.
[24] d. e. goldberg, genetic algorithms in search, optimization, and machine learning, reading, ma: addison-wesley, 1989.
[25] i. kollar, y.
rolain, "complex correction of data acquisition channels using fir equalizer filters", ieee transactions on instrumentation and measurement, volume 42, issue 5, oct. 1993, pp. 920-924.
[26] xilinx web site, www.xilinx.com.

acta imeko
december 2013, volume 2, number 2, 73 – 77
www.imeko.org

machine integrated telecentric surface metrology in laser structuring systems
robert schmitt1,2, tilo pfeifer1,2, guilherme mallmann2
1 laboratory for machine tools and production engineering wzl, rwth aachen university, aachen, germany
2 fraunhofer institute for production technology (ipt) – department of production metrology, aachen, germany

section: research paper
keywords: inline metrology, frequency-domain optical coherence tomography, surface inspection, laser structuring
citation: robert schmitt, tilo pfeifer, guilherme mallmann, machine integrated telecentric surface metrology in laser structuring systems, acta imeko, vol. 2, no. 2, article 13, december 2013, identifier: imeko-acta-02 (2013)-02-13
editor: paolo carbone, university of perugia
received april 15th, 2013; in final form october 8th, 2013; published december 2013
copyright: © 2013 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was supported by the german federal ministry of education and research (bmbf), germany
corresponding author: guilherme mallmann, e-mail: guilherme.mallmann@ipt.fraunhofer.de

abstract: laser structuring is an innovative technology used in a broad spectrum of industrial branches. there is, however, a market trend to smaller and more accurate micro structures, which demands a higher level of precision and efficiency in this process. in these terms, an inline inspection is necessary in order to improve the process through a closed-loop control and early defect detection. within this paper an optical measurement system for inline inspection of micro and macro surface structures is described. measurements on standards and laser structured surfaces are presented, which underline the potential of this technique for inline surface inspection of laser structured surfaces.

1. introduction

the functionalization of surfaces based on laser micromachining is an innovative technology used in a broad spectrum of industrial branches. its main advantage over other processes is the high machining flexibility. this technique permits the structuring of different workpieces (different form complexity and materials) with the same machine tool. examples can be found in minimizing air resistance, operation noise and friction losses [1] as well as in the surface structuring of tools and moulds [2]. there is, however, a trend towards the improvement of the overall process quality and automation as well as smaller microstructures in laser micro-machining. the fulfilment of these demands is at present limited by the non-existence of a sufficiently described process model as well as the absence of a robust and accurate inline process monitoring technique. on the one hand, the missing process model leads to a time-consuming effort to initialize the laser structuring of new products. this procedure depends e.g. on the applied material composition, product form and surface roughness. if the process behaviour is unknown for a determined workpiece, laser parameters and suitable machining strategies have to be identified in trial-and-error testing before the real machining process can start. in this context reference geometries need to be structured and analysed outside the machine tool until a suitable parameter set is found. on the other hand, the absence of a process control based on the real machined surface causes a high degree of inefficiency. this is explained by the inability of the machine to identify process defects during the machining procedure, leading to an increased possibility of rejected parts with a high degree of added value.
for solving this task with a high level of compatibility and integration, an optical distance measurement system based on frequency-domain optical coherence tomography (fd-oct) was developed. the described telecentric measurement through the laser machining optical system enables a fast and highly accurate surface inspection in machine coordinates before, during and after the structuring process. based on this process monitoring a machining control can be set up, leading to a fully automated process adjustment and manufacture procedure.

2. state of the art

2.1. laser structuring process

the laser structuring technology utilizes thermal mechanisms to machine a workpiece, which are induced through the absorption of high amounts of energy. the physical interaction of laser radiation and matter is therefore a crucial point in this process. its efficiency is associated with laser and material properties, such as applied wavelength, focal radius, angle of incidence, material light absorption, surface roughness, metal temperature, laser pulse repetition frequency and energy [1]. typical working wavelengths for the machining of metals and their alloys are 1064 nm (infra-red light) or 532 nm (green light). the guidance of the laser beam to the part's surface is accomplished in most systems using a rastering system or a laser scanner [3]. this configuration uses computer numerically controlled galvanometer mirrors to deflect the laser beam and an f-theta lens to focus it over a working area. this lens is wavelength optimized to focus the chief ray of the laser beam normal to the scanning field regardless of the scan angle, as well as to make the traveled distance of the laser spot on the focal plane directly proportional to the scan angle [4].

2.2. laser structuring process monitoring

inline process monitoring solutions for laser-based structuring systems currently being developed in academia and industry show technical limitations. the available technologies present in part low accuracy, no depth information, or are not able to measure directly in machine coordinates. for example, an approach using conoscopic holography is used. this technique is not able to measure directly in machine coordinates, as it uses a different optical path than the process laser, leading to complex calibration steps and inserting transformation errors as well as measurement displacement. in [5] a technology based on the acquisition of process-generated electro-magnetic emissions is presented for the monitoring of the selective laser melting process. a similar approach can also be applied to laser structuring systems.
this technique is however not able to deliver any direct depth information, being only able to monitor the amount of energy absorbed in the machining procedure and, based on this, evaluate the removed depth.

3. solution concept

the solution concept for the machine integrated process monitoring system was designed based on the rastering system machine layout (figure 1). as measurement system, an optical distance measurement technique based on frequency-domain optical coherence tomography (fd-oct) was used. the system integration is accomplished through an optical element as beam splitter.

3.1. frequency domain optical coherence tomography (fd-oct)

the fd-oct is a technique based on low-coherence interferometry. differently from normal low-coherence interferometers, which use a piezo element to find the maximum interference point, in the fd-oct the depth information is gained by analyzing the spectrum of the acquired interferogram. the calculation of the fourier transformation of the acquired spectrum provides a back-reflection profile as a function of the depth. for the generation of the interference pattern a measurement and a reference path are used, where the optical path difference between these arms is detected. the higher the optical path difference between reference and measuring arm, the higher the resulting interference modulation frequency (figure 2). the total interference signal i(k) is given by the spectral intensity distribution of the light source g(k) times the square of the sum of the two back-reflected signals (with $a_r$ the reflection amplitude coefficient of the reference arm and $a(z)$ the backscattering coefficient of the object, with regard to the offset $z_0$), where k is the optical wavenumber [6]:

$i(k) = g(k) \left[ a_r \exp(\mathrm{i}\, 2 k r) + \int_{z_0}^{\infty} a(z) \exp\!\left(\mathrm{i}\, 2 k n (r+z)\right) \mathrm{d}z \right]^2$   (1)

where n is the refractive index, 2r is the path length in the reference arm, 2(r+z) is the path length in the object arm and 2z the difference in path length between both arms. by finding the maximum amplitude at the spectrum's fourier transformation, the absolute optical path difference can be detected. the maximum measuring depth $z_{\max}$ is described by [7]

$z_{\max} = \frac{\lambda_0^2 N}{4 n \Delta\lambda}$   (2)

where $\lambda_0$ is the central wavelength, $\Delta\lambda$ is the bandwidth, n is the sample's refractive index and N is the number of detector units covered by the light source's spectrum. the axial resolution of an fd-oct is described by [7]

$\Delta r_{\mathrm{fd\text{-}oct}} = \frac{l_c}{2} = 0.44\, \frac{\lambda_0^2}{\Delta\lambda}$   (3)

where $l_c$ is the coherence length of the light source. for the measurement of a single distance (a single back reflection) the axial resolution can be increased to a sub-micrometric resolution by the usage of signal processing techniques, such as a gauss fit.

figure 1. concept of the inline process monitoring system – (a) dispersion compensator / glass rod.
figure 2. frequency domain optical coherence tomography (fd-oct) set-up.

3.2. measurement system prototype

the spectrometer for the interference signal acquisition was developed with a wavelength measuring range of 107 nm, which can be adjusted within the absolute wavelength range from 900 nm to 1100 nm depending on the light source to be used (figure 3). as a detector, an indium gallium arsenide (ingaas) line camera was used. standard silicon-based detectors present quantum efficiencies of less than 20% for the applied wavelength range against values between 60%-80% for ingaas (1.7 µm) based detectors [8].
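equations (2) and (3), together with the fourier evaluation of the acquired spectrum and the gauss-fit peak refinement, can be illustrated with a short python sketch. this is an illustrative reconstruction, not the authors' code: the detector pixel count N = 512 and the assumption of an equidistant wavenumber grid are ours; with these values equation (2) reproduces the 1.31 mm measuring range reported below.

```python
import numpy as np

# theoretical limits from equations (2) and (3), with the prototype values
lam0 = 1017e-9          # sld central wavelength [m]
dlam = 101e-9           # light source bandwidth [m]
n = 1.0                 # sample refractive index (air gap)
N = 512                 # detector pixels covered by the spectrum (assumed)

z_max = lam0**2 * N / (4 * n * dlam)   # eq. (2): ~1.31 mm
dr_ax = 0.44 * lam0**2 / dlam          # eq. (3): ~4.51 um

def depth_from_spectrum(i_k, z_step):
    """estimate the distance of a single back reflection from the spectrum
    i_k, assumed resampled on an equidistant wavenumber grid; z_step is the
    depth corresponding to one fourier bin."""
    mag = np.abs(np.fft.rfft(i_k - i_k.mean()))   # back-reflection profile
    p = int(np.argmax(mag[1:])) + 1               # strongest reflector bin
    # gauss fit on three samples around the peak: sub-pixel maximum via a
    # parabola through the log-magnitudes, which is what pushes the axial
    # resolution to the sub-micrometre level reported in section 3.2
    lm, cm, rm = np.log(mag[p - 1]), np.log(mag[p]), np.log(mag[p + 1])
    delta = 0.5 * (lm - rm) / (lm - 2 * cm + rm)
    return (p + delta) * z_step
```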
the light source used in the system is a superluminescent diode with a central wavelength of 1017 nm and a wavelength range of 101 nm. the theoretical measuring range (maximum depth scan) of the developed fd-oct using the presented sld light source was calculated using equation (2) to be 1.31 mm. the available measurement range was evaluated using a precision linear translation table. the results showed a maximum distance measurement of 1.25 mm using a specimen of aluminium with a technical surface, which simulates the workpieces used for the laser structuring. the theoretical axial resolution of the system calculated based on equation (3) is 4.51 µm. the usage of a gauss fitting algorithm to find the modulation frequency after the fourier transformation of the acquired light spectrum increases the axial resolution by calculating a sub-pixel accurate curve maximum. based on this technique an increased axial resolution could be achieved. the standard deviation of the distance measurement values acquired in the centre of the scanner field was 218 nm. a detailed analysis of the measurement system is presented in [9].

3.3. beam coupling

the concept chosen for the integration of the developed fd-oct in a laser structuring machine is based on an optical filter or a dichroic beam splitter used as an optical coupler. the coupling is accomplished by using this optical element in the reflective area for the measurement wavelengths and in the transmissive area for the structuring laser wavelength. a very important requirement on this system is the laser beam's transmission efficiency. the laser beam coupling performance is directly connected to the machine process energy efficiency as well as to the overall heat development in the coupling system. in order to guarantee a robust system with small long-term deviations, e.g. caused by component wear, and small energy losses, an optical element with a transmission near 100% needs to be chosen. another important system requirement is a highly accurate beam alignment. a misalignment between laser and measurement beam will lead to a displacement between the laser and the measurement spot, causing a mismatch / uncertainty in the measurement results. the developed coupling system meets the described demands and enables a system integration by a single machine hardware change. the insertion of a coupling optical element in the laser beam path at a determined angle (45° for a dichroic beam splitter or a wavelength-dependent angle for an optical filter) fulfills the complete integration. the component arrangement for the beam coupling can be seen in the prototype setup presented in figure 4. the coupling efficiency of the concept was evaluated using an optical edge filter for the wavelength of 1064 nm. by changing the angle of the edge filter, the edge frequency between reflection and transmission is displaced. at an angle of 23° the edge frequency is adjusted in such a way that the filter reflects the wavelength bandwidth of the measuring system and transmits the wavelength of the laser beam (figure 4). an overall coupling efficiency of over 95% for the laser beam and over 93% for the measurement beam was evaluated in laboratory tests. these results validate the concept for the machine integration.

3.4. telecentric f-theta scanning lens

as presented in figure 1, a typical scanning system used in laser structuring systems is composed mostly of a scanning unit based on galvanometrically moved mirrors and an f-theta scanning lens.
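the f-theta property recalled in section 2.1 (spot travel on the focal plane directly proportional to the scan angle) can be made concrete with a two-line comparison against a standard lens; the 80 mm focal length matches the prototype lens described below, while the scan angles are arbitrary example values.

```python
import numpy as np

# f-theta lens: spot position on the focal plane is s = f * theta,
# whereas a standard lens gives s = f * tan(theta)
f = 80.0                                    # focal length [mm]
theta = np.radians(np.linspace(0, 10, 6))   # example scan angles (assumed)

s_ftheta = f * theta             # linear in the scan angle
s_standard = f * np.tan(theta)   # tangent behaviour of a standard lens
error = s_standard - s_ftheta    # deviation corrected by the lens design
```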
figure 3. measurement system based on fourier-domain low-coherence tomography.
figure 4. system prototype with detailed view of the beam coupling unit (laser and measurement beam coupling).

the scanning objective used in the presented system prototype is a telecentric f-theta scanning lens. this lens type is wavelength optimized through the addition of a targeted optical distortion in the lens system. the aim of this optimization is to create a focal plane for the laser beam, as well as to create a directly proportional relation between scanning angle and laser spot position [4]. the designed optical system of a telecentric f-theta lens causes optical aberration at wavelengths other than the machining laser wavelength. this aberration introduces systematic errors to the measurement beam, such as a distortion of the focal plane, changes in the optical path and form deformation of the measurement spot. the evaluated dispersion could be compensated in the measurement system by the usage of a specially designed glass rod, which was inserted in the system's reference path (figure 1). this glass rod reproduces the same optical dispersion generated by the f-theta in the main optical path. hereby the interference conditions are fulfilled for all field angles of the scanner. the remaining effects of the f-theta in the optical path for other beam entrance angles can be seen in the measurement results as slight deviations of the real object form or a small decrease of the lateral accuracy at the border of the scanning field. the specific optical path deformation over the scanning field could be evaluated through a system simulation. the results show a form error in the shape of a saddle (figure 5).

3.5. laser structuring system prototype

in order to evaluate the proposed inline measurement technique prior to a machine integration, a laser structuring system prototype was constructed. for the deflection of the laser and measurement beam a galvanometer-based scanning unit was used. a telecentric f-theta lens with a focal distance of 80 mm was applied for focusing both beams on the structuring plane (figure 6). the combination of both optical elements enables a working field of 30 mm × 30 mm. as machining laser a nanosecond pulsed fiber laser with central wavelength at 1064 nm was used. the controlling of the scanner and laser units is executed by specially developed software. a laser-security-compliant housing completes the prototype.

4. results

the evaluation of the system prototype was carried out through a series of test measurements on flatness and step standards, as well as on laser structured workpieces. for these tests the processing laser was turned off. by measuring a flatness and a step standard the remaining amount of optical aberration introduced by the telecentric f-theta lens could be investigated. alterations in the optical path length of the measuring beam affect the surface form inspection and need to be characterized. the measurement of a flatness standard shows a slightly distorted plane (figure 7), as expected from the system simulation results (figure 5). for a measurement area of 20 mm × 20 mm a parabolic distortion could be detected in x and y directions. after the subtraction of the plane inclination a maximal deformation of 48 µm in the x-axis and of 65 µm in the y-axis was measured.
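as described in the following, this saddle-shaped form error is compensated by a system calibration based on a 3d polynomial fitted to a reference-surface measurement. a minimal sketch of such a least-squares surface fit is given below; the function name, the polynomial order and the array layout are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def fit_form_error(x, y, z, order=3):
    """least-squares fit of a 2-d polynomial z ~ p(x, y) to the heights
    measured on the flatness standard; x, y, z are flat arrays [mm]."""
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)

    def model(xq, yq):
        B = np.column_stack([xq**i * yq**j for i, j in terms])
        return B @ coef

    return model

# usage: subtract the modelled form error from a workpiece measurement
# form = fit_form_error(x_ref, y_ref, z_ref)
# z_corrected = z_meas - form(x_meas, y_meas)
```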
figure 5. simulation of the optical path length of the measurement beam through the machine optical system over the complete working field.
figure 6. prototype of a laser structuring system with the developed inline measurement system.
figure 7. 3d measurement of a reference surface (unit in mm).

as already shown in the system simulation, this measurement distortion is caused by an optical dispersion dependent on the beam entrance angle. a correction of this effect can be achieved by a system calibration, which is based on a measurement of a reference surface (e.g. a flatness standard). the evaluated form error is modeled by a 3d polynomial and used in the system software to compensate the measurement results. to evaluate the measurement range as well as a possible non-linearity after the calibration process, a step standard with steps of about 100 µm was measured (figure 8). the measured area of the workpiece was 6.5 mm × 2 mm. the overall height variation measured by the prototype was 925 µm over the 10 steps. a reference measurement with a chromatic sensor acquired a height variation of 917 µm over the 10 steps. an overall non-linearity of about 8 µm over a measurement range of 1 mm could be evaluated. this represents a non-linearity of less than 1% for the presented prototype in the used scanning area. regarding the application of the presented measurement system for laser structuring processes, e.g. in the manufacturing of tools and moulds [1], a laser machined structure (20 mm × 20 mm) was measured and analyzed (figure 9). the resulting measured surface demonstrates the robustness of the inline system for the inspection of complex surfaces.

5. conclusions

within this paper an optical measurement system for the surface inspection of micro and macro structures with sub-micron accuracy within a laser structuring machine was described and evaluated. a standard deviation of the distance measurements in the z-axis of less than 218 nm and a non-linearity of less than 1% for a measurement range of 1 mm were determined. measurements on standards and laser structured surfaces are presented, which validate the potential of this technique for a telecentric surface inspection within laser structuring machines. future investigations, especially in the usage of this technique for a feedback control of adaptive laser micro processing machines, will be carried out. the effects of process emissions on the measurement results are also an important research task.

acknowledgement

we gratefully acknowledge the financial support by the german ministry of education for the project 'scan4surf' (02po2861), which is the basis for the proposed achievements.

references

[1] s. schreck et al., "laser-assisted structuring of ceramic and steel surfaces for improving tribological properties", proc. of the european materials research society, applied surface science, 2005, vol. 247, pp. 616-622.
[2] f. klocke et al., "reproduzierbare designoberflächen im werkzeugbau: laserstrahlstrukturieren als alternatives fertigungsverfahren zur oberflächenstrukturierung", werkstatttechnik, 2009, no. 11/12, pp. 844-850.
[3] j. c. ion, laser processing of engineering materials – principles, procedure and industrial application, elsevier, 2005, isbn 978-0-7506-6079-2, p. 389.
[4] b. furlong, s. motakef, "scanning lenses and systems", photonik international, no. 2, 2008, pp. 20-23.
[5] t. craeghs et
al., "online quality control of selective laser melting", proceedings of the 20th solid freeform fabrication (sff) symposium, 2011.
[6] m. brezinski, optical coherence tomography – principles and applications, elsevier, 2006, isbn 978-0121335700, pp. 130-134.
[7] p. tomlings, r. wang, "theory, developments and applications of optical coherence tomography", applied physics, no. 38, 2005, pp. 2519-2535.
[8] a. rogalski, infrared detectors, crc press, 2010, isbn 978-1420076714, pp. 315-317.
[9] r. schmitt, g. mallmann, p. peterka, "development of a fd-oct for the inline process metrology in laser structuring systems", proc. spie 8082, 2011, 808228.

figure 8. 3d measurement of a step standard (unit in mm).
figure 9. 3d measurement of a laser structured surface (unit in mm).

acta imeko
issn: 2221-870x
september 2021, volume 10, number 3, 108 – 116

bringing optical metrology to testing and inspection activities in civil engineering
luís martins1, álvaro ribeiro1, maria do céu almeida1, joão alves e sousa2
1 lnec – national laboratory for civil engineering, avenida do brasil 101, 1700-066 lisbon, portugal
2 ipq – portuguese institute for quality, rua antónio gião 2, 2829-513 caparica, portugal

section: research paper
keywords: optical metrology; civil engineering; testing; inspection
citation: luis martins, álvaro ribeiro, maria do céu almeida, joão alves e sousa, bringing optical metrology to testing and inspection activities in civil engineering, acta imeko, vol. 10, no. 3, article 16, september 2021, identifier: imeko-acta-10 (2021)-03-16
section editor: lorenzo ciani, university of florence, italy
received february 8, 2021; in final form august 5, 2021; published september 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was supported by lnec – national laboratory for civil engineering, portugal.
corresponding author: luís martins, e-mail: lfmartins@lnec.pt

abstract: optical metrology has an increasing impact on observation and experimental activities in civil engineering, contributing to the research and development of innovative, non-invasive techniques applied in testing and inspection of infrastructures and construction materials to ensure safety and quality of life. advances in specific applications are presented in the paper, highlighting the application cases carried out by lnec (the portuguese national laboratory for civil engineering). the examples include: (i) structural monitoring of a long-span suspension bridge; (ii) use of closed circuit television (cctv) cameras in drain and sewer inspection; (iii) calibration of a large-scale seismic shaking table with laser interferometry; (iv) destructive mechanical testing of masonry specimens. current and future research work in this field is emphasized in the final section. the examples given are related to the use of moiré techniques for digital modelling of reduced-scale hydraulic surfaces and to the use of laser interferometry for calibration of a strain measurement standard for the geometrical evaluation of concrete testing machines.

1. introduction

optical metrology has a large scientific and technological scope of application, providing a wide range of measurement methods, from interferometry to photometry, radiometry and, more recently, to applications using digital, video and vision systems, which, combined with computational algorithms, allow obtaining traceable and accurate measurements. increasing accuracy of optical measurement instruments creates new opportunities for applications in civil engineering, namely, for testing and inspection activities. these new methodologies open broader possibilities in civil engineering domains where dimensional and geometrical quantities are major sources of information on infrastructures and construction materials. the assessment of their performance and behaviour often involves monitoring and analysis under dynamic regimes [1], [2]. in many cases, the development of new technologies, based on the use of methods combining optics and digital algorithms, has recognized advantages, namely, those using non-invasive techniques in harsh environments and remote observation [3]. moreover, the need for accurate measurements related to infrastructure management, e.g., in early detection of damage or in safety monitoring, is growing.
the contribution of metrology in this area is key to increase the confidence in decision-making processes. r&di activities in the optical metrology domain in recent years at the portuguese national laboratory for civil engineering (lnec) led to the development of innovative applications, many of them related to doctoral academic research. the main objectives are: (i) to design and develop optical solutions for applications where conventional instrumentation does not provide satisfactory results; (ii) to establish si (international system of units) traceability of measurements undertaken with optical instruments; (iii) to develop advanced mathematical and numerical tools, namely based on monte carlo methods (mcm) and bayesian methods, bringing benefits to the evaluation of measurement uncertainty in complex and non-linear optical problems. this paper exemplifies how new methods enable traceable and accurate solutions to assess conformity with safety requirements, providing support to the measurement uncertainty evaluation as a tool to use decision rules. in addition, the applications described emphasize the role of digital and optical systems as a basis for robust techniques able to provide measurement estimates for dimensional quantities, replacing conventional invasive measurement approaches. to illustrate these achievements, results of r&di in the civil engineering context are presented, including examples of application in: (i) structural monitoring of a long-span suspension bridge; (ii) drain and sewer inspection using cctv cameras; (iii) calibration of a large-scale seismic shaking table with laser interferometry; (iv) destructive testing of masonry specimens.

2. overview of optical metrology

optical metrology is a specific scientific area of metrology, defined as the science of measurement and its applications [4], in which experimental measurement processes are supported by light. currently, it has a significant contribution in multiple scientific and engineering domains, improving measurement methods and instruments, to assess their limits and increase their capabilities in order to improve the knowledge of the studied phenomena.
in recent years, the technological development of computational tools has extended the optical metrology activity scope, by increasing the number of measurement processes supported by digital processing of images obtained from optical systems [5]. this activity is characterized by the ability to detect and record, without physical contact with the object and in a short time interval, a large amount of information (dimensional, geometrical, radiometric, photometric, colour, thermal, among others), overcoming human vision limitations, reaching information imperceptible to human eyes and, therefore, improving knowledge about phenomena. although this paper is focused on dimensional measurements, optical metrology also reaches other domains of activity, namely, temperature, mechanical and chemical quantities. optical metrology covers a wide range of dimensional measurement intervals, from nanometer magnitude up to the dimension of celestial bodies and space distances. in this context, measurement principles are usually grouped in three categories [6]: (i) geometrical optics – related to the refraction, reflection and linear propagation of light phenomena, which are the functional support of several instruments and measurement systems composed of light sources, lenses, diaphragms, mirrors, prisms, beam splitters, filters and optical electronic components; (ii) wave optics – where the wave nature of light is explored, namely, the interference of electromagnetic waves with similar or identical wavelength, being present in a wide range of instruments and measurement systems which use polarized and holographic optical components and diffraction gratings; and (iii) quantum optics – supports the generation of laser beams, which correspond to high-intensity and monochromatic coherent light sources used, e.g., in sub-nanometer interferometry and scanning microscopy. in the case of civil engineering, two main areas for applications of optical metrology are identified: space and aerial observation; and terrestrial observation. space observation, supported by optical systems equipped with panchromatic and multi-spectral sensors integrated in remote sensing satellites, is gradually more frequent in the context of civil engineering, due to the growing access to temporal and spatial collections of digital images of the earth's surface with increasing spatial resolution. aerial observation is generally focused on photogrammetric activities undertaken from aircraft, aiming at the production of geographic information to be included in topographic charts or geographical information systems, namely, through orthophotos and three-dimensional models (realistic or graphical) representing a certain region of the earth's surface. moreover, optical systems are also installed in uav – unmanned aerial vehicles, used in the visual inspection of large constructions, contributing to the detection and mapping of observations (e.g. cracks, infiltrations, among others) and analysis of their progression with time (see example in figure 1) [7].

3. structural monitoring of a long-span suspension bridge

optical metrology has been successfully applied by lnec to the monitoring of a long-span suspension bridge, allowing the development of non-contact measurement systems capable of determining three-dimensional displacements of critical regions, namely, in the bridge's main span central section.
figure 1. digital image processing of concrete wall surface image showing a crack.

optical systems are an interesting solution for this class of measurement problems, especially in the observation of metallic bridges, where the accuracy of microwave interferometric radar systems [8] and global navigation satellite systems [9], [10] can be affected, for instance, by the multi-path effect resulting from electromagnetic wave reflections in the bridge's structural components. the measurement approach developed consists in the use of a digital camera rigidly installed beneath the bridge's stiffness girder, oriented towards a set of four active targets placed at a tower foundation, materializing the world three-dimensional reference system. provided that the camera's intrinsic parameters (focal length, principal point coordinates and lens distortion coefficients) and the targets' relative coordinates are accurately known (by previous testing), non-linear optimization methods can be used to determine the position of the camera's projection centre. the temporal evolution of this quantity is considered representative of the bridge's dynamic displacement at the location of the camera. since distances can be quite high in this type of observation context, the use of high focal length lenses is required to achieve a suitable spatial image resolution. however, conventional camera parameterization methods were mainly developed for small focal length cameras (below 100 mm). when applied to high focal length cameras, such methods can reveal numeric instability related to over-parameterization and ill-conditioned matrices. a suitable solution for this problem is found in [11], where the intrinsic parametrization method is described, supported by the use of diffractive optical elements (doe). this approach was implemented in the 25th of april long-span suspension bridge (p25a) in lisbon (portugal), for an observation distance near 500 m. to obtain a suitable sensitivity of three-dimensional displacement measurement, a 600 mm high focal length lens (composed of a 300 mm telephoto lens and a 2x teleconverter) was used. a set of four active targets was placed in the p25a bridge south tower foundation (figure 2), facing the bridge's main span where the camera was installed (figure 3). each of the four targets was composed of 16 leds, distributed in a circular geometrical pattern, capable of emitting a narrow near-infrared beam (875 nm wavelength) compatible with the camera's spectral sensitivity. an optical filter on the camera reduced the environmental visible irradiance from many other elements in the observation scenario, thus improving contrast in the target image. several field validation tests were performed, aiming at the quantification of the influence of optical phenomena, such as atmospheric refraction and turbulence, on the dimensional measurement accuracy. a calibration device was used for this purpose [11], [12], allowing to install the set of targets in four reference positions. by placing the camera in the p25a south anchorage, orientated toward the calibration device in the p25a south tower foundation (both considered static structural regions), the systematic effect caused by refraction and the beam wandering effect originated by turbulence, mainly in the summer season, were quantified as explained in [12].
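a minimal sketch of the non-linear estimation described above (recovering the camera's projection centre from the four imaged targets) is given below, assuming a distortion-free pinhole model with known intrinsics. the function names, the rotation-vector parametrization and the 500 m initial guess for the distance are illustrative assumptions, not the method of [11].

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(params, X, f, c):
    """pinhole projection of world points X (4 x 3, metres) given a pose
    encoded as rotation vector + translation; f [pixel] and c (principal
    point, pixel) are the known intrinsic parameters."""
    rvec, t = params[:3], params[3:]
    Xc = Rotation.from_rotvec(rvec).apply(X) + t   # world -> camera frame
    return f * Xc[:, :2] / Xc[:, 2:3] + c

def camera_centre(X, u, f, c, x0=np.array([0, 0, 0, 0, 0, 500.0])):
    """least-squares pose from the 4 x 2 measured target image points u;
    returns the camera projection centre in world coordinates."""
    res = least_squares(lambda p: (project(p, X, f, c) - u).ravel(), x0)
    rvec, t = res.x[:3], res.x[3:]
    # centre C satisfies R C + t = 0, hence C = -R^T t
    return -Rotation.from_rotvec(rvec).apply(t, inverse=True)
```

tracking this centre frame by frame yields the displacement records discussed next.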
since the p25a bridge has two main decks (an upper road deck and a lower train deck), two types of displacement records (with and without train circulation) were obtained during field testing of the displacement measurement system. due to the reduced measurement sensitivity in the longitudinal direction, demonstrated in the validation tests, only transverse and vertical displacements were recorded. an image acquisition frequency of 15 hz was defined for an observation time interval of three minutes. the collected image sequences were digitally processed afterwards, using the same techniques applied in the validation tests. figure 4 exemplifies a typical displacement record obtained for a passenger train passage on the p25a main span central section. for the mentioned operational condition (train and road traffic) the observed maximum (peak-to-peak) displacements were 0.39 m and 1.69 m in the transverse and vertical directions, respectively. high measurement sensitivity is noticed in the vertical displacement record, where the number of train carriages (four) can be temporally discriminated as four small spikes around t = 120 s, with a 95 % expanded measurement uncertainty of 8.8 mm. the distributed passenger train load was estimated between 20.7 kn/m (empty train) and 28.8 kn/m (overloaded train), which is considerably lower than the distributed load applied in the p25a static loading test performed in 1999, where a 3.15 m vertical displacement value was recorded for a 77.5 kn/m distributed load. as expected, in the absence of train circulation in the p25a, the observed maximum displacements were less significant, namely 0.53 m and 0.29 m, respectively, for the vertical and transverse directions, as shown in figure 5.

4. drain and sewer inspection using cctv cameras

another recent example of the application of optical metrology to the civil engineering inspection context is the study carried out on the metrological quality of dimensional measurements based on images from cctv inspections in drain and sewer systems (example shown in figure 6). in this context, investigations are carried out using several sources of information, including external and internal inspection activities for the detection and characterization of anomalies which can negatively affect the performance of the drain or sewer system. cctv inspection is a widely used visual inspection technique for non-man-entry components. this type of indirect visual inspection is characterized by the quantification of a significant number of absolute and relative dimensional quantities, which contribute to the characterization of the inspection observations and, consequently, to the analysis of the performance of drain and sewer systems outside buildings. unfavourable environmental factors and conditions in the drain or sewer components pose difficulties in the estimation of the quantities of interest, and the quality of the recorded images can be quite poor (lighting, lack of reference points, geometric irregularities and subjective assessments, among others). the study [14] stresses the need of a proper metrological characterization of the optical system (the cctv camera) used in drain or sewer inspections, namely, the geometrical characterization and quantification of intrinsic parameters using traceable reference dimensional patterns and applying known algorithms.

figure 2. active targets on the south tower foundation.
figure 3. digital camera installed in the stiffness girder.

the standard radiometric characterization, aiming at
the determination of the cctv camera sensitivity, linearity, noise, dark current, spatial non-uniformity and defective pixels, is also mentioned [15]. two measurement models were studied to be applied in this context: the perspective camera model and the orthographic projection camera model [16]. the first model implies having input knowledge about the camera's intrinsic parameters and the extrinsic parameters (the camera position and orientation in the local or global coordinate system), which must be obtained from instrumentation of the cctv camera. the second model is a less rigorous approach that can be followed, assuming a parallel geometrical relation between the image plane and the cross-section plane in the drain or sewer to define a scale coefficient between real dimension (in millimetres) and image dimension (in pixels). research efforts were directed towards the evaluation of the measurement uncertainty following the gum framework [17], [18]. particular attention was given to the influence of lens distortion on the results obtained from the perspective camera model. in a typical inspection of a drain or sewer system, a reduced focal distance lens is generally used to have a wider angle. in this type of lens, distortion can cause geometrical deformation of the image, thus affecting the accuracy of dimensional measurements. for this purpose, intrinsic parameter estimates and standard uncertainties were obtained [14] for the case of a camera with a 4 mm nominal focal length and an image sensor with 480 × 640 square pixels, considering a pixel linear dimension equal to 6.5 µm. high-order radial distortion coefficients were considered negligible. the standard uncertainty related to the image coordinates, resulting from the performed intrinsic parametrization, was equal to 0.04 pixel. to assess the impact of distortion on the image coordinate measurement accuracy, a monte carlo method [18] was used, given the complex and non-linear lens distortion model [19]. figure 7 shows the estimates of the image variation due to the combined effect of radial and tangential distortions and figure 8 presents the corresponding 95 % measurement uncertainty. as shown in figure 7 and figure 8, the distortion impact on the image coordinates is quite low. as expected, a higher distortion is observed in the extreme regions of the image, especially in the corners. the maximum distortion estimate is close to 0.050 pixel with a 95 % expanded uncertainty of 0.001 pixel. these results allow removing the distortion component from the perspective camera model, making it less complex and numerically more stable. due to the non-linear and complex mathematical models related to the perspective camera model, a monte carlo method was again applied in numerical simulation, in order to obtain the dispersion of values related to the local dimensional coordinates which support dimensional measurement in inspection images. a 95 % computational accuracy level lower than 1 mm was obtained.

figure 4. p25a main span central section displacement (train and road traffic).
figure 5. p25a main span central section displacement (road traffic only).
figure 6. inspection image showing dimensional reduction by deformation effect [13].
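a monte carlo propagation through a first-order brown distortion model [19] can be sketched as follows. the coefficient values and their standard uncertainties below are purely illustrative placeholders (the paper reports only the resulting estimates, e.g. the 0.050 pixel maximum), and only one radial and one tangential term are kept, consistent with neglecting the high-order coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 100_000                      # monte carlo trials

# illustrative first-order brown-model coefficients and uncertainties
k1, u_k1 = 1e-9, 5e-11           # radial coefficient [pixel^-2]
p1, u_p1 = 1e-10, 1e-11          # tangential coefficient [pixel^-2]

def distortion_mc(x, y):
    """distortion displacement at image point (x, y), given in pixels
    relative to the principal point; returns estimate and 95 % half-width."""
    k = rng.normal(k1, u_k1, M)
    p = rng.normal(p1, u_p1, M)
    r2 = x * x + y * y
    dx = k * x * r2 + p * (r2 + 2 * x * x)   # radial + tangential terms
    dy = k * y * r2 + 2 * p * x * y
    d = np.hypot(dx, dy)
    lo, hi = np.percentile(d, [2.5, 97.5])
    return d.mean(), 0.5 * (hi - lo)

# corner of the 640 x 480 image, where the distortion is largest
est, u95 = distortion_mc(320.0, 240.0)
```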
the simulation results showed that a dimensional accuracy level lower than 10 mm can only be achieved for camera and plane location standard uncertainties of 1 mm and an image coordinate standard uncertainty below 3 pixels. from a sensitivity point of view, the camera and plane standard uncertainties showed a stronger contribution to the dimensional accuracy level than the image coordinate measurement uncertainty. when compared with the global dimensions of the corresponding camera field-of-view (974 mm x 731 mm), the 95 % expanded uncertainty of the dimensional coordinates is comprised between 0.2 % and 4.1 %. the measurement uncertainty related to the adoption of the orthographic projection model was also studied in [14] using the uncertainty propagation law [17], considering the linearity of the applied mathematical models. for the worst case, related to the scale coefficient with the highest measurement uncertainty, the obtained dimensional measurement accuracy was always above 5 %. better accuracy levels are possible, namely, in the case of the lowest measurement uncertainty of the scale coefficient, for standard uncertainties of 1.3 pixel (for dimensional measurements close to 100 mm) and 2.5 pixels (for dimensional measurements of 200 mm).

5. calibration of a large-scale seismic shaking table with laser interferometry

laser interferometry was applied for the calibration of a large-scale seismic shaking table, used by lnec's earthquake engineering research centre in r&di activities related to seismic risk analysis and experimental and analytical dynamic modelling of structures, components and equipment. this european seismic engineering research infrastructure (shown in figure 9) is composed of a high-stiffness testing platform with 4.6 m x 5.6 m dimensions and a maximum payload capacity of 392 kn, connected to hydraulic actuators, allowing to test real or reduced-scale models up to extreme collapse conditions, between 0 hz and 40 hz [20]. the control system used allows the active application of the displacement to the testing platform in three independent orthogonal axes, while its rotation is passively restricted using torsion bars. the performed calibration is included in the introduction of quality management systems in large experimental infrastructures with r&di [21], aiming at the recognition of technical competence for testing and measurement and the formal definition of management processes, which can be regularly assessed by an independent entity. the compliance with metrological requirements is a key issue in this context, being related, for example, with traceability and calibration procedures, conformity assessment, measurement correction and uncertainty evaluation, data record management and data analysis procedures. laser interferometry was used to evaluate the dimensional, cross-axis motion and rotation motion performances of lnec's shaking table, using specific experimental setups and optical components, as shown in figure 10 and figure 11.

figure 7. image distortion estimates in pixels.
figure 8. image distortion 95 % expanded uncertainties in pixels.

this experimental work allowed performing remote and non-invasive measurements with a high accuracy level in a harsh environment, being composed of two stages: the laser beam
alignment and the data acquisition (500 sampling pairs from both the interferometer and the dimensional sensors of the seismic shaking table, having a gaussian representation of the probability distribution). the main identified uncertainty components were related to misalignment of optical elements, time synchronization and influence quantities such as air and material temperature, relative humidity and atmospheric pressure. specific actions were taken in order to minimize these uncertainty components, namely, full-range preliminary tests with adaptive adjustment of the main optical components, the application of a signal synchronization procedure and the use of compensation algorithms for the correction of the material thermal expansion and of the air refractive index [23]. one of the developed tests was defined in order to evaluate the dimensional scale calibration errors and reversibility, using input dynamic series with low-variance 30 mm calibration steps, within a measurement interval of ± 120 mm. examples of obtained results are shown in figure 12 and figure 13. a measurement discrimination test was also developed, considering transition steps of 0.5 mm, 0.1 mm and 5.0 mm given at 20 mm, 50 mm and 80 mm linear positions. an example of the obtained results is shown in figure 14. the obtained results show calibration errors ranging approximately between -0.4 mm and 0.7 mm, with a reduced reversibility close to 0.1 mm. these results were included in the measurement uncertainty evaluation, from which an instrumental measurement accuracy of 0.31 mm was obtained considering a confidence interval of 95 %. the corresponding target instrumental measurement uncertainty, defined as a metrological requirement for the seismic shaking table, is equal to 1 mm. additional dynamical tests and the corresponding discussion of results can be found in [21].

figure 9. top view of lnec's earthquake engineering testing room [22].
figure 10. experimental setup for cross-axis motion testing.
figure 11. experimental setup for rotation motion testing.
figure 12. calibration errors and reversibility for the static position test of axis 1-t-a.
figure 13. calibration errors for the dynamic position test of axis 1-t-a.
figure 14. results of the discrimination test of axis 1-t-a at the 80 mm position.
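the evaluation of calibration errors and reversibility from the stepped series can be sketched as below; the array layout (one row per nominal position, one column per sampling pair) is an assumption made for the example, not the exact data organization used at lnec.

```python
import numpy as np

def calibration_errors(ind_up, ref_up, ind_down, ref_down):
    """calibration error and reversibility per nominal position.

    each argument is an array of shape (n_positions, n_samples), e.g. the
    500 sampling pairs acquired at each 30 mm step of the +/- 120 mm range;
    ind: table sensor readings [mm], ref: interferometer readings [mm]."""
    err_up = np.mean(ind_up - ref_up, axis=1)        # ascending run
    err_down = np.mean(ind_down - ref_down, axis=1)  # descending run
    reversibility = np.abs(err_up - err_down)        # hysteresis per position
    return err_up, err_down, reversibility
```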
it is equally important to ensure that these reinforcement techniques, in addition to the aesthetic and functional aspects, also reduce the seismic vulnerability of these buildings [26]. from an experimental point view, dimensional measurements have a strong contribution for the determination of key mechanical characteristics since they support the indirect strain measurement in the tested specimens [27], [28]. afterwards, these measurements are used for characterizing the masonry specimen mechanical behaviour in terms of its elasticity modulus and poisson ratio. the optical measurement solution proposed [29] is based in the use of a single camera with a spatial position and orientation allowing visualization of a set of passive targets evenly distributed in different regions, both in the static region surrounding the specimen and in the dynamic region of the tested specimen surface. the weak perspective model or the orthographic model with uniform scaling was adopted [29] allowing to establish a functional relation of the three-dimensional point georeferenced (expressed in millimetres, for example) with the corresponding bi-dimensional position in the image (usually expressed in pixels). a measurement referential, composed of reference targets, was placed in front of the observation region in the masonry specimen at the minimum distance from the specimen surface (without contact), thus minimizing the observation depth difference to the monitoring targets fixed and scattered in the observation region (in the inner region of the referential), as shown in figure 15. the mentioned referential was subjected to dimensional measurement in an optical measuring machine, before the specimen testing, aiming at the determination of the three-dimensional georeferenced position of each reference target. the knowledge of these spatial coordinates supported the calculation of the scale coefficient in each acquired image, since the measurement referential is placed in a static region of the experimental setup (ensuring that it does not touch the specimen and it is not subjected to vibrations produced by the testing machine). solid and hollow ceramic brick masonry specimens were retrieved from the walls of one building built in the beginning of the 20th century in the city of lisbon (portugal), which was undergoing rehabilitation. the proposed optical approach was implemented by fixing monitoring targets in the specimen’s ceramic bricks and placing the measurement referential with the reference targets close to the observation surfaces as shown in figure 16 (displacement sensors are also visible, being used for validation purposes, without specimen collapse). the recorded images were subjected to tailored digital image processing algorithm, in order to retrieve the image coordinates of both reference and monitoring targets, as shown in figure 17. the first stage of obtained results is related to the scale coefficient measurement samples (with a dimension equal to 28), from which an average value was obtained. figure 18 illustrates the dispersion of scale coefficient values obtained for one of the used measurement referential. based on the specimen’s length and width measurements, as well as the axial compression force readings obtained from the used universal testing machine, vertical and horizonal figure 15. schematic representation of the proposed optical measurement method. figure 16. instrumentation of the masonry specimen. figure 17. 
based on the specimen's length and width measurements, as well as the axial compression force readings obtained from the used universal testing machine, vertical and horizontal dimensional measurements were performed in the frontal and rear surfaces of the specimen, noticing the existence of both contact and optical measurement points not spatially coincident. from the collected data, stress vs. strain curves were obtained for the loading and unloading cycle corresponding to 1/3 of the fracture stress, as shown in figure 19 and figure 20. figure 20 shows the effect of noise in the strain measurements obtained by the optical dimensional measurements, when compared with the strain measurements obtained by the contact measurement chain (figure 19). this is justified by the low spatial resolution of the acquired images, which affects the target image coordinates that support the deformation measurement. a higher spatial resolution can be achieved with an image sensor composed of smaller pixels or by using a different lens capable of producing a higher image magnification with an acceptably narrow field-of-view. these results were used in the determination of mechanical property estimates and measurement uncertainties in the tested masonry specimens. a detailed discussion is given in [29].

7. conclusions

this paper describes relevant contributions of optical metrology when applied in different testing and inspection activities in civil engineering, providing significant added value in decision-making processes. the wide diversity of testing and inspection activities in this context, together with the versatility of the measurement solutions and tools provided by optical metrology, motivates the development of new interdisciplinary r&di work at lnec, so far with promising results. one of these fields is the development of moiré techniques [30] applied in the digital modelling of reduced-scale hydraulic surfaces. hydraulic experimental activities are frequently carried out in a dynamic regime; however, conventional invasive instrumentation is often unsuitable for real-time observations, making these experiments time-consuming and with reduced acquisition frequency. moiré techniques have been successfully applied in other scientific and technical areas; however, their application in the civil engineering context is still quite reduced. another research field being developed by lnec in this context is the application of laser interferometry in the calibration of a strain measurement standard used for the geometrical evaluation of concrete testing machines (self-alignment and movement restriction) [31]. this measurement standard (a strain-gauged column) is required to have a reduced instrumental measurement uncertainty (0.1 % or 5x10-6), making laser interferometry a suitable solution for this objective.

acknowledgement

the authors acknowledge the financial support provided by lnec – national laboratory for civil engineering.

references

[1] g. di leo, c. liguori, a. paolillo, a. pietrosanto, machine vision systems for on line quality monitoring in industrial applications, acta imeko 4 (2015) 1, pp. 121-127. doi: 10.21014/acta_imeko.v1i1.7
[2] g. d'emilia, d. di gasbarro, e. natale, optical system for online monitoring of welding: a machine learning approach for optimal set up, acta imeko 5 (2016) 4, pp. 4-11. doi: 10.21014/acta_imeko.v5i4.420
[3] m. lo brutto, g.
dardanelli, vision metrology and structure from motion for archaeological heritage 3d reconstruction: a case study of various roman mosaics, acta imeko 6 (2017) 3, pp. 35-44. doi: 10.21014/acta_imeko.v6i3.458 [4] vim - international vocabulary of metrology - basic and general concepts and associated terms, jcgm - joint committee for guides in metrology, 2008, pp. 16. [5] m. rosenberger, m. schellhorn, g. linß, new education strategy in quality measurement technique with image processing technologies - chances, applications and realisation, acta imeko 2 (2013) 1, pp. 56-60. doi: 10.21014/acta_imeko.v2i1.92 [6] h. schwenke, u. neuschaefer-rube, t. pfeifer, h. kunzmann, optical methods for dimensional metrology in production, cirp annals - manufacturing technology 51, 2 (2002), pp. 685-699. [7] l. santos, visual inspections as a tool to detect damage: current practices and new trends, proceedings of condition assessment of bridges: past, present and future - a complementary approach, lisbon, 2012. [8] m. pieraccini, g. luzi, d. mecatti, m. fratini, l. noferini, l. carissimi, g. franchioni, c. atzeni, remote sensing of building structural displacement using microwave interferometer with imaging capability, ndt&e int. 37 (2004), pp. 545-550. doi: 10.1016/j.ndteint.2004.02.004 [9] k. wong, k. man, w. chan, monitoring hong kong's bridges. real-time kinematics spans the gap, gps world 12, 7 (2001), pp. 10-18. [10] v. khoo, y. thor, g. ong, monitoring of high-rise building using real-time differential gps, proceedings of the fig congress, sydney, 2010. [11] l. martins, j. rebordão, a. ribeiro, intrinsic parameterization of a computational optical system for long-distance displacement structural monitoring, optical engineering 54, 1 (2015), pp. 1-12. doi: 10.1117/1.oe.54.1.014105 [12] l. martins, j. rebordão, a. ribeiro, thermal influence on long-distance optical measurement of suspension bridge displacement, int. j. thermophysics 35, 3-4 (2014), pp. 693-711. doi: 10.1007/s10765-014-1607-3 [13] p. henley, sewer condition classification and training course, wrc, 2017. [14] l. martins, m. almeida, a. ribeiro, optical metrology applied in cctv inspection in drain and sewer systems, acta imeko 9 (2020) 1, pp. 18-24. doi: 10.21014/acta_imeko.v9i1.744 [15] m. rosenberger, c. zhang, p. votyakov, m. peibler, r. celestre, g. notni, emva 1288 camera characterisation and the influences of radiometric camera characteristics on geometric measurements, acta imeko 5 (2016) 4, pp. 81-87. doi: 10.21014/acta_imeko.v5i4.356 [16] r. hartley, a. zisserman, multiple view geometry in computer vision, cambridge university press, new york, 2003. [17] gum - guide to the expression of uncertainty in measurement, iso - international organization for standardization, 1993. [18] gum-s1 - evaluation of measurement data - supplement 1 to the guide to the expression of uncertainty in measurements - propagation of distributions using a monte carlo method, jcgm - joint committee for guides in metrology, 2008. [19] d.
brown, close-range camera calibration, proceedings of the symposium on close-range photogrammetry, illinois, 1971, pp. 855-866. [20] r. duarte, m. rito-corrêa, t. vaz, a. campos costa, shaking table testing of structures, proceedings of the 10th world conference on earthquake engineering, rotterdam, 1994. [21] a. ribeiro, a. campos costa, p. candeias, j. alves e sousa, l. martins, a. martins, a. ferreira, assessment of the metrological performance of seismic tables for a qms recognition, journal of physics: conference series 772 (2016), pp. 1-16. doi: 10.1088/1742-6596/772/1/012006 [22] common protocol for the qualification of research, seismic engineering research infrastructures for european synergies, wp 3, na 2.4, deliv. 3, 2012. [23] g. lipinski, mesures dimensionnelles par interférométrie laser, techniques de l'ingénieur - mesures et contrôle, r 1 320, 1995. [24] a. caporale, f. parisi, d. asprone, r. luciano, a. prota, micromechanical analysis of adobe masonry as two-component composite: influence of bond and loading schemes, compos. struct. 112 (2014), pp. 254-263. doi: 10.1016/j.compstruct.2014.02.020 [25] f. greco, l. leonetti, r. luciano, p. blasi, an adaptive multiscale strategy for the damage analysis of masonry modelled as a composite material, compos. struct. 153 (2016), pp. 972-988. doi: 10.1016/j.compstruct.2016.06.066 [26] s. kallioras, a. correia, a. marques, v. bernardo, p. candeias, f. graziotti, lnec-build-3: an incremental shake-table test on a dutch urm detached house with chimneys, eucentre research report euc203/2018u, eucentre, 2018. [27] a. marques, j. ferreira, p. candeias, m. veiga, axial compression and bending tests on old masonry walls, proceedings of the 3rd international conference on protection of historical constructions, lisbon, 2017. [28] en 1052-1 methods of test for masonry - part 1: determination of compressive strength, cen - european committee for standardization, 1998. [29] l. martins, a. marques, a. ribeiro, p. candeias, m. veiga, j. gomes ferreira, optical measurement of planar deformations in the destructive mechanical testing of masonry specimens, applied sciences 10, 371 (2020), pp. 1-23. doi: 10.3390/app10010371 [30] k. gåsvik, optical metrology, 2nd edition, john wiley & sons, 1995, pp. 168-169. [31] en 12390-4 testing hardened concrete - part 4: compressive strength - specifications for testing machines, cen - european committee for standardization, 2019.
acta imeko issn: 2221-870x june 2015, volume 4, number 2, 100-106
least square procedures to improve the results of the three-parameter sine-fitting algorithm
aldo baccigalupi, guido d'alessandro, mauro d'arco, rosario schiano lo moriello
dipartimento di ingegneria elettrica e delle tecnologie dell'informazione, università degli studi di napoli federico ii, via claudio 21, 80125 napoli, italy
section: research paper
keywords: model identification; sine fitting; least square minimization; parameters estimation; bias; uncertainty; computational burden; running time
citation: a. baccigalupi, g. d'alessandro, m.
d'arco, r. schiano lo moriello, least square procedures to improve the results of the three-parameter sine fitting algorithm, acta imeko, vol. 4, no. 2, article 17, june 2015, identifier: imeko-acta-04 (2015)-02-17
editor: paolo carbone, university of perugia, italy
received january 7, 2014; in final form june 9, 2014; published june 2015
copyright: © 2015 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: mauro d'arco, e-mail: darco@unina.it
abstract
the paper presents two approaches to improve the three parameter sine fitting algorithm and attain accurate estimates of the parameters of a sinusoidal signal corrupted by noise. as with the four parameter sine fitting algorithm, which is usually adopted to improve the estimates of the three parameter algorithm, the proposed ones can in theory be inserted into iterative schemes and repeated until a target precision is gained. anyway, the use of the proposed algorithms is particularly suggested for those applications in which the results must be gained in very short times, which are incompatible with long iterative procedures. in these cases, when a single run of the algorithms has to offer the required improvement, the proposed ones are valid alternatives to the four parameter algorithm, sharing with it almost comparable accuracy.
1. introduction
many powerful methods aimed at measuring the parameters of digital signals corrupted by noise rely on model fitting approaches [1]. fitting consists in adjusting the parameters of the model in order to minimize a distance measurement between the model and the digital signal. when the distance measurement consists in the root mean square difference, the fitting is recognized as the best fit in a least square sense [2]-[3]. sine fitting is aimed at accurately estimating the frequency, amplitude, phase and dc value of a sinusoidal signal corrupted by noise. also, an estimate of the variance of the noise can be gained from the residue of the fit [4]-[7]. for instance, displacement sensors [8] and digital impedance meters [9] exploit sine fitting to estimate the amplitude of the signal, whereas digital instruments devoted to system monitoring and control use sine fitting to obtain accurate frequency [10]-[12] or phase measurements [13]-[16]. in metrological applications the chief purpose of the fitting is the accurate estimation of the variance of the noise [17]. the most common sine-fitting algorithm is the three parameter algorithm [17], which is based on a linear algebra framework and is utilized when the frequency parameter is known and stable. the accuracy characterizing the parameter estimates offered by the three parameter algorithm is mainly limited by how exactly the frequency of the signal is known. if the fitting result is unsatisfactory, a four parameter algorithm can be adopted to improve the estimates. the four parameter algorithm starts from the results of a pre-fit, obtained through the three parameter algorithm, and runs a least square procedure to (i) update the estimates of amplitude, phase, and dc value and (ii) identify a frequency correction term to improve the frequency estimate. in order to further improve the results, the four-parameter algorithm can be recursively invoked until the target accuracy is gained, using the estimates gained at the previous execution as inputs for the next execution. in the paper, starting from an analysis of the effects produced by a frequency error in the use of the three parameter sine fitting algorithm, two least square methods capable of complementing the three parameter sine fitting algorithm and improving its results are proposed as alternatives to the four parameter algorithm. the proposed approaches quantify both the frequency error, to offer an accurate estimate of the frequency of the input signal, and the variance of the noise.
their performance in terms of accuracy and running times is thoroughly analysed and compared to that offered by the four parameter algorithm. although both the proposed algorithms and the four parameter algorithm can be inserted into iterative schemes and repeated until a target precision is gained, the single shot results are considered in the comparison. the proposed algorithms are in fact intended for those applications in which the estimates of the frequency and of the variance of the noise must be gained in very short times, which are incompatible with long iterative approaches.
2. sine fitting approaches
2.1. three parameter sine fitting
the standard three parameter sine fitting algorithm estimates the amplitude, phase and dc value of the sinusoidal signal that best fits, in a least square sense, a digital signal $y_k$. named $\nu$ the value of the digital frequency of the input data (i.e. the frequency in hertz normalized to the sample frequency), $a$, $\varphi$, and $c$, respectively, the amplitude, phase and dc value, $n_k$ the noise, and $k$ the digital time ranging from 0 up to $m-1$, the following model can be adopted:

$$ y_k = a \cos(2\pi\nu k + \varphi) + c + n_k \quad (1) $$

the three-parameter sine-fitting algorithm requires as input parameter the value of the digital frequency or an estimate $\nu_0$ of it. if the value $\nu_0$ coincides with $\nu$, the values of the parameters $a$, $\varphi$ and $c$ can be gained considering the equation:

$$ y_k = a_0 \cos(2\pi\nu_0 k) + b_0 \sin(2\pi\nu_0 k) + c_0 + n_k \quad (2) $$

which is equivalent to equation (1) but linear in the parameters $a_0$, $b_0$ and $c_0$, respectively defined as: $a_0 = a\cos\varphi$, $b_0 = -a\sin\varphi$, and $c_0 = c$. discarding the noise $n_k$ and ranging $k$ from 0 up to $m-1$, equation (2) produces a linear system made up of $m$ equations, which can be represented in matrix form by:

$$ D_0 x_0 = y \quad (3) $$

where $x_0$ is a column array with the unknown components $a_0$, $b_0$ and $c_0$, $y$ is a column array containing the samples of the input digital signal $y_k$, and $D_0$ is the matrix of the coefficients:

$$ D_0 = \begin{bmatrix} c_{0,0} & s_{0,0} & 1 \\ \vdots & \vdots & \vdots \\ c_{0,k} & s_{0,k} & 1 \\ \vdots & \vdots & \vdots \\ c_{0,m-1} & s_{0,m-1} & 1 \end{bmatrix} \quad (4) $$

in which $c_{0,k} = \cos(2\pi\nu_0 k)$ and $s_{0,k} = \sin(2\pi\nu_0 k)$. the three parameter algorithm determines $x_0$ by means of the pseudo-inverse method, namely:

$$ x_0 = \left(D_0^T D_0\right)^{-1} D_0^T y \quad (5) $$

as is well known, the solution represented by equation (5) minimizes the mean square value of the residue of the fitting, which should coincide with the noise. the parameters of the model given in (1) are then estimated through:

$$ a = \sqrt{a_0^2 + b_0^2}, \qquad \varphi = \mathrm{atan2}(-b_0, a_0), \qquad c = c_0 \quad (6) $$

in which atan2( . , . ) represents a four quadrant inverse tangent function.
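the three-parameter fit is compact enough to be shown in full. the sketch below is a minimal runnable rendition of equations (2)-(6), assuming the digital frequency estimate is supplied; the function and variable names are illustrative.

```python
# a runnable sketch of the three-parameter fit of section 2.1, assuming the
# digital frequency estimate nu0 is supplied; names are illustrative.
import numpy as np

def three_param_sine_fit(y, nu0):
    """least-squares estimates of a, phi, c for the model of equation (2)."""
    m = len(y)
    k = np.arange(m)
    d0 = np.column_stack((np.cos(2 * np.pi * nu0 * k),
                          np.sin(2 * np.pi * nu0 * k),
                          np.ones(m)))
    # numerically stable equivalent of the pseudo-inverse of equation (5)
    (a0, b0, c0), *_ = np.linalg.lstsq(d0, y, rcond=None)
    a = np.hypot(a0, b0)                 # amplitude, equation (6)
    phi = np.arctan2(-b0, a0)            # phase, four-quadrant inverse tangent
    residue = y - d0 @ np.array([a0, b0, c0])
    return a, phi, c0, residue

# example: a noisy sinusoid with digital frequency 0.05
rng = np.random.default_rng(0)
k = np.arange(1000)
y = 0.5 * np.cos(2 * np.pi * 0.05 * k + 0.3) + 1.0 + 0.01 * rng.standard_normal(1000)
print(three_param_sine_fit(y, 0.05)[:3])
```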
2.2. four parameter sine fitting
the four parameter sine fitting algorithm works on the hypothesis that the estimated value of the digital frequency $\nu_0$ approaches the exact one $\nu$, but is biased by a small amount $\Delta\nu$. it linearizes the model given in (2), which is non-linear with respect to $\nu$, by approximating it with a first order taylor expansion centred in $\nu_{i-1} = \nu_0$. thus it considers for the acquired digital signal the representation:

$$ y_k = a_i \cos(2\pi\nu_{i-1} k) + b_i \sin(2\pi\nu_{i-1} k) + c_i + \Delta\nu_i \, 2\pi k \left[ b_{i-1}\cos(2\pi\nu_{i-1} k) - a_{i-1}\sin(2\pi\nu_{i-1} k) \right] + n_k \quad (7) $$

in which there are four unknown parameters, namely $a_i$, $b_i$, $c_i$, and $\Delta\nu_i$. the term $\Delta\nu_i$ represents a correction term that, added to $\nu_{i-1}$, produces an improved estimate of the digital frequency, $\nu_i = \nu_{i-1} + \Delta\nu_i$. equation (7) represents an iterative model that includes the values of the parameters estimated at step $i-1$ and the unknown parameters to be estimated at step $i$. in order to start the iterations, the four parameter algorithm uses at step $i = 1$ the results of a three-parameter pre-fit to initialize $a_{i-1} = a_0$, $b_{i-1} = b_0$ and $\nu_{i-1} = \nu_0$; for the next iterations, the parameters estimated at step $i$ are reused at step $i+1$. the method to solve (7) is formally identical to that discussed for the three-parameter algorithm. in fact, discarding the noise $n_k$ and ranging $k$ from 0 up to $m-1$, equation (7) produces a linear system made up of $m$ equations. the estimates of the parameters $a_i$, $b_i$, $c_i$ and $\Delta\nu_i$ that minimize the mean square value of the residue in (7) can thus be attained solving the system by means of the pseudo-inverse method. in matrix form, collecting the parameters $a_i$, $b_i$, $c_i$ and $\Delta\nu_i$ in the column array $x_i$, it results:

$$ x_i = \left(D_i^T D_i\right)^{-1} D_i^T y \quad (8) $$

where $D_i$ is defined by:

$$ D_i = \begin{bmatrix} c_{i-1,0} & s_{i-1,0} & 1 & 0 \\ \vdots & \vdots & \vdots & \vdots \\ c_{i-1,k} & s_{i-1,k} & 1 & 2\pi k \left( b_{i-1} c_{i-1,k} - a_{i-1} s_{i-1,k} \right) \\ \vdots & \vdots & \vdots & \vdots \\ c_{i-1,m-1} & s_{i-1,m-1} & 1 & 2\pi (m-1) \left( b_{i-1} c_{i-1,m-1} - a_{i-1} s_{i-1,m-1} \right) \end{bmatrix} \quad (9) $$

the four-parameter algorithm is usually run until the distance between successive estimates is smaller than a target threshold. it typically converges in less than ten iterations, but can diverge if the initialization step is coarse.
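a single step of this scheme can be sketched as below, before moving to the analysis of the frequency error; the code assumes a three-parameter pre-fit has already provided $a_0$, $b_0$ and $\nu_0$, and the names are illustrative rather than taken from a reference implementation.

```python
# a sketch of one iteration of the four-parameter scheme of section 2.2;
# a_prev, b_prev and nu_prev come from a pre-fit, names illustrative.
import numpy as np

def four_param_step(y, a_prev, b_prev, nu_prev):
    """one least-squares step of equation (7): returns updated a, b, c and
    the corrected digital frequency nu_prev + delta_nu."""
    m = len(y)
    k = np.arange(m)
    c_k = np.cos(2 * np.pi * nu_prev * k)
    s_k = np.sin(2 * np.pi * nu_prev * k)
    # fourth column of the matrix d_i of equation (9)
    d4 = 2 * np.pi * k * (b_prev * c_k - a_prev * s_k)
    d = np.column_stack((c_k, s_k, np.ones(m), d4))
    (a_i, b_i, c_i, dnu), *_ = np.linalg.lstsq(d, y, rcond=None)
    return a_i, b_i, c_i, nu_prev + dnu
```

in an iterative use, the returned values would simply be fed back as the new a_prev, b_prev and nu_prev until the correction term becomes smaller than the target threshold.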
3. analysis of the effects of a frequency error
in order to improve the results of the fitting, an appropriate model is proposed to analytically describe the effects produced by a frequency error. in particular, the digital signal can be represented according to:

$$ y_k = a\cos(2\pi\nu k + \varphi) + c + n_k = a\cos\!\left(2\pi\nu_0 k + 2\pi\Delta\nu (k - i_0)\right) + c + n_k \quad (10) $$

in which $i_0$ satisfies the constraint $2\pi\Delta\nu\, i_0 = -\varphi + 2n\pi$ for an unknown integer $n$, and $\Delta\nu$ is much smaller than $\nu$. from equation (10) it follows that:

$$ y_k = a\cos(2\pi\nu_0 k)\cos\!\left(2\pi\Delta\nu(k - i_0)\right) - a\sin(2\pi\nu_0 k)\sin\!\left(2\pi\Delta\nu(k - i_0)\right) + c + n_k \quad (11) $$

and, in the hypothesis that the frequency error $\Delta\nu$ is small:

$$ y_k \cong a\cos(2\pi\nu_0 k) - 2\pi\Delta\nu (k - i_0)\, a \sin(2\pi\nu_0 k) + c + n_k \quad (12) $$

according to (12), the residue

$$ \varepsilon_k = y_k - a\cos(2\pi\nu_0 k) - c = -2\pi\Delta\nu (k - i_0)\, a\sin(2\pi\nu_0 k) + n_k \quad (13) $$

does not coincide with the noise but includes an additional contribution, which is described by linearly damped oscillations characterized by frequency $\nu_0$; the parameters $\Delta\nu$ and $i_0$ characterize the envelope of the damped oscillations. this contribution is often visible in the typical graph of the $\varepsilon_k$ data resulting in real tests: an example of a possible scenario is given in figure 1.
4. proposed methods
the removal of the deterministic contribution present in (13) can be obtained by least square estimation procedures. in particular, two methods can be considered.
4.1. method 'a'
a first method consists in finding the values of the parameters $\Delta\nu$ and $i_0$ that produce the least square residue in (13). the values can be obtained by solving through the pseudo-inverse method the over-determined system:

$$ \begin{bmatrix} -0 \cdot s_0 & s_0 \\ \vdots & \vdots \\ -k\, s_k & s_k \\ \vdots & \vdots \\ -(m-1)\, s_{m-1} & s_{m-1} \end{bmatrix} \begin{bmatrix} \Delta\nu \\ \Delta\nu\, i_0 \end{bmatrix} = \begin{bmatrix} \varepsilon_0 \\ \vdots \\ \varepsilon_k \\ \vdots \\ \varepsilon_{m-1} \end{bmatrix} \quad (14) $$

where $s_k = 2\pi a \sin(2\pi\nu_0 k)$. specifically, named $\mathbf{a}$ the column array with components $\Delta\nu$ and $\Delta\nu\, i_0$:

$$ \mathbf{a} = \left(A^T A\right)^{-1} A^T \varepsilon \quad (15) $$

where $\varepsilon$ is the column array containing the values $\varepsilon_k$, and $A$ is the matrix in (14). it is worth noting that (15) is formally the solution of a straight line fit problem. the estimate of the frequency error can therefore be straightforwardly gained by applying the formula:

$$ \Delta\nu = \frac{\sum_{k=0}^{m-1} k\, s_k^2 \sum_{k=0}^{m-1} s_k \varepsilon_k - \sum_{k=0}^{m-1} s_k^2 \sum_{k=0}^{m-1} k\, s_k \varepsilon_k}{\sum_{k=0}^{m-1} k^2 s_k^2 \sum_{k=0}^{m-1} s_k^2 - \left(\sum_{k=0}^{m-1} k\, s_k^2\right)^2} \quad (16) $$

in order to measure the random noise $n_k$, the deterministic contribution present in (13), identified through the estimation of $\Delta\nu$ and $i_0$, has to be subtracted from the residue $\varepsilon_k$.
4.2. method 'b'
alternatively, the frequency correction term $\Delta\nu$ and the variance $\sigma_n^2$ of the noise corrupting the signal can be estimated from the squared values $\varepsilon_k^2$ of the residue throughout a further least square procedure. in this case a simple and linear model can be utilized to fit the data. specifically, taking the square values of equation (13), it follows:

$$ \varepsilon_k^2 = 2(\pi\Delta\nu a)^2 (k - i_0)^2 \left[1 - \cos(4\pi\nu_0 k)\right] - 4\pi\Delta\nu (k - i_0)\, a \sin(2\pi\nu_0 k)\, n_k + n_k^2 \quad (17) $$

the squared values of the noise $n_k^2$ include both a constant term equal to $\sigma_n^2$, which is the variance of the original noise $n_k$, and a zero mean random term $m_k$. grouping all the contributions that have zero mean value in (17), both if characterized by deterministic or random nature, in the term $l_k$:

$$ l_k = -2(\pi\Delta\nu a)^2 (k - i_0)^2 \cos(4\pi\nu_0 k) + m_k - 4\pi\Delta\nu (k - i_0)\, a\sin(2\pi\nu_0 k)\, n_k \quad (18) $$

it can be stated that:

$$ \varepsilon_k^2 = 2(\pi\Delta\nu a)^2 \left(k^2 - 2 i_0 k + i_0^2\right) + \sigma_n^2 + l_k \quad (19) $$

to gain estimates of $\Delta\nu$, $i_0$, and $\sigma_n^2$, and get rid of $l_k$, equation (19) is first rewritten in a more compact form:

$$ \varepsilon_k^2 = \alpha k^2 + \beta k + \gamma + l_k \quad (20) $$

and the parameters $\alpha$, $\beta$, and $\gamma$ are estimated by solving through the pseudo-inverse method the system:

$$ \begin{bmatrix} 0 & 0 & 1 \\ \vdots & \vdots & \vdots \\ k^2 & k & 1 \\ \vdots & \vdots & \vdots \\ (m-1)^2 & (m-1) & 1 \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \\ \gamma \end{bmatrix} = \begin{bmatrix} \varepsilon_0^2 \\ \vdots \\ \varepsilon_k^2 \\ \vdots \\ \varepsilon_{m-1}^2 \end{bmatrix} \quad (21) $$

specifically, named $\mathbf{b}$ the column array with components $\alpha$, $\beta$, and $\gamma$, it results:

$$ \mathbf{b} = \left(B^T B\right)^{-1} B^T \eta \quad (22) $$

where $\eta$ is a column array containing the values $\varepsilon_k^2$ and $B$ is the matrix in (21). the typical results obtained fitting the $\varepsilon_k^2$ data are given in figure 2.
figure 1. residue characterized by linearly damped oscillations masked by noise.
figure 2. squared values of the residue and polynomial fitting results.
finally, taking into account that $\alpha$, $\beta$, and $\gamma$ are related to $\Delta\nu$, $i_0$ and $\sigma_n^2$ by:

$$ \alpha = 2\pi^2 \Delta\nu^2 a^2, \qquad \beta = -4\pi^2 \Delta\nu^2 a^2\, i_0, \qquad \gamma = 2\pi^2 \Delta\nu^2 a^2\, i_0^2 + \sigma_n^2 \quad (23) $$

the parameters of interest can be gained from:

$$ \Delta\nu = \frac{1}{2\pi a}\sqrt{2\alpha}, \qquad i_0 = -\frac{\beta}{2\alpha}, \qquad \sigma_n^2 = \gamma - \alpha\, i_0^2 \quad (24) $$

in order to estimate the term that compensates the frequency error, the sign of the correction has to be gained throughout further considerations.
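before turning to the sign, both estimation procedures can be condensed into a short sketch; it assumes the residue and the amplitude estimate of a three-parameter fit are available, and the sign ambiguity left by method 'b' is resolved as described next. names are illustrative.

```python
# sketches of methods 'a' and 'b', assuming the residue epsilon_k of a
# three-parameter fit with frequency estimate nu0 and amplitude estimate a.
import numpy as np

def method_a(residue, a, nu0):
    m = len(residue)
    k = np.arange(m)
    s_k = 2 * np.pi * a * np.sin(2 * np.pi * nu0 * k)
    # over-determined system (14): residue ~ -dnu*k*s_k + (dnu*i0)*s_k
    amat = np.column_stack((-k * s_k, s_k))
    (dnu, dnu_i0), *_ = np.linalg.lstsq(amat, residue, rcond=None)
    noise = residue - amat @ np.array([dnu, dnu_i0])   # subtract (13)
    return dnu, noise.var()

def method_b(residue, a):
    m = len(residue)
    k = np.arange(m)
    # polynomial model (20) for the squared residue
    bmat = np.column_stack((k**2, k, np.ones(m)))
    (alpha, beta, gamma), *_ = np.linalg.lstsq(bmat, residue**2, rcond=None)
    # equation (24); alpha > 0 is assumed, and the sign of dnu is left open
    dnu = np.sqrt(max(2 * alpha, 0.0)) / (2 * np.pi * a)
    i0 = -beta / (2 * alpha)
    var_n = gamma - alpha * i0**2
    return dnu, var_n
```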
for instance, if the initial frequency estimate $\nu_0$ has been obtained by applying a peak location algorithm to the results $y(j)$, $j = 0, \ldots, m-1$, of an m-point discrete fourier transform (m-dft), according to:

$$ \nu_0 = \frac{\bar m}{m}, \qquad |y(\bar m)| = \max_j |y(j)| \quad (25) $$

the correction can be given without sign ambiguity as:

$$ \mathrm{sign}(\Delta\nu) = \mathrm{sign}\!\left(|y(\bar m + 1)| - |y(\bar m - 1)|\right) \quad (26) $$

which is valid for a rectangular window at large values of $\bar m$ (typically more than 10 cycles in the measurement interval). the high signal to noise ratio in the neighbourhood of the peak value should assure minimum sensitivity to the effects of the noise.
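a compact rendition of this initialisation is sketched below, under the stated rectangular-window assumption and with the peak assumed to lie away from the band edges; the function name is illustrative.

```python
# a sketch of the initialisation of equations (25)-(26): coarse digital
# frequency from the m-dft peak and sign of the correction from the two
# neighbouring bins.
import numpy as np

def coarse_frequency_and_sign(y):
    m = len(y)
    spectrum = np.abs(np.fft.rfft(y))
    spectrum[0] = 0.0                          # ignore the dc bin
    m_bar = int(np.argmax(spectrum))           # assumed away from band edges
    nu0 = m_bar / m                            # equation (25)
    sign = np.sign(spectrum[m_bar + 1] - spectrum[m_bar - 1])   # equation (26)
    return nu0, sign
```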
5. performance evaluation
the performance of the proposed methods has been assessed by means of several tests. the main goal of the tests is evaluating the accuracy of the methods at estimating both the frequency of the input signal and the variance of the superimposed noise. in this case the accuracy can be decomposed into two terms, i.e. bias and repeatability, and can be quantified as the square root of their quadratic summation. each test consists in repeating n times two fundamental steps: running a three-parameter sine-fitting algorithm to attain initial estimates of the parameters of the sine model; applying the proposed methods and the four-parameter algorithm to attain improved estimates of the parameters of interest. the difference between the mean value of the repeated estimates and the reference value represents the experimental bias, while the experimental standard deviation of the repeated estimates represents the repeatability. the signals under test are made up of m samples that represent a sinusoid corrupted by noise. the digital frequency of the test signals $\nu$ is obtained by adding to a basic value $\nu_0$, expressed with finite resolution rbw = 1/m, an offset $\Delta\nu$, i.e. $\nu = \nu_0 + \Delta\nu$. the basic value $\nu_0$ is given in input to the initial three-parameter sine-fitting algorithm to obtain an initial estimate of the signal parameters. the offset $\Delta\nu$ makes the methods under test work in the presence of a frequency error. figure 3 shows the typical results related to digital frequency estimates, obtained for a signal under test characterized by m = 100, $\nu_0$ = 0.310, and signal to noise ratio (snr) equal to 37 db. to highlight both the bias and repeatability contributions to the overall accuracy, the results are shown by means of scatterplots; different markers are associated with the different methods. in particular, the results offered by the peak location algorithm applied to the outcomes of the m-dft are represented by marker '*', those offered by method a by marker '+', those offered by method b by marker 'o', and those offered by the four-parameter sine fitting algorithm in single run mode by marker 'x'. the considered frequency offsets $\Delta\nu$ are positive and equal to 10 %, 20 %, 30 %, 40 % and 50 % of rbw; it has been observed that positive and negative frequency errors produce the same statistics. for each method a set of 15 estimates has been considered to put in evidence bias and repeatability. the results highlight that methods a and b exhibit comparable performance, both in terms of bias and repeatability, with respect to the four-parameter sine-fitting algorithm in single run mode. note that the results produced by the peak location applied to the m-dft results are characterized by utmost repeatability, which is explained by the minimum sensitivity to the effects of the noise in the neighbourhood of the peak value of the signal spectrum, as well as by the high snr equal to 37 db. figure 4 shows the typical results related to the variance estimates obtained in the same test conditions. specifically, the results are given by means of a scatterplot that makes visible both bias and repeatability. the peak-to-peak amplitude of the simulated signal is 1 v, the noise variance is 0.005 v². in this case marker '*' is associated with the results offered by the three parameter sine fitting algorithm. methods a and b exhibit almost comparable performance with respect to the four parameter sine fitting algorithm utilized in single run mode. similar tests have been repeated for signals characterized by a reduced snr = 17 db; figure 5 and figure 6 show the results. concerning the digital frequency estimations, methods a and b still exhibit comparable performance in terms of bias and repeatability with respect to the four parameter sine fitting algorithm. further tests have been performed in order to verify any influence of the length of the processed data on the results. in particular, the estimates of the digital frequency and noise variance have been performed upon records made up of 100, 250, 500, and 1000 samples. the results show that the accuracy characterizing the digital frequency estimates improves with increasing record size m. the performance in terms of bias and repeatability related to the variance estimates produced by the proposed methods shows the same sensitivity to the record size as that offered by the four parameter algorithm. figure 7 and figure 8 show a typical case: they are related to a signal corrupted by noise characterized by snr = 17 db. it is worth noting that the resolution rbw needed to measure the exact digital frequency is 0.001; thus, for the records characterized by size 100, 250 and 500, the m-dft approach offers a digital frequency estimate which is shifted from the real one, because the available resolutions are respectively equal to 0.01, 0.004 and 0.002. the results obtained for m = 1000 are very accurate because the digital frequency of the signal is exactly estimated by the m-dft approach.
figure 3. results related to the estimates of the digital frequency of sinusoidal signals corrupted by noise when the snr is equal to 37 db.
figure 4. results related to the estimates of the noise variance carried out when the digital frequency of the signal is affected by error.
figure 5. results related to the estimates of the digital frequency of sinusoidal signals corrupted by noise when the snr is equal to 17 db.
figure 6. results related to the estimates of the digital frequency of sinusoidal signals corrupted by noise when the snr is equal to 20 db.
figure 7. estimates of the digital frequency of the signal corrupted by noise: the accuracy improves with increasing record size m.
figure 8. typical estimates of the variance of the noise: the performance of the proposed methods is comparable to that offered by the four parameter algorithm.
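the bias/repeatability bookkeeping used throughout this section can be condensed into a few lines; the sketch below simply implements the accuracy decomposition described above, with the generation of the repeated estimates left to the methods already shown.

```python
# accuracy decomposition used in section 5: bias as the distance of the mean
# estimate from the reference value, repeatability as the experimental
# standard deviation, overall accuracy as their quadratic summation.
import numpy as np

def bias_repeatability(estimates, reference):
    estimates = np.asarray(estimates, dtype=float)
    bias = estimates.mean() - reference
    repeatability = estimates.std(ddof=1)
    accuracy = np.hypot(bias, repeatability)   # square root of quadratic sum
    return bias, repeatability, accuracy
```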
6. conclusions
the paper has presented two methods to improve the three parameter sine fitting algorithm. the performance of the two methods, intended as an additional step of the three parameter sine fitting algorithm, has been compared to that of a four parameter sine fitting algorithm used in single run mode. the proposed methods represent nice alternatives to the four parameter sine fitting algorithm for the identification of the parameters of the sinusoidal model and the estimation of the variance of the superimposed noise. in addition, they are suitable to function in a large variety of noise conditions. it is worth noting that one of the simplest ways to improve the results of a three parameter sine fitting algorithm is preliminarily computing an accurate initial estimate of the frequency of the input signal. to this end, high-accuracy spectrum analysis techniques, capable of providing analytical leakage compensation and improving frequency estimation, can be employed; one of these simply requires interpolating the dft results related to the signal under test. in several practical applications the proposed methods are likely to be rejected in favor of interpolated dft approaches, while in particular cases they can be considered for their low computational burden. anyhow, the proposed methods represent two different lines of attack to refine the three-parameter sine fitting results, to be valued from a conceptual and methodological point of view.
appendix a
the computational burden of the different approaches that have been considered can be evaluated by counting the number of the most expensive calculus operations, i.e. the nonlinear operations and multiplications, which are required to obtain the results. the overall computational burden of the proposed methods and of the four parameter algorithm considered for estimating the digital frequency and the variance of the noise includes that of the three parameter algorithm. the three parameter sine fitting algorithm requires the definition of matrix $D_0$, which involves the calculation of $\cos(2\pi\nu_0 k)$ and $\sin(2\pi\nu_0 k)$, with $k$ ranging from 0 up to $m-1$, and has a cost equal to $2m$. then a least square minimization has to be performed by processing $m$ equations to gain the values of $p$ parameters. in detail, $mp(p+1)/2$ multiplications are needed to compute the symmetrical square matrix $D_0^T D_0$ and $mp$ multiplications are required to condition the input data by computing $D_0^T y$. the other calculations have a negligible cost. in fact, the number of multiplications needed to perform the p-square matrix inversion, i.e. $(D_0^T D_0)^{-1}$, in the case that the inversion is attained by calculating the algebraic complements, which is the most expensive method and requires $p!$ multiplications, is negligible since $p$ is equal to 3; similarly, the final step, which consists in calculating the inner product between the inverted matrix and the conditioned data, counts only $p^2$ multiplications, which is negligible. the four parameter algorithm adds the computational burden of including in matrix $D_i$ a fourth column, i.e. $2\pi k\,(b_0 \cos(2\pi\nu_0 k) - a_0 \sin(2\pi\nu_0 k))$, which requires $4m$ multiplications, aside the three columns already present in $D_0$. moreover, it demands a least square minimization upon $m$ equations to gain $q = 4$ parameters. method a instead involves the computational burden of defining matrix $A$, which consists in calculating the values $s_k$, with a cost equal to $m$, and performing the $m$ multiplications $k \cdot s_k$, and, furthermore, the computational burden of performing a least square minimization upon $m$ equations to gain $r = 2$ parameters. nonetheless, to estimate the variance of the noise, the parameters $\Delta\nu$ and $i_0$ have to be used to build up the fitted model of the residue $\varepsilon_k$, extract the noise $n_k$, and calculate its variance.
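a minimal timing harness in the spirit of the measurement procedure described below (repeated runs averaged to wash out operating-system effects) can be sketched as follows; the helper name is illustrative.

```python
# average running time over repeated runs, to clear interruptions and
# holding-ups due to the multitasking operating system; a sketch only.
import time

def average_running_time_ms(fn, args, runs=1000):
    t0 = time.perf_counter()
    for _ in range(runs):
        fn(*args)
    return (time.perf_counter() - t0) / runs * 1e3   # milliseconds per run
```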
method b is the best one from a computational point of view because it does not add any computational burden for defining matrix $B$, the values of which are independent from the unknown parameters: to define matrix $B$ only the size $m$ of the acquired data is needed. since the square matrix $(B^T B)^{-1}$ can be calculated and stored previously, the remaining computational burden is just that of data conditioning, which consists in calculating $B^T \eta$ and requires $3m$ multiplications. moreover, the method makes ready both an estimate of the frequency error and of the variance of the noise without requiring building up models. the running times of the three and four parameter algorithms as well as of the methods a and b have also been measured. in particular, the algorithms have been run on a comex xp.520 core 2 duo t7200 machine, which is a windows based multitasking processor. due to the multitasking functioning of the operating system, the running times have shown different values in repeated tests. to clear the effects of interruptions and holding-ups due to the operating system, the same test signals have been processed a thousand times and the average running time observed for each algorithm has been considered as a reasonable measurement. the test results highlight that the four parameter sine fitting algorithm lengthens the running time of the three parameter algorithm. in particular, table 1 gives in the first column the results related to the average processing time of a record of 1000 points, in milliseconds. it also summarizes in the second column the approximate estimations of the computational burden of the four different approaches, expressed in terms of number of most expensive calculus operations.
table 1. average running times determined upon 1000 runs and theoretical computational burden.
method | average running time [ms] | theoretical computational burden
3 par. | 12.8 | [ 2 + p(p+1)/2 + p ] m
4 par. | 37.7 | [ 4 + 2 + p(p+1)/2 + p + q(q+1)/2 + q ] m
method 'a' | 25.7 | [ 2 + 2 + p(p+1)/2 + p + r(r+1)/2 + r ] m
method 'b' | 19.1 | [ 3 + 2 + p(p+1)/2 + p ] m
references
[1] steven m. kay, "fundamentals of statistical signal processing", volume i: estimation theory, april 1993, prentice hall. [2] ramos, p.m., fonseca da silva, m., martins, r.c., serra, a.m.c., "simulation and experimental results of multiharmonic least-squares fitting algorithms applied to periodic signals", ieee trans. on instrum. and measurements, vol.55, n.2, 2006, pp.646-651. [3] ramos, p.m., serra, a.c., "least squares multiharmonic fitting: convergence improvements", ieee trans. on instrum. and measurements, vol.56, n.4, 2007, pp.1412-1418. [4] quatieri, t.f., danisewicz, r.g., "an approach to co-channel talker interference suppression using a sinusoidal model for speech", ieee trans. on acoustics, speech and signal processing, vol.38, n.1, 1990, pp.56-69. [5] kollar, i., blair, j.j., "improved determination of the best fitting sine wave in adc testing", ieee trans. on instrum. and measurements, vol.54, n.5, 2005, pp.1978-1983. [6] pintelon, r., schoukens, j., "an improved sine-wave fitting procedure for characterizing data acquisition channels", ieee trans. on instrum. and measurements, vol.45, n.2, 1996, pp.588-593. [7] bertocco, m., narduzzi, c., "sine-fit versus discrete fourier transform-based algorithms in snr testing of waveform digitizers", ieee trans. on instrum. and measurements, vol.46, n.2, 1995, pp.445-448.
[8] masi, a., brielmann, a., losito, r., martino, m., "lvdt conditioning on the lhc collimators", ieee transactions on nuclear science, vol.55, n.1, 2008, pp.67-75. [9] radil, t., ramos, p.m., serra, a.c., "impedance measurement with sine-fitting algorithms implemented in a dsp portable device", ieee trans. on instrum. and measurements, vol.57, n.1, 2008, pp.197-204. [10] bakhtiari, s., shaolin liao, elmer, t.w., gopalsami, n.s., raptis, a.c., "a real-time heart rate analysis for a remote millimeter wave i-q sensor", ieee trans. on biomedical engineering, vol.58, n.6, 2011, pp.1839-1845. [11] mahata, k., "subspace fitting approaches for frequency estimation using real-valued data", ieee trans. on signal processing, vol.53, n.2, 2005, pp.3099-3110. [12] radil, t., ramos, p.m., serra, a.c., "new spectrum leakage correction algorithm for frequency estimation of power system signals", ieee trans. on instrum. and measurements, vol.58, n.5, 2009, pp.1670-1679. [13] stenbakken, g., ming zhou, "dynamic phasor measurement unit test system", ieee power engineering society general meeting, 2007, pp.1-8. [14] yang kuojun, tian shulin, "algorithm based on tdc to estimate and calibrate delay between channels of high-speed data acquisition system", 10th international conference on electronic measurement & instruments (icemi), vol.3, 2011, pp.221-224. [15] vucijak, n.m., saranovac, l.v., "a simple algorithm for the estimation of phase difference between two sinusoidal voltages", ieee trans. on instrum. and measurements, vol.59, n.12, 2010, pp.3152-3158. [16] fenster, s., dann, t., moretti, a., "synchrotron improvements with shed waveform", ieee transactions on nuclear science, vol.28, n.3, 1981, pp.2577-2579. [17] ieee std 1241-2010 (revision of ieee std 1241-2000), "ieee standard for terminology and test methods for analog-to-digital converters", 2011, pp.1-139.
acta imeko issn: 2221-870x june 2015, volume 4, number 2, 10-17
development of a knowledge base for the planning of prismatic parts inspection on cmm
vidosav d. majstorovic 1, slavenko m. stojadinovic 1, tatjana v. sibalija 2
1 university of belgrade, faculty of mechanical engineering, department for production engineering, kraljice marije 16, 11120 belgrade
2 metropolitan university, belgrade, faculty of information technology, tadeusa koscuska 63, 11000 belgrade
section: research paper
keywords: inspection planning; cmm; knowledge base; feature-based system
citation: vidosav d. majstorovic, slavenko m. stojadinovic, tatjana v. sibalija, development of a knowledge base for the planning of prismatic parts inspection on cmm, acta imeko, vol. 4, no. 2, article 3, june 2015, identifier: imeko-acta-04 (2015)-02-03
editor: paolo carbone, university of perugia, italy
received june 26, 2014; in final form january 28, 2015; published june 2015
copyright: © 2015 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: slavenko m. stojadinovic, e-mail: slavenkostojadinovic@gmail.com
1. introduction
the development of an intelligent system for inspection planning is the imperative and prerequisite for a new generation of metrological systems and their applications within the digital quality concept, based on the global interoperability model [1-3].
the interoperability model integrates cad-cam-cai data on a digital platform [4, 5] and presents a basis for virtual simulation and knowledge-based inspection planning, especially for prismatic parts. digital manufacturing is a framework for the development of a new generation of technological systems, based on virtual simulation, digital modelling of a product and cloud computing application [2, 5]. expert systems (ess), being an advanced approach for the modelling and application of engineering knowledge in technological systems, have been frequently used in the last few decades [6-9]. since 2000, ess have passed through the mature stage of development and application [10-14]. influenced by ess for technological process planning (capp/cam) and step standardisation [4, 15, 16], the metrological primitives modelling is based on the modelling of metrological characteristics (features) [17-19]. our research is based on this approach [20, 21]. in the cai model, consideration of metrological features implies their recognition and extraction from the cad model and their definition and modelling [18]. our approach has evolved from the definition and modelling [21] and application of ontology for defining a knowledge hierarchy for inspection [20], toward an integrated approach [22] which uses a digital product model in an interoperability environment (autodesk inventor professional 2011, solidworks, pc-dmis, protege). our model of the system for intelligent design of a conceptual inspection plan for cmms (i.e. the intelligent system for automatic planning of inspection - isapi), in the artificial intelligence environment, is presented in this paper. special attention is dedicated to the knowledge base model that was developed and applied within the proposed approach [22].
abstract
inspection on coordinate measuring machines (cmms) is based on software support for various classes of metrological tasks, i.e. tolerances. today, the design of a uniform inspection plan for a measuring part presents a rather complex issue due to the following: (i) metrological complexity of a measuring part; (ii) skills and knowledge of a designer / inspection planner; and (iii) software for the cai model, considered as a part of an integrated cad-capp-cam-cai system. this issue could be addressed by the usage of expert systems that generate a conceptual inspection plan for a measuring part, based on which the inspection plan for a selected cmm could be automatically developed. this paper presents the development of a model of an automatic inspection planning system for cmms, and, in particular, the developed knowledge base model.
2. development of a knowledge base model
the conceptual inspection plan presents a basis for cmm programming. usually, based on experience, knowledge and skills, an engineer (inspection planner) generates a conceptual plan for the inspection on cmms. however, this approach should be avoided in modern manufacturing, especially due to the fact that today cmms work in a digital environment and
since then, owing to the development of artificial intelligence tools, a new generation of ess has been developed. the basis of each es model is a knowledge base, its organisation, scope and content in terms of factual and heuristic knowledge. a knowledge base must contain necessary knowledge and information about the measuring part (mp), its tolerances, geometric features (gf) and metrological features (mf), inspection sequences (is), measuring probe configuration (pc), measuring machine (mm), and fixture tools (ft) and accessories, as presented in figure 1 [21]. the basic steps for generating a conceptual inspection plan for cmm are: (i) analysis and synthesis of metrological tasks (tolerances) according to the digital product model (the nodes mp, mf and gf presented in figure 1); (ii) definition and orientation of measuring coordinate systems (mp, mf, gf, is, pc and mm); (iii) selection of a measuring probe configuration (is, pc and mm); and (iv) definition of a measuring strategy (decomposition of metrological features, development of geometric features and inspection planning). therefore, our approach is based on the geometrical (cad), technological (capp/cam) and metrological (cai) integration, as presented in the knowledge base model in figure 1). the proposed model generates a conceptual inspection plan with the following elements: (i) measuring part with metrological / geometric features and inspection sequences; (ii) measuring coordinate systems; (iii) measuring probe configuration; (iv) fixture tools and accessories; and (v) necessary characteristics of cmm. the proposed model supports the cai system [23, 24] developed as isapi [22], and also supports the realisation of the following approaches: (i) measurement (measuring part – cmm – measuring results), (ii) inspection (measuring part – cad – cmm – inspection results), (iii) reversible engineering (real part model – cmm – cad), and (iv) inspection planning (cad – cai – inspection plan). therefore, our model is based on the following axiom: geometrical – technological metrological integration (figure 2), compatible with the state-of-the-art approaches in digital manufacturing [4]. previous analyses show that the measuring and inspection on cmm require a wider scope of knowledge: (i) knowledge about the part processing (for the analysis of tolerances and definition of measuring coordinate systems); (ii) general knowledge about tolerances (analysis and synthesis of metrological and geometric features); (iii) specific mathematical knowledge about modelling and relations between geometric and metrological features (e.g. curves modelling); (iv) knowledge about cmm and its working principles; (v) knowledge about software installed on cmm; and (vi) heuristic knowledge about this domain. 2.1. an experimental example  an experiment is performed on a real measuring part [25]. for the observed measuring part a graph of knowledge base model is developed with 4 nodes and their interrelations, as presented in figure 3. the nodes are non-terminated symbols that present knowledge entities: mp – knowledge about the measuring part and its tolerances, mf – knowledge about metrological features, gf – knowledge about geometric features for the defined metrological feature. 
non-terminated symbols as knowledge entities are connected with terminated symbols: a0 = (tolerances, feature), a1 = (metrological feature, feature, tolerances), a2 = (geometric feature, features), b = (tolerance, geometric feature, features), and a = (measuring part, geometric features, features). they are presented as ontological structures with hierarchical relations that define all elements of knowledge in this domain. for example, relations between mp and gf could be as follows: a0-a1-a2 (reasoning line for generating a measuring probe path), a0-b (reasoning line for generating point coordinates at the geometric feature), and a (reasoning line for geometric features at the measuring part). relevant investigations on the modelling based on tolerance characteristics have already been performed [4, 15, 16, 21]. our approach integrates metrological and geometrical information from the cad digital model, where the geometrical information consists of the parameters of geometric features taken from the iges file after modelling the prismatic measuring part using the software autodesk inventor professional 2011.
figure 1. the knowledge base graph for the isapi model.
figure 2. elements for building the knowledge based model for prismatic parts inspection on cmm.
therefore, the tolerances defined by the iso 1101 standard are presented by the graph that illustrates the ontological decomposition into metrological features, and then into geometric features. this way the reasoning line is specified, and, by extracting parameters, each geometric feature is uniquely defined. this geometric feature presents a basis for the analysis of the accessibility of the measuring probe, for path planning, and for generating measuring protocols for the input measuring requirements. terminated symbol a0 defines the decomposition of a prismatic measuring part into the tolerance types defined by the iso standard: length tolerances (tl), form/shape tolerances (tf), orientation tolerances (to), and location tolerances (tlc). terminated symbol a1 defines the continuation of decomposition into specific sub-types of tolerances defined in the iso standard. for example, the orientation tolerance could be decomposed into the following sub-types: parallelism (to1), perpendicularity (to2) and angularity (to3). terminated symbol a2 also defines the continuation of tolerances decomposition into the specific forms/shapes. these forms are denoted as metrological characteristics (features). from the metrological aspect, one metrological feature is composed of one or more geometric features. figure 4 shows a graph of an ontological structure (developed using protege software) that presents rules for the decomposition in a general case.
figure 3. the graph of a knowledge base and its decomposition.
figure 4. the rules for decomposition of a knowledge base graph, developed by using protege software.
for example, the tolerance tl is composed of seven metrological features: tl1-1, tl1-2, tl1-3, tl1-4, tl1-5, tl1-6 and tl1-7. further decomposition of each of them could be presented by five geometric features (gf100 - points, gf200 - lines, gf500 - planes, gf600 - cylinders and gf700 - spheres). initially, the geometric feature concept has been defined in analytical geometry. based on this definition, it was later used in engineering modelling.
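the decomposition just described can be rendered as a small traversable data structure. the toy graph below encodes the tl example (seven metrological features, each decomposed into the five geometric features); it is a hypothetical illustration of the idea, not the protege ontology itself.

```python
# a toy rendering of the decomposition graph: the tolerance type tl maps to
# its seven metrological features, each of which decomposes into the five
# geometric features named in the text. hypothetical illustration only.
GEOMETRIC_FEATURES = ["gf100", "gf200", "gf500", "gf600", "gf700"]

graph = {"tl": [f"tl1-{i}" for i in range(1, 8)]}
graph.update({mf: list(GEOMETRIC_FEATURES) for mf in graph["tl"]})

def reasoning_lines(node):
    """depth-first walk from a tolerance type down to geometric features."""
    children = graph.get(node)
    if not children:                 # a geometric feature, i.e. a leaf
        return [[node]]
    lines = []
    for child in children:
        for tail in reasoning_lines(child):
            lines.append([node] + tail)
    return lines

print(reasoning_lines("tl1-1"))      # [['tl1-1', 'gf100'], ..., ['tl1-1', 'gf700']]
```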
in manufacturing engineering, the geometric features present a basis for the definition of certain aspects of a product design, process design, inspection design, etc. [24]. in our model, the geometric feature presents the lowest level of tolerance definition that refers to the generation of measuring probe points on the measuring part. the systematisation of geometric features is presented in figure 5. the geometric features shown in figure 5 present basic features used for the development and explanation of all types of metrological features. each geometric feature is uniquely defined by a set of parameters with respect to the local coordinate system oxyz (presented in red in figure 5) and the coordinate system of the measuring part o_m x_m y_m z_m (presented in black in figure 5). these parameters could be: coordinates (x, y, z), diameter (d, d1), height (h, h1), width (a), length (b), vector of a primitive (n), parameter of the fullness of a feature (np = 1: full, np = -1: empty). vector n determines the orientation of primitives in space. the position of a primitive is defined by the coordinates x0, y0, z0. the parameter of fullness is defined along the unit vector of the x-axis of a feature: the value np = 1 implies a full feature and the value np = -1 implies an empty feature. the parameter of fullness and the vector of a feature define the direction of measuring probe access during the definition of metrological feature coordinates. the extraction of the parameters of a geometric feature cylinder from the iges file is based on the recognition of its structure. the part of the structure needed for the analysis in our model is presented in table 1. an iges file is composed of five sections in the following order: start section, global section, directory entry section, parameter data section, and terminate section, as shown in figure 6. all geometric entities are given in the directory entry section and parameter data section. the extraction of parameters is performed based on the sequence numbers of an entity (geometric feature), as in table 2.
figure 5. the geometric features and their parameters (geometric primitives).
the extraction of parameters for the other geometric features could be conducted using the same procedure that was used for a cylinder. the primary objective is the decomposition of a prismatic measuring part into metrological features (mf) that indirectly participate in the inspection planning. the secondary objective is the decomposition of tolerances into geometric features (gf), which, from a metrological aspect, give a full set of information to define a conceptual inspection plan. an application of the aforementioned concept of a knowledge base is performed on a real measuring part (a housing of the main spindle of a lathe), presented in figure 7. according to the knowledge base model, the tolerances of a part are reduced to geometric features, including all metrological features that are involved in the part tolerances. the measuring part implies the following tolerances: tolerance of length (tl), tolerance of shape/form (tf), tolerance of orientation (to) and tolerance of location (tlc). the length tolerance is composed of four length tolerances (tl1-61, tl1-62, tl1-63 and tl1-7) and five diameter tolerances (tl2-11, tl2-12, tl2-13, tl2-14 and tl2-15).
table 1. extraction of iges parameters of a cylinder (iges directory entry fields 1, 2-3, 5-7 and 73-80).
entity | entity type | parameters | sequence number
line (generatrix) | 110 | x1, y1, z1 (start point); x2, y2, z2 (end point) | 4
line (axis) | 110 | x3, y3, z3 (start point); x4, y4, z4 (end point) | 5
surface of revolution | 120 | seq. no. 1, seq. no. 2; α1, α2 (start and end angle) | 3
direction | 123 | i1, j1, k1 (unit vector) | 18
direction | 123 | i2, j2, k2 (unit vector) | 28
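once the listed sequences have been read from the directory entry and parameter data sections, the parameter calculation summarized in table 2 below reduces to a few vector operations. the sketch assumes the points have already been extracted from the iges sequences; the function name and argument layout are illustrative.

```python
# a sketch of the cylinder parameter calculation of table 2, assuming the
# generatrix start/end points, the point used for the diameter and the unit
# vector have already been read from the iges sequences; names illustrative.
import numpy as np

def cylinder_parameters(p1, p2, p6, unit_vector):
    p1, p2, p6 = (np.asarray(p, dtype=float) for p in (p1, p2, p6))
    d = np.linalg.norm(p6 - p1)          # diameter, first row of table 2
    xo, yo, zo = p1                      # position of the primitive
    h = np.linalg.norm(p2 - p1)          # height of the cylinder
    n = np.asarray(unit_vector, float)   # orientation vector of the primitive
    return d, (xo, yo, zo), h, n
```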
figure 6. iges file of a cylinder geometric feature.
table 2. calculation of parameters.
sequence number | parameter of cylinder
4, 5 | d = ((x6 - x1)² + (y6 - y1)² + (z6 - z1)²)^(1/2)
5 | xo = x1, yo = y1, zo = z1
5 | h = ((x2 - x1)² + (y2 - y1)² + (z2 - z1)²)^(1/2)
18 | n = [i1, j1, k1]
figure 7. the real measuring part.
the form tolerance is tf2-1, while the orientation tolerances are designated as to2-11, to2-12, to2-13, to2-14, to2-15 and to2-16. the location tolerance is marked as tlc1-4. figure 8 shows the ontological hierarchy (class hierarchy, object property hierarchy and data property hierarchy), i.e. the decomposition of the prismatic measuring part into features. it was based on the general tolerance types as defined by the iso standard; in this case, the tolerances are tl, tf, to and tlc. these tolerance types are further decomposed into specific tolerance types that are also defined by the standard. it is necessary to follow this procedure in order to connect specific tolerance types with the real tolerances of real parts. the next step implies further decomposition into the specific tolerance types presented in the technical drawing of the part. metrological features are composed of a few geometric features, and present the relations between tolerance types and geometric features of the part. in other words, if we know that the technical drawing of the part is the real source of metrological information, then the metrological features are introduced as a link between the tolerances and the part geometry, whose carrier is the digital model of the part (cad model). cad software allows us to input only one type of tolerance, the tl type. this shows that, in inspection planning, the cad model of the part can be used only from the geometrical aspect. that is why software for cmm loads geometrical information from the iges file. inspection planning of prismatic parts on cmms is usually performed with respect to three mutually orthogonal directions, depending on the number, position and orientation of the measuring stylus in a measuring sensor.
figure 8. decomposition of the real measuring part into metrological features.
figure 9. the direction of probe accessibility.
this assumption serves as a basis for the development of a sequence of metrological feature inspection for prismatic parts in our model. from the mentioned three directions, in general it is possible to derive six directions of measuring probe access (dpa). since a measuring part must be set up at the machine table, one direction of access is disregarded. hence, there are five possible directions: dpa 1, dpa 2, dpa 3, dpa 4 and dpa 5, as presented in figure 9. each of these directions corresponds to one of the directions of the coordinate system of the cmm: dpa 1 = -z; dpa 2 = -x; dpa 3 = -y; dpa 4 = +x; and dpa 5 = +y. at the level of a geometric feature, the dpa is determined by the parameters n and np.
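the mapping from the feature vector n to a direction of probe access can be sketched as below; the handling of the fullness parameter and the minimal-angle selection (used for the special case discussed next) are plausible interpretations of the text, not code from the isapi system.

```python
# a sketch of the dpa selection: the five access directions and the
# minimal-angle criterion; the use of the fullness parameter np is an
# assumption made for illustration.
import numpy as np

DPA = {"dpa 1": np.array([0., 0., -1.]),   # -z
       "dpa 2": np.array([-1., 0., 0.]),   # -x
       "dpa 3": np.array([0., -1., 0.]),   # -y
       "dpa 4": np.array([1., 0., 0.]),    # +x
       "dpa 5": np.array([0., 1., 0.])}    # +y

def closest_dpa(n, fullness=1):
    v = np.asarray(n, dtype=float)
    if fullness == -1:        # empty feature: assumed approach opposite to n
        v = -v
    v = v / np.linalg.norm(v)
    # minimal angle corresponds to maximal cosine with the candidate direction
    return max(DPA, key=lambda name: float(DPA[name] @ v))
```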
for the inspection of a primitive, the possible directions are dpa 1, dpa 2, dpa 3, dpa 4 and dpa 5. a special case is when the direction of a vector n does not match dpa 1, dpa 2, dpa 3, dpa 4 and/or dpa 5. then, the movement of the measuring probe should be decomposed into movements along two dpa's: the first is the closest one among dpa 1, dpa 2, dpa 3, dpa 4 and dpa 5 (according to the minimal angle criterion), for the movement immediately beneath the primitive, and the second is the n vector of the primitive. this approach automatically defines an inspection sequence for the measuring part. dpa 1, ..., dpa 5 differ from the movement of the machine axes: a dpa defines the access to a feature, while the machine axes accomplish the acquisition of measuring points in the desired directions. verification of this approach has been performed on a cmm, and the results were reported in [26].
3. conclusion
the development and application of a uniform inspection plan for cmms are important issues that depend on the metrological complexity of a part and on the skills and knowledge of an engineer (inspection planner). the presented research shows a model of the knowledge base for cmm inspection planning that aims to offer a solution for the development of an intelligent system for inspection planning (isapi). the presented knowledge base is defined by entities and the relationships among them. the result of this approach is a definition of relations between geometric features and certain types of tolerances that could be found by searching through the graph of the knowledge base model. by searching through the graph, the general tolerance types defined by the standard are linked to the geometric features in order to allow the definition of a metrological sequence and the planning of a measuring probe path. analysis of this method could be performed with respect to the step nc standard, which contains information about tolerances and geometry. however, the existing cad interfaces do not allow us to specify tolerances on the cad model of a part. therefore, it could be said that our model presents an efficient way to provide all necessary data for an automatic inspection, which is its main advantage.
acknowledgement
the work presented in this paper has been performed within the technological development project tr 35022, supported by the ministry of education, science and technological development of the republic of serbia.
references
[1] e. westkämper, "digital manufacturing in the global era", in pedro f. cunha and paul g. maropoulos (eds.): digital enterprise technology, perspectives and future challenges, springer, (2007), p. 3-14. [2] e. westkämper, (2010): factories of the future beyond 2013 - a view from research: the role of ict. http://cordis.europa.eu/fp7/ict/micro-nanosystems/docs/fofbeyond-2013-workshop/westkaemper-manufuture_en.pdf (accessed july 2013) [3] http://www.steptools.com/library/stepnc/ (accessed december 2013) [4] r. laguionie, m. rauch, j.y. hascoet, s. h. suh, "an extended manufacturing integrated system for feature-based manufacturing with step-nc", international journal of computer integrated manufacturing, 24 (2011), pp. 785–799. [5] http://ec.europa.eu/research/industrial_technologies/pdf/pppfactories-of-the-future-strategic-multiannual-roadmap-infoday_en.pdf (accessed august 2013). [6] h.a. elmaraghy, p.h. gu, "expert system for inspection planning", annals of the cirp, 36 (1987), pp. 85–89. [7] c.w. ziemian, d.j.
medeiros, “automated feature accessibility for inspection on a coordinate measuring machine”, 35 (1997), pp. 2839–2856. [8] a. limaiem, e.h. e1maraghy, “a general method for analysing the accessibility of features using concentric spherical shells”, int. j. adv. manuf. technol., 13 (1997), pp. 101-108. [9] k. takamasu, r. furutani, s. ozono, “basic concept of featurebased metrology”, measurement, 26 (1999), pp. 151–156. [10] g. moroni, w. polini, q. semeraro, “knowledge based method for touch probe configuration in an automated inspection system”, journal of materials processing technology, 76 (1998), pp. 153–160 [11] f.s.y. wong, k.b. chuah, p.k. venuvinod, “automated inspection process planning: algorithmic inspection feature recognition, and inspection case representation for cbr”, robotics and computer-integrated manufacturing, 22 (2006), pp. 56–68. [12] y.s.f. wong, b.k. chuah, k.p. venuvinod, “automated extraction of dimensional inspection features from part computer-aided design models”, international journal of production research, 43 (2005), pp. 2377–2396. [13] a. mohib, a. azab, h. elmaraghy, “feature-based hybrid inspection planning: a mathematical programming approach”, int. j. computer integrated manufacturing, 22 (2009), pp. 13-29. [14] d.p. stefano, f. bianconi, d.l. angelo, “an approach for feature semantics recognition in geometric models”, computeraided design, 36 (2004), pp. 993–1009. [15] x. zhao, m.t. pasupathy kethara, g.r. wilhelm, “modeling and representation of geometric tolerances information in integrated measurement processes”, computers in industry, 57 (2006), pp. 319-330. [16] m. w. cho, t. i. seo, “inspection planning strategy for the onmachine measurement process based on cad/cam/cai integration”, int. j. adv. manuf. technol., 19 (2002), pp. 607– 617. [17] t.r. kramer, h. huang, e. messina, f.m. proctor, h. scott, “a feature-based inspection and machining system”, computeraided design, 33 (2001), pp. 653-669. [18] s.g. zhang, a. ajmal, j. wootton, a. chisholm, “a featurebased inspection process planning system for co-ordinate measuring machine (cmm)”, journal of materials processing technology, 107 (2000), pp. 111-118. [19] w.c. myeong, l. honghee, s.y. gil, c. jinhwa, “a feature-based inspection planning system for coordinate measuring machines”, int. j. adv. manuf. technol., 26 (2005), pp. 1078–1087.   acta imeko | www.imeko.org  june 2015 | volume 4 | number 2 | 17  [20] s.m. stojadinovic, v.d. majstorovic, “towards the development of feature – based ontology for inspection planning system on cmm”, journal of machine engineering, 12 (2012), pp. 89-98. [21] majstorović, v., “inspection planning on cmm based expert system”, proceedings of 36th cirp international seminar on manufacturing systems, pp. 48 – 56, saarbrucken, germany. [22] s. stojadinović, “intelligent concept for the inspection planning of prismatic parts on measuring machines”, (phd in progress). mechanical engineering faculty, belgrade, 2014. [23] v. majstorović, p. bojanić, v. milačić, “expert system for inspection planning on cmm”, the first world congress on intelligent manufacturing process & systems, proceedings, pp. 120-126, san juan, 1995. [24] v. majstorović, p. bojanić, s. vraneš, “intelligent environment for product and process design”, proceeding of 29th cirp conference “manufacturing systems”, pp.145-148, osaka, 1997. [25] v. majstorovic, t. sibalija, m. ercevic, b. 
ercevic, “cai model for prismatic parts in digital manufacturing”, proceedings of det 2014, the 8th international conference on digital enterprise technology, pp. 214-220, stuttgart. [26] v. majstorovic, t. sibalija, b. ercevic, m. ercevic, “capp model for prismatic parts in digital manufacturing”, proceedings of digital product and process development systems ifip tc 5 international conference, new prolamat 2013, pp. 142148, dresden. on the suitability of redundant accelerometers for the implementation of smart oscillation monitoring system: preliminary assessment acta imeko issn: 2221-870x june 2023, volume 12, number 2, 1 9 acta imeko | www.imeko.org june 2023 | volume 12 | number 2 | 1 on the suitability of redundant accelerometers for the implementation of smart oscillation monitoring system: preliminary assessment giorgio de alteriis1, enzo caputo1, rosario schiano lo moriello1 1 department of industrial engineering, university of naples federico ii, piazzale tecchio 80, 80125, naples, italy section: research paper keywords: structural monitoring systems; oscillation measurements; kalman filter; zero-velocity update; accelerometers; internet of things citation: giorgio de alteriis, enzo caputo, rosario schiano lo moriello, on the suitability of redundant accelerometers for the implementation of smart oscillation monitoring system: preliminary assessment, acta imeko, vol. 12, no. 2, article 11, june 2023, identifier: imeko-acta-12 (2023)-02-11 section editor: laura fabbiano, politecnico di bari, italy received april 4, 2023; in final form april 18, 2023; published june 2023 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. funding: this work was supported by pon "research and innovation" 2014-2020 and fsc ", project: "intelligent monitoring system for the safety of urban infrastructures (insist)", cup: e64e18000110005. corresponding author: giorgio de alteriis, e-mail: giorgio.dealteriis@unina.it 1. introduction the detection of the stress state of structures is an important problem in the field of structural engineering as it provides crucial information for monitoring their health and detecting the development and propagation of damage within them. structural health monitoring (shm), which encompasses a variety of techniques and technologies, has been developed to address this issue and is widely used in both aerospace and civil engineering [1]–[7]. in recent decades, shm has extended its applications to infrastructure and civil engineering, including historic and new buildings, bridges, tunnels, industrial plants, manufacturing facilities, offshore platforms, port structures, foundations, and excavations [8]–[11]. existing bridge structures are exposed to various environmental and operational stresses during their lifetime, and the influences of these external loads can lead to the acceleration of structural damage. in addition, extreme events such as earthquakes can occur during the life of a bridge, emphasizing the need for timely detection of the structure's condition to ensure safety. while visual inspection has traditionally played an important role in detecting defects on the structure surface and assessing the structural condition, it is labor-intensive, timeconsuming, and subjective. 
to address this issue, shm techniques have been proposed and are increasingly applied to long-span bridges [12]–[14]. shm is defined as the use of sensing techniques and analysis of structural features to detect structural damage or deterioration; it is suggested that shm should be considered in the context of condition assessment and as a damage detection technique [15]. in summary, the main objectives of shm are damage detection and condition assessment.

abstract: structural health monitoring (shm) is an essential aspect to ensure the safety and longevity of civil infrastructure. in recent years, there has been a growing interest in developing shm systems based on micro-electro-mechanical systems (mems) technology. mems-based sensors are small, low-power, and cost-effective, making them ideal for large-scale deployment in structural monitoring systems. however, the use of mems-based sensors in shm systems can be challenging due to their inherent errors, such as drift, noise, and bias instability; these errors can affect the accuracy and reliability of the measured data, leading to false alarms or missed detections. therefore, several methods have been proposed to compensate for these errors and improve the performance of mems-based shm systems. for this purpose, the authors propose the combination of a redundant configuration of cost-effective mems accelerometers and a kalman filter approach to compensate mems inertial sensor errors and filter the data; the performance of the method is preliminarily assessed by means of a custom controlled oscillation generator and compared with that granted by a high-cost, high-performance mems reference system, where amplitude differences of 0.02 m/s² have been experienced. finally, a sensor node for real-time monitoring has been proposed that exploits the lorawan and nfc protocols to access the information of the structure to be monitored.

shm techniques have been applied to bridges for almost 40 years through the installation of structural health monitoring systems (shms) and have been increasingly applied to long-span bridges worldwide. in summary, shm techniques, particularly those based on vibration measurements, have great potential in the monitoring and maintenance of bridges to ensure their safety and longevity [16]–[20]. among the various sensor techniques used in shm, vibration-based measurement using accelerometers and data acquisition systems is one of the most widely studied and applied methods. in this approach, the response of the structure is measured under a given excitation, and its modal parameters, such as natural frequencies, damping ratios, mode shapes, and modal scaling factors, are determined [21]–[23]. the performance of the accelerometer is critical to the quality of the acquired data and the accuracy of the results. therefore, the proper selection of the accelerometer should consider the balance between the cost and performance of the overall shm system [24], [25]. in recent years, the use of micro-electro-mechanical systems (mems) sensors in shm systems has gained significant attention. mems sensors are tiny devices that can measure a wide range of physical quantities, such as acceleration, strain, pressure, and temperature, with high accuracy and precision [26]. they are also compact, low-power, and cost-effective, making them ideal for large-scale and long-term monitoring of structures.
the integration of mems sensors into shm systems brings several benefits, such as improved accuracy and reliability, reduced power consumption, and enhanced data acquisition and processing capabilities. mems sensors can provide real-time data on the stress state of a structure, enabling early detection of damage or deterioration and allowing for timely and costeffective maintenance and repair. they can also be used to identify potential failure modes and predict the remaining useful life of the structure. in the case of bridges, the use of mems sensors in shm systems can provide a comprehensive and continuous monitoring solution that can detect even subtle changes in the structure's condition due to external loads, traffic, or environmental factors [24], [27]. this information can be used to optimize the bridge's performance and extend its service life while reducing maintenance and repair costs and ensuring the safety of the users. overall, the use of mems sensors in shm systems represents a promising approach for the monitoring and maintenance of structures, particularly in the case of large-scale and critical infrastructure such as bridges [28]. with further advances in mems technology and data analytics, it is expected that shm systems will become even more efficient, accurate, and costeffective, providing greater benefits to both engineers and endusers. mems inertial sensors offer several advantages, but it is important to consider their drawbacks as well. their small size makes them highly sensitive to environmental changes, and random noise can make error compensation procedures more complex and limit their applicability [29], [30]. an increasing bias drift with non-linear characteristics is a significant factor to consider, as it demonstrates that while mems imus can provide remarkable accuracy at high rates, angular velocity, and acceleration data can easily degrade over longer periods. therefore, special attention must be paid to these sensors, which are typically classified based on their bias instability and random walk parameters, both of which characterize their performance and suitability for specific applications [31], [32]. for example, accelerometers with a bias instability lower than 0.01 mg are considered "marine-grade," while more cost-effective "consumer-grade" sensors have lower performance but also lower power consumption [33]–[35]. to overcome the mems limitations, several studies have been conducted on the adoption of a kalman filter (kf) to compensate the mems errors. in particular, for shm applications, high-frequency noise could be a key element to accurately estimate the structure acceleration. so, the kalman filter algorithm can be used to reduce noise in data obtained from mems accelerometers. it works by estimating the state of a system based on a series of noisy measurements. in the case of mems accelerometer data, the kalman filter can be used to estimate the true acceleration of an object by filtering out highfrequency noise [36]–[39]. in this research, a redundant prototype of low-cost accelerometers that exploits a kalman filter algorithm for filtering purposes has been proposed. then, the performance of the proposed solution is assessed by means of a comparison with a high-performance accelerometer sensor by means of a controlled oscillation generator that is specifically realized to obtain oscillations with known frequency and amplitude. finally, the authors propose a sensor node that could be adopted for shm applications. 
the paper is organized as follows: the proposed method, the implementation of the controlled oscillation generator, and the realized sensor node are described in section 2, while section 3 describes the system architecture, which includes both the hardware and the software implementation. finally, in section 4 the obtained results are presented, namely the advantages introduced by the proposed approach, the overall performance reached in oscillation measurements, and the proposed iot architecture, before drawing the conclusions in section 5.

2. proposed methods and smart monitoring architecture. a platform based on the internet of things (iot) exploiting redundant low-cost mems accelerometers for monitoring large structures such as bridges and tunnels has been evaluated, where the redundant prototype has already been presented in [40]. the research activities have focused on two main aspects: (i) the evaluation of a prototype of redundant low-cost mems accelerometers for oscillation measurements, exploiting the kalman filter for data filtering, where the performance is assessed by means of a custom testing setup for controlled oscillation measurements and a comparison of the results with a reference system, i.e., a high-performance accelerometer sensor; (ii) an iot solution for a real-time system that acquires and visualizes data, based on the adoption of the redundant prototype and of typical iot protocols.

2.1. proposed method. regarding the first point, the main goal was to use a redundant configuration of accelerometers to reduce the typical errors that affect low-cost mems accelerometer sensors, such as bias instability, in-run bias stability, and velocity random walk. in this way, thanks to the performance of the redundant prototype, it could be adopted for oscillation measurements. to this aim, the authors propose an initialization procedure based on the adoption of a zero-velocity update (zvu) filter that allows estimating the initial error values and the initial alignment. in this way, the acceleration measurements are compensated by the estimated noise values, then processed by means of a kalman filter for high-frequency noise reduction and, finally, by a fast fourier transform (fft) to evaluate the oscillation frequency and amplitude, as shown in figure 1. the performance validation of the proposed method is carried out with a suitably controlled oscillation generator based on a crank-rod system capable of emulating different oscillation frequencies. finally, a marine-grade mems sensor has been adopted as a reference system to assess and compare the performance reached by the proposed solution.

2.1.1. zero-velocity update filter and initial alignment. the kalman filter is a recursive algorithm that can be used to estimate the state of a dynamic system based on a series of noisy measurements. it works by predicting the state of the system at the next time step based on a model of the system dynamics and then updating this prediction based on the actual measurements. the algorithm includes two key steps: prediction and update. in the prediction step, the state estimate and the error covariance matrix are predicted based on the previous state estimate, the control input, and the system dynamics. in the update step, these predictions are updated based on the actual measurements and the measurement noise covariance matrix [32]. the zvu filter is based on an error-state kalman filter (eskf).
this approach is based on the assumption that the system output is in a standing condition, i.e., the velocity is equal to zero. the zvu filter is realized according to the following steps: 1) the position vector is provided by a gnss source; 2) the velocity vector is set equal to zero; 3) the roll and pitch angles are obtained by means of a coarse leveling procedure according to (1) and (2):

$\theta = \arctan\left(\dfrac{-f^{b}_{ib,x}}{\sqrt{\left(f^{b}_{ib,y}\right)^{2} + \left(f^{b}_{ib,z}\right)^{2}}}\right)$ (1)

$\phi = \operatorname{arctan2}\left(-f^{b}_{ib,y},\, -f^{b}_{ib,z}\right)$ , (2)

where $\theta$ is the pitch angle, $\phi$ is the roll angle, and $f^{b}_{ib}$ represents the raw acceleration measurements along the x, y, and z axes referred to the body reference frame. the predict/correct phases are realized according to the eskf implementation [40]; in this way, the noise terms are estimated during the state vector correction and exploited in the successive prediction stages. the residual error values are then adopted to correct the acceleration measurements in order to obtain better performance in frequency and amplitude oscillation measurements. for the sake of clarity, in this specific application the gnss position is evaluated only once; in fact, the accelerometer sensors are mounted on a structure where the gnss position variations are not significant or not observable by most gnss modules. regardless, knowledge of the position is necessary to accurately compensate for the gravity vector.

2.1.2. kalman filter approach. once the initial noise parameters have been obtained thanks to the application of the zvu, a further kf is applied to the acquired acceleration samples in order to filter out high-frequency noise. to suitably reduce the computational burden, the update matrix is reduced to the identity matrix (i.e., in the prediction stage, the so-called a-priori estimate $\hat{x}^{-}$ is equal to the corrected value obtained in the previous iteration). the correction stage is obtained according to the following equations:

$K = P / (P + R)$ (3)

$\hat{x}^{+} = \hat{x}^{-} + K\,(x - \hat{x}^{-})$ (4)

$P = (I - K)\,P + (\hat{x}^{+} - \hat{x}^{-})\,Q$ , (5)

where $K$ is the kalman gain, $P$ is the error covariance matrix, $R$ is the measurement noise covariance matrix, $x$ and $\hat{x}$ are the measured and estimated state vectors, respectively (i.e., the acceleration measurements), $I$ is the identity matrix, and $Q$ is the system noise covariance matrix.

figure 1. proposed method based on the adoption of a zero-velocity update filter.
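a worked illustration of (1)-(5) as a minimal python sketch: the coarse-leveling formulas are applied to a static accelerometer triad, and the simplified scalar correction stage is run over a noisy sample stream. the covariance values r and q are illustrative placeholders, not those used by the authors; equation (5) is implemented literally as written above, with a small positivity guard added.

```python
import math
import random

def coarse_leveling(fx, fy, fz):
    # equations (1)-(2): pitch and roll from the static specific force
    pitch = math.atan2(-fx, math.sqrt(fy ** 2 + fz ** 2))
    roll = math.atan2(-fy, -fz)
    return pitch, roll

def kf_smooth(samples, r=0.05, q=1e-4):
    # simplified scalar kalman correction, equations (3)-(5); the update
    # matrix is the identity, so the a-priori value is the previous estimate
    x_hat, p, out = samples[0], 1.0, []
    for x in samples:
        x_prior = x_hat                       # identity prediction stage
        k = p / (p + r)                       # (3) kalman gain
        x_hat = x_prior + k * (x - x_prior)   # (4) corrected estimate
        # (5) covariance update, guarded to keep p strictly positive
        p = max((1.0 - k) * p + (x_hat - x_prior) * q, 1e-9)
        out.append(x_hat)
    return out

# static triad (m/s^2): gravity mainly on z, small leveling errors on x, y
pitch, roll = coarse_leveling(0.17, -0.08, -9.80)
print(f"pitch = {math.degrees(pitch):.2f} deg, roll = {math.degrees(roll):.2f} deg")

# noisy 1 hz oscillation sampled at 125 hz, then high-frequency noise removal
fs, f0 = 125, 1.0
raw = [0.2 * math.sin(2 * math.pi * f0 * n / fs) + random.gauss(0, 0.05)
       for n in range(4 * fs)]
smooth = kf_smooth(raw)
```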
2.1.3. controlled oscillation system and measurement setup. the measurement setup is composed of a controlled stepper motor that rotates a cylinder connected to the beam by means of a linkage, in such a way as to convert the rotary motion of the stepper motor into linear motion. the sensor modules are placed on a plate rigidly constrained to the beam; in this way, a harmonic motion is realized. the nominal acceleration values $a(t)$ can be derived from the maximum imposed displacement and the frequency by the equation:

$a(t) = -\omega^{2} A \sin(\omega t)$ , (6)

where $A$ is the vertical displacement amplitude and $\omega = 2\pi f$ is the angular frequency. the geometrical dimensions of the realized system are reported in table 1, which also highlights the maximum displacement, evaluated as the difference between the highest and lowest points of the plate (along the z-axis). figure 2 shows the controlled oscillation generator and the measurement setup. in particular, the mechanical parts are realized by a 3d printing process [41], [42] and are highlighted in black, while the stepper motor and the motor driver are highlighted in orange. the measurement setup is mainly composed of the sensor modules, i.e., the proposed redundant low-cost accelerometers (red) as well as the stim318 by sensonor, the marine-grade accelerometer (green) exploited as a comparison reference. both modules are connected to a microcontroller (placed under the plate), specifically the stm32f446re from stmicroelectronics, that acquires data via the i2c (inter-integrated circuit) and uart protocols, respectively. to this aim, an interface between the rs422 and uart protocols was realized using dual differential drivers and receivers (sn75c1167 from texas instruments). to synchronize the two systems, the microcontroller provides an external trigger to sample the marine-grade accelerometer data. the stim318 acquires data at 2 khz and is triggered at 125 hz, resulting in a mean delay of 250 µs between the request and the sampling of the measured quantities. finally, the acquired data are sent to a personal computer for further processing via a bluetooth module connected to the microcontroller through the uart interface.

2.2. proposed smart monitoring architecture. as for the second point, the prototype device is completed with a wireless communication system based on the lorawan protocol and an nfc module, for remote and in-situ monitoring respectively, as shown in figure 3. once the operations for determining the quantities of interest have been performed locally, the microcontroller sends the result via the lorawan protocol to a gateway on which a node-red flow is implemented that forwards the measured data via mqtt to a thingsboard-based cloud dashboard. moreover, the data are also accessible through the nfc protocol. in particular, using a mobile phone as nfc reader, the identity of the node is detected and exploited to access the iot platform and display the measured data on the mobile application. the proposed architecture aims to transmit only the maximum amplitude value of the signal, which is evaluated by the microcontroller as the peak in the frequency domain. the microcontroller then sends both the frequency and the amplitude values, effectively reducing the number of samples transmitted and allowing for real-time monitoring. this ensures that only relevant information is transmitted, resulting in a more efficient system protocol with fewer packets for forwarding the measured data via the lora and nfc protocols, so as to speed up nfc and dashboard communications.

3. system architecture. the hardware components used in the performance assessment and in the development of the wireless sensor node for real-time monitoring are reported in table 2 and table 3, respectively, while the corresponding operation flow-charts are shown in figure 4a and figure 4b. for the sake of clarity, the redundant prototype is composed of six inemo sensors from stmicroelectronics in a cubic configuration.

figure 2. realized oscillating system and measurement setup for performance assessment; thanks to the stepper motor, oscillations with known frequency and amplitude can be applied to the low-cost redundant accelerometers.

table 1. geometrical dimensions of the controlled oscillating system (cm):
beam: 22.5 × 1 × 1 (l × w × h)
linkage: 9.5 × 0.2 × 0.5 (l × w × h)
cylinder radius: 5.5
max displacement: 0.5

figure 3. sensor node and proposed smart monitoring architecture.
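to make the on-node processing of section 2.2 concrete, here is a minimal sketch; the payload layout and the function names are illustrative assumptions, not the actual firmware api. the dominant spectral peak is located and only the frequency/amplitude pair is packed for transmission.

```python
import cmath
import math
import struct

def dft_peak(samples, fs):
    # locate the dominant spectral component; a naive dft is used here for
    # clarity, while the firmware would run an fft routine
    n = len(samples)
    best_k, best_amp = 1, 0.0
    for k in range(1, n // 2):
        s = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                for i, x in enumerate(samples))
        amp = 2 * abs(s) / n              # single-sided amplitude estimate
        if amp > best_amp:
            best_k, best_amp = k, amp
    return best_k * fs / n, best_amp      # frequency (hz), amplitude (m/s^2)

def pack_payload(freq, amp):
    # hypothetical 8-byte payload: two little-endian float32 values
    return struct.pack("<ff", freq, amp)

fs = 125  # hz, the trigger rate used for the synchronized acquisitions
sig = [0.197 * math.sin(2 * math.pi * 1.0 * i / fs) for i in range(4 * fs)]
freq, amp = dft_peak(sig, fs)
print(freq, amp, pack_payload(freq, amp).hex())
```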
figure 4. data acquisition flow-chart for: a) performance evaluation; b) real-time monitoring.

table 2. performance evaluation equipment (hardware: part number, manufacturer):
redundant accelerometers: lsm6dsm (six inemos), stmicroelectronics
marine-grade accelerometer: stim318, sensonor
microcontroller: nucleo-f446re, stmicroelectronics
motor: stepsyn, sanyo denki
motor driver board: x-nucleo-ihm04a1, stmicroelectronics
bluetooth module: rn41, roving
rs422-uart: sn75c1167, texas instruments

table 3. sensor node realization hardware (hardware: part number, manufacturer):
microcontroller: nucleo-f446re, stmicroelectronics
sensor board: x-nucleo-iks02a1
nfc module: x-nucleo-nfc04a1
lora module: i-nucleo-lrwan
spi micro sd-card: 410-380, digilent inc.

as stated above, the communication between the microcontroller and the pc is realized by means of a bluetooth protocol (figure 4a); this allows controlling the start of the acquisition and the data collection. once the start command is sent, the microcontroller generates a trigger signal for the sensor modules and starts to acquire data from both. during the tests it is necessary to ensure accurate measurement synchronization; therefore, the data are processed offline to avoid introducing delays from each data acquisition in both systems. the software architecture is designed with a focus on synchronizing both systems to obtain comparable measurements between the proposed solution and the reference system. this approach ensures that the results are reliable and can be compared accurately. as for the wireless sensor node, the software architecture is realized to process the acceleration measurements in real time, as shown in figure 4b: the microcontroller acquires the data and elaborates them according to the proposed method, with the aim of evaluating the maximum frequency peak value and of sending the oscillation frequency and the associated amplitude values by means of the lorawan and nfc protocols. the proposed solution is based on the zvu filter for initial alignment and noise term evaluation; enhancing the parameter estimation requires the gnss position information. to this aim, the realized iot architecture includes the setting of these parameters, which can be stored on the sd-card; this procedure can be carried out when the sensor node is mounted on the structure, which will not change its position. finally, the zvu and initial alignment, which require 60 s, are activated in two conditions: 1) each time the device is turned on, to correct the random bias values; 2) whenever the internal temperature of the accelerometer changes by ± 3 °c, in such a way as to correct the bias value changes due to temperature. once the error values are estimated, the microcontroller removes the high-frequency noise from the raw acceleration measurements by adopting the kalman filter and then processes them by means of an fft algorithm to evaluate the maximum frequency peak value and the associated amplitude values, which are sent through the iot protocols, i.e., lorawan and nfc.

4. results. the proposed solution is evaluated according to the measurement setup realized and described in section 2.1. to this aim, the oscillation measurements, in terms of oscillation amplitude, of both systems have been compared at different controlled frequencies from 1 hz to 5 hz.
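as a quick cross-check of the theoretical values reported below in table 4, equation (6) with the 0.5 cm maximum displacement of table 1 taken as the amplitude $A$ reproduces the expected peak accelerations; a back-of-the-envelope sketch assuming $A$ is exactly 0.005 m:

```python
import math

A = 0.005  # m, the maximum displacement of table 1 taken as amplitude
for f in range(1, 6):
    w = 2 * math.pi * f
    # the peak of a(t) = -w^2 * A * sin(w t) is w^2 * A
    print(f"{f} hz -> {w * w * A:.3f} m/s^2")
# prints 0.197, 0.790, 1.777, 3.158, 4.935, which matches, up to the
# last digit's rounding, the theoretical acceleration row of table 4
# (0.197, 0.789, 1.776, 3.158, 4.934)
```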
the performance of the proposed method has been assessed by comparing the measurement results provided by the realized system with those guaranteed by a marine-grade mems sensor, i.e., the stim318. the raw acceleration measurements are acquired by the concentrator and sent to a pc that collects and processes the data by means of a matlab code, according to section 2.2. the performance reached by the proposed method highlights the benefits introduced by exploiting the zvu filter and the initial alignment for measurement calibration, i.e., the compensation of the accelerometer bias drift values. in fact, as shown in figure 5, the signal spectra of both systems have been compared: the amplitude difference associated with the component at 1 hz between the proposed method ($A_{\mathrm{est}}$) and the reference ($A_{\mathrm{ref}}$) is equal to 0.021 m/s², while the amplitude difference between the best single sensor, i.e., one cube face ($A_{\mathrm{one}}$), and the reference system is equal to 0.137 m/s². for the sake of clarity, only the comparison at an oscillation frequency of 1 hz is shown, but similar results have been experienced; they are reported in table 4, together with the theoretical amplitude values evaluated according to (6). the benefit introduced by the proposed method is highlighted in table 5, where the differences between the reference system and the estimated values ($\Delta A_{\mathrm{est}}$) and between the reference system and the one-sensor values ($\Delta A_{\mathrm{one}}$) are reported. these results are also shown in figure 6, where the differences between the reference system and the uncompensated measures rise as the frequency increases, due to the typical low-cost accelerometer errors, while by adopting the proposed method the differences remain constant. in fact, by exploiting the zvu calibration, the bias drift and bias stability are compensated, as shown by the stability of the system as the frequency increases. finally, to assess the proposed solution, figure 7 shows a second-order polynomial fitting used to evaluate the quadratic trend of the acceleration amplitude as the frequency increases. the comparison highlights that the reference system and the estimated acceleration amplitude values present a trend comparable with the ideal one, while the uncompensated values show a significant drift over frequency.

figure 5. acceleration amplitude comparison at 1 hz: reference system (red), proposed method (blue) and one-sensor measures (orange).

table 4. acceleration amplitude results from 1 hz to 5 hz; amplitude in m/s² at oscillation frequencies of 1, 2, 3, 4, 5 hz:
reference system: 0.173, 0.767, 1.754, 3.123, 4.914
proposed method: 0.152, 0.744, 1.733, 3.101, 4.891
one sensor: 0.31, 1.243, 2.254, 4.63, 6.823
theoretical acceleration: 0.197, 0.789, 1.776, 3.158, 4.934

table 5. amplitude differences between the reference system and (i) the proposed method; (ii) one accelerometer sensor; in m/s² at 1, 2, 3, 4, 5 hz:
$\Delta A_{\mathrm{est}}$: 0.021, 0.023, 0.021, 0.022, 0.022
$\Delta A_{\mathrm{one}}$: 0.137, 0.476, 0.5, 1.507, 1.909

once the performance has been verified, an example of the sensor node operation presented in section 2.2 is shown in figure 8. the monitoring dashboard can be accessed from both smartphones and pcs, and information regarding the amplitude of the oscillation in g and the frequency is shown. for the sake of clarity, the full dashboard also shows all the sensor axes.

5. conclusions.
the use of mems-based sensors in shm systems has been shown to be an effective and cost-efficient approach for large-scale structural monitoring. however, the inherent errors of mems-based sensors can affect the accuracy and reliability of shm systems. to overcome these challenges, this research evaluates the use of a redundant configuration of low-cost mems sensors and proposes (i) an initialization procedure based on the adoption of a zero-velocity update filter to compensate the bias components, i.e., in-run stability, bias instability, and thermal effects; (ii) an initial alignment procedure; and (iii) a kalman filter for high-frequency noise reduction, which has proven to be effective in improving the performance of mems-based shm systems while keeping advantages in terms of computational load. in particular, the system performance is evaluated by means of a custom oscillating platform that provides oscillations at constant displacement and variable frequency. in this way, the proposed method is tested, and a comparison with a reference system (a marine-grade accelerometer), with the theoretical values, and with the one-accelerometer measures has been carried out. the proposed method shows a remarkable agreement with the reference system, which respects the theoretical trend, and the performance improvements over the one-accelerometer measurements can be deduced from the results obtained; in fact, amplitude differences of 0.02 m/s² and 0.137 m/s² have been experienced between the reference system and, respectively, the proposed solution and one sensor. moreover, it is observed that drift and bias stability errors are not present in the system as the frequency increases. finally, a smart monitoring architecture has been proposed, based on the adoption of a sensor node capable of real-time monitoring of the structure by exploiting the iot communication protocols, i.e., lorawan and nfc. the use of these systems, combined with the proposed solution, can provide accurate and reliable data, leading to timely maintenance and cost savings. as future work, additional tests will be executed by means of different beam sizes (the acceleration amplitude can be changed by modifying the beam length) and a vibrodyne, to suitably assess the combined effects of amplitude and frequency by evaluating the minimum and maximum operating values.

figure 6. differences, as the frequency changes from 1 hz to 5 hz, between the reference system and (i) the proposed method (blue); (ii) the one-sensor measures (orange).
figure 7. polynomial fitting comparison: ideal trend (black), reference system (red), proposed method (blue) and one-sensor measure (orange).
figure 8. monitoring dashboard example of operation.

references [1] l. sun, z. shang, y. xia, s. bhowmick, s. nagarajaiah, review of bridge structural health monitoring aided by big data and artificial intelligence: from condition assessment to damage detection, journal of structural engineering, vol. 146, no. 5, 2020, p. 4020073. doi: 10.1061/(asce)st.1943-541x.0002535 [2] j. seo, j. w. hu, j. lee, summary review of structural health monitoring applications for highway bridges, journal of performance of constructed facilities, vol. 30, no. 4, 2016, p. 4015072. doi: 10.1061/(asce)cf.1943-5509.0000824 [3] f. lamonaca, c. scuro, d. grimaldi, r. s. olivito, p. f. sciammarella, d. l.
carnì, a layered iot-based architecture for a distributed structural health monitoring system system, acta imeko, vol. 8, no. 2, 2019, pp. 45–52. doi: 10.21014/acta_imeko.v8i2.640 [4] n. giulietti, p. chiariotti, g. cosoli, g. giacometti, l. violini, a. mobili, g. pandarese, f. tittarelli, g. m. revel, continuous monitoring of the health status of cement-based structures: electrical impedance measurements and remote monitoring solutions, acta imeko, vol. 10, no. 4, 2021, pp. 132–139. doi: 10.21014/acta_imeko.v10i4.1140 [5] a. cigada, l. corradi dell'acqua, b. mörlin visconti castiglione, m. scaccabarozzi, m. vanali, e. zappa, structural health monitoring of an historical building: the main spire of the duomo di milano. international journal of architectural heritage, vol. 11, no. 4, pp. 501–518, 2017. doi: 10.1080/15583058.2016.1263691 [6] m. carratù, v. gallo, v. paciello, a. pietrosanto, a deep learning approach for the development of an early earthquake warning system, ieee int. instrumentation and measurement technology conf. (i2mtc), ottawa, on, canada, 16-19 may 2022, pp. 1–6. doi: 10.1109/i2mtc48687.2022.9806627 [7] l. angrisani, f. bonavolontà, g. d'alessandro, m. d'arco, inductive power transmission for wireless sensor networks supply, ieee workshop on environmental, energy, and structural monitoring systems, naples, italy, 17-18 september 2014, pp. 1-5. doi: 10.1109/eesms.2014.6923289 [8] c. scuro, p. f. sciammarella, f. lamonaca, r. s. olivito, d. l. carni, iot for structural health monitoring, ieee instrum meas mag, vol. 21, no. 6, 2018, pp. 4–14. doi: 10.1109/mim.2018.8573586 [9] a. astarita, f. tucci, a. t. silvestri, m. perrella, l. boccarusso, p. carlone, dissimilar friction stir lap welding of aa2198 and aa7075 sheets: forces, microstructure and mechanical properties, int. journal of advanced manufacturing technology, 117, 2021, pp 1045–1059. doi: 10.1007/s00170-021-07816-7. [10] f. tucci, p. carlone, a. t. silvestri, h. parmar, a. astarita, dissimilar friction stir lap welding of aa2198-aa6082: process analysis and joint characterization, cirp j manuf sci technol, vol. 35, 2021, pp. 753–764. doi: 10.1016/j.cirpj.2021.09.007. [11] p. jiao, k.-j. i. egbe, y. xie, a. matin nazar, a. h. alavi, piezoelectric sensing techniques in structural health monitoring: a state-of-the-art review, sensors, vol. 20, no. 13, 2020, p. 3730. doi: 10.3390/s20133730 [12] i. roselli, a. tatì, v. fioriti, i. bellagamba, m. mongelli, r. romano, g. de canio, m. barbera, m. cianetti, integrated approach to structural diagnosis by non-destructive techniques: the case of the temple of minerva medica, acta imeko, vol. 7, no. 3, 2018, pp. 13–19. doi: 10.21014/acta_imeko.v7i3.558 [13] f. zonzini, c. aguzzi, l. gigli, l. sciullo, n. testoni, l. de marchi, m. di felice, t. s. cinotti, c. mennuti, a. marzani, structural health monitoring and prognostic of industrial plants and civil structures: a sensor to cloud architecture, ieee instrum meas mag, vol. 23, no. 9, 2020, pp. 21–27. doi: 10.1109/mim.2020.9289069. [14] d. seneviratne, l. ciani, m. catelani, d. galar, smart maintenance and inspection of linear assets: an industry 4.0 approach, acta imeko, vol. 7, no. 1, 2018, pp. 50-56. doi: 10.21014/acta_imeko.v7i1.519 [15] c. r. farrar, k. worden, an introduction to structural health monitoring, philosophical transactions of the royal society a: mathematical, physical and engineering sciences, vol. 365, no. 1851, 2007, pp. 303–315. doi: 10.1098/rsta.2006.1928 [16] p. rizzo, a. 
enshaeian, challenges in bridge health monitoring: a review, sensors, vol. 21, no. 13, 2021, p. 4336. doi: 10.3390/s21134336 [17] p. j. vardanega, g. t. webb, p. r. a. fidler, f. huseynov, k. kariyawasam, c. r. middleton, bridge monitoring, in: innovative bridge design handbook, elsevier, 2022, pp. 893–932. doi: 10.1016/b978-0-12-823550-8.00023-8 [18] l. ciani, a. bartolini, g. guidi, g. patrizi, a hybrid tree sensor network for a condition monitoring system to optimise maintenance policy, acta imeko, vol. 9, no. 1, 2020, pp. 3–9. doi: 10.21014/acta_imeko.v9i1.732 [19] m. brambilla, p. chiariotti, f. di carlo, p. isabella, a. meda, p. darò, a. cigada, metrological evaluation of new industrial shm systems based on mems and microcontrollers, european workshop on structural health monitoring: ewshm 2022, volume 1, springer, 2022, pp. 774–783. doi: 10.1007/978-3-031-07254-3_78 [20] a. liccardo, f. bonavolontà, m. balato, c. petrarca, lora-based smart sensor for pd detection in underground electrical substations, ieee trans instrum meas, vol. 71, 2022, pp. 1–13. doi: 10.1109/tim.2022.3200427 [21] e. reynders, system identification methods for (operational) modal analysis: review and comparison, archives of computational methods in engineering, vol. 19, no. 1, 2012, pp. 51–124. doi: 10.1007/s11831-012-9069-x [22] a. lavatelli, e. zappa, uncertainty in vision based modal analysis: probabilistic studies and experimental validation, acta imeko, vol. 5, no. 4, 2016, pp. 37–48. doi: 10.21014/acta_imeko.v5i4.426 [23] g. cerro, l. ferrigno, m. laracca, f. milano, p. carbone, a. comuniello, a. de angelis, a. moschitta, an accurate localization system for nondestructive testing based on magnetic measurements in quasi-planar domain, measurement, vol. 139, 2019, pp. 467–474. doi: 10.1016/j.measurement.2019.03.022 [24] r. r. ribeiro, r. de m. lameiras, evaluation of low-cost mems accelerometers for shm: frequency and damping identification of civil structures, latin american journal of solids and structures, vol. 16, 2019. doi: 10.1590/1679-78255308 [25] f. bonavolonta, l. p. di noia, a. liccardo, s. tessitore, d. lauria, a pso-mma method for the parameters estimation of interarea oscillations in electrical grids, ieee trans instrum meas, vol. 69, no. 11, 2020. doi: 10.1109/tim.2020.2998909 [26] r. iervolino, f. bonavolontà, a. cavallari, a wearable device for sport performance analysis and monitoring, in 2017 ieee international workshop on measurement and networking, m and n 2017, naples, italy, 27-29 september 2017, pp. 1-6. doi: 10.1109/iwmn.2017.8078375 [27] m. varanis, a. silva, a. mereles, r. 
pederiva, mems accelerometers for mechanical vibrations analysis: a comprehensive review with applications, journal of the brazilian society of mechanical sciences and engineering, vol. 40, 2018, pp. 1–18. doi: 10.1007/s40430-018-1445-5 [28] c. bedon, e. bergamo, m. izzi, s. noè, prototyping and validation of mems accelerometers for structural health monitoring: the case study of the pietratagliata cable-stayed bridge, journal of sensor and actuator networks, vol. 7, no. 3, 2018, p. 30. doi: 10.3390/jsan7030030 [29] s. han, z. meng, o. omisore, t. akinyemi, y. yan, random error reduction algorithms for mems inertial sensor accuracy improvement: a review, micromachines, vol. 11, no. 11, 2020, 36 pp. doi: 10.3390/mi11111021 [30] h. huang, h. zhang, l. jiang, an optimal fusion method of multiple inertial measurement units based on measurement noise variance estimation, ieee sens j, vol. 23, no. 3, 2023, pp. 2693–2706. doi: 10.1109/jsen.2022.3229475 [31] s. ahvenjärvi, the human element and autonomous ships, transnav, the international journal on marine navigation and safety of sea transportation, vol. 10, no. 3, 2017, pp. 517-521. doi: 10.12716/1001.10.03.18 [32] p. d. groves, principles of gnss, inertial, and multisensor integrated navigation systems, 2nd edition [book review], ieee aerospace and electronic systems magazine, vol. 30, no. 2, 2015, pp. 26-27. doi: 10.1109/maes.2014.14110 [33] g. t. schmidt, ins/gps technology trends, technology (singap world sci), vol. 116, no. 2010, 2011. [34] g. de alteriis, v. bottino, c. conte, g. rufino, r. s. lo moriello, accurate attitude inizialization procedure based on mems imu and magnetometer integration, 2021 ieee 8th int. workshop on metrology for aerospace (metroaerospace), naples, italy, 23-25 june 2021, pp. 1–6. doi: 10.1109/metroaerospace51421.2021.9511679 [35] a. liccardo, s. tessitore, f. bonavolontà, s. cristiano, l. p. d. noia, g. m. giannuzzi, c. pisani, detection and analysis of interarea oscillations through a dynamic-order dmd approach, ieee trans instrum meas, vol. 71, 2022, pp. 1–14. doi: 10.1109/tim.2022.3186371 [36] r. alfian, a. ma'arif, s. sunardi, noise reduction in the accelerometer and gyroscope sensor with the kalman filter algorithm, journal of robotics and control (jrc), vol. 2, nov. 2020, pp. 180–189. doi: 10.18196/jrc.2375 [37] d.
capriglione, m. carratù, m. catelani, l. ciani, g. patrizi, a. pietrosanto, p. sommella, experimental analysis of filtering algorithms for imu-based applications under vibrations, ieee trans instrum meas, vol. 70, 2021, pp. 1-10. doi: 10.1109/tim.2020.3044339 [38] f. bonavolontà, m. d'arco, a. liccardo, o. tamburis, remote laboratory design and implementation as a measurement and automation experiential learning opportunity, ieee instrum meas mag, vol. 22, no. 6, 2019, pp. 62–67. doi: 10.1109/mim.2019.8917906 [39] d. capriglione, g. cerro, l. ferrigno, g. miele, analysis and implementation of a wavelet based spectrum sensing method for low snr scenarios, 2016 ieee 17th int. symposium on a world of wireless, mobile and multimedia networks (wowmom), ieee, coimbra, portugal, 21-24 june 2016, pp. 1–6. doi: 10.1109/wowmom.2016.7523585 [40] g. de alteriis, a. t. silvestri, v. bottino, e. caputo, f. bonavolontà, r. s. lo moriello, a. squillace, d. accardo, innovative fusion strategy for mems redundant-imu exploiting custom 3d components, 2022 ieee 9th int. workshop on metrology for aerospace (metroaerospace), pisa, italy, 27-29 june 2022, pp. 644–648. doi: 10.1109/metroaerospace54187.2022.9856222 [41] a. t. silvestri, i. papa, f. rubino, a. squillace, on the critical technological issues of cff: enhancing the bearing strength, materials and manufacturing processes, vol. 37, no. 2, 2022, pp. 123-135. doi: 10.1080/10426914.2021.1954195 [42] a. t. silvestri, i. papa, a. squillace, influence of fibre fill pattern and stacking sequence on open-hole tensile behaviour in additive manufactured fibre-reinforced composites, materials, vol. 16, no. 6, 2023, 13 pp. doi: 10.3390/ma16062411

the masonry of the terme di elagabalo at the palatine hill (rome). survey, analysis and quantification of a roman empire architecture. acta imeko issn: 2221-870x, march 2022, volume 11, number 1, pp. 1-9. emanuele brienza1, lorenzo fornaciari2. 1 università degli studi di enna "kore", cittadella universitaria 94100 enna (en) 2 università degli studi di salerno, via giovanni paolo ii, 132, 84084 fisciano (sa) section: research paper keywords: rome, archaeology, ancient architecture; 3d survey; gis citation: lorenzo fornaciari, emanuele brienza, the masonry of the terme di elagabalo at the palatine hill (rome). survey, analysis and quantification of a roman empire architecture, acta imeko, vol. 11, no.
1, article 11, march 2022, identifier: imeko-acta-11 (2022)-01-11 section editor: fabio santaniello, university of trento, italy received march 1, 2021; in final form march 24, 2022; published march 2022 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: lorenzo fornaciari, e-mail: lorenzo.fornaciari88@gmail.com

abstract: the ne slopes of the palatine and the colosseum valley area have been the place of a long archaeological research; here the continuous urban development produced an overlap of architectural complexes distributed over time. the huge amount of archaeological documentation produced during the research is managed by an intra-site gis. for the analysis of ancient walls we have introduced the use of image-based-modelling photogrammetry in order to create a very detailed 3d documentation, marked by an increasing and progressive high-level autopsy, linked to a dbms dedicated to ancient structural features. in this procedure we also decided to compare two different approaches in order to check the results of both: one based on taking brick measures directly by hand, the other taking the same measures on photogrammetric elaborations by gis. through this methodology, counting the measures and dimensional aspects of the wall facade features, we can evaluate specific aspects of the ancient construction yards for each period; we can also refine the chronological sequences of the architectures and verify the contextual relationships of the surrounding buildings in order to formulate wide-ranging reconstructive hypotheses.

1. introduction. the area of the northeast slopes of the palatine hill that faces the colosseum valley has been the subject of a long archaeological research since 1986, carried out by the department of scienze dell'antichità of sapienza, university of rome (figure 1). during more than 30 years of excavations, the material remains of major building and monumental interventions have been unearthed, testifying to an environmental and topographical continuum where the development of diversified urban systems has involved a complex physical overlap of structures and architectural complexes distributed over time [1]-[4]. today the floor level and appearance of the zone reflect the aspect reached during the 4th century ce, with some main differences: first of all, the first big excavations made during the 19th century included the destruction of many medieval remains; secondly, the building during the fascist regime of two new large streets, via dell'impero and via dei trionfi, caused the dismantling of the velia hill and the destruction of several roman monuments, like the meta sudans fountain and the basement of the famous colossus. the target of this investigation was to reconstruct the ancient topography and to restore the connection between the colosseum valley and the palatine hill; the research had to take into consideration the previous literature and studies about the zone and the huge amount of data produced during the annual stratigraphic excavations: for this purpose, we had to use it tools for data collection and management, articulated into distinct chronologies and graphic outputs able to reproduce the ancient spaces and their transformations over time, on a small and large representation scale. finally, in several zones of the investigated area previous digs carried out from the 17th to the 19th centuries compromised the stratigraphic earth basins, making it difficult to read the archaeological record: here, in order to obtain other keys of interpretation, we decided to implement the stratigraphic and detailed analysis of ancient architectures, according to the most current research criteria. in section 2 we will discuss the archaeological framework; in section 3 we will give a brief description of data management methods, techniques and procedures; in section 4 we will illustrate our particular approach for the structural analysis of ancient walls; in section 5 we will give some quantitative data about the constructive features of the ancient walls. finally, in the concluding section, we will account for the preliminary results of our research.

2. the archaeological framework. an area of about 4500 square meters has been excavated, reaching a depth, where possible, of 8/9 meters, chasing the millenary history of the city, from the first settlement to the modern age, including all kinds of evidence like medieval spoliations and modern or contemporary interventions. in order to reconstruct the original geomorphology, we have also conducted geophysical investigations with tools such as ground-penetrating radar and resistivity [5]-[7]. to give an extreme synthesis of the main results: starting from the remains of iron age huts found along the slope, we move on to an early urban planning witnessed by the presence of two sanctuaries dating to the roman kingdom (8th-7th century bce), located along both sides of the ancient road leading to the roman forum: one of them can be identified with the curiae veteres and has been frequented until the affirmation of christianity. the installation of a residential district along the road is documented since the archaic period: subsequently this area was periodically rebuilt in the following centuries, until the augustan age. in this period, at the meeting point of five of the new 14 city zones planned by the emperor, the first meta sudans fountain was built, in front of the curiae veteres, which were also reconstructed in monumental shape during the years of the emperor claudius. the real break here happened in conjunction with the great fire of nero: after this disaster nero decided to carry out in this area a deep urban transformation that would end with the realization of his majestic palace, the domus aurea. in the years between 64 and 68 ce a total reorganization of the road system was made, with a regular and orthogonal shape, according to the guidelines dictated by the palace project. in the colosseum valley the new architectural complex was characterized by columned porticoes around an artificial pond over which the flavian dynasty would later build the colosseum; the palatine hill slopes were regularized by artificial terraces on arcades, while the new climbing way to the forum was flanked by arched porticoes. the urban planning of the flavian emperors, focused on restoring a public dimension to the urban spaces occupied by the domus aurea, can be emblematically summarized, together with the building of the colosseum, in the reconstruction of the curiae and of the meta sudans fountain, both burned in the fire.
the area will be modified again by hadrian with the construction of the venus and rome temple and of a long building flanking the porticoes on the other side of the street going to the forum. after another catastrophic fire, at the end of the 2nd century ce the area was rebuilt again by the severian dynasty: in close connection with the monumental project of the new palace of the emperor, the whole front of the northeast palatine substructures was heavily transformed, while the constructions at its feet were completely dismantled and replaced by a new courtyard building, commonly called terme di elagabalo (figure 2). inside this monument, in the 4th century ce, a large banquet hall was obtained, with gardens and fountains and a small bath in the backyard. finally, with the construction of constantine's arch and the restorations at the venus and rome temple, the ancient urban history of the area was completed [8]-[11].

3. archaeological record collection: methods, techniques and procedures. the huge amount of documentation produced in front of such a complex stratigraphic sequence required the development of a data storage and management system, dedicated to contextualizing information and capable of proposing new elements useful for research. the entire archive is therefore managed by an intra-site gis (designed since 2001), used for data retrieval, spatial analysis and the elaboration of archaeological themes and/or reconstructive models [12]. the historical sequence reconstructed for the excavation areas was therefore contextualized in a wider urban historical framework, traced through the study of previous bibliographies, archive data and historical cartography. the archaeological records collected during these operations were linked to a general map developed in a gis environment, made up of several distinct layers combined into a unique reference system by overlay mapping procedures: the vector cadastre of rome, the digital cartography of the municipality, plus several rectified aerial photos and satellite images. over this base-map we have georeferenced several historic cartographies, in raster and vector format, such as nolli's nuova pianta di roma (1748), some sheets of lanciani's forma urbis romae and the cartographic atlas media pars urbis made by v. reina in 1911 (figure 3) [13], [14].

figure 1. the investigated area in the historic centre of rome.
figure 2. general plan of the terme di elagabalo complex: the severian phase is highlighted in red.

we have updated the territorial study producing a digital elevation model in order to compare the current state with the ancient surfaces: this was produced by interpolation of the altimetric data available at the geoportale della regione lazio (figure 4) [15] and subsequently recalibrated on the basis of the altimetry detected during the annual topographic campaigns; it is an additional basis on which to place the stratigraphic evidence of structures and infrastructures (ancient roads, sewers, terraces) connected to the original morphological configuration; here we can verify, using the deepest stratification data, our environmental reconstructions for the most ancient periods.
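the dem interpolation step just described can be sketched in a few lines of python; a minimal sketch under assumed inputs: the file name, the column layout and the 1 m grid step are ours, not from the paper.

```python
import numpy as np
from scipy.interpolate import griddata

# scattered altimetric points: columns x, y (m, projected crs) and z (m a.s.l.)
# hypothetical sample file exported from the regional geoportal
pts = np.loadtxt("altimetry_points.csv", delimiter=",", skiprows=1)
xy, z = pts[:, :2], pts[:, 2]

# regular 1 m grid covering the surveyed area
gx, gy = np.meshgrid(
    np.arange(xy[:, 0].min(), xy[:, 0].max(), 1.0),
    np.arange(xy[:, 1].min(), xy[:, 1].max(), 1.0),
)

# linear interpolation of the scattered elevations onto the grid; selected
# cells can later be recalibrated with the excavation altimetry
dem = griddata(xy, z, (gx, gy), method="linear")

np.save("dem_grid.npy", dem)  # raster ready to be loaded as a gis layer
```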
in fact, using all the data collected from stratigraphic and geoarchaeological investigations, from surveys and from historical cartography, two digital terrain models have been developed: one related to the augustan age and the other suggesting the situation after the big fire of 64 ce and nero's massive interventions. those dtms are the basis of virtual urban scenarios, modelled in a 3d environment, that can be navigated following a diachronic exposition, associating each monument to its period, with the option to interact with each architecture in its specific temporal version [16]. over the years this system has been updated both in its software, following the advent of new it products, and in its stored contents: in our spatial database, today, digital and analogue documentation (in particular handmade detailed archaeological drawings and on-paper archaeological forms) are managed together, in order to maintain the integrity of the research archive and its history (figure 5) [17], [18]. to obtain the best results in this operation, a memorandum of agreement has been approved by sapienza university of rome, the kore university of enna and the i.s.p.c.-c.n.r. (institute of heritage science of the national research council), the institution that since 2007 has collaborated in the research carried out at the palatine hill and the colosseum valley, in particular in 3d surveying and integrated geophysical prospecting [19], [20].

figure 3. collection of historic cartographies reproducing our investigation area in different periods of the modern age (e. brienza and m. fano).
figure 4. the new digital elevation model produced from the data of the geoportale della regione lazio.
figure 5. intra-site gis: data management.

during these years a large amount of raw data has been preserved in two different repositories, while only final elaborations were shared by the research group: since 2019 we have finally started to unify the separate archives in a single spatial database, for investigation purposes but also for deontological reasons, in order to leave a complete and unified testimony of all the activities carried out in this important archaeological site during these decades. our intention is to give access, to the scientific community but also to interested people, not only to the data (in both synthetic and in-depth format) but also to the analysis system itself: paying great attention to the issues of open data, archeofoss [21] and public archaeology [22], we have tested the migration of the entire dataset and its interrogation criteria and tools to an open-source web-gis platform, using a web-oriented dbms (postgresql + postgis) and gis software (qgis server + lizmap). in this digital environment, starting from the general site map, it is possible to decompose the single architectures into their structural contexts and features and verify the cognitive process for each one of them: passing from photos to 3d models, then to elevations and wall samples, up to the general synthesis of file-cards and records of ancient structures (figure 6).
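as an illustration of the kind of interrogation such a platform enables, here is a hypothetical sketch of a spatial query against a postgresql/postgis database; the table and column names, coordinates and epsg code are invented for the example and do not reproduce the project's actual schema.

```python
# hypothetical sketch: querying a postgis spatial database of wall records.
# schema, coordinates and epsg:3004 (monte mario / italy zone 2) are assumed.
import psycopg2

conn = psycopg2.connect("dbname=excavation user=archaeo")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT w.wall_id, w.phase, ST_Area(w.geom) AS facade_area
        FROM walls AS w
        WHERE w.phase = %s
          AND ST_Intersects(w.geom,
                ST_MakeEnvelope(%s, %s, %s, %s, 3004))  -- search bounding box
        ORDER BY facade_area DESC;
    """, ("severian", 2311800, 4640600, 2312000, 4640800))
    for wall_id, phase, area in cur.fetchall():
        print(wall_id, phase, round(area, 2))
```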
4. stratigraphic analysis and autopsy of ancient walls

the study of ancient architectures can today adopt survey tools able to detect objects in 3d quickly and with considerable precision: through these tools we have produced ortho-photoplans that have gradually joined the traditional 2d documentation, but we have also proposed three-dimensional renderings of excavated stratigraphic sequences, as well as the reproduction of some ancient artifacts, suggesting their virtual-digital restoration. in the study of ancient walls, the use of new image-based-modeling photogrammetry techniques based on structure from motion (offering both accuracy and photographic texturing) brought us to the realization of a new detailed 3d documentation of the ancient walls [23]-[30]. a 24.2 megapixel canon eos 200d reflex camera, with image stabilizer and a focal length of 18 mm, was used for the photos; in addition, a tripod and a telescopic rod were used to ensure better quality; an average of 30-40 shots was taken for each acquisition, so as not to burden the processing times. we have produced 3d models of the structures adopting three approaches based on differentiated analysis scales: a general and less accurate one (mainly aerial), for architectural complexes (figure 7) [31]; a second one, more accurate, for sectors and rooms of ancient buildings (close-range); a third one for in-depth analysis of the walls and samples, using a maximum-resolution and maximum-accuracy approach (very-close-range, figure 8). in this last case each wall was photographed completely, maintaining a constant distance: particular attention was paid to obtaining a maximum-accuracy survey of 1 m² samples taken from the facades, maintaining a maximum distance of 1 m, in order to obtain images with a high level of detail; in this way it was possible to use very-high-quality orthophotos, with an average detail level between 0.2-0.4 mm/pixel, for the stratigraphic reading and for the quantification of the samples. in the selection of these samples we proceeded analytically, choosing 1 m² wall-revetment areas with an adequate level of conservation and free of discontinuity elements [32].

figure 6. intra-site gis: 3d data management.
figure 7. aerial 3d survey of the architectural complexes (g. caratelli and c. giorgi).
figure 8. very-close-range digital photogrammetry of ancient walls.

vertical high-resolution ortho-photomaps were extrapolated from the textured meshes and, after a careful verification of the measurements, subjected to a vector design process. the new documentation also included a dbms record, updating the usual formats for ancient structural features following the guidelines suggested by the archaeology of construction and by the archaeology of architecture [33]-[39]. we have designed a new file-card format dedicated to registering information about the logistics of the ancient construction yards and the related dynamics of material production and ancient building organization, in addition to data relating to the measures, composition and nature of the structures (figure 9). in this way the chrono-typological analysis, which traditionally focuses on the recognition of single construction features by material aspect, has been expanded with the collection of information related to building methods such as, for example, structural expedients for static stability, the selection of specific materials in relation to particular needs, or the quantification of the work in terms of time and number of workers.
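as a rough cross-check of the detail level quoted above, the object-side size of one pixel (ground sample distance) can be estimated from the camera geometry with a pinhole model. in this sketch the sensor width and pixel count are assumed, generic aps-c values, not the authors' calibration data.

```python
# indicative sketch: ground sample distance (gsd) of a close-range shot.
# sensor width and pixel count are assumed aps-c values, not measured data.
def gsd_mm_per_px(sensor_width_mm, image_width_px, focal_mm, distance_mm):
    """pinhole-camera estimate: object-side size of one pixel."""
    pixel_pitch_mm = sensor_width_mm / image_width_px
    return pixel_pitch_mm * distance_mm / focal_mm

# 18 mm focal length at 1 m distance, as in the survey described above
print(round(gsd_mm_per_px(22.3, 6000, 18.0, 1000.0), 2))  # ~0.21 mm/px
```

the result sits at the lower end of the 0.2-0.4 mm/pixel range reported above, which is consistent with shots taken at the maximum 1 m distance.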
defining trends, measures and treatments of specific building materials can help us to identify diachronically the processes and resources of the ancient construction yards, while the stratigraphic analysis of the walls, with its identification of constructive temporal sequences, is crucial to understanding the formative dynamics of ancient architectures, and must be carried out through the observation of details on the basis of a precise and clearly legible survey. obviously, in order to normalize data entry and editing, we have encoded standard glossaries, while the detailed morphometric information, derived from the autoptic analysis of samples taken from wall facades (normally 1 square meter in size), is managed by sub-cards where each "constituent" (i.e. brick, block, etc.) is characterized by type, use/reuse, material, manufacture, finishing and measures. starting from these assumptions, the analysis of the ancient architecture was carried out with a workflow that, as usual, started from the autoptic analysis and survey of each wall. the elaborations from photogrammetry have been vectorized in a gis environment.

figure 9. dbms file-card for structural and construction data.
figure 10. vectorization of ancient wall samples.

for this purpose, next to the module dedicated to the analytical database of the ancient walls, a new apparatus has been created for the collection of all the data relevant to the documentary base. here, photographs, 3d models acquired from scratch, sections and elevations, drawings and all the graphic documentation produced during the excavations have found their place. in this way, through a simple query, it is possible to trace the whole corollary of raw and elaborated data that constitute the starting point for the analysis of each context. for the quantification of information coming from wall-facade samples we have performed differentiated gis analysis procedures, comparing the dbms data taken directly in the field (counting bricks and measures on the wall facades) with those obtained automatically from spatial vector drawings made on very detailed orthophotos (figure 10); these measures were taken on the same sampled wall facades but in different ways: despite the different tools and procedures, the results for 20 samples were indeed very similar, giving us a good indicator of the correctness of the method [40]. through a series of specifically prepared expressions it is also possible to calculate, automatically and expeditiously, the constituent/conglomerate ratio, but also the dimensions of the components of the facades with their degree of homogeneity and variability (figure 11). in particular, the gis calculation of the dimensions of each brick involves the use of two expressions whose common principle is the construction of a regular polygon circumscribed to each geometry, returning the maximum length along the x and y axes [41], [42]. in this way the length is calculated as

bound_width($geometry) ,   (1)

while the thickness as

bound_height($geometry) .   (2)

as mentioned, the measurements made through these expressions agreed with those taken by direct autopsy, producing a great advantage in terms of time.
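the principle behind expressions (1) and (2) can be reproduced outside the gis as well; the following is a minimal sketch on a single, invented brick polygon, using the shapely library.

```python
# illustrative sketch: axis-aligned bounding box of a vectorized brick,
# reproducing the principle of expressions (1)-(2). coordinates are invented.
from shapely.geometry import Polygon

brick = Polygon([(0.00, 0.00), (0.21, 0.01), (0.22, 0.045), (0.01, 0.04)])

minx, miny, maxx, maxy = brick.bounds
length_m = maxx - minx       # analogue of bound_width($geometry)
thickness_m = maxy - miny    # analogue of bound_height($geometry)
print(f"length: {length_m:.3f} m, thickness: {thickness_m:.3f} m")
```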
5. some quantification data

focusing on the construction process, the first value to be verified in the analysis is the percentage ratio between constituents, in this case bricks, and binder; we must then add the average values of the lengths, thicknesses and areas of the bricks, which, being strongly influenced by anomalies, can be misleading in describing the overall character of the masonry; in this regard, the standard deviation, i.e. the index according to which the measures of each constituent deviate from their average value, can be very effective [43]. from an archaeological point of view we could say that if the standard deviation index is low, this may indicate a certain homogeneity of the bricks, referable to the availability of homogeneous lots of materials; on the contrary, very inhomogeneous bricks may be due to the availability of different lots or to reused material, recovered from pre-existing buildings and therefore more fragmented [44], [45]. taking into consideration the peculiar characters of 60 brick-wall samples of our complex, a specific feature is the extreme thinness of the vertical joints: this observation indicates a high level of specialization of the workers but does not exclude the use of recycled materials. in fact, some samples show a certain lack of homogeneity in the lengths and thicknesses of the bricks, highlighting a combined use of new and recycled material. we know that there were three standard brick types, based on multiples of the ancient roman foot (about 29.6 cm): these were produced in a square shape and then cut into triangles whose longest side was set on the facade: by measuring this side it is possible to reconstruct the brick types used, also calculating the parts lost during cutting [46]. from our analysis of about 5700 bricks, it appears that the standard type used in the masonry is 67% bessales (2/3 of a roman foot), divisible into two groups of measures (18-22 and 22-25 cm), while 25% of the material is fragmented and probably reused (figure 12) [47], [48]. it is possible that, at the beginning of the construction, the recovery of the material remaining from the previous, destroyed building was planned. on the other hand, the supply of newly made material is confirmed by the presence of special brick types such as two-foot bricks (bipedales), used for horizontal planes in the masonry and for relieving arches in the facades, following constructive and static procedures typical of severian architecture, with a homogeneity and quantity that suggest a great availability of new material.

figure 11. vector and dbms data analysis and verification.
figure 12. quantification of brick measures and types in masonries.
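for illustration only, the sample statistics discussed in this section can be sketched as follows; the measurements are invented and do not come from the paper's dataset.

```python
# illustrative sketch of the per-sample statistics discussed above;
# all values are invented, not the paper's measurements.
import statistics

# facade-side lengths (cm) from one 1 m2 sample, truncated for readability
lengths_cm = [19.5, 21.0, 23.5, 18.8, 24.1, 22.6, 20.2]

mean_len = statistics.mean(lengths_cm)
std_len = statistics.stdev(lengths_cm)     # low std -> homogeneous lots

# constituent/conglomerate ratio: brick area over total sample area
brick_area_m2 = 0.75                       # invented: bricks cover 0.75 m2
sample_area_m2 = 1.0
ratio = brick_area_m2 / sample_area_m2

group_18_22 = sum(18 <= l < 22 for l in lengths_cm)
group_22_25 = sum(22 <= l <= 25 for l in lengths_cm)
print(f"constituent/conglomerate ratio: {ratio:.0%}")
print(f"mean length {mean_len:.1f} cm, std {std_len:.1f} cm")
print(f"bessales groups: {group_18_22} in 18-22 cm, {group_22_25} in 22-25 cm")
```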
6. conclusions

through the methodologies and tools described above it is now possible to evaluate specific aspects of the ancient construction yards for each period, such as the extent of resource supply, the reuse index and the level of building material selection, and consequently to refine the chronological sequence of the construction phases of the individual buildings. furthermore, being able to refer such a complex sequence of building interventions to architectures of both public and private character, it is possible to plan a comparative analysis of the construction techniques adopted, in relation both to chrono-typological aspects and to particular contingencies. finally, we have a better chance of clarifying the structural and contextual relationships of each construction yard with the surrounding buildings, in order to formulate wide-ranging and multi-temporal reconstructive hypotheses.

some first outcomes of our work can be briefly presented here. first of all, our "very-close-range" photogrammetric approach to structural archaeological evidence, using very-high-definition shots of samples supported by a total station, with automatic measurement of vectorial features, has given very encouraging results when compared with measures of samples taken directly, with a tape and one by one, on the ancient walls: the dimensions of bricks and other building components of the archaeological structures are almost always the same in both cases, and mismatches are few and minor. in addition, the colours and the types of materials can be clearly distinguished. this means that a mensio-chronological approach using our method can be correct; obviously a complete analysis of building materials and their treatments (especially for mortars, concretes and conglomerates) still needs a direct, physical and material autopsy of the archaeological evidence. in particular cases, anyway, for emergencies or during seasonal research in foreign countries, this expeditious approach can be adopted in the field, obtaining reliable results in the subsequent laboratory study.

another result is of an archaeological and architectural nature and concerns the marked reuse of building materials during the first severian age, which, compared to the topographical context (we are next to the emperor's palace in rome) and the chronology (in the capital this building practice is generally peculiar to late antiquity), might seem an unusual phenomenon. however, this had already been partially detected by previous studies of brick stamps found in situ, which show the reuse of hadrianic bricks in severian masonry in the terme di elagabalo complex [49] but also in other sectors of the rebuilt emperor's palace on the palatine, in particular during the first severian phase [50]-[52]. this phenomenon, associated with a new prevalent use of bessales bricks, seems typical of the years of septimius severus, when the first emperor of the new dynasty had to reactivate the intensive production of bricks to manage the reconstruction of a large part of the city destroyed by the fire of 192 ce [53]. we can imagine that, at the initial moment, while waiting for new material stocks in large and continuous quantities, it was decided to reuse bricks from burnt walls, which must have cost about 1/5 of the price of first-choice material, also solving part of the rubble-removal problem [54], [55]. this would confirm the hypothesis that the construction of our complex should be dated to the years of septimius severus (before 211 ce) rather than to the later period of severus alexander (222-235 ce) [56], [57].

acknowledgement

for support in the realization of this paper we want to thank: clementina panella (former full professor of la sapienza università di roma and scientific director of the meta sudans and ne palatine slopes excavations), the parco archeologico del colosseo (in particular dr. giulia giovanetti), giovanni caratelli and cecilia giorgi (i.s.p.c.-c.n.r.), and a. f. ferrandes (department of scienze dell'antichità of la sapienza università di roma).

references

[1] c. panella (editor), meta sudans i.
un'area sacra in palatio e la valle del colosseo prima e dopo nerone, istituto poligrafico e zecca dello stato, rome, 1996, isbn 9788824039505. [in italian]
[2] c. panella (editor), i segni del potere. realtà e immaginario della sovranità nella roma imperiale, edipuglia, bari, 2011, isbn 9788872286166. [in italian]
[3] c. panella (editor), scavare nel centro di roma. storie uomini e paesaggi, quasar, rome, 2013, isbn 9788871405117. [in italian]
[4] e. brienza, valle del colosseo e pendici nord-orientali del palatino. la via tra valle e foro: dal dato stratigrafico alla narrazione virtuale (64 d.c.-138 d.c.), quasar, rome, 2016, isbn 9788871407609. [in italian]
[5] a. arnoldus-huyzendveld, c. panella, inquadramento geologico e geomorfologico della valle del colosseo, in: meta sudans i. un'area sacra in palatio e la valle del colosseo prima e dopo nerone, c. panella (editor), istituto poligrafico e zecca dello stato, rome, 1996, isbn 9788824039505, pp. 9-26. [in italian]
[6] a. arnoldus-huyzendveld, aspect of the landscape environment of rome in antiquity. change of landscape, shift of ideas, in: le regole del gioco. tracce, archeologi, racconti. studi in onore di clementina panella, a. f. ferrandes, g. pardini (editors), quasar, roma, 2016, isbn 9788871407128, pp. 177-202.
[7] s. piro, indagini georadar ad alta risoluzione nell'area delle pendici nord-orientali del palatino, scienze dell'antichità 13 (2008), pp. 141-156. [in italian]
[8] l. saguì, m. cante, f. quondam, le terme di elagabalo. i risultati delle ultime indagini, scienze dell'antichità 20 (2014), pp. 211-230. [in italian]
[9] c. panella, s. zeggio, a. f. ferrandes, lo scavo delle pendici nord-orientali del palatino tra dati acquisiti e nuove evidenze, scienze dell'antichità 20 (2014), pp. 159-210. [in italian]
[10] l. saguì, m. cante, archeologia e architettura nell'area delle terme di elagabalo, alle pendici nord-orientali del palatino. dagli isolati giulio-claudii alla chiesa paleocristiana, thiasos 4 (2015), pp. 37-75. [in italian]
[11] c. panella, a. f. ferrandes, g. iacomelli, g. soranna, curiae veteres. nuovi dati sulla frequentazione del santuario in età tardorepubblicana, scienze dell'antichità 25.1 (2019), pp. 41-71.
[12] e. brienza, un gis intra site per la valle del colosseo: concetti, metodi, applicazione, scienze dell'antichità 13 (2008), pp. 123-139. [in italian]
[13] e. brienza, cartografia storica e cartografia numerica. la pianta del nolli e il gis, in: roma nel secolo dei lumi. architettura erudizione scienza nella pianta di g.b. nolli "celebre geometra", m. bevilacqua (editor), electa, napoli, 1998, isbn 9788843587544, pp. 199-202. [in italian]
[14] c. panella, m. fano, e. brienza, dallo scavo alla valorizzazione: l'esempio della valle del colosseo e delle pendici nord-orientali del palatino, proc. of 2nd sitar sistema informativo territoriale archeologico di roma, m. serlorenzi, i. jovine (editors), iuno edizioni, roma, 2013, isbn 9788890371165, pp. 145-189. historical mapping of rome. online [accessed 14 march 2022] http://mappingrome.com
[15] regione lazio geoportale. online [accessed 25 february 2021] https://geoportale.regione.lazio.it/geoportale
[16] c. panella, m. fano, e. brienza, r. carlani, a 3d web-gis for the valley of the colosseum and the palatine hill, in: layers of perception, proc. of the 35th international conference on computer applications and quantitative methods in archaeology, a. posluschny, k. lambers, i. herzog (editors), dr. rudolf habelt gmbh, bonn, isbn 9783774935563, pp. 1-10.
[17] c.
panella, e. brienza, il geodatabase della valle del colosseo e del palatino nord-orientale ed il trattamento digitale del dato archeologico, bollettino sifet 3 (2009), pp. 9-18. [in italian]
[18] c. panella, m. fano, e. brienza, 30 years of urban archaeology: measuring, interpreting and reconstructing, proc. of 1st imeko international conference on metrology for archaeology, benevento, italy, 21-23 october 2015, pp. 49-54.
[19] c. panella, r. gabrielli, c. giorgi, le terme di elagabalo sul palatino: sperimentazione di un metodo fotogrammetrico 3d applicato allo scavo archeologico, archeologia e calcolatori 22 (2011), pp. 243-260. [in italian]
[20] c. giorgi, terme di elagabalo. il balneum tardoantico: studio archeologico e rilievo 3d, in: valle del colosseo e pendici nord-orientali del palatino. materiali e contesti 1, c. panella, l. saguì (editors), scienze e lettere, roma, 2013, isbn 9788866870371, pp. 55-86. [in italian]
[21] archivio archeofoss. online [accessed 13 march 2022] http://archivio.archeofoss.org
[22] g. volpe, archeologia pubblica. metodi, tecniche, esperienze, carocci, roma, 2020, isbn 8843099884. [in italian]
[23] f. remondino, s. campana (editors), 3d surveying and modeling in archaeology and cultural heritage. theory and best practices, bar int. ser. 2598, archaeopress, oxford, 2014, isbn 9781407312309.
[24] s. bertocci, s. parrinello (editors), digital survey and documentation of the archaeological and architectural sites, edifir edizioni, firenze, 2015, isbn 9788879707121.
[25] e. stylianidis, f. remondino (editors), 3d recording, documentation and management of cultural heritage, whittles publishing, dunbeath, 2015, isbn 9781849951685.
[26] d. tsiafaki, n. michailidou, benefits and problems through the application of 3d technologies in archaeology: recording, visualisation, representation and reconstruction, scientific culture 1.3 (2015), pp. 37-45.
[27] h. kamermans, ch. piccoli, w. de neef, a. g. posluschny, r. scopigno (editors), the three dimensions of archaeology, proc. of the xvii world uispp conference, burgos, spain, 1-7 september 2014, archaeopress, oxford, 2016, isbn 9781784912949.
[28] n. dell'unto, g. landeschi, a. m. leander touati, m. dellepiane, m. callieri, d. ferdani, experiencing ancient buildings from a 3d gis perspective. a case drawn from the swedish pompeii project, journal of archaeological method and theory 23 (2016), pp. 73-94.
[29] j. zachar, m. horňák, p. novaković (editors), 3d digital recording of archaeological, architectural and artistic heritage, conpra series i, university of ljubljana press, ljubljana, 2017, isbn 9789612378981.
[30] f. bianconi, m. filippucci, la fotomodellazione per il rilievo archeologico, archeologia e calcolatori 30 (2019), pp. 205-228. [in italian]
[31] e. adamopoulos, f. rinaudo, uas-based archaeological remote sensing: review, meta-analysis and state-of-the-art, drones 4(3) (2020), pp. 1-28. doi: 10.3390/drones4030046
[32] v. di cola, studio dei paramenti laterizi delle mura aureliane. 2. la selezione dei campioni, proc. of le mura aureliane nella storia di roma. 1. da aureliano a onorio, rome, italy, 25 march 2015, edizioni roma tre-press, roma, 2017, isbn 9788894885392, pp. 69-91. [in italian]
[33] s. camporeale, h. dessales, a. pizzo (editors), arqueología de la construcción i.
los procesos constructivos en el mundo romano: italia y provincias occidentales, anejos del archivo español de arqueología 50, mérida, 2008, isbn 9788400087890. [in spanish]
[34] s. camporeale, h. dessales, a. pizzo (editors), arqueología de la construcción ii. los procesos constructivos en el mundo romano: italia y provincias orientales, anejos del archivo español de arqueología 57, madrid-mérida, 2010, isbn 9788400092795. [in spanish]
[35] s. camporeale, h. dessales, a. pizzo (editors), arqueología de la construcción iii. los procesos constructivos en el mundo romano: la economía de las obras, anejos del archivo español de arqueología 64, madrid-mérida, 2012, isbn 9788400095000. [in spanish]
[36] j. bonetto, s. camporeale, a. pizzo (editors), arqueología de la construcción iv. las canteras en el mundo antiguo: sistemas de explotación y procesos productivos, anejos del archivo español de arqueología 69, mérida, 2014, isbn 9788400098322. [in spanish]
[37] s. camporeale, j. delaine, a. pizzo (editors), arqueología de la construcción v. man-made materials, engineering and infrastructure, anejos del archivo español de arqueología 77, madrid, 2016, isbn 9788400101428. [in spanish]
[38] the journals: archeologia dell'architettura, issn 11266236; arqueología de la arquitectura, issn 16952731.
[39] g. p. brogiolo, a. cagnana, archeologia dell'architettura. metodi e interpretazioni, all'insegna del giglio, firenze, 2012, isbn 9788878145184. [in italian]
[40] s. acacia, r. babbetto, m. casanova, e. macchioni, d. pittalunga, photogrammetry as a tool for chronological dating of fired bricks structures in genoa area, isprs archives 42-2/w5 (2017), pp. 749-753.
[41] d. vitelli, applicazioni di 'gis verticale' per la quantificazione delle opere architettoniche in muratura e i loro tempi di realizzazione: il caso del castello di drena, archeologia dell'architettura 22 (2017), pp. 101-112. [in italian]
[42] f. giacomello, f. parisi, s. schivo, una proposta di metodo per l'interpretazione del reimpiego del mattone romano tramite analisi gis, archeologia dell'architettura 22 (2017), pp. 133-145. [in italian]
[43] s. mongodi, studio dei paramenti laterizi delle mura aureliane. 3. analisi statistica, proc. of le mura aureliane nella storia di roma. 1. da aureliano a onorio, rome, italy, 25 march 2015, edizioni roma tre-press, roma, 2017, isbn 9788894885392, pp. 93-101. [in italian]
[44] m. medri, v. di cola, s. mongodi, g. pasquali, quantitative analysis of brick-faced masonry: example from some large imperial buildings in rome, arqueología de la arquitectura 13 (2016), pp. 1-8.
[45] m. medri, studio dei paramenti laterizi delle mura aureliane. 1. osservazioni generali, proc. of le mura aureliane nella storia di roma. 1. da aureliano a onorio, rome, italy, 25 march 2015, edizioni roma tre-press, roma, 2017, isbn 9788894885392, pp. 41-67. [in italian]
[46] e. bukowiecki, la taille des briques de parement dans l'opus testaceum, in: arqueología de la construcción ii. los procesos constructivos en el mundo romano: italia y provincias orientales, s. camporeale, h. dessales, a. pizzo (editors), madrid-mérida, 2010, isbn 9788400092795, pp. 143-151. [in french]
[47] m. serlorenzi, f. coletti, l. traini, s. camporeale, il progetto domus tiberiana (roma). gli approvvigionamenti di laterizi per i cantieri adrianei lungo la nova via, arqueología de la arquitectura 13 (2016), pp. 1-13. [in italian]
[48] m. serlorenzi, s.
camporeale, anatomia di un muro romano: dati preliminari sullo smontaggio e quantificazione di alcune strutture in laterizio di epoca adrianea dallo scavo di piazza dante a roma, archeologia dell'architettura 22 (2017), pp. 21-33. [in italian]
[49] d. botticelli, epigrafia del costruito. bolli laterizi dallo scavo delle pendici nord-orientali del palatino (area iv, anni 2007-2008), in: valle del colosseo e pendici nord-orientali del palatino. materiali e contesti 3, c. panella, v. cardarelli (editors), scienze e lettere, roma, 2017, isbn 9788866870739, pp. 49-96. [in italian]
[50] f. villedieu (editor), la vigna barberini, ii. domus, palais impérial et temples. stratigraphie du secteur nord-est du palatin, école française de rome, rome, isbn 9782728307852. [in french]
[51] f. villedieu, la vigna barberini à l'époque sévérienne, in: palast und stadt im severischen rom, n. sojc, a. winterling, u. wulf-rheidt (editors), steiner, stuttgart, 2013, isbn 9783515103008, pp. 157-180. [in french]
[52] j. pflug, u. wulf-rheidt, la domus severiana sul palatino. gli interventi nel palazzo imperiale, in: roma universalis. i severi: l'impero e la dinastia venuta dall'africa, c. panella, a. d'alessio, r. rea (editors), electa, milano, 2018, isbn 9788891820723, pp. 158-169. [in italian]
[53] e. bukowiecki, u. wulf-rheidt, l'industria laterizia e l'organizzazione dei cantieri urbani, in: roma universalis. i severi: l'impero e la dinastia venuta dall'africa, c. panella, a. d'alessio, r. rea (editors), electa, milano, 2018, isbn 9788891820723, pp. 220-227. [in italian]
[54] s. barker, roman builders – pillagers or salvagers? the economics of deconstruction and reuse, in: arqueología de la construcción ii. los procesos constructivos en el mundo romano: italia y provincias orientales, s. camporeale, h. dessales, a. pizzo (editors), madrid-mérida, 2010, isbn 9788400092795, pp. 127-142.
[55] c. n. duckworth, a. wilson (editors), recycling and reuse in the roman economy, oxford university press, oxford, 2020, isbn 9780198860846.
[56] c. panella, i contesti e le stratigrafie, in: i reperti scultorei dalle "terme di elagabalo". il ritrovamento, il restauro, l'edizione, m. papini (editor), quasar, roma, 2019, isbn 9788871409818, pp. 9-54. [in italian]
[57] a later dating, during severus alexander's reign, is proposed in r. mar, el palatí. la formació dels palaus imperials a roma, universitat rovira i virgili, tarragona, 2005, isbn 9788493469801.
iot environmental quality monitoring in smart buildings in presence of measurement uncertainty: a decision-making approach

damiano alizzio1, claudio de capua1, gaetano fulco1, mariacarla lugarà1, valentina palco1, filippo ruffa1
1 departments diies and dieceam, università 'mediterranea' di reggio calabria, 89122, reggio calabria, italy

section: research paper
keywords: smart building; innovation-iot; security; sensing; monitoring; building heritage
citation: damiano alizzio, claudio de capua, gaetano fulco, mariacarla lugarà, valentina palco, filippo ruffa, iot environmental quality monitoring in smart buildings in presence of measurement uncertainty: a decision-making approach, acta imeko, vol. 12, no. 2, article 24, june 2023, identifier: imeko-acta-12 (2023)-02-24
section editors: alfredo cigada, politecnico di milano, italy; andrea scorza, università degli studi roma tre, italy; roberto montanini, università degli studi di messina, italy
received december 2, 2022; in final form february 22, 2023; published june 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: gaetano fulco, e-mail: gaetano.fulco@unirc.it

abstract

the change of the living concept from "traditional" to "smart" concerns how we live and relate in spaces that today, more than ever, are "sensitive" (i.e. spaces where digital technologies occupy a prominent place in the monitoring and control of buildings, with the aim of achieving high levels of quality of life). we are therefore witnessing the creation of new declinations of living and, in this context, the internet of things (iot) represents the starting point for the creation of connected products that "share" the information they detect with other objects or people on the network. in this scenario, the authors propose an original approach to measurements for the assessment of comfort in living environments. the work consists in the design and implementation of a measurement station, which acquires and analyses data collected by a network of distributed sensors and activates forced ventilation if the level of comfort is below the desired threshold. in such situations, where measurement data are compared with a threshold value, it is necessary to consider how measurement uncertainty affects the decision taken; in this particular context, since the activation of actuators involves energy consumption, the decision on the effective threshold crossing should be well thought out. for this reason, the aim of this work is to propose a smart monitoring system that, through the setup and calibration of two decision-making algorithms, can decide whether the measured value is below or above the threshold set, with a known probability. in this way, the end user can choose an appropriate strategy, calibrated on the specific living environment, which allows to maximize either environmental comfort or energy saving, depending on the specific needs.

1. introduction

going through the historical evolution of living, it emerges that the house, considered as a "safe place", has had to adapt to multiple cultural, economic-political and climatic changes, but also to the needs of individuals and family groups [1].
the changes do not only concern the construction process and the technologies used for the realization of spaces, which certainly, nowadays, are closer to very high-performance standards and able to lower the environmental impact as much as possible. the determining factor for the development of the concept of living has always been the relationship between home and family structure [2]. the different patterns and family statuses introduced by modernity necessarily exceed the functionalist vision of the home as a pure "machine for living" [3], i.e. the "house" with minimum standards that guarantee a good quality of life for the occupants. in this scenario, it is important to intersect the digital and real worlds, so that the digital is at the service of the user, in order to improve living comfort and well-being, understood as better living conditions [4]. the emergency to which we are called to respond today, as a scientific community and as professionals and technicians in the construction sector and beyond, is the fragility to which the vast majority of the italian building heritage is subjected. in fact, in the italian territory, over time, the buildings have shown an insufficient degree of resilience from both a seismic and an energetic point of view. in italy, over 22% of buildings are in a mediocre or very bad state of conservation, and the construction sector is the most energy-intensive in terms of consumption, maintainability and habitability of built environments [5].
we define as "fragile" a building that does not fit into its environmental context and compromises the life of its occupants because it is not very resistant or not very liveable, a space without the minimum services of smart living [6]. smart living is a concept that in recent years has become ever more important in the entire construction sector, covering its different phases: from design to the construction process, from monitoring and maintenance to management by end users. this concept is based on the idea that the use of technology allows the creation of environments that are able to substantially improve and simplify the quality of life of the occupants [7]. the current technical-scientific debate is focused on which tools, currently in use, can be considered reliable and capable of responding to important needs, such as: the use of renewable sources and strategies to contain energy consumption, the protection of human lives and, last but not least, the levels of indoor comfort. living, in this perspective, is becoming "intelligent and sustainable", with a continuous interrelation between human action and technology through increasingly digitized services aimed at considerably increasing the quality of life of the occupants. a statistic reported by cisco [8] about the use of the iot (internet of things) points out that in 2010 there were over 12.5 billion connected devices, with an increase of more than 400% in 2020 (about 50 billion). there are several experiments and applications currently underway in the construction sector [9], [10], including "dynamic facades with high-performance envelopes for the energy efficiency of the building", "the optimization of iaq (indoor air quality) levels in buildings through intelligent ventilation systems", "systems for monitoring building safety and risk mitigation", etc. technology, connected to open-source hardware and sensing systems, allows the detection of any changes in conditions and status [11]. this makes it possible to meet occupant comfort, energy consumption and cost efficiency needs. an important aspect concerns indoor air quality. according to the world health organization (who), the sick building syndrome (sbs) affects people who are subject to prolonged exposure to chemical, biological and/or physical agents in buildings with a low level of indoor air quality, generally due to poor ventilation. in fact, from the literature we know that indoor iaq levels are generally 2 to 5 times worse than outdoor ones [12], [13]. this means that the occupants of closed spaces that do not enjoy good natural (and/or mechanical) ventilation have an increased risk of developing psychophysical malaise that often, over time, leads to devastating effects on the human body, such as the onset of diseases related to the respiratory tract, the central nervous system and even cancer [14]. since in recent years the lifestyle has changed considerably and people spend much more time indoors than before (about 90% of their time), several research activities in the ict (information and communication technologies) sector aim at investigating how multi-sensor iot platforms can optimize iaq levels in buildings [15], [16]. if we also consider the recent changes in people's lifestyle, which have increased the time spent at home, the importance of investing in iot platforms, which allow to dialogue with the home and to adapt the environment to the user's needs, is clear [17].
among the different aspects of living, the present work aims to provide technological tools for the efficient and effective monitoring of the living environment, in particular through the development of an automatic measurement and control station aimed at optimizing ieq (indoor environmental quality) levels. the proposed station is able to guarantee safety and living comfort through the continuous monitoring of iaq levels, temperature and relative humidity and, by controlling actuators, is capable of restoring the baseline conditions. the implemented system considers the uncertainty associated with the measurement process in order to make appropriate decisions about the actual exceeding of the thresholds set.

2. methods and applications

in this section we describe the acquisition and processing system that, using a sensor network, is able to detect the quantities of interest to monitor the environmental quality, in particular the iaq and the parameters related to thermal comfort, such as temperature and humidity. to make the system more usable, the monitoring station makes data easily accessible through the network. with regard to the iaq, the monitoring focuses on the assessment of the level of pollutants and the consequent management of the forced ventilation system. to maximize flexibility, the system implements two different control strategies: one optimizes safety and activates the ventilation system more often; the other aims at obtaining maximum energy efficiency and, therefore, activates ventilation only in relevant cases. the system also considers the quality of the outdoor air, using a network of sensors placed outside the monitored environment, because in some cases it may be lower than the internal one. the comfort matching criteria described above are applied also to the monitoring of thermal quantities, to decide whether it is the case to activate the ventilation system.

2.1. monitored parameters

the level of indoor air quality refers to the concentration of pollutants, which can come from sources both inside the building and outside, especially in urban contexts [18]. however, apart from temporary and exceptional situations, it is always possible to consider the air quality outside the building as a baseline, since the presence of polluting sources inside can only worsen the standard situation. in this context, it is of great importance to have real-time monitoring of these pollutants to guarantee timely air changes if air quality levels are no longer satisfactory. the parameters for the evaluation of iaq levels were selected from the state of the art [19]-[26] and, in particular, the study was focused on three commonly studied air pollutants: carbon dioxide (co2) and particulate matter pm2.5 and pm10, as shown in table 1. if the concentration of these pollutants exceeds the safety limits imposed by the community standards, this entails both immediate and long-term risks to the health of exposed people, and this is even more relevant the higher their concentration is. focusing on the pollutants examined, co2 can be considered as an indicator of the effectiveness of ventilation and of excessive population density [27] and, in indoor environments with limited ventilation, should never exceed the limit of 1000 ppm [28] to guarantee adequate safety. the concentration of pm2.5 and pm10 particulate matter in the air has also been directly associated with respiratory tract diseases [29], [30].
therefore, it is necessary, in closed spaces, to monitor these quantities in order to plan appropriate intervention strategies that aim to restore optimal air quality. if their concentration exceeds the safety limits imposed by the current regulations, or rather, more conservative limits that include a safety margin even for long-term exposures, it is in fact possible to intervene by activating a ventilation system that guarantees an adequate air change. both for the healthiness of the rooms and to optimize living comfort, it is also necessary to monitor quantities such as temperature and, above all, humidity. the presence of excessive humidity leads to the proliferation of mites and moulds, and the latter can be an important factor in the onset of rheumatic diseases; it also increases the thermal perception of both heat and cold. on the other hand, excessive dryness of the air can cause breathing difficulties and increase the transmissibility of certain diseases [31]. several studies have shown that a relative humidity between 40% and 60% is optimal to maximize comfort [32], [33]. for what concerns the temperature, if it is too high it impacts both the symptoms of sbs and the productivity of the occupants. the ideal temperature should be between 21 and 25 °c [34]. to set the threshold levels of the pollutants present in the monitored environment, among the technical regulations we considered the thresholds established by the world health organization (who). for what concerns thermal comfort, we referred to the iso 7730 standard [35], which also refers to the iso 17772-1:2017 standard [36], as shown in table 2. to control the level of iaq, the proposed system regulates forced ventilation, thus reducing the concentrations of pollutants and regulating temperature and humidity.

2.2. monitoring and control system

the monitoring and control system, schematized in figure 1, consists of a central unit that acquires information from the network of distributed wireless sensors (both indoor and outdoor) and, if necessary, activates the ventilation system, in order to maintain a defined level of comfort in the living environment. the sensor network consists of sensors placed both inside and outside the building, so as to monitor indoor and outdoor environmental conditions. the parameters monitored indoors are co2, pm2.5, pm10, temperature and humidity, while outdoors only pm, temperature and humidity. the activation of the ventilation system, instead, is implemented through the control of a relay that brings power to the internal conditioning system. the technical characteristics of the sensors used are shown in table 3. the central control unit is implemented with a national instruments single-board rio, with its expansion board. its function is to take measurement data from the different sensors in real time and to control the actuators of the ventilation system, through the opening of a relay, implementing the algorithm described in the flowchart of figure 2. using a graphical interface, users can view all measurement data and, according to personal needs, they can make changes to the configuration parameters of the smart management system. the system also allows choosing between two different operating strategies, which aim respectively at maximizing the safety related to the healthiness of the environments or at maximizing energy saving. based on the selection, the system defines the thresholds and decision criteria.
2.3. data analysis and decision-making

for each pollutant we calculate a quality index, which depends on the concentration of the single parameter measured and on the reference level taken from the legislation [37]:

$I_x = 100\,\% - \alpha \log \frac{C_x}{C_{x|0}}$ ,   (1)

where $C_x$ is the measured quantity of the single pollutant x, $C_{x|0}$ is the concentration of the same pollutant under ideal conditions and $\alpha$ is a constant that considers the toxic levels of the pollutant defined by the standard. the value of $\alpha$ is therefore imposed by setting $I_x = 0$ when $C_x = C_{x\,\mathrm{max}}$, where $C_{x\,\mathrm{max}}$ is the maximum concentration of the pollutant taken as a threshold. the overall iaq level is equal to the minimum of the indices calculated for the single monitored parameters:

$I_\mathrm{IAQ} = \min(I_\mathrm{CO_2},\ I_\mathrm{PM10},\ I_\mathrm{PM2.5})$ ,   (2)

where $I_\mathrm{CO_2}$, $I_\mathrm{PM10}$ and $I_\mathrm{PM2.5}$ are the quality indices of co2, pm10 and pm2.5. this method provides global and rapid information on the level of air quality in indoor environments and can be easily implemented without great computational effort. the algorithm compares the estimated level of iaq with the threshold taken as a reference and, if it does not meet the user's needs, it activates the ventilation actuators. in the same way the system monitors the temperature and relative humidity in the environment and compares their measured values with the reference thresholds, in order to maintain the desired quality standards.

table 1. iaq standards (parameters covered by each standard: co2 / pm10 / pm2.5).
bes: x
breeam: –
dgmb: –
en 16798: x x x
hqe: x x
klima: x
leed: x x x
nabers: x x
osmoz: x x x
well: x x

table 2. pollutants reference levels.
co2: 1000 ppm
pm10: 50 μg/m³
pm2.5: 10 μg/m³
temperature: 24.5 – 28 °c in summer; 20 – 24 °c in winter
humidity: 40 – 60 % in summer; 30 – 60 % in winter

table 3. sensors specifications (range; accuracy).
temperature in °c: -60 … +75; ± 0.5
relative humidity in %: 5 … 95; ± 7
co2 in ppm: 0 … 10000; ± 40
pm2.5 in µg/m³: 0 … 1000; ± 10
pm10 in µg/m³: 0 … 1000; ± 25

figure 1. control system.

as said in section 2.2, when configuring the system it is possible to choose between two different strategies, i.e. maximization of safety or energy saving, and to set the reference thresholds accordingly. in the second case, in fact, we consider as thresholds the levels indicated by the regulations, while in the first case we set more conservative thresholds, considering also the number of occupants. in particular, the new thresholds are:

$\mathit{Threshold} = \frac{NL}{K}$ ,   (3)

where $NL$ is the level imposed by the normative and $K$ depends on the number of occupants in relation to the size of the environment.
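to make eqs. (1)-(3) concrete, the following minimal sketch computes the per-pollutant indices and the overall iaq index; the ideal-condition concentrations c_x|0 are illustrative assumptions of ours, while the maximum concentrations are the thresholds of table 2.

```python
# minimal sketch of eqs. (1)-(3); the ideal-condition values c0 are
# illustrative assumptions, not the calibration used by the authors.
import math

def alpha(c_max, c0):
    # from eq. (1) with i_x = 0 at c_x = c_max: alpha = 100 / log10(c_max / c0)
    # (base-10 chosen here; any base works consistently, alpha compensates)
    return 100.0 / math.log10(c_max / c0)

def quality_index(c, c_max, c0):
    return 100.0 - alpha(c_max, c0) * math.log10(c / c0)

# per pollutant: (threshold c_max from table 2, assumed ideal c0)
params = {"co2": (1000, 400), "pm10": (50, 5), "pm2.5": (10, 1)}
measured = {"co2": 800, "pm10": 30, "pm2.5": 6}            # example readings

indices = {p: quality_index(measured[p], *params[p]) for p in params}
i_iaq = min(indices.values())                              # eq. (2)
occupancy_k = 1.5                                          # safety strategy
thresholds = {p: params[p][0] / occupancy_k for p in params}  # eq. (3)
print(indices, round(i_iaq, 1), thresholds)
```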
in situations like this, where there is a need to make decisions about the exceeding of a certain threshold, the algorithm has to analyse the experimental data considering not only the measured value, but also the uncertainty associated with the measurement process [38]-[41]. in this case we face a particular class of problems that pertains to the field of decision-making. as is known, in fact, the result of a measurement does not provide a single value, but an interval, centred on the measured value, inside which it is possible to find the value of the measurand with a given level of confidence. this means that, when the result of the measurement

$m = v \pm \varepsilon$ ,   (4)

where $m$ is the measured value, $v$ is the value of the measurand and $\varepsilon$ is the expanded uncertainty of the measurement, falls in a certain band around the identified threshold level, we cannot be certain about the result of the comparison with the reference threshold, and therefore probabilistic assessments must be made to associate a known level of risk with the decision that will be taken. taking from the datasheet the type b uncertainty associated with each sensor, and considering a measurement error with a gaussian distribution, the probability that the measured value falls within a band of ± 3 σ around the measurand, where σ is the standard deviation, is 99.7%. clearly, taking into account the expanded uncertainty, the wider the region of ambiguity, the more times the decision-making algorithm will intervene, as can be easily understood from figure 3. in the implementation of the decision-making algorithm it is possible to consider a smaller band of uncertainty, cutting the tails of the gaussian curve, but this means choosing a smaller coverage factor and therefore decreasing the correctness of the decision taken.

figure 2. flow chart of the algorithm.
figure 3. decision-making region.
figure 4. risk level.

for what concerns the decision-making algorithms, among the various ones proposed in the literature we decided to implement the utility cost test and the fixed risk [42], [43]. to address the decision problem by considering the impact of measurement uncertainty, the utility cost test considers the potential consequences of the different possible decisions. the algorithm evaluates four possible situations and their associated costs, i.e. positive, false positive, false negative and negative. by suitably weighing these costs, the algorithm makes a decision on whether the threshold is actually exceeded, evaluating which of the two possibilities has the lower cost. the fixed risk algorithm, on the other hand, works by initially setting the maximum level of risk acceptable for a wrong decision. this means that the threshold is dynamically set, so that the decision made by the algorithm is right or wrong with a known probability. once a threshold $L_\mathrm{th}$ has been set and the measurement result is compared with it, if the measured value falls within the ambiguity band there is a risk, highlighted in figure 4 (blue area), of considering the measured value below the threshold when it is above. since the probability density function associated with the measurement is known, it is possible to set the maximum risk that is considered acceptable (mra) in making the decision. through a change of variable, the threshold is repositioned in order to match the following equation:

$\mathit{MRA} = \int_{L_\mathrm{th}}^{+\infty} f_{u}(x-m)\, \mathrm{d}x$ ,   (5)

where $f_{u}(x-m)$ is the probability density function centred on $m$. knowing the measurement uncertainty of the sensor, if the process is characterized by a normal distribution it is possible to set the parameters of the algorithm. the algorithm will then be able to make a decision on exceeding the threshold within the ambiguity zone and give an answer to the decision-making problem, together with a quantification of the risk associated with the decision taken.
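under the gaussian assumption, the repositioning of eq. (5) amounts to shifting the threshold by a guard band obtained from the inverse normal distribution. the following is a minimal sketch of a safety-oriented variant (function and variable names are ours, not the authors' implementation); the energy-saving strategy would shift the threshold in the opposite direction.

```python
# minimal sketch (our naming) of a fixed-risk decision under a gaussian
# measurement model: the threshold is shifted by a guard band so that a
# "below threshold" call is wrong with probability at most mra.
from statistics import NormalDist

def fixed_risk_exceeds(measured, threshold, std_u, mra=0.05):
    """treat the value as above threshold unless the residual risk that
    the measurand actually exceeds `threshold` is below `mra`."""
    guard = NormalDist().inv_cdf(1.0 - mra) * std_u   # z_(1-mra) * u
    shifted = threshold - guard                       # repositioned threshold
    return measured > shifted

# co2 example: 1000 ppm threshold, sensor accuracy ±40 ppm taken as std
print(fixed_risk_exceeds(980, 1000, 40, mra=0.05))    # True: ventilate
print(fixed_risk_exceeds(900, 1000, 40, mra=0.05))    # False: risk < 5 %
```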
the implementation of a decision-making criterion is essential for those systems, such as the one described in this work, in which an incorrect evaluation of the operating conditions, i.e. considering the parameters within the safety zone when they are not, can affect people's health and comfort. the proposed system implements both decision-making algorithms, whose parameters are calibrated for the two different strategies, i.e. maximization of environmental quality and maximization of energy savings.

3. evaluation of decision-making algorithms performances

in this section we evaluate the performances of the decision-making algorithms described above. with the simulations performed, we wanted to evaluate how they respond to the two different management strategies:
- highest air quality and healthiness;
- maximum energy savings.
once the operational strategy has been selected in the management system, the system calculates the cost functions for the application of the utility cost test and sets the maximum percentage of risk allowed for considering the reference threshold exceeded in the fixed risk. the two algorithms thus calibrated make appropriate decisions, for measurement results that fall within the ambiguity range, on whether the value is above or below the threshold set, based on the overall strategy chosen. to verify the correct functioning of the algorithms, we used simulated datasets, considering a generic sensor with its associated uncertainty and a fixed threshold level. evaluating a region of ± 3 σ around the threshold level, the intervention of the algorithms was tested with different coverage factors, and the number of interventions that considered the threshold exceeded was compared with the number of activations that would have occurred without the implementation of the decision-making algorithms. the results obtained are summarized in table 4 and table 5.
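a simulation of this kind can be sketched in a few lines; the following toy example, with invented parameters, contrasts the direct comparison with the safety-oriented fixed-risk decision of the previous sketch, and only reproduces the qualitative behaviour (about 50% of direct comparisons above threshold, more activations with the guard band); it does not reproduce the datasets behind tables 4 and 5.

```python
# illustrative simulation: direct threshold comparison vs. a guard-banded
# decision, on a synthetic linear ramp crossing the threshold. parameters
# are ours, not the paper's simulated datasets.
import random
from statistics import NormalDist

random.seed(1)
threshold, std_u, mra = 1000.0, 40.0, 0.05
guard = NormalDist().inv_cdf(1.0 - mra) * std_u

direct = conservative = 0
n = 200
for i in range(n):
    true_value = threshold - 3 * std_u + 6 * std_u * i / (n - 1)  # ±3 sigma ramp
    measured = random.gauss(true_value, std_u)                    # noisy reading
    direct += measured > threshold                 # no decision-making algorithm
    conservative += measured > threshold - guard   # safety-oriented fixed risk

print(f"direct comparisons above threshold: {direct}/{n}")
print(f"fixed-risk (safety) activations:    {conservative}/{n}")
```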
this is just a general case, implemented to test the impact of the decision-making approach: in real applications, by carrying out a proper risk assessment during the system design phase, it is possible to re-calibrate the response of the algorithms to match the needs of the specific environment.

4. conclusions

technologies applied to living environments are significant to ensure their smart evolution. the italian scenario still lacks applications for the redevelopment of buildings (especially those for public use) aimed at improving the quality of life through targeted actions that contain energy consumption using iot or smart systems. designing and building according to these paradigms means seriously facing the requirements for balance between resources and environmental impact. the building is an artefact that changes and adapts according to user needs. the current challenge consists in considering the building through a multidisciplinary approach that manages the technological and digital system in order to obtain high-performance responses in terms of process control, product maintainability and liveability. in this work, the authors proposed an automatic measurement system to be integrated into the context of smart homes, with the aim of improving aspects related to the safety and quality of living environments. by monitoring air quality, ambient temperature and relative humidity and, consequently, managing the ventilation system, it is possible to optimize the level of ieq in the observed environment.

figure 4. risk level.

in these contexts, where measured data are compared with a reference threshold, it is important to consider how measurement uncertainty affects the decision on threshold crossing. this is much more relevant in applications like the one described in this paper, where a wrong decision implies a waste of energy. in fact, if the value of the measurand is near the reference level, the contribution of measurement uncertainty can be decisive in assessing whether the measured value is above or below the threshold. for this reason, the proposed measurement and control station acquires and analyses the measurement data collected by the network of distributed sensors and, with the support of decision-making algorithms that verify the actual exceeding of the thresholds, activates the actuators of the system to optimize the ieq levels. in this system, the decision-making approach plays a role of primary importance, since it allows a more or less conservative attitude towards measurement uncertainty in the choice on the activation of the ventilation system. in fact, the proposed algorithm works on two levels: one related to the safety of indoor environments, and the other linked to the optimization of comfort. the first level is met within the safety thresholds imposed by the current regulations for each monitored parameter. environmental comfort, on the other hand, is guaranteed through careful monitoring of ieq levels and the operation of actuators that restore the desired quality levels if the index falls below the desired comfort threshold.
the system allows the user's preferences about the strategy to be adopted to be recorded through an interactive user interface. in this way the algorithm is able to dynamically move the thresholds, adapting to the needs of the single household. to verify the functioning of the comparison algorithm, two predefined datasets were used. the decision-making algorithms were calibrated on two different strategies: one aimed at maximizing the quality of the environments and the other aimed at maximizing energy savings. from the tests performed it was possible to observe an increase in activations of the actuators between 10 % and 15 % using the utility cost test algorithm and between 29 % and 32 % using the fixed risk algorithm, with respect to direct comparison with the threshold, if the chosen strategy is to maximize quality. in the other case we observed a reduction in the number of activations between 10.5 % and 16.5 % with the utility cost test algorithm and between 20 % and 22 % with fixed risk, compared with the reference case. the results obtained with these two strategies served to verify the functioning of the decision-making algorithm. in practice, the system, installed in a specific living environment, can be recalibrated to make smart decisions that best fit the user needs. future developments will concern the integration of other aspects related to smart building design, such as the evaluation of the energy balance in a context in which electricity is produced on site from renewable sources. in this sense, it is possible to increase the efficiency and safety of living spaces through the measurement of energy flows and a consequent adaptation of the algorithms to take this aspect into account.

references

[1] j. hoof, w. zeiler, r. bergen, architectural and building services requirements for smart homes, handbook of smart homes, health care and well-being, 2014, pp. 1-7. doi: 10.1007/978-3-319-01904-8_29-1
[2] l. van den berg, m. kalmijn, t. leopold, family structure and early home leaving: a mediation analysis, european journal of population 34 (2018), pp. 873-900. doi: 10.1007/s10680-017-9461-1
[3] r. suryantini, d. p. kristanti, a. y. yandi, a healthy machine for living: investigating the fluidity of open spaces in the domestic environment during the pandemic, aip conference proceedings 2376(1) (2021). doi: 10.1063/5.0063938
[4] r. panchalingam, k. chan, a state-of-the-art review on artificial intelligence for smart buildings, intelligent buildings international, 2019. doi: 10.1080/17508975.2019.1613219
[5] p. la greca, g. margani, seismic and energy renovation measures for sustainable cities: a critical analysis of the italian scenario, sustainability 10(1) (2018), art. no. 254. doi: 10.3390/su10010254
[6] v. palco, f. fulco, sensoring & iot: abitare smart, design in the digital age – sitda, napoli, 2020. [in italian]
[7] v. palco, g. fulco, c. de capua, f. ruffa, m. lugarà, iot and iaq monitoring systems for healthiness of dwelling, ieee international workshop on metrology for living environment (metroliven), cosenza, italy, 25-27 may 2022, pp. 105-109. doi: 10.1109/metrolivenv54405.2022.9826946
[8] cisco annual internet report (2018–2023) white paper, 2020.
[9] f. ruffa, m. lugarà, g. fulco, v. palco, c. de capua, monitoring of thermal dispersion in indoor environments: an infrared scanner technique, ieee international workshop on metrology for living environment (metroliven), cosenza, italy, 25-27 may 2022, pp. 34-38. doi: 10.1109/metrolivenv54405.2022.9826913
[10] g. iadarola, s. spinsante, l. de vito, f. lamonaca, a support for signal compression in living environments: the analog-to-information converter, 2022 ieee international workshop on metrology for living environment (metrolivenv), cosenza, italy, 25-27 may 2022, pp. 292-297. doi: 10.1109/metrolivenv54405.2022.9826923
[11] c. nava, v. bruzzaniti, prestazioni energetico-ambientali, monitoraggio e cibernetica con la tecnologia arduino, design driven innovation off-shore e off-site, progetto di ricerca s2 home, roma, 2019, pp. 134-138. [in italian]
[12] a. schieweck, e. uhde, t. salthammer, l. salthammer, l. morawska, m. mazaheri, p. kumar, smart homes and the control of indoor air quality, renewable and sustainable energy reviews 94 (2018), pp. 705-718. doi: 10.1016/j.rser.2018.05.057
[13] v. costanzo, r. yao, t. xu, j. xiong, q. zhang, b. li, natural ventilation potential for residential buildings in a densely built-up and highly polluted environment. a case study, renewable energy 138 (2019), pp. 340-353. doi: 10.1016/j.renene.2019.01.111
[14] i. manisalidis, e. stavropoulou, a. stavropoulos, e. bezirtzoglou, environmental and health impacts of air pollution: a review, front public health 8 (2020), art. no. 14. doi: 10.3389/fpubh.2020.00014
[15] g. chiesa, s. cesari, m. garcia, m. issa, s. li, multisensor iot platform for optimising iaq levels in buildings through a smart ventilation system, sustainability 11(20) (2019), art. no. 5777. doi: 10.3390/su11205777
[16] a. lay-ekuakille, p. vergallo, r. morello, c. de capua, indoor air pollution system based on led technology, measurement 47 (2014), pp. 749-755. doi: 10.1016/j.measurement.2013.09.040
[17] i. froiz-míguez, t. fernández-caramés, p. fraga-lamas, l. castedo, design, implementation and practical evaluation of an iot home automation system for fog computing applications based on mqtt and zigbee-wifi sensor nodes, sensors 18(8) (2018), art. no. 2660. doi: 10.3390/s18082660
[18] v. costanzo, r. yao, t. xu, j. xiong, q. zhang, b. li, natural ventilation potential for residential buildings in a densely built-up and highly polluted environment. a case study, renewable energy 138 (2019), pp. 340-353. doi: 10.1016/j.renene.2019.01.111
[19] m. schieweck, e. uhde, t. salthammer, l. salthammer, l. morawska, m. mazaheri, p. kumar, smart homes and the control of indoor air quality, renewable and sustainable energy reviews 94 (2018), pp. 705-718. doi: 10.1016/j.rser.2018.05.057
[20] s. abdul-wahab, s. chin fah en, a. elkamel, l. ahmadi, k. yetilmezsoy, a review of standard and guidelines set by international bodies for the parameters of indoor air quality, atmospheric pollution research (2015), pp. 751-767. doi: 10.5094/apr.2015.084
[21] a. baudet, e. baurès, o. blanchard, p. le cann, j. gangneux, a. florentin, indoor carbon dioxide, fine particulate matter and total volatile organic compounds in private healthcare and elderly care facilities, toxics 10(3) (2022), art. no. 136. doi: 10.3390/toxics10030136
[22] a. a. hapsari, a. i. hajamydeen, d. j. vresdian, m. manfaluthy, l. prameswono, e. yusuf, real time indoor air quality monitoring system based on iot using mqtt and wireless sensor network, ieee 6th international conference on engineering technologies and applied sciences (icetas), kuala lumpur, malaysia, 20-21 december 2019, pp. 1-7. doi: 10.1109/icetas48360.2019.9117518
[23] m. c. pietrogrande, l. casari, g. demaria, m. russo, indoor air quality in domestic environments during periods close to italian covid-19 lockdown, int. j. environ. res. public health 18(8) (2021), art. no. 4060. doi: 10.3390/ijerph18084060
[24] a. moreno-rangel, t. sharpe, f. musau, g. mcgill, field evaluation of a low-cost indoor air quality monitor to quantify exposure to pollutants in residential environments, journal of sensors and sensor systems 7 (2018), pp. 373-388. doi: 10.5194/jsss-7-373-2018
[25] b. shrestha, s. tiwari, s. bajracharya, m. keitsch, role of gender participation in urban household energy technology for sustainability: a case of kathmandu, discover sustainability 2 (2021), art. no. 19. doi: 10.1007/s43621-021-00027-w
[26] j. fernández-agüera, s. dominguez-amarillo, m. fornaciari, f. orlandi, tvocs and pm 2.5 in naturally ventilated homes: three case studies in a mild climate, sustainability 11(22) (2019), art. no. 6225. doi: 10.3390/su11226225
[27] y. song, f. mao, q. liu, human comfort in indoor environment: a review on assessment criteria, data collection and data analysis methods, ieee access 7 (2019), pp. 119774-119786. doi: 10.1109/access.2019.2937320
[28] health canada, residential indoor air quality guidelines: carbon dioxide, 2021.
[29] health effects of particulate matter, policy implications for countries in eastern europe, caucasus and central asia, world health organization, 2013.
[30] m. a. zoran, r. s. savastru, d. m. savastru, m. n. tautan, assessing the relationship between surface levels of pm2.5 and pm10 particulate matter impact on covid-19 in milan, italy, science of the total environment 738 (2020). doi: 10.1016/j.scitotenv.2020.139825
[31] a. ahlawat, a. wiedensohler, s. mishra, an overview on the role of relative humidity in airborne transmission of sars-cov-2 in indoor environments, aerosol and air quality research, special issue on covid-19 aerosol drivers, impacts and mitigation 20 (2020), pp. 1856-1861. doi: 10.4209/aaqr.2020.06.0302
[32] k. h. chan, j. s. malik peiris, s. y. lam, l. m. poon, k. y. yuen, w. h. seto, the effects of temperature and relative humidity on the viability of the sars coronavirus, advances in virology (2011), art. no. 734690. doi: 10.1155/2011/734690
[33] p. wolkoff, indoor air humidity, air quality, and health – an overview, international journal of hygiene and environmental health 221 (2018), pp. 376-390. doi: 10.1016/j.ijheh.2018.01.015
[34] a. olli, w. seppänen, some quantitative relations between indoor environmental quality and work performance or health, ashrae research journal 12(4) (2006), pp. 957-973. online [accessed 22 april 2023]. https://escholarship.org/content/qt80v061jx/qt80v061jx.pdf?t=lnpsal
[35] iso 7730:2005 ergonomics of the thermal environment — analytical determination and interpretation of thermal comfort using calculation of the pmv and ppd indices and local thermal comfort criteria.
[36] iso, b. 17772-1: 2017 energy performance of buildings. indoor environmental quality. indoor environmental input parameters for the design and assessment of energy performance of buildings.
[37] i. mujan, d. licina, m. kljajic, a. culic, a. anđelkovic, development of indoor environmental quality index using a low-cost monitoring platform, journal of cleaner production 312 (2021). doi: 10.1016/j.jclepro.2021.127846
[38] k. t. huynh, a. barros, c. bérenguer, maintenance decision-making for systems operating under indirect condition monitoring: value of online information and impact of measurement uncertainty, ieee transactions on reliability 61(5) (2012), pp. 410-425. doi: 10.1109/tr.2012.2194174
[39] c. de capua, r. morello, n. pasquino, a fuzzy approach to decision making about compliance of environmental electromagnetic field with exposure limits, ieee transactions on instrumentation and measurement 58 (2009), pp. 612-617. doi: 10.1109/tim.2008.2003340
[40] r. morello, c. de capua, an iso/iec/ieee 21451 compliant algorithm for detecting sensor faults: an approach based on repeatability and accuracy, ieee sensors journal 15(5) (2015), pp. 2541-2548. doi: 10.1109/jsen.2014.2361697
[41] n. morresi, s. casaccia, m. arnesano, g. m. revel, impact of the measurement uncertainty on the monitoring of thermal comfort through ai predictive algorithms, acta imeko 10(4) (2021), pp. 221-229. doi: 10.21014/acta_imeko.v10i4.1181
[42] v. kreinovich, t. hung nguyen, s. niwitpong, statistical hypothesis testing under interval uncertainty: an overview, international journal of intelligent technology and applied statistics 1(1) (2008), pp. 1-32. doi: 10.6148/ijitas.2008.0101.01
[43] f. de vettori, s. menegozzi, l. duò, come prendere decisioni in condizioni di incertezza, tutto misure 1 (2007). [in italian]

acta imeko issn: 2221-870x february 2015, volume 4, number 1, 44 – 52

statistical characterization of the 2.45 ghz propagation channel aboard trains

annalisa liccardo 1, andrea mariscotti 2, attilio marrese 1, nicola pasquino 1, rosario schiano lo moriello 1
1 dept. of electrical engineering and information technologies, university of naples federico ii, via claudio 21, naples, italy
2 dept. of naval and electrical engineering, university of genoa, via all'opera pia 11, genoa, italy

abstract: the propagation channel aboard trains is investigated with reference to the propagation path loss within cars, the delay spread and the coherence bandwidth. results show that the path loss exponent is slightly smaller than in free space, possibly due to reflections by metal walls, and that it does not depend significantly on the position of transmitter and receiver. the delay spread and coherence bandwidth depend on both the polarization and the distance between transmitter and receiver, while the effect of their interaction is not statistically significant. the best fit for the experimental distributions of both the delay spread and the coherence bandwidth is also investigated. results show that it does not always match the models suggested in the literature and that the fit changes with the values of the input parameters. finally, the functional law between coherence bandwidth and delay spread is determined. results typically match expectations, although the specific measurement configuration affects the model parameters.

section: research paper
keywords: radio propagation channel; path loss; multipath propagation; delay spread; coherence bandwidth; statistical analysis of measurement data
citation: a. liccardo, a. mariscotti, a. marrese, n. pasquino, r. schiano lo moriello, statistical characterization of the 2.45 ghz propagation channel aboard trains, acta imeko, vol. 4, no. 1, article 8, february 2015, identifier: imeko-acta-04 (2015)-01-08
editor: paolo carbone, university of perugia
received december 13th, 2013; in final form november 17th, 2014; published february 2015
copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was supported by measurement science consultancy, the netherlands
corresponding author: nicola pasquino, e-mail: npasquin@unina.it

1. introduction

use of telecommunication services aboard trains is becoming more and more common for providing information and entertainment services to passengers during trips. recently, wifi access points began to be installed on board with the aim of providing passengers with wide-band access to the internet. trains are a potentially harsh environment for wireless propagation, given the presence of metal walls, seats, moving and standing passengers, besides emissions due to electric and electronic equipment and the power quality phenomena that may cause interference with electronic devices [1].
there are two main areas of interest for researchers aiming to reduce performance degradation: the study of the propagation characteristics aboard trains, to determine the attenuation law and multipath properties, and the investigation of the effects of external disturbances on the telecommunication signal. in [2]–[4] the results of an extensive measurement campaign to characterize the behavior of the disturbance radiated by the electric arc generated by the interaction between pantograph and overhead wire can be found, while the effects of pulses on some quality-of-service parameters have been investigated in [5]. with reference to the characterization of the propagation channel, the literature usually includes research papers about propagation in different scenarios and at different frequencies. in [6], [7] it is shown that the use of a directive antenna reduces the doppler spread and increases the received power when the train runs towards the base transmitting station (bts); when the train moves away from the bts, the omnidirectional antenna provides better results instead. in [8]–[11] it is shown that the classical models for propagation loss (such as the hata model and the two-ray model) are inadequate for attenuation prediction when propagation occurs under the effect of structures like viaducts and terrain cuttings (canyons). moreover, the presence of canyons, especially if topped by bridges, determines higher path losses than in the case of viaducts. with specific reference to propagation on board, in [12], [13] a narrow-band approach has shown that the transmitted signal can re-enter cars through windows and that its contribution to inter-car propagation is more relevant than the line-of-sight (los) signal. in [14] propagation on board has been analyzed with a 2.35 ghz continuous-wave signal and both a planar and an omnidirectional antenna, placed at different locations. the path loss and the ricean k-factor are shown to be related to the antenna type, while the delay spread is independent of the measurement configuration. studies performed aboard ships [15]–[18] can only partially be applied to trains, due to differences in the geometrical and electromagnetic configurations. the paper is organized as follows: in section 2 the measurement setup and methodology for the experimental studies are described; in section 3 the results are presented.
2. measurement setup

measurements were run on an etr200 train owned by circumvesuviana s.r.l., an italian local transportation service. figure 1 shows a portion of the train layout, with dimensions in millimeters.

figure 1. train layout.

the properties of the propagation channel were investigated with a narrow-band approach. the transmitting port of a fieldfox n9918a vector network analyzer by agilent was connected to a bbha 9120d horn antenna by schwarzbeck, and the receiving port to an em6865 biconical antenna by electrometrics. automated calibration was run to normalize the attenuation introduced by cables and connections. care was also taken to maintain polarization coupling and to maximize the received signal by proper antenna alignment. it was not necessary to take antenna factors into account, because the main focus has been on the propagation loss exponent and not on the absolute value of the loss itself. measurements were executed in a stationary train, without significant reflectors in close proximity outside the car. figure 1 also shows the distances from the wall (in mm) for the two propagation configurations for path loss measurements, which were taken along the centerline of the car and away from it (off-axis measurements), both at 120 cm height. samples were taken moving the receiving antenna away from the transmitting one, starting at 1 m up to 20 m in 10 cm steps. figure 2 shows a view of the propagation environment for centerline measurements. it is apparent that propagation is affected by reflection from the walls, seats and vertical handrails, because of which the number of measurement points was reduced from 190 to 171 and 151 for the centerline and off-axis configurations, respectively.

figure 2. measurement setup.

delay spread was measured along the centerline, in both vertical and horizontal polarization, at 120 cm height and three distances, i.e. 5, 10 and 15 m, moving the whole system in 10 cm steps. again, some measurement points were missed, this time because of the presence of seats along the off-axis positions, besides the ones already excluded because of the handrails.

3. results

3.1. relative path loss plr

since internet access on board is expected to be granted through wifi access points, channel propagation has been investigated in the 2.45 ghz band. figure 3 shows the relative path loss plr obtained by normalizing the received power to that measured at 1 m from the transmitting antenna [19], [20].

figure 3. path loss measurements and regression with 95% confidence intervals. left: centerline and off-axis; upper right: centerline; lower right: off-axis.

the graph on the left shows both centerline and off-axis samples, and regression is carried out without distinction between the two configurations. variability about the regression line is typically caused by the external noise affecting the measurement setup [21]. because of the inhomogeneity of the propagation environment (see figure 1), different exponents are expected to rule the power-decay law along the channel, so the approach shown in [22] should have been adopted. however, the limited number of points assigned to each propagation layer would have resulted in a reduced significance of the regression analysis. for this reason, no distinction was made, and the exponent n can be thought of as the average one over the whole distance.
the model for the relative path loss plr is [23]:

PL_r(d) = 10\,n\,\log_{10}(d/d_0) + X ,  (1)

where d0 = 1 m is the reference distance and x is a random variable describing the effect of multipath propagation on the received power, expected to be distributed according to a gaussian random variable. with the obtained experimental data, the path loss exponent turns out to be n = 1.71, smaller than that for free space due to reflection from the metal walls. to check that differences between the two propagation directions were indeed statistically insignificant, we also applied the regression procedure separately to the two propagation scenarios. the exponents for centerline and off-axis measurements turned out to be nc = 1.67 and no = 1.73, respectively. the associated 95% confidence intervals are shown in table 1. because of the overlap between them and with the confidence interval for the indistinct regression, propagation in the two configurations can be assumed to be very similar.

table 1. path loss exponent's 95% confidence intervals.
n, overall      nc, centerline    no, off-axis
[1.61; 1.81]    [1.53; 1.81]      [1.60; 1.86]

figure 4 shows that the gaussian model fits the experimental distribution of the residuals from the linear regression, as expected from model (1). residuals from the whole set of propagation data ("overall" propagation) are normal with a p-value p = 10% (figure 4.a), while p = 12.7% and p > 15% for centerline and off-axis propagation, respectively. mean values of the residuals are zero with p = 99.7%, p = 85.6% and p = 68.1% for the whole measurement set, centerline and off-axis propagation, respectively. the 95% confidence intervals for the mean values are [-0.304; 0.305], [-0.39; 0.47] and [-0.50; 0.33], respectively. moreover, we cannot reject the hypothesis of equal variances for the centerline and off-axis scenarios unless we accept a risk at least equal to 29.1%. the 95% confidence interval for the ratio of the two variances is [0.87; 1.61].

figure 4. residuals of regression: a) residuals of the whole set of propagation measurement data; b) residuals for centerline and off-axis propagation.
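the estimation of n from model (1) is a plain least-squares fit of the measured relative path loss against 10 log10(d/d0). the sketch below illustrates this with synthetic stand-ins for the measured samples (the separations, noise level and seed are assumptions of the sketch, not the campaign's data).

```python
# sketch of the regression used to estimate the path-loss exponent n in
# model (1): pl_r(d) = 10 n log10(d/d0) + x, with d0 = 1 m.
import numpy as np

d0 = 1.0
d = np.arange(1.0, 20.0, 0.1)                       # antenna separations [m]
rng = np.random.default_rng(1)
pl_true = 10 * 1.71 * np.log10(d / d0)              # n = 1.71 as reported above
pl_meas = pl_true + rng.normal(0.0, 2.0, d.size)    # x: multipath scatter [db]

# least-squares fit of pl against 10*log10(d/d0); the slope is n
a = 10 * np.log10(d / d0)
n_hat, intercept = np.polyfit(a, pl_meas, 1)
residuals = pl_meas - (n_hat * a + intercept)       # these are tested for normality
print(f"estimated n = {n_hat:.2f}, residual std = {residuals.std():.2f} db")
```

the residuals returned by such a fit are exactly the quantities whose gaussian behavior is checked in figure 4.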
3.2. frequency and impulse response

figure 5.a shows the frequency response (magnitude and phase) of the propagation channel for horizontal polarization with 10 m separation. in the figure, the responses with the smallest and largest rms delay spread τrms, calculated according to [23] and described in the next section, are plotted. the squared magnitudes of the associated impulse responses are plotted in figure 5.b.

figure 5. frequency and impulse response: a) magnitude and phase frequency response for minimum and maximum delay spread propagation; b) impulse response for minimum and maximum delay spread propagation.

3.3. rms delay spread τrms

the mean excess delay and the rms delay spread τrms are quantities used to estimate the time dispersion properties of the multipath channel. the mean excess delay is the first central moment of the power delay profile:

\bar{\tau} = \frac{\sum_k P(\tau_k)\,\tau_k}{\sum_k P(\tau_k)} ,  (2)

while the rms delay spread τrms is defined as the square root of the second central moment of the power delay profile:

\tau_{rms} = \sqrt{\overline{\tau^2} - \bar{\tau}^2} ,  (3)

where \overline{\tau^2} is the mean square value:

\overline{\tau^2} = \frac{\sum_k P(\tau_k)\,\tau_k^2}{\sum_k P(\tau_k)} .  (4)

table 2 shows the sample mean values and the sample variances of τrms and τ̄. for both parameters the mean value increases with the separation between antennas, but for the mean excess delay the increment has a larger magnitude.

table 2. sample mean and sample variance for τ̄ and τrms [ns].
dist. [m]  pol.    τ̄       s²       τrms     s²
5          v      28.05    23.23    16.77    23.43
10         v      51.07    61.78    20.31    23.33
15         v      63.91    49.84    21.63    39.56
5          h      22.44    10.96    12.51    13.03
10         h      40.78     9.92    15.24    18.15
15         h      62.69    65.61    18.64    42.25

the relation between the rms delay spread and the mean excess delay, shown in figure 6, is linear and follows the model:

\tau_{rms} = \alpha + \beta\,\bar{\tau} ,  (5)

where it can also be seen that for vertical polarization the correlation coefficient ρ is usually smaller: its values are 76.2%, 80.3% and 78.9% at 5, 10 and 15 m for horizontal polarization, and 53.7%, 65.0% and 66.7% at the same distances for vertical polarization. table 3 reports the values of the α and β parameters.

figure 6. τrms vs τ̄ for each combination of input factors.

table 3. parameters of the linear regression τrms vs τ̄.
dist. [m]  pol.     α        β
5          v       1.63     0.54
10         v      -0.07     0.40
15         v     -16.36     0.59
5          h      -9.19     0.83
10         h     -29.16     1.09
15         h     -21.08     0.63
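eqs. (2)-(4) translate directly into a few lines of code. the following is a minimal sketch; the two-cluster power delay profile used as input is made up for illustration, not measured data.

```python
# direct implementation of eqs. (2)-(4): mean excess delay and rms delay
# spread from a sampled power delay profile p(tau_k).
import numpy as np

def delay_spread(tau, p):
    """return (mean excess delay, rms delay spread) of a power delay profile."""
    tau_mean = np.sum(p * tau) / np.sum(p)             # eq. (2)
    tau2_mean = np.sum(p * tau**2) / np.sum(p)         # eq. (4)
    return tau_mean, np.sqrt(tau2_mean - tau_mean**2)  # eq. (3)

tau = np.arange(0.0, 200.0, 5.0)                       # delays [ns]
# synthetic profile: exponential main cluster plus a weaker late cluster
p = np.exp(-tau / 30.0) + 0.3 * np.exp(-(tau - 80.0) ** 2 / 200.0)
tau_bar, tau_rms = delay_spread(tau, p)
print(f"mean excess delay = {tau_bar:.1f} ns, rms delay spread = {tau_rms:.1f} ns")
```

applied to each measured profile, this yields the per-configuration samples summarized in table 2 and regressed in eq. (5).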
furthermore, it is necessary to verify that hypotheses for applicability of the anova methodology are satisfied.  figure 9. frequency correlation functions.  a) main effects plot of polarization and distance for bc b) interaction plot between polarization and distance for bc  figure 10. main effects and interactions for bc.  table 2. sample mean and sample variance for   and  rms .  dist. pol. rms s2 s2 5 v 28.05 23.23 16.77 23.43 10 51.07 61.78 20.31 23.33 15 63.91 49.84 21.63 39.56 5 h 22.44 10.96 12.51 13.03 10 40.78 9.92 15.24 18.15 15 62.69 65.61 18.64 42.25 acta imeko | www.imeko.org  february 2015 | volume 4 | number 1 | 49  figure 8 shows results of such validation. in figure 8.a we see that although variances σ2 can be assumed to be equal between horizontal and vertical polarization at a given distance, they cannot be considered to be equal in general assuming either normal (bartlett’s test) or continuous distributions (levene’s test). in figure 8.b it is shown that the hypothesis of normal distribution of data for each combination of the input factors, unlike expected [24], is only verified for 10 m separation. both conditions pose a limit to the validity of results of anova although the very small p-values associated to the effect of distance and polarization may compensate for it. to further test validity of results, the non-parametric kruskal-wallis test has been applied. it is equivalent to a one-way anova, therefore interaction of factors cannot be determined. it does not require normality of data but does rely on equal variances assumption. by application of the test on τrms it turns out that both distance and polarization are significant at p < 0.1%, confirming the aforementioned outcomes from anova. it must be said however that variances as a function of distance cannot be considered equal (p < 0.1%) while they can be considered equal as a function of polarization (levene’s test, p = 46.9%). 3.4. coherence bandwidth bc  coherence bandwidth bc is the frequency separation at which the frequency correlation function crosses a certain level c. typical values of c are 0.9 and 0.5 [23]. figure 9erro! a origem da referência não foi ncontrada. shows a typical behavior of such functions for each polarization and distance. it must be noted that oscillations in the curves are caused by the multipath propagations and directly related to the amplitude of τrms. in table 4 the sample mean and the sample variance of bc for each configuration are reported. figure 10.a shows the main effects of distance and polarization on bc, while figure 10.b shows the interaction plot between the two quantities. figure 10.a shows that on average, the two polarizations cause a significant change in the amplitude of bc, namely the vertical polarization typically presents a lower value. again, this may be due to the different boundary conditions encountered by vertically and horizontally polarized waves. the plot on the right of figure 10.a shows that at greater distances the coherence bandwidth decreases, in accordance to the increase of the delay spread shown in figure 7 and the behavior shown in figure 9. the interaction plot in figure 10.b shows that the reduction in bc when the distance changes between 5 m and 15 m is slightly different for the two polarizations. the interaction plot therefore seems to indicate that there is small interaction between the two factors. again, for further analysis the analysis of variance (anova) was run on experimental data. 
again, for further analysis, the analysis of variance (anova) was run on the experimental data. results show that, while distance and polarization are indeed statistically significant as causes of the variations observed in the experimental data (p < 0.1%), interaction can be considered significant at a level of 3.6%. as for τrms, the value of the r²adj statistic (r²adj = 23%) suggests that more factors should be included in the model to explain the variability in the experimental data. again, this is expected, just as for τrms and for the same reasons. validation of the hypotheses for applicability of the anova methodology is shown in figure 11. figure 11.a shows that, although the variances σ² can be assumed to be equal at each polarization for the different distances, they cannot be considered equal in general, assuming either normal (bartlett's test) or continuous distributions (levene's test). as for the normality of the bc data, it must be said that there is no evidence in the scientific literature of a specific distribution. we found (see figure 11.b) that the best fit is with a lognormal cdf in all cases except for the horizontal polarization at 5 and 15 m. as for the delay spread, both conditions pose a limit to the validity of the anova results, although the very small p-values associated with the effects of distance and polarization may compensate for this. the non-parametric kruskal-wallis test has also been applied to the bc data. again, results show that both distance and polarization are significant at p < 0.1%. in this case, however, the variances as a function of distance can be considered equal (levene's test, p = 29%), while they cannot be considered equal (p < 0.1%) as a function of polarization.

figure 11. validation of anova hypotheses for bc: a) 95% confidence intervals for the standard deviation s [mhz] (bartlett's test statistic 60.82, p = 0.000; levene's test statistic 10.28, p = 0.000); b) experimental cdf's and log-normal fit.

3.5. coherence bandwidth bc vs. delay spread τrms

in the literature, the functional relation between bc and τrms is often identified by the following expression [23]:

B_c = \frac{1}{50\,\tau_{rms}} ,  (6)

by the more generic model [25]:

B_c = \frac{\alpha}{\tau_{rms}} ,  (7)

or by an even more generic one,

B_c = \frac{a}{\tau_{rms}^{\,b}} ,  (8)

in agreement with [26] and [27]. values of the coherence bandwidth bc versus the rms delay spread τrms are shown in figure 12 together with the linear fit; the log-log representation changes eq. (8) to:

\log_{10} B_c = \log_{10} a - b\,\log_{10} \tau_{rms} .  (9)

in figure 12.a regression has been applied to all available data, while in figure 12.b each combination of polarization and distance has been considered separately.

figure 12. bc vs. τrms: a) overall behavior; b) for each combination of input factors.

values for a and b and their 95% confidence intervals are shown in table 5 and in figure 13. values are typically compatible, although the interval for the configuration at d = 5 m with vertical polarization does not overlap with the other intervals.

table 5. values and 95% confidence intervals for a and b.
dist. [m]  pol.    a       95% c.i.         b       95% c.i.
5          v      1.90    [1.79; 2.01]     1.02    [0.93; 1.11]
10         v      2.28    [2.13; 2.43]     1.28    [1.16; 1.39]
15         v      2.06    [1.92; 2.21]     1.13    [1.02; 1.24]
5          h      2.03    [1.89; 2.18]     1.12    [0.99; 1.25]
10         h      2.28    [2.11; 2.45]     1.27    [1.13; 1.42]
15         h      2.16    [2.02; 2.30]     1.20    [1.08; 1.31]
overall           2.06    [2.01; 2.12]     1.13    [1.08; 1.17]

figure 13. 95% confidence intervals: a) for parameter a; b) for parameter b.

table 6. values of ρ.
overall   hor. 5 m   hor. 10 m   hor. 15 m   vert. 5 m   vert. 10 m   vert. 15 m
91.2%     88.0%      89.4%       92.6%       92.5%       93.0%        91.5%

the large values of the correlation coefficient ρ reported in table 6 for the measurement data in both figure 12.a and figure 12.b show that τrms and bc are strongly correlated and that their relationship is very close to a linear function.
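the fit of eqs. (8)-(9) is an ordinary linear regression in log-log coordinates. the sketch below shows the procedure on synthetic (τrms, bc) pairs generated around the overall parameters of table 5; treating the tabulated a as the log-log intercept, and τrms in ns with bc in mhz, are assumptions of this sketch.

```python
# sketch of fitting the power-law model (8), b_c = a / tau_rms**b, through its
# log-log form (9). the data pairs are synthetic stand-ins for the measured ones.
import numpy as np

rng = np.random.default_rng(3)
tau_rms = rng.uniform(8.0, 45.0, 150)                         # rms delay spread [ns]
log_bc = 2.06 - 1.13 * np.log10(tau_rms) + rng.normal(0, 0.05, 150)
b_c = 10.0**log_bc                                            # coherence bandwidth [mhz]

x, y = np.log10(tau_rms), np.log10(b_c)
slope, intercept = np.polyfit(x, y, 1)                        # y = intercept + slope * x
rho = np.corrcoef(x, y)[0, 1]                                 # cf. table 6
print(f"intercept (log10 a) = {intercept:.2f}, b = {-slope:.2f}, rho = {rho:.1%}")
```

the correlation coefficient computed on the log-log pairs is the quantity reported in table 6, which is why values above 90% indicate a nearly linear relationship in that representation.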
4. conclusion

the characterization of the propagation channel aboard trains with a narrow-band methodology has been presented. results show that the propagation law is very close to the free-space one, without significant differences between the propagation along the centerline and off-axis. the delay spread and coherence bandwidth are strongly dependent on the polarization of the propagating signal and on the distance between the transmitting and receiving antennas, horizontal polarization typically showing smaller spreads and thus larger coherence bandwidths. however, a detailed statistical analysis shows that the variability in the data cannot be explained only by the contributions of polarization and distance, and therefore more factors should be included in the model. this is not unexpected, given the very complex propagation environment. finally, the parameters of the model describing bc as a function of τrms typically do not depend on the propagation configuration.

references

[1] d. gallo, c. landi, n. pasquino, "an instrument for objective measurement of light flicker," measurement, vol. 41, no. 3, pp. 334-340, april 2008.
[2] a. mariscotti, a. marrese, n. pasquino, "time and frequency characterization of radiated disturbances in telecommunication bands due to pantograph arcing", ieee i2mtc international instrumentation and measurement technology conference, proceedings, may 2012, pp. 2178-2182. doi: 10.1109/i2mtc.2012.6229310
[3] a. mariscotti, a. marrese, n. pasquino, "experimental investigation on radiated emissions generated by pantograph arcing and their effects on telecommunication bands", 20th imeko world congress 2012, busan, rep. of korea, sept. 2012, vol. 2, pp. 1175-1178.
[4] a. mariscotti, a. marrese, n. pasquino, r. schiano lo moriello, "time and frequency characterization of radiated disturbance in telecommunication bands due to pantograph arcing," measurement, vol. 46, no. 10, pp. 4342-4352, dec. 2013. doi: 10.1016/j.measurement.2013.04.054
[5] v. deniau, s. dudoyer, m. heddebaut, a. mariscotti, a. marrese, n. pasquino, "test bench for the evaluation of gsm-r operation in the presence of electric arc interference," electrical systems for aircraft, railway and ship propulsion (esars), oct. 2012.
[6] s. knorzer, m. baldauf, t. fugen, w. wiesbeck, "channel modelling for an ofdm train communications system including different antenna types," in vehicular technology conference, 2006. vtc-2006 fall, ieee 64th, sept. 2006, pp. 1-5.
[7] s. knorzer, m. baldauf, t. fugen, w. wiesbeck, "channel analysis for an ofdm-miso train communications system using different antennas," in vehicular technology conference, 2007. vtc-2007 fall, ieee 66th, 30 sept. - 3 oct. 2007, pp. 809-813.
[8] j. lu, g. zhu, b. ai, "radio propagation measurements and modeling in railway viaduct area," in wireless communications networking and mobile computing (wicom), 2010 6th international conference on, sept. 2010, pp. 1-5.
[9] r. he, z. zhong, b. ai, j. ding, "an empirical path loss model and fading analysis for high-speed railway viaduct scenarios," antennas and wireless propagation letters, ieee, vol. 10, pp. 808-812, 2011.
[10] j. lu, g. zhu, c. briso-rodriguez, "fading characteristics in the railway terrain cuttings," in vehicular technology conference (vtc spring), 2011 ieee 73rd, may 2011, pp. 1-5.
[11] r. he, z. zhong, b. ai, j. ding, "propagation measurements and analysis for high-speed railway cutting scenario," electronics letters, vol. 47, no. 21, pp. 1167-1168, 2011.
[12] n. kita, t. ito, s. yokoyama, m.-c. tseng, y. sagawa, m. ogasawara, m. nakatsugawa, "experimental study of propagation characteristics for wireless communications in high-speed train cars," in antennas and propagation, 2009. eucap 2009, 3rd european conference on, march 2009, pp. 897-901.
[13] t. ito, n. kita, w. yamada, m.-c. tseng, y. sagawa, m. ogasawara, m. nakatsugawa, t. sugiyama, "study of propagation model and fading characteristics for wireless relay system between long-haul train cars," in antennas and propagation (eucap), proceedings of the 5th european conference on, april 2011, pp. 2047-2051.
[14] w. dong, g. liu, l. yu, h. ding, j. zhang, "channel properties of indoor part for high-speed train based on wideband channel measurement," in communications and networking in china (chinacom), 2010 5th international icst conference on, aug. 2010, pp. 1-4.
[15] e. balboni, j. ford, r. tingley, k. toomey, j. vytal, "an empirical study of radio propagation aboard naval vessels," in antennas and propagation for wireless communications, 2000 ieee-aps conference on, 2000, pp. 157-160.
[16] e. l. mokole, m. parent, t. t. street, e. tomas, "rf propagation on ex-uss shadwell," ieee conf. anten. propag. wireless commun., pp. 153-156, 2000.
[17] d. r. j. estes, t. b. welch, a. a. sarkady, h. whitesel, "shipboard radio frequency propagation measurements for wireless networks," in proceedings ieee military communications conference milcom, vol. 1, 2001, pp. 247-251.
[18] p. nobles, r. scott, "wideband propagation measurements onboard hms bristol," in proceedings ieee military communications conference milcom, vol. 2, 2003, pp. 1412-1415.
[19] p. bifulco, a. liccardo, a. mariscotti, a. marrese, n. pasquino, r. schiano lo moriello, "wide-band and narrowband characterization of the propagation channel in trains," international review of electrical engineering (iree), vol. 8, is. 5, oct. 2013, pp. 1467-1472.
[20] a. mariscotti, a. marrese, n. pasquino, r. schiano lo moriello, "characterization of the propagation channel aboard trains," in 19th imeko tc-4 symposium, proceedings of, barcelona, spain, july 2013, pp. 339-344.
[21] l. angrisani, m. d'apuzzo, d. grillo, n. pasquino, r. schiano lo moriello, "a new time-domain method for frequency measurement of sinusoidal signals in critical noise conditions," measurement, vol. 49, is. 1, march 2014, pp. 368-381. doi: 10.1016/j.measurement.2013.11.034
[22] j. turkka, m. renfors, "path loss measurements for a non-line-of-sight mobile-to-mobile environment," in its telecommunications, 2008. itst 2008, 8th international conference on, oct. 2008, pp. 274-278.
[23] t. rappaport, wireless communications: principles and practice. prentice hall ptr, 1996.
[24] h. hashemi, d. tholl, "analysis of the rms delay spread of indoor radio propagation channels," communications, 1992. icc '92, conference record, supercomm/icc '92, ieee international conference on, vol. 2, pp. 875-881, 14-18 jun 1992.
[25] g. morrison, m. fattouche, h. zaghloul, "statistical analysis and autoregressive modeling of the indoor radio propagation channel," in universal personal communications, 1992. icupc '92 proceedings, 1st international conference on, 1992, pp. 04.03/1-04.03/5.
[26] m. s. varela, m. g. sanchez, "rms delay and coherence bandwidth measurements in indoor radio channels in the uhf band," vehicular technology, ieee transactions on, vol. 50, no. 2, pp. 515-525, mar 2001.
[27] s. howard, k. pahlavan, "measurement and analysis of the indoor radio channel in the frequency domain," instrumentation and measurement, ieee transactions on, vol. 39, no. 5, pp. 751-755, 1990.

acta imeko may 2014, volume 3, number 1, 41 – 46

impact of modern instrumentation on the system of basic concepts in metrology

j.m. jaworski, j. bek, a.j. fiok, warsaw university of technology, warsaw, poland

section: research paper
keywords: metrology, measurement model, error
citation: j.m. jaworski, j. bek, a.j. fiok, impact of modern instrumentation on the system of basic concepts in metrology, acta imeko, vol. 3, no. 1, article 10, may 2014, identifier: imeko-acta-03 (2014)-01-10
editor: luca mari, università carlo cattaneo
received may 1st, 2014; in final form may 1st, 2014; published may 2014
copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited

1. introduction

every field of human activity has its own system of basic concepts. if the development of a discipline has been relatively slow and uniform, then its system of basic concepts is complete, well-ordered and well-defined. usually, fast progress in the discipline destroys the completeness and orderliness of that system; some concepts lose their sense, the sense of many others has to be changed, and there is a need to introduce new ones. in the last decade the progress in measurement science has been generally very significant and fast, but not uniform. it has been greatest and most rapid in instrumentation, mainly due to: (i) the proliferation of measurement needs in all fields of science and technology; (ii) the increase in complexity of the measurement tasks to be performed in practice; (iii) great advances in technology, mostly in electronics (microprocessors, lsi and vlsi circuits and computers), which make possible the implementation of instruments and systems automatically realizing nearly all the physical and mathematical operations necessary in measurement practice.
new instruments (e.g. dynamic signal analysers, digital oscilloscopes) have introduced into metrology changes not only of a quantitative character but also of a qualitative one (new types of measurements, e.g. of spectrum and cepstrum) – measurements giving deeper knowledge about the measured object than was possible before. in this paper the authors have tried to show that the existing system of basic metrological concepts does not correspond to the contemporary state and expected development of instrumentation and to the up-to-date level of theoretical knowledge. a new system is necessary in the everyday work of all metrologists: designers and users, as well as teachers of measurement and instrumentation at all levels of education. as the reference for our considerations we take the "international vocabulary of basic and general terms in metrology" [1], which is one of the latest texts dealing with the system of basic concepts. in the paper, general rules for the creation of a complete system of basic concepts, as well as definitions of the most important new concepts and new definitions of some old concepts, are proposed. these proposals are based both on the authors' own experience and on a review of up-to-date technical literature, conference proceedings and catalogues of modern instrumentation.

this is a reissue of a paper which appeared in the proceedings of the 11th triennial world congress of the international measurement confederation (imeko), "instrumentation for the 21st century", 16-21.10.1988, houston, pp. 125-134.

abstract: the paper is devoted to a new system of basic concepts in metrology which better fits the development of contemporary instrumentation and measurement science. the existing concepts are confronted with current needs; their modifications or new concepts are proposed. in particular, the concepts of quantity and measurement method have been broadened and generalised. the proposed system is based on an understanding of measurement as an experiment of parameter identification of the model of the measured object. one of the most important new concepts introduced here is the concept of the measuring metasystem, which comprises not only the instrumentation but also the measured object, a "measurement interface" between the object and the instrumentation, and the operator.

2. an outline of the development of instrumentation and measurement methods

in the development of measuring instruments and measurement methods, three partially overlapping periods may be distinguished. first there was a period of mechanical instruments and direct-comparison methods of measurement. the overwhelming majority of instruments were of the deflection type, e.g. the pressure gauge, rotameter, liquid thermometer and electromechanical voltmeter. typical measurements by direct-comparison methods were the measurement of length by means of a gauge, of angle by means of a protractor, of voltage by means of a potentiometer, or of resistance by means of an electrical bridge. in the second period, electrical and electronic instruments and conversion-based measurement methods were the dominant ones. measuring transducers made it possible to measure non-electrical quantities using electrical methods. electronic instruments became the most important in the measurement first of electrical and then also of other quantities. at the culmination of that era, digital instruments were introduced into everyday use.
the third stage of instrumentation development, closely connected with the "electronic" one, is the stage of computerization. the dominant measurement methods are indirect ones based on computerized signal processing and calculations. the computer becomes the "heart" of a measuring instrument or system; it controls the whole measurement process and processes the data carried by the signals from the sensors. the function of traditional readout devices has been taken over by monitors. knowledge of the design of measurement algorithms and of computer programming becomes of major importance for anybody who wants to measure.

3. contemporary tasks of metrology

in classical metrology a measurement has been treated mainly as an experimental determination of the value of a quantity. but in everyday practice we are mainly interested in obtaining quantitative information about the state or properties of a given object, while a physical quantity accessible to instruments is only a carrier of the required information. modern instrumentation offers new technical facilities enabling us to extract this information. to achieve this, the tasks of modern metrology have been extended in comparison with the classical ones. contemporary measurements are aimed at determining not only the value of a quantity but [2, 3] also:
– the variation of a physical quantity with time, e.g. its temporal distribution or "waveform",
– the distribution of a quantity in space, e.g. in the measurement of surface geometry,
– mathematical representations of quantities, e.g. spectral density,
– relations between quantities, e.g. the current-voltage characteristic of an electric element, their distributions or representations,
– parameters of such relations, e.g. attenuation.
moreover, in many cases we now perform measurements which differ from the traditional ones in their basic general character. to such "new types" of measurements belong: (i) complex concurrent measurements of several interrelated quantities which, through the relations between these quantities, characterize the state and properties of the investigated object; this aspect of contemporary measurements is sometimes called "complexity of measurement"; (ii) measurements in which obtaining information about the investigated object requires exciting it with an appropriate stimulus and sensing a response signal [2, 3, 6]; such measurements, common in the investigation of many electrical, mechanical, optical and physico-chemical properties of a great variety of measured objects, can be called "active" measurements; (iii) measurements which include operations typical of diagnostics, quality control, image recognition, model identification and so on.

4. assumptions of the proposed system of basic concepts of metrology

as a result of the analysis of contemporary measurements, we decided to propose a new system of basic concepts in metrology. let us mention some important assumptions of the proposed system. the structure of the system of basic concepts in a particular science should correspond with the structure of that science. we have assumed that [2, 5] metrology – in other words, the science of measurement and instrumentation – can be divided into four parts: theory, technique, instrumentation and legislation. the theory of metrology (we shall call it measurement theory) is the system of general laws of metrology.
technique in metrology (we shall call it measuring technique) includes the knowledge concerning purposeful, usually theory-based, activity connected with the planning, organization and execution of measurement as well as the evaluation, verification and interpretation of measurement results. instrumentation connotes all the knowledge on measuring instruments, systems and devices. legislation (i.e. legal metrology) is a system of law regulations concerning measurements. we proposed to group the basic concepts into six parts: 1) fundamentals of metrology 2) measurement and its component operations 3) errors 4) measuring instruments and their properties 5) measuring technique 6) legal metrology up to now our work has been concentrated on the first three parts and the fifth one, so the concepts dealing with measuring instruments and legal metrology have not been considered in this paper. the proposed system covers in principle the concepts connected with measurements in the exact sciences (first of all in physics and technology). for other fields like biology, psychology or economics, perhaps some modifications of the system may be needed. our considerations have been aimed at the system of concepts. the concepts are of primary importance; the terms (names of concepts) are also of importance, but in the stage reported here our work has been concentrated on the concepts, not on their names. of course it has been necessary to use names, but in a lot of cases they are only tentative ones; in some cases no existing word fits the needed meaning, so we have tried to propose neologisms. the classical approach to metrology has assumed that the subject of measurement is a physical quantity [1]. the basic assumption of our approach is that the subject of measurement is a physical object and measurement is treated as an experiment of parameter identification of the model of the measured object [2, 3, 5, 6]. 5. measured object, its model, quantities classical metrology defines [1] measurement as "the set of operations having the object of determining the value of a quantity" and quantity as "an attribute of a phenomenon, body or substance, which may be distinguished qualitatively and determined quantitatively". many metrologists consider quantities as real entities being the objects of measurement or experiment. in fact, a quantity understood in the classical sense is an abstract concept defined for idealized (abstract) objects, different from the real objects which are to be measured. in our opinion no measurement can be separated from a part of objective reality – a physical object (called the measured object) whose chosen properties are to be quantitatively determined as a result of the measurement. a quantitative determination is a determination by means of mathematical categories. the categories describing the properties of the object must be exactly defined. to do this we must choose, assume or create a mathematical model of the measured object. a measurement-oriented mathematical model of the measured object is a mathematical or equivalent (e.g. circuit-type) description of all its properties, and relations between them, which are relevant to the measurement task. the concept of quantity is and will remain one of the primary concepts in metrology and in the mathematical modelling of physical objects. we propose only to broaden and generalize this basic concept.
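as a minimal illustration of the model structure just described (our notation and example, not the authors'), the measurement-oriented model of a simple resistive element can be written as:

```latex
% illustrative measurement-oriented model of a resistive element
% (our notation): the model parameter R is the measurand.
\begin{align*}
  \text{modelling quantities:} \quad & Q = \{\, u(t),\; i(t) \,\} \\
  \text{model equation:}       \quad & u(t) = R\, i(t)            \\
  \text{model parameter:}      \quad & R \quad (\text{the measurand})
\end{align*}
```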
our approach takes into account that: (i) generally, a quantity itself is not a property of real objects; it only models a particular property of theirs within the bounds of the assumed mathematical model of the object, (ii) temporal and/or spatial aspects of object properties can be of major importance, (iii) it is necessary to distinguish clearly different concepts named with the same term "quantity". in the classical sense quantities are defined as families of some properties of idealized entities abstracted from relations of equivalence – verifiable by idealized experiments. for instance, ajdukiewicz [8] gives the definition of length: "length is a family of ideal properties abstracted from the relation of congruence of line sections". the definition of quantity cited at the beginning of this clause, given in [1] and common in classical metrology, has the same character and fails in many practical measurement situations. for example it is not easy to utilize the above definition of length, very good for the "measurement" of an abstract object (a line section), in the measurement of a real 3-dimensional object such as a cylinder. real objects are always "imperfect" in comparison with the abstract objects for which physical quantities (in the classical meaning) have been defined. in our opinion quantities should be treated as elements of the assumed mathematical model of the measured objects. in this sense quantities model elementary properties of the object and of phenomena in it. to distinguish clearly the two meanings of the term, we will call quantities being part of the model of a real object "modelling quantities", in contrast to those defined for abstract objects, called by us (if necessary) "abstract quantities". the mathematical model of the object consists in general of: (i) a set of modelling quantities, (ii) a set of equations which model relations between properties of the object and phenomena in it (we shall call them model equations). sometimes the model may consist of modelling quantities only. the set of modelling quantities, their mathematical character and the type of model equations give the structure of the model. the coefficients of the model equations are parameters of the model. it is possible to generalize the concept of model parameter as in [6]. the mathematical model of a physical object is always an approximation; if the accuracy of the model is high (for the particular object measured for a given aim in given conditions) we may ignore this fact. but in some cases errors of this approximation can be of major importance from the point of view of the accuracy of the whole measurement; sometimes the measurement results may even be completely without sense due to the assumption of an oversimplified model, e.g. a one-dimensional one (when measuring the diameter of a cylinder) or a linear one (when measuring the capacitance of a ferroelectric capacitor in the range of high voltages). for a particular class of objects it is possible to construct or choose a series of models of different accuracy. the utilized model should be optimized from the point of view of the particular measurement task: as simple as possible to make measurement easier and cheaper; as complicated as necessary to include all the factors which may preclude obtaining reliable measurement results of sufficient accuracy. the different meanings of the term "quantity" which should be distinguished are as follows: (i) quantity as a property of abstract objects (abstract quantity), (ii) manifestation of a quantity (in an abstract sense) (perhaps it would be better to call it manifestation of a property [9]), i.e.
a specific state of the property; if two objects are equivalent in the sense of the property considered, they give the same manifestation of this property; the property can be treated as a set of manifestations and relations between them, (iii) mathematical model of a quantity, (iv) values of quantity manifestations, i.e. mathematical categories (e.g. numbers) which map manifestations of the quantity; the mathematical model of a quantity is a set of these categories and relations between them (mapping relations between manifestations). unfortunately these different concepts are usually not distinguished by different terms; the term "quantity" is commonly used for the first three concepts and the term "value of a quantity" for concepts (ii) and (iv). the world of physical quantities (treated as properties of physical objects) given by the classical metrological system of quantities is very simplified; it neglects the temporal and spatial aspects of the properties of objects. the quantities defined in this world do not depend on the time and space coordinates; we shall call them static-lumped quantities. such a concept of a static-lumped quantity should be generalized. we propose to introduce the concepts of a static-distributed quantity, a dynamic-lumped quantity and a dynamic-distributed quantity. such generalized quantities are also attributes of objects. their manifestations are mapped into different mathematical categories. the images of manifestations (we shall call them values of a quantity, spatial distributions of a quantity, temporal distributions and spatio-temporal distributions) are, respectively, real numbers, functions of the spatial coordinates, functions of time, and functions of time and the spatial coordinates. the next step in the generalization of quantity is the concept of the mathematical representation of a quantity, which is a transformation of quantity values or distributions into the domain of certain mathematical categories. signal energy, the fourier transform and the autocorrelation function are typical examples of representations. our system of basic concepts also defines the concepts of: measurement scale, additive quantity, interval quantity, vector quantity, unit of quantity, system of quantities, system of units, basic quantity etc. 6. measurement and its component operations the classical definition of measurement [1] is: "the set of operations having the object of determining the value of a quantity". the quantity in this definition is "static-lumped"; its values are real numbers. we would like to change the definition to: "measurement is the experiment carried out upon a physical object and aimed at the determination of: (i) values or distributions of quantities modelling an object, (ii) values of mathematical representations of quantities modelling an object, (iii) relations between quantities modelling an object or between representations of these quantities". the term "quantity" used in the proposed definition means a quantity in the generalized sense. the quantities, their representations, and the relations between quantities or representations (the relations may be given by parameters of the model equations) mentioned in the definition are measurands. a fact of great importance for practice is that the "values" of these measurands may be not only real numbers but also other mathematical categories such as complex numbers, functions of any kind, sets of numbers, etc. the definition proposed above is closer to the "identification" concept of measurement than the classical one.
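the generalization just introduced can be summarized compactly (a sketch in our own notation, assuming a scalar-valued property v and a representation operator T):

```latex
% images of the manifestations of the four generalized quantity classes
% (our notation; v denotes a scalar-valued property, T a representation)
\begin{align*}
  \text{static-lumped:}       \quad & v \in \mathbb{R} \\
  \text{static-distributed:}  \quad & v(\mathbf{x}), \quad \mathbf{x} \in \Omega \subset \mathbb{R}^{3} \\
  \text{dynamic-lumped:}      \quad & v(t) \\
  \text{dynamic-distributed:} \quad & v(\mathbf{x}, t) \\
  \text{representation:}      \quad & T\{v\}, \ \text{e.g. } V(f) = \int v(t)\, e^{-j 2\pi f t}\, \mathrm{d}t
\end{align*}
```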
if we only broaden the meaning of the parameters of the model, treating the values and distributions of quantities or their representations (if they are measurands) and the specific forms of relations between them as values of parameters of the model, then we can define measurement as "an experiment of parameter identification of the mathematical model of an object to be measured" [6, 7]. this approach to measurement has many theoretical and practical consequences. the most important of them seem to be [2]: (i) the widespread interpretation of measurement as a reliable method of objective cognition of reality is no longer strictly valid. by means of measurement we investigate reality, but only within the bounds of the accepted structures of its mathematical models. (ii) the parameter identification approach to measurement enables one to reinterpret the problem of the "true value" as an absolute aim and reference of measurement. (iii) this approach to measurement entails relevant changes in measuring technique, which will be discussed later on. in the classical model of measurement shown in figure 1, measurement is considered as consisting mainly of the operation of comparison of the measured quantity (or a quantity dependent on the measured quantity) with a standard quantity of known value. much less attention has been paid to the other operations: sensing of the signal from the measured object (in most cases the concept of the measurement signal was not utilized, or the signal was treated as the same as the measured quantity itself), conversion (in most cases aimed at signal conditioning) and display of the measurement result. in engineering practice, measurements of complex objects aimed at the determination of many measurands are very often executed at present, and in most cases the measurands are not directly accessible to measuring instruments. one of many examples is the measurement of magnetic materials. contemporary measuring instruments and systems require the use of a modified model of measurement. the model shown in figure 2 seems to describe modern measurements well [7]. the "inputs" of a measuring instrument (or a system) are measurement signals generated by the measured object or exciting the object. exciting (test) signals, even if generated within the instrument, are from the informative point of view inputs for the instrument. the measurement signals are in general multidimensional ones. the measured object is described by its mathematical model. the structure of the model (the set of modelling quantities and the type of model equations) is given a priori; the model parameters (in the generalized sense) are the measurands. the measuring system accomplishes first of all the operations of sensing and comparison. the operation of sensing is often associated with the operation of actuating the measured object with the appropriate stimulus (test signal). the other basic operations in the system are: signal and information processing (an enhanced version of classical conversion, covering all computerized data processing) and the formation of an output signal, a particular case of which is the visualization of the results. the purpose of signal and information processing is the determination of the measurands based on the information carried by the measurement signals. to accomplish this task the relations between the measurands and the measurement signals must be strictly determined, which is equivalent to the acceptance of a model of the measurement interface between the object and the system.
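a toy sketch of the parameter-identification view of measurement introduced at the beginning of this section (our example; the object model, an exponential step response, and all numerical values are assumptions, not taken from the paper):

```python
# toy example: measurement as parameter identification (illustrative).
# assumed object model: first-order step response y(t) = A*(1 - exp(-t/tau));
# the measurands are the model parameters A and tau, identified from the
# sensed signal rather than read directly from an indicating instrument.
import numpy as np
from scipy.optimize import curve_fit

def model(t, A, tau):
    """assumed structure of the object model (given a priori)."""
    return A * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 1.0, 200)                           # time axis, s
y = model(t, 5.0, 0.2) + 0.05 * np.random.randn(t.size)  # sensed signal + noise

(A_hat, tau_hat), cov = curve_fit(model, t, y, p0=[1.0, 0.1])
print(f"identified parameters: A = {A_hat:.3f}, tau = {tau_hat:.4f} s")
```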
figure 1. classical model of the measurement. figure 2. model proposed for modern measurements and illustration of the concept "measuring metasystem". both the structure of the object model and the structure and parameters of an interface model are decided on the basis of a-priori knowledge about the object and the system. the same is true in relation to the model of the measuring system. in fact each description of the metrological properties of a measuring instrument or system of instruments is its model. the mathematical model of a classical instrument was sometimes so simple that in some cases it was possible to give it on the front panel of the instrument. the model of a contemporary instrument is more complex; it needs many pages of text to be given. the structure and parameters of the model of the measuring instruments or system must be known to its user before the execution of the measurement. classical metrology has put the main stress on the operation of comparison; measurement methods have been distinguished mainly from the point of view of the way of comparison used (e.g. the direct-comparison method, substitution method and null-method [1]). our system of basic concepts retains these concepts but takes into account that the measurement also comprises other operations and that the measurement method is determined not only by the method of comparison but also by the methods of signal sensing and signal conversion, as well as the method(s) of signal and information processing. each operation utilized in the measurement can have its "own" methods of realization of its task, and some of them should be included in the system of concepts. the functioning of contemporary instruments, especially computerized and intelligent ones, is based on the sensing and processing of signals. classical metrology paid very little attention to the informative and signal aspects of measurement. so it is necessary to define – from the point of view of metrology – several general concepts (e.g. measurement information, measurement signal, parameters of a measurement signal) as well as some specific concepts connected with forms of the measurement signal, types of carried information, types of signal conversion etc. the classical model of measurement shown in figure 1 is a particular, very simplified case of the proposed more general model which was discussed above. in the classical model of measurement the model of the measured object is reduced to the quantity modelling the object. 7. errors the insufficiency of the set of concepts connected with errors in measurement considered by classical metrology now seems evident. the classical error theory deals only with the inaccuracy of measurements whose results are real numbers. the results of contemporary measurements may also be complex numbers, series of real or complex numbers, functions (both real and complex) etc., as well as parameters of functions or equations. the new set of concepts should also refer to such situations. each operation accomplished in a measuring system is associated with errors; because some of these operations were not considered by classical metrology, the corresponding errors have not been considered either. for our system of basic concepts new concepts in the error theory have been needed. an attempt to formulate a broadened and modified system of concepts of error theory has been presented in [4].
the most important elements of the system are: 1) the word "error" has two meanings: – an event of discrepancy between two entities compared, the first one being the reference for the other, the other being the realization of the first one, – a mathematical model of the discrepancy which represents it in mathematical categories. 2) in metrology, the compared entities (measurands, signal parameters) are mathematical categories or have their own mathematical models, so that the error is determined by comparison of mathematical categories in a proper space of compared categories (of e.g. complex numbers, functions of a given type etc.). there are two main kinds of errors (a toy numerical illustration is given at the end of this section): – the difference error, defined by means of the algebraic difference between the categories compared, e.g. the "classical" absolute and relative errors of measurement [1], – the metric error, defined by means of "the distance" (in the mathematical sense) between the categories compared, e.g. the mean square error as a measure of the discrepancy between two functions. errors in the domain of real numbers remain very important; their field of application is broad and in many cases it is possible to reduce other types of errors to this type. 3) two kinds of errors as mathematical models must be strictly distinguished: – the true error (more precisely the conventionally true error), defined as the mathematical model of the discrepancy between the two categories compared, – the boundary error, defined as the parameter (or parameters) determining a subspace (in the space of the categories compared) which surrounds one of the categories compared and contains the other one (or for which there is a probability large enough that it may contain the other category). in classical metrology the reference is a true value; in metrology based on the identification concept of measurement, in many cases the reference can be given by a model more exact than the one used. 4) the set of errors distinguished in classical metrology should be broadened. among others there is a need to take into account also: – the error of the model of the measured object, – the error of the model of the interface between the object and the measuring system, – the error of data processing.
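a toy numerical illustration of the difference and metric error kinds distinguished in item 2) above (our example, not the authors'):

```python
# toy illustration of the two error kinds distinguished above (our code):
# a difference error between two numbers, and a metric error - here the
# root-mean-square distance - between two functions sampled on a grid.
import numpy as np

# difference error (classical absolute / relative error)
x_ref, x_meas = 5.000, 5.012
abs_err = x_meas - x_ref                  # absolute error
rel_err = abs_err / x_ref                 # relative error

# metric error between a reference waveform and its measured realization
t = np.linspace(0.0, 1.0, 1000)
f_ref = np.sin(2 * np.pi * 5 * t)
f_meas = np.sin(2 * np.pi * 5 * t + 0.01) + 0.002 * np.random.randn(t.size)
rms_err = np.sqrt(np.mean((f_meas - f_ref) ** 2))   # "distance" in the L2 sense

print(f"absolute: {abs_err:+.3f}, relative: {rel_err:+.2%}, rms: {rms_err:.4f}")
```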
8. measuring technique measuring technique is understood as the system of reasonable and theoretically based methods of carrying out measurement. our system of basic concepts connected with measuring technique is based on two concepts: the measuring process and the measuring metasystem [4]. a measuring process is a connected series of actions and operations deliberately undertaken to carry out a given measurement. up to now the concept of a measuring system has been limited to "a complete set of measuring instruments and other equipment assembled to carry out a specified measurement task" [1]. we would like to broaden the concept to embrace (figure 2) all the things and ideas needed to carry out a given measurement, i.e.: (i) the object to be measured, (ii) the mathematical model of the object, more exactly the structure of the model and the definitions of its parameters which are the measurands for the measurement, (iii) the set of measuring instruments, its mathematical model and programs (the instrumental part of the system), (iv) the mathematical model of the interface between the object and the instrumental part of the system, (v) the program of measurement, (vi) the operator. to avoid confusion of the new meaning of the term "measuring system" with the traditional one, we propose to introduce the new term "measuring metasystem" for the measuring system understood in the new, broadened sense. the instrumental part of the metasystem might be called, as before, the "measuring system". the measuring process has some typical stages. we can group them into three main parts of this process: – preparation of the measurement, – execution of the measurement, – postprocessing of the measurement results. the object to be measured is an operand for the measurement. the specification of the measurement is the starting point of the measuring process. the specification above all consists of: – the mathematical model of the object to be measured, more exactly the structure of the model and the definitions of the measurands, – the permissible errors of the measurement, – the permissible costs and time of the measurement. in a typical situation the specification is given a priori, but in practice it may be necessary to correct elements of the specification. the main parts of the measuring process may be subdivided, e.g. the first part, preparation of the measurement, into: – creation of the conception of the measurement (fixing the model of the interface object-instruments, choosing the methods of signal sensing, conversion and processing), – design of the measuring system and of the measurement program, – implementation of the measuring system, – making the measuring system ready for proper functioning (including the identification of the mathematical model and errors of the system). a characteristic feature of the modern measuring process seems to be the need for validation and correction of partial results. the effect of each operation in the measuring process has to be examined from the point of view of its validity and congruency with the preliminary specification. the needed correction of the effect of each operation may be accomplished by a proper modification of this operation and, if necessary, of the previous ones. in the limit case even a change of elements of the measurement specification (e.g. of the object model or of the requirements dealing with errors or costs) may be necessary. the broadening of the concept of the measuring system is necessary not from the theoretical but from the very practical point of view. it is an effect of the real situation existing in contemporary measurements. it is now impossible to separate hardware from software in modern instrumentation, and the interaction (both in the hardware and software sense) between the measured object and the measuring system is of major importance. 9. summary and conclusions let us underline once more that the aim of our work reported here is rather practical; we would like to work out the theoretical tools which could be useful for designers and users of modern instrumentation and would facilitate their everyday work. we did not intend to present in this paper a complete system of basic concepts in metrology. this is impossible because the creation of a new system is not a trivial task; it requires much effort and profound discussions with outstanding experts in measurement science and instrumentation. the work has not been completed yet. we presented here only some proposals concerning the scope and the selected most important concepts connected with chosen parts of the system. your valuable opinion will be of major importance for our further work.
the most important elements of our approach to the system of basic concepts in metrology are: – the observation that the aim of measurement is to obtain not the value of an abstract quantity but an image of the measured real object mapping its properties by means of proper mathematical categories, – the conclusion that one of the first stages of the measurement process must be the choice of the mathematical model of the measured object, – treating the measurement as an experiment of parameter identification of the model of the measured object, – the generalization of the concept of quantity, – the introduction of the concept of the measuring metasystem, which consists not only of instrumentation but also of: the measured object, the measurement interface between the object and the instruments, the whole "software" (in a very broad sense) and the operator, – understanding the major importance of the parts of the measuring process prior to the execution of the measurement, and of the validation of the effects of each component operation and, if necessary, their correction. we hope we have proved that contemporary measurements need a new system of basic concepts. the system reported in the paper is one of the possible proposals. acknowledgements the authors would like to express their gratitude to many colleagues – metrologists whose views presented in various discussions seriously influenced the presented work. special thanks to prof. roman morawski, whose opinions and help in the preparation of the final version of this paper were of great importance. references [1] international vocabulary of basic and general terms in metrology, bipm, iec, iso, oiml, 1983. [2] a.j. fiok, j.m. jaworski, r.z. morawski, j.s. oledzki, a.c. urban, theory of measurement in teaching metrology at the engineering faculties, proc. 8th int. imeko tc1 colloquium on higher education, warsaw, 1986. [3] a.j. fiok, j.m. jaworski, what to teach in measurement and instrumentation, proc. 8th int. imeko tc1 colloquium on higher education, warsaw, 1986. [4] j.m. jaworski, j. bek, theory of errors in metrology. an attempt to formulate the system of basic concepts, 10th imeko congress, prague, 1985, vol. 1, pp. 124–130. [5] j. bek, a.j. fiok, j.m. jaworski, j.s. oledzki, structure of metrology and its system of basic concepts, pomiary, automatyka, kontrola, 1987, no. 8, pp. 175–177 (in polish). [6] j.m. jaworski, a new parameter model of measurement, proc. 5th int. imeko tc7 symposium on intelligent measurement (ed. by d. hofmann), jena, 1986, vol. 2, pp. 250–252. [7] j.m. jaworski, errors, noise and disturbances in measuring system, 1st int. imeko tc4 symposium on measurement, como (italy), 1986, pp. 27–32. [8] k. ajdukiewicz, pragmatic logic, pwn, warszawa, 1985 (in polish). [9] l. finkelstein, theory and philosophy of measurement, chapter 1 in: handbook of measurement science, ed. by p. sydenham, j. wiley, chichester, new york, 1982. acta imeko, february 2015, volume 4, number 1, 19 – 25, www.imeko.org development, implementation and characterization of a dsp based data acquisition system with on-board processing pedro m. pinto 1, josé gouveia 2, pedro m. ramos 3 1 instituto superior técnico/universidade de lisboa, av. rovisco pais 1, 1049-001 lisbon, portugal 2 instituto de telecomunicações, av. rovisco pais 1, 1049-001 lisbon, portugal 3 instituto de telecomunicações and instituto superior técnico/universidade de lisboa, av.
rovisco pais 1, 1049-001 lisbon, portugal section: research paper keywords: data acquisition; digital signal processing; on-board processing citation: pedro m. pinto, josé gouveia, pedro m. ramos, development, implementation and characterization of a dsp based data acquisition system with on-board processing, acta imeko, vol. 4, no. 1, article 5, february 2015, identifier: imeko-acta-04 (2015)-01-05 editor: paolo carbone, university of perugia received december 9th, 2013; in final form april 13th, 2014; published february 2015 copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited funding: work supported by fundação para a ciência e tecnologia, portugal corresponding author: pedro m. ramos, e-mail: pedro.m.ramos@tecnico.ulisboa.pt abstract: this paper presents the development, implementation and characterization of a data acquisition (daq) system capable of on-board processing of the acquired data. the system features four differential channels, with 1 mhz bandwidth, simultaneous acquisition, 9 independent bipolar ranges, and a maximum sampling rate of 600 ks/s. the analog daq inputs are protected against incorrect connections, even direct connection to the power grid voltage. this protection ensures that the daq can recover to full operation without the need to replace any damaged components or fuses. a 450 mhz sharc digital signal processor (adsp-21489) is used to control the system and perform on-board processing. the interface between the system and a personal computer is through a usb hi-speed connection. 1. introduction the development of analog to digital converters (adcs) was the cornerstone of a massive shift in the architectures of measuring systems and devices. in fact, the rapid development of fast adcs capable of achieving resolutions above 10-12 bit has revolutionized the instrumentation and measurement field and can be considered one of the crucial building blocks of our worldwide digital-content-driven society. the possibility to measure just about anything, sometimes at extremely reduced cost, has greatly increased the deployment of sensor-based measurement systems as well as the total amount of acquired data, the rapid development of signal processing algorithms to extract real-time information from the raw data, the use of data mining to analyze the results [1], and of sensor fusion to combine the measurements from multiple sources [2]. a data acquisition system is typically a system or device that can digitize an input electrical quantity at a given sampling rate. it can have multiple input channels with multiple or single input ranges. by far, most commercial systems are designed to acquire voltages. for multi-channel devices, the cheaper solution is to have a single adc and sequentially connect (using a multiplexer) the input signals to the adc input, thus enabling the acquisition of multiple channels with only one adc. this solution, well suited for many applications [3] and frequently implemented even within multipurpose micro-controllers, reduces cost but is incapable of simultaneous acquisition, and the inter-channel interference (cross-talk) is usually non-negligible. the main alternative is to have an independent adc for each channel and thus a unique signal path to the adc for each input channel. this solution is more expensive but can have lower cross-talk and can perform simultaneous acquisition.
commercial off-the-shelf (cots) data acquisition devices are used together with a computer or in some sort of specialized integrated system such as vxi [4], pxi [5], vme [6], axie [7] or compactrio [8]. the main advantage of systems that are not computer based is that stricter timing requirements and higher throughput can be achieved in those architectures, which were specifically developed to deal with the specificity of measurement systems. computer-based systems are either pci or usb based. while the former are used in desktop computers, the latter are more flexible and can also be used in portable computers. however, the throughput when using usb connections can limit the range of the combined set of number of acquired samples, sampling rate and adc number of bits. lan-based devices (of which lxi [9] is a particular case) are more frequently available, but their usage has not reached the initially predicted dissemination. data acquisition systems are currently the most important blocks in just about every measurement system. for example, they are used in power quality monitoring [10, 11], in impedance measurement and spectroscopy devices [12], in the characterization of energy-harvesting devices [13], in environment monitoring [14], in real-time reflectometry diagnostics [15], in fusion experiments [16], in power metering [17] and many other applications. most cots data acquisition devices include some sort of micro-controller or processing unit that manages the communication with the adcs, controls the input range and communicates with the system main processor (e.g., the computer). for applications where the signal processing is done in the computer, this local processor acts mainly as a gateway with little or no processing tasks executed in the actual system device (see for example [18]). however, in some applications, processing in the acquisition system is required. the reason behind this requirement is that, for example, in large measurement systems with multiple channels, some data reduction must be made before the data can be managed and processed in a timely manner. for example, in power quality monitoring or power quality qos (quality of service), the measurement nodes must acquire the samples continuously without interruption. this constitutes a challenge since sending the raw data to a computer or central processing unit for processing is hardly possible when the grid can have hundreds of such devices spread out over a large area. collecting all the data in one or more data centers and still achieving real-time processing is not feasible. in this situation, the best approach is to have each node with a processor capable of performing digital signal processing to detect events (in power quality monitoring) and store/transmit only the data of such events. in power quality qos, the nodes should quantify the qos in terms defined by the local power regulator and transmit only the final aggregate parameters at a given periodic interval. another example of a system that requires processing near the acquisition level are the detectors in the large hadron collider (lhc).
the atlas detector [19] has about 90 million channels, which requires multiple customized trigger levels and processing algorithms to reduce the amount of data that it collects, so as to discard non-relevant data or non-events and focus centralized storing and processing on the most promising data. the goal of this work is to develop, implement and characterize a 16-bit four-channel multi-range simultaneous data acquisition system with on-board signal processing capabilities. to achieve this goal, a digital signal processor is used to control the entire system and perform the on-board processing. the system has to be able to execute simultaneous acquisitions at a maximum rate of 600 ks/s and have an analog bandwidth of 1 mhz. multiple independent voltage ranges must be available to allow accurate measurement of signals with different dynamic ranges. the input channels are acquired differentially, with an impedance at low frequencies of 1 mω (single-ended) and 2 mω (for differential acquisitions), and must withstand a considerable level of incorrect usage (i.e., incorrect direct connection to the power grid) without damaging the system circuitry and still ensure operability (this means that no fuses should be used). 2. system architecture generically, a data acquisition system can be divided into three blocks. in the first block, the signal conditioning circuit adapts the input signal before digitization. this includes attenuation or amplification to account for different input ranges. another important function of the conditioning circuit is to protect the system against non-ideal conditions, like overload, incorrect usage or electrostatic discharge (esd). in the second block, after the conditioning circuit, the analog to digital converter (adc) digitizes the signal. in the third block, to control the system, a microcontroller is used. it can be a pic peripheral interface controller [18], a dsp digital signal processor [20], an arm [21] or an fpga field programmable gate array [22, 23]. this device is responsible for the configuration of the range of each channel, by programming the amplifiers' gain, controlling the sampling frequency, and collecting and transferring the sampled data. in some cases, the system has the ability to process data and control other systems, like actuators for example. in the system described in this paper, the control unit must be able to perform on-board processing, such as fft calculation for example, and so a dsp was selected. in figure 1, the proposed system architecture is shown. figure 1. system architecture. the system has four identical analog input channels, with independent gain setting and offset compensation, and individual adcs. in addition to the described modules, the system has a 16 mbit flash module to store the dsp program, which is loaded upon reset or booting. a dsp external memory module (sdram with 256 mb capacity in 16-bit words) was added to increase storage capacity and further increase flexibility in the processing of large amounts of data. usb 2.0 hi-speed is used in the communication with an external computer. in the system, it is implemented using a ft2232h device.
the speed of this interface limits the maximum sampling rate in continuous acquisition mode, when all the samples are to be transferred to the computer. 2.1. analog interface circuitry the input attenuation/protection circuit is shown in figure 2. its purpose is to attenuate the input signals 5 times and to protect the amplifiers, the adcs and the dsp from incorrect usage [24]. it is dimensioned to sustain direct input connection to the power grid without any system damage and with complete operability afterwards. this means that the user can incorrectly connect the system inputs to the power grid without damaging the device. note that the system will only operate correctly with input voltages up to ±10 v. the purpose of the input attenuation/protection circuit is that the system is not damaged if it is incorrectly used (up to the power grid voltage). therefore, the maximum operating range is ±10 v and the absolute maximum rated input voltage is ±325 v (230·√2 v). figure 2. analog signal conditioning circuit. the set r2, r3, cvar and the equivalent input capacitance from r3 onward form the basis of a compensated attenuator, as is traditional, for instance, in the analog input attenuation of oscilloscopes. the schottky diodes (d1) are included to prevent overvoltages from reaching the programmable amplifiers. r1 and r4 are included to ensure that, during transient connection and while the capacitors are discharged, the maximum current in the protection diodes does not cause their failure. although the inclusion of r1 (as shown in the equivalent circuit for single-ended acquisitions represented in figure 3, where cd is the equivalent capacitance of the diodes and c2 is the combined input capacitance of ia1 and of the offset compensation controlled switch) makes the voltage divider uncompensated, its value is dimensioned so that the extra poles (located at 10 mhz and 887 mhz) and the extra zero (located at 796 mhz) are at frequencies well above the specified target input analog bandwidth (1 mhz). figure 3. equivalent analog input circuitry for single-ended acquisitions. note that the value of r1 is selected as a compromise between the location of the lowest pole (at 10 mhz) and the current limit of the diodes. with this set of parameters, the amplitude response of the input attenuator (including r1 and the influence of the diodes) remains essentially constant until at least 1 mhz. the frequency response of the circuit was simulated and the amplitude and phase responses are shown in figure 4. the dc gain of -14 db corresponds to the 1/5 gain desired for the input attenuator and the first pole is located at 10 mhz. the phase does not reach -90º because, at 100 mhz, the influence of the other pole and the zero is already changing the initial first-order low-pass-filter-like response. figure 4. simulation results of the gain of the analog input circuit before the adc.
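the simulated behaviour just described can be reproduced qualitatively from the quoted figures alone; the following sketch builds the transfer function directly from the stated dc gain and pole/zero locations (the actual component values are not given in the text and are not reproduced here):

```python
# qualitative sketch of the input attenuator response, built only from the
# figures quoted in the text: dc gain 1/5 (-14 dB), poles at 10 MHz and
# 887 MHz, zero at 796 MHz.
import numpy as np
from scipy import signal

wz = 2 * np.pi * 796e6           # zero (rad/s)
wp1 = 2 * np.pi * 10e6           # first pole (rad/s)
wp2 = 2 * np.pi * 887e6          # second pole (rad/s)

# H(s) = 0.2 * (1 + s/wz) / ((1 + s/wp1) * (1 + s/wp2))
num = [0.2 / wz, 0.2]
den = np.polymul([1 / wp1, 1.0], [1 / wp2, 1.0])
f = np.logspace(1, 9, 500)       # 10 Hz .. 1 GHz
_, mag, phase = signal.bode(signal.TransferFunction(num, den), 2 * np.pi * f)

print(f"gain at 1 MHz: {np.interp(1e6, f, mag):.2f} dB")  # ~ -14 dB, still flat
```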
as shown in figure 2, two instrumentation amplifiers (ias) are used in each channel. each ia is an ad8250 with programmable gains of 1, 2, 5 and 10, a slew rate of 20 v/µs and a cmrr of 80 db. the gain combinations from the two amplifiers enable the implementation of the nine input ranges listed in table 1. note that, for all ranges, the input signal is always attenuated five times by the input resistive divisor (see figure 2). table 1. list of input ranges and corresponding gains of the instrumentation amplifiers and total analog gain (including also the input attenuator).
input range | ia 1 gain | ia 2 gain | ia total gain | total gain
±0.1 v      | 10        | 10        | 100           | 20
±0.2 v      | 10        | 5         | 50            | 10
±0.4 v      | 5         | 5         | 25            | 5
±0.5 v      | 10        | 2         | 20            | 4
±1 v        | 5         | 2         | 10            | 2
±2 v        | 5         | 1         | 5             | 1
±2.5 v      | 2         | 2         | 4             | 0.8
±5 v        | 2         | 1         | 2             | 0.4
±10 v       | 1         | 1         | 1             | 0.2
for example, a ±1 v input sine signal becomes a ±0.2 v sine signal after the fixed-gain attenuator and, in the ±1 v input range, is amplified 10 times (fourth column of table 1) to become a ±2 v sine signal. note that, in the last ia, a 2.048 v dc component is added before the adc converts the signal into the digital domain. although it is counterproductive to attenuate a signal and then amplify it, this solution ensures the level of protection against misuse of the device that the system is capable of withstanding. each ia gain is set by the dsp using an spi connection and a serial/parallel converter (one for each channel pair), as shown in figure 5, since it is directly related to the input range desired by the user. the ia gains are set before the acquisition starts, which means that this process is not time critical as the range is not changed during an acquisition. therefore, the use of a slow serial-to-parallel converter is well suited as it also reduces the usage of i/o ports of the dsp. three bits are used for each ia (two for the gain bits and one for the write enable). figure 5. detail of gain programming stage. 2.2. conversion into the digital domain the selected adcs (analog devices ad7980) are 16-bit sar converters with a maximum sampling rate of 1 ms/s, unipolar input voltage, -3 db input bandwidth of 10 mhz and an spi interface with the ability to operate in a daisy-chain configuration (as shown in figure 6) to share an spi connection. note that, due to the method of operation of the selected sar adcs, no sample and hold is required. figure 6. adc daisy-chain connection. each pair of adcs shares the same spi connection to the dsp. the adc range is set to [0 ; 4.096] v and, as mentioned, the vref input of the final ia (see figure 2) is used to add the dc component (2.048 v) necessary to ensure a unipolar signal at the adc inputs. in the digital domain, this fixed dc component is removed, and the gain introduced in the analog amplification stage is removed from the adc output as each 16-bit sample is converted into the voltage at the channel input, taking also into account the gain introduced by the ias.
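the sample-to-voltage reconstruction described in this section can be sketched as follows (our code; the gain values are those of table 1, while the function name and structure are our own):

```python
# sketch of the sample-to-voltage reconstruction described in section 2.2:
# remove the 2.048 V offset added before the unipolar [0; 4.096] V adc,
# then divide by the total analog gain of the selected range (table 1).
TOTAL_GAIN = {          # input range (V) -> total analog gain
    0.1: 20.0, 0.2: 10.0, 0.4: 5.0, 0.5: 4.0, 1.0: 2.0,
    2.0: 1.0, 2.5: 0.8, 5.0: 0.4, 10.0: 0.2,
}

def code_to_volts(code: int, input_range: float) -> float:
    """convert a 16-bit ad7980 code to the voltage at the channel input."""
    v_adc = code / 65536 * 4.096          # adc output in volts
    return (v_adc - 2.048) / TOTAL_GAIN[input_range]

# a full-scale sine in the ±1 V range (total gain 2) swings the adc between
# 0.048 V and 4.048 V around the 2.048 V midpoint:
print(code_to_volts(32768, 1.0))          # ~ 0.0 V (midscale)
```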
2.3. final prototype implementation the final prototype, implemented on a printed circuit board (pcb), is shown in figure 7. in the bottom part of the picture, the four bnc connectors, one for each channel, are visible. in the top middle section of the pcb is the dsp, while on the left-hand side is the usb communication integrated circuit (the usb connector is just to the left of the ic at the board edge). the system jtag connector is located in the top center of the pcb. figure 7. implemented system. the final pcb has 145 mm length and 75 mm width. note that, once the system 16 mbit flash module is programmed, the jtag is no longer needed since, upon restart, the dsp program is loaded from the flash and executed. the jtag is used for loading new programs, for testing and for programming the flash once the final firmware is ready. 3. system characterization the frequency response of the acquisition channel (including the analog circuitry and the adc) was measured with a tti tg1010a function generator controlled with ieee 488.2 and an application specifically developed in labview to control the function generator, set the channel input range, retrieve the daq samples and process the results. the processing stage includes the estimation of the measured signal amplitude using a simple, single-channel sine-fitting algorithm [25]. this characterization application then changes the stimulus frequency and automatically repeats the process for all measurement frequencies (selectable in the labview front panel) and all daq input ranges (also changing the signal generator output amplitude to match 95% of the dynamic input range for each daq range). the final results for one of the input channels and for all the input ranges are shown in figure 8. it can be seen that the overall input channel bandwidth is limited by the response of the lowest range and corresponds to about 2 mhz. however, a slight distortion was measured for signals with frequencies above 1.6 mhz due to the slew rate of the amplifiers. nevertheless, these limits ensure that the specified 1 mhz input signal bandwidth, for all ranges, is achieved. the results for the remaining three channels are similar. figure 8. input frequency responses for all ranges in one channel. the results for the other channels are identical. the delay/rising time response of one of the channels (before the adc) was measured with a tti tg1010a function generator and a 350 mhz tds5034 oscilloscope. the results are depicted in figure 9. the rise time is 57.4 ns while the delay is 112 ns. as will be shown in the simultaneous acquisition measurement, these values are within the order of magnitude of the inter-channel delay and therefore are considered acceptable. note that the main causes of the rise time and delay are the capacitors used in the analog input compensated attenuator (cvar) and the equivalent capacitance c2. figure 9. delay/rising time measurement. to characterize the inter-channel cross-talk, a sine signal near the full input range of the daq is applied to one channel while the others are short-circuited. the ratio between the generated sine amplitude and the amplitude measured at that same frequency in the other channels is a quantification of the inter-channel cross-talk. to improve the estimation of the cross-talk value, multiple acquisitions are performed and the averaged spectra are used.
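a sketch of how cross-talk (and, further below, thd) figures of this kind can be extracted from averaged amplitude spectra; the sampling parameters follow the text, while the function names and spectrum normalization are our assumptions:

```python
# sketch: cross-talk and thd estimation from averaged amplitude spectra.
# with fs = 500 kS/s and N = 65536 the bin width is 7.63 Hz, so the
# 91.5523 kHz stimulus falls on bin 12000 (coherent sampling, no leakage).
import numpy as np

fs, N = 500e3, 65536
bin_width = fs / N                          # 7.63 Hz
k0 = round(91552.3 / bin_width)             # -> 12000

def avg_spectrum(records):
    """average the amplitude spectra of several acquired records."""
    return np.mean([np.abs(np.fft.rfft(r)) / len(r) for r in records], axis=0)

def crosstalk_db(spec_stim, spec_quiet, k=k0):
    """ratio, in dB, between quiet and stimulated channels at the stimulus bin."""
    return 20 * np.log10(spec_quiet[k] / spec_stim[k])

def thd_db(spec, k, n_harm=5):
    """thd from the harmonics above the fundamental at bin k (used with the
    1250 Hz / 40 kS/s case, where all harmonics fall below Nyquist)."""
    harm = np.sqrt(sum(spec[k * h] ** 2 for h in range(2, n_harm + 2)))
    return 20 * np.log10(harm / spec[k])
```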
figure 10. measured average spectrum cross-talk measurements for 91.5523 khz. the top average spectrum represents the input sinusoidal signal while the bottom average spectrum is the one obtained on the non-stimulated channel. in figure 10, the results from one of the cross-talk measurements are presented. the stimulus signal is a 91.5523 khz sinusoidal signal. in this case, the cross-talk is -86.3 db. typical cross-talk values of around -80 db were achieved for all tested situations. the frequency value was selected to minimize spectral leakage (500 ks/s with 65536 samples corresponds to a frequency resolution of 7.63 hz and, with this frequency, the fundamental will appear in the fft at bin 12000). the frequency was also selected to be representative of the input frequency range and to avoid any overlap (caused by aliasing) from any higher harmonics of the input signal. note that multiple tests at different frequencies were also performed and it is from these tests that the overall -80 db cross-talk value was obtained. in addition to the cross-talk evaluation, a study of the harmonic distortion introduced by the system was also performed. this characterization was done with a low-distortion function generator from stanford research systems (a ds360) with thd better than -100 db (up to 20 khz). the results from two situations are shown in figure 11. the first situation corresponds to a 1250 hz sine signal with 1 v amplitude acquired at 40 ks/s. the measured thd (using 10 records to average the amplitude spectrum) is -79 db. in the second situation, the frequency is changed to 12.5 khz (the sampling rate is also increased 10-fold) and the resulting thd is -72.1 db. figure 11. measured average spectrum thd measurements. the circles represent the detected harmonic amplitudes. the top spectrum corresponds to a 1250 hz sine signal acquired at 40 ks/s and the resulting thd is -79 db. the bottom spectrum corresponds to a 12.5 khz sine signal acquired at 400 ks/s and the resulting thd is -72.1 db. 4. measurement results to demonstrate the simultaneous acquisition of the developed daq, a 10 khz sinusoidal signal was applied to all channels as they were acquired with a sampling rate of 400 ks/s. in figure 12, two of the channels are depicted. in this case, the inter-channel delay, estimated using the seven-parameter sine-fitting algorithm [26], is under 125 ns. figure 12. simultaneous acquisition of a 10 khz signal. both channels are represented (the channel 1 samples are the circles and the samples from channel 2 are the crosses). figure 13 shows one example of an acquisition as displayed in the front panel of the developed labview interface application. the application enables the selection of the channels to be acquired, their independent ranges, the number of samples and the sampling rate. figure 13. front panel of one acquisition example. in figure 14, the amplitude of the fft calculated within the device is shown. the input signal corresponds to a sinusoidal or triangular signal with 1 khz frequency and 0.4 v amplitude (generated by an agilent 33210a), sampled at 40 ks/s with 65536 samples. figure 14. example of fft calculation done within the data acquisition device. sinusoidal signal (top) and triangular signal (bottom). 5. conclusions the development, implementation and characterization of a simultaneous data acquisition system with four channels, multiple independent ranges, an analog bandwidth of at least 1 mhz, a sampling rate up to 600 ks/s and advanced on-board processing capabilities were presented. multiple results demonstrating the developed system's characterization and performance were presented. the device can be used as a development tool for advanced measurement systems requiring embedded processing, such as pq monitoring and pq qos measurements.
references [1] a. asheibi, d. stirling, d. sutanto, "analyzing harmonic monitoring data using supervised and unsupervised learning," ieee transactions on power delivery, vol. 24, no. 1, (2009), pp. 293-301. [2] n. xiong, p. svensson, "multi-sensor management for information fusion: issues and approaches," information fusion, vol. 3, no. 2, (2002), pp. 163-186. [3] m. abdallah, o. elkeelany, a. t. alouani, "a low-cost stand-alone multichannel data acquisition, monitoring, and archival system with on-chip signal preprocessing," ieee transactions on instrumentation and measurement, vol. 60, no. 8, (2011), pp. 2813-2827. [4] r. wolfe, "virtual instruments in vxi," ieee systems readiness technology conference autotestcon, (1993), pp. 183-190. [5] e. b. kushnick, "the pxi carrier: a novel approach to ate instrument development," ieee international test conference, (2005), pp. 35.2.1-35.2.7. [6] ieee std. 1014-1987, standard for a versatile backplane bus: vmebus (1987). [7] axie-1: base architecture specification, rev. 2.0 (2013), available at: http://www.axiestandard.org/images/axie1_revision_2.0_20130905.pdf [8] http://www.ni.com/compactrio/ [9] lxi device specification, rev. 1.4 (2011), available at: http://www.lxistandard.org/documents/specifications/lxi%20device%20specification%202011%20rev%201.4.pdf [10] j. mindykowski, t. tarasiuk, "development of dsp-based instrumentation for power quality monitoring on ships," measurement, vol. 43, no. 8, (2010), pp. 1012-1020. [11] t. radil, p. m. ramos, f. m. janeiro, a. c. serra, "pq monitoring system for real-time detection and classification of disturbances in a single-phase power system," ieee transactions on instrumentation and measurement, vol. 57, no. 8, (2008), pp. 1725-1733. [12] j. santos, f. m. janeiro, p. m. ramos, "impedance frequency response measurements with multiharmonic stimulus and estimation algorithms in embedded systems," measurement, vol. 48, no. 1, (2014), pp. 173-182. [13] j. j. ruan, r. a. lockhart, p. janphuang, a. vasquez quintero, d. briand, n. de rooij, "an automatic test bench for complete characterization of vibration-energy harvesters," ieee transactions on instrumentation and measurement, vol. 62, no. 11, (2013), pp. 2966-2973. [14] s. sreelal, s. jose, b. varghese, m. n. suma, r. sumathi, m. lakshminarayana, m. m. nayak, "data acquisition and processing at ocean bottom for a tsunami warning system," measurement, vol. 47, (2014), pp. 475-482. [15] j. santos, m. zilker, l. guimarais, w. treutterer, c. amador, m. manso, "cots-based high-data-throughput acquisition system for a real-time reflectometry diagnostic," ieee transactions on nuclear science, vol. 58, no. 4, (2011), pp. 1751-1758. [16] d. sanz, m. ruiz, r. castro, j. vega, j. m. lopez, e. barrera, n. utzel, p.
makijarvi, "implementation of intelligent data acquisition systems for fusion experiments using epics and flexrio technology," ieee transactions on nuclear science, vol. 60, no. 5, (2013), pp. 3446-3453. [17] a. cataliotti, v. cosentino, d. di cara, a. lipari, s. nuccio, c. spataro, "a pc-based wattmeter for accurate measurements in sinusoidal and distorted conditions: setup and experimental characterization," ieee transactions on instrumentation and measurement, vol. 61, no. 5, (2012), pp. 1426-1434. [18] h. wang, z. jie, x. nie, d. li, "design of pic microcontroller-based high-capacity multi-channel data acquisition module," international conference on measurement, information and control (mic), 2012, vol. 2, pp. 685-688. [19] g. aad et al., "the atlas experiment at the cern large hadron collider," journal of instrumentation, vol. 3, (2008), p. s08003. [20] y. tsujita, j. s. lange, c. fukunaga, "construction of a compact daq-system using dsp-based vme modules," ieee transactions on nuclear science, vol. 47, no. 2, (2000), pp. 123-126. [21] w. jiannong, w. wei, "the common data acquisition system based on arm9," 10th international conference on electronic measurement & instruments (icemi), 2011, vol. 3, pp. 324-327. [22] z. baofeng, w. ya, z. junchao, "design of high speed data acquisition system based on fpga and dsp," international conference on artificial intelligence and education (icaie), 2010, pp. 132-135. [23] p. branchini, a. budano, a. balla, m. beretta, p. ciambrone, e. de lucia, "an fpga based general purpose daq module for the kloe-2 experiment," ieee transactions on nuclear science, vol. 58, no. 4, (2011), pp. 1544-1546. [24] signal conditioning & pc-based data acquisition handbook, measurement computing, 2012. [25] ieee std. 1057-2007, ieee standard for digitizing waveform records, the institute of electrical and electronic engineers, new york, december 2007. [26] p. m. ramos, a. c. serra, "a new sine-fitting algorithm for accurate amplitude and phase measurements in two channel acquisition systems," measurement, vol. 41, no. 2, (2008), pp. 135-143. acta imeko issn: 2221-870x december 2021, volume 10, number 4, 132 – 139 continuous monitoring of the health status of cement-based structures: electrical impedance measurements and remote monitoring solutions nicola giulietti1, paolo chiariotti2, gloria cosoli1, giovanni giacometti1, luca violini1, alessandra mobili3, giuseppe pandarese1, francesca tittarelli3,4, gian marco revel1 1 department of engineering and mathematical sciences (diism), v. brecce bianche, 60131, ancona, italy 2 department of mechanical engineering, v. la masa 1, 20156, milano, italy 3 department of materials, environmental sciences and urban planning (simau), v. brecce bianche, instm research unit, 60131, ancona, italy 4 institute of atmospheric sciences and climate, national research council (isac-cnr), v.
section: research paper
keywords: concrete health monitoring; electrical impedance; remote monitoring; distributed sensor network
citation: nicola giulietti, paolo chiariotti, gloria cosoli, giovanni giacometti, luca violini, alessandra mobili, giuseppe pandarese, francesca tittarelli, gian marco revel, continuous monitoring of the health status of cement-based structures: electrical impedance measurements and remote monitoring solutions, acta imeko, vol. 10, no. 4, article 22, december 2021, identifier: imeko-acta-10 (2021)-04-22
section editor: roberto montanini, università di messina and alfredo cigada, politecnico di milano, italy
received july 26, 2021; in final form december 5, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: endurcrete (new environmental friendly and durable concrete, integrating industrial by-products and hybrid systems, for civil, industrial, and offshore applications) project, funded by the european union's horizon 2020 research and innovation programme under grant agreement n° 760639.
corresponding author: gloria cosoli, e-mail: g.cosoli@staff.univpm.it

abstract: the continuous monitoring of cement-based structures and infrastructures is fundamental to optimize their service life and reduce maintenance costs. in the framework of the endurcrete project (ga no. 760639), a remote monitoring system based on electrical impedance measurements was developed. the electrical impedance is measured according to the wenner method, using 4-electrode arrays embedded in the concrete during casting and selecting alternating current as excitation, to avoid the polarization of both the electrode/material interface and the material itself. with this measurement, it is possible to promptly identify events related to contaminant ingress or damage (e.g. crack formation). conductive additions are included in some elements to enhance the signal-to-noise ratio, as well as the self-sensing properties of the concrete. specifically, a distributed sensor network was implemented, consisting of measurement nodes installed in the elements to be monitored and connected to a central hub (rs-232 protocol). the nodes are realized with an embedded unit for electrical impedance measurements (eval-ad5940bioz board with ad5940 chip, by analog devices) and a digital thermometer (ds18b20 by maxim integrated), enclosed in cabinets filled with an ip68 gel against moisture-related problems. data are made available on a cloud through a wi-fi network or an lte modem, hence they can be accessed remotely via a user-friendly multi-platform interface.

1. introduction
with a view to optimizing the costs of the maintenance operations necessary to guarantee a service life as long and efficient as possible for cement-based structures, structural health monitoring (shm) is paramount [1]. in fact, inspections alone are not sufficient to carry out timely actions to cope with possible damage. monitoring structures and infrastructures remotely is undoubtedly advantageous, since this process provides continuous data, often available in real time, allowing prompt, proactive intervention to avoid the risk of damage in the target structure. solutions involving iot-based approaches [2] have also been proposed recently, including in the context of smart city applications [3]. these approaches have proved very promising, given that their distributed nature and connectivity make them well suited to cloud analyses [4]. indeed, it is possible to monitor the factors threatening the durability of a structure, intended as "the ability of concrete to resist
weathering action, chemical attack, and abrasion while maintaining its desired engineering properties" (american concrete institute, aci [5]). particularly aggressive exposure conditions (e.g. marine environments) and contaminants (e.g. chlorides and sulphates) undermine the durability of structures [6], requiring interventions that become more expensive the longer the time elapsed since the damage occurred (the "law of fives" by de sitter states that repair costs increase exponentially after the structure is damaged [7]). these events modify the composition and morphology of the concrete element, resulting in changes detectable through different techniques, such as ultrasound [8], [9], computer vision [10], [11], thermography [12], [13], ground penetrating radar (gpr) [14], [15], and electrical resistivity [16], to name only a few. the use of electrical resistivity/impedance (with/without cell-constant scaling, respectively) measurements for shm has grown widely [17] and achieved particular success with the introduction of conductive fillers (e.g. char, carbon black, graphene nanoplatelets, nickel powder, graphite powder, iron oxide, titanium dioxide, etc.) and fibres (e.g. virgin and recycled carbon fibres, steel fibres, carbon nanotubes, etc.) to enhance the self-sensing ability of concrete. the resulting lower electrical resistivity of the material provides a higher signal-to-noise ratio (snr) and makes it possible to use low-cost instrumentation with accuracy levels comparable to those of laboratory equipment. consequently, the possibility of exploiting more affordable electronic equipment paves the way to the development of distributed sensor networks to monitor cement-based structures in the areas most subjected to stress, penetration of contaminants and hence damage and degradation. indeed, the electrical resistivity/impedance measurement is a local measurement, with a "sensing volume" corresponding to the hemisphere whose radius equals the inter-electrode distance. sensor positioning hence becomes of utmost importance to obtain significant data. furthermore, this type of measurement is particularly attractive since it makes it possible to monitor a plethora of aspects: the ingress of contaminants [18], [19] (electrical resistivity being linked to the material's ability to transport ions, which are responsible for many degradation processes [20]), the penetration of water (particularly relevant since it carries ions and aggressive substances, such as chlorides and sulphates), changes in temperature and moisture content, the presence of stress, the formation of cracks, the corrosion of reinforcements, etc. therefore, electrical resistivity/impedance can be assumed as an essential parameter for monitoring the health status of cement-based structures.
to perform electrical impedance measurements, a proper excitation signal should be applied to the material; this can be done by means of an electric current/potential, measuring the corresponding electric potential/current and finally computing the resulting electrical impedance. electrodes are necessary to carry out the measurement; different materials and configurations are reported in the literature, and either direct or alternating current (dc and ac, respectively) can be used, but the measurement results will differ accordingly [6]. indeed, there are no widely accepted standards for this measurement in concrete, even if recommendations are available [21], [22]. in particular, in order to avoid both material and electrode/material interface polarization (the first caused by dc, the second caused by the use of the same two electrodes for both excitation and measurement, giving the so-called "insertion error"), the wenner method [23] should be adopted, using four electrodes (two for the material excitation, the other two for the measurement) and ac with a frequency greater than 1 khz [21], [24]. in order to remotely monitor all these aspects in a continuous way, a dedicated monitoring system was developed within the endurcrete project (ga no. 760639); in particular, both the electrical impedance and the temperature of the concrete elements are measured. the developed system was tested in a spanish demo site, as described in detail below. the paper is organised as follows: the architecture of the remote measurement system developed to monitor the health status of cement-based structures is presented in section 2, the results of laboratory tests for the development of the monitoring system and preliminary results of the long-term monitoring activities are reported in section 3, whereas in section 4 the authors provide their comments and conclude the work.

2. materials and methods

2.1. single node design
the gold-standard measurement systems to perform electrical impedance measurements are galvanostats/potentiostats; their cost makes their in-field use unfeasible, thus alternative sensors should be sought. moreover, given the "local" nature of the measurement, several sensing nodes should be considered to cover the areas prone to damage. the sensing nodes developed go in this direction by exploiting an embedded unit (eval-ad5940bioz board by analog devices) controlling the ad5940 chip for electrical impedance measurements (a similar chip, the ad5933, has already been proved accurate for damage detection through electrical impedance measurements [25]). the unit was set to carry out electrical impedance measurements according to the electrochemical impedance spectroscopy (eis) method in galvanostatic configuration, using ac to excite the material (frequency range: 1 khz – 20 khz; resolution: 1 khz; a signal amplitude of 600 mvpp is used, limiting the flowing electric current through a current-limiting resistor to comply with the iec 60601 standard, given that the employed acquisition board was originally designed for measurements on the human body). moreover, a digital thermometer was included to monitor the internal temperature of the concrete element, specifically the ds18b20 by maxim integrated (temperature range: from -55 °c to +125 °c; accuracy: ±0.5 °c from -10 °c to +85 °c; 1-wire bus communication), with stainless steel housing.
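for reference, the wenner four-electrode configuration adopted above relates the measured quantities to the resistivity of a semi-infinite homogeneous medium as follows (given here as standard background, not as a formula from the paper):

```latex
\rho = 2 \pi a \, \frac{V}{I} \approx 2 \pi a \, \operatorname{Re}\{Z\},
```

where a is the inter-electrode spacing, I is the current injected through the outer electrode pair and V is the potential measured across the inner pair; the approximation with the real part of the impedance holds for a predominantly resistive material, and for finite specimens the factor 2πa must be replaced by a geometry-dependent cell constant.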
the electronic board was placed inside an ip66-certified electrical box, further filled with a self-sealing, cold-cross-linking polymer-type insulating gel to guarantee ip68 performance. the metrological performance of the sensing nodes targeted to the eis measurements was tested at laboratory level by comparison with a potentiostat/galvanostat used in galvanostatic configuration (gamry reference 600 by gamry instruments, considered as the gold-standard instrument). in particular, the comparison was made in terms of the real part of the electrical impedance, mainly for two reasons: (i) it is directly related to electrical resistivity, in the case of a purely resistive behaviour, according to the second ohm's law; (ii) it depends on several parameters of interest (e.g. moisture content and water penetration [26], chloride penetration [18], [27], porosity [28], carbonation depth, crack formation [29], curing state [30]), given that it is linked to the ionic movement in the material and to the durability itself [6], [31].
laboratory acquisitions were performed on a concrete block (35 × 35 × 20 cm3) specifically manufactured for this testing activity, using the same mix design adopted for the in-field validation phase. the concrete mix design was as follows: 375 kg/m3 of cement, 908 kg/m3 of calcareous sand 0 mm – 4 mm, 362 kg/m3 of intermediate gravel 5 mm – 10 mm, 618 kg/m3 of coarse gravel 10 mm – 15 mm, 169 kg/m3 of water, 0.55 kg/m3 and 1.27 kg/m3 of pc2 and pc3 superplasticizers, respectively. the concrete block was manufactured embedding both a temperature sensor and two 4-electrode arrays (for performance comparison) for electrical impedance measurement (figure 1). stainless steel was used for manufacturing the electrodes. furthermore, another 35 × 35 × 20 cm3 concrete specimen was manufactured hosting two couples of electrode arrays with different spacings, namely 1 cm and 4 cm, fixed at a depth of 5 cm and directed downwards and upwards, respectively (figure 2); a temperature sensor was also embedded in this specimen. the results of these laboratory tests allowed the fine-tuning of the setting parameters of the monitoring system, as well as a better understanding of the factors influencing the results (e.g. panel positioning). it is worth underlining that the tests were performed at 20 ± 1 °c ambient temperature and 50 ± 5 % relative humidity.
figure 1. first concrete panel prototype: configuration scheme. legend: 4-electrode array, 4-cm spacing; temperature sensor.
figure 2. second concrete panel prototype: configuration scheme. legend: 4-electrode array, upwards, 4-cm spacing; 4-electrode array, downwards, 1-cm spacing; temperature sensor.

2.2. monitoring system design
given the targeted distributed nature of the monitoring system, a star-network architecture was selected. each measurement node is connected to a central gateway acting as data collector (figure 3). serial communication (rs-232) was used as the communication protocol. the central gateway hosts a 16-channel usb to db9 rs-232 serial adapter hub, an intel® nuc mini-pc and an lte industrial router. a python-based back-end service installed on the nuc sends a serial request to each node every hour and then receives the acquired data. the gateway stores the data both locally, in an sqlite database, and on the cloud (aws amazon ec2 server) exploiting the postgresql engine. data can be accessed remotely through a dedicated multi-platform application, developed within the endurcrete project in the dart language on the google flutter framework, targeting android, ios, linux and windows environments.
figure 3. monitoring system with star-network architecture, with edge devices connected to the central hub, collecting data and sending them to a cloud that can be accessed remotely.
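a minimal sketch of the hourly polling back-end described above; the port naming, request command, baud rate and record layout are all assumptions (the project's actual protocol is not documented here), and the replication to the postgresql cloud database is omitted.

```python
import sqlite3
import time

import serial  # pyserial

# one serial port per node behind the 16-channel adapter hub (assumed naming)
PORTS = [f"/dev/ttyUSB{i}" for i in range(16)]

def poll_node(port, timeout=5.0):
    """send a measurement request to one node and return its reply line."""
    with serial.Serial(port, baudrate=9600, timeout=timeout) as link:
        link.write(b"MEAS\r\n")                  # hypothetical request command
        return link.readline().decode(errors="replace").strip()

def store(db, node, payload):
    db.execute(
        "insert into readings(node, ts, payload) values (?, ?, ?)",
        (node, time.time(), payload),
    )
    db.commit()

db = sqlite3.connect("readings.db")
db.execute("create table if not exists readings(node text, ts real, payload text)")

while True:                                      # one request per node every hour
    for port in PORTS:
        try:
            store(db, port, poll_node(port))
        except serial.SerialException as exc:    # a faulty node must not stop the loop
            print(f"{port}: {exc}")
    time.sleep(3600)
```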
figure 4. mobile application for monitoring: home page (left), login and registration (centre, top and bottom) and main page (right).
as an example, the application running on an android device is shown in the following. at the first access, registration with an e-mail address and password is required; after the login, it is possible to access the main page with different options (figure 4). it is possible to switch between different demo sites (tunnel, tu, and marine, ma – the second has been activated in italy more recently and will be the object of future analyses), then select the panel and the electrode array of interest; for example, the label tu_03_3s stands for tunnel demo site, panel n. 3, sensor n. 3, short (1-cm spacing), whereas the label tu_03_1l stands for tunnel demo site, panel n. 3, sensor n. 1, long (4-cm spacing) – a small parsing sketch of this convention is given at the end of this subsection. furthermore, it is possible to choose the measurement frequency for the electrical impedance measurement and select the desired parameters: temperature, magnitude, phase, real and imaginary parts of the electrical impedance; finally, it is possible to pick out the monitoring period to be shown (figure 5). it is possible to save the data, send them to the e-mail address inserted in the registration phase (with the option to save the data related to all the panels exposed in the demo site), as well as to make a graph with the selected options (figure 6).
figure 5. mobile application for monitoring: selection of demo site, panel and electrode array, measurement frequency and parameters of interest.
figure 6. mobile application for monitoring: data saving and graph making.
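the naming convention can be captured by a tiny, hypothetical helper (the project software may parse these labels differently):

```python
SITES = {"tu": "tunnel", "ma": "marine"}
SPACING = {"s": "short (1-cm spacing)", "l": "long (4-cm spacing)"}

def parse_label(label):
    """split a sensor label such as 'tu_03_3s' into its fields."""
    site, panel, sensor = label.lower().split("_")
    return {
        "demo site": SITES[site],
        "panel": int(panel),
        "sensor": int(sensor[:-1]),
        "array": SPACING[sensor[-1]],
    }

print(parse_label("tu_03_3s"))   # tunnel site, panel 3, sensor 3, short array
print(parse_label("tu_03_1l"))   # tunnel site, panel 3, sensor 1, long array
```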
2.3. the concrete panels monitored in the project demo sites
the concrete panels exposed in the demo sites of the endurcrete project were manufactured with a low-clinker cement (aimed at reducing the carbon footprint, given that the production of ordinary portland cement – opc – is responsible for approximately 8 % of global co2 emissions [32]), developed within the same project [33]. two different mix designs were formulated, with and without conductive additions. the developed monitoring system was tested in a spanish demo site representative of a harsh setting: a tunnel in león, an underground environment rich in sulphates. in particular, 2 small panels of 35 × 35 × 20 cm3 (with and without conductive additions), 1 long panel of 200 × 200 × 15 cm3 and another long panel of 200 × 400 × 15 cm3 were exposed (figure 7). 4-electrode arrays were embedded in each small panel, and six in the longer panels, in order to have a proper distributed sensor network (figure 8). arrays with different electrode spacings were used, installed at depths respecting the literature recommendation (a minimum distance of twice the contact spacing from the element edge should be guaranteed):
• 4-cm spacing, installed at a depth of 16 cm and 10 cm in small and long panels, respectively;
• 1-cm spacing, installed at a depth of 4 cm and 5 cm in small and long panels, respectively.
the monitoring system configuration is reported in figure 7, whereas the electrode configuration in figure 8. it is worth mentioning that temperature sensors were embedded in the monitored concrete specimens, whereas the moisture content was not assessed; indeed, it is out of the scope of the proposed monitoring system to distinguish among the different factors contributing to the results, since the main aim is to identify the ingress of contaminants that could threaten the concrete durability. the positioning of the concrete panels in the tunnel is shown in figure 9.
figure 7. monitoring system at the demo site in león (spain).
figure 8. concrete panels exposed at the demo site in león (spain): no. 2 panels of 35 × 35 × 20 cm3 (left), no. 1 panel of 200 × 200 × 15 cm3 (centre) and no. 1 panel of 200 × 400 × 15 cm3 (right). legend: 4-electrode arrays with 4-cm spacing are reported in blue, 4-electrode arrays with 1-cm spacing are reported in red, temperature sensors are reported in yellow.
figure 9. positioning of the concrete panels in the tunnel at the demo site of león (spain); the concrete panels with embedded sensors for the measurement of electrical impedance are 4 (2 small and 2 long).

3. results and discussion

3.1. laboratory tests
the results concerning the validation of the measurement system are reported for the real part of the electrical impedance measured at 10 khz in figure 10 (single measurements are compared on each day of the time interval considered, after the initial concrete curing period); measurements were performed until 15 days after casting. it can be noticed that the measurement node developed provides results compatible with those obtained from the gold standard, with absolute percentage errors lower than 10 %. the advantages of the developed node are multiple:
• compactness, thus the possibility of being installed close to the element to be monitored: this allows the cable lengths to be minimized, consequently reducing their influence on the measurement results, as well as minimizing parasitic electrical components;
• reduced cost, approximately 3 % of the reference equipment;
• modularity of the system, which is particularly useful for the development of a distributed sensor network, which can also be modified at a later time.
figure 10. comparison between the measurement results obtained with the system exploiting the ad5940 chip (yellow data) and the reference equipment (gamry reference 600, blue data) – measurement frequency: 10 khz.
a comparison was also made concerning the electrode spacing (figure 11). as expected, the real part of the electrical impedance is higher with the shorter electrode spacing, since the sensing volume is smaller, resulting in a higher opposition to the electric current flow. however, both measurements show a trend increasing with curing time, as expected from the literature [16], because of the continuous hydration of the cement paste and the gradual water evaporation. in this way, it is possible to monitor different sensing volumes within the same element, also considering surfaces facing different exposure conditions.
figure 11. comparison between the results obtained by electrode arrays with different spacings: 1 cm (orange data) and 4 cm (grey data) – measurement frequency: 10 khz.
some data pre-processing capabilities were also embedded in the system, namely a moving median filter (sliding window: 9 samples) combined with a wavelet decomposition (discrete meyer wavelet of level 5). data were also normalised with respect to the initial value, in order to observe the variations over time independently of the absolute values (the latter being less significant for monitoring purposes, which look for variations due to particular events, such as cracks or contaminant penetration).
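a minimal sketch of this pre-processing chain, assuming pywavelets for the level-5 discrete meyer ('dmey') decomposition; zeroing the detail coefficients for smoothing is an assumption, since the paper does not specify how the decomposition is used.

```python
import numpy as np
import pywt                            # pywavelets
from scipy.ndimage import median_filter

def preprocess(z, wavelet="dmey", level=5, window=9):
    """median-filter, wavelet-smooth and normalise an impedance series."""
    n = len(z)
    z = median_filter(z, size=window, mode="nearest")   # 9-sample sliding median
    coeffs = pywt.wavedec(z, wavelet, level=level)
    coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]] # keep the approximation only
    z = pywt.waverec(coeffs, wavelet)[:n]
    return z / z[0]                                     # relative to the initial value

# example on a synthetic, slowly drifting series with one spike
rng = np.random.default_rng(0)
series = 100 + np.cumsum(rng.normal(0, 0.1, 4096))
series[100] += 25                                       # outlier removed by the median
print(preprocess(series)[:5])
```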
3.2. results from tunnel demo site
some examples of the results related to the data acquired from 8th october 2020 to 28th june 2021 (approximately 9 months) are reported in this section. in particular, three sensors are considered: two electrode arrays with 1-cm spacing, embedded in two small panels with (figure 12) and without (figure 13) conductive additions, and one electrode array with 4-cm spacing, embedded in a long panel without conductive additions (figure 14). it can be observed that the monitoring system did not show any faults during the demo test activities, and the data were not affected by any issues attributable to the aggressive conditions of the exposure environment, since there are no anomalous spikes or particular increases/decreases due to external phenomena. as expected, the electrical impedance module increases with decreasing temperature; however, it is worth noting that the underground environment makes the temperature values quite stable, and this reflects on the electrical impedance trend (no day-night cycles can be identified, neither in the temperature nor in the electrical impedance signals). in addition, environmental sensors were employed in the demo site to continuously monitor the environmental temperature and relative humidity, which both resulted quite stable in the considered monitoring time interval (t = 10 °c – 15 °c and rh = 70 % – 80 %, respectively). in line with what was obtained in the laboratory tests, the electrical impedance is generally higher when the electrode spacing is shorter (see figure 13 with respect to figure 14), since the electrical current flows less easily in the sensing volume, probably due to the fact that considering a smaller material volume enhances the effects of particularly non-conductive elements (e.g. aggregates). moreover, it is confirmed that the conductive additions lower the electrical impedance of the concrete panels (see figure 12 in comparison to figure 13), hence allowing monitoring through the use of relatively low-cost instrumentation with limited metrological performance. the small panels (figure 12 and figure 13) show a positive phase before the normalization of the signal (with respect to the initial value); this results in an inductive imaginary part, contrary to what is expected from the literature [34]. this is probably due to a capacitive coupling with ground, as happened during the laboratory tests; in fact, this does not happen in the long panels (figure 14), which lean directly on the ground (figure 9). however, this does not impede the monitoring of those elements and the identification of possible factors hindering durability. the monitoring phase will continue at least until the project end (december 2021), in order to evaluate possible uncertainty sources linked to the ingress of aggressive substances (e.g. sulphates), which could hinder the durability of the concrete elements.
figure 12. results related to a small panel with conductive additions (1-cm spacing electrode array): temperature, electrical impedance module, electrical impedance phase, real and imaginary parts of the electrical impedance (from top to bottom) – measurement frequency: 10 khz.
figure 13. results related to a small panel without conductive additions (1-cm spacing electrode array): temperature, electrical impedance module, electrical impedance phase, real and imaginary parts of the electrical impedance (from top to bottom) – measurement frequency: 10 khz.
figure 14. results related to a long panel without conductive additions (4-cm spacing electrode array): temperature, electrical impedance module, electrical impedance phase, real and imaginary parts of the electrical impedance (from top to bottom) – measurement frequency: 10 khz.
4. conclusions
the ongoing validation phase of the developed monitoring system at the spanish demo site of león is providing interesting results, even though the small variations recorded so far do not highlight any particular event associated with the ingress of contaminants. the measurement system presented in the paper is continuously generating data from the field application, regardless of the aggressive exposure conditions. it is worth underlining the importance of properly choosing the locations of the sensors, this being a local measurement with a defined sensing volume; the use of conductive additions makes it possible to employ low-cost modular equipment, with the possibility of realizing a distributed sensor network with sensing nodes in correspondence of the areas more prone to damage or contaminant ingress, which could impact the structure durability and efficiency during the whole life cycle. in the future, the monitoring system will also be exploited in different harsh environments, such as a typical marine site, which is rich in chlorides.

acknowledgement
this research activity was carried out within the endurcrete (new environmental friendly and durable concrete, integrating industrial by-products and hybrid systems, for civil, industrial, and offshore applications) project, funded by the european union's horizon 2020 research and innovation programme under grant agreement n° 760639. the authors would like to thank the acciona group for having prepared the concrete specimens to be tested, and nts and sika for having provided aggregates and admixtures, respectively.

references
[1] m. p. limongelli, shm for informed management of civil structures and infrastructure, j. civ. struct. heal. monit. 10 (2020) pp. 739–741. doi: 10.1007/s13349-020-00439-8
[2] f. lamonaca, c. scuro, p. f. sciammarella, r. s. olivito, d. grimaldi, d. l. carnì, a layered iot-based architecture for a distributed structural health monitoring system, acta imeko 8 (2019) 2, pp. 45–52. doi: 10.21014/acta_imeko.v8i2.640
[3] a. h. alavi, p. jiao, w. g. buttlar, n. lajnef, internet of things-enabled smart cities: state-of-the-art and future trends, measurement 129 (2018) pp. 589–606. doi: 10.1016/j.measurement.2018.07.067
[4] f. zonzini, c. aguzzi, l. gigli, l. sciullo, n. testoni, l. de marchi, m. di felice, t. s. cinotti, c. mennuti, a. marzani, structural health monitoring and prognostic of industrial plants and civil structures: a sensor to cloud architecture, ieee instrum. meas. mag. 23 (2020) pp. 21–27. doi: 10.1109/mim.2020.9289069
[5] the portland cement association, america's cement manufacturers, durability of concrete, (n.d.). online [accessed 05 december 2021]. https://www.cement.org/learn/concrete-technology/durability
[6] g. cosoli, a. mobili, n. giulietti, p. chiariotti, g. pandarese, f. tittarelli, t. bellezze, n. mikanovic, g. m. revel, performance of concretes manufactured with newly developed low-clinker cements exposed to water and chlorides: characterization by means of electrical impedance measurements, constr. build. mater. 271 (2020) 121546. doi: 10.1016/j.conbuildmat.2020.121546
[7] w. r. de sitter, costs of service life optimization "the law of fives", ceb-rilem workshop on durability of concrete structures, copenhagen, denmark, 18-20 may 1983. in ceb bulletin d'information, vol. 152, comité euro-international du béton, (1984) pp. 131-134.
[8] n. epple, d. f. barroso, e. niederleithinger, towards monitoring of concrete structures with embedded ultrasound sensors and coda waves – first results of dfg for coda, in: lect. notes civ. eng., springer science and business media deutschland gmbh, (2021) pp. 266–275. doi: 10.1007/978-3-030-64594-6_27
[9] m. goueygou, o. abraham, j. f. lataste, a comparative study of two non-destructive testing methods to assess near-surface mechanical damage in concrete structures, ndt e int. 41(6) (2008) pp. 448–456. doi: 10.1016/j.ndteint.2008.03.001
[10] c. z. dong, f. n. catbas, a review of computer vision–based structural health monitoring at local and global levels, struct. heal. monit. 20 (2021) pp. 692–743. doi: 10.1177/1475921720935585
[11] d. feng, m. q. feng, computer vision for shm of civil infrastructure: from dynamic response measurement to damage detection – a review, eng. struct. 156 (2018) pp. 105–117. doi: 10.1016/j.engstruct.2017.11.018
[12] s. pozzer, f. dalla rosa, z. m. c. pravia, e. rezazadeh azar, x. maldague, long-term numerical analysis of subsurface delamination detection in concrete slabs via infrared thermography, appl. sci. 11(10) (2021), art. no. 4323. doi: 10.3390/app11104323
[13] p. cotič, d. kolarič, v. b. bosiljkov, v. bosiljkov, z. jagličić, determination of the applicability and limits of void and delamination detection in concrete structures using infrared thermography, ndt e int. 74 (2015) pp. 87–93. doi: 10.1016/j.ndteint.2015.05.003
[14] k. tešić, a. baričević, m. serdar, non-destructive corrosion inspection of reinforced concrete using ground-penetrating radar: a review, materials 14(4) (2021), art. no. 975. doi: 10.3390/ma14040975
[15] x. dérobert, g. villain, effect of water and chloride contents and carbonation on the electromagnetic characterization of concretes on the gpr frequency band through designs of experiment, ndt e int. 92 (2017) pp. 187–198. doi: 10.1016/j.ndteint.2017.09.001
[16] a. belli, a. mobili, t. bellezze, f. tittarelli, p. cachim, evaluating the self-sensing ability of cement mortars manufactured with graphene nanoplatelets, virgin or recycled carbon fibers through piezoresistivity tests, sustainability 10(11) (2018), art. no. 4013. doi: 10.3390/su10114013
[17] b. a. de castro, f. g. baptista, f. ciampa, new signal processing approach for structural health monitoring in noisy environments based on impedance measurements, measurement 137 (2019) pp. 155–167. doi: 10.1016/j.measurement.2019.01.054
[18] m. saleem, m. shameem, s. e. hussain, m.
maslehuddin, effect of moisture, chloride and sulphate contamination on the electrical resistivity of portland cement concrete, constr. build. mater. 10(3) (1996), pp. 209–214. doi: 10.1016/0950-0618(95)00078-x [19] i.-s. yoon, c.-h. chang, effect of chloride on electrical resistivity in carbonated and non-carbonated concrete, appl. sci. 10(18) (2020), art. no. 6272. doi: 10.3390/app10186272 [20] p. azarsa, r. gupta, electrical resistivity of concrete for durability evaluation: a review, 2017 (2017), art. no. 8453095. doi: 10.1155/2017/8453095 [21] k. r. gowers, s. g. millard, measurement of concrete resistivity for assessment of corrosion severity of steel using wenner technique, mater. j. 96 (1999) pp. 536–541. doi: 10.14359/655 [22] g. cosoli, a. mobili, f. tittarelli, g. m. revel, p. chiariotti, electrical resistivity and electrical impedance measurement in mortar and concrete elements: a systematic review, appl. sci. 10 (24) (2020), art. no. 9152. doi: 10.3390/app10249152 [23] f. wenner, a method for measuring earth resistivity, j. washingt. acad. sci. 5 (1915) pp. 561–563. online [accessed 05 december 2021] https://nvlpubs.nist.gov/nistpubs/bulletin/12/nbsbulletinv12n4 p469_a2b.pdf [24] t.-c. hou, wireless and electromechanical approaches for strain sensing and crack detection in fiber reinforced cementitious materials, university of michigan, ph.d. thesis, 2008. online [accessed 05 december 2021] https://deepblue.lib.umich.edu/bitstream/handle/2027.42/6160 6/tschou_1.pdf?sequence=1&isallowed=y [25] t. wandowski, p. h. malinowski, w. m. ostachowicz, improving the emi-based damage detection in composites by calibration of ad5933 chip, measurement. 171 (2021) art. no. 108806. doi: 10.1016/j.measurement.2020.108806 [26] a. a. ramezanianpour, a. pilvar, m. mahdikhani, f. moodi, practical evaluation of relationship between concrete resistivity, water penetration, rapid chloride penetration and compressive strength, constr. build. mater. 25(5) (2011) pp. 2472-2479. doi: 10.1016/j.conbuildmat.2010.11.069 [27] x. dérobert, j. f. lataste, j. p. balayssac, s. laurens, evaluation of chloride contamination in concrete using electromagnetic nondestructive testing methods, ndt e int. 89 (2017) 19–29. doi: 10.1016/j.ndteint.2017.03.006 [28] j. zhang, z. li, hydration process of cements with superplasticizer monitored by non-contact resistivity measurement, proc. adv. test. fresh cem. mater. ed. by h. w. reinhardt, aug. 3-4, 2006, stuttgart, ger. (2006). [29] j.f. lataste, c. sirieix, d. breysse, m. frappa, electrical resistivity measurement applied to cracking assessment on reinforced concrete structures in civil engineering, ndt e int. 36(6) (2003) pp. 383–394. doi: 10.1016/s0963-8695(03)00013-6 [30] n. wiwattanachang, p. h. giao, monitoring crack development in fiber concrete beam by using electrical resistivity imaging, j. appl. geophys. 75(2) (2011) pp. 294–304. doi: 10.1016/j.jappgeo.2011.06.009 [31] c. g. berrocal, k. hornbostel, m. r. geiker, i. löfgren, k. lundgren, d. g. bekas, electrical resistivity measurements in steel fibre reinforced cementitious materials, cem. concr. compos. 89 (2018) pp. 216–229. doi: 10.1016/j.cemconcomp.2018.03.015 [32] r. m. andrew, global co2 emissions from cement production, earth syst. sci. data. 10 (2018), pp. 195–217. doi: 10.5194/essd-10-195-2018 [33] g. bolte, m. zajac, j. skocek, m. ben haha, development of composite cements characterized by low environmental footprint, j. clean. prod. 226 (2019) pp. 503–514. doi: 10.1016/j.jclepro.2019.04.050 [34] j. 
torrents dolz, p. juan garcia, a. aguado de cea, electrical impedance as a technique for civil engineering structures surveillance: considerations on the galvanic insulation of samples. online [accessed 05 december 2021]. https://upcommons.upc.edu/bitstream/handle/2117/2865/aguado_measurement_1.pdf

gesture recognition of sign language alphabet with a convolutional neural network using a magnetic positioning system

acta imeko issn: 2221-870x december 2021, volume 10, number 4, 97 - 102

emanuele buchicchio1, francesco santoni1, alessio de angelis1, antonio moschitta1, paolo carbone1
1 department of engineering, university of perugia, italy

section: research paper
keywords: gesture recognition; sign language; machine learning; cnn
citation: emanuele buchicchio, francesco santoni, alessio de angelis, antonio moschitta, paolo carbone, gesture recognition of sign language alphabet with a convolutional neural network using a magnetic positioning system, acta imeko, vol. 10, no. 4, article 17, december 2021, identifier: imeko-acta-10 (2021)-04-17
section editors: umberto cesaro and pasquale arpaia, university of naples federico ii, italy
received october 15, 2021; in final form december 4, 2021; published december 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: emanuele buchicchio, e-mail: emanuele.buchicchio@studenti.unipg.it

abstract: gesture recognition is a fundamental step to enable efficient communication for the deaf through the automated translation of sign language. this work proposes the usage of a high-precision magnetic positioning system for 3d positioning and orientation tracking of the fingers and hand palm. the gesture is reconstructed by the magik (magnetic and inverse kinematics) method and then processed by a deep-learning gesture classification model trained to recognize the gestures associated with the sign language alphabet. the results confirm the limits of vision-based systems and show that the proposed method, based on hand skeleton reconstruction, has good generalization properties. the proposed system, which combines sensor-based gesture acquisition and deep learning techniques for gesture recognition, provides 100 % classification accuracy, signer independent, after a few hours of training using the transfer learning technique on the well-known resnet cnn architecture. the proposed classification model training method can be applied to other sensor-based gesture tracking systems and other applications, regardless of the specific data acquisition technology.

1. introduction
sign language recognition (slr) is a research area that involves gesture tracking, pattern matching, computer vision, natural language processing, linguistics, and machine learning [1].
the final goal of slr is to develop methods and algorithms to build an slr system (slrs) capable of identifying signs, decoding their meaning, and producing some output that the intended receiver can understand (figure 1). the general slr problem includes the following tasks: 1) letter/number sign gesture recognition, 2) word sign gesture recognition, and 3) sentence-level sign language translation. the available literature surveys [2]-[5] report that recent research achieved accuracy in the range of 80–100 % for the first two tasks, using vision-based and sensor-based approaches. in this paper, we compare the performance of the two systems we developed: a vision-based system and a hybrid system with a sensor-based data acquisition stage and a vision-based classification stage.
figure 1. block diagram of a sign language recognition system (slrs).

1.1. slrs performance assessment
in the instrumentation and measurement field, machine learning is used for processing indirect measurement results. an indirect measurement is defined in [6] as a "method of measurement in which the value of a quantity is obtained from measurements made by direct methods of measurement of other quantities linked to the measurand by a known relationship." in the common machine learning (ml) jargon [7], the quantities that can be measured with a direct method are denoted as features x1, x2, …, xn, and the measurand as y. the measurand y is linked to the features by a functional relationship y = f(x1, x2, …, xn). the process of estimating f is known as "training". in the training process, the ml model is trained with the given dataset to find the best possible approximation of f according to the selected optimality criterion. the trained model produces an estimation of y in response to the vector x = (x1, x2, …, xn). in the case of classification systems, the measurand y is the class to which an input vector x belongs. the most widely used performance metric for gesture slrs is the classification accuracy (defined as the ratio of correct predictions over the total predictions). in this work, accuracy was adopted both for model benchmarking and as the model optimality criterion.
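to make the indirect-measurement framing concrete, a toy sketch (on a generic digits dataset, not the asl data of section 2) where a classifier plays the role of the estimated function f and is scored with the accuracy metric just defined:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# features x1..xn are the pixel intensities; the measurand y is the class
x, y = load_digits(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)

f_hat = LogisticRegression(max_iter=2000).fit(x_train, y_train)  # "training" estimates f
y_hat = f_hat.predict(x_test)                                    # estimate of y given x

# accuracy = correct predictions / total predictions
print(f"accuracy: {accuracy_score(y_test, y_hat):.3f}")
```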
1.2. sign language
sign language (sl) is defined as "any means of communication through bodily movements, especially of the hands and arms, used when spoken communication is impossible or not desirable" [8]. modern sign language originated in the 18th century, when charles-michel de l'épée developed a system for spelling out french words with a manual alphabet and expressing whole concepts with simple signs. other national sign languages were developed from this system and became an essential means of communication among the hearing-impaired and deaf communities. according to the world federation of the deaf, over 200 sign languages exist today, used by 70 million deaf people [9]. sign language involves the use of facial expressions and different body parts, such as arms, fingers, hands, head, and body. one class of sign languages, also known as fingerspelling, is limited to a set of manual signs that represent the symbols of the letters of an alphabet, performed with one hand [10]. the asl signs of the alphabet letters are shown in figure 2.
figure 2. letters of the american sign language (asl) alphabet [11].

1.3. vision-based vs. sensor-based approaches for hand tracking and gesture recognition
many common devices and applications rely on tracking hands, fingers, or handheld objects. specifically, smartphones and smartwatches track the 2d finger position, a mouse tracks the 2d hand position, and augmented reality devices like the microsoft hololens 2 track the 3d pose of the finger. in addition to slr, many other applications rely on hand gesture recognition, such as augmented reality [12], assistive technology [13], [14], collaborative robotics [15], telerobotics [16], home automation [17], infotainment systems [18], [19], intelligence and espionage [20], and many others [21]. in this paper, we focus on recognizing the static hand gestures associated with the letters of the alphabet for fingerspelling. both computer-vision-based and sensor-based approaches were implemented for sign language alphabet recognition. hand feature extraction is a significant challenge for vision-based systems [11], because the extraction is affected by many factors, such as lighting conditions, complex backgrounds in the image, occlusion, and skin color. sensor-based gesture recognition systems are commonly implemented as gloves featuring various types of sensors. sensor-based approaches have the advantage of simplifying the detection process and can help make the gesture recognition system less dependent on input devices. on the other hand, a disadvantage of sensor-based systems is that they can be expensive and too invasive for real-world deployment.

2. vision-based sign language gesture recognition
machine learning techniques are widely adopted for gesture classification tasks. various public datasets are available for system performance assessment and benchmarking. the american sign language mnist dataset [22], a flavor of the classic mnist dataset [23] created for sign language gestures, is often used as a baseline. other, more complex datasets such as [24], [25] are also available.

2.1. classic machine learning and convolutional neural network on the mnist dataset
the american sign language mnist dataset is in a tabular format similar to the original mnist dataset. each row in the csv file has a label and 784 pixel values ranging from 0 to 255, representing a single 28 × 28 pixel greyscale image. in total, there are 27,455 training cases and 7,172 test cases in this dataset.
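a minimal sketch of reading this tabular format back into image form; the file name follows the kaggle distribution of the dataset and should be treated as an assumption.

```python
import numpy as np
import pandas as pd

# each row: a label followed by 784 pixel values (one 28 x 28 greyscale image)
df = pd.read_csv("sign_mnist_train.csv")        # 27,455 rows in the training split
labels = df["label"].to_numpy()
images = df.drop(columns="label").to_numpy(dtype=np.uint8).reshape(-1, 28, 28)

print(images.shape)   # (27455, 28, 28)
print(labels[:10])    # integer letter labels
```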
the classification accuracy was selected as the primary metric for the models' performance assessment and for benchmarking against other published comparable works. two different models were trained to accomplish the letter/number gesture recognition task from static images, using two different approaches: a classic ml model and a deep neural network (figure 3). the first model was selected among many model candidates obtained by applying different combinations of feature engineering techniques, ml algorithms, and ensemble methods using the automated ml (automl) service of azure machine learning. azure machine learning [26] is a cloud-based platform that provides tools for the automation and orchestration of all training, scoring, and comparison operations. automl tests hundreds of models in a few hours with parallel job execution, with no human interaction after the initial experiment and remote compute target cluster setup. the experiment generates many models that achieve 100 % classification accuracy; among them, the "logistic regression" based model has the smaller memory footprint at runtime.
figure 3. workflow for the comparison of various machine learning models for static gesture recognition using the azure sdk, automl and hyperdrive for operations automation.
the second model was created with a minimal custom convolutional neural network (cnn) architecture (2d convolution, max pooling, flatten, dense layer, dropout, dense) commonly used for simple deep-learning image recognition tasks (figure 4). the model was built and trained with the keras library. model hyperparameters, such as the number of neurons in the layers, the batch size, the number of training epochs, and the dropout percentage, were tuned using the hyperdrive service from azure machine learning. the best-scoring model achieves a classification accuracy of 99.99 %. the best models from the two training pipelines were deployed as web services for production usage. the (zipped) size of the cnn model is about 17 mb, whereas the logistic regression model size is only 0.8 mb. simple and lightweight models should be preferred if there is no performance penalty.
figure 4. deep cnn model architecture.
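the minimal cnn of section 2.1 can be sketched in keras as follows; since the actual layer sizes and dropout rate were tuned with hyperdrive, the values below are placeholders, and the output width (25 units, covering the dataset's integer labels) is an assumption.

```python
from tensorflow import keras
from tensorflow.keras import layers

# conv2d -> max pooling -> flatten -> dense -> dropout -> dense, as in figure 4
model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.25),                      # placeholder dropout rate
    layers.Dense(25, activation="softmax"),    # one unit per letter label (assumed)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```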
2.2. vision-based classification accuracy
the 100 % accuracy was confirmed after deployment with test cases from the american sign language mnist dataset. simple classic ml models could not recognize gestures in realistic images with variable backgrounds and light conditions. the cnn model scores over 90 % accuracy on a subset of the "asl alphabet" [24] image dataset, which includes more "realistic" light and background conditions. however, when deployed as a web service, the performance on the image stream from a live camera was not satisfactory for production usage in challenging conditions, such as partial line-of-sight obstruction, presence of shadows in the image, and confusing backgrounds like in the test cases of the asl alphabet test dataset [25].

3. sensor-based gesture recognition with deep cnn on a visual gesture representation
our experiments with a vision-based approach confirm both the performance and the limitations described in other works. given the results of our experiments and of other works, in this paper we propose an slrs that combines a sensor-based approach in the acquisition stage and computer vision techniques in the gesture recognition stage (figure 5).
figure 5. proposed slrs with sensor-based data acquisition and vision-based gesture recognition.

3.1. hand tracking with the magnetic positioning system (mps)
the magnetic positioning system (mps) described in [27] is immune from many problems that affect computer vision techniques, such as occlusion, light conditions, shadows, and skin colors. the mps is composed of transmitting nodes and receiving nodes. the transmitting nodes are mounted on the fingers and hand to be tracked (figure 6), whereas the receiving nodes are placed at known positions on the sides of the operational volume. an advantage of sensor-based systems is that they are not sensitive to illumination conditions and the other factors affecting vision-based systems. furthermore, the mps can also operate in the presence of obstructions caused by objects or body parts. therefore, the proposed approach enables robust and reliable tracking of the hand and fingers. it is thus suitable for slr and for the other applications of hand gesture recognition, such as human-machine interaction, virtual and augmented reality, robotic telemanipulation, and automation.
figure 6. mps transmitting coils mounted on a wearable glove.

3.2. gesture recognition using skeleton reconstruction
classic machine learning models can achieve 100 % accuracy on static sign language recognition tasks on laboratory datasets like [24], and cnn deep learning models score high accuracy (over 90 %) on realistic images with variable light. however, these high performances are not robust and cannot be easily replicated in real-world operating conditions. in our paper [11], we demonstrated that training the classification model on data from a tracking system gives substantial advantages in terms of robustness to environmental conditions and signer variability. the hand gesture is reconstructed using the technique illustrated in [28], with the improvements added in [11], which we called magik (magnetic and inverse kinematics). the method, with some empirical modifications introduced in the model to optimize the reconstruction of the gesture among different test subjects, allows the reconstruction of the hand movement with 24 degrees of freedom (dof). the positions and orientations of all the magnetic nodes estimated by the mps are sent to a kinematic model of the hand, to obtain the position and flexion of each joint and the position and orientation of the whole hand with respect to the mps reference frame. as the last step, magik produces a visual representation, such as the examples shown in figure 7. we call this technique "skeleton reconstruction".
figure 7. examples of asl letters (y and l) articulated while wearing the glove, and their respective reconstructions obtained through the kinematic model and the magik technique.

3.3. efficient deep cnn training for sign language recognition
many pre-trained deep learning models are proven to be adequate for image/video classification tasks. we chose the resnet34 cnn because the resnet (residual network) architecture achieves good results in image classification tasks and is relatively fast to train [29]. figure 8 illustrates the training pipeline, implemented with pytorch and the fastai [30] library. a transfer learning approach allows fast training of the deep cnn (resnet34) model.
figure 8. training pipeline for the resnet34 cnn with transfer learning.
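in pytorch terms, the transfer-learning starting point can be sketched as below: resnet34 with imagenet weights, backbone frozen, and a fresh classification head (the class count is an assumption, set here to 24 for the static fingerspelling letters).

```python
import torch.nn as nn
from torchvision import models

# imagenet weights as the starting point; newer torchvision versions use
# models.resnet34(weights=models.ResNet34_Weights.DEFAULT) instead
model = models.resnet34(pretrained=True)

for param in model.parameters():           # freeze the convolutional backbone
    param.requires_grad = False

# replace the classification head; only this layer is trained at first
n_classes = 24                             # one per static letter (assumption)
model.fc = nn.Linear(model.fc.in_features, n_classes)
```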
the optimal learning rate for training was estimated with the cyclical learning rates method [31], to avoid time-consuming multiple runs to perform hyperparameter sweeps. the rules of thumb for the selection of the learning rate value from [31] are: 1) one order of magnitude less than where the minimum loss was achieved; and 2) the last point where the loss was clearly decreasing. the loss estimation plot (figure 9) produced by the algorithm implementation in the fastai library suggested a learning rate in the range 10^-2 – 10^-3. model fine-tuning was performed using the fastai api with a sequence of freeze, fit-one-cycle, unfreeze, and fit-one-cycle operations using the "discriminative learning rate" method. the training continued until the error rate, validation loss, and training loss converged to zero after four epochs (figure 10).
figure 9. loss estimation plot against learning rate values for optimal learning rate selection. the optimal value for training is in the range 10^-2 – 10^-3.
figure 10. loss and error rate values recorded during the training process.
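the recipe above maps almost one-to-one onto the fastai api; a compact sketch follows, where the dataloaders built from the rendered skeleton images (folder layout, image size and epoch counts) are assumptions.

```python
from fastai.vision.all import *

# rendered skeleton images, one sub-folder per letter (assumed layout)
path = Path("skeleton_images")
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, item_tfms=Resize(224))

learn = cnn_learner(dls, resnet34, metrics=error_rate)  # pre-trained resnet34

learn.lr_find()                  # loss-vs-learning-rate plot, as in figure 9

learn.freeze()                   # first train only the new head
learn.fit_one_cycle(2, 1e-2)
learn.unfreeze()                 # then the whole network, with
learn.fit_one_cycle(2, lr_max=slice(1e-5, 1e-3))   # discriminative learning rates
```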
in visual analysis of humans; moeslund, t., hilton, a., krüger, v., sigal, l., eds.; springer 2011. doi: 10.1007/978-0-85729-997-0_27 [2] a. wadhawan, p. kumar, sign language recognition systems: a decade systematic literature review. arch. comput. methods eng. 28 (2019) pp. 785–813. doi:10.1007/s11831-019-09384-2 [3] m. j. cheok, z. omar, m. h. jaward, a review of hand gesture and sign language recognition techniques. int. j. mach. learn. cyber 10 (2019) pp. 131–153. doi: 10.1007/s13042-017-0705-5 [4] r. elakkiya, machine learning based sign language recognition: a review and its research frontier. j. ambient. intell. hum. comput. 2020. doi: 10.1007/s12652-020-02396-y [5] r. rastgoo, k. kiani, s. escalera, sign language recognition: a deep survey. expert syst. appl. 164 (2021). doi: 10.1016/j.eswa.2020.113794 [6] "iec standard 60050–300", international electrotechnical vocabulary (iev) part 300: electrical and electronic measurements and measuring instruments, international electrotechnical commission, jul. 2001. [7] s. shirmohammadi, h. al osman, machine learning in measurement part 1: error contribution and terminology figure 9. loss estimation plot against learning rate values for optimal learning rate selection. the optimal value for training is in range 10-2 – 10-3. figure 10. loss and error rate values recorded during the training process. figure 11. inference pipeline with mps and skeleton reconstruction and an example of execution from jupyter notebook python environment. https://doi.org/10.1007/978-0-85729-997-0_27 https://doi.org/10.1007/s11831-019-09384-2 https://doi.org/10.1007/s13042-017-0705-5 https://doi.org/10.1007/s12652-020-02396-y https://doi.org/10.1016/j.eswa.2020.113794 acta imeko | www.imeko.org december 2021 | volume 10 | number 4 | 102 confusion, ieee instrumentation & measurement magazine, 24(2) (2021) pp. 84-92. doi: 10.1109/mim.2021.9400955 [8] encyclopedia britannica, sign language. online [accessed december 05 2021] https://www.britannica.com/topic/sign/language [9] world federation of the deaf. online [accessed december 05 2021]. http://wfdeaf.org/our-work [10] fingerspelling. wikipedia. online [accessed december 05 2021] https://en.wikipedia.org/wiki/fingerspelling [11] m. rinalduzzi, a. de angelis, f. santoni, e. buchicchio, a. moschitta, p. carbone, p. bellitti, m. serpelloni, gesture recognition of sign language alphabet using a magnetic positioning system. appl. sci. 11 (2021), 5594. doi:10.3390/app11125594 [12] j. dong, z. tang, q. zhao, gesture recognition in augmented reality assisted assembly training. j. phys. conf. ser. 1176(3) (2019), art. 032030. doi: 10.1088/1742-6596/1176/3/032030 [13] r. e. o. ascari schultz, l. silva, r. pereira, personalized interactive gesture recognition assistive technology. in proceedings of the 18th brazilian symposium on human factors in computing systems, vitória, brazil, 22–25 october 2019. doi: 10.1145/3357155.3358442 [14] s. s: kakkoth, s. gharge, real time hand gesture recognition and its applications in assistive technologies for disabled. in proceedings of the fourth international conference on computing communication control and automation (iccubea), pune, india, 16–18 august 2018. doi: 10.1109/iccubea.2018.8697363 [15] m. a. simão, o. gibaru, p. neto, online recognition of incomplete gesture data to interface collaborative robots, ieee trans. ind. electron. 66 (2019) pp. 9372–9382. doi: 10.1109/tie.2019.2891449 [16] i. ding, c. chang, c. 
[17] S. Yang, S. Lee, Y. Byun, Gesture recognition for home automation using transfer learning, 2018 Int. Conf. on Intelligent Informatics and Biomedical Sciences (ICIIBMS), Bangkok, Thailand, 21-24 October 2018, pp. 136-138. DOI: 10.1109/ICIIBMS.2018.8549921
[18] Q. Ye, L. Yang, G. Xue, Hand-free gesture recognition for vehicle infotainment system control, 2018 IEEE Vehicular Networking Conference (VNC), Taipei, Taiwan, 5-7 December 2018, pp. 1-2. DOI: 10.1109/VNC.2018.8628409
[19] Z. U. A. Akhtar, H. Wang, WiFi-based gesture recognition for vehicular infotainment system - an integrated approach, Appl. Sci. 9 (2019), art. 5268. DOI: 10.3390/app9245268
[20] Y. Meng, J. Li, H. Zhu, X. Liang, Y. Liu, N. Ruan, Revealing your mobile password via WiFi signals: attacks and countermeasures, IEEE Trans. Mob. Comput. 19(2) (2019), pp. 432-449. DOI: 10.1109/TMC.2019.2893338
[21] M. J. Cheok, Z. Omar, M. H. Jaward, A review of hand gesture and sign language recognition techniques, Int. J. Mach. Learn. Cyber. 10 (2019), pp. 131-153. DOI: 10.1007/s13042-017-0705-5
[22] The American Sign Language MNIST dataset. Online [Accessed 05 December 2021] https://www.kaggle.com/datamunge/sign-language-mnist
[23] Y. LeCun, C. Cortes, MNIST handwritten digit database, AT&T Labs (2010). Online [Accessed 05 December 2021] http://yann.lecun.com/exdb/mnist
[24] ASL Alphabet. Online [Accessed 05 December 2021] https://www.kaggle.com/grassknoted/asl-alphabet
[25] ASL Alphabet Test. Online [Accessed 05 December 2021] https://www.kaggle.com/danrasband/asl-alphabet-test
[26] Azure Machine Learning product overview. Online [Accessed 05 December 2021] https://azure.microsoft.com/it-it/services/machine-learning/#product-overview
[27] F. Santoni, A. De Angelis, A. Moschitta, P. Carbone, A multi-node magnetic positioning system with a distributed data acquisition architecture, Sensors 20(21) (2020), art. 6210, pp. 1-23. DOI: 10.3390/s20216210
[28] F. Santoni, A. De Angelis, A. Moschitta, P. Carbone, MagIK: a hand-tracking magnetic positioning system based on a kinematic model of the hand, IEEE Transactions on Instrumentation and Measurement 70 (2021), art. 9376979. DOI: 10.1109/TIM.2021.3065761
[29] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, 2016 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27-30 June 2016, pp. 770-778. DOI: 10.1109/CVPR.2016.90
[30] J. Howard, S. Gugger, fastai: a layered API for deep learning, Information 11(2) (2020), art. 108. DOI: 10.3390/info11020108
[31] L. N. Smith, Cyclical learning rates for training neural networks.
Online [Accessed 05 December 2021] https://arxiv.org/abs/1506.01186

ACTA IMEKO, ISSN: 2221-870X, June 2015, Volume 4, Number 2, 2-3

Editorial
Paolo Carbone
University of Perugia, Italy

Section: Editorial
Citation: Paolo Carbone, Editorial, Acta IMEKO, vol. 4, no. 2, article 1, June 2015, identifier: IMEKO-ACTA-04 (2015)-02-01
Editor: Paolo Carbone, University of Perugia, Italy
Received June 25, 2015; in final form June 25, 2015; published June 2015
Copyright: © 2015 IMEKO. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Paolo Carbone, e-mail: paolo.carbone@unipg.it

Dear Reader,
This second issue in 2015 of Acta IMEKO is particularly rich in content. It contains 14 papers related to three IMEKO events:
- IMEKO TC14 ISMQC, 11th International Symposium on Measurement and Quality Control, 11-13 September 2013, Cracow and Kielce, Poland;
- Joint IMEKO International TC3, TC5 and TC22 Conference 2014, Lagoon Beach Hotel, Cape Town, South Africa, 3-5 February 2014;
- 13th IMEKO TC10 Workshop on Technical Diagnostics, "Advanced measurement tools in technical diagnostics for systems' reliability and safety", Warsaw, Poland, 26-27 June 2014.
Moreover, it also contains the very first two papers that were submitted directly to Acta IMEKO without being presented at a prior IMEKO event. The publication of these two papers testifies to a major change in the policy of this journal. While it originally started as one of the two journals of IMEKO, publishing invited and extended versions of papers presented at IMEKO conferences and workshops, it now also considers papers describing results and advancements in the state-of-the-art of measurement science as a whole, freely submitted by their authors. The editors believe that this is a major advancement in how this journal serves the community of its readers and contributors. At the same time, this is an open invitation to all prospective authors to consider this journal when selecting the publication destination of their research.
This journal is and remains an open access journal serving the large community of researchers in metrology.
The first papers in this issue originate from the IMEKO TC14 event held in Cracow and Kielce in 2013. The first contribution is authored by researchers working at the Leibniz University of Hannover in Germany, and it proposes a new fringe projection method to detect geometry deviations while reducing, at the same time, the measurement duration. The second paper again covers the topic of inspection. It is authored by researchers with the University of Belgrade and with the Metropolitan University in Belgrade, and it describes a disciplined process helping practitioners plan the inspection of parts on coordinate measuring machines. The last paper associated with this IMEKO TC14 event is a joint international work between researchers working in Ukraine and Poland, and it covers the design of a four-electrode conductivity cell for use as a primary standard in the area of electrolytic conductivity.
The next series of papers is associated with the event held in South Africa last year. Andrea Brüge, with the Physikalisch-Technische Bundesanstalt (PTB) in Germany, illustrates the characteristics of a calibration facility for torque wrenches. A detailed representation of all uncertainty sources is presented, as well as how they contribute to the final uncertainty budget. The second paper in this series is authored by researchers with the National Metrology Institute of Japan. The authors describe the calibration chain for the national torque standard for hand torque screwdrivers. A thorough analysis of the effect of the major uncertainty contributions is made, and experimental data are presented. Leonard Klaus and other authors with the German PTB present the following paper, on the dynamic calibration of torque transducers. This paper describes the issues associated with the modelling of this physical phenomenon and with the identification of the model parameters. The next paper is authored again by researchers with the German PTB. They present a model-based approach to the dynamic calibration of force transducers. Modelling and identification of model parameters are discussed in this paper, and a large set of experimental results is presented, highlighting the difficulties associated with this task and the open issues still to be addressed.
The paper by Christiaan Veldman, with the National Metrology Laboratory of South Africa, deals with the transverse sensitivity calibration of accelerometers. The measurement principles are introduced, and experimental results are presented together with an analysis of the contributing uncertainty sources. The final paper in this series is authored by Christian Schlegel and other colleagues working at the German PTB. It covers the topic of force measurements and of the mechanical influences that are present during sinusoidal force measurements. The experimental arrangement is described, and modelling of the transducers using FEM simulations shows the behaviour of the dynamic modes occurring during periodic excitations.
The next five papers originate from the IMEKO event held in Warsaw. The first contribution comes from authors with the Instituto de Telecomunicações and Instituto Superior Técnico in Portugal. It deals with non-destructive techniques for testing metallic tubes.
The technique used for testing a stainless steel tube with two different magnetic detectors is presented, and the results are discussed to highlight the best solution to the test problem. The next paper is authored by Lauryna Šiaudinytė, with the Vilnius Gediminas Technical University. It deals with length measurements and with the description of a linear bench for testing total stations. The topic is clearly described, and experimental results are presented. The next paper is authored by researchers with the Warsaw University of Technology. It covers the topic of power quality in the context of DC traction supply systems. Accordingly, the effectiveness of ad hoc designed filters for reducing the AC components is described, along with experimental data regarding the harmonics present in the considered kV-range supply system. The following paper is the result of a joint collaboration between researchers from China and Ukraine. It describes the problem of measuring and correcting the effect of nonlinearities in analog-to-digital converters using multi-resistor dividers. The last paper in this series comes from the Federal Institute for Materials Research and Testing, located in Berlin. It describes two application examples of radio-frequency identification tags, in the areas of infrastructure monitoring and of the transportation of dangerous goods. This paper shows how innovative applications can still be devised using RFID technology.
The last two papers in this issue are those directly submitted by their authors. The first one is authored by Daniele Fontanelli et al., with the University of Trento in Italy. It describes a line tracking measurement system for robotic vehicles. The adopted measurement model is illustrated in great detail, and experimental results are presented to validate the proposal. The paper concluding this issue is authored by researchers with the University of Naples Federico II. It deals with the 3-parameter sine fitting algorithm. The authors propose two techniques to improve the accuracy of the estimates returned by this algorithm when applied to sinusoidal signals corrupted by noise.
Let me thank all who made this issue possible. As a concluding remark, I would like to thank Prof. Francisco Alegria, with the Instituto de Telecomunicações in Portugal. He contributed to setting up this journal and, starting with this issue, he ceases his collaboration. The editors are grateful for the time he spent on improving this publication.
Have a fruitful reading of the second issue of Acta IMEKO in 2015!

ACTA IMEKO, ISSN: 2221-870X, December 2015, Volume 4, Number 4, 2

Introduction to the special section of the IMEKO TC21 "Mathematical Tools in Measurements"
Franco Pavese
Torino, Italy

Section: Editorial
Citation: Franco Pavese, Introduction to the special section of the IMEKO TC21 "Mathematical Tools in Measurements", Acta IMEKO, vol. 4, no. 4, article 1, December 2015, identifier: IMEKO-ACTA-04 (2015)-04-01
Section Editor: Franco Pavese, Torino, Italy
Received December 10, 2015; in final form December 10, 2015; published December 2015
Copyright: © 2015 IMEKO.
This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Franco Pavese, e-mail: frpavese@gmail.com

This special section of the IMEKO TC21 "Mathematical Tools in Measurements" is formed by a selection of extended papers that were submitted following the presentations made at the international conference "Advanced Mathematical and Computational Tools in Metrology and Testing" (AMCTM), held in September 2014 in St. Petersburg, Russia, with an important attendance from Russia. For this reason, this section comprises three Russian papers out of seven.
The paper of Mazin proposes the representation of a physical quantity by a vector of the pseudo-Euclidean plane that includes the quantity value and its degree of uncertainty. The paper of Kurekova et al. discusses the positioning uncertainty of kinematic structures, namely of the Tricept. The paper of Kislitsyna and Malykhina deals with the modelling of an altimeter designed for soft landing on the Moon, based on gamma-ray scattering.
The remaining four papers well represent the worldwide attendance of the conference. The paper of Barari and Ahmadi, from Canada, concerns coordinate metrology and introduces a novel approach to predicting surface behaviour via the distribution of geometric deviations (DGD), to estimate the detailed deviation zone. The paper of Gonzalez, Estrada and Lira, from Mexico, discusses the modelling of a combustion chamber of a reference calorimeter used to measure the superior calorific value (SCV) of natural gas. The paper of Willink, from New Zealand, discusses the important issue of the evaluation of the Type A component of uncertainty when the sample size is not predetermined, considering both the frequentist and the Bayesian methods. The paper by Pavese, from Italy, starting from a statistics of the key comparison (KC) results available in the BIPM KCDB, discusses the consequences of the dominant situation in which an intercomparison contains discrepant data, proposing an alternative method to the simple prior correction for the known systematic effects.
ACTA IMEKO, ISSN: 2221-870X, December 2015, Volume 4, Number 4, 57-61

Salinity and relative humidity: climatological relevance and metrological needs
Rainer Feistel
Leibniz Institute for Baltic Sea Research, 18119 Warnemünde, Germany

Section: Technical note
Keywords: TEOS-10; salinity; relative humidity; metrology; SI-traceability; climatology; hydrological cycle
Citation: Rainer Feistel, Salinity and relative humidity: climatological relevance and metrological needs, Acta IMEKO, vol. 4, no. 4, article 11, November 2015, identifier: IMEKO-ACTA-04 (2015)-04-11
Editor: Paolo Carbone, University of Perugia, Italy
Received September 29, 2014; in final form September 29, 2014; published December 2015
Copyright: © 2015 IMEKO.
This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Rainer Feistel, e-mail: rainer.feistel@io-warnemuende.de

Abstract
Water plays the leading thermodynamic role in Earth's "steam engine" climate. Followed by clouds and CO2, water vapour in the atmosphere dominates the greenhouse effect. Evaporation from the ocean surface is the main route of energy export from the ocean, the rate of which is known only with a poor uncertainty of 20 %. Regional climatic trends in evaporation and precipitation are reflected in small changes of ocean surface salinity. Observational data of salinity and relative humidity need to be globally comparable within requisite uncertainties over decades and centuries, but both quantities rely on century-old provisional standards of unclear stability, or on ambiguous definitions. This increasingly urgent and long-pending problem can only be solved by proper metrological traceability to the International System of Units (SI). Consistent with such SI-based definitions, state-of-the-art correlation equations for thermophysical properties of water, seawater, ice and humid air, such as those available from the recent oceanographic standard TEOS-10, need to be developed and adopted as joint international standards for all branches of climate research, in oceanography, meteorology and glaciology, for data analysis and numerical models. The SCOR/IAPWS/IAPSO Joint Committee on the Properties of Seawater (JCS) is targeting these aims in cooperation with BIPM, WMO and other international bodies.

1. Introduction
Between 1910 and 1945, global air temperatures increased by about 0.5 °C, followed by a stagnation period of roughly 30 years, and by another rise of 0.5 °C within 30 years, until quite recently a sudden "hiatus" of global warming was discovered. Neither the obvious frequency nor the tipping points of this staircase are explained, let alone predicted, by global climate models. Rather, the hunt is open for finding "the lost heat" [1]. Among the top "suspects" is ocean-atmosphere interaction with anomalies in humidity and salinity; Section 4 will return to this point. Another hot favourite is the uncertainty of observational data [2].
A rule-of-thumb calculation tells us that warming an atmospheric mass of 10⁴ kg/m² by 0.5 °C within 30 years requires a mean heat flux of about 5 mW/m². This estimate may be low by a factor of 10 if the ocean surface is included, but that is another question. On the other hand, oceanic evaporation of 1200 mm/yr exports 100 W/m² of latent heat to the atmosphere. A typical meteorological uncertainty of 1 %rh in the typical ocean-surface relative humidity (RH) of 80 %rh implies an uncertainty of 6 W/m² in the latent heat flux [3]. Thus, the systematic signal needed to detect the hiatus is 1000 times below the noise level of the available data. In fact, the uncertainty of the global latent heat flux amounts to as much as 30 W/m² for various other reasons [4], even if the uncertainty of the global RH average may well be less than 1 %rh. The atmospheric water flux is a perfect hideout for "the lost heat".
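These back-of-the-envelope figures are easy to verify. The following minimal numerical check assumes standard textbook values for the heat capacity of air and the latent heat of vaporization, which are not given in the paper:

```python
# Rule-of-thumb heat-flux estimates from the introduction.
M_AIR = 1.0e4                   # atmospheric column mass, kg/m^2
CP_AIR = 1005.0                 # specific heat of air at constant pressure, J/(kg K) (assumed)
DT = 0.5                        # warming, K
T30 = 30 * 365.25 * 86400.0     # 30 years, s

flux = M_AIR * CP_AIR * DT / T30
print(f"mean heat flux ~ {flux * 1e3:.1f} mW/m^2")    # ~5.3 mW/m^2, matching "about 5 mW/m^2"

L_V = 2.45e6                    # latent heat of vaporization, J/kg (assumed)
evap = 1.2 * 1000.0 / (365.25 * 86400.0)              # 1200 mm/yr as kg/(m^2 s)
latent = L_V * evap
print(f"latent heat export ~ {latent:.0f} W/m^2")     # ~93 W/m^2, i.e. roughly 100 W/m^2

# If the evaporative flux scales with the saturation deficit (1 - RH), a
# 1 %rh uncertainty at RH = 80 %rh propagates to latent * 0.01 / 0.20,
# consistent with the ~6 W/m^2 quoted in the text.
print(f"latent-flux uncertainty ~ {latent * 0.01 / 0.20:.1f} W/m^2")
```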
The hydrological cycle is a prominent example of metrological deficiencies in climatology, meteorology and oceanography. Scientific analysis must rely on century-long data series of high accuracy and stability, with world-wide homogeneity and without spurious trends or technological artefacts, which is by no means granted for the records we possess since meteorological and oceanographic data started to be collected systematically. Improvement is urgently needed.
Anomalies of evaporation and precipitation leave their footprints behind in ocean salinity [5], which can be measured much more precisely than relative humidity or heat flux. But salinity measurements are not (yet) traceable to the International System of Units, SI [6], and tiny drifts of the metrological reference materials may remain undetectable; see Section 3. The introduction of the new Thermodynamic Equation of Seawater 2010, TEOS-10 [7], has addressed a number of open questions of salinity measurement and of the definition of relative humidity, as briefly reviewed in Section 2. But this work has also shed new light on further fundamental problems, which may be solved in close cooperation of a newly established SCOR¹/IAPWS²/IAPSO³ Joint Committee on the Properties of Seawater, JCS, with international bodies such as BIPM⁴ and WMO⁵; see Section 5. In a recent series of four papers [8], metrological challenges for measurements of key climatological observables are reviewed, and future actions are suggested for practical solutions, Section 6.

2. Seawater standard TEOS-10
TEOS-10, the Thermodynamic Equation of Seawater 2010 [7], was adopted as an international standard for oceanography by the IOC⁶ in Paris in 2009 and by the IUGG⁷ in Melbourne in 2011. It provides thermodynamic properties of water, ice, seawater and humid air in a mutually consistent way. Its novel axiomatic approach is based on four thermodynamic potentials, subsequently endorsed as IAPWS documents in 2006, 2008, 2009 and 2010, from which various properties of the pure substances, their mixtures, phase equilibria and composites can be formally derived by mathematical or numerical methods [9]; see Figure 1.
For the first time after a century of international oceanographic standards [10], TEOS-10 makes use of a chemical composition model for sea salt dissolved in IAPSO Standard Seawater (SSW), the standard reference material for oceanographic salinity measurements specified by the Practical Salinity Scale of 1978 (PSS-78). The related new Reference-Composition Salinity Scale [11] provides a conversion formula for PSS-78 data but additionally permits systematic studies of seawater composition anomalies [12].
TEOS-10 is highly accurate. Within their common ranges of validity, TEOS-10 is consistent with the CIPM-2001 equation for the density of liquid water [13], [14] and with the CIPM-2007 equation for the density of humid air [15], which are recommended for metrology by the CIPM⁸.

Footnotes:
1 SCOR: Scientific Committee on Oceanic Research, http://www.scor-int.org
2 IAPWS: International Association for the Properties of Water and Steam, http://www.iapws.org
3 IAPSO: International Association for the Physical Sciences of the Oceans, http://iapso.iugg.org
4 BIPM: Bureau International des Poids et Mesures, http://www.bipm.org
5 WMO: World Meteorological Organization, http://www.wmo.int
6 IOC: Intergovernmental Oceanographic Commission of UNESCO, http://ioc-unesco.org
7 IUGG: International Union of Geodesy and Geophysics, http://www.iugg.org
8 CIPM: Comité International des Poids et Mesures, http://www.bipm.org/en/committees/cipm/
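TEOS-10 is also available in software as the open-source GSW toolbox. As a concrete illustration of the Reference-Composition Salinity Scale and of the mutually consistent property calculations described above, here is a minimal sketch assuming the `gsw` Python implementation of TEOS-10; the numerical inputs are illustrative only, not taken from the paper:

```python
import gsw  # GSW-Python, the TEOS-10 Gibbs SeaWater toolbox

SP, t, p = 35.0, 15.0, 100.0   # Practical Salinity (PSS-78), in-situ temperature (degC), pressure (dbar)
lon, lat = -30.0, 45.0         # position, used for regional composition-anomaly corrections

SA = gsw.SA_from_SP(SP, p, lon, lat)   # Absolute Salinity on the Reference-Composition scale, g/kg
CT = gsw.CT_from_t(SA, t, p)           # Conservative Temperature, degC
rho = gsw.rho(SA, CT, p)               # in-situ density from the TEOS-10 equation of state, kg/m^3
print(SA, CT, rho)
```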
TEOS-10 results for the ice point and for the sublimation pressure are the most accurate values currently available [16]. Entropy and enthalpy of seawater, ice and humid air available from TEOS-10 provide precise values of the oceanic heat content [17] and of the latent heat of evaporation and of freezing of seawater [18]. Relative fugacity calculated from TEOS-10 is physically more suitable than relative humidity as defined by WMO [19] for the description of non-equilibrium thermodynamic forces at the ocean-atmosphere interface [3], [16].

Figure 1. Axiomatic structure of TEOS-10. Four thermodynamic potentials (top row) contain all empirical coefficients and permit the calculation of derived properties in a consistent, independent and complete way.

TEOS-10 was developed in close cooperation between the IAPWS Subcommittee on Seawater (SCSW), established at the 15th International Conference on the Properties of Water and Steam in 2008 in Berlin, Germany, and the SCOR/IAPSO Working Group 127 on Thermodynamics and Equation of State of Seawater, established in 2005 in Cairns, Australia, and disbanded in 2011 in accordance with the rules governing SCOR/IAPSO working groups [10]. WG 127 was chaired by Trevor McDougall, Hobart, Australia, and SCSW by Rainer Feistel, Warnemünde, Germany.

3. Salinity and relative humidity
Conductance measurements relative to a standard reference solution offer higher resolution than SI-traceable "absolute" measurements in conductance cells. The difference between the two is crucial for oceanographic density calculations, and so world-wide salinity measurements are carried out with conductivity sensors calibrated with respect to certified SSW specimens, as defined by PSS-78. This excellent resolution and comparability prevents the false detection of spurious, virtually significant density gradients in the world ocean, but it comes at the risk of rising uncertainties in long-term trends [6]. Moreover, conductivity measurements alone cannot detect composition anomalies of the solute that may affect the seawater density, such as dissolved silicate in the deep Pacific [20].
SI-traceability is indispensable for the detection of tiny but climatologically relevant long-term salinity trends. Among the surrogate measurands that may substitute conductance for this purpose, density is the most promising candidate, but the intended transition causes numerous practical, technological and metrological problems to be addressed; see Section 6.
Relative humidity is defined by WMO and ASHRAE⁹ as the partial pressure of water vapour divided by the same quantity at saturation, at the same temperature and pressure. However, several meteorological textbooks, climatological studies and IUPAC¹⁰ offer alternative and mutually inconsistent definitions. Most of the definitions fail under certain conditions, for example for humid air in equilibrium with concentrated solutions at low pressures. Moreover, different correlation equations for the calculation of dewpoint temperatures or saturation pressures are in practical use, often without explicit specification of uncertainty or range of validity. In order to harmonise this variety and to ensure comparability and stability of measurements, the working group "Humidity" of CIPM-CCT¹¹ is investigating the suitable options available for this purpose.

Footnotes:
9 ASHRAE: American Society of Heating, Refrigerating and Air-Conditioning Engineers, http://www.ashrae.org/
10 IUPAC: International Union of Pure and Applied Chemistry, http://www.iupac.org
11 CCT: CIPM Consultative Committee on Thermometry, http://www.bipm.org/en/committees/cc/cct/
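For reference, the WMO/ASHRAE definition quoted above, and the fugacity-based alternative discussed next, can be written compactly as follows; the symbols used here are an assumed notation, not taken from the original:

```latex
% WMO/ASHRAE definition: partial pressure of water vapour e relative to
% its saturation value at the same temperature T and total pressure p
\mathrm{RH} = \frac{e(T,p)}{e_{\mathrm{sat}}(T,p)} \times 100\,\%\mathrm{rh}

% Relative fugacity: the analogous ratio of fugacities f of water in humid
% air of composition x, which remains well defined for non-ideal mixtures
% and non-equilibrium conditions
\psi_f = \frac{f_{\mathrm{H_2O}}(T,p,x)}{f_{\mathrm{H_2O}}^{\mathrm{sat}}(T,p)}
```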
Among those is the relative fugacity [16], [18], which properly describes the thermodynamic driving forces in non-equilibrium situations and naturally covers conditions where other definitions cease to exist. TEOS-10 is an international standard that offers accurate equations for the calculation of the fugacity of water in humid air [21].

4. Climatological relevance
Water is a key player in Earth's climate dynamics; Heinrich Hertz in 1885 was perhaps the first to paint the physical picture of climate as a "gigantic steam engine" [22]. Processes involving water in the atmosphere are very complex, theoretically and observationally demanding, and subject to substantial uncertainties [4]. The fundamental energy conveyor of our climate is the hydrological cycle rather than the radiative "CO2 greenhouse" balance often debated in public. Water vapour in the atmosphere accounts for 50-60 % of the terrestrial greenhouse effect, in contrast to only about 20-25 % due to carbon dioxide, CO2. Atmospheric RH also controls the oceanic export of latent heat as well as the formation and distribution of clouds; the latter may contribute another 10-25 % to the greenhouse warming. Estimates for the latent-heat fraction of air-sea energy exchange vary between 45 % and 90 %.
Trends in global distributions of humidity, latent heat flux, evaporation and precipitation are closely connected with small but precisely measurable systematic changes in sea-surface salinities [5]. The latter govern oceanic convection processes, which may deposit vast amounts of heat in the abyss [1]. As a result of global warming, the amount of tropospheric water is increasing, which in turn amplifies the temperature rise via a positive feedback loop known as the "runaway greenhouse" effect [23]. In contrast, RH over the oceans is rather constant at values of about 80 %rh, almost independently of region, season, or atmospheric warming. Similarly, no significant trend of the total cloud cover or albedo has been detected yet. The fundamental physical causes of this constancy remain elusive [23], but strong feedback mechanisms between the two "adiabatic invariants" of climate change may be assumed [3], [24]. Day-time clouds form the main "input valve" for solar irradiation, while surface RH regulates the ocean's cooling by evaporation. The self-organised loop is closed by the formation of clouds, governed by the amount of atmospheric moisture, which originates to 90 % from the ocean. The signature of this process is hidden in the fluctuation spectra of cloudiness and of relative humidity at the sea surface. The characteristic time scales of this feedback are well known and rather short: the residence time of solar energy in the ocean surface layer is between 60 and 90 days, that of water in the troposphere is about 8 days, and that of water in clouds is just 1 hour. Short relaxation times indicate high dynamic stability against external fluctuations. Small changes of this feedback may easily amplify or compensate effects caused by CO2 or other atmospheric perturbations.
In contrast to the temperature, small systematic anomalies of the chemical and isotopic compositions of the natural fluid mixtures "seawater" and "humid air" may go almost unnoticed, even though they represent key indicators of severe implications of climate change.

5. Joint Committee on Seawater
As long as the understanding of the climate attractor and the predictive capabilities of global circulation models remain insufficient, careful long-term observations extending over decades remain indispensable. However, this urgent need is in striking discrepancy with the fact that the formal definitions and the measurement practice of seawater salinity and atmospheric RH include century-old provisional and ambiguous definitions, in part without proper metrological traceability to the SI. Such problems were emphasised at the 2010 WMO-BIPM workshop on measurement challenges for global observation systems for climate change monitoring in Geneva [25].
To overcome this situation and to improve the long-term stability and consistency of measurement results, initial meetings between representatives of BIPM and IAPWS took place at the BIPM in August 2011 and in February 2012; see Figure 2. As a result, a joint effort of IAPWS, BIPM, SCOR, IAPSO and WMO has been envisaged to address metrological challenges for measurements of certain key climatological observables, including oceanic salinity and atmospheric relative humidity [8].

Figure 2. Participants of the BIPM-IAPWS meeting on 7 February 2012 at the BIPM headquarters, Pavillon de Breteuil at Sèvres near Paris, from left to right: Dan Friend, Karol Daucik, Jeff Cooper, Alain Picard, Petra Spitzer, Rainer Feistel, Michael Kühne, Andy Henson, Robert Wielgosz.

In this cooperation of international bodies, IAPWS may provide highly accurate correlation equations that relate the problematic quantities to suitable surrogate measurands and to other relevant properties, consistent with the TEOS-10 equations. BIPM may support this aim by the endorsement and recommendation of suitable uniform definitions and related metrological standards in the framework of the SI. SCOR, IAPSO and WMO may guide and support the adoption of the new equations and standards in the atmospheric and oceanographic scientific and technical communities. This important and demanding activity will be coordinated by the Joint Committee on the Properties of Seawater¹² (JCS) that was founded at the IAPWS meeting in October 2012 in Boulder, Colorado. Membership and terms of reference were identified in September 2013 during a joint IAPWS-BIPM workshop held at the 16th International Conference on the Properties of Water and Steam in Greenwich, UK. In return, JCS representatives attended the 2014 meetings of CCQM¹³ and CCT at the BIPM in Sèvres. JCS is chaired by Rich Pawlowicz, Vancouver, Canada.

6. Intended activities
JCS, in cooperation with CCQM, is working toward an oceanic salinity definition traceable to the SI. This development may include several subsequent steps:
Step 1: Define salinity as the mass fraction of solute. This was already done by the formal introduction of "Absolute Salinity" in the framework of TEOS-10 [11], [12].
Step 2: Calculate this mass fraction for a chemical reference model. The Reference Composition introduced in 2008 [11] satisfies this requirement.
Step 3: By experimental and theoretical data, relate Step 2 to SI-traceable surrogate measurands of sufficient accuracy. This was achieved by the construction of the TEOS-10 equation of state [26], which relates seawater density to the Absolute Salinity, and by studies that demonstrate the insensitivity of this relation with respect to small composition anomalies [12], [20], [27], colloquially known as "Millero's rule" [28]. Seawater density relative to pure-water density can be measured in the laboratory within an uncertainty of less than 3 ppm [29], [30], [31], which is sufficiently accurate for oceanography.
Step 4: Officially endorse the relation of Step 3 as a standard equation. The introduction of TEOS-10 as an international standard for oceanography by IOC and IUGG served this purpose.
Step 5: Produce reference materials of certified salinity based on the relation defined in Step 4. Work on this task is in progress. Among the various practical problems involved is the stability of the certified density in distributed and stored ampoules of standard seawater.
Step 6: Calibrate routine instruments (whatever quantity they actually measure) with the reference materials provided by Step 5. It is planned that conventional bathysondes (CTD) equipped with conductivity sensors will be calibrated with respect to density rather than to salinity. Other studies by external groups investigate the sea-going use of in-situ sensors that measure density or refractive index rather than conductivity.
Step 7: To support Step 6, add auxiliary standard equations, such as between salinity and conductivity, or between density and refractive index. These tools may permit an easy in-situ detection of seawater composition anomalies, but they will in advance require extensive future laboratory work. IAPWS supports external activities of this kind by certified research needs [32]. A sketch of the density-to-salinity inversion envisaged in Steps 6 and 7 is given below.

Footnotes:
12 JCS: SCOR/IAPWS/IAPSO Joint Committee on the Properties of Seawater, http://www.teos-10.org/jcs.htm
13 CCQM: CIPM Consultative Committee for Amount of Substance, http://www.bipm.org/en/committees/cc/ccqm/
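Steps 6 and 7 envisage calibration against density rather than salinity. The following is a minimal sketch of the corresponding inversion, assuming the `gsw` Python implementation of TEOS-10; the numerical inputs are illustrative only:

```python
import gsw  # GSW-Python, the TEOS-10 Gibbs SeaWater toolbox

rho_lab = 1026.0     # SI-traceable laboratory density measurement, kg/m^3 (illustrative)
CT, p = 10.0, 50.0   # Conservative Temperature (degC) and pressure (dbar)

# Invert the TEOS-10 equation of state [26]: recover the Absolute Salinity
# consistent with the measured density, as a density-calibrated sensor would.
SA = gsw.SA_from_rho(rho_lab, CT, p)
print(f"SA = {SA:.3f} g/kg")
```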
CCT, in cooperation with JCS, is working toward a new definition of relative humidity. A tentative series of actions is suggested here, similar to those for salinity, but the practice of future scientific and technological developments may well take other roads.
Step 1: Develop an SI definition of relative humidity. A promising candidate for this quantity is relative fugacity, to which all other RH definitions in practical use (by WMO, ASHRAE, etc.) may be considered suitable proxies.
Step 2: Calculate the quantity of Step 1 for a chemical reference model. The CIPM stoichiometric model of dry air [15] is consistent with the TEOS-10 equation of state for humid air. TEOS-10 permits the calculation of fugacity and relative fugacity, but more work is needed here on the details of, e.g., the roles of varying amounts of CO2 or of the dissolution of air in water.
Step 3: By experimental and theoretical data, relate Step 2 to SI-traceable surrogate measurands of sufficient accuracy. This may in a first approach be achieved by the TEOS-10 equation for humid air, but it may require further refinement in the future.
Step 4: Officially endorse Step 3 as a standard equation. This may take the form of a CIPM equation for the relative humidity or relative fugacity.
Step 5: Produce reference materials or calibration devices (such as saturated solutions or humidity generators) of certified RH based on Step 4.
Step 6: Calibrate routine instruments (such as the various humidity sensors available) against the standards specified in Step 5.

7. Conclusions
Water in its different phases and mixtures is a key element of the climate system. Long-term chemical and isotopic changes of oceanic and atmospheric water mixtures must be monitored regularly and precisely. Mutual consistency and comparability of observational measurement results are indispensable. Future SI-based definitions of seawater salinity and relative humidity are necessary and long overdue. IAPWS formulations for climatologically relevant quantities may support upcoming definitions and measurement standards developed by the BIPM. There is a wide and important field for IAPWS, as a standard-developing organisation specialised in aqueous systems, to engage in climate research, as there is for BIPM to further engage in climate research to ensure world-wide uniformity of measurements and their traceability to the SI.

References
[1] X. Chen, K.-K. Tung, Varying planetary heat sink led to global-warming slowdown and acceleration, Science 345 (2014), pp. 897-903.
[2] J. Curry, Uncertain temperature trend, Nature Geosci. 7 (2014), pp. 83-84.
[3] R. Feistel, W. Ebeling, Physics of Self-Organization and Evolution, Wiley-VCH, Weinheim, 2011, ISBN 978-3-527-40963-1.
[4] S.A. Josey, S. Gulev, L. Yu, "Exchanges through the ocean surface", in: Ocean Circulation and Climate. A 21st Century Perspective, G. Siedler, S.M. Griffies, J. Gould, J.A. Church (editors), Elsevier, Amsterdam, 2013, ISBN 978-0-12-391851-2, pp. 115-140.
[5] P.J. Durack, S.E. Wijffels, T.P. Boyer, "Long-term salinity changes and implications for the global water cycle", in: Ocean Circulation and Climate. A 21st Century Perspective, G. Siedler, S.M. Griffies, J. Gould, J.A. Church (editors), Elsevier, Amsterdam, 2013, ISBN 978-0-12-391851-2, pp. 727-757.
[6] S. Seitz, R. Feistel, D.G. Wright, S. Weinreben, P. Spitzer, P. De Bievre, Metrological traceability of oceanographic salinity measurement results, Ocean Sci. 7 (2011), pp. 45-62.
[7] IOC, SCOR, IAPSO, The international thermodynamic equation of seawater 2010: calculation and use of thermodynamic properties, Intergovernmental Oceanographic Commission, Manuals and Guides No. 56, UNESCO (English), 196 pp., Paris, 2010, http://www.teos-10.org
[8] R. Feistel, R. Wielgosz, S.A. Bell, M.F. Camões, J.R. Cooper, P. Dexter, A.G. Dickson, P. Fisicaro, A.H. Harvey, M. Heinonen, O. Hellmuth, H.-J. Kretzschmar, J.W. Lovell-Smith, T.J. McDougall, R. Pawlowicz, P. Ridout, S. Seitz, P. Spitzer, D. Stoica, H. Wolf, Metrological challenges for measurements of key climatological observables. Part 1: overview; Part 2: oceanic salinity; Part 3: seawater pH; Part 4: atmospheric relative humidity, Metrologia (2015), in press.
[9] R. Feistel, D.G. Wright, K. Miyagawa, A.H. Harvey, J. Hruby, D.R. Jackett, T.J. McDougall, W. Wagner, Mutually consistent thermodynamic potentials for fluid water, ice and seawater: a new standard for oceanography, Ocean Sci. 4 (2008), pp. 275-291.
[10] R. Pawlowicz, T.J. McDougall, R. Feistel, R. Tailleux, An historical perspective on the development of the thermodynamic equation of seawater - 2010, Ocean Sci. 8 (2012), pp. 161-174.
[11] F.J. Millero, R. Feistel, D.G. Wright, T.J. McDougall, The composition of Standard Seawater and the definition of the Reference-Composition Salinity Scale, Deep-Sea Res. I 55 (2008), pp. 50-72.
[12] D.G. Wright, R. Pawlowicz, T.J. McDougall, R. Feistel, G.M. Marion, Absolute Salinity, "Density Salinity" and the Reference-Composition Salinity Scale: present and future use in the seawater standard TEOS-10, Ocean Sci. 7 (2011), pp. 1-26.
[13] M. Tanaka, G. Girard, R. Davis, A. Peuto, N. Bignell, Recommended table for the density of water between 0 °C and 40 °C based on recent experimental reports, Metrologia 38 (2001), pp. 301-309.
[14] A.H. Harvey, R. Span, K. Fujii, M. Tanaka, R.S. Davis, Density of water: roles of the CIPM and IAPWS standards, Metrologia 46 (2009), pp. 196-198.
[15] A. Picard, R.S. Davis, M. Gläser, K. Fujii, Revised formula for the density of moist air (CIPM-2007), Metrologia 45 (2008), pp. 149-155.
[16] R. Feistel, TEOS-10: a new international oceanographic standard for seawater, ice, fluid water and humid air, Int. J. Thermophys. 33 (2012), pp. 1335-1351.
[17] T.J. McDougall, Potential enthalpy: a conservative oceanic variable for evaluating heat content and heat fluxes, J. Phys. Oceanogr. 33 (2003), pp. 945-963.
[18] R. Feistel, D.G. Wright, H.-J. Kretzschmar, E. Hagen, S. Herrmann, R. Span, Thermodynamic properties of sea air, Ocean Sci. 6 (2010), pp. 91-141.
[19] WMO, Guide to Meteorological Instruments and Methods of Observation, World Meteorological Organization, Geneva, 2008, ISBN 978-92-63-10008-5.
[20] R. Pawlowicz, D.G. Wright, F.J. Millero, The effects of biogeochemical processes on oceanic conductivity/salinity/density relationships and the characterization of real seawater, Ocean Sci. 7 (2011), pp. 363-387.
[21] R. Feistel, J.W. Lovell-Smith, O. Hellmuth, Virial approximation of the TEOS-10 equation for the fugacity of water in humid air, Int. J. Thermophys. 36 (2015), pp. 44-68.
[22] J.F. Mulligan, G.G. Hertz, An unpublished lecture by Heinrich Hertz: "On the energy balance of the Earth", Am. J. Phys. 65 (1997), pp. 36-45.
[23] W. Ingram, A very simple model for the water vapour feedback on climate change, Quart. J. Roy. Met. Soc. 136 (2010), pp. 30-40.
[24] R. Feistel, "Water, steam and climate", Proc. 16th Int. Conf. Prop. Water Steam, Greenwich, UK, September 2013.
[25] BIPM, "WMO-BIPM workshop on measurement challenges for global observation systems for climate change monitoring: traceability, stability and uncertainty", WMO headquarters, Geneva, Switzerland, 2010, ISBN 978-92-822-2239-3.
[26] R. Feistel, A Gibbs function for seawater thermodynamics for -6 to 80 °C and salinity up to 120 g/kg, Deep-Sea Res. I 55 (2008), pp. 1639-1671.
[27] R. Feistel, G.M. Marion, R. Pawlowicz, D.G. Wright, Thermophysical property anomalies of Baltic seawater, Ocean Sci. 6 (2010), pp. 949-981.
[28] F.J. Millero, K. Kremling, The densities of Baltic waters, Deep-Sea Res. 23 (1976), pp. 1129-1138.
[29] K. Kremling, New method for measuring density of seawater, Nature 229 (1971), pp. 109-110.
[30] H. Wolf, Determination of water density: limitations at the uncertainty level of 1×10⁻⁶, Accred. Qual. Assur. 13 (2008), pp. 587-591.
[31] R. Feistel, S. Weinreben, H. Wolf, S. Seitz, P. Spitzer, B. Adel, G. Nausch, B. Schneider, D.G. Wright, Density and Absolute Salinity of the Baltic Sea 2006-2009, Ocean Sci. 6 (2010), pp. 3-24.
[32] IAPWS, Thermophysical properties of seawater, IAPWS Certified Research Need - ICRN 16 (2014), http://www.iapws.org/icrn.html
ACTA IMEKO, ISSN: 2221-870X, March 2023, Volume 12, Number 1, 1-2

Introductory notes for the Acta IMEKO first issue 2023, General Track
Francesco Lamonaca
Department of Computer Science, Modeling, Electronics and Systems Engineering (DIMES), University of Calabria, Ponte P. Bucci, 87036, Arcavacata di Rende, Italy

Section: Editorial
Citation: Francesco Lamonaca, Introductory notes for the Acta IMEKO first issue 2023 General Track, Acta IMEKO, vol. 12, no. 1, article 1, March 2023, identifier: IMEKO-ACTA-12 (2023)-01-01
Received March 29, 2023; in final form March 29, 2023; published March 2023
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Francesco Lamonaca, e-mail: editorinchief.actaimeko@hunmeko.org

Dear Readers,
As usual, this issue includes a General Track aimed at collecting contributions that do not relate to a specific event. As Editor in Chief, it is my pleasure to give you an overview of these papers, with the aim of encouraging potential authors to consider sharing their research through Acta IMEKO.
Power quality (PQ) measurements and auditing play a vital role for smart grid applications and for industrial safety and reliability. The major electrical PQ characteristics and parameters are studied and analysed for single-phase and poly-phase systems in the IEEE 1159 recommended practice. In [1], a power quality monitoring system to identify the utilisers' performance, audit, and PQ issues is presented practically, for two-panel-board switchgear equipment, as a case study in Malaysia.
Today, an innovative leap for wireless sensor networks, leading to the realization of novel and intelligent industrial measurement systems, is represented by the requirements arising from the Industry 4.0 and Industrial Internet of Things (IIoT) paradigms. In fact, unprecedented challenges to measurement capabilities are being faced, with the ever-increasing need to collect reliable yet accurate data from mobile, battery-powered nodes over potentially large areas. Therefore, optimizing energy consumption and predicting battery life are key issues that need to be accurately addressed in such IoT-based measurement systems. This is the case for the additive manufacturing application considered in [2], where smart battery-powered sensors embedded in manufactured artifacts need to reliably transmit their measured data to better control production and final use, despite being physically inaccessible. A low power wide area network (LPWAN), and in particular LoRaWAN (Long Range WAN), represents a promising solution to ensure sensor connectivity in the aforementioned scenario, being optimized to minimize energy consumption while guaranteeing long-range operation and low-cost deployment. In the application presented in [2], LoRa-equipped sensors are embedded in artifacts to monitor a set of meaningful parameters throughout their lifetime. In this context, once the sensors are embedded, they are inaccessible, and their only power source is the originally installed battery. Therefore, in [2], the battery lifetime prediction and estimation problems are thoroughly investigated.
For this purpose, an innovative model based on an artificial neural network (ANN) is proposed and developed, starting from the discharge curve of the lithium-thionyl chloride batteries used in the additive manufacturing application. The results of experimental campaigns carried out on real sensors were compared with those of the model and used to tune it appropriately. The results obtained are encouraging and pave the way for interesting future developments.
Bouaouiche et al. in [3] present an innovative method for the analysis of the vibration signals of a ball bearing, for the diagnosis of rotating machine defects by vibration analysis. The signals are available on the Case Western Reserve University platform in the form of MATLAB files. The proposed approach consists of several methods and steps, including the decomposition of the signals by the feature mode decomposition (FMD) method and then the selection, according to the kurtosis values, of the signals useful for the defect diagnosis.
In [4], the design, the prototyping, and the first results of experimental tests of a capacitive oil level sensor (CLS) intended for aeronautical applications are described. Due to potentially high vibrational stresses and the presence of high electromagnetic interference (EMI), the working conditions on aircraft can be considered quite harsh. Hence, both the sensing part and the conditioning circuit must meet strict constraints. For this reason, in the design phase, great attention has been paid also to the mechanical characteristics of the probes. All the design aspects are exposed, the main advantages with respect to alternative level-sensing techniques are discussed, and the preliminary experimental results of sensitivity, linearity, hysteresis, and settling time tests are presented and commented on.
Analogue data acquisition is a common task with applications in several fields, such as scientific research, industry, food production, safety, and environmental monitoring. It can be carried out either using systems designed ad hoc for a specific application or using general-purpose digital acquisition boards (DAQs). Several DAQ solutions are nowadays available on the market; however, most of them are extremely expensive and come as closed commercial products, which prevents users from adapting the system to their specific applications and limits product compatibility to a few operating systems or platforms. The paper in [5] describes the design and the preliminary metrological characterisation of a digital data acquisition solution based on the Teensyduino development board. The aim of the project presented in [5] is to create a hardware and software infrastructure suitable for use on several operating systems and that can be freely modified by the users when required. Taking advantage of the Teensyduino features, the proposed system is easy to calibrate and use, and it provides functions and performance comparable to many commercial DAQs, but at a significantly lower cost.
In recent years, technological innovation has acquired a fundamental role in the agri-food sector, in particular in food quality control. The development of technology has made it possible to improve the quality of food before it is placed on the market. Recently, non-invasive techniques, such as those operating in the THz spectral band, have been applied to the field of food quality control.
In the laboratories of the ENEA centre in Frascati, close to Rome (Italy), a THz imaging system operating in reflection mode has been developed, together with an experimental setup able to measure both the reflection and the transmission of samples in the frequency range from 18 GHz to 40 GHz. With these two setups, the authors in [6] distinguish rotten and healthy hazelnuts by acquiring in real time both images of the fruit inside the shell, using the imaging system, and the transmission data, exploiting the 18-40 GHz system.
This issue, too, shows heterogeneous topics, all connected by the common focus on measurement and instrumentation. I hope you will enjoy your reading.
Francesco Lamonaca, Editor in Chief

References
[1] D. V. N. Ananth, N. Hannoon, Mohamed Shahriman Bin Mohamed Yunus, Mohd Hanif Bin Jamaludin, V. V. S. S. Sameer Charavarthy, P. S. R. Chowdary, "Real-time power quality measurement audit of government building - a case study in compliance with IEEE 1159", Acta IMEKO, vol. 12 (2023), no. 1, pp. 1-9. DOI: 10.21014/actaimeko.v12i1.1287
[2] A. Morato, T. Fedullo, S. Vitturi, L. Rovati, F. Tramarin, "A learning model for battery lifetime prediction of LoRa sensors in additive manufacturing", Acta IMEKO, vol. 12 (2023), no. 1, pp. 1-10. DOI: 10.21014/actaimeko.v12i1.1400
[3] K. Bouaouiche, Y. Menasria, D. Khalfa, "Diagnosis of rotating machine defects by vibration analysis", Acta IMEKO, vol. 12 (2023), no. 1, pp. 1-6. DOI: 10.21014/actaimeko.v12i1.1438
[4] F. Attivissimo, F. Adamo, L. De Palma, D. Lotano, A. Di Nisio, "First experimental tests on the prototype of a capacitive oil level sensor for aeronautical applications", Acta IMEKO, vol. 12 (2023), no. 1, pp. 1-6. DOI: 10.21014/actaimeko.v12i1.1474
[5] L. Lombardo, "Multi-platform solution for data acquisition", Acta IMEKO, vol. 12 (2023), no. 1, pp. 1-8. DOI: 10.21014/actaimeko.v12i1.1475
[6] M. Greco, F. Leccese, E. Giovenale, A. Doria, "Terahertz techniques for better hazelnut quality", Acta IMEKO, vol. 12 (2023), no. 1, pp. 1-8. DOI: 10.21014/actaimeko.v12i1.1477

ACTA IMEKO, ISSN: 2221-870X, September 2015, Volume 4, Number 3, 2-3

Introduction to the special issue on the 20th IMEKO TC-4 International Symposium and the 18th TC-4 Workshop on ADC and DAC Modelling and Testing
Ján Šaliga 1, Sergio Rapuano 2
1 Technical University of Košice, Faculty of Electrical Engineering and Informatics, Dept. of Electronics and Multimedia Telecommunications, Letná 9, 04120 Košice, Slovakia
2 University of Sannio, Department of Engineering, Corso Garibaldi 107, 82100 Benevento, Italy

Section: Editorial
Citation: Ján Šaliga, Sergio Rapuano, Introduction to the special issue on the 20th IMEKO TC-4 International Symposium and the 18th TC-4 Workshop on ADC and DAC Modelling and Testing, Acta IMEKO, vol. 4, no. 3, article 1, September 2015, identifier: IMEKO-ACTA-04 (2015)-03-01
Editor: Paolo Carbone, University of Perugia, Italy
Received June 25, 2015; in final form June 25, 2015; published June 2015
Copyright: © 2015 IMEKO.
This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Corresponding author: Ján Šaliga, email: jan.saliga@tuke.sk

The IMEKO Technical Committee number 4 on "Measurement of Electric Quantities" comprises researchers from all over the world dealing with electrical and electronic measurements. This research community meets annually at the TC-4 symposia or at the IMEKO world congresses, where the most recent original proposals and research results are presented and discussed in several oral and poster sessions focused on specific topics. After having been organised in many countries, from Bulgaria in 1990 to Barcelona in 2013, the TC-4 symposium came back to Italy for the third time in its history. The 2014 edition of the symposium and the 18th TC-4 Workshop on ADC and DAC Modelling and Testing were, in fact, held in Benevento, a historical town in southern Italy known for the significant memories of its glorious past and the beauty of its UNESCO World Heritage sites. The theme of 2014 was "Research on electrical and electronic measurement for the economic upturn". Benevento is the site of a small and young but very dynamic university that hosted the TC-4 Symposium 2014, which, gathering more than 200 papers presented by researchers from all over the world, was one of the most successful ones. Among this relevant number of contributions, the board of TC-4 selected a few papers to be extended and submitted for the peer review of the Acta IMEKO journal. In this special issue you can find the 11 articles that, according to the reviewers, were considered worthy of publication in this journal. The double selection of the TC-4 board and of the journal reviewers provided a distillate of the most advanced scientific activity in this field. The papers included in the special issue represent a sample of the state of the art of the research on several topics associated with the measurement of electric quantities. More in detail, the traditional research field of analog-to-digital converter (ADC) metrology is represented in this issue by two papers. The paper 'Reliable, accurate and scalable ADC test methods for standard software platforms', by Vilmos Pálfi, Tamás Viroztek, and István Kollár, presents some algorithms to overcome problems arising from the application of the sine-wave histogram test, a commonly used method to characterise the nonlinear behaviour of ADCs. The paper 'Digital reconstruction stage for the FBD ΣΔ-based ADC in a multistandard receiver: theoretical analysis and design', by Rihab Lahouli, Manel Ben-Romdhane, Chiheb Rebai and Dominique Dallet, presents the design of a digital reconstruction stage for a frequency band decomposition (FBD)-based ADC architecture for digitising multistandard receiver signals. Another research topic the TC-4 researchers are focused on is time synchronisation among instruments, always needed when a consistent result is requested from a distributed group of instruments or sensors.
The paper 'Time coordination of standalone measurement instruments by synchronized triggering', by Francesco Lamonaca, Domenico Luca Carnì and Domenico Grimaldi, proposes a new hardware interface architecture for the synchronous triggering of the measurement instruments in a distributed measurement system, with the aim of avoiding the effects of concurrent software processes and of reducing the causes of delay in detecting the trigger condition. The paper 'Hybrid time synchronization for underwater sensor networks', by Oriol Pallares Valls, Pierre-Jean Bouvet and Joaquin del Río, is focused on a particular but significant case of distributed measurement system. It presents a study of time synchronization problems over underwater sensor networks, taking into account the main communication challenges of the water channel and observing its behaviour in simulation and in real tests. Current measurement methods and instruments are a core business for TC-4 researchers. In the paper 'Setting up of a floating gate test bench in a low noise environment to measure very low tunneling currents', the authors Jérémy Postel-Pellerin, Gilles Micolau, Philippe Chiquet, Jeanne Melkonian, Guillaume Just, Daniel Boyer and Cyril Ginoux propose a solution to measure very low tunneling currents in non-volatile memories, based on the floating-gate technique. The proposed key factor is to carry out the measurements in an extremely low-noise environment, which allows the authors to reach current levels lower than the ones obtained by direct measurements. Nondestructive testing is a main research field in many areas, including telecommunications, electronics and manufacturing. The research results presented in the two following papers aim at easing the metrological characterisation of conductive or superconductive elements by means of microwaves. In particular, the paper 'Surface conductance and microwave scattering in semicontinuous gold films', by Jan Obrzut, presents techniques to study the mechanisms of the electromagnetic response of randomly structured metallic networks. The paper 'Broadband Corbino spectroscopy and stripline resonators to study the microwave properties of superconductors', by Marc Scheffler, Maximilian Felger, Markus Thiemann, Daniel Hafner, Katrin Schlegel, Martin Dressel, Konstantin Ilin, Michael Siegel, Silvia Seiro, Christoph Geibel and Frank Steglich, describes two different techniques to study superconductors at microwave frequencies: the broadband Corbino approach can be used on a very wide range of frequencies almost continuously but is limited to thin-film samples, whereas the stripline resonators are sensitive enough to study low-loss single crystals but can provide results only at a set of discrete resonant frequencies. The design and characterization of measurement transducers is another core activity of TC-4 researchers. Several papers were presented in this field during the symposium. This issue includes two papers on this topic. The paper 'Estimation of stepping motor current from long distances through cable-length-adaptive piecewise affine virtual sensor', by Alberto Oliveri, Mark Butcher, Alessandro Masi and Marco Storace, proposes a piecewise affine virtual sensor, a function of past inputs and measured outputs of a system, for the estimation of the motor-side current of hybrid stepper motors, which actuate the LHC (Large Hadron Collider) collimators at CERN.
The paper 'New generation of cage-type current shunts developed using model analysis', by Věra Nováková Zachovalová, Martin Šíra, Pavel Bednář and Stanislav Mašláň, presents a new type of AC/DC current shunt, ranging from 30 mA to 10 A, developed using a lumped circuit element model. During the TC-4 Symposium 2014, several special sessions on interdisciplinary topics were organised, gathering researchers from the more traditional TC-4 fields and from quite diverse ones that are based on metrology or that are the basis for a correct measurement. Legal metrology and software quality are two examples of such fields, where metrology merges with and supports, or is supported by, different disciplines. In the paper 'Painting authentication by means of a biometric-like approach', by Giuseppe Schirripa Spagnolo, Lorenzo Cozzella, Maurizio Caciotta, Roberto Colasanti and Gianluca Ferrari, for example, an innovative system based on smartphone acquisition and a mobile application is proposed to verify artwork authenticity based on random intrinsic object characteristics. Software is an essential component of many measurement systems, while metrology concepts are the basis of any method for evaluating the quality of software. The paper 'Some thoughts on quality models: evolution and perspectives', by Luigi Buglione, discusses from an evolutionary perspective how software quality has been, is and could be perceived and defined in the coming years, from a measurement perspective. The editors of this issue and the authors are grateful to all the colleagues and institutions that have made these contributions possible. A particular acknowledgement is due to all the TC-4 board members and the reviewers who, with their efforts, allowed the production of this special issue. We sincerely hope that you will enjoy the reading of this issue.

Acta IMEKO, ISSN: 2221-870X, June 2015, volume 4, number 2, 80-84

Method of integral nonlinearity testing and correction of multi-range ADC by direct measurement of output voltages of multi-resistors divider

Hu Zhengbing 1, Roman Kochan 2, Orest Kochan 3, Su Jun 4, Halyna Klym 2
1 Huazhong Normal University, No. 152 Luoyu Road, Wuhan, 430079, P. R. China
2 Specialized Computer Systems Department, Lviv National Polytechnic University, S. Bandery Str. 12, Lviv, 79013, Ukraine
3 Information Measurement Technology Department, Lviv National Polytechnic University, S. Bandery Str. 12, Lviv, 79013, Ukraine
4 Internet of Things Department, Hubei University of Technology, No. 1 Lizhi Road, Wuhan, 430068, P. R. China

Section: Research paper
Keywords: integral nonlinearity; multi-resistor voltage divider; residual error
Citation: Hu Zhengbing, Roman Kochan, Orest Kochan, Su Jun, Halyna Klym, Method of integral nonlinearity testing and correction of multi-range ADC by direct measurement of output voltages of multi-resistors divider, Acta IMEKO, vol. 4, no. 2, article 14, June 2015, identifier: IMEKO-ACTA-04 (2015)-02-14
Editor: Paolo Carbone, University of Perugia, Italy
Received October 23, 2014; in final form February 10, 2015; published June 2015
Copyright: © 2015 IMEKO.
This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Funding: this work was supported by the China International Science and Technology Cooperation Project (CU01-11) and the Ukrainian Ministry of Education and Science, grant 0115U000446. Corresponding author: Roman Kochan, e-mail: kochan.roman@gmail.com

Abstract: A method of testing-point generation for the identification and correction of the integral nonlinearity of high-performance ADCs is presented. The proposed method is based on averaging all voltages of a multi-resistor voltage divider. The influence of the resistor errors and of the random errors of the ADC on the residual error of the integral nonlinearity correction for the method based on the multi-resistor divider is investigated.

1. Introduction
The application of digital signal processing algorithms and computer systems in all fields of our life results in the implementation of analogue-to-digital converters (ADCs) as components of modern measurement systems. In some cases the ADC's metrological parameters determine the characteristics of the whole measurement system. This point is particularly important for measurement systems of electrical quantities. Therefore improving the ADC is an essential task for higher measurement accuracy. The market of precision DC ADCs is led by converters based on sigma-delta modulators (SDM) [1], [2]. The high accuracy of these components is provided by the implementation of null setting and calibration. These methods decrease the additive and multiplicative conversion errors. The conversion error is therefore composed of the following errors: the calibration voltage source error, the multiplexer error and the residual ADC errors. The most significant component of the residual ADC error is its integral nonlinearity. For example, the maximum allowable integral nonlinearity of the 24-bit ADC AD7714 [3] is 15 ppm. This nonlinearity corresponds to the 16th bit, that is, approximately 8 least significant bits (LSB) are a priori inaccurate and excessive. Therefore, for this ADC, to obtain an accuracy higher than 15 ppm, the integral nonlinearity should be corrected. In the same case, the noise level of this ADC does not exceed 2.5 LSB; the ADC therefore has approximately 5.5 stable bits, and so it cannot be used when high accuracy is required. Moreover, the accuracy of measurement results obtained by the implementation of the substitution method [4] in an ADC is defined by the ADC's integral nonlinearity [5]. So correction of the ADC's integral nonlinearity provides a higher accuracy of the measurement results. In [6] a method for the identification of the ADC's integral nonlinearity in a set of testing points, conventionally called the basic method, was proposed. This method provides the generation of a set of testing points which correspond to the number sequence $u_R/(n+1)$, where $u_R$ is the range of the ADC and $n$ an integer. This means that all the testing points are grouped in the lower half of the ADC's range. The main objective of this work is to develop and investigate a method of ADC integral nonlinearity identification and correction with a uniform distribution of the testing points over the range.
2. Approach of testing-point generation
The proposed method of testing-point generation is based on the analogue-to-digital conversion of the output signals of a voltage divider consisting of $n$ serially connected resistors $r_1, r_2, \dots, r_n$, connected to the reference voltage source $u_{ref}$. The measurement circuit is shown in figure 1. According to Kirchhoff's voltage law, we have the following equation

$u_{ref} = \sum_{i=1}^{n} u_{r_i}$ (1)

where $u_{r_i}$, $i = \overline{1,n}$, is the voltage across the corresponding resistor of the divider. The average voltage $\bar{u}$ of all resistors of the divider can be computed as

$\bar{u} = \frac{1}{n} \sum_{i=1}^{n} u_{r_i}$. (2)

Taking into account (1), (2) can be presented as

$\bar{u} = \frac{u_{ref}}{n}$. (3)

It means that the average voltage $\bar{u}$ of all resistors of the divider does not depend on the voltages across the separate resistors. In addition, according to Ohm's law, the resistances of these resistors do not influence the average voltage. So we have an indirect measurement described by the function $y = h(x)$, where $y = \bar{u}$, $x = u_{ref}$ and $h(x) = x/n$. Therefore, the absolute measurement error $\Delta y$ is [7]

$\Delta y = \left| h'(x) \right| \Delta x$ (4)

where $h'(x)$ is the derivative of the function $h(x)$ and $\Delta x$ the absolute error of the argument. Taking into account that $n$ is a natural number, (4) can be converted to

$\Delta \bar{u} = \frac{\Delta u_{ref}}{n}$ (5)

where $\Delta \bar{u}$ is the absolute error of the average voltage $\bar{u}$ and $\Delta u_{ref}$ the absolute error of the voltage source $u_{ref}$. The relative error $\delta_{\bar{u}}$ of the average voltage $\bar{u}$ is

$\delta_{\bar{u}} = \frac{\Delta \bar{u}}{\bar{u}} \cdot 100\,\% = \frac{\Delta u_{ref}/n}{u_{ref}/n} \cdot 100\,\% = \frac{\Delta u_{ref}}{u_{ref}} \cdot 100\,\% = \delta_{u_{ref}}$ (6)

with $\delta_{u_{ref}}$ the relative error of the voltage source $u_{ref}$. Taking into account (6), the following intermediate conclusion can be made: the error introduced by a measurement converter based on a multi-resistor voltage divider with averaging of the voltages of all resistors tends to zero. This provides the opportunity of generating a set of testing signals for the ADC with an exactly predefined ratio. If the ADC is calibrated by the voltage source of the divider, $u_{ref}$, the average voltage $\bar{u}$ can be used as a testing point for the identification of the ADC's integral nonlinearity. The result $c$ of the analogue-to-digital conversion of an input voltage $u$ [8] is

$c = c_0 + \frac{c_{ref} - c_0}{u_{ref}}\, u + f(u)$ (7)

where $c_0$ is the result of the analogue-to-digital conversion for the null-setting channel; $c_{ref}$ is the result of the analogue-to-digital conversion for the calibration channel (input connected to the voltage $u_{ref}$); and $f(u)$ is the integral nonlinearity of the ADC's conversion function, with $f(0) = f(u_{ref}) = 0$. Taking into account (7), and using (2) and (3), results in

$\frac{1}{n} \sum_{i=1}^{n} \left( c_i - c_0 - f(u_i) \right) = \frac{c_{ref} - c_0}{u_{ref}} \cdot \frac{u_{ref}}{n}$ (8)

where $c_i$ is the result of the analogue-to-digital conversion of the voltage $u_{r_i}$. After simplifying (8) we get

$\sum_{i=1}^{n} \left( c_i - c_0 - f(u_i) \right) = c_{ref} - c_0$. (9)

The average value $\overline{f(u)}$ of the nonlinear component of the conversion function over all voltages generated by the multi-resistor divider can be computed as

$\overline{f(u)} = \frac{1}{n} \sum_{i=1}^{n} f(u_i)$. (10)

Taking into consideration (10), (9) can be transformed into

$\overline{f(u)} = \frac{1}{n} \left( \sum_{i=1}^{n} \left( c_i - c_0 \right) - \left( c_{ref} - c_0 \right) \right)$. (11)

The value of $\overline{f(u)}$ is used as the value of the integral nonlinearity at the testing point $u = u_{ref}/n$.

Figure 1. Circuit of the $n$-resistor voltage divider.

An analysis of the influence of the tested ADC on the residual error [6] shows the potential of this implementation method for an ADC with a smooth conversion characteristic. A short numerical illustration of the averaging argument follows.
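The sketch below (NumPy; the reference voltage, resistor tolerance and sine-shaped nonlinearity are hypothetical choices, not taken from the paper) simulates a divider with ±1 % resistor errors and a toy ADC obeying (7), then recovers the average nonlinearity from the conversion codes alone via (11):

```python
import numpy as np

rng = np.random.default_rng(0)

u_ref = 5.0          # reference voltage, V (assumed value)
n = 12               # number of divider resistors
r = 1.0 + rng.uniform(-0.01, 0.01, n)    # resistors with +/-1 % tolerance

# voltages across the individual resistors: u_ri = u_ref * r_i / sum(r)
u_r = u_ref * r / r.sum()

# (2)-(3): the average voltage equals u_ref/n regardless of resistor errors
print(np.isclose(u_r.mean(), u_ref / n))   # True

# a toy ADC per (7): c = c0 + (c_ref - c0)/u_ref * u + f(u),
# with a smooth INL f(u) in quanta satisfying f(0) = f(u_ref) = 0
c0, c_ref = 100.0, 2**24
f = lambda u: 200.0 * np.sin(np.pi * u / u_ref)
adc = lambda u: c0 + (c_ref - c0) / u_ref * u + f(u)

c_i = adc(u_r)
# (11): average nonlinearity over the divider voltages, codes only
f_avg = (np.sum(c_i - c0) - (adc(u_ref) - c0)) / n
print(f_avg, f(u_ref / n))   # nearly equal for a smooth f(u)
```

The first print confirms that the resistor tolerances cancel exactly in the average, which is the core of equations (1)-(6).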
So, it is shown that the multi-resistor voltage divider provides precise identification of the ADC's integral nonlinearity at one testing point without using precision components. The method proposed in [6] is called the basic method, and it allows the number of generated testing points to be increased by the choice of $n$. Since $n$ has the set of natural divisors $\{m_1, \dots, m_t\}$, there is a set of natural numbers $\{k_1, \dots, k_t\}$ which satisfies the condition $n = k_i m_i$, $i = \overline{1,t}$. This allows the conversion of the voltages across the cascades of $k_i$ ($i = \overline{1,t}$) serially connected resistors, which corresponds to (1). Therefore, the integral nonlinearity of the ADC can be computed for the set of $t$ voltages

$u_i = \frac{u_{ref}}{n}\, k_i = \frac{u_{ref}}{m_i}, \quad i = \overline{1,t}.$ (12)

The error of all these voltages corresponds to (6), and only one reference voltage source has to be used.

3. Proposed method of generating testing points
The analysis of the testing results of the basic method of identification and correction of the ADC's integral nonlinearity [6] showed a linear dependence of the residual conversion error on the density of the testing-point concentration. It allows us to separate at least two subranges, the lower and the higher half of the ADC's range, which have different residual errors after nonlinearity correction using the basic method. The residual nonlinearity error of the upper subrange is approximately one order of magnitude higher than the residual nonlinearity error of the lower subrange. Taking these results into account, we propose to generate the testing points for the nonlinearity identification of a multi-range ADC (in the simplest case, a double-range ADC) as the voltages across the serially connected resistors of the multi-resistor voltage divider, measured using the lower subrange of the highest range, and to use these voltages for the calibration and the nonlinearity identification/correction of all other ranges. In the case of a voltage divider with $n$ resistors ($r_1, r_2, \dots, r_n$), the following testing points are generated: the first is the voltage across resistor $r_1$, the second is the voltage across the serially connected resistors $r_1$ and $r_2$, the third across $r_1 \dots r_3$, etc., up to the $n/2$-th across $r_1 \dots r_{n/2}$. In general, this approach provides the generation of $n/2$ testing points with relatively high precision, uniformly distributed in the lower half of the highest range of the ADC. The calibration and nonlinearity identification of the lower ranges provide high accuracy and a uniform distribution of all generated testing points (when the ratio of the higher range to the lower range is not less than two). Besides, it is possible to generate an arbitrary number of testing points by selecting an appropriate $n$, and to increase the number of generated testing points with a smaller increase of $n$ in comparison with the basic method. A condensed sketch of the two testing-point sets is given below.
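As a quick illustration (the 12-resistor divider and 5 V reference are assumed here to match the simulation set-up described later), the following snippet contrasts the basic method's divisor-based points with the proposed cumulative-tap points:

```python
# basic method of [6]: one point per natural divisor of n, per (12);
# proposed method: cumulative voltages r1, r1+r2, ..., up to n/2 resistors
u_ref, n = 5.0, 12

divisors = [m for m in range(1, n + 1) if n % m == 0]      # 1, 2, 3, 4, 6, 12
basic_points = sorted(u_ref / m for m in divisors)

proposed_points = [u_ref * k / n for k in range(1, n // 2 + 1)]

print(basic_points)     # clustered towards the bottom of the range
print(proposed_points)  # uniform over the lower half of the highest range
```

Printing both lists makes the paper's motivation visible: the divisor-based points thin out rapidly towards the top of the range, whereas the proposed taps are evenly spaced.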
4. Investigation of the residual error
An experimental investigation of the residual nonlinear error demands high-precision equipment with errors 3…5 times smaller than the expected residual nonlinearity, corresponding to 0.5 ppm. Besides, it is necessary to be able to set the level and the form of the ADC's nonlinearity with the same error level. Therefore, the aim is to investigate the proposed method by simulation and to evaluate the influence of resistor errors and ADC noise on the residual nonlinear error for various nonlinear functions. Generally, the method of investigation consists in emulating the integral nonlinearity by a set of curves and computing it according to the proposed method. The difference between the emulated and the computed curves is the error to be analysed. The algorithm for this investigation consists of the following steps (a simplified simulation sketch is given at the end of this passage):
- random definition of two curves which simulate the ADC's nonlinearities in the higher and lower ranges;
- definition of the resistances of the divider's resistors $r_1, r_2, \dots, r_n$, with random deviations from the average value, to simulate the resistor errors;
- computation of the results of the analogue-to-digital conversion for the appropriate combinations of resistors, implementing the basic method for the identification of the nonlinearity of the ADC's higher range;
- addition of noise to these conversion results, to simulate the random error of the ADC;
- computation of the parameters of the correction function for the nonlinearity of the ADC's higher range;
- computation of the voltages across the serially connected resistors $r_1$; $r_1, r_2$; $r_1 \dots r_3$; $\dots$; $r_1 \dots r_{n/2}$, using the simulated results of the analogue-to-digital conversion of these voltages in the higher range and the correction of the ADC's nonlinearity in that range;
- computation of the results of the analogue-to-digital conversion of these voltages using the lower range of the ADC;
- addition of noise to these conversion results, to simulate the random error;
- computation of the parameters of the correction function for the nonlinearity of the lower range;
- computation of the residual error of the integral nonlinearity correction for the lower range as the difference between the defined curve and the computed correction function for this range.

So, by accumulating a set of residual error curves, we can apply statistical data manipulation methods to investigate the parameters of the proposed method and their sensitivity to the ADC's random error and to the resistor errors. The nonlinearity simulation curves for the higher and lower ranges are based on fourth-order polynomial functions with random coefficients [6]. The validity criterion for each curve is that its absolute maximum value over the range of the ADC must not exceed 250 quanta. This constant corresponds to the maximum allowable nonlinearity of the AD7714. Examples of such curves are presented in figure 2, with the nonlinearity in quanta along the y-axis and the input voltage in percent of the range along the x-axis. Also, the two curves with the maximum and minimum values among the 500 curves generated for our investigation are presented. The software verification was done by setting "ideal" resistors and ADC (zero error of all resistors and zero ADC random error). In this case we obtained a maximum error of 0.05 quantum, which we can explain by rounding errors during computation. Therefore, we can expect that the developed models and software are adequate and can be used to investigate the influence of the component errors on the residual error after the integral nonlinearity correction. The simulation was done for a 12-resistor voltage divider. This number of resistors was selected based on two contradictory requirements:
- a maximum number of natural divisors, to implement a large number of generated testing points (the number 12 has six divisors in total: 1, 2, 3, 4, 6, 12);
- a minimum number of resistors, to decrease the number of multiplexer channels and to simplify the circuit (a 12-resistor divider demands two switches with 13 channels for the multiplexer).

The simulated resistor error is ±1 % and ±2 %. This corresponds to 10…20 years of operation of widely used wire-wound resistors with an allowable error of ±0.1 %.
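The following Monte-Carlo sketch condenses the steps above into a single lower-range pass; it is a deliberately simplified reconstruction (it omits the higher-range correction stage and the basic-method step, and the 24-bit scaling and noise level are assumptions), intended only to show the shape of the procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_inl(u_fs, max_quanta=250.0):
    # fourth-order polynomial nonlinearity with f(0) = f(u_fs) = 0,
    # scaled so its absolute maximum over the range equals max_quanta
    c = rng.uniform(-1.0, 1.0, 3)
    raw = lambda u: u * (u - u_fs) * np.polyval(c, u / u_fs)
    peak = np.max(np.abs(raw(np.linspace(0.0, u_fs, 1000))))
    return lambda u: raw(u) * max_quanta / peak

def run_trial(n=12, r_tol=0.01, noise_q=6.0, u_fs=10.0):
    f_lo = random_inl(u_fs / 2)                      # lower-range nonlinearity
    r = 1.0 + rng.uniform(-r_tol, r_tol, n)          # divider resistors, +/-1 %
    taps = u_fs * np.cumsum(r)[: n // 2] / r.sum()   # r1, r1+r2, ..., ~u_fs/2
    q = 2**24 / u_fs                                 # quanta per volt (24-bit)
    codes = taps * q + f_lo(taps) + rng.normal(0.0, noise_q, taps.size)
    fit = np.polyfit(taps, codes - taps * q, 4)      # correction function
    grid = np.linspace(taps[0], taps[-1], 200)
    return np.max(np.abs(np.polyval(fit, grid) - f_lo(grid)))

residuals = [run_trial() for _ in range(500)]
print(f"max residual error: {max(residuals):.1f} quanta")
```

Accumulating 500 such trials, as in the paper, yields the worst-case residual curve from which the noise-sensitivity coefficient can be read off.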
The curves of the residual error vs. the input voltage for the lower range, in the case of a resistor error of ±1 %, are presented in figure 3. Five randomly selected curves and the two curves with the maximum and minimum values among 500 curves are presented. As can be seen, the maximum value does not exceed 1.5 quanta in the worst case. This value is comparable with the ADC's resolution. The maximum residual error for resistors with an error of ±2 % does not exceed 3.5 quanta. This value is comparable with the ADC noise. The residual error was also evaluated for an ADC noise level of ±6 quanta. The results are presented in figure 4, again with five randomly selected curves and the two curves with the maximum and minimum values among 500 curves. As can be seen, the maximum value does not exceed 18 quanta in the worst case. Generally, these results are close to the results of the investigation of the basic method [6], but they correspond to the whole lower range of the ADC instead of the lower subrange as for the basic method. The form of the curves obtained for the other noise levels of the ADC is similar to the curves presented in figure 4. Figure 5 shows the dependence of the maximum value of the residual error for the lower range on the noise level. Taking into consideration the linear character of this dependence, we can conclude that the noise influence coefficient for the presented method does not exceed three for a 12-resistor voltage divider.

5. Conclusions
The investigation of the proposed method of integral nonlinearity identification and correction for the lower range of the ADC leads to the following conclusions:
- the number of generated testing points depends on the number of resistors in the voltage divider, and all of them are evenly distributed over the range; this provides the generation of testing points corresponding to the requirements of the actual standard for the metrological verification of ADCs with a smooth conversion characteristic [9];
- the influence of the resistor errors of the voltage divider on the residual error is comparable with the ADC's resolution and is negligible in comparison with the other errors;
- the influence of the ADC's random error on the residual error is dominant and proportional to its noise level, with a proportionality factor of three for a 12-resistor voltage divider.

The weak sensitivity of the proposed method to resistor errors and ADC noise provides the opportunity of its implementation in a metrological verification subsystem [10] for ADCs using a single-channel reference voltage source. The error of such a metrological verification is mainly defined by the error of the implemented reference voltage source; therefore, the metrological support of such a verification subsystem is reduced to the verification of the reference voltage source. This provides the opportunity to embed this metrological verification subsystem, together with its reference voltage source, into the ADC.

Figure 2. Simulated nonlinearity of the ADC. Figure 3. Residual error vs. input voltage for a resistor error of ±1 %. Figure 4. Residual error vs. input voltage for an ADC noise of ±6 quanta. Figure 5. Maximum residual error vs. ADC noise level.

The implementation of the proposed method demands special hardware and software.
The hardware consists of a multi-resistor divider, a multiplexer and a controller. The most complex operation of the software is the computation of a polynomial function with rational coefficients and an integer argument. Therefore it can be implemented on the microcontroller of the ADC or on an external data processing module. Using an external data processing module and external hardware allows the implementation of the metrological verification subsystem without access to the ADC's hardware or software. An example of a universal data processing module which corresponds to the requirements of the proposed method is presented in [11]. It is compatible with a wide set of serial peripheral interfaces for connecting the ADC and the mentioned hardware, and its computing power is sufficient to implement the proposed method in real time.

References
[1] K. Fowler, Part 7: analog-to-digital conversion in real-time systems, IEEE Instrumentation & Measurement Magazine, 2003, vol. 6, issue 3, pp. 58-64.
[2] W. Kester, Which ADC architecture is right for your application?, Analog Dialogue, 2005, vol. 39, no. 2, pp. 11-19. URL: http://www.analog.com/library/analogdialogue/archives/39-06/architecture.pdf
[3] 24-bit sigma-delta, signal conditioning ADC with 2 analog input channels, AD7714 data sheet. URL: http://www.analog.com/en/analog-to-digitalconverters/ad-converters/ad7714/products/product.html
[4] Substitution method for measurement of medium resistance. URL: http://www.myclassbook.org/substitution-method-formeasurement-of-medium-resistance/
[5] R. V. Kochan, ADC implementation for measurement using replacement method, Ukrainian Metrology Journal, Kharkiv, 2010, no. 3, pp. 11-16. (in Ukrainian)
[6] R. Kochan, O. Kochan, M. Chyrka, S. Jun, P. Bykovyy, Approaches of voltage divider development for metrology verification of ADC, Proc. of the 7th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS'2013), 12-14 September 2013, Berlin, Germany, pp. 70-75.
[7] Zhengming Wang, Dongyun Yi, Xiaojun Duan, Jing Yao, Defeng Gu, Measurement data modeling and parameter estimation, CRC Press, 2011, 553 p.
[8] Walt Kester, Data conversion handbook, Analog Devices, 2004, 952 p.
[9] GOST 30605-98, Interstate standard. Digital measurement converters of voltage and current. General technical conditions. Actual from 01.01.2004. Moscow: Standards Publishing House, 1998, 10 p. (in Russian)
[10] V. Sobolev, A. Sachenko, P. Daponte, O. Aumala, "Metrological automatic support in intelligent measurement systems", Computer Standards & Interfaces, Elsevier, vol. 24, no. 2, June 2002, pp. 123-131.
[11] V. Kochan, K. Lee, R. Kochan, A. Sachenko, "Approach to improving network capable application processor based on IEEE 1451 standard", Computer Standards & Interfaces, Elsevier, vol. 28, no. 2, December 2005, pp. 141-149.

Acta IMEKO, ISSN: 2221-870X, December 2015, volume 4, number 4, 3

Editorial
Paolo Carbone, University of Perugia, Italy

Section: Editorial
Citation: Paolo Carbone, Editorial, Acta IMEKO, vol. 4, no. 4, article 2, December 2015, identifier: IMEKO-ACTA-04 (2015)-04-02
Editor: Paolo Carbone, University of Perugia, Italy
Received December 20, 2015; in final form December 20, 2015; published December 2015
Copyright: © 2015 IMEKO.
This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Corresponding author: Paolo Carbone, email: paolo.carbone@unipg.it

Dear Reader, the other 6 papers published in this last issue of 2015 are related both to IMEKO TC1 and TC12 and to another scientific event that took place in Italy (the 2015 Italian conference on electrical and electronic measurements, organized by GMEE and GMMT). The last paper in this issue is a freely submitted paper. The paper by Crenna et al. covers the topic of measuring human movements in the expanding area of measurements for biomechanics. The authors present a disciplined approach to the assessment of a metrological procedure applied to the measurement of forces and kinematic quantities associated with human movements. The aim is to allow the reproducibility of results by controlling and estimating all major sources of uncertainty. The next paper, by Rainer Feistel, is a technical note about the thermodynamic role of water in the determination of climate changes. The role of salinity and relative humidity is of great importance in this case, and the metrological requirements for the determination of such quantities must be established more precisely if data are to be compared over decade-long periods of time. This note describes the activities carried out at the international level to promote the traceability of measurement results in this research area. The next paper, by Rolle et al., is a technical note covering again the topic of measurements for environmental purposes and, in particular, the problem of metrological traceability for the analysis of atmospheric pollutants. It describes the procedures and activities carried out in this scientific area by the Italian metrological institute, INRIM. They deal with the preparation of gaseous reference materials and with traceability issues for measuring particulate matter. The paper by Lancini et al. is related to the reliability of railway components and to the measurements needed for this purpose. Using vibration measurements, the authors show how to detect high wear rates in rolling contacts. The procedure and the experimental results are presented in depth, with a large level of detail. It is shown that it is possible to assess wear phenomena of wheel and rail steels by using vibrational analysis. D'Emilia et al. consider the uncertainty in the calibration of three-axis low-frequency accelerometers. They analyse both static and dynamic calibration up to 4 Hz and present both an analysis of the adopted mathematical model and experimental results. They identify the main sources of uncertainty and provide suggestions for an effective calibration of these sensors. J. Bongiorno and A. Mariscotti authored the last paper in this issue. It describes the techniques used to measure the rail-to-earth conductance and the insulating efficiency in railways. The application of IEC 62128-2 for performing such measurements is reported in the paper, and experimental results are also described. Several practical considerations are made, including a description of the effects of measurement uncertainty. This issue contains a very interesting set of papers offering an increasingly wide view of the possible applications of metrology and instrumentation.
We are looking forward to the issues in 2016, including the extended versions of papers presented at the last IMEKO World Congress held in Prague this year. Have a fruitful reading of this last issue of Acta IMEKO in 2015!

Acta IMEKO, ISSN: 2221-870X, July 2017, volume 6, number 2, 89-92

Improved evaluation of uncertainty for indirect measurement

Valeriy Didenko, Department of Information-Measuring Technique, National Research University "MPEI", Krasnokazarmennaya 14, 105568 Moscow, Russian Federation

Section: Research paper
Keywords: indirect measurement; accuracy specifications; measurement uncertainty
Citation: Valeriy Didenko, Improved evaluation of uncertainty for indirect measurement, Acta IMEKO, vol. 6, no. 2, article 16, July 2017, identifier: IMEKO-ACTA-06 (2017)-02-16
Section editor: Eric Benoit, University Savoie Mont Blanc, France
Received February 13, 2016; in final form June 30, 2017; published July 2017
Copyright: © 2017 IMEKO. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Valeriy Didenko, e-mail: didenkovi@mail.ru

Abstract: The paper gives formulae for the uncertainty evaluation of an indirect measurement based on direct measurements made by different types of measuring devices. The first type of measuring device has specifications of a total error (e.g. digital instruments), while the second type has specifications of offset, gain and linearity errors (e.g. analog-to-digital converters). The choice of a device range and the configuration of measuring circuits for decreasing the uncertainty are considered. The conversion of the specifications of the first type into the specifications of the second type is discussed.

1. Introduction
It is well known [1] that the result of an indirect measurement is a function of several variables:

$x = f(x_1, x_2, \dots, x_n)$. (1)

The values of these variables are found by direct measurements. The maximum possible absolute error of the indirect measurement, as a function of the maximum possible errors of the direct measurements ($\Delta x_1, \dots, \Delta x_n$), can be found from (1) approximately as [1]

$\Delta x = \left| \frac{\partial f}{\partial x_1} \right| \Delta x_1 + \dots + \left| \frac{\partial f}{\partial x_n} \right| \Delta x_n$. (2)

It is usually suggested that the accuracy of (2) is assessed by means of the higher-order derivatives in the Taylor expansion of $f$ [1]. Another way is to use simulation methods, applying changes of $x_1, \dots, x_n$ in (1). According to modern metrology, (2) can be interpreted as the uncertainty with a 100 % confidence level (the worst-case uncertainty) [2]. The worst-case uncertainty means that errors higher than those found by (2) are absent. In practice, the real maximum possible absolute error of the indirect measurement can be a little higher than the one found by (2). One reason for this fact has already been discussed (the errors of the Taylor expansion). Other reasons are elevated values of the errors of the direct measurements with regard to $\Delta x_1, \dots, \Delta x_n$. The worst-case method supposes that elevated errors of the indirect measurements with regard to (2) are negligible, for example, lower than five percent of (2). The result found by (2) is usually much higher than the real values. The reason is that all the errors of the direct measurements are treated as independent. An exception to the rule is given in [2]. All the direct measurements are supposed to use an analog-to-digital converter (ADC) with the same values of the maximum offset error $u_0$, the maximum gain error $u_g$, and the maximum linearity error $u_{inl}$. The first two errors are supposed to have the same sign (positive or negative), while the linearity error can change its sign for different direct measurements.
The uncertainty of the indirect measurements is found in accordance with [2], [3] for four cases: the standard uncertainty and the worst-case uncertainty (both absolute and relative). The absolute worst-case uncertainty (the maximum possible absolute error) of the indirect measurement with negligible quantization error is then [2]

$u(x) = u_0 \left| \sum_{i=1}^{n} k_i \right| + u_g \left| \sum_{i=1}^{n} k_i x_i \right| + u_{inl} \sum_{i=1}^{n} \left| k_i \right|$. (3)

The coefficients $k_i$ can have different signs. For example, the indirect measurement $x = x_1 - x_2$ gives $k_1 = -k_2 = 1$. Therefore, (3) can show a lower value than the one found by (2) for devices with the same errors for each direct measurement. A small numerical illustration is given below.
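The following sketch contrasts the naive worst case (2), which adds every direct-measurement error in full, with the shared-ADC formula (3), where the correlated offset and gain terms partially cancel; the specification values are the PCI-6250 10 V range figures quoted later in the text:

```python
def u_naive(u0, ug, uinl, xs):
    # (2): every direct-measurement error counts in full
    return sum(u0 + ug * abs(x) + uinl for x in xs)

def u_shared_adc(u0, ug, uinl, xs, ks):
    # (3): one ADC for all channels, so offset/gain terms carry the signs k_i
    return (u0 * abs(sum(ks))
            + ug * abs(sum(k * x for k, x in zip(ks, xs)))
            + uinl * sum(abs(k) for k in ks))

u0, ug, uinl = 2e-4, 6e-5, 6e-4       # offset (V), gain (rel.), INL (V)
xs, ks = [9.0, 10.0], [-1, 1]         # x = x2 - x1, both near full scale

print(u_naive(u0, ug, uinl, xs))           # ~2.74e-3 V
print(u_shared_adc(u0, ug, uinl, xs, ks))  # ~1.26e-3 V: offsets cancel
```

For this difference-type measurement, the offset contribution vanishes entirely because the coefficients sum to zero, which is exactly the effect that makes (3) tighter than (2).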
Direct measurements of $x_1, \dots, x_n$ are often performed by data acquisition devices. The three most popular sampling architectures are: multiplexed structures, simultaneous sample-and-hold structures and multi-ADC structures [4]. The corresponding structures are shown in figure 1. For the same speed of the used ADC, the multi-ADC architecture gives a higher scan rate per channel and is therefore preferable according to the recommendations in [4]. The following abbreviations are used in figure 1: MUX – multiplexer, AMP – instrumentation amplifier, ADC – analog-to-digital converter, SSH – simultaneous sample and hold. The advantage of the multiplexed architecture in comparison with the multi-ADC structure in terms of uncertainty is shown in section 3.

Figure 1. Simultaneous sampling architecture – simultaneous sample and hold (SSH).

Besides the values of the direct measurements, the values reproduced by material measures (standard electric resistors, standard signal generators, etc.) are also included in (1). For several direct measurements, the following variants are discussed in terms of uncertainty: the application of the same ADC in the same range or in different ranges; the choice of the same ADC or of different ADCs in the same range. For simplicity, several phenomena ignored in [2] (e.g. dynamic errors) are not considered here either. Additionally, the quantization error and noise are supposed to be negligible. All these approximations do not usually influence the main conclusions given in the paper.

2. Accuracy specifications for different types of measuring devices
Accuracy specifications of digital instruments (DI) are usually presented by the total (maximum) absolute error. As a first approximation, the maximum possible absolute measurement error for each variable $x_i$ found by a digital instrument is

$\Delta_{DI} x_i = a + b\,x_i$ (4)

where $a$ is a positive number with the same unit as $x_i$ and $b$ is a positive non-dimensional number. The maximum absolute error of a digital instrument as a function of an input signal $x$ is shown in figure 2. Two values of the input signal ($x_1$ and $x_2$) are considered, and the corresponding absolute errors ($\Delta_1$ and $\Delta_2$) are shown in figure 2. Sometimes $a$ is given in % of the full scale ($x_{fs.i}$) and $b$ in % of a reading.

Figure 2. Maximum absolute measurement error vs. input signal for digital instruments.

Accuracy specifications of ADCs and data acquisition (DA) devices are usually presented by the maximum offset, gain and linearity errors. The quantization error and noise (the random error) are usually included in $a$ and $b$ for digital instruments, but can be specified separately for other devices. For simplicity, we will not consider them in this paper. Then the maximum absolute error of the DA is

$\Delta_{DA} x_i = u_0 + u_g x_i + u_{inl}$. (5)

For example, (5) is used for finding the maximum absolute error of a single measurement in [5]. The maximum absolute error of the ADC as a function of the input signal $x$ is shown in figure 3. Positive errors are considered only as an example. Two values of the input signal ($x_1$ and $x_2$) are used. The linearity error is supposed to be zero at the ends of the range ($x = 0$ and $x = x_{fs}$) but equal to the maximum value $u_{inl}$, with any sign, at any other point. The linearity error is negative near $x_1$ and positive near $x_2$ (the worst-case method). Let us suppose that the errors $\Delta_1$ and $\Delta_2$ are the same for a digital instrument (figure 2) and an ADC (figure 3). Then the difference between the specifications of the two devices mentioned is not important for the evaluation of the direct measurement uncertainty. The situation changes dramatically if we investigate indirect measurements. Let us consider the simplest indirect measurement, $x = x_2 - x_1$. The uncertainty of the measurement for the digital instrument (figure 2) is $\Delta_{DI} x = \Delta_1 + \Delta_2$, while the uncertainty of the measurement for the ADC (figure 3) is $\Delta_{DA} x = \Delta_2 - \Delta_1$. It is clear from figures 2 and 3 that the second value can be much lower. Because of this result it is necessary to find ways to transform the specifications of digital instruments into the form specified for ADCs. If (4) and (5) give the same results, then $a$ and $b$ can be found for given $u_0, u_g, u_{inl}$ as

$a = u_0 + u_{inl}$, (6)
$b = u_g$. (7)

Let $a$ and $b$ be specified. If we consider $x = 0$, then $u_0 = a$. It is possible to find the following inequalities (see figure 2):

$u_g \le b + 2a/x_{fs}, \quad u_{inl} \le 2\left(a + b\,x_{fs}\right)$, (8)

where $x_{fs}$ is the full scale of the DI. The evaluation of the linearity error by (8) is usually much higher than the true value. Therefore (8) does not give any advantage for the calculation of the uncertainty of indirect measurements in comparison with the initial equation (4). Fortunately, some digital instruments have an additional specification of the linearity error. For example, the linearity error is specified in [6] as

$u_{inl} = a_l\,x + b_l\,x_{fs}$, (9)

where $a_l$ and $b_l$ are positive non-dimensional numbers. Now the maximum possible absolute error of the digital instrument can be written in practically the same way as it was given for ADCs or data acquisition devices:

$\Delta_{DI} x_i = a + b\,x_i + u_{inl}$. (10)

In accordance with figure 3, $u_{inl}$ is supposed to be equal to zero at the ends of the scale, but can produce both positive and negative errors at any other point. The signs of the errors produced by $a$ and $b$ are constant for the given indirect measurement. The linearity error found by (9) is approximately 10 times less than the one found by (8) for the instrument 3401A [6]. If the linearity error of the DI is not specified, then it can be found approximately from experiment [7]. The accuracy of material measures (standard electric resistors, standard signal generators, etc.) can be specified by both (4) and (5). For example, the accuracy of voltage calibrators is usually specified as (4), while data acquisition devices with analog output [5] have specifications (5). It means that all the following theory can be used both for the results of the direct measurements and for the quantities reproduced by material measures. A sketch of this specification conversion follows.
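The snippet below illustrates the exact conversion (6)-(7): folding ADC-style specifications into a total-error specification makes (4) and (5) coincide for every input. The numeric values reuse the PCI-6250 figures quoted from [5]; the function names are of course illustrative only:

```python
def adc_to_di(u0, ug, uinl):
    # (6), (7): a = u0 + uinl, b = ug
    return u0 + uinl, ug

def di_error(a, b, x):
    # (4): total-error model of a digital instrument
    return a + b * abs(x)

def adc_error(u0, ug, uinl, x):
    # (5): offset/gain/linearity model of an ADC or DAQ device
    return u0 + ug * abs(x) + uinl

u0, ug, uinl = 2e-4, 6e-5, 6e-4   # 10 V range specifications from [5]
a, b = adc_to_di(u0, ug, uinl)
x = 7.0
print(di_error(a, b, x), adc_error(u0, ug, uinl, x))  # identical by design
```

The inverse direction is the harder one: as the text notes, recovering $u_g$ and $u_{inl}$ from $a$ and $b$ via (8) only bounds them, usually very loosely.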
3. General formulae for indirect measurement uncertainty
The $n$ direct measurements used for the indirect measurement can be divided into three parts: the first $n_1$ measurements are made by one shared device with the specifications $u_0$, $u_g$, $u_{inl}$; the next $n_2$ measurements are made by devices with individual specifications $u_{0.i}$, $u_{g.i}$, $u_{inl.i}$; and the remaining measurements are made by devices with the total-error specifications $a_i$, $b_i$. Then the absolute worst-case uncertainty of the indirect measurement is

$u(x) = u_0 \left| \sum_{i=1}^{n_1} k_i \right| + u_g \left| \sum_{i=1}^{n_1} k_i x_i \right| + u_{inl} \sum_{i=1}^{n_1} \left| k_i \right| + \sum_{i=n_1+1}^{n_1+n_2} \left( u_{0.i} + u_{g.i} x_i + u_{inl.i} \right) \left| k_i \right| + \sum_{i=n_1+n_2+1}^{n} \left( a_i + b_i x_i \right) \left| k_i \right|$. (11)

The simplest indirect measurement is described by the function $x = x_2 - x_1$, where $k_1 = -k_2 = -1$. The absolute worst-case uncertainty of the indirect measurement when one device is used in the same range ($n = n_1 = 2$) and $x_2 \ge x_1 \ge 0$, in accordance with (11), is

$u_I(x) = u_g \left( x_2 - x_1 \right) + 2 u_{inl}$. (12)

The absolute worst-case uncertainty of the indirect measurement when two devices of the same type are applied in the same range ($n = n_2 = 2$, $u_{0.1} = u_{0.2} = u_0$, $u_{g.1} = u_{g.2} = u_g$, $u_{inl.1} = u_{inl.2} = u_{inl}$), in accordance with (11), is

$u_{II}(x) = 2 u_0 + u_g \left( x_1 + x_2 \right) + 2 u_{inl}$. (13)

The maximum difference between the results found by (12) and (13) occurs at $x_1 \approx x_2 \approx x_{fs}$:

$\frac{u_{II}(x)}{u_I(x)} = 1 + \frac{u_0 + u_g x_{fs}}{u_{inl}}$. (14)

Let us use (14) to compare the uncertainties of the multiplexed and multi-ADC structures (discussed in section 1) for the implementation of the function $x = x_2 - x_1$.

Figure 3. Maximum absolute error vs. input signal for an ADC.

The model PCI-6250, used in the multiplexed structure, includes an ADC with the following parameters [5]: $x_{fs} = 10$ V, $u_0 = 2 \cdot 10^{-4}$ V, $u_g = 6 \cdot 10^{-5}$, $u_{inl} = 6 \cdot 10^{-4}$ V. If we use the same ADC in the multi-ADC structure, then, according to (14), the worst-case uncertainty is 2.3 times higher. It means that the recommendations in [4] can be untrue for the uncertainty of the indirect measurement. If $x_1 \ll x_2$, then two ranges can be used, one for each direct measurement. In this case, the absolute worst-case uncertainty of the indirect measurement $x = x_2 - x_1$ is

$u_{III}(x) = u_{0.1} + u_{0.2} + u_{g.1} x_1 + u_{g.2} x_2 + u_{inl.1} + u_{inl.2}$. (15)

The application of one range for both direct measurements will be better if (12) gives a lower result in comparison with (15). Let us consider the example of the PCI-6250 [5] used for the indirect measurement $x = x_2 - x_1$ with $x_1 \approx 5$ V and $x_2 \approx 10$ V. The corresponding specifications for $x_1 \approx x_{fs.1} = 5$ V are $u_{0.1} = 1 \cdot 10^{-4}$ V, $u_{g.1} = 7 \cdot 10^{-5}$ and $u_{inl.1} = 3 \cdot 10^{-4}$ V. The specifications for $x_2 \approx x_{fs.2} = 10$ V were given before. Using the PCI-6250 in the 10 V range only, we get from (12) that $u_I(x) = 1.5$ mV. If we use two ranges (5 V for $x_1$ and 10 V for $x_2$), the result is $u_{III}(x) = 2.15$ mV. It means that the application of only one range gives a 1.4 times better result, and the well-known recommendation to use the lowest range (5 V in this example) is valid only for the uncertainty of the direct measurements but can be incorrect for the uncertainty of the indirect measurements. A numerical check of these figures is sketched below.
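The following short sketch reproduces the two quoted values from equations (12) and (15), using the PCI-6250 specifications given in the text:

```python
def u_one_range(ug, uinl, x1, x2):
    # (12): one device, one range; the offset cancels in x2 - x1
    return ug * (x2 - x1) + 2 * uinl

def u_two_ranges(s1, s2, x1, x2):
    # (15): independent ranges, nothing cancels
    (u01, ug1, uinl1), (u02, ug2, uinl2) = s1, s2
    return u01 + u02 + ug1 * x1 + ug2 * x2 + uinl1 + uinl2

spec_10v = (2e-4, 6e-5, 6e-4)   # u0 [V], ug, uinl [V], 10 V range
spec_5v  = (1e-4, 7e-5, 3e-4)   # same quantities, 5 V range

print(u_one_range(6e-5, 6e-4, 5.0, 10.0))          # 1.5e-3 V = 1.5 mV
print(u_two_ranges(spec_5v, spec_10v, 5.0, 10.0))  # 2.15e-3 V = 2.15 mV
```

Running it confirms the counter-intuitive conclusion: the single 10 V range beats the "use the lowest range" rule for this difference measurement.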
4. Conclusions
There are two main types of specifications for measuring devices: the maximum possible total error (4), and the maximum offset, gain and linearity errors (5). The inequalities (8) are offered to find the maximum gain and linearity errors approximately. The result of the evaluation of the linearity error by (8) can be much higher than the true value. If the linearity error is specified in addition to (4), then the gain error can be found from (4) by (8). General formulae for the absolute (11) and the relative worst-case uncertainties of the indirect measurement are found as functions of three groups of variables. These groups cover the application, for the indirect measurement, of one or several devices, in one or several ranges, with the specifications of the total error or of the maximum offset, gain and linearity errors separately. Only the first part of (11) can give a lower value of the uncertainty for some types of indirect measurements in comparison with the approach (2). The formulae published before [1], [2] are special cases of (11). The second part of (11) was not discussed in [1], [2]. Formula (11) can also be used to take into account the application of material measures (standard electric resistors, standard signal generators, etc.) for indirect measurements. Applications of the offered approaches are given in section 3. The advantage of the multiplexed structure vs. the multi-ADC structure from the uncertainty point of view is shown, though the multi-ADC structure is better from the dynamics point of view [4]. Conditions for the choice of one range of a device instead of two ranges for two direct measurements are found from the uncertainty point of view.

References
[1] M. Sedlacek, V. Haasz, "Electrical measurement and instrumentation", Vydavatelstvi CVUT, Prague, 1996.
[2] F. Attivissimo, N. Giaquinto, M. Savino, "Worst-case uncertainty measurement in ADC-based instruments", Computer Standards & Interfaces, 29 (2007), pp. 5-7.
[3] ISO, Guide to the expression of uncertainty in measurement, 1st edition 1993, 1995.
[4] Simultaneous sampling data acquisition architectures, National Instruments, publish date 2010.
[5] High-speed data acquisition, National Instruments, 2011.
[6] Agilent Technologies, User's guide, 3401A.
[7] V. Didenko, A. Minin, A. Movchan, "Polynomial and piece-wise linear approximation of smart transducer errors", Measurement, 31 (2002), pp. 61-69.
Predicting and monitoring blood glucose through nutritional factors in type 1 diabetes by artificial neural networks

Acta IMEKO, ISSN: 2221-870X, June 2023, volume 12, number 2, 1-7

Giovanni Annuzzi 1, Lutgarda Bozzetto 1, Andrea Cataldo 2, Sabatina Criscuolo 3, Marisa Pesola 3
1 Department of Clinical Medicine and Surgery, University of Naples Federico II, 80125 Naples, Italy
2 Department of Engineering for Innovation, University of Salento, 73100 Lecce, Italy
3 Department of Electrical Engineering and Information Technology (DIETI), University of Naples Federico II, 80125 Naples, Italy

Section: Research paper
Keywords: artificial intelligence; monitoring; nutritional factors; postprandial glucose response; type 1 diabetes
Citation: Giovanni Annuzzi, Lutgarda Bozzetto, Andrea Cataldo, Sabatina Criscuolo, Marisa Pesola, Predicting and monitoring blood glucose through nutritional factors in type 1 diabetes by artificial neural networks, Acta IMEKO, vol. 12, no. 2, article 7, June 2023, identifier: IMEKO-ACTA-12 (2023)-02-07
Section editor: Francesco Lamonaca, University of Calabria, Italy
Received January 23, 2023; in final form February 28, 2023; published June 2023
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was carried out as part of the 'AI4PG' project, under the FRA - Finanziamento per la Ricerca di Ateneo initiative, financially supported by the University of Naples Federico II. This work was also supported by the PNRR DM 351/2022 - M4C1, by the European Union FSE-REACT-EU, PON Research and Innovation 2014-2020, DM 1061/2021, contract number DOT19X7NYL-2.
Corresponding author: Andrea Cataldo, e-mail: andrea.cataldo@unisalento.it

Abstract: The monitoring and management of the postprandial glucose response (PGR), by administering an insulin bolus before meals, is a crucial issue in type 1 diabetes (T1D) patients. The artificial pancreas (AP), which combines autonomous insulin delivery and a blood glucose sensor, is a promising solution; nevertheless, it still requires input from patients about meal carbohydrate intake for bolus administration. This is due to the limited knowledge of the factors that influence PGR. Even though meal carbohydrates are regarded as the major factor influencing PGR, medical experience suggests that other nutritional factors should be considered. To address this issue, in this work we propose a machine learning (ML)-based approach for a more comprehensive analysis of the impact of nutritional factors (i.e., carbohydrates, protein, lipids, fiber, and energy intake) on blood glucose levels (BGLs). In particular, the proposed ML model takes into account BGLs, insulin doses, and nutritional factors in T1D patients to predict BGLs in 60-minute time windows after a meal. A feed-forward neural network was fed with different combinations of BGLs, insulin, and nutritional factors, providing a predicted glycaemia curve as output. The validity of the proposed system was demonstrated through tests on public data and on self-produced data, adopting intra- and inter-subject approaches. The results suggest that patient-specific data about the nutritional factors of a meal have a major role in the prediction of postprandial BGLs.

1. Introduction
Type 1 diabetes (T1D) is an autoimmune condition in which the immune system attacks and destroys the pancreatic cells (β cells) that produce insulin [1]. Recent epidemiological research (e.g., [2], [3]) estimates that the worldwide prevalence of T1D is 9.5 per ten thousand people. Along with regular exogenous insulin injections, T1D patients have to lead a healthy lifestyle and carefully manage their blood sugar levels to prevent complications such as hypoglycaemia and hyperglycaemia [4]. In particular, the management of the postprandial glucose response is a major issue for T1D patients [5]. The technological advances in healthcare and the progress in wearable devices [6], [7] have led to the development of the artificial pancreas (AP) [8], a closed-loop system that combines continuous glucose monitoring (CGM) and a control algorithm based on heuristics and theoretical knowledge, automating the insulin release via an insulin pump [9]. While the ideal goal is to design a fully closed-loop system, actual clinical scenarios depend on several physiological factors, e.g., delays in insulin assimilation. At the present time, only hybrid closed-loop systems (HCLSs) are available for medical practice. Although basal insulin is automatically delivered with little to no issue, these systems are unable to adequately manage the postprandial response, so the patient is forced to manually set up the pre-prandial dose of insulin [10].
Thus, a crucial part of HCLS devices is the algorithm responsible for maintaining the blood glucose levels within a safe range. Several control strategies have been designed and presented in the literature, spanning from proportional-integral-derivative control to fuzzy logic control [11]. Nonetheless, the postprandial glucose response (PGR) still remains a major open issue in APs [12]. Research on this topic has focused mostly on carbohydrate intake, ignoring additional aspects relative to mealtime, such as other nutritional factors (lipids, proteins, etc.) or the psycho-physiological status [13], [14]. Artificial intelligence (AI), and in particular machine learning (ML), is increasingly providing new opportunities in AP design by boosting the extraction of information from big biological data [15]-[17]. An example of promising AI-based strategies for the AP is offered by artificial neural networks (ANNs), which aim to detect hypo- and hyperglycaemia events early and to consequently enhance insulin administration [18]. However, it is worth pointing out that, at the state of the art, even these ANN models considered mostly carbohydrates, without taking into account other nutritional factors [19]. As a matter of fact, the nutritional properties of meals can impact the blood glucose level (BGL), significantly affecting the PGR. For instance, clinical trials have demonstrated that high-fat/protein meals require more insulin than lower-fat/protein meals with identical carbohydrate content. Hence, the design of models based on meal composition, rather than just carbohydrate intake, seems significant [20]. Starting from these considerations, a study of the impact of nutritional factors over the 60 minutes (min) after a meal was conducted by ML methods.
in particular, the effect of nutritional factors such as carbohydrates, proteins, lipids, fibers, and meal energy intake on the postprandial blood glucose response was analysed. relying on the models presented in [21] and [22], we propose a model able to predict the glycaemic curve over the 60 minutes after the meal by considering the nutritional factors. more in detail, the impact of the nutritional factors was investigated by feeding the model with different combinations of bgls, insulin doses, and nutritional factors, and by validating the model on both public and self-produced data.
the paper is organised as follows. section 2 gives an overview of the state of the art of ml solutions for managing the postprandial (after-meal) blood glucose response. section 3 describes the datasets employed and the proposed method. the experimental setup is reported in section 4. section 5 and section 6 show results and discussion, respectively. finally, the concluding section summarises the key points of the work and outlines future steps.

2. related work
ml has gained increasing attention in several research fields, and especially in health-related tasks [23]-[27]. among ml techniques, the use of anns for blood glucose prediction has been investigated in several studies [28]-[35], using data from real t1d patients and from virtual patients [36], such as those obtained with the uva/padova simulator [37]. this tool allows the generation of virtual subjects through complex physiological models, enabling users to control the experimental parameters. nonetheless, due to the difficulty of tuning its elements and interpreting the results, actual data from real patients are sometimes preferred for the examination of specific realistic scenarios.
for example, in [33] the authors analysed the performance of a feed-forward neural network (ffnn) model for real-time glucose prediction with a prediction horizon (ph) of 75 min. the ffnn was trained on a set of cgm values collected from 17 patients. overall, the reported root-mean-square error (rmse) was (43.9 ± 6.5) mg/dl. in [34], a glucose prediction algorithm that combines cgm readings and information on carbohydrate intake was proposed and tested on both virtual patients and real datasets. for a ph of 30 min, the rmse on simulated and real data was (14.0 ± 4.1) mg/dl and (9.4 ± 1.5) mg/dl, respectively. in [35], a multilayer convolutional neural network (cnn) followed by a recurrent neural network with long short-term memory (lstm) cells was investigated for blood glucose prediction with phs of 30 and 60 min. the study was conducted on both virtual patients and real t1d patients with cgm sensors, achieving rmse results of (21.07 ± 2.35) mg/dl (ph = 30 min) and (33.27 ± 4.79) mg/dl (ph = 60 min) on real data. in [21], the authors proposed a ffnn model whose input was a 30-min sliding window across the blood glucose values together with eight associated statistics (i.e., minimum, maximum, mean, standard deviation, difference between highest and lowest, median, kurtosis, and skewness). the obtained rmse was (2.82 ± 1.00) mg/dl, (6.31 ± 2.43) mg/dl, (10.65 ± 3.87) mg/dl, and (15.33 ± 5.88) mg/dl for phs of 15, 30, 45, and 60 min, respectively. however, information about nutritional factors was not considered in the model.
recently, some studies have considered the nutritional factors of the meal as an important input of the neural networks [22], [38], [39] when forecasting post-meal glycaemia values. in this regard, in [22] a ffnn predicting post-prandial blood glucose values every 2 min up to 4 hours was designed, using meal information in two ways. in the first, raw nutrient quantities were used: the network was fed with carbohydrates (g), lipids (g), fibers (g), insulin amount (pmol/1000) and bgl (mmol/l). in the second, a bio-inspired model of the glucose absorption curve was used: numerical parameters such as the time elapsed to the peak of the curve, the time elapsed to 50 % of the peak, and the rate of absorption at the maximum of the curve were calculated, and these curve characteristics were then exploited to train the network along with the insulin amount and bgl. the authors showed that better performance was achieved when the absorption model was integrated, with an average rmse of 1.12 mmol/l (ph = 60 min), compared with 1.816 mmol/l for the first approach. nevertheless, among the involved subjects, only one was a t1d patient.
it is worth noting that the previously reported studies do not focus on the impact of nutritional factors in blood glucose prediction. considering this, a study of the impact of nutritional factors over a 60-min time window after the meal was conducted by ml methods.

3. dataset characteristics and method
3.1. datasets
direcnet dataset - direcnet is a public dataset of cgm measurements collected by the jaeb center for health research [40]. it includes data from child patients with t1d wearing the medtronic minimed guardian-rt, a hcls device that recorded glucose values at 5-min intervals. the dataset contains cgm data from 50 patients, aged 3 to 7 or 12 to 18 years, with t1d for more than 1 year. for approximately 7 days, blood glucose data were continually collected every 5 minutes.
ai4pg dataset - the ai4pg dataset, provided by the federico ii university hospital (naples, italy), includes information from 25 t1d patients wearing the hcls medtronic minimed 670g system [41]. the dataset reports data on meals, insulin doses, and cgm measurements over 6-7 days. the subjects' age was (40 ± 12) years, with a diabetes duration of (15 ± 12) years. patients completed food diaries with information about their meals for 7 days, yielding a dataset of 1264 meals (breakfasts, lunches, dinners) represented as time series of pre- and postprandial glycaemic levels (mg/dl). the dataset includes details about the manual boluses (mbs) administered at mealtime based on the carbohydrate intake of the meal, and an estimate of the carbohydrates (g), lipids (g), proteins (g), fibers (g) and energy intake (kcal) associated with each meal. the glycaemia levels from cgm were reported every 5 min, from 30 min before the meal to 60 min after the meal. the data were collected with informed consent from eligible subjects, and the protocol was approved by the ethical committee of federico ii university.

3.2. proposed method
this study proposes the prediction of post-prandial glycaemia in t1d patients over one hour after the meal, using a ffnn model [42]. the approach taken was inspired by the findings reported in [21] and [22], choosing as inputs a 30-min window of blood glucose values and 8 associated statistical features.
more in detail, the minimum, maximum, mean, standard deviation, peak-to-peak difference, median, kurtosis, and skewness were calculated on the glycaemia values. the output is the entire glycaemic curve from 5 min to 60 min after the meal, namely 12 output neurons corresponding to these values sampled every 5 minutes. prediction performance was then evaluated on all 12 output values simultaneously. the number of hidden layers and neurons of the ffnn was set by a grid search strategy.
to evaluate the performance of the proposed system in predicting blood glucose levels, a preliminary experiment using direcnet data was conducted. subsequently, the system was applied to the self-produced ai4pg dataset. since the goal was to examine how nutritional parameters affect the postprandial glycaemic response, nine different input configurations were tested:
● #1 only glycaemia: the model took as input the blood glucose levels (mg/dl) from 30 min before the meal until mealtime, every 5 min, and the statistical features (minimum, maximum, mean, standard deviation, difference between highest and lowest, median, kurtosis, and skewness) calculated on the glycaemia values mentioned above.
● #2 insulin, no nutritional factor: in addition to the glycaemia values and associated statistics, the network also took as input the insulin bolus mb (mmol/l).
● single-nutritional-factor scenarios: in these scenarios, the inputs were the glycaemia values, statistical features, mb, and a single nutritional factor among the following: #3 carbohydrates (g), #4 proteins (g), #5 fibers (g), #6 lipids (g), and #7 energy intake (kcal) associated with each meal.
● #8 insulin, all nutritional factors: the model was supplied simultaneously with the glycaemia values, statistical attributes, insulin bolus and all nutritional factors.
● #9 no insulin, all nutritional factors: as the previous one but without the insulin bolus; the model exploited the glycaemia values, statistical features, and all nutritional factors.
the outputs are the 5-min-step values of the blood glucose curve over the 60 min after the meal. the root mean square error (rmse), defined by equation (1), was used to evaluate the prediction performance:

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{p=1}^{N}\left(\frac{1}{T}\sum_{m=1}^{T}\left(y_{p}-y_{m}\right)^{2}\right)} \,, \qquad (1)$$

where $y_m$ and $y_p$ represent the measured and predicted bgls at the same instant of time, respectively; $T$ is the number of time steps, while $N$ is the total number of blood glucose measurements in the dataset.

4. experiments
this section describes the conducted experiments, reporting the pre-processing and the experimental setup. a set of preliminary experiments was carried out on the public direcnet dataset to validate the proposed ml system on the bgl prediction task. then, as the goal of this study was to analyse the impact of nutritional factors on the bgl prediction capability, the system was used on the ai4pg dataset. figure 1 illustrates the key steps of the proposed pipeline, which are detailed below.
figure 1. proposed pipeline. the data pre-processing stage includes the savitzky-golay filtering step and statistical feature calculation. then, the dataset is split into training, validation, and test sets, and scaled using the min-max scaler strategy. model hyperparameters are tuned by a grid search strategy. finally, the best model is selected.
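equation (1) averages the squared prediction error over the 12 output time steps of each meal and then over all meals. as a minimal sketch of one reading of this metric (the function name and the (n meals × 12 samples) array layout are our assumptions, not part of the original implementation):

```python
import numpy as np

def rmse(y_measured, y_predicted):
    """equation (1): mean squared error over the t = 12 output time
    steps of each meal, averaged over the n meals, then square-rooted.
    both arrays are assumed to have shape (n, t)."""
    per_meal_mse = np.mean((y_predicted - y_measured) ** 2, axis=1)
    return np.sqrt(np.mean(per_meal_mse))

# toy usage: 2 meals, 12 postprandial samples each, constant 5 mg/dl error
y_true = np.full((2, 12), 120.0)
y_pred = y_true + 5.0
print(rmse(y_true, y_pred))  # 5.0
```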
4.1. pre-processing
for both the public direcnet and the self-produced ai4pg data, the pre-processing procedure involved filtering and scaling the data. more in detail, from the direcnet dataset, the cgm data (mg/dl) of 12 patients were considered. based on [43], [44], to clean the data from noise and thus improve performance, the data were filtered with the savitzky-golay technique [45], considering a first-order polynomial with a 15-step filtering window on the bgls. then, the model input was constructed using a 30-minute sliding window across the blood glucose data. moreover, as mentioned, statistical attributes were calculated on each window of bgl data and added as inputs: minimum, maximum, mean, standard deviation, difference between highest and lowest values, median, kurtosis and skewness.
as for the self-produced ai4pg data, we considered the blood glucose values (mg/dl), the mb (mmol/l), and the nutritional factors, i.e. the energy intake (kcal) of the meal, proteins (g), carbohydrates (g), lipids (g), and fibers (g). data from 15 patients were used, for a total of 1036 meal records. subsequently, the bgls were pre-processed by a savitzky-golay filter using a first-order polynomial and a 15-step filtering window. the input of the ffnn was therefore composed of the glycaemic values from 30 min before the meal until mealtime, every 5 minutes; in other words, 7 blood glucose values are given in parallel as inputs to the model. in addition, as previously discussed, the 8 statistics computed on the pre-prandial bgls were used as input.
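the filtering and feature-construction steps just described can be sketched as follows. this is an approximation under stated assumptions: the function and variable names are ours, and the exact window alignment of the original code is not documented.

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.stats import kurtosis, skew

def build_meal_input(cgm, meal_idx):
    """smooth a 5-min cgm series with a savitzky-golay filter
    (first-order polynomial, 15-sample window, as in the text) and
    return the 15 model inputs for one meal: the 7 glycaemia values
    from 30 min before the meal up to mealtime, plus the 8 window
    statistics used in the paper."""
    smooth = savgol_filter(cgm, window_length=15, polyorder=1)
    window = smooth[meal_idx - 6 : meal_idx + 1]  # 7 samples = 30 min
    stats = np.array([window.min(), window.max(), window.mean(),
                      window.std(),
                      window.max() - window.min(),  # highest minus lowest
                      np.median(window),
                      kurtosis(window), skew(window)])
    return np.concatenate([window, stats])
```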
4.2. experimental setup
a feed-forward neural network (ffnn) was investigated as a predictor of postprandial blood glucose levels over a 60-min time window from the meal. in order to set the optimal hyperparameters for the proposed ffnn model, a grid search strategy was implemented, considering the search spaces reported in table 1. in particular, discrete ranges of values were chosen for the number of hidden layers (from 1 to 3), the number of neurons per layer (powers of two) and the learning rate; these values fall within the ranges adopted in previous studies.

table 1. search space adopted during the grid search.
hyperparameter | search space
number of hidden layers | [1, 2, 3]
number of neurons | [32, 64, 128, 256]
learning rate | [0.0001, 0.0005, 0.001, 0.005, 0.01]

regardless of the single configuration, the relu activation function [46] was chosen for each hidden layer, the regularization term (l2 penalty) with weight decay parameter was set to 0.0001, and stochastic gradient descent (sgd) [47] was used as the optimization algorithm. the epochs were set to a maximum of 1000 with a patience of 20 [42].
for the experiments, we exploited intra-subjective and inter-subjective approaches on both direcnet and ai4pg data. the intra-subjective approach accounts for the inherent physiological variability among individuals, examining one patient's data at a time and conducting a customized investigation to obtain more accurate considerations. conversely, in the inter-subjective approach, data from all patients are pooled together to achieve more universal findings.
more in detail, for the intra-subjective approach, a different model was built for each patient. to validate the method, a hold-out validation strategy was performed by splitting the dataset into training, validation, and test sets: the training set contained 70 % of the data, 10 % was used for validation, and 20 % for testing. all data were scaled using min-max scaling, computing the minimum and maximum on the training data. instead, for the inter-subjective case, the data of all patients were used to build a single model, to investigate whether a model trained on data from different subjects can generalize to new data. to validate the method, a 5-fold cross-validation (cv) was performed. cv is a technique used in ml to better evaluate the performance of a predictive model on a limited dataset [47]. generally, in k-fold cv, the data are divided into k equal-sized folds; the model is trained on k-1 of these folds and evaluated on the remaining fold, the process is repeated k times, and the performance is averaged across all k folds. in this work, for each of the 5 iterations, a portion of the training data was used as the validation set, according to a 70 %/10 % splitting. a min-max scaler was applied considering the minimum and maximum values of the training data. rmse was used for model evaluation on the test set.
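the setup above maps naturally onto off-the-shelf tools. the sketch below reproduces the table 1 search space with scikit-learn; it is an illustration under our assumptions (the original implementation is not specified in the paper, and the same width is assumed for every hidden layer), with the early-stopping validation split handled internally rather than through the explicit 70/10/20 split.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# table 1: 1-3 hidden layers, widths as powers of two, five learning rates
layer_options = [(w,) * depth for depth in (1, 2, 3)
                 for w in (32, 64, 128, 256)]

pipeline = make_pipeline(
    MinMaxScaler(),                     # min-max scaling fitted on training data
    MLPRegressor(activation="relu",     # relu on every hidden layer
                 solver="sgd",          # stochastic gradient descent
                 alpha=1e-4,            # l2 weight decay, as in the text
                 max_iter=1000,         # at most 1000 epochs
                 early_stopping=True,   # internal 10 % validation split
                 n_iter_no_change=20))  # patience of 20

search = GridSearchCV(
    pipeline,
    param_grid={
        "mlpregressor__hidden_layer_sizes": layer_options,
        "mlpregressor__learning_rate_init": [1e-4, 5e-4, 1e-3, 5e-3, 1e-2],
    },
    scoring="neg_root_mean_squared_error",
    cv=5)                               # 5-fold cv, as in the inter-subject case
# search.fit(X, y)  # X: the 15 inputs per meal, y: the 12 postprandial bgls
```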
5. results
in this section, the experimental results for both the intra- and inter-subjective approaches are reported.
direcnet dataset - the rmse between actual and predicted values every 5 minutes over a 60-min time window was calculated, and the mean and standard deviation are reported in table 2. in this case, the predictions were based only on bgls, as no information about insulin doses and meals is available. in the intra-subjective approach, the rmse is averaged over all the considered patients, whereas in the inter-subjective case it is averaged over the 5 folds. as observed, the results are similar in the two approaches, with the main difference in the standard deviation, which is greater in the intra-subjective case due to the high performance variability across patients. instead, since each fold contains data from different patients, the variability of performance between folds was minimal in the inter-subjective case.

table 2. mean rmse with standard deviation for bgl prediction in the intra-subjective and inter-subjective approaches on the direcnet dataset.
approach | mean ± std (mg/dl)
intra-subjective | 11.4 ± 3.3
inter-subjective | 11.8 ± 0.9

ai4pg dataset - in order to evaluate the impact of nutritional factors and insulin doses on bgl prediction, a statistical paired t-test with significance level α of 0.05 was exploited. in particular, the t-test was used to compare the #1 only glycaemia results with the other scenarios, and statistical significance was interpreted through the p-value. as can be seen from the results reported in table 3, statistical significance was obtained when the ffnn was fed with the nutritional factors individually. as expected, carbohydrates are the factor with the greatest impact on blood glucose prediction [48] in the first 60 minutes after the meal, but other factors such as proteins, fibers, lipids, and energy intake also played a key role. instead, for the inter-subjective approach shown in table 4, no statistical significance was found, since the p-value was always greater than the significance level α, reflecting the need to model inter-individual variability. as a matter of fact, clinical studies have shown a significant role of individual characteristics in postprandial glucose [49].

table 3. experimental results in the intra-subjective approach on the ai4pg dataset. the t-test p-values between the #1 only glycaemia scenario and the other ones are also reported; p-values below the significance level α are marked with an asterisk.
scenario | mean ± std (mg/dl) | p-value
#1 only glycaemia | 13.3 ± 0.5 | -
#2 insulin, no nutritional factor | 13.1 ± 0.6 | 0.3
#3 carbohydrates | 12.9 ± 0.4 | 0.006*
#4 proteins | 12.8 ± 0.5 | 0.004*
#5 fibers | 12.8 ± 0.4 | 0.003*
#6 lipids | 12.8 ± 0.5 | 0.01*
#7 energy | 12.9 ± 0.5 | 0.04*
#8 insulin, all nutritional factors | 13.0 ± 0.6 | 0.06
#9 no insulin, all nutritional factors | 13.3 ± 0.4 | 0.6

table 4. experimental results in the inter-subjective approach on the ai4pg dataset.
scenario | mean ± std (mg/dl)
#1 only glycaemia | 13.7 ± 0.4
#2 insulin, no nutritional factor | 13.5 ± 0.7
#3 carbohydrates | 13.4 ± 0.6
#4 proteins | 13.5 ± 0.7
#5 fibers | 13.3 ± 0.4
#6 lipids | 12.3 ± 0.2
#7 energy | 13.4 ± 0.8
#8 insulin, all nutritional factors | 13.3 ± 0.7
#9 no insulin, all nutritional factors | 13.4 ± 0.2
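the comparison reported in table 3 is a paired test between the per-run rmse values of scenario #1 and those of a candidate scenario. a minimal scipy sketch follows; the numbers are placeholders for illustration, not the study's actual per-run values.

```python
from scipy.stats import ttest_rel

# paired rmse values (mg/dl) of the baseline (#1 only glycaemia) and of
# a candidate scenario, matched run by run; placeholder numbers
rmse_baseline = [13.5, 12.9, 13.8, 13.1, 13.2]
rmse_scenario = [13.0, 12.5, 13.1, 12.8, 12.7]

t_stat, p_value = ttest_rel(rmse_baseline, rmse_scenario)
print(p_value < 0.05)  # true if the difference is significant at alpha = 0.05
```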
6. discussion
the goal of this study was to investigate the influence of nutritional factors on bgl forecasts. to this end, a set of preliminary experiments on the public direcnet dataset allowed us to verify the capability of the proposed ml model in bgl prediction with respect to the literature. however, the role of nutritional factors, which could help to achieve more effective bgl predictors, is not widely investigated in the literature.
this study showed, for the intra-subjective case on ai4pg data, that not only carbohydrates but also other nutritional factors such as proteins, lipids, fibers, and the caloric intake of the meal have an impact on the prediction of bgl over a 60-min time window after a meal. it is interesting to note that considering all the nutritional factors simultaneously has a smaller effect on performance than using just one nutritional factor. this could be due to the peaking phenomenon [42], [50] of the proposed model, according to which, for finite training sets, the performance of a model does not improve as the number of features (in this case nutritional factors) increases. another reason could be a negative interaction between the nutritional factors involved. instead, for the inter-subjective case, the nutritional factors do not contribute to appreciably improved bgl predictions. thus, the postprandial glycaemic response seems strongly related to individual subject characteristics, in agreement with clinical studies [48], [49], [51] that have demonstrated a postprandial glucose response that is almost constant within the same subject while changing among different subjects. hence, a better knowledge of how nutritional factors affect bgl prediction could enhance the algorithms controlling insulin infusion in hclss and the calculation of the insulin bolus.

7. conclusions
the goal of this study was to explore how nutritional factors may affect the prediction of post-prandial bgls in the 60 minutes after mealtime, via machine learning methods. a set of experiments testing bgl prediction by a feed-forward neural network was conducted on the self-collected ai4pg dataset, which also contains various data of interest such as insulin doses and intakes of nutritional factors. first, the model was validated on the public direcnet dataset, demonstrating its ability to obtain an acceptable prediction performance over a 60-minute time window. then, a prediction performance analysis was carried out on the ai4pg dataset, considering different nutritional factors as inputs to evaluate the impact of each of them. finally, the case in which all the nutritional factors are considered simultaneously as input was explored. the obtained results show that nutritional factor information can be relevant in bgl forecasts, but this information should be employed in a subject-specific fashion. clearly, this does not exclude the possibility of exploring alternative machine learning strategies based on transferring knowledge across different datasets, such as transfer learning techniques (e.g., [52], [53]).
this study focused on the impact on bgl predictions of different nutritional factors in a 60-minute time window after a meal. in addition to the global rmse performance, one could also evaluate the performance on different time horizons. for instance, this was already investigated in [54], though with a coarser temporal resolution (15 minutes instead of 5 minutes). in this framework, one could also investigate the impact of nutritional factors at different time scales (greater than 60 minutes). moreover, explainable artificial intelligence (xai) methods [55]-[59] could help in explaining the input-output relationships and thus the impact of the different scenarios on bgl prediction. finally, using more sophisticated neural network models, such as long short-term memory networks, could improve the outcome.

references
[1] a. katsarou, s. gudbjörnsdottir, a. rawshani, d. dabelea, e. bonifacio, b. j. anderson, l. m. jacobsen, d. a. schatz, å. lernmark, type 1 diabetes mellitus, nat rev dis primers 3(1) (2017), art. no. 17016. doi: 10.1038/nrdp.2017.16 [2] m. mobasseri, m. shirmohammadi, t. amiri, n. vahed, h. hosseini fard, m. ghojazadeh, prevalence and incidence of type 1 diabetes in the world: a systematic review and meta-analysis, health promot perspect 10(2) (2020), pp. 98-115. doi: 10.34172/hpp.2020.18 [3] p. saeedi, i. petersohn, p. salpea, b. malanda, s. karuranga, n. unwin, s. colagiuri, l. guariguata, a. a. motala, k. ogurtsova, j. e. shaw, d. bright, r. williams, idf diabetes atlas committee, global and regional diabetes prevalence estimates for 2019 and projections for 2030 and 2045: results from the international diabetes federation diabetes atlas, 9th edition, diabetes res clin pract 157 (2019), art. no. 107843. doi: 10.1016/j.diabres.2019.107843 [4] national institute for health and care excellence (uk), type 1 diabetes in adults: diagnosis and management. national institute for health and care excellence, 2015. online [accessed 09 march 2023] https://pubmed.ncbi.nlm.nih.gov/26334079/ [5] j. d. hoyos, m. f. villa-tamayo, c. e. builes-montaño, a. ramirez-rincón, j. l. godoy, j. garcia-tirado, p. s. rivadeneira, identifiability of control-oriented glucose-insulin linear models: review and analysis, ieee access, vol. 9, 2021, pp. 69173-69188. doi: 10.1109/access.2021.3076405 [6] p. arpaia, l. callegaro, a. cultrera, a. esposito, m. ortolano, metrological characterization of consumer-grade equipment for wearable brain-computer interfaces and extended reality, ieee trans instrum meas 71 (2022), pp. 1-9. doi: 10.1109/tim.2021.3127650 [7] g. cosoli, a. poli, s. spinsante, l. scalise, the importance of physiological data variability in wearable devices for digital health applications, acta imeko 11(2) (2022), pp. 1-8. doi: 10.21014/acta_imeko.v11i2.1135 [8] m. j. khodaei, n. candelino, a. mehrvarz, n. jalili, physiological closed-loop control (pclc) systems: review of a modern frontier in automation, ieee access 8 (2020), pp. 23965-24005. doi: 10.1109/access.2020.2968440 [9] e. bekiari, k. kitsios, h.
thabit, m. tauschmann, e. athanasiadou, t. karagiannis, a.-b. haidich, r. hovorka, a. tsapas, artificial pancreas treatment for outpatients with type 1 diabetes: systematic review and meta-analysis, bmj, 2018, p. k1310. doi: 10.1136/bmj.k1310 [10] a. saunders, l. h. messer, g. p. forlenza, minimed 670g hybrid closed loop artificial pancreas system for the treatment of type 1 diabetes mellitus: overview of its safety and efficacy, expert rev med devices 16(10) (2019), pp. 845–853. doi: 10.1080/17434440.2019.1670639 [11] g. quiroz, the evolution of control algorithms in artificial pancreas: a historical perspective, annu rev control 48 (2019), pp. 222–232. doi: 10.1016/j.arcontrol.2019.07.004 [12] a. el fathi, m. raef smaoui, v. gingras, b. boulet, a. haidar, the artificial pancreas and meal control: an overview of postprandial glucose regulation in type 1 diabetes, ieee control systems magazine 38(1) (2018), pp. 67–85. doi: 10.1109/mcs.2017.2766323 [13] c. toffanin, m. messori, f. di palma, g. de nicolao, c. cobelli, l. magni, artificial pancreas: model predictive control design from clinical experience, j diabetes sci technol 7(6) (2013), pp. 1470–1483. doi: 10.1177/193229681300700607 [14] m. c. riddell, d. p. zaharieva, l. yavelberg, a. cinar, v. k. jamnik, exercise and the development of the artificial pancreas: one of the more difficult series of hurdles, j diabetes sci technol 9(6) (2015), pp. 1217–1226. doi: 10.1177/1932296815609370 [15] l. angrisani, g. annuzzi, p. arpaia, l. bozzetto, a. cataldo, a. corrado, e. de benedetto, v. di capua, r. prevete, e. vallefuoco, neural network-based prediction and monitoring of blood glucose response to nutritional factors in type-1 diabetes, in 2022 ieee international instrumentation and measurement technology conference (i2mtc), 2022, pp. 1–6. doi: 10.1109/i2mtc48687.2022.9806611 [16] a. aliberti, i. pupillo, s. terna, e. macii, s. di cataldo, e. patti, a. acquaviva, a multi-patient data-driven approach to blood glucose prediction, ieee access 7 (2019), pp. 69311–69325. doi: 10.1109/access.2019.2919184 [17] m. de bois, m. a. el yacoubi, m. ammi, glyfe: review and benchmark of personalized glucose predictive models in type 1 diabetes, med biol eng comput (2021), pp. 1–17. doi: 10.1007/s11517-021-02437-4 [18] b. w. bequette, a critical assessment of algorithms and challenges in the development of a closed-loop artificial pancreas, diabetes technol ther 7(1) (2005), pp. 28–47. doi: 10.1089/dia.2005.7.28 [19] f. j. doyle, l. m. huyett, j. b. lee, h. c. zisser, e. dassau, closed-loop artificial pancreas systems: engineering the algorithms, diabetes care 37(5) (2014), pp. 1191–1197. doi: 10.2337/dc13-2108 [20] k. j. bell, c. e. smart, g. m. steil, j. c. brand-miller, b. king, h. a. wolpert, impact of fat, protein, and glycemic index on postprandial glucose control in type 1 diabetes: implications for intensive diabetes management in the continuous glucose monitoring era, diabetes care 38(6) (2015), pp. 1008–1015. doi: 10.2337/dc15-0100 [21] g. alfian, m. syafrudin, m. anshari, f. benes, f. tatas dwi atmaji, i. fahrurrozi, a. fathan hidayatullah, j. rhee, blood glucose prediction model for type 1 diabetes based on artificial neural network with time-domain features, biocybern biomed eng 40(4) (2020), pp. 1586–1599. doi: 10.1016/j.bbe.2020.10.004 [22] r. a. h. karim, i. vassányi, i. kósa, after-meal blood glucose level prediction using an absorption model for neural network training, comput biol med. 125 (2020), art. no. 103956. 
doi: 10.1016/j.compbiomed.2020.103956 [23] a. apicella, p. arpaia, e. de benedetto, n. donato, l. duraccio, s. giugliano, r. prevete, enhancement of ssveps classification in bci-based wearable instrumentation through machine learning techniques, ieee sensors journal 22(9) (2022), pp. 9087–9094. doi: 10.1109/jsen.2022.3161743 [24] p. arpaia, a. esposito, a. natalizio, m. parvis, how to successfully classify eeg in motor imagery bci: a metrological analysis of the state of the art, j neural eng, 2022. doi: 10.1088/1741-2552/ac74e0 [25] b. v. subba rao, r. kondaveti, r. v. v. s. v. prasad, v. s. rao, k. b. s. sastry, bh. dasaradharam, classification of brain tumours using artificial neural networks, acta imeko 11(1) (2022), pp. 17. doi: 10.21014/acta_imeko.v11i1.1232 [26] p. arpaia, u. bracale, f. corcione, e. de benedetto, a. di bernardo, v. di capua, l. duraccio, r. peltrini, r. prevete, (2022). assessment of blood perfusion quality in laparoscopic colorectal surgery by means of machine learning. scientific reports 12(1), 14682. doi: 10.1038/s41598-022-16030-8 [27] l. angrisani, p. arpaia, a. esposito, l. gargiulo, a. natalizio, g. mastrati , n. moccaldi, m.parvis, passive and active braincomputer interfaces for rehabilitation in health 4.0. measurement: sensors, 18, 100246 (2021), pp. 1-4. doi: 10.1016/j.measen.2021.100246 [28] c. pérez-gandía, a. facchinetti, g. sparacino, c. cobelli, e.j. gómez,m. rigla, a. de leiva, m.e. hernando artificial neural network algorithm for online glucose prediction from continuous glucose monitoring. diabetes technol ther, 2010. doi: 10.1089/dia.2009.0076 [29] j. ben ali, t. hamdi, n. fnaiech, v. di costanzo, f. fnaiech, j.m. ginoux, continuous blood glucose level prediction of type 1 diabetes based on artificial neural network, biocybern biomed eng 38(4) (2018), pp. 828–840. doi: 10.1016/j.bbe.2018.06.005 [30] v. felizardo, n. m. garcia, n. pombo, i. megdiche, data-based algorithms and models using diabetics real data for blood glucose and hypoglycaemia prediction–a systematic literature review, artif intell med (2021), art. no. 102120. doi: 10.1016/j.artmed.2021.102120 [31] t. el idrissi, a. idri, z. bakkoury, systematic map and review of predictive techniques in diabetes self-management, int j inf manage 46 (2019), pp. 263–277. doi: 10.1016/j.ijinfomgt.2018.09.011 [32] j. carrillo-moreno, c. pérez-gandia, r. sendra-arranz, g. garcia-sáez, m. e. hernando, á. gutiérrez, long short-term memory neural network for glucose prediction, neural comput appl 33(9) (2021), pp. 4191–4203. doi: 10.1007/s00521-020-05248-0 [33] s. m. pappada, b. d. cameron, p. m. rosman, r. e. bourey, t. j. papadimos, w. olorunto, m. j. borst, neural network-based realtime prediction of glucose in patients with insulin-dependent diabetes, diabetes technol ther. 13(2) (2011), pp. 135–141. doi: 10.1089/dia.2010.0104 [34] c. zecchin, a. facchinetti, g. sparacino, g. de nicolao, c. 
cobelli, neural network incorporating meal information improves accuracy of short-time prediction of glucose concentration, ieee trans biomed eng 59(6) (2012), pp. 1550-1560. doi: 10.1109/tbme.2012.2188893 [35] k. li, j. daniels, c. liu, p. herrero, p. georgiou, convolutional recurrent neural networks for glucose prediction, ieee j biomed health inform 24(2) (2019), pp. 603-613. doi: 10.1109/jbhi.2019.2908488 [36] i. contreras, j. vehi, and others, artificial intelligence for diabetes management and decision support: literature review, j med internet res 20(5) (2018), art. no. 10775. doi: 10.2196/10775 [37] c. d. man, f. micheletto, d. lv, m. breton, b. kovatchev, c. cobelli, the uva/padova type 1 diabetes simulator, j diabetes sci technol 8(1) (2014), pp. 26-34. doi: 10.1177/1932296813514502 [38] c. zecchin, a. facchinetti, g. sparacino, c. cobelli, jump neural network for real-time prediction of glucose concentration, in artificial neural networks, springer, 2015, pp. 245-259. doi: 10.1016/j.cmpb.2013.09.016 [39] k. li, c. liu, t. zhu, p. herrero, p. georgiou, glunet: a deep learning framework for accurate glucose forecasting, ieee j biomed health inform 24(2) (2019), pp. 414-423. doi: 10.1109/jbhi.2019.2931842 [40] jchr, direcnet dataset. evaluation of counter-regulatory hormone responses during hypoglycemia and the accuracy of continuous glucose monitors in children with t1dm, 2007. online [accessed 09 march 2023] https://public.jaeb.org/direcnet/stdy/167 [41] medtronic, "minimed 670g; medtronic minimed, usa", online [accessed 09 march 2023] https://www.medtronic.com/us-en/healthcare-professionals/products/diabetes/insulin-pump-systems/minimed-670g.html [42] c. m. bishop, n. m. nasrabadi, pattern recognition and machine learning, vol. 4, springer, 2006. isbn: 978-1-4939-3843-8 [43] c. pérez-gandía, a. facchinetti, g. sparacino, c. cobelli, e. j. gómez, m. rigla, a. de leiva, m. e. hernando, artificial neural network algorithm for online glucose prediction from continuous glucose monitoring, diabetes technol ther 12(1) (2010), pp. 81-88. doi: 10.1089/dia.2009.0076 [44] k. turksoy, e. s. bayrak, l. quinn, e. littlejohn, d. rollins, a.
cinar, hypoglycemia early alarm systems based on multivariable models, ind eng chem res 52(35) (2013), pp. 12329–12336. doi: 10.1021/ie3034015 [45] a. savitzky, m. j. e. golay, smoothing and differentiation of data by simplified least squares procedures, anal chem 36(8) (1964), pp. 1627–1639. doi: 10.1021/ac60214a047 [46] a. apicella, f. donnarumma, f. isgrò, r. prevete, a survey on modern trainable activation functions, neural networks 138 (2021), pp. 14–32. doi: 10.1016/j.neunet.2021.01.026 [47] y. goldberg, neural network methods for natural language processing, synthesis lectures on human language technologies 10(1) (2017), pp. 1–309. doi: 10.2200/s00762ed1v01y201703hlt037 [48] c. vetrani, i. calabrese, l. cavagnuolo, d. pacella, e. napolano, s. di rienzo, g. riccardi, a. a rivellese, g. annuzzi, l. bozzetto, dietary determinants of postprandial blood glucose control in adults with type 1 diabetes on a hybrid closed-loop system, diabetologia 65(1) (2022), pp. 79–87. doi: 10.1007/s00125-021-05587-0 [49] l. bozzetto, d. pacella, l. cavagnuolo, m. capuano, a. corrado, g. scidà, g. costabile, a. a rivellese, g. annuzzi, postprandial glucose variability in type 1 diabetes: the individual matters beyond the meal, diabetes res clin pract 192 (2022), p. 110089. doi: 10.1016/j.diabres.2022.110089 [50] m. verleysen, d. françois, the curse of dimensionality in data mining and time series prediction, in international workconference on artificial neural networks, 2005, pp. 758–770. doi: 10.1007/11494669_93 [51] s. shilo, a. godneva, m. rachmiel, t. korem, y. bussi, d. kolobkov, t. karady, n. bar, b. chen wolf, y. glantz-gashai, m. cohen, n. zuckerman-levin, n. shehadeh, n. gruber, n. levran, sh. koren, a. weinberger, o. pinhas-hamiel, e. segal, the gut microbiome of adults with type 1 diabetes and its association with the host glycemic control, diabetes care 45(3) (2022), pp. 555–563. doi: 10.2337/dc21-1656 [52] a. apicella, p. arpaia, m. frosolone, g. improta, n. moccaldi, a. pollastro, eeg-based measurement system for monitoring student engagement in learning 4.0, sci rep 12(1) (2022), pp. 1-13. doi: 10.1038/s41598-022-09578-y [53] s. j. pan, q. yang, a survey on transfer learning, ieee trans knowl data eng 22(10) (2010), pp. 1345–1359. doi: 10.1109/tkde.2009.191 [54] g. annuzzi, a. apicella, p. arpaia, l. bozzetto, s. criscuolo, e. de benedetto, m. pesola, r. prevete, e. vallefuoco, impact of nutritional factors in blood glucose prediction in type 1 diabetes through machine learning. ieee access 11 (2023), pp. 1710417115. doi: 10.1109/access.2023.3244712 [55] a. apicella, f. isgró, a. pollastro, r. prevete, toward the application of xai methods in eeg-based systems, in ceur workshop proceedings, 2022, vol. 3277, pp. 1 – 15. doi: 10.48550/arxiv.2210.06554 [56] e. tjoa, c. guan, a survey on explainable artificial intelligence (xai): toward medical xai, ieee trans neural netw learn syst, 32(11) (2020), pp. 4793–4813. doi: 10.1109/tnnls.2020.3027314 [57] a. apicella, s. giugliano, f. isgrò, r. prevete, exploiting autoencoders and segmentation methods for middle-level explanations of image classification systems, knowl based syst, 255 (2020), art. no. 109725. doi: 10.1016/j.knosys.2022.109725 [58] a. apicella, f. isgrò, r. prevete, a. sorrentino, g. tamburrini, (2019). explaining classification systems using sparse dictionaries. esann 2019 proceedings, 27th european symposium on artificial neural networks, computational intelligence and machine learning, isbn: 978-287587065-0, pp. 495 – 500. [59] m. 
t. ribeiro, s. singh, c. guestrin, 'why should i trust you?' explaining the predictions of any classifier, in proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining, 2016, pp. 1135-1144. doi: 10.1145/2939672.2939778
physical quantity as a pseudo-euclidean vector

acta imeko
issn: 2221-870x
december 2015, volume 4, number 4, 4 - 8

valery mazin
st. petersburg state polytechnical university, ul. polytechnicheskaja, 29, 195251 st. petersburg, russia

section: research paper
keywords: pseudo-euclidean plane; uncertainty; vector of the unit; total vector; pseudo-euclidean angle
citation: valery mazin, physical quantity as a pseudo-euclidean vector, acta imeko, vol. 4, no. 4, article 3, december 2015, identifier: imeko-acta-04 (2015)-04-03
section editor: franco pavese, torino, italy
received april 8, 2015; in final form june 17, 2015; published december 2015
copyright: © 2015 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: valery mazin, e-mail: masin@list.ru

abstract
the aim of this research was to find a mathematical model of a physical quantity that organically includes its degree of uncertainty. the basic method is the application of a geometrical approach, taking the basic equation of measurement as the starting point. the result is a representation of a physical quantity by a vector of the pseudo-euclidean plane. four new notions arising from this representation are discussed.

1. introduction
the categories of evolution in theoretical metrology mentioned in [1], among which developing and substantiating mathematical models should be particularly noted, still hold true today. the geometrical approach plays a special role among the approaches to understanding the fundamental categories of measurement. in the "encyclopedia of mathematics" [2], the role of geometry is defined as follows: "the development of geometry, its applications, development of the geometric perception of abstract objects in different areas of mathematics and natural science give evidence of the importance of geometry as one of most drastic and productive means of reality cognition in terms of produced ideas and methods". it is underscored in [3] that the geometric representation of analytic concepts has an infinite heuristic value; it also mentions that geometry "becomes more and more important in . . . physics, simplifying mathematical formalities and deepening physical comprehension. this renaissance of geometry influenced not just special and general theories of relativity, known to be geometrical in their essence, but also other branches of physics, where geometry of physical space is being replaced by the geometry of more abstract spaces".
this article offers for consideration a geometrical representation of a fundamental concept in metrology, the physical quantity. moreover, this geometric representation integrates uncertainty into the concept of the physical quantity.

2. defining the problem
let a quantity x characterizing an object be allowed to vary from x_a to x_b. this situation actually takes place in reality, for not a single physical parameter in an imperfect world can be defined by a single value. there is always a range of possible values, which exists either due to transition from one kind to another, or due to changes in external conditions, or due to the inaccuracy of a measurement. the mean value and the range of uncertainty characterize different properties of a quantity (one indicating the location on the numerical axis, the other the dispersion around this location). on the other hand, a physical quantity contains both quantitative and qualitative aspects.
both of these dualities suggest that a representation of a quantity which naturally combines them is necessary. this research attempts to represent any physical quantity as a two-dimensional vector of a suitable abstract space.

3. physical quantity as a vector characterizing its uncertainty
the following proposition was proven in [4]: a collection of logarithms of physical quantities in any power (integer and fractional), plus zero, defines a vector space. each vector of this vector space, per the basic equation of measurement x = {x}[x], where x is a physical quantity, {x} is its numerical value and [x] is its unit, can be decomposed into two portions, formed after taking the logarithm of this equation (strictly speaking, this simple notion is, of course, not an equation):

$$\ln x = \ln\{x\} + \ln[x] \,. \qquad (1)$$

expression (1) is usually considered to lack mathematical rigor and to lead to contradictory statements, for it is commonly adopted that transcendent functions of denominate quantities do not make sense, whereas the units of quantities are exactly what make them denominate. nevertheless, the formal operation (1) can obviously be carried out, and we will show that it is not just formal. we shall distinguish between the fundamental possibility of assigning a coordinate system and the method of determining the values of these coordinates. taking the logarithm produces a coordinate system. as for the values of the coordinates, multiple approaches are possible.
first of all, it is necessary to acknowledge that as long as the unit of a quantity may have different values, this unit in itself bears a quantitative meaning. this meaning, however, remains indeterminate, for the entire quantitative side of a physical quantity is the result of comparing this quantity with a unit. therefore a logarithmic transformation can be used. secondly, as will be shown later, by using the immutable fact of the uncertainty of the value of a quantity, it becomes possible to make sense of ln [x]. and finally, we can consider ln [x] to be simply the symbol of a coordinate, a sign of direction, whereas α_i shall be considered the coordinate itself.
assuming x_a < x_med < x_b, the corresponding vectors may be drawn as illustrated in figure 1, where a_0 is the axis of the logarithms of numerical values. we will choose the metric on this plane in such a way that it reflects the fact that x lies within the given limits. this can be done on the condition that the magnitudes of the vectors are real within their domain of definition and imaginary outside it. as a result, we arrive at

$$\left\|\ln \mathbf{x}_{a}\right\| = \left\|\ln \mathbf{x}_{b}\right\| = 0 \,, \qquad (2)$$

i.e. the corresponding vectors turn out to be isotropic. all vectors located to the left as well as to the right of the isotropic ones, including those directed along the numerical axis, have imaginary lengths. the latter statement meets our condition, whose meaning is clear: what falls short of reality is imaginary. as abstract numbers do not exist in nature, the numerical axis must be imaginary. such a metric is pseudo-euclidean.
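to make the sign convention concrete, the following toy sketch evaluates an indefinite (pseudo-euclidean) squared norm in which the numerical axis contributes negatively; the explicit coordinate convention is our illustrative assumption, not the paper's formal construction.

```python
def pseudo_norm_sq(unit_coord, numeric_coord):
    """squared length in a plane of signature (+, -): positive (real
    length) when the unit direction dominates, negative (imaginary
    length) when the numerical axis dominates."""
    return unit_coord ** 2 - numeric_coord ** 2

print(pseudo_norm_sq(1.0, 1.0))  # 0.0: an isotropic vector, as in (2)
print(pseudo_norm_sq(0.0, 1.0))  # -1.0: a vector along the numerical axis
```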
as abstract numbers don’t exist in nature, a numerical axis must be imaginary. such metric happens to be pseudoeuclidean. every point a of this plane characterizes an equivalent-kind physical quantity, taken in any power and varying within certain limits. we shall name it a representing point. in that case, the meaning of the points population, forming a pseudo-euclidean plane, becomes clear if one can imagine few possible reasons for transitioning from one point to another, i.e. movements in this plane. these reasons are mathematically expressed by parameters, whose variations cause the end of the vector of quantity trace a certain trajectory. it is possible to specify three types of such parameters: coverage probability, power of the quantity and external factors. in such metric  2med02med | | aa x  xx lnln . (3) this leads to |ln xmed| = ab – amed = amed – aa , (4) and   22med | | meda xx lnln . (5) substituting (4) into (5) and taking into consideration that 2 med ba aaa   , (6) we obtain:   ba aax ln . (7) this result is of fundamental importance. it shows that the vector norm, representing the unit of a quantity, is not a permanently set value, but rather is a geometric average of logarithms of the limit values for a given quantity. after simple transformations we also arrive to:   axxb aaaa  00 xln . (8) figure 1. vector representation of a physical quantity.  acta imeko | www.imeko.org  december 2015 | volume 4 | number 4 | 6  as a result of this concept, new entities are coming to life. the first of them – vector norm  x ln . 4. vector of a unit  the vector norm of a unit generally is not equal to zero since it is not a “logarithm of one” in its common sense, but rather is thought to be a logarithm of a certain qualitative content of the quantity. this notion fully agrees with the concept, stating that the numerical value and unit for a quantity reflect its quantitative and qualitative component, respectively. being expressed as a vector, laid in a pseudo-euclidean plane, the unit of quantity thus has two dimensions. one of them is ordinary, with logarithmic coordinate at 0, characterizes the size of the commonly adopted unit. another dimension also gives the size of the unit, except this unit is related to the applicable physical system, and it is defined by the range xa, xb. 5. the full vector of a quantity  another new entity, appearing in connection with the geometrical concept of a physical quantity the length of its full vector x ln . as it can be seen from (6), when either a0x = aa, or a0x = ab , x ln = 0. it can be explained only on the condition that a0x is located at the center of a corresponding probability distribution. in this case, if the center of distribution coincides with its limit, dispersion becomes zero (for there aren’t any values at either side from the center); then a0x becomes constant. no other interpretation is capable to give an informatively sound explanation of this exceptional nature of limit values for a0x . moreover, we can also determine the class of distribution center for a quantity, represented by the coordinate a0x in a given model. since the numerical axis is “common for everyone”, i.e. all values for distribution centers for all physical quantities are being located along it, these centers shall be added up algebraically while the summation itself is a vector addition. it is worth to mention here, that this rule of addition is applicable in case of dealing with population means. 
4. vector of a unit
the vector norm of a unit is generally not equal to zero, since it is not a "logarithm of one" in the common sense, but rather the logarithm of a certain qualitative content of the quantity. this notion fully agrees with the concept that the numerical value and the unit of a quantity reflect its quantitative and qualitative components, respectively. being expressed as a vector laid in a pseudo-euclidean plane, the unit of a quantity thus has two dimensions. one of them, with logarithmic coordinate 0, characterizes the size of the commonly adopted unit. the other dimension also gives the size of the unit, except that this unit is related to the applicable physical system, and it is defined by the range x_a, x_b.

5. the full vector of a quantity
another new entity appearing in connection with the geometrical concept of a physical quantity is the length of its full vector, ‖ln x‖. as can be seen from (8), when either a_0x = a_a or a_0x = a_b, ‖ln x‖ = 0. this can be explained only on the condition that a_0x is located at the center of the corresponding probability distribution. in that case, if the center of the distribution coincides with its limit, the dispersion becomes zero (for there are no values at either side of the center); then a_0x becomes constant. no other interpretation is capable of giving an informatively sound explanation of this exceptional nature of the limit values of a_0x. moreover, we can also determine the class of distribution center for a quantity represented by the coordinate a_0x in the given model. since the numerical axis is "common for everyone", i.e. the values of the distribution centers of all physical quantities are located along it, these centers shall be added up algebraically, while the summation itself is a vector addition. it is worth mentioning here that this rule of addition is applicable when dealing with population means. consequently, a_0x is the population mean of the logarithms of quantity x. it is necessary to notice that in every single pseudo-euclidean plane characterizing a certain quantity, any of the known characteristics of the probability distribution center may act as the coordinate a_0x. having said that, if quantities are multiplied (which is the same as a vector sum of their logarithms), then a rule that governs the addition of their logarithmic probability distribution centers shall be defined. likely, this will require establishing different numerical axes for different quantities; in this article, however, we will skip this issue.
let us notice that the reason for a physical quantity to be represented as a two-dimensional entity lies in the fundamental uncertainty of its value. such uncertainty causes the averages a_0x and ‖ln [x]‖ to have different absolute values. the length of the quantity vector expresses the degree of its uncertainty as the geometric average of the two greatest possible deviations of the quantity value logarithms from their center of distribution. let us re-write (8) in the following form:

$$\left\|\ln \mathbf{x}\right\| = \sqrt{\ln\frac{x_{b}}{x}\,\ln\frac{x}{x_{a}}} \,, \qquad (9)$$

where x = exp a_0x. if the variation of the quantity within the limits x_a - x_b is reasonably small (owing to, say, measurement errors), then the quotients x_b/x and x/x_a approach 1. then, expanding the logarithms into taylor series and limiting the expansion to the first two members, we arrive at

$$\left\|\ln \mathbf{x}\right\| \approx \sqrt{\left(\frac{x_{b}}{x}-1\right)\left(1-\frac{x_{a}}{x}\right)} = \sqrt{\gamma_{+}\gamma_{-}} \,, \qquad (10)$$

where γ₊ and γ₋ are the positive and negative estimates of the error, respectively. quite often γ₊ = γ₋ = γ [5], and consequently

$$\left\|\ln \mathbf{x}\right\| \approx \gamma \,. \qquad (11)$$

the quantity variation limits x_a and x_b define a level of confidence, specified with a certain probability. consequently, the entire model built on the pseudo-euclidean plane turns out to be probabilistic.
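a quick numeric check of the approximation chain (9)-(11), with made-up limits of roughly ±2 % around the center:

```python
import math

x_a, x_b = 98.0, 102.0  # assumed limits, roughly +/- 2 % around the centre
x = math.exp((math.log(x_a) + math.log(x_b)) / 2)  # centre a_0x midway

exact = math.sqrt(math.log(x_b / x) * math.log(x / x_a))  # equation (9)
approx = math.sqrt((x_b / x - 1.0) * (1.0 - x_a / x))     # equation (10)
print(exact, approx)  # both close to gamma = 0.02, i.e. equation (11)
```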
6. the angular characteristic of a measurable property of a physical object
the third new entity that needs much thinking is the pseudo-euclidean angle between the vector of the quantity's unit and the full vector of the quantity; see figure 2. each point on a straight line coinciding with the vector of quantity x, including the point that belongs to x in a fractional power, expresses the same property of the object in both qualitative and quantitative terms. for example, electric resistance r, r², conductivity r⁻¹, and so on express the same property of a conductor, though in various terms. but then the vector ln x, as well as any other vector collinear to it, such as ln x², characterizes just one image of this general property. it is natural to assume that there should be a characteristic that unites all of these images. this characteristic, obviously, is the angle at which the line u crosses some chosen direction, for example the direction of the quantity's unit vector, as shown in figure 2. the angle between these straight lines, just as the quantity x itself, characterizes the object in the adopted system of measurement, but in a more general and more sustainable way. it is certainly not an angle that can be seen, but rather a pseudo-euclidean angle. thus, we are forced to consider the angle as a characteristic of a measurable physical property of an object.
figure 2. further elaboration of the physical quantity concept.
to simplify the notation, we shall write ln x = x*; ln [x] = [x]*; ln x_med = l*_med; [x]* = a_[x] e, where e is a unit vector directed along [x]*. to begin with, let us define the angle θ_β between the unit vector and the geometric average vector. the corresponding transformations show

$$\theta_{\beta} = \frac{1}{2} \ln \frac{a_{b}}{a_{a}} \,. \qquad (12)$$

in other words, the pseudo-euclidean angle between the unit vector and the geometric average vector for this object is determined by the logarithm of the ratio of the quantity's limit values. formula (12) establishes the degree of difference between [x]* and l*_med, and it is significant in the sense that x_med can also be considered a system unit. now let us define the angle θ_φ between the vector of the geometric average and the full vector of the quantity x, the angle shown in figure 1 as φ. we find

$$\theta_{\varphi} = \frac{1}{2} \ln \frac{a_{0x}-a_{a}}{a_{b}-a_{0x}} \,. \qquad (13)$$

in this case, the logarithm is taken of a simple relation that includes three points a_a, a_0x, a_b, all lying on the same line, i.e. an affine invariant. finally, the formula for the angle between the vector of the unit and the full vector of the quantity is expressed as

$$\theta = \frac{1}{2} \ln\left(\frac{a_{0x}-a_{a}}{a_{b}-a_{0x}} : \frac{0-a_{a}}{a_{b}-0}\right).$$

here the logarithm is taken of a complex relation of four points a_a, a_b, a_0x, and 0, representing a principal projective invariant. it is remarkable that the last formula exactly matches the formula defining the length 0-a_0x in hyperbolic and elliptic geometries. it is analogous to the laguerre formula

$$\psi = \frac{i}{2} \ln\left(j_{1} j_{2}\, u_{1} u_{2}\right), \qquad (14)$$

where ψ is the euclidean angle in a complex euclidean space; u₁ and u₂ are two real straight lines; j₁ and j₂ are two isotropic imaginary straight lines, with all four straight lines passing through the same point; (j₁j₂u₁u₂) is the complex relation of the four aforementioned straight lines. in our case, the complex relation of the four points is obviously equal to the complex relation of the straight lines coinciding with the vectors x_a*, x_b*, x* and [x]*. of these four, the first two are also isotropic.

7. conclusions
the concept representing a physical quantity as a vector of the pseudo-euclidean plane means knowing the center of the probability distribution and the degree of uncertainty of this quantity. this possibility of a natural and, in essence, necessary combination of the center of the probability distribution and the degree of uncertainty of a quantity is a reason to favor this geometrical concept of representing physical quantities. the suggested approach allows a certain property of a physical object, which in particular may be expressed via a physical quantity, to be expressed through a pseudo-euclidean angle. this approach is a natural development of representing a physical quantity on a numerical axis. here comes the question: how exactly shall we use this representation in describing physical processes? in other words, what is the practical value of these results? since the combination of the center of distribution and the dispersion around this center for a given quantity is defined by a two-dimensional vector, the equations of interest shall operate with these vectors instead of the otherwise commonplace quantities. an example in the appendix illustrates such an approach.
appendix: example

the sensitivity of an induction sensor to magnetic induction is described by the formula s = ω w a, where ω is the cyclic frequency of the induction variation, w is the number of windings in the coil, and a is the projection of the coil's area onto the plane perpendicular to the induction. let ω lie in the range (5970 ÷ 6590) s⁻¹ with a probability of 0.95; let also w = 1293. finally, let us take it as given that a lies in the range (1.22 ÷ 1.24)·10⁻³ m² with 0.95 probability for different sensors of the same model. the formula for the sensitivity in vector form will look as

s̄* = ω̄* + w̄* + ā* , (i)

where, for the sake of simplicity, we denote ln s̄ = s̄*, ln ω̄ = ω̄*, ln w̄ = w̄* and ln ā = ā*. then let us break each vector down into components, one directed along the numerical axis and the other along the full vector of the quantity. the result of this transformation is

a_0s + s* = a_0ω + ω* + a_0w + w* + a_0a + a* ,

i.e. each quantity is represented by the sum of its distribution centre and its expanded uncertainty. it can be said that a_0s = a_0ω + a_0w + a_0a is an algebraic sum of the logarithmic expected values, and that

s* = ω* + w* + a* (ii)

is a vector sum of the expanded uncertainties. (i) and (ii) are, obviously, identical in form; however, it is important to keep in mind that the vectors in (i) are full vectors, whereas in (ii) the same vectors become coordinate vectors, should the coordinate system change.

let us determine the relative expanded uncertainty γ_s, taking into account that w = const. according to the vector addition formula,

γ_s = |s*| = √( |ω*|² + |a*|² + 2 g |ω*||a*| ) ,

where g is the coordinate of the metric tensor of the model space. this tensor, according to [6], is determined by the distribution types of ω and a, the coverage probability, and the ratio of the uncertainties of ω and a. to find |ω*| and |a*|, see (11). assuming a normal distribution for ω and a uniform one for a, for the aforementioned coverage probability and variables, and following the value tables for the metric tensor developed by one of the authors in [6], we arrive at g ≈ 0.05. as a result, it turns out that |s*| ≈ 0.051.

here is where the methodology from [7] becomes handy. as follows from the values stated earlier for ω and a, their relative expanded uncertainties are 0.05 and 0.01, respectively. the matching standard uncertainties, at coverage factors of 1.96 and 1.65, are found to be 0.026 and 6.1·10⁻³, respectively. the standard uncertainty of s can therefore be found as √(0.026² + (6.1·10⁻³)²) ≈ 0.027. the assumption of a normal distribution for s then produces 0.027 · 1.96 ≈ 0.053, which for all practical purposes equals the result obtained earlier. it is important to notice that we made a supposition regarding the type of the resulting distribution. applying the monte carlo method does not require such a supposition; however, it requires considerably more computing power. in this case, with a statistical sample of 10,000 values, γ_s = 0.052, i.e. a comparable result yet again.
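the monte carlo result quoted above is easy to reproduce. the short python sketch below is an illustration, not the authors' code; the 95 % interval of ω is interpreted as a normal distribution with mean 6280 s⁻¹ and coverage factor 1.96, which is an assumption consistent with the text:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000  # sample size used in the appendix

# omega: normal, with (5970..6590) 1/s read as a 95 % interval (k = 1.96)
omega = rng.normal(6280.0, 310.0 / 1.96, n)
# a: uniform over (1.22..1.24)e-3 m^2
a = rng.uniform(1.22e-3, 1.24e-3, n)
w = 1293  # number of windings, a constant

s = omega * w * a  # sensitivity s = omega * w * a

# relative half-width of the central 95 % interval of s
lo, hi = np.percentile(s, [2.5, 97.5])
gamma_s = (hi - lo) / (2.0 * s.mean())
print(round(gamma_s, 3))  # about 0.05, close to the 0.052 quoted above
```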
references

[1] y. v. tarbeyev, "development of fundamental research in the area of theoretical metrology", 3rd all-state conference on theoretical metrology: presentation theses, leningrad, 1986, pp. 4-7 (in russian)
[2] "encyclopedia of mathematics in five volumes", vol. 1, moscow, 1977, 1152 col. (in russian)
[3] b. schutz, "geometrical methods in mathematical physics", translated from english, moscow, 1984, 303 p. (in russian)
[4] v. d. mazin, "description of properties of physical objects in piecewise euclidean and piecewise riemann spaces", scientific and technical bulletin of spbstu (computing, measuring and operating systems), st. petersburg, 1993 (in russian)
[5] p. v. novitsky, i. a. zograf, "estimation of errors in measurement results", 2nd edition, leningrad, 1991, 303 p. (in russian)
[6] v. d. mazin, a. n. chepushtanov, "properties and application technology of a vector-analytical method for an estimation of uncertainty in measurement", 5th international conference on precision measurement, ilmenau, 2008
[7] "guide to the expression of uncertainty in measurement", jcgm, 2008

distributed coverage optimisation for a fleet of unmanned maritime systems

acta imeko, issn: 2221-870x, september 2021, volume 10, number 3, 36-43

geert de cubber1, rihab lahouli2, daniela doroftei1, rob haelterman2
1 royal military academy, department of mechanics, avenue de la renaissance 30, 1000 brussels, belgium
2 royal military academy, department of mathematics, avenue de la renaissance 30, 1000 brussels, belgium

section: research paper
keywords: unmanned maritime systems; multi-agent systems; maritime surveillance; distributed coverage optimisation
citation: geert de cubber, rihab lahouli, daniela doroftei, rob haelterman, distributed coverage optimisation for a fleet of unmanned maritime systems, acta imeko, vol. 10, no. 3, article 7, september 2021, identifier: imeko-acta-10 (2021)-03-07
section editor: bálint kiss, budapest university of technology and economics, hungary
received january 15, 2021; in final form september 9, 2021; published september 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: the research presented in this paper has been funded by the belgian royal higher institute of defence, in the framework of the dap19/08 (marsur) project, and by the flemish agency for innovation and entrepreneurship, in the framework of the blue cluster project ssave (hbc.2019.0045).
corresponding author: geert de cubber, e-mail: geert.de.cubber@rma.ac.be

abstract

unmanned maritime systems (ums) can provide important benefits for maritime law enforcement agencies for tasks such as area surveillance and patrolling, especially when they are able to work together as one coordinated system. in this context, this paper proposes a methodology that optimises the coverage of a fleet of ums, thereby maximising the opportunities for identifying threats. unlike traditional approaches to maritime coverage optimisation, which are also used, for example, in search and rescue operations when searching for victims at sea, this approach takes into consideration the limited seaworthiness of small ums, compared with traditional large ships, by incorporating the danger level into the design of the optimiser.

1. introduction

an ever-increasing percentage of the global population lives in coastal areas. a downside of this evolution is that an increasing number of criminals are turning their attention to our seas and oceans to carry out illegal activities; examples include drug smuggling, human trafficking, illegal fishing and border infringements. the problem for law enforcement agencies is that patrolling and surveilling the vast oceans using traditional means (large, manned vessels) is impossible from an economic and operational point of view. unmanned maritime systems (ums) can potentially provide maritime law enforcement agencies with a valuable tool for increasing their capabilities in relation to maritime surveillance.
of course, ums are not the only answer; they are just one part of a much wider maritime situational awareness toolkit [1], which also encompasses satellite monitoring [2] and manned and unmanned aerial assets [3] with advanced analytics solutions, allowing the data gathered by all these agents to be turned into information and knowledge. one of the main capabilities the ums require is to be able to operate as a well-coordinated group, working together towards a higher-level goal such as maritime surveillance. however, the practical deployment of these novel smaller-scale ums requires the careful consideration of several aspects related to the operational requirements of the end users [4], the interoperability between the different systems [5] and the design of the surveillance architecture. as an example, the traditional approaches towards distributed patrol and surveillance [6]-[8] by manned systems generally do not take into consideration the effects of small waves (which are irrelevant for larger ships but very important for small ums). in this paper, a novel methodology for the real-time control of a fleet of between two and ten ums is therefore proposed. the presented methodology is cast as a distributed coverage optimisation problem in which the danger of the ums overturning is estimated as a function of the potential trajectories and considered in the selection of the optimal movement strategy. as a result, the optimal safe trajectories for all the agents in the fleet can be planned.

the proposed approach is validated through a simulation in an application scenario [9] connected to the surveillance of belgian offshore wind farms. belgian territorial waters are a very densely populated maritime area, with reserved spaces for all actors, as presented in figure 1, and it is important that all actors stay within the delimited zones. for wind farms (the area shaded in red in figure 1), this often presents problems, as other users (e.g. fishing vessels and pleasure yachts) penetrate this zone without permission. in order to police and enforce the exclusion zone, it is necessary to patrol this area, which is on the maritime border with the netherlands and measures about 10 km by 30 km.

2. previous studies

multi-agent robotic coverage optimisation is a research topic that has received a considerable amount of attention in recent years, as an increasing number of robotic assets are being deployed; thus, the need to identify strategies to optimise the coordination between these agents has increased.
a first distinction to be made between the different methodologies is based upon the type of agents that are taken into consideration. on the one hand, there are approaches that tackle swarms of a high number of less intelligent agents [10]; swarm approaches generally make use of some form of ant colony optimisation algorithm [11] to solve the coverage problem. on the other hand, there are multi-agent approaches that deal with a lower number of more intelligent agents, which is the case for the application in the present study. a second important distinction between methodologies is based upon the assumptions made with regard to the connectivity between the different agents. if continuous broadband access between the agents is assumed, then all agents can obtain perfect localisation and sensor data from one another, and the approaches can be based on some kind of global optimisation [12], with the capability to adapt to a time-dependent environment [13]. even though it has been shown that finding a globally optimal solution for the coverage maximisation of a multi-agent fleet is an np-hard problem [14], it is possible to come quite close to this solution within real-time constraints [15], [16]; however, this requires intelligent strategies to guide the optimisation process (see more discussion on this subject later). if, however, unreliable network connections are assumed, then the agents cannot rely on a global planner, and a local optimisation is required. this also entails the need for a distributed approach that still allows for timely coordination between the different agents within the system, as proposed by xin et al. [17].

the methodology presented here adopts a hybrid approach. conceptually, it is based on a global optimisation, but one that is executed separately by each of the agents, taking into consideration the latest known data from the other agents. spatio-temporal memories are used to track and predict the localisation and sensor data from the other agents in order to address communication delays and breakdowns (a minimal sketch of such a memory is given at the end of this section). clearly, these estimations are not perfect, but in this way the optimisation scheme tries to adopt the best of both types of approach.

within the robotics community, most attention has been focused on providing solutions to the multi-agent coverage optimisation problem for unmanned ground vehicles, but there are certainly also approaches that consider unmanned aerial vehicles [18]. however, for maritime systems, the research domain is less developed. fabbri et al. [19] presented a path and decision support system for maritime surveillance vessels, based on multi-objective optimisation algorithms that seek to find an optimal trade-off between several mission objectives. while the concepts are similar, that work focuses on a high-level decision support system for large, manned vessels; the present application aims to develop a solution for small-scale, unmanned patrol vessels, which means that the requirements and constraints are very different.

as discussed previously, finding a globally optimal solution for the coverage maximisation of a multi-agent fleet is an np-hard problem [14]. this implies that such algorithms traditionally scale badly with an increasing number of agents. as the hybrid planner proposed here features a mix of global and local optimisation aspects, it also suffers from this drawback of global planning systems. in order to remedy this problem, researchers have suggested particle optimisation, as proposed by han et al. [20], or grey wolf optimisation methodologies, as first introduced by mirjalili et al. [21] and later improved for distributed coverage optimisation problems by wang et al. [22]. in short, all these methodologies aim to intelligently prune the number of candidate positions that have to be investigated in order to limit the number of computations to be performed. developing further on these ideas, an optimisation strategy is also proposed here that quickly selects the high-probability candidate positions, thereby limiting the computation time.

figure 1. maritime spatial plan of belgian territorial waters, showing the very dense occupation of these waters by different actors and for different economic activities. this paper considers the surveillance and patrol of offshore wind farms, indicated on the map as the area with a red overlay (source: belgian federal public service – health, food chain safety and environment).
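the spatio-temporal memory mentioned above can be pictured with a minimal python sketch. the constant-velocity extrapolation and all names used here are illustrative assumptions, since the paper does not detail its implementation:

```python
from dataclasses import dataclass

@dataclass
class AgentTrack:
    """Last state received from a teammate (fields are illustrative)."""
    x: float      # position (m)
    y: float
    vx: float     # velocity estimate (m/s)
    vy: float
    stamp: float  # time of last message (s)

def predict(track: AgentTrack, now: float) -> tuple[float, float]:
    """Constant-velocity extrapolation of a teammate's position,
    used when the network link is down and no fresh data arrive."""
    dt = now - track.stamp
    return track.x + track.vx * dt, track.y + track.vy * dt

# example: a message received at t = 10 s, queried at t = 13 s
print(predict(AgentTrack(100.0, 50.0, 2.0, -1.0, 10.0), 13.0))  # (106.0, 47.0)
```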
3. methodology

3.1. overall framework

the proposed methodology draws inspiration from behaviour-based control frameworks [23], in which multiple behaviours actively work together to control the robot, or in this case the ums. the main problem in behaviour-based control is how to synergise the different individual behaviours into a consistent and optimal global behaviour for the robotic agent. this requires the choice of so-called weight parameters that lead to the expected global behaviour. however, finding these weight parameters is a non-trivial task. therefore, this study proposes an optimisation scheme to find the optimal weights, taking into consideration two objectives: a) increasing the global coverage (and thereby increasing the acquisition of new knowledge about the environment) and b) minimising the danger level (thereby minimising the likelihood of the vessel capsizing). a major design issue for the development of such an optimisation scheme is that the weight parameters to be optimised are subject to a large number of environmental factors, such as visibility and wave height. therefore, for the present study, a dual approach was adopted.

• at the offline learning stage, depicted by algorithm 1, an optimisation process was repeatedly run to find the optimal weight parameters 𝑤opt for multiple environmental conditions:

𝑤opt = arg min𝑤 𝜙(𝑤, 𝛼, 𝑥, 𝑦, 𝜃, 𝜐, 𝛾, 𝑣m, 𝜃m, 𝑤h, 𝑤𝜃, 𝑜𝑚, 𝜆) , (1)

with the following parameters:
o 𝑤 represents the weight parameters to be optimised.
o 𝛼 is the number of agents.
o (𝑥, 𝑦) is the position of the agents in a metric grid.
o 𝜃 is the orientation of the agents in radians.
o 𝜐 is the visibility in metres. this is a function of the sensorial visibility (which is considered to be static, as the ums sensor package does not change during a mission) and the meteorological visibility, which is dynamic, as the weather conditions may change throughout a mission.
o 𝛾 is the field of view (rad) of the sensors on board the ums. in this implementation, the sensors are always assumed to be front facing (although the field of view can be set to 360°).
o 𝑣m is the maximum velocity (m/s) that can be reached by the ums.
o 𝜃m is the maximum turning rate (rad/s) that can be achieved by the different ums.
o 𝑤h is the wave height (m).
o 𝑤𝜃 is the wave orientation (rad).
o 𝑜𝑚 is an obstacle map, expressed as a probability density function giving the probability of finding an obstacle.
o 𝜆 is a dimensionless parameter regulating the relative importance of coverage maximisation and the minimisation of the risk of capsizing.

the parameters of the optimisation function 𝜙 and the function itself are further explained in section 3.3. for this optimisation process, the classic nelder–mead simplex algorithm [24] was used. this process typically takes a long time (a few days, depending on the granularity or resolution requested); for this reason, section 3.4 introduces an accelerated optimisation scheme. at the end of this process, the resulting data were stored in a database for later retrieval (during the online stage).

• at the online stage, the correct weight parameters for the environmental conditions at hand are retrieved from the database and applied directly to the same optimisation function used before, as depicted by algorithm 2.

in the following section, both parts of the optimisation scheme will be discussed in detail.
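before discussing the two stages in detail, the structure of the offline stage can be sketched schematically in python. the toy coverage and danger terms below are invented stand-ins for the paper's simulation-based cost 𝜙 and are not the actual simulator; only the use of a nelder–mead search over the weight vector reflects the text:

```python
import numpy as np
from scipy.optimize import minimize

def phi(w, env):
    """Stand-in for the simulation-based cost phi(w, alpha, ..., lambda):
    notionally runs a mission with behaviour weights w under the
    environmental conditions env and returns 1/f_c + lambda * f_d."""
    coverage = 1.0 / (1.0 + np.abs(w).sum())   # toy coverage score f_c
    danger = float((w - 0.3) @ (w - 0.3))      # toy danger score f_d
    return 1.0 / max(coverage, 1e-9) + env["lam"] * danger

env = {"alpha": 4, "wave_h": 1.5, "lam": 0.5}  # illustrative conditions
res = minimize(phi, x0=np.full(6, 0.5), args=(env,), method="Nelder-Mead")
w_opt = res.x  # stored in the database for these conditions
print(round(res.fun, 3), w_opt.round(2))
```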
3.2. offline optimisation

algorithm 1 depicts the offline optimisation scheme. as explained, its objective is to develop a database with the optimal weight parameters for each possible combination of environmental factors. this study has focused on four main factors that have been experimentally shown to have an important impact on the choice of the different weight parameters: the number of assets 𝛼, the visibility 𝜐, the wave height 𝑤h and the wave direction 𝑤𝜃.

algorithm 1. offline optimisation scheme.

with regard to the number of assets 𝛼, fleets of between two and ten unmanned systems have been considered for this study. the reason why this number cannot be scaled up further is that the methodology relies on an analysis of the localisation and sensor data from all other assets and aims to predict the outcome of moving in a number of directions for each of these assets, which makes it an 𝒪(𝑁²) problem. as a result, increasing the number of assets above ten leads to prohibitively long computation times, at least for the non-optimised version of the algorithm (see section 3.4 for details). concerning visibility, as this study is based on the use of small vessels (which have a minimal height and therefore a limited view over the waves), the maximum visibility range is set to 1000 m. in terms of wave height, the database considers wave heights up to 10 m, even though the simulations show that the danger level for such large wave heights is very high, and thus the seaworthiness of the ums considered in this implementation is not assured.
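the offline database and its retrieval in step 1 of algorithm 2 can be pictured as follows; the grid values and the nearest-neighbour fallback (the paper interpolates instead) are illustrative assumptions:

```python
import itertools

def optimise_weights(alpha, v, wh, wt):
    """Placeholder for one offline Nelder-Mead run (see sketch above)."""
    return [0.5] * 6  # dummy weight vector

# illustrative condition grid: 2-10 agents, visibility up to 1000 m,
# wave heights up to 10 m, a handful of wave directions (deg)
conditions = itertools.product(range(2, 11),
                               [250, 500, 1000],
                               [0.5, 1, 2, 4, 6, 8, 10],
                               [0, 45, 90, 135, 180])
database = {key: optimise_weights(*key) for key in conditions}

def lookup(alpha, v, wh, wt):
    """Online step 1: exact match if stored, otherwise the nearest
    stored conditions (a simplification of the paper's interpolation)."""
    key = (alpha, v, wh, wt)
    if key in database:
        return database[key]
    nearest = min(database, key=lambda k: (k[0] - alpha) ** 2
                  + ((k[1] - v) / 1000) ** 2
                  + ((k[2] - wh) / 10) ** 2
                  + ((k[3] - wt) / 180) ** 2)
    return database[nearest]

print(lookup(4, 800, 1.5, 30))
```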
3.3. online optimisation

algorithm 2 depicts the online optimisation scheme, which coincides with the optimisation function 𝜙 of algorithm 1. each step of the pseudo-code algorithm is explained here in detail:

algorithm 2. online optimisation scheme.

1. first, the relevant weights are extracted from the database. if no exact match can be found, an interpolation is performed taking into consideration the closest matching conditions in the database developed during the offline stage.
2. the assets perform an initial communication to get to know each other's position, and an empty coverage map (𝑐𝑚) is constructed (note that no a priori knowledge is assumed). the robotic assets collectively build up a world model (a coverage map indicating areas they have visited and an obstacle map showing areas where they have found obstacles), which is maintained in memory by each of them in order to cope with network outages. this world model is initially empty; the only information the assets have at the start is each other's position and the boundaries of the working area.
3. the main loop for the simulation timer is created.
4. all the ums in the fleet are interrogated.
5. a set of candidate positions (𝑥c, 𝑦c) that the ums is able to move to is selected, depending on the starting position (𝑥0, 𝑦0), the orientation 𝜃 and the maximum velocity 𝑣max(𝑣m, 𝜃m) of the ums.
6. all possible candidate positions are explored.
7. the new information that can be retrieved by moving from the starting position (𝑥0, 𝑦0) to the new position (𝑥, 𝑦) is assessed. this is achieved by adopting a visibility model, indicating, through the visibility 𝜐 and the sensor field of view 𝛾, the probability of detecting an object as a result of the vessel's orientation. the adopted visibility model assumes a mix of infrared, visual and lidar-based sensing and draws upon the heuristically established sensor models of lahouli et al. [25] (for infrared sensors) and balta et al. [26] (for visual and lidar sensors). figure 2 provides an example of a visibility model for a vessel that is oriented at a 45° angle at the position (0,0). this visibility model is compared to the coverage map (𝑐𝑚), resulting in a local map 𝑝1, which can be regarded as a heat map indicating the best locations to move to in order to obtain the maximum amount of new data (i.e. to maximally increase the total value of the coverage map).

figure 2. visibility model for a vessel that is oriented at a 45° angle at the position (0,0).

8. in order to maximise the chances of finding threats, it is better to move fast. however, the vessel should not move too fast, because this would not be fuel efficient and could lead to incidents. therefore, another function generates a local heat map favouring a compromise vessel velocity 𝑣.
9. vessels are not able to change their orientation 𝜃 suddenly. therefore, another 'behaviour' generates a local heat map that avoids sharp turns.
10. small vessels are extremely susceptible to waves. both the wave height 𝑤h and the wave direction 𝑤𝜃 play an important role, and these need to be carefully aligned with the vessel speed and orientation. in order to assess this, an empirical 'wave function' was compiled, based on sailors' experiences set out in the literature, that expresses the danger level related to waves. this wave function is expressed as

𝜙wave = (1 − 𝑦) 𝑣 / 𝑤h , (2)

with 𝑦 defined as

𝑦 = 0.35 𝑥⁶ − 3.5 𝑥⁵ + 12.74 𝑥⁴ − 20.75 𝑥³ + 14.36 𝑥² − 2.9 𝑥 + 0.1 . (3)

(a hedged code sketch of this wave function is given at the end of this section.) figure 3 provides an example of the wave function for a vessel at location (0,0) with incoming waves from the north. as can be seen, the ideal orientation for the vessel (highest value of 𝜙wave) is slightly inclined but nearly head on to the waves. orientations that are to be avoided (lowest value of 𝜙wave) are those with waves coming from the side or from behind.

figure 3. wave function for a vessel at location (0,0) and with incoming waves from the north.

11. it is important that vessels do not run into any detected obstacles. the ums therefore collectively create and share an obstacle map (𝑜𝑚) and steer away from objects on this map.
12. it would be inefficient for multiple agents in the fleet to investigate the same area. therefore, the swarm optimisation behaviour seeks to maintain an adequate distance between the agents.
13. the different local heat maps are combined into a single map 𝑝 using the weights that were calculated previously (in the offline step).
14. an extra check is performed in order to ensure that the ums do not stray away from the designated surveillance area.
15. an extra check is made in order to avoid revisiting recent locations. this is required not only to speed up the convergence but also to avoid getting stuck at local minima. therefore, a trajectory memory is maintained and checked for pruning the local heat map 𝑝.
16. on the local heat map 𝑝, the optimal position (𝑥𝑏, 𝑦𝑏) is located.
17. all possible positions are then checked.
18. the vessel is steered towards the optimal position.
19. the danger level for moving to this new position is estimated based on the wave function. the danger level is here defined as 𝑑𝑎𝑛𝑔𝑒𝑟 = 1 − 𝜙wave.
20. the ums performs an update of its sensing cycle, which will update the coverage map as new information is obtained.
21. the iteration over all agents ends.
22. the mean coverage score 𝑓c is recorded.
23. the total (summed) danger score 𝑓d is recorded. for reasons of normalisation, it is divided by the number of assets 𝛼.
24. the temporal loop ends.
25. the coverage needs to be maximised while the danger level is minimised. therefore, the objective function to be minimised is defined as

𝑓 = 1/𝑓c + 𝜆 𝑓d . (4)

the first term of the objective function ensures that the coverage is maximised, while the second term ensures that the danger level is minimised. the parameter 𝜆 regulates the relative importance accorded to both aspects. this parameter is dependent on the type of vessel used. for smaller ums, sea waves present a much higher risk, so 𝜆 should be higher; for larger vessels, 𝜆 can be reduced in order to maximise the coverage mapping more rapidly.
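as announced in step 10, here is a hedged python sketch of the wave function (2)-(3). the paper does not spell out the normalisation of 𝑥 or the exact grouping of the terms in (2), so 𝑥 is read here as the vessel-versus-wave heading scaled to [0, 1] (0 = head-on) and (2) as (1 − 𝑦)·𝑣/𝑤h; both readings are interpretations:

```python
def phi_wave(x, v, w_h):
    """Empirical wave-danger term of equations (2)-(3).
    x: relative vessel-versus-wave heading, assumed scaled to [0, 1]
       (0 = head-on); v: vessel speed (m/s); w_h: wave height (m)."""
    y = (0.35 * x**6 - 3.5 * x**5 + 12.74 * x**4
         - 20.75 * x**3 + 14.36 * x**2 - 2.9 * x + 0.1)  # equation (3)
    return (1.0 - y) * v / w_h                           # equation (2), as read here

def danger(x, v, w_h):
    """Danger level used in step 19 of algorithm 2."""
    return 1.0 - phi_wave(x, v, w_h)

# the polynomial dips below zero near x ~ 0.17, so phi_wave peaks for a
# slightly inclined, nearly head-on course, as figure 3 describes
for x in (0.0, 0.17, 0.5, 0.8):
    print(x, round(phi_wave(x, v=1.0, w_h=2.0), 3))
```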
3.4. computational speed optimisation

as can be seen from the definition of algorithm 2, the computation time rises rapidly with the number of unmanned systems considered. not only do all of these assets require a separate evaluation (step 4 in algorithm 2), but the complexity of many of the subprocesses (e.g. swarm optimisation) also increases with the number of agents. the main culprits for the long computation time are the number of candidate positions that have to be evaluated (step 5 in algorithm 2) and the sometimes high number of iterations required for the convergence of the nelder–mead simplex algorithm used for solving (1). the adopted optimisation methodology is geared primarily towards an intelligent pruning of the candidate positions, as follows:
• in the first phase, only a very limited subset of the original candidate positions is selected. this is done by downscaling the selected candidate positions to a lower-resolution metric grid.
• in the second phase, an analysis (as described in algorithm 2, lines 6 to 17) is performed on the downscaled candidate positions.
• a local subgrid is defined at the original resolution around the resulting position.
• for all candidate positions within this local subgrid, the analysis (as described in algorithm 2, lines 6 to 17) is performed again, and a final position is selected.

the double loop may seem to add extra complexity and computation time, but in practice it avoids the evaluation of a large number of candidate positions that have no viable chance of being selected. indeed, as the local maps are mostly continuous functions, it makes sense to evaluate them first at a lower resolution and then to scale up. furthermore, the convergence settings of the nelder–mead simplex algorithm were optimised to match the candidate-position pruning approach.
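a minimal python sketch of this coarse-to-fine pruning is given below; the grid spacings and the single-peak toy heat map are illustrative choices, not the paper's parameters:

```python
import numpy as np

def best_position(score, area, coarse=10):
    """Two-pass candidate pruning (section 3.4): evaluate a coarse grid
    first, then refine on a dense sub-grid around the coarse winner.
    score maps (x, y) -> heat-map value p; area = (xmax, ymax) in m."""
    xmax, ymax = area
    # pass 1: sparse grid at 'coarse'-metre spacing
    xs, ys = np.arange(0, xmax, coarse), np.arange(0, ymax, coarse)
    vals = [(score(x, y), x, y) for x in xs for y in ys]
    _, cx, cy = max(vals)
    # pass 2: dense 1-m grid around the coarse optimum
    xs = np.arange(max(cx - coarse, 0), min(cx + coarse, xmax) + 1)
    ys = np.arange(max(cy - coarse, 0), min(cy + coarse, ymax) + 1)
    vals = [(score(x, y), x, y) for x in xs for y in ys]
    _, bx, by = max(vals)
    return bx, by

# toy heat map with a single peak at (37, 12): the two-pass search finds it
peak = lambda x, y: -((x - 37) ** 2 + (y - 12) ** 2)
print(best_position(peak, (100, 100)))  # (37, 12)
```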
4. validation

4.1. quantitative validation

for the validation of the proposed approach, an application was selected that is used for the surveillance of belgian offshore wind farms, which cover an area of around 10 km × 30 km that needs to be patrolled. however, the proposed methodology would also be very useful for a maritime search and rescue [27] or a fishery control scenario.

in order to validate the methodology, it was compared to five state-of-the-art solutions:
• a random search, where each agent adopts a completely random movement pattern,
• a distributed random search, where the search area is subdivided into equal parts and each agent adopts a random search pattern within the designated subzone,
• a lawnmower search, where each agent uses a movement pattern typically adopted by robotic lawnmowers: moving in straight lines and turning a random number of degrees when approaching boundaries,
• a distributed lawnmower search, where the search area is subdivided into equal parts and each agent adopts a lawnmower search pattern within the designated subzone, and
• distributed greek patterns. this is the search and surveillance approach typically adopted by manned vessels, which has been proven to be very efficient for rapid area coverage. moreover, by subdividing the search area and distributing the search tasks among multiple agents, this approach is quite well suited to maritime coverage optimisation. figure 4 provides an example of the greek pattern.

figure 4. example of distributed greek patterns.

one disadvantage of all these state-of-the-art approaches is that they do not take into consideration the danger the waves pose to the vessel, which is an integral part of the proposed solution. in order to further validate the optimisation scheme, the results of a non-optimised, nominal version (using a static initial estimate for the weight parameters 𝑤) were also compared with the optimised approach.

figure 5 presents the results in terms of coverage in a simulation with four agents present. it can be clearly seen that the presented approach (denoted as optimal and indicated in dark red) achieves the highest overall coverage. without using weight optimisation, the distributed greek patterns approach outperforms the baseline nominal approach presented here; all other approaches achieve a performance that is far lower. it can also be observed that the coverage results do not always increase monotonically. this can be explained by the fact that at each iteration the existing coverage data are 'aged' (in practice, the coverage map is multiplied by 0.99) in order to represent the fact that older data have become less valid. the result is that with a limited number of agents, it becomes very difficult to maintain a high overall coverage score. these results are to be expected, as the random search and lawnmower search approaches are quite simplistic methodologies, whereas the distributed greek patterns approach has a proven track record for these kinds of applications. still, using weight optimisation, the methodology proposed in this study succeeds in achieving a higher coverage score.

figure 5. evolution of the relative coverage of a surveillance area using seven different approaches.

however, the major strength of this approach can be seen in figure 6, which indicates the danger level of executing a mission using each of the approaches. the blue portion of the bar chart indicates the mean danger level, whereas the red portion indicates the maximum danger level attained during a particular mission. clearly, both are important for assessing the risk of incidents. it can be clearly seen that both the nominal and the optimal proposed methodology achieve a danger level that is significantly lower than in the other approaches. moreover, for the optimal approach, there is little difference between the mean and the maximum danger levels, indicating that the methodology succeeds in maintaining the risk at a constant and low level.

4.2. scaling and timing

in order to assess the effects of the speed optimisation methodology explained in section 3.4, figure 7 shows the evolution of the processing time, relative coverage and relative danger level for the proposed approach with and without the application of the candidate location pruning methodology. as is clearly indicated in figure 7, the computation time is drastically reduced by the incorporation of the candidate position pruning methodology; in general, the accelerated approach is about 9 times faster than the baseline approach. this clearly shows that the proposed methodology of first analysing a limited set of points and then providing a detailed analysis of a dense point set in just one small area has a highly beneficial impact on the global processing time. this also enables the methodology to be used for an increased number of assets, even though the global algorithm still scales slightly worse than 𝒪(𝑁).

an important aspect to assess is whether there would be any loss of quality using the accelerated approach. in order to evaluate this, the coverage mapping and danger levels were recorded for both methodologies and for the different numbers of agents, as shown in figure 7. surprisingly, the accelerated approach performed even better than the baseline approach, even though the differences were not large. this may seem counter-intuitive at first sight, because the accelerated approach evaluates fewer candidate positions and thus, compared with the baseline approach, always has a higher risk of being trapped at local minima. this phenomenon was investigated and found to be caused by the better convergence properties of the accelerated approach. indeed, as the accelerated approach requires fewer time-consuming local map evaluation steps, the nelder–mead simplex optimisation algorithm achieves (slightly) lower values for the optimisation function 𝜙 when using the accelerated method. the 'side effect' of the accelerated optimisation described in section 3.4 is therefore also an improved (higher) coverage mapping and a slightly improved (lower) danger level.
5. conclusions

in this paper, an approach towards distributed coverage optimisation for a maritime surveillance application has been presented. the approach is based upon a mix of offline learning and online optimisation. in order to remedy the traditional problem related to the excessive processing time of multi-agent global planning methodologies, an approach for the multi-scale selection of candidate locations has also been proposed. the methodology was validated by comparing it in simulation to multiple state-of-the-art approaches. this comparison demonstrated that the proposed approach performed well in terms of coverage mapping and very well in terms of minimising the danger of capsizing for small, unmanned vessels. moreover, the validation of the performance of the accelerated approach on multi-agent systems demonstrated that the computation time can be drastically reduced while the coverage mapping performance is increased. a next step will be to implement and test the system on the real-life ums that are planned to be deployed to patrol belgian territorial waters.

figure 6. relative danger level for executing a maritime surveillance mission using seven different approaches.
figure 7. evolution of the processing time, mean relative coverage and mean relative danger level for the proposed approach with and without the application of the candidate location pruning methodology.

acknowledgement

the research presented in this paper has been funded by the belgian royal higher institute of defence, in the framework of the dap19/08 (marsur) project, and by the flemish agency for innovation and entrepreneurship, in the framework of the blue cluster project ssave (hbc.2019.0045).

references

[1] l. snidaro, i. visentini, k. bryan, fusing uncertain knowledge and evidence for maritime situational awareness via markov logic networks, information fusion 21 (2015), pp. 159-172. doi: 10.5281/zenodo.22949
[2] e. schwarz, d. krause, h. daedelow, s. voinov, near real time applications for maritime situational awareness, 36th international symposium on remote sensing of environment, berlin, germany, 11-15 may 2015, pp. 1-5. doi: 10.5194/isprsarchives-xl-7-w3-999-2015
[3] r. csernatoni, constructing the eu's high-tech borders: frontex and dual-use drones for border management, european security 27 (2018), pp. 175-200. doi: 10.1080/09662839.2018.1481396
[4] d. doroftei, a. matos, g. de cubber, designing search and rescue robots towards realistic user requirements, applied mechanics and materials, vol. 658 (2014), pp. 612-617. doi: 10.4028/www.scientific.net/amm.658.612
[5] d. s. lópez, g. moreno, j. cordero, j. sanchez, s. govindaraj, m. m. marques, v. lobo, s. fioravanti, a. grati, k. rudin, m. tosa, a. matos, a. dias, a. martins, j. bedkowski, h. balta, g. de cubber, interoperability in a heterogeneous team of search and rescue robots, in: search and rescue robotics from theory to practice, g. de cubber, d. doroftei (editors), intech, rijeka, 2017, isbn 978-953-51-3375-9, pp. 93-125. doi: 10.5772/intechopen.69493
[6] c. yan, t. zhang, multi-robot patrol: a distributed algorithm based on expected idleness, int. j. of advanced robotic systems 13 (2016), pp. 1-12. doi: 10.1177/1729881416663666
[7] d. aksaray, k. leahy, c. belta, distributed multi-agent persistent surveillance under temporal logic constraints, ifac-papersonline 48 (2015), pp. 174-179. doi: 10.1016/j.ifacol.2015.10.326
[8] m. baseggio, a. cenedese, p. merlo, m. pozzi, l. schenato, distributed perimeter patrolling and tracking for camera networks, proc. of the 49th ieee conference on decision and control (cdc), atlanta, usa, 15-17 december 2010, pp. 2093-2098.
doi: 10.1109/cdc.2010.5717883
[9] g. de cubber, r. haelterman, optimized distributed scheduling for a fleet of heterogeneous unmanned maritime systems, proc. of the international symposium for measurement and control in robotics (ismcr), 19-21 september 2019, pp. a3-2-1-a3-2-7. doi: 10.1109/ismcr47492.2019.8955727
[10] m. brambilla, e. ferrante, m. birattari, m. dorigo, swarm robotics: a review from the swarm engineering perspective, swarm intelligence 7 (2013), pp. 1-41. doi: 10.1007/s11721-012-0075-2
[11] m. dorigo, t. stutzle, ant colony optimization: overview and recent advances, in: m. gendreau, j. y. potvin (eds.), handbook of metaheuristics, international series in operations research & management science, vol. 146, springer, boston, ma, usa, 2019. doi: 10.1007/978-1-4419-1665-5_8
[12] a. pierson, l. c. figueiredo, l. c. pimenta, m. schwager, adapting to sensing and actuation variations in multi-robot coverage, int. j. of robotics research 36 (2017), pp. 337-354. doi: 10.1177/0278364916688103
[13] l. c. a. pimenta, m. schwager, q. lindsey, v. kumar, d. rus, r. c. mesquita, g. a. s. pereira, simultaneous coverage and tracking (scat) of moving targets with robot networks, springer, berlin heidelberg, 2010, isbn 978-3-642-00311-0.
[14] c. gao, y. kou, z. li, a. xu, y. li, y. chang, optimal multirobot coverage path planning: ideal-shaped spanning tree, mathematical problems in engineering, vol. 2018, 2018, article id 3436429. doi: 10.1155/2018/3436429
[15] a. c. kapoutsis, s. a. chatzichristofis, e. b. kosmatopoulos, darp: divide areas algorithm for optimal multi-robot coverage path planning, j. of intelligent & robotic systems 86 (2017), pp. 663-680. doi: 10.1007/s10846-016-0461-x
[16] j. cortes, coverage optimization and spatial load balancing by robotic sensor networks, ieee transactions on automatic control 55 (2010), pp. 749-754. doi: 10.1109/tac.2010.2040495
[17] b. xin, g. gao, y. ding, y. zhu, h. fang, distributed multi-robot motion planning for cooperative multi-area coverage, proc. of the int. conf. on control automation (icca), ohrid, macedonia, 3-6 july 2017, pp. 361-366. doi: 10.1109/icca.2017.8003087
[18] j. valente, j. del cerro, a. barrientos, d. sanz, aerial coverage optimization in precision agriculture management: a musical harmony inspired approach, computers and electronics in agriculture 99 (2013), pp. 153-159. doi: 10.1016/j.compag.2013.09.008
[19] t. fabbri, r. vicen-bueno, r. grasso, g. pallotta, l. m. millefiori, l. cazzanti, optimization of surveillance vessel network planning in maritime command and control systems by fusing metoc & ais vessel traffic information, proc. of oceans 2015, genova, italy, 18-21 may 2015, pp. 1-7.
[20] x. han, s. li, x. pang, coverage optimization algorithm of wireless sensor network, in: advances in future computer and control systems, advances in intelligent and soft computing, vol. 159, d. jin, s. lin (editors), springer, berlin, heidelberg, 2012, isbn 9783642293870, pp. 32-40.
[21] s. mirjalili, s. m. mirjalili, a. lewis, grey wolf optimizer, advances in engineering software 69 (2014), pp. 46-61. doi: 10.1016/j.advengsoft.2013.12.007
[22] z. wang, h. xie, z. hu, d. li, j. wang, w. liang, node coverage optimization algorithm for wireless sensor networks based on improved grey wolf optimizer, j. of algorithms & computational technology 13 (2019), pp. 1-15. doi: 10.1177/1748302619889498
[23] d. doroftei, g. de cubber, e. colon, y. baudoin, behavior based control for an outdoor crisis management robot, proc. of the int. ws.
on robotics for risky interventions and environmental surveillance, brussels, belgium, 12-14 january 2009, 8 pp.
[24] j. c. lagarias, j. a. reeds, m. h. wright, p. e. wright, convergence properties of the nelder–mead simplex method in low dimensions, siam j. optimization 9 (1998), pp. 112-147. doi: 10.1137/s1052623496303470
[25] i. lahouli, e. karakasis, r. haelterman, z. chtourou, g. de cubber, a. gasteratos, r. attia, hot spot method for pedestrian detection using saliency maps, discrete chebyshev moments and support vector machine, iet image processing 12 (2018), pp. 1284-1291. doi: 10.1049/iet-ipr.2017.0221
[26] h. balta, j. velagic, g. de cubber, w. bosschaerts, b. siciliano, fast statistical outlier removal based method for large 3d point clouds of outdoor environments, proc. of the 12th ifac symposium on robot control syroco 2018, budapest, hungary, 27-30 august 2018, vol. 51, issue 22, pp. 348-353. doi: 10.1016/j.ifacol.2018.11.566
[27] g. de cubber, d. doroftei, k. rudin, k. berns, d. serrano, j. sanchez, s. govindaraj, j. bedkowski, r. roda, search and rescue robotics from theory to practice, intech, rijeka, 2017, isbn 978-953-51-3375-9.

implementations and pilot comparison of primary low intensity shock calibration

acta imeko, issn: 2221-870x, november 2016, volume 5, number 3, 70-75

qiao sun, hong-bo hu
national institute of metrology, beisanhuandonglu 18, 100029 beijing, china

section: research paper
keywords: metrology; comparison; primary shock calibration; low intensity shock acceleration
citation: qiao sun, hong-bo hu, implementations and pilot comparison of primary low intensity shock calibration, acta imeko, vol. 5, no. 3, article 11, november 2016, identifier: imeko-acta-05 (2016)-03-11
section editor: konrad jedrzejewski, warsaw university of technology, poland
received january 13, 2016; in final form june 16, 2016; published november 2016
copyright: © 2016 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was supported by the general administration of quality supervision, inspection and quarantine, the people's republic of china
corresponding author: qiao sun, e-mail: sunq@nim.ac.cn

abstract

this paper first presents two main technical concerns for a possible low intensity shock comparison: the variety of primary shock calibration systems and the feasibility of the comparison artifact. for the primary calibration system, the mechanical excitation includes hammer-anvil collision, pneumatically driven projectile impact and the hopkinson bar. the shock pulses generated by the first two types, with air bearings, have a smooth monopole shape, but those of the third type have a dipole shape. the comparison artifact, a standard accelerometer of the back-to-back type with a charge amplifier, constitutes an accelerometer measuring chain whose nonlinearity of amplitude and phase frequency responses is investigated. based on the nonlinear behaviour of the comparison artifact in the frequency domain and the difference in the spectrum ranges of the mechanical excitations of the calibration systems, strict comparison conditions had to be laid down for the measurement of shock sensitivity at specific acceleration levels and pulse durations. the pilot comparison, coded as apmp.auv.v-p1, was successfully organized by the asia pacific metrology programme. comparison results for both monopole excitation and dipole excitation are shown with the expanded uncertainty. the completion of this pilot comparison can serve as part of the basis for a planned key comparison targeted at a low intensity shock range at consultative committee level.
1. introduction

accurate low intensity shock acceleration calibration is of special importance for the automobile and civil industries worldwide. the physikalisch-technische bundesanstalt (ptb) set the first example of a successful implementation of low intensity shock calibration by laser interferometry [1]. recent years have witnessed the establishment of primary low intensity shock calibration standards and relevant scientific research projects at different metrology institutes of the apmp (asia pacific metrology programme), such as the national metrology institute of japan (nmij) [2], the industrial technology research institute (itri) of taiwan [3], the national institute of metrology of thailand (nimt), and the national institute of metrology (nim), china [4]. the international organization for standardization (iso) standard 16063-13 [5] describes the primary low intensity shock calibration method, algorithm and technical requirements, with an implementation example of a rigid-body collision with excitations from 100 m/s² to 5000 m/s² and a pulse duration of less than 10 ms. however, in the metrological field of shocks there has been no formal comparison, either at consultative committee (cc) level or at regional metrology organization technical committee (rmo tc) level; the unification of the shock acceleration quantity was therefore short of direct supporting evidence. the consultative committee for acoustics, ultrasound and vibration (ccauv) has already planned to conduct key comparisons for shock in its strategic planning programme for 2013 to 2023 [6], possibly one key comparison for low intensity and the other for high intensity. during the meeting of the apmp tcauv in 2011, the decision was taken to make preparations for a pilot comparison targeted at low intensity shock acceleration.
alongside the scientific investigations of shock standards by individual institutes of the apmp, the guest scientist research of the tc initiative project, carried out as an intra-apmp collaboration on this topic, and the dedicated sessions of the tcauv workshops in japan in 2011 and in thailand in 2014 made a direct contribution to the successful completion of this apmp tcauv pilot comparison of low intensity shock, coded as apmp.auv.v-p1, with participants from nim, itri, nimt and spektra (a german dkd laboratory) [7]. in this paper, the implementations of three different types of shock excitation suitable for laser interferometry are described, and the rationale for the strict comparison conditions is explained in detail. the comparison results are presented in two groups: shock sensitivities under monopole excitation from 500 m/s² to 5000 m/s², and shock sensitivities under dipole excitation at 1000 m/s². the organization of this pilot comparison by the apmp tcauv has proved, among other things, the credibility of the primary low intensity shock calibration capability of the participants.

2. implementations of a low intensity shock calibration system

the primary low intensity shock calibration system consists of three main parts: the shock excitation, the laser interferometer, and the signal acquisition and data processing. for the laser interferometer, either a homodyne or a heterodyne instrument can meet the requirements of a primary measurement standard of acceleration for the calibration system; the technology is available in previous research, such as [1]-[5]. signal acquisition and data processing algorithms and procedures are also covered in detail in [1]-[5]. the different shock excitation devices for primary laser interferometry calibration are of particular interest for the investigation of the pilot comparison in that their frequency spectra can be quite different. therefore, the three different types of shock excitation devices employed in the pilot comparison are described.

2.1. hammer-anvil excitation device

the shock excitation device based on a hammer-anvil mechanical collision is called a shock machine, and its basic working principle is well explained in [5]. actual implementations vary: ptb uses a spring unit as exciter to provide the original shock force [1], nmij uses an air pressure exciter [2] and itri uses an electromagnetic exciter [3]. nim's shock excitation device consists of an electromagnetic exciter and a pneumatic exciter; this combination as the mechanical power supply of the excitation device can deliver a wide range of shock acceleration levels and wider pulse durations [4]. all four implementations are mounted in a horizontal position. it is worth noting that an airborne hammer and anvil with a high resonant frequency, as the moving parts of the excitation device, are a precondition for generating a high-quality monopole shock pulse. the thin, stiff air films largely reduce mechanical disturbances from other parts of the excitation device and avoid asymmetric impact forces on the anvil. as a result, the anvil can move rectilinearly with less rotational motion and produce repeatable shock pulses for calibration. figure 1 gives an instance of the primary shock calibration system based on the hammer-anvil excitation device from nim, and the structure of the shock excitation device is shown in figure 2. in this figure, the shock exciter is the part that supplies the original mechanical power in the shock excitation device.

figure 1. photo of nim's primary shock calibration system based on a hammer-anvil excitation device.
figure 2. model of the shock excitation device.
the amount of output power is the decisive factor determining the acceleration level and pulse duration of the generated shock pulse. besides, good controllability of the power supply contributes to good repeatability of the generated acceleration level. two shock exciters are deployed. the electromagnetic exciter is under precise control to produce an expected amount of power; its maximum output force is 9800 n. responding to a voltage input set by the controller, the electromagnetic exciter can drive the hammer at a speed from 0.08 m/s to 4.06 m/s, with a repeatability better than 1 %. however, limited by its maximum power, the electromagnetic exciter is only used to produce acceleration levels below 5000 m/s². the pneumatic exciter can produce a power supply corresponding to an air pressure range from 0.49 bar to 4.9 bar. by adjusting the air pressure inside the air tank and regulating the opening time of the electric valve, the pneumatic exciter can drive the hammer at speeds up to the limit of 5 m/s, which generates the highest shock acceleration levels, above 10 000 m/s².

the airborne hammer and anvil are the moving parts of the shock machine. they are made of titanium and aluminium alloy, respectively, each 30 mm in diameter and 200 mm in length; each titanium hammer and anvil weighs 1.27 kg. their first-order longitudinal resonant frequency, obtained by special tests, is about 12 khz. this resonant frequency limits the minimum shock pulse duration generated by the shock machine to about 0.5 ms, with most of its energy components in the low-frequency range. since the collision caused by their free motion and direct impact is a precondition for a high-quality shock pulse, both the hammer and the anvil are equipped with air bearings in such a way that their centrelines are aligned within a tolerance of ±0.2 mm. air bearings of 30 mm inner diameter from new way co. are used. stiff air films of less than 10 μm can significantly reduce the negative influences from resonances of other parts of the shock machine structure and, to a large extent, avoid asymmetric impact forces on the anvil. therefore, the anvil can move rectilinearly without rotational motion and produce repeatable shock pulses for calibration purposes.

the shock machine is based on the solid-body collision of hammer and anvil to produce the excitation shock pulses. but if the airborne hammer and anvil collide directly, strong resonant motions are caused, and the accelerometer mounted on the end surface of the anvil will provide unreliable output signals, which lead to inaccurate measurement results. for accurate calibration results, this adverse influence must be avoided by attaching a cushioning pad to the hammer. the cushioning pad, acting like a damping isolator that absorbs high-frequency noisy motions, serves as a shock pulse generator producing the desired pulse shapes with the expected acceleration level and pulse duration. this is achieved by applying three sets of cushioning pads of different materials. it is worth noting that cushioning pads of different materials, hardnesses or thicknesses can generate shock pulses of different shapes, acceleration levels or pulse durations, owing to their own damping properties.
therefore, each cushioning pad works well only when producing shock pulses at certain acceleration levels; beyond this working range, the generated pulse shape may be distorted and thus cause poor calibration results.

2.2. pneumatic projectile excitation device

the pneumatically driven projectile excitation device is a vertical implementation for good-quality monopole shock pulses in the pilot comparison. a projectile accelerated by pressurized air functions as the hammer. while the air pressure remains constant, the kinetic energy of the projectile can be controlled by a motor-driven mechanical stop that allows a precise adjustment of the projectile's starting position and thus of the distance over which it is accelerated. therefore, the repeatability of the shock pulse generated by this excitation device is good under fully automatic control of the target acceleration level. to be used as a mechanical excitation source for laser interferometry measurements, the mounting part of the accelerometer is equipped with an air bearing to reduce transverse motions caused by the impact of the projectile. figure 3 shows the primary shock calibration system based on a pneumatic projectile excitation device from spektra.

figure 3. photo of spektra's primary shock calibration system based on a pneumatic projectile excitation device.

2.3. hopkinson bar excitation device

the hopkinson bar excitation device is based on the wave propagation and reflection characteristics inside a long thin bar and can generate dipole shock pulses, normally in the acceleration range from 1000 m/s² to 100 000 m/s², as described in [5]. by careful determination of the dimensions and length of the titanium hopkinson bar and the application of a piezoelectric actuator as exciter, a shock acceleration range from 200 m/s² to 40 000 m/s² can be achieved; however, the pulse duration of the half-sine shape is narrower than 0.2 ms [8]. the main parts of the hopkinson bar excitation device are a hopkinson bar, a piezoelectric actuator and a reaction mass. when a driving voltage is applied to the actuator, the piezo-stack changes its length and a reaction force is generated by the reaction mass according to newton's 2nd law. this force acts as the input force on the end surface of the hopkinson bar. the acceleration generated at the other end of the bar can be quite accurate, because the driving voltage can be precisely controlled. figure 4 shows the primary shock calibration system based on the hopkinson bar excitation device from nimt.

figure 4. photo of nimt's primary shock calibration system based on a hopkinson bar excitation device.

3. pilot comparison

3.1. background of the comparison

the accurate measurement of low intensity shock acceleration is vital in certain applications, for example car crash tests. efforts to decrease the losses in human lives on the roads during the 1950s led to increased research into the biomechanics of head impact. a break-through was made with the introduction of the wayne state tolerance curve [9]. this curve was interpreted and a weighted injury criterion was developed; this criterion was later transformed into the head injury criterion (hic) to improve the crashworthiness of cars. the hic is a measure of the likelihood of head injury arising from an impact and can be used to assess safety related to vehicles. it is defined as

hic = max over t₁, t₂ of { [ 1/(t₂ − t₁) ∫ₜ₁^ₜ₂ a(t) dt ]^2.5 (t₂ − t₁) } , (1)

where t₁ and t₂ are any two arbitrary times during the acceleration pulse. in 1986, the time interval over which the hic is calculated was limited to 36 ms.
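to make the definition concrete, here is a brute-force python sketch of (1) for a sampled acceleration trace; this is an illustration only, with a(t) in units of g and the window capped at 36 ms as noted above:

```python
import numpy as np

def hic(t, a, max_window=0.036):
    """Head injury criterion of equation (1): maximise over all
    windows [t1, t2] (capped at 36 ms) on a sampled trace a(t).
    a is in units of g, t in seconds; O(N^2) brute force for clarity."""
    # cumulative trapezoidal integral of a(t) for fast window averages
    ca = np.concatenate(([0.0],
                         np.cumsum(np.diff(t) * 0.5 * (a[1:] + a[:-1]))))
    best = 0.0
    for i in range(len(t) - 1):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            mean_a = (ca[j] - ca[i]) / dt    # average acceleration on window
            best = max(best, mean_a**2.5 * dt)
    return best

# example trace: a 60 g half-sine pulse of 10 ms duration
t = np.linspace(0.0, 0.01, 201)
a = 60.0 * np.sin(np.pi * t / 0.01)
print(round(hic(t, a), 1))
```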
3. pilot comparison

3.1. background of the comparison
the accurate measurement of low intensity shock acceleration is vital in certain applications, for example car crash tests. efforts to reduce the loss of human lives on the roads during the 1950s led to increased research into the biomechanics of head impact. a breakthrough was made with the introduction of the wayne state tolerance curve [9]. this curve was interpreted and a weighted injury criterion was developed; the criterion was later transformed into the head injury criterion (hic) to improve the crashworthiness of cars. the hic is a measure of the likelihood of head injury arising from an impact; it can be used to assess vehicle safety and is defined as

\mathrm{hic} = \max_{t_1, t_2} \left\{ (t_2 - t_1) \left[ \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} a(t)\, \mathrm{d}t \right]^{2.5} \right\} ,   (1)

where t_1 and t_2 are any two times during the acceleration pulse and the shock acceleration a(t) is expressed in units of the standard gravity acceleration, as in the wayne state tolerance curve shown in figure 5. in 1986, the time interval over which the hic is calculated was limited to 36 ms. normally the variable is derived from the acceleration/time history of an accelerometer mounted at the centre of gravity of a dummy's head, as in the car crash test photo of figure 5, when the dummy is exposed to crash forces. the hic therefore includes the effects of both the head acceleration and its duration: large accelerations of up to 5000 m/s² may be tolerated for very short durations of less than 1 ms. a low intensity shock comparison is therefore well justified, with an acceleration range from 500 m/s² to 5000 m/s² and monopole shock pulse durations from 0.3 ms to 3 ms.
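a brute-force numerical evaluation of definition (1) can make the criterion concrete. the sketch below is a minimal illustration (not a certified crash-test procedure): it assumes a uniformly sampled trace with a(t) in units of g and restricts t2 − t1 to 36 ms, as in the hic36 variant:

```python
import numpy as np

def hic(t, a_g, max_window=0.036):
    """head injury criterion from a uniformly sampled acceleration trace.

    t: sample times (s); a_g: acceleration in units of g;
    max_window: largest allowed t2 - t1 (36 ms for hic36).
    """
    dt = t[1] - t[0]
    # running integral of a(t), so cum[j] - cum[i] approximates the integral on (t_i, t_j)
    cum = np.concatenate(([0.0], np.cumsum(a_g) * dt))
    w = int(round(max_window / dt))
    best = 0.0
    for i in range(len(t)):
        for j in range(i + 1, min(i + w, len(t)) + 1):
            T = (j - i) * dt
            avg = (cum[j] - cum[i]) / T          # mean acceleration on the window
            if avg <= 0.0:
                continue
            best = max(best, T * avg ** 2.5)
    return best

# example: a 5000 m/s^2 (about 510 g) half-sine pulse lasting 1 ms
t = np.arange(0.0, 0.003, 1e-5)
a = np.where(t < 1e-3, (5000 / 9.81) * np.sin(np.pi * t / 1e-3), 0.0)
print(f"hic of the example pulse: {hic(t, a):.0f}")
```

the exhaustive search over (t1, t2) mirrors the max in (1); for short pulses the optimal window is typically narrower than the pulse itself.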
3.2. comparison feasibility
it was previously found by ptb that different shock sensitivities of the same accelerometer, at the same level of shock acceleration, were obtained when it was exposed to the excitation of a hammer-anvil device and of a hopkinson bar device, respectively; the sensitivity difference was about 4 % at about 5000 m/s². figure 6 reveals the cause of this difference: the excitation by rigid body motion, i.e. the impact from a hammer-anvil or pneumatic projectile device, falls into a narrow low-frequency range of the spectrum, while the excitation by the hopkinson bar covers a wider spectral range from low to high frequencies, and the sensitivity magnitude frequency response of the accelerometer has only a narrow working range of constant sensitivity at low frequency. this effect plays an increasing role as the shock pulse duration decreases and the pulse consequently covers a wider frequency range, which finally results in an obvious sensitivity difference between the two excitations. therefore, pulse shape, pulse duration and acceleration level had to be restricted for the feasibility of the pilot low intensity shock comparison. hammer-anvil excitation and pneumatic projectile excitation generate monopole pulses and fall into the same group of rigid body motion; hopkinson bar excitation is treated as a different group. it should be noted that the pulse duration is defined as the time width between the rising edge point and the falling edge point at 10 % of the peak acceleration. for the monopole group, participants were to measure at the following acceleration levels (all values in m/s²): 500, 1000, 2000, 3000, 4000 and 5000; a series of 0.5 ms, 1 ms, 1.5 ms and 2 ms shock pulses was recommended, with a reference of 2 ms at an acceleration of 1000 m/s². for the dipole group, participants were to measure at 1000 m/s²; a series of 0.03 ms, 0.05 ms, 0.07 ms, 0.10 ms (reference), 0.15 ms and 0.20 ms was recommended.

3.3. comparison results
for the purpose of the comparison, the pilot laboratory selected an endevco 2270 accelerometer (sn: 10466) with a brüel & kjær charge amplifier 2692 (sn: 2752215) as the accelerometer chain, for which monitoring data covering 6 months were available and whose data had not been included in any published international cooperation work. for this comparison, nim used its hammer-anvil calibration system and its hopkinson bar calibration system; itri used its hammer-anvil calibration system; nimt used its hopkinson bar calibration system; and spektra used its pneumatic projectile calibration system and its hopkinson bar calibration system. the calibration results under monopole and dipole excitation are presented in tables 1 and 2. the weighted mean was agreed upon by all participants to calculate the pilot comparison reference values (pcrv) for the apmp.auv.v-p1 data; pcrvs were calculated separately at each acceleration or pulse duration point for the accelerometer chain. four typical degrees of equivalence with respect to the pcrvs are shown in figures 7 and 8. the degrees of equivalence also support the measurement uncertainties of the participants at the other acceleration levels and pulse durations.

4. conclusions
based on the investigation of the primary low intensity shock calibration technique and of the feasibility of a pilot comparison, the tcauv of apmp has successfully conducted a pilot comparison of shock acceleration sensitivity at low intensity shock accelerations from 500 m/s² to 5000 m/s². laser interferometry is a necessity as the measurement standard of the calibration system. three different types of mechanical shock excitation were employed by the participants, and the comparison results are divided into two groups by excitation pulse shape: the monopole and the dipole group. the reported sensitivities and associated uncertainties from the four participants nim, itri, nimt and spektra are used for the calculation of the mean values of the pilot comparison results and their associated uncertainties, as well as of the deviations from the mean values with associated uncertainties. the degrees of equivalence calculated from the measurement results of the four participants support the measurement uncertainties reported by them at all acceleration levels and pulse durations specified in the technical protocol. the successful completion of this pilot comparison can serve, among other things, as part of the basis for a planned key comparison, coded ccauv.v-k4, targeted at the low intensity shock range at ccauv.

acknowledgement
this work was supported in part by the general administration of quality supervision, inspection and quarantine of the people's republic of china under contract no. alc1201 and was partly funded by the apmp tc initiative project no. tci-2011-01-tcauv and by the apmp dec project 2014. the authors would like to give special thanks to colleagues at itri, nimt, nmij and spektra for their support and cooperation in this comparison and in the related research work and workshop activities.

figure 5. wayne state tolerance curve and photo of a car crash test in china.
figure 6. frequency responses of the accelerometer, the rigid body and the hopkinson bar.

table 1. calibration results of the participants for voltage sensitivities sva under monopole shock excitation with expanded relative uncertainty uc (k = 2).
acceleration (m/s²) | pulse duration (ms) | nim sva (mv/(m/s²)) | nim uc (%) | itri sva (mv/(m/s²)) | itri uc (%) | spektra sva (mv/(m/s²)) | spektra uc (%)
500  | 3.0 | 0.1967 | 0.5 | 0.1966 | 1.0 | 0.19727 | 0.5
1000 | 2.0 | 0.1968 | 0.5 | 0.1970 | 1.0 | 0.19746 | 0.5
2000 | 1.5 | 0.1970 | 0.5 | 0.1971 | 1.0 | 0.19749 | 0.7
3000 | 1.0 | 0.1973 | 0.5 | 0.1971 | 1.0 | 0.19755 | 0.7
4000 | 1.0 | 0.1973 | 0.5 | 0.1971 | 1.0 | 0.19735 | 0.7
5000 | 0.8 | 0.1973 | 0.5 | 0.1971 | 1.0 | 0.19734 | 0.7
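as an illustration of the agreed procedure, the snippet below computes an inverse-variance weighted mean and its standard uncertainty for one monopole point of table 1; this is the textbook form of a weighted reference value, while the exact treatment (including correlation handling) is the one defined in the comparison report [7]:

```python
import numpy as np

# monopole point 1000 m/s^2, 2.0 ms from table 1; uc is expanded (k = 2)
labs = ("nim", "itri", "spektra")
sva = np.array([0.1968, 0.1970, 0.19746])          # mV/(m/s^2)
u = np.array([0.5, 1.0, 0.5]) / 100.0 / 2.0 * sva  # standard uncertainties

w = 1.0 / u**2                                     # inverse-variance weights
pcrv = np.sum(w * sva) / np.sum(w)                 # weighted mean reference value
u_pcrv = 1.0 / np.sqrt(np.sum(w))
print(f"pcrv = {pcrv:.5f} mV/(m/s^2), u(pcrv) = {u_pcrv:.5f}")

for lab, s_i, u_i in zip(labs, sva, u):
    d = s_i - pcrv                                 # degree of equivalence
    # each lab contributes to the reference, hence the variance reduction
    U_d = 2.0 * np.sqrt(u_i**2 - u_pcrv**2)
    print(f"{lab:8s} d = {d:+.5f}  U(d) = {U_d:.5f}")
```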
table 2. calibration results of the participants for voltage sensitivities sva under dipole shock excitation with expanded relative uncertainty uc (k = 2).
acceleration (m/s²) | pulse duration (ms) | nim sva (mv/(m/s²)) | nim uc (%) | nimt sva (mv/(m/s²)) | nimt uc (%) | spektra sva (mv/(m/s²)) | spektra uc (%)
1000 | 0.20 | 0.1972 | 0.5 | 0.1976 | 0.6 | 0.19765 | 0.5
1000 | 0.15 | 0.1973 | 0.5 | 0.1977 | 0.6 | 0.19744 | 0.5
1000 | 0.10 | 0.1976 | 1.0 | 0.1976 | 0.7 | 0.19765 | 0.5
1000 | 0.07 | 0.1981 | 1.0 | 0.1976 | 1.0 | 0.19776 | 0.8
1000 | 0.05 | 0.1985 | 1.5 | 0.1977 | 1.0 | 0.19758 | 0.8
1000 | 0.03 | 0.1990 | 1.5 | 0.1985 | 1.0 | 0.19718 | 0.8

figure 7. degree of equivalence for voltage sensitivities under monopole shock excitation at 1000 m/s², 2.0 ms and at 5000 m/s², 0.8 ms.
figure 8. degree of equivalence for voltage sensitivities under dipole shock excitation at 1000 m/s², 0.20 ms and 0.03 ms.

references
[1] a. link, h.-j. von martens, "calibration of accelerometers by shock excitation and laser interferometry", shock and vibration, 2000, 7, pp. 101-112.
[2] h. nozato, t. usuda, a. oota, t. ishigami, "calibration of vibration pick-ups with laser interferometry: part iv. development of a shock acceleration exciter and calibration system", measurement science and technology, 2010, 21, 065107.
[3] yu-chung huang, jiun-kai chen, hsin-chia ho, chung-sheng tu, chao-jun chen, "the set up of primary calibration system for shock acceleration in nml", measurement, 2012, 45, pp. 2383-2387.
[4] qiao sun, jian-lin wang, hong-bo hu, "a primary standard for low-g shock calibration by laser interferometry", measurement science and technology, 2014, 25, 075003.
[5] iso 16063-13, methods for the calibration of vibration and shock transducers. part 13: primary shock calibration using laser interferometry, 2001.
[6] cipm ccauv, "strategy document for rolling programme development for 2013 to 2023", www.bipm.org/en/committees/cc/ccauv.
[7] qiao sun, hongbo hu, "final report on pilot comparison of low intensity shock apmp.auv.v-p1", metrologia, technical supplement, 2015, 52, 09002.
[8] spektra, data sheet of se-220 hop-ms hopkinson bar shock exciter, www.spektra-dresden.com/spektra.
[9] h. r. lissner, m. lebow, f. g. evans, "experimental studies on the relation between acceleration and intracranial pressure changes in man", surgery gynecology and obstetrics, 1960, 111, p. 329.

introductory notes for the acta imeko second issue general track
acta imeko, issn: 2221-870x, june 2023, volume 12, number 2, 1-4
francesco lamonaca1
1 department of computer science, modeling, electronics and systems engineering (dimes), university of calabria, ponte p. bucci, 87036, arcavacata di rende, italy
section: editorial
citation: francesco lamonaca, introductory notes for the acta imeko second issue general track, acta imeko, vol. 12, no. 2, article 1, june 2023, identifier: imeko-acta-12 (2023)-02-01
received june 28, 2023; in final form june 30, 2023; published june 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: francesco lamonaca, e-mail: editorinchief.actaimeko@hunmeko.org

dear readers,
as editor in chief of acta imeko, i am pleased to share with you a further success of acta imeko, which in the latest scimago ranking has risen to the third quartile also in instrumentation (https://www.scimagojr.com/journalsearch.php?q=21100407601&tip=sid&clean=0). i would like to thank all the editorial staff, the reviewers, the authors, and each of you for choosing acta imeko as a valuable source of knowledge and inspiration and as an effective medium for sharing your research and results with the scientific community. this general track collects several contributions that do not relate to a specific event. as editor in chief, it is my pleasure to give you an overview of these papers.

accurate, continuous and reliable gathering and recording of data on crop growth and state of health, by means of a network of autonomous sensor nodes requiring minimal management by the farmer, will be essential in future precision agriculture. in [1], a low-cost multi-channel sensor-node architecture is proposed for the distributed monitoring of fruit growth throughout the entire ripening season. the prototype presented is equipped with five independent sensing elements that can each be attached to a sample fruit at the beginning of the season and are capable of estimating the fruit diameter from first formation up to harvest. the sensor node is provided with a lora transceiver for wireless communication with the decision-making centre, is energetically autonomous thanks to a dedicated energy harvester and a careful design of the power consumption, and each measuring channel provides a resolution of a few tenths of a millimetre with a full-scale range of 12 cm. the paper describes the accurate calibration procedure of the sensor node and its elements, which allows for the compensation of temperature dispersion, noise and nonlinearities. the prototype was tested in the field in a real application, in the framework of the research activity on next-generation precision farming performed at the experimental farm of the department of agricultural and food science of the university of bologna, cadriano, italy.

the paper in [2] belongs to a line of research known as aerial archaeology and compares some specific visualizations of lidar data (hill-shading, openness, and sky view factor) to understand which of them provides the best approach to identify and unveil archaeological permanences as a function of different boundary conditions. in the case presented by the authors, such permanences belong to the very special material heritage consisting of the "physical traces" of the great war: although latent, they persist in the present landscapes at different states of preservation and visibility, waiting to be unearthed to express their cultural potential. they represent an indispensable palimpsest of "minor signs" such as, for example, fragments of entrenchments, gun emplacements, shelters, bomb craters, and temporary shelters. such elements made the war machine work at that time, while nowadays, if properly recognized and enhanced, they could foster the historical and cultural revitalization of the territories where they are placed.

in [3], the results of measurements of the atmospheric absorption and of the amount of precipitable water on the suffa plateau for the period from january 2015 to november 2020 are presented.
the measurements of the atmospheric parameters in the 2 mm and 3 mm ranges of the radio wave spectrum were carried out using the miap-2 radiometer. the results of more than six years of measurements have shown that, on the suffa plateau, the atmospheric parameters in the above ranges remain fairly stable.

in [4], an analysis of angular velocity measuring channels with an encoder is given. this analysis has made it possible to obtain an equation for estimating the quantization and sampling error for an exponential mathematical model describing the transient operation of electrical machines. the components of the mathematical model of this dynamic error are the sampling step and the derivative, which characterizes the rate of change of the measured value over time. it was found that the quantization and sampling errors significantly depend on the resolution z of the encoder: an increase in z leads to a decrease in the sampling error, but the relative quantization error increases. a proposal to reconcile these error components is also given.

the monitoring and management of the postprandial glucose response (pgr), by administering an insulin bolus before meals, is a crucial issue for type 1 diabetes (t1d) patients. the artificial pancreas (ap), which combines autonomous insulin delivery and a blood glucose sensor, is a promising solution; nevertheless, it still requires input from patients about the meal carbohydrate intake for bolus administration. this is due to the limited knowledge of the factors that influence the pgr. even though meal carbohydrates are regarded as the major factor influencing the pgr, medical experience suggests that other nutritional factors should also be considered. to address this issue, in [5] the authors propose a machine learning (ml) based approach for a more comprehensive analysis of the impact of nutritional factors (i.e., carbohydrates, protein, lipids, fibre, and energy intake) on the blood glucose levels (bgls). in particular, the proposed ml model takes into account bgls, insulin doses, and nutritional factors in t1d patients to predict bgls in 60-minute time windows after a meal. a feed-forward neural network was fed with different combinations of bgls, insulin, and nutritional factors, providing a predicted glycaemia curve as output. the validity of the proposed system was demonstrated through tests on public data and on self-produced data, adopting intra- and inter-subject approaches. the results suggest that patient-specific data about the nutritional factors of a meal play a major role in the prediction of postprandial bgls.

the cyber-security of an embedded device is a crucial issue, especially in the internet of things (iot) paradigm, since physical access to the smart transducers makes it easier for an attacker to eavesdrop on the exchanged messages. in [6], the role of metrology in improving the characterization and security testing of embedded devices is discussed in terms of vulnerability testing and robustness evaluation. the presented methods ensure an accurate assessment of the device's security by relying on statistical analysis and design of experiments. a particular focus is given to power analysis by means of a scatter attack.
in this context, the metrological approach contributes to guaranteeing the confidentiality and integrity of the data exchanged by iot transducers.

an electroencephalography (eeg) based classification system for three levels of fear of heights is proposed in [7]. a virtual reality (vr) scenario representing a canyon was exploited to gradually expose the subjects to fear-inducing stimuli of increasing intensity; an elevating platform allowed the subjects to reach three different height levels. psychometric tools were employed to initially assess the severity of the fear of heights and to assess the effectiveness of the fear induction. a feasibility study was conducted on eight subjects who underwent three experimental sessions. the eeg signals were acquired through a 32-channel headset during the exposure to the eliciting vr scenario. the main eeg bands and scalp regions were explored in order to identify which are the most affected by the fear of heights; the gamma band, followed by the high-beta band, and the frontal area of the scalp proved the most significant. the average accuracies in the within-subject case for the three-class fear classification task were computed. the frontal region of the scalp proved particularly relevant, and an average accuracy of (68.20 ± 11.60) % was achieved using as features the absolute powers in the five eeg bands. considering the frontal region only, the most significant eeg bands were the high-beta and gamma bands, achieving accuracies of (57.90 ± 10.10) % and (61.30 ± 8.43) %, respectively. sequential feature selection (sfs) confirmed those results by selecting, for the whole set of channels, the gamma band in 48.26 % of the cases and the high-beta band in 22.92 %, and by achieving an average accuracy of (86.10 ± 8.29) %.

in [8], a feasibility study on the electroencephalographic monitoring of executive functions during dual (motor and cognitive) task execution is presented. the electroencephalographic (eeg) signals are acquired by means of a wearable device with few channels and dry electrodes; the lightweight, wireless device allows walking in a natural way. the most significant eeg features are investigated to classify different levels of activation of two fundamental executive functions (ef) in both sitting and walking conditions. the power spectral density in the gamma band was the most relevant feature in discriminating low and high levels of inhibition, while the power spectral densities in the beta and gamma bands were the most discriminating for the activation level of working memory. the study lays the basis for (i) monitoring the activation levels of efs during gait, allowing fall prevention in the elderly, and (ii) specific cognitive rehabilitation aimed at the executive functions most relevant during walking.

the paper in [9] deals with the extraction of the fetal electrocardiography (ecg) signal from the raw ecg signals of the mother by beamforming-based algorithms. fetal ecg sensors produce signals containing information from both the pregnant mother and the infant. detailed and separate signals are already provided by fetal ecg instruments, but for some specific studies related to the infant's condition it is necessary to improve the quality of the signal with dedicated processing.
in this paper, four techniques, with some enhancements, are proposed to perform the processing: least mean squares (lms) with an adaptive noise cancellation technique, a discrete wavelet transform (dwt) based technique, an empirical wavelet transform (ewt) technique, and multiple signal classification (music); lms and music pertain to the beamforming approach. the techniques were used to decompose and identify the different elements constituting the source signal (the mother's signal), with noise cancellation by the multivariate empirical mode decomposition (memd) technique. the signal was adaptively decomposed by lms, dwt and music according to optimized parameters to extract some hidden components of the source signal, such as the fetal features, the qrs complex, the heartbeat, etc. the results showed that lms, with enhancements, is more effective in identifying and removing useless noise. the techniques were applied to the ecg signal of a 30-year-old healthy pregnant woman, which allowed their applicability to be verified. the present research leads, among others, to the following main contributions: the separation of the fetal ecg signal from the mother's, highlighting the functional state of the fetal heart rhythm (heart rate and heartbeat), which can reveal whether the fetal ecg shows malfunctions.

the random noise test of analog-to-digital converters recommended by the ieee 1057 standard for digitizing waveform recorders is studied in [10]. the heuristically derived expression presented for the estimation of the random noise standard deviation is experimentally validated. the standard suggests a triangular stimulus signal; the authors show how to use a sinusoidal stimulus signal to carry out the test. the influence of the stimulus signal offset and amplitude on the estimation error is also analysed, and an expression is presented to compute the minimum amplitude value that guarantees an upper bound on the estimator bias.

the paper in [11] presents a theoretical analysis of the uncertainty sources in measurement techniques used to determine the vibrations of turbomachinery blades using stationary sensors mounted on the casing of the turbomachine. a mathematical model based on fundamental physical principles is proposed, and two different measurement set-ups are evaluated: one uses a reference sensor to measure the passage of an undeformed part of the blades (the blade base), while the other does not involve a reference sensor, with both sensors facing the blade tip (the deformed part). the intrinsic uncertainty of these methods and the performance of the complete measurement chain are defined, and the analysis leads to conclusions about the practical set-up and the achievable performance of these measurement techniques.

measurement uncertainty plays a very important role in ensuring the validity of decision-making procedures, since it is the main source of incorrect decisions in conformity assessment. the guidelines given by the current standards allow one to take a decision of conformity or non-conformity according to the given limit and the measurement uncertainty associated with the measured value. due to measurement uncertainty, a risk of a wrong decision is always present, and the standards also give indications on how to evaluate this risk, although they mostly refer to a normal probability density function to represent the distribution of values that can reasonably be attributed to the measurand.
since such a function is not always the one that best represents this distribution of values, the paper in [12] considers some of the most often used probability density functions and derives simple formulas to set the acceptance (or rejection) limits in such a way that a pre-defined maximum admissible risk is not exceeded.

accurate and reliable results in orthodontics heavily depend on selecting the right impression materials. with the rise of digital technology and additive manufacturing (am) techniques, it has become necessary to characterize experimentally the materials used to design prosthetic bases. in the study presented in [13], the mechanical properties of polyetheretherketone, nylon-6, nylon-12, and polypropylene, impression materials commonly used in dentistry applications, are analysed. specifically, the effect on their flexural elastic modulus of the exposure to working environment conditions is investigated by means of a 3-point bending test performed on virgin materials and on samples immersed in saliva for 72 hours. the proposed approach revealed a significant loss in mechanical performance. these findings have significant implications for the proper selection and use of am materials in dental applications.

structural health monitoring (shm) is essential to ensure the safety and longevity of civil infrastructure. in recent years, there has been growing interest in developing shm systems based on micro-electro-mechanical systems (mems) technology. mems-based sensors are small, low-power, and cost-effective, making them ideal for large-scale deployment in structural monitoring systems. however, the use of mems-based sensors in shm systems can be challenging due to their inherent errors, such as drift, noise, and bias instability; these errors can affect the accuracy and reliability of the measured data, leading to false alarms or missed detections. therefore, several methods have been proposed to compensate for these errors and improve the performance of mems-based shm systems. for this purpose, the authors in [14] propose the combination of a redundant configuration of cost-effective mems accelerometers and a kalman filter approach to compensate the mems inertial sensor errors and filter the data; the performance of the method was preliminarily assessed by means of a custom-controlled oscillation generator and compared with that granted by a high-cost, high-performance mems reference system, where amplitude differences of 0.02 m/s² were experienced. finally, a sensor node for real-time monitoring is proposed that exploits the lorawan and nfc protocols to access the information of the structure to be monitored.

electromagnetic tracking systems (emtss) are widely used in surgical navigation, improving the outcome of diagnosis and surgical interventions by providing the surgeon with the real-time position of surgical instruments during medical procedures. the main goal is to improve the limited range of current commercial systems, which strongly affects the freedom of movement of the medical team. studies are currently being conducted to optimize the magnetic field generator (fg) configuration (both the geometrical arrangement and the electrical properties), since it affects the tracking accuracy. in [15], the authors discuss experimental data from an emts based on a developed 5-coil fg prototype, and they show the correlation between the position tracking accuracy and the gradients of the magnetic field.
therefore, they optimize the configuration of the fg by employing two different metrics based on (i) the maximization of the amplitude of the magnetic field, as reported in the literature, and (ii) the maximization of its gradients. the two optimized configurations are compared in terms of position tracking accuracy, showing that choosing the magnetic field gradients as the objective function for the optimization leads to higher position tracking accuracy than maximizing the magnetic field amplitude.

i hope you will enjoy your reading.
francesco lamonaca
editor in chief

references
[1] lorenzo mistral peppi, matteo zauli, luigi manfrini, luca corelli grappadelli, luca de marchi, pier andrea traverso, low-cost, high-resolution and no-manning distributed sensing system for the continuous monitoring of fruit growth in precision farming, acta imeko, vol. 12, no. 2, 2023, pp. 1-11. doi: 10.21014/acta_imeko.v12i2.1342
[2] joel aldrighettoni, maria grazia d'urso, military archaeology and lidar data visualizations: a non-invasive approach to detect historical remains, acta imeko, vol. 12, no. 2, 2023, pp. 1-10. doi: 10.21014/acta_imeko.v12i2.1395
[3] dilshod raupov, s. ilyasov, g. i. shanin, the results of atmospheric parameters measurements in the millimeter wavelength range on the radio astronomy observatory "suffa plateau", acta imeko, vol. 12, no. 2, 2023, pp. 1-10. doi: 10.21014/acta_imeko.v12i2.1430
[4] vasyl kukharchuk, oleksandr vasilevskyi, volodymyr holodiuk, results of study of quantization and discretization error of digital tachometers with encoder, acta imeko, vol. 12, no. 2, 2023, pp. 1-6. doi: 10.21014/acta_imeko.v12i2.1452
[5] giovanni annuzzi, lutgarda bozzetto, andrea cataldo, sabatina criscuolo, marisa pesola, predicting and monitoring blood glucose through nutritional factors in type 1 diabetes by artificial neural networks, acta imeko, vol. 12, no. 2, 2023, pp. 1-7. doi: 10.21014/acta_imeko.v12i2.1453
[6] pasquale arpaia, francesco caputo, antonella cioffi, antonio esposito, the role of metrology in the cybersecurity of embedded devices, acta imeko, vol. 12, no. 2, 2023, pp. 1-6. doi: 10.21014/acta_imeko.v12i2.1455
[7] andrea apicella, simone barbato, luis alberto barradas chacón, giovanni d'errico, lucio tommaso de paolis, luigi maffei, patrizia massaro, giovanna mastrati, nicola moccaldi, andrea pollastro, selina christin wriessenegger, electroencephalography correlates of fear of heights in a virtual reality environment, acta imeko, vol. 12, no. 2, 2023, pp. 1-10. doi: 10.21014/acta_imeko.v12i2.1457
[8] pasquale arpaia, renato cuocolo, paolo de blasiis, anna della calce, allegra fullin, ludovica gargiulo, luigi maffei, nicola moccaldi, electroencephalographic-based wearable instrumentation to monitor the executive functions during gait: a feasibility study, acta imeko, vol. 12, no. 2, 2023, pp. 1-6. doi: 10.21014/acta_imeko.v12i2.1460
[9] john p. djungha okitadiowo, a. lay-ekuakille, t. isernia, v. bhateja, s. p. singh, beamforming-based algorithms for recovering information from fetal electrocardiographic sensors, acta imeko, vol. 12, no. 2, 2023, pp. 1-8. doi: 10.21014/acta_imeko.v12i2.1470
[10] francisco alegria, random noise test of analog-to-digital converters, acta imeko, vol. 12, no. 2, 2023, pp. 1-8. doi: 10.21014/acta_imeko.v12i2.1502
[11] giulio tribbiani, lorenzo capponi, tommaso tocci, tiberio truffarelli, roberto marsili, gianluca rossi, a theoretical model for uncertainty sources identification in tip-timing measurement systems, acta imeko, vol. 12, no. 2, 2023, pp. 1-6. doi: 10.21014/acta_imeko.v12i2.1510
[12] alessandro ferrero, harsha vardhana jetti, sina ronaghi, simona salicone, a method to consider a maximum admissible risk in decision-making procedures based on measurement results, acta imeko, vol. 12, no. 2, 2023, pp. 1-9. doi: 10.21014/acta_imeko.v12i2.1518
[13] tommaso zara, lorenzo capponi, francesca masciotti, stefano pagano, roberto marsili, gianluca rossi, effects of saliva on additive manufacturing materials for dentistry applications: experimental research using flexural strength analysis, acta imeko, vol. 12, no. 2, 2023, pp. 1-6. doi: 10.21014/acta_imeko.v12i2.1519
[14] giorgio de alteriis, enzo caputo, rosario schiano lo moriello, on the suitability of redundant accelerometers for the implementation of smart oscillation monitoring system: preliminary assessment, acta imeko, vol. 12, no. 2, 2023, pp. 1-9. doi: 10.21014/acta_imeko.v12i2.1532
[15] gregorio andria, filippo attivissimo, attilio di nisio, anna m. l. lanzolla, mattia alessandro ragolia, analysis and optimization of surgical electromagnetic tracking systems by using magnetic field gradients, acta imeko, vol. 12, no. 2, 2023, pp. 1-8. doi: 10.21014/acta_imeko.v12i2.1589

estimation of stepping motor current from long distances through cable-length-adaptive piecewise affine virtual sensor
acta imeko, issn: 2221-870x, september 2015, volume 4, number 3, 53-58
alberto oliveri1, mark butcher2, alessandro masi2, marco storace1
1 department of electrical, electronic, telecommunications engineering and naval architecture (diten), university of genoa, via opera pia 11a, 16145, genova, italy
2 department of engineering, cern, 1211 geneva, switzerland
section: research paper
keywords: virtual sensor; estimation; fpga; piecewise-affine functions
citation: alberto oliveri, mark butcher, alessandro masi, marco storace, estimation of stepping motor current from long distances through cable-length-adaptive piecewise affine virtual sensor, acta imeko, vol. 4, no. 3, article 9, september 2015, identifier: imeko-acta-04 (2015)-03-09
editor: paolo carbone, university of perugia, italy
received february 13, 2015; in final form may 29, 2015; published september 2015
copyright: © 2015 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: work supported by the university of genoa.
corresponding author: marco storace, e-mail: marco.storace@unige.it

abstract: in this paper a piecewise affine virtual sensor is used for the estimation of the motor-side current of hybrid stepper motors, which actuate the lhc (large hadron collider) collimators at cern. the estimation is performed starting from measurements of the current in the driver, which is connected to the motor by a long cable (up to 720 m). the measured current is therefore affected by noise and ringing phenomena. the proposed method does not require a model of the cable, since it is only based on measured data and can be used with cables of different lengths. a circuit architecture suitable for fpga implementation has been designed and the effects of the fixed-point representation of data are analysed.
1. introduction
the lhc (large hadron collider) at cern is a circular particle accelerator which, like all machines of this kind, needs a collimation system to block highly energetic particles flying off their trajectories. if these particles are not properly collected, they can seriously damage the accelerator. the module that collects these potentially harmful particles is called the collimation system and comprises some moving parts that need to be actuated with high precision inside the tunnel. hybrid stepper motors are often used as the actuators in the collimation system because of their high positioning repeatability and open-loop control [1]. these motors, and their electronic drives, are subject to a number of requirements that are relatively unique to accelerators. the environment surrounding the motors is highly radioactive and, since driver electronics are damaged by this radioactivity, the drivers are placed in radiation-safe zones at distances of up to 1 km from the motors. they must therefore be connected to the hybrid stepper motors via long cables, as shown in figure 1. pulse width modulated (pwm) control voltages are widely used to increase power efficiency; however, they generate significant electromagnetic interference (emi) emissions, which can affect neighbouring electronics. high pwm chopping frequencies must therefore be used to shift the emissions to higher frequencies. these high-frequency voltage signals, nonetheless, cause the long cables to act as transmission lines and produce a ringing phenomenon in the currents on the driver side of the cable, the only side where measurements are possible. figure 2 shows a comparison of the current in the driver-cable-motor circuit for a single motor phase. good motor positioning repeatability is of great importance and requires real-time knowledge of the motor's position, so that compensatory action can be taken to correct any misalignments (i.e., step losses). radiation-hard resolvers are used to measure the motor position and to detect lost steps; it is, nonetheless, desirable to have sensor redundancy. additionally, even if the stepping motors work at nominal torque, chosen by design to be at least twice the nominal load torque, having an estimate of the real load torque can be useful to warn about degradation of the mechanical parts of the collimation system. a sensorless driver, based on an extended kalman filter (ekf), which can work with arbitrarily long power cables between driver and motor, has been developed in [2]. in order to use exclusively the motor model in the ekf, the algorithm's inputs and measurements should be the motor-side voltages and currents, respectively.
however, as previously stated, these signals are not directly available in ordinary operation, since measurements can only be made on the drive side of the cable. in [2] a cable model is used to obtain a transfer function relating the motor-side current to the driver-side current; the current is then estimated by means of a digital filter whose coefficients can be easily adapted to the cable used. a physical sensor is a hardware device that converts physical phenomena into electric signals. a virtual sensor, instead, uses mathematical models (usually obtained through a black-box approach) and the readings of other physical sensors to indirectly estimate a given physical variable. virtual sensors are basically nonlinear functions of past inputs and measured outputs of the considered system. they can be useful when some variables are not easily observable or measurable for various reasons, such as the availability, cost or maintenance of a physical sensor. in summary, a virtual sensor can be used to estimate an unmeasurable system variable without requiring knowledge of a mathematical model of the system. a procedure for the design of a virtual sensor is described in [3], which relies on choosing a suitable set of basis functions, so that the resulting virtual sensor satisfies the assumptions required to apply the theoretical results in [4]. in [5], [6] and [7], pwas functions (i.e., piecewise affine functions defined over a simplicial partition) have been proposed as basis functions for the design of virtual sensors. the main advantage of using pwas functions is that they can be implemented very efficiently in digital programmable circuits such as fpgas, providing low power consumption, fast response times and, at least for high-volume applications, low cost. moreover, the convergence and optimality properties of the general approach [3] are maintained. in [5] a single pwas function is used to obtain the estimate (standard virtual sensor), but if a relatively large number of inputs or measurable outputs is available, or if a large number of past data are used, the exponential increase of the complexity (curse of dimensionality) makes the approach impractical. an alternative solution (reduced-complexity virtual sensor), leading to a complexity reduction, has been proposed in [6] and [7]. the method used for the estimation of the motor-side currents in [2] is based on an approximation of a first-principles model of the dynamical system relating the drive-side current to the motor-side current. it therefore requires multiple, time-intensive steps of modelling, parameter estimation and approximation to produce a real-time implementable, reduced-order estimator. in [8] the reduced-complexity pwas virtual sensor was used for the estimation of the motor-side current with a fixed cable length of 720 m. here we generalize the sensor in order to make it cable-length-adaptive: the sensor is designed (trained) starting from measurements obtained with cable lengths of 180 m, 360 m and 540 m, while the validation is performed on measurements related to a 720 m long cable. a circuit architecture suitable for fpga implementation has also been designed, and simulation results considering fixed-point data representation effects are shown.

2. pwas virtual sensor
in this section the pwas virtual sensor approach is briefly summarized.
consider the following nonlinear discrete-time dynamical model:

x(k+1) = f(x(k), u(k)), \quad y(k) = g(x(k)), \quad z(k) = h(x(k)),

where x(k) \in \mathbb{R}^n is the state vector, u(k) \in \mathbb{R}^m is the exogenous input vector of manipulated variables, y(k) \in \mathbb{R}^p is the vector of measurable outputs, and k denotes the discrete-time instant. vector z(k) \in \mathbb{R}^q collects a set of variables to be estimated. we assume that the vector z(k) can be measured by a real sensor for k = 0, \ldots, N-1. a portion of these measurements, from k = 0 to k = N_t - 1, N_t < N, is considered as a training set and is used to design the virtual sensor, which will operate without measuring z(k); the remaining data, from k = N_t to k = N-1, are used as a validation set to verify the estimation capabilities of the virtual sensor. we aim to construct a virtual sensor that estimates z(k) for k \ge N, when the real sensor is no longer available. the functions f(\cdot,\cdot): \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n, g(\cdot): \mathbb{R}^n \to \mathbb{R}^p and h(\cdot): \mathbb{R}^n \to \mathbb{R}^q are assumed to be unknown. for the sake of simplicity, we assume q = 1; non-scalar z(k) can easily be estimated component-wise. since the actual variables u, y and z at past time instants are not available, the noisy measurements of them are assumed to be

\tilde{u}(k) = u(k) + \xi_u(k), \quad \tilde{y}(k) = y(k) + \xi_y(k), \quad \tilde{z}(k) = z(k) + \xi_z(k), \quad k \ge 0,

figure 1. illustration of the connection of a 2-phase hybrid stepper motor to its driver via a long cable, with the corresponding phase currents.
figure 2. comparison of the current in the driver-cable-motor circuit for a single motor phase: driver-side (blue) and motor-side (red).

where \xi_u(k), \xi_y(k) and \xi_z(k) are unmeasured stochastic variables. for given values of n_y, n_u and n_z, the inputs of the virtual sensor will be noisy sequences of measurements of y and u, and a vector of past values of \hat{z}, namely

Y(k) \triangleq [\tilde{y}(k)\ \tilde{y}(k-1)\ \cdots\ \tilde{y}(k-n_y)]
U(k) \triangleq [\tilde{u}(k)\ \tilde{u}(k-1)\ \cdots\ \tilde{u}(k-n_u)]
Z(k) \triangleq [\hat{z}(k-1)\ \hat{z}(k-2)\ \cdots\ \hat{z}(k-n_z)]

the values of n_y, n_u and n_z are generally considered as tuning parameters.

2.1. standard virtual sensor
the standard virtual sensor is obtained by estimating z(k) in the following way:

\hat{z}(k) = f_{\mathrm{pwas}}(Y(k), U(k), Z(k)),   (1)

f_{\mathrm{pwas}} being a pwas function defined over a simplicial partition made up of N_v vertices and represented through the \alpha-basis, i.e.,

f_{\mathrm{pwas}}(\xi) = \sum_{j=1}^{N_v} w_j\, \alpha_j(\xi).   (2)

by denoting as v_i, i = 1, \ldots, N_v, the vertices of the simplicial partition, the functions \alpha_j are pwas functions such that \alpha_j(v_i) = 1 if i = j and \alpha_j(v_i) = 0 if i \ne j. once the basis functions and the simplicial partition are selected, the weights w_j uniquely define f_{\mathrm{pwas}} and, in case the \alpha-basis is used, they correspond to the values of the function at the partition vertices, i.e., w_j = f_{\mathrm{pwas}}(v_j). a detailed discussion about pwas functions can be found in [9]. the weight vector w = [w_1 \cdots w_{N_v}]^\top is obtained by solving the following optimization problem:

\min_w \sum_{k=\bar{k}}^{N_t-1} \left( f_{\mathrm{pwas}}(Y(k), U(k), Z(k)) - \tilde{z}(k) \right)^2 + \lambda \|\Gamma w\|^2,   (3)

N_t being the number of elements in the training set, \lambda the tikhonov regularization parameter and \bar{k} = \max(n_y, n_u, n_z). the first term can be reformulated as a quadratic function of w and takes into account the square error between the value of the pwas function (i.e., the estimated data) and the measured data \tilde{z}(k). the second term performs a tikhonov regularization that depends on the structure of \Gamma: in the simplest case, \Gamma provides the zero-order tikhonov regularization, while first-order (or higher-order) tikhonov regularizations can be obtained by considering the gradient of the pwas function (or higher-order derivatives) in constructing \Gamma. the choice of the regularization parameter \lambda is quite critical, since it can strongly influence the performance of the virtual sensor: a too low value may lead to an ill-conditioned problem, i.e., a solution that is sensitive to small changes in the data, while a too high value may lead to an inaccurate estimate of the unmeasurable output. the regularized least squares problem (3) can be recast as an unconstrained quadratic programming (qp) problem of the form

\min_w \tfrac{1}{2}\, w^\top H w + f^\top w.   (4)

a rigorous convergence analysis of the standard virtual sensor is reported in [5].
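as a numerical illustration of (2)-(4), the following sketch trains a 1-d pwas function using hat (\alpha-basis) functions on a uniform grid and a first-order tikhonov term; the data, grid and regularization value are synthetic stand-ins, not the paper's dataset or toolbox:

```python
import numpy as np

def hat_basis(x, grid):
    """alpha-basis matrix: column j is the j-th hat function,
    equal to 1 at vertex grid[j] and 0 at every other vertex."""
    A = np.zeros((len(x), len(grid)))
    for j in range(len(grid)):
        e = np.zeros(len(grid))
        e[j] = 1.0
        A[:, j] = np.interp(x, grid, e)
    return A

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 500)                        # synthetic regressor samples
z = np.sin(2.0 * x) + 0.05 * rng.standard_normal(500)  # noisy target measurements

grid = np.linspace(-1.0, 1.0, 9)      # simplicial partition vertices
A = hat_basis(x, grid)
lam = 1e-3
G = np.diff(np.eye(len(grid)), axis=0)  # first-difference matrix: gradient-based gamma
# normal equations of the regularized problem (3): (A'A + lam*G'G) w = A'z
w = np.linalg.solve(A.T @ A + lam * G.T @ G, A.T @ z)

x_test = np.array([0.3])
print(hat_basis(x_test, grid) @ w, np.sin(0.6))  # estimate vs true value
```

because the \alpha-basis weights coincide with the function values at the vertices, the trained w can be read directly as a lookup table, which is what makes the fpga implementation of section 3 so compact.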
2.2. reduced-complexity virtual sensor
the reduced-complexity virtual sensor approach expresses the estimate \hat{z}(k) as a sum of lower-dimensional pwas functions, in order to mitigate the effects of the curse of dimensionality, which prevents the applicability of the standard virtual sensor in many applications. for the sake of compactness, henceforth the input of the virtual sensor is referred to as

\xi \triangleq [Y(k)\ U(k)\ Z(k)],   (5)

where n_\xi \triangleq \dim \xi denotes the total number of regressors collected in \xi. assume that vector \xi is split into p \in \mathbb{N} subsets \xi^{(1)}, \xi^{(2)}, \ldots, \xi^{(p)}, such that all elements of \xi are included in one and only one of these subsets. each subset \xi^{(i)} (i = 1, \ldots, p) has dimension d_i, such that d_i \ge 1 and d_1 + \cdots + d_p = n_\xi. the elements of each \xi^{(i)} are denoted as \xi^{(i)}_1, \xi^{(i)}_2, \ldots, \xi^{(i)}_{d_i}. the proposed reduced-complexity virtual sensor is defined through a sum of p pwas functions f^{(i)}_{\mathrm{pwas}}, i = 1, \ldots, p, each being the weighted sum of pwas basis functions:

\hat{z}(\xi) = \sum_{i=1}^{p} f^{(i)}_{\mathrm{pwas}}(\xi^{(i)}) = \sum_{i=1}^{p} \sum_{j=1}^{N_i} w^{(i)}_j\, \alpha^{(i)}_j(\xi^{(i)}),   (6)

where f^{(i)}_{\mathrm{pwas}}: \mathbb{R}^{d_i} \to \mathbb{R} (for fixed i), \alpha^{(i)}_j denotes the j-th basis function of the i-th pwas function, and N_i is the number of basis functions in the domain of \xi^{(i)}. also,

w \triangleq [w^{(1)}_1 \cdots w^{(1)}_{N_1}\ \cdots\ w^{(p)}_1 \cdots w^{(p)}_{N_p}].   (7)

the vector of parameters w (which determines the shape of \hat{z}) is obtained, as for the standard virtual sensor, by solving the least squares problem

\min_w \sum_k \left( \hat{z}(\xi(k)) - \tilde{z}(k) \right)^2 + \lambda \|\Gamma w\|^2.   (8)

the regularized least squares problem (8) can again be recast as an unconstrained qp problem of the form (4). a rigorous convergence analysis of the reduced-complexity virtual sensor is reported in [7].
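to make the structure of (6) concrete, the sketch below evaluates a reduced-complexity estimate as the sum of two 1-d pwas terms, one over the drive-side current and one over the cable length; the partitions and the weight values are illustrative placeholders, not the trained weights reported later in table 1:

```python
import numpy as np

def pwas_1d(xi, grid, w):
    """evaluate a 1-d pwas function: piecewise-linear interpolation
    of the vertex weights w defined on the partition 'grid'."""
    return np.interp(xi, grid, w)

grid_i = np.linspace(-3.0, 3.0, 7)                 # drive-side current axis (A)
grid_len = np.array([180.0, 360.0, 540.0, 720.0])  # cable length axis (m)
w_i = np.array([-2.1, -1.4, -0.5, 0.0, 0.5, 1.4, 2.1])  # placeholder weights
w_len = np.array([0.00, 0.02, 0.05, 0.09])              # placeholder weights

def z_hat(i_drive, cable_length):
    # reduced-complexity estimate: sum of low-dimensional pwas terms, cf. (6)
    return pwas_1d(i_drive, grid_i, w_i) + pwas_1d(cable_length, grid_len, w_len)

print(z_hat(1.2, 720.0))
```

the pay-off is in the weight count: a single d-dimensional pwas function needs a number of vertices exponential in d, while the sum of low-dimensional terms only needs the sum of the individual vertex counts.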
the domain of the pwas functions constituting the virtual sensor has been partitioned by using two subdivisions along the axis representing the drive-side current and only one subdivision along the axis representing the cable length. 3.2. experimental dataset  a dataset made up of 2,000,000 samples of the drive-side current and the corresponding motor-side current, sampled at 500 ks/s was employed. the samples correspond to 4 different cable lengths: 180 m, 360 m, 540 m and 720 m (500,000 samples for each length). figure 4 shows the drive-side currents for the different cable lengths. in order to acquire simultaneously both the drive-side and motor-side currents, two channels of an 8-bit oscilloscope were used to oversample the signals at 10 ms/s to avoid aliasing. current probes based on combined hall effect and current transformer technologies were used as the sensors. the scope has an effective resolution of 64 ma, which corresponds to a quantization noise variance of 0.0642/12 = 0.000341 a2. the probes have a 3σ accuracy of 0.25 a. considering the noise sources to be independent, the combined variance can be found by summing the individual variances in quadrature. this sum leads to an effective standard deviation of 85.4 ma, which can be considered acceptable compared to the 2√2 a amplitude of current waveform. the 10 ms/s sampled data is then low pass filtered to attenuate all frequency components above 250 khz to below the quantization level before downsampling the signals to 500 ks/s. this lower rate is high enough to capture the significant pwm-excited harmonics, but low enough to be sampled by standard drive adcs, using analog anti-aliasing, thereby allowing real-time estimator implementation. the samples corresponding to cable lengths of 180 m, 360 m and 540 m have been used as a training set, to derive the weights . the remaining 500,000 samples (corresponding to cable length 720 m) have been used to validate the virtual sensor. the weights defining the pwas virtual sensor, see (6), are shown in table 1. 4. results  figures 5 and 6 show the measured motor-side current of the validation dataset (blue) and the estimated current obtained by the fpga in fixed-point precision (red). the root mean squared (rms) estimation error obtained is 0.0858 a in fixed point precision and 0.0794 a in matlab double precision, whilst the overall current waveform has an rms value of 2 a. in [8] a lower rmse value was obtained by employing the pwas virtual sensor, nevertheless that sensor was trained with measurements taken with a 720 m long cable, while here the training is performed with data referred to different cable lengths, and the cable length is a sensor input. this solution is therefore more general, since it is cable-length-adaptive. the motor-side estimated current obtained with the figure 4. drive‐side currents for different cable lengths.  figure 3. block scheme of the circuit implementing the reduced‐complexity  virtual sensor.  table 1: weights of the three pwas functions constituting the reduced‐ complexity virtual sensor. w , w , w , w ,   w , w , 0.3474 0.1560 0.6341  0.2473  0.1559 0.0123 , ,   ,   9.9865 0.1665  10.0544  , ,   ,   3.9596 0.0135  3.7117  acta imeko | www.imeko.org  september 2015 | volume 4 | number 3 | 57  estimator in [2] in matlab double precision is shown in figure 7 for comparison. the rms estimation error is 0.0454 a. despite being lower than that obtained with the proposed method, it requires significantly more effort to obtain the estimator. 
depending on the application, this extra effort may not be worth the small increase in estimation precision. figure 8 shows the estimation errors for both estimators, pwas virtual sensor (blue curve) and estimator in [2] (red curve). mean value and variance are -0.017 and 0.007, respectively, for the pwas case and 2.5 10-3 and 2.1 10-3, respectively, for the case in [2]. 5. conclusions  we designed a piecewise-affine virtual sensor for the estimation of motor-side current of hybrid stepper motors, starting from measurements of drive-side currents through cables of different length. this sensor is therefore cable-lengthadaptive. a circuit architecture has also been designed for the implementation of the virtual sensor. the proposed solution exhibits slightly poorer performances compared with the ones obtained by standard model-based estimators, but results in a much faster and simpler design, since the identification of a model is not required. furthermore, even this slightly higher estimation error is negligible compared to the waveform amplitude and so allows the desired current levels to be tracked such that a negligible effect occurs on the motor position and torque. this latter is usually chosen to include a significant safety margin anyway. references  [1] p. acarnley, “stepping motors: a guide to theory and practice. control eng”, series 63, 4-th edition, the institution of electrical engineers, london, uk, 2002. [2] a. masi, m. butcher, m. martino, and r. picatoste, “an application of the extended kalman filter for a sensorless stepper motor drive working with long cables”, ieee transactions on industrial electronics, 2012, 59(11), pp. 4217–4225. [3] m. milanese, c. novara, k. hsu, and k. poolla, “filter design from data: direct vs. two-step approaches”, proceedings of the american control conference (acc), 14-16 june 2006, minneapolis, mn, usa, pp. 4466–4470. [4] l. ljung, “convergence analysis of parametric identification methods”, ieee transactions on automatic control, 1978, 23(5), pp. 770–783. [5] t. poggi, m. rubagotti, a. bemporad, and m. storace, “highspeed piecewise affine virtual sensors”, ieee transactions on industrial electronics, 2012, 59(2), pp. 1228–1237. [6] m. rubagotti, t. poggi, a. bemporad, and m. storace, “piecewise affine direct virtual sensors with reduced complexity”, proceedings of the 2012 ieee conference on decision and control (cdc), maui, hawaii (usa), 10-13 dec. 2012, pp. 656– 661. [7] m. rubagotti, t. poggi, a. oliveri, c. a. pascucci, a. bemporad, and m. storace, “low-complexity piecewise-affine virtual sensors: theory and design”, international journal of control, 2014, 87(3), pp. 622–632. [8] a. oliveri, m. butcher, a. masi, and m. storace, “piecewise affine virtual sensor: a case study-estimation of stepping motor current from long distances”, proceedings of the 20th imeko tc4 international symposium and 18th international workshop on adc modelling and testing research on electric and electronic measurement for the economic upturn. benevento, italy, 15-17 sep. 2014. [9] m. parodi, m. storace, and p. julián, “synthesis of multiport resistors with piecewise-linear characteristics: a mixed-signal figure  5.  motor‐side  current:  measured  (blue),  estimated  in  fixed‐point  precision (red).  figure 8. estimation error for the estimator  in [2] (red) and for the pwas  virtual sensor (blue).  figure 6. zoom of figure 5.  figure 7. motor‐side measured current (blue) and current estimated with the estimator in [2] (red).  
[10] a. oliveri, d. barcelli, a. bemporad, b. a. g. genuit, w. p. m. h. heemels, t. poggi, m. rubagotti, m. storace, "moby-dic: a matlab toolbox for circuit-oriented design of explicit mpc", proceedings of the 4th ifac conference on nonlinear model predictive control (nmpc), noordwijkerhout (the netherlands), 23-27 aug. 2012, pp. 218-225.
[11] m. storace, t. poggi, "digital architectures realizing piecewise-linear multivariate functions: two fpga implementations", international journal of circuit theory and applications, 2011, 39, pp. 1-15.

a theoretical model for uncertainty sources identification in tip-timing measurement systems
acta imeko, issn: 2221-870x, june 2023, volume 12, number 2, 1-6
giulio tribbiani1, lorenzo capponi2, tommaso tocci3, tiberio truffarelli3, roberto marsili3, gianluca rossi3
1 cisas g. colombo, university of padova, via venezia 15, 35131 padova, italy
2 department of aerospace engineering, university of illinois urbana-champaign, 104 s. wright street, 61801 urbana, usa
3 department of engineering, university of perugia, via g. duranti 93, 06125 perugia, italy
section: research paper
keywords: vibrations, blade, turbomachinery, contactless measurement
citation: giulio tribbiani, lorenzo capponi, tommaso tocci, tiberio truffarelli, roberto marsili, gianluca rossi, a theoretical model for uncertainty sources identification in tip-timing measurement systems, acta imeko, vol. 12, no. 2, article 28, june 2023, identifier: imeko-acta-12 (2023)-02-28
section editor: laura fabbiano, politecnico di bari, italy
received march 14, 2023; in final form april 13, 2023; published june 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: giulio tribbiani, e-mail: giulio.tribbiani@phd.unipd.it

abstract: this paper presents a theoretical analysis of uncertainty sources in measurement techniques used to determine vibrations of turbomachinery blades using stationary sensors mounted on the casing of the turbomachine. a mathematical model based on fundamental physical principles is proposed, and two different measurement set-ups are evaluated. one set-up uses a reference sensor to measure the passage of an undeformed part of the blades (the blade base), while the other set-up does not involve the use of a reference sensor, with both sensors facing the blade tip (the deformed part). the intrinsic uncertainty of these methods and the performance of the complete measurement chain are defined. the analysis of the measurement technique leads to conclusions about the practical set-up and the possible performance of these measurement techniques.

1. introduction
in operating conditions, monitoring turbomachinery blade vibrations is necessary for improving structural health techniques and for validating the dynamics of the system [1]. in fact, uncontrolled vibrations at or close to the natural frequencies, combined with high thermo-mechanical loads, can lead to damage to the machinery and increase the risk of unexpected failures [2]. traditionally, the dynamical analysis and measurement of rotating blade vibrations have been performed using strain gauge sensors [3]-[5]. however, this technique can have an impact on the monitoring procedures [6]: although strain gauges have high accuracy and their usage is well established, they have a relatively limited lifetime in high-temperature conditions and, being an intrusive technique, their installation on rotating systems and their data transmission are complex steps to achieve [5]. to overcome these problems, non-contact and non-intrusive blade vibration measurement techniques have been developed in the past years, based on vibration, temperature [7] and ultrasound approaches [5], [8]-[12]. one of the most successful and promising on-site techniques for axial turbomachinery blade dynamics measurements is called blade tip-timing (btt) [13], [14].
BTT is based on the measurement of the instantaneous blade tip deflection, by detecting advances or delays in the time of arrival (TOA) of the blade tip by means of sensors installed on the casing at fixed angular positions [15]. BTT can use different types of sensing probes, such as laser probes [16], [17], microwave sensors [4], [18], capacitive sensors [19], magneto-resistive sensors [20], [21], or optical probes [6], [22]-[24]. While the measurement principle is the same for all sensors, optical probes are usually preferred for their accuracy and resolution [6], [25]. The blade-sensor interaction generates electrical pulse signals. In ideal conditions, where no blade vibration is considered (i.e. a rigid blade), the TOA is determined a priori, given the geometry and the dynamics of the system [26], [27]. Conversely, considering the blades as vibrating flexible structures, blade deflections lead to delays or advances of the blade tip with respect to the expected TOA [13], [28], [29]. These shifts in the TOA are extracted and used to determine the deflection amplitude of each blade [30]. For this reason, a thorough identification and definition of the sources of uncertainty of tip-timing measurement systems (i.e. the uncertainty on the measurement of the TOA) is essential for obtaining detailed information on the dynamics of the system [31]. This leads to reliable structural health monitoring, giving insights on instrumentation best practices and information for validating numerical models [32]. Different studies have been carried out on BTT measurement systems, focusing on the uncertainty of specific techniques [33]-[36]. This study proposes a complete overview of the uncertainty sources of a generic BTT system and of their influence on the estimation of the TOA. To this end, two different probe configurations typically employed in BTT measurement systems are analysed, either with or without a reference sensor.

2. Materials and methods
A typical BTT measurement system allows the sampling of the relative displacement law s(t), supposed periodic, between two points of the blade, expressed in a reference system rotating with the blade itself, using non-contact sensors installed on the machine casing. The sensors are usually paired, with one being the reference sensor and the other the sensor used to measure the blade deformation. The first is typically installed at the base of the blade, where there is no deformation; the other is placed at the blade tip. By analysing the resulting signals, it is possible to obtain the instants at which the blade passes in front of the sensors and to measure its deformation. If the tip deflects with respect to the base, there is a Δt, in delay or in advance, between the signals obtained from the reference and the non-reference sensor. In addition, the pulse width of the passage signal is representative of the duration of the passage itself. In this way, a displacement and a velocity value can be measured for each transit of the blade in front of a pair of sensors. The problem lies in the fact that the displacement s(t), as well as the velocity v(t), i.e. its first derivative, generally have frequency components higher than the passage frequency in front of the sensors; hence, s(t) and v(t) are sub-sampled. Nevertheless, under particular conditions discussed in [37], it is possible to calculate the harmonics of s(t) even when the sampling is performed in such conditions. A method is described below that allows the discrete spectra of the blade displacement and velocity, s(t) and v(t), to be obtained by solving a system of 2M+1 non-linear equations, with M being the number of harmonics to be evaluated.
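Since both set-ups described below reduce to timing threshold crossings of the probe pulses, a minimal sketch of this TOA extraction step may help; the sampling rate, pulse shape and threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def times_of_arrival(signal, t, threshold):
    """Instants at which `signal` crosses `threshold` upwards, refined by
    linear interpolation between the two samples bracketing each crossing."""
    above = signal >= threshold
    idx = np.flatnonzero(~above[:-1] & above[1:])          # rising edges
    frac = (threshold - signal[idx]) / (signal[idx + 1] - signal[idx])
    return t[idx] + frac * (t[idx + 1] - t[idx])

# Illustrative pulse train: one Gaussian pulse per 2 ms blade passage
fs = 1.0e6                                   # assumed sampling rate, Hz
t = np.arange(0.0, 0.02, 1.0 / fs)
pulses = np.exp(-((t % 2e-3) - 1e-3) ** 2 / (2 * (5e-5) ** 2))
toa = times_of_arrival(pulses, t, threshold=0.5)
print(np.diff(toa))   # nominally constant spacing for an undeformed blade
```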
3. Velocity and displacement measurement principle using a reference sensor
This section describes the working principle of a BTT measuring system using a reference sensor. Consider a blade on a rotating drum subject to bending deformations, as shown in Figure 1, and let ω be the angular rotation velocity of the disk. Two reference systems are considered: a fixed one (O', s') and a rotating one (O, s). Both s and s' are curvilinear abscissae defined on the circumference containing the blade tips, of radius R. In these reference systems, s identifies the position of the undeformed blade tip; hence s(t) represents the circumferential component of the blade tip displacement over time. It is assumed that this displacement changes in time according to a periodic law:

$s(t) = S \cdot f(t)$ , (1)

where f(t) is a periodic function of period T, fundamental frequency f0 = 1/T and unitary amplitude, multiplied by a scalar S greater than 0. The function s(t) is the measurand, sampled by the BTT measurement technique. Deriving the displacement s(t), the velocity v(t) is obtained. Considering the relative motion between drum and casing:

$v(t) = v'(t) - v_0'(t)$ , (2)

with v'(t) the velocity in the rotating system and v0'(t) the absolute velocity due to the drum rotation. The blade deflection can be measured by analysing the signals generated by the two sensors arranged as shown in Figure 1; Figure 2 shows a qualitative representation of these two signals. The blade passage is detected when the signal exceeds a threshold value s_g. With an undeflected blade, there is a fixed time delay between the threshold crossings of the signals from sensors 1 and 2. If the blade is deformed, the time delay Δt_AC differs from the one obtained in the previous condition; hence, the blade deformation can be measured by measuring Δt_AC. This method is valid if some key hypotheses are met, and these assumptions inevitably increase the measurement uncertainty. First, it is assumed that the time interval Δt_AC does not vary due to rotation irregularity, i.e. the angular speed ω of the drum remains strictly constant. Second, the Δt_AC in the undeformed configuration is assumed equal to the mean value when the blades are vibrating:

$\overline{\Delta t_{AC}} = \frac{1}{N}\sum_{i=1}^{N} \Delta t_{AC}(i)$ . (3)

With these hypotheses, the time delays due to the oscillation s(t) can be calculated from:

$\delta(\Delta t_{AC}) = \Delta t_{AC} - \overline{\Delta t_{AC}}$ . (4)

Associating the time intervals Δt_AB and Δt_CD with the distances δ_AB and δ_CD along which the sensors see the blade, defined as the distances corresponding to the crossing of the threshold s_g as illustrated in Figure 2, the average velocities between A and B and between C and D can be computed from the following two equations:

$v_0' = \frac{\delta_{AB}}{\Delta t_{AB}} \cdot \frac{R}{r}$ (5)

$v' = \frac{\delta_{CD}}{\Delta t_{CD}}$ . (6)

Since the duration of these signals is very short, such quantities can reasonably be taken as the instantaneous velocities of the blade tip and base. Assuming that both s and the blade tip velocity remain constant during the passage occurring in Δt_CD, the displacement can be calculated as:

$s(i) = \frac{\delta_{CD}}{\Delta t_{CD}} \cdot \delta(\Delta t_{AC})$ . (7)

The value of v, assuming that v0'(t) remains constant during Δt_AC, can be expressed using (2) as:

$v(i) = \frac{\delta_{CD}}{\Delta t_{CD}} - \frac{\delta_{AB}}{\Delta t_{AB}} \cdot \frac{R}{r}$ . (8)

The use in (7) and (8) of the instantaneous rotation velocities considerably extends the applicability of the BTT measurement methods to machines with high degrees of irregularity, compared with the methods presented in [15], [30], [38], where the velocity is calculated as ωR and thus assumed constant during the entire revolution. In the method proposed here, v0' needs to remain constant only within the Δt_AC period. By applying (7) and (8) at each instant of passage of the blade in front of a couple of fixed sensors, the s(i) and v(i) values are calculated.

Figure 1. Proposed measuring set-up using a reference sensor (facing the base of the blade) to measure the deformation.
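A compact numerical transcription of (4)-(8) is sketched below, assuming one entry per revolution for each measured interval; the variable names mirror the symbols of the equations, and the arrays would come from the TOA extraction step.

```python
import numpy as np

def btt_with_reference(dt_ac, dt_ab, dt_cd, delta_ab, delta_cd, R, r):
    """Per-revolution tip displacement s(i) and velocity v(i), eqs. (4)-(8).

    dt_ac, dt_ab, dt_cd : arrays of measured time intervals, one per revolution
    delta_ab, delta_cd  : distances over which the sensors see base and tip
    R, r                : blade tip and blade base radii
    """
    d_dt_ac = dt_ac - dt_ac.mean()            # eq. (4): delay due to s(t)
    v_base = (delta_ab / dt_ab) * (R / r)     # eq. (5): drum speed scaled to tip
    v_tip = delta_cd / dt_cd                  # eq. (6): tip speed
    s = v_tip * d_dt_ac                       # eq. (7)
    v = v_tip - v_base                        # eq. (8)
    return s, v
```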
4. Method without reference sensor
In many applications it is not possible to install a reference sensor in a position suitable for detecting the passage of an undeformed part of the blade, i.e. the blade base. In these cases the sensors can be installed as shown in Figure 3, facing only the blade tip. It is assumed once again that the time interval between the two pulses does not change due to rotation irregularity, and that the Δt relative to the passage of the blade in the undeformed configuration is equal to the average value of Δt measured over a certain number of revolutions. The time delays δ(Δt(i)) are therefore calculated with respect to these average values. Assuming a strictly constant tangential velocity equal to ωR, the measured blade deflection S_i, shown in Figure 4, can be calculated from:

$S_i = \omega R \, \delta(\Delta t(i))$ . (9)

Hence, for each couple of pulses we obtain one pair of displacement samples, s and s'. The average displacement can be estimated as (s + s')/2 and a velocity value v can be estimated as (s' - s)/t_d, where t_d is the pulse width. To estimate the vibration frequency, the following considerations can be used. If the displacement of the blade is:

$s = A \cos(\omega t)$ , (10)

deriving it, the blade tip velocity v is obtained:

$v = -A \omega \sin(\omega t)$ . (11)

By computing the ratio between the ranges of v and s, the vibration frequency can be estimated:

$f = \frac{v_{max} - v_{min}}{s_{max} - s_{min}}$ . (12)
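The same transcription for the configuration without reference sensor, eqs. (9)-(12), is sketched below; `dt` collects the two pulse delays measured at each revolution, and `t_d` is the pulse width, both assumed to be already extracted from the signals.

```python
import numpy as np

def btt_without_reference(dt, omega, R, t_d):
    """Displacement pair, velocity and frequency estimate, eqs. (9)-(12).

    dt    : array of shape (N, 2), delays of the two pulses at each revolution
    omega : drum angular speed (rad/s), assumed strictly constant
    R     : blade tip radius
    t_d   : pulse width
    """
    d = omega * R * (dt - dt.mean(axis=0))   # eq. (9), one column per pulse
    s, s2 = d[:, 0], d[:, 1]
    s_avg = (s + s2) / 2.0                   # average displacement per passage
    v = (s2 - s) / t_d                       # velocity over the pulse width
    # eq. (12): for s = A cos(w t) this ratio returns the angular frequency w
    f = (v.max() - v.min()) / (s.max() - s.min())
    return s_avg, v, f
```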
Figure 2. Time history of the outputs of sensors 1 (reference on the base) and 2 (on the tip), typically in volt: the blade is passing when the output exceeds the threshold value s_threshold.
Figure 3. Measuring technique without reference sensor.
Figure 4. Blade vibration and sensor output over time; measured blade deflection at the i-th passage, resulting from the variation δ(Δt(i)).

5. Vibration harmonics calculation
The sampling of blade tip vibration by the BTT technique produces a series of displacement and velocity samples in the rotating reference system, one for each passage of the rotating blade in front of the couple of sensors. At each instant of passage, i.e. at time t(i), a deflection value s(i) and a velocity value v(i) are calculated. The relative motion of the blade tip can in general be described by a periodic function of time. As for vibrations in general, this periodic motion can be effectively described by a relatively small number of harmonics, with the energy almost completely contained in the first harmonics. The two functions s(t) and v(t) can in general be approximated by a Fourier series limited to M harmonics:

$s(t) = \sum_{j=1}^{M} \left[ A_j \cos(2\pi \nu_0 j t) + B_j \sin(2\pi \nu_0 j t) \right]$ (13)

$v(t) = 2\pi \nu_0 \sum_{j=1}^{M} j \left[ -A_j \sin(2\pi \nu_0 j t) + B_j \cos(2\pi \nu_0 j t) \right]$ . (14)

The values of s(t) and v(t) are measured at the instants t(i), not necessarily equally distributed in time, as the sensors may be placed on the turbine casing at non-equispaced angles. These values s(i) and v(i) can be used to write a system of non-linear equations whose unknowns are the fundamental frequency and the 2M coefficients A_j and B_j of the harmonics. Thus (2M + 1) equations can be written; to solve this system, it is necessary to acquire at least (2M + 1) samples of s(t) and v(t). So, in principle, from the acquisition of (2M + 1) samples s(i) and v(i) and the solution of a non-linear system of equations, it is possible to estimate the discrete spectrum of the harmonics of the vibration. With more than (2M + 1) samples of s(i) and v(i), a least-squares approach is also possible. The limits of these ideas are discussed in [37], where a complete theoretical approach is illustrated.
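For a fixed fundamental frequency ν0, the system built from (13)-(14) is linear in the 2M coefficients, so a practical way to sketch the least-squares variant mentioned above is to solve for A_j, B_j at each candidate ν0 and keep the candidate with the smallest residual. The snippet below implements the linear step; it is a sketch of this idea, not the full procedure of [37].

```python
import numpy as np

def fit_harmonics(t, s, v, nu0, M):
    """Least-squares A_j, B_j of eqs. (13)-(14) for a given fundamental nu0,
    from under-sampled pairs s(i), v(i) taken at the (possibly non-equispaced)
    passage instants t(i)."""
    j = np.arange(1, M + 1)
    ph = 2 * np.pi * nu0 * np.outer(t, j)              # (N, M) phase matrix
    rows_s = np.hstack([np.cos(ph), np.sin(ph)])       # eq. (13)
    rows_v = 2 * np.pi * nu0 * np.hstack([-np.sin(ph) * j,
                                          np.cos(ph) * j])   # eq. (14)
    A = np.vstack([rows_s, rows_v])
    b = np.concatenate([s, v])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    residual = np.linalg.norm(A @ coeffs - b)
    return coeffs[:M], coeffs[M:], residual            # A_j, B_j, misfit
```

Scanning `nu0` over a candidate grid and minimising `residual` turns the (2M + 1)-unknown non-linear problem into a one-dimensional search.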
6. Uncertainty sources
The hypotheses made in the description of the BTT measurement principle for the velocity and displacement samples of a rotating blade tip limit the applicability of the methods. The main parameters of the rotating disk with a vibrating blade are ω, R, i, ν and S. A first uncertainty source can be identified in the variation of Δt_AC due to rotation irregularity, expressed by:

$i = \frac{\omega_{max} - \omega_{min}}{\omega_{average}}$ , (15)

with ω the angular speed of the rotating drum. To a first linear approximation, this uncertainty source can be estimated as:

$E_i = i \cdot \Delta t_{AC}$ . (16)

It was previously assumed that s(t) and v(t) do not change over the time period Δt_AC. A linear estimate of the changes of s(t) and v(t) during Δt_AC is given by the relations:

$E_s = \left[ \frac{ds}{dt} \right] \Delta t_{AC}$ (17)

$E_v = \left[ \frac{dv}{dt} \right] \Delta t_{AC}$ . (18)

Assuming the time history of s(t) is a simple sinusoidal vibration:

$s(t) = S \sin(2\pi \nu t)$ , (19)

the following relations are obtained:

$\left[ \frac{ds}{dt} \right]_{max} = 2\pi \nu S$ (20)

$\left[ \frac{dv}{dt} \right]_{max} = 4\pi^2 \nu^2 S$ . (21)

By replacing these values in (17) and (18), the maximum effect on the uncertainty due to the change of s and v over Δt_AC can be estimated:

$E_s = 2\pi \nu S \, \Delta t_{AC}$ (22)

$E_v = 4\pi^2 \nu^2 S \, \Delta t_{AC}$ . (23)

Considering that the maximum of Δt_AC is obtained when the absolute speed of the blade tip is at its minimum, (22) and (23) become:

$E_s = \frac{2\pi \nu S \, \delta_{AC}}{\omega R - 2\pi \nu S}$ (24)

$E_v = \frac{4\pi^2 \nu^2 S \, \delta_{AC}}{\omega R - 2\pi \nu S}$ . (25)

Equations (16), (24) and (25) estimate the uncertainty amplitude and can therefore be used to identify the conditions of applicability of the measurement methods described above. Knowing this, it is possible to change the parameters of the measurement system, in order to properly choose and install the sensors in convenient locations, as well as to find the optimal acquisition set-up. The resolution Δs of the displacement measurement is given by the δ(Δt_AC) resolution. Starting from:

$R_s = \frac{\Delta s}{S}$ (26)

and with t_rs the resolution of the time measurement, i.e. the sampling time, we can write:

$t_{rs} = R_s \, (\delta(\Delta t_{AC}))_{max}$ (27)

and, being:

$(\delta(\Delta t_{AC}))_{max} = \frac{S}{\omega R}$ , (28)

t_rs strictly affects the resolution of the measurement of the displacement s. A useful equation can be defined to relate t_rs to R_s; this relation becomes fundamental when choosing the resolution of the time measurement in order to achieve the target resolution of the blade tip displacement s:

$t_{rs} = R_s \frac{S}{\omega R}$ . (29)

The resolution of the blade tip velocity measurement depends essentially on the δ(Δt_CD) measurement resolution. As for the displacement measurement, the following equation is obtained:

$t_{rv} = R_v \, (\delta(\Delta t_{CD}))_{max}$ (30)

and, being:

$(\Delta t_{CD})_{min} = \frac{\delta_{CD}}{\omega R + 2\pi \nu S}$ (31)

$(\Delta t_{CD})_{max} = \frac{\delta_{CD}}{\omega R - 2\pi \nu S}$ , (32)

hence:

$(\delta(\Delta t_{CD}))_{max} = \frac{(\Delta t_{CD})_{max} - (\Delta t_{CD})_{min}}{2}$ . (33)

Therefore, from (33), another useful formula can be defined to choose the sampling time of the sensor signals so as to have sufficient resolution for the tip-timing velocity measurement:

$t_{rv} = \frac{2\pi \nu S \, \delta_{CD} \, R_v}{\omega^2 R^2 - 4\pi^2 \nu^2 S^2}$ . (34)

Further causes of uncertainty can also be due to: variations of δ_AB and δ_CD; the definition of the threshold s_g; relative radial motion between sensor and blade tip; and noise in the sensor signals. This analysis has been developed in [39].
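Expressions (16), (24), (25), (29) and (34) lend themselves to a small design-time calculator; a sketch is given below, with all quantities in SI units and the example values purely illustrative, not taken from the paper.

```python
import numpy as np

def btt_uncertainty_budget(omega, R, nu, S, dt_ac, delta_ac, delta_cd,
                           irregularity, Rs, Rv):
    """Uncertainty and resolution figures for a candidate BTT set-up."""
    E_i = irregularity * dt_ac                            # eq. (16)
    den = omega * R - 2 * np.pi * nu * S
    E_s = 2 * np.pi * nu * S * delta_ac / den             # eq. (24)
    E_v = 4 * np.pi**2 * nu**2 * S * delta_ac / den       # eq. (25)
    t_rs = Rs * S / (omega * R)                           # eq. (29)
    t_rv = (2 * np.pi * nu * S * delta_cd * Rv /
            (omega**2 * R**2 - 4 * np.pi**2 * nu**2 * S**2))   # eq. (34)
    return E_i, E_s, E_v, t_rs, t_rv

# Illustrative numbers only: ~3000 rpm drum, 0.3 m tip radius, 1 kHz blade
# vibration of 0.1 mm amplitude, 1 % speed irregularity, 1 % target resolution.
print(btt_uncertainty_budget(omega=314.16, R=0.3, nu=1000.0, S=1e-4,
                             dt_ac=1e-4, delta_ac=2e-3, delta_cd=2e-3,
                             irregularity=0.01, Rs=0.01, Rv=0.01))
```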
7. Conclusions
Some basic models for blade tip-timing measurement systems have been defined: the first using one of the two sensors as a reference, the second with both sensors facing the blade tip. Although the second approach is more versatile and easier to install, using one sensor as a reference is needed when the analysed machinery has higher degrees of irregularity. The uncertainty on the parameters of these models has been theoretically analysed, and some simple formulas have been proposed that relate the obtainable uncertainty and resolution to the chosen measurement system parameters and sensor installation. These formulas can be very useful for making proper choices of measurement system components, in relation to the expected blade tip vibration characteristics and the turbomachine parameters. This work can therefore be used as a guideline for the design and installation of blade tip-timing measurement systems.

References
[1] S. L. Dixon, Fluid mechanics, thermodynamics of turbomachinery, Elsevier, 2010, ISBN 978-1-85617-793-1.
[2] R. I. Lewis, Turbomachinery performance analysis, Arnold, 1996.
[3] P. Russhard, The rise and fall of the rotor blade strain gauge, Mechanisms and Machine Science, vol. 23, 2015, pp. 27-37. DOI: 10.1007/978-3-319-09918-7_2
[4] H. Guo, F. Duan, J. Zhang, Blade resonance parameter identification based on tip-timing method without the once-per-revolution sensor, Mech. Syst. Signal Process. 66-67 (2016), pp. 625-639. DOI: 10.1016/j.ymssp.2015.06.016
[5] B. Kestner, T. Lieuwen, C. Hill, L. Angello, J. Barron, C. A. Perullo, Correlation analysis of multiple sensors for industrial gas turbine compressor blade health monitoring, J. Eng. Gas Turbines Power 137(11) (2015). DOI: 10.1115/1.4030350
[6] F. Mevissen, M. Meo, A review of NDT/structural health monitoring techniques for hot gas components in gas turbines, Sensors 19(3) (2019), art. no. 711. DOI: 10.3390/s19030711
[7] G. Allevi, P. Castellini, P. Chiariotti, F. Docchio, R. Marsili, R. Montanini, S. Pasinetti, A. Quattrocchi, R. Rossetti, G. Rossi, G. Sansoni, E. P. Tomasini, Qualification of additive manufactured trabecular structures using a multi-instrumental approach, Proc. of the 2019 IEEE Int. Instrumentation and Measurement Technology Conf. (I2MTC), Auckland, New Zealand, 20-23 May 2019. DOI: 10.1109/I2MTC.2019.8826969
[8] S. Bornassi, T. M. Berruti, C. M. Firrone, G. Battiato, Vibration parameters identification of turbomachinery rotor blades under transient condition using blade tip-timing measurements, Measurement 183 (2021), art. no. 109861. DOI: 10.1016/j.measurement.2021.109861
[9] J. P. Feist, S. Berthier, Precision temperature detection using a phosphorescence sensor coating system on a Rolls-Royce Viper engine, Proc. of the ASME Turbo Expo 2012, Copenhagen, Denmark, 11-15 June 2012, pp. 917-926. DOI: 10.1115/GT2012-69779
[10] T. Tagashira, N. Sugiyama, Y. Matsuda, M. Matsuki, Measurement of blade tip clearance using an ultrasonic sensor, 35th Aerospace Sciences Meeting and Exhibit, 1997. DOI: 10.2514/6.1997-165
[11] F. Cannella, A. Garinei, R. Marsili, E. Speranzini, Dynamic mechanical analysis and thermoelasticity for investigating composite structural elements made with additive manufacturing, Composite Structures 185 (2018), pp. 466-473. DOI: 10.1016/j.compstruct.2017.11.029
[12] M. Becchetti, R. Flori, R. Marsili, G. L. Rossi, Measurement of stress and strain by a thermocamera, Proc. of the SEM Annual Conference, Albuquerque, New Mexico, 1-4 June 2009.
[13] L. Andrenelli, N. Paone, G. Rossi, E. P. Tomasini, Non-intrusive measurement of blade tip vibration in turbomachines, Proc. of the ASME Turbo Expo 5 (2015). DOI: 10.1115/91-GT-301
[14] L. J. Kiraly, Digital system for dynamic turbine engine blade displacement measurements, Proc. of Measurement Methods in Rotating Components of Turbomachinery, New Orleans, LA, 1979.
[15] W. B. Watkins, W. W. Robinson, R. M. Chi, Noncontact engine blade vibration measurements and analysis, AIAA paper, 1985. DOI: 10.2514/6.1985-1473
[16] S. Heath, M. Imregun, An improved single-parameter tip-timing method for turbomachinery blade vibration measurements using optical laser probes, Journal of Mechanical Sciences 38(10) (1996), pp. 1047-1058. DOI: 10.1016/0020-7403(95)00116-6
[17] L. Andrenelli, N. Paone, G. Rossi, Large-bandwidth reflection fiber-optic sensors for turbomachinery rotor blade diagnostics, Sensors and Actuators A: Physical 32 (1992), pp. 539-542. DOI: 10.1016/0924-4247(92)80040-A
[18] J. Zhang, F. Duan, G. Niu, J. Jiang, J. Li, A blade tip timing method based on a microwave sensor, Sensors 17(5) (2017), art. no. 1097. DOI: 10.3390/s17051097
[19] C. Huang, M. Hou, Technology for measurement of blade tip clearance in an aeroengine, Measurement and Control Technology 27(3) (2011), pp. 27-32.
[20] E. Cardelli, A. Faba, R. Marsili, G. Rossi, R. Tomassini, Magnetic nondestructive testing of rotor blade tips, J. Appl. Phys. 117(17) (2015), art. no. 17A705. DOI: 10.1063/1.4907180
[21] R. Tomassini, G. Rossi, J. F. Brouckaert, On the development of a magnetoresistive sensor for blade tip timing and blade tip clearance measurement systems, Review of Scientific Instruments 87(10) (2016), art. no. 20003. DOI: 10.1063/1.4964858
[22] J. M. Gil-García, I. García, J. Zubia, G. Aranguren, Measurement of blade tip clearance and time of arrival in turbines using an optic sensor, Proc. of the 2015 Int. Conf. on Applied Electronics (AE), 8-9 September 2015.
[23] R. Reinhardt, D. Lancelle, O. Hagendorf, M. Schultalbers, O. Magnor, P. Duenow, Improved reference system for high precision blade tip timing on axial compressors, Proc. of the 2017 25th Optical Fiber Sensors Conference (OFS), Jeju, South Korea, 24-28 April 2017. DOI: 10.1117/12.2263295
[24] I. García, J. Beloki, J. Zubia, G. Aldabaldetreku, M. Asunción Illarramendi, F. Jiménez, An optical fiber bundle sensor for tip clearance and tip timing measurements in a turbine rig, Sensors 13(6) (2013), pp. 7385-7398. DOI: 10.3390/s130607385
[25] D. Ye, F. Duan, J. Jiang, G. Niu, Z. Liu, F. Li, Identification of vibration events in rotating blades using a fiber optical tip timing sensor, Sensors 19(7) (2019), art. no. 1482. DOI: 10.3390/s19071482
[26] G. Rossi, J. Brouckaert, Design of blade tip timing measurement systems based on uncertainty analysis, 2012.
[27] M. Pan, Y. Yang, F. Guan, H. Hu, H. Xu, Sparse representation based frequency detection and uncertainty reduction in blade tip timing measurement for multimode blade vibration monitoring, Sensors 17(8) (2017), art. no. 1745. DOI: 10.3390/s17081745
[28] J. M. Gil-García, A. Solís, G. Aranguren, J. Zubia, An architecture for on-line measurement of the tip clearance and time of arrival of a bladed disk of an aircraft engine, Sensors 17(10) (2017), art. no. 2162. DOI: 10.3390/s17102162
[29] J. Bouckaert, Tip timing and tip clearance problem in turbomachines, Von Karman Institute Lecture Series 2(5) (2007).
[30] H. Roth, Vibration and clearance measurements on rotating blades using stationary probes, Von Karman Inst. for Fluid Dyn., Meas. Tech. in Turbomachines, 1981.
[31] S. Chatterton, L. Capponi, T. Tocci, M. Marrazzo, R. Marsili, G. Rossi, Experimental investigation on hardware and triggering effect in tip-timing measurement uncertainty, Sensors 23(3) (2023), art. no. 1129. DOI: 10.3390/s23031129
[32] T. Tocci, L. Capponi, G. Rossi, R. Marsili, M. Marrazzo, State-space model for arrival time simulations and methodology for offline blade tip-timing software characterization, Sensors 23(5) (2023), art. no. 2600. DOI: 10.3390/s23052600
[33] C. Zhou, H. Hu, F. Guan, Y. Yang, Modelling and simulation of blade tip timing uncertainty from rotational speed fluctuation, Proc. of the 2017 Prognostics and System Health Management Conference (PHM-Harbin), Oct. 2017. DOI: 10.1109/PHM.2017.8079252
[34] P. Russhard, Blade tip timing (BTT) uncertainties, AIP Conference Proceedings 1740(1) (2016), art. no. 020003. DOI: 10.1063/1.4952657
[35] M. E. Mohamed et al., Experimental validation of FEM-computed stress to tip deflection ratios of aero-engine compressor blade vibration modes and quantification of associated uncertainties, Mechanical Systems and Signal Processing 178 (2022), art. no. 109257. DOI: 10.1016/j.ymssp.2022.109257
[36] S. Catalucci, R. Marsili, M. Moretti, G. Rossi, Comparison between point cloud processing techniques, Measurement 127 (2018), pp. 221-226. DOI: 10.1016/j.measurement.2018.05.111
[37] A. J. Jerri, The Shannon sampling theorem: its various extensions and applications, a tutorial review, Proceedings of the IEEE 65(11) (1977), pp. 1565-1596. DOI: 10.1109/PROC.1977.10771
[38] P. E. McCarty, J. W. Thompson, R. S. Ballard, Development of a noninterference technique for measurement of turbine engine compressor blade stress, Proc. of the 16th Joint Propulsion Conf., Hartford, CT, USA, 30 June - 2 July 1980. DOI: 10.2514/6.1980-1141
[39] P. Nava, N. Paone, G. L. Rossi, E. P. Tomasini, Design and experimental characterization of a nonintrusive measurement system of rotating blade vibration, Journal of Engineering for Gas Turbines and Power 116(3) (1994), pp. 657-662. DOI: 10.1115/1.2906870
Surface roughness modeling with edge radius and end milling parameters on Al 7075 alloy using Taguchi and regression methods
M. Numan Durakbaşa1, Anıl Akdoğan2, A. Serdar Vanlı2, Aslı Günay2
1 Vienna University of Technology, Institute for Production Engineering and Lasertechnology, Department for Interchangeable Manufacturing and Industrial Metrology, 1040 Vienna, Austria
2 Yildiz Technical University, Faculty of Mechanical Engineering, Department of Mechanical Engineering, 34349 Istanbul, Turkey
Section: research paper
Keywords: edge radius; surface roughness; precise measurements; optimization
Citation: M. Numan Durakbaşa, Anıl Akdoğan, A. Serdar Vanlı, Aslı Günay, Surface roughness modeling with edge radius and end milling parameters on Al 7075 alloy using Taguchi and regression methods, Acta IMEKO, vol. 3, no. 4, article 9, December 2014, identifier: IMEKO-ACTA-03 (2014)-04-09
Editor: Paolo Carbone, University of Perugia
Received October 10, 2013; in final form December 26, 2013; published December 2014
Copyright: © 2014 IMEKO. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by Yildiz Technical University, Scientific Research Projects Coordination Department, project number 2012-06-01-KAP01, Turkey.
Corresponding author: Anıl Akdoğan, e-mail: nomak@yildiz.edu.tr

Abstract: Tool geometry and edge radius are not only crucial for determining workpiece surface characteristics, but they also have a direct impact on tool lifetime. They are created in manufacturing, and deformations occur in machining processes; precise determination of the geometry is therefore of vital importance. This study focuses on process parameters such as cutting speed, feed rate, depth of cut, tool geometry and different coatings, in order to indicate their effects on the surface roughness of the machined product on the basis of two- and three-dimensional precise measurements. The paper studies the optimization of process parameters and different coating materials, combined with different tool radii, to obtain the maximum surface quality for the end milling of Al 7075 alloy by Taguchi and regression methods. The results reveal the optimum process parameters with the proper coating type, and the calculated mathematical model predicts the average surface roughness value against edge radius wear and processing parameters.

1. Introduction
Despite recent developments, the causes of the cutting tool geometry deformations incurred by usage are still unknown and cannot yet be estimated precisely. Since the costs of cutting tools and of their replacement in machining processes influence the total production costs by 3% and 12%, respectively, accurate measurement of tool geometry is of great importance to delay tool deformation and to extend tool lifetime [1]. Quality assurance policies are used by many companies to provide clients with highly qualified products and technical support; thus, in-process inspections should include all steps of manufacturing to meet the demands of the related quality management system standards [2]. Many different industries set high demands on the geometry variation of tools due to technological developments; these demands stem from high and ultra-high precision machining of different kinds of materials.
For instance, when cutting brittle materials in a ductile mode using diamond tools, such as the ductile cutting of silicon and quartz for wafer fabrication, one of the key conditions for achieving ductile chip formation is to match the cutting edge radius of the tool to the undeformed chip thickness. It has been shown that the undeformed chip thickness has to be in the order of nanometers and that the tool cutting edge radius has to be smaller than the undeformed chip thickness. Therefore, precise measurement of diamond cutting tools has become a key issue for ductile-mode cutting of brittle materials [3]. As emphasized in the literature, cutting tool tip radius deformation during machining, according to the wear mechanism, is still poorly understood and cannot be accurately predicted [4]. Determining the significant factors that affect the tool tip radius is crucial for the final surface quality and the life of the tool. The precise identification of tool geometry is important for the determination of both the tool life and the machined material surface conditions. Tool geometry identification, including edge radius determination, not only gives important information on high-precision manufacturing processes, but also explains the wear behavior of the tool. High-precision measurement techniques are being used to determine the geometry, and especially the edge radius variations, of tools for various machining operations. Recently, 7xxx-series Al alloys have been used in several experimental studies. Al 7076-T6 material was used for investigating high-speed milling effects on the final workpiece surface by means of analytical and experimental methods [5]. Al 6061-T6 was used as specimen material for micro-milling experiments in an experimental study [6]. In another study, Al 5083 blocks (140x70x30 mm) were used to investigate parameter effects on tool lifetime; the grey-based Taguchi method and ANOVA analyses were applied for the experimental design and results, and this analysis showed that decreasing feed rates increased the tool life [7]. Al 7075-T6 alloy specimens were used in an experimental study on dry-cutting high-speed milling operations [8]. To model the cutting forces in face milling operations, different kinds of Al 7050 specimens were used in other experimental studies [9, 10].
Researchers used Al 6061-series aluminum workpieces in a Taguchi L9 orthogonal experimental array to obtain the significant factors that affect the workpiece surface roughness; ANOVA was used to analyze the outputs of the orthogonal array. According to these analyses, a low feed rate and a minimum rake angle produce high surface quality in high-speed milling operations [11]. The purpose of this study is to determine the geometries of brand-new and used end-milling tool tips and the final workpiece surface qualities by means of precise measurement techniques and methods. In addition, we aim at determining the optimal process parameters and tool radius by using the design of experiments (DoE) methodology. This paper studies the optimization of processing parameters and different coating materials, combined with different tool radii, to obtain the minimum Ra (arithmetical mean surface roughness parameter) values for the end milling of Al 7075 alloy by using Taguchi and regression methods. Specifically, the tool edge radius was obtained by modern 3D dimensional methods; the geometry has significant effects on surface quality, and there is only a limited number of studies on the subject. The aim of this experimental study was the determination of a high-accuracy mathematical model relating Ra to the processing parameters by using the measurement data. The relationship between the control factors and the response factor, surface roughness, is determined. The results reveal the proper coating type and the optimum process parameters; in addition, the calculated mathematical model predicts the edge radius changes of the tools against workpiece surface damage.

2. Materials and methods
In this study, Al 7075 alloy workpieces were end milled using three different coated milling tools with three different initial edge radii. Each tool has four flutes with a 30° helix angle and a 10 mm nominal diameter. The selected solid carbide cutting tools were coated with three different coating materials by the PVD (physical vapor deposition) technique. Aluminum alloys are widely used in the aerospace and automotive industries to manufacture products in short processing periods and in large numbers by high-speed machining; the manufacturing of such a significant amount of products requires high productivity with increased cutting speed and feed rate. Al 7075 alloy blocks were used for each set of experiments; the chemical composition of the processed material is given in Table 1. The experiments were conducted with two different sets of processing parameters, so that the effects of harder cutting conditions on the tool edge geometry could be observed. The experiments were classified into two sets, namely "the first milling operations (1st)" and "the second milling operations (2nd)". The experimental design factors and their levels are given in Table 2. The 27 Al alloy blocks had a 50x50 mm cross section and a length of 160 mm. The method presented in this study is an experimental design process called the Taguchi design, in which the inherent variability of materials and manufacturing processes is taken into account at the design stage. Within the Taguchi design procedure, the parameter design stage has been widely used for the optimization of manufacturing processes [12, 13]. The main aim of this experimental work is to determine the optimal cutting conditions that yield the minimum Ra value. In a significant number of studies, only three common process parameters, i.e. feed rate, cutting speed and depth of cut, were investigated [14-17].
Manufacturers and customers are expected to test the functional geometry and wear characteristics of tools to ensure that only functionally capable tools are used in the production process. For these purposes, the initial geometrical and surface characteristics of the brand-new tools were examined by both 2D and 3D measurements. The edge radii of the brand-new and mill-processed tools were measured by a Zoller Venturion 450 3D laser scanner, Germany. Images were captured by a Keyence optical microscope, US, and a Schut stereo microscope, The Netherlands. The milling operations were performed on a vertical CNC machining center, Mori Seiki MillTap 700, Japan. A DIN 6499 ER-type tool holder was used in all the experiments. After performing the two sets of milling operations, the surface roughness of the machined products was measured at five different locations using a profilometer from Taylor Hobson, UK; repeated measurements increase the conformity of the model accuracy. The deformations from the previous conditions are provided in Table 3 as Δrε1 and Δrε2 for the 1st and 2nd set of milling operations, respectively. Additionally, Table 3 shows the response factors at the end of the 1st and 2nd set of operations as Ra measurements. The arithmetic average of the absolute values of the roughness profile was determined by means of a Gauss filter with a 0.8 mm cut-off value and a 4L assessment length. Certainly, the stability of the tool edge radius is related to the final surface roughness of the workpiece. Therefore, an experimental design was introduced and all end-milling tool radii were measured before and after machining, to obtain the deviations from the initial geometry; in this way, the changes of the surface conditions and of the tool tip geometry were obtained accurately. Figure 1 shows the initial tip geometry of the ZrN-coated tools at 1.0 mm and 1.5 mm nominal radius, respectively, at x50 magnification.

Table 1. Chemical composition of Al 7075 alloy (%).
Si 0.08 | Fe 0.36 | Cu 1.52 | Mn 0.08 | Mg 2.59 | Cr 0.21 | Zn 5.62 | Ti 0.04

3. Results and discussion
Today's high-precision manufacturing technology requires practical solutions to improve the surface quality of machined products, and evaluating cutting performance is an important factor in this respect. On this basis, the Ra values of the milled products were chosen as the response to determine the end-milling performance of the manufacturing systems. The process parameters that affect the characteristics of the milled products are the tool characteristics (tool radius and coating material) and the milling parameters (cutting speed, feed rate and depth of cut). The presented results were analyzed within the limits of the selected orthogonal array. The basic quality characteristic of this research is the minimum surface roughness for optimum machining conditions. According to the measurements, the minimum surface roughness was obtained with the third level of radius, the third level of coating, the first level of cutting speed, the second level of depth of cut and the first level of feed rate (Table 4). The ANOVA statistical method was used to investigate the most effective process parameters.
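As a reference for the analysis that follows, the smaller-is-better Taguchi signal-to-noise ratio used for a surface-roughness response is S/N = -10 log10(mean(y^2)); the sketch below applies this standard textbook definition to the three repeated Ra readings of the confirmation test in Table 4.

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi S/N ratio (dB) for a smaller-is-better response."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Repeated Ra readings (µm) of the confirmation test, Table 4
print(sn_smaller_is_better([0.269, 0.262, 0.196]))   # higher S/N = better
```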
The results indicated that, among the selected parameters and levels, the coating type is the most effective parameter for the surface roughness; the results were provided at the 95% confidence level. After determining the optimum levels of the processing parameters and tool edge radius, these levels should be verified, and repeated machining processes were applied for the verification. The repeated surface roughness measurements taken from the workpiece are reported in Table 4. The experimental results are in agreement with the results predicted by the proposed method: the experimental surface roughness values were comparable with the predicted values, and the mathematical model results and the measured surface roughness values of each experiment were similar. The authors also calculated a similar model in a previous study [18]; the models presented in this paper predict the final surface quality more accurately by including edge radius wear effects. An ANOVA test was conducted to investigate the most effective process parameters for the first-step milling operation results, and the coating type was determined to be the most significant parameter. The analyses showed that the ZrN-type coating provides the best surface quality, i.e. the minimum surface roughness of the workpiece. Increasing the cutting speed decreases the surface quality, as observed in previous studies [19, 20]; in this experimental work, the minimum cutting speed of 50 m/min resulted in the minimum surface roughness. The S/N ratio graph shows that the best surface roughness was recorded at a 0.75 mm depth of cut (Figure 2). The depth of cut has a remarkable impact on the cutting forces and introduces thermal changes during the machining process; it affects the chip flow angle, which is crucial for heat dissipation. Thus, the depth of cut is a significant parameter for the surface quality and the tool lifetime [21-23]. As indicated in [24, 25], decreasing the feed rate increases the surface quality; the experiments in this study also confirmed that the minimum feed rate leads to the minimum surface roughness of end-milled products.

Table 2. Experimental control factors and their levels.
Control factor          | 1st milling operation           | 2nd milling operation
                        | level 1 | level 2 | level 3     | level 1 | level 2 | level 3
Radius (mm)             | 0.5     | 1       | 1.5         | 0.5     | 1       | 1.5
Coating type            | AlTiN   | AlTiCN  | ZrN         | AlTiN   | AlTiCN  | ZrN
Cutting speed (m/min)   | 50      | 75      | 100         | 100     | 150     | 200
Depth of cut (mm)       | 0.50    | 0.75    | 1.00        | 0.50    | 0.75    | 1.00
Feed rate (mm/tooth)    | 0.05    | 0.10    | 0.15        | 0.10    | 0.20    | 0.30

Figure 1. Optical microscope images of ZrN-coated tools of 1.0 mm and 1.5 mm nominal radii, respectively.
Figure 2. S/N graph for surface roughness for the 1st set of operations.

Regression analysis was used to understand the relationship between the selected parameters and the surface roughness deviations. The calculated mathematical model for Ra is a linear regression on the initial edge radius rε0, the edge radius after the 1st milling operation rε1, the cutting speed vc, the depth of cut ap and the feed rate fz, with fitted coefficients 1.1766, 1.7502, 0.0058, 1.1988 and 4.2456 (1). As indicated by model (1), the regression incorporates the effect of the edge radii, cutting speed, depth of cut and feed rate on the surface roughness of the workpiece. The authors strongly recommend the use of the suggested coating type, i.e. ZrN, when using the regression model considering the Taguchi results.
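The regression step itself amounts to an ordinary least-squares fit of Ra on the chosen factors. The sketch below shows this with a generic design matrix; the six rows are placeholders and would be replaced by the 27 runs of Table 3 (factor levels converted to physical values), after which the coefficient of determination can be compared with the 85.53% reported below.

```python
import numpy as np

# Placeholder design matrix, columns [r_e (mm), v_c (m/min), a_p (mm), f_z (mm/tooth)]
X = np.array([[0.5,  50, 0.50, 0.05],
              [0.5,  75, 0.75, 0.10],
              [1.0, 100, 1.00, 0.15],
              [1.0,  50, 0.75, 0.15],
              [1.5,  75, 1.00, 0.05],
              [1.5, 100, 0.50, 0.10]])
ra = np.array([0.58, 0.51, 0.65, 0.47, 0.37, 0.45])   # placeholder Ra values (µm)

A = np.column_stack([np.ones(len(X)), X])             # intercept + linear terms
coef, *_ = np.linalg.lstsq(A, ra, rcond=None)
ra_hat = A @ coef
r2 = 1.0 - np.sum((ra - ra_hat) ** 2) / np.sum((ra - ra.mean()) ** 2)
print(coef)                # intercept and one coefficient per factor
print(f"R^2 = {r2:.4f}")   # share of variability explained by the model
```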
The edge radius change, Δrε1, was also determined to be an important parameter during the machining process according to the Taguchi method. According to the results, when the edge radius increases the surface quality improves, in relation to the depth of cut values; on the other hand, when the feed rate decreases the surface quality increases. The feed rate appeared to be the most effective parameter after the coating type for a better surface quality. With the created regression model, 85.53% of the total variability of the deviation can be explained by (1). The 2nd milling operations were conducted using more aggressive processing parameters, doubling vc and fz. The results of the 2nd Taguchi analysis verified the optimum process parameters found in the 1st operation: the S/N ratio graph indicates that the 50 m/min minimum cutting speed, 0.75 mm depth of cut and 0.05 mm/tooth minimum feed rate resulted in the minimum surface roughness values (Figure 3). In addition, a TiAlCN-type coating was suggested based on these results. A regression analysis model was also run to understand the relationship between the process parameters and the surface roughness for the 2nd milling operation. The calculated mathematical model for Ra is again a linear regression on the edge radii, cutting speed, depth of cut and feed rate, with fitted coefficients 0.2610, 4.7204, 0.0054, 0.2505 and 3.9257 (2); 88.96% of the total variability of the deviation can be explained by this model.

Figure 3. S/N graph for surface roughness for the 2nd set of operations.

Table 3. Orthogonal array L27 and the results. rε0: edge radius before milling; rε1: edge radius after the 1st milling operation; rε2: edge radius after the 2nd milling operation. Control factor levels: radius, coating, cutting speed vc, depth of cut ap, feed rate fz.
Run | Radius | Coating | vc | ap | fz | Mean (1st) Ra (µm) | rε1-rε0 (mm) | Mean (2nd) Ra (µm) | rε2-rε1 (mm)
1   | 1 | 1 | 1 | 1 | 1 | 0.581 | 0.010 | 1.010 | 0.005
2   | 1 | 1 | 1 | 1 | 2 | 1.082 | 0.002 | 0.953 | 0.005
3   | 1 | 1 | 1 | 1 | 3 | 0.617 | 0.005 | 1.290 | 0.010
4   | 1 | 2 | 2 | 2 | 1 | 0.376 | 0.016 | 0.590 | 0.000
5   | 1 | 2 | 2 | 2 | 2 | 0.355 | 0.008 | 0.811 | 0.008
6   | 1 | 2 | 2 | 2 | 3 | 0.639 | 0.006 | 1.321 | 0.007
7   | 1 | 3 | 3 | 3 | 1 | 0.528 | 0.005 | 2.161 | 0.087
8   | 1 | 3 | 3 | 3 | 2 | 0.412 | 0.038 | 0.728 | 0.031
9   | 1 | 3 | 3 | 3 | 3 | 0.648 | 0.004 | 0.922 | 0.005
10  | 2 | 1 | 2 | 3 | 1 | 0.689 | 0.008 | 1.360 | 0.012
11  | 2 | 1 | 2 | 3 | 2 | 1.099 | 0.338 | 0.855 | 0.337
12  | 2 | 1 | 2 | 3 | 3 | 0.992 | 0.010 | 1.350 | 0.009
13  | 2 | 2 | 3 | 1 | 1 | 0.487 | 0.023 | 0.694 | 0.014
14  | 2 | 2 | 3 | 1 | 2 | 0.509 | 0.003 | 0.502 | 0.021
15  | 2 | 2 | 3 | 1 | 3 | 0.684 | 0.000 | 0.679 | 0.003
16  | 2 | 3 | 1 | 2 | 1 | 0.277 | 0.041 | 0.274 | 0.011
17  | 2 | 3 | 1 | 2 | 2 | 0.340 | 0.031 | 0.589 | 0.007
18  | 2 | 3 | 1 | 2 | 3 | 0.839 | 0.021 | 1.054 | 0.009
19  | 3 | 1 | 3 | 2 | 1 | 0.366 | 0.000 | 0.448 | 0.021
20  | 3 | 1 | 3 | 2 | 2 | 0.854 | 0.009 | 0.883 | 0.006
21  | 3 | 1 | 3 | 2 | 3 | 0.758 | 0.001 | 0.895 | 0.002
22  | 3 | 2 | 1 | 3 | 1 | 0.250 | 0.019 | 0.504 | 0.015
23  | 3 | 2 | 1 | 3 | 2 | 0.770 | 0.013 | 0.697 | 0.007
24  | 3 | 2 | 1 | 3 | 3 | 0.472 | 0.001 | 1.025 | 0.013
25  | 3 | 3 | 2 | 1 | 1 | 0.271 | 0.003 | 0.419 | 0.008
26  | 3 | 3 | 2 | 1 | 2 | 0.491 | 0.007 | 0.494 | 0.012
27  | 3 | 3 | 2 | 1 | 3 | 0.697 | 0.002 | 0.743 | 0.004

Table 4. Confirmation test results.
Control factors and levels: radius 1.5 mm, cutting speed 50 m/min, depth of cut 0.75 mm, feed rate 0.05 mm/tooth.
Roughness values (µm): exp. 1: 0.269; exp. 2: 0.262; exp. 3: 0.196; predicted: 0.283.
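The level means behind the S/N plots are easy to reproduce from Table 3. As an example, the sketch below computes the mean Ra of the first milling operation at each coating level, using the first nine runs of the table (coating levels 1, 1, 1, 2, 2, 2, 3, 3, 3); extending it to all 27 runs and all factors gives the main-effects analysis underlying Figures 2 and 3.

```python
import numpy as np

def level_means(levels, response):
    """Mean response at each factor level: the basic Taguchi main effect."""
    levels = np.asarray(levels)
    response = np.asarray(response, dtype=float)
    return {int(lv): response[levels == lv].mean() for lv in np.unique(levels)}

coating = [1, 1, 1, 2, 2, 2, 3, 3, 3]                     # runs 1-9, Table 3
ra_1st = [0.581, 1.082, 0.617, 0.376, 0.355, 0.639,
          0.528, 0.412, 0.648]
print(level_means(coating, ra_1st))   # mean Ra per coating level (µm)
```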
4. Conclusions
This experimental work concerns the optimization of the processing parameters, using different coating materials combined with different tool radii, to obtain the minimum Ra values for the end milling of Al 7075 alloys. The Taguchi method was used for the experimental design. Tool edge radii and machined surface roughness were measured by means of high-precision measurement techniques, and the surface roughness values were used as the output of the analysis. The experimental results and the regression models confirmed that increased wear of the edge radii decreases the surface quality of the machined workpiece. According to the proposed optimization method, the optimum process parameters (the minimum cutting speed, the average depth of cut, the minimum feed rate and the maximum edge radius) provided the best surface quality for the Al 7075 alloy. Moreover, according to the regression analysis results, ZrN coatings for relatively low cutting speeds, and TiAlCN coatings for cutting speeds higher than 100 m/min, gave the best surface roughness performance. In addition, a relationship between the workpiece surface roughness and the cutting tool edge radius was found from the experimental runs. In conclusion, the tool edge radius wear was found to be a crucial parameter, together with the feed rate, the depth of cut and the cutting speed. This study provides an insight into the relationship between the wear mechanism of the tools and the surface quality of the workpiece, according to the processing parameters.

Acknowledgement
This research has been supported by Yildiz Technical University, Scientific Research Projects Coordination Department, Istanbul, Turkey, project number 2012-06-01-KAP01. The authors are thankful to Milmak Metal, Istanbul, and DMG Mori Seiki, Istanbul, Turkey, for their support.

References
[1] A. Weckenmann, K. Nalkntic, Precision measurement of cutting tools with two matched optical 3D sensors, Manuf. Techn. 52 (2003), pp. 443-446. DOI: 10.1016/S0007-8506(07)60621-0
[2] ISO 9001:2008, Quality management systems - Requirements.
[3] X. P. Li, M. Rahman, K. Liu, K. S. Neo, C. C. Chan, Nano-precision measurement of diamond tool edge radius for wafer fabrication, J. Mater. Process. Techn. 140 (2003), pp. 358-362. DOI: 10.1016/S0924-0136(03)00757-X
[4] I. Durazo-Cardenas, P. Shore, X. Luo, T. Jacklin, S. A. Impey, A. Cox, 3D characterization of tool wear whilst diamond turning silicon, Wear 262 (2007), pp. 340-349. DOI: 10.1016/j.wear.2006.05.022
[5] B. Rao, Y. C. Shin, Analysis on high speed face milling of 7075-T6 Al using carbide and diamond cutters, Int. J. Mach. Tool. Manu. 41 (2001), pp. 1763-1781. DOI: 10.1016/S0890-6955(01)00033-5
[6] G. L. Chern, Y. C. Chang, Using two-dimensional vibration cutting for micro-milling, Int. J. Mach. Tool. Manu. 46 (2006), pp. 659-666. DOI: 10.1016/j.ijmachtools.2005.07.006
[7] J. Kopac, P. Krajnik, Robust design of flank milling parameters based on grey-Taguchi method, J. Mater. Process. Techn. 191 (2007), pp. 400-403. DOI: 10.1016/j.jmatprotec.2007.03.051
[8] A. Rivero, L. N. López de Lacalle, M. L. Penalva, Tool wear detection in dry high-speed milling based upon the analysis of machine internal signals, Mechatronics 18 (2008), pp. 627-633. DOI: 10.1016/j.mechatronics.2008.06.008
[9] J. W. Dang, W. H. Zhang, Y. Yang, M. Wan, Cutting force modelling for flat end milling including bottom edge cutting effect, Int. J. Mach. Tool. Manu. 50 (2010), pp. 986-997. DOI: 10.1016/j.ijmachtools.2010.07.004
[10] M. Wan, M. S. Lu, W. H. Zhang, Y. Yang, A new ternary-mechanism model for the prediction of cutting forces in flat end milling, Int. J. Mach. Tool. Manu. 57 (2012), pp. 34-45. DOI: 10.1016/j.ijmachtools.2012.02.003
[11] B. T. H. T. Baharudin, M. R. Ibrahim, N. Ismail, Z. Leman, M. K. A. Ariffin, D. L. Majid, Experimental investigation of HSS face milling to AL6061 using Taguchi method, Procedia Engineering 50 (2012), pp. 933-941. DOI: 10.1016/j.proeng.2012.10.101
[12] Y. Fowlkes, C. M. Creveling, Engineering methods for robust product design, using Taguchi methods in technology and product development, Addison-Wesley, 1995, ISBN 0-201-63367-1.
[13] S. M. Afazov, S. M. Ratchev, J. Segal, Modelling and simulation of micro-milling cutting forces, J. Mater. Process. Techn. 210 (2010), pp. 2154-2162. DOI: 10.1016/j.jmatprotec.2010.07.033
[14] H. Adeel, N. Suhail, S. Ismail, V. Wong, N. A. Abdul Jalil, Cutting parameters identification using multi adaptive network based fuzzy inference system: an artificial intelligence approach, Scientific Research and Essays 6(1) (2011), pp. 187-195.
[15] J. A. Ghani, I. A. Choudhury, H. H. Hassan, Application of Taguchi method in the optimization of end milling parameters, J. Mater. Process. Techn. 145 (2004), pp. 84-92. DOI: 10.1016/S0924-0136(03)00865-3
[16] M. Nalbant, H. Gökkaya, G. Sur, Application of Taguchi method in the optimization of cutting parameters for surface roughness in turning, Mater. Design 28 (2007), pp. 1379-1385. DOI: 10.1016/j.matdes.2006.01.008
[17] S. H. Park, Robust design and analysis for quality engineering, Chapman & Hall, London, 1996, ISBN 0-412-55620-0.
[18] M. N. Durakbaşa, A. Akdoğan, A. S. Vanlı, A. Günay, Tip radius effects on surface roughness of end milled Al 7075 using Taguchi and regression methods, 12th IMEKO TC10 Workshop on Technical Diagnostics, id 0015/p.106, 2013.
[19] H. M. Lin, Y. S. Liao, C. C. Wei, Wear behavior in turning high hardness alloy steel by CBN tool, Wear 264 (2008), pp. 679-684. DOI: 10.1016/j.wear.2007.06.006
[20] B. A. Khidhir, B. Mohamed, Study of cutting speed on surface roughness and chip formation when machining nickel-based alloy, J. Mech. Sci. Technol. 24(5) (2010).
[21] J.-S. Chen, Y.-K. Huang, M.-S. Chen, Feed rate optimization and tool profile modification for the high-efficiency ball-end milling process, Int. J. Mach. Tool. Manu. 41 (2001), pp. 1763-1781. DOI: 10.1016/j.ijmachtools.2004.11.020
[22] I. Buj-Corral, J. Vivancos-Calvet, A. Domínguez-Fernández, Surface topography in ball-end milling processes as a function of feed per tooth and radial depth of cut, Int. J. Mach. Tool. Manu. 53(1) (2012), pp. 151-159. DOI: 10.1016/j.ijmachtools.2011.10.006
[23] B. Avishan, S. Yazdani, D. Jalali Vahid, The influence of depth of cut on the machinability of an alloyed austempered ductile iron, Mat. Sci. Eng. A 523 (2009), pp. 93-98. DOI: 10.1016/j.msea.2009.05.044
[24] N. Agarwal, Surface roughness modeling with machining parameters (speed, feed and depth of cut) in CNC milling, MIT Int. J. Mach. Eng. 2 (2012), pp. 55-61.
[25] O. Colak, C. Kurbanoğlu, M. C. Kayacan, Milling surface roughness prediction using evolutionary programming methods, Mater. Design 28 (2007), pp. 657-666. DOI: 10.1016/j.matdes.2005.07.004
Development and verification of self-sensing structures printed in additive manufacturing: a preliminary study
Antonino Quattrocchi1, Roberto Montanini1
1 Department of Engineering, University of Messina, C.da di Dio, 98166 Messina, Italy
Section: research paper
Keywords: fiber Bragg grating sensor; stereolithography; embedded sensors; 3D printing monitoring; stress-strain measurements
Citation: Antonino Quattrocchi, Roberto Montanini, Development and verification of self-sensing structures printed in additive manufacturing: a preliminary study, Acta IMEKO, vol. 12, no. 2, article 25, June 2023, identifier: IMEKO-ACTA-12 (2023)-02-25
Section editors: Alfredo Cigada, Politecnico di Milano, Italy; Andrea Scorza, Università degli Studi Roma Tre, Italy
Received December 14, 2022; in final form February 27, 2023; published June 2023
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by the Ministry of University and Research in Italy (MUR) under the research project of the National Operative Program (PON "Ricerca e Innovazione 2014-2020") "NAUSICA: efficient ships through the use of innovative and low carbon technological solutions".
Corresponding author: Antonino Quattrocchi, e-mail: antonino.quattrocchi@unime.it

Abstract: Additive manufacturing (AM) is used today to fabricate complex structures as well as to demonstrate innovative design concepts. This opens new horizons in the field of structural health monitoring (SHM), allowing a high-performing design to be correlated with real-time detection and identification of structural damage. Fiber optical sensors, such as fiber Bragg gratings (FBGs), are an effective option for this type of application. The present work discusses the development of a demonstrative self-sensing structure, obtained by embedding an FBG sensor during the 3D stereolithographic (SLA) printing process. The paper reports the strategies developed to ensure correct adhesion of the FBG sensor embedded in the structure and the experimental tests used to validate the structural response of the self-sensing specimen. The output signal of the FBG sensor was continuously recorded during the different creation phases: this allowed real-time monitoring of the whole AM process (i.e. printing, washing and curing stages). The results showed that the self-sensing demonstrative structure was able to effectively monitor the thermo-mechanical behavior of the AM process and to guarantee the correct identification and measurement of the strain when the structure was subjected to a controlled stress.

1. Introduction
Technological innovation is increasingly oriented towards the development of systems that allow the health state of structures to be monitored (structural health monitoring, SHM). In this way, structural damage can be detected early and identified by means of sensors integrated into the structure itself, which are therefore able to evaluate the degradation in real time [1]. Such sensors are preferred to conventional transduction devices because they are protected from atmospheric agents and can be placed more easily in critical points of the structure, including areas that are difficult to reach from the outside [2]. On the other hand, their installation must be foreseen when the structure is designed, or added as retrofitting, since an easy intervention to restore a malfunctioning sensor is not possible [3]. Fiber optical sensors, specifically fiber Bragg gratings (FBGs), are suitable for the production of smart structures [4]. They have numerous advantages, such as low intrusiveness, immunity to electromagnetic interference, self-referentiality, absence of corrosion and the possibility of creating sensor networks with many measurement points. The main drawbacks concern the significant influence of temperature, the limited curvature that can be imparted to the fiber without signal losses, and the relatively high cost of both the sensors and the interrogation devices necessary for the measurement [5]-[7]. An FBG sensor can be installed on different materials by applying ribbon tapes, glues or epoxy resins. However, these methods are time-consuming, require different molds or tools and induce variability in the measurement of the considered property [8]-[10]. Alternatively, in the case of composites and reinforced concretes, the integration of such sensors can be performed during the stratification of the structure [11], [12].
alternatively, in the case of composites and reinforced concretes, the integration of such sensors can be performed during the stratification of the structure [11], [12]. the rapid diffusion of additive manufacturing (am) in a perspective of industry 4.0 opens new possibilities in the field of shm, enabling the design of complex and topologically optimized structural elements [13], [14]. nowadays several am methods are available, among which the most popular are fused deposition modeling (fdm), stereolithography (sla) and selective laser sintering (sls). although in all cases the objects are produced by sequential layer deposition of material, there are substantial differences in the stratification process [15], [16]. in this context, the use of a fbg sensor to provide a self-sensing capability to a 3d printed structure assumes great relevance. among the first researchers, zubel et al. [17] demonstrated the possibility of incorporating fbg sensors into a patch to estimate strain and temperature. lima et al. [18] proposed a cantilever structure to evaluate displacement, temperature and acceleration. lesiak et al. [19] verified this approach also to produce a measuring head implanted into a mechanical transmission element, while zhang et al. [20] created a plantar pressure sensing platform. a large part of the literature discusses the case of fdm-printed structures, where the incorporation of the fbg sensors generally follows two different procedures. in the first one [21], the whole structure is printed and equipped with a small channel, where the fbg sensor is inserted and glued. in the second one, the structure is only partially realized, temporarily stopping the printing; the fbg sensor is then placed directly onto the superficial layer [22], [23] or inside a small housing channel [24], and the stratification of the remaining layers is finally completed. in this way, the molding of the polymer filament is exploited to cover the fbg sensor and connect it to the structure.
currently, few articles discuss the case of sla, where pausing the process is more complex. in fact, the manufactured object is alternately immersed in and extracted from the liquid bath, in order to create each single layer and to mix the printing material, respectively. such printing material is generally a resin, characterized by a high viscosity in the liquid state. consequently, the solidified layer always remains wet, and this is the main cause of the difficulty of correctly placing the fbg sensor [25]. recently, manzo et al. [26] analytically and experimentally investigated two different applications obtained by sla, a reduced-scale frame structure and a pressurized cylindrical vessel. the authors embedded the fbg sensor by filling a thin channel with the printing material (resin). this required considerable attention to avoid air bubbles, as well as the need to cure the resin inserted into the channel. the aim of this work is to discuss the development of a self-sensing structure with an embedded fbg, using sla. at first, two different procedures for the integration of the fbg sensor are discussed, highlighting their advantages and disadvantages. after identifying the best configuration, the fbg sensor was exploited to monitor the whole am process (i.e. printing, washing and curing stages) of the self-sensing structure, also making use of additional strategies to compensate for the effects of temperature. finally, the structural response of the realized self-sensing sample was evaluated by subjecting it to a controlled stress.

2. materials and methods

2.1. printing of the self-sensing structures

the representative test sample of the developed self-sensing structure was manufactured by an sla printer (mod. form 2, formlabs), using a photopolymer resin (ppr) (mod. black v4, formlabs) and a layer resolution of 50 µm. its morphology was chosen in accordance with the type iv specimen reported in the astm d638-14 standard, which is designed to produce tensile property data for the evaluation of plastic materials. to allow the self-sensing feature, a fbg sensor (mod. smf-28, technica sa) was embedded into the sample. geometrically, it had a diameter of about 242 µm, comprising a core of (8.2 ± 0.1) µm, a cladding of (125.0 ± 0.7) µm and an acrylate coating of (242 ± 5) µm, with a grating length of 14 mm. optically, it presented a nominal wavelength of (1559.98 ± 0.05) nm in transmission and reflection, a full-width at half maximum (fwhm) of 0.26 nm, a reflective bandwidth (rbw) @ -3 db of 0.26 nm and a reflectivity of 85.65 %. specifically, the printing stage was interrupted when the central layer of the test sample was reached, and the fbg sensor was carefully placed along the longitudinal axis of the sample. furthermore, some supports were also printed for pre-tensioning the fbg sensor and ensuring its correct blocking. once this step was ended, the printing was restarted, completing the structure. at the end of the whole process, the printing supports were removed, and the test sample was first washed in isopropyl alcohol for 20 min and then cured under uv for 30 min (15 min for each side) at 60 °c.
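as an aside, the nominal wavelength quoted above fixes the grating pitch through the bragg condition λB = 2·neff·Λ; a quick numerical check, assuming a typical effective refractive index of about 1.447 for smf-28 fibre (a value not stated in the paper):

# bragg condition: Lambda = lambda_B / (2 * n_eff)
lambda_b = 1559.98e-9   # nominal bragg wavelength in m (from the paper)
n_eff = 1.447           # effective refractive index (typical for smf-28, assumed)
print(lambda_b / (2.0 * n_eff) * 1e9)   # grating pitch, ~539 nm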
together with the test sample, two additional compensation samples (compensation samples a and b), nominally identical to the previous one, were printed (figure 1). a second fbg sensor (mod. smf-28, technica sa, nominal wavelength of (1556.31 ± 0.05) nm), encapsulated into a needle [27], and a k-type thermocouple were respectively embedded, adopting the same procedures already described. their function was to acquire the thermal trend during the manufacturing process, allowing the apparent strain to be assessed. the choice of this specific fbg was mainly determined by the fact that it had already been investigated in [27], although calibrated as a thermometer.

figure 1. details of the samples during the sla printing process (compensation samples a and b, test sample, thermocouple, supports, fbg sensor, needle).

2.2. experimental setup and measurement procedures

the optical signal from the two fbg sensors was acquired by a spectrum analyzer (mod. si 720, micron optics) and an optical interrogator (mod. di 410, hbm), characterized by accuracies of ± 3 pm and ± 1 pm, respectively. instead, a specific data acquisition system (daq, mod. usb-temp, measurement computing) was used to record the output voltage from the thermocouple. after the curing stage, the test sample was left to rest for 30 min at room temperature, to guarantee its thermal stabilization, and then subjected to a tensile stress by means of a testing machine (mod. electroplus e3000, instron). a load cell of ± 5 kn and an extensometer (mod. extensometer 12.5 mm class b-2, instron) were used to measure the applied force and the consequent strain, respectively (figure 2). the testing parameters, in accordance with astm d638-14, are shown in table 1.

3. preliminary analysis

preliminary tests were carried out to evaluate the aspects related to the embedding process of the fbg sensor into the test sample.

3.1. embedding process of the sensor in the self-sensing structure

at the beginning, three compact specimens, i.e. without the fbg sensor, were printed and subjected to a tensile test to determine their mechanical characteristics: an elastic modulus of about (3.53 ± 0.09) gpa and an elastic range up to 300 n were measured. after that, two different ways of embedding the fbg sensor into the test sample were investigated (figure 3). in the first one (configuration with seat), an optical fiber without a bragg grating and with the same dimensions as the fbg sensor was placed and pre-tensioned in a circular seat. such a seat, having a radius of 150 µm, was provided on the longitudinal axis of the test sample in correspondence of its central layer. in the second one (configuration without seat), the optical fiber was simply laid and pre-tensioned in the middle of the central layer. hence, six further specimens, three for each configuration, were tested. these strategies differ from those reported in the literature for sla. in [26], the main drawback is the difficulty of guaranteeing an optimal incorporation of the fbg sensor: if the thickness and the width of the printed structure are large, there is a significant risk of an incomplete polymerization process and therefore of an imperfect transfer of the load to the fbg sensor itself.
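the elastic modulus quoted above follows from a linear fit of the stress-strain curve in the elastic range; a minimal sketch of such a fit, on made-up sample points chosen only to mimic the order of magnitude reported here:

import numpy as np

# illustrative stress-strain points in the elastic range (made-up values)
strain = np.array([0.0005, 0.0010, 0.0015, 0.0020, 0.0025])  # mm/mm
stress = np.array([1.77, 3.52, 5.30, 7.05, 8.81])            # mpa

# least-squares line: stress = e * strain + offset, slope = elastic modulus
slope_mpa, offset = np.polyfit(strain, stress, 1)
print(slope_mpa / 1000.0)   # ~3.52 gpa, comparable with the measured value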
3.2. mechanical characteristics of the embedding process

the configuration with seat (figure 4 a) highlighted a lack of continuity, caused by the interruption of the printing to allow the positioning of the optical fiber. after the tensile test (figure 4 c), the optical fiber shows a marked fragmentation of its cladding, proof of a good embedding into the printing material. on the other hand, the configuration without seat (figure 4 b), in addition to presenting the same lack of continuity for the reasons already described, generates a relevant distortion of the upper layers of the specimen. finally, at failure (figure 4 d), the strain in the optical fiber suggests a lower adherence to the printing material than in the previous case. figure 5 shows the force-displacement curves, obtained from the tensile test, for the compact specimens and for the specimens with and without seat. the presence of the optical fiber inside the specimens leads to a slight weakening of the self-sensing structures in terms of load and elongation at failure. this is probably attributable to a higher speed of propagation of the crack, in turn due to the introduction of a discontinuity (the optical fiber) into the material. quantitative data are reported in table 2. specifically, the configuration with seat presents a lower reduction of the fracture load (4.4 %) than the configuration without seat (7.1 %). this is caused, in a way that cannot be separated, by the polymerization of the ppr inside the seat and by the distortion of the layers above the optical fiber. however, in the elastic range, all the specimens show a rather limited variation of the elastic modulus (-1.6 % for the specimens with seat and -3.5 % for those without seat, referred to the average maximum stress in the elastic range of the compact specimens). for this reason, in terms of self-sensing, the different configurations are sufficiently superimposable (figure 6). taking into account the performed preliminary analysis, and considering that the seat also acts as a visual guide during the placing and pre-tensioning phases of the fbg sensor, the configuration with seat was chosen as the most suitable one.

figure 2. details of the validation tests (extensometer, sample, load cell).

table 1. testing parameters.
parameter (unit) | value
sampling frequency of testing machine (hz) | 10
preload (n) | 10
loading speed (mm/min) | 1
sampling frequency of optical interrogator (hz) | 10
sampling frequency of spectrum analyser (hz) | 100
sampling frequency of temperature daq (hz) | 10

figure 3. scheme of configuration a) with seat and b) without seat; images not to scale.

figure 4. typical microscope images of two representative specimens of the configuration with seat (a and c) and without seat (b and d), before and after the tensile test, respectively.

4. results

in this section, results concerning the monitoring of the 3d printing process and the mechanical behaviour (elastic stage) of the self-sensing assembled structure are reported.

4.1. monitoring of the stereolithographic printing process

figure 7 displays the trends of the characteristic wavelength of the fbg sensors embedded into the test sample and into compensation sample a as a function of the elapsed time during the printing stage. locally, both curves are characterized by a cyclic trend due to the movement of the build platform, i.e. its immersion in the ppr bath, the generation of the single layer and its emersion (figure 8). however, as expected, the fbg sensor embedded into compensation sample a is less affected by this condition.
in fact, its encapsulation inside a needle considerably reduces the strain to which it is subject, favoring the measurement of the apparent strain caused by the variation of the temperature. generally, the fbg sensor in the test sample exhibits a reduction in the pitch of the bragg grating, attributable to a state of compression induced by the polymerization of the ppr. figure 9 compares the trends of the characteristic wavelength of the fbg sensor in compensation sample a and of the thermocouple in compensation sample b during the printing stage. in this case, the two signals present a good coherence up to 700 s from the printing re-start, i.e. from the printing start of the second half of the sample. after that, a visible divergence occurs, probably because of a localized strain of the needle due to the excessive heating of the layers above the center line and the consequent state of compression induced by the polymerization of the ppr. in conclusion, the signal of the fbg sensor in the test sample was suitably compensated by the apparent strain recorded by the second fbg in compensation sample a.

figure 5. force-displacement curves for the compact specimens (cs) and for the specimens with seat (sws) and without seat (sns).

table 2. results of the tensile tests for the different types of specimens.
type of specimen | average fracture load in n | standard deviation in n | reduction* of fracture load in %
compact specimens | 1647 | 11 | -
specimens with seat | 1574 | 13 | 4.4
specimens without seat | 1530 | 14 | 7.1
* referred to the average fracture load of the compact specimens.

figure 6. average trends of the stress-strain curves in the elastic range for compact specimens (cs) and specimens with seat (sws) and without seat (sns); linear fits: ycs = 3519.3782 xcs + 0.1316 (r² = 0.9993), ysws = 3463.9010 xsws + 0.1290 (r² = 0.9996), ysns = 3397.8852 xsns + 0.1882 (r² = 0.9992).

figure 7. time history of the characteristic wavelength of the fbg sensors embedded into the test sample (λfbg) and into compensation sample a (λtc) during the printing stage.

figure 8. detail of the time history of the characteristic wavelength of the fbg sensor embedded into the test sample (λfbg) during the printing stage: (i) = start of immersion of the build platform in the ppr, (ii) = generation of the single layer, (iii) = start of emersion of the build platform from the ppr.

figure 10 presents the compensated signal along the whole manufacturing process of the self-sensing structure. the major differences between the raw signal (λfbg) and the compensated one (λfbg_compensated) are visible in the printing and curing stages. instead, in the washing and resting stages the curves are sufficiently superimposable after an initial transient. furthermore, all stages, except for the curing one, are subject to a reduction in the characteristic wavelength and therefore to a consequent state of compression. globally, these trends are attributable to the effects of the polymerization of the single layers (compression) and the hardening of the surface of the structure (relaxation).
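the compensation described above amounts to subtracting the relative wavelength shift of the reference grating (the apparent, thermally induced strain) from that of the embedded grating before converting to strain; a minimal sketch of this arithmetic, with the strain gauge factor taken as a typical assumed value and under the assumption that both gratings see the same temperature:

import numpy as np

L0_TEST = 1559.98e-9   # nominal wavelength of the embedded fbg in m (from the paper)
L0_COMP = 1556.31e-9   # nominal wavelength of the reference fbg in m (from the paper)
K_EPS = 0.78           # strain gauge factor (1 - p_e), typical value, assumed

def compensated_strain(lam_test, lam_comp):
    """mechanical strain after removing the apparent strain measured by the
    temperature-reference fbg; both relative shifts contain the same thermal
    term, so their difference isolates the mechanical part."""
    rel_test = (np.asarray(lam_test) - L0_TEST) / L0_TEST
    rel_comp = (np.asarray(lam_comp) - L0_COMP) / L0_COMP
    return (rel_test - rel_comp) / K_EPS

# example on two fictitious samples of the recorded wavelengths (m)
print(compensated_strain([1555.020e-9, 1553.098e-9], [1556.10e-9, 1556.05e-9]))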
table 3 shows the values of the characteristic wavelengths of the fbg sensor in the test sample at specific times (figure 10). conservatively, the uncertainty of the characteristic wavelength of the fbg sensor embedded into the test sample (λfbg) was estimated as type b [28], based on the accuracy of the spectrum analyzer used for these measurements. therefore, a uniform (i.e. rectangular) distribution with a half-width of ± 0.003 nm was considered, from which a type b uncertainty of about ± 0.0017 nm was estimated.

4.2. mechanical performance of the self-sensing structure

figure 11 exhibits the spectrum of the fbg sensor as the load on the test sample varies, while remaining in the elastic range. the tensile phase induces a shift of the peak towards greater wavelengths and a reduction of the magnitude of the optical signal, due to the increase of the pitch of the bragg grating (table 4). in this case the uncertainty was estimated as type a [28] on the magnitude of the optical signal at 300 n, with a value of about 0.01 db. finally, figure 12 reports the calibration curve relating to the tensile strain measured by the test sample. as is well known from the literature [29], and as can also be seen from figure 11, the plotted curve has a good linearity.

figure 9. time history of the characteristic wavelength of the fbg sensor embedded into compensation sample a (λtc) and of the temperature measured by the thermocouple in compensation sample b during the printing stage.

figure 10. time history of the characteristic wavelengths of the fbg sensor embedded into the test sample before (λfbg) and after compensation (λfbg_compensated) during the whole manufacturing process.

table 3. characteristic wavelengths of the fbg sensor embedded into the test sample before (λfbg) and after compensation (λfbg_compensated).
point | elapsed time in s | λfbg in nm | λfbg_compensated in nm | δλ* in nm
a | 1000 | 1555.020 | 1554.853 | 0.167
b | 3000 | 1553.098 | 1552.938 | 0.159
c | 5000 | 1551.578 | 1551.484 | 0.094
d | 7000 | 1547.718 | 1547.709 | 0.009
* computed as δλ = λfbg - λfbg_compensated.

figure 11. variation of the magnitude of the optical signal of the fbg sensor as a function of the wavelength, for different loads applied to the test sample.

table 4. peak wavelength and peak magnitude of the optical signal of the fbg sensor at different loads.
load in n | peak wavelength in nm | peak magnitude in db | standard deviation in db
0 | 1555.428 | -4.62 | 0.05
100 | 1557.516 | -4.86 | 0.05
200 | 1559.504 | -5.06 | 0.05
300 | 1561.368 | -5.27 | 0.05

figure 12. calibration curve of the fbg sensor used to measure the tensile strain on the test sample; linear fit: y = 1692.098 x + 1550.407 (r² = 0.9998).
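two of the numbers above can be reproduced directly: the type b value follows from the rectangular-distribution rule u = a/√3, and the figure 12 fit can be inverted to turn a measured peak wavelength into strain. a short sketch:

import math

# type b uncertainty from a rectangular distribution of half-width a = 0.003 nm
print(0.003 / math.sqrt(3.0))   # ~0.0017 nm, as quoted in the text

# inverting the figure 12 calibration fit: lambda = 1692.098 * strain + 1550.407
def strain_from_wavelength(lam_nm):
    """tensile strain (mm/mm) from the peak wavelength in nm."""
    return (lam_nm - 1550.407) / 1692.098

print(strain_from_wavelength(1556.000))   # e.g. 1556.000 nm -> ~3.3e-3 strain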
5. conclusions

the paper discusses the development of self-sensing structures printed in sla and equipped with a fbg sensor. the presented study allows to state that a fbg sensor can be suitably embedded into a structure printed in sla. this integration is optimal when a seat is created, since it guarantees the advantageous placing of the fbg sensor and provides a good interface for the transfer of the load, without the need for further post-production methods. in addition, this work shows how a fbg sensor can monitor the stress state induced in a structure throughout its printing process (i.e. printing, washing and curing stages). to obtain this, the use of compensation strategies, implemented through appropriate compensation sensors embedded in similar structures and subjected to the same process, is decisive. this represents an innovative investigation approach, as it allows obtaining information directly related to the printed product without resorting to more complex investigations that are often difficult to apply during the printing itself. finally, the self-sensing structure shows an appropriate structural response when subjected to a controlled stress. the main limitation of the proposed work is that, because of the movement of the build platform of the sla printer, the fbg sensor could be erroneously embedded into the structure and, consequently, the quantity of interest could not be correctly measured. such an issue is addressed here by adopting supports for pre-tensioning the fbg sensor, capable of ensuring its correct blocking. however, this procedure cannot always be easily applied, especially when the self-sensing structure has a complex geometry. furthermore, the fbg sensor could require a tortuous positioning, such as to disperse the light beam inside it, making the measurement impossible, or to produce its breakage, since it is constituted by a glass fiber. future developments will be focused on the study of the thermo-mechanical effects of the printing process variables on the self-sensing structure and on the monitoring of a complex and functional self-sensing structure, equipped with a multi-sensor fbg system.

references

[1] e. mesquita, p. antunes, f. coelho, p. andré, a. arêde, h. varum, global overview on advances in structural health monitoring platforms, journal of civil structural health monitoring 6 (2016), pp. 461-47. doi: 10.1007/s13349-016-0184-5
[2] h. rocha, c. semprimoschnig, j. p. nunes, sensors for process and structural health monitoring of aerospace composites: a review, engineering structures 237 (2021), art. no. 112231. doi: 10.1016/j.engstruct.2021.112231
[3] e. mesquita, a. arêde, r. silva, p. rocha, a. gomes, n. pinto, p. antunes, h. varum, structural health monitoring of the retrofitting process, characterization and reliability analysis of a masonry heritage construction, journal of civil structural health monitoring 7 (2017), pp. 405-428. doi: 10.1007/s13349-017-0232-9
[4] z. gao, x. zhu, y. fang, h. zhang, active monitoring and vibration control of smart structure aircraft based on fbg sensors and pzt actuators, aerospace science and technology 63 (2017), pp. 101-109. doi: 10.1016/j.ast.2016.12.027
[5] a. quattrocchi, r. montanini, m. latino, n. donato, development and characterization of a fiber bragg grating ethanol sensor for liquids, proc. of the 22nd international workshop on adc and dac modelling and testing, palermo, italy, 14-16 september 2020, pp. 55-59. online [accessed 11 april 2023] https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-11.pdf
[6] a. quattrocchi, r. montanini, m. latino, n. donato, pmma-coated fiber bragg grating sensor for measurement of ethanol in liquid solution: manufacturing and metrological evaluation,
acta imeko 10(2) (2021), pp. 133-138. doi: 10.21014/acta_imeko.v10i2.1052
[7] z. ma, x. chen, fiber bragg gratings sensors for aircraft wing shape measurement: recent applications and technical analysis, sensors 19 (2018), art. no. 55. doi: 10.3390/s19010055
[8] t. h. loutas, p. charlaftis, a. airoldi, p. bettini, c. koimtzoglou, v. kostopoulos, reliability of strain monitoring of composite structures via the use of optical fiber ribbon tapes for structural health monitoring purposes, compos. struct. 134 (2015), pp. 762-771. doi: 10.1016/j.compstruct.2015.08.100
[9] a. montero, i. s. de ocariz, i. lopez, p. venegas, j. gomez, j. zubia, fiber bragg gratings, it techniques and strain gauge validation for strain calculation on aged metal specimens, sensors 11 (2011), pp. 1088-1104. doi: 10.3390/s110101088
[10] l. f. ferreira, p. antunes, f. domingues, p. a. silva, p. s. andré, monitoring of sea bed level changes in nearshore regions using fiber optic sensors, measurement 45 (2012), pp. 1527-1533. doi: 10.1016/j.measurement.2012.02.026
[11] j. leng, a. asundi, structural health monitoring of smart composite materials by using efpi and fbg sensors, sensors actuators a 103 (2003), pp. 330-340. doi: 10.1016/s0924-4247(02)00429-6
[12] u. m. n. jayawickrema, h. m. c. m. herath, n. k. hettiarachchi, h. p. sooriyaarachchi, j. a. epaarachchi, fibre-optic sensor and deep learning-based structural health monitoring systems for civil structures: a review, measurement 199 (2022), art. no. 111543. doi: 10.1016/j.measurement.2022.111543
[13] m. malekzadeh, m. gul, f. n. catbas, use of fbg sensors to detect damage from large amount of dynamic measurements, topics on the dynamics of civil structures, volume 1, pp. 273-281, springer, new york, usa, 2012. doi: 10.1007/978-1-4614-2413-0_27
[14] l. ren, z.-g. jia, h.-n. li, g. song, design and experimental study on fbg hoop-strain sensor in pipeline monitoring, opt. fiber technol. 20 (2014), pp. 15-23. doi: 10.1016/j.yofte.2013.11.004
[15] g. allevi, l. capponi, p. castellini, p. chiariotti, f. docchio, f. freni, r. marsili, m. martarelli, r. montanini, s. pasinetti, a. quattrocchi, r. rossetti, g. rossi, g. sansoni, e. p. tomasini, investigating additive manufactured lattice structures: a multi-instrument approach, ieee transactions on instrumentation and measurement 69 (2019), pp. 2459-2467. doi: 10.1109/tim.2019.2959293
[16] a. quattrocchi, d. alizzio, l. capponi, t. tocci, r. marsili, g. rossi, s. pasinetti, p. chiariotti, a. annessi, p. castellini, m. martarelli, f. freni, a. di giacomo, r. montanini, measurement of the structural behaviour of a 3d airless wheel prototype by means of optical non-contact techniques, acta imeko 11(3) (2022), pp. 1-8. doi: 10.21014/acta_imeko.v11i3.1268
[17] m. g. zubel, k. sugden, d. j. webb, d. sáez-rodríguez, k. nielsen, o. bang, embedding silica and polymer fibre bragg gratings (fbg) in plastic 3d-printed sensing patches, microstructured and specialty optical fibres iv 9886 (2016), pp. 78-89. doi: 10.1117/12.2228753
[18] r. lima, r. tavares, s. o. silva, p. abreu, m. t. restivo, o. frazão, fiber bragg grating sensor based on cantilever structure embedded in polymer 3d printed material, proc. of the 25th optical fiber sensors conference (ofs), jeju, south korea, 24-28 april 2017, pp. 1-4.
doi: 10.1117/12.2264600
[19] p. lesiak, k. pogorzelec, a. bochenek, p. sobotka, k. bednarska, a. anuszkiewicz, t. osuch, m. sienkiewicz, p. marek, m. nawotka, t. r. woliński, three-dimensional-printed mechanical transmission element with a fiber bragg grating sensor embedded in a replaceable measuring head, sensors 22 (2022), art. no. 3381. doi: 10.3390/s22093381
[20] y. f. zhang, c. y. hong, r. ahmed, z. ahmed, a fiber bragg grating based sensing platform fabricated by fused deposition modeling process for plantar pressure measurement, measurement 112 (2017), pp. 74-79. doi: 10.1016/j.measurement.2017.08.024
[21] l. fang, t. chen, r. li, s. liu, application of embedded fiber bragg grating (fbg) sensors in monitoring health to 3d printing structures, ieee sensors journal 16 (2016), pp. 6604-6610. doi: 10.1109/jsen.2016.2584141
[22] y. k. lin, t. s. hsieh, l. tsai, s. h. wang, c. c. chiang, using three-dimensional printing technology to produce a novel optical fiber bragg grating pressure sensor, sensors mater. 28(5) (2016), pp. 389-394. doi: 10.18494/sam.2016.1189
[23] c. hong, y. yuan, y. yang, y. zhang, z. a. abro, a simple fbg pressure sensor fabricated using fused deposition modelling process, sensors and actuators a: physical 285 (2019), pp. 269-274. doi: 10.1016/j.sna.2018.11.024
[24] z. a. abro, c. hong, y. zhang, m. q. siddiqui, a. m. r. abbasi, z. abro, s. q. b. tariq, development of fbg pressure sensors using fdm technique for monitoring sleeping postures, sensors and actuators a: physical 331 (2021), art. no. 112921. doi: 10.1016/j.sna.2021.112921
[25] r. montanini, g. rossi, a. quattrocchi, d. alizzio, l. capponi, r. marsili, a. di giacomo, t. tocci, structural characterization of complex lattice parts by means of optical non-contact measurements, proc. of the 2020 ieee international instrumentation and measurement technology conference (i2mtc), dubrovnik, croatia, 25-28 may 2020, pp. 1-6. doi: 10.1109/i2mtc43012.2020.9128771
[26] n. r. manzo, g. t. callado, c. m. cordeiro, l. c. m. vieira, embedding optical fiber bragg grating (fbg) sensors in 3d printed casings, optical fiber technology 53 (2019), art. no. 102015. doi: 10.1016/j.yofte.2019.102015
[27] r. montanini, l. d'acquisto, simultaneous measurement of temperature and strain in glass fiber/epoxy composites by embedded fiber optic sensors: i. cure monitoring, smart materials and structures 16 (2007), pp. 1718-1726. doi: 10.1088/0964-1726/16/5/026
[28] uncertainty of measurement - part 3: guide to the expression of uncertainty in measurement, document iso/iec guide 98-3:2008, 2008.
[29] w. shen, r. yan, l. xu, g.
tang, x. chen, application study on fbg sensor applied to hull structural health monitoring, optik 126 (2015), pp. 1499-1504. doi: 10.1016/j.ijleo.2015.04.046

metrology infrastructure for radon metrology at the environmental level

acta imeko issn: 2221-870x june 2023, volume 12, number 2, 1-7

annette röttger1, stefan röttger1, daniel rábago2, luis quindós2, katarzyna woloszczuk3, maciej norenberg3, ileana radulescu4, aurelian luca4
1 physikalisch-technische bundesanstalt, bundesallee 100, 38116 braunschweig, germany
2 university of cantabria (uc), spain
3 central laboratory for radiological protection (clor), poland
4 "horia hulubei" national institute for r&d in physics and nuclear engineering (ifin-hh), romania

section: research paper
keywords: radon calibration; radon flux calibration; traceradon; radiation protection; climate observation
citation: annette röttger, stefan röttger, daniel rábago, luis quindós, katarzyna woloszczuk, maciej norenberg, ileana radulescu, aurelian luca, metrology infrastructure for radon metrology at the environmental level, acta imeko, vol. 12, no. 2, article 35
section editor: leonardo iannucci, politecnico di torino, italy
received december 30, 2022; in final form may 24, 2023; published june 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this project 19env01 traceradon has received funding from the empir programme co-financed by the participating states and from the european union's horizon 2020 research and innovation programme.
corresponding author: annette röttger, e-mail: annette.roettger@ptb.de

abstract: since 2020 a large consortium has been engaged in the project empir 19env01 traceradon to develop the missing traceability chains to improve the sensor networks in climate observation and radiation protection. this paper presents results in the areas of novel 226ra standard sources with continuously controlled 222rn emanation rate, radon chambers aimed at creating a reference radon atmosphere, and a reference field for radon flux monitoring. the major challenge lies in the low activity concentrations of radon in outdoor air, from 1 bq∙m-3 to 100 bq∙m-3, where below 100 bq∙m-3 there is currently no metrological traceability at all; thus, measured values of different instruments operated at different locations cannot be compared with respect to their results. within this paper, new infrastructure is presented, capable of filling this gap in traceability. the achieved results make new calibration services, far beyond the state of the art, possible.

1. introduction

radon gas is the largest source of public exposure to naturally occurring radioactivity. radon activity concentration maps, based on atmospheric measurements, as well as radon flux maps, can help member states to comply with the eu council directive 2013/59/euratom and, particularly, with the identification of radon priority areas. radon can also be used as a tracer to improve atmospheric transport models and to indirectly estimate greenhouse gas (ghg) fluxes. this is important for supporting successful ghg mitigation strategies. one approach to estimate ghg fluxes on the local to regional scale is the so-called radon tracer method (rtm), which is based on the night-time correlation between the atmospheric concentrations of radon and ghg measured at a given station, together with information on the radon flux within the station's footprint. thus, atmospheric monitoring networks are interested in or are already measuring atmospheric radon activity concentrations using different techniques, but a metrological chain to ensure the traceability of all these measurements had been missing until traceradon [1] started.
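the rtm mentioned above is, at its core, a regression of co-located night-time ghg and radon increments: the fitted slope, scaled by the radon flux in the station footprint, gives a ghg flux estimate. a minimal sketch of this idea on synthetic numbers (all values illustrative, not from the project; the conversion of ppb to absolute concentration units is omitted for brevity):

import numpy as np

# synthetic night-time accumulation above the evening baseline (illustrative)
d_radon = np.array([0.5, 1.1, 1.8, 2.6, 3.1])     # bq m^-3
d_ch4 = np.array([11.0, 22.5, 37.0, 52.8, 63.9])  # ppb

# least-squares slope of the ghg increment versus the radon increment
slope, _ = np.polyfit(d_radon, d_ch4, 1)   # ppb per (bq m^-3)

# with a known radon flux in the footprint, the slope scales into a ghg flux proxy
radon_flux = 50.0                          # bq m^-2 h^-1 (assumed)
print(slope, slope * radon_flux)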
an overview of the project is available in [2]; details on the metrology needs can be found in [3]. this paper presents in section 2 new activity standards which are available to provide new reference atmospheres. the creation of these atmospheres is a complex task, as explained in section 3. to bring all capacities together, a reference field for the radon flux from soil to atmosphere is needed. this infrastructure, the so-called exhalation bed facility, is presented in section 4. the paper closes with a summary and an outlook.

2. new activity standards

the metrology of radon (222rn is considered here) with respect to the representation of the unit of activity concentration in bq m-3 can be fundamentally divided into two different approaches. one way to quantify radon absolutely is the method of picolo [4], in which gaseous radon from the decay of radium (226ra) is frozen out at a cold point in vacuum. the alpha particles emitted by the solid radon are measured at a defined solid angle, so that the unit of activity can be traced back to the base units of seconds and meters. the radon is then returned to the gas phase and can subsequently be used to produce atmospheres of known activity concentration. other methods are based on liquid scintillation counting [5] or on the 4πγ method [6]. the activity concentration of an atmosphere produced in this way decreases exponentially with a half-life of 3.8 days, limiting the counting statistics and hence the achievable uncertainty in the calibration of test samples. thus, this approach is not suitable for representing reference atmospheres with activity concentrations comparable to those in outdoor air (< 100 bq∙m-3). the use of such low activity concentrations for calibration purposes requires that the released activity of radon can be quantified traceably to the si system. since the mechanism causing the release of radon consists of a combination of two processes, on the one hand the recoil of the nuclei resulting from alpha decay and on the other hand diffusion, the amount of radon released usually correlates with environmental parameters, such as relative humidity and temperature, since these influence the effective diffusion coefficient.
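the exponential decay just described also sets what a chamber fed by an emanation source can hold: at equilibrium, the supply of radon atoms balances the losses by decay and ventilation. a minimal sketch of this balance (all numbers illustrative, not from the project):

import math

half_life_s = 3.8 * 24 * 3600
lam = math.log(2.0) / half_life_s   # 222rn decay constant, ~2.1e-6 s^-1

def equilibrium_concentration(release_atoms_per_s, volume_m3, vent_rate_per_s=0.0):
    """steady-state activity concentration in bq m^-3 for a chamber fed by a
    source releasing radon at a constant atom rate:
    dN/dt = R - (lambda + k) * N  =>  C_eq = lambda * R / ((lambda + k) * V)."""
    n_eq = release_atoms_per_s / (lam + vent_rate_per_s)
    return lam * n_eq / volume_m3

# a sealed 30 m^3 chamber held at 100 bq m^-3 needs only 3000 atoms per second;
# a modest air exchange of 1e-5 s^-1 pulls the same chamber down to ~17 bq m^-3
print(equilibrium_concentration(3000.0, 30.0))
print(equilibrium_concentration(3000.0, 30.0, vent_rate_per_s=1e-5))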
to investigate these dependencies, novel sources have been developed at ptb with the metrological possibility of a continuous determination of the emanation [8]. thus, it can be shown that, at low penetration depths of the ion-implanted 226ra, there is no measurable dependence of the emanation on the ambient humidity and only a small dependence on the temperature [9]. the absolute determination of the 226ra activity is done for this type of source by α-spectrometry under a defined solid angle, while the released fraction of 222rn is quantified by γ-spectrometric comparison with radon-tight sources of the same design. on this basis, several primary 222rn activity standards have been fabricated that are suitable for representing the unit of activity in the activity concentration range < 100 bq·m-3 in large-volume climate chambers; typical sizes of walk-in climate chambers are 10 m³ to 30 m³. figure 1 shows the three different activity standards in an overview: at the top, a source based on electro-deposition of 226ra on a stainless steel backing; in the middle, mass-separated, ion-implanted 226ra on a tungsten backing (tungsten or aluminium are both possible as backing, using laser resonance ionization at the risiko 30 kv mass separator of johannes gutenberg-universität mainz); and at the bottom, thermal physical vapor deposition of 226racl2 onto a 450 mm2 ion-implanted silicon detector.

figure 1. new kinds of activity standard sources for 222rn, based on the principle of emanation of 222rn from electro-deposition, implantation or physical vapor deposition of 226ra, produced at ptb. from top to bottom: (1) electro-deposited 226ra activity standard: deposition at 30 v < u < 200 v (easy to produce, but still dependent on environmental parameters during use); (2) implanted 226ra activity standard: implantation of 226ra into w / al after mass separation at the risiko mass separator at the university of mainz at about 30 kev (ideal metrological source, but difficult and expensive to produce); (3) pips 226ra activity standard: 450 mm², 300 µm, with an about 160 bq 226racl2 layer from thermal physical vapor deposition (new kind of source/detector combination with online emanation traceability).

a completely new approach to traceability, and thus a milestone in the metrological development, is represented by the new device at the bottom of figure 1: it meets several challenges, such as a high temporal resolution combined with high counting statistics, and it satisfies specific requirements, like achieving low uncertainties at low activities, for which the highest possible realizable detection efficiency is needed. the innovative approach was to coat commercially available silicon detectors with a thin layer of 226ra: the integrated 226ra source/detector (irsd). after vapor deposition, the 226ra is located on the surface of the silicon detector, so that about 50 % of the radon produced is released. both the alpha particles of the 226ra and those of the remaining radon can thus be detected spectrometrically under ambient pressure, with a counting yield of about 50 %, separately from each other and continuously. since the total amount of radon is a conserved quantity, the continuously recorded alpha spectra allow the amount of released radon to be determined. this type of measurement represents a mathematical inversion and requires special analytical procedures. based on the kalman filter for state estimation in linear dynamic systems, novel algorithms have been designed and implemented, which can be used to calculate the time course of the released activity of radon and the associated uncertainty from the continuously acquired measurement data. using the described method, even the release of one radon atom per second (about 2 µbq∙s-1) can still be determined within a few hours of integration time, with an uncertainty of around 2 % (k = 1) on average. in the future, the irsd will be used to calibrate radon monitors continuously, automatically and with feedback from radon chambers, but also in the field and under changing climatic conditions.
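the source text does not detail the algorithm itself, so the following is only a generic scalar kalman filter of the kind described: the state is a slowly varying release rate modelled as a random walk, the measurements are the count-derived values, and the filter returns both the estimate and its uncertainty; the noise variances are placeholders:

import numpy as np

def kalman_release_rate(measurements, q=1e-4, r=1e-2):
    """scalar kalman filter for a near-constant release rate observed through
    noisy measurements (state model x_k = x_{k-1} + w_k with var(w) = q,
    measurement z_k = x_k + v_k with var(v) = r); returns the running
    estimates and their 1-sigma uncertainties."""
    x, p = float(measurements[0]), 1.0   # initial state and variance
    estimates, sigmas = [], []
    for z in measurements:
        p = p + q                        # predict: uncertainty grows
        k = p / (p + r)                  # kalman gain
        x = x + k * (z - x)              # update with the new measurement
        p = (1.0 - k) * p
        estimates.append(x)
        sigmas.append(p ** 0.5)
    return np.array(estimates), np.array(sigmas)

# example: noisy readings around a true release rate of 2.0 (arbitrary units)
rng = np.random.default_rng(0)
z = 2.0 + 0.1 * rng.standard_normal(500)
est, sig = kalman_release_rate(z)
print(est[-1], sig[-1])   # converges near 2.0 with a shrinking uncertainty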
3. new reference atmospheres for the calibration of radon monitors

the importance of radon exposure for the population and workers was recognized in 2013 by the european union when the basic safety standards directive (bss) was published as council directive 2013/59/euratom. a significant part of this directive refers to issues related to the radon hazard in dwellings and workplaces. the eu member states were required to establish national radon action plans addressing long-term risks from radon exposure in dwellings and workplaces. hence, member states needed to establish national reference levels for indoor radon concentrations, which should not be higher than 300 bq·m-3. this created new challenges for radon dosimetry and calibrations, because the previous limit used in certain eu member states had been much higher, at 1000 bq∙m-3. among others arose the need for harmonization of existing radon measurement procedures and for creating new ones, ensuring the traceable calibration of radon measurement instruments at low radon activity concentrations with low uncertainties. these aspects have been addressed over the years, inter alia, through research projects such as metroradon, www.metroradon.eu. radon is not only the largest natural source of public exposure to ionizing radiation but also a useful tracer for understanding atmospheric processes and estimating greenhouse gas emissions [10]. that is why reliable measurements of low-level radon activity concentrations, such as those found in the environment (< 20 bq∙m-3), are important both for organizations responsible for radiation protection and for climate research. despite the enormous changes in radon metrology in recent years, activity concentrations below 100 bq∙m-3 have not been subject to metrological research so far. this poses new challenges: the development of traceable methods and robust technology for measurements of environmental low-level radon activity concentrations and radon fluxes. both are important to derive information on greenhouse gas fluxes in the environment and therefore for reduction strategy planning. a properly defined closed volume is necessary to create a reference radon atmosphere. in calibration laboratories two types of radon chambers are used. the first type is a large container (typically with a volume greater than 10 m3), often designed as a walk-in chamber with an air-lock, allowing entry and exit with minimum disturbance of the radon atmosphere. the second type of radon chamber is a small container only for the equipment under test, with a volume usually less than 1 m3.
the large radon chambers, due to their large volume, allow intercalibrations of active radon monitors or the calibration of devices measuring the potential alpha energy concentration, defined as "the concentration of short-lived radon-222 or radon-220 progeny in air in terms of the alpha energy emitted during complete decay from radon-222 progeny to lead-210 or from radon-220 progeny to lead-208 of any mixture of short-lived radon-222 or radon-220 in a unit volume of air. the si unit for potential alpha energy concentration is j m-3", given in icrp 115 [11]. the smaller radon chambers are usually used to expose passive detectors to a reference radon activity concentration. radon chambers with a volume of about 1 m3 represent an interesting option for calibration laboratories, allowing a successful approach to the calibration of both active and passive radon monitors. a radon chamber is not only a "radon container", but also a system that ensures the maintenance of the radon concentration inside and appropriate environmental conditions. the most frequently monitored parameters are temperature, relative humidity, atmospheric pressure, the ambient aerosol concentration, the size distribution of radioactive aerosols, the radon decay product concentration and fractionation, the equilibrium factor, and the gamma-ray dose or dose rate. the system for test atmospheres with radon (star), also called a "radon chamber", has four inseparable parts: the equipment for containing the atmosphere, the equipment for producing the atmosphere, the reference atmosphere thus created, and the equipment and methods for monitoring the atmosphere [12]. devices used to characterise the atmosphere shall be traceable to a primary standard.
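in practice, the potential alpha energy concentration defined above is usually computed from the progeny activity concentrations via the equilibrium-equivalent concentration (eec); the weighting coefficients and the conversion constant in this sketch are the values commonly used for 222rn progeny in radon metrology, assumed here rather than taken from this paper:

def paec_j_per_m3(c_po218, c_pb214, c_bi214):
    """paec in j m^-3 from progeny activity concentrations in bq m^-3,
    via the equilibrium-equivalent concentration (commonly used weights)."""
    eec = 0.105 * c_po218 + 0.515 * c_pb214 + 0.380 * c_bi214   # bq m^-3
    return 5.56e-9 * eec   # j m^-3 per bq m^-3 of eec (commonly used constant)

# progeny at 100 bq m^-3 each give an eec of 100 bq m^-3 and a paec of ~5.6e-7 j m^-3;
# dividing the eec by the radon concentration yields the equilibrium factor
print(paec_j_per_m3(100.0, 100.0, 100.0))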
table 1 compares the radon chambers presented in this article according to their basic features. in the table the chambers are identified by the name of the institute and country operating them: central laboratory for radiological protection (clor), poland, "horia hulubei" national institute for r&d in physics and nuclear engineering (ifin-hh), romania, and university of cantabria (uc), spain. all these chambers fulfil the requirements of a star. one essential part of the system is the reference radon monitor, used to give the radon concentration inside. in the three cases presented, the radon monitors have been calibrated at the bundesamt für strahlenschutz (bfs), accredited by the national accreditation body for the federal republic of germany (dakks) according to the iso standard 17025, which is traceable to the national standards of the national metrology institute (ptb) [13]. to validate the two types of low-level 222rn emanation sources developed within the traceradon project, comparison measurements will be performed in low-level radon calibration chambers. based on a protocol, these sources are also going to be measured in the ifin-hh radon chamber. the ionizing radiation metrology laboratory (lmri, ifin-hh, romania) has developed this facility in the framework of national and european joint research projects [14]. the chamber is cylindrical in shape, with a volume of 1 m3, and has accessories used to produce radon reference atmospheres; they ensure international metrological traceability and equivalence of radon activity measurements, while performing reliable measurements of the radon activity concentration. an alphaguard df2000 (saphymo gmbh) radon monitor traceable to bfs is used to measure the radon activity concentration inside the chamber. recent testing and improvement of the radon chamber tightness and traceability capacity have been done using the reference instruments and standard radon gas sources, thus obtaining designation by the romanian nuclear authority, the national commission for nuclear activities control (cncan), as a calibration laboratory for instruments measuring radon activity concentration in air, according to the iso standard 17025. the laboratory is already performing calibrations of active radon measurement instruments belonging to various romanian users. two radon chambers are located at the central laboratory for radiological protection (clor), which is the only laboratory in poland accredited (in accordance with iso/iec 17025) for the calibration of radon activity concentration and potential alpha energy concentration monitors. one of them is a walk-in radon chamber with a volume of 12.3 m3. it is an air-tight climatized room made of 100 mm pur sandwich elements, covered outside with zinc-coated steel and plastic and inside with stainless steel. the chamber is built with an air-lock, allowing entry and exit with minimum disturbance to the environmental conditions or the radon atmosphere. the chamber allows the temperature and relative humidity to be set within a wide range. it also has several input ports that allow the entry of radon and aerosols, the collection of air samples and the connection of instruments outside. this climatic radon calibration chamber fulfils the requirements of the international standard iec 61577-4. the second one is a small, cylindrically shaped, tight container with a volume of 0.47 m3. this newly developed radon chamber is planned to be used for creating low-level reference radon atmospheres using the new low-activity sources developed within the traceradon project. so far, to generate the reference radon atmospheres, two certified dry flow-through radon sources by pylon electronic development co. have been used, with activities of 137.3 kbq and 502.5 kbq. the radon chamber of the university of cantabria (uc), spain, is the only spanish facility that provides calibration of radon monitors and exposure of passive radon detectors according to an iso/iec 17025 accreditation. it is a cube-shaped, stainless steel container with a wall thickness of 3.25 mm and an internal volume of 1 m3. access to the chamber is provided by lifting its top lid. it also allows access through three circular holes of 80 mm diameter, commonly used to insert and remove passive detectors without disturbing the radon inside during the process. radon measurement equipment can also be placed inside at different heights. to homogenize the internal concentration, a built-in fan works non-stop during the exposures. the radon sources used to generate the reference atmosphere are certified sources of the model pylon rn-1025 (pylon electronics inc.) and powdered sources with a high 226ra content, encapsulated in a pvc jar, which provide a constant radon diffusion rate. the radon monitors with traceability to international standards are the atmos12 (gammadata instruments ab) and the alphaguard (saphymo gmbh). these devices are connected to a computer located outside the chamber to know the concentration in real time.
the radon concentration inside the chamber can be controlled and modified by using an open-air circuit that includes a pump extracting air from inside the chamber.

table 1. radon chamber features of clor, ifin-hh and uc. all chambers provide services like the calibration of active monitors and the exposure of passive detectors.
feature | clor, poland (walk-in / small chamber) | ifin-hh, romania | uc, spain
v (m3) | 12 / 0.47 | 1 | 1
radon concentration range (bq·m-3) | 100 to 50 000 / 100 to 50 000 | 100 to 10 000 | 250 to 11 000
calibration and measurement capability (cmc) (k = 2) | 7 % | 5 % | 5 %
accreditation | accreditation according to iso 17025 | designation according to iso 17025 | according to iso 17025
reference devices | alphaguard | alphaguard df2000 | alphaguard, atmos12
traceability | to bfs reference chamber | to bfs reference chamber | to bfs reference chamber
environmental conditions, adjustable | temperature, humidity, ambient aerosol concentration / temperature | - | -
environmental conditions, monitored | temperature, humidity, pressure, size distribution of radioactive aerosols / temperature, humidity, pressure | temperature, humidity, pressure | temperature, humidity, pressure

4. reference fields for the calibration of radon flux monitors

the exhalation bed facility provided by the laboratory of environmental radioactivity, university of cantabria (laruc), acts as a standard flux source of radon activity across a defined area per unit time. its aim is to be used for calibrating continuous radon flux systems, which can then be applied as transfer standards for the test and validation of existing radon flux monitors, by comparison campaigns in the field or on the same exhalation bed under laboratory conditions. the exhalation bed facility consists of two exhalation beds, one with a high radon exhalation rate and another with a low radon exhalation rate (see figure 2). each exhalation bed has an effective surface of 1 m2 and is made up of five stainless steel welded plates shaping a box with the upper part open. this configuration avoids leakage through the plates and forces the radon to escape through the top surface. the materials for the exhalation beds were selected according to their radioactive content and properties, such as soil texture, structure, dry bulk density, porosity, etc. the soil for the high exhalation bed was collected from the former spanish uranium mine located in saelices el chico (salamanca, spain) and managed by the spanish national uranium company enusa. the low exhalation bed soil, however, was taken from the fosbucraa mine (western sahara). the materials were dried, sieved and homogenized before putting them into each container. the radon flux reference value of each exhalation bed has been approached in two ways. the first is theoretical and is obtained from the solution of the diffusion equation [15]:

E = ε · C_Ra · ρ · λ · z , (1)

where ε is the soil emanation factor, C_Ra is the radium-226 concentration, ρ is the dry bulk density, z is the soil thickness and λ the radon decay constant. equation (1) is an approximation valid when the soil thickness is much smaller than the radon diffusion length. this assumption was checked from the relationship between the diffusion length l and the porosity p and moisture content m [16]. the second approach is experimental, i.e. monitoring the radon increase in a hermetically closed accumulation chamber [17]. the soil features and the exhalation rates obtained using both approaches are shown in table 2.
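equation (1) can be checked directly against the values given in table 2; a short numerical verification (parameters copied from table 2, uncertainty propagation omitted):

# theoretical exhalation rate, eq. (1): e = eps * c_ra * rho * lam * z
LAM = 7.5575e-3   # 222rn decay constant in h^-1 (table 2)

def exhalation_rate(eps, c_ra, rho, z):
    """e in bq m^-2 h^-1 for emanation factor eps (-), radium concentration
    c_ra (bq/kg), dry bulk density rho (kg/m^3) and soil thickness z (m)."""
    return eps * c_ra * rho * LAM * z

print(exhalation_rate(0.18, 19130.0, 1645.0, 0.165))  # ~7000, vs (6900 +/- 1000)
print(exhalation_rate(0.53, 214.0, 893.0, 0.130))     # ~100, vs (100 +/- 6)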
moreover, during the intercomparison campaigns in the framework of the traceradon project, two different areas were tested that could serve as reference sites to check radon flux systems under environmental conditions [18]. the high radon flux field is on the grounds of a former uranium mine managed by the spanish uranium company (enusa), located in saelices el chico (salamanca, spain). the average soil radium concentration in this area is (814 ± 65) bq∙kg-1 (k = 2). the reference radon flux value obtained during the intercomparison campaign was 2364 bq∙m-2∙h-1 with a standard deviation of 1172 bq∙m-2∙h-1. the low radon flux field, located in esles de cayón (cantabria, spain), contains an average radium concentration of (29 ± 3) bq∙kg-1 (k = 2) in soil. in the low area, the reference value obtained was 50 bq∙m-2∙h-1 with a standard deviation of 15 bq∙m-2∙h-1. moreover, there was good agreement between the consensus values obtained experimentally and the output of the model proposed by [19]. the radon flux systems that can be tested in both scenarios, the exhalation bed facility and the field sites, should have defined protocols, both for installation and for measurement and analysis. in the case of the exhalation bed, it is possible to open and close the accumulation chamber manually if necessary; however, in field tests it is preferable to have an automatic system in order to check the correlation with environmental parameters.

figure 2. picture of the exhalation beds: brown, at the bottom: high radon flux; grey, at the top: low radon flux.

table 2. parameters influencing the calculation of the radon exhalation rate of each exhalation bed for the theoretical approach. uncertainties are expressed with the coverage factor k = 1.
parameter | symbol (unit) | high bed | low bed
moisture content | m | 0.013 | 0.05
porosity | p | 0.36 | 0.61
emanation factor | ε | 0.18 ± 0.03 | 0.53 ± 0.02
radium concentration | cra (bq/kg) | 19130 ± 350 | 214 ± 8
bulk density | d (kg/m3) | 1645 ± 2 | 893 ± 0.6
radon decay constant | λ (h-1) | (7.5575 ± 0.0004)·10-3 | (7.5575 ± 0.0004)·10-3
thickness | z (m) | 0.165 ± 0.005 | 0.130 ± 0.005
diffusion length | l (m) | 1.29 | 1.55
exhalation rate (theoretical) | etheo (bq m-2 h-1) | 6900 ± 1000 | 100 ± 6
exhalation rate (experimental) | eexp (bq m-2 h-1) | 6320 ± 240 | 94 ± 4

5. summary and outlook

naturally occurring radon is the cause of most of the population's exposure to ionizing radiation. at the same time, radon is a highly efficient tracer for understanding atmospheric processes, for evaluating the accuracy of chemical transport models, and for providing integrated emission estimates of greenhouse gases. a metrological system for measuring atmospheric radon activity concentrations as well as the radon flux from the soil is therefore needed for atmospheric, climate, and radiation research. the activities developed and performed within the traceradon project went even further, to increase the metrological capabilities for the calibration and monitoring of low atmospheric radon concentrations in the range below 100 bq m-3. this has included up to now:
― the development of new primary 222rn activity standards suitable for representing the unit of activity in the range below 100 bq m-3 in the established walk-in radon chambers.
― the first successful calibrations of radon monitors in the new reference atmospheres using the newly developed activity standards.
― the development of laboratory exhalation bed tests for the calibration of radon flux monitors.
― intercomparison campaigns in two types of field sites, with low and high radon exhalation rates. the results obtained in the above-mentioned activities feed into the traceradon project with respect to radionuclide metrology and radiation protection at the environmental level. such joint efforts offer a solid background for providing more accurate and traceable results for these measurement methods, which are all the more challenging in the outdoor environment. climate change and radiological protection both affect humankind and the environment worldwide. combating both climate change and radiation exposure requires measurements supported by reliable metrology. by addressing a topic (i.e. the measurement of low levels of radon in the environment) that supports both climate observation and global radiological protection, this project simultaneously supports the long-term economic, social and environmental work of icos, the integrated pollution prevention and control (ippc) directive 2008/1/ec, the iaea, the network of analytical laboratories for the measurement of environmental radioactivity (almera) and the who. acknowledgement the project 19env01 traceradon has received funding from the empir programme, co-financed by the participating states, and from the european union's horizon 2020 research and innovation programme. the following institutes are involved as project partners in empir 19env01 traceradon: physikalisch-technische bundesanstalt, germany; budapest főváros kormányhivatala, hungary; cesky metrologicky institut, czech republic; agenzia nazionale per le nuove tecnologie, l'energia e lo sviluppo economico sostenibile, italy; institutul national de cercetare-dezvoltare pentru fizica si inginerie nucleara "horia hulubei", romania; npl management limited, united kingdom; institut za nuklearne nauke vinca, serbia; österreichische agentur für gesundheit und ernährungssicherheit gmbh, austria; centralne laboratorium ochrony radiologicznej, poland; instituto de engenharia de sistemas e computadores, tecnologia e ciência, portugal; joint research centre european commission; lunds universitet, sweden; státní ústav jaderné, chemické a biologické ochrany, czech republic; universidad de cantabria, spain; university of bristol, united kingdom; universitat politècnica de catalunya, spain; université de versailles saint-quentin-en-yvelines, france; ideas hungary betéti társaság, hungary. the consortium is supported by the following collaborating partners: university of heidelberg, germany; ansto, australia's nuclear science and technology organisation, australia; era, european radon association, europe; met office, united kingdom; university of novi sad, serbia; politecnico di milano, italy; university of cordoba, spain; eurados e.v., europe; university of siegen, germany; institut de radioprotection et de sûreté nucléaire, france; arpa piemonte, italy; arpa valle d'aosta, italy; as well as the life-respire project; peter bossew, austria; and the university of groningen, netherlands. the interest and support for the project extends far beyond europe: the members of the stakeholder committee, for example, come from four different continents. thanks are due to maria sahagia (ifin-hh) for useful suggestions aimed at improving the manuscript. references [1] a. röttger, s. röttger, c. grossi, a. vargas, r. curcoll (+ another 14 authors), new metrology for radon at the environmental level, meas. sci. technol. 32, 2021, 124008, 13 pp. doi: 10.1088/1361-6501/ac298d [2] s. röttger, a. röttger, c.
grossi, a. vargas, u. karstens (+ another 6 authors), radon metrology for use in climate change observation and radiation protection at the environmental level, adv. geosci., 57, 2022, pp. 37–47. doi: 10.5194/adgeo-57-37-2022 [3] s. chambers, a. griffiths, a. g. williams, ot. sisoutham, v. morosh, s. röttger, f. mertes, a. röttger, portable two-filter dual-flow-loop 222rn detector: stand-alone monitor and calibration transfer device, adv. geosci., 57, 2022, pp. 63–80. doi: 10.5194/adgeo-57-63-2022 [4] j. l. picolo, absolute measurement of radon 222 activity, nucl. instr. meth. a 369, issues 2-3, 1996, pp. 452–457. doi: 10.1016/s0168-9002(96)80029-5 [5] p. cassette, m. sahagia, l. grigorescu, m. c. lépy, j. l. picolo, standardization of 222rn by lsc and comparison with α- and γ-spectrometry, appl. radiat. isot. 64, 2006, pp. 1465–1470. doi: 10.1016/j.apradiso.2006.02.068 [6] y. nedjadi, ph. spring, c. bailat, m. decombaz, g. triscone, j.-j. gostely, j.-p. laedermann, f. o. bochud, primary activity measurements with 4πγ nai(tl) counting and monte carlo calculated efficiencies, appl. radiat. isot. 65, issue 5, 2007, pp. 534–538. doi: 10.1016/j.apradiso.2006.10.009 [7] f. mertes, s. röttger, a. röttger, a new primary emanation standard for radon-222, appl. radiat. isot., 156, 2020, 108928. doi: 10.1016/j.apradiso.2019.108928 [8] f. mertes, s. röttger, a. röttger, development of 222rn emanation sources with integrated, quasi 2π active monitoring, int. journal of environmental research and public health, 19(2), 2022, 840. doi: 10.3390/ijerph19020840 [9] f. mertes, n. kneip, r. heinke, t. kieck, d. studer, f. weber, s. röttger, a. röttger, k. wendt, c. walther, ion implantation of 226ra for a primary 222rn emanation standard, applied radiation and isotopes, vol. 181, 2022, 110093. doi: 10.1016/j.apradiso.2021.110093 [10] l. quindós, c. sainz fernandez, i. fuente merino, j. l. gutierrez villanueva, a. gonzalez diez, the use of radon as tracer in environmental sciences, acta geophys. 61, 2013, pp. 848–858. doi: 10.2478/s11600-013-0119-z [11] m. tirmarche, j. d. harrison, d. laurier, f. paquet, e. blanchardon, j. w. marsh, icrp publication 115, lung cancer risk from radon and progeny and statement on radon, ann. icrp 40(1), 2015, pp. 1–64. doi: 10.1016/j.icrp.2011.08.011 [12] iec 61577-4, radiation protection instrumentation – radon and radon decay product measuring instruments – part 4, geneva, switzerland. [13] t. r. beck, a. antohe, f. cardellini, a. cucoş, e. fialova (+ another 23 authors), the metrological traceability, performance and precision of european radon calibration facilities, int. journal of environmental research and public health, 18(22), 2021, 12150. doi: 10.3390/ijerph182212150 [14] a. luca, i. rădulescu, m.-r. ioan, v. fugaru, c. teodorescu (+ another 7 authors), recent progress in radon metrology at ifin-hh, romania, atmosphere, 13, 2022, 363. doi: 10.3390/atmos13030363 [15] j. porstendörfer, properties and behaviour of radon and thoron and their decay products in the air, journal of aerosol science, 25(2), 1994, pp. 219–263. doi: 10.1016/0021-8502(94)90077-9 [16] v. c. rogers, k. k. nielson, multiphase radon generation and transport in porous materials, health physics, 60(6), 1991, pp. 807–815. doi: 10.1097/00004032-199106000-00006 [17] i. gutiérrez-álvarez, j. e. martín, j. a. adame, c. grossi, a. vargas, j. p. bolívar, applicability of the closed-circuit accumulation chamber technique to measure radon surface exhalation rate under laboratory conditions, radiation measurements, vol. 133, april 2020, art. 106284. doi: 10.1016/j.radmeas.2020.106284 [18] m. fuente, d. rabago, s. herrera, l. quindos, i. fuente, m. foley, c. sainz, performance of radon monitors in a purpose-built radon chamber, journal of radiological protection, 38(3), 2018, 1111. doi: 10.1088/1361-6498/aad969 [19] u. karstens, c. schwingshackl, d. schmithüsen, i. levin, a process-based 222radon flux map for europe and its comparison to long-term observations, atmos. chem. phys., 15, 2015, 12845. doi: 10.5194/acp-15-12845-2015
acta imeko issn: 2221-870x june 2015, volume 4, number 2, 57-61 mechanical influences in sinusoidal force measurement christian schlegel, gabriela kieckenap, holger kahmann, rolf kumme physikalisch-technische bundesanstalt, bundesallee 100, 38116 braunschweig, germany section: research paper keywords: dynamic force; sinusoidal excitation; laser vibrometer; rocking motion; triaxial acceleration; fem modal analysis citation: c. schlegel, g. kieckenap, b. glöckner, a. buß, r. kumme, mechanical influences in sinusoidal force measurement, acta imeko, vol. 4, no. 2, article 10, june 2015, identifier: imeko-acta-04 (2015)-02-10 editor: paolo carbone, university of perugia, italy received august 14, 2014; in final form february 12, 2015; published june 2015 copyright: © 2015 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited funding: this work was supported by euramet corresponding author: christian schlegel, e-mail: christian.schlegel@ptb.de abstract: the paper describes mechanical influences which disturb a sinusoidal force calibration and hence have an influence on measurement uncertainty. the measurements are based on the application of a scanning vibrometer and the use of triaxial accelerometers. the measuring of many acceleration points on the top mass of the transducer makes it possible to obtain acceleration distributions from which a standard deviation can be derived; the triaxial accelerometer allows the observation of certain effects, like rocking modes, or other problems related to specific excitation frequencies of the force transducer. both measurements can be related to each other. the rocking effects are discussed with fem model calculations. 1. introduction force measurement plays a major role in industrial processes, statically as well as dynamically. in the past a very versatile system of static force calibration was established, which can be recognized, not least, by the many high-level force calibration laboratories and services around the globe. nevertheless, most of the processes in which force measurement is involved have a dynamic nature. so far, only statically calibrated force transducers have been used in dynamic applications. the associated measurement deviations may rise to the order of several percent and, especially in the vicinity of resonances, up to 10 % to 100 %. to close this gap, some developments were carried out in the past to provide dynamic force calibration.
these developments have also triggered a project currently running in the european metrology research programme (emrp), the “traceable dynamic measurement of mechanical quantities”, which includes, apart from a work package on dynamic force, also work packages on dynamic pressure, dynamic torque, the electrical characterization of measuring amplifiers, and mathematical and statistical methods and modelling [2]. the investigation of the uncertainty contributions in sinusoidal force measurement is crucial for providing a reliable dynamic calibration of force transducers. a sinusoidal calibration is usually performed with an electrodynamic shaker system. thereby, the force transducer, mounted on this shaker, is equipped with a top mass, and the acceleration on the surface of this mass as well as the force transducer signal are measured during the sinusoidal movement. a more detailed description of the whole calibration process can be found in [2]. 2. measuring an acceleration distribution a big advantage for the acceleration measurement is a scanning vibrometer, which offers the opportunity to measure many points, e.g. on the whole surface of the mass block. by averaging the signals measured at these points, one can obtain realistic standard deviations, i.e. uncertainties. the dynamic behaviour during such a sinusoidal excitation depends on the kind of coupling of the top mass to the transducer as well as on the mounting of the transducer on the shaker table. in addition, of course, the internal mechanical structure of the transducer is of fundamental importance. one result of such a dynamic calibration is the dynamic sensitivity, which is the ratio between the force transducer signal and the measured dynamic force, the latter being the product of the acceleration of the top mass and its mass value. due to the imperfect rigidity of the transducer, which depends on the internal structure, rocking modes may occur at certain frequencies. these rocking modes can be detected, for instance, from the uncertainty as a function of the frequency drawn for certain measuring channels. figure 1 shows one example of such an analysis. in this special case the uncertainty of the acceleration shows several frequencies where the uncertainty is significantly higher than elsewhere. it should be noted that this behaviour is not related to rocking modes of the shaker table, which was investigated elsewhere. surprisingly, this behaviour does not correspond to the synchronously measured transducer output signal. it seems that this special kind of force transducer is not so susceptible to interference from external influences.
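as an illustration of the quantity just defined, the following sketch (a toy example, not ptb's evaluation software; the number of scan points, the 8 kg top mass and the noise levels are assumptions) computes the dynamic sensitivity point by point over a set of vibrometer scan points and derives a relative standard deviation of the kind reported in figure 1.

```python
import numpy as np

def dynamic_sensitivity(u_out, accel, top_mass_kg):
    """dynamic sensitivity per scan point: transducer output divided by the
    dynamic force m*a; returns the mean sensitivity and the relative
    standard deviation in percent over all scan points."""
    s = u_out / (top_mass_kg * accel)
    return s.mean(), 100.0 * s.std(ddof=1) / s.mean()

rng = np.random.default_rng(0)
n = 62                                                   # scan points, cf. figure 1
a = 120.0 * (1.0 + 2e-3 * rng.standard_normal(n))        # m/s^2 amplitudes
u = 2.0e-3 * 8.0 * a * (1.0 + 1e-3 * rng.standard_normal(n))  # output, a.u.
s_mean, s_rel = dynamic_sensitivity(u, a, top_mass_kg=8.0)
print(f"sensitivity = {s_mean:.6f} a.u./N, rel. std = {s_rel:.3f} %")
```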
the origin of the relatively large uncertainties of the transducer output signal at 200 hz and 300 hz is unknown; obviously this increase has nothing to do with possible rocking modes. in practice, one could conclude from such a measurement of the uncertainty distribution that it is advisable to avoid the regions where the transducer perhaps shows some irregularities. it should be noted that the uncertainty given above is not the whole uncertainty because the contributions influenced by the measuring equipment are not considered. they yield, depending on the frequency range, an additional 0.3 % to 0.5 %. the scanning of the acceleration can be accompanied by triaxial acceleration measurements, where one can directly measure the transverse acceleration in certain directions. combining the different methods may clarify the question of whether the increased uncertainty at certain frequencies can really be associated with rocking modes. 3. transverse acceleration  in order to investigate the possible rocking modes during the periodic excitation, the transverse acceleration was measured. for this purpose a special arrangement of four triaxial accelerometers on top of a test mass was chosen. figure 2 presents two pictures of the setup. the upper picture shows the force transducer equipped with an 8 kg test mass fixed with a mechanical adapter on the transducer. on top of the test mass the plate with the triaxial accelerometers can be seen. besides the force transducer one can see one additional accelerometer mounted on the shaker table. the force transducer was an interface type with a nominal force range of 25 kn. amplification of the force transducer output was realized with a dewetron conditioning amplifier. the lower picture (figure 3) shows the arrangement of the triaxial accelerometers on the plate which is mounted on the test mass. figure  2.  the  upper  picture  shows  the  whole  arrangement  of  the  force  transducer  with  the  test  mass,  which  was  equipped  with  a  special  plate  where four triaxial accelerometers have been mounted. the lower picture shows the plate with the four triaxial accelerometers.  figure 1. the figure shows the uncertainty of the measured acceleration on the top mass as well as of the force transducer signal. the uncertainty was obtained  by  averaging  62  acceleration  measuring  points  and  their  62  associated force signals at a certain frequency. it should be noted that the uncertainty of the vibrometer contributes to an additional 0.2 % to 0.3 %  and the uncertainty of measurement of the force conditioning amplifier an additional  0.1  %  to  0.2  %.  the  two  bars  reaching  0.04  %  have  to  be multiplied  by  a  factor  of  3.75.  note,  that  the  spatial  acceleration distribution is not taken into account in this analysis; due to the quite small height of the mass block it can be neglected. in addition note the different  scales of both plots.   acta imeko | www.imeko.org  june 2015 | volume 4 | number 2 | 59  for a better fitting of the accelerometers on the plate, a certain area of the plate was milled out. the sensors themselves were screwed in place from the reverse of the plate. the accelerometers were arranged in such a way that the x and y components have an equidistant separation of 45 degrees. the sensors themselves were from kistler, type 8762a50, which have shear sensor elements that feature extremely low thermal transient responses and have a high immunity to base strain and transverse acceleration. 
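the evaluation of the transverse content of such triaxial signals, presented below in connection with figure 5, can be summarised in a few lines (a minimal sketch with invented numbers; the component amplitudes are assumed to be given in units of g):

```python
import numpy as np

def rocking_indicators(ax, ay, az):
    """transverse share of the acceleration vector, in percent of its
    magnitude r, and inclination theta with respect to the z-axis,
    i.e. the normal of the top-mass surface."""
    r = np.sqrt(ax**2 + ay**2 + az**2)
    transverse = np.hypot(ax, ay)
    theta_deg = np.degrees(np.arctan2(transverse, az))
    return 100.0 * transverse / r, theta_deg

print(rocking_indicators(0.05, 0.02, 5.0))   # weak rocking at a low frequency
print(rocking_indicators(2.5, 1.5, 5.0))     # strong rocking, e.g. near 500 hz
```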
an advanced hybrid charge amplifier with a wide frequency range from 0.5 hz to 6 khz is incorporated. the acceleration range of the sensors is ±50 g, with a peak limit of ±80 g and a sensitivity of 100 mv/g, where g is the gravitational acceleration. the three outputs of each accelerometer were fed into the kistler 5134b conditioning amplifier, which also provides the power supply for the integrated amplifier of the sensor. figure 4 shows the acceleration measurement values in the eight transverse directions as a function of the excitation frequency. the behaviour can be demonstrated quite well in a kind of windmill plot. the length of the wings is proportional to the amplitude. the opening angle of the wings can be chosen arbitrarily, but is equal for all directions. the scale on the left-hand side gives the acceleration amplitude in percent in relation to the z-amplitude. at first glance one can see that the transverse acceleration distribution changes with the frequency. at certain frequencies, e.g. at 100 hz and 1250 hz, rocking is observed in the 45°/225° direction, whereas at other frequencies, e.g. at 1750 hz and 2000 hz, it occurs in the 0°/180° direction. moreover, the amplitudes of the transverse acceleration vary strongly with frequency, ranging from a few percent of the z-amplitude at low frequencies up to 100 % at high frequencies. in figure 5 the vector content of the transverse acceleration is shown. according to the coordinate system shown in figure 5, the three vector components give one acceleration vector in space with a certain magnitude, r, and an angle of inclination, θ, with respect to the normal of the force vector, which is perpendicular to the upper surface of the top mass (z-axis). the corresponding projection of this vector on the xy-plane as a percentage of r, and the angle, θ, are given in figure 5 as a function of frequency. from the figure one can see that quite large rocking motions occur at certain frequencies. this raises the question of the origin of such behaviour. the large transverse amplitude at 500 hz is probably connected with a cross resonance of the transducer. the main resonance of the setup is around 930 hz. experimentally, the uncertainties are large beyond the resonance frequency, which can also be seen in these data, see e.g. figure 1. if one compares both acceleration measurements, the scanning results according to figure 1 and the transverse accelerations according to figure 4, one can see a good correspondence at problematic frequencies, e.g. 500 hz. figure 4. this figure shows the measured acceleration, observed in eight different directions at various frequencies, on top of the test mass. the scale on the left is the amplitude of the acceleration given as a percentage of the z-component of the acceleration. note the widely-varying scales. figure 3. schematic representation of the arrangement of the four triaxial accelerometers with their acceleration vectors in the transverse directions, x and y. in addition there are four accelerations perpendicular to the shown plane, in the z direction. in summary, one can conclude that distinct frequencies with a higher spatial variation of the top mass are often related to rocking modes of the experimental assembly, which leads to higher uncertainties. 4.
fem simulation of the dynamic system  to get more knowledge about the several dynamic modes which may happen during a periodical excitation an fem modal analysis was performed. in figure 6 the investigated transducer can be seen as a fem model in the upper left corner. the transducer is a shear force transducer, whereby the strain gauges are placed on the thin beams which are in between the holes. the inner part of the transducer is connected via a thread bolt to the outer environment, which provides the force initiation. in the case of the described periodical force measurement this is the additional mass block. by tension or compression of this inner part, the thin beams of the holes are distorted by higher stress, which are detected by the strain gauges. all following sub pictures show the behaviour of the transducer in connection with a 4 kg load mass. the frequencies of the dynamic modes are indicated below the figures, at which the dynamic modes occur. at 261 hz one has a first tilt resonance, where the mass block tilts relative to the transducer body. the red colour is thereby an indication of high stress, whereby the blue colour is a stress free situation. in the special case at 261 hz that means, that these modes initiate additional stress in the mass block. at 374 hz one has a torsion vibration of the mass block against the transducer body. also here the additional stress is transferred to the mass block with the difference that, the stress rises from the inner to the outer diameter of the mass cylinder in cylindrical shells with different stress level. at 1044 hz there is a tilting of both, the mass block and the transducer with respect to each other. the amount of additional stress is more or less the same in the transducer as well as in the mass block. the tilting vibration is non concordant, which leads to the stress maximum at the outer part of the mass block on the bottom side. at 1179 hz there is the main system resonance, which is a vertical vibration of the mass block relative to the transducer. here one can see a homogeneous stress level over the whole mass block. at 1461 hz there is a mode where the body of the transducer makes torsion in respect to the mass block. this mode is similar to the mode at 374 hz, with the difference, that the stress shells now occur in the transducer. figure 5. vector components of the triaxial acceleration measurement. the amplitude of the transverse acceleration as well as the angle of inclination of the acceleration vector in respect to the vertical of the top mass surface  (z‐axis) as a function of the excitation frequency.  figure 6. fem model calculations of the force transducer equipped with a  loading mass. shown are the inner structure of the used transducer, upper left corner and the behaviour of the system at different modal frequencies  the  colours  give  the  amount  of  stress  level  whereby  red  indicates  high  stress and blue no stress.  acta imeko | www.imeko.org  june 2015 | volume 4 | number 2 | 61  beyond 1500 hz there are several other resonances which cannot be reached by the performed excitations. 5. conclusions  this paper describes some of the mechanical influences which are present during a sinusoidal force measurement. in contrast to single point acceleration measurements, the measurement of the whole acceleration distribution on the surface of the top mass makes it possible to observe effects, like rocking modes or other mechanical deficiencies. 
surprisingly, the effects seen by the acceleration signal are often suppressed in the force transducer output. it could be shown by transverse acceleration measurements that the measured uncertainty distribution of the acceleration on the top mass has good correspondence with the transverse motions at least below the resonance frequency of the setup. acknowledgement  the emrp is jointly funded by the emrp participating countries within euramet and the european union. references  [1] european metrology research programme (emrp), http://www.emrponline.eu. [2] c. schlegel, g. kieckenap, b. b. glöckner, a. buß, r. kumme “traceable periodic force calibration”, metrologia 2012, 49, n° 3, 224-235. acta imeko  issn: 2221‐870x  june 2015, volume 4, number 2, 39‐44    acta imeko | www.imeko.org  june 2015 | volume 4 | number 2 | 39  dynamic torque calibration by means of model parameter  identification  leonard klaus 1 , barbora arendacká 2 , michael kobusch 1 , thomas bruns 1   1 physikalisch‐technische bundesanstalt (ptb), bundesallee 100, 38116 braunschweig, germany   2 physikalisch‐technische bundesanstalt (ptb), abbestraße 2‐12, 10587 berlin, germany      section: research paper   keywords: model parameter identification; dynamic torque calibration; dynamic measurement; mechanical model  citation: leonard klaus, barbora arendacká, michael kobusch, thomas bruns, dynamic torque calibration by means of model parameter identification, acta  imeko, vol. 4, no. 2, article 7, june 2015, identifier: imeko‐acta‐04 (2015)‐02‐07  editor: paolo carbone, university of perugia, italy  received september 24, 2014; in final form december 11, 2014; published june 2015  copyright: © 2015 imeko. this is an open‐access article distributed under the terms of the creative commons attribution 3.0 license, which permits  unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited  funding: this work is part of the european metrology research programme (emrp) joint research project ind09 – “traceable dynamic measurement of  mechanical quantities”. the emrp is jointly funded by the emrp participating countries within euramet and the european union.   corresponding author: leonard klaus, email: leonard.klaus@ptb.de    1. introduction  research in the field of dynamic calibration of torque transducers has been carried out in the context of the european metrology research programme (emrp) joint research project ind09 “traceable dynamic measurement of mechanical quantities” [1], [2]. an existing prototype measuring device [3] was modernised and extended, and a model-based description of the dynamic behaviour of torque transducers was developed [4]. the model of the transducer will be used to describe its dynamic behaviour in a later industrial application. the modelling is necessary, because torque transducers are always coupled on both ends to a given mechanical environment, which may influence the transducer’s dynamic behaviour. in case of the calibration this environment differs from that of a subsequent use in industry. for future dynamic torque calibrations, it will be necessary to identify the model parameters of a transducer to be calibrated from measurement data. this contribution is an extended version of a contribution to the imeko international tc3, tc5 and tc22 conference 2014 [5]. 2. 
model  the mathematical model used to represent the torque transducer assumes a linear and time invariant (lti) system which consists of two mass moment of inertia (mmoi) elements connected by a torsional spring and a damper in parallel. to be able to include the previously mentioned influence of the mechanical environment in the model-based description of the dynamic behaviour of the transducer, it was necessary to extend the model of the transducer to consider the mounted transducer including the dynamic torque measuring device (i.e. the mechanical environment in case of a calibration). this extended model represents the physical components of the measuring device and the transducer under test while assuming lti behaviour (see figure 1). it consists of elements for the mass moment of inertia, the torsional spring and the abstract  for the dynamic calibration of torque transducers, a model of the unmounted transducer and an extended model of the mounted  transducer including the measuring device have been developed. the dynamic behaviour of a torque transducer under test will be  described by its model parameters. this paper presents the models comprising the known parameters of the measuring device and  the unknown parameters of the transducer and how the calibration measurements are going to be carried out. the principle for the  identification  of  the  transducer’s  model  parameters  from  measurement  data  is  described  using  a  least  squares  approach.  the  influence of a variation of the transducer’s parameters on the frequency response of the expanded model is analysed.   acta imeko | www.imeko.org  june 2015 | volume 4 | number 2 | 40  torsional damper. the equation of motion is described as an inhomogeneous system of ordinary differential equations: (1) in this equation denotes the mass moment of inertia matrix, the damping matrix, the stiffness matrix and the angle vector and its derivative vectors ( , ), respectively. the forced excitation is described by . for the described model as depicted in figure 1, the model approach leads to the mass moment of inertia matrix 0 0 0 0 0 0 0 0 0 0 0 0 (2a) the damping matrix 0 0 0 0 0 0 (2b) and the corresponding stiffness matrix 0 0 0 0 0 0 (2c) the angle vector and its derivative vectors , , are given by (2d) (2e) (2f) the forced excitation of the rotational exciter is given by 0 0 0 (2g) table 1: model parameters of the measuring device and of the device under  test.  known parameters of  the measuring device  unknown parameters  of the dut  mmoi , ,   , torsional stiffness ,   damping ,   3. known and unknown model parameters  to be able to identify the unknown model parameters of the torque transducer, it was necessary to identify the model parameters of the measuring device first. to this end, dedicated auxiliary measuring set-ups for the determination of the mass moment of inertia, torsional stiffness [4] and torsional damping [6] were developed. based on the measurement results from these set-ups, the previously unknown model parameters of the dynamic torque calibration device have been determined. a similar determination of the model parameters of the transducer under test is not possible. due to the unknown mechanical design and the lack of knowledge about the actual dynamic behaviour, a dynamic calibration remains necessary. 
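the symbols of equation (1) and the entries of the matrices (2a)-(2g) did not survive the text extraction above; as a hedged reconstruction, the standard form of such an lti torsional model is given below, with the symbol names being assumptions of this sketch rather than a quotation of the paper.

```latex
% hedged reconstruction of eq. (1); symbol names are assumed
\begin{equation}
  \mathbf{M}\,\ddot{\boldsymbol{\varphi}}(t)
  + \mathbf{D}\,\dot{\boldsymbol{\varphi}}(t)
  + \mathbf{K}\,\boldsymbol{\varphi}(t)
  = \mathbf{T}_{\mathrm{E}}(t)
\end{equation}
% M  : diagonal matrix of the mass moments of inertia (MMOI elements)
% D  : torsional damping matrix,  K : torsional stiffness matrix
% phi: vector of the rotation angles of the MMOI elements
% T_E: forced excitation generated by the rotational exciter
% the zero patterns quoted as (2a)-(2g) reflect the chain topology of
% figure 1, in which each element couples only to its direct neighbours,
% so D and K are banded while M is diagonal.
```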
the extended model of the measuring device now consists of a set of known model parameters, which represents the components of the measuring device, and of a set of unknown model parameters, representing the transducer under test (see table 1), which need to be identified. 4.  data acquisition and analysis  for the calibration measurement with a transducer under test, a sinusoidal excitation with given frequencies is generated by a rotational exciter. the frequency response function of the drive shaft of the measuring device depends on the device under test (see figure 2). the control of excitation frequency and vibration magnitude, including abort conditions and a predetermination of the frequency response of each set-up, is carried out by means of a closed-loop vibration controller. the excitation frequencies are based on the recommendations from [7]. they are equally spaced in logarithmic scale in the frequency domain. the 1/3 octave series was chosen for low and high frequencies far from the resonance frequency, and the narrower spaced 1/12 octave series was chosen for frequencies close to the resonance frequency, respectively. the frequency range of excitation ranges from 10 hz up to 1 khz. figure 2. frequency response of the measuring device with three duts of  different  torsional  stiffness  and  mmoi  measured  with  random  noise excitation.  figure 1. model of the dynamic torque calibration device (marked in blue)  including the transducer under test (marked in orange).  acta imeko | www.imeko.org  june 2015 | volume 4 | number 2 | 41  the angle of excitation at the top is measured by means of a laser doppler vibrometer for rotational oscillations. the rotational acceleration at the bottom is measured by means of an integrated angular accelerometer. for calibration measurements, these two quantities and the output of the transducer dut are acquired with a four-channel data acquisition system. two high speed sampling inputs acquire the raw signal of the interferometer with a high sampling rate; two high precision inputs acquire the voltage outputs of the angular accelerometer and of the transducer under test. the frequency modulated interferometer output signal is demodulated by software and down-sampled to the lower sampling rate of the high precision input channels afterwards. the three input quantities are monofrequent sinusoids with the angular frequency 2π , the phase offset and the magnitude as follows ⋅ sin ⋅ (3) such a signal can be described by sine and cosine components as follows ⋅ cos ⋅ ⋅ sin ⋅ (4) the parameters and of eq. 3 can be derived from and of eq. 4 as , arctan (5) the phase angle needs to be derived using a four-quadrant inverse tangent (atan2). if the two-quadrant inverse tangent is used, a correction of for π may be necessary as this function is only defined for –π 2⁄ π 2⁄ . for multi-channel data acquisition with sinusoidal signals, it is advantageous to fit all the channels together with a common frequency [8]. the frequency is not known well enough, so it needs to be estimated as well. the parameters of interest of the three sinusoidal oscillations of the acquired channels ( , , , , , ) can be estimated with a common excitation frequency by the function multsin as follows multsin ⋅ ⋅ cos 2π ⋅ ⋅ sin 2π ⋅ ⋅ ⋅ cos 2π ⋅ ⋅ sin 2π ⋅ ⋅ ⋅ cos 2π ⋅ ⋅ sin 2π ⋅ (6) with the vector of measurement data and the time and assignment matrix consisting of the time vector and the assignment vectors … for the different channels, respectively. 
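the common-frequency fit of eqs. (3)-(6) and (10) can be rendered compactly as follows (a minimal sketch following the idea of [8]; the function and variable names are assumptions, and a per-channel offset is included). each channel is first fitted linearly at a fixed trial frequency, the channels are normalised, and a joint nonlinear least-squares step refines the shared frequency; amplitude and phase then follow from eq. (5) with a four-quadrant inverse tangent.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_multisine(t, channels, f0):
    """joint fit of several channels with one common frequency:
    y_i(t) = a_i cos(2 pi f t) + b_i sin(2 pi f t) + c_i."""
    channels = [y / np.std(y) for y in channels]       # normalisation step

    def residuals(p):
        f, abc = p[0], p[1:].reshape(len(channels), 3)
        w = 2.0 * np.pi * f * t
        return np.concatenate([y - (a * np.cos(w) + b * np.sin(w) + c)
                               for y, (a, b, c) in zip(channels, abc)])

    # linear least squares at the fixed trial frequency f0 (cf. eq. (4))
    w0 = 2.0 * np.pi * f0 * t
    X = np.column_stack([np.cos(w0), np.sin(w0), np.ones_like(t)])
    p0 = np.concatenate([[f0]] + [np.linalg.lstsq(X, y, rcond=None)[0]
                                  for y in channels])
    p = least_squares(residuals, p0).x                 # cf. eq. (10)
    # amplitude and phase per channel, eq. (5), four-quadrant arctan;
    # note the amplitudes refer to the normalised channels in this sketch
    amph = [(np.hypot(a, b), np.arctan2(a, b))
            for a, b, _ in p[1:].reshape(len(channels), 3)]
    return p[0], amph                                  # common f, [(A, phi), ...]
```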
the length of the vector and the matrix is 3 , while is the number of measurement values acquired. , , …, , , , …, , , , …, (7) (8) 1 0 0 1 0 0 ⋮ 1 0 0 1 0 0 0 1 0 ⋮ 0 1 0 0 1 0 0 0 1 ⋮ 0 0 1 0 0 1 (9) the unknown parameters , , , , , will be identified by a nonlinear least squares approach as follows argmin multsin (10) if the values of the different acquired channels have different numerical magnitudes (due to units, etc.) the least squares algorithm would unintentionally ‘weight’ the channels according to their numerical magnitudes. to avoid such behaviour, the three channels need to be normalised. a linear least squares regression is applied to each channel based on eq. (4) with the frequency of excitation assumed to be known. based on the fit results for each single channel, the three channels are normalised prior to the combined regression described in eq. (6). the results from the linear regression are used as initial parameters for the iterative combined regression algorithm as described in eq. (10). 5. components influencing the acquired signals  prior to the parameter identification, all the dynamic effects of the different components apart from the dut need to be corrected by means of calibration frequency response functions. this includes the measuring components of the calibration device, as well as the signal conditioning and transmission electronics of the transducer under test. the data acquisition system needs to be calibrated as well. for each component analysed, a complex frequency response function i with the output i and the input i was determined for certain calibration frequencies as follows i i i (11) the magnitude of the frequency response function follows from | i | re i im i (12) with the real part re and the imaginary part im of the frequency response function. the phase of the frequency response function equals arctan im i re i (13) again, as in eq. 5, it is necessary to use a four-quadrant inverse tangent algorithm to derive the correct phase angle. the acquired measuring signals can now be corrected for magnitude and phase. with the corrected magnitude and phase , the measured magnitude and phase and the magnitude and phase frequency response functions , from calibration follows ⋅ (14a) (14b) 6. parameter identification  mounting different devices under test with different properties (mmoi, torsional stiffness, or damping) will influence the frequency response of the measuring device (see figure 2). utilising this variability in the frequency response of acta imeko | www.imeko.org  june 2015 | volume 4 | number 2 | 42  the measuring device, the properties of the dut may be identified. the output of the transducer is assumed to be proportional to the difference δ , of the torsion angles at the top and at the bottom of the transducer as dut ⋅ h b ⋅ δ (15) because of the measurement principle (strain gauges) and the known linear behaviour. as the transducer under test measures the torque and not the angle difference, a scale factor is introduced. for the parameter identification, all the signals acquired are assumed to be harmonic. in this case we can assume the following relationship for the angle , the angular velocity and the angular acceleration : ⋅ e i ⋅ e i ⋅ e (16) here, i √ 1 denotes the imaginary number. the parameters of the dut will be identified by analysing the output of the transducer and the mechanical input, which is measured as the angular accelerations m and e at the top and at the bottom of the coupling elements (see figure 3). 
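the correction of eqs. (11)-(14) described at the end of section 5 can be written in a few lines; the sketch below assumes the common convention that the measured magnitude is divided by the calibration frf magnitude and the calibration phase is subtracted, with the calibration data interpolated to the working frequency (names and numbers are illustrative).

```python
import numpy as np

def correct_by_frf(f, mag_meas, phase_meas, f_cal, h_cal):
    """h_cal: complex frequency response from calibration at f_cal;
    returns magnitude and phase corrected per eqs. (14a) and (14b)."""
    mag_cal = np.interp(f, f_cal, np.abs(h_cal))                 # cf. eq. (12)
    ph_cal = np.interp(f, f_cal, np.unwrap(np.angle(h_cal)))     # cf. eq. (13)
    return mag_meas / mag_cal, phase_meas - ph_cal

f_cal = np.array([10.0, 100.0, 1000.0])                  # hz, assumed grid
h_cal = np.array([1.00 + 0.00j, 0.99 + 0.02j, 0.90 + 0.10j])
print(correct_by_frf(250.0, 1.05, 0.30, f_cal, h_cal))
```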
this leads to the following frequency response equations: top i ⋅ δ i ⋅ δ e (17) these equations are based on the ordinary differential equation (ode) system of the model (equations (1), (2a), (2b) (2c), (2d), (2e) (2f), (2g)) and contain the known model parameters of the measuring device, as well as the still unknown parameters of the dut. δ i i i i (18) for the transfer function of the top part top i , there follows i ⋅ ⋅ i i (19) with i i i (20) consisting only of the known model parameters of the measuring device. the expression for is more complex. additionally to eq. (20) we denote i i i (21) which again is not dependent on the dut’s parameters. with i and i we finally obtain , the transfer function of the bottom part of the measuring device examining eq. (22), the simple numerator and complex denominator suggest considering the inverse instead: 1 i i i i i i i i i (23) or alternatively from eq. (19) 1 i i i i i i i ⋅ i ⋅ i (24) which shows the dependencies on the unknown parameters , , , and . to get a closer look into the structure of eq. (24), we denote i , , i i ⋅ i i ⋅ i (25) i i i ⋅ i i i i i ⋅ i i (22) figure 3. transfer function based on the acquired measurement channels.  acta imeko | www.imeko.org  june 2015 | volume 4 | number 2 | 43  then, using eq. (25) leads to 1 ⋅ i i i ⋅ i , , ⋅ i , , (26) expressing the separability of the model for the parameters 1⁄ , ⁄ and ⁄ . these parameters are conditionally linear. that means, when some of the parameters in the equation are known (in this case , ), we deal with a linear model (with respect to the remaining parameters). the same applies to eq. (19) where i gives 1 i i ⋅ 1 ⋅ i ⋅ 1 ⋅ i (27) with ⁄ and ⁄ being conditionally linear. assuming that and were known in eq. (26), or in eq. (27), additionally to the predetermined parameters of the measuring device, there would be closed-form formulae for the estimation of the remaining parameters 1⁄ , ⁄ and ⁄ (eq. (26)) or ⁄ and ⁄ (eq. (27)) by means of a linear least squares approach. these formulae would of course depend on and which has implications on the estimation procedure for all the unknown model parameters (see table 1). instead of estimating the 5 parameters of eq. (26) or the 4 parameters of eq. (27), respectively, by a least squares approach in one nonlinear optimisation step, the parameters 1⁄ , ⁄ and ⁄ may be replaced by closed-form formulae, and a nonlinear minimisation over two dimensions and can be carried out [9]. consecutively, the estimates for 1⁄ , ⁄ and ⁄ will be obtained from the closed-form formulae, with the estimated values of and . 7. analysis of the sensitivity of parameter  identification by simulation  the expected dynamic behaviour of the measuring device with an installed torque transducer was analysed by simulation. the simulation was carried out based on the model table  2:  parameters  of  hbm  t5  and  hbm  t10f  transducer  from  specifications [10, 11].    torsional stiffness    mmoi  (   mmoi  distribution  ( /   hbm t5 640 n ⋅ m rad⁄ 41 ⋅ 10 kg ⋅ m 0.5/0.5 hbm t10f 160 ⋅ 10 n ⋅ m rad⁄   1.3 ⋅ 10 kg ⋅ m 0.51/0.49 equations of the measuring device (eqs. (1), (2)). the frequency response functions were calculated with chosen parameters for the device under test and with the model parameters of the measuring device. a result of such a simulation is presented in figure 4, giving the magnitude and phase responses of the complex frequency response functions for the top and for the bottom as depicted in figure 3. 
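the two-stage estimation described above (cf. [9]) can be illustrated generically; in the sketch below, model_columns stands in for the closed-form regressors of eqs. (26)/(27), whose exact expressions did not survive extraction, and the parameter names c_t and d_t are assumptions. the inner step solves the conditionally linear parameters by linear least squares; the outer step minimises over the two nonlinear parameters only.

```python
import numpy as np
from scipy.optimize import minimize

def model_columns(w, c_t, d_t):
    """illustrative conditionally linear structure G(iw) = X(w; c_t, d_t) @ beta;
    a stand-in for the regressors of eqs. (26)/(27), not the paper's formulae."""
    s = 1j * w
    base = 1.0 / (1.0 + c_t * s + d_t * s**2)
    return np.column_stack([base, s * base, s**2 * base])

def separable_fit(w, g_meas, x0):
    """outer nonlinear search over (c_t, d_t); inner linear least squares
    for the conditionally linear parameters (variable projection, cf. [9])."""
    y = np.concatenate([g_meas.real, g_meas.imag])

    def solve_linear(p):
        X = model_columns(w, *p)
        A = np.vstack([X.real, X.imag])       # fit real and imaginary parts
        beta = np.linalg.lstsq(A, y, rcond=None)[0]
        return beta, float(np.sum((A @ beta - y) ** 2))

    p_nl = minimize(lambda p: solve_linear(p)[1], x0, method="Nelder-Mead").x
    return p_nl, solve_linear(p_nl)[0]        # nonlinear and linear estimates

w = 2.0 * np.pi * np.logspace(1, 3, 60)       # 10 hz ... 1 khz
g = model_columns(w, 2.0e-3, 5.0e-7) @ np.array([1.0, 2.0e-3, 1.0e-6])
p, beta = separable_fit(w, g, x0=(1.0e-3, 1.0e-7))
print(p)   # with exact data the fit should approach (2.0e-3, 5.0e-7)
```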
for the analysis, properties of two typical transducers with a totally different mechanical design – and therefore with different model parameters – were chosen. one transducer is a classical shaft type slip ring transducer of the type hbm t5 (nominal torque 10 n·m) and one is a flange type transducer hbm t10f (nominal torque 50 n·m). both transducers are used for measurements in the dynamic torque calibration device. the specified model parameters of the analysed transducers are given in table 2. the mmoi distribution describes to what amount the mmoi of the transducer is allocated in the head and in the base mmoi element of the model. to find out how well the parameters of transducers under test may be identified, the sensitivity of a change in the parameters of interest , , , and to a change in the theoretical frequency response function (real and imaginary parts) was analysed. parameters which induce only very small changes in the frequency response function might be difficult to estimate or will have large uncertainties. starting from the chosen realistic parameter values, we changed one parameter at a time and compared the resulting frequency response functions with the frequency response functions calculated with the initial set of parameters. this investigation showed that changes in are more pronounced in the real parts of the frequency response functions (see figure 5), while changes in are manifested more in the imaginary parts. to quantify the induced changes, we figure  4.  simulated  transfer  function  top  (top)  and    (bottom)  calculated with the parameters of an hbm t10f transducer.  figure  5.  simulated  change  of  10 %  of    of  a  t10f  transducer  and  its  influence  on  the  real  and  imaginary  part  of  the  two  frequency  response  functions function  top (top) and   (bottom), respectively.  acta imeko | www.imeko.org  june 2015 | volume 4 | number 2 | 44  considered differences between the real (imaginary) parts as a percentage of the magnitude of the initial inverse frequency response function. it was found that the mass moments of inertia and , of the hbm t5 transducer appear as potentially difficult to identify. a change of 10 % in the value of results in changes of less than 2% relative to the magnitudes in the real and imaginary parts of this frequency response function aside from the resonance case at about 150 hz (see figure 6). a change of 10 % in the value of , which is present only in i , results in changes less than 0.2 % relative to the magnitudes in the real and imaginary parts of this frequency response function, as depicted in figure 7. moreover, the influence is more profound for frequencies above the resonance frequency, which are difficult to measure. the same low sensitivity on a variation of a parameter applies for the damping parameter for the hbm t10f transducer. however, a parameter which is hard to identify means on the other hand that a parameter variation will not influence the dynamic system’s behaviour significantly, e.g. the low mass moment of inertia of the hbm t5 transducer (c.f. table 3) is too small to affect the frequency response of the drive train substantially. 8. conclusions  the presented identification scheme for the model parameters is a necessary component for the dynamic calibration of torque transducers. the dynamic behaviour of torque transducers is described by a physical model. the model parameters of each transducer will be identified from measurement data acquired during calibration. 
the model parameters of the measuring device have been determined prior to the calibration measurements in order to be able to identify the parameters of the transducer’s model. for their identification, the angular acceleration at the top and at the bottom of the transducer under test, as well as the transducer’s output, will be analysed. based on this input data, a parameter identification using the method of least squares is presented, and a two-stage procedure utilising the separability of the model for a consecutive linear and nonlinear optimisation is described. based on the model of the measuring device, an analysis of the sensitivity of the parameter estimation was carried out. it was shown that some transducer parameters might be difficult to identify if they have little influence on the measuring device’s dynamic behaviour. references [1] c. bartoli et al., “traceable dynamic measurement of mechanical quantities: objectives and first results of this european project”, international journal of metrology and quality engineering, 3, 2012, pp. 127–135. doi: 10.1051/ijmqe/2012020 [2] c. bartoli et al., “dynamic calibration of force, torque and pressure sensors”, in proc. of the joint imeko international tc3, tc5 and tc22 conference, 2014, cape town, south africa. online: http://www.imeko.org/publications/tc22-2014/imeko-tc3-tc22-2014-007.pdf [3] th. bruns, “sinusoidal torque calibration: a design for traceability in dynamic torque calibration”, in proc. of the xvii imeko world congress, 2003, dubrovnik, croatia. online: http://www.imeko.org/publications/wc-2003/pwc-2003-tc3-008.pdf [4] l. klaus, th. bruns, m. kobusch, “modelling of a dynamic torque calibration device and determination of model parameters”, acta imeko, vol. 3, no. 2, 2014, pp. 14–18. online: http://acta.imeko.org/index.php/acta-imeko/article/view/imeko-acta-03%20%282014%29-02-05/253 [5] l. klaus, b. arendacká, m. kobusch, th. bruns, “model parameter identification from measurement data for dynamic torque calibration”, in proc. of the joint imeko international tc3, tc5 and tc22 conference, 2014, cape town, south africa. online: http://www.imeko.org/publications/tc3-2014/imeko-tc3-2014-018.pdf [6] l. klaus, m. kobusch, “experimental method for the non-contact measurement of rotational damping”, in proc. of the joint imeko international tc3, tc5 and tc22 conference, 2014, cape town, south africa. online: http://www.imeko.org/publications/tc22-2014/imeko-tc3-tc22-2014-003.pdf [7] iso tc 43, iso 266:1997, acoustics: preferred frequencies for measurements, international organization for standardization (iso), geneva, switzerland, 1997. [8] p. m. ramos, a. c. serra, “a new sine-fitting algorithm for accurate amplitude and phase measurements in two channel acquisition systems”, measurement, vol. 41, no. 2, 2008, pp. 135–143. doi: 10.1016/j.measurement.2006.03.011 [9] g. h. golub, v. pereyra, “the differentiation of pseudo-inverses and nonlinear least squares problems whose variables separate”, siam journal on numerical analysis, 10, 1973, pp. 413–432. doi: 10.1137/0710036 [10] hottinger baldwin messtechnik gmbh, “t10f torque flange datasheet”, 2009. [11] hottinger baldwin messtechnik gmbh, “t5 torque transducer datasheet”, 2004. figure 6. simulated variation of 10 % of of a t5 transducer and its influence on the real and imaginary parts of the two frequency response functions. figure 7. simulated change of 10 % of of a t5 transducer and its only very small influence on the real and imaginary part (bottom).
acta imeko issn: 2221-870x march 2022, volume 11, number 1, 1-10 multi-analytical approach for the study of an ancient egyptian wooden statuette from the collection of museo egizio of torino luisa vigorelli1,2,3, alessandro re2,3, laura guidorzi2,3, tiziana cavaleri4,5, paola buscaglia4, marco nervo3,4, paolo del vesco6, matilde borla7, sabrina grassini8, alessandro lo giudice2,3 1 dipartimento di elettronica e telecomunicazioni, politecnico di torino, c.so duca degli abruzzi 24, 10129 torino, italy 2 dipartimento di fisica, università degli studi di torino, via pietro giuria 1, 10125 torino, italy 3 infn, sezione di torino, via pietro giuria 1, 10125 torino, italy 4 centro conservazione e restauro “la venaria reale”, piazza della repubblica, 10078 venaria reale, torino, italy 5 dipartimento di economia, ingegneria, società e impresa, università degli studi della tuscia, via santa maria in gradi 4, 01100 viterbo, italy 6 fondazione museo delle antichità egizie di torino, via accademia delle scienze 6, 10123 torino, italy 7 soprintendenza abap-to, torino, piazza san giovanni 2, 10122 torino, italy 8 dipartimento di scienza dei materiali e ingegneria chimica, politecnico di torino, c.so duca degli abruzzi 24, 10129 torino, italy section: research paper keywords: egypt; statuette; multi-analytical; museo egizio; tomography citation: luisa vigorelli, alessandro re, laura guidorzi, tiziana cavaleri, paola buscaglia, marco nervo, paolo del vesco, matilde borla, sabrina grassini, alessandro lo giudice, multi-analytical approach for the study of an ancient egyptian wooden statuette from the collection of museo egizio of torino, acta imeko, vol. 11, no. 1, article 15, march 2022, identifier: imeko-acta-11 (2022)-01-15 section editor: fabio santaniello, university of trento, italy received march 7, 2021; in final form march 14, 2022; published march 2022 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited funding: nexto project (progetto di ateneo 2017) funded by compagnia di san paolo, neu_art project funded by regione piemonte corresponding author: alessandro re, e-mail: alessandro.re@unito.it abstract: in the field of cultural heritage, the interdisciplinary and multi-technique approach to the study of ancient artifacts is widely used, providing more reliable and complementary results. to study these objects of great value, a non-invasive approach is always preferred, although micro-invasive techniques may be necessary to answer specific questions.
in this work, a study based on both non-invasive and micro-invasive techniques in a two-step approach was applied as a powerful tool to characterise the materials and their layering, as well as to reach a deeper understanding of the artistic techniques and the conservation history. the object under study is an ancient egyptian wooden statuette belonging to the collections of the museo egizio of torino. analyses were performed at the centro conservazione e restauro “la venaria reale” (ccr), starting from non-invasive multispectral and x-ray imaging on the whole object, in order to obtain information about the technique of assembly and on some aspects of the constituent materials, and up to non-invasive xrf analysis and ft-ir, sem-edx and optical microscopy on micro-samples. this work is intended to lay the groundwork for the study of other wooden objects and statuettes belonging to the same funerary equipment, with the definition of a measuring protocol to study the most significant aspects of the artistic technique. 1. introduction scientific research in the cultural heritage field involves, in most cases, the development and use of physical and chemical methods to answer specific questions for a better understanding of objects produced in different contexts during history. through the analyses, it may be possible to reveal and identify materials and technologies used in the past, and also to provide more grounded parameters for the preservation and conservation of cultural heritage artefacts [1]. the use of non-invasive methods (with no sampling) is very suitable in this kind of study, allowing the analysis of different types of objects while fully respecting their integrity. some of these non-invasive techniques cover a large part of the electromagnetic spectrum, ranging from the analysis with gamma and x-ray radiation through to the ultraviolet, visible and infrared regions [2]-[10]. furthermore, for chemical and compositional analysis, some micro-invasive techniques are often used [11], [12]. together, all these complementary methods are powerful tools that give valuable information on the elemental composition as well as on the state of preservation, shedding light on the artistic processes performed by the artists [13], [14]. in this paper a multi-technique approach was used to carry out an in-depth study on an ancient egyptian wooden statuette belonging to the collection of the museo egizio in torino. all measurements were performed at the centro conservazione e restauro (ccr) “la venaria reale”, where several scientific laboratories for the study and characterization of the different materials of artworks and ancient artefacts are available, before carrying out the required conservation treatments. the approach is based on the combination of imaging techniques, in particular ultraviolet fluorescence (uvf), visible-induced infrared luminescence (vil) and infrared reflectography (ir), which were employed to map the distribution of the different materials, such as the pigments used in the polychromies, their thicknesses and layering, as seen in previous studies [15]-[17]. among the imaging analyses, a radiographic (rx) and tomographic (ct) study was also performed in order to investigate the inner part of the object and to reach a deeper understanding of the execution technique and state of preservation [18], [19]. the next sections also show the results of the compositional and elemental analyses acquired both directly on the surface with the non-invasive x-ray fluorescence (xrf) technique, and on samples with optical microscopy, fourier transform infrared spectroscopy (ft-ir) and scanning electron microscopy (sem-edx), largely used also in some other studies [20]-[24]. all the techniques listed above proved to be equally important in this study, each one providing different and complementary information, and all were essential to get a thorough knowledge of the artefact and to finally implement the best conservation strategy.
this work contributes to the creation of a measuring protocol, applicable to other wooden objects and statuettes belonging to the same funerary assemblage, in order to significantly increase our understanding of the entire group of finds retrieved from a specific archaeological context. 2. the statuette the painted wooden statuette, representing an offering bearer (inv. no. s.8795, figure 1), was found during the 1908 excavation season of the italian archaeological mission, directed by ernesto schiaparelli, in the necropolis of asyut (egypt), a site situated some 375 km south of cairo. the statuette was part of the rich funerary assemblage of the so-called “tomb of minhotep”, which included additional statuettes of offering bearers, larger wooden statues, a model of a bakery, boat models, as well as coffins, wooden sticks, a bow with arrows and numerous earthenware jars and bowls [25], [26]. most of the equipment derived from specialized workshops operating in asyut during the early middle kingdom (ca. 1980-1900 bce). according to ancient egyptian religious beliefs, the tomb had to maintain the memory of the deceased, to preserve his/her body and to grant his/her survival in the afterlife thanks to specific rituals and, above all, food offerings. funerary offerings could be real, simply listed on stelae and coffins, or even scale models of servants carrying or processing food. at the beginning of the middle kingdom these scale models, together with models of granaries, boats or artisanal activities, became the main element of tomb assemblages and were placed within the burial chamber, usually near the coffin. the statuette examined for the present study is the typical representation of a female “offering bearer”, carrying a basket on her head, generally held in position with the left hand, and a duck in the other hand. although the two arms of the statuette are missing today, an old photo preserved in the alinari archive shows the object complete with these two parts, among the other finds from the same tomb, as it was displayed in the museo egizio in the early 20th century [27]. the statuette is structurally composed of three elements (the basket, the human figure and the base) and measures 60.0 (h) × 12.5 (d) × 25.5 (w) cm. 3. material and methods in the field of cultural heritage, diagnostic protocols usually give priority to non-invasive and imaging analyses because they provide an overview of the main characteristics of the object, highlighting some material differences; consequently, they are essential for selecting the most representative subsequent analysis and sampling points. with this approach, the taking of some micro-samples from the artefact is the last step of the diagnostic campaign, necessary to answer specific questions that arise during the early stages of the investigation. in this specific case study, non-invasive investigations (imaging and chemical analysis) were initially carried out; the results then led to the need for very small samples (~µm) for more in-depth measurements, such as microscopic investigation and ft-ir spectroscopy, in order to obtain useful information on the state of preservation and previous restorations. figure 1. the “offering bearer” statuette (s. 08795), frontal (left) and lateral (right) views after the conservation treatment. 3.1.
3.1. techniques and instrumentation

uv induced visible fluorescence (uvf)
the statuette was irradiated with uv labino® spot lamps (uv light mpxl and uv floodlight), with an emission peak at 365 nm. the fluorescence produced in the vis region was captured with a nikon d810 full-frame reflex camera equipped with a peca 916 filter. post-production of the photographs with adobe's lightroom software provided the chromatic balance, using a 99 % spectralon® and a minolta ceramic as references. this technique makes it possible to evaluate characteristics such as the homogeneity and distribution of surface layers, based on the colour and intensity of the visible fluorescence induced by uv radiation.

infrared reflectography 950 nm (ir1)
the investigation was carried out after the first cleaning phase, which involved the removal of superficial dust, to reduce its interference and facilitate the study of the artistic technique. the measurement was made with ianiro varibeam halogen 800 w lamps. the images were acquired in the photographic infrared range (from 750 nm to 950 nm) with a nikon d810 ir-uv full-frame reflex camera with a b+w 093 filter. post-production of the images with adobe's lightroom software provided colour and exposure correction. this technique reveals the presence of preparatory traces or of changes under the painted film.

visible-induced infrared luminescence (vil)
the vil technique allows the localisation of details made with the egyptian blue pigment, even when in a poor conservation state, due to its intense fluorescence emission at around 916 nm when illuminated with visible light [28]. the lighting was provided by led lamps without ir emission, and the images were acquired in the photographic infrared range (from 750 nm to 950 nm) with an xnite nikon d810 (digital camera with uv + vis + ir functionality) modified full-frame reflex camera equipped with a hoya r72 filter. post-production, carried out with adobe's lightroom software, provided the chromatic balance using a colorchecker® classic of 24 colours and a reference standard of egyptian blue (kremer pigmente n° 10060) inserted in the shooting field.

radiography (rx) and tomography (ct)
radiographic and tomographic analyses are useful for studying, for example, the techniques used in assembling the structure (e.g. joining elements) or for detecting evidence of previous interventions. in particular, ct makes it possible to observe the inner sections of the object, the orientation in space of its constituent parts, and the sequence and thickness of the layers that compose it. in this particular case a first radiographic measurement was performed using a fixed x-ray imaging set-up, developed in the context of the neu_art project [29], [30] and already used on very different kinds of artworks [31], [32] and archaeological finds [19], [33]. it consists of a general electric eresco 42mf4 x-ray source, a rotating platform and a hamamatsu c975020tcn linear x-ray detector with a pixel size of 200 μm that scans at about 0.2-6 m/min over an area of about 2×2 m². since this set-up was optimised for ct of large artefacts, a second rx followed by a ct analysis was performed using a flat panel detector (fp) shad-o-box 6k hs by teledyne dalsa, which, with an area of only about 160 cm² and a pixel size of 49.5 µm, is more suitable for small objects or for parts of large objects where a higher resolution is needed. for both measurements, a voltage of 80 kv and a current of 10 ma were set as acquisition parameters.
for the radiography, image processing was performed with the open-source software imagej, whereas the ct sections were reconstructed with a filtered back-projection algorithm [34] implemented in a non-commercial software utility developed by dan schneberk of the lawrence livermore national laboratory (usa); the 3d rendering and segmentation were processed using vgstudio max 2.2 from volume graphics.

x-ray fluorescence (xrf)
the measurements were performed on representative points selected on the basis of the responses of the imaging techniques. the technique identifies the chemical elements present in the analysed area (spot diameter between 0.65 mm and 1.50 mm) down to a variable depth depending on the chemical nature of the materials present (approximately 150 μm). the collected data are employed to formulate hypotheses on the inorganic materials (mineral pigments) of the pictorial palette. for the analysis a portable micro-edxrf bruker artax 200 spectrometer was used, with a fine-focus x-ray source with molybdenum anode and an adc with 4096 channels; the anode voltage is adjustable between 0 kv and 50 kv, the anode current between 0 µa and 1500 μa (maximum power 50 w). the measurements were carried out with a voltage of 30 kv and a current of 1300 µa, fluxing helium on the measurement area in order to optimise the detection limit of the instrument.

fourier transform infrared spectroscopy (ft-ir)
the analyses were carried out to characterise original organic substances or possibly localised intervention materials previously detected through uv fluorescence imaging. infrared spectrophotometry makes it possible to identify the organic and inorganic components present in the sample. the measurements were conducted on selected micro-samples with a bruker vertex 70 ft-ir spectrophotometer coupled with a bruker hyperion 3000 infrared microscope working in transmission with the aid of a diamond cell.

optical microscopy and scanning electron microscopy with edx (om and sem-edx)
the stratigraphic samples were collected after the non-invasive analytical campaign, in order to study the materials in depth. observation with om in stratigraphy provides information on the execution technique adopted for the realisation of the preparations, the polychromies and the finishes; the sem-edx analysis instead provides compositional information on the elements present in the different layers. following the sampling, the fragments identified for stratigraphic analysis were embedded in transparent resin (struers epofix epoxy resin). the samples prepared as polished sections were observed by means of an olympus bx51 minero-petrographic microscope, in visible and uv light, interfaced to a pc through a digital camera. the acquisition and processing of images was carried out using the analysis five proprietary software. for the sem analysis, the samples were observed with a zeiss evo60 electron microscope for morphological investigations (mainly by means of a backscattered electron detector, bsd). the edx bruker microprobe allows semi-quantitative elemental analysis. the analyses were performed in high vacuum, for which purpose the sections were carbon-coated.

figure 2. frontal and lateral pictures of the statuette under different illumination for the multispectral analysis: (a) and (e) visible light, (b) and (f) uv fluorescence, (c) ir reflectography, (d) and (g) radiography. in picture (a) the measurement points for xrf analysis are marked with numbers 1-8.
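since the ct sections were reconstructed with filtered back-projection, a minimal parallel-beam sketch in python may help to illustrate the principle; this is only a didactic approximation (numpy-only, nearest-neighbour interpolation, hypothetical array shapes), not the lawrence livermore software utility actually used for these data.

```python
import numpy as np

def ramp_filter(sinogram):
    # apply a ram-lak (ramp) filter to every projection in the frequency domain
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

def fbp_reconstruct(sinogram, angles_deg):
    # sinogram: (n_views, n_detector) parallel-beam projections
    n = sinogram.shape[1]
    filtered = ramp_filter(sinogram)
    xs = np.arange(n) - n / 2.0            # pixel-centre coordinates of the slice
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n, n))
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        # detector coordinate hit by each pixel for this viewing angle
        t = X * np.cos(theta) + Y * np.sin(theta) + n / 2.0
        idx = np.clip(np.round(t).astype(int), 0, n - 1)
        recon += proj[idx]                 # smear (back-project) the filtered view
    return recon * np.pi / (2 * len(angles_deg))

# e.g.: slice_img = fbp_reconstruct(sino, np.linspace(0.0, 180.0, sino.shape[0], endpoint=False))
```

in practice, real set-ups additionally apply windowed filters, geometric calibration and artefact corrections, which is why dedicated reconstruction tools such as the one cited above are used.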
4. results and discussion
4.1. assembly and modelling techniques
the close observation and the analyses carried out made it possible to describe the assembly technique of the two portions (the basket and the body), which were originally assembled by inserting wooden dowels with a circular section, free of glue or filler material. moreover, a non-uniformity of the surface and the presence of gaps distributed over the entire body could be immediately noticed. in correspondence with some of these, such as on the chest or the hips, it was possible to observe the presence of a double layer of preparation. the first, light brown-coloured and coarser, is spread directly on the wooden material; on this, a second, thinner, white layer was perceivable (figure 2a, figure 3a). in some areas of the sculpture, the thin white layer seems to be applied directly on the wooden material, as observed for example on some gaps in the garment (figure 2e, figure 3b). the pictorial decoration was realised on this white preparation. for a better understanding of the construction technique and of the contribution of the preparatory layers in modelling the shape, radiographic and tomographic analyses were carried out [35]. data were acquired on the whole statuette for radiography, while for ct the acquired portion is limited from the basket down to the hips of the statuette. thanks to the x-ray imaging, it was possible to visualise details and features of the object: tomographic data in particular provided important information not only on the assembly, but also on previous structural interventions (discussed in section 4.3). at first glance, it was possible to notice areas of the radiographic images with different radiopacity over the entire volume of the body (figure 2d). this confirms the non-homogeneity of the preparatory layer distribution, which is more radiopaque than the wooden support. thanks to the ct slices it was also possible to localise the presence of the double layer of preparation mentioned before. the capability of detecting the material of the ground layer allowed us to confirm that it contributed to partially smoothing the shape (e.g. head, breasts and hips), to correcting gaps in volume due to possible defects of the wood, and to refining some imprecisions in carving. where the carving was sufficiently refined, a single preparation layer was laid (figure 4). as regards the assembly technique, the junction between the basket and the head was realised by means of wooden dowel insertions, as the radiographic and tomographic images have shown (figure 4). the same anchoring system is evident in correspondence with the missing arms, as shown in figure 4e. from the tomographic sections of the head (figure 4c), multiple portions assembled by wooden dowels, peculiarly oversized, can be observed, employed in order to achieve the final volume; ct also proved useful in understanding the direction of insertion of each dowel. moreover, in correspondence with the right breast, the same type of assembly technique can be seen: the observation of the trend of the inner growth rings makes it possible to recognise this insertion as a remediation for a material detachment, which probably occurred in the notching phase (figure 4d).
to investigate the nature of the materials used for the preparation layers (and pigments, as detailed in section 4.2), non-invasive xrf analyses were performed on some representative points (figure 2a and table 1), and two stratigraphic samples, one from the white garment (figure 5) and one from the black wig (figure 6), were analysed by om and sem-edx. the thin white preparation layer turns out to be made of a calcium carbonate-based material, with a small fraction of quartz.

4.2. pigments and finishing layers
for a better understanding of the pictorial materials, imaging analyses were followed by non-invasive xrf and by analyses on two cross-sections, as previously explained. the pale yellow colour used for the skin of the figure turns out to be made of yellow ochre and/or earth, probably mixed with gypsum and/or calcium carbonate (see figure 2a and table 1). the warm white colour of the garment turns out to be made of gypsum with minimal impurities of iron oxides, as detected in the cross-section (figure 5). considering the rarely documented use of gypsum as a pigment, this datum appears very interesting and could be one of the features to be investigated also on the other finds of the funerary equipment. the presence of p could be ascribed to possible organogenic material in the rock used to produce the pigment (figure 5e and table 1). a black pigment was used in profiling the dress, the eyes and the eyebrows, and to colour the wig. the ir reflectography (figure 2c) suggests it is probably a carbon-based material, on account of its strong absorption. om and sem-edx analyses of the cross-section from the wig support the carbonaceous nature of this black: the black layer is quite thick, nevertheless only very low signals of inorganic elements have been detected in the layer, suggesting it is composed of organic carbon black (figure 6). as regards the wig, the preliminary hypothesis that some egyptian blue pigment could be mixed with the black was ruled out thanks to the vil survey, which did not detect any luminescence of the pigment on the statuette. the preliminary close observation of the statuette and the uvf analysis allowed excluding the presence of a finishing layer distributed on the surface, rather confirming a strong material non-homogeneity due to the state of preservation and previous interventions (figure 2b,f), as already observed by the x-ray imaging analysis. a selective sample from one of the areas with the greatest surface yellowing, which showed yellow-orange uv fluorescence, was taken for further material investigations with ft-ir analysis (see section 4.3). however, no information about the type of binder used for the preparatory and pictorial layers could be obtained from the analysis, even if, with reference to the technical literature, the most probable hypothesis is the use of a vegetable gum-based binder [36].

figure 3. detail of the chest (a) and the dress (b) of the statuette: (a) shows the preparation layer on which the painted layer is applied; (b) shows the painted layer applied directly on the wood.
figure 4. radiographic image and ct vertical and horizontal slices of the statuette: (a) basket; (b) and (c) head; (d), (e) and (f) body (green arrows: wooden dowels for assemblage; blue arrows: wooden joints; pink arrows: thicker preparation layer).

table 1. xrf analysis results (+++ = main chemical elements; ++ = secondary elements; + = trace elements; − = not detected).
measurement point: mg al si p s cl k ca ti mn fe sr
1_face yellow: + + + + ++ + + ++ + + +++ +
2_body yellow: + + + ++ + + +++ + + +++ +
3_body yellowish white: + + + + ++ + + +++ + + +
4_body clean white: + + + ++ + + +++ + + + +
5_leg gap white: + + + + + + + +++ + + + +
6_eyebrow black: + + + + ++ + + +++ + + +++ +
7_basket dark yellow: + + + + + + +++ + + +++ +
8_basket red: + + + + + + +++ + + ++ +

figure 5. (a) stratigraphic sample of the white preparation, taken from the body of the statuette; (b) cross-section under om in visible light (1, preparation layer; 2, warm white pictorial layer); (c) sem-bsd image of the cross-section; (d, e) sem-edx spectra of the preparation and pigment layers, respectively.

4.3. conservation history and previous interventions
as regards the state of preservation of the find, it showed a general greying of tones and some gaps in the preparation and paint layers distributed over the entire surface, as documented by photographs in visible light and by x-ray imaging. after the close observation and a careful reading of the imaging analyses, in particular of the uv fluorescence, ft-ir analyses were performed on micro-samples taken from (i) an area that showed orange-yellow uv fluorescence and (ii) the face of the bearer, where a glossy, film-forming surface material seemed to be present. from the resulting spectra, an acrylic resin identified as paraloid was found in both the analysed samples (figure 7). the adhesive material paraloid turned out to be distributed over the entire surface, and this datum is compatible with the general greying of the surface, due to the typical tendency of paraloid to absorb dust and atmospheric particulate over time. figure 7b also shows signals referable to the presence of calcium carbonate and silicates in the pictorial layer, in accordance with the other analyses performed. furthermore, the signal at 1540 cm-1 is attributable to the presence of a protein-based substance, probably due to an ancient securing intervention carried out with animal glues, as traditional conservation practices usually envisaged. in addition, the base of the statuette showed some peculiar characteristics: from a first general observation, it was hypothesised that it could be traced back to the reuse of a wood fragment, perhaps taken from a coffin. this hypothesis is reasonably supported by the evidence of a presumably original joint in correspondence with the lower portion of the base, attributable to the portion of the feet but no longer in place, which suggests a reuse. more insights concerning this specific characteristic should be gathered, in consideration of a similar element observed in another find of the equipment.

figure 6. (a) stratigraphic sample from the black wig on the back of the statuette; (b) cross-section under om in visible light (1, white preparation layer; 2, warm white pictorial layer; 3, black pictorial layer); (c) sem-bsd image of the cross-section; (d, e, f) sem-edx spectra of the white, warm white and black layers, respectively.
figure 7. ft-ir spectra (black curves) of the two samples taken from the body (orange-yellow uv fluorescence area) (a) and from the face area (greying of tones) (b). the substance was identified as paraloid (the red curve is the paraloid standard spectrum, for comparison).
as regards the figure-base anchoring, the complete absence of the feet (generally made of wood, sometimes partially completed with details in modelling material, often equipped with a tenon for the base joint) was immediately noted, together with the presence of a wooden insert to which the legs were anchored, attributable to a previous intervention (figure 8a). in fact, a complete discrepancy between the main block and this wooden insert was observed (in colour, compactness and grain of the wood); the insert was grouted on the perimeter and hidden on the surface by a pictorial retouching attributable to a rather recent previous intervention. thanks to the radiographic analysis, this portion of the base was clearly distinguished, and it could be observed that the insert reaches about half the total thickness of the base. furthermore, the x-ray images showed how both the legs and the wooden insert were applied and fixed: in fact, x-rays revealed the presence of two holes created to accommodate the end portions of the two legs. the legs and the large wooden insert in correspondence with the base were fixed by applying a filler material (similar to a mortar), more radiopaque than the wooden material (figure 9). a fracture of the wooden matter in the back of the right leg of the bearer, at calf level, corresponded to a tongue and rebate joint for the completion of the leg anatomy, which can be clearly observed in the radiographic images of the lower part (figure 8b,c). in the literature, many of the sculptures represented in a striding position present an assembly of two separate elements for the back leg, a practical expedient to carve the inner parts more easily. taking into account this aspect, the fact that this kind of joint is documented starting from very ancient chronologies, and the characteristics of the wood, which are rather similar to those of the central body (colour, compactness), the originality of the part could not be excluded in a first phase, despite some anomalies. in fact, the presence of an intervention mortar at the junction between the leg and the body (attributable to a bonding) and the chromatic and morphological differences between the wooden material of the base insert and that of the leg itself are not sufficient to attribute the leg portion to the original manufacture or to a subsequent intervention. regarding this issue, a comparison with the picture of a wooden sculpture (xviii dynasty, collections of the museo egizio of torino), taken before its restoration in the late 1980s by the doneux laboratory, is very interesting: also in this case a wooden insert is observed for the lower portion of one of the two legs, certainly a non-original operation in this instance, but conceptually comparable to the one discussed here [37]. more insights should be gathered to ascertain this aspect. as for the basket-head anchoring, evidence of a previous intervention was identified. on the basket, in fact, a significant presence of an adhesive material was observed, for whose characterisation no specific analysis was conducted; however, due to its mechanical and optical characteristics, in addition to its specific reactivity in contact with polar solvents, the adhesive probably has a synthetic origin (presumably of a vinyl nature, figure 10). finally, both from the radiographic analysis and at the time of disassembly, a metal element of reduced diameter and size, inserted to fix the two portions, was identified (figure 11).
in consideration of its shape, it is possible to suppose that it pertains to a modern structural intervention.

figure 8. detail of the statuette's legs: (a) the wooden insert for the leg anchorage, (b) and (c) the radiographic images, frontal and lateral respectively, in which the joint is clearly visible.
figure 9. radiographic image of the base of the statuette.
figure 10. details of the adhesive residues (yellow arrows) detected at the basket-head interface.

5. conclusion
the present work reports the results of a two-step scientific approach useful for the characterisation of the materials and the stratigraphy of ancient objects. the combination of imaging and point analysis techniques, along with the visual inspection of the artwork, can give indications of the materials used and of the techniques of assembly, repairs, overpaintings, finishes and treatments that have occurred in antiquity and through the centuries. in the specific case under study, the close observation and the tomographic analysis carried out made it possible to describe the assembly technique. in fact, the use of several portions assembled with wooden dowels, and of a preparation material based on calcium carbonate, to achieve the final volume was observed. thanks to the chemical investigations, also conducted in consideration of the multispectral imaging results, it was possible to define the nature of the different materials used in the manufacture of the statuette. in addition to vil, xrf analysis in combination with the om and sem-edx methodology made it possible to identify the pigments used for the decoration. taking into account the identification of synthetic materials and the ft-ir analysis results, it has also been possible to distinguish modern interventions, probably dating from the second half of the twentieth century. additionally, more ancient interventions, such as the insertion of wooden elements to complete the figure, seem to be present. all the performed analyses and the consequent evaluations contributed to the definition of the best conservation process for the statuette. in the future, it will be possible to apply the same investigation strategy to other wooden artefacts and statuettes belonging to the same funerary assemblage, in order to make comparisons among the objects. analogies and differences in terms of materials, manufacturing techniques and state of preservation will also support the egyptological study of specific technical features, aiming at the possible reconstruction of the different workshops active in asyut in the early second millennium bce.

acknowledgements
the nexto project (progetto di ateneo 2017) funded by compagnia di san paolo, the neu_art project funded by regione piemonte, and the infn chnet network are warmly acknowledged. we would also like to thank dr. anna piccirillo and dr. daniele demonte from the centro conservazione e restauro "la venaria reale", for performing, respectively, the ft-ir spectroscopy analyses and the multiband imaging.

references
[1] m. a. rizzutto, j. f. curado, s. bernardes, p. h. o. v. campos, e. a. m. kajiya, t. f. silva, c. l. rodrigues, m. moro, m. tabacniks, n. added, analytical techniques applied to study cultural heritage objects, inac 2015, são paulo, sp, brazil, october 4-9 (2015), isbn 978-85-99141-06-9
[2] e. peccenini, f. albertin, m. bettuzzi, r. brancaccio, f. casali, m. p. morigi, f. petrucci, advanced imaging systems for diagnostic investigations applied to cultural heritage, j. phys.: conf. ser.
566 012022 (2014). doi: 10.1088/1742-6596/566/1/012022
[3] s. bruni, v. guglielmi, e. della foglia, m. castoldi, g. bagnasco gianni, a non-destructive spectroscopic study of the decoration of archaeological pottery: from matt-painted bichrome ceramic sherds (southern italy, viii-vii b.c.) to an intact etruscan cinerary urn, spectrochimica acta part a: molecular and biomolecular spectroscopy 191 (2018), pp. 88-97. doi: 10.1016/j.saa.2017.10.010
[4] m. hain, j. bartl, v. jacko, multispectral analysis of cultural heritage artefacts, measurement science review, volume 3, section 3 (2003)
[5] t. cavaleri, p. croveri, a. giovagnoli, spectrophotometric analysis for pigment palette identification: the case of "profeta stante", 10th international conference on non-destructive investigations and microanalysis for the diagnostics and conservation of cultural and environmental heritage, aipnd, 2011
[6] m. p. morigi, f. casali, m. bettuzzi, r. brancaccio, v. d'errico, application of x-ray computed tomography to cultural heritage diagnostics, appl. phys. a 100 (2010), pp. 653-661. doi: 10.1007/s00339-010-5648-6
[7] g. fiocco, t. rovetta, m. malagodi, m. licchelli, m. gulmini, g. lanzafame, f. zanini, a. lo giudice, a. re, synchrotron radiation micro-computed tomography for the investigation of finishing treatments in historical bowed string instruments: issues and perspectives, eur. phys. j. plus 133 (2018). doi: 10.1140/epjp/i2018-12366-5
[8] e. di francia, s. grassini, g. e. gigante, s. ridolfi, s. a. barcellos lins, characterisation of corrosion products on copper-based artefacts: potential of ma-xrf measurements, acta imeko, volume 10 (1) (2021), pp. 136-141. doi: 10.21014/acta_imeko.v10i1.859
[9] a. lo giudice, a. re, d. angelici, j. corsi, g. gariani, m. zangirolami, e. ziraldo, ion microbeam analysis in cultural heritage: application to lapis lazuli and ancient coins, acta imeko, volume 6 (3) (2017), pp. 76-81. doi: 10.21014/acta_imeko.v6i3.465
[10] m. c. leuzzi, m. crippa, g. a. costa, application of non-destructive techniques. the madonna del latte case study, acta imeko, volume 7 (3) (2018), pp. 52-56. doi: 10.21014/acta_imeko.v7i3.587
[11] m. schreiner, m. melcher, k. uhlir, scanning electron microscopy and energy dispersive analysis: applications in the field of cultural heritage, anal. bioanal. chem. 387 (2007), pp. 737-747. doi: 10.1007/s00216-006-0718-5
[12] c. invernizzi, g. v. fichera, m. licchelli, m. malagodi, a non-invasive stratigraphic study by reflection ft-ir spectroscopy and uv-induced fluorescence technique: the case of historical violins, microchemical journal 138 (2018), pp. 273-281. doi: 10.1016/j.microc.2018.01.021
[13] a. mangone, g. e. de benedetto, d. fico, l. c. giannossa, r. laviano, l. sabbatini, i. d. van der werf, a. traini, a multianalytical study of archaeological faience from the vesuvian area as a valid tool to investigate provenance and technological features, new j. chem. 35 (2011), pp. 2860-2868. doi: 10.1039/c1nj20626e
[14] g. barone, s. ioppolo, d. majolino, p. migliardo, g. tigano, a multidisciplinary investigation on archaeological excavation in messina (sicily). part i: a comparison of pottery findings in "the strait of messina area", journal of cultural heritage 3 (2002), pp. 145-153. doi: 10.1016/s1296-2074(02)01170-6

figure 11. metallic element: (a) detail of the rx image that allows its localisation and (b) the metallic pin after removal during the intervention.
[15] s. bracci, o. caruso, m. galeotti, r. iannaccone, d. magrini, d. picchi, d. pinna, s. porcinai, multidisciplinary approach for the study of an egyptian coffin (late 22nd/early 25th dynasty): combining imaging and spectroscopic techniques, spectrochimica acta part a: molecular and biomolecular spectroscopy 145 (2015), pp. 511-522. doi: 10.1016/j.saa.2015.02.052
[16] a. abdrabou, m. abdallah, i. a. shaheen, h. m. kamal, investigation of an ancient egyptian polychrome wooden statuette by imaging and spectroscopy, international journal of conservation science, volume 9 (1) (2018), pp. 39-54.
[17] t. cavaleri, p. buscaglia, c. caliri, e. ferraris, m. nervo, f. p. romano, below the surface of the coffin lid of neskhonsuennekhy in the museo egizio collection, x-ray spectrom. (2020), pp. 1-14. doi: 10.1002/xrs.3184
[18] a. otte, t. thieme, a. beck, computed tomography alone reveals the secrets of ancient mummies in medical archaeology, hellenic journal of nuclear medicine 16(2) (2013), pp. 148-149.
[19] a. re, a. lo giudice, m. nervo, p. buscaglia, p. luciani, m. borla, c. greco, the importance of tomography studying wooden artefacts: a comparison with radiography in the case of a coffin lid from ancient egypt, international journal of conservation science, volume 7 (2) (2016), pp. 935-944.
[20] c. calza, r. p. freitas, a. brancaglion jr., r. t. lopes, analysis of artifacts from ancient egypt using an edxrf portable system, inac 2011, belo horizonte, mg, brazil, october 24-28 (2011), isbn 978-85-99141-04-5
[21] n. m. badr, m. fouad ali, n. m. n. el hadidi, g. abdel naeem, identification of materials used in a wooden coffin lid covered with composite layers dating back to the ptolemaic period in egypt, conservar património 29 (2018), pp. 11-24. doi: 10.14568/cp2017029
[22] m. abdallah, h. m. kamal, a. abdrabou, investigation, preservation and restoration processes of an ancient egyptian wooden offering table, international journal of conservation science, volume 7 (4) (2016), pp. 1047-1064.
[23] a. abdrabou, m. abdallah, h. m. kamal, scientific investigation by technical photography, om, esem, xrf, xrd and ftir of an ancient egyptian polychrome wooden coffin, conservar património 26 (2017), pp. 51-63. doi: 10.14568/cp2017008
[24] h. a. m. afifi, m. a. etman, h. a. m. abdrabbo, h. m. kamal, typological study and non-destructive analytical approaches used for dating a polychrome gilded wooden statuette at the grand egyptian museum, scientific culture, vol. 6 (3) (2020), pp. 69-83. doi: 10.5281/zenodo.4007568
[25] j. kahl, a. m. sbriglio, p. del vesco, m. trapani, asyut. the excavations of the italian archaeological mission (1906-1913), studi del museo egizio 1, ed. franco cosimo panini, modena, 2019, isbn 978-88-570-1577-4
[26] p. del vesco, le tombe di assiut, in p. del vesco and b. moiso, missione egitto 1903–1920: l'avventura archeologica m.a.i. raccontata, ed. franco cosimo panini, modena, 2017, pp. 293-301, isbn 8857012654
[27] b. moiso, p. del vesco, b.
hucks, l'arrivo degli oggetti al museo e i primi allestimenti, in p. del vesco and b. moiso, missione egitto 1903-1920. l'avventura archeologica m.a.i. raccontata, modena, 2017, p. 325, isbn 8857012654
[28] g. verri, the use and distribution of egyptian blue: a study by visible-induced luminescence imaging, in k. uprichard & a. middleton, the nebamun wall paintings, london: archetype publications, 2008, pp. 41-50, isbn 9781904982142
[29] a. re, f. albertin, c. bortolin, r. brancaccio, p. buscaglia, j. corsi, g. cotto, g. dughera, e. durisi, w. ferrarese, m. gambaccini, a. giovagnoli, n. grassi, a. lo giudice, p. mereu, g. mila, m. nervo, n. pastrone, f. petrucci, f. prino, l. ramello, m. ravera, c. ricci, a. romero, r. sacchi, a. staiano, l. visca, l. zamprotta, results of the italian neu_art project, iop conference series: materials science and engineering 37 (2012). doi: 10.1088/1757-899x/37/1/012007
[30] m. nervo, il progetto neu_art. studi e applicazioni/neutron and x-ray tomography and imaging for cultural heritage, cronache, 4, editris, torino, 2013, isbn 9788889853344
[31] a. lo giudice, j. corsi, g. cotto, g. mila, a. re, c. ricci, r. sacchi, l. visca, l. zamprotta, n. pastrone, f. albertin, r. brancaccio, g. dughera, p. mereu, a. staiano, m. nervo, p. buscaglia, a. giovagnoli, n. grassi, a new digital radiography system for paintings on canvas and on wooden panels of large dimensions, 2017 ieee international instrumentation and measurement technology conference (i2mtc 2017) proceedings (2017). doi: 10.1109/i2mtc.2017.7969985
[32] a. re, f. albertin, c. avataneo, r. brancaccio, j. corsi, g. cotto, s. de blasi, g. dughera, e. durisi, w. ferrarese, a. giovagnoli, n. grassi, a. lo giudice, p. mereu, g. mila, m. nervo, n. pastrone, f. prino, l. ramello, m. ravera, c. ricci, a. romero, r. sacchi, a. staiano, l. visca, l. zamprotta, x-ray tomography of large wooden artworks: the case study of "doppio corpo" by pietro piffetti, heritage science 2 (1) (2014). doi: 10.1186/s40494-014-0019-9
[33] a. re, j. corsi, m. demmelbauer, m. martini, g. mila, c. ricci, x-ray tomography of a soil block: a useful tool for the restoration of archaeological finds, heritage science 3 (1) (2015). doi: 10.1186/s40494-015-0033-6
[34] a. c. kak, m. slaney, principles of computerized tomographic imaging, ieee press, 1987, chapter 3, pp. 49-112, isbn 0-87942-198-3
[35] l. vigorelli, a. lo giudice, t. cavaleri, p. buscaglia, m. nervo, p. del vesco, m. borla, s. grassini, a. re, upgrade of the x-ray imaging set-up at ccr "la venaria reale": the case study of an egyptian wooden statuette, proceedings of 2020 imeko tc-4 international conference on metrology for archaeology and cultural heritage, trento, italy, october 22-24 (2020), pp. 623-628, isbn 978-92-990084-9-2
[36] r. newman, m. serpico and r. white, adhesives and binders, in: p. t. nicholson and i. shaw, ancient egyptian materials, cambridge university press, 2000, pp. 475-493, isbn 0-5121-45257
[37] e. f. marocchetti, la scultura in legno al museo egizio di torino. problemi di conservazione e restauro, materiali e strutture. problemi di conservazione. sulla scultura, nuova serie anno vi, numero 11-12, pp.
9-31, 2008, issn 1121-2373

emva 1288 camera characterisation and the influences of radiometric camera characteristics on geometric measurements
acta imeko, issn: 2221-870x, december 2016, volume 5, number 4, 81-87
maik rosenberger, chen zhang, pavel votyakov, marc preißler, rafael celestre, gunther notni
ilmenau university of technology, gustav kirchhoff platz 2, 98693 ilmenau, germany
section: research paper
keywords: camera characterisation; camera parameters; edge transition model; optical geometric measurements
citation: maik rosenberger, chen zhang, pavel votyakov, marc preißler, rafael celestre, gunther notni, emva 1288 camera characterisation and the influences of radiometric camera characteristics on geometric measurements, acta imeko, vol. 5, no. 4, article 13, december 2016, identifier: imeko-acta-05 (2016)-04-13
section editor: paul regtien, the netherlands
received march 22, 2016; in final form december 12, 2016; published december 2016
copyright: © 2016 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: the presented work is the result of the research within the project id2m, which is situated at the ilmenau university of technology, germany, as part of the research support program innoprofile, funded by the federal ministry of education and research (bmbf), germany
corresponding author: maik rosenberger, e-mail: maik.rosenberger@tu-ilmenau.de

1. introduction
over the last decades, plenty of different image sensors have been developed by a huge variety of companies. presently there is a strong trend towards complementary metal oxide semiconductor (cmos) based image sensors in comparison to charge coupled device (ccd) sensors, because of their high resolution and frame rate with lower power consumption [1], as well as their strong ability to integrate signal processing circuits down to pixel level [2]. on the other hand, cmos sensors are subject to a worse signal-to-noise ratio (snr), as shown in [1]. furthermore, cmos sensors usually have a fixed pattern noise (fpn) in the image due to their read-out circuit [3], while the fpn in ccd sensors is mostly of a random nature. given this huge variety of sensor characteristics, the choice of an image sensor for practical applications is difficult, especially for geometric measurements in view of the required high measurement stability and accuracy. first of all, a sensor comparison can be meaningfully conducted only under the precondition that the sensors are measured and characterised according to the same standard and with the same measuring instructions. however, the datasheets provided by the industrial camera manufacturers are usually written to their own standards and formats, so it is difficult to draw a comparison between different sensors using them.
in contrast to this situation, the emva 1288 standard was released to define the measurement and characterisation methods for image sensors and the format of datasheets.

abstract
over the past decades, a large number of imaging sensors based mostly on ccd or cmos technology have been developed. the datasheets provided by their developers are usually written to their own standards, and no universal figure of merit can be drawn from them for comparison purposes. emva 1288 is a standard that aims to overcome this problem by defining parameters and an experimental setup for the radiometric characterisation of cameras. an implementation of an experimental setup and software environment for the radiometric characterisation of imaging sensors following the guidelines of emva 1288 is presented here. using simulations, the influences and impact of several emva 1288 parameters on geometric measurements can be estimated. this paper also presents a signal model and image acquisition chain; measurements of radiometric characteristics of an image sensor; and a sensor evaluation for geometric measurements, where the aforementioned influences on geometric measurements are discussed.

furthermore, there is a need for understanding the impact of image sensor parameters on image processing procedures. for high-precision optical 2d geometric measurements based on areas of interest consisting of 1d search lines, the foundation is the determination of the edge point location with subpixel accuracy on each search line. up until now, theoretical investigations on edge detection (e.g. [4]-[6]) have focused mainly on the performance comparison between different methods under a simple noise model that is not completely in agreement with the physical camera model and thus cannot represent the real sensor characteristics. to estimate the influence of sensor parameters on real measurements, a systematic simulation model is needed. the emva 1288 standard gives a mathematical description of the signal conversion procedure in the camera system, which can also be used as the foundation of the simulation model. moreover, the simulation should be designed considering real measurement conditions. in this paper, a measurement setup for radiometric camera characterisation according to the emva 1288 standard is first described. subsequently, a method based on a close-to-reality simulation model with the sensor parameters of the emva 1288 standard is proposed to evaluate image sensors with respect to geometric measurements in accordance with measurement metrology. using this simulation method, the influences of several emva 1288 sensor parameters were estimated.

2. signal model and image acquisition chain
in general, the task of image sensors is the transformation of electromagnetic radiation into analogue or digital signals. the foundation of this signal conversion process is the photoelectric effect. for image sensors in the ultraviolet-visible (uv-vis) and near-infrared (nir) range, mainly the inner photoelectric effect is utilised for the conversion of photons into electrical signals. figure 1 illustrates the signal conversion stages in the image acquisition. the photons interact with the silicon layer so that a number of electron-hole pairs are generated depending on the amount of photons. this effect results in an accumulation of elementary charges in a potential well.
the collected charges are converted into a voltage output in proportion to the amount of collected electrons in an active sensor area; these areas are called pixels and can be assembled as a ccd or cmos element. finally, the voltage is quantised to discrete digital gray values with an analog-to-digital converter. saturation of the potential well leads to saturation of the gray value, so that it will no longer increase with further exposure. figure 1 lists the uncertainty sources that are standardised in the emva 1288 camera model, which is illustrated in figure 2. it begins with the generation of electrons during the exposure time. with the wavelength-dependent total quantum efficiency $\eta(\lambda)$, the average number of photons $\mu_p$ hitting the pixel area is converted into an electrical signal $\mu_e$:

$\eta(\lambda) \cdot \mu_p = \mu_e . \quad (1)$

for (1), the effects generated by the fill factor and the influences of the micro lenses mounted on the active sensor area are included in the total quantum efficiency. the mean number of photons $\mu_p$ can be calculated from the pixel area $A$, the exposure time $t_{\mathrm{exp}}$ and the irradiance $E$ on the sensor surface according to (2):

$\mu_p = \frac{A E t_{\mathrm{exp}}}{h \nu} = \frac{A E t_{\mathrm{exp}}}{h c / \lambda} , \quad (2)$

where $h$ is planck's constant and $c$ the speed of light. for the calculation of $\mu_p$, a precise irradiance measurement is needed at the same place where the image sensor under test is located. besides, a minor number of electrons $\mu_d$ are also accumulated during the exposure time due to the sensor thermal effect as well as the uncertainty from the sensor read-out and amplifier circuits. the dark and bright signals are summed and amplified, and this behaviour is modelled using (3) with the introduction of the overall system gain factor $K$:

$\mu_y = K(\mu_d + \mu_e) = \mu_{y.\mathrm{dark}} + K \mu_e . \quad (3)$

in combination with (1) and (2), the mean gray value $\mu_y$ can be calculated as the sum of the mean dark gray value and the product of the overall system gain factor $K$ and the expected number of electrons $\mu_e$, which is calculated from the number of photons:

$\mu_y = \mu_{y.\mathrm{dark}} + K \eta \frac{\lambda A}{h c} E t_{\mathrm{exp}} . \quad (4)$

with this equation, the linearity of the sensor can be characterised using image gray values.

figure 1: simplified diagram of the image acquisition process and the principal noise sources [7] (number of photons np converted via the quantum efficiency η into the number of electrons ne; dark noise nd, system gain k and quantization noise σq yield the digital grey value y).
figure 2: schematic of the emva 1288 camera model [8].

the temporal fluctuation of the number of accumulated charge units, namely the shot noise, follows a poisson distribution [8]; hence its variance can be determined from the mean value:

$\sigma_e^2 = \mu_e = (\mu_y - \mu_{y.\mathrm{dark}}) / K . \quad (5)$

with the introduction of the quantisation noise $\sigma_q$, a signal-independent dark noise $\sigma_d$ as well as the photon shot noise $\sigma_e$ calculated with (5), the overall noise $\sigma_y$ measured with digital images can be characterised with (6):

$\sigma_y^2 = \underbrace{K^2 \sigma_d^2 + \sigma_q^2}_{\text{offset}} + \underbrace{K}_{\text{slope}} \left( \mu_y - \mu_{y.\mathrm{dark}} \right) . \quad (6)$

as mentioned, the dark noise $\sigma_d$ consists of the thermal noise, which is poisson distributed, since it results from an accumulative process, and the read-out and circuit noise, which can be modelled with a gaussian distribution [9]. from the radiation measurement, the dark noise $\sigma_d$ and the overall gain factor $K$ can then be determined from (4) to (6) with the photon transfer method [10]. for the determination of these parameters, a measurement setup was developed and is presented in section 3.
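to make the signal chain of (1)-(6) concrete, the following python fragment simulates the digital gray value of a single pixel; it is only a sketch, and all numerical parameter values (quantum efficiency, gain, dark signal, pixel area) are hypothetical placeholders, not values of any real sensor.

```python
import numpy as np

H, C = 6.626e-34, 2.998e8                      # planck constant (J s), speed of light (m/s)

def mean_photons(E, A, t_exp, lam):
    # eq. (2): mu_p = E * A * t_exp / (h * c / lambda)
    return E * A * t_exp * lam / (H * C)

def simulate_gray_values(E, A=(3.75e-6) ** 2, t_exp=1e-3, lam=550e-9,
                         eta=0.5, K=0.3, mu_d=10.0, bits=8, n=100_000, seed=0):
    # hypothetical parameters: total quantum efficiency eta, system gain K (DN/e-),
    # mean dark signal mu_d (e-); returns the mean and variance of the gray values
    rng = np.random.default_rng(seed)
    mu_p = mean_photons(E, A, t_exp, lam)
    n_e = rng.poisson(eta * mu_p, n)           # eq. (1) with poisson shot noise, eq. (5)
    n_d = rng.poisson(mu_d, n)                 # thermally generated dark electrons
    y = np.clip(np.round(K * (n_e + n_d)), 0, 2 ** bits - 1)   # eq. (3) + quantisation
    return y.mean(), y.var()
```

running this over a series of irradiance levels E reproduces the linear mean/variance relation of (6) up to saturation, which is exactly what the photon transfer method exploits.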
3. measurement setup for the determination of radiometric image sensor parameters
3.1. measurement setup construction
the requirements for the measurement setup used to determine the sensor parameters of the emva 1288 standard are given in sections 6-9 of [8]. with the knowledge of those requirements, the measurement setup in figure 3 was developed. the main criterion for the construction is to ensure accordance with the pinhole camera model, which is realised with the f-number restriction:

$f_\# = \frac{d}{D} = 8 . \quad (7)$

with an f-number restriction of 8 and a given distance $d$ between the light source and the sensor plane, the free radiation diameter $D$ of the light source can be calculated. another major point of this construction is the ideal wall surface behaviour inside the tube. therefore, a special coating which ensures a low reflection coefficient was used for the inner surface. furthermore, a special camera socket was constructed in order to minimise parasitic reflections inside the mounting.

3.2. software development
the software for the emva 1288 measurement setup was developed in the matlab environment. the software handles gige-vision cameras as well as pictures taken by the user with other systems. the complete standard was implemented in this measurement tool. figure 4 illustrates the general procedure of the emva 1288 measurement. first of all, the irradiance which will then be applied to the measurement position of the image sensor under test has to be measured with a radiometer. after the measurement of the radiation power, the test camera can be connected to the setup; afterwards the acquisition is started to measure the radiometric parameters and the non-uniformity of the image sensor. at the end of the camera characterisation, a standard-compliant emva 1288 datasheet for the test camera can be generated automatically. this datasheet can then be used in the simulation program for the sensor evaluation with regard to its process capability in high-precision 2d optical geometric measurement based on search lines. the simulation program was also developed in matlab with an interface for the import of the camera datasheet. following the camera model in figure 2, a model was developed for the simulation of the 1d edge detection process with subpixel accuracy, which is the crucial process in 2d measurements. with this simulation model and the measured sensor parameters, the sensor can be evaluated according to the measurement uncertainty resulting from the monte carlo simulation [11]. the simulation model and its implementation are described extensively in section 5. moreover, the sensor parameters can be freely scaled in the simulation program in order to estimate the influences of the individual sensor parameters on the measurement.

4. measurements of radiometric image sensor characteristics
the first measurements were taken with two ccd camera systems which differ in pixel size and quantum efficiency. the special requirement for this measurement is an identical set of exposure time levels, so that a comparison between the cameras becomes possible. in figure 5, the different saturation points as well as the difference in the sensitivity coefficient between both cameras are easy to recognise. the saturation point of the ccd sensor icx 415al, with its higher sensitivity, is reached at 34000 photons per pixel, which is significantly lower than for the icx 424.
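the evaluation side of the photon transfer method mentioned in section 2 can likewise be sketched: according to (6), a linear fit of the temporal variance against the mean gray value yields the overall system gain K, and the dark-image variance yields the dark noise. the function below is a simplified illustration with hypothetical input arrays; the standard additionally prescribes the fit range and non-uniformity corrections, which are omitted here.

```python
import numpy as np

def photon_transfer(mu_y, var_y, mu_y_dark, var_y_dark):
    # mu_y, var_y: mean and temporal variance of bright images over exposure levels
    # mu_y_dark, var_y_dark: the same statistics from dark images
    # eq. (6): var_y - var_y_dark = K * (mu_y - mu_y_dark)
    K, _ = np.polyfit(mu_y - mu_y_dark, var_y - var_y_dark, 1)
    # offset term of eq. (6): var_y_dark = K^2 * sigma_d^2 + sigma_q^2
    sigma_q2 = 1.0 / 12.0                      # quantisation noise variance in DN^2
    sigma_d = np.sqrt(max(np.mean(var_y_dark) - sigma_q2, 0.0)) / K
    return K, sigma_d                          # K in DN/e-, sigma_d in e-
```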
figure 3: emva 1288 measurement setup.
figure 4: general procedure for data acquisition and evaluation, separated into three stages.

5. sensor evaluation for geometric measurements based on emva 1288 sensor parameters
5.1. simulation model
in the majority of previous works on the simulation of 1d edge detection, e.g. [4], [5] and [6], it is simply assumed that the noise follows a gaussian distribution. but this assumption does not conform to the real radiometric sensor characteristics according to the camera model in emva 1288. in this work, a simulation model based on this camera model was developed with consideration of all noise sources. for the 1d edge detection, a linear image must be generated first. this is considered as a cut from the 2d image. the simulation aims at the evaluation of the measurement capability of the whole sensor area; hence the sensor non-uniformity, which is characterised with dsnu1288 and prnu1288, is also modelled by random parameters in the 1d image simulation. figure 6 illustrates the process to simulate the measurement uncertainty of edge detection with given radiometric sensor parameters. at the first step, a spatial distribution of irradiation in metric units is given as the light signal on the image sensor. since it is shown in [12] that the edge detection error is strongly dependent on the difference between the edge location and the centre of the corresponding sensor pixel, a moving edge is used whose subpixel part is generated randomly in the value range between 0 and 1 using a uniform distribution function. considering the blurring effect due to the imaging optics, the blurred edge transition model in [13] under the assumption of a gaussian point spread function (psf) is used. this edge model is shown in figure 7 and formulated in (8):

$I(x) = \frac{k}{2} \left( \mathrm{erf}\!\left( \frac{x - l}{\sqrt{2}\,\sigma} \right) + 1 \right) + h , \quad (8)$

where $l$ is the edge location, $h$ the intensity offset, $k$ the edge contrast, and $\sigma$ the standard deviation of the gaussian psf. this signal is then converted into a discrete distribution of gray values along a defined number of pixels in the following step. at first, the number of photons in each pixel is calculated by integrating (8) over the pixel grid $L$. the number of photons in the n-th pixel is given by (9):

$\mu_p(n) = \int_{(n-1) L}^{n L} 50.34 \cdot t_{\mathrm{exp}} \cdot \lambda \cdot I(x) \, \mathrm{d}x . \quad (9)$

in this equation, the light wavelength and the exposure time are set constant, and their values are 550 nm and 1 ms, respectively. from the number of photons, the gray value of each pixel is simulated according to the system model in figure 2. upon the assumption that the linearity error characteristic is unchanged over the sensor area, the quantum efficiency η is adjusted pixel-wise to ηl according to the number of irradiated photons and the linearity error curve. in the next step, the spatial non-uniformity of the photon response is simulated. according to [14], a gaussian random function with ηl as mean and prnu1288 as variance is used to generate the final quantum efficiency ηl,p for each pixel. then, the bright signal μe with photon shot noise is generated using a random poisson distribution function with mean value ηl,p·μp. in the simulation of the dark signal, which is considered as thermally induced electrons, a basic noise signal μd,g is first generated from a random poisson distribution function whose mean value is the square of the dark noise value. to simplify the spatial non-uniformity, only the spatial variance is taken into consideration, disregarding the periodic variations.
as in the non-uniformity simulation of the bright signal, a gaussian random function with μd,g as mean and dsnu1288 as variance is used to generate the final dark signal μd. the bright and dark signals are combined with each other, then multiplied by the gain factor K and quantised to gray values.

figure 5: sensitivity measurements of different camera systems.
figure 6: simulation of the measurement uncertainty of edge detection.
figure 7: edge transition using the specific blurred edge model.

the edge location is detected with an adaptive threshold that is estimated using the histogram-based method in [15]. to achieve subpixel accuracy, a cubic interpolation is used in the edge transition area, and the subpixel edge point is defined as the intersection of the threshold and the fitted function. the complete procedure is repeated according to the monte carlo method [11]. the deviation of the detected edge from the defined edge location in (8) is used as the outcome in each iteration. under the assumption that the edge detection error follows a symmetric distribution function, the outcomes are evaluated with the quantile method for symmetric distributions [15] to determine the measurement uncertainty (mu), which is defined as half of the interval having a 95 % level of confidence [16].

5.2. model implementation
in the model implementation the parameters in (8) must be determined first. the σ value is to be set according to the characteristics of the optics which will be used in the real measurements. the allowed high level h+k in the irradiation distribution (figure 7) is determined based on the gray value simulation at a single pixel using the sensor data and the camera model. in this process, the irradiation at which the probability of pixel saturation equals 95 % is determined by simulation and used as the high level of the light signal. based on the high level h+k, the edge transition contrast in (8) can be flexibly set by varying the offset h to simulate different measurement conditions occurring in practice. an important parameter in the monte carlo method is the number of iterations n. this number should be as small as possible to save simulation time, but ensure a stable simulation result at the same time. the ratio of the standard deviation $s_U$ of the simulated measurement uncertainty to its mean value $\bar{U}$ is utilised to examine the simulation stability. figure 8 illustrates the dependency of the simulation stability on the iteration number. it is shown that this ratio remains nearly constant at 0.13 % from n = 50000; hence all the simulations are performed with 50000 iterations. the simulation result using the sensor parameters of the ccd sensor "icx445" is shown in figure 9. it can be seen that the deviation of the detected edge location from its target value agrees very well with a symmetrical distribution. hence, the use of the quantile method is validated.

5.3. analysis of the influence of sensor data on the uncertainty of measurement
based on this simulation model and the real characteristics of the ccd sensor "icx445", the influences of linearity error, dark noise, dsnu1288 and prnu1288 on the measurement uncertainty were estimated by gradually raising the individual parameters, while the other system parameters remained unchanged. the σ value in (8) is set to 5, with which the edge transition form approximates the real characteristics delivered by optical systems.
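the simulation loop of sections 5.1 and 5.2 can be condensed into the following monte carlo sketch in python. all parameter values are hypothetical stand-ins for the datasheet values, a fixed 50 % threshold replaces the histogram-based one, and linear interpolation replaces the cubic fit for brevity; it is an illustration of the principle, not the matlab program described above.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(1)

def edge_irradiance(x, l, h, k, sigma):
    # eq. (8): edge blurred by a gaussian psf
    return k / 2.0 * (erf((x - l) / (np.sqrt(2.0) * sigma)) + 1.0) + h

def edge_error(n_pix=64, sigma=5.0, h=50.0, k=900.0, eta=0.5, K=0.3,
               dark=10.0, prnu=0.01, dsnu=0.01, bits=8):
    l = n_pix / 2.0 + rng.uniform()                    # random subpixel edge location
    x = np.arange(n_pix) + 0.5                         # pixel centres
    mu_p = edge_irradiance(x, l, h, k, sigma)          # mean photons per pixel
    eta_p = rng.normal(eta, eta * prnu, n_pix)         # photo response non-uniformity
    n_e = rng.poisson(np.maximum(eta_p * mu_p, 0.0))   # bright signal with shot noise
    n_d = rng.poisson(np.maximum(rng.normal(dark, dark * dsnu, n_pix), 0.0))  # dark signal
    y = np.clip(np.round(K * (n_e + n_d)), 0, 2 ** bits - 1)
    thr = 0.5 * (y.max() + y.min())                    # simplified global threshold
    i = int(np.argmax(y >= thr))                       # first pixel at/above threshold
    l_est = x[i - 1] + (thr - y[i - 1]) / (y[i] - y[i - 1])  # linear subpixel crossing
    return l_est - l

errors = np.array([edge_error() for _ in range(50_000)])
# quantile method: mu is half of the symmetric 95 % coverage interval
mu95 = 0.5 * (np.quantile(errors, 0.975) - np.quantile(errors, 0.025))
```

scaling the prnu, dsnu, dark or h/k arguments then qualitatively reproduces the parameter studies discussed in section 5.3.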
the tests were first performed with 100 % contrast, in which the light signal covers the full sensor dynamic range. afterwards, the contrast dependence of the influences of these sensor parameters was investigated with gradually reduced contrast by raising the low level of the light signal. the results of the investigations are presented in figure 10 to figure 13. in the datasheet the dark noise and dsnu1288 are given in the absolute unit [e-], which cannot directly indicate their relation to the 8-bit digitised image signal. therefore, these parameters are represented by their ratio to the saturation capacity of the image sensor. moreover, a change of the systematic measurement deviation is expected with the magnification of the linearity error; hence the variation of the expected value of the absolute measurement deviation was also observed in this case.

figure 8: stability of the simulation in dependency on the number of iterations.
figure 9: simulation result with sensor parameters of the ccd sensor "icx445".
figure 10: measurement uncertainty with magnification of the linearity error.
figure 11: measurement uncertainty with magnification of the dark noise.

with the full contrast that refers to the optimal illumination condition, the magnification of the linearity error and of the dark noise causes hardly any significant change of the measurement uncertainty (smaller than 0.005 pixel), although the magnification of the linearity error causes a systematic shift of the detected edge location, as shown in figure 14. likewise, the magnification of prnu1288 and dsnu1288 up to 0.5 % brings hardly any changes. with a further magnification of these parameters, a minor rise of the measurement uncertainty could be observed; as these parameters expand to 2 %, the uncertainty rises by ca. 0.015 pixel. in summary, the test results are robust against a certain deterioration of the sensor parameters under optimal contrast conditions, because an adequate dynamic range in the converted gray value image is fundamentally secured in this illumination situation. as the contrast decreases to 60 %, all the measurement uncertainty curves move slowly upwards, with only one exception in figure 10. with a further decrease of contrast, this movement becomes significant. the reason for the growth of the measurement uncertainty is that the decrease of contrast raises the ratio between the uncertainty of the individual gray values in the signal and its dynamic range, by reducing the latter. at decreased contrast levels, the magnification of the linearity error can cause an irregular change of uncertainty, whereby the value change is lower than 0.04 pixel. the characteristic of the systematic measurement deviation remains nearly the same at 80 % and 60 % contrast, but shows greatly irregular changes at 40 % and 20 % contrast, as shown in figure 14. the reason is presumed to be that the linearity error of the sensor is not uniformly distributed over the irradiation levels, as shown in figure 15, so the sensor ranges used at 40 % and 20 % contrast have different linearity error characteristics. with the contrast decreased down to 40 %, the measurement uncertainty still remains stable against the magnification of the dark noise. a clear relationship between dark noise and uncertainty could only be observed at 20 % contrast. in this case, the measurement uncertainty rises by 0.063 pixel as the dark noise expands to 10 % of the saturation capacity.
a significant interaction between contrast level and sensor parameters can be observed for prnu1288 and dsnu1288. at 40 % contrast an obvious rise of the measurement uncertainty from the turning points at 0.2 % can already be observed in both curves, whereby the uncertainty rises by 0.1 pixel with the magnification of prnu1288 to 2 % and by 0.06 pixel with the magnification of dsnu1288. at 20 % contrast, the increase of the measurement uncertainty becomes even more significant: it rises by 0.68 pixel with the magnification of prnu1288 and by 0.346 pixel with the magnification of dsnu1288. from the results above, it can be seen that for low-light applications and in the case of weakly reflective object surfaces the sensor parameters prnu1288 and dsnu1288 have a strong influence on the measurement uncertainty in edge detection and should be considered primarily in sensor selection.

figure 12: measurement uncertainty with magnification of prnu1288.
figure 13: measurement uncertainty with magnification of dsnu1288.
figure 14: measurement deviation with magnification of linearity error.
figure 15: linearity error curve of the ccd sensor "icx445".

6. discussion of results
the magnification of the linearity error curve has generally a relatively weak influence on the measurement uncertainty but a significant influence on the systematic measurement deviation. the reason is that the measurement uncertainty results primarily from the uncertainty of the discrete gray values in the edge transition area, whereas the linearity error causes a form distortion of the edge transition signal, which has a direct effect on the systematic measurement deviation. the irregularity of the influence on the measurement uncertainty lies in the complex relationship between the signal form and the measurement uncertainty. a high dark noise reduces the reachable signal-to-noise ratio of the sensor and thus the dynamic range of the signal, which correlates with the measurement uncertainty; this effect becomes significant only when the contrast in the light signal is reduced to 20 % and the dark noise is extremely high. the parameters prnu1288 and dsnu1288, which directly refer to the uncertainty of the individual gray values, contribute the most to the measurement uncertainty. the reason for the difference of the curve characteristics shown in figure 12 and figure 13 is that these two parameters represent two different sources of uncertainty: prnu1288 refers to the uncertainty of the photo response, so that bright pixels have a higher uncertainty, while the dark-signal uncertainty characterised by dsnu1288 is the same for all pixels.

7. conclusions
the presented summary shows the possibility to characterise image sensors using the standard emva 1288 and to evaluate image sensors for geometric measurements using the monte carlo method, in which the imaging process is simulated using the system model of the emva 1288 standard. with the simulation program, the influences of several essential parameters of an image sensor on geometric measurements were investigated.

references
[1] h. t. beier, b. l. ibey, "experimental comparison of the high-speed imaging performance of an em-ccd and scmos camera in a dynamic live-cell imaging test case", plos one 9(1):e84614, 2014.
[2] a. e. gamal, h. eltoukhy, "cmos image sensors", ieee circuits and devices magazine, vol. 21, issue 3, 2005.
[3] s. mohammadnejad, s. roshani, m. n.
sarvi, “fixed pattern noise reduction method in ccd sensors for leo satellite applications”, proceedings of 11th international conference on telecommunications, 2011. [4] e. p. lyvers, o. r. mitchell, m. l. akey, and a. p. reevs, “subpixel measurements using a moment-based edge operator”, ieee transactions on pattern analysis machine intelligence, vol.11, no.12, pp.1293-1309, 1989. [5] f. bouchara, m. bertrand, s. ramdani and m. haydar, “subpixel edge fitting using b-spline”, proceedings of the 3rd international conference on computer vision, pp. 353-364, 2007. [6] m. hagara and o. ondráček, “moving edge detection with sub pixel accuracy in 1-d images”, proceedings of 26th international conference radioelektronika, 2016. [7] c. aguerrebere, j. delon, y. gousseau and p. musé, “study of the digital camera acquisition process and statistical modeling of the sensor raw data”, 2013. [8] european machine vision association, emva standard 1288 – standard for characterisation of image sensors and cameras, release 3.0, 2010. [9] r, mancini, op amps for everyone, texas instruments, 2002. [10] j. r. janesick, “ccd characterisation using the photon transfer technique”, solid state imaging arrays, vol. 570 of spie proc., pp. 7-19, 1985. [11] d. p. kroese, t. taimre, z. i. botev, handbook of monte carlo methods, isbn 978-0470177938, wiley; 1. edition, 2015. [12] m. hagara and o. ondráček, “comparison of methods for edge detection with sub-pixel accuracy in 1-d images”, proceedings of 3. mediterranean conference on embedded computing (meco), pp. 124-127, 2014. [13] m. hagara, “edge detection with sub-pixel accuracy based on approximation of edge with erf function”, radio engineering, vol. 20, issue 2, 2011. [14] m. konnik and j. welsh, “high-level numerical simulations of noise in ccd and cmos photosensors: review and tutorial”, eprint arxiv:1412.4031, 2014. [15] k. weißensee, “beitrag zur automatisierbaren messunsicherheitsentwicklung in der präzisionskoordinatenmesstechnik mit bildsensoren”, phdthesis, ilmenau university of technology, 2011. [16] joint committee for guides in metrology, guide to the expression of uncertainty in measurement, 1. edition, 2008. https://hal.archives-ouvertes.fr/hal-00733538v4 microsoft word article 3 190-1060-1-pb.docx acta imeko may 2014, volume 3, number 1, 4 – 9 www.imeko.org acta imeko | www.imeko.org may 2014 | volume 3 | number 1 | 4 the role of measurement in the development of industrial automation s.s. carlisle united kingdom section: research paper keywords: industrial automation citation: s.s. carlisle, the role of measurement in the development of industrial automation, acta imeko, vol. 3, no. 1, article 3, may 2014, identifier: imeko-acta-03 (2014)-01-03 editor: luca mari, università carlo cattaneo received may 1st, 2014; in final form may 1st, 2014; published may 2014 copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited 1. introduction throughout the whole history of scientific endeavour the science and technology of measurement has been of paramount importance. our knowledge of physics, chemistry, the nature of matter and of human life itself has progressed according to our ability to measure and so to test our theories. 
furthermore, it is by measurement that we can communicate experience from one research centre to another; thus a facility is provided for research work to be continued by one worker following another. it also allows workers in different locations to coordinate their efforts in the advancement of scientific knowledge. more recently, measurement in industry has enabled manufacturing and processing skills to be more readily communicated from one worker to another; such communication means that advances in manufacturing practice can proceed and spread more quickly throughout an industry. when operator skill has established optimum conditions for working a process, the instrumentation of the process enables these best conditions to be easily transferred to other operators on similar processes without repeating the work. recent experience by a firm starting up a new oxygen-blown steelmaking process showed that comprehensive instrumentation and data logging on the plant greatly reduced the commissioning time needed to get the furnace up to its target production rate.

it is only by measurement in industry that we can really know the efficiency of utilisation of materials and of machines or processes. it is, thus, an essential basis upon which to advance our productivity. measurements are, of course, only of value if we use the results. in basic research we use them to confirm or improve the theory we are testing. in industry we use them, or should use them, to alter our production planning or our plant operating conditions to achieve a better result, either as regards productivity or product quality. of course, if we do not use our measurements in industry to test out and compare alternative, and possibly better, ways of doing things, we are wasting a valuable facility. measurement, combined with human intelligence in interpreting the result and a will to take the action prescribed, can hold a process at a constant best working condition in spite of inevitable disturbances in input conditions. when the interpretation of the measurement is simple and the control action easily defined, we have for many years now been able to dispense with human interpretation and apply automatic control, as in single-variable feedback control systems. nowadays, with the facility of analogue and digital computers, we can perform more complex interpretation or processing of the measurement information and determine the required control action automatically in circumstances where previously we had to employ human judgment and skill. by automation i mean the automatic determination of control action according to measurement information. it is only a matter of degree as to whether a digital computer is involved or not in the processing of the information. measurement science and technology is undoubtedly of importance on its own.
i believe that it is by its use in the implementation of automation that it becomes of such unusual significance to industry – and, indeed, to our whole way of life. it is becoming realised in every progressive country that the standard of living prevailing in a community is closely related to the productivity of that community. by productivity, i mean the quantity of a consumable or durable product (such as a can of beans or a motor car), or of a service, such as a telephone connection, that can be produced for a given input of all resources including fuel, raw material, human effort and capital investment. by mechanisation and the use of massive machines, like continuous strip rolling mills, we can produce enormous quantities of steel strip per human operator employed. of course in such applications there is a large resource input in the form of fuel and capital investment in machines which basically are produced by human effort. it is important to realise that when discussing productivity we are concerned with output as a function of total resource input, not just output per man on a machine. it is this ratio of output to total resource input which directly determines a national standard of living, and it is this ratio which automation can optimise. it is the job of automation to ensure that fuel is efficiently used in a process by applying automatic control to process variables. it is also its job to control raw material conditions and process conditions to ensure that there is a maximum yield of saleable product. furthermore, automation can be used to optimise production planning so that at any time all the components of a production facility are best used in meeting the requirements of the order book at that time; minimisation of inter-process stock is of course an important part of this exercise. the point i wish to emphasise is that automation is concerned with the optimisation of operation of a complete manufacturing complex and not just with providing automatic control of processes, nor just with minimising the labour employed on operating a particular plant. it therefore follows that the use of automation in industry is a major controlling factor on our standard of living.

2. impediment to progress in use of automation in industry
if one accepts the great significance of this technology to any progressive country, and i think the leaders of nations do, and one notes the outpourings of technological achievement on measurement, automatic process control and computer data processing which come from conferences like this and ifac and many others, one might well ask: "why don't we all get on more quickly in using this wonderful tool?" i believe that in general technologically advanced countries find themselves in the position that their technical competence in automation is taken advantage of by far too small a proportion of their total manufacturing industry. the shining examples of achievement in industries like steel, petro-chemicals, paper production, aircraft and traffic control get much discussed. but what about automation in food production, brick-making, pottery manufacture, timber products and in the multitude of light engineering industries? these are the industries which in nearly every country are the greatest employers of labour and usually the more backward technically. why are they not so eager to exploit automation? there are many reasons, of course. the more obvious ones, which get much discussed, arise from sociological and capital investment considerations.
a less obvious reason, but one which i think is far more important and much neglected, arises from the fact that automation, more than any other technology, requires a close coordination of effort by people of many different skills and disciplines if it is to be successfully employed. the control engineer cannot make any progress in selling his technology to the building brick manufacturer unless both parties take the trouble to learn something of each other's business. their disciplines, expertise and language are vastly different, and yet it is only by working together that success will be achieved. many of you will have had experience of instruments which you developed for use in an industry – and which you felt confident from trials were satisfactory technically – and yet they fell into disuse by the firm which originally requested them. the reason probably was that in your enthusiasm for technological development of the device you neglected to formulate and communicate to the management of the company a proper conception of the use of the device as a means of improving the productivity and profitability of its business. it is noteworthy that in industries such as steel and modern chemicals, where measurement and control engineers have been introduced alongside production managers, process technologists and plant designers, there have been the most significant developments in the use of automation. this suggests that the use of automation has to be stimulated from within an industry; it cannot be applied by edict or exhortation from outside. the basic problem is one of making those who know the process and product technology of their industry also conscious of the new technology of automation and the use that can be made of it. similarly, the control engineer must learn something of the business and processes of the user industry before he can offer his technology as a viable proposition. i believe that measurement and control engineers must take a share of the blame for too slow progress in the wider use of this technology in industry. research and development people concerned with measurement and control technology too often grasp problems which industry puts to them because the problem gives them an opportunity to extend their research and development expertise. it would be better if their primary motivation was to advance the interests of the user by solving his problem rather than to advance their own disciplines. in other words, we require more research and development people whose enthusiasm is to solve real industrial control problems rather than to develop solutions and then seek problems to justify the solutions.

3. the advancement of the use of automation in the uk
taking account of these problems which i have outlined, special facilities have been developed in the uk to promote the wider use of automation throughout our industry [1]. special attention is given to the less technically advanced industries and to the medium to small sized firms. i should like to describe these facilities and some examples of projects undertaken, because they demonstrate the role of measurement in promoting industrial automation. in the uk we have a network of 49 research associations, the majority of them serving process industries such as food, baking, textiles, paper manufacture, printing, furniture-making, steelmaking, etc. their job is to advance the technology of the industries they serve.
they provide centres of knowledge of their industries' process and product technology and its trends of development. the procedure now is to form within each of these research associations groups of measurement and control technologists who will develop, within the particular industries they serve, more technical awareness about control technology. a few of the research associations already have automation groups within their organisations; for example, the british iron and steel research association has a large operational research and control engineering department and has done much to advance automation in the steel industry. to aid the development of this procedure one of the research associations, the scientific instrument research association (sira), has been appointed as the focal point for development of this automation activity throughout the research associations. sira now has within its organisation a strong industrial measurement and control division which encompasses work on industrial instrument development, control engineering and operational research. we believe that there is special advantage in having systems engineering expertise in an organisation otherwise concentrated on measurement research and development. it is through industrial systems studies that the requirements for instrument development can be defined most reliably; i think it is wrong to isolate instrument research and development from systems and control engineering. the method of working which has been evolved is to form a joint team of control engineers from sira with experts from the other research association and arrange for this joint team to make a systems study in a factory of the particular industry. the job of the joint team is to study the whole manufacturing complex, the economics and technology of its operation, and out of this study to formulate proposals for the ways in which automation can be introduced so as to be of greatest advantage in improving productivity. arrangements can then be made to install measurement and control systems to demonstrate economic and technical feasibility. all this work is done in close collaboration with the staff of the particular firm chosen for the study. a vital feature in the carrying out of these studies is that control engineers, process technologists and works production staff are united in conducting the operation.

4. the role of measurement in advancing automation
the first objective in these production system studies is to acquire data on production performance, plant utilisation and process yields in order to ascertain what parts of the whole production process can benefit most from improved control. too often one is hampered in conducting these operation studies by the inadequacy of measurement of such things as machine operating time, inter-process stock levels, scrap losses at each process, etc. when such matters are studied, the opportunities to improve production practice by automation often become apparent and the economic justification can be assessed. for example, in an initial study we made of a biscuit production plant some 18 months ago, the following observations were made [2]. 1. one dough mixer was used to feed each biscuit production line, resulting in a system of 7 mixers feeding 7 lines. the mixers are batch operated and quite separable from the biscuit lines, which are continuous flow processes. it was observed that the mixer utilisation was only about 20–50% of their total capacity when keeping all biscuit lines busy.
it was thus apparent that by more precise scheduling and control of mixer operation – such as can be done by a computer (part-time use) – the number of mixers could be reduced from 7 to 4, with a potential saving in capital investment of about £ 60,000. 2. some defective biscuits are produced at each stage in the process, e.g. due to faulty dough mixing, malfunctioning of the dough sheeter or the cutters, and disturbances in oven operation. also, variation of biscuit dimensions causes packaging machine failures and consequent rejection of biscuits. the total production of defectives was estimated to be about 2.5% of production; a reduction of this by, say, 1% would be worth £ 15,000 per annum. 3. biscuit packages are sold to a specified weight and it is legally necessary that this weight be supplied as a minimum. thus, because of random variation in individual biscuit weights, the average package weight is greater than the weight specified. this excess is known as "give-away" and can be as high as 6–7% (6–10 drams per packet). it can be reduced by reducing the variance in biscuit weight, and it is calculated that for each dram reduction in give-away per packet there is a saving to the company of £ 75 per ton of biscuit production. in a typical plant a reduction in give-away of 1.5 drams per packet is worth £ 20,000 per annum to the company.

these observations stimulated the development of several in-process measuring devices and control systems. it was apparent that the manual method of sampling biscuits and measuring their weight and dimensions by traditional means was unreliable and tedious. furthermore, the making of these measurements was clearly necessary to improve supervision and to give a basis for automatic control of the biscuit machine. consequently a device was developed for automatically sampling the biscuits as they emerge from the baking oven on a belt [3]. figure 1 shows this sampler. it picks up a stream of biscuits from the moving belt and measures their thickness, weight, ovality and colour (brownness) and puts the information through a data logger on to punched tape, while returning the measured biscuits on to the belt in the same place from which they were removed but some distance downstream. some hundreds of thousands of biscuits have been sampled through this machine and distribution curves of weight and dimension variance obtained by computer analysis of the data logger output. the immediate value of this sampler is to show how well or badly a biscuit line is behaving and to enable the effect of changes in process operation to be measured. it will also show the improvement in thickness and weight variation which we expect to get from an automatic thickness control system now being installed on the dough sheeter. ultimately, it will be the master instrument for supervision of several automatic process control systems to be installed on the dough sheeter and on the baking oven.

in a similar way we have made a study of a ceramic tile-making plant which produces some 150,000 sq yds of glazed tiles per week. because of the variability of material and process parameters, there is considerable production of defective tiles at each stage of the process, which includes making green tiles from the clay mix, firing of the tiles, and glazing. the percentage of defectives at each stage varies from 5–7% and a rough estimate of the total cost of defectives is about £ 180,000 per annum, or about 7% of production costs.
even a reduction in defectives of only one-quarter of the present value would be worth about £ 45,000 per annum. assuming a 25% per annum return on capital invested is required, this can justify a capital cost for measurement and control apparatus of £ 180,000. work is now being done to reduce defective production by better control of clay composition, by control of the presses and kilns, and by in-process measurement of the tile dimensions, shape and surface quality. a system for automatic control of the moisture content in clay dust is being developed and it is estimated that this will reduce the cracking of the tiles during firing and bring about a saving of £ 11,000 per annum. so we see that a first requirement for measurement is to ascertain what a production system is doing and so to identify where automation can be most profitably used.

the next function of industrial instrumentation is to investigate individual process behaviour, determine the most significant variables and hence formulate a process control strategy. this is the function which has had most attention from control engineers and leads to the sort of process instrumentation and control familiar to us all. the thickness measurement and control of the biscuit dough sheet i referred to earlier is a typical example in this category; it is surprisingly similar to the problem of automatic thickness control in steel rolling mills. i would like to mention a few more unusual examples in this category, each of which i think has a message to give. in the manufacture of fan blades for motor car engines the blades are spot welded on to a hub flange, and it is common practice to do destructive examination of the welds of a sample of fans taken from the production line. this is wasteful and does not ensure that no defectives get through. in searching for a non-destructive test the british welding research association made a detailed study of the resistance spot welding process and found that while the weld is being made the electrodes are forced apart about ten thousandths of an inch and then return as the molten nugget of metal is formed between the parts being welded [4]. figure 2 shows curves of separation of the electrodes for a splash weld with excessive heating (h), a good weld (n), and a weak weld with too little heating (l). it was observed that this curve had a particular form when a good weld was made. thus, it became possible to inspect a weld while it was being made by measuring the time taken for a particular magnitude of electrode separation to occur. this principle is now employed in a resistance weld inspection device which has been developed as an attachment to a welding machine. work is now proceeding to use the principle as a basis for automatic control of the resistance welding process.

consider another example. in the manufacture of clay bricks the brick is formed by extruding clay into the required section and then cutting it up into bricks ready for firing. it is important that the moisture content of the clay be consistent so as to obtain a uniform contraction of the brick during firing and so reduce the incidence of cracking and deformation. for some time a method of measuring the moisture content of the clay had been sought, but no really satisfactory solution found. some studies of the extrusion process were made by the ceramics research association and these showed that a measurement of back pressure behind the nozzle of the extruder is a very good and consistent measure of the moisture content of the clay.
pressure is easy to measure with rugged apparatus suitable for a brick works, and this has now been developed as a system for automatic control of the moisture content in the clay mix. this is a much simpler solution than trying to measure true moisture content. these two simple examples illustrate the need for study of the technology of an industrial process to precede development of process instrumentation and control. they emphasise the responsibility that rests on the control engineer to familiarise himself with the technicalities and mode of operation of a manufacturing process before he applies his specialist technology. failure to recognise these requirements can result in valuable effort being applied to development of the wrong instrument for control of a process.

figure 1. automatic sampling machine installed on a biscuit line.
figure 2. principle of operation of weld monitor.

a third function of industrial instrumentation, which is of rapidly growing importance, is measurement or inspection of a product, either during the production process or on completion. the need for this can arise for a number of reasons: 1. to remove a limit imposed on production by shortage of labour for product inspection. 2. to provide faster detection of defective product made, and so facilitate quicker correction of a faulty process and provide a basis for automatic process control. 3. to detect and remove from a sequence of processes a partially completed product which is defective and which may waste or damage the succeeding process if it is allowed to proceed. 4. to improve quality control of products.

very often the inspection of components or products in a factory is one of the most labour intensive functions and is often a tedious and onerous task. consider for example the manufacture of metal coins, which in most countries are high precision and high quality objects. it is common practice to inspect visually the blanks coming from the stamping machine to prevent defective blanks being fed to the coining presses, where they may cause damage to expensive dies. a coinage plant may produce 20 million coins per week and one will usually find a team of 20 or more inspectors looking at blanks on a moving belt to detect any defectives. this task of visual inspection becomes increasingly expensive as labour pay rates rise and recruitment of suitable labour becomes increasingly difficult. it is an ideal task for an automatic inspection instrument, and recently sira has developed a photoelectric device which will inspect the blanks at a speed of up to 40 per second and reject those which have a damaged edge or are out of tolerance on size. figure 3 shows the instrument. the coin blank is passed over an illuminated aperture behind which is an optical system which casts an image of the blank on a photocell. this photocell has a cathode consisting of four quadrants. if the blank is truly symmetrical, with no edge damage, there will be, during the transit of the image over the four photocell quadrants, some point at which there is a balance of photocurrents in opposing quadrants. this balance point is detected electronically. if no balance is found, due to a damaged edge, the coin is rejected. this principle may be applied to check the dimensions and shape of any two-dimensional symmetrical object. another interesting in-process inspection problem arises in the manufacture of glazed ceramic tiles: after firing, the tiles are passed to the glazing plant.
some of the defects on the glazed tiles can be attributed to surface defects on the uncoated tile, and if these could be detected and rejected by a machine before glazing there could be a significant saving in production costs by avoiding the glazing of defective material. an apparatus is under development for this purpose; it is also aimed at replacing the large labour force required to inspect the finished tiles. numerous new requirements for automatic inspection are introduced as a result of progress in the automation of assembly, packaging and filling operations. for example, the introduction of automatic filling machinery in the pharmaceutical industry to put drugs into vials calls for much closer tolerances to be maintained on the vial dimensions, and so the need arises for automatic dimension checking of the empty vials. furthermore, when the filled vials are packed by machine into cartons these need to be inspected automatically for misfills or particulate matter in the filling. such inspection is done by the human packer as a simultaneous task, but when he or she is replaced by the packaging machine, automatic inspection becomes necessary. this situation arises in many different industries. while packing wrapped confectionery products into cartons the human packer simultaneously checks the quality of the product wrap; when a machine does this packing, a device for checking the wrap must be introduced. automatic assembly of a wide range of engineering products like watches, gear boxes, electricity meters, electric motors, cycle wheels, etc. is being introduced [5]. this is going to impose much stricter discipline on component dimensions and quality and will make objective inspection of components essential. in the simple case of introducing automatic bottle capping machines, it becomes necessary to inspect and control the dimensions of the bottle necks and tops much more precisely than is required with manual capping. an automatic assembly machine putting in screws must be protected from being fed with screws without slots in the head. automatic inspection of such things as the surface of tinplate or metal foil or paper presents interesting problems, because the decision of acceptance or rejection depends on a judgment which has to take into account the type, size and distribution of defects. detection of the defects is not usually difficult with available technology, even at the high production speeds now common with these materials; the real problems lie in processing the signals from the defect detectors so as to make decisions automatically on suitability or otherwise for the particular use for which the product is required. for this problem we hope to get help from those working on pattern recognition techniques. another problem of product inspection which i think is of great potential interest is in checking items produced in large quantities such as postage stamps, currency notes or similar complex patterns in multicolour print. i doubt if this problem is likely to be solved by traditional scanning methods, in which an enormous amount of information is generated in the scan and then compared with a reference standard; it is more likely to be an area of opportunity for techniques using image transformation, such as holography.

figure 3. apparatus for detecting defective coin blanks.
5. the changing approach to industrial instrumentation
measurement technology for use in industry can no longer be treated as a specialisation separable from the situation in which it is utilised. it has become a vital and interwoven constituent in the overall technology of operation and control of an industrial manufacturing process. this has considerable repercussions on the activities of an organisation engaged in instrument development and manufacture and on the training of the instrument technologist. the situation one sees developing is this. the manufacturing and processing industries, as users of instruments, will employ increasing numbers of control engineers. these engineers will concern themselves with improving the control of operation of the processes in their factory so as to improve the efficiency of operation of the production system as a whole. they will also influence the design and specification of the plant installed, to make it inherently more controllable. these control engineers will thus work closely with process technologists, production engineers and plant designers. we also see evidence now of management structures having to be changed in order to permit the fullest utilisation of computers for improved production organisation. it is to the control engineer that the instrument developer will look for advice on the potential usefulness of a new device. i believe the control engineer will act as a vital link between the instrument maker and the industrial user. it is due to the too frequent absence of this link that the enthusiastic instrument developer has often been misguided in the past, with consequent misplacement of his inventive abilities and the failure of instrument designs to suit the conditions of use. at present very few user industries employ control engineers, and this has made it necessary for instrument manufacturers to extend their activities into control engineering and systems design. to do this they have had to learn much of the business and technology of their customers' industries so that they can offer viable control systems. they have managed to do this for large customer industries, such as steel, chemicals, paper and the like, but it is costly for them and particularly difficult to finance, as the profit margins on instrument manufacture are modest. the situation is even more difficult when related to the diverse range of smaller unit industries which may in total offer a large potential market but individually offer only small markets for instruments. for these industries there is a need for cooperative work on the development of control technology specific to them. thus, one expects the setting up of automation research and development organisations oriented to the interests of specific potential user industries such as textiles, ceramics, plastics, timber, light engineering, etc. within these organisations the special instrument development required to implement control systems will be carried out as part of an activity which encompasses systems engineering and process control engineering. as time progresses these organisations will infuse into their industries control engineers and a philosophy of the significance of control. when that is achieved, the job of the industrial instrument manufacturers will be greatly eased.
they will be able to devote their research and development resources to the development of the next generation of instruments, rather than having to devote them to educating their potential customers in the significance of control and to solving their application engineering problems for them.

references
[1] s.s. carlisle, getting automation applied, instrument practice, july 1965.
[2] r.g. parish, p. wade, d.a. watkin, application of computers and automation techniques to biscuit manufacture, sira confidential research report r339, 1965.
[3] p. wade, d.a. watkin, development of some instrumentation for monitoring the production of semi-sweet biscuits, food trade review, april 1966, p. 48.
[4] p.m. knowlson, instrumentation for resistance welding, british welding journal supplement, april 1965, p. 183.
[5] institution of production engineers journal, volume 43, no. 3, march 1967.

bias-induced impedance effect of the current-carrying conductors
acta imeko, issn: 2221-870x, june 2021, volume 10, number 2, 88 – 97
sioma baltianski 1
1 wolfson dept. of chemical engineering, technion – israel institute of technology, haifa, israel
section: research paper
keywords: bias-induced impedance; impedance spectroscopy; zbi-effect
citation: sioma baltianski, bias-induced impedance effect of the current-carrying conductors, acta imeko, vol. 10, no. 2, article 13, june 2021, identifier: imeko-acta-10 (2021)-02-13
section editor: giuseppe caravello, università degli studi di palermo, italy
received january 18, 2021; in final form april 15, 2021; published june 2021
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: sioma baltianski, e-mail: cesema@technion.ac.il

abstract: the paper presents previously unstudied properties of current-carrying conductors, investigated utilising impedance spectroscopy. the purpose of the article is to present discovered properties that are significant in the context of impedance research. the methodology is based on the superposition of test signals and bias affecting the objects under study. these are the main results obtained in this work: the studied objects have an additional low-frequency impedance during the passage of an electric current; the bias-induced impedance effect (zbi-effect) is noticeably manifested in the range of 0.01 hz … 100 hz and it has either a capacitive or an inductive nature, or both types, depending on the bias level (current density) and the material type. the experiments in this work were done using open and covered wires made of pure metals, alloys, and non-metal conductors such as graphite rods. these objects showed the zbi-effect, which distinguishes them from other objects, such as standard resistors of the same rating, in which this phenomenon does not occur. the zbi-effect was modelled by equivalent circuits. particular attention is paid to assessing the consistency of the experimental data. understanding the nature of this effect can give impetus to the development of a new type of instrument in various fields.

1. introduction
the study of various physical objects using impedance spectroscopy under applied dc bias is widespread. these objects may be of different physical nature: semiconductor structures [1], electroceramic structures [2], [3], electrochemical objects [4], etc. in all cases, the external offset sets the operating point in the vicinity of which the impedance measurements are made. the set offset (bias) makes it possible to tie the parameters obtained from impedance measurements to the physical state of the object under study. the idea to investigate current-carrying conductors under the influence of bias arose from the initially puzzling behaviour of a fairly simple object used as a load in the process of testing different impedance meters. the first research results were published in [5]. possible measurement errors were checked in various ways, and no literature sources describing the detected effects were found. this work relates to impedance spectroscopy for several reasons. on the one hand, based on the described phenomena, sensitive elements can be created that require the use of impedance spectroscopy as a method for extracting informative parameters. on the other hand, it is a challenge to build more sensitive impedance meters (with an appropriate offset function) that allow obtaining reliable data under relatively difficult measurement conditions: low frequency and a high tangent of the loss angle.
the goal of this work is to reveal the discovered properties, which are important in the context of impedance research. low-frequency impedance spectroscopy was used as the method. the complex conductivity function and its determination by approximating experimental data utilising different models serve as the theoretical basis [6]-[8]. the impedance of current-carrying conductors is well known and described in the literature [9]. as an example, we give the behaviour of the impedance of a silver conductor 0.5 m long and 0.25 mm in diameter. the initial experiment in figure 1 without bias (at vdc = 0 v) represents a typical behaviour of the real and imaginary parts of the impedance in the frequency range from 1 mhz down to 0.1 hz. first, we use the full frequency range to verify the processes; later, we will be interested in events only in the low frequency part of the spectrum. the initial and all subsequent potentiostatic experiments were carried out at a test signal amplitude of vac = 10 mv; this signal corresponds to a small signal approach. the view of the graphed results is quite trivial at zero bias. three approximate frequency domains can be distinguished in this graph: the high frequency region (hf) of the spectrum, f = 1 mhz … 100 khz; the mid frequency region (mf), f = 100 khz … 100 hz; and the low frequency region (lf), f = 100 hz … 0.1 hz. in the hf region the real part re(z) and the imaginary part im(z) of the impedance increase with increasing frequency; this part is well described by a parallel connection of resistance rp and inductance lp (figure 1). in the mf region, a constant value of re(z) is observed, and a series resistance rs must be added to the model. a linear decrease in the imaginary component occurs with decreasing frequency (log-log scale), while the relative noise level increases; this noise is natural and associated with the capabilities of the measuring system. the lf part of the spectrum at vdc = 0 v demonstrates the constancy of re(z) and strong noise in im(z); this area is not informative for interpretation using the imaginary part of the impedance.
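a small numeric sketch of this lumped hf/mf model (rs in series with a parallel rp–lp branch) may help; the rp and lp values below are illustrative placeholders, not fitted data:

```python
import numpy as np

def wire_impedance(f, rs=0.23, rp=2.0, lp=0.6e-6):
    """z(f) = rs + (rp parallel to jw*lp): flat re(z) at mf, rising towards hf."""
    jw = 1j * 2.0 * np.pi * np.asarray(f, dtype=float)
    return rs + (jw * lp * rp) / (rp + jw * lp)

f = np.logspace(-1, 6, 200)    # 0.1 hz ... 1 mhz
z = wire_impedance(f)
# at low frequency the inductive branch vanishes and z reduces to rs
```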
measurements were made using a faraday cage to improve the signal-to-noise ratio, mainly for the imaginary impedance component. in this simple model, the specific resistance of the conductor determines the series resistance rs; the length of the conductor mainly determines the inductance lp; and the parallel resistance rp connected to the inductance characterises the active loss in the conductor due to the skin effect at high frequencies. the experimental values at zero bias correspond to the expected values and are quite common. the situation changes significantly when measurements are carried out under bias. the experimental characteristics are shown in the same figure 1 at biases vdc = 0.09 v … 0.9 v. the increment of bias was 0.09 v, and the measuring test signal was the same, namely vac = 10 mv. the hf and mf imaginary parts of the impedance do not change with bias; however, the lf part changes considerably. this response can be reflected by including an additional non-linear impedance zbi (figure 1). a corresponding increase of the real part of the impedance occurs in the considered region, which meets the kramers–kronig relationships [6]. besides, we observe a monotonic change in the real component of the impedance over the entire frequency range (the model element rs); this change is caused by a shift in the temperature of the conductor due to the bias. the model indicated in figure 1 is intuitive, but it describes and fits the object under study well in the specified frequency range. the bias phenomenon is sometimes difficult to detect because the experimental data are often at the limit of the sensitivity of the measuring instruments, namely, limitations on phase measurement at a high value of the loss tangent. a potentiostat/galvanostat biologic sp-240 (biologic science instruments) was used as the measuring instrument; in doubtful cases, data were verified using a potentiostat/galvanostat gamry reference 3000 (gamry instruments). biologic's accuracy contour plot specifies an error of not more than 0.3 % and 0.3° in the desired measuring range [10]; thus, the experimental data in figure 1 are reliable. moreover, in this case we are interested in relative changes in the impedance components. a homemade four-wire sample holder was used to connect the samples under test (sut), as shown in figure 2; the electrode designation is taken from the manual [10]. this sample holder gives accurate measurement of low-resistance objects as well as a negligible influence of contact phenomena. in addition to the devices used in this work (biologic and gamry), which are based on a frequency response analyser, devices of the lock-in amplifier type can also be used (see for example [11]). we used standard non-wire-wound resistors as references to verify the measuring results and estimate artifacts; this phenomenon was not observed when dealing with standard resistors of the same rating as the sut. the impedance change in the lf region can be caused not only by bias using direct current but also by alternating current, i.e. by a large-amplitude test signal at zero bias. it should be emphasised that in this work we use a small signal approach, in which a change in the lf impedance is not observed at zero bias. thus, the occurrence of the additional impedance in the lf region is determined solely by the level of bias. this physical phenomenon is named here the bias-induced impedance (zbi) effect. in section 2 we will systematise the experimental results.
section 3 is devoted to the interpretation of the experimental results using electrical models. section 4 discusses significant differences in the impedance behaviour of open and covered objects. special attention is paid to checking the consistency of the experimental data: this is outlined in section 5. section 6 discusses and proves the main hypotheses that explain the revealed effect. finally, the main results are presented in the concluding section.

figure 1. re(z) and im(z) of the silver wire at vdc = 0 v … 0.9 v with bias step 0.09 v; length 500 mm and diameter 0.25 mm; frequency range 1 mhz … 0.1 hz.
figure 2. four-wire sample holder. 1 – high force-and-sense kelvin clip; 2 – sample under test (wire); 3 – working electrode; 4 – counter electrode; 5 – low force-and-sense kelvin clip; 6 – reference electrode; 7 – working sense electrode.

2. systematisation of the experimental results
we studied pure metals: nickel, copper, silver, tungsten, platinum, gold; alloys: constantan, nichrome, manganin; and non-metals: graphite rods. although the frequency scan started from 1 mhz toward low frequencies, the analysis of the results was carried out only for the lf part of the spectrum, where the zbi-effect manifested itself. according to the type of zbi-effect, all studied materials were grouped into three categories: (i) the zbi-effect has a capacitive nature, (ii) an inductive nature, and (iii) mixed, when both types of reactance occur. table 1 summarises the properties of the investigated materials. below are the experimental characteristics of one representative of each group.

2.1. pure metals
we find significant changes in the behaviour of the imaginary part of the impedance beyond a critical frequency of about 30 hz (resonance point) when applying bias (figure 1); from this point towards lf, the inductive nature of the reactance sharply changes to a capacitive nature. we observe a monotonic change in the imaginary and real components of the impedance depending on the applied bias. changes in the real part of the impedance in the mid frequency region also occur; however, this is due to a change in the temperature of the conductor upon bias. for example, with a maximum bias of 0.9 v for this experiment and a conductor resistance of about 0.23 ω, the current flowing through the conductor will be approximately 3.9 a; the power dissipation will be approximately 3.5 w, which will lead to a certain heating of the conductor and a consequent increase in its resistance. figure 3 shows a nyquist plot of the lf part of the same experiment shown in figure 1. at mid and high frequencies, there is no change in the behaviour of the imaginary component of the impedance under the influence of bias. henceforward, we will limit visualisation to the lf part of the experimental data in the form of nyquist plots. similar in appearance, but numerically different, characteristics were obtained in studies of the other pure metals: nickel, copper, tungsten, platinum, and gold.

2.2. alloys
the impedance characteristics of alloys as a function of bias differ from those of pure metals. manganin demonstrated an inductive nature of the reactance at moderate bias.
in our nichrome and constantan samples, the zbi-effect had both capacitive and inductive reactance; the nature of the reactance depends on the level of bias. as an example, studies of a nichrome sample with a diameter of 0.1 mm and a length of 57 mm are presented in figure 4. the experiment was carried out using a bias in the range of 2.8 v … 8.5 v in increments of 0.1 v. the data were taken in the frequency spectrum 1 mhz … 0.1 hz, but only the range of interest is presented here: 100 hz … 0.1 hz. three areas of bias were identified. at biases of 2.8 v … 4.6 v, a capacitive nature of the reactance was observed. in the range of 4.6 v … 6.7 v, an increasing portion of inductive reactance added to the decreasing portion of capacitive reactance; with a subsequent increase in bias, the reverse process occurs. in the bias range of 6.7 v … 8.5 v, the capacitive nature of the reactance was again observed, the same as with a small bias. figure 4 shows a transient state in which both types of reactance are present.

figure 3. nyquist plot of silver wire at vdc = 0 v … 0.9 v with bias step 0.09 v; length 500 mm and diameter 0.25 mm.
figure 4. nyquist plot of nichrome wire at vdc = 4.6 v … 6.7 v with bias step 0.1 v; length 57 mm and diameter 0.1 mm.

table 1. systematisation of the investigated materials by the nature of the zbi-effect.
type of conductors: pure metals | alloys | non-metals (graphite)
nature of zbi-effect: capacitive | mixed: capacitive and inductive | inductive

2.3. non-metals
measurements were carried out on graphite rods; samples of various diameters were investigated. figure 5 shows nyquist plots of the impedance of a graphite rod 0.5 mm in diameter and 57 mm in length. an inductive nature of the reactance was demonstrated over the entire range of biases.

3. interpretation using electrical models
first, we consider a simple case of interpreting experimental data related to pure metals, in which a zbi-effect of capacitive nature is manifested. as an example, figure 6 presents a fitting result for the lf part of one of the experiments shown in figure 3, specifically at bias vdc = 0.72 v. the fitting was carried out using an impedance model which consists of a series resistor rs connected to a parallel c1-r1 circuit. the resistor rs reflects the specific resistance of the sample under test and its geometry; this resistance varies with the applied bias, which affects the temperature of the sample (see the right shift of the characteristics in figure 3 with increasing bias). the parallel circuit c1-r1 exactly describes the zbi-effect, and figure 6 shows a good fitting quality. a similar approach can be used for materials in which the zbi-effect is purely inductive (figure 5) by using an lr circuit. the situation becomes more complicated in the case of a complex zbi-effect (figure 4). one of the possible electrical models that satisfactorily approximates the experimental data is embedded in figure 7. a system function in the form of a rational fraction [12] that corresponds to this model is:

$$ Z(s) = \frac{A_0 + A_1 s + A_2 s^2}{1 + B_1 s + B_2 s^2} \,, \quad (1) $$

where $s = j 2 \pi f$ and $A_i$, $B_i$ are unknown coefficients.
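a minimal sketch of evaluating (1) numerically, assuming illustrative coefficients (in practice the $A_i$, $B_i$ come from fitting the measured spectra):

```python
import numpy as np

def system_function(f, a, b):
    """z(s) = (a0 + a1*s + a2*s**2) / (1 + b1*s + b2*s**2), with s = j*2*pi*f."""
    s = 1j * 2.0 * np.pi * np.asarray(f, dtype=float)
    return (a[0] + a[1] * s + a[2] * s**2) / (1.0 + b[0] * s + b[1] * s**2)

# illustrative coefficients only; real values would be obtained by fitting
z = system_function(np.logspace(-1, 2, 100), a=(8.9, 3.0, 0.4), b=(0.3, 0.04))
```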
although the system function uniquely approximates the experimental data, its coefficients are difficult to fill with physical meaning. it is easier to do this using circuit functions, which reflect the topology of the corresponding equivalent circuits [12]. the circuit function corresponding to the model in figure 7 is described by the following equation:

$$ Z(s) = R_s + \frac{R_1}{1 + \tau_C s} + \frac{L_1 s}{1 + \tau_L s} \,, \quad (2) $$

where $\tau_C = R_1 C_1$, $\tau_L = L_1 / R_2$, and rs, r1, r2, c1, l1 are the parameters requested from the fitting. the system function (1) covers several equivalent circuits; figure 7 represents one of the possible implementations. the results of fitting utilising this circuit for one of the characteristics represented in figure 4, specifically at bias vdc = 5.3 v, are given in figure 7. the selection of a suitable electrical model can be made empirically by iterating through the available set of models determined by the system function (1). to implement this process, experimental data can be approximated using an acceptable set of equivalent circuits utilising available fitting programs, such as levm [6], or the method described in [13]. figure 8 represents the dependencies of the model parameters versus bias of the silver wire, corresponding to the data shown in figure 3.

figure 5. nyquist plot of graphite rod at vdc = 0 v … 1 v with bias step 0.1 v; length 57 mm and diameter 0.5 mm.
figure 6. fitting result of the lf part of the data (f = 10 hz … 0.1 hz). silver wire at vdc = 0.72 v: rs = 0.214 ω; r1 = 0.07 ω; c1 = 21.34 f.
figure 7. fitting result of the lf part of the data (f = 10 hz … 0.1 hz). nichrome wire at vdc = 5.3 v: rs = 8.261 ω; r1 = 0.622 ω; c1 = 0.373 f; r2 = 1.018 ω; l1 = 1.091 h.

the capacitance c1 increases exponentially with decreasing bias. this leads to a decrease in the contribution of the reactive component to the zbi-effect. at the same time, a monotonic decrease in resistance r1 is observed. as a result, at zero bias the zbi-effect has a vanishingly small magnitude. the resistance rs reflects a change in resistivity as a function of temperature, which in turn depends on the flowing bias current. the temperature (obtained via the resistivity) and the power dissipation for this experiment are shown in figure 9. similar results, differing in values, were obtained for the other pure metals. the graphite rod model behaves quite differently, as shown in figure 10. first, the series resistance rs decreases with bias due to a negative temperature coefficient (ntc); this distinguishes it from metals, which have a positive temperature coefficient (ptc), see figure 8. secondly, the inductance l1, together with the parallel resistance r1, decreases with decreasing bias; this nullifies the zbi-effect at zero bias. the behaviour of the model parameters for alloys is more complex and is beyond the scope of this article.

4. the difference between open and covered objects
some of the previously investigated objects were studied using various types of cover. this is necessary to validate the hypotheses put forward to explain the occurrence of the effect, as described in section 6. the results of the study of current-carrying conductors covered with dielectric materials are quite informative for this purpose.
4. the difference between open and covered objects
some of the previously investigated objects were also studied under various types of covering. this is necessary to validate the hypotheses put forward in section 6 to explain the occurrence of the effect. the results obtained on current-carrying conductors covered with dielectric materials are quite informative for this purpose. here, we present results employing a shell in the form of thin teflon and ceramic (alumina) tubes fitting tightly around the object under study. to detect the effect of the covering on the impedance results, it is necessary to extend the frequency range towards lower frequencies, down to 1 mHz; this requirement significantly increased the duration of each experiment. figure 11 shows the real and imaginary parts of the impedance without covering. we used the galvanostatic mode of the measurement device (biologic sp-240) and the same sample holder as before. the full range of bias current was idc = 0 A … 4 A in steps of 0.4 A, and the test signal was iac = 100 mA (satisfying the small-signal approach).

figure 11. re(z) and im(z) of the open silver wire; galvanostatic mode at amplitude iac = 100 mA and bias idc = 0 A … 4 A with step 0.4 A; length 500 mm and diameter 0.25 mm; frequency range 1 MHz … 1 mHz.

figure 12 and figure 13 show the results for the silver wire inside the alumina and teflon tubes, respectively. a comparison of these experiments in the form of a nyquist plot at one of the biases (idc = 2 A) is shown in figure 14. it is apparent even to the naked eye that the open wire has one time constant while the covered conductors have two; the teflon covering also shows more overlapping and more distributed impedance spectra. the fitting of the low-frequency part of the data (100 Hz … 1 mHz) using equivalent rc circuits is also presented in figure 14, and the fitting results are summarised in table 2. the indexes in the equivalent circuits in the tables have the following meaning: p for parallel connection, s for serial connection. evaluating the fitting results, we can say that these objects are quite satisfactorily approximated by models using lumped elements. a better result could be obtained for the conductor surrounded by teflon using a gaussian distribution function convolved into the impedance [14], but for demonstrating the influence of the covering on the zbi-effect this is not essential. a simple calculation of the ratios of the time constants (τ = rc) taken from table 2 gives τ2/τ1 = 247 for the alumina covering and τ2/τ1 = 40 for the teflon covering. as an example of an object exhibiting an inductive zbi-effect in both the free and the covered state, figure 15 shows the nyquist plots of a graphite rod at bias idc = 0.9 A and test signal iac = 100 mA in the open state and covered with alumina and teflon tubes. it is important to emphasise that the imaginary part has the opposite sign compared to the previous plots for the silver wire. the fitting results are summarised in table 3. the ratio of the two time constants (τ = l/r) for the graphite rod covered by alumina is τ2/τ1 = 284.
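the quoted ratios follow directly from the fit parameters; a two-line check using the values of table 2 and table 3 (the rounding of the published parameters explains the small deviations):

```python
# time-constant ratios from the fit parameters (tau = r*c and tau = l/r)
r1p, c1p, r2p, c2p = 6.0e-3, 42.3, 0.025, 2.48e3   # silver wire in alumina, table 2
print((r2p * c2p) / (r1p * c1p))                   # ~244, quoted as 247

r1p, l1p, r2p, l2p = 0.01, 1.33e-3, 0.121, 4.572   # graphite rod in alumina, table 3
print((l2p / r2p) / (l1p / r1p))                   # ~284
```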
in the case of the teflon covering the ratio is τ2/τ1 = 22. the difference in the ratios of the time constants is similar to that found earlier in the experiments with silver wire covered with the same materials.

figure 12. re(z) and im(z) of the silver wire inside the alumina tube; same experimental conditions as in figure 11.

figure 13. re(z) and im(z) of the silver wire inside the teflon tube; same experimental conditions as in figure 11.

figure 14. nyquist plot of experimental and fitted data of the non-covered silver wire, the wire inside alumina, and the wire inside teflon; low-frequency part of the data, 100 Hz … 1 mHz, at bias idc = 2 A.

table 2. fitting results of the non-covered silver wire, the wire inside alumina, and the wire inside teflon (according to figure 14); r in Ω, c in F.
- open wire: (r1p + c1p)s + rs; r1p = 0.032; c1p = 64.27; rs = 0.185
- wire inside alumina: (r1p + c1p)s + (r2p + c2p)s + rs; r1p = 6.0e-3; c1p = 42.3; r2p = 0.025; c2p = 2.48e3; rs = 0.214
- wire inside teflon: (r1p + c1p)s + (r2p + c2p)s + rs; r1p = 6.2e-3; c1p = 91.8; r2p = 0.013; c2p = 1.86e3; rs = 0.179

table 3. fitting results of the non-covered graphite rod, the rod inside alumina, and the rod inside teflon (according to figure 15); r in Ω, l in H.
- open rod: (r1p + l1p)s + rs; r1p = 0.160; l1p = 0.343; rs = 1.09
- rod inside alumina: (r1p + l1p)s + (r2p + l2p)s + rs; r1p = 0.01; l1p = 1.33e-3; r2p = 0.121; l2p = 4.572; rs = 1.128
- rod inside teflon: (r1p + l1p)s + (r2p + l2p)s + rs; r1p = 0.041; l1p = 0.027; r2p = 0.095; l2p = 1.38; rs = 1.041

5. check of the data consistency
current-voltage characteristics were acquired on the same samples to check the data set for internal consistency. a sweep rate of 1 mV/s, commensurate with our lowest frequency of 0.1 Hz (in the first studies), was selected; this speed yields quasi-static characteristics. static and differential parameters, namely resistances, were calculated and compared with the parameters obtained from the impedance measurements. the i-v characteristics of a silver sample are shown in figure 16; the parameters of the sample under test correspond to those indicated in figure 3, and the setup for the i-v measurements was identical to the setup for the impedance measurements. figure 16 represents the i-v curve, the static resistance $R_{stat} = V_{dc}/I_{dc}$ and the differential resistance $R_{diff} = \mathrm{d}V_{dc}/\mathrm{d}I_{dc}$. parabolic spline interpolation was used for the analytical differentiation. a fairly good accordance was obtained between the model parameters extracted from the impedance measurements and the parameters calculated from the current-voltage characteristics. the parameter rs extracted from the impedance and the rstat extracted from the i-v curve agree within an error of not more than 0.3 %. the total resistance rsum = rs + r1 found from the impedance measurements corresponds to the resistance rdiff calculated from the i-v curve (figure 16). as an example, the bias point vdc = 0.72 V is marked in figure 16 to indicate these correlations; it corresponds to the same bias point in figure 6, figure 8 and figure 9. the dependence of the power dissipation $P_i + \Delta P$ on the bias and the test signal at an operating point $i$ has the form

$$P_i + \Delta P = (V_i + \Delta V)(I_i + \Delta I), \quad (3)$$

therefore the change in power due to the test signal alone is

$$\Delta P = V_i \Delta I + I_i \Delta V + \Delta V \Delta I, \quad (4)$$

where $V_i$, $I_i$ are the voltage and current at the working point and $\Delta V$, $\Delta I$ are the amplitudes of the voltage and current of the test signal.
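the static and differential resistances used in this check are easy to reproduce numerically. the following sketch is our own, with stand-in i-v data and scipy's cubic spline used in place of the parabolic spline mentioned above:

```python
# static and differential resistance from sampled i-v data
import numpy as np
from scipy.interpolate import CubicSpline

v = np.linspace(0.05, 1.0, 40)        # stand-in voltage sweep / v
i = 4.0 * v / (1 + 0.8 * v)           # stand-in, mildly nonlinear i-v curve / a

r_stat = v / i                        # rstat = vdc / idc
spl = CubicSpline(v, i)
r_diff = 1.0 / spl(v, 1)              # rdiff = dv/di = 1 / (di/dv)

print(r_stat[-1], r_diff[-1])         # rdiff > rstat for this saturating curve
```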
from (4) it can be seen that with increasing bias the dissipated power caused by the test signal increases. hence the temperature variation increases, which leads to a larger change of the resistivity under the influence of the test signal. this explains the magnification of the zbi-effect with increasing bias observed in all the experiments. the most mysterious case is the presence of both types of reactance in the experiments with alloys (figure 4 and figure 7); it required an additional consistency check by an independent method. for this purpose, current-voltage characteristics with different sweep rates were used, the sweep rates being chosen to match the transition point around 0.5 Hz (figure 7). the corresponding i-v characteristics are shown in figure 17; the setup was the same as for the impedance measurements. figure 18 shows the static resistances rstat calculated from the i-v characteristics with different sweep rates (figure 17). it can be seen that, in the vicinity of the 5.3 V bias voltage, the static resistance changes its trend: it decreases with increasing bias at the 0.1 mV/s sweep but increases with increasing bias at the 100 mV/s sweep. the sign of the differential resistance, which is the essence of the impedance measurement, changes accordingly. thus, there is consistency between the measurements in the frequency domain (figure 7) and in the time domain (figure 18). several temperature studies were also carried out to check the data consistency. a platinum conductor was heated by a separate heating element at various biases. figure 19 shows the results for the lf part of the data, 100 Hz … 0.1 Hz, at different settings of the temperature controller: room temperature, 165 °C and 265 °C. the actual temperature of the wire can be calculated from its resistivity, via the real part of the impedance at 100 Hz or by a dc measurement; for the qualitative analysis, however, knowing the actual temperature of the wire is not essential. the main results of this experiment are as follows: the zbi-effect does not occur in the absence of bias at any temperature, while it takes place in the presence of a bias at any temperature.

figure 15. nyquist plot of experimental and fitted data of the non-covered graphite rod, the rod inside alumina, and the rod inside teflon; low-frequency part of the data, 100 Hz … 1 mHz, at bias idc = 0.9 A.

figure 16. i-v curve and static and differential resistance curves of the silver wire with the same dimensions as in figure 3.

figure 17. i-v curves with voltage sweeps of 0.1, 1, 10 and 100 mV/s; nichrome wire with the same dimensions as in figure 4.
it is also noticeable that the external temperature additively shifts the real component of the impedance while leaving the imaginary one almost unaffected. a good illustration in this graph is the case in which, at room temperature and bias idc = 2 A, the actual conductor temperature is practically identical to the case in which the temperature controller reads 265 °C with no bias applied to the conductor (idc = 0 A).

6. discussions
the impedance of quite ordinary current conductors under bias in the low-frequency region demonstrates the remarkable properties named here the zbi-effect. the term "current-carrying conductor" refers to extended conductors designed to carry electric current, but the research outlined in this article is also relevant to other types of objects: conductive or semiconductive materials of various compositions and shapes. one example relates to experiments carried out with thermistors (these are outside the scope of the paper). the zbi-effect is counter-intuitive. it is natural to assume that measuring the impedance of an object at ever lower frequencies brings the result closer to the direct-current value, and this is indeed what happens in the absence of bias. yet if a bias is applied to the object, this intuition fails: paradoxical as it looks, the infra-low frequency limit is not an asymptotic approximation to dc when measuring the impedance of a conductor under bias. the question is then how to explain the occurrence of such significant reactive elements in the impedance models of the studied objects. in particular, the capacitance of pure metals reaches the order of farads (figure 8), and the inductance of graphite rods reaches hundreds of millihenry (figure 10). in reality, of course, such reactances do not exist in the studied objects; the phenomenon may be called a "phantom" reactance. the effect can be explained by two necessary properties of the studied objects: nonlinearity and inertia. the nonlinearity of current conductors is a nonlinearity of the second kind (indirect). this distinguishes them from objects with a nonlinearity of the first kind (direct), such as p-n junctions or schottky diodes. a nonlinearity of the first kind reveals itself directly, without any delay; a nonlinearity of the second kind manifests itself through the dependence of the resistivity on temperature, which is a property of the studied material, and it acts with a significant delay. the bias sets a specific operating point and the test signal acts in the vicinity of this point. no matter how small the test signal is, it changes the temperature of the investigated object around the operating point with a certain delay. consequently, the resistance of the material cyclically follows the test signal: the resistance is modulated by it. the difference between the phase of this modulation and the phase of the acting test signal determines the occurrence of the phantom reactance. if the investigated object has a ptc property, a capacitive reactance arises; this behaviour is typical of pure metals or ptc thermistors. if the object under study has an ntc feature, an inductive reactance appears; this is specific, for example, to graphite and ntc thermistors. in terms of electrical measurements, impedance is properly defined only for systems satisfying stationarity [6].
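the mechanism can be made tangible with a small time-domain simulation. the sketch below is our own construction, not taken from the paper: a ptc resistor with first-order thermal inertia is driven by a bias plus a small test signal, and a lock-in estimate at the test frequency yields an impedance with a clearly negative (capacitive) imaginary part, i.e. the phantom reactance; with the bias removed, the reactance all but vanishes. all parameter values are arbitrary illustration choices.

```python
# electro-thermal toy model of the phantom reactance (ptc case)
import numpy as np

r0, alpha = 1.0, 4e-3          # ohm, 1/k: metal-like ptc
r_th, c_th = 50.0, 0.5         # k/w, j/k: thermal tau = r_th*c_th = 25 s
f, i_dc, i_ac = 0.05, 1.0, 0.05

dt = 0.01
t = np.arange(0.0, 40 / f, dt)             # 40 test-signal periods
i = i_dc + i_ac * np.sin(2 * np.pi * f * t)

temp = np.zeros_like(t)                    # temperature rise above ambient
for n in range(1, len(t)):                 # explicit euler thermal update
    r = r0 * (1 + alpha * temp[n - 1])
    p = r * i[n - 1] ** 2
    temp[n] = temp[n - 1] + dt * (p - temp[n - 1] / r_th) / c_th

v = r0 * (1 + alpha * temp) * i            # resistance modulated by the signal
sel = t >= t[-1] / 2                       # keep the settled second half
ref = np.exp(-2j * np.pi * f * t[sel])     # lock-in reference at f
z = np.sum(v[sel] * ref) / np.sum(i[sel] * ref)
print(z)                                   # im(z) < 0: capacitive phantom reactance
```

flipping the sign of alpha (ntc, graphite-like) flips im(z) to positive, i.e. inductive, reproducing the systematics of table 1.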
in our case we have a dynamic structure, with one qualification: the system changes cyclically and synchronously with the test signal, and the amplitude and phase of its response depend on the frequency of the test signal. a purely resistive element whose value changes synchronously with the test signal, but with a different phase relative to it, generates a response that looks like a complex resistance. as a result, a complex value is estimated during the measurements as the impedance of the studied object. successive experiments revealed a significant feature: the time constant following from the zbi-effect model (τ = rc for ptc objects and τ = l/r for ntc objects) depends only weakly on the applied bias. it is reasonable to assume that these time constants are related to the time constant of the heat exchange between the object under study and the environment (air, in our initial case). the studies using various coverings of the current-carrying conductors support this hypothesis: there are two time constants (figure 12 and figure 13), the first of which is apparently determined by the thermal properties of the covering and the thermal interaction between the conductor and the covering, while the second is determined by the thermal interaction between the covering and the environment. an accurate description of the thermal processes requires special knowledge and is beyond the scope of this article. however, it seems possible that the discovered effect could allow the development of specialised sensors for assessing the thermal conductivity of various materials; this approach may be an alternative to the methods described in [15], [16]. the experimental results obtained on the nichrome alloy motivate further ideas. in particular, both capacitive and inductive reactive components are observed at the bias vdc = 5.3 V (figure 7) and in the regions close to it (figure 4), depending on the frequency of the test signal. specifically, for figure 7 the capacitive nature appears in the frequency range of 10 Hz to 0.5 Hz and the inductive nature in the range of 0.5 Hz to 0.1 Hz. such effects are possible if we assume that the temperature coefficient of resistance (tcr) has dynamic properties; in other words, the tcr changes its character depending on the rate of temperature change. in turn, the rate of the temperature variations at the selected operating point is related to the frequency of the test signal. therefore, a ptc feature is observed in the higher frequency range and an ntc feature in the lower frequency range (figure 7). this assumption was confirmed by the i-v experiments with different sweep rates (figure 17 and figure 18). it appears that impedance spectroscopy may provide a more sensitive tool for assessing the dynamic properties of the tcr. since the dynamic properties of the tcr depend on the composition of the objects under study, there is a potential possibility of indirect composition estimation by locating the tcr sign change with impedance spectroscopy.

figure 18. static resistances calculated from the i-v curves represented in figure 17.

figure 19. nyquist plot of platinum wire at different temperatures and biases; length 83 mm and diameter 0.2 mm; frequency range 100 Hz … 0.1 Hz.
the revealed new properties (i.e., the possibility of evaluating the thermal conductivity and of estimating the composition of a material via the dynamic tcr, both arising from the discovered zbi-effect) may represent a significant contribution to the scientific and technical community, in particular to the development of the theory and practice of impedance spectroscopy of objects that change their parameters cyclically and synchronously with the test signal. understanding the nature of this effect can foster the development of a new type of instrument in various fields and various scientific institutions.

7. conclusions
in this work, the phenomenon of bias-induced impedance was described. the effect is most evident in the low-frequency spectra of the reactive part of the impedance. its different natures were shown experimentally: the zbi-effect may be capacitive, inductive, or complex, including both types of reactance. the nature of the reactance depends on the type of test material: pure metals showed capacitive reactance, graphite rods showed inductive reactance, and the alloys showed reactance of both types depending on the level of bias. the investigated objects can be classified as inertial nonlinear resistances. the zbi-effect is caused by the thermal interaction between the conductor and the environment under the superposition of the bias and a test signal. relatively simple equivalent circuits were found to describe the experimental data. additional studies should be undertaken to better understand the behaviour of alloys and other composites under bias, especially the unexpected dynamic tcr properties. new possibilities arise for assessing the thermal conductivity of various materials; this requires the synthesis of knowledge in the fields of electrical and thermal measurements and the construction of specialised sensor devices.

references
[1] e. h. nicollian, j. r. brews, mos physics and technology, wiley, new york, 1982. isbn-13: 978-0471430797
[2] s. taibl, g. fafilek, j. fleig, impedance spectra of fe-doped srtio3 thin films upon bias voltage: inductive loops as a trace of ion motion, nanoscale 8 (2016), pp. 13954-13966. doi: 10.1039/c6nr00814c
[3] n. kumar, e. a. patterson, t. frömling, d. p. cann, dc-bias dependent impedance spectroscopy of batio3-bi(zn1/2ti1/2)o3 ceramics, j. mater. chem. c 4 (2016), pp. 1782-1786. doi: 10.1039/c5tc04247j
[4] z. b. stoynov, b. m. grafov, b. s. savova-stoynov, v. v. elkin, electrochemical impedance, nauka, moscow, 1991. isbn: 5-02-001945-3
[5] s. baltianski, low frequency bias-induced impedance, 24th imeko tc4 int. symp., 22nd int. work. adc and dac model. test, palermo, italy, 14-16 september 2020, pp. 423-428. online [accessed 18 june 2021] https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-79.pdf
[6] e. barsoukov, j. r. macdonald (eds.), impedance spectroscopy, theory, experiment, and applications, 2nd ed., new jersey, john wiley & sons, inc., 2005. isbn: 978-0-471-64749-2
[7] m. e. orazem, b. tribollet, electrochemical impedance spectroscopy, 2nd ed., new jersey, john wiley & sons, inc., 2008. isbn: 9780470041406
[8] a. lasia, electrochemical impedance spectroscopy and its applications, springer, new york, 2014, pp. 1-367. doi: 10.1007/978-1-4614-8933-7
[9] f. w. grover, inductance calculations: working formulas and tables (dover phoenix editions), 2004. isbn-10: 0486495779
[10] n. murer, installation and configuration manual for vmp-300-based instruments and boosters. online [accessed 05 june 2021]. https://www.biologic.net/documents/vmp300-based-manuals/
[11] p. baranov, v. borikov, v. ivanova, b. b. duc, s. uchaikin, c. y. liu, lock-in amplifier with a high common-mode rejection ratio in the range of 0.02 to 100 khz, acta imeko 8(1) (2019), pp. 103-110. doi: 10.21014/acta_imeko.v8i1.672
[12] s. s. baltyanskii, measuring the parameters of physical objects by identifying electrical models, meas. tech. 43 (2000), pp. 763-769. doi: 10.1023/a:1026645722396
[13] f. m. janeiro, p. m. ramos, gene expression programming and genetic algorithms in impedance circuit identification, acta imeko 1(1) (2012), pp. 19-25. doi: 10.21014/acta_imeko.v1i1.16
[14] s. baltianski, impedance spectroscopy: separation and asymptotic model interpretation, xxi imeko world congr. measurement res. ind., prague, czech republic, 30 august - 04 september 2015, pp. 492-497. online [accessed 05 june 2021]. https://www.imeko.org/publications/wc-2015/imeko-wc-2015-tc4-101.pdf
[15] e. barsoukov, j. h. jang, h. lee, thermal impedance spectroscopy for li-ion batteries using heat-pulse response analysis, j. power sources 109 (2002), pp. 313-320. doi: 10.1016/s0378-7753(02)00080-0
[16] m. swierczynski, d. i. stroe, t. stanciu, s. k. kær, electrothermal impedance spectroscopy as a cost efficient method for determining thermal parameters of lithium ion batteries: prospects, measurement methods and the state of knowledge, j. clean. prod. 155 (2017), pp. 63-71. doi: 10.1016/j.jclepro.2016.09.109

acta imeko issn: 2221-870x september 2015, volume 4, number 3, 4-13

full information adc test procedures using sinusoidal excitation, implemented in matlab and labview

vilmos pálfi, tamás virosztek, istván kollár
budapest university of technology and economics, department of measurement and information systems, műegyetem rkp. 3., 1111, budapest, hungary

section: research paper
keywords: adc test; maximum likelihood; sine wave fitting; histogram test; matlab; labview
citation: vilmos pálfi, tamás virosztek, istván kollár, full information adc test procedures using sinusoidal excitation, implemented in matlab and labview, acta imeko, vol. 4, no. 3, article 2, september 2015, identifier: imeko-acta-04 (2015)-03-02
editor: paolo carbone, university of perugia, italy
received february 14, 2015; in final form april 8, 2015; published september 2015
copyright: © 2015 imeko.
this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: vilmos pálfi, e-mail: palfi@mit.bme.hu

abstract
analog-to-digital converters and the need to test these devices appeared simultaneously; adc circuit realisations and test methods have thus evolved together. in the last decades several techniques have been elaborated and spread worldwide; these are available in ieee standards and in the literature as well. however, the standard methods do not support the recognition of incorrect measurement settings. accurate test results require a careful choice of settings, and the calculated quality parameters of the adc under test are very sensitive to imperfections of the measurement setup. in addition, the requirements differ between test techniques and the restrictions can even be contradictory (e.g. overdrive is recommended for the histogram test and contraindicated for the fft test). this paper presents solutions to perform the commonly used methods reliably, together with some advanced methods that increase the performance of adc quality parameter estimation. implementations of the proposed algorithms are presented as well, with a url for download.

1. introduction
testing and characterization of analog-to-digital converters is an important field of measurement technology. a commonly used method for adc testing employs a sine wave excitation signal, e.g. the sine-wave-based histogram test. in this procedure the device under test is excited with a possibly clean sinusoidal input, and a histogram is created which, after correction for the probability density function of the sine wave, is used to determine the transition levels of the converter. the standard method for estimating the parameters of the excitation signal is least-squares fitting. if the signal frequency is assumed to be known, the so-called three-parameter fit can be used, which estimates the sine and cosine amplitudes and the dc offset of the signal; in the case of unknown frequency, the four-parameter fit solves the problem. the dynamic errors of the adc are often investigated using the fft test, which reveals the spurious components and the harmonic distortion introduced by the device. the test methods mentioned above are described in detail in the ieee standard [1]. furthermore, this document defines rather strict conditions on the signal parameters which have to be fulfilled to ensure accurate results. however, users face several difficulties when applying the standard procedures:
- no method is proposed to check the fulfilment of the conditions on the signal parameters (e.g. coherence, relative prime condition);
- while the proposed methods are sensitive to the signal parameters, they are unable to recognize bad parameter settings, which might lead to an incorrect characterization of the converter;
- correct signal parameters by themselves still do not ensure a precise estimation of the sine parameters, since the least-squares method is sensitive to the nonlinearities of the adc.
the main goal of this paper is to present advanced methods which are able to handle the above problems and to provide unbiased, minimum-variance information about the signal and adc parameters using only the measured record. matlab and labview implementations of the methods are freely available on the internet [2]; these software tools are also presented here.
section 2 explains in detail the standard methods and their disadvantages. the advanced methods are presented in section 3. the software tools are illustrated in section 4. finally, in section 5 some experimental results are shown which confirm the advantages of the proposed methods. this work is based on and extends [5].

2. standard methods in adc testing
2.1. the sine wave histogram test
the histogram test is an effective way to estimate the code transition levels of an a/d converter. the adc is tested with a pure sine wave which slightly exceeds the input range (see [3]), and a histogram is created which shows the number of hits in each code bin. let $H_k$ be the number of hits in code bin $k$ ($k = 0 \dots 2^b - 1$ for an adc of $b$ bits). then the cumulative histogram is defined as

$$CH_k = \sum_{i=0}^{k} H_i. \quad (1)$$

let the model of the excitation signal be

$$x(t) = C + A \cos(2 \pi f t + \varphi), \quad (2)$$

where $C$, $A$, $f$ and $\varphi$ are the offset, amplitude, signal frequency and initial phase, respectively. using the parameters $A$ and $C$, the number of samples $M$ and the cumulative histogram, the $k$-th transition level can be estimated with the formula

$$T_k = C - A \cos\!\left(\pi \frac{CH_{k-1}}{M}\right). \quad (3)$$

the signal parameters $A$ and $C$ can be estimated in units of the adc quantum step directly from the cumulative histogram (e.g. from the positions of the 10 % and 90 % points) or by a least-squares fit of the histogram. this is usually sufficient to execute the test in adc units (absolute values can be obtained using the estimates of two arbitrary, not too close, transition levels matched to the measurement), and using such estimators will not introduce any errors in the inl and dnl characteristics of the converter [1]. however, the determination of parameters like sinad, enob, etc. requires the estimation of all parameters of the input signal, that is, also the phase and frequency. the histogram test is very sensitive to the ratio of the signal frequency $f_x$ and the sampling frequency $f_s$. this ratio defines the relation between the number of periods $J$ and the number of samples $M$ in the record:

$$\frac{f_x}{f_s} = \frac{J}{M}. \quad (4)$$

standard [1] requires the sampling to be coherent (thus $J$ has to be an integer) and $J$ and $M$ to be relative primes. these conditions are very important because they guarantee an unbiased, minimum-variance estimation of the transition levels for the given number of samples (see [4] and [5]). unfortunately, the standard proposes no method for checking the fulfilment of these conditions. the requirements defined in the standard matter because this test technique compares the histogram of the quantized signal to the probability density function (pdf) of the sine wave. if a sine wave is quantized with an ideal adc, its histogram is very similar to the pdf, especially if the number of bits is high; figure 1 shows such a histogram. the imperfections of the converter cause distortions in the histogram, since the number of hits in a code bin differs from the ideal case when the code bin is wider or narrower than the quantization step (see figure 2). however, if the coherence condition is not fulfilled, the number of periods in the signal is not an integer, and the samples of the fractional period can also distort the histogram, so the transition levels among the affected codes are estimated with systematic errors (see figure 3).

figure 1. the histogram of a coherently sampled sine wave quantized by an ideal quantizer.

figure 2. histogram of a coherently sampled sine wave quantized by a non-ideal quantizer.
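a compact numerical rendering of (1)-(3) may help. the sketch below is our own illustration in python (the published tools themselves are in matlab and labview): it builds the cumulative histogram of an ideally quantized, coherently sampled sine wave and recovers the transition levels. the amplitude and offset are taken as known here, whereas in practice they are estimated from the histogram or by a sine fit.

```python
# histogram test sketch: transition levels from the cumulative histogram
import numpy as np

b, m, j = 8, 2**16, 127                 # bits, samples, periods; gcd(j, m) = 1
n = np.arange(m)
a, c = 1.02 * 2**(b - 1), 2**(b - 1)    # slight overdrive, mid-scale offset
x = c + a * np.cos(2 * np.pi * j * n / m)
codes = np.clip(np.round(x), 0, 2**b - 1).astype(int)   # stand-in ideal quantizer

h = np.bincount(codes, minlength=2**b)  # histogram h_k
ch = np.cumsum(h)                       # cumulative histogram, equation (1)

k = np.arange(1, 2**b)                  # levels t_1 ... t_(2^b - 1)
t_lev = c - a * np.cos(np.pi * ch[k - 1] / m)   # equation (3)
print(t_lev[:4])                        # close to the ideal k - 0.5 levels
```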
figure 3. distortion in the histogram of the sine wave caused by non-coherent sampling.

the conclusion is that only coherent sampling ensures unbiased estimation. the relative prime condition still has to be fulfilled as well, otherwise the quality of the results may be harmed: the distribution of the phases of the samples is uniform in the interval $[0, 2\pi)$ only if the greatest common divisor of $J$ and $M$ is 1. in that case every sample excites the adc at a different voltage level, so the transition levels can be estimated with maximum precision. figure 4 illustrates the case when both the coherence and the relative prime conditions are fulfilled: the samples are distributed uniformly, and the distance between two adjacent samples is everywhere the same. the transition levels can then be estimated with the best precision, because the uncertainty of their locations depends on the distance between samples that are close to each other in phase. figure 5 shows the case when the signal is sampled coherently but the relative prime condition is not fulfilled: the greatest common divisor is 5 instead of 1, so the samples are arranged into nodes. the distance between two nodes is much larger than its ideal value, so the uncertainty of the locations of the transition levels is high, which significantly increases the variance of the estimation. finally, figure 6 shows the case of non-coherent sampling: the distance between the phases varies, which leads to distortions in the histogram of the sine wave (see also figure 3).

figure 4. distribution of the phases in [0, 2π] when both the coherence and relative prime conditions are fulfilled.

figure 5. distribution of the phases in [0, 2π] when only the coherence condition is fulfilled.

figure 6. distribution of the phases in [0, 2π] in the case of non-coherent sampling.

2.2. the least squares method
a precise estimation of the signal parameters is quite important in adc testing. for example, the rms value of the residuals depends strongly on the estimated parameters, and equation (3) shows that the amplitude and dc offset have to be known as exactly as possible to determine the adc characteristics precisely. the least squares method uses the following model of the sine wave:

$$y(t) = A \cos(2 \pi f t) + B \sin(2 \pi f t) + C, \quad (5)$$

where $A = A_0 \cos(\varphi)$ and $B = -A_0 \sin(\varphi)$. the advantage of this modified model is that it is linear in $A$, $B$, $C$ and nonlinear only in the signal frequency. the method minimizes the quadratic cost function

$$CF = \sum_{i=1}^{M} \left( y_i - A \cos(2 \pi f t_i) - B \sin(2 \pi f t_i) - C \right)^2. \quad (6)$$

let $\mathbf{D}$ denote the matrix of the derivatives of the residuals with respect to the parameters of the sine wave. the solution of the least-squares equation can be expressed in the form

$$\hat{\mathbf{p}} = \left( \mathbf{D}^{\mathrm{T}} \mathbf{D} \right)^{-1} \mathbf{D}^{\mathrm{T}} \mathbf{y}. \quad (7)$$

solving the above equation iteratively (e.g. with 5-6 newton-gauss steps) gives the least squares estimator of the sine parameters; a code sketch of the linear solve is given at the end of this subsection. despite its efficiency, the method has several disadvantages:
- the statistical properties of the estimation depend strongly on the saturation of the adc; e.g. a 10 % overdrive leads to significant errors in the estimated sine parameters;
- the presence of harmonic components also influences the precision of the estimator negatively;
- the least-squares method implicitly assumes an ideal quantizer (so that the stochastic noise model is appropriate); this model is not valid for real adcs with nonlinear characteristics, which results in a biased estimation;
- the computational demands increase rapidly with the record length, while testing high resolution converters requires long measurements.
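for reference, the three-parameter fit of (5)-(7) reduces to a single linear solve when the frequency is known; a minimal sketch (our own, not the toolbox code):

```python
# three-parameter least-squares sine fit (known frequency)
import numpy as np

def three_param_fit(y, f, fs):
    t = np.arange(len(y)) / fs
    d = np.column_stack([np.cos(2 * np.pi * f * t),
                         np.sin(2 * np.pi * f * t),
                         np.ones_like(t)])      # observation matrix
    p, *_ = np.linalg.lstsq(d, y, rcond=None)   # solves (7)
    return p                                    # [a, b, c]

fs, f, m = 200e3, 97.0, 2**16
t = np.arange(m) / fs
y = 3.0 * np.cos(2 * np.pi * f * t) - 1.5 * np.sin(2 * np.pi * f * t) + 0.2
print(three_param_fit(y, f, fs))                # -> [3.0, -1.5, 0.2]
```

the four-parameter fit wraps this solve in a newton-gauss iteration that also updates f.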
2.3. the fft test
the purpose of the fft test is to characterize the dynamic behaviour of the adc by identifying the spurious and harmonic components introduced by the device. the spurious free dynamic range (sfdr) expresses the relation between the carrier and the largest spurious component in the signal. overdriving the adc or non-coherent sampling significantly decreases the precision of the test results due to spectral leakage and to the harmonic components caused by clipping the peaks of the sine wave [6].

3. advanced methods
3.1. main goals
the main goal of this paper is to present advanced algorithms which are able to handle the problems introduced in section 2, so that the user can perform accurate and reliable adc testing. the methods perform the following tasks:
- a quality analysis of the measured data is provided by checking saturation and the fulfilment of the conditions required by the histogram test; this step requires a precise estimation of the sine parameters;
- if the original record fails to fulfil the conditions, the coherent parts of the record are identified. if the coherent parts are too short compared to the original measurement, a new signal frequency is proposed and the measurement can be repeated following this suggestion. these steps significantly improve the results of the histogram test and of the fft test, since both methods provide the best results in the case of coherent sampling;
- the signal parameters are determined using the maximum likelihood (ml) algorithm. the ml estimator is not influenced negatively by the (possibly) nonlinear characteristics of the adc, so the signal parameters, the fitting residuals and values such as sinad and enob can be determined with the best achievable precision.
the algorithms are presented in detail in the next subsections. it is important to note that no a priori information is used or required: the only source of information (transition levels, signal parameters, etc.) is the measured record itself.

3.2. overdrive detection
overdrive detection is quite important because the distortions caused by clipping the signal peaks strongly influence the results of the sine wave fitting and of the fft test. the method identifies the samples of the measured signal which appear to lie outside the full-scale range of the adc. for this purpose, the number of periods $J$ in the signal is first determined using the ipfft with the maximum sidelobe decay window ([7], [8] and [9]). in the next step, the three-parameter sine fit [1] is performed to determine the $A$, $B$ and $C$ parameters. let $y_i$ be the output of the adc, and $C_{min}$, $C_{max}$ the smallest and the largest output codes of the converter. based on [10], only those samples are used in the three-parameter fit for which

$$C_{min} < y_i < C_{max}. \quad (8)$$

then the fitted sine can be expressed as

$$\hat{y}_i = A \cos(2 \pi f t_i) + B \sin(2 \pi f t_i) + C. \quad (9)$$

the $i$-th sample of the signal is assumed to be overdriven if $\hat{y}_i > C_{max} + 1/2$ or $\hat{y}_i < C_{min} - 1/2$ (these are two virtual transition levels at the ends of the full-scale range). the overdriven samples are replaced (as in [6]) with the corresponding samples of $\hat{y}$:

$$y'_i = \begin{cases} \hat{y}_i, & \hat{y}_i > C_{max} + 1/2 \\ \hat{y}_i, & \hat{y}_i < C_{min} - 1/2 \\ y_i, & \text{otherwise}. \end{cases} \quad (10)$$

using $y'$ instead of $y$ in the fft test and in the sine fitting algorithm improves the results significantly; a code sketch of this replacement step is given below.
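a minimal rendering of (8)-(10), our own sketch with the three-parameter solve inlined:

```python
# overdrive handling: fit on unclipped codes, replace the clipped samples
import numpy as np

def replace_clipped(y, f, fs, cmin, cmax):
    t = np.arange(len(y)) / fs
    d = np.column_stack([np.cos(2 * np.pi * f * t),
                         np.sin(2 * np.pi * f * t),
                         np.ones_like(t)])
    keep = (y > cmin) & (y < cmax)                    # condition (8)
    p, *_ = np.linalg.lstsq(d[keep], y[keep], rcond=None)
    y_fit = d @ p                                     # fitted sine, equation (9)
    over = (y_fit > cmax + 0.5) | (y_fit < cmin - 0.5)
    return np.where(over, y_fit, y)                   # substitution (10)
```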
3.3. least-squares fit in the frequency domain
the disadvantages of the standard, time domain least squares method (presented in section 2) show that it is not the best choice for determining the sine parameters. most of the disadvantages can be handled if the fit is performed in the frequency domain. for this purpose, $y'$ is first windowed with the three-term blackman-harris window (see [11]), and the fft of the windowed signal is computed. the blackman-harris window concentrates the information about the sine wave around its frequency very effectively, so only a few spectral points are used in the iterative estimation. the method is explained in detail in [12] and [13]. its main advantages are the following:
- the blackman-harris window compresses the information around the sine and dc frequencies, so the computational cost is reduced significantly;
- the statistical properties of the estimator are the same as for the original method, and at low frequencies the frequency domain method outperforms the original algorithm;
- thanks to the windowing, the frequency domain method is much less sensitive to harmonic distortion in the signal.
it is important to note that nonlinearities in the adc characteristics cannot be handled by least-squares estimators regardless of the domain of the samples used. however, the influence of the characteristics is much more significant for the parameters $A$, $B$ and $C$ than for $J$, the number of periods in the signal. thus $\hat{J}$ is approximately unbiased, and it was also shown in [13] that its statistical properties allow the estimator to be used to check the fulfilment of the coherence and relative prime conditions.

3.4. coherence analysis
the main purpose of this algorithm is to decide the suitability of the measured sine wave for histogram testing. this depends on the exact number of periods in the signal, denoted by $J$, which can be written as $J = \langle J \rangle + \Delta J$, where $\langle J \rangle$ is the rounded value of $J$ and $\Delta J$ is the residual, so $|\Delta J| \le 0.5$. the goal is to identify the coherent record parts in the measurement. for this purpose the condition of carbone and chiorboli is used: it was shown in [14] that if $\langle J \rangle$ and $M$ are relative primes and

$$|\Delta J| \le \Delta J_{max} \quad (11)$$

holds, where $\Delta J_{max}$ denotes the bound derived in [14], then the variance of the histogram test does not increase noticeably in comparison with the $\Delta J = 0$ case, and the sampling can be assumed coherent from the histogram test's point of view. since the estimators are random variables, a probabilistic approach to the coherence analysis is recommended and presented here. let $\sigma_J^2$ be the variance of the estimator $\hat{J}$. as shown in [13], $\hat{J}$ is asymptotically gaussian and unbiased, and its variance can be determined in closed form using the jennrich theorem (see [15]). using this information, the probability that the true number of periods lies in an interval $[a, b]$ can be determined:

$$P(a \le J \le b) = F_{\hat{J}}(b) - F_{\hat{J}}(a), \quad (12)$$

where $F_{\hat{J}}$ is the gaussian cdf with mean $\hat{J}$ and standard deviation $\sigma_J$; equivalently,

$$P(a \le J \le b) = \Phi\!\left(\frac{b - \hat{J}}{\sigma_J}\right) - \Phi\!\left(\frac{a - \hat{J}}{\sigma_J}\right), \quad (13)$$

where $\Phi$ denotes the standard normal cdf. using the latter form, the probability of coherence can be evaluated for different record lengths by selecting the values of $a$ and $b$ properly. let $J_r$ be the rounded value of $\hat{J}$, thus $J_r = \langle \hat{J} \rangle$. to determine the probability of fulfilling the carbone-chiorboli condition, the following values of $a$ and $b$ are required:

$$a = J_r - \Delta J_{max}, \qquad b = J_r + \Delta J_{max}. \quad (14)$$

if this probability is higher than a previously defined threshold (e.g. 95 %), the sampling can be assumed coherent. if, in addition, the relative prime condition is fulfilled, the data are optimal for histogram testing. otherwise, additional steps are required to identify the coherent parts of the signal.
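translated into code, the check looks as follows. this is our own sketch: delta_max stands for the carbone-chiorboli bound of (11), whose actual value is taken from [14], and the numbers in the example are made up.

```python
# probabilistic coherence check of section 3.4
from math import gcd
from statistics import NormalDist

def coherence_probability(j_hat, sigma_j, m, delta_max):
    j_r = round(j_hat)
    if gcd(j_r, m) != 1:
        return 0.0                                   # relative prime condition violated
    a, b = j_r - delta_max, j_r + delta_max          # interval (14)
    nd = NormalDist(mu=j_hat, sigma=sigma_j)
    return nd.cdf(b) - nd.cdf(a)                     # probability (12)-(13)

# accept the record when the probability exceeds the preset threshold
print(coherence_probability(9.002, 0.004, 2**15, 0.01) > 0.95)   # -> True
```

the same function, evaluated for a list of candidate record lengths, implements the selection procedure described in the next paragraphs.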
if the original record of $M$ samples fails to fulfil the requirements, a new record length $M_{new}$ has to be determined. let $\nu$ be the number of periods represented by one sample of the signal ($\nu < 0.5$, which follows from the nyquist-shannon sampling theorem). the exact value of $\nu$ is unknown, but it can be estimated:

$$\hat{\nu} = \frac{\hat{J}}{M}. \quad (15)$$

let $J_{new}$ be a number of periods for which the condition of carbone and chiorboli holds true and the greatest common divisor of $J_{new}$ and $M_{new}$ is 1. then $M_{new}$ can be estimated as

$$\hat{M}_{new} = \left\langle \frac{J_{new}}{\hat{\nu}} \right\rangle. \quad (16)$$

the variance of $\hat{M}_{new}$ is also known; by first-order error propagation through (15) and (16),

$$\sigma_{\hat{M}_{new}} \approx \frac{J_{new}}{\hat{\nu}^2} \cdot \frac{\sigma_J}{M}. \quad (17)$$

this way the probability of coherence can be determined for any record length. using the above procedure, the following steps are proposed to determine the optimal number of samples used for histogram testing:
- estimation of the number of periods in the original record using the frequency domain estimator; as a result, $\hat{J}$ and $\sigma_J$ are available;
- determination of the possible numbers of integer periods in the record; these are stored in the vector $\mathbf{J}$, whose elements increase from 1 to $\langle \hat{J} \rangle$. the value of $\hat{\nu}$ is also determined;
- determination of the number of samples for each element of $\mathbf{J}$; since these have to be integers, they are the rounded ratios of the possible integer numbers of periods and $\hat{\nu}$. the results are stored in the vector $\mathbf{M}$;
- for the elements of $\mathbf{M}$ the exact number of periods is calculated and stored in the vector $\mathbf{J}'$; each element is a random variable whose standard deviation is stored in $\boldsymbol{\sigma}$;
- next, the carbone-chiorboli bounds are calculated for each element of $\mathbf{M}$. once this is done, the probabilities of coherence can be determined for every value of $\mathbf{M}$; these probabilities are stored in the vector $\mathbf{P}$, and the greatest common divisors of the corresponding elements of $\mathbf{J}'$ and $\mathbf{M}$ in the vector $\mathbf{G}$;
- using the above vectors, the optimal number of samples can be determined: among the indices $k$ with $G_k = 1$, a figure of merit combining the coherence probability $P_k$ and the record length $M_k$ is maximised, where $P_k$, $M_k$ and $G_k$ are the $k$-th elements of the $\mathbf{P}$, $\mathbf{M}$ and $\mathbf{G}$ vectors, respectively. the histogram test should be performed with the length for which this figure of merit is maximal;
- if the resulting $M_{new}$ is too small compared to the original record length, the needed adjustment of the signal frequency can be determined when the sampling frequency is known. from (4) one can derive that $\Delta f_x = \frac{\Delta J}{M} f_s$, where $\Delta f_x$ is the required correction of the signal frequency. note that the frequency resolution of signal generators is usually not high enough to adjust the frequency by exactly the proposed value.

3.5. the maximum likelihood method
3.5.1. motivation
the main weakness of the standard ls estimators of the sine wave parameters is their possible bias. least-squares estimation can be biased in multiple situations; examples from the field of system identification are itemized in [16]. in adc testing this problem appears due to the non-ideality of real quantizers: the code transition levels are not distributed uniformly, so the code bins have different widths. the ls estimator finds the best fitting sine wave to the output codes of the device under test (or to the nominal voltage values corresponding to the output codes), while the aim is to estimate the parameters of the analog input sine wave; the nonlinearity of the converter is not modelled in this standard method. the goal of the maximum likelihood (ml) estimation of the sine wave and adc parameters [17], [21] is to provide minimum-variance unbiased (mvu) estimators for the analog excitation signal while taking the non-ideal properties of the quantizer into account. the theoretical and practical aspects of ml estimation for adc testing are itemized in the following subsections.
3.5.2. modelling the measurement setup
for the sine-wave-based qualification of converter circuits, the measurement setup is simple: the analog side of the adc under test is connected to a sine wave generator, while the digital record of the sine wave is post-processed to obtain the quantities that qualify the converter. the quality requirements for the sine wave generator are, however, high: the device shall provide excellent frequency stability, even over minutes, and very low harmonic distortion. these strict requirements have the following reason: frequency alteration, phase noise and multi-harmonic signals can be treated mathematically, but these options give too many degrees of freedom to the model. e.g. harmonic components introduced by the nonlinearity of the converter and those provided by the analog generator cannot be distinguished; similarly, a measured sine wave can be described either using additive noise and harmonic components or using frequency alteration and phase noise. the noise of the analog signal, the disturbances of the analog environment and the electronic noise of the adc circuit are handled in a simple but very lifelike noise model [18]. to examine the statistical and spectral properties of the measurement noise it is expedient to perform long measurements with a short-circuited analog input or with zero excitation on the sine wave generator (the latter is better for recording the electromagnetic disturbances of the environment). evaluating several measurements with different adc circuits, the white noise model proves to be very realistic. this is important for the mathematical form of the likelihood function as well, because the samples of the noise are assumed to be independent (see subsection 3.5.3). regarding the probability distribution of the noise, the results are less straightforward. we tried to confirm or reject the null hypothesis that the samples come from a well-known distribution using the kolmogorov-smirnov test. as the records contained up to 2 million samples, these hypothesis test results are reliable at a high confidence level (p = 95-99 %). the distributions of the recorded populations were close to the gaussian normal distribution but showed significantly higher kurtosis: they contained more outliers than expected for gaussian noise. according to our experience, the best practice is to use a combination of gaussian and laplacian distributions, which handles the outliers while keeping the shape of the distribution [19]. the model of the measurement and the parameters corresponding to the elements of the setup appear in figure 7.

figure 7. measurement setup for adc testing.

3.5.3. the likelihood function
the measurement record contains $M$ samples of the digitally recorded sine wave; the observations are these samples of the quantized noisy signal. the likelihood function depends on the following parameters, collected in $\boldsymbol{\theta}$:

$$\boldsymbol{\theta} = \left[ A, B, C, f, \sigma, T(1), \cdots, T(2^b - 1) \right], \quad (18)$$

where $b$ is the number of bits, $A$ denotes the cosine coefficient of the sine wave, $B$ the sine coefficient, and $C$ the dc offset of the excitation signal. the frequency of the sine wave is denoted by $f$ (the sampling frequency is known), $\sigma$ denotes the standard deviation of the additive noise on the analog signal, and the code transition levels of the quantizer (from the lowest to the highest) are denoted by $T(1), \dots, T(2^b - 1)$, respectively.
the likelihood of the measurement can be expressed as

$$L(\boldsymbol{\theta}) = \prod_{m=1}^{M} P(Y_m = y_m), \quad (19)$$

where $y_m$ denotes the $m$-th recorded sample of the sine wave and $Y_m$ is the discrete random variable corresponding to the $m$-th sample of the record. to calculate the distribution of $Y_m$, it is necessary to calculate the $m$-th sample of a pure sine wave with given parameters $A$, $B$, $C$ and $f$:

$$s_m = A \cos\!\left(2 \pi f \frac{m}{f_s}\right) + B \sin\!\left(2 \pi f \frac{m}{f_s}\right) + C. \quad (20)$$

the threshold levels of the quantizer (code transition levels) and the noise model appear in the following formula, which gives the discrete distribution of the random variable $Y_m$:

$$P(Y_m = y_m) = N\!\left(T(y_m + 1), s_m, \sigma\right) - N\!\left(T(y_m), s_m, \sigma\right), \quad (21)$$

where $N(x, \mu, \sigma)$ denotes the cumulative distribution function of the noise with expected value $\mu$ and standard deviation $\sigma$. using the cumulative distribution function (cdf) of the gaussian distribution as this ncdf is usually a very good approximation (see subsection 3.5.2). in this likelihood function the following a priori information is used:
- the noise is white (the samples of the noise are independent);
- the analog excitation is a sine wave with additive, almost gaussian noise;
- the quantizer is described by its sampling frequency ($f_s$, constant in a measurement) and its code transition levels ($T(k)$ is the voltage value which results in digital code $k - 1$ with 50 % probability and code $k$ with 50 % probability as well).
the maximum likelihood estimators are obtained by optimizing this likelihood function with respect to the parameters stored in $\boldsymbol{\theta}$:

$$\hat{\boldsymbol{\theta}}_{ML} = \arg \max_{\boldsymbol{\theta}} L(\boldsymbol{\theta}). \quad (22)$$

a numerical sketch of the evaluation of this likelihood is given below.

3.5.4. challenges of optimization
the most important problem is the computational demand of the optimization, which strongly depends on the number of parameters. for a $b$-bit quantizer the number of parameters is $2^b + 4$ and the number of restrictions is $2^b - 1$, so the parameter space grows exponentially with the number of bits. the overall computational demand grows even faster: with $N$ denoting the number of parameters, and depending on the optimization algorithm, the operations have the following computational complexity:
- calculating the first-order partial derivatives (the gradient vector): $\mathcal{O}(N)$;
- calculating the second-order partial derivatives (the hessian matrix): $\mathcal{O}(N^2)$;
- inverting the hessian matrix: $\mathcal{O}(N^3)$.
this way, performing the optimization over the entire parameter space requires unacceptable effort for a regular 12-bit, 16-bit or higher resolution adc. to handle this problem, one of the following approximations shall be used:
- the code transition levels are estimated from the sinusoidal record using the histogram test [2]; these values are considered fixed and are not adjusted later. this method reduces the parameter space to 5 dimensions, but it brings along all the problems that can appear in sinusoidal histogram testing [4];
- the nonlinearity of the quantizer is parameterized: the code transition levels are not estimated one by one, but the entire static transfer characteristic is described using fewer (from 5 up to 15) parameters. this way the number of parameters of the likelihood function remains between 10 and 20, so the optimization can be performed without excessive effort. for the parameterization, chebyshev [20] or taylor polynomials or fourier coefficients can be used. the information regarding the nonlinearity is described with fewer quantities in this case, but the estimation of these polynomial coefficients brings along less variance than the estimation of the individual code transition levels.
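the reduced-parameter likelihood is straightforward to evaluate numerically. the sketch below is our own illustration of (19)-(21), not the adctest implementation: it computes the log-likelihood of an integer code record given the sine, noise and transition level parameters. maximizing it, e.g. with a gradient-based optimizer acting on the signal and noise parameters, yields the approximate ml estimates.

```python
# log-likelihood of a quantized sine record, equations (19)-(21)
import numpy as np
from scipy.stats import norm

def log_likelihood(y, t_levels, a, b, c, f, sigma, fs):
    """y: integer codes 0 ... 2^b - 1; t_levels: t(1) ... t(2^b - 1)."""
    tm = np.arange(len(y)) / fs
    s = a * np.cos(2 * np.pi * f * tm) + b * np.sin(2 * np.pi * f * tm) + c  # (20)
    t_ext = np.concatenate([[-np.inf], t_levels, [np.inf]])  # edge codes
    lo = norm.cdf(t_ext[y], loc=s, scale=sigma)
    hi = norm.cdf(t_ext[y + 1], loc=s, scale=sigma)
    prob = np.clip(hi - lo, 1e-300, None)                    # (21), guarded log
    return np.sum(np.log(prob))                              # log of (19)
```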
3.5.5. numerical recipes to optimize the likelihood function
the likelihood function in the reduced parameter space can be optimized in multiple ways to obtain approximate maximum likelihood estimators:
- derivative-based methods: the simple gradient descent, which only requires the first-order partial derivatives; the gauss method, which implies the calculation of the hessian matrix; and the generalized levenberg-marquardt method, where the hessian is used as well (instead of the formula based on the jacobian matrix). the calculation of the hessian in each iteration cycle can be bypassed using quasi-newton methods (e.g. dfp, bfgs or broyden);
- the simplex downhill (nelder-mead) method: it does not require the calculation of derivatives (the objective function need not even be differentiable), but the number of iterations and cost function evaluations can be high (depending on the shape of the extremum and the termination criteria);
- differential evolution: a genetic algorithm used to find the global optimum of the objective function. this method does not require derivatives and is able to escape from local extrema; on the other hand, it requires a large number of cost function evaluations and its convergence is partially based on heuristics.
in our software implementation, available on the web [2], a gradient-based method is used; the efficient usage of the other algorithms is also a subject of investigation.

4. software tools for adc testing
the previous sections demonstrated the importance of a quality analysis of the measured data before the test methods are performed. the algorithms have to be executed in a fixed order to ensure the best results, since every method at a later stage uses the information provided by the previous ones. the main tasks are the fft test, the histogram test and the estimation of the sine parameters; these are supported by the other algorithms. the data processing chain can be seen in figure 8. first, the overdrive detection method is performed, which identifies the samples clipped by the adc; these are replaced by their estimated values. this step is done automatically in the labview tool, while in matlab the user is warned (see figure 9). as a result, the outcomes of the fft test and of the least-squares fit in the frequency domain are improved. once the latter is done, the coherence analysis can be performed and the optimal record length can be determined (figure 10). at this point the processed record can be assumed coherent, so the histogram and fft tests provide valid results. once the transition levels are known, the maximum likelihood estimation method can be executed. it estimates the parameters of the sine wave and of the noise without bias and with a variance very close to the theoretical limit (see figure 11), which serves the accurate determination of adc quality parameters such as enob and sinad.

figure 8. the sequence of methods in the adc testing software.

figure 9. classification of the measurement in the matlab tool.

figure 10. results of the coherence analysis; the user can choose among the best 3 options.

figure 11. results of the maximum likelihood method; besides the signal and noise parameters, sinad and enob are also determined.

5. experimental results
5.1. simulation results
this subsection presents the test results of the coherence analysis algorithm. in the tests a 12-bit quantizer was used; its characteristics can be seen in figure 12.
in the tests, sine waves were generated whose initial phase and number of periods were random variables. the initial phase was uniformly distributed in the interval $[0, 2\pi)$. the number of periods $J$ was distributed uniformly in $[8.95, 9.05]$; the aim of this selection was to model a user who tries to fulfil the coherence condition but fails to do so because of errors in the signal and sampling frequencies. the overdrive was set to 10 % and the record length to $M = 2^{15}$. the model of the sine wave was the following (according to (2)):

$$x_c[n] = C + 1.1 A \cos\!\left( \frac{2 \pi J n}{M} + \varphi \right), \quad (23)$$

where $A$ corresponds to the full-scale amplitude of the quantizer. to simulate realistic circumstances and to model the imperfections of the signal generator, independent gaussian noise $e$ was added to the samples of the signal, with zero mean and 1 lsb standard deviation:

$$\mathrm{E}\{e\} = 0, \qquad \mathrm{var}\{e\} = 1. \quad (24)$$

a harmonic component $x_h$ was also added, with 1 lsb amplitude and double the frequency of the carrier:

$$x_h[n] = \cos\!\left( 2 \left( \frac{2 \pi J n}{M} + \varphi \right) \right). \quad (25)$$

the test signal was the sum of the carrier, the noise and the harmonic component:

$$x[n] = x_c[n] + e[n] + x_h[n]. \quad (26)$$

100 tests were run, in which the histogram test was performed first using the whole record and then using the optimal record length found by the coherence analysis. figure 13 shows the standard deviation of the error of the histogram test when the whole record was used, and figure 14 shows the standard deviation when the record was truncated to the optimal length. the comparison of the figures shows that the coherence analysis successfully reduced the estimation errors. simulation results for the maximum likelihood method and a comparison with the ls estimator can be found in [22]; a comprehensive study of the frequency domain ls estimator is presented in [13].

5.2. experimental results
the presented algorithms were also tested with real measurement data. a national instruments adc with 16 bits and $f_s$ = 200 kHz sampling frequency was used for the data quantization; the excitation signal was provided by a brüel & kjaer type 1051 sine wave generator. since a slight overdrive of the adc is recommended for histogram testing, the signal amplitude was set to 120 % of the full-scale range of the converter. the frequency was set to $f_x$ = 97 Hz and $M = 2^{20}$ samples were collected. the nominal values of the frequencies and the record length fulfil both the coherence and the relative prime conditions. in the first test, the original least-squares method [1] was compared to the frequency domain method with overdrive detection. the algorithms estimated the four parameters of the sine wave, then sinad and enob were determined using the fitting residuals $r_i$:

$$P_r = \frac{1}{M} \sum_{i=1}^{M} r_i^2, \quad (27)$$

$$\mathrm{sinad} = 20 \log_{10} \frac{A_0 / \sqrt{2}}{\sqrt{P_r}}, \quad (28)$$

$$\mathrm{enob} = \frac{\mathrm{sinad} - 1.76}{6.02}, \quad (29)$$

where $A_0 = \sqrt{A^2 + B^2}$ is the estimated amplitude. in evaluating (27)-(29), the full-scale value was substituted for the fitted samples exceeding the full-scale range of the adc. the results of the comparison can be seen in table 1.

figure 12. adc inl and dnl characteristics.

figure 13. standard deviation of the estimation error of the histogram test when the whole record was used.

figure 14. standard deviation of the estimation error of the histogram test after the coherence analysis.

table 1. comparison of the standard and the proposed least-squares fitting methods.
- a [lsb]: original method -30834.5; proposed method -30886.4
- b [lsb]: original method 12140.2; proposed method 12160.1
- c [lsb]: original method 32789.8; proposed method 32793.3
- j: original method 211.01; proposed method 211.05
- sinad [db]: original method 48.240; proposed method 79.114
- enob: original method 7.721; proposed method 12.849

the results show that the harmonic components introduced by the saturation of the adc cause huge errors in the amplitude and dc offset parameters of the sine wave; the detection and substitution of the samples around the peaks improved the results significantly.
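the evaluation chain (27)-(29) condenses to a few lines; as a consistency check, feeding the sinad of the proposed method from table 1 (79.114 db) into (29) indeed returns the quoted enob of 12.849. a minimal sketch:

```python
# sinad and enob from the fitting residuals, equations (27)-(29)
import numpy as np

def sinad_enob(y, y_fit, a0):
    r = y - y_fit                          # fitting residuals
    rms_noise = np.sqrt(np.mean(r**2))     # sqrt of (27)
    sinad = 20 * np.log10(a0 / np.sqrt(2) / rms_noise)   # (28)
    enob = (sinad - 1.76) / 6.02           # (29)
    return sinad, enob

print((79.114 - 1.76) / 6.02)              # -> 12.849, as in table 1
```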
in the next step the effect of the coherency analysis algorithm was studied. two histogram tests were performed, one for the whole record and one for the coherent part of the signal. the coherency analysis showed that the optimal record length for histogram testing is 288227 samples, which is only 27 % of the original length, despite the nominal values of the sampling and sine frequencies fulfilling every requirement. figure 15 shows the results of the inl estimations and the difference between the results. in the first case the histogram test was performed using a non-coherent record, thus the results are distorted: the error curve showed that 57 transition levels were estimated with an error higher than 3 lsb, and 4113 of them were estimated with an error higher than 2 lsb. the value of the average estimation error is 0.957 lsb. it is very important to notice that 288227 samples are not enough to test a 16 bit adc accurately; the presented software would recommend an adjustment of the signal frequency (see section 4). the new record length shows the amount of independent information in the whole record, thus ~70 % of the samples do not provide new information about the transition levels. in addition, it is still better to perform the histogram test with the truncated record, since the whole record introduces bias in the estimation, while the variance of the estimation is the same for both records due to the same amount of information. finally, the results of the least-squares method and of the maximum likelihood method were compared. the inl curve of the adc shows that the characteristics are nonlinear, thus the least-squares method is not able to provide unbiased results. the ml method uses the transition levels of the converter during the parameter estimation process, so the results are precise for nonlinear converters as well. table 2 shows the results.

table 2. comparison of the least-squares and maximum likelihood methods.
parameter | ls | ml
a [lsb] | -30886.4 | -30886.4
b [lsb] | 12160.1 | 12160.1
c [lsb] | 32793.3 | 32792.8
j | 211.05 | 211.05
sinad [db] | 79.114 | 78.918
enob | 12.849 | 12.817

figure 15. comparison of results of the histogram tests using the whole record and the coherent part only.

the enob and sinad parameters were found to be smaller using the ml method. the explanation is that the ls fit minimizes the error, thus the values of the above parameters are maximized; the true values are provided by the ml estimator, which maximizes the probability with respect to the parameters. 6. conclusions  in adc testing, users have to face some difficulties during the application of the standard methods. this paper presented some advanced methods which are able to handle the problems of the original procedures. the implementation of the proposed algorithms was also presented, and experimental results showed that the new methods provide accurate and reliable information about the adc and sine wave parameters. references  [1] ieee standard for terminology and test methods for analog-to-digital converters, ieee std 1241-2010 (revision of ieee std 1241-2000), pp. 1-139, jan. 14, 2011. doi: 10.1109/ieeestd.2011.5692956 [2] virosztek t., pálfi v., renczes b., kollár i., balogh l., sárhegyi a., márkus j., bilau z. t., adctest project site: http://www.mit.bme.hu/projects/adctest, 2000-2014 [3] blair j., "histogram measurement of adc nonlinearities using sine waves," ieee transactions on instrumentation and measurement, vol. 43, no. 3, pp. 373-383, jun 1994. doi: 10.1109/19.293454 [4] vilmos pálfi, istván kollár, "improving the results of the histogram test using a fast sine fit algorithm", proc. of 19th imeko tc4 symposium, barcelona, spain, 2013, paper 118. url: http://www.imeko.org/publications/tc4-2013/imeko-tc4-2013-118.pdf [5] vilmos pálfi, istván kollár, "reliable adc testing using labview", proc. of 20th imeko tc4 symposium, benevento, italy, 2014, pp. 237-241. url: http://www.imeko.org/publications/tc4-2014/imeko-tc4-2014-259.pdf [6] xu li, siva kumar sudani, degang chen, "efficient spectral testing with clipped and noncoherently sampled data," ieee transactions on instrumentation and measurement, 63.6 (2014): 1451-1460. doi: 10.1109/tim.2013.2292273 [7] belega daniel, dominique dallet, "frequency estimation via weighted multipoint interpolated dft," science, measurement & technology, iet 2.1 (2008): 1-8. doi: 10.1049/iet-smt:20070022 [8] belega daniel, dominique dallet, "efficiency of the three-point interpolated dft method on the normalized frequency estimation of a sine-wave," intelligent data acquisition and advanced computing systems: technology and applications (idaacs 2009), ieee international workshop on, 2009. doi: 10.1109/idaacs.2009.5343034 [9] belega daniel, dominique dallet, dario petri, "accuracy of sine wave frequency estimation by multipoint interpolated dft approach," ieee transactions on instrumentation and measurement, 59.11 (2010): 2808-2815. doi: 10.1109/tim.2010.2060870 [10] kollár istván, jerome j. blair, "improved determination of the best fitting sine wave in adc testing," ieee transactions on instrumentation and measurement, 54.5 (2005): 1978-1983. doi: 10.1109/tim.2005.855082 [11] albrecht hans-helge, "a family of cosine-sum windows for high-resolution measurements," acoustics, speech, and signal processing, ieee international conference on, vol. 5, 2001. doi: 10.1109/icassp.2001.940309 [12] pálfi vilmos, istván kollár, "efficient execution of adc test with sine fitting with verification of excitation signal parameter settings," instrumentation and measurement technology conference (i2mtc), 2012 ieee international, 2012. doi: 10.1109/i2mtc.2012.6229502 [13] pálfi vilmos, istván kollár, "acceleration of the adc test with sine-wave fit," ieee transactions on instrumentation and measurement, 62.5 (2013): 880-888. doi: 10.1109/tim.2013.2243500 [14] carbone paolo, giovanni chiorboli, "adc sinewave histogram testing with quasi-coherent sampling," ieee transactions on instrumentation and measurement, 50.4 (2001): 949-953. doi: 10.1109/19.948305 [15] jennrich robert i., "asymptotic properties of non-linear least squares estimators," the annals of mathematical statistics (1969): 633-643. doi: 10.1214/aoms/1177697731 [16] schoukens johan, rik pintelon, identification of linear systems: a practical guideline to accurate modelling, elsevier, 2014. url: http://goo.gl/dg7uuv [17] balogh lászló, istván kollár, attila sárhegyi, "maximum likelihood estimation of adc parameters," instrumentation and measurement technology conference (i2mtc), 2010 ieee, 2010. doi: 10.1109/imtc.2010.5488286 [18] tamás virosztek, "adc testing in practice, using maximum likelihood estimation: report in students' scientific circle (tdk)", 49 p., 2013.
url: http://mycite.omikk.bme.hu/doc/149632.pdf [19] renczes b., kollár i., carbone p., moschitta a., pálfi v., virosztek t., "analyzing numerical optimization problems of finite resolution sine wave fitting algorithms", accepted for publication at ieee instrumentation and measurement technology conference (i2mtc), may 11-14 2015, pisa, italy [20] attivissimo f., giaquinto n., kale i., "inl reconstruction of a/d converters via parametric spectral estimation," ieee transactions on instrumentation and measurement, vol. 53, no. 4, pp. 940-946, aug. 2004. doi: 10.1109/tim.2004.831508 [21] šaliga ján, et al., "a comparison of least squares and maximum likelihood methods using sine fitting in adc testing," measurement 46.10 (2013): 4362-4368. doi: 10.1016/j.measurement.2013.05.004

acta imeko, february 2015, volume 4, number 1, 105-110, www.imeko.org  virtual quasi-balanced circuits and method of automated quasi-balancing  artur skórkowski, adam cichy, sebastian barwinek  institute of measurement science, electronics and control, silesian university of technology, akademicka 10 street, 44-100 gliwice, poland  section: research paper  keywords: automatic balancing; virtual instrument; quasi-balanced circuit; labview development  citation: artur skórkowski, adam cichy, sebastian barwinek, virtual quasi-balanced circuits and method of automated quasi-balancing, acta imeko, vol. 4, no. 1, article 16, february 2015, identifier: imeko-acta-04 (2015)-01-16  editor: paolo carbone, university of perugia  received january 10th, 2014; in final form march 25th, 2014; published february 2015  copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited  funding: this work was supported by institute of measurement science, electronics and control, silesian university of technology, poland  corresponding author: artur skórkowski, e-mail: artur.skorkowski@polsl.pl

abstract  a basic purpose of this research was to verify the possibility of automatic balancing in the virtual realization of a quasi-balanced circuit for capacitance measurements. the diagrams of a virtual quasi-balanced instrument are presented in this paper. the tested circuit was built using a pc computer and the daq card ni-6009. the daq card and the calculations were controlled by an application developed in the graphical development platform labview.

1. introduction  quasi-balanced circuits are ac circuits destined for measuring impedance components. they have a special selected state, the so-called quasi-equilibrium state, which is usually a predetermined phase shift between the selected signals. the advantage of quasi-balanced circuits is the use of only one control element. the quasi-equilibrium state is an a priori assumed non-zero state, generally meant as the achievement of a determined phase shift between the selected signals of the circuit. maximum convergence is the advantage of the circuits under consideration, whereas the lack of the possibility of simultaneous measurement of both immittance components is the disadvantage, although the measurement of the second component is usually possible after an uncomplicated reconfiguration of the circuit. 2. quasi-balanced circuit for impedance components measurements  there are many solutions of quasi-balanced circuits for measuring impedance components, e.g. those presented in [1…7]. figure 1 shows an example of the circuit used for measuring a capacitance modelled by a series combination of rc [8].

figure 1. block diagram of the quasi-balanced circuit for capacitance measurements.

modern measuring instruments are more and more often built as virtual instruments, in which the operations performed on measurement signals in analog techniques are performed by software on sampled and quantized signals. the block diagram of the circuit (figure 1) describing analog processing becomes then a measurement algorithm (virtual instrument). quasi-balanced circuits can be virtualized very easily, since there are only operations of summing, amplifying or shifting signals by ±π/2 in the discussed circuits; phase-sensitive detection can also be realized with algorithmic methods. the equations describing the selected output signals w1, w2 in the system shown in figure 1 have the form:

w1 = a·vx + b·ix·e^(jπ/2)
w2 = b·ix·e^(jπ/2)   (1)

where a is the voltage amplifier gain, b is the conversion factor of the current/voltage converter, and vx and ix are the voltage and current of the rc object under test, respectively. the complex numbers in eq. (1) can be expressed in polar form as follows:

w1·e^(jψ1) = a·vx·e^(jφ1) + b·ix·e^(jφ2)·e^(jπ/2)
w2·e^(jψ2) = b·ix·e^(jφ2)·e^(jπ/2)   (2)

where w1, w2 are the modules of the selected signals of the circuit; ψ1, ψ2 the phases of the selected signals of the circuit; vx, ix the modules of the voltage and current of the tested rc two-port; and φ1, φ2 their phases. after dividing both sides of the system of equations (2) by each other one obtains the expression:

(w1/w2)·e^(j(ψ1-ψ2)) = (a·vx·e^(jφ1) + b·ix·e^(jφ2)·e^(jπ/2)) / (b·ix·e^(jφ2)·e^(jπ/2))   (3)

which can be brought to the form:

(w1/w2)·e^(jφw) = (a/b)·zx·e^(j(φ1-φ2-π/2)) + 1   (4)

where φw = ψ1 - ψ2 is the angle of the phase shift between the selected signals of the circuit and zx is the modulus of the impedance of the tested rc two-port. dependence (4) is a complex number equation and can be written as a system of two real number equations in the trigonometric form:

(w1/w2)·cos φw = 1 + (a/b)·zx·sin φx
(w1/w2)·sin φw = (a/b)·zx·cos φx   (5)

where φx = φ1 - φ2 is the phase angle of the tested impedance. after dividing both sides of the system of eq. (5) by each other and a trigonometric transformation, one obtains the equation describing the signal φw being detected as a function of the circuit parameters a and b as well as of the tested impedance components:

φw = arccot( (1 + (a/b)·zx·sin φx) / ((a/b)·zx·cos φx) ) = arccot( (b + a·im(zx)) / (a·re(zx)) )   (6)

if a ≠ 0 and re(zx) ≠ 0. in the quasi-equilibrium state the conversion equation (6) is reduced to the form:

cot(π/2) = (b0 + a0·im(zx)) / (a0·re(zx)) = 0   (7)

from which it is possible to calculate the passive component of the measured impedance:

im(zx) = -b0/a0   (8)

where a0 is the voltage amplifier gain in the quasi-equilibrium state and b0 is the conversion factor of the current/voltage converter in the quasi-equilibrium state. since the discussed circuit is destined for capacitance measurements, the capacitance of the capacitor is calculated from eq. (8). in the quasi-equilibrium state the phase angle is set to π/2; the capacitance of the capacitor can then be determined from the relationship:

cx = 1/(ω·|im(zx)|) = a0/(ω·b0)   (9)

where a0 and b0 are as in equation (8).
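a minimal numerical check of the chain (1)-(9), with an invented series rc object and arbitrary settings (component values, frequency and a0 are placeholders, not taken from the paper); it verifies that the detected angle reaches ±π/2 exactly when b equals the b0 given by (8) and (9).

```python
import numpy as np

r_x, c_x, f = 120.0, 0.33e-6, 1000.0       # hypothetical series rc object
w = 2 * np.pi * f
z_x = r_x + 1.0 / (1j * w * c_x)

v_x = 1.0                                   # arbitrary excitation voltage phasor
i_x = v_x / z_x

def detected_angle_deg(a, b):
    """phase shift between w1 and w2 from eq. (1)."""
    w2 = b * i_x * np.exp(1j * np.pi / 2)
    w1 = a * v_x + w2
    return np.degrees(np.angle(w1 / w2))

a0 = 1.0
b0 = a0 / (w * c_x)                         # quasi-equilibrium value per (8)-(9)
print(detected_angle_deg(a0, b0))           # -> -90.0: |phi_w| = pi/2 at balance
print(detected_angle_deg(a0, 1.2 * b0))     # away from balance: |phi_w| < 90
```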
in the case of using the circuit for capacitance measurements, and taking into account that

zx = rx + 1/(jω·cx) ,   (10)

equation (6) can be rewritten as:

φw = arccot( (b - a/(ω·cx)) / (a·rx) ) .   (11)

the detected signal φw is the phase shift between the selected signals w1 and w2. the equation describes the φw signal as a function of the parameters a and b and of the measured impedance components; eq. (11) is the conversion equation of the circuit of figure 1. the amplifier's voltage gain a or the conversion factor b of the current/voltage converter can be the adjusted parameter in the circuit of figure 1. the circuit is brought to the quasi-equilibrium state by changing the value of one selected, adjustable parameter, a or b; such a process is called the process of quasi-balancing the circuit. if the measuring circuit of figure 1 is destined for measuring the reactance of capacitors, then it is more advantageous to change the setting of the parameter b, while changing the parameter a is more advantageous in circuits for measuring the capacitance. in both cases a simple relation between the adjustable parameter and the quantity being measured in the quasi-equilibrium state is obtained. such a feature is not of great importance in modern measuring instruments containing microprocessors, but in some cases (for instance, in order to decrease the energy consumption in portable instruments) one still tends to simplify calculations and to reduce the balancing time of the circuit. in the case of the adjustable parameter a, the parameter b remains constant during the whole measuring process and after achieving the quasi-equilibrium state:

b = b0 = const .   (12)

after substituting eq. (12) in eq. (6) and dividing the numerator and denominator of the argument of the arccotan function by a0, one obtains

φwa = arccot( (b0/a0 + (a/a0)·im(zx)) / ((a/a0)·re(zx)) ) = arccot( (1 - a0/a)·|im(zx)| / re(zx) )   (13)

where φwa is the signal being detected in the case of the adjustable parameter a. the ratio between the active and the passive component of the series rc impedance zx is the dielectric loss factor tg δx of this impedance:

tg δx = re(zx) / |im(zx)| ,   (14)

hence equation (13) can be written as follows:

φwa = arccot( (1 - a0/a) / tg δx ) .   (15)

figure 2 shows the dependence of the signal being detected, φwa, on the adjustable parameter a relative to the value of a0 for different typical values of tg δx.
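as a numerical illustration of the conversion characteristic (15) (the family of curves in figure 2), a minimal python sketch; the loss-factor values follow the figure legend, while a0 itself cancels out of the ratio.

```python
import numpy as np

def phi_wa_deg(a_over_a0, tan_delta):
    """detected signal per eq. (15); arccot(x) = pi/2 - arctan(x), in degrees."""
    return np.degrees(np.pi / 2 - np.arctan((1.0 - 1.0 / a_over_a0) / tan_delta))

ratios = np.linspace(0.85, 1.15, 7)
for td in (0.001, 0.005, 0.02, 0.05):
    print(f"tg d = {td}:", np.round(phi_wa_deg(ratios, td), 2))
```

the curves cross 90° exactly at a = a0 regardless of tg δx, which is what makes the two-point procedure of the next section possible.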
3. automated quasi-balancing  figure 3 shows a simplified structure of the virtual instrument executed in the labview graphical programming environment, according to the approach presented in [8]. the quasi-balanced circuit for capacitance measurements shown in figure 1 was executed as a virtual instrument (figure 3). measurement signals, such as the voltage drop across the measured impedance and the current converted into a voltage, were applied to the data acquisition card usb ni 6009. further conversion of the signals in the measuring channels was carried out by a program executed in the labview graphical programming environment. the amplifier voltage gain or the conversion factor of the current/voltage converter may be the adjustable parameter in this system. by amending the value of one selected adjustable parameter, a or b, the system is automatically set into the quasi-equilibrium state. in the circuit for capacitance measurement it is better to adjust the parameter a at a constant value of the parameter b = b0. the process of the automated quasi-balancing of the circuit shown in figure 1, aiming at determining the capacitance cx given by eq. (9), consists in changing the setting of a at the constant setting of b (b = b0) until the value of the signal being detected achieves π/2. the automated quasi-balancing of the circuit is performed in three steps, according to the conversion characteristic presented in figure 4:
- for an optional setting a = a1 the indication of the phase-sensitive detector φwa1 is determined (point 1 in figure 4),
- the setting of a is changed and for a2 ≠ a1 the indication of the phase-sensitive detector φwa2 is again determined (point 2 in figure 4),
- according to the relationships in the system of equations (16), the setting a0 corresponding to the selected quasi-equilibrium state φwa = π/2 is determined (point 0 in figure 4):

φwa1 = arccot( (1 - a0/a1) / tg δx )
φwa2 = arccot( (1 - a0/a2) / tg δx )   (16)

figure 2. φwa signal vs. relative parameter a/a0 for different loss factor tg δx values.
figure 3. the labview realization of the virtual capacitance meter.
figure 4. φwa signal vs. parameter a for unknown loss factor tg δx values (conversion characteristic).

for determining the setting a0 it is not necessary to know the loss factor tg δx, since it has a constant value in the system of equations (16) and does not appear in the solution of this system, which can be presented as follows:

a0 = a1·a2·(cotan φwa1 - cotan φwa2) / (a1·cotan φwa1 - a2·cotan φwa2)   (17)

having finished the automated quasi-balancing of the circuit of figure 1, one can determine the capacitance of the tested capacitor from equation (9), based on the known settings b0 and a0. exemplary results of the tests made for the virtual circuit for capacitance measurements during classical (by changes of the adjustable parameter by a given constant value) and automated quasi-balancing are given in table 1.

table 1. comparison of selected measurement results obtained during classical and automated quasi-balancing of the circuit for capacitance measurement.
the classical quasi-balance method:
a0 | φwa0 | cx, µf
1.0357 | 90.00 | 0.3294
the automated quasi-balance method:
a1 | φwa1 | a2 | φwa2 | a0 | φwa0 | cx, µf
100.0000 | 15.14 | 1.0404 | 89.01 | 1.0356 | 90.00 | 0.3294
3.9401 | 20.00 | 1.0875 | 80.00 | 1.0360 | 90.00 | 0.3295
1.9393 | 30.09 | 1.1482 | 70.09 | 1.0359 | 90.00 | 0.3295
1.5238 | 40.00 | 1.2272 | 60.00 | 1.0374 | 89.82 | 0.3299
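a minimal python sketch of the two-point solution (17) followed by the capacitance readout (9); the input values reproduce the first row of table 1, while b0 and the working frequency are not reported in the excerpt and are therefore left as hypothetical inputs.

```python
import numpy as np

def a0_from_two_points(a1, phi1_deg, a2, phi2_deg):
    """quasi-equilibrium setting a0 from two detector readings, eq. (17)."""
    cot1 = 1.0 / np.tan(np.radians(phi1_deg))
    cot2 = 1.0 / np.tan(np.radians(phi2_deg))
    return a1 * a2 * (cot1 - cot2) / (a1 * cot1 - a2 * cot2)

def capacitance(a0, b0, freq_hz):
    """eq. (9): cx = a0 / (omega * b0); b0 and freq_hz are placeholders here."""
    return a0 / (2 * np.pi * freq_hz * b0)

a0 = a0_from_two_points(100.0000, 15.14, 1.0404, 89.01)
print(round(a0, 4))   # -> 1.0356, matching the first row of table 1
```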
4. double quasi-balanced circuits  in general, quasi-balanced circuits only allow the measurement of one impedance component, but it is possible to build circuits to measure two components of impedance, for example parallel quasi-balanced circuits. some quasi-balanced circuits allow the measurement of a mutual relationship between the components of the impedance, e.g. the quality factor. in such systems, double quasi-balancing in two successive steps is applied. the circuit of the quasi-balanced bridge designed to measure the quality factor of real inductors is presented in figure 5. the symbols in figure 5 represent, respectively: r3 a standard variable resistor; vs the power supply voltage; r a potentiometer resistance; n a potentiometer setting (0 < n < 1); and i1, i2 the currents of the branches of the bridge. the object under test is modeled as a series connection of resistance rx and inductance lx. the quasi-balancing process requires two steps. in the first state of quasi-equilibrium the phase angle between vad and vdc equals π/2; the slider of the potentiometer r is located in the position for which n = ½ and the regulatory element is the resistor r3. in the second quasi-balance state the phase angle between vdc and vcb also equals π/2; the control element is the potentiometer r. in the second quasi-balance state the n parameter is read and the relationship for the determination of the measured quality factor qc is:

qc = (1 - 2n) / n .   (18)

based on the analysis of the bridge in figure 5 it is possible to build a non-bridge structure performing the same operations on the current and voltage signals of the tested impedance. the procedure of deriving a non-bridge circuit has been presented in [9]. the non-bridge circuit has the structure shown in figure 6; it processes the measurement signals according to the principle of operation of the bridge from figure 5. the selected signals are the phase shifts between the w11 and w12 signals and between the w21 and w22 signals. it can easily be implemented as a virtual instrument.

figure 5. diagram of the quasi-balanced bridge for loss factor measurement.
figure 6. block diagram of a quasi-balanced circuit with dual quasi-balancing.

figure 7 shows a view of the prototype of the quality factor meter built according to the previously described concept. a coil under test was powered from the rigol dg1022 dds generator. the current of the object was converted to a voltage across a 1 kω standard resistor with accuracy class 0.01. the voltage of the object and the voltage proportional to its current were connected to the 16-bit daq ni usb-6251 [10]. the labview 2011 software package was used to build the virtual instrument [10]. the diagram of the virtual instrument is shown in figure 8 and its front panel in figure 9. the first tests were done as a simulation; the simulations confirmed the usefulness of the system to measure the quality factor of inductors. tests of the circuit were performed for reference inductances in the range from 0.05 h to 1 h at a frequency of 100 hz. the results were compared with the results obtained from the motech mic-4090 meter, for which the manufacturer declares a quality factor accuracy of 0.5 %. the exemplary dependence of the errors versus the measured quality factor is shown in figure 10. 5. conclusions  the tests of the presented way of quasi-balancing the circuit for capacitance measurements proved that the proposed procedure is correct and showed the possibility of a significantly faster achievement of the quasi-equilibrium state than in the case of classical balance methods by changes of the adjustable parameter by a given constant value. the presented automated quasi-balance method does not reduce the accuracy of the phase detector operation and does not significantly increase the uncertainty of determining the tested capacitor capacitance.
during the investigations an insignificant influence of the shape of the circuit conversion characteristic was observed (figure 4); also, the selection of the points on this characteristic had a negligible influence on the accuracy of achieving the quasi-equilibrium state. further investigations aim at the detailed determination of the influence of the selection of points 1 and 2, during the realization of the procedure of quasi-balancing the circuit, on the accuracy of assessing the setting a0 in the quasi-equilibrium state. further, the examination of the possibilities of using the presented measuring circuit and the automated quasi-balancing procedure for determining the dielectric loss factor tg δx of an rc impedance is planned. the theory and implementation of a non-bridge quasi-balanced measuring circuit with dual quasi-balancing, designed for measurements of the quality factor, have been presented as well. the main advantages of the circuit are maximum convergence and a simple measuring process; it requires two independent controls. the circuit described above has been implemented as a virtual system, using the labview package. simulation tests and tests carried out on real objects confirmed the usefulness of the proposed solutions. the level of errors reaches 5 %, but the study was focused on the prototype, which will be further improved.

figure 7. view of the prototype of the quality factor meter realized as the quasi-balanced circuit with dual quasi-balancing.
figure 8. block diagram of a quasi-balanced circuit with dual quasi-balancing.
figure 9. front panel of a quasi-balanced meter with dual quasi-balancing.
figure 10. error vs. the measured quality factor.

references  [1] karandeev k. b.: "bridge and potentiometer methods of electrical measurements", moscow, peace publishers, 1966. [2] atmanand m.a., jagadeesh kumar v., murti v.g.k.: "a novel method of measurement of l and c", ieee transactions on instrumentation & measurement, vol. 44, no. 4, august 1995, pp. 898-903. [3] atmanand m. a., jagadeesh kumar v. and murti v. g. k.: "a microcontroller based quasi-balanced bridge for the measurement of l, c and r", ieee transactions on instrumentation & measurement, vol. 45, no. 3, june 1996, pp. 1-5. [4] burbelo m.i.: "universal quasi-balanced bridges for measuring the parameters of four-element two-terminal networks", izmeritel'naya tekhnika, vol. 44, no. 11, november 2001, pp. 1130-1133. [5] marcuta c., fosalau c. and petrescu c.: "a virtual impedance measuring instrument based on a quasi-balanced bridge", proceedings of the 14th int. symposium imeko tc4, september 2005, pp. 517-523. [6] amira h., hfaiedh m. and valentin m.: "quasi-balanced bridge method for the measurements of the impedances", iet science, measurement and technology, vol. 3, iss. 6, 2009, pp. 403-409. [7] jagadeesh kumar v., sankaran p., sudhakar rao k.: "measurement of c and tan δ of a capacitor employing psds and dual slope dvms", ieee transactions on instrumentation & measurement, vol. 52, no. 5, august 2003, pp. 1588-1592. [8] skórkowski a., cichy a.: "testing of the virtual realization of the quasi-balanced circuit for the capacitance measurement", measurement automation and monitoring, vol. 53, no. 12, 2007, pp. 91-93. [9] cichy a.: "non-bridge circuit with double quasi-balancing for measurement of dielectric loss factor", iet science, measurement and technology, vol. 7, iss. 5, 2013, pp. 274-279. [10] sine.ni.com.
acta imeko, issn: 2221-870x, september 2016, volume 5, number 2, 19-25  diet and health in central-southern italy during the roman imperial time  luca bondioli 1, alessia nava 1,2, paola francesca rossi 1, alessandra sperduti 1,3  1 section of bioarchaeology, national museum of prehistory and ethnography "luigi pigorini", polo museale del lazio, p.zza marconi 14, 00144 rome, italy  2 department of environmental biology, sapienza university of rome, p.le aldo moro 5, 00185 rome, italy  3 university of naples "l'orientale", p.zza san domenico maggiore 12, 80134 naples, italy  section: research paper  keywords: roman imperial time; diet and health; carbon and nitrogen isotopes  citation: luca bondioli, alessia nava, paola francesca rossi, alessandra sperduti, diet and health in central-southern italy during the roman imperial time, acta imeko, vol. 5, no. 2, article 4, september 2016, identifier: imeko-acta-05 (2016)-02-04  section editors: sabrina grassini, politecnico di torino, italy; alfonso santoriello, università di salerno, italy  received march 15, 2016; in final form april 7, 2016; published september 2016  copyright: © 2016 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited  funding: ministero dei beni e delle attività culturali e del turismo, italy  corresponding author: luca bondioli, e-mail: luca.bondioli@beniculturali.it

abstract  the reconstruction of ancient diets by means of stable isotopes analysis acquires a deeper meaning when their results are compared with other odonto-skeletal indicators which are strongly contextualized in the light of historical and archaeological evidence. nevertheless, the outcomes can be contradictory or, more realistically, they may not completely satisfy our hypotheses on how complex and diverse conditions - such as health status, life style, diet and nutrition - can actually interrelate in the life course of an individual. in this study we present and discuss evidence from isola sacra and velia, two roman imperial age coastal towns. the δ15n and δ13c values are compared with demographic and health status parameters, such as age, sex, stature, auricular exostoses, dish, cribra orbitalia, and enamel defects.

1. introduction  the study of the funerary complexes is always and necessarily accompanied by the analysis of human bones and teeth. a fruitful relationship between the two areas of research was in fact established through dialogue and collaboration both in the field and in the laboratory, and led to significant advances in the historical reconstruction of the biocultural dynamics that characterized the evolution of human populations in space and time. the final outcome is the attempt to provide a consistent response to the growing demand, by archaeologists and historians, for coherent reconstructions of past adaptations to the environment. skeletal anthropology has matured, since the immediate post-war period, its own disciplinary status, which takes advantage of the development of advanced technologies and methods of investigation. nevertheless, the reconstruction of the biological adaptations of past populations, based on odontoskeletal remains, is extremely difficult and complex. this difficulty is even more evident when approaching the subject of dietary and health status reconstructions in the highly stratified roman imperial society. the study of biocultural adaptation scenarios of ancient populations has, in fact, proved to be a difficult challenge. indeed, the validity of the mortality models based on skeletal analyses was deeply criticized by the contributions of boquet and masset [1] and wood and associates [2], who undermined the reconstructive power of paleodemographic and health status indicators in the odontoskeletal record. fortunately, over twenty years after the publication of these basic critical tools, skeletal anthropology is consistently increasing the quantity and especially the quality of the information today extractable from skeletal populations.
this contribution illustrates the usefulness of collagen carbon and nitrogen isotopic composition studies as a proxy of ancient diet. the data from two major italian coastal communities of roman times are used not only to reconstruct diet, but are analysed in the framework of the morphological, histological and paleopathological evidence, opening unexpected interpretive opportunities. 2. materials and methods  2.1. the skeletal material from isola sacra and velia  the two skeletal series come from the roman imperial necropolises of isola sacra (lazio, i-iii cent. ad) and velia-porta marina (campania, i-ii cent. ad) (figure 1). isola sacra has been the main necropolis of the harbor town of portus romae, serving the capital of the empire; the several excavation campaigns have yielded more than 2000 skeletal individuals. porta marina was the southern gate graveyard of velia, a port town on the tyrrhenian sea; more than 300 skeletons were recovered. the high level of preservation and completeness of both skeletal collections has allowed an extensive and highly standardized data recording. in this study the individual isotopic data, available for 117 individuals from velia and 105 individuals from isola sacra, are crossed with other anthropological data.

figure 1. geographic location of the port towns of portus romae and velia porta marina.

2.2. sex and age-at-death diagnosis  when dealing with well represented skeletal individuals, sex estimates can reach high levels of consistency: sexing on the basis of skull and pelvis morphology gives 98 % of correct diagnoses [3], and a recent study has demonstrated a similar discriminative power in the application of univariate and multivariate methods on long bones [4]. the level of reliability of the several aging techniques proposed in the literature has been questioned and tested in numerous anthropological studies. the most critical contribution [1] pointed out that skeletal age-at-death estimates tend to mimic the age distribution of the reference sample from which the criteria were developed. this issue can be partly overcome by employing procedures such as the "combined" method [5], the summary age method [6] and the transition analysis [7]. another important issue is the level of consistency among researchers and across laboratories [8]-[10]. in this study, the analyses for sex and age-at-death assessments were performed following the criteria commonly reported in the literature [3], [11], [12]. sex was mainly diagnosed on the basis of the morphology of the pelvis and the skull.
age at death of adult individuals was estimated by means of many odontoskeletal indicators, which included, among others, degenerative changes of the pubic symphysis and of the auricular surface of the pelvis, morphological variation of the sternal end of the 4th rib, dental wear, and ecto- and endocranial suture closure. the subadults were aged according to dental formation and eruption stages [13] and skeletal maturity and dimensions [14]. in our age-at-death assessment we followed the approach of the combined method in order to minimize the "reference sample effect". we also tested, in the velia series, the intra-observer level of consistency: a subsample of 241 skeletal individuals was aged by two researchers separately. the level of agreement reaches 88.8 %, while in the remaining cases the discordance never exceeded one age class. as expected, the inter-observer error was very low for subadults (5.8 % of the total estimates) in comparison with the adults (>20 years, 18.6 %). the cases of discrepancy have been resolved with an additional assessment performed by a third observer, and using wider age classes. 2.3. stature  adult stature estimation can be considered a rather good descriptor of the life conditions of a population because, beyond the direct action of the genetic pool, it is strongly influenced by the health status and diet during early life [15]. estimation of stature in archaeological samples relies on linear regression formulae, derived from reference collections, that correlate the length of the long bones (femur, tibia, fibula, humerus, radius, ulna) with living stature (i.e. [16]-[19]). the choice of the most proper regression formula to be used is crucial: the equations available in the literature are calibrated on specific modern population samples, and the body proportions of the archaeological sample should be consistent with those of the reference one [20]. as suggested by giannecchini and moggi-cecchi [21], pearson's regression formulae are the most reliable for italian historical samples, thus adult living stature in our roman italian coastal population was calculated applying this regression method. the maximum length of each bone, with no regard to side, was measured following the standard technique by martin and saller [22]. for each individual, the length of all the available long bones (considering femur, tibia, humerus, radius) was measured, and the stature was given as the mean value of all the obtainable estimates.
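a minimal sketch of the per-bone regression-and-average procedure described in section 2.3. the two regression equations quoted here are the classical pearson male formulas as commonly reported (maximum bone lengths and stature in cm); they should be checked against [18] before any real use.

```python
# pearson-style male stature regressions, stature and bone lengths in cm;
# coefficients as commonly quoted for pearson (1899) -- verify against [18]
MALE_EQUATIONS = {
    "femur":   lambda length: 81.306 + 1.880 * length,
    "humerus": lambda length: 70.641 + 2.894 * length,
}

def estimate_stature(bone_lengths_cm, equations=MALE_EQUATIONS):
    """mean of all per-bone estimates, as done for each individual in 2.3."""
    estimates = [equations[bone](length)
                 for bone, length in bone_lengths_cm.items() if bone in equations]
    return sum(estimates) / len(estimates)

# hypothetical individual with femur 44.2 cm and humerus 32.0 cm
print(round(estimate_stature({"femur": 44.2, "humerus": 32.0}), 1))  # ~163.8
```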
2.4. pathological assessments  in the last decades, paleopathology has undergone a significant shift from descriptive analysis of individual cases to a population-based epidemiological perspective. the quantification and cross comparison of past population health status present problems of diagnosis and interpretation [2], [23]-[25], which correlate with the standardization of the scoring methods and with inter-observer consistency [26], [27]. finally, the paleopathological practice has to deal with the continuum model of pathological skeletal signs and the issue of how to better ascribe the progressive manifestations to clear-cut discrete categories. in our study, the following macroscopic health indicators were recorded: cribra orbitalia; enamel hypoplasia; diffuse idiopathic hyperostosis (dish); external auricular exostosis (eae). all the health indicators were systematically recorded for the whole sample. the general guidelines for the diagnosis and assessment of the pathological changes are reported in buikstra and ubelaker [11] and grauer [28]. dish was positively diagnosed by the pathognomonic features of the vertebral column, i.e. the ossification of the anterior longitudinal ligaments of the spine. enamel hypoplastic defects on teeth were counted and chronologically assessed by measuring their relative position from the cervical line (cemento-enamel junction) [29]. a 4-grade recording system was used in the assessment of the presence and severity of cribra orbitalia, according to buikstra and ubelaker [11], while the recording of external auricular exostosis followed the method presented in crowe and colleagues [30]. 2.5. dental enamel histology  histomorphometrical analysis of tooth crowns allows enhancing and interpreting enamel microstructures that are formed during amelogenesis (i.e. weekly retzius lines and daily cross striations [31]) as a result of the regular periodicity of enamel matrix secretion. a physiological stress event strong enough to temporarily disrupt amelogenesis causes an abrupt change in the orientation of the enamel prisms, visible in a histological thin section as a broad dark band (accentuated retzius line (arl) or wilson band [32]). thus, using the long and short period markings of enamel formation, it is possible to reconstruct the temporal scales of the stress event chronology. the scale is zeroed at birth, marked by the first accentuated stria, the neonatal line [33], [34]. to establish the chronology of the stress events in the deciduous teeth, the regression formulas proposed by birch and dean [35] were used, while for the first permanent molars the method by guatelli-steinberg and collaborators [36] has been applied, based on the average daily rate of enamel secretion (about 2.85 µm per day) within 200 µm from the enamel dentine junction. starting from the tip of the dentine horn and following an enamel prism toward the outer surface, the first arl was detected and the corresponding length was measured; the same procedure was applied to all the arl encountered in the lateral enamel moving toward the tooth neck. the regression formulas transform distances into days of enamel formation. this methodology has been used to determine, with daily accuracy, the age at death of immature individuals in a subsample from velia presenting the neonatal line and the crown still forming at the time of death. thin sections of dental crowns were obtained using the method proposed by caropreso and colleagues [37]. the teeth, after an ultrasonic bath, were included in bicomponent epoxy resin (epofix, buehler) and cut in longitudinal buccolingual sections by means of a diamond blade microtome (leica microtome diamond blade 1600, leica ag). each section was thinned with abrasive paper mounted on a motorized grinder in distilled water (minimet 1000 automatic polishing machine, buehler) down to about 100 µm. further polishing with alumina powder (0.05 µm gamma alumina micropolish b, buehler) allowed the elimination of the cutting marks and the enhancement of the features of the fine structure of the enamel. after assembling the cover glass, each thin section was digitally recorded through a camera (leica dfc 295) attached to an optical microscope (laborlux s, leica ag) under polarized light, with a magnification of 40× (resolution 1600×1200 pixels). to digitally analyze the images, each crown was reconstructed by assembling overlapping pictures using dedicated software (fiji plugin mosaicj, [38]). 2.6. isotopes analyses  the isotope analyses for diet reconstruction were previously performed and published [39]-[41]. full details of the sampling and analytical methodologies, as well as the individual data, are given in the above mentioned studies. 3. results and discussion  3.1. δ15n and δ13c values of isola sacra and velia  the diet composition of velia and isola sacra, compared with data available in the literature for the roman empire, is presented in figure 2. the two samples under study are characterized by a high intra-site variability: for δ15n, in fact, the values show the coexistence of both low and high protein intake diets, the latter possibly deriving from marine food. as already suggested in [39]-[41], males tend to consume more high trophic foods (meat and/or fish) than females (figure 3), even if this tendency is statistically significant in velia but not in isola sacra. 3.2. velia: diet and stature  adult stature, although being a multifactorial trait influenced by a complex interplay of genetic and environmental factors, is commonly used as a proxy for health and nutritional status during growth [15]. the relationship between stature and diet composition for velia is presented in figure 4. results show a positive correlation between individual height and δ15n values, reaching significance only in the male series (generalized additive regression model, p=0.022).

figure 2. plot of convex hulls of the δ15n and δ13c individual values in velia (blue), isola sacra (red) and other roman imperial age skeletal series (grey).
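a minimal sketch of the kind of diet-stature association test reported above, using a spearman correlation with a permutation p-value in place of the paper's generalized additive model; the data are synthetic stand-ins, not the velia measurements.

```python
import numpy as np

rng = np.random.default_rng(42)

def spearman(x, y):
    """rank correlation via double argsort (adequate for continuous, tie-free data)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

# synthetic stand-ins: delta-15N (permil) and stature (cm) for 60 males
d15n = rng.normal(10.5, 1.5, 60)
stature = 160 + 1.2 * (d15n - 10.5) + rng.normal(0, 4, 60)

rho = spearman(d15n, stature)
perm = np.array([spearman(d15n, rng.permutation(stature)) for _ in range(5000)])
p = np.mean(np.abs(perm) >= abs(rho))
print(f"rho = {rho:.3f}, permutation p = {p:.4f}")
```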
to digitally analyze the images, each crown was reconstructed assembling overlapping pictures using dedicated software (fiji plugin mosaic j, [38]). 2.6. isotopes analyses  the isotopes analyses for diet reconstruction were previously performed and published [39]-[41]. full details of the sampling and analytical methodologies, as well as individual data, are given in the above mentioned studies. 3. results and discussion  3.1. δ15n and δ13c values of isola sacra and velia   the diet composition of velia and isola sacra, compared with data available in literature for the roman empire, is presented in figure 2. the two samples under study are characterized by a high intra-site variability. for δ15n, in fact, the values show the coexistence of both low and high protein intake diets, the latter possibly deriving from marine food. as already suggested in [39]-[41], males tend to consume more high trophic foods (meat and/or fish) than females (figure 3), even if this tendency is statistically significant in velia but not in isola sacra. 3.2. velia: diet and stature   adult stature, although being a multifactorial trait influenced by a complex interplay of genetic and environmental factors, is commonly used as a proxy for health and nutritional status during growth [15]. the relationship between stature and diet composition for velia is presented in figure 4. results show a positive correlation between individual height and δ15n values, reaching the significance only in the male series (generalized additive regression model, p=0.022). figure 2. plot of convex hulls of the δ 15 n and δ 13 c individual values in velia  (blue), isola sacra (red) and other roman imperial age skeletal series (grey). acta imeko | www.imeko.org  september 2016 | volume 5 | number 2 | 22  3.3. velia and isola sacra: diet and auricular exostosis  external auricular exostosis (eae) is an abnormal bony growth within the external ear meatus that has proved to be very informative in detecting aquatic working activities in past populations [40]. the frequency of eae in isola sacra and velia adult males is rather high (21.1 % and 35.3 %, respectively). the presence of this specific occupational marker is significantly higher among the marine food consumers in velia, as shown in figure 5. conversely, at isola sacra, the correlation between eae and the c, n isotopic ratio fails to be significantly different from zero. 3.4. velia: diet and cribra orbitalia  cribra orbitalia is a pathological lesion consisting of porosities in the outer table of the orbital roof. it is mainly linked to iron deficiency anemia, of acquired (unbalanced diets, diarrheal disease, intestinal parasites) or hereditary origin (i.e. sickle cell or thalassemia). other possible but rarer causes have also been acknowledged [42]. cribra frequency in velia is 28.4 % (n=74) with no differences between sexes. the combined data of diet and cribra (figure 6) point out that the individuals showing the highest δ15n values do not present the lesions, while for the lower values both affected and unaffected individuals are equally represented. overall, cribra and diet are not correlated at a level significantly different from zero. 3.5.  velia: diet and dish  diffuse idiopathic skeletal hyperostosis (dish) is a pathological condition characterized by vertebral ankylosis and extraspinal bone proliferations. dish has a complex etiology: it is often found associated with different metabolic disorders and it is more frequent in aged males. 
in the velia male sample, the occurrence of dish is rather high (12.5 %; n=48) when compared with other archeological series [43], and it significantly correlates with high δ15n values (figure 7), thus contributing positive results to the issue of the possible influence of nutritional habits on the onset and manifestation of the pathology [44], [45]. 3.6. velia and isola sacra: health status and weaning  histomorphometrical analysis of the dental crowns was performed in a subsample of 79 subadults from velia and compared with the results obtained from the histology of 127 deciduous teeth from the isola sacra sample [46]. figure 8 shows the monthly prevalence distribution of the defects in both velia and isola sacra. in both samples the general trend of the curve is comparable, and shows a steep rise in months 1-5, possibly related, as expectable, to the decline of the maternal buffer. the higher monthly prevalence values observed in isola sacra suggest worse general health conditions during infancy, which could be related to a much broader range of socio-economic status in comparison with the smaller and more homogeneous velian community. such morbidity trends are usually associated with the stress of weaning; however, the maximum peak in velia seems to occur too early to be consistently explained by this hypothesis. in isola sacra the high number of physiological stresses (arl) registered from birth to 9 months seems not to affect the survivorship of children at later ages (r=-0.037). similarly, the number of arl does not consistently associate with diet (figure 9). 4. conclusions  the carbon and nitrogen stable isotope analysis depicts the dietary habits of the velia and portus romae communities, showing a high degree of variability in the diet composition, especially regarding the levels of protein intake. both samples show indication of marine food consumption. this evidence can be explained by the geographical location of the towns, together with the social complexity and differential subsistence strategies within the communities. in the study of past populations, diet is usually considered a good proxy for wellbeing and it is usually presented and discussed as stand-alone evidence. conversely, we think that a more in-depth comprehension of ancient life conditions can be obtained through a strong historical and archaeological contextualization and through the integration of data from multiple sources. nevertheless, crossing stable isotope data with demographic, morphological and health parameters does not always yield univocal results of simple and straightforward interpretation.

figure 7. velia: nitrogen and carbon isotopic delta values and presence of dish.
figure 8. maximum prevalence of stress events in velia and isola sacra.
figure 9. isola sacra nitrogen and carbon isotopic delta values and arl number in the deciduous teeth analysed.
in fact, we found that nitrogen delta values significantly correlates with a specific occupational activity (eae), higher statures and the manifestation of a skeletal disorder (dish), conversely the association of diet and aspecific stress indicators (arl and cribra orbitalia) was not detected. possible explanations can be suggested by the fact that cribra orbitalia, observed in the adult sample of velia were developed during infancy and adolescence. however, recent calibrated studies on bone remodeling rates [47] suggest that the isotopic signal in the adult bone still reflects most of its later infancy, adolescence and early adulthood. the absence of correlation between diet and arl in the growing segment of isola sacra deserves further consideration and analysis, since it goes straight to the issue of the differential (and hidden) frailty of individuals and its influence in their morbidity and mortality. in conclusion, the identification of physiological stress events and processes (such as weaning), the quantification of timing of formation, maturation and tooth eruption, the reconstruction of the differential diet interand intra population and its changes in the course of life, represent invaluable information, which allows to build models of adaptive strategies, migrations, health and growth in an ancient population. since these events are recorded at the tissue level in varying degrees during growth, their measure allows to challenge (although not entirely) the osteological paradox, offering the unique opportunity to carry out a longitudinal and figure 8. maximum prevalence of stress events in velia and isola sacra.   figure  9.  isola  sacra  nitrogen  and  carbon  isotopic  delta  values  and  arl  number in the deciduous teeth analysed.  figure 7.  velia: nitrogen and carbon isotopic delta values and presence of dish.  acta imeko | www.imeko.org  september 2016 | volume 5 | number 2 | 24  transverse study of a mortality sample. the ultimate goal is to lay the basis for a (possible) consilience, or unity of knowledge, of history, archeology and anthropology. references  [1] j. boquet-appel, c. masset, farewell to paleodemography, j. hum. evol. 11 (1982) pp.321-333. [2] j.w. wood, g.r. milner, h.c. harpending, k.m. weiss, the osteological paradox: problems of inferring prehistoric health from skeletal samples, curr. anthropol. 33 (1992) pp.343-358. [3] w.m. krogman, m.y. iscan, the human skeleton in forensic medicine, charles c. thomas, springfield, 1986, isbn 9780398052249. [4] m.k. spradley, r.l. jantz, sex estimation in forensic anthropology: skull versus postcranial elements, j. for. sci. 56 (2011) pp.289-296. [5] g. acsàdi, j. nemeskéri, history of human life span and mortality, akadémiai kiadò, budapest, 1970. [6] c.o. lovejoy, r.s. meindl, r.p. mensforth, t.j. barton, multifactorial determination of skeletal age at death: a method and blind test of its accuracy, am. j. phys. anthropol. 68 (1985) pp.1-14. [7] j.l. boldsen, g.r. milner, l.w. konigsberg, j.w. wood, “transitional analysis: a new method for estimating age from skeletons”, in: paleodemography: age distributions from skeletal samples. r.d.hoppa, j.w.vaupel (editors). cup, new york, 2002, isbn 978-0-511-06326-8, pp.73-106. [8] e.h. kimmerle, d.a. prince, g.e. berg, inter-observer variation in methodologies involving the pubic symphysis, sternal ribs, and teeth, j. for. sci 53 (2008) pp.594-600. [9] c.g. falys, m.e. 
lewis, proposing a way forward: a review of standardisation in the use of age categories and ageing techniques in osteological analysis (2004–2009), int. j. osteoarch. 21 (2011) pp.704–716. [10] h.m. garvin, n.v. passalacqua, current practices by forensic anthropologists in adult skeletal age estimation, j. for. sci.57 (2012) pp.427-433. [11] j.e. buikstra, d.h. ubelaker, standards for data collection from human skeletal remains, arkansas archaeological survey research series 44, fayetteville, 1994, isbn 978-1563490750. [12] t.d. white, p.a. folkens, human osteology, academic press, san francisco, 1991, isbn 978-0123741349. [13] s.j. alqahtani, atlas of human tooth development and eruption, queen mary and westfield college, 2009. [14] m. schaefer, s.m. black, l. scheuer, juvenile osteology: a laboratory and field manual, academic press, amsterdam, 2009, isbn, 978-0123746351. [15] a. mummert, e. esche, j. robinson, g.j. armelagos, stature and robusticity during the agricultural transition: evidence from the bioarchaeological record, econ. hum. biol. 9 (2011) pp.284-301. [16] m. trotter, g.g. gleser, estimation of stature from long bones of american whites and negroes, am. j. phys. anthropol.10 (1952) pp.463-514. [17] m. trotter, g.g. gleser, a reevaluation of estimation of stature based on measurements of stature taken during life and of long bones after death, am. j. phys. anthropol.16 (1958) pp.79-123. [18] k. pearson, on the reconstruction of the stature of prehistoric races. mathematic contributions to the theory of evolution, philos. trans. r. soc.192 (1899) pp.169–244. [19] l. manouvrier, la détermination de la taille d’apré les grands os des membres, mém. soc. anthropol. paris 4 (1893) pp.347–402. [20] s.d. ousley, “estimating stature”, in: a companion to forensic anthropology. d.d.dirkmaat, wiley-blackwell, 2012, isbn 9781-4051-9123-4, pp.330-334. [21] m. giannecchini, i. moggi cecchi, stature in archaeological samples from central italy: methodological issues and diachronic changes, am. j. phys. anthropol.135 (2008) pp.284-292. [22] r. martin, k. saller, lehrbuch der anthropologie, band ii, stuttgart, fischer, 1959. [23] j.l. boldsen, g.r. milner, “an epidemiological approach to paleopathology”, in: a companion to paleopathology. a.l.grauer (editor). wiley-blackwell, malden, ma., 2012, isbn 978-1-4443-3425-8, pp.114-132. [24] m.k. zuckerman, b.l. turner, g.j. armelagos, “evolutionary thought in paleopathology and the rise of the biocultural approach”, in: a companion to paleopathology. a.l.grauer (editor), wiley-blackwell, malden, ma., 2012, isbn 978-1-44433425-8, pp.34-57. [25] m.k. zuckerman, k.n. harper, g.j. armelagos, adapt or die: three case studies in which the failure to adopt advances from other fields has compromised paleopathology, int. j. osteoarch, (2015) doi: 10.1002/oa.2426. [26] r. macchiarelli, l. bondioli, l. censi, m. kristoff, l. salvadei, a. sperduti, intra and inter-osberver differences in direct and radiographic recording of tibial harris' lines, am. j. phys. anthropol. 95 (1994) pp.77-83. [27] k.p. jacobi, m.e. danforth, analysis of interobserver scoring patterns in porotic hyperostosis and cribra orbitalia, int. j. osteoarchaeol. 12 (2002) pp.248–258. [28] a.l. grauer (editor), a companion to paleopathology, wileyblackwell, malden, ma, 2012, isbn 978-1-4443-3425-8. [29] d.j. reid, m.c. dean, variation in modern human enamel formation times, j. hum. evol.50 (2006) pp.329-346. [30] f. crowe, a. sperduti, t. o’connel, o. craig, k. kirsanow, p. germoni, r. 
macchiarelli, p. garnsey, l. bondioli, water-related occupations and diet in two roman coastal communities (italy, first to third century ad): correlation between stable carbon and nitrogen isotope values and auricular exostosis prevalence, am. j. phys. anthropol.142 (2010) pp.355-366. [31] m.c. dean, growth layers and incremental markings in hard tissues: a review of the literature and some preliminary observations about enamel structure in paranthropus boisei, j. hum. evol.16 (1987) pp.157-172. [32] d.f. wilson, f.r. shroff, the nature of the striae of retzius as seen with the optical microscope, australian dent. j.15 (1970) pp.162-171. [33] c. zanolli, l. bondioli, f. manni, p. rossi, r. macchiarelli, gestation length, mode of delivery, and neonatal line thickness variation, hum. biol. 83, iss.6, art.8 (2011). [34] n. sabel, c. johansson, j. kuhnisch, a. robertson, f. steiniger, j.g. norén, neonatal lines in the enamel of primary teeth. a morphological and scanning electron microscopic investigation, arch. or. biol. 53 (2008) pp.954-63. [35] w. birch, m.c. dean, a method of calculating human deciduous crown formation times and of estimating the chronological ages of stressful events occurring during deciduous enamel formation, j. for. leg. med. 22 (2014) pp.127-144. [36] d. guatelli-steinberg, b.a. floyd, m.c. dean, d.j. reid, enamel extension rate patterns in modern human teeth: two approaches designed to establish an integrated comparative context for fossil primates, j. hum. evol. 63 (2012) pp.475-486. [37] s. caropreso, l. bondioli, d. capannolo, thin sections for hard tissues histology: a new procedure, j. microsc. 199 (2000) pp.244247. [38] p. thévenaz, m. unser, user-friendly semiautomated assembly of accurate image mosaics in microscopy, microsc. res. tech. 70(2) (2007) pp.135–146. [39] t. prowse, h.p. schwarcz, s. saunders, r. macchiarelli, l. bondioli, isotopic paleodiet studies of skeletons from the imperial roman-age cemetery of isola sacra, rome, italy, j. arch . sc.31 (2004) pp .259–272. [40] f. crowe, a. sperduti, t.c. o'connell, o.e. craig, k. kirsanow, p. germoni, r. macchiarelli, p. garnsey, l. bondioli, waterrelated occupations and diet in two roman coastal communities (italy, first to third century ad): correlation between stable carbon and nitrogen isotope values and auricular exostosis prevalence, am. j. phys. anthropol. 142 (2010) pp.355-366. [41] o.e. craig, m. biazzo, t.c. o'connell, p. garnsey, c. martinezlabarga, r. lelli, l. salvadei, g. tartaglia, a. nava, l. renò, a. acta imeko | www.imeko.org  september 2016 | volume 5 | number 2 | 25  fiammenghi, o. rickards, l. bondioli, stable isotopic evidence for diet at the imperial roman coastal site of velia (1st and 2nd centuries ad) in southern italy, am. j. phys. anthropol. 139 (2009) pp.572-583. [42] p.l. walker, r.r. bathurst, r. richman, t. gjerdrum, v.a. andrushko, the causes of porotic hyperostosis and cribra orbitalia: a reappraisal of the iron-deficiency-anemia hypothesis, am. j. phys. anthropol. 139 (2009) pp.109-125. [43] r.k. spencer, testing hypotheses about diffuse idiopathic skeletal hyperostosis (dish) using stable isotope and adna analysis of late medieval british populations, durham theses, durham university, 2008. [44] m. gundula, m.p. richards, diet and diversity at later medieval fishergate: the isotopic evidence, am. j. phys. anthropol. 134 (2007) pp.162-174. [45] k. quintelier, a. ervynck, g. müldner, w. van neer, m.p. richards, b.t. 
decision-making on establishment of re-calibration intervals of testing, inspection or certification measurement equipment by data science

acta imeko
issn: 2221-870x
june 2023, volume 12, number 2, 1 - 8

marija cundeva-blajer1
1 ss. cyril and methodius university in skopje (ukim), faculty of electrical engineering and information technologies (feeit), st. rugjer boskovik no. 18, 1000 skopje, republic of north macedonia

section: research paper
keywords: decision-making in conformity assessment; testing-inspection-certification; re-calibration interval; data fusion
citation: marija cundeva-blajer, decision-making on establishment of re-calibration intervals of testing, inspection or certification measurement equipment by data science, acta imeko, vol. 12, no. 2, article 38, june 2023, identifier: imeko-acta-12 (2023)-02-38
section editor: leonardo iannucci, politecnico di torino, italy
received january 31, 2023; in final form june 20, 2023; published june 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: marija cundeva-blajer, e-mail: mcundeva@feit.ukim.edu.mk

abstract
this contribution is related to issues on decisions in conformity assessment, especially in testing, inspection, and certification (tic), which are predominantly based on measurement data. the digital transformation has started to impact the tic sector, which is related to intensified utilization of data science in tic. the quality of the data in big data analytics is not always sufficiently addressed, especially in sectors with traditionally empirical approaches, such as tic. this paper surveys the options for deployment of data science in tic decision-making processes, based on conclusions drawn from the complementary usage of empirical "measurements" and "data science". the discussion is focused on a case study in which data fusion is applied to determine the calibration frequency of a tic instrument, presenting a model that establishes the re-calibration interval with reduced risk in the final decision on the next re-calibration moment.

1. introduction
the conformity assessment processes are mostly finalized with decision delivery, most evidently in testing, inspection, and certification (tic). these decisions are predominantly based on empirical data derived by measurements. measurements are crucial in various critical societal sectors, such as healthcare, trade, industry, the energy sector, environmental protection, etc., where tic activities are commonly conducted. lately, a significant impact on the tic sector has been created by the momentum of the digital transformation. at the centre of the digital transformation is the enormous quantity of data, which is continuously produced, processed, stored, and used for an increasing number of applications. artificial intelligence, machine learning, the internet of things, big data analytics, etc. are based on data. however, data quality is an issue that is not always properly addressed, especially in sectors with traditions based on experimental approaches, such as tic. poor models, incorrect results, and finally wrong decisions might derive from poor quality of data. data science utilizes scientific approaches, protocols, algorithms and systems, with interdisciplinarity, to extract insights and information from noisy, structured and unstructured data, and to deploy knowledge from data in a wide scope of applicative solutions [1], [2].
the recent revival of the interrelation between measurement and data science is induced by the emerging application of sensory devices and the significant increase of available data storage, processing, and transmission capacities, which are utilized in various ways. one of the results of the high quantity of recorded information and of the theoretical achievements in measurement and data science is the invention of numerous newly developed products and smart services. this contribution analyses the options for utilization of the latest data science achievements in tic decision-making processes, based on conclusions drawn from the synchronous usage of "measurements" as completely empirical, and "data science" as a methodology oriented towards modelling and simulation, combined with high complementarity and synergy, i.e., the data fusion approach.

the modern scientific methodologies require sustaining the theory validity through experimental verification, whenever possible. experimental proof comprises a quantitative or non-quantitative (i.e., qualitative) measure of the observed quantities achieved through measurement. the degree of consistency of various measurement results, derived by different independent experimenters or by the same experimenter at various moments, provides an indicator of the reliability of the results for the quantity of interest, taking into account that empirical knowledge is mostly imperfect to some degree and that the combination of observations is a standard and essential practice [3].

data science classes of tic problems
several classes of data science problems for which techniques might be developed and evaluated across different domains in the tic sector are [1]:
• detection: finding data of interest in a given dataset.
• anomaly detection: identification of system states that force additional pattern classes in a model. outlier detection is associated with identifying potentially erroneous data items ("influential observations") forcing changes in prediction models.
• cleaning: elimination of errors, omissions, and inconsistencies in data or across datasets.
• alignment: relating different instances of the same object [4], like a word with the corresponding visual object, or time stamps associated with two different time series. data alignment is frequently used for entity resolution, identifying common entities among different data sources.
• data fusion: integration of different representations of the same real-world object, encoded in a well-defined knowledge base of entity types [5].
• identification and classification: attempt to determine, for each item of interest, the type or class to which the item belongs [6].
• regression: finding functional relationships between variables.
• prediction: estimation of one or more variables of interest at future times.
• structured prediction: tasks where the outputs are structured objects rather than numeric values; a desirable technique to classify a variable in terms of a more complicated structure than discrete or real-number values.
• knowledge base construction: construction of a database with a predefined schema, based on any number of diverse inputs.
• density estimation: production of a probability density (distribution function) besides a label/value.
• joint inference: joint optimization of predictors for different sub-problems using constraints that enforce global consistency, used for detection and cleaning for more accurate results.
data science also involves ranking, clustering, and transcription ("structured prediction"), as in [7]. other classes of problems are based on algorithms and techniques applied to raw data at an earlier "pre-processing" stage. different data processing may be activated if the evaluation methodology is essential [1].

2. main statistical paradigms for decision making in tic
the international endorsement of the guide to the expression of uncertainty in measurement (gum) [8] accelerated the need to provide uncertainty statements in measurement results. laboratory accreditation based on standards such as iso 17025 [9] has amplified this process. as uncertainty statements have been recognized as essential for effective decision making, various laboratories, from national metrology institutes to commercial test laboratories, invest significant workload in the evaluation of measurement uncertainty by applying the gum methods [8], [10], [23], but also the methodologies proposed in the international guideline ilac g8 [22]. the approaches for evaluating uncertainty propagation in tic applications comprise the frequentist, bayesian, and fiducial statistical paradigms [11], [23].
the first statistical paradigm, the frequentist one, in which uncertainty can be probabilistically assessed, is based on statistical theory and is referred to as "classical" or "conventional". considering the origin of uncertainty in tic, these approaches must be adjusted to derive frequentist uncertainty intervals under practical conditions. in most realistic tic environments, uncertainty intervals must comprise both the uncertainty in quantities estimated using data and the uncertainty in quantities derived from expert knowledge, so the data fusion approach is indispensable. to obtain an uncertainty interval, the measurands which are not under observation are usually treated as random variables with probability distributions of their values; on the other hand, measurands whose values can be assessed by applying statistical data are considered unknown constants. the traditional frequentist protocols should be altered to achieve the prescribed level of confidence after averaging over the potential values of the quantities evaluated by expert judgment [11].
the second paradigm, the bayesian approach [11], named after the fundamental theorem proved by the reverend thomas bayes in the mid-1700s, models the analyst's knowledge about the measurands as a set of stochastic variables with a probability distribution in the joint parameter space. the theorem enables the probability distributions to be updated based on the observed data and the inter-relationships of the parameters defined by the function or equivalent statistical models. the resulting probability distribution describes the knowledge of the measurand given the observed data.
the third statistical paradigm, the fiducial approach, was developed by r. a. fisher in the 1930s [11]. the probability distribution (fiducial distribution) for a measurand conditional on the data is gained from the interrelationship of the measurand and the input value described by the function, and from the distributional assumptions on the data used for the estimate.

3. decision-making and risk-based thinking in tic established by data fusion
data fusion aims to obtain higher quality information applicable to specific contexts by profiting from the symbiosis of data collected from diverse sources. data fusion is the process of combining data or information to estimate or predict entity states [12]. applied in many decision-making domains, such as tic, it encompasses classification and pattern recognition utilized to support decisions. decision making, especially in conformity assessment, is directly linked to the introduction of risks in the laboratory's or the tic entity's operations. so, in tic it is crucial not only to fuse data obtained from multiple sources, both experimental and theoretical, but also to assess threats and risk [9], [22], [23]. data fusion enlarges robustness and soundness and diminishes the vulnerability of the system, giving arguments for the decision and enabling decision-making even when some sources of information are missing or inappropriate. through data fusion, better and larger coverage of space and time is achieved and ambiguity is decreased, because better information leads to better distinction among the available hypotheses. data fusion is based on experimental data output by sensing devices or instruments, and on information gained by other routes (e.g., the user as a data source for a priori knowledge, experience, and model application). data fusion requires all data to be represented in the same format (e.g., numeric values in the same units, relative values). if data are diverse in representation, data alignment or data registration is indispensable [12]. measurements, as instrument outputs, produce a signal usually affected by noise, whose reliability has to be proved (e.g., instrument malfunction, express corruption of the measured quantity, like jamming). filtering and validation of the data are necessary in data fusion processes. data fusion comprises activities tackling data from sources with different quality levels (such as different accuracy), co-related data, inflation of information, and all other issues leading to computational problems, and it may impose a need to change the context of the observation, like from the time to the frequency domain, or to extract features or attributes [12].
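as a minimal, illustrative sketch (not taken from the paper) of how an a priori expert estimate and an empirical measurement can be fused, the normal-normal bayesian update, which coincides with inverse-variance weighting, can be written as follows; all numerical values are hypothetical:

    # minimal sketch: fusing a priori expert knowledge with a measurement
    # via a normal-normal bayesian update (inverse-variance weighting).
    # all numerical values below are hypothetical, for illustration only.

    def fuse(prior_mean, prior_std, meas_value, meas_std):
        """posterior mean/std of a measurand with a normal prior
        (expert knowledge) and a normal measurement likelihood."""
        w_prior = 1.0 / prior_std**2   # precision of the prior
        w_meas = 1.0 / meas_std**2     # precision of the measurement
        post_mean = (w_prior * prior_mean + w_meas * meas_value) / (w_prior + w_meas)
        post_std = (w_prior + w_meas) ** -0.5
        return post_mean, post_std

    # expert judgment: drift of about 0.02 a with a large uncertainty;
    # measurement: observed drift 0.03 a with standard uncertainty 0.005 a
    mean, std = fuse(0.02, 0.02, 0.03, 0.005)
    print(f"fused estimate: {mean:.4f} a +/- {std:.4f} a")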
as an illustration of the application of data science in the tic sector, one of the most relevant decision-making and risk-introducing issues in tic will be demonstrated further: the determination of the re-calibration period of tic measurement equipment, deploying data fusion as a means of argumentation.

4. planning the tic instrument re-calibration period by deployment of data fusion
estimating re-calibration intervals is an essential issue for the tic sector entities utilizing calibrated instruments in their activities. most of the test equipment in today's laboratory inventories are multi-parameter items or consist of individual single-parameter items. an item-measurand is declared to be out-of-tolerance if a single instrument parameter, or an item in a set, is found to be outside pre-defined specifications. this is expensive and introduces risks [13], [14], [23]. most of the published methods for planning the re-calibration period of an instrument are of statistical origin and can be adequately used only for large inventories of instruments [15]. as a result of the different performance characteristics of individual instruments and their changeable working conditions, instrument reliability is complex to anticipate. extended calibration intervals may result in increased potential costs associated with a given instrument, as more operation cycles (tests or calibrations) have been conducted before it is re-calibrated and found to be in- or out-of-tolerance. a posteriori costs might encompass a reverse traceability review to identify the items that have been tested with the instrument and a thorough investigation of the level of negative impact on their performance given the scale of the instrument's out-of-tolerance condition, leading to customer alerts, accreditation suspension or product recalls; imperceptible issues, like jeopardizing the tic entity's reputation, might also occur. in this contribution, the focus is on estimating the re-calibration period of measuring instruments used by tic entities, deploying data fusion for reduced decision-making risk. the approach for determining the re-calibration range will be validated through a case study on experimental calibration and check data of an electrical measuring instrument, by fusing data from diverse sources (both a posteriori experimental data and a priori knowledge, experience, and model application). most of the standards according to which tic entities are accredited/certified require them to have available, suitable, and adequate facilities and equipment to permit all tic activities to be carried out in a competent and safe manner, with the responsibility lying solely on the tic entity. one of the most significant decisions regarding calibration is "when and how often to do it?" many factors influence the time range between calibrations, and they should be identified and considered by the tic entity.
the most important factors are:
• the uncertainty of measurement required or declared by the tic entity,
• the risk of a measuring instrument exceeding the limits of the maximum permissible error when in use,
• the cost of necessary correction measures when it is found that the instrument was out-of-tolerance over a long period of time,
• the type of instrument,
• the tendency to wear and drift,
• the manufacturer's recommendation,
• the extent and severity of use,
• the environmental conditions (climatic conditions, vibration, ionizing radiation, etc.),
• the trend data obtained from previous calibration records,
• the recorded history of maintenance and servicing,
• the frequency of cross-checking against other reference standards/measuring devices (including diverse measures for quality assurance in tic, such as inter-laboratory comparisons or proficiency testing schemes, or repeatability of tests under different operating conditions),
• the frequency and quality of intermediate checks in the meantime,
• the transportation arrangements and risk, and
• the degree to which the tic personnel are trained [15].
the ilac-g24 guideline specifies the following methods [15]:
• automatic adjustment or "staircase" (calendar-time),
• control chart (calendar-time),
• "in-use" time,
• "in service" checking or "black-box" testing, and
• other statistical approaches.
the use of statistical methods (i.e., deploying data science) on an individual instrument or instrument type is of interest, especially if combined with adequate software tools. according to agilent technologies® [16], prior to the introduction of a new product the responsible personnel set the initial recommended re-calibration period. data is treated as reliable if it originates from at least three areas:
• data from similar instruments,
• data for the individual components used in the instrument,
• data on any subassemblies deriving from existing mature products (i.e., instruments).
the usual working environment and the results of tests of the surrounding conditions conducted on instrument prototypes are considered as well [18]. several methods for determining calibration intervals have been published [13], [14], [19], [20], [21]. some models assume that the calibration condition of the instrument can be traced by monitoring the drift of an observable parameter [13]. the calibration ranges can be presented according to analysis by parameter variables data, by parameter attributes data, by instrument attributes data, and by class instrument attributes data. other methods, such as an extension providing a maximum likelihood estimation for the analysis of data characterized by unknown failure times, are given in [13], where the estimation method uses the exponential reliability function. an approach based on a review of the instrument's calibration history is presented in [14]: the calibration records indicate the history of remaining in tolerance, and the instrument might have a higher likelihood of remaining in tolerance as a result of an algorithm that calculates calibration ranges based on the condition received from calibration along with a historical weighting. a method from variables data is presented for determining calibration intervals for parameters whose value demonstrates time-drift with constant statistical variance. the method utilizes variables data in the analysis of the time-dependence of deviations between as-left and as-found values from calibration. the deviations derive from the difference between a parameter's as-found value at a given calibration and its as-left value prior to that calibration [14]. the choices for the tolerance band of parameter x in table 1 are derived from other authors' publications [14], but also from the laboratory's own metrology experience. in further research, other methodologies for this purpose are planned to be deployed, like the suggested decision-making ranges in ilac g8 [22]. in [19] and [21], a stochastic optimisation approach for determination of the re-calibration period is presented. a genetic algorithm methodology is deployed for estimation of the next calibration period, considering the previous calibration history of the measurement device. the experimental results of the last calibration certificate are used for verification of the predicted measurement time-drift of the device at the estimated moment of the next calibration. the modelling is performed by representing the time dependence of the instrument drift with lagrange orthogonal polynomials constructed from the experimental calibration history, embedded in an algorithm based on the statistical least squares method, with the accompanying uncertainties included in the genetic algorithm stochastic optimisation tool used to determine the coefficients of the polynomial model of the instrument's time-drift function.

4.1. estimation of a re-calibration period: model development
based on the previous discussions and survey, the following innovative data fusion model for determination of the re-calibration period is proposed:

ni = eci · [c1·x + c2·x + c3·x + c4·x + ic·y + cfu·z + co·u + ofh·v + ms·w]   (1)

where ni is the new interval and eci is the established calibration interval; the remaining coefficients are defined after table 1.

table 1. values of parameters as multipliers.
parameter: value

x:
 "in tolerance": 1
 "out of tolerance", < 1x the tolerance band: 0.8
 "out of tolerance", > 1x and < 2x the tolerance band: 0.6
 "out of tolerance", > 2x and < 4x the tolerance band: 0.4
 "out of tolerance", > 4x the tolerance band: 0.3

y = Σ yi:
 y1, number of in-service checks between calibrations: 1 time: 0.1; < 5 times: 0.3; < 10 times: 0.4; > 10 times: 0.5
 y2, measured value: no difference (< 3 %): 0.5; difference < 20 %: 0.4; difference > 20 %: 0.1

z = Σ zi:
 z1, frequency of usage: daily: 0.1; monthly: 0.5; yearly: 0.7
 z2, habit of usage: used with caution in laboratory conditions: 0.3; used with caution with a tendency to wear and drift: 0.2; used without special attention in terms of events: 0.1

u = Σ ui:
 u1, cost of calibration: small: 0; medium: 0.3; large: 0.5
 u2, cost of necessary correction measures (in case the artefact of calibration is out of specifications, further service/adjustment is needed and a repeated calibration procedure is imposed): < 0.5x the cost of calibration: 0.5; < 1x the cost of calibration: 0.1; > 1x the cost of calibration: 0

v:
 the operator is trained to handle the instrument and knows the measured items: 1
 the operator is trained to handle the instrument, but is imperfectly acquainted with the measured items: 0.5

w:
 service (minor repair or adjustment of the artefact of calibration) performed between the previous and the last calibration: 0.5
 no service performed between the previous and the last calibration: 1

the remaining coefficients in equation (1) are:
c1 historical weighting of the most recent calibration (0.2, modification of the simplified method)
c2 historical weighting of the second most recent calibration (0.1, modification of the simplified method)
c3 historical weighting of the third most recent calibration (0.08, modification of the simplified method)
c4 predicted value of the next calibration (0.06, newly introduced parameter)
ic in-service check between calibrations (0.1, newly introduced parameter)
cfu condition and frequency of use (0.2, newly introduced parameter)
co costs (0.1, newly introduced parameter)
ofh operator factor and habit (0.08, newly introduced parameter)
ms maintenance and service (0.08, newly introduced parameter).

the model in equation (1) is derived by combining the models from [14] and [15], and by deploying the laboratory experience in electrical metrology (i.e., the linearity of the electrical instruments' characteristics derived from the instruments' technical specifications), like the time-drift of the instruments [17]. the main idea is to provide an easy model, ready to be used in the everyday operations of tic entities. the eci can be specified depending on the experience with the stability of similar instruments, experience, and recommendations. this is a parameter containing the a priori knowledge in the data fusion process. other coefficients of a priori knowledge origin are c1, c2, c3, c4, ic, cfu, co, ofh and ms. the c1, c2, c3 are the historical weighting coefficients of the previous three calibrations. in case more than three previous calibrations are available over a shorter time history, they can be taken into consideration with significantly lower weighting coefficients (less than 0.04 or 0.02, respectively, for the fourth and fifth previous calibration), and in case of a longer time history (re-calibration periods of more than one year) they can be neglected.
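to make the bookkeeping of equation (1) concrete, the following python sketch (not part of the paper) evaluates ni from the multipliers of table 1; the coefficient values are those proposed above, and the multipliers in the usage example correspond to the case study reported in section 4.2:

    # minimal sketch of equation (1): the coefficient values are those
    # proposed in the text; the multipliers x, y, z, u, v, w must be
    # chosen from table 1 for the instrument under consideration.

    COEFFS = {"c1": 0.2, "c2": 0.1, "c3": 0.08, "c4": 0.06,
              "ic": 0.1, "cfu": 0.2, "co": 0.1, "ofh": 0.08, "ms": 0.08}

    def new_interval(eci, x, y, z, u, v, w):
        """re-calibration interval ni in months, per equation (1)."""
        c = COEFFS
        factor = ((c["c1"] + c["c2"] + c["c3"] + c["c4"]) * x
                  + c["ic"] * y + c["cfu"] * z + c["co"] * u
                  + c["ofh"] * v + c["ms"] * w)
        return eci * factor

    # case-study multipliers (table 5): x = 1, y = 0.3 + 0.5 = 0.8,
    # z = 0.1 + 0.2 = 0.3, u = 0, v = 1, w = 1
    print(new_interval(24, 1.0, 0.8, 0.3, 0.0, 1.0, 1.0))  # ≈ 17.76 months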
the longest possible re-calibration period will be estimated; this approach is more rigorous than the "simplified method" defined in [15] only if the estimated period is shorter than the real time range used for validation, derived from the last calibration certificate, in which the instrument was found to be in tolerance. the parameters used as multipliers are given in table 1.

4.2. experimental case study for methodology validation
a database containing the historical data of previous calibrations of the instrument must be established and maintained by the tic entity for proper implementation of the proposed model. the proposed model becomes feasible after at least two calibrations of the instrument in appropriate time ranges. as a case study for validation of the proposed methodology, a real database with the calibration history of a digital multimeter used in the testing processes of a tic body is adopted. the variations of the calibration values should be considered at the maximum available number of measurement points, emphasising points with detected changes. to be on the safe side, the most acceptable value of x is the smallest value among all available points. the expected value of the next calibration time moment can be obtained by deploying sophisticated algorithms previously published in [14], [18], [19], [20], and [21], but for some tic entities those approaches introduce obstacles and risks for implementation. the methodology presented in this study, embedding the statistical tool of least squares, is a simplified option for the tic entities. this method is chosen because it is embedded, in a very user-friendly form, in many calculators already available to the tic entities (e.g., ms excel). the in-service checks with another instrument should be accomplished at time moments and on occasions where the calibration uncertainty is at disposal for both instruments. the tic entity's quality management proposes the extent of factors and habits of the staff, while the instrument operator specifies the frequency and conditions of use; these are another example of a priori knowledge in the data fusion process. depending on the available history data and the tracked behaviour of the instrument, the coefficients proposed in the algorithm can be modified and customized for each instrument or group of instruments (i.e., the model is universal and invariant to the type of instrument or the number of instruments). the next calibration period of a metrel® eurotest xe mi 3102 tester is estimated as a case study for the model validation. following the recommendations of the instrument producer metrel® [17], a regular 6-month or 1-year calibration of all measurement functions of the instrument should be carried out. this case study has been chosen because of the instrument, the artefact of calibration, and because the laboratory already has sufficient data at its disposal. namely, the data used in the modelling and verification process covers a period of 72 months (i.e., 6 years), which is an appropriate time range for deriving sound regressive conclusions. tables 2 and 3 give the calibration history of the instrument at a single point of the current and voltage measurement ranges, respectively. the data in tables 2 and 3 are from the calibration certificates of the instrument, issued by an external accredited calibration laboratory.
the zero value is assigned to the moment of the first calibration, and the further calibration moments are expressed in months from the first calibration. the reference calibration value for the current is chosen to be 10 a, while the reference value for the voltage is selected to be 400 v. the measurement uncertainty is divided by the corresponding coverage factor declared in the calibration certificate, as the historical calibrations were carried out in different laboratories, some expressing the expanded uncertainty at a coverage factor k = 1.65 (for a rectangular distribution at a probability of 95 %) and some at a coverage factor k = 2 (for a normal distribution at a probability of 95 %). so, the standard uncertainty is utilized in the calculations. this is the data alignment step in the process of fusing heterogeneous data.

table 2. the calibration history at a single point of 10 a in the current measurement range of the metrel® eurotest xe mi 3102.
t in months | i in a | uncertainty in ma
0.00 | 9.98 | 0.020
15.00 | 10.00 | 4.121
36.80 | 10.01 | 14.545
52.60 | 9.98 | 14.545
72.90 | 10.00 | 0.015

table 3. the calibration history at a single point of the voltage measurement range of the metrel® eurotest xe mi 3102.
t in months | u in v | uncertainty in v
0.00 | 399.00 | 0.42
21.80 | 401.00 | 0.36
37.66 | 401.00 | 0.36
58.02 | 400.00 | 0.50

the trend lines for both quantities, the current at 10 a represented in equation (2) and the voltage at 400 v represented in equation (3), are derived by the statistical least squares method, with exclusion of the experimental value from the last calibration, which is used for predictive verification of the models. the derived predictive models are:

i = -7·10^-7 · t^3 + 10^-5 · t^2 + 0.0013 · t + 9.98,  r^2 = 1   (2)

u = -0.0024 · t^2 + 0.1448 · t + 399,  r^2 = 1   (3)

the expected values calculated from the function models (2) and (3) by inserting the last calibration time moment are shown in figures 1 and 2. the expected values are 9.85 a for the current measuring range and 399.32 v for the voltage measurement range, and both are in tolerance. the differences between the calculated (theoretical) values and the real measured values derive from the ranges of the measurement uncertainty in calibration, which is one of the main inputs in the modelling and which is of stochastic and unpredictable nature. table 4 presents the experimental results of in-service check measurements with another instrument of similar type (comprising the same measurement ranges as the object of validation) with established measurement traceability. the results are within the limits of errors (i.e., in tolerance). in case of an out-of-tolerance result derived from the in-service measurement checks, according to the prescribed laboratory procedures, the instrument is subjected to immediate re-calibration or put out of service. the other values for the parameters in the algorithm are as follows: eci = 24 months, and

ni = eci · [c1·x + c2·x + c3·x + c4·x + ic·y + cfu·z + co·u + ofh·v + ms·w]
   = 24 · [0.2·1 + 0.1·1 + 0.08·1 + 0.06·1 + 0.1·0.8 + 0.2·0.3 + 0.1·0 + 0.08·1 + 0.08·1]
   = 24 · 0.74 = 17.76 .   (4)

the last calibration is not used in the prediction of the next value; it serves as a validation point of the algorithm.
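the trend models (2) and (3) and the held-out verification can be reproduced with an ordinary least squares polynomial fit; a minimal sketch using numpy (the paper itself relied on generally available calculators such as ms excel):

    import numpy as np

    # calibration history from tables 2 and 3; the last point of each
    # series is held out and used only to verify the prediction.
    t_i = np.array([0.00, 15.00, 36.80, 52.60, 72.90])   # months
    i_a = np.array([9.98, 10.00, 10.01, 9.98, 10.00])    # current in a
    t_u = np.array([0.00, 21.80, 37.66, 58.02])          # months
    u_v = np.array([399.0, 401.0, 401.0, 400.0])         # voltage in v

    # cubic fit through the first four current points, quadratic fit
    # through the first three voltage points (exact fits, hence r^2 = 1)
    p_i = np.polyfit(t_i[:-1], i_a[:-1], 3)
    p_u = np.polyfit(t_u[:-1], u_v[:-1], 2)

    # evaluate the models at the held-out last calibration moment
    print(np.polyval(p_i, t_i[-1]))  # ~9.86 a; the paper reports 9.85 a from the rounded model (2)
    print(np.polyval(p_u, t_u[-1]))  # ~399.2 v; the paper reports 399.32 v from the rounded model (3)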
the real calibration period (between the last two calibrations) is 20 months, while the re-calibration period predicted by the proposed algorithm is 18 months. the values obtained with the last calibration validate the method. a shorter value of the re-calibration interval is obtained, which is on the safe side and can be accepted as applicable without introducing additional risks from the aspect of the re-calibration period. in fact, the risk to the tic operation is significantly mitigated, i.e. minimized. namely, in the case where the derived theoretical value were outside the tolerance limits, it could be concluded that the planned re-calibration period is not appropriately chosen (i.e., it is too long). in that case, new planning should be conducted, in accordance with the period calculated using the model.

figure 1. the calibration history and expected value at a single point, 10 a, of the current measurement range.
figure 2. the calibration history and expected value at a single point, 400 v, of the voltage measurement range.

table 4. experimental values of the last in-service check with another calibrated instrument with established measurement traceability.
instrument | i in a | u in v
metrel mi 3102 | 3.7 | 224
metrel mi 833 | 3.7 | 223

table 5. multipliers used in the case study.
parameter: value
x, "in tolerance": 1
y = Σ yi: y1, number of checks between calibrations, < 5 times: 0.3; y2, measured value, no difference (< 3 %): 0.5
z = Σ zi: z1, frequency of usage, daily: 0.1; z2, habit of usage, used with caution with a tendency to wear and drift: 0.2
u = Σ ui: u1, cost of calibration, small: 0; u2, cost of necessary correction measures, > 1x the cost of calibration: 0
v, the operator is trained to handle the instrument and knows the measured items: 1
w, no service performed between the previous and the last calibration: 1

additional validation of the predictive model for determination of the re-calibration period is the a posteriori experimental verification performed by executing in-service check measurements with another instrument of similar type (comprising the same measurement ranges as the object of validation) with established measurement traceability. this case study has validated the proposed methodology for predicting the next moment of instrument calibration. the derived results demonstrate a reduced risk arising from an out-of-tolerance state of the instrument due to a prolonged re-calibration period. so, this data fusion methodology, with the proposed simple procedure applicable in any tic entity, enables argumented decision making concerning the determination of the instrument re-calibration period. the mitigated risk from this aspect increases the confidence in the reliability of the instruments used by the conformity assessment body.

5. conclusions
the proposed methodology for predicting the re-calibration period based on the data fusion concept is simple, incorporating plenty of data on factors influencing the stability of the instrument, derived from diverse sources. it is easily deployable in the daily routine of any tic entity. the model aims to decrease the quality management risk of errors occurring due to an inappropriately defined re-calibration period of any instrument used in the tic activities. the presented case study validates and confirms the effectiveness of the proposed methodology through verification against experimental values.
an advantage of the proposed universal model is its openness, which enables variation of the coefficients and provides means for specialization in the case of a group of instruments. one option for generalisation of the method, in the case of a large group of instruments of the same type, is to fix some of the coefficients and to vary certain others. more thorough studies should be conducted in this context, by deploying statistical approaches for random sampling of data from previous calibrations of a high number of instruments of the same type (i.e., data reduction should be carried out through a data science approach). finally, it can be concluded that the data fusion approach is highly adaptable to various decision-making situations in the tic sector, opening possibilities for mitigation and reduction of risks during tic operations.

references
[1] k. cunha, r. santos, the reliability of data from metrology 4.0, international journal on data science and technology, vol. 6, no. 4, 2020, pp. 66-69. doi: 10.11648/j.ijdst.20200604.11
[2] b. j. dorr, c. s. greenberg, p. fontana, et al., a new data science research program: evaluation, metrology, standards, and community outreach, int. j. data sci. anal. 1, 2016, pp. 177-197. doi: 10.1007/s41060-016-0016-z
[3] f. pavese, an introduction to data modelling principles in metrology and testing, in: f. pavese, a. forbes (eds.), data modeling for metrology and testing in measurement science, birkhauser boston, springer science+business media, llc, 2009, pp. 1-30.
[4] r. fagin, l. m. haas, m. hernández, r. j. miller, l. popa, y. velegrakis, clio: schema mapping creation and data exchange, in: conceptual modeling: foundations and applications, springer, ny, 2009, pp. 198-236. doi: 10.1007/978-3-642-02463-4_12
[5] j. sleeman, t. finin, a. joshi, entity type recognition for heterogeneous semantic graphs, in: 2013 aaai fall symposium series, ai magazine 36(1), 2015, pp. 75-86. doi: 10.1609/aimag.v36i1.2569
[6] m. jeevan, fundamental methods of data science: classification, regression and similarity matching, jan. 2015. online [accessed 25 june 2023] http://www.kdnuggets.com/2015/01/fundamental-methodsdata-science-classification-regression-similarity-matching.html
[7] i. j. goodfellow, y. bengio, a. courville, deep learning, 2015.
[8] bipm, evaluation of measurement data, guide to the expression of uncertainty in measurement, jcgm 100:2008, gum.
[9] iso, en iso/iec 17025 general requirements for the competence of testing and calibration laboratories, cenelec, brussels, 2017.
[10] bipm, evaluation of measurement data, supplement 1 to the "guide to the expression of uncertainty in measurement", propagation of distributions using a monte carlo method, jcgm 101:2008.
[11] w. f. guthrie, h. liu, a. l. rukhin, b. toman, j. c. m. wang, n. zhang, three statistical paradigms for the assessment and interpretation of measurement uncertainty, in: f. pavese, a. forbes (eds.), data modeling for metrology and testing in measurement science, birkhauser boston, springer, llc, 2009, pp. 71-116.
[12] p. s. girao, o. postolache, j. m. d. pereira, data fusion, decision-making, and risk analysis: mathematical tools and techniques, in: f. pavese, a. forbes (eds.), data modeling for metrology and testing in measurement science, birkhauser boston, springer, llc, 2009, pp. 205-254.
[13] a. krleski, m. cundeva-blajer, analysis and contribution to methods for determination of optimal recalibration intervals, proc.
of mma'16, 26th symposium metrology and metrology assurance, sozopol, bulgaria, 7-11 september 2016, pp. 493-499. online [accessed 26 june 2023] https://metrology-bg.org/fulltextpapers/187.pdf
[14] h. castrup, calibration intervals from variables data, ncsli workshop & symposium, washington, usa, 2005.
[15] ilac, ilac-g24:2022 guidelines for the determination of calibration intervals of measuring instruments. online [accessed 25 june 2023] https://ilac.org/?ddownload=124891
[16] agilent technologies, setting and adjusting instrument calibration intervals, application note, 2013.
[17] metrel, eurocheck cs 2099 user manual, 2013.
[18] allen bare, simplified calibration interval analysis, ncsli workshop & symposium, 2006.
[19] m. cundeva-blajer, prediction of resistance standards time behaviour by stochastic determination of lagrange polynomial, advanced mathematical and computational tools in metrology and testing, ix edition, world scientific publishing company, 2012.
[20] p. mostarac, r. malarić, h. hegeduš, extrapolation of resistor standard values between calibration, etai, ohrid, macedonia, 26-29 september 2009.
[21] m. cundeva-blajer, application of optimization methods for improved electrical metrology, acta imeko, november 2016, vol. 5, issue 3, pp. 9-15. doi: 10.21014/acta_imeko.v5i3.378
[22] ilac, ilac g8: 09/2019 guidelines on decision rules and statements of conformity. online [accessed 25 june 2023] https://ilac.org/?ddownload=122722
[23] s. andonov, m. cundeva-blajer, fmea for tcal: risk analysis in compliance to en iso/iec 17025:2017 requirements, proc. of 24th imeko tc4 int. symp. / 22nd int. workshop on adc and dac modelling and testing, palermo, italy, 14-16 september 2020. online [accessed 25 june 2023] https://www.imeko.org/publications/tc4-2020/imeko-tc4-2020-42.pdf
[24] m. cundeva-blajer, data science for testing, inspection and certification, proc. of the imeko tc11 & tc24 joint hybrid conference, dubrovnik, croatia, 17-19 october 2022. doi: 10.21014/tc11-2022.25

surface conductance and microwave scattering in semicontinuous gold films

acta imeko
issn: 2221-870x
september 2015, volume 4, number 3, 42 - 46

jan obrzut
material measurement laboratory, national institute of standards and technology, gaithersburg, md 20899-8542, usa

section: research paper
keywords: coplanar waveguides; scattering parameters; conductance; microwave absorption; percolation; semi-continuous gold films
citation: jan obrzut, surface conductance and microwave scattering in semicontinuous gold films, acta imeko, vol. 4, no.
3, article 7, september 2015, identifier: imeko-acta-04 (2015)-03-07
editor: paolo carbone, university of perugia, italy
received february 24, 2015; in final form april 8, 2015; published september 2015
copyright: © 2015 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was supported by the nist center for nanoscale science and technology, and material measurement laboratory.
corresponding author: jan obrzut, e-mail: jan.obrzut@nist.gov

abstract
semicontinuous gold films 4 nm to 12 nm thick were characterized using patterned coplanar waveguides over a frequency range of 100 mhz to 20 ghz. such films can form two-dimensional fractal aggregates mixed with dielectric voids, with unusually large electromagnetic absorption. surface conductance and microwave absorption were obtained from the measured scattering parameters using a microwave transmission-reflection model. within the percolation coverage of gold nanoparticles, when the surface conductance increases from 10^-5 s to 10^-3 s, the film properties transition from dielectric to metallic, and the corresponding microwave transmittance falls far more rapidly than the classical skin depth model would suggest. the resulting microwave absorption attains a peak value in this range, which results from an inhomogeneous localization of the electromagnetic field in fractal structures. the dielectric to metallic transition can be easily identified experimentally from an abrupt change in the phase of the reflection scattering parameter s11. the results demonstrate a convenient measurement technique for studying the electromagnetic properties of surface-enhanced semicontinuous metallic films for thin broadband absorbers with minimized reflection, and for other microwave applications.

1. introduction
semicontinuous thin metallic films have attracted attention due to their unusual optical and electronic properties, such as the enhancement of non-linear optical effects [1] and the inhomogeneous localization of electromagnetic eigenmodes [2]. the characteristic fractal morphology of such films develops during deposition, when metallic nanoparticles nucleate to form conducting clusters mixed with dielectric voids. the size of the conducting clusters increases with increasing deposition thickness until a percolation threshold is passed and the film becomes conducting [3], [4]. in the vicinity of the percolation transition, the size of the inhomogeneities is comparable with the mean free path of the electrons and results in partial carrier localization [5]. such films may also exhibit unexpected electromagnetic characteristics, as the carrier response can be dominated by resistive, localized, activated transport between grains [2]. typically, dc or thz [6] electrical conductivity has been reported, but microwave studies are scarce due to the complexity of such measurements [7], [8]. in this paper, we report measurements of conductance and power absorption over the dielectric to metallic percolation transition using a technique based on coplanar waveguides (cpw) [9]. this approach enables effective impedance matching and allows for measurement of the film propagation characteristics, which can be accurately normalized with the film surface conductance while mitigating uncertainty from the film thickness measurement. the results are presented for gold nanoparticles deposited on a flat surface, which we choose as a model of a 2d semicontinuous metallic film.
2. methods
2.1. coplanar waveguide testing structure
coplanar waveguides with a nominal impedance value (z0) of 50 Ω and a propagation length (l) ranging from 450 µm to 1800 µm were made with 10 nm ti and 200 nm au evaporated on 500 µm thick, 25 mm by 25 mm electronic-grade alumina wafers (figure 1). the cpws were patterned by lift-off lithography. the width (w) of the central signal strip of these cpws was 50 µm ± 0.2 µm, while the signal to ground plane spacing (s) was nominally 22 µm. semicontinuous films were deposited by thermal evaporation of gold directly onto the cpw through a shadow mask. during deposition, the average mass thickness of the film (d) was monitored by a quartz crystal oscillator with a resolution of 1 nm.

2.2. microwave measurement and analysis
measurements of the microwave scattering parameters, s11 and s21, were performed using an agilent 8720d network analyzer in the frequency range of 1 ghz to 20 ghz. the analyzer was connected to the cpw test structure with phase-preserving cables from agilent (85131-60013) and 50 Ω ground-signal-ground (gsg) air-coplanar probes (acp40, 100 µm pitch) from cascade. the measurement system was calibrated using a 101-190c impedance calibration standard and wincal calibration software from cascade. parameters in bold face denote complex quantities that have both magnitude and phase. after film deposition, the cpw impedance changes from z0 to zs. we consider the cpw test structure as a microwave network consisting of the impedance discontinuity z0; zs; z0, that is, zs inserted between two reference transmission lines having a real characteristic impedance z0 (figure 1), where multiple wave reflection takes place at each z0; zs interface, affecting the reflection coefficient, Γ, and the transmission coefficient, t. the material properties in the specimen section of propagation length l are represented by the complex impedance zs and the complex propagation constant γs. the relations between the measured scattering parameters s11, s21 and Γ, zs and t are given by equations (1)-(3) [10]-[12]:

Γ = b ± (b^2 − 1)^1/2 ,  b = (s11^2 − s21^2 + 1) / (2 s11)   (1)

zs = z0 (1 + Γ) / (1 − Γ)   (2)

t = (s11 + s21 − Γ) / (1 − (s11 + s21) Γ) .   (3)

in (1), |Γ| <= 1.0. the complex transmission coefficient t = e^(−γs l) is related to the complex propagation constant by (4):

γs = [ln(1/t)] / l   (4)

which has multiple solutions. since t = |t| e^(jθ), (4) can be rearranged into (5), where the real part of γs has a unique value and only the imaginary part, the phase constant, has multiple values:

γs = [ln(1/|t|) − j(θ − 2πn)] / l .   (5)

the ambiguity in the phase constant can be easily eliminated by measurements on two cpws with different propagation lengths. having determined zs and γs, the distributed conductance, gs, can be obtained from the conventional transmission line relations [12]:

gs = re(γs / zs) .   (6)

the distributed conductance gs (6) can be scaled to the film surface conductance σs (in units of siemens per square):

σs = (1/2) s gs   (7)

where s is the cpw signal to ground plane spacing, here s = 22 µm (figure 1). note that σs is independent of the cpw geometry, but it reflects the physical structure of the film, which depends on the particle surface coverage. in particular, σs eliminates the ambiguity associated with the film mass thickness measurement, which is commonly used in characterizing 2d metallic films.
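a compact numerical sketch of the extraction chain (1)-(7) is given below. it assumes linear (not db) complex s-parameters, a single line with the n = 0 phase branch (so the two-line disambiguation described above is not shown), and the cpw geometry of section 2.1; it is an illustration rather than the authors' processing code:

    import numpy as np

    def surface_conductance(s11, s21, length, z0=50.0, spacing=22e-6, n=0):
        """extract sigma_s from complex s11, s21 per equations (1)-(7).
        length: propagation length in m; spacing: signal-ground gap in m.
        assumes the n-th branch of the phase constant (n = 0 here)."""
        b = (s11**2 - s21**2 + 1.0) / (2.0 * s11)           # eq. (1)
        gamma_r = b + np.sqrt(b**2 - 1.0 + 0j)              # reflection coefficient
        if abs(gamma_r) > 1.0:                              # enforce the |Γ| <= 1 root
            gamma_r = b - np.sqrt(b**2 - 1.0 + 0j)
        zs = z0 * (1.0 + gamma_r) / (1.0 - gamma_r)         # eq. (2)
        t = (s11 + s21 - gamma_r) / (1.0 - (s11 + s21) * gamma_r)  # eq. (3)
        theta = np.angle(t)
        gamma_s = (np.log(1.0 / abs(t)) - 1j * (theta - 2 * np.pi * n)) / length  # eq. (5)
        gs = np.real(gamma_s / zs)                          # eq. (6), distributed conductance
        return 0.5 * spacing * gs                           # eq. (7), s per square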
in particular, s eliminates ambiguity associated with the film mass thickness measurement, which is commonly used in characterizing 2d metallic films. the microwave absorption, as, is given by (8): sss rta  1 (8) where the transmittance ts, and reflectance rs, are magnitudes of the corresponding transmission and reflection coefficient, respectively. the combined uncertainty of scattering parameters magnitude is 0.1 db and phase angle is 2. the combined relative uncertainty of s and as is within 5 %. 3. results   3.1. scattering parameters  figure 2 shows the magnitude and phase of the complex scattering parameters measured for uncoated cpws and cpws figure 2. magnitude and phase of scattering parameters s11 and s21 for the uncoated cpws (1), and the following au film thickness: (2) 4 nm, (3) 6 nm, (4) 7 nm,  (5) 8 nm, (6) 9 nm, and (7) 10 nm.  figure 1. cpw testing structure. acta imeko | www.imeko.org  september 2015 | volume 4 | number 3 | 44  coated with gold films with thickness increasing from 4 nm to 10 nm. the magnitude of the reflected wave, |s11|, for uncoated cpws is rather small, in the range of –45 db, indicating negligibly small insertion loss. the corresponding phase angle (plot 1 of s11 phase) indicates that the impedance mismatch is not significant. uncoated cpws show highly transmitting characteristics with |s21| slightly decreasing with frequency from its maximum value of 0 db, again indicating that the cpws test structures are well impedance matched. there is a general tendency of conducting au films to affect the magnitude and phase of both the s11 and s21 parameters. with increasing film thickness, the magnitude of s11 increases, while the magnitude of s21 correspondingly decreases. for example, in comparison to uncoated cpws, for a 7 nm thick au film the |s11| value increases by several orders of magnitude, from –45 db to about –10 db, while |s21| decreases from about 0 db to about –4.3 db. these changes are significant, and reflect the increasingly conducting character of the au films. as the film thickness grows, |s11| increases towards the maximum value of 0 db, while the phase angle approaches the value of . an abrupt change in s11 phase, from –135 to +175 when the film thickness increases from 6 nm to 8 nm, indicates a conductivity percolation transition from a dielectric to a conducting metallic state. films thicker than 6 nm are clearly conducting, each showing |s11| values approaching 0 db and similar phase angles of about . this is characteristic for reflection from a highly conducting interface. changes in s21 magnitude and phase show higher sensitivity to these thicker, more conducting films, and our measurement of both s11 and s21 accurately captures this transition from the insulating to the conducting state. the conductance is apparently frequency independent, as inferred from the flat characteristic of the magnitude of scattering parameters. it also increases with increasing film thickness. 3.2. surface conductance  the surface conductance values, s, determined from (7) are listed in table 1 at several frequencies. the observed slight dependence of s on frequency is rather not significant, and for films thicker than 4 nm it is within the measurement uncertainty of 5 %. in comparison the effect of the particle coverage, indicated by the film mass thickness d, is much larger. films 4 nm thick are only weakly conducting with s slightly increasing with frequency, from 210-6 s at 2 ghz to about 4.0  10–6 s at 10 ghz. 
thicker films show higher σs values. σs of 6 nm thick films is in the range of about 1.3·10^-5 s to 1.4·10^-5 s. this value increases further by three orders of magnitude to about 1.5·10^-2 s for films 10 nm thick. such a large increase of conductance within a relatively narrow thickness range is indicative of the dielectric to metallic conductor percolation transition. with increasing d, the metallic nanoparticles, initially separated in a mix with dielectric voids, nucleate to form conducting clusters. the size of the conducting clusters increases with increasing film thickness until a percolation threshold is passed and the predominantly dielectric film becomes a metallic conductor. in our previous work [13], we applied the generalized effective medium theory (gem) to model the effect of film surface coverage, p, on conductivity and found this model to provide a good estimate of the percolation threshold concentration pc and the critical exponents s and t. in the gem model, the conductivity σs below pc scales with the surface conductance of the dielectric, σd, as σs ∝ σd (pc − p)^-s, where the exponent s describes the singular changes in σs near the percolation threshold. σd = 2·10^-7 s is the measured conductance of the alumina substrate. above pc, σs scales with the exponent t, σs ∝ σm (p − pc)^t, where σm is the surface conductance of the pure metal. the universal exponents t and s both have values of about 1.2 ± 0.2, close to the known 'universal' percolation exponent values in two dimensions [14]. similarly, the pc value of about 0.54 is consistent with the value of 0.5 predicted for a 2d square lattice. thus, overall, these results were found to be consistent with charge transport in two dimensions. in comparison, s ≈ 0.73 in three dimensions for thicker, 20 nm films of random metallic particles [15]. the gem model formally "merges" the percolation and effective medium theories and typically captures well the conductivity crossover between the high and low conductivity states that is characteristic of most real dielectric-metallic composite materials in two and three dimensions [16], [17]. the shortcoming of this procedure is that the exact perturbative result is no longer recovered in the additive concentration. the effective medium theory predicts 'effective' absorption near pc in a very narrow concentration range, where the permittivity diverges from dielectric to metallic, even if both the dielectric and conducting components are lossless. in the next section we show experimental evidence of a broad microwave absorption peak that is continuous on the conductivity scale, in apparent contradiction with the effective medium scaling arguments.

3.3. microwave absorption
figure 3 shows the microwave reflectance rs, transmittance ts, and absorption as (8) at 10 ghz plotted directly as a function of σs. this approach removes the ambiguity associated with correlating the measured mass film thickness with the surface coverage in semicontinuous films.

figure 3. microwave transmittance ts (triangles), reflectance rs (solid squares) and absorption as (circles) of au films at 10 ghz in relation to the film surface conductance σs.

table 1. surface conductance σs (in s) at several frequencies (f) for gold films having mass thickness (d).
d (nm) | f = 2 ghz | 5 ghz | 10 ghz | 15 ghz | 20 ghz
4 | 1.86·10^-6 | 2.85·10^-6 | 3.92·10^-6 | 3.71·10^-6 | 3.32·10^-6
6 | 1.33·10^-5 | 1.47·10^-5 | 1.46·10^-5 | 1.33·10^-5 | 1.27·10^-5
7 | 1.00·10^-3 | 1.01·10^-3 | 1.10·10^-3 | 1.10·10^-3 | 1.16·10^-3
8 | 3.00·10^-3 | 3.00·10^-3 | 3.01·10^-3 | 3.01·10^-3 | 3.14·10^-3
9 | 6.48·10^-3 | 6.59·10^-3 | 6.68·10^-3 | 6.72·10^-3 | 6.68·10^-3
10 | 1.87·10^-2 | 1.73·10^-2 | 1.55·10^-2 | 1.44·10^-2 | 1.35·10^-2

reflectance of films with conductance σs < 3·10^-4 s is near zero. in this dielectric regime, the transmittance gradually decreases from 100 % to about 70 %, which is balanced by a gradual increase in absorption, as. within the percolation transition, when σs increases from 3·10^-4 s to 10^-2 s, the transmittance falls rapidly to zero and as shows a peak reaching a maximum of about 0.5 at σs ≈ 3·10^-3 s, while rs increases steeply. above the conductivity percolation transition, when σs > 10^-2 s, as decays, reaching a value of about 10 % at σs ≈ 0.1 s, while rs approaches a value of 90 %. the large value of reflectance above σs ≈ 0.01 s (d > 10 nm) is consistent with the film forming an electrically conducting mesh. the size of the conducting clusters, estimated from our microscopic study, is about 25 nm [13]. thus, the metallic mesh at wavelengths λ longer than the cluster size is almost totally reflecting. in figure 3, rs > 90 % when σs ≈ 0.15 s (d > 12 nm) and the network of au nanoparticles acquires the electrical characteristics of a continuous metallic film, with bulk conductivity σv ≈ 10^7 s/m.

4. discussion
the absorption peak that appears within the conductivity percolation transition (figure 3) is an interesting characteristic of the semicontinuous, randomly distributed conductive nanoparticles. to our knowledge, this is the first experimental evidence of such absorption in the microwave range. the results indicate a non-classical effect where the microwave transmittance falls far more rapidly than the classical skin depth model [18] would suggest. table 2 illustrates this situation, where the attenuation 1 − te = 1 − exp(−2d/δe), due to the skin penetration depth δe = (π f μr μ0 (σs/d))^-1/2, is several orders of magnitude smaller than the actually measured absorption as. for 8 nm thick films, the classical attenuation model predicts an absorption of 2·10^-3, since at 10 ghz σs ≈ 3·10^-3 s and the skin depth δe is about 9 µm. in comparison, the measured absorption approaches a value of 0.5 and the corresponding classical penetration depth δas ≈ −2d/ln(1 − as) decreases to only about 25 nm. large local electromagnetic field fluctuations have been identified numerically at the percolation transition in 2d metallic-dielectric networks, assuming a strong skin effect and frequency-independent conductivity [2]. the presented microwave scattering data provide experimental evidence that the interaction of the electromagnetic field with randomly distributed conductive particles leads to localized resonant modes within the film, confined in a dimension comparable with the particle grain size, which is in the range of 20 nm to 100 nm (table 2).
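for comparison, the classical skin-depth attenuation used in table 2 can be evaluated in a few lines; a minimal sketch, assuming a non-magnetic film (μr = 1) treated as a bulk conductor of volume conductivity σs/d:

    import math

    MU0 = 4e-7 * math.pi  # vacuum permeability in h/m

    def classical_attenuation(sigma_s, d, f, mu_r=1.0):
        """classical skin-depth attenuation 1 - exp(-2d/delta_e) for a film
        of surface conductance sigma_s (s/square) and thickness d (m) at
        frequency f (hz), treating the film as a bulk conductor sigma_s/d."""
        sigma_v = sigma_s / d                                    # volume conductivity, s/m
        delta_e = (math.pi * f * mu_r * MU0 * sigma_v) ** -0.5   # skin depth, m
        return delta_e, 1.0 - math.exp(-2.0 * d / delta_e)

    # 8 nm film at 10 ghz with sigma_s ~ 3e-3 s (table 2)
    delta, att = classical_attenuation(3e-3, 8e-9, 10e9)
    print(f"delta_e ~ {delta*1e6:.1f} um, attenuation ~ {att:.1e}")  # ~8-9 um, ~2e-3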
with the film conductance increasing beyond the conductivity percolation threshold, Σ > 10⁻³ s (d > 6 nm), the phase angle changes from negative to positive values by about π. this indicates a transition from a capacitive (xc) to an inductive reactance (xl), evidencing an lc resonance, which leads to power dissipation over the randomly distributed conducting paths. the film response may be formally described as equivalent to a distribution of resonant lc circuits or electrically shorted lossy transmission lines. the wide range of conductivity values over which the anomalous absorption is observed (figure 3) suggests high damping conditions and a rather broad distribution of the effective guided wavelengths. the ratio of the internal film resistance to the impedance, |r/z|, normalises the resonant absorption such that if z = r then as = 1.0 [19]. when as attains a peak value of ≈ 0.5, the film reactance is inductive (z = r + xl) and r/(r + xl) ≈ 0.5. the result demonstrates that a portion of the injected microwave power is trapped in the magnetic field. within the percolation conductivity transition, the 2d semicontinuous films composed of metallic gold nanoparticles act as a surface reactance with a diminishing resistive part. trapped-mode resonances are characteristic of 2d metasurfaces, which are periodic arrays of planar resonant circuits with dimensions smaller than the wavelength. metasurfaces or metafilms can be considered a subset of 3d metamaterials. the properties of classical thin-film metamaterials have traditionally been characterised by effective permittivity and permeability within the effective medium theory. since the bulk properties and thickness of metasurfaces cannot be uniquely defined, surface susceptibilities are used instead, and the scattering from metasurfaces is characterised in terms of the generalized sheet transition conditions (gstc) [20]. theoretical studies predict a trapped-mode resonance having a fano-type line shape in the reflection/transmission spectra [21], [22]. one of the attractive features offered by metasurfaces is the possibility of compact electromagnetic absorbers. such devices scatter in a narrow band, in the vicinity of the resonant frequency range [23]. in contrast to the gstc results, the scattering parameters (figure 2) and conductance (table 1) of semicontinuous films are evidently frequency independent in the investigated frequency range, suggesting that the characteristic properties of such films can be described by the effective medium (gem) model. the gem universal exponents t ≈ s ≈ 1.2 are consistent with a 2d percolated network. however, using our experimental data, σv ≈ 10⁷ s/m, εd ≈ 10, and 1/(t+s) = 0.42, gem predicts effective absorption in an unrealistically narrow concentration range, |p − pc| < |2π f ε0 εd / (4π σv)|^(1/(t+s)) ≈ 8.6 × 10⁻⁴. to resolve these difficulties, a dynamic effective model was introduced and the effect of broadband scattering of microwaves from inhomogeneous surfaces near percolation coverage has been analysed numerically in terms of the generalized ohm’s law (gol) [22]. the gol simulations support our observation of an anomalous broadband absorption corresponding to resonating structures at the limit of high damping and strong skin penetration.
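as a cross-check of the classical estimates quoted above and of the entries in table 2 below, the following sketch recomputes the skin depth δe = (π f μ0 σ)^(−1/2) (assuming μr = 1, with σ = Σ/d), the attenuation 1 − te, and the effective penetration depth δas = −2d/ln(1 − as) at 10 ghz; the computed values reproduce the table within rounding for most rows.

public class SkinDepth {
    static final double MU0 = 4e-7 * Math.PI; // vacuum permeability, H/m
    public static void main(String[] args) {
        double f = 10e9;                                 // frequency, hz
        double[] dNm    = {7, 8, 9, 10};                 // film thickness, nm
        double[] sigmaS = {1.5e-5, 3e-3, 1.5e-2, 5e-2};  // surface conductance, s
        double[] asMeas = {0.05, 0.5, 0.35, 0.2};        // measured absorption
        for (int i = 0; i < dNm.length; i++) {
            double d = dNm[i] * 1e-9;
            double sigmaV = sigmaS[i] / d;                               // volume conductivity, s/m
            double deltaE = 1.0 / Math.sqrt(Math.PI * f * MU0 * sigmaV); // classical skin depth, m
            double attenuation = 1.0 - Math.exp(-2 * d / deltaE);        // 1 - te
            double deltaAs = -2 * d / Math.log(1 - asMeas[i]);           // effective depth, m
            System.out.printf("d=%2.0f nm: delta_e=%6.1f um, 1-te=%.1e, delta_as=%4.0f nm%n",
                    dNm[i], deltaE * 1e6, attenuation, deltaAs * 1e9);
        }
    }
}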
table 2. surface conductance (Σ), skin depth (δe), attenuation (1 − te), measured absorption (as), and the corresponding penetration depth δas at 10 ghz for au films having thickness d.

d (nm)   Σ (s)        δe (μm)   1 − te       as     δas (nm)
7        1.5 × 10⁻⁵   109       1.3 × 10⁻⁴   0.05   190
8        3 × 10⁻³     9         2 × 10⁻³     0.5    25
9        1.5 × 10⁻²   4         5 × 10⁻³     0.35   40
10       5 × 10⁻²     2.3       9 × 10⁻³     0.2    90

in practice one can easily estimate the scattering properties of 2d semicontinuous films using the surface conductance and the as, ts and rs plots in figure 3. for example, the transmission ts of a single layer with Σ ≈ 3 × 10⁻³ s is about 0.5 and rs ≈ 0.14. neglecting multiple reflections in an n-layer configuration, the transmission will decrease as (ts)^n, rs will remain about the same, and the total absorption will approach 1 − (ts)^n − rs. for a nano-sized structure composed of 10 layers, (ts)^n ≈ 10⁻³ (30 db) and the absorption increases to about 0.85. in comparison, a classical absorber with rs in the range of 0.14 and a total attenuation 1 − (ts)^n of 0.997 would have to be macroscopically thick. somewhat more accurate results may be obtained from the numerical models discussed above. nevertheless, the presented results demonstrate the practical usefulness and advantage of our experimental approach, which utilises the surface conductance as a constituent parameter in the characterisation of anomalous microwave scattering from 2d semicontinuous films.
5. conclusions
the presented measurement technique minimises the impedance mismatch and allows effective coupling of microwaves to nanostructured metallic films over a broad frequency range. the results provide experimental evidence of large resonant absorption in semicontinuous gold films within the conductivity percolation transition. this transition can be easily identified experimentally from the characteristic phase change of the reflection scattering parameter s11. the effective attenuation distance of microwaves is about 25 nm to 100 nm. this is several orders of magnitude shorter than the classical skin depth model would predict. the films attain a maximum absorption of about 0.5 slightly above the percolation threshold, when the film thickness is about 8 nm and the surface conductance is about 2.5 × 10⁻³ s. by tuning the particle concentration towards the percolation threshold, the film reactance becomes inductive and the structure transitions from an electric to a magnetic energy concentrator. the effect opens the possibility of controlled scattering of microwave energy in sub-wavelength structures through networks of metallic nanoparticles. the conductance of films thicker than 15 nm approaches the value of 0.6 s, which corresponds to the volume conductivity of bulk metallic gold of about 10⁷ s/m.
disclaimer
certain commercial equipment, instruments, or materials are identified in this paper to foster understanding. such identification does not imply recommendation or endorsement by the national institute of standards and technology, nor does it imply that the materials or equipment identified are necessarily the best available for the purpose.
references
[1] v. m. shalaev, nonlinear optics of random media: fractal composites and metal-dielectric films, springer-verlag, berlin.
[2] v. a. shubin, a. k. sarychev, j. p. clerc, v. m. shalaev, “local electric and magnetic fields in semicontinuous metal films: beyond the quasistatic approximation”, phys. rev. b 62 (2000) pp. 11230-11244.
[3] n. kaiser, “review of fundamentals of thin film growth”, appl. optics 41 (2002) pp. 3053-3059.
[4] m. hovel, b. gompf, m.
dressel, “dielectric properties of metal films around the percolation threshold, phys. rev. b 81 (2010) p. 035402. [5] v. krachmalnicoff, e. castanie, y. de. widde, r. carminati, “fluctuations of the local density of states probe localized surface plasmons in disordered metal films” phys. rev. lett. 105 (2010) p. 183901. [6] m. walther, d. g. cooke, c. sherstan, m. hajar, m. r. freeman, f. a. hegmann, “terahertz conductivity of thin gold films at the metal-insulator percolation transition” phys. rev. b 76 (2007) p. 125408. [7] a. n. lagarkov, k.n. rozanov, a.k. sarychev, n.a. simonov, “experimental and theoretical study of metal-dielectric percolating films at microwaves” physica a 241 (1997) pp. 199−206. [8] i. r. hooper, j.r. sambles, “some considerations on the transmissivity of thin metal films” opt. express 16 (2008) pp.17249−17257. [9] j. obrzut, o. kirillov, “microwave conductance of semicontinuous metallic films from coplanar waveguide scattering parameters” 2013 ieee international instrumentation and measurement technology conference proceedings, minneapolis, mn, may 6-9, 2013, pp. 912-916. [10] j. obrzut, “general analysis of microwave network scattering parameters for characterization of thin film materials” measurement j., 46 (2013) pp. 2963-2970. [11] a. m. nicolson, “measurement of the intrinsic properties of materials by time-domain technique” ieee trans. instr. meas. 19 (1970) pp. 377–382. [12] w. r. eisenstadt, y. eo, s-parameter-based ic interconnect transmission line characterization, ieee trans. compon, hybrids manuf. technol. 15 (1992) pp. 483–490. [13] j. obrzut, j. f. douglas, o. kirilov, f. sharifi, j. a. liddle,” resonant microwave absorption in thermally deposited au nanoparticle films near percolation coverage”, langmuir, 29, (2013) pp. 9010-9015. [14] s. galam, a. mauger, “universal formulas for percolation threshold”, phys. rev. e 53 (1996) pp. 2177-2181. [15] d. m. grannan, j.c. garland, d.b. tanner, “critical behaviour of the dielectric constant of a random composite near the percolation threshold” phys. rev. lett. 46 (1981) pp. 375 -378. [16] d. s. mclachlan, g. sauti. c. chiteme, “static dielectric function and scaling of the ac conductivity for universal and nonuniversal percolation systems”, phys. rev. b 76 (2007) p. 014201. [17] d. simien, j.a. fagan, w. luo, j.f. douglas, k.m. migler, j. obrzut, “influence of nanotube length on the optical and conductivity properties of thin single-wall carbon nanotube networks”, acs nano 2 (2008) pp. 1879–1884. [18] j. d. jackson, classical electrodynamics, wiley, ny (1999), section 5.18. [19] m. sucher, “measurement of q”, in handbook of microwave measurements; sucher, m.; and fox, j., eds.; polytechnic press of the polytechnic institute of brooklyn: new york (1963) p. 420. [20] c. l. holloway, e. f. kuester, j. g. gordon, j. o’hara, j. booth and d. r. smith, “ an overview of the theory and applications of metasurfaces: the two-dimensional equivalents of metamaterials”, ieee antenna and prop. mag., 54 (2012) pp. 10-34. [21] v. a. fedotov, m. rose, s. l. prosvirnin, n. papasimakis and n. i. zheludev, “sharp trapped mode resonances in planar metamaterials with a broken structural symmetry”, phys. rev. lett., 99 (2007) p. 147401. [22] a. dhoubi, s. n. burokur, a. lupu, a. de lustrac and a. priou, “excitation of trapped modes from a metasurface composed of only z-shaped meta-atoms”, appl. phys. lett., 103 (2013) p. 184103. [23] o. luukkonen, f. costa, c. r. smovski, a. monorchio and s. a. 
tretyakov, “a thin electromagnetic absorber for wide incidence angles and both polarizations”, ieee trans. antennas propag., 57 (2009) pp. 3119-3125.

multi-platform solution for data acquisition
acta imeko issn: 2221-870x march 2023, volume 12, number 1, 1-8
luca lombardo1
1 department of electronics and telecommunications, politecnico di torino, corso duca degli abruzzi 24, 10129 torino, italy
section: research paper
keywords: daq; signal processing; low-cost instrumentation
citation: luca lombardo, multi-platform solution for data acquisition, acta imeko, vol. 12, no. 1, article 28, march 2023, identifier: imeko-acta-12 (2023)-01-28
section editor: francesco lamonaca, university of calabria, italy
received february 2, 2023; in final form february 27, 2023; published march 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: luca lombardo, e-mail: luca.lombardo@polito.it
abstract: analogue data acquisition is a common task which has applications in several fields such as scientific research, industry, food production, safety, and environmental monitoring. it can be carried out either using systems designed ad hoc for a specific application or by using general-purpose digital acquisition boards (daq). several daq solutions are nowadays available on the market; however, most of them are extremely expensive and come as commercial closed products, a factor which prevents users from adapting the system to their specific applications and limits the product compatibility to a few operating systems or platforms. this paper describes the design and the preliminary metrological characterisation of a digital data acquisition solution based on the teensyduino development board. the aim of the project is to create a hardware and software infrastructure suitable to be employed on several operating systems and that can be freely modified by the users when required. the teensyduino board is a well-known development platform characterised by high computing performance and usb support. taking advantage of the teensyduino features, the proposed system is easy to calibrate and use, and it provides functions and performance comparable to many commercial daqs, but at a significantly lower cost.
1. introduction
several applications, both in research and in industry, require the digital acquisition of analogue signals and the subsequent signal processing. this task can be successfully carried out by employing data acquisition boards, usually referred to as daq boards. these systems are typically able to acquire digital and analogue signals, sending data to a host computer via the usb port for storage and further processing, and are widely employed in many applications, from image processing to the acquisition of physical and chemical quantities [1]-[3]. many companies and universities use such general-purpose acquisition boards daily, when it is not economically advantageous to develop a dedicated acquisition system. as a matter of fact, the use of daq boards requires easy interfacing with the host computer in order to carry out several operations, including board calibration [4] and configuration, data acquisition from several sources, and data saving. typically, users are supported in such operations by control and acquisition software, provided by the board manufacturer, which is installed on the host computer. unfortunately, such software is in most cases compatible only with windows os [5]. moreover, the software is usually provided as a commercial closed-license application which cannot be freely modified or updated, so that users depend on the board manufacturer for firmware and software updates and for platform support. this factor is extremely important because it prevents the employment of commercial daqs in several applications where, due to other constraints, the use of windows-based computers is not possible. a few open-source solutions are available [6]-[8], but they usually lack performance (especially in terms of accuracy) when compared with commercial daqs. other daqs, instead, are developed for very specific applications [9], or they feature fpga/dsp-based data processing, which increases costs [10]. this work tries to tackle this issue by providing a simple, though powerful, solution suitable for data acquisition regardless of the employed operating system. the proposed system is based on a teensy board [11], easily available off-the-shelf and able to connect to host pcs via an ’emulated’ serial usb. the usb port is also used to power both the teensy and all the auxiliary circuits; therefore, no external power supply is required. even though it is not possible to guarantee compatibility with all future operating systems, the probability of support being dropped is minimal thanks to the use of basic drivers, typically available for all machines. the acquisition system is compact, easy to use and to calibrate. in particular, calibration is carried out either in-house, if a suitable dmm is available, or in a calibration laboratory. the daq also provides analogue and digital outputs, which can be used to generate analogue signals and to interface with other systems. all these features are typically available only on extremely expensive commercial daqs. two different solutions have been developed, with and without external adcs and dacs. the former, based on high-performing adcs and dacs, can be extremely accurate and can provide outstanding performance, while the latter, based on the internal teensy peripherals, has limited performance but a really low cost. in this work both solutions are explored and, in both cases, suitable control software is provided so that the board use is simple and straightforward.
2. software infrastructure
in order to have a simple and ready-to-use acquisition system, the board should be able to work on almost any type of platform. this implies the development of a software infrastructure which is compatible with most of the machines and operating systems nowadays available on the market. to address this issue, the chosen solution was to rely on a layered software infrastructure, as shown in figure 1. java has been used for developing the software due to its high compatibility and portability on a very wide range of platforms, including windows, linux and macos. the teensy board comes with a native usb interface which embeds an ’emulated’ serial port supporting data transfer rates similar to the usb 2.0 full-speed specifications.
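as context for the driver layer described next, the following is a minimal host-side sketch using the fazecast jserialcomm library [12], the serial driver adopted by the proposed infrastructure; the choice of port index, baud rate and the raw read loop are illustrative assumptions, since the paper does not specify the framing of the daq data stream.

import com.fazecast.jSerialComm.SerialPort;

public class PortProbe {
    public static void main(String[] args) {
        // list every serial port the driver can see; the teensy shows up as a virtual port
        for (SerialPort p : SerialPort.getCommPorts()) {
            System.out.println(p.getSystemPortName() + " - " + p.getDescriptivePortName());
        }
        SerialPort port = SerialPort.getCommPorts()[0]; // assumption: first port is the teensy
        port.setBaudRate(115200);                       // usb cdc ports ignore the rate, but one must be set
        port.setComPortTimeouts(SerialPort.TIMEOUT_READ_BLOCKING, 1000, 0);
        if (port.openPort()) {
            byte[] buf = new byte[4096];
            int n = port.readBytes(buf, buf.length);    // raw sample stream from the board
            System.out.println("read " + n + " bytes");
            port.closePort();
        }
    }
}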
when the teensy is connected to the host computer, it is recognised as a ’virtual serial port’ if a suitable low-level driver is installed. such a driver is available for virtually all operating systems/platforms and, in most cases, it is natively installed. the proposed solution makes use of the fazecast multi-platform serial driver [12], but several other java serial drivers could be used as well. the next layer is the daq driver, which embeds all the control and data-exchange tasks between the virtual serial port and the main application. in particular, the daq driver is based on a java class named teensytask, and all serial data interchange with the teensy board is confined inside another class called usbconnect. any user wishing to employ the daq driver simply has to instantiate a teensytask and add the required channels, either analogue or digital, input or output, using the method createchannel provided by one of the channel classes (aichannel, ...). the user can set all the timing details by using the method timing. eventually, the user can instantiate one of the reader/writer classes (analogsinglechannelreader, ...) and decide the number of samples to be read and/or written (a usage sketch is given below). finally, the main application provides a graphical user interface (gui), which allows users to configure all the daq parameters, visualise the acquired data and export them in several formats. if such an application does not satisfy the requirements of a specific acquisition task, users can freely develop their own software taking advantage of the daq driver. an alternative software architecture using python is currently under development with the aim of further extending the range of platforms supported by the proposed daq solution.

figure 1. the developed software infrastructure which allows the board to be interfaced to a wide range of different platforms, including windows, linux and macos.
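below is a hypothetical usage sketch of the daq driver just described. the class and method names (teensytask, createchannel, timing, analogsinglechannelreader) come from the text, but every signature, parameter and the inlined stub bodies are assumptions added only so that the example compiles; the real driver implements them on top of the serial layer.

class TeensyTask {
    // stub: the real method configures the board's sample rate and buffer depth
    void timing(int sampleRateHz, int samplesToAcquire) { }
}

class AIChannel {
    // stub: the real method registers an analogue-input channel on the task
    static AIChannel createChannel(TeensyTask task, int channelIndex) { return new AIChannel(); }
}

class AnalogSingleChannelReader {
    AnalogSingleChannelReader(TeensyTask task) { }
    // stub: the real method blocks until n samples have been received over usb
    double[] read(int n) { return new double[n]; }
}

public class AcquireExample {
    public static void main(String[] args) {
        TeensyTask task = new TeensyTask();                  // one task per connected board
        AIChannel ch0 = AIChannel.createChannel(task, 0);    // analogue input 0
        task.timing(100_000, 30_000);                        // 100 ks/s, 30000 samples
        double[] v = new AnalogSingleChannelReader(task).read(30_000);
        System.out.println("acquired " + v.length + " samples from " + ch0);
    }
}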
3. teensydaq board without external adc
as a first case study, the author developed an ultra-low-cost daq board using only a teensy 3.6 board and a few external passive components, as shown in figure 2. the board can be proposed for applications in different fields, such as the generic acquisition of analogue signals, impedance measurements, environmental and structural monitoring, corrosion protection (also in the cultural heritage field), motor control for movable equipment and biomedical applications [13]-[16]. the board has a cost in the order of 30 $ and it has dimensions and performance similar to the usb-6001, a commercial daq deployed by national instruments [17]. the daq is fully compliant with the software described in section 2.

figure 2. structure of the ultra-low-cost teensy-based daq board. inside the 3d-printed box there are the teensy 3.6, a few passive components and the input/output connectors.

3.1. board architecture
teensy 3.6 contains a micro-controller with two analogue-to-digital converters and two digital-to-analogue converters, plus several digital input/output pins. therefore, it is possible to arrange a complete solution based only on this development board, adding only a few passive components. some possible configurations have been proposed in the literature [18], [19]; however, some problems still exist:
– the adcs can sample at speeds up to 1 msps, but the inputs are constrained to positive voltages in the range from 0 v to 3.3 v, whereas daq boards usually have bipolar input ranges extending up to ±10 v. it is possible to obtain bipolar ranges with passive components only, but at the cost of a reduced input impedance and accuracy.
– commercial daqs are able to measure very low-amplitude signals thanks to internal active amplifiers, but this cannot be achieved with passive components.
– the adcs have a maximum nominal number of bits equal to 16, but uncertainty and noise limit the effective bit number to about 12. in addition, a low source impedance is required to maintain the 12-bit accuracy, and this limits the type of sources which can be used with the daq.
– the dacs are quite limited as well: despite their high sample-rate capability, the output voltage range is again limited from 0 v to 3.3 v, while commercial boards usually have bipolar outputs in the range of ±10 v.
– digital inputs/outputs are limited to a maximum of 3.3 v, and applying the usual ttl voltage of 5 v can damage the board.
the discussed limitations can be partially overcome. figure 3(a) shows a possible solution for the unipolar teensy adc inputs. basically, a fixed offset is added so that the ain voltage sits at the centre of the adc range when the input voltage 𝑉in is zero. the relation between the adc input voltage 𝐴in and the input voltage 𝑉in can be written as
𝐴in = 𝐾 + 𝛼 𝑉in , (1)
where 𝛼 is the attenuation factor of the input voltage and 𝐾 is a fixed dc offset which is added to the scaled input voltage:
𝐾 = 𝑉+ 𝑅1 𝑅2 / (𝑅1 𝑅2 + 𝑅1 𝑅3 + 𝑅2 𝑅3) , (2)
𝛼 = 𝑅2 𝑅3 / (𝑅1 𝑅2 + 𝑅1 𝑅3 + 𝑅2 𝑅3) . (3)
by using this configuration, virtually any bipolar input voltage range can be obtained, while keeping the adc input within the allowed range, by selecting the resistors 𝑅1, 𝑅2 and 𝑅3 to satisfy the following equations:
𝐾 = 𝑅21 / (𝑅3 + 𝑅21) ∙ 𝑉+ = 𝑉ref / 2 , with 𝑅𝑖𝑗 = 𝑅𝑖 𝑅𝑗 / (𝑅𝑖 + 𝑅𝑗) , (4)
𝛼 = 𝑉ref / (2 𝑉in,max) , (5)
where 𝑉ref is the teensy adc reference voltage, usually equal to either 1.2 v or 3.3 v, and 𝑉in,max is the maximum input voltage. the equivalent input impedance can be easily calculated as
𝑅in = 𝑅1 + 𝑅23 . (6)
as an example, by using 𝑉+ = 𝑉ref = 3.3 v and selecting resistors 𝑅1 = 849 kΩ, 𝑅2 = 300 kΩ and 𝑅3 = 221 kΩ, a maximum input of about ±10 v and an input impedance of about 1 MΩ can be obtained without using any active component, selecting the resistors only from the e96, 1 % series (a numerical check of these expressions is sketched after the list below). however, some problems are still present:
– the input impedance is quite high, in the order of 1 MΩ, but still lower than that of commercial daqs, whose typical input impedance is of the order of 1 GΩ. moreover, a voltage appears on the input pins; in principle this is not a problem, but it could either harm some circuits or interfere with their operation.
– the equivalent impedance at the ain pin is of the order of 115 kΩ, while the teensy specifications state that the analogue input impedance should be kept at the minimum possible value. while this might not be a problem, there are cases where the adc accuracy cannot be guaranteed. this problem could be easily solved by inserting a buffer amplifier, but this would increase the cost and prevent a really simple system from being arranged.
– using the same value for 𝑉+ and 𝑉ref makes the contribution of the 𝑉+ uncertainty negligible, but the 1 % resistor uncertainty may lead to an uncertainty on each input range of the order of 2 %.

figure 3. (a) input passive conditioning network which allows the teensy to measure bipolar signals in a fixed input range. (b) simple configuration to test the adc noise and accuracy performance for different input resistances.
figure 4. example of a dc voltage acquired with two different input resistances of 1 kΩ and 1 MΩ, respectively. the inset shows the signal acquired in the same conditions when the dac is programmed to output a sine signal.
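a minimal numerical check of eqs. (1)-(6), using the resistor values quoted above (𝑉+ = 𝑉ref = 3.3 v, 𝑅1 = 849 kΩ, 𝑅2 = 300 kΩ, 𝑅3 = 221 kΩ); note that with these e96 values the nominal range comes out somewhat above the ±10 v quoted in the text, consistent with the approximate wording there.

public class DividerCheck {
    public static void main(String[] args) {
        double vPlus = 3.3, vRef = 3.3;
        double r1 = 849e3, r2 = 300e3, r3 = 221e3;
        double den = r1 * r2 + r1 * r3 + r2 * r3;
        double k = vPlus * r1 * r2 / den;            // eq. (2): dc offset at the adc pin
        double alpha = r2 * r3 / den;                // eq. (3): input attenuation factor
        double rIn = r1 + (r2 * r3) / (r2 + r3);     // eq. (6): equivalent input impedance
        double vInMax = vRef / (2 * alpha);          // eq. (5) rearranged
        System.out.printf("K = %.3f v (target vref/2 = %.3f v)%n", k, vRef / 2);
        System.out.printf("alpha = %.4f, nominal max input ~ +/-%.1f v%n", alpha, vInMax);
        System.out.printf("Rin ~ %.0f kOhm%n", rIn / 1e3);
    }
}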
this means that, in the absence of other corrections, at full input range a 2 % uncertainty should be expected (200 mv on the input value). if such a measurement uncertainty is not acceptable, there is a simple way of calibrating the system. as the scaling gain of each input may be slightly different, each channel must be individually calibrated. the solution is the use of a known voltage to be measured by each channel. as an example, by employing the texas instruments lm4030amf-2.5 voltage reference, featuring an output voltage of 2.5 v with a 0.05 % maximum uncertainty and a temperature coefficient of 10 ppm/°c, the calibration can be easily performed for each channel. the author decided to put the lm4030amf-2.5 on the pcb and to take its output out on one connector pin, without adding any digital switch for the calibration, with the aim of keeping the system cost as low as possible. in this way the calibration has to be done by manually shorting each input to ground to measure 𝐾 and then connecting it to the reference output to measure 𝛼. the values are then stored in the teensy memory for automatically correcting the acquired data. all these operations do not eliminate the problem related to the impedance on the analogue input pins of the teensy. under the best conditions, i.e., with hardware averaging enabled, differential mode and a sufficiently slow adc clock, the accuracy is better than 13 bits, but if all these conditions are not met, the accuracy may severely degrade, especially if the impedance seen by the analogue inputs is high. in order to test such performance deterioration, the configuration shown in figure 3(b) can be used. using a dac output lets users change the signal amplitude, so that it is possible to stimulate the teensy adc with a voltage inside the allowed range, while observing the degradation of the sampled output signal as the input resistance 𝑅in changes. as an example, figure 4 superposes two acquisitions carried out with 𝑅in of 1 kΩ and 1 MΩ. the difference in terms of noise performance is clear. moreover, no change was observed by driving the dac with different dc values. the inset shows, instead, the data acquired when the dac is programmed to output a sine signal with the same two input resistances. the signals have a different phase, as no phase adjustment was performed, and different amplitudes, since the adc loading effect is not negligible when using a resistance of 1 MΩ. in any case, the noise effect on the acquired signal is similar to the dc case. computing the equivalent bits of the acquisition is not straightforward, as it depends on the acquisition purpose. in the worst-case scenario, i.e., if the instantaneous value of the signal is required, the equivalent bits can be obtained as
𝐸𝑞bits = log₂(2¹⁶ / ∆) , (7)
where ∆ is the maximum difference between the real value and the measured value. the author tested the adc for resistances in the range from 1 kΩ to 1 MΩ by using two different adc configurations (a sketch of the computation of (7) follows the list below):
– maximum sampling rate (about 200 khz) and no hardware averaging;
– slow sampling rate (about 5 khz) and a hardware averaging of 32.
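a small sketch of the worst-case equivalent-bit computation of eq. (7); the only assumption is that both the acquired and the reference samples are expressed in 16-bit adc codes.

public class EquivalentBits {
    static double eqBits(double[] measured, double[] reference) {
        double delta = 0;
        for (int i = 0; i < measured.length; i++) {
            // delta is the maximum deviation between acquired and reference samples, in codes
            delta = Math.max(delta, Math.abs(measured[i] - reference[i]));
        }
        return Math.log(65536.0 / delta) / Math.log(2.0); // eq. (7): log2(2^16 / delta)
    }
    public static void main(String[] args) {
        // a deviation of 16 codes on a 16-bit adc leaves 12 equivalent bits
        System.out.println(Math.log(65536.0 / 16) / Math.log(2.0)); // prints 12.0
    }
}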
the results are shown in figure 5, where the equivalent bit number is calculated as a function of the input resistance. the figure shows how, at maximum speed, the equivalent bits exceed 10 bits only when the resistance is negligible and severely degrade as the input resistance increases, while the equivalent bits are close to 13 when the speed is limited to 5 khz. it is worth noting that this is the worst case: if, as an example, a sine signal is used and only the sine amplitude is required, much better values can be obtained, as the noise tends to spread over a wide frequency range. this is confirmed by the fft of the acquired data. figure 6, as an example, shows the fourier transform of the data acquired at a sampling rate of 100 khz with an input resistance of 1 MΩ. apart from the peak at 50 hz, which is expected when the source impedance is high and no shielding is applied, it is clear how the noise spreads over a large frequency range.

figure 5. equivalent bit number as a function of the input resistance 𝑅in. the two traces refer to the adc acquiring in single-ended mode with a sampling rate of 200 khz (no hardware averaging) and 5 khz (with a hardware averaging of 32), respectively.
figure 6. fft of the acquired data in the case of a sampling rate of 100 khz and an input impedance of 1 MΩ. the inset shows that the peak highlighted by the fft is due to the 50 hz mains noise which couples to the input when the source impedance is high.

3.2. performance
the daq which uses only the teensy 3.6 and a few external components has several advantages, including an extremely low cost and a very compact size. unfortunately, the absence of any active component at the input stages also involves several issues which can be addressed only partially. the lack of any gain stage at the input limits the capability of the daq to acquire signals with very low amplitudes. the relatively low input impedance limits the employment of the board to low-impedance sources only. moreover, as shown by figure 5, even at low input resistance (a few kΩ) there is an unacceptable signal degradation. better results can be obtained when the teensy 3.6 adc is used with hardware averaging and at a low sampling rate. in these cases, the adc is able to deliver an accuracy of about 12 bits when used on single-ended channels. this would turn into about 1.2 × 10⁻⁴ relative uncertainty at full range, but such an uncertainty can be achieved only with a very low input impedance.
4. teensydaq board and external converters
to overcome the performance degradation due to the low input impedance and the lack of active gain stages, the author designed an advanced daq board with performance comparable to high-end commercial systems by adding some external active devices. nevertheless, the cost of the new daq is about 100 $, still much lower than the price of competing products. figure 7 shows the general block diagram of the high-performance daq. the new daq version still employs the teensy 3.6, already used for the ultra-low-cost board. the teensy 3.6 can be easily replaced with the teensy 4.1, which features significantly better performance (larger ram to store the data, higher clock frequency and improved processing capabilities). however, the teensy 4.1 lacks the dac converters and, therefore, cannot be employed on the ultra-low-cost daq.
the principal task of the teensy is to interface the external adc and dac converters, to acquire and temporarily store the data, and to send them to the host computer. moreover, some optional signal processing (such as averaging and filtering) can be implemented on the teensy itself thanks to its high computing capabilities. the daq board is directly powered from the 5 v of the usb port; therefore, no external power supply is required. however, internally the system needs different voltages to operate properly. for this reason, a dedicated power-supply module, shown in figure 7, is implemented in the daq. this module employs a boost/inverter dc-dc converter and a few ultra-low-noise linear voltage regulators to provide the ±15 v rails and the low-noise 3.3 v voltage required by the analogue inputs and outputs and by the acquisition chain. another 3.3 v rail is generated by the teensy and used to supply only the digital circuits, avoiding any noise coupling between analogue and digital devices. the overall performance and functionality of this daq are comparable to, or even better than, those of many commercial boards available at a much higher cost.
4.1. analogue input channels
the section related to the analogue input channels is highlighted in green in figure 7, while figure 8 shows the detailed implementation of the adc acquisition chain. the daq features 8 fully differential inputs which can alternatively be used as single-ended inputs. each channel has an input impedance higher than 1 GΩ, and all the channels can be sampled with a user-selectable sampling rate up to 1 msample/s shared among the active channels. the acquisition of multiple channels is carried out sequentially by using an analogue multiplexer which features low noise, low distortion and high bandwidth. a high-performance precision programmable-gain amplifier (pga) is employed to maximise the input range capability of the acquisition chain. the pga is connected to the multiplexer output and can be programmed to have 11 different gains in the range from 1/8 v/v to 128 v/v. additionally, the pga can add an optional 1.375 gain factor. therefore, thanks to the 22 different input gain combinations of the pga, the daq is able to acquire analogue signals in bipolar ranges from about ±19 mv up to about ±12 v (a sketch enumerating these nominal ranges follows the figures below). moreover, the pga is designed to add a fixed dc voltage to the output, so that the subsequent stages can be powered by a single-ended power supply. the selected adc is the texas instruments ads8881 chip [20]. this high-performance adc features a resolution of 18 bits and a maximum sampling rate of 1 msample/s. it exhibits very good performance both in terms of linearity and accuracy when its inputs are properly decoupled and buffered. for this reason, a dedicated differential buffer/driver with an anti-aliasing filter has been inserted between the pga outputs and the adc inputs. the topology selected for the anti-aliasing filter is a classic differential rc first-order low-pass filter. particular attention has been paid to the design of the external voltage-reference circuit required by the adc. a high-accuracy, low-drift voltage reference with a nominal output of 2.5 v was employed along with three operational amplifiers. in particular, the reference voltage selected for the prototype was the ref5025i, featuring a low-noise output voltage of 2.5 v with an initial maximum uncertainty of 0.05 % and a temperature coefficient of only 3 ppm/°c.

figure 7. block diagram of the high-performance daq board which employs the teensy 3.6 and external adc and dacs.
figure 8. block diagram of the analogue acquisition chain featuring the ads8881 high-performance adc.
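the 22 nominal input ranges can be enumerated directly; in the sketch below the ±2.5 v adc full scale (the ads8881 with its 2.5 v reference) and the ±12 v clamp imposed by the input-stage supplies are assumptions consistent with the figures quoted above.

public class PgaRanges {
    public static void main(String[] args) {
        double fullScale = 2.5;   // v, assumed adc full scale (ads8881 with a 2.5 v reference)
        double supplyClamp = 12;  // v, assumed limit set by the input-stage supplies
        for (int i = -3; i <= 7; i++) {                  // 11 binary gains: 2^-3 ... 2^7
            double g = Math.pow(2, i);
            for (double extra : new double[] {1.0, 1.375}) {  // optional x1.375 factor
                double range = Math.min(fullScale / (g * extra), supplyClamp);
                System.out.printf("gain %8.3f v/v -> +/-%8.4f v%n", g * extra, range);
            }
        }
    }
}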
one high-speed operational amplifier is used as a buffer to provide the reference voltage to the adc. such an amplifier satisfies the bandwidth and stability requirements of the adc operating at its maximum frequency, but it does not have a suitable accuracy. to overcome this problem, a high-accuracy operational amplifier has been inserted in the feedback loop of the first amplifier. in this way, all the adc requirements in terms of accuracy, bandwidth and stability are satisfied. a third high-accuracy operational amplifier is instead employed to generate the 1.25 v required for the proper operation of the pga. the data stream provided by the adc is read by the teensy 3.6 via the spi0 interface and temporarily stored in the microcontroller ram together with an accurate timestamp and a channel reference. once acquired, the samples can be sent to the host pc via the usb interface without any significant latency or sampling-rate reduction. optionally, the acquired data can be stored on the micro secure digital (µsd) card available on the teensy. this is an added value of the proposed solution, which can additionally operate as a standalone data logger without connection to any host computer. of course, in this case, the daq board has to be powered by an external power source; as an example, a rechargeable battery can be employed, making the daq a portable data logger able to operate also in remote locations.
4.2. analogue output channels and digital input/output
the daq can also be equipped with a maximum of four analogue output channels working with a resolution of 16 bits and a maximum sampling rate of 100 khz, as highlighted in yellow in figure 7. the outputs are capable of generating positive and negative voltages and currents in different user-selectable ranges. these dacs can be used along with the analogue input channels and are able to generate arbitrary waveforms up to about 20 khz. in this case, the waveform can be configured from the host pc and programmed into the teensy, which takes charge of sending the proper samples to the dacs without any user intervention (a host-side waveform-generation sketch is given below). additionally, the board has 16 digital input/output channels (shown in orange in figure 7) which are able to operate at 5 v. the channels can be read/written simultaneously in blocks of 8 bits and can be individually configured either as inputs or outputs. these digital channels are obtained by means of an io-expander chip connected to the teensy 3.6 via the spi2 interface. eventually, four high-speed digital inputs and four high-speed digital outputs are available as well. these digital lines are directly connected to the teensy 3.6 io pins through a level converter able to operate with any logic level between 1.8 v and 3.6 v at a maximum frequency of 1 mhz. the analogue outputs can be useful in several specific applications: as an example, they can accurately bias sensors such as load gauges and generate control signals for several systems. moreover, together with the analogue inputs, they can be used to implement impedance measurements and readout units for electrochemical sensors. eventually, the availability of digital lines allows the daq to interface with digital systems, and the lines can optionally be used in pwm mode.
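as an illustration of the arbitrary-waveform feature, the sketch below builds a 16-bit sine table on the host; the table length and the code mapping are illustrative assumptions, since the paper does not specify the download format.

public class SineTable {
    public static void main(String[] args) {
        int n = 1000;                       // samples per period (assumed)
        double sampleRate = 100_000;        // samples/s, the stated dac maximum
        System.out.printf("output frequency: %.1f hz%n", sampleRate / n);
        int[] codes = new int[n];
        for (int i = 0; i < n; i++) {
            double x = Math.sin(2 * Math.PI * i / n);          // -1 ... +1
            codes[i] = (int) Math.round((x + 1.0) * 32767.5);  // map to 0 ... 65535
        }
        // codes[] would then be downloaded to the teensy, which streams it to the dac
        System.out.println("first codes: " + codes[0] + ", " + codes[n / 4]);
    }
}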
4.3. high-performance teensydaq characterisation
the design of the high-performance teensydaq has been carried out trying to obtain an optimal trade-off among performance, cost, size and functions. therefore, smd packages have been selected both for performance and size constraints. unfortunately, this can be an issue during the first testing stage of the prototype, smd devices being more difficult to handle and connect on development boards. for this reason, smd-to-tht adapters, test jumpers and sockets have been used to realise the first prototype of the board. this solution allows one to easily arrange/modify the circuits under test; however, the employment of wires and sockets severely degrades the ac performance due to multiple noise-coupling sources. nevertheless, the author tried to optimise the single blocks of the daq and carried out a preliminary characterisation of the dc and ac performance, as described in the following sections, obtaining very promising results which are comparable with commercial high-end products.
4.4. dc performance
initial tests have been carried out in order to assess dc accuracy and repeatability. all tests have been performed using as reference voltmeter the 8.5-digit hp 3458, calibrated less than 2 years ago with a stated dc uncertainty of less than 1 ppm. the adc has been used at a sampling frequency of 100 khz, acquiring about 30000 samples for each measurement (about 0.3 s measuring time). initially, the complete acquisition chain has been verified (adc together with the pga and the driver, see figure 8) to observe the linearity and the overall gain of the daq. figure 9 shows the experimental results obtained in the input voltage range from -2.5 v to 2.5 v with the pga gain set to unity. the plot inset shows an excellent linearity of the entire acquisition chain. the difference between the actual measured values and the expected ones is reported as well. the difference is extremely low, with an overall gain very close to 1 and a negligible offset. even though the number of measurements is quite limited, it is possible to estimate the maximum linearity error of the entire measurement chain at about 200 µv, while it mostly remains under 40 µv, as visible from figure 9. then, the acquisition-chain noise and the measurement repeatability have been assessed. the test has been carried out in three steps, with the aim of quantifying the influence of each stage of the chain. initially, the performance of the adc alone has been checked and compared with the specifications of the manufacturer. a stabilised power supply has been used to apply voltages between -2.5 v and 2.5 v directly to the adc input. then, the same measurement has been repeated including the adc driver and the pga. in particular, the input voltage has been measured at the same time with the daq and the hp 3458 voltmeter, acquiring respectively about 30000 and 20 measurements for each voltage. the standard deviation of the hp 3458 was negligible for all measurements. figure 10 shows the three sets of results: each dot shows the standard deviation obtained by the daq, respectively applying the input voltage to the adc, to the adc together with its driver, and to the whole acquisition chain (including the pga).

figure 9. transfer characteristic of the analogue acquisition chain obtained in an input voltage range of -2.5 v to 2.5 v by setting the pga gain to 1, and the difference between expected and measured values.
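the standard deviations discussed next are quoted in ppm; a short sketch, assuming an 18-bit code over the ±2.5 v span with ppm referred to the 2.5 v half-range (an assumption consistent with the "1 lsb = 8 ppm" figure below), makes the conversion explicit.

public class PpmPerLsb {
    public static void main(String[] args) {
        double span = 5.0;                  // v, -2.5 v ... +2.5 v
        double lsb = span / (1 << 18);      // one 18-bit code, about 19 uv
        double ppmPerLsb = lsb / 2.5 * 1e6; // referred to the 2.5 v half-range
        System.out.printf("1 lsb = %.1f uv = %.1f ppm%n", lsb * 1e6, ppmPerLsb);
        System.out.printf("35 ppm std dev ~ %.1f lsb%n", 35.0 / ppmPerLsb);
    }
}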
of course, the lowest standard deviation is obtained with the adc alone, with a maximum value of about 35 ppm. it is worth noting that 1 lsb at 18-bit resolution corresponds to 8 ppm; therefore, the adc standard deviation is of the order of 4 lsb, in line with the 3 lsb declared by the manufacturer. the standard deviation slightly increases up to about 50 ppm when the adc driver is included, which corresponds to about 6 lsb. eventually, the standard deviation reaches values of about 90 ppm when the complete acquisition chain is used. this increase is partially due to the intrinsic noise of the pga, but also to the presence of long connection wires, which can no longer be neglected since the input impedance of the pga is higher than 1 GΩ. nevertheless, such results are extremely promising because, under the same conditions, several commercial high-end devices, available at much higher prices, have an uncertainty in the order of 3000 ppm without calibration and more than 100 ppm after calibration [21].
4.5. ac performance
obtaining an ac characterisation of the daq is not straightforward. as an example, the dmm employed for the dc characterisation has an ac uncertainty of 10 ppm to 100 ppm depending on range and frequency; therefore, it is quite difficult to obtain a good transfer uncertainty ratio. for this reason, it was decided to use a low-distortion sine generator and to compute the fourier transform of the acquired samples. the generator has a stated distortion of 0.01 %, but it is not perfectly stable in frequency; therefore, it is not possible to compute the transform over an integer number of periods. acquisitions have been carried out at nominal frequencies of 10 hz, 100 hz, 1 khz and 10 khz, while the adc sampling frequency has been set to 100 khz for all measurements. as an example, figure 11 shows the result obtained with a 1 khz sine signal. it is possible to note the presence of high-frequency noise as well as components at 100 hz and multiples, probably due to noise and mains coupling with the daq signal path. notably, there are also two symmetrical lines around the main component at 1 khz, probably due to the frequency instability of the signal generator. nevertheless, the tail of the spectrum reaches -110 db at low frequency and more than -90 db at high frequency. therefore, the achieved performance is generally quite promising, also considering that the assembly of this first prototype has multiple noise-coupling paths due to the several wired connections and sockets between the daq components.
5. conclusions
the paper presents the design and initial characterisation of two multi-platform daq solutions, with the aim of addressing several issues currently affecting most commercial devices: limited software/hardware compatibility and support, high cost, and the impossibility of freely modifying the software to fit specific applications. the two boards are based on a teensy 3.6 and are able to operate on most operating systems thanks to a dedicated software infrastructure designed to simplify use, maintenance and upgrading. the first version is very simple and employs only the teensy 3.6 and a few passive devices. it features an extremely low cost of about 30 $, but it has quite limited performance and can be employed only when the input signal impedance is very low. the second version, instead, uses external high-performance adc and dac converters and is able to achieve performance comparable with high-end commercial products, at a cost anyway lower than 100 $.
the first prototype of the daq has been tested and several optimisations have been carried out. the achieved results are very interesting and demonstrate the feasibility of the proposed board as an alternative solution to commercial products. nevertheless, due to the circuit assembly implemented using sockets and several connection wires, the ac performance requires further investigation. for this reason, a new prototype is currently under development. the new board will be realised using printed circuit boards in a modular design, where each block (as shown in figure 7) will be implemented as an independent module, so that it will be very easy to change or upgrade any part of the system.

figure 10. standard deviation obtained from the proposed daq in the range from -2.5 v to 2.5 v, respectively considering the adc alone, the adc together with its driver, and the complete acquisition chain.
figure 11. fourier transform of the acquired data using a 1 khz sine input signal.

references
[1] l. iannucci, m. parvis, p. cristiani, r. ferrero, e. angelini, s. grassini, a novel approach for microbial corrosion assessment, ieee transactions on instrumentation and measurement, vol. 68, 2019, no. 5, pp. 1424-1431. doi: 10.1109/tim.2019.2905734
[2] l. es sebar, s. grassini, m. parvis, l. lombardo, a low-cost automatic acquisition system for photogrammetry, proceedings of ieee instrumentation and measurement technology conference, 2021. doi: 10.1109/i2mtc50364.2021.9459991
[3] l. iannucci, chemometrics for data interpretation: application of principal components analysis (pca) to multivariate spectroscopic measurements, ieee instrumentation and measurement magazine, vol. 24, 2021, no. 4, pp. 42-48. doi: 10.1109/mim.2021.9448250
[4] a. carullo, metrological management of large-scale measuring systems, proceedings of ieee instrumentation and measurement technology conference, 2004, pp. 97-101. doi: 10.1109/imtc.2004.1351004
[5] national instruments, national instruments training resources. online [accessed 21 february 2022] https://education.ni.com/training/resources
[6] open daq board. online [accessed 28 june 2022] https://www.open-daq.com
[7] monodaq d.o.o., monodaq board. online [accessed 21 february 2022] https://www.monodaq.com
[8] h. wang, z. jie, x. nie, d. li, design of pic microcontroller-based high-capacity multi-channel data acquisition module, proceedings of 2012 international conference on measurement, information and control, 2012, pp. 685-688. doi: 10.1109/mic.2012.6273385
[9] n. erraissi, m. raoufi, n. aarich, m. akhsassi, a. bennouna, implementation of a low-cost data acquisition system for “propre.ma” project, measurement, vol. 117, 2018, pp. 21-40. doi: 10.1016/j.measurement.2017.11.058
[10] p. m. pinto, j. gouveia, p. m. ramos, development, implementation and characterization of a dsp based data acquisition system with on-board processing, acta imeko, vol. 4, 2015, no. 1, pp. 19-25. doi: 10.21014/acta_imeko.v4i1.156
[11] pjrc, teensy usb development board. online [accessed 21 february 2022] https://www.pjrc.com/teensy
[12] fazecast, inc., fazecast java serial driver. online [accessed 21 february 2022] https://fazecast.github.io/jserialcomm
[13] l. iannucci, l. lombardo, m. parvis, p. cristiani, r. basseguy, e. angelini, s. grassini, an imaging system for microbial corrosion analysis, proceedings of ieee international instrumentation and measurement technology conference, 2019. doi: 10.1109/i2mtc.2019.8826965
[14] m.
scarpetta, m. spadavecchia, g. andria, m. a. ragolia, n. giaquinto, simultaneous measurement of heartbeat intervals and respiratory signal using a smartphone, proceedings of ieee international symposium on medical measurements and applications (memea), 2021. doi: 10.1109/memea52024.2021.9478711
[15] l. de palma, m. scarpetta, m. spadavecchia, characterization of heart rate estimation using piezoelectric plethysmography in time- and frequency-domain, proceedings of ieee international symposium on medical measurements and applications (memea), 2020. doi: 10.1109/memea49120.2020.9137226
[16] l. es sebar, l. lombardo, m. parvis, e. angelini, a. re, s. grassini, a metrological approach for multispectral photogrammetry, acta imeko, vol. 10, 2021, no. 4, pp. 111-116. doi: 10.21014/acta_imeko.v10i4.1194
[17] national instruments, daq model usb-6001. online [accessed 21 february 2022] https://www.ni.com/it-it/support/model.usb-6001.html
[18] l. es sebar, l. iannucci, e. angelini, s. grassini, m. parvis, electrochemical impedance spectroscopy system based on a teensy board, ieee transactions on instrumentation and measurement, vol. 70, 2021. doi: 10.1109/tim.2020.3038005
[19] l. es sebar, l. iannucci, c. gori, a. re, m. parvis, e. angelini, s. grassini, in-situ multi-analytical study of ongoing corrosion processes on bronze artworks exposed outdoors, acta imeko, vol. 10, 2021, no. 1, pp. 241-249. doi: 10.21014/acta_imeko.v10i1.894
[20] texas instruments, ads8881, 18-bit, 1-msps, 1-ch sar adc with true-differential input, spi interface and daisy-chain. online [accessed 21 february 2022] https://www.ti.com/product/ads8881
[21] national instruments, daq model usb-6212. online [accessed 21 february 2022] https://www.ni.com/en-us/support/model.usb-6212.html

editorial
acta imeko june 2014, volume 3, number 2, 2 www.imeko.org
paul regtien
measurement science consultancy (msc), julia culpstraat 66, 7558 jb hengelo (ov), the netherlands
section: editorial
citation: paul regtien, editorial, acta imeko, vol. 3, no. 2, article 2, june 2014, identifier: imeko-acta-03 (2014)-02-02
editor: paolo carbone, university of perugia
copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: paul regtien, e-mail: paul@regtien.net
this issue of acta imeko comprises the third set of articles that are based on papers presented at the xx imeko world congress, busan, republic of korea, 9–12 september, 2012.
the issue includes the extended and updated versions of papers from tc3 (measurement of force, mass and torque) and tc16 (pressure and vacuum measurement). the remaining submissions from this world congress to be published by acta imeko will appear in the forthcoming issue. this edition starts with eight research papers on force, mass and torque measurements (tc3). the first one deals with the proposed new si and the consequences for mass metrology. the next two papers discuss models for a checkweigher and a dynamic torque calibrator, respectively, and they are followed by a paper on an improved force standard machine. calibration systems for torque are presented in the next two contributions and, in a subsequent paper, the effect of humidity on torque transducers is discussed. the last contribution from tc3 is about a multi-component facility for torque and force. four articles from tc16 follow on equipment for pressure and vacuum measurements: a pressure balance piston, capacitive diaphragm gauges, pressure measuring multipliers and finally a discussion on rarefied gas flow in pressure and vacuum measurements. the issue closes with an advertisement from hbm, a contributor to the confederation. advertising is open for other supporting members. we would like to thank all those who have contributed to this issue: authors, reviewers, copyeditors, layout editors and journal managers. special thanks go to the section editors yon-kyu park (tc3) and jorge torres-guzman (tc16). we hope you enjoy this issue.

characterization of power quality transient phenomena of dc railway traction supply
acta imeko july 2012, volume 1, number 1, 26-35 www.imeko.org
andrea mariscotti
università di genova, via opera pia 11a, 16145 genova, italy
keywords: conducted interference; guideway transportation systems; power quality; spectral analysis; supply transients; time domain analysis
citation: andrea mariscotti, characterization of power quality transient phenomena of dc railway traction supply, acta imeko, vol. 1, no. 1, article 8, july 2012, identifier: imeko-acta-01(2012)-01-08
editor: pedro ramos, instituto de telecomunicações and instituto superior técnico/universidade técnica de lisboa, portugal
received january 4th, 2012; in final form may 4th, 2012; published july 2012
copyright: © 2012 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: information not available
corresponding author: andrea mariscotti, e-mail: andrea.mariscotti@unige.it
abstract: dc railways are considered as an example of a complex distribution network, featuring distributed supply and moving loads. the power quality phenomena affecting the traction voltage are many, steady and transient, depending on the converter operations at substations and on-board vehicles. a classification is proposed, followed by a discussion of the correct methods for their quantification, with reference to related standards and to the vast background related to ac networks. the characterization of pq is related on one side to the ripple and fluctuation of the pantograph variables for supply purposes, and on the other side to the appearance of specific components relevant for interference to signalling on the track. considerations and conclusions are based on a large amount of measured waveforms, recorded on the italian railway networks, and are aimed at contributing to the forthcoming standards on traction supply quality.
1. introduction
power quality (pq) in railways encompasses a variety of phenomena, steady and transient: harmonic components (characteristic of the substation rectifiers), other similar components (not harmonically related to the supply frequency on the feeding ac side of substations, but caused by on-board converters and drives) and low-frequency voltage fluctuations (due to more or less periodic load cycles) are examples of the first kind; steep current absorptions (in the form of the inrush current following the pantograph rise), line voltage variations (due for example to pantograph bounce), and complex changes of modulation patterns (due to various types of transients, sudden tractive efforts and braking, wheel slip and slide, etc.) are examples of the second kind. pq in railway traction systems is relevant for several reasons: line voltage variations influence the pantograph voltage perceived by the single rolling stock and its performance [2]; harmonic distortion in combination with the traction line impedance can trigger line voltage instability [2]; uncommon components in the absorbed current spectrum may lead to interference to signalling circuits when flowing back to the substation as return current through the rails. more generally, the pq compatibility of a rolling stock unit with a railway network is recognised as electrical interoperability, part of the overall interoperability, aiming at ensuring safe and efficient circulation across different railway networks in different countries, such as those of the european union. pq properties and definitions are reviewed in section 2 with reference to standards, distinguishing those phenomena that are peculiar to dc distribution. in section 3 these elements are revisited and alternative methods are considered for their calculation. in section 4 measured data are used to verify both the considered methods and the relevance of the pq issues occurring on real railway networks.
2. review of power quality phenomena
definition and analysis of pq phenomena in ac railways are easier because they may be derived from ac supply networks [3], notwithstanding the different supply frequency, the distribution scheme and the presence of moving loads. in ac railways the emissions from substations and rolling stock are all of the harmonic type with reference to the common supply frequency [4], featuring a stability that is slightly worse than that of country-wide power grids [5], [6]. in dc railways there is no such synchronising element and the emissions are harmonically related to independent sources: the substation remains supplied by the dedicated feeders derived from the country-wide power grid, while for the rolling stock the motor frequency and the switching frequency of dc/dc converters and inverters are the reference.
switching frequency of dc/dc converters and inverters are the reference. the time window and the frequency resolution for spectrum analysis thus do not follow immediately from [3]. for ac and dc railways, besides power quality issues related to harmonic disturbance and its composition [4], network resonance and traction line/rolling stock interaction [7]-[9], a very basic requirement that impacts train performance and service efficiency is the voltage level "seen" by the train pantograph, called useful voltage. the useful voltage uav,u is defined in en 50388 [2] for dc railway systems as the average of the mean value of the pantograph voltage vp (i.e. its dc component) over a well-defined geographical area of the national network and for one or several trains. thus, there is a distinction between uav,u(zone), where the average is made over all the circulating trains in a given zone, and uav,u(train), where the average is made for one train over a predetermined journey or its timetable. the measurement data collected in the past, which are used for the following analysis, were all recorded on a single train and thus allow the calculation of uav,u(train) only. in general, the pq concept may be extended to a series of phenomena, steady and transient, that may have relevance for the whole railway network: the impact on, and possible interference with, signalling circuits must be evaluated by analyzing the pantograph current spectrum against a so-called limit mask [10][11]. the analysis parameters are more or less fixed by the different national procedures adopted by the various administrations, yet there is no agreement on how to distinguish transient events: they bring non-characteristic spectral components and in some cases they are simply "recognized" and excluded from the analysis [12]. the focus here is on the identification of these phenomena on long-term recordings, so that they can be properly treated (in some cases processed separately, used for the specific analysis of transient interference, or simply discarded). at dc it is not possible to use the fundamental period to derive an indication of the correct time window, both for harmonics and for the computation of the mean value. the time window duration has, however, a strong influence on how different types of transients are included and weighted, and this could lead to some uncertainty. this aspect is considered later on in section 3.
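as a minimal illustration of the uav,u(train) computation described above, the following numpy sketch averages the pantograph voltage over 1 s blocks and then over the whole journey; the function name and the assumption of a uniformly sampled record vp are illustrative, not taken from the paper:

```python
import numpy as np

def useful_voltage_train(vp, fs):
    """uav,u(train): mean of the 1 s average values of the pantograph voltage.

    vp: uniformly sampled pantograph voltage record (v)
    fs: sampling frequency (hz)
    """
    n = int(fs)                                  # samples per 1 s averaging interval
    usable = (len(vp) // n) * n                  # drop the incomplete last block
    blocks = np.asarray(vp[:usable]).reshape(-1, n)
    return blocks.mean(axis=1).mean()            # average of the 1 s means
```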
so far transient phenomena have been considered generically; the classification proposed in [13] is reported for the discussion:

- (type 1) unusual sudden tractive efforts with packet-like current absorptions may trigger oscillations in the on-board filter and thus in the pantograph current ip, but with negligible effect on vp, due to the low short circuit impedance of the network;
- (type 2) change-over to an adjacent supply section connected to a different substation, while passing under a supply gap (the equivalent for dc of a neutral section for ac systems), producing a vp step change;
- (type 3) pantograph bounces disconnect the sliding contact from the contact wire for a few ms, depending on several factors (speed, mechanical performance of the pantograph frame and dampers, catenary oscillations, ice); this produces a step change of the absorbed current ip and a spike-like change of vp; vp is namely determined by the free response of the on-board circuitry for the short time interval during which the pantograph is detached from the catenary; note also that current conduction still occurs through the plasma at the pantograph-contact wire interface, and this represents an increase of the supply resistance, thus explaining the steep but limited increase of vp;
- (type 4) change of the operating conditions and of the spectrum of current emissions of on-board converters due to wheel slip, internal control rules, driving style, applied torque, etc.

the next section is dedicated to the calculation methods and their settings to identify and evaluate these phenomena in terms of amplitude, spectral distribution and frequency of occurrence.

3. calculation methods and parameters

without a fundamental frequency for the definition of harmonics, all the narrowband components are relevant for the calculation of the supply distortion. the distortion is thus put in relationship to ripple, defined below and particularly appropriate for the evaluation of the traction supply voltage. the attention is on the pantograph voltage, even if the considered methods may be applied to the pantograph current as well. the ripple index defined below is a measure of the perceived quality applicable to dc networks [13][14]. some types of transients and their influence on the spectral properties are then considered. the subject also turns out to be particularly interesting for its relationship with the interference to signalling systems, in particular if the variable of interest becomes the pantograph current. in any case a distorted pantograph voltage reflects in a distorted pantograph current if the locomotive can be considered simply as a passive load at those frequencies.

3.1. ripple and useful voltage

ripple is defined as the variation of a quantity about its steady state value during steady electric system operation [3]. ripple is often interpreted as a periodic variation around the steady state dc value, but not necessarily so [15][16]: some components are related to steady periodic sources (harmonics of rectifiers and inverters [17]-[19] in steady conditions), but others are caused by transients (interpreted as aperiodic phenomena of limited duration). the expressions defined in [20] are briefly summarized. given the quantity q(t) (in this case the pantograph voltage vp), sampled as q[n], the exact ripple index ri is the maximum peak-to-peak value $q_{pp}$ over an interval of $M_T$ samples,

$q_{pp,T} = \max_{n,m \in M_T} \left( q[n] - q[n-m] \right)$ . (1)
if a spectrum-based approach is followed, a sliding dft over a time window t is computed, giving the spectrum q[k] [21][22]. ri is then calculated as the sum of the components of the set $K_{thr}$, containing the values of the index k whose component q[k] has an amplitude larger than a given threshold thr; the sum-of-amplitudes (sa) and sum-of-amplitudes-and-phases (sap) rules, as given in [20], are

$q_{DFT,T,sa} = \sum_{k \in K_{thr}} \left| Q[k] \right|$ (2a)

and

$q_{DFT,T,sap} = \left| \sum_{k \in K_{thr}} Q[k] \right|$ . (2b)

the thr value must be chosen carefully, so as not to leave out any significant component while keeping the size of $K_{thr}$ as low as possible; normally a threshold of 0.1 % relative to the nominal voltage is already a conservative choice, provided that the full scale is correctly set. lower values may be used in the presence of many spectral components, but only if the background noise is located well below them. the purpose of thr is to "clean" the spectrum, ruling out all spectral components other than the relevant ones, so as not to artificially magnify the resulting ri value. under these assumptions a lower thr value, such as the 0.01 % used in the next figures, represents only a more cautious choice and does not change the result. an overlap factor p of 0.5 was shown to adequately track signal dynamics and is compatible in terms of minimum correlation with the larger part of the used windows. in [13] it is shown that the two indexes differ only slightly, and therefore only sa will be considered in the following. uav,u is the index that is evaluated to assess the adequacy of the infrastructure (power supply network) to the prescribed performance of the circulating rolling stock and concerns only the average value, while the proposed ri accounts in a similar way for the other spectrum components. for 3 kv dc railways the minimum uav,u is set to 2800 v for high speed lines and 2700 v for conventional lines.

3.2. periodic phenomena

in analogy with harmonics in ac railways, periodic phenomena in dc railways are related to the commutation of static converters either at substations (6-pulse and 12-pulse diode rectifiers) or on-board (the front-end choppers have a switching frequency of some hundreds of hz, possibly changing in discrete steps for variable operating conditions; the conducted emissions of traction drives may leak back to the dc line, appearing as sixth multiples of the motor stator frequency and/or higher components around the pwm switching frequency). changes of the operating conditions, such as a change of speed and thus of mechanical rotational speed, may occur, leading to a type 4 transient, highly dependent on the type of locomotive and the adopted modulation and control schemes. the requirements on the frequency resolution df (and thus on the time window duration t) are set by several national standards for the evaluation of interference to signalling, still following a dft approach. in this case df is set to 1 hz [12] or slightly larger and the average of successive spectra may be required; the spectrum extends up to some khz, 3600 hz for the italian case. the most modern approach, envisaged by the recent clc/ts 50238-2 [23], is that of using a bank of band-pass filters followed by an integral calculation of the rms value [24]. it is also underlined that a coarser frequency resolution allows fast tracking of transient components.
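the indexes of equations (1) and (2a) lend themselves to a compact implementation. the following numpy sketch is illustrative only (the function names, the single-window evaluation and the defaults for a 3 kv system are assumptions, not the paper's implementation):

```python
import numpy as np

def ripple_pp(q, m_t):
    """exact ripple index, eq. (1): maximum peak-to-peak value of q
    over any sub-interval of m_t samples (q is a numpy array)."""
    return max(q[i:i + m_t].max() - q[i:i + m_t].min()
               for i in range(len(q) - m_t + 1))

def ripple_sa(q, fs, t_win, thr_rel=0.001, v_nom=3000.0):
    """sum-of-amplitudes rule, eq. (2a), on one dft window of duration t_win;
    components below thr_rel * v_nom (e.g. 0.1 %) are zeroed out."""
    n = int(t_win * fs)
    seg = np.asarray(q[:n], dtype=float)
    seg -= seg.mean()                            # remove the dc estimate
    amp = np.abs(np.fft.rfft(seg)) * 2.0 / n     # single-sided amplitude spectrum
    k_thr = amp > thr_rel * v_nom                # the set K_thr of relevant bins
    return amp[k_thr].sum()
```

in practice the sa evaluation would be repeated on sliding windows with the overlap factor discussed above, producing ri curves such as those of the next figures.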
since the relevant components for interference are located in the higher portion of the frequency axis, a df of e.g. 5 hz leads to a frequency resolution of a few % and allows a 200 ms base time resolution, which can be further reduced by means of overlap.

3.3. aperiodic transients

the complex scenario of variable operating conditions, position of the rolling stock and the possibility of locally unstable conditions of traction converters located on different nearby vehicles gives rise to a wide set of transients. as already pointed out in the classification given in section 2, transients of the pantograph voltage and/or current may occur due to the inrush current of the vehicle filter, pantograph bounces, wheel slipping and sliding, etc. the aim is twofold: identifying transients during the measurements of conducted emissions on the pantograph current (subject to the values of the analysis parameters set by the standards for interference to track circuits) and during the evaluation of the useful voltage (with much weaker constraints on the parameter settings). broadly speaking, the former may require a finer frequency resolution to match the bandwidth and operating frequency range of the victim track circuits and the related limit prescriptions. the latter is mostly influenced by slow variations, such as those experienced while moving at the end of a supply section, farthest from the substation (as will be commented for figure 1 at points (1) and (2)); the analysis can be made with a less demanding frequency resolution and thus a finer time resolution. a 10 hz frequency resolution will normally be used in the following, accompanied by a 50 % (or 75 %) overlap.

4. real cases

recordings taken on the italian 3 kv dc railway lines are considered to evaluate the ri and to identify the transients, their spectral characteristics and their influence on uav,u and in general on the spectrum components. the following subsections are aimed at evaluating the spectral properties of transients and defining the best indexes to identify and isolate the transients themselves.

4.1. useful voltage and ripple index

the useful voltage has been computed from the measured pantograph voltage vp using a 1 s average interval (as prescribed by en 50163 [5] and en 50388 [2]) for various test runs performed on different italian lines (details are given in the caption of figure 1). it is underlined that the uav,u computation here includes all the operating conditions of the train rather than only traction, thus explaining the high overvoltages on the chiusi-arezzo-orte line, probably occurring during regenerative braking in particular line conditions. the processed vp recordings give only in one case a useful voltage lower than the prescribed limits [2], 2800 v and 2700 v, for high speed and conventional interoperable dc lines. in the arezzo-chiusi-orte recordings some overvoltages are present, with a value much higher than the indicated umax3, the maximum non-permanent voltage as per en 50163. the observed overvoltages last for one or two average samples, so they have a time duration nearly equal to or lower than 1 s and thus fall under the category of transient overvoltages. a 10 min profile of vp and ip is shown in figure 2, together with the sa index, which differs from the one computed in [13] for the lower threshold thr (0.01 % rather than 0.1 %) and the use of 75 % overlap. this recording shows quite a regular profile of current absorption, and the large step changes in vp are due to supply gaps (type 2 transients).
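the stft settings of section 3.3 (frequency resolution df and fractional overlap) map directly onto standard spectrogram parameters. a minimal scipy sketch, with illustrative names, assuming a uniformly sampled record:

```python
import numpy as np
from scipy import signal

def pq_spectrogram(x, fs, df=10.0, overlap=0.75):
    """magnitude spectrogram with frequency resolution df and a given
    fractional overlap (0.5 or 0.75 in the text)."""
    nperseg = int(round(fs / df))            # window length fixes df = fs / nperseg
    noverlap = int(nperseg * overlap)        # overlap refines the time step
    f, t, sxx = signal.spectrogram(x, fs=fs, window='hann',
                                   nperseg=nperseg, noverlap=noverlap,
                                   scaling='spectrum', mode='magnitude')
    return f, t, sxx
```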
the vp box-like increase (3) is probably due to a particular supply arrangement approaching a station, confirmed by the decrease of ip at the end of the recording. a fairly constant, high ip profile represents medium/high speed travelling on straight lines with rare stops, such as on main lines. even in this case pantograph bounces, wheel slip/slide and transitions in the operating conditions can produce transients with effects visible in the voltage and current spectra. figure 2 refers to a heavily loaded line, with the absorbed current in the range of 1700-1800 a, corresponding to an absorbed power of about 5.8 to 6.3 mw. the corresponding voltage ripple is, however, limited to about 2 %, occurring only at some time intervals. at point (4) ri has a step-like reduction, due not only to the increase of the vp dc value, but also to the lower supply impedance. figure 3 (with the same overlap and thr values as figure 2) shows an interval of lightly loaded operations with a ri that is nearly half that of figure 2, as the reduced absorbed current implies. moreover, the pantograph voltage is at the maximum values compatible with the 3 kv dc system [5] (the test run began close to a supply substation of a normally highly loaded line, hence regulated for the maximum allowed no-load voltage); in this case the ri normalized by the voltage value is further reduced. it is interesting to note that in the first 100 s, characterized by a negligible ri, time-varying harmonics populate the spectrum (see section 4.2).

4.2. spectral components

even when major transients are not visible in the time domain waveforms, other types of transients may still be present. these transients are mainly of type 1 (bounded to the low frequency range) or type 4 (occurring over the entire frequency interval). figures 5 and 6 show the spectra at low frequency and around one of the pwm bands of the traction drive for an apparently almost constant ri interval (corresponding to t* = 5.8 s in figure 2). low frequency broadband components due to leakage often cover important changes of high frequency components of smaller amplitude. transients of this kind produce several components that may be the reason for non-compliance with signalling interference limits [23] during dedicated tests, and they must be correctly weighted in the overall operating profile of the converter under test [25]. leakage at very low frequency occurs because of the non-stationary nature of the supply voltage fluctuations. the best tool to identify and track spectrum occupation is the time-frequency spectrogram.

figure 1. uav,u for runs of different duration: arezzo-chiusi-orte (black), torino-modane (blue), gemona-tarvisio (green), genova-ventimiglia (red/brown); limit lines at umax3 = 3900 v, umin,hs = 2800 v and umin,conv = 2700 v.

figure 2. ripple index: sa curve for a 10 min test run.

figure 3. ripple index: sa curve for a 17 min test run.

figure 5 reports the spectrum of the pantograph current, to
identify the spectrum components more easily; the components of the pantograph voltage spectrum are immediately related to them by the line impedance seen at the pantograph. in figure 5a two components at 150 and 490 hz definitely increase after t*, besides an evident frequency leakage below 80 hz; zero stuffing is used in figure 5a to enhance the frequency resolution down to 5 hz, identifying the peculiar behaviour below 80 hz, probably due to the resonant free response of the on-board filter but surely influenced by the frequency leakage of the dc component. in order to reduce the latter it is advisable to subtract an estimate of the dc component, bringing the analysed signal down to zero mean. in figure 5b, at higher frequency, the dotted and solid black spectra at or after t* prevail, showing an increase also of the high frequency components. two components are almost fixed, 300 hz (substation) and 560 hz (on-board chopper). the raw fourier spectra need to be pre-processed before being used for the ri calculation and other computations, since the background noise components may largely influence an overall index like ri or the total rms; a threshold needs to be assigned to zero out the components below it. when evaluating broad peaks with a fine frequency resolution, peak isolation is necessary to avoid counting them more than once, including lateral components due to leakage. these techniques have already been successfully applied to the processing of recordings of electromagnetic field intensity on board rolling stock [26]. however, the use of a threshold, overlap percentage and peak isolation is prone to large deviations, as is evident by comparing the results in figure 2 with the preliminary results reported in [13]. it is however recognized that the ri index is a useful tool to locate various types of transients on the time axis for further processing and analysis. from a graphical point of view, the time-frequency representation of fourier spectra is an effective tool. transients and time-varying components leave a clear signature in the fourier spectra computed by the short time fourier transform, or spectrogram. as is known, the spectrogram is subject to the same constraints on the frequency and time resolution as the basic fourier analysis. the steady characteristic harmonics at 300 hz, 600 hz and 900 hz may be easily seen in figure 4a; other steady non-characteristic harmonics at 50 hz, 100 hz and 150 hz are located in the lower part of the frequency range. a comb pattern of harmonics referred to a fundamental around 280 hz is also present (280 hz, 560 hz, 840 hz, 1120 hz), which increases with increasing speed and reaches approximately 460 hz, 940 hz, 1400 hz, 1860 hz after 30 s; the estimated rate of rise is thus 6 hz/s. this is evidently the emission of a traction static converter during acceleration and represents a typical type 4 transient.

figure 4. spectrogram in dbv: (a) low frequency harmonics are artificially compressed to highlight time-varying harmonics; (b) typical non-stationary pwm signature from a different test run.
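the dc-removal and zero-stuffing steps just described can be sketched as follows (a numpy illustration under the assumption of a hann-windowed record; the target bin spacing and the names are illustrative):

```python
import numpy as np

def refined_spectrum(x, fs, df_target=5.0):
    """single-sided amplitude spectrum with dc removal and zero stuffing
    (zero padding) to interpolate the dft bins down to df_target.
    note: padding refines the bin spacing but adds no true resolution."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                       # subtract the dc estimate (limits leakage)
    nfft = int(2 ** np.ceil(np.log2(fs / df_target)))
    nfft = max(nfft, len(x))               # pad only, never truncate
    win = np.hanning(len(x))
    amp = np.abs(np.fft.rfft(x * win, n=nfft)) * 2.0 / win.sum()
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    return freqs, amp
```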
figure 5. analysis of transient behaviour at t* = 5.8 s: (a) low frequency, with base and enhanced frequency resolution; (b) high frequency.

the spectrogram in figure 4a also shows two vertical lines at 40 and 80 s, corresponding to transients of type 2 or 3. in figure 4b the high frequency pattern of the pwm modulation of a variable speed drive, probably the traction drive, is also clearly visible. the components are located symmetrically around the higher central component. the evolution versus time follows the operating conditions of the rolling stock, i.e. speed and tractive effort. this type of emission, also of type 4, is particularly relevant where interference to signalling is concerned. limits for track circuits are expressed with a band-pass approach, and non-stationary components may fall inside the susceptibility band [10][23][24]. the spectrogram is particularly useful in detecting such transients if a trade-off between frequency and time axis resolution is reached. post-processing of the amplitude and graphical presentation can ease visual detection. two transients have been selected and further analysed in figure 6 and figure 7 with a frequency resolution of 25 hz (40 ms time window) and an overlap of 50 %; the overlap is necessary not only to track time-varying components, but also to artificially increase the time axis resolution. the periodicity of about 200 hz, visible in figure 8, is related to the two peaks at about 5-6 ms in figure 7; on the contrary, the single peak of figure 6 produces a flat spectrum. in both cases the transient components mask the characteristic harmonics, such as 300 hz. the last plot, in figure 8, is obtained with a 90 % overlap, so that the time step is only 4 ms; it is possible to distinguish the spectra before and after the transient event (dashed and dotted black curves), the spectra just at the beginning and at the end of it (solid black curves) and during the transient itself (grey curves). overlapping and zero stuffing can improve the time axis resolution, ensuring transient detection; based on normal transient durations (2 to 10 ms), a 25 hz resolution with 50 % overlap ensures a 20 ms time step and a uniform increase of the spectrum components by about 30 %. smaller time steps can be used by simply increasing the overlap to 75 % (10 ms) or 90 % (4 ms). with the latter time step and the base 25 hz resolution, even the single pantograph voltage spike can be detected and correctly processed, with acceptable "dilution" over the 40 ms window. the analysis is repeated for another transient with a finer frequency resolution and a better time axis representation (figure 9). the 300 hz component is visible as the main ripple term both in the time and frequency domains; the grey circles identify a three-by-three pattern due to the 100 hz component.
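a crude transient locator along these lines, reusing the high-overlap spectrogram of the earlier sketch, flags the time steps whose broadband content jumps above the median level (the 25 hz / 90 % settings and the threshold factor are illustrative assumptions):

```python
import numpy as np
from scipy import signal

def locate_broadband_transients(x, fs, df=25.0, overlap=0.90, rel_thr=3.0):
    """return the times of stft steps whose total spectral amplitude exceeds
    rel_thr times the median level, as candidate transient locations."""
    nperseg = int(round(fs / df))
    f, t, sxx = signal.spectrogram(x, fs=fs, window='hann', nperseg=nperseg,
                                   noverlap=int(nperseg * overlap),
                                   scaling='spectrum', mode='magnitude')
    broadband = sxx.sum(axis=0)                # total amplitude per time step
    return t[broadband > rel_thr * np.median(broadband)]
```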
the transient occurring at 15.25 s produces the broad spectrum shown in heavy black, but the extinction of the transient in about 1 ms is followed by the free response of the on-board filter, which lasts for about 10 ms, modifies the regular pattern of the 300 hz and 100 hz components (see the two grey arrows on the following two 300 hz cycles) and produces the transient spectrum components indicated by the black arrows. the spectrum components during the transient are predominant up to about 500 hz, with an amplitude increase of approximately 10 db.

figure 6. spectrogram: transient waveform and fourier spectra before (dashed), at (solid) and after (dotted) the transient (20 ms time step).

figure 7. spectrogram: transient waveform and fourier spectra before (dashed), at (solid) and after (dotted) the transient (20 ms time step).

figure 8. spectrogram (same as figure 7): spectra before (dashed black), partially including (solid black), centred at (solid grey) and after (dotted black) the transient (4 ms time step).

another uncommon type of transient is related to the stability of the supply frequency which, for dc traction systems supplied by the national grids, is 50 hz in europe and 60 hz in a few other countries (e.g. japan, korea, united states, etc.). the analysis of the arezzo-chiusi-orte recordings, looking for type 2 or type 3 transients, revealed a temporary shift of the substation characteristic harmonics due to a supply frequency change, as shown in figure 10. in figure 10a the vp waveform and the ri curve are shown only for reference. the spectrogram in figure 10b, using a logarithmic frequency scale, reveals repeated low frequency transients of moderate amplitude, indicated by the repeated dark and light vertical lines, extending up to about the 50 hz component. the most relevant transients, at 101.0 s and 101.4 s, extend above it, up to the 100 hz component. the linear scale in figure 10c reveals a frequency shift of all the substation characteristic harmonics that can be traced back to a significant change of the 50 hz supply frequency of nearly 0.7 hz (estimated from the higher order harmonics, where the frequency shift is proportionally larger and more easily evaluated). this event is quite uncommon in its observed magnitude, and the effect is a shift of various substation characteristic harmonics that might fall inside the operating frequency intervals of signalling devices. frequency stability is not considered further here; a complete discussion of this topic for ac traction supplies can be found in [6][28], where the variations of the fundamental are however much smaller, with a span in the order of 0.1 hz.
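the trick of reading the fundamental deviation from a higher order harmonic, where the shift is n times larger, can be sketched as follows (illustrative numpy code; the harmonic order and the search span are assumptions, not values from the paper):

```python
import numpy as np

def supply_freq_deviation(x, fs, f1=50.0, n_harm=12, span_hz=1.0):
    """estimate the deviation of the supply fundamental from the measured
    position of its n-th harmonic (e.g. the 600 hz substation component):
    a fundamental shift df appears as n * df at the n-th harmonic."""
    x = np.asarray(x, dtype=float)
    spec = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    f_nom = n_harm * f1                            # nominal harmonic position
    win = (freqs > f_nom - n_harm * span_hz) & (freqs < f_nom + n_harm * span_hz)
    f_peak = freqs[win][np.argmax(spec[win])]      # measured harmonic position
    return (f_peak - f_nom) / n_harm               # fundamental deviation in hz
```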
4.3. wavelet analysis

the considerations on the classical fourier analysis of section 4.2, with the frequency resolution chosen under constraints on the time resolution, suggest the use of a wavelet representation for transients of type 1, 2 and 3. the discrete time wavelet transform (dtwt) is at the moment preferred for its simplicity with respect to the continuous wavelet transform, notwithstanding the better performance of the latter in terms of time and frequency resolution [27]. transients in vp are detected by applying a threshold to the details dk, and classified by deriving an empirical characterization of the amplitude and oscillations in each detail.

figure 9. spectrogram: transient waveform and fourier spectra before (dashed) and at (solid) the transient (10 ms time step, 50 % overlap); the arrows in the time domain trace identify a change in the regular pattern of the 100 hz and 300 hz components, the three arrows in the frequency domain spectrum identify the frequency intervals where the effect of the transient is evident.

figure 10. in a) the pantograph voltage vp (top) and the ripple index ri (bottom). in b) and c) the spectrogram in dbv (10 hz frequency resolution, 75 % overlap) in logarithmic and linear frequency scale, respectively.

figure 11 shows an example of a type 3 transient at 70.29 s; a pantograph bounce is superimposed on the steady vp ripple of substation harmonics (an interpretation of the main ripple components visible in the time domain waveform was given after figure 9). the details d3 and d4 show the 300 hz ripple, which is absent from the adjacent details. wavelet details can be interpreted as band-pass channels with a centre frequency that, for the dtwt, is determined by a geometric ratio. the basic frequency fb is determined by the sampling frequency and by the type of wavelet; the centre frequency values at each scale a (and detail) are readily determined as $f_b/2^a$. details not affected by the substation ripple are the candidates for transient detection (see table 1). the best mother wavelet is then selected by considering the correct band apportioning, the amplitude of the peaks appearing in the details and the accuracy of their time location. many mother wavelets produce almost identical results [13]; the comparison is carried out by analysing the frequency behaviour of the examined wavelets. in the following, "db" will stand for daubechies and "sym" for symlet. the values reported in table 1 refer to a sampling time of 160 μs, eight times slower than the original sampling time. to cover a wider frequency interval, a lower value is needed together with a larger number of details, thus increasing the computational effort. recall that pantograph bounces trigger the free response of the on-board filter, located in the lower portion of the frequency axis (see figure 9), and temporarily modify the conducted emissions of the on-board converters, so that they represent a relevant event for the evaluation of interference to signalling circuits, in particular in the low and medium frequency intervals. in [28] db4/db6 and db8/db10 were found to be the best choices for fast and slow transients, respectively, but for ac industrial supply networks.

figure 11. vp transient analysed with db3, 5 levels.

table 1. frequency axis location of wavelets (dt = 160 μs).

              db1      db3      db5
  fc1 [hz]    3113     2500     2083
  fc2 [hz]    1556     1250     1042
  fc3 [hz]    778.2    625      520.7
  fc4 [hz]    389.1    312.5    260.4
  fc5 [hz]    194.6    156.3    130.2
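a minimal sketch of this detail-thresholding scheme, assuming the pywavelets package is available; the wavelet, level count and threshold are placeholders taken from the discussion (db3, 5 levels) rather than a reference implementation:

```python
import numpy as np
import pywt

def detect_transients(vp, fs, wavelet='db3', levels=5, thr=200.0):
    """decompose vp with a discrete wavelet transform and return, per detail,
    the instants (s) where the detail magnitude crosses thr (volts).
    the centre frequency of detail a is roughly f_b / 2**a (cf. table 1)."""
    coeffs = pywt.wavedec(np.asarray(vp, dtype=float), wavelet, level=levels)
    hits = {}
    # coeffs = [a5, d5, d4, d3, d2, d1]; iterate d1 (finest) .. d5
    for a, d in enumerate(coeffs[:0:-1], start=1):
        idx = np.flatnonzero(np.abs(d) > thr)
        hits[f'd{a}'] = idx * (2 ** a) / fs      # detail a is decimated by 2**a
    return hits
```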
two wavelets (db3 and db5) are tested first and the results are shown in figure 12 and figure 13. low order wavelets may not be suitable to track all the variations of the analysed signal, but they feature a slightly larger peak amplitude in the low order details and a non-oscillating behaviour, thus ensuring a better time axis location of the crossing of the chosen threshold value and a non-ambiguous behaviour (see figure 14 for a db1 wavelet applied to the same signal trace). although the results are not shown, it is stated in the caption of figure 14 that they are practically identical to those obtained with the "sym1" and "haar" wavelets. given the use of discrete values to define the latter, the haar wavelet is highly indicated to reduce the required computational effort and to implement a real-time function. setting the sampling frequency to a higher value (25 khz, four times larger than the one previously used to derive table 1 and figures 11, 12 and 13), transients with a steep rise time and a time duration limited to one or a few milliseconds may be detected more effectively (as shown in figure 15, which corresponds to figure 13 in [13]). such a sampling time moves all five details to a band above the main characteristic harmonics, fc5 being located at 778 hz.

figure 12. vp transient of figure 8: db3, 5 levels.

figure 13. vp transient of figure 8: db5, 5 levels.

figure 14. vp transient of figure 8: db1, 5 levels (identical results for sym1 and haar wavelets).

figure 15. vp transient of figure 8: db1, 5 levels (40 μs sample time).

5. conclusions

in the present paper a range of transients typical of dc railway systems has been considered and classified for their time and frequency behaviour. the target of the analysis is the evaluation of the power quality perceived at the pantograph, but also the identification and location on the time axis of transients, which are relevant for interference to signalling circuits and thus for interoperability. the use of spectrograms and wavelets is proposed for the location of pantograph bounces on long records, which amount to many gigabytes of data and cannot be inspected manually; wavelet types and the behaviour of the details were tested on real signals. the results available in the literature, which advise on the optimal settings for wavelet analysis, almost always refer to ac distribution networks in industrial systems, while a dc railway system represents a peculiar case study. very simple mother wavelets, such as a daubechies or symlet of order 1 or the haar wavelet, seem preferable (in particular the latter), if accurate time axis location and fast computing are the main requisites.

references

[1] a. mariscotti, direct measurement of power quality over railway networks with results of a 16.7 hz network, ieee transactions on instrumentation and measurement, 60 (2011), pp. 1604-1612.
[2] en 50388, railway applications – power supply and rolling stock – technical criteria for the coordination between power supply (substation) and rolling stock to achieve interoperability, aug. 2005.
[3] en 61000-4-7, electromagnetic compatibility – part 4-7: testing and measurement techniques – general guide on harmonics and interharmonics measurements and instrumentation, for power supply systems and equipment connected thereto, 2002-08.
[4] b. hemmer, a. mariscotti, d. wuergler, recommendations for the calculation of the total disturbing return current from electric traction vehicles, ieee transactions on power delivery, 19 (2004), pp. 1190-1197.
[5] en 50163, railway applications – supply voltages of traction systems, nov. 2004.
[6] a. mariscotti, d. slepicka, "analysis of frequency stability of 16.7 hz railways", proc. of the ieee international instrumentation and measurement technology conference i2mtc, may 10-12, 2011, hangzhou, china.
[7] m. meyer, g.j. van alphen, netzresonanzmessungen auf hsl zuid und betuweroute, eisenbahn review, 7-8 (2006), pp. 610-611.
[8] a. dolara, m. gualdoni, s. leva, impact of high voltage primary supply lines in the 2×25 kv – 50 hz railway system on the equivalent impedance at pantograph terminals, ieee transactions on power delivery, 27 (2012), pp. 164-175.
[9] m. meyer, j. schöning, netzstabilität in grossen bahnnetzen, eisenbahn review, 7-8 (1999), pp. 312-317.
[10] pren 50238-2 (draft, pr. 15360), railway applications – compatibility between rolling stock and train detection systems – part 2: compatibility with track circuits, 2009.
[11] clc/tr 50507, railway applications – interference limits of existing track circuits used on european railways, 2007-05.
[12] specifica ferrovie dello stato (fs) n. 370582 esp. 1.0.
[13] a. mariscotti, "dc railway line voltage ripple for periodic and aperiodic phenomena", proc. of the imeko tc 04 congress, sept. 27-30, 2011, natal, rn, brazil.
[14] m. caserza magro, a. mariscotti, p. pinceti, "definition of power quality indices for dc low voltage distribution networks", imtc 2006, sorrento, italy, april 20-23, 2006.
[15] en 61000-4-17, electromagnetic compatibility – part 4-17: testing and measurement techniques – ripple on d.c. input power port immunity test, 1999-08.
[16] mil std 704e, aircraft electric power characteristics, may 1991.
[17] a. mariscotti, analysis of the dc link current spectrum in voltage source inverters, ieee transactions on circuits and systems – part i, 49 (2002), pp. 484-491.
[18] a. mariscotti, p. pozzobon, synthesis of line impedance expressions for railway traction systems, ieee transactions on vehicular technology, 52 (2003), pp. 420-430.
[19] e.w. kimbark, direct current transmission, wiley interscience, 1971.
[20] a. mariscotti, "methods for ripple index evaluation in dc low voltage distribution networks", proc. of the ieee instrumentation and measurement technology conference imtc, may 2-4, 2007, warsaw, poland.
[21] s. herraiz jaramillo, g.t. heydt, e. o'neill-carrillo, power quality indexes for aperiodic voltage and currents, ieee transactions on power delivery, 15 (2000), pp. 784-790.
[22] a. mariscotti, discussion of power quality indexes for aperiodic voltage and currents, ieee transactions on power delivery, 15 (2000), pp. 1333-1334.
[23] clc/ts 50238-2, railway applications – compatibility between rolling stock and train detection systems – part 2: compatibility with track circuits, 2010-07.
[24] a. mariscotti, "on the uncertainty of the bandpass filter method for the evaluation of interference on track circuits", proc. of the 20th imeko world congress, sept. 9-14, 2012, busan, republic of korea.
[25] g. armanino, a. mariscotti, m. mazzucchelli, "in-house test of low frequency conducted emissions of static converters for railway application", proc. of the 17th imeko world congress, june 22-27, 2003, cavtat-dubrovnik, croatia.
[26] d. bellan, a. gaggelli, f. maradei, a. mariscotti, s. pignari, time-domain measurement and spectral analysis of nonstationary low-frequency magnetic field emissions on board of rolling stock, ieee transactions on electromagnetic compatibility, 46 (2004), pp. 12-23.
[27] l. angrisani, p. daponte, m. d'apuzzo, a. testa, a measurement method based on the wavelet transform for power quality analysis, ieee transactions on power delivery, 13 (1998), pp. 990-998.
[28] a. mariscotti, d. slepicka, "the frequency stability of the 50 hz french railway", proc. of the ieee international instrumentation and measurement technology conference i2mtc, may 14-16, 2012, graz, austria.

spectroscopic fingerprinting techniques for food characterisation

monica casale, lucia bagnasco, chiara casolino, silvia lanteri, riccardo leardi

university of genoa, department of pharmacy – via brigata salerno 13, 16147 genova, italy

acta imeko, issn: 2221-870x, april 2016, volume 5, number 1, 32-35

abstract: the analysis of samples by using spectroscopic fingerprinting techniques is more and more common and widespread. such approaches are very convenient, since they are usually fast, cheap and non-destructive. in many applications no sample pretreatment is required, the acquisition of the spectrum can be performed in about one minute and no solvents are required. as a consequence, the return on investment of the related technology is very high. the "disadvantage" of these techniques is that, the signal being non-selective, simple mathematical approaches (e.g., the lambert-beer law) cannot be applied. instead, a multivariate treatment must be performed by using chemometric tools. in what concerns food analysis, they can be applied in several steps, from the evaluation of the quality and the conformity of raw material to the assessment of the quality of the final product, to the monitoring of the shelf life of the product itself. another interesting field of application is the verification of food-authenticity claims, this being extremely important in the case of foods labelled as protected designation of origin (pdo), protected geographical indication (pgi) and traditional speciality guaranteed (tsg). in the present paper, it is described how non-selective signals can be used for obtaining useful information about a food.

section: research paper

keywords: non-selective signals; uv-visible spectroscopy (uv-vis); mid-infrared spectroscopy (mirs); near-infrared spectroscopy (nirs); chemometrics; food analysis

citation: monica casale, lucia bagnasco, chiara casolino, silvia lanteri, riccardo leardi, spectroscopic fingerprinting techniques for food characterisation, acta imeko, vol. 5, no. 1, article 7, april 2016, identifier: imeko-acta-05 (2016)-01-07

section editor: claudia zoani, italian national agency for new technologies, energy and sustainable economic development, rome, italy

received june 26, 2015; in final form december 17, 2015; published april 2016

copyright: © 2016 imeko.
this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited

corresponding author: riccardo leardi, e-mail: riclea@difar.unige.it

1. introduction

traditional analytical methods for food analysis have several drawbacks, such as low speed, the necessity for sample pretreatments, a requirement for highly skilled personnel and destruction of the sample. several fast and non-destructive instrumental methods have been proposed to overcome these hurdles. among them, uv-visible, mid-infrared and near-infrared spectroscopy have proven to be successful analytical methods for the analysis of food, since they offer a number of important advantages [1], [2]. they are fast, non-destructive methods and they require minimal or no sample preparation. moreover, they are less expensive because no reagents are required and thus no wastes are produced. finally, the versatility of these instruments makes them useful tools for online process monitoring. the chemical information contained in the spectra resides in the band positions, intensities and shapes. whereas band positions give information about the molecular structure of chemical compounds, the intensities of the bands are related to the concentration of these compounds, as described by the lambert-beer law. the easiest way to determine the content of a chemical compound is to measure the change in the intensity of a well-resolved band that has been unambiguously attributed to this compound [3]. however, this is possible for a pure component system, whereas foods contain numerous components giving rise to complex spectra with overlapping peaks. in fact, in order to take advantage of these spectroscopic fingerprinting techniques, the analyst must overcome limitations in sensitivity and selectivity that arise from the relatively weak and highly overlapping bands found in the spectra.
the result is a unique "fingerprint" that can be used to confirm the identity of a sample. thus, for the implementation of a successful analysis, the use of chemometric methods is fundamental to extract from the spectra as much information as possible about the analysed samples. the information extracted from the non-selective signals can be used for two types of analysis: (1) quantitative analysis, to link features of the spectra to quantifiable properties of the samples. spectroscopic techniques are commonly used to obtain calibration models able to predict the concentration of a compound or a specific characteristic of a food product. several studies have been performed, for example, on the detection of adulterants in foods. (2) qualitative analysis, i.e. classification of samples. recently, a rather specific type of fraud has become important, involving claims regarding the geographical origin of food ingredients. it is generally true that deceptions regarding the geographical origin of foods have few health implications, but they may nonetheless represent a serious commercial fraud. in fact, consumers often pay a considerable price premium when a food is labelled with a declaration of production within a specific region, since such a label may be perceived as an implicit guarantee of a traditional and, perhaps, healthier manufacturing process. in response, several national and international institutions have issued directives to support the differentiation of agricultural products and foodstuffs on a regional basis by introducing an integrated framework for the protection of both geographical origin and traditional production techniques. in these cases, non-selective signals can be used to obtain classification models able to discriminate samples according to a quality or characteristic. in this paper, as an example, the construction of a reliable quantitative model for the detection of the addition of barley to coffee using nir spectroscopy and chemometrics is shown.

2. how to build a model

figure 1 shows the scheme for building a quantitative or qualitative multivariate model. the main steps to be performed are:

1) sample selection. the analysed samples should fully represent the population studied; this means that all the variability sources (or at least the most important ones) should be taken into account. moreover, when a calibration model is developed the samples should cover a wide range of response values. unfortunately, many reports are based on poor sampling, and this affects the result of the whole analysis. for example, in order to discriminate extra virgin olive oils on the basis of the olive cultivar, oil samples should be collected from different oil mills in order to take into account the different sources of variability, such as geographical origin, production year, harvest period and production technologies (fertilizer, olive fly control tools, extraction process, etc.).

2) spectra acquisition and data matrix. the acquisition of the spectra is very simple and generally requires a few minutes. the spectra then have to be arranged in a data matrix in which each row corresponds to a sample and each column to a variable (wavelength). this seemingly trivial operation is not always easy to perform, since many instruments do not allow the direct exportation of a set of spectra.

3) sample pretreatment. spectra can be affected by turbidity in liquid samples or different granulometry in solid samples, and by variations in the optical path length.
to avoid or decrease these interferences, mathematical pretreatments are required. the most common data pretreatments are normalisation methods such as standard normal variate (snv) [4] and derivatives. snv, or row autoscaling, mainly corrects both baseline shifts and global intensity variations, which are related to the granulometry of the sample. each spectrum is row-centered, by subtracting its mean from each single value, and then scaled by dividing it by its standard deviation. as a result, each spectrum has a mean equal to 0 and a standard deviation equal to 1. since snv removes the possibly shifting informative regions along the signal range, the interpretation of the results referring to the original signals should be performed with caution. other common pretreatments applied to spectroscopic signals are the derivatives. the first derivative provides a correction for baseline shifts, while the second derivative represents a measure of the curvature of the original signal, i.e., the rate of change of its slope; this transform provides a correction for both baseline shifts and drifts. a drawback of derivatives is the increase of random noise.

4) variable selection. since uv-visible, nir and mir spectra are characterized by a very high number of variables, the selection of the informative ones is an important task, both to obtain simpler qualitative or quantitative models and to identify the most useful wavelengths. several algorithms can be applied. among them, some commonly used are the stepwise selection methods (i.e. stepwise linear discriminant analysis [5], iterative stepwise elimination [6]), interval-pls [7] and genetic algorithms [8]. compared with the other techniques, genetic algorithms have the great advantage of selecting well-defined spectral regions, often corresponding to relevant spectral features, instead of single wavelengths.

5) model building and validation. the choice of the classification or regression method should follow the simplicity criterion, since simpler models are also more robust and stable. the most used ones are linear discriminant analysis (lda) [9] and k nearest neighbours (knn) [10] for classification, soft independent modeling of class analogy (simca) [11] as a class-modeling technique and partial least squares (pls) [12] for multivariate calibration. a chemometric model must be evaluated on the basis of its predictive ability. cross-validation is the most common validation procedure: the n objects are divided into g cancellation groups, the model is computed g times and each time the objects in the corresponding cancellation group are predicted. at the end of the procedure, each object has been predicted once.

figure 1. scheme for building a quantitative or qualitative multivariate model.
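steps 3) and 5) can be condensed into a few lines of code. the following sketch assumes numpy, scipy and scikit-learn and uses illustrative names and parameter values (window width, number of latent variables, number of cancellation groups); it is not the workflow actually used in the cited studies:

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def snv(spectra):
    """standard normal variate: centre each spectrum (row) on its own mean
    and scale it by its own standard deviation."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def first_derivative(spectra, window=11, poly=2):
    """savitzky-golay first derivative, correcting baseline shifts."""
    return savgol_filter(spectra, window_length=window, polyorder=poly,
                         deriv=1, axis=1)

# hypothetical data: x (n samples x p wavelengths), y (n reference values)
# x_pre = first_derivative(snv(x))
# pls = PLSRegression(n_components=5)
# y_cv = cross_val_predict(pls, x_pre, y, cv=5).ravel()   # 5 cancellation groups
# rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
```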
6) external test set. the real predictive ability of each model should be tested on an external set of samples. the test set has to be totally independent of the calibration set, not only mathematically, as in cross-validation, but also from the 'spatial' and 'temporal' points of view. for instance, in the case of industrial data the samples of the test set must be produced and analysed after the samples of the training set; in the case of samples whose origin can be very different, the samples of the test set must come from producers other than those of the training set; in the case of natural products, the samples of the test set must come from crops subsequent to those of the samples of the training set.

3. example of construction of a reliable quantitative model for food analysis

this study [13] presents an application of near infrared spectroscopy for the detection and quantification of the fraudulent addition of barley to roasted and ground coffee samples. nine different types of coffee, including pure arabica, robusta and mixtures of them at different roasting degrees, were blended with four types of barley. the types of coffee and barley were selected in order to be as representative as possible of the italian market. since it was decided to perform 100 experiments out of the 360 combinations resulting from nine coffees, four barleys and ten different concentrations (from 2 % to 20 % w/w), the calibration set was defined by applying a d-optimal design. the validation was performed on thirty experiments selected by applying a subsequent d-optimal design to the combinations not constituting the training set. after that, a further validation was performed on a completely external set made of mixtures of a type of coffee and a type of barley different from those used in the previous steps. partial least squares regression (pls) was employed to build the models aimed at predicting the amount of barley in coffee samples. in order to obtain simplified models, taking into account only the informative regions of the spectral profiles, genetic algorithms were applied for feature selection. this allowed the number of data points to be reduced from 1501 to 188. the models showed excellent predictive ability, with root mean square errors (rmse) for the test and external sets equal to 1.4 % w/w and 1.1 % w/w, respectively. as can be noticed in figure 2, the model obtained is capable of predicting the barley concentration with very satisfactory accuracy, not only in calibration but also on external samples. this application clearly shows that the representativity of the training set is a key point in the success of a calibration model. the achievement of very low prediction errors on a totally external test set (i.e., on mixtures composed of qualities of coffee and barley unknown to the model) was possible only because the training set was made by taking into account a relatively large number of varieties of coffee and barley. another key point is the application of the d-optimal design for the selection of a subset of adequate size from the very large set of candidate experiments. moreover, the variable selection by genetic algorithms helped to determine the spectral regions most useful to identify the adulteration of coffee with barley and to increase the calibration model performance.

figure 2. experimental vs. predicted values of the concentration (% w/w) of barley in the coffee samples of the external test set (pls model on the spectral regions selected by ga).

4. conclusions

in the present paper it has been shown, from a methodological point of view, how non-selective signals can be used for obtaining useful information about food. the chemometric elaboration is fundamental in order to obtain useful information from the spectral data; the main steps of this approach have been identified and their importance discussed, also showing a real application.

references

[1] t. woodcock, g. downey, c.p.
o’donnell, review: better quality food and beverages: the role of near infrared spectroscopy, journal of near infrared spectroscopy, 16(1), (2008), pp. 1-29. [2] l. wang, f. s.c. lee, x. wang, y. he, feasibility study of quantifying and discriminating soybean oil adulteration in camellia oils by attenuated total reflectance mir and fiber optic diffuse reflectance nir, food chemistry, 95, (2006) pp.529–536. [3] r. karoui, g. downey, c. blecker, mid-infrared spectroscopy coupled with chemometrics: a tool for the analysis of intact food systems and the exploration of their molecular structure-quality relationships a review, chemical reviews, 110, (2010), pp.6144– 6168. [4] r.j. barnes, m.s dhanoa, s.j. lister, standard normal variate transformation and de-trending of near-infrared diffuse reflectance spectra. applied spectroscopy, 43(5), (1989), pp.772777. [5] r.i. jenrich, in k. enselein, a. ralston, h.s. wilf (eds)statistical methods for digital computers, j. wiley and sons, new york, 1960, pp. 76. [6] r. boggia, m. forina, p. fossa, l. mosti, chemometric study and validation strategy in the structure-activity relationships of new cardiotonic agents, quantitative structure-activity relationships, 16, (1997), pp. 201-213. figure  2.  experimental  vs.  predicted  values  of  concentration  (%  w/w)  of  barley  in  the  coffee  samples  of  the  external  test  set  (pls  model  on  the  spectral regions selected by ga).   0 5 10 15 20 0 5 10 15 20 experimental value p re d ic te d v a lu e acta imeko | www.imeko.org  april 2016 | volume 5 | number 1 | 35  [7] l. nørgaard, a. saudland, j. wagner, j.p. nielsen, l. munck and s.b. engelsen, interval partial least squares regression (ipls): a comparative chemometric study with an example from nearinfrared spectroscopy, applied spectroscopy, 54, (2000) pp. 413-419. [8] r. leardi, a.l. gonzalez, genetic algorithms applied to feature selection in pls regression: how and when to use them, chemom. intell. lab. syst. 41 (1998) pp.195–207. [9] d. l massart, b. g. m. vandeginste, l. m. c buydens, s. de jong, p. l lewi, j. smeyers-verbeke, “supervised pattern recognition” in, handbook of chemometrics and qualimetrics: part b, vol. 20b, b.g.m. vandeginste, & s.c. rutan, elsevier, amsterdam, 1998, pp. 213-220. [10] t. m. cover and p. hart, "the nearest neighbor decision rule," ieee trans. inform. theory, vol. it-13, pp. 21-27 1967. [11] s. wold, m. sjostrom, “simca: a method for analysing chemical data in terms of similarity and analogy” in chemometrics, theory and application, b. r. kowalski, acs symposium series 52, american chemical society, washington, dc, 1977, pp. 243. [12] h. wold, “partial least squares” in: s. kotz, n.l. johnson, (eds.), encyclopedia of statistical sciences, wiley, new york, 1985, pp. 581–591. [13] h. ebrahimi-najafabadi, r. leardi, p. oliveri, m.c. casolino, m. jalali-heravi, s. lanteri, detection of addition of barley to coffee using near infrared spectroscopy and chemometric techniques, talanta 99, (2012) pp.175–179. microsoft word article 14 169-1323-1-le.doc acta imeko  february 2015, volume 4, number 1, 90 – 96  www.imeko.org    acta imeko | www.imeko.org  february 2015 | volume 4 | number 1 | 90  validation of numerical methods for electromagnetic  dosimetry through near‐field measurements   d. giordano 1 , l. zilberti 1 , m. borsero 1 , r. forastiere 2 , w. wang 1   1  inrim, strada delle cacce 91 – 10135 torino, italy  2  dip. 
section: research paper
keywords: electromagnetic dosimetry; mri; near-field measurements
citation: d. giordano, l. zilberti, m. borsero, r. forastiere, w. wang, validation of numerical methods for electromagnetic dosimetry through near-field measurements, acta imeko, vol. 4, no. 1, article 14, february 2015, identifier: imeko-acta-04 (2015)-01-14
editor: paolo carbone, university of perugia
received december 31, 2013; in final form november 21, 2014; published february 2015
copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: european metrology research programme (emrp)-hlt06 joint research project (jrp) "metrology for next-generation safety standards and equipment in mri" (2012-2015)
corresponding author: d. giordano, e-mail: d.giordano@inrim.it

abstract
this paper describes the arrangement of a first experimental set-up which allows the comparison between the measurement of the electromagnetic field quantities induced inside a simple cylindrical phantom and the same quantities estimated numerically through a boundary element method. the reliability of the numerical method has been tested at 64 mhz, the larmor frequency associated with magnetic resonance imaging devices with an isocentre magnetic field of 1.5 t. to assess its robustness, the comparison is also performed by introducing, inside the phantom, a metallic non-magnetic element, which roughly simulates a medical implant.

1. introduction
an estimated 8-10 % of the european population carry medical implants. at present, they are excluded from receiving a magnetic resonance imaging (mri) scan, as no metrics exist to assess the specific safety risks related to these implants. with the aim of investigating implant-associated risks [1]-[3], advanced modelling concepts for the evaluation of the electromagnetic field inside phantoms and human computer models [4], [5] are fundamental tools, which must be validated by experimental comparisons to guarantee their reliability. to this end, an experimental tool was designed and set up, and first comparisons between measurements and numerical results, obtained through a boundary element method (bem) algorithm, are presented in the paper. since this preliminary investigation is based on a simple numerical model involving all the physical aspects, a simplified experimental tool is required as well. to this purpose, a system made of a cylindrical phantom with a tissue-like liquid, a loop antenna and a 3d electromagnetic field mapping system was set up. the magnetic source and measuring system shall be able to generate and detect radio frequency (rf) electromagnetic fields with magnitude and frequency similar to the ones exploited by magnetic resonance imaging (mri) devices for diagnostic purposes. as regards the frequency, it shall be comparable with the isocentre resonance frequency (larmor frequency) of the most commonly used mri devices. these are characterized by frequencies of about 64 mhz, 128 mhz and 300 mhz, related to isocentre static magnetic fields of 1.5 t, 3 t and 7 t respectively. for these preliminary investigations the frequency of interest was set to 64 mhz [6]. as is well known from nuclear magnetic resonance (nmr) theory, a rotation of the nuclear magnetic moment vectors can be impressed by transferring a suitable amount of energy into the anatomy by means of coils tuned at the larmor frequency. such coils have to generate rf magnetic
fields perpendicular to the static magnetic field direction. consequently, a loop antenna was selected [6], [7]. since the focus of this paper is the experimental set-up, only a synthetic description of the field formulation is given in section 2. a description of the experimental set-up is given in section 3, and some critical items due to the simplified model of the loop antenna are pointed out. in particular, the presence of the shield in the actual loop and the absence of the earth in the numerical model are discussed. the first comparisons between measured and computed magnetic field components are performed along a vertical and a radial line inside the cylindrical phantom. additional comparisons are performed when a metallic object mimicking an implant is introduced inside the phantom (see section 4). a brief description of the system control and operations, managed by a python programme which automates the generation and acquisition of the field components, is given in section 5, followed by some conclusions (section 6).

2. field formulation
the electromagnetic field problem is described by the electric field integral equation (efie) and the magnetic field integral equation (mfie) under sinusoidal conditions (angular frequency ω). the integral equations are solved by the boundary element method (bem), discretizing the external surface (∂ω) of the considered objects (ω) with 2d surface elements. the discretized form of the efie and mfie equations is

$$\xi_i \mathbf{e}_i = -\mathrm{j}\omega\mu \int_{\Omega_s} G\,\mathbf{j}_s \,\mathrm{d}v + \sum_m \left[ \int_{\partial\Omega_m} (\mathbf{n}\times\mathbf{e}_m)\times\nabla G \,\mathrm{d}s + \int_{\partial\Omega_m} (\mathbf{n}\cdot\mathbf{e}_m)\,\nabla G \,\mathrm{d}s - \mathrm{j}\omega\mu \int_{\partial\Omega_m} G\,(\mathbf{n}\times\mathbf{h}_m)\,\mathrm{d}s \right]$$

$$\xi_i \mathbf{h}_i = \int_{\Omega_s} \mathbf{j}_s\times\nabla G \,\mathrm{d}v + \sum_m \left[ \int_{\partial\Omega_m} (\mathbf{n}\times\mathbf{h}_m)\times\nabla G \,\mathrm{d}s + \int_{\partial\Omega_m} (\mathbf{n}\cdot\mathbf{h}_m)\,\nabla G \,\mathrm{d}s + (\sigma + \mathrm{j}\omega\varepsilon) \int_{\partial\Omega_m} G\,(\mathbf{n}\times\mathbf{e}_m)\,\mathrm{d}s \right] \quad (1)$$

where js is the impressed current density of the sources (ωs), ξ is the singularity factor (ξ = 0.5 on the surface and ξ = 1 elsewhere) and n is the normal unit vector directed outwards from ω. the m-th element is the source point while, during the setting of the matrix, the computational point is the barycentre of the i-th element. the green function is defined as

$$G(\mathbf{r},\mathbf{r}') = \frac{e^{-\mathrm{j}k|\mathbf{r}-\mathbf{r}'|}}{4\pi\,|\mathbf{r}-\mathbf{r}'|}, \qquad k = \sqrt{\omega^2\mu\varepsilon - \mathrm{j}\omega\mu\sigma} \quad (2)$$

where r and r' are the coordinate vectors of the observation and source points, while µ, σ and ε are the magnetic permeability, the electric conductivity and the electric permittivity, respectively. efie and mfie can be written for any volume belonging to the computational domain (the air region outside the phantom, the inside of the phantom and also the inside of possible metallic objects). the volume integrals must be included only in volumes containing impressed sources. the integrals of the green function and its gradient are computed by using an adaptive quadrature rule based on the kronrod algorithm, which enables high accuracy while limiting the computational burden.
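a minimal numeric sketch of the kernel in equation (2), assuming numpy; the function and variable names are illustrative only, not the authors' code.

```python
# minimal sketch of the green function kernel of equation (2),
# assuming numpy; r and r_src are 3-element coordinate vectors.
import numpy as np

def wavenumber(omega, mu, sigma, eps):
    """complex wavenumber k with k**2 = omega**2*mu*eps - j*omega*mu*sigma."""
    return np.sqrt(omega**2 * mu * eps - 1j * omega * mu * sigma)

def green(r, r_src, k):
    """scalar green function g = exp(-j*k*|r - r'|) / (4*pi*|r - r'|)."""
    dist = np.linalg.norm(np.asarray(r) - np.asarray(r_src))
    return np.exp(-1j * k * dist) / (4.0 * np.pi * dist)
```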
3. experimental set-up
figure 1 shows an overall view of the experimental set-up, including the phantom filled with a yellow liquid (the human tissue-like liquid), the 3d probe positioning system made of dielectric material with low permittivity (εr = 2), the electric (or magnetic) field probe and its optical converter, and the support for the antennas (more antennas are foreseen). the black loop antenna can be seen on the right side of the phantom.

figure 1. experimental set-up.

3.1. phantom
the phantom consists of a cylindrical container made of polymethylmethacrylate, whose dielectric permittivity (εr = 2.2), much lower than that of the liquid, does not affect the electric field induced in the phantom. it has both an internal diameter and an internal height equal to 240 mm. the liquid, prepared by the physikalisch-technische bundesanstalt laboratories (ptb, berlin), shows electrical characteristics comparable with those of some human tissues. to reduce the effects of the support material on the electric field behaviour around the phantom, four slim columns sustain the container, increasing the distance from the structure supporting the whole system.

3.2. magnetic field generation system
the magnetic field source shall be able to induce, inside the phantom, an electric field with a magnitude detectable by the near-field probes employed for this experiment. moreover, the source should not produce spurious electric fields which are not considered in the numerical procedure. the source selected for these purposes is a small loop antenna, consisting of a single turn with 6 cm diameter, inside a balanced e-field shield (figure 2a). through preliminary computations, a magnetic field amplitude of 1 a/m produced by a current of 0.5 a is predicted at 20 mm from the loop. of course, this value is lower than the typical magnetic field value generated by the rf imaging coil of mri devices [9]-[11]; nevertheless, this is acceptable because the purpose here is not to cause the nuclear magnetic resonance but to investigate the possible side-effects induced in the human body during an mri examination. the electric field induced by such a magnetic field is estimated at a few tens of volts per meter, enough to be detected by the probe. a first experimental arrangement for the supply and estimation of the current flowing in the antenna is depicted in figure 2b. the supply system consists of a signal generator and a 100 w broadband amplifier (9 khz to 100 mhz). in order to improve the matching between the supply system and the antenna, a series capacitor cs, with a rated value of 8 pf, is introduced. in this way it is possible to obtain the required field without exceeding the maximum power (25 w) of the 50 ω resistive load. this load makes it possible to estimate the current which generates the field. indeed, through the knowledge of the voltage across the resistive load, obtained from the measurement of the incident power p, it is possible to solve the circuital model shown in figure 3a and to estimate the current that flows in the inductance l simulating the loop antenna. the parameter cl simulates the stray electrical coupling between the loop wire and its shield. because of the antenna dimensions and supply frequency, the radiation resistance can be neglected, as well as the joule losses, and the circuital model can be simplified.

figure 2. a) sketch of the loop antenna [8]. b) sketch of the set-up for the generation of the rf magnetic field (synthesizer, 100 w rf amplifier, directional coupler with incident and reflected power sensors, matching capacitor, 50 ω / 25 w load and antenna coil).
figure 3. a) circuital model for the estimation of the current flowing in the loop antenna. b) computed and measured frequency behaviour of the t coefficient (t in s versus f in mhz, from 45 mhz to 70 mhz).

figure 4. investigation lines inside the phantom: frontal and upper view.

the current flowing in the loop antenna (and hence in the inductance l), il, which generates the magnetic field, can be directly estimated by knowing the power transmitted to rload and the admittance t defined as

$$t = \frac{i_l}{v_f} = \frac{\omega c_s}{1-\left(\omega/\omega_{res}\right)^2}, \qquad \omega_{res} = \frac{1}{\sqrt{l\,(c_s + c_l)}} \quad (3)$$

the transmitted power is obtained as the difference between the incident and reflected power values measured by means of a directional coupler and a power meter. from that, the voltage vf is easily obtained. the unknowns cs and ωres are extrapolated from the experimental frequency behaviour of t from 45 mhz to 70 mhz by means of a least squares algorithm applied between the measured frequency behaviour and the computed one. assuming that the current is uniform along the single turn (i.e., πd << λ/10, where d is the turn diameter and λ is the wavelength at the working frequency), the magnetic field in the loop centre is $h_c = i_l/d$; then the following system can be written:

$$\begin{cases} \dfrac{h_c(\omega_1)}{v_f(\omega_1)} = \dfrac{1}{d}\,\dfrac{\omega_1 c_s}{1-(\omega_1/\omega_{res})^2} \\[1ex] \dfrac{h_c(\omega_2)}{v_f(\omega_2)} = \dfrac{1}{d}\,\dfrac{\omega_2 c_s}{1-(\omega_2/\omega_{res})^2} \end{cases} \quad (4)$$

where ω1 and ω2 are arbitrary angular frequencies chosen in the range 45 mhz to 70 mhz. from a series of measured pairs (hc, vf) evaluated in the same frequency range, a couple of such values is randomly extracted to solve system (4), obtaining a series of couples of values (cs,i, ωres,i). by means of the above-cited least squares algorithm between the measured t(ω) values and the computed ones, the optimal couple of values (cs, ωres) was found to be 8.3 pf and 70.6 mhz, respectively. figure 3b shows the frequency behaviour of the computed and measured t coefficient. the gap on the e-field shield, as well as a coupling between the antenna and earth due to a non-symmetric supply, can compromise the comparison between measurement and numerical results. indeed, these two elements introduce spurious electric components which are not taken into account by the numerical procedure. a non-symmetric supply for the antenna generates a common electric coupling with the earth, evidenced by an electric field component perpendicular to the earth, not foreseen by the model. the gap problem can be overcome by shielding the gap with a metallic strip. the earth coupling disappears when the antenna is placed close to the phantom: the conductivity of the liquid "shields" the antenna from the earth. the encouraging results shown in the following prove the reliability of these assertions.

3.3. electric and magnetic field probes
since the electromagnetic field measurement should be performed in the near-field region, probes with very small dimensions are chosen. moreover, owing to their usage inside the phantom, resistance to organic solvents must be guaranteed. their main characteristics are summarised in table 1 [12]. these isotropic probes give the true rms value of the applied field along the three orthogonal axes. thanks to an electro-optic converter, the information can be easily carried from the probe to the meter, which can be placed a few meters away from the magnetic source.
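a minimal sketch of the current estimation of section 3.2, under the simplified lumped model of figure 3a; it assumes numpy, uses the fitted values cs = 8.3 pf and fres = 70.6 mhz quoted above, neglects losses as in the text, and all names are illustrative.

```python
# minimal sketch of the loop-current estimation of equation (3),
# assuming numpy; p_inc and p_refl are the incident and reflected
# powers measured through the directional coupler (in watts).
import numpy as np

R_LOAD = 50.0                 # ohm, resistive load sensing the current
C_S = 8.3e-12                 # f, fitted series capacitance
W_RES = 2 * np.pi * 70.6e6    # rad/s, fitted resonance angular frequency

def loop_current(p_inc, p_refl, freq):
    """estimate the loop current i_l from the transmitted power,
    using the admittance t of equation (3)."""
    p_trans = p_inc - p_refl                 # transmitted power, w
    v_f = np.sqrt(p_trans * R_LOAD)          # rms voltage across the load, v
    w = 2 * np.pi * freq
    t = w * C_S / (1.0 - (w / W_RES) ** 2)   # admittance magnitude, s
    return t * v_f                           # loop current magnitude, a
```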
the field generation and detection system is automatically managed by a python program, which reduces the measurement time and increases the accuracy of the power measurement, allowing for a thermal equilibrium of the devices.

4. comparison between numerical model and measurement results
a preliminary calibration phase gives evidence of a rotation of the x/y field arrays around the z direction with respect to the absolute reference system shown in figure 5, for both the electric and magnetic field probes. the estimation of this rotation angle is performed thanks to the symmetry conditions imposed by the loop antenna. the results, accompanied by an uncertainty of about 10 %, are 20° and 23.5° for the electric and magnetic probe respectively. the algorithm which manipulates the acquired field components to apply the coordinate rotation is implemented in the software which controls the whole experimental set-up (section 5). as a first comparison between the bem algorithm and measurements, two investigation lines are chosen.

table 1. characteristics of the electric and magnetic field probes.
frequency: electric field probe, 40 mhz to 6 ghz; magnetic field probe, 10 mhz to 600 mhz (absolute accuracy ± 6.0 %, linearized output).
dynamic range: electric field probe, 2 v/m to 1000 v/m; magnetic field probe, 0.08 a/m to 40 a/m at 13.56 mhz and 0.01 a/m to 5 a/m at 100 mhz.
overall length: 337 mm (tip: 40 mm) for both probes.
distance from probe tip to dipole centre: 2.5 mm (electric), 3 mm (magnetic).
tip diameter: 8 mm, body 10 mm (electric); 6 mm, body 12 mm (magnetic).

line 1 is parallel to the y axis (figure 4), 30 mm above it. since the induced current effects on the magnetic field at 64 mhz are negligible, the ideal magnetic field array lies on the y-z plane, so the hx_bem component is null (figure 5a). the electric field lines induced in the phantom are, at this frequency, circular with their centres on the loop axis; thus, only ex_bem is non-zero along the investigation line (figure 5b). line 2 is parallel to the z axis and is placed 110 mm from the reference system centre. since this line lies on the z-y plane, which cuts the antenna symmetrically, the hx_bem component is null; hz_bem shows two peaks (figure 6a) in correspondence with the minimum distance between line 2 and the turn of the antenna, while hy_bem reaches its peak in correspondence with the loop antenna axis. the shape of the ex_bem component should be analogous to the shape of hz_bem, while ey_bem and ez_bem are zero (figure 6b) because of the circular shape of the electric field lines. when performing electric field measurements, the noise level is not always negligible (in the order of 2 to 3 v/m). to reduce the noise effects on the measured values, the following expression is employed:

$$e_{64\,\text{mhz}} = \sqrt{e_{tot}^2 - e_{noise}^2} \quad (5)$$

where e64mhz is the electric field corrected for the noise, etot is the electric field given by the meter (signal plus noise) and enoise is the mean value of the noise detected when the loop antenna is not supplied.

figure 5. magnetic a) and electric b) r.m.s. field behaviour along line 1 of the three components with a unit-current in the antenna.
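a minimal sketch of the noise correction of equation (5), combined with the per-unit-current normalisation used for the comparison plots; numpy is assumed, and the guard against negative arguments is an addition of the sketch, not part of the original procedure.

```python
# minimal sketch of the noise correction of equation (5) and of the
# per-unit-current normalisation; array/variable names are hypothetical.
import numpy as np

def correct_field(e_tot, e_noise, i_loop):
    """subtract the noise power from the measured field (equation (5))
    and normalise to a unit current in the loop antenna."""
    # clip at zero in case the noise estimate exceeds the reading
    e_signal = np.sqrt(np.maximum(e_tot**2 - e_noise**2, 0.0))
    return e_signal / i_loop   # v/m per ampere
```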
figure 6. magnetic a) and electric b) r.m.s. field behaviour along line 2 of the three components with a unit-current in the antenna.

figure 7. a) metallic parallelepiped, 200 mm high, placed inside the phantom. b) investigation lines and dimensions: frontal and upper view.

in spite of the problems spotted both in the estimation of the current flowing in the antenna at 64 mhz and in the approximation introduced in the numerical representation of the antenna, the measurement-computation comparisons give encouraging results. in particular, figures 6a and 6b show the very good performance of the probes, which can detect fields having a high spatial gradient (close to the antenna axis: z = 0.12 m). the experimental scenario is finally made more complicated by introducing a 40 × 40 × 240 mm3, conductive, non-magnetic parallelepiped. the aim is to roughly simulate a medical implant. its large dimensions produce a highly distorted field, allowing a first evaluation of the sensitivity of the numerical method. figure 7 shows a picture of the metallic object inserted in the phantom and its position. figures 8 and 9 provide the numerical results compared with the measured ones for the same lines previously investigated. as can be easily seen, a higher discrepancy between measurement and computation is detected, in particular for line 2. this fact deserves further investigation, but it can probably be ascribed to the features of the problem matrix, which, in the presence of the metallic object, is badly scaled and gives rise to a non-optimal convergence of the iterative solver used to solve the bem formulation. moreover, the wavelength in the metallic object is much shorter than elsewhere, and this would require a very fine mesh, with a strong increase of the computational burden needed to guarantee an accurate reconstruction of the electromagnetic quantities.

5. system control and operations
in order to increase safety, measurement reliability and speed, the supply and measurement systems are managed by software developed in the python environment. python is an interpreted, general-purpose, high-level programming language [13]. the software controls the synthesizer and the power amplifier by means of a gpib-usb controller. the acquisition of the incident and reflected power is supervised through a usb connection. to download the acquired electric/magnetic field components, a simple network between the remote unit easy4 (server) and the pc (client) is set up via an ethernet connection [14]. thanks to a console user interface it is possible to insert the working directory and the output file names. if the directory and/or the files do not exist, the program creates them. the measurement results are stored in two output files with the same name but different extensions: the first, with a ".txt" extension, contains the per-unit-current electric/magnetic field components, referred to the main reference coordinate system, while the second, with a ".raw" extension, contains the raw field components, affected by noise and referred to a rotated coordinate system. moreover, the simple user interface requires: the type of the employed probe (electric or magnetic field probe), the geometrical characteristics of the investigation curve and the number of measurement points.

figure 8. magnetic a) and electric b) r.m.s. field behaviour along line 1 of the three components with a unit-current in the antenna.
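a minimal sketch of the coordinate-rotation step performed by the control software (the calibration angles are 20° for the electric probe and 23.5° for the magnetic one, as reported in section 4); numpy is assumed, and the sign convention of the rotation is an assumption of the sketch.

```python
# minimal sketch of the field-array rotation around z used to refer
# the measured x/y components to the main coordinate system of figure 4.
import numpy as np

def rotate_to_main_frame(fx, fy, fz, theta_deg):
    """rotate the x/y field components around z by -theta; the z
    component is unchanged. theta is the probe calibration angle."""
    th = np.radians(theta_deg)
    fx_main = fx * np.cos(th) + fy * np.sin(th)
    fy_main = -fx * np.sin(th) + fy * np.cos(th)
    return fx_main, fy_main, fz
```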
figure 9. magnetic a) and electric b) r.m.s. field behaviour along line 2 of the three components with a unit-current in the antenna.

once the required measurement attributes are inserted and the electric field probe is connected, the software makes the noise level detection available for each component. then, a warm-up period of 60 s is run to guarantee a thermal equilibrium of the whole system; after that, the remote unit provides the field components, which are downloaded. at the same time the power information is downloaded and, thanks to the circuital model of the antenna, the flowing current is estimated. a suitable module of the software rotates the field array in order to refer it to the main coordinate system, cancels the noise and evaluates the per-unit-current field components. after this phase, the software gives the coordinates of the new investigation point; the operator places the probe in the indicated point and, by pushing enter, a new supply, acquisition and manipulation phase is run. in order to protect the matching capacitors, a control structure gives a warning signal if the voltage across the capacitors, estimated on the basis of the antenna circuital model and of the voltage and frequency values set on the synthesizer, exceeds a defined limit.

6. conclusions
a description of a first experimental set-up devoted to the assessment of the reliability of numerical models for the estimation of the electromagnetic fields induced in a phantom simulating human tissues is presented. a simple lumped-parameter model of the transmitting antenna is implemented, and evidence of the effectiveness of the model is given up to about 70 mhz. a first comparison between computation and measurement along two investigation lines inside the phantom is shown. the results are encouraging. the computed electric field over line 1 shows a maximum deviation lower than 8 % for field magnitudes higher than 5 v/m. for line 2, characterized by a higher field gradient, the computed electric field has a maximum deviation lower than 20 % for field magnitudes higher than 5 v/m. the magnetic field comparison between computation and measurement over line 1 shows a deviation higher than 30 % for the hy component, while the hz component is better reproduced. the estimated magnetic field over line 2 reproduces the measured one with high fidelity, except for the peak value of the hy component, for which a deviation of 6 % is detected. to investigate more deeply the reliability of the numerical model in the presence of medical implants, the measurements were carried out by placing a metallic object within the tissue-simulating liquid in the phantom. even though the comparison between measurement and numerical method is acceptable, a higher deviation is found, in particular for the peak values of each component. indeed, a metallic object with high conductivity introduces a spatial discontinuity of this parameter and can reduce the accuracy of the numerical simulation. presently, the design of a new antenna system is being developed to increase the electric and magnetic field strength generated in the phantom, and a further measurement campaign is in progress.

acknowledgement
the authors wish to thank dr.
gerd weidemann from ptb for his contribution in the production and characterization of the human tissue-like liquid.

references
[1] iec 62311, "assessment of electronic and electrical equipment related to human exposure restrictions for electromagnetic fields (0 hz - 300 ghz)", 2007.
[2] k.m. koch, b.a. hargreaves, k. butts pauly, w. chen, g.e. gold, k.f. king, "magnetic resonance imaging near metal implants", journal of magnetic resonance imaging, vol. 32, no. 4, pp. 773-787, 2010.
[3] h.s. ho, "safety of metallic implants in magnetic resonance imaging", journal of magnetic resonance imaging, vol. 14, no. 4, pp. 472-477, 2001.
[4] m. borsero, o. bottauscio, l. zilberti, m. chiampi, w. wang, "a boundary element estimate of radiated emissions produced by unknown sources", electromagnetic compatibility (emc europe), 2012 international symposium on.
[5] o. bottauscio, m. chiampi, l. zilberti, "boundary element approach to relate surface fields with the specific absorption rate (sar) induced in 3-d human phantoms", eng. analysis with boundary elements, vol. 35, (2011), pp. 657-666.
[6] j.t. vaughan, j.r. griffiths (eds.), "rf coils for mri", john wiley & sons ltd, 2012.
[7] j.p. hornak, "the basics of mri", www.cis.rit.edu/htbooks/mri/, 1996.
[8] "near field probe set user's manual", emco the electro mechanics company, 1988.
[9] s. pisa et al., "a study of the interaction between implanted pacemakers and the radio-frequency field produced by magnetic resonance imaging apparatus", ieee trans. on emc, vol. 50, no. 1, pp. 35-42, february 2008.
[10] h. tsuboi, h. tanaka, h. misaka, m. fujita, "electromagnetic field analysis of rf antenna for mri", ieee transactions on magnetics, vol. 24, no. 6, pp. 2591-2593, nov 1988.
[11] m. fujita, m. higuchi, h. tsuboi, h. tanaka, t. misaki, "design of the rf antenna for mri", ieee transactions on magnetics, vol. 26, no. 2, pp. 901-904, 1990.
[12] speag, "easy4/mri", www.speag.com/products/easy4mri/probes-2.
[13] b.m. harwani, "introduction to python programming and developing gui applications with pyqt", course technology, a part of cengage learning, 2012.
[14] easy4 exposure acquisition system, schmid & partner engineering ag, february 3, 2010.

metrological traceability for the analysis of environmental pollutants in the atmosphere
acta imeko, issn: 2221-870x, december 2015, volume 4, number 4, 62-65
francesca rolle, enrica pessana, michela sega
istituto nazionale di ricerca metrologica (inrim), strada delle cacce 91, 10135 torino, italy

section: technical note
keywords: metrological traceability; environmental pollution; primary gaseous mixtures; polycyclic aromatic hydrocarbons
citation: francesca rolle, enrica pessana, michela sega, metrological traceability for the analysis of environmental pollutants in the atmosphere, acta imeko, vol. 4, no. 4, article 12, december 2015, identifier: imeko-acta-04 (2015)-04-12
editor: paolo carbone, university of perugia, italy
received: september 30, 2014; in final form november 6, 2015; published december 2015
copyright: © 2015 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: francesca rolle, e-mail: f.rolle@inrim.it

abstract
the importance of carrying out accurate and reliable measurements is a fundamental topic in many different fields, in particular for the safeguard of the environment and the climatic conditions of the planet. this is the basis for the planning of correct actions to prevent environmental damage and potential harmful effects for human health. the application of metrology to chemical monitoring can assure the reliability of measurement results. at the istituto nazionale di ricerca metrologica (inrim), the italian metrology institute, different activities are carried out for the analysis of gaseous and organic pollutants in the atmosphere. this paper deals with some examples of such activities. for gaseous pollutants, primary gravimetric mixtures are produced for the calibration of the instrumentation devoted to the analysis of carbon dioxide (co2) and nitrogen oxides (nox), with the aim of assuring traceability to the measurements of co2 at ambient level and at the vehicle emission level, and of nox at ambient level. on the other hand, research is carried out in the field of organic micropollutants, in particular regarding the establishment of metrological traceability for the atmospheric concentrations of polycyclic aromatic hydrocarbons (pahs) adsorbed on particulate matter (pm).

1. introduction
the importance of carrying out accurate and reliable measurements is a fundamental topic in many different fields, in particular for the safeguard of the environment and the climatic conditions of the planet. this is the basis for the planning of correct actions to prevent environmental damage and potential harmful effects for the health of human beings and biota. in this framework, the application of metrology to chemical monitoring can assure the reliability of measurement results, in particular for the analysis of pollutants in different environmental sectors. metrological traceability of measurement results is a fundamental feature in every measurement field, but it is crucial when dealing with environmental chemistry. in this field, the application of metrological concepts is not straightforward, as the analytes are usually present at very low concentrations (even in traces) and the sample matrix can generate interferences during the instrumental identification and quantification of the analytes of interest. at the istituto nazionale di ricerca metrologica (inrim), the italian metrology institute, different activities are carried out for the analysis of gaseous and organic pollutants in the atmosphere. this paper deals with some examples of such activities. on the one hand, for gaseous pollutants, an important activity concerns the analysis of two species, carbon dioxide (co2) and nitrogen oxides (nox), the former having great relevance for the greenhouse effect and the latter for photochemical pollution in urban areas. at inrim, primary gravimetric mixtures are produced for the calibration of the instrumentation devoted to the analysis of these gases, with the aim of assuring metrological traceability to the measurements of co2 at ambient level and at the vehicle emission level, and of nox at ambient level.
on the other hand, research is carried out in the field of organic micropollutants, in particular regarding the establishment of metrological traceability for the atmospheric concentrations of polycyclic aromatic hydrocarbons (pahs) adsorbed on particulate matter (pm). a suitable metrological procedure was developed for the extraction and determination of the priority pahs, as defined by the united states environmental protection agency (us epa), on pm sampled from the air of torino, italy. in section 2, some remarks on the activity carried out for gas analysis are given. section 3 is focused on organic micropollutant analysis.

2. analysis of gaseous environmental pollutants
2.1. inrim activity
at inrim, reference gas mixtures of carbon dioxide (co2) and nitrogen oxides (nox) in the µmol/mol range are realised by gravimetry, which is a primary method described in the international standard iso 6142-1 [1]. such primary mixtures are validated by using different analytical techniques: non-dispersive infrared spectroscopy (ndir) for co2, chemiluminescence analysis and fourier transform infrared spectroscopy (ftir) for nox. stability studies of the prepared mixtures are carried out in view of their possible use as certified reference materials (crms) for instrumental calibration. a fundamental part of the work is devoted to the evaluation of measurement uncertainty and to the study of matrix effects on the analytical response.

2.2. carbon dioxide and nitrogen oxides
co2 is the primary greenhouse gas emitted through human activities and is naturally present in the atmosphere as part of the earth's carbon cycle. human activities are altering the carbon cycle, both by adding more co2 to the atmosphere and by influencing the ability of natural sinks, like forests, to remove co2 from the atmosphere. the main sources of co2 emissions are power plants, transportation and industry [2], and the most effective way to lower co2 levels in the atmosphere is to reduce its emissions and fossil fuel consumption. nox indicates the sum of nitric oxide (no) and nitrogen dioxide (no2). no is generated by high-temperature combustion processes (e.g. in motor vehicles), while no2 is generated from no by means of photochemical reactions. nox can be responsible for pulmonary diseases (e.g. asthma, bronchitis) after long periods of exposure at low concentrations. in the troposphere, they are classified as secondary pollutants and participate in the photochemical pollution cycle and in acid rain formation. in addition, no has a large influence both on ozone and on the hydroxyl radical. the sum of the many oxidised nitrogen species, both organic and inorganic, is referred to as noy, excluding nitrous oxide (n2o), ammonia (nh3), acetonitrile (acn) and hydrocyanic acid (hcn). the official method for nox monitoring is chemiluminescence analysis, as prescribed in en 14211 [3].

2.3. primary method for the preparation of gaseous reference materials
at inrim, primary gravimetric mixtures are prepared by using a validated procedure. the cylinders are previously conditioned by evacuating and heating them, and then filling them with the matrix gas; this process is repeated at least 3 times. the preparation procedure of the mixtures is articulated into three steps: the empty cylinder is weighed and then filled, first with the analyte gas and then with the matrix gas. each step is followed by a precision weighing, in order to determine the exact amount of gas introduced into the cylinder. the precision weighing is carried out by comparing the cylinder with a reference one following a double substitution scheme, in order to take into account also the contribution of the buoyancy effect.
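a minimal sketch of the gravimetric computation underlying iso 6142-1, for a binary mixture of pure parent gases; impurities and uncertainty contributions, which the standard treats in detail, are deliberately ignored here, and the numeric example is illustrative only.

```python
# minimal sketch of the amount-of-substance fraction obtained from
# the weighed masses of a binary mixture, assuming pure parent gases.
def amount_fraction(m_analyte, molar_analyte, m_matrix, molar_matrix):
    """amount fraction (mol/mol) of the analyte from the weighed
    masses (g) and molar masses (g/mol) of analyte and matrix gas."""
    n_analyte = m_analyte / molar_analyte   # mol of analyte
    n_matrix = m_matrix / molar_matrix      # mol of matrix gas
    return n_analyte / (n_analyte + n_matrix)

# illustrative example: about 400 umol/mol of co2 in nitrogen
# amount_fraction(0.629, 44.01, 1000.0, 28.014) -> roughly 4.0e-4
```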
2.4. results
recent activities carried out at inrim concerned the realisation of no2 primary mixtures in synthetic air at a few µmol/mol (8-12 µmol/mol range), the realisation of no2 mixtures with different o2 concentrations for the evaluation of matrix effects, and of no mixtures in nitrogen at 500 nmol/mol. for the validation of these mixtures, chemiluminescence and ftir analysers were calibrated between 5 and 15 µmol/mol of no2 in synthetic air, and between 300 and 700 nmol/mol for no (chemiluminescence). co2 mixtures were prepared at ambient level (around 400 µmol/mol) and at the vehicle emission levels (11-14 %) and analysed by means of non-dispersive infrared spectroscopy (ndir). chemiluminescence analysis was thought to be a very selective technique for nox, but in recent years it was observed that this technique can give an unsatisfactory response due to the interference of noy species. for this reason, at inrim, the chemiluminescence analysis is compared with ftir analysis, which is less influenced by the interference of other nitrogen species. at the moment, improvements are also under investigation to reduce the interference of environmental parameters (e.g. ambient temperature, relative humidity, air components) which can influence the ftir analysis.

3. organic micropollutants analysis: polycyclic aromatic hydrocarbons (pahs)
pahs are a class of organic micropollutants which are present in all the environmental compartments, i.e. soil, water, air and also food. the latter represents the major source of intake for living beings. in addition, a relevant route of exposure is airborne particulate matter (pm), as pahs can adsorb on pm particles and be carried into the respiratory system, even reaching the alveolar region of the lungs. fine and ultrafine pm are important sources of urban pollution and are potentially responsible for several pathologies (mainly cardiovascular and respiratory). they may lead to harmful effects both through the direct action of the particles (i.e. inflammatory processes) and through the action of the many pollutants adsorbed on them (e.g. pahs, heavy metals). pahs are of great concern for their fallout on human health: indeed, scientific evidence has shown that some pahs may have carcinogenic effects. in particular, the most harmful of them is benzo[a]pyrene (bap), which was classified as a carcinogenic agent to humans (group 1) by the international agency for research on cancer (iarc) [4]. the european regulations prescribe the monitoring of this pollutant in ambient air and fix a target value of 1 ng/m3 for the total content in the pm10 fraction, averaged over a calendar year [5]. in this framework, the use of reliable analytical methods for the determination of pah levels in the environment is necessary, and the guarantee of accurate and comparable analytical results is fundamental to support the planning of preventive actions for the reduction of pm levels and emissions in the atmosphere.
3.1. inrim method for the analysis of priority pahs
at inrim, a procedure for the quantification of bap in ambient air was developed [6], starting from the guidelines given in the european standard method en 15549 [7], which describes a methodology for the determination of bap adsorbed on pm in ambient air and can be used in the framework of the european directive related to air quality [8]. the developed method was extended to the 16 pahs classified as priority pollutants by us epa and to other pahs having toxicological relevance. the procedure was applied to samples of total suspended particles (tsp) collected in torino, and was validated in all its steps by evaluating different parameters, among which the recovery efficiency, the limit of detection (lod) and the limit of quantification (loq). in our method, a classical extraction technique, soxhlet extraction [9], was chosen, and the quantification was performed by means of gas chromatography coupled with mass spectrometry (gc-ms). sampling campaigns were carried out in different seasons to evaluate the concentration profiles of pahs during the year. the pm was collected by means of a low-volume sampler with a sampling head for tsp, using glass fibre filters as sampling media. the filters were weighed before and after the sampling to determine the real masses of sampled pm. in addition, the same procedure was used for the determination of the final volumes of the extracts, which were calculated from the weighed masses of each extract, taking into account the density of the extraction solvent. in this way, we could assure metrological traceability of the masses of pahs in the sample extracts, as they were gravimetrically determined by comparison with calibrated mass standards. the metrological traceability of the quantification step was assured by calibrating the gc-ms with a set of reference standard solutions prepared by gravimetric dilution of a suitable crm, nist srm 2260a, which contains 36 aromatic hydrocarbons in toluene. the recovery efficiency ranges from 70 % to 100 % for the heavier pahs (having more than 4 benzene rings), while for small pahs (2-3 rings) the recovery efficiencies were generally not satisfactory, probably because these analytes are very volatile and can be lost during the extraction step. the regulation [7] fixes a lod value only for bap (lower than 0.04 ng/m3). the lod of our internal method was evaluated between 0.01 and 0.04 ng/m3 for all the pahs, hence in agreement with the regulation; the only exception was phenanthrene, with a lod of 0.06 ng/m3. the loq was calculated as 10 times the standard deviation of repeated analyses of the blanks.
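a minimal sketch of the evaluation of the detection limits from replicate blank analyses; the 10-sigma rule for the loq is stated in the text, while the 3-sigma convention used here for the lod is a common assumption, not a statement of the paper.

```python
# minimal sketch of lod/loq estimation from replicate blank analyses,
# assuming numpy; blank_values are the replicate blank results,
# expressed in the same unit as the reported limits (ng/m3).
import numpy as np

def detection_limits(blank_values):
    """return (lod, loq): 3 and 10 times the sample standard
    deviation of the blanks (the 3-sigma lod is an assumption)."""
    s_blank = np.std(blank_values, ddof=1)
    return 3 * s_blank, 10 * s_blank
```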
3.2. results and discussion
the results obtained for the pah analysis were in accordance with the expected seasonal trends. the concentrations measured in pm sampled in spring are below the target value of 1 ng/m3 fixed for bap and are not very high compared with the results of samples collected in winter. the measured concentrations highlight the increasing trend of pahs in the atmosphere from spring to winter. this is a typical seasonal trend for pahs, characterized by a lowering of their levels during spring and summer, mainly due to the reduction of emissions, the change in meteorological conditions and the increase in ambient temperature. during winter, the major emissions in urban centres come from domestic heating plants and vehicular traffic. in addition, the lowering of the mixing height in the atmosphere forces the pollutants to persist longer in the lower troposphere, while the colder temperatures can enhance the repartition from the vapour phase onto pm particles also for small pahs, because their vapour pressure is strongly reduced by the decrease of the ambient temperature. bap concentrations measured in winter samples exceeded the target value of 1 ng/m3. these results confirm the abundance of bap and of the other pahs in urban pm and, consequently, highlight the need for corrective actions for the reduction of pah inputs into the atmosphere. in addition, the results show that the ratios of the pahs are almost constant in samples from different periods of the year and that the repartition of pahs between vapour and particulate phases is not affected by seasonal changes, but depends on the physico-chemical properties of the pahs and on the sources of emission. however, for particulate sampled during some rainy days a lowering of the pah level was observed. this is an example of the effects of the meteorological conditions on pm and pah levels in the troposphere: favourable weather conditions can help the deposition of pm particles to the ground, consequently reducing the measured pah concentrations.

4. conclusions
the metrological approach followed at inrim for the analysis of gaseous and organic pollutants has very important implications for these fields of environmental measurements, in particular for the significance of the considered analytes. the aim of the work conducted at inrim is in particular the establishment of correct metrological traceability chains for all the analytes considered. further developments for the preparation of gaseous reference materials concern the use of another primary method for the preparation of reference gas mixtures, namely the dynamic dilution technique. this method has the advantage of preparing primary mixtures just prior to use; hence, it is suitable for unstable and reactive molecules and for low concentration levels. in the field of organic analysis, improvements are foreseen in order to give traceability to the pm sampling step and to perform sampling campaigns of pm, as requested by the european regulation, extending the research activity also to different analytes of environmental and toxicological interest.

references
[1] international standard iso 6142-1:2015, "gas analysis - preparation of calibration gas mixtures - part 1: gravimetric method for class i mixtures".
[2] http://www3.epa.gov/climatechange/ghgemissions/sources.html
[3] en 14211:2012, "ambient air. standard method for the measurement of the concentration of nitrogen dioxide and nitrogen monoxide by chemiluminescence".
[4] http://monographs.iarc.fr/eng/classification/index.php
[5] directive 2004/107/ec of the european parliament and of the council of 15 december 2004 relating to arsenic, cadmium, mercury, nickel and polycyclic aromatic hydrocarbons in ambient air.
[6] f. rolle, v. maurino, m. sega, "metrological traceability for benzo[a]pyrene quantification in airborne particulate matter", accreditation and quality assurance, vol. 17, n. 2, 2012, pp. 191-197.
[7] en 15549:2008, "air quality - standard method for the measurement of the concentration of benzo[a]pyrene in ambient air".
[8] directive 2008/50/ec of the european parliament and of the council of 21 may 2008 on ambient air quality and cleaner air for europe.
[9] us epa method 3540c, "soxhlet extraction", 1996.
an iot measurement system for a tailored monitoring of co2 and total volatile organic compounds inside face masks
acta imeko, issn: 2221-870x, september 2023, volume 12, number 3, 1-8
filippo ruffa1, mariacarla lugarà1, gaetano fulco1, claudio de capua1
1 dip. diies, università 'mediterranea' di reggio calabria, 89122 reggio calabria, italy

section: research paper
keywords: air quality; face mask; tvoc; covid-19
citation: filippo ruffa, mariacarla lugarà, gaetano fulco, claudio de capua, an iot measurement system for a tailored monitoring of co2 and total volatile organic compounds inside face masks, acta imeko, vol. 12, no. 3, article 9, september 2023, identifier: imeko-acta-12 (2023)-03-09
section editor: susanna spinsante, grazia iadarola, polytechnic university of marche, ancona, italy
received april 14, 2023; in final form june 16, 2023; published september 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: filippo ruffa, e-mail: filippo.ruffa@unirc.it

abstract
this paper proposes an innovative iot-based system for monitoring air quality inside protective equipment such as ffp2 masks, which have become more widespread due to the covid-19 pandemic. the system aims to provide diagnostic elements to specialist doctors and to suggest a healthier use of the mask by monitoring the concentration of pollutants. the system aggregates data over a 15-minute window and calculates average values for each measured parameter, comparing them with reference thresholds to suggest removing the mask if necessary. an innovative aspect is the personalized monitoring of exhaled breath, specifically volatile organic compounds (vocs), providing a customized and reliable information framework for doctors. this is possible thanks to the integration of removable memories, inside which the user's personal information and the metrological characteristics of the system are stored in a standardized form. the proposed platform is accessible to both users and doctors, enabling early diagnosis by providing a complete picture of the patient's specific condition. this solution can have a strong impact on daily life and well-being, especially for diseases that increase the presence of certain compounds in the exhaled breath.

1. introduction
after the declaration of covid-19 as a global pandemic, multiple scientific research publications acknowledged that sars-cov-2 is mainly transmitted from human to human through close contacts, airborne respiratory droplets, aerosols and contaminated surfaces [1]-[3]. since march 2020, several countermeasures have been taken by national governments to limit the spread of the pandemic, whose impact has been devastating in terms of human lives [4]. these countermeasures include restrictive measures to limit interpersonal contacts and the use of personal protective equipment (ppe), i.e., face masks. the introduction of the new mrna covid-19 vaccines strongly reduced morbidity and mortality; however, they were not enough to prevent the spread of new sars-cov-2 variants and to fully protect the groups of people most at risk. for this reason, the use of face masks and social distancing are still highly recommended, mainly in those areas with high viral spread [5], [6]. in this context, the use of sensors and the integration of the internet of things (iot) can lead to a more efficient and smarter usage of ppe, allowing a continuous monitoring of the microenvironment inside it.
several studies assess the importance of monitoring air quality in order to reduce exposure to some pollutants that, over time, can increase the risk of psycho-physical discomfort, which often leads to devastating effects on the human organism [7]-[11]. ppe filters the air in both directions (inhaled and exhaled), creating a microenvironment which has higher temperature and humidity than the external environment [12]. moreover, with the use of face masks, a substantial increase in inhaled co2 concentration, also known as co2 rebreathing, is observed [13]-[15]. several studies demonstrated that co2 accumulation in the mask microenvironment varies as a function of breath rate and intensity. it was demonstrated that the increase in co2 concentration ranged from 1.5 % at rest, during speech, or at low work rates, up to 3.5 % during low-intensity exercise, which is well beyond the value expected in atmospheric air (~0.04 %) [11], [16]. several studies have also analysed the impact of ppe usage during physical exercise [17]-[19]. as stated in [20]: "even though in healthy populations, the potentially life-saving benefits of wearing facemasks seem to outweigh the documented adverse effects, there seems to be widespread
in addition, the integration of sensors and the monitoring of the microenvironment inside the face mask may give important information on the health status of the person who wear it. in particular, a high concentration of some substances, mainly vocs, in exhaled breath, may relate to an increase in oxidative stress and many potential health problems, such as chronic inflammation [28], hypoxia in multi chemical sensitivity (mcs) [29] and even covid-19 [30]. as said in [31] the main techniques used to detect biomarkers in exhaled breath are based either on chromatography and spectroscopy, which is used mainly in laboratory, or on bioelectric mos sensors. using these techniques, it is possible to detect and isolate some specific types of vocs. nowadays it is known that breath contains more than 3500 types of vocs [32] and some of these are particularly related to some specific health issues, such as ammonia, which is related to renal failure, liver dysfunction and cirrhosis, acetone and isoprene to diabetes and hypercholesterolemia, ethane and pentane to gi diseases and asthma, aldehydes to cancer and neurological disease. many other types of vocs are related to specific issues, and many are still under investigation in order to find if there is a correlation with specific health diseases. in this specific context, with the work presented in this paper, the authors aim to propose a tool for monitoring the air quality within the ppe, which allows, albeit without making a diagnosis, to give information indicative of the patient's state of health, based on breath analysis. in particular, the proposed solution consists in an internet of things (iot) face mask with integrated sensors for the monitoring of co2 and total volatile organic compounds (tvoc). the face mask is connected to a personal platform that allows to store all the information related to data monitoring over time and allows the physician to check the information relating their patients on their online medical record. since the masks are approved for use for a few hours, typically 6-8, after which they should be changed, the proposed type of monitoring must provide for the re-configurability of the new masks. this is implemented in the proposed approach using removable memories, which contain all the information relating to the specific patient. in this way, the proposed solution relies on iot to implement a tailored monitoring of the parameters of the patient in an ambient assisted living application [33]-[35], both for prevention of possible health issues and for sharing data on his/her digital medical record. 2. methodology in this section the methodologies used to develop the proposed iot face mask are explained. in detail the discussion is focused on the design of the monitoring system and the iot tools. the proposed system relies on the integration of sensors inside the face mask to continuously monitor co2 and tvoc concentration in the mask microenvironment. in fact, as said in section 1, monitoring the concentration of these two substances is important both for safety reasons, i.e., mitigation of potential harmful effects due to exposure to these substances, and for health monitoring, since tvoc concentration is related to many health issues. the system implements the framework shown in figure 1, consisting of two sensors integrated into the mask, an embedded system for data processing and transmission and an application on a remote device that, together with a local led advises the user in case of anomalies. 
furthermore, the application, synchronized with a remote smart personal platform, lets the user visualize the monitoring data and their trend over time. a microcontroller (mcu) manages the data acquisition phase and, using its internal adc, samples the co2 and tvoc sensors over a 15-minute interval at a rate of 1 s/s (samples per second). the acquired data are stored in local memory and, after the acquisition window has expired, the mcu averages all acquired samples for each monitored parameter and compares the average measured value with a reference threshold. the reference threshold is taken from local regulations and indicates the maximum concentration of the pollutant that is still considered safe for an exposure time of 15 minutes. if the result of the comparison is positive, even for only one of the two substances, the microcontroller advises the user of a possible issue by turning on a led indicator and sending a warning to the software application on the remote device. after the comparison with the threshold, the average measured value for each pollutant is sent to a remote device for data collection and visualization on the dedicated application, and the internal memory of the mcu is erased, ready to store the new incoming samples.

the innovative feature of the proposed system is the monitoring of exhaled tvoc over time, which can provide valuable information about potential health issues. however, the presence of high voc concentrations in exhaled air is not diagnostic by itself, and repeatable measurements must be taken in a controlled environment following standard protocols to compare them with reference thresholds. it is also important to consider that the normal range varies from person to person, making personalized monitoring necessary. monitoring the trend of voc concentration in exhaled breath over time is more important than a single comparison with a reference threshold. for daily use, replicating the measurement in standard conditions is not feasible, nor is it necessary, since the objective is to monitor the user's breath parameters over the medium to long term and provide indications for further medical investigation if needed. the personalization of the monitoring process takes place through a removable memory support, which carries general information on the single user, on his/her health status, on average monitoring data over different periods of time and on the normal reference range defined by the personal physician. another removable memory support carries calibration and accuracy information on the specific measurement system, allowing a check of whether the data are reliable or a re-calibration of the system is necessary. since common ffp2 face masks are approved for a use of at most 6-8 hours, when the user needs to change the mask the entire system, together with the two memory supports, is removed from the old mask and inserted into the new one to continue the monitoring. the microcontroller periodically compares the average measured values stored in the memory with the reference range defined by the physician and stored in the same memory. if the measured values are out of range, it turns on a led indicator to inform the user and suggest a medical check.
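to make the acquisition logic concrete, the following is a minimal python sketch of the 15-minute window described above; the sensor-read functions, their return values and the threshold constants are hypothetical placeholders, not the actual firmware or the normative limits.

```python
import time

# hypothetical 15-minute exposure thresholds; the real system takes these
# limits from local regulations
CO2_LIMIT_PPM = 15000.0
TVOC_LIMIT_PPB = 500.0

WINDOW_S = 15 * 60        # 15-minute aggregation window
SAMPLE_PERIOD_S = 1       # one sample per second

def read_co2_ppm():
    return 9500.0          # placeholder for the adc read of the co2 sensor

def read_tvoc_ppb():
    return 85.0            # placeholder for the adc read of the tvoc sensor

def monitor_window():
    """acquire one window, average each parameter, compare with thresholds."""
    co2, tvoc = [], []
    for _ in range(WINDOW_S // SAMPLE_PERIOD_S):
        co2.append(read_co2_ppm())
        tvoc.append(read_tvoc_ppb())
        time.sleep(SAMPLE_PERIOD_S)
    co2_avg = sum(co2) / len(co2)
    tvoc_avg = sum(tvoc) / len(tvoc)
    # the warning fires if even only one of the two averages is exceeded;
    # it drives the local led and the notification to the remote application
    warning = co2_avg > CO2_LIMIT_PPM or tvoc_avg > TVOC_LIMIT_PPB
    return co2_avg, tvoc_avg, warning   # local buffers are then discarded
```

the same structure maps directly onto the mcu firmware, with the local buffer erased once the averages have been transmitted.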
for a better customization of the monitoring process, the use of memories integrates perfectly with the use of a remote smart personal platform, better described in section 2.1, to which only the specialist doctor and the user have access. the smart personal platform is synchronized with the user software application. it is divided into two sections, one dedicated to air quality monitoring for safety purposes and one dedicated to the automatic processing and custom visualization of exhaled breath parameters. it receives real-time data from the monitoring every 15 minutes, processes the new data, and stores them in the digital medical record of the patient. the platform uses the data to create hourly, daily, weekly and monthly statistics regarding the concentration of exhaled tvoc. these statistical data, each time they are re-processed, are sent back to the microcontroller, which stores them on the dedicated removable memory support. a block diagram of the described process is reported in figure 2.

2.1. smart personal platform

a key factor in improving the reliability of the detection of health risk situations, starting from the values of the parameters detected in the breath, is the knowledge of the history of the patient, i.e., of the typical values in known operating conditions. this, together with the knowledge of a reference range determined by the personal physician and fitted to the patient, allows a prompt identification of any deviations from standard conditions. therefore, as said above, in order to customize the monitoring system, it becomes essential to integrate memory devices to save personal information useful for assessing the state of breath, the history of previous evaluations and the reference range determined by the specialist doctor. in order to implement automated access to this information, it was decided to implement two transducer electronic data sheets (teds), which are intended to allow access in a standardized format, according to the indications contained in the ieee 1451 standard, thus facilitating the automatic configuration of the system. this reduces configuration time, allows metrological information to be integrated into data processing, reduces the possibility of errors related to manual entry and improves traceability by linking all meta-information to the specific user. teds can be loaded into the system at start-up or on request. in the proposed iot face mask, the presence of this standard data structure allows historical information to be accessed in an easy way and the monitoring system to be fitted to the specific user. the implemented smart personal platform, synchronized with the software application, can access two teds where, as seen in figure 3, personal data (histo) and sensor-related measurement uncertainty (metro) information are stored and accessible.

figure 2. data flow block diagram illustrating the flow of information in the system.
figure 3. parameters stored in histo and metro.

histo allows the system to adapt the processing algorithm to the specific patient being monitored; it must be patient-specific and updated whenever a new assessment is made. before applying the algorithm for the evaluation of breath, the platform accesses the memory to acquire the statistics of some predefined intervals (mean and standard deviation) and the reference range defined by the specialist doctor.
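as an illustration of the two data structures, the following sketch mirrors the parameters of figure 3 in python dataclasses; the field names, types and default values are assumptions based on the description given here, and the actual ieee 1451 binary teds layout is not reproduced.

```python
from dataclasses import dataclass, field

@dataclass
class HistoTeds:
    """user-related teds: personal data and monitoring history."""
    user_id: str
    age: int
    sex: str
    past_diseases: list = field(default_factory=list)   # conditions affecting breath
    avg_hours_of_use: float = 0.0
    # (mean, std dev) statistics over hourly/daily/weekly/monthly windows
    stats: dict = field(default_factory=dict)
    # normal tvoc range in ppb, assessed by the personal physician (assumed values)
    tvoc_reference_range: tuple = (50.0, 150.0)

@dataclass
class MetroTeds:
    """sensor-related teds: metrological characteristics of the system."""
    system_id: str
    co2_accuracy: str = "±(40 ppm + 5 % of reading)"    # from table 1
    tvoc_accuracy: str = "±15 % of reading"             # from table 1
    last_calibration: str = "2023-02-28"                # assumed iso date
    calibration_interval_days: int = 365                # assumed interval
```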
using non-volatile memory offers a simple solution to store patient data and to tailor the assessment made by the algorithm, by providing a comparison baseline for each new assessment. personal information such as user id, age, sex, statistical data calculated on previous time frames (monthly, weekly, daily or hourly), average percentage of hours of use, presence of past specific diseases that affect breath, and the normal reference range for tvoc concentration in breath (assessed by the doctor) is all stored in histo. using the stored information, the system facilitates the interpretation of the measured data. in this way, abnormal breath characteristics or patient-specific pathologies are considered during the evaluation of the acquired data, reducing the occurrence of false positive detections and helping the specialist to have a complete picture of the available information.

metro contains information related to the measurement uncertainty of the sensors, because data reliability is of primary importance when dealing with issues related to personal health and safety. the measurement system must be as accurate and reliable as possible. to achieve this objective, it is always necessary to consider the calibration intervals, after which the metrological characteristics of the sensor are no longer guaranteed and a new calibration must be carried out. once a day, the system evaluates whether the time suggested for the next calibration has elapsed and, if necessary, informs the user of the need for a technical intervention. the system will continue to operate, but the presence of the alert will be an element to be evaluated in the presence of abnormal values, as they could be linked to a drift in sensor performance.

all data stored in the teds are accessible from the data analysis platform which, by combining that information, performs a tailored processing and is able to determine whether the average tvoc monitored in exhaled breath, over pre-set time windows, is inside or outside the reference range. furthermore, the platform allows the user to plot a graphical representation of the raw data and of the trend line, obtained by applying regression techniques. every time the platform receives new data from the iot mask, it stores them in a personal medical record, which can be accessed by the specialist. the platform algorithm, whose data flow is shown in figure 4, processes each new received value, updating the hourly, daily, weekly and monthly statistics. after that, it verifies whether the average measured tvoc in these time intervals is within the reference range indicated by the specialist doctor and stored in the histo memory and, if this is not the case, turns on a led on the platform, specific for each time interval, to advise the user that the measured values are out of range. the updated statistics, once calculated, are sent back to the iot mask to be overwritten in the histo memory. the platform allows the user and the specialist doctor, who has full access to the platform, to plot raw measurement data over a selectable time interval and to trace a regression line in order to show the trend over time. furthermore, the platform can show on the graph the reference band indicated by the doctor and the average band within which the measured data of the patient fall.

figure 4. smart personal platform algorithm.
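a minimal sketch of the periodic comparison step of figure 4 follows, assuming the history is kept as timestamped 15-minute averages; the function and variable names are illustrative, not the platform's actual api.

```python
import datetime
import statistics

SPANS = {
    "hourly": datetime.timedelta(hours=1),
    "daily": datetime.timedelta(days=1),
    "weekly": datetime.timedelta(weeks=1),
    "monthly": datetime.timedelta(days=30),
}

def update_and_check(history, new_value, reference_range):
    """append a new 15-minute tvoc average, then re-check every time window.

    history is a list of (timestamp, value) pairs; reference_range is the
    (low, high) band in ppb defined by the physician and stored in histo.
    returns the windows whose average is out of range (one led each).
    """
    now = datetime.datetime.now()
    history.append((now, new_value))
    low, high = reference_range
    out_of_range = []
    for label, span in SPANS.items():
        window = [v for t, v in history if now - t <= span]
        avg = statistics.fmean(window)
        if not low <= avg <= high:
            out_of_range.append((label, avg))
    return out_of_range
```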
the latter is defined as the average measured value for the selected interval plus or minus the corresponding standard deviation multiplied by the coverage factor, which is selected by the user through a simplified label (low, medium, high). as shown in figure 5, “low” corresponds to a coverage factor of one, which identifies an interval around the average value within which a new measured datum will fall with approximately 68.3 % probability; “medium” corresponds to a coverage factor of two, equivalent to 95.4 % probability; and “high” corresponds to a coverage factor of three, which identifies an interval within which 99.7 % of the measured data will fall, in the absence of anomalies. the user, normally a doctor specialized in respiratory diseases, can obtain different information by varying these parameters: based on the choice of the coverage factor, any positioning outside the so-called average range will have a different weight. the graphical visualization of the data thus lets the doctor see whether the measurements in the visualization interval comply with the reference range, whether there is a drift over time and what the slope of the regression line is, and finally determine the number of occurrences in which the measured value falls out of the average band (considering different coverage factors). the platform is also accessible to the patient, who can see his/her monitoring data and eventual warnings advising a medical check.

3. results

to test the proposed monitoring approach, a first prototype of the measurement system described in section 2 has been realized and tested with ffp2 face masks. the prototype, shown in figure 6, is still a preliminary version, and its aim is only to test the functionality of the proposed method. the co2 and tvoc sensors, whose characteristics are reported in table 1, have been integrated into the face mask through a small cut in the cloth. after introducing the sensors, the cut was stitched up and sealed with hot glue to restore the original insulation. the sensors are externally wired to a ni rio acquisition board, which acquires the measurement data and implements, on board, the entire monitoring and control station described in section 2. the smart personal platform, described in section 2.1, has been realized in labview and is reachable through the web by accessing the remote application web page, as shown in figure 7. all the monitoring data are also loaded to a personal ftp folder, which can be remotely accessed by both the user and the personal physician. as can be seen in figure 7, the user interface of the web application is divided into three panels. the first panel reports the personal information of the user, taken from histo and metro; the second panel shows a series of indicators that warn the user if the average tvoc measured in the last 60 minutes, 24 hours, 7 days or the last month is above the tolerance band defined by the physician for the specific user. furthermore, it allows the user and the physician to graphically show the tvoc measured points over a selectable time interval and to trace a regression line implementing a best linear fit function.

figure 5. impact of coverage factor on tolerance band around mean value.
figure 6. prototype of the iot face mask.

table 1. sensor characteristics.
sensor | range | accuracy
co2 (ppm) | 0 – 40000 | ± (40 + 5 % of reading)
voc (ppb) | 0 – 60000 | ± 15 % of reading

figure 7. graphical interface of the personal smart platform application.
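returning to the coverage factor of section 2.1, the low/medium/high selector reduces to a factor k applied to the standard deviation; a minimal sketch, with hypothetical sample values:

```python
import statistics

# user-facing labels mapped onto the coverage factor k of figure 5
COVERAGE_FACTOR = {"low": 1, "medium": 2, "high": 3}   # 68.3 %, 95.4 %, 99.7 %

def average_band(values, level="medium"):
    """tolerance band around the mean: mean +/- k * standard deviation."""
    k = COVERAGE_FACTOR[level]
    mean = statistics.fmean(values)
    sd = statistics.stdev(values)
    return mean - k * sd, mean + k * sd

# hypothetical daily tvoc averages in ppb
print(average_band([85.0, 82.0, 100.0, 115.0, 102.0], "low"))
```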
the information so obtained, in particular the slope of the regression line, can give the physician important indications on how the exhaled tvoc has changed over the selected time interval. the same graph can also show both the limits of the reference range defined by the doctor and the tolerance band around the average measured value, as defined in section 2.1. in this way, the doctor observing the data can easily assess whether the parameters are outside the normal range or the usual range and, if so, quantify the extent of the deviation. furthermore, in this last case, he/she can easily assess whether it is an isolated phenomenon or part of a process to be observed more carefully. a third panel is dedicated to the monitoring of air quality levels for safety purposes: it verifies whether the monitored values of co2 and tvoc are higher than the reference thresholds established by the regulations and, in that case, warns the user of the problem, suggesting, if possible, going outdoors and removing the mask.

ten specimens of the described prototype have been realized, and the system was tested on 10 healthy volunteers, 5 males and 5 females, aged between 23 and 35 years, over a period of one month. the monitoring was carried out inside the electric and electronic measurement laboratory of the university “mediterranea” of reggio calabria, and each day included a monitoring interval of 4 hours. since the face masks are approved for a use of 4-6 hours, at the end of each day they were substituted with new ones. the results obtained from the monitoring campaign over the period of one month are summarized in table 2, which reports, for each of the 10 volunteers, the average measured co2 and tvoc values and their standard deviations. as can be seen in table 2, the average value of co2 measured inside the face masks is, in all cases, not far from the safety thresholds defined by the standard eu regulations, which are 15000 ppm for an exposure time of 15 minutes and 5000 ppm for an exposure time of 8 h. during the monitoring campaign the 15-minute threshold was never reached, and the 8-h threshold could not be considered, since the monitoring interval was 4 hours per day. anyhow, considering a conservative threshold of 10000 ppm for a period of 4 hours, in almost all cases the system turned on the low-air-quality warning indicator after this time of use. the co2 measurements also present a high standard deviation which, as can be seen in figure 8, is due to a drift in co2 concentration inside the mask over the time of use, mainly caused by the deterioration of the face mask. as for the tvoc measurements, no situations of particular relevance were recorded, and the monitored data were almost constant over time. in only one case, a daily variation of exhaled tvoc deviated from the mean value recorded up to that moment by more than three times the standard deviation. however, as can be seen in figure 9, the phenomenon was episodic and did not impact the slope of the regression line, which is 0.11 ppb/day. in this case, since the phenomenon is episodic and there is no incremental trend in exhaled tvoc, it can be traced back to temporary situations or alimentary reasons.
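the trend analysis of figure 9 amounts to a best linear fit plus a 3-sigma test against the running statistics; the sketch below reproduces it on synthetic data (the daily values are hypothetical, with one injected spike):

```python
import numpy as np

rng = np.random.default_rng(7)
days = np.arange(30)                      # one month of daily tvoc averages
tvoc = 82.0 + rng.normal(0.0, 5.0, 30)    # hypothetical baseline in ppb
tvoc[17] += 150.0                         # one injected episodic spike

# best linear fit, as implemented on the platform; slope in ppb/day
slope, intercept = np.polyfit(days, tvoc, 1)

# episodic deviation: a daily value deviating from the mean recorded up to
# that moment by more than three times the standard deviation
for i in range(2, len(tvoc)):
    m, s = tvoc[:i].mean(), tvoc[:i].std(ddof=1)
    if abs(tvoc[i] - m) > 3.0 * s:
        print(f"day {i}: episodic deviation, {tvoc[i]:.0f} ppb")

print(f"regression slope: {slope:.2f} ppb/day")
```

an isolated spike barely moves the fitted slope, which is why the episodic case observed in the trial did not trigger concern about a trend.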
to prove that the algorithm is able to detect a deviation over the time periods defined above, a montecarlo simulation was performed, generating 1000 possible random tvoc paths. the simulation was set to start from a tvoc value of 115 ppb, with the increment between each step chosen randomly from a gaussian distribution centred at zero and with a standard deviation of 10 ppb. the reference range was set so that the algorithm detects a potential anomaly if the average value of the first hour, day, week or month exceeds the average reference value for the same time interval by more than 3 times the associated standard deviation. the results obtained are summarized in table 3. considering one case where the simulated exhaled tvoc over a one-month period goes out of range, figure 10 shows how the tvoc trend deviates over time, as highlighted by the regression line, which in this case has a slope of 12.1 ppb/day. if such a case arose, the personal physician, after a medical check, could determine whether the observed trend is indicative of a possible issue or not.

figure 8. co2 trend over 4 hours of use.

table 2. results obtained from the one-month trial.
sex | age | average co2 (ppm) | co2 stddev (ppm) | average tvoc (ppb) | tvoc stddev (ppb)
f | 23 | 9599 | 1156 | 85 | 22
f | 23 | 9780 | 1132 | 82 | 23
m | 24 | 10165 | 1125 | 100 | 23
m | 23 | 9130 | 1210 | 115 | 43
f | 24 | 10243 | 1154 | 102 | 23
m | 32 | 10341 | 1140 | 115 | 19
m | 31 | 9136 | 1132 | 82 | 20
f | 35 | 9352 | 1208 | 82 | 21
f | 25 | 9196 | 1176 | 81 | 20
m | 28 | 9656 | 1147 | 81 | 21

table 3. results of the algorithm test using montecarlo simulations.
time interval | number of tvoc paths | detections above threshold
1st hour | 1000 | 0
1st day | 1000 | 3
1st week | 1000 | 158
1st month | 1000 | 277
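the montecarlo test of table 3 can be sketched in a few lines under the settings given above; since the per-interval reference mean and standard deviation are not reported here, the sketch estimates them from the simulated ensemble itself (and assumes one value every 15 minutes), so the detection counts will not reproduce table 3 exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

START_PPB = 115.0    # starting tvoc value of each simulated path
STEP_STD = 10.0      # std dev of the gaussian increment between steps
N_PATHS = 1000
# steps per window, assuming one value every 15 minutes
WINDOWS = {"1st hour": 4, "1st day": 96, "1st week": 672, "1st month": 2880}

steps = rng.normal(0.0, STEP_STD, size=(N_PATHS, WINDOWS["1st month"]))
paths = START_PPB + np.cumsum(steps, axis=1)     # 1000 random tvoc paths

for label, n in WINDOWS.items():
    window_avg = paths[:, :n].mean(axis=1)
    # reference taken as the nominal start value plus three ensemble
    # standard deviations of the window average (assumed, not from the paper)
    threshold = START_PPB + 3.0 * window_avg.std()
    print(label, int((window_avg > threshold).sum()), "paths above threshold")
```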
4. conclusions

in conclusion, this work presents an innovative iot measurement system integrated into ppes to monitor the air quality in the microenvironment inside them. the proposed system aims to provide an easy-to-use tool that signals the deterioration of air quality inside the mask in the presence of a high risk of exposure to two specific pollutants, co2 and tvoc. additionally, the system can be used to monitor the user's health, as the voc concentration in exhaled air may be related to various health issues. the iot mask becomes a highly personalized health-checking support tool, allowing the specialist doctor and the user to see any warnings and to visualize both raw data and trends over selectable time intervals. to account for the variation of the normal range among individuals, a standard-format file containing personal information is used, and the sensors' calibration status is continuously checked to ensure the reliability of the collected data. ten prototypes were implemented using ni rio acquisition boards and tested on 10 healthy volunteers for one month. the proposed algorithm's capability of detecting anomalies in 1000 random tvoc paths generated using montecarlo simulations was also evaluated. the results show that after one hour no detection above threshold was reported, while after one day three detections above threshold were reported; after one week, 158 detections, and after one month, 277 detections above threshold were reported. the iot mask proposed in this work is innovative compared to the state of the art, since it becomes a personalized diagnostic support tool that can help obtain early diagnoses for disorders still difficult to identify through traditional protocols. further developments may concern the engineering of the prototype to make it cheaper, lighter, smaller, and easily transportable from the old mask to the new one. with the current level of technological advances in semiconductors and pcb printing, the proposed monitoring system can be implemented using thin-film wearable and flexible sensors and chips that can be easily integrated within the mask.

figure 9. tvoc measurement data and regression line over one month for a 23-year-old male volunteer.
figure 10. a tvoc path obtained from montecarlo simulation.

references
[1] e. l. anderson, p. turnham, j. r. griffin, c. c. clarke, consideration of the aerosol transmission for covid-19 and public health, risk analysis, 40: 902-907. doi: 10.1111/risa.13500
[2] n. wilson, s. corbett, e. tovey, airborne transmission of covid-19, bmj 2020; 370:m3206. doi: 10.1136/bmj.m3206
[3] e. mehraeen, m. a. salehi, f. behnezhad, h. r. moghaddam, s. a. seyedalinaghi, transmission modes of covid-19: a systematic review, infectious disorders - drug targets (formerly current drug targets - infectious disorders) 21.6 (2021): 27-34. doi: 10.2174/1871526520666201116095934
[4] j. schöley, j. m. aburto, i. kashnitsky, m. s. kniffka, l. zhang, h. jaadla, j. b. dowd, r. kashyap, life expectancy changes since covid-19, nat hum behav 6, 1649-1659 (2022). doi: 10.1038/s41562-022-01450-3
[5] k. chu, e. a. akl, s. duda, k. solo, s. yaacoub, h. j. schünemann, physical distancing, face masks, and eye protection to prevent person-to-person transmission of sars-cov-2 and covid-19: a systematic review and meta-analysis, the lancet 395.10242 (2020): 1973-1987. doi: 10.1016/s0140-6736(20)31142-9
[6] s. m. bartsch, k. j. o'shea, k. l. chin, u. strych, m. c. ferguson, m. e. bottazzi, p. t. wedlock, s. n. cox, s. s. siegmund, p. j. hotez, b. y. lee, maintaining face mask use before and after achieving different covid-19 vaccination coverage levels: a modelling study, the lancet public health 7.4 (2022): e356-e365. doi: 10.1016/s2468-2667(22)00040-8
[7] v. palco, g. fulco, c. de capua, f. ruffa, m. lugarà, iot and iaq monitoring systems for healthiness of dwelling, 2022 ieee international workshop on metrology for living environment (metroliven), cosenza, italy, 25-27 may 2022, pp. 105-109. doi: 10.1109/metrolivenv54405.2022.9826946
[8] c. de capua, g. fulco, m. lugarà, f. ruffa, an improvement strategy for indoor air quality monitoring systems, sensors 2023, 23, 3999. doi: 10.3390/s23083999
[9] i. mujan, d. licina, m. kljajic, a. culic, a. s. anđelkovic, development of indoor environmental quality index using a low-cost monitoring platform,
journal of cleaner production, vol. 312, 2021. doi: 10.1016/j.jclepro.2021.127846
[10] k. kisielinski, p. giboni, a. prescher, b. klosterhalfen, d. graessel, s. funken, o. kempski, o. hirsch, is a mask that covers the mouth and nose free from undesirable side effects in everyday use and free of potential hazards?, int. j. environ. res. public health 18, 4344 (2021). doi: 10.3390/ijerph18084344
[11] r. j. roberge, a. coca, w. j. williams, j. b. powell, a. j. palmiero, physiological impact of the n95 filtering facepiece respirator on healthcare workers, respir. care 55, 569-577 (2010).
[12] d. sofronova, r. a. angelova, y. sofronov, m. ivanova, measuring the parameters of the microenvironment under protective face masks, 2022 xxxii int. scientific symposium metrology and metrology assurance (mma), sozopol, bulgaria, 2022, pp. 1-6. doi: 10.1109/mma55579.2022.9993318
[13] m. s. m. rhee, c. d. lindquist, m. t. silvestrini, a. c. chan, j. j. y. ong, v. k. sharma, carbon dioxide increases with face masks but remains below short-term niosh limits, bmc infect dis 21, 354 (2021). doi: 10.1186/s12879-021-06056-0
[14] c. matuschek, f. moll, h. fangerau (+ another 23 authors), face masks: benefits and risks during the covid-19 crisis, eur j med res 25, 32 (2020). doi: 10.1186/s40001-020-00430-5
[15] k. a. shaw, g. a. zello, s. j. butcher, j. bum ko, l. bertrand, p. d. chilibeck, the impact of face masks on performance and physiological outcomes during exercise: a systematic review and meta-analysis, applied physiology, nutrition, and metabolism 46(7): 693-703. doi: 10.1139/apnm-2021-0143
[16] c. l. smith, j. l. whitelaw, b. davies, carbon dioxide rebreathing in respiratory protective devices: influence of speech and work rate in full-face masks, ergonomics 56, 781-790 (2013). doi: 10.1080/00140139.2013.777128
[17] y. li, h. tokura, y. p. guo, a. s. w. wong, effects of wearing n95 and surgical facemasks on heart rate, thermal stress and subjective sensations, int. arch. occup. environ. health 78, 2005, pp. 501-509. doi: 10.1007/s00420-004-0584-4
[18] d. epstein, a. korytny, y. isenberg, e. marcusohn, r. zukermann, b. bishop, s. minha, a. raz, a. miller, return to training in the covid-19 era: the physiological effects of face masks during exercise, scand. j. med. sci. sports 31, 2021, pp. 70-75. doi: 10.1111/sms.13832
[19] s. fikenzer, t. uhe, d. lavall, u. rudolph, r. falz, m. busse, p. hepp, u. laufs, effects of surgical and ffp2/n95 face masks on cardiopulmonary exercise capacity, clin. res. cardiol. 109, 2020, pp. 1522-1530. doi: 10.1007/s00392-020-01704-y
[20] p. escobedo, m. d. fernández-ramos, n. lópez-ruiz, o. moyano-rodríguez, a. martínez-olmos, i. m. pérez de vargas-sansalvador, m. a. carvajal, l. f. capitán-vallvey, a. j. palma, smart facemask for wireless co2 monitoring, nat commun 13, 72 (2022). doi: 10.1038/s41467-021-27733-3
[21] v. ratnayake mudiyanselage, k. lee, a. hassani, integration of iot sensors to determine life expectancy of face masks, sensors 2022, 22, 9463. doi: 10.3390/s22239463
[22] g. iadarola, a. poli, s. spinsante, compressed sensing of skin conductance level for iot-based wearable sensors, 2022 ieee int. instrumentation and measurement technology conf. (i2mtc), ottawa, on, canada, 16-19 may 2022, pp. 1-6. doi: 10.1109/i2mtc48687.2022.9806516
[23] g. iadarola, s. meletani, f. di nardo, s.
spinsante, a new method for semg envelope detection from reduced measurements, 2022 ieee international symposium on medical measurements and applications (memea), messina, italy, 22-24 june 2022, pp. 1-6. doi: 10.1109/memea54994.2022.9856436
[24] g. iadarola, a. poli, s. spinsante, reconstruction of galvanic skin response peaks via sparse representation, 2021 ieee int. instrumentation and measurement technology conf. (i2mtc), glasgow, united kingdom, 17-20 may 2021, pp. 1-6. doi: 10.1109/i2mtc50364.2021.9459905
[25] wei gao, s. emaminejad, hnin yin yin nyein (+ another 11 authors), fully integrated wearable sensor arrays for multiplexed in situ perspiration analysis, nature 529.7587 (2016): 509-514. doi: 10.1038/nature16521
[26] zhihua zhu, tao liu, guangyi li, tong li, yoshio inoue, wearable sensor systems for infants, sensors 15.2 (2015): 3721-3749. doi: 10.3390/s150203721
[27] yancong qiao, xiaoshi li, th. hirtz (+ another 11 authors), graphene-based wearable sensors, nanoscale 11.41 (2019): 18923-18945. doi: 10.1039/c9nr05532k
[28] a. w. boots, j. j. b. n. van berkel, j. w. dallinga, a. smolinska, e. f. wouters, f. j. van schooten, the versatile use of exhaled volatile organic compounds in human health and disease, j. breath res. 6 (2), 027108 (2012). doi: 10.1088/1752-7155/6/2/027108
[29] a. mazzatenta, m. pokorski, c. di giulio, volatile organic compounds (vocs) in exhaled breath as a marker of hypoxia in multiple chemical sensitivity, physiol rep. 2021 sep;9(18):e15034. doi: 10.14814/phy2.15034
[30] t. oktavian, i. n. chozin, s. d. pratiwi, analysis of volatile organic compounds in exhaled breath of covid-19 patients, european respiratory journal sep 2022, 60 (suppl 66) 2691. doi: 10.1183/13993003.congress-2022.2691
[31] sagnik das, mrinal pal, non-invasive monitoring of human health by exhaled breath analysis: a comprehensive review, journal of the electrochemical society 167.3 (2020): 037562. doi: 10.1149/1945-7111/ab67a6
[32] t. a. popov, human exhaled breath analysis, ann. allergy asthma immunol. 2011 jun;106(6):451-6. doi: 10.1016/j.anai.2011.02.016
[33] p. daponte, l. de vito, g. iadarola, f. picariello, ecg monitoring based on dynamic compressed sensing of multi-lead signals, sensors 2021, 21(21), 7003. doi: 10.3390/s21217003
[34] g. iadarola, s. spinsante, l. de vito, f. lamonaca, a support for signal compression in living environments: the analog-to-information converter, 2022 ieee int. workshop on metrology for living environment (metroliven), cosenza, italy, 25-27 may 2022, pp. 292-297. doi: 10.1109/metrolivenv54405.2022.9826923
[35] r. morello, f. ruffa, i. jablonski, l. fabbiano, c. de capua, an iot based ecg system to diagnose cardiac pathologies for healthcare applications in smart cities, measurement 190 (2022): 110685. doi: 10.1016/j.measurement.2021.110685
acta imeko issn: 2221-870x september 2016, volume 5, number 2, 14-18

constraining absolute chronologies with the application of bayesian analysis

francesco maspero, emanuela sibilia, marco martini

cudam, università degli studi di milano bicocca, piazza della scienza 1, 20125 milano, italy
infn ch_net – sezione di milano-bicocca e dipartimento di scienza dei materiali, università degli studi di milano bicocca, via roberto cozzi 55, 20126 milano, italy

section: research paper
keywords: dating; statistics; bayes; chronology
citation: francesco maspero, emanuela sibilia, marco martini, constraining absolute chronologies with the application of bayesian analysis, acta imeko, vol. 5, no. 2, article 3, september 2016, identifier: imeko-acta-05 (2016)-05-03
section editors: sabrina grassini, politecnico di torino, italy; alfonso santoriello, università di salerno, italy
received march 15, 2016; in final form april 7, 2016; published september 2016
copyright: © 2016 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: francesco maspero, francesco.maspero@unimib.it

abstract: in this work the application of bayesian statistics to archaeological problems is discussed. in particular, three case studies are analyzed, each presenting complex interpretative scenarios, and the most suitable way to solve them. it is shown that the bayesian approach allows a dating to be refined in the presence of multiple data, even from different dating techniques. the bayesian approach is presented as a common language between physicists, archaeologists and statisticians to perform more accurate evaluations of stratigraphies and chronologies.

1. introduction

absolute dating has become a powerful tool in archaeology. however, the interpretation of archaeometric data can sometimes be difficult, especially in the case of non-gaussian probability distributions (e.g. radiocarbon dates). in fact, the results of scientific analyses are sometimes not univocal and must be interpreted. in the absence of an impartial instrument to discriminate data, the freedom in the choice of the right results is often left to archaeologists and historians. moreover, prior information about the analyzed samples and provenance sites is not usually taken into account in real synergy with experimental data and historical studies, giving space to subjective, and sometimes conflicting, interpretations. in order to overcome this misunderstanding, it is important to think of the scientific results as a higher concept than “numbers”, and to treat the historical data and clues as mathematical terms of an equation. great help comes from the bayesian statistical approach, a model that combines in a single formal analysis the experimental results coming from scientific analyses together with the present knowledge of an archaeological problem, in order to make inferences that can contextualize the problem in a coherent interpretation [1]-[4]. after a brief reminder of the theory related to the method, three case studies are reported and discussed. finally, the potential and advantages of this method are underlined.

2. theory

bayes' theorem states that the posterior probability of an event is proportional to the likelihood times the prior probability, or formally:

$p(a \mid b) \propto p(b \mid a)\, p(a)$ , (1)

where $p(a)$ and $p(b)$ are the probabilities of the single events and $p(b \mid a)$ is the likelihood linking the two events. bayesian statistics uses probability as a means of measuring one's strength of belief in a particular hypothesis being true [5].
in other words, the application of bayesian statistics allows the selection of the most significant data in the experimental set, rejecting those not supported by historical evidence or by a relevant likelihood. the application of this method can lead to some strange and unconventional results, which can be understood only after comprehending the bayesian “way of thinking”. a special case may help: figure 1 depicts the prior probability of an event a (blue curve, labelled r_date, standing for “radiocarbon date”), whose maximum probability spans over 200 years; the likelihood graph (red curve, labelled c_date, standing for “calendar date”, normally distributed) states that the probability that event a occurs in conjunction with an event b (such as the use of a coin discovered in the same layer as the material associated with event a) has a normal probability distribution peaked at 920 ad with an uncertainty of 20 years. combining these data produces a posterior probability curve extremely different from the prior one, but closer to the real probability distribution, taking into account all the available data (black curve).

the power of this method is well represented by the calibration of radiocarbon dates to obtain the probability density curve a scientist is accustomed to (figure 2). in radiocarbon dating, the isotopic concentration of carbon-14 is not univocally related to a single date, but has to be “calibrated” using a reference curve that links every concentration to a set of calendar ages. in this case the prior information is the existence of the radiocarbon date itself (the result of the isotopic analysis guarantees that this result is associated with at least one date on the calendar timescale); its numerical value is given in the graph title (r_date, expressed in percent of modern carbon, pmc). the likelihood parameter is represented by a mix of the uncertainty associated with the isotopic analysis (red curve, figure 2) and the probability distribution given by the intcal calibration curve (blue curve, figure 2). the latter is represented in the graph with its uncertainty, which varies with time depending on the precision and the number of measurements used to build the curve. the projection of the gaussian variable onto the calibration curve is a density curve (black curve) from which the most credible date intervals (hpd, highest posterior density regions) can be extracted. the table in the graph describes the age ranges with 1-sigma and 2-sigma probability: note that the probability shape changes radically from the initial gaussian one [6], and that the probability densities can be split into multiple peaks, each characterized by a fraction of the initial 68 % and 95 %.

figure 1. representation of a bayesian analysis.
figure 2. example of radiocarbon date calibration.
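the combination shown in figure 1 can be reproduced numerically on a calendar-date grid; the following python sketch uses illustrative gaussian curves for the prior and the likelihood (a real radiocarbon density is far from gaussian) and extracts a 68.3 % hpd region by thresholding:

```python
import numpy as np

years = np.arange(700.0, 1200.0)     # calendar-date grid (ad), 1-year step

# illustrative prior: a broad radiocarbon density spanning roughly 200 years
prior = np.exp(-0.5 * ((years - 900.0) / 100.0) ** 2)
# likelihood: calendar date peaked at 920 ad with a 20-year uncertainty
likelihood = np.exp(-0.5 * ((years - 920.0) / 20.0) ** 2)

posterior = prior * likelihood                 # bayes' theorem, eq. (1)
posterior /= posterior.sum()                   # normalize on the 1-year grid

# 68.3 % highest posterior density (hpd) region by greedy thresholding
order = np.argsort(posterior)[::-1]
mask = np.zeros(posterior.size, dtype=bool)
mass = 0.0
for i in order:
    mask[i] = True
    mass += posterior[i]
    if mass >= 0.683:
        break
print(f"68.3 % hpd: {years[mask].min():.0f}-{years[mask].max():.0f} ad")
```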
this approach is extremely useful when multiple dates and prior knowledge about a site or an artifact need to be matched. in this case, the same formal procedure is used, putting one of the collected dates as prior information and using the known connections between the stratigraphic layers or building phases as likelihood parameters.

3. materials and methods

in this work three different examples of the application of bayesian statistics are shown, each of them introducing a more complex approach to archaeological questions that can be solved with the help of prior information. each posterior probability curve is evaluated numerically through a markov chain monte carlo analysis implemented in oxcal 4.2 [6]. this is a software package developed to calibrate radiocarbon concentrations and to calculate the most probable dates associated with a single concentration. it is also possible to apply the most used bayesian models to a set of data distributions, both before and after the calibration process. the monte carlo analysis is performed during the calibration to evaluate the best probability density distribution. more precisely, the metropolis-hastings algorithm is used, since it only requires relative probability information [7]; in addition, it uses a set of proposal moves which can result both in changes to single elements of the model and in changes to the duration and timing of whole groups. this provides much faster convergence for complex models.

4. results

a first example of this approach is the dating of the site of my-son, a cluster of abandoned and partially ruined hindu temples constructed between the 4th and the 14th century ad by the kings of champa in quang nam province (central vietnam) [8]. the available materials included 3 sets of bricks from one of the latest construction phases, and some charcoal pieces embedded in a brick (figure 3a). a memorial stele commemorating the dedication of the temple (1155 ad, c_date t) gives a historical boundary. both thermoluminescence (tl) and radiocarbon dating were used when possible. the selected samples come from three different sections of the structure walls, and the archaeological question was whether all the masonries were contemporary or not. the ages obtained for the bricks allowed three main groups to be identified: one (60 %) consisted of reused material, whose dates were well before 1155 ad, the dedication year; another was surely posterior to that date; the last (g3 phase) had dates in between. the components of the first group are characterized by a density distribution completely shifted to the left of the stele boundary, and their date is univocally assessed as before the stele foundation, suggesting a probable reuse from pre-existing structures. the components of the second group are completely shifted to its right (g4 phase, figure 3a), stressing the possibility of restorations performed in later times. the main question regarding the samples of the third group was whether they were reused or purposely made for the edification of the site. unlike the calibration example, here the prior information is represented by a supposed terminus post quem given by the dated stele; this information is formalized by the equation:

$p(t_1, \dots, t_n \mid t_s) \propto \prod_{i=1}^{n} h(t_i - t_s)$ , (2)

where $h(\cdot)$ is the heaviside step function and $t_s$ is the stele date. this equation and the following ones do not take into account the uncertainty associated with the boundary events.
this is done to explain the model function; in the algorithm the whole range of probabilities is used, projecting the tails of the boundary distribution and smoothing the step-like slope of the resulting probability. figure 3b shows the same results as figure 3a, but after the application of bayesian statistics. the dark grey curves represent the posterior probability density distributions: it is clear that g3h has a large probability of being dated just before 1155 ad (stele), supported by a non-zero probability coming from the radiocarbon dating of a piece of burned wood embedded in the brick paste. the “combine” distribution is an operation that combines any number of probability distribution functions giving independent information on a parameter. the similarity in the probability distributions allows the production of all g3 bricks to be placed before 1155 ad (figure 4). a further modelling step takes into account the probable contemporaneity of production of the g3 bricks, adding a prior function: if we assume that the production period is constrained by an unknown start event ($t_a$) and an unknown finish event ($t_b$), the formalization of the model is as follows:

$p(t_1, \dots, t_n \mid t_a, t_b) \propto \prod_{i=1}^{n} h(t_i - t_a)\, h(t_b - t_i)$ . (3)

a second case regards the dating campaign of three burials in the archaeological site of sipan, in northern peru. it shows that the precision of the chronological boundaries of an event can be enhanced by combining the site stratigraphy with the whole set of dating results. during this campaign, three different tombs were dated, using both tl and radiocarbon techniques. while the stratigraphic evidence clearly stated their relative temporal sequence, the archaeological request was the refinement of the absolute dating of the warrior-priest tomb (t14). looking at the raw results, it appears that the age of all the examined materials, given the experimental uncertainty, is practically the same (range of phase t3: 215-435 ad; range of phase t14: 255-775 ad; range of phase t1-t2: 595-775 ad) (figure 5a). however, using the stratigraphy of the site as the main constraint, the most probable period of t14 construction is severely restricted (figure 5b). the prior information applied here is similar to that of the previous example, with the difference that here there are three different groups of events, with two unknown intermediate periods that represent the end of construction of one tomb and the start of construction of the next. the formal condition of this model is:

$p(\cdot) \propto \prod_{i \in \mathrm{t3}} h(t_i - t_a)\, h(t_b - t_i) \prod_{j \in \mathrm{t14}} h(t_j - t_b)\, h(t_c - t_j) \prod_{k \in \mathrm{t1,t2}} h(t_k - t_c)\, h(t_d - t_k)$ , (4)

where $t_a$, $t_b$, $t_c$ and $t_d$ are the boundaries of the grouped events. as reported in figure 5b, the prior condition of non-contemporaneity of the three phases reduces the span of t14 use by 80 %, especially involving some particularly spread radiocarbon dates (rc124). the last example regards a neolithic site in southern italy. it shows how bayesian analysis can extract extended information on a site by integrating prior hypotheses and dating. here the information needed is not the refinement of the site chronology, as the collected samples are common wares used in everyday life during the whole occupation period and cannot represent an a priori constraint; instead, it is possible to extrapolate data about the site life span, besides the beginning and end of its occupation. for this settlement, the continuity of site occupation was hypothesized.
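numerically, the phase priors of equations (3) and (4) act by multiplying each calibrated density by heaviside indicator functions and renormalizing; a minimal sketch with an illustrative gaussian density and assumed boundary values:

```python
import numpy as np

def phase_prior(density, years, t_a, t_b):
    """truncate a calibrated date density to the phase interval [t_a, t_b],
    i.e. multiply by h(t - t_a) h(t_b - t) as in eq. (3), then renormalize."""
    h = np.heaviside(years - t_a, 1.0) * np.heaviside(t_b - years, 1.0)
    post = density * h
    area = post.sum()
    return post / area if area > 0 else post

years = np.arange(900.0, 1400.0)
# illustrative gaussian stand-in for a calibrated brick date density
density = np.exp(-0.5 * ((years - 1150.0) / 60.0) ** 2)
density /= density.sum()

# assumed boundaries: phase start at 1050 ad, stele boundary at 1155 ad
constrained = phase_prior(density, years, t_a=1050.0, t_b=1155.0)
```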
after a first analysis (figure 6), it was possible to identify two sub-phases, which were further modeled to find a possible hiatus between the two periods. a further complication in the prior probability model is introduced by taking into account the presence of a gap between the end of the first event ($t_b$) and the start of the second one ($t_c$):

$p(\cdot) \propto h(t_c - t_b) \prod_{i \in p_1} h(t_i - t_a)\, h(t_b - t_i) \prod_{j \in p_2} h(t_j - t_c)\, h(t_d - t_j)$ . (5)

the resulting data are shown in figure 7: the end of the first sub-phase and the beginning of the second do not overlap, but there is a gap of about 100 years in the probability distribution curves. this can be a signal of a temporary abandonment of the site, as well as of a period of crafting decadence. it is clear that such a result could not have been obtained through a rough qualitative interpretation of the results.

5. discussion and conclusion

the described examples aim to underline the importance of a correct approach during the statistical elaboration of the results of absolute dating techniques. the results are not only numbers, but a source of linked information that can be extracted by imposing the right conditions and constraints. the right approach is not to force the obtained data into a supposed model, rejecting what doesn't fit or “doesn't sound well”; all information should be analyzed and criticized, trying to shape the historical model in a feedback process involving archaeologists, physicists and statisticians in the discussion. furthermore, the use of all the available information in the refining process of the statistical data considers the uniqueness of every site, with the great advantage of taking into account the uniqueness of the experimental evidence [9]. the potential of a bayesian approach in archaeology will reach its maximum when a tight interdisciplinary collaboration between archaeologists and statisticians comes true. this requires archaeologists to be comfortable in analyzing a situation and defining the archaeological problem in a realistic but not over-refined way. in wider terms, they need to communicate their ideas to statisticians speaking a “shared” language, and to explain the importance of their work in simple terms.

figure 3. a: unmodelled representation of radiocarbon and thermoluminescence dates on my-son samples; b: bayesian analysis of the same dates.
figure 4. logical deduction path for g3 phase dating.
figure 5. a: unmodelled representation of radiocarbon and thermoluminescence dates on sipan samples; b: bayesian analysis of the same dates.
figure 6. distribution of the radiocarbon dates on southern italy samples.
figure 7. sub-phases discrimination.

references
[1] g. alberti, the aid of bayesian radiocarbon modeling in assessing the chronology of middle bronze age sicily at the site level. a case study, journal of archaeological science: reports, 2 (2015) pp. 246-256.
[2] t. s. dye and c. e. buck, archaeological sequence diagrams and bayesian chronological models, journal of archaeological science, 63 (2015) pp. 84-93.
[3] w. d. hamilton and j. kenney, multiple bayesian modelling approaches to a suite of radiocarbon dates from ovens excavated at ysgol yr hendre, caernarfon, north wales, quaternary geochronology, 25 (2015) pp. 72-82.
[4] a. quiles, e.
aubourg, b. berthier, e. delque-količ, g. pierrat-bonnefois, m. w. dee, g. andreu-lanoë, c. bronk ramsey and c. moreau, bayesian modelling of an absolute chronology for egypt's 18th dynasty by astrophysical and radiocarbon methods, journal of archaeological science, 40 (2013) pp. 423-432.
[5] g. a. barnard and t. bayes, studies in the history of probability and statistics: ix. thomas bayes's essay towards solving a problem in the doctrine of chances, biometrika, 45 (1958) pp. 293-315.
[6] c. b. ramsey, bayesian analysis of radiocarbon dates, radiocarbon, 51 (2009) pp. 337-360.
[7] w. r. gilks, s. richardson and d. spiegelhalter, markov chain monte carlo in practice, chapman & hall, 1996.
[8] m. martini, e. sibilia, m. cucarzi and p. zolese, “absolute dating of the my son monuments”, in: champa and the archaeology of my son (vietnam), h. a., m. cucarzi and p. zolese (editors), nus press, singapore, 2009, isbn 978-9971-69-451-7, pp. 369-380.
[9] c. e. buck, w. g. cavanagh and c. d. litton, bayesian approach to interpreting archaeological data, john wiley & sons, chichester, england, 1996, isbn 0471961973.

acta imeko issn: 2221-870x july 2017, volume 6, number 2, 54-58

molten metal's reaction force measurement for pressure estimation and control system construction in press casting

ryosuke tasaki, kazuhiko terashima

department of mechanical engineering, toyohashi university of technology, japan

section: research paper
keywords: force measurement; soft-sensing; model-based estimation; casting process
citation: ryosuke tasaki, kazuhiko terashima, molten metal's reaction force measurement for pressure estimation and control system construction in press casting, acta imeko, vol. 6, no. 2, article 9, july 2017, identifier: imeko-acta-06 (2017)-02-09
section editor: min-seok kim, research institute of standards and science, korea
received june 30, 2017; in final form june 30, 2017; published july 2017
copyright: © 2017 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: ryosuke tasaki, e-mail: tasaki@me.tut.ac.jp

abstract: this paper presents a measurement system for the molten metal's reaction force and the estimation of the liquid pressure during pressing, to control the iron product quality. we have developed a new type of casting process, in which molten metal is quickly filled into casting molds by high-speed pressing. casting defects such as physical metal penetration are often caused by excess pressure. hence, we have constructed a pressure control system using a mathematical model-based off-line simulation to derive the ideal feedforward control input of pressing. however, it is difficult to accurately control the pressure under varying conditions such as liquid volume and temperature changes. also, direct pressure measurement using contact-type sensors is impossible for molten metal, because of the high temperature of the liquid, over 1400 °c. therefore, we have proposed a new pressure estimation method based on force measurement data processing. here, the exact reaction force from the molten metal must be accurately observed by a force sensor set between the upper mold and its elevating device. the viscosity coefficient can also be calculated on a real-time basis. the proposed force measurement system will realize an improved casting quality thanks to an effective feedback control system.

1. introduction

a new casting method, called sand mold press casting, has been developed by our group over recent years. in this casting process, the ladle first pours the molten metal into a lower (drag) mold. after the pouring, an upper (cope) mold is lowered to moderately press the molten metal into the cavity. this process has enabled us to enhance the production yield rate from 70 % of typical gravity casting to over 90 %, because the sprue cup and the runner are not needed in the suggested casting plan [1]. in this casting process, the molten metal can be poured precisely and quickly into the lower mold. liquid weight control during pouring with the tilting ladle has been proposed in very interesting recent studies [2], [3]. however, in the pressing part of this casting process, casting defects are often caused by high pressure inside the mold, related to the pressing velocity. a major application example of the casting method is metal impeller production, shown in figure 1. high-velocity pressing of molten metal induces product defects such as a rough surface at the bottom of the upper mold.

figure 1. press casting process and its iron product.
this type of casting defect, in which molten metal soaks through the sand particles of the greensand mold and then solidifies, is called physical metal penetration. the penetration is most likely caused by the high pressure, and it requires an extra process of surface finishing. thus, stable high-quality casting must be enabled by suppressing excess pressure in the pressing process. to maintain a shorter cycle time in production, a pressing control system considering the liquid pressure behaviour inside the mold is highly demanded. pressure control techniques have been proposed for different casting methods, especially the injection molding process with a metallic mold [4], [5]. the pressure control problem has been successfully resolved by mathematical modelling of the liquid behavior and by computing a 3d simulation analysis [6], [7]. furthermore, a feedback pressure control based on pid gain selection has been proposed for the filling process. although the pressure in the mold must be detected in order to control the process adequately using feedback control, it is difficult to measure the fluid pressure directly, because the high temperature of the molten metal, over 1400 °c, precludes the use of a pressure sensor. thus, in our previous papers [8], [9], the pressure during pressing at a lower pressing velocity was estimated by using a simple model of the molten metal's pressure based on the analysis results of computational fluid dynamics (cfd) software. a new sequential pressing control, namely a feedforward method using a novel simplified pressure model, has been reported by the authors [10], [11]. it has been shown that this method is very effective for adjusting the pressure in the mold. as the main work in this paper, we have undertaken several pressing experiments using viscous water to validate a force measurement algorithm for liquid pressure estimation inside the mold, and to confirm the modified pressure control model and a feed-forward control method during pressing.
these results indicate that measuring the pressure during pressing is necessary to achieve quality casting under varying conditions such as changes in the poured liquid volume and temperature. therefore, we have proposed an indirect pressure measurement and estimation method using information about the reaction force from the molten metal during pressing. in the final section, the proposed pressure estimation method is applied to a water pressing experiment using an acrylic mold shaped as a simple injection type. the effectiveness for the actual flow situation, i.e., viscous flow depending on the varying molten metal temperature, is discussed.

2. pressing process in press casting
2.1. servo drive pressing machine
the filling molds consist of greensand in actual casting production. the convex part of the upper mold has several passages called the overflow area. molten metal that exceeds the product volume flows into the overflow areas during pressing. these areas are the only parts of the casting plan that provide the effect of liquid static pressure. the overflow part is designed as long and narrow channels. when fluid flows into such an area, high pressurization causes casting defects. therefore, it is important to control the pressing velocity in order to suppress the rapid increase in pressure that occurs in high-speed pressing. the upper mold moves up and down by a lifting press cylinder driven by a servomotor, as shown in figure 2. the position of the upper mold can be continuously measured by an encoder mounted on the servo cylinder, which controls the molten metal pressure by using the controlled velocity input. the force sensor for measuring the molten metal's reaction force is mounted between the bottom part of the servo cylinder and the top of the upper metal molding box covering the greensand.

figure 2. press casting equipment.

2.2. metal production experiments
casting production tests by press casting have been performed using the previously proposed pressing control. figure 3 shows an overview of the cast products pressed by the sequentially switched velocity pattern sv. these two products were cast under the experimental condition of higher temperature ht (1400 °c), shown on the left side of figure 3, while the right-hand side shows the results tested with lower temperature lt (1350 °c). from these photographs, a better product for sv-ht is clearly verified, because the switched velocity is designed for the higher temperature condition (1400 °c). on the other hand, the sv-lt case has a tiny penetration part, because a higher pressure is generated with the higher viscosity related to the lower temperature. this comparative validation at different temperatures indicates that, for quality casting, an adjustable pressing control depending on the molten metal's temperature drop during pressing is desired. it can be well performed by using both a model-based design of the pressing velocity and pressure control with a known parameter related to the liquid viscosity.

figure 3. casting products in case of different temperatures.

3. reaction force measurement for pressure estimation
3.1. reaction force measurement for pressure
in actual production, the poured volume and metal temperature are subject to a repeating error for each pouring. the optimal high-speed pressing process is achieved by sensing the volume error and temperature change from the measured liquid pressure inside the mold. therefore, a real-time pressure estimation algorithm and its confirmative experiments are required for the construction of a new control system for press casting.
the installation of a contact-type pressure sensor to measure the molten metal pressure is quite difficult due to the high liquid temperature of the molten metal. therefore, an indirect sensor measuring the reaction force is implemented above the upper mold. by using the reaction force from the load cell, a pressure estimation of the liquid inside the mold is validated. water pressing experiments have been carried out to directly measure the pressure with a contact-type sensor, to observe the flow state and to estimate the pressure from the load cell. an illustration of the experimental equipment and the sensing/control system is shown in figure 4.

figure 4. block diagram of pressing control system.

3.2. pressure estimation considering frictional resistance cancellation
the pressing motion causes frictional resistance between the upper and lower molds, as shown in figure 4. in this section, we propose a pressure estimation method using the reaction force measured by the load cell. to remove the effect of friction, which acts as a disturbance on an accurate pressure estimation, an equation for the friction force during pressing is derived. the measurable total force f [n] includes the effects of the mold friction, the inertia of the upper mold and gravity; the pressure on the base area, p [pa], then follows from the force balance

p = \frac{f - m g - m \ddot{z} - f_{\mathrm{fric}}(\mu, l, \dot{z})}{a} , (1)

where m [kg] is the mass of the upper mold, g [m/s²] the gravitational acceleration, z [m] the upper mold position, μ [-] the wall frictional coefficient entering the friction force f_fric together with l [m], the girth of the frictional part, and a [m²] is the area of the upper mold base. here μ strongly depends on the pressing velocity and also switches between kinetic and static friction. in (2), μ_d and μ_s are the coefficients of kinetic and static friction, respectively, and ż [m/s] is the pressing velocity related to the frictional force:

\mu(\dot{z}) = \mu_{\mathrm{s}} \ \text{for} \ \dot{z} = 0, \qquad \mu(\dot{z}) = \mu_{\mathrm{d}} \ \text{for} \ \dot{z} \neq 0 . (2)

to validate (1) and (2), water pressing experiments have been performed. a cylindrically shaped injection-type mold is selected, with a diameter of 0.10 m at the bottom side and a smaller diameter of 0.02 m at the upper side. water inside the mold flows upward during pressing. the velocity reference input and the pressure behaviour are shown in figure 5. for each pressing, the liquid viscosity is changed as the experimental condition for the pressure estimation. as shown in figure 5, the viscosity is adjusted to four values of (a) 0.001, (b) 0.500, (c) 1.000 and (d) 5.000 cp by adding cmc (carboxymethylcellulose) to the water. the dashed, solid gray and solid black lines show the actual pressure measured by the contact-type pressure sensor (ap-10s, keyence corp.), the force measurement data from the load cell (tclz-500ns001, tokyo sokki kenkyujo co., ltd.) and the pressure estimated by (1) and (2), respectively. these results make clear that the actual measured and the estimated pressure behaviours match well for all values of the liquid viscosity.

figure 5. pressure estimation results for different viscous liquids: (a) 0.001 cp, (b) 0.500 cp, (c) 1.000 cp, (d) 5.000 cp.
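to make the friction-compensated estimation concrete, the following python sketch evaluates (1) and (2) on sampled force and position signals. it is a minimal illustration under stated assumptions, not the authors' implementation: the mass m, base area a and the friction parameters are hypothetical values, and the friction force is modelled as mu(zdot) * f_n with an assumed calibrated load constant f_n, since the exact form of the friction term is not recoverable from the text.

    import numpy as np

    def mu_fric(zdot, mu_d=0.3, mu_s=0.4):
        # switching friction coefficient of (2): static at rest, kinetic in motion
        return mu_s if abs(zdot) < 1e-9 else mu_d

    def estimate_pressure(f, z, dt, m=50.0, a=np.pi * 0.05 ** 2, f_n=100.0):
        # friction-compensated pressure estimate following the force balance (1);
        # f: measured total force [n], z: upper mold position [m], dt: sample time [s];
        # m, a and f_n are hypothetical plant parameters, f_n being an assumed
        # calibrated load that, multiplied by mu, gives the friction force
        zdot = np.gradient(z, dt)       # pressing velocity
        zddot = np.gradient(zdot, dt)   # upper mold acceleration
        g = 9.81                        # gravitational acceleration [m/s^2]
        f_fric = np.array([mu_fric(v) for v in zdot]) * f_n
        return (f - m * g - m * zddot - f_fric) / a

fed with a logged force/position trace, the returned array would play the role of the estimated pressure curves (solid black lines) of figure 5.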
3.3. estimation of unmeasurable liquid viscosity
cfd based on an exact model such as the navier-stokes equations is very effective for offline analysis of fluid behaviour and is useful for predicting the behaviour and optimizing a casting plan [7], [8]. however, it is not sufficient for the design of a pressing control system, because the complex and exact model calculations take too much time. therefore, the construction of a mathematical pressure model is desired in order to realize an efficient press casting control system by estimating liquid properties such as the viscosity, which depends on the varying metal temperature and volume. the mathematical model can be derived from the conservation of energy along a continuous streamline from the bottom to the surface of the liquid. the pressure model is considered only for unstationary vertical flow without air entrainment. figure 6 shows the rising flow of the molten metal. the unstationary bernoulli theorem for incompressible fluids, applied between the bottom surface of the upper mold and the liquid surface at position e_h(t), gives the following pressure model equation:

p_b = \rho \left[ \frac{1}{2} \left\{ \left( \frac{a_m(e_h)}{a_s(e_h)} \right)^2 - 1 \right\} \dot{z}^2 + g\, e_h(t) + \lambda(t)\, \frac{l}{d}\, \frac{\dot{z}^2}{2} \right] , (3)

where ρ [kg/m³] is the fluid density and g [m/s²] the acceleration of gravity. the fluid surface position e_h [m] and its velocity ė_h [m/s] at the free surface a_s [m²] relate to the mold cross-sectional area a_m [m²] and the pressing velocity ż. furthermore, λ is the wall frictional coefficient depending on the liquid temperature t [k] of the upward flow, and l [m] and d [m] are the vertical flow length and diameter, respectively. to properly control the pressure behaviour inside the casting mold, the unknown temperature-dependent parameter λ can be identified from the mathematical model (3) given above and the estimated data mentioned in the previous section (see [11] for more information about the mathematical model construction). to confirm the proposed viscosity identification using the mathematical pressure model, several experiments using a simple injection-type mold have been carried out. in the experimental results shown in figure 7, the estimated pressure behaviours (solid black lines) are reshaped with the parameter λ identified from the reference output (λ = 0) related to the mathematical model. here, the trapezoidal pressing velocity input is set to the same reference as in figure 5. by fitting the model-based time behaviour to the pressure estimated by force sensing, the unknown parameter λ has been uniquely derived for each different viscous liquid. the relationship between the viscosity and λ is almost linear, as shown in figure 8.

figure 6. illustration of filling flow state shown in cross-sectional view.
figure 7. pressure estimation with identified λ.
figure 8. identification parameter λ.

4. model predictive control approach
the pressure control must satisfy the pressure constraint condition for high-quality sand mold casting. the mpc method first predicts the future output, calculates the optimum input for keeping the constraint, and then realizes a good performance of the real output. in our mpc approach, we have built a predictive model control system with high-speed calculation, so it is effective for real-time feedback control. equation (4) gives the response of the system, which should be equal to the reference pressure trajectory r:

y = y_f + s\, u = r , (4)

and the optimal control input u is then given by

u = \left( s^{\mathrm{t}} s \right)^{-1} s^{\mathrm{t}} \left( r - y_f \right) , (5)

where y_f is the free output response in the future and s is the unit step response matrix.
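as a numerical illustration of (4) and (5), the sketch below builds a unit step response matrix s for an assumed first-order pressure response and computes the optimal input sequence. the horizon length and plant model are hypothetical stand-ins for the identified model (3), and lstsq is used as a numerically safer equivalent of the pseudo-inverse in (5).

    import numpy as np

    n = 20                                  # prediction horizon (hypothetical)
    k = np.arange(1, n + 1)
    step = 1.0 - np.exp(-0.3 * k)           # assumed unit step response samples

    # s is a lower-triangular toeplitz matrix so that y = y_f + s @ u
    s = np.zeros((n, n))
    for i in range(n):
        s[i, : i + 1] = step[: i + 1][::-1]

    r = np.linspace(0.0, 1.0, n)            # reference pressure trajectory
    y_f = np.zeros(n)                       # free response (zero initial state)

    # optimal input per (5); lstsq realizes (s^t s)^(-1) s^t (r - y_f)
    u, *_ = np.linalg.lstsq(s, r - y_f, rcond=None)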
the mpc simulation result is shown in figure 9. this simulation result confirms that mpc is very effective for the liquid pressure control during pressing in the press casting process.

figure 9. pressure behaviour in mpc simulation.

5. conclusions
for a practical application of a casting quality control system in the press casting process, we have proposed an implementation approach for force measurement, pressure estimation and parameter identification for molten metal during dynamic filling. it has been made clear that the molten metal pressure inside the casting mold is accurately estimated from the reaction force measured by a force sensor mounted between the upper mold and its elevating device. from the estimated pressure behaviour, the actual pressure behaviour can be calculated with the liquid viscosity identified using the constructed mathematical pressure model. in addition, the filling behaviour of the upward-flowing liquid can simultaneously be observed on a real-time basis. in cases of varying casting conditions such as mold shape, molten metal properties, volume and temperature, as seen in practice, the real-time estimation method proposed in this paper can be effectively integrated with a next-generation supervisory process control system for metal manufacturing. in the near future, optimum quality control experiments using molten metal, based on the model predictive control algorithm, will be demonstrated.

references
[1] k. terashima, y. noda, k. kaneto, k. ota, k. hashimoto, j. iwasaki, y. hagata, m. suzuki, y. suzuki, "novel creation and control of sand mold press casting 'post-filled formed casting process'", foundry trade journal international (the journal of the institute of cast metals engineers), no. 183 (3670), (2009) pp. 314-318.
[2] y. noda, k. terashima, "modeling and feedforward flow rate control of automatic pouring system with real ladle", journal of robotics and mechatronics, vol. 19 (2), (2007) pp. 205-211.
[3] y. noda, k. yamamoto, k. terashima, "pouring control with prediction of filling weight in tilting ladle-type automatic pouring system", international journal of cast metals research, science and engineering of cast metals, solidification and casting processes, afc-10 special issue, vol. 21 (1-4), (2008) pp. 287-292.
[4] j. hu, j. h. vogel, "dynamic modeling and control of packing pressure in injection molding", int. journal of engineering materials and technology, vol. 116, no. 2, (1994) pp. 244-249.
[5] j. r. mickowski, c. e. teufert, "the control of impact pressure in the high pressure die casting process", trans. 17th int. die cast congress and exposition, (1993) pp. 349-354.
[6] h. devaux, "refining of melts by filtration. a water model study", in proc. of 53rd international foundry congress, prague, czechoslovakia, (1986) pp. 107-115.
[7] c. galaup, h. luehr, "3d visualization of foundry molds filling", in proc. of 53rd international foundry congress, prague, czechoslovakia, (1986) pp. 117-127.
[8] i. ohnaka, "development of the innovative foundry simulation technology", journal of the materials process technology, "sokeizai", vol. 45, no. 9, (2004) pp. 37-44.
[9] k. terashima, r. tasaki, y. noda, k. hashimoto, j. iwasaki, t. atsumi, "optimum pressure control of molten metals for casting production using a novel greensand mold press casting method", key engineering materials, vol. 457, (2011) pp. 453-458.
[10] r. tasaki, y. noda, k. terashima, k. hashimoto, "sequence control of pressing velocity for pressure in press casting process using greensand mold", international journal of cast metals research, vol. 21, no. 1-3, (2008) pp. 369-374.
[11] r. tasaki, y. noda, k. hashimoto, k. terashima, "modelling and control of pressurized molten metal in press casting", journal of mechanics engineering and automation (jmea), vol. 2, no. 2, (2012) pp. 77-84.

proficiency tests with uncertainty information: analysis using the maximum likelihood method

acta imeko
issn: 2221-870x
november 2016, volume 5, number 3, 16-23

katsuhiro shirono, masanori shiro, hideyuki tanaka, kensei ehara
national metrology institute of japan, aist central 5, 1-1-1 higashi, tsukuba, ibaraki, japan

section: research paper
keywords: proficiency test; en score; uncertainty; iso 13528; maximum likelihood method
citation: katsuhiro shirono, masanori shiro, hideyuki tanaka, kensei ehara, proficiency tests with uncertainty information: analysis using the maximum likelihood method, acta imeko, vol. 5, no. 3, article 4, november 2016, identifier: imeko-acta-05 (2016)-03-04
editor: franco pavese, italy
received march 31, 2016; in final form march 31, 2016; published november 2016
copyright: © 2016 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was supported by a grant-in-aid for scientific research (no. 26870899) from the japan society for the promotion of science (jsps), japan.
corresponding author: katsuhiro shirono, e-mail: k.shirono@aist.go.jp

abstract
in this study, we report the application of the maximum likelihood method to the analysis of proficiency test data when uncertainty information is given and a reference laboratory does not exist. there are two causes that could impair the quality of an analysis using the maximum likelihood method: the existence of an unknown random effect, and outliers. the conditions under which performance evaluations can be appropriately conducted are discussed in this study. to avoid serious impacts from these two causes, the maximum permissible standard uncertainty of an unknown random effect and the minimum permissible standard uncertainty of the values reported by a participating laboratory are quantified. through simulations, the maximum and the minimum permissible standard uncertainties are found to be 0.3 and 0.5 times the intermediate magnitude of the standard uncertainty that the participants are expected to report. we believe that our proposed procedure based on these criteria is sufficiently simple to be employed in actual proficiency tests.

1. introduction
the proficiency test (pt) using an interlaboratory comparison is an effective tool to assure the quality of the measurements of calibration and testing laboratories. participation in a pt is usually required for a laboratory to be accredited under iso/iec 17025:2005 [1]. for performance evaluation in a pt with information on uncertainty, a comparison with the result of a reference laboratory is implemented using the en score as described in iso 13528:2015 [2], which is referred to as the en number in iso 13528:2005 [3]. iso 13528 is the standard for the statistical methods used in pts. given that the measurement value of laboratory k and its expanded uncertainty are respectively x_k and u_k, while those of the reference laboratory are respectively x_ref and u_ref, the en score for laboratory k is defined as follows:

e_{\mathrm{n}}^{(k)} = \frac{x_k - x_{\mathrm{ref}}}{\sqrt{u_k^2 + u_{\mathrm{ref}}^2}} , (1)

where the superscript (k) denotes laboratory k. when |e_n^{(k)}| ≤ 1 and > 1, the performance of laboratory k is evaluated as "satisfactory" and "unsatisfactory", respectively. because it is difficult to designate a reference laboratory in some testing fields, performance evaluation in a pt with uncertainty information where an appropriate reference laboratory does not exist has not yet been described in iso 13528:2015. it must be noted that when there is no reference laboratory, outliers can seriously impair the quality of the pt.
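for illustration, the en score of (1) is a one-line computation; the following python sketch uses made-up numbers and is not part of the original paper.

    import numpy as np

    def en_score(x_k, u_k, x_ref, u_ref):
        # en score per (1); u_k and u_ref are expanded uncertainties
        return (x_k - x_ref) / np.sqrt(u_k ** 2 + u_ref ** 2)

    # hypothetical numbers: here |en| <= 1, so the performance is "satisfactory"
    print(en_score(x_k=10.12, u_k=0.08, x_ref=10.05, u_ref=0.04))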
on the other hand, since a pt is conducted to check the proficiency of laboratories, the possible existence of laboratories with inadequate proficiency should be taken into consideration. thus, a robust analysis method is required. although no suggestion can be found in iso 13528:2015, several analysis procedures have already been proposed for key comparison tests, which are a type of pt for national metrology institutes that is basically implemented without a specific reference laboratory. a guideline on statistical methods for key comparisons was presented by cox [4] in 2002, in which a robust analysis referred to as procedure b is proposed for a case where the results are inconsistent. moreover, analysis with the largest consistent subset (lcs), also proposed by cox [5], has been widely employed in such analyses. the lcs is the subset with the largest data size among the subsets whose consistency is confirmed through a χ² test. various other methods have also been proposed [6]−[10]. it is worth noting that although the statistical models employed in these proposals differ from each other, they are useful in their respective situations. we also developed an analysis method that is robust to outliers [11], [12], which is referred to as the robust method in this paper. this method comprises two steps: (i) the detection of an unknown random effect, and (ii) performance evaluation using the local maximum likelihood (lml) method. the robust method is explained in the appendix, and a brief summary of it is given in subsection 2.1. the advantage of this method is that it lessens the risk of performances being evaluated based on inappropriate data influenced by an unknown random effect, and allows performance evaluations to be conducted with a clear statistical meaning. in the present study, the robust method is reduced to a simpler method, which is referred to as the global maximum likelihood (gml) method. the robust method, in which lml estimators are employed, may give the impression of being difficult to implement because of its computational complexity. the approach using the gml method is explained in subsection 2.2 and is, we believe, as simple as algorithm a in iso 13528:2015 appendix c. if the conditions are clearly given, this approach can be employed in an actual pt.
corresponding to the two steps in the robust method, conditioning is conducted via two parameters: (i) the maximum permissible standard uncertainty for a random effect, and (ii) the minimum permissible standard uncertainty for the reported values. the first parameter is quantified by examining the magnitude of a random effect that would significantly affect the quality of the pt. the second parameter is considered because, when an outlier has an extremely small uncertainty, the gml method can give results different from those of the lml method. these parameters are quantified relative to the intermediate magnitude of the standard uncertainty that the participants are expected to report. this paper is organized as follows: section 2 provides the basic theory of the robust method and the gml method. the quantification of the parameters and the proposal of a practical procedure based on them are given in section 3. in section 4, the procedure is applied to the data of an actual pt. a brief conclusion is presented in section 5, and information on the robust method is provided in the appendix.

2. robust and global maximum likelihood (gml) methods
2.1. robust method
suppose that n laboratories participate in a pt. it is assumed that laboratory i reports x_i and u_i as the reported value and its standard uncertainty for i = 1, 2, …, n. q_i is defined as the square of the standard uncertainty u_i. as mentioned earlier, the two steps in the robust method are (i) the detection of an unknown random effect, and (ii) performance evaluation using the lml method. it is possible that inhomogeneity or instability of the pt items, or vagueness of the measurand, may cause a large unknown random effect and seriously impair the quality of the pt. the first step is therefore necessary. the second step is important in order to evaluate performances in a manner that is robust to outliers. in the first step, the marginal likelihoods are computed. the marginal likelihood is, simply put, the likelihood of the model. the marginal likelihoods of the models in which an unknown random effect is not considered, or is considered commonly for all n laboratories, are defined as λ_0 and λ_n, respectively. when λ_0 < λ_n, the model with the common random effect is more likely, and the data are regarded as inappropriate for the performance evaluation. in the robust method, the marginal likelihoods of some other models with a random effect are also computed. moreover, an estimate of the measurand is given robustly to outliers through bayesian analysis. two examples are shown in figure 1(a) and (b). in figure 1(a), the data are largely dispersed. in this case, λ_0 < λ_7, and an unknown random effect is detected. the data are therefore inappropriate for use in the performance evaluation. on the other hand, most of the data seem consistent in figure 1(b). in this case, an unknown random effect is not detected. the estimate of the measurand is given as 4.0, which is comparable to the values of x_3 to x_5.

figure 1. simulated pt data: (x_1, x_2, x_3, x_4, x_5, x_6, x_7) and (u_1, u_2, u_3, u_4, u_5, u_6, u_7) are given as (a) (1, 2, 3, 4, 5, 6, 7) and (0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2), and (b) (1, 2, 4, 4, 4, 6, 6.4) and (1, 1, 1, 1, 1, 1, 0.04). the error bars show the expanded uncertainties with a coverage factor of 2.
in the performance evaluation, the statistical model x_i ~ n(μ, q_i + λ_i) (i = 1, 2, …, n) is considered, where μ is assumed to be the true value of the measurand in the pt, and λ_i is an additional variance. several local maxima of the likelihood of the model can be observed. to analyse the data robustly, the combination of parameters maximizing the likelihood in which the value of μ is closest to the estimate of the measurand is focused on. defining the lml estimators as the parameters in this combination, the symbols μ_lml and λ_1^lml to λ_n^lml are introduced to express the lml estimators of μ and λ_1 to λ_n. the performance evaluation for laboratory k then checks the validity of the statistical model x_k ~ n(μ, q_k) and x_i ~ n(μ, q_i + λ_i^lml) for all i ≠ k. the specific form of the statistic is given in the appendix as the extended en score. when |e_n^{(k)}| ≤ 1 and > 1, the performance of laboratory k is evaluated as "satisfactory" and "unsatisfactory", respectively, as described in connection with (1). the extended en scores obtained using the robust method for the data in figure 1(b) are given in table 1. the extended en scores of x_3 to x_5, whose values are close to the estimate of the measurand, are 0.0, and their performances are evaluated as "satisfactory". on the other hand, the extended en score of the apparent outlier x_7 is 2.4, and its performance is evaluated as "unsatisfactory". thus, the robust method is not significantly influenced by outliers.

2.2. global maximum likelihood (gml) method
although the robust method is flexibly applicable, the computation may be complicated. it is therefore meaningful to provide a simpler method. analysis using the gml estimator is proposed here. in this section, n, x_i, u_i, and q_i are defined as in subsection 2.1. the statistical model x_i ~ n(μ, q_i + λ_i) (i = 1, 2, …, n) is considered in this method, as in the robust method. this model is actually identical to model iii proposed by willink [8]. in willink's paper, the gml estimator of λ_i, λ_i^gml, is given as λ_i^gml = max{|x_i − μ_gml|² − q_i, 0}, where μ_gml is the gml estimator of μ. moreover, willink showed that μ_gml can be obtained through the minimization of the following quantity q(μ):

q(\mu) = \sum_{i=1}^{n} \left[ \frac{(x_i - \mu)^2}{\nu_i} + \log \nu_i \right] , (2)

where ν_i = max{|x_i − μ|², q_i}. we apply the same concept as in the robust method to the performance evaluation. by analogy with e_n^{(k)} in the robust method, given in (a5) in the appendix, the following extended en score is computed for the performance evaluation of laboratory k, with ν_i^gml = λ_i^gml + q_i:

e_{\mathrm{n}}^{(k)} = \frac{x_k - \left( \sum_{i \neq k}^{n} \frac{x_i}{\nu_i^{\mathrm{gml}}} \right) \left( \sum_{i \neq k}^{n} \frac{1}{\nu_i^{\mathrm{gml}}} \right)^{-1}}{2 \sqrt{u_k^2 + \left( \sum_{i \neq k}^{n} \frac{1}{\nu_i^{\mathrm{gml}}} \right)^{-1}}} , (3)

where Σ_{i≠k}^n denotes the summation over i = 1, 2, …, k − 1, k + 1, …, n. this statistic is analogous to the concept of the "exclusive statistics" [13], [14]. when |e_n^{(k)}| ≤ 1 and > 1, the performance of laboratory k is evaluated as "satisfactory" and "unsatisfactory", respectively. this analysis method is referred to as the gml method in this study. since no check for an unknown random effect is conducted, the gml method is not appropriate when an unknown random effect has a serious impact on the pt data. therefore, the gml method should not be applied to the data shown in figure 1(a) without any quantitative consideration of the unknown random effect. moreover, μ_gml and μ_lml may differ from each other when an extremely small uncertainty is associated with an outlier.
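a compact numerical sketch of the gml machinery may be helpful. the function below minimizes q(μ) of (2) over a fine grid spanned by the data (the paper's own iteration is given as steps 7 to 9 in subsection 3.4) and then evaluates the extended en scores following the form reconstructed in (3); it is an illustration under these reconstructions, not code from the paper.

    import numpy as np

    def gml_estimate(x, u, n_grid=20001):
        # gml estimator of the measurand: minimize q(mu) of (2) on a grid
        x, q = np.asarray(x, float), np.asarray(u, float) ** 2
        grid = np.linspace(x.min(), x.max(), n_grid)
        nu = np.maximum((x[None, :] - grid[:, None]) ** 2, q[None, :])
        qmu = np.sum((x[None, :] - grid[:, None]) ** 2 / nu + np.log(nu), axis=1)
        return grid[np.argmin(qmu)]

    def extended_en(x, u, mu_gml):
        # extended en scores following the reconstruction of (3)
        x, q = np.asarray(x, float), np.asarray(u, float) ** 2
        nu = np.maximum((x - mu_gml) ** 2, q)    # nu_i^gml = lambda_i^gml + q_i
        en = np.empty_like(x)
        for k in range(len(x)):
            m = np.arange(len(x)) != k           # exclude laboratory k
            w = 1.0 / nu[m]
            mu_k = np.sum(w * x[m]) / np.sum(w)  # weighted mean without lab k
            en[k] = (x[k] - mu_k) / (2.0 * np.sqrt(q[k] + 1.0 / np.sum(w)))
        return en

applied to the figure 1(b) data, x = (1, 2, 4, 4, 4, 6, 6.4) and u = (1, 1, 1, 1, 1, 1, 0.04), this should reproduce the gml column of table 1 to the extent that the reconstruction of (3) is faithful.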
when the gml method is applied to the data in figure 1(b), the extended en scores are obtained as shown in table 1. the computed extended en score of laboratory 7 is 0.8, and its performance is evaluated as "satisfactory". this unnatural result occurs due to the extremely small uncertainty of x_7. however, we believe that the gml method is sufficiently simple to be employed in an actual pt analysis if certain conditions are satisfied. specifically, the method is applicable when it is confirmed that the uncertainty of an unknown random effect is negligibly small, and no extremely small standard uncertainties are reported by the participating laboratories. in section 3, these conditions are discussed in quantitative terms and a practical procedure is proposed.

table 1. extended en scores computed using the robust method and the gml method.
laboratory (k) | en(k) in the robust method | en(k) in the gml method
1 | −1.4 | −2.7
2 | −1.0 | −2.2
3 | 0.0 | −1.2
4 | 0.0 | −1.2
5 | 0.0 | −1.2
6 | 0.9 | −0.2
7 | 2.3 | 0.8

3. conditioning for application of the global maximum likelihood approach
3.1. concept of the maximum and the minimum permissible uncertainties
in this section, we focus on the two factors that might cause a problem in the performance evaluation proposed in subsection 2.2: (i) an unknown random effect, and (ii) outliers. the first factor can be neglected when the random effect is much smaller than the uncertainty of the reported values, while the second factor can be neglected when no extremely small uncertainty is reported. once it is clarified how large an uncertainty will be reported by the participants in a pt, criteria to avoid the effects of these factors can be examined. the standard uncertainty that the participants in a pt are expected to report on average is referred to as the expected standard uncertainty in this study, and is expressed as u_exp. it is assumed that the expected standard uncertainty can be roughly determined from technical knowledge in advance of the pt. the maximum permissible standard uncertainty of an unknown random effect and the minimum permissible standard uncertainty of the reported values are introduced to avoid the influences of the random effect and outliers, respectively. these are discussed in subsections 3.2 and 3.3 based on the assumption that the expected standard uncertainty u_exp is appropriately given. in subsection 3.4, a practical procedure is proposed based on these discussions.

3.2. influence of the random effect
only inhomogeneity and instability are considered as sources of an unknown random effect in this study. these random effect sources tend to have a serious impact on the analysis of a pt. it is assumed to have been confirmed that the definition of the measurand, including the choice of the measurement methods, will not have a serious influence. defining σ_rnd as the standard deviation of the random effect, the case in which σ_rnd is estimated to be 0.3 × u_exp is considered. the criterion that the standard deviation of the inhomogeneity must not exceed 0.3 × σ_p, where σ_p is the standard deviation for the proficiency assessment, is given in iso 13528:2015. regarding instability, almost the same criterion is used. this criterion is applied by extension to a case with uncertainty information in this study.
since p could be interpreted as the average value of the dispersion of the reported values, it seems appropriate to replace p with uexp. to characterize these criteria quantitatively, letting −1(.) be the inverse of the cumulative standard normal probability function, it is considered that xi is derived from n(, qi + rnd2) and the data are given as follows:   13.01 1 2     niφ v x i , (4) qi = uexp = 1 for i = 1, 2, …, n, where zi = −1(i /(n+1)), and v =  n i 1 (zi – n i 1 zi/n)2/(n −1). using these data, rnd2 is unbiasedly estimated to be 0.32. it should be noted that this sequence of xi does not necessarily mean that xi is truly derived from n(0, 1 + 0.32), and other statistical models including xi ~ n(, qi + i) (i = 1, 2, …, n) employed in the robust method might yield the same sequence of xi. in figure 2, the dispersion of data when n = 30 is shown. consequently, the unknown random effect is undetected when the data are given by (4). the marginal likelihoods of the models in which an unknown random effect is considered for no and all laboratories, 0 and n, are computed for the cases with n = 2, 5, 10, 20, 100, and 200. figure 3(a) shows 0 and n, implying that 0 is always larger than n. thus, it is concluded based on the discussion in subsection 2.1 that no unknown random effect is detected. since the result could change if different data were provided, the property is checked with more dispersive data. the following data are considered:   15.01 1 2     niφ v x i , (5) qi = uexp = 1 for i = 1, 2, …, n. figure 3(b) shows 0 and n, and 0 > n for all n as well. this means that the unknown random effect is undetectably small even when rnd is roughly estimated to be 0.5 × uexp. therefore, it is concluded that 0.3 × uexp could be a strong candidate for the maximum permissible standard uncertainty. in this discussion, rnd is unbiasedly estimated using the pt data. in iso 13528:2015, on the other hand, inhomogeneity and instability are basically evaluated independently from the pt data. however, since the random effect is undetectably small even with more dispersive data, it can be said to be conservative to set the maximum permissible standard uncertainty as 0.3 × uexp, irrespective of the method used for the estimation of rnd. 3.3. sensitivity to outliers  we recommend that the gml method be employed only when the number of the laboratories with |en| ≤ 1 is 10 or more. when the number of participants is small, the discrimination of outliers from the other values is technically difficult. such discrimination is, however, necessary in the case of the gml method. in the following discussion, the gml method is characterized through simulated data with an outlier and 10 other data. cases with a smaller data size are not taken into consideration. we believe that 0.5 × uexp is appropriate as the minimum permissible standard uncertainty mentioned in subsection 3.1. it   figure 2. simulated pt data given by (4) when n = 30. the error bars show  the expanded uncertainties with a coverage factor of 2.      figure  3.  computed  logarithmic  marginal  likelihoods  per  number  of  laboratories  for  the  data  when  n  =  2  to  200  using  the  data  given  in  the  equations of (a) (4) and (b) (5).  acta imeko | www.imeko.org  november 2016 | volume 5 | number 3 | 20  is considered that the uncertainty should be larger than 1/√10 × uexp ≈ 0.32 × uexp, because the weighted mean of the 10 data with the standard uncertainty of uexp is given as 1/√10 × uexp. 
3.3. sensitivity to outliers
we recommend that the gml method be employed only when the number of laboratories with |en| ≤ 1 is 10 or more. when the number of participants is small, the discrimination of outliers from the other values is technically difficult. such discrimination is, however, necessary in the case of the gml method. in the following discussion, the gml method is characterized through simulated data with one outlier and 10 other data points. cases with a smaller data size are not taken into consideration. we believe that 0.5 × u_exp is appropriate as the minimum permissible standard uncertainty mentioned in subsection 3.1. it is considered that the uncertainty should be larger than 1/√10 × u_exp ≈ 0.32 × u_exp, because the weighted mean of the 10 data points with standard uncertainty u_exp has a standard uncertainty of 1/√10 × u_exp. when u_i < 1/√10 × u_exp, the value x_i can have a strong impact on the determination of the gml estimator. thus, a minimum uncertainty of 0.5 × u_exp seems a possible choice. the robustness of the gml method with this value is consequently examined in this subsection. in discussing the properties of the gml method in quantitative terms, it is assumed that u_exp = 1, and x_1 to x_10 are considered to be given as follows:

x_i = \frac{1}{\sqrt{v}}\, \Phi^{-1}\!\left( \frac{i}{11} \right) , (6)

q_i = u_exp² = 1, where v is defined in subsection 3.2. in addition to these data, x_11 and q_11 are determined differently so as to be characterized as an outlier. see figure 4 for an example with x_11 = 2.0 and q_11 = 0.5². examples with q_11 = 0.3² and 0.5² are described in this study; the analysis with q_11 = 0.3² is conducted for comparison. if the simulation with the smaller minimum permissible standard uncertainty of 0.3 × u_exp provides a result that is robust to the outlier, it can be said that setting the minimum permissible standard uncertainty to 0.5 × u_exp is an adequately conservative choice. when μ_lml = μ_gml, identical extended en scores are given by both the lml and the gml methods for all of the laboratories. figure 5(a) and (b) show a comparison between μ_lml and μ_gml for q_11 = 0.3² and 0.5², respectively. μ_lml and μ_gml are in perfect agreement with each other under the conditions of both q_11 = 0.3² and q_11 = 0.5². even in the case of q_11 = 0.3², the results are not significantly contaminated by the existence of the outlier. thus, the validity of the criterion of 0.5 × u_exp is confirmed, and the criterion of a minimum permissible standard uncertainty of 0.5 × u_exp seems appropriate. it should be noted that the participating laboratories must agree in advance to a pt in which a standard uncertainty less than the minimum permissible standard uncertainty is not usually reported, and will not be reported in the pt. this is because the measurements for a pt must be conducted under the usual experimental conditions. it is not recommended that a laboratory that usually reports an uncertainty of less than 0.5 × u_exp report an uncertainty of 0.5 × u_exp or more only for the purpose of the pt.

figure 4. simulated pt data given by (6) when x_11 = 2.0 and q_11 = 0.5². the error bars show the expanded uncertainties with a coverage factor of 2.
figure 5. computed gml and lml estimators with q_11 = 0.3² and 0.5² as a function of x_11; x_1 to x_10 are given by (6).

3.4. proposed procedure using the gml method
based on the discussions in subsections 3.2 and 3.3, we suggest the following four conditions for application of the gml method: (i) no unknown random effect other than inhomogeneity and/or instability of the pt items exists, (ii) the random effect caused by the inhomogeneity and/or instability has a standard deviation of less than 0.3 × u_exp, (iii) the minimum permissible standard uncertainty from the laboratories is 0.5 × u_exp, and (iv) 10 or more laboratories report data for which the performance is evaluated as "satisfactory". a simple numerical pre-check of conditions (ii) to (iv) is sketched below.
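the helper below is hypothetical and not part of the paper; condition (i) is a matter of technical judgement and is not automated.

    def gml_preconditions_ok(u_exp, s_rnd, u_reported, n_satisfactory):
        # conditions (ii)-(iv) of subsection 3.4 as boolean checks
        cond_ii = s_rnd < 0.3 * u_exp                # random effect small enough
        cond_iii = min(u_reported) >= 0.5 * u_exp    # no too-small uncertainty
        cond_iv = n_satisfactory >= 10               # enough satisfactory labs
        return cond_ii and cond_iii and cond_iv

    # numbers from the copper example of section 4 (subset of uncertainties)
    print(gml_preconditions_ok(0.0034, 0.0003, [0.0088, 0.0033, 0.0018], 19))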
the following procedure, entailing 11 steps, is proposed to realize the above conditions:
1. it is confirmed that there is no, or practically negligible, unknown uncertainty in the definition of the measurand, including the choice of the measurement method.
2. the pt provider determines the expected uncertainty, u_exp, from existing technical knowledge.
3. the pt provider checks that the standard deviation of the random effect caused by the inhomogeneity and/or instability of the pt items is less than 0.3 × u_exp.
4. all of the participants agree that a standard uncertainty of less than 0.5 × u_exp is not usually reported, and will also not be reported in the pt.
5. the pt is implemented and the data x_i and u_i (i = 1, 2, …, n) are obtained.
6. it is confirmed that a representative value (e.g., the median) of the reported standard uncertainties is adequately close to the expected uncertainty.
7. μ_old = argmin_{μ ∈ {x_1, x_2, …, x_n}} q(μ), with q(μ) as defined in (2).
8. μ_gml = [ Σ_{i=1}^{n} x_i / max(q_i, (x_i − μ_old)²) ] / [ Σ_{i=1}^{n} 1 / max(q_i, (x_i − μ_old)²) ].
9. if |μ_gml − μ_old| > ε [ Σ_{i=1}^{n} 1 / max(q_i, (x_i − μ_gml)²) ]^{-1/2}, where ε is a small value like 10⁻³, set μ_old = μ_gml and go to step 8.
10. for k = 1, 2, …, n, compute e_n^{(k)} through (3) with ν_i^gml = max(q_i, (x_i − μ_gml)²).
11. it is confirmed that 10 or more laboratories have a performance of |e_n^{(k)}| ≤ 1.
for steps 1, 3, 4, 6, and 11, if the conditions are not satisfied, the robust method should, in principle, be implemented instead of the gml method. however, in step 4, if a laboratory requests permission to report a smaller standard uncertainty than the minimum permissible uncertainty, such a request may be accepted only when the laboratory has sufficient technical evidence. in step 7, the initial value for the estimation of μ_gml is determined as the x_i for which q(x_i) in (2) is the smallest among all q(x_i). although it is not mathematically assured that the gml estimator can be obtained through this iteration, we have not found a case in which the gml estimator is not given by this determination of the initial value. the chief advantage of the gml method is that the algorithm is sufficiently simple to be employed in an actual pt, or in other words, to be incorporated into iso 13528:2015. several algorithms for a pt without uncertainty information are described in iso 13528:2015, and the above algorithm is as simple as those methods.
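steps 7 to 9 amount to a robust weighted-mean fixed-point iteration; the sketch below is one plausible transcription of those steps into python, with the convergence tolerance ε = 10⁻³ taken from step 9. it is an illustration, not the authors' code.

    import numpy as np

    def gml_iteration(x, u, eps=1e-3, max_iter=1000):
        # steps 7-9: robust weighted-mean iteration to the gml estimator
        x, q = np.asarray(x, float), np.asarray(u, float) ** 2

        def qfun(mu):                                   # q(mu) of (2)
            nu = np.maximum(q, (x - mu) ** 2)
            return np.sum((x - mu) ** 2 / nu + np.log(nu))

        mu_old = x[np.argmin([qfun(xi) for xi in x])]   # step 7: initial value
        for _ in range(max_iter):
            w = 1.0 / np.maximum(q, (x - mu_old) ** 2)  # step 8: weights
            mu_gml = np.sum(w * x) / np.sum(w)
            w_new = 1.0 / np.maximum(q, (x - mu_gml) ** 2)
            if abs(mu_gml - mu_old) <= eps / np.sqrt(np.sum(w_new)):  # step 9
                return mu_gml
            mu_old = mu_gml
        return mu_old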
4. example of application: measurement of the concentration of copper in water
the gml method is applied to the data from a pt conducted by the japan society for analytical chemistry (jsac) from 2014 to 2015 [15]. in this test, the concentration of copper in water was measured and reported in units of mg/l. it should be noted that the gml analysis was not applied in the actual pt; the performances were evaluated through a comparison with the reference laboratory. these data are cited merely as a set of numerical examples. the following explanation follows the procedure described in subsection 3.4.
step 1: jsac has implemented pts for the measurement of the concentration of metals in water since 2007. thus, information on the uncertainty caused by the measurement method has been shared by the participants, and that uncertainty has been incorporated into the reported uncertainty. hence, it cannot be an unknown component of the uncertainty.
step 2: the expected standard uncertainty u_exp was derived as 0.0034 mg/l from the data of the past pt implemented from 2013 to 2014 [16]. the median of the reported relative standard uncertainties in the pt of 2013 to 2014 was 1.7 %. u_exp was therefore given as 1.7 % of the set concentration of copper in the pt of 2014. the set concentration was 0.200 mg/l, so u_exp was given as 0.0034 mg/l.
step 3: the maximum permissible standard uncertainty of the random effect was given as 0.0010 mg/l, which is 0.3 times the expected uncertainty. the inhomogeneity of the pt items was checked in this test. the evaluated standard uncertainty between units was calculated as 0.0003 mg/l through an analysis of variance. since this value is much smaller than 0.0010 mg/l, the effect of the inhomogeneity on the pt results was negligible. in fact, when the robust method is applied, no random effect is detected.
step 4: the minimum permissible standard uncertainty of the reported values was given as 0.0017 mg/l, which is 0.5 times the expected uncertainty. since information on the minimum permissible standard uncertainty was not given in the actual pt, there was one laboratory that reported a standard uncertainty of less than 0.0017 mg/l. the data of that laboratory have been removed, because the data are treated only as a numerical example in the present study. of course, it is not recommended in an actual pt that data be removed after the pt without reasonable grounds.
step 5: the 22 reported values are shown together with their respective standard uncertainties in table 2 and figure 6. the reported values ranged from 0.1908 mg/l to 0.2417 mg/l. laboratories 6 and 20 reported extremely large standard uncertainties. unlike extremely small uncertainties, these large uncertainties are not considered to impair the quality of the pt.

table 2. actual data reported in a pt of the concentration of copper in water conducted by the japan society for analytical chemistry from 2014 to 2015, and the values of q(x_i) and en(i) evaluated from the data. one data set whose standard uncertainty was below the minimum permissible standard uncertainty was removed only for the purpose of this study.
laboratory (i) | reported value (x_i) | standard uncertainty (u_i) | q(x_i) | en(i)
1 | 0.1908 | 0.0088 | −162.13 | −0.9
2 | 0.196 | 0.0094 | −179.68 | −0.5
3 | 0.1967 | 0.0033 | −182.31 | −1.4
4 | 0.1987 | 0.0024 | −189.10 | −1.4
5 | 0.1991 | 0.0050 | −190.38 | −0.7
6 | 0.202 | 0.11 | −199.65 | 0.0
7 | 0.2025 | 0.0022 | −201.26 | −0.8
8 | 0.2025 | 0.0029 | −201.26 | −0.6
9 | 0.2043 | 0.0023 | −205.46 | −0.4
10 | 0.2056 | 0.0036 | −207.32 | 0.0
11 | 0.2059 | 0.0033 | −207.43 | 0.0
12 | 0.206 | 0.011 | −207.42 | 0.0
13 | 0.2061 | 0.0044 | −207.39 | 0.0
14 | 0.2070 | 0.0018 | −206.13 | 0.3
15 | 0.2071 | 0.0027 | −205.89 | 0.2
16 | 0.2071 | 0.0023 | −205.89 | 0.3
17 | 0.2072 | 0.0018 | −205.64 | 0.4
18 | 0.2116 | 0.0067 | −185.02 | 0.4
19 | 0.2129 | 0.0036 | −180.16 | 1.0
20 | 0.214 | 0.039 | −176.38 | 0.1
21 | 0.216 | 0.014 | −169.83 | 0.4
22 | 0.2417 | 0.0036 | −127.96 | 4.9

step 6: the median of the standard uncertainties in table 2 is 0.0036 mg/l, which is close to the expected standard uncertainty of 0.0034 mg/l. since the median is larger, it can be said that a conservative evaluation was made in determining the maximum permissible standard uncertainty for checking the uncertainty of the random effect. on the other hand, there is a possibility that the minimum permissible standard uncertainty of the reported values might be too small. however, as mentioned in subsection 3.3, when the minimum permissible standard uncertainty is 0.3 times the expected standard uncertainty, the gml method works well in most cases. since 0.0036 mg/l / 0.0034 mg/l is smaller than 0.5/0.3 ≈ 1.67, this slight difference does not seem to have a significant impact on the performance evaluation.
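as a usage illustration, the table 2 data can be fed directly into the gml_iteration sketch given after subsection 3.4; according to the result reported in the next paragraph, the iteration should converge near μ_gml = 0.2059 mg/l. the arrays below are transcribed from table 2; the code itself is not from the paper.

    # values transcribed from table 2 (mg/l)
    x = [0.1908, 0.196, 0.1967, 0.1987, 0.1991, 0.202, 0.2025, 0.2025,
         0.2043, 0.2056, 0.2059, 0.206, 0.2061, 0.2070, 0.2071, 0.2071,
         0.2072, 0.2116, 0.2129, 0.214, 0.216, 0.2417]
    u = [0.0088, 0.0094, 0.0033, 0.0024, 0.0050, 0.11, 0.0022, 0.0029,
         0.0023, 0.0036, 0.0033, 0.011, 0.0044, 0.0018, 0.0027, 0.0023,
         0.0018, 0.0067, 0.0036, 0.039, 0.014, 0.0036]

    mu = gml_iteration(x, u)   # expected to land near 0.2059 mg/l (see below)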
since 0.0036 (mg/l) / 0.0034 (mg/l) is smaller than 0.5 / 0.3 = 1.67, this slight difference does not seem to have a significant impact on the performance evaluation. steps 7 to 9: the values of q(xi) are shown in table 2, and the minimum is given when i = 11. the value of x11 is therefore employed as the initial value in the repetitive computation to determine gml. the repetitive computation in steps 8 and 9 gives the value of gml as 0.2059 mg/l. in this case, lml = gml, and the performances evaluated using the gml method are identical to those using the robust method. steps 10 and 11: the extended en scores are computed, and these are shown in table 2 together with the reported values. the number of laboratories with a performance of |en(k)| ≤ 1 is 19, which is more than 10. there are three laboratories whose magnitudes of en scores are larger than 1.0. it is found from figure 6 that the values that do not contain gml in the range of their expanded uncertainties are evaluated as “unsatisfactory”. these results seem natural. we have presented a detailed discussion on the validity of the extended en scores in another paper [11]. for comparison, other methods were applied to this example. the robust method shows performance evaluation results identical to those obtained using the gml method. analysis using the largest consistent subset [4] obtains en scores with magnitudes exceeding 1.0 only for laboratories 3, 4, 19, and 22. this difference does not seem to be essential. we have provided further discussion through a comparison with other methods in another paper [12]. 5. conclusion  in this study, the application of maximum likelihood to the analysis of proficiency test data is discussed for cases where uncertainty information is given and a reference laboratory does not exist. to prevent serious effects from an unknown random effect and a few outliers, the following four conditions are suggested: (i) no unknown random effect other than inhomogeneity and/or instability of the pt items exists, (ii) any random effect caused by inhomogeneity and/or instability has a standard deviation of less than 0.3 × uexp, (iii) the minimum permissible standard uncertainty from the laboratories is larger than 0.5 × uexp, and (iv) the performances of 10 or more laboratories are consequently “satisfactory”, where uexp is the standard uncertainty that the participants are expected to report averagely. based on these suggestions, a practical procedure is proposed. moreover, the analysis method is characterized through an actual example. we believe that the analysis method proposed in this study can provide natural results with a computationally simple algorithm. acknowledgement  this work was supported by a grant-in-aid for scientific research (no. 26870899) from the japan society for the promotion of science (jsps). appendix: method proposed in our previous  studies  the method proposed in our previous studies [11], [12], which is referred to as the robust method in the main manuscript, is explained. it should be noted that the meanings of some symbols are different from those in the main manuscript. in this method, the model selection is implemented through a comparison of the marginal likelihood. the statistical model with the following parameters is considered: 1. the number of data to which the common random effect is given, m (m = 0, 2, 3, …, n); 2. 
2. the identification vector for the correspondence of the laboratories and the data, v_k = (k(1), k(2), …, k(n))^t (k(1) < k(2) < … < k(m), k(m+1) < k(m+2) < … < k(n)); and
3. the parameters for the priors, ω, ω_{m+1}, ω_{m+2}, …, and ω_n (1 ≤ ω < +∞, 1 ≤ ω_i (i = m+1, …, n));
where n is the number of participating laboratories. suppose that laboratory k(i) reports the measurement value x_i and its standard uncertainty u_i (i = 1, 2, …, n). let q_i = u_i² for simplicity of description. x_i is assumed to be derived from a normal distribution with the same mean μ. the variances of the distributions for the reported values of laboratories k(i) (i = 1, 2, …, m) are assumed to be q_i + λ_c, where λ_c is the variance caused by an unknown random effect. the variances for the reported values of laboratories k(i) (i = m + 1, m + 2, …, n) are assumed to be q_i + λ_i, where λ_i is the additional variance caused by the unskillfulness of these laboratories. thus, the model distributions of x_i are given as follows:

x_i ~ n(μ, q_i + λ_c) for i = 1, …, m,
x_i ~ n(μ, q_i + λ_i) for i = m + 1, …, n. (a1)

defining

\bar{q}_{\mathrm{c}} = \left( \frac{1}{m} \sum_{i=1}^{m} \frac{1}{q_i} \right)^{-1}, \qquad \bar{q}_i = q_i , (a2)

the priors of μ, λ_c, and λ_i (i = m + 1, …, n), p(μ), p(λ_c), and p(λ_i), are given as follows:

p(μ) ∝ 1,
p(λ_c) = ω\, \bar{q}_{\mathrm{c}}^{\,ω} \left( \bar{q}_{\mathrm{c}} + λ_c \right)^{-ω-1},
p(λ_i) = ω_i\, \bar{q}_i^{\,ω_i} \left( \bar{q}_i + λ_i \right)^{-ω_i-1}. (a3)

the hyperparameters m, v_k, ω and ω_i are optimized to maximize the following modified marginal likelihood:

λ_m = \int_{\mathrm{w}} l(μ, λ_c, θ \,|\, x, m, v_k)\, p(λ_c) \prod_{i=m+1}^{n} p(λ_i)\, dμ\, dλ_c\, dθ , (a4)

where x = (x_1, …, x_n)^t, θ = (λ_{m+1}, …, λ_n)^t, and w = {μ, λ_c, θ | −∞ < μ < +∞, 0 < λ_c < +∞, 0 < λ_i < +∞}. l(μ, λ_c, θ | x, m, v_k) is the likelihood of μ, λ_c, and θ given x, m, and v_k. the marginal likelihoods with m = 0 and m = n are referred to as λ_0 and λ_n, respectively, and are employed in the main manuscript. it should be noted that v_k is not a parameter when m = 0 or m = n. the point is that if m ≥ 2 is chosen as the optimized parameter, the performance evaluation should not be implemented, because m ≥ 2 means λ_c > 0. λ_c is the variance of a random effect, and that effect must be corrected before the performance evaluation. only when m = 0 is chosen is the performance evaluation given. for the performance evaluation, let the posterior mean of μ under the optimized model be μ_rob. several combinations of (μ, λ_1, ..., λ_n) locally maximizing the likelihood of the statistical model x_i ~ n(μ, q_i + λ_i) (i = 1, 2, …, n) can exist. however, the lml estimators of μ and λ_i, μ_lml and λ_i^lml, are specifically defined as the values of μ and λ_i included in the combination of (μ, λ_1, ..., λ_n) whose value of μ is the closest to μ_rob among those several combinations.
defining ilml = qi + ilml, the extended en score for laboratory k is proposed as follows:   1 lml2 lml 1 lml 12 1                       n ki ik n ki ii n ki ikk n u xx e   , (a5) to check the validity of the following statistical model:      .,...,1,1,...,1 ,n~ ,,n~ lml nkkix qx ii kk   (a6) references  [1] international organization for standardization (iso)/international electrotechnical commission (iec), “iso/iec 17025:2005: general requirements for the competence of testing and calibration laboratories”, iso, geneva, 2005. [2] iso/iec, “iso/iec 13528:2015: statistical methods for use in proficiency testing by interlaboratory comparisons”, iso, geneva, 2015. [3] iso/iec, “iso/iec 13528:2005: statistical methods for use in proficiency testing by interlaboratory comparisons”, iso, geneva, 2005. [4] m. g. cox, metrologia 39 (2002) pp. 589–595. [5] m. g. cox, metrologia 44 (2007) pp. 187–200. [6] r. c. paule and j. mandel, j. res. natl. inst. stand. technol. 87 (1982) pp. 377–385. [7] b. toman, a. possolo, accred. qual. assur. vol. 14 (2009), pp. 553–563. [8] r. willink, “advanced mathematical and computational tools in metrology and testing x”, world scientific, singapore, 2015, isbn: 978-981-4678-61-2, pp. 78–89. [9] s. k. shirono, h. tanaka, k. ehara, metrologia 47 (2010) pp. 444–452. [10] r. n. kacker, a. forbes, r. kessel, k.-d. sommer, metrologia 45 (2008) pp. 512–23. [11] k. shirono, h. tanaka, m. shiro, k. ehara, measurement 83 (2016) pp. 135-143. [12] k. shirono, h. tanaka, m. shiro, k. ehara, measurement 83 (2016) pp. 144-152. [13] a. g. steele, b. m. wood, r. j. douglas, metrologia 38 (2001) pp. 483–8. [14] m. j. t. milton, m g cox, metrologia 40 (2003) pp. l1-l2. [15] the japan society for analytical chemistry (jsac), jsac/ptp43: report on the proficiency test in accordance with iso/iec 17043, jsac, tokyo, 2015. [16] jsac, jsac/ptp-40: report on the proficiency test in accordance with iso/iec 17043, jsac, tokyo, 2014. editorial to selected papers from imeko tc1-tc7-tc13-tc18 joint symposium and mathmet workshop 2022, 2nd part acta imeko issn: 2221-870x june 2023, volume 12, number 2, 1 2 acta imeko | www.imeko.org june 2023 | volume 12 | number 2 | 1 editorial to selected papers from imeko tc1-tc7-tc13-tc18 joint symposium and mathmet workshop 2022, 2nd part eric benoit1 1 université savoie mont blanc, polytech'savoie, lab. d'informatique, systèmes, traitement de l'information et de la connaissance (listic) b.p. 80439 annecy-le-vieux cedex 74944, france section: editorial citation: eric benoit, editorial to selected papers from imeko tc1-tc7-tc13-tc18 joint symposium and mathmet workshop 2022, 2nd part, acta imeko, vol. 12, no. 2, article 2, june 2023, identifier: imeko-acta-12 (2023)-02-02 received june 28, 2023; in final form june 30, 2023; published june 2023 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: francesco lamonaca, e-mail: editorinchief.actaimeko@hunmeko.org dear readers, this issue includes the second part of the selection of papers presented at the imeko tc1-tc7-tc13-tc18 joint symposium and mathmet (european metrology network for mathematics and statistics) workshop “cutting-edge measurement science for the future” held in porto in august 2022. 
as for the first part [1], all these papers followed an acta imeko regular peerreview process before being included into this issue. i have the pleasure to introduce two first papers devoted to the fundamental and historical aspects of measurement science. in their paper “probability theory as a logic for modelling the measurement process” [2], giovanni rossi, francesco crenna and marta berardengo face a fundamental aspect of measurement with an audacious proposal on the nature of probability in the context of measurement modelling. in this contribution they show that probability can be interpreted as a logic for developing models of measurement and by the way they open multiple doors on future advances in measurement modelling. in “metrology in the early days of social sciences” [3], clara monteiro vieira and elisabeth costa monteiro explore the fundamental aspects of the measurement methods proposed more than a century ago by two founding authors of sociology as a scientific field. they especially seek to identify the possible connections between these preliminary sociological approaches and the current metrological conceptions. the identification of human activity is a crucial topic. paweł mazurek and szymon kruszewski present in their paper “applicability of multiple impulse-radar sensors for the recognition of a person’s action” [4] a highly accurate non-intrusive sensor for the monitoring of elderly persons. artificial intelligence is a major topic of interest related to multiple fields, and it also concerns measurement fields. a first paper presents the results of a study to classify masonry debris by different machine learning methods. elske linß, jurij walz and carsten könke investigate in their paper, “image analysis for the sorting of brick and masonry waste using machine learning methods” [5], the performance of several ai methods (svm, mlp and k-nn) on images of masonry debris. a second paper, “a low-cost table-top robot platform for measurement science education in robotics and artificial intelligence” [6], proposed by hubert zangl, narendiran anandan and ahmed kafrana, is focused on the diffusion of the knowledge in measurement field by the use of dedicated robotic platforms and their digital twins. this second selection of selected papers also includes two contributions related to the european metrology network for mathematics and statistics mathmet, a network including a large number of european national metrology institutes and that aims at fostering the field of mathematical and statistical applications for measurement science in europe. the first one, by francesca r. pennecchi and peter m. harris, is a technical note that concerns the mathmet ‘measurement uncertainty activity’. it provides an analysis of the dissemination of mu training, and a precise report on the progress to undertake surveys of existing training courses on mu: “mathmet measurement uncertainty training activity – overview of courses, software, and classroom examples”[7]. the second one, “case studies for the mathmet quality management system at vsl, the dutch national metrology institute”[8] by gertjan kok presents an early test on two pieces of metrological software, on mathematical reference datasets, and on a set of mathematical guidelines, of the mathmet quality management system previously presented by lines (see [9]). i hope you will enjoy your reading. 
eric benoit
section editor

references
[1] eric benoit, editorial to selected papers from imeko tc1-tc7-tc13-tc18 joint symposium and mathmet (european metrology network for mathematics and statistics) workshop 2022, acta imeko, vol. 11, no. 4, 2022, pp. 1-2. doi: 10.21014/actaimeko.v11i4.1439
[2] giovanni battista rossi, francesco crenna, marta berardengo, probability theory as a logic for modelling the measurement process, acta imeko, vol. 12, no. 2, 2023, pp. 1-5. doi: 10.21014/acta_imeko.v12i2.1313
[3] clara monteiro vieira, elisabeth costa monteiro, metrology in the early days of social sciences, acta imeko, vol. 12, no. 2, 2023, pp. 1-6. doi: 10.21014/acta_imeko.v12i2.1337
[4] paweł mazurek, szymon kruszewski, applicability of multiple impulse-radar sensors for the recognition of a person’s action, acta imeko, vol. 12, no. 2, 2023, pp. 1-7. doi: 10.21014/acta_imeko.v12i2.1346
[5] elske linß, jurij walz, carsten könke, image analysis for the sorting of brick and masonry waste using machine learning methods, acta imeko, vol. 12, no. 2, 2023, pp. 1-5. doi: 10.21014/acta_imeko.v12i2.1325
[6] hubert zangl, narendiran anandan, ahmed kafrana, a low-cost table-top robot platform for measurement science education in robotics and artificial intelligence, acta imeko, vol. 12, no. 2, 2023, pp. 1-5. doi: 10.21014/acta_imeko.v12i2.1356
[7] francesca r. pennecchi, peter m. harris, mathmet measurement uncertainty training activity – overview of courses, software, and classroom examples, acta imeko, vol. 12, no. 2, 2023, pp. 1-6. doi: 10.21014/acta_imeko.v12i2.1310
[8] gertjan kok, case studies for the mathmet quality management system at vsl, the dutch national metrology institute, acta imeko, vol. 12, no. 2, 2023, pp. 1-5. doi: 10.21014/acta_imeko.v12i2.1339
[9] keith lines, jean-laurent hippolyte, indhu george, peter harris, a mathmet quality management system for data, software, and guidelines, acta imeko, vol. 11, no. 4, 2022, pp. 1-6. doi: 10.21014/actaimeko.v11i4.1348

editorial

acta imeko, september 2014, volume 3, number 3

paolo carbone, university of perugia

section: editorial
citation: paolo carbone, editorial, acta imeko, vol. 3, no. 3, article 2, september 2014, identifier: imeko-acta-03 (2014)-03-02
copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: paolo carbone, e-mail: paolo.carbone@unipg.it

this issue of acta imeko collects papers submitted by authors who are active in various areas of imeko, and it ends the publication of manuscripts submitted after the completion of the world congress in busan (korea) in 2012. in the area of tc5 – measurement of hardness we publish three contributions.
the first of these papers is authored by low et al.; it is a joint experimental work carried out by authors from several major national metrology institutes. it proposes a new definition of the brinell hardness indentation test that proves to be useful when low uncertainties are a goal, at the first levels of the traceability chain. the second paper, by ma et al., covers the same topic and highlights the difficulties arising from the use of the current definition of brinell hardness when the indentation diameter must be determined. the third paper, authored by s. takagi, shows how to profile rockwell diamond indenters by means of 3d laser scanners.

in the area of nanometrology we publish two papers. the first one, by schuler et al., deals with the subject of tactile surface measurements and addresses the case in which surfaces do not have a limited curvature; experimental results are presented that validate the technique proposed in the paper to overcome the curvature problem. the second paper, by malinovsky et al., discusses the nanometrology capacity at the brazilian national metrology institute. traceability at the nanometrological level is the central issue discussed here: an interference microscope is described and its metrological characterization is presented.

in the area tc13 – measurements in biology and medicine we publish one paper by fiedler et al. it is the research result of an international team and reports on an experimental comparison of dry electrodes for electroencephalography to be applied in brain-machine interfaces.

in the area tc20 – energy measurements we publish a contribution by an international team of researchers, authored by seitz et al. it covers the important task of improving the si traceability of bioethanol by using electrolytic conductivity measurements.

in the area tc7 – measurement science we publish three contributions. the first one, by zieliński et al., describes a time-interval measurement system based on multi-tapped delay lines; this component is rapidly becoming an important building block for many practical applications. the authors describe their architecture and show experimental results obtained using their system. the second contribution, by aschenbrenner and zagar, presents an inductive absolute position measurement system to be applied in industrial environments; the system is described and characterized from a metrological point of view. e. benoit authors the third paper in this area. it deals with the fundamental issue of colour measurements in the hard sciences and in the behavioural sciences: using fuzzy scales, the paper shows how to adapt the scale used for colour measurements to the measurement context.

in the area tc15 – experimental mechanics, we publish a paper by panciroli and minak that describes a methodology for the analysis of the air trapped in a fluid when a deformable structure enters it. the paper closing this issue is authored by chen et al. and relates to the area tc3 – measurement of force, mass and torque: it describes a purpose-built system able to measure micro-forces, with experimental results obtained by comparison with milligram-scale deadweights.

i want to thank all the authors for their scientific contributions and all the people involved in managing and preparing this issue. finally, on behalf of the editorial board, i wish you enjoyable reading of the scientific content of this acta imeko issue.
simultaneous power quality analysis of feeders in mv utility power stations

acta imeko, issn: 2221-870x, february 2015, volume 4, number 1, 53-60

aleksandar nikolic, blagoje babic, aleksandar zigic, nikola miladinovic, srdjan milosavljevic

electrical engineering institute nikola tesla, university of belgrade, koste glavinica 8a, 11000 belgrade, serbia

abstract
the development of a low-cost, space-saving device that can simultaneously take measurements from all (usually eight, or up to twelve) outgoing three-phase feeders in a distribution substation is presented in this paper. to meet these requirements, at least 3 voltage measurements and 36 current measurements should be performed at the same time. in order to save space without reducing the measurement accuracy, a data acquisition system is designed based on real-time multiprocessing with a microcontroller and an fpga circuit. voltage and current measurements and their corresponding higher-order harmonics are calculated using the fast fpga circuit, while other calculations (power, power factor, voltage and current phase angles, etc.) are performed in the microcontroller. further savings are obtained using multichannel analog input modules with multiplexed inputs. communication with the supervising computer is done using a gprs modem or a wireless network module, depending on the station location. results obtained in the laboratory, and later with an industrial prototype, confirm the proposed solution.

section: research paper
keywords: power quality; simultaneous measurements; analysis; utility power station
citation: aleksandar nikolic, blagoje babic, aleksandar zigic, nikola miladinovic, srdjan milosavljevic, simultaneous power quality analysis of feeders in mv utility power stations, acta imeko, vol. 4, no. 1, article 9, february 2015, identifier: imeko-acta-04 (2015)-01-09
editor: paolo carbone, university of perugia
received december 15th, 2013; in final form december 11th, 2014; published february 2015
copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: (none reported)
corresponding author: aleksandar nikolic, e-mail: anikolic@ieent.org

1. introduction

the increased demand for electricity has created extensive power generation and distribution grids. industries demand larger and larger shares of the generated power, which, along with the growing use of electricity in the residential sector, has stretched electricity generation to the limit. today, electrical utilities are no longer independently operated entities; they are part of a large network of utilities tied together in a complex grid. the combination of these factors has created electrical systems that require attention to power quality [1], [2]. contemporary concepts of measuring instruments for monitoring power quality are based on sophisticated solutions from the field of information technology, with advantages in terms of reliability, conformity, speed of operation, and automatic control and management of quality, on the principles of distributed intelligent measurement methods using software tools, especially virtual instrumentation. this also becomes important for smart grid solutions, where decisions on different actions in the local network are also based on power quality analysis [3]-[6].

2. requirements for data acquisition equipment

the choice of components and equipment in the conceptual design of a measuring system whose function is to measure and analyse power quality is defined by the needs of distribution companies [7], [8]. given the needs of distribution substations, where the usual number of feeders to consumers is 8 or 12, a solution should be found that allows simultaneous measurements on up to 12 feeders. in this way, an analysis of the impact of consumer-side changes on the power quality of the part of the network supplied from a substation can unambiguously determine the feeder with the problem. another limiting factor is the typical device dimensions: the device shall be mounted on the wall inside the substation building or on the pole of a pole-mounted transformer.
also, to provide mobility, i.e. portability, the device should be as small as possible, since it may be used in a number of substations.

some problems that could arise are summarized hereafter. the necessity to sample the signals at a frequency of at least 10 khz, in order to better register the voltage and current waveforms and thus the potential disturbances to the sine waveform, requires that the acquisition software measure values in real time. additionally, because of the need to apply the discrete fourier transform (dft) to calculate the content of higher harmonics [9] of voltage and current up to the 50th order, programmable fpga circuits are considered, since their flexible reprogramming enables rapid calculations at the hardware level [10], [11].

three-phase measurements without phase delay between phases of the same feeder, which would otherwise result from multiplexing several analogue inputs through one a/d converter, are achieved by connecting each phase of one feeder to a different input module. since measurements on all modules can be initiated at the same time without delay, all three phase currents of one feeder can be measured without phase delay between phases. the phase delay between adjacent phases caused by the multiplexer is lower than the acceptable accuracy level; since this phase delay is constant, it could be additionally compensated in software if higher accuracy is requested.

the proposed measuring principle is shown in figure 1, where ai denotes analogue inputs (500 mv level) on 32-channel modules connected to the same data communication bus. for simplicity, only measurements on two feeders are shown; signals from further feeders (up to 32) are connected in the same manner. the measurement range is selected based on the output of split-core current transformers that have a 333 mv rated output. such transformers are chosen in order to have a simple installation in the field, with the opportunity to move the acquisition system to another power station. also, the mv-level output gives the possibility to feed signals directly to the acquisition system without the need for additional signal conditioning. a way of connecting current transformers on a three-phase feeder is shown in figure 2. it can be observed that the common line shared on one three-phase feeder additionally reduces the overall number of connections, which is an advantage for the industrial implementation.

figure 1. proposed measurement principle.
figure 2. connection of split-core current transformers with voltage output.
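to make the harmonic calculation concrete, the following minimal numpy sketch (an illustration, not the authors' fpga implementation; the sampling rate, window length and test signal are assumptions chosen to match the requirements quoted above) extracts the harmonic magnitudes up to the 50th order and the resulting total harmonic distortion from a sampled waveform.

```python
import numpy as np

def harmonics_and_thd(samples, fs, f0=50.0, max_order=50):
    """Return harmonic magnitudes (orders 1..max_order) and THD in %
    for a signal sampled at fs Hz with fundamental f0 Hz.
    Assumes the window holds an integer number of fundamental cycles."""
    n = len(samples)
    spectrum = np.abs(np.fft.rfft(samples)) * 2.0 / n   # amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    bins = [np.argmin(np.abs(freqs - k * f0)) for k in range(1, max_order + 1)]
    mags = spectrum[bins]
    thd = 100.0 * np.sqrt(np.sum(mags[1:] ** 2)) / mags[0]
    return mags, thd

# illustrative check: 10 kHz sampling, ten 50 Hz cycles, one 5th harmonic
fs = 10_000.0
t = np.arange(0, 0.2, 1.0 / fs)
u = 325 * np.sin(2 * np.pi * 50 * t) + 20 * np.sin(2 * np.pi * 250 * t)
mags, thd = harmonics_and_thd(u, fs)
print(f"THD = {thd:.2f} %")
```

the computed thd can then be checked against the limits defined in the applicable standard, as the real-time application described below does.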
3. prototype development

3.1. laboratory prototype

in order to check the functionality and accuracy of the system, a laboratory prototype was developed first. it is based on a usb data acquisition card and an electronics card that simulates the three-phase system [10]. this specially developed card has two inputs connected to the usb data acquisition card, used for simulating voltage and current. in order to have three voltage and three current signals at the output, the card uses phase shifters to provide the additional signals with 120° phase difference. the card needs an external power supply of ±15 vdc. the proposed multi-channel power quality acquisition system is built using a programmable controller ni crio 9076 with 4 modules: one 230 vac three-channel voltage measurement module and three 32-channel analogue input modules. the same controller configuration is used for the industrial prototype. figure 3 shows the complete laboratory setup. the electronics card that emulates the three-phase system is placed on top of the dc power supply. the outputs of the emulator are 6 voltages of ±100 mv, where three signals represent the feeder voltages of phases r, s, t, and the other three signals the currents of the corresponding phases.

figure 3. laboratory prototype.

3.2. industrial prototype

the advantage of the hardware-in-the-loop (hil) approach [10], used for the laboratory prototype shown in the previous subsection, is a shorter development time for the final solution. this relies on the real-time controller, since it is used in both the laboratory and the industrial prototype. furthermore, the real-time algorithm running on the microcontroller and the fpga is retained for the industrial prototype, and changes were made only regarding the principle of voltage measurement. in order to further reduce the number of required measurements, one three-phase voltage is measured using the three-channel module with isolated 300 vac inputs. such an approach is possible because in a utility power station all feeders are supplied from one or more power transformers connected to the same high-voltage three-phase line. the difference between the laboratory and industrial tests is in the signal introduced to the controller to emulate the three-phase voltage in the case of the laboratory prototype: for those tests, the same analogue input module used for current measurements is also used for voltage measurement. in the industrial prototype, on the other hand, real phase voltages of 230 vac are introduced. for that module, initialization and calculations are different, but that does not influence the other parts of the real-time software, especially the higher-order harmonics calculations performed at the hardware level using the fpga.

figure 4. industrial prototype with one-feeder current measurements.
figure 5. first part of the real-time application.
figure 6. voltage high-order harmonics.
figure 7. current high-order harmonics.

in figure 4 the industrial prototype is shown with the housing opened. for communication to the supervisory computer, ethernet is used, or a gprs modem connected to the rs232 port.
the controller also has a usb port for external usb memory sticks or even a usb hard disk; in that case a large amount of data can be saved without the necessity of real-time data transfer to another computer. that can also be a merit in cases where an ethernet network is not available or the mobile signal strength is too poor for the gprs modem.

4. developed software

in order to use all the features of the proposed real-time controller, such as the fpga for computationally intensive algorithms [11], and at the same time to provide visualisation on a client computer, the software is coded in the graphical programming language labview, a software development environment that contains numerous components, several of which are required for any type of test, measurement, or control application [12], [13].

4.1. real-time application

the real-time application, developed in labview and built with the real-time and fpga options, consists of three parts and is the same for both the laboratory and the industrial prototype. this approach provides the possibility to test every aspect that could occur in the real system, while reducing the time for final tuning of the software. the first part of the application shows the measured or simulated input voltages and currents per feeder, the phasor diagram, phase angles and the calculated power (active, reactive and apparent) and power factor. it is also possible to get accumulated values (power usage) in wh/kwh/mwh. this part of the application is shown in figure 5. the second and third parts of the application are indicated as vh and ah, respectively. both parts are devoted to the high-order harmonics of voltages (vh) and currents (ah), shown in tables with a clear indication of values exceeding the limits defined in the standards [9], [14]. also, the total harmonic distortion (thd) is shown both for voltage and current, with a colour signal indicating values higher than the limits (8 % for voltage and 12 % for current). these two parts of the application are shown in figures 6 and 7.

4.2. application for data collecting and analysis

in order to centralize the power quality measurements of different consumers, it is necessary to develop a dedicated database and a client application in addition to the measuring device itself [15]. the role of the database is to allow storage of the measured data, while the client application collects data from the data acquisition software through an appropriate communication medium for the selected protocol. the roles of the client application are to display the archived data and to perform signal processing when needed. the concept of a system that includes the database, the client application and the power quality measuring device is shown in figure 8.

4.2.1. database structure

the main purpose of the database is to collect the data important for further power quality analysis of the supplied electricity. the database is organized according to the objects and relations in the power system, but in such a manner as to satisfy specific user requirements. its modular organization enables extension to other power objects and consumers. the database also uses auxiliary tables, such as standard limits, measurement methods and instrument technical characteristics. as the database management system, mysql 5.1 is used; due to its advanced features, the innodb engine has been chosen.
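for illustration, a drastically simplified version of such a schema could look as follows; the table and column names are invented for the example (the paper does not disclose the actual database design), and python's built-in sqlite3 module stands in for mysql/innodb, where the ddl would be essentially the same.

```python
import sqlite3

# illustrative schema: one table per physical entity, as described above
ddl = """
CREATE TABLE IF NOT EXISTS power_station (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS feeder (
    id INTEGER PRIMARY KEY,
    station_id INTEGER NOT NULL REFERENCES power_station(id),
    label TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS measurement (
    id INTEGER PRIMARY KEY,
    feeder_id INTEGER NOT NULL REFERENCES feeder(id),
    taken_at TEXT NOT NULL,      -- ISO timestamp
    phase TEXT NOT NULL,         -- 'R', 'S' or 'T'
    v_rms REAL, i_rms REAL,
    thd_v REAL, thd_i REAL       -- total harmonic distortion, %
);
"""

with sqlite3.connect("pq.db") as con:
    con.executescript(ddl)
```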
4.2.2. client application structure

for the client application development, visual c# is used as the programming language, .net as the development environment and visual studio as the development tool. the .net environment has a library of base classes for access to different databases, work with xml files, web and windows forms, and connection through remoting and web services. com (component object model) automation of word and excel is also used, because most customers like to perform additional calculations in excel and to use word as a report-creating tool.

figure 8. concept of the proposed power quality system.

4.2.3. data transmission from measurement devices

in order to transfer data from the measurement device to the client computer, mostly located in the control centre of the local utility, different communication media can be used. after an analysis of different interfaces and protocols [16]-[18], two solutions were chosen for communication between the local measurement devices mounted in the utility power station and the control/diagnostic centre. the first one is ethernet, for locations within or near a local computer network; the second is a gprs modem, for longer distances and, for instance, pole-mounted power transformers. as an upgrade of ethernet, digital radio modems over the dnp3 protocol and wireless lan can be used. the modbus protocol is proposed in two versions: modbus rtu and modbus tcp/ip. modbus rtu is used for communication between the metering and data acquisition device and the local modem (usually gprs), while the modem provides direct translation of the data to tcp/ip. in this way it enables further uniformity of the data transfer, regardless of whether a gprs wireless interface or the classic ethernet network is used. it is also suitable for networking computers to exchange data retrieved from the local measuring points [19]. figure 9 shows the configuration of a communication system using the gprs protocol with a client-server architecture.

figure 9. communication interface using the gprs protocol.
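as a sketch of how a client in the control centre might poll such a device over modbus tcp, assuming the third-party pymodbus library (the ip address, register map and scaling below are invented for the example, and the device-id keyword varies between pymodbus versions):

```python
from pymodbus.client import ModbusTcpClient  # pip install pymodbus

# hypothetical register map: six holding registers with per-phase
# RMS voltages and currents of one feeder
client = ModbusTcpClient("192.168.1.50", port=502)
if client.connect():
    rr = client.read_holding_registers(address=0, count=6, slave=1)
    if not rr.isError():
        v_rms = [r / 10.0 for r in rr.registers[:3]]    # assumed 0.1 V/LSB
        i_rms = [r / 100.0 for r in rr.registers[3:]]   # assumed 0.01 A/LSB
        print("V:", v_rms, "I:", i_rms)
    client.close()
```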
5. experimental results

5.1. laboratory tests

in order to check the proposed multichannel power quality measurement system, a test signal of 230 vac, 50 hz (a pure sine signal) is used first; in that case the correctness of the calculations in the software can easily be verified. next, a test signal superimposed with harmonics (100, 150, 200, 250, 300, 350, 400, 450, 500, 1200 and 1250 hz) with an amplitude of 10 v is applied. the obtained signal is shown in figure 10. it can be observed that the total harmonic distortion is higher than the permissible value from the standard [14]. the bottom diagram shows a fast fourier transform of the signal. several values of higher-order harmonics that exceed the limits are indicated as failed in the table shown in figure 10. the results show the correctness of the proposed methods for power quality analysis [20], [21]. also, the fpga-based part of the system provides fast, real-time calculations on several channels.

figure 10. measurement results of a test signal with superimposed harmonics.

5.2. criteria for selecting typical measurement points

prior to putting the measuring device in an industrial environment, typical measurement points should be selected. one criterion is the higher-order harmonics content, since the electricity usage is related to the expected current or voltage distortion in the distribution network. the criterion of expected higher-order harmonics is defined by the index of expected higher-order harmonics content, calculated for every power station x/0.4 kv:

$$ih_i = \sum_{j=1}^{5} k_{hj}\,\frac{w_{aj,i}}{w_{ai}} \,, \tag{1}$$

where $k_{hj}$ is the weighting coefficient of the power consumption of consumer group $j$, $w_{aj,i}$ is the active energy of consumer group $j$ (according to the categorization based on higher-order harmonics) supplied from power station x/0.4 kv $i$, and $w_{ai}$ is the active energy of all the consumers supplied from power station x/0.4 kv $i$. the power station with the highest value of the index $ih_i$ is the first candidate for placing the power quality supervision system.
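equation (1) translates into a short function; the group weights and energies below are made-up numbers for illustration only:

```python
import numpy as np

def harmonic_content_index(k_h, w_group, w_total):
    """Index ih_i of expected higher-order harmonics content, eq. (1):
    a weighted sum of each consumer group's share of active energy."""
    k_h = np.asarray(k_h, dtype=float)        # weighting coefficients k_hj
    w_group = np.asarray(w_group, dtype=float)  # group active energies w_aj,i
    return float(np.sum(k_h * w_group) / w_total)

# illustrative values for one X/0.4 kV power station (energies in MWh)
print(harmonic_content_index(k_h=[0.9, 0.7, 0.5, 0.3, 0.1],
                             w_group=[120, 80, 60, 30, 10],
                             w_total=300.0))
```

the stations would then simply be ranked by this index to pick the first candidate.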
5.3. testing of the industrial prototype

using the previously defined criteria, an office building was selected as the consumer for testing the industrial prototype. such buildings are usually equipped with many computers, air conditioners and fluorescent lighting; large office buildings are therefore consumers with a high value of total harmonic distortion (thd). in the following figures some results taken from a main supply cabinet in one office building are presented: figure 11 shows the phase voltage rms trend, figure 12 the voltage total harmonic distortion, and figure 13 the total harmonic distortion factor of the current.

figure 11. phase voltage trend.
figure 12. voltage total harmonic distortion trend.
figure 13. current total harmonic distortion trend.

6. conclusions

a solution for the requirements of simultaneous multi-channel power quality measurement in distribution networks is presented in this paper. it is shown that, using state-of-the-art fpga technology and properly configured real-time software, computationally intensive tasks such as the dft of signals with harmonics up to the 50th order can be handled. this enables power quality analysis on 12 feeders simultaneously, which is a merit of the proposed system and fulfils the requirements of the distribution operator. furthermore, with the proposed measurement arrangement it is possible to measure up to 32 feeders, or 96 currents along with 3 voltages, simultaneously, while the device dimensions and weight remain below the required 500 × 500 × 200 mm (w × h × d) and 5 kg; the mobility requirements for the measuring system are thus also retained. the proposed solution has also reduced the development time: the programmable controller introduced in the first development stage for the laboratory prototype is used with the same real-time software in the industrial prototype, thanks to the hil concept applied during the laboratory tests. results obtained from both development phases show good correlation.

references
[1] c. sankaran, power quality, crc press llc, florida, 2002.
[2] j. kilter, j. meyer, b. howe, f. zavoda, l. tenti, j. v. milanovic, m. bollen, p. f. ribeiro, p. doyle, j. m. romero gordon, "current practice and future challenges for power quality monitoring – cigre wg c4.112 perspective," 2012 ieee 15th international conference on harmonics and quality of power (ichqp), pp. 390-397, 17-20 june 2012.
[3] j. meyer, j. kilter, b. howe, f. zavoda, l. tenti, j. m. romero gordon, j. v. milanovic, "contemporary and future aspects of cost effective power quality monitoring — position paper of cigre wg c4.112," electric power quality and supply reliability conference (pq), 2012, pp. 1-6, 11-13 june 2012.
[4] m. music, a. bosovic, n. hasanspahic, s. avdakovic, e. becirovic, "integrated power quality monitoring systems in smart distribution grids," 2012 ieee international energy conference and exhibition (energycon), pp. 501-506, 9-12 sept. 2012.
[5] s. h. laskar, s. khan, mohibullah, "power quality monitoring in sustainable energy systems," sustainable systems and technology (issst), 2012 ieee international symposium on, pp. 1-6, 16-18 may 2012.
[6] m. cornoiu, c. barbulescu, d. jigoria-oprea, s. kilyeni, g. prostean, r. teslovan, "power quality monitoring and analysis. case study for 220/110 kv substation," 2011 ieee 3rd international symposium on exploitation of renewable energy sources (expres), pp. 133-138, 11-12 march 2011.
[7] s. chen, c. l. zhang, y. z. liu, "a multi-channel monitoring system for system-wide power quality measurements," power system technology, 2000. proceedings. powercon 2000. international conference on, vol. 2, pp. 953-958, 2000.
[8] m. ruiz-llata, g. guarnizo, c. boya, "embedded power quality monitoring system based on independent component analysis and svms," the 2011 international joint conference on neural networks (ijcnn), pp. 2229-2234, july 31 - aug. 5, 2011.
[9] j. arrillaga, n. r. watson, power system harmonics, john wiley & sons, ltd, new jersey, 2003.
[10] v. cuk, a. nikolic, a. zigic, “a development system for testing integrated circuits used for power and energy measurements,” in proceedings of the 13th international power electronics and motion control conference epe-pemc 2008, pp. 1426-1431, poznan, poland, september 2008.
[11] n. kehtarnavaz, s. mahotra, digital signal processing laboratory: labview-based fpga implementation, brownwalker press, florida, 2010.
[12] national instruments, getting started with labview, june 2010, publication no. 373427g-01.
[13] s. h. laskar, m. mohibullah, "power quality monitoring by virtual instrumentation using labview," proceedings of the 2011 46th international universities' power engineering conference (upec), pp. 1-6, 5-8 sept. 2011.
[14] power quality standard en 50160:2010, voltage characteristics of electricity supplied by public electricity networks, 2010.
[15] r. p. k. lee, l. l. lai, n. tse, "a web-based multi-channel power quality monitoring system for a large network," power system management and control, 2002. fifth international conference on (conf. publ. no. 488), pp. 112-117, 17-19 april 2002.
[16] yue fuchang, shao lin, wu zaijun, "modeling and implementation of power quality monitoring device based on iec 61850," 2012 china international conference on electricity distribution (ciced), pp. 1-5, 10-14 sept. 2012.
[17] wanfang xu, gang xu, zhijiang xi, chuanyong zhang, "distributed power quality monitoring system based on ethercat," electricity distribution (ciced), 2012 china international conference on, pp. 1-5, 10-14 sept. 2012.
[18] yi zhang, honggeng yang, gong cheng, "application of 3g technology in power quality monitoring system," 2012 asia-pacific power and energy engineering conference (appeec), pp. 1-4, 27-29 march 2012.
[19] jiang xian-kang, ren jian-wen, he peng-yun, "analysis and application of power quality data from power quality monitoring network," 2011 asia-pacific power and energy engineering conference (appeec), pp. 1-4, 25-28 march 2011.
[20] a. nikolic, d. naumovic-vukovic, s. skundric, d. kovacevic, v. milenkovic, “methods for power quality analysis according to en 50160,” in proceedings of the 9th international conference on electrical power quality and utilization epqu 2007, pp. 1-6, barcelona, spain, october 2007.
[21] a. nikolic, b. babic, a. zigic, n. miladinovic, s. milosavljevic, “multi-channel system for remote power quality monitoring of electricity supplied by public distribution networks,” in proceedings of the xix imeko tc-4 symposium, pp. 1-6, barcelona, spain, july 2013.

3d survey technologies: investigations on accuracy and usability in archaeology. the case study of the new “municipio” underground station in naples

acta imeko, issn: 2221-870x, september 2016, volume 5, number 2, 55-63

luigi fregonese1, francesco fassi1, cristiana achille1, andrea adami1, sebastiano ackermann1, alessia nobile1, daniela giampaola2, vittoria carsana3

1 polytechnic of milan, department abc, hesutech laboratory, campus mantova, piazza d’arco 3, 46100 mantova, italy
2 superintendence archaeology campania, piazza museo nazionale, 19, 80135 naples, italy
3 assistant of the superintendence archaeology campania, via del marzano, 6, 80123 naples, italy

abstract
advanced 3d survey technologies, such as digital (image-based) photogrammetry and laser scanning, are nowadays widely used in the cultural heritage and archaeological fields. the present paper describes the investigations carried out by the laboratory hesutech of the polytechnic of milan, in cooperation with the superintendence archaeology campania, in order to examine the potential of image based modelling (ibm) systems applied to the archaeological field for advanced documentation purposes. besides the 3d model production workflow in an uncommon excavation environment, special consideration is given to the achieved accuracy. in the first part of the research, a comparison is performed between the photogrammetric camera parameters obtained with ibm systems and those provided in the calibration certificate by the manufacturer of the camera. in the second part, the operational phases of the application of such advanced 3d survey technologies are shown. the test field is the archaeological excavation area for the construction of the new “municipio” underground station in naples. due to its position in one of the historical areas of the city, its construction coexists with the archaeological excavations and is strictly tied to their evolution. in such conditions, the need to reduce as much as possible the time to build the public infrastructure is a very relevant feature, together with the ability to produce accurate documentation of what is considered archaeologically important.

section: research paper
keywords: close range photogrammetry; laser scanner; instruments; accuracy; calibration; 3d-model; archaeology
citation: luigi fregonese, francesco fassi, cristiana achille, andrea adami, sebastiano ackermann, alessia nobile, daniela giampaola, vittoria carsana, 3d survey technologies: investigations on accuracy and usability in archaeology. the case study of the new “municipio” underground station in naples, acta imeko, vol. 5, no. 2, article 8, september 2016, identifier: imeko-acta-05 (2016)-02-08
section editor: sabrina grassini, politecnico di torino, italy; alfonso santoriello, università di salerno, italy
received march 24, 2016; in final form july 22, 2016; published september 2016
copyright: © 2016 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: luigi fregonese, e-mail: luigi.fregonese@polimi.it

1. introduction

nowadays, the generation of three-dimensional models of objects from images is standard in a wide range of applications, from autonomous robotics to industrial vision and consumer digital entertainment. it has been a topic of intensive research since the early days of computer vision, also in the field of cultural heritage [1], but a general solution regarding the accuracy of photogrammetric systems has remained elusive. image based modelling (ibm) relies on multiple 2d image measurements to recover 3d object information through a mathematical model.
this method calculates 3d measurements from multiple views with the use of projective geometry and a perspective camera model. in addition, it guarantees good portability and often involves low-cost sensors [2]. however, the surveying results must meet specific criteria in order to provide the required accuracy for certain applications. for that reason, any geometric surveying task, such as the photogrammetric one, includes not only the definition of the relative positions of points and objects but also the estimation of the accuracy of the results [3]. with the least squares adjustment method, based on finding an approximate solution to overdetermined systems, it is possible to obtain reliable information concerning the accuracy of the results as well as the accuracy of the observations.

the building of the new “municipio” underground station in naples has required extended and intensive archaeological investigations over almost the entire area of the construction yard. the importance of the findings and evidence, together with the need to speed up the completion of the infrastructure, required the introduction of advanced 3d survey technologies to satisfy both requirements. however, the high number of almost daily photogrammetric surveys and restitutions did not allow a rigorous workflow regarding the calibration procedure to be followed (e.g. with dedicated calibration test fields). nevertheless, a self-calibration has always been performed using the same images acquired during the survey tasks, obtaining residuals on gcps of a few millimetres, or 1 cm to 1.5 cm in the worst cases. in the following section, a comparison between a self-calibration obtained in such a way and the one certified by the manufacturer will be described and discussed. in the second part, a number of applications of the ibm system to objects of different size and complexity will be shown.
2. accuracy with the ibm method

2.1. method for the determination of the accuracy with ibm

nowadays, high-resolution cameras allow excellent quality images in terms of resolution and sharpness to be acquired and can be used to perform precision surveys. the best way to evaluate the efficiency and the quality of this instrument, as well as of the whole process, is to focus on camera calibration, a common issue in photogrammetric applications, especially when the precision of dimensional measurements is a non-negligible variable. the calibration process follows a three-step approach: 1. image acquisition; 2. image alignment with the software photoscan®, to define the position of the images and the camera parameters (internal orientation and lens distortions); 3. comparison of the calculated values with those reported in the metric calibration certificate.

compared to the other aberrations, lens distortion is the one that mainly affects photogrammetric measurements in terms of accuracy, and the images must be corrected in photogrammetry. lens distortions are of several kinds, but the radial one is the most significant: it is the radial displacement of a projected image point on the sensor from its theoretically true position or, equivalently, a change in the angle between a ray and the optical axis. in order to define the distortions, the parameters of the internal orientation of a camera are calculated by means of self-calibration, in which the distortion and camera parameters are included as part of the bundle adjustment solution.

the digital camera used in this test is the rollei 6008 (figure 1), with a fixed digital back sensor (phase one) of 7240 × 5433 pixels (49.232 × 36.944 mm) and a 40 mm lens. this camera has the following features:
- it can save loss-free or raw-format images, up to 48 bit colour depth, 32 mb per image;
- interchangeable metric lenses pq (fastest shutter speed: 1/500 s) between 40 mm and 350 mm, and interchangeable metric lenses pqs (fastest shutter speed: 1/500 s) between 50 mm and 500 mm;
- a metric calibration certificate for each lens;
- pixel size 6.8 μm.

the calibration certificate provided by the manufacturer includes the following parameters:
- ck: 41.195 mm;
- xh: 0.161 mm and yh: +0.460 mm;
- a1: 3.60 × 10⁻⁵ and a2: −2.44 × 10⁻⁸ as parameters of the radial distortion;
- r0: 0.0 mm;
- r: [0, 31] mm.

the curve of distortion is defined by rollei with the following equation:

$$\Delta r' = a_1\, r\,(r^2 - r_0^2) + a_2\, r\,(r^4 - r_0^4) \,. \tag{1}$$

the radial distortion curve reported in figure 2 shows a radial distortion of 374 μm (55 pixels) at 31 mm of radial distance.

figure 1. rollei 6008af with phaseone p45.
figure 2. lens radial symmetric distortion values in pixels and μm.
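a short sketch evaluating the distortion polynomial (1) at the maximum radial distance; the parameter values are those quoted above, with the sign of a2 inferred from the 374 μm end point of the curve:

```python
# Rollei radial distortion: dr = a1*r*(r^2 - r0^2) + a2*r*(r^4 - r0^4)
a1, a2, r0 = 3.60e-5, -2.44e-8, 0.0    # certificate values (r in mm)
pixel_size_um = 6.8

r = 31.0                                # maximum radial distance, mm
dr_mm = a1 * r * (r**2 - r0**2) + a2 * r * (r**4 - r0**4)
print(f"{dr_mm*1000:.0f} um = {dr_mm*1000/pixel_size_um:.0f} px")
# -> 374 um = 55 px, matching the curve shown in figure 2
```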
2.2. test for accuracy investigation

in order to evaluate the overall accuracy of the ibm system (in particular photoscan), a calibration test was realized in a closed environment, set up in a square room about 4 metres in length with a cross vault on top (figure 3). in this room, a total of 108 targets (85 coded targets for photoscan and 23 for leica hds cyclone) were distributed all around and measured with a leica total station ts30. in the first data processing, all the acquired targets were used for camera calibration through the steps of image alignment and, subsequently, optimization. in this way the software photoscan calculated the inner orientation, with the results reported in table 1.

figure 3. the polygon test in which the coded targets for photoscan and hds laser scanning were positioned.

table 1. camera parameters calculated with the photoscan software, exported in australis format:
- camera: rollei 6008 p45
- h sensor: 7240 pixel
- v sensor: 5433 pixel
- pixel size: 6.8 µm
- c: 41.150 mm
- xp: −0.2549 mm
- yp: 0.3618 mm
- k1: −3.96287 × 10⁻⁵
- k2: 3.04616 × 10⁻⁸
- k3: −4.74775 × 10⁻¹²
- p1: 3.59663 × 10⁻⁶
- p2: 1.63844 × 10⁻⁶

these results are congruent with the data provided in the original calibration certificate of the camera, as also shown by the distortion curves in figure 4.

figure 4. comparison of the lens radial symmetric distortion in μm (above) and pixels (below).

concerning the accuracy of the alignment of the photogrammetric model, the process calculated its solution at high quality and extracted about 120,000 points (figure 5), with the quality values expressed in table 2.

figure 5. gcps and tie points determined in the alignment and calibration phase of the model under investigation.

the accuracy values described above clarify that the precision of the photogrammetric model (of the order of 0.5 pixel) fits well with the standard deviations obtained for each direction x, y, z. to have a more significant comparison, about 2/3 of all the gcps were used as check points (cps) to verify the quality of the alignment of the total model; the results of this test are summarized in table 3.

as evident from the comparison, the ibm approach, based on photogrammetry and topography, is correct, and it can also be used in the documentation of cultural heritage. indeed, the network geometry adopted for the image acquisition in this work is the same as the one adopted during surveys of architectural and archaeological artifacts; in addition, the order of magnitude of the dimensions of the acquired objects, in cases of close-range survey, is similar. the ibm method is efficient not just in the acquisition stage (in terms of instrumental costs and timing for data acquisition), but also in terms of achievable accuracy.

2.3. test on surface accuracy

the last part of the test concerns the comparison between the point cloud extracted with the photogrammetric approach and a second one obtained by employing the laser scanner leica hds 7000 [4]. the laser scan was made by positioning the scanner in the centre of the room, and it was registered in the same reference system used for the photogrammetric model by aligning it through the 23 black/white hds targets. the results of the registration, made with leica cyclone, are shown in table 4. the evaluation of the surface accuracy was made with the software cloudcompare, which allows the euclidean distance between two point clouds to be measured. in figure 6, it is clear that the distances between the photogrammetric and the laser scanner point clouds are included in the interval 0.00061.
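a minimal sketch of such a cloud-to-cloud comparison (an illustration with synthetic data, not the cloudcompare implementation): for each point of one cloud, the euclidean distance to the nearest point of the other cloud is computed with scipy.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(reference, compared):
    """Nearest-neighbour Euclidean distance from each point of
    `compared` to the `reference` cloud (both (N, 3) arrays, metres)."""
    tree = cKDTree(reference)
    d, _ = tree.query(compared)
    return d

# illustrative use with random clouds standing in for the
# laser-scanner and photogrammetric data
rng = np.random.default_rng(0)
laser = rng.uniform(0, 4, size=(100_000, 3))
photo = laser + rng.normal(0, 0.002, size=laser.shape)  # ~2 mm noise
d = cloud_to_cloud_distances(laser, photo)
print(f"mean {d.mean()*1000:.2f} mm, 95th pct {np.percentile(d, 95)*1000:.2f} mm")
```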
wilcoxon–mann–whitney tests were applied for comparisons of two different groups, and kruskal–wallis tests were applied for comparisons of three different groups. the level of statistical significance was set to p < 0.05.

3. results and discussion

3.1. phenolic compounds

the phenolic compounds were quantified using the response factor for tyrosol [41]. the sum of oleuropein, the ligstroside derivatives, tyrosol, hydroxytyrosol, lignans and phenolic acids (1), the sum of oleuropein and the ligstroside derivatives (2), and the sum of tyrosol and hydroxytyrosol (3) determined in all the fresh olive oils from three years of sampling varied from 145 mg/kg to 966 mg/kg (median, 417 mg/kg), 83 mg/kg to 584 mg/kg (median, 251 mg/kg), and 2 mg/kg to 97 mg/kg (median, 9 mg/kg), respectively. it is important to note that the composition of the phenolic compounds can vary widely according to each olive variety. the contents of the flavonoids luteolin and apigenin in the fresh olive oil from “istrska belica” were in the ranges of 2.6 mg/kg to 5.8 mg/kg (n = 20) and 0.9 mg/kg to 1.9 mg/kg (n = 20), respectively. the same flavonoid contents for “leccino” oil ranged from 1.5 mg/kg to 4.3 mg/kg (n = 11) and 0.3 mg/kg to 0.8 mg/kg (n = 11), and for “maurino” oil they ranged from 0.8 mg/kg to 2.0 mg/kg (n = 4) and 0.2 mg/kg to 0.6 mg/kg (n = 4). the “istrska belica” oil lignans ranged from 22 mg/kg to 70 mg/kg (n = 20), with “leccino” oil at 11 mg/kg to 33 mg/kg (n = 15), and “maurino” oil at 46 mg/kg to 51 mg/kg (n = 6). the dialdehyde forms of both decarboxymethyl oleuropein aglycone (dmo-agl-da) and decarboxymethyl ligstroside aglycone (dml-agl-da) for “istrska belica” oil varied from 23 mg/kg to 124 mg/kg (n = 20), with “leccino” from 17 mg/kg to 274 mg/kg (n = 15), and “maurino” from the limit of detection (lod) to 119 mg/kg (n = 6). the oxidised aldehyde and hydroxyl forms of oleuropein aglycone (o-agl-da) and ligstroside aglycone (l-agl-da) were lower compared to dmo-agl-da and dml-agl-da, and for “istrska belica” oil they varied from 5.6 mg/kg to 87 mg/kg (n = 20), with “leccino” from 1.5 mg/kg to 21 mg/kg (n = 15), and “maurino” from
sequential monte carlo

input:  P_1 … P_N (initial pseudo-random points)
        S_1(P_1) … S_N(P_N) (descriptor matching scores)
output: new points P̂_i with S_MAX

for k ← 1 to max number of iterations do
    for i ← 1 to N do
        while new point not accepted do
            dist ← |Ellipse_minor_semiaxis · log(S_i)|
            if dist > Ellipse_minor_semiaxis then
                dist ← Ellipse_minor_semiaxis
            end if
            rot ← random[0°, 360°]
            P̂_i ← P_i + distance (DIST) in direction (ROT)
        end while
        calculate descriptor matching score: S_i(P̂_i)
    end for
    S^k ← max(S_1(P̂_1) … S_N(P̂_N))
end for
return S_MAX = max(S^k)

sequential monte carlo: hist

input:  P_CM^I, P_CM^II (centres of mass of the I and II joints)
        P_1^I … P_N^I (initial pseudo-random points, I joint)
        P_1^II … P_N^II (initial pseudo-random points, II joint)
        S_ij(P_i^I, P_j^II) (descriptor matching scores)
output: new points (P̂_i^I, P̂_j^II) with S_MAX

rot ← normal direction to the segment P_CM^I P_CM^II
P_1^I … P_N^I ← (P_1^I … P_N^I) · ROT
P_1^II … P_N^II ← (P_1^II … P_N^II) · ROT
for k ← 1 to max number of iterations do
    for i ← 1 to N do
        for j ← 1 to N do
            save S_ij(P_i^I, P_j^II) in matrix S_TOT[i, j]
        end for
    end for
    sort(S_TOT) → S_TOT[1 .. N²]
    dist(i, j) ← |Ellipse_minor_semiaxis · log(S_ij)|
    if dist > Ellipse_minor_semiaxis then dist ← Ellipse_minor_semiaxis
    for i, j ← 1 to N do
        P̂_i^I ← P_i^I ± distance (DIST) in direction (ROT)
        P̂_j^II ← P_j^II ± distance (DIST) in direction (ROT)
        calculate descriptor matching score: S_ij(P̂_i^I, P̂_j^II)
    end for
    S^k ← max(S_11(P̂_1^I, P̂_1^II) … S_NN(P̂_N^I, P̂_N^II))
end for
return S_MAX = max(S^k)
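a compact python paraphrase of the single-joint refinement loop above (a sketch under stated assumptions, not the authors' implementation: the descriptor matching score is abstracted as a callable returning values in (0, 1], the inner acceptance loop is dropped, and the step rule dist = |b·log(s)|, clipped at the ellipse minor semi-axis b, follows the listing):

```python
import numpy as np

def smc_refine(points, score_fn, b, iterations=10, rng=None):
    """Sequential Monte Carlo refinement of candidate joint positions.
    points: (N, 2) candidates; score_fn: (N, 2) -> (N,) scores in (0, 1];
    b: ellipse minor semi-axis used as the maximum step size."""
    rng = rng or np.random.default_rng()
    best = -np.inf
    for _ in range(iterations):
        s = score_fn(points)
        step = np.minimum(np.abs(b * np.log(s)), b)   # low score -> big step
        theta = rng.uniform(0.0, 2.0 * np.pi, len(points))
        moved = points + step[:, None] * np.c_[np.cos(theta), np.sin(theta)]
        best = max(best, score_fn(moved).max())
        points = moved
    return best

# toy usage: a score that peaks at the origin
score = lambda p: np.exp(-np.linalg.norm(p, axis=1))
pts = np.random.default_rng(0).uniform(-1, 1, (50, 2))
print(smc_refine(pts, score, b=0.05, rng=np.random.default_rng(1)))
```

the two-joint variant of the listing works the same way, except that the candidate pairs are scored jointly in the matrix S_TOT and the displacement direction is fixed to the normal of the segment joining the two centres of mass.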
template for an imeko event paper

acta imeko, issn: 2221-870x, february 2015, volume 4, number 1, 5-10

thomas bruns1, dirk röske1, paul p. l. regtien2, francisco alegria3

1 physikalisch-technische bundesanstalt, bundesallee 100, 38116 braunschweig, germany
2 measurement science consultancy, julia culpstraat 66, 7558jb hengelo, the netherlands
3 instituto de telecomunicações and instituto superior técnico/universidade técnica de lisboa, av. rovisco pais 1, 1049-001 lisbon, portugal

section: research paper
keywords: journal; template; imeko; microsoft word
citation: thomas bruns, dirk röske, paul p.l. regtien, francisco alegria, template for an imeko event paper, acta imeko, vol. 3, no. 1, article 1, january 2014, identifier: imeko-acta-03 (2014)-01-01
editor: paolo carbone, university of perugia, italy
received month day, year; in final form month day, year; published january 2014
copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was supported by measurement science consultancy, the netherlands
corresponding author: paul p.l. regtien, e-mail: paul@regtien.net

1. introduction

the introduction describes the background of the research and gives a short review of related research published in recent literature, together with the major claims setting the framework of the present publication. references should be related to the present publication, not just a list of papers merely showing the authors’ knowledge of the literature; this relation must be made explicit. the newly presented method is shortly introduced as an alternative to previously published methods, with mention of the advantages aimed at. to minimize editing by the editorial team, authors are asked to follow the styles that go with this template. the font size of the whole body text is 10 pt and the font type is garamond. for the header of a section use the style named “level1title”. the introduction ends with an outline of the remainder of the paper, for instance as follows. in section 2 we discuss how the paper can be divided in sections and subsections, with an indication of the use of numbering and heading format. in the next two sections the use of illustrations and equations is described. in section 5 methods for citing references are given. finally, in the concluding section the major rules are summarized.

2. first page

the first page should have the title of the paper, the authors' names (do not use initials for the first and last name), and the author affiliations. following should be an abstract and 3 to 5 keywords separated by semicolons. the next 4 items (citation, editor, dates and copyright) are to be updated by the editor. the paper information section ends with reference to the funding (leave blank if necessary) and the name and e-mail address of the corresponding author. the header and footer contain information about the publication that is to be updated by the editor. page numbers appear in the bottom right corner and are updated automatically.

3. about sections

the body of the paper is divided into sections and, when readability requires it, subsections. this helps the reader to recognize the various elements of the paper, such as background theory, modelling and simulations, experimental setup, experimental results with evaluation, conclusions, references, acknowledgements and, when appropriate, appendices.

figure 1. stamp issued to help people getting familiar with si units.

pages should be laid out using two columns, as done in most journals, to increase readability.

3.1. subsections

if a section is long or deals with different topics, make a subdivision in subsections. avoid further subdivision of a subsection. when subsections are used, there must be at least two. use the style named “level2title” for the header of a subsection.

3.2. numbering of subsections

subsection numbering follows the outline numbering format configured in the template. subsection headings use the calibri font and are in bold. this template uses automatic outline numbering for the sections and subsections; we recommend that the author makes use of this feature. if the author does not feel comfortable with it, he may choose to number the sections and subsections manually. configuring a blank word document to use automatic outline numbering is not always as straightforward as it should be. we point out, nevertheless, that the configuration is already done in this template and the author just has to use it: it suffices to place the cursor in the section or subsection title and select the "level1title" or "level2title" styles already available from the menu or ribbon. this simple procedure is the same that should be used for all other parts of the paper (paper title, main text, abstract, etc.), and the author does not have to worry about the numbering at all. an even simpler procedure is just to copy and paste an existing section or subsection title and rewrite the text. the author, however, can choose to use manual numbering by deleting the automatic number that comes with the use of the proper style and inputting the numbers he wishes for each section.

4. about illustrations and tables

4.1. location

illustrations and tables can have two formats: column wide or page wide. figure 1 is an example of the first kind [1]; figure 5 gives an example of a page-wide figure. page-wide figures and tables should be placed inside a frame. column-wide ones can be placed inside a frame or directly in the middle of the body text. in both cases they should be located at the top or bottom of the page where they are first referred to in the text, if possible. figures should be configured with the "figure" style.

4.2. managing frames

figure 2. microsoft word frame formatting window.
it can be accessed by clicking on the frame content to make the frame border visible, clicking on the frame border to select it and, finally, right-clicking the frame border to bring up the pop-up menu and choosing the option "format frame".

figure 3. microsoft word caption insertion window. it can be accessed by right-clicking on the picture or table and selecting "insert caption" from the pop-up menu.

to create a frame we recommend that the author copies and pastes one of the frames in this template. before doing that, however, it is important to understand how they are configured. figure 2 shows the window where the configuration is done. it can be accessed by: 1) clicking on the frame content to make the frame border visible; 2) clicking on the frame border to select it; 3) right-clicking the frame border to bring up the pop-up menu and choosing the option "format frame". that window has 4 sections organized from top to bottom ("text wrapping", "size", "horizontal" and "vertical"). text wrapping should always be set to none. the size should be exactly 18 cm for page-wide illustrations and tables and 8.75 cm for column-wide ones. the horizontal setting should be center, relative to the column or page for column-wide or page-wide content respectively. the vertical setting can be top relative to margin or bottom relative to margin, depending on where the frame is supposed to be located (top or bottom of the page). the frames in this template all have the proper formatting and can be used as is. the only setting the author will need to manage when creating new frames by copying and pasting existing ones is the vertical setting, which will have to be changed from top to bottom depending on the new frame location.

figure 4. microsoft word cross-reference window. it can be accessed by going to the menu "references" and choosing "insert cross reference".

the copying and pasting of frames has to be done with care, because the new frame will have exactly the same configuration as the original frame and may overlap with it, making one of them invisible to the user. we suggest that the author selects the original frame, chooses "copy" (ctrl+c), places the cursor in a page or column that has no frame in the same position as the original, and chooses "paste" (ctrl+v). it is up to the author to manage in which page or column each frame is to be located. the more complicated situation is when the author wants to copy a page-wide frame that is at the top of a page to a new frame located at the bottom of the same page. because the original frame's vertical setting is top, the new pasted frame will also have the same setting and will overlap with the original one if placed in the same page. the solution is to paste the frame in a different page where no frames exist at the top (this can be a temporary blank page at the end of the document), change the vertical setting to bottom, and perform a cut and paste to the desired page; it will then show up at the bottom of that page. if the author prefers to create a frame from scratch he/she can choose "insert text box", then right-click on the text box border and select from the pop-up menu the option to format the text box. in the window that becomes visible press "convert to frame". the properties of the frame should be adjusted as described previously.

4.3. captions

place the figure captions directly below the figure inside the frame, and choose the style “figure caption”. figure captions have the format “figure x.
where x stands for the figure number and aaa for the figure caption. figures should be numbered consecutively with arabic numerals starting from 1. note that the caption should end with a period. the paragraph spacing before the caption should be 6 pt and after the caption should be 12 pt; this is defined in the "figure caption" style. this formatting should be overridden in the case of figures placed at the bottom of a page, so that the paragraph spacing after the caption is 0. see, for instance, the caption of figure 5.

table captions should be placed inside the frame directly above the table. format them with the style "table caption". table captions have the format "table y. aaa." where y stands for the table number and aaa for the table caption. tables should be numbered consecutively with arabic numerals starting from 1. tables and figures should have separate numberings. note that table captions should also end with a period. the paragraph spacing before the table caption should be 12 pt and after the caption should be 6 pt; this is defined in the "table caption" style. this formatting should be overridden in the case of tables placed at the top of a page, so that the paragraph spacing before the caption is 0. see, for instance, the caption of table 1.

4.4. tables

figure 5. shakuhachi: old japanese length standard: 1 shaku = 30.3 cm.

tables usually span a full page width and are treated in the same way as a page wide figure. avoid breaking tables across pages. table captions are placed above the table. an example is presented by table 1, summarizing the various styles used in this template.

4.5. numbering

microsoft word permits figure and table numbering to be done automatically. the author is asked to use this feature if possible instead of numbering them by hand. the captions in this template already use automatic numbering. the best way for the author is just to copy and paste those captions and change the text accordingly. because the number in the copied caption label will not be automatically updated, the author can place the cursor in the caption number and press the key f9 to update it (the number background turns grey because it is a "field code"). if the author wants to use automatic caption numbering but creates captions from scratch, he/she can right-click on the picture or table and select "insert caption" from the pop-up menu. a window will be displayed (figure 3) where one can choose the label "figure" or "table" and insert the caption text. if those labels are not in the drop-down list, the author can add them by using the "new label" button.

4.6. referring to figures and tables in the text

if automatic caption numbering is used, the author should refer to the figures and tables in the text using automated references. a reference can be inserted at a given point in the text by going to the menu "references" and choosing "insert cross reference" (figure 4). select which figure or table is to be cited, the label type ("figure" or "table") and that only the label and number should be used in the citation. keep the option "insert as hyperlink".

5. about equations

all equations should be numbered consecutively throughout the paper. do not use outline numbering per section.
numbers are placed between parentheses, aligned right and without a label; see equation (1) as an example, expressing the saturation current $i_d$ in a mosfet transistor [2]:

$i_d = \dfrac{\mu\,\varepsilon_0\,\varepsilon_{ox}\,w}{2\,t\,l}\,v_{gs}^2$ (1)

where w is the channel width, l the channel length, $\varepsilon_0$ the dielectric constant of free space and $\varepsilon_{ox}$ that of the oxide, $\mu$ is the mobility in the channel, t the oxide thickness and $v_{gs}$ the gate voltage [2]. make sure that all symbols are defined unambiguously. when confusion may arise, add the units of the parameters between square brackets. use si and derived units only [3]. select the style named "equation" for proper margins between the text and positioning of the equation number (insert a tab right after the equation box).

table 1. overview of styles and font sizes used in this template.

section | font | size [pt] | format | special
title | calibri | 20 | bold | only first letter is capital
authors | calibri | 12 | bold |
affiliation and email address | calibri | 9 | italic |
abstract text | calibri | 9 | normal |
keywords | calibri | 8 | normal | label in bold
citation | calibri | 8 | normal | label in bold
editor | calibri | 8 | normal | label in bold
dates | calibri | 8 | normal | label in bold
copyright | calibri | 8 | normal | label in bold
funding | calibri | 8 | normal | label in bold
corresponding author | calibri | 8 | normal | label in bold
first-level section headings | calibri | 10 | bold | numbered, all caps
subsection headings | calibri | 9 | bold | outline numbered
body text | garamond | 10 | normal | justified
acknowledgements, appendix | garamond | 10 | normal | as body text
equations | garamond/symbol | 10 | italic | numbered
equations (subscript/superscript) | garamond/symbol | 70 % of 10 | italic | numbered
equations (sub-subscript/superscript) | garamond/symbol | 60 % of 10 | italic | numbered
table text | calibri | 8 | normal | bold headings
figures | calibri | 9 | normal | centered
captions of figures and tables | calibri | 8 | normal | justified
references | garamond | 9 | normal | numbered

long equations that normally span more than one column should be wrapped over more lines, broken at a suitable place by arithmetic symbols (=, +, −, ×) as separators. an example is equation (2), about the surface heat flux per unit area along a flat plate [4]. equations are considered part of the previous sentence and should, when appropriate, have a period or a comma after them, as in (2). very short equations that are not further referred to may be inserted in line with the text, for instance r = v/i. make sure that all variables are in italic, also when used in the main body. note that the font size for equations is 10 pt and that it must be reduced to 70 % and 60 % in the case of subscript/superscript and sub-subscript/superscript, respectively.

6. about references and citations

references are limited to published works or papers that have been accepted for publication and should give full bibliographical information. they are placed in the section references at the end of the manuscript, in order of their appearance in the text. references are cited in the text by a number between square brackets. ensure that every reference cited in the text is also present in the reference list, and vice versa. unpublished results and personal communications may be included in the reference section following the standard reference style, substituting the publication date with either "unpublished results" or "personal communication". citation of a reference as "in press" implies that the item has been accepted for publication. the format of references is as follows:
a. for journal articles: initials and last name(s) of each author, title of article (first word only capitalized), journal title, volume number, (year), pages.
b. for book references: author(s) as above, title of book (main words capitalized), publisher, city of publication, year, isbn.
c. for a chapter in an edited book: author(s) as above, title of article (first word only capitalized), in: title of book (main words capitalized), editor(s), publisher, city of publication, year, isbn, pages.
d. for conference proceedings: initials and last names of each author, title of article (first word only capitalized), name of the conference, place, country, year, pages.

there is a section break at the end of the paper (after the references) so that the content of the last page is equally divided between the two columns.

7. conclusions

the concluding section contains the major achievements of the research presented in the manuscript. it should be concise but informative. when numerical results are an essential part of the research, for instance a wider measurement range or a lower uncertainty [5], they should be included in the conclusions. notice that conclusions are not the same as an abstract.

acknowledgement

here persons or institutes may be acknowledged for their technical, scientific or financial support. list them in this section, and not as a footnote or otherwise.

references
[1] m. fazio, s. l. rota, metrology on stamps, phys. educ. 30 (1995), pp. 289-297.
[2] s. middelhoek, s. a. audet, silicon sensors, academic press, london, 1989, isbn 0-12-495-051-5.
[3] k. t. v. grattan, measurement: system of scales and units, in: concise encyclopedia of measurement and instrumentation, l. finkelstein, k. t. v. grattan (editors), pergamon press, oxford, 1994, isbn 0-08-036212-5, pp. 209-214.
[4] m. j. lighthill, contribution to the theory of heat transfer through a laminar boundary layer, proc. of roy. soc., london, 1950, a 202, pp. 359-377.
[5] v. pop, p. p. l. regtien, h. j. bergveld, p. h. l. notten, j. h. g. op het veld, uncertainty analysis in a real-time state-of-charge evaluation system for lithium-ion batteries, proc. of 18th imeko world congress, sept. 17-22, 2006, rio de janeiro, brazil, pp. 164-166.

abstract

the editorial team of acta imeko encourages authors to follow the instructions as described in this template file to produce their manuscript. the abstract should be composed in a way suitable for publication in the abstract section of electronic journals, and should state concisely what is written in the paper. important items are the aim of the research, the basic method and the major achievement (also numerically, when applicable). the length should not exceed 200 words.
editorial to selected papers from the imeko tc24 special issue "measurements in cultural heritage"

acta imeko issn: 2221-870x, december 2022, volume 11, number 4, 1-2

tatjana tomić1, leonardo iannucci2, leila es sebar2
1 ina industrija nafte, lovinčićeva 4, zagreb, croatia
2 department of applied science and technology, politecnico di torino, corso duca degli abruzzi 24, 10129 torino, italy

section: editorial
citation: tatjana tomić, leonardo iannucci, leila es sebar, editorial to selected papers from the imeko tc24 special issue "measurements in cultural heritage", acta imeko, vol. 11, no. 4, article 3, december 2022, identifier: imeko-acta-11 (2022)-04-03
received december 21, 2022; in final form december 21, 2022; published december 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: leonardo iannucci, e-mail: leonardo.iannucci@polito.it

dear readers,
this special issue, promoted by the imeko tc24 – chemical measurements, includes four papers focused on the topic of "measurements in cultural heritage". this field of application is particularly challenging for researchers working on chemical analyses, because it combines traditional issues related to measurement science, such as traceability and uncertainty, with the constraints of the world of cultural heritage. indeed, when working on historical or archaeological artifacts, sampling is often not possible, so non-destructive analyses are preferred. moreover, additional challenges are related to the development and use of portable instrumentation, which is required whenever the artifact under study cannot be moved to the laboratory. for these reasons, many researchers in the field of chemical measurements work specifically on archaeometry and on various topics related to cultural heritage. the aim of this special issue is to present some of the possible approaches for this application field, stimulating new studies and new collaborations in this challenging yet fascinating research topic.

in the paper "a morphological and chemical classification of bronze corrosion features from an iron age hoard (tintignac, france): the effect of metallurgical factors" [1], giorgia ghiara and co-authors investigate the corrosion products formed on archaeological bronzes coming from a celtic deposit located in central france. the use of a multi-analytical protocol based on image analysis, electron microscopy and raman spectroscopy allowed the researchers to identify different corrosion morphologies, which were then related to different metallurgical factors such as alloy composition, microstructure, degree of deformation and grain size.

in the paper "photogrammetry and gis to investigate modern landscape change in an early roman colonial territory in molise (italy)" [2], manuel j. h. peters and tesse d. stek present a composite image-based modelling workflow to generate 3d models, historical orthophotos and historical digital elevation models from images acquired decades ago.
the proposed workflow was then applied to improve the interpretation of survey data from the early roman colony of aesernia (south of italy).

in the paper entitled "roman coins at the edge of the negev: characterisation of copper alloy artefacts and soil from rakafot 54 (beer sheva, israel)" [3], manuel j. h. peters and co-authors report on the preliminary non- and semi-destructive analysis of copper alloys, corrosion products and soil components from a roman archaeological site in israel. the use of a multi-analytical approach based on x-ray fluorescence, x-ray diffraction, scanning electron microscopy and micromorphological analyses allowed the researchers to characterise the corrosion products and to correlate them to the soil composition.

the fourth paper is entitled "reversible protective and consolidating coatings for the ancient iron joints at the acropolis monuments" [4]. g. frantzi and co-authors present a corpus of measurement methodologies used during a multidisciplinary project aiming at the protection and conservation of ancient steel joints of the acropolis monuments, undertaken by the acropolis restoration service (ysma). different coating systems were applied on metal coupons, characterising their physicochemical properties and their protection performance in accelerated corrosion tests. then a 1-year test was carried out, exposing the coated samples outdoors at the monument site. the results allowed the researchers to propose one of the tested coating systems as a candidate to protect metal artifacts from atmospheric corrosion.

this brief excursus of the papers composing this special issue shows the great variety of themes and approaches encountered when dealing with "measurements in cultural heritage". we hope that these articles will stimulate your interest and that you will enjoy your reading.

tatjana tomić, leonardo iannucci, leila es sebar
special issue editors

references
[1] giorgia ghiara, christophe maniquet, maria maddalena carnasciali, paolo piccardo, a morphological and chemical classification of bronze corrosion features from an iron age hoard (tintignac, france): the effect of metallurgical factors, acta imeko, vol. 11, no. 4, article 11, december 2022. doi: 10.21014/acta_imeko.v11i4.1278
[2] manuel j. h. peters, tesse d. stek, photogrammetry and gis to investigate modern landscape change in an early roman colonial territory in molise (italy), acta imeko, vol. 11, no. 4, article 12, december 2022. doi: 10.21014/actaimeko.v11i4.1284
[3] manuel j. h. peters, yuval goren, peter fabian, josé mirão, carlo bottaini, sabrina grassini, emma angelini, roman coins at the edge of the negev: characterisation of copper alloy artefacts and soil from rakafot 54 (beer sheva, israel), acta imeko, vol. 11, no. 4, article 13, december 2022. doi: 10.21014/actaimeko.v11i4.1285
[4] giasemi frantzi, michail delagrammatikas, olga papadopoulou, charalampos titakis, eleni aggelakopoulou, panayota vassiliou, reversible protective and consolidating coatings for the ancient iron joints at the acropolis monuments, acta imeko, vol. 11, no. 4, article 14, december 2022. doi: 10.21014/actaimeko.v11i4.1341
accuracy of railway track conductance and joint efficiency measurement methods

acta imeko issn: 2221-870x, december 2015, volume 4, number 4, 82-87

jacopo bongiorno1, andrea mariscotti2
1 università di genova, via opera pia 11a, 16145 genova, italy
2 astm sagl, via comacini 7, 6830 chiasso, switzerland

section: research paper
keywords: conductivity measurement; dc power systems; electric variables measurement; grounding; guideway transportation power systems; guideway transportation testing; stray current; uncertainty
citation: jacopo bongiorno and andrea mariscotti, accuracy of railway track conductance and joint efficiency measurement methods, acta imeko, vol. 4, no. 4, article 15, december 2015, identifier: imeko-acta-04 (2015)-04-15
editor: paolo carbone, university of perugia, italy
received april 12, 2015; in final form october 29, 2015; published december 2015
copyright: © 2015 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: andrea mariscotti, e-mail: andrea.mariscotti@astm-e.ch

abstract: the iec 62128-2 (en 50122-2) indicates methods for the measurement of rail-to-earth conductance and insulating rail joint efficiency in dc electrified railways. these methods are reviewed, considering the influence of system parameters on the accuracy and variability of results. this work reports the characterization and quantification of uncertainty and limits of validity, as well as some practical considerations on the execution of measurements. whereas track conductance is characterized by an acceptable uncertainty in the range of one to a few %, the insulating joint efficiency test is affected by a larger and more variable uncertainty, so that countermeasures and provisions are necessary.

1. introduction

the impact of stray current is mitigated first of all by limiting the rail-to-earth conductance (in the following indicated by gre when expressed per unit length, i.e. as a conductivity); this is clearly stated in the applicable standard for protection against stray current effects in dc systems, iec 62128-2 [1]. the rail-to-earth conductance of a track section depends on several factors that are characteristic of the track (type of fasteners and sleepers, use of isolating materials and their performance, type of ballast and sub-ballast, etc.) or external to the system (e.g. type of soil), and also on environmental conditions (wet or dry soil, moisture percentage, pollutants, ageing, temperature). all these factors can hardly be thoroughly modelled and accurately predicted, so the measurement assumes paramount importance, not only to determine whether the system meets the limits (dictated by standards or set in the contract), but also to assess the suitability of different provisions and countermeasures.

the control of track conductance is first implemented by dividing the track into electrically independent sections [2]. the track may feature different constructive characteristics depending on the type of support and infrastructure (tunnel, cut & cover, viaduct, station, etc.), and the provisions adopted for one may not be optimized for the other [3]-[5]. moreover, such track portions shall in principle allow separate testing, with distinct values of gre. for these reasons mechanical insulating rail joints (irjs) are used, and their correct and satisfactory operation shall be verified by determining the so-called joint efficiency (je), that is, the complement to 100 % of the percentage of rail current leaking through the irj itself.
track isolation and track grounding are two opposing requirements when designing for compliance with two different, but connected, problems: stray current protection and electrical safety of track and wayside installations. the two standards iec 62128-2 [1] and iec 62128-1 [6] address them, the former limited to dc systems (stray current is much less relevant for ac systems). to simplify, we might say that corrosion and stray current are relevant in the long term (thus in terms of expected system life, maintenance and monitoring, and damage to external third parties), while electrical safety, grounding and bonding are technical requirements enforced at any time.

the assessment of the compliance of the design and installation of a new railway system is particularly important in the preliminary stage of a project, when pilot tracks are laid down and construction techniques, as well as provisions and countermeasures, are verified, checking effectiveness, cost, implementation and harmonization. to this aim the experimental determination of track quantities to compare with constraints and prescriptions assumes a primary role: suitability of the measurement technique, embodiment in a standard, accuracy and sensitivity are all factors to consider when preparing the test procedures and the test track setup on site, and when technical discussions between contractors and customer take place. this work focuses on the measurement methods for track conductance (intended as per-unit-length conductance, or conductivity, measured in ω⁻¹m⁻¹) and joint efficiency appearing in appendix a of iec 62128-2 (equivalent to en 50122-2), considering the spread of results in real conditions for the expected variability of system parameters; technical issues are also considered for the proposed improvements.

2. measurement methods appearing in iec 62128-2

in this section the two methods for the measurement of track conductivity and joint efficiency are reviewed, setting the basis for the following analysis and discussion.

2.1. standard method for track conductivity measurement

the measurement of track conductivity is described in app. a, sec. 3, of iec 62128-2. the method, whose setup is shown in figure 1, needs two insulating rail joints (irjs) that electrically separate the running rail. the standard requires that a dc voltage source is applied across the first irj (irj-1) and that a nearly constant current flows. the current is proportional to the conductance to earth of the track section under measurement between the two irjs (with the term "track section" we will identify either a single running rail or both rails, without explicit distinction where it is clear or not determinant).
the quantities that are measured from the test circuit are: the test current i leaving the source terminal, and the voltage to ground $v_p$ at the voltage terminal located in p. this voltage is in reality measured against a good enough ground connection (e.g. a vertical electrode) or the reference grounding circuit, with respect to which the track conductance is to be determined (e.g. concrete mesh, stray current collector, etc.).

due to contingencies and practical issues, the measurement execution may be subject to some constraints, related to: the availability of a long enough track behind the source irj (irj-1), or that this track is solidly grounded to close the measurement circuit; the voltage terminal within the section may be moved along it, to ease access to the reference ground potential point, and – it is observed – its position is subject to a minimum distance dmin = 50 m from the first irj, but no maximum length is prescribed. the standard also indicates a maximum length of the section lmax = 2 km. both constraints are aimed at preventing an overly favourable configuration, which would lead to an underestimation of track conductivity, as we will see in section 3.1. the injection point distance x from the irj is not constrained, but it is reasonable to assume that it is quite close to the irj under test; by the way, it is possible to show that changing it does not appreciably influence the total amount of current flowing in the left and right rail sections in parallel:

$z_{inj}(x) = z_{lx} \,//\, z_{rx}, \quad z_{lx} = \frac{1}{g\,x} + z_{re,lx}, \quad z_{rx} = \frac{1}{g\,(l-x)} + z_{re,rx}$ . (1)

the equations in (1) describe the relationship between the equivalent impedance seen at the injection point and the right and left parts of the system; it is evident that $z_{inj}$ remains almost constant, with the left term reducing and the right term increasing while x moves from left (minimum distance from the origin) to right (nearly the full length l). the conductivity to earth is estimated, following the indications in iec 62128-2, as

$g_{re} = \frac{i}{v_p\, l}$ , (2)

where i is the total current at the injection point, l is the section length and $v_p$ is the voltage at the point p.

2.2. standard method for joint efficiency measurement

the method for the measurement of joint efficiency appears in app. a, sec. 5, of iec 62128-2. the method, whose setup is shown in figure 2, needs two insulating rail joints (irjs): one is the joint under test (jut), while the other, at the other end, electrically separates the rail section. a dc voltage source e is applied across the jut (irj-1) and the current flowing behaves as described at the beginning of section 2. the standard specifies that two longitudinal voltage drops across 10 m of rail are measured at the left and at the right of the injection point, and the ratio of the voltages gives the joint efficiency (je) (3). before using the measured voltages, they are corrected for the off-voltages, i.e. the voltages measured once the source voltage is switched off:

$je = \frac{u_{2,on} - u_{2,off}}{(u_{1,on} - u_{1,off}) + (u_{2,on} - u_{2,off})} \cdot 100\,\%$ . (3)

in reality, because the longitudinal resistance of the rail is a constant, the same equation also holds for the currents flowing in the two voltmetric sections. the limit for je established by the standard is 95 %. the length lv = 10 m for the measurement of the longitudinal voltage drop is dictated by the standard, as well as the position of the current injection point, immediately after the first two voltage terminals, that is, at a distance slightly larger than 10 m.

figure 1. test setup for track conductivity as per iec 62128-2 annex a.3.
figure 2. test setup for joint efficiency as per iec 62128-2 annex a.5.
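to make the two estimators concrete, here is a minimal python sketch of (2) and (3); the function names and the numerical values are ours, chosen for illustration, and are not taken from the standard or from the paper.

```python
def track_conductivity(i, v_p, l):
    """estimator (2): per-unit-length conductance g_re = i / (v_p * l).
    i in a, v_p in v, l in m -> g_re in s/m."""
    return i / (v_p * l)

def joint_efficiency(u1_on, u1_off, u2_on, u2_off):
    """estimator (3): je in percent from the two 10 m longitudinal
    voltage drops, each corrected for its off-voltage."""
    u1 = u1_on - u1_off
    u2 = u2_on - u2_off
    return 100.0 * u2 / (u1 + u2)

# invented example: 50 ma injected, 10 v at p, 500 m section
print(track_conductivity(0.05, 10.0, 500.0))      # 1e-05 s/m = 10 ms/km
# invented example: almost no current leaks through the joint (u1 small)
print(joint_efficiency(0.010, 0.002, 1.20, 0.05)) # je close to 100 %
```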
while the resulting je is not influenced by gre, the value of lv is quite relevant.

3. variability of results

the expressions reported in the previous section are now analyzed with respect to plausible parameter variations, solving an equivalent circuit which represents the considered system, in order to check their validity and the spread of results.

3.1. track conductance per-unit-length

the expression for the calculation of per-unit-length track conductance from raw measurements (2) relies implicitly on the fact that the whole test current i flows past the voltage terminal, so that the only correction suggested in the standard is for the voltage drop in the running rail. the effect of the fraction of the test current leaving the rail through the conductance to earth of the left section, before the voltage terminal at p, is not taken into account (see figure 3). if the distance d of the voltage terminal from irj-1 is much less than the total track section length l, this phenomenon represents a minor source of error.

the rail section is modelled as a ladder network of resistance and conductance elements, rr,i and gr,i respectively, each determined by the per-unit-length values multiplied by the equivalent length of the circuit cell; r0 represents the equivalent resistance of the track sections behind irj-1. the equivalent circuit may be simplified for any point p by calculating the thevenin equivalent (ep in series with zp) for the left part (going iteratively from the source to the point p) and the series-parallel equivalent resistance (zp) for the right part (going backwards from the last element n to the point p); the details of the calculations are omitted here, but a numerical sketch of the ladder evaluation is given below.

the variability of the estimated conductivity with the section length l and the position p of the voltmetric measurement point is analyzed in figure 4. the attention is focused on five different track-section lengths l, spanning from a minimum of 250 m (typical of depots and some stations with short trains, such as metro and light railways) to the maximum allowed by the standard, lmax = 2 km. the rail resistivity, ground conductivity and r0 used in the calculation are assumed and kept constant. the equivalent circuit is solved by moving the voltmetric measurement position p, varying the distance from the irj from the minimum dmin = 50 m to a practical maximum distance corresponding to half of the length l. the estimated conductivity gre decreases with l and increases with d. assuming an absolute validity of the estimated gre independent of l, the spread of values at the minimum distance of 50 m is nearly 1 %, and slightly smaller moving away from the injection point, i.e. with the distance of the voltage terminal d > dmin. the dependency on the position p is limited to about 0.2 % relative for each l.

the tracks behind the irj under test (irj-1), closing the circuit for the current flowing through the soil, are modelled with an equivalent resistance r0 in series with the track section subject to measurement. the influence is evaluated in figure 5, where r0 ranges from 0.1 ω to 20 ω and the estimated gre values are plotted for three different track-section lengths, l = 500 m, 1000 m and 1500 m.

figure 4. estimated conductivity for various test section length values l (from top to bottom: 250 m black, 500 m blue, 1 000 m green, 1 500 m red, 2 000 m yellow) vs. the position p of the voltmetric v terminal (that moves from 50 m up to l/2): rail conductivity gre is (a) 10 ms/km, (b) 100 ms/km; r0 = 1 ω, rr = 20 mω/km.
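the following is a compact numerical sketch, under our own assumptions and with illustrative parameter values, of the ladder-network evaluation described above: the section is discretised into cells, the input impedance is reduced backwards from the far end, the node voltages are walked forward to the voltmetric terminal, and the standard estimator (2) is then applied to the simulated i and v_p.

```python
def estimated_gre(l=1000.0, d=200.0, gre=10e-3 / 1000, rr=20e-3 / 1000,
                  r0=1.0, e=48.0, n=2000):
    """solve the ladder network of figure 3 and apply estimator (2).
    gre in s/m, rr in ohm/m, l and d in m; all values illustrative."""
    dx = l / n
    g_cell = gre * dx            # shunt conductance of one cell
    r_cell = rr * dx             # series rail resistance of one cell
    # backward reduction: impedance seen rightwards at each node
    z = 1.0 / g_cell
    zs = [z]
    for _ in range(n - 1):
        z = 1.0 / (g_cell + 1.0 / (r_cell + z))
        zs.append(z)
    zs.reverse()                 # zs[0] = impedance at the injection node
    i_tot = e / (r0 + zs[0])     # total injected current
    # forward walk to the voltmetric terminal at distance d
    v, i, k = i_tot * zs[0], i_tot, 0
    while (k + 1) * dx < d:
        i -= v * g_cell          # fraction of current leaking to earth
        v -= i * r_cell          # longitudinal drop along the rail
        k += 1
    return i_tot / (v * l)       # estimator (2) applied to i and v_p

# typically returns a value slightly below the true 1e-5 s/m,
# i.e. the underestimation discussed above
print(estimated_gre())
```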
figure 5. estimated conductivity as a function of the assumed equivalent resistance r0 of tracks behind irj-1 vs. varying voltmetric v terminal position p between 50 m and l/2: r0 = 0.1 ω, 1 ω, 10 ω and 20 ω (thinner to thicker in each group of four curves); gre = 10 ms/km, rr = 20 mω/km, l = 500 m (blue), 1 000 m (green) and 1 500 m (red).

figure 3. equivalent circuit of the rail section between irjs with voltage source e across irj-1 and voltmetric terminal at point p; the reference potential common to all conductance elements and r0 is ground.

low values of r0 indicate a well-earthed long track section behind the first joint; conversely, large values are typical of short track sections (maybe accidentally sectioned) and hence of a more difficult path for the return current of the source. in figure 5 it can be seen that for short test sections and large r0 values the estimated gre may be larger than the real one, but in general the error always tends to reduce the estimated gre, which is on the wrong side with respect to the limits. however, the error is always well below 1 %, and r0 = 20 ω already represents a worst case.

the estimated conductivity also depends slightly on the rail longitudinal resistance rr, which causes a longitudinal voltage drop due to the flowing test current: this is investigated in figure 6, where three track lengths l are considered, fixing r0 = 5 ω and gre = 10 ms/km. it is observed that the estimated gre is larger when the rail resistance rr is smaller; the used rr values are characteristic of large- and medium-gauge running rails, and the resulting spread of values is around 1 %.

3.2. joint efficiency

assuming an ideal irj with infinite resistance, the application of (3) for different lv brings values of je (in principle 100 %) that are easily as low as 85 % to 90 %, thus leading to a failed test and a rejected irj. we call this the "setup limit of je": it cannot be lower than 95 %; moreover, it should be higher than that, in order to accommodate some tolerance for less-than-ideal joints, measurement uncertainty and round-offs. it is underlined that, while the resistance of a joint is a unique value, joint efficiency depends also on the track characteristics: a joint isolates a rail section better if the rail-to-earth conductance is large.

a simulation with the simple equivalent circuit shown in figure 3 was done for lv = 2.5 m, 5 m, 10 m and 20 m, and the results are reported in figure 7. having reduced the length of the voltmetric sections first to 5 m, and then to 2.5 m, the measured efficiency was increased above the limit of 95 %. however, reducing lv has the undesired effect of reducing the amplitude of the measured drops u1 and u2, reducing the signal-to-noise ratio and going quite close to the sensitivity of the used instrumentation. pre-existing voltages and offsets may be easily compensated for. with lv = 10 m as prescribed, to bring the setup limit of je above 95 %, the required rail section length l shall be at least 200 m.
in practical situations this may not be feasible, as for shunting yards and depots of light railways and metros, where trains are never very long and turnouts and shunts are often separated by 150 m to 200 m of track. another practical factor that may in turn limit the reduction of lv is the safety requirement of using selv voltage, which for dc is 60 v. for a good rail with a low conductivity to ground (e.g. 10 ms/km or lower [7], [8]) this would imply 0.48 a/km, or 48 ma every 100 m. with a longitudinal rail resistance in the range of 20 μω/m to 50 μω/m, the resulting voltage drop across the voltage terminals separated by lv metres would be a 9.6 μv to 24 μv reading for lv = 10 m and l = 200 m, reducing to 4.8 μv to 12 μv when lv is reduced to 5 m. a microvoltmeter and repeated readings shall be adopted, in order to reduce unavoidable fluctuations; it is observed that off-voltages are large enough to affect the accuracy of such readings. it is underlined that the intensities of injected current suggested by the standard are not attainable if the selv voltage limit is enforced; to reach some ampères of intensity for a moderate track-section length, the voltage of the dc source shall be increased to more than a hundred volts.

figure 7. joint efficiency as resulting from the application of the iec 62128-2 equation and for three different values of lv: 2.5 m and 5 m (brown upper curves), 10 m (thick red curve), 20 m (yellow lower curve); r0 = 10 ω, rr = 35 mω/km, gre = 10 ms/km.

figure 6. estimated conductivity for different rail longitudinal resistance values rr = 20 (solid), 40 (dashed), 60 (dotted) mω/km vs. voltmetric v terminal position p between 50 m and l/2: gre = 10 ms/km, l = 500 m (blue), 1 000 m (green) and 1 500 m (red), r0 = 5 ω.
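the selv arithmetic above can be checked numerically with a short sketch; the 48 v source level is our assumption, inferred from the quoted 0.48 a/km at 10 ms/km, and the 48 ma per-terminal current follows the text's figures.

```python
g_re = 10e-3 / 1000          # 10 ms/km expressed in s/m
v_src = 48.0                 # assumed battery voltage, v
i_100m = v_src * g_re * 100  # 0.048 a, i.e. "48 ma every 100 m"
for rr in (20e-6, 50e-6):    # rail resistance, ohm/m (20-50 uohm/m)
    for lv in (10.0, 5.0):   # voltmetric terminal separation, m
        drop_uv = i_100m * rr * lv * 1e6
        print(f"rr = {rr*1e6:.0f} uohm/m, lv = {lv:.0f} m "
              f"-> {drop_uv:.1f} uv")
# reproduces the 9.6-24 uv (lv = 10 m) and 4.8-12 uv (lv = 5 m) readings
```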
the problem is now considered the other way round, that is, by assigning the irjs a known resistance (1 kω, 10 kω and 100 kω, simulating an increasing efficiency), and a sensitivity study is made to identify which parameters influence the so-determined je value, causing the "setup limit of je" (see figure 8). for high-efficiency irjs (i.e. with an equivalent resistance larger than 100 kω) large track-conductivity values do not interfere with the test and the resulting je values are sufficiently large, provided that the test track is long enough, i.e. longer than about 200 m minimum. on the contrary, less efficient irjs cannot be satisfactorily measured, and the variability of track conductance has a negative impact on test accuracy.

4. other practical considerations

besides the evaluation of the expressions, the measurement methods themselves are also considered, as far as their applicability and possible improvements are concerned.

4.1. dc source and extraneous voltages

it is required to use a dc voltage source. this is quite convenient, given the availability of dc voltages of 12 v, 24 v and 48 v from batteries of any size, freeing the operator from mains sockets. dc measurements, however, are normally affected by extraneous voltages developing because of electrochemical reactions in the soil and junctions between different metals. two consecutive readings with polarity reversal are the usually adopted solution; the standard unfortunately does not include such a possibility, in favour of a simpler additional reading in off conditions, aiming to tackle the problem of pre-existing extraneous voltages. this technique, in principle, is not able to reproduce the same conditions that occur during the on-time intervals, when the source voltage is applied and current is flowing. however, practically speaking, off-voltage and reversed-polarity readings are quite in agreement, based on experimental observations. for example, for a track-conductance measurement the two positive and negative voltages were +50.4 v and −49.9 v, with an off-voltage of 0.44 v, quite close to their 0.5 v difference in absolute value.

4.2. polarization

when applying a dc voltage in the "on" interval, polarization occurs quite rapidly in the soil and in the other electrolytic substances that may be dissolved in the water and moisture trapped beneath the rails and in the fastening system. when the source voltage is disconnected (as rapidly as possible), the voltage readings during the "off" interval are quite critical. the standard does not say anything for the track-conductivity measurement, but for the joint-efficiency measurement it specifies "directly after the switching off". practical experience confirms that this might be interpreted by the operator as "a fraction of a second" or "some seconds" after switching off, depending also on his/her speed in storing values, taking notes, etc. when a data logger is used, minimizing delays, it may be observed that the off-voltage due to polarization varies rapidly in the first 3 to 5 seconds, with the exact time instant of the measurement representing a source of uncertainty. when measuring joint efficiency at very low voltage values, the off-voltages that are subtracted from the on-voltages may become a significant percentage, ranging on average from some % up to 30 %. varying the instant of the off-voltage reading by a few seconds has an overall impact on the final result of about 1 % to 15 %, based on experience. in extreme cases the "off" voltage may be nearly equal to the "on" voltage.

table 1 reports sample readings made at low temperature (3 °c to 4 °c), with wet soil conditions and humidity around 50 % for the first three measurements (due to soft rain the day before), and with somewhat drier soil for the fourth one (measurement taken the following day). observing the values reported for the 2 s and 5 s intervals after switching off, various behaviours may be outlined: the voltage drops by 50 % (meas. 1), by only 30 % (meas. 3), by an amount in between these values (meas. 4), or does not appreciably change (meas. 2). this means that there is no rule to follow to compensate for off-voltage readings taken accidentally at different instants of time, and that the method of measurement should be specified in more detail.

5. conclusions

the methods for the experimental determination of per-unit-length track conductance to earth and of joint efficiency, as reported in the iec 62128-2 standard, were considered. this standard has been reiterated until the most recent version in 2012, so that the methods may be considered mature. it was shown that the track-conductance method exhibits a variability with respect to the considered parameters (length of the test section, rail resistance, rail-to-earth conductivity and resistance to earth of the tracks behind the test section) that is limited to 1 %, in general causing an underestimation of the real conductance value.
for the joint-efficiency method the variability is much larger, and in many cases (especially for short test sections of less than 200 m) the method itself is unable to determine joint efficiency satisfactorily, if the limit of 95 % given in the iec 62128-2 is considered, thus causing the rejection of good rail joints. the analysis shows which parameters of the setup influence the effectiveness of the method. however, the offered solution of reducing the separation of the voltmetric terminals on the two sides of the current injection point reduces the voltage drop intensity, and shall be evaluated against the non-negligible intensity of disturbing external voltages and polarization. the off-voltage reading after the "on" phase is subject to rapid variations in the first seconds after switch-off: this is a relevant source of uncertainty, and the standard establishes neither a clear method nor the exact amount of time to elapse after switch-off before taking the off-voltage readings. when the length of the test track section is such as to cause an unavoidable and relevant error, it is proposed to adopt a combination of the following: reduce the voltmetric terminal separation to 5 m and increase the supply voltage to 100 v.

figure 8. joint efficiency with real irjs of finite resistance: 5 kω, 10 kω, 100 kω (from lighter to darkest line); hot and cold colours refer to gre = 10 ms/km and 100 ms/km, respectively; r0 = 10 ω, rr = 35 mω/km.

table 1. polarization for various joint efficiency measurements (on readings, and off readings taken 2 s and 5 s after switch-off).

        | u1 [mv]                   | u2 [mv]
        | on    | off 2 s | off 5 s | on    | off 2 s | off 5 s
meas. 1 | 0.020 | 0.006   | 0.003   | 0.290 | 0.044   | 0.025
meas. 2 | 0.021 | 0.016   | 0.012   | 0.187 | 0.040   | 0.042
meas. 3 | 0.027 | 0.011   | 0.08    | 0.172 | 0.030   | 0.022
meas. 4 | 0.012 | 0.011   | 0.06    | 0.095 | 0.012   | 0.09

references
[1] iec 62128-2, railway applications – fixed installations – electrical safety, earthing and the return circuit – part 2: provisions against the effects of stray currents caused by d.c. traction systems, 2012.
[2] k. s. bahra, r. e. catlow, control of stray currents for d.c. traction systems, iee conference publication no. 405, electric railways in a united europe, 27-30 march 1995.
[3] d. paul, dc traction power system grounding, ieee transactions on industrial applications, vol. 38, no. 3, 2002, pp. 818-824.
[4] a. ogunsola, a. mariscotti, l. sandrolini, estimation of stray current from a dc electrified railway and impressed potential on a buried pipe, ieee transactions on power delivery, vol. 27, no. 4, oct. 2012, pp. 2238-2246.
[5] i. cotton, c. charalambous, p. aylott, p. ernst, stray current control in dc mass transit systems, ieee transactions on vehicular technology, vol. 54, no. 2, march 2005, pp. 722-730.
[6] iec 62128-1, railway applications – fixed installations – electrical safety, earthing and the return circuit – part 1: protective provisions against electric shock, 2012.
[7] shi-lin chen, shih-che hsu, chin-tien tseng, kun-hong yan, huang-yu chou, tong-ming too, analysis of rail potential and stray current for taipei metro, ieee transactions on vehicular technology, vol. 55, no. 1, jan. 2006, pp. 67-75.
[8] a. mariscotti, p. pozzobon, experimental results on low rail-to-rail conductance values, ieee transactions on vehicular technology, vol. 54, no. 3, may 2005, pp. 1219-1222.
hybrid backward simulator for determining causal heater state with resolution improvement of measured temperature data through model conformation

acta imeko issn: 2221-870x, april 2017, volume 6, number 1, 13-19

yukio hiranaka, shinichi miura and toshihiro taketa
yamagata university, jonan 4-3-16, yonezawa, 992-8510 yamagata, japan

section: research paper
keywords: backward simulation; temperature measurement; inverse problem; precision improvement; lsb noise reduction
citation: yukio hiranaka, shinichi miura and toshihiro taketa, hybrid backward simulator for determining causal heater state with resolution improvement of measured temperature data through model conformation, acta imeko, vol. 6, no. 1, article 3, april 2017, identifier: imeko-acta-06 (2017)-01-03
section editor: paul regtien, the netherlands
received may 28, 2016; in final form march 14, 2017; published april 2017
copyright: © 2017 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was supported by jsps kakenhi grant number 255400006
corresponding author: yukio hiranaka, e-mail: zioi@yz.yamagata-u.ac.jp

abstract: we are developing a backward simulator which determines the unknown system input from the system output by using a system model. however, its processing time would increase enormously if the simulation model required the multiple case branching which is typical for backward simulations. in some target applications, we can use forward simulation processing in the backward data flow with a significant reduction of processing time. this paper shows an example of such an application, determining the system input of heater operation from measured data of room temperature. although the resolution of the measurement restricts the performance of the simulation result, we also used the model to improve the resolution of the measured data and show its effect on the simulation. furthermore, we show the result of the reduction of noise caused by quantizing lsb jitters.

1. introduction

estimating heater operation from data on room temperature change is a typical inverse problem [1], [2]. it is also a kind of ill-conditioned problem, because a slight error in the data can strongly disturb the results [3]. however, such a problem is common in measurement and system diagnostics, and it is an important part of measurement. if the relation between input and output is linear, it is a problem of deconvolution, and there are many studies, including super-resolution [4]. subtractive deconvolution [5] may be applied if the available data are impulsive. unfortunately, temperature change is a long-trailing phenomenon and is very sensitive to measurement resolution and error [3]. our idea to tackle this is to consider measurement resolution inevitable and to treat it explicitly by defining a range for each input or output signal.

there are two ways to search for deconvolution results for range signal models. one is a try-and-modify method, which tests some signal within a certain divided range and judges whether it matches the given output data; it is a forward simulation, because it requires many trials, regardless of whether a convergence method is used or not. the other is an inference method, starting from the given output data and working towards the input in the backward direction. this is a normal method for problem solving, but it is not always possible to find a good solution. however, we may perform a backward simulation in a similar manner to the forward simulation, starting with a range-divided output signal, if we can make a backward processing model. there are backward simulation applications in many fields, such as process scheduling [6], initial position estimation of physical objects [7] and software debugging [8].
there exist many difficulties in creating a backward system model. a typical backward simulation requires case branching when it has multiple possible conditions on its backward trace. however, we have experience in coping with them, and have shown that some physical restrictions may effectively reduce the number of case branches [9], [10]. we have been focusing our attention on utilizing a temporal model and strict facts such as the nonnegative property and causality, i.e. that any current value must not affect past values. under such conditions, we can perform a backward simulation effectively. in this paper, we show two methods for efficiently performing backward simulation and for effectively improving measurement resolution. the first method is for creating a backward simulator by incorporating a forward simulation model; it greatly suppresses case-branching processing in the example of this paper. the second method is for improving the resolution of the given measured data by applying a strict model of the system. hereinafter, we describe the concept of backward simulation and the target simulation model in section 1, the hybrid implementation of the backward integration loop in section 2, the measured temperature data and the estimation of model parameters in section 3, the simulation results for error-free temperature data in section 4, the effect of quantization error and the countermeasure to it in section 5, the simulation results for real measurement data in section 6, the discussion in section 7, and the conclusion in section 8.

2. backward simulation and models

here, we explain the concept of backward simulation with an example application of inference of heater activity. figure 1 shows the structure of our forward simulation model for room temperature change caused by an electrical heater. black dots denote branching points, circles with a plus sign denote summation points, and arrows denote the direction of physical information flow. a block with z⁻¹ means a one-sampling-time delay, which is a member of the integration loops. we adopted two integration loops because the room temperature increases even after the time the heater is turned off. the first integration loop corresponds to the neighbourhood of the heater, which accumulates heat locally, and the second loop corresponds to the whole-room heat reservoir. the parameters a and b are decay factors due to heat transfer; c is the specific heat of the room, which converts heat to temperature. figure 1 can be expressed by the following equations,

$\dot{q}_1 = w - a\,q_1 \;(0 \le t < t_i), \qquad \dot{q}_1 = -a\,q_1 \;(t_i \le t), \qquad \dot{q}_2 = a\,q_1 - b\,q_2$ , (1)

where $q_1$ and $q_2$ are the accumulated heat in the first loop and in the second loop, t is time, and the heater of w (w) is on for $0 \le t < t_i$.
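for illustration, here is a minimal discrete-time rendering of (1); this is our own python sketch (the paper's simulator is built from scala objects), and the parameter values are placeholders, not the fitted ones reported in section 4.

```python
def forward_sim(w=800.0, t_i=180.0, t_end=4000.0, dt=10.0,
                a=0.1, b=0.006, c=24200.0):
    """per-sample difference equations matching the z^-1 structure of
    figure 1: w is treated as heat delivered per sample, a and b as
    per-sample decay factors, and 1/c converts heat to temperature."""
    q1, q2, temps, t = 0.0, 0.0, [], 0.0
    while t < t_end:
        heat = w if t < t_i else 0.0   # heater on for 0 <= t < t_i
        q1 += heat - a * q1            # local reservoir near the heater
        q2 += a * q1 - b * q2          # whole-room heat reservoir
        temps.append(q2 / c)           # temperature change, degrees
        t += dt
    return temps
```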
solving (1), we can obtain $q_2$ as

$q_2 = w\,\{b\,(1 - e^{-at}) - a\,(1 - e^{-bt})\}\,/\,\{b\,(b - a)\} \;(0 \le t < t_i)$,
$q_2 = w\,\{b\,e^{-at}(e^{a t_i} - 1) - a\,e^{-bt}(e^{b t_i} - 1)\}\,/\,\{b\,(b - a)\} \;(t_i \le t)$ . (2)

figure 2 shows the implemented software objects and the connection diagram for the simulation of figure 1. each object has a corresponding function: "heater" for heating for a specified time duration, "wsum" for summing heat with a wattage input port, "csum" for summing heat, "br" for branching with the same output values, "dly" for a one-sampling-time delay, "co" for a coefficient multiplier, and "temp" for temperature recording. these objects are coded in scala classes, which use a java virtual machine; dly1 and dly2, for example, are instance objects of the class dly. all the objects have independent gui windows which accept local settings and display the status and parameters of each object. the data flowing on the connection links consist of times and values. as an example, a heating power of 800 w from "heater" at its output port "o" at the time of 10.0 s is expressed in an xml-style ucf (universal communication format) message [9], [10] whose nested tags identify the source ("heater", port "o"), the time (10.0) and the value (800.0). sim is the simulation controller, which redirects this message from the "o" port of "heater" to the "i" port of "sum1" as specified in the simulator's connection table [10]. the source tag indicates the source of the message and is nested in this case to indicate the port "o" of "heater".

in the backward simulation (figure 3), ucf messages flow in the backward direction, indicated by dashed lines with reversed arrows. the node "temp" is the starting object, which sends the time-sequenced temperatures in reversed order, one pair of time and temperature data in each ucf message. the temperature in the backward simulation is expressed by a value range giving the minimum and the maximum temperature, as in "0.0, 10.0", which means that the temperature is in the range from 0.0 inclusive to 10.0 exclusive [10]. by narrowing the range, we can control the resolution of the simulation. the simulation parameter "ndiv", set by the starting block "temp", specifies the number of divisions. as an example, when the whole range is from 0.0 to 10.0 and ndiv is equal to 10, the value 4.5 is expressed in the ucf message as "4.0, 5.0", as the divided range width is 1.0.

3. hybrid simulator and implementation

two important features of the simulator are time synchronization and hybrid backward simulation.

figure 1. forward simulation model of room temperature.
figure 2. forward simulation objects and connections corresponding to figure 1.
figure 3. backward simulation model corresponding to figure 1.
the data message from the feedback port (from the delay node) triggers the summation process by picking up the data of the same time from the input record table. figure 4(b) shows the backward version of figure 4(a). we need to calculate two backward outputs from a single backward input at the summing point, satisfying y(i)=x(i)+f(i). the computation intensive solution is to simulate all the pairs of x(i) and f(i), matching the equation. a practical solution is to perform the simulation for a finite number of divided range pairs, e.g. {x,f} pairs of {0.0 to 2.5, 2.5 to 5.0} and {2.5 to 5.0, 0.0 to 2.5} to match the output of 4.5 [10]. a case branching mechanism is needed for such processing. however, if we apply another solution which uses a forward simulation object in the backward simulation as in figure 5, we can avoid the expansion of processing time caused by the case branching. in figure 5, we can formulate the process as x(i)=y(i)–f(i) and f(i)=y(i+1). the branch node sends received backward data in time reverse order through the two links to the sum node and to the delay node. the same input record table used in figure 4 can be utilized to keep y(i) required by the reverse delayed feedback signal of f(i)=y(i+1). we have to describe the detailed processing of the sum node. in the backward simulation, data flows are expressed as a range (the minimum and the maximum). then, the backward calculation must handle the range information. if the range from the right is (a, b), which means that the minimum value is a and the maximum value is b, and the range from the bottom is (c, d), the output through the left port should be calculated as (a-d, b-c) to cover the broadest value range. however, x(i) must be positive or zero as it expresses heat. if a-d is less than zero, it must be substituted by zero. furthermore, if b-c is less than zero, the case simulated is not a feasible one. figure 6 shows the resulting practical hybrid backward simulator, and figure 7 shows implemented objects and connections. the simulation objects in figure 7 are the same objects as in figure 2, with backward processing capability except dly. the objects “co1” and “co2” divide their incoming backward data by their coefficients, “br1” and “br2” pass through their incoming backward data to their two backward outputs. the simulator in figure 7 needs three consecutive backward inputs to start as the first data stops at csum and the second data stops at wsum because there will be no matching time data coming from the other backward port. we describe here the mechanism of model mismatch detection in detail. sum objects (wsum and csum) in figure 7 have two arriving inputs in the backward simulation. we note the backward input from the right of csum at the time sample of t as ( )v t , the other backward input can be expressed as b ( 1)v t − because dly2 node delays the backward flow signal for one sample time and multiplied by a constant coefficient b. the backward output ( )u t from csum can be expressed as equation (3) and must be positive or zero because it expresses heat. 
$$u(t) = v(t) - b\,v(t-1) \ge 0 \,. \qquad (3)$$

the backward output from wsum can be expressed as the following equation,

$$\frac{u(t)-a\,u(t-1)}{1-a} = \frac{v(t)-b\,v(t-1)-a\left\{v(t-1)-b\,v(t-2)\right\}}{1-a} = \frac{v(t)-(a+b)\,v(t-1)+ab\,v(t-2)}{1-a} \ge 0 \,. \qquad (4)$$

if both inequalities are not satisfied for the maximum value of the range at any time point, the backward simulation fails, and the starting condition cannot have occurred for the given simulation model (this range handling and the conformation test are sketched in code below).

4. real temperature change and model simulation

at first, we verified the correctness of the simulator. figure 8 shows the result of a room temperature measurement when the infrared heater (800 w) on the floor was turned on for the duration from 0 s to 180 s in a tiny room of 3.6 m³. the sensor is a sensirion sht71, which has a 0.01-degree resolution, placed near the heater at a height of 1 m in the room. figure 8 also shows a simulated temperature change using the parameters a (0.095/10 s), b (0.9938/10 s) and c (24200 cal/deg). to determine these parameters, we assume that the heat loss rate of the room (b) is less than that of the heater appliance (a). then the temperature decline after the temperature peak is estimated by the following equation,

$$\log q = -b\,t + \mathrm{const.} \quad (t_i \le t) \,. \qquad (5)$$

we define $t_p$ as the peak temperature time and estimate a by the following equation derived from equation (2),

$$e^{(a-b)\,t_p} = \frac{e^{a t_i}-1}{e^{b t_i}-1} \,. \qquad (6)$$

the heat capacity parameter c is estimated as the magnification factor fitting the measured data to the simulated temperature data calculated with the simulator's forward function using the estimated parameters a and b. those parameters should be fine-tuned to closely fit the measured data. however, the rear part of the curve in figure 8 cannot be fitted by our model; we may need a third heat accumulation loop to represent the wall temperature change and the heat release to the environment. as will be shown in section 6, the above approximation was almost sufficient because the large measured values around the peak are matched.

figure 4. forward integration loop (a) and backward integration loop (b).
figure 5. hybrid backward integration loop.
figure 6. hybrid backward simulation model.
figure 7. hybrid backward simulation objects and their ports.

figures 9 and 10 show the results of the backward simulation when the simulated temperature change in figure 8 was fed backwards from the "temp" node in figure 7. the widths of the resultant ranges are shown as the difference between the min and max values in those figures. the ndiv parameter was set to 1,000 and 10,000 for figures 9 and 10, respectively. the larger ndiv is, the closer the result is to the actual heater operation (800 w for 0-180 s). as we calculated with float resolution for the temperature data, we can successfully increase ndiv up to the desired resolution to narrow down the min-max difference.

5. effect of measurement resolution

we may not obtain such a good result as in figure 10 for usual resolution data. practically, a good resolution of temperature measurement may be 0.01 °c.
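to make the range arithmetic of the sum nodes and the conformation test of equations (3) and (4) concrete, a minimal python sketch follows. the function names and the scalar (maximum-of-range) simplification are ours, not the simulator's objects.

```python
def backward_sum(y_range, f_range):
    """backward pass of a sum node: with the output range (a, b) from the left
    and the feedback range (c, d) from the delay branch, the broadest input
    range is (a-d, b-c); heat is clipped at zero, and an upper bound below
    zero flags an infeasible case (model mismatch)."""
    (a, b), (c, d) = y_range, f_range
    lo, hi = a - d, b - c
    if hi < 0:
        return None                       # mismatch: no non-negative input fits
    return (max(lo, 0.0), hi)

def mismatch(v, t, a, b):
    """equations (3) and (4) evaluated at sample t on a sequence v of
    (maximum-of-range) backward inputs to csum."""
    u = v[t] - b * v[t - 1]                                          # eq. (3)
    x = (v[t] - (a + b) * v[t - 1] + a * b * v[t - 2]) / (1.0 - a)   # eq. (4)
    return u < 0.0 or x < 0.0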
if we throw away the digits smaller than 0.01 °c from the simulated data in figure 8, the backward simulation will stop because of a model mismatch for ndiv larger than 108, which corresponds nearly to a 0.01 °c resolution, as the maximum temperature change is 1.2 °c (figure 8). figure 11 shows the backward result for ndiv=108. if we intend to obtain better results by increasing ndiv, we have to improve the resolution of the backward temperature data. figure 12 illustrates our method to improve the resolution, where the value div1 is the original resolution and the value div2 is half of div1. the circular points indicate the original a/d-truncated values for the resolution of div1. if the resolution is improved to half of the original one, it is natural to raise the values at the times of 3 and 5 (indicated by the triangle points). so, we wrote a shell script to raise the value when the backward simulation is stopped by a detected model mismatch (its repetition logic is sketched in code below). as shown in figure 13, if the backward simulation detects a negative value and stops when calculating (3) or (4), we raise the value by div2 at the corresponding time t, of v(t) in the case of (3) and of u(t) in the case of (4). the process is repeated until the whole sequence of backward input data passes the model match test, that is, until we get a valid backward result. to improve the resolution further, the repetition is done for a new resolution value. there may be a case in which the model match test is passed even though the data modification was not fully done, as in figure 12. in such cases, further repetition for resolution improvement needs to add more than one resolution value and requires a longer processing time afterwards.

figure 8. measured and simulated temperature change caused by heating for 180 s from the start.
figure 9. backward model simulation result for ndiv=1,000.
figure 10. backward model simulation result for ndiv=10,000.
figure 11. backward simulation result for ndiv=108 for the resolution-limited data.
figure 12. resolution increase needs to raise values at some points.

figure 14 shows the result for ndiv=10,000 after repeated resolution improvement. the value ranges outside the duration of heating converge to zero. although the values for the duration of heating do not converge to the correct value of 800 w, they clearly indicate the heater activity, and the average for the duration of 0 s to 180 s is 780 w. this shows that our data modification method suppresses the negative-value points in (3) and (4) and has no resolution improvement effect at positive-value points. in other words, we modify the data when we detect steeper temperature falls, while we infer heating when we detect steeper temperature rises.

6. real data and backward simulation

we show the result of the backward simulator applied to the measured data in figure 8. the backward simulator could only output a model-conforming result up to ndiv=50 (figure 15). it shows a large gap between the possible minimum and maximum values. still, the real heating power values reside within the min-max pairs.
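returning to the raise-and-retry script of section 5, its repetition logic can be sketched as follows, reusing the mismatch test above. the paper's script raises v(t) for a violation of (3) and u(t) for a violation of (4); this illustration simplifies both cases to raising the sample at the failing time.

```python
def refine(v, a, b, div2, max_passes=100000):
    """raise-and-retry repair of resolution-limited temperature data:
    whenever the test of equations (3)-(4) fails at some sample, that sample
    is raised by div2 (half the former resolution) and the whole sequence is
    re-checked, until the record passes the model match test."""
    for _ in range(max_passes):
        bad = [t for t in range(2, len(v)) if mismatch(v, t, a, b)]
        if not bad:
            return v                  # data now conform to the model
        for t in bad:
            v[t] += div2              # raise the offending sample(s)
    raise RuntimeError("data did not conform within max_passes")
```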
as shown in the previous section, the min value goes up and the max value goes down when the resolution of the temperature data improves. the repetitive temperature data modification described in the above section improves the result, shown in figure 16 for ndiv=10,000. we cannot expect further improvement even if we set ndiv larger, because the min and max points are very close to each other at almost all sample times. checking the details of the measured data in figure 8, we found that there are fluctuations like the sequence of circle points in figure 17, which would be the result of sampling and truncating quantization of the solid line. those may be lsb (least significant bit) fluctuations caused by noise around the digital thresholds. such setbacks are treated as the results of some heating, the simulator producing spontaneous wattage rises in the resultant heating estimation in figure 16. if we suppress those fluctuations, we can eliminate the spontaneous values. it is natural to estimate the original temperature change as the dashed line in figure 17. then, on probation, we eliminated them, judging by eye, adding or subtracting only one lsb around the fluctuation points and leaving the points where the fluctuation is more than one lsb. the result is shown in figure 18; it is slightly smoother than the measured data in figure 8, especially in the tail part. figure 19 is the result of the backward simulation of the data in figure 18 for ndiv=10,000. it shows a better result than figure 16: the values are large for the period from 70 to 160 seconds and almost zero after 170 seconds. the average heater power for the duration of 0 s to 180 s is 746 w. further improvement with a larger ndiv cannot be expected because the min and max points almost coincide at all sample times.

figure 13. backward simulation with resolution improvement.
figure 14. backward simulation result for ndiv=10,000 with resolution improvement applied.
figure 15. backward simulation result for the real data of figure 8 for ndiv=50.
figure 16. backward simulation result for the real data of figure 8 for ndiv=10,000 with resolution improvement.
figure 17. the real a/d data of figure 8 have fluctuating sample points.

7. discussion

by using the hybrid simulation model in a feedback-type simulation, no case branching is required in the backward simulation. the backward simulator we use is for functional evaluation and includes a gui for monitoring and manual operations. it performs a single simulation in about 3.0 s for 100 sample data and about 3.6 s for 382 sample data; the processing time is not proportional to the number of samples. one cause for this may be the fact that we used multithreaded java processing on a four-core, eight-thread cpu (intel i7-3770). figure 20 shows the processing time for the total repetition of model mismatch and data modification relative to ndiv. the processing time in a case of model mismatch depends on the time when the mismatch is detected.
the average processing time for one backward simulation is from 2.5 to 3 s. we modified the temperature data step by step, which means that we improved the resolution for ndiv=100 and then improved it for ndiv=200 by using the result of ndiv=100, as an example. the ordinate of figure 20 shows the total processing time up to the abscissa ndiv value. roughly, the logarithm of the processing time is proportional to the logarithm of ndiv. resolution improvement using model mismatch was successful. although there are many possible methods for such data modification, our method of improving by half of the former resolution whenever a model mismatch is detected was shown to be effective. indeed, the curve of figure 18 is in agreement with the measured data in figure 8. our resolution improvement can be considered a method of searching for feasible temperature data at a higher resolution. as the estimated parameters a and b may have some errors, they may affect the results of the backward simulation. we have to evaluate such effects, though they may have little influence over a long time sequence because those parameters were determined from a relatively long part of the temperature data. although we show here only one real-data simulation result, the simulation model is simple, and the result for simulated data is perfect. also, the result for real measured data is almost perfect even though our model and estimated parameters are not perfect, so we expect that similar results would be obtained for other measured data. to improve the estimates during periods of heating, we need another simulation model, e.g. a model which restricts the heater wattage to one of two ranges. in such a case, we need to find the time point where the estimated heating is not feasible and to determine which temperature data should be modified.

figure 18. fluctuation-eliminated temperature data corresponding to the measured data in figure 8.
figure 19. backward simulation result of the figure-18 data for ndiv=10,000 with resolution improvement.
figure 20. processing time including resolution improvement vs. ndiv for the cases in section 5.

8. conclusion

we showed that a backward simulation incorporating model conformation and resolution improvement is very effective. it is also shown that the backward simulation can be performed efficiently by using our hybrid method (forward simulation objects in a backward simulation structure), eliminating case branches. causal input changes can be practically determined for the simulated data and for the real measurement data. the internal state of any system can be determined by the backward simulation if we define the system's model properly for backward simulation. in the case of data with high resolution, the backward simulator outputs almost perfect results. for cases of limited resolution, repetitive resolution improvement responding to model mismatches was carried out successfully. we also showed that the backward simulation with real measurement data can be done effectively, although noise elimination was needed for errors larger than the quantising errors. the results suggest that we can separate noise from signal by using the backward simulation with a model conformation test.
the backward simulation offers us a new method to infer the causal inputs and internal states of various systems. we need to study the relation among the precision of the simulation output, the degree of model fitness and the snr of the data for backward processing.

acknowledgement

this work was supported by jsps kakenhi grant number 25540006. we thank one of the reviewers who motivated us to improve the simulation.

references
[1] j. v. beck, b. blackwell, c. r. s. clair jr., inverse heat conduction: ill-posed problems, john wiley & sons, 1985, isbn 0-471-08319-4.
[2] y. jarny, d. maillet, linear inverse heat conduction problem – two basic examples, http://www.sft.asso.fr/local/sft/dir/user3775/documents/actes/metti5_school/lectures%26tutorialstexts/text-l10-jarny.pdf.
[3] k. oguni, inverse problem and instrumentation, ohmsha, tokyo, 2011, isbn 978-4-274-06829-4.
[4] t. b. bako, t. daboczi, improved-speed parameter tuning of deconvolution algorithm, ieee trans. instrum. meas., vol. 65, no. 1, 2016, pp. 1568-1576.
[5] a. muqaibel, a. safaai-jazi, b. woerner, s. riad, uwb channel impulse response characterization using deconvolution techniques, proc. mwscas, 2002.
[6] chueng-chiu huang, hsi-huang wang, backward simulation with multiple objectives control, proc. imecs (international multiconference of engineers and computer scientists), hong kong, 2009.
[7] c. d. twigg, d. l. james, backward steps in rigid body simulation, acm trans. graph., vol. 27, no. 3, article 25, 2008.
[8] j. j. cook, reverse execution of java bytecode, the computer journal, vol. 45, no. 6, 2002, pp. 608-619.
[9] y. hiranaka, t. taketa, designing backward range simulator for system diagnoses, proc. xx imeko world congress, 2012.
[10] y. hiranaka, h. sakaki, k. ito, t. taketa, s. miura, numerical backward simulation model with case branching capability, proc. 4th international conference on simulation and modeling methodologies, technologies and applications (simultech 2014), 2014, pp. 225-230.

two-stage current transformer with electronic compensation
acta imeko, july 2012, volume 1, number 1, 85-88, issn: 2221-870x, www.imeko.org
daniel slomovitz, leonardo trigo, carlos faverio
ute - laboratory, paraguay 2385, montevideo, uruguay

keywords: current transformer; compensation; measurement; calibration; power; capacitance; two-stages.
citation: daniel slomovitz, leonardo trigo, carlos faverio, two-stage current transformer with electronic compensation, acta imeko, vol. 1, no. 1, article 16, july 2012, identifier: imeko-acta-01(2012)-01-16
editor: pedro ramos, instituto de telecomunicações and instituto superior técnico/universidade técnica de lisboa, portugal
received january 20th, 2012; in final form june 8th, 2012; published july 2012
copyright: © 2012 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: no information provided
corresponding author: daniel slomovitz, e-mail: dslomo@gmail.com

1. introduction

very high precision wattmeters have been developed in national metrology institutes.
some of them are based on adding devices [1], others on bridges [2], and others use two commercial digital sampling voltmeters (dsv), like the model hp 3458 [3], [4]. most of them require a current-to-voltage transducer for the input current. if a simple resistor shunt is used as this transducer, the maximum current is limited by the maximum allowed dissipation; it is not possible to get very low uncertainties if the currents are higher than some milliamperes. this leads to the use of current transformers (ct) to reduce the current through the shunt to a small value, so that the shunt dissipates a small power with a low temperature rise. burdens around 10 ω for output voltages between 1 v and 2 v lead to powers lower than 1 w, which are acceptable for this kind of application. when two dsvs are used, one digitizes the voltage and the other a voltage proportional to the current. as the best voltage range of the mentioned instrument is 10 v, it is necessary to scale currents that can reach up to 100 a down to voltages of a few volts. the uncertainty of this model of dsv, using special control programs [5], [6], is in the order of some parts in 10^6. then, the whole current-to-voltage transducer must have uncertainties of around a few parts in 10^6 in order not to degrade the system. as this transducer comprises a resistor shunt and a ct, this last device must have uncertainties around 1 part in 10^6. to get that, a two-stage transformer [7], [8] is proposed, to which electronic compensations are added. although electronic compensation of two-stage transformers has been proposed in the past [9-18], none of those proposals reaches a complete compensation of the internal transformer error sources: they do not compensate the errors caused by the internal resistance of the windings nor the errors coming from internal stray capacitances. for this type of ct, the main source of error is the magnetizing current (im). this current flows through the magnetizing branch, so that it does not flow through the output. figure 1 shows a schematic model of this type of transformer. z1 and z2 are the series impedances with resistive and inductive components (r2, l2), while rm, lm represent the components of the magnetizing branch. c1, c2 represent the stray capacitances of the primary and secondary windings. there is no capacitance between primary and secondary due to the electrostatic shields that this kind of ct has.

figure 1. transformer model.

abstract: a current transformer with electronic assistance was developed, intended for current-to-voltage transducers. it is based on the two-stage principle, and it includes electronic compensation for magnetizing errors as well as capacitive errors. the nominal input currents go from 1 a up to 100 a, and the nominal output current is 0.2 a. using a load of 10 ω, the nominal output voltage is 2 v, with 0.4 w of dissipation.

to get low errors, the magnetic flux must be as low as possible, reducing in this way the magnetizing current. for that purpose, the magnetizing voltage (vm) must be reduced as much as possible. with conventional cts there is a limit due to the voltage drop on the load, which in our case is very high, 2 v at 0.2 a; usual standard ct loads are around 0.2 ω, that is, 50 times lower. to reach lower errors, two-stage cts are used in this application.
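to see why a 10 ω burden makes the magnetizing branch critical, a rough order-of-magnitude estimate can help. every component value in the following python sketch is invented for illustration only; none is a parameter of the prototype.

```python
import math

# illustrative-only estimate of the ratio error caused by the magnetizing
# branch of figure 1: the magnetizing current im = vm/zm does not reach the
# output, so the relative error is roughly |im| / i2.
f = 50.0                 # line frequency (hz) - assumption
rb, r2 = 10.0, 0.5       # external burden and winding resistance (ohm) - assumption
rm, lm = 200e3, 1e3      # magnetizing branch resistance/inductance - assumption
i2 = 0.2                 # nominal secondary current (a)

vm = i2 * (rb + r2)                              # magnetizing voltage across zm
zm = 1 / (1 / rm + 1 / (2j * math.pi * f * lm))  # rm parallel with j*w*lm
err = abs(vm / zm) / i2                          # relative ratio error im/i2
print(f"error ~ {err * 1e6:.0f} parts in 10^6 with a {rb} ohm burden")
```

with these invented values the error comes out in the tens of parts in 10^6, far above the 1-part-in-10^6 target, which is what motivates the two-stage arrangement discussed next.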
in two-stage transformers [7], one stage provides the power needed for the core magnetization and for the burden (magnetizing core), while the other stage only supplies the error current of the first stage (ratio core), its magnetic flux being very low but not zero. the exact value of this flux depends on the external burden and on the internal series impedance of its secondary winding: the lower the load, the smaller the magnetic flux. this method requires adding the currents of both secondary windings, which is done using different methods depending on the application. in a current-to-voltage transducer, this addition can be done using two burden resistors instead of only one [19]. one resistor, of high precision, is the burden of the magnetizing secondary winding, and the other one, generally of the same value but lower precision, is the burden of the ratio winding (see figure 2). wm is the magnetizing winding of the two-stage transformer and wc is the ratio one. the core to the left in figure 2 is the magnetizing one, and the core to the right is the ratio one. as the compensating current through wc is much smaller than the main one (through wm), the same applies to the voltage drops on rc and rm, reducing the magnetization current of the compensating core. this allows total errors in the order of 10 parts in 10^6 if burdens around 10 ω are used [19]. for reducing the errors 10 times more, as is the goal of this proposal, an electronic compensating circuit is added.

2. transformer design

the core of both stages is of a high-permeability type (mumetal). both secondary windings (magnetizing and ratio stages) have 500 turns each, as figure 3 shows. there are two groups of primary windings. the first one is formed by 10 groups of 10 turns each; connecting them in parallel-series sets, from 10 to 100 effective turns can be arranged (10, 20, 50 and 100). as the nominal secondary current is 200 ma, the nominal input current ranges are 1 a, 2 a, 5 a and 10 a. for the upper ranges, another primary winding group exists; it has 10 groups of 1 turn each. therefore, using these winding groups, the nominal primary current goes from 10 a up to 100 a. table 1 shows the different ratios available. each group of turns in each winding carries the same current whichever connection is used. in this way, the stray magnetic flux does not change when different ratios are selected. this property ensures that the errors are practically the same for all ratios, allowing a reduction in the calibration work. however, error differences can appear between the low- and high-current windings, because they have different turns. to determine the amount of this variation, there is a 10 a range in each winding. connecting both with opposite polarity, ideally, the output must be null; measuring the actual output, that ratio error difference can be calculated.

3. electronic compensation

the goal of the compensating circuit is to null the magnetic flux in the compensating stage core. for that, a controlled voltage source generates the same value as the sum of the voltage drops in the internal and external series impedances of the compensating secondary winding, but with opposite polarity. in this way, no magnetizing voltage exists in this winding, nor any magnetizing current. from a theoretical point of view, the compensation is complete and the errors are null.
this electronic source supplies only the small power required by the compensating stage of the transformer, so that a simple operational amplifier is enough for this application. figure 4 shows a schematic circuit. the purpose of the electronic device is to null the electromotive force (emf) ε in wc. assuming an internal resistive impedance r of this winding, the following equations apply:

$$\varepsilon = i\,r + v_2 \qquad (1)$$

and

$$v_2 = -\frac{r_2}{r_3}\,i\,r_1 \,, \qquad (2)$$

which leads to

$$\varepsilon = i\left(r - \frac{r_1 r_2}{r_3}\right) . \qquad (3)$$

if r = r1r2/r3, then ε = 0. this shows that it is possible to null the emf in wc, nulling in this way the magnetic flux through the auxiliary core.

figure 2. conventional two-stage current-to-voltage transducer.
figure 3. windings of the proposed transformer; compensating core to the left, main core to the right.

table 1. available ratios using different series-parallel connections.
parallel groups | groups in series | turns per group | ratio | input current (a)
1  | 10 | 10 | 5   | 1
2  | 5  | 10 | 10  | 2
5  | 2  | 10 | 25  | 5
10 | 1  | 10 | 50  | 10
1  | 10 | 1  | 50  | 10
2  | 5  | 1  | 100 | 20
5  | 2  | 1  | 250 | 50
10 | 1  | 1  | 500 | 100

4. error sources

as mentioned, the first error source is the magnetizing current. in the prototype, it was reduced 100 times with the electronic compensator, reaching errors in the order of 0.1 µa/a (in phase and in quadrature). this value includes the ambient temperature variation; the influence of the latter on the compensation is due to the variation of the resistance of the windings, around 0.4 %/k, affecting equation (3). second-order errors include stray capacitances and stray magnetic fluxes. for the first ones, an electrostatic shield between the primary and secondary windings and an electronic compensation reducing the influence of some stray capacitances were used. the shield was connected to an external guard terminal; in this way, stray capacitances between the primary and secondary windings are eliminated. however, stray capacitances between all the windings and the shields remain. it is possible to reduce this effect using shielded cables for the windings [20], but this technique was not used because of the large cable section required in this design. in the secondaries, these capacitances produce errors in quadrature that depend only on the resistive burden; with the 10 ω burden used here, this error source cannot be neglected. to compensate it, an electronic device that simulates a negative capacitor was included at the output, in parallel with the burden. its value was adjusted in such a way that no error differences exist when the burden is changed between 0 ω and 10 ω. the stray capacitances between the primary windings themselves, and to the shield, change when different series-parallel connections are used; however, all these capacitances have low values due to the small number of turns of these windings, leading to errors lower than 1 part in 10^6. to reduce the effect of stray magnetic fluxes and magnetic shields, an equalization winding was added. this winding has 5 partial windings of 5 turns, all connected in parallel, around the core. in this way, it forces the flux to be homogeneous, reducing this error source. the influence of these second-order errors on the whole performance was estimated at 0.4 µa/a for the prototype. then, the total error, including the magnetizing current error, is approximately 0.5 µa/a.
to confirm this theoretical error estimation, the prototype was tested against a current comparator with uncertainties of 0.3 µa/a (in phase and in quadrature). the tested ratios were 5 and 10, and the results are shown in table 2. the uncertainty of this calibration is mainly due to the uncertainty of the current comparator, including its zero detector, as stated above (0.3 µa/a). the error values are basically covered by the estimated errors of the prototype and the uncertainty of the calibration, confirming the theoretical analysis.

figure 4. schematic circuit of the two-stage electronically compensated transformer.

table 2. calibration of the proposed transformer.
ratio | primary current (a) | error in phase x10^-6 | error in quadrature x10^-6
10 | 2 | -0.6 | -0.8
5  | 1 | -0.4 | -0.6

5. prototype

figure 5 shows a photo of the prototype. it has two output binding posts (vertical, red, at the left) corresponding to the transformer secondary. the four red binding posts in the middle correspond to the burden resistor, and the two metallic ones to guard and ground. the black and red banana pair is connected to a thermistor included in the burden resistor to measure its temperature. the other switches and leds, at the right, correspond to the power supplies; to eliminate interference from the power network, batteries are used. at the front panel, it has 10 pairs of terminals of the low-current winding (10 groups of 10 turns), each group connected to a vertical pair of terminals. the series-parallel combinations are made by changing a printed circuit board (pcb) that is connected at the front of the case. the copper side of this pcb is on the inner face of the case, so it is not visible. figure 6 shows the copper sides of two pcbs, for 1 a and 5 a. the upper one connects all the winding groups in series, forming a winding of 100 turns. the lower pcb connects the first five groups in parallel, in series with the last five groups; in this way, the effective number of turns is 20. there is one printed board for each ratio, with different connections. for 10 a, all the groups are connected in parallel. a similar system exists for the high-current winding (10 groups of 1 turn) on the upper side of the case (see figure 7). it also has 10 pairs of terminals that can be connected in different series-parallel configurations. as the currents in these windings are higher than in the first ones, the connections are made using copper bars instead of pcbs.

figure 5. prototype of the proposed transformer.

6. conclusions

a current transformer intended for high-precision current-to-voltage transducers was proposed. it uses a two-stage technique plus electronic assistance. this last device cancels the magnetic flux in the compensating core, cancelling the error of the transformer due to this cause; it does not depend on the internal winding resistance or the external burden. the primary windings can be arranged in different series-parallel connections covering ranges from 1 a to 100 a. the estimated ratio error is about 0.5 µa/a.

references
[1] p. braga, d. slomovitz, "rms voltmeter based power and power-factor measuring system," international journal of electronics, vol. 75, no. 3, sept. 1993, pp. 561-565.
[2] j. bai, l. yang, s. zhao, j. zong, e.
so, "the establishment of power/energy standard at china electric power research institute and its comparison with nrc," conference on precision electromagnetic measurements, conference digest, june 2010, pp. 275-276.
[3] e. tóth, a. m. ribeiro franco, r. m. debatin, "power and energy reference system, applying dual-channel sampling," ieee trans. instrum. meas., vol. 54, no. 1, feb. 2005, pp. 404-408.
[4] u. pogliano, "use of integrative analog-to-digital converters for high-precision measurement of electrical power," ieee trans. instrum. meas., vol. 50, no. 5, oct. 2001, pp. 1315-1318.
[5] r. l. swerlein, "a 10 ppm accurate digital ac measurement algorithm," proc. ncsl, albuquerque, usa, aug. 1991, pp. 17-36.
[6] g. a. kyriazis, r. swerlein, "evaluation of uncertainty in ac voltage measurement using a digital voltmeter and swerlein's algorithm," conference on precision electromagnetic measurements, conference digest, june 2002, pp. 24-25.
[7] h. b. brooks, f. c. holtz, "the two-stage current transformer," aiee trans., vol. 41, june 1922, pp. 382-393.
[8] p. j. betts, "two-stage current transformers in differential calibration circuits," iee proc., vol. 130, pt. a, no. 6, sept. 1983, pp. 324-328.
[9] d. l. h. gibbings, "a circuit for reducing the exciting current of inductive devices," proc. iee, vol. 108, 1961, pp. 339-349.
[10] o. petersons, "a self-balancing current comparator," ieee trans. instrum. meas., vol. im-15, 1966, pp. 62-71.
[11] r. friedl, "current transformers with electronic error compensation," messtechnik, vol. 76, no. 10, 1968, pp. 241-250.
[12] t. m. souders, "wide-band two-stage current transformer of high accuracy," ieee trans. instrum. meas., vol. im-21, june 1972, pp. 340-349.
[13] g. e. bear, "100:1 step-up amplifier-aided two-stage current transformers with small error at 60 hz," ieee trans. instrum. meas., vol. 28, june 1979, pp. 146-152.
[14] p. n. miljanic, e. so, w. j. m. moore, "an electronically enhanced magnetic core for current transformers," conference on precision electromagnetic measurements, june 1990, p. 328.
[15] j. l. west, p. n. miljanic, "an improved two-stage current transformer," ieee trans. instrum. meas., vol. 40, june 1991, pp. 633-635.
[16] e. so, s. ren, d. a. bennett, "high-current high-precision openable-core ac and ac/dc current transformers," ieee trans. instrum. meas., vol. 42, no. 2, april 1993, pp. 571-576.
[17] e. so, d. a. bennett, "a low-current multistage clamp-on current transformer with errors below 50×10^-6," ieee trans. instrum. meas., vol. 46, apr. 1997, pp. 454-458.
[18] j. l. west, p. n. miljanic, "an improved two-stage current transformer," ieee trans. instrum. meas., vol. 40, june 1991, pp. 633-635.
[19] conimed, application note for model cct20, high-accuracy current-to-voltage converter, www.conimed.com.
[20] d. slomovitz, h. de souza, "shielded electronic current transformer," ieee trans. instrum. meas., vol. 54, no. 2, apr. 2005, pp. 500-502.

figure 6. printed circuit boards used to connect the groups of low-current primary windings; the upper one corresponds to 1 a of nominal current (100 turns), and the lower one to 5 a (20 turns).
figure 7. connection terminals for the high-current windings on the upper side of the case.
influence of the human body mass in the open-air mri on acoustic noise spectrum
acta imeko, issn: 2221-870x, november 2016, volume 5, number 3, 81-86

jiří přibil 1, anna přibilová 2, ivan frollo 1
1 institute of measurement science, slovak academy of sciences, bratislava, slovakia
2 institute of electronics and photonics, faculty of electrical engineering & information technology, slovak university of technology, bratislava, slovakia

section: research paper
keywords: acoustic noise; noise reduction; signal processing; spectral analysis; statistical evaluation
citation: jiří přibil, anna přibilová, ivan frollo, influence of the human body mass in the open-air mri on acoustic noise spectrum, acta imeko, vol. 5, no. 3, article 13, november 2016, identifier: imeko-acta-05 (2016)-03-13
editor: paolo carbone, university of perugia, italy
received december 21, 2015; in final form may 26, 2016; published september 2016
copyright: © 2016 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: the work has been done in the framework of the cost action ic 1206 and has been supported by the grant agency of the slovak academy of sciences (vega 2/0013/14)
corresponding author: jiří přibil, e-mail: jiri.pribil@savba.sk

1. introduction

the magnetic resonance imaging (mri) device usually consists of three gradient coils that produce three orthogonal linear fields for spatial encoding of a scanned object. it follows from the physical principle that the rapid changes of the lorentz forces during fast switching inside the weak static field b0 environment [1] result in a significant mechanical vibration of these gradient coils. this process subsequently propagates in the air in the form of a progressive sound wave, received by the human auditory system as noise [2]. due to its harmonic nature and its audio frequency range, the acoustic noise produced by this device can generally be treated like a voiced speech signal, and thus it can be recorded by a microphone and processed in the spectral domain using methods similar to those used for speech signal analysis. the mri technique enables analysis of the human vocal tract structure and its dynamic shaping during speech production while a speech signal is recorded simultaneously [3], [4]. the primary volume models of the human acoustic supra-glottal cavities constructed from the mr images can be transformed into three-dimensional (3d) finite element (fe) models [5]. image and audio acquisition must be synchronized, and the recorded speech signal must have a good signal-to-noise ratio so that high-quality results are achieved in 3d vocal tract model creation [6]. several approaches to reducing the noise in mri equipment [7]-[9] are used in practice. one group of these enhancement methods is based on spectral subtraction of the estimated background noise when at least two microphones are used [10]. our method uses only one microphone, which picks up the noise of the running mri scan sequence without phonation and then records the speech signal during phonation with the background mri noise.
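as a rough illustration of the one-microphone idea (not the paper's cepstral method, which is developed below and in [12]), a plain spectral subtraction of a noise-only envelope could look like the following python sketch; the function name and the spectral floor value are our assumptions.

```python
import numpy as np

def subtract_noise_envelope(noisy_spec, noise_env, floor=0.05):
    """one-microphone spectral subtraction: the envelope estimated from a
    noise-only part of the recording is subtracted from the magnitude
    spectrum of the noisy speech, with a spectral floor preventing
    negative magnitudes (illustrative sketch only)."""
    cleaned = np.abs(noisy_spec) - noise_env
    return np.maximum(cleaned, floor * np.abs(noisy_spec))
```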
the following signal processing is carried out: the part of the signal containing only the mri noise is used to calculate the basic spectral envelope using the mean welch periodogram, and then it is analyzed in segments to determine the basic and supplementary spectral properties. the obtained spectral features are subsequently processed statistically, and the achieved values are used to modify the basic spectral envelope of the noise signal, which is further subtracted from the spectral envelope of the speech signal with the superimposed mri noise. in general, these noise estimation techniques based on statistical approaches are not able to track real noise variations; thereby they result in an artificial residual fluctuating noise and a distorted speech. therefore, the spectral properties of the acoustic noise generated by the gradient system of the mri device must be analyzed with high precision, so that the noise is efficiently suppressed while the maximum quality of the processed speech signal is preserved. to investigate the transmission of vibration through the plastic holder of the mri device scanning area, the measurement arrangement uses the testing spherical water phantom employed for testing the magnetic field homogeneity [11] in the device calibration phase. the situation changes when the examined person lies in the scanning area of the open-air mri and the holder of the lower gradient coils is loaded with his/her weight. then the mass of the whole mechanical system is altered, and a change in the spectral properties of the generated acoustic noise is expected, too. to verify this working hypothesis, the noise signal recording and its spectral analysis were performed for different person weights and with the water phantom only (for comparison). the obtained results will be used to devise an improvement of the developed cepstral-based noise reduction method [12] for the speech recorded during mri scanning.

abstract: the paper analyses changes in spectral properties of the acoustic noise when the examined person lies in the scanning area of the open-air magnetic resonance imager (mri), so that the holder of the lower gradient coils is loaded with the mechanical mass represented by the person's weight. the acoustic noise pressure level is mapped in the mri neighborhood, too. the obtained results of the spectral analysis will be used for the design of a correction filter to suppress the noise in the simultaneously recorded speech signal for 3d modeling of the human vocal tract.

2. analysis of spectral properties of the acoustic noise signal

the spectrogram can be successfully used for visual comparison of differences in the time/frequency domain – see an example in figure 1a,b. the disadvantage of subjectivity in this method can be eliminated by numerical matching of the spectral envelopes. to obtain the smoothed spectral envelope, the mean periodogram can be computed by the welch method. in general, the periodogram represents an estimate of the power spectral density (psd) of the input signal. using the nfft-point fft to compute the psd as s(e^{jω})/fs, with the sampling frequency fs in hz, we obtain the resulting spectral density in logarithmic scale expressed in db/hz – see the graphs in figure 1c,d. the basic spectral properties can be determined from the spectral envelope, and subsequently the histograms of the spectral values can be calculated for objective matching – see figure 1e.
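assuming the 16 khz sampling rate used later in the experiment, the envelope and matching computations of this section could be sketched as follows; the welch segment lengths are our assumption.

```python
import numpy as np
from scipy.signal import welch

def envelope_db(signal, fs=16000, nperseg=1024, nfft=2048):
    """smoothed spectral envelope as the mean welch periodogram in db/hz."""
    f, psd = welch(signal, fs=fs, nperseg=nperseg, nfft=nfft)
    return f, 10 * np.log10(psd)

def compare_envelopes(sig1, sig2, fs=16000, f_hi=2500):
    """rms spectral distance d_rms, absolute difference s_diff, and the
    location (f_max, p_max) of the maximum difference in the 0-f_hi roi."""
    f, e1 = envelope_db(sig1, fs)
    _, e2 = envelope_db(sig2, fs)
    d_rms = np.sqrt(np.mean((e1 - e2) ** 2))   # rms-based spectral distance
    s_diff = np.abs(e1 - e2)                   # absolute difference in db
    band = f <= f_hi                           # low-frequency region of interest
    i = np.argmax(s_diff[band])
    return d_rms, s_diff, f[band][i], s_diff[band][i]
```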
further detailed analysis in the frequency domain is done in the region of interest (roi) – see the visualization in figure 2. for numerical comparison it is possible to calculate the rms-based spectral distances drms between the periodograms corresponding to the mri noise signals with different persons lying in the mri device scanning area or with the sole phantom object. in addition, for evaluation of the differences between the spectral envelope values in the chosen frequency range, the absolute spectral difference sdiff can be calculated by subtraction of their values in db. analyzing this differential signal, the maximum value pmax can be localized and the corresponding frequency fmax can be determined – see the graphs (b) and (c) in figure 2. the supplementary spectral properties are usually determined from the frames (after segmentation and windowing). these properties describe the shape of the magnitude of the power spectrum |s(k)|² of the noise signal, and they can be determined with the help of additional statistical parameters [13]. the spectral centroid (scentr) is defined as the center of gravity of the spectrum, i.e. the average frequency weighted by the values of the normalized energy of each frequency component in the spectrum

$$s_{centr} = \frac{\sum_{k=1}^{N_{FFT}/2} k\,|s(k)|^2}{\sum_{k=1}^{N_{FFT}/2} |s(k)|^2} \,. \qquad (1)$$

figure 1. spectrograms of two mri noise signals (a, b), power spectral densities and their envelopes in the selected 250-ms rois depicted for the low-frequency band 0~2.5 khz (c)-(d), histograms of these spectral envelopes (e).
figure 2. the smoothed envelopes by mean periodograms of two noise signals in the full frequency range 0~fs/2 together with the calculated rms spectral distance (a), the selected roi in the low-frequency band 0~2.5 khz (b), the absolute differential signal sdiff with localization of the frequency fmax corresponding to the maximum difference pmax (c).

the spectral flatness (sflat) is determined as the ratio of the geometric and the arithmetic mean values of the power spectrum, and it also describes the degree of periodicity in the signal [14]

$$s_{flat} = \frac{\left(\prod_{k=1}^{N_{FFT}/2} |s(k)|^2\right)^{2/N_{FFT}}}{\frac{2}{N_{FFT}}\sum_{k=1}^{N_{FFT}/2} |s(k)|^2} \,. \qquad (2)$$

the spectral spread (sspread) parameter represents the dispersion of the power spectrum around its mean value

$$s_{spread} = E\left[(x-\mu)^2\right] = \sigma^2 \,, \qquad (3)$$

where μ is the first central moment and σ is the standard deviation of the spectrum values.
the spectral skewness (sskew) is a measure of the asymmetry of the data around the sample mean and can be determined from the third moment

$$s_{skew} = E\left[(x-\mu)^3\right]/\sigma^3 \,. \qquad (4)$$

the spectral kurtosis (skurt), expressed by the fourth central moment, represents a measure of the peakedness or flatness of the shape of the spectrum relative to the normal distribution, for which it is 3 (or 0 after subtraction of 3)

$$s_{kurt} = E\left[(x-\mu)^4\right]/\sigma^4 - 3 \,. \qquad (5)$$

3. subject, experiments, and results

the analyzed open-air mri equipment e-scan opera contains an adjustable bed which can be positioned in the range of 0 to 180 degrees, where 0 degrees represents the left corner near the temperature stabilizer device [15] – see the principal angle diagram of the mri scanning area in figure 3d. this noise has an almost constant sound pressure level (spl), and consequently it can easily be subtracted as a background (spl0). due to the low basic magnetic field b0 (up to 0.2 t) in the scanning area of this mri machine, any interaction with the recording microphone must be eliminated. as the noise properties depend on the microphone position, the optimal recording parameters (the distance between the central point of the mri scanning area and the microphone membrane, the direction angle, the working height, and the type of the microphone pickup pattern) must be found. the chosen type of sequence together with the basic scan parameters – repetition time (tr) and echo time (te) – has a significant influence on the scanning time. the values of these parameters result primarily from the chosen type of scanning sequence; they can also be changed slightly by hand, but their final values depend on the setting of the other scan parameters – field of view (fov), number of slices, slice thickness, etc. the realization of the experiment with measurement of the acoustic noise produced by the gradient system of the mri device consists of two phases. first, the noise signal is recorded by the pick-up microphone during execution of a scan mr sequence with different testing persons lying in the scanning area of the mri device, as documented by the photo in figure 3a, and using only the testing spherical phantom with a diameter of 14 cm, filled with a solution of cuso4 in distilled water [15] (cuso4 shortens the tr time and speeds up the mr data collection) [11]. this phantom is placed in the middle of the scanning area inside the scanning rf coil – see figure 3b. then, the noise signals are processed as follows (the supplementary features of step 1 are illustrated in the code sketch after this list):
1. calculation of the basic and supplementary spectral properties of the recorded acoustic noise signals; determination of the main differences between the signals with and without a lying person in the scanning area, and detailed analysis of the influence of different person weights on the spectral properties.
2. visual comparison of the smoothed spectral envelopes in the full frequency range up to fs/2 (0 to 8k), in the low-frequency sub-band up to 2.5 khz (0 to 2k), and in the middle-frequency sub-band of 2~6 khz (2 to 6k); determination of the spectral distances drms of these envelopes between individual persons or the phantom object, and localization of the frequency fmax corresponding to the maximum difference pmax within the low-frequency band 0 to 2k.
3. statistical processing of the determined values of the basic and supplementary spectral properties; numerical matching of the obtained results.
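a minimal sketch of the supplementary features of equations (1)-(5), computed per windowed frame, may be useful here; the bin-index weighting is our simplification of the text's frequency weighting, and the function name is illustrative.

```python
import numpy as np

def supplementary_features(frame, nfft=2048):
    """spectral centroid, flatness, spread, skewness and kurtosis of one
    windowed noise frame, following equations (1)-(5), with the bin index
    k used as the variable x."""
    s2 = np.abs(np.fft.rfft(frame, nfft))[: nfft // 2] ** 2   # power spectrum
    k = np.arange(1, nfft // 2 + 1)
    w = s2 / s2.sum()                                         # normalized weights
    centr = np.sum(k * w)                                             # eq. (1)
    flat = np.exp(np.mean(np.log(s2 + 1e-20))) / np.mean(s2)          # eq. (2)
    mu = centr
    sigma2 = np.sum(w * (k - mu) ** 2)                                # eq. (3)
    sigma = np.sqrt(sigma2)
    skew = np.sum(w * (k - mu) ** 3) / sigma ** 3                     # eq. (4)
    kurt = np.sum(w * (k - mu) ** 4) / sigma ** 4 - 3                 # eq. (5)
    return centr, flat, sigma2, skew, kurt
```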
figure 3. an arrangement of noise spl measurement and recording in the mri opera: a lying person with a pick-up microphone at the position of 90 degrees (a), the testing water phantom with the recording microphone at the position of 150 degrees (b), the sound level meter at the position of 30 degrees (c), a principal angle diagram of the mri scanning area (d).

a comparison of the spectral envelopes in the low-frequency band up to 2.5 khz together with their histograms for the selected pairs can be seen in figure 4. the observed differences between the selected supplementary spectral properties are visualized by the histograms in figure 5; the summarized numerical values are introduced in table 1. the ssf-3d scanning sequence chosen for this comparison experiment can be used for the 3d mr scans of the human vocal tract [12], [16]. the auxiliary parameters of the used sequence were set as follows: te=10 ms, tr=45 ms, 10 slices, 4 mm thick, sagittal orientation. the spherical testing phantom described earlier in this section was inserted in the scanning rf knee coil when the noise signal was recorded without a testing person lying in the mri scanning area. in correspondence with our previous research [12], the mapping of the acoustic noise spl in the mri neighborhood was performed for the mentioned scan sequence. the sound level meter of the multi-function environment meter lafayette dt 8820 (with the range set to 35~100 db) was used for the measurement of the noise spl in the directions of 30, 90 and 150 degrees – see the obtained values in graphical form in figure 6a. the spl meter was located at distances of 45, 60, and 75 cm from the central point of the scanning area (see figure 3c). subsequently, the directional pattern of the acoustic noise spl distribution was measured in the range of <0-180> degrees, in 5-degree steps, at the distance of dl=60 cm from the mri device central point with the inserted testing water phantom – see the resulting diagram including the spl0 background values in figure 6b. the noise measurement was practically realized by real-time recording of the signal by a microphone and transferring it to an external notebook during execution of the chosen scan mr sequence. the recording microphone was located at a distance of 60 cm, at the horizontal positions of 30, 90, and 150 degrees, and vertically in the middle between both gradient coils. the input analogue noise signal, picked up by the 1" behringer dual-diaphragm condenser microphone b-2 pro with the cardioid polar pattern setting, was pre-amplified and processed by the mixer device behringer xenyx 502 connected to the notebook by the usb audio interface u-control uca202. the noise signal was recorded at a sampling frequency of 32 khz, then resampled to 16 khz, and subsequently processed. stationary signal parts with a time duration of 8 s were selected and normalized to the level of -16 db using the sound editor program sound forge 8.0. the collected noise database originates from the records of six testing persons lying in the mri scanning area: three males (m1-m3) with
approximate weights of about 78 kg, three females (f1-f3) with a mean weight of 53 kg, and the testing water phantom with a holder (wp) weighing about 0.75 kg. the patient's bed with the examined person was located at 180 degrees – this configuration was chosen since it has a minimal effect on the mentioned temperature stabilizer.

figure 4. spectral envelopes in the frequency band up to 2.5 khz and their histograms for the selected pairs: 78-kg male m1 / 0.75-kg phantom wp (a), 50-kg female f2 / 75-kg male m2 (b), 50-kg female f2 / 55-kg female f3 (c); microphone at 90 degrees.
figure 5. histograms of the supplementary spectral properties: centroid (a), flatness (b), spread (c), skewness (d), and kurtosis (e) for the noise signals recorded at the position of 150 degrees.
figure 6. bar-graph of the noise spl values in the directions of 30, 90, and 150 degrees at the distances of dl = 45, 60, and 75 cm (a); detailed directional pattern of the noise source together with the background noise spl0, dl = 60 cm (b), for the used ssf-3d mr scan sequence.

table 1. summary comparison of mean values of spectral properties of the acoustic noise for tested pairs of male/female persons and the water phantom (wp) inserted in the scanning area of the mri device e-scan opera.
tested pairs | fmax [khz] | pmax [db] | drms (0_2k) [db] | drms (2_6k) [db] | drms (0_8k) [db]
male/wp | 0.577 | 9.81 | 4.34 | 3.85 | 4.25
female/wp | 0.575 | 8.95 | 4.18 | 2.78 | 3.55
female/male | 0.565 | 5.49 | 3.25 | 3.98 | 3.49
male/male | 1.85 | 3.65 | 1.62 | 1.72 | 1.62
female/female | 1.26 | 3.27 | 1.29 | 1.69 | 1.54

4. discussion of obtained results

the obtained results were evaluated by comparison of the basic spectral properties of the noise signals measured for the pairs with different weights: male/phantom, female/phantom, female/male, male/male, and female/female. the determined spectral differences of the noise signals between the persons with similar weights (female/female – see the graphs in figure 4c) are small in all three observed frequency ranges. for the male/female pairs the spectral differences are greater and well visible in the spectral envelopes within the low-frequency band up to 2.5 khz – see the graphs in figure 4b (male person m3 with the weight of 80 kg). for the male/phantom pairs the spectral differences are the greatest, as can be noticed in the subplots of figure 4a. the achieved results for the supplementary spectral properties of the recorded noise, mapped by the histograms (see figure 5), document that the range of values represented by the standard deviation is significantly broader for the male and female persons than for the testing phantom. this is caused by the fact that the same phantom weight was always used, whereas the values of both the male and female groups were taken from measurements of three persons of different weights. next, the results of the numerical matching of the noise spectral properties for the tested groups (phantom object, male or female persons) are in correlation with the graphical ones, as shown by the summarized mean values in table 2.
regarding the supplementary spectral properties of the recorded noise, the achieved results are in correlation with the obtained value distributions mapped by the histograms (see figure 5) and with the numerical matching of the mean values of the noise spectral properties for the tested pairs summarized in table 2. for small differences in the masses (similar weights of the subjects) the frequency of the detected maximum difference between the two spectral envelopes is higher than for great differences between the masses (the weight of a male subject versus the water phantom giving this frequency at about 570 hz). from the performed comparison of the supplementary spectral properties for the three tested locations of the recording microphone it follows that no significant differences exist; however, the best results are obtained for the microphone position at 90 degrees, i.e. directed at the face of the lying person. at the microphone position of 30 degrees the influence of the mri temperature stabilizer can be superimposed as an additive noise with normal distribution. the microphone position at 150 degrees is unnatural from the point of view of the lying person, and, in addition, the distance between the speaker's face and the recording microphone was the longest.

5. conclusions

the changes in the spectral properties of the noise signal generated by the gradient coils during the mri scanning sequences while the examined person lay in the scanning area of the open-air mri machine were analyzed. the resulting supplementary spectral features describe also the degree of voicing and the statistical properties of the noise component of the speech signal (type of noise, randomness, distribution, etc.). this information is necessary for the correct application of the excitation in the cepstral speech reconstruction after noise suppression [12]. the obtained results will serve to create databases of initial parameters (such as the bank of noise signal pre-processing filters) for the developed cepstrum-based algorithm for noise suppression in the recorded speech. it will be useful in experimental practice, where it often occurs that the basic parameter setting of the used scanning sequence as well as the other scanning parameters must be changed depending on the currently tested person. in addition, it would be very interesting to carry out a detailed analysis and to determine the spectral properties of the mri noise as a function of the mass of the subject; however, at this stage of our research only six volunteers (healthy people – colleagues) took part in our experiment, so this is very difficult or practically impossible. the solution to this issue may be cooperation with some medical centre (in bratislava, brno, etc.) having a certificate for work with patients. finally, for better knowledge of the acoustic noise conditions in the scanning area and in the vicinity of the mri device, additional measurements and experiments are necessary. to describe how the vibrations induce the acoustic noise and how they travel through the plastic holder of the mri device, the time delay between the vibration impulses caused by the gradient coils and the noise signal must be analyzed, too.

acknowledgement

the authors would like to thank the volunteers from the department of imaging methods, institute of measurement science in bratislava, for their help in our experiments.

references
[1] a. moelker, p. a. wielopolski, m. t.
[1] a. moelker, p.a. wielopolski, m.t. pattynama, relationship between magnetic field strength and magnetic-resonance-related acoustic noise levels, magnetic resonance materials in physics, biology and medicine 16 (2003) pp. 52-55. [2] g.z. yao, c.k. mechefske, r.k. brian, acoustic noise simulation and measurement of a gradient insert in a 4 t mri, applied acoustics 66 (2005) pp. 957-973. [3] d. aalto et al., large scale data acquisition of simultaneous mri and speech, applied acoustics 83 (2014) pp. 64-75. [4] a.c. freitas, m. wylezinska, m.j. birch, s.e. petersen, m.e. miquel, comparison of cartesian and non-cartesian real-time mri sequences at 1.5t to assess velar motion and velopharyngeal closure during speech, plos one 11 (2016) e0153322. doi: 10.1371/journal.pone.0153322 [5] s.m.r. ventura, d.r.s. freitas, i.m.a.p. ramos, j.m.t.s. tavares, morphologic differences in vocal tract resonance cavities of voice professionals: an mri-based study, journal of voice 27 (2013) pp. 132-140. [6] t. vampola, a.m. laukkanen, j. horáček, j.g. švec, vocal tract changes caused by phonation into a tube: a case study using computer tomography and finite element modelling, journal of the acoustical society of america 129 (2011) pp. 310-315. [7] g. kannan, a.a. milani, i.m.s. panahi, r.w. briggs, an efficient feedback active noise control algorithm based on reduced-order linear predictive modeling of fmri acoustic noise, ieee transactions on biomedical engineering 53 (2011) pp. 3303-3309. [8] d. aalto et al., large scale data acquisition of simultaneous mri and speech, applied acoustics 83 (2014) pp. 64-75. [9] x. shou et al., the suppression of selected acoustic frequencies in mri, applied acoustics 71 (2010) pp. 191-200. [10] g. sun et al., adaptive speech enhancement using directional microphone in a 4-t scanner, magnetic resonance materials in physics, biology and medicine 28 (2015) pp. 473-484. [11] i. frollo et al., measurement and imaging of planar electromagnetic phantoms based on nmr imaging methods, measurement science review 10 (2010) pp. 97-101. [12] j. přibil, j. horáček, p. horák, two methods of mechanical noise reduction of recorded speech during phonation in an mri device, measurement science review 11 (2011) pp. 92-98. [13] y. dodge, the concise encyclopedia of statistics, springer, 2008, isbn 978-0-387-32833-1. [14] s. aleinik, o. kudashev, "estimating stochasticity of acoustic signals", in: speech and computer, lncs 8773, a. ronzhin, r. potapova, v. delic (editors), springer, cham, heidelberg, new york, isbn 978-3-319-11580-1, pp. 192-199. [15] e-scan opera, image quality and sequences manual, 830023522 rev. a, esaote, genova, italy, 2008. [16] j. přibil, d. gogola, t. dermek, i. frollo, design, realization, and experiments with a new rf head probe coil for human vocal tract imaging in an nmr device, measurement science review 12 (2012) pp. 98-103.
acta imeko  issn: 2221-870x  february 2015, volume 4, number 1, 26-34

good practice guide for calibrating a hydrophone "in situ" with a non-omnidirectional source at 10 khz
albert garcia-benadí 1, javier cadena-muñoz 2, joaquín del río fernandez 2, xavier roset juan 2, antoni manuel-làzaro 2
1 laboratori de metrologia i calibratge, centre tecnològic de vilanova i la geltrú, universitat politècnica de catalunya (upc), rambla exposició 24, 08800 vilanova i la geltrú, barcelona, spain, albert.garcia-benadi@upc.edu
2 sarti research group, electronics dept., universitat politècnica de catalunya (upc), rambla exposició 24, 08800 vilanova i la geltrú, barcelona, spain, +(34) 938 967 200, www.cdsarti.org

abstract  the aim of this paper is to provide the basis for the calibration of a hydrophone "in situ" with a non-omnidirectional source at 10 khz, thus assigning a value of uncertainty which may be high but, depending on the requirements, may be sufficient.

section: research paper   keywords: hydrophone; uncertainty budget; seawater  citation: albert garcia-benadí, javier cadena-muñoz, joaquín del río fernandez, xavier roset juan, antoni manuel-làzaro, good practice guide for calibrating a hydrophone "in situ" with a non-omnidirectional source at 10 khz, acta imeko, vol. 4, no. 1, article 6, february 2015, identifier: imeko-acta-04 (2015)-01-06  editor: paolo carbone, university of perugia   received december 10th, 2013; in final form december 16th, 2014; published february 2015  copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited  funding: (none reported)  corresponding author: albert garcia-benadí, e-mail: albert.garcia-benadi@upc.edu

1. introduction  nowadays a multitude of tests are performed in the marine environment, such as the measurement of salinity, acidity or basicity (ph) and carbon dioxide (co2). some of these tests include the measurement of noise pollution as well as the study of cetaceans in the marine environment. hydrophones are used to perform such tests. these devices are microphones for the marine environment. hydrophones can be of different types, but their principal task is the transformation of a pressure variation into an electrical variation. most hydrophones are based on a piezoelectric transducer that generates electricity when exposed to a change in pressure. the main parameter that relates the electrical and pressure magnitudes is the sensitivity. the following equation (1) is used to express the sensitivity:

$s = 20 \log_{10}\left(\frac{v/v_{ref}}{p/p_{ref}}\right)$ (1)

the unit of sensitivity is expressed as db rel 1 v/µpa: the unit is db (decibels), and the reference values are $v_{ref}$ = 1 v and $p_{ref}$ = 1 µpa. the quantity $p_{ref}$ is 1 µpa because it is the basic reference pressure in seawater; in air $p_{ref}$ is 20 µpa. in our case the output value is a voltage, v, which is converted to a pressure, p, via the sensitivity. a practical case: the hydrophone has a sensitivity value of -192 db rel 1 v/µpa and gives 15 mv. the real pressure can be obtained from equation (2), and the result is 59·10⁶ µpa. in this example the reference values are not written explicitly because the voltage value is expressed in v and the result obtained in µpa:

$p = \frac{v}{10^{s/20}}$ (2)

in (2) the pressure value is calculated, but the parameter of interest is the reception level, rl. equation (3) shows its relation to the pressure; the result is in db:

$rl = 20 \log_{10}\left(\frac{p_{read}}{p_{reference}}\right)$ (3)

the real result for 15 mv is 155.52 db.
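as a quick illustration of equations (1)-(3), the following python sketch (the helper names are ours, not from the paper) converts a measured voltage to pressure and reception level for the worked example above.

```python
import math

def pressure_from_voltage(v, s_db):
    """pressure in µPa from a voltage in V, given the sensitivity s in dB rel 1 V/µPa (eq. 2)."""
    return v / 10 ** (s_db / 20)

def reception_level(p_upa, p_ref=1.0):
    """reception level in dB rel 1 µPa (eq. 3)."""
    return 20 * math.log10(p_upa / p_ref)

v = 15e-3      # 15 mV hydrophone output
s = -192.0     # sensitivity, dB rel 1 V/µPa
p = pressure_from_voltage(v, s)   # ≈ 5.9e7 µPa, i.e. the 59e6 µPa of the text
rl = reception_level(p)           # ≈ 155.5 dB, matching the 155.52 dB quoted
print(f"p = {p:.3g} µPa, rl = {rl:.2f} dB")
```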
the sensitivity is a function of frequency; thus the aim of the hydrophone calibration is to obtain the sensitivity as a function of frequency. the calibration of this equipment is detailed in various standards such as [1], where the calibration is done as a function of frequency. the standard calibrations are carried out in laboratories where all parameters are controlled. the calibration method is described in [1]. the standard is divided into 7 topics: free-field reciprocity, free-field calibration by comparison, calibration by hydrostatic excitation, calibration by piezoelectric compensation, acoustical coupler reciprocity, calibration with a pistonphone and calibration with a vibrating column. the uncertainty value obtained in accredited laboratories is less than 1 db rel 1 v/µpa. although standard calibrations of hydrophones are essential to maintain the quality of acoustic monitoring, there is also a need for in-situ calibrations. the high cost for marine observatories of retrieving and redeploying the hydrophones, the requirement for continuous operation and the need for more frequent calibrations are the main motivations for in-situ calibrations. therefore, we propose the calibration of the hydrophone in the marine environment at 10 khz with a non-omnidirectional source. the reason for using a non-omnidirectional source is that it poses more problems, and the purpose of the paper is to solve those problems by giving clear guidelines. this method of calibration involves a considerable increase in the uncertainty, because the main parameters are not controlled but measured. another problem is the low reproducibility of the test, since the same conditions over the sea surface and underwater will not occur again. however, in many cases this increase in the uncertainty is compensated by the small investment needed to perform the calibration. the objective of the calibration is to obtain the sensitivity as a function of frequency. performing the calibration in a laboratory is expensive because the logistics involve three steps. the first is the process that involves ships and divers or a rov (remotely operated vehicle) to bring the hydrophone back to the surface. the second step is the processing time for the calibration, and the last one is similar to the first but in reverse, to put the equipment back. all these procedures are expensive, and the overall processing time can exceed 1 month. for these reasons the aim of the paper is to propose an in situ calibration methodology to reduce the calibration time and costs. in order to achieve this objective it is necessary to evaluate various elements of the environment, such as the spreading factor, the attenuation index and the echo factor. 2. development  in this section, the method used, the equipment required, the basic equations of the sound level in the marine environment, the geometrical approach of the scene, all relevant factors in marine environment sound and their uncertainty calculations will be explained in detail.
2.1. equipment  the equipment needed to carry out the calibration consists of a sound source, a hydrophone and a gnss (global navigation satellite system) receiver. the properties of the sound source, depicted in figure 1 (left), have to be known, as well as its calibration uncertainty and its tvr (transmit voltage response) as a function of the spl (sound pressure level) and the emission frequency. the hydrophone (dut, device under test), depicted in figure 1 (center), must be characterized with its sensitivity as a function of frequency. in our case, the hydrophone is an integrated device, including the analogue receiver, the digital converters and the ethernet transmitter. the hydrophone is a bjørge, model naxys ethernet 02345. it is composed of an acoustically transparent coverage membrane, where the transducer element is located. figure 2 shows all the enclosed elements. the signals sent by the hydrophone are received over the internet protocol suite, in this case by a udp (user datagram protocol) server. the data structure is detailed on the webpage of the manufacturer. the gnss receiver has to be able to get the raw data from the satellites and to be compatible with an open source program package such as the rtklib [2] applications, where the acronym rtk means real time kinematic. the signal generator used in this calibration procedure is a hp 33120a device. it is depicted in figure 1 (right) and provides the signal to be amplified. it is connected to the amplifier and a sound pressure generator.

figure 1. the non-omnidirectional sound pressure generator, lubell model ll9642t, used as acoustic source (left), the hydrophone (bjørge naxys ethernet hydrophone 02345) used as acoustic receiver (center), and the signal generator hp 33120a (right).
figure 2. main components of the hydrophone.

the signal generated is a pulse with a frequency of 10 khz, generated once per second. figure 3 shows one of these pulses injected into the environment. the reason why a pulse is generated every second is to avoid any interference that may arise in reception. if the signal generated were a continuous pulse, there would be interference in the reception, caused by bounces off the seawater surface and the seafloor. the preliminary study of the best conditions is detailed in section 2.5. the compass is embedded in a gnss receiver with a bluetooth connection. the gnss is a woxter, model bt-tracer100. 2.2. basic equations  the simplified form of the sound wave propagation equation (4) links three acoustic parameters:

$rl = sl - tl$ (4)

the source sound level, sl, is the acoustic spectral density produced by an acoustic source, recalculated to a distance of 1 meter. the receipt level, rl, is the acoustic signal obtained with the hydrophone. the transmission loss, tl, is the decrease in the sound level radiated by a target over a distance. the rl value is obtained with a sensor, and the sl value is obtained at 1 meter with another calibrated hydrophone. these measurements are performed with another hydrophone, in this case the b&k 8103. the calibration is performed at 1 meter depth so as to avoid the initial perturbations of the transient signal from the generator. the output of all generators is specified at 1 meter. in this case, the source is non-omnidirectional and for this reason it is very important to know the direction of emission. this factor is studied in sections 2.4 and 5.1.
the last contribution is the transmission loss, which is composed of other factors, as described in equation (5):

$tl = c \log_{10}(1000\,r) + \alpha\,r + r_{echo}$ (5)

where c is the spreading factor, α is the attenuation index [3], r is the distance between source and receiver in kilometers, and $r_{echo}$ is the echo contribution of the seawater surface and the seabed.

figure 3. pulse injected into the environment.
figure 4. the attenuation index as a function of temperature and frequency. the salinity, ph and depth are 30 ‰, 8 and 20 m, respectively.
figure 5. the attenuation index as a function of depth and frequency. the salinity, ph and temperature are 30 ‰, 8 and 15 °c, respectively.
figure 6. the attenuation index versus ph and frequency. the salinity, depth and temperature are 35 ‰, 20 m and 15 °c, respectively.
figure 7. the attenuation index versus salinity and frequency. the depth, ph and temperature are 20 m, 8 and 15 °c, respectively.

the bounce on the water surface implies a phase change; however, there is no phase change at the seafloor. as the distance of the first bounce on the water surface is greater than that of the rebound on the seabed, we consider only one of the two contributions. another representation of (4) with the simple variables is (6), where v is the voltage received by the hydrophone, $p_s$ is the pressure generated and $p_0$ is the basic reference pressure in seawater (1 µpa):

$20\log_{10}(v) - s = 20\log_{10}\left(\frac{p_s}{p_0}\right) - \left(c\log_{10}(1000\,r) + \alpha\,r + r_{echo}\right)$ (6)

the sensitivity, s, can then be calculated as in (3). the terms in (6) can be divided into the different fields (7), (8) and (9) from (4), the first being a function of the receiver, the second a function of the source and the last a function of the environment. conceptually, the sensitivity is the electrical energy generated minus the pressure energy received, and this pressure energy received is the same as that generated minus the energy absorbed or lost in the environment:

$rl = 20\log_{10}(v) - s$ (7)
$sl = 20\log_{10}\left(\frac{p_s}{p_0}\right)$ (8)
$tl = c\log_{10}(1000\,r) + \alpha\,r + r_{echo}$ (9)

2.3. transmission loss  the transmission loss depends on other quantities, as can be seen in equation (5). these factors are detailed below. the spreading factor is a function of the geometry and relief of the seabed. in an ideal case, the parameter c has two possible values: c = 20 for spherical propagation and c = 10 for cylindrical propagation. however, in reality the c value lies between 10 and 20. this parameter is very difficult to calculate, being the major source of error and uncertainty. the attenuation index α is a function of other basic parameters, such as the temperature, the salinity, the ph, the depth difference between emission and reception and the emission frequency. α increases with the frequency of the impulse waveform, but the other parameters are also important. the attenuation index α depends solely on environmental conditions, and indicates the attenuation of any waveform that passes through the environment. for this reason, in the sea the low frequencies can be heard at greater distances than in the air. α can be modeled as a function of the frequency, ph, depth and temperature by equation (10):

$\alpha = \frac{a_1 p_1 f_1 f^2}{f_1^2 + f^2} + \frac{a_2 p_2 f_2 f^2}{f_2^2 + f^2} + a_3 p_3 f^2$ (10)

the parameters of equation (10) can be obtained from (11), following [3]:

$c = 1412 + 3.21\,t + 1.19\,sa + 0.0167\,d$
$a_1 = \frac{8.86}{c}\,10^{(0.78\,ph - 5)}, \quad p_1 = 1, \quad f_1 = 2.8\sqrt{\frac{sa}{35}}\;10^{\left(4 - \frac{1245}{273 + t}\right)}$
$a_2 = 21.44\,\frac{sa}{c}\,(1 + 0.025\,t), \quad p_2 = 1 - 1.37\cdot10^{-4}\,d + 6.2\cdot10^{-9}\,d^2, \quad f_2 = \frac{8.17\cdot10^{\left(8 - \frac{1990}{273 + t}\right)}}{1 + 0.0018\,(sa - 35)}$
$a_3 = 4.937\cdot10^{-4} - 2.59\cdot10^{-5}\,t + 9.11\cdot10^{-7}\,t^2 - 1.5\cdot10^{-8}\,t^3, \quad p_3 = 1 - 3.83\cdot10^{-5}\,d + 4.9\cdot10^{-10}\,d^2$ (11)
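the following python sketch is our own illustrative implementation of the francois-garrison relations (10)-(11), not code from the paper. with the ctd values reported later in section 5.3 (24.04 °c, 38.04 ‰, ph 8.3, f = 10 khz, d = 20 m) it evaluates to a value close to the 0.85 db/km quoted there.

```python
import math

def alpha_db_per_km(f_khz, t, sa, ph, d):
    """attenuation index α in dB/km per equations (10)-(11) (francois & garrison [3]).
    f_khz: frequency in kHz, t: temperature in °C, sa: salinity in ‰, d: depth in m."""
    c = 1412 + 3.21 * t + 1.19 * sa + 0.0167 * d
    # boric acid contribution
    a1 = (8.86 / c) * 10 ** (0.78 * ph - 5)
    p1 = 1.0
    f1 = 2.8 * math.sqrt(sa / 35) * 10 ** (4 - 1245 / (273 + t))
    # magnesium sulphate contribution
    a2 = 21.44 * (sa / c) * (1 + 0.025 * t)
    p2 = 1 - 1.37e-4 * d + 6.2e-9 * d ** 2
    f2 = (8.17 * 10 ** (8 - 1990 / (273 + t))) / (1 + 0.0018 * (sa - 35))
    # pure water contribution (form quoted for moderate temperatures)
    a3 = 4.937e-4 - 2.59e-5 * t + 9.11e-7 * t ** 2 - 1.5e-8 * t ** 3
    p3 = 1 - 3.83e-5 * d + 4.9e-10 * d ** 2
    fsq = f_khz ** 2
    return (a1 * p1 * f1 * fsq / (f1 ** 2 + fsq)
            + a2 * p2 * f2 * fsq / (f2 ** 2 + fsq)
            + a3 * p3 * fsq)

def transmission_loss(r_km, alpha, c_spread, r_echo=0.0):
    """tl in dB per equation (5); r in km."""
    return c_spread * math.log10(1000 * r_km) + alpha * r_km + r_echo

# ctd values from section 5.3; the result is of the order of 0.85 dB/km
print(alpha_db_per_km(10, 24.04, 38.04, 8.3, 20))
```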
the variation of α versus frequency and temperature is depicted in figure 4. the variation of α versus frequency and depth is illustrated in figure 5. the variation of α versus ph and frequency is shown in figure 6, and the variation of α versus salinity and frequency is depicted in figure 7. however, the gradients of salinity and ph are null, because the study is performed offshore. moreover, because the generator is located 2 m under the sea surface, the gradient of temperature is also nil, although the temperature gradient in the first meter under the surface is very high because of the solar radiation.

figure 8. joint scheme of generator, pvc triangle and semicircle.
figure 9. source generator and nylon semicircle assembly.

the last factor is the echo contribution. this factor is very complex to evaluate; however, if the velocity of acquisition and the time between two successive sound pulses are well planned in the geometrical scene, this factor is not nil but is detectable graphically. for these reasons only the main signal is used. these factors are described in section 2.5. 2.4. non-omnidirectional source  the aim of this article is to propose a good practice to evaluate the non-omnidirectional sound marine source, in this case at 10 khz. the first step is to characterize the sound generator as a function of angle at 1 meter of distance. the emitted sound is measured using a calibrated hydrophone, in this case a brüel & kjaer (b&k) model 8103, located over a semi-sphere with a 1 m radius. figure 8 shows the semicircle mounted on the source to obtain the directionality of the source generation. given that the source is directional, its direction is determined by the compass located on the pvc triangle on the water surface. initially the mission of the pvc triangle is to prevent the rotation of the generator that is sunk in the water, and to determine the direction of emission. another advantage of the pvc triangle is that it can be used to orient the generator. the generator has a cubic-shaped metal skeleton; the vertices of the pvc triangle are joined to the top rear vertices of the cube (figure 8). the b&k hydrophone is located over the nylon semicircle. 2.5. initial geometrical study  the sound velocity in seawater is a function of the salinity, the temperature and the depth, but in this case the tests are made offshore and around 2 km from the dut position. in these conditions the gradients of temperature, salinity and depth are estimated as nil, because when the temperature and salinity change, the sound speed changes too. another important parameter is the seabed relief. in our case the bathymetry illustrated in figure 10 is known. this approach is correct for calculating the distance because the main signal follows a different path than the first echo. figure 12 shows a schematic with the main parameters. equations (12) and (13) are based on the distance, d0, which is the minimum distance between the sound generator and the hydrophone, and the elapsed time, Δt, between the main wave and its first echo.
$d_0 = \frac{1}{2\sin\theta}\left[c\,\Delta t\,\sin\theta + h_r\,(\sin\theta + 3)\right]$ (12)

$\Delta t = \frac{1}{c}\left[\frac{h_r + d_0\,\sin\theta}{\sin\theta} - \sqrt{d_0^2 + h_r^2}\right]$ (13)

where: $h_r$ = the distance between the hydrophone and the seafloor; $d_0$ = the distance between the sound generator and the hydrophone in a plane orthogonal to the surface and including the emitter and the receiver; c = the sound velocity in the water (about 1500 m/s); θ = the angle between the reflected wave and the surface plane, in a plane orthogonal to the surface and including the emitter and the receiver; $h_e$ = the distance between the generator and the sea surface. $h_e$ and $h_r$ are measured with a tape measure: $h_r$ was measured by the diver during the maintenance tasks, and $h_e$ was manually measured when the test was done. $h_e$ does not appear because the reflections off the surface are not evaluated, since the distance is greater than 2 m. for this reason, the minimum distance will be $h_r$, which is 1 meter. if we fix the time interval to 1 s, the minimum distance between generator and dut must be 750 m. figure 11 shows an example of the real location used for this test. 3. procedure  the calibration process starts with the generation of the sound pulses through a sound generator, and the reception of the signal by the hydrophone. the main parameters are the generator position, the environmental conditions of the water, the signal amplitude and its frequency. the equipment used and the work method applied are detailed in this section.

figure 10. bathymetry of the vilanova coast.
figure 11. distance between emission and reception in a real case.

3.1. readings  there are two points for collecting the data. the rl is measured at the dut. the data collected on the ship are: the orientation, measured with a compass situated on the pvc triangle, and the raw gps satellite signal. the sl is measured with a b&k hydrophone located over the nylon semicircle. the temperature, the salinity and the ph are collected through the obsea observatory [4], located where the dut is also positioned. the readings stored aboard the ship are: the signal at the two frequencies l1 (1575.42 mhz) and l2 (1227.60 mhz) of the reception signal of the gnss receiver; the orientation of the source through a compass located at the main vertices of the triangle; and the emission depth, $h_e$. the hydrophone collects the following readings: the voltage received, which is sent over the internet using the udp (user datagram protocol). environmental conditions such as temperature, salinity and ph are measured at obsea. other readings required for the post-processing are the institut català de cartografia (icc) caster files, which are included in the free software real time kinematic (rtk) package. the motivation to use this system is to carry out a differential correction of the position without the need for our own terrestrial base. for this reason, we use the services provided by different entities such as icc and igs, among others. this allows us to achieve a high quality positioning without the financial investment.
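as a quick check of the geometry planning in section 2.5 (a sketch under the assumption c ≈ 1500 m/s stated in the text; the variable names are ours), the pulse period fixes the minimum generator-dut distance:

```python
c = 1500.0   # sound speed in seawater, m/s (value assumed in the text)
dt = 1.0     # time interval between two successive pulses, s

# one way to read the 750 m figure quoted in section 2.5: the direct wave
# of one pulse must be separable from the residual echoes of the previous
# one, which leads to a half-distance criterion
d0_min = c * dt / 2
print(d0_min)  # 750.0 m
```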
4. uncertainty calculation  the uncertainty study starts with (6), where the propagation law of the guide to the expression of uncertainty in measurement (gum) [5] is applied. using the nomenclature detailed in (7), (8) and (9), equation (14) is obtained:

$u^2(s) = \left(\frac{\partial s}{\partial rl}\right)^2 u^2(rl) + \left(\frac{\partial s}{\partial sl}\right)^2 u^2(sl) + \left(\frac{\partial s}{\partial tl}\right)^2 u^2(tl) = u^2(rl) + u^2(sl) + u^2(tl)$ (14)

in the next subsections the receiver, the generator and the environment uncertainties are reviewed. the terms of the factors included in (14) are not correlated, because there is no correlation between the distance and the generator (sl), since the generation is done irrespective of the distance. the environment is relevant; in fact, it is characterized by three parts: the attenuation index (independent of distance), the spreading factor (an element that depends on the morphology and the depth where the test is performed) and the echo contribution (not considered, since we are at a distance greater than the minimum required). 4.1. uncertainty of the receiver   the hydrophone has been considered as a black box where only the output values (counts) are known, because we analyze neither the transducer nor the analog-to-digital conversion. the offset of the signal has been eliminated by taking the difference between maximum and minimum; therefore the possible variation of the offset can be eliminated. thus, the uncertainty of the receiver is a function of the uncertainty in the voltage measurement. the error propagation is shown in (15):

$u^2(rl) = \left(\frac{\partial rl}{\partial v}\right)^2 u^2(v) = \left(\frac{20}{v \ln 10}\right)^2 \frac{v_{min}^2}{3}$ (15)

where the voltage uncertainty has a rectangular probability distribution with a width given by the resolution in voltage. the hydrophone sends the values in counts. the voltage is calculated through the conversion factor of the daq and the gain value. in this case, the hydrophone has a converter of 16 bits. as the full scale of the equipment is 5 v, the conversion factor between counts and v is 2.5 v / 2¹⁵ counts. the value $v_{min}$ is the minimum voltage that the hydrophone gives and corresponds to the value of 1 count. the squared uncertainty in voltage is $v_{min}^2/3$, corresponding to a rectangular probability distribution with a half-width equal to the minimum value assigned. 4.2. uncertainty of the generator   the uncertainty of the generator is obtained through its calibration certificate. the error propagation is shown in (16):

$u^2(sl) = \left(\frac{\partial sl}{\partial p_s}\right)^2 u^2(p_s) = \left(\frac{20}{p_s \ln 10}\right)^2 u^2(p_s)$ (16)

where $u(p_s)$ is obtained from the calibration certificate of the generator as follows (17):

$u^2(p_s) = \left(10^{-s_{bk}/20}\right)^2 u^2(v_{bk}) + \left(\frac{v_{bk}\,\ln 10}{20}\,10^{-s_{bk}/20}\right)^2 \left(\frac{u_g}{k_g}\right)^2$ (17)

the values of $u_g$ and $k_g$ are obtained from the calibration certificate of the generator. these parameters are calculated by the accredited laboratory following the standard [1], and the parameters $s_{bk}$ and $v_{bk}$ are the sensitivity and the voltage of the b&k hydrophone, respectively. this term is included because the output signal is measured with the b&k hydrophone. 4.3. uncertainty of the environment   the uncertainty of the environment is a function of many parameters and is the largest contributor to the uncertainty budget. the error propagation is described by equation (18):

$u^2(tl) = \left(\frac{\partial tl}{\partial \alpha}\right)^2 u^2(\alpha) + \left(\frac{\partial tl}{\partial r}\right)^2 u^2(r) + \left(\frac{\partial tl}{\partial c}\right)^2 u^2(c)$ (18)

figure 12. schematic of the test with the main parameters.

in (18) the uncertainty of α, the uncertainty of the distance and the uncertainty of the spreading factor appear.
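a minimal numeric sketch of the combination (14)-(18) is given below (the helper functions are our own illustration, using the counts-to-volts factor quoted in section 4.1; this is not the authors' code):

```python
import math

COUNTS_TO_V = 2.5 / 2 ** 15          # daq conversion factor from section 4.1
V_MIN = COUNTS_TO_V                  # voltage corresponding to 1 count

def u2_rl(v):
    """equation (15): squared uncertainty of the reception level, dB^2."""
    return (20 / (v * math.log(10))) ** 2 * V_MIN ** 2 / 3

def u2_sl(p_s, u_ps):
    """equation (16): squared uncertainty of the source level, dB^2."""
    return (20 / (p_s * math.log(10))) ** 2 * u_ps ** 2

def u2_tl(r_km, alpha, u_alpha, u_r_km, c_spread, u_c):
    """equation (18): α and c enter tl (eq. 5) linearly, r through both terms."""
    d_alpha = r_km                                        # ∂tl/∂α
    d_r = c_spread / (r_km * math.log(10)) + alpha        # ∂tl/∂r, per km
    d_c = math.log10(1000 * r_km)                         # ∂tl/∂c
    return (d_alpha * u_alpha) ** 2 + (d_r * u_r_km) ** 2 + (d_c * u_c) ** 2

def u_s(v, p_s, u_ps, r_km, alpha, u_alpha, u_r_km, c_spread, u_c):
    """equation (14): combined standard uncertainty of the sensitivity, dB."""
    return math.sqrt(u2_rl(v) + u2_sl(p_s, u_ps)
                     + u2_tl(r_km, alpha, u_alpha, u_r_km, c_spread, u_c))
```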
the attenuation index, α, is a function of many parameters, as explained in [3]. equation (10) shows the dependence of α on the other factors. the values of the parameters of equation (10) are detailed in (11), where c is the sound speed in seawater, sa is the salinity in ‰, ph is the measure of the acidity or basicity, t is the temperature in degrees celsius, and d is the depth in meters. the uncertainty is a function of the typical uncertainties in temperature, salinity, depth and ph. every term is evaluated through its resolution, its variation and the expanded uncertainty of the calibration certificate with its coverage factor. equation (19) shows the typical uncertainty, with a placeholder parameter w, which is finally replaced by the uncertainty of temperature, salinity, ph and depth:

$u(w) = \sqrt{\left(\frac{w_{resolution}}{\sqrt{12}}\right)^2 + \left(\frac{w_{variation}}{\sqrt{3}}\right)^2 + \left(\frac{w_{calibration}}{k_{calibration}}\right)^2}$ (19)

the spreading uncertainty is calculated using a previous study [6]. this factor is a function of the seabed geometry, the localization and the depth of the generation point, and is independent of the other parameters. it is a function of the relief of the seabed and the type of generator used. in order to obtain the uncertainty of the distance, a rectangular probability distribution with a 1 m width is used. the reception quality factors of the raw gps data, after the data integration from the caster, are analyzed offline. we could affirm that the width of the rectangular probability distribution is less than 1 m, but this would be too optimistic; therefore 1 meter is chosen in this case, as an overestimation. some calculated values are detailed in section 5.3. 5. results  5.1. direction of emission  the direction of emission is calculated through the nylon semicircle and the b&k hydrophone (model 8103); the values obtained are shown in table 1. figure 13 shows the composition graphically. the heights of the emitter and the receiver remain constant, since neither the emitter nor the receiver changes its height; the maximum emission angle and the angle between the dut position and the pvc triangle indicate, together with the compass measure, the correction value to apply. 5.2. evaluation of position and distance   the use of rtklib for position correction allows us to know the correct position with an accuracy of less than 0.5 m. without this correction the position has an accuracy of around 20 m. table 2 shows an example of the data obtained without correction, and table 3 shows an example with the corrected data. once the emission point is located, it is necessary to calculate the distance between the generator and the receiver. the distance is calculated with the vincenty [7] conversion. table 4 shows the distances corresponding to the values shown in table 3. the uncertainty of the distance is very difficult to obtain, because the uncertainty of gnss receiver data is an independent study in itself, which is being evaluated by many people. in this case, the quality factor included in rtklib has been analyzed and the position has been evaluated with a pair of gnss receivers. once the quality factor of the correction data is evaluated and increased with the contribution of the vincenty conversion, the value of the distance uncertainty is 0.60 m. this value is obtained for a rectangular probability distribution with a width of 1 m.
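the paper uses the ellipsoidal vincenty [7] solution; as a lighter stand-in, the spherical haversine formula below (our own sketch) reproduces km-scale baselines such as those in table 4 to within a few metres, although the ellipsoidal solution should be preferred at the 0.60 m uncertainty level discussed above.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2, r_earth=6371008.8):
    """great-circle distance in metres between two coordinates in degrees.
    spherical approximation; it can differ from the vincenty solution by a
    few metres over ~1.3 km baselines because the earth's flattening is ignored."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = phi2 - phi1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r_earth * math.asin(math.sqrt(a))

# a corrected fix from table 3 against a dut position; the dut coordinates
# below are hypothetical placeholders, not values from the paper
print(haversine_m(41.17608642, 1.765954648, 41.1878, 1.7520))
```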
5.3. attenuation index and its uncertainty  the environment absorption index is calculated with a high-accuracy recorder for conductivity, temperature and pressure (model sbe 16plus v2), henceforth ctd. the parameters are a temperature of 24.04 degrees celsius, a salinity of 38.04 ‰ and a ph of 8.3. the ctd is located at the obsea. table 5 shows some of the values obtained by the ctd during the tests. the variations during the test are negligible because of the short test time of approximately 15 minutes. the attenuation index is 0.85 db/km. the effective time was 15 minutes, even though the test lasted one hour or more, because not all the positions obtained by the gnss receiver were reliable. for a position to be reliable it has to be validated with the data obtained from the caster. this does not always happen, because the satellites change position and sometimes suitable signals or numbers are not available. if the series of data took a long time to collect, the ambient variables in the water would have to be taken into consideration, and there would be a margin of uncertainty at each point. however, in our case reliable data is taken every 15 minutes, so we can state that the ambient variables such as temperature, ph and salinity remain constant. the uncertainty value for this case is 0.99 db/km. this value is really high in comparison to the absorption index, but this is normal due to the uncontrolled parameters. 5.4. spreading value  the spreading value is calculated in another study [6], where the sensitivity uncertainty is known. in this case the value is 16.04 ± 0.67. it is important to note that the spreading factor does not have units. the initial spreading value expected was around 10, because of the shallow depth of about 20 meters; therefore, a cylindrical propagation was expected. the change arises because it was initially thought that the distance between receiver and generator would be sufficient to transform the cylindrical propagation into a spherical one, but it is not sufficient. the uncertainty value in this case is 0.67. because the seafloor is flat and the relief does not change, the value of the spreading factor is constant. 5.5. evaluation of the signal receipt  the generator and reception systems are not synchronized; therefore the received signal timing is independent of the generation. because the sampling frequency is 96 000 samples/second, the reception file is very big, and the duration of the record is 1.5 hours even though the test time is 15 minutes.

figure 13. scheme of the reference plane.

table 1. readings received in db at 1 meter from the source.

angle over semicircle | -45° with regard to the 0 plane | 0 | +45° with regard to the 0 plane
-90 | 186.34 | 186.19 | 186.19
-45 | 186.19 | 184.06 | 183.04
0 | 183.77 | 193.54 | 189.06
45 | 180.69 | 183.57 | 183.46
90 | 186.19 | 186.99 | 185.81

table 2. coordinate example without correction, in degrees.

% gpst | latitude n | longitude e
2013/09/19 10:21:21.000 | 41.17613741 | 1.765862013
2013/09/19 10:21:22.000 | 41.17613436 | 1.765833378
2013/09/19 10:21:23.000 | 41.17614067 | 1.765866606
2013/09/19 10:21:24.000 | 41.17614583 | 1.765843793
2013/09/19 10:21:25.000 | 41.17614367 | 1.765922205

table 3. coordinate example with correction, in degrees.

% gpst | latitude n | longitude e
2013/09/19 10:21:21.000 | 41.17608642 | 1.765954648
2013/09/19 10:21:22.000 | 41.17607901 | 1.765961776
2013/09/19 10:21:23.000 | 41.17608325 | 1.765958510
2013/09/19 10:21:24.000 | 41.17608555 | 1.765966284
2013/09/19 10:21:25.000 | 41.17608986 | 1.765958881
table 4. example of distances.

% gpst | distance (km)
2013/09/19 10:21:21.000 | 1.309357
2013/09/19 10:21:22.000 | 1.310040
2013/09/19 10:21:23.000 | 1.310051
2013/09/19 10:21:24.000 | 1.3096851
2013/09/19 10:21:25.000 | 1.3100011

table 5. example of data obtained from the ctd.

day | time | temperature (°c) | salinity (‰) | ph
19/09/2013 | 10:20:57 | 23.76 | 38.04 | 8.30
19/09/2013 | 10:21:17 | 23.76 | 38.04 | 8.30
19/09/2013 | 10:21:37 | 23.76 | 38.04 | 8.30
19/09/2013 | 11:01:37 | 23.79 | 38.04 | 8.29
19/09/2013 | 12:01:57 | 24.02 | 38.05 | 8.30

the lack of time synchronization between the generator and the receptor makes the post-processing of the signal a critical task. the signal processing has been done with a matlab® application. the sequence of this application is: first, to analyze every frame that arrives from the hydrophone (512 values in int16); then, to locate the maximum value; and finally, to create a sub-frame with the 15 index values before the maximum and the 16 index values after the maximum. the sub-frame, with 32 values, is processed with an fft in order to determine whether the maximum value is centered at 10 khz. figure 14 shows the temporal frame (512 values) and the fft of the sub-frame (32 values) for the acceptance case; the rejection case is illustrated in figure 15. the rejection case can be produced by many causes, such as crustaceans, fish, seaweed and bubbles, among others. in the acceptance case the maximum and minimum values of the signal are taken, and their difference divided by 2 is the amplitude of the received signal. once the amplitude value for the acceptance case and its time are found, another matlab® application is used to overlap the time values.

figure 14. temporal frame and frequency sub-frame for the acceptance case.
figure 15. temporal frame and frequency sub-frame for the rejection case.
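a python transcription of that acceptance test could look as follows (the original is a matlab® application; the function and variable names here are ours):

```python
import numpy as np

FS = 96_000          # sampling frequency, samples/s
F_TARGET = 10_000    # expected pulse frequency, hz

def check_frame(frame):
    """apply the sub-frame test described above to one 512-sample int16 frame.
    returns the pulse amplitude if the frame is accepted, otherwise None."""
    i_max = int(np.argmax(frame))
    lo, hi = i_max - 15, i_max + 17          # 15 samples before the maximum, 16 after
    if lo < 0 or hi > len(frame):
        return None                          # maximum too close to the frame edge
    sub = frame[lo:hi].astype(float)         # 32-value sub-frame
    spec = np.abs(np.fft.rfft(sub))
    freqs = np.fft.rfftfreq(sub.size, d=1 / FS)
    f_peak = freqs[int(np.argmax(spec[1:])) + 1]   # skip the dc bin
    if abs(f_peak - F_TARGET) > FS / sub.size:     # one fft bin (3 khz) of tolerance
        return None                          # rejection case (biological noise, bubbles, ...)
    return (frame.max() - frame.min()) / 2   # amplitude of the received signal
```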
5.6. sensitivity value  the last step is to assemble all the contributions and calculate the sensitivity value and its uncertainty. table 6 shows an example of the different contributions to the uncertainty. the calculation is performed for every point; table 7 shows an example with some points. the final result is that the sensitivity is -189.25 ± 2.95 db rel 1 v/µpa at 10 khz.

table 6. contributions to the uncertainty.

distance [km] | u(sl) [db] | u(rl) [db] | u(tl) [db] | u [db]
1.362060 | 0.01 | 0.01 | 1.31 | 2.73
1.400959 | 0.01 | 0.01 | 1.35 | 2.80
1.402288 | 0.01 | 0.01 | 1.35 | 2.80
1.438561 | 0.01 | 0.01 | 1.38 | 2.87

table 7. sensitivity table example.

distance (km) | sensitivity (db) | uncertainty (db)
1.335602 | -190.17 | 3.00
1.401709 | -195.89 | 2.70
1.525931 | -190.92 | 2.93
1.664731 | -192.31 | 3.19

6. conclusions  the article gives the basis for considering the realization of an in situ calibration of a hydrophone with a non-omnidirectional sound source. the uncertainty contribution is high in comparison to the uncertainty obtained in the laboratory, which is less than 1 db. for this reason, it is necessary to repeat this test to improve the different contributions to the uncertainty. improvements could include increasing the amplitude of the source generator, which lowers the contributions to the uncertainty of the source pressure and of the receiver voltage. in the last hydrophone calibration, in february 2010, the sensitivity value was -192 db rel 1 v/µpa at 10 khz, which is inside the confidence interval. acknowledgement  this work was supported by the spanish ministry of economy and competitiveness under the research project: "sistemas inalambricos para la extension de observatorios submarinos" (ctm2010-15459). references  [1] iec 60565, "underwater acoustics – hydrophones – calibration in the frequency range 0.01 hz to 1 mhz", iec, 1997. [2] t. takasu, rtklib: open source program package for rtk-gps, foss4g 2009, tokyo, japan, november 2, 2009. [3] r.e. francois, g.r. garrison, "sound absorption based on ocean measurements: part ii: boric acid contribution and equation for total absorption", j. acoust. soc. am. 72 (1982) no. 6. [4] j. aguzzi, a. mànuel, f. condal, j. guillén, m. nogueras, j. del rio, c. costa, p. menesatti, p. puig, f. sardà, d. toma, a. palanques, "the new seafloor observatory (obsea) for remote and long-term coastal ecosystem monitoring", sensors 11 (2011) pp. 5850-5872. [5] "evaluation of measurement data. guide to the expression of uncertainty in measurement", september 2008. [6] a. garcia-benadí, f.j. cadena-muñoz, d. sarria, r. sitjar, j. del rio, "good practice guide for c calculation", in: 5th martech international workshop on marine technology, pp. 61-66, isbn 978-84-616-5764-3. [7] t. vincenty, "direct and inverse solutions of geodesics on the ellipsoid with application of nested equations", survey review xxiii, 176, april 1975.

acta imeko  issn: 2221-870x  december 2015, volume 4, number 4, 66-74

abstract  rolling contact fatigue (rcf) plays a critical role in railway components, and the characterization of the materials used, in terms of rcf life, is still an open task, made complex by the interactions of different phenomena. the contact surface has a direct impact on the pressure exerted and can change during the test, due to wear. the proposed procedure consists in using the vibrations of a test bench during rcf-life tests to identify when wear increases and causes a quick flattening of the specimen's surface, and when this process is complete. the procedure is applied to two case studies regarding wheel and rail steels. in the tests, a wheel steel specimen rotates against a rail steel specimen, while they are pressed against each other by a constant force. at regular intervals weight loss and surface analyses are performed, while vibrations and torque are monitored continuously. destructive tests are carried out at the end of each test. results from the non-destructive measurements were used to provide input data to a numerical simulation, used to determine the cyclic plasticity properties of the material. the proposed methodology shows the potential application of vibration measurements for detecting wear rates, thus allowing supporting or partially supplanting destructive testing.
using vibration measurements to detect high wear rates in rolling contact fatigue tests
matteo lancini 1, ileana bodini 2, david vetturi 1, simone pasinetti 1, angelo mazzù 1, luigi solazzi 1, candida petrogalli 1, michela faccoli 1
1 university of brescia, department of mechanical and industrial engineering, via branze 38, brescia, italy
2 university of brescia, department of information engineering, via branze 38, brescia, italy

section: research paper   keywords: damage evaluation; rolling contact fatigue; vibrations   citation: matteo lancini et al., using vibration measurements to detect high wear rates in rolling contact fatigue tests, acta imeko, vol. 4, no. 4, article 13, december 2015, identifier: imeko-acta-04 (2015)-04-13  editor: paolo carbone, university of perugia, italy  received december 18, 2014; in final form march 20, 2015; published december 2015  copyright: © 2015 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited  corresponding author: matteo lancini, e-mail: matteo.lancini@unibs.it

1. introduction  the characterization of materials is critical in railway applications because cyclic, localized stresses lead to rolling contact fatigue (rcf) phenomena, which can cause severe damage and sudden failure of components. wear, also, is a common phenomenon occurring on both rails and wheels, but its interaction with rcf is not always detrimental: in specific conditions (uniformity, low rate, stability) it can optimize the contact geometry, increasing rcf life, while in non-optimal conditions it can reduce rcf life. moreover, the high shear stress leads to a complex combination of concurrent damage phenomena [1], [2] which should be taken into account. models describing cyclic plasticity are available in the literature but, if the material parameters are obtained by laboratory tests in standard stress conditions [3], [4], their results are not reliable, especially due to a different propagation mechanism [5]. a reliable procedure to analyze the rcf-wear interaction would be to carry out accelerated rcf tests under controlled slip ratio conditions with multiple specimens, interrupting the test at different times and performing destructive analyses on the specimens, which is costly and time consuming. in this paper, an alternative method is proposed: vibrations and torque are monitored throughout the tests and compared with the damage detected by non-destructive methods (contact surface state, wear) and by destructive methods at the test end (subsurface microstructure and hardness). the correlations found between the measured parameters and the damage phenomena occurring at the surface and subsurface region, such as cyclic plasticity or crack nucleation and propagation, could be used to replace part of the destructive tests. this would reduce both the cost and the time needed to perform rcf tests on railway materials. 2. materials  to evaluate the robustness of the proposed approach, different sets of specimens were used. each set was made up of pairs of discs, machined out of wheel rims and railheads respectively, to represent different material combinations.
a first set, called "primary materials", was used to investigate the correlation between the results from destructive and non-destructive methods, and constitutes the primary experimental data source for this work. a second set, here called "secondary materials", was used for the preliminary tests, performed to select which quantity to monitor out of the six proposed (see paragraph 3.2). a third set, moreover, was employed for the early stage tests, using the same materials as the first set, in order to analyze the behavior in the early stages of the process. 2.1. primary materials  the proposed procedure was applied to pairs of wheel rim and railhead materials (er8 en13262 and uic 900a respectively) as a case study, with the conditions reported in table 1 under test ids a to f and s3. the wheel specimens were cylindrical discs 60 mm in diameter and 10 mm thick, while the rail specimens were 59.5 mm in diameter, 10 mm in thickness and had a crowning radius of 200 mm, to prevent any border effect on the contact surface. the mechanical properties of the wheel rims (er8 steel) and railheads (900a steel) tested, as reported by the manufacturer, are shown in table 2. three different contact pressure levels were tested, in two different sliding ratio conditions, totaling six different rolling/sliding parameter combinations, reported in table 1. vibration recordings from the tests with the highest sliding ratio (s = 3 %) and with the highest nominal pressure (p = 1 500 mpa) were also examined using the proposed procedure. 2.2. secondary materials  a second set of specimens, here called s2, was used in the preliminary tests. the sliding ratio, nominal contact pressure and other test conditions for this secondary case study are reported in table 1. both wheel and rail specimens had a diameter of 60 mm and a thickness of 15 mm. the wheel discs were machined from superlos® steel by lucchini, while the rail discs were made of 900a steel. the mechanical properties of both materials, as reported by the manufacturer, are listed in table 2. 3. methods  3.1. measurement system  the test bench used is a bi-disc machine dedicated to studying the interactions between two components subjected to cyclic contact in different load conditions, already presented in a previous work by the authors [6], and whose general layout is shown in figure 1. the contact load, up to 70 kn, is kept constant by means of a servo-hydraulic actuator enabling the sliding of one of the mandrels, while two independent 33 kw engines provide the specimen rotation. both the engines and the actuator are controlled to perform tests at a given slip ratio, rotating speed, contact force and engine torque. on both mandrel supports, piezo-accelerometers were mounted using a cyanoacrylate glue, as shown in figure 2: one in the vertical and one in the horizontal plane, both normal to the rotation axis. the transducers used were wilcoxon 736 iepe accelerometers, with a nominal sensitivity of 0.98 v/(m/s²), a full scale of 5 m/s², and a linear bandwidth in the 5 hz to 20 khz range. the signals from the two accelerometers, as well as from a torque sensor positioned on the sliding mandrel, were acquired by means of a configurable data acquisition system at a 5 khz synchronous sampling frequency.
table 2. material properties of the steels used.

property | er8 | 900a | superlos®
ultimate tensile stress [mpa] | 940 | 930 | 980
monotonic yield stress [mpa] | 590 | 470 | 640
cyclic yield stress [mpa] | 470 | 390 | 525
necking [%] | 54 | 26 | -
elongation [%] | 17 | 14 | 18
brinell hardness [hb] | 230-255 | 296 | 280

figure 1. test bench layout.

table 1. test parameters of the first case study (a to f), of the preliminary test (s2) and of the early stage monitoring test (s3).

test id | nominal contact pressure [mpa] | contact load [n] | rail sample rolling speed [r.p.m.] | wheel sample rolling speed [r.p.m.] | sliding ratio [%]
a | 1 100 | 1 500 | 516.0 | 492.5 | 3
b | 1 300 | 2 490 | 516.0 | 492.5 | 3
c | 1 500 | 3 830 | 516.0 | 492.5 | 3
d | 1 500 | 3 830 | 511.0 | 497.5 | 1
e | 1 300 | 2 490 | 511.0 | 497.5 | 1
f | 1 100 | 1 500 | 511.0 | 497.5 | 1
s2 | 1 100 | 7 557 | 497.5 | 502.5 | 1
s3 | 1 500 | 3 830 | 516.0 | 492.5 | 3

an l101b-2k basler linear array camera was also used to monitor the contact surface conditions of both specimens. the camera acquisition was synchronized with the bench rotation, in a setup leading to a resolution of 1 µm on the specimen's mid surface. due to the high rotation speed, however, it was only used at regular intervals while the test was on halt and the specimen was moved at 5 rpm. every 2×10⁵ cycles the test was halted and weighing was performed with a precision scale (0.01 g resolution) to evaluate the weight loss due to wear. at the end of each test, the samples were cut at mid-thickness, polished mechanically and etched (using 2 % nital), then inspected using an optical microscope to measure the plastic flow. following this, a leo evo-40xvp scanning electron microscope with an eds probe was used to look for rcf crack paths, and hv10 micro-hardness was measured at different depths on the cross-section of each sample. 3.2. mechanical model  while vibration analysis is commonly used to detect faults in gears and bearing components [7]-[10], its use in the material characterization of continuous discs is limited, as this is usually carried out with standardized methods, generally involving destructive testing or evaluation at the end of the test. in previous works [11] a damage indicator, estimated from vibrations and correlated to surface cracks, was proposed. following the same approach, the interactions between the two specimens are synthesized, as is common in modal analysis, using frequency transfer functions (frf) between the torque t and the accelerations in the x and y directions, as depicted in figure 3. the idea behind this approach is that any variation in the material or geometry of the samples would be reflected by a simultaneous change in the modal parameters of the system, causing a change in the frf values. because of this, a change in the dynamic behavior could be used to assess when the crowning gets flattened, and the duration of this wear process. each frf ha/b was numerically computed as the cross-spectrum between a signal a and a signal b, divided by the auto-spectrum of signal b (a sketch of this estimate is given after the list below). in particular, the following 6 quantities were monitored: a. the transmissibility hy/x, between the y acceleration of the fixed mandrel support along the vertical axis and the x acceleration of the moving mandrel support along the pneumatic actuator axis; b. the rotating inertances hx/t and hy/t between the x and y accelerations and the torque t measured on the rotating axis of the sliding mandrel; c. the reciprocals of the previous three quantities, hx/y, ht/x and ht/y.
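the following python sketch (our illustration using scipy; the signal names are placeholders) shows one way to estimate such an frf ha/b from sampled records:

```python
from scipy.signal import csd, welch

FS = 5000  # synchronous sampling frequency of the daq, hz

def frf(a, b, nperseg=FS):
    """h1-type estimate of h_{a/b}(ω): cross-spectrum over the auto-spectrum
    of b, with 1 hz resolution when nperseg equals the sampling rate."""
    f, s_ba = csd(b, a, fs=FS, nperseg=nperseg)  # scipy conjugates its first argument
    _, s_bb = welch(b, fs=FS, nperseg=nperseg)
    return f, s_ba / s_bb

# example: the rotating inertance h_{x/t} from acceleration and torque records
# x, t = ...  (1-d arrays sampled at FS)
# f, h_xt = frf(x, t)
```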
though all these quantities are related to properties of the system, such as masses and stiffnesses, their connection to any proper mechanical parameter would require an overly complex multiple-degree-of-freedom model, which would exceed the scope of this research. therefore, their absolute values are not meaningful per se, and only their variation from their value at the beginning of each test has been taken into account. 3.3. preliminary tests  preliminary tests were performed on the specimens described in section 2.2 to select which quantity to monitor and in which frequency range. to avoid misreadings due to external or uncorrelated sources of vibration, the test bench was also monitored while running in jog mode, as well as in full operational mode but without any load applied between the samples. a power spectral density analysis of the x, y and t signals, recorded for about 100 s and averaged using a 1 hz spectral resolution, revealed that their frequency content was located below 400 hz, while the same analysis performed in the final phases of standardized rcf tests revealed vibrational phenomena up to 1 000 hz. moreover, an experimental modal analysis of the pneumatic actuator pointed out a resonance frequency lower than 100 hz; therefore, the spectral quantities monitored were treated using a band-pass filter between 400 hz and 1 000 hz. to select which quantity should be monitored, a comparison between the computed h functions, subtracted of their initial value and normalized to their maximum value, was performed during a full test of an unrelated rail/wheel couple, examined in a previous work [11] and here reported in figure 4. this comparison pointed out how the frf between the torque and the acceleration along the linear actuator axis, here called t/x, and its reciprocal x/t, displayed a sensible change, coherent with the other damage indicators. in the initial 60 000 cycles, this quantity is less affected by accidental impacts or other impulsive events, visible as spikes on the chart, which occur frequently due to the uncontrolled environment in which the bench operates. 3.4. vibration analysis  using the acquired data, the transfer function ht/x(ω) between torque and acceleration was computed averaging 5 windows of 1 s, thus producing a spectrum of 1 000 lines, with a spectral resolution of 1 hz, every 5 s. the initial dynamic behaviour of the system made up of the test bench and the intact specimens was assessed by averaging the ht/x(ω) values of the first 2 500 cycles, thus assumed as a set of reference values ht/x^ref(ω). to point out changes in the system, the difference dt/x(ω) between the current status and the reference value has been computed for each frequency line ωi; then the synthetic index dt/x has been evaluated as the root mean square (rms) summation over all frequencies, as shown in (1):

$d_{t/x} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left[h_{t/x}(\omega_i) - h_{t/x}^{ref}(\omega_i)\right]\left[h_{t/x}(\omega_i) - h_{t/x}^{ref}(\omega_i)\right]^{*}}$ (1)

figure 2. specimens mounted on the mandrels (the accelerometers' measuring axes are highlighted).
figure 3. simplified frf-based model.

to smooth the resulting signal, given the relatively slow damaging process, the rms value was averaged over a 30 s window. the same procedure, illustrated in figure 5, was performed for the reciprocal hx/t, leading to an indicator dx/t, which was also monitored during the rcf tests.
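a compact python sketch of the indicator in (1), again our own illustration reusing the frf estimator above:

```python
import numpy as np

def damage_index(h, h_ref, f, band=(400.0, 1000.0)):
    """rms difference between the current and reference frf over the
    400-1000 hz monitoring band, as in equation (1)."""
    mask = (f >= band[0]) & (f <= band[1])
    d = h[mask] - h_ref[mask]
    return np.sqrt(np.mean((d * np.conj(d)).real))

# h_ref would be the average frf of the first 2 500 cycles; a 30 s moving
# average of the resulting index gives the d curves discussed in section 4
```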
4. results  4.1. vibrations  the values of dx/t for the four tests under scrutiny are reported in figure 6. all three tests with the high sliding ratio level (3 %) show a steep increase in value (from 10 to more than 100 kg⁻¹m⁻¹) after a few thousand cycles, followed by an abrupt decrease after a variable number of cycles, then settling around a stationary level (30 kg⁻¹m⁻¹). the last test, with a 1 % sliding ratio, displayed only a slight increase after 8 000 cycles to a stationary level (20 kg⁻¹m⁻¹). such behavior can be used to identify a period, characterized by higher dx/t levels, during which the damage process is different from both the initial and the stationary stages. the approximate onset and duration of these periods are reported in table 3, and their values depend on the nominal contact pressure, as visible in figure 7.

figure 4. dy/x, dt/x and dt/y on the preliminary tests, where wear rate changes were detected by weight loss measurements [11] after 60 000 and 100 000 cycles.
figure 5. signal processing procedure's steps: torque and acceleration measurements; 5 s averaged frf; 30 s averaged frf; difference from the reference frf; 400-1000 hz band-pass filter; 30 s averaged rms.
figure 6. dx/t indicators for tests a (black), b (red), c (green) and d (cyan) during the first 20 000 cycles of each test.

table 3. onset and duration of high dx/t levels.

test id | onset [cycles] | offset [cycles] | duration [cycles] | duration [s]
a | 700 | 4 700 | 4 000 | 480
b | 3 800 | 8 500 | 4 700 | 560
c | 6 000 | 12 700 | 6 700 | 800
d | n/a | 8 200 | n/a | n/a

figure 7. onset and duration of high dx/t levels.

4.2. weight loss  the wear rate, in terms of weight loss as a function of the cycle number with varying nominal contact pressure p, was investigated in tests a to f. the results are shown in figure 8 and figure 9 for sliding ratios s = 1 % and s = 3 % respectively. given the linear relationship displayed between weight loss and cycle number in the steady-state period of the wear curves, the wear rates were calculated as the angular coefficient of the linear best fit of the experimental points. the wear rate values increased as the nominal contact pressure and the sliding ratio increased. the wear rate was in general higher for the rail steel samples, although the surface damage was more severe for the wheel steel specimens: this means that wear, in this case, mitigates the effect of surface pitting by removing damaged layers. this linear wear rate behavior suggests a stationary contact geometry already achieved before the first measurement (after 200 000 cycles), while the difference between the 3 % and 1 % sliding ratio tests points out a decreased wear rate associated with the lower sliding condition. 4.3. damaged surface width  the shape changes due to wear and rcf were assessed by measuring the worn portion of the wheel specimen as shown in the images recorded by the linear camera before the test and after 200 000 cycles, here displayed in figure 10 to figure 13. the worn portion was identified by visual inspection of the recorded images and by manually selecting the image area of the specimen surface where the linear patterns, due to the specimen machining during its production, were not present or were only partially visible. the process was repeated on 20 images for each specimen, and then the ratio between the damaged and the visible area was averaged and multiplied by the nominal thickness of the specimen (a sketch of this estimate is given below). when no clear trace of the original surface of the wheel specimen was identified in the image, the whole surface width (10.0 mm) was considered as damaged.

figure 8. 1 % sliding test weight loss for rail (upper) and wheel (lower).
figure 9. 3 % sliding test weight loss for rail (upper) and wheel (lower).
figure 10. wheel contact surfaces before (upper) and after (lower) the first 200 000 cycles for test a.
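the damaged-width estimate of section 4.3 reduces to a masked-area ratio; a minimal numpy sketch follows (the binary masks are assumed to come from the manual selection step; the names are ours):

```python
import numpy as np

NOMINAL_WIDTH_MM = 10.0  # nominal surface width of the wheel specimen

def damaged_width_mm(damaged_masks, visible_masks):
    """average the damaged/visible area ratio over the 20 images of one
    specimen and scale it to the nominal surface width (section 4.3)."""
    ratios = [d.sum() / v.sum() for d, v in zip(damaged_masks, visible_masks)]
    return NOMINAL_WIDTH_MM * float(np.mean(ratios))
```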
4.3. Damaged surface width
Shape changes due to wear and RCF were assessed by measuring the worn portion of the wheel specimen, as shown in images recorded by the linear camera before the test and after 200 000 cycles, displayed here in Figure 10 to Figure 13. The worn portion was identified by visual inspection of the recorded images and by manually selecting the image area of the specimen surface where the linear patterns, due to the specimen machining during its production, were not present or were only partially visible. The process was repeated on 20 images for each specimen; the ratio between damaged and visible area was then averaged and multiplied by the nominal thickness of the specimen. When no clear trace of the original surface of the wheel specimen could be identified in the image, the whole surface width (10.0 mm) was considered damaged.

Figure 10. Wheel contact surfaces before (upper) and after (lower) the first 200 000 cycles for test A (damaged width 10.0 mm).
Figure 11. Wheel contact surfaces before (upper) and after (lower) the first 150 000 cycles for test B (damaged width 10.0 mm).
Figure 12. Wheel contact surfaces before (upper) and after (lower) the first 200 000 cycles for test C (damaged width 7.8 mm).
Figure 13. Wheel contact surfaces before (upper) and after (lower) the first 200 000 cycles for test D (damaged width 10.0 mm).
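A minimal sketch of the width estimate described above follows, assuming a set of binary masks marking the damaged pixels of each image (True where the machining pattern is absent); the mask generation and the 60 % damaged fraction are illustrative placeholders.

```python
import numpy as np

def damaged_width_mm(masks, nominal_width_mm=10.0):
    """Average the damaged/visible area ratio over a set of boolean masks
    and scale by the nominal specimen width, as in Section 4.3."""
    ratios = [m.mean() for m in masks]   # damaged fraction per image
    return float(np.mean(ratios)) * nominal_width_mm

# 20 hypothetical masks, 50x200 pixels each, ~60 % flagged as damaged
rng = np.random.default_rng(1)
masks = [rng.random((50, 200)) < 0.6 for _ in range(20)]
print(f"{damaged_width_mm(masks):.1f} mm")
```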
4.4. Micro-hardness
After the tests were completed, micro-hardness measurements were taken on the cross sections of the cut and polished specimens at different depths. The resulting hardness profiles are shown in Figure 14. Higher hardening was observed in tests with higher sliding ratios and contact pressures, involving layers between 0.4 mm and 0.8 mm deep. This behavior could be associated with isotropic hardening.

Figure 14. Micro-hardness profiles on the cross-section of the rail (upper) and wheel (lower) steel specimens.

4.5. Scanning electron microscope
The inspection of the specimen sections pointed out that the main damage mechanism, in both rail and wheel steel samples, was ratcheting, initiating cracks which propagate to cause RCF failure. Figure 15 shows the RCF crack path for various test conditions: the rail steel samples (a and c) were always less damaged than the wheel steel ones (b and d) in all tested conditions, and presented only surface cracks. In addition, the wheel steel samples also presented subsurface cracks. All the surface and subsurface cracks that were analyzed displayed a shallow angle to the surface and follow the plastically deformed material during their growth.

Figure 15. RCF crack path in: a) rail steel sample tested at s = 1 % and p = 1 500 MPa; b) wheel steel sample tested at s = 1 % and p = 1 500 MPa; c) rail steel sample tested at s = 3 % and p = 1 300 MPa; d) wheel steel sample tested at s = 3 % and p = 1 300 MPa.

4.6. Early stage measurements
To better understand the relationship between the proposed vibration-based index and wear in the early stages of the damaging process, a separate test was performed, as indicated in Section 2.2. While for the previous tests a surface image was available only every 200 000 cycles, this test was halted every 1 500 cycles and the wheel specimen surface was photographed. This made it possible to identify when the flattening due to wear started, and to follow its progression, using the same procedure described in Section 4.3 to assess the damaged surface width from surface images such as the ones displayed in Figure 16. The superposition of the two measurements, up to the first 40 000 cycles, is displayed in Figure 17 and shows that the D_X/T index and the damaged surface width have a similar behavior. During the first 12 000 cycles D_X/T is stationary and low (100 kg⁻¹m⁻¹), while the wheel surface does not display any damage. Between 13 500 and 25 500 cycles, D_X/T quickly increases up to a high stationary value (300 kg⁻¹m⁻¹). In the same period the damaged width rises from 0 mm to 7.5 mm, indicating a rapid flattening of the surface due to a high wear rate. After 25 000 cycles, D_X/T displays a stationary behavior, while the wear rate decreases to about one third, completing the flattening at 34 500 cycles.

Figure 16. Surface of the wheel specimen in early-stage tests after 3 000 cycles (upper), 18 000 cycles (mid) and 27 000 cycles (lower).
Figure 17. Early-stage damage progression: damaged surface width from photographs (red, right axis) and vibration-based D_X/T index (blue, left axis).
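To illustrate the kind of comparison behind Figure 17, a minimal sketch correlating the two early-stage series is shown below; the sampled values are invented placeholders shaped to match the trends quoted in the text, not the measured data.

```python
import numpy as np

# hypothetical early-stage series sampled every 1 500 cycles
cycles = np.arange(0, 40500, 1500, dtype=float)
d_xt = np.where(cycles < 13500, 100.0,
                np.where(cycles < 25500,
                         100.0 + 200.0 * (cycles - 13500.0) / 12000.0,
                         300.0))                       # kg^-1 m^-1
width = np.clip(7.5 * (cycles - 12000.0) / 13500.0, 0.0, 7.5)  # mm

# a simple similarity measure between index and damaged width
r = np.corrcoef(d_xt, width)[0, 1]
print(f"Pearson correlation: {r:.2f}")
```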
5. Discussion
The material plastic behavior was simulated using a numerical simulation for rolling contact [12], already calibrated on ER8 EN13262 and UIC 900A74 [13]. The model predicts plasticization in a two-dimensional plane-strain half-space subjected to contact pressure and friction, also taking into account wear as a concurrent phenomenon removing layers from the surface. Input parameters for this model are the specimen geometry (radius and contact width), load, friction coefficient, material properties (yield stress, elastic constants, isotropic and kinematic hardening constants) and wear rate.

The micro-hardness profiles pointed out a behavior associated with isotropic hardening, which allowed for a model simplification, supposing a linear relationship between yield stress and hardness. The images taken after the first 200 000 cycles, displayed in Figure 11 to Figure 13, proved that wear flattened the rail specimen, thus increasing the contact surface and reducing the actual contact pressure exerted. The vibration analysis, in particular of the first 20 000 cycles reported in Table 3, identified the time frame in which the flattening process was occurring. The time and extent of the flattening detected by D_X/T made it possible to replace the standard point Hertz contact in the model with a line contact surface, with a track width of 10 mm (8 mm for the 1 % sliding ratio), introduced at the very beginning of the RCF simulations, as shown in Table 4.

Table 4. Nominal and measured width of specimens as used for simulation.

ID   Nominal contact width [mm]   Contact width used [mm]   Width reported since [cycles]
A    1.785                        10                        4 700
B    2.113                        10                        8 500
C    2.440                        10                        12 700
D    2.440                        8                         8 200

Figure 18 shows the calculated displacements due to plastic strain in the ER8 specimen in test D (F = 3 830 N and s = 1 %) for material points that, before the deformation process, were standing along a vertical line. The simulation predicted that a stationary state in the material strain distribution was reached after 200 000 cycles, coherently with what was experimentally observed in terms of both wear rate and vibration parameters. Figure 19 shows the superposition of the observed subsurface microstructure with curves representing the calculated displacements: these curves are in agreement with the general aspect of the microstructure. The simulation correctly predicted that the cracks and plasticized layers could not be deeper than 40-50 µm: even though ratchetting never stops, wear removes plasticized and cracked layers from the surface, preventing deep crack propagation.

Figure 18. Calculated displacements due to plastic strain in the ER8 specimen in test D (F = 3 830 N and s = 1 %).
Figure 19. Superposition of calculated (white) and observed (image) plastic strain bands in tests with s = 1 % and p = 1 100 MPa (left), and s = 1 % and p = 1 300 MPa (right), after 2 000 000 cycles.

The ex-post analysis of the monitored index D_X/T showed good correlation with destructive and non-destructive test results. Nonetheless, its interpretation could still be difficult, because the stationary level (for tests A to D about 30 kg⁻¹m⁻¹, as seen in Figure 6) is unknown until it is reached. Making deductions based on a real-time analysis of the proposed index could therefore still be unfeasible while the test is in its early stage. Monitoring the proposed index while the test is still running could be feasible after the first 20 000 cycles, although more work is needed for its interpretation to be practical, especially towards producing a smoother and more stable D_X/T value over the whole RCF life test duration. Further tests should be performed, using the same technique on different materials and conditions, to assess the robustness of the proposed method. Moreover, future studies could focus on making the index easier to compute and assess in real time.

6. Conclusions
A procedure to assess the presence, onset and duration of collaborative RCF and wear phenomena leading to contact shape alteration was proposed. The approach was applied to a wheel and rail steel characterization procedure based both on rolling-sliding contact tests on disc specimens and on numerical simulation. An early-stage evaluation of the damage process provided evidence that the proposed vibration-based index is able to detect high-wear transient states during which the concurring damage phenomena alter the specimen shape. This allows the numerical model used for characterization to be altered accordingly, without halting tests and removing the specimens from the test bench. The advantage of the proposed method rests, in fact, in its non-destructive and inline nature, allowing for a continuous assessment without interfering with the specimens. This procedure completes the described full RCF failure life test and allows correcting the numerical models required to characterize the materials. It points out phenomena whose duration is too short to be taken into account by the same numerical model describing, for the whole RCF life of the specimen, the different damage phenomena and their interactions in a case as complex as cyclic contact.

Acknowledgement
The authors are grateful to Mr. Silvio Bonometti and Mrs. Valentina Ferrari for their support in the experimental activities, and to Prof. Franco Docchio for careful reading and revision.

References
[1] Zerbst U., Beretta S., "Failure and damage tolerance aspects of railway components", Engineering Failure Analysis, 2011, 18, pp. 534-542.
[2] Donzella G. et al., "Progressive damage assessment in the near-surface layer of railway wheel-rail couple under cyclic contact", Wear, 2011, 271(1-2), pp. 408-416.
[3] Fedele R., Filippini M., Maier G., "Constitutive model calibration for railway wheel steel through tension-torsion tests", Computers & Structures, 2005, 83(12-13), pp. 1005-1020.
[4] Liu Y., Stratman B., Mahadevan S., "Fatigue crack initiation life prediction of railroad wheels", International Journal of Fatigue, 2005, 28, pp. 747-756.
[5] Liu Y., Liu L., Mahadevan S., "Analysis of subsurface crack propagation under rolling contact loading in railroad wheels using FEM", Engineering Fracture Mechanics, 2007, 74, pp. 2659-2674.
[6] Solazzi L. et al., "Rolling contact fatigue damage detected by correlation between experimental and numerical analyses", Structural Durability and Health Monitoring, 2012, 8(4), pp. 329-340.
[7] Mahfoud J., Breneur C., "Experimental identification of multiple faults in rotating machines", Smart Structures and Systems, 2008, 4(4), pp. 429-438.
[8] Byington C.S. et al., "Shaft coupling model-based prognostics enhanced by vibration diagnostics", Insight: Non-Destructive Testing and Condition Monitoring, 2009, 51(8), pp. 420-425.
[9] Roemer M.J., Byington C.S., Sheldon J., "Advanced vibration analysis to support prognosis of rotating machinery components", International Journal of COMADEM, 2008, 11(2), pp. 2-11.
[10] Borghesani P., Pennacchi P., Randall R.B., Sawalhi N., Ricci R., "Application of cepstrum pre-whitening for the diagnosis of bearing faults under variable speed conditions", Mechanical Systems and Signal Processing, 2013, 36(2), pp. 370-384.
[11] Solazzi L., Petrogalli C., Lancini M., "Vibration based diagnostics on rolling contact fatigue test bench", Procedia Engineering, 2011, 10, pp. 3465-3470.
[12] Mazzù A., "Surface plastic strain in contact problems: prediction by a simplified non-linear kinematic hardening model", Journal of Strain Analysis, 2009, 44(3), pp. 187-199.
[13] Mazzù A. et al., "An integrated model for competitive damage mechanisms assessment in railway wheel steels", Wear, 2015, 322-323, pp. 181-191.

ACTA IMEKO, ISSN: 2221-870X, April 2017, Volume 6, Number 1, 50-58

Statistical experimental design screening strategies for free-monomeric isocyanates determination by UPLC in materials used in cork stoppers manufacturing

Catarina André1, Inês Delgado2, Isabel Castanheira2, João Bordado1, Ana Sofia Matos3
1 Departamento de Engenharia Química, Instituto Superior Técnico, Universidade de Lisboa, Avenida Rovisco Pais, 1049-001 Lisboa, Portugal
2 Departamento de Alimentação e Nutrição, Instituto Nacional de Saúde Doutor Ricardo Jorge (INSA), Av. Padre Cruz, 1649-016 Lisboa, Portugal
3 UNIDEMI, Departamento de Engenharia Mecânica e Industrial, Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, 2829-516 Caparica, Portugal

Section: Research Paper
Keywords: diisocyanates; derivatization; chromatography; design of experiments; cork stoppers
Citation: Catarina André, Inês Delgado, Isabel Castanheira, João Bordado, Ana Sofia Matos, Statistical experimental design screening strategies for free-monomeric isocyanates determination by UPLC in materials used in cork stoppers manufacturing, Acta IMEKO, vol. 6, no. 1, article 8, April 2017, identifier: IMEKO-ACTA-06 (2017)-01-08
Section Editor: Janaína Marques Rodrigues, INMETRO, Brazil
Received June 15, 2016; in final form November 18, 2016; published April 2017
Copyright: © 2017 IMEKO.
This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
Funding: This work was supported by project QREN 5012-LIRACORK
Corresponding author: Ana Sofia Matos, e-mail: asvm@fct.unl.pt

Abstract
A statistical experimental design was used to screen variables of the analytical procedure for quantifying free monomeric isocyanates present in trace amounts in polyurethane-based pre-polymers. For this purpose, diphenylmethane-4,4'-diisocyanate (4,4'-MDI), 2,4-toluene diisocyanate (2,4-TDI) and 2,6-toluene diisocyanate (2,6-TDI) were analysed by ultra-performance liquid chromatography with a photodiode array detector (UPLC-PDA). A preliminary study was performed with three derivatization agents, of which 1-(2-pyridyl)piperazine (1,2-PP) proved the most suitable. Column temperature, flow and percentage of ammonium acetate (% NH4Ac.) were the factors studied, at two levels each. A sequence of experiments was planned according to a 2³ full factorial design with three replicates and two repetitions. Analysis of variance (ANOVA) was applied for the identification of significant factors and interactions. Higher responses were achieved with a column temperature of 30 °C, a flow of 0.3 ml min⁻¹ and a solvent with an ammonium acetate percentage of 0.1 %. Figures of merit were assessed within-laboratory as a preliminary step towards method validation. Similar values were obtained for TDI and MDI, with recoveries of approximately 100 %. The limits of detection (LODs) for MDI and TDI were 0.08 and 0.11 µg ml⁻¹, respectively, and the limits of quantification (LOQs) were 0.25 and 0.33 µg ml⁻¹. The working range was 0.01–10.00 µg ml⁻¹ for MDI and 0.01–4.95 µg ml⁻¹ for TDI. These figures of merit appear adequate to detect low amounts of free monomeric isocyanates present in agglomerates and foams for agglomerated cork stopper production, and the data are suitable for addressing the optimization of the analytical method by response surface methodology.

1. Introduction
Cork stoppers are considered the premium material for sealing bottled wine. The main reasons are their high compressibility, flexibility and impermeability, which protect wine from oxidation and deterioration. Moreover, cork stoppers are a natural and therefore environmentally friendly product. To avoid wasting residues from the manufacturing process, sub-products such as agglomerated cork stoppers can be produced. The granulated cork is used together with binders that, according to the International Code of Cork Stoppers Manufacturing Practices [1], must be polyurethane based (food grade). During manufacture, residual unpolymerized isocyanate monomer can remain in the polymer and may migrate into wine [2].
Isocyanates are very reactive chemical compounds containing one or more functional groups of the type –N=C=O. These compounds are classified by the number of functional groups they contain: mono-, di- or polyisocyanates according to whether they contain one, two or more groups, respectively (Figure 1 a)–e)). When these compounds are found in the working environment at high doses there is a critical effect on health, possibly causing irritation of the eyes, skin and respiratory system. For this reason, workers in this type of industry should take special care to protect the eyes and skin and to avoid inhalation of vapours. After long periods of exposure, humans may become sensitized to extremely low concentrations. Coughing, wheezing, chest discomfort, edema and interstitial pulmonary fibrosis are some of the effects that have been associated with the inhalation of isocyanates. The effects of these compounds on health are therefore very well described in the specific documentation on this subject, where the limits for exposure are also documented [3]. Owing to these health effects, the limits are low in some European countries (< 20 mg l⁻¹).

High-performance liquid chromatography has been widely employed for the analysis of isocyanates [2], [4]–[6]. However, this technique requires long run times and large amounts of solvents. The emergence of ultra-performance liquid chromatography (UPLC) was an important advance in chromatographic performance, its main improvement being the use of columns packed with particles smaller than 2 µm. The Van Deemter equation indicates that, as the particle size decreases below 2.5 µm, there is a significant gain in efficiency, and that this efficiency does not diminish at increased flow rates or linear velocities [7], [8]. On the other hand, as particle size decreases the operating pressure increases and lower flow rates can be used. With this progress it is possible to decrease the theoretical plate height, which allows overall separation with better resolution.
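For reference, the Van Deemter relation invoked above is usually written as follows; this is the standard textbook form, added here for clarity rather than reproduced from the paper:

$$ H \;=\; A \;+\; \frac{B}{u} \;+\; C\,u $$

where $H$ is the theoretical plate height, $u$ the mobile-phase linear velocity, $A$ the eddy-diffusion contribution (which scales with particle size $d_p$), $B$ the longitudinal-diffusion contribution and $C$ the resistance-to-mass-transfer contribution (which scales with $d_p^2$). Reducing $d_p$ below 2 µm therefore lowers the attainable $H$ and flattens the curve at high $u$, which is the efficiency gain UPLC exploits.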
Due to the high reactivity of isocyanates with water, derivatization of the isocyanates is crucial. Several derivatization agents have been reported in the literature; the reagents most frequently used are MAMA, 1,2-PP and DBA, among others (Figure 1 f)–h)) [9].

The ability of a chromatographic method to separate, identify and quantify compounds satisfactorily depends on several factors that can be controlled by the operator. The design of experiments (DOE) can often be a powerful tool that allows the analyst to identify which factors, or interactions of factors, affect the chromatographic method, driving it towards optimization [10]–[14]. Whatever the application area, the design of experiments ensures that results are obtained more effectively, since it considers the variation of the factors in combination, together with their interactions [15]–[17]. The design of experiments structures the order of trials to translate the objectives predefined by the investigator. DOE has two applications in chromatography: the first is to show that no factors under study are significant, and thus verify the robustness of the method for validation; the second, used in this study, is to identify which factors are significant and to optimize the response taking these factors into account [12]. Analysis of variance (ANOVA) is used for the statistical treatment of the results after performing the experiments. This variance analysis allows objective identification of the factors and/or interactions that significantly affect the responses. After identifying these factors (alone or in interaction), the best combination of factor levels leading to the maximization of the pre-established objectives is established. As far as we know, UPLC methods have not been used to quantify isocyanates at low concentration levels (< 1 mg kg⁻¹).

In our previous work [18], the Taguchi method was applied to the determination of free isocyanates by UPLC, using MAMA as the derivatization agent. However, MAMA does not seem suitable for low concentrations due to a higher limit of detection [19]. The main goal of this study was to evaluate, through a screening process, the effect of three controllable factors on a chromatographic method to determine free diisocyanates present at low concentration levels in cork stoppers.

2. Materials and methods
2.1. Reagents and chemical standards
All solvents used were of HPLC or superior grade with purity higher than 96 %. 2,4-toluene diisocyanate (2,4-TDI) (96 %), 2,6-toluene diisocyanate (2,6-TDI) (98.5 %) and hexamethylene diisocyanate (HDI) (99 %) from Ehrenstorfer; a mixture of 2,4-toluene diisocyanate (80 %) and 2,6-toluene diisocyanate (20 %) (T80) from Bayer; diphenylmethane-4,4'-diisocyanate (4,4'-MDI) (98 %), 9-(methylaminomethyl)anthracene (MAMA) (99 %), 1-(2-pyridyl)piperazine (1,2-PP) (99.5 %) and dibutylamine (DBA) (99.5 %) from Aldrich; triethylamine (TEA) (99 %) from Alfa Aesar; dimethyl sulfoxide (DMSO) from Sigma-Aldrich; dichloromethane (DCM), acetonitrile (ACN), ammonium acetate (NH4Ac.), formic acid (98-100 %), isooctane and ethanol from Merck; orthophosphoric acid (85 %) and glacial acetic acid from Panreac; N-N'-dimethylformamide (DMF) from Emsure. The ultra-pure water used for all purposes was obtained with a Milli-Q Element system from Millipore (Interface, Portugal).

2.2. Derivatization and sample preparation procedures
2.2.1. MAMA derivatives
MAMA derivatives were prepared at a concentration of 100 µg ml⁻¹ in DCM. Working standards were derivatized with MAMA for one hour in the dark, evaporated to dryness under nitrogen flow and redissolved in 10 ml of DMF:mobile phase 50:50.

2.2.2. 1,2-PP derivatives
1,2-PP derivatives were prepared at a concentration of 500 µg ml⁻¹ in DCM and derivatized with 50 µl of 1,2-PP. These stock solutions were derivatized for one hour. Intermediate standard solutions were prepared, by dilution, at a concentration of 80 µg ml⁻¹ in ACN:DMSO (95:5). Working standards were prepared, by dilution with ACN, to a concentration of 8 µg ml⁻¹.

2.2.3. DBA derivatives
A solution of DBA was prepared in ACN at a concentration of 5000 µg ml⁻¹. The isocyanate solution was prepared in isooctane at a concentration of 500 µg ml⁻¹ and added dropwise to the DBA solution with continuous stirring. A rotating evaporator was used to evaporate this mixture to dryness. After removing the DBA surplus under vacuum, the precipitate was recrystallized with 70 % aqueous ethanol. Accurately weighed amounts of working solution were dissolved in acetonitrile at a concentration of 1000 µg ml⁻¹ and further diluted in ACN:water (50:50) to a concentration of 100 µg ml⁻¹.

2.2.4. Internal standard
HDI was used as internal standard (IS) for the MDI determination and 1-naphthyl isocyanate was used as IS for the TDIs determination. Solutions of HDI and 1-naphthyl isocyanate were prepared at concentrations of 2500 and 5000 µg ml⁻¹ in DCM, respectively. Intermediate standard solutions were prepared by dilution in ACN:DMSO and adding 100 µl of 1,2-PP. These solutions were derivatized under the same conditions as the standards and samples. Working solutions were prepared by dilution with ACN, and the same concentration of the internal standard solution was added to all working solutions.

2.2.5. Sample treatment
Samples were prepared by adding adhesive or spraying foam into DCM. Intermediate solutions were prepared by dilution in ACN:DMSO and adding 100 µl of 1,2-PP. These solutions were derivatized for two hours. Working solutions were prepared by dilution with ACN. All standard solutions were filtered into a vial through a GHP 0.2 µm syringe filter.

Figure 1. Structure of isocyanates and derivatization agents: a) 4,4'-diphenylmethane diisocyanate; b) 2,4-toluene diisocyanate; c) 2,6-toluene diisocyanate; d) hexamethylene diisocyanate; e) 1-naphthyl isocyanate; f) 9-(N-methylaminomethyl)anthracene; g) 1-(2-pyridyl)piperazine; h) dibutylamine.
2.3. Chromatographic conditions (UPLC-PDA)
An ACQUITY Ultra Performance LC system from Waters was used for the analysis of the target compounds with an ACQUITY UPLC BEH C18 (150 × 2.1 mm; 1.7 µm) column. The UPLC system has a photodiode array detector (PDA) (Waters, Milford, MA, USA) acquiring at 240 nm and 254 nm. Results are expressed as free-monomeric isocyanate (% FM). Table 1 summarizes the best chromatographic conditions achieved in pre-experiments.

Table 1. Chromatographic conditions for each derivatization agent.

Derivatization agent   Detector     Solvent A                     Solvent B                    Flow (ml min⁻¹)   Column temperature
MAMA                   PDA 254 nm   3 % triethylamine, pH = 3.0   Acetonitrile                 0.33              30 °C
1,2-PP                 PDA 254 nm   0.1 % NH4Ac., pH = 6.0        Acetonitrile                 0.4               40 °C
DBA                    PDA 254 nm   ACN/H2O/HCOOH 5/95/0.05       ACN/H2O/HCOOH 95/5/0.05      0.35              30 °C

2.3.1. MAMA
The mobile phase for MAMA derivatives was 80 % acetonitrile and 20 % water with 3 % triethylamine, adjusted to pH 3 with orthophosphoric acid. The column temperature was 30 °C and the flow rate 0.3 ml min⁻¹ over 15 minutes.

2.3.2. 1,2-PP
For these solutions the mobile phase consisted of two solvents: the first was 0.1 % ammonium acetate adjusted to pH 6 with acetic acid, and the second was acetonitrile. Gradient elution using these two solutions was performed. The gradient was linear from 65:35 to 30:70 until 4.69 minutes, followed by another linear gradient to 5:95 until 5.16 minutes, then isocratic elution at 5:95 for an additional 1.87 minutes, returning to the initial conditions for a total run time of 10 minutes. The column temperature was 40 °C and the flow 0.4 ml min⁻¹.

2.3.3. DBA
The mobile phase for these solutions contained acetonitrile/water/formic acid (v/v/v): (A) 5/95/0.05 and (B) 95/5/0.05. Elution was performed using a linear gradient from 40 % B to 100 % B in 12 minutes, followed by isocratic elution with 100 % B for an additional 3 minutes. The column temperature was 30 °C and the flow 0.35 ml min⁻¹.

2.4. Design of experiments
In the present work three variables were identified as potential parameters affecting the chromatographic conditions for isocyanate analysis. The three independent variables (controllable factors) were column temperature (Tcol), flow (Flow) and percentage of ammonium acetate in solvent A of the mobile phase (Sol.). Each variable was studied at two levels. In order to evaluate and screen the complete set of main factors and their corresponding interactions, a full factorial design with two levels for each factor was used. The levels of each factor, together with the coded symbols and corresponding units, are summarized in Table 2.

Table 2. Controllable factors (independent variables) with coded symbol, units and the corresponding levels.

Variable             Coded symbol   Units       Low (-)   High (+)
Column temperature   Tcol           °C          30        40
Flow                 Flow           ml min⁻¹    0.3       0.4
Solvent              Sol.           % NH4Ac.    0.01      0.1

To control drift and avoid biases, the full factorial design was carried out in a pseudo-random order, using Flow as a blocking variable. A total of 8 experiments were performed in triplicate in a random sequence, each injected two times. The experiments were reordered so as to decrease the total run time and, consequently, the amount of mobile phase consumed. The design matrix is presented in standard order (Table 3), together with the results obtained for the two dependent variables analyzed (responses: peak area and resolution between two adjacent peaks).
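A minimal Python sketch of the 2³ full factorial layout described above is given below; the run ordering produced by itertools differs from the randomized, blocked sequence actually executed, and the dictionary keys are illustrative names, not code from the study.

```python
import itertools

# factor levels in actual units, keyed by the Table 2 coded symbols
levels = {"Tcol": (30, 40), "Flow": (0.3, 0.4), "Sol": (0.01, 0.1)}

# 8 runs: every combination of the coded levels (-1, +1)
design = list(itertools.product((-1, 1), repeat=3))
for run in design:
    actual = {name: levels[name][(code + 1) // 2]
              for name, code in zip(levels, run)}
    print(run, actual)
```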
With the study of the peak area, the objective was to maximize the response so as to detect the highest possible content. To reinforce the screening process, the study of the resolution between peaks allows identifying the best combination of factor levels leading to a better peak separation between the three isocyanates under analysis. The resolution between peaks is defined by

$$ R_s = \frac{2\,(t_{r2}-t_{r1})}{w_1+w_2}\,, \qquad (1) $$

where $t_{r1}$ and $t_{r2}$ are the retention times of compounds 1 and 2, in minutes, and $w_1$ and $w_2$ are the corresponding peak widths at half peak height.

All the statistical analysis was performed using the Minitab 16 (Minitab Inc., Cité Paradis, Paris, France) statistical software package, which included the analysis of variance (ANOVA) and the determination of the polynomial coefficients to be included in the predictive equations for both dependent variables. All ANOVA assumptions (model errors independent, normally distributed and with constant variance) were validated through a complete residual analysis. Statistical significance was established at a p-value < 0.05 for all statistical tests applied.

3. Results and discussion
3.1. Preliminary experiments
In this study only diisocyanates were used, so the amount of derivatization reagent had to be at least 2.1 times the amount of isocyanate. A reaction kinetics study (results not shown) was also performed to determine the optimum time for the reaction to be complete. To understand which derivatization reagent was most appropriate for determining isocyanates, preliminary experiments were performed, using the layout of the chromatogram to assess the most suitable derivatizing agent and chromatographic conditions. The best result for each derivatization reagent is summarized in Table 1. The preliminary experiments revealed that 1,2-PP was the most appropriate derivatizing agent for isocyanate analysis: with this agent, the chromatogram layout showed less noise and better peak shape. The screening study was therefore carried out only with 1,2-PP.
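A direct transcription of (1) into code follows; the retention times are taken from values reported later in the paper, while the half-height widths are made-up numbers for illustration.

```python
def resolution(tr1, tr2, w1, w2):
    """Peak resolution per (1): retention times tr in minutes and
    peak widths w at half peak height, in the same units."""
    return 2.0 * (tr2 - tr1) / (w1 + w2)

# hypothetical widths of 0.08 min for the 2,6-TDI / 2,4-TDI pair
print(resolution(2.42, 2.66, 0.08, 0.08))   # -> 3.0 with these values
```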
Table 3. Factorial design matrix (standard order) with the corresponding results for peak area and resolution between peaks. The letters A, B and C correspond respectively to column temperature (Tcol), flow (Flow) and percentage of ammonium acetate in solvent A of the mobile phase (Sol.); low and high levels are denoted by "-" and "+".

Peak area per isocyanate (three replicates per run):

Run   A  B  C   2,6-TDI                    2,4-TDI                       MDI
(1)   -  -  -   955363  956056  961789     4513670  4522689  4502456    1158609  1151364  1150963
a     +  -  -   964051  965921  943387     4495047  4509529  4486964    1060813  1055479  1076169
b     -  +  -   721916  717805  721036     3373708  3384527  3384071    861751   866549   818431
ab    +  +  -   722704  721636  723888     3377115  3384068  3383109    828270   831404   829637
c     -  -  +   1006489 1004554 1004636    4613640  4603758  4612637    1166362  1163986  1168146
ac    +  -  +   1004989 1004476 1004450    4601103  4592920  4590992    1076628  1089310  1090145
bc    -  +  +   744180  738828  751788     3453850  3454082  3458858    872285   870533   874780
abc   +  +  +   750909  750530  748874     3446738  3442698  3444175    837155   830967   836409

Resolution between adjacent peaks (three replicates per run):

Run   A  B  C   Res(2,6-TDI – 2,4-TDI)     Res(2,4-TDI – MDI)
(1)   -  -  -   3.3915  3.4091  3.5397     7.0381  7.0463  7.1022
a     +  -  -   1.9474  1.9382  2.1732     9.2782  9.1930  9.4726
b     -  +  -   1.8532  1.8511  1.8973     7.0609  6.8366  6.9113
ab    +  +  -   2.0281  2.0343  2.0734     8.9455  8.6981  8.8900
c     -  -  +   2.2989  2.2892  2.3131     7.4277  7.3511  7.1057
ac    +  -  +   2.2180  2.0364  2.0433     9.8617  9.7837  9.7918
bc    -  +  +   2.0923  2.1383  1.8169     6.7289  6.6081  6.6600
abc   +  +  +   2.2171  2.1026  2.1767     9.4926  9.7007  9.3122

3.2. Design of experiments
A full factorial design (FFD) was performed to screen the chromatographic conditions and to provide a better understanding of the interactions between critical factors. The FFD analysis was run with the three factors shown in Table 2, namely column temperature (Tcol), flow (Flow) and ammonium acetate percentage in solvent A of the mobile phase (Sol.). Results from the preliminary experiments were used to select the higher level (+). The lower level (-) was defined according to UPLC characteristics, taking into account that UPLC columns are packed with solid-phase particles smaller than those used in HPLC, which yields a lower theoretical plate height and higher analyte separation. Two wavelengths for peak detection were selected (OSHA 254 nm; EPA CTM-036A 240 nm) [20], [21]. The quality of each result was defined by comparing peak areas or the resolution between peaks under the same chromatographic conditions with detection at 254 nm and 240 nm.

The analysis of variance (ANOVA) was applied directly to the responses (y) shown in Table 3, as well as to the transformed response given by a logarithmic transformation of the variance (−10 log s²), applied only to the peak area. The first analysis identifies the factors that significantly affect the two quality characteristics under study (peak area and resolution between adjacent peaks), obtained at the appropriate wavelength (location effects), whereas the second analysis investigates which factors affect the peak area variability (dispersion effects) [16], [23]. Tables 4 and 5 present the ANOVA results for the location effects of the three isocyanates, respectively for the responses peak area and resolution between peaks. The significance of the factors and interactions, as well as of the lack of fit, was evaluated through the p-value and the F-distribution values. The analysis of the lack of fit (smallest p-value of 0.164, for the 2,4-TDI peak area) allowed concluding that the controllable factors chosen for this screening process were correct, explaining the observed data variability expressed in Table 3. An exception was identified in Table 5 for the response "resolution between 2,6-TDI and 2,4-TDI peaks", where this quality parameter (lack of fit) is missing because all factors and interactions revealed high levels of significance (p-value < 0.000).
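As a sketch of the location-effect estimation behind the ANOVA, the main effect of each factor can be computed as the difference between the mean responses at the high and low levels; the values below are the per-run means of the 2,6-TDI replicates from Table 3, and the simple effect calculation is an illustration rather than the Minitab procedure used in the study.

```python
import numpy as np

# coded design (standard order) and mean 2,6-TDI peak areas per run
X = np.array([[-1, -1, -1], [1, -1, -1], [-1, 1, -1], [1, 1, -1],
              [-1, -1, 1], [1, -1, 1], [-1, 1, 1], [1, 1, 1]], dtype=float)
y = np.array([957736, 957786, 720252, 722743,
              1005226, 1004638, 744932, 750104], dtype=float)

for j, name in enumerate(("Tcol", "Flow", "Sol")):
    effect = y[X[:, j] > 0].mean() - y[X[:, j] < 0].mean()
    print(f"{name}: main effect = {effect:.0f}")
# Flow dominates with a large negative effect, matching the ANOVA result.
```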
As can be seen in Table 4, and for all isocyanates, flow was the factor contributing most to maximizing the peak area, with a percentage contribution to the total variability of 97.42 %, 99.42 % and 93.56 %, respectively, for 2,6-TDI, 2,4-TDI and MDI. The strong influence of this factor can be explained by the analyte – mobile phase – stationary phase interaction. The solvent also evidenced a significant effect on the peak area, but with considerably less impact. Regarding column temperature, this factor does not reveal a significant effect on 2,6-TDI. To select the chromatographic conditions when two or more isocyanates are present, an evaluation of the resolution between peaks was performed (Table 5).

Table 4. ANOVA results obtained for the peak areas of the three isocyanates (2,6-TDI, 2,4-TDI and MDI). (SS: sum of squares; DoF: degrees of freedom; MS: mean squares; F0: factor-to-error mean squares ratio; p-value: level of significance of each factor.)

Source of variation for 2,6-TDI area   SS         DoF   MS         F0         p-value
Flow                                   3.66e+11   1     3.66e+11   14 900     0.000
Sol.                                   8.04e+09   1     8.04e+09   328        0.000
Flow x Sol.                            6.71e+08   1     6.71e+08   27.4       0.000
Residual error                         4.89e+08   20    2.45e+07
  Lack of fit                          5.00e+07   4     1.25e+07   0.45       0.768
  Pure error                           4.39e+08   16    2.75e+07
Total                                  3.75e+11   23

Source of variation for 2,4-TDI area   SS         DoF   MS         F0         p-value
Tcol                                   6.35e+08   1     6.35e+08   12.4       0.002
Flow                                   7.77e+12   1     7.77e+12   1.52e+05   0.000
Sol.                                   4.15e+10   1     4.15e+10   813        0.000
Flow x Sol.                            1.22e+09   1     1.22e+09   23.8       0.000
Residual error                         9.70e+08   19    5.11e+07
  Lack of fit                          2.59e+08   3     8.63e+07   1.94       0.164
  Pure error                           7.11e+08   16    4.45e+07
Total                                  7.82e+12   23

Source of variation for MDI area       SS         DoF   MS         F0         p-value
Tcol                                   1.93e+10   1     1.93e+10   170.4      0.000
Flow                                   4.40e+11   1     4.40e+11   3876       0.000
Sol.                                   1.46e+09   1     1.46e+09   12.9       0.002
Tcol x Flow                            4.83e+09   1     4.83e+09   42.5       0.000
Residual error                         2.16e+09   19    1.14e+08
  Lack of fit                          3.24e+08   3     1.08e+08   0.94       0.443
  Pure error                           1.83e+09   16    1.15e+08
Total                                  4.68e+11   23
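As a quick cross-check of the contribution percentages quoted above, each source's share of the total variability is SS(source)/SS(total); with the rounded SS values from Table 4 the Flow share for 2,6-TDI comes out at about 97.6 %, slightly off the quoted 97.42 % because of rounding in the printed table.

```python
# percentage contribution of each source to total 2,6-TDI area variability
ss = {"Flow": 3.66e11, "Sol.": 8.04e9, "Flow x Sol.": 6.71e8,
      "Residual error": 4.89e8}
total = 3.75e11
for source, value in ss.items():
    print(f"{source}: {100.0 * value / total:.2f} %")
```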
With significant contributions, the flow (19.14 %) as well as the interaction between flow and column temperature (23.88 %) have a significant impact on the separation of the two TDI isomers. For maximizing the resolution between 2,4-TDI and MDI, the most significant factor was column temperature, with a relative contribution of 92.74 %. These two peaks were well defined, since they have a resolution value higher than 1.5. The column temperature having such a significant value suggests that no other factor is important for this separation.

Based on the ANOVA results (Tables 4 and 5), five polynomial models were constructed, in which only factors and interactions with a p-value smaller than 0.05 were included for prediction under varying conditions (Table 6). The first three prediction models, in coded terms, consider the peak area of each of the three isocyanates (2,6-TDI, 2,4-TDI and MDI). Regarding peak resolution, the two predictive models obtained, in coded terms, are expressed in the last two lines of Table 6 and correspond to the peak resolution between 2,6-TDI and 2,4-TDI, denoted by Res(2,6-TDI – 2,4-TDI), and between 2,4-TDI and MDI, denoted by Res(2,4-TDI – MDI). The quality of the developed models was assessed by the R², predicted R² and adjusted R² values, shown in Table 6. Since the reported values for these parameters are considerably high (> 97 %), the models developed through this experimental design strategy were well conducted, allowing prediction of the optimum chromatographic conditions for the three isocyanates under study at low concentration levels.

Still within the same screening strategy, an analysis of variance was performed using a log transformation of the peak area variance, to identify which factors and/or interactions could significantly reduce the variability present within the data (dispersion effects). This analysis could lend more robustness to the peak area determination. The results for 2,4-TDI and MDI are shown in Figure 2 through a Pareto chart, where the bars of significant factors and interactions appear above the reference line (p-value < 0.05); each bar represents the standardized effect estimate in absolute value. For 2,6-TDI no significant effect was identified, meaning that the data variability due to the dispersion within each experiment could not be explained by any of these factors or their interactions. Regarding 2,4-TDI and MDI, it is clear that all factors (isolated or in interaction) can contribute significantly to reducing the variability within the data, thus increasing data robustness and improving data accuracy.

Table 5. ANOVA results obtained for the resolution of 2,6-TDI – 2,4-TDI and 2,4-TDI – MDI. (SS: sum of squares; DoF: degrees of freedom; MS: mean squares; F0: factor-to-error mean squares ratio; p-value: level of significance of each factor.)

Source of variation for Res(2,6-TDI – 2,4-TDI)   SS       DoF   MS       F0      p-value
Tcol                                             0.6344   1     0.6344   72.6    0.000
Flow                                             1.1778   1     1.1778   134.9   0.000
Sol.                                             0.2387   1     0.2387   27.3    0.000
Tcol x Flow                                      1.4348   1     1.4348   164.3   0.000
Tcol x Sol.                                      0.5378   1     0.5378   61.6    0.000
Flow x Sol.                                      0.6689   1     0.6689   76.6    0.000
Tcol x Flow x Sol.                               0.5901   1     0.5901   67.6    0.000
Residual error                                   0.1397   16    0.0087
Total                                            5.4222   23

Source of variation for Res(2,4-TDI – MDI)       SS        DoF   MS        F0       p-value
Tcol                                             33.9464   1     33.9464   2099.3   0.000
Flow                                             0.8844    1     0.8844    54.7     0.000
Sol.                                             0.4680    1     0.4680    28.9     0.000
Tcol x Sol.                                      0.5338    1     0.5338    33.0     0.000
Tcol x Flow x Sol.                               0.1647    1     0.1647    10.2     0.005
Residual error                                   0.2911    18    0.0162
  Lack of fit                                    0.0446    2     0.0223    1.4      0.265
  Pure error                                     0.2465    16    0.0154
Total                                            36.2884   23

Table 6. Final model equations (coded terms) with summary statistics (R²; predicted R²; adjusted R²).

Area(2,6-TDI) = 8.58e+5 - 1.23e+5 Flow + 1.83e+4 Sol. - 5.29e+3 Flow x Sol.   (99.87 %; 99.81 %; 99.85 %)
Area(2,4-TDI) = 3.98e+6 - 5.14e+3 Tcol - 5.69e+5 Flow + 4.16e+4 Sol. - 7.12e+3 Flow x Sol.   (99.88 %; 99.74 %; 99.98 %)
Area(MDI) = 9.82e+5 - 2.84e+4 Tcol - 1.35e+5 Flow + 7.80e+3 Sol. + 1.42e+4 Tcol x Flow   (99.54 %; 99.26 %; 99.44 %)
Res(2,6-TDI – 2,4-TDI) = 2.2450 - 0.1626 Tcol - 0.2215 Flow - 0.0997 Sol. + 0.2445 Tcol x Flow + 0.1497 Tcol x Sol. + 0.1669 Flow x Sol. - 0.1568 Tcol x Flow x Sol.   (97.42 %; 94.20 %; 96.30 %)
Res(2,4-TDI – MDI) = 8.1790 + 1.1893 Tcol - 0.1920 Flow + 0.1396 Sol. + 0.1491 Tcol x Sol. + 0.0828 Tcol x Flow x Sol.   (99.20 %; 98.57 %; 98.98 %)

Note that for all the analyses of variance involved in this study a complete residual analysis was performed and model adequacy was checked.
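A short sketch evaluating one of the Table 6 models at actual factor settings follows; the (-1, +1) coding transform over the Table 2 ranges is the standard convention assumed here, and the helper names are illustrative.

```python
def coded(value, low, high):
    """Map an actual setting onto the -1..+1 coded scale of Table 2."""
    return (2.0 * value - (high + low)) / (high - low)

def area_26tdi(flow, sol):
    """Table 6 model for the 2,6-TDI peak area; column temperature does
    not appear because it was not significant for this response."""
    f, s = coded(flow, 0.3, 0.4), coded(sol, 0.01, 0.1)
    return 8.58e5 - 1.23e5 * f + 1.83e4 * s - 5.29e3 * f * s

# at 0.3 ml/min and 0.1 % NH4Ac. the model returns about 1.0e6,
# consistent with the run "c" replicates in Table 3
print(area_26tdi(0.3, 0.1))
```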
3.3. Selection of the optimal conditions
Based on the predictive models shown in Table 6, developed to improve free isocyanate determination at low concentrations, together with the Pareto chart information (Figure 2), a screening process was carried out to identify the best combination of factor levels. To reach a more accurate definition of the isocyanate peaks, the three dependent responses (peak area, resolution and variability) must be maximized. The signs of the isolated factor coefficients in the model equations (Table 6) indicate as the best combination: column temperature at 30 °C, flow at 0.3 ml min⁻¹ and solvent at 0.1 % NH4Ac. The only exceptions identified concerned the two peak resolutions: for Res(2,6-TDI – 2,4-TDI) the solvent coefficient points to the low level (0.01 % NH4Ac.), and for Res(2,4-TDI – MDI) to a column temperature of 40 °C. The first conflict (Sol.) can be ignored, since the solvent shows the smallest F value in the ANOVA table (F0 = 27.3), with a relative contribution to the total variability of less than 2 %. Regarding the second conflict (Tcol at 40 °C), the authors considered it important to analyse the situation differently: while the column temperature has a relative contribution smaller than 4 % to the peak area of both isocyanates (2,4-TDI and MDI), for the resolution between these two peaks it is the most contributive factor, with 92.7 %. With a column temperature of 40 °C, a flow of 0.3 ml min⁻¹ and a solvent of 0.1 % NH4Ac., the retention time was 2.6 min for 2,4-TDI and 3.3 min for MDI at 240 nm. These were considered the optimum values for the 2,4-TDI and MDI resolution. To analyse the significant effects of the interactions and of those considered in the log transformation of the variance, an analysis of means (ANOM) was performed, but no level conflict was identified (results not shown).

3.4. Method performance
Since validation guidelines have not been established for isocyanate analysis by UPLC, nor for the use of 1,2-PP as a derivatizing agent, an in-house single-laboratory validation procedure based on the Codex Alimentarius [24] was implemented to assess the performance of the method. Parameters such as calibration curve, linearity, working range, limits of detection and quantification, reproducibility and recovery were evaluated, and the method to analyze isocyanates present in materials in contact with foodstuff was validated. The linearity of the method was evaluated with calibration curves using 40 calibration points for MDI and 15 calibration points for TDI within the corresponding working range (Table 7). The correlation coefficient was higher than 0.999 for both compounds. The limit of quantification was used to assess method sensitivity. The calculation of the limit of detection (LOD) and limit of quantification (LOQ) values was based on the calibration curve [25]. All experimental values were within the acceptance criteria (residual standard deviation (RSD) < 15 %). Due to the lack of a reference material, trueness was estimated in terms of internal standard recovery, by evaluating several concentration levels during ten days for the MDI analysis and five days for TDI.

3.5. Applicability
The results of the method applicability study are presented in Table 8. The samples, polyurethane adhesives based on TDI and MDI and MDI-based polyurethane foams, obtained from the manufacturers and produced on different days, were analyzed.
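Relating to the calibration-based limit estimates of Section 3.4, a minimal sketch is shown below. The paper cites a calibration-curve calculation [25] without giving the exact formula, so the common 3.3·σ/S and 10·σ/S convention is assumed here, and the calibration data are synthetic placeholders.

```python
import numpy as np

def lod_loq(conc, signal):
    """LOD/LOQ from a linear calibration: sigma is the residual standard
    deviation of the fit and S its slope (assumed 3.3*sigma/S, 10*sigma/S
    convention)."""
    slope, intercept = np.polyfit(conc, signal, 1)
    resid = signal - (slope * conc + intercept)
    sigma = np.std(resid, ddof=2)
    return 3.3 * sigma / slope, 10.0 * sigma / slope

conc = np.array([0.05, 0.1, 0.5, 1.0, 2.0, 5.0, 10.0])   # µg/ml, hypothetical
signal = 1e8 * conc + np.random.default_rng(0).normal(0, 2e6, conc.size)
print(lod_loq(conc, signal))
```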
In the TDI analysis the internal standard used was 1-naphthyl isocyanate, and all quantifications were determined with the TDI/1-naphthyl ratio. The peaks were well resolved, the retention times being 2.42 min for 2,6-TDI, 2.66 min for 2,4-TDI and 3.07 min for 1-naphthyl, as shown in Figure 3. The validation parameters were adequate, with good linearity and recoveries within our acceptance criteria (80 %–120 %). HDI was the internal standard for the MDI analysis; the retention time was 2.45 min for HDI and 3.34 min for MDI, so these peaks were also well resolved (Figure 3). The recoveries were good for the MDI/HDI ratio, so for these peaks there is no matrix or other interference, and the linearity was good for both MDI-based matrices. A maximum of 2 % FM-TDI was found in the TDI-based adhesives. The FM-MDI content in adhesives ranged from 3 % to 6 %, and the amount of FM-MDI in the MDI-based foams was below 1 %. These values were compared with the theoretically expected values and with experimental values obtained by titration carried out in another laboratory; the values were not statistically different. The methods proved to be efficient, because the real samples' content matched the expected values [26]. A further advantage is that these pre-polymers can be used in the agglomerated cork stopper industry.

Table 7. Method performance.

      Calibration curve       Linearity   Working range (µg ml⁻¹)   LOD (µg ml⁻¹)   LOQ (µg ml⁻¹)   Reproducibility (%)   Recovery (%)
MDI   y = 1e+08x + 0.0113     1           0.01 – 10.00              0.08            0.25            11                    101.32
TDI   y = 3e+08x + 0.0213     0.9998      0.01 – 4.95               0.11            0.33            2                     100.20

Table 8. Quantification of free isocyanate content from polyurethane pre-polymers in several matrices.

TDI:
             Calibration curve      Linearity   Working range (µg ml⁻¹)   LOD    LOQ    Repeatability (%)   Content (%)    Recovery (%)
Adhesive 1   y = 2e+08x + 0.0085    0.9995      0.01 – 7.87               0.29   0.86   4.3                 1.54 ± 0.56    100.65
Adhesive 2   y = 3e+08x + 0.1301    0.9992      0.01 – 9.89               0.46   1.40   5.0                 0.76 ± 0.16    100.57

MDI:
             Calibration curve      Linearity   Working range (µg ml⁻¹)   LOD    LOQ    Repeatability (%)   Content (%)    Recovery (%)
Adhesive 3   y = 1e+08x - 0.0133    0.9997      0.01 – 9.96               0.26   0.79   2.8                 4.86 ± 1.25    106.57
Adhesive 4   y = 1e+08x - 0.0133    0.9997      0.01 – 9.96               0.26   0.79   3.4                 5.90 ± 0.83    105.00
Adhesive 5   y = 1e+08x - 0.0133    0.9997      0.01 – 9.96               0.26   0.79   0.8                 4.57 ± 0.11    111.72
Adhesive 6   y = 1e+08x - 0.0133    0.9997      0.01 – 9.96               0.26   0.79   3.0                 3.31 ± 1.62    105.49
Foam 1       y = 1e+08x - 0.0205    0.9995      0.01 – 9.01               0.33   1.01   0.5                 0.93 ± 0.02    100.40
Foam 2       y = 1e+08x - 0.0205    0.9995      0.01 – 9.01               0.33   1.01   1.4                 1.06 ± 0.07    101.38
Foam 3       y = 1e+08x - 0.0205    0.9995      0.01 – 9.01               0.33   1.01   0.4                 0.87 ± 0.10    100.62
Foam 4       y = 1e+08x - 0.0205    0.9995      0.01 – 9.01               0.33   1.01   0.4                 0.83 ± 0.07    101.44

Figure 2. Pareto plots evidencing the significant main factors and interactions for 2,4-TDI (left) and MDI (right), considering the log transformation of the peak area variance. Bars represent the standardized effect estimate in absolute value; the reference line corresponds to p = 0.05.
Figure 3. Chromatograms of three real samples: a) adhesive based on TDIs; b) foam based on MDI; c) adhesive based on MDI.

4. Conclusion
Three derivatization agents (MAMA, DBA and 1,2-PP) were studied before the experimental design. 1,2-PP was chosen for the DOE application since it was the one that provided the best results in the preliminary experiments.
The chromatographic method to determine free diisocyanates at low concentration levels should be performed with a column temperature of 30 °C, a flow of 0.3 ml min⁻¹ and an ammonium acetate concentration in the mobile phase of 0.1 % NH4Ac. The best wavelength of the PDA detector to quantify MDI was 254 nm, and for the TDIs it was 240 nm. Two methods were developed, since the MDI internal standard (HDI) has the same retention time as 2,6-TDI; moreover, from the design of experiments, the best combination for peak maximization was different for the analysis of MDI and of the TDIs. The results demonstrated that the methods are efficient in determining isocyanates at low concentration levels in materials in contact with foodstuff. For both methods, linearity was higher than 0.999 and recovery approximately 100 %. The studied pre-polymers are adequate as "food grade" materials, for agglomerated cork stoppers among other applications, since their FM-isocyanate content is below the theoretical levels.

Acknowledgments
Catarina André and Inês Delgado acknowledge the financial support of the Portuguese Innovation Agency (ADI) in the frame of project QREN 5012-LIRACORK. This work was supported by the Portuguese Innovation Agency (ADI) in the frame of project QREN 5012-LIRACORK.

References
[1] C.E. Liège, International Code of Cork Stopper Manufacturing Practices, 1996.
[2] A.P. Damant, S.M. Jickells, L. Castle, Liquid chromatographic determination of residual isocyanate monomers in plastics intended for food contact use, J. AOAC Int. 78, (1995), pp. 711-719.
[3] Health and Safety Laboratory, Methods for the determination of hazardous substances #25/3 Organic isocyanates in air (MDHS 25/3), (1999).
[4] British Standard, BS EN 13130-8:2004 Materials and articles in contact with foodstuffs - Plastics substances subject to limitation - Part 8: Determination of isocyanates in plastics, (2004).
[5] U.S. Environmental Protection Agency (EPA), Method 207 Method for measuring isocyanates in stationary source emissions, (2006).
[6] P. Tremblay, J. Lesage, C. Ostiguy, H. Van Tra, Investigation of the competitive rate of derivatization of several secondary amines with phenylisocyanate (PHI), hexamethylene-1,6-diisocyanate (HDI), 4,4′-methylenebis(phenyl isocyanate) (MDI) and toluene diisocyanate (TDI) in liquid medium, Analyst 128, (2003), pp. 142-149.
[7] K. Mastovska, Recent developments in chromatographic techniques, in: Y. Picó (ed.), Compr. Anal. Chem. Vol. 51, Food Contaminants and Residue Analysis, 1st ed., Elsevier B.V., (2008), pp. 175-200.
[8] M.E. Swartz, Ultra performance liquid chromatography (UPLC): an introduction, Sep. Sci. Redefined, May (2005), pp. 8-14.
[9] J.F. McGaughey, S.C. Foster, R.G. Merrill, Laboratory development and field evaluation of a generic method for sampling and analysis of isocyanates, EPA report, 1995.
[10] T. Lundstedt, E. Seifert, L. Abramo, B. Thelin, Å. Nyström, J. Pettersen, et al., Experimental design and optimization, Chemom. Intell. Lab. Syst. 42, (1998), pp. 3-40.
[11] B.J. Stojanović, Factorial-based designs in liquid chromatography, Chromatographia 76, (2013), pp. 227-240.
[12] D.B. Hibbert, Experimental design in chromatography: a tutorial review, J. Chromatogr. B 910, (2012), pp. 2-13.
[13] W. Dewé, R.D. Marini, P. Chiap, P. Hubert, J. Crommen, B. Boulanger, Development of response models for optimising HPLC methods, Chemom. Intell. Lab. Syst. 74, (2004), pp. 263-268.
[14] X. Zhang, R. Wang, X. Yang, J. Yu, Central composite experimental design applied to the catalytic aromatization of isophorone to 3,5-xylenol, Chemom. Intell. Lab. Syst. 89, (2007), pp. 45-50.
[15] A.C. Atkinson, R.D. Tobias, Optimal experimental design in chromatography, J. Chromatogr. A 1177, (2008), pp. 1-11.
[16] D.C. Montgomery, Design and Analysis of Experiments, 5th ed., John Wiley & Sons, Inc., 1997, ISBN: 0-471-31649-0.
[17] N. Akvan, H. Parastar, Second-order calibration for simultaneous determination of pharmaceuticals in water samples by solid-phase extraction and fast high-performance liquid chromatography with diode array detector, Chemom. Intell. Lab. Syst. 137, (2014), pp. 146-154.
[18] C. André, F. Jorge, I. Castanheira, A. Matos, Optimizing UPLC isocyanate determination through a Taguchi experimental design approach, J. Chemom. 27, (2013), pp. 91-98.
[19] C.J. Purnell, R.F. Walker, Methods for the determination of atmospheric organic isocyanates. A review, Analyst 110, (1985), pp. 893-905.
[20] OSHA Analytical Laboratory, Method No. 47: Methylene bisphenyl isocyanate (MDI), Salt Lake City, 1989.
[21] OSHA Analytical Laboratory, Method No. 42: Diisocyanates, Salt Lake City, 1989.
[22] U.S. Environmental Protection Agency (EPA), CTM 036a Analysis of isocyanates liquid chromatography diode array / MSD, 2004.
[23] A. Ávila, E.I. Sánchez, M.I. Gutiérrez, Optimal experimental design applied to the dehydrochlorination of poly(vinyl chloride), Chemom. Intell. Lab. Syst. 77, (2005), pp. 247-250.
[24] WHO/FAO, Codex Alimentarius Commission Procedural Manual, 21st ed., Rome, Italy, 2013.
[25] P. Konieczka, J. Namiesnik, Quality Assurance and Quality Control in the Analytical Chemical Laboratory: a Practical Approach, CRC Press, 2009, ISBN 9781420082708.
[26] I. Delgado, C. André, A. Ramos, A.S. Matos, K. Stockham, S. Kumaran, et al., "Bracketing versus multipoint calibration in determination of isocyanates in agglomerated cork stoppers", Proc. of XX IMEKO World Congress "Green Growth", Sept. 9-14, 2012, Busan, Republic of Korea, vol. 1, pp. 696-699.

ACTA IMEKO, ISSN: 2221-870X, June 2015, Volume 4, Number 2, 90-99

A fast and low-cost vision-based line tracking measurement system for robotic vehicles

Daniele Fontanelli1, David Macii1, Tizar Rizano2
1 Department of Industrial Engineering, University of Trento, Via Sommarive 9, 38123 Trento, Italy
2 Department of Information Engineering and Computer Science, University of Trento, Via Sommarive 5, 38123 Trento, Italy

Section: Research Paper
Keywords: Kalman filters; vehicular technologies; image processing; direction estimation; real-time; random sample and consensus
Citation: Daniele Fontanelli, David Macii, Tizar Rizano, A fast and low-cost vision-based line tracking measurement system for robotic vehicles, Acta IMEKO, vol. 4, no. 2, article 16, June 2015, identifier: IMEKO-ACTA-04 (2015)-02-16
Editor: Paolo Carbone, University of Perugia, Italy
Received February 9, 2015; in final form June 12, 2015; published June 2015.
Copyright: © 2015 IMEKO.
This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
Corresponding author: David Macii, e-mail: david.macii@unitn.it

Abstract
Localization and tracking systems are nowadays a quite common solution for automated guided vehicles (AGV) in industrial environments. The set of technological solutions developed in this sector is now being revitalized by their application to service robots, e.g. for safe navigation in crowded public spaces or in autonomous cars. Related to this field are robotic competitions and races. In this particular scenario, the robot often has to track a line painted on the ground. Line tracking techniques typically rely on light dependent resistors (LDR), photo-diodes or photo-transistors detecting the light generated by normal or infrared light emitting diodes (LEDs), arrays of electric inductance sensors or vision systems. Crucial issues for line tracking are accuracy and reliability in estimating the direction of the moving vehicle with respect to the line, and processing speed. Such problems are particularly critical when high-speed mobile robots are considered. To address this issue, in this paper a vision-based technique of moderate computational complexity is described. The proposed solution has been implemented on a low-cost embedded platform and relies on a high-frame-rate light contrast sensor, a tailored RANSAC-based algorithm and a Kalman filter. The reported experimental results prove that the proposed solution is able to track the direction of the vehicle in real time even when the field of view of the camera is limited and the vehicle moves at high speed.

1. Introduction
Sensors and measurement science play a key role in the development of industrial automation and robotics [1]. One well-known, fundamental and crucial problem in robotics is path planning and tracking [2]. In order to address this issue, a robot first has to identify the wanted path by sensing the environment (e.g. by recognizing suitable landmarks and visual cues). Secondly, it should be able to steadily estimate its position with respect to the planned trajectory, so that the robot controller can efficiently follow the desired path [3], [4]. One of the simplest and most widely adopted solutions for this purpose, especially for automated guided vehicles (AGVs) in industrial environments, is line tracking [5], [6]. Line tracking is also often used to evaluate the ability of new robot prototypes to follow a given trajectory [7]. Typical sensors for this kind of application include reflective infrared light emitting diodes (LEDs) coupled with suitable photo-diodes, photo-transistors or light dependent resistors (LDRs) [8], [9], arrays of electric inductance sensors [10], and vision-based systems [11]-[13]. While optical or electric inductance sensors are very cheap, they also have a very limited reading range. Moreover, at least two such sensors are generally needed for correct line detection and to estimate whether the vehicle's position is left or right of the reference line. Vision-based line detection systems are instead advantageous because much information can be extracted from a sequence of collected pictures. Of course, in all cases sensor accuracy, range and speed are essential to ensure good performance and real-time behavior.

Stemming from industrial robotic applications, the technical solutions developed in this area of research have also been gradually applied to service robots, including autonomous guidance systems [14], [15], intelligent vehicles [16], [17], and road vehicles, both for hazard detection [18], [19] and for fully automatic guidance [20].
road vehicles, both for hazard detection [18], [19], and fully automatic guidance [20]. in this context, robotic vehicle competitions have considerably spurred the development of smart sensing systems. the main problem that still hinders the effectiveness of sensing solutions for position tracking in challenging scenarios (e.g. in robotic races) is the variability of the environment (especially in outdoor scenarios, where the speed of a robot can be considerably higher than indoors) and the possible lack of known cues. in this respect, vision-based solutions have gained an undisputed leading role mainly due to their flexibility and ability to estimate multiple quantities with a single measurement system. unfortunately, this increased flexibility comes at the price of an increased computational load as well. in this paper, we focus on a novel vision-based measurement technique and on a system prototype that perfectly fits the industrial domain, as it is able to improve accuracy, robustness and speed in estimating both the direction and the position of a robotic vehicle with respect to a painted line, thus supporting efficient control, as preliminarily reported in [21]. in general, the problem of line detection is closely related to the classic problem of line recognition in images [22], [23], although it is made more difficult by time-varying light conditions and by the robot's dynamics. in general, the performance of vision-based line detection systems is limited by robustness and speed issues, which in turn depend on camera frame rate, camera resolution and algorithm complexity. in this respect, one of the most famous and effective algorithms for line recognition is the so-called hough transform [24]. however, this algorithm needs line clustering in the image space and it is quite heavy from the computational point of view, although several optimizations have been proposed in recent years, e.g. through randomization [25], or hierarchical image partitioning [26]. as a result of the availability of increasingly powerful embedded platforms, several other image processing algorithms have been proposed over the last few years, e.g. based on customized image segmentation [27], fuzzy logic [28], or the viterbi algorithm [29]. all of them rely on standard cameras and are characterized by a significant computational burden. some of the solutions proposed in the literature for path detection and tracking combine specifically conceived image features [30], [31], known path models [32], statistical methods [33] or a combination of particle filtering and artificial intelligence [34]. another important field of research, which is somehow related to the problem at hand, but is more focused on lane and obstacle detection rather than on path tracking, is described, for instance, in [35]. the technique described in this paper is instead based on a simple algorithm, which works well also on binary low-resolution images, such as those collected by a special high-frame-rate light contrast sensor.
this sensor ensures a straightforward detection of the line edges (provided that the colors of the line and of the background are different enough) with minimum bandwidth and latency requirements. it is worth emphasizing that the light contrast sensor is supposed to look at the road surface (i.e. with the camera image plane approximately parallel to the ground), in order to recognize the painted line only. in this way, the probability of recording unwanted objects that may perturb line detection is much smaller than using a front camera. the proposed approach is explicitly conceived to support automated vehicle control over optimal paths [36], although the control problem is out of the scope of this paper. line detection is performed through a random sample and consensus (ransac) algorithm combined with a kalman filter. ransac is a general guess-and-test method that randomly chooses a hypothesis in the measurement space and then scores its trustworthiness a posteriori [37]. ransac has been used in many contexts, including trajectory estimation [38] and, more recently, even in road applications [39], [40]. however, the solution described in this paper is faster and it is expected to be more accurate and more robust than the basic ransac algorithm preliminarily described and analyzed through simulations in [40]. in fact, the additional kalman filter prevents sudden and unnatural jumps in the estimated vehicle direction. the algorithm has been implemented and tested on a simple embedded platform. low cost is indeed a further important feature of the system developed. the rest of the paper is structured as follows. at first, in section 2 the model underlying the theoretical problem is described. then, section 3 deals with the description of the estimation algorithm. finally, in section 4 the implementation details as well as several experimental results in different conditions are reported.

2. model description
2.1. vision system model
as shortly explained in section 1, the vision system is supposed to observe the road or floor surface just to detect a painted line. if ${}^{w}p = [{}^{w}x, {}^{w}y, {}^{w}z]^T$ is a point in the reference frame $w$ defined by axes $x_w$, $y_w$ and $z_w$ (with plane $\pi_w = x_w \times y_w$ lying on the ground as shown in figure 1(a)), the equations of two parallel lines in space (namely the line edges) are given by

$$ {}^{w}p_{l} = {}^{w}p_{a} + \lambda \, ({}^{w}p_{b} - {}^{w}p_{a}), \qquad {}^{w}p_{r} = {}^{w}p_{a} + {}^{w}o + \lambda \, ({}^{w}p_{b} - {}^{w}p_{a}), \qquad (1) $$

where $\lambda \in \mathbb{R}$, ${}^{w}p_{a}$ and ${}^{w}p_{b}$ are two given points belonging to the left edge and ${}^{w}o = [{}^{w}x_o, {}^{w}y_o, {}^{w}z_o]^T$ is the offset vector between the two lines. let $i$ be an additional reference frame defined by axes $x_i$, $y_i$ and $z_i$ so that $\pi_i = x_i \times y_i$ includes the image plane and the origin of $i$ coincides with the principal point of the camera, as shown in figure 1(b). in such a frame, every point lying on the image plane has the $z_i$ coordinate equal to zero. if axes $x_i$ and $y_i$ are parallel to $x_w$ and $y_w$, respectively, then the image coordinates ${}^{i}p$ of a generic point ${}^{w}p$ in the field of view of the camera are given by (see appendix a for reference)

$$ {}^{i}p = \begin{bmatrix} {}^{i}x \\ {}^{i}y \end{bmatrix} = \begin{bmatrix} f_x \, \dfrac{{}^{w}x - t_x}{{}^{w}z - t_z} \\[1ex] f_y \, \dfrac{{}^{w}y - t_y}{{}^{w}z - t_z} \end{bmatrix} , \qquad (2) $$

where $f_x$ and $f_y$ are the focal lengths along axes $x_i$ and $y_i$, respectively, and $t_w = [t_x, t_y, t_z]^T$ is the translation vector expressing the coordinates of the camera pin-hole in the frame $w$.
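since (2) was reconstructed here from a damaged extraction, a small numerical sketch may help make the assumed projection model concrete. everything below (the function name, the focal lengths and the camera position) is illustrative rather than taken from the paper:

```python
import numpy as np

def project_to_image(p_w, t_w, fx, fy):
    """pin-hole projection of a world point onto the image plane,
    assuming the image plane is parallel to the ground, as in (2)."""
    wx, wy, wz = p_w
    tx, ty, tz = t_w
    scale = 1.0 / (wz - tz)          # signed depth along the optical axis
    return np.array([fx * (wx - tx) * scale,
                     fy * (wy - ty) * scale])

# two points on the left line edge, plus the same points shifted by the
# offset vector, map to two parallel segments in the image, as stated above
pa, pb = np.array([0.0, 0.0, 0.0]), np.array([0.1, 1.0, 0.0])
o = np.array([0.15, 0.0, 0.0])       # a 15 cm line width, as measured later in the paper
t = np.array([0.0, 0.0, 1.2])        # camera about 1.2 m above the ground
print(project_to_image(pa, t, 500, 500), project_to_image(pa + o, t, 500, 500))
```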
therefore, if the line equations (1) are plugged into (2), the coordinates of the points belonging to the line edges projected onto the image plane are

$$ {}^{i}p_{l} = {}^{i}p_{a} + \lambda \, ({}^{i}p_{b} - {}^{i}p_{a}), \qquad {}^{i}p_{r} = {}^{i}p_{a} + {}^{i}o + \lambda \, ({}^{i}p_{b} - {}^{i}p_{a}), \qquad (3) $$

where ${}^{i}p_{a} = [{}^{i}x_a, {}^{i}y_a]^T$ and ${}^{i}p_{b} = [{}^{i}x_b, {}^{i}y_b]^T$ are two points of the left edge mapped onto the image plane and ${}^{i}o = [{}^{i}x_o, {}^{i}y_o]^T$ is the image offset vector between the left and the right edge. since ${}^{i}o$ can be chosen arbitrarily, in the following we assume that ${}^{i}x_o = d \neq 0$ and ${}^{i}y_o = 0$. as a consequence, $d$ is the distance between the line edges along axis $x_i$, as depicted in figure 1(b). notice that if the image plane and ground are approximately parallel, three-dimensional parallel lines are mapped into two-dimensional parallel lines. the same considerations hold also for the orientation angles of each line assuming that $f_x = f_y$, as often occurs in practice. albeit these two assumptions are not perfectly true in a real scenario [41], a slight change of perspective can be easily addressed through inverse perspective mapping (ipm) [42]. moreover, possible time-varying fluctuations of the image plane (e.g. due to vehicle vibrations or jolts) are generally negligible compared with the intrinsic fast variability of the collected images.

2.2. problem formulation and estimation model
after detecting the line, the goal of the measurement system is to estimate the position and the orientation of the robotic vehicle with respect to the line itself in real-time. in particular, the quantities to be measured are:
- $h$, which is the horizontal coordinate of the intersection point between the line centroid and axis $x_i$, i.e.

$$ h = {}^{i}x_{a} + \frac{d}{2} - {}^{i}y_{a} \, \frac{{}^{i}x_{b} - {}^{i}x_{a}}{{}^{i}y_{b} - {}^{i}y_{a}} \, ; \qquad (4) $$

- $\alpha$, namely the angle between the parallel line edges and axis $y_i$, i.e.

$$ \alpha = \arctan\!\left( \frac{{}^{i}x_{b} - {}^{i}x_{a}}{{}^{i}y_{b} - {}^{i}y_{a}} \right) . \qquad (5) $$

the meaning of these parameters is shown in figure 1(b). notice that $h \to \infty$ for lines that are parallel to the $x_i$ axis, whereas $(h, \alpha) = (0, 0)$ when the line is perfectly vertical and exactly in the center of the image. as explained in section 3, the values of $(h, \alpha)$ associated with a given image are used as a prior for parallel line detection in a newly acquired image. in this way, the computation time is reduced by constraining the ransac-based line search to a specific subset of the pixels of every collected image. to this purpose, (4) and (5) must be inverted to recover (3). unfortunately, the pair $(h, \alpha)$ alone is not sufficient to this end. indeed, for any given pair, a very large (ideally infinite) number of parallel lines exist. in stricter theoretical terms, the position of the parallel line edges is unobservable using just $h$ and $\alpha$. in fact, observability is guaranteed only if $d$ is available in (4). therefore, parameter $d$ also needs to be estimated by the algorithm. to this purpose, we can rely on a simple dynamic model, whose state is the vector $q = [h, \alpha, d]^T$. in general, the dynamic of $q$ is a function of both the line to be tracked and the motion of the camera. however, if the field of view of the camera is much shorter than the radius of curvature of the wanted path, the effect of the line curvature in the image plane is negligible, thus greatly simplifying both the line recognition problem and the model.
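as a quick illustration of (4) and (5) as reconstructed above, the following hedged sketch computes $(h, \alpha)$ from two left-edge image points and the edge distance $d$; the function name and test values are made up for the example:

```python
import numpy as np

def h_alpha(pa, pb, d):
    """(h, alpha) of the line centroid, following the reconstructed
    eqs. (4) and (5); pa, pb are image points of the left edge."""
    dx, dy = pb[0] - pa[0], pb[1] - pa[1]
    alpha = np.arctan2(dx, dy)       # angle with respect to the y_i axis
    if abs(dy) < 1e-12:
        return np.inf, alpha         # line parallel to x_i: h diverges
    h = pa[0] + d / 2.0 - pa[1] * dx / dy
    return h, alpha

# vertical left edge at x = 0 with d = 5: the centroid sits at d/2, alpha = 0
print(h_alpha((0.0, 0.0), (0.0, 1.0), 5.0))
```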
as a consequence, the system dynamics due only to the motion of the camera can be simply modelled as follows:

$$ \dot q = \begin{bmatrix} \dot h \\ \dot \alpha \\ \dot d \end{bmatrix} = \begin{bmatrix} v_h \\ v_\alpha \\ v_d \end{bmatrix} = v \, , \qquad (6) $$

where $v = [v_h, v_\alpha, v_d]^T$, namely the input of the system, is the speed vector. note that $v_h$ depends mostly on the speed components of the camera along the plane of motion, $v_\alpha$ is related to the angular speed of the image plane of the camera, and $v_d$ depends on the occasional motion of the camera along the axis orthogonal to the ground surface because of potholes, humps, or sporadic changes of the line width. in any case, the time evolution of the state variables of (6) is affected by the uncertainty associated with the measurement of $v = [v_h, v_\alpha, v_d]^T$. this in turn may require a dedicated inertial platform when involved scenarios are considered.

3. algorithm description
as stated in section 1, the algorithm for line detection and state variable estimation relies on the combination of a tailored ransac algorithm and a kalman filter. the rationale behind the combination of such techniques is related to the different nature of the uncertainty sources affecting the model parameters. on one hand, the unknown probability density function related to the estimation of $q$ can be considered as multimodal for the presence of both outliers and noise in every grabbed image. for instance, if no accurate speed values are available, the components of $v$ in (6) can be described just stochastically (e.g. using the variance of available data). this assumption justifies the choice of a randomized, multi-hypothesis algorithm like ransac. on the other hand, as a camera cannot move instantaneously in multiple different directions, the distribution of the fluctuations due to its motion must be definitely unimodal. therefore, the multi-modal ransac approach takes advantage of the kalman filter principal mode defined by (6), since the kalman filter is able to reduce such fluctuations.

figure 1. vision system model and reference frames: (a) perspective view and (b) top view.

when the system starts, the kalman filter is initialized as soon as the line is clearly detected in the image plane using ransac without any prior. in this preliminary phase, a longer execution time is tolerated in order to have an accurate first guess. then, the estimation algorithm performs iteratively the following three steps: kalman-based prediction, ransac-based road line recognition and kalman-based update. the flow chart of the proposed algorithm is shown in figure 2. note that ransac operates between the prediction and the update steps of the kalman filter. since the ransac algorithm estimates the parallel lines in the image space, on one hand it benefits from the kalman prediction results as a prior; on the other it is used to generate the measurement data for the following update step. in the next subsections, the three steps mentioned above as well as the initialization criteria of the kalman filter are described in detail.

3.1. kalman filter initialization
when the algorithm starts, two quantities are initialized in the kalman filter, i.e.
- the initial system state $q(0)$, which is set equal to the values returned by the first iteration of the ransac algorithm on a full image;
- the initial value of the state covariance matrix $P(0)$, which is set equal to 1/10 of the input covariance matrix $Q$ (a minimal code sketch of this rule follows).
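the sketch below writes out the state model (6) and the initialisation rule just described; the variances collected in Q are placeholders, since the paper does not report numeric values:

```python
import numpy as np

# assumed variances of v_h, v_alpha and v_d (placeholders, not from the paper)
Q = np.diag([4.0, 0.01, 1.0])

def init_kalman(q0):
    """initialise the filter with the first full-image ransac estimate
    q0 = [h, alpha, d]; P(0) is set one order of magnitude below Q,
    as prescribed in section 3.1."""
    q = np.asarray(q0, dtype=float)
    P = Q / 10.0
    return q, P
```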
the covariance matrix $Q$ is a constant diagonal matrix, whose elements are the estimated variances of $v_h$, $v_\alpha$ and $v_d$. observe that, unlike what is commonly done in most kalman filters, in this case the initial state covariance matrix is one order of magnitude smaller than $Q$. this is due to the fact that the initial guess $q(0)$ usually has a high accuracy, since it results from the application of the ransac algorithm to a full image.

3.2. kalman filter prediction step
the kalman filter is executed any time a new image is available. hence, the model reported in (6) is discretized with a sampling period $\delta t$ equal to the inverse of the frame rate. the discretized system model is then simply given by $q(t + \delta t) = q(t) + v(t)\,\delta t$. since no knowledge about the motion of the camera is assumed, the state prediction equation is $\hat q(t + \delta t) = q(t)$, where $\hat q(t + \delta t)$ is the predicted state of system (6) at time $t + \delta t$. similarly, the predicted covariance matrix is given by $\hat P(t + \delta t) = P(t) + Q$, where $P(t)$ is the covariance matrix of the state variables in (6) and $Q$ is the model input covariance matrix defined in section 3.1.

3.3. ransac-based line recognition
in general, the purpose of ransac is to find a subset $s^*$ of a set $s$ containing $N$ data, such that its elements fit an instance $m^*$ of a model $m$ (depending on $n \le N$ parameters) within a user-defined tolerance threshold $s_t$. in the case considered, $m$ refers to the parallel line equations defined in (3), and $s$ is the set of camera pixels where the probability of finding the road line edges is maximum. in the worst case, $s$ coincides with the whole pixel matrix and $N$ coincides with the number of pixels. however, even if $s$ changes any time a new frame is collected, it can be properly reduced to the region of interest by using the prior information obtained from the kalman filter prediction step.

figure 2. flow-chart of the algorithm. the ransac algorithm runs in two different modes, i.e. either using the prior information returned by the kalman filter or by processing the whole image for initialization.

of course, such a reduction does not take place during the initialization of the kalman filter (see figure 2 for reference). the ransac-based line detection relies on (3), which depends on $n = 3$ independent parameters, i.e. ${}^{i}p_a$, ${}^{i}p_b$ and ${}^{i}o = [d, 0]^T$. on the basis of these assumptions, the main steps of the ransac algorithm conceived for the intended application are briefly summarized in the following.
- starting from $k = 1$, $n = 3$ pixels are randomly extracted from $s$ to create a subset $s_k$ composed of ${}^{i}p_{ak}$, ${}^{i}p_{bk}$ and ${}^{i}p_{ak} + {}^{i}o_k$. if some prior information is available (i.e. the output $\hat q(t + \delta t)$ of the prediction step), the choice of the subset $s_k$ is constrained to the region determined by $\hat q(t + \delta t)$. the selected points are then used to create a model instance $m_k$, namely two parallel lines based on (3).
- afterwards, $m_k$ is used to determine the subset of pixels $s_k^* \subseteq s$ that are likely to belong to the line edges within a tolerance interval $\pm s_t$. $s_k^*$ is usually referred to as the consensus set or the set of inliers. in practice, this set consists of two disjoint subsets $s_{kl}^*$ and $s_{kr}^*$ corresponding to the left and right line edges, respectively.
if the prior information provided in the prediction step of the kalman filter is available, threshold $s_t$ can be modified according to the diagonal elements of the predicted covariance matrix $\hat P(t + \delta t)$;
- finally, this procedure starts over and a new set $s_{k+1}$ of $n$ random points is chosen from $s$.
in a typical ransac algorithm, the iterative approach ends as soon as the number of elements in $s_k^*$ exceeds a given threshold. however, due to the time-varying uncertainty contributions affecting the problem at hand, no a priori outlier stochastic description is available. as a consequence, in this case the iterative process stops as soon as a maximum number of iterations $k$ is reached. the value of $k$ in the presence of outliers is given by [37]

$$ k = \frac{\log (1 - p)}{\log (1 - w^{n})} \, , \qquad (7) $$

where $p$ is the wanted probability to select the correct model if the set $s$ is sampled $k$ times and $w$ is the probability that one of the chosen points is actually an inlier. in the presented solution the value of $k$ is computed and updated in real-time. let $s_l^*$ and $s_r^*$ be the sets with the largest number of inliers for the left and right edges, respectively, after $k$ iterations, with $s_l^* \cap s_r^* = \emptyset$ and $s_l^* \cup s_r^* = s^*$. if $l^* = ({}^{i}p_a^*, {}^{i}p_b^*, {}^{i}o^*)$ is the corresponding set of parameters to be replaced into (3), the optimal set of values finally results from

$$ \tilde l = \arg\min_{l^*} \left( \min_{{}^{i}p \,\in\, s_l^*} \left\| {}^{i}p - {}^{i}p_l \right\|^{2} + \min_{{}^{i}p \,\in\, s_r^*} \left\| {}^{i}p - {}^{i}p_r \right\|^{2} \right) , \qquad (8) $$

where $\tilde l = (\tilde{{}^{i}p}_a, \tilde{{}^{i}p}_b, \tilde{{}^{i}o})$. given that $\tilde h$ and $\tilde \alpha$ are computed from $\tilde l$ using (4) and (5), respectively, and recalling that $\tilde{{}^{i}o} = [\tilde d, 0]^T$, the output of the ransac-based line recognition algorithm is $\tilde q = [\tilde h, \tilde \alpha, \tilde d]^T$.

3.4. kalman filter update step
the elements of $\tilde q$ represent the measurement values to be injected into the kalman filter during the update step. in particular, the kalman gain $k_g$ is given by

$$ k_g = \hat P(t + \delta t) \left[ \hat P(t + \delta t) + R \right]^{-1} , \qquad (9) $$

where $R$ is the covariance matrix of $\tilde q$ and results from the residuals of (8). since the motion of the camera is not included in the model, the kalman filter strongly relies on the available measurement data. in particular, the updated values of both the state and the system covariance matrix result from

$$ q(t + \delta t) = \hat q(t + \delta t) + k_g \left( \tilde q - \hat q(t + \delta t) \right) , $$
$$ P(t + \delta t) = \left( I_3 - k_g \right) \hat P(t + \delta t) \left( I_3 - k_g \right)^{T} + k_g R \, k_g^{T} , \qquad (10) $$

where the joseph form has been used to express the updated covariance matrix $P(t + \delta t)$, thus preventing numerical instability. note that the low computational burden of the kalman filter update step is very suitable for an embedded implementation. it may happen that, if an image is heavily corrupted by structured or unstructured outliers (e.g. illumination problems, shadows, small potholes, faded paint), the ransac algorithm fails in finding a good estimate of the parallel line edges. such situations are tolerated to a certain extent thanks to the information retained by the kalman filter. in such cases, the kalman filter update step simply becomes

$$ q(t + \delta t) = \hat q(t + \delta t) , \qquad P(t + \delta t) = \hat P(t + \delta t) . \qquad (11) $$

clearly, expression (11) provides the open-loop dynamics of the kalman estimator. if, for some reason, ransac does not work properly, the covariance matrix $P(t + \delta t)$ tends to grow indefinitely. for this reason, the trace of $P(t + \delta t)$ is checked at the end of each update step. if the value of the trace exceeds a user-defined threshold $t_p$ (e.g. when the total uncertainty is larger than the image size), the state estimated by the kalman filter is discarded altogether and the kalman filter is re-initialized as soon as the next image is grabbed.
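the formulas above translate almost line-by-line into code. the sketch below assumes the state is measured directly (so the gain of (9) needs no observation matrix) and uses the joseph form of (10); it is an illustration under these assumptions, not the authors' implementation:

```python
import numpy as np

def kf_predict(q, P, Q):
    """section 3.2: no motion input is assumed, so the state is kept
    and the covariance is inflated by the input covariance Q."""
    return q.copy(), P + Q

def kf_update(q_pred, P_pred, q_meas, R):
    """eqs. (9)-(10): gain, state correction and joseph-form covariance."""
    I = np.eye(len(q_pred))
    K = P_pred @ np.linalg.inv(P_pred + R)
    q_new = q_pred + K @ (q_meas - q_pred)
    P_new = (I - K) @ P_pred @ (I - K).T + K @ R @ K.T
    return q_new, P_new

def ransac_iterations(p, w, n=3):
    """eq. (7): e.g. p = 0.99, w = 0.5 and n = 3 give k = 35."""
    return int(np.ceil(np.log(1.0 - p) / np.log(1.0 - w ** n)))
```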
it is worth noticing that the case of incomplete measurements (e.g. when just one of the line edges is in the field of view of the camera) can be handled by the algorithm. indeed, the prior information given by the kalman filter during the prediction step allows ransac to detect correctly even a single line edge, while waiting for the other one to reappear in the image. in this case, just the uncertainty associated with $d$ tends to increase monotonically in the state covariance matrix.

4. experimental results
4.1. experimental setup description
the proposed algorithm has been implemented on a beagleboard xm embedded platform. this platform is equipped with a 1-ghz arm dm3730 by texas instruments, 512 mb of ram, a 4 gb micro secure digital (sd) memory and a linux angstrom distribution. the beagleboard xm provides expansion headers to connect other peripherals to the platform. the adopted vision system relies on a custom light contrast imager driven by an xc2c512 xilinx coolrunner-ii complex programmable logic device (cpld). both board and camera are shown in figure 3. since the pins of the beagleboard expansion headers are rated at 1.8 v while the camera requires 3.3 v, a piggy-back voltage level translator has been used to interface the embedded platform with the camera. the light contrast sensor is a 35-μm cmos imager with a resolution of 128 × 64 pixels made at the "fondazione bruno kessler" (fbk), trento, italy [43]. note that the low resolution of the camera is helpful for the application considered, as it greatly reduces the number of iterations of the ransac algorithm at no cost in terms of accuracy, as will be shown in section 4.2. an important feature of the adopted vision system is that it requires a single bit for each pixel. each pixel generates a voltage value that is proportional to the maximum relative variation of the average light intensity over triples of adjacent pixels during a given integration interval. all output voltage values are quantized by a 1-bit comparator. the resulting bits can be stored in a local on-chip memory or they can be immediately read out and buffered into the cpld. this feature (along with the low resolution of the imager) makes frame acquisition faster than in regular cameras. as a consequence, the camera frame rate can be larger than 100 frame/s, but with low bandwidth requirements for data transfer. in practice, the frame rate is limited by the light integration time of each pixel. this can be as low as 1 ms, but it should be at least 10 ms not to affect the sensitivity of the imager [44]. the on-board cpld is provided with a serial peripheral interface (spi) to read the image frames captured by the camera. the embedded platform communicates with the logic circuitry of the vision system through general purpose input-output (gpio) and spi pins. the gpio pins are used to set the camera operating parameters (e.g. operation modes and integration time) and to exchange interrupts and control signals with the vision system. the spi pins are instead used to transfer the image frames. the transmission rate is controlled by the spi clock. in the current implementation the spi clock frequency is 48 mhz. the cpld is designed to hold only one image row at a time. therefore, one spi read per row is needed to read a full image frame. in order to collect such data as quickly as possible, a highly optimized software driver was developed for linux.
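a quick back-of-the-envelope check (assuming an ideal spi transfer with no per-row overhead) confirms that the 48 mhz link is not the bottleneck and that the frame rate is indeed bounded by the pixel integration time:

```python
bits_per_frame = 128 * 64          # one bit per pixel
spi_clock_hz = 48e6
transfer_s = bits_per_frame / spi_clock_hz       # ~0.17 ms per frame, idealised
integration_s = 10e-3              # recommended minimum integration time
max_fps = 1.0 / max(transfer_s, integration_s)   # ~100 frame/s
print(f"{transfer_s * 1e3:.2f} ms transfer, {max_fps:.0f} frame/s")
```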
4.2. performance evaluation
in order to evaluate the performance of the overall system, several image records of several minutes each have been collected at about 100 frame/s with a pixel integration time of 10 ms. the camera was fastened to the right external rear view mirror of a real vehicle, about 1.2 m above the ground, to detect one of the side lines of the road. such lines are typically white and about 15 cm wide. all the experiments reported in the following were conducted on urban roads, in a peripheral district of the city of trento. the speed of the vehicle was kept between about 30 km/h and 70 km/h (i.e. 50 km/h on average) depending on the road and traffic conditions. it is worth emphasizing that such testing conditions are more challenging than those adopted in robotic vehicle competitions. in fact, during the experiments, line recognition was hindered and occasionally perturbed by shadows, faded or interrupted lines, gravel and road imperfections (e.g. asphalt patches). usually, the light contrast between the road surface and the lines painted on the street is large enough to enable an easy detection of the line edges. in particular, such edges are depicted as linear clusters of active pixels on a grey background (in the following referred to as data points). some significant examples of collected images are shown in figures 4(a)-(i). figure 4(a) refers to ideal conditions, i.e. a freshly painted white line on a uniform dark grey road background. in this case, the line edges are easily recognizable to the naked eye. in figure 4(b) the right side of the line is partially faded. in figure 4(c) the whole line is faded. in figures 4(d) and 4(e) the line edges are sharp, but some shadows due to nearby objects create some additional contrast points (additional dark pixels). in figure 4(f) one of the line edges is out of the field of view of the camera because the vehicle has slightly departed from the wanted trajectory. in figure 4(g) the road line is covered by some gravel. in figure 4(h) the road is both dusty and in shadow. finally, in figure 4(i) line recognition is perturbed by a manhole cover. in the examples above, the thin, white straight lines plotted in each picture are reconstructed using the values of $q = [h, \alpha, d]^T$ estimated in real-time. in all experiments, the threshold value $s_t$ for the ransac algorithm is a function of the state covariance matrix $P$, and it is never smaller than 2 pixels. estimation accuracy and robustness to visual artefacts have been evaluated offline. to this purpose, the images collected in about 15 tests of several minutes each have been grouped into three data sets depending on the amount of disturbances. in the following, data set 1 refers to the experiments in which the percentage of images affected by disturbances, such as those shown in figure 4, is moderate (i.e. between about 15 % and 25 %). data set 2 includes the best experiments, namely those where no more than about 15 % of the collected images are perturbed. finally, data set 3 comprises the worst experiments, i.e. those where the line was faded, partially interrupted or perturbed in up to 40 % of the images. in all cases, the actual values of the state variables (namely the ground truth parameters denoted as $h_{gt}(t)$, $\alpha_{gt}(t)$ and $d_{gt}(t)$) were obtained with the following off-line procedure:
1. at first, the so-called projection pursuit algorithm is used to select the coarse-grained histograms with the highest peaks [45].
each set of features generating the histograms' peaks (one peak for each line) represents a hypothesis;
2. among all possible hypotheses, the one with the highest number of features is chosen;
3. the projection pursuit algorithm is executed again, but with a fine-grained resolution, on the remaining features to detect possible clusters. the cluster with the highest ratio between the number of features and its standard deviation is chosen as the winning hypothesis;
4. the features belonging to such a hypothesis are determined using a voting scheme similar to ransac;
5. finally, a least squares quadratic (lsq) optimization algorithm is used to compute the values of $h_{gt}(t)$, $\alpha_{gt}(t)$ and $d_{gt}(t)$.
the results of this ground truth reconstruction procedure were carefully checked a posteriori by visual inspection, record by record, to remove or to correct possible outliers. a few images whose ground truth parameters could not be clearly estimated and that could not be corrected manually were simply removed from the data sets.

figure 3. vision system for light contrast detection (on the left) and beagleboard xm embedded platform (on the right).

figures 5(a)-(c) show the 0.95-level confidence intervals associated with $\varepsilon_h = h - h_{gt}$ (a), $\varepsilon_\alpha = \alpha - \alpha_{gt}$ (b) and $\varepsilon_d = d - d_{gt}$ (c), for the three data sets described above. in each case the results of two different estimation techniques are shown and compared, i.e. the one described in this paper (which relies on both ransac and kalman filtering) and the solution based on ransac only presented in [40]. we decided to evaluate measurement uncertainty in terms of confidence intervals rather than reporting the standard uncertainty computed with a type-a evaluation approach [46], because the probability density functions of $\varepsilon_h$, $\varepsilon_\alpha$ and $\varepsilon_d$ are strongly unimodal, but the normal probability plots of the collected data show that they are not normally distributed. an example of such distributions in the case of variable $\varepsilon_\alpha$ for data set 3 is shown in figure 6. the histograms of the other variables are quite similar and are not particularly significant; so they are not reported for the sake of brevity.
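because the error distributions are unimodal but not normal, 0.95-level intervals such as those of figure 5 can be obtained non-parametrically from the percentiles of the residuals. a minimal sketch, where the synthetic heavy-tailed sample is only a stand-in for the real data sets:

```python
import numpy as np

def ci95(residuals):
    """percentile-based 0.95-level confidence interval, appropriate for
    unimodal but non-normal error samples such as eps_alpha."""
    return tuple(np.percentile(residuals, [2.5, 97.5]))

eps_alpha = np.random.standard_t(df=3, size=10_000)  # illustrative stand-in
print(ci95(eps_alpha))
```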
the use of confidence intervals in figures 5(a)-(c) emphasizes more clearly the benefits of using the kalman filter to improve accuracy, precision and robustness in line detection. this is especially true as far as the $\alpha$ parameter is concerned, which is also the most important for vehicle control purposes. observe that $\varepsilon_\alpha$ ranges approximately between [-1, +1] degree with 95 % probability in all cases. of course, the results related to data set 2 are the best ones. however, even in the presence of more frequent disturbances, the proposed estimation technique is quite robust. this improvement is due to two reasons. first of all, the kalman filter decreases the probability that ransac detects wrong line edges in consecutive images, thus greatly reducing the probability of unnatural "jumps" in the estimation process. secondly, the kalman filter makes the algorithm track angle $\alpha$ even when one of the two line edges is occasionally lost (e.g. because the line is too faded or because it is out of the field of view of the camera, as shown in figure 4(f)). the analysis of $\varepsilon_d$ and $\varepsilon_h$ is a bit more complex. first of all, the distribution of such estimation errors in the ransac-only case is asymmetric. this asymmetry is due to the fact that when one of the two line edges is out of view, the "orphan" one is not always detected on the correct side of the line. therefore, the right edge could be recognized as the left one and vice versa. however, the probability of these events is not the same. thus, the estimation errors exhibit an asymmetric distribution, which, in the ransac-only case, can sometimes exceed 15 pixels. given that in the current setup 1 pixel (namely with the vision system about 1.2 m above the ground) corresponds to about 3 cm, the position error can be larger than 45 cm. by using the proposed algorithm, symmetry is usually re-established because the kalman filter keeps memory of the previous positions of the line edges. in this way, the estimation errors are considerably reduced. indeed, they are generally within ±5 pixels (i.e. about 15 cm), with the only exception of $\varepsilon_d$ in data set 3. this is due to the fact that, when the line is heavily faded, its width can hardly be estimated with good accuracy. nonetheless, the accuracy in estimating vehicle direction is still preserved. it is important to remember that the memory effect of the kalman filter is lost in the case of filter re-initialization, which occurs when the trace of matrix $P$ exceeds the threshold $t_p$ defined in section 3.4. a further important aspect that has to be carefully evaluated to appreciate the performance of the system prototype is related to computation time. figure 7 shows the cumulative distribution curves of the execution times of the estimation algorithm running on the chosen low-cost embedded platform. the dashed line refers to the algorithm based on ransac only, whereas the solid line corresponds to the proposed solution based on both ransac and kalman filtering. observe that the execution time of the ransac-with-kalman-filter approach is shorter than that of the solution based on ransac only. this is due to the fact that generally a smaller number of iterations is needed to detect the line edges thanks to the prior retained by the kalman filter. in particular, in this case the median execution time of the proposed algorithm is just 4 ms and it is lower than 10 ms with 97 % probability. this means that with a camera frame rate equal to 100 frame/s (so that a new image can be processed every 10 ms), even if the vehicle travels at 100 km/h, the measurement system is able to estimate direction changes with a space resolution of about 30 cm, which is compatible with automated high-speed vehicle control.

figure 4. examples of images collected from the light contrast sensor: (a) ideal situation; (b) partially faded line; (c) considerably faded line; (d)-(e) effect of shadows; (f) right edge out of the field of view of the camera; (g) road partially covered by gravel; (h) dusty shady line; (i) line painted over a manhole cover. in each image, the direction of the vehicle estimated by the algorithm with respect to the road line is represented by a white line.
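the 30 cm space resolution quoted above follows directly from the frame period and the vehicle speed:

$$ \Delta s = v \, \Delta t = \frac{100\ \mathrm{km/h}}{3.6} \times 0.01\ \mathrm{s} \approx 27.8\ \mathrm{m/s} \times 0.01\ \mathrm{s} \approx 0.28\ \mathrm{m} \approx 30\ \mathrm{cm} . $$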
5. conclusions
increasingly sophisticated, high-speed automated guided vehicles (agvs) require fast sensors able to ensure accurate and reliable real-time path-following techniques. in this context, this paper presents a simple vision-based technique and a low-cost system able to estimate the direction and the relative position of a vehicle with respect to a line painted on the ground. line-tracking techniques are indeed commonly used in robots for industrial applications. the proposed solution relies on a special camera and on a tailored ransac algorithm enhanced by a kalman filter. even if the system does not address other crucial safety problems, such as collision avoidance for instance, it is robust to manifold uncertainty sources and it is cheaper and faster than other vision-based solutions. this is due not only to the proposed algorithm per se, but also to the low resolution of the adopted light contrast imager, which greatly reduces the image processing burden and the data transfer latency. in future, the current prototype could be further improved by using a more sensitive imager and an inertial platform measuring vehicle speed.

figure 5. 0.95-level confidence intervals associated to $\varepsilon_h$ (a), $\varepsilon_\alpha$ (b) and $\varepsilon_d$ (c) for three different data sets. pairs of adjacent intervals refer to the estimation errors resulting from the proposed ransac algorithm with and without kalman filtering, respectively.
figure 6. histogram of $\varepsilon_\alpha$ obtained when the proposed algorithm is applied to the images of data set 3.
figure 7. execution time cumulative distribution curves of the ransac-only (dashed line) and ransac-with-kalman-filter (solid line) algorithms running on the beagleboard xm platform.

appendix - derivation of expression (2)
let $c$ be a reference frame, with axes $x_c$, $y_c$ and $z_c$ defined in such a way that:
- $x_c$ is directed as the x-axis of the image plane;
- $y_c$ is oriented as the y-axis of the image plane (according to a right-handed reference frame);
- $z_c$ is oriented towards the observed scene until it intersects the image plane in the principal point, as shown in figure 1.
the origin of $c$ coincides with the camera pin-hole. therefore, $c$ is simply translated with respect to $i$ without any rotation. on the contrary, the orientation of $c$ with respect to $w$ can be arbitrary. therefore, if ${}^{c}R_w$ is the rotation matrix of $c$ with respect to $w$ and $t_w = [t_x, t_y, t_z]^T$ is the same translation vector as defined in section 2.1, the coordinates of a generic point ${}^{w}p$ in the camera frame are given by [41]

$$ {}^{c}p = \left[ {}^{c}R_w \mid t_w \right] \begin{bmatrix} {}^{w}p \\ 1 \end{bmatrix} , \qquad (a.1) $$

where ${}^{c}p = [{}^{c}x, {}^{c}y, {}^{c}z]^T$ and $[{}^{c}R_w \mid t_w]$ is a 3 × 4 matrix which defines the rigid transformation between $c$ and $w$.
in particular, if the axes of $c$ are completely rotated with respect to $w$ as shown in figure 1, (a.1) can be rewritten as

$$ {}^{c}p = \begin{bmatrix} {}^{w}x - t_x \\ {}^{w}y - t_y \\ {}^{w}z - t_z \end{bmatrix} . \qquad (a.2) $$

from the basic theory of vision systems, it is known that the projection of any point in the field of view of the camera onto the image plane results from [41]

$$ \begin{bmatrix} {}^{i}x \, {}^{c}z \\ {}^{i}y \, {}^{c}z \\ {}^{c}z \end{bmatrix} = G \begin{bmatrix} {}^{c}x \\ {}^{c}y \\ {}^{c}z \end{bmatrix} , \qquad (a.3) $$

where

$$ G = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \qquad (a.4) $$

is the so-called camera calibration matrix, $f_x$ and $f_y$ represent the focal lengths along axes $x_i$ and $y_i$, respectively, $s$ models the radial distortion of the image, and the coordinates $c_x$ and $c_y$ of the principal point of the camera are both equal to 0 in the frame $i$. in practice, the terms of the calibration matrix can be estimated using widely known numerical tools [47]. thus, if the radial distortion coefficient is negligible (as commonly occurs in practice), by replacing (a.2) into (a.3), after a few mathematical steps, (2) finally results.

acknowledgements
the authors would like to thank marco de nicola, dr. massimo gottardi and dr. leonardo gasparini from fbk, trento, for their advice and support in configuring the experimental light contrast imager used for the system prototype described in this paper.

references
[1] s.s. carlisle, "the role of measurement in the development of industrial automation," acta imeko 3 (2013), pp. 4-9.
[2] t.h. lee, h.k. lam, f.h.f. leung, p.k.s. tam, "a fast path planning-and-tracking control for wheeled mobile robots," proc. ieee international conference on robotics and automation (icra), may 22-26, 2001, seoul, korea, pp. 1736-1741.
[3] c. samson and k. ait-abderrahim, "feedback control of a nonholonomic wheeled cart in cartesian space," in proc. ieee international conference on robotics and automation, apr. 9-11, 1991, sacramento, ca, usa, pp. 1136–1141.
[4] a. aguiar and j. hespanha, "trajectory-tracking and path-following of underactuated autonomous vehicles with parametric modeling uncertainty," ieee transactions on automation and control 52 (2007), pp. 1362–1379.
[5] r. d'souza, "designing an autonomous robot vehicle," ieee potentials 17 (1998), pp. 40-43.
[6] xiuzhi li, songmin jia, jinhui fan, liwen gao, bing guo, "autonomous mobile robot guidance based on ground line mark," proc. of sice annual conference (sice), akita, japan, 20-23 aug. 2012, pp. 1091-1095.
[7] yangsheng xu, s. kwok-way au, "stabilization and path following of a single wheel robot," ieee/asme transactions on mechatronics 9 (2004), pp. 407-419.
[8] l. xiafu, c. yong, "a design of autonomous tracing in intelligent vehicle based on infrared photoelectric sensor," proc. international conference on information engineering & computer science (icies), dec. 19-20, 2009, wuhan, china, pp. 1-4.
[9] n.m. arshad, m.f. misnan, n.a. razak, "single infra-red sensor technique for line-tracking autonomous mobile vehicle," proc. ieee 7th international colloquium on signal processing and its applications (cspa), mar. 4-6, 2011, penang, malaysia, pp. 159-162.
[10] jeong-yean yang, dong-soo kwon, "electric inductance sensor-based path recognition for the highly configurable path tracking of service robot," proc. 17th ieee international symposium on robot and human interactive communication (ro-man), aug. 1-3, 2008, munich, germany, pp. 419-424.
[11] a. bonarini, p. aliverti, m. lucioni, "an omnidirectional vision sensor for fast tracking for mobile robots," ieee transactions on instrumentation and measurement 49 (2000), pp.
509-512.
[12] l. armesto, j. tornero, "automation of industrial vehicles: a vision-based line tracking application," in proc. ieee conference on emerging technologies & factory automation (etfa), sep. 22-25, 2009, mallorca, spain, pp. 1-7.
[13] n.m. arshad, n.a. razak, "vision-based detection technique for effective line-tracking autonomous vehicle," proc. ieee 8th international colloquium on signal processing and its applications (cspa), mar. 23-25, 2012, melaka, malaysia, pp. 441-445.
[14] h. gross, h.j. boehme, t. wilhelm, "contribution to vision-based localization, tracking and navigation methods for an interactive mobile service-robot," proc. ieee international conference on systems, man, and cybernetics, oct. 7-10, 2001, tucson, az, usa, pp. 672-677.
[15] g. antonelli, s. chiaverini, g. fusco, "a fuzzy-logic-based approach for mobile robot path tracking," ieee transactions on fuzzy systems 15 (2007), pp. 211-221.
[16] f. jimenez, "improvements in road geometry measurement using inertial measurement systems in datalog vehicles," measurement 44 (2011), pp. 102-112.
[17] r. satzoda, s. sathyanarayana, t. srikanthan, s. sathyanarayana, "hierarchical additive hough transform for lane detection," ieee embedded systems letters 2 (2010), pp. 23–26.
[18] a. troiano, e. pasero, l. mesin, "new system for detecting road ice formation," ieee transactions on instrumentation and measurement 60 (2011), pp. 1091-1101.
[19] f. espinosa, "design and implementation of a portable electronic system for vehicle-driver-route activity measurement," measurement 44 (2011), pp. 326-337.
[20] m. buehler, k. iagnemma, s. singh, the 2005 darpa grand challenge: the great robot race (springer tracts in advanced robotics), vol. 36, springer-verlag, berlin/heidelberg, germany, 2007.
[21] d. fontanelli, l. palopoli, t. rizano, "high speed robotics with low cost hardware," proc. ieee 17th conference on emerging technologies & factory automation (etfa), sep. 17-21, 2012, krakow, poland, pp. 1-8.
[22] g. vanderbrug, "line detection in satellite imagery," ieee transactions on geoscience electronics 14 (1976), pp. 37–44.
[23] d. guru, b. shekar, p. nagabhushan, "a simple and robust line detection algorithm based on small eigenvalue analysis," pattern recognition letters 25 (2004), pp. 1–13.
[24] n. aggarwal, w. karl, "line detection in images through regularized hough transform," ieee transactions on image processing 15 (2006), pp. 582–591.
[25] l. xu, e. oja, "randomized hough transform (rht): basic mechanisms, algorithms, and computational complexities," cvgip: image understanding 57 (1993), pp. 131–131.
[26] r. satzoda, s. sathyanarayana, t. srikanthan, s. sathyanarayana, "hierarchical additive hough transform for lane detection," ieee embedded systems letters 2 (2010), pp. 23–26.
[27] a.h. ismail, h.r. ramli, m.h. ahmad, m.h. marhaban, "vision-based system for line following mobile robot," proc. ieee symposium on industrial electronics & applications (isiea), oct. 4-6, 2009, kuala lumpur, malaysia, pp. 642-645.
[28] a. chatterjee, a. rakshit, n. nirmal singh, "vision based mobile robot path/line tracking," in vision based autonomous robot navigation, springer, berlin-heidelberg, germany, 2013, isbn: 978-3-642-33964-6, pp. 143-166.
[29] p. mazurek, "line estimation using the viterbi algorithm and track-before-detect approach for line following mobile robots," proc.
19th international conference on methods and models in automation and robotics (mmar), sep. 2-5, 2014, miedzyzdroje, poland, pp. 788-793.
[30] c. guo, s. mita, d. mcallester, "lane detection and tracking in challenging environments based on a weighted graph and integrated cues," proc. ieee/rsj international conference on intelligent robots and systems, oct. 18-22, 2010, taiwan, pp. 5543-5550.
[31] a. lópez, j. serrat, c. canero, f. lumbreras, "robust lane lines detection and quantitative assessment," in pattern recognition and image analysis, springer-verlag, new york, ny, 2007, isbn: 978-3-540-72846-7, pp. 274–281.
[32] b.-f. wu, c.-t. lin, y.-l. chen, "dynamic calibration and occlusion handling algorithms for lane tracking," ieee transactions on industrial electronics 56 (2009), pp. 1757–1773.
[33] y. wang, n. dahnoun, a. achim, "a novel system for robust lane detection and tracking," signal processing (2012), pp. 319-334.
[34] z. kim, "robust lane detection and tracking in challenging scenarios," ieee transactions on intelligent transportation systems 9 (2008), pp. 16-26.
[35] m. bertozzi, a. broggi, "gold: a parallel real-time stereo vision system for generic obstacle and lane detection," ieee transactions on image processing 7 (1998), pp. 62–81.
[36] t. rizano, d. fontanelli, l. palopoli, l. pallottino, p. salaris, "global path planning for competitive robotic cars," proc. ieee 52nd annual conference on decision and control (cdc), dec. 10-13, 2013, florence, italy, pp. 4510-4516.
[37] m. fischler, r. bolles, "random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," communications of the acm 24 (1981), pp. 381–395.
[38] d. fontanelli, l. ricciato, s. soatto, "a fast ransac-based registration algorithm for accurate localization in unknown environments using lidar measurements," proc. ieee international conference on automation science and engineering, sep. 22-25, 2007, scottsdale, az, usa, pp. 597–602.
[39] g. mastorakis, e. davies, "improved line detection algorithm for locating road lane markings," electronics letters 47 (2011), pp. 183–184.
[40] d. fontanelli, m. cappelletti, d. macii, "a ransac-based fast road line detection system for high-speed wheeled vehicles," proc. of ieee int. instrumentation and measurement technology conference (i2mtc), may 10-12, 2011, hangzhou, china, pp. 186–191.
[41] r. hartley, a. zisserman, multiple view geometry in computer vision, cambridge university press, 2003, isbn: 0-521-54051-8.
[42] h. mallot, h. bülthoff, j. little, s. bohrer, "inverse perspective mapping simplifies optical flow computation and obstacle detection," biological cybernetics 64 (1991), pp. 177–185.
[43] m. gottardi, n. massari, s. jawed, "a 100-μw 128×64 pixels contrast-based asynchronous binary vision sensor for sensor networks applications," ieee journal of solid-state circuits 44 (2009), pp. 1765–1770.
[44] l. gasparini, d. macii, m. gottardi, d. fontanelli, "a low-power data acquisition system for image contrast detection," in proc. imeko tc-4 international workshop on adc modelling and testing (iwadc), jun. 30 - jul. 1, 2011, orvieto, italy, pp. 1-6.
[45] p. huber, "projection pursuit," the annals of statistics 13 (1985), pp. 435–475.
[46] bipm, iec, ifcc, iso, iupac, oiml, guide to the expression of uncertainty in measurement, geneva, switzerland, 2008.
[47] z. zhang, "a flexible new technique for camera calibration," ieee transactions on pattern analysis and machine intelligence 22 (2002), pp.
1330–1334.

acta imeko issn: 2221-870x april 2017, volume 6, number 1, 59-63
tools for uncertainty calculations in force measurement
dirk röske1, jussi ala-hiiro2, andy knott3, nieves medina4, petr kaspar5, mikołaj woźniak6
1 physikalisch-technische bundesanstalt, bundesallee 100, 38116 braunschweig, germany
2 vtt mikes metrology, tehdaskatu 15, puristamo 9p19, 87100 kajaani, finland
3 national physical laboratory, hampton rd, teddington, middlesex tw11 0lw, united kingdom
4 centro español de metrología, calle del alfar, 2, 28760 tres cantos, madrid, spain
5 cesky metrologicky institut, okružní 31, 638 00 brno, czech republic
6 ministerstwo gospodarki, główny urząd miar, ul. elektoralna 2, 00-139 warszawa, poland
section: research paper
keywords: emrp sib 63; uncertainty; force measurement; calculation tools
citation: dirk röske, jussi ala-hiiro, andy knott, nieves medina, petr kaspar, mikołaj woźniak, tools for uncertainty calculations in force measurement, acta imeko, vol. 6, no. 1, article 9, april 2017, identifier: imeko-acta-06 (2017)-01-09
section editor: paul regtien, the netherlands
received september 9, 2016; in final form february 13, 2017; published april 2017
copyright: © 2017 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: the research leading to these results has received funding from the european union on the basis of decision no 912/2009/ec. the emrp is jointly funded by the emrp participating countries within euramet and the european union
corresponding author: dirk röske, e-mail: dirk.roeske@ptb.de

abstract
within the framework of the european metrology research programme (emrp), tools for the calculation of measurement uncertainties have been developed. the joint research project (jrp) 63 of the "si broader scope ii" (sib) call of the emrp is entitled "force traceability within the meganewton range". the project was started in july 2013 and had a duration of 36 months. the aim of the sib63 project was to improve the traceability of force measurement, especially in the meganewton range. this paper presents the results of work package 4, "improved dissemination of the si unit of force". the tools developed by the partners are available from the website of the project.

1. introduction
the main objective of the research project "force traceability within the meganewton range" (sib63) in the european metrology research programme (emrp) was to improve the traceability of force measurement from primary standards to calibration services and testing laboratories, especially in the meganewton range. the project was started in july 2013 and had a duration of 36 months. the aim of the project was to develop new methods that could be applied by users of measurement devices for large forces in industrial calibration laboratories and applications as well as in testing laboratories. different work packages (wp) had been defined dealing with build-up systems of force transducers and extrapolation methods of measurement results (wp1), with multi-component measurements (wp2) and with time-dependent effects like creep and hysteresis (wp3). the present paper describes the work that was undertaken within work package 4, "improved dissemination of the si unit of force" (wp4), during the project's lifetime. the key idea of wp4 was to improve traceability by taking various influencing effects into account. this is done by applying corrections to the measurement results based on mathematical models and known parameter values. for the corrected results, measurement uncertainties can be calculated using the tools provided to the users from the project's website. the procedure of wp4 was as follows. first, the application conditions and requirements for the calibration, depending on the further use of the force measuring instrument, were compiled based on an on-line questionnaire.
second, the corresponding technical parameters and coefficients describing the effects of application conditions on the measurement result were collected or defined. third, for nine of the parameters and coefficients, detailed models and elaborated uncertainty calculation examples were provided. they are publicly available from the website [1]. fourth, for at least six of the models, online calculation tools were developed. for offline use, a spreadsheet file, including the same models, was programmed and is also publicly available from the website of the project.

2. application conditions
force transducers are often calibrated under laboratory conditions. in the later use of these instruments, the application conditions must be taken into account if they differ from the calibration conditions to an extent that affects the measurement result. in order to receive feedback from the stakeholders about their application conditions, an online form was generated (figure 1). it is still publicly available from the web address http://www.ptb.de/emrp/forcemetrology.html [1]. all graphics are screenshots taken from the project's website. if the figures are difficult to read, please open the website, if available. as the next step, a survey was undertaken and 66 stakeholders were asked to take part by submitting the filled-in form to the project team. 24 users replied and all the data received was entered into the database and can be found under "work packages/wp4.1 application conditions" on the website. the intention of the survey was to find out under which conditions and in what ranges (for example, temperatures up to 1100 °c) force transducers are sometimes used. it was not intended to develop methods covering all these ranges, so users should be aware that, for example, a temperature coefficient that was measured in the range up to 30 °c may not be useful in the range of 100 °c or above.

3. technical parameters and coefficients
several parameters or coefficients are already defined in written standards, for example in [2], or in scientific papers. the available information was compiled into one list together with references under "work packages/wp4.2 parameters and coefficients" on the website (figure 2). all parameters are given with their name, symbol and unit, as well as a description. in some cases, it was necessary to adapt the definition given in the guidelines for better uniformity. this may also serve as a proposal for a future revision of the guidelines. the parameters and coefficients were grouped into geometrical, mechanical, temporal, electrical and environmental effects. for some of the parameters, exact values are given as well.
unfortunately, this list is very short and it does not cover most of the parameters due to a lack of corresponding measurement results. if a user needs to know the exact value of a parameter, then it might be necessary to carry out corresponding measurements. on the other hand, depending on the target uncertainty, it may be sufficient to work with parameter ranges in the form of upper limits of their absolute values from data sheets. for example, the temperature effect on the characteristic value ("sensitivity") of a c4 force transducer from hbm [3] is not higher than 0.01 % per 10 k temperature change (see figure 3).

figure 1. questions from the on-line questionnaire about application conditions.
figure 2. parameter and coefficient definitions (part of the table).
figure 3. parameter ranges from data sheets.

4. detailed models and examples
4.1. models
the uncertainty contributions were then evaluated for some of the parameters and their related influencing quantities. in [4] it was shown that the uncertainty calculation can be performed with quite simple software tools. the various influencing effects are given under "work packages/wp4.3 uncertainty contributions" on the website (figure 4), arranged in categories. the intention is that the user applies corrections to the measurement results based on the knowledge of the effect, the mathematical model describing the influence and the values of the parameters including their uncertainties. the models follow directly from the above given definitions of the parameters and coefficients. if new influencing quantities or effects other than those in the list of models have to be considered, the related models can be developed on the basis of the models on the page.

figure 4. list of categories and influencing effects.

4.2. calculation examples
for some of the influencing effects from the list in figure 4, elaborated example calculations are given in the lower part of the wp4.3 page, see figure 5. these examples are organized using an accordion element in the graphical user interface of the page. it consists of a vertically stacked list of alternately expandable header items, meaning a maximum of one item can be expanded at a time.

figure 5. collection of detailed models and example calculations.

usually [5], for uncertainty calculations only linearized models are applied and the underlying formula is given in (1):

$$ u_c^2(y) = \sum_{i=1}^{N} \left( \frac{\partial f}{\partial x_i} \right)^{2} u^{2}(x_i) \, . \qquad (1) $$

in the case of our form, an enhanced formula (2) is used:

$$ u_c^2(y) = \sum_{i=1}^{N} \left( \frac{\partial f}{\partial x_i} \right)^{2} u^{2}(x_i) + \sum_{i=1}^{N} \sum_{j=1}^{N} \left[ \frac{1}{2} \left( \frac{\partial^2 f}{\partial x_i \, \partial x_j} \right)^{2} + \frac{\partial f}{\partial x_i} \, \frac{\partial^3 f}{\partial x_i \, \partial x_j^2} \right] u^{2}(x_i) \, u^{2}(x_j) \, . \qquad (2) $$

one of the advantages of this enhanced calculation method is that additional contributions to uncertainty are taken into account. if, for example, the user works at the same temperature as the calibration laboratory (20 °c, see above), the correction due to the temperature coefficient is zero, but due to a supposed higher uncertainty of the temperature measurement (for example, 0.5 k instead of 0.1 k) the result should also have a higher uncertainty.
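the enhanced formula (2) is straightforward to evaluate numerically. the sketch below approximates the derivatives by central differences and, for brevity, drops the third-derivative term of (2), which vanishes anyway for a model that is at most linear in each input, such as the temperature correction discussed next; the model, function names and numbers are illustrative stand-ins, not the website tool itself:

```python
import numpy as np

def u_enhanced(f, x, u, h=1e-5):
    """combined standard uncertainty of y = f(x) following eq. (2),
    with the third-derivative term omitted (see lead-in above)."""
    x = np.asarray(x, float); u = np.asarray(u, float); n = len(x)
    def d1(i):
        e = np.zeros(n); e[i] = h
        return (f(x + e) - f(x - e)) / (2.0 * h)
    def d2(i, j):
        ei = np.zeros(n); ei[i] = h
        ej = np.zeros(n); ej[j] = h
        return (f(x + ei + ej) - f(x + ei - ej)
                - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
    var = sum(d1(i) ** 2 * u[i] ** 2 for i in range(n))
    var += sum(0.5 * d2(i, j) ** 2 * u[i] ** 2 * u[j] ** 2
               for i in range(n) for j in range(n))
    return float(np.sqrt(var))

# illustrative temperature-correction model: c_corr = c * (1 + tk * (t - 20))
model = lambda v: v[0] * (1.0 + v[1] * (v[2] - 20.0))
print(u_enhanced(model, x=[2.000, 1e-4, 35.0], u=[0.005, 2e-5, 0.5]))
```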
the example calculations are also used for the test of correctness and validation of the online and offline tools that are described in the following section.

5. online and offline calculation tools the developed online tools can be found under "work packages/wp4.4 software tool" on the website (figure 7). the user should enter corresponding values into all white input text boxes of the form. after this is done, a click on the "evaluation" button calculates the results and shows them in the grey text boxes together with a list of contributions and an explanation below the form (figure 8). the calculations are performed in the internet browser on the user's local computer. for this purpose, javascript must be supported by the browser and enabled. the use of this tool is free but without any warranty. the source code is accessible and can be checked. the "reset" button can be used to delete the current results; the values in the white boxes are not affected. this allows the user to obtain a form without any results, but it is not necessary to use this reset function, as any input value can be changed at any time and, by pressing the "evaluation" button, the results can be re-calculated using these new values. the user should be aware that the indicated results will not correspond to the input values if the latter have been changed without starting a recalculation using the "evaluation" button. the table below the form shows the contributing variances in descending order. in the example of figure 8 it can be seen that the main contributions come from the uncertainty of the transducer calibration and the uncertainty of the temperature coefficient. in addition to the online tools, an excel file is available for offline use. the first sheet in this file contains a short description together with internal links to the other sheets and some settings for the units in which the values are given. the second sheet offers some explanation of technical terms including their units. each following sheet contains one model, and its structure is similar to that on the website. the sheets contain the model and descriptions, input fields (cells) for data, results fields (cells) for the calculation results, as well as the "reset" and "evaluation" buttons, the contributing variances, and the explanation. the calculations in this spreadsheet file are based on macros. for the full functionality of the file, macros must therefore be allowed. depending on the settings of the local programme, the user may be asked to allow macros to be executed by the programme. the last sheet of the file offers a graphical representation of the different contributing uncertainties (figure 9). the user can select which effects and influences should be taken into account by checking or unchecking the corresponding checkbox in the table above the charts. the graphics will be updated accordingly.

figure 7. example of an online tool form. figure 8. the filled-in form of figure 7 with calculated results and explanation. figure 9. pie charts of the contributing effects and influences (source: excel file [6]).
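as a rough sketch of the logic behind the tool's result table, the snippet below tabulates variance contributions c_i²·u_i² in descending order, mirroring the presentation described above; all names and numbers are invented for illustration:

```python
# each entry is (sensitivity coefficient c_i, standard uncertainty u_i);
# values are assumed, not taken from the actual online tool
contributions = {
    "transducer calibration":   (1.00015, 0.005),
    "temperature coefficient":  (30.0,    5e-6),
    "temperature measurement":  (2.0e-5,  0.5),
}
var = {name: (c * u) ** 2 for name, (c, u) in contributions.items()}
u_c = sum(var.values()) ** 0.5

# print the contributing variances in descending order, as the tool does
for name, v in sorted(var.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:26s} variance {v:.3e} ({100 * v / u_c**2:.1f} %)")
print(f"combined standard uncertainty u_c = {u_c:.3e} mV/V")
```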
6. conclusions the methods and tools developed within this work package should help the users of calibrated force transducers to better estimate the measurement uncertainty with less effort, thus improving the traceability of force measurement. it should be noted that the methods and tools described in this paper are limited neither to the meganewton force measuring range nor indeed to the quantity force, so they can equally be applied to torque or pressure measurements.

acknowledgement the research leading to these results has received funding from the european union on the basis of decision no 912/2009/ec. the emrp is jointly funded by the emrp participating countries within euramet and the european union.

references
[1] emrp sib63 "force traceability in the meganewton range", [online], http://www.ptb.de/emrp/sib63-home.html.
[2] vdi/vde/dkd 2638 kenngrößen für kraftaufnehmer – begriffe, 2008-10, (german and english versions in one document), https://www.vdi.de/richtlinie/vdivdedkd_2638-kenngroessen_fuer_kraftaufnehmer_begriffe/.
[3] hbm c4 force standard data sheet, special features, http://www.hbm.com.pl/pdf/b0663.pdf.
[4] d. röske, "uncertainty calculations using free cas software maxima," in imeko 22nd tc3, 12th tc5 and 3rd tc22 international conferences, cape town, south africa, february 2014, http://www.imeko.org/publications/tc3-2014/imeko-tc3-2014-016.pdf.
[5] bipm jcgm 100:2008, evaluation of measurement data – guide to the expression of uncertainty in measurement, http://www.bipm.org/utils/common/documents/jcgm/jcgm_100_2008_e.pdf.
[6] excel file for offline calculation of measurement uncertainties, http://www.ptb.de/emrp/fileadmin/documents/forcemetrology/workpackages/wp4/uploads/wp4.4_spreadsheet_tool_v1.0.1.xlsm.

acta imeko may 2014, volume 3, number 1, 47 – 55 www.imeko.org intelligent instrumentation: a quality challenge h.j. kohoutek hewlett-packard company, colorado computer quality department, fort collins, colorado, u.s.a. section: research paper keywords: instrumentation design, measurement quality citation: h.j. kohoutek, intelligent instrumentation: a quality challenge, acta imeko, vol. 3, no. 1, article 11, may 2014, identifier: imeko-acta-03 (2014)-01-11
editor: luca mari, università carlo cattaneo received may 1st, 2014; in final form may 1st, 2014; published may 2014 copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. this is a reissue of a paper which appeared in acta imeko 1988, proceedings of the 11th triennial world congress of the international measurement confederation (imeko), "instrumentation for the 21st century", 16-21.10.1988, houston, pp. 337–345.

abstract: after a review and description of current trends in the design of electronic measurement and analytical instrumentation, changes in its application and use, and of associated quality issues, this paper deals with new quality issues emerging from the expected increase of artificial intelligence impact on system design and implementation strategies. the concept of knowledge quality in all its aspects (i.e. knowledge levels, representation, storage, and processing) is identified as the key new issue. discussion of crucial knowledge quality attributes and associated assurance strategies suggests the need to enrich the assurance sciences and technologies by the methods and tools of applied epistemology. described results from current research and investigation, together with first applications of artificial intelligence to particular analytical instruments, lead to the conclusion that the conceptual framework of quality management is, in general, adequate for successful resolution of all quality issues associated with intelligent instrumentation.

1. introduction regular reports on the use of electronic measurement and analytical instrumentation, e.g. [1, 2], indicate that scientists and engineers plan to acquire more and more complex instruments (such as spectrophotometers, chromatographs, electron microscopes) and laboratory data management systems. to maximize the information gain from their application in physical sciences, life sciences, medicine, and engineering, a relatively high level of user knowledge and operational skill is required. this trend toward more complex instrumentation is accompanied by changes in the way professionals fulfill their various tasks [3]. we observe growth of the time segments dedicated to study, learning, analysis, reasoning, and judgement in the structure of their jobs, with a corresponding decline in data gathering, memorization, and physical activity. there is also a corresponding change in the generic structure of the instrument's block diagram [4]: from a simple measuring device dedicated to specific physical quantities, followed by transducers interfacing a generic electrical instrument, to current instrumentation (with both analog and digital electronic components) combined with data acquisition and processing. the architecture of contemporary instrumentation permits implementation of complex and abstract models of applicable physical, chemical, and information transformation processes. a similar change is observable in the modes of instrumentation controls, from mostly manual (allowing operators to adjust the set-up) to stored programs and feedback algorithms, allowing steadily increasing levels of automation in sample handling, repetitive measurements, and self-test. another of the key changes in test and measurement instrumentation is the evolution of small multifunction products (usually an instrument-on-a-card) able to replace an entire rack [5]. the dominating technical issues being addressed by contemporary designers are concentrated around:
– requirements for real time measurements [6], reflected in increases of rates of data acquisition, speed of a/d conversion, rates of data processing, careful selection of data format, interrupt priorities, etc. major unsolved problems remain in the areas of multinodal matrix data acquisition and processing of unstructured inputs.
– assessing the strategic impact of personal computers [7], which are strengthening their foothold in the field of electronic instrumentation. although pc-based instrumentation does not provide anything substantially new, it handles many data related tasks less expensively and with more flexibility than dedicated processors. their memory with centrally stored programs and data permits efficient record keeping, cooperation of multiple instruments or measurement processes, and communication. this frees the user from many routine laboratory and reporting chores.
– interfacing, stemming from the availability of data processing equipment and the need for at least laboratory-wide networking.
all this is a contemporary reflection of the intuitively obvious and routine search for more capability (on-line measurements, automation of sample handling, ...), higher performance (detection limits, sensitivity, selectivity, signal-to-noise ratio, speed, ...), ease of use, and lower price. in more specific terms the overall goals of new developments in the various types of instrumentation are:
– enhanced productivity and creativity of the analyst and experimenter,
– increased information content of experiments,
– optimized choice of experiment conditions,
– multidimensional and multiparametric measurements, and
– an integrated, highly automated laboratory.
in general, these instrument design trends [8] seem to be driven by:
– the need for wide repertoires of techniques and methods in a single laboratory, which is expected to be satisfied by implementations that preclude the prohibitive cost of a large number of dedicated instruments.
– the recognized power of software, leading to more software controlled instrumentation [9] and to the blending of computations and measurements; in these systems the instrumentation parts are expected to be fixed, while the software assures the needed flexibility and defines how the individual components behave together. this conceptual arrangement permits radical changes in the experiment to be accomplished with minimal hardware changes; in addition, software driven instrumentation opens the door to self-tuning, optimization, self-diagnostics, integration into the overall laboratory environment, etc.
– the availability of new and exotic technologies, such as fiber optics and microfabrication, offering an enormous range of opportunities for chemical microsensors (e.g. chemfets, microdielectrometers, surface acoustic wave sensors), multiwavelength measurements, portability, and cost reduction [10, 11].
the overall pace of changes along these recognized trends will probably quicken and be more dramatic after the introduction of research results of artificial intelligence (ai) into routine instrument design considerations (a block diagram of an intelligent instrument is presented in figure 1). experience currently being accumulated in the design of a wide variety of expert systems, complementing the well published work on dendral, macsyma, prospector, mycin, etc., will not only impact instrumentation design, but will change the jobs of both instrument designers and instrument users [12] and, highly probably, create new artificial intelligence technology based jobs in support areas. the expected result will be intelligent instrumentation which will be able to solve problems or produce answers instead of data [13]. the overall objective of introducing ai into instrumentation is to free the professional experimenter from unnecessary involvement with minute implementation details of the experiment and give him or her intelligent assistance in data interpretation, experiment evaluation and, ultimately, experimentation and instrumentation control according to embedded knowledge. results from current research and application implementations, e.g. [14, 15], present a convincing argument about the reality of an expected major impact by ai on the new generation of instruments. embedded artificial intelligence could easily become the dominant design and implementation strategy for 21st century instrumentation.

figure 1. block diagram of an intelligent instrument.

2. generic quality issues in contemporary instrumentation measurement systems performance, like all technical equipment performance, is a result of sometimes complex interactions among the equipment, the operator, the procedures used, and the environment in which uninterrupted performance is expected. quality management must address all these factors, because they all can be the root cause of imperfect performance representing itself as a defect, an error, or a failure [16]. the quality characteristics of the measurement process have developed around the concepts of precision, the degree of mutual agreement of independent measurements yielded by repeated applications of the process under specified conditions; and accuracy, the degree of agreement of such measurements with the true value of the magnitude of the quantity under consideration [17]. the concept of precision lends itself to a rigorous mathematical treatment by methods of statistical analysis. for accuracy, unfortunately, there does not exist any single, comprehensive, and satisfactory measure, and in some particular situations the concept itself can become illusive. here statistics can only help, e.g. with tools such as the mean square error, as proposed by gauss himself, measurement bias, systematic error, and measurement uncertainty. but even with these tools the steps for analyzing assignable and unassignable causes of error are plagued with many difficulties. to satisfy the statistical character of both concepts, multiple measurements are needed, which leads to an empirical concept of the repeatability of a measurement. repeatability implies the same measurement process and the same method of measurement. considered more fully, it should also include the same observer, operators, auxiliary equipment, and equipment calibration status.
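the statistical treatment of precision, bias, and mean square error mentioned above can be made concrete in a few lines; the readings and the conventional true value below are invented for illustration:

```python
import statistics

# repeated measurements of a standard whose conventional true value
# is known; data values are invented for illustration
true_value = 10.000
readings = [10.003, 9.998, 10.005, 10.001, 9.997, 10.004, 10.002, 9.999]

mean = statistics.mean(readings)
precision = statistics.stdev(readings)   # spread of repeated measurements
bias = mean - true_value                 # estimate of the systematic error
mse = sum((x - true_value) ** 2 for x in readings) / len(readings)

print(f"mean {mean:.4f}, precision (s) {precision:.4f}, "
      f"bias {bias:+.4f}, mean square error {mse:.3e}")
```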
in addition, it should be based on random sampling which takes into account a reasonable range of circumstances. out of the wide variety of measurement equipment quality attributes, the ones closest to measurement process quality are performance linearity over the effective range, and instrument stability. linearity (actually the non-linearity or lack of linearity) is usually conveyed by statements about how the signal deviates as it spans the range of measurements. linearity of an instrument is generally determined as the result of a calibration test in which only a limited number of values is determined for a system in steady state. only rarely will the same relation apply for dynamic states, because the storage parameters will alter the instantaneous transfer characteristics according to the signal amplitude and the history of previous signal excursions. pressure gauges, accelerometers, thermometers, etc. will give very different values for instantaneous and steady state measurements. the best expression of an instrument's linearity is via the calibration curve showing deviations from the best straight line. drift is a system characteristic, sometimes understood to be synonymous with the term stability, that characterizes how a system variable, which is intended to be constant, varies with time. it also conveys information about the very-low-frequency response of the instrument to the measurand. drift characteristics are usually specified from the reference point (defined at 25 °c) to the extremes of the working range, expanded by safety margins. linearity and stability, traditionally associated with the notion of instrument quality, are mostly a direct function of design choices and goals. other typical design objectives, intended to minimize deterioration due to time or environmental stresses, represent additional product quality attributes which are achieved by proper system structure, component selection, adequate design margins, and built-in redundancy. these attributes are most often known as:
– reliability, the probability of a system's successful function over a given mission period [18].
– environmental ruggedness, defined by a class of environmental stresses under which proper functionality is guaranteed.
– electromagnetic compatibility, given by the energy levels, directions, and frequencies of radiated and conducted emissions and by the instrument's susceptibility to external electromagnetic fields.
– safety level or class, indicating the risks of explosion, fire, or other hazards caused by potential internal defects or failures.
– maintainability [19], which is the probability that, when maintenance action is performed under stated conditions, a failed system will be restored to operable condition within a specified total down time.
the levels of these product quality characteristics, required for business success and customer satisfaction, are given by users' expectations and set by market conditions, competitive products' performance, the state-of-the-art of assurance technologies, and by government regulations or international standards. in some situations the overall economic value of these characteristics can be expressed by cost of ownership, which takes into account values of the vendor's goodwill shown in warranty period and conditions, cost of support and service contracts, etc.
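as an illustration of the calibration-curve characterization of linearity described above, the following sketch fits the best straight line to invented calibration points and reports the deviations from it:

```python
# calibration points: applied input vs. instrument reading
# (invented values); the best straight line is a least-squares fit
# and non-linearity is reported as deviation from that line
inputs   = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
readings = [0.01, 2.03, 4.02, 6.05, 8.01, 9.98]

n = len(inputs)
mx = sum(inputs) / n
my = sum(readings) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(inputs, readings))
         / sum((x - mx) ** 2 for x in inputs))
offset = my - slope * mx

deviations = [y - (slope * x + offset) for x, y in zip(inputs, readings)]
worst = max(abs(d) for d in deviations)
span = max(readings) - min(readings)
print(f"best straight line: reading = {slope:.4f}*input {offset:+.4f}")
print(f"max deviation: {worst:.4f} ({100 * worst / span:.2f} % of span)")
```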
to the product quality characteristics mentioned above we must add the classical expectations of products free from defects at delivery, compliant to product specifications and data sheets, and evidencing a satisfactory level of manufacturing quality controls, adequate workmanship skills, and attention to detail. advances in computing technologies and the related lower cost of computing found their applications in many fields, including analytical, control, and measurement instrumentation. currently the major part of electronic instrumentation is software driven, which brings into focus issues of software quality. the quality of the whole measurement system may, to a significant degree, be dependent on the quality of the code. because computer software is a logical rather than a physical system, it has different characteristics [20], e.g.:
– it does not wear out,
– maintenance often includes design modifications and enhancements, and
– there is no significant manufacturing activity involved.
there is also not very much well structured industrial experience to significantly indicate preferable engineering design strategies. many attempts to define software quality characteristics with associated metrics, criteria, and contributing factors have been made, e.g. [21, 22, 23], but only a few members of the proposed family of quality attributes are universally accepted, mainly:
– reliability, meaning a measure of the frequency and criticality of unacceptable program behavior under permissible operating conditions;
– correctness, now inconclusively demonstrated by validation testing which provides some level of assurance that the final software product meets all functional and performance requirements. there is some hope that, in the future, programs will be infallibly proven to be correct. some tools for automated proofs are beginning to emerge from the research work in artificial intelligence.
– maintainability, which may be defined qualitatively as the ease with which software can be understood, corrected, adapted, and enhanced. software maintainability is a difficult term to quantify, but gilb [24] proposed a number of measures relating to the maintenance effort.
– ease of use, which assures that the job or task will be successfully completed by personnel at any required stage in system operation within a specified time. usability characteristics are based on observations and understanding of operator behavior, limitations of body characteristics, and reaction times.
design approaches and assurance techniques for these quality attributes have not stabilized yet and vary widely from vendor to vendor. in the commercial world, as in science and engineering, these described measurement system quality characteristics are complemented by the ever increasing need to demonstrate that the total measurement uncertainty is sufficiently small to meet users' requirements. this measurement assurance, in reality a quality control of measurements, is based on redundancy built into the measurement scheme. this redundancy is usually represented by repeated measurements of a stable standard. measurement assurance is then obtained when verifiable limits of uncertainty have been demonstrated, and a measurement process quality control program exists in real time to monitor its performance [25, 26].
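a minimal sketch of such a real-time quality control of measurements, assuming an accepted value and a process standard uncertainty taken from a prior characterization of the stable standard (all numbers invented):

```python
# repeated check measurements of a stable standard compared against
# control limits; accepted value and process uncertainty are assumed
# to come from a prior characterization (invented numbers)
accepted_value, u_process, k = 5.000, 0.004, 3.0   # k: control limit factor
new_checks = [5.002, 4.997, 5.013, 5.001]

for i, x in enumerate(new_checks, 1):
    in_control = abs(x - accepted_value) <= k * u_process
    print(f"check {i}: {x:.3f} {'in control' if in_control else 'OUT OF CONTROL'}")
```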
3. quality issues unique to intelligent instrumentation
– the definition of quality, which is currently oriented toward conformance to specification, needs to be replaced by a sociometric definition, which perceives and eventually measures it as a multiattribute product characteristic to be designed in;
– the value of quality, now primarily beneficial to the manufacturer via reduced production and support cost, must be recognizable directly at the product level in both classical forms (value in exchange and value in use). this will allow quality to be properly priced, and appreciated as a source of revenue;
– the role of the quality department, where quality specialists and managers must shun being critics of manufacturing processes and become contributing partners with all line functions.
we understand the fact that scientific research, as well as control and measurement processes, are, by nature, processes of goal oriented learning, as is evident from the similarity of the flow diagrams of basic process steps shown in figures 2a, b. the key objective of making these processes as efficient as possible is frequently achieved by applying scientific analytical instrumentation. further, considering that the learning process itself is a feedback loop between inductive formation of hypotheses, based on accumulated data, and collection of new data according to deductively proposed experiments, we can see that classical instrumentation plays a major role in data gathering and analysis. however, intelligent instrumentation is expected to bring new (expert) levels of efficiency into the processes of experimental data interpretation, new hypotheses forming, and designing of new experiments.

figure 2a. the process of experimentation as iterative learning.

recognition that all the terms, such as data, information, rules, metarules, heuristics, etc., used in the presented flow diagrams represent different forms of knowledge brings us to knowledge as the central point of the whole field of artificial intelligence, perceived, from the practical viewpoint of the electronic industry, as knowledge-based problem solving by computers. also the empirical and intuitively self-evident observation that the competence, and by that the value, of any intelligent system is strongly correlated to the amount and quality of the embedded knowledge leads us to the central unique quality issue of intelligent instrumentation: the issue of knowledge quality.

3.1. knowledge quality the factual part of our knowledge comes mostly in the form of scientific and technical data in three broad classes [27]:
– repeatable measurements on well defined systems (class a), where quality assurance methodologies recommend using data only from reliable sources such as the national standard reference data system (nsrds) coordinated by the u.s. national bureau of standards.
– observational data, often time or space dependent (class b); the most effective way to assure quality of class b data is by careful maintenance and calibration of the measuring instruments prior to data acquisition. recording and preservation of all required auxiliary information is also important.
– statistical data (class c); here a quality control strategy, based on probability theory, is a more difficult task, often hindered by disagreements in definitions and terminology.
one of the greatest benefits of the recent advances in graphics software is increased control over the quality and completeness of experimental data by allowing for its visual inspection via windowing, 3-d modelling, and simultaneous processing of alphanumerical and graphical information.
improvements have reached the point where virtually any level of detail from multiple measurements can be observed on the screen and functionally used to enhance data quality. also, more frequent acceptance of the need for systematic skepticism and openness to alternative modes of interpretation in data analysis is now leading to the application of statistical methods of exploratory data analysis. these methods reflect recognition of the importance of graphical representation for visual inspection of unprocessed data, which should precede formal statistical calculations even when the latter is the desired end product. this approach opens wider ranges of alternative explanations and allows the researcher to be open to unexpected possibilities, particularly while working with weak theories.

figure 2b. event-driven control scheme of a general problem solver.

assuring the quality of higher level knowledge, which is being engineered today and will be embedded in intelligent instrumentation (e.g. definitions, taxonomies, discrete descriptions, constraints, deductive algorithms), presents many epistemological challenges. the philosophical questions about the reliability of scientific knowledge, by themselves a serious intellectual issue, will not significantly impact the design of intelligent instruments. this is so because, in the majority of expected applications, the knowledge and application domains will be sufficiently narrow and well defined. this will permit quality assurance to concentrate on issues of unambiguity, consistency, coherence, visibility and acceptability of perceived patterns, and methods of justification and validation. a completely new set of quality issues is associated with the problems of knowledge base updates and knowledge maintenance. quality assurance methods here might be simplified by the presence of metaknowledge or metarules controlling automatic learning in embedded knowledge bases.

3.2. quality of knowledge representation the pragmatic approach to embedding knowledge into programs that exhibit intelligent behavior is focused on developing schemes for knowledge representation which permit manipulation of specialized data structures to obtain intelligent inferences. in general, the knowledge representation schemes are combinations of data structures and interpretive procedures. the most frequently used formalisms are: state/space representation, formal logic, procedural representation, semantic nets, production systems, scripts, and frames. there are also special techniques for visual scene and speech representation. the issues of form and notation have occupied most previous quality discussions. currently it is recognized that the issue of what could or could not be done with a representation is more important. so, on the conceptual level, the most important quality attributes of knowledge representation are associated with its three adequacies [28]:
– metaphysical, assuring a form that does not contradict the character of facts of interest,
– epistemological, which guarantees that the representation can practically express the facts, and
– heuristical, which assures that the reasoning processes leading to the problem solution are expressible in the contemplated language.
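among the formalisms listed above, production systems are perhaps the easiest to sketch; the toy forward-chaining interpreter below records an execution trace of the kind that supports the visibility and inspection requirements discussed below (rules and facts are invented for illustration):

```python
# a minimal forward-chaining production system with an execution
# trace; rule contents and facts are invented for illustration
rules = [
    ("r1", {"peak_detected"}, "candidate_compound"),
    ("r2", {"candidate_compound", "retention_time_ok"}, "compound_confirmed"),
]
facts = {"peak_detected", "retention_time_ok"}

fired, trace = True, []
while fired:                      # repeat until no rule can fire
    fired = False
    for name, conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            trace.append((name, sorted(conditions), conclusion))
            fired = True

for name, conds, concl in trace:  # the trace supports later inspection
    print(f"{name}: {conds} -> {concl}")
```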
many authors [29] proposed more practical attributes, e.g.:
– modularity, which allows adding, deleting, or changing information independently,
– uniformity, to assure understandability to all parts of the system, achievable via rigid structures,
– naturalness, to reflect the ease of expressing important kinds of relevant knowledge,
– understandability, the degree to which the representation scheme is clear to humans,
– modifiability, which assures context independence.
studies of semantic primitives and established representational vocabularies [30] lead to other attributes such as finitude, comprehensiveness, independence, noncircularity, and primitiveness. a completely new quality assurance issue arises when problem solving algorithms require a representation change midstream. representation transformations must then be evaluated in terms of isomorphism and homomorphism.

3.3. knowledge base quality issues technical implementations of knowledge bases are benefiting from the design for quality methodologies of data bases, which culminated in fault-tolerant designs. new quality issues, emerging from the special characteristics of relational data bases and current experience with knowledge bases, are bringing into focus the needs of assuring:
– protections against interferences among the subsets of a knowledge base. this is complicated by the environment of laboratory-wide networks which do not have a central operating system. current "design for quality" research results tend toward making transactions atomic, which could adversely affect the transaction time;
– protections against new aspects of pattern sensitivity;
– transparency of structural characteristics, a difficult task in conditions of high structural complexity; and
– the already mentioned knowledge maintenance quality controls.
first attempts to formulate an adequate assurance strategy for knowledge maintenance have already taken place, as described e.g. in [31], providing a prespecified set of control metarules, a production rules formalism, and tools for sensitivity analysis, execution tracing, and explanations. but effective implementation of these strategies will be complicated by the problems of scale, speed, and system complexity. the importance of methods for validation of a system's knowledge base and evaluation of its quality is also self-evident, but their development is a very difficult task.

3.4. quality of knowledge processing the basic set of knowledge processing algorithms consists of search processes, deductions, inductive inferences, and learning.
– search algorithms, not significantly different from other algorithms implemented by computer programs, do not present new or unique quality problems. the basic questions here still relate to implementation correctness and its demonstrated verification levels. some new challenges may be found in the implementation of associative searches, but even here, the main issue is correctness.
– deductions are basically chains of statements, each of which is either a premise or something which follows from statements occurring earlier in the chain. mechanization of deductive processes based on predicate logic must, besides the issue of direct correctness, take into account needs for the deduction's visibility, ease of inspection, backward traceability etc., to improve solution credibility and to allow for correctness verification.
– the most difficult task for knowledge processing quality assurance arises when we depart from categorical (or deterministic) reasoning.
statistically based inferences, or inferences based on social judgement, are plagued with errors of insufficient evidence, illusory correlations, circumstantial prompting, belief perseverance in the face of contrary evidence, etc. research into the application of epistemology to solve these types of problems is only beginning. but we do not expect these problems to be encountered during the early generations of intelligent instrumentation.
– automated learning systems' performance can be measured by the rate of knowledge compilation. the quality of such a system is usually described by the system's generality (the ability to perform successfully in novel situations), its ability to discriminate between critical and peripheral information, its ability to automatically restrict the range of acquired knowledge applicability, and its ability to control the degree of interest of inferences in order to comply with principles of significance and goal satisfaction. but the fundamental conceptual problem of controlling the quality of knowledge processing will be resolved only if the problem of adequate modelling of the categorical structure of the thinking process is resolved.

4. other quality issues associated with intelligent instrumentation in many aspects the intelligent instruments will be implementations of an advanced generation of the current software driven instrumentation. because of this fact all the traditional software quality issues will still be present and will have to be addressed. the aspects of the new, higher level of intelligent human interface will bring into the field of instrumentation quality issues currently associated with advanced computing and engineering workstations. the most important seem to be:
– quality aspects of the control language; in the field of intelligent instrumentation the interface and control language will have to provide convenient means for the implementation of a variety of experimenting strategies and designs. it must also act as a vehicle of the experimenter's thought and, via local area networks, allow communication of higher level concepts between experimenters. the minimum quality requirements will have to cover aspects of ease of programming, error handling, automatic backtracking, machine independence of compilers, and level of standardization. similar requirements will apply to languages designed specially for manipulating knowledge bases.
– quality of the human interface, which must permit the instrumentation user to concentrate fully on his experiment, without being distracted by the uniqueness of the applied computational technology. human interface design strategies will have to go beyond the traditional ergonomic and intuitive approaches to design for system friendliness. they must address the physical, physiological, psychological, and intellectual aspects of the experimenter's job and personality. solutions must be based not only on anthropometry but be developed in the context of the measurement system's lexical, syntactic, and semantic requirements. there is a growing recognition of the need for serious scientific experimentation which will permit both the designer and the cognitive psychologist to significantly improve current approaches to human-instrument interface designs.
– quality aspects associated with new computing architectures and increased levels of parallelism.
– system testability and diagnosis; the growing complexity of the instrumentation and its real-time performance, which will make accurate repetitions of the computational and reasoning processes impossible due to asynchronisms and new levels of freedom for internal intelligent controls, will make these tasks especially difficult but nevertheless extremely important for future fault-tolerant designs.
despite the expected software dominance there will still be enough hardware in the next generation of instrumentation to keep us worrying about the traditional quality issues associated with product safety, regulatory compliance, environmental ruggedness, reliability, maintainability, supportability, etc. some new quality issues will emerge with new families of vlsi components. the increased levels of their complexity and density will challenge our ability to control the quality of all materials involved in processing, the dimensions of manufactured structures in and below the one micron range, and the many process variables needed to achieve yield economy and final product quality. the expansion of intelligent instrumentation, expected in the 1990s, will be complemented by the maturing of some sensor technologies currently being introduced. the new families of sensors solve one set of problems but open risks in other areas of which the quality assurance community must be aware, e.g.:
– optical sensors for chemical analysis are an answer to problems of electrical interference, reference electrodes, physical contact with the reagents, and the need for multiwavelength measurements. but to ensure the highest possible quality of their application, care must be taken to prevent background light interference and the possibility of photodegradation of reagents, to assure long term stability, etc. the sources of potential error must be identified and compensated for.
– electrochemical sensors, where developments are moving away from potentiometric to amperometric sensors. these changes are bringing challenges to the techniques of surface modification control required to achieve the wanted selectivity, to compensation methods for dealing with increased sensitivity, and to design for reliability. for chemfets, especially, challenges will appear in the areas of stability and encapsulation technologies.
the fundamental quality issues and strategies for assurance of these new families of sensors revolve around:
– the two concepts of compound confirmation and compound identification, as well as
– the practical need to avoid application surprises, such as interference in sample matrix effects, errors at the reference point, etc.

5. results of current research and applications reports regarding results of ai application in support of instrumented diagnostics and troubleshooting are many. manufacturers, including ibm, digital, tandem, prime, at&t, and general electric (for a review see e.g. [32]), have devised remote diagnostic expert systems to analyze hardware problems in the field. in 1984 a major breakthrough by dr. l.f. pau at battelle memorial institute, switzerland, in the development of a set of about 400 application independent metarules for diagnostic expert systems, provided a new strategy for both reduction of design cost and increase of capability via a model of a circuit network which describes many types of real electronic faults.
also the proliferation of cae tools and workstations has a far reaching impact on the productivity of development and the ease of support of maintenance related expert systems. other software based expert system-building tools, such as the automated reasoning tool art [33], permit the designer to develop the necessary knowledge base, reflecting his experience. these tools often use frames to represent knowledge about objects of interest, along with their attributes. agenda or menu windows indicate the rules fired together with the facts that initiated them. these and command windows, which trace program execution step-by-step, simplify both debugging and use of the system by illustrating the reasoning process flow. less visible, but significant, are the results of the still infrequent applications of ai to upgrade the performance of analytical instruments. one of the key reasons for this slow progress is that the majority of known successful projects has been implemented on large computers, often unavailable in current laboratory settings. a microcomputer based expert system for enhancement of electroencephalogram evaluation [15] has demonstrated the usefulness of ai methods in biomedical instrumentation. this system provides on-line evaluation with performance equal to that of a large computer. the most important information is extracted from the frequency domain, using the fft for spectral estimation, hanning windows to reduce spectral leakage, and forty-three assembly language coded rules for basic classification into either normal or abnormal categories. the disadvantage of the present system lies in the need to have the user provide the complete control strategy, i.e. the selection of relevant rules and the sequence of their implementation. the development of a totally computerized triple quadrupole mass spectrometer, with prototype knowledge based instrument control [14], is a necessary step toward the goal of intelligent tqms instrumentation. it has the ability to apply structured stored experience, to use the power of automatic reasoning to control its own behavior, and to respond to novel situations and new problems. the knowledge, including control heuristics, is represented in the form of productions (rules). a resident, ai guided calibration program assures a self-adaptive feedback control process for real-time optimization or tuning of the data acquisition process. studies of data acquisition, knowledge representation, reasoning algorithms, and interpretation of results, applied, e.g., to chromatographic data for the diagnosis of spontaneous bacterial peritonitis [34], indicate that current results can be extended to other applications and could lead to automated intelligent decision systems. honeywell system's fleet/dras project [35] demonstrated the feasibility of applying expert system technology to the automation of data reduction, analysis, and detection of anomalies. although incomplete, the work done in the investigation and assessment of the mutual relationship between artificial intelligence and the assurance sciences and technologies has identified the key quality strategies necessary to address the new, knowledge quality related issues [36]. these strategies are based on creative application of current assurance expertise enriched by methods of applied epistemology.
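a minimal sketch of the frequency-domain feature extraction used by such an eeg system – an fft power spectrum with a hanning window, followed by a toy classification rule – assuming synthetic data rather than real electrode recordings:

```python
import numpy as np

# a synthetic one-second EEG-like record; a real system would read
# digitized electrode data instead
fs = 256                                  # sampling rate, Hz
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(fs)

# hanning window to reduce spectral leakage, then FFT power spectrum
windowed = signal * np.hanning(fs)
spectrum = np.abs(np.fft.rfft(windowed)) ** 2
freqs = np.fft.rfftfreq(fs, 1 / fs)

# crude band powers as inputs to a toy classification rule (the real
# system used forty-three assembly-coded rules)
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
power = {b: spectrum[(freqs >= lo) & (freqs < hi)].sum()
         for b, (lo, hi) in bands.items()}
dominant = max(power, key=power.get)
label = "normal" if dominant == "alpha" else "abnormal"
print(dominant, "->", label)
```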
as described in the previous chapters, major progress has been achieved in:
– controlling quality of the factual part of embedded knowledge;
– identification of adequacy (metaphysical, epistemological, and heuristical) as the most important quality attribute of a knowledge representation;
– strategies for fault-tolerant design of data and knowledge bases; and
– key architectural guidelines for intelligent human interface design.
the key quality assurance technologies and skills for the manufacture of key vlsi components, which are expected to be fundamental for the intelligent instrumentation hardware, are identified and in place [37]. new sensor technologies, new non-von neumann architectures, and new system parts and functions, as well as support of knowledge quality assurance, will require additional intensive applied research and engineering work by teams of experts from both the r&d and quality communities.

6. summary and conclusions the emergence of successful practical implementations of the research results in artificial intelligence is beginning to impact conceptual thinking about the new generation of electronic measurement and analytical instrumentation. because the competence, and by that the value, of an intelligent instrument will always be strongly correlated to the amount and quality of the embedded knowledge, the issue of knowledge quality is beginning to form the cornerstone of the new concern of quality management. to assure the success of quality programs, methods of applied epistemology must be employed. only in this way can the problems of quality at all levels of knowledge, its representation, storage, and processing be solved. the key quality assurance strategies addressing these knowledge related quality attributes have been identified. new quality issues are also emerging because of the increased levels of hardware parallelism, computational speed, system complexity, and new sensor technologies. traditional issues of hardware and software quality, as well as issues of support and vendors' goodwill, remain substantially unchanged. current research and application results indicate that artificial intelligence must be considered by instrumentation designers in their search for more capability, higher performance, ease of use, and lower price. they even suggest that embedding artificial intelligence into measurement and analytical instrumentation could easily become the dominant system design and implementation strategy for the 21st century. the conceptual framework of current quality assurance, if enriched by the methods and tools of applied epistemology as indicated, seems to be generally adequate to address the new issues associated with knowledge quality and the intelligent user interface.

references
[1] c.j. mosbacher, use of analytical instruments and equipment, research & development, feb. 1985, pp. 174–177.
[2] c.j. mosbacher, use of analytical instruments and equipment, research & development, feb. 1986, pp. 90–93.
[3] f. hayes-roth, the machine as partner of the new professional, ieee spectrum, jun. 1984, pp. 28–31.
[4] c.h. house, perspectives on dedicated and control processing, computer, dec. 1980, pp. 35–49.
[5] g. sideris, traditional products get squeezed by newer gear, electronics, feb. 1987, pp. 89–93.
[6] j.g. liscouski, selecting instrument interfaces for real-time data acquisition, analytical chemistry, 1982, 54 (7), p. 849.
[7] f. guterl, instrumentation (technology 1985), ieee spectrum, jan. 1985, pp. 65–67.
[8] w.e. terry, instrument advances will be an intelligent progression, electronic business, dec. 1985, pp. 137–138.
[9] m.l. salit, m.l. parson, software-driven instrumentation: the new wave, analytical chemistry, 1985, 57 (6), p. 715.
[10] v. berry, liquid chromatography at the 1986 pittsburg conference: detectors, american laboratory, may 1986, pp. 36–48.
[11] h. wohltjen, chemical microsensors and microinstrumentation, analytical chemistry, 1984, 56 (1), p. 87.
[12] p. wallich, the engineer's job: it moves toward abstraction, ieee spectrum, jun. 1984, pp. 32–47.
[13] j. sztipanovits, j. bourne, design of intelligent instrumentation, proc. of the first conference on artificial intelligence applications, ieee computer society, dec. 1984, pp. 490–495.
[14] c.m. wong, r.w. crawford, j.c. kunz, t.p. kehler, application of artificial intelligence to triple quadrupole mass spectrometry, ieee trans. nucl. sci., vol. ns-31, no. 1, feb. 1984, pp. 804–810.
[15] l. baas, j.r. bourne, a rule-based microcomputer system for electroencephalogram evaluation, ieee trans. biomed. engr., vol. bme-31, no. 10, oct. 1984, pp. 660–664.
[16] f.g. peuscher, design and manufacture of measurement systems, handbook of measurement sciences, vol. 1, ch. 28, pp. 1209–1215, john wiley & sons ltd, 1983.
[17] h.h. ku, editor: precision measurements and calibrations, nbs special publication 300, vol. 1, feb. 1969.
[18] m.l. shooman, probabilistic reliability, mcgraw-hill book co., new york, 1968.
[19] j.g. rau, optimization and probability in systems engineering, van nostrand reinhold co., new york, 1970.
[20] r.s. pressman, software engineering, mcgraw-hill book co., 1982.
[21] b. boehm, et al., characteristics of software quality, north holland publishing company, 1978.
[22] t. mccall, et al., factors in software quality, ge-tis-77ciso2, general electric company, 1977.
[23] w. curtis, management and experimentation in software engineering, proc. of the ieee, vol. 68, no. 9, sep. 1980.
[24] t. gilb, a comment on the definition of reliability, acm software engineering notes, vol. 4, no. 3, july 1979.
[25] j.w. locks, measurement assurance and accreditation, proc. of the ieee, vol. 74, no. 1, jan. 1986, pp. 21–23.
[26] r.m. judish, quality control of measurements – measurement assurance, proc. of the ieee, vol. 74, no. 1, jan. 1986, pp. 23–25.
[27] d. lide jr., critical data for critical needs, science, vol. 212, no. 4501, june 1981, pp. 1343–1349.
[28] j. mccarthy, p.j. hayes, some philosophical problems from the standpoint of artificial intelligence, readings in artificial intelligence, tioga publishing co., palo alto, california, 1981, pp. 431–450.
[29] a. barr, e. feigenbaum, editors, handbook of artificial intelligence, vol. i, chapter iii, pp. 193–194.
[30] y.a. wilks, methodological questions about artificial intelligence: approaches to understanding natural language, journal of pragmatics, vol. 1 (1977), pp. 69–84.
[31] p. politakis, s.m. weiss, using empirical analysis to refine expert system knowledge bases, artificial intelligence, vol. 22, no. 1 (1984), pp. 23–48.
[32] h. lahore, l. gozzo, artificial intelligence application to testability, proc. of annual reliability and maintainability symposium, 1985, pp. 276–282.
[33] c. kalme, ai tool applies the art of building expert systems to troubleshooting boards, electronic design, apr. 1986, pp. 159–166.
[34] m.e. cohen et al., knowledge representation and classification of chromatographic data for diagnostic medical decision making, pp. 481–486.
[35] w.k. utt et al., an expert system for data reduction, proc. of the 2nd conference on artificial intelligence applications, miami beach, florida, usa, 1985, ieee computer society / north-holland, pp. 120–124.
[36] h.j. kohoutek, quality issues in new generation computing, proc. of the international conference on fifth generation computer systems, icot, 1984, pp. 695–670.
[37] h.j. kohoutek, quality assurance technologies for vlsi manufacturing, quality and reliability engineering international, vol. 3 (1987), pp. 107–126.

acta imeko issn: 2221-870x november 2016, volume 5, number 3, 32-44 numerical inversion of a characteristic function: an alternative tool to form the probability distribution of output quantity in linear measurement models viktor witkovský institute of measurement science, slovak academy of sciences, dúbravská cesta 9, sk-841 04 bratislava, slovakia

abstract: measurement uncertainty analysis based on combining the state-of-knowledge distributions requires evaluation of the probability density function (pdf), the cumulative distribution function (cdf), and/or the quantile function (qf) of a random variable reasonably associated with the measurand. this can be derived from the characteristic function (cf), which is defined as a fourier transform of its probability distribution function. working with cfs provides an alternative and frequently much simpler route than working directly with pdfs and/or cdfs. in particular, derivation of the cf of a weighted sum of independent random variables is a simple and trivial task. however, the analytical derivation of the pdf and/or cdf by using the inverse fourier transform is available only in special cases. thus, in most practical situations, a numerical derivation of the pdf/cdf from the cf is an indispensable tool. in metrological applications, such an approach can be used to form the probability distribution for the output quantity of a measurement model of additive, linear or generalized linear form. in this paper we propose new original algorithmic implementations of methods for numerical inversion of the characteristic function which are especially suitable for typical metrological applications. the suggested numerical approaches are based on the gil-pelaez inverse formulae and on using the approximation by discrete fourier transform and the fast fourier transform (fft) algorithm for computing the pdf/cdf of univariate continuous random variables. as illustrated here, for typical metrological applications based on linear measurement models the suggested methods are an efficient alternative to the standard monte carlo methods.

section: research paper keywords: gum; linear measurement model; probability density function; characteristic function; numerical inversion; gil-pelaez inversion formulae; fft algorithm citation: viktor witkovský, numerical inversion of a characteristic function: an alternative tool to form the probability distribution of output quantity in linear measurement models, acta imeko, vol. 5, no. 3, article 6, november 2016, identifier: imeko-acta-05 (2016)-03-06 section editor: franco pavese, italy received april 27, 2016; in final form april 27, 2016; published november 2016 copyright: © 2016 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited funding: the work was supported by the slovak research and development agency, project apvv-15-0295, and by the scientific grant agency vega of the ministry of education of the slovak republic and the slovak academy of sciences, by the projects vega 2/0047/15 and vega 2/0011/16 corresponding author: viktor witkovský, e-mail: witkovsky@savba.sk

1. introduction the basic working tool in measurement uncertainty analysis, as advocated in the current revision (under preparation) of the guide to the expression of uncertainty in measurement (gum) [1], and consistent with its supplement 1 – propagation of distributions using a monte carlo method [2], is the state-of-knowledge pdf about the quantity (true value of measurand), based on the currently available information. the state-of-knowledge pdf quantifies the degree of belief about the values that can be assigned to the quantity based on the available information. the expectation and the standard deviation of this pdf (if they exist) are used to report the measurement result and the associated (standard) measurement uncertainty. although the latest gum development emphasizes the bayesian view of probability in the evaluation of measurement uncertainty, it should be clearly stated and understood that this approach is not based on the strict bayesian principles of statistical inference (i.e.
straightforward application of the bayes' theorem). for more details and further discussion see, e.g., [3]-[5], or [6], and also [7] and [8]. in fact, the gum approach is based on using a well-defined functional relationship between the mutually inter-related quantities for propagating the state-of-knowledge pdfs of the input quantities, represented by the random variables (rvs), into the state-of-knowledge pdf of the output quantity – which is believed to be a rv reasonably associated with the measurand. frequently, it is suggested to use a well-known functional relationship (based, e.g., on the physical and/or geometrical laws) between the true value of the measurand and the true values of the other influencing input variables, which is typically expressed by the measurement equation of the measurement model. obviously, such a pdf of the output quantity represents the currently available knowledge (limited, but hopefully the best to date) about the measurand, i.e. it expresses the probability distribution of the values being attributed to a quantity (the measurand), based on the information used (which could be rather limited and/or heavily biased). this interpretation is consistent with the original gum definition of the uncertainty in measurement (see [1], clause 2.2.3), which is defined as a parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could be reasonably attributed to the measurand. however, the derived term coverage interval is inconsistent with this interpretation; for more details and discussion see [8].
in fact, without imposing further (well and clearly defined) model assumptions and optimality criteria for selecting and combining the information, it can hardly be expected that the presented result represents the best (in what sense?) estimate of the true measurand value. on the other hand, the proposed gum approach can be well accepted as a (simple) method for combining experimental results with expert judgment in order to get a comprehensive characterization of our knowledge about the true value of the measurand, based on all currently available information, albeit without the possibility of guaranteeing the (otherwise naturally) required statistical properties and/or optimality criteria. if this is the goal, other means and/or subsequent analysis should be applied and properly used. as already mentioned, the term coverage interval (introduced in [2], and defined as the interval containing the (true) value of a quantity with a stated probability (say 95 %), based on the information available) is not properly used in this context. hence, as an alternative to the 95 % coverage interval, here we shall use a more appropriate term, the 95 % state-of-knowledge interval. this should be read as the interval of 95 % of the values that could reasonably be attributed to the unknown value of the measurand based on the current state of knowledge (i.e. based on the measurement model, the currently available information, and the method used for combining the information). of course, further study is necessary for characterizing the optimality properties of the used method, e.g., under repeatability conditions.

a standard approach to derive the state-of-knowledge pdf is based on the propagation of distributions using a monte carlo method, as suggested in supplement 1 of the gum [2]. for more details and discussion on the applicability of the uncertainty evaluation methods based on the gum and its supplement 1 see, e.g., [9]-[12]. a principal advantage of the monte carlo methods is their simplicity and asymptotic consistency (convergence to the true values with a growing number of monte carlo simulations). a disadvantage of the monte carlo methods is their principal ambiguity (i.e. non-uniqueness/variability of the results under independently repeated computations with a given number of simulations) and the typically large demand on computational resources: to achieve a pre-selected accuracy level, a very large number of simulations is often required. alternatively, in linear measurement models, the state-of-knowledge pdf of the output quantity can be derived by inverting its characteristic function, which can be easily derived from the known characteristic functions of the input quantities. a principal advantage of the methods based on inversion of the characteristic function is their principal exactness (as there is a theoretical one-to-one mapping between the cf and the cdf) and efficiency for all values of the computed cdf/pdf/qf. however, in real situations, the precision of the computational methods used is also subject to possible numerical errors. a typical numerical error of such results is due to the particular algorithmic implementation (e.g., the truncation error and/or the integration error), which can be and should be properly controlled.
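to make the monte carlo ambiguity concrete, the following minimal matlab sketch (our illustrative addition, not part of the algorithms presented later) repeatedly estimates the 97.5 % quantile of a standard normal random variable by monte carlo; each run with 10^5 simulations returns a slightly different estimate of the exact value 1.959964, while a cf-based computation is deterministic.

%% illustration: monte carlo ambiguity of quantile estimates
m = 1e5;                        % number of monte carlo simulations
reps = 5;                       % number of independent repetitions
q = zeros(reps,1);
for r = 1:reps
    y = sort(randn(m,1));       % simulated standard normal sample
    q(r) = y(ceil(0.975*m));    % empirical 97.5 % quantile
end
q                               % five different estimates of 1.959964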
as we shall illustrate below, in typical metrological applications with linear measurement models and well behaved probability distributions of the input quantities, the numerical precision and efficiency of the simple methods and algorithms presented here are superior to the standard monte carlo methods. we note that among other possible alternative approaches to evaluate the propagated probability distribution of the output quantity we can include the advanced methods for arithmetic computations with random variables and their distributions, see e.g. [13], [14], and also [15], [16]. however, the applicability of these methods is still limited to a relatively small number of input random variables.

2. linear measurement model and the characteristic functions

as mentioned above, an alternative tool to form the state-of-knowledge probability distribution of the output quantity in a linear measurement model is based on the numerical inversion of its characteristic function (cf), which is defined as the fourier transform of its pdf, see (2) below. computing the (inverse) fourier transform numerically is a well-known problem, frequently connected with the problem of computing integrals of highly oscillatory (complex) functions. the problem has been studied for a long time in general, but also with focus on specific applications, see, e.g., [17]-[22], to name just a few. in particular, the methods suggested for inverting the characteristic function to obtain the probability distribution function include [23]-[27]. approximations of the continuous fourier transform by the discrete fourier transform and by the fft algorithm are widely used in different fields of engineering. however, using the fft for evaluation of the pdf/cdf from the characteristic function is not widespread in statistical applications (one important exception is the field of financial mathematics and econometrics), and, in general, it is not well implemented in the relevant software packages. in [28], korczynski, cox, and harris suggested and illustrated the use of convolution principles in metrology applications. their approach was based on consecutively replacing the convolution integrals by convolution sums evaluated by using the fast fourier transform (i.e. without directly using the exact characteristic functions), to form the probability distribution for the output quantity in a measurement model of additive, linear or generalized linear form. compared with the approach proposed here, which is based on combining and inverting the exact characteristic functions, the numerical precision of their approach can quickly deteriorate with a growing number of required convolution integrals. in fact, in metrological applications a number of measurement models used in uncertainty evaluation are, at least approximately (up to a reasonable level), of the additive linear form

Y = c_1 X_1 + \cdots + c_n X_n ,  (1)

where the input quantities X_1, ..., X_n are independent random variables with known probability distributions, X_j \sim F_{X_j}, for j = 1, ..., n, possibly parametrized by θ_j. here, c_1, ..., c_n denote known constants and Y represents the univariate output quantity (a random variable with an unknown distribution to be determined).
the characteristic function of a continuous univariate random variable X \sim F_X, with probability density function pdf_X(x), is defined as the fourier transform of its pdf,

\mathrm{cf}_X(t) = \int_{-\infty}^{\infty} e^{\mathrm{i}tx}\,\mathrm{pdf}_X(x)\,\mathrm{d}x , \quad t \in \mathbb{R} .  (2)

analytical expressions of the characteristic functions are known for many standard probability distributions, see e.g. [29]-[31], or other available sources. otherwise, the cf can be derived either analytically, expressed by using computer-based tools (e.g. mathematica), or evaluated numerically. in table 1 we present selected characteristic functions of univariate distributions frequently used in metrological applications; compare the presented distributions with those in table 1 in [2]. notice that the characteristic functions of the symmetric zero-mean distributions are purely real functions of the argument t \in \mathbb{R}.

table 1. characteristic functions of continuous univariate distributions used in metrological applications (selected symmetric zero-mean distributions and non-negative distributions). here, K_\nu(z) denotes the modified bessel function of the second kind, J_\nu(z) is the bessel function of the first kind, and U(a, b, z) is the confluent hypergeometric function of the second kind.

gaussian N(0,1):                \mathrm{cf}(t) = e^{-t^2/2}
student's t, t_\nu:             \mathrm{cf}(t) = \frac{(\sqrt{\nu}\,|t|)^{\nu/2} K_{\nu/2}(\sqrt{\nu}\,|t|)}{2^{\nu/2-1}\,\Gamma(\nu/2)}
rectangular R(-1,1):            \mathrm{cf}(t) = \sin(t)/t
triangular T(-1,1):             \mathrm{cf}(t) = (2 - 2\cos(t))/t^2
arcsine U(-1,1):                \mathrm{cf}(t) = J_0(t)
exponential Exp(\lambda):       \mathrm{cf}(t) = \lambda/(\lambda - \mathrm{i}t), \lambda > 0 rate
gamma \Gamma(\alpha,\beta):     \mathrm{cf}(t) = (1 - \mathrm{i}t/\beta)^{-\alpha}, \alpha > 0 shape, \beta > 0 rate
chi-squared \chi^2_\nu:         \mathrm{cf}(t) = (1 - 2\mathrm{i}t)^{-\nu/2}, \nu > 0 degrees of freedom
fisher-snedecor's F_{\nu_1,\nu_2}: \mathrm{cf}(t) = \frac{\Gamma\left(\frac{\nu_1+\nu_2}{2}\right)}{\Gamma\left(\frac{\nu_2}{2}\right)}\, U\!\left(\frac{\nu_1}{2},\, 1-\frac{\nu_2}{2},\, -\frac{\nu_2}{\nu_1}\mathrm{i}t\right), \nu_1, \nu_2 > 0 degrees of freedom

deriving the cf of a weighted sum of independent random variables is a simple and trivial task. let cf_{X_j}(t) denote the characteristic function of X_j. the characteristic function of Y defined by (1) is

\mathrm{cf}_Y(t) = \mathrm{cf}_{X_1}(c_1 t) \cdots \mathrm{cf}_{X_n}(c_n t) .  (3)

for illustration, in figure 1 we plot the cf of a linear combination of two independent chi-squared random variables with \nu_1 = 1 and \nu_2 = 10 degrees of freedom, evaluated for t \in (-1,1).

figure 1. real (blue) and imaginary (red) part of the characteristic function of Y = 10 X_{\chi^2_1} + X_{\chi^2_{10}}, the linear combination of two independent chi-squared random variables with \nu_1 = 1 and \nu_2 = 10 degrees of freedom, evaluated for t \in (-1,1).

here we shall assume that the considered characteristic functions of the input and/or output quantities, the random variables X_1, ..., X_n and Y, are known or can be easily derived. then, by the fourier inversion theorem, the pdf of the random variable Y is given by

\mathrm{pdf}_Y(y) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-\mathrm{i}ty}\,\mathrm{cf}_Y(t)\,\mathrm{d}t , \quad y \in \mathbb{R} .  (4)

analytical derivation of the pdf by using the (inverse) fourier transform (4) is available only in special cases. thus, in most practical situations, a numerical derivation of the pdf/cdf from the cf is an indispensable tool. in the next section we present two simple but frequently very efficient approaches (approximate numerical methods) for inversion of the characteristic function, together with a detailed description of their algorithmic implementations, which are suitable for typical metrological applications.

3. numerical inversion of the characteristic function

the exact inverse fourier transform (4) can be naturally approximated by the truncated integral form,

\mathrm{pdf}_Y(y) \approx \frac{1}{2\pi} \int_{-T}^{T} e^{-\mathrm{i}ty}\,\mathrm{cf}_Y(t)\,\mathrm{d}t ,  (5)
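as a worked instance of (3), using the chi-squared cf from table 1, the cf plotted in figure 1 is available in closed form:

\mathrm{cf}_Y(t) = \mathrm{cf}_{\chi^2_1}(10\,t)\;\mathrm{cf}_{\chi^2_{10}}(t) = (1 - 20\mathrm{i}t)^{-1/2}\,(1 - 2\mathrm{i}t)^{-5} .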
where T is a sufficiently large (real) value, and the integrand is a complex (oscillatory) function, which is assumed to decay to zero reasonably fast for |t| \to \infty (as is typical for the continuous distributions used in metrological applications). note that the selection of the integration limit T and the speed of the integrand's decay contribute significantly to the truncation error of this approximation. in general, the required integral can be evaluated by any suitable numerical quadrature method. frequently, a simple trapezoidal rule gives fast and satisfactory results. however, caution is necessary if the integrand is a highly oscillatory function, which is the case when |y| is a large value from the tail area of the distribution, e.g. the 99 % quantile. in such situations a more advanced quadrature method should be used, typically in combination with efficient root-finding algorithms and algorithms for accelerated computing of the limit values of sums of alternating series (not considered here), see e.g. [21], [32]. note that the selection of the integration method significantly contributes to the integration error of this approximation. figure 2 illustrates the problem of integrating a highly oscillatory integrand function.

figure 2. integrand functions for computing the pdf/cdf of the chi-squared distributed random variable Y \sim \chi^2_1 at y = 15, computed by the gil-pelaez inversion formulae from its characteristic function cf_Y(t) = (1 - 2\mathrm{i}t)^{-1/2}. the plotted integrand functions of the integrals in (6) and (7) are evaluated for t \in (0, T), T = 5. the red circles depict the zeros (roots) of the integrand functions on (0, T). the total integral can be computed as an infinite sum of sub-integrals, i.e. as the limit value of the sum of an alternating series.

for typical metrological applications we suggest considering algorithms based on the gil-pelaez inversion formulae (6)-(7) and the approximate discrete fourier transform (21), resp. (22), based on the simple trapezoidal rule for computing the required sub-integrals. the possible numerical error of the results (i.e. the truncation error and the integration error) can be controlled by selecting a proper truncation limit T (sufficiently large, leading to a small truncation error), and by dividing the selected integration interval (0, T) into a large number, say N, of small sub-intervals, where the trapezoidal rule provides a satisfactorily good numerical approximation (with small integration error) of the true integral value. for theoretical results on how to control the truncation and integration errors in such and similar situations see, e.g., [35].

3.1. the gil-pelaez inversion formulae

in [33], gil-pelaez derived the inversion formulae, suitable for numerical evaluation of the pdf and/or the cdf, which require integration of a real-valued function only. the pdf is given by

\mathrm{pdf}_Y(y) = \frac{1}{\pi} \int_0^{\infty} \Re\!\left[e^{-\mathrm{i}ty}\,\mathrm{cf}_Y(t)\right]\mathrm{d}t .  (6)

further, if y is a continuity point of the cumulative distribution function of Y, the cdf is given by

\mathrm{cdf}_Y(y) = \frac{1}{2} - \frac{1}{\pi} \int_0^{\infty} \Im\!\left[\frac{e^{-\mathrm{i}ty}\,\mathrm{cf}_Y(t)}{t}\right]\mathrm{d}t .  (7)

by \Re(f(t)) and \Im(f(t)) we denote the real and imaginary part of the complex function f(t), respectively. numerical inversion of the characteristic function based on (6) and (7) has been successfully implemented for evaluation of the distribution function of a linear combination of independent chi-squared rvs by imhof in [34] and by davies in [35]. further, gil-pelaez's method has been implemented in the algorithm tdist, see [36] and [37], for computing the distribution of a linear combination of independent student's t random variables and/or other symmetric zero-mean random variables, and also for computing the distribution of a linear combination of independent inverted gamma random variables, as suggested in [38], and the distribution of a linear combination of independent log-lambert W \times \chi^2_\nu rvs, [39].
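the following minimal matlab sketch (our illustration, assuming the standard normal cf and a truncation limit T = 8, where |cf(T)| is of order 10^-14) evaluates the gil-pelaez integrals (6) and (7) directly by adaptive quadrature:

%% illustration: direct evaluation of the gil-pelaez integrals (6)-(7)
cf  = @(t) exp(-t.^2/2);        % cf of the standard normal distribution
y   = 1.96;                     % evaluation point
T   = 8;                        % truncation limit, cf(T) ~ 1.3e-14
pdf = integral(@(t) real(exp(-1i*t*y).*cf(t)),0,T)/pi          % ~0.05844
cdf = 0.5 - integral(@(t) imag(exp(-1i*t*y).*cf(t)./t),0,T)/pi % ~0.97500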
in [40], the algorithm tdist has been suggested and applied for computing the 95 % state-of-knowledge interval (considered as the approximate 95 % confidence interval) for the common mean value in inter-laboratory comparisons with systematic effects (biases). in general, the integrals in (6) and (7) can be computed by any numerical quadrature method, possibly in combination with efficient root-finding algorithms and accelerated computing of limits of alternating series, as considered, e.g., in [20] and [25]. frequently, (6) and (7) can be efficiently approximated by a simple trapezoidal quadrature:

\mathrm{pdf}_Y(y) \approx \frac{\delta_t}{\pi} \sum_{j=0}^{N} w_j\, \Re\!\left[e^{-\mathrm{i}t_j y}\,\mathrm{cf}_Y(t_j)\right] = \frac{\delta_t}{\pi} \left( w_0 + \sum_{j=1}^{N} w_j \cos(t_j y)\, \Re\!\left[\mathrm{cf}_Y(t_j)\right] + \sum_{j=1}^{N} w_j \sin(t_j y)\, \Im\!\left[\mathrm{cf}_Y(t_j)\right] \right) ,  (8)

\mathrm{cdf}_Y(y) \approx \frac{1}{2} - \frac{\delta_t}{\pi} \sum_{j=0}^{N} w_j\, \Im\!\left[\frac{e^{-\mathrm{i}t_j y}\,\mathrm{cf}_Y(t_j)}{t_j}\right] = \frac{1}{2} - \frac{\delta_t}{\pi} \left( w_0 (\mathrm{mean}(Y) - y) + \sum_{j=1}^{N} w_j \cos(t_j y)\, \Im\!\left[\frac{\mathrm{cf}_Y(t_j)}{t_j}\right] - \sum_{j=1}^{N} w_j \sin(t_j y)\, \Re\!\left[\frac{\mathrm{cf}_Y(t_j)}{t_j}\right] \right) ,  (9)

where N is a sufficiently large number of (equidistant) sub-intervals of (0, T), w_j are the appropriate quadrature weights, and t_j denote the appropriate (equidistant) nodes from the interval (0, T), for sufficiently large T. in particular, for the trapezoidal quadrature rule we set
• \delta_t = T/N, or alternatively \delta_t = 2\pi/(B-A), which gives T = N\,2\pi/(B-A),
• w_0 = w_N = 1/2, and w_j = 1 for j = 1, ..., N-1,
• t_j = j\,\delta_t for j = 0, ..., N, with T = t_N = N\,\delta_t.
here, the interval (A, B) specifies the range of typical values y, i.e. a large part of the distribution support of the random variable Y. as a simple rule of thumb, if the (optimum) value of T is unknown, we suggest starting with the application of the six-sigma rule, i.e. set the typical range (A, B) as an intersection of the natural parametric space of Y with the interval (L, U) (e.g., (A, B) = (L, U) \cap \mathbb{R} or (A, B) = (L, U) \cap \mathbb{R}^+), with
• L = mean(Y) - 6 std(Y),
• U = mean(Y) + 6 std(Y),
where mean(Y) and std(Y) represent the expectation and the standard deviation of the probability distribution of Y, and define T = N\,2\pi/(B-A) for some pre-selected fixed N. note that the value of T should be kept sufficiently large, such that the characteristic function is sufficiently small for all t > T, i.e. |cf_Y(t)| < \epsilon for small \epsilon, say \epsilon = 10^{-12}. this can be effectively controlled by using a proper (sufficiently large) value of N. further, for computing the leading term in (9), we use the result from [36]: if the mean (expectation) of Y exists, then

\lim_{t \to 0} \Im\!\left[\frac{e^{-\mathrm{i}ty}\,\mathrm{cf}_Y(t)}{t}\right] = \mathrm{mean}(Y) - y .  (10)

the required mean(Y) and std(Y) can be evaluated analytically, from the moments of the input variables, or approximately, by using numerical differentiation of the characteristic function of Y, cf_Y(t).
in particular,

\mathrm{mean}(Y) \approx \frac{1}{12\mathrm{i}h} \left( \mathrm{cf}_Y(-2h) - 8\,\mathrm{cf}_Y(-h) + 8\,\mathrm{cf}_Y(h) - \mathrm{cf}_Y(2h) \right) ,  (11)

\mathrm{std}(Y) \approx \sqrt{m_2(Y) - \mathrm{mean}^2(Y)} ,  (12)

where

m_2(Y) \approx -\frac{1}{144 h^2} \left( \mathrm{cf}_Y(-4h) - 16\,\mathrm{cf}_Y(-3h) + 64\,\mathrm{cf}_Y(-2h) + 16\,\mathrm{cf}_Y(-h) - 130 + 16\,\mathrm{cf}_Y(h) + 64\,\mathrm{cf}_Y(2h) - 16\,\mathrm{cf}_Y(3h) + \mathrm{cf}_Y(4h) \right) ,  (13)

for any small h > 0, e.g., h = 10^{-4}. this numerical approach is applicable even in cases when the theoretical moments of the considered distribution, defined by cf_Y(t), do not exist (e.g., for the student's t distribution with 1 or 2 degrees of freedom). the truncation limit T = N\,\delta_t = N\,2\pi/(B-A) depends on N and (A, B). the trade-off between the values of B-A, N, \delta_t and T strongly depends on the particular distribution of Y and its cf. for example, for a highly precise numerical inversion of the standard normal cf, cf_Y(t) = e^{-t^2/2}, computed in double precision arithmetic with precision better than \epsilon = 10^{-14}, it is sufficient to set (A, B) = (-8, 8) with B-A = 16 and N = 2^5 = 32, leading to \delta_t = \pi/8 and T = 4\pi, with cf_Y(T) = e^{-T^2/2} \approx 5 \times 10^{-35}. on the other hand, the numerical inversion of the rectangular cf given by cf_Y(t) = \sin(t)/t requires in double precision arithmetic the value T = 10^{14} to get |cf_Y(t)| \le 10^{-14} for t > T. for the natural choice of (A, B) = (-1, 1) with B-A = 2, this suggests setting N \approx 10^{13}, which is an unacceptable value, and thus reveals that the simple trapezoidal rule is not a suitable integration method for highly precise numerical inversion of the rectangular distribution cf. fortunately, the cf of the output quantity Y in typical metrological situations based on linear measurement models is a well behaved function, and the methods based on simple trapezoidal quadrature are efficient enough to provide good numerical approximations of the true values. in general, the pre-selected values of T and N should be checked and/or properly corrected. a simple diagnostic check is based on evaluating |cf_Y(T)|. for large T this should be a small value (smaller than the accepted truncation error). this check also allows one to control the level of N, given the already fixed and sufficiently large interval (A, B). we note that the presented quadrature method requires only one evaluation of the characteristic function cf_Y(t_j) at t_j, j = 1, ..., N, for any y \in (A, B) in pdf_Y(y) and cdf_Y(y), respectively. moreover, the computation is further simplified if Y is a continuous random variable with a symmetric zero-mean distribution, i.e. with purely real cf:

\mathrm{pdf}_Y(y) \approx \frac{\delta_t}{\pi} \left( \frac{1}{2} + \sum_{j=1}^{N-1} \cos(t_j y)\,\mathrm{cf}_Y(t_j) + \frac{1}{2}\cos(t_N y)\,\mathrm{cf}_Y(t_N) \right) ,  (14)

\mathrm{cdf}_Y(y) \approx \frac{1}{2} + \frac{\delta_t}{\pi} \left( \frac{y}{2} + \sum_{j=1}^{N-1} \sin(t_j y)\,\frac{\mathrm{cf}_Y(t_j)}{t_j} + \frac{1}{2}\sin(t_N y)\,\frac{\mathrm{cf}_Y(t_N)}{t_N} \right) .  (15)

finally, the quantile function (qf) can be evaluated by using the iterative newton-raphson scheme, based on repeated evaluations of the pdf/cdf, see (8)-(9) and/or (14)-(15). in particular, for a fixed probability level p \in (0,1), the p-quantile of the (continuous) distribution of Y, say q = qf_Y(p), is given as a solution (fixed point) of the following iterative scheme,

\mathrm{qf}_Y^{k+1}(p) = \mathrm{qf}_Y^{k}(p) - \frac{\mathrm{cdf}_Y\!\left(\mathrm{qf}_Y^{k}(p)\right) - p}{\mathrm{pdf}_Y\!\left(\mathrm{qf}_Y^{k}(p)\right)} ,  (16)

where k = 0, 1, ..., and the starting value qf_Y^0(p) is set as, e.g., qf_Y^0(p) = mean(Y), defined by (11).
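a minimal matlab sketch of the numerical moments (11)-(13) (our illustration, assuming the standard normal cf and h = 10^-4) reads:

%% illustration: numerical mean and std from the cf, see (11)-(13)
cf = @(t) exp(-t.^2/2);         % example: cf of n(0,1)
h  = 1e-4;
meanY = real((cf(-2*h) - 8*cf(-h) + 8*cf(h) - cf(2*h))/(12i*h));
m2Y = real(-(cf(-4*h) - 16*cf(-3*h) + 64*cf(-2*h) + 16*cf(-h) - 130 ...
      + 16*cf(h) + 64*cf(2*h) - 16*cf(3*h) + cf(4*h))/(144*h^2));
stdY = sqrt(m2Y - meanY^2)      % expect meanY ~ 0 and stdY ~ 1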
%% example (matlab algorithm cf2distgp)
%
% pdf and cdf of a linear combination of rvs:
% y = c1*x1 + c2*x2 + c3*x3 + c4*x4 + c5*x5,
% where,
% x1 ~ normal(0,1) with c1=1,
% x2 ~ student's t with 1 df and c2=1,
% x3 ~ rectangular on (-1,1) with c3=5,
% x4 ~ triangular on (-1,1) with c4=1,
% x5 ~ u-distribution on (-1,1) with c5=10

% characteristic functions of the input quantities (see table 1)
cfn   = @(t) exp(-t.^2/2);                 % normal
cft   = @(t,nu) min(1,besselk(nu/2, ...    % student's t
        abs(t).*sqrt(nu),1) .* ...
        exp(-abs(t).*sqrt(nu)) .* ...
        (sqrt(nu).*abs(t)).^(nu/2) / ...
        2^(nu/2-1)/gamma(nu/2));
cfr   = @(t) min(1,sin(t)./t);             % rectangular
cftri = @(t) min(1,(2-2*cos(t))./t.^2);    % triangular
cfu   = @(t) besselj(0,t);                 % arcsine (u-distribution)

c  = [1 1 5 1 10];
nu = 1;
cfy = @(t) ...
    cfn(c(1)*t) .* ...
    cft(c(2)*t,nu) .* ...
    cfr(c(3)*t) .* ...
    cftri(c(4)*t) .* ...
    cfu(c(5)*t);

y = linspace(-50,50,201)';
[result,cdf,pdf] = cf2distgp(cfy,y);

a working version of the matlab algorithm cf2distgp for computing the pdf/cdf by numerical inversion of the characteristic function, based on the gil-pelaez inversion formulae, is presented in appendix a. for illustration, the matlab code presented above evaluates the pdf and cdf of the output variable Y, a linear combination of independent random variables with normal, student's t, rectangular, triangular and arcsine distributions, i.e. Y = X_N + X_{t_\nu} + 5 X_R + X_T + 10 X_U, by using the algorithm cf2distgp with its default settings of T and N. similarly, the pdf/cdf of the random variable Y = X_N + X_{t_\nu} + 5 X_R + X_T + 10 X_U can be evaluated by using the matlab algorithm tdist, see also figure 3:

%% example (matlab algorithm tdist)
%
% tdist at matlab central file exchange:
% http://www.mathworks.com/matlabcentral/fileexchange/4199-tdist
%
% pdf and cdf of a linear combination of rvs
% y = c1*x1 + c2*x2 + c3*x3 + c4*x4 + c5*x5
% with:
% x1 ~ normal(0,1) [we set df1=inf] with c1=1,
% x2 ~ student's t with 1 df [set df2=1], c2=1,
% x3 ~ rectangular on (-1,1) [set df3=-1], c3=5,
% x4 ~ triangular on (-1,1) [set df4=-2], c4=1,
% x5 ~ u-distribution on (-1,1) [df5=-3], c5=10

df    = [inf 1 -1 -2 -3];
coefs = [1 1 5 1 10];
[pdf,y] = tdist([],df,coefs,'pdf');
cdf     = tdist(y,df,coefs,'cdf');
figure; plot(y,pdf); grid
figure; plot(y,cdf); grid

figure 3. the probability density function (pdf) and the cumulative distribution function (cdf) of a random variable Y = \sum_{j=1}^{5} c_j X_j, with X_1 \sim N(0,1), X_2 \sim t_{\nu=1}, X_3 \sim R(-1,1), X_4 \sim T(-1,1), X_5 \sim U(-1,1), and coefficients c = (c_1, ..., c_5) = (1,1,5,1,10), evaluated by numerical inversion of its characteristic function by the matlab algorithm tdist, see also the examples.

3.2. numerical inversion of the characteristic function by using the fft algorithm

this approach for computing the pdf by numerical inversion of the characteristic function using the fft algorithm is based on the results by hürlimann in [41]. alternatively, for other applications based on using the fractional fast fourier transform (frft), see [42] and also [43]-[46]. here we shall approximate the continuous fourier transform (cft), say

F(y) = \int_{-\infty}^{\infty} e^{-\mathrm{i}2\pi u y}\, f(u)\,\mathrm{d}u ,  (17)

by a discrete fourier transform (dft). the dft can be efficiently evaluated by using the fft algorithm, which computes the same result as the dft, but much faster. for complex numbers f_0, ..., f_{N-1} the dft is defined as

F_k = \sum_{j=0}^{N-1} e^{-\mathrm{i}2\pi k j / N}\, f_j , \quad k = 0, \ldots, N-1 .  (18)

formally, here we shall use the following notation,

\mathbf{F}_N = \mathrm{FFT}(\mathbf{f}_N) ,  (19)

where \mathbf{f}_N = (f_0, ..., f_{N-1}) and \mathbf{F}_N = (F_0, ..., F_{N-1}). the relationship between the cf and the pdf is given by the (inverse) continuous fourier transform defined by (4). for a sufficiently large interval (-T, T), it is possible to approximate the pdf by (5). here we shall consider the integral approximation \int_a^b f(x)\,\mathrm{d}x \approx \frac{f(a)+f(b)}{2}(b-a), which corresponds to the trapezoidal quadrature. for more alternative approaches based on more sophisticated integration rules see, e.g., [39]. similarly as before, let (A, B) denote a sufficiently large interval where the distribution of Y is concentrated. a reasonable rule for determining (A, B) can be based, for example, on the six-sigma rule, or its modifications, using an altered multiplication coefficient (e.g., 10 instead of 6). let y_k = A + k\,\delta_y, with \delta_y = (B-A)/N, and k = 0, ..., N-1. for large N, T = \pi/\delta_y is also large, and from (5), by using t = 2\pi u, \mathrm{d}t = 2\pi\,\mathrm{d}u, and the grid spacing \mathrm{d}u = 1/(B-A), we get

\mathrm{pdf}_Y(y_k) \approx \int_{-\frac{1}{2\delta_y}}^{\frac{1}{2\delta_y}} e^{-\mathrm{i}2\pi u y_k}\,\mathrm{cf}_Y(2\pi u)\,\mathrm{d}u .  (20)
now, we shall approximate the integral (20) by using the trapezoidal quadrature rule on each of the N sub-intervals. thus,

\mathrm{pdf}_Y(y_k) \approx \frac{1}{B-A} \sum_{j=0}^{N-1} e^{-\mathrm{i}2\pi u_j y_k}\,\mathrm{cf}_Y(2\pi u_j) ,  (21)

where u_j = \frac{\frac{1}{2} + j - \frac{N}{2}}{B-A}, j = 0, ..., N-1. from that, by using e^{\mathrm{i}\pi} = -1, the expressions for u_j and y_k, and the dft defined by (18), we finally get the formal relationship

\mathbf{pdf} = \mathbf{C} \odot \mathrm{FFT}(\mathbf{D} \odot \mathbf{cf}) ,  (22)

where \odot denotes the dot product (element-wise multiplication), and
• \mathbf{pdf} = (\mathrm{pdf}_Y(y_0), ..., \mathrm{pdf}_Y(y_{N-1})),
• \mathbf{C} = (C_0, ..., C_{N-1}), with C_k = \frac{1}{B-A}(-1)^{\left(1-\frac{1}{N}\right)\left(\frac{NA}{B-A}+k\right)}, k = 0, ..., N-1,
• \mathbf{D} = (D_0, ..., D_{N-1}), with D_k = (-1)^{-2\frac{A}{B-A}k}, k = 0, ..., N-1,
• \mathbf{cf} = (\mathrm{cf}_Y(t_0), ..., \mathrm{cf}_Y(t_{N-1})), with t_k = \frac{2\pi}{B-A}\left(\frac{1}{2} + k - \frac{N}{2}\right), k = 0, ..., N-1.

further, the cdf is evaluated here by a simple cumulative sum of the evaluated pdf values, and the qf is evaluated by interpolation from the cdf. a working version of the matlab algorithm cf2distfft for computing the pdf/cdf/qf by numerical inversion of the characteristic function, based on the fft algorithm, is presented in appendix b. for illustration, the matlab code presented below evaluates the pdf/cdf of the output variable Y = 10 X_{\chi^2_1} + X_{\chi^2_{10}}, by using the algorithm cf2distfft with its default settings of (A, B) and N.

%% example (matlab algorithm cf2distfft)
%
% distribution of a linear combination of rvs
% (chi-squared rvs with 1 and 10 dfs)
% y = 10*x_{\chi^2_1} + x_{\chi^2_10}

df1 = 1;
df2 = 10;
cfchi2_1  = @(t) (1-2i*t).^(-df1/2);
cfchi2_10 = @(t) (1-2i*t).^(-df2/2);
cfy = @(t) cfchi2_1(10*t) .* cfchi2_10(t);

clear options
options.ispositivesupport = true;
result = cf2distfft(cfy,[],[],options);

% plot the cf of y
t = linspace(-1,1,501);
figure
plot(t,real(cfy(t)),t,imag(cfy(t))); grid
xlabel('t');
ylabel('characteristic function');
title('y = 10*x_{\chi^2_1}+x_{\chi^2_{10}}')

other specific versions of the algorithm for computing the pdf/cdf/qf of a linear combination of independent random variables with the fisher-snedecor f-distributions and the log-normal distributions, by numerical inversion of the characteristic function by using the fft algorithm, are available at the matlab central file exchange as the algorithm fdist, file id: 56262, and the algorithm logndist, file id: 56512, respectively.
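to see (22) at work, the following minimal sketch (our illustration, assuming the standard normal cf and the grid definitions above) recovers the N(0,1) density by a single fft call:

%% illustration: pdf by the fft-based inversion (22)
cf = @(t) exp(-t.^2/2);                 % cf of n(0,1)
N  = 2^10;  A = -8;  B = 8;             % six-sigma-type support
k  = (0:N-1)';  dy = (B-A)/N;
t  = 2*pi*(0.5 - N/2 + k)/(B-A);        % nodes t_k
C  = (-1).^((1-1/N)*(A/dy + k))/(B-A);  % vector c from (22)
D  = (-1).^(-2*(A/(B-A))*k);            % vector d from (22)
pdf = real(C .* fft(D .* cf(t)));       % eq. (22)
y   = A + k*dy;
% check against the exact n(0,1) density:
err = max(abs(pdf - exp(-y.^2/2)/sqrt(2*pi)))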
4. comparison of the cf approach and the monte carlo method

the main advantage of the monte carlo methods lies in their simplicity and asymptotic consistency. a disadvantage of the monte carlo methods lies in the large computational demands, typically required in order to achieve a pre-specified accuracy level. on the other hand, application of the cf approach offers principal theoretical exactness, which could, however, be influenced by unacceptable numerical errors (i.e. the truncation and the integration errors) if not properly used.

for illustration, here we consider the linear measurement model for calibration of a coaxial step attenuator, as considered in [47], a model typical for metrological applications. we consider this example in order to compare the results based on the cf approach and the monte carlo method, and to illustrate the applicability and/or advantages of the suggested methods for potential users in metrology applications. for more details about this specific example see [47], example s7, and/or [48]. the linear measurement model of the attenuation L_X of the attenuator to be calibrated is given by

L_X = \mathrm{const} + L_S + \delta L_S + \delta L_D + \delta L_M + \delta L_K + \delta L_{ib} - \delta L_{ia} + \delta L_{0b} - \delta L_{0a} ,  (23)

with the following information about the distributions of the input quantities (available from the given uncertainty budget, see s7.12 in [47] with the correction presented in [48]):
• const = 30.04 + 0.003 = 30.043,
• L_S \sim 0.0090 \times N(0,1),
• \delta L_S \sim (0.0025/\sqrt{1/3}) \times R(-1,1),
• \delta L_D \sim (0.0011/\sqrt{1/2}) \times U(-1,1),
• \delta L_M \sim (0.0200/\sqrt{1/2}) \times U(-1,1),
• \delta L_K \sim (0.0017/\sqrt{1/2}) \times U(-1,1),
• \delta L_{ib} \sim (0.0003/\sqrt{1/3}) \times R(-1,1),
• \delta L_{ia} \sim (0.0003/\sqrt{1/3}) \times R(-1,1),
• \delta L_{0b} \sim 0.0020 \times N(0,1),
• \delta L_{0a} \sim 0.0020 \times N(0,1).

the algorithm cf2distgp, with default option parameters based on the six-sigma rule and pre-selected N = 2^{10} = 1024, evaluated the support interval (A, B) = (-0.1341, 0.1341), and hence T = 23989, with \delta_t = T/N = 23.4. then, the 97.5 % quantile of Y = L_X - const was calculated as q = 0.03900448275179. thus, the calculated 95 % state-of-knowledge interval for the attenuation L_X is given as 30.043 ± 0.039. the basic diagnostic check ensures the highest precision of the presented calculations in double precision arithmetic, as |cf_Y(T)| = 0 (i.e. the value equals the machine zero, here defined as 4.94 × 10^{-324}). the used computer time was t = 3.9 × 10^{-4} s. further, we also present the estimated values of the required 97.5 % quantile computed by the monte carlo method for sample sizes M = 10^4, 10^5, 10^6, 10^7 and 10^8. the estimated values of the 97.5 % quantile q, and the computer time t (in seconds), used for the sample size M:
• M = 10^4, q = 0.039343812021804, t = 0.07,
• M = 10^5, q = 0.038985244853524, t = 0.08,
• M = 10^6, q = 0.038952503603421, t = 0.75,
• M = 10^7, q = 0.038998144396413, t = 7.49,
• M = 10^8, q = 0.039003626106614, t = 153.2.
this clearly illustrates the advantages and computational efficiency of the proposed approach over the standard monte carlo methods, especially if high precision of the computed (estimated) quantiles is required.
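as a quick plausibility check of these numbers (our addition, not part of [47]), the combined standard uncertainty obtained by simple quadrature of the standard uncertainties in the budget above is u_c ≈ 0.0223, so a gum-type gaussian approximation would give a 95 % half-width of about 1.96 u_c ≈ 0.044; the cf-based half-width 0.039 is smaller, as expected, because the dominant arcsine component has bounded support.

%% plausibility check: quadrature of the standard uncertainties
u  = [0.0090 0.0025 0.0011 0.0200 0.0017 0.0003 0.0003 0.0020 0.0020];
uc = sqrt(sum(u.^2))            % ~0.0223
qn = 1.96*uc                    % ~0.0438, vs. the cf-based 0.0390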
5. conclusions

we suggest considering numerical methods for deriving the state-of-knowledge pdf/cdf of the output quantity in linear measurement models from its characteristic function. such an approach can be used to form the probability distribution for the output quantity of a measurement model of additive, linear or generalized linear form, and can be considered an alternative tool to the uncertainty evaluation based on the monte carlo methods. here we have presented two simple but efficient approaches for numerical inversion of the characteristic function, which are especially suitable for metrological applications. the suggested numerical approaches are based on the gil-pelaez inversion formulae and on the approximation by the discrete fourier transform (dft) and the fft algorithm for computing the pdf/cdf of (univariate) continuous random variables. as we have explained and illustrated, the suggested cf approach should be considered as an alternative to the standard approach based on the monte carlo methods (as considered, e.g., in supplement 1, propagation of distributions using a monte carlo method [2]) in specific situations, i.e. for evaluating the state-of-knowledge probability distributions of the output quantity in a linear measurement model, especially if the highest precision of the reported distribution quantiles is required. however, the numerical errors of such results should be properly controlled. for illustration purposes, we present working versions of the matlab codes of the suggested algorithms (cf2distgp and cf2distfft), as well as some simple examples illustrating the applicability of the suggested methods. although the methods for inverting characteristic functions to obtain the probability distribution functions have been studied for a long time, especially in the statistical literature, and the possible applications are much more general than those motivated by metrology, surprisingly, such methods are still not widespread in engineering applications and are only rarely used among statisticians. one possible reason might be that the characteristic functions and the algorithms for numerical inversion are not directly available in standard (statistical) software packages, like, e.g., r or matlab. systematic development of the methods, algorithms and a software toolbox (developed for r and/or matlab) for computing, combining and inverting characteristic functions is our highly desirable goal for the near future.

acknowledgement

the work was supported by the slovak research and development agency, project apvv-15-0295, and by the scientific grant agency vega of the ministry of education of the slovak republic and the slovak academy of sciences, by the projects vega 2/0047/15 and vega 2/0011/16.

%% comparison / cf approach & monte carlo method
%
% cf approach:
% distribution and quantile of the output
% quantity in the linear measurement model
% y = l_s + δl_s + δl_d + δl_m + δl_k +
%     δl_ib - δl_ia + δl_0b - δl_0a

const = 30.043;
prob  = 0.975;
c = [0.009 0.0025/sqrt(1/3) ...
     0.0011/sqrt(1/2) 0.0200/sqrt(1/2) ...
     0.0017/sqrt(1/2) 0.0003/sqrt(1/3) ...
     -0.0003/sqrt(1/3) 0.0020 -0.0020];

cfn = @(t) exp(-t.^2/2);        % normal
cfr = @(t) min(1,sin(t)./t);    % rectangular
cfu = @(t) besselj(0,t);        % arcsine (u-distribution)
cfy = @(t) cfn(c(1)*t) .* cfr(c(2)*t) .* ...
      cfu(c(3)*t) .* cfu(c(4)*t) .* ...
      cfu(c(5)*t) .* cfr(c(6)*t) .* ...
      cfr(c(7)*t) .* cfn(c(8)*t) .* ...
      cfn(c(9)*t);

[result,cdf,pdf,q_cf] = cf2distgp(cfy,[],prob)

% 95% state-of-knowledge interval / cf approach:
i_cf = const + [-q_cf,q_cf]

% monte carlo method:
M = [10^4 10^5 10^6 10^7 10^8];
q_mc = zeros(5,1);
time = zeros(5,1);
rng('default');                    % reset the random number generator
for m = 1:5
    tic;
    n = randn(M(m),3);             % normal rvs
    r = 2*rand(M(m),3)-1;          % rectangular rvs
    u = 2*betarnd(0.5,0.5,M(m),3)-1;  % arcsine rvs
    y = c(1)*n(:,1) + c(2)*r(:,1) ...
      + c(3)*u(:,1) + c(4)*u(:,2) + c(5)*u(:,3) ...
      + c(6)*r(:,2) + c(7)*r(:,3) ...
      + c(8)*n(:,2) + c(9)*n(:,3);
    y = sort(y);
    q_mc(m) = y(ceil(M(m)*prob));
    time(m) = toc;
end

% 95% state-of-knowledge interval / mc method:
q = q_mc(5);
i_mc = const + [-q,q]

appendix a. matlab algorithm cf2distgp for numerical inversion of the characteristic function based on the gil-pelaez inversion formulae

function [result,cdf,pdf,qf] = cf2distgp(cf,x,prob,options)
% calculates the cdf/pdf/qf from the characteristic
% function by the gil-pelaez inversion formulae,
% integrated by a simple trapezoidal quadrature.
%
% syntax:
%  [result,cdf,pdf,qf] = cf2distgp(cf,x,prob,options)

%% check the input parameters
tic;
narginchk(1,4);
if nargin < 4, options = []; end
if nargin < 3, prob = []; end
if nargin < 2, x = []; end

if ~isfield(options,'n')
    options.n = 2^10;
end
if ~isfield(options,'t')
    options.t = [];
end
if ~isfield(options,'sixsigmarule')
    options.sixsigmarule = 6;
end
if ~isfield(options,'xmean')
    options.xmean = [];
end
if ~isfield(options,'xstd')
    options.xstd = [];
end
if ~isfield(options,'toldiff')
    options.toldiff = 1e-4;
end
if ~isfield(options,'qf0')
    % first quantile estimate: the mean, by central differences
    options.qf0 = (cf(1e-4)-cf(-1e-4))/(2e-4*1i);
end
if ~isfield(options,'crit')
    options.crit = 1e-13;
end
if ~isfield(options,'maxiter')
    options.maxiter = 1000;
end
if ~isfield(options,'isplot')
    options.isplot = true;
end
if ~isfield(options,'dist')
    options.dist = [];
end

if ~isempty(options.dist)
    % reuse pre-computed cf values and grid
    xmean = options.dist.xmean;
    cft   = options.dist.cft;
    dt    = options.dist.dt;
    n     = length(cft);
    t     = (1:n)' * dt;
    range = 2*pi / dt;
    xmin  = xmean - range/2;
    xmax  = xmean + range/2;
    xstd  = [];
else
    n = options.n;
    t = options.t;
    sixsigmarule = options.sixsigmarule;
    xmean = options.xmean;
    xstd  = options.xstd;
    h     = options.toldiff;
    cft   = cf(h*(1:4));
    if isempty(xmean)
        % numerical mean by the five-point rule (11)
        xmean = real((-cft(2) + 8*cft(1) - 8*conj(cft(1)) ...
                + conj(cft(2)))/(1i*12*h));
    end
    if isempty(xstd)
        % numerical second moment by the rule (13)
        xm2 = real(-(conj(cft(4)) - 16*conj(cft(3)) ...
              + 64*conj(cft(2)) + 16*conj(cft(1)) - 130 ...
              + 16*cft(1) + 64*cft(2) - 16*cft(3) ...
              + cft(4))/(144*h^2));
        xstd = sqrt(xm2 - xmean^2);
    end
    if ~isempty(t)
        dt = t / n;
        t  = (1:n)' * dt;
        cft = cf(t);
        cft(n) = cft(n)/2;
        range = 2*pi / dt;
        xmin = xmean - range/2;
        xmax = xmean + range/2;
        xstd = [];
    else
        xmin  = xmean - sixsigmarule * xstd;
        xmax  = xmean + sixsigmarule * xstd;
        range = xmax - xmin;
        dt    = 2*pi / range;
        t     = (1:n)' * dt;
        cft   = cf(t);
        cft(n) = cft(n)/2;
    end
    options.dist.xmean = xmean;
    options.dist.cft   = cft;
    options.dist.dt    = dt;
end

%% algorithm
if isempty(x)
    x = linspace(xmin,xmax,101)';
end
if any(x < xmin) || any(x > xmax)
    warning(['x out-of-range (the support): ', ...
        '[xmin, xmax] = [',num2str(xmin), ...
        ', ',num2str(xmax),'] !']);
end
[n,m] = size(x);
x = x(:);
e = exp(-1i*x*t');

% cdf, see (9)
cdf = (xmean - x)/2 + imag(e * (cft ./ t));
cdf = 0.5 - (cdf * dt) / pi;
cdf = reshape(max(0,min(1,cdf)),n,m);

% pdf, see (8)
pdf = 0.5 + real(e * cft);
pdf = (pdf * dt) / pi;
pdf = reshape(max(0,pdf),n,m);
% qf, by the newton-raphson iteration (16)
if ~isempty(prob)
    isplot = options.isplot;
    options.isplot = false;
    [n,m] = size(prob);
    prob = prob(:);
    maxiter = options.maxiter;
    crit = options.crit;
    qf = options.qf0;
    criterion = true;
    count = 0;
    [res,cdfq,pdfq] = cf2distgp([],qf,[],options);
    options = res.options;
    while criterion
        count = count + 1;
        correction = (cdfq - prob) ./ pdfq;
        qf = qf - correction;
        [~,cdfq,pdfq] = cf2distgp([],qf,[],options);
        criterion = any(abs(correction) ...
            > crit * abs(qf)) ...
            && max(abs(correction)) > crit ...
            && count < maxiter;
    end
    qf = reshape(qf,n,m);
    prob = reshape(prob,n,m);
    options.isplot = isplot;
else
    qf = [];
    count = [];
    correction = [];
end

%% result
result.cdf = cdf;
result.pdf = pdf;
result.qf = qf;
result.x = x;
result.xmean = xmean;
result.xstd = xstd;
result.xmin = xmin;
result.xmax = xmax;
result.prob = prob;
result.sixsigmarule = options.sixsigmarule;
result.t = t;
result.T = t(end);
result.dt = dt;
result.cf = cf;
result.n = n;
result.count = count;
result.correction = correction;
result.options = options;
result.tictoc = toc;

%% plot the pdf / cdf
if length(x)==1, options.isplot = false; end
if options.isplot
    figure
    plot(x,pdf,'.-')
    grid
    title('pdf specified by the cf')
    xlabel('x')
    ylabel('pdf')
    %
    figure
    plot(x,cdf,'.-')
    grid
    title('cdf specified by the cf')
    xlabel('x')
    ylabel('cdf')
end
end

appendix b. matlab algorithm cf2distfft for numerical inversion of the characteristic function based on the fft algorithm

function [result,cdf,pdf,qf] = cf2distfft(cffun,y,prob,options)
%cf2distfft calculates the approximate values
% of the cdf, pdf, and qf by numerical inversion of
% the characteristic function cf by using the
% fft algorithm.
%
% syntax:
%  [result,cdf,pdf,qf] = cf2distfft(cffun,y,prob,options)

% viktor witkovsky (witkovsky@savba.sk)
% ver.: 24-apr-2016 17:12:15

%% check the input parameters
if nargin < 1
    error('too few inputs');
end
if nargin < 4, options = []; end
if nargin < 3, prob = []; end
if nargin < 2, y = []; end

if ~isfield(options,'n')
    options.n = 2^10;
end
n = options.n;
if ~isfield(options,'sixsigmarule')
    options.sixsigmarule = 6;
end
if ~isfield(options,'miny')
    options.miny = [];
end
if ~isfield(options,'maxy')
    options.maxy = [];
end
if ~isfield(options,'isforcedsymmetric')
    options.isforcedsymmetric = [];
end
if isempty(options.isforcedsymmetric)
    isforcedsymmetric = false;
else
    isforcedsymmetric = options.isforcedsymmetric;
end
if ~isfield(options,'iszerosymmetric')
    options.iszerosymmetric = [];
end
if isempty(options.iszerosymmetric)
    if isforcedsymmetric
        iszerosymmetric = true;
    else
        iszerosymmetric = false;
    end
else
    iszerosymmetric = options.iszerosymmetric;
end
if ~isfield(options,'ispositivesupport')
    options.ispositivesupport = [];
end
if isempty(options.ispositivesupport)
    ispositivesupport = false;
else
    ispositivesupport = options.ispositivesupport;
end
if ~isfield(options,'isplot')
    options.isplot = true;
end
if ~isfield(options,'toldiff')
    options.toldiff = 1e-4;
end

%% moments and support
h  = options.toldiff;
t  = h*(1:4);
cf = cffun(t);
% numerical mean and second moment, see (11) and (13)
meany = real((-cf(2) + 8*cf(1) ...
        - 8*conj(cf(1)) + conj(cf(2)))/(1i*12*h));
m2 = real(-(conj(cf(4)) - 16*conj(cf(3)) + ...
     64*conj(cf(2)) + 16*conj(cf(1)) - 130 + ...
     16*cf(1) + 64*cf(2) - 16*cf(3) + ...
     cf(4))/(144*h^2));
stdy = sqrt(m2 - meany^2);
a = meany - options.sixsigmarule * stdy;
b = meany + options.sixsigmarule * stdy;
if ispositivesupport
    if a <= 0 && ...
            isempty(options.isforcedsymmetric)
        a = max(0,a);
        isforcedsymmetric = true;
    elseif a > 0 && ...
            isempty(options.isforcedsymmetric)
        isforcedsymmetric = false;
    end
end

% use the specified values (if available)
if ~isempty(options.miny), a = options.miny; end
if ~isempty(options.maxy), b = options.maxy; end

% symmetric support [-b,b] ?
if isforcedsymmetric || iszerosymmetric
    b = options.sixsigmarule * ...
        sqrt(stdy^2 + meany^2);
    if ~isempty(options.maxy)
        b = options.maxy;
    end
    a = -b;
end

%% characteristic function cf
k  = (0:(n-1))';
t  = 2*pi * (0.5 - n/2 + k) / (b-a);
cf = cffun(t(n/2+1:end));
cf = [conj(cf(end:-1:1)); cf];
% cf of the 'symmetrized' distribution
if isforcedsymmetric
    cf = real(cf);
end

%% pdf by the fft algorithm, see (22)
dy = (b-a)/n;
c  = (-1).^((1-1/n)*(a/dy+k))/(b-a);
d  = (-1).^(-2*(a/(b-a))*k);
pdffft = real(c.*fft(d.*cf));
cdffft = cumsum(pdffft*dy);
yfft = a + k * dy;
if options.iszerosymmetric
    cdffft = cdffft + 0.5 ...
        - (cdffft(n/2+1)+cdffft(n/2))/2;
end

% special treatment for the symmetrized distribution
if isforcedsymmetric
    pdffft = max(0,2*pdffft(n/2+1:end));
    cdffft = cdffft + 0.5 ...
        - (cdffft(n/2+1)+cdffft(n/2))/2;
    cdffft = min(1, ...
        max(0,2*cdffft(n/2+1:end)-1));
    yfft = yfft(n/2+1:end);
else
    pdffft = max(0,pdffft);
    cdffft = min(1,max(0,cdffft));
end
ymin = min(yfft);
ymax = max(yfft);

%% interpolate the quantile function: qf(prob)
if isempty(prob)
    prob = [0.9,0.95,0.975,0.99,0.995,0.999];
end
[cdfu,id] = unique(cdffft);
yyu = yfft(id);
szp = size(prob);
qffun = @(prob) interp1([-eps;cdfu], ...
    [-eps;yyu+dy/2],prob);
qf = reshape(qffun(prob),szp);

% interpolate the cdf/qf/pdf
if isempty(y)
    y = linspace(a,a+(n-1)*dy,100);
end
szy = size(y);
cdffun = @(x) interp1([-eps;yyu+dy/2], ...
    [-eps;cdfu],x(:));
cdf = reshape(cdffun(y),szy);
try
    pdffun = @(x) interp1(yfft,pdffft,x(:));
    pdf = reshape(pdffun(y),szy);
catch
    warning('unable to interpolate')
    pdf = nan*y;
    pdffun = [];
end

%% result
result.y = y;
result.cdf = cdf;
result.pdf = pdf;
result.prob = prob;
result.quant = qf;
result.cdffun = cdffun;
result.pdffun = pdffun;
result.qffun = qffun;
result.ymin = ymin;
result.ymax = ymax;
result.cdfmin = min(cdffft);
result.cdfmax = max(cdffft);
result.n = n;
result.details.yfft = yfft;
result.details.pdffft = pdffft;
result.details.cdffft = cdffft;
result.details.meany = meany;
result.details.stdy = stdy;
result.details.a = a;
result.details.b = b;
result.details.dy = dy;
result.details.dt = 2*pi/(b-a);
result.details.t = t;
result.details.cf = cf;
result.details.cffun = cffun;
result.options = options;

%% plot the pdf/cdf, if required
if options.isplot
    figure
    plot(yfft,pdffft,'-','linewidth',2)
    grid
    title('pdf specified by the cf')
    xlabel('y')
    ylabel('pdf')
    figure
    plot(yfft,cdffft,'-','linewidth',2)
    grid
    title('cdf specified by the cf')
    xlabel('y')
    ylabel('cdf')
end
end

references
[1] jcgm 100:2008, evaluation of measurement data - guide to the expression of uncertainty in measurement (gum 1995 with minor corrections), jcgm joint committee for guides in metrology (iso, bipm, iec, ifcc, ilac, iupac, iupap and oiml, 2008).
[2] jcgm 101:2008, evaluation of measurement data - supplement 1 to the guide to the expression of uncertainty in measurement - propagation of distributions using a monte carlo method, jcgm joint committee for guides in metrology (iso, bipm, iec, ifcc, ilac, iupac, iupap and oiml, 2008).
[3] l.j. gleser, assessing uncertainty in measurement, statistical science, 277 (1998).
[4] a. possolo and b. toman, assessment of measurement uncertainty via observation equations, metrologia 44, 464 (2007).
[5] a. forbes and j. sousa, the gum, bayesian inference and the observation and measurement equations, measurement 44, 1422 (2011).
[6] c. elster, bayesian uncertainty analysis compared with the application of the gum and its supplements, metrologia 51, s159 (2014).
[7] r. willink, what can we learn from the gum of 1995?, measurement (2016).
[8] r.n. kacker, r. kessel and j.f. lawrence, removing divergence of jcgm documents from the gum (1993) and repairing other defects, measurement 88, 194 (2016).
[9] m.g. cox and p.m. harris, validating the applicability of the gum procedure, metrologia 51, s167-s175 (2014).
[10] p.m. harris and m.g. cox, on a monte carlo method for measurement uncertainty evaluation and its implementation, metrologia 51, s176-s182 (2014).
[11] r. willink, measurement uncertainty and probability (cambridge university press, 2013).
[12] r. willink, an improved procedure for combining type a and type b components of measurement uncertainty, international journal of metrology and quality engineering 4, 55-62 (2013).
[13] m. korzen and s. jaroszewicz, pacal: a python package for arithmetic computations with random variables, journal of statistical software 57, 1 (2014).
[14] r.c. williamson and t. downs, probabilistic arithmetic. i. numerical methods for calculating convolutions and dependency bounds, international journal of approximate reasoning 4, 89 (1990).
[15] m. sheppard, apply a function to a set of probability distributions, matlab central file exchange, file id: 34108 (dec 2011, updated apr 2012).
[16] t.a. driscoll, n. hale and l.n. trefethen, chebfun guide, pafnuty publications (2014).
[17] a. asheim and d. huybrechs, complex gaussian quadrature for oscillatory integral transforms, ima journal of numerical analysis, drs060 (2013).
[18] d. levin, fast integration of rapidly oscillatory functions, journal of computational and applied mathematics 67, 95 (1996).
[19] g. milovanović, numerical calculation of integrals involving oscillatory and singular kernels and some applications of quadratures, computers & mathematics with applications 36, 19 (1998).
[20] a. sidi, the numerical evaluation of very oscillatory infinite integrals by extrapolation, mathematics of computation 38, 517 (1982).
[21] a. sidi, a user-friendly extrapolation method for oscillatory infinite integrals, mathematics of computation 51, 249 (1988).
[22] a. sidi, a user-friendly extrapolation method for computing infinite range integrals of products of oscillatory functions, ima journal of numerical analysis 32, 602 (2012).
[23] j. abate and w. whitt, the fourier-series method for inverting transforms of probability distributions, queueing systems 10, 5 (1992).
[24] n.g. shephard, from characteristic function to distribution function: a simple framework for the theory, econometric theory 7, 519 (1991).
[25] l.a. waller, b.w. turnbull and j.m. hardin, obtaining distribution functions by numerical inversion of characteristic functions with applications, the american statistician 49, 346 (1995).
[26] r. zieliński, high-accuracy evaluation of the cumulative distribution function of α-stable symmetric distributions, journal of mathematical sciences 105, 2630 (2001).
[27] r.l. strawderman, computing tail probabilities by numerical fourier inversion: the absolutely continuous case, statistica sinica, 175 (2004).
[28] m.j. korczynski, m. cox and p. harris, convolution and uncertainty evaluation, series on advances in mathematics for applied sciences 72, 188 (2006).
[29] e. lukacs, characteristic functions, griffin, london (1970).
[30] n.l. johnson, s. kotz and n. balakrishnan, continuous univariate distributions, vol. 1, wiley series in probability and mathematical statistics: applied probability and statistics (1994).
[31] n.l. johnson, s. kotz and n. balakrishnan, continuous univariate distributions, vol. 2, wiley series in probability and mathematical statistics: applied probability and statistics (1995).
[32] h. cohen, f.r. villegas and d. zagier, convergence acceleration of alternating series, experimental mathematics 9, 3 (2000).
[33] j. gil-pelaez, note on the inversion theorem, biometrika 38, 481 (1951).
[34] j. imhof, computing the distribution of quadratic forms in normal variables, biometrika 48, 419 (1961).
[35] r. davies, algorithm as 155: the distribution of a linear combination of χ² random variables, applied statistics 29, 232 (1980).
[36] v. witkovský, on the exact computation of the density and of the quantiles of linear combinations of t and f random variables, journal of statistical planning and inference 94, 1 (2001).
[37] v. witkovský, matlab algorithm tdist: the distribution of a linear combination of student's t random variables, in compstat 2004 symposium, ed. j. antoch (physica verlag/springer, heidelberg, germany, 2004), pp. 1995-2002.
[38] v. witkovský, computing the distribution of a linear combination of inverted gamma variables, kybernetika 37, 79 (2001).
[39] v. witkovský, g. wimmer and t. duby, logarithmic lambert w × f random variables for the family of chi-squared distributions and their applications, statistics & probability letters 96, 223 (2015).
[40] v. witkovský, g. wimmer and s. ďuriš, on statistical methods for common mean and reference confidence intervals in interlaboratory comparisons for temperature, international journal of thermophysics 36, 2150 (2015).
[41] w. hürlimann, improved fft approximations of probability functions based on modified quadrature rules, international mathematical forum 8, 829 (2013).
[42] d.h. bailey and p.n. swarztrauber, the fractional fourier transform and applications, siam review 33, 389 (1991).
[43] p. carr and d. madan, option valuation using the fast fourier transform, journal of computational finance 2, 61 (1999).
[44] k. chourdakis, option pricing using the fractional fft, journal of computational finance 8, 1 (2004).
[45] m. held, cfh toolbox (characteristic function option pricing), matlab central file exchange, file id: 46489 (may 2014, updated aug 2014).
[46] y.s. kim, s. rachev, m.l. bianchi and f.j. fabozzi, computing var and avar in infinitely divisible distributions, probability and mathematical statistics 30, 223 (2010).
[47] ea-4/02 m: 2013, evaluation of the uncertainty of measurement in calibration (2013).
[48] a. possolo, statistical models and computation to evaluate measurement uncertainty, metrologia 51, s228 (2014).
acta imeko issn: 2221-870x april 2016, volume 5, number 1, 69-80

nmisa, kebs, bksv tri-lateral vibration comparison results

anderson maina 1, ian veldman 2, henning ploug 3
1 kenya bureau of standards, nairobi, kenya
2 national metrology institute of south africa, csir campus, pretoria, south africa
3 brüel & kjær calibration laboratory, skodsborgvej 307, dk-2850 nærum, denmark

abstract: national metrology institutes (nmis) and calibration laboratories perform inter-laboratory comparisons (ilcs) as part of their processes to validate their measurement capabilities (mcs). in the area of vibration (quantity: acceleration), the kenya bureau of standards (kebs) piloted an ilc, inclusive of two additional participating laboratories, for this purpose. this technical review documents the calibration results and findings of the ilc.

section: research paper
keywords: vibration comparison; degrees of equivalence; comparison reference value
citation: anderson maina, ian veldman, henning ploug, nmisa, kebs, bksv tri-lateral vibration comparison results, acta imeko, vol. 5, no. 1, article 13, april 2016, identifier: imeko-acta-05 (2016)-01-13
editor: paolo carbone, university of perugia, italy
received december 14, 2015; in final form december 14, 2015; published april 2016
copyright: © 2016 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: ian veldman, e-mail: csveldman@nmisa.org

1. introduction

this report presents the results of a comparison in the area of vibration (quantity of acceleration). it was agreed in the technical protocol [1] that the comparison reference values (crvs) be obtained from the nmisa primary vibration calibration system [2] and that the degrees of equivalence between the values of the participating laboratories and the crvs be determined. the technical protocol [1] of april 2012 specifies in detail the aim and the task of the comparison, the conditions of measurement, the transfer standards used, the measurement instructions, the time schedule and other items. a brief overview of the protocol is given in sections 4 to 6. section 2 lists the participants, with the task of the inter-laboratory comparison (ilc) described in section 3. section 7 deals with the measurement results, the artefact stability analysis and the degrees of equivalence. in section 8, possible correlation between the comparison reference value (crv) and the nmisa results is investigated.

2. participants

three laboratories, namely the kenya bureau of standards (kebs), the national metrology institute of south africa (nmisa), and the brüel & kjær calibration laboratory (bksv), participated in the comparison, as shown in table 1.

3. task and purpose of the comparison

the aim of the comparison was to compare measurements of the sensitivity of accelerometers as measured using secondary (back-to-back) means in accordance with iso 16063-21, "vibration calibration by comparison to a reference transducer" [3]. during the circulation period the three laboratories calibrated two accelerometers (see section 5) as the transfer standards. the laboratories were tasked to measure the magnitude of the sensitivity of the accelerometers at different frequencies and

table 1. participating laboratories and calibration periods.
participating laboratory                      acronym  country       calibration period
brüel & kjær                                  bksv     denmark       6 to 24 aug 2012
kenya bureau of standards                     kebs     kenya         8 to 25 jan 2013
national metrology institute of south africa  nmisa    south africa  8 to 26 apr 2013

acceleration amplitudes as specified in section 4 and clause 3 of [1]. the reference surface was defined as the mounting surface of the accelerometer. for the double-ended accelerometer, the bottom mounting surface was defined as the reference surface. this accelerometer was viewed as a single-ended accelerometer for the purpose of this comparison only. dependent on the participating laboratory's measurement capability (mc), the sensitivity reported excluded effects from the applicable conditioning amplifier/power supply unit (psu) used. in all instances, the participating laboratory provided the amplifier to be used. if the sensitivity reported by the laboratory included effects of the amplifier, these effects were taken into account in the reported uncertainty of measurement (uom). no amplifier/psu accompanied the circulation of the transfer standards.

4. conditions of measurement

the participating laboratories fully observed the conditions stated in the technical protocol, i.e.:
frequencies in hz: 10, 12.5, 16, 20, 25, 31.5, 40, 63, 80, 100, 125, 160, 200, 250, 315, 400, 500, 630, 800, 1000, 1250, 1500, 1600, 2000, 2500, 3000, 3150, 3500, 4000, 4500, 5000, 5500, 6000, 6300, 6500, 7000, 7500, 8000, 8500, 9000, 9500, 10000.
amplitudes: the range of acceleration amplitudes was between 1 m/s² and 100 m/s²; 1 m/s² to 200 m/s² was admissible (considering the displacement and acceleration limitations of the vibration exciters).
ambient temperature and accelerometer temperature during the calibration: (23 ± 3) °c (actual values were stated within tolerances of ± 0.5 °c).
relative humidity: max. 75 % rh.
mounting torque of the accelerometer: (2 ± 0.1) n·m.

5. transfer standards

during the preparatory stage, nmisa investigated the characteristics (long-term stability, linearity, etc.) of the reference standard accelerometers (property of nmisa) to be used as transfer standards. the following accelerometers were selected:
accelerometer 1: brüel & kjær 8305 s, serial number 1033874, with a nominal sensitivity of 0.1 pc/(m/s²).
accelerometer 2: pcb 301m15, serial number 1032, with a nominal sensitivity of 10 mv/(m/s²).
stability measurements for both accelerometers were made at the beginning, middle and end of the circulation period using the nmisa primary calibration system [2]. the results of these stability measurements are given in section 7.

6. circulation type and transportation

each participating laboratory was accorded three weeks for measurement. at the beginning, middle and end of the circulation, the transfer standards were measured at the nmisa laboratory using primary means [2] in order to determine reference values and to monitor the stability of the transducer sets.
the transfer standards were transported in a metal case via courier or by a representative of each participating laboratory in the following order: nmisa (stability & crv) → bksv (measurements) → nmisa (stability & crv) → kebs (measurements) → nmisa (stability & crv) → nmisa (measurements).

7. results of the measurements
7.1. monitoring of stability
the two transfer standards were monitored for stability at the beginning, middle and end of the circulation period using primary means [2]. the pilot laboratory used calculated en values as a stability evaluation measure. the method used for the evaluation of the measurement results was to calculate the deviation en, normalised with respect to the uom, as follows:

en = s / uc , (1)

where s is the standard deviation of the sensitivity values over the comparison period and uc is the expanded uncertainty of measurement of the nmisa. values of |en| ≤ 1 were considered acceptable. stability results for the two sensors, brüel & kjær 8305 s and pcb 301m15, are given in table 2 and table 3, respectively, and illustrated in figure 1 and figure 2.

figure 1. results of monitoring of stability during the circulation period for the brüel & kjær 8305 s sensor.
figure 2. results of monitoring of stability during the circulation period for the pcb 301m15 sensor.

the values reported for the brüel & kjær 8305 s sensor showed good stability over the circulation period. the values reported for the pcb 301m15 sensor suggested a change in sensitivity with time of up to 1 % from the beginning to the end of the circulation period. the effect of this change was compensated for by an increase in the uncertainty of the crv for the pcb accelerometer. no correction was applied to the data reported by the participants.
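for concreteness, a minimal numerical sketch of the stability check of equation (1); the use of python and the function name are illustrative, and the example reuses the 10 hz row of table 2 below.

```python
from statistics import stdev

def en_stability(sens_pc, uc_fc):
    """en = s/uc: s is the standard deviation of the three monitoring
    calibrations (converted from pC to fC), uc the expanded uncertainty
    of the nmisa primary calibration in fC/(m/s^2)."""
    s_fc = stdev(sens_pc) * 1e3   # 1 pC/(m/s^2) = 1000 fC/(m/s^2)
    return s_fc / uc_fc

# 10 Hz row of table 2: sensitivities at the three monitoring points
en = en_stability([0.1243, 0.1244, 0.1242], uc_fc=0.37)
print(f"en = {en:.2f}, acceptable: {abs(en) <= 1}")  # en = 0.27, acceptable: True
```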
7.2. comparison reference value
the crvs were obtained from the mean of measurements carried out at the beginning, middle and end of the circulation period by the nmisa primary vibration calibration system. the chi-square goodness-of-fit test was used to check the consistency of the crv (xref) against the results of the participating laboratories (xi) as follows:

χ²obs = (x1 − xref)²/u²(x1) + ⋯ + (xn − xref)²/u²(xn) . (2)

the consistency check fails if

pr{χ²(ν) > χ²obs} < 0.05 , (3)

with ν = n − 1 degrees of freedom (ν = 2 for the three participants, cf. table 5). the crvs for both accelerometers, as reported by the nmisa using primary accelerometer calibration methods [2], are reported in table 4. table 5 shows the results of the consistency check for the reported stability measurements over time for both accelerometers.

7.3. the participants
the participants' results were communicated to the pilot laboratory via an electronic spreadsheet circulated with the protocol [1]. the official results were communicated to the pilot laboratory in the form of calibration certificates, which had to be issued in the same format as the laboratory would use for its customers. table 6 shows the measurement results of the participants for the brüel & kjær 8305 s sensor, while the results of the participants for the pcb 301m15 sensor are reported in table 7.

table 2. results of monitoring of stability during the circulation period for the brüel & kjær 8305 s sensor.

frequency (hz)  11-may-12 sensitivity (pc/(m/s²))  11-oct-12 sensitivity (pc/(m/s²))  11-mar-13 sensitivity (pc/(m/s²))  s (fc/(m/s²))  uc (fc/(m/s²))  en
10.0  0.124 3  0.124 4  0.124 2  0.10  0.37  0.27
12.5  0.124 3  0.124 4  0.124 2  0.10  0.37  0.27
16.0  0.124 4  0.124 5  0.124 2  0.15  0.37  0.41
20.0  0.124 4  0.124 5  0.124 3  0.10  0.37  0.27
25.0  0.124 4  0.124 5  0.124 3  0.10  0.37  0.27
31.5  0.124 4  0.124 5  0.124 3  0.10  0.37  0.27
40.0  0.124 4  0.124 6  0.124 3  0.15  0.62  0.25
63.0  0.124 4  0.124 6  0.124 3  0.15  0.62  0.25
80.0  0.124 4  0.124 5  0.124 3  0.10  0.62  0.16
100  0.124 5  0.124 3  0.124 3  0.12  0.62  0.19
125  0.124 5  0.124 3  0.124 3  0.12  0.62  0.19
160  0.124 5  0.124 5  0.124 3  0.12  0.62  0.19
200  0.124 4  0.124 2  0.124 3  0.10  0.62  0.16
250  0.124 4  0.124 3  0.124 3  0.06  0.62  0.09
315  0.124 4  0.124 4  0.124 3  0.06  0.62  0.09
400  0.124 5  0.124 4  0.124 4  0.06  0.62  0.09
500  0.124 5  0.124 4  0.124 4  0.06  0.62  0.09
630  0.124 6  0.124 5  0.124 5  0.06  0.62  0.09
800  0.124 7  0.124 6  0.124 6  0.06  0.62  0.09
1 000  0.124 8  0.124 8  0.124 8  0.00  0.62  0.00
1 250  0.125 1  0.125 1  0.125 1  0.00  1.00  0.00
1 500  0.125 6  0.125 6  0.125 5  0.06  1.00  0.06
1 600  0.125 6  0.125 5  0.125 6  0.06  1.00  0.06
2 000  0.126 3  0.126 2  0.126 2  0.06  1.01  0.06
2 500  0.127 4  0.127 3  0.127 3  0.06  1.02  0.06
3 000  0.128 8  0.128 8  0.128 7  0.06  1.03  0.06
3 150  0.129 3  0.129 2  0.129 2  0.06  1.03  0.06
3 500  0.130 4  0.130 3  0.130 3  0.06  1.04  0.06
4 000  0.132 4  0.132 3  0.132 2  0.10  1.06  0.09
4 500  0.134 7  0.134 5  0.134 4  0.15  1.08  0.14
5 000  0.137 3  0.137 1  0.137 0  0.15  1.65  0.09
5 500  0.140 3  0.139 9  0.140 0  0.21  1.68  0.12
6 000  0.143 7  0.143 4  0.143 2  0.25  1.72  0.15
6 300  0.146 3  0.146 4  0.145 7  0.38  1.75  0.22
6 500  0.147 7  0.147 1  0.147 4  0.30  1.77  0.17
7 000  0.152 0  0.151 3  0.151 7  0.35  1.82  0.19
7 500  0.156 8  0.154 5  0.158 1  1.82  1.88  0.97
8 000  0.162 2  0.161 3  0.161 9  0.46  1.94  0.24
8 500  0.168 1  0.167 1  0.167 9  0.53  2.01  0.26
9 000  0.174 5  0.173 4  0.174 4  0.61  2.09  0.29
9 500  0.181 6  0.180 3  0.181 5  0.72  2.17  0.33
10 000  0.190 4  0.189 3  0.190 8  0.78  2.28  0.34
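the consistency check of equations (2) and (3) can be sketched numerically as follows, assuming scipy is available; here u holds the standard (k = 1) uncertainties of the participants' results, and all names are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def crv_consistency(x, u, x_ref, alpha=0.05):
    """Observed chi-squared value of eq. (2) and the check of eq. (3)
    for one frequency point; x and u are the participants' results and
    their standard uncertainties, x_ref the comparison reference value."""
    x, u = np.asarray(x, float), np.asarray(u, float)
    chi2_obs = np.sum((x - x_ref) ** 2 / u ** 2)
    nu = len(x) - 1                   # 2 degrees of freedom for 3 labs
    p = chi2.sf(chi2_obs, df=nu)      # Pr{chi2(nu) > chi2_obs}
    return chi2_obs, p, p >= alpha    # consistent unless p < 0.05
```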
table 3. results of monitoring of stability during the circulation period for the pcb 301m15 sensor.

frequency (hz)  23-apr-12 sensitivity (mv/(m/s²))  05-oct-12 sensitivity (mv/(m/s²))  14-mar-13 sensitivity (mv/(m/s²))  s (µv/(m/s²))  uc (µv/(m/s²))  en
10.0  10.27  10.31  10.37  50  52  0.98
12.5  10.25  10.30  10.35  50  52  0.97
16.0  10.24  10.28  10.33  45  51  0.88
20.0  10.22  10.26  10.31  45  51  0.88
25.0  10.21  10.25  10.29  40  51  0.78
31.5  10.19  10.23  10.27  40  51  0.78
40.0  10.18  10.21  10.25  35  51  0.69
63.0  10.14  10.18  10.22  40  51  0.79
80.0  10.12  10.16  10.20  40  51  0.79
100  10.10  10.12  10.18  42  51  0.82
125  10.08  10.11  10.14  30  51  0.59
160  10.06  10.09  10.13  35  50  0.70
200  10.05  10.07  10.11  31  50  0.61
250  10.03  10.06  10.11  40  50  0.80
315  10.01  10.04  10.08  35  50  0.70
400  9.99  10.02  10.06  35  50  0.70
500  9.97  9.99  10.04  36  50  0.72
630  9.95  9.96  10.02  38  50  0.76
800  9.94  9.95  10.00  32  50  0.65
1 000  9.91  9.93  9.98  36  50  0.73
1 250  9.89  9.91  9.95  31  79  0.39
1 500  9.88  9.90  9.94  31  79  0.39
1 600  9.87  9.89  9.94  36  79  0.46
2 000  9.86  9.86  9.92  35  79  0.44
2 500  9.84  9.86  9.90  31  79  0.39
3 000  9.83  9.84  9.87  21  79  0.26
3 150  9.83  9.84  9.87  21  79  0.26
3 500  9.83  9.83  9.87  23  79  0.29
4 000  9.83  9.83  9.87  23  79  0.29
4 500  9.83  9.83  9.86  17  79  0.22
5 000  9.83  9.83  9.87  23  118  0.20
5 500  9.86  9.85  9.90  26  118  0.22
6 000  9.87  9.87  9.90  17  119  0.15
6 300  9.88  9.87  9.93  32  119  0.27
6 500  9.89  9.88  9.95  38  119  0.32
7 000  9.92  9.91  9.95  21  119  0.17
7 500  9.96  9.94  10.03  47  120  0.39
8 000  9.98  9.97  10.00  15  120  0.13
8 500  10.00  10.01  10.02  10  120  0.08
9 000  10.03  10.05  10.03  12  120  0.10
9 500  10.05  10.09  10.05  23  121  0.19
10 000  10.05  10.12  10.09  35  121  0.29

7.4. degrees of equivalence between participants and the crvs
table 8 and table 9 show the degrees of equivalence (di) between each of the participating laboratories' measurements (xi) and the crvs (xref) for the brüel & kjær 8305 and pcb 301m15 sensors, respectively, where

di = xi − xref (4)

and the associated uncertainty of the degree of equivalence u(di) is

u(di) = √(u²(xi) + u²(xref)) . (5)

in table 8, cells with yellow shading represent cases where |di| > u(di).

table 4. crvs as reported by nmisa primary calibration for the sensors brüel & kjær 8305 s and pcb 301m15.
frequency  (hz)  brüel & kjær 8305 s  pcb 301m15  xref  (pc/(m/s²))  u (xref)   (%)  xref  (mv/(m/s²))  u(xref) (%)  10.0  0.124 3  0.3  10.32  0.5  12.5  0.124 3  0.3  10.30  0.5  16.0  0.124 4  0.3  10.28  0.5  20.0  0.124 4  0.3  10.26  0.5  25.0  0.124 4  0.3  10.25  0.5  31.5  0.124 4  0.3  10.23  0.5  40.0  0.124 4  0.3  10.21  0.5  63.0  0.124 4  0.5  10.18  0.5  80.0  0.124 4  0.5  10.16  0.5  100  0.124 4  0.5  10.13  0.5  125  0.124 4  0.5  10.11  0.5  160  0.124 4  0.5  10.09  0.5  200  0.124 3  0.5  10.08  0.5  250  0.124 3  0.5  10.07  0.5  315  0.124 4  0.5  10.04  0.5  400  0.124 4  0.5  10.02  0.5  500  0.124 4  0.5  10.00  0.5  630  0.124 5  0.5  9.98  0.5  800  0.124 6  0.5  9.96  0.5  1 000  0.124 8  0.5  9.94  0.5  1 250  0.125 1  0.8  9.92  0.8  1 500  0.125 6  0.8  9.91  0.8  1 600  0.125 6  0.8  9.90  0.8  2 000  0.126 2  0.8  9.88  0.8  2 500  0.127 3  0.8  9.87  0.8  3 000  0.128 8  0.8  9.85  0.8  3 150  0.129 2  0.8  9.85  0.8  3 500  0.130 3  0.8  9.84  0.8  4 000  0.132 3  0.8  9.84  0.8  4 500  0.134 5  0.8  9.84  0.8  5 000  0.137 1  1.2  9.84  1.2  5 500  0.140 1  1.2  9.87  1.2  6 000  0.143 4  1.2  9.88  1.2  6 300  0.146 1  1.2  9.89  1.2  6 500  0.147 4  1.2  9.91  1.2  7 000  0.151 7  1.2  9.93  1.2  7 500  0.156 5  1.2  9.98  1.2  8 000  0.161 8  1.2  9.98  1.2  8 500  0.167 7  1.2  10.01  1.2  9 000  0.174 1  1.2  10.04  1.2  9 500  0.181 1  1.2  10.06  1.2  10 000  0.190 2  1.2  10.09  1.2  table 5. the results of the chi‐square goodness of fit test for 2 degrees of  freedom for both accelerometers.  frequency  (hz)  brüel & kjær 8305 s  pcb 301m15        10.0  0.15  ≥0.90  0.66  ≥0.70 12.5  0.54  ≥0.70  0.62  ≥0.70 16.0  0.32  ≥0.80  0.31  ≥0.80 20.0  0.08  ≥0.95  0.63  ≥0.70 25.0  0.18  ≥0.9  0.59  ≥0.70 31.5  0.12  ≥0.90  0.55  ≥0.70 40.0  1.12  ≥0.90  0.55  ≥0.70 63.0  0.10  ≥0.95  0.60  ≥0.70 80.0  0.10  ≥0.95  0.60  ≥0.70 100  0.08  ≥0.95  0.51  ≥0.70 125  0.04  ≥0.95  0.60  ≥0.70 160  0.08  ≥0.95  0.76  ≥0.50 200  0.11  ≥0.90  0.50  ≥0.70 250  0.12  ≥0.90  0.72  ≥0.50 315  0.06  ≥0.95  0.39  ≥0.80 400  0.11  ≥0.90  0.50  ≥0.70 500  0.12  ≥0.90  0.43  ≥0.80 630  0.11  ≥0.90  0.56  ≥0.70 800  0.11  ≥0.90  0.53  ≥0.70 1 000  0.09  ≥0.95  0.58  ≥0.70 1 250  0.01  ≥0.95  0.46  ≥0.70 1 500  0.06  ≥0.95  0.56  ≥0.70 1 600  0.05  ≥0.95  0.51  ≥0.70 2 000  0.04  ≥0.95  0.48  ≥0.70 2 500  0.02  ≥0.95  0.65  ≥0.70 3 000  0.01  ≥0.95  0.45  ≥0.80 3 150  0.01  ≥0.95  0.45  ≥0.80 3 500  0.01  ≥0.95  0.49  ≥0.70 4 000  0.01  ≥0.95  0.36  ≥0.80 4 500  0.07  ≥0.95  0.32  ≥0.80 5 000  0.16  ≥0.90  0.11  ≥0.90 5 500  0.59  ≥0.70  0.25  ≥0.80 6 000  0.99  ≥0.50  0.22  ≥0.80 6 300  1.72  ≥0.30  0.21  ≥0.90 6 500  0.71  ≥0.70  0.09  ≥0.95 7 000  0.33  ≥0.80  0.14  ≥0.90 7 500  0.67  ≥0.70  0.42  ≥0.80 8 000  2.19  ≥0.30  0.68  ≥0.70 8 500  0.94  ≥0.50  0.61  ≥0.70 9 000  1.71  ≥0.30  0.73  ≥0.50 9 500  0.84  ≥0.50  0.26  ≥0.80 10 000  0.90  ≥0.50  0.12  ≥0.90 acta imeko | www.imeko.org  april 2016 | volume 5 | number 1 | 74  sets of degrees of equivalence as reported in table 8 and table 9 for selected frequencies are shown in graphical form for the brüel & kjær 8305 and pcb 301m15 in figure 3 and figure 4 respectively. in section 8 using exclusive statistics [4] it is shown that any correlation between the crv's and the results of nmisa is small as compared to the nmisa uncertainty. 8. 
considering correlation between crv and nmisa results
the aim of this comparison was to compare measurements of the sensitivity of accelerometers as measured by secondary (back-to-back) means in accordance with iso 16063-21: 2003 "vibration calibration by comparison to a reference transducer" [3]. the crvs were obtained from the mean of measurements carried out at the beginning, middle and end of the circulation period by the nmisa primary vibration calibration system [2]. however, the same primary system was used to calibrate the nmisa reference transducers used in the comparison. we adopt exclusive statistics [4] to show that the correlation between the crvs and the results of nmisa is small compared to the nmisa uncertainty. let χ be the exclusive weighted mean of the results of the non-correlated laboratories, such that

χ = (∑i≠j wi xi) / (∑i≠j wi) , with weights wi = 1/u²(xi) , (6)

where the j-th, or correlated, laboratory (nmisa) is excluded.
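a compact sketch of equation (6), with illustrative names; the numerical values are the 10 hz entries of table 6 below, with the relative uncertainties converted to approximate absolute standard values for the sake of the example only.

```python
import numpy as np

def exclusive_weighted_mean(x, u, exclude):
    """Exclusive weighted mean of eq. (6): the weighted mean of all
    laboratories except the correlated one (index `exclude`),
    with weights w_i = 1/u^2(x_i)."""
    x, u = np.asarray(x, float), np.asarray(u, float)
    keep = np.arange(len(x)) != exclude
    w = 1.0 / u[keep] ** 2
    return float(np.sum(w * x[keep]) / np.sum(w))

# 10 Hz results of table 6 (bksv, kebs, nmisa); nmisa (index 2) excluded
chi = exclusive_weighted_mean([0.1245, 0.1242, 0.1247],
                              [0.0010, 0.0010, 0.0012], exclude=2)
print(abs(chi - 0.1243))   # |chi - crv| against the 10 Hz crv of table 4
```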
since the exclusive weighted mean χ is not correlated with the crv, |χ − crv| is a simple quantitative measure of the crv bias, also inherent in the nmisa reference transducers. table 10 shows that the absolute values of |χ − crv| are small in comparison with the uncertainty of nmisa's comparison results, the value at 8000 hz for the sensor brüel & kjær 8305 being the only exception.

table 6. results of the participants for the sensor brüel & kjær 8305 s.

frequency (hz)  bksv s (pc/(m/s²))  us (%)  kebs s (pc/(m/s²))  us (%)  nmisa s (pc/(m/s²))  us (%)
10  0.124 5  0.8  0.124 2  0.8  0.124 7  1.0
12.5  0.124 4  0.8  0.124 4  0.8  0.125 2  1.0
16  0.124 5  0.8  0.124 4  0.8  0.125 1  1.0
20  0.124 5  0.8  0.124 3  0.8  0.124 1  1.0
25  0.124 5  0.8  0.124 3  0.8  0.124 9  1.0
31.5  0.124 5  0.8  0.124 3  0.8  0.124 8  1.0
40  0.124 5  0.8  0.124 2  0.8  0.125 7  1.0
63  0.124 4  0.8  0.124 2  0.8  0.124 7  1.0
80  0.124 4  0.8  0.124 2  0.8  0.124 7  1.0
100  0.124 5  0.8  0.124 2  0.8  0.124 6  1.0
125  0.124 4  0.8  0.124 3  0.8  0.124 6  1.0
160  0.124 5  0.6  0.124 2  0.8  0.124 6  1.0
200  0.124 5  0.8  0.124 2  0.8  0.124 6  1.0
250  0.124 6  0.8  0.124 3  0.8  0.124 5  1.0
315  0.124 6  0.8  0.124 3  0.8  0.124 5  1.0
400  0.124 7  0.8  0.124 3  0.8  0.124 5  1.0
500  0.124 7  0.8  0.124 4  0.8  0.124 6  1.0
630  0.124 8  0.8  0.124 4  0.8  0.124 6  1.0
800  0.124 9  0.8  0.124 5  0.8  0.124 7  1.0
1 000  0.125 1  0.8  0.124 8  0.8  0.124 8  1.0
1 250  0.125 2  0.8  0.125 0  1.5  0.125 1  1.3
1 500  0.125 6  0.8  0.125 2  1.5  0.125 4  1.3
1 600  0.125 8  0.8  0.125 4  1.5  0.125 5  1.3
2 000  0.126 4  0.8  0.126 2  1.5  0.126 2  1.3
2 500  0.127 4  0.8  0.127 1  1.5  0.127 3  1.3
3 000  0.128 8  0.8  0.128 7  1.5  0.128 7  1.3
3 150  0.129 3  0.8  0.129 2  1.5  0.129 2  1.3
3 500  0.130 4  0.8  0.130 3  1.5  0.130 4  1.3
4 000  0.132 2  0.8  0.132 3  1.5  0.132 4  1.3
4 500  0.134 4  0.8  0.135 0  1.5  0.134 5  1.3
5 000  0.136 8  0.8  0.137 7  1.5  0.137 1  1.9
5 500  0.139 4  0.8  0.140 9  2.0  0.141 0  1.9
6 000  0.142 3  0.8  0.144 1  2.0  0.143 4  1.9
6 300  0.144 7  0.8  0.147 6  2.0  0.146 1  1.9
6 500  0.146 3  2  0.148 9  2.0  0.149 0  1.9
7 000  0.150 4  2  0.151 6  2.0  0.152 8  1.9
7 500  0.154 8  2  0.157 4  2.0  0.158 1  1.9
8 000  0.159 4  2  0.158 0  2.0  0.160 5  1.9
8 500  0.164 8  2  0.167 6  2.0  0.166 4  1.9
9 000  0.170 6  2  0.172 7  2.0  0.171 8  1.9
9 500  0.178 1  2  0.180 8  2.0  0.179 9  1.9
10 000  0.186 7  2  0.189 6  2.0  0.190 3  1.9

table 7. results of the participants for the sensor pcb 301m15.

frequency (hz)  bksv s (mv/(m/s²))  us (%)  kebs s (mv/(m/s²))  us (%)  nmisa s (mv/(m/s²))  us (%)
10  10.29  0.8  10.30  0.8  10.25  1.0
12.5  10.27  0.8  10.29  0.8  10.23  1.0
16  10.26  0.8  10.27  0.8  10.23  1.0
20  10.25  0.8  10.26  0.8  10.18  1.0
25  10.23  0.8  10.27  0.8  10.18  1.0
31.5  10.22  0.8  10.25  0.8  10.16  1.0
40  10.20  0.8  10.23  0.8  10.14  1.0
63  10.16  0.8  10.20  0.8  10.11  1.0
80  10.14  0.8  10.18  0.8  10.09  1.0
100  10.14  0.8  10.16  0.8  10.07  1.0
125  10.11  0.8  10.15  0.8  10.05  1.0
160  10.10  0.6  10.13  0.8  10.02  1.0
200  10.08  0.8  10.09  0.8  10.01  1.0
250  10.06  0.8  10.09  0.8  9.99  1.0
315  10.04  0.8  10.07  0.8  9.99  1.0
400  10.02  0.8  10.05  0.8  9.96  1.0
500  10.00  0.8  10.02  0.8  9.94  1.0
630  9.98  0.8  10.00  0.8  9.91  1.0
800  9.95  0.8  9.97  0.8  9.89  1.0
1 000  9.93  0.8  9.96  0.8  9.87  1.0
1 250  9.90  0.8  9.93  1.5  9.84  1.3
1 500  9.89  0.8  9.91  1.5  9.82  1.3
1 600  9.89  0.8  9.90  1.5  9.81  1.3
2 000  9.86  0.8  9.90  1.5  9.80  1.3
2 500  9.84  0.8  9.88  1.5  9.78  1.3
3 000  9.84  0.8  9.82  1.5  9.77  1.3
3 150  9.84  0.8  9.82  1.5  9.77  1.3
3 500  9.83  0.8  9.80  1.5  9.76  1.3
4 000  9.83  0.8  9.81  1.5  9.77  1.3
4 500  9.83  0.8  9.85  1.5  9.77  1.3
5 000  9.83  0.8  9.86  1.5  9.79  1.9
5 500  9.84  0.8  9.88  2.0  9.81  1.9
6 000  9.85  0.8  9.89  2.0  9.83  1.9
6 300  9.86  0.8  9.94  2.0  9.88  1.9
6 500  9.87  2.0  9.95  2.0  9.89  1.9
7 000  9.88  2.0  9.94  2.0  9.88  1.9
7 500  9.90  2.0  9.95  2.0  9.89  1.9
8 000  9.93  2.0  9.85  2.0  9.90  1.9
8 500  9.96  2.0  9.92  2.0  9.90  1.9
9 000  10.01  2.0  9.93  2.0  9.92  1.9
9 500  10.05  2.0  9.99  2.0  9.99  1.9
10 000  10.10  2.0  10.03  2.0  10.06  1.9

table 8. degrees of equivalence between the participants and the crvs for the sensor brüel & kjær 8305 s.
frequency  (hz)  bksv  kebs  nmisa  di  (fc/(m/s²))  u(di)  (fc/(m/s²))  di  (fc/(m/s²))  u(di)  (fc/(m/s²))  di  (fc/(m/s²))  u(di)  (fc/(m/s²))  10  0.2  1.1  ‐0.1  1.1  0.4  1.3  12.5  0.1  1.1  0.1  1.1  0.9  1.3  16  0.1  1.1  0.0  1.1  0.7  1.3  20  0.1  1.1  ‐0.1  1.1  ‐0.3  1.3  25  0.1  1.1  ‐0.1  1.1  0.5  1.3  31.5  0.1  1.1  ‐0.1  1.1  0.4  1.3  40  0.1  1.1  ‐0.2  1.1  1.3  1.3  63  0.0  1.2  ‐0.2  1.2  0.3  1.4  80  0.0  1.2  ‐0.2  1.2  0.3  1.4  100  0.1  1.2  ‐0.2  1.2  0.2  1.4  125  0.0  1.2  ‐0.1  1.2  0.2  1.4  160  0.1  1.0  ‐0.2  1.2  0.2  1.4  200  0.2  1.2  ‐0.1  1.2  0.3  1.4  250  0.3  1.2  0.0  1.2  0.2  1.4  315  0.2  1.2  ‐0.1  1.2  0.1  1.4  400  0.3  1.2  ‐0.1  1.2  0.1  1.4  500  0.3  1.2  0.0  1.2  0.2  1.4  630  0.3  1.2  ‐0.1  1.2  0.1  1.4  800  0.3  1.2  ‐0.1  1.2  0.1  1.4  1 000  0.3  1.2  0.0  1.2  0.0  1.4  1 250  0.1  1.4  ‐0.1  2.1  0.0  1.9  1 500  0.0  1.4  ‐0.4  2.1  ‐0.2  1.9  1 600  0.2  1.4  ‐0.2  2.1  ‐0.1  1.9  2 000  0.2  1.4  0.0  2.1  0.0  1.9  2 500  0.1  1.4  ‐0.2  2.2  0.0  1.9  3 000  0.0  1.4  ‐0.1  2.2  ‐0.1  2.0  3 150  0.1  1.4  0.0  2.2  0.0  2.0  3 500  0.1  1.4  0.0  2.2  0.1  2.0  4 000  ‐0.1  1.5  0.0  2.2  0.1  2.0  4 500  ‐0.1  1.5  0.5  2.3  0.0  2.1  5 000  ‐0.3  2.0  0.6  2.6  0.0  3.1  5 500  ‐0.7  2.0  0.8  3.3  0.9  3.2  6 000  ‐1.1  2.0  0.7  3.4  0.0  3.2  6 300  ‐1.4  2.1  1.5  3.4  0.0  3.3  6 500  ‐1.1  3.4  1.5  3.5  1.6  3.3  7 000  ‐1.3  3.5  ‐0.1  3.5  1.1  3.4  7 500  ‐1.7  3.6  0.9  3.7  1.6  3.5  8 000  ‐2.4  3.7  ‐3.8  3.7  ‐1.3  3.6  8 500  ‐2.9  3.9  ‐0.1  3.9  ‐1.3  3.7  9 000  ‐3.5  4.0  ‐1.4  4.0  ‐2.3  3.9  9 500  ‐3.0  4.2  ‐0.3  4.2  ‐1.2  4.1  10 000  ‐3.5  4.3  ‐0.6  4.4  0.1  4.3  acta imeko | www.imeko.org  april 2016 | volume 5 | number 1 | 77  9. conclusions  the ilc reported here was planned as a single event with a protocol equivalent to afrimets.auv.v-s3. it was supposed to provide support and validation of participating laboratories’ mcs in the field of vibration for magnitude sensitivity of accelerometers. the frequency range covered the scope currently implemented by the participating laboratories. the circulation of the artefact went without any complications. the analysis of the crv values and nmisa submitted data indicated no discernible correlation. the reported degrees of equivalence support the individual laboratories’ mcs. table 9. degrees of equivalence participants and the crvs for sensor pcb 301m15. 
frequency  (hz)  bksv  kebs  nmisa  di   (µv/(m/s²))  u(di)     (µv/(m/s²))  di  (µv/(m/s²))  u(di)   (µv/(m/s²))  di  (µv/(m/s²))  u(di)  (µv/(m/s²))  10  ‐30  100  ‐20  100  ‐70  110  12.5  ‐30  100  ‐10  100  ‐70  110  16  ‐20  100  ‐10  100  ‐50  110  20  ‐10  100  0  100  ‐80  110  25  ‐20  100  20  100  ‐70  110  31.5  ‐10  90  20  100  ‐70  110  40  ‐10  90  20  100  ‐70  110  63  ‐20  90  20  100  ‐70  110  80  ‐20  90  20  100  ‐70  110  100  10  90  30  100  ‐60  110  125  0  90  40  100  ‐60  110  160  10  80  40  100  ‐70  110  200  0  90  10  100  ‐70  110  250  ‐10  90  20  100  ‐80  110  315  0  90  30  90  ‐50  110  400  0  90  30  90  ‐60  110  500  0  90  20  90  ‐60  110  630  0  90  20  90  ‐70  110  800  ‐10  90  10  90  ‐70  110  1 000  ‐10  90  20  90  ‐70  110  1 250  ‐20  110  10  170  ‐80  150  1 500  ‐20  110  0  170  ‐90  150  1 600  ‐10  110  0  170  ‐90  150  2 000  ‐20  110  20  170  ‐80  150  2 500  ‐30  110  10  170  ‐90  150  3 000  ‐10  110  ‐30  170  ‐80  150  3 150  ‐10  110  ‐30  170  ‐80  150  3 500  ‐10  110  ‐40  170  ‐80  150  4 000  ‐10  110  ‐30  170  ‐70  150  4 500  ‐10  110  10  170  ‐70  150  5 000  ‐10  140  20  190  ‐50  220  5 500  ‐30  140  10  230  ‐60  220  6 000  ‐30  140  10  230  ‐50  220  6 300  ‐30  140  50  230  ‐10  220  6 500  ‐40  230  40  230  ‐20  220  7 000  ‐50  230  10  230  ‐50  220  7 500  ‐80  230  ‐30  230  ‐90  220  8 000  ‐50  230  ‐130  230  ‐80  220  8 500  ‐50  230  ‐90  230  ‐110  220  9 000  ‐30  230  ‐110  230  ‐120  220  9 500  ‐10  230  ‐70  230  ‐70  220  10 000  10  230  ‐60  230  ‐30  230  acta imeko | www.imeko.org  april 2016 | volume 5 | number 1 | 78  figure  3.  degrees  of  equivalence  between  the  participating  laboratories  and  the  crvs  for  the  sensor  brüel  &  kjær  8305  in  graphical  form  for  select  frequencies. red lines correspond to cases where |di| > u(di).  acta imeko | www.imeko.org  april 2016 | volume 5 | number 1 | 79    figure 4. degrees of equivalence between the participating laboratories and the crvs for the sensor pcb 301m15 in graphical form for select frequencies.  acta imeko | www.imeko.org  april 2016 | volume 5 | number 1 | 80          references  [1] technical protocol of the comparison nmisa.ilc.auv-03 (vibration). kebs, anderson k. maina, april 2012. [2] iso 16063-11: 1999 “primary vibration calibration by laser interferometry, method 3”. [3] iso 16063-21: 2003 “vibration calibration by comparison to a reference transducer”. [4] steele a. g., wood b. m. and douglas r. j., exclusive statistics; simple treatment of the unavoidable correlations from key comparison reference values, metrologia, vol. 38, pp.483-488, 2001. [5] maina a., veldman i., et al., final report on the supplementary comparison: afrimets.auv.v-s3, metrologia, vol. 51 tech. suppl. 09003, 2014.   table 10. |x ‐ crv | compared to nmisa comparison uncertainty ux. 
frequency (hz)  brüel & kjær 8305 s: |χ − crv| (fc/(m/s²))  ux (fc/(m/s²))  pcb 301m15: |χ − crv| (µv/(m/s²))  ux (µv/(m/s²))
10  0.0  1.2  26  103
12.5  0.1  1.3  21  102
16  0.0  1.3  15  102
20  0.0  1.2  7  102
25  0.0  1.2  1  102
31.5  0.0  1.2  4  102
40  0.1  1.3  4  101
63  0.1  1.2  0  101
80  0.1  1.2  2  101
100  0.1  1.2  18  101
125  0.1  1.2  19  101
160  0.0  1.2  18  100
200  0.0  1.2  4  100
250  0.1  1.2  6  100
315  0.0  1.2  16  100
400  0.1  1.2  16  100
500  0.1  1.2  10  99
630  0.1  1.2  9  99
800  0.1  1.2  0  99
1 000  0.1  1.2  7  99
1 250  0.1  1.6  12  128
1 500  0.1  1.6  13  128
1 600  0.1  1.6  8  128
2 000  0.2  1.6  8  127
2 500  0.0  1.7  21  127
3 000  0.0  1.7  16  127
3 150  0.1  1.7  15  127
3 500  0.1  1.7  14  127
4 000  0.1  1.7  17  127
4 500  0.0  1.7  7  127
5 000  0.1  2.6  2  186
5 500  0.5  2.7  23  186
6 000  0.9  2.7  22  187
6 300  1.0  2.8  21  188
6 500  0.2  2.8  3  188
7 000  0.7  2.9  19  188
7 500  0.4  3.0  54  188
8 000  3.1  3.0  92  188
8 500  1.5  3.2  68  188
9 000  2.5  3.3  71  188
9 500  1.7  3.4  41  190
10 000  2.1  3.6  26  191

the role of metrology in the cyber-security of embedded devices

acta imeko
issn: 2221-870x
june 2023, volume 12, number 2, 1-6

pasquale arpaia1,2,3, francesco caputo1,2, antonella cioffi1, antonio esposito1,2
1 department of electrical engineering and information technology (dieti), università degli studi di napoli federico ii, naples, italy
2 augmented reality for health monitoring laboratory (arhemlab), università degli studi di napoli federico ii, naples, italy
3 centro interdipartimentale di ricerca in management sanitario e innovazione in sanità (cirmis), università degli studi di napoli federico ii, naples, italy

abstract
the cyber-security of an embedded device is a crucial issue, especially in the internet of things (iot) paradigm, since the physical accessibility of the smart transducers makes it easier for an attacker to eavesdrop on the exchanged messages. in this manuscript, the role of metrology in improving the characterization and security testing of embedded devices is discussed in terms of vulnerability testing and robustness evaluation. the presented methods ensure an accurate assessment of the device's security by relying on statistical analysis and design of experiments. a particular focus is given to power analysis by means of a scatter attack. in this context, the metrological approach contributes to guaranteeing the confidentiality and integrity of the data exchanged by iot transducers.

section: research paper
keywords: smart card; vulnerability assessment; side-channel attack; metrology
citation: pasquale arpaia, francesco caputo, antonella cioffi, antonio esposito, the role of metrology in the cyber-security of embedded devices, acta imeko, vol. 12, no. 2, article 8, june 2023, identifier: imeko-acta-12 (2023)-02-08
section editor: paolo carbone, university of perugia, italy
received january 23, 2023; in final form january 23, 2023; published june 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was supported by the "programma operativo nazionale 2014-2020, dottorati di ricerca su tematiche dell'innovazione e green, 10/08/2021 d.m. 10 agosto 2021, n. 1061, a.a. 2021/2022 ciclo 37 – tematiche green (azione iv.5)".
corresponding author: antonio esposito, e-mail: antonio.esposito9@unina.it

1. introduction
the progress in information and communication technology has favoured the development of a new paradigm, known as the internet of things (iot) [1], [2]. this consists of distributed smart transducers that acquire data continuously and communicate with each other by wireless connection, so as to support everyday tasks and improve the quality of life [3]. these transducer networks concern a broad range of applications, such as environmental monitoring, smart power grids, transportation, healthcare, agriculture, and electronic payments [4]–[8].
most iot transducers are physically accessible, and this allows certain types of attacks and security breaches [9]. indeed, an attacker can gain access to the network nodes in order to control them or to eavesdrop on exchanged messages. the security of the processed information should typically be guaranteed by encrypting messages through cryptographic algorithms [10], e.g., based on the well-known advanced encryption standard (aes). such an algorithm is implemented in the iot transducer to encrypt the transmitted messages and decrypt the received ones. however, although mathematically safe, the implementation of these algorithms presents some vulnerabilities to side-channel attacks, namely attacks based on the measurement of physical quantities associated with the encryption/decryption operations [11]. measured quantities may involve the power consumption, electromagnetic emissions, execution time, light, or heat associated with the device's cryptographic operations. this side-channel information, also referred to as "leakages", can be exploited to discover the secret key of the cryptographic algorithm. side-channel attacks have been extensively studied by researchers and test laboratories for more than two decades. the attacks that have received most of the attention are based on the measurement of the power consumption dissipated by the embedded device during its operations. these are known as power analysis attacks, and they were first introduced in 1999 by kocher [12], who managed to break a public-key cryptographic algorithm by measuring the power consumption of a device. indeed, by exploiting the dependence between the power consumption and the internal state of the device during the execution of cryptographic operations, he was able to obtain information about the secret key being used. the implementation of this type of attack simply consists of two main phases: (i) the acquisition of power traces and (ii) a statistical analysis. with regard to the actual statistical analysis, different types of attacks can be distinguished: simple power analysis, differential power analysis [12], correlation power analysis [13], and scatter analysis [14]. moreover, machine learning-based approaches to data analysis are gaining more and more interest due to their capability of decoding patterns generated by complex systems [15]-[17]. in previous studies, many proposals concerned methods to improve the attack efficiency [18]–[20], techniques to make cryptographic algorithms robust against power attacks [21]–[23], and tests aiming to evaluate the robustness of iot devices with respect to side-channel attacks.
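as a toy illustration of the data dependency exploited by power analysis (a sketch under common literature assumptions, not the authors' code), the following simulates noisy power samples under a hamming-weight leakage model; the substitution table is a bijective stand-in for the real aes sbox.

```python
import random

SBOX = [(7 * x + 3) % 256 for x in range(256)]   # stand-in for the AES sbox

def hamming_weight(v: int) -> int:
    return bin(v).count("1")

def leakage(plaintext_byte: int, key_byte: int, noise: float = 1.0) -> float:
    """Simulated power sample: Hamming weight of the first-round sbox
    output (AddRoundKey, then SubBytes) plus Gaussian measurement noise."""
    intermediate = SBOX[plaintext_byte ^ key_byte]
    return hamming_weight(intermediate) + random.gauss(0.0, noise)

# one simulated sample per random plaintext byte, for a fixed key byte
samples = [leakage(random.randrange(256), key_byte=0x2B) for _ in range(1000)]
```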
broadly speaking, the appeal of sensitive and personal information encourages attackers, on the one hand, to break a cryptographic system in order to make profits. on the other hand, the security offered by such systems must be increased and tested in order to ensure confidentiality. most cryptography papers present the resource usage needed to break the analysed cryptographic scheme as a function of the security parameters. this allows system designers to choose the parameter values so as to make a successful attack more expensive [24]. in this context, the rigor offered by metrology can contribute to the characterization of embedded and consumer-grade devices [25]. notably, metrology has recently been enabling substantial progress in the field of information security of smart transducers, thanks to a characterization of the aes vulnerability [26] and an evaluation of the robustness offered by countermeasures implemented in many devices [27]. the present work focuses on the role of metrology in improving security testing for embedded devices. in particular, results from previous works on the topic are recalled, integrated, and discussed. in detail, a vulnerability assessment is presented, also providing some examples of enhanced power analysis attacks; the discussion then moves to robustness evaluation by means of design of experiments. the overall aim is to highlight the role that metrology can have in cyber-security. the paper is organized as follows. section 2 introduces the instruments and the setup adopted to implement a power analysis scatter attack. section 3 presents the results achieved thanks to the metrological approach to security testing. conclusions are drawn in section 4, along with future works.

2. materials and methods
power analysis attacks are side-channel attacks that exploit the variations in the power consumption of a cryptographic device to reveal the secret key [28]. in particular, the data dependency and the operation dependency are exploited. the following subsections present a power analysis attack known as the "scatter attack", the instruments adopted in power trace acquisition, and the measurement setup. how to perform an optimized attack and security testing is then discussed in the next section, together with the related results.

2.1. scatter attack
the scatter attack was chosen to demonstrate the vulnerability of the aes. the attack was implemented against an iot microcontroller (a.k.a. the iot device under test), secured by the aes cryptographic algorithm with a 16-byte secret key, i.e., aes-128. the scatter attack consists of a statistical analysis of several power traces, whose variations in amplitude are related to the value of the key. the attack implements a "divide-and-conquer" strategy by discovering a single byte of the key at a time. the workflow of the attack is shown in figure 1. for each i-th byte of the key (i ∈ [0, 15]), a simple discriminant related to each key byte hypothesis k is obtained by means of a pearson's chi-squared (χ²) statistical test. when the key byte hypothesis is correct, the discriminant related to the real value should be characterized by the highest value with respect to the other guesses; therefore, a high value of the discriminant maximizes the likelihood that the key byte hypothesis coincides with the secret key byte. the secret key is thus discovered by repeating the described procedure for all the key bytes.
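a simplified sketch of the per-byte discriminant just described: a pearson χ² statistic computed between the predicted sbox-output class and the quantised power sample. this illustrates the principle only and is not the implementation used in the study; it reuses the toy sbox and leakage conventions of the previous sketch.

```python
import numpy as np
from scipy.stats import chi2_contingency

SBOX = [(7 * x + 3) % 256 for x in range(256)]   # stand-in for the AES sbox
hw = lambda v: bin(v).count("1")

def scatter_discriminant(plaintexts, samples, k_guess, bins=8):
    """Pearson chi-squared statistic between the Hamming weight of the
    predicted first-round sbox output and the quantised power sample;
    the correct key-byte hypothesis should maximise it."""
    cls = np.array([hw(SBOX[p ^ k_guess]) for p in plaintexts])
    q = np.digitize(samples, np.histogram_bin_edges(samples, bins=bins))
    table = np.zeros((9, bins + 2))                 # HW class x sample bin
    for c, s in zip(cls, q):
        table[c, int(s)] += 1
    table = table[table.sum(axis=1) > 0][:, table.sum(axis=0) > 0]
    return chi2_contingency(table)[0]

def attack_key_byte(plaintexts, samples):
    """Divide and conquer: rank all 256 hypotheses for one key byte."""
    return max(range(256),
               key=lambda k: scatter_discriminant(plaintexts, samples, k))
```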
for the implementation of the attack, a malicious measurement system is wired to the iot microcontroller under test for monitoring the power consumption (power trace acquisition phase) during the encryption. the measured power consumption data are processed by a method of signature analysis (statistical analysis phase) in order to reveal the secret key of the aes-128 algorithm. for the advanced encryption standard, the portion of the power traces whose variations in amplitude are related to the key value corresponds to the addroundkey and subbytes steps of the first aes round. indeed, the key expansion of the aes algorithm produces 10 round keys, the first of those generated coinciding with the secret key; therefore, the first round employs the secret key in clear. moreover, among the operations computed in the first round, the aes sbox output has a statistical influence on the power consumption [29].

figure 1. flow diagram of a scatter attack.

2.2. instruments
the device under test (dut) was the atmega-163, a low-power cmos 8-bit microcontroller based on the avr architecture, embedded on a smart card. the microcontroller presents 16 kb in-system flash, 512 bytes eeprom, 1024 bytes internal sram, and an 8 mhz maximum clock, and implements the advanced encryption standard with a key of 128 bits. the cryptographic algorithm is implemented in software, and it does not include side-channel countermeasures. the acquisition phase exploited the teledyne lecroy hdo9304 oscilloscope, characterized by 3 ghz bandwidth, 40 gsa/s sample rate, and an 8-bit analog-to-digital converter (adc). a further hardware component, useful for the communication between the dut and the oscilloscope, was employed. this component consisted of the power tracer by riscure [30], a low-noise card reader for side-channel power measurements with precise triggering capabilities. the power tracer supplies the integrated circuit on the smart card and communicates with it via the iso/iec 7816-3 protocol. moreover, it contains a low-noise amplifier (26 pa/√hz at 1 mhz) with top-end low-noise and high-bandwidth analogue components that are electrically isolated from the digital circuitry. thanks to this circuitry, the power tracer provides the power consumption in output with a good signal-to-noise ratio. the capacitors inside the power tracer are pre-charged to power the smart card during each single measurement, to avoid any external noise in the circuit. typically, the power consumption of contact smart cards is measured via a resistor inserted between the ground pin of the smart card and the ground of a card reader. the power tracer instead measures power consumption without a measurement resistance in the power chain, thus allowing stable card voltage, maximum signal bandwidth, high sensitivity, and low insertion error [31]. an illustration of the instruments and devices used for the attack is shown in figure 2.

figure 2. instruments and devices for the scatter attack implementation.

2.3. measurement setup
the block diagram of the measurement setup for power trace acquisition is shown in figure 3. a market-leading professional tool, namely inspector by riscure, was installed on a personal computer. this sends the initial configuration parameters to the digital oscilloscope by means of a usb protocol. moreover, the software communicates with the smart card through the power tracer. in particular, the pc provides the smart card reader with the plaintexts to be sent to the smart card in the apdu (application protocol data unit) format.

figure 3. block diagram of the measurement setup for power trace acquisition.
note that the apdu is a standard protocol, defined by iso/iec 7816-4, allowing the communication between a smart card reader and a smart card. in the communication between the smart card and the smart card reader, the inspector tool on the pc acts as the user interface of the card reader, allowing the preparation of the commands that will be physically sent by the reader to the smart card. the smart card encrypts the message by using the aes-128 algorithm and returns the encrypted message to the pc by means of the smart card reader. the power tracer is also used to send a trigger signal to the oscilloscope by means of a serial i/o line, to synchronize the acquisition with the encryption. the oscilloscope acquires the power traces from the output signal of the power tracer, i.e., the power consumption with an excellent signal-to-noise ratio. the sampled data are sent to the pc by means of a usb interface. before the oscilloscope acquires the power consumption, a bnc coaxial low-pass filter (mini circuits blp50+, dc to 48 mhz, 50 ω) cuts the signal frequencies above 48 mhz, as they carry no significant information.

3. results
in this section, the results achieved by the adoption of metrology in the security of embedded devices are reported. the discussion is conducted by analyzing the contributions to the optimization of power analysis attacks and to vulnerability assessment. the overall aim is to give some indications about the current role of metrology in the cybersecurity of embedded devices.

3.1. optimization of power analysis attack
the success rate of power attacks is significantly affected by the signal-to-noise ratio (snr) of the power traces [19]. techniques for noise reduction are particularly important when measuring power consumption, because power analysis attacks are very sensitive to the magnitude of these signals when recovering the value of the secret key. therefore, it is important to eliminate noise effectively and improve the snr of the power traces, so as to extract the secret key with minor effort. indeed, a good signal-to-noise ratio reduces the number of power traces needed to correctly reveal the secret key and, consequently, decreases the time to perform a successful attack. in [32], a filtering operation was employed in order to enhance the effectiveness of the test attack. the filter adopted is a low-pass digital filter, which makes each sample a weighted average of the previous and the current sample. the improvement of the signal-to-noise ratio is also obtained by a decimation operation applied after oversampling; in fact, the decimation contributes to improving the signal-to-noise ratio by reducing the noise floor. an assessment was conducted to establish the best configuration for the filter weight. the experiment proved that, with the number of power traces kept fixed, the number of disclosed bytes increases with the filter weight. indeed, with 50 000 power traces, a filter weight equal to 400 allows the 16-byte key to be discovered, while a weight of 300 returns only 15 bytes. therefore, a filter weight equal to 400 was considered the best configuration.
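one plausible reading of the pre-processing described in [32] is sketched below; the exact filter is not specified here, so this is an assumption-laden illustration: a first-order recursive low-pass whose smoothing grows with the weight, followed by block-average decimation of the oversampled trace.

```python
import numpy as np

def weighted_lowpass(x, weight=400):
    """Each output sample is a weighted average of the previous output
    and the current input; a larger `weight` means stronger smoothing."""
    y = np.empty(len(x), dtype=float)
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = (weight * y[n - 1] + x[n]) / (weight + 1)
    return y

def decimate_by_mean(x, factor):
    """Average consecutive blocks of the oversampled trace, lowering the
    noise floor roughly by sqrt(factor)."""
    n = (len(x) // factor) * factor
    return x[:n].reshape(-1, factor).mean(axis=1)

trace = np.random.randn(100_000)                  # placeholder power trace
processed = decimate_by_mean(weighted_lowpass(trace, weight=400), factor=10)
```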
the effectiveness of the enhanced scatter attack was proved experimentally in the best configuration by reducing the sample size of the power traces. figure 4 shows the results: the number of bytes disclosed correctly is reported as a function of the number of power traces, with the filter weight fixed to 400. the plot highlights that, with a filter weight of 400, 30 000 power traces are sufficient to find the encryption key exactly. by contrast, in the absence of the filtering operation, a successful attack needs 50 000 power traces. hence, with the same number of traces, the use of the filter allows the entire key to be discovered, unlike the case without pre-processing. moreover, the filter is able to reduce the number of traces needed for a successful attack, and decreasing the number of power traces also reduces the time needed to find the secret key.

figure 4. number of bytes disclosed by the scatter attack as a function of the number of power traces for a filter weight equal to 400 (dotted line: 3rd-order polynomial interpolation).

3.2. vulnerability assessment enhancement
the vulnerability assessment allows us to evaluate the robustness and security of cryptographic devices with respect to side-channel attacks. it consists in assessing the effort needed to penetrate the device, quantified in terms of the computational resources and time necessary for a potential attacker. assessing the robustness of a cryptographic device is important to guarantee, with greater reliability, the confidentiality and integrity of the data. a correct and reliable vulnerability assessment depends on a correct choice of the factors involved in the attack phases, such as the sampling frequency, the pre-processing techniques, and the number of traces acquired. indeed, when the parameters are not optimal, an attack could require more effort to reveal the secret key, such as a higher number of traces to acquire and a longer time; this occurrence can distort the outcome of the vulnerability analysis. in recent years, metrology has been contributing to the world of cybersecurity in order to improve the characterization of devices in terms of security. in [26], the experimental design method was investigated to evaluate the factors affecting the attack system and to identify the values that maximize the number of bytes correctly identified with a minimum number of experimental tests. the attack system analysed in [26] consists of (i) the measurement devices used to acquire the power traces, (ii) the pre-processing techniques adopted to improve the power traces, and (iii) the statistical analysis of the scatter attack to discover the secret key of the aes-128 implemented in a smart card. the pre-processing techniques employed are a fast bidirectional filter to enhance the signal-to-noise ratio of the power traces and a resampling operation to reduce their dimensionality. the factors chosen for the analysis are the filter weight of the fast bidirectional filter, the resampling rate, and the number of power traces. for each parameter, 3 values are investigated; therefore, an l9 orthogonal array is chosen for the experimental planning, since this design allows a problem of 3 parameters and 3 values to be modelled. the experimental design method implemented for the attack system under analysis allows the identification of the parameters that most influence the attack and of the values that increase the number of correctly discovered bytes. the results of the statistical significance analysis are shown in the pareto chart of figure 5. the histogram bars report the f-value for each attack parameter, while the line represents the related percentage contribution. the bars, presented in descending order, highlight the number of power traces as the most important parameter among those considered.

figure 5. pareto chart of the parameters: the histogram bars represent the f-values (left axis) and the orange line the cumulative percentage contributions (right axis).
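for reference, the standard l9(3^4) orthogonal array and the per-level mean response used for the effects analysis are sketched below; the coded array is the standard taguchi design, while the experimental outcomes in the example are illustrative numbers, not the paper's data.

```python
import numpy as np

# standard Taguchi L9(3^4) orthogonal array, levels coded 0/1/2;
# a study with three factors uses only the first three columns
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

def main_effect(results, col):
    """Mean response (here: number of correctly disclosed key bytes)
    at each level of factor `col`, the quantity behind the effects plot."""
    results = np.asarray(results, float)
    return [results[L9[:, col] == lvl].mean() for lvl in range(3)]

disclosed = [10, 12, 14, 11, 15, 16, 12, 15, 16]   # illustrative outcomes
print([main_effect(disclosed, c) for c in range(3)])
```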
figure 6 exhibits the parameter value effects together with the estimated error (for a confidence level of 99.97 %). in particular, each point is computed as the mean of the objective function values obtained for a fixed value of the factor analyzed. this plot allows the configuration of the attack parameters that maximizes the number of disclosed key bytes to be established. the best configuration for the case under analysis is 500 for the filter weight, 500 ksa/s for the resampling frequency, and 400 for the number of power traces.

figure 6. plot summarizing the number of disclosed bytes in each experiment obtained with a fixed number of power traces and weight of the filter.

3.3. metric for robustness evaluation
the security of cryptographic algorithms with respect to power analysis attacks is improved by software and hardware countermeasures. power analysis attacks succeed in discovering the secret key because the power consumption of cryptographic devices depends on the intermediate values of the executed cryptographic algorithms; therefore, the goal of power countermeasures is to make the power consumption of cryptographic devices independent of the processed data. the countermeasures include hiding and masking techniques. the hiding countermeasures introduce variations of the power consumption in the time domain or amplitude domain, while masking implements a data randomization by concealing each intermediate value. in [27], a method was proposed to assess the security performance of different power countermeasures designed to reinforce a software implementation of aes-128. moreover, the method provides a metric to express the effectiveness of a countermeasure in strengthening the iot transducer security. the method consists of conducting a power analysis attack at varying security measures and in computing, for each combination of attack and countermeasure, the number of traces needed to discover the secret key. this parameter is typically used for assessing the countermeasure effectiveness: the more effective the countermeasure, the more the number of traces increases. the minimum number of power traces needed to succeed in the attack for each countermeasure is obtained as the mean of the minimum numbers of power traces found over n repetitions of the attack on different batches of power traces. a successive-approximation method, in a range with extremes determined in a preliminary experimental campaign, was adopted for the first repetition of the attack. the successive n − 1 repetitions implement a grid-search method with a variable step initialized to the minimum number of power traces found in the first repetition: the count is incremented by a certain number of power traces until the secret key is fully recovered, and then refined with a step an order of magnitude lower, until the minimum number of power traces for that particular repetition is identified. the method employs the minimum numbers of power traces needed to discover the secret key, obtained for each combination of attack and countermeasure, to compute the strength factors (sfs). this parameter quantifies the level of protection for each countermeasure, and it is calculated as

sf_C* = ( ∑i=1..N_A min_A,C* ) / ( ∑i=1..N_A min_A,1 ) , (1)

where N_A is the number of implemented power attacks, min_A,C* is the minimum number of power traces for a fixed countermeasure C* at varying power attack, and min_A,1 is the minimum number of power traces with no countermeasure at varying power attack.
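a sketch of the two computations just described; `attack_succeeds` is a hypothetical callable wrapping a full attack run, and the search is only an approximation of the successive-approximation and grid-search procedure of the paper.

```python
def min_traces(attack_succeeds, start=1000, coarse=1000, fine=100):
    """Approximate (to within `fine`) the smallest number of traces at
    which the attack still recovers the full key: coarse upward sweep,
    back off one coarse step, then refine with the fine step."""
    n = start
    while not attack_succeeds(n):
        n += coarse
    n -= coarse - fine
    while not attack_succeeds(n):
        n += fine
    return n

def strength_factor(min_traces_cm, min_traces_plain):
    """eq. (1): ratio of the summed minima over the implemented attacks,
    countermeasure C* versus no countermeasure."""
    return sum(min_traces_cm) / sum(min_traces_plain)

print(strength_factor([1300], [1000]))   # 1.3, cf. random delay in table 1
```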
the proposed method for the robustness evaluation of countermeasures was applied to a case study consisting of a software implementation of the aes-128 reinforced by countermeasures. the countermeasures under analysis are (i) random delay insertion, (ii) random sbox, and (iii) boolean masking; moreover, the configuration with no countermeasures was evaluated. the result of the analysis is reported in table 1. random delay strengthens the aes by a factor of 1.3 with respect to the no-countermeasure condition; the strength factor for random sbox is 208, while it is more than 318 for masking. in the case of masking, the non-availability of a minimum number of power traces does not allow a strength factor value to be determined, but only a lower limit. the comparison of the strength factors highlights masking as the most effective among the power countermeasures, as it increases the number of power traces needed to succeed in the attack to a greater extent than the other countermeasures.

table 1. strength factor for each countermeasure.
countermeasure   strength factor (sf)
none             1
random delay     1.3
random sbox      208
masking          > 318

4. conclusions
in this work, the role of metrology in improving the characterization and security testing of embedded devices was discussed. this was done by recalling previous works focusing on vulnerability testing and robustness evaluation; such results were recalled, integrated, and discussed. in detail, a method based on the design of experiments was presented to enhance the vulnerability assessment. meanwhile, a metric was introduced to express the effectiveness of a countermeasure in strengthening the iot transducer security. overall, the discussion suggests that metrology plays an important role in cyber-security, especially in a context where iot transducers are increasingly widespread and their physical accessibility demands rigorous security testing. future works will continue these investigations by extending the metrological approach to machine learning-based attacks; indeed, these have great potential in security breaches, and they thus deserve further investigation.

references
[1] l. atzori, a. iera, g. morabito, the internet of things: a survey, computer networks 54(15) (2010), pp. 2787–2805. doi: 10.1016/j.comnet.2010.05.010
[2] i. ahmed, e. balestrieri, f. lamonaca, iomt-based biomedical measurement systems for healthcare monitoring: a review, acta imeko 10(2) (2021), pp. 174-184. doi: 10.21014/acta_imeko.v10i2.1080
[3] t. yousuf, r. mahmoud, f. aloul, i. zualkernan, internet of things (iot) security: current status, challenges and countermeasures, international journal for information security research (ijisr) 5(4) (2015), pp. 608–616. doi: 10.1109/icitst.2015.7412116
[4] w. t. sung, s. j. hsiao, the application of thermal comfort control based on smart house system of iot, measurement 149 (2020), p. 106997. doi: 10.1016/j.measurement.2019.106997
[5] f. abate, m. carratù, c. liguori, v. paciello, a low cost smart power meter for iot, measurement 136 (2019), pp. 59–66. doi: 10.1016/j.measurement.2018.12.069
[6] a. h. alavi, p. jiao, w. g. buttlar, n. lajnef, internet of things-enabled smart cities: state-of-the-art and future trends, measurement 129 (2018), pp. 589–606. doi: 10.1016/j.measurement.2018.07.067
[7] j. gutiérrez, j. f. villa-medina, a. nieto-garibay, m. a. porta-gándara, automated irrigation system using a wireless sensor network and gprs module, ieee transactions on instrumentation and measurement 63(1) (2013), pp. 166–176. doi: 10.1109/tim.2013.2276487
[8] h. ozkan, o. ozhan, y. karadana, m. gulcu, s. macit, f. husain, a portable wearable tele-ecg monitoring system, ieee transactions on instrumentation and measurement (2019). doi: 10.1109/tim.2019.2895484
[9] j. deogirikar, a. vidhate, security attacks in iot: a survey, 2017 international conference on i-smac (iot in social, mobile, analytics and cloud) (i-smac), ieee (2017), pp. 32–37. doi: 10.1109/i-smac.2017.8058363
[10] h. a. abdul-ghani, d. konstantas, a comprehensive study of security and privacy guidelines, threats, and countermeasures: an iot perspective, journal of sensor and actuator networks 8(2) (2019), p. 22. doi: 10.3390/jsan8020022
[11] f. x. standaert, introduction to side-channel attacks, in secure integrated circuits and systems, springer (2010), pp. 27–42. doi: 10.1007/978-0-387-71829-3_2
[12] p. kocher, j. jaffe, b. jun, differential power analysis, proc. of the 19th annual int. cryptology conference "advances in cryptology - crypto '99", santa barbara, california, usa, 15-19 august 1999, pp. 388–397. doi: 10.1007/3-540-48405-1_25
[13] e. brier, c. clavier, f. olivier, correlation power analysis with a leakage model, international workshop on cryptographic hardware and embedded systems, springer (2004), pp. 16–29. doi: 10.1007/978-3-540-28632-5_2
[14] h. thiebeauld, g. gagnerot, a. wurcker, c. clavier, scatter: a new dimension in side-channel, int. workshop on constructive side-channel analysis and secure design, springer (2018), pp. 135–152.
[15] t. kubota, k. yoshida, m. shiozaki, t. fujino, deep learning side-channel attack against hardware implementations of aes, microprocessors and microsystems 87 (2021), p. 103383. doi: 10.1016/j.micpro.2020.103383
[16] p. arpaia, a. esposito, a. natalizio, m. parvis, how to successfully classify eeg in motor imagery bci: a metrological analysis of the state of the art, journal of neural engineering (2022). doi: 10.1088/1741-2552/ac74e0
[17] m. k. nanjundaswamy, a. a. babu, s. shet, n. selvaraj, j. kovelakuntla, mitigation of spectrum sensing data falsification attack using multilayer perception in cognitive radio networks, acta imeko 11(1) (2022), pp. 1-7. doi: 10.21014/acta_imeko.v11i1.1199
[18] y. kim, t. sugawara, n. homma, t. aoki, a. satoh, biasing power traces to improve correlation power analysis attacks, first international workshop on constructive side-channel analysis and secure design (cosade 2010), citeseer (2010), pp. 77–80.
[19] w. liu, l. wu, x. zhang, a. wang, wavelet-based noise reduction in power analysis attack, 2014 tenth international conference on computational intelligence and security, ieee (2014), pp. 405–409.
doi: 10.1109/cis.2014.103 [20] b. hettwer, s. gehrer, t. guneysu, profiled power analysis attacks using convolutional neural networks with domain knowledge, proc. of the 25th international conference "selected areas in cryptography – sac 2018", calgary, ab, canada, 15–17 august 2018, pp. 479–498. doi: 10.1007/978-3-030-10970-7_22 [21] g. b. ratanpal, r. d. williams, t. n. blalock, an on-chip signal suppression countermeasure to power analysis attacks, ieee transactions on dependable and secure computing, 1(3) (2004), pp. 179–189. doi: 10.1109/tdsc.2004.25 [22] t. popp, s. mangard, e. oswald, power analysis attacks and countermeasures, ieee design & test of computers, 24(6) (2007), pp. 535–543. doi: 10.1109/mdt.2007.200 [23] c. herbst, e. oswald, s. mangard, an aes smart card implementation resistant to power analysis attacks, proc. of the 4th int. conf. on applied cryptography and network security acns 2006, singapore, 6-9 june 2006, pp. 239–252. doi: 10.1007/11767480_16 [24] b. s. yee, security metrology and the monty hall problem, in workshop on information security system rating and ranking (2001). [25] p. arpaia, l. callegaro, a. cultrera, a. esposito, m. ortolano, metrological characterization of consumer-grade equipment for wearable brain–computer interfaces and extended reality, ieee transactions on instrum. and measurement 71 (2021), pp. 1-9. doi: 10.1109/tim.2021.3127650 [26] p. arpaia, f. bonavolontà, a. cioffi, n. moccaldi, reproducibility enhancement by optimized power analysis attacks in vulnerability assessment of iot transducers, ieee transactions on instrum. and measurement 70 (2021), pp. 1–8. doi: 10.1109/tim.2021.3107610 [27] p. arpaia, f. bonavolontà, a. cioffi, n. moccaldi, power measurement-based vulnerability assessment of iot medical devices at varying countermeasures for cybersecurity, ieee transactions on instrum. and measurement 70 (2021), pp. 1–9. doi: 10.1109/tim.2021.3088491 [28] s. mangard, e. oswald, t. popp, power analysis attacks: revealing the secrets of smart card, springer science & business media 31 (2008) [29] p. kocher, j. jaffe, b. jun, p. rohatgi, introduction to differential power analysis, journal of cryptographic engineering, 1(1) 2021, pp. 5–27. doi: 10.1007/s13389-011-0006-y [30] riscure inspector sca. online [accessed 11 march 2023]. https://www.riscure.com/securitytools/inspector-sca/ [31] m. bucci, l. giancane, r. luzzi, m. marino, g. scotti, a. trifiletti, enhancing power analysis attacks against cryptographic devices, iet circuits, devices & systems 2(3) (2008), pp. 298–305. doi: 10.1049/iet-cds:20070166 [32] p. arpaia, f. bonavolontà, a. cioffi, problems of the advanced encryption standard in protecting internet of things sensor networks, measurement 161 (2020), art. no. 107853. 
doi: 10.1016/j.measurement.2020.107853

developments of interlaboratory comparisons on pressure measurements in the philippines

acta imeko
issn: 2221-870x
june 2023, volume 12, number 2, 1-6

mary ness i. salazar1,2
1 national metrology laboratory of the philippines – industrial technology development institute, dost, philippines
2 measurement science, kriss school, university of science and technology, daejeon, republic of korea

abstract
the national metrology laboratory of the philippines, industrial technology development institute under the department of science and technology (itdi-dost), offered the first interlaboratory comparison of its kind on pressure measurement in the country, following a demand survey identifying gaps in proving the competency of local calibration laboratories. this paper presents the developments in implementing two interlaboratory comparisons in the philippines, held six years apart and using the same pressure artefact. among its many objectives, the program also aimed to provide these laboratories with access to proficiency testing (pt) so that they could comply with the iso/iec 17025 requirements. comparatively, while both comparisons are considered successfully implemented, the improved awareness and commitment to quality of the participants and the enhanced competency of the pilot laboratory in implementing such an activity are a few of the factors behind the increase in the number of participants, as well as in the number of those obtaining satisfactory results, in the latter pt. the interlaboratory comparison schemes offered aim at sustaining the demands of metrology stakeholders and at continuously developing this service to support further progress in the calibration and measurement capabilities of local laboratories in the country.

section: research paper
keywords: pressure measurement; intercomparison; proficiency testing; interlaboratory comparison
citation: mary ness i. salazar, developments of interlaboratory comparisons on pressure measurements in the philippines, acta imeko, vol. 12, no. 2, article 36, june 2023, identifier: imeko-acta-12 (2023)-02-36
section editor: leonardo iannucci, politecnico di torino, italy
received january 13, 2023; in final form may 24, 2023; published june 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was supported by the laboratory proficiency evaluation program of the nml-itdi, dost.
corresponding author: mary ness i. salazar, e-mail: marynesssalazar@yahoo.com

1. introduction
the national metrology laboratory (nml) of the industrial technology development institute (itdi) under the department of science and technology (dost) conducted interlaboratory comparisons in the field of hydraulic pressure measurement among the local calibration laboratories in the philippines. the provision of interlaboratory comparisons, otherwise known as proficiency testing (pt), is a program to strengthen the nml's relationship with the said laboratories in establishing scientific metrology in the country. this pt program aims to [1], [2]: (a) determine the technical capabilities and performance of the laboratories; (b) assess the reliability of their measurement results and validate their calibration and measurement capabilities; (c) disseminate a harmonized and validated calibration procedure; (d) demonstrate metrological equivalence to the nml; and, most importantly, (e) provide access to interlaboratory comparisons for their compliance with the iso/iec 17025 [3] requirements. the pressure standards section (pss) of the nml acted as the program coordinator and reference laboratory, accredited under the terms of iso/iec 17025:2005 (2017 at present).
the pss was responsible for providing the artefact, the reference value and its measurement uncertainty, the monitoring of the program as a whole, and the preparation of written reports for the two intercomparisons being compared; one was conducted in 2010, while the other was in 2016. the outline of this paper is as follows. in section 2, the comparison process is discussed, followed by the measurement results of the participants. discussion of these results commences in section 4, and the evaluation of both pts is compared in section 5. finally, the concluding section summarizes the major accomplishments and developments based on these experiences.

abstract: the national metrology laboratory of the philippines, industrial technology development institute under the department of science and technology (itdi-dost), offered a first-of-its-kind interlaboratory comparison in pressure measurement in the country, following a demand survey identifying the gaps in proving the competency of local calibration laboratories. this paper presents the development in implementing the two interlaboratory comparisons in the philippines, held six years apart and using the same pressure artefact. among its many objectives, the program also aimed to provide these laboratories access to proficiency testing (pt) to comply with the iso/iec 17025 requirements. comparatively, while both are considered successfully implemented, the improved awareness and commitment to quality of the participants and the enhanced competency of the pilot laboratory in implementing such an activity are a few of the factors behind the increase in the number of participants, as well as in the number obtaining satisfactory results, in the latter pt. the interlaboratory comparison schemes offered aim at sustaining the demands of metrology stakeholders and at continuously developing this service to support further progress in the calibration and measurement capabilities of local laboratories in the country.

2. comparison process the two intercomparisons followed an almost similar process; the differences, however, are emphasized in this paper, specifically the contributing factors that affected the results of the two schemes. the first pt scheme is referred to as 2010 and the second as 2016 throughout the discussion. the interlaboratory comparison program was designed as a cycle where the nml, as the reference or pilot laboratory, calibrates the artefact at the program's beginning, middle, and end. one main difference between 2010 and 2016 is the conduct of a preparatory workshop before the start of the pt program. this workshop proved essential in getting to know each laboratory's capabilities, standards, and the procedures they follow in calibrating pressure gauges. also, in this workshop, an agreement between the participants and the nml was reached, fulfilling the objective of disseminating a harmonized and validated calibration procedure and identifying the limitations of the participants and of the pt program in general. 2.1. participants participants in the two pts are local calibration laboratories, with the nml as the reference laboratory. in 2010, the five (5) participants were all private laboratories in metro manila, the capital of the philippines. in 2016, the 16 participants comprised private and government laboratories, including some dost regional metrology laboratories from different provinces in the philippines. 2.2. artefact
the artefact (figure 1 and table 1) used in 2010 and 2016 is a bourdon-tube-type pressure gauge. it was noted that in 2010 two artefacts of different ranges were measured by the participants, but in this paper the comparisons are based on the one artefact used in both pts, described in table 1. the artefact was subjected to initial and subsequent characterization before the pts and maintained a regular interval of calibration and intermediate checks when not used as an artefact. 2.3. calibration method the participants were asked to calibrate the artefact through direct comparison to their standard. in 2010, each participant used the typical calibration procedure in their laboratory, that is, their laboratory-developed method. meanwhile, in 2016, after the earlier mentioned preparatory workshop, it was agreed among the participants to follow an international guideline, the dkd-r 6-1 calibration of pressure gauges [4], which guided not only the calibration procedure but also the computation of measurement uncertainty. the nml uses the said guideline. 2.4. measurement scheme the nml chose a measurement scheme to monitor the artefact's metrological quality throughout the pt process. in 2010, the artefact was calibrated before and after a trip to a participant, as shown in figure 2. it was hand-carried to and from each participating laboratory by its representative. in 2016, however, since there were more participants than in 2010 and some were outside metro manila, the nml calibrated the artefact before and after a group of participants, usually 2 to 3 laboratories, strategically chosen based on location so that the sending of the artefact back and forth to the nml was made most efficient. figure 3 shows this scheme.

figure 1. the artefact.

table 1. technical specification of the artefact.
manufacturer: ashcroft
serial number / identification: s2-w-006
measuring range: 25 000 kpa
scale division: 100 kpa
accuracy: 0.25 %
medium: liquid

figure 2. pt 2010 measurement scheme.
figure 3. pt 2016 measurement scheme.

2.5. report of the participants in 2010, the participants were only asked to submit the filled-out nml-provided measurement datasheet and the calibration certificate they usually issue to their customers. other information, such as the uncertainty budget, was only provided when required by the nml. this exchange of information could have been more efficient; it consequently caused delays in data evaluation and in publishing the final report. hence, the nml changed this practice in 2016: the participants had to submit the measurement datasheets, a copy of the calibration certificate of their standard proving valid traceability, their usual calibration report, and the uncertainty budget. this change improved the data evaluation process for the nml, making it easier and faster since information transparency was present from the beginning. 2.6. reference values in both pts, the reference values used in evaluating the normalized error ($E_n$) for each participant were based on the values nearest the participant's reported results, either before or after the calibration by the reference laboratory. in 2016, this result was given to participants in an interim report, showing only the specific participant's results compared to the nml's.
this interim report was beneficial for participants having to prove their competence to technical peers during their assessment while the intercomparison was not yet completed. it should be noted, however, that in the final report the reference values reflected are the weighted average of all the measurement results of the nml. 3. measurement results the measurement results of the participating laboratories are evaluated using the earlier mentioned normalized error, or $E_n$ ratio [5], calculated using the equation:

$$E_n = \frac{x_\mathrm{lab} - x_\mathrm{ref}}{\sqrt{U_\mathrm{lab}^2 + U_\mathrm{ref}^2}} \,, \quad (1)$$

where $x_\mathrm{lab}$ is the measured value of the participating laboratory, $x_\mathrm{ref}$ is the reference value, and $U_\mathrm{lab}$ and $U_\mathrm{ref}$ are the expanded uncertainties ($k = 2$) of the participant's measured value and of the reference value, respectively. the reference value in this equation is the deviation of the artefact reading from the nml's applied pressure at the nominal calibration points. similarly, the measured value of the participating laboratory is the deviation of their reported value from the nominal calibration points. this practice ensures the uniformity of the values to be compared. moreover, the expanded uncertainties were reported with a coverage factor of k = 2, indicating a confidence level of approximately 95 %. figure 4 shows the performance of the participants in the two pts. more participants joined in 2016, with 88 % (14 out of 16) satisfactory performance compared to 40 % (2 out of 5) in 2010. two participants joined both pts; one performed better in 2016, while the other still failed. in 2010, four out of the five participants were able to calibrate the artefact over its full measuring range, while one participant did not submit a result for the two highest points. meanwhile, in 2016, all the participating laboratories were able to calibrate the artefact as a whole. table 2 and table 3 show the $E_n$ values of the participants in 2010 and 2016, respectively.

figure 4. participants' performance in the two pts (number of satisfactory and unsatisfactory participants per year).
table 2. summary of 2010 participants' $E_n$ values relative to the reference values.
table 3. summary of 2016 participants' $E_n$ values relative to the nominal pressure values.

4. discussion of results a laboratory's performance was determined to be satisfactory when $|E_n| \le 1$ at all the prescribed calibration points. in 2010, only 95 % of the total calibration points with $|E_n| \le 1$ were required for a laboratory's performance to be considered satisfactory, but this was corrected to 100 % of the calibration points in 2016. some participants interpreted the 95 % confidence level used in the uncertainty budget estimate as also applicable to the interlaboratory comparison result, thus assuming that they did not need to perform well at all the measurement points since there was a 5 % margin of error. the nml had to explain that this 5 % margin of error cannot be tolerated in the measurement procedure, since the basic requirement of the pt was to calibrate the artefact as a whole, and any doubt in the procedure must be accounted for in the uncertainty budget and not in the measurement value. also, a 95 % satisfactory performance would not even be possible with the prescribed calibration points, since any one point already represents 10 % of the artefact's ten calibration points. it is, however, emphasized that the laboratory's performance is considered satisfactory only through the $E_n$ score.
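as a concrete illustration of equation (1) and of the $|E_n| \le 1$ criterion, the following minimal python sketch may help; it assumes numpy is available, and the numerical values are illustrative placeholders, not data from the pt reports:

import numpy as np

# deviations from the nominal calibration points, in kpa (illustrative values only)
x_lab = np.array([10.0, -20.0, 30.0])    # participant's reported deviations
x_ref = np.array([0.0, -10.0, 40.0])     # reference (nml) deviations
U_lab = np.array([60.0, 60.0, 60.0])     # participant's expanded uncertainty (k = 2)
U_ref = np.array([58.0, 58.0, 58.0])     # reference expanded uncertainty (k = 2)

# normalized error, equation (1)
E_n = (x_lab - x_ref) / np.sqrt(U_lab**2 + U_ref**2)

# performance is satisfactory only if |E_n| <= 1 at every calibration point
print(E_n, bool(np.all(np.abs(E_n) <= 1)))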
accordingly, so as not to abuse the said acceptance criteria, it was later suggested that a limiting value for the uncertainty be set, and most of the participants followed it in 2016. for illustration purposes, figure 5 and figure 6 show the participants' deviations from the reference value, with the corresponding uncertainties, at the non-zero minimum and at the maximum calibration point, respectively. the 2010 participants are represented by blue markers and labeled alphabetically (lab a to lab e), while the 2016 participants are represented by green markers and labeled with numbers (lab1 to lab16). comparing the participants' performance in the two pts, all the participants performed satisfactorily at the minimum calibration point in 2010, as opposed to those in 2016, with one participant whose value, if not its whole uncertainty interval, already lies beyond the limit. differences in the computed uncertainty values depended mainly on the standard used, primarily a digital pressure gauge; only a few used a pressure calibrator or a deadweight tester. moreover, in 2010, while the nml prescribed a guideline for measurement uncertainty calculations, most participating laboratories estimated the expanded uncertainties using their own laboratory procedure and technique, with the notion that declaring a low uncertainty means better performance.

figure 5. participants' deviation from the reference values and the corresponding uncertainties at 2 500 kpa.
figure 6. participants' deviation from the reference values and the corresponding uncertainties at 25 000 kpa.

contrastingly, in 2016, all the participating laboratories followed the agreed guideline on procedure and uncertainty estimates, with the resolution of the artefact as the lower limit. this agreement ensured that measurement uncertainties were neither over- nor under-estimated. a sample calculation of the recommended minimum components of measurement uncertainty of the pilot laboratory is shown in table 4. the minimum components of measurement uncertainty were enumerated as the factors coming from the standard, the height difference between the standard and the artefact (the so-called hydrostatic pressure effect), and the artefact's resolution, zero deviation, hysteresis, and repeatability. the expanded uncertainty, as earlier mentioned, was evaluated at a 95 % level of confidence with the coverage factor k = 2. the uncertainty from the standard was taken from the most recent calibration certificate of the artefact before the start of the interlaboratory comparison. the uncertainty due to the hydrostatic pressure effect was evaluated but found negligible, because the height difference between the reference pressure points of the standard and of the artefact during calibration was minimized, if not eliminated. similarly, the uncertainty due to zero deviation was also found negligible due to the behaviour of the artefact. the major contributor to the uncertainty of the reference value was the resolution of the artefact, which was agreed to be the limiting factor of the uncertainty evaluation for all the participants, including the pilot laboratory. this decision accommodated the limited accuracy capability of some participants, which cater mainly to industrial calibrations. the uncertainties due to hysteresis and repeatability are the maximum values among the measurements performed by the nml. the nml used these factors in both the 2010 and the 2016 pt.

table 4. the nml's calculation of the recommended minimum measurement uncertainty components. calibration item: test gauge; measurement range: 25 000 kpa; scale division: 100 kpa; resolution: 50 kpa; uncertainty of standard used: 4.1 kpa; coverage factor (k): 2. all values in kpa.

nominal pressure | standard | height diff. | resolution | zero dev. | hysteresis | repeatability | expanded uncertainty
0      | 2.6 | 0.0 | 28.9 | 0.0 | 0.0 | 0.0 | 58
2 500  | 2.6 | 0.0 | 28.9 | 0.0 | 0.2 | 0.1 | 58
5 000  | 2.6 | 0.0 | 28.9 | 0.0 | 0.2 | 0.3 | 58
7 500  | 2.6 | 0.0 | 28.9 | 0.0 | 0.1 | 0.1 | 58
10 000 | 2.6 | 0.0 | 28.9 | 0.0 | 0.3 | 0.8 | 58
12 500 | 2.6 | 0.0 | 28.9 | 0.0 | 0.1 | 0.1 | 58
15 000 | 2.6 | 0.0 | 28.9 | 0.0 | 0.2 | 0.1 | 58
17 500 | 2.6 | 0.0 | 28.9 | 0.0 | 0.1 | 0.1 | 58
20 000 | 2.6 | 0.0 | 28.9 | 0.0 | 0.0 | 0.2 | 58
22 500 | 2.6 | 0.0 | 28.9 | 0.0 | 0.1 | 0.0 | 58
25 000 | 2.6 | 0.0 | 28.9 | 0.0 | 0.0 | 0.1 | 58
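as a worked check of how the components in table 4 combine, a minimal sketch follows; it assumes the components are uncorrelated and simply root-sum-squared, which reproduces the tabulated 58 kpa expanded uncertainty (the dictionary keys are just labels, not the nml's actual worksheet):

import math

# standard uncertainty components at the 10 000 kpa point, in kpa (from table 4)
components = {
    "standard": 2.6,
    "height difference": 0.0,
    "resolution": 28.9,     # 50 kpa resolution as a rectangular limit: 50/sqrt(3)
    "zero deviation": 0.0,
    "hysteresis": 0.3,
    "repeatability": 0.8,
}

u_c = math.sqrt(sum(u**2 for u in components.values()))   # combined standard uncertainty
U = 2 * u_c                                               # expanded uncertainty, k = 2
print(round(U))                                           # -> 58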
5. evaluation of comparisons comparing the two pts as a whole, the defined factors affecting the performance of the participating laboratories are summarized in figure 7. the earlier mentioned preparatory workshop, not held in 2010 but conducted in 2016, proved to be one key factor that led to the increased satisfactory performance of the participants. in 2016, four laboratories could not attend this workshop; however, two of them asked for details after the event and strictly followed the agreed procedure, one obtained a satisfactory result but did not follow the agreed uncertainty limit, and one did not perform satisfactorily. in both pts, the main objective of the laboratories' participation was to fulfil the iso/iec 17025 requirement for pt participation. however, the urgency of this requirement was only partially realized by the 2010 participants, since accreditation to iso/iec 17025 was still reasonably new in the country at that time. moreover, most 2010 participants had inexperienced or untrained personnel and needed to familiarize themselves with the calibration method used by the nml. contrastingly, in 2016, most laboratories had trained personnel and proven competencies in scopes other than the pressure field. their submitted results showed that knowledge of estimating uncertainty budgets had also significantly improved through training. the latter pt also showed an improved selection of standards and upgraded facilities by the laboratories. it is most probable that the 2010 pt was considered a test run by the participating laboratories: two laboratories participated satisfactorily in both, one improved in 2016, one still performed unsatisfactorily, and one did not continue as a calibration laboratory; the latter is also the participant that did not complete the measurements due to an inappropriate standard. the 2016 participants, on the other hand, are mostly already maintaining their iso/iec 17025 accreditation or are in the process of acquiring their certification in the pressure scope, supported by this intercomparison. in both pts, the nml recommended that all the laboratories with unsatisfactory performance review their calibration method and uncertainty budget analysis, investigate sources of error leading to unsatisfactory results, and initiate corrective actions.
the nml, on the other hand, as the reference laboratory, continuously improves as a pt provider, learning from experience in everything from the handling of the artefact to the data analysis most appropriate to all participants. consequently, the nml extended its pt offerings regularly, with different pressure ranges and other fields of measurement; the conduct of a concluding workshop was also planned for the succeeding pts. furthermore, the nml coordinates with the local accreditation body as the channel through which the demands for pt in the country inform the nml's plans for pt provision; in return, the laboratories are made aware of the nml's pt offerings. the availability of artefacts is still the most significant limitation of the pt provision, but it is hoped that this will be resolved to cope with the ongoing demand for pt. currently, the laboratories' commitment to quality, supported by courses and training not only in technical matters but also in quality management systems, contributes to the laboratories' handling of intercomparisons, leading to satisfactory performance. the participants have become more aware of good laboratory practices and are encouraged to improve continuously through refresher and new metrology awareness courses. 6. summary and conclusion the intercomparisons, 2010 and 2016, are two independent pts and are generally considered successful in terms of results, coordination, and the experience gained by the participants and the reference laboratory. the measurement results revealed the calibration and measurement capabilities of each participating laboratory. based on the requirements of iso/iec 17043:2010, the performances are mostly satisfactory in terms of the $E_n$ values, especially in the 2016 pt. this also indicates that the measurement practices of these participants significantly improved and are aligned and compliant with an internationally validated method. the pt schemes offered by the nml will be continuously improved to support further progress of the local calibration laboratories in the philippines. acknowledgement funding support from dost and the laboratory proficiency evaluation program (lpep) of the nml, itdi-dost is gratefully acknowledged. the lpep was initially funded by the philippine council for industry, energy and emerging technology research and development (pcieerd), making it possible to acquire the artefact used in these intercomparisons. references [1] mary ness i. salazar, final report for pressure test gauge calibration interlaboratory comparison 07-2010-ptilc-0042, nml, itdi-dost, philippines, 2011. unpublished results. [2] mary ness i. salazar, interlaboratory comparison on hydraulic pressure standards 01-2016-ptilc(pres)-0002m, nml, itdi-dost, philippines, 2016. unpublished results. [3] international organization for standardization (2017) iso/iec 17025:2017 general requirements for the competence of testing and calibration laboratories. iso, geneva, switzerland. [4] ptb deutscher kalibrierdienst, guideline dkd-r 6-1 calibration of pressure gauges. dkd, edition 03/2014. [5] international organization for standardization (2010) iso/iec 17043:2010 conformity assessment – general requirements for proficiency testing. iso, geneva, switzerland.

figure 7. defined factors affecting participants' performance (number of participants per factor, 2010 vs. 2016: preparatory workshop attendance, quality management system, training of personnel, following international guideline, use of appropriate standard).
acta imeko issn: 2221-870x june 2023, volume 12, number 2, 1-9

a method to consider a maximum admissible risk in decision-making procedures based on measurement results

alessandro ferrero1, harsha vardhana jetti1, sina ronaghi2, simona salicone1 1 dipartimento di elettronica, informazione e bioingegneria (deib), politecnico di milano, italy 2 dipartimento di energia (deng), politecnico di milano, italy

section: research paper keywords: measurement uncertainty; threshold; decision making; risk of wrong decision; maximum admissible risk citation: alessandro ferrero, harsha vardhana jetti, sina ronaghi, simona salicone, a method to consider a maximum admissible risk in decision-making procedures based on measurement results, acta imeko, vol. 12, no. 2, article 41, june 2023, identifier: imeko-acta-12 (2023)-02-41 section editor: laura fabbiano, politecnico di bari, italy received march 17, 2023; in final form may 31, 2023; published june 2023 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: simona salicone, e-mail: simona.salicone@polimi.it

1. introduction measurement results are very often used as input elements in decision-making procedures, which represent the core element of conformity assessment. this is a very critical task in many fields, from the industrial one, where the conformity of a product's feature to given specifications must be assessed, to environmental protection, health, legal and forensic ones, where decisions are generally related to checking that the presence of a substance (a pollutant, a drug, etc.) or the error of an instrument does not exceed a given threshold or maximum admissible limit. most decisions – if not all of them – are taken by comparing a measurement result with a threshold or a range of admissible values, where the threshold, or the upper and lower limits of the range, are given as simple quantity values [1]. then, according to where the measurement result is located with respect to the threshold or the range, a decision is taken on whether conformity can be declared or not. if measurement uncertainty is not considered, or if it can be assumed to be negligible, this decision can be easily taken by comparing two numerical values: the measured value with the threshold (as shown in figure 1), or the measured value with the upper and lower limits of the range. figure 1 shows that, in such a situation, the decision is apparently taken with no risk of being wrong. however, even if measurement uncertainty has been evaluated and found to be negligible, a risk of wrong decision still exists, because it is widely recognized [2] that "when all of the known or suspected components of error have been evaluated and the appropriate corrections have been applied, there still remains an uncertainty about the correctness of the stated result, that is, a doubt about how well the result of the measurement represents the value of the quantity being measured".
it is also well-known, according to the gum [2], that in many applications "it is often necessary to provide an interval about the measurement result that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to the quantity subject to measurement. thus, the ideal method for evaluating and expressing uncertainty in measurement should be capable of readily providing such an interval, in particular, one with a coverage probability or level of confidence that corresponds in a realistic way with that required". when measurement uncertainty is taken into account, again a decision about conformity can be readily taken if the coverage interval is completely below or above the threshold (figure 2).

abstract: measurement uncertainty plays a very important role in ensuring the validity of decision-making procedures, since it is the main source of incorrect decisions in conformity assessment. the guidelines given by the present standards allow one to take a decision of conformity or non-conformity, according to the given limit and the measurement uncertainty associated with the measured value. due to measurement uncertainty, a risk of a wrong decision is always present, and the standards also give indications on how to evaluate this risk, although they mostly refer to a normal probability density function to represent the distribution of values that can be reasonably attributed to the measurand. since such a function is not always the one that best represents this distribution of values, this paper considers some of the most often used probability density functions and derives simple formulas to set the acceptance (or rejection) limits in such a way that a pre-defined maximum admissible risk is not exceeded.

on the other hand, the situation represented in figure 3 appears to be quite critical, since the threshold falls inside the coverage interval representing the fraction of the distribution of values that could reasonably be attributed to the quantity subject to measurement (the measurand). this means that there is a probability that some of the values that could reasonably be attributed to the measurand might be greater than threshold $t$, even if the measured value $x_m$ is lower than the threshold. this also means that, if conformity shall be assessed when the measurand is lower than the threshold, a risk exists of declaring the measurand conforming while it is not, and this risk can be evaluated starting from measurement uncertainty [3]. conformity assessment involves, therefore, a decision-making process affected by uncertainty. such a problem has been widely covered in the literature [4]-[6], mostly by taking epistemic uncertainty into account [7]. however, when the input elements to a decision-making process are measurement results, uncertainty takes a well-defined meaning, defined by the vim [1] and the gum [2], and such a definition and the related evaluation methods cannot be disregarded when evaluating the risk of wrong conformity assessment, as clearly shown in [8]-[13]. this problem is covered by the bipm document jcgm 106:2012 [14] in a very extensive way, under a strict metrological perspective, and treating uncertainty according to the gum recommendations [2].
in particular, it covers the problem of stating whether a measured quantity falls inside a given tolerance interval, which is defined in [14] as the "interval of permissible values of a property". according to the above definition, the tolerance interval can be both a closed interval and a one-sided interval. furthermore, document [14] defines acceptance limits in such a way that, given a measurement uncertainty value, the measurand is declared conforming if the measured value falls inside the acceptance limits and non-conforming when it falls outside these limits. the document considers different decision rules and the way to evaluate the associated risk of incorrect assessment starting from measurement uncertainty. hence, it represents a very useful guide in evaluating the probability of declaring as conforming an item that is not, and vice versa. although this problem is well discussed in [14] from a theoretical perspective, little guidance is provided, from a more practical point of view, on how to set the numerical value of the acceptance limit so as not to exceed the maximum admissible risk of making a wrong decision (once the measurement uncertainty and the maximum admissible risk are given). this is an important issue when dealing with critical measurements, such as those performed to protect health and the environment. this paper, after having quickly reviewed the most used decision rules, proposes a method that, given a threshold (or, more in general, a tolerance limit), provides the acceptance limit as a function of measurement uncertainty and a predefined maximum admissible risk of exceeding the given threshold. examples are given for some of the most used probability distributions. 2. the most common decision rules to correctly evaluate the risk associated with decision rules, it is necessary to identify or assume the probability density function (pdf) representing the distribution of values that could reasonably be attributed to the measurand [2], since this risk can be evaluated only after integrating such a pdf from $-\infty$ to the threshold [2], [3]. it is well known that, according to the gum [2], the standard uncertainty $u(x)$ associated with a measurement result $x$ is the standard deviation of the pdf representing the distribution of values that could reasonably be attributed to the measurand. on the other hand, the expanded uncertainty $U(x) = k\,u(x)$ identifies a coverage interval $[x - U(x);\, x + U(x)]$, built about the numerical value $x$ of the measurement result, whose coverage probability depends on the assumed probability density function and the considered coverage factor $k$. it is also worth reminding that the pdf representing the distribution of values that could reasonably be attributed to the measurand depends on the available information. it is generally – and wrongly – considered that the available information comes only from the employed measuring equipment [15], while document jcgm 106:2012 [14] states that such information always has two components: the one available before performing the measurement (called prior information) and the additional information supplied by the measurement. the resulting, or posterior, pdf can be obtained by applying bayes' theorem [14]. keeping in mind the above considerations, it is possible to consider and discuss the two most common and employed decision rules in conformity assessment. it is assumed that the pdfs considered in the following sections are always the posterior pdfs.
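as a side illustration of how a posterior pdf can arise (an assumption of this note, not an example taken from [14]), the following minimal sketch combines a normal prior with a normal measurement likelihood; for this conjugate pair, the posterior is again normal, with a precision-weighted mean:

import math

# normal prior (information available before the measurement): illustrative values
mu_prior, u_prior = 48.0, 4.0
# normal likelihood (information supplied by the measurement): illustrative values
x_meas, u_meas = 45.0, 5.0

# bayes update for the conjugate-normal case: precisions (1/variance) add
w_prior, w_meas = 1 / u_prior**2, 1 / u_meas**2
mu_post = (w_prior * mu_prior + w_meas * x_meas) / (w_prior + w_meas)
u_post = math.sqrt(1 / (w_prior + w_meas))
print(mu_post, u_post)   # posterior mean and standard uncertainty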
2.1. decision rule based on simple acceptance this rule, also known as shared risk, considers accepting as conforming (and rejecting otherwise) an item whose property has a measured value inside the tolerance interval. in this case, uncertainty is not explicitly considered. in mathematical terms, and assuming that the tolerance interval is given by all values lower than or equal to threshold $t$, an item is accepted as conforming if the measured value $x_m$ of property $x$ satisfies the condition $x_m \le t$. let us make a few considerations about this decision rule. it can be readily checked that, assuming a symmetrical pdf about $x_m$ for the values that could reasonably be attributed to the measurand, the highest probability of exceeding the threshold is obtained in the limit case of $x_m = t$ and is 50 %, regardless of the evaluated uncertainty value and of the pdf. therefore, when this decision rule is applied, measurement uncertainty does not affect the risk: reducing uncertainty only decreases the width of the interval of non-conforming values $x_{nc}$ that are considered as conforming, but does not reduce the risk of misidentifying non-conforming items as conforming, which still remains 50 % (when $x_m = t$). to define a maximum width of the interval of non-conforming values that are considered as conforming, a mutually agreed maximum acceptable expanded uncertainty $U_{max}$ is generally set, and it is therefore suggested that the expanded uncertainty $U$ associated with the measured value, for a coverage factor $k = 2$, must satisfy $U \le U_{max}$ [14]. 2.2. decision rule based on guarded acceptance / rejection the simple acceptance rule reported in sec. 2.1 shows that the closer the measured value is to the threshold, the higher is the probability (up to 50 %) of accepting as conforming an item that is not, and vice versa [14]. this probability can be reduced by setting an acceptance limit inside the tolerance interval, as suggested by [14] and as shown in figure 4, when, respectively, the measured value is required to be lower than or equal to a given threshold ($x_m \le T_U$), as in figure 4a, and the measured value is required to be within a closed interval ($T_L \le x_m \le T_U$), as in figure 4b. figure 4 represents the case of guarded acceptance [14], that is, the decision rule for which the risk of accepting a non-conforming item is reduced by setting an acceptance limit $A_U$ inside the tolerance interval (see sec. 8.3.2 in [14]). according to this rule [14], if the tolerance interval is a one-sided interval, upper limited by $T_U$ (figure 4a), an acceptance limit $A_U$ is set inside the tolerance interval. the interval between $A_U$ and $T_U$ (highlighted in yellow in figure 4a) is called the guard band, and its width (with sign) is defined as [14]:

$$w = T_U - A_U \,. \quad (1)$$

in the case of figure 4a, it is $w > 0$. on the other hand, in the case of a two-sided tolerance interval, two acceptance limits $A_L$ and $A_U$ are set, as shown in figure 4b. in this case, two guard bands are obtained, whose widths are defined as $w_U = T_U - A_U > 0$ and $w_L = T_L - A_L < 0$, respectively.

figure 1. comparison of a measured value with a threshold, when the measured value $x_m$ is lower (a) and greater (b) than threshold $t$ and measurement uncertainty is not taken into account.
figure 2. comparison of an uncertainty interval with a threshold, when the uncertainty interval is completely below (a) and above (b) threshold $t$.
figure 3. comparison of an uncertainty interval with a threshold, when the threshold falls within the interval.
from figure 4, it can be concluded that, when a guarded acceptance decision rule is considered, an acceptance interval smaller than the tolerance interval is obtained. this decision rule hence favours increasing the probability that an accepted item is truly conforming. for the sake of completeness, let us consider that, with respect to the two cases shown in figure 4 ($x_m \le T_U$ and $T_L \le x_m \le T_U$), two other cases exist, that is: • $x_m \ge T_L$: in this case, considering guarded acceptance, $A_L$ is at the right of $T_L$ and $w = T_L - A_L < 0$; • $x_m \le T_L \cup x_m \ge T_U$: in this case, considering guarded acceptance, $A_U$ is at the right of $T_U$ and $A_L$ is at the left of $T_L$, so that $w_U < 0$ and $w_L > 0$, and the obtained acceptance interval is smaller than the tolerance interval. a similar, though opposite, situation is obtained in the case of guarded rejection [14]. in fact, this decision rule favours increasing the probability that a rejected item is truly non-conforming [14]. in this case, acceptance limits are set providing acceptance intervals greater than the tolerance interval. without entering into details, as an example, considering again figure 4a, if the guarded rejection decision rule were applied, then the acceptance limit would be at the right of $T_U$, thus providing a wider acceptance interval. in general, $|w|$ is set as a multiple of the expanded uncertainty: $|w| = r\,U$ [14]. if the pdf representing the distribution of values that could reasonably be attributed to the measurand is known or assumed, it is also possible to evaluate the risk of declaring a non-conforming value as conforming (or vice versa), as shown in figure 5 for the case $x_m \le T_U$, when a normal pdf is considered and $|w| = U = 2u$ is taken, as suggested by [16]. in particular, figure 5a and figure 5b represent, respectively, the decisions of guarded acceptance and guarded rejection.

figure 4. decision rule based on guarded acceptance. in figure 4a a one-sided tolerance interval, upper limited by $T_U$, is considered, while in figure 4b a two-sided tolerance interval between a lower and an upper limit $T_L$ and $T_U$ is considered.
figure 5. example when $x_m \le T_U$ and a normal pdf is supposed. the standard uncertainty $u(x)$ and the maximum admissible limit $T_U$ (red line) are given in table 1, and $|w| = U(x) = 2\,u(x)$ is supposed. the coloured area represents the probability of exceeding $T_U$ when the measured value corresponds to $A_U$, in the cases of guarded acceptance (a) and guarded rejection (b).

to understand the relationship between $w$ and the risk of wrong decision, let us consider the numerical example in figure 5, where the maximum admissible limit (mal, or, employing the same notation as [14], $T_U$) for a pollutant in water is assumed to be 50 mg/l. the pollutant concentration is assumed to be measured with a standard uncertainty of 5 mg/l, and the pdf representing the distribution of values that could reasonably be attributed to the measurand is assumed to be normal, as summarized in table 1. according to these values, the expanded uncertainty with $k = 2$ is $U = 10$ mg/l. a guard band $w = U = 10$ mg/l is considered, so that an acceptance limit $A_U = T_U - w = 40$ mg/l is set when guarded acceptance is considered. therefore, the concentration of the considered pollutant in water is considered as conforming for every measured value $x_m \le A_U$.
if $x_m = A_U$, the situation shown in figure 5a is obtained, in which the red line is located on $T_U$. since a coverage factor $k = 2$ has been considered, the coverage probability of the interval $[x_m - U;\, x_m + U]$ is $p = 95.45\,\%$. therefore, the risk of exceeding $T_U$, that is, the probability $p_w$ of taking the wrong decision when this decision rule is adopted, is $p_w = \frac{1-p}{2} = 2.28\,\%$, independently of $u$. of course, if $x_m < A_U$, then $p_w < 2.28\,\%$. on the other hand, if the aim of the measurement procedure is to assess, with high probability, that the pollutant concentration is higher than $T_U$, the acceptance limit should be set, according to [14], at $A_U = T_U + w$. with the same numerical values and assumptions as before, this means that $A_U = 60$ mg/l. therefore, the concentration of the considered pollutant in water is considered as non-conforming for every measured value $x_m \ge A_U$. if $x_m = A_U$, the situation shown in figure 5b is obtained, in which the red line is again located on $T_U$. in this case, the risk of declaring that the pollutant exceeds the tolerance limit $T_U$ while it does not is again $p_w = 2.28\,\%$; on the other hand, the probability that the pollutant is above the limit is, obviously, 97.7 %.
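these risk figures are easily checked numerically; a minimal sketch, assuming scipy is available (the numbers are those of table 1; norm.sf is scipy's survival function, i.e. $1 - F_X$):

from scipy.stats import norm

T_u, u = 50.0, 5.0    # tolerance limit and standard uncertainty, mg/l

# guarded acceptance: measured value on the acceptance limit A_u = T_u - 2u = 40 mg/l
p_w_acceptance = norm.sf(T_u, loc=40.0, scale=u)   # P(X > T_u)
# guarded rejection: measured value on the acceptance limit A_u = T_u + 2u = 60 mg/l
p_above_limit = norm.sf(T_u, loc=60.0, scale=u)    # P(X > T_u)

print(round(100 * p_w_acceptance, 2))   # -> 2.28 %
print(round(100 * p_above_limit, 1))    # -> 97.7 %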
3. the relationship among uncertainty, acceptance limit and the maximum admissible risk the example shown in the previous section relates uncertainty, acceptance limit and the risk of wrong conformity assessment in an implicit way, since it assumes that the distribution of the values that could reasonably be attributed to the measurand obeys a normal pdf and the generally used coverage factor $k = 2$ is considered. these assumptions lead to the well-known 2.28 % risk of wrong decision. however, different situations, with different pdfs and different values for the acceptance limits, may occur in practical cases, where different values might also be desired for the maximum admissible risk (mar) of wrong decisions. therefore, a general formulation relating uncertainty, acceptance limit and mar would be very useful to obtain one of them, given the other two. document jcgm 106:2012 [14] provides some very general indications on how to do this, mostly referring to a normal pdf. attempts were made in the past, especially in the legal metrology domain [17], to set the acceptance limits in such a way that, given the measurement uncertainty, a pre-defined risk could be granted. however, to the authors' knowledge, no practical indications are available to relate acceptance limits, measurement uncertainty and risk in such a way that, having set two of them, the third one can be found. such a relationship can be obtained starting from the pdf $p(x)$ of the distribution of values that could reasonably be attributed to measurand $x$. having defined such a pdf, the pertaining cumulative probability distribution function (cdf) can be obtained as:

$$F_X(x) = \int_{-\infty}^{x} p(t)\,\mathrm{d}t \,. \quad (2)$$

it can be readily checked that $F_X(x)$ represents the probability that variable $X$ is lower than $x$; similarly, $1 - F_X(x)$ represents the probability that variable $X$ is greater than $x$. therefore, using the same notation as in [14], in the general case shown in figure 4b, given a cdf $F_X(x)$, a tolerance limit $T_{U,L}$, and a mar, if, for the considered measurable property, the measured value must be below the given tolerance limit $T_U$, the following inequality must be satisfied:

$$F_X(T_U) \ge 1 - \mathrm{MAR} \,, \quad (3)$$

while, if the measured value must be above the given tolerance limit $T_L$, the following inequality must be satisfied:

$$F_X(T_L) \le \mathrm{MAR} \,. \quad (4)$$

therefore, one of the following two equations must be solved to get the value of the acceptance limit $A_U$ (or $A_L$) that ensures that the probability of exceeding the tolerance limit $T_U$ (or $T_L$) is exactly equal to the mar:

$$A_U \,|\, F_X(T_U) = 1 - \mathrm{MAR} \,, \quad (5)$$

when the measured value must be below the threshold, or

$$A_L \,|\, F_X(T_L) = \mathrm{MAR} \,, \quad (6)$$

when the measured value must be above the threshold. these acceptance limit values ensure that, respectively, if $x_m \le A_U$ ($x_m \ge A_L$), then $p_w \le \mathrm{MAR}$, where $x_m$ is the measured value and $p_w$ is the probability of exceeding the tolerance limit, that is, the probability of wrong decision. of course, solving these equations is strictly related to the shape of the pdf associated with the measurement result, and a solution cannot always be found in closed form. this does not prevent, however, the application of this method, because a numerical solution can be obtained by means of a monte carlo simulation, following the recommendations provided by supplement 1 to the gum [18]. on the other hand, the vast majority of practical cases consider normal, uniform, triangular or trapezoidal pdfs (a normal pdf is generally obtained when the combined standard uncertainty is obtained as a combination of a sufficiently high number of contributions, so that the central limit theorem applies, as suggested by the gum [2]; triangular and trapezoidal pdfs are generally obtained when two uniform pdfs are linearly combined, as in many practical measurement applications). in such cases, a closed-form solution can be readily obtained for $A_U$ (or $A_L$), and, hence, the normal, uniform, triangular and trapezoidal pdfs are considered in the following.

table 1. numerical example: maximum admissible limit $T_U$ = 50 mg/l; standard uncertainty $u$ = 5 mg/l; pdf type: normal.

3.1. the measurement results distribute according to a normal posterior pdf when a normal pdf is considered:

$$p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, \mathrm{e}^{-\frac{(x-\mu)^2}{2\sigma^2}} \,, \quad (7)$$

where $\mu$ is the mean value and $\sigma$ is the standard deviation, the corresponding cdf is given by:

$$F_X(x) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{x-\mu}{\sqrt{2}\,\sigma}\right)\right] \,, \quad (8)$$

where:

$$\operatorname{erf}(z) = \frac{2}{\sqrt{\pi}} \sum_{n=0}^{\infty} \frac{(-1)^n\, z^{2n+1}}{n!\,(2n+1)} \quad (9)$$

is the error function, which can be well approximated with no more than 10 terms in (9). $F_X(T_U)$ can, therefore, be written as:

$$F_X(T_U) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{T_U-\mu}{\sqrt{2}\,\sigma}\right)\right] \,, \quad (10)$$

while $F_X(T_L)$ can be similarly obtained. in the above equation, $\mu$, the mean value of the normal pdf, represents the measured value $x_m$ of the measurand. therefore, if we want to find $A_U$ ($A_L$), that is, the maximum (minimum) measured value such that $p_w \le \mathrm{MAR}$, $\mu = A_U$ ($\mu = A_L$) must be considered in (10). therefore, according to (5) and (6):

$$\frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{T_{U,L}-A_{U,L}}{\sqrt{2}\,\sigma}\right)\right] = \begin{cases} 1 - \mathrm{MAR} & \text{if } x_m < T_U \text{ is required} \\ \mathrm{MAR} & \text{if } x_m > T_L \text{ is required.} \end{cases} \quad (11)$$

when $x_m < T_U$ is required, solving equation (11) yields:

$$A_U = T_U - \sqrt{2}\,\sigma \cdot \operatorname{erfinv}(1 - 2\,\mathrm{MAR}) \,, \quad (12)$$

where erfinv is the inverse error function, which is given by:

$$\operatorname{erfinv}(z) = \sum_{k=0}^{\infty} \frac{c_k}{2k+1} \left(\frac{\sqrt{\pi}}{2}\, z\right)^{2k+1} \,, \quad (13)$$

where $c_0 = 1$ and $c_k = \sum_{m=0}^{k-1} \frac{c_m\, c_{k-1-m}}{(m+1)(2m+1)}$. similarly to the error function, the inverse error function is also well approximated with no more than 10 terms in (13). on the other hand, when $x_m > T_L$ is required, solving (11) yields:

$$A_L = T_L - \sqrt{2}\,\sigma \cdot \operatorname{erfinv}(2\,\mathrm{MAR} - 1) \,. \quad (14)$$
since the inverse error function is an anti-symmetric function, that is:

$$\operatorname{erfinv}(-z) = -\operatorname{erfinv}(z) \,, \quad (15)$$

equations (12) and (14) can be grouped into a single equation:

$$A_{U,L} = T_{U,L} \mp \sqrt{2}\,\sigma \cdot \operatorname{erfinv}(1 - 2\,\mathrm{MAR}) \,. \quad (16)$$

therefore, to have a risk lower than mar of exceeding the tolerance limit $T_U$ (or $T_L$), an acceptance limit $A_U$ (or $A_L$) should be evaluated, obtained by shifting the tolerance limit to the left (or right) by the quantity $\sqrt{2}\,\sigma \cdot \operatorname{erfinv}(1 - 2\,\mathrm{MAR})$. in particular: • the limit is shifted to the left when $x_m \le T_U$ is required and guarded acceptance is applied; • the limit is shifted to the right when $x_m \ge T_L$ is required and guarded acceptance is applied; • the limit is shifted to the right when $x_m \le T_U$ is required and guarded rejection is applied; • the limit is shifted to the left when $x_m \ge T_L$ is required and guarded rejection is applied. to provide a numerical example, let us consider again the example considered in section 2.2 and the values in table 1. let us remember that $x_m < T_U$ is required, and suppose that the mar is set to 5 % and guarded acceptance is considered. by applying equation (16), it follows that $A_U = 41.8$ mg/l. figure 6 shows the normal pdf with mean value equal to the obtained $A_U$ and the standard uncertainty given in table 1. in this figure, the coloured area represents the probability of being above $T_U$, as also reported in table 2. this probability is exactly the set mar. this means that, for every value $x_m < A_U$, the probability of exceeding $T_U$ will be lower than 5 %. it is therefore possible to set the acceptance limit, given the pdf associated with the estimated measurement uncertainty and the desired mar.

figure 6. example when the normal pdf is centered on $A_U$ = 41.8 mg/l. the coloured area represents the probability of being above $T_U$.

table 2. probability of being below or above $T_U$ in the case of figure 6: $P(x < \mathrm{MAL})$ = 0.95; $P(x > \mathrm{MAL})$ = 0.05.
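the value $A_U$ = 41.8 mg/l can be reproduced in a couple of lines; a minimal sketch of equation (16) for the guarded acceptance case, using scipy's erfinv rather than the truncated series (13):

import math
from scipy.special import erfinv

T_u, sigma, MAR = 50.0, 5.0, 0.05

# equation (16), minus sign: x_m <= T_u required, guarded acceptance
A_u = T_u - math.sqrt(2) * sigma * erfinv(1 - 2 * MAR)
print(round(A_u, 1))   # -> 41.8 mg/l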
3.2. the measurement results distribute according to a uniform posterior pdf when a uniform pdf is considered:

$$p(x) = \begin{cases} \frac{1}{2a} & \text{if } \mu - a < x < \mu + a \\ 0 & \text{otherwise,} \end{cases} \quad (17)$$

where $\mu$ is the mean value and $2a$ is the support of the pdf, which is related to the pdf standard deviation by $a = \sigma\sqrt{3}$, the corresponding cdf is given by:

$$F_X(x) = \int_{-\infty}^{x} p(t)\,\mathrm{d}t = \int_{\mu-a}^{x} \frac{1}{2a}\,\mathrm{d}t = \frac{1}{2a}\,(x - \mu + a) \quad (18)$$

and therefore:

$$F_X(T_{U,L}) = \frac{1}{2a}\,(T_{U,L} - \mu + a) \,. \quad (19)$$

from (5) and (6), and considering $\mu = A_{U,L}$ in (19):

$$\frac{1}{2a}\,(T_{U,L} - A_{U,L} + a) = \begin{cases} 1 - \mathrm{MAR} & \text{if } x_m < T_U \text{ is required} \\ \mathrm{MAR} & \text{if } x_m > T_L \text{ is required.} \end{cases} \quad (20)$$

by solving the above equations, the value for the acceptance limit is found:

$$A_{U,L} = T_{U,L} \mp a\,(1 - 2\,\mathrm{MAR}) \,, \quad (21)$$

which shows that, to have a risk below the mar, the acceptance limit must be set to the left or right of the tolerance limit (as detailed in the previous section) by the quantity $a\,(1 - 2\,\mathrm{MAR})$. to provide a numerical example, let us consider again the example of the pollutant in water considered in section 2.2. let us consider again that $T_U = 50$ mg/l as in table 1, but let us now suppose that the pdf associated with the estimated measurement uncertainty is uniform. let us also suppose that the half-width of this uniform pdf is $a = 10$ mg/l, that the mar is set to 5 % and that guarded acceptance is applied. by applying equation (21), it follows that $A_U = 41$ mg/l. figure 7 shows the uniform pdf with mean value equal to the obtained $A_U$. the coloured area represents the probability of being above $T_U$, which is exactly 5 %. this means that every measured value of the pollutant in water lower than $A_U$ will provide a risk of exceeding $T_U$ lower than 5 %.

figure 7. example when the uniform pdf is centered on $A_U$ = 41 mg/l. the coloured area represents the probability of being above $T_U$.
figure 8. example when a triangular pdf is assumed. the coloured area represents the probability of exceeding $T_U$ (or $T_L$) when: a) the measured value is required to be below the tolerance limit ($x_m \le T_U$); b) the measured value is required to be above the tolerance limit ($x_m \ge T_L$).

3.3. the measurement results distribute according to a triangular posterior pdf when a symmetric triangular pdf is considered, its equation is the following:

$$p(x) = \begin{cases} y_1(x) & \text{if } \mu - a \le x \le \mu \\ y_2(x) & \text{if } \mu < x \le \mu + a \\ 0 & \text{otherwise,} \end{cases} \quad (22)$$

where:

$$y_1(x) = \frac{x}{a^2} + \frac{a - \mu}{a^2} \quad (23)$$

and:

$$y_2(x) = -\frac{x}{a^2} + \frac{a + \mu}{a^2} \,, \quad (24)$$

and where $\mu$ and $2a$ are, respectively, the mean value and the support of the pdf. furthermore, $a = \sigma\sqrt{6}$ holds, where $\sigma$ is the standard deviation of the pdf. to evaluate the corresponding cdf, two situations should be considered, that is, the case when $x \le \mu$ and the case when $x > \mu$. if $x \le \mu$, the cdf is given by:

$$F_{X,1}(x) = \int_{\mu-a}^{x} y_1(t)\,\mathrm{d}t = \int_{\mu-a}^{x} \left(\frac{t}{a^2} + \frac{a-\mu}{a^2}\right)\mathrm{d}t = \frac{1}{2a^2}\left[x^2 + 2x\,(a-\mu) + (a-\mu)^2\right] = \frac{1}{2a^2}\left[x + (a-\mu)\right]^2 \,, \quad (25)$$

while, if $x > \mu$, the cdf is given by:

$$F_{X,2}(x) = \frac{1}{2} + \int_{\mu}^{x} y_2(t)\,\mathrm{d}t = \frac{1}{2} + \int_{\mu}^{x} \left(-\frac{t}{a^2} + \frac{a+\mu}{a^2}\right)\mathrm{d}t = \frac{1}{2} - \frac{1}{2a^2}\left[x^2 - 2x\,(a+\mu) + \mu\,(\mu+2a)\right] \,. \quad (26)$$

now, (5) and (6) should be solved for both $F_{X,1}$ and $F_{X,2}$, thus leading to four equations. however, only the most likely situations are reported here. in fact, when $x_m \le T_U$ is required and the mar is supposed to be small (as it should be when environmental, legal or health situations are considered), the situation shown in figure 8a will occur, so that equation (27) must be solved:

$$A_U \,|\, F_{X,2}(T_U) = 1 - \mathrm{MAR} \,. \quad (27)$$

on the other hand, when $x_m \ge T_L$ is required and again the mar is supposed to be small, the situation shown in figure 8b will occur, so that equation (28) must be solved:

$$A_L \,|\, F_{X,1}(T_L) = \mathrm{MAR} \,. \quad (28)$$

equation (27) yields:

$$\frac{1}{2} - \frac{1}{2a^2}\left[T_U^2 - 2\,T_U\,(a + A_U) + A_U\,(A_U + 2a)\right] = 1 - \mathrm{MAR} \,. \quad (29)$$

by solving this simple equation with respect to $A_U$, the following second-order equation is found:

$$A_U^2 - 2\,A_U\,(T_U - a) + \left[(T_U - a)^2 - 2\,a^2\,\mathrm{MAR}\right] = 0 \,, \quad (30)$$

which provides the two solutions:

$$A_U = (T_U - a) \mp a\,\sqrt{2\,\mathrm{MAR}} \,. \quad (31)$$

among the two above solutions, the one with the minus sign can be discarded. in fact, if we considered a pdf with width $2a$ and $\mu = (T_U - a) - a\sqrt{2\,\mathrm{MAR}}$, this pdf would not cross $T_U$ and would therefore provide a risk of exceeding $T_U$ equal to zero. of course, this would be a very lucky situation, but here the limit not exceeding the mar needs to be found and, therefore, the following equation holds, under the assumption that $x \le T_U$ is required:

$$A_U = (T_U - a) + a\,\sqrt{2\,\mathrm{MAR}} \,. \quad (32)$$

when, on the other hand, $x \ge T_L$ is required, (28) yields:

$$\frac{1}{2a^2}\left[T_L^2 + 2\,T_L\,(a - A_L) + (a - A_L)^2\right] = \mathrm{MAR} \,. \quad (33)$$

solving this simple equation with respect to $A_L$, the following second-order equation is found:

$$A_L^2 - 2\,A_L\,(T_L + a) + \left[(T_L + a)^2 - 2\,a^2\,\mathrm{MAR}\right] = 0 \,, \quad (34)$$

which provides the two solutions:

$$A_L = (T_L + a) \mp a\,\sqrt{2\,\mathrm{MAR}} \,. \quad (35)$$

among the two above solutions, the one with the plus sign can be discarded.
in fact, if we considered a pdf with width $2a$ and $\mu = (T_L + a) + a\sqrt{2\,\mathrm{MAR}}$, this pdf would not cross $T_L$ and would therefore provide a risk of exceeding $T_L$ equal to zero. of course, this would be a very lucky situation, but here the limit not exceeding the mar needs to be found and, therefore, the following equation holds, under the assumption that $x \ge T_L$ is required:

$$A_L = (T_L + a) - a\,\sqrt{2\,\mathrm{MAR}} \,. \quad (36)$$

finally, by considering (32) and (36) together, it is:

$$A_{U,L} = T_{U,L} \mp a\left(1 - \sqrt{2\,\mathrm{MAR}}\right) \,, \quad (37)$$

that is, the acceptance limit must be shifted to the left or right of the tolerance limit (as detailed in sec. 3.1) by the quantity $a\,(1 - \sqrt{2\,\mathrm{MAR}})$ to have a risk lower than the mar. as a numerical example, let us consider again the example of the pollutant in water considered in section 2.2. let us consider again that $T_U = 50$ mg/l as in table 1, but let us now assume a triangular pdf associated with the estimated uncertainty. furthermore, the half-width of the triangular pdf is supposed to be $a = 10$ mg/l, the mar is set to 5 % and guarded acceptance is applied. since $x \le T_U$ is desired, by applying (37), $A_U = 43.2$ mg/l is obtained. figure 9 shows the obtained pdf (centered on the obtained $A_U$ value), where the coloured area represents the probability of being above the tolerance limit $T_U$, which is exactly equal to the pre-set mar (5 %). this means that every measured value of the pollutant in water lower than the obtained $A_U$ value will provide a risk lower than 5 %.

figure 9. example when the triangular pdf is centered on $A_U$ = 43.2 mg/l. the coloured area represents the probability of being above $T_U$.

3.4. the measurement results distribute according to a trapezoidal posterior pdf when a symmetric trapezoidal pdf is considered, the pdf is described by the following equations:

$$p(x) = \begin{cases} y_3(x) & \text{if } \mu - a \le x \le \mu - a\beta \\ \frac{1}{a\,(1+\beta)} & \text{if } \mu - a\beta < x \le \mu + a\beta \\ y_4(x) & \text{if } \mu + a\beta < x \le \mu + a \\ 0 & \text{otherwise,} \end{cases} \quad (38)$$

where:

$$y_3(x) = \frac{1}{a^2\,(1-\beta^2)}\,(x + a - \mu) \quad (39)$$

$$y_4(x) = -\frac{1}{a^2\,(1-\beta^2)}\,(x - a - \mu) \,, \quad (40)$$

$\mu$ is the mean value of the pdf, $2a$ is its width and $\beta$ is the ratio between the two bases. to evaluate the corresponding cdf, three situations should be considered, that is, the case when $\mu - a \le x \le \mu - a\beta$, the case when $\mu - a\beta < x \le \mu + a\beta$, and the case when $\mu + a\beta < x \le \mu + a$. however, similar considerations as the ones made for the triangular pdf apply, so that only the two situations shown in figure 10a and figure 10b are considered.

figure 10. example when a trapezoidal pdf is assumed. the coloured area represents the probability of exceeding the tolerance limit when: a) the measured value is required to be below the tolerance limit ($x_m \le T_U$); b) the measured value is required to be above the tolerance limit ($x_m \ge T_L$).

let us call $F_{X,3}(x)$ the cdf for the case in figure 10a. the following equation must then be solved:

$$A_U \,|\, F_{X,3}(T_U) = 1 - \mathrm{MAR} \,. \quad (41)$$

on the other hand, let us call $F_{X,4}(x)$ the cdf for the case in figure 10b. the following equation must then be solved:

$$A_L \,|\, F_{X,4}(T_L) = \mathrm{MAR} \,. \quad (42)$$

it follows:

$$F_{X,3}(x) = \frac{1-\beta}{2\,(1+\beta)} + \frac{2\beta}{1+\beta} + \int_{\mu+a\beta}^{x} y_4(t)\,\mathrm{d}t = \frac{1+3\beta}{2\,(1+\beta)} + \frac{1}{2a^2(1-\beta^2)}\left\{-x^2 + 2\,(\mu+a)\left[x - (\mu+a\beta)\right] + (\mu+a\beta)^2\right\} \quad (43)$$

and, by solving (41), the following second-order equation is obtained:

$$A_U^2 - 2\,A_U\,(T_U - a) + (T_U - a)^2 - 2\,a^2\,\mathrm{MAR}\,(1-\beta^2) = 0 \,, \quad (44)$$

for which the two following solutions can be found:

$$A_U = (T_U - a) \mp a\,\sqrt{2\,\mathrm{MAR}\,(1-\beta^2)} \,. \quad (45)$$
according to the same considerations as in the case of a triangular pdf, it can be concluded that, among the two above solutions, the one with the minus sign can be discarded. therefore, the following equation holds, under the assumption that $x \le T_U$ is required:

$$A_U = (T_U - a) + a\,\sqrt{2\,\mathrm{MAR}\,(1-\beta^2)} \,. \quad (46)$$

on the other hand:

$$F_{X,4}(x) = \int_{\mu-a}^{x} y_3(t)\,\mathrm{d}t = \frac{1}{2a^2(1-\beta^2)}\left[x^2 + 2\,(a-\mu)\,x + (a-\mu)^2\right] \quad (47)$$

and, by solving (42) with respect to $A_L$, the following second-order equation is obtained:

$$A_L^2 - 2\,A_L\,(T_L + a) + (T_L + a)^2 - 2\,a^2\,\mathrm{MAR}\,(1-\beta^2) = 0 \,. \quad (48)$$

equation (48) has two solutions:

$$A_L = (T_L + a) \mp a\,\sqrt{2\,\mathrm{MAR}\,(1-\beta^2)} \,, \quad (49)$$

where, according to the same previous considerations, the one with the plus sign can be discarded, so that:

$$A_L = (T_L + a) - a\,\sqrt{2\,\mathrm{MAR}\,(1-\beta^2)} \,. \quad (50)$$

finally, by considering (46) and (50) together, it can be written:

$$A_{U,L} = T_{U,L} \mp a\left(1 - \sqrt{2\,\mathrm{MAR}\,(1-\beta^2)}\right) \,, \quad (51)$$

that is, the acceptance limit is simply shifted to the left or right of the tolerance limit (as detailed in sec. 3.1) by the quantity $a\,(1 - \sqrt{2\,\mathrm{MAR}\,(1-\beta^2)})$. to provide a numerical example, let us consider again the example of the pollutant in water considered in section 2.2. let us consider again that $T_U = 50$ mg/l as in table 1, but let us now suppose that the pdf is trapezoidal with $\beta = 0.5$. furthermore, the half-width of the trapezoidal pdf is supposed to be $a = 10$ mg/l, the mar is set to 5 % and guarded acceptance is applied. since, in the considered example, $x \le T_U$ is required, (51) yields $A_U = 42.7$ mg/l. figure 11 shows the obtained pdf (centered on the obtained $A_U$ value), where the coloured area represents the probability of being above the tolerance limit $T_U$, exactly equal to the pre-set mar (5 %). this means that every measured value of the pollutant in water lower than the obtained $A_U$ value will provide a risk lower than 5 %.

figure 11. example when the trapezoidal pdf is centered on $A_U$ = 42.7 mg/l. the coloured area represents the probability of being above $T_U$.
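the closed forms (16), (21), (37) and (51) can be collected into a single helper; the following sketch (an illustration under the paper's assumptions, with scipy used for erfinv) reproduces the four numerical examples, and a simple monte carlo bisection, in the spirit of the supplement 1 approach [18] recalled in section 3, handles pdfs with no closed-form cdf:

import math
import numpy as np
from scipy.special import erfinv

def acceptance_limit(T_u, MAR, pdf, **p):
    """guarded acceptance with x_m <= T_u required: A_u = T_u - shift."""
    if pdf == "normal":          # equation (16); p["sigma"]: standard uncertainty
        shift = math.sqrt(2) * p["sigma"] * erfinv(1 - 2 * MAR)
    elif pdf == "uniform":       # equation (21); p["a"]: half-width of the support
        shift = p["a"] * (1 - 2 * MAR)
    elif pdf == "triangular":    # equation (37)
        shift = p["a"] * (1 - math.sqrt(2 * MAR))
    elif pdf == "trapezoidal":   # equation (51); p["beta"]: ratio of the two bases
        shift = p["a"] * (1 - math.sqrt(2 * MAR * (1 - p["beta"] ** 2)))
    return T_u - shift

print(acceptance_limit(50, 0.05, "normal", sigma=5))              # -> 41.8 mg/l
print(acceptance_limit(50, 0.05, "uniform", a=10))                # -> 41.0 mg/l
print(acceptance_limit(50, 0.05, "triangular", a=10))             # -> 43.2 mg/l
print(acceptance_limit(50, 0.05, "trapezoidal", a=10, beta=0.5))  # -> 42.7 mg/l

def acceptance_limit_mc(T_u, MAR, draw, n=200_000, tol=1e-3):
    """bisection on the measured value mu until P(X > T_u) = MAR (approximate,
    since the probability is estimated from n random draws)."""
    lo, hi = T_u - 100.0, T_u    # assume A_u lies below T_u (guarded acceptance)
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        too_risky = np.mean(draw(mu, n) > T_u) > MAR
        lo, hi = (lo, mu) if too_risky else (mu, hi)
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
normal_draw = lambda mu, n: rng.normal(mu, 5.0, n)
print(acceptance_limit_mc(50.0, 0.05, normal_draw))   # close to the 41.8 mg/l above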
4. conclusions

following the suggestions given in the present standards [14], [16], a measurand is considered conforming when the measured value falls inside the acceptance interval, as defined in art. 3.3.9 of [14], and is considered non-conforming when the measured value falls inside the rejection interval, as defined in art. 3.3.10 of [14]. the definition of the acceptance interval strongly depends on the measurement uncertainty with which the measurand is measured and on the admissible risk of declaring a non-conforming value as conforming and vice versa. while this problem is clearly highlighted in [14], very few practical indications are given on how to define the acceptance limits once the measurement uncertainty has been estimated and the maximum admissible risk (mar) given.

this paper has shown how the acceptance limits depend also on the probability density function (pdf) considered to represent the distribution of values that could reasonably be attributed to the measurand, and has proposed a general method to relate them to the considered pdf and the considered mar. the most used normal, uniform, triangular and trapezoidal pdfs have been considered and general formulas have been given to define the acceptance limits given the uncertainty and the mar. the numerical examples have shown that different results are obtained for the acceptance limits when the different pdfs are considered, as expected from the theory. the closed-form formulas provided in the paper allow one to evaluate the acceptance limits in a straightforward way, in both situations of guarded acceptance and guarded rejection. should different probability distributions be considered, the general proposed method can still be applied, and a monte carlo simulation can provide the desired acceptance limits.

references

[1] jcgm 200:2012, international vocabulary of metrology – basic and general concepts and associated terms (vim 2008 with minor corrections), joint committee for guides in metrology, 2012.
[2] jcgm 100:2008, evaluation of measurement data – guide to the expression of uncertainty in measurement (gum 1995 with minor corrections), joint committee for guides in metrology, 2008.
[3] a. ferrero, v. scotti, uncertainty and conscious decisions, in: forensic metrology: an introduction to the fundamentals of metrology for judges, lawyers and forensic scientists, cham: springer international publishing, 2022, pp. 115–124, isbn: 978-3-031-14619-0. doi: 10.1007/978-3-031-14619-0_8
[4] m. peterson, an introduction to decision theory, cambridge: cambridge university press, 2009, p. 317, isbn: 9780511800917. doi: 10.1017/cbo9780511800917
[5] c. yoe, principles of risk analysis: decision making under uncertainty, 2nd ed., boca raton: crc press, 2019, p. 848, isbn: 9780429021121. doi: 10.1201/9780429021121
[6] r. m. peterman, j. l. anderson, decision analysis: a method for taking uncertainties into account in risk-based decision making, human and ecological risk assessment: an int. journal 5.2 (1999), pp. 231–244. doi: 10.1080/10807039991289383
[7] b. solaiman, d. guériot, sh. almouahed, b. alsahwa, é. bossé, a new hybrid possibilistic-probabilistic decision-making scheme for classification, entropy 23.1 (2021), issn: 1099-4300. doi: 10.3390/e23010067
[8] l. r. pendrill, using measurement uncertainty in decision-making and conformity assessment, metrologia 51.4 (2014), pp. 206–218. doi: 10.1088/0026-1394/51/4/s206
[9] a. allard, n. fischer, i. smith, p. harris, l. pendrill, risk calculations for conformity assessment in practice, 19th international congress of metrology cim, paris, france, 24-26 september 2019. doi: 10.1051/metrology/201916001
[10] s. puydarrieux, j. m. pou, l. leblond, n. fischer, a. allard, m. feinberg, d. el guennouni, role of measurement uncertainty in conformity assessment, 19th international congress of metrology cim, paris, france, 24-26 september 2019. doi: 10.1051/metrology/201916003
[11] e. cruz de oliveira, f. r. lourenço, risk of false conformity assessment applied to automotive fuel analysis: a multiparameter approach, chemosphere 263 (2021), p. 128265, issn: 0045-6535. doi: 10.1016/j.chemosphere.2020.128265
[12] l. separovic, r. s. simabukuro, a. r. couto, m. l. g. bertanha, f. r. s. dias, a. y. sano, a. m. caffaro, f. r. lourenço, measurement uncertainty and conformity assessment applied to drug and medicine analyses – a review, critical reviews in analytical chemistry 0.0 (2021), pp. 1–16. doi: 10.1080/10408347.2021.1940086
[13] m. h. habibie, o. hedrony, using decision rule in calibration of long gauge block on the conformity assessment scheme, iop conference series: materials science and engineering 673.1 (dec. 2019), p. 012110. doi: 10.1088/1757-899x/673/1/012110
[14] jcgm 106:2012, evaluation of measurement data – the role of measurement uncertainty in conformity assessment, joint committee for guides in metrology, 2012.
[15] j. m. pou, l. leblond, smart metrology: from the metrology of instrumentation to the metrology of decisions, 18th international congress of metrology cim, paris, france, 19-21 september 2017. doi: 10.1051/metrology/201701007
[16] iso en 14253-1, geometrical product specification (gps) – inspection by measurement of workpieces and measuring instruments – part 1: decision rules for proving conformance or non-conformance with specification, 1998.
[17] h. källgren, l. pendrill, b. magnusson, role of measurement uncertainty in conformity assessment in legal metrology and trade, accreditation and quality assurance 8 (2003), pp. 541–547. doi: 10.1007/s00769-003-0707-8
[18] jcgm 101:2008, evaluation of measurement data – supplement 1 to the guide to the expression of uncertainty in measurement – propagation of distributions using a monte carlo method, joint committee for guides in metrology, 2008.

acta imeko
issn: 2221-870x
june 2015, volume 4, number 2, 72-79

effectiveness of filters in 3 kv dc railway traction substations supplied by distorted voltage – measurements and diagnostics

adam szeląg, tadeusz maciołek, marek patoka
warsaw university of technology, institute of electrical machines, pl. politechniki 1, 00-661 warsaw, poland

section: research paper
keywords: dc traction substation; harmonics; filters; voltage distortion; railway engineering; measurements
citation: adam szeląg, tadeusz maciołek, marek patoka, effectiveness of filters in 3 kv dc railway traction substations supplied by distorted voltage – measurements and diagnostics, acta imeko, vol. 4, no. 2, article 13, june 2015, identifier: imeko-acta-04 (2015)-02-13
editor: paolo carbone, university of perugia, italy
received november 1, 2014; in final form february 3, 2015; published june 2015
copyright: © 2015 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was supported by research funds of pkp energetyka s.a.
corresponding author: adam szeląg, e-mail: adam.szelag@ee.pw.edu.pl

abstract
electric energy quality criteria relating to a dc supply system concern the circuits from the rectifiers installed in traction substations to the vehicle's current collector. energy law and the related implementing provisions unequivocally state that an electrified transport system, as the energy recipient, shall fulfil the energy consumption requirements defined in the agreement, which imposes certain conditions on the railway power supply company. the introduction into traffic of traction vehicles equipped with converter power electronics drive systems has increased the requirements regarding voltage quality in a dc catenary. at the same time, the increasing share of non-linear recipients causes an increase of distortions in the ac voltage supplying traction substations, which also transfer to the dc side. in some cases, disturbances in the operation of low-power infrastructure, such as signalling and control circuits, were observed, although the energy quality parameters at the ac side had been fulfilled. all these factors changed the operational conditions of the resonance smoothing filters hitherto used in rectifier traction substations supplied by ac medium-voltage power lines. this paper presents research and a case study of the effectiveness of the applied resonance filters, from the measurements allowing for problem identification to the results of studies of the proposed new filter solution and the supervised exploitation of a prototype with the application of a digital monitoring and diagnostics system.

1. introduction

quality criteria for a dc traction supply system concern the circuits from the rectifiers installed in traction substations to the vehicle's current collector.
the technical effectiveness of the applied supply system, as well as the quality of the provided energy, is assessed mainly by fulfilment of the following requirements [1], [3], [7]-[9], [12]-[16], [18]-[20], [22], [25], [26]:
a) the required conditions for the voltage supplying the rolling stock (its value and quality, including the harmonics content) are fulfilled, and the voltage drops in the supplying dc network are low enough not to significantly influence the speed of train traffic;
b) the admissible levels of short- and long-term loads are not exceeded in any element of the supply system (supply devices);
c) the safety of passengers, personnel and other persons is ensured;
d) short-circuit identification and reliable switching-off are ensured (both on the dc and the ac side of the power supply);
e) the system remains open to an increase of traffic – it can be expanded, by modernisation, with a further increase in transport demand, depending on the rate of traffic growth, until the target traffic anticipated in a forecast is achieved.

in the analysis of the technical effectiveness of the examined load variants, one states the conditions for fulfilling the assumed technical standards of a supply system – reliability and dependability of supply, admissible voltage drops and efficiency for the assumed circuit configurations [21]. the assumptions regarding train traffic and the configuration of a supply system allow for the assessment of the supply system functioning during peak traffic hours and correspond to the conditions assumed for the dimensioning of the system devices in the design process, including the equipment on the dc voltage side [2], [23].

the dc output voltage of rectifier traction substations contains ac components – harmonics – which could negatively influence the operation of various devices, e.g.
a railway control and signalling system [3], [10], [23]-[26], and might disturb the reception of broadcasting services in areas close to the substation [22]. the ac-side current harmonics of the rectifiers in traction substations cause distortion of the voltage in the supplying ac power lines and in the grid point to which both a railway traction substation and other loads are connected [2], [4]-[6].

1.1. dc-side voltage harmonics

the variable component of the dc-side voltage is composed of a set of harmonics [2], whose orders n equal:

$n = c \cdot p$ , (1)

where c = 1, 2, 3, 4, … and p = 6, 12 (the number of pulses of a traction rectifier). therefore, the content of higher harmonics in the rectified voltage differs between rectifier systems. for instance, in six-pulse systems the harmonics of the 6th, 12th, 18th and 24th order occur (harmonics of higher orders are usually omitted in technical considerations, due to their low values), whereas in 12-pulse systems only harmonics of the 12th and 24th order occur; thus the 12-pulse system has a lower content of higher harmonics than the 6-pulse system. the rms value un of an individual higher voltage harmonic, for a defined value of the rectifier voltage, is the lower, the higher the harmonic order n is. for the idle state of rectifier operation the following dependency for un is used [2], [6], [23]:

$u_n = \frac{\sqrt{2}}{n^2 - 1}\, u_{doi}$ . (2)

knowing un, the total group variable component can be calculated:

$u_c = \sqrt{\sum_{n = n_d}^{n_g} u_n^2}$ , (3)

where nd, ng are, respectively, the lower and the upper order of the sum of harmonics. the relative value of the variable component of the rectified voltage in the idle state can also be determined according to:

$\nu = \frac{u_c}{u_{doi}} \cdot 100\,\%$ . (4)

under idle-state conditions, for p = 3 the ν coefficient is 15.3 %, for p = 6 it is 4.2 %, while for p = 12 the value is 1.04 %; hence the content of the variable component in the rectified voltage evidently decreases with an increase of the number of rectifier pulses. the voltage of an individual higher harmonic un increases with the increase of the commutation angle u, and its rms value un%(u), given as a percentage of udoi, can be calculated as:

$u_{n\%}(u) = \frac{100\,\%}{\sqrt{2}\,(n^2-1)}\,\sqrt{2 + (n^2-1)\sin^2 u + 2\cos u \cos(nu) + 2n \sin u \sin(nu)}$ . (5)

the harmonics values obtained from this dependency for the declared operational overloads of the rectifier units constitute the basis of the calculations for the smoothing devices installed in traction substations. already at the design stage one should therefore determine the means that would enable decreasing the disturbing voltage uz to the admissible levels stated in the relevant regulations and recommendations. currently in poland, the admissible level of the uz equivalent voltage, taking into account the psophometric weights, is defined by equation (6) [8], [23]:

$u_z = \sqrt{\sum_n \left(\lambda_n\, u_n\right)^2}$ , (6)

where un is the rms of the nth harmonic and λn is the psophometric weight coefficient of the nth harmonic [8]. the admissible value of uz equals 16.5 v in poland, while in germany and in italy it equals 10 v. in some countries, a maximum admissible value of a single harmonic (e.g. 100 v) is determined as well, which in fact concerns the most significant non-characteristic harmonic of 100 hz frequency. however, the suitability of the psophometric weight criterion for the dc-side voltage raises doubts, since in relation to track circuits the current criterion is of more importance. for instance, the current limits given in [3] determine the assessment of possible disturbances during the introduction of new rolling stock. this criterion has no restrictions in the frequency range between 60–1380 hz, that is, in a range including the harmonics with higher amplitudes (300, 600, 900, 1200 hz) that are filtered by the resonance contours of dc-side filters. therefore, it can be difficult to fulfil the current limits for the 1500, 1800, 2100 and 2400 hz harmonics [3], [26] (especially for filters with a small inductance), even though the voltage criterion (6) has been fulfilled.
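a compact numeric illustration of (2)-(4) and (6) follows; the function and variable names, as well as the idea of expressing un directly in per cent of udoi, are ours, and the psophometric weights are left as user-supplied inputs to be taken from [8]:

```python
import math

def u_n_idle(n):
    """eq. (2) in per cent of u_doi: rms of the n-th harmonic of the ideal
    rectified voltage at no load."""
    return math.sqrt(2.0) / (n ** 2 - 1) * 100.0

def ripple_coefficient(p, c_max=4):
    """eqs. (3)-(4): relative variable component of a p-pulse rectifier, summing
    the characteristic orders n = p, 2p, ..., c_max*p given by eq. (1)."""
    return math.sqrt(sum(u_n_idle(c * p) ** 2 for c in range(1, c_max + 1)))

print(round(ripple_coefficient(6), 1))    # 4.2 (%)  for the 6-pulse system
print(round(ripple_coefficient(12), 2))   # ~1.0 (%) for the 12-pulse system

def psophometric_voltage(harmonics):
    """eq. (6): uz = sqrt(sum((lambda_n * u_n)**2)); 'harmonics' maps each order
    n to a pair (u_n in volts, psophometric weight lambda_n taken from [8])."""
    return math.sqrt(sum((lam * un) ** 2 for un, lam in harmonics.values()))

# illustrative use with assumed numbers only, not measured substation data:
print(round(psophometric_voltage({6: (40.0, 0.3), 12: (11.0, 0.8)}), 1))
```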
1.2. dc-side filters

in order to decrease the ac component on the dc voltage side, filters are widely used that aim at decreasing the higher harmonics introduced into a catenary. this is performed by introducing a serial branch and a low-impedance (for certain harmonics) parallel branch shunting the traction network. resonance filters (poland, russia, czech republic, slovakia, spain, slovenia, croatia, japan, republic of south africa, india, algeria, morocco) and aperiodic filters (italy) are commonly used as smoothing devices [2], [23]. a typical scheme of the resonance filters used on polish state railways is presented in figure 1.

figure 1. a simplified scheme of a filter with two resonance contours (600 and 1200 hz) used in 12-pulse 3 kv dc traction substations.

the effectiveness of a filter at each frequency is established using the wn attenuation (smoothing) coefficient, defined as the ratio of the rms value uon of the nth higher-order voltage harmonic at the output of the filter to the rms value un of the same voltage harmonic at the output of the rectifier (at the input of the filter):

$w_n = \frac{u_{on}}{u_n}$ . (7)

the basic disadvantages of resonance filters include:
- a low level of attenuation under fluctuation of the supplying voltage frequency,
- the possibility of strengthening certain harmonics as a result of resonance phenomena (especially when supplying vehicles equipped with converter systems),
- low stability of the characteristics in time and the necessity of periodic tuning.

in case of the introduction into traffic of locomotives with power electronic devices (a chopper drive or an inverter with an ac motor), disturbances from higher harmonics in a traction catenary will increase, while additional current harmonics might cause overload of the filter elements. furthermore, the phenomenon of transmission of higher harmonics along a traction catenary might occur when a substation cooperates with different types of rectifiers. locomotives equipped with converter drives, which generate higher current harmonics, usually have low-pass filters of lc type at their input. due to the distributed character of the passive elements and of the current (vehicles) and voltage (substations) harmonics sources, the scheme of the power supply circuit delivering energy from a 3-phase public network to an electric vehicle (ev) supplied from the 3 kv dc catenary is significantly complicated (figure 2). hence, while designing solutions for the main circuit of traction vehicles with asynchronous drive, it is necessary to take into consideration and analyse the mutual impact of the supply system (substations, substations' smoothing filters) on the circuits of the signalling, control and communication systems. the frequency spectrum and the impact of the ac component of the vehicle's current on other circuits depend on the method of control, the operating point and the converter frequency, as well as on the parameters of the other circuits [3], [9], [12], [16], [22]-[24].
in some cases, a locomotive with a converter drive can constitute a lower impedance for the harmonics occurring in the dc voltage of a traction substation (especially those of lower orders and the non-characteristic ones) than a conventional drive with dc motors and resistor start-up [24], [26]. the input filter of a vehicle is a damping circuit which, in the case of sudden voltage changes in a traction catenary, might cause damped oscillations with a frequency corresponding to its own natural frequency. when a drive operates at constant power [12], the phenomenon of excitation of oscillations with a frequency equal to the traction vehicle input filter's own frequency is known.

2. measurements of the quality of ac voltage supplying a traction substation

the problem of the quality of the electric energy delivered to consumers from the electricity grid concerns both suppliers and consumers of electrical energy. it imposes requirements on the operators of the power supply network to supply the recipients with energy of appropriate quality. the important quality parameters [1]-[7], [15], [25] include the admissible level of non-linear distortions (thd – total harmonic distortion) as well as the admissible load fluctuations (due to the instability of the receiver) and asymmetry. one should pay particular attention to the energy quality criteria stated in the standard en 50160 [18], especially when a traction substation is supplied by lines having various parameters. a traction substation equipped with 12-pulse rectifiers served as an example of the negative influence of a distorted supply voltage on the operation of a 3 kv dc traction system, since the phenomenon of fuse deterioration occurred in the substation's resonance-type filter. the adjacent substations also had smoothing devices of the resonance type installed, tuned to the harmonics of the 12th and 24th order, which are characteristic for 12-pulse rectifiers. thus, in order to determine the cause of such phenomena, it was important to conduct measurements of the 15 kv 50 hz voltage supplying the substations. measurements of the energy quality were performed in the 3-phase 15 kv ac lines supplying a substation, using a specialised meter for measuring the quality of the electrical energy consumed by a traction substation. exemplary results are presented: waveforms of averaged 10-min values of the voltage harmonics in one of the phases (figure 3a) and of the thd u values for all phases (figure 3b) in the idle state of a supplying line (without the operating rectifier substation). one can observe the presence of the 5th harmonic in the supply voltage (up to 5 % at the admissible value of 6 %). the remaining harmonics had far lower values, below 1 %. the global thd u coefficient exceeded 5 % at the admissible value of 8 %, so the limits imposed by [19] were not exceeded.
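for clarity, a thd u figure of this kind can be reproduced with a few lines of python; the harmonic amplitudes below are assumed illustrative values, not the measured data of the paper:

```python
import math

def thd(u_harmonics, u1):
    """total harmonic distortion (the thd u quantity above): rms of the harmonic
    content over the fundamental u1, in per cent."""
    return 100.0 * math.sqrt(sum(u ** 2 for u in u_harmonics)) / u1

# assumed illustrative spectrum echoing figure 3: a dominant 5th harmonic of
# 5 % plus minor residues below 1 % -- not the measured data themselves
u1 = 15000.0 / math.sqrt(3.0)                  # assumed phase voltage of the 15 kv line
u_h = [0.05 * u1, 0.008 * u1, 0.006 * u1]      # 5th, 7th, 11th harmonics (assumed)
print(round(thd(u_h, u1), 1))                  # 5.1 -> above 5 %, below the 8 % limit of [19]
```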
3. measurement of the effectiveness of a dc-side resonance filter with distorted ac voltage supplying the rectifiers

measurements of the rms values of the filter's current on the 3 kv dc side were conducted for various configurations of operation of the rectifier units and for different variants of operation of the adjacent substations, with the use of a specialised fibre-optical measurement system (figure 4) established at the warsaw university of technology. in this case, 12-pulse rectifiers were being exploited in the traction substation, thus the filter was devoted to attenuating the 600 hz and 1200 hz harmonics (resonance contours, figure 1) and the harmonics of higher orders by a less significant aperiodic element. the fifth harmonic occurring in the supply voltage (figure 3) [2], [20], [23] (although the limit imposed by [19] was not exceeded) causes the occurrence, on the dc side, of a 6th voltage harmonic (300 hz), which is non-characteristic for a 12-pulse rectifier (1).

figure 2. a scheme of the circuit of electrical energy delivery from an ac power supply network (psn) to a rectifier traction substation with a dc-side filter and, via the catenary, to an electric vehicle (ev).

therefore, the resonance filter lacks a resonance element for this frequency (the device was designed assuming that this harmonic does not occur in the rectified voltage), and thus it is not attenuated; moreover, in the case of operation of two rectifier units, the 6th harmonic appeared to be additionally enhanced. this phenomenon causes the current of this harmonic (300 hz) in the smoothing devices to reach considerably large values (figure 5a), while the main characteristic harmonic current (600 hz) was much lower; with the increase in load, the resulting current flowing through the filter was causing its fuse to blow. additionally, a 2nd harmonic (100 hz) appeared (figure 5), which is caused by supply voltage asymmetry and is additionally enhanced by the filter.

the undertaken measurements showed:
- the influence of the supply configuration of a traction substation and of the level of the distortion coefficient thd u on the energy quality on the dc side;
- the significance of ac supply voltage asymmetry for the operation of the resonance filters;
- the influence of the rectifier configuration in a substation on the value of the filter current and on the content of the current harmonics of the resonance smoothing device (an exemplary waveform of the rms values of the current harmonics in time is presented in figure 5a).

due to exploitation needs, in order to allow temporary operation of the filter, it was decided to increase the capacitance of the aperiodic part (100 µF – figure 1) from 100 to 350 µF, and as a result the 300 hz harmonic current value was reduced significantly (figure 5).

figure 4. measurement system in a dc-side filter cabinet in a 3 kv dc traction substation (c – capacitance, b – fuse).
figure 3. the level of higher harmonics in a 15 kv line operating in the idle state: a) participation of individual voltage harmonics; b) thd u in particular phases.
figure 5. rms values of the higher harmonics of the substation's filter current: a) with a resonance filter; b) with a resonance filter and the aperiodic element expanded from 100 to 350 µF.
4. modification of a smoothing dc-side filter

experience from the exploitation of resonance filters and the introduction of a high-voltage supply of 3 kv dc traction substations on polish railways required further analyses and the study of several new phenomena that do not occur with substations equipped with the previously used resonance filters [11], [17]. this concerned, among others, the mutual cooperation (bilateral supply of the same catenary section) of a traction substation and the supply system both on the ac and the dc side, taking into account:
- the states of normal operation (including cooperation with high-power vehicles equipped with power electronic converter drives and with adjacent substations with resonance filters);
- the most difficult operating and emergency conditions, so that the degree of exposure of the individual elements to overvoltage and excessive current, e.g. during a short circuit, had to be considered (figure 6).

in developing the concept of the filter system solution, based on the conducted analyses, preliminary work and measurements, the following requirements regarding a dc-side filter were formulated:
a) providing proper functioning under the operating conditions of a traction substation, so as not to exceed the admissible values of distortion;
b) limiting the influence of higher non-characteristic harmonics, caused by supply voltage asymmetry, on the value of the interference voltage at the dc side;
c) limiting the impact of the frequency fluctuations of the supply voltage on the characteristics of a filter;
d) providing proper operation of a filter and limiting its interaction (coupling) with the filters of adjacent substations and with the input filters of locomotives equipped with converter drives;
e) limiting the generated switching overvoltages to values lower than the withstand overvoltage of the rectifier and of the dc circuit elements, e.g. the high-speed circuit breaker.

the source of overvoltages in the circuits of the substations' filters is the inductance of the circuit during the breaking of load or short-circuit currents by the high-speed circuit breakers of the feeders. the values of the overvoltages depend on the current derivative di/dt and on the value of the inductance in the circuit during the short circuit (including the inductance of the choke and of the catenary). a higher value of the choke inductance decreases the steepness of the current rise and subsequently the steepness of the current decay at short-circuit breaking; however, it linearly increases the value of the overvoltage generated on the choke. switching overvoltages occur in the circuits of a substation at the rectifier terminals (figure 7), irrespective of the type of the applied filter, while their values and duration depend on the inductance of the chokes and on the value of the capacitance (aperiodic branch) connected in parallel behind the choke. another important criterion is the overvoltage withstand value of the rectifier, as well as of the circuit elements, with respect to the commutation and switching overvoltages in the circuit of a substation. for instance, in the case of short-circuit breaking by the only operating feeder, the overvoltage will appear between the "+" and "−" rectifier terminals, and not in the catenary. therefore, it is important that the manufacturers define the actual withstand voltage value of a rectifier, having regard to the transformer and the high-speed circuit breaker.
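the order of magnitude of these switching effects can be illustrated with two one-line estimates; every numeric value below is an assumed example, not a parameter of the substation described in the paper:

```python
# rough numeric illustration of u = l * di/dt and of the capacitor-inrush
# limitation discussed above; all values are assumed examples
L_choke = 4e-3     # smoothing choke inductance, h (assumed)
di_dt   = 1.5e6    # current decay steepness forced by the breaker, a/s (assumed)
print(f"choke overvoltage u = l*di/dt ~ {L_choke * di_dt / 1e3:.1f} kv")

u_dc = 3300.0      # no-load rectified voltage, v (assumed)
r_s  = 0.5         # added low-value series resistor, ohm (assumed)
print(f"inrush into a discharged capacitor limited to ~ {u_dc / r_s / 1e3:.1f} ka")
```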
this concerned, among others, parameters such as: the voltage connected to the "+" and "−" terminals from the side of the catenary; the rectifier insulation; the transformer insulation; the overvoltage withstand value of the chamber of a high-speed circuit breaker after breaking the short-circuit current. the last one is of high importance, especially during the possible occurrence of multiple short circuits across a high-speed breaker chamber when its insulation parameters are lost.

figure 6. short-circuit current isc at the output of a 3 kv dc traction substation (lower curve) and the filter's capacitance current ic (upper curve) – the observed peak current is caused by a sudden capacitance discharge.
figure 7. voltage uc at the capacitance terminals during short-circuit clearing.

what also proved to be beneficial was equipping the filter with elements limiting the overvoltage value (coordinated with the overvoltage withstand value of the rectifier) and with the necessary discharge resistors for discharging the energy accumulated in the capacitance of the filter as a result of an overvoltage, as well as for discharging the capacitance after the decay (breaking) of the rectifier voltage [11], [17], [23]. since the direct connection of an uncharged set of capacitors to voltage causes surge charging of the capacitor, the charging current should be limited so as to avoid damaging the capacitor. similarly, a high-current short circuit in the catenary (with a short-circuit resistance close to zero) will cause an impulse and oscillatory discharge overload of the capacitor, which may result in its damage. current impulses and oscillations in the circuit of the filter and of the dc switchgear cause excessive wear of contacts and circuit elements, so additional low-value power resistors were added in series with the filter's capacitance.

during the trials, ongoing intermittently for a few years, the above assumed requirements towards the dc-side filters were verified by performing the following studies:
- before including the filter in operation: measurements of the filter's characteristics; testing the voltage insulation and resistance; checking the correct functioning of the auxiliary circuits;
- while putting the filter into operation, checking the correct selection of the filter's protection fuse;
- the test under load (heating of the elements, checking the transient states) and with connecting and switching off the subsequent rectifier units;
- checking the discharging time of the capacitors with the filter off, as well as checking the proper selection of the discharging resistors;
- checking the elements of the filter and the correct selection of the filter protection fuse at a maximum-value short circuit in the 3 kv catenary, with functional testing of the main circuit and the auxiliary circuits (signalling of a fuse blow, capacitor damage, etc.);
- measurements of the filter operation effectiveness (under the conditions attained at the substation selected for the test study) with one and two rectifier units, for the existing quality conditions of the 15 kv voltage supplying the substation and with various schemes of bilateral supply of the substation on the 3 kv side, with section cabins and adjacent substations operating or not.

the assumed specifications of the filter were confirmed by in-situ measurements of:
- the characteristics of the filter, before starting operation in the substation,
- the characteristics of the filter attenuation according to (7),
- the harmonic voltages before and behind the filter,
- the filter's current spectrum during normal operation,
- the current of the filter during a high-current short-circuit test in the catenary,
- the calculated disturbing psophometric voltage uz according to (6).

the presented measurements and theoretical analyses showed, in certain conditions, the increase of the 100 hz harmonic that occurs at the dc side of the substation in case of ac supply voltage asymmetry. filters with resonance contours can efficiently attenuate a specific harmonic by parallel contours tuned to its frequency, but they may enhance (or fail to attenuate) some harmonics between the frequencies of the resonance elements, and they require periodic tuning. hence, a modified filter system was proposed according to the patents [11], [17]. lc filters have an appropriate low-pass characteristic with a better attenuation coefficient (7) (figure 8) and they do not require tuning, but the necessary element is a high capacitance, in the 400÷800 µF range, in order to efficiently attenuate the lower harmonics. the disturbing voltage (6) for the analysed conditions with lc filters is considerably lower than the admissible value of 16.5 v and lower than for the substations operating with the hitherto used resonance filters (the calculated disturbing voltage for the lc filter is 5 v, while for the hitherto used resonance filter it is 17.1 v). lc filters also remain efficient with the increase of distortions in the ac voltage supplying the substation [23].
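a minimal sketch of the low-pass behaviour behind figure 8 follows, for an idealised series-choke/shunt-capacitor filter; the component values are assumed for illustration only, and the simple formula ignores damping and the catenary impedance:

```python
import math

def lc_attenuation(f, L, C):
    """attenuation coefficient w = u_on/u_n (eq. (7)) of an ideal series-l,
    shunt-c low-pass at frequency f: w(f) = 1/|1 - (2*pi*f)**2 * l * c|."""
    return 1.0 / abs(1.0 - (2.0 * math.pi * f) ** 2 * L * C)

# assumed example values, not the actual pkp filter data: a 10 mh choke with an
# 800 uf capacitor (the upper end of the 400-800 uf range quoted above)
L, C = 10e-3, 800e-6
for f in (300, 600, 900, 1200, 1800, 2400):
    print(f"{f} hz: w = {lc_attenuation(f, L, C):.4f}")   # monotonic decrease
```

unlike a resonance filter, this characteristic has no notches to detune and keeps falling above the last resonance frequency, which is the property the conclusions below rely on.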
5. monitoring and diagnostics of the filter's operation in a traction substation

in order to monitor the operating conditions of the new filters, czat3000plus microprocessor controllers, used in many automation systems developed by elester-pkp [27], are applied in traction substations. the system of czat3000plus controllers has a modular structure and forms a set with command and registration modules and with hvm (high voltage measurement) transducers or voltage transformers and temperature sensors. the individual czat3000plus controllers are connected with the object cabinet of the remote control via can-bus/rs485, which eliminates the necessity of laying a large number of control cables from the object cabinet to the controlled devices.

the waveforms of the filter current (figure 9) and of the dc voltage at the output of the substation, as well as the temperature in the filter's chamber, are measured on-line and are used in the automation and security systems, especially for the undervoltage, overvoltage, overcurrent and temperature protections. selected measured values are registered on a memory card (maximum instantaneous values and values averaged per 1 s) and are used for the evaluation of the filter's operating conditions. one may notice the increased load of the filter with the increased load of the substation (figure 10a) and the decrease of the filter's load at the occurrence of regenerative braking of vehicles (figures 10b, 11b). additionally, one may observe the increase of the filter's load with the decrease of the capacitance voltage caused by the increase of the substation's load by the trains, and the decrease of the filter's load current with the decrease of the substation's load (figures 10a, 11a). also, the presence of a short-term increase of the capacitance voltage above the value of 3600 v under the conditions of trains' regenerative braking has been noted (figures 10b, 11b). figures 10a, 10b and 11a, 11b present exemplary results of registration by the czat3000 controller: waveforms of the voltage u on the filter's capacitance (the output voltage of the 3 kv dc substation) in the upper figures and of the current i of the filter's load in the bottom figures.

figure 8. comparison of the attenuation characteristic (7) of a hitherto used resonance filter (marked 12-p rez) with a 12-pulse unit and of an lc filter (lc 12-p); the noticeably better attenuation of the lc filter with the supply of the substation by distorted and asymmetric voltage, when the 300 hz harmonic is present.
figure 9. exemplary time-curve of a filter current.
figure 10 a, b. exemplary waveforms of the output voltage u (upper figure) and the filter's current i (bottom figure) from registration via the czat controller in a traction substation.
figure 11 a, b. exemplary waveforms of the output voltage u (upper figure) and the filter's current i (bottom figure) from registration via the czat controller in a traction substation.

6. conclusions

the paper focuses on the analysis, based on measurements and monitoring, of the effectiveness of applying dc-side filters in 3 kv traction substations supplied by a 3-phase ac medium-voltage line with distorted and asymmetrical voltage that still fulfils the energy quality standards [19]. using the results of the conducted analyses, studies and measurements, as well as the experience in the filters' exploitation, it can be stated that:
- upon measuring the quality of the energy supplied from the ac line, the selection of the scheme and parameters of a smoothing filter for a dc traction substation should be performed so as to assess the level of voltage distortions. it is worth underlining that even if the energy quality standards [19] are fulfilled, this is not enough to ensure the proper operation and effectiveness of the operating resonance dc-side filters and the compatibility of track circuits with rolling stock equipped with power electronic converters;
- the use, in new traction substations supplied by 15 kv voltage, of lc filters with higher inductance values than those hitherto used improved the filters' efficiency and eliminated the problems of fuse burn-out, even in the case of distorted ac voltage supplying a traction substation. lc filters do not require tuning and have higher effectiveness (figure 8) than resonance filters in the range of frequencies above the last resonance element
acknowledgement  the authors would like to express their gratitude to colleagues from companies elester pkp and pkp energetyka s.a. for cooperation during measurements and delivery of data from diagnostic systems. references  [1] j. altus, m. novak, a. otcenasova, m. pokorny, a. szeląg, “quality parameters of electricity supplied to electric railways, scientific letters of the university of żilina-communications”, no 2-3/2001. [2] j. arillaga, r. w. neville, “power system harmonics”, second edition, john wiley&sons, ltd, 2003. [3] a. białoń, a. szeląg, w. zając, “disturbing influence of electric traction vehicles on signalling and control circuit on silesian regional railway”, international symposium on electromagnetic compatibility and electromagnetic ecology emc. vol. 95., emc 95 eme, 1995, saint-petersburg, pp. 222-224. [4] s. bolkowski, w. brociek, r.wilanowicz, “comparative analysis of influence of the type line supplying nonlinear load on deformation of voltage and current in the power system”, computer applications in electrical eng, vol.11, 2013, pp. 11 23. [5] w.brociek, r.wilanowicz, z. filipowicz, “frequency characteristics of the power line with nonlinear load”, przegląd elektrotech-niczny, no 4, 2009, pp. 62-64. [6] w. brociek, r. wilanowicz, “estimation of higher voltage and current harmonics generated by nonlinear load”, przegląd elektrotechniczny no 1, 2010, pp.141-143. [7] k. buchta, a. szeląg, “application of statistic and probabilistic methods for assessment of quality of 3kv dc network energy delivery to traction vehicle”, chapt. 4. modern electric traction: volume 2 power supply (ed. k., karwowski, a., szeląg), 2009, politechnika gdańska, pp. 22-46. [8] ccitt directives concerning the protection of telecommunication lines against harmful effects from electric power and electrified railway lines. vol.ii, iii, iv, geneva 1989. [9] p. c. coles, m. fracchia, r.j. hill, p. pozzobon, a. szeląg, “identification of catenary resonance conditions on 3 kv dc traction systems”, 7th mediterranean electrotechnical conf. melcon'94, antalya, turkey, 12-14 april 1994, pp.825-828. [10] w. czuchra, m. kowalczewski, w. zając, “analiza przyczyn odkształceń napięć wyjściowych prostownika 12-pulsowego na podstacji trakcyjnej” (in polish), 6th int. conference “modern electric traction in integrated xxist century europe”, met’ 2003, warsaw, pp. 17-21. [11] gamma filter for 3 kv dc traction substations with 12-pulse rectifiers supplied by medium voltage lines). technical documentation (in polish), 2011, pkp energetyka s.a. [12] k. karwowski, j. skibicki, “analiza stabilności pracy sieciowych pojazdów z napedem energoelektronicznym” (in polish), konf. semtrak 2004, cracow – zakopane 2004, pp. 223-230. [13] m. lewandowski, a. szeląg, “minimizing harmonics of the output voltage of the chopper inverter”, arch. elektrotechnik, vol. 69, 1986, pp.223-226. [14] a. mariscotti, “statistical evaluation of measured voltage spectra in dc railways”, 17th symposium imeko tc 4, 3rd symposium imeko tc 19 and 15th iwadc workshop instrumentation for the ict era sept. 8-10, 2010, kosice, slovakia, pp. 8-10. [15] l. mierzejewski, a. szeląg, “an analysis of disturbances in power utility systems caused by traction rectifier substation”, comprail’96 conference, berlin, 1996, pp. 433-442. [16] a. ogunsola, a. mariscotti, “electromagnetic compatibility in railways-analysis and management”, lecture notes in electrical engineering, isbn: 978-3-642-30280-0, volume 168, 2013, springer. 
[17] patent republic of poland no. 186399, filtr aperiodyczny i sposób wyznaczania parametrów filtra prostowników trakcji elektrycznej (aperiodic filter and a method of determination of the parameters of traction rectifiers' filter), 2004.
[18] en 50388:2011, railway applications – power supply and rolling stock – technical criteria for the coordination between power supply (substation) and rolling stock to achieve interoperability.
[19] en 50160:2000, voltage characteristics of electricity supplied by public distribution systems.
[20] a. szeląg, l. mierzejewski, ground transportation systems, chapter in: encyclopedia of electrical and electronics eng., suppl. 1, john wiley & sons, inc., 1999, pp. 169-194.
[21] a. szeląg, t. maciołek, a 3 kv dc electric traction system modernisation for increased speed and trains power demand – problems of analysis and synthesis, przegląd elektrotechniczny, r. 89, 3a/2013, pp. 21-28.
[22] a. szeląg, m. patoka, issues of low frequency electromagnetic disturbances measurements in traction vehicles equipped with power electronics drive systems, przegląd elektrotechniczny, 9/2013, pp. 290-296.
[23] a. szeląg, m. steczek, analysis of input impedance frequency characteristic of electric vehicles with a.c. motors supplied by 3 kv dc system for reducing disturbances in signalling track circuits caused by the harmonics in the vehicle's current, przegląd elektrotechniczny, r. 89, 3a/2013, pp. 29-33.
[24] a. szeląg, l. mierzejewski, modelling and verification of simulation results in computer aided analysis of electric traction systems, comprail 2000, computers in railways vii, bologna, italy, 2000, pp. 599-610.
[25] a. szeląg, t. maciołek, m. patoka, correlation analysis of results of measurements in ac power supply of dc traction substations for identification of harmonic disturbances, 20th imeko tc4 int. symposium, research on electric and electronic measurement for the economic upturn, benevento, italy, 2014, pp. 958-963.
[26] w. zając, a. szeląg, harmonic distortion caused by suburban and underground rolling stock with dc motors, power electronics congress ciep '96, technical proceedings, v ieee international, 1996, pp. 200-206.
[27] url: http://www.elester-pkp.com.pl

acta imeko
issn: 2221-870x
april 2017, volume 6, number 1, 20-26

numerical experimental investigation of comparison data evaluation method using preference aggregation

sergey v. muravyov, irina a. marinushkina, diana d. garif
national research tomsk polytechnic university, pr. lenina 30, 634050 tomsk, russia

section: research paper
keywords: interlaboratory comparisons; reference value; largest consistent subset; preference aggregation; robust method
citation: sergey muravyov, irina marinushkina, diana garif, numerical experimental investigation of comparison data evaluation method using preference aggregation, acta imeko, vol. 6, no. 1, article 4, april 2017, identifier: imeko-acta-06 (2017)-01-04
editor: paolo carbone, university of perugia, italy
received july 4, 2016; in final form october 13, 2016; published april 2017
copyright: © 2017 imeko.
this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was supported by the ministry of education and science of russian federation
corresponding author: sergey muravyov, e-mail: muravyov@tpu.ru

abstract
an integrated software for the experimental testing of a preference aggregation method for interlaboratory comparison data processing is presented. the data can be obtained by a monte-carlo simulation and/or taken from real comparisons. numerical experimental investigations with the software have shown that, in contrast to traditional techniques of interlaboratory comparison data processing, the preference aggregation method provides a robust comparison reference value that is closer to the nominal value.

1. introduction

interlaboratory comparisons (ic) are now a quite common and important metrological procedure used in key comparisons [1], measurement laboratory proficiency testing [2], etc. the procedure consists in the arrangement and implementation of the assessment of the measurement quality of a given object characteristic by several different laboratories, in accordance with definite prescribed rules. the main task of any kind of interlaboratory comparison is establishing a reference value xref of the measured quantity that characterizes a largest subset of consistent (reliable) measurement results, the so-called largest consistent subset (lcs) [3]. for this aim, the laboratories participating in the comparison estimate the same nominal value xnom of the measured quantity. laboratories having unreliable measurement results do not participate in establishing the final reference value. it should be noticed that, in contrast to proficiency testing, the official procedure of key comparisons (kcs) of the mra [1] does not allow discarding any of the participant results, even if a result looks unreliable or outlying. in this paper we will adhere to the hypothetical position that the two types of ics can be treated as similar actions tolerating the exclusion of outliers, understanding that the resulting reference value can be biased in the sense that some participants were excluded from its computation.

there are different approaches to check the consistency of laboratory measurement results and to find the reference value xref; see, for example, [3]-[7]. the choice of a particular consistency test method depends on the kind of travelling standard, the measurement conditions and the number of participating laboratories. widely used methods are statistical ones characterizing the competence of the ic participants to carry out measurements, based on, for example, the calculation of differences between the laboratory measurement results and the values assigned by the comparison providers, percent differences, percentiles, or ranks [8]. however, these methods usually impose limitations on the feasible number of ic participating laboratories. moreover, statistical methods may evince a low discriminating ability, that is, the capacity to distinguish truly unreliable laboratories from laboratories providing results that can be trusted.

in [4]-[5], a rather widely known so-called procedure a was presented. the procedure uses the weighted mean value y:

$y = \sum_{i=1}^{m} \frac{x_i}{u^2(x_i)} \Bigg/ \sum_{i=1}^{m} \frac{1}{u^2(x_i)}$ , (1)
where xi is the estimate of the nominal value provided by the i-th laboratory, u(xi) are the corresponding standard uncertainties, and m is the number of ic participating laboratories. the standard uncertainty of the value y has the form:

$u(y) = \left( \sum_{i=1}^{m} \frac{1}{u^2(x_i)} \right)^{-1/2}$ . (2)

in this procedure the weighted average value y is accepted as the reference value xref only if its consistency with the data of the ic participating laboratories is confirmed according to the criterion χ2. if the consistency test is not satisfied, it is proposed in [3] to use a strategy of successive exclusion of outliers, that is, of results which are not consistent with the others within the limits of the claimed uncertainties. a result is deemed inconsistent if |en| > 2, where

$e_n = \frac{x_i - y}{\sqrt{u^2(x_i) - u^2(y)}}\,, \quad i = 1, \dots, m$ . (3)

the process of exclusion of one inconsistent result is repeated until consistency of the results by the criterion χ2 is achieved. for the lcs obtained in this way, the reference value is determined by (1), where instead of m the number of reliable laboratories m' is used. procedure a can be reasonably applied if the measurement results provided by the participating laboratories are characterized by a normal probability distribution.
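a minimal python sketch of procedure a as summarized above (eqs. (1)-(3)) follows; it assumes numpy and scipy are available, and the 5 % significance level and the demo data are our own choices, not taken from the paper:

```python
import numpy as np
from scipy.stats import chi2

def procedure_a(x, u, p=0.05):
    """sketch of procedure a: weighted mean (1), its uncertainty (2), a
    chi-squared consistency check and one-by-one exclusion of the worst |en| (3).
    x, u: lab estimates and standard uncertainties; returns (y, u(y), kept labs)."""
    idx = list(range(len(x)))                        # labs still in the lcs
    while True:
        xs, us = np.asarray(x)[idx], np.asarray(u)[idx]
        w = 1.0 / us**2
        y = np.sum(w * xs) / np.sum(w)               # eq. (1)
        uy = np.sum(w) ** -0.5                       # eq. (2)
        chi2_obs = np.sum((xs - y) ** 2 / us**2)     # observed chi-squared
        if len(idx) == 1 or chi2.sf(chi2_obs, len(idx) - 1) > p:
            return y, uy, idx                        # consistency achieved
        en = (xs - y) / np.sqrt(us**2 - uy**2)       # eq. (3)
        del idx[int(np.argmax(np.abs(en)))]          # drop the worst lab

# illustrative data only: lab 3 (value 2.70) is an obvious outlier
x = [3.02, 2.96, 3.01, 2.70, 3.05]
u = [0.05, 0.04, 0.06, 0.05, 0.07]
y, uy, kept = procedure_a(x, u)
print(round(y, 3), round(uy, 3), kept)   # ~3.0, with lab index 3 excluded
```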
that is why there is a need to develop robust methods for interlaboratory comparison data processing that are well-behaved in cases where the distribution law of the laboratory measurement results differs from normal or is unknown. for example, in paper [9] nielsen proposed a method whose successful application has been described in [10]. the method considers the uncertainty range u(xi) as a rectangular distribution and deems that each participant gives one vote to each value within its uncertainty range and no votes to values outside this range. this produces a robust algorithm for the determination of the reference value xref that is insensitive to outliers, i.e. to results with an uncertainty considerably lower than those of the other participants.

this paper is devoted to the software implementation of the comparison reference value determination method presented in terms of preference aggregation [11]-[13]. in section 2 a way is considered to transform the uncertainty intervals provided by the participating laboratories into rankings of measurand values. the obtained rankings, constituting an initial preference profile, can then serve as input data for the determination of a consensus ranking by the kemeny rule, which allows one to find the reference value of the measurand and to assess the ability of the participating laboratories to provide reliable measurement results. in section 3, specially developed software is discussed that allows numerical experimental research on ic methods, including procedure a, the nielsen algorithm and the proposed preference aggregation method. in section 4, the processing of real comparison data by the preference aggregation method is presented.

2. ic data processing on the base of preference aggregation

let us define the procedure of transformation of the uncertainty intervals provided by the laboratories into rankings. for this aim, designate the uncertainty interval gained by the i-th laboratory as $u(x_i) = [u_l(x_i), u_u(x_i)]$. define a, a range of actual values (rav) of the measurand, for converting the uncertainty intervals of the m laboratories into rankings. the initial value a1 of a is chosen equal to the least lower bound of the uncertainty intervals provided by the laboratories, $a_1 = \min\{u_l(x_i) \mid i = 1, \dots, m\}$. the finite value an of a is chosen equal to the largest upper bound of the laboratories' uncertainty intervals, $a_n = \max\{u_u(x_i) \mid i = 1, \dots, m\}$. divide a into n – 1 equal intervals (divisions) in such a way that their number guarantees a necessary and sufficient accuracy of the representation of the measurand values. there will then be n values of the measurand a = {a1, a2, …, an} corresponding to the boundaries of the division intervals (marks), see figure 1. details on the proper selection of a particular value of n can be found in [14].

compose a preference profile λ of m rankings representing the uncertainty intervals of the laboratories. each i-th ranking, i = 1, …, m, is a union of binary relations of strict order and equivalence possessing the following properties at k = 1, …, m and i, j = 1, …, n:
a) $a_i \succ a_j$ if $a_i \in u(x_k) \wedge a_j \notin u(x_k)$;
b) $a_i \sim a_j$ if $a_i, a_j \in u(x_k) \vee a_i, a_j \notin u(x_k)$;
c) $a_i \prec a_j$ if $a_i \notin u(x_k) \wedge a_j \in u(x_k)$.

then the measurement result indicated by a laboratory is represented by a ranking of the measurand values where one or more equivalent values belonging to the uncertainty interval of the laboratory are more preferable. all other values of a in this ranking are less preferable and equivalent to each other. thus, each ranking includes a single symbol of strict order ≻ and n – 1 symbols of equivalence ~.

to aggregate the m rankings means to determine a single preference relation β ensuring a best compromise between them. such a ranking β is called a consensus ranking. in the authors' works [12], [15], [16] it was shown that the kemeny median can be used in the capacity of consensus ranking. one of the possible algorithms is based on the branch and bound technique and is described in [12]. as soon as a consensus ranking β is found, the value ranked first in it can be selected as the reference value xref of the measurand. the lcs consists of the laboratories whose uncertainty intervals include the revealed reference value xref; laboratories whose intervals do not contain the reference value are ignored when forming the largest consistent subset. the standard uncertainty of the obtained reference value for the lcs is defined as the smallest of two values, namely the maximum lower bound $u_l(x_i) \le x_{ref}$ and the minimum upper bound $u_u(x_i) \ge x_{ref}$ of the uncertainty intervals of the laboratories.

figure 1. an example of shaping a range of actual values a (marks a1–a8 of the rav; the initial and finite values bound the rav divisions).
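the section-2 construction can be illustrated end to end on the 7-laboratory example discussed in section 3 below; the brute-force search in this sketch is a toy replacement for the branch-and-bound algorithm of [12] and is feasible only for a small number of marks:

```python
from itertools import permutations

# marks a1..a6 of the rav and the preference profile transcribed from the
# figure-3 example of section 3: for each lab, the set of mark indices that
# fall inside its uncertainty interval
marks = [11.43, 11.69, 11.95, 12.21, 12.47, 12.73]
profile = [{1, 2}, {1, 2}, {2, 3, 4, 5}, {1, 2}, {2, 3, 4}, {1, 2, 3}, {0, 1}]

n = len(marks)
# s[i][j] = number of labs strictly preferring mark i to mark j (property a)
s = [[sum(i in v and j not in v for v in profile) for j in range(n)] for i in range(n)]

def kemeny_cost(order):
    """pairwise disagreements of a strict order with the profile."""
    return sum(s[j][i] for p, i in enumerate(order) for j in order[p + 1:])

best = min(permutations(range(n)), key=kemeny_cost)
x_ref = marks[best[0]]                                        # value ranked first
lcs = [k + 1 for k, v in enumerate(profile) if best[0] in v]  # labs covering x_ref
print(x_ref, lcs)   # 11.95 and labs 1-6; lab 7 falls outside the lcs
```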
3. experimental investigations of ic data processing methods

to investigate experimentally the proposed method for ic data processing on the base of preference aggregation, special software called interlabcom was developed in the microsoft visual c# environment. the software has a user-friendly interface and, in its current version, implements the following three ic data processing methods: the proposed preference aggregation method (pam), procedure a and the nielsen algorithm. the measurement results provided by laboratories can be real and/or simulated by means of a program pseudo-random number generator, which provides an opportunity to realize various modifications of the monte-carlo method when conducting numerical computing experiments. there is a possibility to choose a uniform or a normal distribution of the generated measurement results. uniformly distributed comparison data xi and u(xi) can be generated at a given value xnom using the standard library function rand(). normally distributed comparison data are obtained from uniformly distributed data using the well-known box–muller transform [17] (a minimal generation sketch is given at the end of this section).

when preparing an experiment, in a special window one can preset the nominal measurand value xnom, the number of participating laboratories m, and the number of measurand values n. by pushing the button "generation", the generated measurement results xi and their uncertainties u(xi) are displayed on the monitor screen. the uncertainty u(xi) is represented as the couple of its lower and upper bounds. a graph of the initially generated ic data is indicated in a special window (figure 2): the uncertainty intervals are shown in a two-dimensional graph with the dimensions "measurand" (vertical axis) and "laboratories" (horizontal axis). the software allows the indication of the ic data processing of each method in a separate window, including a table with the initial comparison data (measurand values and corresponding uncertainty intervals), a graph of the processed comparison data and a conclusion on the consistency of the results of each participating laboratory. all the ic data processing results of the different methods are reduced to a summary table and graph. an inconsistent result is labelled by a special mark and the corresponding data are removed from the processed set. the graph and the final data of a comparison can be saved in microsoft excel format for further processing.

in order to demonstrate the operation of the developed software tool, some ic measurement data for 7 participating laboratories are shown in figure 3. in this case the rav, with lower and upper bounds 11.43 and 12.73, is divided into 5 equal divisions, whose bounds define 6 values a of the measurand. the corresponding preference profile λ, constructed as described in section 2, has the following view:
λ1: a2 ~ a3 ≻ a1 ~ a4 ~ a5 ~ a6
λ2: a2 ~ a3 ≻ a1 ~ a4 ~ a5 ~ a6
λ3: a3 ~ a4 ~ a5 ~ a6 ≻ a1 ~ a2
λ4: a2 ~ a3 ≻ a1 ~ a4 ~ a5 ~ a6
λ5: a3 ~ a4 ~ a5 ≻ a1 ~ a2 ~ a6
λ6: a2 ~ a3 ~ a4 ≻ a1 ~ a5 ~ a6
λ7: a1 ~ a2 ≻ a3 ~ a4 ~ a5 ~ a6

for this profile two optimal consensus rankings exist:
a3 ≻ a2 ≻ a4 ≻ a5 ≻ a6 ≻ a1
a3 ≻ a2 ≻ a4 ≻ a5 ≻ a1 ≻ a6,
from which the final consensus ranking is β = {a3 ≻ a2 ≻ a4 ≻ a5 ≻ a6 ~ a1}, where the first position is occupied by the value a3 = 11.95. this value is accepted as the measurand reference value xref.

our hypothesis consists in that, as ordinal data are used in the pam, a reference value obtained by means of this method should not significantly depend on the particular probability distribution law of the measurement results. for the experimental investigation of this hypothesis, normally distributed data for 100 individual problems were generated, distinguished from each other by random uncertainty intervals, with the number of laboratories m = 15 and xnom = 3. these data were processed by the pam, procedure a and the nielsen algorithm. the same steps were undertaken under similar conditions for uniformly distributed generated data. in table 1 and table 2 the results of the numerical experimental investigation of the pam, as compared with procedure a and the nielsen algorithm, are collected.
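the following minimal sketch shows the data generation step described above; the interval half-width range is an assumed choice of ours, not the interlabcom implementation:

```python
import math, random

def box_muller(mu, sigma):
    """one normal deviate from two uniform ones via the box-muller transform [17]."""
    u1, u2 = 1.0 - random.random(), random.random()   # u1 in (0, 1], avoids log(0)
    return mu + sigma * math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

def simulate_labs(x_nom, m, sigma=0.1, normal=True):
    """simulated comparison data: each lab reports an estimate x_i and an
    uncertainty interval of random half-width (the width range is assumed)."""
    half_width = math.sqrt(3.0) * sigma               # uniform with the same std
    labs = []
    for _ in range(m):
        xi = box_muller(x_nom, sigma) if normal else \
             random.uniform(x_nom - half_width, x_nom + half_width)
        h = random.uniform(0.5 * sigma, 3.0 * sigma)
        labs.append((xi, xi - h, xi + h))
    return labs

random.seed(0)
for xi, lo, hi in simulate_labs(3.0, m=5):
    print(f"x = {xi:.3f}, u(x) = [{lo:.3f}, {hi:.3f}]")
```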
the fact that the program model allows a nominal value to be assigned and known beforehand makes it possible to assess the quality of a method m intended for ic data processing by calculating the deviation
ξ = |xref(m) − xnom| . (4)
thus, table 1 includes xref and ξ for each individual problem solved by each of the three methods for the normal distribution, and table 2 includes the values acquired for the uniform distribution.

figure 2. one of the software user interface windows.
figure 3. example of ic measurement results.

the experimental data were used to plot curves illustrating how the values ξ change from problem to problem for each comparison method. the values ξ were taken for all 100 individual problems and organized in ascending order. figure 4 shows the graph of deviations ξ obtained by the proposed pam compared to procedure a for uniform (u) and normal (n) distributions of comparison data. it should be noticed that procedure a is not intended to be applied to data distributed by laws other than normal; therefore, the experimental results obtained under the uniform law are given here in order to demonstrate the non-robust method behaviour compared to the robust ones over the same data. one can see in figure 4 that the particular kind of probability distribution of the measured results practically does not influence the performance of the pam (curves 3 and 4). it means that the pam is a robust procedure. over the same data, procedure a (curves 1 and 2) has shown a considerable increase of ξ when passing from normally to uniformly distributed measurements. figure 5 represents a graph of deviations ξ obtained by the proposed pam compared to the nielsen algorithm for uniform (u) and normal (n) distributions of comparison data.

table 2. a fragment of the processing results for generated comparison data, xnom = 3.0 a.u.; uniform distribution.
problem number | pam: xref, ξ | procedure a: xref, ξ | nielsen algorithm: xref, ξ
1 | 3.01, 0.01 | 2.92, 0.08 | 2.95, 0.05
2 | 2.97, 0.03 | 2.92, 0.08 | 2.95, 0.05
3 | 3.12, 0.03 | 2.43, 0.57 | 3.25, 0.25
4 | 3.04, 0.04 | 2.69, 0.31 | 2.67, 0.33
5 | 2.98, 0.02 | 2.65, 0.35 | 2.46, 0.54
6 | 2.98, 0.02 | 2.16, 0.84 | 2.86, 0.14
7 | 2.89, 0.11 | 2.54, 0.46 | 2.86, 0.14
8 | 2.81, 0.19 | 2.57, 0.43 | 2.54, 0.46
9 | 2.91, 0.09 | 2.49, 0.51 | 2.74, 0.26
10 | 3.10, 0.10 | 3.00, 0.00 | 2.71, 0.29
11 | 2.96, 0.04 | 2.62, 0.38 | 3.20, 0.20
12 | 3.04, 0.04 | 2.97, 0.03 | 3.37, 0.37
13 | 3.14, 0.14 | 2.69, 0.31 | 2.73, 0.27
14 | 2.98, 0.02 | 2.90, 0.10 | 3.00, 0.00
15 | 2.90, 0.10 | 2.54, 0.46 | 3.06, 0.06
…
86 | 2.99, 0.01 | 3.01, 0.01 | 2.95, 0.05
87 | 2.84, 0.17 | 2.66, 0.34 | 2.90, 0.10
88 | 3.03, 0.03 | 2.88, 0.12 | 2.85, 0.15
89 | 2.94, 0.06 | 2.77, 0.23 | 2.85, 0.15
90 | 2.86, 0.14 | 2.40, 0.60 | 3.09, 0.09
91 | 2.98, 0.02 | 2.90, 0.10 | 2.79, 0.21
92 | 3.11, 0.11 | 2.38, 0.62 | 3.27, 0.27
93 | 2.97, 0.03 | 2.88, 0.12 | 2.75, 0.25
94 | 2.73, 0.27 | 1.90, 1.10 | 2.69, 0.31
95 | 3.00, 0.00 | 2.49, 0.51 | 3.11, 0.11
96 | 2.96, 0.04 | 2.95, 0.05 | 3.11, 0.11
97 | 2.97, 0.03 | 2.90, 0.10 | 3.12, 0.12
98 | 2.95, 0.05 | 2.62, 0.38 | 3.12, 0.12
99 | 2.98, 0.02 | 2.85, 0.15 | 2.89, 0.11
100 | 3.08, 0.08 | 3.01, 0.01 | 2.93, 0.07
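the quality indicator (4) and the ascending ordering used to plot figures 4 and 5 are straightforward to reproduce; a small hedged sketch over the first five pam results of table 1 below:

```python
def deviation(x_ref, x_nom=3.0):
    """quality indicator (4): xi = |x_ref(m) - x_nom|."""
    return abs(x_ref - x_nom)

# pam reference values of the first five problems of table 1 (normal law)
x_refs = [2.97, 2.91, 2.95, 2.98, 3.05]
xi = sorted(deviation(x) for x in x_refs)   # ascending, as in figure 4
print([round(v, 2) for v in xi])            # [0.02, 0.03, 0.05, 0.05, 0.09]
```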
table 1. a fragment of the processing results for generated comparison data, xnom = 3.0 arbitrary units (a.u.); normal distribution.
problem number | pam: xref, ξ | procedure a: xref, ξ | nielsen algorithm: xref, ξ
1 | 2.97, 0.03 | 2.92, 0.08 | 2.95, 0.05
2 | 2.91, 0.09 | 2.90, 0.10 | 2.93, 0.07
3 | 2.95, 0.05 | 2.91, 0.09 | 2.91, 0.09
4 | 2.98, 0.02 | 2.98, 0.02 | 2.90, 0.12
5 | 3.05, 0.05 | 2.90, 0.10 | 2.96, 0.04
6 | 2.89, 0.11 | 2.89, 0.11 | 2.86, 0.14
7 | 2.98, 0.02 | 3.00, 0.00 | 2.79, 0.21
8 | 2.93, 0.07 | 2.98, 0.02 | 3.10, 0.10
9 | 2.98, 0.02 | 2.86, 0.14 | 2.91, 0.09
10 | 2.97, 0.03 | 2.97, 0.03 | 2.68, 0.32
11 | 2.98, 0.02 | 2.95, 0.05 | 3.02, 0.02
12 | 2.92, 0.08 | 2.99, 0.01 | 2.85, 0.15
13 | 2.99, 0.01 | 2.97, 0.03 | 2.92, 0.08
14 | 2.96, 0.04 | 2.99, 0.01 | 2.92, 0.08
15 | 2.93, 0.07 | 2.99, 0.01 | 2.99, 0.01
…
86 | 3.03, 0.03 | 2.90, 0.11 | 2.94, 0.06
87 | 2.99, 0.01 | 2.97, 0.03 | 2.85, 0.15
88 | 2.94, 0.06 | 2.97, 0.03 | 2.83, 0.17
89 | 2.98, 0.02 | 2.94, 0.06 | 2.74, 0.26
90 | 2.91, 0.09 | 2.94, 0.06 | 2.88, 0.12
91 | 2.93, 0.07 | 2.97, 0.03 | 2.92, 0.08
92 | 2.98, 0.02 | 2.90, 0.10 | 2.93, 0.07
93 | 2.97, 0.03 | 2.81, 0.19 | 2.70, 0.30
94 | 2.99, 0.01 | 2.99, 0.01 | 2.94, 0.06
95 | 2.99, 0.01 | 2.78, 0.22 | 2.87, 0.13
96 | 2.96, 0.04 | 2.99, 0.01 | 2.83, 0.17
97 | 2.98, 0.02 | 2.97, 0.03 | 3.05, 0.05
98 | 2.97, 0.03 | 2.84, 0.16 | 2.93, 0.07
99 | 2.99, 0.01 | 2.91, 0.09 | 3.11, 0.11
100 | 3.01, 0.01 | 3.01, 0.01 | 2.85, 0.15

figure 4. deviations ξ obtained by the pam and procedure a for uniform (u) and normal (n) distributions of comparison data (curves: 1 – procedure a (u), 2 – procedure a (n), 3 – pam (u), 4 – pam (n)).
figure 5. deviations ξ obtained by the pam and the nielsen algorithm for uniform (u) and normal (n) distributions of comparison data (curves: 1 – nielsen algorithm (u), 2 – nielsen algorithm (n), 3 – pam (u), 4 – pam (n)).

it can be seen from figure 5 that the pam provides estimates of xref closer to the nominal value xnom than the nielsen algorithm. at the same time, the latter method (curves 1 and 2) shows a discrepancy between normally and uniformly distributed data of about 0.18, which is more than twice that of the pam, whose discrepancy is 0.08.

4. real comparison data processing by the method of preference aggregation

let us demonstrate the applicability of the pam to real-world examples of comparison data taken from open sources [10], [18].

4.1. the key comparison on high frequency power
participating national metrology institutes (nmis) in the key comparison (kc) cipm ccem.rf-k25.w [18] determined the effective efficiency and the calibration factor of two waveguide thermistor power sensors in the frequency range from 33 to 50 ghz. the effective efficiency of the travelling standard was determined by the formula
ηeff = pdc,sub / prf,abs , (5)
where pdc,sub is the substituted dc power and prf,abs is the total absorbed rf power. participants of the comparison also calculated the calibration factor ηcal according to the equation
ηcal = (1 − |Γ|²) ηeff , (6)
where Γ is the input reflection coefficient of the travelling standard, which was measured as a complex quantity stated as magnitude and phase at the measuring frequencies. the median absolute deviation was used to identify outliers:
σ ≈ s(mad) ≡ k1 · median{|ηi − ηmed|} , (7)
where k1 is a multiplier determined by simulation and ηmed is the median value of the measurement results {η}. a value ηi differing from the median by more than 2.5·s(mad) has been regarded as an outlier. it has been excluded from the calculation of the reference value.
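a hedged sketch of this screening, implementing (7) together with the 2.5·s(mad) criterion; the multiplier k1 ≈ 1.483 (the usual consistency factor for normal data) is an assumption here, since the report determines k1 by simulation. the values used are the effective efficiencies of table 3 below:

```python
import statistics

def mad_outliers(etas, k1=1.483, threshold=2.5):
    """results differing from the median by more than threshold * s(mad),
    where s(mad) = k1 * median{|eta_i - eta_med|} as in (7)."""
    eta_med = statistics.median(etas)
    s_mad = k1 * statistics.median([abs(e - eta_med) for e in etas])
    return [e for e in etas if abs(e - eta_med) > threshold * s_mad]

eta_eff = [0.9153, 0.9167, 0.9184, 0.9157, 0.9143,
           0.9160, 0.8360, 0.9174, 0.9375]
print(mad_outliers(eta_eff))   # flags 0.8360 (nim) and 0.9375 (nrc)
```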
this criterion was used to check each measurement result:
|ηi − ηmed| > 2.5 · s(mad) . (8)
the reference value of the kc was determined in accordance with section 8 of the technical report [18] on the basis of the unweighted mean value
ηeff,ref = (1/m′) · Σi=1…m′ ηeff,i . (9)
the standard uncertainty was calculated as
u(ηeff,ref) = (1/m′) · √( Σi=1…m′ u²(ηeff,i) ) . (10)

kc data treatment in accordance with ccem.rf-k25.w. the results of the comparison on effective efficiency at 36 ghz are summarized in table 3. the comparison reference value ηeff,ref = 0.9161 was determined for the effective efficiency ηeff with the uncertainty u(ηeff,ref) = 0.0027. nim, nmia and nrc did not participate in the reference value determination, as the nim and nrc measurement results were considered to be outliers in accordance with criterion (8); the result of nmia was proved to be traceable to the results of the other participants. a graphic illustration of the comparison results and the reference value is shown in figure 6.

the results of the comparison on the calibration factor are summarized in table 4. the reference value for the calibration factor ηcal,ref = 0.7942 was determined with the uncertainty u(ηcal,ref) = 0.0024 (figure 7). the results of vniiftri and nrc were recognized as outliers; the result of nmia turned out to be traceable to the results of the other participants.

table 3. key comparison data on effective efficiency.
m | nmi | ηeff,i | u(ηeff,i)
1 | ptb | 0.9153 | 0.0031
2 | npl | 0.9167 | 0.0060
3 | nist | 0.9184 | 0.0064
4 | lne | 0.9157 | 0.0018
5 | kriss | 0.9143 | 0.0104
6 | vniiftri | 0.9160 | 0.0079
7 | nim | 0.8360 | 0.0072
8 | nmia | 0.9174 | 0.0071
9 | nrc | 0.9375 | 0.0130

table 4. key comparison data on the calibration factor.
m | nmi | ηcal,i | u(ηcal,i)
1 | ptb | 0.7954 | 0.0036
2 | npl | 0.7937 | 0.0067
3 | nist | 0.7976 | 0.0070
4 | lne | 0.7914 | 0.0046
5 | kriss | 0.7935 | 0.0079
6 | nim | 0.7936 | 0.0031
7 | vniiftri | 0.7820 | 0.0105
8 | nmia | 0.7972 | 0.0073
9 | nrc | 0.8140 | 0.0130

figure 6. uncertainty intervals of the effective efficiency values provided by the nmis.

kc data treatment by the pam. the data of table 3 were processed using the pam at n = 8; the rav was divided into n – 1 = 7 equal divisions, whose bounds corresponded to eight values a of the measurand: a1 = 0.8288, a2 = 0.8462, a3 = 0.8636, a4 = 0.8809, a5 = 0.8983, a6 = 0.9157, a7 = 0.9331, and a8 = 0.9505. the preference profile consisted of nine rankings describing the uncertainty intervals of the corresponding nmis:
λ1: a6 ≻ a1 ~ a2 ~ a3 ~ a4 ~ a5 ~ a7 ~ a8
λ2: a6 ≻ a1 ~ a2 ~ a3 ~ a4 ~ a5 ~ a7 ~ a8
λ3: a6 ≻ a1 ~ a2 ~ a3 ~ a4 ~ a5 ~ a7 ~ a8
λ4: a6 ≻ a1 ~ a2 ~ a3 ~ a4 ~ a5 ~ a7 ~ a8
λ5: a6 ≻ a1 ~ a2 ~ a3 ~ a4 ~ a5 ~ a7 ~ a8
λ6: a6 ≻ a1 ~ a2 ~ a3 ~ a4 ~ a5 ~ a7 ~ a8
λ7: a1 ≻ a2 ~ a3 ~ a4 ~ a5 ~ a6 ~ a7 ~ a8
λ8: a6 ≻ a1 ~ a2 ~ a3 ~ a4 ~ a5 ~ a7 ~ a8
λ9: a7 ~ a8 ≻ a1 ~ a2 ~ a3 ~ a4 ~ a5 ~ a6.
the final consensus ranking was determined as βfin = {a6 ≻ a1 ~ a7 ~ a8 ≻ a2 ~ a3 ~ a4 ~ a5}. the comparison reference value a6 = ηeff,ref = 0.9157 was obtained with uncertainty u(ηeff,ref) = 0.0018. the cardinality of the lcs was m′ = 7, as the measurement results of nim and nrc were recognized to be outliers because their intervals do not contain the obtained reference value (figure 8). the data of table 4 were also processed using the pam, at n = 6.
the rav was divided into five equal divisions, whose bounds corresponded to six values of the measurand: a1 = 0.7715, a2 = 0.7826, a3 = 0.7937, a4 = 0.8048, a5 = 0.8159, a6 = 0.8270 (figure 9). the preference profile was shaped from nine rankings:
λ1: a3 ≻ a1 ~ a2 ~ a4 ~ a5 ~ a6
λ2: a3 ≻ a1 ~ a2 ~ a4 ~ a5 ~ a6
λ3: a3 ≻ a1 ~ a2 ~ a4 ~ a5 ~ a6
λ4: a3 ≻ a1 ~ a2 ~ a4 ~ a5 ~ a6
λ5: a3 ≻ a1 ~ a2 ~ a4 ~ a5 ~ a6
λ6: a3 ≻ a1 ~ a2 ~ a4 ~ a5 ~ a6
λ7: a1 ~ a2 ≻ a3 ~ a4 ~ a5 ~ a6
λ8: a3 ≻ a1 ~ a2 ~ a4 ~ a5 ~ a6
λ9: a4 ~ a5 ≻ a1 ~ a2 ~ a3 ~ a6.
the final consensus ranking was determined as βfin = {a3 ≻ a2 ~ a4 ~ a5 ~ a6 ≻ a1}. the comparison reference value ηcal,ref = 0.7937 was obtained with uncertainty u(ηcal,ref) = 0.0019. the cardinality of the lcs was m′ = 7 (figure 9); the measurement results of vniiftri and nrc were recognized to be outliers.

4.2. interlaboratory power comparison in the microwave region
in [5], the results of interlaboratory power comparisons in the microwave region (50 mhz–26.5 ghz) within the project sit.af-01 were reviewed. they were organized by inrim (istituto nazionale di ricerca metrologica, italy) in turin. a hewlett packard power meter, model 438a, was sent as a travelling standard to 12 laboratories. the aim of the comparison was to confirm the claimed uncertainties of laboratories accredited in the national system of accreditation in the field of microwave measurements. table 5 and figure 10 show one of the series of comparison data of the power sensor calibration factor k measured at a frequency of 1 ghz.

figure 7. uncertainty intervals of the calibration factor values provided by the nmis.
figure 8. uncertainty intervals of the effective efficiency values provided by the nmis and the reference value obtained by the pam.
figure 9. uncertainty intervals of the calibration factor values provided by the nmis and the reference value obtained by the pam.

table 5. comparison data on the power sensor calibration factor k at 1 ghz.
laboratory | xi | u(xi)
1 | 0.985 | 0.013
2 | 0.989 | 0.008
3 | 0.982 | 0.013
4 | 0.982 | 0.035
5 | 0.984 | 0.014
6 | 0.980 | 0.028
7 | 0.981 | 0.017
8 | 0.990 | 0.021
9 | 0.982 | 0.011
10 | 0.989 | 0.017
11 | 1.017 | 0.014
12 | 0.987 | 0.019

figure 10. uncertainty intervals of the calibration factor k values provided by the participating laboratories and the corresponding reference value.

to process the comparison data, the nielsen algorithm (see section 1) was used. according to the data analysis outcomes, the lcs formed as a result of the nielsen algorithm processing included eleven laboratories. laboratory 11 was excluded because its result, in accordance with the algorithm conditions, was deemed to be unreliable. the reference value was obtained as xref = 0.985, in correspondence with the greatest number of laboratory "votes". the data of table 5 were then processed using the pam at n = 5. the rav was divided into n – 1 = 4 equal divisions.
the bounds of the intervals corresponded to five values a of the measurand: a1 = 0.947, a2 = 0.968, a3 = 0.989, a4 = 1.009, a5 = 1.030 (figure 11). the corresponding preference profile was as follows:
λ1: a3 ≻ a1 ~ a2 ~ a4 ~ a5
λ2: a3 ≻ a1 ~ a2 ~ a4 ~ a5
λ3: a3 ≻ a1 ~ a2 ~ a4 ~ a5
λ4: a1 ~ a2 ~ a3 ~ a4 ≻ a5
λ5: a3 ≻ a1 ~ a2 ~ a4 ~ a5
λ6: a2 ~ a3 ≻ a1 ~ a4 ~ a5
λ7: a2 ~ a3 ≻ a1 ~ a4 ~ a5
λ8: a3 ~ a4 ≻ a1 ~ a2 ~ a5
λ9: a3 ≻ a1 ~ a2 ~ a4 ~ a5
λ10: a3 ≻ a1 ~ a2 ~ a4 ~ a5
λ11: a4 ~ a5 ≻ a1 ~ a2 ~ a3
λ12: a2 ~ a3 ≻ a1 ~ a4 ~ a5.
the final consensus ranking was βfin = {a3 ≻ a2 ≻ a4 ≻ a1 ~ a5}. the value a3 was chosen as the reference value xref = 0.989, with a corresponding uncertainty u(xref) = 0.004. the lcs formed by the pam included 11 laboratories, just as in the project sit.af-01 (figure 11).

5. conclusion
a method called the preference aggregation method (pam), aimed at processing ic data, has been described. the pam is based on the transformation of the uncertainty intervals provided by participating laboratories into rankings of measured quantity values. for a preference profile composed in this way, a consensus ranking is determined by the kemeny rule, which allows the reference value of a measurand to be found. the operation of this method was demonstrated.

a software tool intended for experimental investigations of the proposed method and of other methods over generated normally and uniformly distributed ic data has also been presented. numerical experiments carried out with its help have shown that the pam is indeed a robust procedure that does not depend on the probability distribution of the measurement results. it also follows from the numerical experiments that the pam provides an estimate of the reference value closer to the nominal value than the other robust method (the nielsen algorithm), with half the discrepancy between normally and uniformly distributed comparison data.

the pam performance was experimentally verified on real comparison results. in all cases, the reference value and the associated uncertainty determined by the proposed method were very close to the outcomes obtained by the comparison coordinators.

acknowledgement
this work was supported in part by the ministry of education and science of the russian federation, basic part of the state task, in 2014-2016, project 2078, and in 2017-2019, project 4.1763.gzb.2017. the authors would like to thank the anonymous referee for helpful comments.

references
[1] cipm mra-d-05, measurement comparisons in the cipm mra, version 1.5, p. 28.
[2] iso/iec 17043 (2010) conformity assessment – general requirements for proficiency testing, international organization for standardisation, geneva, switzerland.
[3] m.g. cox, the evaluation of key comparison data: determining the largest consistent subset, metrologia 44 (2007) pp. 187-200.
[4] m.g. cox, the evaluation of key comparison data, metrologia 39 (2002) pp. 589-595.
[5] n.yu. efremova, a.g. chunovkina, experience in evaluating the data of interlaboratory comparisons for calibration and verification laboratories, meas. tech. 50(6) (2007) pp. 584-592.
[6] c. elster, b. toman, analysis of key comparisons data: critical assessment of elements of current practice with suggested improvements, metrologia 50 (2013) pp. 549-555.
[7] i. lira, a.g. chunovkina, c. elster, w. woeger, analysis of key comparisons incorporating knowledge about bias, ieee trans. instrum. meas. 61(8) (2012) pp. 2079-2084.
[8] iso 13528 (2005) statistical methods for use in proficiency testing by interlaboratory comparisons,
international organization for standardisation, geneva, switzerland.
[9] h.s. nielsen, determining consensus values in interlaboratory comparisons and proficiency testing, ncsli newsletter 44(2) (2004) pp. 12-15.
[10] l. brunetti, l. oberto, m. sellone, p. terzi, establishing reference value in high frequency power comparisons, measurement 42 (2009) pp. 1318-1323.
[11] s.v. muravyov, i.a. marinushkina, "largest consistent subsets in interlaboratory comparisons: preference aggregation approach", proc. of 14th joint international imeko tc1, tc7, tc13 symposium, aug. 31 - sept. 2, 2011, jena, germany, pp. 69-73.
[12] s.v. muravyov, ordinal measurement, preference aggregation and interlaboratory comparisons, measurement 46(8) (2013) pp. 2927-2935.
[13] s.v. murav'ev, aggregation of preferences as a method of solving problems in metrology and measurement technique, meas. tech. 57(2) (2014) pp. 132-138.
[14] s.v. muravyov, i.a. marinushkina, processing of interlaboratory comparison data by preference aggregation method, meas. tech. 58(12) (2016) pp. 1285-1291.
[15] s.v. muravyov, i.a. marinushkina, intransitivity in multiple solutions of kemeny ranking problem, j. phys. conf. ser. 459(1) (2013) 012006.
[16] s.v. muravyov, dealing with chaotic results of kemeny ranking determination, measurement 51 (2014) pp. 328-334.
[17] g.e.p. box, m.e. muller, a note on the generation of random normal deviates, ann. math. stat. 29(2) (1958) pp. 610-611.
[18] r. judaschke, final report of the pilot laboratory, ccem key comparison ccem.rf-k25.w, rf power from 33 ghz to 50 ghz in waveguide, physikalisch-technische bundesanstalt, germany, 2014.

figure 11. uncertainty intervals provided by the participating laboratories and the corresponding reference value obtained by the pam.

acta imeko july 2012, volume 1, number 1, 65-69 www.imeko.org

new generation of ac-dc current transfer standards at inmetro
m. klonz 1, r. afonso 2, r.m. souza 2, r.p. landim 2
1 retired from physikalisch-technische bundesanstalt, bundesallee 100, d-38116 braunschweig, germany
2 instituto nacional de metrologia, qualidade e tecnologia, av. nossa senhora das graças, 50 - xerém, 25250-020 duque de caxias, rj, brazil

keywords: ac-dc difference; pmjtc; comparison
citation: m. klonz, r. afonso, r.m. souza, r.p. landim, new generation of ac-dc current transfer standards at inmetro, acta imeko, vol. 1, no. 1, article 13, july 2012, identifier: imeko-acta-01(2012)-01-13
editor: pedro ramos, instituto de telecomunicações and instituto superior técnico/universidade técnica de lisboa, portugal
received january 10th, 2012; in final form may 11th, 2012; published july 2012
copyright: © 2012 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: information not available
corresponding author: r. m. souza, e-mail: rmsouza@inmetro.gov.br

abstract: this paper describes the new primary standard for the ac-dc current transfer at inmetro, based on pmjtcs and the new shunts manufactured by fluke for rated currents from 10 ma up to 20 a. the build-up of the ac-dc current scale is described together with the uncertainty budgets, which result in final uncertainties at 5 a of 6 µa/a to 12 µa/a in the frequency range from 10 hz to 100 khz. the recalibration of the standards after one year showed very small differences, which are included in the uncertainty budget.

1. introduction
inmetro, the brazilian national metrology institute, is responsible for developing new calibration set-ups and standards that will improve the capacity to provide a higher-quality calibration service, especially to the brazilian accredited laboratories, which in turn provide calibration services to all other laboratories and industries in brazil. inmetro started to invest in pmjtcs (planar multijunction thermal converters) to replace sjtcs (single junction thermal converters) as the primary standard at the 10 ma current level. for higher currents, the new tccs (thermal current converters) are built from high-quality coaxial shunts, model a40b, manufactured by fluke, in parallel to pmjtcs. this fluke shunt design, with a small current-level effect on the ac-dc current transfer difference, follows the designs of several authors at different national institutes [1, 2, 3, 4, 5].

thermal converters are capable of comparing the joule heating between ac and dc modes at the 0.1 µv/v level, and are widely employed as ac-dc current transfer standards in most national metrology institutes. the existing ac-dc current transfer standards of inmetro are made of sjtcs, which have one thermocouple at the midpoint of the heater and are enclosed in an evacuated glass bulb to improve their sensitivity [6]. the fundamental limitations of the performance of an sjtc are thermoelectric errors (thomson and peltier effects) in the heater due to the rather large temperature gradient along the heater (about 200 ºc), the level dependence of the ac-dc difference, and the small output voltage and therefore small dynamic range. moreover, such an sjtc-based ac-dc current transfer system needs to be recalibrated against higher-level standards at least every 5 years to obtain small uncertainties.

to reduce these thermoelectric errors, the mjtc uses as many as two hundred thermocouples spaced along a much longer heater wire [6], which results in a larger output voltage and negligible temperature gradients along the heater. however, the mjtc fabrication process is complicated and expensive. the design of pmjtcs is suitable for mass production without degradation of the performance of the mjtc. pmjtcs provide long-term stability together with high sensitivity and a high dynamic range. they are well known for very small ac-dc current transfer differences at audio frequencies [7, 8]. moreover, the shunts used in the former ac-dc current transfer standards (model a40 shunts made by fluke) were replaced by high-quality coaxial shunts (model a40b manufactured by fluke).

in order to validate the new system, an unofficial comparison was made between ptb (physikalisch-technische bundesanstalt, germany) [9] and inmetro standards. the measurements were performed at 10 ma and 5 a, in the whole frequency range from 10 hz to 100 khz.
2. calibration set-up
the basic standard for ac-dc current transfer is the 10 ma pmjtc providing traceability to ptb. all standards for higher currents contain a shunt associated with a dedicated pmjtc which measures the voltage across the shunt. to build up the current scale from 10 ma to 20 a, the different current ranges have to be calibrated against each other. in this step-up method, starting from 10 ma, the next higher current standard, for 20 ma, is calibrated at the current of 10 ma. under the assumption that it does not change its ac-dc current transfer difference, it is then used at 20 ma. this procedure continues step by step for all frequencies from 10 hz to 100 khz and for currents up to 20 a.

the calibration set-up used is shown in detail in figure 1. two separate calibrators deliver ac and dc voltages. an ac-dc switch connects the ac and dc voltages to the transconductance amplifier, which converts the voltage to the necessary current. both ac-dc current transfer standards are connected in series and therefore carry the same current for this comparison. two keithley 182 nanovoltmeters measure the output voltages of the pmjtcs. the nanovoltmeters are modified because their input amplifiers should be driven at the potential of the ac-dc transfer standards. the basic design of the measurement set-up, showing only pmjtcs for 10 ma, is given in figure 1. for higher currents, coaxial shunts, model a40b, manufactured by fluke, are associated with them. the different earth connections are chosen in a specific way to avoid any earth loops which may change the measured values in an unknown way. a coaxial choke (cc) has been introduced to suppress earth currents causing common-mode voltages at the input of the transconductance amplifier. the introduction of potential-driven guards (figure 2) in the comparison circuit of the two ac-dc transfer standards avoids systematic changes of the ac-dc transfer difference of the standard which is at the higher potential in the series connection of the two standards, especially at higher frequencies [10, 11]. this is necessary because the standard is calibrated at low potential and used at high potential. photos in figure 3 and figure 4 show the calibration set-up.

with this calibration set-up, standards for all current ranges have been built up with a standard deviation of the measurement smaller than 1 µa/a. in figure 5 the different steps of the step-up procedure are shown. all pmjtcs called 90 have heater resistances of 90 ω, whereas the 400 has a heater resistance of 400 ω and the 900 a 900 ω one; the second name denotes the shunt for the different currents. in figure 6 some thermal converters are shown: the first one from the left is a 10 ma pmjtc; the second one is a boxed 50 ma shunt connected to a 90 ω pmjtc; and the others are pmjtcs connected to coaxial shunts for currents up to 100 ma and for higher currents up to 20 a.

figure 1. measurement set-up for ac-dc current transfer difference measurements.
figure 2. ac-dc current transfer with potential-driven guards (pmjtcs with shunts for current ranges above 10 ma).
figure 3. ac-dc current transfer set-up.
figure 4. connection of the ac-dc current transfer standards in series.
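the bookkeeping of the step-up described above can be sketched as follows; a simplified illustration, not the inmetro software, with invented δ values in µa/a (the measured difference being the mean of the repeated readings of the two standards in series at the shared current):

```python
def step_up(delta_ref, measured_diff):
    """ac-dc transfer difference of the next standard, calibrated
    against the known standard at the current level they share."""
    return delta_ref + measured_diff

# each entry: (new current level, mean measured difference against the
# previous standard); all numbers are invented for illustration
chain = [("20 ma", 0.3), ("50 ma", -0.2), ("100 ma", 0.4)]
delta = 1.2   # assumed 10 ma reference difference at one frequency
for level, diff in chain:
    delta = step_up(delta, diff)
    print(level, round(delta, 1))
```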
3. uncertainty analysis
the model equation is
δstep i = δstep i−1 + δca + δc + δcom.mode + δlev + δlf + δdiff.step-ups , (1)
with
δstep i−1: transfer difference of the standard at step i – 1;
δca: contribution of the mean of twelve repeated measurements;
δc: contribution of the measurement set-up;
δcom.mode: transfer difference from the common-mode effect in the transconductance amplifier, determined from the difference of measurements with and without the choke cc in figure 1;
δlev: transfer difference due to the level dependence of the shunts, estimated from the design of the shunts;
δlf: transfer difference due to the low-frequency behaviour of the pmjtc;
δdiff.step-ups: correction with the difference of different step-up measurements performed in the same measurement set-up.
the sum of the variances of the different contributions results in the variance of the result:
u²(δstep i) = u²(δstep i−1) + u²(δca) + u²(δc) + u²(δcom.mode) + u²(δlev) + u²(δlf) + u²(δdiff.step-ups) , (2)
where u²(x) represents the variance of x. the uncertainty budgets of the different current steps are given in tables 1 to 4.

4. comparison results
an unofficial interlaboratory comparison of ac-dc current transfer standards between ptb and inmetro was performed with one travelling standard for 10 ma and one for 5 a. the current points chosen were 10 ma and 5 a, in the frequency range from 10 hz to 100 khz. at inmetro, each current point was measured against the inmetro standards in twelve cycles at all frequencies, and the mean was calculated using the results of the sequence, which gives the ac-dc current transfer difference, represented by δinmetro. tables 5 and 6 show the results obtained. ptb uses a similar calibration set-up and similar standards; its shunts are manufactured by the norwegian metrology institute justervesenet with a different design [12]. the results obtained at ptb are represented by δptb. the associated expanded uncertainties are given by u. the measured differences inmetro - ptb between both institutes were small compared to the given uncertainties, which is also represented by the small en values. this is a very satisfying result of the comparison.

5. conclusions
in the first step, a new step-up procedure was developed to build up the ac-dc current transfer standards from 10 ma up to 20 a and to perform an uncertainty analysis for the whole build-up. the second step towards the introduction of the new ac-dc current transfer system of inmetro was performing a comparison between inmetro and ptb, which proved that inmetro's new calibration set-up and standards work as expected.

figure 5. schematics of the step-up procedure.
figure 6. ac-dc current transfer standards.
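the combination (2) is a plain root sum of squares; as a check, the following minimal sketch reproduces the first column (10 hz) of table 1 below, with the contributions in µa/a copied from that budget:

```python
import math

def combined_u(contributions):
    """standard uncertainty per (2): root sum of squares of the
    individual contributions u(delta_x)."""
    return math.sqrt(sum(u * u for u in contributions))

# 10 ma step-up at 10 hz: u(delta 10 ma), u(ca), u(c), u(com. mode),
# u(lev), u(diff. step-ups), u(lf)
u_parts = [1.5, 0.5, 0.1, 1.0, 0.0, 0.2, 0.0]
u = combined_u(u_parts)
print(round(u, 1), round(2 * u, 1))   # 1.9 and 3.8 (k = 2), as in table 1
```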
table 1. uncertainty analysis for the step-up at 10 ma. measurement uncertainty in µa/a at the frequencies in khz: 0.01, 0.02, 0.03, 0.04, 0.055, 0.12, 0.5, 1, 5, 10, 20, 50, 70, 100.
u(δ 10 ma): 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5 1.5
u(ca): 0.5 0.4 0.3 0.5 0.6 0.3 0.3 0.3 0.3 0.3 0.3 0.3 0.4 0.6
u(c): 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1
u(com. mode): 1 1 1 1 1 1 0 0 0 0 0 0 0 0
u(lev): 0 0 0 0 0 0 0 0 0 0 0 0 0 0
u(diff. step-ups): 0.2 0.2 0.0 0.3 0.1 0.0 0.1 0.2 0.1 0.1 0.1 0.1 0.1 0.2
u(lf): 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
combined u: 1.9 1.9 1.8 1.9 1.9 1.8 1.5 1.5 1.5 1.5 1.5 1.5 1.6 1.6
expanded u (k = 2): 3.8 3.8 3.6 3.8 3.8 3.6 3.0 3.0 3.0 3.0 3.0 3.0 3.2 3.2

table 2. uncertainty analysis for the step-up at 100 ma (same frequency columns as table 1).
u(δ 50 ma): 3.0 2.6 2.5 2.5 2.5 2.4 1.7 1.7 1.7 1.7 1.7 1.7 1.8 1.9
u(ca): 0.5 0.3 0.5 0.4 0.3 0.5 0.4 0.4 0.4 0.2 0.4 0.4 0.4 0.5
u(c): 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2
u(com. mode): 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
u(lev): 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.3 0.4
u(diff. step-ups): 0.1 0.1 0.1 0.1 0.3 0.2 0.1 0.1 0.2 0.4 0.2 0.1 0.1 0.1
u(lf): 0.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
combined u: 3.2 2.9 2.8 2.7 2.8 2.7 1.8 1.8 1.8 1.8 1.8 1.8 1.9 2.0
expanded u (k = 2): 6.4 5.8 5.6 5.4 5.6 5.4 3.6 3.6 3.6 3.6 3.6 3.6 3.8 4.0

table 3. uncertainty analysis for the step-up at 1 a (same frequency columns as table 1).
u(δ 500 ma): 3.8 3.5 3.4 3.3 3.4 3.4 2.1 2.0 2.1 2.1 2.1 2.1 2.1 2.5
u(ca): 0.4 0.3 0.3 0.3 0.3 0.4 0.3 0.3 0.2 0.2 0.2 0.2 0.3 0.3
u(c): 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2
u(com. mode): 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
u(lev): 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.5
u(diff. step-ups): 0.1 0.1 0.1 0.2 0.2 0.1 0.4 0.1 0.2 0.3 0.1 0.3 0.1 0.4
u(lf): 0.4 0.1 0.1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
combined u: 4.1 3.8 3.7 3.6 3.7 3.7 2.4 2.3 2.3 2.3 2.3 2.3 2.4 2.9
expanded u (k = 2): 8.2 7.6 7.4 7.2 7.4 7.4 4.8 4.6 4.6 4.6 4.6 4.6 4.8 5.8

table 4. uncertainty analysis for the step-up at 5 a (same frequency columns as table 1).
u(δ 2 a): 4.4 4.1 4.0 3.9 4.0 4.0 2.7 2.6 2.5 2.8 3.1 3.8 3.9 4.6
u(ca): 0.3 0.4 0.3 0.2 0.4 0.4 0.3 0.5 0.4 0.2 0.2 0.4 0.6 0.4
u(c): 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2
u(com. mode): 1.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
u(lev): 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.5 2.0 3.0 3.0 3.5
u(diff. step-ups): 0.1 0.2 0.3 0.0 0.1 0.2 0.3 0.1 0.3 0.3 0.2 0.6 0.5 0.7
u(lf): 0.4 0.3 0.1 0.1 0.1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
combined u: 4.6 4.4 4.3 4.2 4.3 4.3 2.9 2.8 2.8 3.2 3.7 4.9 5.0 5.9
expanded u (k = 2): 9.2 8.8 8.6 8.4 8.6 8.6 5.8 5.6 5.6 6.4 7.4 9.8 10.0 11.8

the results of both institutes' standards agreed within 2 µa/a for 10 ma and 5 µa/a for 5 a between 10 hz and 100 khz. this means that the new system works reliably and is ready for use in the next international comparison of ac-dc current transfer standards (sim.em-k12) [13].

references
[1] i. budovsky, "measurement of phase angle errors of precision current shunts in the frequency range from 40 hz to 200 khz", ieee trans. instrum. meas., vol. 56, no. 2, april 2007, pp. 284-288.
[2] m. garcocz, p. scheibenreiter, w. waldmann, g. heine, "expanding the measurement capabilities for ac-dc current transfer at bev", 2004 conf. on precision electromagnetic measurements digest, cpem, june 27 - july 2, 2004, pp. 461-462.
[3] p.s. filipski, m. boeker, "ac-dc current shunts and system for extended current and frequency ranges", in proc. imtc, ottawa, on, canada, may 17-19, 2005, pp. 991-995.
[4] p.s. filipski, m. boeker, "ac-dc current transfer standards and calibrations at nrc", simposio de metrologia 2006, 25-27 octubre 2006.
[5] r.m. souza, r.v.f. ventura, f.a. silveira, "reevaluation of ac-dc current transfer system based on single junction thermal converters at inmetro", xviii imeko tc4 symposium and ix semetro, natal, brazil, 2011.
[6] m. klonz, "ac-dc transfer difference of the ptb multijunction thermal converter in the frequency range from 10 hz to 100 khz", ieee trans. instrum. meas., vol. 36, no. 2, june 1987, pp. 320-329.
[7] m. klonz, t. weimann, "accurate thin film multijunction thermal converter on a silicon chip", ieee trans. instrum. meas., vol. 38, no. 2, april 1989, pp. 335-336.
[8] h. laiz, m. klonz, e. kessler, t. spiegel, "new thin-film multijunction thermal converters with negligible low frequency ac-dc transfer differences", ieee trans. instrum. meas., vol. 50, no. 2, april 2001, pp. 333-337.
[9] m. klonz, h. laiz, t. spiegel, p. bittel, "ac-dc current transfer step-up and step-down calibration and uncertainty calculation", ieee trans. instrum. meas., vol. 51, no. 5, 2002, pp. 1027-1034.
[10] k.e. rydler, "high precision automated measuring system for ac-dc current transfer standards", ieee trans. instrum. meas., vol. 42, no. 1, april 1993, pp. 608-611.
[11] t. funck, m. klonz, "improved ac-dc current transfer step-up with new current shunts and potential driven guarding", ieee trans. instrum. meas., vol. 56, april 2007, pp. 361-364.
[12] k. lind, t. sorsdal, h. slinde, "design, modeling, and verification of high-performance ac-dc current shunts from inexpensive components", ieee trans. instrum. meas., vol. 57, no. 1, jan. 2008, pp. 176-181.
[13] l. d. lillo, "sim.em-k12 comparison protocol – ac-dc current transfer difference", 2010.
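the en values quoted in tables 5 and 6 below are not restated as a formula in the paper; they are consistent with the standard normalised error for results carrying expanded uncertainties, en = (δinmetro − δptb) / √(u²inmetro + u²ptb), sketched here as background:

```python
import math

def e_n(delta_a, delta_b, u_a, u_b):
    """normalised error of two results with expanded (k = 2) uncertainties."""
    return (delta_a - delta_b) / math.sqrt(u_a ** 2 + u_b ** 2)

# 100 khz point of table 5: inmetro 70.9 (u = 4), ptb 72.2 (u = 3), in ua/a
print(round(e_n(70.9, 72.2, 4, 3), 1))   # -0.3, i.e. |en| < 1: agreement
```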
table 5. result of the comparison between inmetro and ptb at 10 ma. ac-dc current transfer differences together with their uncertainties in µa/a at the frequencies in khz: 0.01, 0.02, 0.03, 0.04, 0.055, 0.12, 0.5, 1, 5, 10, 20, 50, 70, 100.
inmetro: 5.5 1.3 1.0 0.7 0.3 -0.1 -0.5 0.1 0.4 0.6 2.2 17.6 34.8 70.9
u inmetro: 4 4 4 4 4 4 3 3 3 3 3 3 3 4
ptb: 5.3 1.8 0.7 0.7 -0.3 -0.4 -0.1 -0.2 0.5 1.5 3.2 18.9 36.4 72.2
u ptb: 3 3 3 3 3 3 3 3 3 3 3 3 3 3
difference inmetro - ptb: 0.2 -0.5 0.3 0.0 0.5 0.3 -0.4 0.3 -0.1 -0.9 -1.0 -1.3 -1.6 -1.3
en: 0.0 -0.1 0.1 0.0 0.1 0.1 -0.1 0.1 0.0 -0.2 -0.2 -0.3 -0.4 -0.3

table 6. result of the comparison between inmetro and ptb at 5 a (same frequency columns as table 5, values in µa/a).
inmetro: 0.0 0.4 0.2 0.1 0.0 -0.3 0.0 0.1 2.7 11.6 13.2 -88.1 -187.0 -353.9
u inmetro: 9 9 9 9 9 9 6 6 6 7 8 10 10 12
ptb: -0.9 -0.2 0.4 0.2 -0.4 0.5 0.5 0.4 3.0 10.6 10.5 -89.9 -187.2 -349.4
u ptb: 5 4 4 4 4 4 4 4 4 5 7 9 10 11
difference inmetro - ptb: 0.9 0.6 -0.2 0.0 0.4 -0.8 -0.5 -0.3 -0.3 1.0 2.7 1.9 0.2 -4.5
en: 0.1 0.1 0.0 0.0 0.0 -0.1 -0.1 0.0 0.0 0.1 0.2 0.1 0.0 -0.3

acta imeko issn: 2221-870x june 2015, volume 4, number 2, 1

journal contacts

about the journal
acta imeko is an e-journal reporting on the contributions on the state and progress of the science and technology of measurement. the articles are based on contributions presented at imeko workshops, symposia and congresses. the journal is published by imeko, the international measurement confederation. the issn, the international identifier for serials, is 2221-870x.

editor-in-chief
paolo carbone, italy

honorary editor-in-chief
paul p.l. regtien, netherlands

editorial board
leopoldo angrisani, italy; filippo attivissimo, italy; eulalia balestieri, italy; eric benoit, france; catalin damian, romania; pasquale daponte, italy; luigi ferrigno, italy; alistair forbes, united kingdom; fernando janeiro, portugal; konrad jedrzejewski, poland; andy knott, united kingdom; fabio leccese, italy; rosario morello, italy; helena geirinhas ramos, portugal; pedro ramos, portugal; sergio rapuano, italy; dirk röske, germany; alexandru salceanu, romania; constantin sarmasanu, romania; lorenzo scalise, italy; emiliano schena, italy; enrico silva, italy; marco tarabini, italy; susanne toepfer, germany; rainer tutsch, germany; ian veldman, south africa; luca de vito, italy.

about imeko
the international measurement confederation, imeko, is an international federation of currently 39 national member organisations individually concerned with the advancement of measurement technology. its fundamental objectives are the promotion of international interchange of scientific and technical information in the field of measurement, and the enhancement of international co-operation among scientists and engineers from research and industry.

addresses
principal contact
prof. paolo carbone, university of perugia, dept. electr. inform. engineering (diei), via g. duranti, 93, 06125 perugia, italy, email: paolo.carbone@unipg.it

acta imeko
prof. paul p. l. regtien, measurement science consultancy (msc), julia culpstraat 66, 7558 jb hengelo (ov), the netherlands, email: paul@regtien.net

support contact
dr. dirk röske, physikalisch-technische bundesanstalt (ptb), bundesallee 100, 38116 braunschweig, germany, email: dirk.roeske@ptb.de

acta imeko issn: 2221-870x september 2023, volume 12, number 3, 1-10

automatic system to identify and manage garments for blind people
luís silva1, daniel rocha2,3,4, filomena soares2, joão sena esteves2, vítor carvalho2,3
1 university of minho, school of engineering, campus of azurém, guimarães, portugal
2 algoritmi research center, university of minho, campus of azurém, guimarães, portugal
3 2ai-school of technology, ipca, barcelos, portugal
4 inl international iberian nanotechnology laboratory, braga, portugal

section: research paper
keywords: blind people; automatic system; garments; colour detection; deep learning
citation: luís silva, daniel rocha, filomena soares, joão sena esteves, vítor carvalho, automatic system to identify and manage garments for blind people, acta imeko, vol. 12, no. 3, article 7, september 2023, identifier: imeko-acta-12 (2023)-03-07
section editor: susanna spinsante, grazia iadarola, polytechnic university of marche, ancona, italy
received february 28, 2023; in final form june 10, 2023; published september 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work has been supported by national funds through fct – fundação para a ciência e tecnologia within the r&d units project scope: uidb/00319/2020, uidb/05549/2020 and uidp/05549/2020
corresponding author: vítor carvalho, e-mail: vcarvalho@ipca.pt

abstract: in recent years, there has been increased attention towards the integration of handicapped individuals in society, with significant efforts being made to promote their inclusion. technology has played a critical role in this effort, with several technological solutions emerging to help handicapped people in their daily routines, enabling them to better integrate into society. however, challenges still remain, particularly in basic tasks for blind people, such as managing and identifying personal garments. this study seeks to provide an improvement in the quality of life and well-being of blind people. a mechatronic automatism that allows the identification of the user's garments using sensors was developed, integrated with an interface and a server responsible for managing the requests from the user. algorithms were implemented for the segmentation and classification of garments and for detecting the predominant colours of each garment. the results obtained show that the system can be an efficient solution to reduce the time taken for garment selection, particularly in terms of colour differentiation and the selection of combinations for blind people.

1. introduction
in recent years, there has been a concerted effort to develop technologies that provide people with disabilities access to information and knowledge [1]. for individuals who are blind, several technological solutions have been proposed to assist with daily routine activities [2], [3]. despite the advancements in assistive technology, blind individuals still face difficulties in performing basic daily tasks, such as selecting their clothing. the process of identifying features on garments is often slow and challenging, leading to a loss of autonomy in choosing their desired clothing. this study aims to address these challenges by proposing an automatic wardrobe that improves the quality of life and well-being of blind people, complementing the continuous work developed under the same project [4]-[11], [14]. in this research, the major contribution is the development of a mechatronic system prototype that facilitates garment selection and management. the prototype is divided into two distinct modules: the first module is dedicated to the physical prototype, while the second is dedicated to image processing and machine learning algorithms to segment, classify, and extract colours from clothing. furthermore, within this scope, this work had the support of the association of the blind and amblyopes of portugal (acapo), allowing a preliminary validation. this paper comprises seven sections: section 2 outlines the prior research; section 3 presents a general overview of the system; sections 4 and 5 discuss the hardware and software architectures, respectively.
section 6 provides an initial analysis of the results obtained, and section 7 presents concluding remarks as well as suggestions for future research.

2. previous work
existing solutions for helping blind people choose their garments rely primarily on the concept of smart wardrobes, which has seen a significant surge in popularity in recent years [12]. furthermore, perry et al. identified the level of acceptance of smart wardrobes and consumers' opinions about this approach [13]. the study revealed that 84 % of the participants identified ease of use and utility as the predominant factors for accepting this technological model. a survey carried out with acapo (portuguese association of blind and amblyope people) allowed the identification of several problems regarding garment identification by blind people [14]. this survey was important in identifying that there isn't a solution to help blind people with garment management. myeyes, proposed in [4], [5] by rocha et al., was developed to overcome this gap. the solution integrates a mobile application with an arduino board, and it allows the users to have a virtual wardrobe with their personal garments. near field communication (nfc) technology is used to allow the addition of garments. as a future development of this work, a possible physical implementation of a wardrobe to help blind people was presented in [10]. additionally, r. alabduljabbar et al. presented a system that integrates nfc technology with a smartphone, allowing visually impaired people to choose the desired clothes [15]. a similar solution was proposed in [16] by s. j. v. gatis filho et al., which also explores nfc technology combined with quick response (qr) technology with the main goal of developing a clothing matching system with audio description. solutions whose main target is people without any disability are steadily increasing, with more implementations emerging. goh et al. [17] proposed a system that integrates tags with radio frequency identification (rfid) technology, allowing the unique identification of clothing items.
this system is controlled by an application that allows garment management and suggests clothing items based on several criteria, such as style, colour, material, and the user's mood. on the other hand, some fashion brands present different solutions developed to help people plan what to wear. in 2017, amazon presented the echo look [18], based on a kit with a camera that allows garment photo capture and the cataloguing of outfits. this method also suggests combinations based on the weather and the user's preferences. the mobile application fashion api [19] is a "closet" that plans what to purchase and adds garments based on qr code reading. another mobile application is the smart closet [20], which plans combinations to wear and allows the addition of clothing items based on photo capture. tailortags [21] is a system that uses smart tags to detect garments automatically and suggests combinations based on the user's preferences. the systems presented emphasize solutions based on clothing implementations, not only targeted at blind people, but also showing some examples of what has been made available to the general public in order to assist and facilitate the selection of clothing pieces. table 1 summarizes the characteristics of the systems analysed.

table 1. analysed solutions overview.
solution | description
myeyes [4], [5] | manage garments with rfid
"an iot smart clothing system for the visually impaired using nfc technology" [15] | manage garments based on nfc technology
"my best shirt with the right pants: improving the outfits of visually impaired people with qr codes and nfc tags" [16] | manage garments based on the combination of nfc and qr technology
"developing a smart wardrobe system" [17] | adds clothing items to the wardrobe based on rfid tag reading
echo look [18] | suggests advice based on weather and personal trend
fashion api [19] | adds clothing items to the virtual wardrobe based on qr code reading
smart closet [20] | adds clothing items to the virtual wardrobe based on photo capture
tailortags [21] | adds clothing items to the virtual wardrobe based on wireless tag detection

besides all the efforts to develop systems to help blind people, and except for the solutions presented in [4], [5], [15], and [16], the systems in table 1 focus on solutions that help people without any disability. as the project encompasses the recognition and identification of garment colours, it was necessary to identify solutions that help blind people in the detection of colours. in this regard, there are some implementations ranging from mobile applications to small electronic devices. regarding mobile applications, v7 aipoly [22] emerged, which allows the identification not only of colours but also of objects and texts. the system helps blind people in daily routine tasks and interacts with the user by audio description. regarding physical devices, there are the colortest2000 and colorino. the colortest2000 [23] allows the identification of over 1700 different colours of daily objects, and colorino [24] is capable of distinguishing 150 colour tones. yang, yuan, and tian presented in [25] a prototype that analyses garments and allows the identification of 4 different types of patterns and 11 different colours. the system integrates a camera, a microphone, a computer, and headphones. for the colour identification, the hue, saturation, intensity (hsi) model is used, and machine learning algorithms were implemented to recognize the patterns. in [26], j. jarin et al. proposed a similar system capable of detecting colours and patterns in clothing pieces. the garment photos captured by the camera are submitted to a support vector machine (svm) algorithm to identify and classify the different colours and patterns on each piece. the proposal presented in [27] by x. yang et al. is similar to the previous solution but uses statistical features (sta) and sift (scale-invariant feature transform) to identify the colours and patterns in each garment. lastly, medeiros, j. presented in [28] a small wearable device with an endoscopic camera, worn on the tip of a finger, that allows the capture of garment surface images. to identify and classify different textures, a classification approach based on the combination of two complementary features, using an imagenet classifier and an svm, is employed.
the colour detection is based on superpixel segmentation, capable of detecting multiple colours simultaneously.

several solutions have also been presented in the scope of garment segmentation. another capability of the system presented in this paper is the classification and segmentation of garments. for the system implementation, the algorithm used was the mask region-based convolutional neural network (mask r-cnn), which mainly targets image segmentation; it was therefore necessary to identify implementations using this algorithm in the garment scope. the solution approached by khurana, t. et al. in [29] allows the distinction of garments with similar colours and patterns. firstly, the cnn is used to detect and segment the spatial limits of each garment and, next, the features are identified. the results were obtained using two datasets: fashionista, with 685 images, and cfpd, with 2682 images. on the one hand, with the fashionista dataset, the accuracy and iou obtained were 91.1 % and 42.1 %, respectively. on the other hand, using the cfpd dataset, the accuracy obtained was 93.5 % and the iou was 58.7 %. deepfashion2, proposed in [30] by ge, y. et al., is a method capable of extracting several features from garment images. the dataset used was built with 801 000 images annotated with several data, such as type and style. the model implemented was match r-cnn, which is a derivation of mask r-cnn. the algorithm extracts features from garments with an average precision of 79.3 % at iou = 0.75 and 83.4 % at iou = 0.5. this solution presented a slight improvement in average precision in comparison with the mask r-cnn implementation. wang, j. et al. presented a solution in [31] which is based on the deeplabv3+ model and allows the segmentation of garments in images with complex backgrounds. the neural network architecture was redesigned, increasing the adaptability to the boundaries in each image. concerning the results, the values of 97.26 % in accuracy, 93.23 % in miou and 90.56 % in average precision were slightly higher in comparison with the results obtained previously. in the approach presented in [32] by zhang, x., a convolutional neural network is used to recognize garments or combinations and to identify the specific year they refer to. the dataset used contained 9339 images of garments from 8 different years, and the results presented a good accuracy. regarding segmentation, the segnet model presented an iou of 0.951; concerning classification, the proposed method presented an accuracy of 0.805, higher than the 0.785 obtained with the resnet101 model.
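since iou, miou and average precision recur throughout this comparison, a minimal sketch of the mask iou computation is given below; this is a generic illustration, not code from any of the cited works:

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """intersection over union of two boolean segmentation masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / float(union) if union else 0.0

# toy 4x4 example: two overlapping "garment" masks
a = np.zeros((4, 4), bool); a[0:3, 0:3] = True   # 9 pixels
b = np.zeros((4, 4), bool); b[1:4, 1:4] = True   # 9 pixels
print(mask_iou(a, b))   # 4 / 14, roughly 0.29
```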
et al. proposed a method in [33] capable of detecting t-shirts and long sleeve shirts silhouettes using instance segmentation with mask r-cnn. the dataset encompasses 9000 images with the annotations. the results showed that the model presented good results in the detection of t-shirts with an average precision of 95 %. furthermore, several studies [34]-[37] presented methods to segment objects using cnns which have similarities to the implementation used and modelized for the garment segmentation and classification. table 2 summarizes the main results of the segmentation and classification methods analysed. the proposed system's primary objective is to identify and classify clothing items for blind individuals to select their clothing with ease. to achieve this goal, a thorough analysis of the current state-of-the-art solutions was conducted, revealing a significant gap to assist visually impaired individuals in this task. thus, a new system is proposed, utilizing a physical prototype with advanced artificial intelligence (ai) algorithms to classify clothing items and detect their predominant colours, enabling blind individuals to make informed clothing choices. the proposed solution represents a significant advancement in assistive technology for the visually impaired and has the potential to improve the daily lives of them. 3. system overview the system presented in this paper consists of a physical prototype and control software, figure 1. the hardware includes an nfc module that reads the garment tags, a stepper motor screwed to the wardrobe roof for circular movement, and a servo motor on each hanger for 180-degree rotation during photo capture. two photos (top and bottom) are taken and stitched together using an algorithm. a raspberry pi serves as the control unit to process data and provide analysis on request. an inside illumination system reduces reflections and creates a controlled environment for photo capture. figure 1 depicts the mobile application as the intermediary between the user and the physical prototype, facilitating the interaction with the raspberry pi through the installation of a server that handles user requests. at the hardware level, the raspberry pi consists of three main components: the nfc reader, cameras, and dc motors, as shown in figure 9. the raspberry pi triggers the corresponding hardware components to capture garment photos and return them to the user in response to their requests. the software module includes two algorithms designed to determine the predominant colours of each garment. a mask region-based convolutional neural network (r-cnn) algorithm is used to carry out garment image segmentation and background removal, using a dataset of 480 annotated garment images. additionally, an algorithm based on opencv analyses the pixels of the images to extract their redgreen-blue (rgb) values and determine the predominant colours. the upgrades between the two systems are significant. while the previous system [4], [5] used nfc communication for garment management, the proposed system combines this functionality with an automation mechanism that can move and rotate garments to capture multiple angles accurately. the table 2. literature overview on garment segmentation and classification. 
Table 2. Literature overview on garment segmentation and classification.
Method | Dataset | Metrics
Fully convolutional networks [29] | Fashionista: 685 images | Accuracy: 91.1 %; IoU: 42.1 %
Fully convolutional networks [29] | CFPD: 2682 images | Accuracy: 93.5 %; IoU: 58.7 %
Match R-CNN [30] | 801 000 images | AP75: 79.3 %; AP50: 83.4 %
DeepLabv3+ [31] | 491 000 images | Accuracy: 97.26 %; mIoU: 93.23 %
SegNet [32] | 9339 images | IoU: 95.1 % (segmentation); accuracy: 80.5 % (classification)
Mask R-CNN [33] | 9000 images | AP: 95 %

Figure 1. System overview.

The integration of a photo capture module further enhances the system's ability to detect and classify garments, allowing the system to determine the predominant colours of each piece accurately. The proposed system's enhanced capabilities represent a significant advancement in the field of garment management and classification, with potential applications in various industries.

4. Hardware architecture

The functioning of the system has four distinct phases: NFC reading, photo capturing, hanger rotation, and circular movement. The NFC module is used to read the tags attached to each garment, allowing the collection of the unique identifier (UID) associated with each tag. Photo capturing is carried out via a vision system with one camera. Capturing a complete photo of a clothing item can be challenging, even with a camera equipped with a larger-aperture lens, due to the short distance between the camera and the item in the developed prototype. To address this limitation, a servomotor is attached to the camera to enable a 180-degree rotation, allowing the capture of multiple positions of the item, which are subsequently merged using a stitching algorithm to create a complete photo of the garment combining both top and bottom shots. This approach solves the problem of accurately capturing clothing items in the prototype and can be implemented in various wardrobe sizes, making it a versatile solution. During the photo capture process, proper illumination is crucial to avoid dark areas. To achieve this, a white LED strip consisting of hundreds of LEDs emitting light at a colour temperature of approximately 6000 K is mounted on the wall where the cameras are situated. This type of illumination offers a diffuse, flexible, and cost-effective solution, allowing a sharper focus on each garment. The circular movement of the garments inside the wardrobe is enabled by a stepper motor affixed to the roof, whose shaft drives a circular platform capable of supporting the weight of all the servomotors attached to the hangers. The servomotors, responsible for rotating each garment 180 degrees, are mounted on the underside of the circular platform, providing a flexible and cost-effective solution. To establish a proof of concept and create an easily mountable solution, a small IKEA wardrobe with dimensions of 50 cm x 30 cm x 80 cm was selected for testing garments of appropriate size [38]. The NFC reader is attached to one of the side walls to allow the detection and collection of the UID associated with each tag. Figure 2 represents the hardware overview.
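As an illustration of the NFC reading phase, the sketch below (Python) polls a PN532 module for a tag UID over I2C. The paper does not name the software library used; the Adafruit CircuitPython PN532 driver is assumed here, and the polling loop is illustrative.

    # A hedged sketch of the NFC reading phase; the Adafruit CircuitPython
    # PN532 driver is an assumption, the paper does not name the library.
    import board
    import busio
    from adafruit_pn532.i2c import PN532_I2C

    i2c = busio.I2C(board.SCL, board.SDA)
    reader = PN532_I2C(i2c)
    reader.SAM_configuration()  # configure the module for tag reading

    while True:
        uid = reader.read_passive_target(timeout=0.5)  # bytearray or None
        if uid is not None:
            print("garment tag UID:", uid.hex())
            break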
The system was implemented using the Raspberry Pi 3B+ single-board computer as its base. This board is a popular choice for complex projects due to its versatility and affordability. It has 40 pins that can be used for various functions, such as I2C, SPI, and UART communication, 1 GB of RAM, and a 16 GB SD card with the Raspbian operating system installed for storage. To read the tags, a PN532 module that communicates via NFC was used. This module can detect tags up to 4 cm away and can be connected to the Raspberry Pi board using I2C, SPI, or UART. The system comprises two types of actuators: servomotors and a stepper motor. Servomotors are affixed to the hangers, enabling 180-degree garment rotation for photo capture, as depicted in Figure 3; additionally, a servomotor connected to the camera enables two positions during photo capture. The stepper motor, in turn, is attached to the wardrobe's roof, facilitating the garment movement; it is controlled by a ULN2003 driver, which governs the motor's rotation. To take photos, an OV5647 camera module was used. It integrates the camera and a small board for connection to the BCM2835 processor through camera serial interface (CSI) communication. This camera has a 5 MP resolution and can capture images up to 2592 x 1944 pixels. The CSI bus was used to connect this module to the Raspberry Pi.

5. Software architecture

Regarding the software developed and implemented, the system has three main components: the server, the control software for the automatism, and the image processing algorithms (Figure 4).

Figure 2. Hardware overview.
Figure 3. Automatism to rotate the garments.
Figure 4. Server overview.

Concerning the interaction between the user and the system, a server was implemented on the Raspberry Pi. The server receives the user's requests and returns the photo of the respective garment; this communication links the user interface and the physical system. The server was implemented using the Flask framework and hosted on the Raspberry Pi. A virtual environment was created using the Docker platform to enable the use of Detectron2 on a Windows machine for background removal: the client on the Docker side receives the garment photos from the user and returns the respective class and predominant colours. A control software was developed to test and validate the wardrobe automation. The state machine, presented in Figure 5, describes the system's functioning through six states: ready, stepper, detection, capture, servo, and stitch. When the user requests a garment, the initial state ready changes to stepper; in this state, the stepper motor controlling the garment movement inside the wardrobe is triggered, moving until the tag with the requested UID is detected, and then stopped. The state then changes to detection, and the UID of the tag is displayed. The system next enters the capture state, and the top and bottom front photos of the garment are captured. Afterwards, the system moves to the servo state, where the servo motor is triggered, performing a 180-degree rotation. After the rotation, the system returns to the capture state and captures the top and bottom back photos of the garment. The system then changes to the stitch state, where a stitching algorithm, which is part of the OpenCV library, is used to combine the photos.
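The exact OpenCV calls are not listed in the paper; a minimal sketch of the stitch state in Python, assuming OpenCV's high-level Stitcher API and hypothetical file names, could look as follows.

    # A minimal sketch of the stitch state; file names are hypothetical.
    import cv2

    top = cv2.imread("capture_top.jpg")
    bottom = cv2.imread("capture_bottom.jpg")

    # SCANS mode suits flat, translated views such as the two camera positions
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
    status, full_view = stitcher.stitch([top, bottom])

    if status == cv2.Stitcher_OK:
        cv2.imwrite("garment_full.jpg", full_view)
    else:
        print("stitching failed with status", status)

Internally, the Stitcher detects keypoints in both images and blends them where they overlap, which matches the behaviour described next.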
This stitching algorithm solved the issue of not being able to capture the full view of each garment with a single camera shot, combining the images based on keypoint detection and the overlap of common points in each picture.

5.1. Background removal

AI is integrated into the system to accurately classify the type of garment, a critical feature since the user may not be aware of all the clothing items inside the closet. NFC communication is utilised only to select the clothing item and identify its respective UID for analysis. After the item is selected, the system performs segmentation and background removal to facilitate the identification of the dominant colours of the garment; this step is necessary to determine the colours requested by the user accurately. The integration of AI, NFC communication, and image processing techniques allows the system to identify and classify clothing items accurately, which represents a significant advancement in the field of clothing management and classification. To accomplish this task, a method was needed to remove the image's background while respecting the garment's boundaries. After studying several alternatives, the deep-learning algorithm Mask R-CNN was employed to segment the images. To model and train the algorithm, the Google Colab framework was utilised, providing higher processing power for the neural network training than a local machine. To create the dataset, photos were gathered and divided into six types (classes) of garments: pants, shorts, t-shirts, shirts, polo shirts, and dresses. As there were not enough photos available, several images were obtained from an online dataset [39]. The final dataset comprised a total of 480 garment photos, divided equally among the six classes with 80 photos per class. To ensure the accuracy of the garment boundaries, the images were annotated with the Visual Geometry Group's annotator tool. Following the annotation process, a function was developed to load the annotations in JSON format from the dataset. This function assigned each annotation to the corresponding class, together with other attributes, such as region_attributes, shape_attributes, and the x/y coordinates of the bounding box. Transfer learning was employed in the training process, which involved starting from a pretrained model. The dataset was then loaded, and several parameters were adjusted to control the flow of training. The learning rate was set to adjust the speed at which the neural network converges to the goal; this involves a trade-off between low values, which can slow down the training, and high values, which can cause the model to diverge. The number of iterations was adjusted to maximise the average precision (AP) and intersection over union (IoU), as well as to prevent overfitting. The neural network was trained using these parameters and further tested with various garments to verify its accuracy. A function was developed to segment and classify garment images uploaded by the user: it takes an input image and produces three output images, namely the segmented image, the final image with the background removed, and the corresponding binary mask. Additionally, a colour detection algorithm analyses the pixels of the region of interest in each image, as shown in Figure 6.

Figure 5. System state machine.
Figure 6. Image segmentation flowchart.
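A hedged sketch of this transfer-learning setup, assuming Detectron2 (the framework mentioned above for the Docker-based background removal); load_via_annotations stands for the custom JSON loader described in the text and is hypothetical, as are the exact parameter values shown.

    # A hedged sketch of the training setup; load_via_annotations is a
    # hypothetical name for the custom VIA-JSON loader described in the text.
    from detectron2 import model_zoo
    from detectron2.config import get_cfg
    from detectron2.engine import DefaultTrainer
    from detectron2.data import DatasetCatalog, MetadataCatalog

    CLASSES = ["pants", "shorts", "t-shirt", "shirt", "polo shirt", "dress"]

    DatasetCatalog.register("garments_train", lambda: load_via_annotations("train"))
    MetadataCatalog.get("garments_train").thing_classes = CLASSES

    cfg = get_cfg()
    cfg.merge_from_file(model_zoo.get_config_file(
        "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
    cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(            # transfer learning:
        "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")  # pretrained weights
    cfg.DATASETS.TRAIN = ("garments_train",)
    cfg.DATASETS.TEST = ()
    cfg.MODEL.ROI_HEADS.NUM_CLASSES = len(CLASSES)
    cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 64  # batch size tried in the results
    cfg.SOLVER.BASE_LR = 0.00025                   # learning-rate trade-off
    cfg.SOLVER.MAX_ITER = 3250                     # iteration count tuned in Section 6

    trainer = DefaultTrainer(cfg)
    trainer.resume_or_load(resume=False)
    trainer.train()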
5.2. Colour detection

After the background removal, the garment image undergoes a colour detection algorithm that detects the predominant colours in the image using the OpenCV library. The algorithm analyses the RGB values of each pixel in the image, each component ranging from 0 to 255, and determines the colour from the combination of the three components. The process begins by defining two arrays containing colour names and their respective RGB values. The algorithm then uses the binary mask and the final image generated by the deep-learning background-removal step to extract, from the final image, the pixel values corresponding to the non-black coordinates of the binary mask. Each pixel within the clothing region of interest is matched to the closest colour in the colour array, and the frequency of each colour is counted to calculate the percentage of each colour present in the garment. To avoid potential segmentation errors, colours with percentages below 10 % are not considered. By focusing only on the dominant colours of the clothing item while minimising the impact of segmentation errors on the colour analysis results, this approach ensures that the colour analysis is carried out exclusively on the pixels identified as part of the clothing region of interest.
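A minimal sketch of this nearest-colour counting in Python/NumPy; the palette is a hypothetical abbreviation of the two arrays described above (standard RGB values are assumed), and the image is taken to be already in RGB order.

    # A minimal sketch of the colour-detection step; the palette is a
    # hypothetical abbreviation of the arrays described in the text.
    import numpy as np

    PALETTE = {"cadet blue": (95, 158, 160),
               "slate grey": (112, 128, 144),
               "sea green": (46, 139, 87)}

    def dominant_colours(final_rgb, mask, min_share=0.10):
        """Match every garment pixel (non-black mask coordinates) to the
        nearest palette colour and keep the shares above min_share."""
        pixels = final_rgb[mask > 0].astype(float)  # region of interest only
        names = list(PALETTE)
        table = np.array([PALETTE[n] for n in names], dtype=float)
        # squared Euclidean distance of every pixel to every palette entry
        dist = ((pixels[:, None, :] - table[None, :, :]) ** 2).sum(axis=-1)
        nearest = dist.argmin(axis=1)
        shares = np.bincount(nearest, minlength=len(names)) / len(pixels)
        return {names[i]: s for i, s in enumerate(shares) if s >= min_share}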
This section presents the results obtained to confirm and validate the proper functioning of the different modules previously introduced [40]. The flowchart of the control software developed, which integrates the hardware and software of the wardrobe automatism, is presented in Figure 7. The pictures of a garment captured by the mechatronic system, and subsequently combined by the stitching algorithm, are shown in Figure 8.

Figure 7. System flowchart.
Figure 8. An example of a garment with stitching implemented in the photos: (a) bottom photo capture; (b) upper photo capture; (c) stitching of both photo captures.

The automatic wardrobe system was designed to provide controlled photo capture, thereby eliminating dark fields and shadows. A test was conducted on a garment to demonstrate the accuracy of the photo capture control system. Figure 9a) and Figure 9b) depict photos captured outside the controlled system, which exhibit shadows and dark fields. In contrast, the photo captured inside the automatic wardrobe, shown in Figure 9c), is properly illuminated, highlighting the garment's features and eliminating any reflection.

Figure 9. The effect of lighting conditions on a garment: a) ambient lighting with several reflections and shadows; b) ambient lighting without homogeneous illumination; c) controlled lighting.

The background removal algorithm utilised the Mask R-CNN neural network, and several training sessions were carried out to optimise all parameters for maximum mean average precision (mAP) and IoU. Initially, the model was trained using a batch size of 128, which was later reduced to 64. During the training process, a large number of iterations was carried out to determine the point at which the model started overfitting. The mAP results for each class after 5000 iterations are presented in Table 3.

Table 3. Mean average precision (5000 iterations).
mAP (%) | Pants (%) | Shorts (%) | Shirt (%) | T-shirt (%) | Polo shirt (%) | Dress (%)
83.034 | 91.716 | 87.129 | 88.898 | 84.752 | 90.908 | 54.800

As can be seen from Table 3, the value obtained for the dress class is not as expected: the model has difficulty recognising the dress silhouette correctly, given that the dresses in the dataset do not have a similar shape. Subsequently, by adjusting the number of iterations to 3250, the results improved, with the mAP value increasing to 85.667 %. This increase was largely due to the improvement in the dress class value (Table 4).

Table 4. Mean average precision (3250 iterations).
mAP (%) | Pants (%) | Shorts (%) | Shirt (%) | T-shirt (%) | Polo shirt (%) | Dress (%)
85.667 | 90.565 | 87.129 | 91.785 | 85.941 | 91.595 | 66.988

Subsequently, a test was carried out with a slightly lower number of iterations, i.e., 3000. In this test, the overall mAP value decreased slightly; however, the value obtained for the dress class increased, approaching the other class values (Table 5).

Table 5. Mean average precision (3000 iterations).
mAP (%) | Pants (%) | Shorts (%) | Shirt (%) | T-shirt (%) | Polo shirt (%) | Dress (%)
84.094 | 90.161 | 87.129 | 89.810 | 82.640 | 81.700 | 73.124

Two further trainings were conducted with 2750 and 2500 iterations (first and second data rows of Table 6, respectively). Table 6 illustrates that the model's performance starts deteriorating, entering the underfitting phase, when the number of iterations is reduced to 2500. Although the mAP value obtained for 2750 iterations was reasonable, the dress class's mAP fell short of expectations, being considerably lower than that of the other five classes.

Table 6. Mean average precision (2750/2500 iterations).
mAP (%) | Pants (%) | Shorts (%) | Shirt (%) | T-shirt (%) | Polo shirt (%) | Dress (%)
85.581 | 91.011 | 98.57 | 93.441 | 83.564 | 92.702 | 54.198
79.137 | 88.520 | 92.426 | 82.028 | 77.355 | 80.297 | 54.198

Furthermore, the IoU values were also obtained for the trainings described above, as can be seen in Table 7. To obtain this metric, a function was used that calculates IoU values with a threshold varying between 0.5 and 0.95, in steps of 0.01. The function goes through the 48 images of the validation dataset and obtains the respective IoU value for each photo, comparing the mask predicted by the neural network with the real (annotated) mask. In this calculation the two masks are intersected, the areas of intersection and union of the two masks are obtained, and the corresponding metric is calculated.

Table 7. Intersection over union obtained in the sequential trainings.
Iterations / threshold | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | Average
5000 | 0.91 | 0.92 | 0.91 | 0.89 | 0.74 | 0.87
3250 | 0.92 | 0.93 | 0.89 | 0.81 | 0.60 | 0.83
3000 | 0.87 | 0.90 | 0.86 | 0.76 | 0.63 | 0.80
2750 | 0.87 | 0.88 | 0.84 | 0.80 | 0.53 | 0.78
2500 | 0.92 | 0.91 | 0.86 | 0.75 | 0.67 | 0.82
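A minimal Python/NumPy sketch of this metric; one reading of the variable threshold described above is a binarisation level on the predicted mask scores, though the paper's exact definition may differ, and all names are illustrative.

    # A hedged sketch of the IoU evaluation over the validation images;
    # interpreting the threshold as a binarisation level for the predicted
    # mask scores is an assumption.
    import numpy as np

    def mask_iou(pred, truth):
        """Intersection over union of two binary masks."""
        inter = np.logical_and(pred, truth).sum()
        union = np.logical_or(pred, truth).sum()
        return inter / union if union else 0.0

    def validation_iou(pairs, thresholds=np.arange(0.50, 0.96, 0.01)):
        """Mean IoU over (predicted score mask, ground-truth mask) pairs
        for every threshold in the sweep."""
        result = {}
        for t in thresholds:
            scores = [mask_iou(pred >= t, truth > 0) for pred, truth in pairs]
            result[round(float(t), 2)] = float(np.mean(scores))
        return result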
Following the validation of the developed automation, the background removal and colour detection algorithms were tested using several images. Three garment photos were used in this testing phase, one captured within the prototype and the other two taken with a smartphone. The first photo, depicted in Figure 10, revealed that the segmentation was not perfect: the model wrongly identified some background pixels as region of interest, especially around the hanger. To limit the effect of such segmentation errors, the colour detection algorithm only considered percentages above 10 %. The percentage analysis revealed a predominance of cadet blue (40 %) together with two other colours, slate grey (20 %) and sea green (12 %) (Figure 11). The green colour corresponds to background pixels, due to the difficulty in accurately distinguishing the region of interest from the background, especially near the hanger. Additionally, two photos were captured using a smartphone, as shown in Figure 12 and Figure 14. Figure 12 showcases a precise segmentation, with the colour distribution divided into three colours: dark cyan (32 %), light sea green (27 %), and teal (19 %) (Figure 13). The remaining 22 % encompasses colours with low percentages that are irrelevant for this analysis, such as background pixels erroneously considered as region of interest, especially around the garment's edges. In the last test, the segmentation algorithm was capable of distinguishing the region of interest from the background. The colour analysis shows the following distribution: rosy brown (50 %), grey (29 %), and dark grey (17 %) (Figure 15).

Figure 10. T-shirt segmentation and classification.
Figure 11. T-shirt colour distribution.
Figure 12. Shorts segmentation: (a) original image; (b) segmented image.
Figure 13. Shorts colour distribution.
Figure 14. T-shirt segmentation and classification.
Figure 15. T-shirt colour distribution.

As previously stated, one of the goals of this study was to validate the system with the blind community. To achieve this, a preliminary validation was conducted through an interview with a blind representative of the ACAPO association. The purpose of the interview was to address important questions regarding the use of such systems by visually impaired individuals. The interview began with an overview of the system to introduce all the relevant details; following this, several questions were posed to identify the challenges blind people face when selecting and managing garments. The study revealed that distinguishing between the different colours of each garment is a significant challenge. Insights gained from the validation process also suggest that presenting the location of each colour would be more relevant for blind users than showing the percentage of each colour. Moreover, it was suggested to include audible feedback during the selection process, such as a gradually rising sound or a voice system indicating the start and the end of the process.

6. Conclusions and future work

The scope of this work falls within the field of technology used to assist human necessities [41]-[43]. The primary objective of this paper was to present a system that assists blind people in choosing their clothes. The design of the prototype was the first part of the project; it required finding on the market a wardrobe that would allow a proof of concept in a home environment. To develop the system, it was necessary to study the requirements for its operation, particularly with regard to lighting, tag reading, rotation and movement of the clothes, and image capture. To operate these modules, an algorithm was developed to control each of the electronic elements. Following the validation and individual testing of each module, they were integrated into a single system controlled from a command line interface. To obtain the predominant colours of the clothes, colour detection and background removal algorithms were implemented, with the latter using a neural network.
The prototype presented is part of a larger project under development, and its future integration into a mobile application, as part of the MyEyes system [4], [5], is planned. In this regard, a server for user request management has already been implemented. This integration will allow the physical wardrobe to be replicated on a mobile device, enabling clothing selection with a single click. In summary, the system proved to be a potentially efficient solution to reduce the time taken for garment selection, particularly in terms of colour differentiation and the selection of combinations for blind people. However, as the developed prototype is a proof of concept, the system has some limitations, including the reduced size of the wardrobe, which allowed only small-sized clothes and a limited number of items inside the wardrobe at a time. All the electronics developed were designed for this small prototype and will need to be resized and relocated for a prototype holding larger clothes. When the physical prototype of larger dimensions is integrated with MyEyes [4], [5], a more comprehensive validation will be conducted with the blind community, allowing the developed system to be tested with a view to its possible commercialisation.

Acknowledgements

This work has the support of the Association of the Blind and Amblyopes of Portugal (ACAPO) and the Association of Support for the Visually Impaired of Braga (AADVDB). Their considerations provided the first insights towards a viable solution for the blind community. This work has been supported by national funds through FCT - Fundação para a Ciência e Tecnologia within the R&D units project scopes UIDB/00319/2020, UIDB/05549/2020 and UIDP/05549/2020.

References

[1] M. Mashiata, T. Ali, P. Das et al., Towards assisting visually impaired individuals: a review on current status and future prospects, Biosensors and Bioelectronics: X, 10 (2022), 100265. doi: https://doi.org/10.1016/j.biosx.2022.100265
[2] H. Q. Nguyen, A. H. L. Duong, M. D. Vu, T. Q. Dinh, H. T. Ngo, Smart blind stick for visually impaired people, 8th Int. Conf. on the Development of Biomedical Engineering (BME8), 2022, vol. 85, pp. 145-165. doi: https://doi.org/10.1007/978-3-030-75506-5_12
[3] F. S. Apu, F. I. Joyti, M. A. U. Anik, M. W. U. Zobayer, A. K. Dey, S. Sakhawat, Text and voice braille translator for blind people, Int. Conf. on Automation, Control and Mechatronics for Industry 4.0 (ACMI), 2021. doi: https://doi.org/10.1109/acmi53878.2021.9528283
[4] D. Rocha, V. Carvalho, E. Oliveira, J. Goncalves, F. Azevedo, MyEyes - automatic combination system of clothing parts to blind people: first insights, Proc. of the IEEE 5th Int. Conf. on Serious Games and Applications for Health (SeGAH), Perth, WA, Australia, 2-4 April 2017. doi: https://doi.org/10.1109/segah.2017.7939298
[5] D. Rocha, V. H. Carvalho, J. Gonçalves, E. Oliveira, F. Azevedo, MyEyes - automatic combination system of clothing parts to blind people: prototype validation, SENSORDEVICES 2017 Conf., Rome, Italy, 10-14 September 2017.
[6] D. Rocha, V. Carvalho, F. Soares, E. Oliveira, Extracting clothing features for blind people using image processing and machine learning techniques: first insights, in: Tavares, J., Natal Jorge, R. (eds) VipIMAGE 2019, Lecture Notes in Computational Vision and Biomechanics, vol. 34, Springer, Cham, pp. 411-418. doi: https://doi.org/10.1007/978-3-030-32040-9_42
[7] D. Rocha, V. Carvalho, F. Soares, E.
Oliveira, A model approach for an automatic clothing combination system for blind people, DLI 2020, Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol. 366, Springer, Cham, pp. 74-85. doi: https://doi.org/10.1007/978-3-030-78448-5_6
[8] D. Rocha, V. Carvalho, J. Gonçalves, F. Azevedo, E. Oliveira, Development of an automatic combination system of clothing parts for blind people: MyEyes, Sens. Transducers, 2018, 219, pp. 26-33.
[9] D. Rocha, F. Soares, E. Oliveira, V. Carvalho, Blind people: clothing category classification and stain detection using transfer learning, Appl. Sci., 2023, 13, 1925. doi: https://doi.org/10.3390/app13031925
[10] D. Rocha, V. Carvalho, F. Soares, E. Oliveira, Design of a mechatronic system to combine garments for blind people: first insights, 6th EAI Int. Conf. on IoT Technologies for Healthcare, Braga, Portugal, 4-6 December 2019. doi: https://doi.org/10.1007/978-3-030-42029-1_4
[11] D. Rocha, L. Pinto, J. Machado, F. Soares, V. Carvalho, Using object detection technology to identify defects in clothing for blind people, Sensors, 2023, 23, 4381. doi: https://doi.org/10.3390/s23094381
[12] H. Bang, J. Su, Who uses virtual wardrobes? Investigating the role of consumer traits in the intention to adopt virtual wardrobes, Sustainability, 2022, 14, 1209. doi: https://doi.org/10.3390/su14031209
[13] A. Perry, Consumers' acceptance of smart virtual closets, J. Retail. Consum. Serv., 2016, 33, pp. 171-177. doi: https://doi.org/10.1016/j.jretconser.2016.08.018
[14] D. Rocha, V. Carvalho, F. Soares, E. Oliveira, C. P. Leão, Understand the importance of garments' identification and combination to blind people, in: T. Ahram, R. Taiar (eds), Human Interaction, Emerging Technologies and Future Systems V, 2022, Springer International Publishing, pp. 74-81. doi: https://doi.org/10.1007/978-3-030-85540-6_10
[15] R. Alabduljabbar, An IoT smart clothing system for the visually impaired using NFC technology, Int. Journal of Sensor Networks, 2022, 38, 1, pp. 46-57. doi: https://doi.org/10.1504/ijsnet.2022.120273
[16] S. J. V. Gatis Filho, J. de Assumpção Macedo, M. M. Saraiva, J. E. A. Souza, F. B. Breyer, J. Kelner, My best shirt with the right pants: improving the outfits of visually impaired people with QR codes and NFC tags, in: Marcus, A., Wang, W. (eds) Design, User Experience, and Usability: Designing Interactions, DUXU 2018, Lecture Notes in Computer Science, vol. 10919, Springer, Cham. doi: https://doi.org/10.1007/978-3-319-91803-7_41
[17] K. N. Goh, Y. Y. Chen, E. S. Lin, Developing a smart wardrobe system, 2011 IEEE Consumer Communications and Networking Conf. (CCNC), Las Vegas, NV, USA, 9-12 January 2011, pp. 303-307. doi: https://doi.org/10.1109/ccnc.2011.5766478
[18] Amazon, Amazon Help. Online [accessed May 2022]. https://www.amazon.com/gp/help/customer/display.html?nodeid=202120810
[19] Fashion Taste API. Online [accessed May 2022]. https://fashiontasteapi.com/our-technology
[20] Rabbit Tech Inc., Smart Closet. Online [accessed May 2022]. https://smartcloset.me
[21] Tailor, The Smart Closet. Online [accessed May 2022]. http://www.tailortags.com
[22] Aipoly. Online [accessed May 2022]. https://www.aipoly.com
[23] Easterseals Crossroads, ColorTest 2000. Online [accessed May 2022]. https://www.eastersealstech.com/2013/08/13/colortest-2000
[24] Easterseals Crossroads, Colorino color identifier - light detector. Online [accessed May 2022]. https://www.eastersealstech.com/2016/07/05/colorinos-color-identifier-light-detector
[25] S. Yang, S. Yuan, Y. Tian, Assistive clothing pattern recognition for visually impaired people, IEEE Trans. on Human-Mach. Syst., 2014, 44, 2, pp. 234-243. doi: https://doi.org/10.1109/thms.2014.2302814
[26] J. Jarin Joe Rini, B. Thilagavathi, Recognizing clothes patterns and colours for blind people using neural network, ICIIECS 2015 - IEEE Int. Conf. on Innovations in Information, Embedded and Communication Systems, Coimbatore, India, 19-20 March 2015. doi: https://doi.org/10.1109/iciiecs.2015.7193006
[27] X. Yang, S. Yuan, Y. L. Tian, Recognizing clothes patterns for blind people by confidence margin-based feature combination, MM'11 - Proc. of the 2011 ACM Multimedia Conf. and co-located workshops, 2011, pp. 1097-1100. doi: https://doi.org/10.1145/2072298.2071947
[28] A. J. Medeiros, L. Stearns, L. Findlater, C. Chen, J. E. Froehlich, Recognizing clothing colors and visual textures using a finger-mounted camera: an initial investigation, ASSETS 2017 - 19th Int. ACM SIGACCESS Conf. on Computers and Accessibility, Baltimore, MD, USA, 20 October - 1 November 2017, pp. 393-394. doi: https://doi.org/10.1145/3132525.3134805
[29] T. Khurana, K. Mahajan, C. Arora, A. Rai, Exploiting texture cues for clothing parsing in fashion images, Proc. of the Int. Conf. on Image Processing (ICIP), Athens, Greece, 7-10 October 2018, pp. 2102-2106. doi: https://doi.org/10.1109/icip.2018.8451281
[30] Y. Ge, R. Zhang, X. Wang, X. Tang, P. Luo, DeepFashion2: a versatile benchmark for detection, pose estimation, segmentation and re-identification of clothing images, Proc. of the IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, June 2019, pp. 5332-5340. doi: https://doi.org/10.48550/arxiv.1901.07973
[31] J. Wang, X. Wan, L. Li, J. Wang, An improved DeepLab model for clothing image segmentation, 2021 IEEE 4th Int. Conf. on Electronics and Communication Engineering (ICECE 2021), pp. 49-54. doi: https://doi.org/10.1109/icece54449.2021.9674326
[32] X. Zhang, C. Song, Y. Yang, Z. Zhang, X. Zhang, P. Wang, Q. Zou, Deep learning based human body segmentation for clothing fashion classification, Proc. of the 2020 Chinese Automation Congress (CAC 2020), Shanghai, China, 6-8 November 2020, pp. 7544-7549. doi: https://doi.org/10.1109/cac51589.2020.9327016
[33] T. Yang, Y. Shi, H. Huang, F. Du, Recognize the silhouette attributes of t-shirts based on Mask R-CNN, IAEAC 2021 - IEEE 5th Advanced Information Technology, Electronic and Automation Control Conf., 2021, pp. 106-110. doi: https://doi.org/10.1109/iaeac50856.2021.9390860
[34] S. Chiodini, M. Pertile, S. Debei, Occupancy grid mapping for rover navigation based on semantic segmentation, Acta IMEKO, vol. 10, no. 4, article 25, December 2021. doi: https://doi.org/10.21014/acta_imeko.v10i4.1144
[35] K. Pranathi, N. P. Jagini, S. K. Ramaraj, D. Jeyaraman, Video-based emotion sensing and recognition using convolutional neural network based kinetic gas molecule optimization, Acta IMEKO, vol. 11, no. 2, article 13, June 2022. doi: https://doi.org/10.21014/acta_imeko.v11i2.1174
[36] S. Thirumaladevi, K. Veera Swamy, M. Sailaja, Multilayer feature fusion using covariance for remote sensing scene classification, Acta IMEKO, vol. 11, no. 1, article 33, March 2022. doi: https://doi.org/10.21014/acta_imeko.v11i1.1228
[37] D. He, Y. Qiu, J. Miao, Z. Zou, K. Li, C. Ren, G. Shen, Improved Mask R-CNN for obstacle detection of rail transit, Measurement, 2022, 190, 110728. doi: https://doi.org/10.1016/j.measurement.2022.110728
[38] IKEA, Baggebo. Online [accessed May 2022]. https://www.ikea.com/pt/pt/p/baggebo-armario-c-porta-branco-60481204/
[39] GitHub, Inc., alexeygrigorev/clothing-dataset. Online [accessed May 2022]. https://github.com/alexeygrigorev/clothing-dataset
[40] L. Silva, D. Rocha, V. Carvalho, J. Sena Esteves, F. Soares, Automatic wardrobe for blind people, 9th EAI Int. Conf. on IoT Technologies for Healthcare, Braga, Portugal, 16-18 November 2022, in: Spinsante, S., Iadarola, G., Paglialonga, A., Tramarin, F. (eds) IoT Technologies for Healthcare (HealthyIoT 2022), Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol. 456, Springer, Cham, pp. 3-13. doi: https://doi.org/10.1007/978-3-031-28663-6_1
[41] F. Cleary, W. Srisa-an, B. Gil, J. Kesavan, T. Engel, D. C. Henshall, S. Balasubramaniam, Wearable fabric μBrain enabling on-garment edge-based sensor data processing, IEEE Sensors Journal, 2022, 22, 21, 1 November 2022, pp. 20839-20854. doi: https://doi.org/10.1109/jsen.2022.3207912
[42] E. Ayodele, S. A. Raza Zaidi, J. Scott, Z. Zhang, A. Hayajneh, S. Shittu, D. McLernon, A weft knit data glove, IEEE Transactions on Instrumentation and Measurement, 2021, 70, art. no. 4004212. doi: https://doi.org/10.1109/tim.2021.3068173
[43] E.
Sardini, M. Serpelloni, V. Pasqui, Wireless wearable t-shirt for posture monitoring during rehabilitation exercises, IEEE Transactions on Instrumentation and Measurement, 2015, 64, 2, pp. 439-448. doi: https://doi.org/10.1109/tim.2014.2343411

Statements of conformity provided by laboratories

Ana Čop
Croatian Metrology Society, Zagreb, Croatia

Section: Technical note
Keywords: statement of conformity; decision rule; laboratory; ILAC G8; ISO/IEC 17025
Citation: Ana Čop, Statements of conformity provided by laboratories, Acta IMEKO, vol. 12, no. 2, article 34, June 2023, identifier: IMEKO-ACTA-12 (2023)-02-34
Section editor: Leonardo Iannucci, Politecnico di Torino, Italy
Received January 31, 2023; in final form June 11, 2023; published June 2023
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Corresponding author: Ana Čop, e-mail: ancica.co@gmail.com

Abstract: When required to provide a statement of conformity, the laboratory shall agree with its customer on the specification and the decision rule to be applied. Employing a decision rule on how to take measurement uncertainty into account carries the risk of providing an incorrect decision; to reduce this risk, the mechanism of guard banding is introduced. This technical note covers the approach presented in ILAC G8 and other relevant literature. In some cases, e.g. when assessing conformity with regulatory requirements, measurement uncertainty is taken into account indirectly. When providing statements of conformity, laboratories usually consider the risk related to a particular test item. Providing statements of conformity is also covered by the ISO/IEC 17025 standard on the competence of laboratories.

1. Introduction

Laboratories are often required or expected to provide a statement of conformity to a specification or standard. Specifications are usually defined in the product specification, in applicable legislation, or by the laboratory customer; they represent the acceptable limits that (a property of) a test item shall comply with. Because of measurement uncertainty, when measurement results are close to the specification limit, the way in which measurement uncertainty is taken into account can influence the decision on the conformity of the test item. The risk of an incorrect decision is taken by the side specifying the decision rule to be applied; in the context of laboratory work, this is the laboratory's customer. This note provides information on relevant literature and practice to help laboratories extend their knowledge of statements of conformity and the risks associated with decision rules. The starting points are ILAC G8 [1] and JCGM 106 [2]; for advanced statistics and calculations, document [2] should be consulted.

2. Decision rule

The agreement on how to take measurement uncertainty into account is called a decision rule. The definition of a decision rule in ISO/IEC 17025 [3] is a "rule that describes how measurement uncertainty is accounted for when stating conformity with a specified requirement". The definition in [2], "rule that describes how measurement uncertainty will be accounted for with regard to accepting or rejecting an item, given a specified requirement and the result of a measurement", is based on the consequences of the decision on the conformity of the test item; [2] also states that this rule shall be documented. Statements of conformity can be expressed as a binary decision (conforming/nonconforming, yes/no, pass/fail) or as a non-binary decision with more than two possible outcomes, e.g. pass, fail, and inconclusive, or conditional pass/fail [1].
In order to provide statements of conformity, a specification or tolerance limit (TL), defined as the "specified upper or lower bound of permissible values of a property" [2], and an acceptance limit (AL), defined as the "specified upper or lower bound of permissible measured quantity values" [2], are established. If there are both upper and lower limits, intervals are considered. A common decision rule, known as simple acceptance, considers a test item conforming if the measured value lies in the tolerance interval, as shown in Figure 1. To keep the probability of incorrect decisions at an acceptable level, this decision rule is associated with a requirement on the value of the related acceptable measurement uncertainty [2]; simple acceptance should therefore not be understood as disregarding measurement uncertainty.

The risk of providing an incorrect decision on conformity is kept under control by the introduction of guard bands. A guard band (w) is an interval between a tolerance limit and a corresponding acceptance limit, with length w = |TL - AL|, as defined in [1]. As [1] states, guard bands are a "safety factor built into the measurement decision process". Applying a guard band to reduce the risk of identifying a nonconforming item as conforming (in other words, reducing the probability of falsely accepting a nonconforming item) by setting the acceptance limit inside the tolerance interval is called guarded acceptance [2]; for guarded acceptance, the length parameter w is taken to be positive [2]. Guarded acceptance is shown in Figure 2. Guarded rejection is used to increase the probability that an item identified as nonconforming is truly nonconforming (in other words, to reduce the probability of falsely rejecting a conforming item) [2]; it is commonly used "to have evidence that a limit has been exceeded prior to taking a negative action" [2]. The guard band w is usually taken to be "a multiple (r) of the expanded uncertainty U for a coverage factor k = 2, U = 2 u", i.e. w = r U, as defined in [2]. The Eurachem/CITAC guide [4] provides guidance and examples for determining the guard band and the acceptance limit taking into account the measurement uncertainty, the distribution of measurand values and the required level of probability that the measurand is within the specification limit. The EUROLAB guide [5] provides examples where decision rules are defined based on the hypothesis-testing approach.
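As a worked illustration of these definitions (not an excerpt from the cited guides), a guarded-acceptance rule for an upper tolerance limit can be sketched in Python as follows; the function name and the default r = 1 are assumptions.

    # A sketch of guarded acceptance for an upper tolerance limit;
    # the r = 1 default is an assumption for illustration.
    def guarded_acceptance(measured, tl_upper, U, r=1.0):
        """Binary decision with guard band w = r*U: the acceptance limit
        AL = TL - w lies inside the tolerance interval."""
        w = r * U              # guard band, multiple of the expanded uncertainty
        al = tl_upper - w      # acceptance limit
        return "accept" if measured <= al else "reject"

    # example: TL = 10.0, expanded uncertainty U = 0.4 (k = 2)
    print(guarded_acceptance(9.7, 10.0, 0.4))   # reject: 9.7 > 9.6
    print(guarded_acceptance(9.5, 10.0, 0.4))   # accept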
In order to ensure that measurement uncertainty is taken into account consistently, especially when assessing conformity to regulatory requirements, measurement uncertainty is sometimes taken into account indirectly. Practices in legal metrology [6] include accounting for plausible measurement uncertainty in the maximum permissible errors (MPEs), by "establishing conservative (in-service) maximum permissible errors in order to draw 'safe' conclusions concerning whether measured errors of indication are within acceptable limits" or by "specifying a fraction, like 1/3 or 1/5, for the maximum allowed ratio of the error (uncertainty) of the standard (reference) measuring instrument to the MPE". An example where an acceptable measurement uncertainty is included in the acceptance limit is the WADA technical document [7]. This document provides a detailed explanation of how the decision limit is defined, with a guard band built from the maximum acceptable measurement uncertainty obtained from the external quality assessment scheme and a k factor corresponding to the 95 % coverage range of a one-tailed normal distribution. It is also required that the measurement uncertainty of a particular laboratory be lower than this maximum acceptable measurement uncertainty, and the document provides instructions on how to report. An example from law-enforcement practice is the reduction of the measured speed value by a certain amount (a guard band) when calculating a fine for a speeding violation, which ensures that the decision to impose the fine is made with significant confidence that the speed limit has been exceeded. In some cases, as in water legislation [8], the performance criteria, limit of detection and acceptable measurement uncertainty are defined and, if ensured, the obtained measurement result is compared directly with the specification limit. The decision rule can also be explicitly stated in regulations or standards. As defined in [9], supporting European official food control, "compliance with the maximum residue level (MRL) is checked by assuming that the MRL is exceeded if the measured value exceeds the MRL by more than the expanded uncertainty (x - U > MRL). With this decision rule, the value of the measurand should be above the MRL with at least 97.5 % confidence".

3. Risk consideration

When a laboratory considers the risk of falsely accepting a nonconforming item or falsely rejecting a conforming item, this is commonly based on the measurement of a particular item. As already mentioned, the risk arising from a measurement decision lies with the side that specifies the decision rule to be applied. Since the decision rule is specified, or at least accepted, by laboratory customers, it is up to them to cope with the risk of incorrect decisions; laboratory customers also take responsibility for the consequences of nonconforming items. As [4] points out, the resources invested in reducing measurement uncertainty need to be balanced against the costs arising from the higher number of incorrect decisions associated with a larger measurement uncertainty. As described in [1], different guard bands correspond to different levels of risk. In the case of simple acceptance, assuming a normal distribution of measurement results, the probability of a false accept is up to 50 %. In the case of guarded acceptance with a guard band equal to the expanded measurement uncertainty, again assuming a normal distribution of measurement results, the probability of a false accept is up to 2.5 %. For larger guard bands, the probability of a false accept is further reduced.
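The worst-case figures of 50 % and 2.5 % quoted above can be reproduced with a short calculation, assuming a normal distribution with standard uncertainty u; SciPy is assumed here for the normal distribution functions.

    # Worst case: the measured value sits exactly on the acceptance limit.
    from scipy.stats import norm

    def worst_case_false_accept(w, u):
        """Probability that the true value exceeds the tolerance limit,
        for guard band w = TL - AL and standard uncertainty u."""
        return norm.sf(w / u)   # survival function, 1 - CDF

    print(worst_case_false_accept(0.0, 1.0))  # simple acceptance: 0.5
    print(worst_case_false_accept(2.0, 1.0))  # w = U = 2u: about 0.023 (up to 2.5 %)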
It is up to the laboratory to consider other laboratory risks. A laboratory should invest effort in ensuring that measurement uncertainty is monitored, estimated realistically, fit for purpose and stable in time. With respect to laboratory personnel, competence, skills and knowledge should include understanding the context and purpose of the measurement requested by the laboratory customer, standard or legislation, if applicable, and the risks associated with decision rules.

Figure 1. Illustration of simple acceptance, based on ILAC G8 [1].
Figure 2. Illustration of guarded acceptance, based on ILAC G8 [1].

4. ISO/IEC 17025 requirements

The ISO/IEC 17025 [3] standard includes requirements related to the provision of statements of conformity by laboratories. The personnel responsible shall be authorised for this activity (ISO/IEC 17025, §6.2.6) based on the competence criteria set, and the monitoring of laboratory personnel competence shall also cover the competence for providing statements of conformity. In the request review phase, it is up to the laboratory to ensure that the customer requesting a statement of conformity is aware of and agrees with the decision rule (ISO/IEC 17025, §7.1.3). When a statement of conformity is provided, the employed decision rule shall be documented and applied; the decision rule shall take into account the risks of an incorrect decision, unless it is specified by the customer, legislation, or a standard (ISO/IEC 17025, §7.8.6.1). The report shall contain both the specification and the decision rule applied (ISO/IEC 17025, §7.8.6.1). The extent of what the statement of conformity shall identify is prescribed by ISO/IEC 17025, §8.6.2, and includes "to which results the statement of conformity applies; which specifications, standards or parts thereof are met or not met; and which decision rule is applied (unless it is inherent in the requested specification or standard)" [3].

5. Conclusions

The risk associated with the applied decision rule is to be taken by the laboratory customer. To assist customers, who are sometimes not even aware of the risk of an incorrect decision, the laboratory shall understand the purpose of the measurement and the level of risk associated with the decision rule applied. To ensure consistent understanding and application, both the decision rule and the specification must be defined and documented in the request review stage, and the resulting report must include information on both. In order to make a reliable decision on the conformity of the test item, it is an important task of the laboratory to realistically estimate and monitor measurement uncertainty. Special care shall be taken when several properties are measured in the same sample: providing a general statement that the sample is conforming can be misleading, since, as described in ISO/IEC 17025, §8.6.2, a statement of conformity shall identify "to which results it applies" [3]. When sampling, as one of the laboratory activities recognised by ISO/IEC 17025, is included in the laboratory services, the contribution of sampling to the overall measurement uncertainty shall also be considered when providing statements of conformity.

Acknowledgement

I wish to thank Višnja Gašljević for numerous fruitful discussions on measurement uncertainty and decision rules.
References

[1] ILAC, ILAC-G8:09/2019, Guidelines on decision rules and statements of conformity. Online [accessed 12 June 2023]. https://ilac.org/publications-and-resources/ilac-guidance-series/
[2] BIPM, JCGM 106:2012, Evaluation of measurement data - The role of measurement uncertainty in conformity assessment. Online [accessed 12 June 2023]. https://www.bipm.org/en/committees/jc/jcgm/publications
[3] ISO, ISO/IEC 17025:2017, General requirements for the competence of testing and calibration laboratories.
[4] A. Williams, B. Magnusson (eds.), Eurachem/CITAC Guide: Use of uncertainty information in compliance assessment, 2nd ed., 2021. ISBN 978-0-948926-38-9. Online [accessed 12 June 2023]. https://www.eurachem.org/index.php/publications/guides/uncertcompliance
[5] EUROLAB "Cook Book" - Doc. no. 8, rev. 3, 09/2018, Determination of conformance with specifications using measurement uncertainties - possible strategies. Online [accessed 12 June 2023]. https://www.eurolab.org/pubs-cookbooks
[6] OIML, OIML G 19, The role of measurement uncertainty in conformity assessment decisions in legal metrology, edition 2017. Online [accessed 12 June 2023]. https://www.oiml.org/en/files/pdf_g/g019-e17.pdf/view
[7] WADA, WADA Technical Document TD2022DL, Decision limits for the confirmatory quantification of exogenous threshold substances by chromatography-based analytical methods, 6 October 2021. Online [accessed 12 June 2023]. https://www.wada-ama.org/en/resources/lab-documents/td2022dl
[8] EU, Directive (EU) 2020/2184 of the European Parliament and of the Council of 16 December 2020 on the quality of water intended for human consumption. Online [accessed 12 June 2023]. https://eur-lex.europa.eu/eli/dir/2020/2184/oj
[9] EU, Analytical quality control and method validation procedures for pesticide residues analysis in food and feed, SANTE/11312/2021. Online [accessed 12 June 2023]. https://www.eurl-pesticides.eu/userfiles/file/eurlall/sante_11312_2021.pdf
Metrological atomic force microscope and traceable measurement of nano-dimension structures

Sitian Gao, Mingzhen Lu, Wei Li, Yushu Shi, Qi Li
National Institute of Metrology, 100013, Beijing, China

Keywords: nanometrology; AFM; traceability; nanostructure; pitch; linewidth; step height
Citation: Sitian Gao, Mingzhen Lu, Wei Li, Yushu Shi, Qi Li, "Metrological atomic force microscope and traceable measurement of nano-dimension structures", Acta IMEKO, vol. 2, no. 1, article 6, August 2013, identifier: IMEKO-ACTA-02(2013)-01-06
Editors: Paolo Carbone, University of Perugia, Italy; Ján Šaliga, Technical University of Košice, Slovakia; Dušan Agrež, University of Ljubljana, Slovenia
Received January 10th, 2013; in final form July 11th, 2013; published August 2013
Copyright: © 2013 IMEKO. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: none reported
Corresponding author: Sitian Gao, e-mail: gaost@nim.ac.cn

Abstract: Quality assurance in the development of the semiconductor industry requires dimensional measurements with nanometer accuracy. A metrological AFM has been designed to establish a traceable standard with nanometer uncertainty. The principle and design of the instrument are introduced in this paper. The displacement of the sample is traced to the SI unit by interferometers. The metrological AFM is applied to step height and line pitch measurements, and the results are compared with an optical instrument and a profilometer. The metrological AFM was used for step height measurement in an international comparison, and the result shows an uncertainty of less than 2 nm. The application of the metrological AFM to pitch measurements is also introduced.

1. Introduction

With the development of the semiconductor industry, the line width of integrated circuits is decreasing to tens of nanometers. The miniaturization in semiconductor manufacturing requires measurements with nanometer accuracy in the lithographic process, and accurate dimensions and positions of patterns are essential for present industrial quality. The main industrial countries around the world have been working on establishing metrological specifications and instruments relevant to nanotechnology.
The Discussion Group 7 (DG7) for nanometrology under the Consultative Committee for Length's Working Group on Dimensional Metrology (CCL-WGDM) decided to perform a comparison of five different types of artifacts in order to set up an international nanometrological regime. Atomic force microscopes (AFMs) provide high-resolution, quantitative measurements of nano- and micro-scaled structures [1]. AFMs are engaged in dimensional measurements including distance, pitch, width, diameter, geometry, roughness, and thickness [2]. The probe or the sample is usually scanned by a piezotube, and the displacement is measured by capacitive sensors. Due to the nonlinearity and hysteresis of the piezo scanner, AFMs with metrological abilities are required for consistency in measurement quality. Most metrological AFMs integrate laser interferometers to ensure direct traceability to the SI unit of length, and have a full uncertainty budget so that an uncertainty can be assigned to the measurement results. Ordinary AFMs can be calibrated using transfer standards, such as step heights, 1D or 2D gratings and 3D pyramids, which are well characterized and traced to metrological AFMs. At the National Institute of Metrology (NIM), a metrological AFM has been developed and improved. In this paper, we report the principle and architecture of the metrological AFM, and then demonstrate its calibration applications to step height and pitch artifacts.

2. Metrological AFM

2.1. Principle of AFM

AFM is based on the interaction between a sharp tip and the sample. When the cantilever approaches the sample surface, the interaction between the tip and the sample causes the cantilever to bend.
the signal is detected for feedback control of the piezostage, to keep the distance between the tip and the sample constant, so the sensor serves as the zero-point indicator.

2.2. metrological afm instrument
in a commercial afm using a piezotube to drive the probe, the bending of the tube will change the length of the tube in the z-direction when scanning in the xy plane [3]. the scanning surface is curved and requires calibration by standards. the displacements also require calibration for the nonlinearity of the piezoelectric actuators. most of the metrological afms are equipped with interferometers to calibrate the position and trace the displacement to the si unit [4]. by the incorporation of three miniature laser interferometers, an afm (zeiss) has been modified into the metrological afm in cooperation with the ilmenau technical university and the physikalisch-technische bundesanstalt [5]. the design is shown in figure 3. the sample stage is moved by a three-dimensional scanner. the monolithic flexure-hinge stage is driven by three linear piezoceramic elements with capacitive sensors in three directions. the flexure hinge is designed with different lengths and deflection angles to achieve a scanning range of 70 μm × 15 μm × 15 μm. the displacements in the three axes are traceable by interferometers with a 633 nm laser. the maximum angular motion of the stage is 0.5 arcsec, and the tip of the afm is placed at the intersection of the interferometer beams to eliminate the abbe error. the displacement signal is sent by a computer to the d/a converter to drive the piezostage. the difference between the expected position p and the real position ps measured by the interferometers is caused by the motion coupling between different axes and the nonlinearity error of the motion axes. the compensation function is derived from the measurement.

figure 1. afm with confocal sensing technique.
figure 2. the cantilever signal versus the tip displacement.
figure 3. structure of the metrological afm.
figure 4. a sample of a step height standard.

3. measurement results
the step height, line width and pitch are three important parameters of nano-structure dimensions in integrated-circuit manufacturing. to provide consistent and traceable measurements, the transfer standards are calibrated by the metrological afm and then used to calibrate instruments in industry and research.

3.1. step height
step height standards are utilized to calibrate the z-axis of microscopes and topography measuring instruments. a step sample is shown in figure 4. it is fabricated on a si substrate with sio2 step squares and bars. six height samples are fabricated with nominal heights of 300 nm, 400 nm, 600 nm, 900 nm, 1000 nm and 1800 nm. the samples are measured with a profilometer and an afm to compare different instruments and methods. the results are shown in table 1. the step height results measured with different instruments are consistent, with deviations less than 10 nm. to establish equivalence in the nanoscale regime, international comparisons of nano-standards started in the year 2000. a step height international comparison with ptb as the pilot laboratory has been accomplished [6]. the standards are si substrates with sio2 steps coated with cr. the measured region is 100 µm × 100 µm.
the scanning area of the instrument is 70 μm × 12 μm with 400 × 200 pixels, so three subregions on the standard are scanned from top down in the 100 nm range. the result for a 20 nm step height sample is shown in figure 5. for each profile, the 15 μm lines marked on the upper step profile and on the lower surface are fitted with a least-squares criterion to obtain the step height. the line step heights at different positions are averaged. fifteen national metrology institutes joined the comparison with different instruments. the result for the 70 nm step height is shown in figure 6. the uncertainty of the measured result of nim is no more than 2 nm. this uncertainty is somewhat larger compared with the others; in fact, the crosstalk in the z-direction caused by movement in the x- and y-axes is the main uncertainty component in the step height measurement, and it is independent of the step height. the uncertainties for large step height measurements are therefore relatively small compared with those of the other participants [6].

figure 5. image of a 20 nm step sample measured by the metrological afm (scan area 69.246 µm × 12.147 µm, height range 70 nm, three 15 µm evaluation lines).

table 1. step height standard results measured with different instruments (nm).
no | profilometer (a) | metrological afm | afm (ptb) (b)
1 | 301.0 | 300.5 | 301.3
2 | 396.0 | 393.5 | 391.7
3 | 593.5 | 598.9 | 595.6
4 | 924.0 | 922.7 | 923.6
5 | 1079.0 | 1077.0 | 1075.1
6 | 1739.0 | 1730.0 | 1722.7
(a) alpha step 200. (b) metrological spm with interferometer calibration.

figure 6. measured step heights hi of the participating institutes and reference value href (red line) of the 70 nm step height sample [6].

3.2. 1d grating pitch
the lateral magnification of the afm can be calibrated by 1d or 2d pitch standards. these standards are calibrated by the metrological afm. for a periodic structure, the gravity centres of the grating structures are calculated, and the positions of the centres are then averaged to obtain the pitch value. the results for a 3 µm pitch are shown in figure 7. the scanning range is 35 µm × 10 µm. the profile is shown in figure 7 (b). the height of the grating is about 110 nm. a threshold line across the profile is chosen to separate the grating structures. the gravity centre of each structure is calculated and the average pitch is determined [7]. usually the sample is mounted with an angle deviation relative to the interferometer axes. the incline of the sample causes a cosine error. as shown in figure 8, the error from the angle $\alpha$ between the profile and the x-axis can be corrected by fitting the baseline of the single profile. the sample orientation in the sample plane relative to the interferometer axes is also corrected; a series of measurements along the y-axis is required to calculate the orientation of the grating bars. the real pitch is obtained by

$$x = x' \sin(\beta)\,\cos(\alpha) , \qquad (1)$$

where $x'$ is the measured value and $\beta$ is the angle between the measured profile and the direction of the bars. the average pitch is 3.01 µm with a standard deviation of 0.05 µm due to the inhomogeneity of the sample. the uncertainties of the wavelength and the temperature are negligible. the uncertainty of the instrument due to the coupling of the different axes and the nonlinearity is 1.15 nm.

figure 7. measurement results of a 3 µm 1d grating with 35 µm × 10 µm scanning range (a) and a 1d scanning profile of the grating (b).
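a small illustration of the gravity-centre pitch evaluation and the correction of eq. (1) follows. this is a simplified sketch, not the nim analysis software; it assumes the scan starts and ends in a groove, and the synthetic grating values are made up:

```python
import numpy as np

def grating_pitch(x, profile, threshold=None):
    """average pitch from the gravity centres of the grating bars.

    a threshold separates bars from grooves; the intensity-weighted
    centroid of each bar is computed and the spacings of consecutive
    centroids are averaged.
    """
    x = np.asarray(x, dtype=float)
    z = np.asarray(profile, dtype=float)
    if threshold is None:
        threshold = 0.5 * (z.min() + z.max())
    above = (z > threshold).astype(int)
    edges = np.flatnonzero(np.diff(above))       # bar start/end indices
    starts, ends = edges[0::2] + 1, edges[1::2] + 1
    centres = [np.average(x[s:e], weights=z[s:e] - threshold)
               for s, e in zip(starts, ends)]
    return float(np.mean(np.diff(centres)))

def corrected_pitch(measured, alpha, beta):
    """incline and orientation correction of eq. (1), angles in radians."""
    return measured * np.sin(beta) * np.cos(alpha)

# synthetic 3 um grating, 110 nm high, sampled every 10 nm
x = np.arange(0.0, 35e-6, 10e-9)
z = (np.sin(2 * np.pi * x / 3e-6) > 0.3) * 110e-9
print(grating_pitch(x, z))                        # ~3e-6 m
```

because the same threshold is applied to every bar, small tip-convolution broadening of the edges shifts all centroids alike and largely cancels in the pitch, consistent with the edge-effect argument below.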
the uncertainty due to the cosine error caused by the sample mounting is, after correction, 0.03 nm.

figure 8. error correction of the sample incline.

because the probe tip of the afm has a finite size, the measured profile of a step edge is the convolution of the tip shape and the real step profile. this edge effect is not significant for step height or line pitch measurements of periodic structures, because the edges have the same influence and the effect can be eliminated. the line width, however, is influenced by the tip size of the afm probe, so the definition of the line width is under study in order to retrieve the real width.

4. conclusions
a metrological afm has been designed to establish a traceable standard with nanometer uncertainty. the sample stage is driven by a piezostage, and reflectors mounted on the stage act as reference mirrors of the interferometers, so that the movements of the stage in three directions are traceable. for step height measurements, the metrological afm gives results comparable with other instruments, and the afm demonstrates an uncertainty u95 < 2 nm. for the pitch measurement, the uncertainty from the instrument is less than 2 nm, mainly due to the coupling of the axes and the sample incline. the scanning range limits the application of the metrological afm; a large-range metrological afm has been developed at nim to calibrate larger samples [8].

references
[1] a. yacoot, l. koenders, "recent developments in dimensional nanometrology using afms", meas. sci. technol., 22, (2011), pp. 122001.
[2] j. a. kramar, r. dixson, n. g. orji, "scanning probe microscope dimensional metrology at nist", meas. sci. technol., 22, (2011), pp. 024001.
[3] j. kwon, j. hong, y. kim, "atomic force microscope with improved scan accuracy, scan speed and optical vision", rev. sci. instrum., 74, (2003), pp. 4378-4383.
[4] g. dai, f. pohlenz, h. danzebrink, k. hasche, g. wilkening, "improving the performance of interferometers in metrological scanning probe microscopes", meas. sci. technol., 15, (2004), pp. 444-450.
[5] m. bienias, s. gao, k. hasche, r. seemann, k. thiele, "a metrological scanning force microscope used for coating thickness and other topographical measurements", applied physics a, 66, (1998), pp. s837-s842.
[6] l. koenders, "wgdm-7: preliminary comparison on nanometrology according to the rules of ccl key comparisons, nano 2, step height standards", ptb braunschweig, (2003).
[7] g. dai, l. koenders, f. pohlenz, t. dziomba, h. danzebrink, "accurate and traceable calibration of one-dimensional gratings", meas. sci. technol., 16, (2005), pp. 1241–1249.
[8] m. lu, s. gao, q. li, w. li, y. shi, x. tao, "long range metrological atomic force microscope with versatile measuring head", proc. spie 8759, eighth international symposium on precision engineering measurement and instrumentation, china, 8-11 august 2012, pp. 87594y.

acta imeko, may 2014, volume 3, number 1, 10 – 15, www.imeko.org

fundamental concepts of measurement
l. finkelstein
the city university, department of systems and automation, st. john street, london e.c.1., england

section: research paper
keywords: measurement theory, uncertainty
citation: l. finkelstein, fundamental concepts of measurement, acta imeko, vol. 3, no. 1, article 4, may 2014, identifier: imeko-acta-03 (2014)-01-04
editor: luca mari, università carlo cattaneo
received may 1st, 2014; in final form may 1st, 2014; published may 2014
copyright: © 2014 imeko.
this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited 1. the importance of measurement science the importance of measurement in natural science and technology is undeniable. measurement is the essential tool of scientific investigation and discovery, and it enables the complex phenomena of the universe to be described in the precise, concise and universal language of mathematics, without which it would, in general, be impossible to deduce laws from scientific observation or to formulate useful theoretical constructs. in technology, the increasing complexity and speed of many modern processes and machines, make automatic control essential, and such control is not possible without satisfactory means of measurement. the last three decades have seen spectacular advances in the technology of measurement of physical quantities. the development of electrical sensors and electronic data handling have vastly extended the range of what can be measured. measurement is also making spectacular advances in areas in which it had not hitherto been fruitfully or extensively applied. the social, political, economic and behavioural sciences are to an increasing extent adopting quantitative techniques. the size and complexity of modern society provides difficult problems of planning and control and such activities require data derived from measurement which modern computers make increasingly possible to acquire and handle. yet in spite of the general recognition of the importance of measurement, the progress in its techniques and the spread of its practical application, there is widespread neglect of its fundamental problems. we have not seen a development of measurement science as a widely recognised, distinct branch of knowledge. the teaching of the principles of measurement is neglected in many academic curricula and this neglect is, if anything, increasing. the question must be asked whether a measurement science as an organised, systematic body of knowledge setting out to embrace all aspects and fields of measurement is needed, and, if needed, whether it is possible to build one. it might appear superficially that the impressive development of the technology of measurement achieved by pragmatic approaches shows that a theoretical structure of measurement science is to a large extent superfluous. the opposite, however, is true. the very wealth of subject matter and the explosive growth of scientific information in the field mean that this material must be systematically organised and general principles derived. without a theoretical framework measurement technology would be just a catalogue of techniques and instruments, which a student would find too vast to assimilate, while the practising scientist or engineer would find it impossible to retrieve the right information to lead him to optimal solutions of problems. studies of measurement without a firm theoretical basis do not have the intellectual rigour to justify their place in the academic curriculum, which their practical importance demands. granted that there is a need for a measurement science, is it possible to formulate principles and methods which have general application to all measurement and which are not trivial platitudes? 
the answer must be, that any measurement problem requires a scientific understanding of the particular phenomenon or process under investigation, and generally the bringing together of a wide range of technologies to its solution such as fine-mechanics, electronics, nucleonics and the like. but, while each field of measurement has its own peculiar this is a reissue of a paper which appeared in acta imeko 1973, proceedings of the 6th congress of the international measurement confederation, “measurement and instrumentation”, 17-23.6.1973, dresden, vol. 1, pp. 11–27. the paper proposes a clear, synthetic presentation of some basics of the measurement science. acta imeko | www.imeko.org may 2014 | volume 3 | number 1 | 11 features, and while measurement in one or other of its applications involves virtually all other technologies, a general principle of measurement, which are valid and useful, can be formulated, and indeed, have been, developed to a high degree of sophistication. the principles of measurement are scattered in the literature of a variety of fields: philosophy of science, statistics and, above all, that body of knowledge which is becoming known as systems science and engineering. it is important to collect these principles together, to synthesize them into a coherent whole, and to bring them to bear in a fruitful way on the problems of the technology of measurement. 2. scope and nature of present survey current development of measurement science by engineers has in the main proceeded along two major lines. one is the treatment of measurement information by the methods of information and signal theory. the other is the systemic analysis of measuring instruments and systems using the approach of dynamic systems analysis and of general network theory. the essence of these approaches is the concept that measuring systems operate by the transformation according to a prescribed functional relation of a measurand into an observable. it would be tempting to try to survey the general structure of measurement science as it stands at present. such an approach, however, would be too superficial to be useful. it is proposed therefore to examine one aspect of the subject only, its logical foundations and epistemology. the paper will attempt to review our understanding of the way in which we create an image of reality in terms of numbers and signals, a step which must precede the subsequent processing of the information. even the relatively limited topic tackled is too wide to be treated in this review with absolute rigour nor can all controversial philosophical points be critically discussed. all that is proposed is to outline key concepts as they are seen by the author. for detailed discussions and comprehensive bibliographies readers are referred to references [1-7]. 3. historical development of the epistemology of measurement the logical foundations of measurement – the relation between the material universe and mathematics – have been studied since the dawn of science. one may perhaps date interest in the subject from pythagoreans and their view that all things are number. aristotle presented the first analysis of the philosophical problems of measurement in his metaphysics. plato made a distinction between pure arithmetic and. its application to the real world and this greatly influenced subsequent thinkers. platonic idealism diverted attention from the problems of logical analysis of measurement. 
a major landmark in the progress of understanding of the relations between numbers and the real world has been the establishment by descartes of the connection between algebra and geometry. newton was also concerned with the logical foundations of measurement in his development of the theory of fluents. the foundations of the modern study of the epistemology of measurement with reference mainly to physics have been laid down by helmholtz [8] in the last century. this work has been extended and developed in the twenties of this century in the lucid writings of campbell [9, 10]. mathematicians concerned with the logical basis of their subject have naturally been concerned with the axioms of quantification. russell [11], frege [12], hoelder [13] and wiener [14] have among others explored this topic. philosophers of science have also given measurement their attention in recent times within the general context of their subjects. it may be invidious to single out particular contributions but one can perhaps mention the work of carnap [15], cohen, nagel [16], suppes, zinnes [7] and ellis [1]. the logical foundations of measurement have been involved in the studies of the theory of dimensions by such writers as bridgman [17], wallot [18], stille [19] and others. the principal problems that have motivated the modern development of the study of the logical foundations of measurement have been above all the need to develop the methods of measurement in psychology, with which the classical approach of helmholz could not deal, since it was based on the possibility of additive combination of quantities, which cannot be achieved in the case of say psychophysical responses to stimuli. the work of stevens [20], thurstone [21], torgerson [22], suppes and zinnes [7] among others, have in dealing with psychological problems extended the frontiers of our understanding of measurement against, often, the conservative opposition of physical scientists. another area where the need for quantification is growing is that of econometrics and utility theory. this has led among others to the work on measurement theory of pfanzagl [2]. this brief historical note is not intended to be comprehensive or to evaluate critically the individual contributions of the various workers but merely to provide a sketch of the antiquity of the problem, and the wide circles of workers concerned with it, to give credit to some of the principal contributors and to mention a few key publications. 4. epistemology of measurement and the technical sciences in spite of the importance of measurement in technology logical analysis of measurement has received virtually no treatment in the literature of technical measurement. discussions of the subject by engineers are rare. the view is often held that the problems of epistemology of measurement are of no practical interest in technology since the nature of technical measurement is self evident and satisfactorily understood. it is important to assert the opposite in this survey. in automation of industrial processes one of the outstanding practical problems is the monitoring by automatic means of complex product properties, such as flavour of foods or appearance of a decorative surface. it is by no means obvious how one can express such qualities in numbers. the practical solution of the problem depends upon the proper application of the epistemology of measurement. 
again, in the design of engineering systems, especially in technical cybernetics, it is commonly necessary to formulate performance criteria which quantify various system characteristics and combine these quantities with different weightings so as to form a measure of the total utility of the design. a sound approach to this neglected problem requires an understanding of the fundamental nature of quantification. finally, in the problem of error analysis, and especially in the bayesian approach, the crucial question is the determination of measures which express correctly the unsatisfactoriness of erroneous estimates. this again requires an understanding of the philosophy of measurement.

5. definition of measurement
measurement is the assignment of numbers to entities in such a way as to describe them. i propose to define a number, in the sense in which i am going to use the word, as an element $n$ of a set of symbols $N$ which has a structure, or set of relations, defined on it. consider a set of extra-mathematical entities $Q$ and a set of numbers $N$. measurement is an operation which assigns to an element $q_i \in Q$ an element $n_i \in N$, such that the relations between different elements $n_i, n_j \in N$ are isomorphic to the empirical relations between the corresponding elements $q_i, q_j$ of $Q$.

6. conceptual basis
measurement presupposes something to be measured. both in the historical development and in the logical structure of scientific knowledge, measurement requires a clear operational defining concept of the class of entities described by it, that is, a rule for determining whether an entity is a member of the class. unless there exists such a clear concept, we cannot start to measure. as precise knowledge derived from measurement accumulates, this usually leads to a clearer and more satisfactory concept definition. the process is iterative and the development is an ascending spiral. in some cases the concept of an entity arises from numerical laws arrived at by measurement, and the entity is best thought of in such mathematical terms, but in general one attempts to arrive at some qualitative conceptual framework for it, if possible. one of the principal problems of scientific method is to ensure that the scale of measurement established for a class of entities yields measures which, in all contexts, describe the entity in a manner which corresponds to the underlying concept of the class.

7. direct measurement
direct measurement is the process of measuring an entity by comparison with entities of the same class and without reference to the measurement of any other class of entities. it relies on the establishment, on the class of entities $Q$ upon which a scale of measurement is to be defined, of a system of empirical relations $R$ which have a formal similarity to relations among members of the class of numbers $N$. it is proposed to explain the process of direct measurement by examples. the simplest form of $R$ is $R = \langle Q, \sim \rangle$, in which we establish on $Q$ an equivalence relation $\sim$. an equivalence relation is one which is reflexive ($q \sim q$), symmetrical (if $q_1 \sim q_2$ then $q_2 \sim q_1$) and transitive (if $q_1 \sim q_2$ and $q_2 \sim q_3$ then $q_1 \sim q_3$). an equivalence relation has the formal properties of equality. given $R = \langle Q, \sim \rangle$, a set of differing entities $s_i \in Q$ may be selected to form a standard set $S = \{s_1, s_2, \ldots, s_k\}$. numerals or other symbols $n_i \in N$ are assigned to each $s_i$, the same symbol not being assigned to two different standards.
in measurement, entities $q \in Q$ are compared with elements of $S$, and those which bear the relation $\sim$ to a standard are assigned the same symbol as the standard. this assignment constitutes nominal measurement. an example of such measurement is the use of a colour chart based on the empirical relation of "colour matching". class inclusion is an equivalence relation. consider that $Q$ can be divided by empirical operations into $n$ classes $Q_i \subset Q$ such that $\bigcup_{i=1}^{n} Q_i = Q$ and $Q_i \cap Q_j = \emptyset$ if $i \neq j$, and each subset $Q_i$ is assigned a unique number, say "$i$". then, if we can determine by an empirical operation that any element $q \in Q$ belongs to a particular subset $Q_i$, it is assigned the characteristic symbol or number "$i$" of the subset. the subset $Q_i$ may itself often be divided into further subsets $Q_{i,j} \subset Q_i$, with $\bigcup_{j=1}^{n_i} Q_{i,j} = Q_i$ and $Q_{i,j} \cap Q_{i,k} = \emptyset$ if $j \neq k$. the subsets $Q_{i,j}$ are assigned unique numbers, say "$ij$". an element for which we establish the class inclusion relation $q \in Q_{i,j}$ is assigned the symbol "$ij$", which denotes its membership of $Q_i$ and $Q_{i,j}$. this can be continued to an even finer subdivision. classificatory schemes of this kind, such as decimal library classifications, constitute a form of nominal measurement. it is often disputed whether the processes just described can in fact be called measurement. measures on a nominal scale merely describe whether two entities are identical or different. it is immaterial whether we term nominal measurement proper measurement or not, provided we recognise its similarity to more sophisticated forms of quantification. it is very different from naming, which is labelling without description: two individuals may have the same name without being in any way similar, but two entities with the same nominal measure are related.

more elaborate than the nominal is the ordinal scale, based on the establishment on $Q$ of an empirical order system $R = \langle Q, \sim, \prec, \succ \rangle$. $\sim$ is an equivalence relation. the set of relations is complementary, that is, for any $q_1, q_2 \in Q$ one and only one of the following must hold: $q_1 \prec q_2$, $q_1 \sim q_2$ or $q_1 \succ q_2$. $\prec$ and $\succ$ are converse (that is, if $q_1 \prec q_2$ then $q_2 \succ q_1$), irreflexive (that is, we cannot have $q \prec q$), transitive and asymmetrical (that is, if $q_1 \prec q_2$ then not $q_2 \prec q_1$). the relation system $R = \langle Q, \sim, \prec, \succ \rangle$ enables entities of class $Q$ to be arranged in an ordered series. entities that can be so arranged may be called quantities. we can now select a number of differing standard entities $s_i \in Q$ and arrange them in an ordered standard series $S = \{s_1, s_2, \ldots, s_k\}$. numerals are assigned to each $s_i$, say $i$, in such a way that the order of the numerals corresponds to the order in the standard series of the standards to which they are assigned. any entity $q \in Q$ can then be measured by comparing it with the elements of $S$ in the same way as in nominal measurement. if $q$ bears the relation $\sim$ to any element $s_i \in S$, it is assigned the numeral of $s_i$. if an entity is not equivalent to any of the standards, it is possible to determine between which two standards the measured entity would lie in the standard series; it is then assigned a numeral lying in between those of the two standard entities. the best example of an ordinal scale of measurement is the mohs scale of hardness of minerals. neither nominal nor ordinal measurement as described here constitutes an entirely satisfactory measurement. in general, measurement scales are designed to define a distance concept on the class of entities to be measured.
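before turning to scales with a distance concept, a toy sketch of ordinal measurement in the spirit of the mohs series may help; the standard entities, their order and the comparison values are all hypothetical:

```python
# ordinal measurement against an ordered standard series. the numeric
# "hardness" values exist only to simulate the empirical order
# operation (e.g. "a scratches b"); only their order is used.
standards = {1: "talc", 2: "gypsum", 3: "calcite", 4: "fluorite"}
hardness = {"talc": 1.0, "gypsum": 2.0, "calcite": 3.0, "fluorite": 4.0}

def ordinal_measure(q_hardness):
    """assign a numeral by comparison with the ordered standard series."""
    below = [i for i, s in standards.items() if hardness[s] < q_hardness]
    above = [i for i, s in standards.items() if hardness[s] > q_hardness]
    equal = [i for i, s in standards.items() if hardness[s] == q_hardness]
    if equal:                      # equivalent to a standard
        return equal[0]
    if below and above:            # lies between two standards
        return (max(below) + min(above)) / 2
    return None                    # outside the standard series

print(ordinal_measure(3.0))    # -> 3
print(ordinal_measure(2.5))    # -> 2.5 (between gypsum and calcite)
```

note that the assigned numerals carry order only: the difference 2.5 - 2 has no empirical meaning, which is exactly the deficiency the distance concept discussed next is meant to remove.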
the analysis and axiomatisation of the problem of defining such a distance concept has received much attention in the literature; those interested are referred to the works listed in the bibliography. it is proposed here to outline the classical approach originating from the work of helmholtz. basically this depends upon defining on the class $Q$, in addition to an order relation system $R = \langle Q, \sim, \prec, \succ \rangle$, a binary operation $c$ which combines two entities of the class to form a third entity of the class:

$$c(q_1, q_2) = q_3 , \qquad q_3 \succ q_1 ,\; q_3 \succ q_2 .$$

the combination must be commutative,

$$c(q_1, q_2) = c(q_2, q_1) ,$$

and associative,

$$c(q_1, c(q_2, q_3)) = c(c(q_1, q_2), q_3) .$$

these are the formal properties of addition for the class of real numbers. for two masses, for example, rigid connection is a form of combination which meets the requirements specified above, and the equal-arm balance is the means by which the relations 'greater than', 'less than' and 'equal to' are established. when such a method of combination has been found, a single entity $s_1 \in Q$ is chosen as standard and assigned the number 1. another entity $s_1' \in Q$ such that $s_1' \sim s_1$ is then sought, and $c(s_1, s_1') = s_2$ is assigned the number 2. we then proceed to form $c(s_2, s_1) = s_3$ and assign it the number 3, and so on. fractional standards are generated by making or seeking two entities $s_{1/2}, s_{1/2}' \in Q$ such that $s_{1/2} \sim s_{1/2}'$ and $c(s_{1/2}, s_{1/2}') \sim s_1$, and assigning to $s_{1/2}$ the number 1/2. thus we generate an extended set of standards $S = \{\ldots, s_{1/2}, s_1, s_2, \ldots\}$. any quantity $q \in Q$ can then be measured by seeking the element $s_i$ to which it bears the relation $\sim$; $q$ is assigned the number corresponding to $s_i$. thus we can see that, using the above procedure, we have established a process of assigning numbers to entities in such a way that the numbers tell something of the extent by which one entity differs from another. the above argument has eschewed rigour for the sake of simplicity, but the procedures described can be rigorously axiomatised. scales based on additive combination represent the classical view of fundamental measurement. they give an accurate and penetrating insight into measurement in physics, but in many areas of science they cannot be applied. the principal advance of the modern theory of measurement has been to show that other methods of arriving at direct scales, establishing a distance concept on a class of extra-mathematical entities with the same properties as those described above, are equally valid. one way, for instance, of establishing a scale is by the process of assigning to any pair of psychophysical stimuli a third stimulus judged to be equidistant from them. the essence of all direct measurement scales is the establishment of an isomorphism between empirical relations on the class of measurands and relations on a class of numbers, and they are not confined to particular isomorphisms.
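the following toy sketch illustrates the helmholtz construction for mass, with rigid connection as the combination operation and an idealised balance as the order relation; all numeric values are hypothetical, and the two empirical operations are stand-ins:

```python
# helmholtz-style fundamental scale: build a standard series by
# repeated combination, then measure by equivalence with a standard.

def combine(a, b):
    """rigid connection of two bodies (idealisation: masses add)."""
    return a + b

def balance(a, b, tol=1e-9):
    """equal-arm balance: -1 (a lighter), 0 (equivalent), +1 (heavier)."""
    return 0 if abs(a - b) < tol else (-1 if a < b else 1)

def standard_series(s1, k):
    """generate standards s1, s2 = c(s1, s1'), s3 = c(s2, s1), ..."""
    series = {1: s1}
    for i in range(2, k + 1):
        series[i] = combine(series[i - 1], s1)
    return series

def measure(q, series):
    """assign the number of the standard equivalent to q, if any."""
    for number, s in series.items():
        if balance(q, s) == 0:
            return number
    return None

series = standard_series(0.5, 10)    # unit standard: a 0.5 kg body
print(measure(2.0, series))          # -> 4
```

the point of the sketch is that the numbers are not read off an instrument: they emerge from the empirical operations of combination and comparison alone, which is what distinguishes fundamental from indirect measurement.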
8. indirect measurement
while a wide variety of conceptual entities are capable of direct measurement, there are others which are not. these must be measured by scales which rely upon relations which the entities to be measured bear to other measurable quantities. this is termed indirect measurement. two forms of indirect measurement can be distinguished. the first of these is derived measurement. consider a case in which all systems $S$ associated with a member of the class of entities $Q$, for which we are to define a scale, are also associated with quantities $x, y, z, \ldots$ which can be measured. let there be a numerical law $p(x, y, z) = k_q$ in which $k_q$ is a constant. let us further suppose that whenever the systems $S$ are arranged in the order of $q$, they are also arranged in the order of $k_q$. then $k_q$ can be taken to be a derived measure of $q$. density is an example of a physical quantity measured by a derived scale. for all objects of one material, the ratio $\rho$ of mass to volume of the object is a constant. whenever objects are arranged in order of density as qualitatively defined, they are also arranged in order of $\rho$, which is thus a measure of density. the other form of indirect measurement is associative. consider that all systems $S$ associated with a member of class $Q$ are also associated with some other measurable quantity $x$. further suppose that whenever the systems $S$ are ordered according to $q$, they are also arranged in the order of $x$. we can then define $f(x)$ as an associative measure of $q$, where $f(\cdot)$ is an arbitrary, single-valued, monotonic function. all practical temperature scales are of the associative kind.

9. scales of measurement
it should be clear from what has been said that it is not adequate in measurement merely to speak of units; one should speak of scales of measurement, that is, of the triplet $\langle Q, M, N \rangle$, where $Q$ is a class of entities, $N$ is a class of numbers and $M$ is a procedure relating members of $Q$ to members of $N$.

10. scale classification
forms of scale can be classified according to the mathematical transformations which leave the scale invariant. thus, if we transform the number $n$ on the scale to a number $n'$, we have:

permissible transformation | scale type
$n' = f(n)$, $f$ any one-to-one substitution | nominal
$n' = f(n)$, $f$ any monotonic function | ordinal
$n' = a n + b$, $a \neq 0$ | interval
$n' = a n$, $a \neq 0$ | ratio

a permissible transformation is one which only changes that which has been arbitrarily chosen. thus the scale of mass is a ratio scale, because the only thing arbitrarily chosen is the unit mass. similarly, associative scales are generally ordinal.

11. measure and concept
the point has already been made that the measure of an entity must constitute a description which corresponds to the defining concept of the entity. thus there must be isomorphism between any empirical relations among quantities measured and corresponding relations between their measures. one basic test of this correspondence can be proposed: if, whenever systems associated with an entity of class $q$ are placed in an order according to our quantitative concept of $q$, the order is the same as that in which the systems would be arranged by the order of some measure of $q$, the measure can be considered satisfactory.

12. meaningfulness
the problem of the meaningfulness of statements made about measures of an entity is important. a statement is meaningful if its truth is unchanged by a permissible transformation of the scales of measurement. we say that a k-ary relation $s$ is meaningful if

$$s(m(q_1), \ldots, m(q_k)) = s(m'(q_1), \ldots, m'(q_k)) ,$$

where $m \to m'$ is a permissible transformation. thus, as a very simple example, it is meaningful to speak of the ratio of two masses, since that ratio is invariant with respect to changes of the unit of mass, but it is not meaningful to speak of the ratio of two hardnesses measured on the mohs scale, since that ratio would be changed by a monotonic transformation of the scale. another view is that only such statements are meaningful which can be logically traced to the empirical operations on which the measurement is founded.
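the mass-versus-mohs example can be checked empirically with a short sketch; the transformations and values below are illustrative, and the test demonstrates (rather than proves) invariance:

```python
# meaningfulness test: a statement about measures should keep its truth
# value under every permissible transformation of the scale.
def invariant_under(statement, values, transformations):
    base = statement(values)
    return all(statement([t(v) for v in values]) == base
               for t in transformations)

ratio_is_two = lambda v: abs(v[0] / v[1] - 2.0) < 1e-9

# ratio scale (mass): permissible maps are n' = a n, a != 0
unit_changes = [lambda n: 0.453592 * n, lambda n: 1000.0 * n]
print(invariant_under(ratio_is_two, [6.0, 3.0], unit_changes))    # True

# ordinal scale (mohs): any monotonic map is permissible
monotonic_maps = [lambda n: n ** 3, lambda n: n + 5.0]
print(invariant_under(ratio_is_two, [6.0, 3.0], monotonic_maps))  # False
```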
13. systems of scales in physics: dimensions
there is much that is arbitrary in the choice of scale forms. the guiding principle is that scales are so chosen as to result in the greatest simplicity of the resulting mathematical descriptions of the laws of nature; hence, for instance, the attempt to make the practical temperature scale coincide with the thermodynamic temperature scale. in physics we attempt to reduce to a minimum the number of quantities termed primary, the scales of which are independently defined. scales for the measurement of other quantities are termed secondary, and they are obtained as scales derived from the primary quantities by a chain of simple proportionality laws [1, 19]. two points need to be made. first, there is nothing intrinsically fundamental in the primary quantities mass, length, time, current and luminous intensity; they are merely conventionally chosen for reasons of the practical convenience of the definition of their scales. secondly, the dimension of a quantity in no way embodies any metaphysical essence. it merely denotes the way in which the conventionally chosen scale of the quantity relates to the conventionally chosen scales of the primary quantities, and could conceivably be altered by a different choice of scales.

14. multidimensional quantities
certain entities cannot be described by a single measure, but require an array of numbers for their specification:

$$m(q) = \mathbf{n} = [n_1, n_2, \ldots, n_k] .$$

such measures are known as vector or multidimensional. thus force is described by three numbers: a magnitude and two direction angles, or three components. patterns may be described by an array of numbers specifying features. $\mathbf{n}$ may be viewed as specifying a point in a vector space, and we may seek to define a metric for the vector space, that is, a relation $d$ such that for any three vectors $\mathbf{n}_1, \mathbf{n}_2, \mathbf{n}_3$ we have

$$d(\mathbf{n}_1, \mathbf{n}_1) = 0 ;\quad d(\mathbf{n}_1, \mathbf{n}_2) = d(\mathbf{n}_2, \mathbf{n}_1) > 0 \text{ if } \mathbf{n}_1 \neq \mathbf{n}_2 ;\quad d(\mathbf{n}_1, \mathbf{n}_2) + d(\mathbf{n}_2, \mathbf{n}_3) \geq d(\mathbf{n}_1, \mathbf{n}_3) .$$

$d(\cdot)$ represents a measure of distance in the vector space. the problem of representing some aspects of $\mathbf{n}$ by a single real number is important, especially in decision and optimisation theory, where we wish to place multidimensional utilities in some order. a typical method is to determine

$$m^2 = \mathbf{n}\, A\, \mathbf{n}^{\mathrm{T}} ,$$

where $A$ is a positive definite weighting matrix. it is important to establish logically what correspondence exists between $m$ and the entity $q$ measured by $\mathbf{n}$. multidimensional measures offer many still unresolved problems.

15. measurement and uncertainty
the essential assumption underlying the logic of measurement is that the operation $m$ forms a single image of $q$ only, and that $m$ is invariant. however, any practical operation defining a scale represents a probabilistic and not a deterministic transformation. $m$ is associated with a probability distribution, which must be specified with it. this subject has not been adequately discussed in the literature.

16. purpose and uses of measurement
while measurement is generally recognised to be the foundation of science, in the context of a review of its fundamental concepts there is a need to analyse the purpose and uses of measurement critically. the advantages of measurement as a form of description can be summarised as follows. firstly, measurement represents a description of an entity which is concise, telling us in a single number information which would otherwise need many words.
measurement is also a description which is precise, pinpointing by a single number a particular entity where the same verbal description indicates a range of similar but differing things. measurement is objective rather than subjective; that is, it constitutes a description invariant with respect to the observer making it. the language of measurement is universal and commonly understood, though this demands much effort in the establishment and maintenance of good and generally accepted scales and standards. quantitative description involves an ability to make distinctions and to describe relations among entities of the same kind. in particular, the ability to rank objects in an order according to some measure of a relevant attribute, say size, cost or efficiency, enables us to make decisions about them in a logical and systematic manner. a measure of an entity gives us the ability to express facts and conventions about it in the formal language of mathematics. without the convenient notation of this language, the complex chains of induction and deduction by which we describe and explain the universe would be too cumbersome to express; we could not make our thoughts clear either to ourselves or to others. it follows from what has been said that the description of entities by numbers is not good in itself. the only value of measurement lies in the use to which the information is put. science is not just the amassing of numerical data; it depends upon the way in which the data are analysed and organised. finally, in relation to technical cybernetics, the essential feature of measurement is that it enables the measurand to be expressed in signals which can be handled by machines. qualitative information is not machine intelligible.

17. conclusions
the basic conclusions of the present review can be simply stated. the basic concepts of measurement form an essential foundation stone for the erection of a sound measurement science. they now have a sound logical basis which is compatible with the methods of information science. their understanding is essential for the solution of a range of important practical engineering problems.

references
[1] b. ellis, basic concepts of measurement, cambridge university press, 1966.
[2] j. pfanzagl, theory of measurement, physica-verlag, wurzburg-vienna, 1968.
[3] c. w. churchman, p. ratoosh, measurement, definitions and theories, wiley, new york, 1959.
[4] akad. nauk ukrain. s.s.r., inst. fil., gnosseogicheskiye aspekty izmereniyi, naukova dumka, kiev, 1968.
[5] akad. nauk ukrain. s.s.r., inst. fil., metodologicheskiye problemy teorii izmereniyi, naukova dumka, kiev, 1966.
[6] o. a. melnikov, o roli izmereniyi v processe poznaniya, nauka, novosibirsk, 1968.
[7] p. suppes, j. l. zinnes, basic measurement theory, in: handbook of mathematical psychology, john wiley, 1963.
[8] l. v. helmholtz, zaehlen und messen, erkenntnisstheoretisch betrachtet, in: philosophische aufsaetze eduard zeller gewidmet, leipzig, 1887.
[9] n. r. campbell, physics: the elements, cambridge, 1920.
[10] n. r. campbell, an account of the principles of measurement and calculation, longmans and green, london, 1928.
[11] b. russell, the principles of mathematics, 2nd edition, new york, 1937.
[12] g. frege, grundlagen der arithmetik, 1884.
[13] o. hoelder, die axiome der quantitaet und die lehre vom mass, berichte ueber die verhandlungen der koeniglich saechsischen gesellschaft der wissenschaften zu leipzig (math.-phys. klasse), 53, pp. 1–64.
[14] n. wiener, a new theory of measurement: a study in the logic of mathematics, proc. of lond. math. soc., ser. 2, 19, pp. 181–205.
[15] r. carnap, foundations of logic and measurement, international encyclopedia of unified science, vol. 1, university of chicago press, chicago.
[16] m. r. cohen, e. nagel, an introduction to logic and scientific method, harcourt, new york, 1934.
[17] p. w. bridgman, dimensional analysis, yale university press, new haven, 1931.
[18] j. wallot, groessengleichungen, einheiten und dimensionen, leipzig, 1953.
[19] u. stille, messen und rechnen in der physik, vieweg, braunschweig, 1961.
[20] s. s. stevens, mathematics, measurement and psychophysics, in: s. s. stevens (ed.), handbook of experimental psychology, wiley, new york, 1951.
[21] l. l. thurstone, the indifference function, j. soc. psychol., 2, 1931, pp. 139–167.
[22] w. s. torgerson, theory and methods of scaling, wiley, new york, 1958.

acta imeko, issn: 2221-870x, december 2015, volume 4, number 4, 16-19

mathematical modeling of an altimeter
irina kislitsyna 1, galina malykhina 2
1 russian state scientific center for robotics and technical cybernetics, saint-petersburg, tikhoretsky prospect 21, russian federation
2 peter the great saint-petersburg polytechnic university, saint-petersburg, polytechnicheskaya 29, russian federation

section: research paper
keywords: photon altimeter; lunar soil; gamma rays
citation: irina kislitsyna, galina malykhina, mathematical modelling of an altimeter, acta imeko, vol. 4, no. 4, article 5, december 2015, identifier: imeko-acta-04 (2015)-04-05
section editor: franco pavese, torino, italy
received: march 18, 2015; in final form june 15, 2015; published december 2015
copyright: © 2015 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: galina f. malykhina, e-mail: g_f_malychina@mail.ru

abstract
the aim of this paper is to simulate a photon altimeter designed for a soft landing on the lunar surface. a simulation of the process of gamma-ray scattering from the lunar surface with a typical composition of the lunar soil was implemented.

1. introduction
a photon altimeter is designed for measuring the distance between a lander and the underlying surface. the altimeter should measure the current height of the lander over the lunar surface in the range from 0.3 m to 10 m at a descent rate between 6.5 m/s and 11 m/s, providing a random component of the relative error of less than 3 %. for this purpose a photonic altimeter is developed, which uses the effect of scattering of gamma radiation from the lunar surface. the conditions of a landing on the moon differ substantially from those on earth owing to the lack of a lunar atmosphere, the high background radiation and the soil composition of the lunar surface. altimeters exist for different frequency ranges, such as radio-wave, infrared and optical altimeters, as well as x-ray and photon altimeters; optical altimeters were developed for optical navigation systems for lunar landing [1], [2]. we develop an altimeter based on the effect of photon scattering, which has the following advantages: the ability to perform measurements through the engine plasma, insensitivity to the layer of dust on the surface of the planet, and high measurement accuracy at low altitudes. the amount of scattered gamma radiation depends on the type of the underlying surface of the moon [3].
the lunar surface consists of loose material called regolith: fragments of bedrock and secondary particles. the material was formed as a result of continuous meteoritic impact and bombardment by atomic particles over billions of years. the average depth of the regolith that covers the entire surface of the moon ranges from 4 to 5 m in the lunar seas up to 10 to 15 m on the continents. the chemical composition of the regolith depends on the composition of the rocks lying below, but it also contains other substances and minerals. the main minerals of lunar rocks are plagioclase (a solid solution of albite and anorthite, naalsi3o8-caal2si2o8), orthopyroxene ((mg,fe)sio3), clinopyroxene ((ca,mg,fe)sio3), olivine ((mg,fe)2sio4), ilmenite (fetio3) and spinel-group minerals (fecr2o4, fe2tio4, feal2o4) [4]. lunar seas are volcanic plains that fill cavities in the topography of the continents. the predominant types of lunar sea rocks are marine basalts. lunar seas are more suitable for the landing of a spacecraft.

2. intensity of scattered photons
the altimeter uses the effect of gamma-ray scattering, in other words the compton effect. the differential cross-section for compton scattering per electron and per unit solid angle is given by the klein-nishina-tamm formula:

$$\frac{d\sigma}{d\Omega} = \frac{r_e^2}{2}\,\frac{1+\cos^2\theta}{\left[1+\varepsilon(1-\cos\theta)\right]^2}\left[1+\frac{\varepsilon^2(1-\cos\theta)^2}{(1+\cos^2\theta)\left[1+\varepsilon(1-\cos\theta)\right]}\right] , \qquad (1)$$

where $\theta$ is the scattering angle and $r_e$ is the classical electron radius, $r_e = a\,\hbar/(m_e c) = 2.8179 \times 10^{-15}$ m; here $a = 1/137.04$ is the fine-structure constant, $\hbar/(m_e c)$ is the reduced compton wavelength, $m_e$ is the electron mass, $c$ is the speed of light, $e = 660$ kev is the energy of the primary radiation, and $\varepsilon = e/(m_e c^2) = 1.29$ is the ratio of the photon energy to the electron rest energy at $e = 660$ kev.

for a mixture of substances on the lunar surface, the effective atomic number $z_{\mathrm{eff}}$ of the underlying soil can be calculated by the following formula:

$$z_{\mathrm{eff}} = \left(\frac{\sum_{i=1}^{m} \eta_i z_i^4}{\sum_{i=1}^{m} \eta_i z_i}\right)^{1/3} , \qquad (2)$$

where $\eta_i$ is the relative quantity of substance $i$ having atomic number $z_i$, and $m$ is the number of components of the underlying surface. the electron density of the underlying surface is

$$n_e = n_a \sum_{i=1}^{6} \eta_i \frac{z_i}{a_i} , \qquad (3)$$

where $n_a$ is avogadro's number and $a_i$ is the atomic mass of substance $i$. as noted above, basalt is the dominant rock type of the lunar seas; the composition of the ground shown in table 1 was used in the model as an example. the compton cross-section can then be calculated as $\sigma_0 = 6.54 \times 10^{-25}\, n_e\, z_{\mathrm{eff}}$. the intensity $i$ of the scattered photon radiation at a distance $r$ from the scattering electron and scattering angle $\theta$ is given by

$$i = i_0\,\frac{3\,\sigma_0}{16\,\pi\, r^2}\,\frac{1+\cos^2\theta}{\left[1+\varepsilon(1-\cos\theta)\right]^2}\left[1+\frac{\varepsilon^2(1-\cos\theta)^2}{(1+\cos^2\theta)\left[1+\varepsilon(1-\cos\theta)\right]}\right] , \qquad (4)$$

where $i_0$ is the primary radiation. calculations by (1) to (4) showed that the intensity of the scattered radiation decreases with increasing scattering angle; in the range of angles of interest, from 90° to 180°, the intensity $i$ of a gamma ray of 660 kev does not change dramatically, and it is therefore advisable to work in this range.
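a short numerical sketch of the angular behaviour of eqs. (1) and (4) follows, using the constants quoted in the text; the cross-section and geometry factors are kept in relative units:

```python
import numpy as np

EPS = 660.0 / 511.0        # epsilon = e / (m_e c^2), about 1.29

def angular_factor(theta):
    """common klein-nishina angular factor of eqs. (1) and (4)."""
    c = np.cos(theta)
    p = 1.0 / (1.0 + EPS * (1.0 - c))     # also e'/e of eq. (5)
    return p**2 * (1.0 + c**2) + p**3 * EPS**2 * (1.0 - c)**2

def scattered_intensity(theta, r, i0=1.0, sigma0=6.54e-25):
    """scattered intensity at distance r and angle theta, eq. (4)."""
    return i0 * 3.0 * sigma0 / (16.0 * np.pi * r**2) * angular_factor(theta)

# the angular factor varies only mildly between 90 and 180 degrees,
# and the scattered energy e' = e * p drops to roughly 200 kev
for deg in (90, 120, 150, 180):
    t = np.radians(deg)
    e_prime = 660.0 / (1.0 + EPS * (1.0 - np.cos(t)))
    print(deg, angular_factor(t), e_prime)
```

running the loop reproduces the two statements made in the text: the intensity changes by less than a factor of about 1.4 over the working range, and the backscattered photon energy approaches 200 kev.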
the energy of the scattered photons depends on the scattering angle in accordance with the relationship

$$e' = \frac{e}{1+\dfrac{e}{m_e c^2}\,(1-\cos\theta)} . \qquad (5)$$

the photon energy decreases with increasing scattering angle: if the value of the scattering angle is in the range [90°, 180°], the energy of the scattered gamma rays is reduced to about 200 kev. in this working range the energy of the scattered photons depends only weakly on the source energy. the geometry of the gamma-radiation source and of the gamma-radiation detectors is shown in figure 1. the photon radiation source (s) is located in the centre of the measuring system, and four detectors (d1-d4) are placed at a distance $l$ from the source. the lander may be tilted relative to the underlying surface by a slope angle $\gamma$ of the axis formed by a pair of detectors. let $h$ be the current height of the landing apparatus, $\alpha$ the angle of incidence of the direct gamma-ray beam, $\varphi$ the angle between the projection of the axis of the detectors d1 and d2 and the line passing through the point of incidence, and $\theta$ the scattering angle of the photons recorded by detector d1. the square of the distance between the scattering element and the detector can be calculated by

$$r^2 = h^2\,\mathrm{tg}^2\alpha - 2\,h\,l\,\mathrm{tg}\,\alpha\,\cos\gamma\,\cos\varphi + l^2\cos^2\gamma + \left(h + l\sin\gamma\right)^2 . \qquad (6)$$

the cosine of the angle $\psi$ between the scattered ray and the detector axis amounts to

$$\cos\psi = \frac{l + h\sin\gamma - h\,\mathrm{tg}\,\alpha\,\cos\gamma\,\cos\varphi}{r} , \qquad (7)$$

and the cosine of the scattering angle is

$$\cos\theta = \frac{l\,\mathrm{tg}\,\alpha\,\cos\gamma\,\cos\varphi - l\sin\gamma - h\left(1+\mathrm{tg}^2\alpha\right)}{r\,\sqrt{1+\mathrm{tg}^2\alpha}} . \qquad (8)$$

the detector registers photons scattered by the lunar surface. the position of the scattering element may be characterized by the angle $\alpha$ of the direct ray and the angle $\varphi$ of the position of the scattering element. the range of the angle $\alpha$ depends on the angle of collimation $\alpha_{\max}$, with $0 \le \alpha \le \alpha_{\max}$; the range of the angle $\varphi$ is $[-\pi, \pi]$. the distance between the centre and the scattering element is defined by $\rho = h\,\mathrm{tg}\,\alpha$.

table 1. composition of the lunar surface ground.
element | percentage $\eta_i$ | atomic number $z_i$ | atomic mass $a_i$
si | 20.4 | 14 | 28
o | 41.3 | 8 | 16
fe | 13.2 | 26 | 56
ca | 0.79 | 20 | 40
al | 0.68 | 13 | 27
mg | 0.58 | 12 | 24

figure 1. location of the source and the four detectors.

the intensity of the photons detected by d1 can be obtained by integration over the scattering volume, depending on $\alpha$, $\varphi$ and the depth:

$$i_{\mathrm{detect}} = \iiint i(\alpha, \varphi, \mathit{depth}, h, l, r)\; e^{-\mu(e)\,\rho_0\,\mathit{depth}}\; \mathrm{d}\alpha\; \mathrm{d}\varphi\; \mathrm{d}(\mathit{depth}) , \qquad (9)$$

where $\mu(e)$ is the coefficient of mass absorption, depending on the photon energy, and $\rho_0$ is the density of the lunar soil. the intensity of the photons scattered by an element situated at distance $\rho$, depth $\mathit{depth}$ and angle $\varphi$, as detected by d1, is defined by the following equation:

$$i(\alpha, \varphi, \mathit{depth}, h, l, r) = i_0\,\frac{3\,\sigma_0}{16\,\pi\, r^2}\,\frac{1+\cos^2\theta}{\left[1+\varepsilon(1-\cos\theta)\right]^2}\left[1+\frac{\varepsilon^2(1-\cos\theta)^2}{(1+\cos^2\theta)\left[1+\varepsilon(1-\cos\theta)\right]}\right] . \qquad (10)$$

detectors d1 to d4 receive the gamma rays scattered from the lunar surface in accordance with the angle of collimation of the radiation source. the detectors register the integral of the intensity of the scattered gamma rays over the volume of interaction with the lunar soil. it is useful to describe the effect of the registration as the ratio of the primary and scattered fluxes of gamma rays.
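the same geometry can be written compactly in vector form; the sketch below is consistent with eqs. (6)-(8) as reconstructed above, and the example values for h, l, alpha, phi and gamma are hypothetical:

```python
import numpy as np

# source-surface-detector geometry of figure 1 (angles in radians,
# lengths in metres); the source s sits at the origin, the surface
# at z = -h, and detector d1 at distance l along the tilted axis.
def geometry(h, l, alpha, phi, gamma):
    """return (r, cos_theta) for one surface scattering element."""
    p = np.array([h * np.tan(alpha) * np.cos(phi),   # scattering point
                  h * np.tan(alpha) * np.sin(phi),
                  -h])
    d1 = np.array([l * np.cos(gamma), 0.0, l * np.sin(gamma)])
    out = d1 - p                                     # scattered ray
    r = np.linalg.norm(out)
    cos_theta = (p @ out) / (np.linalg.norm(p) * r)  # incident . out
    return r, cos_theta

r, ct = geometry(h=5.0, l=0.3, alpha=np.radians(10.0),
                 phi=0.0, gamma=np.radians(2.0))
print(r, np.degrees(np.arccos(ct)))   # backscatter: theta > 90 degrees
```

the vector form makes the sign conventions explicit: for any element inside the collimation cone the computed scattering angle lies above 90°, which is why the working range of the previous section was chosen as it was.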
the integral of the ratio $i/i_0$ depends on the angle $\gamma$, the depth of penetration of the gamma rays into the lunar soil and the height $h$ of the landing apparatus.

3. algorithm of modeling
the algorithm makes it possible to calculate the ratio $i/i_0$ of the registered intensity $i$ to the original intensity $i_0$. it consists of three nested loops for the triple integral computation and includes the following steps.

summary of the algorithm
1. initialize the parameters of the model: the cross-section for compton scattering $\sigma_0$, the ratio $\varepsilon$ of the photon energy to the electron rest energy, the mass absorption factor $\mu$, and the matter density $\rho_0$.
2. set the height, $h = 0$, $\Delta h = 0.10$, and for $i = 1, 2, \ldots$ compute the current $h = i\,\Delta h$.
3. set the angle of slope, $\gamma = 0$, $\Delta\gamma = \pi/32$, and for $i = 1, 2, \ldots$ compute the current $\gamma = i\,\Delta\gamma$.
4. set the penetration of the photons into the lunar material, and for $i = 1, 2, \ldots$ compute the current $\mathit{depth} = i\,\Delta(\mathit{depth})$.
5. set the radius of the gamma-ray beam, $\rho = 0$, $\Delta\rho = 0.10$, and for $i = 1, 2, \ldots$ compute the current $\rho = i\,\Delta\rho$.
6. set the angle of the gamma-ray beam, $\varphi = 0$, $\Delta\varphi = 0.10$, and for $i = 1, 2, \ldots$ compute the current $\varphi = i\,\Delta\varphi$.
7. compute the angle between the gamma-ray beam and the scattering element: $\alpha = \mathrm{arctg}\!\left(\dfrac{\rho}{h + \mathit{depth}}\right)$.
8. compute the distance between the detector and the scattering element: $r = \left[\rho^2 - 2\,\rho\,l\cos\gamma\cos\varphi + l^2\cos^2\gamma + \left(h + \mathit{depth} + l\sin\gamma\right)^2\right]^{1/2}$.
9. compute the cosine of the complementary scattering angle as in (8), with $h$ replaced by $h + \mathit{depth}$.
10. compute the elementary scattering volume: $\Delta v = \rho\,\Delta\rho\,\Delta\varphi\,\Delta(\mathit{depth})$.
11. accumulate the gamma intensity ratio: $\dfrac{i}{i_0} \leftarrow \dfrac{i}{i_0} + \dfrac{3\,\sigma_0}{16\,\pi\, r^2}\,\dfrac{1+\cos^2\theta}{\left[1+\varepsilon(1-\cos\theta)\right]^2}\left[1+\dfrac{\varepsilon^2(1-\cos\theta)^2}{(1+\cos^2\theta)\left[1+\varepsilon(1-\cos\theta)\right]}\right]\Delta v$.
12. compute the gamma intensity ratio subject to the absorption of the photons: $i \leftarrow i\, e^{-(\mu_1+\mu_2)\,\rho_0\,\mathit{depth}}$.

the mass absorption factor of photons with an energy of 660 kev in silicon equals $\mu_1 = 0.0802$ cm²/g; the mass absorption factor of the scattered gamma quanta with an energy of 200 kev in silicon equals $\mu_2 = 0.123$ cm²/g, $\rho_0$ being the density of the matter.

4. numerical experiment
the proposed model is implemented as a simulation program running in matlab. the dependences of the average intensity of the scattered gamma quanta detected by d1 on the height of the descending module are presented in figure 2. the gamma-ray flux noise is characterized by a poisson distribution; therefore the result of measuring the height and the angle is actually not a smooth function of the intensity, as shown in figure 2, and the noisy data registered by the four photon detectors should be averaged. figure 3 shows the dependences of the height on the intensity of the photon flux for different values of the angle. the measurement result is the average height over the four detector outputs.

figure 2. the ratio of the average intensity of scattered and initial gamma quanta versus the height of the descending module (curves for γ = 0°, 5.6°, 11.25°, 16.88° and 22.5°).
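the stepwise algorithm of section 3 can be transcribed compactly in python (the original program is in matlab). grid sizes, the detector offset l and the soil density are illustrative assumptions, and the electron density is folded into sigma0, so the result is in relative units:

```python
import numpy as np

MU1, MU2 = 0.0802, 0.123          # cm^2/g at 660 kev and 200 kev
RHO0 = 1.5                        # g/cm^3, assumed regolith density
ABSORB = (MU1 + MU2) * RHO0 * 100.0   # attenuation per metre of depth
EPS, SIGMA0 = 660.0 / 511.0, 6.54e-25

def angular_factor(cos_t):
    p = 1.0 / (1.0 + EPS * (1.0 - cos_t))
    return p**2 * (1.0 + cos_t**2) + p**3 * EPS**2 * (1.0 - cos_t)**2

def intensity_ratio(h, l=0.3, gamma=0.0, d_rho=0.05, d_phi=0.1,
                    d_depth=0.02, rho_max=3.0, depth_max=0.2):
    """triple sum approximating eq. (9); lengths in metres."""
    total = 0.0
    for rho in np.arange(d_rho, rho_max, d_rho):
        for phi in np.arange(-np.pi, np.pi, d_phi):
            for depth in np.arange(0.5 * d_depth, depth_max, d_depth):
                hz = h + depth                       # step 7 geometry
                p = np.array([rho * np.cos(phi),
                              rho * np.sin(phi), -hz])
                d1 = np.array([l * np.cos(gamma), 0.0,
                               l * np.sin(gamma)])
                out = d1 - p
                r2 = out @ out                       # step 8, r squared
                cos_t = (p @ out) / (np.linalg.norm(p) * np.sqrt(r2))
                dv = rho * d_rho * d_phi * d_depth   # step 10
                total += (3.0 * SIGMA0 / (16.0 * np.pi * r2)
                          * angular_factor(cos_t)    # step 11
                          * np.exp(-ABSORB * depth)  # step 12
                          * dv)
    return total

print(intensity_ratio(h=1.0) / intensity_ratio(h=5.0))  # stronger when low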
the height and slope angle can be measured more accurately when the height is small. this fact is an advantage of the photon altimeter. it is more difficult to measure height and slope angle at high-altitude of the descending module. the results of the altimeter model showed that the standard deviation of error of slope angle for low-altitude of the descending module is not greater than 0.02 rad. the standard deviation of the error of the slope angle for high-altitude of the descending module is less than 0.06 rad. 5. conclusions the proposed model is based on initial values:initial height,activity of photon source, topology of photon sources and detectors,composition of the ground. the developed model allows estimate effect of the topology of the measuring system, activity of the source, size and arrangement of the detectors, the composition of the ground. the model allows determine basic characteristics of altimeter with reasonable accuracy. the range of measurement of altitude, the effect of landing apparatus slope, the relative error of measurement may be estimated. the model submit for consideration capability of angle slope measurement. acknowledgement  the scientific research was supported by central research institute of robotics and technical cybernetics. authors are grateful for scientific and financial support. references [1] v. simard bilodeau, s. clerc, r. drai, j. de lafontaine, optical navigation system for pin-point lunar landing, preprints of the 19th world congress the international federation of automatic control cape town, south africa, august 24-29, 2014, pp. 1053510542. [2] xiangyu huang, dayi wang, yingzi he, yifeng guan, autonomous navigation and control for pin point lunar soft landing, 7th international esa conference on guidance, navigation & control systems 2-5 june 2008, tralee, county kerry, ireland. [3] e. i. yurevech, photon technique, saint-petersburg, press of the saint-petersburg polytechnic university, 2003, 235pp, (in russian). [4] v.p. legostaev and v.a. lopota moon-a step towards technology development of the solar system. / m.-rsc "energy", 2011. 684pp, (in russian). 0 1 2 3 4 5 6 7 8 -0.25 -0.2 -0.15 -0.1 -0.05 0 0.05 0.1 height h a n g le o f s lo p e g a m m a < 0 relative error = 0.061682 gamma=0 gamma=-pi/64 radian gamma=-pi/32 radian gamma=-3pi/64 radian gamma=-pi/16 radian figure 4. dependences of slope angle measured for different heights.  500 1000 1500 2000 2500 3000 3500 4000 4500 5000 5500 0 1 2 3 4 5 6 7 8 gamma intensity (i) h e ig h t h relative error = 0.010311 gamma=0 gamma=-pi/64 рад. gamma=-pi/32 рад. gamma=-3pi/64 рад. gamma=-pi/16 рад. figure  3.  dependences  of  the  height  on  the  intensity  of  photon  flux  for different values of the angle.  journal contacts acta imeko issn: 2221-870x june 2023, volume 12, number 2, 1 2 acta imeko | www.imeko.org june 2023 | volume 12 | number 2 | 1 journal contacts about the journal acta imeko is an e-journal reporting on the contributions on the state and progress of the science and technology of measurement. the articles are mainly based on presentations presented at imeko workshops, symposia and congresses. the journal is published by imeko, the international measurement confederation. the issn, the international identifier for serials, is 2221-870x. about imeko the international measurement confederation, imeko, is an international federation of actually 42 national member organisations individually concerned with the advancement of measurement technology. 
Techniques for on-board vibrational passenger comfort monitoring in public transport

ACTA IMEKO ISSN: 2221-870X December 2014, Volume 3, Number 4, 32-37

Ileana Bodini, Matteo Lancini, Simone Pasinetti, David Vetturi
University of Brescia – Department of Mechanical and Industrial Engineering, via Branze 38, 25123 Brescia, Italy

Section: Research paper

Keywords: vibrational on-board comfort; geo-referenced comfort measurement; thematic maps

Citation: Ileana Bodini, Matteo Lancini, Simone Pasinetti, David Vetturi, Techniques for on-board vibrational passenger comfort monitoring in public transport, Acta IMEKO, vol. 3, no. 4, article 7, December 2014, identifier: IMEKO-ACTA-03 (2014)-04-07

Editor: Paolo Carbone, University of Perugia

Received October 11th, 2013; in final form January 14th, 2014; published December 2014

Copyright: © 2014 IMEKO. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited

Funding: (none reported)

Corresponding author: Ileana Bodini, e-mail: ileana.bodini@ing.unibs.it

Abstract

Traffic calming devices on urban streets, such as elevated pedestrian crossings, speed bumps and roundabouts, are increasingly used, raising a real problem in relation to the on-board comfort that passengers perceive. To measure the vibrational comfort related to traffic calming devices as perceived by public transport passengers, an acquisition system called ASGCM (Autonomous System for Geo-referenced Comfort Measurements) has been developed, taking the European regulations on rail transport as a reference. ASGCM links each measurement of vibration, ground velocity and acceleration with geographical information obtained from a GPS. In this way, a map of a comfort index, statistical surveys and the correlation between on-board comfort and traffic calming can be obtained directly by any geographic information system (GIS) able to query a centralized remote database, which was developed ad hoc. A large number of experimental tests has been performed to define a vibrational comfort index and to collect a large dataset that allows statistically significant comparisons between different infrastructures and their characterization. The proposed technique can also be useful for diagnostic purposes, such as vehicle comparison and road maintenance state monitoring.

1. Introduction

The use and the development of collective transport systems are encouraged by public institutions in order to improve mobility and accessibility in urban areas. These aspects are also key elements in the development of "smart cities". Therefore, increasing service quality and safety is fundamental in making public transport systems more attractive than private vehicles [1]. Service quality is usually evaluated in terms of accessibility to the transport system, timing flexibility and other properties of the transport network, while a comfort analysis is usually focused on the vehicle itself, even if its results also depend on roads and infrastructures.

Developments in legislation [2] and several studies [3], [4], [5] have highlighted two different aspects that could seem to be in contrast: one is the necessity to improve road safety by introducing traffic calming measures, such as roundabouts, speed bumps, chicanes, elevated pedestrian crossings or different types of pavements; the second is the attention to the exposure of workers and passengers to vibration, also on public transport vehicles, and its effect on transport quality. Until now, many authors have studied the interaction between traffic calming devices and public transport passengers [6], [7], [8], [9].

The purpose of this work is to give useful information to the designers of both vehicles and infrastructures; in particular, this research is aimed at defining a technique to evaluate how traffic calming devices affect the acceleration that road public transport passengers perceive and at providing a low-cost device to easily measure and monitor the vibration level on road public transport. Therefore, taking the European regulations on rail transport [10] as a reference, the researchers propose the definition of a vibrational comfort index that allows characterizing and comparing different road infrastructures and that could also be useful to compare the behaviour of vehicles in relation to the road infrastructure and to monitor the vehicle maintenance state.
To calculate a comfort index and to correlate it with a given location, accelerations and GPS positions are needed. In Section 2 the ASGCM (Autonomous System for Geo-referenced Comfort Measurements), consisting of the measurement chain and the processing system that allow collecting data and making them available, is described. In Section 3 the proposed comfort index is defined, and in Section 4 different types of data analysis are proposed. Finally, in Section 5, the comfort index validation is discussed.

2. Measurement and processing system

To assess the on-board comfort that bus passengers perceive due to traffic calming devices, an ad-hoc measurement system, called ASGCM (Autonomous System for Geo-referenced Comfort Measurements), has been developed. It is autonomous for long periods of time and acquires, processes and records data without requiring specialized staff [11]. To monitor on-board comfort, two quantities are needed: the comfort level and the geographical location.

The measurement chain consists of a capacitive triaxial accelerometer mounted along the three principal orthogonal axes, an NI CompactRIO, an analogue input module, supply and connection cables, a commercial GPS antenna, and a USB flash memory. Both the accelerometer and the NI CompactRIO are battery-powered.

The comfort level has been calculated taking into account both infrastructural effects, involving low frequencies, and vibration effects. These have been measured using a triaxial accelerometer, with an acquisition rate of 1000 Hz and a buffer of 1000 samples. In one second of acquisition the buffer is filled up and the data are processed using a band-pass filter, from 0.5 Hz to 300 Hz, designed according to the human-response-to-vibration filters [10], [12]: the acceleration magnitude is computed as a vector sum of the three accelerations acquired in orthogonal directions (the x-axis corresponds to the road direction, the y-axis is in the road plane and orthogonal to the x-axis, and the z-axis is perpendicular to the road plane) and an RMS acceleration value is calculated, considering a period of 5 seconds. This period of time represents a compromise between the acquisition of low-frequency vibrations and a statistically relevant number of samples.
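A minimal Python sketch of this processing step is shown below. It assumes a plain fourth-order Butterworth band-pass in place of the full EN 12299 / ISO 8041 human-response weighting; the filter order and the window handling are our assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 1000  # Hz, acquisition rate from the text

def rms_acceleration(ax, ay, az, fs=FS, window_s=5):
    """Band-pass each axis (0.5-300 Hz), take the vector magnitude and
    return the RMS over one 5-second window."""
    sos = butter(4, [0.5, 300.0], btype="bandpass", fs=fs, output="sos")
    a = np.sqrt(sosfiltfilt(sos, ax) ** 2
                + sosfiltfilt(sos, ay) ** 2
                + sosfiltfilt(sos, az) ** 2)     # acceleration magnitude
    n = fs * window_s
    return float(np.sqrt(np.mean(a[:n] ** 2)))   # RMS over 5 s of samples
```

On a 1 kHz stream this produces one comfort-related value every five seconds, which is then paired with the GPS record described next.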
The time needed to fill the buffer is adequate to query the GPS: each second a standard NMEA (National Marine Electronics Association) RMC (Recommended Minimum sentence C) string is recorded [13]. This string contains information on position, velocity and time. Thanks to this procedure, illustrated in Figure 1, the information on vibration, position and velocity related to the same second is associated with a single geographical point.

Figure 1. Flow chart of the automated acquisition procedure of ASGCM.

3. Index choice

The European regulation on rail transport [10] defines different types of comfort index; the one for standing passengers is called NVD. This has been adapted for road transport and has been calculated following equation (1),

$$N_{VD} = 3\sqrt{16\,a_{wx}^2 + 4\,a_{wy}^2 + a_{wz}^2} \tag{1}$$

where $a_{wx}$ is the RMS (root mean square) weighed longitudinal acceleration, $a_{wy}$ is the RMS weighed lateral acceleration and $a_{wz}$ is the RMS weighed vertical acceleration. Weighing is performed following the European regulations and considering the human-response-to-vibration frequency filters [10], [12]. The higher the NVD value, the greater the on-board discomfort. As stated before, each second this comfort index is linked to the vehicle velocity and the GPS coordinates, and the results are uploaded to a remote database which can then be queried in order to obtain thematic maps and statistical analyses.

4. Analysis results

4.1. Analysis of manoeuvres and infrastructures

By querying the developed database, kinematic parameters related to the vehicle vibrational comfort, such as speed and accelerations, can be analyzed. As a first result, these parameters can be plotted as a function of time and position, as shown in Figure 2. This type of analysis gives detailed information strictly related to every single passage of a chosen vehicle through a considered infrastructure, which can therefore be studied very accurately. However, this kind of analysis is not very appropriate to monitor a complete public transport network or to evaluate an infrastructure design, because many external parameters, such as the driver, the weather, bus crowding and traffic conditions, can affect the measurement. These are important factors that can be considered, for example, when the causes of discomfort have to be investigated.

4.2. Geographical analysis

To obtain less detailed but more significant information, the acquired data have to be aggregated. A first approach consists in a geographically-based aggregation obtained by dividing the urban map into a grid of a chosen step (for example 1 m), and then associating to each cell the mean value of the parameter under study, in particular the NVD index. To take into account the geographical distribution of the data, the generic parameter is weighed using the inverse of the squared distance between the centre of the cell and the recorded position of each sample, following the widely accepted inverse distance weighing method. In this way, a mean value of the chosen parameter has been calculated, taking into account both the geographical distribution of the data and the different runs of the vehicle, in different conditions, in the same geographical position.
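The aggregation just described can be sketched as follows. This is a minimal sketch: the grid extent, the cell size default and the epsilon guard against zero distances are illustrative choices, and the function name is ours.

```python
import numpy as np

def idw_grid(x, y, nvd, cell=1.0, eps=1e-6):
    """Aggregate geo-referenced NVD samples onto a square grid (1 m cells),
    weighing each sample by the inverse of its squared distance to the
    cell centre."""
    xi = np.arange(x.min(), x.max() + cell, cell)
    yi = np.arange(y.min(), y.max() + cell, cell)
    grid = np.empty((yi.size, xi.size))
    for r, yc in enumerate(yi):
        for c, xc in enumerate(xi):
            d2 = (x - xc) ** 2 + (y - yc) ** 2 + eps  # squared distances
            w = 1.0 / d2                               # inverse-square weights
            grid[r, c] = np.sum(w * nvd) / np.sum(w)   # weighted mean per cell
    return xi, yi, grid
```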
Figure 3 shows a thematic map of the comfort index of a particular bus route as an average of a large number of acquisitions. In particular, the bus route is superimposed on the city cartography: very uncomfortable spots (in red) as well as comfortable ones (in blue) are evidenced. In general, thematic maps can be very useful to correlate a physical quantity, such as the vehicle speed, the acceleration vector or the on-board comfort, with a given infrastructure, for example a roundabout, a speed bump or an elevated pedestrian crossing.

Figure 2. Kinematic parameters (speed and acceleration) and NVD index on the same roundabout.

Figure 3. Thematic map of a portion of a bus route of the NVD comfort index, superimposed on the city cartography.

Figure 4. NVD comfort index statistical analysis (histogram with normal fit and box plot; mean 9.807, st. dev. 4.067, n = 2125).

However, using this kind of analysis it is not possible to define different levels of the NVD index related to real situations of comfort or discomfort; therefore, thematic maps such as that shown in Figure 3 can only be used to define areas of relative discomfort with respect to the acquired data, without being able to point out the actual level of criticality itself.

4.3. Statistical analysis

A second approach to data aggregation consists in isolating some identified infrastructures or situations of interest, such as crossings, elevated pedestrian crossings, roundabouts and straight roads, and in considering the interesting physical quantities related to these infrastructures (such as vehicle speed, accelerations, on-board comfort) as stochastic variables over which a statistical analysis can be performed. This allows pointing out known situations of high comfort levels (straight roads with good pavement conditions) and low comfort levels (roundabouts, raised bumps) and statistically investigating the corresponding NVD levels to develop a quantitative scale of the comfort index.

The proposed method of analysis consists in selecting a path portion of fixed length, centred on the chosen infrastructure, and in assessing some important statistical parameters of the distribution of the variable of interest, in particular its probability distribution function (PDF), its cumulative distribution function (CDF), a box plot and the 95th percentile of the distribution. This last parameter depends less than the maximum NVD value on impulsive NVD index variations, which are not relevant from a statistical point of view; therefore, it has been chosen to summarize the complete distribution with a single number. A box plot is able, in particular, to synthetically indicate the trend of the probability distribution through the identification of its quartiles.

As an example, referring to the roundabout proposed in Figure 2 and considering the NVD index as a stochastic variable, it is possible to study some statistical parameters that give brief information about the comfort (or discomfort) level related to the chosen infrastructure. Figure 4 shows the histogram of the distribution and a box plot of the NVD indexes associated with the roundabout. A further example is given in Figure 5, which represents the NVD index of a single run as a function of the position along the path (inside the inspection radius centred on the roundabout) and puts in evidence (dotted line) that the 95th percentile of the NVD can be used to represent the worst on-board vibrational comfort condition, which occurs when the bus enters the selected infrastructure.
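These distribution summaries are straightforward to compute. The sketch below collects, for each infrastructure type, the quartiles used by the box plots and the 95th percentile chosen as the single-number indicator (the dictionary layout and names are ours).

```python
import numpy as np

def nvd_statistics(nvd_by_infrastructure):
    """Per-infrastructure summary: quartiles (box plot) and 95th percentile."""
    summary = {}
    for name, values in nvd_by_infrastructure.items():
        v = np.asarray(values, dtype=float)
        summary[name] = {
            "q1": np.percentile(v, 25),
            "median": np.percentile(v, 50),
            "q3": np.percentile(v, 75),
            "p95": np.percentile(v, 95),  # chosen single-number indicator
        }
    return summary
```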
The complete statistical population includes all the runs through an infrastructure (or through the whole route) of the same vehicle in different traffic, bus crowding and weather conditions and with different drivers. Each one of these parameters is a possible analysis criterion and a possible query for the database. Following the described method, different statistical analyses have been performed; in particular, it is possible to represent the cumulative distribution functions of the comfort index of a complete bus route or of a chosen type of infrastructure, such as roundabouts, elevated pedestrian crossings or straight roads. Figure 6 shows these cumulative distributions and allows comparing the comfort index of the whole route with that related to a particular type of infrastructure, giving brief information about the on-board comfort of a specific bus route and allowing for a comfort-based classification of specific traffic calming devices or road infrastructures.

Figure 5. NVD comfort index – 95th percentile.

Figure 6. Comparison between cumulative distribution functions of the complete route for different infrastructures (straight roads, elevated pedestrian crossings and roundabouts).

Figure 7. Comparison between box plots of different infrastructures (straight roads (SR), elevated pedestrian crossings (PC) and roundabouts (RO)) on the whole route.

Figure 8. Comparison between box plots of different roundabouts.

A comparison between the comfort indexes related to different types of traffic calming devices and road infrastructures is also shown in Figure 7, using box plots as the statistical method. Figure 8 shows that it is also possible to compare different traffic calming devices of the same type, such as roundabouts, and to study the comfort index distribution of buses crossing such infrastructures. In particular, Figure 8 represents the comparison between different roundabouts of the same bus route; the box plots are sorted according to the diameter size, and the data were acquired using the same model of bus. Thanks to this kind of graph it is possible to deduce whether roundabouts with different diameters present different levels of comfort. Another parameter that could be taken into account is the type of manoeuvre on roundabouts with the same geometry.

5. Index validation

To validate the proposed NVD index, an analysis of the correlation between instrumental comfort measurements and individual comfort perception has been performed. The individual perception of the comfort/discomfort level on a selected bus route has been evaluated by a test panel jury of more than 30 passengers, who were repeatedly asked to express an evaluation of their perception in relation to different infrastructures (straight road, pedestrian crossing, roundabout, etc.) just a few moments after their occurrence, for the whole duration of the chosen track. The evaluation is given using a score that varies from 0, when the situation is perceived as very comfortable, to 4, when the situation is very uncomfortable. This ranking system is explained in advance to each subject to avoid misevaluation. For all the devices under investigation, the mean value and the standard deviation have been calculated. As an example, Figure 9 shows a distribution of jury responses on subjectively perceived comfort levels: in the graph, each judgment (qualitative indication) is associated with a numerical value (quantitative indication), and the average value of the judgments has also been reported.
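The validation step that follows pairs these panel scores with the instrumental indicator. As a sketch of that least-squares comparison, the five data pairs below are hypothetical, chosen only to be consistent with the anchor points reported in the text (mean evaluation 0.2 near NVD 5, mean evaluation 4 near NVD 25):

```python
import numpy as np

p95_nvd = np.array([5.0, 10.0, 15.0, 20.0, 25.0])  # 95th-percentile NVD, hypothetical
panel = np.array([0.2, 1.1, 2.0, 3.1, 4.0])        # mean panel scores, hypothetical

slope, intercept = np.polyfit(p95_nvd, panel, 1)   # least-squares straight line
r = np.corrcoef(p95_nvd, panel)[0, 1]              # linear correlation coefficient
print(f"panel = {slope:.3f}*NVD95 {intercept:+.3f}, r = {r:.2f}")
```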
To search for a significant correlation, the NVD index related to each studied infrastructure was considered as a stochastic variable, and the 95th percentile of its distribution was chosen to synthesize the discomfort of the considered infrastructure. A linear regression was obtained with a simple least-squares approach [14] between this indicator and the average of the panel responses. A linear relation has been found (r = 96 %) between the 95th percentile of the NVD distributions and the test panel opinions, associating the best comfort levels found (mean evaluation 0.2 – close to "very good" comfort) with an NVD level of 5 and the highest discomfort (mean evaluation 4 – "very bad") with an NVD level of 25. This relationship is shown in Figure 10 and could be used, as proposed by the authors, to define threshold values for the comfort index.

Figure 9. Relation between NVD index and panel evaluation.

Figure 10. Linear relation between the 95th percentile of the NVD index and the panel evaluation.

6. Conclusions

A portable and autonomous measuring device for on-board comfort measurements on local transport buses has been set up, and specific software for retrieving data from multiple devices, and for stocking and localizing them on the whole local transport network, has been created. Automated tools for a geographical analysis of the collected data with a standard GIS interface have also been developed, and a statistical analysis of the influence of specific traffic calming devices on on-board comfort was made feasible using the data collected.

A linear correlation between the results obtained by the measurement system and the subjective perception of discomfort by a panel jury has been found, validating the authors' proposal of a modified NVD index for road vehicles for public transportation. In particular, the proposed analysis methodology allows identifying, thanks to comfort thematic maps, geographically localized critical points on a particular route, thus making fast maintenance work possible if needed, or pointing out comfort problems related to traffic calming devices or particular traffic and infrastructure conditions.

Using the proposed index it is possible to compare different bus routes, different types of infrastructures, different types of buses travelling on the same route, and different infrastructures of the same type that differ in geometry or in crossing manoeuvres. A systematic acquisition of data regarding the whole transport network of a chosen area could allow monitoring the temporal evolution of the vibrational comfort of the road network, which is related to traffic conditions, road surface damage and the presence of traffic calming devices. In this way, not only the general appreciation of the public transport system could be assessed, but also the temporal development of road conditions, as well as the interactions between subsequent traffic calming devices or road infrastructures.

Acknowledgement

The authors would like to thank Brescia Trasporti, and its bus drivers, for the support received, Prof. G. Maternini for comments and insights and Prof. F. Docchio for careful reading and discussion.

References

[1] Tiboni M., Rossetti S., "Implementing a road safety review approach for existing bus stops", WIT Transactions on the Built Environment, vol. 130, pp. 699-709, WIT Press.
[2] European Parliament, "Directive 2002/44/EC of the European Parliament and of the Council of 25 June 2002 on the minimum health and safety requirements regarding the exposure of workers to the risks arising from physical agents (vibrations)", Official Journal, L 177, 6.7.2002, p. 13.
[3] P. Eriksson, O. Friberg, Structural and Multidisciplinary Optimization 20 (2000), pp. 67-75.
[4] I. D. Jacobson, R. W. Barber, R. D. Pepler, L. L. Vallerie, Transportation Research Record 646 (1997), pp. 1-6.
[5] W. H. Park, J. C. Wambold, Transportation Research Record 584 (1976), pp. 55-63.
[6] G. Maternini, S. Foini, Tecniche di moderazione del traffico, Egaf Edizioni, Forlì, 2010, ISBN 978-88-8482-364-9.
[7] P. Jönsson, Ö. Johansson, Journal of Sound and Vibration 282 (2005), pp. 1043-1064.
[8] S. Grudemo, A. Ihs, M. Wiklund, "The influence of road surface condition on driving comfort", VTI Meddelande, 2004, vol. 957.
[9] K. Ahlin, N. O. J. Granlund, International Journal of Pavement Engineering (2002), pp. 207-216.
[10] UNI EN 12299, "Railway applications, ride comfort for passengers, measurement and evaluation" (2009).
[11] Vetturi D., Maternini G., Lancini M., Bodini I., "On-board comfort measurement system development for buses", Proc. of the Seventh International Conference on Informatics and Urban and Regional Planning INPUT 2012, May 10-12, 2012, Cagliari, Italy, ISBN 9788856875973, pp. 1277-1286.
[12] ISO 8041, "Human response to vibration – measuring instrumentation" (2005).
[13] SiRF, NMEA Reference Manual, SiRF Technology, Inc., California (2007).
[14] G. B. Rossi, B. Berglund, "Measurement related to human perception and interpretation – state of the art and challenges", XIX IMEKO World Congress Fundamental and Applied Metrology, Sept. 6-11, 2009, Lisbon.

Terahertz techniques for better hazelnut quality

ACTA IMEKO ISSN: 2221-870X March 2023, Volume 12, Number 1, 1-8

Manuel Greco1, Fabio Leccese1, Emilio Giovenale2, Andrea Doria2
1 Science Department, Università degli Studi "Roma Tre", Rome 00146, Italy
2 Fusion and Nuclear Department, ENEA, Frascati, Rome 00044, Italy

Section: Research paper

Keywords: terahertz; imaging; transmissions; YIG; hazelnut

Citation: Manuel Greco, Fabio Leccese, Emilio Giovenale, Andrea Doria, Terahertz techniques for better hazelnut quality, Acta IMEKO, vol. 12, no. 1, article 29, March 2023, identifier: IMEKO-ACTA-12 (2023)-01-29

Section Editor: Francesco Lamonaca, University of Calabria, Italy

Received February 6, 2023; in final form February 24, 2023; published March 2023

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Corresponding author: Manuel Greco, e-mail: manuel.greco@uniroma3.it

Abstract

In recent years, technological innovation has acquired a fundamental role in the agri-food sector, in particular in food quality control. The development of technology has made it possible to improve the quality of food before it is placed on the market. Recently, non-invasive techniques, such as those operating in the THz spectral band, have been applied to the field of food quality control. In the laboratories of the ENEA centre in Frascati, close to Rome, Italy, a THz imaging system operating in reflection mode and an experimental setup able to measure both the reflection and the transmission of samples in the frequency range between 18-40 GHz have been developed. With these two setups, rotten and healthy hazelnuts are distinguished by acquiring in real time both images of the fruit inside the shell, using the imaging system, and transmission data, exploiting the 18-40 GHz system.

1. Introduction

One of the pillars of food quality is food safety. Reducing the presence of pathogens and detecting toxins in food production is a key part of the food production chain. To achieve a high standard of food quality control, a series of methods aimed at initially selecting the product and then placing it on the market are available. One of the most used techniques in the field of food safety is X-ray imaging [1]-[3], but other methodologies, such as ultrasound techniques [4], [5], also find space.
One of the main advantages of X-ray techniques is that they produce high-resolution images: X-rays can pass through matter, undergoing an attenuation which depends, for example, on the type of material crossed. The higher the density of the material crossed, the more the rays are absorbed and the lower the amount of radiation transmitted. The latter contains information relating to the absorption of the radiation, which can be used to obtain an image of the structure crossed. At the same time, techniques that use X-rays as an investigative tool are subject to limitations: some materials, such as wood and glass, are weakly detected due to their low density. An additional important factor that needs to be taken into consideration when using techniques like these is their ionizing nature. As is known, X-radiation has a wavelength capable of interacting directly with the electrons of the orbitals, triggering ionizations and, therefore, modifications at the chemical level. Furthermore, X-ray techniques can involve risks both for the operator and for the sample itself, since the X-ray photon is capable of tearing electrons from matter and can, therefore, alter the quality of the food.

In recent years, techniques operating at terahertz frequencies have appeared on the market and may potentially provide information additional to that obtained with traditional routine techniques. At present, terahertz imaging and spectroscopy techniques have been used in various fields, such as the pharmaceutical industry [6], cultural heritage [7], the aerospace industry [8] and, finally, precision agriculture [9], [10]. THz radiation, differently from infrared radiation, excites rotational and roto-vibrational transitions, so polar liquids such as water have a strong absorption in this spectral band. While on one hand this important feature could be exploited to quantify the water content in foods, on the other hand it also represents a limitation [11]. As previously reported, unlike X-ray techniques, whose photons ionize matter, THz photons have a much lower energy. THz radiation is thus unable to cause ionizations and, therefore, to induce structural changes in biomolecules. Techniques operating in this band can therefore be considered safe for the operators, but above all for the purposes of food quality. In the food industry, THz imaging technologies have proven useful in detecting food contaminants [12], [13].
As reported in several studies, the presence of chemical products on foods can lead to the onset of pathologies in consumers and therefore constitutes a potential risk. Recently, a group of researchers applied terahertz time-domain spectroscopy (THz-TDS) to detect the presence of pesticides in food powders such as sticky rice, sweet potato and lotus root [14], highlighting specific absorbance peaks ranging from 0.5 to 1.6 THz for some pesticides. Foreign bodies can be a problem in food safety as well. In [15], a THz imaging system operating in transmission mode was applied to detect foreign bodies, such as stones, glass fragments and metal screws, in chocolate bars. All these elements were visible in the THz images. As previously reported, the potential that terahertz technologies can demonstrate in food analysis is enormous. This potential needs to be tested in the laboratory on a large number of samples, possibly combined with artificial intelligence. In this study, THz measurements using a 97 GHz imaging system and an experimental setup were made on healthy and rotten hazelnut samples.

2. Imaging terahertz scanner

The THz measurements on healthy and rotten hazelnuts were made using a THz imaging scanner operating in reflection mode. Figure 1a shows this system with some of its components (Figure 1b), while a more detailed description from a physical point of view is reported in [16]. This system mainly consists of a 97 GHz source, a directional coupler, a truncated waveguide, a laser triangulation system and a Schottky diode, this last used as the detector of the radiation.

The IMPATT (impact ionization avalanche transit-time) diode, Figure 1c, operates at a fixed frequency of 97 GHz with an output power of 70 mW and was used as the source. This diode is based on the avalanche effect. When a strong electric field is applied to a semiconductor or insulator, it can accelerate the free electrons present in the material. After having gained a certain amount of energy and impacting against the atoms of the material, these particles are able to induce ionizations, i.e., to lead to the formation of other free electrons. In this way, at the end of this process, called the avalanche effect, the number of electrons is greatly increased. The IMPATT diode in this setup is connected to an Elmika three-port directional coupler, based on a WR-10 waveguide, with 3 dB attenuation, as shown in Figure 1d. In this study, the directional coupler was used to collect a fraction of the signal reflected back by the sample.

A Schottky diode, shown in Figure 2, was used as the detector of the reflected radiation. Thanks to its high switching speed, this diode can follow the oscillations of the electric field up to frequencies of the order of hundreds of GHz. In this imaging scanner, the voltage signal generated by the Schottky diode is sampled through an analogue-digital converter and sent to a PC via a USB interface.

It is advisable to be able to accurately measure the distance that separates the waveguide from the sample, both to position the sample correctly and to exploit the capabilities of the system to measure the phase of the reflected radiation. In fact, the phase value also depends on the distance travelled by the reflected radiation, and it is therefore crucial to measure this distance accurately. In order to measure the distance between the waveguide and the sample, a laser triangulation system was used, see Figure 3.
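For scale, assuming free-space propagation at the source frequency, the wavelength and the half-wavelength that governs the phase sensitivity are

$$\lambda = \frac{c}{f} = \frac{3 \times 10^{8}\ \text{m/s}}{97 \times 10^{9}\ \text{Hz}} \approx 3.09\ \text{mm}, \qquad \frac{\lambda}{2} \approx 1.55\ \text{mm},$$

which matches the 1.5 mm displacement quoted later in the results: moving the sample by half a wavelength changes the round-trip path of the reflected radiation by one full wavelength.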
Figure 1. a. Imaging terahertz system; b. (a) source, (b) directional coupler, (c) truncated waveguide, (d) laser triangulation system and (e) Schottky diode; c. IMPATT diode at 97 GHz; d. directional coupler.

Figure 2. Schottky diode.

Figure 3. Laser triangulation system.

A laser diode emits a laser beam, which is projected onto the surface of the measurement target. Once the beam has reached the sample, it is reflected back and subsequently captured by a CCD/CMOS sensor positioned at a certain angle with respect to the optical axis of the laser beam. The target distance is calculated through a geometric algorithm, while the data are analysed by the internal or external controller and made available in output in different formats. In this work, the laser sensor had a wavelength of 670 nm, with an optical power of less than 1 mW, and is capable of measuring distances between 50 and 100 mm with a resolution of 5 µm.

Furthermore, three stepper motors, Figure 4a-c, have been used to move both the source and the detector along the three axes x, y, z. The movement of each motor is regulated by three Mercury controllers. The motors are controlled by suitable controllers that regulate all the motor parameters along the axes (x, y, z), translating the movement instructions sent via PC into electrical impulses which activate the physical movement of the motors. The three controllers are identical, but they are programmed by entering the parameters relating to the motor to be controlled (stroke, speed and acceleration characteristics, position of the reference probes, accuracy, etc.). The controllers can be connected to the control PC via a USB port or an RS232 interface.

Figure 4. a. x-axis motor; b. y-axis motor; c. z-axis motor.

3. 18-40 GHz measurement system

This measurement system has been designed to measure the transmission of samples in the frequency range between 18 and 40 GHz. It is made up of a yttrium iron garnet (YIG) source and a detector used to measure the power of the transmitted radiation. In order to measure easily, the interaction with the radiation takes place in free space, so it is necessary to launch the radiation with appropriate horns. The radiation emitted by the YIG source propagates within a transmission line to the launching horn and is then collected, in this case via a second horn, see Figure 5. The transmitted signal is detected by means of a Schottky diode, and the voltage signal obtained is sent to a control PC via an analogue-digital converter, also interfaced via a USB port. Considering that each horn reflects a certain percentage of the radiation, the system consisting of the launching horn, the sample and the horn used to collect the transmitted signal can be seen as a system composed of three partially reflecting mirrors. Each pair of mirrors in fact behaves like an interferometer, producing a series of effects which allow obtaining information on the sample under examination.

Figure 5. Image of the experimental setup: a) YIG source; b) transmission line; c) launching horn; d) collecting horn; e) Schottky diode.

This system, therefore, consists of a YIG driver system, a YIG source, a transmission line made with coaxial cables, a first horn used to launch the radiation, a second horn to collect the transmitted signal and a Schottky diode as the detector. The YIG driver system, shown in Figure 6, is able to drive the YIG oscillator, MLOS-184 model. On the front panel, the two voltages required for driving the YIG oscillator are displayed. The main voltage (labelled Tune Voltage on the display) powers the main coil and can be changed continuously from 0 V to 10 V (according to Table 1 below). The fine voltage (labelled Fine Modulation - FM Coil Voltage on the display) supplies power to the fine modulation coil, allowing a small adjustment of the output frequency, and can be changed from -10 V to 10 V.

Figure 6. YIG driver system.

Table 1. Linear relationship between voltage and frequency.
Tune voltage (V) | Frequency (GHz)
0.00  | 18
1.00  | 20
2.00  | 22
3.00  | 24
4.00  | 27
5.00  | 29
6.00  | 31
7.00  | 33
8.00  | 36
9.00  | 38
10.00 | 40
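Since the tune voltage maps almost linearly onto the output frequency, Table 1 can be used directly as a lookup. A small sketch follows; the function names are ours, and np.interp simply interpolates between the tabulated points.

```python
import numpy as np

# Calibration points from Table 1: tune voltage (V) -> output frequency (GHz)
VOLTS = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
GHZ = np.array([18, 20, 22, 24, 27, 29, 31, 33, 36, 38, 40], dtype=float)

def voltage_to_frequency(v):
    """Output frequency (GHz) for a given tune voltage (V)."""
    return np.interp(v, VOLTS, GHZ)

def frequency_to_voltage(f):
    """Tune voltage (V) needed for a requested frequency (GHz)."""
    return np.interp(f, GHZ, VOLTS)

print(frequency_to_voltage(36.0))  # -> 8.0 V, lower edge of the 36-40 GHz band used later
```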
At the ENEA centre in Frascati, close to Rome (Italy), a broadband YIG source was chosen, operating in the region between 18 and 40 GHz with a maximum power of about 20 mW. Figure 7 shows the YIG source used in this study.

Spinels, garnets and hexaferrites are among the most common magnetic oxides used in industry. Yttrium iron garnet is a synthetic crystal belonging to the garnet group. This material is used for tunable microwave electronic devices [17], circulators [18], insulators, phase shifters, tunable filters and non-linear devices [19]. This crystal can also resonate at microwave frequencies when it is immersed in a static magnetic field. The resonance frequency is directly proportional to the strength of the magnetic field applied to the crystal. The magnetic field can be generated using an electromagnet, a permanent magnet, or a combination of both. The magnetic field of an electromagnet can be tuned using a variable current; thus, by using this crystal, we are able to make a tunable frequency source.

A layer of Eccosorb was placed inside the Petri dish, as shown in Figure 8. A hole having the same dimensions as the hazelnuts was created in the Eccosorb layer to allow the radiation to pass only through the sample. A Schottky diode was used to detect the signal transmitted through the sample, Figure 9.

Figure 7. YIG source.

Figure 8. Petri dish with an Eccosorb layer placed inside the dish itself.

Figure 9. Schottky diode.

4. Hazelnut samples

In this study, THz measurements were made on 5 healthy and 5 rotten hazelnuts belonging to the "Tonda Gentile Romana" species, which, together with the "Nocchione", is the most widespread species in Lazio and central Italy, Figure 10. The Tonda Gentile Romana has a hazelnut colour with shiny dark brown streaks, a smell of toasted hazelnuts, a toasted hazelnut flavour with no bitter or rancid aftertaste, and a spheroidal appearance.

Figure 10. Rotten and healthy hazelnuts.

5. Results at 97 GHz

In order to carry out the measurements with the imaging system, the waveguide was positioned very close to the outer shell of each hazelnut. Since the THz imaging system was designed to detect the phase shift of the reflected signal, changing the distance that separates the sample from the waveguide by approximately 1.5 mm, corresponding to half a wavelength, causes a phase shift, producing differently coloured pixels in the image. In Figure 11a-b the visible and THz images of the first two healthy hazelnuts are shown. The first healthy hazelnut at first glance did not seem rotten and showed no sign of deterioration, but on breaking the outer shell the fruit was found to be rotten.
From the terahertz image it can be seen that the fruit of the first healthy hazelnut does not have a well-defined, rounded shape if compared to the second healthy one. The second healthy hazelnut, on the contrary, shows a well-defined shape. In Figure 12a-b the visible and THz images of the third and fourth healthy hazelnuts are shown. From the terahertz images it can be seen that the fruit of the third and fourth healthy hazelnuts has a rounded shape. The hazelnut shown on the left is coloured yellow while the one on the right is blue. This is explained by the fact that each hazelnut has a well-defined shape, and therefore size, which causes a phase shift from a physical point of view. Finally, in Figure 13a-b, the visible and terahertz images of the fifth healthy hazelnut are shown.

Figure 11. a. On the left, the first healthy hazelnut; on the right, the second healthy hazelnut (at first glance the first one did not seem rotten); b. THz image of the first two healthy hazelnuts.

Figure 12. a. On the left, the third healthy hazelnut; on the right, the fourth healthy hazelnut; b. THz images of the third and fourth healthy hazelnuts.

Figure 13. a. Visible image of the fifth healthy hazelnut; b. THz image of the fifth healthy hazelnut.

The same procedure was also followed for the rotten hazelnuts. Figure 14, Figure 15, Figure 16, Figure 17 and Figure 18 show the visible and THz images of the rotten hazelnuts, respectively. From the THz images, the hazelnuts do not have a regular, i.e. rounded, shape.

Figure 14. On the left, visible image of the first rotten hazelnut; on the right, THz image of the rotten one.

Figure 15. On the left, visible image of the second rotten hazelnut; on the right, THz image of the rotten one.

Figure 16. On the left, visible image of the third rotten hazelnut; on the right, THz image of the rotten one.

Figure 17. On the left, visible image of the fourth rotten hazelnut; on the right, THz image of the rotten one.

Figure 18. On the left, visible image of the fifth rotten hazelnut; on the right, THz image of the rotten one.

6. Results at 18-40 GHz

After carrying out preliminary tests, we realized that it is better to operate at the upper limit of the emission range: when operating below 36 GHz, the diffractive effects due to the longer wavelength, comparable to the hazelnut dimensions, produce results that are difficult to analyse, owing to the high background radiation transmission. Operating at shorter wavelengths makes the system able to clearly distinguish between healthy and rotten hazelnuts. Focusing on the frequency range between 36 and 40 GHz, with wavelengths between 8 and 7 millimetres, smaller than the size of the hazelnuts, better results were obtained. To test the new setup, we decided to examine five healthy hazelnuts and five rotten ones by modulating the voltage between 8 and 10 volts. In the graph in Figure 19, the transmission values relating to healthy and rotten hazelnuts are shown. In Figure 19, in each series of data, corresponding to a specific frequency, the transmission of the healthy hazelnuts is shown at the bottom of the graph, while that referring to the rotten ones is shown above.

Figure 19. Measurements performed at different frequencies, corresponding to applied voltages between 8 V and 10 V.

Figure 20. The orange bar highlights the transmission relating to the first healthy hazelnut.
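Because, as the discussion below notes, healthy fruit absorbs more of the radiation, a simple normalised-transmission threshold can serve as a first-pass screen. The sketch below is ours, with a hypothetical threshold value; in practice the threshold would have to be calibrated on a reference set.

```python
import numpy as np

def is_suspect(sample_signal, baseline_signal, threshold=0.5):
    """Flag a hazelnut as suspect (possibly rotten) when its transmission,
    normalised to the empty petri-dish baseline, is high: healthy fruit
    absorbs more, so high transmission suggests a degraded inner fruit."""
    t = np.asarray(sample_signal) / np.asarray(baseline_signal)
    return bool(np.mean(t) > threshold)  # threshold is a hypothetical value
```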
From the graph, the highest transmission values are those relating to the Petri dish used as the baseline. A clear difference in transmission between healthy and rotten hazelnuts is evident, apart from the data of the first nut (orange bar in the graph in Figure 20), which shows a transmission comparable to that of the rotten hazelnuts. The lower transmission of the healthy hazelnuts is correlated with their higher amount of fatty acids, which absorb the radiation. The anomaly of the first sample gave us a confirmation of this principle and of its possible practical application as a discrimination tool to identify rotten hazelnuts. Sample 1 did not show, on visual inspection, any sign of deterioration, but when the outer shell was broken, the fruit was rotten, as shown in Figure 21. This is a clear demonstration that this system is able to identify hazelnuts that would be judged "healthy" on the basis of a simple visual inspection (which is the most used method in practice), and it demonstrates the absence of any bias in the experiment.

Figure 21. First healthy hazelnut: external visual inspection and status of the inner fruit (degraded).

7. Conclusions

From this study, we have seen how the THz reflection imaging system and the 18-40 GHz experimental system have proved to be non-destructive tools for the food industry. As reported in several scientific papers, terahertz techniques have been used to detect pesticides, fertilizers and moisture in food. Despite these advantages, the technique currently has some limitations, mainly related to its practical use in an industrial environment. The terahertz imaging system has shown the capability of differentiating hazelnuts, but at the same time it presents some limitations, the main one being the scan time required to acquire a single hazelnut (30 seconds per hazelnut), which is too long for a possible application in a production line. A possible solution could be to modify the layout of the instrumentation to carry out the measurements in transmission mode, exploiting a THz-sensitive matrix detector for one-shot image acquisition.

To reduce the scanning time, it was therefore decided to design a device operating in transmission, detecting the signal transmitted through the nut. This device is also able to operate in a frequency range between 18 and 40 GHz thanks to the YIG source used for this purpose. By modulating the magnetic field applied to the yttrium iron garnet, we are able to select the resonance frequency of the crystal. Furthermore, this experimental system, compared to the THz imaging system, is cheaper and faster in terms of data acquisition. Finally, from the acquired data it was possible to detect hazelnuts which at first sight seemed healthy but were rotten.

References

[1] H. Einarsdóttir, M. J. Emerson, L. H. Clemmensen, K. Scherer, K. Willer, M. Bech, R. Larsen, B. K. Ersbøll, F. Pfeiffer, Novelty detection of foreign objects in food using multi-modal X-ray imaging, Food Control, 67, pp. 39-47. DOI: 10.1016/j.foodcont.2016.02.023
[2] R. P. Haff, N. Toyofuku, X-ray detection of defects and contaminants in the food industry, Sensing and Instrumentation for Food Quality and Safety, 2(4), pp. 262-273. DOI: 10.1007/s11694-008-9059-8
[3] M. S. Nielsen, T. Lauridsen, L. B. Christensen, R. Feidenhans'l, X-ray dark-field imaging for detection of foreign bodies in food, Food Control, 30(2), pp. 531-535. DOI: 10.1016/j.foodcont.2012.08.007
[4] J. Chandrapala, C. Oliver, S. Kentish, M. Ashokkumar, Ultrasonics in food processing – food quality assurance and food safety, Trends in Food Science and Technology, 26(2), pp. 88-98. DOI: 10.1016/j.tifs.2012.01.010
[5] B. Zhao, Y. Jiang, O. A. Basir, G. S. Mittal, Foreign body detection in foods using the ultrasound pulse/echo method, Journal of Food Quality, 27(4), pp. 274-288. DOI: 10.1111/j.1745-4557.2004.00651.x
[6] D. M. Charron, K. Ajito, J. Kim, Y. Ueno, Chemical mapping of pharmaceutical cocrystals using terahertz spectroscopic imaging, Analytical Chemistry, 85(4), pp. 1980-1984. DOI: 10.1021/ac302852n
[7] A. Doria, G. P. Gallerano, E. Giovenale, M. Greco, M. Picollo, THz detection of water: applications on mural paintings and mosaics, Proc. of the 42nd Int. Conf. on Infrared, Millimeter, and Terahertz Waves, IRMMW-THz, Cancun, Mexico, 27 August - 1 September 2017, pp. 1-2. DOI: 10.1109/IRMMW-THz.2017.8067164
[8] M. Greco, E. Giovenale, F. Leccese, A. Doria, E. De Francesco, G. P. Gallerano, A THz imaging scanner to detect structural and fire damage on glass fiber composite, Proc. of the IEEE 9th International Workshop on Metrology for AeroSpace, MetroAeroSpace, Pisa, Italy, 27-29 June 2022, pp. 384-389. DOI: 10.1109/MetroAeroSpace54187.2022.9856003
[9] M. Greco, E. Giovenale, F. Leccese, A. Doria, A discrimination of healthy and rotten hazelnuts using a THz imaging scanner, Proc. of the IEEE Workshop on Metrology for Agriculture and Forestry, MetroAgriFor, Perugia, Italy, 3-5 November 2022, pp. 229-233. DOI: 10.1109/MetroAgriFor55389.2022.9964672
[10] M. Greco, E. Giovenale, F. Leccese, A. Doria, E. De Francesco, G. P. Gallerano, A THz imaging scanner to monitor leaf water content, Proc. of the IEEE Int. Workshop on Metrology for Agriculture and Forestry, MetroAgriFor, Trento-Bolzano, Italy, 3-5 November 2021, pp. 7-11. DOI: 10.1109/MetroAgriFor52389.2021.9628522
[11] A. A. Gowen, C. O'Sullivan, C. P. O'Donnell, Terahertz time domain spectroscopy and imaging: emerging techniques for food process monitoring and quality control, Trends in Food Science and Technology, 25(1), pp. 40-46. DOI: 10.1016/j.tifs.2011.12.006
[12] M. Caciotta, S. Giarnetti, F. Leccese, B. Orioni, M. Oreggia, C. Pucci, S. Rametta, Flavors mapping by Kohonen network classification of panel tests of extra virgin olive oil, Measurement, 78, 2016, pp. 366-372. DOI: 10.1016/j.measurement.2015.09.051
[13] H. J. Shin, S. Choi, G. Ok, Qualitative identification of food materials by complex refractive index mapping in the terahertz range, Food Chemistry, 245, pp. 282-288. DOI: 10.1016/j.foodchem.2017.10.056
[14] A. Ren, A. Zahid, D. Fan, X. Yang, M. A. Imran, A. Alomainy, Q. H. Abbasi, State-of-the-art in terahertz sensing for food and water security – a comprehensive review, Trends in Food Science and Technology, 85, pp. 241-251. DOI: 10.1016/j.tifs.2019.01.019
[15] G. Ok, H. J. Kim, H. S. Chun, S. Choi, Foreign-body detection in dry food using continuous sub-terahertz wave imaging, Food Control, 42, pp. 284-289. DOI: 10.1016/j.foodcont.2014.02.021
[16] A. Doria, G. P. Gallerano, E. Giovenale, L. Senni, M. Greco, M. Picollo, C. Cucci, K. Fukunaga, A. C. More, An alternative phase-sensitive THz imaging technique for art conservation: history and new developments at the ENEA center of Frascati, Applied Sciences (Switzerland), 10(21), pp. 1-24. DOI: 10.3390/app10217661
[17] C.-W. Nan, M. I. Bichurin, S. Dong, D. Viehland, G. Srinivasan, Multiferroic magnetoelectric composites: historical perspective, status, and future directions, Progress in Advanced Dielectrics, pp. 191-293. DOI: 10.1142/9789811210433_0005
[18] J. Ganne, R. Lebourgeois, M. Paté, D. Dubreuil, L. Pinier, H. Pascard, The electromagnetic properties of Cu-substituted garnets with low sintering temperature, Journal of the European Ceramic Society, 27(8-9), 2007, pp. 2771-2777. DOI: 10.1016/j.jeurceramsoc.2006.11.054
[19] J. D. Adam, L. E. Davis, G. F. Dionne, E. F. Schloemann, S. N. Stitzer, Ferrite devices and materials, IEEE Transactions on Microwave Theory and Techniques, 50 (2002), pp. 721-737. DOI: 10.1109/22.989957

Multi-component measuring device – completion, measurement uncertainty budget and signal crosstalk for combined load conditions

ACTA IMEKO ISSN: 2221-870X December 2017, Volume 6, Number 4, 89-94

Sebastian Baumgarten, Dirk Röske and Rolf Kumme
Physikalisch-Technische Bundesanstalt (PTB), Bundesallee 100, 38116 Braunschweig, Germany

Section: Research paper

Keywords: multi-component measurement; measurement uncertainty budget; torque; force; signal crosstalk; friction coefficient sensor

Citation: Sebastian Baumgarten, Dirk Röske and Rolf Kumme, Multi-component measuring device – completion, measurement uncertainty budget and signal crosstalk for combined load conditions, Acta IMEKO, vol. 6, no. 4, article 14, December 2017, identifier: IMEKO-ACTA-06 (2017)-04-14

Section Editor: Eric Benoit, University Savoie Mont Blanc, France

Received March 24, 2016; in final form January 19, 2017; published December 2017

Copyright: © 2017 IMEKO. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited

Corresponding author: Sebastian Baumgarten, e-mail: sebastian.baumgarten@ptb.de

Abstract

This paper presents the completion and the measurement uncertainty budget of a multi-component measuring facility. The new facility is part of the 1 MN force standard machine [1] at the Physikalisch-Technische Bundesanstalt (PTB). It enables the simultaneous generation of a torque in the range from 20 N·m to 2 kN·m in addition to axial forces from 20 kN to 1 MN. This allows the characterization of measuring systems which require combined loads of axial forces Fz and torques Mz, like friction coefficient sensors. The aim is a measurement uncertainty (k = 2) for Mz < 0.01 % and for Fz < 0.002 %. The physical model yields expanded measurement uncertainties (k = 2) of 5.9·10⁻⁵ for 20 N·m and, for the maximum load step, Mz = (2000 ± 0.084) N·m.

1. Introduction

There is an increasing number of measuring systems that can detect more than one force or torque component of these vectorial physical quantities. There is, therefore, an increasing need for traceability with regard to multi-component measurements. Realizations of such measuring facilities with sufficient measurement uncertainty and a suitable measuring range are complex and rare.
PTB's hexapod [2] and the measuring facility at the Istituto Nazionale di Ricerca Metrologica (INRIM) [3] are examples of such realizations. PTB uses the infrastructure that is already available at such measuring facilities to upgrade one facility by adding additional torque components. As a result of a project at PTB, torques can now be generated within the 1 MN force standard machine (1 MN FSM) by means of a lever/band/mass system. This extension of the FSM allows the combination of a force measuring range from 20 kN to 1 MN with a torque measuring range from 20 N·m to 2 kN·m. This, in turn, extends the service range of the measuring facility, and measuring systems such as friction coefficient sensors or wheel load sensors can thus be investigated specifically. The measurement uncertainty budget (MUB) for Mz is presented.

2. Set-up

The additional torque device has a modular set-up and can be mounted into or removed from the force flow of the 1 MN FSM. It works on the basis of the principle of a two-armed lever at the ends of which a force couple acts. The force couple consists of two forces of equal value which, although parallel to each other, act in opposite directions. The cross forces thus neutralize each other and, all in all, an active torque Mz is realized. The forces are generated via two mass stacks that are located symmetrically on either side of the 1 MN FSM (see Figure 1). Each of these mass stacks (see Figure 2) is composed of a lowerable set of masses. The mass disks are coupled to a metallic band. The metallic band is diverted by means of an air-bearing rotor. The vertical gravitational force of the mass stacks thus becomes a horizontal tensile force. The metallic band is coupled to the lever arm and thus transmits the force onto the system. Sensors and step motors stabilize the system position under load and under changing load conditions. The synchronous triggering, monitoring and data acquisition are effected by Excel macros and a DMP 41.

3. Measurement uncertainty budget

The following sections are only a summary of the important points of the measurement uncertainty budget; more details on this very comprehensive topic can be found in [9]. A specific measurement uncertainty budget for the additional facility is presented. It includes a model, Figure 3, taking physical and geometric influence factors into account. This includes different factors, among other things environmental influences, geometric characteristics, or the influence of the mass stacks. The influence of the various factors on the measurement uncertainty and on the signal stability (e.g. friction inside the air bearing) has been investigated.
in this application, the realignment process of the mass stacks, the flatness errors of adaptation parts and angular deviations must also be taken into account. the model therefore encompasses a consideration of the system according to the vectorial components of m (1) and the analysis of the influence factors on the measurement uncertainty.

figure 1. multi-component measuring facility: 1 − mass stack a; 2 − mass stack b; 3 − metallic band for force application onto the lever; 4 − masses; 5 − two-armed lever; 6 − coordinate measuring device, mounted onto a column support; 7 − 1 mn fsm.
figure 2. mass stack b. both mass stacks exhibit an identical design: 1 − sps control; 2 − support elements resting against the frame of the 1 mn fsm; 3 − block with step motors for the displacement and tilting of the air-bearing head; 4 − air-bearing head with integrated rotor for force diversion; 5 − metallic band and coupling element for force application; 6 − masses; 7 − rotational and linear table for position displacement of the mass stack.
figure 3. overview of the acting influence factors in the form of an ishikawa diagram for mz.

in the coordinate system used, mz is the torque, ly is the lever length, and fx is the applied force. the ideal case thus consists in the lever and the force vector lying in the x-y-plane and being oriented orthogonal to each other. an additional axial force fz can be applied onto the system by the 1 mn fsm:

$$\vec{M} = \vec{F} \times \vec{l} = \begin{pmatrix} F_x \\ F_y \\ F_z \end{pmatrix} \times \begin{pmatrix} l_x \\ l_y \\ l_z \end{pmatrix} = \begin{pmatrix} F_y l_z - F_z l_y \\ F_z l_x - F_x l_z \\ F_x l_y - F_y l_x \end{pmatrix} = \begin{pmatrix} M_x \\ M_y \\ M_z \end{pmatrix} . \quad (1)$$

table 1 shows the identified factors and their percentage weighting for the load steps 20 n·m and 2000 n·m. identical measurement uncertainty budgets have been established for each load step. according to the physical model, the expanded measurement uncertainty (k = 2) is 1.2·10−3 n·m for the minimum load step, mz = (20 ± 1.2·10−3) n·m, and 0.084 n·m for the maximum load step, mz = (2000 ± 0.084) n·m.
3.1. local gravitational acceleration
the local gravitational acceleration at the measuring station was determined by the institute for earth measurement (ife), hannover, as being gloc = 9.812524 m s−2 with an expanded measurement uncertainty (k = 2) of 10 µm s−2.
3.2. density and masses of the weights
one set of weights includes the following load steps: 1 × 10 n, 2 × 20 n, 1 × 50 n, 3 × 100 n and 3 × 200 n. the density of the material used for the cylindrical weights can be indicated as ρm = 7979.7 kg m−3 ± 2.0 kg m−3 (k = 2). the uncertainty components (k = 2) lie for all masses mm in a range < 5·10−6 kg and are computed separately for each load step. the contribution to the measurement uncertainty budget never exceeds 1.33 %.
3.3. environment
for the determination of the acting gravitational force, a buoyancy correction (2) was applied. the measuring facility is located in an air-conditioned hall, so changes in the ambient conditions are minimal. the actual values for the air pressure, the humidity and the temperature are acquired to compute the mub. their influence on the mub, however, lies in a range < 0.01 %. the ambient parameters from table 1 for the mub are the humidity hl = 42 % ± 5 %, the temperature tl = 21 °c ± 0.1 °c, and the ambient pressure pl = 1003.4 hpa ± 2 hpa:

$$F_x = m_m \cdot g_{loc} \cdot \left( 1 - \frac{0.348 \cdot p_L - 0.009 \cdot h_L \cdot e^{0.06 \cdot T_L}}{(273.15 + T_L) \cdot \rho_m} \right) . \quad (2)$$
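as a minimal numerical sketch of equations (1) and (2) — the mass value below is hypothetical and only chosen to illustrate the order of magnitude of one load step — the buoyancy-corrected band force and the resulting torque vector can be computed as follows:

```python
import numpy as np

# equation (2): gravitational force with air-buoyancy correction;
# g_loc, rho_m and the default ambient parameters are the values from section 3
def tensile_force(m_kg, p_hpa=1003.4, h_percent=42.0, t_celsius=21.0,
                  g_loc=9.812524, rho_m=7979.7):
    rho_air = (0.348 * p_hpa - 0.009 * h_percent * np.exp(0.06 * t_celsius)) \
              / (273.15 + t_celsius)             # approximate air density, kg/m^3
    return m_kg * g_loc * (1.0 - rho_air / rho_m)

# equation (1): torque as the cross product of force and lever vectors
f = np.array([tensile_force(101.9), 0.0, 0.0])   # hypothetical mass per stack, kg
l = np.array([0.0, 0.99992, 0.0])                # one lever arm (999.92 mm)
m = np.cross(f, l)                               # (mx, my, mz)
print(m)  # in the ideal orientation only mz = fx * ly is non-zero
```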
3.4. lever length and thermal expansion
a two-armed lever is used; a specified value of 999.92 mm applies to both sides. the length of the whole lever was calibrated at ptb's coordinate metrology division; the result obtained was 1999.882 mm ± 0.028 mm (k = 2). when calculating the total length, half of the thickness of the metallic bands for force application must also be taken into account; the thickness is 0.08 mm ± 0.001 mm. the measurement uncertainty of the determination of the lever length represents the largest contribution to the mub for mz. due to the geometrical dimension of the lever, this uncertainty cannot be further reduced with the existing coordinate measuring machines. the lever is made of an aluminium alloy. the thermal expansion coefficient of this alloy is 2·10−5 k−1. accordingly, temperature fluctuations of 0.1 °c have an influence of 4.92 % on the mub. the lever will later be replaced by another lever made of a temperature-stable invar alloy.
3.5. friction of the air bearings
the air bearings do not provide absolutely friction-free force diversion. the influence of the friction inside the air bearing on the torque signal must therefore be investigated [5]. for this investigation, additional weights having a defined mass were applied. the weights are selected in such a way that, with the measuring chain used, a change in signal of practically one digit is expected. the measurements were repeated at all load steps up to 600 n·m and yielded the same result. since a change of one digit also corresponds to the signal stability of the measuring amplifier (dmp 41 with 0.04 hz butterworth low-pass filter), the influence has been estimated as being two digits. this corresponds to a maximum torque proportion of 3.1·10−4 n·m. the contribution to the mub is constant across the load steps. the percentage contribution to the mub for small load steps, 48.3 %, is therefore the largest.
3.6. influence of torsion under load
loading the system with a torque leads to a torsion of the adaptation/sensor system. torsion, in turn, leads to a reduction of the length of the metallic band between the lever and the unwinding point at the air bearing. the difference represents the overlapping of the metallic band on the side of the force generation fx and, as an additional mass, it contributes accordingly to the torque mz. the proportion directly depends on the load step. the differential length is determined by means of a laser sensor to be 10 µm. the change in mass is determined by means of the band thickness (0.080 mm ± 0.001 mm), height (30.0 mm ± 0.1 mm) and density (7850 kg m−3 ± 20 kg m−3), and it is taken into account for the torque calculation. the contribution to the mub is < 0.01 %.
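the dominant contributions named above can be checked with a short back-of-the-envelope estimate; a sketch, assuming only the values stated in sections 3.4 and 3.5 (the relative torque effect of a lever-length change is simply the relative length change):

```python
# back-of-the-envelope check of the dominant mub contributions
l = 1.999882             # calibrated lever length in m
u_l = 0.028e-3           # its expanded uncertainty (k = 2) in m
alpha, dt = 2e-5, 0.1    # thermal expansion coefficient in 1/k, fluctuation in k

rel_length = u_l / l             # relative torque effect of the length uncertainty
rel_thermal = alpha * dt         # relative effect of a 0.1 degc fluctuation
rel_friction_20 = 3.1e-4 / 20    # air-bearing friction at the 20 n·m load step
rel_friction_2000 = 3.1e-4 / 2000  # same absolute friction effect at 2000 n·m

print(f"{rel_length:.1e} {rel_thermal:.1e} "
      f"{rel_friction_20:.1e} {rel_friction_2000:.1e}")
# the friction term is fixed in absolute terms, which is why it dominates
# the small load steps but becomes negligible at 2000 n·m (cf. table 1)
```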
4. geometrical characteristics
to calculate the mub and the disturbing quantities, the orientation as well as the geometric deviation from the optimal orientation must be detected. parallelism differences, angular deviations, tilts of the lever and height differences are part of these deviations.

table 1. measurement uncertainty budget for 20 n·m and 2000 n·m.
influence quantity | contribution at (20 ± 1.2·10−3) n·m | contribution at (2000 ± 0.084) n·m
gravitational acceleration | 0.05 % | 0.12 %
ambient pressure | < 0.01 % | < 0.01 %
air humidity | < 0.01 % | < 0.01 %
temperature | 4.92 % | 9.73 %
weight of the masses | 1.36 % | 0.413 %
metallic bands overlap | < 0.01 % | < 0.01 %
lever length | 45.24 % | 89.53 %
air bearing friction | 48.31 % | < 0.01 %
metallic band thickness | 0.06 % | 0.11 %
height discrepancy | < 0.01 % | < 0.01 %
parallelism error | 0.04 % | 0.07 %
angular error of the pressure plate | < 0.01 % | < 0.01 %
angular error of the adaptation/sensor system | < 0.01 % | < 0.01 %

a coordinate measuring device acquires the geometric characteristics. by scanning any given point, the coordinate measuring device, with the aid of various angular encoders, computes the spatial position in relation to the machine coordinate system. the quality of a measurement depends on the measurement process, on the user, on individual errors of the angular encoders as well as on the computation performed by the coordinate measuring device. we have assumed that the accumulation of the individual errors follows a gaussian distribution. the hypothesis was checked – and confirmed – by repeated measurements and by means of a shapiro-wilk test [6] for the individual measurement processes. sine and cosine functions must be used to calculate the mub according to (1). the problem is that the sensitivity coefficient often tends to zero at small angles. for this reason, an upper estimate is used for the influence [7].
4.1. deviation in parallelism orientation
for an ideal force couple, both metallic bands must be exactly parallel to each other. measurement points for the coordinate measuring device on the lever and on the air bearing serve as reference points to determine the angle. the uncertainty across the measurement process was estimated by averaging as 0.02° (k = 2). together with the fine adjustment of the angular orientation, a parallelism error of 0.072° is obtained.
4.2. deviation in height orientation
for the ideal orientation, the height of the force application point at the lever must be in agreement with the band unwinding point at the air bearing. reference points are used for the height orientation of the system, with 45 µm (k = 2) as the contribution of the measurement process. the stability of the height is given by differential height measurements with a laser sensor at the end of the lever and by the displacement of the air bearing via a step motor. the signal threshold level for the adjustment was laid down as being 100 µm. if the signal threshold level and the contributions due to the uncertainties of the differential height measurements and of the displacement by means of the step motor are combined, then one obtains a total contribution of 186 µm. a reduction of the signal threshold level would considerably reduce the uncertainty; however, the effort for the adjustment control then increases tremendously. a one-sided height difference of 186 µm, related to a band length of 1440 mm, corresponds, at 1000 n, to a negligible change in torque of 8.3·10−6 n·m.
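the last figure can be reproduced with a small-angle cosine-error estimate; a sketch, assuming an effective lever arm of 1 m for the 1000 n band force:

```python
import numpy as np

dh = 186e-6            # one-sided height difference in m
band = 1.440           # band length in m
theta = dh / band      # resulting band tilt in rad (small angle)

f, lever = 1000.0, 1.0                 # band force in n, effective lever arm in m
dm = f * lever * (1 - np.cos(theta))   # torque lost to the tilted force vector
print(f"{dm:.2e} n·m")                 # ~8.3e-06 n·m, matching the text
```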
when an fz load is applied with the 1 mn fsm, the adaptor/sensor/lever set-up lowers itself. this is due to elongations in the 1 mn fsm adjustment control and compression of the adaptor/sensor system. for 500 kn this lowering amounts to about 2.6 mm. this height difference to the air bearing is corrected automatically by the adjustment control when the load is changed. the orientation therefore remains stable, even under an fz load, within a range of 186 µm.
4.3. deviation in planarity of the pressure plate, adaptation and sensor
the lever's tilt in relation to the ideal x-y-plane depends on the orientation of the adaptor/sensor system. the adaptor parts are mechanical components for mounting the sensors at the multi-component facility. deviations lead to angular errors and, thus, to a tilt of the lever. the standard reference is the pressure plate of the 1 mn fsm. averaging over different measurement series provides an estimate of the flatness. this can be specified as αpp = 0° ± 0.0178°. the angle refers to a tilt of the plane in relation to the ideal x-y-plane. correspondingly, an angular deviation αpp = 0° ± 0.0178° also applies to the lever. in addition, the flatness errors accumulate due to the adaptation parts, the sensor and their installation. the resulting angular error depends on the quality of the components and must therefore be determined separately for each adaptor/sensor system. in the case of the mub described in table 1, an angular error αap = 0.18° ± 0.0201° can be stated. the mub does not take into account the orientation of the angular error; the error is conservatively estimated by considering it as constant for all directions. according to the calibration results obtained by the coordinate metrology division, the deflection of the lever due to its dead weight can be neglected.
5. disturbing quantities
the quantities considered as disturbing quantities are the shearing force fy, an additional axial force fz and the bending moments mx and my. a nominal value of 0 is the goal for all disturbing quantities. deviations of the geometric orientation, however, essentially result in an uncertainty for the nominal value of each quantity. the computation is carried out separately for each torque load step and has to be recalculated for each adaptor/sensor system. for the system to which table 1 also applies, at 2000 n·m, 0 n·m ± 0.4 n·m is obtained for mx and 3.49 n·m ± 1.01 n·m for my. the deviation from the nominal value for my is due to the acting force fx and to an effective lever length lz as a result of the lever's tilt. with adaptation parts having smaller flatness errors, it is possible to reduce the lever's tilt as well as the resulting bending moments significantly. table 2 shows the percentage contribution of the significant influence quantities to the uncertainty of my and mx. the influence quantities that are not mentioned there (see table 1) have a negligibly small influence on the mub, amounting to < 0.0001 %. the disturbing quantities fy and fz were also computed from the geometric deviations. at the maximum load step of 2000 n·m, one obtains for the system 0 n ± 2.6 n for fy and 0 n ± 0.3 n for fz. disturbing quantities may cause the characteristic curve of a sensor to shift. the signal crosstalk as a function of these quantities is often difficult to describe [8]. with little technical effort, it is possible to use the measuring device asynchronously in order to, for example, estimate the sensitivity of a sensor to a certain disturbing quantity.

table 2. significant influences on the mub of mx and my for 2000 n·m.
influence quantity | mx / (0.4 n·m) | my / (1.01 n·m)
height discrepancy | 99.97 % | < 0.01 %
parallelism error | 0.02 % | < 0.01 %
angular error of the pressure plate | < 0.01 % | 43.98 %
angular error of the adaptation/sensor system | < 0.01 % | 56.02 %
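to illustrate the mechanism by which a lever tilt couples the band force fx into a bending moment my, a small sketch — the tilt angle here is hypothetical and chosen only to show the order of magnitude, it is not the value from the budget:

```python
import numpy as np

# a rigid tilt of the lever by alpha gives each arm an effective height
# lz = l * sin(alpha), turning part of fx into my = fx * lz per equation (1)
alpha = np.radians(0.1)     # hypothetical tilt angle in degrees -> rad
f, l = 1000.0, 1.0          # band force in n, arm length in m (2000 n·m in total)
my = 2 * f * l * np.sin(alpha)   # both arms contribute with the same sign
print(f"my ≈ {my:.2f} n·m")      # a few n·m, the order of magnitude in section 5
```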
6. signal crosstalk for smz
the combined load conditions between fz and mz and the signal crosstalk of sfz and smz were investigated based on the example of a multi-component sensor (mcs) which is specially adapted to the auxiliary device. since the subsequent objective is the traceability of industrial sensors, the measurement results will be used to derive and develop expedient and practical calibration procedures and sequences as well as evaluation and analytical procedures.
6.1. multi-component sensor
the sensor has the nominal load ranges fz = 500 kn and mz = 500 n·m. calibration for the individual quantities was performed according to en iso 376 and din 51309. for fz, the sensor achieved < 5·10−4 for > 100 kn and < 2·10−3 for < 100 kn; for mz, the clock- and anti-clockwise measurement uncertainty (k = 2) is < 7·10−4. fz is kept constant for a measurement series, whereas mz is varied; the next load step fz is then selected and mz is varied again. this sequence must be observed to prevent the mass disks from coupling asymmetrically into the load frame of the 1 mn force standard machine. figure 4 shows the result of the combined loading for the torque signal smz. the represented signal is only the signal change caused by combined loading. to assess the influence of signal crosstalk quantitatively, figure 5 shows the relative change of the signal based on the signal evolution from the calibration function of mz. without correction, the error share can reach 4 %. this error inherent in the system must therefore be taken into account. the signal crosstalk of mz on the bridges of sfz is negligibly small and is therefore not represented here.

figure 4. torque signal change smz under combined load conditions.
figure 5. relative signal change smz under combined load conditions.

6.2. analysis by means of multiple polynomial regression
as shown in figure 4, the signal behaviour can, as a matter of principle, be represented by means of a higher-dimension regression surface. the multiple polynomial regression (mpr) method was applied. equation (3) describes the calculation of a parameter matrix θ from the design matrix a and a signal matrix z. depending on the order aimed at for the solution, a minimum number of data points is required. for a reliable statement with, e.g., a cubic approximate solution, five different fz and mz load steps – i.e. 25 independent data points – are necessary:

$$\theta = (A^{T} A)^{-1} A^{T} Z . \quad (3)$$

table 3 shows the set of parameters calculated and the coefficient of determination for the signal pattern from figure 4. with (4), the cubic solution describes the signal pattern sufficiently well:

$$S_{Mz} = a_1 F_z + a_2 M_z + a_3 F_z^2 + a_4 F_z M_z + a_5 M_z^2 + a_6 F_z^3 + a_7 F_z^2 M_z + a_8 F_z M_z^2 + a_9 M_z^3 . \quad (4)$$

table 3. cubic solution parameters for the measurements of figure 4.
a1 = 1·10−4, a2 = −3.4·10−3, a3 = −2.41·10−7, a4 = −2.98·10−7, a5 = −1.17·10−7, a6 = 2.83·10−10, a7 = 1.88·10−11, a8 = 9.2·10−11, a9 = 1.35·10−10, r² = 7.89·10−5

especially for the range > 10 % of the nominal load, the systematic influence can be reduced from 4 % to < 0.5 %. the solution can only be applied to a limited extent to the lower load range. the mpr solution improves significantly with an increasing number of data points. this procedure is applied to both the loading and the unloading range. the inverse transformation of the signal values smz and sfz to the input quantities fz and mz is analogous. the next step is planned to consist of comparison measurements with various friction coefficient sensors in order to determine the specific signal crosstalk.
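a minimal sketch of the mpr fit in equations (3) and (4), assuming a synthetic 5 × 5 grid of load steps and toy coefficients; the real signal matrix z would come from the measurements. normalizing the load steps to fractions of the nominal loads keeps the normal equations well conditioned:

```python
import numpy as np

# columns of the design matrix a are the nine monomials of equation (4)
def design_matrix(fz, mz):
    return np.column_stack([fz, mz, fz**2, fz * mz, mz**2,
                            fz**3, fz**2 * mz, fz * mz**2, mz**3])

# hypothetical grid of five fz and five mz load steps (25 data points),
# expressed as fractions of the nominal loads
fz, mz = np.meshgrid(np.linspace(0.2, 1.0, 5), np.linspace(0.2, 1.0, 5))
fz, mz = fz.ravel(), mz.ravel()

a_true = np.zeros(9)
a_true[0], a_true[1] = 0.1, -3.4        # toy coefficients for the synthetic signal
rng = np.random.default_rng(0)
z = design_matrix(fz, mz) @ a_true + rng.normal(0.0, 1e-3, fz.size)

a = design_matrix(fz, mz)
theta = np.linalg.solve(a.T @ a, a.T @ z)   # equation (3); np.linalg.lstsq is
print(theta[:2])                            # the numerically safer alternative
```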
7. conclusions
the extended relative measurement uncertainty (k = 2) of the 1 mn fsm is about 2·10−5. the model provides an expanded measurement uncertainty for the additional torque-generation facility from mz = (20 ± 1.2·10−3) n·m up to mz = (2000 ± 0.084) n·m for the maximum load. comparison measurements with different torque reference transducers have shown very good repeatability; the reproducibility, however, is within a range of < 4.1·10−4. the goal of < 2·10−4 has, thus, not been achieved yet. most of the time, a measurement uncertainty < 1·10−3 is sufficient for industrial sensors. correspondingly, the measuring device is not yet listed in the catalogue of measuring facilities and ptb's quality management system. the characterization of the signal crosstalk by means of a specific multi-component sensor has shown that significant errors may occur when signal crosstalk is not taken into account. the multiple polynomial regression method allows the functional relation to be described precisely. comparison measurements with industrial multi-component sensors from the screw industry will have to show whether these findings are applicable to other systems.
references
[1] w. weiler, m. peters, h. gassmann, h. fricke, w. ackerschott, "die 1-mn-normalmeßeinrichtung der ptb braunschweig", vdi-z 120, 1978, pp. 1-6.
[2] d. röske, "metrological characterization of a hexapod for a multi-component calibration device", proc. xvii imeko world congress, dubrovnik, croatia, 2003, pp. 347-351.
[3] c. ferrero, li qing zhong, c. marinari, e. martino, "new automatic multicomponent calibration system with crossed-flexure levers", proc. of ismcr'93, imeko tc-17, torino, italy, 1993.
[4] a. robinson, "the commissioning of the first uk national standard static torque calibration machine", 19th conference on force, mass and torque measurement, cairo, egypt, 2005, pp. 1-6.
[5] d. peschel, d. mauersberger, "determination of the friction of aerostatic radial bearings for the lever-mass system of torque standard machines", proc. of the xiii imeko world congress, turin, italy, sept. 5-9, 1994, vol. 1, pp. 216-220.
[6] s. s. shapiro, m. b. wilk, "an analysis of variance test for normality", biometrika, vol. 52, no. 3/4, 1965, pp. 591-611.
[7] d. röske, "uncertainty contribution in the case of cosine function with zero estimate – a proposal", imeko 2010 tc3, tc5 and tc22 conferences, pattaya, chonburi, thailand, november 22-25, 2010.
[8] a. brüge, d. röske, d. mauersberger, k. adolf, "influence of cross forces and bending moments on reference torque sensors for torque wrench calibration", xix imeko world congress, lisbon, portugal, sept. 6-11, 2009.
[9] s. baumgarten, h. kahmann, d. röske, "metrological characterization of a 2 kn·m torque standard machine for superposition with axial forces up to 1 mn", metrologia, vol. 53, 2016, pp. 1165-1176.
acta imeko, issn: 2221-870x, february 2015, volume 4, number 1, 2 – 4, www.imeko.org
introductory notes for the acta imeko special issue on the "19th symposium on measurement of electrical quantities" and the "17th workshop on adc/dac modelling and testing"
alexandru salceanu 1, spartacus gomariz 2
1 faculty of electrical engineering, technical university of iasi, str. mangeron 23, iasi, romania
2 electronics department, universitat politècnica de catalunya, c. comte d'urgell, 187, barcelona, spain
section: editorial
citation: alexandru salceanu, spartacus gomariz, introductory notes for the acta imeko special issue on the "19th symposium on measurement of electrical quantities" and the "17th workshop on adc/dac modelling and testing", acta imeko, vol. 4, no. 1, article 2, february 2015, identifier: imeko-acta-04 (2015)-01-02
editor: paolo carbone, university of perugia
received february 4th, 2015; in final form february 10th, 2015; published february 2015
copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was supported by the spanish ministry of economy and competitiveness under the research project cgl2011-28682-c02-02
corresponding author: alexandru salceanu, asalcean@tuiasi.ro
1. introduction
the first issue of volume 4 of acta imeko is in your hands. this issue is centered on the 19th symposium on measurement of electrical quantities and the 17th workshop on adc/dac modelling and testing, which were held in the city of barcelona in july 2013.
the main goals of these time-honored scientific events were to show the latest investigations and to exchange information and points of view on current research in the development and use of electrical and electronic instruments for measuring, monitoring and recording electrical signals. as a novelty, the 19th imeko tc4 symposium expanded the topics covered in previous editions in order to address issues related to marine technologies and applications. from august to november 2013, the members of the imeko tc4 board identified 24 valuable contributions presented at the barcelona events for eventual publication in acta imeko. in december 2013, their authors were invited to submit extended and updated versions of their selected papers. all of the authors that responded positively to this challenge started submitting their extended papers to the acta imeko online submission system in early 2014. a significant number of representative reviewers carried out, in successive stages, their task of assessing the papers and submitting their recommendations. the final list of 17 papers published in this issue is a high-quality exhibition of the first-class papers submitted to our symposium. all the reviewers involved in the process gave their support aiming to further improve the shape and the intrinsic content of the papers. we sincerely hope that the contributions that you will find in the next pages provide you with information of interest.
2. about tc4
the essential interest and activity of imeko tc4 (measurement of electrical quantities) is the development of theoretical and practical research in the topics of electrical and electronic measurements. since its establishment in 1984, this technical committee has organized specific symposia and workshops every year. the continuous increase in general relevance and inclusion in everyday life of the electrical and electronic instruments for measuring, monitoring and recording electrical signals has significantly enhanced our activities and enlarged our addressability. the current members of tc4 are:
janusz mindykowski, chairperson (poland)
dominique dallet, deputy chairperson (france)
pedro m. ramos, scientific secretary (portugal)
linus michaeli, past chairperson (slovak republic)
miloš sedlacek, past chairperson (czech republic)
pasquale daponte, past chairperson (italy)
antónio cruz serra, past chairperson (portugal)
mario savino, honorary chairperson (italy)
joaquin del rio fernández (spain)
raul land (estonia)
graziella kreiseler (germany)
istvan kollar (hungary)
turgay özkan (turkey)
umberto pogliano (italy)
gelson rocha (brazil)
alexandru salceanu (romania)
jan saliga (slovak republic)
sergey g. semenchinsky (russia)
yurij tesyk (ukraine)
vladimir vujicic (serbia)
olfa kanoun (germany)
dušan agrež (slovenia)
victor i. didenko (russia)
izzet kale (united kingdom)
leo van biesen (belgium)
mihai cretu (romania)
christian eugène (belgium)
sohair fakhry (egypt)
vladimir haasz (czech republic)
jouko halttunen (finland)
peter händel (sweden)
damir ilic (croatia)
voicu groza (canada)
jae kap jung (republic of korea)
elefterios kayafas (greece)
wang xiaofei (china)
he qing (china)
3. the articles
in the paper by asensio et al (p. 5), the authors improved a traditional approach by using a microphone array linked to a noise-monitoring unit, enabling the tracking of the direction of arrival of the sound and thus improving the classification rates.
another advantage is the ability to track the aircraft location along the runway, with the aim of transforming the sound pressure measurements into sound power level estimations. this novel instrument delivered results with quite good classification rates. the behavior of systems that are prone to noise is investigated in the paper by angrisani et al (p. 11), mainly from the perspective of the noise sources available on the market, with a focus on generators of colored noise. the authors propose an analog generator that exploits an arbitrary waveform generator as noise source, capable of producing noise signals characterized by arbitrary power spectral densities. the implementation and characterization of a data acquisition (daq) system controlled by a 450 mhz sharc digital signal processor (adsp 21489), capable of processing the acquired data on-board, is proposed in the next article by pinto et al (p. 19). the system's main highlights are four differential channels with 1 mhz bandwidth allowing simultaneous acquisition, 9 independent bipolar ranges and a maximum sampling rate of 600 ks/s. the analog daq inputs are protected against incorrect connections, ensuring full recovery of operation without the need to replace damaged components or fuses. the aim of the paper by garcia-benadí et al (p. 26) is to provide the core for the "in situ" calibration of a hydrophone using a directional 10 khz source. even if the assigned value of uncertainty is high, the overall results are useful and interesting. the energy management of an autonomous underwater vehicle, including the control of the state of charge of the batteries, the simultaneous charging of all batteries from outside of the vehicle, and a wireless connection/disconnection mode, is the core of the next article by i. masmitjà et al (p. 35). the methodology for measuring the charge level of the battery is current integration. the associated measured data are sent to the mission control of the vehicle, aiming to optimize and guarantee its navigation. the paper by liccardo et al (p. 44) mainly deals with the propagation channel aboard trains: propagation path loss, delay spread and coherence bandwidth. the results show that, due to reflections by metal walls, the path loss exponent is slightly smaller than in free space, while not being significantly influenced by the position of transmitter and receiver. it is shown that the polarization and the distance between transmitter and receiver influence the delay spread and coherence bandwidth. finally, the functional law between coherence bandwidth and delay spread was determined. the development of a low-cost, space-saving device that can simultaneously take measurements from all (usually eight or up to twelve) outgoing three-phase feeders in a distribution substation is the main focus of the contribution by nikolic et al (p. 53). aiming to save space without reducing the measurement accuracy, the authors designed a data acquisition system based on real-time multiprocessing with a microcontroller and an fpga circuit. the fast fpga circuit is used for voltage and current measurements including high-order harmonics. the proposed solution was validated by the results obtained in the laboratory. the properties of the spectrum measured by time-domain electromagnetic interference systems that are influenced by imperfections in the multi-resolution analog-to-digital converter are analyzed by kamenský and kováč (p. 61).
the authors propose a dedicated process for the identification of discrepancy parameters from experimental data, feeding the expressions of the model and enabling the comparison of experimental with theoretical results. the paper by kızılkaya et al (p. 68) is a study on analyzing and synthesizing new multi-path, time-interleaved, digital sigma-delta modulators able to operate at any arbitrary frequency. dual- and quadruple-path fourth-order butterworth, chebyshev, inverse chebyshev and elliptical based digital sigma-delta modulators are discussed, offering designers the flexibility of specifying the centre frequency, pass-band/stop-band attenuation as well as the signal bandwidth. detailed simulations performed at the behavioral level in matlab are compared with the experimental results of the fpga implementation of the designed modulators. the paper presents the mathematical modeling and evaluation of the tones caused by the finite word-lengths of these digital multi-path sigma-delta modulators, stimulated by sinusoidal signals. a new architecture for a high-resolution multi-tap-delay-line time-interval measurement module implemented in a single fpga device is proposed in the paper by zieliński et al (p. 77). the measurement module designed here is able to collect sixteen time-stamps during a single measuring cycle. the main improvements are the increased resolution, the reduction of the total duration of the measurements and the mitigation of the duty cycle of the measurement instrument. a real-time and simultaneously low-cost compensation method for the extension of the frequency bandwidth of medium voltage dividers whose performance would not otherwise satisfy the requirements is developed by crotti et al (p. 82). this enables the spread of transducers with suitable frequency bandwidths, as required by the relevant standards. the next paper by giordano et al (p. 90) describes the arrangement of a first experimental set-up which allows the comparison between the measurement of the electromagnetic field quantities induced inside a simple cylindrical phantom and the same quantities estimated numerically through a boundary element method. the method proved its robustness even when a metallic non-magnetic element, roughly simulating a medical implant, was introduced inside the phantom. a power quality measurement system intended for measurements in substations at medium voltage level, including voltage and current transducers and a self-developed measuring instrument, is designed by crotti and giordano (p. 97). while the measuring instrument is based on a reconfigurable i/o-fpga system with embedded software, the voltage and current transducers used are classical: a rogowski coil and a voltage divider. an effective application of the measurement system in a private substation is analyzed, connecting an industrial load and two photovoltaic generation plants to the public medium voltage network. a significant decrease of the current and voltage measurement uncertainty was obtained. the starting point of the paper by skórkowski et al (p. 105) is the diagrams of a virtual quasi-balanced instrument, continuing with the verification of a virtual realization of a quasi-balanced circuit for capacitance measurements. the daq card ni-6009 and the calculation were controlled by an application developed in labview. bao et al propose test methods for analog-to-information converters (p.
111); the main objective was to verify whether figures of merit and test methods for traditional analog-to-digital converters might be applied to aics based on the random demodulation architecture. starting from commercially available integrated circuits, an aic prototype was designed. a simulation analysis and an experimental investigation have been carried out to study additional influencing factors such as the parameters of the reconstruction algorithm. it was demonstrated that standard figures of merit are in general capable of describing the performance of aics, provided they are slightly modified according to the proposals reported by the authors. the main topic of contactless measurements for industrial applications is developed by di leo et al (p. 121). a vision-based system expressly realized for the measurement of geometric and/or chromatic parameters of rubber profiles is designed for application in the automotive industry. the stereo vision system for the online measurement of the dimensional characteristics of the profile transversal section is described in full detail. an improvement of non-destructive eddy current testing (ect) aiming to detect the presence of thin defects in conductive materials is presented in the last paper, by betta et al (p. 128). an experimental comparison of different excitation signal designs was carried out to improve the quality of experimental data associated with small cracks and, consequently, to obtain a more reliable determination of the geometrical features of the inspected defects.
4. conclusions
we were deeply honored to act as guest editors for this innovative and challenging issue of acta imeko, and we have the satisfaction of finalizing a very useful outcome. the release presented here is the result of the tireless efforts of a large but homogenous team, integrating the authors, the reviewers and, last but not least, paul regtien, paolo carbone and pedro ramos, who provided salutary support throughout this period.

acta imeko, december 2014, volume 3, number 4, 2 – 3, www.imeko.org
introductory notes for the acta imeko special issue on "technical diagnostics: new perspectives in measurements, tools and techniques for industrial applications"
marcantonio catelani, lorenzo ciani
department of information engineering, university of florence, via s. marta 3, 50139, florence, italy
section: editorial
keywords: technical diagnostics; quality; industrial application
citation: marcantonio catelani, lorenzo ciani, introductory notes for the acta imeko special issue on "technical diagnostics: new perspectives in measurements, tools and techniques for industrial applications", acta imeko, vol. 3, no. 4, article 2, december 2014, identifier: imeko-acta-03 (2014)-04-02
editor: paolo carbone, university of perugia
received november 21st, 2014; in final form november 21st, 2014; published december 2014
copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: lorenzo ciani, lorenzo.ciani@unifi.it
the concept of quality has undergone important transformations over time.
in a first interpretation, long established in many industrial contexts, the term "product quality" was essentially associated with "compliance". nowadays, the term "conformity to specifications" is widely used, but this has to be considered only one aspect of the measurement of the quality level. in fact, referring to the iso 9001 standard on quality management systems, the term quality corresponds to the degree to which a set of characteristics of an object fulfils requirements. concerning characteristics, the standard proposes a classification into physical (e.g. mechanical, electrical, etc.), temporal (e.g. reliability, availability, etc.), and so on. it is interesting to observe that the concept of product has also changed over time. now, it is usual to regard the product as the result of a manufacturing process in which a series of activities transforms raw materials, technologies and resources into the output – that is, the product. so, it is mandatory to implement different monitoring activities, as well as to collect and analyse appropriate data, with the aim of demonstrating the suitability and the achievement of the desired quality levels. the analysis of data provides information about conformity to product requirements, but also about characteristics and trends of processes and products, including opportunities for preventive actions. this last aspect – the preventive action – also depends on the ability to monitor both the functionality of the product and the capability of the process to fulfil requirements. in this scenario, imeko tc10 aims to represent a reference point for scientists and researchers. it can be regarded as a forum for exchanging knowledge and sharing ideas concerning methods, principles, instruments and tools, standards and industrial applications of technical diagnostics, as well as their diffusion across the scientific community. in this issue, two main aspects are considered as a result of the different research activities presented in the workshop. first, the impact of technical diagnostics on product performance is considered. for this topic, different points of view and proposals in the field of fault diagnosis are presented by the authors. b. aubert et al. deal in [1] with extended-kalman-filter-based fault detection in permanent magnet synchronous generators. since such equipment is often used in high-technology contexts, it is fundamental to guarantee high performance in terms of reliability and safety. the authors propose a method for inter-turn short-circuit detection based on the identification of the short-circuited turns ratio in a faulty pmsg model expressed in the park frame. several tests are also implemented in order to demonstrate the robustness and sensitivity of the approach. in the field of testing for electronic devices, an interesting contribution is proposed by d. załęski and r. zielonko in [2]. the paper concerns the testing of analog circuits and blocks in mixed-signal electronic embedded systems (eess) by means of the built-in self-test (bist) technique. in particular, two testing functions are performed: functional testing and fault diagnosis down to the localization of a faulty element. such a proposal can be applied to a wide range of eess. the work presented by m. lazzaroni et al. [3] can also be located in this first topic of the workshop.
the authors propose a thermal characterization of a power converter demonstrator used in particularly critical environments where radiation and magnetic fields may be present. in addition to the previous applications, it is important to underline how technical diagnostics can also be used to check the operating conditions and the functionality of components (above all mechanical ones), machines and plants. this is the case of the research presented by g. dinardo et al. in [4] about measurements of vibration affecting rotating components of machines. the authors propose a novel configuration of a laser doppler vibrometer (ldv). the instrument was set up for the estimation of the out-of-plane vibrations of moving (rotating) objects, in order to give a better characterization of the self-tracking technique employed with the use of a 1d single-point ldv. on the subject of vibration measurements, but with different aims, is the paper of i. bodini et al. [5]. the research concerns a monitoring activity for on-board vibration in public transport. this can be considered an example of how technical diagnostics can be implemented as support for the evaluation of quality of service. a set of experimental tests has been performed by the authors to define a vibrational comfort index and to collect a large dataset that allows statistically significant comparisons between different infrastructures and their characterization. the proposed technique can also be useful for diagnostic purposes, such as vehicle comparison and road maintenance state monitoring. as said above, the vast area of technical diagnostics also covers different topics such as, for instance, the manufacturing environment. m. scagliarini proposes in [6] a method for assessing multivariate measurement systems. this article describes an approach that, using the data routinely available from the regular activity of the instrument, offers the possibility of assessing multivariate measurement systems without the necessity of performing a multivariate gauge study. the practical application of this research is of interest considering that critical decisions about process and product quality often depend on the quality of the measurement systems. the paper [7] of m. n. durakbaşa et al. presents, instead, a particular study on process parameters like cutting speed, feed rate, depth of cut, tool geometry and different coatings. the goal is to indicate the effects on the surface roughness of the machined product on the basis of two- and three-dimensional precise measurements. taguchi and regression methods are proposed as methodological approaches in order to predict average surface roughness values against edge radius wear. in conclusion, the papers presented at the 12th imeko tc10 workshop on new perspectives in measurements, tools and techniques for industrial applications, briefly described here, confirm the vast research area and the different interests of scientists in technical diagnostics.
references
[1] brice aubert et al., stator winding fault diagnosis in permanent magnet synchronous generators based on short-circuited turns identification using extended kalman filter, acta imeko, vol. 3, no. 4, article 3, december 2014, identifier: imeko-acta-03 (2014)-04-03.
[2] dariusz załęski, romuald zielonko, two-functional µbist for testing and self-diagnosis of analog circuits in electronic embedded systems, acta imeko, vol. 3, no. 4, article 4, december 2014, identifier: imeko-acta-03 (2014)-04-04.
[3] massimo lazzaroni et al., thermal modeling and characterization for designing reliable power converters for lhc power supplies, acta imeko, vol. 3, no. 4, article 5, december 2014, identifier: imeko-acta-03 (2014)-04-05.
[4] giuseppe dinardo et al., how geometric misalignments can affect the accuracy of measurements by a novel configuration of self-tracking ldv, acta imeko, vol. 3, no. 4, article 6, december 2014, identifier: imeko-acta-03 (2014)-04-06.
[5] ileana bodini et al., techniques for on-board vibrational passenger comfort monitoring in public transport, acta imeko, vol. 3, no. 4, article 7, december 2014, identifier: imeko-acta-03 (2014)-04-07.
[6] michele scagliarini, a method for assessing multivariate measurement systems, acta imeko, vol. 3, no. 4, article 8, december 2014, identifier: imeko-acta-03 (2014)-04-08.
[7] m. numan durakbaşa et al., surface roughness modeling with edge radius and end milling parameters on al 7075 alloy using taguchi and regression methods, acta imeko, vol. 3, no. 4, article 9, december 2014, identifier: imeko-acta-03 (2014)-04-09.

acta imeko, issn: 2221-870x, june 2015, volume 4, number 2, 4-9
single image geometry inspection using inverse endoscopic fringe projection
steffen matthias, christoph ohrt, andreas pösch, markus kästner, eduard reithmeier
institute of measurement and automatic control, leibniz universität hannover, germany
section: research paper
keywords: inverse fringe projection; fiberscopy; endoscopy; sheet bulk metal forming
citation: steffen matthias, christoph ohrt, andreas pösch, markus kästner, eduard reithmeier, single image geometry inspection using inverse endoscopic fringe projection, acta imeko, vol. 4, no. 2, article 2, june 2015, identifier: imeko-acta-04 (2015)-02-02
editor: paolo carbone, university of perugia, italy
received april 17, 2014; in final form february 19, 2015; published june 2015
copyright: © 2015 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was funded by the german research foundation (dfg) within the collaborative research center (crc) / tr 73
corresponding author: steffen matthias, e-mail: steffen.matthias@imr.uni-hannover.de
abstract: fringe projection is an important technology for the measurement of free-form elements in several application fields. it can be applied to measure geometry elements smaller than one millimeter. in combination with deviation analysis algorithms, errors in fabrication lines can be found promptly to minimize rejections. however, some fields cannot be covered by the classical fringe projection approach: due to shadowing, filigree form elements on narrow or internal carrier geometries cannot be captured. to overcome this limitation, a fiberscopic micro fringe projection sensor was developed [1]. the new device is capable of resolutions of less than 15 µm with uncertainties of about 35 µm in a workspace of 3x3x3 mm³. using standard phase measurement techniques, such as gray-code and cos²-patterns, measurement times of over a second are too long for in-situ operation. the following work will introduce an approach of applying a new single-image measuring method to the fiberscopic system, based on inverse fringe projection [2]. the fiberscopic fringe projection system employs a laser light source in combination with a digital micro-mirror device (dmd) to generate fringe patterns. fiber-optical image bundles (foib) as well as gradient-index lenses are used to project these patterns onto the specimen. this advanced optical system creates high demands on the pattern generation algorithms to generate exact inverse patterns for arbitrary cad-modelled geometries. approaches for optical simulations of the complex beam path and the drawbacks of the limited resolution of the foibs are discussed. early results of inverse pattern simulations using a ray-tracing approach with a pinhole system model are presented.
1. introduction
the development of a new metal forming process demands several kinds of new technological approaches. starting with new geometrical tool designs, fitted materials have to be found and analyzed. additionally, process parameters such as forming speeds and forces have to be optimized with respect to a maximum output, while maintaining low tool abrasion and long durability. concerning the new method of cold sheet bulk metal forming (sbmf) [3], which is currently being developed in an exemplary manufacturing process, assembly-ready geometries shall be generated by a combined process of sheet and bulk forming. this results in both drawing and compressive forces on the tool in the same process step. the combination leads to early wearing effects on the sbmf-tool that may cause deficient workpieces.
these need to be identified early in the manufacturing process in order to keep rejection costs at a minimum. proper simulations for quality assessments can only be based on measurement data gathered from the experimental process. to generate datasets of the process, measurements obtained in a complete production cycle are necessary. these requirements can only be met with fast areal measurement devices. desired cycle times are in the range of one second and less, while the standard deviation of the measured geometry data should be 100 µm or less compared to the reference. experience shows that fringe projection fulfills these requirements very well [4]. however, inner geometries such as sbmf-tools are not measurable at the optimum angle, which reduces the lateral resolution of the measurement with the cosine of the camera angle towards the surface normal. common fringe projection sensors usually combine fringe generator and camera unit in a fixed housing with a predefined triangulation angle [5]. to overcome these limitations, a fiberscopic fringe projection system was developed [6], which can be positioned close to specifically stressed areas of an sbmf-tool. these areas, for example filigree side form elements, are exposed to drawing and bulk-forming forces at the same time. in these areas early abrasion is most likely and therefore needs to be taken care of at the very beginning of the development process. to reduce the duration of measurements, the inverse fringe projection technique allows the detection of geometry deviations using a single projection of an object-specific inverse fringe pattern.
an adapted projector pattern is derived from the specimen's geometry and the desired camera pattern. deviations in geometry lead to differences in the camera image which can be easily detected. different approaches exist to calculate the inverse pattern. li et al. [7] present an algorithm to calculate inverse patterns by processing measurements of a reference object. subsequent measurements will be able to detect deviations from the initially measured reference geometry. a different approach calculates the inverse pattern by the use of a cad model of the geometry and a mathematical model of the fringe projection system [8]. the latter solution has the advantage of using a virtual reference defined by the tool designer instead of a manufactured reference. in the following sections the principle of the fiberscopic system will be presented together with the adaptation of a model-based inverse fringe projection approach. following an introduction of the optical setup, difficulties that arise from the image fibers and special lenses are discussed. in order to verify the pattern generation, parts of the system are simulated using ray-tracing software. the methods to reduce the measurement time to a minimum, in order to keep it in the range of the cycle time of the process, will be explained.
2. fiberscopic fringe projection
2.1. principle setup
figure 1 shows the basic design of the newly developed setup. to achieve a high depth of field (dof) with foibs, good fiber coupling at a sufficient intensity is required. by their nature, laser light sources offer high intensities as well as a collimated beam profile, which makes them almost ideal for fiber coupling. only the coherent nature of lasers complicates their use in fringe projection, since reflections from a diffusely reflecting specimen, as is common in structured light measurements, automatically create speckle contrast. sometimes the speckle contrast can be higher than the contrast of the projected fringe pattern and thereby prevents an accurate evaluation [6]. to overcome this, a rotating diffuser is installed in the system together with micro-lens arrays that act as beam shapers. since the diffuser changes the speckle formation faster than the frame rate of the ccd, it leads to an integration of randomly distributed patterns, which results in a smooth light intensity image [6]. the beam shaper changes the gaussian into a flat-top distribution that is fitted to the size of the dmd, which is used for the pattern generation. the pattern is focused onto the 1.7 mm input aperture of the foib and guided 1:1 to the measurement area. the image on the specimen is projected through gradient-index (grin) rod lenses with high numeric apertures, which are directly attached to the foib. the distorted image of the fringe pattern is captured by a similar grin-foib arrangement at a defined triangulation angle and guided to a 5-megapixel ccd camera for computing.

figure 1. schematic of the fiberscopic fringe projection setup.

2.2. data processing
in structured light projection, sequences of binary gray-code patterns followed by cos² phase-shift patterns are common in commercial fringe projection systems [4][5]. the method achieves high accuracies and is robust to background light, and can therefore be used in a wide spectrum of measurement tasks. however, the projection and acquisition of the sequences take up the major part of the whole measurement time. additionally, due to the significant loss of resolution in the 100,000-fiber image bundle, binary patterns start to fade at the cost of contrast in fiberscopic fringe projection.
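as background for the pattern sequences named here, the following sketch generates cos² fringes and recovers the wrapped phase with a standard four-step phase-shift evaluation; a generic illustration, not the authors' implementation, and the pattern size and frequency are arbitrary:

```python
import numpy as np

# four cos^2 fringe patterns; since cos^2(x) = (1 + cos(2x))/2, a shift of
# x by pi/4 shifts the fringe phase 2x by pi/2 (standard four-step scheme)
w, h, periods = 640, 480, 8             # arbitrary pattern size and frequency
x = np.linspace(0, np.pi * periods, w)  # fringe phase 2x spans 2*pi*periods
i1, i2, i3, i4 = (np.tile(np.cos(x + k * np.pi / 4) ** 2, (h, 1))
                  for k in range(4))

# i4 - i2 = sin(2x) and i1 - i3 = cos(2x), so the wrapped fringe phase is
phase = np.arctan2(i4 - i2, i1 - i3)
print(phase.shape)  # a real measurement would evaluate camera images instead
```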
additionally, for a robust phase measurement the gray-code sequence needs to have twice the frequency of the continuous cos²-pattern to allow the compensation of errors in the phase-unwrapping process. it has to be noted that only the information from the cos²-pattern is used in the later process, while the gray-code patterns are only used for phase unwrapping of the continuous pattern. however, due to the limited resolution of the fiber bundles, the maximum pattern frequency is very limited. thus, for the endoscopic system, unwrapping methods using lower frequency cos²-patterns are superior, as the highest frequency of the cos²-patterns can be higher than by using gray-code assisted unwrapping. also, using the unwrapping technique described by tao peng [9], the number of patterns to project may be reduced, leading to a faster measurement process. to obtain 3d measurement data from the 2d phase maps, the system is calibrated using a pinhole camera model and a black box model for the projector [10].

2.3. resulting data

the generated data of the specimen's geometry is saved as a point cloud in cartesian coordinates. sets of data for different geometries such as planes, spheres and gearings were recorded and compared to data of other measurement systems, such as commercial fringe projection sensors and coordinate measurement machines (cmms), as well as to the original cad design data. for this task the commercial and widely accepted software polyworks (innovmetric software inc.) was used. it showed that the presented system achieves accuracies of ± 10 µm for a 3d measurement area of 3x3x3 mm³ at standard deviations of 35 µm. this made it suitable for the indicated task in sbmf. however, even with reduced pattern resolution and the hereby reduced number of images, measuring speeds are not sufficient for measurements within the cycle time of the sbmf process for holistic measurements of each part.

figure 1. schematic of the fiberscopic fringe projection setup.

3. inverse fiberscopic fringe projection

3.1. inverse fringe projection

inverse fringe projection enables the detection of 3d geometry deviations utilizing only a single adapted pattern instead of a sequence of patterns [11]. this pattern is adapted to both the geometry of the specimen and the optical characteristics and positioning of the fringe projection system, as seen in figure 2. for the inverse approach, the intended geometry needs to be known as a cad model. the measurement setup, consisting of a camera, a projector and an ideal specimen, needs to be modelled in a virtual environment. this virtual setup must approximate the optical properties of the real setup with sufficient accuracy; therefore, a calibration procedure of the real setup must be undertaken to identify the model parameters of the system. a common approach is to approximate both the camera and the projector using the pinhole model. the pinhole model describes how a point of a 3d object in the camera reference frame is projected onto the 2d camera image [12]. the pinhole model consists of a camera matrix $K$, seen in equation (1), which contains the so-called intrinsic parameters:

$$K = \begin{pmatrix} \alpha_x & 0 & x_0 \\ 0 & \alpha_y & y_0 \\ 0 & 0 & 1 \end{pmatrix} \quad (1)$$

the parameters $\alpha_x$ and $\alpha_y$ describe both pixel size and focal length, while $x_0$ and $y_0$ model the principal point. additional intrinsic parameters may be used to approximate lens distortion effects [13].
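a minimal sketch of this projection model follows; it combines the intrinsic matrix of equation (1) with the extrinsic pose parameters that are introduced in the next paragraph. the numeric values and names are hypothetical, not the calibration of the actual system:

```python
import numpy as np

def project(K, R, t, X):
    """map 3d world points X (n x 3) to 2d pixel coordinates with the
    pinhole model: camera frame = R*X + t, pixels = K*camera, followed
    by the perspective division by the third (depth) coordinate."""
    Xc = X @ R.T + t
    x = Xc @ K.T
    return x[:, :2] / x[:, 2:]

# hypothetical intrinsics in the form of equation (1)
K = np.array([[2000.0, 0.0, 640.0],
              [0.0, 2000.0, 512.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                    # camera aligned with the world frame
t = np.array([0.0, 0.0, 50.0])   # specimen 50 mm in front of the camera
print(project(K, R, t, np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0]])))
```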
extrinsic camera parameters describe the transformation from the world coordinate system, which can for example be the coordinate system of a cad file, to the camera coordinate system. three parameters describe the rotation, while three additional parameters model the necessary translation. the image $\tilde{x}$ of an object point $X$ in the world coordinate system can be expressed in homogeneous coordinates by equation (2), where $K$ is the camera matrix and $T$ the transformation matrix:

$$\tilde{x} = K \, T \, \tilde{X} \quad (2)$$

a complete model includes the intrinsic parameters of both camera and projector as well as a transformation matrix describing the relation of camera and projector and the transformation to the world coordinate system. the inverse pattern can be simulated by ray-tracing using the virtual system. in the first step of the simulation, the inverse projection pattern is calculated by inversion of the path of light propagation. therefore, the camera is modeled as a projector and "emits" a straight, equidistant, structured light pattern onto the specimen (cad model). the projector works in this context as a camera and is used to "capture" the diffuse reflection into a raster image with the dmd pixels remodeled as sensor pixels. of course, this is only possible utilizing the virtual ray-tracing-based system. when this inverse projection pattern, obtained by simulating the virtual system, is then projected by the real projector onto a real specimen (of the same shape and pose as the virtual specimen), the real camera will capture the beforehand-defined straight, equidistant light intensity pattern. geometry deviations of the real specimen, however, will lead to distortions in the camera image which can be detected robustly using fast 2d image processing techniques. the amount of deviation of the 2d fringe pattern can be related to three dimensional geometry deviations by a linearized defect model called a sensitivity map. thus, quantitative information about the geometry defect is obtained. no reconstruction of point cloud data or processing of three dimensional data is required after acquisition of the measurement data, resulting in a low latency time from measurement to result. the method is applicable to check for allowable geometry tolerances down to the µm scale. poesch [11] showed the proof of principle on a macroscopic fringe projector. however, the optical path of a fiberscopic fringe projector as described above is far more complex to model for inverse simulation. new artifacts, such as pixelation effects due to the fiber bundle, have to be taken care of. challenges arise from the simulation of the foib and the in- and out-coupling, but also from the beam shaping and speckle removal stage. artifacts that arise from the optical elements, which are unusual for classic fringe projection systems, may complicate the scenario.

3.2. simulation of a fiberscopic fringe projector with ray-tracing

the simulation of a fiberscopic fringe projection system requires several uncommon components in the beam path, which not only increases the complexity of the projector design, but also extends the average calculation time of one complete simulation run significantly. with the applied simulation software fred (photon engineering llc) and a state of the art multicore computer, one simulation cycle with a sufficient number of rays requires a processing time of about one hour. since the optimization includes several kinds of optics and their respective positions, numerous runs are inevitable.
the setup was created component by component and connected afterwards in a combined simulation environment. in a first step, the telescope and rotating diffuser were optimized for minimal divergence of the beam and a highly randomized speckle distribution at each diffuser rotation. these elements mainly affect the available space in the fringe projection sensor. for the telescope, the best simulation results were found with two 60 mm lenses which focus a laser with 2 mm diameter on the rotating diffuser plate. the rays detected behind the last lens are recorded and used as the light source for the next simulation step. the flat top generator consists of the fly-eye micro lens arrays and a fourier-lens.

figure 2. inverse fringe projection applied to a specimen.

the pattern that is generated by the fly-eye optics is shown on "analysis surface 1" in figure 3. the displayed section shows that the fly-eyes successfully break the wave front of the gaussian distribution of the laser and generate a flat top light distribution. the several maxima flatten with increasing distance to the fourier-lens and equalize to a homogeneous level at the stage of the dmd. the optimization criteria were a highly consistent light pattern that exactly fits the size of the 0.7" dmd. for the dmd, in the first place a fixed micro mirror array was generated that projects half of the rays to a dump and the other half to the projecting foib. the pattern is shown on "analysis surface 2" in figure 3. again, the pattern is saved and used as the source for the next, most computation time intensive part of the simulation. the fiber coupling part consists of an in-coupling objective, which focuses the fringe image to a 1.7 mm diameter at the entrance of the foib. in a first attempt, the fiber bundles were simulated by modelling 100,000 individual fibers of 1 m length. the calculation of several hundred reflections in each fiber caused several days of simulation time. for comparison, a 10 mm long version was designed. the simulated images of both fiber lengths were very similar, so that for simplification the 10 mm version was chosen for the following simulations (fiber coupling in figure 3). the fringe pattern is projected using an adapted grin-lens that is fitted to the diameter of the foib. at the triangulation angle, a second grin-lens captures the image into the second foib, which guides it to a second coupling optic; this projects the resulting fringe pattern onto the last analysis surface, which represents the ccd of the actual setup. with the finished model, the behavior of a projected pattern in the foib can be predicted. especially the reduction of the 1 megapixel image to 100,000 pixels is very uncommon for fringe projection systems, as are the comparatively low resolutions available. using the image of the last analysis surface in the way described in section 3.1 gives first indications of the possible quality of an inverse fiberscopic fringe projection approach. unfortunately, the number of parameters of this detailed model is too high to allow robust model identification. thus, a simpler approach, similar to the pinhole model, needs to be found in order to calibrate the virtual system to the actual endoscopic fringe projection system.

3.3. practical approach

for the continuous quality control of the abrasion within a sbmf process it is necessary to compare the measured data to the reference.
figure 4 shows the measurement of a sbmf deep drawing tool with the fiberscopic system. in the first step, a reference measurement is obtained by projecting the common gray-code and phase shift sequence, which requires at least 12 images. the point cloud derived from the phase map using the calibration data is calculated in the system-specific calibrated coordinate system. to simulate the inverse pattern, it is crucial to know the pose and position of the tool in this system-specific coordinate system. both can be obtained by calculating a rigid body transform that aligns measured reference points to the corresponding points in the cad model using polyworks. from the resulting transformation matrix and the system model, an inverse pattern can be generated with the help of ray-tracing. after this initial pattern generation step, inspection can be performed by capturing just one image of the inverse pattern. distortions in the camera image can only be seen at sections where abrasion (either welding-on or wear-off effects) has taken place. the measuring procedure only takes several hundred microseconds, which is below the requirement stated above. from the data an easy "out-of-tolerance" study can be derived, providing information about necessary tool exchanges or process stability. desired accuracies in the field of sbmf are in the range of 50 to 100 µm in order to be able to judge wear effects in terms of production quality.

figure 3. complete ray tracing design simulation of the fiberscopic fringe projection system.
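the rigid body transform mentioned above can be estimated from point correspondences; the following is a minimal, generic least-squares sketch based on the svd, not the polyworks routine used by the authors, and all names are hypothetical:

```python
import numpy as np

def rigid_transform(P, Q):
    """least-squares rotation R and translation t such that Q ≈ R P + t,
    for corresponding 3d point sets P and Q (n x 3), via the svd."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                           # reflection-safe rotation
    return R, cq - R @ cp
```

here P would hold reference points measured with the full pattern sequence and Q the corresponding points of the cad model (or vice versa); the resulting matrix then parameterizes the virtual specimen pose for the ray-tracing step.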
to verify the feasibility of this procedure, the system has been calibrated using the common pinhole model for both camera and projector. it was necessary to replace the projector micro lens with a standard grin lens, as the adapted lens can only be described with this model with low accuracy. unfortunately, this decreases the depth of field of the endoscopic fringe projection system. as a first proof of principle and verification of the calibrated model parameters, an inverse pattern was simulated for a step calibration standard. the calibration standard is a diffuse plane with a 1 mm deep and 2 mm wide cutout. the standard was positioned arbitrarily in the fringe projection system's coordinate system. pose and position were estimated by capturing a point cloud with the standard fringe projection technique and fitting it to the cad model using polyworks. the simulated inverse projector pattern can be seen in the top left image in figure 5. simulation of the pattern has been performed in a non-optimized matlab ray-tracing environment and required a processing time of about 2 minutes with a model consisting of about 30 triangles. as this simulation needs to be performed only initially, the processing does not limit the measurement time of the system. more advanced software is capable of calculating ray-tracing with complex models in less than one minute. the cutout can be seen in the center of the inverse pattern, while the slight deformation of the pattern towards the outer areas of the geometry is a result of the calibrated camera and projector distortion. in order to assess the quality of this approach, the inverse pattern was projected back onto the calibration standard using the endoscopic system. the grayscale image on the right in figure 5 shows the obtained camera image. it can be seen that the fringe patterns are parallel throughout the camera image, as defined in the inverse pattern generation step. slight deviations are visible near the step in the standard's geometry, resulting from slight positioning errors. the step standard used makes these misalignments visible due to its non-continuous geometry. with continuous geometries, such as the gearings on the bulk-sheet metal forming tools, the inverse pattern is less sensitive to misalignments. to enable an automated inspection of the image, a phase map was calculated by processing the fringe pattern in the marked rectangular part of the camera image. as expected, the phase map shows only small deviations along the vertical axis. the cutout leads to a narrow shadowed area in the camera image, at which the phase unwrapping fails. this leads to the artifact seen in the phase image.

4. conclusions and future activity

the work showed that the principle of fiberscopic fringe projection works with common pattern generation approaches such as gray-code and phase shift or encoded phase shift. with highly collimated light sources and beam shaping, a sufficient depth of focus can be achieved even with micro grin optics attached to the foibs. the working system has been mapped into a computer model for the simulation software fred. this model is the basis for the evaluation of virtual model based inverse fringe projection. we showed that the generation of inverse patterns works for the fiberscopic system. for practical evaluations, the simulation model has been simplified by using the pinhole model for projector and camera. arbitrary cad geometry can be used for pattern generation after pose and position of the sensor have been calibrated. the next steps of the work on the fiberscopic fringe projection sensor will be the installation in a sbmf machine with a high precision positioning system to examine the results of the inverse fiberscopic fringe projection with more complex geometries. possible enhancements of the technique include improvements of the 2d image processing as well as an improved lighting model for the ray-tracing simulation. apart from lens distortion modelling for the projector grin lens, the largest limitation of the demonstrated approach is currently its sensitivity to positioning errors and vibrations, as the position of the sensor head relative to the desired measurement area is defined during the simulation of the inverse patterns.

figure 4. measurement of a sbmf deep drawing tool (top) with following analyses of the point cloud in respect of abrasion effects (bottom).

figure 5. inverse measurement of a calibration standard.

acknowledgement

the authors would like to thank the german research foundation (dfg) for funding the project b6 "endoscopic geometry inspection" within the collaborative research center (crc) / tr 73.

references

[1] c. ohrt, s. matthias, m. kästner, e. reithmeier, "fast endoscopic geometry measurement with fiber-based fringe projection for inner geometries", journal of the cmsc, vol. 7, no. 2, 2012, pp. 10-14.
[2] a. pösch, t. vynnyk, m. kästner, o. abo-namous, e. reithmeier, "virtual inverse fringe projection", cmsc conference, reno, 2010.
[3] m. merklein, j.m. allwood, b. behrens, a. brosius, h. hagenah, k. kuzman, k. mori, e. tekkaya, a. weckenmann, "bulk forming of sheet metal", annals of the cirp, vol. 61, no. 2, 2012, pp. 725-745.
[4] m.
kästner, "optische geometrieprüfung präzisionsgeschmiedeter hochleistungsbauteile", shaker, aachen, 2010, isbn 3832294430.
[5] g. frankowski, m. chen, t. huth, "real-time 3d shape measurement with digital stripe projection by texas instruments micromirror devices (dmd)", proceedings of the conference on three-dimensional image capture and applications iii, san jose, ca, usa, 2000.
[6] c. ohrt, m. kästner, e. reithmeier, "endoscopic geometry inspection by modular fiber optic sensors with increased depth of focus", proc. spie 8082, optical measurement systems for industrial inspection vii, 2011.
[7] w. li, t. bothe, w. osten, m. kalms, "object adapted pattern projection - part i: generation of inverse patterns", optics and lasers in engineering, 41(1), 2004, pp. 31-50.
[8] a. pösch, t. vynnyk, e. reithmeier, "using inverse fringe projection to speed up the detection of local and global geometry defects on free-form surfaces", spie optical engineering + applications, international society for optics and photonics, 2012, pp. 85000b-85000b.
[9] t. peng, "algorithms and models for 3-d shape measurement using digital fringe projections", university of maryland, phd thesis, 2007.
[10] j. vargas, m.j. terrón-lópez, j.a. quiroga, "flexible calibration procedure for fringe projection profilometry", optical engineering, vol. 46, no. 2, 2007, p. 023601.
[11] a. pösch, t. vynnyk, e. reithmeier, "fast detection of geometry defects on free-form surfaces using inverse fringe projection", journal of the cmsc, vol. 8, no. 1, 2013.
[12] r. hartley, a. zisserman, "multiple view geometry in computer vision", cambridge university press, 2004, isbn 0521540518.
[13] d. c. brown, "close-range camera calibration", photogrammetric engineering, vol. 37, no. 8, 1971, pp. 855-866.

gene expression programming and genetic algorithms in impedance circuit identification

acta imeko
issn: 2221-870x
july 2012, volume 1, number 1, 19-25
www.imeko.org

fernando m. janeiro 1, pedro m. ramos 2
1 instituto de telecomunicações and universidade de évora, largo dos colegiais 2, 7004-516 évora, portugal
2 instituto de telecomunicações and instituto superior técnico/universidade técnica de lisboa, av. rovisco pais 1, 1049-001 lisbon, portugal

keywords: impedance spectroscopy; circuit topology identification; genetic algorithms; gene expression programming
citation: fernando m. janeiro, pedro m. ramos, gene expression programming and genetic algorithms in impedance circuit identification, acta imeko, vol. 1, no. 1, article 7, july 2012, identifier: imeko-acta-01(2012)-01-07
editor: paul regtien, measurement science consultancy, the netherlands
received january 3rd, 2012; in final form may 21st, 2012; published july 2012
copyright: © 2012 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: fct project pest-oe/eei/la0008/2011
corresponding author: fernando m. janeiro, e-mail: fmtj@uevora.pt

abstract
impedance circuit identification through spectroscopy is often used to characterize sensors. when the circuit topology is known, it has been shown that the component values can be obtained by genetic algorithms. also, gene expression programming can be used to search for an adequate circuit topology. in this paper, an improved version of the impedance circuit identification based on gene expression programming and a hybrid genetic algorithm is presented to both identify the circuit and estimate its parameters. simulation results are used to validate the proposed algorithm in different situations. further validation is presented from measurements on a circuit that models a humidity sensor and also from measurements on a viscosity sensor.

1. introduction

impedance spectroscopy [1] is used in different fields ranging from biomedical applications [2] to electrochemical applications such as the study of fuel cells [3]. other applications include monitoring of anti-corrosion coatings [4] and sensor modelling, for example, of a humidity sensor [5].
the first step in performing impedance spectroscopy consists of obtaining the impedance spectral response [6]. this can be accomplished, for example, using an impedance vector-analyser or through a two-channel data acquisition system where a sine-fitting algorithm [7] is used to extract the impedance magnitude and phase [8]. an improved version of the basic sine-fitting algorithm, called 7-parameter sine-fitting [9], which is well suited as a digital signal processing algorithm to estimate the impedance parameters, has been proposed. usually, to fit the acquired spectral data, some knowledge of the underlying processes is needed to suitably choose a circuit topology that models the process under study. with a known circuit topology, the complex non-linear least squares (cnls) method can be used to obtain the circuit parameters of the chosen model [10]. the cnls implemented in [11] is based on the levenberg-marquardt algorithm and requires the input of the starting search values; it will then efficiently converge to the local minimum near this set of initial parameters. to avoid the convergence to local minima, a new approach [12] based on a hybrid genetic algorithm (ga) [13] was proposed. this approach has been used to characterize a viscosity measurement system [14]. gene expression programming (gep) [15] was proposed as a method to search for suitable circuit topologies without any prior knowledge of the equivalent circuit [16]. in that work, a basic genetic algorithm was used to estimate the values of the circuit components. a different approach, based on gep and cultural algorithms, was used for the modelling of electrochemical phenomena in [17]. in this work, which is an extended version of [18], gep is interleaved with an improved version of the genetic algorithm to identify the circuit topology and circuit component values. a more effective fitness function, which works well for impedance responses that include resonances, is also used in this paper. in order to make the numerical simulations more realistic, measurement uncertainty is included for the magnitude and phase impedance data. further validation of the algorithm is performed by its application to impedance measurements performed on a circuit that models the humidity sensor presented in [19] and on the viscosity sensor used in [14].

2. evolutionary algorithms

equivalent circuit identification from impedance spectroscopy involves identifying the circuit topology and then optimizing the values of each component in that circuit.
this is accomplished in two interleaved steps: (i) a gep implementation is used to identify potential circuit network topologies that can model the measured impedance; (ii) a hybrid genetic algorithm is applied to each topology to obtain the values of the components that minimize the cost function. in the next subsections these two algorithms are described.

2.1. equivalent impedance parameters estimation

for a given circuit topology, the optimization of the component values is performed using a hybrid genetic algorithm [12]. initially, a population of m chromosomes is created, where each chromosome is composed of the values of each component in the current circuit topology. the fitness of each chromosome is usually evaluated through the cost function

$$\varepsilon = \frac{1}{p} \sum_{i=1}^{p} \frac{\left| Z_i - Z_i^{\mathrm{est}} \right|^2}{\left| Z_i \right|^2} \quad (1)$$

where $Z_i$ is the measured impedance at angular frequency $\omega_i = 2 \pi f_i$, $Z_i^{\mathrm{est}}$ is the estimated impedance obtained with the component values in the chromosome, and $p$ is the number of measurement frequencies. however, in circuits that exhibit resonance-like behaviour, the frequency points in the resonance region could have little weight in the final value of the cost function. this led to good candidate circuits being discarded by the genetic algorithm. to tackle this issue, a different cost function that does not suffer from these problems was used:

$$\varepsilon = \frac{1}{p} \sum_{i=1}^{p} \frac{\left| Z_i - Z_i^{\mathrm{est}} \right|^2}{\left| Z_i^{\mathrm{est}} \right|^2} \quad (2)$$

with this cost function, the absolute error of the impedance estimation is normalized by the estimated impedance, therefore giving more weight to frequency points in the resonance region. if information regarding the quality of each measurement value is available (e.g., experimental standard deviation), weights can be used in (2) to make sure that the fitting algorithm is more influenced by the measurement values with reduced uncertainty. the fitness of each chromosome is used to evolve the population based on survival of the fittest. just like in a biological population, there is reproduction and mutation. in reproduction, pairs of chromosomes are randomly chosen using a biased roulette wheel selection scheme where the fittest elements have a higher probability of being chosen to reproduce. each pair of chromosomes may create two offspring through the crossover operation or move directly to the next generation. in mutation, randomly chosen positions on some chromosomes are replaced by randomly generated values. mutation is vital to maintain population diversity and escape local minima of the cost function. traditional minimum search algorithms, such as the levenberg-marquardt algorithm and the gauss-newton method, are very sensitive to the starting search values and are not able to escape local minima of the cost function. however, genetic algorithms are very efficient in finding the region of the absolute minimum of multi-dimensional cost functions even when the search space is vast and local minima are present nearby [20]. although genetic algorithms are suitable to find the global minimum of the cost function, they take a long time to converge to the actual minimum. therefore, a gauss-newton method is applied, using the final results of the genetic algorithm as starting values.
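a minimal sketch of the normalized cost function of equation (2) follows; names and the toy usage values are hypothetical, not the authors' implementation:

```python
import numpy as np

def fitness(z_meas, z_est):
    """cost function (2): squared absolute impedance error at each
    frequency, normalized by the estimated impedance magnitude,
    averaged over the p measurement frequencies."""
    z_meas, z_est = np.asarray(z_meas), np.asarray(z_est)
    return np.mean(np.abs(z_meas - z_est) ** 2 / np.abs(z_est) ** 2)

# usage: a measured series r-l branch versus a slightly wrong estimate
w = 2 * np.pi * np.linspace(100, 10e3, 100)
z_meas = 10.0 + 1j * w * 1e-3        # r = 10 ohm, l = 1 mh
z_est = 10.2 + 1j * w * 0.98e-3      # candidate chromosome values
print(fitness(z_meas, z_est))
```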
2.2. impedance network topology identification

the identification of the equivalent impedance network topology is performed using gep. each candidate circuit is expressed as a gene which contains components (r, l and c) and operators (series and parallel). the gene is composed of a head of size h and a tail of size t, with t = h + 1. the head may contain components and operators, while the tail is comprised only of components. this is fundamental to the operation of gep since it guarantees a valid circuit topology under all circumstances. each gene is converted into a binary tree by filling it in a breadth-first fashion, as proposed in [15]. this tree can then be recursively traversed to obtain the equivalent circuit. breadth-first traversal of the tree, also known as width-first, consists of filling the tree from left to right and top to bottom. when there are no more node positions to fill in the current level of the tree, the tree filling starts from left to right in the next lower level. in this implementation of gep, the binary tree is completely bypassed and is only used to help visualize the circuit topology. as an example, figure 1 shows an electric circuit that will be used as the test impedance in the numerical results. the gep gene that codes this circuit is presented along with the corresponding binary tree. the numbers in the tree leafs correspond to the type of component, r, l or c, for 1, 2 or 3, respectively. notice that, in the 21-element gene, only the first 9 elements (shown in bold in the figure) correspond to the described circuit. the remaining elements are also part of the gene but do not define the circuit. also noticeable is that the last 11 elements (the tail) are all components.

figure 1. example of circuit topology and corresponding binary tree along with the coding gene "+ 1 + 2 // // 1 2 3 + 1 1 2 1 3 1 2 1 2 2 3" for gep with h + t = 21. the coding region is shown in bold and ends at position 9.

initially, a population of n circuit topologies is randomly created according to the previously described rules. the hybrid genetic algorithm is then executed for each candidate topology gene to optimize its component values and obtain its fitness. afterwards, this population is used to create a new generation of circuit topologies through the gep operations: replication, mutation, transposition and recombination [15]. the new generation is again evaluated by the hybrid genetic algorithm and the process repeats itself until a valid circuit is found (i.e., until its fitness is better than a predefined threshold) or the maximum number of generations is reached, in which case the algorithm has failed to find a suitable circuit.
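before turning to the individual gep operators, the breadth-first decoding described above can be sketched as follows. the example decodes the coding region of the figure 1 gene; it is a sketch with hypothetical names, assuming '+' codes a series and '//' a parallel connection, and in a full implementation each leaf would carry its own component value from the ga chromosome:

```python
from collections import deque

OPS = {"+", "//"}  # series and parallel operators

def gene_to_tree(gene):
    """fill a binary tree breadth-first from the gene symbols: operators
    receive two children, components are leaves. returns nested lists
    [symbol, left, right]; only the coding region of the gene is consumed."""
    symbols = iter(gene)
    root = [next(symbols), None, None]
    queue = deque([root])
    while queue:
        node = queue.popleft()
        if node[0] in OPS:
            for k in (1, 2):
                node[k] = [next(symbols), None, None]
                queue.append(node[k])
    return root

def equivalent_impedance(node, z_of):
    """recursively evaluate the tree; z_of maps a leaf symbol to its
    complex impedance (per-leaf values would come from the chromosome)."""
    sym, left, right = node
    if sym not in OPS:
        return z_of(sym)
    za = equivalent_impedance(left, z_of)
    zb = equivalent_impedance(right, z_of)
    return za + zb if sym == "+" else za * zb / (za + zb)

# decoding the figure 1 gene consumes exactly its first 9 symbols
tree = gene_to_tree("+ 1 + 2 // // 1 2 3 + 1 1 2 1 3 1 2 1 2 2 3".split())
```

decoding this gene yields a resistor in series with an inductor and with the parallel combination of a resistor, an inductor and a capacitor, which matches the test circuit used in section 3.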
in the creation of a new generation, gep operators are applied sequentially. the replication operator creates an intermediate population by choosing, based on a biased roulette wheel selection scheme, which genes survive from the original population. the fittest elements have a higher probability of being chosen, and so multiple copies of them may appear in this intermediate population. genes with low fitness may not be chosen at all. mutation is applied to the intermediate population by changing the value of a random position inside some randomly chosen genes. if the position is inside the head it can be mutated into a circuit component or an operation, while if it is in the tail it is restricted to mutate only into a circuit component. this is mandatory to maintain the structural integrity of the genes. insert sequence transposition (is transposition) is then applied to some randomly chosen genes of the population that resulted from the mutation. in each chosen gene, a sequence of random length is transposed to any position in the head except the root. an example is shown in figure 2a where, for the sake of simplicity, a smaller gene with h + t = 11 is used. the insertion sequence has length 2 and starts at position 5. the sequence was inserted at the randomly chosen position 2, and the head elements below the insertion point are moved downstream inside the head. however, to maintain gene integrity, the tail remains untouched and the last two positions of the original head are discarded. this operator creates a new intermediate population to which root insertion sequence transposition is applied. root insertion sequence transposition (ris transposition) is very similar to the previous operator, except that the insertion point is always the root. the insertion sequence is also of random length but must start with an operation. figure 2b illustrates this behaviour, where the insertion sequence has length 3 and starts at position 5. it is inserted at the root and the remaining head elements are moved downstream, with the last 3 elements being discarded in order to maintain the original tail. the next operation is the 1-point recombination. two genes and a crossover point are randomly chosen. then, the two genes exchange part of their information, as shown in figure 2c. in the example, the crossover point is 3 and therefore positions 1 to 3 are exchanged between the two genes. the last operation is the 2-point recombination, where 2 crossover points are chosen. in the example presented in figure 2d, positions 3 to 7 are exchanged between the two genes. in recombination, the integrity of the gene structure is always assured.

figure 2. examples of the main gep operations on genes with h + t = 11.

at the end of this sequence of operations, a new generation of candidate circuits has been created. also, due to the use of elitism, a copy of the best circuit topology is always included in the new population. these operations can profoundly change the population genes and, although the genes are of fixed length, their circuit coding region varies, resulting in trees of different sizes and complexity. this means that gep can add branches to the circuit or eliminate them, while searching for the circuit that best fits the measured data. before applying the ga, a basic circuit simplification algorithm searches the tree for operations that have identical-type components on their two leafs. these correspond, for example, to two identical-type components in series or in parallel and can be replaced by a single component.
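two of these operators are easy to sketch; the following is a hypothetical illustration (not the authors' code) of point mutation and 1-point recombination that preserves the head/tail structure described above:

```python
import random

def mutate(gene, head, components, operators, rate=0.05):
    """point mutation: positions in the head may become operators or
    components; tail positions may only become components."""
    out = list(gene)
    for i in range(len(out)):
        if random.random() < rate:
            pool = components + operators if i < head else components
            out[i] = random.choice(pool)
    return out

def one_point_recombination(g1, g2):
    """exchange the prefixes of two equal-length genes at a random
    crossover point; head/tail alignment keeps both genes valid."""
    cut = random.randrange(1, len(g1))
    return g2[:cut] + g1[cut:], g1[:cut] + g2[cut:]

# usage on a small h + t = 11 gene (head size 5, tail size 6)
gene = "+ // 1 2 3 1 2 3 1 2 3".split()
child = mutate(gene, head=5, components=["1", "2", "3"], operators=["+", "//"])
```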
3. numerical results

in this section, the circuit in figure 1 is used to test the proposed algorithm and the threshold to stop the algorithm is set at ε = 0.0002%. the spectral response of this impedance, with rs = 10 Ω, r = 1000 Ω, ls = l = 1 mh and c = 1 µf, was calculated for p = 100 linearly spaced frequency points in the 100 hz to 10 khz range, and random errors were included in the data to simulate real measurement conditions, with an uncertainty of 0.08% in the impedance magnitude and 0.05º in its phase. the encoding gene chosen by the algorithm is presented in figure 3, along with its corresponding binary tree. the resulting equivalent circuit is presented in figure 4 with the estimated component values. the corresponding frequency responses are shown in figure 5. although the equivalent circuit does not have the same topology as the original impedance circuit, the effect of the capacitor in the first parallel circuit (the 4.934 pf capacitor) and of the inductor in series with the resistance in the second part of the circuit (the 900.3 µh inductor) are negligible in the frequency range under study, as can be seen in figure 5, where the spectral responses of the two circuits are shown.

figure 3. encoding gene "+ // // + 3 + // 1 2 2 1 3 2 2 2 1 3 2 2 2 2" (coding region in bold in the original figure) and corresponding binary tree obtained by gep and the hybrid genetic algorithm for the impedance in figure 1 with rs = 10 Ω, r = 1000 Ω, ls = l = 1 mh and c = 1 µf.

figure 4. circuit topology and component values corresponding to the binary tree shown in figure 3.

figure 5. impedance magnitude and phase of the circuit in figure 1 (lines), estimated impedance magnitude and phase corresponding to the circuit in figure 4 (dashed lines overlapped with lines) and frequency points used for estimation (squares for magnitude and circles for phase).

different runs of the algorithm yield different equivalent circuits, but always in good agreement with the spectral response of the original circuit. it should be noted that, in many applications, an equivalent circuit with physical meaning is needed, which is a requirement that is currently not satisfied by the gep and ga algorithm. in figure 6, the errors of the fit for this situation are presented. in figure 6a, the difference between the magnitudes of the simulated and estimated impedance is shown, while in figure 6b the difference between the simulated and estimated phase is depicted. note that the cost function must combine the magnitude and phase information to estimate a single value that quantifies the fit of each circuit and the corresponding estimated circuit parameters. so, in figure 6c the magnitude of the difference between the two impedances (corresponding to the distance between the two complex numbers that represent the estimated and simulated impedances) is shown. in order to ensure that higher impedance magnitudes that occur at certain frequencies do not overly influence the cost function solely due to their higher magnitude, the magnitude of the difference between the estimated and simulated impedance is normalized by the estimated impedance magnitude. this corresponds to each frequency point in cost function (2) and is shown, for this case, in figure 6d. the average of the squared values represented in figure 6d corresponds to the fitting error. in this situation ε = 1.2 × 10⁻⁶ = 0.00012%.

figure 6. four representations of the fitting errors.

since it is not realistic to measure impedance values at 100 different frequencies, the usefulness of the proposed algorithm depends on its performance with fewer measurements. in figures 7, 8 and 9 the results obtained with only 10 frequency points are presented. the circuit topology is quite similar to the original circuit, with just the addition of a capacitor in parallel with the correct circuit. as can be seen in figure 8, the magnitude and phase responses of the estimated circuit closely match the original circuit frequency responses. therefore, the extra capacitor relative to the original circuit (the 14.26 pf capacitor) has a negligible effect on the overall impedance in the frequency range under study. note that, even without any measured point in the resonance region, the algorithm still finds a correct equivalent circuit. figure 9 presents the normalized impedance errors for this case, corresponding to a fitting error of ε = 0.00013%.

figure 7. circuit topology and component values obtained with the proposed algorithm for 10 different frequencies in the 100 hz to 10 khz range.

figure 8. impedance magnitude and phase of the circuit in figure 1 (lines), estimated impedance magnitude and phase corresponding to the circuit in figure 7 (dashed lines overlapped with lines) and frequency points used for estimation (squares for magnitude and circles for phase).

figure 9. normalized impedance errors for 10 linearly spaced frequencies in the 100 hz to 10 khz range.
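for context, the simulated test data can be reproduced with a few lines; the series/parallel structure below follows the decoding of the figure 1 gene (rs in series with ls and with the parallel combination of r, l and c) and is a sketch rather than the authors' simulation code:

```python
import numpy as np

def z_test_circuit(f, rs=10.0, ls=1e-3, r=1000.0, l=1e-3, c=1e-6):
    """complex impedance of the figure 1 test circuit: rs in series
    with ls and with the parallel combination r // l // c."""
    w = 2 * np.pi * np.asarray(f, dtype=float)
    y_par = 1 / r + 1 / (1j * w * l) + 1j * w * c   # parallel admittance
    return rs + 1j * w * ls + 1 / y_par

# p = 100 linearly spaced frequency points between 100 hz and 10 khz;
# the parallel l-c resonance falls near 5 khz, inside this span
f = np.linspace(100.0, 10e3, 100)
z = z_test_circuit(f)
```

measurement uncertainty would then be emulated by perturbing the magnitude of z by 0.08% and its phase by 0.05º, as described above.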
in the third analysis of the performance of the proposed algorithm, the circuit of figure 1 is considered, but the simulated impedance values are narrowly centred near the resonance. in this case, 11 frequency points are used in the 4 khz to 6 khz range. in figure 10, the magnitude frequency response of the original circuit and that of the estimated circuit are compared with the values considered for the circuit estimation. although the frequency points are very close to the resonance, the magnitude estimation is quite good even over the complete frequency range represented. figure 11 shows the results obtained for the phase. in this case, the overall estimation is quite good near the resonance, but nearer the edges of the complete frequency range there are some small noticeable differences. in this case, the error of the fit is ε = 0.00014% (figure 12).

figure 10. impedance magnitude of the circuit in figure 1 (line), estimated impedance magnitude corresponding to the original circuit (dashed line overlapped with line) and frequency points used for the estimation (squares) in the 4 khz to 6 khz range.

figure 11. impedance phase of the circuit in figure 1 (line), estimated impedance phase corresponding to the original circuit (dashed line) and frequency points used for the estimation (circles) in the 4 khz to 6 khz range.

figure 12. normalized impedance errors for 11 linearly spaced frequencies in the 4 khz to 6 khz range.

4. experimental results

the next step in testing the developed algorithm is to evaluate its performance when applied to real measured data. measurements were performed on a circuit that models a humidity sensor [19] and on a previously analysed viscosity sensor [14].

4.1. humidity sensor circuit model

the circuit that models the humidity sensor is represented in figure 13. the component values of the equivalent circuit of the sensor change with relative humidity, and the case corresponding to a relative humidity of 54% is considered. the component values are presented in table 1. the circuit in figure 13, with the component values shown in table 1, was assembled and its spectral response was measured for p = 11 logarithmically spaced frequency points in the 1 hz to 100 khz range. the proposed algorithm was then applied to these measurements to find the equivalent circuit and
respective component values. figure 14 shows an example of the encoding gene and the respective binary tree that was found by the algorithm. the corresponding equivalent circuit with the component values is presented in figure 15. although different runs of the algorithm on the measured data yield different equivalent circuits that do not resemble the original circuit topology shown in figure 13, the spectral response of the resulting circuit closely matches the measurements performed on the original circuit, as shown in figure 16. thus, it is possible to conclude that the circuit in figure 15 is equivalent, at least in the measured frequency range, to the original circuit, although it might not be equivalent to the sensor equivalent circuit at different relative humidity values. the error of the fit is ε = 0.032% (the threshold was set, in this case, to 0.035% due to noisy measurements).

figure 13. humidity sensor equivalent circuit. the component values change with relative humidity [19].

figure 14. example of encoding gene "// + // 1 3 + 3 + // 3 2 2 1 3 3 1 2 2 1 1 3" (coding region in bold in the original figure) and corresponding binary tree obtained by gep and the hybrid genetic algorithm for the measured spectral response of the sensor equivalent circuit with rh = 54%.

table 1. component values for the humidity sensor equivalent circuit at rh = 54%.
component  value
rs   1.18 Ω
rw   624.5 Ω
rb   30 mΩ
rp   120.97 kΩ
cb   0.68 µf
cp   2.32 µf
cpw  5.87 nf

figure 15. circuit topology and component values corresponding to the binary tree shown in figure 14.

figure 16. estimated impedance magnitude and phase (lines) of the circuit in figure 15 versus the measured impedance magnitude (squares) and phase (circles) of the circuit in figure 13.

4.2. viscosity sensor

the viscosity sensor consists of a vibrating wire cell whose resonance characteristics change with the viscosity of the liquid in which the wire is immersed and also with its temperature. from the measured frequency response of the sensor, it is possible to obtain its resonance characteristics and in turn obtain the viscosity of the liquid. further details on the working principle of this sensor can be found in [14]. the viscosity sensor wire was immersed in diisodecyl phthalate (didp) liquid at 15 ºc. its impedance was then measured for p = 21 frequency values in the range 500 hz to 1.5 khz. the application of the gep and hybrid genetic algorithms resulted in the equivalent circuit and respective component values presented in figure 17. the impedance frequency response of the equivalent circuit in figure 17 is plotted in figure 18 along with the 21 measured points. both magnitude and phase are in close agreement with the measurements, showing that, also in this case, the algorithm was successful in finding an equivalent circuit. the error of the fit is ε = 0.00008%, while the threshold was set at 0.0001%.

figure 17. viscosity sensor equivalent circuit.

figure 18. estimated impedance magnitude and phase (lines) of the circuit in figure 17 versus the measured impedance magnitude (squares) and phase (circles) of the viscosity sensor.

5. conclusions

in this paper, an improved version of impedance spectroscopy using evolutionary algorithms was presented, where gep is used to evolve the target circuit topology and a hybrid genetic algorithm estimates the circuit component values. a detailed description of the gep algorithm and operators is presented, as well as a summary of the hybrid genetic algorithm. to improve the convergence properties of the algorithm, a different fitness function, which works well with impedance sweeps that include resonances, has been proposed. numerical results show that, even with measurement uncertainties and few measured impedance frequency points, the presented algorithm is capable of finding an equivalent circuit. the residual errors in the magnitude and phase of the estimated impedances were used to analyse the performance of the algorithm. the algorithm was also applied to measurements of a circuit that models a humidity sensor at a specific relative humidity and to a viscosity sensor. in both cases it was found that different runs of the algorithm yield different equivalent circuits that nonetheless closely match the measured spectral responses. a method for further automatic simplification of the obtained circuits is currently under development.

acknowledgement

the authors would like to thank prof. j. fareleira and prof. f. caetano for the use of their viscosity sensor.

references

[1] j.r. macdonald, impedance spectroscopy, annals of biomedical engineering, 20, (1992), pp. 289-305.
[2] e. katz, i. willner, probing biomolecular interactions at conductive and semiconductive surfaces by impedance spectroscopy: routes to impedimetric immunosensors, dna-sensors, and enzyme biosensors, electroanalysis, 15, (2003), pp. 913-947.
[3] a.k. manohar, o. bretschger, k.h. nealson, f. mansfeld, the use of electrochemical impedance spectroscopy (eis) in the evaluation of the electrochemical properties of a microbial fuel cell, bioelectrochemistry, 72, (2008), pp. 149-154.
[4] j. hoja, g. lentka, interface circuit for impedance sensors using two specialized single-chip microsystems, sensors and actuators a: physical, 163, (2010), pp. 191-197.
[5] j. hoja, g. lentka, method using bilinear transformation for measurement of impedance parameters of a multielement two-terminal network, ieee trans. instrum. meas., 57, (2008), pp. 1670-1677.
[6] e. barsoukov, j. macdonald, impedance spectroscopy theory, experiment, and applications, wiley interscience, hoboken, 2005, isbn 978-0-471-64749-2.
[7] ieee std. 1057-2007, ieee standard for digitizing waveform records, the institute of electrical and electronic engineers, (2007), new york, december, e-isbn: 0-7381-4543-2.
[8] p.m. ramos, m.f. silva, a.c. serra, low frequency impedance measurement using sine-fitting, measurement, 35, (2004), pp. 89-96.
[9] p.m. ramos, a.c. serra, a new sine-fitting algorithm for accurate amplitude and phase measurements in two channel acquisition systems, measurement, 41, (2008), pp. 135-143.
[10] j.r. macdonald, j.a. garber, analysis of impedance and admittance data for solids and liquids, j. electrochem. soc., 124, (1977), pp. 1022-1030.
[11] r.
macdonald, levm/levmw manual, version 8.11, october 2011, available at www.jrossmacdonald.com/levminfo.html.
[12] f.m. janeiro, p.m. ramos, application of genetic algorithms for estimation of impedance parameters of two-terminal networks, i2mtc 09, singapore, 2009, pp. 602-606.
[13] f.m. janeiro, p.m. ramos, impedance measurements using genetic algorithms and multiharmonic signals, ieee trans. instrum. meas., 58, (2009), pp. 383-388.
[14] f.m. janeiro, j. fareleira, j. diogo, d. máximo, p.m. ramos, f. caetano, impedance spectroscopy of a vibrating wire for viscosity measurements, i2mtc 10, austin, usa, 2010, pp. 1067-1072.
[15] c. ferreira, gene expression programming in problem solving, 6th online world conference on soft computing in industrial applications, 2001.
[16] h. cao, j. yu, l. kang, an evolutionary approach for modeling the equivalent circuit for electrochemical impedance spectroscopy, the 2003 congress on evolutionary computation, canberra, australia, 2003, pp. 1819-1825.
[17] p. arpaia, a cultural evolutionary programming approach to automatic analytical modeling of electrochemical phenomena through impedance spectroscopy, meas. sci. technol., 20, (2009), pp. 1-10.
[18] f.m. janeiro, p.m. ramos, impedance circuit identification through evolutionary algorithms, xviii tc 4 imeko symposium, natal, brazil, 2011.
[19] j.a. gonzález, v. lópez, a. bautista, e. otero, x.r. nóvoa, characterization of porous aluminum oxide films from a.c. impedance measurements, j. appl. electrochem., 29, (1999), pp. 229-238.
[20] r.l. haupt, s.e. haupt, practical genetic algorithms, wiley-interscience, hoboken, 2004, isbn 978-0-471-45565-3.

introductory notes for the acta imeko fourth issue 2022 general track

acta imeko
issn: 2221-870x
december 2022, volume 11, number 4, 1-2

francesco lamonaca 1
1 department of computer science, modeling, electronics and systems engineering (dimes), university of calabria, ponte p. bucci, 87036, arcavacata di rende, italy

section: editorial
citation: francesco lamonaca, introductory notes for the acta imeko fourth issue 2022 general track, acta imeko, vol. 11, no. 4, article 1, december 2022, identifier: imeko-acta-11 (2022)-04-01
received december 22, 2022; in final form december 22, 2022; published december 2022
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: francesco lamonaca, e-mail: editorinchief.actaimeko@hunmeko.org

dear readers,
the end of the year is a time of accounting. thanks to all of you, as a community of authors and reviewers, acta imeko is increasing its reputation. indeed, according to the sjr ranking [1], in the last year the journal passed from the fourth to the third quartile both in the field of electrical and electronic engineering and in that of mechanical engineering.
the goal of speeding up the editorial process without reducing the quality of the publications has been achieved too. indeed, the publication time has been reduced by a factor of ten, and the rejection rate is at about 50%. this was done while keeping the blind peer-review process based on at least two reviewers. since february 2022 acta imeko has been indexed by the directory of open access journals (doaj) [2], and the procedure to be indexed in web of science has been started. the editorial board (eb) was enlarged to include most of the imeko technical committee chairs. please let me thank all the new eb members for accepting this important role in supporting the journal with their voluntary service. further efforts were made to upgrade the management system of the journal and the website. it was a huge amount of work, especially in transferring the data and the history of the old system to the new one. finally, we have developed a procedure to recognize the voluntary efforts of the reviewers. in agreement with the imeko presidential board, we have established an award for the best reviewer of the year and a recognition for the top 10 reviewers. as usual, this issue also includes a general track aimed at collecting contributions that do not relate to a specific event. as editor in chief, it is my pleasure to give you an overview of these papers, with the aim of encouraging potential authors to consider sharing their research through acta imeko. natural lighting in building environments is an important aspect for the occupants' mental and physical health. furthermore, the proper exploitation of this resource can bring energy benefits related to the reduced use of artificial lighting. in [3], f. nicoletti et al. provide some estimates of the energy that can be saved by using a lighting system that recognises indoor illuminance. in particular, it is able to manage the switching on of lights according to the daylight detected in the room. the savings from this solution depend on the size and orientation of the window. the analysis is conducted on an office by means of simulations using the inlux-dbr code. the location has an influence on the luminance characteristics of the sky. the analysis is conducted with reference to one city in the south and one in the north of italy (cosenza and milan). the energy saving is almost independent of latitude and is therefore representative of the italian territory. it is highly variable according to exposure, being the highest for southern exposure (97 % with a window size equal to 36 % of the floor area) and between 26 % and 48 % (as a function of window size) for northern exposure. following the revision of the si, kibble balances around the world may now be used to realise the unit of mass. in a kibble balance, the weight of a mass is balanced by the electromagnetic force on a current-carrying coil of wire suspended in a magnetic field. at msl, researchers are developing a kibble balance where the coil is connected to the piston of a pressure balance in a twin pressure balance arrangement. the piston-cylinder unit of the pressure balance provides a repeatable axis for the motion of the coil in the magnetic field. the twin pressure balance arrangement serves as a high-sensitivity force comparator. in the paper entitled "position control for the msl kibble balance coil using a syringe pump" [4], r. j. hawke and m. t. clarkson highlight that in a kibble balance the position of the coil must be finely controlled.
in weighing mode, the coil remains stationary in a location of constant magnetic field. in calibration mode, the coil is moved in the magnetic field to induce a voltage. in particular, they investigate how the piston (and therefore coil) position may be controlled through careful manipulation of the gas column under the piston. they demonstrate the use of a syringe pump as a programmable volume regulator which can provide fall rate compensation as well as controlled motion of the piston. in [4] it is shown that the damped harmonic oscillator response of the pressure balance must be considered when moving the coil. from this initial investigation, the authors discuss the implications for use in the msl kibble balance. reliability analysis can be entrusted to companies by customers willing to verify whether their products comply with the major international standards, or simply to verify the design prior to market deployment. nevertheless, these analyses may be required at the very preliminary stages of design, or when the design is already in progress due to low organizational capabilities or simple delays in the project implementation process. the results may sometimes be far from the market or customer target, with a subsequent need to redesign the whole asset. of course, not all cases fall in the worst scenario, and maybe with some additional considerations on mission definition it is possible to comply with the proposed reliability targets. marco mugnaini and ada fort, in the paper entitled "how to stretch system reliability exploiting mission constraints: a practical roadmap for industries" [5], provide an overview of the approach which could be adopted to achieve the reliability target even when the project is still ongoing, providing a practical case study. the recent growth of the internet of things and industry 4.0 fields has led many researchers to focus on the innovative technologies that could support these emerging topics in different areas of application. in particular, the current trend is to close the gap between the physical and digital worlds, thus originating the so-called cyber-physical system (cps). a relevant feature of the cps is the digital twin, i.e., a digital replica of a process/product with which the user can interact to operate in the real world. in the paper entitled "digital twins based on augmented reality of measurement instruments for the implementation of a cyber-physical system" [6], a. liccardo et al. propose an innovative approach exploiting an augmented reality solution as a digital twin for electronic instrumentation, to obtain a tight connection between measurements as the physical world and the internet of things as digital applications. actually, by means of the adoption of a 3d scanning strategy, augmented reality software and the development of a suitable connection between the instrument and the digital world, a cyber-physical system has been realized as an iot platform that collects and controls the real instrumentation and makes it available in augmented reality. an application example involving a digital storage oscilloscope is finally presented to highlight the efficacy of the proposed approach. i hope you will enjoy your reading.

francesco lamonaca
editor in chief

references

[1] acta imeko in the scimago journal & country rank.
online [accessed 22 december 2022] https://www.scimagojr.com/journalsearch.php?q=21100407601&tip=sid&clean=0 [2] acta imeko in the directory of open access journals. online [accessed 22 december 2022] https://doaj.org/toc/2221-870x [3] f. nicoletti, v. ferraro, d. kaliakatsos, m. a. cucumo, a. rollo, n. arcuri, "evaluation of the electricity savings resulting from a control system for artificial lights based on the available daylight", acta imeko, vol. 11, no. 4, 2022, pp. 1-11. doi: 10.21014/actaimeko.v11i4.1309 [4] r. j. hawke, m. t. clarkson, "position control for the msl kibble balance coil using a syringe pump", acta imeko, vol. 11, no. 4, 2022, pp. 1-7. doi: 10.21014/actaimeko.v11i4.1343 [5] m. mugnaini, a. fort, "how to stretch system reliability exploiting mission constraints: a practical roadmap for industries", acta imeko, vol. 11, no. 4, 2022, pp. 1-6. doi: 10.21014/actaimeko.v11i4.1358 [6] a. liccardo, f. bonavolontà, r. s. lo moriello, f. lamonaca, l. de vito, a. gloria, e. caputo, g. de alteriis, "digital twins based on augmented reality of measurement instruments for the implementation of a cyber-physical system", acta imeko, vol. 11, no. 4, 2022, pp. 1-8. doi: 10.21014/actaimeko.v11i4.1397

acta imeko issn: 2221-870x april 2016, volume 5, number 1, 64-68 systematic quality control for long term ocean observations and applications daniel m. toma 1, albert garcia-benadí 2, bernat-joan mànuel-gonzález 1, joaquín del-río-fernández 1 1 sarti research group, electronics dept., universitat politècnica de catalunya (upc), rambla exposició 24, 08800, vilanova i la geltrú, barcelona, spain. 2 laboratori de metrologia i calibratge, centre tecnològic de vilanova i la geltrú, universitat politècnica de catalunya (upc), rambla exposició, 24, 08800 vilanova i la geltrú, barcelona, spain section: technical note keywords: quality control; ocean; long term; observatory; metadata citation: daniel m. toma, albert garcia-benadí, bernat-joan mànuel-gonzález, joaquín del-río-fernández, systematic quality control for long term ocean observations and applications, acta imeko, vol. 5, no. 1, article 12, april 2016, identifier: imeko-acta-05 (2016)-01-12 editor: paolo carbone, university of perugia, italy received september 9, 2014; in final form september 9, 2014; published april 2016 copyright: © 2016 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited corresponding author: daniel m. toma / albert garcia-benadí, e-mail: daniel.mihai.toma@upc.edu / albert.garcia-benadi@upc.edu abstract with the technological advances of recent years, many new observation platforms have been created and connected to networks for the diffusion of numerous and diverse observations, also providing a great opportunity to connect all kinds of people and facilitating the creation of large-scale and long-term studies. this paper focuses on marine observations and the platforms employed for this scope. real-time data and big data have to fulfil some minimal data quality requirements. usually, the task of ensuring these quality requirements falls to those responsible for the platforms. the aim of this paper is to explain the design of these quality control systems and their implementation in an ocean observation platform. 1. introduction marine observations have been growing rapidly in recent years. some of the parameters that are currently monitored are the sea temperature, the level of acidification, and noise pollution.
the value of each of these parameters gives very valuable information in different areas: for example, the temperature of seawater is currently used to perform better estimations of climate change, the measurement of the acidification of seawater is very important for fishing processes, etc. however, all these data are relevant only if they are recorded continuously for long periods of time and the measurements have been performed under quality requirements. for this reason, in recent years many permanent seabed observatories, as well as floating observatories such as moored or drifting buoys, have been deployed to perform long-term marine observations, which are the basis for environmental modelling and assessment. close examination of these data often reveals a lack of quality that, frequently, persists for extended periods of time. the growing need for real-time processing of data and the sheer quantity of data produced by these observatories mean that automated quality assurance/quality control (qa/qc) is necessary to ensure that the collected data have the quality required for their purpose [1]-[3]. this paper demonstrates the use of well-defined, community-adopted qa/qc tests and automated data quality assessments [4], [5] to provide a continuous scale of data quality, the capture of information about the system provenance, sensor and data processing history, and the inclusion of the flag values in the metadata stream. an example of implementing and testing the automated data quality assessments on a real-time platform is the expandable seafloor observatory, obsea [6], [7], deployed to monitor the barcelona coast, spain. among the parameters observed at obsea, to which these automated data quality controls are applied, are the seawater and air temperature, conductivity, and underwater and air pressure. 2. development each datum measured by the platforms has to pass different filters, which are evaluated through tests established for the customer. these tests have to be applied to every measured magnitude. 2.1. qualification tests the qualification tests are separated into format tests and behaviour tests. the automated application of these tests is rather straightforward using java programs. however, the estimation of the threshold parameters of these tests poses the greatest challenge. statistical assumptions dictate that these threshold parameters are ideally defined by having a distribution of values that are objectively considered "reasonable" for every sensor at every site [8].
in the obsea, the following automatic quality control tests are implemented, with the range, step, delta, sigma, null, and gap parameter thresholds determined from statistical distributions of existing data over a period of three years (a code sketch illustrating these tests is given at the end of this section):
- platform identification. this criterion determines the location of the equipment, for example: in the laboratory, connected to the obsea platform, or under test.
- impossible platform date/time. the date must be later than the obsea start date, the hour must be between 0 and 23, and the minutes between 0 and 59.
- regional impossible parameter values. the thresholds of this test are defined for each magnitude. the thresholds establish the extreme values based on the minima and maxima observed for a given sample period (table 1). for example, the seawater temperature of the mediterranean sea cannot exceed 28 degrees celsius and cannot be less than 10 degrees celsius.
- spike test. the tendency between the previous and the actual value has to be coherent. for example, the step between the previous and the actual value of the seawater temperature cannot exceed 0.1 °c for a sampling interval of less than 1 minute. the threshold of this test must be in accordance with the parameter measured in the environment and the acquisition rate of the instrument.
- gradient test. this test fails when the difference between vertically adjacent measurements is too steep. for example, the gradient of the seawater temperature cannot exceed 0.2 °c for a sampling interval of less than 1 minute. as for the spike test, the threshold of the gradient test must be in accordance with the parameter measured in the environment and the acquisition rate of the instrument.

table 1. the six tests employed in the obsea data quality control.
problem to be identified | test | calculation
data belongs to platform | platform identification test | defined in metadata
timestamp of data in observatory range | impossible platform date/time | year greater than 2008; month in range 1 to 12; day in range expected for month; hour in range 0 to 23; minute in range 0 to 59
data outliers | regional impossible parameter values | sea water temperature in range 10 °c to 28 °c; salinity in range 35 to 39; sea level air pressure in range 850 hpa to 1060 hpa (mbar); air temperature in range −10 °c to +40 °c; wind speed in range 0 m/s to 60 m/s; wind direction in range 0° to 360°; humidity in range 5 % to 95 %; current speed in range 0 m/s to 3 m/s; current direction in range 0° to 360°; wave period in range 0 s to 20 s; wave height in range 0 m to 10 m; depth in range 18 m to 21 m; conductivity in range 3.5 s/m to 6.5 s/m; sound velocity in range 1480 m/s to 1550 m/s
jumps in data values | spike test | |d_t − (d_{t+1} + d_{t−1})/2| − |(d_{t+1} − d_{t−1})/2| > s (where s is defined by the sampling)
change in variance structure | gradient test | |d_t − (d_{t+1} + d_{t−1})/2| > g (where g is defined by the sampling)
a dropped data point | null test | defined by sampling

the last test is the visual inspection of the data. this test is not automated, but it is a main test to ensure the quality of the adopted qa/qc. this test performs data verification by means of visual inspection; any data flagged by the previous tests is either verified as being an incorrect value or accepted as correct data. this test minimizes the risk of inadvertently eliminating the observation of a rare and potentially interesting event for the sake of data quality [9]. 2.2. tools the obsea automated quality assurance/quality control system was created using the java programming language. the system block diagram in figure 1 provides a general overview of the qa/qc system. the system is composed of three components and their interdependencies, as shown. from this figure, it is clear that the dependencies between subsystems are simple and that the communication between a subsystem and its dependencies takes place through their interfaces. the system allows the building of application-specific data structures assembled from standard building blocks, allowing "on the fly" changes of parameters. therefore, it will be easier to add new algorithms and new parameters to the application. this allows components to be generic and configurable and allows other modules to retrieve the required information and query data structures. figure 1. the automated data quality assurance system block diagram. 2.3. valuation the result of these automatic quality control tests is a numeric value that is included in the metadata of the acquired data. these qualifications are detailed in table 2. when the data are collected and properly verified, the flag value is usually 1, but for long-term acquisitions in a hostile environment, in our case the sea, the equipment can suffer from undesirable variations.

table 2. quality number included in metadata and its correspondence.
flag | meaning
0 | no quality control
1 | value seems correct
2 | value appears inconsistent with other values
3 | value seems doubtful
4 | value seems erroneous
5 | value was modified
6 | flagged land test
7-8 | reserved for future use
9 | data is missing
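as a concrete illustration of how the tests in table 1 and the flags in table 2 can be combined, the following minimal python sketch applies the range, spike, gradient, and null tests to a series of seawater temperature samples. it is only an assumed re-implementation for illustration: the obsea system itself is written in java, and the thresholds below are the example values quoted above, not the site-calibrated ones.

```python
import numpy as np

# example thresholds for mediterranean seawater temperature (table 1);
# S and G must be tuned to the magnitude and the acquisition rate
RANGE_MIN, RANGE_MAX = 10.0, 28.0   # degC
S, G = 0.1, 0.2                     # degC, spike and gradient thresholds

FLAG_OK, FLAG_ERRONEOUS, FLAG_MISSING = 1, 4, 9  # flags from table 2

def qc_flags(d):
    """Assign a quality flag to every sample of d (NaN = dropped point)."""
    d = np.asarray(d, dtype=float)
    flags = np.full(len(d), FLAG_OK)
    flags[np.isnan(d)] = FLAG_MISSING                          # null test
    flags[(d < RANGE_MIN) | (d > RANGE_MAX)] = FLAG_ERRONEOUS  # range test
    for t in range(1, len(d) - 1):
        centred = abs(d[t] - (d[t + 1] + d[t - 1]) / 2)
        if centred - abs((d[t + 1] - d[t - 1]) / 2) > S:       # spike test
            flags[t] = FLAG_ERRONEOUS
        elif centred > G:                                      # gradient test
            flags[t] = FLAG_ERRONEOUS
    return flags
```

the flags produced this way can then be written into the metadata stream next to each sample, as done in the obsea system.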
3. results the implementation of these automated quality control tests is illustrated using salinity and air temperature data from the obsea observatory. figure 2 illustrates the time series of 2 days of salinity data sampled at 10 s intervals in march 2014, and figure 3 illustrates the time series of 4 months of air temperature data sampled at 20 s intervals in february-june 2014. it should be noted that these data contain numerous known errors, which is useful for the purposes of this example. the salinity measurement is a function of the conductivity, temperature and pressure measurements made with the sbe 37 smp instrument installed in the obsea observatory at a constant depth of approximately 20 m. as can be seen in figure 2, the salinity measurements present numerous errors, which have been flagged as bad data (flag = 4) by the automated quality control system. the erroneous salinity measurements have been caused by incorrect measurements of the conductivity cell or by an exceeded threshold in the acceptance criteria. all these considerations are going to improve the automated quality control system. figure 2. time series of salinity observations between march 15 and march 17, 2014, from the obsea sea-bird ctd (model sbe 37 smp) and the corresponding quality control flags. the sampling rate of the instrument is 10 s. these data contain errors and missing values, which are automatically flagged as incorrect data (4) and missing data (9), respectively. figure 3. time series of air temperature observations in february-june 2014 from the obsea airmar weather station (model 150wx) and the corresponding quality control flags. the sampling rate of the instrument is 20 s. these data contain errors and missing values, which are automatically flagged as incorrect data (4) and missing data (9), respectively. the percentage of incorrect salinity and conductivity measurements of the sbe 37 smp instrument over the 4 months of data sampled at 10 s intervals in february-june 2014 was 0.95 %, as shown in table 3. in the same period, there were 8.15 % of missing data, caused by various factors such as communication problems between the land station and the obsea observatory and, rarely, by instrument outages.

table 3. percentages of correct, incorrect and missing values for the obsea parameters between february 2014 and june 2014.
parameter measured | value seems correct | value seems incorrect | value missing
sea water temperature | 91.69 % | 0.16 % | 8.15 %
salinity | 90.90 % | 0.95 % | 8.15 %
depth | 91.68 % | 0.17 % | 8.15 %
conductivity | 91.74 % | 0.11 % | 8.15 %
sound velocity | 91.78 % | 0.07 % | 8.15 %
sea level air pressure | 93.59 % | 0.00 % | 6.41 %
air temperature | 81.59 % | 12.00 % | 6.41 %
wind speed | 93.59 % | 0.00 % | 6.41 %
wind direction | 93.59 % | 0.00 % | 6.41 %
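the percentages in table 3 follow directly from the flag series. a small helper in the same vein as the sketch above (again an illustrative assumption, not the obsea java code) could be:

```python
import numpy as np

def flag_percentages(flags):
    """Percentages of correct (1), erroneous (4) and missing (9) samples."""
    flags = np.asarray(flags)
    n = len(flags)
    return {name: 100.0 * np.count_nonzero(flags == value) / n
            for name, value in (("correct", 1), ("erroneous", 4), ("missing", 9))}
```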
4. conclusion the automated quality assurance/quality control provides trustworthiness for long-term measurements, as well as the possibility of handling different states of the equipment (in calibration processes or out of service, among others) without cutting the link with the platform. moreover, the metadata have to be standard for all instruments and sensors; this standardization improves the compatibility of the automatic quality control framework with different platforms. it is only through the use of such standardized approaches that global-scale ecosystem questions can ever be addressed. future proposals include adding the uncertainty value to the metadata, and improving the threshold values of the different tests to adapt them to the deployment site. acknowledgement this work was partially supported by the projects nexos and fixo3 from the european union's seventh programme for research, technological development and demonstration under grant agreements no. 614102 and no. 312463. references [1] loescher, h. w., ocheltree, t., tanner, b., swiatek, e., dano, b., wong, j., zimmerman, g., campbell, j. l., stock, c., jacobsen, l., shiga, y., kollas, j., liburdy, j., and law, b. e.: comparison of temperature and wind statistics in contrasting environments among different sonic anemometer-thermometers, agr. forest meteorol., 133, pp. 119-139. [2] ocheltree, t. o. and loescher, h. w.: design of the ameriflux portable eddy-covariance system and uncertainty analysis of carbon measurements, j. atmos. ocean. tech., 24, pp. 1389-1409, 2007. [3] taylor, j. r. and loescher, h. w.: neon's fundamental instrument unit dataflow and quality assurance plan, neon.011009, national ecological observatory network, boulder, colorado, 2012. [4] seadatanet, 2007: data quality control procedures, version 0.1, 6th framework of ec dg research. [5] sylvie pouliquen and the data-meq working group, recommendations for in-situ data real time quality control, eg10.19, december 2010. url: http://eurogoos.eu/download/publications/rtqc.pdf [6] daniel toma, ikram bghiel, joaquin del rio, alberto hidalgo, normandino carreras, and antoni manuel, automated data quality assurance using ogc sensor web enablement frameworks for marine observatories, geophysical research abstracts, vol. 16, egu2014-11508, egu general assembly 2014.
[7] jacopo aguzzi, antoni mànuel, fernando condal, jorge guillén, marc nogueras, joaquin del rio, corrado costa, paolo menesatti, pere puig, francesc sardà, daniel toma, albert palanques, the new seafloor observatory (obsea) for remote and long-term coastal ecosystem monitoring, sensors, vol. 11, issue 6, 2011. [8] taylor, j. r. and loescher, h. l.: automated quality control methods for sensor data: a novel observatory approach, biogeosciences, 10, 4957-4971, doi: 10.5194/bg-10-4957-2013, 2013. [9] essenwanger, o. m.: analytical procedures for the quality control of meteorological data, meteorological observations and instrumentation: meteorological monograph, am. meteorol. soc., 33, 141-147, 1969.

acta imeko issn: 2221-870x june 2015, volume 4, number 2, 52-56 accelerometer transverse sensitivity calibration; validation and uncertainty estimation christiaan s veldman national metrology laboratory of south africa, acoustics, ultrasound and vibration laboratory, south africa, private bag x34, lynnwood ridge, 0004, south africa section: research paper keywords: accelerometer; calibration; transverse sensitivity; validation; uncertainty of measurement; iso 16063-31 citation: christiaan s veldman: accelerometer transverse sensitivity calibration; validation and uncertainty estimation, acta imeko, vol. 4, no. 2, article 9, june 2015, identifier: imeko-acta-04 (2015)-02-09 editor: paolo carbone, university of perugia, italy received september 23rd, 2014; in final form february 14th, 2015; published june 2015 copyright: © 2015 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited funding: this work was supported by the department of trade and industry, south africa corresponding author: ian veldman, e-mail: csveldman@nmisa.org abstract the national metrology institute of south africa (nmisa) has implemented a system to measure the transverse sensitivity of vibration transducers. as a mechanical device, the principal sensing axis of an accelerometer is not 100 % perpendicular to the mounting axis (surface). this gives rise to the effect that the accelerometer will produce an electrical output even when a mechanical input perpendicular to the principal measurement axis is applied. the quantification of this "defect" parameter is of importance when high-accuracy acceleration measurements are performed using accelerometers. this paper gives a brief overview of the system developed by the nmisa to measure the transverse sensitivity of vibration transducers. the paper then explores the validation of the system along with the uncertainty of measurement associated with the calibration system. 1. introduction due to their ease of use and low cost, accelerometers are widely considered as the vibration sensor of choice. a variety of different models are required to cover the wide range of vibration measurement applications. to select the accelerometer best suited for a specific application, the user will typically scrutinize the manufacturer's specifications. apart from the general (usually the most relevant) specifications, such as size, sensitivity, frequency and acceleration ranges, the manufacturer also specifies the relative transverse sensitivity (rts) of an accelerometer. for specialized applications, the transverse sensitivity is of importance, and a more accurately known value of the transverse sensitivity might be required [1], [2], [3]. in addition, knowledge of the angle (mechanical orientation) of the transverse sensitivity is also required. nmisa developed a capability to accurately measure the transverse sensitivity of accelerometers as part of its research in vibration metrology.
nmisa modified its existing low frequency accelerometer calibration system to facilitate the measurement of accelerometer transverse sensitivity and will offer this calibration service to industry in the near future. in section 2, "transverse sensitivity" is defined. the system hardware and specifications are described in section 3. in section 4, the approach followed to estimate the uncertainty of measurement, as well as the major uncertainty of measurement contributors, is discussed. the method followed for validating the system, with the validation results, is discussed in section 5. finally, in the concluding section the findings are summarized. 2. transverse sensitivity the transverse sensitivity of an accelerometer is defined as its sensitivity to acceleration applied perpendicular to its sensitive axis [4]. the axis of maximum sensitivity of the transducer is not necessarily aligned with the sensitive axis, as shown in figure 1. as a result, any motion not in line with the sensitive axis will produce an output. if the transducer is placed in a rectangular co-ordinate system, as shown in figure 1, the vector smax, representing the maximum transducer sensitivity, can be resolved into two components: the sensitive-axis sensitivity sn (the sensitivity) and the maximum transverse sensitivity st,max. figure 1. graphical illustration of transverse sensitivity. the theoretical transverse sensitivity curve is shown in figure 2. the transverse sensitivity, expressed as a percentage of the sensitivity, is referred to as the relative transverse sensitivity (rts). the rts depends on the excitation angle. for high-quality accelerometers, manufacturers supply devices with low rts, typically ≤ 1 %, with the direction of the lowest transverse sensitivity, βtmin, indicated by a red dot on the accelerometer. the manufacturer supplies these low transverse sensitivity devices through selection: that is, they physically measure the transverse sensitivity and select the units that meet the required rts specification. figure 2. polar plot indicating a theoretical transverse sensitivity curve for an accelerometer. 3. system description the transverse sensitivity calibration system of nmisa [5] was developed in compliance with iso 16063-31 [6]. the transverse sensitivity capability was developed as an extension of the existing primary low frequency accelerometer calibration system. a schematic diagram of the system configuration is shown in figure 3. figure 3. diagram of the system configuration. the system utilizes the existing long-stroke (152 mm peak to peak) electro-dynamic exciter, connected to an air-bearing linear translation stage (abt). a stepper-motor-controlled turntable is mounted on top of the abt (figure 4). figure 4. turntable mounted on top of the air-bearing translation stage.
table 1 provides the system parameters.

table 1. parameters of the nmisa transverse sensitivity calibration system.
vibration frequency range | 5 hz to 20 hz
transverse acceleration range | 5 m/s² to 50 m/s²
analogue inputs | four simultaneously sampled 12-bit channels
sampling frequency | 500 khz
turntable rotation angle | 0° to 360°
turntable rotation resolution | 1°
reference | polytec ofv-505 heterodyne laser interferometer system

for the vibration generator with turntable system implemented by nmisa, once the unit under test (uut) is mounted on the turntable and all the hardware and cable connections are completed, the in-house developed software is executed. the software performs a set of procedural steps as part of each transverse sensitivity measurement per turntable angular position, as follows:
- move the turntable to the angular position of interest (from 0° to 360° in 5° steps);
- ramp the exciter to the selected vibration level, in frequency and amplitude;
- sample two analogue inputs simultaneously, streaming the data (time series data) directly to computer storage;
- apply the three-parameter sine fit algorithm (3psf) [7] to the reference and uut output voltage time series to calculate the acceleration amplitude and the uut voltage output;
- calculate the relative transverse sensitivity (rts) using (1) and (2);
- record the rts in the result sheet;
- plot the rts on a polar diagram.
these steps are executed for each angular position from 0° to 360° using the selected step size, without the need for any intervention by the metrologist. for the calculation of the transverse sensitivity, s_t, the following formula was used:

$$ s_t = \frac{\hat{u}_{\mathrm{out}}}{\hat{a}_t} \qquad (1) $$

where s_t is the transverse sensitivity, û_out is the amplitude of the output signal of the transducer vibrating perpendicularly to its sensitive axis, and â_t is the acceleration level in the test direction. the acceleration level is measured by means of a reference transducer and calculated as

$$ \hat{a}_t = \frac{\hat{u}_{\mathrm{ref}}}{s_{\mathrm{ref}}} \qquad (2) $$

where û_ref is the voltage output of the reference transducer and s_ref is the sensitivity of the reference transducer. the relative transverse sensitivity, s*_t, is calculated from

$$ s^{*}_t = \frac{s_t}{s_n} \qquad (3) $$

where s_n is the transducer sensitivity. once all the measurements have been completed, the results are saved using an excel template file. the result sheet also displays the rts in graphical format, similar to figure 2.
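the per-angle computation in (1)-(3) is a one-liner once the amplitudes are available. as a minimal python sketch (an illustration only; the nmisa software is an in-house development, and the numbers below are made-up amplitudes, not measured values):

```python
def relative_transverse_sensitivity(u_out, u_ref, s_ref, s_n):
    """RTS from equations (1)-(3).

    u_out: UUT output amplitude in V, u_ref: reference output amplitude in V,
    s_ref: reference sensitivity in V/(m/s^2), s_n: UUT sensitivity in V/(m/s^2).
    """
    a_t = u_ref / s_ref   # acceleration level in the test direction, eq. (2)
    s_t = u_out / a_t     # transverse sensitivity, eq. (1)
    return s_t / s_n      # relative transverse sensitivity, eq. (3)

# hypothetical amplitudes: 10 m/s^2 excitation, 0.05 % RTS device
rts = relative_transverse_sensitivity(u_out=5.0e-5, u_ref=1.0e-2,
                                      s_ref=1.0e-3, s_n=1.0e-2)
print(f"rts = {100 * rts:.2f} %")   # -> rts = 0.05 %
```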
4. uncertainty of measurement a conservative approach was followed with respect to the consideration of uncertainty contributors. the "worst case" (but still scientifically valid) values were used for the uncertainty contributions. this allows for a single uncertainty calculation that is valid for a wider range of measurement conditions. it also reduces the need to re-calculate the uncertainty budget for each calibration. however, it does not relieve the metrologist of the responsibility of having to consider the uncertainty for each calibration performed. by using the uncertainty values estimated for a specific calibration, instead of the generalized values, an rts calibration with a smaller uncertainty might be possible. the uncertainty of measurement (uom) was estimated in accordance with the gum [8]. the root uncertainty contributors were identified from the mathematical model in (1), which was expanded to (4) by inserting (1) and (2) into (3):

$$ s^{*}_t = \frac{\hat{u}_{\mathrm{out}} \, s_{\mathrm{ref}}}{\hat{u}_{\mathrm{ref}} \, s_n} \qquad (4) $$

through consideration of the mathematical model (4) and the measurement procedure, a detailed set of uncertainty contributors was identified. this full set was reduced to a subset containing the uncertainty contributors with a significant contribution. this process produced the list of "dominant" uncertainty contributors listed in table 2. a summary of the uncertainty budget is reported in table 3.

table 2. significant uncertainty contributors.
uncertainty contributor | source of uncertainty
reference transducer sensitivity | calibration certificate
acceleration level | reference voltage measurement
accelerometer output voltage | accelerometer voltage measurement
sensitivity (sensitive axis sensitivity) | calibration certificate
type a uncertainties | statistical means (standard deviation)

the voltages for both the reference signal and the accelerometer output signal were captured using an a-to-d converter and applying the 3psf. the sine approximation method is a well-established and widely adopted time-domain signal processing technique [9]. it requires the samples to be equidistantly sampled, that is, the time between t0, t1, t2, ... must be constant, and, if the phase difference between signals is required, the sampling of the two signals must be performed simultaneously. for the system implemented, both these criteria were met using a four-channel analogue-to-digital converter (adc) (four individual channels, not multiplexed channels) with a single sample timing clock, thus synchronising the sampling done by the four adcs. it has been established that the 3psf method is influenced by various factors [10]-[13], for instance:
- the sampling frequency;
- the number of adc bits;
- the number of samples per period (ideally an integer number);
- the number of periods (ideally a prime number);
- the signal-to-noise ratio (snr).
of particular relevance to this application is the snr [10], as the accelerometer output is a very small signal due to the low transverse sensitivity. as a result, for this application the snr is measured. however, for simplicity, a minimum snr limit of 15 db is set for the purpose of calculating the 3psf uncertainty contribution.
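the two amplitude estimates feeding (1) and (2) come from the 3psf at the known vibration frequency. a compact python sketch of this step, including the narrow band-pass pre-filtering described below, could look as follows (an assumed re-implementation for illustration; the filter order and cut-off factors are the ones quoted in this paper):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_zero_phase(y, fs, fv):
    """4th-order Butterworth band-pass around the vibration frequency fv,
    applied forward and reverse (filtfilt) for a zero-phase response."""
    b, a = butter(4, [0.8909 * fv, 1.225 * fv], btype="bandpass", fs=fs)
    return filtfilt(b, a, y)

def sine_amplitude_3psf(y, fs, fv):
    """Three-parameter sine fit at known frequency fv: least-squares fit of
    y ~ A*cos(2*pi*fv*t) + B*sin(2*pi*fv*t) + C, returning sqrt(A^2 + B^2)."""
    t = np.arange(len(y)) / fs
    m = np.column_stack([np.cos(2 * np.pi * fv * t),
                         np.sin(2 * np.pi * fv * t),
                         np.ones_like(t)])
    (a, b, _), *_ = np.linalg.lstsq(m, y, rcond=None)
    return np.hypot(a, b)
```

applying `sine_amplitude_3psf` to the filtered reference and uut records yields û_ref and û_out for each turntable angle.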
the uncertainty associated with the sensitivity, along with its coverage factor, is obtained from the calibration of the reference. 4.2. acceleration level uncertainty  the voltage amplitude of the reference signal is determined using the 3psf algorithm [12]. from [13], the uncertainty associated with the amplitude is obtained from (5). 4.3. accelerometer output voltage uncertainty  the uncertainty associated with the accelerometer output voltage is estimated in the same manner as described in section 4.2 since the amplitude is also determined using the 3psf. however, the calculated uncertainty for this parameter is expected to be larger, due to the smaller snr, in light of the smaller output voltage. 4.4. sensitive axis sensitivity  the uncertainty contribution of the sensitivity will depend on the source of the value of sensitivity. generally, this sensitivity will be obtained from a valid calibration certificate. in such an instance, the uncertainty will be taken from the certificate. it is possible to determine the sensitivity of the sensitive axis using the calibration system, prior to performing the rts measurements. in this instance, the uncertainty contribution needs to be calculated separately. 4.5. type a uncertainty  at each measurement point, 0° to 355° in 5° steps, the system captures an equidistant sampled time series, containing 100 vibration cycles for both the referenceas well as the accelerometer channels. to eliminate the undesired effects introduced by the bandpass filtering, the 3psf is applied to the centre 50 cycles only. the final voltage amplitude is calculated as the mean and standard deviation of these 50 voltage amplitudes (per channel). the largest type a uncertainty was determined by calibrating an accelerometer with a low rts (≈ 0.1 %). using this accelerometer the largest standard deviation calculated, considering all the measurement points through the complete 360° rotation was 3.5 %. as was to be expected, the type a uncertainty reached a peak value at an angular position with the lowest rts. 5. validation  the performance of the transverse sensitivity calibration system and procedure was validated by performing a bilateral interlaboratory comparison (ilc). the ilc partner was the deutsche akkreditierungsstelle (dakks) accredited laboratory, spektra gmbh, dresden, germany. the purpose of the ilc was to evaluate and validate the metrological operation of the two participating laboratory’s accelerometer transverse sensitivity calibration systems and relevant procedures. the differences in the rts measurements obtained between the two laboratories would support (or disprove) each laboratory’s measurement capability, within their stated uom. the parameter covered by the ilc was the measurement of the relative transverse sensitivity (rts) of an accelerometer at 16 hz. three accelerometers were used as the ilc transfer devices. the accelerometers that were used are listed in table 4. 5.1. evaluation criteria  for each laboratory i, the data where xi,s, is the maximum relative transverse sensitivity st, reported and u(xi,s), is the reported standard uncertainty associated with the rts. for each of the comparison artefacts (transfer devices) an ilc reference value xr,s was determined as the weighted mean of the results of n laboratories (for this comparison, n = 2) according to , ∑ xi,s u2 xi,s n i 1 ∑ , (6) , ∑ , (7) the degree of equivalence, dlab-wm, and ulab-wm, was table 4. comparison transfer devices.  
table 4. comparison transfer devices.
no | accelerometer | serial number
1 | endevco 2270m8 | 16194
2 | pcb 3701g2fa3g | 8353
3 | pcb j353b01 | 47794

table 3. summary uncertainty budget.
source of uncertainty | estimated uncertainty | probability distribution (k)
reference transducer sensitivity | 0.5 % | 2
voltage measurement accuracy, reference channel (sine fitting) | 0.2 % | 1
uncertainty in the uut sensitivity value | 0.8 % | 2
voltage measurement accuracy, uut channel (sine fitting) | 0.3 % | 1
rotation angle accuracy | 0.5° | √3
vibration table horizontal alignment | 0.1° | √3
type a evaluation | 3.5 % | 1

the degree of equivalence, d_{lab-wm}, and its associated expanded uncertainty, u_{lab-wm}, were determined for the rts measurements using

$$ d_{lab\text{-}wm} = x_{lab} - x_{wm} \qquad (8) $$

$$ u_{lab\text{-}wm} = 2 \sqrt{u^{2}(x_{lab}) + u^{2}(x_{wm})} \qquad (9) $$

where x_lab represents the measurement result obtained by the laboratory for each rts, and x_wm represents the ilc reference value calculated as the weighted mean (wm) using (6). u_{lab-wm} is the uncertainty of measurement associated with the calculated d_{lab-wm} for k = 2, calculated using (9). 5.2. comparison results the rts values for the three individual accelerometers as reported by each laboratory (with the associated uncertainties) are reported in table 5. the calculated comparison reference values (weighted mean values) are reported in table 6, while the degrees of equivalence (doe) for each participant are reported in table 7. the doe for the measurement results reported by nmisa are shown in graphical format in figure 5. the graph clearly indicates uncertainties which overlap the reference values, indicating nmisa's equivalence with spektra. 6. conclusions a transverse sensitivity calibration system was implemented by nmisa in compliance with iso 16063-31. the transverse motion is generated using an electro-dynamic vibration exciter, with a stepper-motor-controlled turntable for angular positioning control. the requirement for a relatively high snr (≥ 15 db) was highlighted; this minimum level of snr is maintained through the use of digital narrow-band band-pass filtering. four major sources of uncertainty were identified: the reference transducer sensitivity, the acceleration level, the accelerometer output voltage, and the sensitive axis sensitivity. for this system, the upper limit for the type a uncertainty was calculated to be 3.5 %. the system was validated through a bilateral ilc. the results for the three different accelerometers used support the uom estimated by nmisa. the ilc results further indicate that the uom estimated by nmisa could be considered fairly conservative. references [1] t. petzsche, "determination of the transverse sensitivity using a mechanical vibration generator with turntable," iso tc 108/sc 3/wg 6 doc. n153, 2007. [2] j. dosch and m. lally, "automated testing of accelerometer transverse sensitivity," proceedings of the international modal analysis conference (imac), kissimmee, florida, usa, pp. 1-4, 2003. [3] r. sill and e. seller, "accelerometer transverse sensitivity measurement using planar orbital motion," proceedings of the 77th shock and vibration symposium, monterey, california, usa, pp. 8-12, november 2006. [4] iso, "mechanical vibration, shock and condition monitoring - vocabulary," iso 2041, 2009. [5] c. s. veldman, "implementation of an accelerometer transverse sensitivity measurement system," ncsl international, june 2013. [6] iso, "methods for the calibration of vibration and shock transducers - part 31: testing of transverse vibration sensitivity," iso 16063-31.
[7] peter händel, "amplitude estimation using ieee-std-1057 three-parameter sine wave fit: statistical distribution, bias and variance," measurement, vol. 43, pp. 766-770, 2010. [8] bipm, iec, ifcc, ilac, iso, iupac, iupap and oiml 2008, evaluation of measurement data - guide to the expression of uncertainty in measurement, joint committee for guides in metrology, jcgm 100:2008. [9] iso, "methods for the calibration of vibration and shock transducers - part 11: primary vibration calibration by laser interferometry," iso 16063-11. [10] m. bertocco, c. narduzzi, p. paglierani, d. petri, "accuracy of effective bits estimation methods," ieee instrumentation and measurement technology conference, brussels, belgium, june 4-6, 1996. [11] konrad hejn, andrzej pacut, "generalized model of the quantization error - a unified approach," ieee transactions on instrumentation and measurement, vol. 45, n. 1, feb. 1996. [12] f. corrêa alegria, "bias of amplitude estimation using three-parameter sine fitting in the presence of additive noise," measurement, 42, pp. 748-756, 2009. [13] f. corrêa alegria, a. cruz serra, "uncertainty of the estimates of sine wave fitting of digital data in the presence of additive noise," ieee instrumentation and measurement technology conference, sorrento, italy, april 24-27, 2006.

table 5. reported ilc results.
accelerometer | nmisa rts (%) | nmisa uc (%) | spektra rts (%) | spektra uc (%)
endevco 2270m8 | 0.8 | 3 | 0.95 | 0.3
pcb 3701g2fa3g | 0.24 | 3 | 0.25 | 0.3
pcb j353b01 | 0.9 | 3 | 0.81 | 0.3

table 6. calculated weighted mean values.
accelerometer | rts_wm (%) | u_wm (%)
endevco 2270m8 | 0.95 | 0.3
pcb 3701g2fa3g | 0.25 | 0.3
pcb j353b01 | 0.81 | 0.3

table 7. calculated degrees of equivalence.
accelerometer | d_nmisa-wm (%) | u_nmisa-wm (%) | d_spektra-wm (%) | u_spektra-wm (%)
endevco 2270m8 | −0.15 | 3.15 | 0.001 | 0.42
pcb 3701g2fa3g | −0.01 | 3.01 | 0.0 | 0.42
pcb j353b01 | 0.09 | 3.11 | −0.001 | 0.42

figure 5. nmisa degrees of equivalence (deviation from the weighted mean, in %, versus accelerometer number).

acta imeko issn: 2221-870x september 2023, volume 12, number 3, 1-6 low-cost monitoring for stimulus detection in skin conductance grazia iadarola1, valeria bruschi1, stefania cecchi1, nefeli dourou1, susanna spinsante1 1 department of information engineering, polytechnic university of marche, 60131 ancona, italy section: research paper keywords: skin conductance; galvanic skin response; stimulus detection; low-cost sensor; active assisted living citation: grazia iadarola, valeria bruschi, stefania cecchi, nefeli dourou, susanna spinsante, low-cost monitoring for stimulus detection in skin conductance, acta imeko, vol. 12, no. 3, article 10, september 2023, identifier: imeko-acta-12 (2023)-03-10 section editor: leonardo iannucci, politecnico di torino, italy received april 16, 2023; in final form june 2, 2023; published september 2023 copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. funding: this research was funded by the european union - next generation eu. project code: ecs00000041; project cup: c43c22000380007; project title: innovation, digitalisation and sustainability for the diffused economy in central italy - vitality.
corresponding author: susanna spinsante, e-mail: s.spinsante@staff.univpm.it abstract not so many consumer devices are available for the minimally invasive monitoring of skin conductance (sc), differently from what happens for other physiological signals. in this paper, a low-cost monitoring system for sc signals is presented. for comparison purposes, the sc signals are simultaneously acquired by the low-cost monitoring system and by procomp infiniti desk equipment. the paper shows that, despite the simpler design and hardware limitations exhibited by the low-cost system, the collected sc signals provide the same relevant information for stimulus detection as the sc signals acquired by a much more expensive acquisition board. specifically, the comparison is carried out by assessing the ability of the low-cost monitoring system to detect the increase of both the sc baseline and the sc peaks after stimulation. 1. introduction several hardware technologies have been developed to integrate different physiological quantities into wearables enabled with wireless connectivity according to the internet of things (iot) [1], [2]. wearables are often used with the aim of acquiring those parameters that evaluate the subject's physiological status, from the physical and health perspective [3]-[8] to the cognitive and emotional one [9]-[11]. indeed, the connection of smart devices to networking systems can reduce unnecessary hospital visits and lower the burden on healthcare services, by transferring data over secure infrastructures. iot devices allow continuous remote monitoring of physiological parameters, thus supporting long-term care at home (active assisted living) [12]. while there exists a rich offer of wearables embedding sensors to monitor heart-related parameters, not so many devices are available to collect skin conductance (sc) signals, despite the several applications that could benefit from them. for example, the possibility to classify different physiological conditions, such as pain or perspiration, by means of an sc acquisition system allows a better management of anxiety among patients under treatment, students, workers, or people affected by health problems related to distress [13]-[16]. sc is defined in a seminal work by boucsein [17] as the phenomenon by which the skin temporarily becomes a better conductor of electricity, due to a change in sweat secretion, when either external or internal stimuli occur. thus, sc signals reflect the changes in electric conductance, which is modulated by sweat gland activity. in fact, an increase in sweating, mostly composed of water, increases the skin's capability of conducting electric current. sweat secretion from the eccrine glands (spread over the body skin) cannot be consciously controlled, because it is driven by the autonomic nervous system, whose activity is modified by the events involving a subject [18]. consequently, sc signals allow to investigate the relationship between physiological status and stimuli [19], [20]. sc variations may be elicited by the response to stimuli of different nature, such as visual, acoustic [21], [22] or physical ones [23], or by stressful conditions, such as driving [24]-[26]. the so-called skin conductance level (scl), or tonic level, reflects the slow changes depending on skin dryness, hydration, and automatic regulation. instead, the skin conductance response (scr), or phasic level, represents the dynamic changes associated with stimuli [27], [28].
sc signals may be collected by an endosomatic approach, meaning that no external source of electricity is used, or by an exosomatic approach, in which electrodes in direct contact with the skin allow to apply a current source. in the exosomatic approach, ohm's law is exploited to compute the skin conductance (or resistance) by means of voltage values. although the device design is simpler in the endosomatic case, the exosomatic approach is more common in wearables, as it does not require collecting the voltage difference between active and inactive sites of the skin. the availability of smart devices for stimulus detection in sc signals can enable several applications regarding the human response to specific events [29]. the advantage of using smart devices to collect sc signals consists exactly in the possibility to run different types of experiments with greater flexibility, without bulky equipment, thus in real-life conditions [25]. in such a context, the current work investigates the operation of a low-cost and portable system for sc monitoring. specifically, the current work is intended to evaluate whether a low-cost system, although exhibiting a simpler design and hardware limitations, can be adopted for stimulus detection. to this purpose, the sc signals acquired by the proposed low-cost system are compared to the sc signals collected by a reference instrument, the procomp infiniti. the comparison is then carried out before and after acoustic stimulation, by analysing the reliability of the low-cost system in detecting event-related changes. the paper is organized as follows. section 2 describes in detail the proposed low-cost system for sc monitoring. section 3 presents, instead, the reference instrument for sc monitoring. the applied test procedure is depicted in section 4. the obtained results are discussed in section 5. finally, section 6 provides conclusive remarks. 2. low-cost system for sc monitoring monitoring for stimulus detection in sc signals is implemented in this study through a low-cost data acquisition system. the proposed low-cost system consists of the grove-gsr sensor v1.2 [30], with its signal conditioning circuit, and the arduino uno board, equipped with the atmega328p microcontroller. figure 1 shows the low-cost system. figure 1. the low-cost system for sc monitoring. the grove-gsr sensor collects voltage values modulated by a microcurrent, injected by means of two nickel electrodes worn in direct contact with the fingers through small velcro bands. moreover, a signal conditioning circuit, including the texas instruments lm324 operational amplifier, is connected to the electrodes. the grove-gsr sensor is then connected by means of a 4-wire cable to the arduino uno board, which is useful for low power consumption. the quantity acquired by the low-cost system is the skin resistance (sr), which is the inverse of the sc. the voltage signal is acquired at 121 hz and converted into 10-bit digital values, so that such values range from a minimum of 0 to a maximum of 1023. in order to remove glitches from the acquired voltage values, the code embedded into the firmware of the microcontroller computes the sr average over 5 voltage samples. finally, data are sent via the universal serial bus (usb) interface from the output port of the microcontroller to a computer, where they are saved into a file. 3. reference system for sc monitoring generally speaking, only a few devices are commercially available to acquire sc signals (see table 1).
they exhibit different operation modes and sampling frequencies. as shown in table 1, above all, the accessibility of the raw data samples can represent a critical point, causing difficulties in extracting the information of interest.

table 1. comparison of commercial devices for sc monitoring.
device | measured value | sampling frequency | availability of original samples | typology
empatica e4 | conductance | 4 hz | yes | wearable
empatica embrace | conductance | 4 hz | yes | wearable
moodmetric | conductance | not provided | no | wearable
shimmer3 gsr+ | resistance | 32 hz | yes | wearable
procomp infiniti | conductance | 256 hz | yes | desk equipment
biopac mp36r | conductance | 500 hz | yes | desk equipment

among the available commercial devices for sc monitoring, the procomp infiniti by thought technology is desk equipment considered as a reference system to validate other devices [31], due to the accuracy of its sc sensor. it consists of a data acquisition system and five channels for the monitoring of several biosignals. specifically, the first two channels allow to acquire, at a sampling frequency of 2048 hz, faster signals such as the electroencephalogram, the electrocardiogram, the heart rate or the blood volume pulse. the remaining three channels are instead designated to monitor slower signals, such as respiration, temperature, and sc, at a sampling frequency of 256 hz. therefore, one of the latter three channels is employed for sc monitoring, with a signal input range of [0, 30] μs. in particular, the electrode strap must be fastened around the finger so that the electrode surface is in contact with the pad (but not so tight as to limit blood circulation). furthermore, as shown in figure 2, the system comprises a fiber optic cable and a tt-usb interface. the procomp infiniti system encodes and transmits the data via the fiber optic cable to the tt-usb interface, which is in its turn connected to the usb port of the computer. lastly, the biograph infiniti software allows to export the measured values as .csv data files. figure 2. the procomp infiniti system with the tt-usb interface. 4. test procedure the stimulus detection in sc is evaluated on a population of ten subjects, six women and four men, aged between 23 and 49 years (the details are given in table 2). all the involved subjects declared to be in good health and signed an informed consent before participating in the experimental acquisitions.

table 2. dataset population details.
subject | f/m | age (years)
s1 | f | 47
s2 | m | 34
s3 | f | 33
s4 | m | 48
s5 | f | 29
s6 | f | 40
s7 | m | 32
s8 | f | 31
s9 | m | 49
s10 | f | 23

sc signals were simultaneously acquired by the low-cost system and by the procomp infiniti. acquisitions were performed inside a laboratory, with the subjects comfortably sitting on a chair. in order to simultaneously acquire the sc signals by both systems on the index and middle fingers of the dominant hand, as suggested in [32], the electrodes of the grove-gsr sensor were placed on the fingertips, while the electrodes of the procomp infiniti were placed on the medial phalanx of the same fingers, as shown in figure 3. the fingertip location of the low-cost electrodes favors the best possible contact between electrodes and skin [32]; this choice aims to compensate for the smaller amplitude of the signal obtained by the low-cost system, which includes components with a lower amplification gain with respect to the procomp infiniti. figure 3. test setup for simultaneous acquisition of sc signals by the low-cost system and the procomp infiniti.
in any case, as detailed in section 5, the comparison between the two considered systems is carried out by normalizing the amplitudes of the acquired signals. moreover, the subjects were asked to keep their arm in a relaxed position, with the palm of the hand resting on a desk, to minimize involuntary movements and favor the contact of the finger skin with the electrodes. the aim of the paper is to verify whether the stimulation can be identified in the sc signals collected by both the low-cost system and the procomp infiniti. for such a reason, only a single audio stimulus was considered. in detail, each acquisition session lasted less than 2 minutes (see figure 4). the sc baseline of each subject was first acquired for 60 s, in relaxed conditions and in the absence of any external stimulus. then, an audio clip of 6 seconds was played through the computer loudspeakers, while the sc acquisition went on for 58 s. thus, the acquisition duration was 1 minute and 58 s overall. figure 4. time sequence of the sc acquisition protocol. the audio clip was chosen with a short duration to elicit a reaction in the listening subject, without generating the so-called "habituation effect" [33]. the 6-second audio clip was extracted from the international affective digitized sounds (iads) database [34]. the iads database is a set of 167 audio clips for experimental investigations on emotion and attention, where at least 100 listeners, equally divided between males and females, rated each sound in the dataset. specifically, the sound clip rocknroll 815 was used in our experiments, since it exhibits the highest average value of the pleasure score (8.13/9.00) over the whole dataset, as reported in the documentation available together with the dataset. the raw data acquired by the low-cost system are first converted into sr values according to (1) [30]:

$$ sr = \frac{(2^{10} + 2x) \, r}{2^{9} - x} \qquad (1) $$

where x is the serial port reading, i.e., the digitized value displayed on the serial port, ranging from 0 to 1023 (10-bit digital values), while r = 10⁴ ω is the resistance of the resistor in the voltage divider circuit used to read the variable resistance of the sensor. the sr data acquired by the low-cost system were saved as local .txt files on the computer, and then processed in the matlab environment. in particular, the sc values are calculated as the inverse of the sr values. instead, the sc signals acquired by the procomp infiniti are directly provided in siemens and do not need to be preprocessed to be comparable to the sc signals obtained by the low-cost system. the sc data acquired by the procomp infiniti were saved by the biograph infiniti software-sa7900 of thought technology as .csv files and processed in the matlab environment.
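the conversion chain from raw readings to conductance is short enough to show in full. the paper performs it in matlab; the following python sketch is an equivalent, assumed re-implementation of (1) and of the inversion to sc:

```python
import numpy as np

R = 1.0e4  # ohm, series resistor of the grove-gsr voltage divider

def skin_resistance(x):
    """Skin resistance in ohm from the 10-bit serial-port reading x, eq. (1)."""
    x = np.asarray(x, dtype=float)
    return (2**10 + 2 * x) * R / (2**9 - x)

def skin_conductance(x):
    """Skin conductance in siemens, the inverse of the skin resistance."""
    return 1.0 / skin_resistance(x)

# example: a reading of 300 -> SR ~ 76.6 kohm, SC ~ 13.1 microsiemens
print(skin_resistance(300), skin_conductance(300) * 1e6)
```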
5. results and discussion figure 5 and figure 6 show, for instance, the sc signals of subjects s4 (male) and s10 (female), acquired by means of both the low-cost system and the procomp infiniti. in order to take into account the different electrode locations of the two systems, as well as their different amplification components, the signals are shown with normalized amplitudes, allowing a fair comparison between them. figure 5. sc signals with normalized amplitude of subject s4 (male) acquired by a) the low-cost system, and b) the procomp infiniti. figure 6. sc signals with normalized amplitude of subject s10 (female) acquired by a) the low-cost system, and b) the procomp infiniti. it should be underlined that, looking at the signals in figure 5 and figure 6, clear peaks follow the stimulus event for both subjects. in fact, as reported in [34], event-related peaks usually happen within 2.5 to 7.0 seconds after the stimulus. stimulus detection by the data acquisition systems is carried out by analyzing the sc signals of the ten subjects, before and after the acoustic stimulation, in terms of the variation of the scl average and of the scr peaks. for this reason, the sc signals are filtered, obtaining the two scl and scr components (a code sketch of this decomposition is given at the end of this section). as concerns the scl component, on the basis of the literature [36], an increase of positive fluctuations from the relaxed condition to pleasant stimulation should be clearly evident in the baseline. in table 3, the results for the scl acquired by both the low-cost system and the procomp infiniti are reported before and after the acoustic stimulus.

table 3. detection of average baseline increase following stimulation, for each subject and by both the systems used.
subject | low-cost: scl average in µs before stimulus | low-cost: scl average in µs after stimulus | low-cost: baseline increase | procomp: scl average in µs before stimulus | procomp: scl average in µs after stimulus | procomp: baseline increase
s1 | 2.46 | 2.59 | yes | 0.60 | 0.63 | yes
s2 | 4.38 | 4.75 | yes | 1.07 | 1.23 | yes
s3 | 3.58 | 4.03 | yes | 1.65 | 1.62 | no
s4 | 2.49 | 2.63 | yes | 1.73 | 1.89 | yes
s5 | 0.96 | 0.88 | no | 0.81 | 0.69 | no
s6 | 1.53 | 1.75 | yes | 1.57 | 1.67 | yes
s7 | 2.68 | 2.82 | yes | 0.55 | 0.60 | yes
s8 | 3.85 | 3.54 | no | 1.13 | 1.05 | no
s9 | 1.64 | 1.67 | yes | 1.10 | 1.12 | yes
s10 | 1.89 | 2.70 | yes | 2.10 | 2.59 | yes

generally, the results provided by the low-cost system confirm the literature findings, as they highlight an increase in the scl average following the stimulus over most of the subjects (eight out of ten). furthermore, in one case out of ten (subject s3), the signals acquired by the two systems lead to different results, to the advantage of the low-cost system: an increase in the scl average following the stimulus is recorded by the low-cost system only. the data provided in table 4 refer to the analysis of the variation in the number of scr peaks detected before and after the stimulus, for each subject and by both acquisition systems.

table 4. detection of scr events number increase following stimulation, for each subject and by both the systems used.
subject | low-cost: scr peaks before stimulus | low-cost: scr peaks after stimulus | low-cost: peaks increase | procomp: scr peaks before stimulus | procomp: scr peaks after stimulus | procomp: peaks increase
s1 | 0 | 1 | yes | 3 | 2 | no
s2 | 3 | 5 | yes | 3 | 6 | yes
s3 | 2 | 4 | yes | 2 | 0 | no
s4 | 1 | 2 | yes | 0 | 2 | yes
s5 | 0 | 0 | no | 0 | 0 | no
s6 | 1 | 1 | no | 0 | 2 | yes
s7 | 4 | 3 | no | 4 | 2 | no
s8 | 0 | 0 | no | 0 | 0 | no
s9 | 0 | 1 | yes | 0 | 1 | yes
s10 | 1 | 2 | yes | 2 | 2 | no

for six subjects out of ten, the low-cost system detects an increase in the number of scr peaks following the acoustic stimulus, in accordance with the literature, as reported in [37], [38]. on the other hand, by analyzing the signals collected by the procomp infiniti, the peak increase is found only in four cases out of ten. therefore, the analysis of the acquired sets of signals proves that the low-cost system captures the meaningful variations of the sc signals as well as the procomp infiniti, taken as the reference device. these results agree with [39], confirming the possibility to implement a low-cost and portable system for sc monitoring aimed at stimulus detection. in fact, the signals collected by the prototypal system shown in this work allow to extract meaningful information in line with the results available in the literature, typically obtained by means of desk equipment, which is not suitable for scenarios requiring mobility and freedom of movement for the users.
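the scl/scr decomposition and the peak counting used above were performed in matlab; a minimal python sketch of one possible implementation is shown below. the low-pass cut-off separating the tonic and phasic components and the peak-selection parameters are assumptions for illustration, not values stated in the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 121  # hz, sampling rate of the low-cost system

def scl_scr(sc, fs=FS, fc=0.05):
    """Split an SC record into tonic (SCL) and phasic (SCR) components
    with a zero-phase 2nd-order low-pass filter; fc is an assumed cut-off."""
    b, a = butter(2, fc, btype="lowpass", fs=fs)
    scl = filtfilt(b, a, sc)
    return scl, sc - scl

def count_scr_peaks(scr, fs=FS, min_height=0.01e-6):
    """Count SCR peaks above an assumed minimum amplitude (in siemens),
    keeping peaks at least 1 s apart."""
    peaks, _ = find_peaks(scr, height=min_height, distance=fs)
    return len(peaks)
```

comparing the scl mean and the scr peak count over the 60 s before and the 58 s after the stimulus reproduces the kind of per-subject entries reported in tables 3 and 4.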
6. conclusions
measuring sc signals by means of minimally invasive and low-cost systems can enable the out-of-lab monitoring of subjects under different types of stimulation without bulky equipment, thus in real-life conditions. in this paper, a low-cost and portable monitoring system for stimulus detection in sc has been proposed. the performance of the proposed low-cost system has been evaluated in comparison to that of a reference desk equipment, the procomp infiniti, on ten subjects. the analysis of the sc signals, acquired before and after acoustic stimulation, highlights the capability of the low-cost system to capture their relevant variations, in the slowly varying sc component (scl) as well as in the rapidly varying one (scr). in fact, the increase of the scl average and of the scr peaks after stimulation is correctly monitored by the low-cost system, in accordance with the results obtained by means of the desk equipment of greater hardware capabilities. additional investigations aimed at generalizing the promising results obtained in this work will be performed as future work, increasing the size of the test population and possibly testing different types of stimulation.

references
[1] r. morello, f. ruffa, i. jablonski, l. fabbiano, c. de capua, an iot based ecg system to diagnose cardiac pathologies for healthcare applications in smart cities, measurement 190 (2022). doi: 10.1016/j.measurement.2021.110685
[2] k. aparna, d. h. dayajanaki, p. rani devika, sreenidhi prabha rajeev, s. d. baby sreeja, wearable sensors in daily life: a review, international conference on advanced computing and communication systems (icaccs), coimbatore, india, 17-18 march 2023, pp. 863-868. doi: 10.1109/icaccs57279.2023.10112956
[3] h. zhao, r. wang, d. qi, j. xie, j. cao, w.-h. liao, wearable gait monitoring for diagnosis of neurodegenerative diseases, measurement 202 (2022), art. no. 111839. doi: 10.1016/j.measurement.2022.111839
[4] g. iadarola, s. meletani, f. di nardo, s. spinsante, a new method for semg envelope detection from reduced measurements, 2022 ieee international symposium on medical measurements and applications (memea), messina, italy, 2022, pp. 1-6. doi: 10.1109/memea54994.2022.9856436
[5] m. donati, m. olivelli, r. giovannini, l. fanucci, rt-profasy: enhancing the well-being, safety and productivity of workers by exploiting wearable sensors and artificial intelligence, 2022 ieee int. workshop on metrology for industry 4.0 & iot (metroind4.0&iot), trento, italy, 2022, pp. 69-74. doi: 10.1109/metroind4.0iot54413.2022.9831499
[6] e. piuzzi, e. pisa, e. pittella, l. podestà, s. sangiovanni, wearable belt with built-in textile electrodes for cardio-respiratory monitoring, sensors 20 (2020), art. no. 4500. doi: 10.3390/s20164500
[7] p. daponte, l. de vito, g. iadarola, f. picariello, ecg monitoring based on dynamic compressed sensing of multi-lead signals, sensors 21(21) (2021). doi: 10.3390/s21217003
[8] p. schilk, k. dheman, m. magno, vitalpod: a low power in-ear vital parameter monitoring system, 2022 18th int. conf. on wireless and mobile computing, networking and communications (wimob), thessaloniki, greece, 10-12 october 2022, pp. 94-99. doi: 10.1109/wimob55322.2022.9941646
[9] d. chatterjee, r. gavas, s. k. saha, exploring skin conductance features for cross-subject emotion recognition, 2022 ieee region 10 symposium (tensymp), 2022, pp. 1-6. doi: 10.1109/tensymp54529.2022.9864492
[10] c. christoforou, s. christou-champi, f. constantinidou, m. theodorou, from the eyes and the heart: a novel eye-gaze metric that predicts video preferences of a large audience, frontiers in psychology 6 (2015). doi: 10.3389/fpsyg.2015.00579
[11] k. jambhale, b. rieland, s. mahajan, p. narsay, n. banerjee, a. dutt, r. vinjamuri, selection of optimal physiological features for accurate detection of stress, 2022 44th annual int. conf. of the ieee engineering in medicine & biology society (embc), glasgow, scotland, united kingdom, 11-15 july 2022, pp. 2514-2517. doi: 10.1109/embc48229.2022.9871067
table 3. detection of average baseline increase following stimulation, for each subject and by both the systems used. scl averages are given in µs.
subject / low-cost system: scl before, scl after, baseline increase / procomp infiniti: scl before, scl after, baseline increase
s1 / 2.46, 2.59, yes / 0.60, 0.63, yes
s2 / 4.38, 4.75, yes / 1.07, 1.23, yes
s3 / 3.58, 4.03, yes / 1.65, 1.62, no
s4 / 2.49, 2.63, yes / 1.73, 1.89, yes
s5 / 0.96, 0.88, no / 0.81, 0.69, no
s6 / 1.53, 1.75, yes / 1.57, 1.67, yes
s7 / 2.68, 2.82, yes / 0.55, 0.60, yes
s8 / 3.85, 3.54, no / 1.13, 1.05, no
s9 / 1.64, 1.67, yes / 1.10, 1.12, yes
s10 / 1.89, 2.70, yes / 2.10, 2.59, yes

table 4. detection of scr events number increase following stimulation, for each subject and by both the systems used.
subject / low-cost system: peaks before, peaks after, peaks increase / procomp infiniti: peaks before, peaks after, peaks increase
s1 / 0, 1, yes / 3, 2, no
s2 / 3, 5, yes / 3, 6, yes
s3 / 2, 4, yes / 2, 0, no
s4 / 1, 2, yes / 0, 2, yes
s5 / 0, 0, no / 0, 0, no
s6 / 1, 1, no / 0, 2, yes
s7 / 4, 3, no / 4, 2, no
s8 / 0, 0, no / 0, 0, no
s9 / 0, 1, yes / 0, 1, yes
s10 / 1, 2, yes / 2, 2, no

[12] f. addante, f. gaetani, l. patrono, d. sancarlo, i. sergi, g. vergari, an innovative aal system based on iot technologies for patients with sarcopenia, sensors 19(22) (2019), art. no. 4951. doi: 10.3390/s19224951
[13] y. nagano, y. nagata, y. miyanishi, t. nagahama, y. morita, class evaluation using iot skin conductance measuring instruments, japanese journal of physiological psychology and psychophysiology 37 (2019). doi: 10.5674/jjppp.1903si
[14] a.-l. roos, t. goetz, m. krannich, m. donker, m. bieleke, a. caltabiano, t. mainhard, control, anxiety and test performance: self-reported and physiological indicators of anxiety as mediators, british journal of educational psychology 93 (2023), pp. 72-89. doi: 10.1111/bjep.12536
[15] r. singh, a. gehlot, r. saxena, kh. alsubhi, d. anand, i. delgado noya, sh. vaseem akram, s. choudhury, stress detector supported galvanic skin response system with iot and labview gui, computers, materials & continua 74(1) (2023), pp. 1217-1233. doi: 10.32604/cmc.2023.023894
[16] b. villar, a. c. de la rica, m. vargas, m. j. turiel, a low cost iot enabled device for the monitoring, recording and communication of physiological signals, proceedings of the 14th international joint conference on biomedical engineering systems and technologies, volume 1: biodevices (2021). doi: 10.32604/cmc.2023.023894
[17] w. boucsein, electrodermal activity, second edition, springer, 2012, isbn: 9781461411260.
[18] n. bharathiraja, m. sakthivel, t. deepa, s. hariprasad, n. ragasudha, design and implementation of selection algorithm based human emotion recognition system, 2023 7th international conference on trends in electronics and informatics (icoei), tirunelveli, india, 11-13 april 2023, pp. 1348-1353. doi: 10.1109/icoei56765.2023.10125696
[19] g. iadarola, a. poli and s.
spinsante, compressed sensing of skin conductance level for iot-based wearable sensors, 2022 ieee int. instrumentation and measurement technology conference (i2mtc), 16-19 may 2022, ottawa, on, canada, 2022, pp. 1-6. doi: 10.1109/i2mtc48687.2022.9806516 [20] g. iadarola, a. poli, s. spinsante, reconstruction of galvanic skin response peaks via sparse representation, 2021 ieee int. instrumentation and measurement technology conference (i2mtc), 17-20 may 2021, glasgow, united kingdom, pp. 1–6. doi: 10.1109/i2mtc50364.2021.9459905 [21] v. sharma, n.r. prakash, p. kalra, audio-video emotional response mapping based upon electrodermal activity, biomedical signal processing and control 47 (2019), pp. 324-333. doi: 10.1016/j.bspc.2018.08.024 [22] g. iadarola, a. poli and s. spinsante, analysis of galvanic skin response to acoustic stimuli by wearable devices, 2021 ieee international symposium on medical measurements and applications (memea), lausanne, switzerland, 2021, pp. 1-6. doi: 10.1109/memea52024.2021.9478673 [23] a. soni, k. rawal, effect of physical activities on heart rate variability and skin conductance, biomedical engineering applications, basis and communications 33(5) (2021). doi: 10.4015/s1016237221500381 [24] a. amidei, a. poli, g. iadarola, f. tramarin, p. pavan, s. spinsante, l. rovati, driver drowsiness detection based on variation of skin conductance from wearable device, 2022 ieee int. workshop on metrology for automotive (metroautomotive), 04-06 july 2022, modena, italy, pp. 94–98 (2022). doi: 10.1109/metroautomotive54295.2022.9854871 [25] a. amidei, s. spinsante, g. iadarola, s. benatti, f. tramarin, p. pavan, l. rovati, driver drowsiness detection: a machine learning approach on skin conductance, sensors 23(8) (2023). doi: 10.3390/s23084004 [26] a. poli, a. amidei, s. benatti, g. iadarola, f. tramarin, l. rovati, p. pavan, s. spinsante, exploiting blood volume pulse and skin conductance for driver drowsiness detection, lecture notes of the institute for computer sciences, social-informatics and telecommunications engineering 456 (2023), pp. 50-61. doi: 10.1007/978-3-031-28663-6_5 [27] y. can, n. chalabianloo, d. ekiz, c. ersoy, continuous stress detection using wearable sensors in real life: algorithmic programming contest case study, sensors 19(8) (2019). doi: 10.3390/s19081849 [28] k. sapozhnikova, r. taymanov, i. baksheeva, i. danilova, measurements as the basis for interpreting the content of emotionally coloured acoustic signals, measurement 202 (2022), art. no. 111861. doi: 10.1016/j.measurement.2022.111861 [29] m. e. h. chowdhury, a. khandakar, kh. alzoubi, a. mohammed, s. taha, a. omar, kh. r. islam, t. rahman, md. sh. hossain, m. t. islam, m. bin ibne reaz, wearable real-time epileptic seizure detection and warning system, biomedical signals based computer-aided diagnosis for neurological disorders, springer., 2022. [30] seeed studio, grove gsr sensor v. 1.2. online [accessed 18 june 2023] https://wiki.seeedstudio.com/grove-gsr_sensor/ [31] h. g. van lier, m. e. pieterse, a. garde, m. g. postel, h. a. de haan, m. m. r. vollenbroek-hutten, j. m. schraagen, m. l. noordzij, a standardized validity assessment protocol for physiological signals from wearable technology: methodological underpinnings and an application to the e4 biosensor. behav res 52 (2020), pp. 607–629. doi: 10.1007/978-3-030-97845-7_11 [32] m. van dooren, j. j. g. de vries, j. h. 
janssen, emotional sweating across the body: comparing 16 different skin conductance measurement locations, physiology & behavior 106(2) (2012), pp. 298-304. doi: 10.1016/j.physbeh.2012.01.020 [33] r. thompson, habituation, editor(s): neil j. smelser, paul b. baltes, international encyclopedia of the social & behavioral sciences, pergamon, 2001, pp. 6458-6462, isbn: 9780080430768. [34] m. m. bradley, p. j. lang, the international affective digitized sounds (2nd edition; iads-2): affective ratings of sounds and instruction manual. technical report b-3. university of florida, gainesville, fl. doi: 10.1016/j.physbeh.2012.01.020 [35] m. kuhn, a. m. gerlicher, t. b. lonsdorf, navigating the manyverse of skin conductance response quantification approaches – a direct comparison of trough-to-peak, baseline correction, and model-based approaches in ledalab and pspm. psychophysiology 59(9) (2022), art. no. e14058. doi: 10.1111/psyp.14058 [36] d. leiner, a. fahr, h. früh, eda positive change: a simple algorithm for electrodermal activity to measure general audience arousal during media exposure, communication methods and measures 6(4) (2012), pp. 237-250. doi: 10.1080/19312458.2012.732627 [37] m. benedek, c. kaernbach, a continuous measure of phasic electrodermal activity, journal of neuroscience methods 190(1) (2010), pp 80-91. doi: 10.1016/j.jneumeth.2010.04.028 [38] d. bari, h. aldosky, c. tronstad, h. kalvøy, ø. martinsen, electrodermal activity responses for quantitative assessment of felt pain. journal of electrical bioimpedance 9(1) (2018), pp 5258. doi: 10.2478/joeb-2018-0010 [39] v. bruschi, n. dourou, g. iadarola, a. poli, s. spinsante, s. cecchi, skin conductance under acoustic stimulation: analysis by a portable device. lecture notes of the institute for computer sciences, social-informatics and telecommunications engineering 456 (2023), pp. 62-78. 
doi: 10.1007/978-3-031-28663-6_6

acta imeko, february 2015, volume 4, number 1, 111 – 120, www.imeko.org

frequency-domain characterization of random demodulation analog-to-information converters
doris bao, pasquale daponte, luca de vito, sergio rapuano
department of engineering, university of sannio, piazza roma, 21, 82100 benevento, italy

section: research paper
keywords: analog-to-information converter; compressive sampling; testing; frequency domain
citation: doris bao, pasquale daponte, luca de vito, sergio rapuano, frequency-domain characterization of random demodulation analog-to-information converters, acta imeko, vol. 4, no. 1, article 17, february 2015, identifier: imeko-acta-04 (2015)-01-17
editor: paolo carbone, university of perugia
received january 14th, 2014; in final form april 4th, 2014; published february 2015
copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: (none reported)
corresponding author: luca de vito, e-mail: devito@unisannio.it

abstract
the paper aims at proposing test methods for analog-to-information converters (aics). in particular, the objective of this work is to verify whether figures of merit and test methods, currently defined in standards for traditional analog-to-digital converters, can be applied to aics based on the random demodulation architecture. for this purpose, an aic prototype has been designed, starting from commercially available integrated circuits. a simulation analysis and an experimental investigation have been carried out to study the additional influencing factors, such as the parameters of the reconstruction algorithm. results show that standard figures of merit are in general capable of describing the performance of aics, provided that they are slightly modified according to the proposals reported in the paper. in addition, test methods have to be modified in order to take into account the statistical behavior of aics.

1. introduction
high-speed data acquisition is becoming a relevant topic in advanced applications, such as high-speed radar and communications, signal analysis, high-speed video acquisition, and so on. moreover, it is a relevant challenge in wideband spectrum sensing for software defined radio and cognitive radio applications [1], [2]. such demand often is not met by traditional analog-to-digital converters (adcs), due to technological limits in fast sampling rates [3]. the recent studies about compressive sampling (cs) drew a possible solution for signals that can be represented by a finite number of non-zero elements in a specific domain. they demonstrated that, for such a class of signals, it is possible to reconstruct the original waveform from a set of samples of a lower dimension than that required by the shannon theorem. the idea underlying the aic is to spread the frequency content of the input signal.
in this way, the high-frequency components, folded back to low frequencies, can be acquired by an adc with a lower sampling frequency than that required by the shannon theorem for the original signal. based on this concept, different architectures have been proposed, implementing the frequency spreading by exploiting: (i) non-uniform [4] or random sampling [5], (ii) random filters [6], and (iii) random demodulation [3]. the aim of the paper is to define performance parameters and test methods for aics, starting from the state of the art of research and the scientific knowledge about adc testing, well summarized in [7] and [8]. to this aim, the first step is the application to aics of standard parameters and test methods currently defined for adcs, in order to study how they are influenced by (i) the aic architecture type, (ii) the aic design parameters, and (iii) the circuit non-idealities. in the scientific literature, few papers can be found facing aic testing, and most of them take into account only a reduced set of figures of merit (foms) and influencing parameters [9]. in [10], the authors presented a preliminary investigation, carried out in simulation and on a first aic prototype based on a digital oscilloscope, about the application of standard adc foms to the aic. this paper is an extended version of that work, in which new results are presented and a new aic prototype is used, based on commercial integrated circuits. as in [10], the aic architecture considered in this paper is based on random demodulation, as it does not require a high-sampling-frequency adc. however, the test methods and considerations can be easily extended to the other types of aic architectures. in particular, in the paper, a characterization of a random demodulation aic has been carried out by applying a reconstruction algorithm to the aic output and evaluating the dynamic parameters in the frequency domain defined for adc testing. to this aim, in a first phase, a behavioural model of the random demodulation aic has been simulated, by considering the non-idealities introduced by its main building blocks. in a second phase, an aic prototype has been designed, following the theoretical descriptions found in the literature, and an experimental analysis has been conducted on it.
the paper is organized as follows: in section 2, an introductory description of compressive sampling theory is given; in section 3, the random demodulation architecture, which was used both in the simulation and the experimental analyses of this work, is described; in section 4, the approach followed for aic testing is explained; then, in section 5, a simulation phase is reported, in which the influence of several factors, such as circuit non-idealities, aic design parameters and reconstruction algorithm parameters, has been investigated. in section 6, the foms defined for traditional adcs are revised. finally, in section 7, the experimental analysis is presented and results are discussed.

2. theoretical background
the idea underlying the cs approach is that many natural signals have concise representations when expressed in a convenient basis [11]. as an example, audio signals have sparse representations in the short-time fourier transform domain, or in the modified discrete cosine transform domain [12]. another example is given by radar echo signals, which, depending on the radar signal type, can have sparse representations in the time, frequency, wavelet, or time-frequency domains [13]. sparse representations of natural signals, audio, images and videos are currently exploited by transform coding schemes, such as those used by the jpeg, jpeg2000, mpeg, and mp3 standards. however, in signal compression, signals are acquired using nyquist rate converters, then they are transformed into a proper domain, where less significant coefficients are discarded. cs, instead, aims to acquire directly the compressed version of the signal, without wasting acquisition or memory resources, by taking a vector y of observations of the signal to be acquired, where the size of y is lower than that required by the shannon theorem to digitize the signal. in the past, some other techniques have been proposed to overcome the shannon theorem constraints in some specific conditions. the equivalent time sampling of time domain signals is an example of such techniques. however, equivalent time sampling requires the observed portion of the signal to be repetitive. cs, instead, is applicable even to a non-repetitive signal, provided that a domain can be found where the representation of such a signal is sparse. for a compressible signal x(t), if x is the vector of n samples of it acquired according to the shannon theorem, its compressed counterpart is represented by a vector y of size m < n, such that:

y = φ·x, (1)

where φ is a matrix modelling the compression process. it can be demonstrated that an estimate of x can be reconstructed from y according to (1) if a matrix transformation ψ exists, such that:

x = ψ·c, (2)

where c has only k < m non-zero elements. the above defined condition is not rare in reality, since many natural signals are sparse or compressible, in the sense that they have concise representations when expressed in the proper basis. by combining (1) and (2), the following expression is obtained:

y = a·c, (3)

where a = φψ is an m × n matrix and, therefore, (3) is an under-determined linear system in c. the system can be solved by finding the solution of (3) that minimizes the ℓ0 norm, that is, the one having the lowest number of non-zero elements in c:

ĉ = arg min ||c||₀ subject to y = a·c. (4)

the minimization of the ℓ0 norm is both numerically unstable and np-complete, requiring an exhaustive enumeration of all (n choose k) possible locations of the non-zero entries in c [14].
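the recovery problem can be sketched numerically as follows, with the ℓ1 relaxation introduced next posed as a linear program (basis pursuit); scipy is assumed, the sparsifying basis is taken as the identity (ψ = i) and all sizes are toy values.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 64, 24, 3                      # signal length, measurements, sparsity

c_true = np.zeros(n)                     # k-sparse coefficient vector
c_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # a = phi * psi, here with psi = identity
y = A @ c_true                                 # compressed observations, eq. (3)

# basis pursuit: min ||c||_1 s.t. a*c = y, written as an lp over z = [c, t]
# with constraints  c - t <= 0,  -c - t <= 0,  [a 0] z = y,  t >= 0
obj = np.concatenate([np.zeros(n), np.ones(n)])
I = np.eye(n)
A_ub = np.block([[I, -I], [-I, -I]])
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n, method="highs")
c_hat = res.x[:n]
print("max reconstruction error:", np.abs(c_hat - c_true).max())
```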
therefore, the solution is approximated with that obtained by the minimization of the ℓ1 norm:

ĉ = arg min ||c||₁ subject to y = a·c. (5)

in the presence of additive noise e, equation (3) becomes:

y = a·c + e (6)

and the minimization problem is modified as:

ĉ = arg min ||c||₁ subject to ||y − a·c||₂ ≤ τ. (7)

normal, rotations > normal, and acceleration > normal.

3. results
3.1. eeg classification results
classification accuracies for normal vs. acceleration are shown in figures 4a and 4b. for all participants, accuracies using the differential signal for parietal electrodes (c3 − cz) were significantly higher than the chance level of 50 %. feature vectors using (c3 − cz) worked well for all participants (participant 1: 76.3 % (p = 2.97 × 10⁻⁴), participant 2: 86.3 % (p = 3.50 × 10⁻⁴), participant 3: 80.0 % (p = 1.01 × 10⁻⁴)). the differential signal for frontal electrodes (af3 − af4) also showed significantly higher accuracy than the chance level for all participants (participant 1: 75.0 % (p = 4.39 × 10⁻⁵), participant 2: 77.5 % (p = 8.66 × 10⁻⁵), participant 3: 65.0 % (p = 0.0330)).

figure 4. classification accuracies using eeg signals. (a) binary classification results comparing acceleration and normal conditions using differential signal c3 − cz; (b) binary classification results comparing acceleration and normal conditions using differential signal af3 − af4; (c) binary classification results comparing rotation and normal conditions using differential signal c3 − cz; (d) binary classification results comparing rotation and normal conditions using differential signal af3 − af4. *p < 0.05, **p < 0.01, ***p < 0.001 by t-test.

classification accuracies for normal vs. rotation are shown in figures 4c and 4d. participant 2 did not show significance for either differential signal (c3 − cz: 61.3 % (p = 0.0967), af3 − af4: 50.0 % (p = 0.50)). the other participants also showed low classification accuracies, except participant 3 for af3 − af4 (75.0 %, p = 1.96 × 10⁻⁴).

3.2. fmri activation areas
figure 5a shows three significant activation areas that were obtained by the contrast (rotations and acceleration) > normal (p < 0.001, uncorrected for multiple comparisons). we expected the contrast to show the net effect of transformed cursor movement on brain activity. the largest cluster was located in the left middle frontal gyrus (mfg) (mni coordinates: [−48, 44, 26], t = 3.61), and the second largest cluster was in the right lateral orbitofrontal cortex (lofc) (mni coordinates: [36, 54, −10], t = 3.53). the left inferior parietal lobe (ipl) also showed a small activated cluster (mni coordinates: [−52, −36, 58], t = 3.13). looking at the rotations > normal and acceleration > normal contrasts (figures 5b and 5c), the rotation condition showed activated areas in the left ipl (mni coordinates: [−46, 38, 30], t = 4.02, p < 0.001, uncorrected) and right mfg (mni coordinates: [28, 6, 56], t = 3.24, p < 0.001, uncorrected). the acceleration condition showed many activated areas, and the largest cluster was located in the left and right ipls (mni coordinates: left, [−62, −36, 46], t = 4.63, p < 0.05, family-wise error corrected; right, [68, −24, 34], t = 4.42, p < 0.001, uncorrected). the right lofc was also the third largest activated area (mni coordinates: [36, 54, −10], t = 4.15, p < 0.001, uncorrected).

figure 5. activation areas in general linear model analysis for contrasts (rotations + acceleration) > normal (a), rotation > normal (b), and acceleration > normal (c). all activated areas shown are significant at a threshold of p < 0.001, uncorrected for multiple comparisons.
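the fft-based feature extraction and support vector machine classification summarized later in the conclusions can be sketched as follows; scikit-learn is assumed, and the frequency bands, the use of log band power and the svm settings are illustrative assumptions, not the authors' exact choices.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

BANDS = [(4, 8), (8, 13), (13, 30)]      # illustrative theta/alpha/beta bands, hz

def band_power_features(trials, fs):
    """trials: (n_trials, n_samples) differential eeg (e.g. c3 - cz).

    returns one log band-power value per band and per trial,
    computed from the fft power spectrum of each trial.
    """
    freqs = np.fft.rfftfreq(trials.shape[1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(trials, axis=1)) ** 2
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1) for lo, hi in BANDS]
    return np.log(np.column_stack(feats))

# hypothetical trials x_normal, x_accel with shape (n_trials, n_samples), fs in hz:
# X = np.vstack([band_power_features(x_normal, fs), band_power_features(x_accel, fs)])
# y = np.r_[np.zeros(len(x_normal)), np.ones(len(x_accel))]
# acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()   # binary accuracy
```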
4. discussion
our results showed that significantly high classification accuracies, especially for acceleration vs. normal classification, were obtained for all participants using only two eeg electrode signals, suggesting that brain activity varied enough across cursor movement transformations to be discriminated. since the eeg data were not contaminated by electrooculogram or motion artifacts, and all eight target positions were mixed in the classification analysis, the variance in brain activity can be interpreted as a reflection of activity elicited by the cursor transformation.

4.1. difference between rotation and acceleration
as shown in figure 4, classification performance for rotation (c and d) was lower than that for acceleration (a and b), though the participants reported that rotation was more frustrating than acceleration. we think this is due to the timing of the occurrence of the frustration. rotation provided difficulty at the initiation of movement, while acceleration posed difficulty towards the end of the task period, because it was hard to stop the cursor exactly on the target. we used eeg data from the time period immediately after the reaching tasks were completed to evaluate frustration, to ensure that the data did not contain other effects such as motion and movement directions. thus, it is possible that the time period used included more frustration elicited by acceleration than by rotation.

4.2. validity of the electrode positions providing high accuracies
under the pleasure-arousal-dominance (pad) model [16] and its two-dimensional variation [17], [18], which employs arousal (intense – calm) and valence (positive – negative) as its orthogonal axes, emotions related to user-unfriendliness of interfaces could be considered as high arousal and low valence. in acceleration vs. normal (figure 4a), c3 − cz showed high accuracies for all participants. this is comparable with results from previous studies that showed a relation between unpleasant image stimulation and parietal regions [14], [15]. electrodes in frontal regions, af3 − af4, were also found to be effective for classification (figure 4b). kostyunina and kulikov [13] reported that alpha wave power significantly increased at f3, t4, and o1 when feeling anger, which is considered a stronger emotion than annoyance or irritation [19]. disgust, which also has high arousal and low valence, can also be distinguished from a calm state using right frontal electrodes [10]. that may be the reason why af3 − af4 showed relatively high accuracy in our results; the differential signal enhanced the activity difference between the right and left hemispheres.

4.3. comparison of fmri and eeg results
although fmri was performed for only one participant, we believe the data, as it compares with existing findings, does provide some insight into the emotional responses elicited by the tasks. even if we could not obtain high classification accuracies in the rotation vs. normal condition, due to the time lag between the task period and the rest period (i.e. the data used for analysis), the fmri analysis revealed whether negative emotions were elicited by both of the cursor transformations. our fmri analysis showed activation mainly in the left and right mfgs, the right lateral orbitofrontal gyrus, and the left and right ipls.
the left and right mfgs, that were mainly activated in the rotation condition, are associated with frustration [8] as well as negative feelings of sadness and anger [7]. the right lofc is associated with negative feelings elicited by angry face observation [20] and stress induction [21] as well as regulation of negative emotions [22]. the ipl relates to various cognitive functions, including attention [23], language [24], action processing, and emotional action observation [25]-[28]. considering our experimental tasks, emotional action observation is unlikely to have been the cause. rather, the ipl activation might have been due to reorienting and spatial attention [23], [29], which are motor planning and action-related functions [30]. controlling the cursor under transformed motion required motor planning for the entire duration of the task. furthermore, ipl activation was lower in the merged contrast (rotation and acceleration) > normal than the other contrasts. this may have occurred because the coordinates of the activated voxels in ipl were not exactly the same between acceleration and rotation, suggesting different strategies were employed for the reorienting or motor planning. comparing the relevant activation areas in fmri with the eeg electrodes positions used for classification, differential signal af3-af4 may have included brain activity around the right and left mfg and right lofc, and c3-cz may have included activity around the right and left ipl. to prioritize activity related to emotional response, it might be better to focus on af3-af4 differential activity in future work. further investigation with an increased sample size and eeg source localization analysis is also needed before more definitive interpretations can be made. 5. conclusions in this study, we used eeg to evaluate the usability of a human interface. we employed target-reaching tasks with transformations applied to cursor motion to introduce userunfriendliness. fft-based feature extraction and support vector machine classification revealed that two electrode signals from frontal regions were sufficiently effective in discriminating between user-friendly and user-unfriendly conditions. an fmri experiment using the same tasks further revealed activation in the left mfg and right lofc, which were previously reported as areas relevant to negative emotions. these results support our future plans to develop an interface with the ability to adapt its usability based on the user’s emotional response. references [1] p. rani, n. sarkar, c.a. smith, and j.a. adams: ‘affective communication for implicit human-machine interaction’, 2003 ieee international conference on systems, man and cybernetics, vols 1-5, conference proceedings, (2003) pp. 48964903. [2] s. jerritta, m. murugappan, r. nagarajan, and k. wan: ‘phyiological signals based human emotion recognition: a review’, 2011 ieee 7th internationla colloquium on signal processing and its applications, ieee, (2011) pp. 410-415 [3] m. li, and b.l. lu: ‘emotion classification based on gammaband eeg’, 2009 annual international conference of the ieee engineering in medicine and biology society, vols 1-20, (2009) pp. 1323-1326. [4] c.m. pawliczek, b. derntl, t. kellermann, r.c. gur, f. schneider, and u. habel: ‘anger under control: neural correlates of frustration as a function of trait aggression’, plos one, 8, (10), (2013) pp. e78503. [5] p.c. petrantonakis, and l.j. 
hadjileontiadis: ‘a novel emotion elicitation index using frontal brain asymmetry for enhanced eeg-based emotion recognition’, ieee t inf technol b, 15, (5), (2011) pp. 737-746. [6] p.c. petrantonakis, and l.j. hadjileontiadis: ‘adaptive emotional information retrieval from eeg signals in the time-frequency domain’, ieee t signal proces, 60, (5), (2012) pp. 2604-2616. acta imeko | www.imeko.org july 2014 | volume 6 | number 2 | 98 [7] k. vytal, and s. hamann: ‘neuroimaging support for discrete neural correlates of basic emotions: a voxel-based meta-analysis’, j cogn neurosci, 22, (12), (2010) pp. 2864-2885. [8] r. yu, d. mobbs, b. seymour, j.b. rowe, and a.j. calder: ‘the neural signature of escalating frustration in humans’, cortex, 54, (2014) pp. 165-178. [9] s.g. mason, a. bashashati, m. fatourechi, k.f. navarro, and g.e. birch: ‘a comprehensive survey of brain interface technology designs’, ann biomed eng, 35, (2), (2007) pp. 137169. [10] r.j. davidson, c.d. saron, j.a. senulis, p. ekman, and w.v. friesen: ‘approach withdrawal and cerebral asymmetry emotional expression and brain physiology .1.’, j pers soc psychol, 58, (2), (1990) pp. 330-341. [11] o. alzoubi, r.a. calvo, and r.h. stevens: ‘classification of eeg for affect recognition: an adaptive approach’, lect notes artif int, 5866, (2009) pp. 52-61. [12] c.c. chang, and c.j. lin: ‘libsvm: a library for support vector machines’, acm t intel syst tec, 2, (3), (2011). [13] b. guntekin, and e. basar: ‘event-related beta oscillations are affected by emotional eliciting stimuli’, neurosci lett, 483, (3), (2010) pp. 173-178. [14] m.b. kostyunina, and m.a. kulikov: ‘frequency characteristics of eeg spectra in the emotions’, neurosci behav physiol, 26, (4), (1996) pp. 340-343. [15] n. martini, d. menicucci, l. sebastiani, r. bedini, a. pingitore, n. vanello, m. milanesi, l. landini, and a. gemignani: ‘the dynamics of eeg gamma responses to unpleasant visual stimuli: from local activity to functional connectivity’, neuroimage, 60, (2), (2012) pp. 922-932. [16] a. mehrabian: ‘pleasure arousal dominance: a general framework for describing and measuring individual differences in temperament’, curr psychol, 14, (4), (1996) pp. 261-292. [17] t. eerola, and j.k. vuoskoski: ‘a comparison of the discrete and dimensional models of emotion in music’, psychology of music, 39, (1), (2010) pp. 18-49. [18] j.a. russell: ‘affective space is bipolar’, j pers soc psychol, 37, (3), (1979) pp. 345-356. [19] j.r. averill: ‘studies on anger and aggression implications for theories of emotion’, am psychol, 38, (11), (1983) pp. 11451160. [20] r. elliott, r.j. dolan, and c.d. frith: ‘dissociable functions in the medial and lateral orbitofrontal cortex: evidence from human neuroimaging studies’, cereb cortex, 10, (3), (2000) pp. 308-317. [21] n.y. oei, i.m. veer, o.t. wolf, p. spinhoven, s.a. rombouts, and b.m. elzinga: ‘stress shifts brain activation towards ventral 'affective' areas during emotional distraction’, soc cogn affect neurosci, 7, (4), (2012) pp. 403-412. [22] a. golkar, t.b. lonsdorf, a. olsson, k.m. lindstrom, j. berrebi, p. fransson, m. schalling, m. ingvar, and a. ohman: ‘distinct contributions of the dorsolateral prefrontal and orbitofrontal cortex during emotion regulation’, plos one, 7, (11), (2012) pp. e48107. [23] g.r. fink, j.c. marshall, p.h. weiss, and k. zilles: ‘the neural basis of vertical and horizontal line bisection judgments: an fmri study of normal volunteers’, neuroimage, 14, (1 pt 2), (2001) pp. s59-67. [24] m. vigneau, v. 
beaucousin, p.y. herve, h. duffau, f. crivello, o. houde, b. mazoyer, and n. tzourio-mazoyer: ‘metaanalyzing left hemisphere language areas: phonology, semantics, and sentence processing’, neuroimage, 30, (4), (2006) pp. 14141432. [25] b. de gelder, j. snyder, d. greve, g. gerard, and n. hadjikhani: ‘fear fosters flight: a mechanism for fear contagion when perceiving emotion expressed by a whole body’, proc natl acad sci u s a, 101, (47), (2004) pp. 16701-16706. [26] h. goldberg, a. christensen, t. flash, m.a. giese, and r. malach: ‘brain activity correlates with emotional perception induced by dynamic avatars’, neuroimage, 122, (2015) pp. 306317. [27] j. grezes, s. pichon, and b. de gelder: ‘perceiving fear in dynamic body expressions’, neuroimage, 35, (2), (2007) pp. 959967. [28] s. pichon, b. de gelder, and j. grezes: ‘emotional modulation of visual and motor areas by dynamic body expressions of anger’, soc neurosci, 3, (3-4), (2008) pp. 199-212. [29] m. corbetta, g. patel, and g.l. shulman: ‘the reorienting system of the human brain: from environment to theory of mind’, neuron, 58, (3), (2008) pp. 306-324. [30] s. caspers, k. zilles, a.r. laird, and s.b. eickhoff: ‘ale metaanalysis of action observation and imitation in the human brain’, neuroimage, 50, (3), (2010) pp. 1148-1167. decoding of emotional responses to user-unfriendly computer interfaces via electroencephalography signals template for an acta imeko event paper acta imeko issn: 2221-870x december 2016, volume 5, number 4, 12-18 acta imeko | www.imeko.org december 2016 | volume 5 | number 4 | 12 mn·m torque calibration for nacelle test benches using transfer standards christian schlegel, holger kahmann, rolf kumme physikalisch-technische bundesanstalt, bundesallee 100, 38116 braunschweig, germany section: research paper keywords: torque calibration; transfer standard; nacelle test bench citation: christian schlegel, holger kahmann, rolf kumme, mn·m torque calibration for nacelle test benches using transfer standards, acta imeko, vol. 5, no. 4, article 3, december 2016, identifier: imeko-acta-05 (2016)-04-03 section editor: lorenzo ciani, university of florence, italy received september 21, 2016; in final form november 2, 2016; published december 2016 copyright: © 2016 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited funding: the research within the project has received funding from the european union corresponding author: christian schlegel, e-mail:christian.schlegel@ptb.de 1. introduction the portion of renewable energies for electricity production is rising dramatically. for instance, last year the fraction of renewable energies in germany was 32.5 %. the portion of energy which was produced by wind energy was thereby 44.3 %, based on all renewable energies. the german government will increase the share of renewable energy in power generation to 40 to 45 % by 2025. from this development it is clear that wind energy is likely to provide the greatest contribution to this planned expansion. associated with this is a significant increase in the performance of the wind generators. this will lead to individual wind turbines more powerful in height, in wing span as well as in the provided electrical power, as seen in figure 1, taken from [1]. 
of course, reliable energy production from wind turbines will strongly depend upon their technical reliability. for that reason, several nacelle test benches have been established in the past. one crucial parameter for such a nacelle is the torque load, which is initiated in the field according to the strength of the wind field. in nacelle test benches, instead of using wind power, a special motor is used to create the torque. often an additional device is located between the motor and the nacelle to create axial forces as well as parasitic bending forces and moments on the drive train. one of the most important parameters of such test benches is the torque which is initiated in the nacelle. so far, no traceable calibration to national standards has been performed in such test benches. the paper will show calibration possibilities which already exist and also show future prospects.

figure 1. development of the size of wind turbines.

abstract
to verify all technical aspects of wind turbines, more and more nacelle test benches have come into operation. one crucial parameter is the initiated torque in the nacelles, which amounts to several mn·m. so far, no traceable calibration to national standards has been performed in such test benches. the paper will show calibration possibilities which already exist and also show future prospects.

the torque m is directly related to the electrical power pel and depends on the revolution speed n, or equivalently on the frequency f:

m = pel / (2·π·f), f = n / 60. (1)

it should be noted that the relationship between f and n in (1) holds when the revolution speed is given in revolutions per minute, rpm. table 1 shows an example of two points of operation. as shown in (1), the torque can be determined by an electrical power measurement. nevertheless, this kind of measurement results in relative uncertainties of several percent; that is why a more precise mechanical measurement of the torque is necessary. special torque transducers were built to measure the torque in the drive train. unfortunately, not all of these transducers are calibrated in the mn·m range, due to the lack of a calibration facility. sometimes they are calibrated partially and extrapolated, e.g. via finite element calculations, to the nominal torque [2]-[4]. the paper is organised as follows: in chapter 2, an existing standard torque calibration machine is described, with which traceable calibrations up to 1.1 mn·m can be performed. in chapter 3, a novel 5 mn·m torque transducer and its calibration according to the existing standard din 51309 with the machine described in chapter 2 are introduced; in addition, a proposal is made for how one could extrapolate calibration data to a non-measurable range. in chapter 4, a new calibration machine is introduced which is able to calibrate torques up to 20 mn·m.

2. about the traceable calibration of torque
according to the definition of the torque m, traceability can be realized via a lever arm l on which a force f acts perpendicularly; in the simplest case, this is the gravitational force m·gloc, whereby m is the mass and gloc is the local gravitational acceleration:

m = r × f, m = r·f·sin∠(r, f) = m·gloc·l, r = l. (2)

based on this principle, there are torque calibration machines which, for the force initiation, use mass stacks that can be varied in such a way that various torque steps can be applied.
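both relations are easy to check numerically; the short sketch below reproduces the two operating points of table 1 from (1), with function names chosen for illustration.

```python
import math

def torque_from_power(p_el_w, n_rpm):
    """torque in n*m from electrical power (w) and revolution speed (rpm), eq. (1)."""
    f = n_rpm / 60.0                       # revolutions per second
    return p_el_w / (2.0 * math.pi * f)

def torque_from_lever(mass_kg, g_loc, lever_m):
    """deadweight torque m = m * g_loc * l for a perpendicular force, eq. (2)."""
    return mass_kg * g_loc * lever_m

# the two operating points of table 1:
print(torque_from_power(5e6, 14) / 1e6)    # ~3.4 mn*m at 5 mw, 14 rpm (f ~ 0.23 hz)
print(torque_from_power(10e6, 9) / 1e6)    # ~10.6 mn*m at 10 mw, 9 rpm (f = 0.15 hz)
```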
often the masses are coupled to the lever arm by means of thin metal bands (e.g. 20 to 30 µm thick) to get a very precise contact point for the deadweight force. several aspects of machine design and related measurements can be found in [6]-[12]. unfortunately, this principle cannot be used for very high torques in the mn·m range. in these cases, one can apply a system where the torque is created by an actor lever system and measured by a second force lever system [7]. the principle is illustrated in figure 2, where the levers are indicated in red. between the two levers, the torque transducer is mounted, indicated in yellow in figure 2. this principle is used, for example, by the ptb 1.1 mn·m standard calibration machine [7], see figure 3. on the measuring side, which is the upper traverse in figures 2 and 3, the lever is connected at both ends with force transducers. in this way, the torque is traced back to the length of the lever arm and the measured forces of the calibrated force transducers. in the machine, two different pairs of force transducers can be used: one (120 kn) for a lower range up to 220 kn·m and one pair (550 kn) for the upper range up to 1.1 mn·m. in the range up to 220 kn·m one obtains a relative uncertainty of the torque of 1.0·10⁻³, whereby in the upper range one gets 0.8·10⁻³.

figure 2. working principle of the double lever arm system of the 1.1 mn·m torque calibration machine.
figure 3. standard torque calibration machine for nominal torques up to 1.1 mn·m.

table 1. examples of the relation of electrical power and torque.
electrical power (mw) / revolution speed (rpm) / frequency (hz) / torque (mn·m)
5 / 14 / 0.23 / 3.4
10 / 9 / 0.15 / 10.6

the torque itself is created by two mechanical spindles which are driven by a motor as main drive, located between the two lower platforms, see figures 2 and 3. in figure 3, the blue boxes indicate gears, whereby the small boxes represent manual gears and the bigger boxes motor-driven gears. in addition, there is a secondary drive to define the horizontal position (left) and to reduce the cross forces (fx1, fx2) and the bending moments (mz1, mz2). to compensate for the vertical force fz generated by the transducer weight and the lower lever arm, a hand-operated drive unit (under the lower lever arm) can be used. spring elements (indicated by the blue cylindrical elements in figure 3) are connected to the measuring lever to measure the parasitic mechanical components in the contact area between force transducer and lever arm. with the aid of these spring elements, all parasitic components can be minimized during a calibration procedure. last but not least, a reference torque transducer is mounted in the lower part of the machine (below the red flange), which is, in addition, equipped with measuring bridges for bending forces and moments. this transducer offers additional possibilities to check the adjusted torque and the alignment of the whole measuring axis. the uncertainty of the torque which is provided by the machine mainly depends on the lengths l1 and l2 of the double lever arm, on the measured force values f1 and f2 at both ends of the lever arm, and on the remaining parasitic moments mz1 and mz2, which are due to not fully compensated bending components measured on the spring elements connected with the force transducers. the model for the uncertainty evaluation of the provided torque is:

m = f1·l1 − f2·l2 + mz1 + mz2. (3)

the length of the lever arm was measured with a special coordinate measuring arm within an accuracy of δl = 100 µm. the force transducers can be calibrated in the ptb 1 mn force standard machine. the calibration of the transducers is repeated in a period of two years. thereby, a relative measuring uncertainty of better than 1.0·10⁻⁴ can be achieved. nevertheless, in the uncertainty budget, a more conservative value of 1.0·10⁻³ was taken. for technical reasons, the lever arm cannot be dismounted every two years. therefore, random deformation measurements using laser interferometers are performed to monitor the stability. as seen from table 2, the main uncertainty contribution results from the calibration of the force transducers. the contributions from the parasitic moments mz1 and mz2 are very small and could actually be neglected. the chosen distributions are all rectangular because, in most of the cases where values are taken from calibration certificates (here f1, f2 and l1, l2), this distribution is recommended, see also [13]. for the bending moments mz1, mz2, a rectangular distribution is also applied to get a more conservative uncertainty. in any case, the rectangular distribution should be applied if no details about the possible distribution of a certain value are known [13], which applies to the case of the bending moments.

3. torque transfer transducers for nacelle test bench calibration
in this section, we will first show the partial calibration of a 5 mn·m torque transfer transducer and secondly describe a procedure for how to extrapolate torque values if only a partial range was calibrated. this procedure will be illustrated by means of the data of the reference transducer of the 1.1 mn·m calibration machine.

3.1. calibration of a 5 mn·m torque transducer in the partial range up to 1.1 mn·m
in order to realize a traceable calibration for nacelle test benches, the empir project "torque measurement in the mn·m range" was started in october 2015 [5]. one of the objectives of this project is to develop novel traceable calibration methods for torque values in nacelle test benches with the use of transfer standards for the range above 1 mn·m. for the realization of this goal, a commercial torque transducer with a nominal range of 5 mn·m will be used, see figure 4. during the above-mentioned empir project, an extrapolation procedure for the range above 1.1 mn·m will be developed. in figure 4, the commercial transducer is depicted together with two dmp 41 bridge amplifiers. the transducer is equipped with two independent torque channels, two channels for transverse bending forces, two channels for bending moments and two channels for axial forces. due to these additional channels, an investigation of multi-component loading on the measurement of torque will be possible. in particular, crosstalk effects in the case of 6-component loading (main torque, 3 directional forces, 2 directional bending torques) will be studied to describe effects on the torque measurements which occur in large nacelle test benches.

table 2. uncertainty budget of the 1.1 mn·m standard calibration machine.
parameter / value / standard uncertainty / distribution / sensitivity coeff. / uncertainty contribution / index
f1 / 500.0 / 0.289 / rect. / 1.1 / 0.32 / 49.6 %
l1 / 1.1 / 52.5·10⁻⁶ / rect. / 500 / 0.026 / 0.3 %
f2 / −500.0 / 0.289 / rect. / 1.1 / 0.32 / 49.6 %
l2 / 1.1 / 52.5·10⁻⁶ / rect. / 500 / 0.026 / 0.3 %
mz1 / 0.02 / 0.0115 / rect. / 1.0 / 0.012 / 0.0 %
mz2 / 0.02 / 0.0115 / rect. / 1.0 / 0.012 / 0.0 %
m (combined) / 1100.04 / 0.451
result: m = 1100.04, expanded uncertainty 0.90 (k = 2, 95 % coverage interval, normal distribution), relative: 8.2·10⁻⁴

figure 4. 5 mn·m torque transducer together with two dmp 41 bridge amplifiers for the readout of the 8 channels of the transducer.
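the model (3) and the combined uncertainty of table 2 can be reproduced with a few lines, assuming rectangular distributions and the sensitivity coefficients implied by (3) (∂m/∂f1 = l1, ∂m/∂l1 = f1, and so on); a minimal gum-style sketch:

```python
import math

# model (3): m = f1*l1 - f2*l2 + mz1 + mz2   (forces in kn, lengths in m)
f1 = 500.0; f2 = -500.0; l1 = l2 = 1.1; mz1 = mz2 = 0.02
m = f1 * l1 - f2 * l2 + mz1 + mz2                  # 1100.04 kn*m

# (sensitivity coefficient, standard uncertainty) pairs from table 2;
# standard uncertainties are rectangular half-widths divided by sqrt(3)
contrib = [(l1, 0.289), (f1, 52.5e-6),             # f1, l1
           (-l2, 0.289), (-f2, 52.5e-6),           # f2, l2
           (1.0, 0.0115), (1.0, 0.0115)]           # mz1, mz2
u_c = math.sqrt(sum((c * u) ** 2 for c, u in contrib))
print(m, u_c, 2 * u_c)   # 1100.04, ~0.451, expanded (k = 2) ~0.90 kn*m
```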
finally, a calibration procedure for large nacelle test benches will be developed during the empir program. the calibration procedure will enable the traceability of torque loads up to 20 mn·m and will include an uncertainty model that considers crosstalk effects. first calibration measurements were performed with the shown 5 mn·m torque transducer in the above-described ptb 1.1 mn·m standard calibration machine. thereby, the procedure according to the german standard din 51309 was applied [14]. figure 5 shows the calibration procedure according to din 51309. the diagram shows the relationship between the applied torques as a function of time. after three preloads (applying the nominal/maximum torque value of the transducer), a certain number of increasing and decreasing torque steps are performed in three mounting positions. mounting positions means that the transducer is rotated with respect to the axis about which the torque acts; thereby, the three mounting positions 0º, 120º and 240º are often used. at least 8 torque steps are required to determine a linear or polynomial fit. from the data, several characteristic parameters are derived, which are used for the determination of the measurement uncertainty as well as for a classification of the torque transducer. the parameters are indicated in figure 5 in the calibration procedure. for example, the hysteresis is determined by the difference between the maximum and minimum torque values obtained in one certain torque step; thereby, the data from all mounting positions as well as the steps from increasing and decreasing torque are taken into account. more details about the indicated parameters of figure 5 and their use for an uncertainty determination can be found in [14]. one characteristic result of the calibration is the deviation from linearity shown in figure 6. the curves reflect the relative deviations of the measured values from a fitting straight line; the deviations are related to the measured mean value of the final value. the fitting straight line is defined by the value of its slope; the axis intercept is zero. the upper diagram in figure 6 shows the data of the 5 mn·m transducer, the lower diagram the data from the reference transducer, which was simultaneously calibrated. note the difference in the shape of the two diagrams. the shape for the reference transducer shows the common behaviour where a transducer is used up to its nominal value (end value). in the case of the 5 mn·m device, the transducer was only used up to 22 % of its nominal value, which then leads to a curve shape as shown in figure 6. one of the aims of the mentioned empir project is to find an extrapolation procedure using the data from the 1.1 mn·m calibration to extrapolate up to 5 mn·m. the significant linearity deviation might be used in this extrapolation procedure. during the calibration of the 5 mn·m transducer, partial ranges within the 1.1 mn·m range were also measured. these data will also be used to develop an extrapolation procedure. in addition, the signals from the other six parasitic channels were also recorded. the analysis of these data will provide information about the correlations of the torque with the bending forces and moments as well as the axial forces.
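a small sketch of the linearity-deviation evaluation behind figure 6, assuming that the zero-intercept straight line is fitted by least squares; the exact fitting convention of din 51309 may differ, so this is illustrative only.

```python
import numpy as np

def linearity_deviation(torque, reading):
    """relative deviation of readings from a zero-intercept straight-line fit.

    the least-squares slope for a line through the origin is
    sum(x*y) / sum(x*x); deviations are expressed relative to the
    mean reading at the largest applied torque (the final value).
    """
    torque = np.asarray(torque, dtype=float)
    reading = np.asarray(reading, dtype=float)
    slope = (torque * reading).sum() / (torque * torque).sum()
    full_scale = reading[torque == torque.max()].mean()
    return (reading - slope * torque) / full_scale
```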
3.2. extrapolation of calibration data measured in a partial range to the full range
in this subsection, a proposal will be made of how the calibration data measured in a partial range have to be extrapolated to the full range. a good starting point for this task is the data of the reference transducer, because here data are available both from a partial range and from the full range. this gives us the opportunity to check how precise such an extrapolation will be. in figure 7, the interpolation deviation of the measurement up to 400 kn·m and the interpolation deviation up to the full range of 1.1 mn·m of the reference transducer are compared. in contrast to figure 6, all three different mounting positions were averaged. as can be seen from figure 7, the curves for the upward and downward measurements exhibit a sinusoidal behaviour. due to this, the attempt is made to fit the data with a sinusoidal function. the function chosen for such a fit is:

y = y0 + a·sin(π·(x − xc) / w). (4)

thereby, y0 is the offset, a is the amplitude, xc is the zero, and w is the period of the sinusoidal function. these parameters are also illustrated in figure 8, where one can see the sinusoidal fit of the upward row (only increasing torque) of the full-range data of the reference transducer. the uncertainties of the parameters are quite small and lie in the range of a few percent.

figure 6. the upper curve is the linearity deviation from a straight line fitted through the origin at zero for the 5 mn·m transducer. the lower curve shows the same behaviour for the 1 mn·m reference transducer. the deviations are related to the measured mean values of the final value.
figure 5. calibration procedure according to the german standard din 51309. several parameters which are derived from the calibration data are indicated. these parameters, as well as other additional influences, are used for an uncertainty determination.

the procedure for the extrapolation is now to fit the partial-range data and the full-range data with a sinusoidal function according to (4). knowing the fitted parameters, one can then think about a strategy of how to scale the parameters from the partial-range function to the full-range function. figure 9 shows the two fits of the partial-range data and the full-range data of the upward torque measurement. the horizontal, coloured dashed lines are the offsets of the two fitted functions. in view of the fact that the difference between the two offsets (y0) is very small, one could neglect them for an extrapolation. the parameters of the fits and their respective uncertainties can be seen in table 3. here one can see that the offsets have the largest uncertainty, whereby the zero and the period of the sinusoidal functions have smaller uncertainties. starting from the parameters of the partial-range data, an extrapolated sinusoidal function was calculated, which is shown as a dashed curve in figure 9. thereby, the offset y0 from the partial range was chosen, the parameters xc and w were scaled with a factor of 1100/400 (which is just the scaling of the measurement range), and the amplitude was calculated by scaling the amplitude of the partial range with the ratio of the amplitudes of the full range to the partial range. normally one would not know this amplitude ratio, but one could estimate it by measuring several partial ranges. in this way, one would obtain several amplitudes which could be extrapolated linearly to the full measuring range.
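the fit (4) and the parameter scaling just described can be sketched with scipy's curve_fit; the partial-range data below are synthetic stand-ins generated from the table 3 parameters, and the slope m_l in the last step is a placeholder.

```python
import numpy as np
from scipy.optimize import curve_fit

def dev_model(x, y0, a, xc, w):
    """interpolation-deviation model (4): y = y0 + a*sin(pi*(x - xc)/w)."""
    return y0 + a * np.sin(np.pi * (x - xc) / w)

# synthetic stand-in for the measured partial-range deviations (up to 400 kn*m),
# generated from the table 3 parameters plus a little noise
rng = np.random.default_rng(1)
x_part = np.linspace(40.0, 400.0, 10)
y_part = dev_model(x_part, 2.93e-5, 6.44e-5, 262.53, 300.44) \
         + 2e-6 * rng.standard_normal(x_part.size)

popt, _ = curve_fit(dev_model, x_part, y_part, p0=[3e-5, 6e-5, 260.0, 300.0])
y0, a, xc, w = popt

scale = 1100.0 / 400.0           # measuring-range ratio used to scale xc and w
amp_ratio = 3.60e-4 / 6.44e-5    # amplitude ratio; here taken from table 3, in
                                 # practice estimated from several partial ranges

def extrapolated_point(x, m_l):
    """measuring point outside the partial range via (5): y = m_l*x + dev."""
    return m_l * x + dev_model(x, y0, a * amp_ratio, xc * scale, w * scale)
```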
in this way, one would obtain several amplitudes which could be extrapolated linearly to the full measuring range. with the aid of the extrapolated interpolation deviation curve, one can now calculate measuring points which lie outside of the partial range as follows:

yi_ex = ml · xi + ∆yi_ex .  (5)

thereby, the extrapolated measuring points yi_ex depend on the slope ml of the linear interpolation in the partial range up to 400 kn·m and on the extrapolated interpolation deviation ∆yi_ex, which is shown in figure 9. last but not least, one can now compare the measuring points calculated according to (5) with the actually measured points in the full range. as can be seen from figure 10, the difference between the actually measured points and the extrapolated ones is mainly below 0.1 %. this difference is smaller than or just in the order of the uncertainty of the measured torque in this region. using the linear interpolation deviation data for an extrapolation procedure therefore seems to be a feasible way. nevertheless, it should be noted that in the actual case of the 5 mn·m transducer this proposed extrapolation procedure will be more uncertain due to the quite scattered data from the partial range, see the upper part of figure 6. further measurements of different partial ranges could improve the situation (a numerical sketch based on the table 3 parameters follows below).

table 3. fitted parameters of the partial range data and of the full range data with a sinusoidal approximation according to (4).
400 kn·m range (parameter | value | uncertainty | relative uncertainty in %):
y0 | 2.93e-05 | 4.72e-06 | 16.1
xc | 262.53 | 16.54 | 6.3
w | 300.44 | 13.93 | 4.6
a | 6.44e-05 | 6.53e-06 | 10.1
1.1 mn·m range (parameter | value | uncertainty | relative uncertainty in %):
y0 | 5.74e-05 | 1.18e-05 | 20.5
xc | 797.63 | 20.84 | 2.6
w | 852.99 | 16.48 | 1.9
a | 3.60e-04 | 1.79e-05 | 5.0

figure 7. interpolation deviation relative to the nominal torque of the reference transducer for a partial range up to 400 kn·m and the full range up to 1.1 mn·m. the curves are separated depending on whether the torque was increasing or decreasing.

figure 8. sinusoidal fit of the interpolation deviation data of the upward torque measurement of the reference transducer in the full range.

figure 9. interpolation deviation of the upward row of the partial range and the full range data together with a sinusoidal fit. in addition, an extrapolated sinusoidal function (dashed curve) is plotted.

figure 10. relative difference between the extrapolated measuring points and the measured points in the full range up to 1.1 mn·m. the difference is given in percent relative to the measured points.
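combining the table 3 parameters with the scaling rule described above, the extrapolation of equations (4) and (5) can be sketched in python as follows. the slope ml and the torque steps are hypothetical placeholders, the amplitude ratio is taken directly from table 3 (in practice it would be estimated from several partial ranges), and the deviation is treated here as already expressed in the units of y.

import numpy as np

# partial-range (400 kn·m) fit parameters from table 3
y0_p, a_p, xc_p, w_p = 2.93e-5, 6.44e-5, 262.53, 300.44
scale = 1100.0 / 400.0            # scaling of the measurement range
amp_ratio = 3.60e-4 / 6.44e-5     # full-range / partial-range amplitude (table 3)

def dy_extrapolated(x):
    # extrapolated interpolation-deviation curve: offset y0 kept from the
    # partial range, xc and w scaled by the range ratio, amplitude rescaled
    return y0_p + (a_p * amp_ratio) * np.sin(np.pi * (x - xc_p * scale) / (w_p * scale))

ml = 1.0                                      # slope of the linear interpolation (placeholder)
x = np.array([500.0, 700.0, 900.0, 1100.0])   # torque steps outside the partial range, kn·m
y_ex = ml * x + dy_extrapolated(x)            # equation (5)
print(y_ex)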
4. development of a torque calibration machine for a range of up to 20 mn·m

to extend the torque calibration range above 1.1 mn·m, a completely new calibration machine will be built at ptb. this machine will be part of a wind competence center which is funded by the german federal ministry for economic affairs and energy. besides the new torque calibration machine, this center also includes a big coordinate measuring machine and a wind channel for the calibration of lidar systems. the coordinate measuring machine should be able, e.g., to geometrically measure the gear parts of the nacelles; its capacity will be sufficient for gearwheels with a diameter of up to 3 m. to realize this new center, two new buildings will be built on the ptb site: one for the coordinate measuring machine and the lidar system, and one especially for the new torque calibration machine. the new torque calibration machine (see figure 11) will be designed in a first stage for torques of up to 5 mn·m, and in a second stage up to 20 mn·m. similar to the 1.1 mn·m calibration machine, the operation principle will also be based on two lever systems: one actor lever and a measuring lever. on the actor lever, the forces will be created by two 1.2 mn servo-hydraulic cylinders. the lever has a length of 6 m. in addition, bending moments and axial forces can also be applied by a pair of horizontally aligned servo-cylinders. last but not least, the servo-cylinders will also be able to operate dynamically in a frequency range of up to 3 hz. for that reason, the foundation of the machine is mounted on air springs. each spring can be individually adjusted in pressure to achieve optimal damping and to avoid resonance frequencies. the measuring lever system includes a pair of force transducers to measure the main force component of the torque and several spring elements for the detection of parasitic bending forces and moments. the base as well as the frame of the machine are already designed for 20 mn·m. by upgrading the machine with 3.6 mn hydraulic cylinders for the generation of the torque, an extension to 20 mn·m will be realized.

figure 11. design of a new standard torque calibration machine for torques up to 20 mn·m.

5. conclusions

to increase the reliability of wind turbines, extensive technical tests are performed in nacelle test benches. one important aspect of these tests is the torque in the mn·m range, which is introduced into the nacelle. traceable torque calibration in the mn·m range can so far only be realized by the 1.1 mn·m standard calibration machine at ptb. one solution for a traceable torque calibration of nacelles is to use transfer torque transducers which are calibrated in special standard calibration machines that are traced to the si. currently, a calibration of up to 1.1 mn·m of such transducers has been realized. for the above-mentioned higher torques, special extrapolation procedures have to be developed. one possibility could be to use in this extrapolation the knowledge of the characteristic sinusoidal shape of the interpolation deviation. to overcome the lack of calibration range, a new machine will be installed inside ptb's new wind competence center. this machine will be able to calibrate torques of up to 5 mn·m in a first stage and up to 20 mn·m in a second stage.

acknowledgement

the author would like to acknowledge the funding of the joint research project "14ind14 mn·m torque – torque measurement in the mn·m range" by the european union's horizon 2020 research and innovation programme and the empir participating states.

references
[1] upwind, "design limits and solutions for very large wind turbines", the sixth framework programme for research and development of the european commission (fp6).
[2] private communication, d. bosse, center for wind power drives (cwd), rwth aachen.
[3] emrp programme: "force traceability within the meganewton range", http://www.ptb.de/emrp/2622.html
[4] f. tegtmeier et al., "investigation of transfer standards in the highest range up to 50 mn within emrp project sib 63", xxi imeko world congress, 2015, prague, czech republic.
[5] empir programme: "torque measurement in the mn·m range", www.euramet.org/research-innovation/empir/empir-calls-and-projects/empir-call-2014/
[6] d. röske et al., metrological characterization of a 1 n·m torque standard machine at ptb, germany, metrologia 51 (2014), pp. 87-96.
[7] d. peschel et al., the new 1.1 mn·m torque standard machine of the ptb braunschweig/germany, proceedings of the imeko tc3 conference, cairo, egypt, 2005, p. 40.
[8] d. peschel et al., "proposal for the design of torque calibration machines using the principle of a component system", proceedings of the 15th imeko tc3 conference, october 7-11, 1996, madrid, spain, pp. 251-254.
[9] d. röske et al., "key comparisons in the field of torque measurement", 19th international conference on force, mass and torque, imeko tc3, cairo, egypt, 19-23 february 2005.
[10] d. röske et al., "realization of the unit of torque – determination of the force-acting line position in thin metal belts", proceedings of the 15th imeko tc3 conference, october 7-11, 1996, madrid, spain, pp. 261-264.
[11] d. röske et al., "on the problem of lever arm lengths in the realization of the torque – finite element calculations on lever models", proceedings of the 14th imeko tc3 conference, september 5-8, 1995, warsaw, poland, pp. 178-182.
[12] k. adolf, d. mauersberger, d. peschel, "specifications and uncertainty of measurement of the ptb's 1 knm torque standard machine", proceedings of the 14th imeko tc3 conference, september 5-8, 1995, warsaw, poland, pp. 174-176.
[13] gum, "guide to the expression of uncertainty in measurement", iso, geneva, 2008, isbn 92-67-10188-9.
[14] din 51309:2005-12, materials testing machines – calibration of static torque measuring devices.

determining the location of patients in remote rehabilitation by means of a wireless network

acta imeko issn: 2221-870x september 2023, volume 12, number 3, 1 – 5

oleksii tohoiev1
1 petro mohyla black sea national university, 68 desantnykiv str., 10, 54003 mykolaiv, ukraine

section: research paper
keywords: wi-fi; mac; wireless network; throughput; spatial flow; transport protocol
citation: oleksii tohoiev, determining the location of patients in remote rehabilitation by means of a wireless network, acta imeko, vol. 12, no. 3, article 8, september 2023, identifier: imeko-acta-12 (2023)-03-08
section editors: susanna spinsante, grazia iadarola, polytechnic university of marche, ancona, italy
received march 10, 2023; in final form june 13, 2023; published september 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was supported by petro mohyla black sea national university (pm bsnu) in mykolaiv (ukraine), and also financed in part of the pm bsnu science-research work by the ministry of education and science of ukraine "development of the modules for automatization of wireless devices for recovery of post-infarction, post-stroke patients in individual conditions of remote individual rehabilitation" (state reg. no. 0121u109898).
corresponding author: oleksii tohoiev, e-mail: oleksii.tohoiev@chmnu.edu.ua

abstract: this article discusses an indoor object location identification method that has been developed for use in medical facilities. differently from other similar methods, the one reported in this paper requires access to wi-fi access points only and, most importantly, it does not require the mobile device to be in hotspot mode. it is proposed to use wi-fi access points (routers) with non-standard firmware as measuring equipment to determine the coordinates of the patient's location within the medical institution. thus, it can be claimed that the use of the proposed method will help to achieve acceptable accuracy in real environments for the real-time identification of mobile devices using simple hardware and software requirements.
1. introduction

this article addresses an indoor location identification method developed to be of practical use in medical facilities. it is worth noting that numerous indoor localization methods have limitations, such as complexity of use (the necessity of special equipment being one of them). the method described in this paper does not require additional maintenance or special equipment other than wi-fi access points. the method is distinctive in that it does not require the mobile device to be in hotspot mode; more importantly, the locations of the wi-fi access points are the only required data. the current paper shows that acceptable location accuracy can be achieved in a real environment exclusively by means of the method described, which relies on simple hardware and software requirements. this paper presents a non-standard, innovative way of using wi-fi access points (routers) as measuring equipment to determine the coordinates of a patient's location within a medical institution. to be able to use the described method, the patient must have a device with a built-in wi-fi module (smartphone, smart watch, etc.) or, for example, one based on a microcontroller with wi-fi capabilities (cc3200) for ecg monitoring [1]. the work does not set the task of improving the accuracy of the indicated coordinates; a satisfactory result for the staff is the determination of the room in which the patient is currently located. this is especially important when it is necessary to provide assistance to patients after serious illnesses (who are in rehabilitation), when, for example, they have given an alarm signal or their biomedical signals are unstable. modern scientists are also actively considering the possibility of transferring data from body sensors of patients to remote cloud servers through access points [2]. determining the patient's location near certain telecommunications equipment will facilitate remote monitoring of the patient in the interests of telemedicine.

2. the current state and problems of determining the location of mobile network objects

systems for determining the position of moving objects in space using a stationary network (for example, by means of parameters of telecommunications equipment) are effective for controlling patient movement over short distances. such "moving objects" can include patients undergoing remote rehabilitation. in any case, such a mobile delivery network should be regarded as a corporate network with a single mac address plan and the ability to listen in on (sniff) nodes connected via wireless communication means.
in the classic sense, a sniffing attack refers to intercepting data by capturing network traffic with a packet sniffer (a program designed to capture network packets). sniffing is typically used by attackers to gain unauthorized access to confidential information, eventually leading to a network outage or damage, or to read messages circulating in the network. therefore, information security measures are aimed at countering sniffing attacks in computer networks [3]. the aforementioned traffic sniffing technology can be applied to determine the location of people moving around within an area where a wireless local network is deployed, provided they carry a gadget (such as a smartphone, tablet, smartwatch, etc.) with its wi-fi module enabled. such an approach can also be useful for providing individualized conditions for the remote rehabilitation of post-stroke and post-heart-attack patients. by using traffic sniffing technology, it is possible to determine the location of both the patient and the doctor, if necessary, to provide timely assistance to patients who are in remote rehabilitation and can move independently. currently, scientists and practitioners have conducted a significant amount of research on the problem of deploying a positioning system for moving objects that have a built-in wireless communication module. analysis of such research indicates that the error of existing methods ranges from 1.78 to 10 meters. scientists have considered various aspects of positioning and navigation methods [4], [5]. the most common technologies for positioning are satellite systems (such as gps and galileo) and terrestrial cellular networks. still relevant and not completely solved are the tasks of determining the ultimate limits of positioning system accuracy, which are set by the presence of noise and obstacles; of describing the operation of a number of new methods (such as direct positioning) aimed at enabling gps operation with very weak received radio signals (for example, in mountainous terrain); and of optimally combining measurements based on radio signals with signals from various sensors, such as inertial platforms (including those with gyroscopes). research in the new area of cooperative positioning, where multiple nodes exchange signals and information to improve their positions, is also promising. in the last decade, one of the most popular solutions has been object localization based on wi-fi, which is considered the most promising for investigating open questions in both the scientific and industrial communities. the main concept of a wireless wi-fi network is the presence of an access point (ap) that connects to the internet service provider and transmits a radio signal. usually, an ap consists of a receiver, a transmitter, a wired network interface, and software for quick setup. a network is formed around the ap within a radius of 50–100 meters (called a hotspot or wi-fi zone) in which the wireless network can be used. the transmission distance depends on the transmitter's power (programmable in some equipment models), the presence and characteristics of obstacles, and the antennas. today, the 802.11n standard is widely used, which provides a data transfer speed of up to 320 mbps [6]. methods of wi-fi positioning can be divided into two main groups: one is based on mapping and a map catalogue compiled from data from satellite systems (ss) [4], and the other is based on modelling the propagation of radio waves (rf) [7].
the rf model determines the relationship between signal strength and distance. to determine the distance between known points and the patient, trilateration algorithms can be used [8] (a minimal sketch is given at the end of this section). however, currently it takes several dozen measurements to determine the relationship between distance and signal strength. therefore, this model is not entirely dynamic and requires further research and improvement, namely to:
1) analyze the trends in the development of radio wave-based methods towards the implementation of projects for determining the position of moving objects;
2) substantiate the feasibility of, and implement, the development of a mac-directed approach for monitoring numerous objects in a wireless corporate network;
3) develop an algorithm for detecting the location of moving objects based on a comprehensive approach to determining their positioning and identification;
4) define the limits of using different methods for determining the location of moving objects in systems with a high level of information security.

let us consider positioning based on radio wave propagation modelling (rf) [9]. the authors use rssi with bluetooth; however, this is not a suitable method for use at a medical institution, because it requires a separate device or enabling bluetooth separately on the smartphone. the purpose of such modelling is to express the mathematical relationship between the distance from the transmitter to the receiver and the signal strength. the mathematical expression is obtained from a third-order polynomial regression. the main advantage of this technology is the speed of positioning. in addition, an important point is that access points (ap) have fixed coordinates (figure 1).

figure 1. patient in a system with fixed access points ap_i.

the development of methods for determining a patient's location based on a combination of signal characteristics from aps is currently a relevant issue. however, regression requires a large amount of accurate information on signal strength over a fairly long period. this method provides positioning accuracy within 1–3 m. an integral quadratic quality criterion is used to evaluate the effectiveness of positioning technologies [10]. for tracking patient movement, the radar radio-frequency system [11] is also used. radar works by recording and processing signal strength information at several base stations located in the area of interest to provide overlapping coverage. it is possible to eliminate unwanted noise in sonar and radar systems by implementing the beamforming method proposed in [12], which is well proven in building separate infrastructure for medical telemetry applications. using radar technology [13], a mobile device uses a cc-map of the required location. the original cc-map is generated from the coordinates, cc measurements, and patient location. the signal strength from each ap is compared to the corresponding values in the database, and the appropriate location is predicted. the average error of this method is 1.78 m, but the maximum error can be up to 40 m [14]. thus, if it is necessary to determine the location of a moving object with greater accuracy, then it is necessary to turn to other methods of positioning such an object.
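as context for the trilateration step mentioned above, the following python sketch estimates a position from distances to access points with known coordinates, by linearising the circle equations against a reference ap; all coordinates and distances are hypothetical.

import numpy as np

# minimal trilateration sketch: solve for a position from distances d_i to
# access points with known coordinates, linearising (x-x_i)^2 + (y-y_i)^2 = d_i^2
# against the first ap, which yields an ordinary least-squares problem.
aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0], [10.0, 8.0]])  # ap coordinates, m
d = np.array([5.0, 7.1, 6.2, 7.7])     # distances estimated from rssi, m (hypothetical)

x0, y0 = aps[0]
A = 2.0 * (aps[1:] - aps[0])
b = d[0]**2 - d[1:]**2 + np.sum(aps[1:]**2, axis=1) - (x0**2 + y0**2)
pos, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated position:", pos)    # close to (4, 3) with these numbers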
3. the method of analyzing the positioning of moving objects based on determining the relative signal strength

existing mobile positioning methods designed for stationary objects do not allow the tracking of moving objects, and such objects will not be picked up by a wi-fi sniffer [15]. the proposed search method can be divided into three stages: data collection, propagation modelling, and location determination, which are presented in figure 2.

figure 2. main stages of patient search.

during the data collection stage, the server periodically requests (position 1 in figure 2) each ap to scan for signals. during the search stage, the terminal investigates the wi-fi spectrum for available access points and determines the received signal strength indication (rssi) that reaches the antenna of each ap (position 2 in figure 2). this information is then sent (position 3 in figure 2) to the search server, which determines the point on the modelled signal propagation map from the device with a wi-fi module on the moving object (patient). at the data collection stage, the search server queries each access point for rssi survey results (figure 3).

figure 3. example of signals emitted from ap1, detected at ap2, ap3 and ap4, and the patient.

the results, including the list of access points and their corresponding rssi, are stored in a database (db) on the server. each time the rssi data are obtained from the access points for the current time, new location parameters are calculated. when calculating the location parameters for a specific access point, a request is sent to the database for the measurement of the signal received from this access point and detected by another access point. to calculate the distance from the transmitter to the receiver based on the ieee 802.11 standard, rssi is used to measure the relative signal strength, which is typically expressed in decibel-milliwatts (dbm). however, there are no specific formulas for calculating accurate distances, due to the influence of the real indoor environment and the implementations by manufacturers. according to [9], and using a simple power-loss propagation model, the rssi distance measurement is given as:

pl = pl0 + 10 γ log10(d / d0) + δ ,  (1)

where pl represents the path loss at distance d (in m); pl0 (in dbm) is the total path loss at distance d0 (in m); γ is the path-loss exponent, which depends on the surrounding environment; d is the distance between the patient and the control access point; and δ is a variable that takes into account the change in the mean value, often referred to as shadow fading [16]. let us denote each record in the database using the notation rssi_j,i, where the subscript j denotes ap_j, at which the signal was measured, and the index i denotes ap_i, from which the measured signal came. when identifying a parameter γ_i for the propagation of the signal coming from ap_i, it is necessary to query the database for the measurements rssi_m,i and rssi_n,i collected in the interval δ, where the subscripts m, n represent a pair of aps that are not ap_i. the experimental assessment, as detailed in figure 3, demonstrated that better accuracy could be achieved by excluding measurements from the access point (ap) situated on the same wall as ap_i. specifically, the parameters for ap2 in figure 1 computed from measurements acquired at ap3 and ap4 outperformed those generated using measurements from ap1. the observed disparity in parameter estimation was attributed to the reflection off the wall on which the transmitting ap was mounted. knowing the spatial positions of the access points, it is possible to obtain the distances d_i,m and d_i,n, which are the distances between ap_i and ap_m or ap_n.
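for orientation, equation (1) can be inverted to turn a measured path loss into a distance estimate once pl0 and γ are known. in the following python sketch the reference loss pl0 and the exponent γ are hypothetical calibration values, and the shadow-fading term δ is neglected.

def distance_from_path_loss(pl_db, pl0_db=40.0, gamma=2.0, d0=1.0):
    # invert equation (1) with the shadow-fading term delta neglected:
    # pl = pl0 + 10*gamma*log10(d/d0)  ->  d = d0 * 10**((pl - pl0)/(10*gamma))
    return d0 * 10 ** ((pl_db - pl0_db) / (10.0 * gamma))

# hypothetical example: a tx power of 27 dbm (the router figure quoted later in
# the text) and a received level of -43 dbm give a path loss of 70 db
print(round(distance_from_path_loss(70.0), 1))   # ~31.6 m with these placeholder values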
rewriting equation (1) with the introduced subscripts, we obtain an equation that can be used to determine γ_i. at the same time, d0 is taken equal to 1. thus, equation (1) turns into (2):

rssi_m,i − rssi_n,i = 10 γ_i log10(d_i,m / d_i,n) .  (2)

the path-loss exponent γ_i for the access point ap_i can be calculated using least-squares regression of equation (2) (a short numerical sketch is given at the end of this section). to account for the wall-reflection effect described above, it is advisable to extend the signal loss model with a parameter β_i, which takes into account the effect of the difference between the angle of the direct signal path and the normal vector to the wall on which the ap is installed. the extended log-distance path model can be written as:

rssi_m,i − rssi_n,i = 10 γ_i log10(d_i,m / d_i,n) + 10 β_i log10(d_i,m / d_i,n) · (a_i,m − a_i,n) ,  (3)

where the introduced additional symbol a_i,j is defined as:

a_i,j = ∠(n_i, s_i,j) / (π/2) .  (4)

using (3), it is possible to use the data from all surveyed access points as input data to determine the values γ_i and β_i. this means that there are n×(n−1)/2×h measurement data points from which to infer the signal strength parameters, where n is the number of access points within range of ap_i. the indoor signal propagation model is developed and regularly updated by the international telecommunication union (itu), which is the un agency in the field of information and communication technologies. according to the itu r.1238 standard, for the walls of hospital buildings γ_i will always be equal to 20 [17]. the power of the signal depends on the ap manufacturer. in the example in section 4, a mikrotik hap2 lite was used, which has a tx power of 27 dbm. there is also no standard tx power for mobile devices; in the example, a mobile phone with a 22 dbm tx antenna was used. hence the levels range from a good signal of –35 dbm to a weaker signal of –55 dbm, as shown in figure 3: the signal strength of ap3 is good (–39 dbm), the patient signal is good enough (–43 dbm), and the ap2 and ap4 signals are weak.
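the least-squares estimation of γ_i from equation (2) can be sketched in python as follows; the pairwise distances are hypothetical, and the rssi differences are generated to be consistent with the model (a "true" γ_i of 2.0) plus noise, so the sketch stays self-contained.

import numpy as np

# each row: known distances d_im, d_in from ap_i to the pair (ap_m, ap_n)
d_pairs = np.array([[4.0, 10.0], [4.0, 6.3], [6.3, 10.0], [2.0, 8.0]])
x = 10.0 * np.log10(d_pairs[:, 0] / d_pairs[:, 1])

# rssi_m,i - rssi_n,i differences, generated from equation (2) with a
# hypothetical true gamma_i of 2.0 plus measurement noise
rng = np.random.default_rng(7)
y = 2.0 * x + rng.normal(0.0, 0.5, x.size)

# least-squares slope through the origin, i.e. gamma_i in equation (2)
gamma_i = float(np.sum(x * y) / np.sum(x * x))
print(f"gamma_i ≈ {gamma_i:.2f}")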
4. signal propagation simulation stage

knowing the calculated signal propagation parameters, a model has to be chosen to estimate the path loss inside the closed area and thus simulate the signal propagation; here, it was decided to use the itu model [17]. this itu model attempts to account for reflection and diffraction caused by objects, energy collisions, movements within the room, multipath effects, etc. the model provides guidance on indoor signal propagation in the 300 mhz to 100 ghz frequency range. the basic model can be expressed as [9]:

pl_i,j = 20 log10(f) + n log10(d_i,j) + l_f(n) − 28 ,  (5)

where pl_i,j is the total loss on the path of the signal arriving at the access point ap_i at point j, in db; f is the frequency, in mhz; n is an indicator of power loss over distance; d_i,j is the distance between ap_i and point j, in m; and l_f(n) is a factor that takes into account the loss of signal power between floors (its influence is not taken into account in these studies). in the propagation simulation step, the location server calculates the expected power loss in the room for each access point on a mesh pattern and creates a propagation map (a numerical sketch is given at the end of this section). to eliminate the need for patient identification, the proposed method only requires the ability of access points to probe wi-fi channels and report the rssi of neighbouring access points. most aps on the market today have this feature built in, as it is an integral part of automatically determining the most suitable channel for wi-fi communication. usually, information about these readings is not available to the end user. this is one of the reasons, apart from its popularity, why one of the most common wireless routers, the mikrotik hap2 lite, was chosen for the study. scanning is a periodic event. if its frequency is too high, it leads to an additional load on the wi-fi network, which is undesirable because it affects the performance of data transmission in the wireless network. a scan rate that is too low means that changes in the wi-fi network being used to locate moving objects (patients in remote rehabilitation) will take too long to become meaningful and effective. choosing too many iterations will have the same consequences as choosing too long a listening period, while too few iterations will result in more influence of the rssi variance than desired. after testing, one minute was chosen as the refresh rate, which ensures that changes due to long-term wi-fi instability and changes in the indoor space will have a significant effect in less than 10 minutes, as they will be present in more than 50 % of the data points used for the calculation. after all the calculations and the drawing of the map, the developed application shows the doctor in which ward/room of the building the selected patient is located. an example of such a map is shown in figure 4; in the given example, one can see that the patient is in room 1.

figure 4. an example of a patient location map.
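the propagation-map step can be sketched numerically as below. the room size and ap position are hypothetical placeholders; the power-loss indicator n is set to 20, the value the text cites for hospital buildings, and a plotting library would normally be used to render the resulting map.

import numpy as np

def itu_path_loss(f_mhz, d_m, n=20.0, lf=0.0):
    # equation (5): pl = 20*log10(f) + n*log10(d) + lf(n) - 28  [db]
    return 20.0 * np.log10(f_mhz) + n * np.log10(d_m) + lf - 28.0

# sketch of the propagation-map step: expected loss from one ap over a mesh
ap_xy = (2.0, 3.0)                               # ap position in a hypothetical room, m
xs, ys = np.meshgrid(np.linspace(0.1, 10.0, 50), np.linspace(0.1, 8.0, 40))
d = np.hypot(xs - ap_xy[0], ys - ap_xy[1]).clip(min=0.5)
pl_map = itu_path_loss(2437.0, d)                # 2.4 ghz wi-fi channel 6
print(pl_map.shape, round(float(pl_map.min()), 1), round(float(pl_map.max()), 1))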
5. conclusions

the proposed method, based on listening to wi-fi signals, is a further development of the itu-r recommendations [17]. this method can be applied to a variety of wireless telecommunication equipment and requires only ap information. a map of the patient's location is built on the basis of the rssi indicators. to track the location of patients, they only need to carry a smartphone with the wi-fi transmitter turned on. a disadvantage of the proposed method is that the rssi indicator is poorly correlated with the signal quality, although it can be used to approximate it. a more accurate assessment can be obtained using the link quality indicator (lqi). the expediency of using this parameter is justified by the fact that, if there are numerous devices with enabled wi-fi modules in the local network in an open environment, such devices are also sources of interference. in this case, the power of the received signal may be high, but the ratio between the received signal and the noise level is lower than necessary to determine the location of the moving object. to use the lqi parameter, the bluetooth module, which is also built into most popular gadgets, or other types of technologies according to the 802.15 standard, are more suitable. the implementation of the proposed algorithm does not require the specialized antennas that were used in previous studies [18]. differently from previous approaches that relied on specific antenna configurations to achieve optimal results, our algorithm uses built-in signal processing techniques to achieve reliable results with the standard antennas commonly found in most devices. this eliminates the need for expensive and specialized antenna systems, making the algorithm affordable and practical for a wide range of applications.

acknowledgement

this study was funded and supported by petro mohyla black sea national university (pm bsnu) in mykolaiv (ukraine), and also financed in part of the pm bsnu science-research work by the ministry of education and science of ukraine "development of the modules for automatization of wireless devices for recovery of post-infarction, post-stroke patients in individual conditions of remote individual rehabilitation" (state reg. no. 0121u109898). the author declares that there are no conflicts of interest regarding the publication of this article. written and oral informed consent was obtained from all individual participants included in the study, but no identifying information is included in this paper.

references
[1] e. balestrieri, p. daponte, l. de vito, f. picariello, s. rapuano, i. tudosa, a wi-fi iot prototype for ecg monitoring exploiting a novel compressed sensing method, acta imeko 9(2) (2020), pp. 38-45. doi: 10.21014/acta_imeko.v9i2.787
[2] j. t. koyazo, m. a. ugwiri, a. lay-ekuakille, m. fazio, m. villari, c. liguori, collaborative systems for telemedicine diagnosis accuracy, acta imeko 10(3) (2021), pp. 192-197. doi: 10.21014/acta_imeko.v10i3.1133
[3] o. s. savenko, model and architecture of distributed multilevel detection system harmful software in local computer networks, herald of khmelnytskyi national university. technical sciences 2 (2018), pp. 153-163.
[4] yi. li, h. wu, d. meng, g. gao, c. lian, x. wang, ground positioning method of spaceborne sar high-resolution sliding-spot mode based on antenna pointing vector, remote sensing 14, 5233 (2022), pp. 1-19. doi: 10.3390/rs14205233
[5] r. santerre, l. pan, c. cai, single point positioning using gps, glonass and beidou satellites, positioning 5 (2014), pp. 107-114. doi: 10.4236/pos.2014.54013
[6] y. chenglian, v. s. lazebny, analysis of the research of real bandwidth in standard ieee 802.11 wireless networks, problems of informatization and management 1, 61 (2019), pp. 30-39. doi: 10.18372/2073-4751.1.14033
[7] v. pasichnyk, v. savchuk, o. yegorova, mobile information technologies of navigation of a user in complex indoor environment, bulletin of the lviv polytechnic national university, radio electronic devices and systems 849 (2016), pp. 236-240. online [accessed 18 june 2023] https://science.lpnu.ua/sites/default/files/journal-paper/2017/jun/4933/p29_0.pdf
[8] i. burlachenko, i. zhuravska, m. musiyenko, devising a method for the active coordination of video cameras in optical navigation based on multi-agent approach, eastern-european journal of enterprise technologies 1/9 (85) (2017), pp. 17-25. doi: 10.15587/1729-4061.2017.90863
[9] z. jianyong, l. haiyong, c. zili, l. zhaohui, rssi based bluetooth low energy indoor positioning, proc. of the 2014 international conference on indoor positioning and indoor navigation (ipin), busan, 2014, pp. 526-533. doi: 10.1109/ipin.2014.7275525
[10] a. r. pantiukhin, a. s. belyaev, system of determination of objects location inside of premises, international research journal 10(64) (2017), pp. 81-84. doi: 10.23670/irj.2017.64.012
[11] a. antonucci et al., performance analysis of a 60-ghz radar for indoor positioning and tracking, proc. of the 2019 int. conf. on indoor positioning and indoor navigation (ipin), pisa, italy, 30 september - 3 october 2019, pp. 1-7. doi: 10.1109/ipin.2019.8911764
[12] m. z. u. rahman, p. v. s. aswitha, d. sriprathyusha, s. k. sameera farheen, beamforming in cognitive radio networks using partial update adaptive learning algorithm, acta imeko 11(1) (2022), pp. 1-8. doi: 10.21014/acta_imeko.v11i1.1214
[13] x. huang, j. k. p. tsoi, n. patel, mmwave radar sensors fusion for indoor object detection and tracking, electronics 11(14) (2022), pp. 1-22. doi: 10.3390/electronics11142209
[14] p. dabove, v. di pietra, a. m. lingua, positioning techniques with smartphone technology: performances and methodologies in outdoor and indoor scenarios, smartphones from an applied research perspective, intech, 2017. doi: 10.5772/intechopen.69679
[15] w. j. krzysztofik, radio network planning and propagation models for urban and indoor wireless communication networks, antennas and wave propagation, intech, 2018. doi: 10.5772/intechopen.75384
[16] y. cui, y. zhang, y. huang, z. wang, h. fu, novel wi-fi/mems integrated indoor navigation system based on two-stage ekf, micromachines 10(3) (2019), art. no. 198. doi: 10.3390/mi10030198
[17] itu-r recommendation p.1238-8, propagation data and prediction methods for the planning of indoor radiocommunication systems and radio local area networks in the frequency range 300 mhz to 100 ghz, geneva, switzerland, international telecommunication union (itu), 2015. online [accessed 18 june 2023] https://www.itu.int/rec/r-rec-p.1238
[18] t. suneetha, s. naga kishore bhavanam, analysis of multiband rectangular patch antenna with defected ground structure using reflection coefficient measurement, acta imeko 11(1) (2022), pp. 1-6. doi: 10.21014/acta_imeko.v11i1.1202

sample volume length and registration accuracy assessment in quality controls of pw doppler diagnostic systems: a comparative study

acta imeko issn: 2221-870x june 2023, volume 12, number 2, 1 – 7

giorgia fiori1, gabriele bocchetta1, silvia conforto1, salvatore a. sciuto1, andrea scorza1
1 department of industrial, electronic and mechanical engineering, roma tre university, via della vasca navale 79, 00146 rome, italy

section: research paper
keywords: quality controls; pw doppler; sample volume; ultrasound diagnostic systems; comparative study
citation: giorgia fiori, gabriele bocchetta, silvia conforto, salvatore a. sciuto, andrea scorza, sample volume length and registration accuracy assessment in quality controls of pw doppler diagnostic systems: a comparative study, acta imeko, vol. 12, no.
2, article 6, june 2023, identifier: imeko-acta-12 (2023)-02-06
section editors: alfredo cigada, politecnico di milano, italy; roberto montanini, università degli studi di messina, italy
received december 5, 2022; in final form april 21, 2023; published june 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: giorgia fiori, e-mail: giorgia.fiori@uniroma3.it

abstract: in clinical diagnostics, pulsed wave (pw) doppler is one of the most used spectral doppler techniques, since it provides quantitative information about the severity of several cardiac disorders. therefore, routine quality control tests should be scheduled to check whether a proper level of performance is maintained over time. despite continuous research in the field, performance evaluation of doppler equipment is still an open issue. therefore, the present study is focused on a comparative investigation based on a test parameter for the automatic analysis of faults in sample volume length and range gate registration accuracy. the velocity profile discrepancy index (vpdi) provides a quantitative estimation according to the agreement between the theoretical parabolic velocity profile and the measured one. the index was assessed through an automatic method that post-processes pw spectrogram images acquired at six sample volume depths with respect to the vessel radius of a doppler reference device. tests were repeated for three brand-new ultrasound diagnostic systems, equipped with convex and phased array probes, in two working conditions. from the analysis of the results, a lower discrepancy between the measured and the theoretical velocity profile was found for the convex array probes, as well as a lower uncertainty contribution.

1. introduction

pulsed wave (pw) doppler is extensively used in diagnostic imaging, since it allows displaying and quantifying the velocity of both arterial and venous flow through time. ultrasound (us) diagnostic systems provide a real-time spectral doppler display by performing the spectral analysis of the doppler signal from the blood vessel [1], [2]. this signal is detected within a sensitive region, known as the sample volume (sv), whose length and depth (from the interface between the patient's skin and the us probe) can be adjusted by the operator on the b-mode anatomical image [2], [3]. in the literature, two main parameters have been used to characterize the sample volume, i.e., size and range gate registration accuracy [4], [5]. in particular, sv size depends on both the effective sample duration and the width of the us beam [6], and it affects the pw doppler spatial resolution. in turn, sample volume size can be split into two main contributions [7]: lateral and axial resolution. nowadays, sv axial resolution is included by the uk's institute of physics and engineering in medicine (ipem) among the basic functional checks for spectral doppler designed for control functioning assessment and fault detection [8]. as regards the range gate registration accuracy, it quantifies the error due to the difference between the sv position displayed on the grayscale image and the actual one. such a parameter is one of the quality control (qc) doppler tests recommended by the american institute for ultrasound in medicine (aium) [9]. it is worth noting that the majority of the scientific literature focusing on sample volume size [7], [10]-[12] and range gate registration accuracy [12], [13] dates back to the last years of the 20th century. moreover, the qc test procedures recommended by the two international organizations are affected by operator subjectivity. in this regard, performance evaluation of doppler
systems is still an open issue in the scientific research field [4], [14]-[17], despite the increasing demand for proper qc programs. nonetheless, the measurement of blood flow peak velocity is of particular relevance in clinical applications, since several clinical parameters are evaluated from the peak velocity envelope. pw doppler provides quantitative information about the severity of many cardiac disorders [18], including the degree of stenosis [19], [20], arterial wall shear stress [21], [22], and pressure drops across cardiac valves [23]. therefore, inaccurate measurements can impact diagnosis and clinical therapy: overestimations ranging from 12 % to 50 % have been estimated as regards the measurement of blood peak velocities [24], [25]. from the above-mentioned considerations, the present study aims to give a contribution to the field of qcs for doppler equipment testing by proposing a comparative investigation of the velocity profile discrepancy index (vpdi), a quality parameter developed for the objective assessment of both sv length and range gate registration accuracy. the index, preliminarily defined and tested in [26], is estimated from the processing of pw doppler spectrograms by means of a custom-written image analysis method developed in the matlab environment. three brand-new us diagnostic systems, each of them equipped with two ultrasound probes (convex and phased array), were tested in two working conditions. in section 2, the estimation rationale underlying the definition of the proposed quality index will be provided. in section 3, the experimental setup components and their main specifications will be described, together with the image-analysis-based method implemented for vpdi assessment. in section 4, experimental results with their associated measurement uncertainty will be presented and discussed, focusing on the comparison among the us diagnostic systems under test. finally, in the concluding section, the major achievements and future developments will be outlined.

2. velocity profile discrepancy index

by considering a cylindrical tube of infinitesimal length dy exhibiting fully developed laminar flow in the reference system shown in figure 1, the well-known parabolic velocity profile v(r) of the hagen-poiseuille flow [27], [28] can be written as:

v(r) = −(1/(4 μ)) (dP/dy) (R² − r²) ,  (1)

where R is the tube radius, μ is the dynamic viscosity coefficient, dP is the infinitesimal pressure drop, and the y-axis is the flow direction. from (1), the maximum velocity value v_max is reached for r = 0, while for r = |R| the no-slip boundary condition applies, i.e., the relative tangential and normal velocities between the flow and the tube walls are zero:

v(r)|r=0 = −(1/(4 μ)) (dP/dy) R² = v_max
v(r)|r=|R| = 0 .  (2)

from these considerations, it is possible to derive the mathematical expression of the parabolic velocity profile v(x) as a function of v_max and R, as follows:

v(x) = v_max (1 − x²/R²)  for |x| ≤ R
v(x) = 0  for |x| > R ,  (3)

where x is the radial coordinate.
by subdividing v(x) into a fixed number of bins and assuming the limits of the n-th bin (n = 1, …, N) as the sv boundaries (figure 2a), the theoretical velocity v̄_th for each bin can be derived from (3) as follows:

v̄_th = v_max (1 − x_sv²/R² + l²/(12 R²))  for |x_sv| ≤ R
v̄_th = 0  for |x_sv| > R ,  (4)

where x_sv is the sv position with respect to the tube radius and l is the sample volume length (figure 2b). finally, the velocity profile discrepancy index (vpdi) for the objective assessment of both sample volume length and registration accuracy is derived as [26]:

vpdi = Σ(n=1..N) vpdi_n = Σ(n=1..N) (v̄_s,n − v̄_th,n)² / σ²_tot,n ,  (5)

where v̄_s,n is the mean peak velocity estimated from the doppler spectrogram collected at pre-set sample volume depth and length, v̄_th,n is the corresponding theoretical velocity value retrieved by applying (4) for each bin, while σ_tot,n is the total standard deviation estimated for the n-th bin as follows:

σ_tot,n = sqrt(σ²_s,n + σ²_r,n + σ²_p,n)  for |x_sv| ≤ R
σ_tot,n = σ_r,n  for |x_sv| > R ,  (6)

where σ_s is the standard deviation (sd) of the peak velocities in the doppler spectrogram due to the intrinsic flow dispersion, σ_r is the standard deviation of a random distribution associated with the electronic noise superimposed on the pw spectrogram, and σ_p is the sd due to the assumption of a parabolic profile:

σ_p = σ_sv · 2 v_max |x_sv| / R² ,  (7)

where σ_sv is the sd associated with the sample volume position with respect to R. according to the definition proposed in (5), vpdi is expected to be 0, i.e., the acquired pw spectrograms are not affected by any variation of the sample volume length with respect to the set value or by any sv registration error. on the contrary, if vpdi deviates from 0, the pw spectrograms are affected by at least one of the two error sources, i.e., an unwanted sv size variation and/or a low sv registration accuracy (a short numerical sketch is given below).

figure 1. steady laminar flow in a cylindrical tube of infinitesimal length.

figure 2. (a) parabolic velocity profile v(x) subdivided into N bins: the limits of each bin are assumed as the sv boundaries; (b) graphical schematization of the sv length and positioning inside the cylindrical tube.
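equations (4) and (5) translate directly into code. the following python sketch evaluates the vpdi for six hypothetical sample-volume positions inside a vessel of radius 2.5 mm with svl 1 mm and a nominal peak velocity of 71.3 cm/s (the phantom values used later in the paper); all measured velocities and standard deviations are invented for illustration.

import numpy as np

def v_th(x_sv, v_max, R, l):
    # equation (4): theoretical mean velocity of a sample volume of
    # length l centred at x_sv inside a parabolic profile of radius R
    x = np.asarray(x_sv, dtype=float)
    v = v_max * (1.0 - x**2 / R**2 + l**2 / (12.0 * R**2))
    return np.where(np.abs(x) <= R, v, 0.0)

R, l, v_max = 2.5e-3, 1.0e-3, 0.713                          # m, m, m/s
x_sv = np.array([-1e-3, 0.0, 1e-3, 1.5e-3, 2.5e-3, 3.5e-3])  # sv positions, m (hypothetical)
v_s = np.array([0.60, 0.72, 0.61, 0.47, 0.05, 0.02])         # measured mean peaks, m/s
sigma_tot = np.array([0.04, 0.03, 0.04, 0.04, 0.03, 0.02])   # total sd per bin, m/s

vpdi = float(np.sum((v_s - v_th(x_sv, v_max, R, l))**2 / sigma_tot**2))  # equation (5)
print(round(vpdi, 2))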
3. materials and methods

3.1. experimental setup

the experimental setup consisted of three high-technology-level us diagnostic systems, each equipped with a convex and a phased array probe, and a reference test device whose main specifications are listed in table 1. the test device used is a commercial flow phantom model (sun nuclear, doppler 403tm flow phantom [29]) consisting of a continuous flow loop vessel with a horizontal and a diagonal segment having the same inside diameter. a pump and a flow controller are designed by the manufacturer to provide user-selectable flow rates in the range (1.7–12.5) ml·s⁻¹. doppler spectrograms were collected on the phantom diagonal segment, set at a constant flow rate of 7.0 ml·s⁻¹ corresponding to a nominal peak velocity of 71.3 cm·s⁻¹. tests were repeated at two different us system working conditions (table 2): the best configuration setting provided by the product specialist (set 1), and a raw configuration setting obtained by reducing pre- and post-processing settings (set 2). the three ultrasound diagnostic systems, manufactured by different companies, were anonymously addressed as system a, system b and system c.

table 1. technical specifications of the reference test device.
phantom model: sun nuclear, doppler 403tm flow phantom
attenuation coefficient: (0.70 ± 0.05) db·cm⁻¹·mhz⁻¹
tmm (a) sound speed: (1540 ± 10) m·s⁻¹
bmf (b) sound speed: (1550 ± 10) m·s⁻¹
scanning material: aluminium-film composite
vessel nominal inside diameter: (5.0 ± 0.2) mm
horizontal segment depth: (2.0 ± 0.3) cm
diagonal segment depth: from 2.0 to 16.0 cm (± 0.3 cm) at 40°
flow mode: constant and pulsatile
continuous flow mode range: [(1.7 – 12.5) ± 0.4] ml·s⁻¹
(a) tmm = tissue mimicking material; (b) bmf = blood mimicking fluid.

table 2. main b-mode and pw doppler settings according to the probe model of each ultrasound diagnostic system. for each system, the two values are set 1 and set 2.
b-mode frequency range (mhz) | b (a) | system a: resolution, resolution | system b: resolution, resolution | system c: resolution, resolution
field of view (cm) | b | 12, 12 | 12, 12 | 12, 12
post-processing | b | linear, linear | linear, linear | linear, linear
doppler frequency (mhz) | c (b) | 2.2, 2.2 | 2.0, 2.0 | 2.2, 2.2
doppler frequency (mhz) | p (c) | 1.6, 1.6 | 2.0, 2.0 | 1.8, 1.8
wall filter (hz) | c | medium (d), min (d,e) | 120, 50 (e) | 70, 50 (e)
wall filter (hz) | p | medium (d), min (d,e) | 200, 50 (e) | 125, 75 (e)
pulse repetition frequency (khz) | c | 2.63, 2.63 | 2.3, 2.3 | – (d), – (d)
pulse repetition frequency (khz) | p | 2.50, 2.50 | 2.3, 2.3 | – (d), – (d)
nominal svl (mm) | b | 1, 1 | 1, 1 | 1, 1
nominal svd_c ± ∆svd (cm) | c | 11.0 ± 0.1 (both sets) | 11.0 ± 0.1 | 11.0 ± 0.1
nominal svd_c ± ∆svd (cm) | p | 10.0 ± 0.1 (both sets) | 10.0 ± 0.1 | 10.0 ± 0.1
nominal svd_b ± ∆svd (cm) | c | 10.7 ± 0.1 (both sets) | 10.7 ± 0.1 | 10.7 ± 0.1
nominal svd_b ± ∆svd (cm) | p | 9.7 ± 0.1 (both sets) | 9.7 ± 0.1 | 9.7 ± 0.1
insonation angle (°) | c | 43, 43 | 35, 35 | 40, 40
insonation angle (°) | p | 45, 45 | 36, 36 | 44, 44
spectrogram duration (s) | b | ~11, ~11 | ~12, ~12 | ~12, ~12
(a) b = both convex and phased array probe models; (b) c = convex array probe model; (c) p = phased array probe model; (d) value not provided; (e) minimum adjustable wall filter value.

the main setting that distinguishes the two working conditions is the wall filter, a dedicated high-pass filter designed to reject the intense, low-frequency echoes coming from the vessel wall movement (clutter signal) [2]. in the raw configuration setting it was always set at the minimum adjustable value. conversely, the pulse repetition frequency (prf), i.e., the rate at which us pulses are transmitted [2], was in turn determined by the us system, since its value depends on the other settings. among the latter, the insonation angle was varied according to both the probe positioning on the scanning surface and the probe model, so as to keep the sample volume parallel to the flow axis throughout the acquisitions, while the spectrogram duration was set to at least 10 s (depending on the system availability) in order to allow the comparison of the results retrieved from different system-probe pairs. in addition, the sample volume length (svl) was maintained constant throughout the measurement campaign, while the sample volume depth (svd) was adjusted on the vessel diagonal segment to collect N = 6 doppler spectrograms for each probe-system pair, as listed in table 2. first, the sample volume was placed in the centre of the segment diameter, svd_c in figure 3a, and then it was adjusted at svd_c ± ∆svd (figure 2a), by considering ∆svd the minimum sample volume depth variation that can be set in most of the ultrasound systems currently on the market.
the sample volume was subsequently placed on the segment boundary, svd_b in figure 3b, as well as at svd_b ± ∆svd (figure 2b). as in [26], svd was adjusted along the z-axis by assuming that the velocity does not vary along the flow axis of the tube.

figure 3. graphical schematization of the sample volume adjusted (a) at svd_c and (b) at svd_b on the diagonal segment of the phantom vessel.

3.2. vpdi assessment

the velocity profile discrepancy index defined in [26] was assessed through an image-analysis-based method that post-processes pw doppler images acquired at different sample volume depths. the main steps of the measurement method are described in the following and shown in figure 4, while the specifications assumed in this study are listed in table 3.

figure 4. block diagram of the image-analysis-based method for velocity profile discrepancy index assessment.

as a first step, each pw image is cropped to extract the box containing the spectrogram image only, i.e., excluding the patient information, the b-mode grayscale image and the ultrasound settings details. this processing step is automatically carried out based on the metadata included in the dicom (digital imaging and communications in medicine) file. then, an adaptive grey level threshold th_g is applied to the spectrogram image to detect the pixels associated with the peak velocities (a simplified sketch is given at the end of this subsection). the corresponding mono-dimensional signal v_s through time is retrieved as in [30], [31], and the mean peak velocity v̄_s is computed together with the standard deviation σ_s over M samples (i.e., spectral lines). on the other hand, the total standard deviation σ_tot is estimated. in this study, the following assumptions were made for the estimation of σ_tot, as in [26]. the standard deviation σ_r in (6) was estimated from a randomly generated gaussian distribution consisting of a number of elements equal to 15 % of M and a mean value of half the spectrogram full scale. in addition, σ_sv in (7) was estimated from a uniform distribution within the range [−∆l/2; ∆l/2], by considering ∆l the minimum svl increment. finally, both σ_s and σ_p in (6) were taken equal to 0 if the number of peak velocities v_s at 0 cm·s⁻¹ was higher than 85 % of M: when this condition occurred, the detected velocities were assumed to be electronic noise, i.e., σ_tot = σ_r. otherwise, the uncertainty propagation law was applied, including σ_s and σ_p in the computation of σ_tot even outside of the vessel segment, i.e., for svd at svd_b − ∆svd. the processing steps described above are repeated for all the N acquisitions corresponding to different sample volume depths. at this point, the theoretical velocity v̄_th is determined for each svd by applying (4), in which the maximum velocity v_max was chosen as the mean peak velocity determined at svd_c, as in [26]. in the last step, the velocity profile discrepancy index is computed according to (5).
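as a rough illustration of the thresholding step (not the authors' matlab implementation), the following python sketch extracts a peak-velocity envelope from a spectrogram stored as a grey-level matrix; the synthetic spectrogram, the velocity axis and the 10 % threshold mirror the settings in table 3, but all pixel values are invented.

import numpy as np

def peak_velocity_envelope(spectrogram, velocity_axis, th_rel=0.10):
    # for each spectral line (column), keep the highest velocity whose pixel
    # exceeds the adaptive threshold th_g = th_rel * maximum grey level
    th_g = th_rel * spectrogram.max()
    env = np.zeros(spectrogram.shape[1])
    for j in range(spectrogram.shape[1]):
        idx = np.flatnonzero(spectrogram[:, j] >= th_g)
        env[j] = velocity_axis[idx].max() if idx.size else 0.0
    return env

# synthetic 8-bit spectrogram: rows = velocity bins, columns = spectral lines
rng = np.random.default_rng(0)
spec = rng.integers(0, 15, size=(128, 1000)).astype(float)   # background noise
spec[60:90, :] = 200.0                                       # flow band
v_axis = np.linspace(-1.0, 1.0, 128)                         # m/s
env = peak_velocity_envelope(spec, v_axis)
print(round(env.mean(), 3), round(env.std(), 3))             # estimates of v̄_s and σ_s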
vpdi uncertainty can be estimated as the standard deviation of the data distribution obtained through a monte carlo simulation (mcs), a useful tool for the measurement uncertainty estimation of image-analysis-based methods [32]-[36]. in this study, an mcs with 10⁴ iterations was run for each probe-system pair in both working conditions. a uniform distribution with ± 2 % bounds was assigned to the adaptive threshold th_g, and the M spectral lines to be processed were randomized, at each cycle without repetition, among all the spectral lines detected in the spectrogram image. in particular, the uniform distribution was assigned to th_g because, in terms of standard deviation, it is a more cautious approach for the uncertainty estimation, while the distribution range was assumed as the maximum threshold dispersion allowing the velocity signal to be distinguished from the background noise displayed in the spectrogram, as already experienced in previous studies [31], [37].
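the randomization scheme of the mcs can be sketched in python as follows; vpdi_once stands in for the full pipeline (envelope extraction plus equation (5)), which is replaced here by a placeholder statistic so that the sketch stays self-contained and runnable.

import numpy as np

rng = np.random.default_rng(1)

def vpdi_once(all_lines, m=1000, th_nom=0.10):
    # one mcs cycle: threshold drawn from a uniform ±2 % interval and m
    # spectral lines resampled without repetition, as described above
    th_g = th_nom * rng.uniform(0.98, 1.02)
    lines = rng.choice(all_lines, size=m, replace=False)
    # ... here the envelope extraction and equation (5) would be evaluated ...
    return float(lines.mean() * (1.0 + th_g))    # placeholder statistic only

all_lines = rng.normal(0.7, 0.03, 2000)          # stand-in for detected spectral lines
samples = np.array([vpdi_once(all_lines) for _ in range(10_000)])
print(f"mean = {samples.mean():.3f}, sd (uncertainty) = {samples.std():.4f}")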
according to its definition, vpdi is expected to be 0 when the processed pw spectrograms are not affected by any variation of the svl with respect to the pre-set nominal value, nor by any range gate registration error. an automatic image analysis-based method post-processes pw spectrograms acquired at different sample volume depths with respect to the vessel radius of a doppler reference device. three brand-new us diagnostic systems, each equipped with a convex and a phased array probe, were tested in two working conditions. a commercial reference test device, constituted by a continuous flow loop vessel with a horizontal and a diagonal segment, was set at a constant flow rate of 7.0 ml·s⁻¹. experimental data were collected on the diagonal segment of the phantom vessel. from the analysis of the results, convex array probes showed vpdi outcomes closer to 0 in both working conditions, as well as the lowest uncertainty contributions. on the basis of the promising outcomes obtained and their limitations, further studies and method improvements are going to be carried out to estimate the two error sources separately. moreover, further investigations should be performed including a higher number of diagnostic systems and probe models (e.g., linear array probes).

acknowledgement
the authors wish to thank jan galo of the clinical engineering service at i.r.c.c.s. children hospital bambino gesù for administrative and technical support; samsung healthcare, philips healthcare and mindray medical for hardware supply and technical assistance in data collection.

references
[1] d. h. evans, w. n. mcdicken, doppler ultrasound: physics, instrumentation and signal processing, wiley, new york, 2nd edition, 2000, isbn 9780471970019, pp. 97-118.
[2] p. r. hoskins, k. martin, a. thrush, diagnostic ultrasound: physics and equipment, crc press, boca raton, fl, usa, 3rd edition, 2019, isbn 9781138892934, pp. 171-189.
[3] international electrotechnical commission, iec 61895:1999-10, ultrasonics – pulsed doppler diagnostic systems – test procedures to determine performance, 1999.
[4] j. e. browne, a review of doppler ultrasound quality assurance protocols and test devices, phys. med. 30 (2014), pp. 742-751. doi: 10.1016/j.ejmp.2014.08.003
[5] p. r. hoskins, simulation and validation of arterial ultrasound imaging and blood flow, ultrasound med. biol. 34 (2008), pp. 693-717. doi: 10.1016/j.ultrasmedbio.2007.10.017
[6] t. van merode, p. hick, a. p. hoeks, r. s. reneman, limitations of doppler spectral broadening in the early detection of carotid artery disease due to the size of the sample volume, ultrasound med. biol. 9 (1983), pp. 581-586. doi: 10.1016/0301-5629(83)90002-9
[7] a. p. hoeks, c. j. ruissen, p. hick, r. s. reneman, methods to evaluate the sample volume of pulsed doppler systems, ultrasound med. biol. 10 (1984), pp. 427-434. doi: 10.1016/0301-5629(84)90197-2
[8] ipem report no. 102, quality assurance of ultrasound imaging systems, 2010, isbn 9781903613436.
[9] aium, performance criteria and measurements for doppler ultrasound devices, 2002, isbn 1930047835.
[10] t. l. poepping, h. n. nikolov, r. n. rankin, m. lee, d. w. holdsworth, an in vitro system for doppler ultrasound flow studies in the stenosed carotid artery bifurcation, ultrasound med. biol. 28 (2002), pp. 495-506. doi: 10.1016/s0301-5629(02)00479-9
[11] d. w. baker, w. g. yates, technique for studying the sample volume of ultrasonic doppler devices, med. biol. eng. 11 (1973), pp. 766-770. doi: 10.1007/bf02478666
[12] a.
goldstein, performance tests of doppler ultrasound equipment with a string phantom, j. ultrasound med. 10 (1991), pp. 125-139. doi: 10.7863/jum.1991.10.3.125
[13] e. j. boote, j. a. zagzebski, performance tests of doppler ultrasound equipment with a tissue and blood-mimicking phantom, j. ultrasound med. 7 (1988), pp. 137-147. doi: 10.7863/jum.1988.7.3.137
[14] z. f. lu, n. j. hangiandreou, p. carson, clinical ultrasonography physics: state of practice, in: clinical imaging physics: current and emerging practice. e. samei, d. e. pfeiffer (editors). wiley blackwell, hoboken, nj, 2020, isbn 9781118753453, pp. 261-286.
[15] s. balbis, t. meloni, s. tofani, f. zenone, d. nucera, c. guiot, criteria and scheduling of quality control of b-mode and doppler ultrasonography equipment, j. clin. ultrasound 40 (2012), pp. 167-173. doi: 10.1002/jcu.21897
[16] f. marinozzi, f. p. branca, f. bini, a. scorza, calibration procedure for performance evaluation of clinical pulsed doppler systems, measurement 45 (2012), pp. 1334-1342. doi: 10.1016/j.measurement.2012.01.052
[17] a. scorza, g. lupi, s. a. sciuto, f. bini, f. marinozzi, a novel approach to a phantom based method for maximum depth of penetration measurement in diagnostic ultrasound: a preliminary study. proc. of the 2015 ieee international symposium on medical measurements and applications (memea), turin, italy, 7 – 9 may 2015. doi: 10.1109/memea.2015.7145230
[18] m. a. quiñones, c. m. otto, m. stoddard, a. waggoner, w. a. zoghbi, doppler quantification task force of the nomenclature and standards committee of the american society of echocardiography, recommendations for quantification of doppler echocardiography: a report from the doppler quantification task force of the nomenclature and standards committee of the american society of echocardiography, j. am. soc. echocardiogr. 15 (2002), pp. 167-184. doi: 10.1067/mje.2002.120202
[19] c. m. gross, j. krämer, o. weingärtner, f. uhlich, f. c. luft, j. waigand, r. dietz, determination of renal arterial stenosis severity: comparison of pressure gradient and vessel diameter, radiology 220 (2001), pp. 751-756. doi: 10.1148/radiol.2203001444
[20] e. g. grant, c. b. benson, g. l. moneta, a. v. alexandrov, j. d. baker, e. i. bluth, b. a. carroll, m. eliasziw, j. gocke, b. s. hertzberg, s. katanick, l. needleman, j. pellerito, j. f. polak, k. s. rholl, d. l. wooster, r. e. zierler, carotid artery stenosis: grayscale and doppler us diagnosis – society of radiologists in ultrasound consensus conference, radiology 229 (2003), pp. 340-346. doi: 10.1148/radiol.2292030516
[21] d. yigang, a. goddi, c. bortolotto, y. shen, a. dell'era, f. calliada, l. zhu, wall shear stress measurements based on ultrasound vector flow imaging, j ultrasound med. 39 (2020), pp. 1649-1664. doi: 10.1002/jum.15253
[22] j. p. mynard, b. a. wasserman, d. a.
steinman, errors in the estimation of wall shear stress by maximum doppler velocity, atherosclerosis 227 (2013), pp. 259-266. doi: 10.1016/j.atherosclerosis.2013.01.026
[23] a. a. oglat, m. z. matjafri, n. suardi, m. a. oqlat, m. a. abdelrahman, a. a. oqlat, a review of medical doppler ultrasonography of blood flow in general and especially in common carotid artery, j med ultrasound 26 (2018), pp. 3-13. doi: 10.4103/jmu.jmu_11_17
[24] s. ambrogio, j. ansell, e. gabriel, g. aneju, b. newman, m. negoita, f. fedele, k. v. ramnarine, pulsed wave doppler measurements of maximum velocity: dependence on sample volume size, ultrasound med. biol. 48 (2022), pp. 68-77. doi: 10.1016/j.ultrasmedbio.2021.09.006
[25] s. cournane, a. j. fagan, j. e. browne, an audit of a hospital-based doppler ultrasound quality control protocol using a commercial string doppler phantom, physica medica 30 (2014), pp. 380-384. doi: 10.1016/j.ejmp.2013.10.001
[26] g. fiori, a. scorza, m. schmid, j. galo, s. conforto, s. a. sciuto, a preliminary study on a novel approach to the assessment of the sample volume length and registration accuracy in pw doppler quality control. proc. of the 2022 ieee international symposium on medical measurements and applications (memea), messina, italy, 22 – 24 june 2022. doi: 10.1109/memea54994.2022.9856474
[27] y. c. fung, biomechanics: circulation, springer, new york, 2nd edition, 1997, isbn 9780387943848, pp. 114-118.
[28] w. w. nichols, m. f. o'rourke, c. vlachopoulos, mcdonald's blood flow in arteries, crc press, london, 6th edition, 2011, isbn 9780340985014, pp. 14-19.
[29] sun nuclear corporation, doppler 403™ & mini-doppler 1430™ flow phantoms. online [accessed 20 february 2023] https://www.sunnuclear.com/uploads/documents/datasheets/diagnostic/dopplerflow_phantoms_113020.pdf
[30] g. fiori, f. fuiano, a. scorza, m. schmid, j. galo, s. conforto, s. a. sciuto, a novel sensitivity index from the flow velocity variation in quality control for pw doppler: a preliminary study. proc. of the 2021 ieee international symposium on medical measurements and applications (memea), lausanne, switzerland, 23 – 25 june 2021. doi: 10.1109/memea54994.2022.9856474
[31] g. fiori, f. fuiano, a. scorza, m. schmid, s. conforto, s. a. sciuto, doppler flow phantom failure detection by combining empirical mode decomposition and independent component analysis with short time fourier transform, acta imeko 10 (2021), pp. 185-193. doi: 10.21014/acta_imeko.v10i4.1150
[32] g. fiori, a. pica, s. a. sciuto, f. marinozzi, f. bini, a. scorza, a comparative study on a novel quality assessment protocol based on image analysis methods for color doppler ultrasound diagnostic systems, sensors 22 (2022). doi: 10.3390/s22249868
[33] g. fiori, f. fuiano, a. scorza, j. galo, s. conforto, s. a. sciuto, a preliminary study on an image analysis based method for lowest detectable signal measurements in pulsed wave doppler ultrasounds, acta imeko 10 (2021), pp. 126-132. doi: 10.21014/acta_imeko.v10i2.1051
[34] g. bocchetta, g. fiori, a. scorza, s. a. sciuto, image quality comparison of two different experimental setups for mems actuators functional evaluation: a preliminary study. proc. of the 25th imeko tc4 international symposium & 23rd international workshop on adc and dac modelling and testing, brescia, italy, 12 – 14 september 2022, pp. 320-324. doi: 10.21014/tc4-2022.59
[35] g. bocchetta, g. fiori, a. scorza, n. p. belfiore, s. a.
sciuto, first results on the functional characterization of two rotary comb-drive actuated mems microgrippers with different geometry. proc. of the 25th imeko tc4 international symposium & 23rd international workshop on adc and dac modelling and testing, brescia, italy, 12 – 14 september 2022, pp. 151-155. doi: 10.21014/tc4-2022.28
[36] g. fiori, a. scorza, m. schmid, j. galo, s. conforto, s. a. sciuto, a preliminary study on the blind angle estimation for quality assessment of color doppler ultrasound diagnostic systems. proc. of the 25th imeko tc4 international symposium & 23rd international workshop on adc and dac modelling and testing, brescia, italy, 12 – 14 september 2022, pp. 325-329. doi: 10.21014/tc4-2022.60
[37] g. fiori, f. fuiano, a. scorza, m. schmid, j. galo, s. conforto, s. a. sciuto, doppler flow phantom stability assessment through stft technique in medical pw doppler: a preliminary study. proc. of the 2021 ieee international workshop on metrology for industry 4.0 & iot (metroind4.0&iot), rome, italy, 7 – 9 june 2021. doi: 10.1109/metroind4.0iot51437.2021.9488513

acta imeko
issn: 2221-870x
september 2015, volume 4, number 3, 14-22

mixed baseband architecture based on fbd ΣΔ-based adc for multistandard receivers
rihab lahouli 1,2, manel ben-romdhane 1, chiheb rebai 1, dominique dallet 2
1 grescom research lab., sup'com, university of carthage, cité technologique des communications, 2083 el ghazela, ariana, tunisia
2 ims research lab., ipb enseirb-matmeca, university of bordeaux, 351 cours de la libération, bâtiment a31, 33405 talence cedex, france

section: research paper
keywords: frequency band decomposition (fbd); ΣΔ modulators; software defined radio (sdr) receiver
citation: rihab lahouli, manel ben-romdhane, chiheb rebai, dominique dallet, mixed baseband architecture based on fbd ΣΔ-based adc for multistandard receivers, acta imeko, vol. 4, no. 3, article 3, september 2015, identifier: imeko-acta-04 (2015)-03-03
editor: paolo carbone, university of perugia, italy
received february 14, 2015; in final form may 18, 2015; published september 2015
copyright: © 2015 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding authors: rihab lahouli, dominique dallet, e-mails: rihab.lahouli@supcom.tn, dominique.dallet@ims-bordeaux.fr
abstract: this paper presents the design and simulation results of a novel mixed baseband stage for a frequency band decomposition (fbd) analog-to-digital converter (adc) in a multistandard receiver. the proposed fbd-based adc architecture is flexible, with programmable parallel branches composed of discrete-time (dt) 4th order single-bit ΣΔ modulators. the mixed baseband architecture uses a single non-programmable anti-aliasing filter (aaf), avoiding the use of an automatic gain control (agc) circuit. system level analysis proved that the proposed fbd architecture satisfies the design specifications of the software defined radio (sdr) receiver. in this paper, the authors focus on the butterworth aaf design for a multistandard receiver. besides, a theoretical analysis of the reconstruction stage for the umts test case is discussed; it leads to a complicated system of equations and to high digital filter orders. to reduce the complexity of the digital reconstruction stage, the authors propose an optimized digital reconstruction stage architecture. the demodulation-based digital reconstruction stage using two decimation stages has been implemented in matlab/simulink; technical choices and performances are discussed. the computed signal-to-noise ratio (snr) of the matlab/simulink fbd adc model is equal to at least 75 db, which satisfies the dynamic range required for umts signals. next to hardware implementation with quantized filter coefficients, the authors implemented their proposition in vhdl in a sysgen environment. the measured snr of the hardware implementation is equal to 74.08 db, which satisfies the required dynamic range of umts signals.

1. introduction
software defined radio (sdr) is a state-of-the-art technology solution of the software radio concept, first introduced by mitola [1]. sdr was proposed by scientists to achieve a feasible multistandard receiver. to ensure software reconfigurability, the received signals must be digitized as near as possible to the antenna in order to reduce analog circuitry. this increases the design constraints on the analog-to-digital converter (adc). in fact, no fully integrated adc reported in the literature covers the different coexisting wireless and mobile standards, from narrowband to wideband channels, with their different required dynamic ranges [2]. to deal with this problem, the authors propose the use of parallel architectures of ΣΔ modulators that ensure high accuracy, in terms of dynamic range, while extending the conversion bandwidth. parallel architectures have become an attractive solution for analog-to-digital conversion, especially in the context of sdr, where new applications require extended bandwidths. there are three main parallel architectures described in the literature: the hadamard modulated parallel architecture (π) [3], the time-interleaved architecture (ti) [4], and the frequency band decomposition (fbd) architecture [5]-[7]. in this paper, the authors chose the fbd architecture because, unlike the π and ti architectures, the fbd architecture is insensitive to gain and offset mismatches [8], [9]. in the fbd architecture, the parallel ΣΔ modulators are band-pass (bp) and each one converts a part of the total input signal band. there are propositions of fbd architecture designs in the literature, essentially in [5]-[7]. the main drawback of these solutions is
that they are based on continuous-time (ct) ΣΔ modulators. in fact, ct ΣΔ modulators bring analog errors that must be handled in the digital reconstruction stage. to overcome this problem, the authors proposed an fbd architecture based on discrete-time (dt) ΣΔ modulators [10]. they are 4th order ΣΔ modulators based on single-bit quantizers [10]. the authors chose single-bit quantization to overcome the non-linearity errors introduced by multi-bit quantizers [7]. moreover, the novelty of this proposed architecture is the use of six programmable parallel branches with different sub-bandwidths, where only some branches are active according to the selected standard. the multistandard receiver handles e-gsm, umts and ieee802.11a communication standard signals. the outputs of the parallel branches in the fbd adc architecture have to be recombined using a digital reconstruction stage to provide the overall final output. e-gsm signals are not concerned by this stage since they solicit only one branch of the fbd adc architecture; only decimation is required at the ΣΔ modulator output. however, a digital reconstruction stage is mandatory for the umts and ieee802.11a signals, which solicit the first three branches and all six branches of the adc architecture, respectively. in [11], the authors focused on the design and test of a digital reconstruction stage for the fbd ΣΔ-based adc architecture. when implementing the whole adc architecture in matlab/simulink, it was verified that the architecture performances satisfy the standard requirements for the dynamic range of umts signals in this test case. this test case was chosen because the first three branches of the adc architecture, which are activated for digitizing umts signals, are reused for the digitization of ieee802.11a signals. besides, the umts standard requires a dynamic range of 73.8 db, which is higher than the required dynamic range for ieee802.11a, equal to 61.8 db. in this paper, interest is focused on the design of the mixed baseband stage for the sdr receiver. indeed, in a conventional baseband receiver stage, an anti-aliasing filter (aaf), an automatic gain control (agc) circuit and an adc are required to analogically process and digitize the received signals. however, in the mixed baseband stage solution proposed in this paper, the authors suggest suppressing the agc, designing a single passive aaf, and digitizing the received signal by means of the multistandard fbd dt ΣΔ-based adc architecture. moreover, the theoretical analysis of the digital reconstruction stage based on demodulation is detailed using multirate theory. the design of the fbd ΣΔ-based adc with demodulation-based digital reconstruction stage is first recalled [11]. then, the novelty comes with the theoretical discussion that justifies the authors' proposition of an optimized digital reconstruction stage. the authors also present new comparative matlab/simulink simulation results of the signal-to-noise ratio with respect to the frequency position of the input signals. besides, results of a hardware implementation with quantized filter coefficients are presented and discussed. the paper is organized as follows. in section 2, the design of an fbd ΣΔ-based mixed baseband stage with a single passive aaf, intended for an sdr receiver, is presented. section 3 deals with the digital reconstruction stage of the fbd ΣΔ-based adc.
the two existing approaches in the literature, the direct reconstruction and the demodulation-based reconstruction, are discussed. a demodulation-based digital reconstruction stage design for the umts test case is proposed and analyzed theoretically using multirate theory. this initial design has been modified and optimized in order to allow its implementation. simulation results of the fbd ΣΔ-based adc model using the matlab/simulink environment are presented in section 4. then, implementation results in vhdl using the sysgen environment are presented and discussed. finally, some conclusions are drawn in section 5.

2. flexible fbd ΣΔ architecture design
to reach the system level specifications of wireless and mobile standards, the authors propose to use a parallel ΣΔ modulator architecture and to modify the conventional mixed baseband stage design [10]. the multistandard receiver processes e-gsm [13], umts [14] and ieee802.11a [15] communication signals [10]. according to these supported communication standard specifications, design specifications for a multistandard sdr receiver have been computed. furthermore, a hybrid homodyne/low-if architecture was proposed in [16] for the sdr receiver front-end. an rf filter selects the received signals. afterward, the signals are amplified by a low-noise amplifier (lna). then, on the one side, the umts and ieee802.11a signals are down-converted by the mixer to baseband frequencies. on the other side, the e-gsm signals are down-converted to a low intermediate frequency of 100 khz to overcome flicker noise disturbance. system level specifications are introduced in sub-section 2.1. then, in sub-section 2.2, the mixed baseband stage design is explained. next, the design of the non-programmable aaf is proposed in sub-section 2.3. afterwards, the design of the fbd ΣΔ-based adc architecture is detailed in sub-section 2.4.

2.1. mixed baseband sdr receiver specifications
according to the specifications of the communication standards handled by the sdr receiver, system level specifications for the baseband receiver are derived. table 1 summarizes the channel bandwidth ch_bw, the channel spacing ch_sp, the reference sensitivity s_ref, the signal-to-noise ratio at the receiver input snr_in, the signal-to-noise ratio at the receiver output snr_out, the analog gain g_ana relative to a 13 dbm adc full-scale input, the receiver dynamic range dr_in and the adc dynamic range dr_adc, from which the adc resolution res_adc is deduced. since e-gsm signals are down-converted to a low intermediate frequency of 100 khz to avoid flicker noise, the e-gsm channel bandwidth is considered equal to 200 khz.

table 1. design specifications for the e-gsm/umts/ieee802.11a receiver.
                 e-gsm   umts    ieee802.11a
ch_bw (mhz)      0.2     3.84    16.6
ch_sp (mhz)      0.2     5       20
s_ref (dbm)      -102    -117    -65
snr_in (db)      18.8    -9      36.6
snr_out (db)     9       -18.2   26.6
g_ana (db)       28      38      43
dr_in (db)       87      92      35
dr_adc (db)      96      73.8    61.8
res_adc (bits)   16      12      10

the mixed baseband architecture is presented in the next sub-section.
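as a side check, the res_adc row of table 1 is consistent with the textbook ideal-quantizer relation dr ≈ 6.02·n + 1.76 db (this relation is standard material, not stated by the authors); a minimal verification in python:

```python
import math

def adc_resolution_bits(dr_db: float) -> int:
    """resolution from dynamic range via dr = 6.02*n + 1.76 db,
    rounded up to the next integer number of bits."""
    return math.ceil((dr_db - 1.76) / 6.02)

for std, dr in [("e-gsm", 96.0), ("umts", 73.8), ("ieee802.11a", 61.8)]:
    print(std, adc_resolution_bits(dr))   # -> 16, 12 and 10 bits, as in table 1
```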
2.2. mixed baseband architecture
the mixed baseband stage, presented in figure 1 [10], follows the mixer, which is controlled by the local oscillator (lo). the mixed baseband stage is composed of a single passive low-pass (lp) aaf that precedes the fbd ΣΔ-based adc. there is no need for an automatic gain control (agc) circuit before the adc stage, since the aaf filters only the e-gsm blockers that are outside the ieee802.11a bandwidth [16]. the m parallel single-bit quantizer ΣΔ modulators are designed using matlab tools. their stability is ensured using a test plan performed in [10]. the ΣΔ modulator outputs are combined in the digital reconstruction stage to reconstruct the final output. the design of the non-programmable aaf is explained in the next sub-section.

figure 1. fbd ΣΔ-based mixed baseband stage.

2.3. design of the non-programmable aaf
in this sub-section, the authors are interested in the design of a butterworth non-programmable aaf for the sdr receiver. this aaf is unique for the e-gsm, umts and ieee802.11a signals. its role is to attenuate the blockers and interfering signals which are susceptible to fold onto the useful signal after the sampling operation of the adc, while ensuring the required snr_out as defined by the design specifications presented in table 1. the low-pass aaf is defined by its cut-off frequency f_p, its rejection frequency f_r, its maximal attenuation a_max in the useful bandwidth, and its minimal attenuation a_min beyond the rejection frequency. the cut-off frequency is set equal to half of the channel bandwidth with a design margin of 30 %. this margin is required to avoid attenuation in the useful channel bandwidth after analog integrated circuit realization, but also against circuit aging [16]. the rejection frequency is fixed at f_s − ch_bw/2, where f_s is the sampling frequency. the f_s values for the different supported standards are obtained when designing the fbd ΣΔ-based adc, as given in table 2 [10]. these evaluation conditions are illustrated in figure 2. the minimal attenuation is calculated as given by (1) [16],

$a_{min} = n_{bl} - s_{ref} + snr_{out} - m_{aaf}$ , (1)

where s_ref is the receiver sensitivity, whose values for the different supported standards are given in table 1, m_aaf is a design margin of 3 db attributed to s_ref, and n_bl is the level of the blocker to attenuate. the blocker level is calculated from the blocker profile at the rf filter output of the different supported standards. in fact, the lna and the mixer linearly amplify the signals in the received bandwidth. the values of the aaf parameters are summarized in table 2. the cut-off frequency is considered to be the same for the three standards and corresponds to half of the ch_bw of the ieee802.11a standard with a margin of 30 %. the aaf order is then computed from the butterworth attenuation expression given by (2),

$a(f) = 10 \log_{10}\!\left(1 + \left(10^{a_{max}/10} - 1\right)\left(f/f_p\right)^{2n}\right)$ , (2)

where n is the butterworth filter order to compute and a_max is set equal to 0.3 db. given the a_min needed to attenuate the blockers at the rejection frequency of each standard, the required aaf order is computed. computation results are summarized in table 2. for the umts and ieee802.11a standards, the required aaf orders are 4 and 3, respectively. however, the e-gsm standard is the most restrictive, since it requires a 6th order butterworth aaf. thus, for the sdr receiver, the single aaf serving the three standards is a 6th order butterworth filter. the frequency response of the designed aaf is presented in figure 3.
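a minimal numeric check of (2) against the table 2 design points (using the f_p, f_r and a_min values quoted there; this sketch is illustrative, not the authors' design script):

```python
import math

def butterworth_attenuation(f, fp, a_max=0.3, n=1):
    """eq. (2): a(f) = 10*log10(1 + (10**(a_max/10) - 1) * (f/fp)**(2n))."""
    eps2 = 10 ** (a_max / 10) - 1
    return 10 * math.log10(1 + eps2 * (f / fp) ** (2 * n))

def required_order(fp, fr, a_min, a_max=0.3):
    """smallest butterworth order meeting a_min at the rejection frequency."""
    n = 1
    while butterworth_attenuation(fr, fp, a_max, n) < a_min:
        n += 1
    return n

# e-gsm, umts, ieee802.11a cases of table 2 (fp = 10.8 mhz for all)
for fr, a_min in [(71.8, 85.0), (70.08, 51.8), (85.2, 41.6)]:
    print(required_order(10.8, fr, a_min))   # -> 6, 4, 3
```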
2.4. flexible fbd ΣΔ architecture
the authors in [10] started from the sdr receiver specifications in terms of channel bandwidths and required adc dynamic ranges for the chosen communication standards. the discrete-time (dt) fbd ΣΔ architecture designed for the adc stage was proposed in [10]. the design realizes a trade-off between increasing the sampling frequency while still operating in discrete time, increasing the number m of parallel branches against a low-complexity goal, and increasing the ΣΔ modulator orders while keeping them stable. thus, an fbd ΣΔ architecture composed of 6 programmable parallel branches was proposed, as presented in figure 4(a).

table 2. design of the lp aaf filter.
                e-gsm   umts    ieee802.11a
f_s (mhz)       72      72      96
f_p (mhz)       10.8    10.8    10.8
f_r (mhz)       71.8    70.08   85.2
n_bl (db)       -23     -44     -47
a_min (db)      85      51.8    41.6
aaf order (n)   6       4       3

figure 2. evaluation conditions for the aaf design.
figure 3. frequency response and specification mask of the lp non-programmable aaf for the sdr receiver.

according to the e-gsm, umts or ieee802.11a communication standard, only the needed branches of the whole architecture are activated. each branch is composed of a dt 4th order single-bit quantizer ΣΔ modulator. the ΣΔ modulator order is defined as the number of integrators or resonators k for low-pass (lp) and band-pass (bp) ΣΔ modulators, respectively. since the designed fbd architecture is composed of both lp and bp ΣΔ modulators, the authors designate k in this paper as the ΣΔ modulator order [10]. besides, the ΣΔ modulators of the proposed fbd architecture are based on a non-unitary signal transfer function (nu-stf) that permits dealing with stability problems and recovering the input signal dynamic range [10]. the branch bandwidths are different, and the sampling frequencies vary from one radio communication standard to another, in order to optimize the flexible fbd ΣΔ architecture while fulfilling the theoretically required dynamic ranges. the branch bandwidth and the sampling frequency according to the chosen standard are given by the branch frequency division plan presented in figure 4(b). in the next section, to detail the theoretical analysis and design of the digital reconstruction stage of the fbd ΣΔ-based architecture, the authors select the umts standard as a test case. this test case was chosen because the umts standard uses the first three branches of the adc architecture; these branches are also selected, together with three more branches, for the digitization of ieee802.11a signals. moreover, the required dynamic range of the umts standard is 73.8 db, which is higher than the 61.8 db required for ieee802.11a.

3. digital reconstruction stage: theoretical analysis and design
in the literature, there are two main approaches to reconstruct the output signal from the parallel ΣΔ modulator outputs while ensuring the required dynamic range [5]. in the first solution, the ΣΔ modulator outputs are directly processed using band-pass filters; then, the selected signals are decimated. the second solution demodulates each ΣΔ modulator output signal by converting it to baseband frequencies; then, the signal is decimated before being processed by a low-pass filter.
it was shown in [5] that digital reconstruction with direct processing presents a high complexity due to the high required bp filter orders and operating sampling frequencies. digital reconstruction with demodulation requires lower lp filter orders and operating sampling frequencies. consequently, in this paper, the authors proceed with digital reconstruction with demodulation, whose architecture is explained in sub-section 3.1. the theoretical analysis of this architecture is presented in sub-section 3.2. afterwards, an optimized digital reconstruction stage is implemented using matlab/simulink, and technology choices are discussed in sub-section 3.3.

3.1. digital reconstruction with demodulation
the digital reconstruction architecture with demodulation is presented in figure 5. in this digital processing, the bp ΣΔ modulator output signals are first brought to baseband by a complex demodulation. this operation consists in multiplying the modulator outputs by the complex sequence m_k[n] given by (3), where f_ck is the central frequency of the kth branch bandwidth, t_s is the sampling period, equal to 1/f_s, and n is a positive integer:

$m_k[n] = e^{-j 2 \pi f_{ck} n t_s}$ . (3)

since the ΣΔ modulators oversample the input signals [5], decimation and filtering operations are mandatory after the complex demodulation. hence, each demodulated signal is decimated in order to decrease its sampling frequency and bring it to the nyquist frequency, defined as twice the channel bandwidth. the global decimation factor d is equal to the global oversampling ratio osr, defined as the sampling frequency f_s divided by the nyquist frequency. then, each demodulated and decimated signal is processed by a low-pass filter that selects the branch bandwidth before being modulated. the modulation operation consists in frequency up-converting each baseband signal around the corresponding branch central frequency at the nyquist rate. finally, the output signals of the parallel branches are recombined to form the output signal of the fbd architecture. for the first branch, which operates with a lp ΣΔ modulator, there is no need to demodulate and modulate, as shown in figure 5.

figure 4. (a) designed fbd ΣΔ-based adc architecture; (b) branch frequency division plan.
figure 5. digital reconstruction with demodulation of the fbd architecture (general case).

starting from the test case corresponding to the fbd ΣΔ-based adc architecture digitizing umts signals, the design of a demodulation-based digital reconstruction stage is performed. in this test case, only the first three parallel branches of the fbd adc architecture are activated. the sampling frequency is set at 72 mhz, and the operating branch bandwidths are given by the branch frequency division plan for umts signals presented in figure 4(b). the global decimation factor d is chosen equal to 16, an integer that permits digitizing umts signals at a nyquist frequency equal to 4.5 mhz. this nyquist band lies between the channel bandwidth ch_bw and the channel spacing ch_sp given in table 1. the equivalent diagram in the discrete-time domain of the designed digital reconstruction stage is presented in figure 6, where h_k(z) is the non-unitary signal transfer function of the kth ΣΔ modulator, g_k(z) the decimation filter of the kth branch, and f_k(z) the branch bandwidth selection filter of the kth branch.
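the per-branch processing of figure 5 (demodulate, decimate, select, re-modulate) can be sketched in a few lines of python; the filter length and bandwidth below are illustrative placeholders, not the authors' design values:

```python
import numpy as np
from scipy.signal import firwin, lfilter

def branch_reconstruct(y_mod, fs, fc, d, numtaps=191, bw=0.6e6):
    """one band-pass branch: complex demodulation at fc (eq. (3)),
    low-pass anti-alias filtering, decimation by d, then
    re-modulation at the decimated rate (eq. (6))."""
    n = np.arange(len(y_mod))
    demod = y_mod * np.exp(-2j * np.pi * fc * n / fs)   # bring branch to baseband
    lp = firwin(numtaps, bw, fs=fs)                     # branch selection filter
    dec = lfilter(lp, 1.0, demod)[::d]                  # filter, then decimate
    m = np.arange(len(dec))
    return dec * np.exp(2j * np.pi * fc * d * m / fs)   # shift back to fc
```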
the figure 6 model is analyzed analytically in the next sub-section.

3.2. theoretical study of the demodulation-based digital reconstruction stage
in this sub-section, based on multirate theory [12], the theoretical analysis of the digital reconstruction stage model designed for the umts test case, as presented in figure 6, is carried out. in the first branch, a decimation operation and a branch selection filtering process are performed. it is necessary to start from the general expression of the z-transform of an input signal decimated by a factor d, as given by (4) [16]:

$y(e^{j\omega}) = \frac{1}{d} \sum_{l=0}^{d-1} x\!\left(e^{j(\omega - 2\pi l)/d}\right)$, or equivalently $y(z) = \frac{1}{d} \sum_{l=0}^{d-1} x\!\left(z^{1/d} w^{l}\right)$ , (4)

with $w = e^{-j 2\pi/d}$ and $z = e^{j\omega}$. therefore, the transfer function of the first branch output signal is deduced as expressed by (5):

$y_1(z) = \frac{1}{d} \sum_{l=0}^{d-1} x\!\left(z^{1/d} w^{l}\right) h_1\!\left(z^{1/d} w^{l}\right) g_1\!\left(z^{1/d} w^{l}\right) f_1(z)$ . (5)

in the 2nd and 3rd branches, the digital reconstruction processing contains demodulation and modulation operations, which consist, as explained in the previous sub-section, in multiplying the signal by a discrete exponential sequence as given by (3). it is important to note that, in this designed digital reconstruction stage, the demodulation is operated at the ΣΔ modulator oversampling frequency f_s, whereas the modulation is performed at the down-sampled frequency f_s/d. the modulation is therefore obtained by multiplying the outputs of the branch bandwidth selection filters by the sequence given by (6):

$m_{k,mod}[n] = e^{j 2 \pi f_{ck} n d t_s}$ . (6)

the z-transform of the 2nd branch output signal after demodulation, y_2dem, is then given by (7):

$y_{2dem}(z) = x\!\left(z\, e^{j 2\pi f_{c2} t_s}\right) h_2\!\left(z\, e^{j 2\pi f_{c2} t_s}\right)$ . (7)

then, the expression of the 2nd branch output after decimation is given by (8):

$y_{2dec}(z) = \frac{1}{d} \sum_{l=0}^{d-1} x\!\left(z^{1/d} w^{l} e^{j 2\pi f_{c2} t_s}\right) h_2\!\left(z^{1/d} w^{l} e^{j 2\pi f_{c2} t_s}\right) g_2\!\left(z^{1/d} w^{l}\right)$ . (8)

after the branch selection filtering by f_2(z) and the modulation by (6), which amounts to the substitution $z \rightarrow \tilde{z}_2 = z\, e^{-j 2\pi f_{c2} d t_s}$, the z-transforms of the 2nd and 3rd branch output signals are obtained as given by (9) and (10):

$y_2(z) = \frac{1}{d} \sum_{l=0}^{d-1} x\!\left(\tilde{z}_2^{1/d} w^{l} e^{j 2\pi f_{c2} t_s}\right) h_2\!\left(\tilde{z}_2^{1/d} w^{l} e^{j 2\pi f_{c2} t_s}\right) g_2\!\left(\tilde{z}_2^{1/d} w^{l}\right) f_2(\tilde{z}_2)$ , (9)

$y_3(z) = \frac{1}{d} \sum_{l=0}^{d-1} x\!\left(\tilde{z}_3^{1/d} w^{l} e^{j 2\pi f_{c3} t_s}\right) h_3\!\left(\tilde{z}_3^{1/d} w^{l} e^{j 2\pi f_{c3} t_s}\right) g_3\!\left(\tilde{z}_3^{1/d} w^{l}\right) f_3(\tilde{z}_3)$ , (10)

with $\tilde{z}_3 = z\, e^{-j 2\pi f_{c3} d t_s}$. the combined output signal of the fbd adc architecture is obtained by summing the three branch output signals, as expressed by (11):

$y(z) = y_1(z) + y_2(z) + y_3(z)$ . (11)

to cancel the aliasing and ensure a perfect reconstruction system, the output signal has to be a delayed version of the input signal, and the alias terms should be canceled [12]. in filter bank architectures, the signal is decimated at the input of the converters and interpolated at their outputs. the main difference between the fbd architecture with demodulation-based digital reconstruction and the filter bank architecture is the presence of the demodulation and modulation operations. in fact, the signal at each band-pass branch is frequency shifted through these operations around the corresponding branch central frequency. consequently, a part of the input signal, frequency shifted around the branch central frequency, is applied at each branch bandwidth. these input signal components have to be recovered in the recombined final output.
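the decimation identity (4) on which the whole derivation rests is easy to verify numerically; the following short python check (illustrative, with an arbitrary random record) compares the dft of a decimated sequence against the alias sum on the right-hand side:

```python
import numpy as np

# numerical check of eq. (4): y(e^{jw}) = (1/d) * sum_l x(e^{j(w - 2*pi*l)/d})
d, n_len = 4, 512
rng = np.random.default_rng(1)
x = rng.standard_normal(n_len)
y = x[::d]                                    # decimation by d

m = n_len // d
w = 2 * np.pi * np.arange(m) / m              # dft frequencies of the short record

def dtft(sig, omega):
    """direct dtft of a finite record at the given frequencies."""
    k = np.arange(len(sig))
    return np.exp(-1j * np.outer(omega, k)) @ sig

lhs = np.fft.fft(y)                           # y(e^{jw}) on the dft grid
rhs = sum(dtft(x, (w - 2 * np.pi * l) / d) for l in range(d)) / d
print(np.max(np.abs(lhs - rhs)))              # ~1e-12: the identity holds
```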
however, the aliasing terms introduced by the decimation, corresponding to the input signal terms for l different from zero, have to be eliminated to ensure perfect reconstruction. introducing $v_k = e^{j 2\pi f_{ck} t_s}$, with $f_{c1} = 0$, and $\tilde{z}_k = z\, v_k^{-d}$, the branch outputs (5), (9) and (10) can be written in the common form (12), and the perfect reconstruction requirement becomes (13):

$y(z) = \frac{1}{d} \sum_{l=0}^{d-1} \sum_{k=1}^{m} x\!\left(\tilde{z}_k^{1/d} w^{l} v_k\right) h_k\!\left(\tilde{z}_k^{1/d} w^{l} v_k\right) g_k\!\left(\tilde{z}_k^{1/d} w^{l}\right) f_k(\tilde{z}_k)$ , (12)

for $l = 0$: $\frac{1}{d} \sum_{k=1}^{m} x\!\left(\tilde{z}_k^{1/d} v_k\right) h_k\!\left(\tilde{z}_k^{1/d} v_k\right) g_k\!\left(\tilde{z}_k^{1/d}\right) f_k(\tilde{z}_k) = \frac{\alpha}{d}\, z^{-\delta}\, x\!\left(z^{1/d}\right)$,
for $l \neq 0$: $\frac{1}{d} \sum_{k=1}^{m} x\!\left(\tilde{z}_k^{1/d} w^{l} v_k\right) h_k\!\left(\tilde{z}_k^{1/d} w^{l} v_k\right) g_k\!\left(\tilde{z}_k^{1/d} w^{l}\right) f_k(\tilde{z}_k) = 0$ , (13)

where α and δ are the gain and the delay of the reconstruction, respectively.

figure 6. equivalent diagram of the demodulation-based digital reconstruction stage model for the umts use case.

this theoretical design of the digital reconstruction stage leads to a complex system of equations, and also to very high filter orders when implementing it in matlab/simulink. consequently, it is essential to modify this model to allow an optimized digital implementation.

3.3. proposed optimized digital reconstruction stage architecture design
the digital reconstruction architecture based on demodulation for the umts test case presented in sub-section 3.1 has been modified in order to minimize its implementation complexity. the optimized design is detailed in this sub-section. the model of the fbd ΣΔ-based adc with the proposed digital reconstruction architecture is designed using matlab/simulink, as presented in figure 7. the model corresponds to the test case of the fbd architecture intended for umts signals. the corresponding block diagram for this model is presented in figure 8. for the first branch, only decimation and filtering operations are needed for the digital reconstruction, since it operates at low-pass frequencies, as shown in figure 5. the decimation operation is always preceded by a decimation filter that serves as an anti-aliasing filter for the resampling operation. to reduce the complexity of such a decimation filter with a high decimation factor, the authors opt for a two-stage decimation: the first stage decimates by a factor of 8, the second stage by a factor of 2. for the second and third parallel branches, which operate at band-pass frequencies, the digital reconstruction is composed of the demodulation, decimation, filtering and modulation operations explained in figure 5. the complex demodulation consists, as explained before, in multiplying the ΣΔ modulator output by a discrete exponential sequence at the branch central frequency, as given by (3). in the matlab/simulink model, the authors replace the complex demodulation and modulation by in-phase (i) and quadrature (q) paths to ensure better conditions for implementation. the demodulation must then be followed by a filtering of the unwanted frequency components generated by the demodulation operation. this filter presents a high complexity, since the unwanted components lie at low frequencies while the filter operates at the oversampling frequency of the ΣΔ modulator. for the second branch, the required finite impulse response (fir) filter order after demodulation is equal to 190. to deal with this problem, the authors opt to place the demodulation operation after the first decimation stage, of factor 8. this solution
permits reducing the operating frequency of the filter that follows the demodulation. moreover, it allows combining this filter with the second-stage decimation filter and the low-pass filter that selects the branch bandwidth signals and rejects the quantization noise in the adjacent branch bandwidths. the first decimation stage, placed at the ΣΔ modulator output, is composed of a decimation by a factor of 8 preceded by a lp fir decimation filter. the orders of these filters in the first, second and third branches are chosen to be 29, 39 and 56, respectively. the orders of the lp fir filters for branch bandwidth selection are equal to 82 (see also the sketch below). the frequency response of these filters, after modulation of the second and third ones, is presented in figure 9. the filter responses overlap; their intersection lies at a level of around −6 db, at the frequency limits between adjacent branches, as shown in figure 9. the sum of the 82nd order lp filters after modulation is computed, and the magnitude of its frequency response is presented in figure 10(a). at the higher and lower ends of the bandwidth, the magnitude response presents ripples and attenuations that do not exceed 2 db, as shown in figure 10(b). thus, the expected performances of the reconstruction system are not affected. after the decimation and filtering operations, the i and q paths are modulated to bring each sub-band signal back around its original frequency, i.e., the branch central frequency. finally, the sub-band output signals are recombined to obtain the reconstructed umts signal. simulation results are presented in section 4.

figure 7. proposed fbd-based adc architecture model with demodulation-based digital reconstruction.
figure 8. block diagram of the fbd ΣΔ-based adc architecture with demodulation-based digital reconstruction.

4. simulation results
simulation results are obtained by applying a multi-tone signal composed of four sine-wave signals to the fbd model shown in figure 7. the first and last sine-wave frequencies are placed in the bandwidths [0, 600 khz] and [1500 khz, 2500 khz] of branches 1 and 3, respectively. the selected values are 300 khz and 1900 khz. the first branch central frequency f_c1 and the third branch central frequency f_c3 are equal to 300 khz and 2000 khz, respectively. the two other sine waves are at frequencies in the 2nd branch bandwidth. they are situated on both sides of the 2nd branch central frequency f_c2, which is equal to 1100 khz, and their values are 700 khz and 1300 khz.
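before looking at the spectra, the two-stage decimation of sub-section 3.3 can be sketched as follows; the cutoff frequencies are illustrative assumptions (only the stage factors and filter orders are taken from the text):

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 72e6                              # oversampling rate, umts test case
h1 = firwin(30, 4.5e6, fs=fs)          # 29th-order stage-1 lp fir (branch 1)
h2 = firwin(83, 2.25e6, fs=fs / 8)     # 82nd-order selection filter at 9 mhz

def two_stage_decimate(x):
    """decimate by 16 in two stages (8, then 2), each stage filtering
    before down-sampling, as in the optimized reconstruction."""
    s1 = lfilter(h1, 1.0, x)[::8]      # 72 mhz -> 9 mhz
    return lfilter(h2, 1.0, s1)[::2]   # 9 mhz -> 4.5 mhz
```

splitting the factor-16 decimation this way is the standard trick for keeping fir orders low: each stage only has to protect the band that survives its own down-sampling.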
in fact, the authors tested the umts fbd second branch with a two-tone signal to verify the correct operation of the i/q demodulation and modulation stages. the sine-wave normalized amplitudes are set to 0.5 for the first and last sine waves and to 0.25 for the sine waves of the 2nd branch, where the normalized amplitude is the input amplitude divided by the power supply voltage [10]. the zoom of the spectrum over [−4.5 mhz, 4.5 mhz] of the second branch sigma-delta modulator output, sigma_delta_output2, is drawn in figure 11. it shows that the sine-wave signals are at the frequencies 700 khz and 1300 khz, as in the test conditions. to present the i/q demodulated signal of the 2nd branch as in figure 11, the authors need to recombine a complex demodulated signal, demodulated_signal_br2, as defined in figure 7. the sampling frequency after the first decimation stage is equal to 9 mhz, and the spectrum covers the band [−4.5 mhz, 4.5 mhz]. the obtained sine-wave signals have frequencies of 200 khz and 400 khz, which are the frequencies of the needed demodulated sine-wave signals. however, the sine-wave signals at the frequencies 1800 khz and 2400 khz are unwanted components that are filtered out by the filters filter i_2 and filter q_2 following the demodulation stage. after the second decimation stage, the spectrum of the recombined final output signal in the band [−2.25 mhz, 2.25 mhz] is presented in figure 12. it shows that the modulated signals have frequencies equal to ±300 khz, ±700 khz, ±1300 khz and ±1900 khz, which corresponds to the chosen values of the simulation test conditions. moreover, the signal-to-noise ratio (snr) computed using matlab/simulink is equal to 75.06 db, which satisfies the required umts dynamic range of 73.8 db. the performance parameters snr and effective resolution res_adc are computed for different combinations of the input frequencies of the four sine-wave signals; computation results are summarized in table 3. besides, the designed fbd model is implemented in vhdl using the system generator (sysgen) tool from xilinx inc. in a co-simulation environment with matlab. the implementation is realized on a virtex-6 fpga target from xilinx inc. the test conditions for the input signals are the same as for the matlab/simulink simulation. the output signal spectrum is presented in figure 13.

table 3. performance parameters of the final output signal.
input signal frequencies (khz)   snr (db)   res_adc (bits)
300, 700, 1300 and 1900          75.06      12.18
400, 800, 1400 and 2000          74.73      12.12
500, 900, 1400 and 1900          74.43      12.07
100, 1000, 1400 and 1700         75.26      12.21

figure 9. magnitude of the 82nd order lp filters after modulation.
figure 10. (a) magnitude of the sum of the 82nd order lp filters after modulation; (b) zoom in the [0, 0.6] normalized frequency band to show the ripples.
figure 11. demodulated signal spectrum of the 2nd branch.
figure 12. recombined output signal spectrum.
figure 13. frequency spectrum of the recombined output signal using the sysgen implementation.
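the paper does not detail how the snr is extracted from the spectra; a generic multi-tone snr estimate (one plausible approach, not the authors' procedure) can be sketched as:

```python
import numpy as np

def snr_db(x, tone_bins, guard=2):
    """snr of a multi-tone record: signal power taken from small
    windows around each tone bin, noise power from all remaining
    bins; window and guard width are illustrative choices."""
    p = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    sig = np.zeros(len(p), dtype=bool)
    for b in tone_bins:
        sig[max(b - guard, 0):b + guard + 1] = True
    return 10 * np.log10(p[sig].sum() / p[~sig][1:].sum())  # [1:] skips dc
```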
the computed snr for this spectrum is equal to 74.08 db, which satisfies the umts required dynamic range.

5. conclusions
in this paper, the authors proposed a mixed baseband architecture based on an fbd ΣΔ-based adc in a multistandard receiver. the mixed baseband stage architecture is presented, and the single non-programmable aaf is designed using the butterworth approximation. the theoretical analysis and design of the digital reconstruction stage for the fbd ΣΔ-based adc architecture dedicated to multistandard radio receivers are proposed. the designed digital reconstruction stage is based on demodulation, which brings the ΣΔ modulator outputs to baseband before proceeding to the decimation and lp filtering operations. the parallel signals are then modulated and combined to form the final output signal. however, the theoretical analysis of the digital reconstruction stage does not converge to a solution for the filter coefficients. besides, the first proposed design leads to very high filter orders when implemented in matlab/simulink. consequently, it is essential to modify this model to permit an optimized digital implementation. finally, the whole fbd ΣΔ-based adc architecture model with the optimized digital reconstruction stage is implemented and tested for the umts test case in matlab/simulink. moreover, hardware implementation and test results in the sysgen environment are presented for quantized coefficient values. all obtained results satisfy at least the required umts dynamic range, equal to 73.8 db.

references
[1] j. mitola, “software radios: survey, critical evaluation and future directions,” ieee aero. and elect. syst. mag., vol. 8, no. 4, pp. 25-36, apr. 1993.
[2] j. m. de la rosa, “an empirical and statistical comparison of state-of-the-art sigma-delta modulators,” ieee int. symp. on circ. and syst., isbn 978-1-4673-5760-9, pp. 825-822, may 2013.
[3] i. galton, h. t. jensen, “delta-sigma modulator based a/d conversion without oversampling,” ieee trans. circuits and syst.-ii: analog and digital sig. proc., vol. 42, no. 12, pp. 773-784, dec. 1995.
[4] a. eshraghi, t. fiez, “a time-interleaved parallel ΔΣ a/d converter,” ieee trans. circuits and syst.-ii: analog and digital sig. proc., vol. 50, no. 3, pp. 118-129, mar. 2003.
[5] a. beydoun, p. benabes, “bandpass/wideband adc architecture using parallel delta sigma modulators,” proceedings of the 14th european signal processing conf., sept. 2006.
[6] p. benabes, a. beydoun, m. javidan, “frequency-band-decomposition converters using continuous-time sigma delta a/d modulators,” ieee north-east workshop on circuits and syst. and taisa conf., pp. 1-4, jun. 2009.
[7] p. benabes, “extended frequency-band-decomposition sigma-delta a/d converter,” analog integr. circ. process., springer science+business media, llc 2009.
[8] a. eshraghi, t. fiez, “a comparative analysis of parallel delta-sigma adc architectures,” ieee trans. circuits and syst. i: regular papers, vol. 51, no. 3, pp. 450-458, mar. 2004.
[9] a. blad et al., “a general formulation of analog-to-digital converters using parallel sigma-delta modulators and modulation sequences,” ieee asia pacific conf. circuits and syst. apccas, pp. 438-441, dec. 2006.
[10] r. lahouli, m. ben-romdhane, c. rebai, d. dallet, “towards flexible parallel sigma delta modulator for software defined radio receiver,” ieee int. instrum. and meas. technology conf., may 2014.
[11] r.
lahouli et al., “digital reconstruction stage of the fbd σδ-based adc architecture for multistandard receiver,” 20th imeko tc4 international workshop on adc modelling and testing, research on electric and electronic measurement for the economic upturn, benevento, italy, sept. 2014.
[12] p. p. vaidyanathan, “multirate systems and filter banks,” englewood cliffs, nj: prentice-hall, 1993.
[13] gsm. radio transmission and reception gsm 05.05. etsi, 1996.
[14] umts. ue. radio transmission and reception (fdd), 3gpp ts 25.101, version 5.2.0 release 5. etsi, 2002.
[15] ieee 802.11a part 11: wireless lan medium access control (mac) and physical layer specifications, amendment 1: high speed physical layer in the 5 ghz band. ieee, 1999.
[16] m. ben-romdhane, c. rebai, a. ghazel, p. desgreys, p. loumeau, “nonuniformly controlled analog-to-digital converter for sdr multistandard radio receiver,” ieee trans. circuits and syst. ii: brief papers, vol. 58, no. 12, pp. 862-866, dec. 2011.

acta imeko
may 2014, volume 3, number 1, 16-18

the measurement chain and validation of experimental measurements
r. j. moffat
department of mechanical engineering, stanford university, stanford, california 94305, united states of america

section: research paper
keywords: measurement chain, uncertainty propagation
citation: r. j. moffat, the measurement chain and validation of experimental measurements, acta imeko, vol. 3, no. 1, article 5, may 2014, identifier: imeko-acta-03 (2014)-01-05
editor: luca mari, università carlo cattaneo
received may 1st, 2014; in final form may 1st, 2014; published may 2014
copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited

this is a reissue of a paper which appeared in acta imeko 1973, proceedings of the 6th congress of the international measurement confederation, “measurement and instrumentation”, 17-23.6.1973, dresden, vol. 1, pp. 45-53. the paper witnesses the sophisticated discussion that, well before the publication of the guide to the expression of uncertainty in measurement (gum), was active in the measurement science community around the subject of error and uncertainty, and its consequences on the structure of the measuring process and the way it is performed.

an experienced human operator can often recognize an anomalous combination of data even upon its first occurrence, and may save an apparatus from a serious failure by this “intuition”. the human operator's ability to identify trends and anomalies is based not only upon the instantaneous value of a measurand, but upon its context (the other associated measurands, including prior values). decisions are made based on groups of data, some of which may even be non-quantitative (i.e. sound, smell, color etc.). the advent of automated data acquisition and computer control opens the door to this same type of decision making by a control system.
certain requirements must be met: (1) the measured data must accurately reflect the state of the system, (2) the computer must “know what to expect” over the range of normal operation and, (3) the computer must be able to distinguish between allowable deviations due to experimental uncertainty and deviations which signify trouble. these are the same problems which face an experimental research program, and it seems likely that the nomenclature and methodology developed for research experiments will be helpful in discussing measurements for computer-aided control. this paper presents three ideas found useful in planning experimental programs: (1) the nomenclature of the measurement chain, (2) the data reduction program considered as a mathematical model of the real system and, (3) the use of uncertainty analysis to predict the allowable scatter in an experimental result. the process of finding the numerical value of a measurand is illustrated in figure 1. five potentially different values exist for each measurement. the various terms will be illustrated in terms of a hypothetical experiment: determining the exhaust gas temperature of a small engine. the principal measurand, in this example, is temperature; all other descriptors of the system are peripheral measurands.

figure 1. the measurement chain.

the real value of the principal measurand is the value the measurand would have if the system were not affected by the measurement process. in the present example, the real value would be the temperature of the exhaust in an uninstrumented engine, running at some stated speed and load. the available value is the value of the measurand in the system, at the location of the sensor, while the measurement is being taken. there will always be some difference between the available value and the real value, though it may be small, since it is impossible to change the state of the sensor without also changing the state of the system. in addition, the presence of the sensor may cause the system to move to a new operating point, resulting in a still further change in the value of the measurand. the available value is the one to which the sensor is exposed: a “perfect sensor” would equilibrate at the available value. in the example, the presence of the temperature sensor in the exhaust duct will raise the engine back-pressure, requiring a slight increase in fuel flow to maintain the same nominal speed and load. this will result in an increase in the exhaust gas temperature. the available value will be higher than the real value for this case. the achieved value is the value the measurand has in the sensor, while the measurement is being made. if the calibration of the sensor were perfectly known, then this is the value which would be measured. many sensors, and particularly thermal sensors, respond to more than one aspect of their surroundings.
these system/sensor interactions cause the sensor to equilibrate with its entire environment rather than just the principal measurand, and give rise to what is known as "environmental error". in the present example the temperature sensor is subject to radiation error, conduction error, and velocity error. thus the temperature level in the sensor (the achieved value) will be lower than the temperature of the gas stream at the sensor location (the available value) due to system/sensor interaction. the difference will depend upon the velocity and composition of the exhaust gases as well as the materials and temperatures of the surrounding duct work and hardware. the measured value is the value which is attributed to the measurand when the output of the sensor is interpreted using the best estimate of the calibration of the sensor. if the calibration were without error, the measured value would be equal to the achieved value. if the calibration of the sensor is affected by the conditions of use in a manner which is not known to the user, then the measured value will be different from the achieved value. in the present example, assume the temperature sensor to be a thermocouple whose elements are exposed directly to the gas streams. after a period of time there may be sufficient chemical reaction with the exhaust gases to cause a change in the calibration of the wire. use of standard emf-temperature tables with such a thermocouple would result in measured values which might be significantly different from achieved values. the corrected value is the engineer's best estimate of the real value, accounting for all of the recognized sources of error: system disturbance, system/sensor interactions, and calibration change. in order that experimental data properly describe the state of a system, the corrected values must be acceptably close to the real values.

recognition of the many ways in which unwanted effects can enter a measuring chain is important in devising systems which return valid measurements. too often, principal emphasis is placed on the calibration of the sensor (the link between the achieved value and the measured value). the state of the instrumentation art is so well advanced now that, in general, the principal remaining difficulties are caused by the system/sensor interactions. errors due to system/sensor interactions can be controlled by either of two techniques: (1) design of a system which minimizes system disturbance and system/sensor interactions, or (2) use of a data processing program, prior to the control mode, which applies the required corrections. the data reduction program and the apparatus must be considered together. the data reduction program may require peripheral data to be gathered (e.g. wall temperature, gas velocity) in order to properly correct the control data. this relationship is illustrated in figure 2. consider a general question: "what is the value of ?" a test apparatus and its associated data program together must account for all system disturbances and system/sensor interactions with a minimum of uncertainty. large corrections tend to be uncertain, hence the system should be designed to minimize the disturbances and interactions. whatever cannot be accomplished by the system design must be done by the data processor. if the combination is properly matched then the corrected value will be independent of the peripheral effects: they will be suppressed by the system and corrected for by the program.
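as an illustration of the kind of correction such a data processing program can apply, the sketch below estimates the available gas temperature from the achieved (sensor) temperature by compensating the radiation error discussed above. it is a minimal sketch, not the author's program: it assumes a steady-state energy balance on a bare thermocouple junction, and the emissivity and film coefficient in the usage line are hypothetical values chosen only for illustration.

```python
# minimal sketch of a radiation-error correction for a bare thermocouple
# junction, assuming the steady-state balance
#   h * (t_gas - t_tc) = eps * sigma * (t_tc**4 - t_wall**4)
SIGMA = 5.670e-8  # stefan-boltzmann constant, w/(m^2*k^4)

def corrected_gas_temperature(t_tc_k, t_wall_k, h_conv, emissivity):
    """return the corrected (available) gas temperature in kelvin from the
    achieved sensor temperature, the duct wall temperature (a peripheral
    measurand), the convective film coefficient and the junction emissivity."""
    q_rad = emissivity * SIGMA * (t_tc_k ** 4 - t_wall_k ** 4)  # radiated heat flux
    return t_tc_k + q_rad / h_conv  # convection must resupply the radiated heat

# usage: a junction reading 900 k in a duct with 600 k walls (hypothetical data);
# note that the correction grows automatically if the wall temperature drops.
print(corrected_gas_temperature(900.0, 600.0, h_conv=120.0, emissivity=0.8))
```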
only significant information will be passed to the control block. if, for instance, the temperature of the duct walls decreased due to a drop in ambient air temperature, the increased radiation error would cause the measured value of temperature to go down, even though the available value of temperature remained constant. a properly written data processing program would, however, return the same corrected value, since it would properly compute the new radiation correction.

one further problem remains: tolerance on the set point. there is an uncertainty in any physical measurement, and a result computed from several measurements is affected by the uncertainty in each of its inputs. it is desirable to be able to anticipate the uncertainty interval associated with a computed result, r, which results from the recognized uncertainty in each of its inputs. this describes the interval within which the computed result must lie as a result of purely random variations of each of its input variables. the uncertainty interval represents a "tolerance" on the computed result. for purposes of uncertainty analysis, a single measurement can be regarded as bearing the following information:

$$ x = \bar{x} \pm \delta x \; (20/1) \qquad (1) $$

where:
$\bar{x}$ is the most probable mean value of x which would be observed if it were measured many times,
x is the presently recorded value of one measurement of x,
$\delta x$ is the interval within which the most probable mean is felt to lie,
(20/1) are the "odds" which the experimenter believes apply to the preceding statement, i.e. a measure of confidence.

figure 2. the experimental loop.

one frequent technique for estimating the uncertainty in a computed result is that of kline and mcclintock [1], which propagates the uncertainty at constant probability. consider a result, r, computed from several variables, the $x_i$, where: (1) each $x_i$ is independent and (2) each $x_i$ displays a gaussian distribution of uncertainty. the uncertainty in the result is then given by:

$$ \delta r = \left[ \left( \frac{\partial r}{\partial x_1} \delta x_1 \right)^2 + \left( \frac{\partial r}{\partial x_2} \delta x_2 \right)^2 + \cdots + \left( \frac{\partial r}{\partial x_n} \delta x_n \right)^2 \right]^{1/2} \qquad (2) $$

the computing equation $r = r(x_1, x_2, x_3, \ldots, x_n)$ is the data reduction equation by which r is calculated from its inputs, $x_i$. the various partial derivatives usually have different values in different parts of the operating range. the uncertainty in the result r is, therefore, governed sometimes by one variable and sometimes by another. active computer control permits the use of "variable tolerances" which are consistent with the physical laws governing the uncertainty. the principal problem which arises is: what value of $\delta x$ should be used? the answer depends upon the use to which the final result will be put. a general answer is shown in figure 3. if an uncertainty calculation is being made in order to plan a system, then the only component of $\delta x$ would be the "resolution" of the sensor, or the ability to interpolate data from its output (zeroth order uncertainty). any real system tends to have small disturbances which vary randomly with time (a timewise "jitter" or unsteadiness), and different sensors have different dynamic characteristics and may introduce different phase shifts into their outputs when exposed to the same process stream. one way to deal with this is to treat the unsteadiness as an uncertainty and add its effect to that of the interpolation problem (first order uncertainty).
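before turning to the remaining level of the hierarchy, note that the propagation of equation (2) is easy to mechanize when the data reduction equation exists only as a program; the sketch below estimates the partial derivatives by central finite differences. this is a generic illustration, not code from the paper, and the power-measurement example at the end uses hypothetical values.

```python
import numpy as np

def propagate_uncertainty(result_fn, x, dx, rel_step=1e-6):
    """kline-mcclintock propagation at constant probability:
    delta_r = sqrt(sum_i (dr/dx_i * dx_i)**2), with the partial derivatives
    of the data reduction equation estimated by central finite differences."""
    x = np.asarray(x, dtype=float)
    dx = np.asarray(dx, dtype=float)
    total = 0.0
    for i in range(x.size):
        h = rel_step * max(abs(x[i]), 1.0)   # step scaled to the variable
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        dr_dxi = (result_fn(xp) - result_fn(xm)) / (2.0 * h)
        total += (dr_dxi * dx[i]) ** 2
    return float(np.sqrt(total))

# usage: uncertainty in an electric power result r = v * i, with hypothetical
# readings 230 v +/- 1 v and 5 a +/- 0.05 a (both at the same odds)
power = lambda v: v[0] * v[1]
print(propagate_uncertainty(power, x=[230.0, 5.0], dx=[1.0, 0.05]))
```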
if the final result is to be used in such a way that the absolute level would be important (for example by subtracting two computed results to determine a difference), then the uncertainties in the calibrations must be included (nth order uncertainty). with the use of uncertainty propagation it becomes possible to set floating limits on the control variables to account for the changing sensitivity of the process to its variables.

summary

in many respects, the advent of computer-based control brings closer together the areas of measurement for research and measurement for control. if computer control is carried to its logical end, the control function should be preceded by a data reduction program which corrects for the disturbance effect of the sensor and all of the recognized interactions between the system and the sensors. a data reduction program which completely models the behavior of the system will return correct measurement data to the control unit, regardless of the peripheral conditions on the system. development of the data reduction program should complement and accompany the development of the hardware system. uncertainties in the measured data will cause uncertainties in the computed result. this requires establishment of "tolerances" on the control parameters. uncertainty analysis techniques based on constant probability propagation provide a rational basis for establishing limits for acceptable excursions.

references

[1] s.j. kline, f.a. mcclintock, "describing uncertainties in single sample experiments", mechanical engineering, january 1953.

figure 3. the levels of uncertainty analysis.

acta imeko, july 2012, volume 1, number 1, 1, www.imeko.org

journal contacts

about the journal

acta imeko is an e-journal reporting on the contributions on the state and progress of the science and technology of measurement. the articles are based on presentations presented at imeko workshops, symposia and congresses. the journal is published by imeko, the international measurement confederation. the issn, the international identifier for serials, is 2221-870x.

editorial and publication board

prof. paul p.l. regtien (the netherlands, vice president for publications)
dr. dirk röske (germany, information officer)
prof. antónio da cruz serra (portugal, chairman of the advisory board)
prof. pasquale daponte (italy, chairman of the technical board)
prof. francisco alegria (portugal, editorial office)
prof. sergio rapuano (italy, editorial office)
imeko technical committee chairmen (ex officio)

about imeko

the international measurement confederation, imeko, is an international federation of currently 39 national member organisations individually concerned with the advancement of measurement technology. its fundamental objectives are the promotion of international interchange of scientific and technical information in the field of measurement, and the enhancement of international co-operation among scientists and engineers from research and industry.

addresses

principal contact
prof. paul p. l. regtien
measurement science consultancy (msc)
julia culpstraat 66, 7558 jb hengelo (ov), the netherlands
email: paul@regtien.net

acta imeko
dr. dirk röske
physikalisch-technische bundesanstalt (ptb)
bundesallee 100, 38116 braunschweig, germany
email: dirk.roeske@ptb.de

support contact
dr. dirk röske
physikalisch-technische bundesanstalt (ptb)
bundesallee 100, 38116 braunschweig, germany
email: dirk.roeske@ptb.de
acta imeko, issn: 2221-870x, april 2016, volume 5, number 1, 5-9

lc-tandem mass spectrometry as a screening tool for multiple detection of allergenic ingredients in complex foods

linda monaci, rosa pilolli, elisabetta de angelis, rossella carone, michelangelo pascale
institute of sciences of food production (ispa), national research council (cnr), via giovanni amendola 122/o, 70126 bari, italy

section: research paper
keywords: linear ion trap; mass spectrometry; food allergens; multi-allergen analysis; incurred cookies
citation: linda monaci, rosa pilolli, elisabetta de angelis, rossella carone, michelangelo pascale, lc-tandem mass spectrometry as a screening tool for multiple detection of allergenic ingredients in complex foods, acta imeko, vol. 5, no. 1, article 3, april 2016, identifier: imeko-acta-05 (2016)-01-03
section editor: claudia zoani, italian national agency for new technologies, energy and sustainable economic development, rome, italy
received 8 june 2015; in final form 16 december 2015; published april 2016
copyright: © 2016 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was supported by the project safe & smart - nuove tecnologie abilitanti per la food safety e l'integrità delle filiere agro-alimentari in uno scenario globale - national cl.an-cluster agroalimentare nazionale programma area 2
corresponding author: linda monaci, e-mail: linda.monaci@ispa.cnr.it

abstract: in the present investigation, an lc-ms method for sensitive multiplex detection of five allergenic ingredients in a processed food matrix is presented. cookies were chosen as a complex food model and were incurred with egg, milk, soy, hazelnut and peanut before baking. extraction, purification and pre-concentration protocols were applied to the ground cookies based on protocols described elsewhere. specific instrumental features of a dual-cell linear ion trap ms instrument were exploited to identify suitable peptide markers for each allergen and to deliver a sensitive multiplex srm-based method for the simultaneous detection of common allergenic ingredients which might contaminate such a commodity.

1. introduction

food allergies are reported to be on the rise, especially in the last decade [1]. apart from the intentional incorporation of allergenic ingredients into foods for manufacturing purposes, which is regulated by the current legislation enforced in europe (directive 2007/68) [2], contamination of food by hidden allergens at the moment represents a major health problem for allergic patients. allergens are defined as hidden when they are not declared on the product label as required by the legislation in place in europe, and they might unexpectedly reach the end products through several routes of accidental contamination [3]. different analytical methods have been developed for monitoring food allergen contamination along the food chain. methods address either the allergen itself or a marker contained in the allergenic food. to support allergen control within haccp (hazard analysis and critical control points) programs, laboratory immunoassays have been proposed as screening tools due to their ease of use, relatively high throughput and the low detection limits reached [4], [5]. on the other hand, an analysis with a higher level of specificity might be required for confirmation of the results obtained. in the last few years, mass spectrometry based methods [6], [7] have been considered a promising analytical strategy for food allergen monitoring, thanks to the advances made in this technology, which enable a reduction of the risk of false positives compared to elisa methods. ms-based methods in general make it possible to overcome several restrictions of antibody-based methods, such as the enzyme-linked immunosorbent assay.
it is well known that antibody-based kits can produce false positives, especially when applied to complex or processed food matrices, as a consequence of epitope modification or masking effects. in addition, the use of such antibody-based kits presents limitations in running multiplex analyses. although a number of papers have been published in this field using mass spectrometry as a screening tool for allergen detection in foods [7]-[15], only a few of them are directed to the simultaneous determination of several classes of food allergens in food commodities [12], [16]-[19]. thanks to the innovative configuration and the versatility shown by the dual-cell linear ion trap ms used, we have recently developed a method based on micro high performance liquid chromatography (hplc) and esi-ms/ms detection for the screening of egg, milk and soy in cookies by monitoring selected peptide markers for each allergenic ingredient [17]. as a follow-up of that work, we present herein a step forward of that method, which includes other allergenic ingredients to be monitored, to deliver a multiple selected reaction monitoring (srm) method for the multi-target detection of allergenic foods like milk, egg, soy, peanut and hazelnut in cookies, chosen as a complex food matrix. a subset of peptides tracing the five allergenic foods was used to design a multi-target srm method to detect the presence of each allergenic contaminant in food, and the two most sensitive peptide markers per protein were selected in order to retrieve quantitative information.

2. materials and methods

2.1. reagents

acetonitrile (lc-ms grade), formic acid, acetic acid, ammonium bicarbonate, trizma base, tween 20, hydrochloric acid, iodoacetamide (iaa), dithiothreitol (dtt), egg powder (ep) and skimmed milk powder (mp) were obtained from sigma-aldrich (milan, italy). trypsin (proteomics grade) was purchased from promega (milan, italy); rapigest™ surfactant was purchased from waters (milford, ma, usa). cellulose acetate syringe filters, 1.2 µm (size 25 mm), were purchased from labochem science s.r.l. (sant'agata di battiati, ct, italy), and polytetrafluoroethylene syringe filters, 0.2 µm (size 4 mm), were purchased from sartorius italy s.r.l. (muggiò, mb, italy). disposable pd-10 cartridges were purchased from ge healthcare life sciences (milan, italy), while ultrafiltration (uf) tubes with 30 kda cut-off membranes were purchased from millipore (billerica, ma, usa). pre-cooked soy flour (sf) was purchased from a local retailer. roasted peanuts and hazelnuts were provided by besana s.p.a. (san gennaro vesuviano, na, italy).

2.2. preparation of incurred cookie samples
cookies incurred with egg, milk, soy, hazelnut and peanut were prepared according to the following recipe: 418 g of wheat flour, 180 g of sugar, 1 g of salt, 2 g of sodium bicarbonate, 90 g of extra virgin olive oil, 160 g of water, and 6.4 g of each allergenic ingredient (skimmed milk powder, egg powder, pre-cooked soy flour, ground roasted peanut and ground roasted hazelnut). the allergenic ingredients were first homogenized and then added to the dry mixture (wheat flour, sugar, salt and sodium bicarbonate) at a final concentration of 10000 µg/g. subsequently, olive oil and water were added, and the final total weight was estimated to be approximately 900 g. the incurred dough was left mixing for 40 min before being portioned into 10 g aliquots, which were spread in open cookie tins of about 7 cm in diameter and approximately 1 cm in height and baked at 200 °c for 12 min. after cooling down, the cookies were weighed in order to re-scale the actual allergen concentration to the final baked matrix. the highest incurred cookie level corresponded to 8756 µg/g (µg of allergenic ingredient per g of matrix). for the production of blank cookie samples, the amount of allergenic foods was replaced by wheat flour (a total of 32 g). both blank and incurred cookies were finely milled in a blender at 17,000 rpm (steril mixer 12, model 6805-50, pbi international) by iteration of four cycles of blending (30 s) and rest (10 s) in order to prevent material overheating. ground blank and incurred cookies were passed through a 1 mm sieve, spread on a large tray (50 cm × 50 cm) and manually mixed for homogeneity purposes. a total of 5 subsamples (10 g each) were taken from each stock powder produced (blank and incurred) and subsequently combined to form a single representative sample (about 50 g each). two serial dry dilutions of the incurred sample were prepared by mixing ground blank and incurred samples in the ratio 1:3, to reach a final theoretical concentration of 973 µg/g. this concentration level was used as the reference sample for all further experiments.

2.3. preparation of calibration curves

four-point calibration curves were prepared for incurred cookies to cover the range 20-243 µg/g. the calibration curves were prepared by serial dilutions of incurred cookie extracts at the 973 µg/g concentration level with appropriate volumes of a blank cookie extract. all samples at each concentration level were purified and pre-concentrated by ultrafiltration with dimensional cut-off membranes, based on protocols detailed elsewhere [17], [19].

2.4. enzymatic digestion

the final extract was denatured by heating for 15 min at 95 °c and subsequently diluted in the surfactant/denaturing agent rapigest™ (dilution 1:2) to reach a final volume of 100 µl. protein enzymatic digestion was carried out by addition of a suitable amount of trypsin as specific cleavage enzyme, to reach an approximate enzyme/protein ratio of 1/50. the reaction was stopped after overnight incubation by acidifying the sample with 1 m hcl, and the final extract was filtered through 0.2 µm ptfe filters before chromatographic injection.

2.5. hplc-ms/ms analysis and database search

the hplc-ms system consisted of a uhplc pump provided with an autosampler and an esi interface connected to a velos pro™ linear ion trap mass spectrometer (thermo fisher scientific, san josè, usa).
peptide separation was accomplished on an acclaim™ pepmap100 analytical column (thermo fisher, san josè, usa), 1 mm × 15 cm × 3 µm, 100 å porosity, at a flow rate of 60 µl/min, and the following elution gradient was used: 0-40 min, solvent a reduced from 85 % to 45 %; 40-42 min, further reduction from 45 % to 10 %; constant for 10 min; 52-54 min, back to 85 % and constant at this composition to allow a 15 min column conditioning before the next injection (solvent a = h2o + 0.1 % formic acid; solvent b = ch3cn/h2o, 80/20 v/v + 0.1 % formic acid). for the srm acquisition mode, a six-segment acquisition scheme was set up, screening a total of 21 peptides. each segment counted up to four scan events, in which each selected peptide was isolated and activated by cid with a normalized collision energy of 35 %, and the ion current related to the three most intense transitions was recorded within a 3 m/z window.

3. results and discussion

3.1. selection of peptide markers

one of the most crucial steps in designing a multi-target screening ms-based method for food allergen detection in complex food matrices is the appropriate selection of target peptides, which should fulfil specific requirements to be considered reliable allergen markers. in order to draw up a list of potential candidate markers for the selected allergens, a preliminary untargeted ms analysis in data dependent acquisition (dda) mode was carried out on the cookie protein extract (sample incurred at 973 µg/g), followed by purification on size-exclusion columns. raw data were processed via the commercial software proteome discoverer™ (version 1.4), and protein assignment was accomplished via the sequest ht scoring algorithm by searching ms/ms spectra against a customized database (db) restricted to food allergens and other contaminants. a list of peptides was generated, and the ms/ms spectra providing the highest matching with the predicted fragmentation patterns were further validated by analyst inspection, as already detailed in a previous paper [17]; only ms/ms peptide spectra containing at least three consecutive peptide fragments of either y- or b-ions were selected as good candidates. finally, the three most intense transitions were chosen for each internally validated peptide, and these were further used to build up an srm acquisition scheme aiming at developing a sensitive and selective method for the detection of target proteins in a complex matrix like cookies. in order to design a proper ms acquisition method, parameters such as the number of segments along the lc run, the number of scan events per segment, the acquisition scan rate (and consequently resolution) and the acquisition range were optimized. based on the expected elution time of each candidate peptide marker, the acquisition run in srm mode was split into six segments, and a maximum of four scan events per segment were recorded for each peptide marker monitored. notably, after peptide isolation and fragmentation, the sum of the ion current related to the three most intense transitions was recorded.
ms analyses were carried out both in allergen-free and incurred cookie extracts in order to confirm the absence of any interfering peak at the expected retention times and to evaluate the resolution obtained by chromatographic separation; this comparison allowed us to highlight some co-elution problems encountered with two candidate peptides, namely peptide esyfvdaqpk, 592.3 m/z, belonging to the α-chain of β-conglycinin (soy protein), and peptide dqssylqgfsr, 644.3 m/z, belonging to a fragment of conarachin (peanut protein), as depicted in figure 1. as a consequence, these peptides were excluded from the list of candidate markers, since they were not considered reliable markers in incurred baked cookies. apart from the two mentioned peptides that were excluded from the list, no matrix interference was observed for the other candidate markers selected; these chromatographic conditions were therefore considered suitable for our purposes, and a total of 19 peptides were deemed suitable markers for allergen detection in cookies. a typical representation of the multi-target analysis achieved in srm mode is shown in figure 2, where an overlay of nineteen chromatographic traces is reported.

3.2. assessment of quantitative performances

in order to select the best quantitative markers within the list of nineteen peptide markers included in the srm-ms/ms instrument method, matrix-matched curves were obtained in incurred cookies and the relevant srm traces were recorded. the two most sensitive peptides for each allergenic food category were then selected as quantifier peptides (table 1). response linearity was assessed on the whole concentration range under investigation (20-243 µg/g) with the fisher-snedecor f-test. the ratio between regression and residual variance was compared with the critical f value with the proper degrees of freedom at 99 % confidence, proving that the linear correlation was significant. limits of detection (lod) and quantification (loq) were estimated as the minimum concentration of an added allergenic ingredient that can be detected, at s/n equal to 3 and 10, respectively (the standard deviation of the calibration line intercept was used in this case as an estimate of the noise). table 1 reports an overview of the method performances for a total of ten peptides selected for the 5 allergenic ingredients. as appears from the table, the limit values calculated for each allergenic ingredient were very similar irrespective of the specific marker monitored, thus confirming the reliability of both peptides selected for quantitative analysis. besides, the lods of the allergenic ingredients under consideration in incurred cookies were found to be 10 µg/g for egg and 8 µg/g for milk and peanut, while the highest sensitivity, reached for soy and hazelnut, was down to 5 µg/g (allergen/food matrix).

figure 1. comparison of srm chromatographic traces recorded for blank and incurred cookie (243 µg/g), for two candidate peptide markers, 592.3 (esyfvdaqpk) and 644.3 (dqssylqgfsr), belonging to soy and peanut, respectively.

figure 2. overlay of typical ion chromatograms recorded in srm acquisition mode for the selected peptide markers. the entire chromatographic run was divided into six acquisition time segments: (i) 0-4.5 min; (ii) 4.5-8.5 min; (iii) 8.5-12.0 min; (iv) 12.0-14.1 min; (v) 14.1-18.8 min; (vi) 18.8-30.0 min.
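the lod/loq estimation described in section 3.2 (3 and 10 times the standard deviation of the calibration-line intercept, divided by the slope) can be reproduced with an ordinary least-squares fit. the sketch below is a minimal illustration of that estimator; the four-point calibration data in the usage line are hypothetical, not values from the paper.

```python
import numpy as np

def lod_loq_from_calibration(conc, response):
    """estimate lod and loq as 3*s/slope and 10*s/slope, using the standard
    deviation s of the calibration-line intercept as the noise estimate."""
    conc = np.asarray(conc, dtype=float)
    response = np.asarray(response, dtype=float)
    (slope, intercept), cov = np.polyfit(conc, response, 1, cov=True)
    sd_intercept = np.sqrt(cov[1, 1])  # variance of the intercept estimate
    return 3.0 * sd_intercept / slope, 10.0 * sd_intercept / slope

# usage: hypothetical matrix-matched curve over 20-243 µg/g (conc vs. peak area)
lod, loq = lod_loq_from_calibration([20, 61, 122, 243],
                                    [1.1e4, 3.3e4, 6.4e4, 1.31e5])
print(f"lod = {lod:.1f} µg/g, loq = {loq:.1f} µg/g")
```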
4. conclusions

in the present investigation, an lc-ms method for sensitive multiplex detection of egg, milk, soy, hazelnut and peanut allergenic ingredients in an incurred cookie matrix was presented. suitable peptide markers for each allergenic ingredient were selected based on several parameters which proved the reliability of the identification. specific transitions of the ms/ms spectra were identified to build up a highly selective and sensitive lc-multiplex srm-based method for the simultaneous detection of egg, milk, soy, hazelnut and peanut in cookies.

acknowledgement

the work was funded by the project safe & smart - nuove tecnologie abilitanti per la food safety e l'integrità delle filiere agro-alimentari in uno scenario globale - national cl.an-cluster agroalimentare nazionale programma area 2. besana group s.p.a. is kindly acknowledged for providing hazelnuts and peanuts and for the fruitful discussions.

references

[1] l. monaci, r. pilolli, e. de angelis, g. mamone, mass spectrometry in food allergen research, in: comprehensive analytical chemistry, y. picó (editor), elsevier, amsterdam (netherlands), oxford (uk), waltham (usa), 2015, isbn 978-0-444-63340-8, pp. 359-393.
[2] european commission, commission directive 2007/68/ec, off. j. eur. union l310 (2007) 11-14.
[3] s. hattersley, r. ward, a. baka, r.w.r. crevel, advances in the risk management of unintended presence of allergenic foods in manufactured food products: an overview, food chem. toxic. 67 (2014) 255-261.
[4] l. monaci, a. visconti, immunochemical and dna-based methods in food allergen analysis and quality assurance perspectives, tr. food sci. technol. 21 (2010) 272-283.
[5] r. pilolli, l. monaci, a. visconti, advances in biosensor development based on integrating nanotechnology and applied to food-allergen management, tr. anal. chem. 47 (2013) 12-26.
[6] l. monaci, a. visconti, mass spectrometry-based proteomics methods for analysis of food allergens, tr. anal. chem. 28 (2009) 581-591.
[7] p.e. johnson, s. baumgartner, t. aldick, c. bessant, v. giosafatto, j. heick, g. mamone, g. o'connor, r. poms, b. pöpping, a. reuter, f. ulberth, a. watson, l. monaci, e.n. mills, current perspectives and recommendations for the development of mass spectrometry methods for the determination of allergens in foods, j. aoac int. 94 (2011) 1026-1033.
[8] a. cryar, c. pritchard, w. burkitt, m. walker, g. o'connor, d.t. burns, m. quaglia, towards absolute quantification of allergenic proteins in food: lysozyme in wine as a model system for metrologically traceable mass spectrometric methods and certified reference materials, j. aoac int. 96 (2013) 1350-1361.
[9] b. pöpping, s.b. godefroy, allergen detection by mass spectrometry: the new way forward, j. aoac int. 94 (2011) 1005.
[10] g. picariello, g. mamone, f. addeo, p. ferranti, the frontiers of mass spectrometry-based techniques in food allergenomics, j. chromatogr. a 1218 (2011) 7386-7398.
[11] l. monaci, i. losito, f. palmisano, a. visconti, reliable detection of milk allergens in food using a high-resolution, stand-alone mass spectrometer, j. aoac int. 94 (2011) 1034-1042.
[12] j. heick, m. fischer, b. pöpping, first screening method for the simultaneous detection of seven allergens by liquid chromatography mass spectrometry, j. chromatogr. a 1218 (2011) 938-943.
[13] p. lutter, v. parisod, h. weymuth, development and validation of a method for the quantification of milk proteins in food products based on liquid chromatography with mass spectrometric detection, j. aoac int. 94 (2011) 1043-1059.
[14] l. monaci, i. losito, f. palmisano, m. godula, a. visconti, towards the quantification of residual milk allergens in caseinate-fined white wines using hplc coupled with single-stage orbitrap™ mass spectrometry, food addit. contam. 28 (2011) 1304-1314.
[15] l. monaci, e. de angelis, s.l. bavaro, r. pilolli, high resolution-orbitrap™ based mass spectrometry for a prompt detection of peanuts in nut products, food addit. contam. a 32 (2015) 1607-1616.
[16] l. monaci, i. losito, e. de angelis, r. pilolli, a. visconti, multi-allergen quantification of fining-related egg and milk proteins in white wines by high-resolution mass spectrometry, rapid commun. mass spectrom. 27 (2013) 2009-2018.
[17] l. monaci, r. pilolli, e. de angelis, m. godula, a. visconti, multi-allergen detection in food by micro-hplc coupled to a dual-cell linear ion trap mass spectrometer, j. chromatogr. a 1358 (2014) 136-144.
[18] r. pilolli, e. de angelis, m. godula, a. visconti, l. monaci, orbitrap™ monostage ms versus hybrid linear ion trap ms: application to multi-allergen screening in wine, j. mass spectrom. 49 (2014) 1254-1263.
[19] r. pilolli, e. de angelis, l. monaci, speeding-up sample handling for multiplex ms/ms allergens detection in a processed food, submitted to food chemistry.

table 1. summary of the quantitative performances provided by the lc-srm method for selected peptide markers of five allergenic ingredients in cookies. the accession number refers to the on-line uniprot database.

peptide m/z | allergenic food | protein (accession number) | peptide sequence | transitions | retention time (min) | lod (µg/g) | loq (µg/g) | slope | r2
844.4 | egg | ovalbumin (p01012) | gglepinfqtaadqar | y12^2+; y7^+; y10^+ | 13.3±0.1 | 10 | 30 | 536±11 | 0.998
761.9 | egg | ovalbumin (p01012) | ypilpeylqcvk | b4^+; y8^+; y9^+ | 17.5±0.1 | 15 | 51 | 192±6 | 0.995
692.9 | milk | alpha-s1-casein (p02662) | ffvapfpevfgk | y8^+; y9^+; y10^2+ | 22.9±0.1 | 8 | 26 | 4120±70 | 0.998
634.4 | milk | alpha-s1-casein (p02662) | ylgyleqllr | y5^+; y6^+; y8^+ | 21.6±0.1 | 13 | 40 | 2140±60 | 0.997
725.8 | soy | glycinin g1 (p04776) | sqsdnfeyvsfk | [m+2h]^2+ -h2o; y3^+; y10^+ | 12.5±0.1 | 5 | 18 | 1054±13 | 0.990
793.9 | soy | glycinin g1 (p04776) | fylagnqeqeflk | y11^2+; y9^+; y10^+ | 14.8±0.1 | 8 | 27 | 1210±20 | 0.998
815.4 | hazelnut | 11s globulin-like protein (q8w1c2) | alpddvlanafqisr | y7^+; y8^+; y13^2+ | 21.0±0.1 | 5 | 16 | 3330±30 | 0.999
576.3 | hazelnut | 11s globulin-like protein (q8w1c2) | adiyteqvgr | [m+2h]^2+ -h2o; y6^+; y7^+ | 3.5±0.1 | 10 | 32 | 1530±30 | 0.997
786.9 | peanut | conarachin fragment (q6psu3) | vlleenaggeqeer | y12^2+; y7^+; y8^+ | 3.4±0.1 | 8 | 30 | 389±7 | 0.998
564.8 | peanut | conarachin fragment (q6psu3) | gtgnlelvavr | y3^+; y7^+; y6^+ | 9.6±0.1 | 9 | 30 | 526±11 | 0.997

acta imeko, issn: 2221-870x, december 2017, volume 6, number 4, 80-88

simple methods of voltage dip tracking - case study

tomasz tarasiuk
gdynia maritime university, morska 81-87, 81-225 gdynia, poland

section: research paper
keywords: voltage short and long term variations; low-cost measurement; low-pass filtering
citation: tomasz tarasiuk, simple methods of voltage dip tracking - case study, acta imeko, vol. 6, no. 4, article 13, december 2017, identifier: imeko-acta-06 (2017)-04-13
section editor: konrad jedrzejewski, warsaw university of technology, poland
received january 26, 2016; in final form november 15, 2017; published december 2017
copyright: © 2017 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: ministry of science and higher education under grant ds/430/2017
corresponding author: tomasz tarasiuk, e-mail: t.tarasiuk@we.am.gdynia.pl

abstract: the paper presents results of an experimental study of two methods of voltage dip tracking. the first is based on half-cycle absolute peak value monitoring, whereas the second is based on low-pass filtration of squares of voltage samples. both methods are devised for application in low-cost integrated circuits dedicated to power quality monitoring. two real voltage dips have been considered for the aim. the results are compared with the reference method recommended in iec std. 61000-4-30, based on calculation of the r.m.s. voltage refreshed each half cycle. further, the application of the low-pass method for the assessment of small voltage variations is considered, both short term (r.m.s. voltage refreshed each half cycle) and long term (r.m.s. voltage calculated over 10 cycles of the voltage fundamental component). the research confirmed sufficient accuracy of the method based on low-pass filtering for class a measurements.

1. introduction

the problem of power quality in electric power networks is one of the hottest topics in electrical power engineering in recent years. the proliferation of non-linear loads and renewable energy resources has led to notorious voltage disturbances, like voltage and current waveform distortions, voltage dips etc. intelligent metering and monitoring systems are necessary [1] to monitor power flow and various voltage and current parameters in numerous locations throughout the grid. however, proper solutions have to be implemented in order not to increase the cost of the whole infrastructure. in the author's opinion, apart from high-grade power quality analysers, a number of low-cost devices will be used, like instruments based on dedicated integrated circuits with fixed signal processing algorithms, i.e. application specific integrated circuits (asics). fortunately, such low-cost asics are already available "off the shelf", e.g. the analog devices single phase, multifunction metering ic with neutral current measurement ade7953 [2] or the cirrus logic single phase bi-directional power/energy ic cs5461a [3]. they enable the measurement of voltage and current r.m.s. values, power and energy, as well as dip or swell assessment. although they are based on various processing principles, the common feature of these devices is signal processing executed by a fixed function digital signal processor (dsp) [2]. among the applied algorithms, input signal filtration is used for different aims, like elimination of input channel offset, zero-crossing detection, or measurement of the r.m.s. values of the signals and of active power [2]. this paper is focused solely on a singular feature of these ics, namely the capacity for voltage dip or swell detection. there are several methods of dip detection and characterization, starting from simple solutions such as voltage peak value monitoring [4]-[8] or the traditional voltage r.m.s. value calculation [4]-[8], up to more complex solutions based on d-q transformation [4], [9], short time fourier transform [5], wavelet transform [4], [5], [8], [10], [11], kalman filtering [4], [5], [12] or a combination of both wavelet and kalman filtering [5], to name the most popular. comparative studies on the performance of chosen methods can be found e.g. in [6] and [8]. unfortunately, the last group of methods lacks the simplicity required for implementation in simple, low-cost and low-energy-consuming asics, due to complex models and/or the necessity of implementing a number of conditional and branching operations. therefore, simpler solutions are used in asics.
for instance, the manufacturers of the above mentioned asics [2], [3] use the method based on peak value monitoring. simply put, if the magnitude of the instantaneous voltage (its absolute value) is below the pre-defined threshold value, a dip is detected. similarly, a swell is identified if the absolute value of the instantaneous voltage is above the pre-defined threshold. the method is simple and easy to implement in asics. but the downside of the solution is the notorious impact of noise and distortions of the voltage on the outcome of the dip or swell identification [4], [8]. this can be avoided by superseding this method with an equally simple one based on low-pass filtering of the squares of the input voltage samples, like the method already used by some manufacturers for the r.m.s. value of voltage and current measurements [2], although not used for dip and swell identification. therefore, the paper's aim is to explore the possibility of using this solution in its simplest form, which can be easily applied in asics, for dip and swell identification. it is compared with the above mentioned method based on monitoring the absolute value of the instantaneous voltage. both solutions are evaluated against the reference measurement procedure laid down in iec standard 61000-4-30 for class a measurements, based on the measurement of the r.m.s. voltage over 1 cycle, commencing at a fundamental zero crossing, and refreshed each half-cycle, urms(1/2) [13]. this is recommended for voltage dip, swell and interruption detection and evaluation [14]. it has to be added that some documents use the term sag as a synonym for the term dip [2], [3], [14]. but since the term dip is used by iec, it will be used consistently throughout this paper. finally, the paper presents a comparative study of the performance of these two methods. the first method is based on monitoring the voltage peak value. the second method is based on the r.m.s. voltage value determined by low-pass filtering. because the idea behind the paper is to use real cases for the assessment of the methods' performance, results obtained by both solutions are compared against quantity values provided by the reference measurement procedure to assess the measurement trueness of the measured quantity values obtained from the analysed procedures [15]. the reference procedure is described in detail in section 2. this section also includes basic information about limitations of the standard procedure. all subsequent analyses of experimental data are based on two voltage dips registered in real microgrids, namely the network of a sensitive data centre during its island operation mode, as well as a marine microgrid, the network of a ferry, during switch-on of a high power electric motor (for driving the bow thruster). further, another marine network with high distortion (voltage thd = 11.8 %) and voltage and frequency modulation was considered for determining the method's performance for tracking short-term voltage variations.
the paper is an extended and updated version of the work presented during the xxi imeko world congress, 2015, prague, czech republic [16].

2. standard framework - reference measurement procedure

the voltage dip is defined as a sudden decrease in the r.m.s. value of the supply voltage to a value between 10 % and 90 % of the declared voltage (in another standard it is 1 % to 90 % [17]) for durations from 0.5 cycles to 1 min [14]. a voltage dip is to be described by a pair of data: residual voltage (sometimes the dip's depth) and its duration [13]. the residual voltage is the lowest r.m.s. value of the voltage during the considered event, whereas the duration is the time difference between the start time (falling of the r.m.s. voltage below the dip threshold) and the end time (increase of the r.m.s. voltage above the dip threshold plus a hysteresis, typically equal to 2 % of the declared voltage) [13]. in some cases, the voltage dip is followed by a voltage peak (small swell), e.g. during an asynchronous motor start-up in a microgrid [18]. typically, a voltage swell is monitored by the same methods as a voltage dip, e.g. voltage instantaneous values [2] or r.m.s. values [13]. it was mentioned above that for class a measurements the considered r.m.s. voltage should be calculated over 1 cycle, commencing at a fundamental zero crossing, and refreshed each half-cycle. it is designated as urms(1/2) [13] and it includes all components: harmonics, interharmonics, etc. [13]. it has to be added that a class s has been defined in the iec standard 61000-4-30 as well. for this class the dip assessment is to be carried out in a similar way to that described above, but the voltage r.m.s. value can be refreshed each cycle. manufacturers of measuring instruments should specify which method is used [13]. finally, the r.m.s. value is to be calculated as the square root of the mean value of the squares of the voltage samples registered over the considered time interval, as required in iec standard 61557-12 [19]. the evaluation of a real dip carried out by the method devised for class a is used in this paper as the reference measurement procedure for the evaluation of the other methods, including whole voltage shape assessment during the considered processes, namely bulk load start-up in the two investigated microgrids and the resulting dips. next, it is compared with the same shape determined by the other simple method under investigation, based on monitoring the voltage peak value. it must be firmly stressed that the standard approach is contested by some authors. for instance, in [6], a detailed analysis of the standard method's performance for short dips (duration below two cycles) was presented. the authors of [6] pointed out the ambiguity in determining the dip parameters, both duration and residual voltage, by the standard solution for such short duration phenomena. this is chiefly related to synchronization based on fundamental zero crossing [6], [8]. a proposed solution can be based on determining the rms value of the voltage over a one-cycle sliding window and computing the rms value at every sampling point [6], [7]. however, the solution entails a higher computational burden for ensuring that the window length fits the actual fundamental period [6]. another frequently investigated solution is tracking the energy of the band 0-100 hz after wavelet transformation for detection of short duration dips [10], but this also entails an increase in computational complexity.
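for reference, the class a procedure just described can be prototyped in a few lines; a minimal sketch follows. to stay short it anchors the one-cycle window on the nominal frequency instead of on measured fundamental zero crossings, which is an idealization of the synchronization the standard actually requires; the threshold and hysteresis defaults reproduce the 90 % / 2 % values quoted above.

```python
import numpy as np

def urms_half_cycle(u, fs, f0=50.0):
    """u_rms(1/2): one-cycle rms, refreshed every half cycle (iec 61000-4-30
    style), with the window anchored on the nominal frequency f0."""
    n_cycle = int(round(fs / f0))        # samples per fundamental cycle
    n_half = n_cycle // 2                # refresh step of half a cycle
    starts = range(0, len(u) - n_cycle + 1, n_half)
    return np.array([np.sqrt(np.mean(u[s:s + n_cycle] ** 2)) for s in starts])

def characterize_dip(urms, u_declared, threshold=0.90, hysteresis=0.02):
    """return the pair (residual voltage, duration in half cycles) of the
    first dip: start when u_rms(1/2) falls below threshold*u_declared, end
    when it recovers above (threshold + hysteresis)*u_declared."""
    below = np.flatnonzero(urms < threshold * u_declared)
    if below.size == 0:
        return None                      # no dip detected
    start = below[0]
    recovered = np.flatnonzero(urms[start:] > (threshold + hysteresis) * u_declared)
    end = start + (recovered[0] if recovered.size else urms.size - start)
    return float(urms[start:end].min()), int(end - start)
```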
next, the standard method has another limitation, namely it "does not provide information about the phase angle of voltage supply during the event, which could be of interest for some applications" [8]. reference [20] lists seven dip characterization methods, including four which require phase angle information. despite the above mentioned drawbacks of the standard method, it arguably remains one of the most popular due to its simplicity and sufficient performance for a number of most common applications.

3. the methods under investigation

notwithstanding its simplicity, the application of the above described standard procedure for dip detection and evaluation can be inconvenient in simple low-cost devices, due to its requirements on the hardware resources of the measuring instrument. in shorthand, it requires storing of voltage samples for at least one cycle, conditional operations, some data address generation, etc. obviously, it can be easily implemented in digital signal processors (dsps), but not necessarily in low-cost dedicated asics. therefore, manufacturers of low-cost measuring devices implement other principles of dip detection and evaluation [2], [3]. this is permissible for class b measurements if the manufacturer specifies the method used for the aim [13]. so, the paper explores the performance of the two arguably simplest methods of voltage dip detection and characterization, which can be used for less demanding applications and are both easy to apply in asics.

3.1. dip measurement based on voltage peak value tracking

arguably, the simplest solution of the considered problem is detection of the time instants when the absolute value of the voltage falls below a programmable threshold [2], [3]. this feature is easy to implement in asics. simply, the dedicated peak register is updated with a new instantaneous voltage value every time the absolute value of the voltage sample exceeds the value already stored in the register. the register can be cleared after reading [2], which can be synchronized with the voltage fundamental component zero-crossing in order to determine the end of each half-cycle. this feature enables continuous recording of the maximum value of the voltage waveform for each half cycle. apart from its simplicity, the obvious disadvantage of the solution is the above mentioned possible impact of voltage distortions or noise [4], [8], since for distorted signals one cannot infer the r.m.s. value from the maximum value of the voltage waveform alone. it will be shown below that even in the case of low distortion, the results of dip analysis by this method can differ noticeably from the reference method.

3.2. dip measurement based on low-pass filtering of squares of voltage samples

the standard dip assessment method is based on the determination of the voltage r.m.s. value over one cycle. it can be easily noted that

$$ u_{rms} = \sqrt{\frac{1}{n}\sum_{k=0}^{n-1} u_k^2} \approx \sqrt{\mathrm{lpf}(u_k^2)} \qquad (1) $$

where n is the number of voltage samples recorded over an integer number of cycles (in fact one cycle for dip assessment), uk is the voltage sample, and lpf is a low-pass filter. the solution consists in superseding the mean filter with n coefficients (each with a value of 1/n) by another lpf filter. since n varies depending on the instantaneous frequency, it seems much easier to implement in asics a low-pass filter with a constant number of coefficients.
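both methods of section 3 are equally easy to prototype offline. the sketch below implements the low-pass filtering of squared samples from equation (1) with a third-order butterworth filter (the type and the 17 hz cut-off anticipate the best-performing settings reported in section 4) and, for comparison, the per-half-cycle peak register of section 3.1; the sampling rate and the synthetic test signal are hypothetical.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def ulpf_track(u, fs, f_cut=17.0, order=3):
    """eq. (1): momentary r.m.s. as the square root of low-pass filtered
    squares of the voltage samples."""
    sos = butter(order, f_cut, btype="low", fs=fs, output="sos")
    return np.sqrt(np.clip(sosfilt(sos, u ** 2), 0.0, None))

def uabs_track(u, n_half):
    """section 3.1: per half cycle (n_half samples), record max|u| in the
    peak register and divide by sqrt(2) to express it as an r.m.s. value."""
    m = len(u) // n_half
    return np.abs(u[:m * n_half]).reshape(m, n_half).max(axis=1) / np.sqrt(2)

# usage: 230 v, 50 hz synthetic signal with a dip to 85 % between 0.2 s and 0.4 s
fs = 10_000
t = np.arange(0.0, 1.0, 1.0 / fs)
amp = 230.0 * np.sqrt(2) * np.where((t > 0.2) & (t < 0.4), 0.85, 1.0)
u = amp * np.sin(2 * np.pi * 50.0 * t)
print(ulpf_track(u, fs)[::1000].round(1))     # one r.m.s. reading per 0.1 s
print(uabs_track(u, fs // 100)[:6].round(1))  # first six half-cycle peaks
```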
because of the ripples of the lpf output, the reading should be synchronised with the voltage fundamental component zero-crossing. the latter is determined after low-pass filtering of the voltage samples by another lpf. the algorithm [2], after some modifications, is shown in figure 1. it should be added that this solution is applied in the ade7953 [2] for the r.m.s. measurement, but not for dip detection and evaluation. the signal processing path shown in figure 1 can be used both for measuring the r.m.s. value of the voltage during steady state and for dip monitoring. the reading is to be carried out after each zero-crossing of the voltage fundamental component, similarly to the case of the urms(1/2) measurement. this value is used directly for dip or swell detection and evaluation. namely, it is to be compared with the assumed threshold level, once again as in the case of the standard method. in order to obtain a 10-cycle rms voltage, accumulation of urms(1/2) values and averaging is necessary. the block "delay" shown in figure 1 is to account for the lpf1 and lpf2 characteristics (group delays of both filters). it means shifting the samples of the lpf2 output. the delay depends on the actual power frequency, so adaptation of the shift is to be performed. a simpler solution is to use a constant sample shift related to the rated power frequency, but this can impact the accuracy of the proposed solution if used in islanded microgrids, due to possible power frequency changes.

figure 1. block diagram of the signal processing path for the measurement of r.m.s. voltage, including dip detection and evaluation.

4. results of experimental research

real examples are used for this paper's purpose. so, the research consisted in voltage sample registration in real networks and subsequent processing by various signal processing methods. a national instruments controller pxie-8106 equipped with two data acquisition boards pxie-6124 was used for voltage sample recording in the office building. the analog input channel consisted of cv3-1500 lem voltage transducers and ltc 1564 anti-aliasing filters. the cut-off frequency of the anti-aliasing filters was equal to 10 khz. in the case of the marine systems, a pci703-16/a eagle technology data acquisition board, a resistive voltage divider and iso 124 isolation amplifiers of burr brown were used. the cut-off frequency of the anti-aliasing filter was equal to 3.5 khz. finally, the momentary voltages for dip detection are calculated by both investigated methods. the results obtained by analysis of the voltage local peak values (as used in [2] and [3]) are designated as uabs. the recorded absolute values of the voltage waveform are divided by √2 in order to obtain the voltage r.m.s. value and subsequently compared with the reference method. the results calculated by squaring the voltage samples and low-pass filtering the result are designated as ulpf; they represent the momentary r.m.s. voltage values. for the paper's purpose, a third-order butterworth filter is used with various cut-off frequencies. it has been mentioned (see figure 1) that, in order to diminish the effect of ripples of the filter output, its reading is to be synchronised with the fundamental component zero-crossing. the zero-crossing of the fundamental component of the voltage is determined after low-pass filtering by another third-order butterworth filter, but with a higher cut-off frequency, equal to 80 hz.
such a solution is recommended in the iec 61000-4-30 standard for diminishing the impact of higher frequency components. the cut-off frequency is chosen after [2]. to compare the differences between the considered methods and the reference method, the square root of the mean value of the squares of the differences is calculated by:

$$ sqr(diff) = \sqrt{\frac{1}{n}\sum_{k=0}^{n-1}\left(u_{meth,k} - u_{rms(1/2),k}\right)^2} \qquad (2) $$

where umeth is the method under investigation (uabs or ulpf) and n is the number of considered half cycles for the dip assessment. it is assumed that n = 146, starting at the dip beginning.

4.1. dip in emergency power system of office building

the voltage recording took place in the network of an office building that contained a very important data centre, sensitive to power quality disturbances. in particular, interruptions can lead to severe consequences. therefore, it was equipped with two upss and two generators driven by diesel engines for power backup. the network rated voltage was equal to 230 v and the rated frequency to 50 hz. the whole research was carried out during the object's island operation mode, due to suspicion of power quality problems during this mode. various parameters of voltages and currents were analysed at various points of the system. during the investigation the process of switching on the bulk load was recorded. the waveform of the recorded voltage is presented in figure 2. it is easily discernible that the process of switching the bulk load on in the power network of the data centre causes a voltage dip down to 85 % of the rated voltage, followed by a small voltage swell. it is only 101.6 % of the rated voltage, but approximately 5 % above the registered mean steady-state voltage, which was approximately equal to 96.8 % of the rated voltage (230 v). nevertheless, this phenomenon is considered as well, in order to properly assess the methods under investigation. the details of urms(1/2) calculated according to the iec 61000-4-30 standard and the resulting voltage shape are shown in figure 3. obviously, the processes of switching bulk loads on in microgrids cause voltage changes and concurrent momentary frequency changes. for the considered example, the lowest momentary frequency, understood as the reciprocal of the fundamental cycle, is equal to 48.61 hz, followed by a frequency increase up to 51.51 hz. typically, standards related to marine microgrids, e.g. [21], [22], deal with this phenomenon, but it has not been analysed for the paper's aim. however, both investigated methods, which depend to some extent on the fundamental component zero-crossing, enable concurrent assessment of momentary frequency changes, including their value and duration. in fact, assessment of rapid power frequency changes in marine systems is similar to the case of voltage dip assessment [21], [22]. finally, the results of the calculation of parameter pairs for the dip assessment (residual voltage umin and duration), completed by the r.m.s. value of the mini-swell voltage umax that followed the considered dip, are shown in table 1. moreover, results of determining sqr(diff) for each of the methods are presented in the table as well. analysis of the results in table 1 leads to the conclusion that dip shape assessment with filters with cut-off frequencies from 15 hz up to 19 hz leads to comparable results. the observed minor differences can be neglected. the lowest sqr(diff) value is obtained for a filter with a cut-off frequency of 17 hz. the analysis confirms that the uabs method leads to worse results than the method based on low-pass filtering of the squares of voltage samples for most of the used filters. next, the capability of tracking the voltage shape by each of the methods is assessed. it is carried out by determining the differences between ulpf and urms(1/2) (they are graphically presented in figure 4 for lpf2 with cut-off frequency equal to 17 hz) and the differences between uabs and urms(1/2). the results of the comparison are shown in figure 5. sqr(diff) values for these examples are given in table 1 as well.

figure 2. voltage waveform during switching the bulk load on in the power network of the data centre during island operation.

figure 3. variations of the urms(1/2) value (reference method) during switching the bulk load on in the power network of the data centre during island operation; residual voltage equals 195.86 v (85.16 % of urated), dip duration 18 half cycles, swell equals 233.58 v.
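the comparison metric of equation (2) is a plain root-mean-square deviation over the half-cycle series; a minimal sketch, reusing the hypothetical tracking functions from the previous listing, follows.

```python
import numpy as np

def sqr_diff(u_meth, u_ref):
    """eq. (2): rms deviation between the half-cycle readings of a method
    under test (uabs or ulpf) and the reference urms(1/2) series; the paper
    uses n = 146 half cycles starting at the dip beginning."""
    u_meth = np.asarray(u_meth, dtype=float)
    u_ref = np.asarray(u_ref, dtype=float)
    n = min(u_meth.size, u_ref.size)
    return float(np.sqrt(np.mean((u_meth[:n] - u_ref[:n]) ** 2)))
```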
the analysis confirms that the uabs method leads to worse results than the method based on low-pass filtering of the squares of voltage samples for most of the used filters. next, the capability of tracking the voltage shape for each of the methods is assessed. it is carried out by determining the differences between ulpf and urms(1/2) (they are graphically presented in figure 4 for lpf2 with cut-off frequency equal to 17 hz) and the differences between uabs and urms(1/2). the results of comparison are shown in figure 5. sqr(diff) values for these examples are given in table 1 as well. figure 2. voltage waveform during switching bulk load on in a power network of data centre during island operation. figure 3. variations of urms(1/2) value (reference method) during switching bulk load on in a power network of a data centre during island operation; residue voltage equals 195.86 v (85.16 % of urated), dip duration 18 half cycles, swell equals 233.58 v. -400 -300 -200 -100 0 100 200 300 400 time [ms] u( t) [v ] 800 0 190 200 210 220 230 240 time [s] u rm s( 1/ 2) [v ] 4 0 acta imeko | www.imeko.org december 2017 | volume 6 | number 4 | 84 comparison of figures 4 and 5 once again reveals clear superiority of the first solution, i.e. ulpf. the difference between the considered method and the reference method does not exceed 1.2 v. maximal values are observed for the dip beginning. similar differences for the uabs method reach values above 4 v. the reason is distortion of the investigated voltage, although with a relatively low level. the value of the voltage thd was equal to 3.13 % prior to the considered dip and 3.26 % after the dip. it can be noted that even this small increase leads to a significant increase in differences between uabs and urms(1/2) (see figure 5). the differences increase nearly two times under steady-state conditions: before and after dip. the main reason is the increase of the 5th harmonic content by 0.19 % of the fundamental component, completed by a phase shift of some harmonics in relation to the fundamental component after the dip and a resulting increase in the voltage maximal instantaneous value. 4.2. dip in power system of a ferry the second example concerns the dip caused by switching on the high power electric motor with rated power 1.72 mw on board a ferry with a 6.6 kv 60 hz system. the motor start prior to the ship manoeuvring has led to a severe voltage dip, namely the voltage dropped below the permissible limit -20 % of the rated voltage [21], [22]. similar as in the previous example, the dip parameters are determined by the two investigated methods. the waveform of the recorded voltage is presented in figure 6 and the details of urms(1/2) calculated according to the iec 61000-4-30 standard and the resulting voltage shapes are shown in figure 7. the calculation results of the parameter pairs for the dip assessment (residue voltage umin and duration), completed by the r.m.s. voltage of the mini-swell umax that followed the considered dip are shown in table 2 and the differences between ulpf and urms(1/2) and the differences between uabs and urms(1/2) are given in figures 8 and 9, respectively once again, analysis of the above results leads to the conclusion that application of lpf filters with cut-off frequencies 12-20 hz gives satisfactory results. the solution superiority over uabs method is clearly visible, despite the fact that for the ferry voltage thd hardly exceeded 1.2 %. table 1. 
table 1. calculation results of dip (duration and umin) and swell (umax) parameters, completed by sqr(diff).

fcutoff | duration [half cycles] | umin [v] | umin [%] | umax [v] | sqr(diff) [v]
urms(1/2) | 18 | 195.86 | 85.16 | 233.58 | --
uabs | 17 | 197.60 | 85.91 | 236.47 | 3.26
ulpf, 1 hz | -- | 212.87 | 92.55 | 226.99 | 6.23
ulpf, 2 hz | 15 | 203.64 | 88.52 | 228.20 | 3.51
ulpf, 3 hz | 17 | 198.28 | 86.20 | 232.53 | 1.72
ulpf, 4 hz | 17 | 196.26 | 85.33 | 233.46 | 1.13
ulpf, 5 hz | 17 | 195.20 | 84.87 | 233.68 | 0.88
ulpf, 6 hz | 17 | 194.99 | 84.77 | 233.69 | 0.85
ulpf, 7 hz | 18 | 194.96 | 84.76 | 233.72 | 0.68
ulpf, 8 hz | 17 | 195.16 | 84.85 | 233.71 | 0.55
ulpf, 9 hz | 18 | 195.43 | 84.97 | 233.66 | 0.61
ulpf, 10 hz | 17 | 195.69 | 85.08 | 233.69 | 0.53
ulpf, 11 hz | 18 | 195.87 | 85.16 | 233.67 | 0.36
ulpf, 12 hz | 18 | 195.98 | 85.20 | 233.66 | 0.45
ulpf, 13 hz | 18 | 196.00 | 85.20 | 233.67 | 0.64
ulpf, 14 hz | 18 | 195.93 | 85.17 | 233.67 | 0.51
ulpf, 15 hz | 18 | 195.87 | 85.16 | 233.65 | 0.35
ulpf, 16 hz | 18 | 195.83 | 85.14 | 233.62 | 0.23
ulpf, 17 hz | 18 | 195.82 | 85.14 | 233.59 | 0.20
ulpf, 18 hz | 18 | 195.83 | 85.14 | 233.58 | 0.25
ulpf, 19 hz | 18 | 195.87 | 85.16 | 233.59 | 0.34
ulpf, 20 hz | 18 | 195.87 | 85.16 | 233.59 | 0.88

figure 4. differences between the r.m.s. value of the voltage calculated by filtering of squared samples, ulpf (lpf2, cut-off frequency = 17 hz), and the reference method urms(1/2).

figure 5. differences between the r.m.s. value of the voltage calculated on the basis of the absolute peak value of the voltage, uabs, and the reference urms(1/2).

figure 6. voltage waveform during switch-on of the thruster in the power network of the ferry.

4.3. voltage variations tracking in the system with high level of voltage distortion

the voltage registered on board a chemical tanker was chosen for the assessment of the performance under significantly distorted conditions. the vessel is equipped with a shaft generator working on the main bus bars via a power converter in order to obtain a constant frequency. the system rated voltage was 440 v and the rated frequency 60 hz. the voltage waveform registered in the system of the chemical tanker is shown in figure 10. for the assessment, the measurement results of the voltage modulation obtained by low-pass filtering, ulpf, and by the urms(1/2) method were compared, to exemplify the capability of tracking the voltage variations by the ulpf method. it must be stressed that the considered case is very hard to analyse because of the concurrent presence of significant waveform distortions as well as fundamental voltage and frequency modulations, shown in figure 11. the instantaneous frequency and voltage for figure 11 are calculated by a zoom-dft with a kaiser window, refreshed every 0.5 ms and with a frequency resolution of 0.001 hz. for this case, a similar modulation was observed for the harmonics, e.g. the r.m.s. of the 5th harmonic varied between 15 v and 25 v.

figure 8. differences between the r.m.s. value of the voltage calculated by filtering of squared samples, ulpf (lpf2, cut-off frequency = 17 hz), and the reference method urms(1/2).

figure 9. differences between the r.m.s. value of the voltage calculated on the basis of the absolute peak value of the voltage, uabs, and the reference urms(1/2).

figure 10. voltage waveform in a marine system with a shaft generator working via a power converter, thd = 11.8 %.

figure 7. variations of urms(1/2) (reference method) during switching of the thruster onboard the ferry; residual voltage equals 5015 v (75.98 % of urated), dip duration equals 51 half cycles, swell equals 7083.1 v.

table 2. calculation results of dip (duration and umin) and swell (umax) parameters, completed by sqr(diff).
fcutoff | duration [half cycles] | umin [v] | umin [%] | umax [v] | sqr(diff) [v]
urms(1/2) | 51 | 5015.0 | 75.98 | 7083.1 | --
uabs | 49 | 5211.6 | 78.96 | 7299.6 | 170.03
ulpf, 1 hz | 52 | 5554.2 | 84.15 | 6671.4 | 326.89
ulpf, 2 hz | 48 | 4990.2 | 75.61 | 6958.7 | 146.99
ulpf, 3 hz | 49 | 4884.4 | 74.01 | 7083.0 | 108.95
ulpf, 4 hz | 49 | 4891.1 | 74.11 | 7105.6 | 87.96
ulpf, 5 hz | 50 | 4918.0 | 74.52 | 7111.5 | 76.13
ulpf, 6 hz | 50 | 4926.7 | 74.65 | 7109.9 | 64.24
ulpf, 7 hz | 50 | 4929.8 | 74.69 | 7105.7 | 55.97
ulpf, 8 hz | 50 | 4938.7 | 74.83 | 7100.5 | 51.63
ulpf, 9 hz | 51 | 4953.1 | 75.05 | 7093.8 | 46.56
ulpf, 10 hz | 52 | 4974.3 | 75.38 | 7086.9 | 42.23
ulpf, 11 hz | 51 | 4993.7 | 75.66 | 7080.4 | 38.26
ulpf, 12 hz | 51 | 5016.9 | 76.01 | 7074.7 | 42.65
ulpf, 13 hz | 51 | 5015.5 | 75.99 | 7083.1 | 33.88
ulpf, 14 hz | 51 | 5014.9 | 75.98 | 7083.1 | 28.86
ulpf, 15 hz | 51 | 5013.0 | 75.95 | 7079.6 | 28.70
ulpf, 16 hz | 52 | 5012.8 | 75.95 | 7080.8 | 32.15
ulpf, 17 hz | 51 | 5014.3 | 75.97 | 7082.0 | 38.19
ulpf, 18 hz | 51 | 5013.6 | 75.96 | 7083.3 | 28.18
ulpf, 19 hz | 51 | 5013.6 | 75.96 | 7084.2 | 22.58
ulpf, 20 hz | 51 | 5014.8 | 75.98 | 7085.9 | 18.43

the two voltage tracking methods are graphically compared in figure 12 for a steady-state condition; the assumed cut-off frequency of the lpf2 filter is equal to 17 hz. analysis of figure 12 once again indirectly proves that the ulpf method is appropriate for the considered aim, namely tracking of short-term voltage variations, including voltage dip or swell detection and evaluation, even in the case of heavily distorted signals. finally, it has to be added that the r.m.s. value of the voltage over a 10- or 12-cycle time interval (depending on the power system's rated frequency) should be calculated for the assessment of the magnitude of the supply voltage (apart from dip and swell detection) [13]. because it would be unwise to design a separate signal processing path for the 10- or 12-cycle r.m.s. value determination, it seems that the simplest solution is to determine the average value of the readings of the lpf2 filter over the specified number of cycles; this can easily be implemented in ics. to assess the accuracy of this solution, the average values of twenty or twenty-four consecutive readings of the lpf2 filter with a cut-off frequency equal to 17 hz are determined for the office building (50 hz system) and for the chemical tanker (60 hz system), respectively. they are graphically compared (figure 13) with the respective r.m.s. values determined for the very same time interval (exactly 10 cycles) by the reference method. the analysis of the results presented in figures 4, 8, 12 and 13 clearly indicates that the same signal processing path can easily be used for assessing voltage dips and swells as well as for determining the 10/12-cycle voltage magnitude after simple averaging. for example, the maximum difference between the reference 10/12-cycle r.m.s. value and the respective value determined by averaging the lpf2 readings (cut-off frequency 17 hz) is below 0.02 v for the case presented in figure 13(a) and below 0.5 v for the case presented in figure 13(b). the value of sqr(diff) is below 0.01 v for the former case and 0.3 v for the latter.

figure 11. fluctuations of the voltage (a) and its frequency (b) on board the chemical tanker, determined by zoom-dft.

figure 12. comparison of the tracking capabilities of short-term small voltage variations by the ulpf method and the urms(1/2) method.

figure 13. comparison of the reference r.m.s. value calculated over 10 or 12 cycles of the input voltage and the respective value of averaged lpf2 readings over the same time interval; (a) emergency supply of the office building, (b) bow thruster subsystem with non-linear load.
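a sketch of the proposed 10/12-cycle magnitude estimation by simple averaging of the lpf2 readings (python; the reading counts follow the text, the function name is ours):

```python
import numpy as np

def magnitude_10_12_cycles(lpf2_readings, f0=50.0):
    """10/12-cycle voltage magnitude from consecutive half-cycle readings of
    the lpf2 output: 20 readings for a 50 hz system (10 cycles) and 24 for a
    60 hz system (12 cycles), averaged over non-overlapping blocks."""
    n = 20 if abs(f0 - 50.0) < 1.0 else 24
    usable = (len(lpf2_readings) // n) * n
    return np.reshape(np.asarray(lpf2_readings)[:usable], (-1, n)).mean(axis=1)
```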
5. conclusions

the paper's aim was the investigation of some simple algorithms for dip and swell detection and evaluation, easily applicable in low-cost ics dedicated to multifunction measurement of electricity parameters. it was proved that the method of signal processing based on low-pass filtering of the squared input voltage samples is more suitable for this aim than the commonly used method based on the absolute value of the momentary voltage peak. in the latter case (the uabs method), a significant impact of voltage distortions on the measurement accuracy can be observed. this is true even for slightly distorted signals, which are the norm rather than the exception in today's power systems. therefore, using the method based on low-pass filtering is arguably the better solution. since it is already used for r.m.s. value estimation by the same ics, it requires only some design modification, but special attention has to be paid to the cut-off frequency and group delay of the implemented lpfs. the used cut-off frequency has to be increased in comparison with that currently used in ics [2], which inevitably leads to an increase in the ripple of the filter output. fortunately, the impact of the ripple is limited if synchronisation of the readings with the voltage zero crossings is implemented. the resulting overall accuracy is better than in the case of the hitherto used solution based on tracking the absolute instantaneous value of the input voltage. moreover, the readings of the used lpf2 filters can be used for determining the 10-cycle voltage magnitude with good accuracy after simple averaging. this simplifies the ic design, since the same signal processing path can be used for both aims, dip detection and the 10-cycle voltage magnitude measurement; currently these are handled by two separate signal processing paths in dedicated ics with fixed dsp. it has to be added that the solution based on low-pass filtering was devised for low-cost and low-power-consumption applications, with accuracy sufficient for trouble-shooting applications as defined in the iec standard 61000-4-30 [13]. its application for contractual purposes will require additional research, particularly for other voltage shapes during dips. next, the cut-off frequency of the applied lpf2 filter has to be carefully chosen, since it can affect the uncertainty of the method. if it is too low, the response time increases and the assessed residual voltage is higher than the real one, which can affect the accuracy of determining the residual voltage during very short dips. however, a too high cut-off frequency leads to an increase of the ripple of the filter output and an increase in the measurement uncertainty, since reading exactly at the moments of the zero crossings is hardly possible. in fact, the results are affected by the characteristics, in particular the group delays, of both lpf1 and lpf2. their correction can assume only discrete values with a resolution equal to the sampling period, since the delay correction means shifting samples of the lpf2 output. so, the accuracy of the correction of the group delay for the respective filters is in the range of ±0.5 ts (ts - sampling period, which for the considered research equals 40 µs for the office building and 95 µs for the two later cases, namely the marine microgrids). the remaining factors influencing the measurement uncertainty are the same as those considered for typical performance monitoring and measuring devices, discussed in the standard [19].
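the group delay correction mentioned above can be illustrated as follows (python/scipy; the sampling rate corresponds to the ts = 40 µs case of the office building, and the residual after rounding to whole samples is the ±0.5 ts error quoted in the text):

```python
import numpy as np
from scipy.signal import butter, group_delay

def lpf2_delay_correction(f_cut=17.0, fs=25000.0, f0=50.0):
    """group delay of the 2nd-order butterworth lpf2 at the fundamental
    frequency, rounded to a whole number of samples; the lpf2 output is then
    shifted by this count, leaving a residual error within +/-0.5 ts."""
    b, a = butter(2, f_cut / (fs / 2.0))
    _, gd = group_delay((b, a), w=[2.0 * np.pi * f0 / fs])  # delay in samples
    shift = int(round(gd[0]))
    return shift, gd[0] - shift      # samples to shift, residual fraction
```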
particularly, a limitation of the method based on low-pass filtering is the necessity of synchronising the readings with the fundamental zero crossings. however, this is also a limitation of the solution based on the peak value, which can be determined only once every half cycle. nevertheless, the research carried out for two various dip cases, completed by research on modulated, significantly distorted voltages (the network of the chemical tanker), confirmed the good accuracy of the proposed solution, even under such unfavourable circumstances. the maximum difference between the reference method (for a 12-cycle time interval) and the described proposal hardly exceeded 0.11 % of the rated voltage. summing up, in the author's opinion, the proposed solution based on simple low-pass filtering gives acceptable accuracy for less demanding applications and can be used for everyday monitoring of power system performance. however, for more demanding or legal purposes, other, more complex solutions, for instance those mentioned in the introduction, should be considered.

references

[1] european commission, "communication from the commission to the european parliament, the council, the european economic and social committee and the committee of the regions, smart grids: from innovation to deployment", com(2011) 202 final, brussels, 12.4.2011.
[2] analog devices, "single phase, multifunction metering ic with neutral current measurement ade7953", data sheet, 2011-2013.
[3] cirrus logic, "single phase bi-directional power/energy ic cs5461a", data sheet, 2011.
[4] a. khoshkbar sadigh, k. m. smedley, "fast and precise voltage sag detection method for dynamic voltage restorer (dvr) application", electric power systems research, vol. 130, 2016, pp. 192-207.
[5] e. perez, j. barros, "a proposal for on-line detection and classification of voltage events in power systems", ieee trans. on power delivery, vol. 23, no. 4, 2008, pp. 2132-2138.
[6] d. gallo, c. landi, m. luiso, e. fiorucci, "survey on voltage dip measurements in standard framework", ieee trans. on instrumentation and measurement, vol. 63, no. 2, 2014, pp. 374-387.
[7] m. bollen, i. gu, "signal processing of power quality disturbances", new york, usa, wiley, 2006.
[8] a. moschitta, p. carbone, c. muscas, "performance comparison of advanced techniques for voltage dip detection", ieee trans. on instrumentation and measurement, vol. 61, no. 5, 2012, pp. 1494-1502.
[9] o. hua, b. le-ping, y. zhong-lin, "voltage sag detection based on dq transform and mathematical morphology filter", procedia engineering, vol. 23, 2011, pp. 775-779.
[10] m. apraiz, j. barros, r. diego, "a real-time method for time-frequency detection of transient disturbances in voltage supply systems", electric power systems research, vol. 108, 2014, pp. 103-112.
[11] j. decanini, m. tonelli-neto, f. malange, c. minussi, "detection and classification of voltage disturbances using a fuzzy-artmap-wavelet network", electric power systems research, vol. 81, 2011, pp. 2057-2065.
[12] e. pérez, j. barros, "an extended kalman filtering approach for detection and analysis of voltage dips in power systems", electric power systems research, vol. 78, 2008, pp. 618-625.
[13] iec standard 61000-4-30, "testing and measurement techniques - power quality measurement methods", 2015.
[14] ieee standard 1159-2009, "ieee recommended practice for monitoring electric power quality", 2009.
[15] joint committee for guides in metrology, "international vocabulary of metrology - basic and general concepts and associated terms", 3rd edition, jcgm 200:2012.
[16] t. tarasiuk, "comparative study on chosen methods of voltage dip tracking based on real example", xxi imeko world congress "measurement in research and industry", august 30 - september 4, 2015, prague, czech republic.
[17] en standard 50160, "voltage characteristics of electricity supplied by public distribution networks", 2007.
[18] a. vicenzutti, d. bosich, g. giadrossii, g. sulligoi, "the role of voltage controls in modern all-electric ships", ieee electrification magazine, vol. 3, no. 2, 2015, pp. 49-65.
[19] iec standard 61557-12, "electrical safety in low voltage distribution systems up to 1000 v a.c. and 1500 v d.c. - equipment for testing, measuring or monitoring of protective measures - part 12: performance measuring and monitoring devices (pmd)", 2007.
[20] y. wang, m. bollen, a. bagheri, x. xiao, m. olofsson, "a quantitative comparison approach for different voltage dip characterization methods", electric power systems research, vol. 133, 2016, pp. 182-190.
[21] det norske veritas, "rules for classification of ships / high speed, light craft and naval surface craft. electrical installations", part 4, chapter 8, 2011.
[22] ieee std. 45-2002, "ieee recommended practice for electrical installations on shipboard", 2002.

simple methods of voltage dip tracking - case study
acta imeko issn: 2221-870x april 2016, volume 5, number 1, 45-50

acta imeko | www.imeko.org april 2016 | volume 5 | number 1 | 45

rapid electrochemical screening methods for food safety and quality

danila moscone, giulia volpe, fabiana arduini, laura micheli
chemical sciences and technologies department, university of rome "tor vergata", via della ricerca scientifica, 00133 rome, italy

section: research paper
keywords: food safety; electrochemical screen printed sensors; biosensors; arsenic; pesticides; salmonella; hav
citation: danila moscone, giulia volpe, fabiana arduini, laura micheli, rapid electrochemical screening methods for food safety and quality, acta imeko, vol. 5, no. 1, article 9, april 2016, identifier: imeko-acta-5 (2016)-1-09
section editor: claudia zoani, italian national agency for new technologies, energy and sustainable economic development, rome, italy
received july 29, 2015; in final form december 18, 2015; published april 2016
copyright: © 2016 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: danila moscone, e-mail: danila.moscone@uniroma2.it

abstract

this paper presents some examples of rapid, simple and cost-effective screening methods that can be realized by the use of screen printed electrodes (spes) coupled with portable and cheap instrumentation, for the monitoring of food safety and quality. when necessary, these spes have been modified with nanomaterials in order to improve their analytical performance. arsenic detection, for example, has been obtained with spes modified with a composite of nanostructured carbon black and au nanoparticles, while for pesticide detection the spes were modified with prussian blue nanoparticles in addition to the enzyme butyrylcholinesterase. in the case of immunosensors, a high sensitivity has been obtained by making the entire immunological chain take place on the surface of magnetic beads (mbs), finally collected on the surface of screen printed arrays with the aid of magnets located just under the working electrodes. applications to real samples are presented, in order to demonstrate the effectiveness of such approaches.

1. introduction

the potential impact of foods on human health is receiving increasing attention from the public, scientists and legislators. the interest of consumers, as well as of producers, in food safety and quality testing has increased over the last years. the globalization of food production, processing and distribution further complicates the problem. thus, efficient and sensitive analytical methods able to detect food contamination would help to reduce social and health risks. food control analyses require robust, sensitive, and selective detection methods. the most commonly used methods, such as chromatography and mass spectrometry, require expensive instrumentation and skilled technicians. therefore, the importance of new detection systems, which should be accurate and sensitive, cheap and preferably portable for on-site testing, is evident.
sensors and biosensors allow the development of fast, simple and cheap electrochemical screening methods, and some of them are presented in this paper. in particular, the screening methods concern the detection of chemical compounds and microbial contaminants such as arsenic and pesticides, salmonella sp., marine toxins such as palytoxin, and viruses such as the hepatitis a virus (hav). in all the methods that will be presented, screen printed electrodes (spes) and/or electrochemical arrays based on 8-screen-printed-electrode strips, connected to a cost-effective and portable apparatus, have been adopted as electrochemical transducers or assembled as immunosensors. some interesting and useful reviews on spes can be found in the literature [1]-[3]. in order to obtain higher sensitivity, in some cases the spes have been modified with nanostructured materials such as carbon black (cb) or gold nanoparticles (aunps), while in the case of immunosensors the arrays have been coupled with magnetic beads (mbs), on which the immunological chain occurs. in this paper, we summarize the results of some recent research carried out by our analytical chemistry group and presented at the 1st imekofood congress held in rome, italy, in october 2014. experiments illustrating the optimization and analytical characterization of the developed screening methods and their application to real samples, to evaluate matrix effect and recovery, will be presented.

2. arsenic detection

arsenic has a long history of use for its toxic and medicinal properties. in more recent ages, arsenic and its compounds have been widely used in pigments, as insecticides and herbicides, as an alloy in metals, and as chemical warfare agents. due to its wide use, together with its presence in a wide range of minerals, large accumulations now exist in soils and sediments, contaminating waters, including drinking waters. the problem of the presence of as in water has been known since the 1980s, when high levels of as were found in groundwater resources. thus, contamination of natural water has been identified as a public health problem, due to the mutagenic and carcinogenic effects of this element, especially in the form of as(iii) [4], prompting the world health organization (who) to set the maximum permitted level in drinking water at 10 μg/l [5]. at present, a lot of detection methods have been reported and reviewed [6].
most of them obtain limits of detection below the who arsenic guideline value of 10 µg/l, but often they are only suitable for laboratory conditions. therefore, a rapid and sensitive portable system for the screening of as in the field could be really useful. we have developed an electrochemical sensor based on spes modified with a nanocomposite of carbon black (cb) and au nanoparticles (cb-aunps/spe) for the detection of as(iii) [7]. actually, a wide variety of nanomaterials with different properties have found broad application in several analytical methods. among them, gold nanoparticles (aunps) have received large attention, mainly due to their interesting electrocatalytic properties [8], [9]. moreover, we recently demonstrated the advantages of using the cb-modified screen-printed electrode (spe) compared to the bare one, for its improved analytical characteristics [10], [11]. further modification of the spes with the cb-aunps composite enhances these characteristics even more, due to a synergic effect of both nanomaterials. the spes were home-produced with graphite-based conductive ink (elettrodag 421) for the working and counter electrodes and silver-silver chloride conductive ink (elettrodag 477 ss) for the pseudo-reference electrode. the diameter of the working electrode was 0.3 cm, resulting in an apparent geometric area of 0.07 cm2. the sensors were prepared by modifying the spe firstly with cb (6 µl of a 1 mg/ml solution in dmf/water 1:1) and then with aunps, also homemade, by depositing 6 µl of the aunps solution on the cb-modified spe. the chosen electrochemical technique was linear sweep anodic stripping voltammetry (scan rate = 0.8 v/s, t_deposition = 300 s, e_cleaning = 0.2 v, t_cleaning = 10 s), applied with the portable potentiostat palmsens. after the electrochemical and analytical optimization, as was measured with a high sensitivity (674 ma mm-1 cm-2) and a lod of 0.4 µg/l. finally, the cb-aunps/spes have been applied to measure as(iii) traces in drinking water. the analysis requires only a few minutes and this sensor provides the detection of as(iii) with a high percentage recovery (99.9 %) in a tap water sample spiked with the legal limit amount (10 µg/l). the electrodes are easily fabricated at low cost, are disposable, and are suitable for in situ analysis in real time.
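as an illustration of how such figures of merit are commonly derived (the paper itself does not state the convention used, so the 3-sigma rule below is only an assumption, and the function names are ours):

```python
import numpy as np

def lod_3sigma(blank_signals, sensitivity):
    """common 3-sigma estimate of a limit of detection:
    lod = 3 * sd(blank) / sensitivity (signal per concentration unit)."""
    return 3.0 * np.std(blank_signals, ddof=1) / sensitivity

def recovery_percent(found, added):
    """spike recovery, e.g. the reported 99.9 % for tap water spiked
    at the 10 ug/l legal limit."""
    return 100.0 * found / added
```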
3. pesticides detection

in agricultural and industrial production there is a growing exploitation of chemical compounds, but these substances may pose serious risks to humans, animals and the environment. among the various chemicals, pesticides are considered among the most dangerous, due to their variable nature and highly toxic effects on living organisms and wildlife. the toxicity of organophosphorus pesticides (ops) mainly arises from their capability to inhibit the acetylcholinesterase enzyme (ache), a crucial enzyme for central nervous system processes [12], [13]. for these reasons, there is a general concern about pesticide contamination and about their detection in polluted sites. traditional methods for the detection of organophosphorus pesticides employ chromatographic techniques, such as gas chromatography (gc) [14], [15]. biosensors can represent suitable tools for rapid, reliable, cost-effective and in-situ analysis, for the screening of environmental samples successively confirmed by traditional laboratory methods. in particular, several biosensors have been described in the literature for organophosphorus pesticides based on their capacity to inhibit cholinesterase (che) enzymes [16]-[20]. in a recent work we developed an amperometric biosensor for the determination of paraoxon, based on the enzyme butyrylcholinesterase (bche) immobilized on screen-printed electrodes modified with prussian blue nanoparticles (pbnps), and embedded in a flow system [21]. modification of the spes with prussian blue nanoparticles (pbnps) was accomplished by placing a drop (10 μl total volume) of "precursor solution" on the working electrode area. this solution was obtained by mixing 5 μl of 0.1 m potassium ferricyanide in 10 mm hcl with 5 μl of 0.1 m ferric chloride in 10 mm hcl directly on the surface of the working electrode. the solution was left on the electrode for 10 min and then rinsed with a few millilitres of 10 mm hcl. the electrodes were then left for 90 min in an oven at 100 °c to obtain a more stable and active layer of pbnps [22]. the pbnps-modified electrodes were stored dry at room temperature in the dark for up to one year. to immobilize the bche enzyme on the pbnps-modified electrode surface, 2 µl of 0.25 % glutaraldehyde was applied with a pipette exclusively on the pbnps-modified working electrode. the solution was left to evaporate; then, 2 µl of a mixture of bsa, enzyme and nafion was dropped on the working electrode. the mixture was obtained by adding 25 µl of 3 % (w/v) bsa, 25 µl of 0.1 % (v/v) nafion® and 25 µl of a stock enzyme solution (40 u/ml). all solutions were prepared in distilled water. the biosensor was embedded in the flow system shown in figure 1 and, in order to improve the working stability of the reference electrode, the ag/agcl reference electrode was covered with a cellulose acetate layer by applying 2 µl on its surface.

figure 1. scheme of the flow system [21].

amperometric measurements were performed in a carrier solution consisting of 0.05 m phosphate buffer with 0.1 m kcl, ph 7.4, at an applied potential of +200 mv vs. ag/agcl. firstly, the carrier buffer was passed through the electrochemical cell for 5 min to register an intensity current (control). then, a carrier buffer containing 5 mm butyrylthiocholine was passed through the flow cell where the biosensor was located. the substrate butyrylthiocholine was hydrolyzed by the bche immobilized on the spe-pbnps, producing thiocholine, which is electroactive. the resulting current signals were continuously recorded and the steady-state current values were measured. stabilization of the current in flow conditions was reached in 10 min. the intensity currents were proportional to the thiocholine produced, giving information about the enzymatic activity of the immobilized bche. the inhibitory effect of organophosphates (i.e., paraoxon) on the bche biosensor was evaluated by determining the decrease in the current obtained for the oxidation of the smaller amount of thiocholine produced by the enzyme when the sample contaminated with paraoxon was passed through the flow cell for a selected time. thus, the biosensor was exposed to paraoxon, followed by a washing step with distilled water. the enzymatic residual activity was finally determined in the flow buffer, in the presence of the enzymatic substrate. the resulting currents were measured as described above and the degree of inhibition was calculated as the relative decay of the biosensor response through equation (1):

$i\,\% = \frac{i_0 - i_i}{i_0} \times 100$ (1)

where $i_0$ and $i_i$ represent the biosensor responses before and after the incubation procedure, respectively.
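equation (1) translates directly into code (a trivial but convenient helper; python):

```python
def inhibition_percent(i0, ii):
    """equation (1): degree of inhibition as the relative decay of the
    biosensor response; i0 and ii are the steady-state currents before and
    after incubation with the inhibitor (e.g. paraoxon)."""
    return 100.0 * (i0 - ii) / i0

# example: a drop from 250 na to 200 na corresponds to 20 % inhibition
# inhibition_percent(250e-9, 200e-9) -> 20.0
```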
the introduction of a washing step allowed to avoid electrochemical interferences such as ascorbic acid or phenolic compounds, since the enzymatic activity was always quantified in the phosphate buffer in the absence of any electroactive interfering species eventually present in the sample. the integration in a flow system enables the biosensor to be applied for continuous automatic monitoring of paraoxon in environmental samples. in addition, in a continuous flowsystem biosensor, manual procedures are minimized and analyses can be programmed and remotely delivered, minimizing the operator intervention. in order to optimize the biosensing system, a series of parameters was evaluated and optimized, such as different types of electrochemical cells, the flow rate during the enzymatic measurement and the incubation time. in optimized conditions, storage stability lasted up to 60 days at room temperature in dry conditions, demonstrating an excellent storage stability and making this system highly attractive for commercial use. furthermore, the analytical system was characterized by satisfactory analytical performances in the detection of organophosphate tested (paraoxon) reaching a linear range of concentration between 2 and 10 ppb with a detection limit (lod) of 1 ppb in standard solutions. this system was also challenged in drinking, river and lake water samples with satisfactory recovery values using a dilution step of 1:4 v/v. the advantage of this integrated system is the possibility to measure irreversible inhibitors of cholinesterase enzyme (i.e., organophosphorus and carbammic pesticides) in samples without extended pre-treatments and to automate the analyses reducing costs and time. this analytical system can be used as an alarm system for this class of compounds, followed, in the case of alarm, by the hplc or gc-ms analyses to exactly detect the pesticides present in the sample. 4. palytoxin detection  palytoxin (pltx) is one of the most potent marine toxin known to date [23]. recently blooms of ostreopsis spp. have been reported along the mediterranean coasts, posing serious risks to human health. occurrence of ostreopsis spp may result in palytoxin contamination of seafood (250 µg/kg proposed regulatory limit) and, in order to prevent sanitary risks, there is the need to develop rapid and sensitive monitoring methods of pltx-like compounds in seafood, coupled with an efficient extraction procedure. currently, there are no regulations on pltx-group toxins in shellfish, either in the european union or in other regions of the world, nor there is an official method for their determination. the method most commonly used for detection of pltxs is the mouse bioassay [24] but for reasons of animal welfare, of poor sensitivity and specificity, and long analysis times, other methods detection are required by efsa [25], [26]. we developed an electrochemical sensor, based on an 8screen-printed electrode strip connected to a cost effective and portable apparatus, for palytoxin (pltx) detection [27]. sheep erythrocytes were used to test palytoxin and the degree of haemolysis caused by this toxin was evaluated by measuring the release of the cytosolic lactate dehydrogenase (ldh). the percentage of haemolysis, and therefore the amount of ldh measured, using nadh/pyruvate and appropriate electrochemical mediators, is correlated to the concentration of this toxin. 
4. palytoxin detection

palytoxin (pltx) is one of the most potent marine toxins known to date [23]. recently, blooms of ostreopsis spp. have been reported along the mediterranean coasts, posing serious risks to human health. occurrence of ostreopsis spp. may result in palytoxin contamination of seafood (250 µg/kg proposed regulatory limit) and, in order to prevent sanitary risks, there is the need to develop rapid and sensitive monitoring methods for pltx-like compounds in seafood, coupled with an efficient extraction procedure. currently, there are no regulations on pltx-group toxins in shellfish, either in the european union or in other regions of the world, nor is there an official method for their determination. the method most commonly used for the detection of pltxs is the mouse bioassay [24], but for reasons of animal welfare, poor sensitivity and specificity, and long analysis times, other detection methods are required by efsa [25], [26]. we developed an electrochemical sensor, based on an 8-screen-printed-electrode strip connected to a cost-effective and portable apparatus, for palytoxin (pltx) detection [27]. sheep erythrocytes were used to test palytoxin and the degree of haemolysis caused by this toxin was evaluated by measuring the release of the cytosolic lactate dehydrogenase (ldh). the percentage of haemolysis, and therefore the amount of ldh measured using nadh/pyruvate and appropriate electrochemical mediators, is correlated to the concentration of this toxin. two different electrochemical approaches were investigated for evaluating the ldh release, but only the one based on the use of a binary redox mediator sequence (phenazine methosulphate in conjunction with hexacyanoferrate(iii)) proved to be useful for our purpose. the approach involves three sequential reactions:

pyruvate + nadh ⇌ lactate + nad+ (catalysed by ldh) (2)
nadh + pms+ → nad+ + pmsh (3)
pmsh + 2 fe(cn)6 3- → pms+ (which cycles via reaction (3)) + 2 fe(cn)6 4- + h+ + 2 e- (4)

after incubation of the ldh (released into the medium) and its substrates for 30 minutes, the unreacted nadh (the amount depending on the concentration of ldh) spontaneously interacts with pms+, forming pmsh. the latter, produced by reaction (3), interacts with the second mediator, present in large excess, forming fe(cn)6 4-. in this approach, when the residual nadh is completely consumed, the reactions stop and the reoxidation of fe(cn)6 4- on the spe surface, at an applied potential of +260 mv, gives a current signal inversely proportional to the ldh concentration and therefore to the pltx concentration. no reduction of fe(cn)6 3- and pms+ occurs at this potential. after an analytical/biochemical characterization, the sensor strip was used to measure palytoxin. sheep blood and standard solutions of pltx were left to react using two different incubation times (24 h or 4 h), obtaining working ranges of 7×10-3-0.02 µg/l and 0.16-1.3 µg/l, respectively. the specificity of the test for palytoxin was evaluated using ouabain, which acts like pltx on the na+/k+-atpase pump. a cross-reactivity study using high concentrations of other marine biotoxins was also carried out. experiments to evaluate the matrix effect and recovery on mussel samples were also carried out, and the results showed that the matrix effect was dependent on the pltx concentration; thus it is necessary to use a matrix standard calibration curve for accurate analysis of pltx in experimentally and naturally contaminated samples. this was the first time that a biomolecular method for the analysis of pltx in mussels has been evaluated for matrix and recovery effects. compared with the conventional haemolytic-spectrophotometric assay, the method proposed here proved to be faster (4 h blood/pltx incubation instead of 24 h) and uses a cost-effective and portable apparatus.

5. salmonella detection

the increased consumption of fresh and ready-to-use vegetables has recently caused several outbreaks and illnesses due to their contamination by pathogenic microorganisms, such as salmonella [28], [29]. in italy, the production and processing of fresh vegetables are concentrated in the campania region, and especially in the plain of sele. recently, fresh and ready-to-eat vegetables from this area have been the cause of repeated food alerts in the ec. this problem is mainly due to cultivation practices, manipulation and transformation. since the standard culture method for detecting salmonella [30, iso 6579:2002] requires up to 5 days to produce results, the need to develop rapid methods represents an important issue for the authorities and the producers. the purpose of this study was the development and evaluation of an elime (enzyme-linked-immuno-magnetic-electrochemical) assay [31], [32] to detect salmonella in vegetables of i and iv gamma.
the proposed elime assay is based on the use of magnetic beads (mbs) as the support of a sandwich immunological chain, coupled with a strip of 8 magnetized screen-printed electrodes (localized at the bottom of 8 wells). the product of the enzymatic reaction is quickly measured by chronoamperometry at an applied potential of 100 mv for 60 seconds. four different kinds of anti-salmonella mbs were tested:
• dynabeads anti-salmonella (ready to use)
• pathatrix anti-salmonella (ready to use)
• pathatrix same day anti-salmonella (ready to use)
• pan mouse igg mbs coated with a broad-reactivity mab anti-salmonella
an optimized dilution of pab-hrp (1:100) has been used to complete the sandwich, and the couple tmb + h2o2 as the enzyme substrate. after verifying the ability of the system based on the use of dynabeads anti-salmonella to interact with different salmonella serotypes, we focused our attention on s. napoli and s. thompson, recently isolated from vegetables grown in italy. since salmonella, and pathogens in general, are a small fraction of a large population of non-target (nt) organisms present in food and able to adhere to various surfaces, including mbs, we also tested several nt bacteria. in order to obtain the best sensitivity and specificity towards salmonella, different pab-hrp dilutions and several blocking agents (bovine gelatin, ι-carrageenan, bsa, pva, dry milk) were investigated. the best results were obtained using pab-hrp = 1:100 and dry milk, because of the better selectivity (figure 2). among the 20 nt bacteria tested, only e. cloacae, c. freundii, e. aerogenes and e. coli gave back current signals greater than the zero point. similar results were also obtained using pathatrix mbs. for this reason, different salmonella serotypes were also tested using pan mouse igg mbs coated with a broad-reactivity mab anti-salmonella (dry milk as blocking agent and pab-hrp = 1:100). in figure 2, the calibration curves for s. napoli and s. thompson are reported, while figure 3 shows the selectivity study using dry milk and pva as blocking agents. from the results obtained, we can state that these particles are less sensitive but more specific than the ready-to-use ones. however, experiments on i and iv gamma vegetables, experimentally contaminated with salmonella, will be performed with both types of particles to evaluate their real capability to distinguish salmonella from endogenous nts.

figure 2. calibration curves for s. napoli and s. thompson (x-axis: salmonella serotypes [cfu/ml], approximately 10^5 to 10^9).

figure 3. selectivity studies.
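such calibration curves are typically summarised by a fit of the signal against the logarithm of the cell concentration; a sketch of that step (python/numpy; the example values are placeholders, not the measured data):

```python
import numpy as np

def fit_log_calibration(cfu_per_ml, currents):
    """log-linear calibration for the elime assay: chronoamperometric current
    vs log10 of the salmonella concentration (cfu/ml), least-squares fit."""
    slope, intercept = np.polyfit(np.log10(np.asarray(cfu_per_ml, float)),
                                  np.asarray(currents, float), 1)
    return slope, intercept

# placeholder values spanning the plotted range of roughly 1e5-1e9 cfu/ml:
# fit_log_calibration([1e5, 1e6, 1e7, 1e8, 1e9], [0.5, 1.1, 1.8, 2.4, 3.1])
```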
6. hav detection

hepatitis a virus (hav) causes an acute hepatitis associated with significant morbidity and occasional mortality. outbreaks of waterborne diseases are certainly underestimated due to the lack of adequate programs for epidemiological surveillance. current legislation for water, shellfish (ec 2073/2005, ec 853/2004) and plants (ec 2073/2005) does not provide for any limitation concerning the presence of hav and other enteric viruses in irrigation and housing waters [33]. in addition, there is no official method for the detection of these viruses. actually, the presence of viruses in water and/or foods can only be indirectly deduced from the patient's symptoms and confirmed through the search for anti-hav igm and/or anti-hav igg antibodies in the patient's blood. immunoanalytical approaches are reported as methods to determine hav in drinking water before its use, thus avoiding the infectious disease. in this work, still in progress, two electrochemical elime assays, competitive and sandwich, were developed. these systems are based on the use of new polydopamine-modified magnetic nanobeads as the solid support for the immunochemical chain, and screen printed electrodes as the sensing platform. this rapid and low-cost analysis method involves the use of a portable instrument, able to perform measurements directly in the field. these elime assays showed detection limits equal to 1×10-8 iu/ml and 8×10-7 iu/ml for the sandwich and competitive formats, respectively. in order to investigate the applicability of the proposed immunosensors for practical analysis, the immunosensors were used to evaluate the recoveries of different concentrations of hav spiked in different tap water samples. indeed, different hav concentrations from 10-10 to 10-2 iu/ml spiked in tap water samples were analyzed. preliminary results obtained on real samples were compared with those obtained by qrt-pcr analysis, which is a routine technique used to evaluate hav contamination levels in samples. at the moment, the electrochemical values do not perfectly match the pcr results, being underestimated. experiments are in progress to elucidate and overcome this problem.

7. conclusions

new screening methods for the control of food safety have been presented. the use of spes in conjunction with portable and quite inexpensive instrumentation could allow these methods to become popular, because they are simple enough to be carried out also by non-specialized operators. examples of the determination of arsenic, pesticides and toxins (pltx) at ppb levels have been illustrated, with applications to real samples in order to demonstrate the feasibility and effectiveness of these determinations. where necessary, improved analytical characteristics have been obtained by modifying the spes with nanostructured materials able to enhance their performance. examples of immunosensors for the detection of bacteria such as salmonella and viruses such as hepatitis a have also been reported. in this case, a large enhancement of the sensitivity of the assay has been obtained using magnetic beads of micrometre dimensions as the support for the immunological chain, and arrays of magnetized spes as electrochemical transducers. all the methods presented are intended to be adopted as screening methods, and not used in place of the confirmatory ones. in this way, only samples suspected to be contaminated need to be subjected to the required confirmatory analytical methods for residues in foodstuffs [34], [35].

acknowledgement

the salmonella project was funded by the national project ricerca finalizzata 2009 (rf-2009-1538880). the hav research was supported by a grant from the eu marie curie fp7 irses (international research staff exchange scheme).

references

[1] j. p. metters, r. o. kadara, c. e. banks, new directions in screen printed electroanalytical sensors: an overview of recent developments, analyst 136 (2011) pp. 1067-1076.
[2] m. li, y. t. li, d. w. li, y. t. long, recent developments and applications of screen-printed electrodes in environmental assays - a review, anal. chim. acta 734 (2012) pp. 31-44.
[3] o. d. renedo, m. a. alonso-lomillo, m. j. martínez, recent developments in the field of screen-printed electrodes and their related applications, talanta 73 (2007) pp. 202-219.
[4] b. k. mandal, k. t. suzuki, arsenic round the world: a review, talanta 58 (2002) pp. 201-235.
[5] s. v. flanagan, r. b. johnston, y. zheng, arsenic in tube well water in bangladesh: health and economic impacts and implications for arsenic mitigation, bull. who 90 (2012) pp. 839-846.
[6] d. q. hung, o. nekrassova, r. g. compton, analytical methods for inorganic arsenic in water: a review, talanta 64 (2004) pp. 269-277.
[7] s. cinti, s. politi, d. moscone, g. palleschi, f. arduini, stripping analysis of as(iii) by means of screen-printed electrodes modified with gold nanoparticles and carbon black nanocomposite, 26 (2014) pp. 931-939.
[8] k. saha, s. s. agasti, c. kim, x. li, v. m. rotello, gold nanoparticles in chemical and biological sensing, chem. rev. 112 (2012) pp. 2739-2779.
[9] d. a. schultz, plasmon resonant particles for biological detection, curr. opin. biotech. 14 (2003) pp. 13-22.
[10] f. arduini, f. di nardo, a. amine, l. micheli, g. palleschi, d. moscone, carbon black-modified screen-printed electrodes as electroanalytical tools, electroanalysis 24 (2012) pp. 743-751.
[11] f. arduini, c. majorani, a. amine, d. moscone, g. palleschi, hg2+ detection by measuring thiol groups with a highly sensitive screen-printed electrode modified with a nanostructured carbon black film, electrochim. acta 56 (2011) pp. 4209-4215.
[12] j. l. sussman, m. harel, f. frolow, c. oefner, a. goldman, l. toker, i. silman, atomic structure of acetylcholinesterase from torpedo californica: a prototypic acetylcholine-binding protein, science 253 (1991) pp. 872-879.
[13] j. bajgar, j. fusek, k. kuca, l. bartosova, d. jun, treatment of organophosphate intoxication using cholinesterase reactivators: facts and fiction, mini rev. med. chem. 7 (2007) pp. 461-466.
[14] c. bergh, r. torgrip, c. ostman, simultaneous selective detection of organophosphate and phthalate esters using gas chromatography with positive ion chemical ionization tandem mass spectrometry and its application to indoor air and dust, rapid commun. mass spectrom. 24 (2010) pp. 2859-2867.
[15] f. de freitas ventura, j. jr. de oliveira, w. dos reis pedreira filho, m. gerardo ribeiro, gc-ms quantification of organophosphorous pesticides extracted from xad-2 sorbent tube and foam patch matrices, anal. methods 4 (2012) pp. 3666-3673.
[16] f. arduini, a. amine, d. moscone, g. palleschi, biosensors based on cholinesterase inhibition for insecticides, nerve agents and aflatoxin b1 detection, microchim. acta 170 (2010) pp. 193-214.
[17] s. andreescu, j. l. marty, twenty years research in cholinesterase biosensors: from basic research to practical applications, biomol. eng. 23 (2006) pp. 1-15.
[18] m. pohanka, k. musilek, k. kuca, progress of biosensors based on cholinesterase inhibition, curr. med. chem. 16 (2009) pp. 1790-1798.
[19] a. p. periasamy, y. umasankar, s. m. chen, nanomaterials-acetylcholinesterase enzyme matrices for organophosphorus pesticides electrochemical sensors: a review, sensors 9 (2009) pp. 4034-4055.
[20] w. zhang, a. m. asiri, d. liu, d. du, y. lin, nanomaterial-based biosensors for environmental and biological monitoring of organophosphorus pesticides and nerve agents, trends anal. chem. 54 (2014) pp. 1-10.
[21] f. arduini, d. neag, v. scognamiglio, s. patarino, d. moscone, g. palleschi, automatable flow system for paraoxon detection with an embedded screen-printed electrode tailored with butyrylcholinesterase and prussian blue nanoparticles, chemosensors 3 (2015) pp. 129-145.
[22] s. cinti, f. arduini, g. vellucci, i. cacciotti, f. nanni, d. moscone, carbon black assisted tailoring of prussian blue nanoparticles to tune sensitivity and detection limit towards h2o2 by using screen-printed electrode, electrochem. commun. 47 (2014) pp. 63-66.
[23] m. usami, m. satake, s. ishida, a. inoue, y. kan, t. yasumoto, palytoxin analogs from the dinoflagellate ostreopsis siamensis, j. am. chem. soc. 117 (1995) pp. 5389-5390.
[24] gazzetta ufficiale della repubblica italiana n. 165, 16 may 2002, pp. 16-19.
[25] crlmb, eu harmonised standard operating procedure for detection of lipophilic toxins by mouse bioassay, version 5, june 2009.
[26] efsa, scientific opinion on marine biotoxins in shellfish - palytoxin group, efsa j. 7(12) (2009) pp. 1393-1631.
[27] g. volpe, l. cozzi, d. migliorelli, l. croci, g. palleschi, development of a haemolytic-enzymatic assay with mediated amperometric detection for palytoxin analysis: application to mussels, anal. bioanal. chem. 406 (2014) pp. 2399-2410.
[28] m. oliveira, m. abadias, p. colás-medà, j. usall, i. viñas, biopreservative methods to control the growth of foodborne pathogens on fresh-cut lettuce, international journal of food microbiology 214 (2015) pp. 4-11.
[29] i. h. igbinosa, h. isoken, biofilm formation of salmonella species isolated from fresh cabbage and spinach, journal of applied sciences and environmental management 19 (2015) pp. 45-50.
[30] international standard organization, iso 6579:2002/amd.1:2007, microbiology of food and animal feeding stuffs - horizontal method for the detection of salmonella spp., genève, 2007, 9 p.
[31] l. croci, e. delibato, g. volpe, d. de medici, g. palleschi, comparison of pcr, electrochemical enzyme-linked immunosorbent assays, and the standard culture method for detecting salmonella in meat products, appl. environ. microbiol. 70 (2004) pp. 1393-1396.
[32] e. delibato, g. volpe, d. romanazzo, d. de medici, l. toti, d. moscone, g. palleschi, development and application of an electrochemical plate coupled with immunomagnetic beads (elime) array for salmonella enterica detection in meat samples, j. agric. food chem. 57 (2009) pp. 7200-7204.
[33] m. koopmans, e. duizer, foodborne viruses: an emerging problem, international journal of food microbiology 90 (2004) pp. 23-41.
[34] 2002/657/ec: commission decision of 12 august 2002 implementing council directive 96/23/ec concerning the performance of analytical methods and the interpretation of results (notified under document number c(2002) 3044), official journal l 221, 17/08/2002, pp. 8-36.
[35] sanco/12571/2013, guidance document on analytical quality control and validation procedures for pesticide residues analysis in food and feed; supersedes sanco/12495/2011, implemented by 01/01/2014.

acta imeko issn: 2221-870x june 2015, volume 4, number 2, 32-38

acta imeko | www.imeko.org june 2015 | volume 4 | number 2 | 32

direct calibration chain for hand torque screwdrivers from the national torque standard

koji ogushi, atsuhiro nishino, koji maeda and kazunaga ueda
national metrology institute of japan, aist, tsukuba central 3, 1-1-1 umezono, tsukuba, ibaraki, 305-8563, japan

section: research paper
keywords: torque; standard; screwdriver; calibration
citation: koji ogushi, atsuhiro nishino, koji maeda and kazunaga ueda, direct calibration chain for hand torque screwdrivers from the national torque standard, acta imeko, vol. 4, no. 2, article 6, june 2015, identifier: imeko-acta-04 (2015)-02-06
editor: paolo carbone, university of perugia, italy
received october 10, 2014; in final form january 30, 2015; published june 2015
copyright: © 2015 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: koji ogushi, e-mail: kji.ogushi@aist.go.jp

abstract

hand torque screwdrivers are used for the fastening control of screws in almost all precise mechanical and electrical parts. this paper describes the realization of a direct calibration chain for hand torque screwdrivers traceable to the national torque standard. the calibration methods for a reference torque screwdriver using a primary torque standard machine, for a torque screwdriver tester using the reference torque screwdriver, and for hand torque screwdrivers using the torque screwdriver tester are described, respectively. uncertainty evaluation methods for each calibration level are also explained. the effectiveness of the calibration chain for hand torque screwdrivers could be demonstrated.

1. introduction

hand torque screwdrivers (htds) are necessary for the assembly of medical, electrical, and precision devices in the production control process. there has been some progress in the development of calibration chains for hand torque wrenches (htws) in the last two decades [1]-[4]. however, the developmental progress of calibration chains for htds has been reported less. the authors have presented a comparison of the results for torque screwdriver testers (tdts) obtained by using a reference torque screwdriver (rtd) and a conventional weights-and-bar system (wbs), and have shown some of the advantages of a calibration chain including an rtd [5]. parasitic transverse forces and bending moments necessarily occur, superposed on the pure torque, during calibration using a wbs. figure 1 shows the calibration chain for htds, with the (approximate) uncertainties of the calibrations and the maximum permissible errors of the tests for the devices subject to them. the national metrology institute of japan (nmij) has proposed this direct torque traceability system for hand torque screwdrivers, cooperating with japanese industry. first, an htd should be tested by using a torque screwdriver checker (tdc; torque screwdriver testing equipment without a loading system) or a tdt (torque screwdriver testing equipment with a loading system) with a maximum permissible deviation of four or six percent according to iso 6789 [6], which is a documentary standard for the requirements and test methods of hand torque tools. second, the tdc or the tdt should be calibrated by using the rtd within an uncertainty of one percent. next, the rtd should be calibrated in a jcss (japan calibration service system) accredited laboratory within an uncertainty of approximately 0.2 % to 0.5 %. the accredited laboratory has a torque calibration machine (tcm) for torque measuring devices (pure torque loading). the tcm should have a calibration and measurement capability of approximately 0.04 % to 0.2 %. a high-precision tmd is used for the calibration and control of the tcm in the accredited laboratory. finally, this high-precision tmd should be calibrated by nmij within an uncertainty of approximately 0.01 % to 0.04 %. this paper describes the establishment of a complete calibration chain from the national torque standard to htds, where the calibration and testing methods for the rtd using a national torque standard machine (tsm), for the tdt using the rtd, and for htds using the tdt are also investigated. it is shown that the traceability of the test results for htds is experimentally confirmed by this chain. the torque range used for htds is generally small, e.g., from several centinewton metres to several newton metres. national torque standards in such a small range had not been
this paper  describes the realization of a direct calibration chain for hand torque screwdrivers traceable to the national torque standard. the  calibration methods for a reference torque screwdriver using a primary torque standard machine, a torque screwdriver tester using  the  reference  torque  screwdriver,  and  hand  torque  screwdrivers  using  the  torque  screwdriver  tester  are  described  respectively.  uncertainty evaluation methods for each calibration level are also explained. the effectiveness of the calibration chain for hand torque  screwdrivers could be demonstrated.  acta imeko | www.imeko.org  june 2015 | volume 4 | number 2 | 33  established in many countries, so the development of a calibration chain for htds has not progressed so much until recently. at nmij, a new deadweight torque standard machine (10-n·m-dwtsm) that is rated for 10 cn·m to 1000 cn·m has been completed and calibration service in this range has begun to the industry [7]. this development has accelerated the establishment of a calibration chain for htds. one user in the medical field requested a digital htd with the widest measuring range possible (for example, from 10 cn·m to 500 cn·m). the normative lower limit of an htd is 20 % of its nominal torque value according to iso 6789: 2003 [6]. this paper also focuses on the problem where the htd is used in the special range beyond the lower limit of 20 % in conjunction with the limit of resolution in the reference device (tdt). 2. experimental conditions  experiments on the calibration chain were conducted as follows: · experiment 1: calibration of the rtd using the tsm as the reference standard. · experiment 2: calibration of the tdt using the rtd as the reference standard. · experiment 3: testing verification of the htd using the tdt as the standard. these experimental conditions are described in the following subsections. 2.1. experiment 1  a newly developed deadweight-type tsm with a rated capacity of 10 n·m was used (10-n·m-dwtsm) [7]. a picture of the machine is shown in figure 2. the calibration range is from 10 cn·m to 1000 cn·m. a torque transducer with a torque screwdriver shape (tp-6n-0707, manufactured by showa measuring instruments inc., cooperating with nmij) was calibrated with a combination of an amplifier/indicator (mgcplus, amplifier type: ml38, manufactured by hbm gmbh) as the rtd. the rated capacity is 600 cn·m, and the observed resolution of the rtd r was 9.4 n·m (referring the definition of resolution in section 3.1). a picture of the tp-6n0707 is shown in figure 3. because of the limited calibration steps realized by the 10n·m-dwtsm, calibration was performed in two ranges: from 10 cn·m to 100 cn·m and from 50 cn·m to 600 cn·m. there were eight calibration steps, as follows: · 10 cn·m, 20 cn·m, 30 cn·m, 40 cn·m, 50 cn·m, 60 cn·m, 80 cn·m, and 100 cn·m, · 50 cn·m, 100 cn·m, 150 cn·m, 200 cn·m, 300 cn·m, 400 cn·m, 500 cn·m, and 600 cn·m. according to jmif015 [8], which is the industrial guideline for the calibration method of tmds issued by japan measuring instruments federation (jmif), at first, two calibration cycles with increasing and decreasing loading steps were conducted after three instances of pre-loading up to the maximum torque at the 0° mounting position. after changing the mounting position, one calibration cycle of increasing and decreasing loading steps was also conducted after one instance of preloading at the 120° and 240° mounting positions. 
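to make the loading procedure easier to follow, the sequence just described can be written down as plain data; the sketch below is illustrative only (the names, the dataclass layout and the helper function are our assumptions, not part of jmif015):

```python
# illustrative encoding of the loading sequence of experiment 1;
# names and structure are assumptions made for this sketch.
from dataclasses import dataclass

@dataclass
class Series:
    mounting_deg: int   # mounting position of the transducer
    n_preloads: int     # pre-loadings up to the maximum torque
    n_cycles: int       # calibration cycles (increasing + decreasing steps)

# 0 deg: 3 preloads, 2 cycles; 120 deg and 240 deg: 1 preload, 1 cycle each
PROTOCOL = [Series(0, 3, 2), Series(120, 1, 1), Series(240, 1, 1)]

STEPS_SMALL = [10, 20, 30, 40, 50, 60, 80, 100]        # cn·m, smaller range
STEPS_LARGE = [50, 100, 150, 200, 300, 400, 500, 600]  # cn·m, larger range

def loading_plan(steps):
    """yield (mounting_deg, cycle, direction, torque) for one range."""
    for s in PROTOCOL:
        for c in range(s.n_cycles):
            for t in steps:            # increasing steps
                yield (s.mounting_deg, c, "up", t)
            for t in reversed(steps):  # decreasing steps
                yield (s.mounting_deg, c, "down", t)
```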
the clockwise (cw) and counterclockwise (ccw) torques were each calibrated. figure 4 shows the loading timetable for experiment 1. the room temperature was 19.6 °c to 20.5 °c, the relative humidity was 36 % to 37 % and the atmospheric pressure was 1013 hpa to 1016 hpa during the calibration.
figure 1. torque traceability system for htds proposed by nmij (the percentages show the maximum permissible errors or the approximate demanded uncertainties).
figure 2. torque standard machine (10-n·m-dwtsm) for calibration of the reference torque screwdriver (rtd).
figure 3. torque transducer with torque screwdriver shape, manufactured by showa measuring instruments in cooperation with nmij.
2.2. experiment 2
the rtd (tp-6n-0707 + mgcplus ml38) was used as the reference standard. the calibration range was from 10 cn·m to 600 cn·m. a tdt (tdt-600cn, manufactured by tohnichi mfg ltd.) was calibrated by installing the rtd on it (like an htd); a picture of the tdt-600cn with the installed tp-6n-0707 is shown in figure 5. the calibration range specified by the manufacturer is from 20 cn·m to 600 cn·m, but the authors investigated the calibration over one range from 10 cn·m to 600 cn·m in this experiment (the lower limit was extended). the resolution r of the tdt was 0.2 cn·m. eight calibration steps were conducted, as follows:
· 10 cn·m, 20 cn·m, 50 cn·m, 100 cn·m, 200 cn·m, 300 cn·m, 400 cn·m, and 600 cn·m.
according to jmif019 [9], another industrial guideline issued by jmif, for the calibration of torque wrench testers and torque testing machines, two calibration cycles with increasing loading steps were conducted after three instances of pre-loading up to the maximum torque at the 0° mounting position. after changing the mounting position of the rtd, one calibration cycle of increasing steps was conducted after one instance of pre-loading at each of the 120° and 240° mounting positions. cw and ccw torques were each calibrated. figure 6 shows the loading timetable for experiment 2. the room temperature was 20.5 °c to 21.1 °c, the relative humidity was 35 % and the atmospheric pressure was 1001 hpa to 1020 hpa during the calibration.
2.3. experiment 3
the tdt (tdt-600cn) was used as the reference standard; the calibrated range of the tdt was from 10 cn·m to 600 cn·m. the ftd-400cn (indicating type, resolution 1 cn·m) and rtd-500cn (setting type, smallest graduation 5 cn·m) htds (both manufactured by tohnichi mfg ltd.) were tested by installing them on the tdt. figure 7 shows pictures of the ftd-400cn and rtd-500cn installed on the tdt. the tested ranges were from 80 cn·m to 400 cn·m for the ftd-400cn and from 100 cn·m to 500 cn·m for the rtd-500cn.
figure 4. loading timetable for the calibration of the reference torque screwdriver (rtd).
figure 5. torque screwdriver tester (tdt) with the reference torque screwdriver (rtd).
figure 6. loading timetable for the calibration of the torque screwdriver tester (tdt).
figure 7. hand torque screwdrivers (htds) installed on the torque screwdriver tester (tdt): (a) indicating type, (b) setting type.
according to iso 6789 [6], three loading steps of 20 %, 60 % and 100 % of the maximum torque were conducted successively five times, after pre-loading at the maximum torque five times. the cw and ccw torques were tested for the ftd-400cn, and only the cw torque was tested for the rtd-500cn. figure 8 shows the loading timetable for experiment 3. the room temperature was 20.3 °c to 20.5 °c, the relative humidity was 33 % to 34 % and the atmospheric pressure was 1023 hpa during the testing.
3. results and discussion
3.1. experiment 1
the uncertainty was evaluated according to jcg209s11 [10], the "guideline for the uncertainty evaluation of calibrations for torque meters and reference torque wrenches" issued by the international accreditation japan at the national institute of testing and evaluation (iajapan/nite, the accreditation body for calibration and testing laboratories in japan). the guideline was prepared mainly by the authors in cooperation with japanese industry, based on research into the calibration methods of tmds [11], [12]. the evaluation method is explained in detail as follows.
calibration results. the measurement values at increasing torque, $s'$, and at decreasing torque, $s''$, are defined as the differences between the indicated values at the torque steps and the indicated value at the non-loading condition before starting the cycle. the calibration results $\bar s'_i$ and $\bar s''_i$ are calculated according to eqs. (1) as the means of the measurement values obtained at the different mounting positions; the results measured during the second cycles are not included in these calculations:
$$\bar s'_i = \frac{1}{n_{\mathrm{rot}}}\sum_{e=1}^{n_{\mathrm{rot}}} s'_{i1e} \qquad (1a)$$
$$\bar s''_i = \frac{1}{n_{\mathrm{rot}}}\sum_{e=1}^{n_{\mathrm{rot}}} s''_{i1e} \qquad (1b)$$
where $i$, $j\,(=1)$ and $e$ are the indexes of the calibration steps, cycles and mounting-position series, respectively, and $n_{\mathrm{rot}}$ is the total number of mounting positions.
reproducibility with different mounting positions. the relative reproducibility with different mounting positions, $b_i$, is calculated according to eq. (2) as the experimental standard deviation of the measurement values obtained during the first cycle of each mounting position:
$$b_i = \frac{1}{\bar s'_i}\sqrt{\frac{1}{n_{\mathrm{rot}}-1}\sum_{e=1}^{n_{\mathrm{rot}}}\left(s'_{i1e}-\bar s'_i\right)^2} \qquad (2)$$
the relative standard uncertainty $w_{\mathrm{rot\_rtd},i}$ is calculated according to eq. (3) as the experimental standard deviation of the mean:
$$w^2_{\mathrm{rot\_rtd},i} = \frac{1}{n_{\mathrm{rot}}}\,b_i^2 \qquad (3)$$
the results measured during the second cycles are not included in the calculation of $b_i$.
repeatability with an unchanged mounting position. the relative repeatability with an unchanged mounting position, $b'_i$, is calculated according to eq. (4) when the cycle is repeated twice for the 0° series only:
$$b'_i = \frac{\left|s'_{i21}-s'_{i11}\right|}{\dfrac{1}{n_{\mathrm{rep}}}\sum_{j=1}^{n_{\mathrm{rep}}} s'_{ij1}} \qquad (4)$$
where $n_{\mathrm{rep}}$ is the number of cycles for the same series and $s'_{ij1}$ is the measurement value at step $i$, cycle $j$, for the first series ($e = 1$). the relative standard uncertainty $w_{\mathrm{rep\_rtd},i}$ is calculated according to eq. (5), by taking $b'_i$ as the half-width of a rectangular distribution:
$$w^2_{\mathrm{rep\_rtd},i} = \frac{1}{3}\,b_i'^2 \qquad (5)$$
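as a worked illustration of eqs. (1)-(5), the python sketch below computes the calibration result and the reproducibility and repeatability contributions from an array of raw readings; the array layout and the function name are our assumptions, not part of the guideline:

```python
import numpy as np

def calibration_stats(s_up):
    """
    s_up[e, j, i]: measurement values at increasing torque for mounting-
    position series e, cycle j and torque step i (in this protocol a second
    cycle, j = 1, exists only for the 0 deg series, e = 0).
    returns the calibration result (eq. 1a) and the relative contributions
    for reproducibility (eq. 3) and repeatability (eq. 5) per torque step.
    """
    first = s_up[:, 0, :]                      # first cycle of each series
    s_bar = first.mean(axis=0)                 # eq. (1a)
    b = first.std(axis=0, ddof=1) / s_bar      # eq. (2), relative
    n_rot = first.shape[0]
    w_rot = b / np.sqrt(n_rot)                 # eq. (3)
    # eq. (4): the two cycles of the 0 deg series only
    b_rep = np.abs(s_up[0, 1, :] - s_up[0, 0, :]) / s_up[0, :2, :].mean(axis=0)
    w_rep = b_rep / np.sqrt(3.0)               # eq. (5), rectangular
    return s_bar, w_rot, w_rep
```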
deviation due to interpolation. an interpolation equation for the increasing torque is determined via the least-squares method using the calibration results $\bar s'_i$ ($i = 1$ to $n$). the interpolation equation relates the measurement value $s'$ to the torque $t$ as follows:
$$s' = a_0 + a_1 t + a_2 t^2 + \dots + a_m t^m \qquad (6a)$$
whether or not the interpolation equation includes a constant term depends on the manual of the calibration laboratory or on the specifications from the client; the constant term was included in this experiment. if required, the interpolation equation for the decreasing torque may be determined as:
$$s'' = a'_0 + a'_1 t + a'_2 t^2 + \dots + a'_m t^m \qquad (6b)$$
the interpolation equation for the decreasing torque, eq. (6b), must be determined such that its value at the maximum torque $t_{\max}$ coincides with the value calculated by the interpolation equation for the increasing torque, eq. (6a). the authors also calculated the interpolation equation for the decreasing torque in this experiment; however, the rtd was not subsequently used for the calibration of the tdt at decreasing torque steps.
the relative deviation due to the interpolation, $f_{a,i}$, is calculated according to eq. (7) as the difference between the calculated value $s'_{a,i}$ obtained from the interpolation equation and the calibration result $\bar s'_i$:
$$f_{a,i} = \frac{s'_{a,i}-\bar s'_i}{s'_{a,i}} \qquad (7)$$
the relative standard uncertainty associated with the interpolation, $w_{\mathrm{int\_rtd},i}$, is calculated according to eq. (8), by taking $f_{a,i}$ as the half-width of a rectangular distribution:
$$w^2_{\mathrm{int\_rtd},i} = \frac{1}{3}\,f^2_{a,i} \qquad (8)$$
the degree (1st, 2nd or 3rd) of the interpolation equation is recommended to meet the following condition: $n < 5$, 1st degree; $n < 8$, 2nd degree; $n \geq 8$, 3rd degree. a 3rd-degree interpolation was therefore used in this experiment. if the rtd directly indicates values in units of torque and the measurement values cannot be electronically fitted to the interpolation curve of the calibration result, then the relative standard uncertainty of the deviation due to indication, $w_{\mathrm{ind\_rtd}}$, should be used instead of $w_{\mathrm{int\_rtd}}$; a detailed explanation is not presented herein.
zero error. the relative zero error $f_{0,e}$ is calculated according to eq. (9) as the absolute value of the deviation between the indicated values at the non-loading condition before starting and after finishing the cycle; values obtained in calibration cycles with only increasing torque are not included in the calculation of $f_{0,e}$:
$$f_{0,e} = \left|\frac{s''_{01e}-s'_{01e}}{s'_{n1e}}\right| \qquad (9)$$
where $s'_{01e}$ and $s''_{01e}$ are the zero values indicated before starting and after finishing the first cycle of series $e$, and $s'_{n1e}$ is the measurement value at the maximum torque in series $e$. the relative standard uncertainty due to zero drift, $w_{\mathrm{zer\_rtd}}$, is calculated according to eq. (10), by taking the maximum value of $f_{0,e}$ as the half-width of a rectangular distribution:
$$w^2_{\mathrm{zer\_rtd}} = \frac{1}{3}\,f^2_{0,\max} \qquad (10)$$
hysteresis. the relative reversibility error (hysteresis), $h_i$, is calculated according to eq. (11) as the mean of the absolute differences between the measurement values at increasing and decreasing torque steps in the same cycle:
$$h_i = \frac{1}{n_{\mathrm{rot}}}\sum_{e=1}^{n_{\mathrm{rot}}}\frac{\left|s''_{i1e}-s'_{i1e}\right|}{\bar s'_i} \qquad (11)$$
where $s''_{i1e}$ is the measurement value of the decreasing torque at step $i$ in the first cycle of series $e$. the values measured in cycles including only increasing torque are not included in the calculation of $h_i$.
figure 8. loading timetable for the testing verification of hand torque screwdrivers (htds).
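before continuing with the reversibility term, the interpolation, zero-error and hysteresis terms of eqs. (6)-(11) can be sketched in the same style; numpy's polynomial fit stands in for the least-squares step, and all names are illustrative assumptions:

```python
import numpy as np

def interpolation_terms(t, s_bar, degree=3):
    """eqs. (6a), (7), (8): least-squares fit of s'(t), constant term kept."""
    coeffs = np.polyfit(t, s_bar, degree)
    s_fit = np.polyval(coeffs, t)
    f_a = (s_fit - s_bar) / s_fit          # eq. (7)
    w_int = np.abs(f_a) / np.sqrt(3.0)     # eq. (8), rectangular half-width
    return coeffs, w_int

def zero_error_term(s0_before, s0_after, s_max):
    """eqs. (9), (10): per-series relative zero error, worst case kept."""
    f0 = np.abs(np.asarray(s0_after) - np.asarray(s0_before)) / np.asarray(s_max)
    return f0.max() / np.sqrt(3.0)         # w_zer

def hysteresis_term(s_up_first, s_down_first, s_bar):
    """eqs. (11), (12): inputs indexed [e, i], first cycles only."""
    h = (np.abs(s_down_first - s_up_first) / s_bar).mean(axis=0)
    return h / np.sqrt(3.0)                # w_rev
```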
the relative standard uncertainty due to reversibility, $w_{\mathrm{rev\_rtd},i}$, is calculated according to eq. (12), by taking $h_i$ as the half-width of a rectangular distribution:
$$w^2_{\mathrm{rev\_rtd},i} = \frac{1}{3}\,h_i^2 \qquad (12)$$
the relative reversibility error, however, was not considered in this experiment, as mentioned later.
resolution. the resolution $r$ of the indicator is defined as the smallest fraction of a scale division that is readable, in the case of an analogue scale. in the case of a digital scale, $r$ is defined as one increment of the last active digit of the numerical indicator, provided that the indication does not fluctuate when the rtd is under the non-loading condition. if the indication fluctuates under the non-loading condition by more than one increment, the resolution is equal to the half-width of the fluctuation (for example, the fluctuation is two counts when the last active digit fluctuates from 0 to 1, and three counts when it fluctuates from 0 to 2). $r$ must be converted to, and stated in, units of torque, as done in section 2.1. the relative standard uncertainty due to resolution, $w_{\mathrm{res\_rtd},i}$, is calculated according to eqs. (13), by taking $r$ as the half-width or the whole width of a rectangular distribution, depending on the fluctuation:
$$w^2_{\mathrm{res\_rtd},i} = 2\left(\frac{r}{2\sqrt{3}\,t_i}\right)^2 \quad \text{(no fluctuation)} \qquad (13a)$$
$$w^2_{\mathrm{res\_rtd},i} = 2\left(\frac{r}{\sqrt{3}\,t_i}\right)^2 \quad \text{(fluctuation)} \qquad (13b)$$
where $t_i$ is the torque at each torque step. the factor 2 in eqs. (13) accounts for the fact that the measurement value is obtained as the difference between the indicated values at the loaded torque step and at the non-loading zero step before starting the cycle (double reading).
uncertainty of the calibration result. the relative expanded uncertainty $w_{\mathrm{rtd\_cal}}$ of the calibration of the rtd is calculated from the relative combined standard uncertainty $w_{\mathrm{tsm}}$ of the torque realised by the tsm and the relative combined standard uncertainty $w_{\mathrm{c\_rtd}}$ ascribable to the measurement of the rtd, according to:
$$w_{\mathrm{rtd\_cal},i} = k\,w_{\mathrm{c\_rtd\_cal},i} = k\sqrt{w^2_{\mathrm{tsm}}+w^2_{\mathrm{c\_rtd},i}} \qquad (14)$$
where the coverage factor $k$ is equal to two and the interval was estimated to have a level of confidence of approximately 95 %; the authors also evaluated the effective degrees of freedom and confirmed a sufficiently large number. $w_{\mathrm{tsm}}$ has been evaluated as 3.3 × 10⁻⁵ [7]. $w_{\mathrm{c\_rtd}}$ is calculated as:
$$w_{\mathrm{c\_rtd},i} = \sqrt{w^2_{\mathrm{rot\_rtd},i}+w^2_{\mathrm{rep\_rtd},i}+w^2_{\mathrm{int\_rtd},i}+w^2_{\mathrm{zer\_rtd},i}+w^2_{\mathrm{rev\_rtd},i}+w^2_{\mathrm{res\_rtd},i}} \qquad (15a)$$
in the case of the evaluation of increasing torque only, $w_{\mathrm{c\_rtd}}$ is calculated from:
$$w_{\mathrm{c\_rtd},i} = \sqrt{w^2_{\mathrm{rot\_rtd},i}+w^2_{\mathrm{rep\_rtd},i}+w^2_{\mathrm{int\_rtd},i}+w^2_{\mathrm{zer\_rtd},i}+w^2_{\mathrm{res\_rtd},i}} \qquad (15b)$$
eq. (15b) was used for the calculation in this experiment. the results of the uncertainty evaluation are shown in figure 9. the maximum relative expanded uncertainty for each calibration range was as follows: 10 cn·m - 100 cn·m: cw 0.024 %, ccw 0.031 %; 50 cn·m - 600 cn·m: cw 0.021 %, ccw 0.020 %. sufficiently small uncertainties could be obtained because the rtd was directly calibrated using the tsm; the relative uncertainty must be larger than the 0.02 % level when the rtd is calibrated using the tcm of a calibration laboratory, as shown in figure 1. the interpolation equations obtained in experiment 1 were used for the subsequent calibration of the tdt described in section 2.2.
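a minimal sketch of eqs. (13)-(15), combining the components into the relative expanded calibration uncertainty with k = 2; the signatures are assumptions made for illustration:

```python
import numpy as np

def w_resolution(r, t, fluctuating=False):
    """eq. (13): relative resolution term, doubled for the two readings."""
    half_width = r if fluctuating else r / 2.0
    return np.sqrt(2.0) * half_width / (np.sqrt(3.0) * np.asarray(t, float))

def expanded_uncertainty_rtd(w_tsm, w_rot, w_rep, w_int, w_zer, w_res,
                             w_rev=None, k=2.0):
    """eqs. (14), (15a)/(15b); w_rev is omitted for increasing-only cycles."""
    w_c_sq = w_rot**2 + w_rep**2 + w_int**2 + w_zer**2 + w_res**2
    if w_rev is not None:
        w_c_sq = w_c_sq + w_rev**2          # eq. (15a)
    return k * np.sqrt(w_tsm**2 + w_c_sq)   # eq. (14)
```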
3.2. experiment 2
the uncertainty was evaluated according to jcg209s21 [13], another guideline for the uncertainty evaluation of calibrations, covering torque wrench testers and torque testing machines, issued by iajapan/nite; this guideline was also prepared mainly by the authors in cooperation with japanese industry. the evaluation method (for increasing torque only) is explained as follows.
uncertainty of the calibration result. the relative expanded uncertainty of the calibration is expressed by the following equation:
$$w_{\mathrm{cal\_tdt},i} = k\sqrt{w^2_{\mathrm{rot\_tdt},i}+w^2_{\mathrm{rep\_tdt},i}+w^2_{\mathrm{int\_tdt},i}+w^2_{\mathrm{zer\_tdt},i}+w^2_{\mathrm{res\_tdt},i}+w^2_{\mathrm{rtd}}} \qquad (16)$$
the calibration results $\bar s'$, the uncertainty contribution for the reproducibility with a change in the mounting position, $w_{\mathrm{rot\_tdt}}$, that for the repeatability without a change in the mounting position, $w_{\mathrm{rep\_tdt}}$, that due to the interpolation, $w_{\mathrm{int\_tdt}}$, that due to the resolution, $w_{\mathrm{res\_tdt}}$, and the coverage factor $k$ are evaluated in the same way as described in section 3.1.
zero error. the loading timetable comprised only increasing torque in this calibration, but the relative zero error $f_{0,e}$ and the relative standard uncertainty $w_{\mathrm{zer\_tdt}}$ are calculated according to eqs. (9) and (10) as well. in this case, the uncertainty tends to be larger than the zero error of a cycle with increasing and decreasing torque, because in the increasing-only cycle the unloading from the maximum torque occurs suddenly; this point remains open for future discussion. fortunately, the tdt showed null values of the zero error in all cycles in this experiment, because of the low resolution of the tdt.
uncertainty of the rtd as a reference standard. the relative combined uncertainty $w_{\mathrm{rtd}}$ when the rtd is used as a reference standard is calculated from the relative combined standard uncertainty $w_{\mathrm{c\_rtd\_cal}}$ given by eq. (14), the relative standard uncertainty due to temperature variation, $w_{\mathrm{rtd\_tmp}}$, and the relative standard uncertainty of the long-term stability of the rtd, $w_{\mathrm{rtd\_lgstb}}$, according to:
$$w_{\mathrm{rtd}} = \sqrt{w^2_{\mathrm{c\_rtd\_cal}}+w^2_{\mathrm{rtd\_tmp}}+w^2_{\mathrm{rtd\_lgstb}}} \qquad (17)$$
$w_{\mathrm{rtd\_tmp}}$ is calculated by:
$$w^2_{\mathrm{rtd\_tmp}} = \left(\frac{\beta\,\Delta t_{\mathrm{meas}}}{2\sqrt{3}}\right)^2 \qquad (18)$$
where $\beta$ is the temperature coefficient of the output sensitivity of the rtd and $\Delta t_{\mathrm{meas}}$ is the temperature variation; a temperature correction of the output value of the rtd can be applied if necessary. this term was small in this experiment, because the temperature coefficient is −1.4 × 10⁻⁴ k⁻¹ and the temperature variation was 1.5 k at the maximum; $w_{\mathrm{rtd\_tmp}}$ was 6.1 × 10⁻⁵. $w_{\mathrm{rtd\_lgstb}}$ is calculated by:
$$w_{\mathrm{rtd\_lgstb}} = \frac{1}{\bar s_{\mathrm{c\_mean}}}\sqrt{\frac{1}{n_{\mathrm{cal}}(n_{\mathrm{cal}}-1)}\sum_{c=1}^{n_{\mathrm{cal}}}\left(\bar s_c-\bar s_{\mathrm{c\_mean}}\right)^2} \qquad (19)$$
provided the rtd has been calibrated at least three times ($n_{\mathrm{cal}} \geq 3$) within the pre-determined calibration period (generally 26 months). here, $\bar s_c$ is the calibration result of each calibration and $\bar s_{\mathrm{c\_mean}}$ is the mean of the calibration results over all calibrations. it is acceptable to determine $w_{\mathrm{rtd\_lgstb}}$ empirically if the number of calibrations is less than three; the authors used an empirical value of 0.02 % for $w_{\mathrm{rtd\_lgstb}}$.
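eqs. (17)-(19) can be sketched as follows; the 0.02 % fallback mirrors the empirical choice reported above, while the argument names are ours:

```python
import numpy as np

def w_rtd_as_reference(w_c_rtd_cal, beta, dt_meas,
                       s_history=None, w_lgstb_default=2e-4):
    """
    eq. (17): relative combined uncertainty of the rtd used as a reference.
    beta: temperature coefficient of sensitivity (1/k); dt_meas: maximum
    temperature variation (k); s_history: results of n_cal >= 3 successive
    calibrations for eq. (19), otherwise the empirical 0.02 % value is used.
    """
    w_tmp = abs(beta) * dt_meas / (2.0 * np.sqrt(3.0))            # eq. (18)
    if s_history is not None and len(s_history) >= 3:
        s = np.asarray(s_history, float)
        w_lgstb = s.std(ddof=1) / (np.sqrt(s.size) * s.mean())    # eq. (19)
    else:
        w_lgstb = w_lgstb_default
    return np.sqrt(w_c_rtd_cal**2 + w_tmp**2 + w_lgstb**2)        # eq. (17)

# with the values reported above: abs(-1.4e-4) * 1.5 / (2*sqrt(3)) ≈ 6.1e-5
```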
the result of the uncertainty evaluation of the calibration for the tdt is shown in figure 10. the maximum relative expanded uncertainties for the various lower limits of the calibration were as follows: 10 cn·m - 600 cn·m: cw 6.6 %, ccw 2.9 %; 20 cn·m - 600 cn·m: cw 2.6 %, ccw 2.0 %; 50 cn·m - 600 cn·m: cw 1.1 %, ccw 0.62 %; 100 cn·m - 600 cn·m: cw 0.75 %, ccw 0.24 %.
in this evaluation, the uncertainty contribution of the interpolation ($w_{\mathrm{int\_tdt}}$) was taken into consideration instead of that of the indication error ($w_{\mathrm{ind\_tdt}}$); i.e., the authors converted the indicated values into reference values by using the interpolation equations when the tdt was used for the testing of the htds described in section 2.3, although the tdt has a digital indication display (direct expression in torque units).
figure 9. uncertainty evaluation results of the calibration for the rtd (relative expanded uncertainty, k = 2, vs torque, for the larger and smaller ranges, cw and ccw).
figure 10. uncertainty evaluation results of the calibration for the tdt (relative expanded uncertainty, k = 2, vs torque, cw and ccw, with the maximum permissible error).
it was found that the uncertainty exceeded 1 % when a lower limit of 10, 20 or 50 cn·m was included, even if the interpolation equations were used. both the users and the manufacturers of tdts should pay attention to the choice of the lower limit and consider the uncertainty. in addition, the authors insist that the normative statement in iso 6789 that "the maximum permissible uncertainty of the torque tool calibration devices shall be less than 1 %" should be clarified; i.e., the method of uncertainty evaluation for the tdt calibration should be defined and harmonised.
3.3. experiment 3
figure 11 shows the test results for the two different htds. in all five successive measurements at each step, the relative deviations from the reference standard (tdt) were within the 6 % maximum permissible deviation prescribed in iso 6789, for both the ftd-400cn and the rtd-500cn. these htds were recognised as conforming to iso 6789, because the maximum uncertainty of the tdt was within 1 % in the tested ranges from 80 cn·m to 500 cn·m. however, if the tdt were used including the lower range from 10 cn·m to 50 cn·m, as described in section 1, the test results would have to be regarded as out of conformity with iso 6789.
the uncertainty was not evaluated for these test results: after discussion with manufacturers and users of htds in japan, it was found that an uncertainty of measurement for the results of testing htds is not yet required. on the other hand, the work on calculating this uncertainty has started [4]; the authors will continue to discuss the problem with the related organisations.
4. conclusions
the authors experimentally realised a complete calibration chain from the national torque standard to the hand torque screwdriver (htd), in which appropriate calibration methods and uncertainty evaluations were applied. when torque screwdriver testers (tdts) are used as reference standards for the testing of htds, testing organisations should pay attention to the fact that the calibration uncertainties of the tdts must be within the maximum permissible uncertainty prescribed in iso 6789 over the entire testing range of the htds. here, as shown in figure 1, it would be acceptable even if the first-grade accredited laboratories calibrated rtds with an uncertainty of approximately 0.2 % for calibrators or users of tdts, instead of the values presented in section 3.1 (approx. 0.02 % - 0.03 %). the authors therefore maintain that it is entirely possible for the japanese industry to establish the hierarchy proposed in figure 1.
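the decision logic discussed in the conclusions can be summarised in a short, deliberately naive sketch (the function name and the way the two criteria are combined are our assumptions; as noted above, iso 6789 itself does not define the combination):

```python
def iso6789_conformity(deviations_percent, tdt_uncertainty_percent,
                       mpe_percent=6.0):
    """
    naive conformity check: every relative deviation of the tool from the
    tester must stay within the maximum permissible deviation, and the
    tester's calibration uncertainty must not exceed 1 % over the range.
    """
    if tdt_uncertainty_percent > 1.0:
        return False   # reference device not adequate for the test range
    return all(abs(d) <= mpe_percent for d in deviations_percent)

# illustrative numbers only, not the measured ones:
print(iso6789_conformity([1.2, -0.8, 2.5], tdt_uncertainty_percent=0.75))
```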
references
[1] p. d. hohmann, "advantages of traceability by using torque transfer standard tts," proc. of the xiii imeko world congress, sept. 1994, torino, italy, pp. 253-256.
[2] a. brüge, d. röske, d. mauersberger, k. adolf, "influence of cross forces and bending moments on reference torque sensors for torque wrench calibration," proc. of the xix imeko world congress, sept. 2009, lisbon, portugal, pp. 356-361.
[3] k. ogushi, a. nishino, k. maeda, k. ueda, "calibration of a torque wrench tester using a reference torque wrench," sice annual conference 2011, sept. 2011, tokyo, japan, pp. 411-416.
[4] d. röske, "iso 6789:2003 calibration results of hand torque tools with measurement uncertainty - some proposals," proc. of the xx imeko world congress, sept. 2012, busan, republic of korea, usb flash drive.
[5] k. ogushi, a. nishino, k. maeda, k. ueda, "advantages of the calibration chain for hand torque screwdrivers traceable to the national torque standard," sice annual conference 2012, sept. 2012, akita, japan, usb flash drive.
[6] jis b 4652 (iso 6789: 2003), "hand torque tools - requirements and test methods," japanese standards association, 2008 (in japanese).
[7] a. nishino, k. ogushi, k. ueda, "uncertainty evaluation of a 10 n·m dead weight torque standard machine and comparison with a 1 kn·m dead weight torque standard machine," measurement 49 (2014), pp. 77-90.
[8] jmif015, "guideline for calibration laboratories of torque measuring devices," japan measuring instruments federation, 2004 (in japanese).
[9] jmif019, "guideline for calibration laboratories of torque testing machines and/or torque wrench testers," japan measuring instruments federation, 2007 (in japanese).
[10] jcg209s11-05, "jcss guideline on uncertainty estimation - torque meters and reference torque wrenches," international accreditation japan, national institute of testing and evaluation, 2012 (in japanese).
[11] k. ohgushi, t. ota, k. ueda, "experimental study on the uncertainty in static calibration of a torque measuring device - 1st report: verification of repeatability, reproducibility and zero error," trans. of the sice 39-7 (2003), pp. 624-630 (in japanese).
[12] k. ohgushi, t. ota, k. ueda, "experimental study on the uncertainty in static calibration of a torque measuring device - 2nd report: verification of interpolation error, reversibility and resolution," trans. of the sice 39-7 (2003), pp. 631-636 (in japanese).
[13] jcg209s21-02, "jcss guideline on uncertainty estimation - torque wrench testers and torque testing machines," international accreditation japan, national institute of testing and evaluation, 2012 (in japanese).
figure 11. testing results for htds: relative deviations of the testing values from the reference values vs torque, for the ftd-400cn (cw and ccw) and the rtd-500cn (cw), with the maximum permissible error limits.

acta imeko, issn: 2221-870x, november 2016, volume 5, number 3, 3-8
the probability in throwing dices and in measurement
franco pavese, torino, italy
section: research paper
keywords: probability; dices throwing; measurement; uncertainty; aleatory; epistemological; ontological; definitional
citation: franco pavese, the probability in throwing dices and in measurement, acta imeko, vol. 5, no. 3, article 2, november 2016, identifier: imeko-acta-05 (2016)-03-02
section editor: paolo carbone, university of perugia, italy
received april 6, 2016; in final form april 6, 2016; published november 2016
copyright: © 2016 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: franco pavese, e-mail: frpavese@gmail.com
abstract
the paper summarises the main differences between the process of throwing dices and the measurement process, and draws some of the consequences for the meaning and use of the probability concept in the two cases.
1. introduction
"it is unanimously agreed that statistics depends somehow on probability. but, as to what probability is and how it is connected with statistics, there has seldom been such complete disagreement and breakdown of communication since the tower of babel. doubtless, much of the disagreement is merely terminological and would disappear under sufficiently sharp analysis." [1]
this paper, after a short introduction concerning the several meanings of the terms 'uncertainty' and 'probability', limits the illustration to a couple of interpretations of probability, discussing whether the application of this concept to dice throwing has exactly the same meaning as its application to measurement results, or whether there are differences. the concept of probability was historically born from speculation about prediction in gambling problems, such as the occurrence of a specific face of a fair dice in subsequent throws, or of a fair coin in subsequent tossing. in experimental science, when scientists realised that it is impossible to get ideally perfect and full information from measurement [2], the use of the concept of probability proposed itself as the most natural way to circumvent the difficulty and to model chance. the paper illustrates why and how, in general, the conditions of its application are basically different, by reviewing a range of positions; some consequences are drawn.
2. uncertainty interpretations
the term 'uncertainty' has several meanings. [footnote 1: often the terms "uncertainty" and "error" are used interchangeably, or one of the two is rejected (the latter in [14]): "we find it convenient to distinguish them thus: 'error' is the actual difference between a measurement and the value of the quantity it is intended to measure, and is generally unknown at the time of measurement. 'uncertainty' is a scientist's assessment of the probable magnitude of that error".] one can find a discussion of this issue from an epistemological point of view in [6]: "uncertainty is pervasive in most of the fields of science and technology (as it is also in real life and ordinary thinking), and in all cases it seems that what really matters and worries is the quantification of the 'amount' of uncertainty in some sense attributable to the considered statements, or events, be them possibly submitted to repetition (random events) or not (singular events). all that is done, in general, under a very unclear concept of what the word uncertainty could actually mean, but it is almost always done by aiming at the measurement of the variables affected, in very different contexts, by some kind of uncertainty."
according to that approach, "the linguistic term uncertainty is not only imprecise, but has a very broad applicative spectrum in both ordinary life and science. to scientifically approach what it could mean, it seems necessary to previously capture how their, also imprecise, mother-predicate u = uncertain, the opposite of certain, is used in different contexts and with different purposes, and where [it] can consequently be represented by a fuzzy set".
limiting the view to the scientific field, this actually led not only to the concept of 'probability', but also to the many other treatments of chance, often called "imprecise probability": first of all to possibility and the fuzzy type of reasoning [7], [8], but also to others, like previsions, lower and upper probabilities, or interval probabilities, belief functions, possibility and necessity measures, lower and upper previsions, comparative probability orderings, partial preference orderings, sets of desirable gambles, p-boxes, robust bayes methods (see references, e.g., in [9]). in particular, uncertainty measures are not necessarily additive [6], [10]-[12].
in measurement, an attribute is added to 'uncertainty' to specify the intended meaning: "measurement uncertainty: non-negative parameter characterizing the dispersion of the quantity values being attributed to a measurand, based on the information used" (term 2.26 in [13]). it is useful to recall here also its note 1, useful in the subsequent discussion: "measurement uncertainty includes components arising from systematic effects, such as components associated with corrections and the assigned quantity values of measurement standards, as well as the definitional uncertainty. sometimes estimated systematic effects are not corrected for but, instead, associated measurement uncertainty components are incorporated" (emphases added).
3. probability interpretations
probability is one of the concepts born to accommodate the concept of uncertainty, intrinsic in chance, as opposed to the term 'certainty' [2]. there are two broad categories of probability interpretations, which can be called "physical" and "evidential" probabilities (the following synthesis is taken from [3]; for a full discussion see [4], [5]).
physical probabilities, also called objective or frequency probabilities, are associated with random physical systems such as roulette wheels, rolling dice and radioactive atoms. in such systems, a given type of event (such as the dice yielding a six) tends to occur at a persistent rate, or "relative frequency", in a long run of trials. physical probabilities either explain, or are invoked to explain, these stable frequencies. thus, talking about physical probability makes sense only when dealing with well-defined random experiments. the two main kinds of theory of physical probability are frequentist accounts (such as those of venn, reichenbach and von mises) and propensity accounts (such as those of popper, miller, giere and fetzer).
evidential probability, also called bayesian probability (or subjectivist probability), can be assigned to any statement whatsoever, even when no random process is involved, as a way to represent its subjective plausibility, or the degree to which the statement is supported by the available evidence. on most accounts, evidential probabilities are considered to be degrees of belief, defined in terms of dispositions to gamble at certain odds. the four main evidential interpretations are the classical (e.g. laplace's) interpretation, the subjective interpretation (de finetti and savage), the epistemic or inductive interpretation (e.g. ramsey, cox) and the logical interpretation (e.g. keynes, carnap).
some interpretations of probability are associated with approaches to statistical inference, including theories of estimation and hypothesis testing. for example, the physical interpretation is adopted by followers of "frequentist" statistical methods, such as r. a. fisher, j. neyman and e. pearson. statisticians of the opposing bayesian school typically accept the existence and importance of physical probabilities, but also consider the calculation of evidential probabilities to be both valid and necessary in statistics. this article, however, focuses on the interpretations of probability rather than on theories of statistical inference.
the terminology of this topic is rather confusing, in part because probabilities are studied within a variety of academic fields. the word "frequentist" is especially tricky: to philosophers it refers to a particular theory of physical probability, one that has more or less been abandoned; to scientists, on the other hand, "frequentist probability" is just another name for physical (or objective) probability. for those who promote the bayesian-inference view, "frequentist statistics" is an approach to statistical inference that recognises only physical probabilities. also the word "objective", as applied to probability, sometimes means exactly what "physical" means here, but it is also used of evidential probabilities that are fixed by rational constraints, such as logical and epistemic probabilities.
4. throwing dices or tossing a coin
the approach needed in this case is a purely mathematical one, since the dice (or coin) is assumed to be 'fair' (perfect) and no interaction effect is assumed to occur from the way the throw is performed or with the environment of the dice and of its impact with a surface: nothing about the physics of the process. there are no influence factors affecting the outcome of a toss, which is not even dependent on time. thus, the throws are assumed to be a strictly repeatable ideal process for an indefinitely long time, whence the certain, equal probability of getting each face. in addition, the faces are mutually exclusive, and each throw is independent of any past or future throw. this type is also called 'discrete probability'. the meaning of uncertainty in this framework is that the prediction of the result of the subsequent throws is uncertain because the throwing process is strictly stochastic. no definitional uncertainty exists, for the ideal case. speculations also exist about the effects of deviations, in the long run, from the strictly ideal assumption, but they enter the arena of experimental science. besides dices and coins, many other frames have arisen that can be assimilated to the previous ones (cards, …).
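the purely aleatory character of the ideal dice can be made concrete with a few lines of simulation; this is only an illustration of the stable relative frequencies mentioned above, not part of the cited analyses:

```python
import random

def face_frequencies(n_throws, seed=0):
    """relative frequencies of the six faces of a fair dice after n throws."""
    rng = random.Random(seed)
    counts = [0] * 6
    for _ in range(n_throws):
        counts[rng.randrange(6)] += 1
    return [c / n_throws for c in counts]

# the frequencies approach 1/6 as n grows; run-to-run scatter is the only
# 'uncertainty' present: no drift, no systematic effect, no definitional issue.
for n in (60, 6_000, 600_000):
    print(n, [round(f, 4) for f in face_frequencies(n)])
```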
as to a., the vim3 definition of “repeatability condition” is term 2.20 [13]: “condition of measurement, out of a set of conditions that includes the same measurement procedure, same operators, same measuring system, same operating       acta imeko | www.imeko.org  november 2016 | volume 5 | number 3 | 5  conditions and same location, and replicate measurements on the same or similar objects over a short period of time” (emphasis added). in fact, it is basically a tautology, because the expression “short period of time” means ‘so short that the conditions for repeatability hold’. some kind of independent verification of the trueness of the condition is deemed necessary—when possible—concerning two features: (i) the stability in time of the measurand (same or similar objects); (ii) the repeatability of the operating conditions (all the remaining conditions).2 on the contrary, these conditions are both assumed as true for fair dices or coins. as to b., systematic effects induce uncertainty components that are absent, by definition, in the case of fair dices or coins. they are also called ‘bias’,3 indicating the deviations of the measured values from a set of reference values forming what can be called a ‘reference condition’. for example, for a single additive bias bi affecting a quantity xi, is xi = xi + bi (the symbol of ‘standard state’  is borrowed here, for analogy, from physical chemistry to indicate the reference condition for which e(bi) =: 0), and xi + bi = xi – ci , where ci is the so-called ‘correction’.4, [15]. a different category of systematic effects is what is causing the often-called epistemic uncertainty, [16] i.e., the one due to insufficient knowledge of known effects, which reflects onto an imperfect model of the experimental conditions. actually, this category is not exhaustive, and should better be spilt into two distinct categories of uncertainty: – epistemic (imperfect knowledge, namely in science and technique), [17]-[19]; – ontological (ignorance about (some parts of) the phenomenon under study), [20]. the former occurs when an influence quantity is misevaluated or mis-modelled. the latter comprises, e.g., the case where an influence quantity is omitted from the model because the existence of its effect was missed. in both cases, it results into an imperfect modelling. still a distinct category of uncertainty comes from the “definitional uncertainty” [13], [14], which should not be confused with any of the previous ones. it concerns the nonuniqueness of (known) definitions of the measurand: different definitions. it cannot be considered epistemic when it concerns known issues, but it is up to the judgment of the experimenter to take some of the cases into account in the model, or not: this is a model non-uniqueness, not an imperfection of it. ontological, epistemic and definitional uncertainties are not of stochastic nature. figure 1 summarises the different components of uncertainty in experimental science (see, e.g., [20]). the possible time dependence of the process under investigation is another feature of the experimental frame that is useful to consider separately. it can be due to fluctuations in time series, or it may be due to non-repeatability of compounded data series taken subsequent in time. 2 repeatability can be associated to the term “type a uncertainty” of gum, 14 not using the “error approach“. 13 3 “measurement bias: estimate of a systematic measurement error“ (term 2.18 in 13). 
4 notice that, in general, e(ci) is what is intended for ‘correction’. in the first case, the terms used to indicate a dependence on time (drifting, dynamic, … systems), or invariance from time (static, stationary, …) may be different in different scientific and technical frames, or be semantically different. about the risk of consequent confusion, the reader is directed to the review performed in [34]. the time scale itself can be a reason for different random effects showing up: an example can be seen in the two-sample variance (allen variance) studies of frequency standards. however, one cannot construe from it that always the fluctuations with time are of random nature: a mixed effect can build up, like in the very common case of a ‘drifting’ characteristics of an instruments from its initial calibrated state. 6. differences between the two frames and some  consequences  “aleatory uncertainty represents an absolute limit. to use the coin toss example, having thrown the coin a thousand times we would be able to express with confidence the probability of a heads occurring, but that is all we can say about the next coin toss” (emphasis added) [20]. probability cannot have the same meaning in experimental science, outside the aleatory components of uncertainty. 5 a useful classification of uncertainty components can be found in [21]: (i) uncertainty in influence quantities, (ii) uncertainty in model, (iii) uncertainty in model parameters. components (i) can be either epistemic if referred to influence quantities properties, or random if referred to their measured values, or both. the influence quantities are normally subdivided into two groups: the “basic” ones (called “input quantities” in [14]) and the “derived” ones (i.e., the ones 5 one can even argue if probability is the only concept that can be used to deal with chance. figure 1. different components of  uncertainty  in experimental science,  in  the four possible combinations of known and unknown information.        acta imeko | www.imeko.org  november 2016 | volume 5 | number 3 | 6  responsible for ‘bias’ [13] and needing ‘corrections’) [14]. the former are measured in indirect measurements (thus having at least one type a uncertainty component), while the latter can be measured, or their values are obtained or inferred from previous knowledge (thus having in the first case at least one type a uncertainty component, or only type b components in the second). 6 component (ii) can be epistemic if referred to model imperfection, or ontological if referred to missed quantities. it also includes the definitional uncertainty. component (iii) can be stochastic if the values are obtained from measurement, or epistemic if the values are computed or inferred. ontological uncertainty is generally not included in the budgets of experimental science,7 and definitional uncertainty is generally resolved by specifying the relevant type of definition—otherwise it becomes an ontological uncertainty component, excluded from treatment in the new gum [32]. epistemic uncertainty is often ‘randomised’, i.e. transformed into a stochastic component, by assuming ignorance about the position parameter, e.g. by assuming a null mean and estimating a range for the resulting uncertainty component. the latter can be set as an interval (e.g. 
the influence quantities are normally subdivided into two groups: the "basic" ones (called "input quantities" in [14]) and the "derived" ones, i.e. the ones responsible for 'bias' [13] and needing 'corrections' [14]. the former are measured in indirect measurements (thus having at least one type a uncertainty component), while the latter can be measured, or their values can be obtained or inferred from previous knowledge (thus having at least one type a uncertainty component in the first case, or only type b components in the second). [footnote 6: prior and posterior are not used in this paper necessarily in the bayesian sense.]
component (ii) can be epistemic, if referred to model imperfection, or ontological, if referred to missed quantities; it also includes the definitional uncertainty. component (iii) can be stochastic, if the values are obtained from measurement, or epistemic, if the values are computed or inferred.
ontological uncertainty is generally not included in the budgets of experimental science [footnote 7: several decades ago, this was done sparingly but systematically by the ussr school, as "undetected systematic errors".], and definitional uncertainty is generally resolved by specifying the relevant type of definition; otherwise it becomes an ontological uncertainty component, excluded from treatment in the new gum [32]. epistemic uncertainty is often 'randomised', i.e. transformed into a stochastic component, by assuming ignorance about the position parameter, e.g. by assuming a null mean and estimating a range for the resulting uncertainty component. the latter can be set as an interval (e.g. a "maximum permissible error" (mpe, term 4.26 in [13]) or a "worst case uncertainty" (wcu) [33]), or as a non-probabilistic interval, or as the standard deviation, or a multiple of it, set by the chosen confidence interval (or degree of belief) [14].
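the 'randomisation' just described reduces, in the simplest case, to one line: an epistemic bound ±a treated as a zero-mean rectangular distribution yields a standard uncertainty a/√3 (cf. [14]); the helper name is ours:

```python
import math

def standard_uncertainty_from_bound(a):
    """bound +/- a (e.g. an mpe or wcu) assumed rectangular with zero mean."""
    return a / math.sqrt(3.0)

# e.g. a bound of 0.5 unit contributes a standard uncertainty of ~0.289 unit
print(standard_uncertainty_from_bound(0.5))
```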
7. an example and final remarks
from the previous review it turns out that the cases of dices and similar are, in experimental science, an over-simplification of a much more complex structure of uncertainty, such that the former concerns only its intrinsic stochastic component. in the latter case, instead, the systematic effects are, in general, the major concern: though they may have a stochastic component, they mainly involve, after all, the need of an assessment that basically involves a subjective judgment requiring a decision.
a particularly useful discussion of this situation can be found in [22], where the seminal case (but a wider range of similar cases in measurement applies) of the treatment of uncertainty in the assessment of the values of the fundamental constants of physics is treated from the viewpoint of the psychology of measurement and decision theory, an interesting viewpoint basically 'external' to the metrology field. different reasons for 'bias' in the judgment are considered there, leading to what is called "overconfidence": "in several sets of analyzed measurements of physical constants, we have found consistent replication of a robust finding of laboratory studies of human judgment: reported uncertainties are too small. how could this apparent overconfidence arise? experimental studies of human judgment have shown that such biases can arise quite unintentionally from cognitive strategies employed in processing uncertain information". however, this attitude is not always unintentional: one case "concerns the procedures chosen to assess the uncertainty. the recommended practice in physics is to consider all possible sources of systematic uncertainty when reporting results. however, without specific guidelines regarding what to consider and explicit recognition of the subjective elements in uncertainty assessment, one cannot be sure how comprehensively individual scientists have examined the uncertainty surrounding their own experiments. conceivably, some of the apparent overconfidence reflects a deliberate decision to ignore the harder-to-assess sources of uncertainty" (emphasis added); "a second possible source of bias is that, unlike laboratory experiments on judgment, which can take great care to ensure that subjects are motivated to express their uncertainty candidly, real-world settings create other pressures". in the latter respect, precisely, "having a pre-existing recommended value may particularly encourage investigators to discard or adjust unexpected results, and so induce correlated errors in apparently independent experiments". this has been observed in several circumstances, according to these authors [22], namely for the speed of light in vacuum, c₀, the basis for the unit of length [footnote 8: notice that the history of the numerical values of c₀ stopped in 1983 with the 'stipulation', misinterpreted as a definitive value.], and for the inverse of the fine structure constant, α⁻¹, the basis for the value of the electron charge e, hence for the unit of electrical current.
the analysis in [22] is centred only on the uncertainties associated with the recommended values, while the aim of the actual codata task group [23] is also to adjust the values of the constants using the least-squares analysis (lsa) method, instead of using the mean, or another strictly statistical parameter, of the experimental values (based on probability) [24]. the accent on the subjective side of the uncertainty analysis (see also [25]) may seem to rule out the 'frequentist' methodology; however, this does not necessarily mean that the bayesian one can always be better used instead. in chapter 1 of [22], pages 30-31, the authors say: "bayes theorem is an uncontroversial part of probability theory. bayesian inference is more controversial, because it treats probabilities as subjective, thereby allowing inferences that combine diverse kinds of evidence. frequentistic probabilities requires evidence of a single kind (e.g. coin flips). subjective judgements are only probabilities if they pass coherence tests. thus, probabilities are not just any assertion of belief" (emphasis added).
actually, the codata position has oscillated over the years, starting from a position where its major asset was not the use of the specific analytical treatment (the lsa) [26], but the preliminary critical review of the data and their screening. later they shifted to a position where all the available data were used [27]; that has been the position since 2010. possibly, in addition to the criticism about the subjective nature of the screening, an abundance of data prompted the latter decision, since a few outlying data could no longer critically affect the resulting value and associated uncertainty.
in [22] a statistical analysis is performed on the reliability of the estimates of the constants' values, namely of 40 recommended values for the period 1928-1973. a "surprise index" is computed (the percentage falling outside the assessed 98 % confidence interval, i.e. outside 2.33 s) and found to be an impressive 57 %. in more recent times, the studies have dramatically lowered the experimental uncertainty, so the former statistics could have improved a lot.
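the surprise index lends itself to a direct sketch; the function below only restates the definition (the fraction of reported values whose later reference value falls outside the stated 98 % interval), with invented argument names and no real data:

```python
import numpy as np

def surprise_index(reported, u_reported, reference, z=2.33):
    """fraction of cases with |reported - reference| > z * stated uncertainty."""
    reported = np.asarray(reported, float)
    u = np.asarray(u_reported, float)
    reference = np.asarray(reference, float)
    return float(np.mean(np.abs(reported - reference) > z * u))

# for a well-calibrated 98 % interval this should be about 0.02;
# the 0.57 found in [22] signals strong overconfidence.
```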
appendix: falsification in experimental science  the hereinbefore-asserted subjectivism in experimental sciences, with the need of judgment and decision, entrains the need of “verifications” (from wittgenstein on) or of “falsification” (from popper on) criteria. the first method was later prevalently considered an impossible goal to reach in the lack of a general criterion about the sufficient number of verifications, and the difficulties arising from unavoidable epistemological limitations. the second method was born basically in an uncertainty-free context: however, we learned that in measurement a single occurrence of falsification, even if reasonably proved, cannot be considered sufficient in the frame of uncertain knowledge. not only repeated occurrences are needed, but “falsification is not possible without some threshold deviation which would be considered sufficiently unlikely to reject the theory” [28] — also [29] is interesting, though one not necessarily always may share author’s opinions. early popper (1936) said: “with the idol of certainty (including that of degrees of imperfect certainty or probability) there falls one of the defences of obscurantism which bar the way of scientific advance“, and “the relations between probability and experience are also still in need of clarification. in investigating this problem we shall discover what will at first seem an almost insuperable objection to my methodological views. for although probability statements play such a vitally important role in empirical science, they turn out to be in principle impervious to strict falsification” (emphases added), [30]. later he managed reconciling the probability concept by proposing first the theory of “propensity”, a further interpretation of probability, [31]. references [1] l.j. savage, the foundations of statistics. new york: john wiley & sons, inc. (1954). isbn 0-486-62349-1. [2] f. pavese and p. de bièvre, fostering diversity of thought in measurement science, in advanced mathematical and computational tools in metrology and testing x (f. pavese, w. bremser, a. chunovkina, n. fischer, a.b. forbes, eds.), series on advances in mathematics for applied sciences vol. 86, world scientific, singapore, 2015, pp. 1–8. [3] wikipedia, term “probability interpretations”, as consulted on february 24, 2015, and references therein. [4] hájek, alan, zalta, edward n., ed., interpretations of probability, the stanford encyclopedia of philosophy, online. [5] de elía, ramón; laprise, rené, diversity in interpretations of probability: implications for weather forecasting, monthly weather review 133 (2005) 1129–1143. [6] enric trillas, some uncertain reflections on uncertainty, archives for the philosophy and history of soft computing, international online journal, issue 1 (2013). [7] zadeh, l. a., fuzzy sets as a basis for a theory of possibility. fuzzy sets and systems 1 (1978) 3–28. doi:10.1016/01650114(78)90029-5. [8] dubois, didier; henri prade (1985). théorie des possibilité. masson, paris. [9] wikipedia, term “imprecise probability”, as consulted on june 30, 2015, and references therein. [10] g.l.s. shackle, a non-additive measure of uncertainty, the review of economic studies 17 (1949-1950) 70-74. [11] i. gilboa, d. schmeidler, additive representations of non additive measures and the choquet integral, annals of operations research 52 (1994) 43-65. [12] d. skulj, non-additive probability, 2002. 
https:// www.stat.aau.at/tagungen/ossiach/skulj.pdf [13] bipm (2012) international vocabulary of basic and general terms in metrology (vim), 3rd edn. bipm/iso, jgcm 200:2012. 2008 version with minor corrections. http://www.bipm.org/en/ publications/guides/vim.html [14] bipm, guide for the expression of uncertainty in measurement; jcgm 00:2008, iso geneva, at http://www.bipm.org/en/ publications/guides/gum.html [15] f. pavese, key comparisons: the chance for discrepant results and some consequences, acta imeko 17 (2015) n.4, 38–47. [16] wikipedia, term “uncertainty quantification”, as consulted on june 30, 2015, and references therein. [17] l.p. swiler, t.l. paez, r. l. mayes, epistemic uncertainty quantification tutorial, proceedings of the imac-xxvii, 2009 orlando, florida usa, 2009 society for experimental mechanics inc. [18] a. der kiureghian, o. ditlevsen, aleatory or epistemic? does it matter?, special workshop on risk acceptance and risk communication, march 26, 2007, stanford university, pp. 1–13 [19] t. o’hagan, dicing with the unknown, significance 2005, 132– 133. [20] m. squair, epistemic, ontological and aleatory risk, 2009, blog http://criticaluncertainties.com/2009/10/11/epistemic-andaleatory-risk/. [21] f. pavese: “mathematical and statistical tools in metrological measurement”, 2013, chapter in physical methods, instruments and measurements, [ed. unesco-eolss joint committee], in encyclopedia of life support systems (eolss), developed under the auspices of the unesco, eolss publishers, oxford, uk, http://www.eolss.net. [22] m. henrion, b. fischhoff, assessing uncertainty in physical constants, ch. 9 in “judgement and decision making“ (b. fischhoff, ed.) routhledge (2013) pp. 172-187, 113649734x, 9781136497346, taken from american journal of physics 54, 791 (1986); doi: 10.1119/1.14447. [23] codata http://physics.nist.gov/cuu/constants/index.html , http://www.bipm.org/extra/codata/. [24] f. pavese, some problems concerning the use of the codata fundamental constants in the definition of measurement units, metrologia 51 (2014) l1–l4, with online supplementary information. [25] f. pavese, subjectively vs. objectively-based uncertainty evaluation in metrology and testing, imeko-tc1-tc7-2008030, http://www.imeko.org/index.php/proceedings# under tc7 (2008); f. pavese, on the degree of objectivity of uncertainty evaluation in metrology and testing” measurement 2009 42 1297–1303.       acta imeko | www.imeko.org  november 2016 | volume 5 | number 3 | 8  [26] b.n. taylor, nbsir 81-2426, 1982, national bureau of standards (today nist), gaithersburg usa. [27] p.j. mohr, b.n. taylor and d.b. newell, codata recommended values of the fundamental physical constants: 2010, rev. modern phys. 84 (2012) 1–94. [28] j.r. grozier, can theories be falsified by experiment?, msc essay (ucl, london, 2012). [29] j.r. grozier, falsificationism, science and uncertainty, msc dissertation, london centre for the history of sciences, technology and medicine, 2013, pp. 1–29. [30] k. popper, the logic of scientific discovery (taylor & francis e-library ed.). london and new york: routledge / taylor & francis e-library (2005). [31] k. popper, the propensity interpretation of the calculus of probability and of the quantum theory. in observation and interpretation, korner & price (eds.), buttersworth scientific publications, 1957, pp. 65–70. [32] jgcm, guide for the expression of uncertainty in measurement, draft of 14 december 2014 delivered for comments to iso tc69. 
[33] r. j. da silva, j. r. santos, m. f. camões, worst case uncertainty estimates for routine instrumental analysis, analyst 127 (2002), pp. 957-963; l. fabbiano, n. giaquinto, m. savino, g. vacca, observations on the worst case uncertainty, journal of physics: conference series 459 (2013) 012038. doi: 10.1088/1742-6596/459/1/012038
[34] k. h. ruhm, are fluctuating measurement results instable, drifting or non-stationary? a survey on related terms in metrology, journal of physics: conference series 459 (2013) 012043. doi: 10.1088/1742-6596/459/1/012043
acta imeko, december 2014, volume 3, number 4, 4 - 9, www.imeko.org

stator winding fault diagnosis in permanent magnet synchronous generators based on short-circuited turns identification using extended kalman filter

brice aubert 1,2,3, jérémi régnier 1,2, stéphane caux 1,2, dominique alejo 3
1 université de toulouse; inpt, ups; laplace; enseeiht, 2 rue charles camichel, bp 7122, f-31071 toulouse cedex 7, france
2 cnrs; laplace; f-31071 toulouse, france
3 aeroconseil - akka technologies group; 3 rue dieudonné costes; f-31703 blagnac, france

section: research paper
keywords: extended kalman filter; fault diagnosis; on-line parameters estimation; permanent magnet synchronous generator; inter-turn short-circuit
citation: brice aubert, jérémi régnier, stéphane caux, dominique alejo, stator winding fault diagnosis in permanent magnet synchronous generators based on short-circuited turns identification using extended kalman filter, acta imeko, vol. 3, no. 4, article 3, december 2014, identifier: imeko-acta-03 (2014)-04-03
editor: paolo carbone, university of perugia
received october 5th, 2013; in final form january 4th, 2014; published december 2014
copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: (none reported)
corresponding author: brice aubert, e-mail: brice.aubert@laplace.univ-tlse.fr or brice.aubert@aeroconseil.com

abstract
this paper deals with an extended kalman filter based fault detection for inter-turn short-circuits in permanent magnet synchronous generators. inter-turn short-circuits are among the most critical faults in the pmsg. indeed, due to the permanent magnets, the short-circuit current is maintained as long as the machine is rotating. thus, a specific pmsg faulty model in the d-q frame is developed to estimate the number of short-circuited turns, which is used to build a fault indicator. simulation results demonstrate the sensitivity and the robustness of the proposed fault indicator against various operating points on an electrical network, even for a small number of short-circuited turns.

1. introduction

permanent magnet synchronous generators (pmsg) are increasingly used in various industrial fields such as the aerospace, railway and automotive sectors or renewable energy [1], taking advantage of their high efficiency, small size and easy control for variable-speed applications compared to a wound-rotor design. with regard to safety considerations, the main drawback of the pmsg is the critical effect of inter-turn short-circuits in the stator windings. indeed, stator faults in a pmsg persist while the machine is rotating, because of the permanent rotor flux produced by the magnets. more specifically, inter-turn short-circuits in stator windings are tricky to detect and give rise to the highest fault currents, especially for a small number of faulty turns [2]. thus, an early on-line detection is required to trigger the appropriate safety action and to avoid greater damage. as well as being quick, the detection method should be robust to electrical network variations to avoid false alarms. moreover, the constraints related to on-line monitoring imply a detection algorithm matching the performance of on-board computers. in the literature, three main approaches are commonly used to detect inter-turn short-circuits in stator windings. a first approach, based on signal analysis, often uses spectral tools to underline specific frequency components related to the fault. for example, stator current analysis in park's [2], concordia [3] or three-phase [4] frames and axial flux leakage measurements [5] can be used as inter-turn short-circuit fault indicators.
even if these methods are compatible with on-line requirements and enable relatively quick fault detection, they are not suitable for variable-speed applications and are not robust with regard to electrical network variations in generator operation. the second approach relies on knowledge-based methods, which imply an a priori knowledge of the fault signature to allow an a posteriori fault classification during operation of the monitored system. artificial intelligence tools such as fuzzy logic systems [6], artificial neural networks [7] and pattern recognition methods [8] can be used, but their time response and computational load do not match on-line detection requirements. the third approach is based on state or parameter estimation and implies the use of mathematical models of the studied system [9]. these methods are a good compromise between robustness with regard to electrical network variations, on-board computation and fast fault detection. to meet these requirements, the method presented in this paper for inter-turn short-circuit detection is based on the identification of the short-circuited turns ratio in a faulty pmsg model expressed in the park frame using an extended kalman filter (ekf) algorithm. traditionally used for state or parameter estimation, the ekf algorithm is, in this case, specifically adapted for detection purposes. a basic modelling approach allows defining a generalized algorithm suitable for a large power range of synchronous generators. no additional measurements are required, because the proposed detection scheme is fed with measurements already available for control purposes. this paper is organized as follows: in section 2, the faulty pmsg park model used for fault diagnosis is described. section 3 presents simulation results of the ekf algorithm in both healthy and faulty conditions; the proposed fault indicator is also described. finally, a robustness and sensitivity analysis is presented in section 4 to assess the effectiveness of the proposed indicator.

2. faulty pmsg model for inter-turn short-circuit detection

the faulty pmsg model used for inter-turn short-circuit detection is based on an earlier study [10] with fewer modelling assumptions (the voltage drop due to the short-circuit reduction is taken into account) to make the pmsg model more sensitive to winding faults. the model enables fault localization with the use of the angle θs/c (equal to 0, 2π/3 or 4π/3 for a short-circuit on phase a, phase b or phase c, respectively) and the number of short-circuited turns ns/c (ratio between short-circuited turns and all turns of a stator winding). figure 1 shows this model used for the fault diagnosis with an inter-turn short-circuit located on phase c (θs/c = 4π/3).
according to this faulty model, the stator voltage equations [vs] and the short-circuit loop equation are respectively given in (1) and (2):

$$[v_s] = R_s\,[i_s] - n_{s/c} R_s\,[T_{s/c}]\, i_{s/c} + \frac{d[\varphi_s]}{dt} \tag{1}$$

$$0 = -\,n_{s/c} R_s\,[T_{s/c}]^T [i_s] + n_{s/c} R_s\, i_{s/c} + \frac{d\varphi_{s/c}}{dt} \tag{2}$$

where the stator flux [φs] and the short-circuit flux φs/c are respectively expressed in (3) and (4):

$$[\varphi_s] = [L]\,[i_s] + \tfrac{3}{2}\, n_{s/c} L_{ps}\,[T_{32}] \begin{bmatrix}\cos\theta_{s/c}\\ \sin\theta_{s/c}\end{bmatrix} i_{s/c} + n_{s/c} L_{ls}\,[T_{s/c}]\, i_{s/c} + [\varphi_m] \tag{3}$$

$$\varphi_{s/c} = \tfrac{3}{2}\, n_{s/c} L_{ps} \left([T_{32}]\begin{bmatrix}\cos\theta_{s/c}\\ \sin\theta_{s/c}\end{bmatrix}\right)^{T} [i_s] + n_{s/c} L_{ls}\,[T_{s/c}]^{T} [i_s] - n_{s/c}^{2}\,(L_{ps}+L_{ls})\, i_{s/c} + n_{s/c}\,[T_{s/c}]^{T} [\varphi_m] \tag{4}$$

with:
[vs] : stator voltages vector
[is] : stator currents vector
is/c : short-circuit current
[e] : electromotive forces vector ([e] = d[φm]/dt, with [φm] the flux linkage of the permanent magnets)
Rs : resistance of a stator winding
Lls : leakage inductance of a stator winding
Lps : magnetizing inductance of a stator winding

$$[L] = \begin{bmatrix} L_{ls}+L_{ps} & -L_{ps}/2 & -L_{ps}/2\\ -L_{ps}/2 & L_{ls}+L_{ps} & -L_{ps}/2\\ -L_{ps}/2 & -L_{ps}/2 & L_{ls}+L_{ps}\end{bmatrix} \text{ : inductance matrix}$$

$$[T_{32}] = \sqrt{\tfrac{2}{3}}\begin{bmatrix}1 & 0\\ -\tfrac{1}{2} & \tfrac{\sqrt{3}}{2}\\ -\tfrac{1}{2} & -\tfrac{\sqrt{3}}{2}\end{bmatrix} \text{ : concordia transformation matrix}$$

$$[T_{s/c}] = \frac{1}{3}\begin{bmatrix}1+2\cos\theta_{s/c}\\ 1+2\cos(\theta_{s/c}-\tfrac{2\pi}{3})\\ 1+2\cos(\theta_{s/c}-\tfrac{4\pi}{3})\end{bmatrix} \text{ : short-circuit matrix}$$

after applying the park transformation to (1) and (2) and some calculations, the faulty model can be expressed with the two equations (5) and (6):

$$[v_s]_{dq} = [e]_{dq} - R_s\,[i'_s]_{dq} - \omega L_s \begin{bmatrix}0 & -1\\ 1 & 0\end{bmatrix} [i'_s]_{dq} - L_s \frac{d[i'_s]_{dq}}{dt} \tag{5}$$

$$[i_s]_{dq} = [i'_s]_{dq} + [i_{s/c}]_{dq} = [i'_s]_{dq} + \frac{1}{[Z_{s/c}]_{dq}}\,[v_s]_{dq} \tag{6}$$

where:
stator synchronous inductance: $L_s = \tfrac{3}{2} L_{ps} + L_{ls}$
park transformation matrix: $[P(\theta)] = \begin{bmatrix}\cos\theta & \sin\theta\\ -\sin\theta & \cos\theta\end{bmatrix}$
short-circuit fault impedance: $[Z_{s/c}]_{dq} = \dots$
fault localization matrix: $[Q(\theta_{s/c})] = \begin{bmatrix}\cos^2\theta_{s/c} & \cos\theta_{s/c}\sin\theta_{s/c}\\ \cos\theta_{s/c}\sin\theta_{s/c} & \sin^2\theta_{s/c}\end{bmatrix}$
short-circuit current calculation in the park frame: $[i_{s/c}]_{dq} = \tfrac{2}{3}\, n_{s/c}\,[P(\theta)] \begin{bmatrix}\cos\theta_{s/c}\\ \sin\theta_{s/c}\end{bmatrix} i_{s/c}$

figure 1. faulty pmsg model used for inter-turn short-circuit diagnosis.

it can be noticed that (5) is the same equation as a classic healthy pmsg model, taking [i's] as the stator current, whereas the inter-turn short-circuit loop is represented in (6) by the equivalent fault impedance [zs/c] at the output of the machine, which deflects a part of the stator current. extending this model to each phase, three short-circuit impedances (zs/ca, zs/cb, zs/cc for θs/c = 0, 2π/3, 4π/3, respectively) are added to the pmsg model, as shown in figure 2. this model can be written as a state-space representation with [i's] as the state vector. the inter-turn fault is expressed in the feedforward matrix d, depending on the numbers of short-circuited turns ns/ca, ns/cb, ns/cc. to estimate these parameters in this non-linear model, the ekf algorithm is used. in the ekf algorithm, special attention should be paid to the tuning of the covariance matrices, particularly the measurement noise covariance matrix r and the state noise covariance matrix q, to obtain at the same time an accurate estimation and an appropriate dynamic response of the extended parameters [11]. in order to satisfy the fast dynamic response requirement for the fault indicator, and according to the measurement noise, the covariance matrices used for the estimation of each ns/c are expressed in (7):

$$P_0 = 10^{-1}\cdot I_{5\times5};\qquad R = 10^{-2}\cdot I_{2\times2};\qquad Q = \begin{bmatrix}10^{-5}\cdot I_{2\times2} & 0\\ 0 & 10^{-4}\cdot I_{3\times3}\end{bmatrix} \tag{7}$$

simulation results are presented in the following section.
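the paper does not reproduce the ekf recursion itself; as a reading aid, the sketch below shows how the described estimator could be organized, with a state vector extended by the three short-circuited-turns ratios and the covariance settings of (7). the state-transition and measurement functions are trivial stand-ins for the faulty pmsg model, and all names are illustrative assumptions, not the authors' code.

```python
import numpy as np

# covariance settings from (7): 5 states = [i'_d, i'_q, n_sca, n_scb, n_scc]
P = 1e-1 * np.eye(5)
R = 1e-2 * np.eye(2)
Q = np.block([[1e-5 * np.eye(2), np.zeros((2, 3))],
              [np.zeros((3, 2)), 1e-4 * np.eye(3)]])

def f(x, u, dt):
    """state transition placeholder: a discretized form of (5)-(6) would
    update [i'_s]_dq from [v_s]_dq; the turn ratios are modelled as
    slowly varying (random-walk) parameters."""
    return x

def h(x):
    """measurement placeholder: predicted stator currents [i_s]_dq,
    i.e. [i'_s]_dq plus the fault-impedance contribution of (6)."""
    return x[:2]

def ekf_step(x, P, z, u, dt):
    # jacobians obtained numerically; an analytic form would be used in practice
    eps = 1e-6
    F = np.column_stack([(f(x + eps * e, u, dt) - f(x, u, dt)) / eps
                         for e in np.eye(5)])
    x_pred = f(x, u, dt)
    P_pred = F @ P @ F.T + Q
    H = np.column_stack([(h(x_pred + eps * e) - h(x_pred)) / eps
                         for e in np.eye(5)])
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.solve(S, np.eye(2))
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(5) - K @ H) @ P_pred
    return x_new, P_new
```

with the real park-frame model substituted for f and h, the last three entries of the state estimate directly provide the per-phase turn-ratio estimates used by the fault indicator of the next section.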
3. simulation procedure

3.1. simulation test bench setup

the pmsg studied in this paper is an existing three-phase 3.6 kw four-pole permanent magnet synchronous machine (figure 3). the stator windings of the pmsg are modified with additional connection points in the stator coils of phase a, allowing the introduction of several inter-turn short-circuit levels through a short-circuit resistance rsc (4%, 8%, 12% and 16% of short-circuited turns, corresponding respectively to 3, 6, 9 and 12 turns out of the 72 turns of a stator winding). the faulty pmsg model used for simulation is built using the electrically coupled magnetic circuit (ecmc) method [12]. it consists in a semi-analytical computation of the pmsg inductances from the stator winding layout, including the additional connections for inter-turn short-circuit generation. the simulation test bench is set up with the saber software in order to generate various operating points for the faulty pmsg model (figure 4). it is composed of a balanced r-l load to vary the electrical power and the power factor, an unbalanced r load to create power unbalance, and a harmonic load including a three-phase diode bridge rectifier with resistive load to generate current harmonics.

figure 2. inter-turn faulty pmsg model expressed in park's frame.
figure 3. pmsg characteristics and its stator windings.
figure 4. simulation test bench scheme.

3.2. simulation results

as shown in figure 5, where a 16% inter-turn short-circuit is generated at t = 0.5 s on phase a, each estimated parameter ns/c is impacted. indeed, a modification of their mean value and large oscillations at twice the electrical frequency appear as soon as the winding fault occurs (figure 5b). it is also noticeable that ns/ca is not the only estimated parameter sensitive to the short-circuit: ns/cb and ns/cc are also modified, to a lesser extent, due to some simplifying assumptions in the faulty pmsg model used for the parameter estimation. however, the fault can clearly be located on phase a in this test. according to this test, the proposed fault indicator to detect inter-turn short-circuits, expressed in (8), is based on the sum of the three estimated short-circuited turn ratios (figure 5c). a moving average over one electrical half-period is added to filter the fault indicator and improve its robustness:

$$\mathrm{fault\_indicator} = \sum_{i=a,b,c} \overline{\left| n_{s/c,i} \right|}^{\,T/2} \tag{8}$$

figure 6 shows the evolution of this fault indicator in the healthy case (for t < 0.5 s) and in the faulty case (for t > 0.5 s) for several values of short-circuited turns on phase a. in the healthy case, the fault indicator remains close to 0, reflecting a safe pmsg. in faulty cases, the fault indicator increases with the number of short-circuited turns, which confirms that fault detection is trickiest for a small number of short-circuited turns. however, short-circuit detection remains possible even for 4% of short-circuited turns by comparison with the fault indicator behaviour in the healthy case. moreover, the time response of the proposed fault indicator remains quick (about 20 ms) in spite of the filtering characteristic of the moving average. in the following section, the effectiveness of this fault indicator is evaluated against several operating points and several inter-turn short-circuits.
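as a concrete illustration of (8), the following sketch computes the indicator from the three estimated turn ratios by a moving average over one electrical half-period. the sample rate, the test signal and the small residual estimates on phases b and c are invented placeholders, not the paper's data.

```python
import numpy as np

def fault_indicator(n_sc_abc, fs, f_elec):
    """eq. (8): sum over phases of the moving average of |n_s/c|
    taken over one electrical half-period (t/2 = 1/(2*f_elec)).

    n_sc_abc: array of shape (n_samples, 3) with the ekf estimates
    fs:       sampling rate of the estimates in hz (assumed value)
    f_elec:   electrical frequency in hz
    """
    win = max(1, int(round(fs / (2.0 * f_elec))))  # samples in t/2
    kernel = np.ones(win) / win
    # moving average of the absolute estimates, phase by phase
    filtered = np.column_stack(
        [np.convolve(np.abs(n_sc_abc[:, k]), kernel, mode="same")
         for k in range(3)]
    )
    return filtered.sum(axis=1)

# example: 16% short-circuit on phase a appearing at t = 0.5 s
fs, f_elec = 10_000, 40.0
t = np.arange(0, 1.0, 1.0 / fs)
n_a = np.where(t > 0.5, 0.16 + 0.02 * np.sin(2 * np.pi * 2 * f_elec * t), 0.0)
est = np.column_stack([n_a, 0.01 * np.ones_like(t), 0.01 * np.ones_like(t)])
print(fault_indicator(est, fs, f_elec)[8000])  # well after the fault: ~0.18
```

the moving average removes the oscillation at twice the electrical frequency visible in figure 5b, so the indicator settles close to the sum of the mean turn-ratio magnitudes.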
4. fault indicator performance assessment

4.1. robustness assessment

in order to evaluate the robustness of each estimated parameter for inter-turn short-circuit diagnosis, several tests based on figure 4 have been carried out; they are listed below, and a sketch of the worst-case evaluation is given at the end of this subsection.

test 1: frequency variation from 30 hz to 60 hz in 10 hz steps, with a 10 a load current per phase.
test 2: power variation from 15 a down to 0 a load current per phase in 2.5 a steps, at 40 hz electrical frequency.
test 3: power factor variation from 1 to 0.8 in 0.05 steps, at 40 hz and 10 a load current per phase.
test 4: unbalanced load variation from -5 a to +5 a unbalanced load current in 2.5 a steps, at 40 hz and 10 a load current in the other phases.
test 5: harmonic load variation from 0% to 20% of nominal-current harmonic distortion in 5% steps, at 40 hz and 10 a load current per phase.

these scenarios are related to conditions that could be encountered in embedded electrical networks. robustness is then characterized by the ability of the ekf algorithm to properly separate the healthy and the faulty cases whatever the operating conditions. to quantify this ability, healthy and faulty simulations are performed under the same operating conditions and the worst indicator value is kept for each test (i.e. the highest indicator value in the healthy case and the lowest indicator value in the faulty case). for example, figure 7 shows the evolution of the fault indicator for various operating electrical frequencies. in this test, the highest fault indicator values in the healthy cases appear at the lowest frequency, whereas the lowest fault indicator values in the faulty cases appear at the highest frequency. the spider chart in figure 8 summarizes the robustness of the proposed fault indicator against electrical network variations, using the worst indicator values for each test, to compare healthy and faulty operation with several numbers of short-circuited turns. it indicates that an inter-turn short-circuit fault can easily be distinguished from the healthy pmsg whatever the operating conditions, even for a small number of short-circuited turns. based on these robustness tests, a detection threshold is set at twice the maximum of the fault indicator in healthy condition over all robustness tests, in order to avoid any false alarm in healthy operation. this maximum healthy-case indicator value appears at the lowest frequency of the frequency tests; the detection threshold is thus set to 2.5% and is also represented in figure 8.

figure 5. response to a 16% inter-turn short-circuit (iload = 10 a per phase, 40 hz). (a) short-circuit current. (b) estimated parameters. (c) fault indicator.
figure 6. fault indicator response to various inter-turn short-circuits (iload = 10 a per phase, 40 hz).
figure 7. fault indicator response to various electrical frequencies (iload = 10 a per phase). (a) electrical frequency. (b) short-circuit current. (c) fault indicator.
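a minimal sketch of the worst-case bookkeeping described above: for each test, keep the highest healthy and the lowest faulty indicator value, and derive the threshold as twice the healthy maximum. the dictionary contents are invented placeholders, not the paper's simulation results.

```python
# worst-case robustness evaluation: for each test keep the highest healthy
# and the lowest faulty indicator value, then set threshold = 2 x healthy max
tests = {
    # test name: (healthy indicator values, faulty indicator values)
    "frequency": ([0.008, 0.012], [0.045, 0.060]),  # placeholder values
    "power":     ([0.006, 0.009], [0.050, 0.070]),  # placeholder values
}
worst_healthy = {name: max(h) for name, (h, f) in tests.items()}
worst_faulty = {name: min(f) for name, (h, f) in tests.items()}
threshold = 2 * max(worst_healthy.values())  # 2 x 0.012 -> 0.024, i.e. ~2.5%
assert all(v > threshold for v in worst_faulty.values())  # fault separable
```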
4.2. sensitivity assessment

the sensitivity assessment consists in evaluating the short-circuit current needed to detect the inter-turn fault; it can be calculated according to the previously determined detection threshold. figure 9 shows this sensitivity test, in which the short-circuit resistance rsc is used to vary the inter-turn short-circuit severity. this test confirms that fault detection is trickier, and the fault more serious, for a small number of short-circuited turns. indeed, in relation to the 15 a rated current of the studied pmsg, a fault current of 45 a is required to detect a 4% inter-turn short-circuit, compared to a 15 a fault current required to detect a 16% inter-turn short-circuit. however, the stator windings of a pmsg can easily withstand three times the rated current during the detection time (about 20 ms). moreover, these results show that, even for a small number of short-circuited turns, an early detection of a resistive inter-turn short-circuit is possible with the proposed fault indicator.

5. conclusions

this paper describes a method for inter-turn short-circuit detection in stator windings based on the estimation of short-circuited turns in a faulty pmsg model using an ekf algorithm. this fault diagnosis allows a fast and robust detection of inter-turn short-circuits, whatever the operating conditions on a non-ideal electrical network, even for a small number of short-circuited turns. moreover, the fault indicator allows localization of the faulty phase and is also sensitive to resistive inter-turn short-circuits, which could prevent a dead short-circuit. experimental tests have also been carried out, with the short-circuit current limited to 25 arms for safety reasons. the experimental results, available in [13], are quite similar to the simulation ones and confirm the good representativeness of the simulation model, even for a resistive inter-turn short-circuit. finally, the prospect of this study is the application of this fault indicator to a 45 kw, 400 hz - 800 hz pmsg.

figure 8. robustness spider chart for the proposed fault indicator.
figure 9. sensitivity assessment for the proposed fault indicator (iload = 10 a per phase, 40 hz). (a) fault indicator. (b) short-circuit current.

references
[1] k. r. weeber, m. r. shah, k. sivasubramaniam, a. el-refaie, qu ronghai, c. stephens, s. galioto, "advanced permanent magnet machines for a wide range of industrial applications", ieee power and energy society general meeting, july 25-29, 2010, minneapolis, usa, pp. 1-6.
[2] s. m. a. cruz, a. j. m. cardoso, multiple reference frames theory: a new method for the diagnosis of stator faults in three-phase induction motors, ieee trans. energy convers. 20 (2005), pp. 611-619.
[3] d. diallo, m. e. h. benbouzid, d. hamad, x. pierre, fault detection and diagnosis in an induction machine drive: a pattern recognition approach based on concordia stator mean current vector, ieee trans. energy convers. 20 (2005), pp. 512-519.
[4] m. sahraoui, a. ghoggal, s. e. zouzou, a. aboubou, h. razik, "modelling and detection of inter-turn short circuits in stator windings of induction motor", proc. int. iecon, 2006, pp. 4981-4986.
[5] j. penman, h. g. sedding, b. a. lloyd, w. t. fink, detection and location of interturn short circuits in the stator windings of operating motors, ieee trans. energy convers. 9 (1994), pp. 652-658.
[6] m. a. awadallah, m. m. morcos, s. gopalakrishnan, t. w. nehl, a neuro-fuzzy approach to automatic diagnosis and location of stator inter-turn faults in csi-fed pm brushless dc motors, ieee trans. energy convers. 20 (2005), pp. 253-259.
[7] m. b. k. bouzid, g. champenois, n. m. bellaaj, l. signac, k. jelassi, an effective neural approach for the automatic location of stator interturn faults in induction motor, ieee trans. ind. electron. 55 (2008), pp. 4277-4289.
[8] a. soualhi, g. clerc, h. razik, "faults classification of induction machine using an improved ant clustering technique", ieee international symposium on diagnostics for electric machines, power electronics & drives (sdemped), sept. 5-8, 2011, bologna, italy, pp. 316-321.
[9] m. khov, j. régnier, j. faucher, "detection of turn short-circuit faults in stator of pmsm by on-line parameter estimation", international symposium on power electronics, electrical drives, automation and motion (speedam), june 11-13, 2008, ischia, italy, pp. 161-166.
[10] s. bachir, s. tnani, j.-c. trigeassou, g. champenois, diagnosis by parameter estimation of stator and rotor faults occurring in induction machines, ieee trans. ind. electron. 53 (2006), pp. 963-973.
[11] b. de fornel, j.-p. louis, "electrical actuators: identification and observation", iste ltd, 2010.
[12] a. a. abdallah, j. regnier, j. faucher, "simulation of internal faults in permanent magnet synchronous machines", 6th international conference on power electronics and drive systems, nov. 28 - dec. 1, 2005, kuala lumpur, malaysia, pp. 1390-1395.
[13] b. aubert, j. regnier, s. caux, d. alejo, "on-line inter-turn short-circuit detection in permanent magnet synchronous generators", ieee international symposium on diagnostics for electric machines, power electronics & drives (sdemped), aug. 27-30, 2013, valencia, spain, pp. 329-335.

acta imeko, issn: 2221-870x, september 2015, volume 4, number 3, 59 - 64

new generation of cage-type current shunts developed using model analysis

věra nováková zachovalová, martin šíra, pavel bednář, stanislav mašláň
czech metrology institute, okruzni 31, 638 00 brno, czech republic

section: research paper
keywords: current shunt; current measurement; phase measurement; electric variables measurement
citation: věra nováková zachovalová, martin šíra, pavel bednář, stanislav mašláň, new generation of cage-type current shunts developed using model analysis, acta imeko, vol. 4, no. 3, article 10, september 2015, identifier: imeko-acta-04 (2015)-03-10
editor: paolo carbone, university of perugia, italy
received february 13, 2015; in final form april 8, 2015; published september 2015
copyright: © 2015 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was supported by czech metrology institute, czech republic
corresponding author: věra nováková zachovalová, e-mail: vnovakovazachovalova@cmi.cz

abstract
this paper describes a new generation of cage-type ac-dc current shunts from 30 ma up to 10 a, developed using a lumped circuit element model. comparison of the calculated and measured values shows agreement better than 6 ppm in the ac-dc difference at frequencies up to 100 khz.

1. introduction

at national metrology institutes (nmi), shunts of special designs are used for high-accuracy current measurements, converting a current from the working level to a voltage convenient for the input of a meter. these shunts are mainly used in power and power quality measurement systems and in ac-dc current transfer difference measurement systems. the ac-dc current transfer difference serves for the derivation of the root mean square (rms) value of an ac current from a dc current of the same nominal level. it can be defined as the relative difference between the rms value of the ac current and the dc current of the same nominal level, and can be measured by devices called thermal converters. in recent years, shunts of three different designs for high-accuracy current measurements were developed at several nmis:

- foil shunts [1] are built using resistive manganin or zeranin foils. they are distinguished by a lower ac-dc difference and phase angle error compared with cage-type current shunts, but on the other hand their temperature and power coefficients are larger.
- cage shunts [2]-[6] are built using a symmetrical construction of printed circuit boards (pcb) with soldered-in discrete resistors. they are characterized by low temperature and power coefficients, but their ac-dc difference and phase angle error are larger compared with foil shunts.
- coaxial shunts [7] are made using disk structures, either with surface-mount (smd) resistors or resistive layers. their design is very compact, but their frequency characteristic is the worst of the three shunt designs.

at the czech metrology institute (cmi) the cage design was used to build a first (original) series of current shunts in 2008 [6]. in subsequent research, a calculable model of the cmi cage shunt was developed using lumped circuit elements. the transimpedance, the ac-dc difference and also the phase angle error of a shunt can be derived from the model [9]. the goal of this research was to build a new generation of cage shunts from 30 ma up to 10 a based on a model analysis, for the purpose of optimizing their construction to achieve the lowest possible ac-dc difference and phase angle error.

2. cmi cage-type current shunts construction and modeling

the original cmi shunts were developed to be suitable for use with planar multijunction thermal converters (pmjtc), to establish an ac-dc current transfer difference measurement system at cmi in 2008 [6]. the voltage drop across every shunt in parallel with a 90 ω pmjtc is 1 v at nominal current. the number, value and type of resistors are different in each shunt (see table 1). the temperature coefficient of the shunts was reduced by using a suitable combination of s102 resistors with different temperature coefficients (one third s102c and two thirds s102k), except for the 30 ma shunt with z201 resistors, whose temperature coefficient is below 1 ppm/k [6], [8]. the physical design of the original cmi shunts is shown in figure 1 (detailed in figure 2). the shunts are constructed using single-sided and double-sided fibreglass-epoxy pcb material (fr4, with permittivity 5.45 and loss factor 0.0205) of 2 mm thickness, with mounted vishay foil z201 or s102 resistors, and can be split up into the following construction parts [6], [9]:

- input connector;
- input part (two disks made of single-sided pcb);
- crossbars made of double-sided pcb;
- resistors;
- output part (a circle and a disk made of single-sided pcb and an aluminium wire connected between the circle and the output connector);
- output connector.

the input connector brings the current to the input disks, which spread the current to the crossbars. each crossbar carries a fraction of the input current to resistors connected in parallel. the output part senses the voltage drop across the resistors. to each construction part a two-port element was assigned, and their cascade concatenation led to the shunt model (see figure 3). each two-port was determined from the physical design of the related construction part; for instance, the crossbars are made from double-sided pcb with a leakage capacitance and resistance between the copper layers, and the copper layer also has some inductance and resistance [9].

table 1. general parameters of the original shunts.
nominal current | resistance (ω) | number and value of resistors | type of resistors | pcb type
30 ma | 50 | 3x 150 ω | z201 | fr4
100 ma | 10 | 10x 100 ω | 3x s102c, 7x s102k | fr4
300 ma | 3.3 | 30x 100 ω | 10x s102c, 20x s102k | fr4
1 a | 1 | 100x 100 ω | 33x s102c, 67x s102k | fr4
10 a | 0.1 | 100x 10 ω | 33x s102c, 67x s102k | fr4

figure 1. the original cmi shunts.
figure 2. the design of the original 100 ma shunt.
figure 3. the lumped element model of the shunts.
the product of the cascade matrices assigned to the two-ports defines the cascade matrix of the model [9]:

$$[A] = [A_{ic}]\,[A_{ip}]\,[A_{cb}]\,[A_{r}]\,[A_{op}]\,[A_{oc}] \tag{1}$$

where [A_ic], [A_ip], [A_cb], [A_r], [A_op] and [A_oc] are the cascade matrices of the individual two-ports (input connector, input part, crossbars, resistors, output part and output connector) according to figure 3. during measurements the output of the shunt is loaded by the device that measures the voltage drop across the shunt; therefore it is necessary to include this load in the model. the cascade matrix of the loaded shunt model is then [9]:

$$[A'] = \begin{bmatrix} A' & B' \\ C' & D' \end{bmatrix} = [A]\,[A_L] \tag{2}$$

where [A_L] is the cascade matrix assigned to the load, and A', B', C' and D' are the elements of matrix [A']. element C' can be used for the calculation of the transimpedance of the shunt [9]:

$$Z_t = \frac{1}{C'} \tag{3}$$

from the transimpedance, the phase angle error and the ac-dc difference can be derived easily [9]. this model is based on calculations of all component values from the geometry and material properties. therefore it was necessary to know the dielectric properties of the pcb (permittivity, loss factor) and the capacitance and inductance of the resistors, which had to be measured [9]. the evaluation of the associated uncertainties of the model was done by means of the monte carlo method [9]. a detailed description of the model calculation is given in [9]. the model was validated by comparison of the calculated and measured values of the original shunts, which showed agreement better than 6 ppm in the ac-dc difference and 110 µrad in the phase angle error at frequencies up to 100 khz [9].
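to make the cascade formalism of (1)-(3) concrete, the sketch below chains a few illustrative abcd two-ports and extracts the transimpedance from element c' of the loaded matrix. the element values (connector inductance, leakage capacitance, resistor bank, load) are arbitrary placeholders, not the cmi component values.

```python
import numpy as np

def series_z(z):
    """abcd matrix of a series impedance z."""
    return np.array([[1.0, z], [0.0, 1.0]], dtype=complex)

def shunt_y(y):
    """abcd matrix of a shunt admittance y."""
    return np.array([[1.0, 0.0], [y, 1.0]], dtype=complex)

def transimpedance(two_ports):
    """eq. (1)-(3): cascade the two-ports, then z_t = 1 / c'."""
    a = np.eye(2, dtype=complex)
    for m in two_ports:
        a = a @ m
    return 1.0 / a[1, 0]

f = 100e3                          # evaluation frequency in hz
w = 2 * np.pi * f
chain = [
    series_z(1j * w * 20e-9),      # 20 nh input connector inductance
    shunt_y(1j * w * 5e-12),       # 5 pf leakage of the input part
    shunt_y(1.0 / 10.0),           # 10 ohm parallel resistor bank
    shunt_y(1.0 / 90.0),           # 90 ohm pmjtc load
]
zt = transimpedance(chain)
print(abs(zt), np.angle(zt))       # magnitude and phase of z_t (~9 ohm here)
```

repeating the evaluation over a frequency grid gives the frequency dependence of the transimpedance, from which the ac-dc difference and the phase angle error follow.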
3. construction improvements

the shunt model can be used to determine the sensitivity of the output quantities to modifications of the input quantities, resulting in improvements to the construction of the shunts [9].

3.1. theoretical analysis

the sensitivity of the calculated values of the ac-dc difference and phase angle error to modifications of the input quantities of the model was investigated using parametric simulation. the calculations indicated that some of the input quantities have a significant influence, and some an insignificant influence, on the calculated values of the ac-dc difference and phase angle error. the following input quantities of the model were found to have a significant influence on the calculated values [9]:

- capacitance and inductance of the resistors;
- relative permittivity and loss factor of the pcb;
- thickness of the pcb;
- number of crossbars and geometric dimensions.

it was found that the influence of all these input quantities increases with frequency (see the example of the ac-dc difference and phase angle error dependence on frequency with modification of the pcb permittivity in figure 4 and figure 5). by converting the data to a single frequency point it was ascertained that the dependence of the ac-dc difference and phase angle error on the modification of the relevant input quantities is linear, except for the dependence on the modification of the pcb thickness (see the example of the ac-dc difference and phase angle error dependence on modification of the pcb permittivity and pcb thickness in figures 6 to 9, calculated at 100 khz). in the next step the sensitivity coefficients were calculated, to quantify the influence of each single input quantity:

$$c_s = \frac{\Delta y}{\Delta x} \tag{4}$$

where Δy represents the change of the output quantity (ac-dc difference or phase angle error) corresponding to a change Δx of the selected input quantity. the ac-dc difference and phase angle error sensitivity coefficients calculated at 100 khz are shown in table 2 and table 3 (a numerical sketch of this finite-difference evaluation is given below).

figure 4. ac-dc difference dependence on frequency with modification of pcb permittivity calculated for a 100 ma shunt.
figure 5. phase angle error dependence on frequency with modification of pcb permittivity calculated for a 100 ma shunt.
figure 6. ac-dc difference dependence on modification of the pcb permittivity calculated at 100 khz.
figure 7. phase angle error dependence on modification of the pcb permittivity calculated at 100 khz.
figure 8. ac-dc difference dependence on modification of the pcb thickness calculated at 100 khz.
figure 9. phase angle error dependence on modification of the pcb thickness calculated at 100 khz.
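a minimal sketch of the parametric sensitivity evaluation of (4): perturb one model input, re-evaluate the output, and take the finite-difference ratio. the model function here is an invented stand-in for the lumped-element shunt model, with arbitrary coefficients, not the cmi implementation.

```python
def sensitivity(model, params, name, delta):
    """finite-difference sensitivity c_s = dy/dx of eq. (4):
    change input `name` by `delta`, keep all other inputs fixed."""
    perturbed = dict(params)
    perturbed[name] = params[name] + delta
    return (model(perturbed) - model(params)) / delta

# stand-in model: ac-dc difference in ppm at 100 khz as a toy linear
# function of pcb permittivity and thickness (arbitrary coefficients)
def acdc_model(p):
    return 5.1 * p["permittivity"] + (-11.3) * p["thickness_mm"]

params = {"permittivity": 5.45, "thickness_mm": 2.0}
print(sensitivity(acdc_model, params, "permittivity", 0.01))   # ~5.1 ppm
print(sensitivity(acdc_model, params, "thickness_mm", 0.01))   # ~-11.3 ppm/mm
```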
the values of the sensitivity coefficients are different for the low-current and the high-current shunts. this indicates that the optimization of the shunt construction depends on different input quantities for the low-current and the high-current shunts. the ac-dc difference of the low-current shunts is affected by the dielectric properties and thickness of the pcb, and slightly also by the geometric dimensions. the ac-dc difference of the high-current shunts depends more on the disk size and less on the dielectric properties and thickness of the pcb. the capacitance and inductance of the resistors do not influence the ac-dc difference of any of the shunts. the phase angle error of the low-current shunts is influenced by the relative permittivity and thickness of the pcb, by the geometric dimensions, and slightly also by the capacitance and inductance of the resistors. the phase angle error of the high-current shunts is affected mainly by the inductance of the resistors, but also by the thickness of the pcb and slightly by the relative permittivity. the influence of the loss factor on the phase angle error is negligible for all shunts.

table 2. ac-dc difference sensitivity coefficients calculated at 100 khz.
input quantity | 30 ma shunt | 100 ma shunt | 1 a shunt | 10 a shunt | unit
capacitance of resistors | 0.4 | 0.1 | 0.0 | 0.0 | ppm/pf
inductance of resistors | 0.0 | 0.0 | 0.0 | -0.1 | ppm/nh
pcb thickness | -44.3 | -11.3 | -2.9 | 3.2 | ppm/mm
pcb permittivity | 30.2 | 5.1 | 1.3 | -3.2 | ppm
pcb loss factor | 57.8 | 12.9 | 4.0 | 1.1 | ppm/10^-2
width of crossbars | 5.5 | 1.8 | 0.5 | 0.1 | ppm/mm
length of crossbars | 2.8 | 0.4 | 0.1 | 0.0 | ppm/mm
size of disks | 21.4 | 3.1 | -0.3 | -5.3 | ppm/cm

table 3. phase angle error sensitivity coefficients calculated at 100 khz.
input quantity | 30 ma shunt | 100 ma shunt | 1 a shunt | 10 a shunt | unit
capacitance of resistors | -94.1 | -62.8 | -62.8 | -6.3 | µrad/pf
inductance of resistors | 4.18 | 6.3 | 6.3 | 62.8 | µrad/nh
pcb thickness | 2673.9 | 572.9 | 160.1 | 41.5 | µrad/mm
pcb permittivity | -1782.0 | -262.8 | -73.4 | -19.1 | µrad
pcb loss factor | 0.8 | 0.0 | 0.0 | -0.2 | µrad/10^-2
width of crossbars | -342.6 | -90.9 | -22.7 | -4.5 | µrad/mm
length of crossbars | -157.4 | -18.2 | -4.5 | -0.9 | µrad/mm
size of disks | -1278.5 | -239.5 | -59.5 | -5.3 | µrad/cm

3.2. construction of the new generation of cage-type current shunts

figure 10 shows the new generation of cage-type shunts from 30 ma up to 10 a developed at cmi. the voltage drop across every shunt is 0.6 v at nominal current, convenient for use either with a pmjtc in the ac-dc current transfer difference measurement set-up established in 2008 [6], or with analogue-to-digital converters (adc) in view of the development of a new sampling wattmeter measurement system. the improvements to the shunt construction resulted from the theoretical analysis of the lumped element model described above. general information about the resistors and pcb material used for their construction is given in table 4.

table 4. general parameters of the new shunts.
nominal current | resistance (ω) | number and value of resistors | type of resistors | pcb type
30 ma | 20 | 3x 60 ω | z201 | ro4350b
100 ma | 6 | 10x 60 ω | 3x s102c, 7x s102k | ro4350b
300 ma | 2 | 30x 60 ω | 10x s102c, 20x s102k | ro4350b
1 a | 0.6 | 50x 30 ω | 17x s102c, 33x s102k | ro4350b
10 a | 0.06 | 100x 6 ω | 33x s102c, 67x s102k | fr4

figure 10. the new generation of cage-type shunts.

the low-current shunts (up to 1 a) were constructed using the high-frequency pcb material ro4350b, with better dielectric properties (permittivity 3.6, loss factor 0.0031) than the previously used fr4 material. the thickness of the pcb was reduced from 2 mm down to 1.524 mm.
ac and dc currents are applied through a transconductance amplifier clarke&hess 8100, which is alternately connected to a dc and ac voltage source (multifunction calibrators fluke or datron) using an automated switch ofmet. the standards are comprised of a shunt loaded by a thermal converter [8]. the uncertainty budget of such a complicated measurement set up covers contributions of the reference standard and its level dependence, the reproducibility of the measured ac-dc difference (standard deviation), influences of the series connection of both standards and measurement set up, the frequency dependence, and the influence of temperature. a detailed description of the uncertainty calculation can be found in [8]. during the measurements of the new generation of the shunts the output of the shunts was loaded by a 90 ω/10 ma pmjtc, which was included in the model. table 5 shows the agreement between the measured and calculated values of the ac-dc differences of the new shunts to be better than 6 ppm at frequencies up to 100 khz. table 6 shows the comparison of the measured ac-dc differences of the original and new shunts. for all shunts a significant reduction of the ac-dc difference was accomplished. the most significant reduction of the ac-dc difference was achieved for the 30 ma shunt, which construction was modified most of all shunts (see section 3.2). following the sensitivity coefficients calculation (see table 2) the 30 ma shunt construction is also most sensitive to the modification of the input quantities in comparison with the other shunts. because of the significant modification of the 30 ma shunt construction and its high sensitivity to these modification its acfigure  11.  the  measurement  system  of  the  ac‐dc  current  transfer  difference.  table 5. calculated and measured values of the ac‐dc difference for the new shunts with associated uncertainties for k=1.  nominal  current  value  ac‐dc difference  frequency 500 hz 1 khz 10 khz 20 khz  50 khz  100 khz 30 ma  20 ω  calculated (ppm)  0.016 0.032 0.320 0.650  1.700  3.400     unc. (ppm)  0.005 0.010 0.096 0.190  0.490  0.980     measured (ppm)  ‐0.4 ‐0.5 1.5 2.9 4.7  7.0     unc. (ppm)  4.4 4.4 4.4 4.4 4.4  4.7 100 ma  6 ω  calculated (ppm)  0.011 0.022 0.022 0.440  1.110  2.240     unc. (ppm)  0.003 0.007 0.067 0.130  0.340  0.680     measured (ppm)  ‐0.9 ‐1.7 0.7 3.4 4.4  6.4     unc. (ppm)  6.4 6.4 6.4 6.5 6.7  7.1 300 ma  2 ω  calculated (ppm)  0.004 0.008 0.075 0.150  0.350  0.620     unc. (ppm)  0.001 0.002 0.023 0.047  0.120  0.230     measured (ppm)  0.9 1.1 1.1 1.9 4.3  5.5     unc. (ppm)  7.9 7.9 7.9 8.0 8.4  8.9 1 a  0.6 ω  calculated (ppm)  0.003 0.006 0.051 0.073  ‐0.039  ‐0.810     unc. (ppm)  0.001 0.002 0.020 0.039  0.098  0.210     measured (ppm)  ‐1.4 ‐0.8 ‐1.5 0.5 1.3  ‐0.3     unc. (ppm)  9.2 9.2 9.2 9.3 9.8  10.4 10 a  0.06 ω  calculated (ppm)  0.003 0.006 0.033 0.004  ‐0.460  ‐2.500     unc. (ppm)  0.000 0.001 0.005 0.013  0.067  0.270     measured (ppm)  0.7 1.5 ‐2.3 ‐1.2  4.0  3.2 unc. (ppm)  11.3  11.3  11.3  11.4  12.1  12.8  acta imeko | www.imeko.org  september 2015 | volume 4 | number 3 | 64  dc difference was decreased by more than 100 ppm. 5. conclusions  a new generation of the cage-type current shunts from 30 ma up to 10 a was developed. construction improvements were done based on the results obtained from the analysis of the lumped element model of the original shunts. 
comparison of the calculated and measured values of the new shunts showed an agreement better than 6 ppm in the acdc difference at frequencies up to 100 khz. future work will address the comparison of the calculated and measured phase angle errors of the new shunts and the development of high current shunts up to 100 a. references  [1] garcocz, m., scheibenreiter, p., waldmann, w., heine, g., “expanding the measurement capability for ac-dc current transfer at bev”, 2004 digest of cpem conf, june 2004, pp.461,462. [2] filipski, p.s., boecker, m., “ac-dc current shunts and system for extended current and frequency ranges”, ieee trans. on instrum. and meas., vol.55, no.4, aug. 2006, pp.1222-1227. [3] voljc, b., lindic, m., lapuh, r., “direct measurement of ac current by measuring the voltage drop on the coaxial current shunt”, ieee trans. on instrum. and meas., vol.58, no.4, april 2009, pp.863-867. [4] voljc, b., lindic, m., pinter, b., kokalj, m., svetik, z., lapuh, r., “evaluation of a 100 a current shunt for the direct measurement of ac current”, ieee trans. on instrum. and meas., vol.62, no.6, june 2013, pp.1675-1680. [5] lind, k., sorsdal, t., slinde, h., “design, modeling, and verification of high-performance ac-dc current shunts from inexpensive components”, ieee trans. on instrum. and meas., vol. 57, no. 1, jan. 2008, pp. 176-181. [6] zachovalová, v. n., “ac-dc current transfer difference in cmi”, 2008 digest of cpem conf., 2008, pp. 362-363. [7] pogliano, u., bosco, g. c., serazio, d., “coaxial shunts as ac– dc transfer standards of current”, ieee trans. on instrum. and meas., vol. 58, no.4, 2009, p. 872-877. [8] zachovalová, v.n., šira, m., streit, j., “current and frequency range extension of ac-dc current tranfer difference measurement system at cmi”, 2010 digest of cpem conf., 2010, pp.605,606. [9] zachovalová, v.n., “on the current shunts modeling”, ieee trans. on instrum. and meas., vol.63, no.6, june 2014, pp.16201627. table 6. comparison of the measured ac‐dc difference values (ppm) of the original and new shunts generation. nominal  current  generation  value  frequency 500 hz 1 khz 10 khz 20 khz  50 khz  100 khz 30 ma  new  20 ω  0 ‐1 2 3 5  7   original  50 ω  ‐1 0 8 20 57  131 100 ma  new  6 ω  ‐1 ‐2 1 3 4  6   original  10 ω  1 ‐1 4 9 20  39 300 ma  new  2 ω  1 1 1 2 4  6   original  3.3 ω  0 0 4 7 13  22 1 a  new  0.6 ω  ‐1 ‐1 ‐1 1 1  0   original  1 ω  1 ‐3 ‐2 2 6  12 10 a  new  0.06 ω  1 1 ‐2 ‐1 4  3   original  0.1 ω  1 ‐1 ‐2 ‐3 ‐4  ‐23 microsoft word article 10 164-1307-1-le.docx acta imeko  february 2015, volume 4, number 1, 61 – 67  www.imeko.org    acta imeko | www.imeko.org  february 2015 | volume 4 | number 1 | 61  identification of dominant error parameters in spectra  measured by a tdemi system  miroslav kamenský, karol kováč  slovak university of technology in bratislava, faculty of electrical engineering and information technology, institute of electrical  engineering, ilkovičova 3, 81219 bratislava, slovak republic      section: research paper   keywords: multiresolution quantization, time domain emi measurement, offset and slope errors, spectrum measurement, error identification  citation: miroslav kamenský, karol kováč, identification of dominant error parameters in spectra measured by a tdemi system, acta imeko, vol. 4, no. 
editor: paolo carbone, university of perugia
received december 16th, 2013; in final form november 15th, 2014; published february 2015
copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was supported by the slovak research and development agency under grant no. apvv-0333-11 and by the slovak grant agency vega under grant no. 1/0963/12
corresponding author: miroslav kamenský, e-mail: miroslav.kamensky@stuba.sk

abstract
multiresolution analog-to-digital converters (mradc) are usually used in time domain electromagnetic interference (tdemi) measuring systems for very fast signal sampling with a sufficient dynamic range. the properties of the spectrum measured by the tdemi system influenced by imperfections in the mradc are analyzed in this paper. errors are caused by imperfect matching of the offset, gain and phase of the circuits used in the parallel input channels typical for the mradc. for deep analyses of mradc behavior, a precise mathematical model has been created using the concept of additive error pulses. furthermore, a dedicated process for the identification of the discrepancy parameters from experimental data is proposed. the identified parameters enter the expressions of the model and enable a side-by-side comparison of experimental and theoretical results.

1. introduction

special measuring systems are being developed for different areas of industry, improving quality in many aspects. today we often require an automated and fast measuring procedure along with high accuracy. measuring electromagnetic interference (emi) spectra has traditionally been a time-consuming procedure. conventional analog emi receivers are based on a superheterodyne principle. here, only one narrow frequency band is transferred to the detector at a time via an intermediate-frequency amplifier. therefore, time-consuming sweeping through the whole measured frequency range is needed, and several tens of minutes are often required to complete a whole emi spectrum measurement. the advantage of such an arrangement is the high dynamic range of the spectrum measurement required by emi standards. for the commercial production of electronic devices, emi measurements are inevitable but time-consuming. automation helps to optimize the measurement process [1] and offers some reduction of the production time; however, it is still limited by the superheterodyne principle of traditional receivers. the introduction of new, faster emi measuring principles would bring significant benefits in time to market and costs, and could support the idea of just-in-time production [2]. the time domain emi (tdemi) system [3] was introduced quite recently. it is based on a multiresolution analog-to-digital converter (mradc) technology, which engages several parallel input channels to achieve the required quality of the system.

figure 1. block structure of a tdemi system (a power splitter feeding parallel channels: ch. 1 with attenuator and adc; ch. 2 with limiter, amplifier and adc; up to ch. x with limiter, amplifier and adc; followed by digital signal processing producing the spectrum).
the block structure of a tdemi device is depicted in figure 1. the power splitter distributes the analog signal to all paths; three channels are common at present. all adcs are very fast sampling ones of the same type and range. separate amplifiers/attenuators provide different signal ranges and voltage resolutions for the individual channels. channel 1 covers the entire range. the range of the next channel is part of the previous one, and therefore the input signal will likely exceed the range of the subsequent channels; here, a limiter protects the adc input from overvoltage. all channels are simultaneously sampled and converted by identical 8- or 10-bit very fast flash adcs. the final discrete value is created by extracting the output of the adc offering the best resolution while its range still covers the actual input value. a short-time fast fourier transform (fft) is finally applied to the sampled data [4] to obtain one representation of the whole frequency spectrum. tdemi devices process the complete spectral content at a time, and the parallel structure of the mradc should still allow a high dynamic range of measurement [5]. some influence of the quantization process on the measured spectra [6] remains also for the mradc [7]. however, practical experience shows that there are more serious error sources in real systems [8]. it is hardly possible to avoid offset or phase shifts between the parallel input channels, as well as differences in gain error. similar problems occur in adc systems with time interleaving or with a reconfigurable structure [9], where offset or gain mismatch of the parallel channels degrades the overall performance. serious signal discontinuities can thus arise at the points where the system switches between channels. the spurious spectrum components generated by those discontinuities significantly restrict the spurious-free dynamic range (sfdr) of a real tdemi device. this paper deals with the identification of tdemi device errors and with the design of a precise error model. imperfections of a real tdemi system are explained in chapter 2. a pulse model of the dominant error of spectrum measurements is discussed in chapter 3. in the next chapter the identification process of the channel discrepancy is presented, which allows finding the model parameters. finally, the proposed error model is compared with experimental results.

2. imperfections of a real tdemi system

the concept of an mradc lies in the use of several parallel adcs, each with a uniform but different quantization step. the minimum quantization step of the multiresolution quantization corresponds to the step of the channel with the lowest range. actually, the system range is divided into subranges with different quantization steps. even if perfectly realized, such a system generates disturbances in the quantized spectrum, as was shown in [7] for a harmonic signal. in the experimental waveform obtained from the tdemi device output and plotted in figure 2a, we can observe a significant quantization error in the parts distant from the zero axis. however, in a real system there are more serious error sources. the main task of the analog input circuits is to split the input signal into multiple paths with precisely set gains. this is a non-trivial task, because ideally a uniform amplification level should be achieved over a frequency range up to several ghz without introducing a different phase shift between channels. channel mismatch compensation techniques have been developed especially for time-interleaved adcs [10].
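as an illustration of the channel-selection rule described above (the finest channel whose range still covers the sample wins), here is a small sketch for a three-channel mradc; the ranges and resolutions are invented example values, not those of a commercial tdemi unit.

```python
import numpy as np

# example three-channel mradc: (full-scale range in volts, bits), finest last
channels = [(1.0, 8), (0.25, 8), (0.0625, 8)]  # assumed example values

def mradc_sample(u):
    """return the quantized value from the finest channel still covering u."""
    chosen = channels[0]                 # channel 1 covers the entire range
    for rng, bits in channels[1:]:
        if abs(u) <= rng:
            chosen = (rng, bits)         # a finer channel still covers the input
    rng, bits = chosen
    q = 2 * rng / 2**bits                # quantization step of the chosen channel
    return q * np.round(u / q)

samples = [mradc_sample(u)
           for u in 0.128 * np.sin(2 * np.pi * np.linspace(0, 1, 16, endpoint=False))]
```

near the zero crossings the finest channel is selected, while the waveform extremes fall back to coarser channels, which is exactly where the larger quantization error of figure 2a appears.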
on the other hand, publications describing the properties of tdemi measurement systems deal mostly with a perfectly matched power splitter and digital processing module. in a real system, the amplitude and phase frequency characteristics of the channels are not perfectly matched over the whole frequency range. moreover, it is hardly possible to avoid offset and slope differences between the channels, as the gain must be different in each channel. in the example of the time representation of the signal (figure 2a, frequency 175 khz, amplitude 0.128 v), such discontinuities are especially evident in the negative portion of the waveform, around the threshold of ca. 60 mv between the upper two channels of a three-channel mradc. error components resulting from those discontinuities significantly restrict the sfdr of the spectrum measurement. we will demonstrate that the channel discrepancy is the main contribution to the spurious components visible in the measured spectrum depicted in figure 2b, obtained for the same harmonic input signal.

figure 2. experimental harmonic signal measured by a real tdemi system: a) time representation inside the tdemi unit; b) measured spectrum.

experiments suggest that the main spurious components of tdemi measurement results are caused by discontinuities in the time-domain signal representation. a harmonic input signal may be considered suitable for modeling the measured interference of devices operating on the switched-mode power supply principle, where the disturbance is like a mixture of sinusoids. we will assume a two-channel mradc, as the major influence seems to be the mismatch between the upper two channels. differences between channels result in disturbances similar to time-domain error pulses (and not only for a harmonic input), as sketched in figure 3. discontinuities present in the waveform reconstructed from the sequence of samples obtained by the mradc can be modeled as an additive impulse error signal. the spectrum of the impulse disturbance can then be used to precisely model the resulting spurious spectral components. this is the fundamental assumption used in our analysis: two rectangular pulses represent the offset between channels, while two cosine-shaped pulses describe the gain and phase discrepancy, as the simulation sketch below illustrates.
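the following sketch builds the additive error signal of the pulse model for one period of a harmonic input under assumed mismatch values (the offset, gain and phase of channel 2 are placeholders): wherever channel 2 is selected, the error equals a rectangular contribution (offset) plus a cosine-shaped contribution (gain and phase discrepancy).

```python
import numpy as np

f0, a1 = 175e3, 0.128          # input frequency and amplitude (from figure 2a)
fs = 64 * f0                   # assumed sampling rate
t = np.arange(0, 1 / f0, 1 / fs)
u_ideal = a1 * np.cos(2 * np.pi * f0 * t)

# assumed channel-2 mismatch: offset, gain and phase (placeholder values)
a_o, g2, phi2 = 2e-3, 1.01, 0.01
u_ch2 = a_o + g2 * a1 * np.cos(2 * np.pi * f0 * t + phi2)

threshold = 0.060              # switching threshold between the upper channels
on_ch2 = np.abs(u_ideal) <= threshold   # intervals where channel 2 is used

# additive error pulses: rectangular (offset) + cosine-shaped (gain/phase)
error = np.where(on_ch2, u_ch2 - u_ideal, 0.0)
u_out = u_ideal + error        # reconstructed mradc output waveform
```

the mask produces exactly two error pulses per period (one around each zero crossing), matching the two pairs of pulses used in the analytical model of the next section.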
therefore, speaking about the spectrum, we are trying to find the coefficients $u_n = a_n - \mathrm{j}b_n$ ($n = 0, 1, 2, \dots$; $b_0 = 0$) of the fourier series. the error model is composed of two rectangular and two cosine pulses, and at first we need to find the spectral components of all the pulses. the rectangular pulses describe the offset between channels, i.e. the offset of the erroneous cosine original (the offset of the yellow waveform in figure 3). the zero coefficient of one rectangular pulse is

$$a_{\mathrm{rp},0}(a, \omega_0, t_\mathrm{r}, t_\mathrm{f}) = \frac{a\,\omega_0}{2\pi}\,(t_\mathrm{f} - t_\mathrm{r})\,, \tag{1}$$

while for $n > 0$ we can write

$$a_{\mathrm{rp},n}(a, \omega_0, t_\mathrm{r}, t_\mathrm{f}) = \frac{a}{n\pi}\left[\sin(n\omega_0 t_\mathrm{f}) - \sin(n\omega_0 t_\mathrm{r})\right], \tag{2}$$

$$b_{\mathrm{rp},n}(a, \omega_0, t_\mathrm{r}, t_\mathrm{f}) = \frac{a}{n\pi}\left[\cos(n\omega_0 t_\mathrm{r}) - \cos(n\omega_0 t_\mathrm{f})\right], \tag{3}$$

where $\omega_0$ is the angular frequency of the pulses, $a$ is the amplitude of the rectangular pulse, and $t_\mathrm{r}$ and $t_\mathrm{f}$ are the times of the rising and falling edge. the cosine pulses of the model involve the gain and phase discrepancy. the spectral components of one cosine pulse are

$$a_{\mathrm{cp},0}(a, \omega_0, \varphi, t_\mathrm{r}, t_\mathrm{f}) = \frac{a}{2\pi}\left[\sin(\omega_0 t_\mathrm{f} + \varphi) - \sin(\omega_0 t_\mathrm{r} + \varphi)\right], \tag{4}$$

$$a_{\mathrm{cp},1}(a, \omega_0, \varphi, t_\mathrm{r}, t_\mathrm{f}) = \frac{a\,\omega_0}{2\pi}(t_\mathrm{f} - t_\mathrm{r})\cos\varphi + \frac{a}{4\pi}\left[\sin(2\omega_0 t_\mathrm{f} + \varphi) - \sin(2\omega_0 t_\mathrm{r} + \varphi)\right], \tag{5}$$

$$b_{\mathrm{cp},1}(a, \omega_0, \varphi, t_\mathrm{r}, t_\mathrm{f}) = -\frac{a\,\omega_0}{2\pi}(t_\mathrm{f} - t_\mathrm{r})\sin\varphi + \frac{a}{4\pi}\left[\cos(2\omega_0 t_\mathrm{r} + \varphi) - \cos(2\omega_0 t_\mathrm{f} + \varphi)\right], \tag{6}$$

and for $n > 1$

$$a_{\mathrm{cp},n}(a, \omega_0, \varphi, t_\mathrm{r}, t_\mathrm{f}) = \frac{a}{\pi(n^2 - 1)}\Big\{n\left[\mathrm{cps}_n(\omega_0, t_\mathrm{f}, \varphi) - \mathrm{cps}_n(\omega_0, t_\mathrm{r}, \varphi)\right] - \left[\mathrm{spc}_n(\omega_0, t_\mathrm{f}, \varphi) - \mathrm{spc}_n(\omega_0, t_\mathrm{r}, \varphi)\right]\Big\}, \tag{7}$$

$$b_{\mathrm{cp},n}(a, \omega_0, \varphi, t_\mathrm{r}, t_\mathrm{f}) = \frac{a}{\pi(n^2 - 1)}\Big\{\left[\mathrm{sps}_n(\omega_0, t_\mathrm{r}, \varphi) - \mathrm{sps}_n(\omega_0, t_\mathrm{f}, \varphi)\right] + n\left[\mathrm{cpc}_n(\omega_0, t_\mathrm{r}, \varphi) - \mathrm{cpc}_n(\omega_0, t_\mathrm{f}, \varphi)\right]\Big\}. \tag{8}$$

in this case, besides the frequency and the times of the edges, also the amplitude $a$ of the cosine original and its phase $\varphi$ enter the arguments of the expressions. for $n > 1$ the formulas are more complicated; therefore we have defined the auxiliary functions of table 1, which allow the simplified notations (7) and (8).

table 1. auxiliary functions.
| name | formula |
|---|---|
| cps_n(ω, t, φ) | cos(ωt + φ) sin(nωt) |
| spc_n(ω, t, φ) | sin(ωt + φ) cos(nωt) |
| cpc_n(ω, t, φ) | cos(ωt + φ) cos(nωt) |
| sps_n(ω, t, φ) | sin(ωt + φ) sin(nωt) |

figure 3. basic idea of the pulse error model.

to write the final expression of the spectral error components we follow the idea of two actual error pulses, each mathematically represented by a pair of pulses. the times of the rising and falling edge are $t_{\mathrm{r},1}$, $t_{\mathrm{f},1}$ for the first error pulse and $t_{\mathrm{r},2}$, $t_{\mathrm{f},2}$ for the second. both error pulses are described by the sum of a rectangular pulse and a cosine pulse, to cover the offset, gain and phase error. considering that $a_n$ is the real part and $-\mathrm{j}b_n$ the imaginary part of the spectral coefficient $u_n$ (with indices rp and cp used for the rectangular and cosine pulse), the error model states

$$u_{\mathrm{e},n}(a_\mathrm{o}, a_\mathrm{co}, \omega_0, t_{\mathrm{r},1}, t_{\mathrm{f},1}, t_{\mathrm{r},2}, t_{\mathrm{f},2}, \varphi_\mathrm{co}) = u_{\mathrm{rp},n}(a_\mathrm{o}, \omega_0, t_{\mathrm{r},1}, t_{\mathrm{f},1}) + u_{\mathrm{rp},n}(a_\mathrm{o}, \omega_0, t_{\mathrm{r},2}, t_{\mathrm{f},2}) + u_{\mathrm{cp},n}(a_\mathrm{co}, \omega_0, \varphi_\mathrm{co}, t_{\mathrm{r},1}, t_{\mathrm{f},1}) + u_{\mathrm{cp},n}(a_\mathrm{co}, \omega_0, \varphi_\mathrm{co}, t_{\mathrm{r},2}, t_{\mathrm{f},2}). \tag{9}$$

the amplitude $a_\mathrm{o}$ of the rectangular pulses is just the offset of channel 2 towards channel 1. the amplitude and phase of the cosine pulses are taken from the erroneous cosine original and can be calculated using phasor rules. if the reference channel 1 has ideal parameters (zero phase and a gain of one), the ideal output waveform is $a_1\cos(\omega_0 t)$, or $a_1\angle 0$ in phasor notation. the amplitude $a_1$ is the amplitude of the input signal, or rather the amplitude of the signal obtained solely from channel 1.
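as a cross-check of the rectangular-pulse coefficients (1)-(3), the following minimal sketch compares the analytic expressions with direct numerical integration of the fourier integrals; the normalization assumed is the standard trigonometric series (a0 as the mean value, factor 2/T for n ≥ 1), and all names are illustrative:

```python
import numpy as np
from scipy.integrate import quad

def rect_pulse_coeffs_numeric(a, w0, tr, tf, n):
    """fourier coefficients of a rectangular pulse of amplitude a on (tr, tf),
    period T = 2*pi/w0, computed by direct integration."""
    T = 2 * np.pi / w0
    f = lambda t: a if tr <= t < tf else 0.0
    if n == 0:
        an, _ = quad(f, 0.0, T, points=[tr, tf])
        return an / T, 0.0
    an, _ = quad(lambda t: f(t) * np.cos(n * w0 * t), 0.0, T, points=[tr, tf])
    bn, _ = quad(lambda t: f(t) * np.sin(n * w0 * t), 0.0, T, points=[tr, tf])
    return 2 * an / T, 2 * bn / T

def rect_pulse_coeffs_analytic(a, w0, tr, tf, n):
    """closed forms (1)-(3) for comparison."""
    if n == 0:
        return a * w0 * (tf - tr) / (2 * np.pi), 0.0
    an = a / (n * np.pi) * (np.sin(n * w0 * tf) - np.sin(n * w0 * tr))
    bn = a / (n * np.pi) * (np.cos(n * w0 * tr) - np.cos(n * w0 * tf))
    return an, bn

w0 = 2 * np.pi * 174.832e3
for n in range(4):  # the two results should agree to integration accuracy
    print(n, rect_pulse_coeffs_numeric(1.0, w0, 1.18e-6, 1.83e-6, n),
             rect_pulse_coeffs_analytic(1.0, w0, 1.18e-6, 1.83e-6, n))
```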
then the phasor of the erroneous cosine original is

$$a_\mathrm{co}\angle\varphi_\mathrm{co} = a_1 g_2 \angle\varphi_2 - a_1\angle 0\,, \tag{10}$$

where $g_2$ is the gain of channel 2 and $\varphi_2$ is its phase shift. we can make some further simplifications in the proposed model (9). as we assume ideal parameters of channel 1, and if the switching between channels worked ideally according to channel 1, the rectangular pulses in equation (9) would be the same. then they could be replaced by one rectangular pulse signal with double frequency for even n, and omitted entirely for odd n [8]. in other words, we can expect that the offset discrepancy will impact only the even error harmonics. on the other hand, if the offset is the dominant mismatch parameter, a simplified model can be used with only rectangular pulses (the rec model in [8]). then the amplitude $a_\mathrm{o}$ should be calculated as the mean of the samples within the real error pulse, and separately for each rectangular pulse if the real error pulses are not perfectly square.

4. error identification

to be able to apply the proposed model to experimental data we need to extract the main error parameters from the measured waveform (figure 2a). the parameters required by the model are the precise signal frequency f0, the real signal amplitude a1, the offset between channels ao, the gain error described by the gain g2 of the second channel, and the phase shift between channels φ2. also the times of the rising and falling edges of both error pulses, tr,1, tf,1, tr,2, tf,2, should be estimated from the experimental data. to obtain precise results and a repeatable, formal estimation process we propose a dedicated error identification procedure. it can be summarized in three main steps: 1. identification of edges; 2. curve fitting; 3. determination of the parameters of the error model.

4.1. identification of edges

the moments of the error pulse edges are the points where the system switches from one channel to another. from figure 2 it is obvious that the noise present in the sampled waveform is a good indicator of the active channel. unfortunately, we were able to obtain only a screenshot from the commercial tdemi unit; therefore we first needed to extract the data from the original bitmap. our method based on colour shades provided the data drawn in black in figure 4a. the noise was slightly suppressed here by the data extraction algorithm; however, the difference between channels is still apparent. the identification of edges is thus based on the signal noise and is done in the following steps. 1. noise identification: to extract the noise from the measured signal we smoothed the waveform using a moving average (dark gray line in figure 4a). subsequently we separated the noise by subtracting the smoothed waveform from the original. the noise itself is depicted in light gray in figure 4a, and in figure 4b at a better scale. 2. noise thresholding: parts of the noise data with low and high variance relate to the currently used input channel. to identify the moments of channel change we need to find the points where the variance changes. the amount of noise at a given time point was estimated in figure 4a from the moving root mean squared value of the noise (mrmsn, turquoise solid curve). finally, by thresholding (with 0.5 mv) we identified the edges depicted by the turquoise dashed verticals. 3. edge time adjustment: the edges found by mrmsn thresholding are visibly shifted away from the exact points of the noise variance changes. this follows from the nature of squaring, which favors higher values. therefore we continued searching for the precise edge times in the vicinity of the previously found approximate edges.
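a compact sketch of steps 1 and 2 above (noise extraction by moving-average smoothing, then thresholding of the moving rms of the noise); the window length is an assumed parameter, and the 0.5 mv threshold is the value quoted in the text:

```python
import numpy as np

def find_edges(u, fs, win=31, threshold=0.5e-3):
    """edge detection from channel-dependent noise: smooth, subtract,
    moving rms, threshold; returns candidate edge times in seconds."""
    kernel = np.ones(win) / win
    smooth = np.convolve(u, kernel, mode="same")                  # step 1: smoothing
    noise = u - smooth                                            # noise = original - smoothed
    mrms = np.sqrt(np.convolve(noise**2, kernel, mode="same"))    # moving rms of the noise
    above = mrms > threshold                                      # step 2: thresholding
    edges = np.flatnonzero(np.diff(above.astype(int)))            # variance-state flips
    return edges / fs, noise, mrms
```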
this time we use the moving average of the absolute noise values (maan, blue solid curve in figure 4a and figure 4b). the maan does not offer as good a separation for thresholding as the mrmsn; however, local extremes are located at the points of the exact edges. indeed, there is a good chance that exactly at the point of channel change a significant noise value occurs, due to the jump in the final waveform caused by the channel discrepancy. to increase the impact of the actual noise value we used weighted averaging for the maan evaluation. sometimes a special process of final edge discrimination may be needed if spurious edges are also found. in both figure 4a and figure 4b the precisely adjusted edges are plotted as blue dashed verticals.

4.2. curve fitting

after the identification of the edges we are able to separate the parts of the waveform samples corresponding to the first or second channel. (at this point we also set the time of the first sample to zero.) the basic function for fitting is a cosine, and we want to identify four parameters. the frequency should be the same for both channels, while the amplitude, offset and phase are independent. 1. channel data separation: the points of separation are the previously adjusted edge moments. in some cases there are fluctuations near the switching point; therefore it is possible, and often suitable, to omit a few samples on each side of the given edge. we also need to set the channel order: channel 1 data are green and channel 2 data are blue in figure 5. 2. setting initial conditions: nonlinear curve fitting methods require initial data for starting the numeric algorithm. the main input parameter here is a starting frequency, which might be the set frequency of the test signal. knowing the frequency, the other starting parameters (amplitude, offset, phase) can be calculated by linear fitting applied to the whole data set. 3. nonlinear fitting: to identify the frequency of the cosine original we need to use a nonlinear curve fitting algorithm. the data from channel 2 contain less noise; therefore we considered the channel 2 output more suitable for the determination of the frequency. we recommend a least squares method for the fitting, and we used unweighted non-robust least squares implemented within the matlab statistics library functions (gauss-newton algorithm, [11]). the procedure also yields the amplitude, offset and phase of the signal from channel 2. the cosine fitted to the channel 2 data is depicted as a turquoise solid waveform in figure 5. 4. linear fitting: once the frequency of the cosine signal being identified is known, a simple linear curve fitting algorithm can be used. the amplitude, offset and phase of the cosine signal approximating the channel 1 data can be found from the solution of a pertinent linear system of equations using matrices in matlab. in figure 5 this fitted function is depicted as a light green solid waveform.

4.3. determination of the parameters of the error model

the identified parameters have to be further accommodated to the error model. note that we want to use the channel 1 signal as the reference, and therefore e.g. the time values have to be adjusted so that we force zero offset and phase for this channel. the discrepancy parameters of channel 2 will be related to channel 1. 1. parameters of the input reference signal: the frequency identified during the curve fitting process is simply the frequency f0 of the input signal, which enters the error model.
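the two-stage fitting of section 4.2 can be sketched as follows; scipy's curve_fit stands in for the matlab least squares routine used in the paper, and the synthetic channel data are illustrative stand-ins for the separated samples:

```python
import numpy as np
from scipy.optimize import curve_fit

# synthetic stand-ins for the separated channel samples (t1, u1) and (t2, u2)
rng = np.random.default_rng(1)
f_true = 174.832e3
t1 = np.sort(rng.uniform(0.0, 2.0 / f_true, 300))
u1 = 0.1079 * np.cos(2 * np.pi * f_true * t1) + 2e-3 * rng.standard_normal(t1.size)
t2 = np.sort(rng.uniform(0.0, 2.0 / f_true, 300))
u2 = -7e-3 + 0.13 * np.cos(2 * np.pi * f_true * t2 + 0.05) \
     + 5e-4 * rng.standard_normal(t2.size)

def cosine(t, a, f, offset, phi):
    return a * np.cos(2 * np.pi * f * t + phi) + offset

# step 3: nonlinear fit on the lower-noise channel-2 data; the starting
# frequency p0[1] is the set frequency of the test signal
(a2, f0, o2, phi2), _ = curve_fit(cosine, t2, u2, p0=[0.1, 175e3, 0.0, 0.0])

# step 4: with f0 fixed, channel 1 reduces to the linear model
# u1 = c*cos(w0 t) + s*sin(w0 t) + offset, solvable by ordinary least squares
w0 = 2 * np.pi * f0
m = np.column_stack([np.cos(w0 * t1), np.sin(w0 * t1), np.ones_like(t1)])
(c, s, o1), *_ = np.linalg.lstsq(m, u1, rcond=None)
a1, phi1 = np.hypot(c, s), np.arctan2(-s, c)  # back to amplitude/phase form
```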
in our case we calculated f0 = 174.832 khz. as channel 1 is the reference one, the amplitude a1 = 0.1079 v identified from this channel is also assigned to the input, or ideal output, signal. 2. parameters of the channel discrepancy: by comparing both channels we calculate for channel 2: the offset ao = -6.9920 mv, its gain g2 = 1.2078 and the phase φ2 = 3.0647. 3. uniform set of edge times: from the previously identified edge moments we need to resolve the relative times towards the start of a given period of the reference signal. the same edge may be identified several times if more input periods are contained in the test interval. four edge times per period are expected for the cosine input signal, and from figure 4b one can notice that the first three are identified twice. in that case we calculate the final time of a given edge as the simple mean of all corresponding relative edge times. our final uniform set of four edges is: tr,1 = 1.1795 µs, tf,1 = 1.8320 µs, tr,2 = 3.7667 µs, tf,2 = 4.5713 µs.

figure 4. detection of the times of edges: a) original and smoothed mradc output signal with detected switching points; b) noise data used for thresholding and adjusting of edges.

figure 5. curve fitting performed separately for the data from the first and second channel.

5. results and discussion

the parameters of the input signal, the channel discrepancy and the switching times were used in the simulation as well as in the analytical model. in the time domain we used the simulation model. the error pulses obtained from the model for two periods of the input signal are depicted in red in figure 6. the mradc output signal disturbed by those pulses is in dark gray. the light gray dots in the background are the original experimental data, only slightly shifted to impose the ideality of channel 1. the model waveform visibly fits the experimental data in the time range where they are available. blue dashed verticals represent all 7 edges identified from the experiments and adjusted for the used point of zero time. in the simulation model we used one set of four edges for both periods; therefore e.g. the time of the second edge of the red error pulses (tf,1 = 1.8320 µs) is the mean of the time shifts of the second and sixth blue vertical from the start of the given period. after the identification of the parameters entering the pulse model, and after the verification in the time domain, we are ready to confront the model results with the experimental spectrum. the error spectral components evaluated from the analytical pulse model (9) are depicted in figure 7a as red circles. only the gray "x" represents the measured component, i.e. the amplitude of the input harmonic signal. the simulation error spectrum (gray lines with dot markers) is almost identical to the results of the analytical expressions. black circles are the original experimental error spectral components (from figure 2b). apparently there is a visible similarity between the model and the experimental results. we can observe a small shift of the experimental points from the theoretical ones; however, the overall trend, i.e. the shape of the fluctuations, is satisfactorily estimated by the theoretical expressions. the correlation coefficient is as high as 0.9130 in the compared range of approximately one frequency decade.
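the spectrum comparison described in this section can be reproduced with a short sketch: correlation of the two spectra in db, before and after removing each spectrum's own straight-line fit over log-frequency, which isolates the similarity of the fluctuations from the common decay (the example arrays are illustrative):

```python
import numpy as np

def detrended_correlation(f_hz, exp_db, model_db):
    """correlation of two spectra in db: raw, and after subtracting each
    spectrum's own line fit over log10(frequency)."""
    x = np.log10(f_hz)
    exp_flat = exp_db - np.polyval(np.polyfit(x, exp_db, 1), x)
    model_flat = model_db - np.polyval(np.polyfit(x, model_db, 1), x)
    r_raw = np.corrcoef(exp_db, model_db)[0, 1]        # includes the common decay
    r_fluct = np.corrcoef(exp_flat, model_flat)[0, 1]  # fluctuations only
    return r_raw, r_fluct

# illustrative spectra over roughly one frequency decade
f = np.logspace(6, 7, 40)
model = 80.0 - 20.0 * np.log10(f / 1e6) + 3.0 * np.sin(np.log10(f))
exp = model + np.random.default_rng(2).normal(0.0, 1.5, f.size)
print(detrended_correlation(f, exp, model))
```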
note that if we used only rectangular pulses in the analytical model, the correlation was 0.6928. if we compare the experimental results to their own line fit (black solid line), the correlation coefficient is 0.8188. so the overall decay of the error spectrum is not the whole similarity: the value of 0.9130 also embraces the similarity in the fluctuations. to compare only the shape of the fluctuations we decided to subtract the overall decay from the spectral components. in figure 7a the lines fitted through the points are drawn as black or red solid lines for the experimental or model results, respectively. subsequently, in figure 7b only the differences from this overall decay are depicted. the experimental data are in black here, while the model results are again in red. the similarity is more visible now, and the calculated correlation coefficient of 0.7614 is not influenced by the decay of the spectral components with rising frequency.

figure 6. time representation of the simulated error pulse signal and the distorted mradc output waveform.

figure 7. comparison of the experimental error spectrum and the error components obtained from the pulse model: a) experimental, simulation and theoretical spectra; b) differences from the line fit.

6. conclusions

a precise pulse error model for the estimation of the error spectral components disturbing measurements performed by a time domain electromagnetic interference (tdemi) measuring system was presented in the paper. the analytical model was expressed for a two-channel system and a harmonic input signal. furthermore, a consistent process for the identification of the error parameters from experimental data has been proposed, based on the signal noise. the comparison of the model and experimental results has shown that the channel discrepancy represented by the analytical model has a dominant impact on the error components occurring in the measured spectrum. the shape changes in the experimental spurious components were adequately reproduced by the model, while the overall decay was similar too. the differences and the small shift between the compared values can be explained by the simplifications used in the model, the limited interval of experimental time-domain data processed during the identification phase, windowing and asynchronous sampling in a real tdemi unit, etc. the designed model could be useful for the further development and improvement of the technology behind tdemi. it could help to find the right switching points for the expected type of input signal, or to indicate critical conditions for the tdemi measurement method. the proposed process of error parameter identification helps to find realistic values and intervals of the parameters describing the discrepancy. the estimation of the error behaviour is a necessary step towards a future search for error correction methods.

acknowledgement

the work presented in this paper was supported by the slovak research and development agency under grant no. apvv-0333-11 and by the slovak grant agency vega under grant no. 1/0963/12.

references

[1] o. čičáková, p. štefanička, efficiency of automated measurement systems, measurement science review, 6 (4) (2006), pp. 50-53. [2] b. moric milovanovic, b. sisek, m.
kolakovic, just in time concept as a means for achieving competitive advantage in the virtual economy, proceedings of the 22nd international daaam symposium, daaam international vienna, austria, 2011, pp. 1105-1106. [3] s. braun, p. russer, the dynamic range of a time-domain emi measurement system using several parallel analog to digital converters, proceedings of the 16th international zurich symposium on electromagnetic compatibility, swiss federal institute of technology zurich, switzerland, 2005, pp. 203-208. [4] f. krug, d. mueller, p. russer, signal processing strategies with the tdemi measurement system, ieee transactions on instrumentation and measurement, 53 (5) (2004), pp. 1402-1408. [5] s. braun, p. russer, a low-noise multiresolution high-dynamic ultra-broad-band time-domain emi measurement system, ieee transactions on microwave theory and techniques, 53 (11) (2005), pp. 3354-3363. [6] t. a. c. m. claasen, a. jongepier, model for the power spectral density of quantization noise, ieee transactions on acoustics, speech and signal processing, assp-29 (4) (1981), pp. 914-917. [7] m. kamenský, k. kováč, g. války, improvement in spectral properties of quantization noise of harmonic signal using multiresolution quantization, ieee transactions on instrumentation and measurement, 61 (11) (2012), pp. 2888-2895. [8] m. kamenský, k. kováč, g. války, model of errors caused by discrepancies of input channels in multiresolution adc, imeko 2013: tc-4 symposium on measurements of electrical quantities and 17th iwadc workshop on advances in instrumentation and sensors interoperability, barcelona, spain, 2013, pp. 178-183. [9] w. carvajal, w. van noije, an optimization-based reconfigurable design for a 6-bit 11-mhz parallel pipeline adc with double-sampling s&h, international journal of reconfigurable computing, article id 786205, 2012, 17 pages. [10] h. a. nawaz, a. sharif, m. sharif, comparative survey on time interleaved analog to digital converter mismatches compensation techniques, research journal of recent sciences, 2 (9) (2013), pp. 95-100. [11] h. wen, j. ma, m. zhang, g. ma, the comparison research of nonlinear curve fitting in matlab and labview, 2012 ieee symposium on electrical & electronics engineering (eeesym), kuala lumpur, malaysia, 2012, pp. 74-77.

acta imeko issn: 2221-870x june 2023, volume 12, number 2, 1-6

effects of saliva on additive manufacturing materials for dentistry applications: experimental research using flexural strength analysis

tommaso zara1, lorenzo capponi2, francesca masciotti3, stefano pagano3, roberto marsili4, gianluca rossi4
1 cisas g. colombo, university of padova, via venezia 15, 35131 padova, italy
2 department of aerospace engineering, university of illinois at urbana-champaign, 104 s. wright st., 61801 urbana, illinois, usa
3 odontostomatological university centre, department of medicine and surgery, university of perugia, perugia, italy
4 department of engineering, university of perugia, via g.
duranti 93, 06122 perugia, italy

section: research paper
keywords: additive manufacturing; dentistry; flexural strength; polymers; 3d printing
citation: tommaso zara, lorenzo capponi, francesca masciotti, stefano pagano, roberto marsili, gianluca rossi, effects of saliva on additive manufacturing materials for dentistry applications: experimental research using flexural strength analysis, acta imeko, vol. 12, no. 2, article 14, june 2023, identifier: imeko-acta-12 (2023)-02-14
section editor: laura fabbiano, politecnico di bari, italy
received march 18, 2023; in final form april 13, 2023; published june 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: lorenzo capponi, e-mail: lcapponi@illinois.edu

abstract: accurate and reliable results in orthodontics heavily depend on selecting the right impression materials. with the rise of digital technology and additive manufacturing techniques, it has become necessary to experimentally characterize the materials used to design prosthetic bases. in this study, the mechanical properties of polyetheretherketone, nylon6, nylon12, and polypropylene are analyzed as impression materials commonly used in dentistry applications. specifically, the effect of exposure to working environment conditions on their flexural elastic modulus is investigated by means of a 3-point bending test performed on virgin materials and on samples immersed in saliva for 72 hours. the proposed approach revealed a significant loss in mechanical performance. these findings have significant implications for the proper selection and use of am materials in dental applications.

1. introduction

in orthodontics, the accurate depiction of a patient's dentition is essential to properly identify and diagnose malocclusions, to plan out the course of action, to aid with assessments, and to track the effectiveness of medical treatments [1]. for this purpose, in the dentistry field, impression materials are typically used to accurately replicate the shape of the oral structures (including teeth), creating a negative form from which the cast mold is generated [1]. in the early days of dentistry, materials known as impression compounds were commonly used to make casts of the oral cavity [1]. however, these materials were rigid and did not allow for the creation of accurate casts that could reproduce all the tissues of the oral cavity [2]. as a result, there was a growing need for materials that could remain elastic even after setting. in the mid-20th century, the use of hydrocolloids became widespread due to their ability to provide the required elasticity and to enable the creation of accurate casts of undercuts [3]. over time, other materials, such as polyvinyl siloxane (pvs), have been introduced, which exhibit less shrinkage over time compared to hydrocolloids, and these advancements have greatly improved the accuracy and reliability of dental impressions [4]. in the late 1990s, the most used impression materials in dentistry became addition reaction silicone, polyether, and reversible hydrocolloid [4], [5]. the ideal impression material should possess certain characteristics that make it suitable for use in the clinical environment. these materials must adapt to the oral structures, resist tearing, and be easily removable without causing trauma to the surrounding tissues [3]. the material's physical and mechanical properties must also be carefully considered, to ensure suitability for the clinical application and to provide optimal precision and marginal adaptation [3], [4], [6]. moreover, the ability of the material to
flow and adapt to the tooth structure is crucial for the creation of a quality restoration. therefore, the impression material must possess good fluidity and moisture tolerance to effectively slip into and imprint itself on the tooth structure that needs to be copied [3]. in fact, the marginal precision of an in-vitro dental restoration is typically 50 µm [4]. in summary, the selection of the appropriate impression material is essential to achieving accurate and reliable results in dentistry. although impressions are widely used in dentistry due to their simplicity, ease of use, and cost-effectiveness, air bubbles may be incorporated during the mixing stage, leading to inaccuracies in the model. furthermore, the typical impression process can be messy, and traces of material may remain on the patient's teeth, necessitating removal [2]. despite the development of new impression materials, it remains challenging to eliminate the possibility of human error at each stage of the process [7]. additionally, the long fabrication time can extend the chairside or intraoperative period. plaster casts, which are commonly made from impressions, can be damaged and require storage space. patients may also need multiple models to observe the progress of treatment [7]. overall, while impressions have several advantages, they also have limitations and drawbacks that must be considered. because of this, the development of new materials and techniques continues to improve the accuracy and efficiency of the manufacturing process, but careful attention must be paid to all stages of the process to achieve optimal results. with the advent of new technologies, it is now possible to obtain a 3d representation of oral structures and tissues using high-resolution and high-fidelity scanners. this technology enables the storage of 3d files in digital format, providing the opportunity to reuse them when necessary [3], [4]. this alternative approach offers several advantages over traditional methods, including a reduced risk of infection, improved patient comfort, and shorter chair time. additionally, displaying the 3d images to patients during pre-treatment can simplify the explanation of the therapy and allow for greater patient involvement in the decision-making process [7]. intraoral scanners capture jaw and tooth geometries in digital format, which can then be processed using computer-aided design (cad) software. this software enables manipulation of the 3d data and facilitates the development of the final product through computer-aided manufacturing (cam). in the dental-technical field, cad/cam technology has the potential to bypass the laboratory stage and allow for the fabrication of dental restorations directly at the chairside. this approach offers significant time savings and can result in one-appointment dental restorations [8].
in dentistry, there are two main cad/cam techniques: subtractive (milling) manufacturing (mm) and additive manufacturing (am). mm involves removing material from a block to produce a prosthesis, typically using metals such as cobalt-chromium (co-cr), zirconia (zro2) or titanium (ti), or polymers such as methyl methacrylate (pmma) or polyetheretherketone (peek) [9]. this method provides the highest accuracy in prosthesis production and is commonly used in dentistry. am, on the other hand, is an industrial process that builds the part by adding material layer upon layer based on a 3d model. this method allows for faster production and is suitable for highly customized manufacturing, with the ability to create irregular grooves, crannies, valleys, and bone-like morphology. as a result, am is ideal for individual dental implants [6], [10]. however, due to the inferior flexural strength of printed prostheses compared to mm, am is typically suggested for interim crowns and fixed partial dentures, to avoid extended mastication periods [9]. studies have shown that am can produce durable dental frameworks and can reduce material waste by up to 40 % compared to mm [10]. additionally, up to 98 % of the waste can be reused in future manufacturing processes, thereby reducing the environmental impact [10]. furthermore, compared to milling technology, 3d printing offers a wider range of usable materials and more flexible machines, making it an attractive technology for new applications in the dentistry field [11], [12]. of all the available techniques, the two most preferred additive manufacturing technologies in dentistry are fused deposition modelling (fdm) and selective laser sintering (sls) [12]. however, the use of metals in dental implants is not recommended [13]. despite their exceptional mechanical properties, the biocompatibility of metals is a known limitation. furthermore, materials such as stainless steel exhibit poor corrosion resistance in comparison to titanium, which can lead to allergic reactions. in addition to biocompatibility, aesthetic appearance is an important consideration in dental implant design [14]. for these reasons, polymers have emerged as a promising alternative to metals, offering improved aesthetics. to further enhance the properties of polymers, ceramic and fiber reinforcements have been incorporated into polymer matrices, resulting in polymer matrix composites. compared to metals, polymer composites have a significantly lower weight, which is one of their major advantages [14]. the elastic modulus of metals is higher than that of bone, which can lead to stress shielding and failure if the mechanical load is not well distributed along the adjacent tooth framework [14]. therefore, the use of polymer composites in dental implant design has the potential to offer improved biocompatibility, aesthetics, and mechanical properties [14]-[16]. the literature lacks extensive research in which the properties of materials generally used in am processes are investigated in working environments for dentistry applications. in fact, studies with saliva in dentistry typically assess the loss of color from an aesthetic point of view, the increase of sample mass, or the number of bacteria settling on the specimen. the evaluation of performance in terms of elastic modulus before and after the material's immersion is a new proposal in the dentistry field. this research aims to extend the knowledge of the properties and performance of polymers for prosthetic bases in dentistry.
by means of flexural strength analysis, this study focuses on the experimental characterization of the mechanical properties of four different polymers: polyetheretherketone (peek), nylon6, nylon12, and polypropylene (pp), whose samples are obtained by fdm and sls manufacturing processes. as the digestion phase starts in a secreted-saliva environment, a comparison between the flexural moduli of materials immersed for 72 hours in an artificial saliva solution at an ambient temperature of 25 °c and those of the virgin materials is also presented. in the long term, this work will enable the possibility of creating customized 3d-printed prostheses directly in the dental office, improving results in the orthodontics field.

2. materials and methods

in this research, the properties and the behaviour of peek, nylon6, nylon12, and pp materials are investigated. peek has high processability, and 3d printing makes it suitable for fabricating structures for the oral cavity. peek exhibits remarkable resistance to corrosion and can maintain its mechanical characteristics at temperatures as high as 120 °c. when used in a body fluid environment at 37 °c, peek's wear is reduced, thereby minimizing the release of harmful particles and decreasing the immune response. one of peek's main advantages is its young's modulus, which is similar to that of cortical bone (3-4 gpa) [17]. additionally, by incorporating other compounds, such as carbon fibers, it is possible to enhance the mechanical properties of peek [15]. nylon6 is a thermoplastic material widely used in various engineering applications, including automotive and textiles, due to its ease of processing and excellent mechanical properties [18]. the addition of fillers can further enhance its modulus and strength, resulting in a composite polymer. nylon6's biocompatibility can be attributed to the presence of amide groups in its chemical structure, which resemble the chemical structure of natural peptides. this similarity enables cells to disguise themselves from the immune system, thereby suppressing the host's reaction to a foreign body [19]. nylon12 finds application in various industrial fields. however, compared to nylon6, it has a lower melting point and mechanical strength. nevertheless, nylon12 exhibits excellent flexibility, high pressure resistance, low density, extreme temperature resistance, and thermal stability. moreover, nylon12 demonstrates outstanding impermeability, chemical resistance, and stability in humid environments, making it particularly suitable for water distribution systems that require precise dimensional accuracy [19], [20]. polypropylene is a commercial plastic with a low density and excellent mechanical, physical, and chemical properties. one of the most significant advantages of this material is its high temperature resistance [21]. in the biomedical industry, pp is also used for its hydrophilic characteristics, which promote cell adhesion through the cytokine response [22]. furthermore, the porosity of pp facilitates interaction between the host and the implant, allowing for earlier formation of capillaries [23]. as a preliminary study, where the focus of the research is the material property itself, a simplified geometry of the test samples is preferred over a complex and more realistic design. the specimens used in this study were realized with dimensions of 85 mm in height, 10 mm in width, and 2 mm in thickness, and they are shown in figure 1.
the samples were provided by zare, and the printing settings used in the manufacturing process are not given in this paper. while the pp samples were realized using the sls printing technique, all the other samples were realized through fdm. the technology behind sls is based on a laser beam that strikes a powdered material lying on a build plate and heats it to a temperature just below the melting point. once the first layer is completed, a rake drags a veil of material from the powder chamber to the build plate in preparation for the execution of the next layer, which has a resolution of 60 µm [24], [25]. although during the sintering process the powders remain only partially melted, resulting in porous internal structures and rough surface finishes, the mechanical properties of sls products are excellent. on the other hand, fdm is one of the most used printing technologies on the market due to its low cost. a stepper motor drive feeds a thermoplastic filament through a hot nozzle that melts the material at a previously set temperature. usually, the nozzle can move along the x, y and z axes and deposits the material, creating the desired object layer by layer. the temperature of the print head, the speed of the driven stepper motor and the thickness of the layers (approximate accuracy 35-40 µm) are some of the parameters that can affect the printing quality, the printing time, and the final costs. the surface roughness and the visible graduation layers are drawbacks of this technology [26]-[28]. a 3-point bending flexural strength test was selected for the material characterization, and the flexural modulus was picked for determining the material property degradation due to exposure to saliva. the flexural modulus, or bending modulus, is an intensive property that is obtained as the ratio of stress to strain in flexural deformation (astm d790). the flexural modulus is determined from the slope of a stress-strain curve produced by a flexural test, assuming a linear stress-strain response. a schematic of a flexural modulus measurement setup is shown in figure 2. from figure 2, the flexural modulus is analytically defined as [28]

$$e_\mathrm{f} = \frac{l^3 f}{4\, w\, d\, h^3}\,, \tag{1}$$

where $l$ is the support span, $f$ the applied load, $w$ the specimen width, $h$ its thickness, and $d$ the mid-span deflection. the experimental research was carried out in the laboratories of the university of perugia. for this study, a lloyd lr 30k testing machine was used. a pre-load, whose intensity depended on the specific material, a feed rate of 1.7 mm/min, defined in iso 178 method a, and a maximum deformation of 1 mm were the experimental conditions selected for the experiments (see figure 3). during each test, the stress $\sigma$ and the strain $\epsilon$ are measured, and the flexural modulus can be estimated in the region of linear elasticity by

$$e_\mathrm{f} = \frac{\sigma}{\epsilon}\,. \tag{2}$$

in this study, the flexural modulus is initially determined, by means of the procedure described above, for the virgin materials. the second step consists of immersing the samples in an artificial saliva (saliva sintetica cts) at an environmental temperature of 25 °c for 72 hours, the time required for fluid absorption by the polymer material [29]. in this way, the effect of the first digestion environment on the properties of the materials is obtained.

figure 1. tested samples: materials and geometry.

figure 2. flexural modulus measurement: 3-point bending test.

after the immersion period, the flexural strength test is repeated, and the flexural modulus of the samples exposed to saliva is measured and compared to the one obtained in the virgin condition.
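as an illustration of equation (1), a minimal sketch that estimates the flexural modulus from recorded force-deflection pairs by a linear fit in the elastic region; the support span value is an assumption (the paper does not state it), chosen here as 16 times the thickness following iso 178, and the force data are idealized:

```python
import numpy as np

def flexural_modulus(force_n, deflection_mm, span_mm, width_mm, thickness_mm):
    """flexural modulus per equation (1): E_f = L^3 (dF/dd) / (4 w h^3),
    with the slope dF/dd from a linear fit (units: N and mm give MPa)."""
    slope = np.polyfit(deflection_mm, force_n, 1)[0]  # N/mm in the elastic region
    return span_mm**3 * slope / (4.0 * width_mm * thickness_mm**3)

# illustrative linear-elastic response up to the 1 mm maximum deformation
d = np.linspace(0.1, 1.0, 10)   # mid-span deflection, mm
F = 12.0 * d                    # idealized force readings, N
print(flexural_modulus(F, d, span_mm=32.0, width_mm=10.0, thickness_mm=2.0))
```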
for the sake of repeatability, this procedure is applied to 3 specimens made of the same material, and, considering the 4 materials analysed, a total of 12 samples are tested in this research. figure 4 shows an example of the results of a flexural strength test measurement for a generic material (differences in behaviour are exaggerated for the sake of clarity). the solid lines in figure 4 are obtained by linear interpolation of the actual measurement points (scatter points). from the linear interpolation in the region of elastic behaviour, the flexural elastic modulus is obtained with (2) for each tested sample. in this way, for every material in both the virgin and exposed-to-saliva conditions, a mean value of the flexural modulus is given. a repeatability analysis was also conducted for all tests performed, both on virgin samples and on those immersed in saliva.

3. discussion

an overview of the results obtained as described in section 2 is given in table 1. as expected, the results confirm that the exposure to saliva degrades the properties of the materials. peek shows the best flexural performance, with 1795.08 mpa. on the other hand, nylon12 shows the lowest values, both for the virgin samples (977.85 mpa) and for the samples immersed in saliva (921.82 mpa). figure 5 shows the flexural modulus loss for the analysed materials. while nylon6 shows the smallest loss in mechanical performance, with a 2 % decrease in flexural modulus, pp has the highest decrease among the specimens. the reason for this behaviour can be related to the am printing technology: as sls increases the porosity of the sample surface, the ability to absorb liquids also increases [30]. similarly, peek shows a decrease in performance of only 3 %. this is also due to an intrinsic characteristic of the material, which is inherently hydrophobic, with a contact angle of 80-90 degrees [16]. the repeatability analysis, performed with a 2σ interval at 95 % confidence, demonstrates the quality of the setup and of the experiments: the analysis yielded a lowest value of 0.25 % and a highest of 3.83 %. the results obtained indicate that saliva has a concrete impact on the elastic modulus of each material. on a practical level, this study shows a first approach to the possible development of prostheses with complex geometries, printed with these materials using additive manufacturing technologies. this research can provide the means to predict the performance of these prostheses over time in a reconstruction of the daily working environment.

4. conclusions

this paper presents a first attempt to broaden the knowledge of the properties of materials used for 3d additive manufacturing in dentistry applications. in this sense, the properties of four materials used for prosthetic bases in dentistry were studied in terms of flexural modulus: polyetheretherketone (peek), nylon6, nylon12, and polypropylene (pp).

figure 3. example of a pp sample mounted on the lloyd flexural strength test machine.

figure 4. example of the linear interpolation of measured data for a generic material (differences in behavior are exaggerated for the sake of clarity).

table 1. overview of the experimental results (values in mpa).
| material | virgin test 1 | virgin test 2 | virgin test 3 | virgin mean | repeatability | saliva test 1 | saliva test 2 | saliva test 3 | saliva mean | repeatability |
|---|---|---|---|---|---|---|---|---|---|---|
| nylon6 | 1360.93 | 1354.78 | 1365.72 | 1327.14 ± 43.50 | 3.79 % | 1282.22 | 1318.96 | 1316.30 | 1305.83 ± 16.72 | 1.48 % |
| nylon12 | 1003.62 | 963.43 | 966.51 | 977.85 ± 18.26 | 2.16 % | 910.67 | 905.02 | 949.78 | 921.82 ± 19.90 | 2.49 % |
| peek | 1818.47 | 1767.71 | 1799.06 | 1795.08 ± 20.91 | 1.35 % | 1842.56 | 1742.04 | 1734.48 | 1738.26 ± 3.78 | 0.25 % |
| pp | 1095.33 | 1070.30 | 1011.53 | 1059.05 ± 35.12 | 3.83 % | 988.23 | 1008.21 | 935.65 | 977.36 ± 30.60 | 3.62 % |

figure 5. flexural modulus loss for pp, nylon6, nylon12 and peek due to saliva exposure.

specifically, the effect of saliva, as the main component of the first digestive environment, on their mechanical performance was investigated by determining the flexural modulus loss of the materials. two different additive techniques were used to manufacture the samples used in the study, and 3-point bending tests were performed to determine the flexural elastic moduli. surface material properties and manufacturing techniques were found to be the main factors impacting the properties of the materials exposed to saliva for 72 hours. future developments will try to overcome the limitations of this research, i.e., involving a parametric study based on different exposure times [29], immersing the materials in different acidic fluids [31], studying the color loss after immersion for appearance considerations [31], and, mostly, using more complex sample geometries representing actual dental prosthetic designs.

references

[1] r. gupta, m. brizuela, dental impression materials, statpearls, sep. 2022. online [accessed 23 april 2023]. https://www.ncbi.nlm.nih.gov/books/nbk574496/ [2] a.-g. gabor, c. zaharia, a. t. stan, a. m. gavrilovici, m.-l. negruțiu, c. sinescu, digital dentistry - digital impression and cad/cam system applications, journal of interdisciplinary medicine 2(1) (2017), pp. 54-57. doi: 10.1515/jim-2017-0033 [3] b. s. rubel, impression materials: a comparative review of impression materials most commonly used in restorative dentistry, dental clinics of north america 51(3) (2007) pp. 629-642. doi: 10.1016/j.cden.2007.03.006 [4] t. a. hamalian, e. nasr, j. j. chidiac, impression materials in fixed prosthodontics: influence of choice on clinical procedure, journal of prosthodontics 20(2) (2011) pp. 153-160. doi: 10.1111/j.1532-849x.2010.00673.x [5] g. m. tartaglia, a. mapelli, c. maspero, t. santaniello, m. serafin, m. farronato, a. caprioglio, direct 3d printing of clear orthodontic aligners: current state and future possibilities, materials 14(7) (2021) art. no. 1799. doi: 10.3390/ma14071799 [6] a. h. l. tjan, t. li, g. irving logan, l. baum, marginal accuracy of complete crowns made from alternative casting alloys, j prosthet dent 66(2) (1991) pp. 157-164. doi: 10.1016/s0022-3913(05)80041-1 [7] p. vasamsetty, t. pss, d. kukkala, m. singamshetty, s. gajula, 3d printing in dentistry - exploring the new horizons, materials today: proceedings 26 (2020) pp. 838-841. doi: 10.1016/j.matpr.2020.01.049 [8] e. d. rekow, digital dentistry: the new state of the art — is it disruptive or destructive?, dental materials 36(1) (2020) pp. 9-24. doi: 10.1016/j.dental.2019.08.103 [9] c. valenti, m. i. federici, f. masciotti, l. marinucci, i. xhimitiku, s. cianetti, s.
pagano, mechanical properties of 3d-printed prosthetic materials compared with milled and conventional processing: a systematic review and meta-analysis of in vitro studies, the journal of prosthetic dentistry (in press). doi: 10.1016/j.prosdent.2022.06.008 [10] t. a. sulaiman, materials in digital dentistry — a review, journal of esthetic and restorative dentistry 32(2) (2020) pp. 171-181. doi: 10.1111/jerd.12566 [11] a. barazanchi, k. c. li, b. al-amleh, k. lyons, j. n. waddell, additive technology: update on current materials and applications in dentistry, journal of prosthodontics 26(2) (2017) pp. 156-163. doi: 10.1111/jopr.12510 [12] r. galante, c. g. figueiredo-pina, a. p. serro, additive manufacturing of ceramics for dental applications: a review, dental materials 35(6) (2019) pp. 825-846. doi: 10.1016/j.dental.2019.02.026 [13] s. krishnakumar, t. senthilvelan, polymer composites in dentistry and orthopedic applications - a review, materials today: proceedings 46 (2019) pp. 9707-9713. doi: 10.1016/j.matpr.2020.08.463 [14] h. ibrahim, s. n. esfahani, b. poorganji, d. dean, m. elahinia, resorbable bone fixation alloys, forming, and post-fabrication treatments, materials science and engineering: c 70 (2017) pp. 870-888. doi: 10.1016/j.msec.2016.09.069 [15] s. najeeb, m. s. zafar, z. khurshid, f. siddiqui, applications of polyetheretherketone (peek) in oral implantology and prosthodontics, journal of prosthodontic research 60(1) (2016) pp. 12-19. doi: 10.1016/j.jpor.2015.10.001 [16] l. bathala, v. majeti, n. rachuri, n. singh, s. gedela, the role of polyether ether ketone (peek) in dentistry - a review, journal of medicine and life 12(1) (2019) pp. 5-9. doi: 10.25122/jml-2019-0003 [17] x. han, w. gao, z. zhou, s. yang, j. wang, r. shi, y. li, j. jiao, y. qi, j. zhao, application of biomolecules modification strategies on peek and its composites for osteogenesis and antibacterial properties, colloids and surfaces b: biointerfaces 215 (2022). doi: 10.1016/j.colsurfb.2022.112492 [18] h. unal, a. mimaroglu, m. alkan, mechanical properties and morphology of nylon-6 hybrid composites, polym int 53(1) (2004) pp. 56-60. doi: 10.1002/pi.1246 [19] m. shakiba, e. rezvani ghomi, f. khosravi, s. jouybar, a. bigham, m. zare, m. abdouss, r. moaref, s. ramakrishna, nylon — a material introduction and overview for biomedical applications, polymers for advanced technologies 32(9) (2021), pp. 3368-3383. doi: 10.1002/pat.5372 [20] i. y. phang, t. liu, a. mohamed, k. pallathadka pramoda, l. chen, l. shen, s. y. chow, c. he, x. lu, x. hu, morphology, thermal and mechanical properties of nylon 12/organoclay nanocomposites prepared by melt compounding, polym int 54(2) (2005) pp. 456-464. doi: 10.1002/pi.1721 [21] h. a. maddah, polypropylene as a promising plastic: a review, american journal of polymer science 6(1) (2016) pp. 1-11. doi: 10.5923/j.ajps.20160601.01 [22] h. patel, d. r. ostergard, g. sternschuss, polypropylene mesh and the host response, international urogynecology journal 23(6) (2012) pp. 669-679. doi: 10.1007/s00192-012-1718-y [23] t. sosakul, j. suwanprateeb, b. rungroungdouyboon, current applications of porous polyethylene in tissue regeneration and future use in alveolar bone defect, asia-pacific journal of science and technology 26(4) (2021). doi: 10.14456/apst.2021.36 [24] y. tian, c. x. chen, x. xu, j. wang, x. hou, k. li, x. lu, h. y. shi, e.-s. lee, h. b. jiang, a review of 3d printing in dentistry: technologies, affecting factors, and applications, scanning 2021 (2021).
doi: 10.1155/2021/9950131 [25] a. dawood, b. m. marti, v. sauret-jackson, a. darwood, 3d printing in dentistry, br dent j 219(11) (2015) pp. 521-529. doi: 10.1038/sj.bdj.2015.914 [26] m. javaid, a. haleem, current status and applications of additive manufacturing in dentistry: a literature-based review, journal of oral biology and craniofacial research 9(3) (2019) pp. 179-185. doi: 10.1016/j.jobcr.2019.04.004 [27] u. punia, a. kaushik, r. k. garg, d. chhabra, a. sharma, 3d printable biomaterials for dental restoration: a systematic review, mater today proc 63 (2022) pp. 566-572. doi: 10.1016/j.matpr.2022.04.018 [28] g. morettini, m. palmieri, l. capponi, l. landi, comprehensive characterization of mechanical and physical properties of pla structures printed by fff-3d-printing process in different directions, progress in additive manufacturing 7(5) (2022) pp. 1111-1122. doi: 10.1007/s40964-022-00285-8 [29] f. tamburrino, v. d'antò, r. bucci, g. alessandri-bonetti, s. barone, a. v. razionale, mechanical properties of thermoplastic polymers for aligner manufacturing: in vitro study, dent j 8(2) (2020). doi: 10.3390/dj8020047 [30] l. portilha gomes da costa, f. amorim mendonça alves, a. mazzo, 3d printers in dentistry: a review of additive manufacturing techniques and materials, clin lab res den 2021 (2021) pp. 1-10. doi: 10.11606/issn.2357-8041.clrd.2021.188502 [31] r. s. alsilani, r. m. sherif, n. a. elkhodary, evaluation of colour stability and surface roughness of three cad/cam materials (ips e.max, vita enamic, and peek) after immersion in two beverage solutions: an in vitro study, int j appl dent sci 8(1) (2022) pp. 439-449. doi: 10.22271/oral.2022.v8.i1g.1460

acta imeko issn: 2221-870x december 2015, volume 4, number 4, 26-29

development of the combustion chamber of a reference calorimeter by numerical analysis

e. eduardo gonzález1, alejandro estrada1, leonel lira2
1 instituto tecnológico de celaya, antonio garcia cubas 600, av.
tecnológico c.p. 38010, guanajuato, méxico
2 centro nacional de metrología, km 4.5 carretera a los cués c.p. 76246, el marqués, queretaro, méxico

section: research paper
keywords: isoperibolic calorimeter; superior calorific value (scv); computational fluid dynamics (cfd)
citation: e. eduardo gonzález, alejandro estrada, leonel lira, development of the combustion chamber of a reference calorimeter by numerical analysis, acta imeko, vol. 4, no. 4, article 7, december 2015, identifier: imeko-acta-04 (2015)-04-07
section editor: franco pavese, torino, italy
received march 23, 2015; in final form may 22, 2015; published december 2015
copyright: © 2015 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work was supported by instituto tecnológico de celaya, méxico
corresponding author: e. eduardo gonzález, e-mail: eli.gonzalez@iqcelaya.itc.mx

abstract: the present work focuses on the numerical modeling of two combustion chambers to be used inside an isoperibolic calorimeter to measure the superior calorific value (scv) of natural gas. this work shows the performance of both chambers working under the isoperibolic principle, through simulations based on computational fluid dynamics (cfd). the aim of the work is to compare the performance of a combustion chamber published in the literature against another one proposed in this work, and to show how the performance of the proposed chamber was improved by changing the geometry. this is checked by analyzing the temperature of the burned gases at the exit of the combustion chamber.

1. introduction

the three main sources of fuels to generate energy around the world are petroleum with 4059, coal with 3724 and natural gas, in the third position, with 2906 million tons of crude oil equivalent. it is important to measure the quantity of energy contained in a sample of natural gas, i.e. its scv, for billing natural gas deliveries; determining this quantity of energy is commercially very important because commercial transactions for natural gas are expressed in energy units instead of volume or mass units. on the other hand, natural gas is the lowest-cost source of energy and has the lowest co2 emissions of all hydrocarbons, and in the coming years it will probably become the largest source of energy on the planet. nowadays the energy contained in any combustible gas is calculated on the basis of its composition via gas chromatography, supported by the standard iso 6976 [1], where the scv values for several pure gases are published. the scvs in the standard iso 6976 for pure gases were measured in 1931 and 1972. for methane, the uncertainty is specified to amount to 0.12 % [2]. methane is the main constituent of natural gas, so measuring the value of its scv is important because it is used in gas chromatography as a reference material for calibration. however, some institutions and researchers have developed special devices to improve this uncertainty and have reduced it to approximately 0.05 % [2]. today some institutions develop their own devices to measure the scv of methane and other gases. some of them are: gerg (groupe européen de recherches gazières), with work from six partners [2]; lne (laboratoire national de métrologie et d'essais) in france [3]; and ofgem (office of gas and electricity markets), by dale et al. [4]. due to the commercial importance, and in an attempt to improve on the uncertainty obtained by using gas chromatography supported by standard iso 6976, cenam (centro nacional de metrología) together with the itc (instituto tecnológico de celaya) are developing a reference calorimeter to measure the superior calorific value of natural gas.
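for comparison with the calorimetric approach, the composition-based calculation supported by iso 6976 reduces to a mole-fraction weighted sum of pure-component calorific values; the sketch below illustrates the idea with rounded volumetric values at a 0 °c/0 °c reference condition that should be taken as indicative only, not as the standard's tabulated data:

```python
# iso 6976-style estimate of the mixture scv from chromatographic composition;
# pure-component values below are rounded, illustrative numbers (MJ/m^3)
scv_pure = {"CH4": 39.84, "C2H6": 70.29, "C3H8": 101.24, "N2": 0.0}
x = {"CH4": 0.95, "C2H6": 0.03, "C3H8": 0.01, "N2": 0.01}  # molar fractions

# mixture scv as the mole-fraction weighted sum of pure-component values
scv_mix = sum(x[g] * scv_pure[g] for g in x)
print(f"estimated scv of the mixture: {scv_mix:.2f} MJ/m^3")
```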
all these kinds of calorimeters devoted to measuring the scv named above work very similarly and operate using the isoperibolic principle. some differences exist, but generally all apparatuses of this kind have the following main components: 1) the "burner", which provides and mixes the oxidizer and fuel and generates the flame, together with the "combustion chamber" and "heat exchanger", which maximize the heat transfer from the burned gases to the surroundings, generally water. 2) the "calorimeter vessel", which can contain any fluid, water in this work. its function is to receive and measure the energy generated by the flame and the burned gases, as well as to maintain a homogeneous temperature in the contained fluid. the burner, combustion chamber and heat exchanger are immersed in the calorimeter vessel. 3) the "jacket", which is a further vessel enclosing the calorimeter vessel and having a temperature either uniform and constant, or at least known in space and time [5]. the principle under which the calorimeter operates is called isoperibolic. it consists of observing the rise in temperature of the calorimeter vessel, containing a stirred liquid, while the jacket temperature is kept constant [5]. the calorimeters in the literature, specifically those of gerg [2], lne [3] and ofgem [4], have something in common: the combustion chamber has the same geometry, a cylindrical body with a hemispherical lid. this work proposes to modify the geometry of the burner to an elliptical form, in order to increase the heat exchange and obtain a lower temperature of the burned gases. to demonstrate this, a numerical analysis was used to simulate the performance of both types of combustion chambers: the elliptical one proposed in this work and the cylindrical one with a hemispherical lid found in the literature.

2. methodology

2.1. design

after reviewing and analysing the combustion chambers published by several researchers [2], [3], [4], it was found that they have in common a cylindrical body with a hemispherical lid. however, to improve the performance, in this work the geometry was modified to an elliptical chamber. we chose this geometry because the area of an elliptical chamber increases without compromising diameter and height. in this way, by increasing the heat transfer area, the temperature of the gases leaving the burner can be reduced. figure 1 shows the combustion chamber from the literature [2], [3], [4] with its heat exchanger, and the elliptical chamber.

2.2. governing equations

conservation equations for steady-state reacting flows are used in this paper.
2.2. governing equations
conservation equations for steady-state reacting flows are used in this paper. in our case, in order to take combustion into account, it was necessary to include chemical species in the conservation equations; the code therefore solves a conservation equation for chemical species, in which the local mass fraction of each species $Y_i$ is predicted through the solution of a convection-diffusion equation:

$$\frac{\partial}{\partial t}(\rho Y_i) + \nabla \cdot (\rho \vec{v}\, Y_i) = -\nabla \cdot \vec{J}_i + R_i + S_i \quad (1)$$

where $R_i$ is the net rate of production of species $i$ by chemical reaction and $S_i$ is the rate of creation by addition from the dispersed phase. in (1), $\vec{J}_i$ is the diffusion flux of species $i$, which arises from gradients of concentration and temperature. the code uses fick's law to model mass diffusion due to concentration gradients, under which the diffusion flux can be written as

$$\vec{J}_i = -\left(\rho D_{i,m} + \frac{\mu_t}{Sc_t}\right)\nabla Y_i - D_{T,i}\,\frac{\nabla T}{T} \quad (2)$$

in (2), $D_{i,m}$ is the mass diffusion coefficient for species $i$ in the mixture and $D_{T,i}$ is the thermal diffusion coefficient; $Sc_t$ is the turbulent schmidt number ($Sc_t = \mu_t / (\rho D_t)$, where $\mu_t$ is the turbulent viscosity and $D_t$ is the turbulent diffusivity). due to the model used for combustion, the net rate of production of species in (1) is assumed to be controlled by the turbulence, with a two-step reaction mechanism:

$$\mathrm{CH_4} + 1.5\,\mathrm{O_2} \rightarrow \mathrm{CO} + 2\,\mathrm{H_2O}$$
$$\mathrm{CO} + 0.5\,\mathrm{O_2} \rightarrow \mathrm{CO_2}$$

due to the non-premixed combustion model used, the code solves for the total enthalpy from the energy equation:

$$\frac{\partial}{\partial t}(\rho H) + \nabla \cdot (\rho \vec{v}\, H) = \nabla \cdot \left(\frac{k_t}{c_p}\nabla H\right) + S_h \quad (3)$$

the conduction and species diffusion terms combine to give the first term on the right-hand side of (3), while the contribution from viscous dissipation $S_h$ appears in the non-conservative form. the total enthalpy $H$ is defined as

$$H = \sum_j Y_j H_j \quad (4)$$

where $Y_j$ is the mass fraction of species $j$ and

$$H_j = \int_{T_{ref,j}}^{T} c_{p,j}\,dT + h_j^0(T_{ref,j}) \quad (5)$$

where $h_j^0(T_{ref,j})$ is the enthalpy of formation of species $j$ at the reference temperature $T_{ref,j}$. to describe the flow of fuel and oxidant, and thus the buoyancy effect in the water due to large temperature gradients, the code uses the equation of conservation of momentum:

$$\frac{\partial}{\partial t}(\rho \vec{v}) + \nabla \cdot (\rho \vec{v}\vec{v}) = -\nabla p + \nabla \cdot \bar{\bar{\tau}} + \rho \vec{g} + \vec{F} \quad (6)$$

where $p$ is the static pressure, $\bar{\bar{\tau}}$ is the stress tensor, and $\rho \vec{g}$ and $\vec{F}$ are the gravitational body force and external body forces, respectively.

figure 1. combustion chamber published in the literature, left side; combustion chamber from this work, right side.

the flow simulations were done using fluent. the governing nonlinear equations together with the boundary conditions are solved by an iterative numerical approach using a finite volume method [6].
2.3. simulation details
grid generation is done using icem. for the meshes, an unstructured mesh with tetrahedral elements was used, due to the complex geometry, with elements of approximately 1 mm in size. in this work, two fluid domains and one solid domain were established. the first fluid domain represents the zone which provides fuel and oxidant, mixes both, and generates the flame and burned gases. coupled to it, we have one solid domain, which represents the burner and the heat exchanger. the second fluid domain represents the water contained inside the calorimeter vessel, which receives all the heat due to combustion and burned gases, see figure 2. we used default values for all thermophysical and thermochemical properties of fluids included in the code, and values from the literature [7] for the solid properties modelling heat transfer in glass.
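a quick stoichiometric check of the two-step mechanism above can be scripted, which also puts the oxidant feed described just below (an oxygen flow three times the fuel flow, per [8]) into perspective. the sketch assumes ideal-gas behaviour, so that volumetric flow ratios equal molar ratios.

    # a minimal stoichiometry check for the two-step mechanism of section 2.2,
    # assuming ideal-gas behaviour so volumetric flow ratios equal molar ratios
    o2_step1 = 1.5   # ch4 + 1.5 o2 -> co + 2 h2o
    o2_step2 = 0.5   # co  + 0.5 o2 -> co2
    o2_stoich = o2_step1 + o2_step2   # mol o2 per mol ch4 for complete combustion
    o2_fed = 3.0                      # oxygen-to-fuel flow ratio used in the simulations [8]
    excess = (o2_fed - o2_stoich) / o2_stoich
    print(f"stoichiometric o2 demand: {o2_stoich} mol per mol ch4")
    print(f"excess oxidant at a 3:1 feed: {excess:.0%}")  # -> 50 %

a 3:1 feed therefore corresponds to roughly 50 % excess oxygen over the two-step stoichiometric demand, which helps ensure complete combustion in the chamber.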
both models were simulated in 3d and steady state, with the same diameter and height for both chambers and with the same number of turns in the heat exchanger for each combustion chamber. the fuel is methane, the oxidant is oxygen, and the oxidant flow is three times the fuel flow [8]. the molar fractions were set to 0.96 for methane and 0.90 for oxygen, with an inlet temperature of 22.5 °c for both. to reproduce the physical phenomena as closely as possible, the combustion chamber, burner and heat exchanger are assumed to be inside the calorimeter vessel, represented by a water volume of geometry similar to those published in the literature [2], [4]. the walls of the calorimeter vessel were set to 25.0 °c to simulate the isoperibolic enclosure, see figure 3.

figure 2. discretization of the calorimeter with burner, heat exchanger and combustion chamber. right side: this work, with 1 403 244 nodes; left side: from the literature, with 1 133 433 nodes.
figure 3. sketch analysed through numerical simulation, with boundary and initial conditions.

3. results
the result of increasing the area of the elliptical chamber is that the heat exchange increases and therefore the temperature of the burned gases is lower than in the cylindrical chamber with hemispherical lid. the exposed exterior area of the elliptical chamber is 170.0 cm2, whereas the cylindrical chamber area is 160.0 cm2, see figures 4 and 5. in both figures one can see that the zones with the highest temperatures are at the top of the burner. there are blank zones, inside and at the limits of the chambers, where the temperature is higher than 40 °c. the limit between the blank zone and the rest is a red ring, indicating where in the calorimeter the temperature is highest; conversely, blue indicates where the temperature is lowest. this is useful for designing the stirring system, because in this zone the fluid must recirculate well to obtain a homogeneous temperature inside the calorimeter vessel. table 1 shows, for the proposed chamber (see figure 4) against the type published in the literature (see figure 5), that the average temperature of the burned gases at the exit of the heat exchanger is lower by 0.5 °c than for the published design. this corresponds to an improvement of about 2.0 % in the heat exchanged by the chamber proposed in this work over that in the literature. on the other hand, the average temperature of the water is similar in both cases, but it is lower with the elliptical chamber by about 2.7 %. if we analyse the maximum and minimum temperatures in the water, the cylindrical chamber reaches a higher temperature than the elliptical chamber.

figure 4. temperature distribution (in celsius). elliptical chamber immersed in the calorimeter vessel. the area with temperatures higher than 40 °c is blank.

table 1. elliptical chamber versus cylindrical chamber.
  quantity                                          elliptical chamber   cylindrical chamber
  average temperature of burned gases at the exit   30.88 °c             31.45 °c
  average temperature of water                      29.07 °c             29.88 °c
  maximum temperature in the water                  68.27 °c             79.89 °c
  minimum temperature in the water                  24.93 °c             24.95 °c
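the percentage figures quoted above can be cross-checked directly from table 1; the sketch below reproduces the 0.5 °c exit-gas reduction and the 2.7 % water-temperature difference (the 2.0 % heat-exchange figure depends on the energy balance of the full model and is not reproduced here).

    # cross-checking the temperature figures reported in table 1
    elliptical = {"gas_exit": 30.88, "water_avg": 29.07}
    cylindrical = {"gas_exit": 31.45, "water_avg": 29.88}

    gas_drop = cylindrical["gas_exit"] - elliptical["gas_exit"]
    water_rel = (cylindrical["water_avg"] - elliptical["water_avg"]) / cylindrical["water_avg"]
    print(f"exit-gas temperature reduction: {gas_drop:.2f} degC")      # ~0.57, quoted as 0.5 degC
    print(f"relative water-temperature difference: {water_rel:.1%}")   # ~2.7 %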
4. conclusions
we have shown by numerical simulations for the steady state that the heat exchanged by the proposed chamber improved by around 2.0 %, while maintaining the same diameter and height for both chambers. this means that the temperature of the burned gases at the exit of the heat exchanger was reduced by 0.5 °c. however, we still need to validate the results obtained by numerical simulations. numerical simulation is useful when the geometries and the phenomena that govern the physical model are complicated. developing and evaluating both combustion chambers in the same way means that the errors generated are very similar, and the results are therefore comparable. we have demonstrated how numerical simulations help to develop measurement apparatuses.

figure 5. temperature distribution (in celsius). cylindrical chamber with hemispherical lid immersed in the calorimeter vessel. the area with temperatures higher than 40 °c is blank.

acknowledgement
every author who has contributed to the redaction and development of this work is kindly acknowledged.
references
[1] iso 6976, natural gas - calculation of calorific values, density, relative density and wobbe index from composition, international standard iso 6976, 2nd edn., 1995-12-01, corrected and reprinted 1996-02-01.
[2] p. schley, m. bech, m. uhring, s. m. sarge, j. rauch, f. haloua, j.-r. filtz, b. hay, m. yakoubi, j. escande, a. benito, p. l. cremonesi, measurements of the calorific value of methane with the new gerg reference calorimeter, int. journal of thermophysics, 31 (2010), pp. 665-679.
[3] f. haloua, b. hay, j.-r. filtz, new french reference calorimeter for gas calorific value measurements, journal of thermal analysis and calorimetry, vol. 97, no. 2 (2009), pp. 676-678.
[4] andrew dale, christopher lythall, john aucott, courtnay sayer, high precision calorimetry to determine the enthalpy of combustion of methane, thermochimica acta 382 (2001), pp. 47-54.
[5] hobert c. dickinson, combustion calorimetry and heats of combustion of cane sugar, benzoic acid and naphthalene, bulletin of the bureau of standards, washington, 1914.
[6] h. k. versteeg, w. malalasekera, an introduction to computational fluid dynamics: the finite volume method, addison wesley-longman, 1995.
[7] incropera and dewitt, fundamentals of heat and mass transfer, fourth edition, john wiley & sons.
[8] frédérique haloua, jean-noël ponsard, ghislain lartigue, bruno hay, clotilde villermaux, emilie foulon, murès zaréa, thermal behavior modeling of a reference calorimeter for natural gas, international journal of thermal sciences 55 (2012), pp. 40-47.
[9] ansys® fluent 14.0, user's guide, © 2011 sas ip inc.
[10] ansys® icem cfd 14.0, user's guide, © 2011 sas ip inc.

acta imeko
issn: 2221-870x
november 2016, volume 5, number 3, 76-80
surface analytical model and sorption artifact designing method
xiao-ping ren 1,2, jian wang 1, ombati wilson 3, yong wang 4, bai-fan chen 4,5, chang-qing cai 1
1 mass laboratory, division of mechanics and acoustics, national institute of metrology, beijing, p.r. china
2 department of r&d management, national institute of metrology, beijing, p.r. china
3 kenya bureau of standards, nairobi, republic of kenya
4 college of information science & engineering, central south university, changsha, p.r. china
5 department of computer science and engineering, texas a&m university, usa
section: research paper
keywords: mass measurement; surface sorption correction; mass dissemination; optimization algorithm
citation: xiao-ping ren, jian wang, ombati wilson, yong wang, bai-fan chen, chang-qing cai, surface analytical model and sorption artifact designing method, acta imeko, vol. 5, no.
3, article 12, november 2016, identifier: imeko-acta-05 (2016)-03-12
section editor: marco tarabini, politecnico di milano, italy
received december 15, 2015; in final form august 1, 2016; published november 2016
copyright: © 2016 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was supported by national natural science funds of china (51405459), national science and technology support program (2011bak15b06) and the special-funded program on national key scientific instruments and equipment development (2012yq090208).
corresponding author: x.p. ren, e-mail: renxp@nim.ac.cn
abstract
mass standards with alternative shapes are difficult to design due to the number of complex parameters. an analytical model based on surface sorption experiments is presented to study adsorption. this model is based on an optimization algorithm that is conceptualized to help design the best sorption artefacts. the experimental artefacts, a cylinder weight and a stack weight, were of the same volume but different surface areas. the algorithm in essence determines the optimum surface of the artefact. after machining the artefacts, surface sorption measurements were carried out: a sorption experiment was done by transferring the artefact from air to vacuum. then a surface sorption model was set up which represents the relationship between the sorption coefficient η, time t and relative humidity h. logarithmic models were used to fit the variation of the sorption coefficient η per relative humidity h with time t.
1. introduction
the unit of mass, the kilogram, is the last of the seven base units of the international system of units (si) to be defined according to an invariant of nature rather than a material artefact [1]. both the watt balance method and the avogadro method are making the new definition under vacuum conditions, whereas the current definition of the unit, from the international prototype kilogram (ipk), is realized and maintained in air. after the new definition, it will still be necessary to consider traceability from the new realization in vacuum to the current working standards, which are always maintained in air [2]. this is to make an indirect link between air and vacuum mass measurements, by measuring mass in vacuum and then characterizing the absorption layers of contaminants during the process of transferring from vacuum to air [3]. during this transition, the mass of the standard is significantly affected by a sorption phenomenon. this phenomenon is caused by atmospheric gases and humidity, which subsequently lead to a loss of stability of the mass value of the standard(s) [4]. in 1973, takayoshi studied the problem of surface water on metal artefacts [5]. r. schwartz wrote a series of papers on adsorption isotherms in air [6] and sorption phenomena in vacuum [7]. additional studies focused on pt/ir [8], stainless steel [6], [7], silicon [9] and also other materials (au [8], [10], [11]). for mass dissemination, there are many parameters to be considered in designing the mass standard, such as the height, diameter, volume, surface area and mass value of the weight. the characteristics of the measurement instruments are also important, such as the size of the mass comparator, the electronic weighing capacity and the volume measurement instrument. in this paper, an adaptive algorithm which can be used to optimize the design of a mass standard, based on a surface analytical model of mass standards, is described.
2. recent research on surface artifacts
one-kilogram weights with different surface areas and the same nominal mass value are normally required. samples of different materials (stainless steel, silicon and platinum-iridium) have to be selected. figure 1 shows an example of the design of a stack of weights made of platinum-iridium, assembled using spacers (height of 1 cm and diameter of 2 mm). the volumes, masses and surface areas of the spacers were added to the overall values for the weight stack.

figure 1. design of a platinum-iridium weight set by johnson matthey and npl [12].

for some applications of this kind of stack weight, when taking a volume measurement, one disc is placed into the liquid and a spacer is placed on its surface; then another disc is put onto the spacer, until all the separate discs and spacers are immersed in the volume measurement liquid. during this process, however, the stack of weights may fall down from the magazine and lead to process and system failure. another kind of stack weight was composed of a number of discs [6]. these discs were tightened on carrying rods (d: 10 mm). the discs and carrying rods were polished by the häfner company. for this kind of stack weight, there is a hole in the middle of each disc into which the rod is screwed. the discs cannot be fixed together totally seamlessly (as shown in figure 2), so whenever this stack weight is submerged into the liquid for volume measurement, the volume value is affected by this gap. a similar design method was described by beer [13]. another kind of artefact, shown in figure 3, consisted of twelve discs separated by pieces of wire and held together with a thin rod. this particular artefact was gold coated with a 6 μm thick layer, with the aim of characterizing gold surfaces. in section 3 we present a mathematical model that, when optimized through computational methods, provides the best design parameters of the artefacts to be machined.

figure 2. the gap between disc and rod.
figure 3. 1 kg gold-plated copper buoyancy artefact.

3. integral sorption artefact
in mass measurements, the main uncertainties are due to the air buoyancy correction and the surface sorption correction. in order to improve the accuracy of the surface analytical model of a mass standard, the most important thing is to reduce the influence of the buoyancy correction as much as possible. two prototype models of 1 kg stainless steel sorption artefacts are shown in figure 4 and figure 5. the model represented by figure 4 is the classical prototype, in cylindrical form with height and diameter being equal. the other model, shown in figure 5, is in the form of a stack whose discs are separated by a rod; this rod is not, however, separate from the discs, the whole being a monolith of stainless steel.

figure 4. cylinder prototype.
figure 5. discs of stack prototype.

the surface area $S$ and volume $V$ of the cylinder weight (figure 4) are given in (1) and (2), where $r$, $d$ and $h$ represent the radius, diameter and height of the weight, respectively:

$$S_{cylinder} = 2\pi r^2 + 2\pi r h = 2\pi \left(\frac{d}{2}\right)^2 + 2\pi \frac{d}{2}\, h \,, \quad (1)$$

$$V_{cylinder} = \pi \left(\frac{d}{2}\right)^2 h \,. \quad (2)$$

the dimensions of the stack prototype (shown in figure 5) are as follows: the outer circle's radius and height are $r_2$ and $h_2$ respectively; the inner circle's radius and height are $r_1$ and $h_1$ respectively.
the total surface area and volume of the 4-level stack prototype are given in (3) and (4):

$$S_{stack} = 6\pi r_1 h_1 + 8\pi r_2 h_2 + 8\pi r_2^2 - 6\pi r_1^2 \,, \quad (3)$$

$$V_{stack} = 3\pi r_1^2 h_1 + 4\pi r_2^2 h_2 \,. \quad (4)$$

generally, the 4-level stack prototype above is just one example of a sorption artefact design; different numbers of levels can be adopted, depending on the sorption effect. the general formulae for calculating the volume and surface area of a stack weight with $n$ levels are given in (5) and (6):

$$V_{stack} = (n-1)\pi r_1^2 h_1 + n\pi r_2^2 h_2 \,, \quad (5)$$

$$S_{stack} = 2(n-1)\pi r_1 h_1 + 2n\pi r_2 h_2 + 2n\pi r_2^2 - 2(n-1)\pi r_1^2 \,. \quad (6)$$

in this research, the two models of mass standards are required to have the same volume but different surface areas, i.e. (2) and (4) should be equal. thus, the air buoyancy correction applied during the comparison process will be minimized. a large difference between the surface areas (1) and (3) is better for the mass measurement during the transfer from vacuum to ambient conditions.
4. parameter searching method based on the optimization algorithm
for the same material with ideal density ρ and nominal mass value (1 kg), both the height h and radius r of the cylinder prototype can be calculated from (2). for sorption purposes, let (5) equal the cylinder volume. there are then four parameters which determine the volume of the n-level stack, i.e. r1, r2, h1 and h2. with the surface area of the cylinder, s_cylinder, also known, these four parameters can be varied until the maximum difference between s_stack and s_cylinder is obtained. however, this design scheme cannot neglect the mass and volume measurement equipment to be used: all final artefacts must fit into these instruments, so that their mass and volume can be measured accordingly. after determining the volume and surface area of the cylindrical prototype, a 9-level stack was considered. the design scheme had the following limitations: (a) r2 < 50 mm (size of the weighing magazine); (b) height of the weight less than 90 mm (volume measurement instrument space); (c) volume of the 9-level stack equal to the volume of the cylindrical prototype, i.e. v_stack = v_cylinder; (d) surface area of the 9-level stack greater than the surface area of the cylindrical weight, i.e. s_stack > s_cylinder; (e) mass difference between the cylindrical and 9-level stack prototypes less than or equal to 1.5 g (the electronic weighing capacity); if the mass difference exceeds this limitation, the equipment malfunctions or gives misleading results; (f) machining precision equal to 0.01 mm, so a precision of 2 decimal places for each parameter is enough; for example, if the optimal height of the weight h is 53.3725 mm, the value is rounded to 53.37 mm. in this study, six parameters [d, h, r1, r2, h1 and h2] and six constraint conditions were considered for the sorption artefact. there are several approaches for solving this kind of multi-objective optimization. matlab has an optimization toolbox with the function "fgoalattain", whose graphical user interface is shown in figure 6. the following parameters were determined and programmed: start point, goals, weights, linear inequalities, and bounds for variables.
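a rough equivalent of this constrained search can be sketched outside matlab; the snippet below uses scipy's slsqp solver to maximize the stack-minus-cylinder surface-area difference subject to the equal-volume constraint and the geometric bounds (a) and (b). it is a sketch under stated assumptions (stainless-steel density of 8421 kg/m3 as in the appendix code, n = 9, cylinder with h = d), not a reproduction of the authors' fgoalattain setup; the mass-window and rounding constraints (e), (f) are omitted for brevity.

    import math
    from scipy.optimize import minimize

    rho = 8421.0                  # kg/m^3, density used in appendix i
    n = 9
    v_target = 1.0 / rho * 1e9    # volume of a 1 kg weight, in mm^3
    # fixed cylinder with h = 2r: v = 2*pi*r^3
    r_cyl = (v_target / (2.0 * math.pi)) ** (1.0 / 3.0)
    s_cyl = 2.0 * math.pi * r_cyl**2 + 2.0 * math.pi * r_cyl * (2.0 * r_cyl)

    def s_stack(x):
        r1, r2, h1, h2 = x
        return (2*(n-1)*math.pi*r1*h1 + 2*n*math.pi*r2*h2
                + 2*n*math.pi*r2**2 - 2*(n-1)*math.pi*r1**2)

    def v_stack(x):
        r1, r2, h1, h2 = x
        return (n-1)*math.pi*r1**2*h1 + n*math.pi*r2**2*h2

    res = minimize(
        lambda x: -(s_stack(x) - s_cyl),               # maximize the area difference
        x0=[5.0, 40.0, 8.0, 2.0],                      # mm, an arbitrary starting guess
        method="SLSQP",
        bounds=[(1, 49.99), (1, 49.99), (1, 89), (1, 89)],
        constraints=[
            {"type": "eq", "fun": lambda x: v_stack(x) - v_target},        # (c) equal volumes
            {"type": "ineq", "fun": lambda x: 90 - ((n-1)*x[2] + n*x[3])}  # (b) total height < 90 mm
        ],
    )
    print([round(v, 2) for v in res.x], round(-res.fun, 2))

with these inputs the fixed cylinder comes out at r ≈ 26.6 mm and h ≈ 53.3 mm, which is consistent with the optimization result reported below.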
the objective function and the nonlinear constraint function were written into .m files, which implement the formulae listed in (1) to (6); more details are given in appendices i and ii. the start point and the final result are shown in tables 1 and 2, respectively. when the optimization algorithm initially runs from the start point, it executes 13 iterations and judges whether the volume and surface satisfy the requirements; otherwise, the algorithm is re-executed from the current result, until the maximum difference of the surface areas is obtained and the volume difference is close to zero.

table 1. start point for the optimization algorithm (unit: mm).
  h      r      r1     r2     h1     h2
  0.01   0.01   0.01   0.01   0.01   0.01

table 2. optimization result for the start point (unit: mm).
  h      r      r1     r2     h1     h2
  53.27  26.64  4.04   50     9.50   1.62

figure 6. interface for the global optimization toolbox in matlab 2011.
figure 7. trends of the surface area (a), volume (b) and iteration number (c) of the optimization algorithm at different start points.

optimization trends of the surface area, the volume and the iteration number of the optimization algorithm at different start points are shown in figure 7.
combined with the measurement results in air condition, variation of the relative mass per area a could be determined by mass comparison of the sorption artefacts. 5.3. relation between sorption coefficient and humidity  in 1994, schwartz [6] determined the sorption coefficient η as η=δm/δa+η0. to minimize the effect of η0, the current study ensured that the artefacts were thoroughly cleaned before performing measurements, and hence η0 was assumed to be zero. from the scatter plot of figure 10, the horizontal axis and vertical axis respectively represent real time t (in hours), and sorption coefficient per relative humidity η/h (unit: μg/cm2/rh %). sorption coefficient and relative humidity were fit with the aid of the logarithmic model using the following the function in (7): f(η)= (4.3×10-4+ 3.5×10-5ln (0.43 t +1)) ·h . (7) the results obtained from (7) were similar to the results of schwartz’s model of the sorption coefficient. however, unlike schwartz’s model, the present model introduced humidity into the function and also obeyed the logarithmic rule. this study was congruent with earlier research findings on negligible influence of the effect of roughness condition and temperature of weight surface [6], and did not, therefore, investigate their effects on the sorption behaviour of weight. 6. conclusions  in order to provide a practical approach on disseminating the redefined kilogram, realized in vacuum to the mass scale at ambient, processes such as air to vacuum transferring for figure 8. result of the sorption measurement in air stage   (a) variation  of  the  δi  reading  from  the  balance  with  measuring  times (broken line: long term gap).   (b) variation of mass difference δm with measuring times.   (c) variation of relative humidity h with measuring times.   (d) variation  of  temperature  t  in  the  measurement  chamber  with measuring times.   (e) variation of air density ρ with measuring times.  figure 9. measuring the mass difference under vacuum condition.  0 20 40 60 80 100 120 140 -0.84 -0.835 -0.83 ∆ i( in te rv a l) measurement in air 0 20 40 60 80 100 120 140 -0.8 -0.795 -0.79 ∆ m (m g ) 0 20 40 60 80 100 120 140 35 40 45 h (r h % ) 0 20 40 60 80 100 120 140 20.5 20.6 20.7 t( � ) 0 10 20 30 40 50 60 70 80 90 100 110 120 130 140 1.19 1.191 1.192 measuring times ρ (m g /c m 3 ) (a) (b) (d) (e) (c) sep.10th~14th oct.15th~16th 0 10 20 30 40 50 60 70 80 90 -1.222 -1.22 -1.218 -1.216 -1.214 -1.212 ( ) ∆ i in te rv a l measurement in vacuum 0 10 20 30 40 50 60 70 80 90 0 2 4 6 8 x 10 -3 t m 2 (h p a ) 0 10 20 30 40 50 60 70 80 90 0 0.5 1 1.5 x 10 -3 measuring times p m (h p a ) oct 20th (a) (b) (c) oct 23th t (° c ) acta imeko | www.imeko.org  november 2016 | volume 5 | number 3 | 80  standards must be studied. this transfer process and the results thereof are often affected by adsorption and desorption. in this paper, a new surface analytical model combining artefact standards designing method is presented. the model is based on an optimization algorithm which considers important parameters such as suitable design of weights (i.e. cylindrical weight and stacks of discs of weight). these parameters could help the metrologists in improving the accuracy of mass measurements. the preparation of the sorption artefact and the sorption measurement has also been described. sorption coefficient drift was be described by a logarithmic function that includes the effects of the humidity and time. 
6. conclusions
in order to provide a practical approach to disseminating the redefined kilogram, realized in vacuum, to the mass scale at ambient conditions, processes such as the air-to-vacuum transfer of standards must be studied. this transfer process and its results are often affected by adsorption and desorption. in this paper, a new surface analytical model combined with an artefact standard designing method is presented. the model is based on an optimization algorithm which considers the important parameters for a suitable design of weights (i.e. a cylindrical weight and a stack of discs). these parameters could help metrologists to improve the accuracy of mass measurements. the preparation of the sorption artefacts and the sorption measurements have also been described. the sorption coefficient drift could be described by a logarithmic function that includes the effects of humidity and time. in this experiment, the detailed value of the roughness was not given, but the relationship between time, humidity and the sorption coefficient was presented. further research may be conducted in the future on the artefacts, to determine their drift over a long time in air.
appendix i
% objective function: maximize the surface-area difference between stack and cylinder
function f = weight_design(x)
n = 9;
s_cylinder = 2*pi*x(2)^2 + 2*pi*x(2)*x(1);
s_stack = 2*(n-1)*pi*x(3)*x(5) + 2*n*pi*x(4)*x(6) + 2*n*pi*x(4)^2 - 2*(n-1)*pi*x(3)^2;
f(1) = -s_stack + s_cylinder;
appendix ii
% nonlinear constraints: nominal-mass window for both shapes (inequalities c)
% and equal volumes of the cylinder and the 9-level stack (equality ceq)
function [c,ceq] = nonlconstr(x)
e1 = 0.5;
density = 8421;
n = 9;
c(1) = -x(1)*x(2)^2 + (1000*(1000000-e1))/pi/density;
c(2) = x(1)*x(2)^2 - (1000*(1000000+e1))/pi/density;
c(3) = (1-n)*x(5)*x(3)^2 - n*x(6)*x(4)^2 - (1000*(e1-1000000))/pi/density;
c(4) = (n-1)*x(5)*x(3)^2 + n*x(6)*x(4)^2 - (1000*(e1+1000000))/pi/density;
ceq = pi*x(1)*x(2)^2 - (n-1)*pi*x(5)*x(3)^2 - n*pi*x(6)*x(4)^2;
references
[1] s. davidson, j. berry, z. silvestri, et al., "addressing the requirements for the practical implementation and ongoing maintenance of the redefined kilogram", proc. of imeko 22nd tc3, 12th tc5 and 3rd tc22 international conferences, feb. 3-5, 2014, cape town, republic of south africa.
[2] mise en pratique of the definition of the kilogram, http://www.bipm.org/en/si/new_si/mise-en-pratique.html, 2013.
[3] p. j. abbott, r. c. dove, "progress on a vacuum-to-air mass calibration using magnetic suspension to disseminate the planck-constant realized kilogram", proc. of imeko 22nd tc3, 12th tc5 and 3rd tc22 international conferences, feb. 3-5, 2014, cape town, republic of south africa.
[4] p. fuchs, k. marti, s. russi, "new instrument for the study of 'the kg, mise en pratique': first results on the correlation between the change in mass and surface chemical state", metrologia, v. 49, 2012, pp. 607-614.
[5] takayoshi, shuiti, et al., "coulometric micro-determination of surface water on various metals and glasses and of hydrogen in beryllium metal", materials transactions, v. 14, 1973, pp. 396-400.
[6] r. schwartz, "precision determination of adsorption layers on stainless steel mass standards by mass comparison and ellipsometry. part i: adsorption isotherms in air", metrologia, v. 31, 1994, pp. 117-128.
[7] r. schwartz, "precision determination of adsorption layers on stainless steel mass standards by mass comparison and ellipsometry. part ii: sorption phenomena in vacuum", metrologia, v. 31, 1994, pp. 129-136.
[8] p. fuchs, k. marti, s. russi, "materials for mass standards: long-term stability of pt/ir and au after hydrogen and oxygen low-pressure plasma cleaning", metrologia, v. 49, 2012, pp. 615-627.
[9] m. borys, m. mecke, u. kuetgens, et al., "the growth of the oxide layer on silicon spheres and its influence on their mass stability", proc. of imeko 22nd tc3, 12th tc5 and 3rd tc22 international conferences, feb. 3-5, 2014, cape town, republic of south africa.
[10] p. fuchs, "low-pressure plasma cleaning of au and ptir noble metal surfaces", applied surface science, v. 256, 2009, pp. 1382-1390.
[11] k. marti, p. fuchs, s. russi, "cleaning of mass standards: ii. a comparison of new techniques applied to actual and potential new materials for mass standards", metrologia, v. 50, 2013, pp. 83-92.
[12] s. davidson, s. brown, j. berry, "a report on the potential reduction in uncertainty from traceable comparisons of platinum-iridium and stainless steel kilogram mass standards in vacuum", npl report cmam 88, 2004.
[13] w. beer, w. fasel, e. moll, p. richard, et al., "the metas 1 kg vacuum mass comparator - adsorption layer measurements on gold-coated copper buoyancy artefacts", metrologia, v. 39, 2002, pp. 263-268.
figure 10. relation between time t and η/h (sorption coefficient per relative humidity, fitted with the logarithmic model).

acta imeko
may 2014, volume 3, number 1, 19-22
www.imeko.org
problems in theory of measurement today
l. gonella
istituto di fisica sperimentale, politecnico di torino, c. duca degli abruzzi 24, 10129 torino, italy
section: research paper
keywords: measurement theory, value, error, uncertainty
citation: l. gonella, problems in theory of measurement today, acta imeko, vol. 3, no. 1, article 6, may 2014, identifier: imeko-acta-03 (2014)-01-06
editor: luca mari, università carlo cattaneo
received may 1st, 2014; in final form may 1st, 2014; published may 2014
copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
this is a reissue of a paper which appeared in acta imeko 1979, proceedings of the 8th imeko congress of the international measurement confederation, "measurement for progress in science and technology", 21-27.5.1979, moscow, vol. 1, pp. 103-110. the main questions to be answered in the theory of measurement are presented, and their formulation analysed in the classical approach and in a new operative one: the scope and aims of the theory itself, the concepts of measurable quantity, measurement results, error, uncertainty, the role of time. the paper is meant to define problems as the first step toward their solution.
1. subject and scope of the theory of measurement
to ask what is (or should be) the subject matter of the theory of measurement is not an idle question, since what seems to be the first problem today is to realize that there are indeed basic problems to solve and to identify them. too many people feel with undue complacency that the theory of measurement is a perfected structure, needing only peripheral work. others feel, instead, that fundamental questions lie unanswered, or even unasked, being pre-empted by undiscussed assumptions; framing such questions, that i think are to be asked and answered from the operative standpoint [1-4], is the aim of this paper. the purpose of a theory is to describe reality in a coherent framework - in our case to provide a coherent description of the valid praxis of measurement, suited to its whole range. it is apparent that the current theoretical framework is not up to this task: the rank confusion prevailing in the field is widely recognized and lamented. just try and compare the definitions given in the current literature of basic concepts as value, error, measurement, measurable quantity, etc. (i even saw the lecturers on metrology of an international school declining to hold a seminar on definitions as meaningless in today's mixed-up situation [3]). there is a widening gap between the praxis of measurement and its theory: our measurements agree with one another much better than we agree in talking about them. curiously enough, this is not felt as a flaw in the theory: for most people it is just a matter of 'terminology' and the problem is shunted to nomenclature committees, which face an impossible task. they are supposed to choose the 'right word' for designating well-understood, undisputable concepts, but these ones turn out to be hazy or unfitting; a formal redefinition is called for, and in so doing unwitting theoretic work is done: what else is the core of a theory but a cross-connected list of definitions? the 'current' theory of measurement is embodied in the definitions, remarks and assumptions (both discussed and taken for granted) of textbooks on one hand and normative literature (standards, codes of practice, etc.) on the other.
there is no parallel of the latter source in the other fields of science, unencumbered by the strong normative implications of the measurement problem. what happens is that the 'normative' authors feel that it is not up to them to discuss the theory, but only to codify the approved practice using the language of the main line of the approved textbooks; the latter, on their hand, cannot teach differently from the main line of the normative literature without rendering a bad service to their readers. the vicious circle thus generated makes it difficult to update the theory; its haphazard growth disguised as lexical adjustment or conventional ruling cannot touch the basic tenets, while the due concern over its responsibility may well transform some normative body into a low-pass filter against new ideas. this is one reason, i think, why the current theory of measurement is still based on the framework masterly set up by helmholtz [5] on basic ideas of euclidean lineage, which makes it the only field of science treated as yet in full 19th-century terms. a century passed, which shook the very foundations of mechanics and geometry, but new basic concepts like those of signal and information, of quantum indeterminacy, etc., do not seem to have found their proper place in the frame of the measurement theory. meanwhile a more and more sophisticated instrumentation calls for a refined treatment in a modern setting; we need a comprehensive theory able to range from the carpenter's rule to the microprocessor, and the latter can hardly be dealt with as an extrapolation of the former. as technical-scientific people are shy of meddling with 'philosophical' issues concerning the meaning of what they do, and grew accustomed to shifting most basic issues into terminological channels, the bulk of their work is devoted to mathematical techniques for data elaboration, mostly concerning the probabilistic error analysis. somebody is therefore inclined to identify the theory of measurement with this analysis (blanket statements such as "any physical quantity is just a statistical variable" are often heard among physicists), which represents a drastic reduction in scope for the former and leaves the latter without bases (such a position is also due to the general, very disputable, inclination to identify 'theory' with mathematics). the debate on the basic concepts of measurement has been left mainly to scholars in non-physical sciences [6-9]: almost everybody seems convinced that only there some problem is left, while in physics and the related technical fields all is nicely settled on helmholtz's lines [8].
in the minds of many people stevens’s ideas are connected with non-technical measurements, though his stand against campbell [6] on psychophysical measurements could well have been taken by any technical scholar on several engineering measurements (e.g. hardness). marvel was expressed for the neglect in which the field is left [7], and it was noted that fundamental issues about measurement in physics are unsolved or plainly ignored [9]. measurement is a knowledge-gathering technique. what its theory is to be concerned with depends on what we expect from a theory. from the operative standpoint the measurement theory is the total body of definitions and rules concerning measuring activities, formulated with the aim of providing a guide to get consistent results. from another it is the argumentation that justifies the measurement itself, thus sanctioning what can be measured and what cannot. the issue of the scope of the measurement theory is coextensive with the question of what is a measurable quantity. 2. measurable quantities in an operative approach we may say a measurable quantity is anything we can describe in an objective and consistent way; the problem then is to define ‘objective and consistent’ in order that we can measure things to our satisfaction and discriminate against illusory knowledge. the classical approach starts from the other end: the consistency is looked for in the formal properties of the class of mathematical entities chosen for the description, the measuring procedure is modelled on a formal operation of this class, and a measurable quantity is what fits the model; the problem then is to justify the measurement of the other things that in practice we measure but do not fit directly in the pre-established model. the universal (though not necessary [9]) choice is to map the measurable quantities on the class of the real numbers, setting up a homomorphism between the empirical juxtaposition of the measured objects and the addition of the numbers assigned to measure them. the obvious paradigm is the measurement of length. all quantities endowed with the ‘additive’ property of the length, called fundamental or extensive (terminology and connotations depend on the author), may be measured directly; the others must be somehow ‘derived’ from them; if some kind of ‘derivation’ is not devised the quantity is not deemed and is excluded from the scientific realm. this way one gets a very good logical consistency for the fundamental quantities, but at the expense of generality, and the approach itself breeds the inclination to reduce the scope of the theory. when a theory meets phenomena which do not fit, either one changes the theory or puts an ‘off limits’ sign on the phenomena; the more clean-cut is the theory, the stronger the temptation to shrug off what does not fit. the ‘derivation’ criteria, either of the quantity or of the measuring procedure, are debatable. considerable ingenuity was spent for deriving the measurement of temperature; other quantities (e.g. hardness) were not so lucky, perhaps because they were not so firmly embedded in the core of physics, and are therefore in a limbo – someone calls them “pseudo-quantities”, which does not deter people from measuring them, but only hampers coherent standards. likewise, nuclear physicists keep using counters as measuring instruments while others show that counting is not a measurement [7]. 
something is wrong: either the people who keep on measuring "pseudo-quantities", or the arbitrary requisite of an 'additive' property that so many measurands simply do not possess. one must remember that once people believed that all measurements ought to be referred to mass, length, and time: much of this kind of philosophy is still attached to the measurement theory. the 'derivation' problem is circumvented by the idea of different 'scales of measurement', with different relational properties, but many people do not accept all 'scales' as 'measurement', and others distinguish between a strict sense and a wide sense of measurement. it is also disturbing that the vectorial character of physical quantities cannot be accounted for in these frameworks. anyhow, if 'measurement' is a procedure of limited applicability, other consistency rules, beside another label, must be given to the knowledge-gathering procedures applied to the "pseudo-quantities", and an overall treatment is in order. is a distinction between strict and wide sense of measurement meaningful, or is it not a question of different types of measurable quantities? the idea itself of looking into mathematics for 'justifying' a measuring procedure, for finding out whether or not we are allowed to measure a thing, is disputable: is the theory of measurement a branch of mathematics (albeit applied), or is it a branch of the natural and technical sciences (using suitable mathematical tools)? there is a difference, which lies in where we look for the ultimate criteria of consistency.
3. the data set representing a measurand
granted that measurement is, in finkelstein's well-chosen words, "the process of assignment of numbers to attributes ... in such a way as to describe them" [10], we have yet to decide in which way numbers are to be assigned. in the classical theory the question is not even posed; to represent the measurand one real number is assigned, tied to a unit of measure: the value of the measurand in the chosen scale of measurement. the geometric paradigm indeed leaves no doubt that "measurement demands some one-to-one relation between the numbers and the magnitudes in question" [11]. the fact is that with a single real number to represent the measurand no consistency can be obtained in different measurements of the same measurand, owing to what eisenhart aptly calls the "cussedness of measurement" [12]. the difficulty is bypassed by assuming that the 'true' value – the one number that would make all measurements fit – exists, but is unknown and unknowable; individual measurements yield values differing from it by an 'error' due to instrumental 'imperfections'; an analysis of these measurement errors is required to assess the interval within which the true value is supposed to lie. this way theory of measurement and theory of errors are separated: the former is free to pursue its way unencumbered by questions of uncertainty, left in care of the latter. somebody finds it a good arrangement, claiming that the principles of measurement cannot depend on 'technical' issues of uncertainty [8]; others feel that "essential difficulties in the logic of measurement arise from uncertainty" [10]. to postulate the solution is no way to solve a problem [11]: this course was taken only on the strength of the geometric paradigm. we should ask: is the real number the proper mathematical entity for representing measurands?
the real number was invented to solve problems in pure mathematics born of the euclidean mensuration problem, i.e. expressing with a number the proportion of two segments of any geometric figure; the apodictic certainty offered by geometry is so attractive that one forgets it is due to the abstract definition of the geometric figures, that do not belong in the real world. but measurement is meant to describe the real world, and if it follows too closely the tracks of geometry it risks to become itself involved in idealized measurands. the real number is by definition the limit of a convergent sequence, but in measurement there is no such a thing as a sequence with a definite generating rule. we may also ask: is the mathematics of continuum, of which the real number is the main pillar, the proper tool for representing the physical world? atomic and nuclear physics did show matter to be granular; quantum mechanics added a granularity of its own and also an indeterminacy deeply embedded in the very fabric of things physical. this is not the setting meant for real numbers and the mathematics of continua. it would not do to have a different theory of measurement for the macroscopic and microscopic worlds. besides, everybody knows that any real macroscopic measurand is endowed with an intrinsic uncertainty inherent to its very definition (“you don’t measure bricks with a micrometer”); with more and more precise measurements of a given measurand one does not get closer to a real number expressing its value, as the classical theory assumes mimicking sequences, but rather at a certain point one finds oneself measuring something else, because the measurand’s definition has exploded into a finer structure. the implications of this commonplace experience are lost if one postulates descriptions by real numbers, and one also wonders whether possible ties between the quantum indeterminacy of microphysics and this ‘definition indeterminacy’ of macrophysics were not obscured by the slant toward geometric sharpness taken by the theory of measurement. if we look at things from the operative standpoint, we cannot help noting that measurements actually yield numerical intervals, and that upon such intervals we must reason to judge the consistency. even a ‘perfect’ instrument could not overcome the intrinsic uncertainty of a real measurand. the mathematical entity we are actually working with is a set of numbers, and there is nothing wrong in representing a measurand with a full set of numbers tied to a unit of measure: it is yet “assignment of numbers ... in such a way as to describe it”. this set assigned to describe the measurand (let us call it a “value-span”) would include in the description the uncertainty of the measurement, thus reuniting theory of measurement and ‘theory of error’. the one fundamental requirement for consistency is that different valuations of the same parameter yield ‘the same’ result: it is matter of defining rigorously what we mean by ‘the same’ and follow suit. two measurements are consistent if the assigned value-spans overlap: this translates in formal terms the practical judgment of consistency “to stay within the error”. the algebra of sets is to substitute the algebra of real numbers in operating on measurement data, starting with the replacement of the basic relation ‘equal’ with ‘not disjoint’ [1–4]. a new italian reference standard on basic measurement terminology [13] follows this line, that was found easy to practice. 
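the 'not disjoint' criterion lends itself to a one-line implementation. the sketch below is a minimal illustration in python, not anything from the paper or the cited standard: it treats a value-span as a closed interval and tests two measurement results for consistency by overlap.

    # a minimal illustration of the consistency-by-overlap rule for value-spans,
    # representing each measurement result as a closed interval (lo, hi)
    def consistent(span_a, span_b):
        # two value-spans are consistent if they are not disjoint
        return span_a[0] <= span_b[1] and span_b[0] <= span_a[1]

    m1 = (9.97, 10.05)   # illustrative result: 10.01 with a 0.04 indifference range
    m2 = (10.02, 10.10)
    m3 = (10.20, 10.30)
    print(consistent(m1, m2))  # True: the spans overlap, i.e. they "stay within the error"
    print(consistent(m1, m3))  # False: disjoint spans describe distinguishable measurands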
one may ask which special properties the sets assigned to describe measurands should have. in particular, the boundary of these sets cannot be sharply defined, if the pitfalls of the real-number concept are to be avoided. a suitable specialization of the 'fuzzy set' concept might well turn out to be the proper mathematical tool for the job [1-3, 14].
4. error, uncertainty, indifference
the problem posed by the cussedness of measurement is tackled by the classical theory in terms of 'error'. this concept rests on the idea that the instrument ought to indicate directly the true value of the measurand but indicates instead a wrong one; the underlying assumptions (derived straightly from the model of measuring length with a graduated rule) are: (a) the measurand is described by a single real number; (b) the instrumental indication is a value assigned to the measurand. on this basis the questions to ask are: why is the instrument wrong? how much is it wrong? a full answer is impossible: we cannot hope to know the error a priori, we can only judge a posteriori that it is smaller than a certain amount. this 'maximum error' is thought to be determinable by combining the different errors due to various causes, separately analysed, and an a priori distinction is made between 'random' and 'systematic' error; the former may be treated exactly by statistics, but the latter turns out to be an elusive concept [12] that calls for an educated guess. the distinction has some heuristic value, beside historic reasons, but adds more problems than it solves: without a sum rule it is no help for judging the consistency of different measurements, and no logically sound sum rule can be given for such essentially heterogeneous components (as shown by the fact that at least four competing rules are now used). such care was paid in distinguishing between 'precision' and 'accuracy' that some technical languages even lack a word describing their cumulative effect; so it happens that after defining different errors one often drifts to speak of 'uncertainty', without however defining it. the problem may be tackled from a different standpoint if we drop the basic assumptions above, remembering that the theory had been cast in that mould long before the 20th-century concept of signal was developed. if we look at the instrumental indication as at a signal bearing information on the measurand, and allow that we are to assign a data set describing the latter (not necessarily a single number) by connecting this information with other information pertaining to the instrumental characteristics and to the 'influence quantities' affecting the measurement, then the questions to ask are: how do we describe the instrument output? which relation holds between this output and a value-span suited to describe the measurand? this way we keep much closer to the actual practice, and may account in the same framework for all instruments (allowing for the evolution of the output signals from the reading of graduated rules to the string of digits out of on-line computers). this approach calls for a clear distinction between the terms referring to the description and those referring to the instrument's output.
much confusion is thus avoided that the classical approach brings about: many technical languages, e.g., use the same term for both the operation of measurement and the resulting description of the measurand, following a mathematical usage justified only by the uniqueness of mathematical solutions; the statistical treatment of ‘errors’ is also often confused with that of random quantities [4]. the attention is brought on the calibration procedures and the interplay of instrument, measurand, and environment. it is now possible a question that the classical approach cannot ask, i.e., is there a relation between the uncertainty of the measurement and the information gained on the measurand? the error concept reflects only the inability of the instrument to supply the information, assumed to be contained in the true value: the measurement is treated in an all-ornothing way, and we cannot tie a larger error with less information. the uncertainty concept has uncertain meaning. taken as ‘the maximum allowed error’ of the instrument, a given uncertainty associated to the value indicated by the instrument means that we are uncertain on where the true value is in the interval thus defined, and the width of the latter might be construed qualitatively as information on the measurand. on the other hand, if we treat this interval as a set assigned in its whole to describe the measurand, this means we are indifferent on which element of the set represents the measurand: its description is the whole set and two measurands cannot be distinguished from each other if their value-spans overlap; then an instrument which in correspondence to its output signal is able (through its calibration operator) to assign a narrower value-span is also able to discriminate to a higher degree a measurand from others of similar description, which means it supplies more information on the measurand. with this meaning of ‘indifference ranges’ the concept is suited to quantitative treatment and may turn out quite handy in tricky issues as the pattern recognition. 5. time and measurement time affects measurement as an influence quantity like the others, playing two different roles: (a) as the time allowed for the measurement operation; (b) as the lifetime of the instrument (or of its calibration). the classical theory is ill equipped to cope with the first problem, as the true value is surely instantaneous (geometry is time independent) and no question of principle can arise with time-varying measurands. in practice a distinction is made between static and dynamic or stationary and transient measurements (definitions and terms change with the milieu); most standards refer only to the former, as no conceptual tools are available to deal with the latter. with an operative approach in terms of signals and assigned value-spans the measurement time fits snugly in the picture, as one of the influence quantities determining the width of the value-span, and one wonders whether a general relation might be worked out between uncertainty (as indifference range) and measurement time, similar to the one tying the indeterminacy of conjugate variables in quantum mechanics. the second role refers to the time dependence of the calibration, hence to the problem of reliability. 
the cussedness of measurement depends on time [12] and a distinction is usually made between repeatability and stability, but the current approach does not go much further: the 'age' of a calibration is not usually considered an influence quantity, though there is no reason for not treating it as such. when we calibrate an instrument we warrant the consistency of the value-spans assigned under the calibration; how long do we mean the warranty to hold? not forever. shall we state that the confidence level of the warranty decreases in time or that the uncertainty increases? the calibration is a procedure of quality control on the instruments, carried out with statistical techniques; the question then arises whether the ergodic theorem may be applied to a population of instruments to predict the behaviour of single ones [4, 15].
references
[1] l. gonella, alta frequenza, 4, 622 (1975).
[2] l. gonella, imeko tc7 symposium, enschede 1975.
[3] l. gonella, seminar held at the course "metrology and fundamental constants", int. school of physics "e. fermi", varenna, july 1976.
[4] l. gonella, imeko tc7 symposium, leningrad 1978.
[5] h. v. helmholtz, zählen und messen, erkenntnistheoretisch betrachtet, philosophische aufsätze eduard zeller gewidmet, leipzig 1887.
[6] s. s. stevens, measurement, definitions and theories (c. w. churchman and p. ratoosh eds.), new york 1959, 20.
[7] b. ellis, basic concepts of measurement, cambridge 1968.
[8] j. pfanzagl, theory of measurement, würzburg 1971.
[9] d. h. krantz, r. d. luce, p. suppes, a. tversky, foundations of measurement, vol. 1, new york 1971.
[10] l. finkelstein, measurement and control, 8, 105 (1975).
[11] b. russell, the principles of mathematics, 2nd ed., new york 1937.
[12] c. eisenhart, journal of research nbs, 67c, n. 2, 161 (1963).
[13] l'unificazione, 30, n. 1, 21 (1976).
[14] j. l. destouches, imeko tc7 symposium, enschede 1975.
[15] j. l. destouches, imeko tc7 symposium, leningrad 1978.

acta imeko
issn: 2221-870x
november 2016, volume 5, number 3, 95-100
visible colour changes estimation in colorimetric determination of chromium (vi) using polymeric sensors
nataliya a. gavrilenko, sergey v. muravyov, nadezhda v. saranchina, alexey v. sukhanov
national research tomsk polytechnic university, pr. lenina 30, 634050 tomsk, russia
section: research paper
keywords: colorimetric sensor; optode; chromium (vi); transparent polymeric matrix; immobilized reagents; digital colour analysis
citation: nataliya a. gavrilenko, sergey v. muravyov, nadezhda v. saranchina, alexey v. sukhanov, visible colour changes estimation in colorimetric determination of chromium (vi) using polymeric sensors, acta imeko, vol. 5, no. 3, article 15, november 2016, identifier: imeko-acta-05 (2016)-03-15
editor: paolo carbone, university of perugia, italy
received july 4, 2016; in final form october 28, 2016; published november 2016
copyright: © 2016 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was supported by russian science foundation (project 14-19-00926)
corresponding author: sergey muravyov, e-mail: muravyov@tpu.ru
abstract
the paper describes an application of a kind of optical analytical method, digital colour analysis (dca), using colorimetric polymethacrylate sensors (optodes) in order to determine cr (vi). the optodes are made of an optically transparent polymethacrylate matrix (pmm) with 1.5-diphenylcarbazide immobilized. the developed optode can be used in the determination of the analytes using solid-phase spectrophotometry and calculating colour coordinates as functions of absorbance spectra. alternatively, colour coordinates can be represented as basic colour (e.g. rgb) data after digitizing the optode image; one can then determine the content of an analyte in a sample by an appropriate colour difference calculated for these coordinates. experimental results of cr (vi) determination in tap water show that the dca relative standard deviation is 8-17 % and the recovery is < 12 % in the range of determined concentrations 0.05-1.0 mg·l−1. these characteristics are comparable with those of solid-phase spectrophotometry.
1. introduction
chromium compounds are powerful oxidizing agents and tend to be irritating and corrosive.
They are well-known toxic substances posing various health and ecological risks [1]. Trace levels of chromium can be determined by different analytical techniques, most of which are based on atomic absorption spectrometry [2], inductively coupled plasma mass/atomic emission spectrometry [3], and high-performance liquid chromatography [4]. These methods, in spite of their high sensitivity, need expensive instrumentation and skilled staff. Optochemical sensors play an important part in industrial, environmental and clinical monitoring thanks to their low cost, possibility for miniaturization and great flexibility [5], [6]. Among the different types of optochemical sensors, colorimetric sensors (optodes) are especially attractive because they recognize analytes through a colour change, which yields a visually observable and easily measurable analytical signal [7], [8]. The analytical signal measurement can be carried out using not only standard spectrophotometric equipment, but also the naked eye, without expensive equipment. Naturally, naked-eye techniques cannot be as accurate as spectrophotometry. That is why the visible colour changes should be measured using different chromaticity coordinates (in RGB, XYZ, L*a*b* and/or other systems) and parameters such as the total colour difference (colour variation) ΔE. Usually the chromaticity parameters are calculated as functions of absorption or reflection spectra by means of computer programs intended for spectral data processing. Nowadays, the measurement of colorimetric parameters has become easier due to the wide use of scanners, digital photo and video cameras, etc. [9]-[11]. An image of an optode is captured and transferred to a computer, and its colour is interpreted using imaging software. This approach can also be implemented by transforming the optode colour into an electric signal by means of a diode target array. It provides an opportunity for carrying out rapid chemical analysis in an industrial working environment and/or in the field. It also makes it possible to design a reliable and easy-to-use measuring instrument that can be calibrated to measure the mass concentration of a whole series of substances.
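The scanner-based variant of this approach reduces, in essence, to averaging the RGB values of the optode region in a digitized image and relating a colour difference to concentration. A minimal sketch of that step is given below; the file names and crop box are hypothetical placeholders, and the snippet assumes the Pillow imaging library rather than any software named in the paper.

```python
import numpy as np
from PIL import Image

def mean_rgb(path, box):
    """Average RGB colour of the optode region box = (left, top, right,
    bottom) in a scanned image."""
    region = Image.open(path).convert("RGB").crop(box)
    return np.asarray(region, dtype=float).reshape(-1, 3).mean(axis=0)

def colour_difference(rgb, rgb_blank):
    """Euclidean colour difference between exposed and blank optodes."""
    return float(np.linalg.norm(np.asarray(rgb) - np.asarray(rgb_blank)))

# Hypothetical usage with made-up file names and crop box:
# blank = mean_rgb("optode_blank.png", (10, 10, 60, 60))
# probe = mean_rgb("optode_sample.png", (10, 10, 60, 60))
# print(colour_difference(probe, blank))
```

The resulting colour difference would then be mapped to concentration through a calibration curve of the kind discussed in Section 3.4.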
At Tomsk Polytechnic University such an analytical instrument has been created, called a digital colorimetric analyzer (DC analyzer) [12]. This portable instrument makes it possible to measure the colorimetric parameters of solid transparent optodes with high precision. The DC analyzer is controlled by a laptop or desktop computer through a USB interface. Specially developed software enables the calibration of the instrument for different test systems. Due to its compactness and ease of use, the DC analyzer can be used for rapid determinations in various practical domains. In the present work, an application of the DC analyser using colorimetric polymethacrylate sensors as optodes is considered in order to determine Cr(VI). The optodes are made of an optically transparent polymethacrylate matrix (PMM) with immobilized 1,5-diphenylcarbazide. The developed optodes have been used in determining Cr(VI) by solid-phase spectrophotometry, calculating colour coordinates as functions of absorbance spectra. As an alternative approach, the colour coordinates can be represented by means of RGB data after digitizing the optode image. The subsequent determination of the Cr(VI) content in a sample has been carried out on the basis of an appropriate colour difference calculated as a function of the RGB coordinates. Additionally, the colorimetric parameters of the optode for Cr(VI) determination were measured by means of the DC analyser. The obtained outcomes are discussed in the present paper, which is an extended version of the conference paper [13].

2. Methods

2.1. Materials

The PMM is a specially created material containing functional groups which provide the ability to extract both the reagent and the substance to be determined [14]. Transparent 10 cm × 10 cm polymethacrylate plates of thickness (0.60 ± 0.04) mm were prepared by radical block polymerization of methacrylate and (alkyl)acrylates of alkaline (or alkaline earth) metals at a temperature of 60 °C to 70 °C for 3 h to 4 h. The plates were cut into 6.0 mm × 8.0 mm platelets (weight ca. 0.05 g) intended for the analyses. All reagents were of analytical grade and used as purchased, without further purification. Deionized and distilled water was used in all experiments. The working solutions of 0.025 % to 0.200 % 1,5-diphenylcarbazide were prepared by dissolving precise loads in ethanol by heating in a water bath, with subsequent dilution with distilled water. The stock solutions of metals (1 mg·ml⁻¹) were prepared by dissolving precise loads of their salts in 0.01 M acids. The working standard solutions of the required concentration were prepared by dilution of the stock solution on the day of the experiment. The required pH was adjusted using acid and alkali and controlled using an I-160 ionometer.

2.2. Procedure

The immobilization of 1,5-diphenylcarbazide (DPC) into the polymethacrylate matrix was performed by sorption from a water-alcohol solution of DPC under bath conditions for 3 min to 10 min. The tailoring of the optimum conditions of the interaction of immobilized DPC with Cr(VI) is described in detail in [15]. Briefly, the PMM with immobilized DPC was put into 50 ml of Cr(VI) solutions of different concentration and pH and stirred for 15 min to 30 min, after which the absorption spectra or absorbance of the PMM were measured. Then the polymethacrylate matrix with the immobilized DPC was put into 50.0 ml of an analysed solution of definite Cr(VI) concentration at pH ≈ 0 and stirred for 15 min. After that the absorption spectra of the PMM were measured.
Then the colour coordinates X, Y and Z were calculated according to the equations [16]:

$$X = k \sum_{\lambda=380}^{780} \tau(\lambda)\, S(\lambda)\, \bar{x}_{10}(\lambda), \qquad (1)$$

$$Y = k \sum_{\lambda=380}^{780} \tau(\lambda)\, S(\lambda)\, \bar{y}_{10}(\lambda), \qquad (2)$$

$$Z = k \sum_{\lambda=380}^{780} \tau(\lambda)\, S(\lambda)\, \bar{z}_{10}(\lambda), \qquad (3)$$

where $k = 100 \big/ \sum_{\lambda=380}^{780} S(\lambda)\, \bar{y}_{10}(\lambda)$; $\bar{x}_{10}(\lambda)$, $\bar{y}_{10}(\lambda)$ and $\bar{z}_{10}(\lambda)$ are the addition functions of the CIE 1964 supplementary standard observer, $\tau(\lambda)$ is the transmittance of the PMM and $S(\lambda)$ is the relative spectral power distribution of the CIE standard illuminant D65. After that, the chromaticity coordinates x, y, z were found according to the equations:

$$x = X/(X + Y + Z), \qquad (4)$$
$$y = Y/(X + Y + Z), \qquad (5)$$
$$z = Z/(X + Y + Z), \qquad (6)$$

where the condition x + y + z = 1 holds. Then the transition from the XYZ to the RGB system was made in accordance with the following equations:

$$R = 3.2405\,X - 1.5371\,Y - 0.4985\,Z, \qquad (7)$$
$$G = -0.9693\,X + 1.8760\,Y + 0.0416\,Z, \qquad (8)$$
$$B = 0.0556\,X - 0.2040\,Y + 1.0572\,Z. \qquad (9)$$

The conversion from the XYZ to the CIE L*a*b* system was done as described by the following equations [17]:

$$L^{*} = \begin{cases} 116\,(Y/Y_n)^{1/3} - 16 & \text{if } Y/Y_n > 0.008856 \\ 903.3\,(Y/Y_n) & \text{if } Y/Y_n \le 0.008856 \end{cases}, \qquad (10)$$

$$a^{*} = 500\,[f(X/X_n) - f(Y/Y_n)], \qquad (11)$$
$$b^{*} = 200\,[f(Y/Y_n) - f(Z/Z_n)], \qquad (12)$$

where

$$f(t) = \begin{cases} t^{1/3} & \text{if } t > 0.008856 \\ 7.787\,t + 16/116 & \text{if } t \le 0.008856 \end{cases}$$

and $X_n$, $Y_n$ and $Z_n$ are the coordinates of the white point of the system. The total colour differences $\Delta E_{abs}^{xyz}$, $\Delta E_{abs}^{Lab}$ and $\Delta E_{abs}^{RGB}$ for the chromaticity coordinates in the xyz, CIE L*a*b* and RGB colour systems, calculated from the absorption spectra, were estimated according to the equations:

$$\Delta E_{abs}^{xyz} = (\Delta x^2 + \Delta y^2 + \Delta z^2)^{1/2}, \qquad (13)$$
$$\Delta E_{abs}^{Lab} = (\Delta L^{*2} + \Delta a^{*2} + \Delta b^{*2})^{1/2}, \qquad (14)$$
$$\Delta E_{abs}^{RGB} = (\Delta R^2 + \Delta G^2 + \Delta B^2)^{1/2}, \qquad (15)$$

where Δx, Δy, Δz; ΔL*, Δa*, Δb* and ΔR, ΔG, ΔB are the colour coordinate changes in the xyz, CIE L*a*b* and RGB systems, respectively. The visible colour changes of the PMM were estimated by digital imaging by means of a scanner and image processing software. The PMM image was captured and transferred to a computer, and its colour was interpreted using imaging software, where the colorimetric data in RGB format were related to the concentration of the analyte. The total colour difference ΔE_scan for the scanned images of the PMM [18], evaluated by means of the scanner and the Photoshop CS software, was estimated according to the equation

$$\Delta E_{scan} = [(R - R_0)^2 + (G - G_0)^2 + (B - B_0)^2]^{1/2}, \qquad (16)$$

where R₀, G₀, B₀ are the colour coordinates of samples after contact with a plain solution, and R, G, B are the colour coordinates of samples after contact with the solution containing the substance under determination. The visible colour changes of the PMM were also estimated with the DC analyzer. For this aim, the PMM, after contact with the solution under determination, was placed into the receiving bin of the analyzer and the measurements were carried out. The measurement results were obtained as series of RGB coordinates, and the corresponding colour difference ΔE_instr was calculated by a formula similar to (16).
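Equations (1)-(16) chain together directly; the sketch below implements the RGB branch in Python. The observer functions, illuminant and transmittance arrays are short placeholder vectors (real CIE 1964 and D65 tables, sampled on a common wavelength grid, would be substituted), so the printed number is only illustrative.

```python
import numpy as np

# Placeholder spectral data on a common 380-780 nm grid; real CIE 1964
# observer functions and the D65 distribution would be loaded instead.
wl = np.arange(380, 781, 100.0)
xbar = np.array([0.1, 0.8, 0.6, 0.1, 0.0])
ybar = np.array([0.0, 0.7, 0.9, 0.1, 0.0])
zbar = np.array([0.5, 0.3, 0.0, 0.0, 0.0])
S = np.ones_like(wl)

def tristimulus(tau, S, xbar, ybar, zbar):
    """X, Y, Z from the PMM transmittance tau(lambda), equations (1)-(3)."""
    k = 100.0 / np.sum(S * ybar)
    return (k * np.sum(tau * S * xbar),
            k * np.sum(tau * S * ybar),
            k * np.sum(tau * S * zbar))

def xyz_to_rgb(X, Y, Z):
    """Linear transform of equations (7)-(9)."""
    return (3.2405 * X - 1.5371 * Y - 0.4985 * Z,
            -0.9693 * X + 1.8760 * Y + 0.0416 * Z,
            0.0556 * X - 0.2040 * Y + 1.0572 * Z)

def delta_e(c1, c0):
    """Total colour difference of equations (13)-(16)."""
    return float(np.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c0))))

tau_blank = np.array([0.9, 0.9, 0.9, 0.9, 0.9])   # blank optode
tau_probe = np.array([0.9, 0.5, 0.4, 0.8, 0.9])   # after contact with Cr(VI)
rgb0 = xyz_to_rgb(*tristimulus(tau_blank, S, xbar, ybar, zbar))
rgb1 = xyz_to_rgb(*tristimulus(tau_probe, S, xbar, ybar, zbar))
print(f"dE_RGB = {delta_e(rgb1, rgb0):.2f}")
```

The L*a*b* branch of equations (10)-(12) would be added analogously, with the white point (X_n, Y_n, Z_n) of the chosen illuminant.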
2.3. Apparatus

The absorption spectra and absorbances of the PMM were recorded on Shimadzu UV-mini-1240 (Shimadzu Corporation, Japan) and Spekol 21 (Carl Zeiss Jena, Germany) spectrophotometers. A non-modified polymethacrylate matrix was used as the reference sample. A Hewlett-Packard ScanJet 4400C was used as the desktop scanner. The personal computer was a Pentium II (333 MHz). The software packages used were as follows: Photoshop CS for image collection and extraction of colour intensities, an Excel spreadsheet for general calculations and Origin 7.0 for plotting. The colorimetric parameters were measured by means of the developed DC analyzer (instrumental error < 1 %, response time < 25 ms). The pH values were measured by the I-160 ionometer (NPO "Izmeritelnaya Tekhnika", Russia) with a glass pH-selective electrode. The ionometer had an absolute error of ±0.02 pH and was calibrated at 25 °C using buffer solutions with pH 1.00 and 9.18.

3. Results and discussion

Traditional colorimetric determination of chromium (VI) is based on the formation of the red-violet coloured diphenylcarbazonate chromium (III) complex as a result of the interaction of DPC and Cr(VI) ions in acid media. The interaction includes the following two stages (Figure 1): a redox reaction between chromium (VI) and 1,5-diphenylcarbazide, with formation of chromium (III) and diphenylcarbazonate (DPCO), and a complex formation reaction between chromium (III) and DPCO. The exact structure of this complex is not known, but it appears to be a cationic complex [Cr(III)DPCO](3–n)+ in which an unknown number n of protons are liberated [19]. Other colorimetric methods for chromium determination can be found in papers [20]-[23].

Figure 1. Scheme of the Cr(VI) and DPC interaction.

3.1. Cr(VI) colorimetric determination using the PMM

The PMM with immobilized DPC had a red-violet colour after contact with the Cr(VI) solution, due to the formation of the cationic complex. The absorption spectra of this complex in the PMM and the scanned images of the PMM are presented in Figure 2. The absorption spectrum has a maximum at 545 nm. The absorption at the wavelength of 545 nm was taken as the analytical signal for the solid phase spectrophotometric determination of Cr(VI), which is explained in more detail in [15].

Figure 2. Absorption spectra of the PMM with immobilized DPC after contact with Cr(VI) solutions with concentration, mg·l⁻¹: curve 1 – 0; curve 2 – 0.05; curve 3 – 0.10; curve 4 – 0.25; curve 5 – 0.50; curve 6 – 0.75; curve 7 – 1.00.

3.2. Acidity influence investigation

Investigations of the influence of the acidity of the analyzed solution on the analytical signal showed that the maximum analytical signal is reached at pH ≈ 0. Also, the nature of the acid plays a significant role in shaping the analytical signal for the solid phase spectrophotometric determination of Cr(VI): sulphuric acid and orthophosphoric acid are not suitable for maintaining the medium acidity. Between hydrochloric acid and nitric acid, the use of the former to maintain the medium acidity provided a maximum analytical signal which was stable for several hours, whereas the latter decreased the analytical signal and increased its stability up to several days. Thus, hydrochloric acid was used in the further work due to the maximum analytical signal.

3.3. Interfering ions influence investigation

The influence of interfering ions such as Fe(III), Cu(II), Hg(II), V(V), Co(III), Pb(II), Ni(II) and Mn(II) on the determination of Cr(VI) using the PMM with immobilized DPC was investigated.
Ions of each of the interfering elements were added individually to a 50 mg·l⁻¹ chromium (VI) solution, and the relative error δ of the Cr(VI) determination in the presence of the interfering element ions was calculated according to the equation

$$\delta = \frac{A_{\Sigma} - A}{A} \cdot 100, \qquad (17)$$

where A is the analytical signal from the PMM with immobilized DPC after contact with the Cr(VI) solution without interfering ions, and A_Σ is the analytical signal after contact with the solution containing the interfering ions. The interference investigation results are shown in Table 1.

3.4. Calibration dependences

The calibration dependences for Cr(VI) were obtained for different methods and analytical signals, namely: traditional solid phase spectrophotometry with the absorbance maximum A545; solid phase spectrophotometry with the total colour differences ΔE_abs^xyz, ΔE_abs^Lab and ΔE_abs^RGB computed by equations (13)-(15) on the basis of the colour coordinates obtained from the absorption spectra; and digital colour analysis (DCA) [10] with the total colour differences ΔE_scan for scanned optode images and ΔE_instr measured by the DC analyzer. The concentration dependences for Cr(VI) are described by equations with high correlation coefficient values for all the estimation methods of the visible colour changes. The analytical performance of the equations of the colour changes is presented in Table 2. Graphical dependences of the analytical signal versus the concentration of Cr(VI) in the analyzed solution are shown in Figure 3. One can see in Figure 3 that the slope of the calibration line for ΔE_abs^RGB is greater than that of the calibration line for ΔE_abs^xyz. This means that the use of RGB colour coordinates when computing the total colour difference for absorbance spectrum data provides a better sensitivity compared to xyz coordinates. It should be noticed that, in the case of Cr(VI), the calibration dependence for ΔE_scan appears to have a quadratic character. The calibration dependence for ΔE_instr could be described by both a second-degree polynomial and a linear equation; since the correlation coefficients of the two options were equal to each other, the linear equation was selected for the sake of simplicity. The precision of the procedure was expressed as the relative standard deviation (RSD) for the determination of 0.1 mg·l⁻¹ of Cr(VI). The values of the range of determined concentrations (RDC), as well as of the RSD, were obtained under reproducibility conditions for a number of determinations n = 5, according to ISO 5725-1994. The RSD, expressed in percent, was estimated by the formula

$$RSD = \frac{100}{\bar{c}} \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} (c_i - \bar{c})^2}, \qquad (18)$$

where n is the number of determinations (optodes), $c_i$ is the result of the i-th concentration determination, and $\bar{c}$ is the arithmetic mean of the n determination results. The recovery Q, expressed in percent, was estimated by the formula

$$Q = \frac{q_{found} - q_{added}}{q_{added}} \cdot 100, \qquad (19)$$

where q_added is the introduced addition content and q_found is the found addition content, as an averaged value of n determinations. The accuracy and precision of the Cr(VI) determination results were verified by the standard addition method using drinking water (Table 3). One can see from Table 3 that the characteristics of accuracy and precision for the analytical signals ΔE_scan and ΔE_instr of the digital colour analysis are comparable with those of solid phase spectrophotometry (A545 and ΔE_abs).
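For concreteness, the sketch below fits a linear calibration of the Table 2 form, inverts it to estimate a concentration, and applies equations (18) and (19); all numeric values are made-up placeholders, not the paper's measurements.

```python
import numpy as np

# Placeholder calibration points: concentrations (mg/l) vs. colour differences
c = np.array([0.05, 0.10, 0.25, 0.50, 0.75, 1.00])
dE = np.array([9.5, 15.6, 33.1, 62.4, 91.2, 122.1])

# Linear calibration dE = b0 + b1*c, as in Table 2
b1, b0 = np.polyfit(c, dE, 1)
r = np.corrcoef(c, dE)[0, 1]

def concentration(dE_meas):
    """Invert the calibration line to estimate the analyte content."""
    return (dE_meas - b0) / b1

def rsd(ci):
    """Equation (18): relative standard deviation of n determinations, %."""
    ci = np.asarray(ci, dtype=float)
    return 100.0 * ci.std(ddof=1) / ci.mean()

def recovery(q_found, q_added):
    """Equation (19): recovery of a known addition, %."""
    return 100.0 * (q_found - q_added) / q_added

reps = [concentration(x) for x in (41.3, 40.1, 42.0, 39.6, 40.8)]
print(f"r = {r:.3f}, RSD = {rsd(reps):.1f} %, "
      f"recovery = {recovery(np.mean(reps), 0.300):.1f} %")
```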
Table 1. Influence of interfering ions.

Ions      Concentration ratio   δ, %
Fe(II)    10                    5
          50                    7
Cu(II)    10                    5
          50                    10
V(V)      10                    5
          50                    13
Hg(II)    10                    5
          50                    13
Mn(II)    10                    5
          50                    13
Co(III)   50                    5
Pb(II)    50                    5
Ni(II)    50                    5

Table 2. Analytical performance of the Cr(VI) colorimetric sensor.

Analytical signal   Calibration equation                          R¹      RDC², mg·l⁻¹
A545                0.003 + 0.525 · c_Cr(VI)                      0.998   0.05–1.00
ΔE_abs^RGB          0.002 + 0.167 · c_Cr(VI)                      0.999   0.05–1.00
ΔE_abs^Lab          0.4 + 47.8 · c_Cr(VI)                         0.999   0.05–1.00
ΔE_abs^xyz          0.003 + 0.103 · c_Cr(VI)                      0.998   0.05–1.00
ΔE_scan             5.63 + 208.29 · c_Cr(VI) − 77.24 · c²_Cr(VI)  0.997   0.05–1.00
ΔE_instr            3.335 + 118.85 · c_Cr(VI)                     0.992   0.05–1.00

¹ R is the correlation coefficient.
² RDC is the range of determined concentrations.

Figure 3. Calibration dependences for determination of Cr(VI) obtained for estimation of visible colour changes using the analytical signals A545, ΔE_abs^xyz, ΔE_abs^Lab, ΔE_abs^RGB, ΔE_scan and ΔE_instr.

Table 3. Determination of Cr(VI) in tap water (sample size n = 5, confidence level P = 0.95).

Technique                       Analytical signal   Added, mg·l⁻¹   Found, mg·l⁻¹    RSD¹, %   Recovery, %
Solid-phase spectrophotometry   A545                0.070           0.065 ± 0.016    15        −7
                                                    0.300           0.29 ± 0.03      7         −3
                                                    0.600           0.601 ± 0.023    2         0.2
Colour coordinates              ΔE_abs^RGB          0.070           0.066 ± 0.008    14        −6
                                                    0.300           0.29 ± 0.02      8         −3
                                                    0.600           0.63 ± 0.05      8         5
                                ΔE_abs^Lab          0.070           0.068 ± 0.008    14        −3
                                                    0.300           0.280 ± 0.023    8         −5
                                                    0.600           0.62 ± 0.05      8         3
                                ΔE_abs^xyz          0.070           0.070 ± 0.013    21        0.2
                                                    0.300           0.29 ± 0.03      12        −3
                                                    0.600           0.63 ± 0.04      7         4
Digital colour analysis         ΔE_scan             0.070           0.069 ± 0.012    20        −2
                                                    0.300           0.31 ± 0.03      11        2
                                                    0.600           0.59 ± 0.06      11        −2
                                ΔE_instr            0.070           0.066 ± 0.005    8         −6
                                                    0.300           0.34 ± 0.05      17        12
                                                    0.600           0.65 ± 0.08      14        9

¹ RSD is the relative standard deviation.

4. Conclusion

The proposed colorimetric sensor on the basis of the PMM with immobilized DPC can be successfully used for the determination of Cr(VI) by both solid phase spectrophotometry and digital colour analysis (DCA). Different analytical signals based on the estimation of visible colour changes were studied. The metrological performance of the DCA method was shown to be comparable with that of solid phase spectrophotometry. The DCA for determination of not only Cr(VI) but also a wide spectrum of other elements and/or substances can be implemented using an office scanner with graphic software or the DC analyzer.
Regardless of the implementation mode, it can be applied as a highly sensitive method of chemical analysis in both laboratory and field conditions. Use of the DC analyzer facilitates field measurements by means of the DCA due to its compactness.

Acknowledgement

This work was supported by the Russian Science Foundation, project 14-19-00926.

References

[1] EPA/630/P-02/004F, Generic ecological assessment endpoints (GEAE) for ecological risk assessment, U.S. Environmental Protection Agency, Washington, DC, October 2003.
[2] Z. Tahmasebi, S. S. H. Davarani, Selective and sensitive speciation analysis of Cr (VI) and Cr (III), at sub-μg l−1 levels in water samples by electrothermal atomic absorption spectrometry after electromembrane extraction, Talanta 161 (2016) pp. 640-646.
[3] R. Nageswara Rao, M. V. N. Kumar Talluri, An overview of recent applications of inductively coupled plasma-mass spectrometry (ICP-MS) in determination of inorganic impurities in drugs and pharmaceuticals, J. Pharm. Biomed. Anal. 43(1) (2007) pp. 1-13.
[4] P. Jin, X. Liang, L. Xia, F. Jahouh, R. Wang, Y. Kuang, X. Hu, Determination of 20 trace elements and arsenic species for a realgar-containing traditional Chinese medicine Niuhuang Jiedu tablets by direct inductively coupled plasma-mass spectrometry and high performance liquid chromatography-inductively coupled plasma-mass spectrometry, J. Trace Elem. Med. Biol. 33 (2016) pp. 73-80.
[5] C. McDonagh, C. S. Burke, B. D. McCraith, Optical chemical sensors, Chem. Rev. 108(2) (2008) pp. 400-422.
[6] R. Narayanaswamy, O. S. Wolfbeis (eds.), Optical sensors, industrial, environmental and diagnostic applications, Springer, 2004, ISBN 978-3662-09111-1.
[7] N. Kaur, S. Kumar, Colorimetric metal ion sensor, Tetrahedron 67 (2011) pp. 9233-9264.
[8] N. Sato, M. Mori, H. Itabashi, Cloud point extraction of Cu (II) using a mixture of Triton X-100 and dithizone with a salting-out effect and its application to visual determination, Talanta 117 (2013) pp. 376-381.
[9] S. V. Khimchenko, L. P. Eksperiandova, Comparison of analytical potentials of detection versions in chromaticity rapid analysis using portable instruments, J. Anal. Chem. 67(8) (2012) pp. 701-705.
[10] N. A. Gavrilenko, S. V. Muravyov, S. V. Silushkin, A. S. Spiridonova, Polymethacrylate optodes: a potential for chemical digital colour analysis, Measurement 51 (2014) pp. 464-469.
[11] J. R. Askim, M. Mahmoudi, K. S. Suslick, Optical sensor arrays for chemical sensing: the optoelectronic nose, Chem. Soc. Rev. 42(22) (2013) pp. 8649-8682.
[12] S. V. Muravyov, A. S. Spiridonova, N. A. Gavrilenko, P. F. Baranov, L. I. Khudonogova, A digital colorimetric analyzer for chemical measurements on the basis of polymeric optodes, Instrum. Exp. Tech. 59(4) (2016) pp. 592-600.
[13] N. Saranchina, N. Gavrilenko, A. Sukhanov, S. Muravyov, "Colorimetric polymer sensor for determination of chromium (VI): comparison of estimation methods of the visible colour changes", Proc. of 21st IMEKO World Congress, Aug. 30 - Sept. 4, 2015, Prague, Czech Republic.
[14] N. A. Gavrilenko, G. M. Mokrousov, "The indicator sensitive material for determination of microquantities of substances", Patent 2272284 (RU), 2004.
[15] N. V. Saranchina, I. V. Mikheev, N. A. Gavrilenko, M. A. Proskurnin, M. A. Gavrilenko, Determination of chromium (VI) using 1,5-diphenylcarbazide immobilized in a polymethacrylate matrix, Analitika i Kontrol 18(1) (2014) pp. 105-111 (in Russian).
[16] D. Judd, G. Wyszecki, Colour in business, science and industry, New York, Wiley, 1975, ISBN 0471452122.
[17] A. Ford, A. Roberts, Colour space conversions, Westminster University, London, 1998, http://www.poynton.com/pdfs/coloureq.pdf
[18] S. V. Murav'ev, N. A. Gavrilenko, A. S. Spiridonova, S. V. Silushkin, "Method of determining amount of analysed substance from colour scale", Patent 2428663 (RU), 2011.
[19] Y. M. Scindia, A. K. Pandey, A. V. R. Reddy, S. B. Manohar, Chemically selective membrane optode for Cr (VI) determination in aqueous samples, Anal. Chim. Acta 515 (2004) pp. 311-321.
[20] Xiaoyan Wu, Yunbo Xu, Yangjun Dong, Xue Jiang, Ningning Zhu, Colorimetric determination of hexavalent chromium with ascorbic acid capped silver nanoparticles, Anal. Methods 5 (2013) pp. 560-565.
[21] Jing Li, Hui Wei, Shaojun Guo, Erkang Wang, Selective, peroxidase substrate based "signal-on" colorimetric assay for the detection of chromium (VI), Anal. Chim. Acta 630 (2008) pp. 181-185.
[22] D. Kim, J. Om, Direct spectroscopic determination of aqueous phase hexavalent chromium, UJES 1(1) (2013) pp. 1-4.
[23] P. Parmar, A. K. Pillai, V. K. Gupta, An improved colorimetric determination of micro amounts of chromium (VI) and chromium (III) using p-aminoacetophenone and phloroglucinol in different samples, J. Anal. Chem. 65(6) (2010) pp. 582-587.

Acta IMEKO
ISSN: 2221-870X
December 2016, Volume 5, Number 4, 4-11

Optical system for on-line monitoring of welding: a machine learning approach for optimal set up

Giulio D'Emilia, David Di Gasbarro, Emanuela Natale
Università degli Studi dell'Aquila, Via Giovanni Gronchi 18, 67100 L'Aquila, Italy

Section: Research Paper
Keywords: p-value; machine learning; vision system; uncertainty; online control; welding
Citation: Giulio D'Emilia, David Di Gasbarro, Emanuela Natale, Optical system for on-line monitoring of welding: a machine learning approach for optimal set up, Acta IMEKO, vol. 5, no. 4, article 2, December 2016, identifier: IMEKO-ACTA-05 (2016)-04-02
Section Editor: Lorenzo Ciani, University of Florence, Italy
Received September 24, 2016; in final form December 14, 2016; published December 2016
Copyright: © 2016 IMEKO. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
Corresponding author: Giulio D'Emilia, e-mail: giulio.demilia@univaq.it

Abstract
In this paper a methodology is described for continuous checking of the settings of a low-cost vision system for automatic geometrical measurement of welding embedded on components of complicated shape. The measurement system is based on a laser sheet. Measuring conditions and the corresponding uncertainty are analyzed by evaluating their p-value and its closeness to an optimal measurement configuration, also when working conditions are changed. The method aims to check the holding of optimal measuring conditions by using a machine learning approach for the vision system: based on such a methodology, single images can be used to check the settings, therefore allowing a continuous and on-line monitoring of the optical measuring system capabilities. According to this procedure, the optical measuring system is able to reach and to hold uncertainty levels adequate for automatic dimensional checking of welding and of defects, taking into account the effects of incorrect system hardware/software settings and environmental effects, like varying lighting conditions. The paper also studies the effects of process variability on the method for quantitative evaluation, in order to propose on-line solutions for this system.

1. Introduction

On-line monitoring of laser welding can be carried out by many inspection techniques, based on acoustic emission, optical detection, image analysis, radiographic testing and ultrasonic testing [1]. In particular, non-contact optical systems are widely used for on-line monitoring of the quality, geometry and position of welding, based on different optical measurement methods, like spectroscopic studies of the welding plasma plume [2], triangulation systems [3] and other types of solutions [4]; remarkable measuring performances can be achieved in terms of measurement uncertainty, even though the cost of these solutions is often high, so that their use is limited to specific applications. Furthermore, if reliable and accurate information is needed, more actions have to be provided in order to improve the efficaciousness of the results for geometry and defect classification, in particular as far as image processing is concerned.
In fact, careful image analysis algorithms have been developed for defect classification on images obtained by radiographic systems [5], [6] or by measurement sensors of the eddy current type [7]. Further approaches appear in the literature, often based on neural networks and genetic algorithms, having different objectives: identification of welding parameters for optimization of both welding characteristics and cost [8], [9], classification of defects of components for the automotive industry in general [10] or of welding in particular [11]. Furthermore, deep attention has been paid to the use of neural networks for image analysis for quick and effective classification based on a machine learning method [12]. An optical laser sheet is a simple and economical way to carry out non-contact geometrical measurements in different types of applications [3], [4], [13], [14]; a sheet of laser light makes it possible to overcome the problem of low-contrast images and to obtain, in a simple manner, information on the shape of surfaces. This is particularly important in the case of investigation of untreated metal surfaces. In fact, at the intersection of the laser sheet with the object to be analysed, the laser line appears to be well contrasted with respect to the object surface. In this way, the intersection points (circled in red in Figure 1) between different surfaces of an object are easily identified. Usually the accuracy of these systems has to be optimized in order to achieve a satisfactory uncertainty level [15]-[17], especially for the control of complicated geometries, as in the case of automotive applications. Furthermore, the best quality level for measurements has to be maintained, and this should be assessed on-line with a method not too onerous from the point of view of time, operations and data processing requirements. Therefore, it is important to develop both a vision system that works well under certain environmental conditions and a methodology able to automatically maintain good metrological characteristics, also in complex applications and varying operating conditions.
In this paper the measurement capability of a measurement system based on a laser sheet is described, with reference to the monitoring of welding having a complicated geometry, for the realization of big mechanical components. Furthermore, an experimental methodology is presented in order to improve the possibility of using this simple measuring solution also in difficult situations, aiming to automatically check and maintain conditions of good setting of the vision system, with particular attention to variations of the lighting level and characteristics. This work aims to single out the main parameters affecting uncertainty and to evaluate their effect from both a theoretical and an experimental point of view. A previous work of the authors [18] developed a first attempt to set up a method based on a laser sheet system, robust and simple to use, for the on-line continuous check of the right optical and procedural settings. The p-value parameter [19] was used to automatically identify the best settings for the vision system, taking into account the main causes of uncertainty. The method demonstrated to be able to detect the effects of a small variation of the settings, so strongly supporting the setup of the vision system. The effect of process variability was also studied, in particular the dimensional variability of the pieces to be welded together, showing that the method is very sensitive to it. The methodology proved to be robust and reliable in practical applications. According to the previous considerations, the paper describes an improved methodology able to improve the accuracy of dimensional measurements and to reduce the processing time of the images of the measured pieces, for more efficient use in on-line applications. The method is based on the use of a machine learning approach for optimal set-up and uncertainty reduction, despite the variation of procedural and environmental conditions. A specific section will present the improved methodology, explaining its theoretical and experimental motivations. The experimental results and the way the data have been processed will be described, in order to validate the methodology for a class of practical applications. Some discussion with reference to use in the field will conclude the paper.

2. Materials and methodology

The position and dimensional check of the welding is performed by a vision system based on a laser sheet, whose architecture is shown in Figure 2 and Figure 3. A detailed description of the components and the complete procedure for the setup of the system is given in a previous work [20]. In the present paper, the MATLAB software with the Machine Learning toolbox and the Computer Vision System toolbox has been used to analyse the illumination conditions in the acquired images. The proposed system uses the same hardware components as the triangulation systems [3], but it provides a different software elaboration; in fact, this approach does not identify the geometrical surface of the analysed object, but it evaluates, in a simple manner, the distance of characteristic points from a reference (a code sketch of this computation is given after the figure captions below).

Figure 1. Example of identification of the intersection point between two lines generated by the laser sheet.
Figure 2. Scheme of the system for the welding control.
Figure 3. Picture of the system for the welding control.
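The distance evaluation just described reduces to fitting a straight line to the laser-line pixels on each surface and intersecting the two fits. The sketch below is a minimal numpy illustration with synthetic, randomly perturbed pixel coordinates (all values are placeholders, and the slope-intercept form assumes neither line is vertical); the pixel-to-millimetre scale would come from the gauge-block calibration of the actual system.

```python
import numpy as np

def fit_line(points):
    """Least-squares fit y = m*x + q to the laser-line pixels on one surface."""
    m, q = np.polyfit(points[:, 0], points[:, 1], 1)
    return m, q

def intersection(line_a, line_b):
    """Intersection of the two fitted lines: the characteristic point."""
    (m1, q1), (m2, q2) = line_a, line_b
    x = (q2 - q1) / (m1 - m2)
    return x, m1 * x + q1

# Synthetic pixel coordinates from the two plates (placeholder data)
rng = np.random.default_rng(0)
xa = np.linspace(0, 100, 50)
pa = np.column_stack([xa, 0.02 * xa + 40 + rng.normal(0, 0.3, 50)])
xb = np.linspace(120, 220, 50)
pb = np.column_stack([xb, -0.85 * xb + 230 + rng.normal(0, 0.3, 50)])

px, py = intersection(fit_line(pa), fit_line(pb))
mm_per_px = 0.05   # hypothetical scale from the gauge-block calibration
print(f"intersection at ({px:.1f}, {py:.1f}) px, scale {mm_per_px} mm/px")
```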
This possibility can be useful in all phases of a welding process: the checking of the line of contact of the two pieces to be welded (seam tracking); the driving of the robotic arm that deposits the welding bead; the validation of the position and of the dimensional parameters of the realized welding (positioning and shape of the weld). Measurements have been carried out on a piece constituted by two perpendicular steel plates, to be welded together by a linear bead, that is, the typical T-joint. The weld seam will have a convex shape and a throat thickness of about 5 mm. In this case, the position of interest is the intersection point between the two lines generated by the laser sheet (Figure 1), with respect to a reference point of the piece (seam tracking). It is to be noticed that the method is also suitable for other configurations, allowing the measurement of any dimensional parameter based on the evaluation of the distance between two different points. The method is potentially not influenced by the welding parameters. It is important to ensure that the characteristics of correct setting of the system hold over time, so that the measurement results are reliable and accurate. Having a correct setting physically means that all variability causes act in a random and reduced way, so that no systematic and no remarkable effects of any specific variability cause appear. Therefore, if the system is properly set, the measurements follow a Gaussian distribution with fixed mean and reduced standard deviation [21]. The methodology described hereinafter is based on this assumption. This condition should be guaranteed both at the installation time of the vision system and during on-line operation. In the previous step [18] the Gaussian distribution of repeated measurements of the quantities of interest was verified by the p-value [19]. The p-value is widely used in statistical hypothesis testing. A model (the null hypothesis) and a threshold value for p, called the significance level of the test and denoted as α, have to be chosen. If the p-value is less than or equal to the significance level α, the test suggests that the observed data are inconsistent with the null hypothesis, so the null hypothesis must be rejected [19]. In this case the null hypothesis is the Gaussian distribution of the measurement results; then, if the p-value, ranging between 0 and 1, is greater than a pre-set threshold, the Gaussian distribution of the measurements is guaranteed, with the confidence level indicated in the calculation of the p-value itself. Of course, before the production activity begins, the time available for checking that the measurement distribution is effectively Gaussian is much longer than the time available during production, due to the need of avoiding bottlenecks in the production rhythms. Therefore, automatically checking these conditions sets different requirements due to the different time durations available for control before the start of production and during production itself; this will influence the number of repeated measurements which are possible. The need of assuring that the probability distributions of possible measurements are unchanged (according to the measurement performances) all along the time requires that the closeness of both distributions is evaluated in a very short time for on-line control. In the former approach [18] measurements were carried out on each production workpiece, to check that the optimal setup conditions were maintained.
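The paper does not name the specific normality test behind the p-value; the sketch below uses the Shapiro-Wilk test from SciPy as one plausible choice, with placeholder threshold and noise values.

```python
import numpy as np
from scipy import stats

P_THRESHOLD = 0.65   # placeholder; set from the first production batch

def setup_is_ok(distances, threshold=P_THRESHOLD):
    """Test the null hypothesis that the n repeated distance measurements
    are Gaussian and accept the setup only if p exceeds the threshold."""
    _, p = stats.shapiro(distances)   # Shapiro-Wilk normality test
    return p >= threshold, p

rng = np.random.default_rng(1)
sample = 60.77 + rng.normal(0.0, 0.05, size=30)   # n = 30 repeated readings
ok, p = setup_is_ok(sample)
print(f"p-value = {p:.3f} -> settings {'accepted' if ok else 'rejected'}")
```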
Each measurement was repeated n times (typically n = 30); this takes about 2 s to 3 s, depending on the acquisition and image processing rate of the system. The p-value was calculated on the basis of these n measurements. Then, the p-value estimate is compared with a threshold value: if the p-value is above the threshold level, the system is correctly set. The threshold level can be defined in relation to a requested accuracy and to specific environmental conditions by means of a statistical analysis (mean and standard deviation of the p-values) based on a sufficient number of p-values (at least 30) calculated on the first batch of production. Since it is necessary to acquire 30 measurements in static conditions and to post-process the p-value, in some cases the production process can be too fast for this method. For these reasons, in this paper the procedure has been improved according to the actions described in the following (a code sketch of the on-line monitoring step is given after this list):

1. "Optical" setup of the vision system and spatial calibration: the optical setup is carried out using a simple calibration pattern. The pattern has to be framed by the camera, and a program for the identification of edges and intersection points is implemented, using the software for the image processing. Focusing and exposure time are manually set, until all elements are recognized. A gauge block of known size is used for the evaluation of the pixel/mm ratio.
2. Preliminary tests: an experimental and parametric study of the effect of changing the settings of the optical system and of the variation of environmental conditions is carried out. Of course, the most relevant aspects are taken into account, like grey level settings and environmental light intensity and typology variation, but other variability sources of the welding process could be considered, like, for example, differences in surface reflection. For each test condition the p-value and its statistics are evaluated, in order to correlate estimated ranges of the p-value with different operating conditions, depending on the settings and environmental scenario. This step is intended to establish a correspondence between the level of the p-value, the illumination conditions and the settings of the vision system. The estimated uncertainty level is also of concern.
3. Training and testing of the machine learning classifier: for each test condition 150 images are acquired to train and test a machine learning classifier. The aim of the classification is to quickly identify the operating illumination condition and then the corresponding optimal settings of the vision system.
4. Validation: repeated measurements for all the considered working conditions are carried out in order to check the reproducibility of the p-value measurements. This step is also used to define the p-value threshold more effectively, taking into account the variability in reproducibility conditions.
5. On-line monitoring of the optical system: one image is recorded and classified by the machine learning classifier, to identify the actual operating conditions (illumination condition and settings of the vision system). In this way, image classification allows measuring the p-value for the system indirectly: in case a predefined threshold is not reached, the system settings should be automatically modified in order to improve the p-value towards the optimal setting. When the p-value is greater than the threshold limit, the measure of the distance of interest can be accepted with a known uncertainty level.
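Step 5 can be summarized by a small monitoring routine. The sketch below is hypothetical illustration code: the lookup table, the threshold and the classifier interface (a scikit-learn-style predict method) are all assumptions, not the system's actual implementation.

```python
# Hypothetical lookup built during the "preliminary tests" step: for each
# illumination condition, the optimal grey thresholds and the mean p-value
# observed with those settings.
CONDITION_TABLE = {
    "IC_1": {"grey_thresholds": (45, 80), "mean_p": 0.93},
    "IC_2": {"grey_thresholds": (75, 40), "mean_p": 0.90},
}
P_THRESHOLD = 0.65   # placeholder acceptance level

def monitor(image, classifier, current_settings):
    """Classify a single image, look up the optimal settings of the
    recognised illumination condition and decide whether the distance
    measurement can be accepted (step 5)."""
    condition = classifier.predict([image])[0]
    entry = CONDITION_TABLE[condition]
    if tuple(current_settings) != entry["grey_thresholds"]:
        # wrong grey thresholds for this lighting: re-set the system
        return condition, entry["grey_thresholds"], False
    return condition, current_settings, entry["mean_p"] >= P_THRESHOLD

class _StubClassifier:                 # stands in for the trained model
    def predict(self, images):
        return ["IC_1" for _ in images]

print(monitor(None, _StubClassifier(), (45, 80)))
```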
A synthesis of the methodology is represented in the flow diagram of Figure 4. In the next section the methodology will be discussed with reference to some practical applications concerning pieces to be welded.

Figure 4. Flow diagram of the methodology.

3. Results

In this section the effects of varying the settings of the vision system with respect to the best ones in different lighting conditions, and the application of the machine learning procedure, are evaluated. Measurements have been carried out, as an example, on a piece constituted by two perpendicular steel plates, to be welded together by a linear bead (Figure 1). The grey threshold has been identified as the most relevant setting parameter to be studied, together with the lighting conditions as for the environmental parameters. The change of setting of the vision system, by varying the grey thresholds (variation of the grey level in the transition from the dark area to the light one, ranging from 0 to 255), modifies the two boundaries of the laser line, which is a broken line because of the object shape (Figure 1). These data should be processed in order to find the distance to be measured. Six different illumination conditions (IC) have been compared. For each lighting, different settings of the grey threshold have been examined:

IC 1: a neon light system at a 2 m distance from the object.
IC 2: the same as IC 1, but the neon light system is partially obscured on the left side at 1.5 m distance from the steel piece.
IC 3: the same as IC 1, but the neon light system is partially obscured at 0.3 m distance.
IC 4: the same as IC 3, but the neon light system is partially obscured at 0.5 m distance.
IC 5: the same neon system as IC 1, but an LED lamp is added at 0.3 m distance.
IC 6: the same as IC 2, but the LED lamp is placed at 0.5 m distance.

The system setting can be optimal or not, depending on the selected grey threshold values. The conditions are optimal when the measurement result corresponds to the reference one and the standard deviation is due to random effects only, so that the calculated p-value is very high (near to 1). For each illumination condition four cases are analysed: three cases have different grey threshold values, while the last one is the reproduction of the best case among the previous ones, based on the p-value. Table 1 shows the test plan, comprising 24 different cases. According to the procedure described in the previous section, a large amount of data has been acquired, in the order of 3000 measurements for each case, and 150 bitmap images. The results of the "preliminary tests" are discussed in the following subsections.

3.1. P-value analysis

The results of the "preliminary tests" step are described in the graphs of Figure 5. For all tests n = 30, n being the number of repeated on-line measurements. In particular, the diagrams of Figure 5 show the behaviour of the mean p-value (averaging 100 p-values) with respect to the set measurement uncertainty (the standard deviation of the normal reference distribution of the vision system measurements) for all the examined cases. The reference distance value is (60.77 ± 0.01) mm. For the cases with the optimal settings, even though the set uncertainty is reduced, the mean p-value remains practically unchanged, maintaining high values; for the other two cases, the found p-value quickly drops when the requested uncertainty of the measurements is reduced. This is true for all the illumination conditions (except for IC 5).
This result is very reasonable, and it also suggests that a threshold value could be set to separate the best setting case from the other ones. The diagrams of Figure 5 show that separated p-value ranges occur between optimal and non-optimal settings if a reduced standard uncertainty is considered (standard uncertainty less than 0.23 mm). This result confirms the ability of this method to distinguish the correct setting from slightly different configurations of the vision system. In all the lighting conditions considered, when the right setting of the grey threshold is fixed, the estimated distance between the reference points is in the range (60.77 ± 0.01) mm. The diagram of Figure 5 shows that the p-value is strongly dependent on the grey setting for all the lighting conditions which have been examined. Due to the found differences of the p-value, even when the variability of the p-value is taken into account, threshold values could easily be set in order to quickly identify a condition of right setting. It is interesting to compare, for each illumination condition, the p-values between cases 1 and 4, corresponding to the same settings: the p-value estimates are very repeatable for all conditions.

Figure 5. Mean p-values and their standard deviation ranges vs. standard uncertainty.

Table 1. Tests plan.

Illumination condition     Case      Grey threshold 1   Grey threshold 2   Optimal
Illumination condition 1   case 1    40                 60
Illumination condition 1   case 2    45                 80                 optimal
Illumination condition 1   case 3    55                 100
Illumination condition 1   case 4    45                 80                 optimal
Illumination condition 2   case 5    45                 80
Illumination condition 2   case 6    75                 40                 optimal
Illumination condition 2   case 7    55                 60
Illumination condition 2   case 8    75                 40                 optimal
Illumination condition 3   case 9    45                 80
Illumination condition 3   case 10   55                 50                 optimal
Illumination condition 3   case 11   65                 65
Illumination condition 3   case 12   55                 50                 optimal
Illumination condition 4   case 13   45                 80
Illumination condition 4   case 14   50                 90                 optimal
Illumination condition 4   case 15   55                 60
Illumination condition 4   case 16   50                 90                 optimal
Illumination condition 5   case 17   45                 80
Illumination condition 5   case 18   50                 75
Illumination condition 5   case 19   55                 60                 optimal
Illumination condition 5   case 20   55                 60                 optimal
Illumination condition 6   case 21   45                 80
Illumination condition 6   case 22   85                 54                 optimal
Illumination condition 6   case 23   65                 65
Illumination condition 6   case 24   85                 54                 optimal

3.2. Machine learning classification

The results of the "training and testing of the machine learning classifier" phase of the methodology described in Section 2 are now discussed. 150 bitmap images, randomly selected from cases 1 to 3 for each illumination condition, have been used for the training and the testing of the machine learning algorithm. The steps for the machine learning system are described in the following:

1. Loading the images into the system: 900 images are loaded into the system (150 for each illumination condition).
2. Extracting features from the images and keeping 80 % of the strongest features from each image set. A feature is a measurable property or a characteristic of the image, for instance a geometrical point of discontinuity in the image or an area at a specific grey level. In total the MATLAB script automatically finds 24491 features for each illumination condition.
3. Using k-means clustering to create a 200-word visual vocabulary from the 24491 features.
4. Encoding all the 900 images with the features and the clustering method described in points 2 and 3.
5. Selecting and training the supervised machine learning classifier: the images used for the training are 50 % of the available ones for each illumination condition. The other ones are used for the testing of the system. A large number of algorithms are compared for the training of the system; the results are described in Table 2 (a simplified code sketch of this training pipeline is given after Table 2). Most of the algorithms detect the features with 100 % accuracy. This result confirms the high quality of the images and their uniformity in each training set. This is a good result for the training phase but a problem for on-line monitoring: in fact, an algorithm trained with a dataset having low variability is generally unable to work with real data, which are more variable than the original ones.

Table 2. Training results.

Classifier                           Accuracy
Decision trees, complex              99.3 %
Decision trees, medium               99.3 %
Decision trees, simple               83.3 %
Discrimination analysis, linear      100 %
Discrimination analysis, quadratic   100 %
SVM, linear                          100 %
SVM, quadratic                       100 %
SVM, cubic                           100 %
kNN, fine                            100 %
kNN, medium                          100 %
kNN, coarse                          100 %
kNN, cosine                          100 %
kNN, cubic                           100 %
kNN, weighted                        100 %
Ensembles, boosted trees             17 %
Ensembles, bagged trees              100 %
Ensembles, subspace discriminant     100 %
Ensembles, subspace kNN              100 %
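The pipeline of steps 1-5 was implemented in MATLAB; the sketch below reproduces its structure (descriptor extraction, a k-means visual vocabulary, histogram encoding and a linear discriminant classifier) in Python with scikit-learn. Raw image patches stand in for the toolbox's feature detector, and the random images and labels are placeholders, so the printed accuracy is not the paper's result.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def extract_descriptors(image, patch=8):
    """Crude stand-in for the toolbox feature extractor: raw patches."""
    h, w = image.shape
    return np.array([image[i:i + patch, j:j + patch].ravel()
                     for i in range(0, h - patch, patch)
                     for j in range(0, w - patch, patch)])

def encode(image, vocabulary):
    """Bag-of-visual-words histogram over the k-means vocabulary."""
    words = vocabulary.predict(extract_descriptors(image))
    return np.bincount(words, minlength=vocabulary.n_clusters)

# Placeholder images and labels: six "illumination conditions"
rng = np.random.default_rng(0)
images = [rng.random((64, 64)) + k * 0.1 for k in range(6) for _ in range(20)]
labels = [f"IC_{k + 1}" for k in range(6) for _ in range(20)]

# 200-word visual vocabulary, as in step 3
all_desc = np.vstack([extract_descriptors(im) for im in images])
vocab = KMeans(n_clusters=200, n_init=3, random_state=0).fit(all_desc)

X = np.array([encode(im, vocab) for im in images])
clf = LinearDiscriminantAnalysis().fit(X[::2], labels[::2])   # 50 % training
print("test accuracy:", clf.score(X[1::2], labels[1::2]))
```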
3.3. Validation of the machine learning classifier

The results of the "validation" phase of the methodology described in Section 2 are now discussed: the images of the fourth case of each illumination condition are used to test the classifier. Five images are taken from each of the following cases: 4, 8, 12, 16, 20, 24. These cases are representative of all the illumination conditions (IC 1 to IC 6). These images are taken in the same conditions as the other cases, after a few hours' delay. Even though a few hours' interval occurs between the acquisitions, the images to be compared are very similar (Figure 6). The classifiers with 100 % accuracy have been used in the validation step. Among these, the best results have been reached by means of the "linear discrimination" classifier, described in the second column of Table 3. All the conditions are correctly identified. Based on the successful classification, the working situation is identified and the standard deviation of the measurements can be estimated, depending on the actual setting of the grey threshold levels, according to the results of Figure 5. In order to check the reproducibility of the measurements, the mechanical and the optical systems have been dismounted and reassembled; also, all the different illumination conditions were recreated as before, with the aim of obtaining the same working conditions as cases 4-8-12-16-20-24. A preliminary p-value evaluation has been made, to check that optimal working conditions arise for all the illumination conditions to be analyzed. A test concerning the classification of the images, according to the illumination condition, has been carried out; the results are shown in the third column of Table 3.

Figure 6. Comparison of images with the same illumination condition (first image of case 1, last image of case 4).

Table 3. Response of the linear discrimination classifier.

Illumination condition   Response of the linear      Response of the linear
of the input image       discrimination classifier   discrimination classifier with new data
IC_4                     IC_4                        IC_3
IC_2                     IC_2                        IC_2
IC_3                     IC_3                        IC_2
IC_2                     IC_2                        IC_2
IC_1                     IC_1                        IC_1
IC_6                     IC_6                        IC_6
IC_5                     IC_5                        IC_5
IC_4                     IC_4                        IC_3
IC_6                     IC_6                        IC_6
IC_5                     IC_5                        IC_5
IC_5                     IC_5                        IC_5
IC_4                     IC_4                        IC_3
IC_6                     IC_6                        IC_6
IC_3                     IC_3                        IC_2
IC_1                     IC_1                        IC_1
IC_3                     IC_3                        IC_2
IC_2                     IC_2                        IC_2
IC_1                     IC_1                        IC_1
IC_4                     IC_4                        IC_3
IC_1                     IC_1                        IC_1
IC_3                     IC_3                        IC_2
IC_2                     IC_2                        IC_2
IC_6                     IC_6                        IC_6
IC_1                     IC_1                        IC_1
IC_6                     IC_6                        IC_6
IC_4                     IC_4                        IC_3
IC_3                     IC_3                        IC_2
IC_5                     IC_5                        IC_5
IC_2                     IC_2                        IC_2
IC_5                     IC_5                        IC_5

The classification capability of the classifier is satisfactory: only a few conditions are confused, being very similar with respect to the illumination condition (Figure 7). In case of confusion, the uncertainty corresponding to the poorer illumination condition can be considered.

Figure 7. Example images from IC_2, IC_3 and IC_4.

4. Conclusion

A method has been discussed for the on-line validation of position measurements of a monitoring system for welding. The device is based on a vision system and a laser sheet. The p-value parameter has been used to automatically identify the best settings for the vision system, taking into account the main causes of uncertainty. The method is able to detect the effects of a small variation of the settings, so strongly supporting the setup of the vision system. A procedure has been proposed with the purpose of applying it on-line; it has been validated for different conditions of lighting. The results proved that the procedure is able to check the holding of optimal measuring conditions by using a machine learning approach for the vision system: based on such a methodology, single images can be used to check the settings, for on-line use. Once the working situation is identified, the standard deviation of the measurement can be estimated, depending on the actual setting of the grey threshold levels. Achievable uncertainty values are in the order of some tenths of a millimeter. As an important improvement over the previous version of the method, the presented solution made the measurement procedure unaffected by the process variability, with a remarkable reduction of measurement uncertainty.

Acknowledgement

The courtesy of Vision Device Srl, Torrevecchia Teatina, Italy, for making available the vision system and the processing software, is gratefully acknowledged.

References

[1] J. Shao, Y. Yan, "Review of techniques for on-line monitoring and inspection of laser welding", Journal of Physics: Conference Series 15 (2005) pp. 101-107.
[2] M. Ferrara, A. Ancona, P. M. Lugara, M. Sibilano, "Online quality monitoring of welding processes by means of plasma optical spectroscopy", P Soc Photo-Opt Ins (2000) pp. 750-758.
[3] D. Acosta, O. Garcia, J. Aponte, "Laser triangulation for shape acquisition in a 3D scanner plus scan electronics", Proc. Robotics and Automotive Mechanics Conference, 26-29 Sept. 2006, Cuernavaca, Morelos, Mexico, pp. 14-19.
[4] F. Kong, J. Ma, B. Carlson, R. Kovacevic, "Real-time monitoring of laser welding of galvanized high strength steel in lap joint configuration", Opt. Laser Technol. 44 (2012) pp. 2186-2196.
[5] G. Wang, T. Warren Liao, "Automatic identification of different types of welding defects in radiographic images", NDT&E International 35 (2002) pp. 519-528.
[6] K. Aoki, Y. Suga, "Application of artificial neural network to discrimination of defect type in automatic radiographic testing of welds", ISIJ International 39 (1999) pp. 1081-1087.
[7] O. Postolache, A. Lopes Ribeiro, H. Ramos, "Weld testing using eddy current probes and image processing", Proc. of XIX IMEKO World Congress Fundamental and Applied Metrology, September 6-11, 2009, Lisbon, Portugal, pp. 438-442.
[8] J. Gunther, P. M. Pilarski, G. Helfrich, H. Shen, K. Diepold, "Intelligent laser welding through representation, prediction and control learning: an architecture with deep neural networks and reinforcement learning", Mechatronics 34 (2016) pp. 1-11.
[9] H. Y. Tseng, "Welding parameters optimization for economic design using neural approximation and genetic algorithm", Int. J. Adv. Manuf. Technol. 27 (2006) pp. 897-901.
[10] A. Sumesh, K. Rameshkumar, K. Mohandas, R. Shyam Babu, "Use of machine learning algorithms for weld quality monitoring using acoustic signature", Procedia Computer Science 50 (2015) pp. 316-322.
[11] S. Ravikumar, K. L. Ramachandran, V. Sugumaran, "Machine learning approach for automated visual inspection of machine components", Expert Systems with Applications 38 (2011) pp. 3260-3266.
[12] A. Krizhevsky, I. Sutskever, G. E. Hinton, "ImageNet classification with deep convolutional neural networks", Advances in Neural Information Processing Systems 2 (2012) pp. 1097-1105.
[13] T. Qing-Bin, J. Chao-Qun, H. Hui, L. Gui-Bin, D. Zhen-Liang, Y. Feng, "An automatic measuring method and system using laser triangulation scanning for the parameters of a screw thread", Meas. Sci. Technol. 25 (2014) pp. 035202-035211.
[14] P. Bellandi, F. Docchio, G. Sansoni, "Roboscan: a combined 2D and 3D vision system for improved speed and flexibility in pick-and-place operation", Int J Adv Manuf Tech 69 (2013) pp. 1873-1886.
[15] I. Bešić, N. Van Gestel, J. P. Kruth, P. Bleys, J. Hodolič, "Accuracy improvement of laser line scanning for feature measurements on CMM", Opt Laser Eng 49 (2011) pp. 1274-1280.
[16] M. Mahmud, D. Joannic, M. Roy, A. Isheil, J. F. Fontaine, "3D part inspection path planning of a laser scanner with control on the uncertainty", Computer-Aided Des. 43 (2011) pp. 345-355.
[17] F. Xi, Y. Liu, H.-Y. Feng, "Error compensation for three-dimensional line laser scanning data", Int J Adv Manuf Tech 18 (2001) pp. 211-216.
[18] G. D'Emilia, D. Di Gasbarro, E. Natale, "On line control of optimal set-up of a laser sheet system for real time monitoring of welding", Proc. of 14th IMEKO TC10 Workshop on Technical Diagnostics, June 27-28, 2016, Milan, Italy, pp. 158-163.
[19] D. C. Montgomery, "Statistical quality control", McGraw-Hill, 2006, ISBN 978-0470169926, pp. 116-117.
[20] G. D'Emilia, D. Di Gasbarro, E. Natale, "A simple and accurate solution based on a laser sheet system for position and size on line monitoring of weldings", J Phys Conf Ser. 658 (2015).
[21] S. A. Alfaro, G. C. Carvalho, F. R. da Cunha, "A statistical approach for monitoring stochastic welding processes", J. Mater. Process. Technol. 175 (2006) pp. 4-14.

Acta IMEKO
ISSN: 2221-870X
December 2015, Volume 4, Number 4, 75-81

Dynamic calibration uncertainty of three-axis low frequency accelerometers

Giulio D'Emilia, Antonella Gaspari, Emanuela Natale
University of L'Aquila, Dipartimento di Ingegneria Industriale e dell'Informazione e di Economia, Via G. Gronchi 18, 67100 L'Aquila, Italy
gronchi 18, 67100 l'aquila, italy

section: research paper
keywords: three-axis accelerometer; low frequency vibration; calibration; uncertainty; cross sensitivity
citation: giulio d'emilia, antonella gaspari, emanuela natale, dynamic calibration uncertainty of three-axis low frequency accelerometers, acta imeko, vol. 4, no. 4, article 14, december 2015, identifier: imeko-acta-04 (2015)-04-14
editor: paolo carbone, university of perugia, italy
received december 23, 2014; in final form june 12, 2015; published december 2015
copyright: © 2015 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: giulio d'emilia, giulio.demilia@univaq.it

abstract
in this paper a methodology concerning the static and dynamic calibration of three-axis low-cost accelerometers in the (0 to 10) hz frequency range is described, to be used for the evaluation of existing civil infrastructures. the main and cross sensitivities of the accelerometers have been experimentally estimated by means of the sensitivity matrix concept. the standard deviation of the accelerations obtained along all three axes using different calibration data sets in repeatability conditions has been calculated and is intended as the dynamic calibration uncertainty. the method has been validated by using accurately realized reference accelerations, in order to evaluate the residual bias error. static and dynamic calibration test benches have been used to realize the reference accelerations. in order to create a three-axis acceleration field, a mechanical arm is used in static calibration; a rotary device is used to test the accelerometers in dynamic conditions. according to the procedure described in this paper, a great improvement of the metrological characterization of low-cost accelerometers can be achieved, especially in dynamic working conditions.

1. introduction

social resilience to disasters is now considered a topic of the highest political and technical concern in advanced nations. social safety and building heritage, along with sustainable urban development, are important issues in setting guidelines aimed at improving this resilience. for this purpose, evaluating the conditions and performance of existing civil infrastructures is a crucial aspect that allows the decision-makers to configure the best lines of activity in order to improve the quality of life [1]. to have integrated procedures, involving geotechnical and structural aspects, for building diagnostics, a distributed sensor network is needed, allowing operation in a selective manner and in the most critical situations. this implies the need for a greater number of measuring sensors, for both three-axis vibration and inclination measurements. this innovation, together with the need to cover wide areas through this multi-sensor network in order to include different buildings, sets the requirement of lower-cost solutions that are nevertheless able to ensure the requested uncertainty of measurements. most of the above mentioned measurement and cost requirements could be satisfied by the well-known micro-electro-mechanical systems (mems) technology, and some examples of its use for these applications can be found in the literature [2], even though careful attention should be paid to many possible causes of errors [3], especially when low-cost and low-precision mems accelerometers are considered. due to these facts, many studies can be found in the literature referring to the calibration of these sensors, using mechanical reference quantities [3], [4], or using fully electrical methods to estimate the sensitivity of capacitive mems accelerometers in batch fabrication [5].
even if different aspects have been studied, like bias correction and main and cross sensitivity evaluation, an exhaustive description of the causes of calibration uncertainty cannot be found, in particular with reference to the dynamic behaviour in the low frequency range (0 to 10) hz. market and literature solutions [2], [6], [7] and the characteristics of the calibration test benches designed for this specific application [8], [9], [10], [11], [12], [13], [14] have been considered, but the provided solutions do not appear completely satisfactory if the right trade-off is considered between the uncertainty of calibration on one hand, and the cost of the test bench and the duration of the calibration procedure on the other. in fact, only in a few cases does the analysis refer to three-axis accelerometers, and the test rigs are generally very expensive; they are based on three-dimensional shakers and only consider the high frequency range [13], [14]. when the low frequency range is examined, different devices have been used [2], [6], [7], [8], [9], [10], [11], [12]; in all these cases the cross sensitivities are not determined, and this appears detrimental with respect to the achievement of a satisfactory sensor accuracy. furthermore, bearing in mind these considerations, the first aspect to be considered is the limit that can be achieved with reference to the uncertainty of sensors to be used for acceleration and inclination, taking into account cost requirements as well. this goal involves the sensor behaviour, its power supply and conditioning, the data acquisition technique, and the initial and periodical calibration, especially in the low frequency range of vibrations (0 to 10) hz; calibration requirements appear to be mandatory. this paper aims to investigate the main technical and procedural aspects to be considered for the definition of a calibration procedure modulated as a function of the application, as well as of the cost range of the sensors. particular attention is paid to the design of the experiments, to the way in which a mechanical reference acceleration is created and evaluated, and to how data processing techniques influence the variability of the results, retrieving useful information without having to perform superfluous tests, according to the quality level of the sensors being selected. it is to be pointed out that throughout the paper the acceleration will be expressed in terms of g, the acceleration due to gravity; the authors are aware that this is formally incorrect, but they prefer this solution for a better understanding of the results in the field of civil and seismic engineering. in section 2 the methodology will be discussed, concerning the experimental solutions able to realize the requested reference accelerations and the procedures for data processing. in section 3 the results of the tests carried out according to the methodology of section 2 are presented; the remarkable reduction of systematic errors will be shown, together with the compensation of cross sensitivity effects for measurement uncertainty reduction.
the comparison of the contributions of the different parameters will also be used to identify the specific actions to be taken for improving the calibration procedure.

2. methodology

scientific evaluations, experts' advice and market analysis led us to define the following ranges of interest for the tests:
- frequency range: (0 to 40) hz;
- maximum amplitude of vibration: about 20 m/s².
in this paper the calibration in the frequency range (0 to 4) hz will be studied, with the perspective of extending this range to 10 hz. for the frequency range (10 to 40) hz other solutions are available, in particular three-axis electro-dynamic shakers. the three-axis sensor will be calibrated both statically and dynamically. in fact, in field applications we are interested in using it both as an inclinometer, involving the steady behaviour, and as a three-axis accelerometer, involving the dynamic behaviour. for the general calibration of the sensor the following relationship between the input acceleration and the output signals can be set:

v = s · a + q , (1)

where the matrix s = [sij] is the sensitivity matrix, a are the reference acceleration components, v are the voltage outputs of the sensor, and q is the offset vector. a total of 12 parameters have to be evaluated [3], [13], [14], [15]. depending on the conditions (static or dynamic), the static sensitivity matrix ss and the dynamic sensitivity matrix sd, along with their corresponding offset vectors, qs and qd respectively, should be evaluated. for the static calibration a mechanical arm is used in order to adjust the sensor to different positions, through the gravity acceleration vector components along the three axes. the minimum number of measurement orientations is 4, although more measurements may give a more robust calibration, using a linear least squares optimization. in this article, the effect of the choice of the number of angular positions on the calibration uncertainty is investigated, in order to provide useful information about the calibration procedure that is most suitable for the characteristics of the selected accelerometers. the behaviour of the main and transverse sensitivities in the frequency range (0 to 10) hz is also a point of interest. for the dynamic calibration purpose, a test bench based on a rotary device is used. it is driven by a brushless servomotor, controlled by a programmable logic controller (plc) by means of a high accuracy angular encoder, which allows different motion laws to be realized (sinusoidal, saw-tooth, ramp, etc.). the test bench is depicted in figure 1. the accelerometer is placed so that the x-y or x-z planes do not correspond to the horizontal one, as figure 2 shows: in this way all the measuring axes are subjected to a variable acceleration at a frequency depending on the repetition rate of the oscillations. in particular:
- the x and y axes measure components (depending on the angle shown in figure 2) of the gravity acceleration g and of the tangential acceleration at;
- the z axis measures the centripetal acceleration ac.
the tangential acceleration at and the centripetal acceleration ac are defined by the following equations, where ω is the angular velocity and r is the distance of the sensitive element of the accelerometer from the axis of rotation:

at = (dω/dt) · r , (2)
ac = ω² · r . (3)

in order to build the dynamic sensitivity matrix, a range of different values is assumed for the orientation angle described in figure 2.
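as an aside, the 12-parameter model of eq. (1) can be estimated with the linear least squares optimization mentioned above. the following sketch uses synthetic values of s and q (all numbers are placeholders, not the paper's data) to show the structure of the fit:

```python
# Hedged sketch of the calibration model of eq. (1): v = s a + q.
# Estimates the 12 parameters (9 sensitivities + 3 offsets) by linear least
# squares from known reference accelerations; all values are synthetic.
import numpy as np

rng = np.random.default_rng(1)
S_true = np.diag([-9.9, -9.9, -9.9]) + 0.05 * rng.normal(size=(3, 3))
q_true = np.array([24.3, 24.0, 26.7])

# n >= 4 orientations of the gravity vector (here n = 10, as in section 3.1)
a_ref = rng.normal(size=(10, 3))
a_ref *= 9.807 / np.linalg.norm(a_ref, axis=1, keepdims=True)
v_meas = a_ref @ S_true.T + q_true + 1e-3 * rng.normal(size=(10, 3))

# design matrix [a | 1]: each sensor axis is a separate linear regression
A = np.hstack([a_ref, np.ones((10, 1))])
coef, *_ = np.linalg.lstsq(A, v_meas, rcond=None)
S_est, q_est = coef[:3].T, coef[3]
print(np.round(S_est, 3), np.round(q_est, 2))
```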
a high accuracy three-axis accelerometer could also be employed for further validation purposes, according to literature indications [12]. as to the uncertainty of the radius r (figure 3), tests are carried out at different radial positions of the sensors, with the purpose of reducing the whole uncertainty on the reference values of the accelerations at and ac. in particular, the sensor has been positioned at increasing distances from the axis of rotation using blocks of known thickness, starting from an initial distance r0. the relationship between the output of the sensor, y, and the increase Δr of the distance from the rotation axis with respect to the initial distance r0 has been linearly fitted. the equation of the straight line has been obtained by a least squares linear fitting:

y = m · Δr + b and r = b/m . (4)

the graph in figure 4 shows the experimental relationship between the output of the sensor and the increase Δr of the distance from the rotation axis. the distance r of the measurement point of the accelerometer has been obtained according to (4), being 112.7 mm. the standard uncertainty of r, sr, can be evaluated according to the following equation [16]:

sr = [ (sb/m)² + (b · sm/m²)² ]^(1/2) , (5)

where b and m are the intercept and the slope of equation (4), obtained by a least squares method interpolation. the uncertainty contributions related to b and m, sb and sm respectively, can be calculated according to the following equations [17]:

sb² = sv² ∑Δri² / [ n ∑Δri² - (∑Δri)² ] (6)
sm² = n sv² / [ n ∑Δri² - (∑Δri)² ] (7)

where:

sv² = 1/(n-2) · ∑ (m Δri + b - vi)² . (8)

Δri are the input values of the increase Δr, and vi are the corresponding output voltage values of the accelerometer. from the experimental data it results: sv = 3.2 · 10⁻⁴ v, sb = 2.6 · 10⁻⁴ v and sm = 1.2 · 10⁻⁵ v/mm. by substituting the relevant values obtained from the experimental test into (5), the relative uncertainty of r results in the order of 0.5 %, which is satisfactory.

figure 1. the rotary table, driven by the brushless motor. the rotation angle is measured by a high accuracy angular encoder.
figure 2. scheme of the tangential acceleration components corresponding to a rotation angle of the plane x-z with respect to the horizontal one.
figure 3. distance r of the measuring point from the centre of rotation.
figure 4. method for the determination of the distance r.

the effect of the actual shape of the motion has to be analysed too, in relation to its theoretical setting, as well as the rotary device set-up. in particular, the behaviour of the calibration bench is examined in the case of both square wave and sinusoidal motion laws for the periodical oscillations. the whole uncertainty of ac and of at will be evaluated, taking into account their repeatability standard deviation, in correspondence of each angular position, and the standard uncertainty of r. the absence of systematic effects on the angular position will be verified. tests have been planned in order to consider the following aspects with reference to the uncertainty of ac and of at (or of a component of them, depending on the position):
- motion laws;
- rotary device setting;
- choice of the reference;
- number of points for calibration.
finally, it is to be pointed out that the requested calibration procedure and the accuracy to be achieved should fit the sensor characteristics, so that the performances of the calibration rig and the cost of calibration can be adequately reduced.
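as a worked illustration of eqs. (4)-(8), the sketch below fits synthetic sensor outputs against the radial displacement and propagates the least-squares statistics to the uncertainty of r. the slope, intercept and noise level are assumed values, chosen only to be of the same order as those reported above:

```python
# Hedged sketch of the radius determination of eqs. (4)-(8): the sensor output
# is fitted against the known radial displacement dr, and r = b/m with its
# standard uncertainty from the least-squares statistics. Data are synthetic.
import numpy as np

dr = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])      # mm, block thicknesses
v = 0.0024 * (112.7 + dr) + 3e-4 * np.random.default_rng(2).normal(size=dr.size)

n = dr.size
m, b = np.polyfit(dr, v, 1)                             # v = m*dr + b, eq. (4)
s_v2 = np.sum((m * dr + b - v) ** 2) / (n - 2)          # eq. (8)
d = n * np.sum(dr**2) - np.sum(dr) ** 2
s_m2 = n * s_v2 / d                                     # eq. (7)
s_b2 = s_v2 * np.sum(dr**2) / d                         # eq. (6)

r = b / m                                               # eq. (4)
# eq. (5): propagation of s_b and s_m (covariance term neglected)
s_r = np.sqrt(s_b2 / m**2 + (b / m**2) ** 2 * s_m2)
print(f"r = {r:.1f} mm, relative uncertainty = {100 * s_r / r:.2f} %")
```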
3. results

3.1. static calibration

as a first step, a static calibration of a three-axis accelerometer has been carried out, according to the procedure described in [14], based on (1). in order to obtain a more robust calibration with respect to the case when the minimum necessary number of positions (equal to 4) is used, a total of 10 different angular positions have been realized. the results are described by the following static sensitivity matrix:

ss = [ -9.863    -0.006070  -0.04011
       -0.1964   -9.936     -0.03548
        0.7081    0.03142   -9.863  ]  v/(m/s²) (9)

and the offset vector:

qs = [ 24.32  24.03  26.68 ]  m/s² . (10)

for the geometrical angles a maximum deviation of 1° has been assumed. a maximum relative deviation from the mean values of 2 % has been evaluated for the coefficients of the main sensitivities, taking into account the angle deviation. measurements have anyway been carried out after calibration in the range ±9.807 m/s² (±π/2 as for the angle) for each axis, using the local gravity as the source: a maximum deviation of 0.098 m/s² has been evaluated. this result suggests that the sensitivity matrix should be evaluated as a whole, due to the mutual compensation of the variability of the different matrix coefficients. in fact, each acceleration component is corrected by using a combination of three matrix coefficients; therefore the acceleration variability should be reduced with respect to that of the single coefficients, if random effects are considered.

3.2. dynamic calibration

3.2.1 identification of the motion law

in the case of dynamic calibration of accelerometers, as a preliminary matter, the effect of the motion time law of the oscillations on the accuracy of the results has been investigated: in particular, saw-tooth and sinusoidal velocity profiles of the oscillations were considered. at a preliminary evaluation a saw-tooth velocity profile, which means a square wave profile for at, seemed interesting: it has a plateau in at, it is very simple to realize on the test bench from the point of view of plc programming, and it makes it possible to investigate many frequencies in the same test (the odd harmonic components). this motion law has been discarded, since experimental tests indicated that it is difficult to realize not only practically but also theoretically, requiring specific settings which are meaningless from a physical point of view. furthermore, oscillations arise in the test bench when at is changed, which is a problem also for the sensors to be calibrated. all the results hereinafter refer to tests driven with a sinusoidal motion law. tests have been carried out at oscillation frequencies of 0.5, 1.0 and 1.6 hz; in particular, the indications referring to the 1.6 hz oscillation, whose vibration peaks are the highest, will be stressed.

3.2.2 identification of the reference

in a calibration procedure the definition of the reference is a fundamental issue. the possible alternatives that have been considered are:
- using a reference accelerometer. it is to be noticed that using a high accuracy accelerometer as the reference for accelerometer calibration introduces problems deriving from the different radial positioning of the reference accelerometer with respect to the other one; this aspect is expected to increase the calibration uncertainty of the accelerometers;
- using the output signal from the encoder. in particular, the tangential acceleration at and the centripetal one ac can be obtained according to (2) and (3), where ω is the angular velocity, by differentiating the signal of the angular encoder.
the reference for the z-axis coincides with the ac value, while the references for the x- and y-axes coincide with the sum of the appropriate components of the gravity acceleration g and of the tangential acceleration at, depending on the sensor orientation (figure 2);
- using the motion law that drives the motion. this means that the angular velocity and the acceleration at the different instants, according to (2) and (3), are obtained not from the real signal of the encoder, but from the theoretical angular velocity set by the plc.
the graphs of figure 5 show a comparison between the real and theoretical time behaviour of the acceleration components, with reference to the axes of the accelerometer. it is to be pointed out that the components of at act along the x- and y-axes, while ac acts along the z-axis. it can be noticed from the graph of figure 5 that the systematic error of the reference signal is on the whole negligible; in fact, the real reference signal oscillates around the theoretical one, without any offset. the root mean square (rms) of the difference between the real values axr(t), ayr(t), azr(t) and the theoretical values axt(t), ayt(t), azt(t) during the oscillation period has been evaluated, being 0.28 m/s², 0.50 m/s² and 0.12 m/s² respectively. obviously, most of the oscillations found in axr(t) and ayr(t) depend on the numerical differentiation, even though higher order harmonics can be identified with respect to the theoretical behaviour, due to effective oscillations at the points where the motion is inverted. anyway, if the rms of the differences between the output of the sensor corrected by the "theoretical" matrix (the matrix sd obtained by using the theoretical reference) and the real reference is evaluated, the following values are obtained: rmsx = 0.36 m/s²; rmsy = 1.3 m/s²; rmsz = 0.35 m/s². however, the rms values of the differences between the output of the sensor corrected by the "real" matrix (the matrix sd obtained by using the real reference) and the real reference are: rmsx = 0.35 m/s²; rmsy = 1.2 m/s²; rmsz = 0.32 m/s². a number n = 20 of points during the cycle was used to evaluate the matrix sd; the best choice for n will be discussed hereinafter. a slight improvement using the real reference for the calculation of the matrix is thus observed, despite the noise produced by the numerical differentiation, since only the real reference can see all the actual vibrations of the calibration bench. for these reasons, the real reference, obtained from the real encoder signal, has been chosen.

3.2.3 number of points for calibration and preliminary considerations

in table 1 the root mean square of the differences between the reference acceleration and the corrected sensor data during the oscillation period is evaluated, as a function of the number of points n. in table 1 the reference data are those obtained by twice differentiating the angular encoder data. the effect of building the sensitivity matrix with an increasing number of points n is studied: provided that a sufficient number of points is considered, no effect of increasing it has been acknowledged. it has been set n = 20. the tests carried out so far also suggested some actions to improve the performances of the calibration procedure by reducing the calibration uncertainty, in particular of at and its components; this objective will be pursued by making the real reference acceleration profile more accurate and repeatable.
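a minimal sketch of the encoder-based reference of section 3.2.2 follows: the encoder angle is differentiated numerically and eqs. (2)-(3) give the reference accelerations. the sampling rate, oscillation amplitude and radius below are assumptions for illustration; as observed above, numerical differentiation amplifies the noise of the encoder signal.

```python
# Hedged sketch of the encoder-based reference of section 3.2.2: the encoder
# angle is differentiated numerically to obtain omega and d(omega)/dt, and the
# reference accelerations follow from eqs. (2)-(3). Sinusoidal law at 1.6 Hz.
import numpy as np

fs, f, r = 1000.0, 1.6, 0.1127            # sampling rate [Hz], osc. freq., r [m]
t = np.arange(0, 2.0, 1.0 / fs)
theta = 0.5 * np.sin(2 * np.pi * f * t)   # encoder angle [rad], amplitude assumed

omega = np.gradient(theta, t)             # angular velocity
omega_dot = np.gradient(omega, t)         # angular acceleration
a_t = omega_dot * r                       # tangential acceleration, eq. (2)
a_c = omega**2 * r                        # centripetal acceleration, eq. (3)
print(f"peak a_t = {a_t.max():.2f} m/s^2, peak a_c = {a_c.max():.3f} m/s^2")
```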
the actions mentioned above require, on the one hand, a mechanical improvement of the rotary table, in order to increase the transferability of the angular encoder information to the position of the accelerometer to be calibrated; on the other hand, they ask for a higher sampling rate of the angular encoder data. it is to be pointed out that the plc in use allows a task time of less than 1 ms, corresponding to a sampling rate of about 1 khz. rather than increasing the number of points on which the sensitivity matrix is evaluated (provided that a sufficient number of points is considered), it seems more convenient to mix data measured while changing the sensor orientation, also in the dynamic calibration.

3.2.4 results of the dynamic calibration and uncertainty evaluation

considering the observations made in the previous paragraphs, 20 different positions of the sensor have been taken into account during every cycle; for each position the reference accelerations along the x-, y- and z-axes have been calculated, depending on the gravity and motion acceleration components, and compared with the corresponding outputs of the sensors in all directions. then, the matrix sd and the offset vector qd in the dynamic case have been calculated on the basis of the reference values, being:

sd = [ 9.948   0.2576  0.3657
       0.5342  10.97   0.6611
       0.4454  1.614   10.83  ]  v/(m/s²) (11)

qd = [ 24.02  20.10  26.68 ]  m/s² . (12)

the improvement obtained by using the dynamic sensitivity matrix with respect to the static one is clearly shown in the diagrams of figures 6a, 6b and 6c, where the real encoder data (twice differentiated and used as the reference), the sensor data corrected by the static sensitivity matrix and the data corrected by the dynamic sensitivity matrix are compared. a quantitative evaluation of the rms of the differences between the corrected output of the accelerometer and the reference confirms this observation (tables 2 and 3).

figure 5. comparison between the real and theoretical time behaviour of the acceleration components, with reference to the axes of the accelerometer.

table 1. rms of the differences between the output corrected by the dynamic sensitivity matrix and the reference, as a function of n.
  n    rmsx [m/s²]   rmsy [m/s²]   rmsz [m/s²]
  14   0.762         1.12          0.327
  21   0.395         1.10          0.293
  28   0.381         1.07          0.288
  35   0.379         1.03          0.287
  42   0.392         1.08          0.292

table 2. rms of the differences between the output corrected by the dynamic sensitivity matrix and the reference.
                    rmsx    rmsy    rmsz
  absolute [m/s²]   0.35    1.2     0.32
  relative          0.15    0.14    0.09

table 3. rms of the differences between the output corrected by the static sensitivity matrix and the reference.
                    rmsx    rmsy    rmsz
  absolute [m/s²]   0.43    1.3     1.1
  relative          0.19    0.14    0.30

in order to evaluate the variability of the sensor output corrected by the results of the dynamic calibration, 10 different sets of 20 points have been considered, corresponding to different oscillation periods; 10 different matrices sd and offsets qd have been correspondingly calculated. using them to correct the sensor output during a cycle, the variability is found to be 0.15 m/s², 0.29 m/s² and 0.071 m/s² for the x-axis, y-axis and z-axis respectively. the same values, expressed as a percentage of the signal amplitude, are 6.3 %, 3.4 % and 2.0 % respectively.
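to make the correction step concrete, the sketch below inverts the model of eq. (1) using the dynamic matrix and offset of eqs. (11)-(12) and evaluates the rms of the residuals, in the spirit of tables 2 and 3. the cycle data are synthetic, and the offset is handled exactly as eq. (1) is written, which is an assumption:

```python
# Hedged sketch of how the corrected output and the rms figures of tables 2-3
# can be computed: the sensor voltages are mapped back to accelerations by
# inverting the model of eq. (1) with the dynamic matrix of eqs. (11)-(12).
import numpy as np

S_d = np.array([[9.948, 0.2576, 0.3657],
                [0.5342, 10.97, 0.6611],
                [0.4454, 1.614, 10.83]])     # V/(m/s^2), eq. (11)
q_d = np.array([24.02, 20.10, 26.68])        # eq. (12)

def correct(v):
    """invert v = S a + q for the acceleration vector a (one sample per row)."""
    return np.linalg.solve(S_d, (v - q_d).T).T

# synthetic cycle: reference accelerations and the voltages they would produce
a_ref = np.random.default_rng(3).normal(scale=2.0, size=(20, 3))
v = a_ref @ S_d.T + q_d
rms = np.sqrt(np.mean((correct(v) - a_ref) ** 2, axis=0))
print("rms per axis [m/s^2]:", np.round(rms, 3))
```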
the variability values above can be assumed as the standard percentage calibration uncertainties for the x-axis, y-axis and z-axis accelerations. the contribution of the uncertainty of the radius r appears negligible in any case, being in the order of 0.5 %. if the variability of the reference is considered too, using 10 cycles of the reference signal (figure 7), an average standard deviation results equal to 0.28 m/s², 1.1 m/s² and 0.16 m/s² for the x-axis, y-axis and z-axis respectively. these values, expressed as a percentage of the signal amplitude, become 12 %, 12 % and 4.5 % respectively. using the sensitivity matrix allows a corrected output to be obtained with a reduced variability with respect to the cycle variations of the reference. this result suggests that improving the cycle repeatability of the reference could further improve the correction and reduce the uncertainty of the calibrated sensor.

figure 6. comparison among the real encoder data, twice differentiated, used as the reference, the sensor data corrected by the static sensitivity matrix and the data corrected by the dynamic sensitivity matrix. a) x-axis acceleration; b) y-axis acceleration; c) z-axis acceleration.
figure 7. variability of 10 cycles of the reference acceleration. a) x-axis acceleration; b) y-axis acceleration; c) z-axis acceleration.

4. conclusions

in this paper some aspects concerning the calibration uncertainty of three-axis low-cost accelerometers for the diagnostics of civil buildings are considered. both static conditions and the dynamic behaviour in the vibration range (0 to 4) hz have been examined. a frequency varying three-axis field of acceleration has been realized by modulating both the gravity acceleration and the tangential/radial acceleration components of a rotary motion created on a test bench driven by a brushless servomotor. reference values for the ac and at accelerations (or a component of them, depending on the position) have been deduced from the indications of a high precision rotary encoder used for the axis control, in order to separate the effect on the whole calibration uncertainty of the test bench characteristics and settings. the effects taken into account refer to:
- motion laws;
- rotary device setting;
- choice of the reference;
- number of points for calibration.
experimental results, for both static and dynamic applications, show that both the main and the cross sensitivities of a three-axis capacitive accelerometer of good metrological quality can be obtained; this allows the effect of the cross sensitivities to be corrected also in dynamic applications, in order to remarkably improve the transducer accuracy for in-field applications. using the sensitivity matrix allows a corrected output to be obtained with a reduced variability with respect to the cycle variations of the reference. this result suggests that improving the cycle repeatability of the reference could further improve the correction and reduce the uncertainty of the calibrated sensor. therefore, the main actions to be realized should aim to improve the mechanical behaviour of the test bench and the acquisition of the angular encoder data. improving these aspects is expected to further reduce the calibration uncertainty, and also to extend the calibration range up to 10 hz.

references

[1] m. wyss, p. rosset, "mapping seismic risk: the current crisis", nat. haz. 68 (2013) pp. 49-52.
[2] m. zhao, x. xiong, "a new mems accelerometer applied in civil engineering and its calibration test", proc. of the ninth international conference on electronic measurement & instruments (icemi), 2009, pp. 122-125.
[3] w. ren, t. zhang, h.-y. zhang, l.-g. wang, y.-j. zhou, m.-k. luan, h.-f. liu, j.-w. shi, "a research on calibration of low-precision mems inertial sensors", proc. of the 25th chinese control and decision conference (ccdc), 2013, pp. 3243-3247.
[4] p. n. zanjani, a. a. abraham, "a method for calibrating micro electro mechanical systems accelerometer for use as a tilt and seismograph sensor", proc. of the 12th international conference on computer modelling and simulation (uksim), 2010, pp. 637-641.
[5] n. dumas, f. azais, f. mailly, p. nouet, "study of the electrical set-up for capacitive mems accelerometer test and calibration", journal of electronic testing 26 (2010) pp. 111-125.
[6] d. j. milligan, b. d. homeijer, r. g. walmsley, "an ultra-low noise mems accelerometer for seismic imaging", proc. of ieee sensors, 2011, pp. 1281-1284.
[7] f. vallet, j. marcou, "a low-frequency optical accelerometer", j. opt. 29 (1998) pp. 152-155.
[8] m. i. schiefer, r. bono, "improved low frequency accelerometer calibration", proc. of the xix imeko world congress fundamental and applied metrology, 2009.
[9] k. choi, s. jang, y. kim, "calibration of inertial measurement units using pendulum motion", int. j. of aeronautical & space sci. 11(3) (2010) pp. 234-239.
[10] c. zhu, x. qin, z. wu, l. zheng, "research on calibration for measuring vibration of low frequency", proc. of the international conference on mechanic automation and control engineering, 2010, pp. 3315-3318.
[11] f. mohd-yasin, d. j. nagel, d. s. ong, c. e. korman, h. t. chuah, "low frequency noise measurement and analysis of capacitive micro-accelerometers", microelectronic engineering 84 (2007) pp. 1788-1791.
[12] w. ohm, l. wu, p. hanes, g. s. k. wong, "generation of low-frequency vibration using a cantilever beam for calibration of accelerometers", journal of sound and vibration 289 (2006) pp. 192-209.
[13] a. umeda, m. onoe, k. sakata, t. fukushia, k. kanari, h. iioka, t. kobayashi, "calibration of three-axis accelerometers using a three-dimensional vibration generator and three laser interferometers", sensors and actuators a: physical 114 (2004) pp. 93-101.
[14] a. nakano, y. hirai, k. sugano, t. tsuchiya, o. tabata, a. umeda, "rotational motion effect on sensitivity matrix of mems three-axis accelerometer for realization of concurrent calibration using vibration table", proc. of the ieee 26th international conference on micro electro mechanical systems (mems), 2013, pp. 645-648.
[15] m. pedley, "high precision calibration of a three-axis accelerometer", freescale semiconductor (2013).
[16] g. d'emilia, a. gaspari, e. natale, "evaluation of aspects affecting measurement of three-axis accelerometers", measurement 77 (2016) pp. 95-104.
[17] e. o. doebelin, measurement systems, mcgraw-hill, 2004, isbn 0-07-243886-x.

acta imeko issn: 2221-870x september 2016, volume 5, number 2, 71-79

the high official harkhuf and the inscriptions of his tomb in aswan (egypt). an integrated methodological approach

andrea angelini 1, giuseppina capriotti vittozzi 2, marina baldi 3
1 institute for technologies applied to cultural heritage, cnr-itabc, via salaria km 29,300 - 00015 monterotondo st.
(rome), italy
2 institute for the study on ancient mediterranean, cnr-isma, via salaria km 29,300 - 00015 monterotondo st. (rome), italy
3 institute for biometeorology, cnr-ibimet, via dei taurini 19 - 00185 rome, italy

section: research paper
keywords: harkhuf tomb; epigraphic numerical model; image based 3d model; virtual scan; aswan climate
citation: andrea angelini, giuseppina capriotti vittozzi, marina baldi, the high official harkhuf and the inscriptions of his tomb in aswan (egypt). an integrated methodological approach, acta imeko, vol. 5, no. 2, article 10, september 2016, identifier: imeko-acta-05 (2016)-02-10
section editors: sabrina grassini, politecnico di torino, italy; alfonso santoriello, università di salerno, italy
received march 20, 2016; in final form june 17, 2016; published september 2016
copyright: © 2016 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this research was supported by cnr in the agreement for scientific cooperation between italy (cnr) and egypt (asrt)
corresponding authors: andrea angelini, e-mail: andrea.angelini@itabc.cnr.it, giuseppina capriotti vittozzi, e-mail: giuseppina.capriotti@isma.cnr.it, marina baldi, e-mail: m.baldi@ibimet.cnr.it

abstract
the tech project (technology for the egyptian cultural heritage) aimed to document an egyptian monument for egyptological studies and research and, at the same time, to test a new methodological approach for conservation, valorisation and enhancement. in particular, the cnr mission focused its attention on the tomb of harkhuf, a high official of the vi dynasty (xxiii century bc), who led trading and military expeditions into nubia. the hieroglyphic texts inscribed on the façade of his tomb are very important and famous documents. the team tested an innovative and integrated methodology, focused mainly on the use of digital photogrammetric systems, in order to generate an accurate numerical model (3d) and to facilitate the epigraphic study. different procedures have been established in the processing and representation steps in order to accomplish the final communication of the results. moreover, climatic measurements have been carried out in order to understand the role of environmental factors in the deterioration of the monument. finally, the data have been cross-checked in order to assess the environmental impact and the decay.

1. introduction to the project

the cnr – multidisciplinary egyptological mission (cnr – mem) has worked in the tomb of harkhuf in order to document its important hieroglyphic inscriptions. the mission has been carried out in the framework of the project tech (technologies for the egyptian cultural heritage), funded by the national research council of italy (cnr) and the academy of scientific research and technology of egypt (asrt). aswan is the southern gate of egypt: in ancient times caravan routes left the nile valley from this site to reach faraway territories and different populations, carrying back exotic and precious goods. the ancient city of elephantine, the river island in the modern town of aswan, was the gateway to africa, the bridgehead towards unknown lands as dangerous as they were rich in exotic treasures. between the late old kingdom and the early middle kingdom, the tombs of the nobles at qubbet el-hawa testify to ancient journeys, far south explorations, trades and cultural exchanges; in particular, the tomb of harkhuf, a high official of the vi dynasty, who led trading and military expeditions into nubia [1]. the text inscribed on the façade of his tomb is a very important and famous document (figure 1). harkhuf led at least four expeditions to faraway countries in the southern or west-southern territories. his tomb testifies to his life, his career, his journeys and his explorations towards the south, in nubia and in the libyan desert. harkhuf, a noble of upper egypt, served the kings merenra and pepi ii during the vi dynasty, in the xxiii century bc [1]. this tomb was first noticed by an italian scholar, ernesto
schiaparelli, the famous egyptologist who discovered, for instance, the tomb of nefertary and other important monuments. schiaparelli, who was director of the egyptian museum in florence and of the egyptian museum of turin, published the inscriptions in the memorie dell'accademia dei lincei in 1892 [2]. schiaparelli was a pioneer of inter-disciplinary archaeology and egyptology, and he used photography as an instrument for documentation in the archaeological field. in 1975, miriam lichtheim, who published a famous work on ancient egyptian literature, wrote in her book: "cut in soft, flaking stone, the inscription (of harkhuf) is now in a very poor condition". in fact, the wind, the sun and some rainfalls contribute to the decay [3]. the tech project aimed to test a non-invasive methodology for documenting egyptian monuments and above all egyptian epigraphy. the tomb of harkhuf has been chosen because of its importance, its status and the old documentation provided by e. schiaparelli, which represented the starting point for the work. the project aimed at a very good documentation using a digital photogrammetric system, in order to obtain data on the conditions of the tomb and to check the decay and the influence of the environmental factors.

2. the acquisition step of the inscriptions

2.1. introduction to the survey

the work presented in this paper is a preliminary note of a wider project on the tomb of harkhuf. since the first survey on the site, in 2014, several damaged inscriptions preserved on the façade and on the pillars inside the tomb have been noticed. most of them are in different conditions, probably due to the intrinsic characteristics of the stones and to weathering phenomena. the observations made during the mission showed the importance of documenting the inscriptions, which actually risk being lost forever. the most important inscriptions are visible on the main façade: they represent an important document of harkhuf's life (figure 2). the four pillars inside show reliefs and inscriptions with funerary formulas. the first step of the work was the documentation of the existing texts with a methodology able to record the signs and, at the same time, to evidence the differential degradation. every inscription was accurately documented through the use of photogrammetric systems. image-based 3d methods can be considered a valid instrument of investigation and analysis to improve knowledge in the archaeological field [4]. in recent years the development of digital photogrammetric systems has allowed the definition of a very accurate working procedure for data acquisition and processing in the archaeological discipline.
the extraordinary development of camera sensors, the manufacturing of the lenses and the more accurate algorithms for the recognition of homologous points on the images allow the whole investigated subject to be reconstructed [5]. the methodology was focused on the use of a photogrammetric system, although other approaches could be used for the investigation, such as laser scanning (triangulation/phase difference) [6] or, more recently, polynomial texture mapping (ptm) [7]. this second system is able to display depth details of an object by interpolating information carried out from different lightings of the surface. the limits of ptm consist in a non-scaled model, in the reduced size of the subject, which cannot exceed one metre, and in the difficulty of applying the system in an open space. the use of digital photogrammetry was justified by the possibility of using a simple device (such as a camera) in almost every condition, collecting important information and data about the archaeological artefacts. furthermore, given the history and evolution of photogrammetry, it is possible to evaluate its potentiality and its extension to different application fields [8].

2.2. the experimentation on the tomb of harkhuf

the first step of the research focused mainly on the external façades, which are subjected to a greater stress. the survey was performed using two reflex cameras (canon 5d mark ii, canon 60d) with different optic lenses (canon 28 mm/50 mm). the following modalities have been used:
- structure from motion (sfm/agisoft photoscan);
- stereoscopic digital photogrammetry (mencisoftware zscan).
the first system allows photogrammetric acquisition to be performed starting from unordered images. the software used is a commercial package able to orient and match large datasets of images. there is no published information concerning the matching algorithms employed in the software, even if it seems to be a stereo sgm method [9].

figure 1. the famous tomb of harkhuf in aswan, egypt. the tomb was completely carved out in the stone.
figure 2. one of the inscriptions preserved on the main façade of the tomb. in particular, it represents an important document of harkhuf's life.
in that condition, when the frames are coplanar, the epipolar lines are in parallel. this is the best condition for the software to recognize the homologous points. the advantage consists in a lesser processing time and a more accurate point clouds (less noises). on the field a 28 mm optic lens with a tripod mounted at 0.50 m from the façade were used, moving the camera along the entire surface (figure 4). the acquisition simulated imaginary tracks in a bustrophedic direction (tape method). they were taken more than 400 pictures only for the letter of harkhuf at very high resolution and with 80 % of overlapping among the pictures. the side lap between each photographic sequence was about 30 %. the choice of the 28 mm optic lens guaranteed a good compromise between radial distortion and the covered area. experimentations were performed also with the other camera in the same condition, to test differences between the camera sensors. canon 5d mark ii is provided with a full frame sensor while canon 60d is equipped with an aps-c sensor. this difference conditioned the number of acquisitions on the face of the tomb. positioning the second camera at the same distance from the subject we needed more images to compensate the overlapping area. the second experimentation was performed with the other system (stereomatching), based on the stereoscopic acquisition of three images, using the same cameras. the canon 5d mark ii was mounted on a little aluminium bar (0.7 m) and fixed on the tripod at 2 m from the inscription. only a single camera, hold on the bar in 3 different positions, has been used. the baseline between 2 adjacent position was set at 200 mm, about 1/10 of the distance from the engravings. in this second application only a limited number of acquisitions were made, even if each one consisted of three camera shots. the co-planarity among the frames for each hat trick must be maintained to achieve a good correspondences of the points. settings of the camera represented an important key factor for avoiding noisy point clouds. in order to achieve good dense point clouds it was important to use fixed optic without aspherical lenses. as we know each optic is suffering from radial distortion, specified by the manufacturer. radial distortions are normally directly proportional to the distance from the centre of the image (principal point). some optics mount asperichal lenses to correct part of this distortion, altering the camera calibration step (internal orientation). in order to have a defined image and a good field depth the camera settings were modified by using automatic shutter, diaphragms at f11/f14, iso sensibility at 100 and infinity figure 4. acquisition step performed with a canon 5d mark  ii and canon  60d mounted on a tripod half a meter from the surface.  figure  3.  epipolar  geometry.  this  particular  geometry  applied  to  digital  photogrammetry  allows  to  recognize  homologous  points  between  a  couple  of frames.  acta imeko | www.imeko.org  september 2016 | volume 5 | number 2 | 74  focusing. for the letter of harkhuf five acquisitions were made to have the complete engravings with different settings and lighting. the position of the tomb conditioned the acquisition steps. they were carried out in 2 different moments of the day: during the early morning, with direct light of the sun on the surface and during the afternoon with diffused illumination. 
knowing the exact distance among the three camera positions on the bar, it is possible to obtain the spatial coordinates of inaccessible points. the system is often used in topography and is known as "space forward intersection". based on the same principle as eyesight, space forward intersection allows inaccessible points to be reached starting from 2 central projections oriented on the same subject. the geometrical model explains how it is possible to obtain the coordinates (figure 5). the accuracy of the numerical model depends on the angle of the collimation straight lines and on the distance of the cameras from the object. it is however possible to demonstrate that the collimation straight lines are twisted (skew) lines, so that an interpolation is necessary to find the point exactly. recent studies on this geometrical model evidenced other possible solutions to the problem. one of these is represented by the calculation of the middle point between the twisted lines; this is a good solution when the vertical angle of the lines is small. another solution is represented by an innovative geometrical model named "mutual tipping", developed by carpiceci in 2012 [10]. this geometrical model allows 2 important factors to be controlled:
- the real phase displacement of the twisted lines;
- the convergence angle of the triangulation.
currently there is no software that uses this solution to generate dense point clouds; indeed, the solution has been demonstrated only in projective geometry. it would be interesting to write a specific algorithm able to use this solution and compare the results with the other ones. the sfm system was used to create a high resolution 3d numerical model (point clouds), even if one of the main problems consisted in a non-scaled result (figure 6). the second system (stereoscopic) generated an accurate and scaled 3d point cloud. in this case the limit consisted in the orientation of the (local) reference system, which is in the origin of the perspective pyramid of the camera (the second shot of the triplet). in order to solve this problem and process the final graphic restitution, it is necessary to transform and rotate the ucs to a desired position, such as an orthographic view. the aim of the project was to use both models. the model carried out with the stereoscopic method allowed point coordinates to be extracted in order to reference the non-scaled model from sfm. the error registered between the two numerical models and the coordinates is about 0.008 m (figure 7). the value of the error is not constant over the entire model, and the result is referred to the ground control points used to process the non-scaled model. some factors can change this value, such as the exact recognition of the points between the two models, the number of points used, the different definition of the images taken at different distances from the subject and the resolution step of the two point clouds. the points used for referencing the model (10 points) have been chosen homogeneously over the entire surface, focusing the attention on the corners of the engraved stone. the accuracy of the stereoscopic model and the job conditions influenced the final results of the project.

figure 5. geometrical model used in topography and implemented for digital photogrammetry.
figure 6. numerical model (point clouds) of the letter of harkhuf preserved on the main façade. the flags show the ground control points (gcps) necessary to scale the model. the letter contains 60 million points.
figure 7. numerical model from stereoscopic acquisition. on the point cloud, coordinates have been picked in order to reference the sfm model.
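a hedged sketch of the simplest solution mentioned above for the space forward intersection, namely the midpoint of the shortest segment between the two twisted (skew) collimation lines; the geometry below (0.2 m baseline, subject at about 2 m, as in section 2.2) is illustrative only.

```python
# Hedged sketch of the space forward intersection described above: two rays
# from the camera centres are generally skew lines, and the simplest solution
# takes the midpoint of their shortest connecting segment.
import numpy as np

def midpoint_of_skew_rays(c1, d1, c2, d2):
    """closest-approach midpoint of rays p = c + t*d (d need not be unit)."""
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d_, e_ = d1 @ w0, d2 @ w0
    den = a * c - b * b                    # zero only for parallel rays
    t1 = (b * e_ - c * d_) / den
    t2 = (a * e_ - b * d_) / den
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

# two camera centres 0.2 m apart (the baseline used on the bar) sighting a
# point roughly 2 m away; a small perturbation makes the collimation lines skew
p = np.array([0.1, 0.3, 2.0])
c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([0.2, 0.0, 0.0])
d1 = p - c1
d2 = (p - c2) + np.array([0.0, 1e-3, 0.0])   # slightly perturbed direction
print(midpoint_of_skew_rays(c1, d1, c2, d2))
```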
the stereoscopic model is composed of 2.5 million points and its resolution is about one point every 1 mm (gsd). the accuracy of the data can only be estimated on the basis of a table compiled by mencisoftware. the table evidences the theoretical error of the measurements in different conditions: with the baseline set to 400 mm and a distance from the subject of 2000 mm, the accuracy could be 0.89 mm (table 1). if the distance from the inscription had been changed to 3 m (3000 mm in the table), the accuracy would in theory be 1.84 mm. measurements taken on the inscription (length, height and diagonal) corresponded to those taken on the virtual models. certainly it is not possible to verify the dimension of each hieroglyphic sign, but only to certify the entire documentation of the inscriptions preserved on the tomb.

table 1. accuracy of the stereoscopic system, expressed in mm, taken from mencisoftware/z-scan. rows are baselines; the values in each row correspond, in order, to the increasing distances from the subject at which the system is specified (100, 200, 300, 500, 750, 1000, 2000, 3000 mm).
  baseline 20 mm:  0.04, 0.16, 0.37
  baseline 50 mm:  0.15, 0.41, 0.92, 1.64
  baseline 100 mm: 0.20, 0.46, 0.82, 3.27
  baseline 150 mm: 0.31, 0.55, 2.18, 4.91
  baseline 200 mm: 0.41, 1.64, 3.68
  baseline 300 mm: 1.09, 2.46
  baseline 400 mm: 0.82 (at 2000 mm), 1.84 (at 3000 mm)
  baseline 500 mm: 1.47
  baseline 600 mm: 1.23

3. the processing step

3.1. the point clouds numerical model

as mentioned above, the photogrammetric survey was carried out on the entire archaeological complex, but only the letter preserved on the main entrance was completely elaborated. a specific elaboration procedure was developed for the inscription, and the results are shown below. the processing step was performed with different software packages for point cloud management. the aim of the study was to create a numerical model of the inscription at high resolution, able to give information about its condition. from the high resolution 3d model (point clouds) it was possible to carry out all the measurements of length, width and depth of the engraving and to get information about the geometry, the morphology and the global visualization of the object. with the integration of different point cloud management software it was possible to make a series of elaborations, in order to get a better organization and comprehension of the data (figure 8).

figure 8. on the numerical model it is possible to separate the chromatic characteristics from the shape of the engravings.

3.2. the transformation of the numerical model

the main elaboration was represented by the transformation of the point cloud into a surface. although it seems very simple and fast, this step is very difficult to process, above all when dealing with complex shapes, such as an egyptian inscription. once the filters were applied on the point cloud, the surface reconstruction was executed through the meshing technique. the meshing process was necessary to:
- reduce the total number of points;
- create the connections among the points;
- measure the connected and continuous surfaces (volume, distance);
- apply a texture map from the bi-dimensional images.
the mesh consists of vertices, which give positional information, and edges, which give connective information. by connecting the edges to the vertices it is possible to introduce a concept of "closeness" between vertices and give the topological information. the faces are determined when the vertices and the edges are given. the faces do not introduce anything at the level of information; rarely they can have some associated attributes. currently a lot of mesh generators exist, but the following are the most common, based on the main studies of owen in 1998 [11]. there are three main categories of mesh generators for the reconstruction of surfaces starting from a domain of unstructured points:
- octree;
- delaunay;
- advancing front.
the most used method for generating triangular meshes is the one known as the delaunay criterion (figure 9). in the euclidean plane the principle of the method is very simple: it requires that each node of the set should not be contained in the circle which circumscribes any triangle present in the mesh. for triangulating four vertices abcd, two options of triangulation are possible; in order to satisfy the criterion, the solution in which the circles circumscribing the two triangles contain a vertex inside them must be discarded [12].

figure 9. the delaunay criterion shown in the plane. the valid solution allows more regular triangles to be obtained.
there are three main categories of mesh generators for the reconstruction of the surfaces, starting from a domain of unstructured points; octree; delaunay; advancing front; the most used method for generating triangular meshes is that known as delaunay criterion (figure 9). in the euclidean plane the principle of the method is very simple and it requires that each of the set of the nodes should not be contained in the circle which circumscribes any tetrahedron present in the mesh. for triangulating four vertices abcd, two are the possible options of triangulation. in order to satisfy the criterion, the solution in which the circles circumscribed the two triangles contain a vertex inside them, must be discarded [12]. this is figure 8. on the numerical model  it  is possible to separate the chromatic  characteristics from the shape of the engravings.  figure 9. delaunay criterion showed in the plane. the valid solution allows  to have more regular triangles.  table  1.  table  of  accuracy  of  the  stereoscopic  system  expressed  in  mm taken from mencisfotware/z‐scan.  baseline/distance 100 mm 200mm 300 mm 500 mm 750 mm 1000 mm 2000 mm 3000 mm 20 mm 0.04 0.16 0.37 50 mm 0.15 0.41 0.92 1.64 100 mm 0.20 0.46 0.82 3.27 150 mm 0.31 0.55 2.18 4.91 200 mm 0.41 1.64 3.68 300 mm 1.09 2.46 400 mm 0.82 1.84 500 mm 1.47 600 mm 1.23 700 mm 800 mm acta imeko | www.imeko.org  september 2016 | volume 5 | number 2 | 76  therefore not just a real algorithm, but a selection criterion associated to an algorithm which subsequently generates the triangular surfaces. the effectiveness of this method is demonstrated by the relationship between the delaunay criterion and the voronoi diagram, that illustrated most of tessellation issues present in nature. the delaunay criterion is the dual of the voronoi diagram. it is a suitable system for reconstructing surfaces from unstructured information. unstructured point clouds have been processed with geomagic (wrap algorithm), based always on delaunay triangulation. the sfm point cloud contained 60 million points. the weight of the point cloud made it impossible to triangulate all the points. it was necessary to filter point clouds reducing in percentage the points by using different filters such as the curvature filter. this filter allows to maintain more points in the area where the curvatures are high. after the creation of the triangles the surface was improved through the optimization method. two are the main categories of the optimization method: smoothing algorithms, that maintain the connectivity but re-arrange the nodes; cleanup algorithms, that maintain the position of the nodes but change their connectivity. usually smoothing algorithms alter the original information of the surface, although the final result can be very suitable. cleanup algorithms are very useful for changing the topology of the triangles. sometimes it was very useful to flip the connectivity in order to give the correct shape to the engravings. the model was filtered in different ways to highlight the inscriptions but maintaining possibly original data (figure 10). other procedures were used to improve the final model. 
other procedures were used to improve the final model; the following are only the most used for the letter of harkhuf:
- the repair of the mesh, which is required when the algorithmic operation is not completely successful, so that the model can have holes or topological problems (self-intersections, corners and non-manifold vertices);
- the decimation filter, which uses a series of algorithms to simplify the model and generate multi-resolution models (pyramid levels);
- the densification or refinement processes, to increase the detail of the mesh; countless are the algorithms for the densification processes (such as edge bisection, point insertion, templates) [13].

3.3. the virtual scan of the numerical model

in order to read the information about the condition of the inscriptions, it was decided to create maps useful for the investigation and analysis. point clouds carried out from digital photogrammetry are unstructured point clouds: the information is not organized according to a regular grid. normally image-based data present only the information about the spatial coordinates and the rgb value. it was therefore important to add extra information to the numerical model. in the processing step a new procedure was tested in order to transform the unstructured point clouds into structured point clouds. how is it possible to change the structure of the numerical data? the data were processed with a specific software (jrc reconstructor) in order to change the structure of the point clouds. the aim was to give the photogrammetric data the same characteristics as laser scanner data. this idea was made possible by a tool named "virtual scan", present in the same software [14]. this tool allows a virtual scan of an object to be made in the virtual space. once an orthographic camera including the numerical model (unstructured) is established, it is possible to acquire the whole point cloud at a specific resolution, like a laser scanner. at the same time it is possible to create a spherical or cylindrical camera in any position, simulating the same movement as a panorama scanner. although the resolution of the results also depends on the graphic card mounted, it is possible to create a new structured 3d numerical model of the subject with the same characteristics as a point cloud taken with a laser scanner. for the inscription an orthographic camera was created at a resolution grid of 4000 × 4000 (pixels/points); subsequently, a new structured 3d numerical model composed of 16 million points was carried out. the new structured model contained fewer points than the original one, but the main advantage consisted in the possibility of applying the pre-processing algorithms to the photogrammetric data. it was possible to remove the noise and to add some useful information, such as the normals and the depth and orientation discontinuities. this information has been used to generate different outputs, such as digital elevation models (dem), normal maps and other elaborations useful for the final results (figure 11).

figure 10. the point cloud was transformed into a surface through the meshing technique. the final model contains only 3 million triangles.
figure 11. example of a normal map of a part of an inscription. on the map it is possible to evidence the difference between casual signs and engravings.
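a minimal sketch of the idea behind the virtual scan: the unstructured cloud is resampled by an orthographic camera onto a regular grid, keeping one depth value per cell. this is a simplified re-implementation of the concept, not the jrc reconstructor algorithm.

```python
# Hedged sketch of the "virtual scan" idea: an unstructured point cloud is
# resampled onto a regular grid by an orthographic camera, keeping for each
# grid cell the point facing the camera (here the maximum z value).
import numpy as np

def virtual_scan(points, res=4000):
    """points: (n, 3) array; returns a res x res depth grid (nan = empty)."""
    x, y, z = points.T
    # map x and y onto integer grid coordinates of the orthographic camera
    ix = ((x - x.min()) / (x.max() - x.min()) * (res - 1)).astype(int)
    iy = ((y - y.min()) / (y.max() - y.min()) * (res - 1)).astype(int)
    grid = np.full((res, res), np.nan)
    for i, j, depth in zip(ix, iy, z):
        if np.isnan(grid[j, i]) or depth > grid[j, i]:
            grid[j, i] = depth            # keep the surface point facing the camera
    return grid

cloud = np.random.default_rng(4).uniform(size=(100_000, 3))
dem = virtual_scan(cloud, res=200)        # a structured model: one depth per cell
```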
4. the representation of the data

the aim of the work is not only to obtain a numerical model, but also to find a way of using the model to draw the hieroglyphics in a bi-dimensional restitution for the epigraphists. the general idea consisted in experimenting with different solutions able to solve the dichotomy between innovative acquisitions and graphic representations. bi-dimensional representations of inscriptions are usually based on direct survey, supported by a photographic restitution on a specific plane; unfortunately this kind of representation does not characterize the signs and the shape of the engravings correctly, because the signs are drawn through an interpretation of the rectified image (figure 12). the aim of the project was to find a form of representation suitable for the epigraphic signs, and a system derived from cartography, and above all from sculpture, was experimented with. the approach is based on the representation of contour lines, which describe the free shape of an object beyond the contour outlines and the edges of the engravings [15]. the equidistance of the contour lines depends on the representation scale of the graphic restitution; normally it is set to 1/1000 of the representation scale of the object expressed in metres. in theory an equidistance of 1 mm would have been enough, but in this case a more defined restitution of the inscription was necessary: the numerical model (mesh) was used to generate sections every 0.2 mm (5 times denser) in order to evidence the details of every sign of the inscription. the generation of the sections depends on the quality of the numerical model and also on the main plane chosen by the user, so it was necessary to establish a reference plane onto which the information about the sections is projected. the results evidence the differences between the old drawings and the new approach: the signs are well defined by the lines and, at the same time, the shape and the curvature of the stone material are described (figure 13).

figure 12. hieroglyphics drawn with manual recognition of the signs and a conceivable support of a rectified image.

figure 13. the same part drawn with the contour line method. the equidistance was set to 0.2 mm. this adaptation allowed the inscriptions to be represented adequately.
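the principle of this restitution can be sketched on a regular height field standing in for the mesh: the sections are simply the level sets of the depth with respect to the chosen reference plane, spaced 0.2 mm apart. a minimal matplotlib sketch on a synthetic relief (the real sections were cut on the triangulated model):

```python
import numpy as np
import matplotlib.pyplot as plt

# synthetic relief (mm): a smooth bump standing in for an engraved sign
x, y = np.meshgrid(np.linspace(0, 50, 500), np.linspace(0, 30, 300))
relief = 1.5 * np.exp(-((x - 25) ** 2 + (y - 15) ** 2) / 40.0)

equidistance = 0.2  # mm, the value chosen for the inscription
levels = np.arange(relief.min(), relief.max() + equidistance, equidistance)

fig, ax = plt.subplots(figsize=(8, 5))
ax.contour(x, y, relief, levels=levels, colors="black", linewidths=0.4)
ax.set_aspect("equal")
ax.set_title("contour lines, equidistance 0.2 mm")
fig.savefig("contours.png", dpi=300)
```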
5. the influence of the climate

the tombs of the nobles are carved in the rock on the west bank of the river nile at aswan, an area characterized by the hot and dry weather conditions [16] typical of a desert climate. in this region the total rainfall amount per year is about 1 mm, and heavy precipitation is a very rare event, occurring once every one or two years and often resulting in flash floods. owing to their high intensity and short duration, these rare but heavy rainfall episodes have important, and sometimes catastrophic, impacts on population, buildings, infrastructures and ecosystems [17], including wadi discharge [18], and on cultural heritage. the meteorological factors affecting the tomb of harkhuf at qubbet el-hawa are the air temperature, its diurnal excursion and the wind and, to some extent, the relative humidity. the effects of the environmental factors, including the meteorological elements, are evident if the lower part of the façade is compared to the upper part: for centuries, and until the work done by schiaparelli, the lower part was covered by sand deposited on the façade and therefore remained well sheltered from the meteorological factors, while the upper part was always subject to the environmental stress. this resulted in a better state of conservation of the lower part in comparison with the upper one, which is still clearly visible.

analysis of the meteorological factors in aswan shows that the night-time relative humidity can exceed 30 % during the winter months, and rises rapidly during heavy rainfall episodes. the experiment, designed using portable meteorological instruments, made it possible to verify whether the microclimate around the harkhuf tomb has the same characteristics as the larger aswan area, which can be derived from the meteorological station located at aswan airport, and to determine the microclimate inside the tomb. in particular, it contributed to determining the temperature gradient along the façade of the tomb, in order to understand whether its different parts are under physical stress of different intensity, and to determining whether the temperature excursions, together with a suitable level of relative humidity of the air, could favour the formation of dew at dawn, and whether the air can reach high dew-point values. a preliminary analysis of the data collected between 8:00 am and 4:00 pm local time (figure 14) permitted the detection of a differential heating of the façade, with the right part reaching temperatures a few degrees celsius warmer than the left part, and for a longer period, being under the direct sunrays until early afternoon. in the interior, during the day, the temperature excursion is much more moderate, although the excursion curve is very well correlated with the air temperature outside the tomb. in addition, some measurements of wind intensity and direction show a persistent wind blowing from n-ne from mid-morning, around 10:30 am, until late afternoon. the relative humidity of the air outside the tomb is not a big concern during the day; it is however important in the interior, where it can reach larger values and maintain a risky level during the night, which can favour the formation of mould on the ceiling. during the day the evaporation from the surface of the nile river is considerable and can increase the air humidity in the lower layer of the atmosphere, the boundary layer; this humid air penetrates into the tomb, remains trapped, starts to condense and deposits on the internal walls and ceiling, increasing the risk of mould formation. all these environmental factors certainly play a key role in the deterioration of the monument, and in particular of its so magnificently engraved façade, making its preservation and conservation an absolute necessity. in this respect the methodology described in the previous paragraphs represents a great tool for this purpose.

figure 14. diurnal variation of temperature and relative humidity (top panel) and of wind speed (bottom panel).
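whether dew can actually form can be judged by comparing the surface temperature at dawn with the dew point of the air. a minimal sketch using the common magnus approximation; the coefficients are the usual magnus constants and the input values are illustrative, not data from the campaign:

```python
import math

def dew_point_c(temp_c, rel_humidity_pct):
    """magnus approximation of the dew point in degrees celsius.

    uses the common coefficients a = 17.62, b = 243.12 degc, valid
    roughly between -45 degc and 60 degc; dew forms on any surface
    whose temperature falls below the returned value.
    """
    a, b = 17.62, 243.12
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# a winter night at 12 degc and 30 % relative humidity
print(round(dew_point_c(12.0, 30.0), 1))  # about -5 degc: no dew expected
```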
6. conclusion

the integrated methodology applied to the tomb of harkhuf has given good results. the research evidenced important information about the characteristics of the letter (the support, the signs, the degradation), and the experimentation gave the possibility of investigating the documentation of the tomb from different points of view, testing different elaborations that can bring new information to light. particular attention was paid to the procedure known as "virtual scan", able to transform unordered data from image-based methods into structured information such as range data; this application was very promising and further experimentation will be carried out in the future. the team is elaborating the entire dataset in order to produce an accurate documentation and to plan a conservation project.

acknowledgements

this paper is the result of the tech project (technology for the egyptian cultural heritage), headed by prof. giuseppina capriotti vittozzi and supported by a cnr contribution. the paper is also the result of the collaboration between the authors: in particular, giuseppina capriotti vittozzi developed the introduction to the project, andrea angelini the acquisition step of the inscriptions, the processing step and the representation of the data, and marina baldi the influence of the climate.

references

[1] e. edel, die felsgräbernekropole der qubbet el-hawa bei assuan, abt. 1: architektur, darstellungen, texte, archäologischer befund und funde der gräber, 1. qh 24 - qh 34p; 2. qh 35 - qh 101; 3. qh 102 - qh 209, aus dem nachlaß verfaßt und herausgegeben von karl-j. seyfried und gerd vieler, schöningh, paderborn, 2008, pp. 625-648.
[2] e. schiaparelli, "una tomba egiziana inedita", memorie della classe di scienze morali, storiche e filologiche dell'accademia dei lincei, 10 (1892), pp. 21-53.
[3] m. lichtheim, ancient egyptian literature: the old and middle kingdoms, university of california press, oakland, ca, 1975, isbn 978-0520248427, p. 23.
[4] f. remondino, s. el-hakim, "image based 3d modelling: a review", the photogrammetric record, vol. 21, issue 115, august 2006, pp. 269-291.
[5] a. angelini, r. gabrielli, "laser scanning e photoscanning. tecniche di rilevamento per la documentazione 3d di beni architettonici ed archeologici", archeologia e calcolatori, 24 (2013), isbn 978-88-7814-580-1, pp. 374-394.
[6] p. drap, m. sgrenzaroli, m. canciani, g. cannata, j. seinturier, "laser scanning and close range photogrammetry: towards a single measuring tool dedicated to architecture and archaeology", proc. of the 19th isprs symposium, sept. 30 - oct. 4, 2003, antalya, turkey, pp. 1-6.
[7] t. malzbender, d. gelb, h. wolters, "polynomial texture maps", proc. of the 28th annual conference on computer graphics and interactive techniques, siggraph 2001, new york, usa, pp. 519-528.
[8] m. carpiceci, fotografia digitale e architettura. storia, strumenti ed elaborazioni con le odierne attrezzature fotografiche e informatiche, aracne, roma, 2012, isbn 978-88-548-4939-6.
[9] f. remondino, m. g. spera, e. nocerino, f. menna, f. nex, "state of the art in high density image matching", the photogrammetric record, vol. 29, issue 146, june 2014, pp. 144-166.
[10] m. carpiceci, modelli geometrici e costruzioni grafiche per il rilevamento architettonico. idee e proposte per una migliore gestione dei dati grafici e numerici nel rilevamento architettonico, aracne, roma, 2012, isbn 978-88-548-5153-5.
[11] s. j. owen, "a survey of unstructured mesh generation technology", proc. of the 7th international meshing roundtable, oct. 26-28, 1998, dearborn, michigan, usa, pp. 239-267.
[12] r. migliari, geometria descrittiva. tecniche e applicazioni, vol. ii, 2009, isbn 978-88-251-7330-7.
[13] d. f. watson, "computing the delaunay tessellation with application to voronoi polytopes", the computer journal, vol. 24 (1981), pp. 167-172.
[14] m. sgrenzaroli, g. p. m. vassena, tecniche di rilevamento tridimensionale tramite laser scanner. volume 1 - introduzione generale, starrylink editrice, brescia, 2007, isbn 978-88-8972073-8.
[15] m. carpiceci, g. cresciani, a. angelini, "rock hewn architecture survey: the problem of construction of the geometrical model", proc. of the international congress of speleology in artificial cavities, hypogea, mar. 11-17, 2015, rome, italy, pp. 389-398.
[16] l. j. sutton, "the climate of egypt", weather, 5(2), 1950, pp. 59-62.
[17] i. springuel, m. sheded, f. darius, r. bornkamm, "vegetation dynamics in an extreme desert wadi under the influence of episodic rainfall", polish botanical studies, 22, 2006, pp. 459-472.
[18] z. el-barbary, g. a. sallam, "optimizing use of rainfall water in east desert of egypt", proc. of the eighth international water technology conference, iwtc8, 2008, alexandria, egypt, pp. 73-81.

acta imeko issn: 2221-870x, december 2017, volume 6, number 4, 69-74

reducing the uncertainty of strain gauge amplifier calibration

miha hiti

slovenian national building and civil engineering institute (zag), dimičeva 12, 1000 ljubljana, slovenia

section: research paper
keywords: calibration; strain gauge; simulator; amplifier; uncertainty
citation: miha hiti, reducing the uncertainty of strain gauge amplifier calibration, acta imeko, vol. 6, no. 4, article 11, december 2017, identifier: imeko-acta-06 (2017)-04-11
editor: paolo carbone, university of perugia, italy
received june 19, 2016; in final form november 24, 2017; published december 2017
copyright: © 2017 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: the author acknowledges the financial support from the slovenian research agency (research core funding no. p2-0273)
corresponding author: miha hiti, e-mail: miha.hiti@zag.si

abstract: the article presents a method for the calibration of strain gauge bridge amplifiers with improved uncertainty in the low voltage-ratio range. the procedure is based on combining a traditional calibration of the amplifier at one point with a linearity determination over the rest of the range. the traditional calibration is performed with a calibrated strain gauge bridge simulator at a reference value where the measurement uncertainty is adequate, while the linearity is determined, with lower uncertainty, by a combinatorial calibration method employing a special resistance circuit. the uncertainty in the lower part of the amplifier range can be significantly improved, resulting in a combined relative standard uncertainty below 2.5×10-5 for the range from 0.04 mv/v to 2.5 mv/v.

1. introduction

strain gauge bridge transducers, which are used for the measurement of mechanical quantities such as force, torque and pressure, need an instrument for the excitation of the strain gauge bridge and for the measurement and indication of the bridge output signal, which depends on the load applied to the transducer. strain gauge bridge measuring amplifiers usually perform these tasks; they indicate the result of the measurement as the ratio of the bridge output voltage to the bridge supply voltage. figure 1 shows a typical transducer-amplifier circuit in a 6-lead configuration.

figure 1. strain gauge bridge transducer with 6-lead connection to the bridge amplifier in a force measuring system.
as an important part of the measuring chain (or measuring system), bridge measuring amplifiers should be verified regularly. this is usually done by calibration with strain gauge bridge simulators, which provide defined voltage-ratio values as reference values. such simulators, in turn, need to be calibrated themselves for each reference value they output. for high-precision simulators (e.g. of the 225 hz carrier-frequency type), the typical expanded calibration uncertainty u of the 2 mv/v range is about 0.00001 mv/v at nmi level [1], [2] and u = 0.00002 mv/v at calibration-laboratory level for further dissemination, leading to amplifier calibration uncertainties of u = 0.00002 mv/v and above. such calibration uncertainties consequently lead to a large uncertainty of strain gauge amplifier measurements in the lower part of the range [3] when expressed in relative terms with respect to the measured value, as is typical for force and torque calibration. figure 2 shows the relative standard uncertainty contribution of the simulator when the amplifier is calibrated with traditional calibrator units, such as the hbm k3608 or hbm bn100a simulators. for an expanded calibration uncertainty u = 0.00002 mv/v (k = 2), and therefore a standard uncertainty u = 0.00001 mv/v, the relative standard uncertainty w (u divided by the measured value) at 2 mv/v is 5×10-6, rising to w = 5×10-5 at 0.2 mv/v and to w = 2.5×10-4 at 0.04 mv/v, which is the typical 2 % low-range limit of a force transducer with a 2 mv/v nominal range. the best available relative standard uncertainty of simulator calibration at nmi level, with u = 0.00001 mv/v expanded uncertainty, is also shown; in that case the relative standard uncertainty w reaches 1.25×10-4 at a 0.04 mv/v ratio. such uncertainties significantly exceed the available uncertainty of the realization of the mechanical quantities, where, for example, force standard machines offer relative expanded uncertainties w of 1×10-5.

figure 2. relative standard uncertainty w of a typical calibrated high-precision strain gauge bridge simulator with u = 0.00002 mv/v (solid line) and the best available relative standard uncertainty at nmi level for u = 0.00001 mv/v (dotted line).

calibrating the amplifier together with the transducer as a measuring system is one way to overcome this limitation, but it ties the transducer to being always used with the same amplifier. the transducer calibration can become void in the case of an amplifier replacement if the calibration data of the amplifiers are not available. if the transducer is used with another amplifier than the one originally employed during the calibration, i.e. with a replacement amplifier, both amplifiers, the original and the replacement, should be calibrated to assure the comparability of the results; furthermore, the deviations and the calibration uncertainties of both amplifiers should be considered [4], [5].
if the two amplifiers are not checked with the same simulator, in which case the simulator would be used as a comparator rather than as a traceable reference, the uncertainty of the amplifier calibration should be evaluated and taken into account as necessary. according to the international standard iso 376 [4], replacing the amplifier (simply called an indicator in the standard) is allowed if the following requirements are met:
- both amplifiers should be calibrated and traceable to national standards;
- both amplifiers should have the same working parameters (excitation voltage and frequency) and comparable resolution;
- the calibration uncertainties of the original and the replacement amplifier should not significantly influence the total uncertainty of the force measuring chain; as a recommendation, the uncertainty of the replacement amplifier should not exceed 1/3 of the uncertainty of the entire system.
whenever the amplifier is replaced, the contribution of the amplifier calibration should be compared against the calibration uncertainty of the measuring system over the whole calibrated range, as the calibration uncertainty of the simulator can become the major uncertainty contribution and can increase the total measuring-system uncertainty. in section 2 we discuss the effect of the replacement amplifier on the total measuring-system uncertainty. in section 3 we present a method and a circuit for determining the linearity of the amplifier without the need to first calibrate the circuit, thus allowing lower uncertainties. in section 4 we show the results of applying the method for linearity determination and calibration to a high-precision measuring amplifier. finally, conclusions are given in section 5.

2. uncertainty contribution of the measuring amplifier

when a calibrated transducer-amplifier measuring chain is separated, the traceability of the system needs to be re-established via the mv/v voltage-ratio standard. a schematic of the calibration of a measuring chain is shown in figure 3a, and the steps required to assure the exchangeability of measuring amplifiers via mv/v traceability are shown in figure 3b. in the first case the calibration is valid only for the whole chain, as the effects of the transducer and of the amplifier are not individually known and therefore cannot be separated. with such a calibration it is not possible to simply replace the amplifier, as the original amplifier characteristics (e.g. linearity, deviation) are not known. while the calibration results are typically attributed to the transducer, they are actually valid only for the whole force measuring system, i.e. the transducer with the original amplifier. if the option of replacing the amplifier is required, then the calibration data need to be provided for the transducer alone, and not for the whole system; thus, the effect of the amplifier should be eliminated from the measuring chain.

figure 3. comparison of (a) calibration of a measuring chain and (b) separate calibration of transducer and amplifier.
for this, the amplifier must first be calibrated, to assess the amplifier characteristics, and then the influence of the amplifier should be considered: either the calibrated values of the system are corrected for any amplifier deviation, or the necessary corrections are included as an additional uncertainty component. in any case, the uncertainty of the calibration of the amplifier has to be included in the transducer calibration uncertainty. before the transducer can be used with a replacement amplifier, the replacement amplifier must also first be calibrated, and any deviations should be taken into account (corrected or included as an additional uncertainty component); the calibration uncertainty of the replacement amplifier then needs to be included in the uncertainty budget of the newly assembled measuring system. to illustrate the scale of the calibration uncertainty of the amplifier with respect to the uncertainty of the measuring chain (transducer with amplifier), an example is given in figure 4, based on calibration data for a 100 kn force transducer taken from an actual calibration certificate. the calibration uncertainty of the measuring chain is shown with a solid line, as a relative standard uncertainty. the force transducer in this example was calibrated in the range from 5 kn to 100 kn (5 % to 100 % of the nominal range) in a force standard machine with 0.002 % expanded uncertainty. for comparison, the calibration uncertainty of the amplifier is shown with a dashed line, for a nominal output of the transducer of 2 mv/v at nominal load; the calibration uncertainty of the amplifier is specified in its calibration certificate as 0.00002 mv/v from 0.1 mv/v to 2 mv/v. it can be seen that, while the amplifier uncertainty is much lower than the measuring-chain uncertainty in the upper range of the transducer, it significantly exceeds the measuring-chain calibration uncertainty at lower force values. when replacing the amplifier, the uncertainties of both amplifiers should be taken into account, further increasing the effect of the amplifier calibration uncertainty. as can be seen from the figure, the amplifier uncertainty exceeds 1/3 of the transducer calibration uncertainty for values below 30 % of the nominal range of the transducer; this results in a significant increase of the total uncertainty and should therefore not be neglected, in agreement with the recommendation in iso 376 regarding the suitability of replacement amplifiers.

figure 4. comparison of the relative standard uncertainty of the calibration of a 100 kn force transducer chain (wtrans) and the relative uncertainty of the amplifier calibration (wamp).

figure 5 shows the expanded uncertainty wtrans for the case where the 100 kn force transducer is calibrated together with the amplifier as a measuring chain (solid line), and the resulting expanded uncertainty wtrans+amp when a replacement amplifier is used after the calibration (dashed line). for the second case, the combined uncertainty wtrans+amp is calculated from the contributions of the original calibration uncertainty of the transducer chain wtrans and the additional calibration uncertainties of each amplifier (original amplifier, wamp_nmi, and replacement amplifier, wamp_lab) according to (1); in this example both amplifiers have the same (best available) calibration uncertainty contribution:

$$W_{\mathrm{trans+amp}} = 2\sqrt{w_{\mathrm{trans}}^{2} + w_{\mathrm{amp\_nmi}}^{2} + w_{\mathrm{amp\_lab}}^{2}}\,. \qquad (1)$$

figure 5. comparison of the expanded relative calibration uncertainties of a transducer chain (wtrans) and of a calibration of a transducer with a replacement amplifier (wtrans+amp).

the actual uncertainty contributions from the amplifier calibration (absolute uamp and relative wamp) and from the transducer chain calibration (wtrans) are shown in table 1; the output of the transducer ranges from 0.04 mv/v at 2 % to 2 mv/v at 100 % of the nominal range.

table 1. comparison of uncertainty contributions.

range     output [mv/v]   uamp [mv/v]    wamp [%]   wtrans [%]
2 %       0.04            0.00002 (*)    0.050      /
5 %       0.1             0.00002        0.020      0.010
10 %      0.2             0.00002        0.010      0.009
20 %      0.4             0.00002        0.005      0.009
50 %      1.0             0.00002        0.002      0.008
100 %     2.0             0.00002        0.001      0.008
(*) estimate
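treating the tabulated wamp and wtrans values as the relative standard contributions entering eq. (1), the combined expanded uncertainty can be checked with a few lines; a minimal sketch (values copied from table 1, both amplifiers assumed equal as in the text):

```python
import math

def w_chain(w_trans, w_amp_orig, w_amp_repl):
    """eq. (1): expanded (k = 2) relative uncertainty of a transducer
    used with a replacement amplifier, from the individual relative
    standard contributions (all in the same unit, here %)."""
    return 2.0 * math.sqrt(w_trans ** 2 + w_amp_orig ** 2 + w_amp_repl ** 2)

rows = [(5, 0.020, 0.010), (10, 0.010, 0.009), (20, 0.005, 0.009),
        (50, 0.002, 0.008), (100, 0.001, 0.008)]   # range %, wamp %, wtrans %
for rng, w_amp, w_trans in rows:
    print(f"{rng:3d} %   w_trans+amp = {w_chain(w_trans, w_amp, w_amp):.4f} %")
```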
as the amplifier is not calibrated at 0.04 mv/v, the uncertainty there is estimated from the rest of the range. also, the calibration of the transducer is not available for the 2 kn force step, but its contribution is expected to be much smaller than the amplifier uncertainty. to reduce the effect of the amplifier calibration uncertainty, and to keep the low uncertainty of the measuring-system calibration also at the lower values of the range, an additional evaluation of the amplifier can be performed by determining the linearity of the amplifier (its deviation from a straight-line response) with low uncertainty. by calibrating the amplifier with a calibrated simulator at one reference value at higher ratio values, where the relative uncertainty is adequate, and using an alternative linearity-check method with low uncertainty to verify the rest of the range, the measurement uncertainty in the lower range of the measuring system can be improved.

3. measuring equipment and procedure

3.1. combinatorial calibration method

to solve the problem of evaluating measuring instruments when the calibration uncertainty of the reference standards is too high, a calibration method that does not require traceably calibrated equipment can be applied [6]. this method, also called the combinatorial method, has been successfully applied over the last 20 years in many different fields, e.g. thermometry bridge calibration [7]. the combinatorial calibration method is based on measuring a set of artefacts: each individual artefact separately, and also all possible combinations of these artefacts. from the deviations between the measured values of the combinations of artefacts and the results calculated from the same combinations, the non-linearity of the system can be estimated; additionally, if one artefact is calibrated, a full calibration can also be performed.

3.2. circuit for the combinatorial method

to apply the combinatorial method to the evaluation of measuring amplifiers, a suitable set of artefacts is necessary. a circuit fulfilling the requirements, which can also act as a strain gauge bridge simulator, is presented in [8]; a general schematic of the circuit employed for the linearity determination is shown in figure 6. the circuit is based on a variable resistive voltage divider composed of base resistors, with additional resistor networks connected to the output leads of the divider to reduce the variation of the output resistance of the circuit. the output ratio is varied by tapping the output over a single base resistor or over a combination of consecutive base resistors. in the case of the circuit in figure 6, the set of artefacts is defined by the four base resistors which form part of the voltage-divider network; extending the circuit to eight base resistors allows measurements of 36 non-zero output combinations, from which the estimate of the linearity error can be calculated.
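the data processing behind this combinatorial evaluation is easy to simulate. the sketch below uses synthetic resistor errors and a hypothetical smooth amplifier non-linearity, and mirrors only the least-squares idea described in section 3.3, not the actual circuit of [8]:

```python
import numpy as np

rng = np.random.default_rng(1)

# eight base resistors give 36 taps over runs of consecutive resistors
nominal = np.full(8, 2.5 / 8)                   # nominal ratios in mV/V
base = nominal * (1 + rng.normal(0, 1e-4, 8))   # unknown resistor errors
combos = [(i, j) for i in range(8) for j in range(i + 1, 9)]
a = np.zeros((len(combos), 8))
for k, (i, j) in enumerate(combos):
    a[k, i:j] = 1.0                             # tap spans resistors i..j-1

true = a @ base
nonlin = 2e-6 * np.sin(np.pi * true / 2.5)      # hypothetical amplifier error
reading = true + nonlin + rng.normal(0, 5e-7, len(true))

# least-squares estimate of the base ratios from all 36 readings;
# the deviation pattern as a function of the reading carries the
# non-linearity, and the residuals of its fit estimate the uncertainty
est, *_ = np.linalg.lstsq(a, reading, rcond=None)
deviation = reading - a @ est
corr = np.polynomial.polynomial.polyfit(reading, deviation, 3)
residual = deviation - np.polynomial.polynomial.polyval(reading, corr)
print("rms residual [mV/V]:", residual.std())
```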
the output ratios of the divider, as well as the input and output resistance of the circuit, can be adapted to meet the requirements. the actual circuit built to check the linearity of the amplifier was designed for a 350 ω input and output resistance and a 2.5 mv/v nominal output ratio; with appropriately selected values for the eight base resistors it can cover the range from about 0.04 mv/v to 2.5 mv/v.

figure 6. schematic of the resistive voltage-divider circuit for the application of the combinatorial method to amplifier linearity determination.

3.3. linearity determination

even without a calibration of the resistors, the circuit can be applied for the determination of the linearity of measuring amplifiers. the circuit is connected to the amplifier in place of the transducer, and the indication of the amplifier is recorded for each available output combination. when the ratio values for all combinations have been measured, the deviations between the measured sums of the selected resistors and the calculated sums of the selected resistors are determined. as the errors of the base resistors are not known, they are estimated by a least-squares fit, based on the error distribution over all measurements. a correction function is then calculated for the resulting deviation values, which represents the non-linearity, and the residuals of the fit serve as the standard-uncertainty estimate of the linearity measurement. when the circuit is applied in combination with the combinatorial calibration method, the resulting uncertainty of the linearity determination depends mainly on the quality of the measuring instrument; applied to high-precision amplifiers [9], it is possible to reach a standard uncertainty of the linearity determination in the range of 0.000002 mv/v or better [10].

3.4. reference point calibration

the result of the linearity determination alone is not enough to calibrate the amplifier: an additional traceable calibration must be performed at least at one non-zero ratio value to establish the absolute linear error of the amplifier indication. as the zero point is usually fixed by definition (by setting the indication to zero at the beginning of the measurement, or by later subtracting the indication at zero load), only one calibration point is necessary to define the slope, since the line is expected to go through zero. the uncertainty of the calibrated absolute value defines the uncertainty of the slope, i.e. of the linear error, and it scales in proportion to the measured value [6], [11]: for lower values it is proportionally lower and for higher values proportionally higher than at the reference point. it is beneficial to choose the largest possible absolute calibration point, as its relative uncertainty contribution will be constant and attributed to the whole range, therefore providing the lowest uncertainty in the lower part of the range (figure 7). for the purpose of the absolute calibration, traditional simulators can be employed, since, depending on the selected range, the simulator relative calibration uncertainty can be adequate for the required task. with the zero value set, the linearity correction and its uncertainty known, and one absolute calibration point for the slope, the instrument can be fully calibrated; at the expense of an increased uncertainty contribution, the calibration range can also be extended above the calibrated point.

figure 7. the uncertainty contribution of the absolute calibration point with respect to the selected calibration value.
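the proportional behaviour of the reference-point contribution, and its combination with the linearity-check uncertainty, can be tabulated in a few lines. a sketch using the uncertainty figures quoted in this paper (the combination rule anticipates eq. (2) of the next section):

```python
u_ref = 0.00001    # standard uncertainty at the 2 mV/V reference point, mV/V
u_lin = 0.000001   # linearity-check standard uncertainty, order of section 4
x_ref = 2.0        # mV/V, the absolute calibration point

for x in (0.04, 0.1, 0.5, 1.0, 2.0, 2.5):
    u_slope = u_ref * x / x_ref                # scales with the measured value
    u_c = (u_slope ** 2 + u_lin ** 2) ** 0.5   # combined standard uncertainty
    print(f"{x:4.2f} mV/V   u_c = {u_c:.7f} mV/V   w = {u_c / x:.1e}")
```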
4. results

the evaluation was performed on an hbm dmp41 high-precision 225 hz carrier-frequency amplifier. the selected parameters were the 2.5 mv/v range, 5 v excitation voltage and a 0.1 hz bessel filter. the expanded calibration uncertainty of the amplifier at 2 mv/v using a traditional simulator was u = 0.00002 mv/v, i.e. a relative uncertainty w = 1×10-5. in combination with the linearity determination, the calibration of the amplifier was performed over the whole range from 0.04 mv/v to 2.5 mv/v. figure 8 shows the result of the linearity determination with the combinatorial calibration method: the resulting deviations of the measurements of the 36 possible resistor combinations (black dots) and the fit of the linearity error (solid curve) with its estimated standard uncertainty. the calculated standard deviation of the residual errors is about 0.000001 mv/v, shown with solid error bars. the linearity check with a calibrated simulator hbm bn100a is also shown (dashed line with square markers), and the standard uncertainty of the linearity determination of the amplifier performed with the simulator is represented by the dashed uncertainty bars; for the comparison, the resulting curves are referenced at 0 mv/v and 2 mv/v. it can be seen that, while both measurements are in good agreement, the combinatorial method produces a much lower uncertainty of the linearity determination than the linearity measurement with a calibrated simulator.

figure 8. amplifier linearity-error determination with the combinatorial method (solid error bars) vs. calibration with a simulator (dashed error bars), referenced at 0 mv/v and 2 mv/v. error bars represent standard uncertainty.

the results of the combinatorial method are sufficient to characterise the non-linearity, but they do not provide enough information for the calibration of the amplifier, since the linear error is not known; an additional measurement at one ratio value is required to fully calibrate the instrument. if the absolute calibration at one point is made with u = 0.00002 mv/v expanded uncertainty, it will define the minimum uncertainty at that point; together with the uncertainty of the linearity determination, the total calibration uncertainty of the instrument can be calculated. figure 9 shows the standard uncertainty contributions for the combination of the absolute calibration of the amplifier at 2 mv/v and the linearity determination based on the data from figure 8. the dashed line represents the reference-point calibration standard uncertainty of u = 0.00001 mv/v; for a linear instrument error this value scales proportionally with the ratio value, its relative contribution remaining constant (dotted line). the second contribution is the standard uncertainty of the linearity determination (thin solid line) performed with the resistor circuit and the combinatorial method. the final combined standard uncertainty, shown by the thick solid line, is calculated according to (2), where uc is the combined standard uncertainty, usim_prop the proportional part of the standard uncertainty contribution of the absolute-point calibration, and ulin_check the standard uncertainty contribution of the linearity determination employing the combinatorial calibration method:

$$u_{\mathrm{c}} = \sqrt{u_{\mathrm{sim\_prop}}^{2} + u_{\mathrm{lin\_check}}^{2}}\,. \qquad (2)$$

figure 9. combined standard uncertainty of the absolute calibration at 2 mv/v (u = 0.00002 mv/v) and of the linearity determination for the whole range.

it can be seen in figure 9 that the dominant uncertainty contribution for most of the range is the one arising from the simulator calibration uncertainty.
compared to a calibration employing only the simulator, the uncertainty achieved with the proposed procedure is significantly reduced in the lower range of ratio values, and slightly increased for ratio values above the absolute calibration point. figure 10 shows the final calibration result of the amplifier using the combinatorial calibration method and one reference-point calibration with a calibrated simulator: the result of the linearity determination from figure 8 is now combined with the deviation established during the amplifier calibration at 2 mv/v, and the uncertainty of the linearity determination is enlarged by the proportional part of the reference-point calibration uncertainty.

figure 10. full calibration of the amplifier with the absolute calibration at 2 mv/v and the determined linearity for the whole range (solid error bars). the result of the traditional calibration is also shown (dashed error bars). both curves are expected to go through zero, as the indication is initially set to zero. error bars represent standard uncertainty.

in figure 11 the same measurement example from figure 9 is shown, but with the standard uncertainty contributions expressed as relative standard uncertainties. the relative calibration uncertainty of the simulator increases significantly for lower ratio values (dashed line); the proportional part of the absolute calibration uncertainty taken at 2 mv/v, when expressed as a relative uncertainty, is constant over the whole calibration range (dotted line); and the contribution of the linearity determination, expressed as a relative contribution, is shown by the thin solid line. the combined relative standard uncertainty w is below 1×10-5 for most of the range and below 2.5×10-5 for the whole evaluated range. again, the reduced uncertainty compared to employing only the simulator is evident at low ratio values, while above the 2 mv/v absolute reference point the uncertainty is slightly increased.

figure 11. combined relative standard uncertainty w of the absolute calibration at 2 mv/v (u = 0.00002 mv/v, i.e. w(2 mv/v) = 5×10-6) and the uncertainty of the linearity determination for the whole range.

in this paper only the calibration uncertainty of the amplifier calibration at one value and the uncertainty of the linearity check are considered; other contributions, such as the resolution of the instrument, the drift of the simulator ratio value and other possible contributions, are not explicitly taken into account.

5. conclusions

the results of the evaluation show an improvement in the calibration uncertainty of measuring amplifiers in comparison to a calibration based only on a traditional strain gauge simulator. by combining the traditional simulator-based calibration with a linearity determination based on the combinatorial calibration, an improved calibration in the lower part of the range can be achieved: with the presented method, the relative standard uncertainty w at 0.04 mv/v could be reduced from typical values of 2.5×10-4 to values below 2.5×10-5. the improved calibration uncertainty allows the separate calibration of transducers and amplifiers, and thus interchangeable transducer-amplifier combinations, while preserving calibration uncertainty levels acceptable for most scientific and industrial applications.

references

[1] r. vollmert, g. ramm, "realization, maintenance and dissemination of the measurand 'ac voltage ratio in mv/v' for strain gauge measurements", 18th imeko conference on force, mass and torque, celle, germany, 2002, pp. 145-150.
[2] calibration and measurement capabilities (cmcs), the bipm key comparison database, http://kcdb.bipm.org, bipm.
[3] d. schwind, t. hahn, "investigation of the influence of carrier frequency or direct current voltage in force calibrations", xix imeko world congress, lisbon, portugal, 2009, pp. 201-204.
[4] iso 376:2011, metallic materials - calibration of force-proving instruments used for the verification of uniaxial testing machines.
[5] calibration guide euramet cg-4, version 2.0 (03/2011): uncertainty of force measurements, euramet, 2011.
[6] d. r. white, m. t. clarkson, p. saunders, h. w. yoon, "a general technique for calibrating indicating instruments", metrologia, 45, 2008, pp. 199-210.
[7] d. r. white, "a method for calibrating resistance thermometry bridges", proceedings of tempmeko '96, torino, italy, 1997, pp. 129-134.
[8] m. hiti, "resistor network for linearity check of voltage ratio meters by combinatorial technique", meas. sci. technol., vol. 26, no. 5, 2015, pp. 1-7.
[9] m. m. schäck, "high-precision measurement of strain gauge transducers at the physical limit without any calibration interruption", imeko 22nd tc3, 12th tc5 and 3rd tc22 international conferences, cape town, republic of south africa, 2014, pp. 118-123.
[10] m. hiti, "alternative calibration procedure for strain gauge amplifiers", xxi imeko world congress 2015, prague, czech republic, 2015.
[11] m. borys, r. schwartz, a. reichmuth, r. nater, fundamentals of mass determination, springer-verlag, berlin heidelberg, 2012, isbn 978-3-642-11937-8.
acta imeko, december 2014, volume 3, number 4, 38-45, www.imeko.org

a method for assessing multivariate measurement systems

michele scagliarini

department of statistical sciences, university of bologna, via belle arti 41, 40126 bologna, italy

section: research paper
keywords: covariance matrices; eigenvalues; gauge study; manova; wishart distribution
citation: michele scagliarini, a method for assessing multivariate measurement systems, acta imeko, vol. 3, no. 4, article 8, december 2014, identifier: imeko-acta-03 (2014)-04-08
editor: paolo carbone, university of perugia
received december 16th, 2013; in final form december 16th, 2013; published december 2014
copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was supported by a grant from the university of bologna
corresponding author: michele scagliarini, e-mail: michele.scagliarini@unibo.it

abstract: multivariate measurement systems analysis is usually performed by designing suitable gauge r&r experiments, ignoring the data generated by the measurement system while it is used for inspection or process control. this article proposes an approach that, using the data routinely available from the regular activity of the instrument, offers the possibility of assessing multivariate measurement systems without the necessity of performing a multivariate gauge study. it can be carried out more frequently than a multivariate gauge r&r experiment, since it can be implemented at almost no additional cost. the synergic use of the proposed approach and of the traditional multivariate gauge r&r studies can therefore be a useful strategy for improving the overall quality of multivariate measurement systems, and is effective for reducing the costs of a multivariate msa performed with a certain frequency.

1. introduction

in a manufacturing environment, critical decisions about process and product quality depend on the quality of the measurement systems. measurement systems analysis (msa) is a set of statistical techniques used to quantify the uncertainty of measurement instruments; [1] and [2] provide a review of gauge repeatability and reproducibility (r&r) methods for assessing the precision of measurement systems. in the case of univariate measurement systems several msa-approval metrics are commonly used; for an overview of this topic we suggest [3] and [4]. in the current manufacturing industry, processes are often characterized by many important characteristics.
accordingly, [5] proposed multivariate extensions of three commonly used msa-approval criteria, using the volume of constant-density contours to characterize the variability of the measurement system. these multivariate msa-metrics require a multivariate analysis of variance (manova) for estimating the covariance matrices in one-factor and two-factor gauge studies. in order to ensure constant flows of reliable data, manufacturers should periodically assess their measurement systems, and the costs involved in maintaining well-performing measurement systems are normally relevant; this issue motivates the present work. multivariate measurement systems analysis is usually performed by designing suitable gauge (r&r) experiments, ignoring the available data generated by the measurement system while it is used for inspection or process control. in the recent literature, the use of these measurements from the regular use of the instrument has been suggested for univariate msa studies (see e.g. [6]). here we propose the following approach: in the initial set-up, after the multivariate measurement instrument has been assessed as adequate, its performances are assumed as a benchmark; afterwards, using the data from the regular activity of the instrument, the periodic assessments of the measurement device are performed by comparing the present precision with the benchmark through a statistical test. since the proposed method does not require a multivariate gauge study, our proposal can be a useful tool for reducing the costs of a multivariate msa carried out with a certain frequency. the outline of the paper is as follows: the next section introduces the multivariate measurement-error model, describes the multivariate msa-approval criteria proposed in the recent literature and explains the multivariate analysis of variance (manova) method for estimating the covariance matrices of interest; section 3 develops the test for assessing the multivariate measurement instruments; section 4 studies the performances of the proposed method; finally, the last section contains a discussion and the conclusions.

2. multivariate measurement system analysis

let

$$\mathbf{x}' = (x_1, x_2, \ldots, x_m) \qquad (1)$$

represent the vector of m quality characteristics, with mean vector $\boldsymbol{\mu}$ and positive-definite covariance matrix $\boldsymbol{\Sigma}$. we assume that the multivariate process data come from a multivariate normal distribution.
let us denote by

$$\mathbf{lsl}' = (lsl_1, lsl_2, \ldots, lsl_m), \qquad (2)$$

$$\mathbf{usl}' = (usl_1, usl_2, \ldots, usl_m) \qquad (3)$$

and

$$\mathbf{t}' = (t_1, t_2, \ldots, t_m) \qquad (4)$$

the m-vectors of the lower specification limits, the upper specification limits and the target values, respectively. the msa methodology assumes the model

$$\mathbf{y} = \mathbf{x} + \mathbf{e}, \qquad (5)$$

where y is the vector of the observable quality characteristics, which is usually obtained from some physical measurement, x is the vector of the true quality characteristics and e is the multivariate measurement-error vector. it is assumed that

$$\mathbf{e} \sim n(\mathbf{0}, \boldsymbol{\Sigma}_e), \qquad (6)$$

with $\boldsymbol{\Sigma}_e$ positive definite, and that x and e are independent. as a result,

$$\mathbf{y} \sim n(\boldsymbol{\mu}, \boldsymbol{\Sigma}_y), \qquad (7)$$

where $\boldsymbol{\Sigma}_y = \boldsymbol{\Sigma} + \boldsymbol{\Sigma}_e$. let us denote by $\lambda_i$, $\lambda_{ei}$ and $\lambda_{yi}$ (i = 1, 2, ..., m) the eigenvalues of $\boldsymbol{\Sigma}$, $\boldsymbol{\Sigma}_e$ and $\boldsymbol{\Sigma}_y$, respectively. in the multivariate framework, [5] developed multivariate versions of three univariate gauge-approval criteria, proposing the following statistics. the multivariate precision-to-tolerance ratio is defined as the m-th root of the ratio between the (1-α)100 % volume of the multivariate error distribution and the volume of the tolerance region. according to the specification of the tolerance region, this ratio simplifies to

$$p/t_{1m} = \left[ \frac{\pi^{m/2} \left(\chi^2_{\alpha,m}\right)^{m/2} \prod_{i=1}^{m} \lambda_{ei}^{1/2}}{\Gamma\!\left(\tfrac{m}{2}+1\right) \prod_{i=1}^{m} tol_i} \right]^{1/m} \qquad (8)$$

when a hypercube-shaped tolerance region is used, and to

$$p/t_{2m} = \left[ \prod_{i=1}^{m} \frac{2\sqrt{\chi^2_{\alpha,m}\,\lambda_{ei}}}{tol_i} \right]^{1/m} \qquad (9)$$

for the case of a hyperellipsoid-shaped tolerance region. in the above equations Γ(·) is the gamma function, $tol_i = usl_i - lsl_i$, and $\chi^2_{\alpha,m}$ is the 100(1-α)-th percentile of the $\chi^2$ distribution with m degrees of freedom, with (1-α) usually fixed at 0.99. the p/t1m and p/t2m criteria therefore compare the multivariate instrument variability, computed on the basis of the constant-density contour ellipsoid, with the multivariate tolerance region (hypercube or hyperellipsoid). the multivariate percent r&r ratio is defined by taking the m-th root of the ratio between the (1-α)100 % volumes of the gauge-error distribution and of the measured-values distribution; it simplifies to

$$\%r\&r_m = 100 \left[ \prod_{i=1}^{m} \left( \frac{\lambda_{ei}}{\lambda_{yi}} \right)^{1/2} \right]^{1/m} \qquad (10)$$

and expresses the relative widths of the multivariate distributions of the error e and of the measured values y. the third multivariate approval metric is the multivariate signal-to-noise ratio, which compares the (1-α)100 % volume of the distribution of the true quality characteristics with the corresponding volume of the gauge-error distribution:

$$snr_m = \left[ \prod_{i=1}^{m} \left( \frac{2\lambda_i}{\lambda_{ei}} \right)^{1/2} \right]^{1/m}. \qquad (11)$$

the author in [5] also gave guidelines for gauge acceptance: approval values for p/t1m and p/t2m range from 0 to 0.3, %r&rm should be ≤ 30 %, while, based on snrm, a measurement system is adequate when snrm ≥ 5.
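in terms of the eigenvalues, the four criteria reduce to a few lines of arithmetic. a minimal sketch following eqs. (8)-(11) as written above, with toy matrices and α = 0.01 (this is an illustration, not the authors' code):

```python
import numpy as np
from math import gamma, pi
from scipy.stats import chi2

def msa_metrics(sigma, sigma_e, tol, alpha=0.01):
    """p/t_1m, p/t_2m, %r&r_m and snr_m following eqs. (8)-(11)."""
    m = sigma.shape[0]
    lam = np.linalg.eigvalsh(sigma)          # eigenvalues of sigma
    lam_e = np.linalg.eigvalsh(sigma_e)      # eigenvalues of sigma_e
    lam_y = np.linalg.eigvalsh(sigma + sigma_e)
    c = chi2.ppf(1 - alpha, m)
    vol_e = pi ** (m / 2) / gamma(m / 2 + 1) * c ** (m / 2) * np.sqrt(lam_e).prod()
    pt1 = (vol_e / tol.prod()) ** (1 / m)                   # hypercube region
    pt2 = np.prod(2 * np.sqrt(c * lam_e) / tol) ** (1 / m)  # ellipsoid region
    rr = 100 * np.sqrt(lam_e / lam_y).prod() ** (1 / m)
    snr = np.sqrt(2 * lam / lam_e).prod() ** (1 / m)
    return pt1, pt2, rr, snr

sigma = np.diag([0.25, 0.09])        # toy 2-characteristic process
sigma_e = np.diag([0.0004, 0.0009])  # toy gauge error
tol = np.array([4.0, 3.0])           # usl - lsl for each characteristic
print(msa_metrics(sigma, sigma_e, tol))
```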
2.1. multivariate msa in practice

the covariance matrices $\boldsymbol{\Sigma}$, $\boldsymbol{\Sigma}_e$ and $\boldsymbol{\Sigma}_y$ are usually unknown; for this reason [5] also proposed a multivariate analysis of variance (manova) method for estimating the covariance matrices in one-factor and two-factor gauge studies. according to the adopted notation, $\mathbf{y}' = (y_1, y_2, \ldots, y_m)$ represents the measured values, i.e. the data generated by the gauge. let us consider a random-effects manova model [7] in which the factors in question are all m-dimensional vectors: $\mathbf{p}_i$ (i = 1, 2, ..., p) for the part, $\mathbf{o}_j$ (j = 1, 2, ..., o) for the operator and $\mathbf{po}_{ij}$ for the part-operator interaction. the error term is denoted by $\boldsymbol{\varepsilon}_{ijk}$, where k = 1, 2, ..., r indicates the repeated readings of the same part by the same operator. thus $\mathbf{y}_{ijk}$ is the m-vector containing the k-th reading, by operator j, of the i-th part for the m quality characteristics. in the two-factor gauge study the manova model is

$$\mathbf{y}_{ijk} = \boldsymbol{\mu} + \mathbf{p}_i + \mathbf{o}_j + \mathbf{po}_{ij} + \boldsymbol{\varepsilon}_{ijk}, \qquad (12)$$

where the random components $\mathbf{p}_i$, $\mathbf{o}_j$, $\mathbf{po}_{ij}$ and $\boldsymbol{\varepsilon}_{ijk}$ are mutually independent with distributions $\mathbf{p}_i \sim n(\boldsymbol{\mu}, \boldsymbol{\Sigma}_p)$, $\mathbf{o}_j \sim n(\mathbf{0}, \boldsymbol{\Sigma}_o)$, $\mathbf{po}_{ij} \sim n(\mathbf{0}, \boldsymbol{\Sigma}_{po})$ and $\boldsymbol{\varepsilon}_{ijk} \sim n(\mathbf{0}, \boldsymbol{\Sigma}_\varepsilon)$, respectively. within this framework,

$$\mathbf{msp} = \frac{or}{p-1} \sum_{i=1}^{p} (\bar{\mathbf{y}}_{i..} - \bar{\mathbf{y}}_{...})(\bar{\mathbf{y}}_{i..} - \bar{\mathbf{y}}_{...})' \qquad (13)$$

is the mean-square matrix for the parts, where $\bar{\mathbf{y}}_{i..} = \frac{1}{or} \sum_{j=1}^{o} \sum_{k=1}^{r} \mathbf{y}_{ijk}$ and $\bar{\mathbf{y}}_{...} = \frac{1}{por} \sum_{i=1}^{p} \sum_{j=1}^{o} \sum_{k=1}^{r} \mathbf{y}_{ijk}$. the mean-square matrix for the operators is

$$\mathbf{mso} = \frac{pr}{o-1} \sum_{j=1}^{o} (\bar{\mathbf{y}}_{.j.} - \bar{\mathbf{y}}_{...})(\bar{\mathbf{y}}_{.j.} - \bar{\mathbf{y}}_{...})', \qquad (14)$$

where $\bar{\mathbf{y}}_{.j.} = \frac{1}{pr} \sum_{i=1}^{p} \sum_{k=1}^{r} \mathbf{y}_{ijk}$. the mean-square matrix for the part-operator interaction is given by

$$\mathbf{mspo} = \frac{r}{(p-1)(o-1)} \sum_{i=1}^{p} \sum_{j=1}^{o} (\bar{\mathbf{y}}_{ij.} - \bar{\mathbf{y}}_{i..} - \bar{\mathbf{y}}_{.j.} + \bar{\mathbf{y}}_{...})(\bar{\mathbf{y}}_{ij.} - \bar{\mathbf{y}}_{i..} - \bar{\mathbf{y}}_{.j.} + \bar{\mathbf{y}}_{...})' \qquad (15)$$

and the mse matrix is

$$\mathbf{mse} = \frac{1}{po(r-1)} \sum_{i=1}^{p} \sum_{j=1}^{o} \sum_{k=1}^{r} (\mathbf{y}_{ijk} - \bar{\mathbf{y}}_{ij.})(\mathbf{y}_{ijk} - \bar{\mathbf{y}}_{ij.})'. \qquad (16)$$

the covariance matrices are estimated using the expected mean squares. the parts covariance matrix is estimated by

$$\hat{\boldsymbol{\Sigma}}_p = \frac{\mathbf{msp} - \mathbf{mspo}}{or}, \qquad (17)$$

the operator-factor covariance matrix by

$$\hat{\boldsymbol{\Sigma}}_o = \frac{\mathbf{mso} - \mathbf{mspo}}{pr}, \qquad (18)$$

the part-operator interaction covariance matrix by

$$\hat{\boldsymbol{\Sigma}}_{po} = \frac{\mathbf{mspo} - \mathbf{mse}}{r} \qquad (19)$$

and the covariance matrix of the error terms $\boldsymbol{\varepsilon}_{ijk}$ by

$$\hat{\boldsymbol{\Sigma}}_\varepsilon = \mathbf{mse}. \qquad (20)$$

adopting the gauge r&r notation, $\boldsymbol{\Sigma}_p$ corresponds to $\boldsymbol{\Sigma}$, the covariance matrix of the quality characteristics. repeatability and reproducibility are given by $\boldsymbol{\Sigma}_\varepsilon$ and $\boldsymbol{\Sigma}_o + \boldsymbol{\Sigma}_{po}$, respectively, and their sum gives the gauge measurement-error covariance matrix

$$\boldsymbol{\Sigma}_e = \boldsymbol{\Sigma}_\varepsilon + \boldsymbol{\Sigma}_o + \boldsymbol{\Sigma}_{po}. \qquad (21)$$

therefore, the estimators of the covariance matrices of interest are

$$\hat{\boldsymbol{\Sigma}} = \hat{\boldsymbol{\Sigma}}_p, \qquad (22)$$

$$\hat{\boldsymbol{\Sigma}}_e = \hat{\boldsymbol{\Sigma}}_\varepsilon + \hat{\boldsymbol{\Sigma}}_o + \hat{\boldsymbol{\Sigma}}_{po} \qquad (23)$$

and

$$\hat{\boldsymbol{\Sigma}}_y = \hat{\boldsymbol{\Sigma}} + \hat{\boldsymbol{\Sigma}}_e. \qquad (24)$$

3. a test for multivariate measurement systems

the multivariate msa-approval criteria described in the previous section are based on constant-density contours of the multivariate normal distribution. a change in the precision of a measurement instrument causes a change in the corresponding ellipsoid of constant density; since we are interested in the detection of a worsening of the measurement instrument precision, we focus on significant reductions of the ellipsoid coverage. let us suppose that at the beginning of the manufacturing activity, which for notational purposes we denote as time t = 0, a multivariate msa is performed and the measurement instrument is assessed as adequate. we denote by $\boldsymbol{\Sigma}_{e0}$ the precision of the measurement instrument, by $\boldsymbol{\Sigma}_0$ the covariance matrix of the true quality characteristics and by $\boldsymbol{\Sigma}_{y0} = \boldsymbol{\Sigma}_0 + \boldsymbol{\Sigma}_{e0}$ the covariance matrix of the measurements at time t = 0. assuming multivariate normality, an ellipsoid of constant density is defined by

$$u_0 = \{\mathbf{y} : (\mathbf{y} - \boldsymbol{\mu})' \boldsymbol{\Sigma}_{y0}^{-1} (\mathbf{y} - \boldsymbol{\mu}) = c\}. \qquad (25)$$

if we assume $c = \chi^2_{\alpha,m}$, then $u_0$ is the boundary of the multivariate region within which 100(1-α) % of the process falls. after the initial set-up, the measurement device is usually used for inspection or process control, generating a lot of data at no additional cost. let us consider a time interval in which the instrument has been routinely used: at time t = t the measurement instrument is characterized by a precision $\boldsymbol{\Sigma}_{et}$.
3. A test for multivariate measurement systems

The multivariate MSA-approval criteria described in the previous section are based on constant-density contours of the multivariate normal distribution. A change in the precision of a measurement instrument will cause a change in the corresponding ellipsoid of constant density. Therefore, since we are interested in detecting a worsening of the measurement instrument precision, we will focus on significant reductions of the ellipsoid coverage. Let us suppose that at the beginning of the manufacturing activity, which for notation purposes we denote as time t = 0, a multivariate MSA is performed and the measurement instrument is assessed as adequate. We denote with Σ_e0 the precision of the measurement instrument, with Σ_0 the covariance matrix of the true quality characteristics and with Σ_y0 = Σ_0 + Σ_e0 the covariance matrix of the measurements at time t = 0. Assuming multivariate normality, an ellipsoid of constant density is defined by

U_0 = \{\mathbf{y} : (\mathbf{y}-\boldsymbol{\mu})' \Sigma_{y0}^{-1} (\mathbf{y}-\boldsymbol{\mu}) = c\} . (25)

If we assume c = χ²_{α,m}, then U_0 is the boundary of the multivariate region in which 100(1-α) % of the process falls. After the initial set-up, the measurement device is usually used for inspection or process control, generating a lot of data at no additional cost. Let us consider a time interval in which the instrument has been routinely used. At time t = T the measurement instrument is characterized by a precision Σ_eT.

Usually, the process variability is monitored by a suitable control chart, thus if no out-of-control signals occur we can assume the stability of the process, i.e. Σ_0 = Σ_T. Under these assumptions the variability of y is

\Sigma_{yT} = \Sigma_0 + \Sigma_{eT} . (26)

In this framework, differences in the variability of the observed measures are caused only by changes in the precision, since Σ_yT = Σ_y0 if and only if Σ_eT = Σ_e0. At time t = T, the points y ∈ U_0 define the quadratic form

q_T(\mathbf{y}) = (\mathbf{y}-\boldsymbol{\mu})' \Sigma_{yT}^{-1} (\mathbf{y}-\boldsymbol{\mu}) = (\mathbf{y}-\boldsymbol{\mu})' (\Sigma_0 + \Sigma_{eT})^{-1} (\mathbf{y}-\boldsymbol{\mu}) . (27)

Under the normality assumption, q_T(y) is distributed as a χ² with m degrees of freedom and has several useful properties. On U_0, q_T(y) is not constant and q_T(y) ≤ c, with equality only when Σ_yT = Σ_y0, i.e. Σ_eT = Σ_e0. The minimum and maximum values of q_T(y) can be determined analytically. Using results from [8] we find

\max q_T(\mathbf{y}) = c \cdot (\text{maximum eigenvalue of } \Gamma) (28)

\min q_T(\mathbf{y}) = c \cdot (\text{minimum eigenvalue of } \Gamma) (29)

where Γ = Σ_yT^{-1} Σ_y0. Therefore, if at time t = T the measurement instrument is worse than at time t = 0, then the points of U_0 define ellipsoids with coverage ranging from

p_{\min} = \Pr\left(\chi^2_m \le \min q_T(\mathbf{y})\right) (30)

to

p_{\max} = \Pr\left(\chi^2_m \le \max q_T(\mathbf{y})\right) . (31)

The difference between (1-α), the ellipsoid coverage at time t = 0, and p_min (or p_max), the ellipsoid coverage at time t = T, quantifies the possible worsening of the instrument precision at instant T. A test for detecting the decrease of the coverage can be derived as shown below. Let us consider the null hypothesis that the instrument precision at instant t = T is equal to the precision at instant t = 0,

H_0: \Sigma_{eT} = \Sigma_{e0} . (32)

If H_0 holds, then Σ_yT = Σ_y0, Γ = I, q_T(y) = c on U_0 and therefore the ellipsoid coverage is p_min = 1-α. Let the alternative hypothesis be

H_1: \Sigma_{eT} - \Sigma_{e0} \text{ is positive definite.} (33)

If H_1 holds, then q_T(y) < c, hence the ellipsoid coverage is smaller than 1-α: p_min < 1-α. Let

\nu_1 \ge \nu_2 \ge \ldots \ge \nu_m (34)

be the eigenvalues of the matrix Γ. Under hypothesis H_1 we have

\nu_m \le \ldots \le \nu_1 < 1 (35)

and the (upper) limit 1 is reached only under H_0. Let ω_1 be the largest eigenvalue of the matrix

\Omega = \Gamma^{-1} = \Sigma_{y0}^{-1} \Sigma_{yT} . (36)

Then, when H_1 holds,

1 \le \omega_m \le \ldots \le \omega_1 (37)

with ω_1 = 1 only under H_0. Let S be the sample covariance matrix of a random sample of size n from y at time T. If H_0 holds, then

(n-1)\, \Sigma_{y0}^{-1/2}\, \mathbf{S}\, \Sigma_{y0}^{-1/2} \sim W(\mathbf{I}, n-1) (38)

where W(I, n-1) denotes a Wishart distribution with parameters I and n-1. From [8] we have that the matrices Σ_y0^{-1/2} S Σ_y0^{-1/2} and Σ_y0^{-1} S have the same eigenvalues; furthermore, the matrices Σ_y0^{-1} S and S Σ_y0^{-1} also have the same eigenvalues. Therefore, if we denote with ω̂_1 the largest eigenvalue of the matrix Σ_y0^{-1} S, then H_0 is not rejected if and only if (n-1) ω̂_1 ≤ u_{1-α}, where u_{1-α} is the upper percentage point of the largest characteristic root of a Wishart matrix. The advantage of this method is that the measurement instrument is assessed by comparing its performance at instant T with that at instant 0, without the necessity of performing a multivariate gauge study (MANOVA): the sample covariance matrix S can be estimated using the data made available by the routine use of the measurement device, at no additional cost.
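The decision rule needs the upper percentage point u_{1-α} of the largest characteristic root of a W(I, n-1) matrix, which the case studies below obtain from the R function qWishartMax of the package RMTstat [10]. The Python sketch that follows approximates that critical value by Monte Carlo instead; this replacement, like the function name, is an assumption of the example and not the implementation used in the paper.

```python
import numpy as np
from scipy.stats import wishart

def precision_test(S, sigma_y0, n, alpha=0.05, n_mc=10_000, rng=None):
    """Test of H0: Sigma_eT = Sigma_e0 based on the statistic
    (n-1)*omega_1, where omega_1 is the largest eigenvalue of
    inv(Sigma_y0) @ S, cf. eq. (38)."""
    rng = np.random.default_rng() if rng is None else rng
    m = S.shape[0]
    stat = (n - 1) * np.max(
        np.linalg.eigvals(np.linalg.solve(sigma_y0, S)).real)
    # Monte Carlo null distribution of the largest root of W(I, n-1)
    draws = wishart(df=n - 1, scale=np.eye(m)).rvs(size=n_mc,
                                                   random_state=rng)
    max_roots = np.array([np.linalg.eigvalsh(w).max() for w in draws])
    crit = np.quantile(max_roots, 1 - alpha)
    return stat, crit, stat > crit   # True in the last slot rejects H0
```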
4. Case studies

In this section we discuss the ability of the test to detect a worsening of the measurement instrument performance. Before entering into the details of the case studies, it is useful to recall that the multivariate MSA metrics are designed around different ways of assessing the measurement precision. The P/T_{1m} and P/T_{2m} criteria compare the multivariate instrument variability with the multivariate tolerance region (hypercube or hyperellipsoid). The remaining metrics do not involve the tolerances: %R&R_m expresses the relative widths of the multivariate distributions of the errors e and of the measured values y, while SNR_m compares the width of the multivariate distribution of the true quality characteristics x with the corresponding volume of the multivariate errors e. Since the test in question does not involve the tolerance regions, we expect a behaviour similar to those of %R&R_m and SNR_m; for this reason we shall compare the outcomes of the test only with the MSA metrics %R&R_m and SNR_m.

We consider as the situation at time t = 0 (the benchmark) the case discussed by [5]; we then examine a variety of worsening-precision scenarios at time t = T. For each of the proposed scenarios, we compute the multivariate MSA-approval metrics presented in Section 2 and we design suitable simulation experiments for studying the performance of the proposed test.

Let us therefore consider the case discussed by [5], where the data come from an automotive body-panel gauge study involving m = 4 quality characteristics, with p = 5 parts, o = 2 operators and r = 3 repeated measurements (see Table 1 in [5] for the original data). Using the two-factor MANOVA method, the estimated matrices are

\hat{\Sigma} = \begin{bmatrix} 0.01811 & 0.01600 & 0.02180 & 0.00763 \\ 0.01600 & 0.25163 & 0.15732 & 0.35463 \\ 0.02180 & 0.15732 & 0.20856 & 0.39249 \\ 0.00763 & 0.35463 & 0.39249 & 0.98631 \end{bmatrix}

\hat{\Sigma}_e = \begin{bmatrix} 0.00094 & 0.00168 & 0.00141 & 0.00189 \\ 0.00168 & 0.00632 & 0.00475 & 0.00702 \\ 0.00141 & 0.00475 & 0.00486 & 0.00581 \\ 0.00189 & 0.00702 & 0.00581 & 0.00852 \end{bmatrix}

and

\hat{\Sigma}_y = \begin{bmatrix} 0.01905 & 0.01768 & 0.02321 & 0.00574 \\ 0.01768 & 0.25795 & 0.16207 & 0.36165 \\ 0.02321 & 0.16207 & 0.21342 & 0.39830 \\ 0.00574 & 0.36165 & 0.39830 & 0.99483 \end{bmatrix} .

The eigenvalues of the covariance matrices Σ̂, Σ̂_e and Σ̂_y are reported in Table 1. Using equations (10) and (11) we obtain %R&R_m = 12.26061 and SNR_m = 11.30385, respectively. The results show that the measurement instrument is assessed as acceptable by both multivariate criteria; therefore it can be used in the manufacturing process and, for our purposes, we can assume Σ̂_e = Σ_e0, Σ̂ = Σ_0 and Σ̂_y = Σ_y0.

We now examine several scenarios in which a realistic worsening of the measurement instrument after a period of use is considered. We base this discussion on the spectral decomposition of Σ_e0,

\Sigma_{e0} = U_{e0} D_{e0} U_{e0}' (39)

where U_{e0} = (u_{e01}, u_{e02}, ..., u_{e0m}) is the matrix of eigenvectors with columns u_{e0i} (i = 1, 2, ..., m), and D_{e0} = diag(λ_{e01}, λ_{e02}, ..., λ_{e0m}) is the diagonal matrix of the eigenvalues. The diagonal matrix D_{e0} is the covariance matrix of the latent independent factors that represent the primary independent sources of variability introduced by the instrument at time t = 0. The instrument after a period of use is characterised by a measurement error covariance matrix

\Sigma_{eT} = U_{eT} D_{eT} U_{eT}' (40)

where U_{eT} = (u_{eT1}, u_{eT2}, ..., u_{eTm}) with columns u_{eTi} (i = 1, 2, ..., m) and D_{eT} = diag(λ_{eT1}, λ_{eT2}, ..., λ_{eTm}).
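As a numerical check, the reported values of %R&R_m and SNR_m can be reproduced from the Table 1 eigenvalues, and the spectral decomposition (39)-(40) gives a direct recipe for building the worsened error covariance matrices used in the case studies. The sketch below is illustrative only; small discrepancies in the last digits come from the rounding of the published eigenvalues.

```python
import numpy as np

# Eigenvalues from Table 1 (as published)
lam   = np.array([1.29428, 0.11185, 0.05438, 0.00410])   # of Sigma-hat
lam_e = np.array([0.01908, 0.00081, 0.00050, 0.00025])   # of Sigma_e-hat
lam_y = np.array([1.311189, 0.11392, 0.05557, 0.00457])  # of Sigma_y-hat
m = 4

print(100 * np.prod(lam_e / lam_y) ** (1 / (2 * m)))   # ~12.26, eq. (10)
print(np.prod(2 * lam / lam_e) ** (1 / (2 * m)))       # ~11.31, eq. (11)

def worsened_sigma_et(sigma_e0, new_eigenvalues):
    """Sigma_eT = U_e0 D_eT U_e0' (eq. (40)): the eigenvectors of
    Sigma_e0 are kept and only the eigenvalues are replaced. Note that
    eigh returns the eigenvalues in ascending order, so new_eigenvalues
    must follow the same ordering."""
    _, u = np.linalg.eigh(sigma_e0)   # spectral decomposition, eq. (39)
    return u @ np.diag(new_eigenvalues) @ u.T
```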
Many alternative cases for Σ_eT worse than Σ_e0 can be considered; however, Σ_eT cannot be obtained by changing the elements of Σ_e0 arbitrarily. It is realistic to examine cases for Σ_eT where the changes in variability are due to changes in the variance of the latent independent factors. In other words, the factors that cause the variability of the instrument at instant T remain the same as at instant 0, but with larger variance. This means that the cases for Σ_eT with practical meaning are those involving the eigenvalues, λ_eiT ≥ λ_e0i for i = 1, 2, ..., m, while keeping the eigenvectors unchanged (U_eT = U_e0). A change in the eigenvectors can be interpreted as the presence of serious problems in the instrument, such that the independent sources of variability become dependent. It is worth noting that this concept has been used by other authors as well; for instance, [9] used this definition of plausible changes in a process-capability-analysis framework. In what follows, we consider three cases for Σ_eT in which the eigenvectors remain unchanged; furthermore, for the sake of completeness, we also consider the case of a change in the eigenvectors.

4.1. Case 1

We first examine the case where the eigenvalues of Σ_e0 are increased by the same additive term δ:

D_{eT} = D_{e0} + \delta I . (41)

Note that this case is equivalent to adding the diagonal matrix δI directly to Σ_e0,

\Sigma_{eT} = \Sigma_{e0} + \delta I , (42)

since from the spectral decomposition of Σ_e0 we get

U_{e0}' (\Sigma_{e0} + \delta I) U_{e0} = U_{e0}' \Sigma_{e0} U_{e0} + \delta I = D_{e0} + \delta I = D_{eT} . (43)

Thus the eigenvectors remain unchanged, U_eT = U_e0, and the eigenvalues can be expressed as λ_eiT = λ_e0i + δ for i = 1, 2, ..., m. Within this case we consider scenarios where the worsening term δ ranges from 0 to 0.006 with a step of 0.0001. For each value of δ:

a) we compute the multivariate MSA approval criteria %R&R_m and SNR_m; the results are shown in Figure 1;

b) using the R software we generate 10^4 samples (n = 50, 75, 100, 150) from y ~ N(μ, Σ_0 + Σ_eT), where Σ_eT = Σ_e0 + δI. For each sample we compute the statistic (n-1) ω̂_1, where ω̂_1 is the largest eigenvalue of the matrix Σ_y0^{-1} S and S is the sample covariance matrix estimated from the sample. We then evaluate the power of the test by computing, for each value of δ and n, the proportion of rejections of H_0. Note that, having fixed α = 0.05, we used the function qWishartMax of the R package RMTstat [10] to compute the critical values of the test.

The simulation results are summarized in Figure 2 where, for each sample size, the rejection rates of hypothesis H_0 as a function of δ are reported. Examining the results, we note that SNR_m assesses the instrument as unacceptable for δ ≥ 0.0033, while the test concludes, with a power greater than 80 %, that the instrument at time t = T is worse than the instrument at time t = 0 for δ ≥ 0.00325 when the sample size is n = 75, and for δ ≥ 0.00415 when n = 50. Therefore, noting that %R&R_m evaluates the instrument as inadequate for δ ≥ 0.0059, we can conclude that in this case, for moderate sample sizes (n = 50 and n = 75), the performance of the test lies between the outcomes of the metrics SNR_m and %R&R_m.
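Step b) of the procedure can be mirrored as follows. The number of replications defaults to a value smaller than the 10^4 used in the paper to keep the sketch fast, and the critical value is again approximated by Monte Carlo instead of qWishartMax; both are assumptions of this example.

```python
import numpy as np
from scipy.stats import wishart

def case1_power(sigma_0, sigma_e0, delta, n, alpha=0.05,
                n_rep=1_000, n_mc=10_000, rng=None):
    """Estimated rejection rate of H0 under the Case 1 worsening
    Sigma_eT = Sigma_e0 + delta*I (eq. (42)). The mean is irrelevant
    for the covariance test, so the samples are drawn with mu = 0."""
    rng = np.random.default_rng() if rng is None else rng
    m = sigma_0.shape[0]
    sigma_y0 = sigma_0 + sigma_e0
    sigma_yt = sigma_y0 + delta * np.eye(m)
    # Critical value of (n-1)*omega_1 under H0, i.e. of W(I, n-1)
    roots = [np.linalg.eigvalsh(w).max()
             for w in wishart(df=n - 1, scale=np.eye(m)).rvs(
                 size=n_mc, random_state=rng)]
    crit = np.quantile(roots, 1 - alpha)
    rejections = 0
    for _ in range(n_rep):
        y = rng.multivariate_normal(np.zeros(m), sigma_yt, size=n)
        s = np.cov(y, rowvar=False)
        stat = (n - 1) * np.max(
            np.linalg.eigvals(np.linalg.solve(sigma_y0, s)).real)
        rejections += stat > crit
    return rejections / n_rep
```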
σ̂ 1 2 3 4 1.29428 0.11185 0.05438 0.00410 ˆ eς 1e 2e 3e 4e 0.01908 0.00081 0.00050 0.00025 ˆ yς 1y 2y  3y  4y  1.311189 0.11392 0.05557 0.00457 acta imeko | www.imeko.org  december 2014 | volume 3 | number 4 | 43  note that (44) is equivalent to consider the measurement error covariance matrix of instrument at instant t to be proportional to the instrument precision at instant 0 0et eς σ (45) since we can write    0 1 0 0 0 0 0     e e e e e e e etu d u u d u σ σ . (46) also in this case, the eigenvectors do not change, 0 e etu u , and the eigenvalues are expressed as 0eti e i  , for i=1,2,…,m. we perform our study by considering scenarios where the worsening factor  ranges from 1 to 10 with a step of 0.1. for each value of  we proceed as for case 1: a) we compute the multivariate msa approval metrics, the results are displayed in figure 3; b) we generate for each sample size (n=50, 75, 100, 125), 104 samples from y   0,  etn μ σ σ where 0et eς σ . for each sample we compute the test statistic   1ˆ1n  and, fixed as before an -level of 5%, we evaluate the test power by computing the proportion of the 104 replications where 0h is rejected in favour of 1h . the rejection rates for hypothesis 0h as a function of  are shown in figure 4. in this case % & mr r assesses the measurement instrument as inadequate for  ≥7.4. the test concludes that the instrument at instant t is worse than the instrument m at instant 0, with a power greater than 80%, when δ≥7.2, δ≥6.2, and δ≥5.6 for n=75, n=100 and n=125 respectively. therefore, considering that using snr the instrument is unacceptable for  ≥5.2, we conclude that for sample sizes ranging from n=75 to n=125 the results of the test are among the outcomes of metrics msnr and % & mr r . 4.3. case 3  since in the previous cases we examined cases where all the eigenvalues simultaneously change, now we discuss the case where only several eigenvalues change their values. let us consider the scenario where three eigenvalues are increased by a factor  ranging from 1 to 20 with a step of 0.25:  01 02 03 04, , ,   et e e e ediagd (47) and the covariance matrix etς is given by 0 0et e et eς u d u . for each value of  we have followed the same procedure as for cases 1 and 2: the results are summarized in figures 5 and 6. the pattern of the results is similar to that observed for case 1. msnr assesses the instrument as unacceptable for  ≥9 and % & mr r for  ≥16, while the test concludes, that the instrument at time t=t is worse than the instrument at time 0, with a power greater than 80%, when  ≥8 and  ≥10 for n=75 and n=50 respectively. figure 4. h0 rejection rates versus    for case 2.  figure 2. h0 rejection rates versus    for case 1.  figure 3. multivariate msa‐approval criteria as a function of    for case 2.  figure 1. multivariate msa‐approval criteria as a function of    for case 1.  acta imeko | www.imeko.org  december 2014 | volume 3 | number 4 | 44  4.4. case 4  finally, let us examine the situation where the variations involve also the eigenvectors, which can be interpreted as the presence of a serious problem in the measurement instrument. 
We examine the case where the first two diagonal elements of Σ_eT are increased by the factor δ, while the other matrix elements are equal to the corresponding elements of Σ_e0:

\Sigma_{eT} = \begin{bmatrix} \delta \cdot 0.00094 & 0.00168 & 0.00141 & 0.00189 \\ 0.00168 & \delta \cdot 0.00632 & 0.00475 & 0.00702 \\ 0.00141 & 0.00475 & 0.00486 & 0.00581 \\ 0.00189 & 0.00702 & 0.00581 & 0.00852 \end{bmatrix} . (48)

In the analysis the term δ ranges from 1 to 8 with a step of 0.1. Figure 7 shows the multivariate MSA metrics computed for each value of δ. The Monte Carlo experiment was performed as in the previous cases and the results are summarized in Figure 8. The results show that the test tends to be more sensitive to the increase of δ than %R&R_m and SNR_m. For instance, for n = 50 the power of the test is greater than 80 % for δ ≥ 6, while SNR_m evaluates the instrument as inadequate for δ ≥ 6.2. Note that in this case, although the multivariate metric %R&R_m detects the worsening of the measurement instrument, it assesses the instrument as acceptable for all the considered values of δ.

Summarizing, in the cases examined we aimed to study the performance of the test in realistic worsening scenarios of the measurement instrument after a period of use. The results show that the test provides outcomes with an appreciable level of agreement with those of %R&R_m and SNR_m.

Figure 5. Multivariate MSA-approval criteria as a function of δ for Case 3.
Figure 6. H0 rejection rates versus δ for Case 3.
Figure 7. Multivariate MSA-approval criteria as a function of δ for Case 4.
Figure 8. H0 rejection rates versus δ for Case 4.

5. Discussion and conclusions

As any activity involving personnel, materials, tools and equipment, MSA usually requires non-negligible financial support. Furthermore, the fact that these systems measure more than a single quality characteristic, and that periodic assessments of measurement system performance are often required, engages manufacturers in important challenges. In this work we have proposed a method which can serve as an additional tool for assessing the statistical properties of a multivariate measurement system. The method makes use of the data that are routinely available from the regular activity of the instrument and offers the possibility of assessing multivariate measurement systems without the necessity of performing a multivariate gauge study (MANOVA). Since the illustrated strategy can be implemented at almost no additional cost, it may be carried out more frequently than a MANOVA gauge study. Therefore, the synergic use of the proposed approach and of the traditional multivariate gauge R&R studies can be: effective for reducing the costs of a multivariate MSA performed with a certain frequency; a useful strategy for improving the overall quality of multivariate measurement systems.

Acknowledgement

The author would like to thank the two anonymous reviewers for their helpful comments.

References

[1] R. K. Burdick, C. M. Borror, D. C. Montgomery, A review of methods for measurement systems capability analysis, J. Qual. Technol. 35 (2003), pp. 342-354.
[2] R. K. Burdick, C. M. Borror, D. C. Montgomery, Design and Analysis of Gauge R&R Studies: Making Decisions with Confidence Intervals in Random and Mixed ANOVA Models, ASA-SIAM Series on Statistics and Applied Probability, SIAM, Philadelphia, ASA, Alexandria, 2005, ISBN 0-89871-588-1.
[3] Automotive Industry Action Group (AIAG), Measurement Systems Analysis, 4th ed.,
Southfield, MI, 2010, ISBN 9781605342115.
[4] W. H. Woodall, C. M. Borror, Some relationships between gage R&R criteria, Qual. Reliab. Eng. Int. 24 (2008), pp. 99-106.
[5] K. D. Majeske, Approval criteria for multivariate measurement systems, J. Qual. Technol. 40 (2008), pp. 140-153.
[6] O. Danila, S. H. Steiner, R. J. MacKay, Assessment of a binary measurement system in current use, J. Qual. Technol. 42 (2010), pp. 152-164.
[7] N. H. Timm, Applied Multivariate Analysis, Springer-Verlag, New York, 2002, ISBN 0-387-95347-7.
[8] K. V. Mardia, J. T. Kent, J. M. Bibby, Multivariate Analysis, Academic Press, San Diego, 1979, ISBN 9780124712522.
[9] I. González, I. Sánchez, Capability indices and nonconforming proportion in univariate and multivariate processes, Int. J. Adv. Manuf. Tech. 44 (2009), pp. 1036-1050.
[10] I. M. Johnstone, Z. Ma, P. O. Perry, M. Shahram, Package RMTstat: Distributions, statistics and tests derived from random matrix theory, 2009. http://cran.r-project.org/web/packages/RMTstat/


Electroencephalography correlates of fear of heights in a virtual reality environment

ACTA IMEKO
ISSN: 2221-870X
June 2023, Volume 12, Number 2, 1-7

Andrea Apicella1, Simone Barbato2, Luis Alberto Barradas Chacón3, Giovanni D'Errico4, Lucio Tommaso De Paolis5, Luigi Maffei6, Patrizia Massaro6, Giovanna Mastrati1, Nicola Moccaldi1, Andrea Pollastro1, Selina Christin Wriessenegger3

1 Department of Electrical Engineering and Information Technology, University of Naples Federico II, Naples, Italy
2 Idego Digital Psychology, Rome, Italy
3 Institute of Neural Engineering, Graz University of Technology, Graz, Austria
4 Department of Applied Science and Technology, Politecnico di Torino, Turin, Italy
5 Department of Engineering for Innovation, University of Salento, Lecce, Italy
6 NeuroAgain Neurological and Psychiatric Center, Benevento, Italy

Section: Research Paper

Keywords: electroencephalography; brain computer interfaces; fear of heights; virtual reality

Abstract: An electroencephalography (EEG)-based classification system of three levels of fear of heights is proposed. A virtual reality (VR) scenario representing a canyon was exploited to gradually expose the subjects to fear-inducing stimuli of increasing intensity. An elevating platform allowed the subjects to reach three different height levels. Psychometric tools were employed to initially assess the severity of fear of heights and to assess the effectiveness of fear induction. A feasibility study was conducted on eight subjects who underwent three experimental sessions. The EEG signals were acquired through a 32-channel headset during the exposure to the eliciting VR scenario. The main EEG bands and scalp regions were explored in order to identify which are the most affected by the fear of heights. As a result, the gamma band, followed by the high-beta band, and the frontal area of the scalp proved the most significant. The average accuracies in the within-subject case for the three-class fear classification task were computed. The frontal region of the scalp proved particularly relevant, and an average accuracy of (68.20 ± 11.60) % was achieved using as features the absolute powers in the five EEG bands. Considering the frontal region only, the most significant EEG bands were the high-beta and gamma bands, achieving accuracies of (57.90 ± 10.10) % and (61.30 ± 8.43) %, respectively. The sequential feature selection (SFS) confirmed those results by selecting, for the whole set of channels, the gamma band in 48.26 % of the cases and the high-beta band in 22.92 %, and by achieving an average accuracy of (86.10 ± 8.29) %.

Citation: Electroencephalography correlates of fear of heights in a virtual reality environment, Andrea Apicella, Simone Barbato, Luis Alberto Barradas Chacón, Giovanni D'Errico, Lucio Tommaso De Paolis, Luigi Maffei, Patrizia Massaro, Giovanna Mastrati, Nicola Moccaldi, Andrea Pollastro, Selina Christin Wriessenegger, Acta IMEKO, vol. 12, no. 2, article 9, June 2023, identifier: IMEKO-ACTA-12 (2023)-02-09

Section Editor: Paolo Carbone, University of Perugia, Italy
Received January 25, 2023; in final form February 21, 2023; published June 2023.
Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by the PhD grant "AR4ClinicSur - Augmented Reality for Clinical Surgery" (INPS National Social Security Institution, Italy).
Corresponding author: Giovanna Mastrati, e-mail: giovanna.mastrati@unina.it

1. Introduction

The expression fear of heights is often used as a synonym of acrophobia, a specific phobia according to the "Diagnostic and Statistical Manual of Mental Disorders" (DSM-V) [1].
A detailed definition is provided by the American Psychological Association (APA), that is, "an excessive, irrational fear of heights, resulting in the avoidance of elevations or marked distress when unable to avoid high places" [2]. Among specific phobias, the fear of heights is stated to have one of the highest lifetime prevalences, 6.8 % [1]. Fear of heights is characterized by some psychological symptoms: (i) feeling intense anxiety when thinking of or being in high places, (ii) thinking that something bad could happen in a high place, or (iii) the desire to escape from the high place. Physical symptoms involve: (i) an increased heartbeat, dizziness and daze when thinking of or being in high places, (ii) nausea, (iii) trembling, and (iv) shortness of breath. Very often, people suffering from fear of heights do not seek treatment for their condition, preferring to avoid the fearful situation. This can cause limitations in performing simple tasks of everyday life.

Behavioural therapy is mainly used for the treatment of acrophobia. At the basis of the treatment is the exposure of the patient to the anxiogenic stimulus: the subject is slowly encouraged to enter and cope with the anxious situation. The stimulus can be imagined (imaginal modality) or delivered for real (in-vivo modality) [3]. Among the exposure approaches, systematic desensitization combines exposure with relaxation techniques [4]; graded exposure, which has been shown to be superior to the former for the treatment of acrophobia [5], involves the gradual exposition of the subject to the phobic source in a managed and controllable environment, without the use of relaxation techniques [6].
For more than two decades, the literature and the clinical world have shown how the exposure treatment of acrophobia can make use of virtual reality (VR) [7], [8]. VR is a technology that immerses subjects in a computer-generated artificial environment offering a high degree of interactivity, sense of presence and multi-sensory stimulation [9]-[11]. This has led to virtual reality exposure therapy (VRET), which shows greater efficacy than imaginal exposure [12], especially for subjects with insufficient imaginative capacity [3], and which in several cases is comparable to in-vivo exposure [9]. VRET offers greater control to the therapist in the administration of the expository hierarchy and allows customisation to the needs of the subject by generating adjustable stimuli [13], [14]. VRET also provides a completely safe environment with respect to in-vivo exposure. To date, VRET-based treatments require the operator's intervention to appropriately modulate the phobic stimuli; therefore, the adaptation cannot take place in real time. In recent years, many studies have proposed therapeutic treatments involving extended reality (XR) which are able to adapt to the user on the basis of the real-time processing of biosignals [15]-[18]. Among the physiological signals (i.e., cerebral blood flow, electrooculographic signals, electrocardiogram, blood volume pulse, galvanic skin response, respiration, phalanx temperature, etc.), the non-invasiveness and the high time resolution represent great advantages of the EEG [19]-[21]. Thus, the EEG can be successfully employed in real-time brain-computer interface (BCI) applications [22], [23]. Thanks to the advances in the portability and wearability of EEG devices, VR headsets and EEG systems are nowadays compatible and can be employed simultaneously.

Phobic states can cause an asymmetry in the EEG signal, that is, an imbalance in amplitude or power between two sites on the scalp. Specifically, an increase in the beta power of the right hemisphere may reflect the presence of phobias [24]. A hyperactivity of the frontal cortex was also found in phobic disorders [25]. A recent study demonstrated that high levels of beta and high-beta activity in the temporal lobes (related to amygdala activation) are associated with anxiety, fear, panic and phobia [26]. Starting from the EEG signals, fear of heights can be automatically detected by means of machine learning (ML) techniques. The present study aims to identify the EEG signal processing pipeline that maximizes the accuracy of classification between three intensity levels (i.e., low, medium and high) of fear of heights in subjects with different acrophobia severities. Thus, the novelty beyond the state of the art consists in focusing on the real-time assessment of the intensity level of fear experienced by a subject, rather than on the diagnosis of acrophobia severity. This is a first step toward a BCI adaptive system for the VR-based treatment of fear of heights.

2. Related works

In [27], the authors aimed to distinguish three groups of subjects affected by different severities of acrophobia by means of the EEG signal. The psychometric tools employed were the Acrophobia Questionnaire (AQ) and the Subjective Unit of Distress (SUD) scores. EEG data of 76 subjects were collected during the exposure to a virtual environment reproducing a wooden plank hanging at a height of about 160 m.
Functional connectivity between each pair of channels was explored and complex networks named functional brain networks (FBNs) were obtained. The FBN features proved able to distinguish different groups of subjects. ML algorithms and convolutional neural networks (CNNs) with FBN features as inputs were trained. The best results were achieved by using CNNs, and an accuracy of (98.46 ± 0.42) % was obtained. Inter- and intra-connectivity of cerebral cortex regions proved able to identify the degree of acrophobia.

In [28], the severity of acrophobia was estimated based on EEG, heart rate (HR) and galvanic skin response (GSR). The Visual Height Intolerance Questionnaire (vHIQ) was employed to preliminarily assess the acrophobia intensity of the participants. Eight subjects underwent an in-vivo pre-therapy exposure session, followed by a VR therapy and an in-vivo evaluation procedure. Various machine and deep learning classifiers were implemented and tested in both a user-dependent and a user-independent setting. In the user-dependent setting, accuracies of 79.12 % and 52.75 % were achieved in the 2-class (relax vs fear) and 4-class (relax, low, medium and high) fear level classification, respectively. In the user-independent setting, accuracies of 89.50 % and 42.50 % were achieved in the same fear level classification tasks. The EEG feature which maximized the classification accuracy was the log-normalized beta band power.

In [29], EEG and ECG biomarkers of 18 subjects were monitored in real time in a virtual high-altitude scenario. Statistical analysis was employed to explore the relationship between these biomarkers and height-related stress. The Perceived Stress Scale (PSS) was employed to record the subjects' self-assessment of the perceived stress. Based on a HR threshold, the sample was divided into two groups and statistically relevant differences emerged between the EEG biomarkers. The absolute powers in the beta and gamma bands in the occipital region were found to be associated with height-related stress. The following correlations with the PSS score were found: (i) the increase in the frontal beta power (ρ = 0.50), (ii) the increase in the parietal beta and gamma powers (ρ = 0.56 and ρ = 0.71), (iii) the increase in the temporal beta and gamma powers (ρ = 0.70 and ρ = 0.60), and (iv) the increase in the occipital beta and gamma powers (ρ = 0.53 and ρ = 0.56).

3. Experimental validation

In this section, the participants, the psychometric tools, the hardware, the software and the protocol exploited for the experimental validation of the proposed study are presented.

3.1. Participants

Nine healthy subjects (age 22.63 ± 4.00; 3 males and 6 females) participated in the experiment. They were naive to the task of emotion-related visual stimulation. All participants gave prior written informed consent to participate.

3.2. Psychometric tools

The AQ [30] and the SUD scores were used for an initial screening of the sample to evaluate the severity of fear of heights. For each subject, the AQ self-report scale allows the assessment of the anxiety and avoidance levels associated with 20 height-relevant situations. The AQ is made of 20 items for each subscale (i.e., anxiety and avoidance). The SUD [31] was used to monitor the fear level changes during the ongoing exposure session.
It is a visual analogue scale commonly used to measure self-reported habituation of anxiety, agitation, stress or other painful feelings during exposure therapy. Current levels of anxiety are usually rated on a Likert scale from 0 (no distress) to 100 (extreme distress); in the present study, a Likert scale from 0 to 10 was exploited. In exposure therapy, the SUD is mainly used to develop fear hierarchies and arrange fear-provoking stimuli by order of severity. SUD ratings are generally also used to assess the initial fear level of the subjects. Subjects simultaneously reporting AQ scores > 20 or < 6 on the anxiety and avoidance scales, respectively, and a SUD score < 2 were considered not to suffer from fear of heights. Only one subject belonged to this category and was excluded from the experiments; thus, the EEG data of eight subjects were considered for further analysis.

3.3. Hardware

EEG signals were recorded through the ultra-lightweight, wearable LiveAmp amplifier from Brain Products [32] (Figure 1). The system is provided with 32 gel-based active electrodes placed following the 10/20 international positioning system. The LiveAmp is equipped with a 24-bit analog-to-digital converter (measurement range ± 341.6 mV, resolution 40.7 nV per LSB). The EEG signals are acquired at a sampling rate of 500 Sa/s. The system is wireless and is also provided with a memory card to store the data internally, in order to allow greater mobility. The rechargeable battery allows up to 4 hours of recording. The BrainVision Recorder software allows checking the contact impedance between the electrodes and the scalp and the real-time visualization of the EEG data.

The Meta Quest 2 [33], produced by Meta Platforms (Figure 2), is the headset employed for the VR exposure. The Quest 2 runs an Android-based operating system and can be used both as a standalone device and with the VR software running on a desktop computer. The headset is provided with a fast-switch LCD display with a per-eye resolution of 1832 × 1920, and refresh rates of 60, 72 and 90 Hz are supported. The 3D positional audio is built directly into the headset and a data storage of 128 GB or 256 GB is available. With a motion tracker with 6 degrees of freedom (DOF), the headset is able to track the movements of the head and the body, thus allowing the user a VR experience with realistic precision.

3.4. Software

The Akron application, developed by Idego [34], is a VR app designed for the treatment of fear of heights. The app allows the user to be gradually exposed to fearful situations by increasing the eliciting power of the stimulus. The displayed landscape is a canyon in a rocky desert, where the user can find a river and barren nature (see Figure 3). A wooden lift and a platform allow the user to climb, starting from the ground level and reaching three increasing height levels, namely 15 m, 30 m and 45 m. On each side, the lift is provided with protective barriers to let the user feel safe during the platform raising. Frontal barriers are not present when the platform reaches the desired level, in order to leave the user in greater eye contact with the sensation of empty space. The app was developed on the Android platform using the Unity game engine (version 2019.4.16), C# as the programming language, the OpenGLES3 graphics library and an IL2CPP-type backend configuration. The app has been optimized through an ASTC compression system for use on low-end virtual reality headsets such as the Meta Quest 2. A 72 Hz refresh rate was set.
The 6 DOF allow the user to look around.

Figure 1. Brain Vision LiveAmp and actiCAP.
Figure 2. Meta Quest 2.
Figure 3. Displayed landscape.

3.5. Protocol

The experiments were performed at the Institute of Neural Engineering (BCI Lab) of the Graz University of Technology. For each subject, after a preparation phase, the three sessions were carried out on the same day. During the preparation phase, participants were carefully instructed on the purpose of the experiment. The researchers mounted the EEG cap on the participant's head and the electrodes were filled with conductive gel. The electrode impedance was kept under 25 kΩ and the quality of the EEG signal was visually inspected. After the EEG configuration, participants were asked to wear the VR headset and the EEG signals were again visually inspected to ensure that no perturbations occurred (Figure 4).

Each session consisted of the same three runs. The subject is required to stand on an elevating platform in a VR environment reproducing a canyon. For each run, a higher height level is reached. Before the first run, the subject stands at the ground level to become familiar with the environment and to allow the baseline signal acquisition. Each run starts with a visual 5 s countdown followed by the platform raising. Once the desired level is reached, the subject is asked to keep that position for 90 s. At the end of the run, a message informs the subject that the relax phase is starting and they are teleported to the ground level. The participant is then asked to rate the level of discomfort on a scale from 1 to 10, and the relax phase continues for another 60 s. The three sessions last about 45 min. In Figure 5, the overall experimental procedure is reported.

4. Data analysis

4.1. Preprocessing

The raw EEG data were pre-processed using MATLAB v. R2022a. A zero-phase notch IIR filter was applied in order to filter out the 50 Hz power-line noise. A zero-phase digital filtering was performed with a 4th-order Butterworth band-pass IIR filter with cut-off frequencies between 4 and 45 Hz, in order to extract the EEG frequency bands of interest. For both the notch and the band-pass filters, the zero-phase digital filtering was achieved by using the filtfilt() MATLAB function. Artifact subspace reconstruction (ASR) and independent component analysis (ICA) [35] were employed to correct the EEG signal from artifacts through the EEGLAB MATLAB toolbox version 2019 [36]. The baseline correction was then applied to the EEG signal. Next, the signals were segmented into 1 s time windows with 50 % overlap between adjacent segments.

4.2. Feature extraction, selection and classification

The fast Fourier transform (FFT) with zero-padding was then applied to the EEG signals; the MATLAB function fft() was used with a number of points twice the original length of the signal. The absolute powers were computed in the following frequency bands: theta (4 to 7) Hz, alpha (8 to 13) Hz, low-beta (14 to 20) Hz, high-beta (21 to 29) Hz, and gamma (30 to 45) Hz. For each frequency band, the absolute powers of the channels belonging to the same brain region were averaged according to the following articulation:

• frontal: Fp1, Fp2, F3, F4, F7, F8, FC5, FC6, FC1, FC2, Fz
• central: C3, C4, CP5, CP6, CP1, CP2, Cz
• parietal: P3, P4, P7, P8, Pz
• temporal: T7, T8, TP9, TP10, FT9, FT10
• occipital: O1, O2, Oz.
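The pipeline above is implemented in MATLAB with the EEGLAB toolbox; the following SciPy/NumPy sketch reproduces the filtering, segmentation and band-power steps under stated assumptions: the notch quality factor Q = 30 is a placeholder (the paper does not report it), and the ASR/ICA artifact correction and the baseline correction are omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 500                                    # sampling rate in Sa/s
BANDS = {"theta": (4, 7), "alpha": (8, 13), "low-beta": (14, 20),
         "high-beta": (21, 29), "gamma": (30, 45)}

def preprocess(eeg):
    """eeg: array (n_channels, n_samples). Zero-phase 50 Hz notch and
    4th-order Butterworth band-pass (4-45 Hz), as in Section 4.1."""
    b, a = iirnotch(w0=50, Q=30, fs=FS)     # Q = 30 is an assumed value
    eeg = filtfilt(b, a, eeg, axis=-1)
    b, a = butter(4, [4, 45], btype="bandpass", fs=FS)
    return filtfilt(b, a, eeg, axis=-1)

def segment(eeg, win_s=1.0, overlap=0.5):
    """1 s windows with 50 % overlap; returns (n_windows, n_ch, n_samp)."""
    n = int(FS * win_s)
    step = int(n * (1 - overlap))
    starts = range(0, eeg.shape[-1] - n + 1, step)
    return np.stack([eeg[:, s:s + n] for s in starts])

def band_powers(window):
    """Absolute band powers from a zero-padded FFT (Section 4.2)."""
    n = window.shape[-1]
    spec = np.abs(np.fft.rfft(window, n=2 * n, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(2 * n, d=1 / FS)
    return {name: spec[..., (freqs >= lo) & (freqs <= hi)].sum(axis=-1)
            for name, (lo, hi) in BANDS.items()}
```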
The forward sequential feature selector (SFS) algorithm was used to reduce the original dimensional feature space. The SFS belongs to the wrapper methods [37]: for a given classifier, the SFS adds or removes features to form a new feature subset and, at each stage, the best feature to add or remove is identified based on the cross-validation score. The algorithm returned the most significant frequency band for each channel, reducing the total number of features from 160 to 32. The k-nearest neighbors (k-NN), random forests (RFs), naïve Bayes (NB) and support vector machines (SVMs) were the employed classifiers. The goal was to classify three intensity levels (i.e., low, medium and high) of fear of heights. For each classifier, a 5-fold stratified cross-validation was employed in a within-subject setting, returning a mean classification accuracy for the subject. During the 5 iterations, the test fold was composed of data from randomly selected runs of the 3 sessions; the remaining data (80 %) were used for the training phase. Then the overall mean was calculated, and the standard deviation was computed on the eight subjects considering n-1 degrees of freedom.

5. Results

The metric used to evaluate the performance of the models is the accuracy, which formally refers to the number of correct predictions over the total number of predictions. Three different experiments were carried out in order to find the most significant scalp area, the most significant EEG frequency band, and the most significant frequency band per channel.

Figure 4. EEG acquisition.
Figure 5. Framework of one experimental session. Following the baseline and the exposure phases, the Subjective Unit of Distress (SUD) is administered.

As reported in Table 1, the highest mean classification accuracy was obtained over the frontal region.

Table 1. Within-subject classification accuracy (mean and standard deviation on all the subjects) in % of fear of heights on a 3-level intensity scale. For each scalp region, the absolute powers of the 5 EEG bands were used as input features.

Brain region   k-NN            NB             RF              SVM
frontal        53.00 ± 9.86    47.10 ± 7.15   68.20 ± 11.60   63.40 ± 12.80
central        42.80 ± 4.38    43.80 ± 7.85   51.80 ± 7.98    51.70 ± 8.71
parietal       45.10 ± 7.55    45.50 ± 4.64   53.30 ± 7.95    52.30 ± 8.27
temporal       56.00 ± 12.20   51.70 ± 8.80   66.90 ± 11.40   62.10 ± 12.50
occipital      52.80 ± 12.20   43.60 ± 6.27   57.70 ± 15.50   57.70 ± 15.90

In Table 2, the classification accuracies of each subject are shown when the signals acquired over the frontal region are input to the RF classifier. The achieved results agree with previous findings from the scientific literature: the frontal area emerged as able to distinguish among degrees of acrophobia [27].

Table 2. Within-subject classification accuracy (mean and standard deviation for each subject) in % of fear of heights on a 3-level intensity scale. For the frontal region, the absolute powers of the 5 EEG bands were used as input features.

Subject ID     Classification accuracy
#1             76.20 ± 1.40
#2             57.40 ± 3.19
#3             71.70 ± 3.12
#4             53.20 ± 4.84
#5             70.80 ± 2.98
#6             62.80 ± 3.24
#7             64.00 ± 1.48
#8             89.80 ± 1.74
overall mean   68.20 ± 11.60

In Table 3, the classification accuracies at varying frequency bands are reported, considering the frontal region only. The most significant EEG bands proved to be the gamma and high-beta bands. Gamma and beta waves were observed to have a strong correlation with the brain response to high-altitude exposure [29]. These results were confirmed by the SFS when asked to select, for each channel, the frequency band maximizing the accuracy: the gamma band and the high-beta band were selected for 48.26 % and 22.92 % of the channels, respectively.

Table 3. Within-subject classification accuracy (mean and standard deviation) in % of fear of heights on a 3-level intensity scale. For the frontal region, the absolute powers of the 5 EEG bands were separately used as input features. The two bands maximizing the accuracy for all the classifiers are highlighted in grey.

Frequency band   k-NN            NB             RF              SVM
theta            39.30 ± 4.76    37.90 ± 3.26   41.60 ± 6.52    39.40 ± 7.66
alpha            41.20 ± 4.52    38.90 ± 6.45   42.70 ± 5.92    43.30 ± 6.43
low-beta         50.80 ± 11.00   45.70 ± 8.70   52.10 ± 10.80   52.00 ± 11.40
high-beta        53.60 ± 13.40   49.70 ± 9.16   57.40 ± 12.50   57.90 ± 10.40
gamma            57.80 ± 9.37    46.60 ± 6.90   61.20 ± 9.76    61.30 ± 8.43

In Table 4, the performance of the classifiers is compared when the SFS algorithm is applied to select the most informative frequency bands and when all the frequency bands are considered.

Table 4. Within-subject classification accuracy (mean and standard deviation) in % of fear of heights on a 3-level intensity scale. The absolute powers of the 5 EEG bands were used as input features. Results with and without feature selection are reported.

SFS       k-NN            NB             RF              SVM
with      78.60 ± 8.50    55.40 ± 8.29   86.10 ± 8.29    85.70 ± 9.12
without   59.50 ± 12.50   52.30 ± 7.09   82.00 ± 11.60   76.70 ± 14.00

A feature selection process conducted on the EEG data of each single subject allows a better performance to be achieved. However, the choice of custom features would require a preliminary calibration of the system for each different subject. To pursue the goal of generalization, the gamma and high-beta powers in the frontal area of the scalp appear to be the most robust features.
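A sketch of the within-subject evaluation behind Tables 1-4 is given below, using scikit-learn. The random-forest hyperparameters and the shuffled fold assignment are assumptions of this example; the paper composes the test folds from randomly selected runs of the three sessions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

def within_subject_accuracy(X, y, seed=0):
    """5-fold stratified cross-validation of a random forest on one
    subject's feature matrix X (n_windows x n_features) and 3-level
    fear labels y; returns the mean accuracy and the standard
    deviation over the folds."""
    clf = RandomForestClassifier(random_state=seed)
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    return scores.mean(), scores.std(ddof=1)
```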
6. Conclusion and future works

An EEG-based detection system of three intensity levels of fear of heights was proposed. A VR scenario reproducing a canyon allowed the subjects to be gradually exposed to fear-eliciting stimuli. By means of an elevating platform, the subjects reached three different height levels. The AQ and the SUD scores were employed to assess the severity of fear of heights of the experimental sample. The EEG signals of eight subjects were acquired through a 32-channel headset during the exposure to the VR scenario. A detailed analysis was conducted in order to highlight the scalp regions and the EEG bands most affected by fear of heights. The average accuracy in the within-subject case for the three-class fear classification task was computed. The absolute powers in the five EEG bands extracted from the frontal region of the scalp allowed an average accuracy of (68.20 ± 11.60) % to be achieved. Considering the frontal region only, accuracies of (57.90 ± 10.10) % and (61.30 ± 8.43) % were achieved in the high-beta and gamma bands, respectively. The SFS confirmed those results by selecting, for the whole set of channels, the gamma band in 48.26 % of the cases and the high-beta band in 22.92 %, achieving an average accuracy of (86.10 ± 8.29) %. In future works, the sample size will be enlarged and further EEG features will be tested to improve the generalizability of the results.

Acknowledgement

The authors thank the Institute of Neural Engineering (BCI Lab) at the Graz University of Technology for the support in the research activities. The authors also thank the PhD grant "AR4ClinicSur - Augmented Reality for Clinical Surgery" (INPS National Social Security Institution, Italy). This work was carried out as part of the "ICT for Health" project, which was financially supported by the Italian Ministry of Education, University and Research (MIUR), under the initiative 'Departments of Excellence' (Italian Budget Law no. 232/2016), through an excellence grant awarded to the Department of Information Technology and Electrical Engineering of the University of Naples Federico II, Naples, Italy.

References

[1] American Psychiatric Association, Diagnostic and Statistical Manual of Mental Disorders: DSM-5 (Vol. 5, No. 5), American Psychiatric Association, Washington, DC, 2013, ISBN-13: 9780890425558.
[2] G. R. VandenBos, APA Dictionary of Psychology, American Psychological Association, Washington, DC, 2007, ISBN-13: 9781433812071.
[3] L. F. Hodges, R. Kooper, T. C. Meyer, B. O. Rothbaum, D. Opdyke, J. J. D. Graaff, J. S. Williford, M. M. North, Virtual environments for treating the fear of heights, IEEE Computer 28(7) (1995), pp. 27-34. DOI: 10.1109/2.391038
[4] I. Marks, M. Gelder, A controlled retrospective study of behaviour therapy in phobic patients, The British Journal of Psychiatry 111(476) (1965), pp. 561-573. DOI: 10.1192/bjp.111.476.561
[5] M. J. Crowe, I. M. Marks, W. S. Agras, H. Leitenberg, Time-limited desensitisation, implosion and shaping for phobic patients: a crossover study, Behaviour Research and Therapy 10(4) (1972), pp. 319-328. DOI: 10.1016/0005-7967(72)90055-1
[6] J. Boehnlein, L. Altegoer, N. K. Muck, K. Roesmann, R. Redlich, U. Dannlowski, E. J. Leehr, Factors influencing the success of exposure therapy for specific phobia: a systematic review, Neuroscience & Biobehavioral Reviews 108 (2020), pp. 796-820. DOI: 10.1016/j.neubiorev.2019.12.009
[7] D. Opdyke, J. S. Williford, M. North, Effectiveness of computer-generated (virtual reality) graded exposure in the treatment of acrophobia, Am. J. Psychiatry 1(152) (1995), pp. 626-628. DOI: 10.1176/ajp.152.4.626
[8] P. M. Emmelkamp, M. Bruynzeel, L. Drost, C. A. G. van der Mast, Virtual reality treatment in acrophobia: a comparison with exposure in vivo, CyberPsychology & Behavior 4(3) (2001), pp. 335-339. DOI: 10.1089/109493101300210222
[9] M. Krijn, P. M. Emmelkamp, R. Biemond, C. d. W. de Ligny, M. J. Schuemie, C. A. van der Mast, Treatment of acrophobia in virtual reality: the role of immersion and presence, Behaviour Research and Therapy 42(2) (2004), pp. 229-239. DOI: 10.1016/s0005-7967(03)00139-6
[10] P. Arpaia, G. D'Errico, L. T. De Paolis, N. Moccaldi, F. Nuccetelli, A narrative review of mindfulness-based interventions using virtual reality, Mindfulness (2021), pp. 1-16. DOI: 10.1007/s12671-021-01783-6
[11] G. Albakri, R. Bouaziz, W. Alharthi, S. Kammoun, M. Al-Sarem, F. Saeed, M. Hadwan, Phobia exposure therapy using virtual and augmented reality: a systematic review, Applied Sciences 12(3) (2022), p. 1672. DOI: 10.3390/app12031672
[12] M. J. Lambert, Behavior therapy with adults, in: Bergin and Garfield's Handbook of Psychotherapy and Behavior Change, John Wiley & Sons, 2013, ISBN: 9781118038208.
[13] Y. H. Choi, D. P. Jang, J. H. Ku, M. B. Shin, S. I. Kim, Short-term treatment of acrophobia with virtual reality therapy (VRT): a case report, CyberPsychology & Behavior 4(3) (2001), pp. 349-354. DOI: 10.1089/109493101300210240
[14] J. L. Maples-Keller, B. E. Bunnell, S.-J. Kim, B. O.
Rothbaum, The use of virtual reality technology in the treatment of anxiety and other psychiatric disorders, Harvard Review of Psychiatry 25(3) (2017), art. no. 103. DOI: 10.1002/0471206458.ch2
[15] P. Arpaia, D. Coyle, G. D'Errico, E. De Benedetto, L. T. De Paolis, N. Du Bois, S. Grassini, G. Mastrati, N. Moccaldi, E. Vallefuoco, Virtual reality enhances EEG-based neurofeedback for emotional self-regulation, in: International Conference on Extended Reality, Springer, Lecce, Italy, 2022, pp. 420-431. DOI: 10.1007/978-3-031-15553-6_29
[16] A. Apicella, P. Arpaia, S. Giugliano, G. Mastrati, N. Moccaldi, High-wearable EEG-based transducer for engagement detection in paediatric rehabilitation, Brain-Computer Interfaces 9(3) (2022), pp. 129-139. DOI: 10.1080/2326263x.2021.2015149
[17] A. Choo, A. May, Virtual mindfulness meditation: virtual reality and electroencephalography for health gamification, in: 2014 IEEE Games Media Entertainment, IEEE, Toronto, ON, Canada, 2014, pp. 1-3. DOI: 10.1109/gem.2014.7048076
[18] I. Kosunen, M. Salminen, S. Jarvela, A. Ruonala, N. Ravaja, G. Jacucci, RelaWorld: neuroadaptive and immersive virtual reality meditation system, in: Proceedings of the 21st International Conference on Intelligent User Interfaces, Sonoma, California, USA, 2016, pp. 208-217. DOI: 10.1145/2856767.2856796
[19] Z. Liu, J. Shore, M. Wang, F. Yuan, A. Buss, X. Zhao, A systematic review on hybrid EEG/fNIRS in brain-computer interface, Biomedical Signal Processing and Control 68 (2021), p. 102595. DOI: 10.1016/j.bspc.2021.102595
[20] J. Uchitel, E. E. Vidal-Rosas, R. J. Cooper, H. Zhao, Wearable, integrated EEG-fNIRS technologies: a review, Sensors 21(18) (2021), p. 6106. DOI: 10.3390/s21186106
[21] N. Yoshimura, O. Koga, Y. Katsui, Y. Ogata, H. Kambara, Y. Koike, Decoding of emotional responses to user-unfriendly computer interfaces via electroencephalography signals, Acta IMEKO 6(2) (2017), pp. 93-98. DOI: 10.21014/acta_imeko.v6i2.383
[22] P. Arpaia, L. Callegaro, A. Cultrera, A. Esposito, M. Ortolano, Metrological characterization of a low-cost electroencephalograph for wearable neural interfaces in Industry 4.0 applications, in: 2021 IEEE International Workshop on Metrology for Industry 4.0 & IoT (MetroInd4.0&IoT), IEEE, 2021, pp. 1-5. DOI: 10.1109/metroind4.0iot51437.2021.9488445
[23] P. Arpaia, L. Callegaro, A. Cultrera, A. Esposito, M. Ortolano, Metrological characterization of consumer-grade equipment for wearable brain-computer interfaces and extended reality, IEEE Transactions on Instrumentation and Measurement 71 (2021), pp. 1-9. DOI: 10.1109/tim.2021.3127650
[24] J. N. Demos, Getting Started with Neurofeedback, W. W. Norton & Company, New York, 2005, ISBN-13: 9780393704501.
[25] P. Bucci, A. Mucci, U. Volpe, E. Merlotti, S. Galderisi, M. Maj, Executive hypercontrol in obsessive-compulsive disorder: electrophysiological and neuropsychological indices, Clinical Neurophysiology 115(6) (2004), pp. 1340-1348. DOI: 10.1016/j.clinph.2003.12.031
[26] V. R. Ribas, et al., Pattern of anxiety, insecurity, fear, panic and/or phobia observed by quantitative electroencephalography (qEEG), Dementia & Neuropsychologia 12 (2018), pp. 264-271. DOI: 10.1590/1980-57642018dn12-030007
[27] Q. Wang, H. Wang, F. Hu, C. Hua, D. Wang, Using convolutional neural networks to decode EEG-based functional brain network with different severity of acrophobia, Journal of Neural Engineering 18(1) (2021), art. no. 016007. DOI: 10.1088/1741-2552/abcdbd
[28] O.
Balan, G. Moise, A. Moldoveanu, M. Leordeanu, F. Moldoveanu, An investigation of various machine and deep learning techniques applied in automatic fear level detection and acrophobia virtual therapy, Sensors 20(2) (2020), p. 496. DOI: 10.3390/s20020496
[29] V. Aspiotis, A. Miltiadous, K. Kalafatakis, K. D. Tzimourta, N. Giannakeas, M. G. Tsipouras, D. Peschos, E. Glavas, A. T. Tzallas, Assessing electroencephalography as a stress indicator: a VR high-altitude scenario monitored through EEG and ECG, Sensors 22(15) (2022), art. no. 5792. DOI: 10.3390/s22155792
[30] D. C. Cohen, Comparison of self-report and overt-behavioral procedures for assessing acrophobia, Behavior Therapy 8(1) (1977), pp. 17-23. DOI: 10.1016/s0005-7894(77)80116-0
[31] R. E. McCabe, Subjective Units of Distress Scale, in: Phobias: The Psychology of Irrational Fear, vol. 18, ABC-CLIO, LLC, 2015, p. 361, ISBN-13: 9781610695756.
[32] Brain Products, LiveAmp. Online [accessed 16 March 2023]: https://www.brainproducts.com/solutions/liveamp/
[33] Meta, Meta Quest 2. Online [accessed 16 March 2023]: https://www.meta.com/it/quest/products/quest-2/
[34] Idego, Idego Digital Psychology. Online [accessed 16 March 2023] [in Italian]: https://www.idego.it/
[35] P. Arpaia, E. De Benedetto, A. Esposito, A. Natalizio, M. Parvis, M. Pesola, Comparing artifact removal techniques for daily-life electroencephalography with few channels, in: 2022 IEEE International Symposium on Medical Measurements and Applications (MeMeA), IEEE, Messina, Italy, 2022, pp. 1-6. DOI: 10.1109/memea54994.2022.9856433
[36] T. Raduntz, J. Scouten, O. Hochmuth, B. Meffert, EEG artifact elimination by extraction of ICA-component features using image processing algorithms, Journal of Neuroscience Methods 243 (2015), pp. 84-93. DOI: 10.1016/j.jneumeth.2015.01.030
[37] A. Apicella, P. Arpaia, F. Isgrò, G. Mastrati, N. Moccaldi, A survey on EEG-based solutions for emotion recognition with a low number of channels, IEEE Access 10 (2022), pp. 117411-117428.
DOI: 10.1109/access.2022.3219844


Three-cycle university studies in metrology - an innovative development approach

ACTA IMEKO
ISSN: 2221-870X
November 2016, Volume 5, Number 3, 87-94

Marija Cundeva-Blajer

Ss. Cyril and Methodius University, Faculty of Electrical Engineering and Information Technologies, ul. Ruger Boskovic br. 18, POBox 574, 1000 Skopje, R. Macedonia

Section: Research Paper

Keywords: metrology education; three-cycle study; Bologna process; international cooperation in education; metrology study programmes

Abstract: An innovative creation process of three-cycle university education in the metrology area is presented. The case study of the development of the higher education model in the field of metrology at the Ss. Cyril & Methodius University is analyzed. The realized three-cycle university studies in metrology are fully harmonized with the Bologna principles. The structures of the BSc, MSc and the joint international PhD study program initiated by a group of 12 European universities are shown. An international consortium of universities participated in the creation of the three-level university studies in metrology. The specifics, goals, quantitative and qualitative achievements, as well as the impact and contributions at the national, regional and international level of the new and innovated three-cycle metrology study programs are given and discussed.

Citation: Marija Cundeva-Blajer, Three-cycle university studies in metrology - an innovative development approach, Acta IMEKO, vol. 5, no. 3, article 14, November 2016, identifier: IMEKO-ACTA-05 (2016)-03-14

Section Editor: Paul Regtien, The Netherlands
Received December 25, 2015; in final form July 8, 2016; published November 2016.
Copyright: © 2016 IMEKO. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: Ss. Cyril and Methodius University, Faculty of Electrical Engineering and Information Technologies
Corresponding author: Marija Cundeva-Blajer, e-mail: mcundeva@feit.ukim.edu.mk

1. Introduction

Cooperation in the field of education enables the development of technological collaboration. The processes initiated by the Bologna Declaration (1999) [1], [2] for the internationalization of, and more intensive cooperation in, higher education, through the opening of the national systems of higher education, the exchange of academic staff and student mobility, give opportunities for more intensive regional and international cooperation in engineering education. However, the development of the metrology infrastructure at the national, regional as well as international level is predetermined by the education and training opportunities [3], [4]. Therefore, the innovation of metrology education according to the Bologna Declaration [2] is not only a possibility but also a necessity, because of the demanded highest level of academic staff and research facilities at all three levels of higher education (BSc, MSc and PhD) [5].

The competitiveness of a company is based on the quality of its products, and this, in turn, depends on its measurement capabilities, related to the field of metrology, defined as the science of measurement and its applications [6]. Metrology provides a substantial basis for fundamental research and development activities. The measuring capability is directly related to the technological level of a country and is fundamental to the development processes of emerging countries. Measurements have become an increasingly critical tool for national and international trade, and for removing technical barriers to global trade. Therefore, the development of a well-structured national as well as regional system of metrology is essential to support scientific progress, innovation, technology and the competitiveness of enterprises [6]. Metrology plays an important role in economic development and in the growth of industry.
hence, it is fundamental for developing countries to establish a strong national measurement infrastructure, including the development of human resources in metrology, which supports economic development and the growth of the national and regional industry [3], [6]. there are different approaches to the development of metrology studies, depending on the national regulation, the current state of the art of the metrology infrastructure, as well as metrology traditions [6]-[8]. the bologna declaration of 1999 [2] has led to the process of completely reviewing the higher education system in europe. at the end of the first bologna decade, the new two- or three-cycle degree system (bachelor - master - doctoral degree) had been adopted by the majority of the signatory countries. the european credit transfer and accumulation system (ects) has triggered high interest in other major continents, e.g. in the u.s.a., where the u.s. education system is benchmarked against the european one in some studies. also in africa, many education systems have been completely altered since the beginning of this century [8]. metrology, the science of measurement, is international and inherent in all engineering fields [3]. it is also represented at the universities in the region of south-east europe (see):
1. ss. cyril & methodius university, skopje, r. macedonia,
2. south-eastern european university, tetovo, r. macedonia,
3. university of zagreb, r. croatia,
4. university of split, r. croatia, and
5. university of prishtina, r. kosovo.
until 2010, knowledge in the field of metrology was gained at these universities through accredited study programmes at the first (bsc) and second cycle (msc) of studies, according to the european credit transfer system (ects) and in line with the bologna principles. at the ss. cyril and methodius university in skopje, the first international cooperation for higher education reform in the field of metrology was initiated in 2005 through the eu-funded project "introduction of two-tier studies of metrology", financed by the european training foundation (tempus jep cd 19010_2004, 2005-2007), with partners from:
- the netherlands (technical university of delft),
- poland (agh university, krakow), and
- italy (politecnico di milano) [9].
innovated bsc courses in metrology were introduced, and a completely new msc study program in metrology and quality management was accredited as outcomes of the project.
following the process of realization of higher education according to the bologna declaration [2] with three cycles of studies (graduate bsc, master msc and doctoral phd studies), the organization of a third cycle of studies in the multidisciplinary field of metrology was posed as a need [10], [11]. the mutual interest of the above-mentioned see universities was expressed in the initiative for the creation and realization of a joint third-cycle study program, i.e. phd studies in metrology. in the frame of the tempus iv program, this initiative was accepted by the european commission and the education, audiovisual and culture executive agency (eacea), which financed the realization of the project 158599-tempus-mk-tempus-jpcr "creation of the third cycle of studies - doctoral studies in metrology" [12]. in the creation of the joint phd study program, beside the five above-mentioned see universities, the following eu institutions participated:
- university of pavia, italy,
- braunschweig university of technology, germany,
- czech technical university in prague,
- graz university of technology, austria,
- university of zaragoza, spain,
- university of gavle, sweden,
- superior school of metrology, douai, france, and
- the bureau of metrology of r. macedonia.
currently, at the ss. cyril and methodius university in skopje, higher education in metrology is structured according to the bologna three cycles of study, as represented in figure 1, where ects denotes the european credit transfer system.

figure 1. three-cycle studies in metrology at the ss. cyril and methodius university in skopje according to the bologna declaration.

2. phase 1: innovation of bsc study program courses in metrology

after the ratification of the bologna declaration [2] by the parliament of r. macedonia in 2004 [10], the process of higher education reform started intensively in all study fields. before the bologna process, metrology science was present at the ss. cyril and methodius university in skopje through the obligatory course of electrical measurements in the third semester of the electrical engineering studies, the course of power measurements in the study program of power engineering, and the course of electrical measurements of non-electrical quantities in the 9th semester. in 2005, new reformed bsc studies in electrical engineering, harmonized with the bologna principles, were accredited. the curriculum in measurement education was innovated through some updates, but also through some completely new metrology courses, given in table 1. the bsc study programs with the course programs in metrology were re-accredited in 2011.

3. phase 2: msc study program in metrology and quality management

before the bologna process reform, metrology education at the ss. cyril and methodius university at postgraduate level was represented through the curriculum of electrical measurements and materials (with several metrology courses). in 2008, a completely new study program in metrology and quality management was accredited, and after some innovation in 2013 it was re-accredited.

table 1. metrology in bsc studies in electrical engineering and information technologies.

- electrical measurements (all electrical engineering programs; obligatory). competences: knowledge of methods for measurement of electrical quantities, principles and usage of measurement instruments. content: basic terms and definitions in measurement techniques; legal metrology; static and dynamic characteristics and block structures of measuring devices; theory of uncertainty, errors of repeatable and indirect measurements; measurement transducers, ac/dc transducers; analogue instruments; digital instruments, structural elements, time interval and frequency measurements; dmm; oscilloscope; measurement bridges; introduction to remote measurements and measuring systems.

- electrical measurements of non-electrical quantities (power engineering; elective). competences: knowledge of measurement systems and elements of measurement systems, basic sensor technologies, methods for measurement of non-electrical quantities by electrical procedures, process monitoring systems, application of modern virtual instrumentation and computer techniques for measurements. content: introduction to electrical measurements of non-electrical quantities; application of measurement systems; elements of a measurement system; choice of a proper measurement device or system; basic sensor technologies (resistance, capacitance, magnetic sensors, hall effect sensors, piezoelectric transducers, strain gauges, piezo-resistive sensors, optical sensors, ultrasound sensors, nuclear sensors); micro-sensors and special sensor technologies; smart sensors; measurement noise; measurement signal conditioning; a/d conversion; telemetry; measurement of mechanical quantities (dimensions, translation, translation and rotation velocity, acceleration, mass, force, torque, pressure); measurement of sound, noise and vibrations; measurement of speed, flow and fluid level; humidity measurement; temperature measurement; radiation measurement; systems for monitoring and process measurements; modern virtual instrumentation and computer techniques for measurement of non-electrical quantities.

- labview practicum (power engineering; elective). competences: basic knowledge of the labview program, interfacing of measurement devices with a pc, analysis and acquisition of measurement data, programming of scada systems, communication with programmable logic controllers. content: virtual instrumentation and labview; the labview environment; declaration of variables, matrices and arrays; cycles, state machines, clusters and arrays; graphical user interface; file generation, data acquisition, measurement and signal generation; labview advanced features; measurement control systems in labview; communication interfaces in labview; distributed measurement systems in labview.

- fundamentals of measurement systems (computer engineering; elective). competences: knowledge of basic methods for measuring electrical and non-electrical quantities; introduction to the principles of analogue and digital measuring instruments. content: basic concepts and definitions in measurement techniques; static and dynamic characteristics of measurement devices; fundamentals of uncertainty, errors in indirect measurements; basic characteristics of analogue and digital measuring instruments; sensors and converters; classification of measurement sensors; introduction to resistive, inductive and capacitive sensors for measurement of various physical quantities; signal conditioning and basic configurations of measurement circuits.

- computerized measurements (computer engineering; elective). competences: capability for development and support of computer-based measurement systems and measurement data transmission; design and realization of information measurement systems. content: architecture of computer elements for data acquisition; measurement signal conditioning; a/d and d/a conversion; standard interfaces, serial and parallel; virtual measurement systems; graphical programming and introduction to labview; work with variables, their declaration, cycles, areas and clusters; user interface, measurement data recording and display; examples of measurement information and measurement-controlled pc systems.

- power measurements (power engineering; obligatory). competences: knowledge of measuring methods and instrumentation for measurements in electrical networks. content: voltage and current measuring transformers, applications and errors; methods for measuring one-phase and three-phase power and power factor; electronic and digital watt-meters; digital electricity meters, principles and errors; introduction to remote measurements and measurement data transmission; grounding grid measurements, touch and step voltages, earth specific measurements; power quality measurements.

- process measurements (automatics and system engineering; elective). competences: development of measurement systems applicable in industrial processes. content: introduction to measurement systems in processes and their role in automated systems; classification of sensors and transducers; transmission of measurement data, voltage and current loops, industrial communication protocols; sources of errors in process measurement systems, electromagnetic interferences, active and passive shields; microprocessor-based measurement systems, smart sensors and mems; materials and technologies of fabrication of smart sensors.

- telecommunication measurements (telecommunications; elective). competences: capability to work with measurement devices and measurement information systems in telecommunications. content: measurement role in quality control of telecommunication equipment; programmable measurement devices; microprocessors and measurement applications; interfaces; measurement data acquisition; frequency and period measurements; high-frequency power measurements; measurement errors in high-frequency power measurements; measurements of non-linear distortions; pulse generator; signal analyzer; spectral analyzer; logic analyzer; digital storage oscilloscope.

- principles of quality control (automatics and system engineering; elective). competences: introduction to the basic principles of quality control and the necessary mathematical background; introduction to the iso 9001 standard. content: history and development of quality control; introduction to the basic terms; the iso 9001 quality standard; total quality management; mathematical models for quality control, review of basic terms and concepts; introduction to diagrams for quality control; special control diagrams for attributes and variables in quality control; specification of limit values, tolerance and other techniques; process control; industrial experiments; design of robust systems.
the objectives of the msc study program in metrology and quality management are to educate masters of science in the field of metrology and quality management who will be capable of working in:
- different companies from industry,
- companies from the power sector,
- industrial and scientific laboratories where precise measurements are an important factor in production,
- companies which sell and service measurement equipment and instrumentation,
- the laboratories which are part of the metrological infrastructure, as well as
- all other companies where measurements and measurement systems are part of the production process.
the study program produces engineers who can work in different sectors and industries, where metrology is a pillar of their successful development. the future masters can also work in scientific institutions and universities, research centres, different laboratories, etc. the recently accredited msc courses in metrology are:
1. principles of metrology and quality management;
2. uncertainty in measurement and calibration;
3. sensors and measurement transducers;
4. microprocessor-based programmable instrumentation;
5. legal and industrial metrology;
6. power systems measurements;
7. processing and transmission of measurement data;
8. computerized measurement systems and virtual instrumentation;
9. digital signal processing;
10. measurement and control systems;
11. environmental monitoring;
12. computer and numerical methods in metrology;
13. quality assurance and quality control;
14. project management;
15. nanometrology and standardization;
16. techniques for non-destructive testing;
17. nanomaterials and nanostructures;
18. metrology of geometrical quantities.
the innovation and creation of the bsc as well as the msc courses and curricula were performed through the eu project "introduction of two-tier studies of metrology", with the active support of:
- the technical university of delft, the netherlands,
- the agh university, krakow, poland, and
- politecnico di milano, italy [9].

4. phase 3: creation of phd study program in metrology

the development of metrology is tightly connected to the development of industrial production, technical cooperation and trade. in the region of south-eastern europe, the development of metrology lagged behind the needs of industry, trade and society [1], [3], [5]. the metrological infrastructure, and especially the national metrological institutes, lack highly educated staff who would be the carriers of further development [3]. therefore, in 2010, an eu initiative was launched by three south-east european (see) countries to jointly create a study program at doctoral level in metrology. the countries which jointly developed the joint phd study program are r. macedonia, r. croatia and r. kosovo, i.e. the universities:
- ss. cyril and methodius university in skopje,
- university of zagreb,
- university of split,
- university of prishtina,
- south-eastern european university in tetovo.
besides the three above-mentioned countries, there is also a lack of staff in the field of metrology in the other balkan countries (greece, serbia, bulgaria, bosnia and herzegovina, montenegro, albania, etc.) [5].
taking into account that metrology is a science represented in all technical disciplines and activities, the candidates who accomplish the doctoral studies have wide opportunities for the application of their knowledge in numerous fields, through problem solving in industry, the health and food sector, environmental protection, energy, and the transportation and trade sector. thus, conditions for the development of the metrological infrastructure for the above-mentioned areas were created. the wider goals of the joint phd education are:
- enhancement of the quality and relevance of higher education in metrology in macedonia, croatia and kosovo;
- upgrading of the capacities of the see universities for international cooperation and for permanent modernisation;
- orientation of the see universities to offer high-quality education in metrology for the necessary industrial development and economic co-operation with the eu;
- intensification of the co-operation of the academic staff of the see and eu universities.
the specific objectives are:
- harmonisation of the studies in metrology in the three-cycle degree system according to the bologna process;
- creation of phd studies in metrology at the see universities;
- development of new courses and modernisation of the existing ones;
- upgrading of laboratories for practical training of phd students in metrology;
- establishment of a joint phd study program in metrology among the see universities;
- transfer of knowledge and experience in the area of metrology.
the creation of the joint study program in metrology was realized through a set of joint activities of the see universities:
1. elaboration of a new regulation on the third cycle of phd studies in metrology, harmonised with the bologna principles;
2. elaboration and adoption of the joint study programme for phd studies in metrology;
3. creation and maintenance of a web page of the phd studies in metrology (www.tempus-metrology.ukim.edu.mk);
4. promotion of the phd studies in metrology;
5. development of the content and teaching materials of the new and modernized courses for phd studies in metrology;
6. acquisition of laboratory equipment;
7. elaboration of an agreement for the joint phd study program among the see universities;
8. training of the academic staff of the see universities by the eu universities, through study visits, workshops and invited lectures;
9. enrolment of phd students in the study programme of metrology and student exchange;
10. dissemination of the results and experience from the creation of the joint phd study program;
11. ensuring the sustainability of the phd study program.
these activities were realized jointly and internationally. figures 2 and 3 show the joint study visits to the laboratories of the braunschweig university of technology and of the swedish national metrology institute sp in boras, respectively. the joint phd study program in metrology consists of four pillars:
- ict in metrology;
- instrumentation, industrial metrology, quality science;
- metrology for life and society;
- scientific metrology.
the pillars address the most challenging areas in contemporary metrology. each of these pillars comprises several courses, given in table 2. the courses are lectured by at least two professors: one from the see universities and one from the eu universities.
the students choose four of the elective courses for their curriculum, and choose a supervisor from the see universities and a co-supervisor from the eu universities for the phd research. the program is realized by sharing academic staff and laboratory facilities and through a common pool of students. student mobility is one of the emphasised components in the realization of the joint phd program in metrology. the mobility is implemented through joint lectures and workshops, phd student conferences and presentations, and study visits for the accomplishment of the phd research. this concept of realization of the joint phd program implies not only educational cooperation, but also scientific bonding through joint research projects and exchange of researchers. the results of the joint phd study program in metrology are the creation of highly qualified professionals and researchers with the following general competences:
- capability of research and development of solutions;
- documenting of scientific research;
- working in interdisciplinary scientific research teams;
- analysis of scientific and expertise problems;
- application of knowledge in praxis;
- application of scientific research procedures and methods;
- possibility of systematization of knowledge;
- capability of generation of new ideas and solutions;
- knowledge of scientific ethics;
- presentation of scientific research results.

figure 2. joint study visit of see academic staff to the laboratories of the braunschweig university of technology.
figure 3. joint study visit of see academic staff to the laboratories of the swedish national metrology institute sp, boras.

table 2. phd study program in metrology.
- ict in metrology: 1. data acquisition and data processing; 2. sensors and sensor networks; 3. applicative software for metrology; 4. modelling and numerical methods in metrology; 5. knowledge discovery and data mining.
- instrumentation, industrial metrology, quality science: 1. signal conditioning; 2. complex monitoring and control systems; 3. metrology for energy; 4. rf measurements and metrology in telecommunications; 5. diagnostics, ndt and quality control.
- metrology for life and society: 1. metrology for life sciences and environmental monitoring; 2. metrology for chemistry, biochemistry and food quality and safety; 3. electromagnetic fields, electrical safety, emc; 4. sensor systems for biomedical measurements and medicine.
- scientific metrology: 1. quantum metrology and nano-metrology; 2. primary standards, precise measurements and calibration; 3. metrology of mechanical quantities.
the specific competences of the doctors of science in the scientific field of metrology are:
- expert knowledge in the areas studied through the courses of the study program in metrology;
- management of scientific and metrology research;
- design of new products and technologies;
- management and design of metrology processes;
- capability of management of the functions in a company and their integration through metrology;
- generation of innovative metrology approaches;
- solving of practical problems by using scientific metrology methods and procedures;
- activities in metrology consulting services connected to the design and engineering of products/processes;
- capability of relating theoretical knowledge to the practical application of metrology in the engineering processes in companies;
- capability of application of research methods in metrology praxis.
by accomplishing the doctoral studies in metrology, the doctors of science in the field of metrology are competent for the following job positions:
- academic staff in higher education institutions;
- researchers in research centers;
- researchers in r&d centers in industry;
- researchers and managers in the metrological infrastructure and the national metrology institutes.

5. quantitative and qualitative analysis of the realized three-cycle university studies in metrology

some of the metrology courses in the undergraduate studies in the area of electrical engineering are obligatory, as given in table 1. thus, all the students enrolling in the third semester also enrol in the obligatory courses in metrology. the average number of students in the third semester at the faculty of electrical engineering and information technologies at the ss. cyril and methodius university in skopje in the last ten years is approx. 200. the success rate of the students who pass the final exam is very high (90 % on average). the number of students who enrol in the elective bsc courses in metrology is variable, because the pool of students varies depending on the study program and the semester in the curriculum. for example, the average number of students enrolling in the course of power systems measurements is approx. 70 in the last five years. however, the interest in some specialized courses, like computerized measurement systems or telecommunication measurements, is low, i.e. approx. 6 students per semester. still, the success rate of these courses is very high (almost 100 %), because of the small groups, the individual approach and the possibility of intensive laboratory praxis. student exchange is not so developed at bsc level, but this opens future possibilities, especially through the eu erasmus program. the academic exchange is done through lectures by visiting professors, approx. 2-3 per semester (figures 4 to 6).

figure 4. joint lectures for phd students in metrology.
figure 5. eu training of the see academic staff (professors) involved in the joint phd study program in metrology.
figure 6. joint lectures for phd students in metrology, open to the wider audience, with participation of metrologists from the industry and other metrology stakeholders.

since the accreditation of the msc study program in metrology and quality management in 2008 at the ss. cyril and methodius university in skopje, 33 students have enrolled in this study program, as given in table 3.
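for clarity, the success rate reported in table 3 below is simply the share of enrolled students who completed the msc, for example:

\[ \text{success rate} = \frac{n_\text{accomplished}}{n_\text{enrolled}} \cdot 100\ \% , \qquad \text{e.g. for 2008: } \frac{7}{12} \cdot 100\ \% \approx 58.33\ \% . \]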
most of them (70 %) hold a bsc degree in electrical engineering. the other candidates hold bsc degrees in physics, chemistry, mechanical engineering, technology engineering or similar. up to now, 16 students have completed their master's theses. at msc level the success rate is lower than at bsc level, as can be concluded from the data in table 3.

table 3. annual data on msc in metrology and quality management.
academic year | enrolled students | accomplished msc | success rate [%]
2008 | 12 | 7 | 58.33
2009 | 4 | 4 | 100.00
2010 | 3 | 1 | 33.33
2011 | 4 | 1 | 25.00
2012 | 2 | 2 | 100.00
2013 | 4 | 1 | 25.00
2014 | 1 | 0 | 0.00
2015 | 3 | 0 | 0.00

the main reason is that the candidates are mostly part-time students and are already employed. the topics of the msc theses are mainly in the metrology of electromagnetic quantities and the metrology of energy and power, which is to be expected considering the bsc background of the students (e.g. "a contribution to the metrology infrastructure in r. macedonia - national standard of electrical energy and power"). however, some of the msc theses are in other scientific areas, like flow and volume metrology (e.g. "elaboration of national standard of mass flow"), length metrology, mass metrology, or some other specialized fields like environmental metrology, metrology for information security or metrology for air traffic security. the exchange of students at msc level is more intensive, especially during the lifetime of the two tempus projects, in 2007-2008 and from 2010 to 2013. in the frame of the projects, approx. 10 msc students realized study stays at the partner universities. also, approx. 10 lectures were given by visiting professors from the partner universities. two students accomplished the experimental part of their msc theses in the laboratories of the partner universities. after the end of the tempus projects, student mobility is enabled through the eu erasmus program (in both directions), or through other international or bilateral scientific programs. since the accreditation of the phd studies in metrology in 2013, six students have enrolled in the study program, only at the ss. cyril and methodius university in skopje. the number of students is satisfactory, bearing in mind that it is the third cycle and considering the specifics of the field. the students have interdisciplinary backgrounds (msc in electrical engineering, mechanical engineering, physics...). until now, no candidate has obtained the phd title. all the students are part-time students, already employed in the industry or the metrology infrastructure. depending on the year of enrolment, they show different success rates in the accomplishment of the phd courses and different scientific progress in their phd research. the phd study program is still at too early a stage to draw more specific conclusions on the success rate. since the accreditation of the phd study program in metrology, which was accomplished at the end of the eu tempus-158599 project as its main outcome, student and professor mobility is enabled through other programs, like the eu erasmus program (in both directions), or through other international or bilateral scientific programs. the international dimension of the joint phd study program in metrology can be seen in table 4, where the academic staff contribution of each of the partner universities is displayed.

table 4. academic staff of the phd study program in metrology.
university | number of professors engaged
ss. cyril and methodius university in skopje | 11
south eastern european university in tetovo | 1
university of zagreb | 5
university of split | 2
university of prishtina | 1
czech technical university in prague | 2
university of pavia | 1
university of zaragoza | 1
braunschweig university of technology | 1
university of gavle | 1
graz university of technology | 1

6. contribution, international impact and sustainability of the three-cycle studies in metrology

the creation of the three-cycle university studies in metrology and their joint realization by the see and the eu universities potentially contribute to the enhancement of the whole scientific and technical cooperation among these countries.
the experience so far shows that the results of this cooperation, such as upgrading the level of the academic staff, upgrading the laboratory facilities or enhancing the exchange of knowledge, are not limited only to the formal university studies. the cooperation also has a positive influence on life-long learning, professional development and training, and on scientific work in general. it is achieved through mobility of the academic staff and the inclusion of professors from other universities and of professionals from the nmis, the metrological infrastructure and industry. invited lectures and workshops in metrology are open to all interested institutions and professionals. the network of the 5 see universities and 7 eu universities is extended to other universities, national metrology institutes, laboratories performing testing and calibrations, and other research institutions in the region, but also wider in europe. the realization of the joint phd study program developed a common language among the participants in the project, and initiated and launched novel joint ideas for further scientific and metrology cooperation. this is a guarantee of the sustainability of the established three-degree study programs in metrology and of the further enhancement of their international impact.

7. conclusions

the paper presented an original process of creation of three-cycle university education in the area of metrology. the specifics, the qualitative and quantitative achievements, and the contribution of the innovated and new study programs were given. education cooperation through the realization of joint study programs, such as the one presented in the paper, also extends the international scientific, technical and metrological cooperation. the jointly educated staff represents a bridge for the realization and further development of educational, scientific, technical and metrological cooperation among the universities, metrology infrastructures and countries. education cooperation through the university programs in metrology is a form which contributes to the joint technological and metrology infrastructure development.

references
[1] m. cundeva-blajer, development of three-cycle university studies in metrology, proc. of 21st imeko world congress, aug. 30 - sept. 4, 2015, prague, czech republic, pp. 42-47.
[2] the bologna declaration, 1999.
[3] l. arsov, m. cundeva-blajer, establishing a metrological infrastructure and traceability of electrical power and energy in the r. macedonia, acta imeko, vol. 2, no. 2, december 2013, pp. 86-90.
[4] education, audiovisual and culture executive agency, overview of the higher education systems in the tempus partner countries: southern mediterranean, a tempus study, no. 14, eacea, brussels, november 2012.
[5] l. arsov, m. cundeva-blajer, internationalization of the phd education in metrology, internationalisation in higher education: evaluating concepts, challenges and strategies, ihe 2013, pradec interdisciplinary conf. proc., vol. 2, prague, czech republic, paper no. 7, 25-26 april 2013.
[6] g. m. rocha, r. p. landim, the brazilian experience in the development of human resources in metrology, proc. of 20th imeko world congress, sept. 9-14, 2012, busan, south korea, pp. tc1-o2.
[7] t. gordiyenko, a. gaber, occupational education of specialists in the field of metrology and instrumentation in ukraine, proc. of 21st imeko world congress, aug. 30 - sept. 4, 2015, prague, czech republic, pp. 56-61.
[8] l. p. van biesen, j. l. kaindu, j. m. boyokame, new tools for training the skills of students in the process of the academisation in the education of engineering degrees: a testimony with application to metrology from the democratic republic of congo, proc. of 21st imeko world congress, aug. 30 - sept. 4, 2015, prague, czech republic, pp. 76-78.
[9] v. dimcev, m. cundeva-blajer et al., introduction of two-tier studies of metrology, eu project financed by the european training foundation, tempus jep cd 19010_2004, 2005-2007, etf final report, skopje, 2008.
[10] higher education law, official gazette of r. macedonia no. 35, 2008.
[11] rulebook on the conditions, criteria and rules for enrolment and studying at the third cycle of studies - doctoral studies, ss. cyril and methodius university, skopje, university gazette, no. 150, 2010.
[12] l. arsov, m. cundeva-blajer et al., creation of the third cycle studies - doctoral studies in metrology, eu tempus iv joint multi-country project, 2010-2013, eacea final project report, skopje, 2013.

acta imeko issn: 2221-870x june 2023, volume 12, number 2, 1 - 8

analysis and optimization of surgical electromagnetic tracking systems by using magnetic field gradients

gregorio andria1, filippo attivissimo1, attilio di nisio1, anna m. l. lanzolla1, mattia alessandro ragolia1

1 department of electrical and information engineering, polytechnic of bari, via e. orabona 4, 70125 bari, italy

abstract
background: electromagnetic tracking systems (emtss) are widely used in surgical navigation, allowing to improve the outcome of diagnosis and surgical interventions by providing the surgeon with the real-time position of surgical instruments during medical procedures. objective: the main goal is to improve the limited range of current commercial systems, which strongly affects the freedom of movement of the medical team. studies are currently being conducted to optimize the magnetic field generator (fg) configuration (both geometrical arrangement and electrical properties), since it affects tracking accuracy. methods: in this paper, we discuss experimental data from an emts based on a developed 5-coils fg prototype, and we show the correlation between position tracking accuracy and the gradients of the magnetic field. therefore, we optimize the configuration of the fg by employing two different metrics, based on i) the maximization of the amplitude of the magnetic field, as reported in the literature, and ii) the maximization of its gradients. results: the two optimized configurations are compared in terms of position tracking accuracy, showing that choosing the magnetic field gradients as the objective function for optimization leads to higher position tracking accuracy than maximizing the magnetic field amplitude.

section: research paper
keywords: electromagnetic tracking; magnetic field generator optimization; magnetic field gradients; dipole model
citation: gregorio andria, filippo attivissimo, attilio di nisio, anna m. l. lanzolla, mattia alessandro ragolia, analysis and optimization of surgical electromagnetic tracking systems by using magnetic field gradients, acta imeko, vol. 12, no. 2, article 26, june 2023, identifier: imeko-acta-12 (2023)-0226
section editor: francesco lamonaca, university of calabria, italy
received june 14, 2023; in final form june 19, 2023; published june 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: mattia alessandro ragolia, e-mail: mattiaalessandro.ragolia@poliba.it
1. introduction

reducing the invasiveness of medical measurement systems is driving research toward the development of new systems and techniques to provide high-quality information with a minimally invasive approach, such as diagnostic techniques [1], miniaturized biosensors [2], techniques to monitor vital parameters [3]-[5], and tracking systems for surgical navigation [6]. among these, surgical navigation systems are currently receiving increasing attention, in particular with the aim of increasing the tracking distance. two main types of tracking systems are used in surgery: electromagnetic and optical systems [7]. electromagnetic tracking systems (emtss) use a small coil sensor, inserted inside the surgical instrument, to determine its position and orientation by measuring the amplitude of a magnetic field of known geometry generated by a field generator (fg) [8]. the small size of the sensor and the independence from a direct line of sight overcome the limitations of optical systems [7], thus allowing the use of flexible instruments such as needles, catheters, and endoscopes [8], [9]. emts technology presents two main limitations: a) it has high sensitivity to em interference produced by electronic devices and to magnetic field distortions due to metal objects; b) current commercial systems are not able to provide accurate position estimates at a large distance from the signal source, due to the degradation of the magnetic field amplitude with the distance from the fg. hence, the signal source must be placed very close to the operating volume (i.e., the patient's table), thus hindering the medical staff during the operation. currently, commercial emtss provide accurate estimation of the surgical instrument position only at a limited distance from the fg, generally not more than 0.5 m [8], [10]. it should be noted that a higher tracking distance can be achieved by some systems, such as the polhemus long ranger trackers [11], but they employ larger fgs and bigger sensors, which are not suitable for tracking surgical instruments such as needles and endoscopes, and cannot be used in many applications where a very small size is required.
the design of the fg (electrical and geometrical parameters, arrangement of transmitting coils, processing techniques) determines the distribution of the magnetic field in space and the size of the fg itself, and it affects tracking performance [12]. often the transmitting coils must be placed inside a well-defined space due to practical needs, such as the configuration of the clinical environment, weight limitations, or application requirements. moreover, the tracking volume is usually proportional to the size of the fg (i.e., the magnetic field intensity) [8]. four types of generators can therefore be distinguished [8]:
- standard fgs: they are the most common type and are produced by different manufacturers. they are small and cover an operating volume with linear sizes of a few tens of centimeters.
- flat fgs: they are flat and thin, designed to be placed directly under the operating table where the patient lies. shielding the back of the fg greatly reduces the sensitivity to interference from below, such as eddy currents induced in metal structures.
- mobile fgs: they are small fgs, which offer a small operating volume. their advantage is that they can be positioned near the operating area, thus offering good accuracy despite the reduced range.
- long-range fgs: available only from polhemus inc., they can cover areas up to 2 m.
different manufacturers propose their own fgs, but they do not disclose their design principles. hence, ongoing research is focused on designing fg parameters with the aim of increasing tracking volume and accuracy [13]-[15]. in [16] we proposed a solution to study the effect of system parameters and fg configuration on tracking accuracy, and we compared the performance of two fg configurations, whose transmitting coils were identical in their geometrical and electrical parameters, except for their number and their pose (i.e., position and orientation) in space. in particular, one configuration consisted of a flat fg with six coplanar transmitting coils, whereas the other one was a representation of the 5-coils fg prototype presented and characterized in [17]-[19], which was developed to overcome the state of the art of current commercial systems by increasing the tracking distance beyond 0.5 m from the fg. in [16] we did not optimize any fg configuration, but only compared the performance of the chosen configurations. in [20], the optimization of the configuration of the fixed sensor array of an emts employing a mobile transmitting coil was proposed, by running a two-step evolutionary algorithm and using the position rmse (root mean square error) as performance metric. the drawback is that the computational cost is large, and the performance metric is valid only for that specific reconstruction algorithm. in [12], a simulator was developed to evaluate tracking accuracy, according to certain performance metrics, as a function of system parameters. in that study, a test volume of 500 mm × 500 mm × 500 mm, which is typical for medical applications, was considered.
the authors optimized the configuration of a fg composed of eight transmitting coils, for both a compact fg volume and a wide 3d volume, by defining and comparing two objective functions: i) maximizing the voltage induced in the sensor (i.e., the magnetic field) or ii) minimizing the positional rmse of the sensor coil within the defined testing volume. the best performance was obtained when choosing the sensor position rmse as objective function, which, however, depends on the specific reconstruction algorithm used. in this paper we consider the same measurement volume as in [12], and we argue that the fg configuration can be optimized by maximizing the gradients of the induced voltage with respect to the sensor position (as highlighted by preliminary experimental results), rather than maximizing the induced voltage as done in [12]; we therefore follow an approach similar to [21], where the authors defined a metric based on the fisher information matrix. moreover, the positional rmse in [12] depended on the chosen reconstruction algorithm, whereas our proposed metric is valid regardless of the position reconstruction algorithm. in [22] we have already discussed the utility of considering the magnetic field gradients to assess system performance, and we conjectured that the position error was influenced by the gradients, whereas in this paper we perform simulations that employ information from the gradients to analyze fg configurations. it should be noted that emtss that employ a fixed sensor array and a moving emitting coil (instead of a fixed fg) are also present in the literature. the approach that we develop in this paper is general, and it is valid for both cases, i.e., i) multiple fixed transmitting coils and one moving sensor, or ii) one moving transmitting coil and multiple fixed sensors. this paper is structured as follows. in section 2 we describe the model of the magnetic field obtained by approximating the transmitting coils as magnetic dipoles, which leads to the definition of the sensor induced voltage and its spatial gradient. in section 3 we discuss experimental data from an emts based on a novel 5-coils fg prototype, and we show the correlation between the position error and the gradients of the magnetic field. in section 4 we define two metrics based on i) the amplitude of the magnetic field and ii) the gradients of the magnetic field, and we employ them to optimize the arrangement of the transmitting coils of an 8-coils fg. then, the obtained fg configurations are compared in terms of position tracking accuracy, and the role of the gradient distribution is highlighted. conclusions are drawn in section 5.

2. modeling the magnetic field and its spatial gradients

if the wavelength of the generated magnetic fields is greater than the considered tracking volume, the magnetic field produced by the fg can be calculated by treating the transmitting coils as magnetic dipoles. therefore, the magnetic moment produced by the i-th transmitting coil can be described as a function of the coil parameters and excitation [18], [23], [24]:

\[ \boldsymbol{m}_{tx,i} = m_{tx,i}\,\hat{\boldsymbol{n}}_{tx,i}, \qquad m_{tx,i} = N_{tx,i}\,S_{tx,i}\,I_i, \qquad S_{tx,i} = \pi r_{tx,i}^2 \tag{1} \]

where \(\hat{\boldsymbol{n}}_{tx,i}\) is the versor orthogonal to the surface \(S_{tx,i}\) of the i-th coil, and \(r_{tx,i}\), \(N_{tx,i}\) and \(I_i\) denote the coil radius, the number of turns and the rms value of the sinewave excitation current, respectively. the same size and current are considered for all transmitting coils in the fg design, but slight differences in the real quantity values have been observed.
to account for the mismatch of the coils' properties, these parameters are denoted by the subscript \(i\). the rms magnetic field generated by the i-th transmitting coil in a generic point \(\boldsymbol{p}_s = [x_s, y_s, z_s]^T\) is [18], [23], [24]:

\[ \mathbf{B}_i(\boldsymbol{p}_s, I_i) = B_{xi}\,\hat{\boldsymbol{x}} + B_{yi}\,\hat{\boldsymbol{y}} + B_{zi}\,\hat{\boldsymbol{z}} = \frac{\mu_0}{4\pi}\,\frac{m_{tx,i}}{d_i^3}\left[3(\hat{\boldsymbol{n}}_{tx,i}\cdot\hat{\boldsymbol{n}}_{d,i})\,\hat{\boldsymbol{n}}_{d,i} - \hat{\boldsymbol{n}}_{tx,i}\right] , \tag{2} \]

where \(d_i = |\boldsymbol{d}_i|\), with \(\boldsymbol{d}_i = \boldsymbol{p}_s - \boldsymbol{p}_{tx,i}\) the vector distance between \(\boldsymbol{p}_s\) and the center \(\boldsymbol{p}_{tx,i}\) of the i-th transmitting coil, and \(\hat{\boldsymbol{n}}_{d,i}\) its associated versor. by considering a homogeneous magnetic flux on the surface \(S_s\) of the sensing coil, the voltage induced by the i-th transmitting coil may be calculated as:

\[ v_i = 2\pi f_i N_s S_s\, \mathbf{B}_i \cdot \hat{\boldsymbol{n}}_s , \tag{3} \]

where \(N_s\) is the number of sensor coil turns, \(f_i\) is the excitation frequency (different for each transmitting coil) and \(\hat{\boldsymbol{n}}_s\) is the versor orthogonal to the sensor surface (figure 1), given by

\[ \hat{\boldsymbol{n}}_s = \begin{bmatrix} \cos\alpha_s \cos\beta_s \\ \sin\alpha_s \cos\beta_s \\ \sin\beta_s \end{bmatrix} . \tag{4} \]

figure 1. graphical representation and reference system of the 5-coils fg. the sensor position is expressed both in cartesian \((x_s, y_s, z_s)\) and spherical coordinates \((\rho, \phi, \theta)\).

we can rewrite (3) in matrix notation:

\[ \boldsymbol{v} = \begin{bmatrix} v_1 \\ \vdots \\ v_n \end{bmatrix} = \mathbf{A}\,\mathbf{B}\,\hat{\boldsymbol{n}}_s = \mathbf{H}\,\hat{\boldsymbol{n}}_s , \tag{5} \]

where

\[ \mathbf{A} = \mathrm{diag}(\alpha_1, \alpha_2, \dots, \alpha_n) \tag{6} \]

is the \(R^{n\times n}\) matrix of sensor sensitivities at each frequency, with \(\alpha_i = 2\pi f_i N_s S_s\), and

\[ \mathbf{B}(\boldsymbol{p}_s) = \begin{bmatrix} B_{x1} & B_{y1} & B_{z1} \\ \vdots & \vdots & \vdots \\ B_{xn} & B_{yn} & B_{zn} \end{bmatrix} \tag{7} \]

is the \(R^{n\times 3}\) matrix of the generated magnetic fields. it is convenient to define \(\mathbf{H} = \mathbf{A}\,\mathbf{B}\). the gradient of the voltage induced by the i-th transmitting coil with respect to the position \(\boldsymbol{p}_s\) of the sensor (i.e., its spatial gradient) can be expressed as [21]:

\[ \nabla_{\boldsymbol{p}_s} v_i = \left[\frac{\partial v_i}{\partial x_s}, \frac{\partial v_i}{\partial y_s}, \frac{\partial v_i}{\partial z_s}\right]^T = \alpha_i\,\frac{\mu_0}{4\pi}\,\frac{m_{tx,i}}{d_i^4}\left\{15(\hat{\boldsymbol{n}}_s\cdot\hat{\boldsymbol{n}}_{d,i})(\hat{\boldsymbol{n}}_{tx,i}\cdot\hat{\boldsymbol{n}}_{d,i})\,\hat{\boldsymbol{n}}_{d,i} - 3\left[(\hat{\boldsymbol{n}}_s\cdot\hat{\boldsymbol{n}}_{d,i})\,\hat{\boldsymbol{n}}_{tx,i} + (\hat{\boldsymbol{n}}_{tx,i}\cdot\hat{\boldsymbol{n}}_{d,i})\,\hat{\boldsymbol{n}}_s + (\hat{\boldsymbol{n}}_s\cdot\hat{\boldsymbol{n}}_{tx,i})\,\hat{\boldsymbol{n}}_{d,i}\right]\right\} . \tag{8} \]
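for concreteness, the dipole model of eqs. (1)-(5) can be coded in a few lines. the following python/numpy sketch is illustrative only, not the authors' code; the coil parameters in the example are placeholders and do not correspond to the 5-coils prototype:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def dipole_field(p_s, p_tx, n_tx, m_tx):
    # eq. (2): rms field at p_s of a coil at p_tx treated as a magnetic dipole
    d = np.asarray(p_s, float) - np.asarray(p_tx, float)
    dist = np.linalg.norm(d)
    d_hat = d / dist
    return MU0 / (4 * np.pi) * m_tx / dist**3 * (
        3 * np.dot(n_tx, d_hat) * d_hat - np.asarray(n_tx, float))

def induced_voltage(p_s, n_s, coils, N_s, S_s):
    # eqs. (1), (3), (5): vector of rms voltages induced in the sensor coil
    v = []
    for c in coils:
        m_tx = c["N_tx"] * np.pi * c["r_tx"]**2 * c["I"]           # eq. (1)
        B = dipole_field(p_s, c["p_tx"], c["n_tx"], m_tx)          # eq. (2)
        v.append(2 * np.pi * c["f"] * N_s * S_s * np.dot(B, n_s))  # eq. (3)
    return np.array(v)

# example with placeholder values: one coil in the z = 0 plane,
# sensor 650 mm above it and oriented along the z-axis
coil = dict(p_tx=[0.0, 0.0, 0.0], n_tx=[0.0, 0.0, 1.0],
            r_tx=0.02, N_tx=100, I=1.0, f=1000.0)
print(induced_voltage([0.1, 0.1, 0.65], [0.0, 0.0, 1.0], [coil],
                      N_s=50, S_s=1e-5))
```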
3. analysis of magnetic field gradients

in [25], experimental tests were carried out to evaluate the position rms error of an emts based on a novel 5-coils fg prototype based on fdm (frequency-division multiplexing), which was designed to reduce the mutual inductances between the transmitting coils while still maintaining a high induced voltage in the test volume, and in compliance with the ieee standards for safety levels [26], [27]. in this section we refer to those experimental data, and we carry out new analyses to show the correlation between the position error and the gradients of the magnetic field.

3.1. error propagation based on magnetic field gradients

the standard deviation of repeated measurements of the sensor position \(\boldsymbol{p}_s = [x_s, y_s, z_s]^T\) is calculated to evaluate the position repeatability error. we assume that uncorrelated noise affects the voltage components \(\boldsymbol{v} = [v_1, \dots, v_n]^T\) (n = 5 for the examined case) induced in the sensor. the diagonal covariance matrix is:

\[ \boldsymbol{C}_v = \begin{bmatrix} \sigma_{v_1}^2 & \dots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \dots & \sigma_{v_n}^2 \end{bmatrix} . \tag{9} \]

let \(\boldsymbol{g} = [g_1, \dots, g_n]^T : \chi \subseteq R^3 \to R^n\) be the function relating \(\boldsymbol{p}_s\) and \(\boldsymbol{v}\):

\[ \boldsymbol{v} = \boldsymbol{g}(\boldsymbol{p}_s) . \tag{10} \]

expression (10) can be linearly approximated in the neighborhood of a generic point \(\boldsymbol{p}_s\) by means of the jacobian matrix \(\boldsymbol{J_g}(\boldsymbol{p}_s) \in R^{n\times 3}\), whose rows are the gradients \(\nabla g_j(\boldsymbol{p}_s)\), defined as:

\[ \nabla g_j(\boldsymbol{p}_s) = \left[\frac{\partial g_j}{\partial x}(\boldsymbol{p}_s), \frac{\partial g_j}{\partial y}(\boldsymbol{p}_s), \frac{\partial g_j}{\partial z}(\boldsymbol{p}_s)\right] , \quad j = 1, \dots, n , \tag{11} \]

which represent the slope of the function \(\boldsymbol{g}(\boldsymbol{p}_s)\) around \(\boldsymbol{p}_s\) along each direction. expression (11) is a general version of (8): it is independent of the chosen magnetic field model, and hence it can be used for complex systems or when a field model has not been provided or developed. as discussed in [22], [25], the variance \(\boldsymbol{\sigma}_{\boldsymbol{p}_s}^{\circ 2} = [\sigma_x^2, \sigma_y^2, \sigma_z^2]^T\) of repeated measurements of the sensor position \(\boldsymbol{p}_s\) can be estimated by:

\[ \boldsymbol{\sigma}_{\boldsymbol{p}_s}^{\circ 2} \simeq \left(\boldsymbol{J_g^+}(\boldsymbol{p}_s)\right)^{\circ 2} \boldsymbol{\sigma}_v^{\circ 2} , \tag{12} \]

where \(\boldsymbol{J_g^+}(\boldsymbol{p}_s) \in R^{3\times n}\) is the moore-penrose pseudoinverse of the jacobian matrix, \(\boldsymbol{\sigma}_v^{\circ 2}\) is the n×1 column vector containing the elements of the diagonal of \(\boldsymbol{C}_v\) in (9), \(\circ\) is the hadamard power operator and \(\circ 2\) denotes the element-wise square. the moore-penrose pseudoinverse in (12) provides a least-squares solution to the linearized expression of (10). for details, the reader should refer to [22], [25], [28]. equation (10) is experimentally sampled by moving the sensor on a grid of \(N_p\) evenly spaced points \(\boldsymbol{p}_{sj}\), \(j = 1, \dots, N_p\), and measuring the voltages \(v_i\). the rms error, also known as mean radial spherical error in the context of 3d localization, has been determined for each grid point as \(\sigma_{p_s} = \sqrt{\sigma_{x_j}^2 + \sigma_{y_j}^2 + \sigma_{z_j}^2}\). the partial derivatives are experimentally estimated by the ratios \(\Delta v_j / \Delta x\), \(\Delta v_j / \Delta y\) and \(\Delta v_j / \Delta z\), considering small variations of the sensor position. it should be noticed that this approach is applicable to any position reconstruction algorithm that can be linearized for error propagation. as shown in [22], the same approach can be used to evaluate the effects of residual drift in voltage measurements.

3.2. analysis of the pseudoinverse of the jacobian matrix

the jacobian matrix \(\boldsymbol{J_g}\) introduced in section 3.1 gives important information about what to expect in terms of accuracy in different regions of the tracking volume. in this section we evaluate the elements of the pseudoinverse \(\boldsymbol{J_g^+}\) of the jacobian matrix in a volume of 400 mm × 500 mm × 400 mm, for different orientations of the magnetic sensor. for each test, the sensor has been placed, by means of the industrial robot, in a regular grid of \(N_p = n_x \cdot n_y \cdot n_z\) points \(\boldsymbol{p}_j\), \(j = 1, \dots, N_p\), where \(n_x = 5\), \(n_y = 6\), \(n_z = 5\) are the numbers of points along the x-, y- and z-axis, respectively, with a step of 100 mm in each direction; hence each plane normal to the z-axis contains \(n_x \cdot n_y = 30\) points, for a total of \(N_p = 150\) points. the grid has been scanned by increasing the x-coordinate first, then the y-coordinate, and finally the z-coordinate, considering the reference system of figure 1. in figure 2, figure 3 and figure 4, the elements of \(\boldsymbol{J_g^+}\) are shown for the sensor oriented along the x-, y- and z-axis, respectively, thus considering the \(B_x\), \(B_y\) and \(B_z\) components of the magnetic field.

figure 2. elements of the pseudoinverse \(\boldsymbol{J_g^+}\) of the jacobian matrix with the sensor aligned along the x-axis, thus measuring the \(B_x\) component of the magnetic field.
figure 3. elements of the pseudoinverse \(\boldsymbol{J_g^+}\) of the jacobian matrix with the sensor aligned along the y-axis, thus measuring the \(B_y\) component of the magnetic field.
figure 4. elements of the pseudoinverse \(\boldsymbol{J_g^+}\) of the jacobian matrix with the sensor aligned along the z-axis, thus measuring the \(B_z\) component of the magnetic field.

the point order has been changed for figure 3, by increasing the y-coordinate first and then the x-coordinate, in order to highlight the symmetry when switching the x- and y-coordinates. it can be noted that the elements of \(\boldsymbol{J_g^+}\) generally increase when the distance from the fg increases, in accordance with the magnetic field model. the elements of \(\boldsymbol{J_g^+}\) related to \(B_z\) are much lower than those related to \(B_x\) and \(B_y\). by comparing figure 2 and figure 3, it can be noted that when the sensor is aligned along the x-axis, the elements related to the y-component are higher, whereas those related to the x-component are lower; vice-versa when the sensor is aligned along the y-axis; when the sensor is aligned along the z-axis, all elements assume low values. moreover, peaks are present in the elements of \(\boldsymbol{J_g^+}\).
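the error propagation of section 3.1 can be sketched as follows, again as an illustration rather than the authors' implementation: the jacobian of eq. (11) is estimated by central finite differences, mirroring the experimental estimation of the partial derivatives described above, and eq. (12) is applied through the moore-penrose pseudoinverse. the forward model g may be, for instance, the induced_voltage helper of the previous sketch with a fixed sensor orientation:

```python
import numpy as np

def numerical_jacobian(g, p_s, h=1e-4):
    # central-difference estimate of J_g in R^{n x 3}, cf. eq. (11);
    # g maps a 3d position to the n-vector of induced voltages
    p_s = np.asarray(p_s, float)
    n = len(g(p_s))
    J = np.empty((n, 3))
    for k in range(3):
        dp = np.zeros(3)
        dp[k] = h
        J[:, k] = (g(p_s + dp) - g(p_s - dp)) / (2 * h)
    return J

def position_rms_error(J, sigma_v):
    # eq. (12): element-wise squares implement the hadamard power
    J_pinv = np.linalg.pinv(J)                       # 3 x n pseudoinverse
    var_p = (J_pinv**2) @ (np.asarray(sigma_v)**2)   # [sx^2, sy^2, sz^2]
    return np.sqrt(var_p.sum())                      # mean radial spherical error
```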
it should be noted that we conducted a related analysis in [22], by investigating the 3·n elements of \(\boldsymbol{J_g^+}\), but in that case only one orientation of the sensor was considered instead of multiple orientations, and the sensor was moved along a single linear trajectory instead of a large volume as done here; hence, no general considerations could be drawn in that case.

3.3. position error evaluation through gradients

in [25] we evaluated the position rms error of the 5-coils fg prototype, by applying the procedure shown in section 3.1 in the same volume and by employing the same pseudoinverse values analyzed in section 3.2. hence, in this section we refer to those experimental data, and we relate the obtained results to the elements of the pseudoinverse of the jacobian matrix, which is influenced by the arrangement of the transmitting coils of the fg. figure 5 and figure 6 show the position error versus the point index, separately for each z-plane for better visualization. the point order has been changed for figure 6, by increasing the y-coordinate first and then the x-coordinate, in order to highlight the symmetry when switching the x- and y-coordinates. the error generally increases when increasing the z-coordinate, in accordance with the magnetic field model and with the previous results. when the sensor is aligned along the x-axis, we found that the error of the y-component is higher (not shown here, see [25]), whereas the error of the x-component is lower; vice-versa when the sensor is aligned along the y-axis. the error of the z-component (not shown here) is very low in both cases, about one order of magnitude lower. this is in accordance with the results obtained in section 3.2. by considering the elements of \(\boldsymbol{J_g^+}\) shown in section 3.2, we can affirm that the increased error obtained for orientations along the x- and y-axis is due to higher values of the elements of \(\boldsymbol{J_g^+}\) (i.e., lower gradients of the magnetic field), rather than to higher voltage noise: the latter, in fact, decreases with the distance from the fg, whereas the position error increases. in accordance with the results of section 3.2, figure 5 and figure 6 highlight some regions where the position rms error is lower than in others, as well as the presence of peaks in correspondence of x = 0 mm in figure 5 (y = ±50 mm in figure 6) when the sensor is aligned along the x-axis (y-axis for figure 6); these peaks can be related to the peaks in the elements of \(\boldsymbol{J_g^+}\). except for those peaks, the position error is always ≤ 0.8 mm, which is still slightly higher than the values that we obtained in [22]: there, in fact, the sensor was oriented more toward the fg, and not in the xy-plane, thus presenting errors similar to the case of the sensor aligned along the z-axis.

figure 5. position rms error obtained by propagation through gradients, with the sensor aligned along the x-axis, thus measuring the \(B_x\) component of the magnetic field.
figure 6. position rms error obtained by propagation through gradients, with the sensor aligned along the y-axis, thus measuring the \(B_y\) component of the magnetic field.
4. optimization of the field generator configuration
following the observations of section 3, in this section we optimize the configuration of the fg by employing two different metrics, based on i) the maximization of the induced voltage (i.e., of the magnetic field), as in [12], and ii) the maximization of the gradients of the magnetic field, and we evaluate the tracking accuracy associated with the optimized configurations.

4.1. objective functions for the optimization procedure
the fg configuration is optimized by minimizing two different cost functions, namely $F_1$ and $F_2$:

• $F_1$ is the same proposed in [12]. it is defined as follows:
$$F_1 = \frac{1}{\frac{1}{n N_p} \sum_{j=1}^{N_p} \sum_{i=1}^{n} |v_{ij}|} = \frac{1}{v_\mathrm{mean}}\,, \qquad (13)$$
where $N_p$ is the number of points of the volume considered for the optimization procedure, and $v_{ij}$ is the voltage induced by the i-th transmitting coil when the sensor is at the j-th grid point. in [12] a constant noise level was considered in the whole test volume, and minimizing $F_1$ equals maximizing the measured induced voltage in the test volume.

• $F_2$ is based on the computation of the spatial gradients expressed by (8). we follow an approach similar to [21], where the authors optimized the allocation of measurements for an emts based on tdm (time-division multiplexing) by using a cost function based on the fisher information matrix to select a subset of a large number (at least 1089) of fixed coplanar sensors, with dipole moments all oriented along the z-axis. they considered a constant $\sigma_v^2$ in the whole test volume; however, they did not evaluate the tracking performance related to their configurations. therefore, we define the following cost function $F_2$:
$$F_2 = - \sum_{j=1}^{N_p} \log \det \mathbf{M}_j(\boldsymbol{p}_s)\,, \qquad (14)$$
where
$$\mathbf{M}_j(\boldsymbol{p}_s) = \sum_{i=1}^{n} \frac{[\nabla(\boldsymbol{p}_s) v_{ij}][\nabla(\boldsymbol{p}_s) v_{ij}]^T}{\sigma_{ij}^2} \qquad (15)$$
is the fisher information matrix (fim), $\nabla(\boldsymbol{p}_s) v_{ij}$ is the spatial gradient of $v_{ij}$, defined by expression (8), and $\sigma_{ij}^2$ is the variance of the voltage noise at the j-th point, related to the i-th transmitting coil. under the assumption that the noise terms are independent, it follows from the cramér-rao inequality that maximizing the fim (in some sense) minimizes the attainable covariance of the estimated position, providing a lower bound [21], [29]:
$$\mathrm{cov}\,\hat{\boldsymbol{p}}_s \geq \boldsymbol{M}^{-1}\,. \qquad (16)$$
since it is generally not possible to find an optimal fim, the real-valued function $F_2$ can be chosen instead [21], [30]. minimizing $F_2$ is related to the approach followed in section 3.1 through expression (12): in fact, both the covariance (12) and the inverse of the fim can be interpreted as the inverse of the hessian of the weighted least-squares criterion [30]. note that, in contrast to [21], here the model of the voltage noise is composed of two terms, i.e., $\sigma_{ij}^2 = \sigma_{acq}^2 + \sigma_{B_{ij}}^2(\sigma_I)$, where $\sigma_{acq}^2$ depends on the daq device and is assumed to be constant in the whole test volume, whereas $\sigma_{B_{ij}}^2$ depends on the transmitting coil's frequency and pose, on the noise $\sigma_I$ of the excitation currents, and on the sensor position.

figure 4. elements of the pseudoinverse $\boldsymbol{J}_g^{+}$ of the jacobian matrix with the sensor aligned along the z-axis, thus measuring the $B_z$ component of the magnetic field.
figure 5. position rms error obtained by propagation through gradients. the sensor is aligned along the x-axis, thus measuring the $B_x$ component of the magnetic field.
figure 6. position rms error obtained by propagation through gradients. the sensor is aligned along the y-axis, thus measuring the $B_y$ component of the magnetic field.
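the cost $F_2$ in (14)-(15) can be evaluated with a few lines of code once the spatial gradients of the induced voltages are available (e.g., from a dipole-model simulation). the sketch below is illustrative: the array layout and variable names are our own assumptions, not the authors' implementation.

```python
# sketch of the fim-based cost F2 = -sum_j log det M_j, with
# M_j = sum_i grad_i grad_i^T / sigma_ij^2 as in (15).
import numpy as np

def cost_F2(grad_v, sigma2):
    """grad_v: (Np, n, 3) spatial gradients of v_ij; sigma2: (Np, n) variances."""
    F2 = 0.0
    for gv, s2 in zip(grad_v, sigma2):
        M = sum(np.outer(g, g) / s for g, s in zip(gv, s2))   # 3x3 fim at point j
        _, logdet = np.linalg.slogdet(M)                      # numerically safe log det
        F2 -= logdet          # minimizing F2 maximizes the gathered information
    return F2
```

this scalar can then be handed to a general-purpose constrained minimizer (the paper uses the matlab fmincon function), with the coil positions and orientations as decision variables.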
the noise values for the simulations have been set according to experimental tests conducted on the 5-coils fg prototype, with an excitation current of 1 a rms for each transmitting coil, thus obtaining the noise values $\sigma_{acq}$ = 20 nv and $\sigma_I$ = 0.07 ma, as in [19]. the matlab fmincon function is used to minimize the cost functions, thus optimizing the fg configuration, by setting the boundaries of the transmitting coil positions within a box volume of 300 mm × 300 mm × 70 mm. a planar grid of 500 mm × 500 mm is considered at a height from the fg of z = 650 mm, with a step of 50 mm along the x- and y-axis, for a total of 121 positions. at each position, the sensor coil is aligned along the x-, y- and z-axis, respectively; therefore $N_p$ = 121 × 3 = 363 sensor poses are considered.

4.2. evaluation of position tracking accuracy
to compare different fg configurations, we define a test volume consisting of a grid of $N_p$ points: a volume of 500 mm × 500 mm × 600 mm is considered (from z = 50 mm to z = 650 mm), with a step of 50 mm along the x- and y-axis, and of 100 mm along the z-axis, for a total of 847 positions of the sensor coil. at each position, the sensor coil is aligned along the x-, y- and z-axis, respectively; therefore $N_p$ = 847 × 3 = 2541 sensor poses are considered. for each point, we evaluate the position error
$$e_j = \sqrt{(\hat{x}_j - x_j)^2 + (\hat{y}_j - y_j)^2 + (\hat{z}_j - z_j)^2}\,, \qquad (17)$$
where $x_j$, $y_j$, $z_j$ denote the actual sensor position at the j-th grid point, whereas $\hat{x}_j$, $\hat{y}_j$, $\hat{z}_j$ denote the position estimated by applying a reconstruction algorithm based on the dipole model approximation, as we have also done in [16], [18], [19]. in a real scenario, the position reconstruction algorithm will use the sensor pose estimate of the previous point as the starting guess for the estimation of the following one. to simulate a real scenario, a position and orientation perturbation is added to the starting guess pose: for each cartesian coordinate, values from the uniform distribution $U(-\sqrt{3}\ \mathrm{mm}, \sqrt{3}\ \mathrm{mm})$ are added, whereas for the polar and azimuthal angles in spherical coordinates we consider $U(-1°, 1°)$. the levenberg-marquardt algorithm is used to solve the position reconstruction problem. for the details, see [19].
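given the true and reconstructed positions over the simulated sensor poses, the error metric (17) and its aggregation reduce to the following sketch; the reconstruction step itself (dipole model plus levenberg-marquardt) is outside this snippet, and the perturbation values simply restate the ones given above. names and shapes are our own illustrative choices.

```python
# position error e_j as in (17), aggregated into mean/std/max metrics.
import numpy as np

def error_metrics(p_true, p_est):
    """p_true, p_est: (Np, 3) arrays of actual and estimated positions."""
    e = np.linalg.norm(p_est - p_true, axis=1)
    return {"mean": e.mean(), "std": e.std(), "max": e.max()}

# perturbed starting guess, as in the simulated real scenario:
rng = np.random.default_rng(0)
xyz_offset = rng.uniform(-np.sqrt(3), np.sqrt(3), size=3)   # mm, per coordinate
angle_offset = rng.uniform(-1.0, 1.0, size=2)               # degrees, polar/azimuthal
```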
4.3. results and discussion
three different fg configurations are compared, as shown in figure 7: two of them are composed of eight transmitting coils, whose arrangement in space has been optimized by employing $F_1$ and $F_2$, whereas the third represents the 5-coils fg prototype developed in [17], which was designed to minimize the mutual inductances while still maintaining a high induced voltage in the test volume. as reported in [12], eight transmitting coils are commonly applied in real systems; this number of coils has been found elsewhere to be the minimum number of coplanar coils required to continuously track a sensor coil [13]. hence, eight transmitting coils are considered in this work for a direct comparison with [12]; all transmitting coils are identical in their electrical and geometrical parameters, and are powered with sinusoidal currents of 1 a rms. in contrast to [12], where the authors considered the same frequency of 1 khz for all transmitting coils, here we consider different frequencies for each coil, with a frequency gap of 1 khz, according to the values used for the first fg prototype, as shown in table 2; the optimization procedure takes this difference into account.

the magnetic field produced by the 8-coils fgs complies with the ieee standards for safety levels [26], [27] beyond 10 cm from the fgs (i.e., z ≥ 10 cm in the fg reference system). table 1 shows the position error metrics related to the three different fg configurations. it can be noted that the optimized configurations lead to more accurate position tracking with respect to the configuration with five coils. moreover, optimizing with respect to $F_2$ leads to better results than optimizing with respect to $F_1$.

table 1. position error metrics related to three different fg configurations: a) 5-coils compact fg, b) 8-coils fg optimized with $F_1$, c) 8-coils fg optimized with $F_2$.

position error (mm) | (a)  | (b)  | (c)
mean                | 0.80 | 0.77 | 0.42
std                 | 0.92 | 0.82 | 0.59
max                 | 9.44 | 5.96 | 2.70

figure 7. three different fg configurations under test: a) a 5-coils fg representing the prototype developed in [17]; b) an 8-coils fg optimized by applying $F_1$ as done in [12]; c) an 8-coils fg optimized by applying $F_2$. the same scale is applied to all subfigures.

note also that both optimized configurations fall within the defined volume, and configuration (b) is more compact than configuration (c), thus resulting in a smaller size of the fg, which is an aspect to consider during system design. figure 8 shows the value of the mean induced voltage (i.e., $1/F_1$, for better visualization) and of the cost function $F_2$ for each grid point. only points with index > 363 are considered, i.e., z ≥ 350 mm, since for z ≤ 250 mm the position error does not present significant differences between the configurations. as expected, the maximum mean induced voltage is achieved for configuration (b); nevertheless, the best position tracking accuracy is obtained with configuration (c), which presents the lowest value of the cost function $F_2$. configurations (a) and (b) present similar values of $F_2$, but the mean induced voltage of configuration (a) is much lower than that of configuration (b): this is probably responsible for the reduced accuracy of the 5-coils fg. it should be noted that both the mean induced voltage and the cost function $F_2$ assume an oscillating trend, which resembles the behavior of the pseudoinverse of the jacobian matrix shown in section 3 and is therefore in accordance with previous results. overall, the obtained position tracking accuracy is within the requirements of many surgical applications [8] and can be further enhanced by improving the position reconstruction algorithm.

5. conclusions
commercial emtss for surgical navigation present a limited tracking volume, mainly due to the reduced sensitivity of the small magnetic sensors with the distance from the fg. studies are currently being conducted to optimize the fg configuration of emtss for surgical navigation, since it affects tracking accuracy; hence an optimized configuration can increase tracking accuracy in regions far from the fg. in this paper, we discussed the influence of magnetic field gradients on tracking accuracy. therefore, we optimized the configuration of the fg by employing a metric based on the maximization of the magnetic field gradients, obtaining better performance than the case where the amplitude of the magnetic field is maximized (as done in the literature [12]). overall, the position tracking accuracy is within the requirements of many surgical applications.
the spatial gradient of the magnetic field produced by the fg plays an important role in the design of the fg itself, and it paves the way toward the exploitation of new metrics to achieve higher accuracy in regions far from the fg, thus increasing the tracking volume, which is the main limitation of current commercial systems.

table 2. parameters of three different fg configurations: a) 5-coils compact fg, b) 8-coils fg optimized with $F_1$, c) 8-coils fg optimized with $F_2$.

coil           | 1     | 2      | 3       | 4      | 5      | 6      | 7      | 8
(a) pos x (mm) | 0     | 67.00  | 0       | -77.50 | -18.90 |        |        |
    pos y (mm) | 0     | -1.50  | 63.00   | 4.50   | -59.70 |        |        |
    pos z (mm) | 0     | 0      | 0       | 12.10  | 12.10  |        |        |
    ori φ (°)  | 0     | 0      | 90      | 135    | -45    |        |        |
    ori θ (°)  | 0     | 90     | 90      | 45     | 45     |        |        |
    freq (khz) | 0.98  | 2.06   | 3.06    | 3.68   | 4.74   |        |        |
(b) pos x (mm) | 59.20 | -2.70  | 17.50   | 11.45  | -17.39 | 83.41  | 22.44  | -58.82
    pos y (mm) | 2.84  | -77.25 | -6.61   | -48.66 | 40.50  | -39.22 | 62.61  | 7.65
    pos z (mm) | 19.21 | 19.43  | 19.20   | 19.54  | 19.21  | 19.21  | 19.47  | 19.24
    ori φ (°)  | 162.5 | -144.4 | 67.1    | 105.9  | 105.1  | -31.6  | 86.7   | -11.5
    ori θ (°)  | 20.2  | 158.9  | 167.3   | 173.2  | 175.3  | 168.3  | 168.7  | 9
    freq (khz) | 1.00  | 2.00   | 3.00    | 4.00   | 5.00   | 6.00   | 7.00   | 8.00
(c) pos x (mm) | 29.37 | -206.08| 29.76   | 160.17 | -40.97 | -37.39 | 53.78  | -59.55
    pos y (mm) | 87.09 | 76.18  | -165.80 | 82.04  | -89.44 | 156.38 | -25.81 | 5.25
    pos z (mm) | 15.29 | 18.03  | 19.10   | 19.35  | 19.44  | 19.57  | 19.68  | 19.65
    ori φ (°)  | 145.9 | -51.7  | 116.1   | -123.8 | 56.6   | -106.9 | -171.6 | -37.5
    ori θ (°)  | 152.6 | 150.9  | 167.5   | 150.7  | 152.3  | 1115.2 | 131.1  | 125.6
    freq (khz) | 1.00  | 2.00   | 3.00    | 4.00   | 5.00   | 6.00   | 7.00   | 8.00

figure 8. value of $1/F_1$ and $F_2$ vs grid points. only points with index > 363 are considered, i.e., z ≥ 350 mm.

references
[1] g. fiori, g. bocchetta, s. conforto, s. a. sciuto, a. scorza, sample volume length and registration accuracy assessment in quality controls of pw doppler diagnostic systems: a comparative study, acta imeko 12(2) (2023), pp. 1-7. doi: 10.21014/actaimeko.v12i2.1425
[2] k. b. kim, w.-c. lee, c.-h. cho, d.-s. park, s. j. cho, y.-b. shim, continuous glucose monitoring using a microneedle array sensor coupled with a wireless signal transmitter, sens. actuators b chem. 281 (2019), pp. 14-21. doi: 10.1016/j.snb.2018.10.081
[3] l. de palma, m. scarpetta, m. spadavecchia, characterization of heart rate estimation using piezoelectric plethysmography in time and frequency-domain, 2020 ieee international symposium on medical measurements and applications (memea), bari, italy, 01 june - 01 july 2020, pp. 1-6. doi: 10.1109/memea49120.2020.9137226
[4] s. hermann, l. lombardo, g. campobello, m. burke, n. donato, a ballistocardiogram acquisition system for respiration and heart rate monitoring, 2018 ieee international instrumentation and measurement technology conference (i2mtc), houston, tx, usa, 14-17 may 2018, pp. 1-5. doi: 10.1109/i2mtc.2018.8409750
[5] m. scarpetta, m. spadavecchia, g. andria, m. a. ragolia, n. giaquinto, simultaneous measurement of heartbeat intervals and respiratory signal using a smartphone, 2021 ieee international symposium on medical measurements and applications (memea), lausanne, switzerland, 23-25 june 2021, pp. 1-5. doi: 10.1109/memea52024.2021.9478711
[6] t. peters, k. cleary, image-guided interventions: technology and applications. boston, ma: springer us, 2008, p. 557. doi: 10.1007/978-0-387-73858-1
[7] a. sorriento, m. b. porfido, s. mazzoleni, g. calvosa, m. tenucci, g. ciuti, p. dario, optical and electromagnetic tracking systems for biomedical applications: a critical review on potentialities and limitations, ieee rev. biomed. eng. 13 (2020), pp. 212-232. doi: 10.1109/rbme.2019.2939091
[8] a. m. franz, t. haidegger, w. birkfellner, k. cleary, t. m. peters, l. maier-hein, electromagnetic tracking in medicine - a review of technology, validation, and applications, ieee trans. med. imaging 33(8) (2014), pp. 1702-1725. doi: 10.1109/tmi.2014.2321777
[9] a. di nisio, n. giaquinto, a. m. l. lanzolla, m. a. ragolia, m. scarpetta, s. carrara, platinum nanostructured needle-shaped sensors for ion detection in biomedical applications, ieee sens. j. 22(23) (2022), pp. 22404-22412. doi: 10.1109/jsen.2022.3216682
[10] a. m. franz, a. seitel, d. cheray, l. maier-hein, polhemus em tracked micro sensor for ct-guided interventions, med. phys. 46(1) (2019), pp. 15-24. doi: 10.1002/mp.13280
[11] polhemus - accessories, polhemus (2004). online [accessed 18 june 2022]. https://est-kl.com/images/pdf/polhemus/accessories_brochure.pdf
[12] m. li, c. hansen, g. rose, a simulator for advanced analysis of a 5-dof em tracking systems in use for image-guided surgery, int. j. comput. assist. radiol. surg. 12(12) (2017), pp. 2217-2229. doi: 10.1007/s11548-017-1662-x
[13] a. plotkin, o. shafrir, e. paperno, d. m. kaplan, magnetic eye tracking: a new approach employing a planar transmitter, ieee trans. biomed. eng. 57(5) (2010), pp. 1209-1215. doi: 10.1109/tbme.2009.2038495
[14] h. a. jaeger, a. m. franz, k. o'donoghue, a. seitel, f. trauzettel, l. maier-hein, p. cantillon-murphy, anser emt: the first open-source electromagnetic tracking platform for image-guided interventions, int. j. comput. assist. radiol. surg. 12(6) (2017), pp. 1059-1067. doi: 10.1007/s11548-017-1568-7
[15] y.-c. wu, h.-y. ma, z.-h. kuo, m. teng, c.-y. lin, electromagnetic tracking system design for location and orientation estimation, 2022 ieee/asme international conference on advanced intelligent mechatronics (aim), sapporo, japan, 11-15 july 2022, pp. 1256-1262. doi: 10.1109/aim52237.2022.9863273
[16] m. a. ragolia, f. attivissimo, a. di nisio, a. m. l. lanzolla, m. scarpetta, a virtual platform for real-time performance analysis of electromagnetic tracking systems for surgical navigation, acta imeko 10(4) (2021), pp. 103-110. doi: 10.21014/acta_imeko.v10i4.1191
[17] m. a. ragolia, g. andria, f. attivissimo, a. di nisio, a. m. l. lanzolla, m. spadavecchia, p. larizza, g. brunetti, performance analysis of an electromagnetic tracking system for surgical navigation, 2019 ieee international symposium on medical measurements and applications (memea), istanbul, turkey, 26-28 june 2019, pp. 1-6. doi: 10.1109/memea.2019.8802220
[18] f. attivissimo, a. di nisio, a. m. l. lanzolla, m. a. ragolia, analysis of position estimation techniques in a surgical em tracking system, ieee sens. j. 21(13) (2021), pp. 14389-14396. doi: 10.1109/jsen.2020.3042647
[19] m. a. ragolia, f. attivissimo, a. di nisio, a. m. l. lanzolla, m. scarpetta, reducing effect of magnetic field noise on sensor position estimation in surgical em tracking, 2021 ieee int. symposium on medical measurements and applications (memea), lausanne, switzerland, 23-25 june 2021, pp. 1-6. doi: 10.1109/memea52024.2021.9478723
[20] o. shafrir, e. paperno, a. plotkin, magnetic tracking with a flat transmitter. lambert academic publishing, 2010. online [accessed 18 june 2023]. https://www.lap-publishing.com/catalog/details/store/ru/book/978-3-8383-4436-2/magnetic-tracking-with-a-flat-transmitter
[21] o. talcoth, g. risting, t. rylander, convex optimization of measurement allocation for magnetic tracking systems, optim. eng. 18(4) (2017), pp. 849-871. doi: 10.1007/s11081-016-9342-1
[22] g. andria, f. attivissimo, a. di nisio, a. m. l. lanzolla, m. a. ragolia, assessment of position repeatability error in an electromagnetic tracking system for surgical navigation, sensors 20(4) (2020), pp. 961. doi: 10.3390/s20040961
[23] v. pasku, a. de angelis, g. de angelis, a. moschitta, p. carbone, magnetic field analysis for 3-d positioning applications, ieee trans. instrum. meas. 66(5) (2017), pp. 935-943. doi: 10.1109/tim.2017.2682738
[24] f. santoni, a. de angelis, i. skog, a. moschitta, p. carbone, calibration and characterization of a magnetic positioning system using a robotic arm, ieee trans. instrum. meas. 68(5) (2019), pp. 1494-1502. doi: 10.1109/tim.2018.2885590
[25] m. a. ragolia, f. attivissimo, a. di nisio, a. m. l. lanzolla, evaluation of position rms error from magnetic field gradient for surgical em tracking systems, 2020 ieee international instrumentation and measurement technology conference (i2mtc), dubrovnik, croatia, 25-28 may 2020, pp. 1-6. doi: 10.1109/i2mtc43012.2020.9128837
[26] ieee standard for safety levels with respect to human exposure to electromagnetic fields, 0-3 khz, (2019). doi: 10.1109/ieeestd.2019.8859679
[27] ieee standard for safety levels with respect to human exposure to radio frequency electromagnetic fields, 3 khz to 300 ghz, 2005 (2006). isbn: 978-0-7381-4835-9. doi: 10.1109/ieeestd.2006.99501
[28] m. a. ragolia, f. attivissimo, a. di nisio, a. m. l. lanzolla, assessment of position repeatability of surgical em tracking systems employing magnetic field model, 2020 ieee int. symposium on medical measurements and applications (memea), bari, italy, 1 june - 1 july 2020, pp. 1-6. doi: 10.1109/memea49120.2020.9137161
[29] e. walter, l. pronzato, identification of parametric models from experimental data. communications and control engineering series. london: springer, 1997.
[30] d. ucinski, optimal measurement methods for distributed parameter system identification. boca raton: crc press llc, 2005. doi: 10.1201/9780203026786
acta imeko
february 2015, volume 4, number 1, 35 - 43
www.imeko.org

power system of the guanay ii auv
ivan masmitjà, julián gonzález, gerard masmitjà, spartacus gomáriz, joaquín del-río-fernández
sarti research group, electronics dept., universitat politècnica de catalunya (upc), rambla exposició 24, 08800, vilanova i la geltrú, barcelona, spain. +(34) 938 967 200. www.cdsarti.org

section: research paper
keywords: state of charge; batteries; auv; wireless connection; ni-cd
citation: ivan masmitjà, julián gonzález, gerard masmitjà, spartacus gomáriz, joaquín del-río-fernández, power system of the guanay ii auv, acta imeko, vol. 4, no. 1, article 7, february 2015, identifier: imeko-acta-04 (2015)-01-07
editor: paolo carbone, university of perugia
received december 12th, 2013; in final form november 8th, 2014; published february 2015
copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was supported by the spanish ministry of economy and competitiveness under the research project: "sistemas inalambricos para la extension de observatorios submarinos" (ctm2010-15459)
corresponding author: ivan masmitjà, e-mail: ivan.masmitja@upc.edu

abstract
guanay ii is an autonomous underwater vehicle (auv) designed to perform measurements in a water column. in this paper the aspects of the vehicle's power system are presented, with particular focus on the power elements and the state of charge of the batteries. the system performs both measurement and monitoring tasks and also controls the state of charge (soc) of the batteries. it allows simultaneous charging of all batteries from outside the vehicle and has a wireless connection/disconnection mode. guanay ii uses a ni-cd battery, and for this reason current integration has been selected as the soc methodology. moreover, it has been validated that it is possible to obtain the instant consumption from the soc circuit. finally, laboratory and vehicle navigation tests have been performed to validate the correct operation of the systems and the reliability of the measured data.

1. introduction
the study of the sea and oceans has become increasingly important among fishers and biologists, especially because marine species change their behavior depending on environmental variables such as temperature, salinity and ph, among others [1].
for these oceanographic observations different tools are used. aerospace technologies can focus on the study of the oceans at a global level but are unable to carry out detailed observations in a specific sector or depth. oceanographic ships, on the other hand, can solve this problem, but the planning and mission deployment required to obtain data with a satisfactory spatial-temporal resolution are very expensive. as a solution to these needs, these tools are migrating towards the creation of autonomous marine vehicles. the autonomous underwater vehicle (auv) guanay ii [2]-[4] is a vehicle developed by the sarti group of the technical university of catalonia (with the co-financing of imedea-csic) with the objective of providing a platform for measuring oceanographic variables, such as temperature and salinity of the water column, with a simultaneously high spatial and temporal resolution. figure 1 shows the vehicle.

figure 1. autonomous underwater vehicle guanay ii auv.

the knowledge of the charge status of the batteries in an autonomous vehicle is an important factor to ensure the security of the vehicle and of the mission. this work presents a measurement and energy management system for the guanay ii vehicle. initially, different solutions for electrically measuring the battery charge state are analyzed, and the methodology of current integration is chosen [5] due to the nature of the batteries used in the guanay ii. subsequently, the construction of a prototype and the experimental tests performed, both in the laboratory and in field missions, to validate the correct operation of the device are presented. finally, this device is complemented with the design of a battery charger system, accessible from outside the vehicle, which allows simultaneous charging of all battery packs, as well as a wireless connection/disconnection of the battery.

2. power system of guanay ii auv
the guanay ii's power system can be divided into three parts: the propulsion system, the communication and control system, and the battery system. below, each of these elements is briefly described. the propulsion system consists of a series of propellers for longitudinal and directional movement and a motor-piston set to modify the buoyancy of the vehicle to perform dives. the principal characteristics of the main engine, for longitudinal movement, are: 24 v direct current (dc) supply, 300 w of output power, and control via rs-232 communication. two engines are located on the rear side of the vehicle to control its direction.
the engines have a 24 v power supply and a maximum output power of 110 w each, and are controlled through an rs-232 communication port. finally, the piston-engine set comprises a 24 vdc motor and a piston that can move up to 1.5 litres. this piston can either take in or eject seawater; with this action the buoyancy of the vehicle can be changed and therefore the dives can be controlled. these systems consume most of the guanay ii auv power.

the second part of the guanay ii vehicle is the control and communication system. the main control system of the vehicle is an embedded computer located inside. this computer is responsible for controlling and managing the various elements of the vehicle (sensors, actuators, motors, communications, etc.). also, by using a radio link and a wifi network the vehicle can be remotely controlled from a base station.

finally, an energy storage system has been mentioned. this vehicle has an autonomy of around 4 hours thanks to the nickel-cadmium (ni-cd) battery pack of 24 v and 21 ah. this pack consists of subsets of 12 vdc, 7 ah batteries arranged in a two-in-series, three-in-parallel configuration (2s3p). with these batteries and the average consumption of the electronic components of the vehicle, the autonomy under normal conditions can be established. figure 2 shows the 3d model of the vehicle, where the three parts of the power system can be seen (1, communication and control system; 2, propulsion system; 3, battery system). finally, table 1 lists the theoretical consumption values of the various devices of the vehicle, as specified in their technical specifications.

table 1. theoretical consumption of the guanay ii (maximum values).

device           | v  | a      | w
gps              | 5  | 240 ma | 2
pc104            | 5  | 0.9    | 9.9
compass          | 5  | <20 ma | 0.1
radio-modem      | 12 | 1.5    | 18
engine driver    | 5  | 31 ma  | 0.15
main thruster    | 24 | 16     | 300
lateral thruster | 24 | 4.25   | 110
piston-engine    | 24 | 3      | 50
total            |    | 25.94  | 490.15

figure 2. development of the vehicle in 3d with the three parts of the power system (1, communication and control system; 2, propulsion system; 3, battery system).

as seen in table 1, the engines consume most (90%) of the vehicle's energy; the other electronic systems consume the remaining 10%. given these data and the total capacity of the batteries, the autonomy of the vehicle can be estimated. if the engines run at full power, a maximum of less than 1.5 h is obtained. however, various factors have to be taken into account. first, the side engines correct the direction of the vehicle, but are never on for a long time. with regard to the main engine, a compromise between speed and autonomy of the vehicle is necessary. finally, the piston-engine set is only activated to make the dives. in experimental results, it has been observed that the vehicle has a range of about 4 hours under normal conditions.
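a quick sanity check of these autonomy figures, using only the numbers from table 1 and the nominal battery capacity; this is our own back-of-the-envelope arithmetic, not a computation from the paper.

```python
# worst-case autonomy from the 24 V, 21 Ah pack and the full load of table 1
battery_wh = 24 * 21               # ~504 Wh of stored energy
full_load_w = 490.15               # all devices at maximum power (table 1)
print(battery_wh / full_load_w)    # ~1.03 h, i.e. "less than 1.5 h" at full power
```

with the thrusters running well below full power most of the time, the observed autonomy of about 4 hours is consistent with this bound.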
3. measurement system for battery state of charge
the state of charge (soc) is mathematically defined as in the following equation:
$$SOC(t) = \left( Ah_\mathrm{nom} - \int_0^t i(t)\,\mathrm{d}t \right) / Ah_\mathrm{nom}\,, \qquad (1)$$
where $i(t)$ is the current extracted from the battery (assumed positive while discharging the battery), and $Ah_\mathrm{nom}$ is the nominal battery capacity. equation (1) assumes that the current integration starts at $SOC(t)$ = 100% at $t$ = 0. several methods of estimating the soc of a battery have been used; some are specific to particular cell chemistries. this work is constrained by the ni-cd batteries used in the guanay ii.

3.1. methods for soc estimation
for such batteries there are different methods to measure the charge, such as voltage measurement, impedance measurement and current integration.

a. voltage-based soc estimation uses the voltage of the battery cell as the basis for calculating the remaining capacity [6]. results can vary widely depending on the discharge rate and temperature, and compensation for these factors must be provided to achieve reasonable accuracy. figure 3 shows the relationship between the voltage and the capacity at different discharge rates.

b. internal impedance measurements can also be used to determine the soc. however, these are not widely used because of the difficulties in measuring the impedance while the cells are active, as well as difficulties in interpreting the data, and because very complex calculations are required [7], [8].

c. current-based soc estimation, known as coulomb counting [5], [11]. in coulometric measurements the amount of capacity taken out of or put into a battery is measured in ampere-hours or in %. the coulomb counting approach basically implements equation (1) to evaluate the soc, using the more general definition
$$SOC(t) = SOC(0) - \frac{1}{Ah_\mathrm{nom}} \int_0^t i_m(t)\,\mathrm{d}t\,, \qquad (2)$$
where $SOC(0)$ is the starting value of the soc and $i_m(t)$ is the measured current. charge accumulation techniques, where the soc is determined by monitoring the battery charge and discharge current, are impractical in the long term due to the accumulation of errors; for this reason this technique is impractical when used by itself. a monitoring technique combining the open-circuit voltage under no-load conditions and coulometric measurements under constant load has been implemented in [13] on a microcomputer-based circuit. on the other hand, correction factors are required for different discharge rates and ambient temperatures [12].

3.2. gas gauge ic for power-assist applications
current-based soc estimation (coulomb counting [5]), which is the method selected here, calculates the state of charge by measuring the instantaneous current of the battery and integrating it in time, both in the charging and in the discharging process. this method has been chosen because it obtains a good correlation of the measurement data and because it can be implemented quickly thanks to already existing specific integrated circuits (ics). this allows the rapid development of a prototype that can be used to estimate the remaining battery charge in the guanay ii and to perform some field tests. in this implementation the bq2013 ic (a gas gauge ic for power-assist applications) from texas instruments, specifically designed for ni-cd batteries [9], has been used. figure 4 shows a block diagram of the soc estimator, where the batteries, motors (load) and charger can be observed. the bq2013 constantly monitors the current flowing through the batteries and also measures the voltage and the internal temperature to compensate for various factors. finally, communication with the soc estimator takes place through a serial link, allowing the user to read and write some configuration parameters and to read the value of the battery charge with the microcontroller.

figure 3. relationship between the voltage and the capacity at different discharge rates. source: saft batteries [10].
figure 4. block diagram of the soc estimator.
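a minimal discrete-time implementation of the coulomb counting in (1)-(2) is sketched below; real gauges such as the bq2013 add temperature, rate and self-discharge compensation on top of this basic integration. the sampling scheme and numbers are illustrative assumptions.

```python
# one-step coulomb counter: soc decreases with the integral of the measured
# current (positive while discharging), as in (2).
def soc_update(soc, i_measured_a, dt_s, capacity_ah):
    soc -= (i_measured_a * dt_s / 3600.0) / capacity_ah
    return min(max(soc, 0.0), 1.0)

# example: 21 Ah pack, constant 2 A discharge sampled once per second
soc = 1.0
for _ in range(3600):                    # one hour
    soc = soc_update(soc, 2.0, 1.0, 21.0)
print(soc)                               # ~0.905: about 9.5% of capacity used
```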
3.3. prototype
the prototype developed is shown in figure 5, where the communication ports with the pc-104 and the leds for visual indication can be seen. the ic includes a simple single-pin serial data interface with a command-based protocol (hdq plus return). for this reason, an rs-232 to hdq interface has been developed to communicate with both the gas gauge and the pc-104 systems [9].

3.4. the soc measurement method
as explained in section 3.1, there are some drawbacks in using simple coulomb counting for the soc. these drawbacks include noise, temperature effects, discharge rates and the estimation of the initial soc value. the algorithm used should include these effects to obtain a reliable reading of the soc. the bq2013 implements compensations to minimize these drawbacks. the operational overview diagram in figure 6 illustrates the operating algorithm of the ic. as can be seen in figure 6, the ic compensates the charge current for charge rate and temperature. the discharge current is load compensated, and the ic automatically adjusts for self-discharge. the main counter, nominal available capacity (nac), represents the available battery capacity at any given time. battery charging increments the nac register, while battery discharging and self-discharge decrease the nac register and increment the dcr (discharge count register). the dcr and lmd (last measured discharge) registers are used to update and adjust the initial soc value.

texas instruments provides equation (3) to obtain the value of the internal register nac (in mvh) of the bq2013 ic, which monitors the voltage drop across a resistor connected in series between the battery and ground:
$$NAC\,(\mathrm{mVh}) = BatteryCapacity\,(\mathrm{mAh}) \times SenseResistor \times Scale\,, \qquad (3)$$
where $SenseResistor$ = 0.01 Ω and $Scale$ = 640 (predefined). this shows the relation between the nac register and the soc in mah. as seen in (3), the nac value depends on the capacity of the battery and is determined by charging and discharging. therefore, the consumption can be calculated from expressions (3) and (4):
$$Consumption\,(\mathrm{A}) = \frac{BatteryCapacity\,(\mathrm{Ah})}{t\,(\mathrm{h})}\,. \qquad (4)$$
finally, by applying the scale correction factors, the instant consumption can be obtained using equation (5):
$$Consumption\,(\mathrm{A}) = \frac{NAC\,(\mathrm{mVh}) \times 3600}{t\,(\mathrm{s}) \times 640 \times 1000 \times 0.01}\,, \qquad (5)$$
where 3600 is the time correction factor, 640 is the scale factor determined by the ic, 1000 is the ampere scale correction factor, and 0.01 is the sense resistor in Ω.

figure 5. state of charge measuring prototype: a) top view, b) bottom view.
figure 6. operational overview diagram of the bq2013 ic.
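folding the correction factors of (3)-(5) into a single helper gives the average current directly from a change of the nac register; this assumes the reconstruction of (5) given above (sense resistor 0.01 Ω, ic scale factor 640) and is illustrative code, not the firmware actually used on the vehicle.

```python
# average current from a NAC register change nac_mvh over an interval t_s (s),
# per (5): 3600 converts seconds to hours, 640 is the IC scale factor,
# 1000 converts mA to A, 0.01 ohm is the sense resistor.
def consumption_a(nac_mvh, t_s, scale=640, r_sense=0.01):
    return nac_mvh * 3600.0 / (t_s * scale * 1000.0 * r_sense)

# consistency check with (3): a 5 Ah discharge in one hour corresponds to
# delta NAC = 5000 mAh * 0.01 * 640 = 32000 mVh, i.e. an average of 5 A.
print(consumption_a(32000, 3600.0))      # -> 5.0
```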
4. charging system and battery connection
in this section the implementation of the charging system and of the battery connection are described. these two systems improve the operability of the vehicle and increase safety and reliability. figure 7 shows a schematic model of this system: the three battery packs are connected in parallel; the switch (sw) between the batteries and the auv electronics is radio-controlled and allows the user to turn the vehicle on and off; finally, an external charger can be connected through a two-pin connector to charge the batteries.

figure 7. scheme of charging system and battery connection.

4.1. charging system
the energy of the guanay ii comes from the 2s3p battery packs, which can provide 24 vdc and 21 ah. however, a problem arises with this configuration when the batteries have to be charged: many manufacturers do not recommend charging two battery packs in parallel because this can cause damage, especially at high charge rates. until now, chargers that can charge only a single pack were used, and the vehicle had to be opened every time. in order to maximize the mission time, a system was designed for the simultaneous parallel charging of all battery packs through a single external connector. this single charging point means that it is not necessary to disassemble the vehicle's mechanical and electronic systems, so the charging operation is faster. figure 8 shows the connector used for charging the batteries. this connector is located in one of the covers of the sealed cylinder. it is manufactured by the subconn company; specifically, it is a circular 4-pin connector capable of supporting up to 600 v and 10 a at pressures up to 1400 bar.

figure 8. subconn connector for charging, in one of the covers of the sealed cylinder of the guanay ii auv.

after discussions with the manufacturer, it was decided to use a low-rate constant-voltage charge applied to all the 2s3p batteries to prevent damage. a constant charge of around 0.2 c is used. by keeping the charge current low enough that the battery does not generate any heat, this method performs the charging without using any control. the calculation formula for the semi-constant-voltage charge system is as follows:
$$v_{ch} = k \cdot v_c \cdot n\,, \qquad (6)$$
where $v_{ch}$ is the output voltage of the dc power supply for the charge; $v_c$ is the single-cell battery voltage (1.45 v/cell: average battery voltage during charge at 20 °c, 0.1 c); $n$ is the number of cells used; and $k$ is the stabilizing constant, which must be selected in accordance with the purpose of the device in which the battery pack is used. the value of the stabilizing constant $k$ must be selected carefully; after discussions with the manufacturer, a value of 1.25 was decided upon. using equation (6), a charging voltage of 29 v has been obtained. moreover, the current has been limited to 5 a to prevent any overcurrent. the charge rate is often denoted as c or c-rate and signifies a charge or discharge rate equal to the capacity of the battery in one hour. equation (7) shows the relation between the capacity of the battery in ah and the current in a:
$$CapacityOfBattery\,(\mathrm{Ah}) \times C\,(1/\mathrm{h}) = Current\,(\mathrm{A})\,. \qquad (7)$$
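the two relations (6) and (7) are simple enough to check numerically; the sketch below restates them with the values quoted in the text (1.45 v/cell, k = 1.25, 5 a current limit on the 21 ah pack). function names are our own.

```python
# semi-constant-voltage charge level, v_ch = k * v_c * n, as in (6)
def charge_voltage(v_c_per_cell, n_cells, k):
    return k * v_c_per_cell * n_cells

# C-rate from (7): capacity (Ah) x C (1/h) = current (A)
def c_rate(current_a, capacity_ah):
    return current_a / capacity_ah

print(c_rate(5.0, 21.0))   # ~0.24 C at the 5 A limit, near the 0.2 C target
```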
4.2. battery connection
in order to have control of, and easy access to, the connection/disconnection of the battery to the electronics and propulsion of the vehicle, a wireless device that acts as a switch has been incorporated (see figure 7). the user can turn the vehicle switch on and off using the remote controller. this operation enables the connection or disconnection of the batteries while the auv is in the water, providing greater security for the vehicle. a hir6-433 rf am 433 mhz receiver/decoder from rf solutions has been used for this purpose. these modules provide a very low power receiver, combined with a flash-programmable controller, supplied pre-programmed to operate with the keeloq transmitter encoder. the range of such a system can be up to 50 meters line of sight (los). finally, a new antenna encapsulated in epoxy resin has been designed to house the hir6-433 and provide the necessary protection in the water and at depth. besides hosting this module, the antenna is also used to install a new wifi communication module and a gps antenna.

5. experimental results
initially, both systems, i.e., the connection/disconnection of the battery and its charging and soc estimation, were tested in the laboratory, where the proper functioning of all the designed systems was verified.

figure 9. the two laboratory tests: (1) charge testing of all battery packs in parallel; (2) discharge testing and comparison between the current sensed by the multimeter and the current sensed by the bq2013.
1.a) block scheme of the battery charge test: three battery packs, three digital multimeters, one voltage source and one laptop to monitor the variables, all connected through a gpib port.
1.b) test bench where the battery charging has been tested.
1.c) current of the three battery packs (red, green and purple) during low-rate charging in parallel.
2.a) block scheme of the battery discharge test: five 18 Ω resistors in parallel are used as a load; the three measurements (from the ammeter, from the bq2013 and theoretical) are compared using a laptop.
2.b) test bench used to check and adjust the soc sensor of the battery: a bank of five loads in parallel, where each 18 Ω load introduces a consumption current of 433 ma.
2.c) vehicle current measurement error using the soc circuit (the x-axis is the total current of the vehicle).

one of the most critical parts is the parallel charging of the battery packs, in particular the individual currents of each pack. laboratory tests have been conducted to validate the correct performance of the charging. in this test, a power supply and three digital multimeters were used to monitor the individual currents of each pack during charging (see figure 9, point 1.a). figure 9, point 1.b, shows the test bench where the battery charge has been tested; for this purpose, three ammeters and one power supply connected to a computer by gpib have been used, with the computer responsible for controlling and saving the instrument data. figure 9, point 1.c, shows the current of the three battery packs during charging in parallel mode. it can be seen that after 10 hours of charge the current drops to 0.15 a and the battery is practically charged. the difference in the charge current of one of the packs can also be observed; this is one reason why high-rate current charging is not recommended, in order to prevent damage. figure 9, point 1.c, also shows the rate of the current used to charge the batteries: according to equation (7), a maximum current rate of 0.17 c is obtained during the first hour of charge, after which the rate decreases to less than 0.05 c.

on the other hand, as seen in equation (5), the current consumption of the vehicle can be calculated using the soc circuit. a test and a calibration have been conducted to validate the system, using a digital multimeter and five 18 Ω resistors as a load (see figure 9, point 2.a). figure 9, point 2.b, shows the test bench used to check and adjust the soc sensor of the battery; for this laboratory experiment a bank of five loads in parallel has been used, where each 18 Ω load introduces a consumption current of 433 ma. figure 9, point 2.c, shows the vehicle's current measurement error using the bq2013 soc circuit: the error varies with the total current of the vehicle and remains below ±50 ma.
therefore, using the nac register, a sufficiently accurate method to estimate the power consumption of the guanay ii has been obtained.

afterwards, these systems have been validated in various vehicle navigation tests. for example, one of these tests was performed at the canal olímpic de catalunya in castelldefels (the olympic channel located in castelldefels), as shown in figure 10. figure 11 shows the different trajectories performed by the vehicle during the field test. figure 14 shows the instantaneous battery consumption due to the action of the propulsion engine and the two direction motors of the vehicle during a path on the water. the first three graphs show the action of the propellers in % (up to one): in blue the action of the main propeller is shown, and in red and green the action of the side propellers. the fourth graph shows the instantaneous battery consumption (black line) and its mean (dashed line). one can observe the relationship between the consumption and the action of the engines. also, the average consumption of the remaining electronic elements (pc104, radio modem, gps, etc.) can be seen. this value may be contrasted with the consumption values of table 1; the difference may be due to the usage time of the radio modem or of the pc104. in any case, this can be a field to investigate in future work.

figure 10. guanay ii auv in the field test.
figure 11. different trajectories performed by the vehicle during the field test.
figure 12. percentage of engine power requested vs. current drawn (propulsion propeller).
figure 13. percentage of engine power requested vs. current drawn (direction propeller).

from the experimental results shown in figure 14, the relationship between the percentage of power requested from the engine and the current drawn has been derived. figure 13 shows this relationship for the left direction propeller and figure 12 for the propulsion propeller. the results show that, at low power, the engine does not draw power and therefore does not work; this is because a minimum amount of power is needed to run the motors. for the direction propellers more than 50% of power has been required, and for the propulsion propeller more than 20%. however, more field test points are needed to obtain a more accurate relationship. these tests also confirmed that disconnecting the batteries while the vehicle is still in the water provides security at its landing ground.

6. conclusions
a measurement and energy management system for an underwater vehicle has been designed. the system monitors the state of charge of the batteries and the instantaneous consumption of the thrusters, information that the mission control needs to optimize and facilitate navigation. a single external access connector on the vehicle allows all batteries to be charged, and a radio-frequency-controlled switch is used to connect and disconnect the vehicle's power supply, even in the water. the navigation tests performed confirm that the system operates correctly and that the measured data are reliable.

figure 14. consumption of the vehicle versus power of the propellers.

acknowledgement
this work was supported by the spanish ministry of economy and competitiveness under the research project: "sistemas inalambricos para la extension de observatorios submarinos" (ctm2010-15459).

references
[1] j. timothy pennington, f. p. chavez, "seasonal fluctuations of temperature, salinity, nitrate, chlorophyll and primary production at station h3/m1 over 1989-1996 in monterey bay, california", deep sea research part ii: topical studies in oceanography, vol. 47, no. 5, pp. 947-973, 2000.
[2] s. gomáriz, j. gonzález, a. arbos, i. masmitja, g. masmitja, j. prat, "design and construction of the guanay-ii autonomous underwater vehicle", oceans 2011 ieee/oes, santander, spain, june 2011.
[3] s. gomáriz, j. prat, a. arbos, o. pallares, c. viñolo, "autonomous vehicle development for vertical submarine observation", international workshop on marine technology, vilanova i la geltrú, spain, november 2009.
[4] s. gomáriz, j. prat, p. gayà, j. del río, "development of a low-cost autonomous oceanographic observation vehicle", oceans'09 mts/ieee, bremen, germany, may 2009.
[5] f. codecà, s. m. savaresi, v. manzoni, "the mix estimation algorithm for battery state-of-charge estimator - analysis of the sensitivity to measurement errors", joint 48th ieee conference on decision and control and 28th chinese control conference, shanghai, p.r. china, december 16-18, 2009.
[6] m. gonzalez, m. a. perez, j. c. viera, c. carballo, a. garrido, "a new, reliable and easily implemented nicd/nimh battery state estimation method", instrumentation and measurement technology conference, 1999, imtc/99, proceedings of the 16th ieee, pp. 1260-1264, vol. 2.
[7] n. kato, k. yamamoto, "estimation of the capacity of nickel-cadmium batteries by measuring impedance using electrolyte-deficient battery characteristics", telecommunications energy conference, 1995, intelec '95. doi: 10.1109/intlec.1995.499046
[8] i. damlund, "analysis and interpretation of ac-measurements on batteries used to assess state-of-health and capacity-condition", telecommunications energy conference, 1995, intelec '95. doi: 10.1109/intlec.1995.499055
[9] texas instruments, "hdq communication basics", application report slua408a, december 2006, revised september 2008.
[10] saft batteries. http://www.saftbatteries.com/technologies_nickel_nicd_293/language/en-us/default.aspx. may 2013.
[11] y. çadirci, y. özkazanç, "microcontroller-based on-line state-of-charge estimator for sealed lead-acid batteries", journal of power sources 129 (2004), pp. 330-342.
[12] chyuan-yow tseng, chiu-feng lin, "estimation of the state-of-charge of lead-acid batteries used in electric scooters", journal of power sources 147 (2005), pp. 282-287.
[13] j. h. aylor, a. thieme, b. w. johnson, "a battery state-of-charge indicator for electric wheelchairs", ieee trans. ind. electron. 39(5) (1992), pp. 398-409.
acta imeko
issn: 2221-870x
march 2023, volume 12, number 1, 1 - 10

a learning model for battery lifetime prediction of lora sensors in additive manufacturing
alberto morato1, tommaso fedullo2,3, stefano vitturi1, luigi rovati2, federico tramarin2
1 national research council of italy, ieiit-cnr, padova, italy
2 department of engineering ''enzo ferrari'', university of modena and reggio emilia, modena, italy
3 department of management and engineering, university of padova, italy

section: research paper
keywords: lora; lorawan; iiot; battery lifetime; machine learning
citation: alberto morato, tommaso fedullo, stefano vitturi, luigi rovati, federico tramarin, a learning model for battery lifetime prediction of lora sensors in additive manufacturing, acta imeko, vol. 12, no. 1, article 24, march 2023, identifier: imeko-acta-12 (2023)-01-24
section editor: francesco lamonaca, university of calabria, italy
received november 17, 2022; in final form february 15, 2023; published march 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: federico tramarin, e-mail: federico.tramarin@unimore.it

abstract
today, an innovative leap for wireless sensor networks, leading to the realization of novel and intelligent industrial measurement systems, is represented by the requirements arising from the industry 4.0 and industrial internet of things (iiot) paradigms. in fact, unprecedented challenges to measurement capabilities are being faced, with the ever-increasing need to collect reliable yet accurate data from mobile, battery-powered nodes over potentially large areas. therefore, optimizing energy consumption and predicting battery life are key issues that need to be accurately addressed in such iot-based measurement systems.
this is the case for the additive manufacturing application considered in this work, where smart battery-powered sensors embedded in manufactured artifacts need to reliably transmit their measured data to better control production and final use, despite being physically inaccessible. a low power wide area network (lpwan), and in particular lorawan (long range wan), represents a promising solution to ensure sensor connectivity in the aforementioned scenario, being optimized to minimize energy consumption while guaranteeing long-range operation and low-cost deployment. in the presented application, lora-equipped sensors are embedded in artifacts to monitor a set of meaningful parameters throughout their lifetime. in this context, once the sensors are embedded, they are inaccessible, and their only power source is the originally installed battery. therefore, in this paper, the battery lifetime prediction and estimation problems are thoroughly investigated. for this purpose, an innovative model based on an artificial neural network (ann) is proposed, developed starting from the discharge curve of the lithium-thionyl chloride batteries used in the additive manufacturing application. the results of experimental campaigns carried out on real sensors were compared with those of the model and used to tune it appropriately. the results obtained are encouraging and pave the way for interesting future developments.

1. introduction
today, novel and intelligent measurement systems are increasingly developed thanks to the internet of things (iot) paradigm [1]. moreover, the combination of iot with the industry 4.0 [2], [3] paradigm (often referred to as industrial iot) introduces a number of unprecedented challenges to measurement capabilities [4], with the ever-increasing need to collect reliable yet accurate data from mobile, battery-powered nodes over potentially large areas. in this scenario, one of the enabling technologies of industry 4.0 is additive manufacturing (am). basically, am makes it possible to create 3d objects, such as prototypes of possible products designed with cad tools, without the burdens usually imposed by traditional production systems in terms of work organization, delivery times and material utilization. this has a tremendous impact on manufacturing processes, making them more efficient, timely, scalable, and customizable [5], [6].

the benefits resulting from the introduction of am also place unprecedented demands on sensor systems and related measurement techniques. in fact, there are several am applications, such as those based on powder-bed processes, where suitable sensor systems need to be developed to collect data during the manufacturing process of potentially large objects, e.g., for early detection of defects and anomalies, and for process qualification [7]. this is the case addressed in this paper, where a large powder-bed 3d printer is capable of producing different types of artifacts using mixtures of powders, derived from recycled materials or natural components such as sand, water and chemical-free reagents [8]. a peculiarity of the system is that the artifacts are permanently equipped with sensors. these sensors are embedded in the manufactured objects at the beginning of the production phase and cannot be physically accessed. the sensors are needed to provide measurements of appropriate variables, such as temperature and humidity, which are used for two main purposes: i) during artifact production, as mentioned above, they provide feedback that allows on-line tuning of the 3d printing process and assist in defect detection [9]; ii) during the artifacts' lifetime, when they are in their final positions, sensor data are collected to perform off-line analysis as well as to monitor environmental conditions. a system with these specifications has been developed as part of an italian regional project called admin-4d (additive manufacturing & industry 4.0 as innovation driver).

of course, to achieve the above goals, the sensors must also be able to communicate and send the measured data to the correct destination(s). for this purpose, technologies such as low power wide area networks (lpwans) represent a promising solution to ensure sensor connectivity, as they are optimized to minimize energy consumption while ensuring long-range operation and low-cost deployment. in the application addressed in this paper, one of the most representative implementations was chosen, namely lorawan (long range wan) [10], [11]. it is worth noting that the application of lorawan in the industrial scenario has been extensively analyzed [12], [13], proving its effectiveness, possibly after a suitable protocol optimization.
therefore, the targeted am application poses several challenges to the sensor network system, and three main issues must be evaluated for the efficient collection of measurement data: i) the effective acquisition of readings from the sensors embedded in the artifacts; ii) the transmission range of the sensors; iii) the lifetime of the batteries used by the sensors. we have already investigated both the actual transmission capability of the embedded sensors and the covered distances in our previous work [14], with satisfactory results: transmission ranges of several tens of meters could be effectively achieved with a low packet loss rate and under different environmental conditions, in line with the requirements. in this paper, we deal extensively with the last issue, namely battery life. obviously, this is a critical aspect for the whole project, since once an artifact has been produced, its embedded sensors can no longer be accessed, nor can their batteries be replaced or recharged. consequently, once the lifetime of the batteries has expired, measurements from the sensors will no longer be transmitted. it should also be noted that battery lifetime prediction is of paramount importance in several other iiot-based applications, such as cooperative robotics [15], [16]. battery lifetime prediction is closely related to the battery discharge model. unfortunately, the definition of an analytical discharge model, although theoretically possible, is difficult to achieve from the data typically provided by manufacturers. therefore, in this paper we present a novel approach to battery discharge modeling based on a hybrid scheme in which a static model and a machine learning (ml) one coexist. in particular, we use an artificial neural network (ann), which is necessary to overcome the nonlinearity of battery capacity as a function of temperature, and which is trained and validated using data derived from the battery datasheet. this ann model is then used in conjunction with a closed-form static battery model (e.g., an accurately adapted version of the matlab model) and finally introduced in the context of the am application described. the ultimate goal is to develop a model that can accurately predict the battery life of sensors under various operating conditions, including changes in temperature, power consumption, and other variables. the model should also be able to account for any nonlinearities that may occur, such as the effect of extreme temperatures on battery performance. experimental campaigns were conducted to compare the predictions of the models with real battery data. the models were then configured to emulate typical operating conditions to achieve realistic battery life predictions. as a result, it will be possible to more accurately predict sensor lifetime and optimize communication parameters to ensure maximum battery life. the paper is organized as follows. section 2 gives a brief description of the targeted am application and of the admin-4d project and outlines the requirements for sensor data acquisition. section 3 discusses some relevant related works and provides an overview of the contributions of this paper. section 4 introduces the adopted sensors and reports considerations about battery selection and characterization. section 5 describes the adopted dynamic battery discharge model and its tuning, whereas section 6 presents the developed data-driven discharge model.
section 7 then presents the outcomes of the experimental assessments and compares them with the model results. section 8 presents battery life predictions in the operational context of admin-4d. finally, section 9 concludes the paper with some considerations for future developments.

2. the admin-4d project

the structure of the admin-4d project is shown in figure 1. its operation is characterized by two distinct phases, namely production and final deployment. the production phase, in which an artifact is created, takes a variable amount of time, depending mainly on the size of the artifact itself. typical production times can be tens of hours. during this phase, sensors are embedded in the artifact and immediately begin transmitting data that is collected by the 3d printer's automation system and used for online feedback. as discussed in the previous section, the system uses a sensor network based on lorawan. specifically, a lorawan gateway (gw) device is connected to the 3d printer's automation system via the admin-4d intranet and provides connectivity to the embedded sensors that act as lorawan end devices (eds). after production, the final deployment phase begins: the artifacts are placed in their final locations and the lorawan gateway device is positioned within range of the sensors and connected to the internet. this allows sensor data to be transmitted to a remote cloud for offline analysis.

figure 1. automation system of the 3d printer.

for the sake of clarity, the two types of sensor data transmission shown in figure 1 (referred to as "sensor data for on-line feedback" and "sensor data for off-line analysis", respectively) will never occur simultaneously. in fact, during the production phase of an artifact, only the intranet connection is used, while during the final deployment phase, only the internet communication is active. the battery lifetime of the embedded sensors is strongly influenced by the periodicity of their transmissions as well as by the amount of data transmitted, which directly affects the time required for each transmission. in fact, transmission is the operating situation in which the lora interface of the sensors has the highest power consumption. in the context of the intended application, the transmission time depends on the operational phase (production or final deployment), while the amount of data transmitted remains the same throughout the life of the artifacts. obviously, during the production of an artifact, short transmission periods are required (relative to the time needed to produce the artifact), since sensor data is used for on-line feedback. conversely, in the final deployment phase, there is no need for strict timing and periods can be relaxed. table 1 shows the typical values of periods and amounts of data exchanged used in the following battery life analysis.

table 1. transmission periods and data payloads.
phase | period | data amount per sensor
production | ≤ 300 s | 20 bytes
final deployment | ≥ 3600 s | 20 bytes

3. related works and contribution

some interesting contributions dealing with the power consumption aspects of lpwans are available in the scientific literature. in [17], the authors provide a comprehensive assessment of the power consumption of commercially available modules by defining theoretical models based on real data obtained from experimental sessions. in [18], the energy consumption of a solid waste management system is addressed in a simulated scenario that also allows the estimation of battery life.
similarly, the authors of [19] present a self-optimizing wireless water level monitoring system that can improve battery life based on operating conditions. energy consumption is also a major constraint for the authors of [20], where a lora-based localization system is addressed. in [21], a battery lifetime analysis is proposed; interestingly, it refers to lora sensors deployed in manholes, which are difficult to access for maintenance purposes. nevertheless, an effective a priori estimation of the battery lifetime is crucial in the am application targeted in this manuscript, and this can only be achieved with accurate battery discharge models. in this direction, models based on machine learning (ml) techniques have already been successfully applied, since they make it easy to take into account the non-linearities that often occur in this context. for example, in [22], an ml-based system was used to detect the end of battery life. furthermore, in [23]-[27], ml techniques are used to build models for the state of charge, discharge curves, and lifetime prediction of lithium batteries. unfortunately, the models proposed in these works cannot be adopted in admin-4d, because the artifacts produced are expected to operate even under potentially extreme weather conditions. indeed, those models were developed for generic lithium batteries, whereas for the intended application lithium thionyl chloride batteries were chosen, which present very different characteristics. this highlights the importance of defining a more comprehensive battery discharge model, capable of capturing the battery performance under different usage conditions, such as temperature variations or different configurations of the sensor communication parameters. such a model would be an essential tool for the design of systems using battery-powered sensors, as it would allow the lifetime of the sensors to be estimated based on the conditions in which they will be used, and hence support the choice of the communication parameters and sensor suite that ensure maximum battery life. it is important to note that battery discharge curves are generally obtained using static parameters such as temperature and discharge current. in reality, however, sensors often operate under dynamic conditions where these parameters can vary continuously. therefore, it is necessary to use battery discharge models that take these variables into account to obtain more accurate results. moreover, given the multiple nonlinearities of the batteries under consideration, the ability to simulate the discharge process dynamically and under different conditions makes it possible to predict in advance potential problems in the on-board sensors, such as those due to voltage and temperature variations, allowing improvements and optimizations of the measurement system before the final deployment.

4. sensors and batteries selection

in consideration of the peculiarities of the am application described in the previous sections, the board hosting the sensors and communication interfaces, which must be embedded within the manufactured artifact, should also present suitable mechanical and waterproofing properties. we identified a good candidate in the tinovi pm-io-5-sm device [28], which can host different types of sensors and is already equipped with a lora interface, hence acting as a lorawan end device in the targeted lorawan sensor network scenario.
in the adopted configuration, the tinovi system provides both humidity and temperature measurements (the latter in the range [-20, 70] °c with an uncertainty of 0.6 °c) and transmits information about the battery state of charge with good resolution. another important consideration is that the lora interface of such sensors can be configured using over-the-air activation (otaa) mode to activate the end device. in fact, artifacts are typically moved from the production site to the final deployment site, so the sensors need to be able to join different lorawan networks during their lifetime. considering that the sensor becomes inaccessible after production, it must use otaa, so that the device address and session key required to join a network are dynamically assigned. otaa also allows network parameters (e.g., the measurement transmission period) to be changed remotely and dynamically. the tinovi sensors can be powered by different types of batteries. we have considered those listed in table 2, where they are compared for energy density and temperature range. in fact, the specific am printing process requires the batteries to work in a wide temperature range. for this reason, we chose lithium thionyl chloride (lisocl2) batteries. in fact, they are specifically designed to operate over a wide [-60, +85] °c temperature range, which makes them suitable for military and medical applications, locking mechanisms or metering applications that require long life without the need to replace the battery [29]-[30]. it should be noted that admin-4d also requires a sufficient energy density to keep the size of the batteries as small as possible and to ensure that they can be easily inserted into the artifacts. for this reason, the widespread and cheap sla batteries are not considered in this study. furthermore, the energy density of the selected lisocl2 batteries is considered sufficient for the application.

4.1. modelling of lithium thionyl chloride batteries

in a first stage, saft ls 17500 lisocl2 batteries were tested, as they are widely available off-the-shelf [31]. their main characteristics are summarized in table 3. unfortunately, the battery manufacturer does not provide a public dataset for the different measurements reported in the battery datasheet. for this reason, we have derived our data from the graphs contained in the datasheet using a graphical data extraction method. the available data consist of i) the voltage profiles, ii) the correlation between plateau voltage and current drawn at different temperatures and, finally, iii) the relationship between capacity and current drawn at different temperatures. the results of this extraction are represented with solid lines in figure 2, figure 3, and figure 4. considering the high resolution of the available images, the accuracy with which the curves are reproduced here is rather good. indeed, it is also possible to provide an estimate of the uncertainty associated with the data points. in fact, each curve has been originally sampled with 25 data points; for figure 2 we have a maximum uncertainty of 2.3 mv, while for figure 3 it is 4.5 mv. for the data points related to capacity in figure 4 the uncertainty is 23 ma h.
as can be seen in figure 2, the discharge profiles strongly depend on the current drawn, i.e., the current absorbed from the battery. the longest lifetime is 2523 hours, obtained with a constant 1.3 ma current. it can also be seen that the level of the plateau voltage decreases gradually at higher current levels. this effect is due to the highly nonlinear internal resistance, estimated to be 13.6 ω on average. this voltage drop is not a negligible effect, as a drop below the minimum operating voltage of the sensor can affect its actual functionality. the voltage variations are also evident considering the discharge curves at different temperatures, as reported in figure 3. as can be seen, for a given current, there are considerably different plateau voltage values, depending on the working temperature. this behaviour may represent an issue in several applications (the admin-4d project being one of them) where the batteries used to feed the sensors are exposed to highly variable climatic conditions. temperature and current drawn also impact the total available battery capacity. from figure 4, it is clear that these types of cells are susceptible to the peukert's effect [32], which lowers the total available battery capacity depending on the current drawn and the temperature, with a consequent reduction of the battery lifetime.

table 2. comparison of different battery chemistries.
chemistry | energy density (w h l-1) | temperature range (°c)
sealed lead acid (sla) | 70 | -40 / +60
alkaline (zn-mno2) | 340 | -20 / +70
lithium cobalt oxide (licoo2) | 560 | -20 / +60
lithium manganese nickel (linimncoo2) | 580 | -20 / +60
lithium thionyl chloride (lisocl2) | 350 | -60 / +85

table 3. saft ls 17500 battery specification.
description | specification
rechargeable | no
nominal voltage | 3.60 v
nominal capacity | 3600 ma h
nominal cut-off voltage | 3.3 v
operating temperature | -60 °c to 85 °c

figure 2. typical discharge profile at 20 °c for the saft ls 17500 battery. extracted from [31].
figure 3. dependence of the plateau voltage on the current drawn at different temperatures. extracted from [31].
figure 4. dependence of the total available capacity on the current drawn at different temperatures. extracted from [31].

it can be concluded that the design of an accurate battery lifetime model is a challenging task that requires the consideration of several different aspects. admittedly, accurate models can be developed by exploiting the equations describing the chemical interaction between the anode and the cathode. such models can achieve a very high accuracy, since they depend on the intrinsic parameters of the battery. unfortunately, they are difficult to define and tune for commercially available cells, since the information available for such devices is typically insufficient. therefore, to address this challenge, in this work we have adopted a hybrid approach, where a suitably tuned generic battery dynamic model is used in conjunction with a data-driven approach based on artificial neural networks. in the following section, we discuss the tuning of the dynamic model, while the ann approach is discussed in section 6.
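to make the capacity reduction just described concrete, the following is a minimal python sketch of peukert's law, under stated assumptions: the nominal figures come from table 3, while the peukert constant k is purely illustrative, since the paper does not report an estimated value for these cells (estimating such constants is the subject of [32]).

```python
# minimal sketch of peukert's law for the capacity reduction at high
# current draw discussed in section 4.1. the peukert constant k is an
# assumption chosen only for illustration; the paper does not report one.
def effective_capacity_mah(nominal_capacity_mah: float,
                           nominal_current_ma: float,
                           current_ma: float,
                           k: float = 1.1) -> float:
    """capacity available at a given discharge current, per peukert's law."""
    return nominal_capacity_mah * (nominal_current_ma / current_ma) ** (k - 1.0)

def lifetime_h(capacity_mah: float, current_ma: float) -> float:
    """constant-current lifetime for a given effective capacity."""
    return capacity_mah / current_ma

if __name__ == "__main__":
    # nominal values from table 3 (saft ls 17500): 3600 mah, 1.3 ma nominal
    for i in (1.3, 3.0, 8.0, 33.0, 120.0):
        c_eff = effective_capacity_mah(3600.0, 1.3, i)
        print(f"{i:6.1f} ma -> capacity {c_eff:7.1f} mah, "
              f"lifetime {lifetime_h(c_eff, i):8.1f} h")
```

with these illustrative numbers, the 1.3 ma lifetime comes out near 2769 h rather than the 2523 h of figure 2, a reminder that the real cells deviate from this simple law; capturing such deviations is precisely the motivation for the data-driven model of section 6.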
5. tuning of the generic battery dynamic model

several models for general-purpose lithium-ion based batteries have been proposed in the scientific literature. a widely used one is discussed in [33] and its implementation is currently found in matlab simscape under the name generic battery dynamic model, hereafter referred to as gbdm. this model describes the behaviour of a generic rechargeable battery using a combination of three stages. first, a (short) exponential voltage drop occurs suddenly when the battery is fully charged and starts to deliver energy; this is followed by a so-called nominal region (a plateau region), where the energy is delivered from the battery at a quasi-constant voltage level, until the voltage starts to drop below the nominal battery voltage level. the third stage is related to the sudden loss of energy, where the battery is discharged and the voltage drops abruptly. the gbdm is implemented by an equivalent feedback system, whose parameters can be tuned to adequately model different battery typologies and their discharge characteristics. in this regard, we considered the parameters provided by the battery manufacturer, in particular the discharge profiles specific to the saft ls 17500 lisocl2 batteries shown in figure 2 to figure 4. from the discharge profile at the nominal current of 1.3 ma, we were able to extract the required model parameters, which are reported in table 4. the tuning process of the gbdm model resulted in the discharge curves shown in figure 5, which have been subsequently derived for different current values (solid lines) and compared with those reported in the battery datasheet (dotted lines). as can be observed in the figure, the model discharge profiles (and thus the lifetime estimates) are in good agreement with those declared by the manufacturer for current values of 1.3, 3 and 8 ma. for higher currents, the experimental and model trends differ, to the point that the model estimate of the discharge profile at 120 ma is completely inconsistent with the experimental one (in fact, it is not even visible in the plot). the above voltage profiles can be compared from a numerical point of view by looking at both the average deviation between the experimental and model curves and the predicted battery life. the former can be evaluated by the root mean square error (rmse), calculated as the root of the average squared deviations between the model predicted value and the expected voltage level of the battery, as extracted from the manufacturer's discharge curve for a given current and after a given time. table 5 reports the obtained values, where it can be observed that the rmse is relatively low for low currents and consistently increases as the current increases. analogously, the same trend can be observed in the lifetime estimation, although in a less obvious way, because the estimated lifetime is only consistent at the nominal current of 1.3 ma used for the tuning process. the obtained results are not surprising since, as already pointed out, lithium thionyl chloride batteries present a considerable internal resistance that varies non-linearly with the current drawn. such batteries are also affected by the peukert's effect, and their resistance is almost constant at the nominal current and tends to decrease at higher currents. unfortunately, the gbdm does not consider these effects since, as a matter of fact, it keeps the resistance constant for every current draw, so that the peukert's effect is not captured at all.
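as a rough illustration of how such a model produces a discharge curve, the sketch below implements a simplified static form of the discharge equation from [33], v(it, i) = e0 - r·i - k·(q/(q - it))·(it + i) + a·exp(-b·it). the values of e0, r and q follow table 4, while the polarization constant k and the exponential-zone parameters a and b are illustrative assumptions; the full model also low-pass filters the current in the polarization term, which is omitted here. this is a sketch only, not the matlab implementation used in the paper.

```python
import math

# simplified static discharge equation inspired by the gbdm of [33].
# e0, r and q follow table 4; k, a and b are assumptions for illustration.
E0 = 3.6       # constant voltage (v), nominal voltage from table 4
R = 13.6       # internal resistance (ohm), from table 4
Q = 3.6        # maximum capacity (ah), from table 4
K = 0.005      # polarization constant (v/ah) -- assumption
A = 0.07       # exponential-zone amplitude (v) -- assumption (3.67 v - 3.6 v)
B = 30.0       # exponential-zone inverse capacity constant (1/ah) -- assumption

def v_batt(it: float, i: float) -> float:
    """discharge voltage for extracted charge it (ah) at constant current i (a)."""
    return E0 - R * i - K * (Q / (Q - it)) * (it + i) + A * math.exp(-B * it)

def lifetime_hours(i_ma: float, v_cutoff: float = 3.3, dt_h: float = 0.1) -> float:
    """integrate the discharge at constant current until the cut-off voltage."""
    i = i_ma / 1000.0
    it, t = 0.0, 0.0
    while it < Q and v_batt(it, i) > v_cutoff:
        it += i * dt_h
        t += dt_h
    return t

if __name__ == "__main__":
    print(f"estimated lifetime at 1.3 ma: {lifetime_hours(1.3):.0f} h")
```

with these assumed values, the 3.3 v cut-off of table 3 is reached after roughly 2600 h at 1.3 ma, in the same range as the tuned-model figure of table 5.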
table 4. parameters of the gbdm model, for the characterization of the discharge profile.
parameter | value
nominal voltage | 3.6 v
maximum capacity | 3.6 a h
fully charged voltage | 3.67 v
nominal discharge current | 1.3 ma
internal resistance | 13.6 ω
nominal ambient temperature | 20 °c
second ambient temperature | -40 °c
maximum capacity at -40 °c | 2.744 a h
initial discharge voltage at -40 °c | 3.12 v

figure 5. comparison between the discharge profiles at 20 °c extracted from the datasheet (dotted lines) and derived from the tuned gbdm model (solid lines), for 5 different current values.

table 5. comparison between the discharge profile (at 20 °c) extracted from the datasheet and the gbdm model.
current (ma) | datasheet lifetime (h) | estimated lifetime (h) | error (%) | rmse in voltage profile (v)
1.3 | 2523.88 | 2523.92 | 0.00 | 2.97·10^-4
3.0 | 1140.39 | 1093.70 | 4.09 | 1.03·10^-3
8.0 | 425.76 | 410.14 | 3.67 | 1.47·10^-3
33.0 | 90.38 | 99.43 | 10.02 | 1.18·10^-1
120.0 | 17.20 | 27.34 | 58.95 | 9.73·10^-1

as discussed earlier, temperature is another source of complexity and nonlinearity, which the gbdm takes into account even though it cannot reproduce the same behaviour as the lithium thionyl chloride batteries. this is visually illustrated in figure 6, which shows the discharge profile obtained at a constant discharge current of 1.3 ma for different temperatures. in the figure, five different temperatures are considered, and the discharge curves obtained from the gbdm are shown with solid lines, while the manufacturer's curves are shown with dotted lines.

figure 6. comparison between the discharge profile extracted from the datasheet (dotted lines) and the gbdm model (solid lines) at different temperatures. all the curves are obtained considering a constant discharge current of 1.3 ma.

the graph clearly indicates that the match between model and datasheet is rather satisfactory only for the temperatures of 20 °c and -40 °c, which are those used for tuning the model, as reported in table 4. conversely, for other temperatures a significant discrepancy can be observed, for example at 55 °c and 70 °c. looking at figure 6, it can also be observed that both the plateau voltage and the total capacity depend on the temperature in a strongly non-linear way. moreover, in the leftmost part of the solid gbdm curves, the exponential variation of the voltage is evident (notice that it appears linear since the x-axis is in logarithmic scale). this behaviour is less evident only for t = 20 °c in both figure 5 and figure 6, since the model is tuned around that temperature, confirming the temperature dependence. the results obtained from the tuned gbdm model suggest that it is able to provide a satisfactory description of the main features of the discharge behaviour of the adopted lisocl2 batteries, but limited to cases close to the operating points selected for the model calibration. in the next section, a proposal to overcome this limitation is presented, based on a data-driven approach to be used in conjunction with the gbdm approach.

6. a data-driven discharge model

models based on a data-driven approach may prove advantageous to address the negative effects due to nonlinearities that are experienced with models based on battery parameters, such as the gbdm discussed in the previous section [34]. in particular, an approach that has already proven its effectiveness [35] relies on the use of artificial neural networks (anns) to profitably approximate the nonlinear functions related to the battery discharge behaviour with the desired accuracy.
therefore, to address the issues of the dynamic battery model that have been highlighted in the previous section, in particular those related to the nonlinear dependence on the temperature and the internal resistance, we exploited a suitable ann with the aim of increasing the accuracy of the lifetime estimation for lisocl2 batteries. in particular, we implemented a multiple-layer perceptron neural network (mlpnn), since it has demonstrated to be promising in solving similar problems [36]. the structure of the mlpnn is shown in figure 7. in the input layer, the input parameters are the state of charge (soc), the temperature (t) and the current drawn (c). two hidden layers, each with 64 neurons, realize the non-linear transformation from the input layer. finally, the output layer provides the battery voltage (v) and the lifetime estimation (t).

figure 7. structure of the adopted mlpnn.

the structure of the mlpnn was chosen with an embedded system implementation in mind, so that it can be deployed directly in the sensor, which can then change the period and transmission parameters based on the estimated duration. in particular, the minimum number of neurons and hidden layers was chosen to solve the estimation problem without causing overfitting, while ensuring low memory consumption and computation time. the training and validation phases of the applied ann clearly require a consistent and sufficiently large dataset. unfortunately, as mentioned in section 4.1, the battery manufacturer does not provide a public dataset for the different measurements reported in the battery datasheet, which led us to use a graphical extraction procedure. in order to obtain an adequate amount of data for training and for identifying the complex relationships between inputs and outputs, we needed to apply a data augmentation technique to artificially increase the size of the dataset from the initial 25 points per curve. to do this, we used a spline curve to fit the original raw data points while minimizing the rmse. given the low uncertainty associated with the extracted data, and the fact that the rmse of this fit was kept extremely low, the uncertainty associated with the data inferred from the spline can be considered consistent with the values reported in section 4.1. the obtained augmented dataset consists of 250000 unique entries, which were split between training and validation in a ratio of 80 %-20 %. other meaningful hyperparameters are shown in table 6.

table 6. parameters for the characterization of the discharge profile in the mlpnn model.
parameter | value
activation function | tanh
loss function | mse
optimizer | adam
learning rate | 0.0001 (non-scheduled)
epochs | 150
batch size | 512

as shown in table 6, the number of training epochs was set to 150. this value was determined using the early stopping technique to avoid overfitting. the trends of the training and validation loss are shown in figure 8. as can be seen from the figure, both the training and validation loss decrease with the number of epochs and almost reach zero.

figure 8. training and validation loss.

after training the model, its behaviour was compared with the experimental data obtained from the manufacturer. the results of modelling with the mlpnn are shown in figure 9, where the discharge profiles obtained for different currents are reported. table 7 provides more detailed statistics. as can be seen, the results obtained are better than those obtained with the gbdm model.

figure 9. comparison between the discharge profile at 20 °c on the test dataset and the one generated by the mlpnn. markers represent the test data while solid lines represent the data generated by the model.

table 7. comparison between the discharge profile at 20 °c extracted from the validation dataset and the mlpnn model.
current (ma) | datasheet lifetime (h) | estimated lifetime (h) | error (%) | rmse in voltage profile (v)
1.3 | 2523.88 | 2568.19 | 1.76 | 3.10·10^-4
3.0 | 1140.39 | 1122.50 | 1.56 | 7.41·10^-5
8.0 | 425.76 | 425.52 | 0.06 | 9.33·10^-5
33.0 | 90.38 | 89.21 | 1.29 | 2.57·10^-5
120.0 | 17.20 | 16.94 | 1.51 | 1.19·10^-5
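for concreteness, the following is a minimal pytorch sketch of the network and training configuration described above and in table 6. the spline-based augmentation is only hinted at with placeholder curve data, and the tensors standing in for the 250000-entry dataset are random illustrative stand-ins; none of this is the authors' actual code.

```python
import numpy as np
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from scipy.interpolate import UnivariateSpline

# --- data augmentation (sketch): fit a spline through the 25 digitized
# points of one datasheet curve and resample it densely, as described above.
# t_raw / v_raw are placeholder data, not the extracted datasheet values.
t_raw = np.linspace(0.0, 2523.0, 25)             # placeholder time axis (h)
v_raw = 3.5 - 0.2 * (t_raw / 2523.0) ** 4        # placeholder voltage curve (v)
spline = UnivariateSpline(t_raw, v_raw, s=1e-6)  # smoothing kept very low
t_dense = np.linspace(t_raw[0], t_raw[-1], 10_000)
v_dense = spline(t_dense)

# --- mlpnn with the structure of figure 7: inputs (soc, t, c),
# two hidden layers of 64 tanh neurons, outputs (v, lifetime).
model = nn.Sequential(
    nn.Linear(3, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 2),
)

# hyperparameters from table 6: mse loss, adam, lr = 0.0001, batch size 512
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# x: (n, 3) inputs, y: (n, 2) targets -- random stand-ins for the
# augmented dataset (80 %/20 % train/validation split in the paper).
x = torch.randn(1000, 3)
y = torch.randn(1000, 2)
loader = DataLoader(TensorDataset(x, y), batch_size=512, shuffle=True)

for epoch in range(150):  # early stopping on validation loss omitted here
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
```

such a network (roughly 4600 weights) is small enough to be evaluated on a sensor-class microcontroller, which is consistent with the embedded-deployment rationale given above.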
in fact, although the percentage error of the lifetime estimation at 1.3 ma is higher than that of the gbdm model, the lifetime estimation error is limited to 2 % over the whole discharge current range. the accuracy of the model on the test dataset is also confirmed by the rmse in the voltage profile, which is definitely very limited. while for the gbdm model both the lifetime estimation error and the rmse increase moving away from the profile on which the tuning was performed, the mlpnn keeps both metrics bounded and rather stable over the different profiles. the advantage of using such a model seems clear: it is able to better generalize the estimation problem, giving a higher accuracy over a wide range of operating conditions. these considerations are further confirmed by the discharge profiles obtained for a constant current of 1.3 ma but for different temperatures. this comparison is shown in figure 10, whereas some statistical details are reported in table 8. even in this case, the discharge profiles as well as the lifetime estimations agree with the experimental ones with a very good accuracy, and an estimation error under 6 % in the worst case.

figure 10. comparison between the discharge profiles from the validation dataset and the ones generated by the mlpnn, at different temperatures. markers represent the validation data while lines represent the data generated by the model. all the curves are obtained considering a constant discharge current of 1.3 ma.

table 8. comparison between the discharge profiles at a constant discharge current of 1.3 ma for different temperatures.
temperature (°c) | datasheet lifetime (h) | estimated lifetime (h) | error (%) | rmse in voltage profile (v)
-40.0 | 1666.21 | 1760.29 | 5.65 | 3.17·10^-4
-20.0 | 2110.56 | 2233.70 | 5.83 | 3.44·10^-4
20.0 | 2523.66 | 2568.19 | 1.76 | 3.10·10^-4
55.0 | 1883.47 | 1944.04 | 3.22 | 3.33·10^-4
70.0 | 1688.57 | 1711.55 | 1.36 | 3.31·10^-4

7. experimental assessment of the proposed discharge models

in order to evaluate the effectiveness of the proposed models, we first performed some experimental tests on the tinovi smart sensor alone. the goal is to verify whether the proposed models, tuned on the data provided by the manufacturer, are able to provide a consistent prediction of the actual battery behaviour in the final system. to increase the reproducibility of the experiments, the experimental sessions were conducted in a room with a controlled temperature of 20 °c, with the tinovi device (namely, the lorawan ed) placed 1 m away from the lorawan gateway. the adopted tinovi pm-io-5-sm smart sensor is configured to transmit in each lorawan frame an internal reading of the battery voltage, whose value is encoded in binary format to map the range from 2.8 v to the maximum value of 4.2 v reached during charging. the resolution of the voltage reading is 1 %, which corresponds to 14 mv. during these tests, the sensor is configured to transmit a packet with a period of five (5) minutes, containing the measured variables (mostly, temperature and humidity) and the information about the battery status. it should be noted that the sensor can be in two different states, active and sleep: the former is the state in which the sensor is either sending data or receiving data from its probes, while the latter is the idle state in which the sensor does not perform any action.
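as a worked example of the encoding just described, a raw percentage reading maps back to volts as follows; the function name and the integer granularity are assumptions consistent with the stated 1 % (14 mv) resolution over the 2.8 v to 4.2 v range, not a documented tinovi api.

```python
V_MIN, V_MAX = 2.8, 4.2  # encoded voltage range stated above (v)

def decode_battery_voltage(raw_percent: int) -> float:
    """map the transmitted battery reading (0..100, 1 % steps) to volts.

    hypothetical decoder consistent with the description above: one 1 %
    step over the 1.4 v span corresponds to the stated 14 mv resolution.
    """
    if not 0 <= raw_percent <= 100:
        raise ValueError("reading outside the encoded range")
    return V_MIN + (V_MAX - V_MIN) * raw_percent / 100.0

assert abs(decode_battery_voltage(1) - decode_battery_voltage(0) - 0.014) < 1e-9
print(decode_battery_voltage(50))  # -> 3.5 v
```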
the main parameters used to configure the tinovi smart sensor during these experimental sessions are shown in table 9. the average current consumption of the sensor, as shown in the table, is 2.6 ma.

table 9. parameters adopted in the experimental setup.
parameter | value
transmission frequency | 868 mhz
spreading factor | 7
bandwidth | 125 khz
code rate | 4/5
payload length | 20
sleep time | 300 s
wake-up + acquisition + transmission time | ~5 s
sleep current consumption | 0.15 ma
active current consumption | 150 ma
average current consumption | 2.6 ma
module operating temperature range | -20 °c to +70 °c
voltage operating range | 2.5 v to 6 v
output power | 14 dbm

we have therefore used the two proposed models to generate the corresponding discharge curves for this specific current and at a constant temperature of 20 °c. the estimates of the voltage profiles were then compared with the results of the real voltage measurements taken by the sensor. the results are shown in figure 11 and table 10.

figure 11. comparison between the discharge profiles generated by the mlpnn and the gbdm models, with respect to the experimentally measured one. controlled environment, t = 20 °c, sensor only.

table 10. evaluation statistics relevant to the curves in figure 11.
model | lifetime (h) | lifetime error (%) | rmse (v)
experimental | 1323.94 | - | -
mlpnn | 1322.32 | 0.12 | 2.33·10^-4
gbdm | 1302.38 | 1.63 | 1.06·10^-3

the comparison highlights that both models produce similar discharge profiles, in rather good accordance with each other. in particular, the mlpnn is able to better approximate the nominal (plateau) voltage phase and is very close to the actually measured final fast discharge phase. conversely, the gbdm is more precise at modelling the starting point and the initial part of the discharge phase, where the voltage starts to drop below the nominal value. overall, the mlpnn provides a higher accuracy in the prediction of both the voltage values and the total battery lifetime, as can be seen from table 10. in fact, the gbdm model tends to overestimate the plateau and to slightly underestimate the lifetime of the battery. another aspect that affects the power consumption of the sensor is the spreading factor (sf) adopted by the lora modules. in fact, the use of a higher sf (at the same transmit power, of course) allows a more robust transmission, but at the cost of longer transmission times, which consequently result in a higher power consumption of the whole sensor. for example, in the case of the tinovi pm-io-5-sm, the transmission of 20 bytes with sf12 increases the time spent in the active state by the ed from 5 s to 6 s (remember that the packet airtime with sf12 is 1810 ms). as a result, the average current required increases from 2.6 ma to 3.1 ma, with a corresponding decrease in battery life. based on the outcomes of the previous experiment, the mlpnn model can be profitably exploited to estimate the battery lifetime both at different temperatures and for a higher sf. the results of this analysis are shown in table 11, which confirms a decrease of about 16-17 % in battery life when moving from sf = 7 to sf = 12.

table 11. battery lifetime estimation using the mlpnn model at different temperatures for sf = 7 and sf = 12.
temperature (°c) | lifetime (h)
sf = 7:
-20 | 1126.90
20 | 1323.94
70 | 938.56
sf = 12:
-20 | 928.30
20 | 1112.94
70 | 784.43
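the average current figures quoted above follow from a simple charge-weighted duty-cycle computation over one sleep/active period. the sketch below reproduces the 2.6 ma (sf7) and roughly 3.1 ma (sf12) values from table 9 and the text, under the assumption that the 150 ma active current applies for the whole active window.

```python
def average_current_ma(t_active_s: float, i_active_ma: float,
                       t_sleep_s: float, i_sleep_ma: float) -> float:
    """charge-weighted average current over one sleep/active cycle."""
    period = t_active_s + t_sleep_s
    return (t_active_s * i_active_ma + t_sleep_s * i_sleep_ma) / period

# values from table 9: 150 ma active, 0.15 ma sleep, 300 s sleep time
print(average_current_ma(5.0, 150.0, 300.0, 0.15))  # ~2.61 ma (sf7, ~5 s active)
print(average_current_ma(6.0, 150.0, 300.0, 0.15))  # ~3.09 ma (sf12, ~6 s active)
```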
8. battery life estimation in the admin-4d application context

in this section, the developed battery models have been used to estimate the lifetime in the real application context, i.e., considering the whole 3d printing process of an artifact (production phase) and its subsequent positioning at the final site (deployment phase), where sensor data is transmitted to the remote cloud. it should be noted that the production of a real artifact is a process that takes 72 h. in this phase, the sleep time of the lorawan eds (which corresponds to the period of sensor data transmission) was set to 300 s, in accordance with the values given in table 1. subsequently, in the final deployment phase, the sleep time was increased to 3600 s in order to maximize battery life while maintaining the required periodicity of measurement data for monitoring purposes. with these values of the sleep period duration, the resulting average current consumption is about 2.6 ma in the production phase and 0.35 ma in the final deployment phase. the two phases are clearly very different from each other, both in terms of power consumption and temperature range. moreover, the production phase is characterized by strong temperature variations, as experimentally observed, which range from 20 °c to 70 °c. from the analysis carried out in the previous sections, none of the models considered is able to satisfactorily emulate both phases. indeed, in the production phase, the gbdm model appears unsuitable due to the high temperature dynamics, while the mlpnn shows a good behaviour with respect to this type of operating conditions. conversely, in the deployment phase, which is practically static, the gbdm is more effective, taking into account the fact that the mlpnn provides inconsistent estimates, since this model has not been trained for the low current that characterizes such a phase. as a consequence, we adopted a hybrid approach to model the discharge curve in the real application context. specifically, the mlpnn model is used for the production phase and the gbdm for the deployment one. figure 12 shows the resulting voltage profile obtained with this hybrid approach. as can be seen, there is a slight voltage increase in the initial part (see the inner rectangle showing the curve from 0 to 100 hours) due to the increase in temperature. this is followed by the expected voltage plateau and finally by the rapid voltage drop. note that the glitch at t = 72 hours is due to the switch between the two models: it represents the initial rapid exponential part of the discharge curve typical of the gbdm model.

figure 12. voltage profile during the production and deployment phases.
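before looking at the model-based estimate reported next, a naive constant-capacity calculation brackets the expected order of magnitude; it uses the nominal 3600 mah of table 3 and the phase currents above, and deliberately ignores the peukert and temperature effects that the hybrid mlpnn/gbdm model captures.

```python
# naive two-phase lifetime estimate: 72 h of production at ~2.6 ma,
# then final deployment at ~0.35 ma until the nominal 3600 mah capacity
# (table 3) is exhausted. this ignores peukert and temperature effects,
# which is exactly what the hybrid mlpnn/gbdm model is there to capture.
CAPACITY_MAH = 3600.0
T_PROD_H, I_PROD_MA = 72.0, 2.6
I_DEPLOY_MA = 0.35

charge_used_production = T_PROD_H * I_PROD_MA        # ~187 mah
remaining = CAPACITY_MAH - charge_used_production
lifetime_h = T_PROD_H + remaining / I_DEPLOY_MA
print(f"naive total lifetime: {lifetime_h:.0f} h")   # ~9823 h
```

the naive figure (about 9800 h) is slightly above the 9104 h obtained with the tuned models below, as expected when capacity losses are neglected.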
taking into account the assessment of the tuned models made in the previous sections, the results obtained in this last analysis allowed the battery lifetime, for the final application conditions, to be estimated at up to 9104 hours. this corresponds to more than one year during which the sensor is able to send useful monitoring data, a value that meets the requirements of the admin-4d project.

9. conclusions and future directions of research

this paper presented an additive manufacturing experiment in which battery-powered sensors are embedded in the manufactured artifacts. since the sensors are inaccessible after manufacturing, the acquisition of their measurements is totally conditioned by the lifetime of the batteries. therefore, we investigated the possibility of modelling the discharge curve of the batteries in order to estimate their lifetime, taking into account the challenging operating conditions imposed by the targeted application. for this purpose, an innovative model based on an artificial neural network (ann), specifically an mlpnn, has been proposed and developed starting from the discharge curve of the adopted lithium-thionyl chloride batteries. the model has been used in conjunction with a popular one, namely the gbdm, realizing a hybrid approach that proved effective for the battery lifetime estimation. both models have been calibrated using information from the battery datasheet and validated using measurements obtained from actual sensors. the admin-4d project has been completed, and some prototype artifacts have already been produced and positioned at the partners' sites, where they transmit sensor data as described for the final deployment phase. this scenario paves the way for some interesting future activities. the first, related to the follow-up of the project, is represented by the off-line analyses that will provide data to better tune the upcoming artifact production. second, and more focused on the topics of this paper, the mlpnn model needs to be further refined to become effective also for low currents. in this respect, the data coming from the currently installed artifacts will prove particularly useful, since they can be used to perform a more extensive and precise training and validation of the model.

references
[1] s. serroni, m. arnesano, l. violini, g. m. revel, an iot measurement solution for continuous indoor environmental quality monitoring for buildings renovation, acta imeko 10 (2021) 4, art. 35, pp. 230-238. doi: 10.21014/acta_imeko.v10i4.1182
[2] v. alcácer, v. cruz-machado, scanning the industry 4.0: a literature review on technologies for manufacturing systems, engineering science and technology, an international journal, vol. 22, no. 3, 2019, pp. 899-919. doi: 10.1016/j.jestch.2019.01.006
[3] e. oztemel, s. gursev, literature review of industry 4.0 and related technologies, journal of intelligent manufacturing, vol. 31, 2020, pp. 127-182. doi: 10.1007/s10845-018-1433-8
[4] e. balestrieri, l. de vito, f. lamonaca, f. picariello, s. rapuano, i. tudosa, research challenges in measurements for internet of things systems, acta imeko 7 (2018) 4, art. 14, pp. 82-94. doi: 10.21014/acta_imeko.v7i4.675
[5] l. fratocchi, c. di stefano, industry 4.0 technologies and manufacturing back-shoring: a european perspective, acta imeko 9 (2020) 4, pp. 13-20. doi: 10.21014/acta_imeko.v9i4.721
[6] g. allevi, l. capponi, p. castellini, p. chiariotti, f. docchio, f. freni, r. marsili, m. martarelli, r. montanini, s. pasinetti, a. quattrocchi, r. rossetti, g. rossi, g. sansoni, e. p. tomasini, investigating additive manufactured lattice structures: a multi-instrument approach, ieee transactions on instrumentation and measurement, vol. 69, no. 5, 2020, pp. 2459-2467. doi: 10.1109/tim.2019.2959293
[7] m. grasso, a. remani, a. dickins, b. m. colosimo, r. k. leach, in-situ measurement and monitoring methods for metal powder bed fusion: an updated review, measurement science and technology, vol. 32, no. 11, p. 112001, jul. 2021. doi: 10.1088/1361-6501/ac0b6b
[8] t. fedullo, a. morato, g. peserico, l. trevisan, f. tramarin, s. vitturi, l. rovati, an iot measurement system based on lorawan for additive manufacturing, sensors, vol. 22, no. 15, 2022. online [accessed 21 march 2023] https://www.mdpi.com/1424-8220/22/15/5466
[9] m. grasso, b. m. colosimo, process defects and in situ monitoring methods in metal powder bed fusion: a review, measurement science and technology, vol. 28, no. 4, feb. 2017, p. 044005. doi: 10.1088/1361-6501/aa5c4f
[10] s. vitturi, l. trevisan, a. morato, g. frigo, f. tramarin, evaluation of lorawan for sensor data collection in an iiot-based additive manufacturing project, 2020 ieee int. instrumentation and measurement technology conf. (i2mtc), dubrovnik, croatia, 25-28 may 2020, pp. 1-6. doi: 10.1109/i2mtc43012.2020.9128684
[11] s. spinsante, l. gioacchini, l. scalise, a field-measurements-based lora network planning tool, acta imeko 9 (2020) 4, pp. 21-29. doi: 10.21014/acta_imeko.v9i4.725
[12] m. luvisotto, f. tramarin, l. vangelista, s. vitturi, on the use of lorawan for indoor industrial iot applications, wireless communications and mobile computing, vol. 2018, p. 11. doi: 10.1155/2018/3982646
[13] t. fedullo, a. morato, f. tramarin, p. bellagente, p. ferrari, e. sisinni, adaptive lorawan transmission exploiting reinforcement learning: the industrial case, 2021 ieee int. workshop on metrology for industry 4.0 and iot, metroind 4.0 and iot 2021, rome, italy, 7-9 june 2021, pp. 671-676. doi: 10.1109/metroind4.0iot51437.2021.9488498
[14] l. trevisan, s. vitturi, f. tramarin, a. morato, an iiot system to monitor 3d-printed artifacts via lorawan embedded sensors, ieee int. conference on emerging technologies and factory automation (etfa), vienna, austria, 8-11 september 2020, pp. 1-4. doi: 10.1109/etfa46521.2020.9212019
[15] f. zonzini, c. aguzzi, l. gigli, l. sciullo, n. testoni, l. de marchi, t. s. cinotti, c. mennuti, a. marzani, structural health monitoring and prognostic of industrial plants and civil structures: a sensor to cloud architecture, ieee instr. & measurement magazine, vol. 23, no. 9, 2020, pp. 21-27. doi: 10.1109/mim.2020.9289069
[16] g. di renzone, e. landi, m. mugnaini, l. parri, g. peruzzi, a. pozzebon, assessment of lorawan transmission systems under temperature and humidity, gas, and vibration aging effects within iiot contexts, ieee transactions on instrumentation and measurement, vol. 71, 2022, pp. 1-11. doi: 10.1109/tim.2021.3137568
[17] s. ould, n. s. bennett, energy performance analysis and modelling of lora prototyping boards, sensors, vol. 21, no. 23, 2021, 17 pp. doi: 10.3390/s21237992
[18] s. v. akram, r. singh, m. a. alzain, a. gehlot, m. rashid, o. s. faragallah, w. el-shafai, d. prashar, performance analysis of iot and long-range radio-based sensor node and gateway architecture for solid waste management, sensors, vol. 21, no. 8, 2021. doi: 10.3390/s21082774
[19] t.-k. chi, h.-c. chen, s.-l. chen, p. a. r. abu, a high-accuracy and power-efficient self-optimizing wireless water level monitoring iot device for smart city, sensors, vol. 21, no. 6, 2021, pp. 1-18. doi: 10.3390/s21061936
[20] p. mayer, m. magno, a. berger, l. benini, rtk-lora: high-precision, long-range, and energy-efficient localization for mobile iot devices, ieee transactions on instrumentation and measurement, vol. 70, 2021, no. 3000611. doi: 10.1109/tim.2020.3042296
[21] li lei, zhang he sheng, liu xuan, development of low power consumption manhole cover monitoring device using lora, ieee int. instrumentation and measurement technology conf. i2mtc, auckland, new zealand, 20-23 may 2019. doi: 10.1109/i2mtc.2019.8826885
[22] a. q. tameemi, reliable battery terminal voltage collapse detection using supervised machine learning approaches, ieee sensors journal, vol. 22, no. 1, 2022, pp. 795-802. doi: 10.1109/jsen.2021.3131859
[23] b. rente, m. fabian, m. vidakovic, x. liu, x. li, k. li, t. sun, k. t. v. grattan, lithium-ion battery state-of-charge estimator based on fbg-based strain sensor and employing machine learning, ieee sensors journal, vol. 21, no. 2, 2021, pp. 1453-1460. doi: 10.1109/jsen.2020.3016080
[24] z. ni, y. yang, a combined data-model method for state of charge estimation of lithium-ion batteries, ieee transactions on instrumentation and measurement, 2021, no. 2503611. doi: 10.1109/tim.2021.3137550
[25] s. s. afshari, s. cui, x. xu, x. liang, remaining useful life early prediction of batteries based on the differential voltage and differential capacity curves, ieee transactions on instrumentation and measurement, 2021, no. 6500709. doi: 10.1109/tim.2021.3117631
[26] y. li, h. sheng, y. cheng, h. kuang, lithium-ion battery state of health monitoring based on ensemble learning, ieee int. instrumentation and measurement technology conf. i2mtc, auckland, new zealand, 20-23 may 2019. doi: 10.1109/i2mtc.2019.8826824
[27] b. saha, k. goebel, s. poll, j. christophersen, prognostics methods for battery health monitoring using a bayesian framework, ieee transactions on instrumentation and measurement, vol. 58, no. 2, 2009, pp. 291-296. doi: 10.1109/tim.2008.2005965
[28] tinovi, tinovi.com. online [accessed 21 march 2023] https://tinovi.com/shop/lorawan-soil-moisture-temperature-air-temperature-humidity-12v-output-from-battery
[29] a. hills, n. hampson, the li-socl2 cell - a review, journal of power sources, vol. 24, no. 4, 1988, pp. 253-271. doi: 10.1016/0378-7753(88)80102-2
[30] e. s. takeuchi, s. m. meyer, c. f. holmes, assessment of capacity loss in low-rate lithium/bromine chloride in thionyl chloride cells by microcalorimetry and long-term discharge, journal of the electrochemical society, vol. 137, no. 6, 1990, pp. 1665-1670. doi: 10.1149/1.2086768
[31] saft ls 17500 lithium thionyl chloride primary cell. online: https://docs.rs-online.com/0d06/0900766b8017f0a9.pdf
[32] y. gong, x. zhang, h. li, h. liao, z. meng, y. liu, z. huang, estimation of peukert constant of lithium-ion batteries and its application in battery discharging time prediction, 2020 ieee energy conversion congress and exposition (ecce), detroit, mi, usa, 11-15 october 2020, pp. 905-910. doi: 10.1109/ecce44975.2020.9236241
[33] o. tremblay, l.-a. dessaint, experimental validation of a battery dynamic model for ev applications, world electric vehicle journal, vol. 3, no. 2, 2009, pp. 289-298. doi: 10.3390/wevj3020289
[34] g. dong, x. zhang, c. zhang, z. chen, a method for state of energy estimation of lithium-ion batteries based on neural network model, energy, vol. 90, oct. 2015, pp. 879-888. doi: 10.1016/j.energy.2015.07.120
[35] m. charkhgard, m. farrokhi, state-of-charge estimation for lithium-ion batteries using neural networks and ekf, ieee transactions on industrial electronics, vol. 57, no. 12, december 2010, pp. 4178-4187. doi: 10.1109/tie.2010.2043035
[36] l. kang, x. zhao, j. ma, a new neural network model for the state-of-charge estimation in the battery degradation process, applied energy, vol. 121, may 2014, pp. 20-27. doi: 10.1016/j.apenergy.2014.01.066

acta imeko issn: 2221-870x april 2016, volume 5, number 1, 15-21

multi-elemental composition of slovenian milk: analytical approach and geographical origin determination

doris potočnik1,3, marijan nečemer2, darja mazej1, radojko jaćimović1 and nives ogrinc1,3
1 department of environmental science, jožef stefan institute, jamova 39, 1000 ljubljana, slovenia
2 department of low and medium energy physics, jožef stefan institute, jamova 39, 1000 ljubljana, slovenia
3 jožef stefan international postgraduate school, jamova 39, 1000 ljubljana, slovenia

section: research paper
keywords: milk; edxrf; k0-inaa; icp-ms; lda; geographical origin
citation: doris potočnik, marijan nečemer, darja mazej, radojko jaćimović and nives ogrinc, multi-elemental composition of slovenian milk: analytical approach and geographical origin determination, acta imeko, vol. 5, no. 1, article 5, april 2016, identifier: imeko-acta-05 (2016)-01-05
section editor: claudia zoani, italian national agency for new technologies, energy and sustainable economic development, rome, italy
received september 6, 2015; in final form december 20, 2015; published april 2016
copyright: © 2016 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: project funding by iaea under contract no. 17897
corresponding author: nives ogrinc, e-mail: nives.ogrinc@ijs.si

abstract
the main objective in multi-elemental analysis in food has traditionally been, and still is, to ensure food quality and safety. three different methods were investigated in the present study to obtain the elemental content of milk samples: energy dispersive x-ray fluorescence spectrometry (edxrf), k0-instrumental neutron activation analysis (k0-inaa) and inductively coupled plasma mass spectrometry (icp-ms). the results indicate that the data obtained with the different methods are in good agreement with certified reference materials. precision was found to be satisfactory, with relative standard deviations always between 1 and 10 %, except for se, as, cd and pb: the concentrations of these elements were close to the detection limit and thus the precision was higher than 10 %. an intercomparison exercise between edxrf and k0-inaa showed satisfactory agreement, and only two samples exceeded the 95 % confidence interval for rb and zn, with lower data obtained by k0-inaa. it was found that edxrf was the cheapest, simplest and most environmentally friendly method for the analysis of the multi-elemental composition (p, s, cl, k, ca, zn, br, rb, sr) of milk samples, while for the determination of the mn, fe, cu and se content and the possible identification of pollutants such as as, cd and pb, icp-ms was the method of choice due to its excellent sensitivity and accuracy. these two methods were also used to determine the multi-elemental composition of slovenian raw cow milk from different geographical regions (alpine, mediterranean, dinaric and pannonian) in december 2013.
linear discriminant analysis (lda) was used to explore the multi-elemental data of the milk samples and obtain a classification according to geographical regions. regional discrimination was most successful taking into account ca, s, p, k and cl, with a prediction ability of 66.7 %.

1. introduction

milk is an important part of everyday nutrition, since it is a source of many key nutrients including proteins, energy, and many essential minerals and vitamins, factors that have all contributed to its increased consumption in recent years. milk contains more than 20 different micro and macro elements. among them are elements such as cu, co, se, zn, mn, mo, i and fe that are essential for human health in certain concentrations. these elements have a beneficial effect on animals and humans, since they are important for the normal functioning of metabolism, growth and development, and any concentration discrepancies of these elements may cause disturbances in organisms [1], [2]. the concentration of micro and macro elements in milk is not constant, but is influenced by the genetics of the individual animal species, the stage of lactation, farm management (nutritional regime) and environmental factors such as season, locality of the farm, nature of the soil, fertilizer application and industrial activities [3]. industrial and other anthropogenic activities could also influence the distribution and levels of elements in milk, mainly toxic elements such as cd, ni, co, pb, cr and hg, since they are accumulated through the food chain [1], [4], [5]. they can easily accumulate in plants grown on polluted soils and thus cause toxic effects in cattle and consequently also in humans who consume "contaminated" milk [4], [6]. for this reason, it is necessary to control the concentration of micro and macro elements/toxic elements in consumed food. further, multi-element analysis has been successfully used for the determination of the geographical origin of different foodstuffs such as wine [7], [8], onion [9]-[11], tea [12] and tomatoes [13]. in relation to milk and dairy products, most of the studies have been performed on cheese, for descriptive purposes [14] and for origin authentication purposes [15]. sacco et al. [16] used multi-elemental analysis in combination with stable isotope analysis for the differentiation between milk from southern italy and foreign milk. trace elements in milk and dairy products have been used to investigate and provide information about the correlation between animal fodder, time of year and elemental content. it was found that season has more influence on mineral concentration than region [15]. several different techniques are available for simultaneous multi-element chemical analysis, including energy dispersive x-ray fluorescence spectrometry (edxrf), neutron activation (k0-instrumental neutron activation analysis; k0-inaa), inductively coupled plasma atomic emission spectroscopy (icp-aes) and inductively coupled plasma mass spectrometry (icp-ms) [1]. each of these techniques has the capacity to provide a chemical profile or chemical fingerprint, which can be used to characterize the material in question and to compare it with other samples or reference materials. among inductively coupled plasma systems, icp-aes and icp-ms have been widely used for the analysis of minor and trace elements in food, since they provide satisfactory results and allow the simultaneous-sequential determination of multiple elements. the advantages of these techniques are the short time of analysis and the low limits of detection [4], [17]. on the other hand, edxrf, as a highly sensitive, fast and cheap technique, offers the possibility of performing direct multi-element analysis of solid samples over a wide dynamic range. this technique is especially useful in food authentication and traceability studies, where there is a requirement for a database of genuine samples to which a sample can be compared to establish its authenticity. for effective data processing, a large number of independently measured parameters is needed, including the multi-elemental composition [18]-[20]. another nondestructive multi-element technique is k0-inaa. the main disadvantages of this method are its long turn-around time, its labour intensity and the need for a nuclear reactor or another source of activating particles. nevertheless, providing a different approach, this method is often an important asset in studies regarding the analysis of standard reference materials [21]-[23] and as an additional technique in the evaluation of the accuracy of analytical results [24].
Thus, the main objectives of this paper are: (1) to verify methods for determining the elemental composition of milk samples, namely EDXRF, k0-INAA and ICP-MS; and (2) to differentiate milk according to geographical region in Slovenia on the basis of its elemental composition. Sample preparation and the analysis procedure for each of the analytical techniques are described, and a comparison of analytical and other parameters, such as uncertainty, accuracy, limits of detection (LOD), cost of analysis per sample and instrumental cost, is critically evaluated.

2. Materials and methods

2.1. Sampling
Samples of whole milk were provided by four Slovenian dairy producers: Ljubljanske mlekarne d.d., Pomurske mlekarne d.d., Mlekarna Planika d.o.o. and Mlekarna Celeia d.o.o., in December 2013, covering the different geographical regions of Slovenia (Mediterranean, Pannonian, Dinaric and Alpine). The samples were stored at −20 °C before analysis. Altogether, 40 samples were obtained for multi-elemental analysis.

2.2. Energy dispersive X-ray fluorescence spectrometry (EDXRF)
Multi-element determination of the macro- (P, S, Cl, K, Ca) and micro-element (Zn, Br, Rb, Sr) content was performed non-destructively by EDXRF on freeze-dried samples. Pellets were prepared from 0.5 to 1.0 g of powdered sample material using a pellet die and a hydraulic press. For excitation, disc radioisotope sources of Fe-55 (25 mCi) and Cd-109 (20 mCi) from Eckert & Ziegler were used. The emitted fluorescence radiation was measured by an energy dispersive X-ray spectrometer consisting of a Si(Li) detector (Canberra), a spectroscopy amplifier (Canberra M2024), an ADC (Canberra M8075) and a PC-based MCA (Canberra S-100). The spectrometer was equipped with a vacuum chamber (Fe-55) for the measurement of the light elements P–Cl; the energy resolution of the spectrometer was 175 eV at 5.9 keV. The analysis of the complex X-ray spectra was performed with the AXIL spectral analysis program [24], [25]. The evaluated uncertainty of this procedure must include the statistical uncertainty of the measured intensities and the uncertainty of the mathematical fitting procedure. Quantification was then performed with the QAES (Quantitative Analysis of Environmental Samples) software, developed in our laboratory [24], [25]. The estimated uncertainty of the analysis was around 5 % to 10 %; this rather high total estimated uncertainty is mainly due to the contributions of the matrix-correction and geometry-calibration procedures, which include the errors of the tabulated fundamental parameters, and to the contributions of spectrum acquisition and analysis. For EDXRF, the limits of detection, calculated from the signal-to-background ratio [26] on the realistic sample NIST 1549, were 400, 200, 100, 20, 15, 4, 2, 1 and 1 µg/g of dry milk sample for P, S, Cl, K, Ca, Zn, Br, Rb and Sr, respectively.

2.3. k0-INAA measurements
For the k0-INAA measurements, an aliquot of the freeze-dried sample (0.10 to 0.20 g) was weighed into a polyethylene ampoule (Spronk System, the Netherlands). For the determination of long-lived radionuclides, the sample was irradiated for 12 hours together with the Al–Au (0.1 %) standard. Irradiation was carried out in a TRIGA Mark II reactor. After irradiation the sample was kept in the polyethylene ampoule, and the gamma activity of the radionuclides induced in the sample was measured after cooling times of 4, 9–15 and 24–33 days.
All measurements were performed on HPGe detectors (40 % and 45 % yield). The gamma spectra were evaluated with the HyperLab 2002 program, and the element contents were calculated with the Kayzero for Windows program. The limits of detection for Br, Ca, K, Rb and Zn, calculated as three times the standard deviation of the blank sample, are 1, 500, 500, 5 and 5 µg/g of milk sample, respectively. The uncertainty level for elemental determination by k0-INAA was 6–10 %; for lower elemental contents, such as Se and Mn, the uncertainty is higher, around 20 %.

2.4. ICP-MS measurements
For the multi-elemental analysis of milk samples by ICP-MS, two sample preparation procedures were used:
a) Microwave digestion: about 1 mL of milk sample (0.15 g of lyophilized sample) was weighed into quartz tubes; 1 mL of 65 % HNO3 (Merck, Germany, Suprapur) and 1 mL of 30 % H2O2 (Merck, Germany, Suprapur) were added, and the samples were subjected to closed-vessel microwave digestion (microwave system Ethos 1, Milestone SN 130471) at a maximum power of 1500 W: ramp to 130 °C in 10 min, ramp to 200 °C in 10 min, hold 20 min, cooling 20 min. The samples were then equilibrated to room temperature; the solution was quantitatively transferred into 10 mL polyethylene graduated tubes and filled to the mark with Milli-Q water. Before determination by ICP-MS, the samples were diluted five times. External calibration was performed with the ICP multi-element standard solution XVI Certipur (Merck).
b) Dilution (adapted from Barany [27]): an aliquot of 1 mL of milk sample (0.15 g of lyophilized sample) was diluted five times with an alkaline (Merck, Suprapur) solution containing Triton X-100 (Sigma-Aldrich, SigmaUltra) and ethylenediaminetetraacetic acid disodium salt dihydrate (EDTA, Fisher Scientific, analytical reagent grade). For calibration, the standard addition procedure was used.
Measurements of the prepared solutions were made with an octopole reaction system inductively coupled plasma mass spectrometer (Agilent 7500ce) equipped with an ASX-510 autosampler (CETAC). Instrumental conditions: Babington nebulizer; Scott-type spray chamber at 5 °C; plasma gas flow rate 15 L/min; carrier gas flow rate 0.8 L/min; make-up gas flow rate 0.1 L/min; sample solution uptake flow rate 1 mL/min; RF power 1500 W; reaction cell gas helium at 4 mL/min; isotopes monitored 55Mn, 56Fe, 63Cu, 66Zn, 75As, 78Se, 111Cd, 114Cd, 206Pb, 207Pb, 208Pb; isotopes monitored as internal standards, added to all solutions, 45Sc, 69Ga, 89Y, 157Gd. The instrument was tuned daily using a solution containing Li, Mg, Y, Ce, Tl and Co. The concentrations of Cd, Pb and As in milk samples are usually very low (<0.5 ng/g), and the detection limits obtained with microwave digestion were therefore insufficient; the preparation based on simple dilution of the milk samples was introduced for this reason. The limits of detection for Cd, Pb, As, Se, Mn, Cu, Zn and Fe, calculated as three times the standard deviation of the blank sample, were 0.1, 0.2, 0.05, 2, 1, 6, 35 and 30 ng/g of milk sample, respectively. The estimated combined uncertainty of the results was 5 % for Cu, Zn and Fe, 10 % for Se and Mn, and 15 % for As, Cd and Pb.
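Both the k0-INAA and the ICP-MS detection limits above follow the same blank-based rule: three times the standard deviation of the blank sample. A minimal sketch of that calculation in Python, with hypothetical blank replicates, since the raw blank data are not reported in the paper:

```python
import numpy as np

# Hypothetical Cd blank replicates in ng/g (illustrative values only);
# the LOD rule used in the paper is three times the standard deviation
# of the blank sample.
blank_cd = np.array([0.031, 0.065, 0.048, 0.022, 0.054, 0.040])

def lod_from_blank(blank_replicates, k=3.0):
    """Limit of detection: k times the sample standard deviation of the blank."""
    return k * np.std(blank_replicates, ddof=1)

print(f"LOD(Cd) = {lod_from_blank(blank_cd):.2f} ng/g")  # about 0.05 ng/g
```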
2.5. Quality control
The accuracy of the results was checked as follows:
a) analysis of certified reference materials: whole milk powder NIST 8435 and non-fat milk powder NIST 1549 (both National Institute of Standards and Technology), and skim milk powder BCR 150, ERM-BD150 and ERM-BD151 (EC-JRC-IRMM);
b) comparison of independent methods: EDXRF and k0-INAA;
c) for As, no suitable reference material exists; comparison was therefore made with an independent method, radiochemical neutron activation analysis [28], [29]: the result for an internal quality-control milk sample was 0.28 ± 0.02 ng/g by k0-RNAA and 0.36 ± 0.05 ng/g by ICP-MS;
d) participation in the interlaboratory comparison scheme FAPAS (Food and Environment Research Agency, Sand Hutton, York, UK).

2.6. Statistical evaluation
Statistical calculations and multivariate analysis were carried out with the XLSTAT software package (Addinsoft, New York, USA). Basic statistics included mean values (median and arithmetic mean), standard deviation (s.d.), minimum and maximum. Multivariate analysis involved linear discriminant analysis (LDA).

3. Results and discussion

3.1. Results on certified reference materials
Results of the validation of the accuracy of the multi-elemental analysis performed by ICP-MS against certified reference materials (NIST 1549, NIST 8435 and BCR 150) are collected in Table 1, while the results of the validation against the certified reference materials ERM-BD 150 and ERM-BD 151 with all three methods are collected in Table 2. The data obtained with the different methods are in good agreement with the certified reference materials. Precision was also satisfactory, with relative standard deviations (RSDs) always between 1 and 10 %. Only in four cases (Se, As, Cd and Pb), for which the measured concentrations were very close to the detection limit, was the precision worse than 10 %.

Table 1. Validation of the accuracy of the proposed ICP-MS multi-elemental procedures (NIST 1549, NIST 8435 and BCR 150). Values in mg/kg dry weight.

| Element | NIST 1549 certified | NIST 1549 found | NIST 8435 certified | NIST 8435 found | BCR 150 certified | BCR 150 found |
|---|---|---|---|---|---|---|
| Mn | 0.26 ± 0.06 | 0.25 ± 0.03 | 0.17 ± 0.05 | 0.19 ± 0.02 | – | – |
| Fe | 1.78 ± 0.10 | 1.71 ± 0.09 | 1.80 ± 1.10 | 2.0 ± 0.1 | 11.8 ± 0.6 | 9.79 ± 0.48 |
| Cu | 0.70 ± 0.10 | 0.67 ± 0.03 | 0.46 ± 0.10 | 0.40 ± 0.02 | 2.23 ± 0.08 | 1.95 ± 0.1 |
| Zn | 46.1 ± 2.2 | 39.3 ± 2.0 | 28.0 ± 3.1 | 23.8 ± 1.2 | – | – |
| Se | 0.11 ± 0.1 | 0.11 ± 0.1 | 0.131 ± 0.14 | 0.128 ± 0.01 | – | – |
| Cd | 0.0005 ± 0.0002 | 0.0005 ± 0.0001 | – | – | 0.0218 ± 0.0014 | 0.0203 ± 0.0031 |
| Pb | 0.019 ± 0.003 | 0.023 ± 0.004 | – | – | 1.00 ± 0.04 | 0.84 ± 0.13 |

3.2. Comparison of elemental composition between the EDXRF and k0-INAA methods
A comparison of the results for milk samples obtained by EDXRF and k0-INAA is presented in Table 3. Only two samples exceeded the 95 % confidence interval, for Rb and Zn, with the lower values obtained by k0-INAA (bold values in Table 3). A possible reason for this discrepancy is inhomogeneity in the distribution of these two elements between the two aliquots of the same sample analysed by EDXRF and by k0-INAA.

3.3. Results of the interlaboratory comparison scheme FAPAS
We participated in the interlaboratory comparison scheme FAPAS in 2012, 2013 and 2014, for milk samples in powdered form with all analytes present at low natural levels. The results are collected in Table 4.
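Agreement checks of the kind used in Sections 3.1–3.3 can be made quantitative with a normalized deviation between the found and the certified or assigned value. A small sketch of such a check, using value pairs from Tables 1 and 4; this is a simple En-style statistic of my own choosing, not necessarily the criterion applied by the authors or the official FAPAS z-score:

```python
import math

def agreement(found, u_found, reference, u_reference):
    """Normalized deviation of a result from a reference value; results
    within roughly +/-1 agree at the quoted uncertainties."""
    return (found - reference) / math.sqrt(u_found**2 + u_reference**2)

# Zn in NIST 1549 (Table 1, mg/kg): clearly low.
print(agreement(39.3, 2.0, 46.1, 2.2))    # ~ -2.3
# As and Pb in FAPAS 07172, 2012 (Table 4, ug/kg): within tolerance.
print(agreement(45.2, 6.1, 56.4, 12.4))   # ~ -0.8
print(agreement(51.8, 3.3, 66.17, 14.6))  # ~ -1.0
```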
The results indicated good agreement between the measured and assigned values. With this independent assessment, the competence of our laboratory to analyse low levels of As, Cd and Pb was demonstrated.
A common characteristic of the three analytical techniques applied, EDXRF, k0-INAA and ICP-MS, is their multi-element capability. Sample preparation was simple and non-destructive for EDXRF and k0-INAA, whereas ICP-MS required decomposition of the samples. The most sensitive method applied in this study was clearly ICP-MS, with LODs of a few tens of ng/g. The sensitivities of EDXRF and k0-INAA are comparable to each other, judging by the estimated uncertainties (5–10 %) and the LODs for the analysed elements, which range from a hundred to a few µg/g. This means that the LODs of ICP-MS were approximately two orders of magnitude lower than those of EDXRF and k0-INAA. On the other hand, the determination of elements such as P, S and Cl by ICP-MS was impossible: the ions formed by the ICP discharge are typically positive (M+ or M2+), so elements that prefer to form negative ions (such as Cl, I and F) are very difficult or impossible to determine by ICP-MS. EDXRF, by contrast, enables the analysis of the very important macro-elements P, S and Cl in milk samples, while k0-INAA, as an alternative analytical tool, allows the determination of Cl only. The determination of P by k0-INAA was impossible because activation produces 32P, a pure negatron emitter; in the case of S, the poor activation of 37S (short half-life, t1/2 = 5.05 min) resulted in high LODs (3000–5000 µg/g). Since the S content found in milk was in the concentration range 2000–3000 µg/g, the determination of S by k0-INAA was omitted because of its insufficient sensitivity. The advantage of k0-INAA over EDXRF is the determination of the other important macronutrients Mg and Na; their determination by EDXRF was impossible owing to insufficient instrumental sensitivity at the concentrations (one to a few thousand µg/g) found in real milk samples. Another disadvantage of both the k0-INAA and EDXRF techniques is their inability to analyse toxic elements such as As, Pb and Cd at the concentrations found in milk (Table 2). Further, considering the cost per sample, the multi-element capability, the simple non-destructive sample preparation and the minimum number of steps in the measurement and quantification procedure, EDXRF was undoubtedly the cheapest, simplest and most environmentally friendly of the applied analytical techniques, and the most suitable for the analysis of the multi-elemental composition (P, S, Cl, K, Ca, Zn, Br, Rb, Sr) of milk samples. For the analysis of the elemental content of Mn, Fe, Cu and Se and the possible identification of pollutants such as As, Cd and Pb, however, ICP-MS was found to be the method of choice because of its excellent sensitivity and accuracy. These two methods were also used to determine the multi-elemental content of authentic Slovenian raw cow milk.

Table 2. Determination of element concentrations (mg/kg) in the reference materials ERM-BD 150 and ERM-BD 151 by ICP-MS, EDXRF and k0-INAA.
ERM-BD 150:
| Element | Certified value | Found by EDXRF | Found by k0-INAA | Found by ICP-MS |
|---|---|---|---|---|
| Ca | 13900 ± 800 | 13000 ± 1000 | 13487 ± 1013 | – |
| Mg | 1260 ± 100 | – | 1266 ± 117 | – |
| Mn | 0.289 ± 0.018 | – | 0.313 ± 0.101 | – |
| P | 11000 ± 600 | 9100 ± 900 | – | – |
| K | 17000 ± 700 | 16500 ± 1300 | 17665 ± 1263 | – |
| Se | 0.188 ± 0.014 | – | 0.201 ± 0.053 | 0.227 ± 0.022 |
| Na | 4180 ± 190 | – | 4397 ± 313 | – |
| Sr | – | 3.71 ± 0.3 | – | – |
| Zn | 44.8 ± 2.0 | 44.3 ± 3.5 | 46.7 ± 3.7 | 42.2 ± 2.1 |
| Cu | 1.08 ± 0.06 | – | – | 1.05 ± 0.05 |
| Cl | 9700 ± 2000 | 9600 ± 900 | 10143 ± 743 | – |
| Cd | 0.0114 ± 0.0029 | – | – | 0.0122 ± 0.002 |
| Pb | 0.019 ± 0.004 | – | – | 0.019 ± 0.003* |
| S | – | 3300 ± 300 | – | – |
| Br | – | 13.2 ± 1.0 | – | – |
| Rb | – | 17.7 ± 1.4 | – | – |

ERM-BD 151:
| Element | Certified value | Found by EDXRF | Found by k0-INAA | Found by ICP-MS |
|---|---|---|---|---|
| Ca | 13900 ± 700 | 13200 ± 1100 | 13309 ± 1018 | – |
| Mg | 1260 ± 70 | – | 1268 ± 112 | – |
| Mn | 0.29 ± 0.03 | – | 0.306 ± 0.072 | – |
| P | 11000 ± 600 | 9600 ± 900 | – | – |
| K | 17000 ± 800 | 16200 ± 1300 | 17447 ± 1246 | – |
| Se | 0.19 ± 0.04 | – | 0.188 ± 0.046 | 0.224 ± 0.022 |
| Na | 4190 ± 230 | – | 4348 ± 308 | – |
| Sr | – | 4.10 ± 0.3 | – | – |
| Zn | 44.9 ± 2.3 | 43.0 ± 3.4 | 46.1 ± 4.0 | 44.3 ± 3.2 |
| Cu | 5.00 ± 0.23 | – | – | 5.24 ± 0.33 |
| Cl | 9800 ± 1200 | 9000 ± 900 | 10077 ± 737 | – |
| Cd | 0.106 ± 0.013 | – | – | 0.106 ± 0.008 |
| Pb | 0.207 ± 0.014 | – | – | 0.206 ± 0.02 |
| S | – | 3300 ± 300 | – | – |
| Br | – | 13.6 ± 1.0 | – | – |
| Rb | – | 17.0 ± 1.4 | – | – |

3.4. Multi-elemental composition of Slovenian milk
The elemental content of Slovenian raw cow milk collected in December 2013 from the different geographical regions is presented in Table 5. The elemental composition follows the order K > Ca > P, Cl > S > Zn > Rb > Br > Fe > Sr > Cu > Mn > Se > Pb > As > Cd and is consistent with literature data. The content of elements such as Cd, Cu, Fe, Mn, Ni, Pb, Sr and Zn is influenced by feed and environmental conditions [30]. The concentrations of the toxic elements As, Cd and Pb are low and do not represent a threat to human health.

Table 5. Summary of the elemental data for Slovenian raw cow milk collected in December 2013. Macro-elements (Ca–Na) in mg/100 g; trace elements (Zn–Pb) in µg/100 g.

| Element | Mediterranean | Pannonian | Alpine | Dinaric |
|---|---|---|---|---|
| Ca | 110 ± 14 | 113 ± 11 | 113 ± 15 | 111 ± 14 |
| K | 128 ± 20 | 145 ± 14 | 146 ± 20 | 144 ± 18 |
| Cl | 74 ± 17 | 84 ± 9 | 89 ± 10 | 91 ± 13 |
| S | 24 ± 3 | 27 ± 4 | 26 ± 5 | 28 ± 5 |
| P | 62 ± 12 | 77 ± 10 | 78 ± 13 | 76 ± 12 |
| Na | 33 ± 4 | 35 ± 2 | 35 ± 3 | 33 ± 7 |
| Zn | 329 ± 57 | 392 ± 28 | 374 ± 34 | 368 ± 34 |
| Br | 147 ± 32 | 106 ± 14 | 139 ± 40 | 197 ± 71 |
| Rb | 190 ± 100 | 196 ± 67 | 210 ± 63 | 215 ± 40 |
| Sr | 27 ± 6 | 29 ± 5 | 30 ± 15 | 20 ± 7 |
| Fe | 33 ± 3 | 25 ± 6 | 28 ± 5 | 29 ± 5 |
| Ni | 4.5 ± 0.7 | 4.6 ± 1.2 | 4.5 ± 1.0 | 5.6 ± 1.1 |
| Mo | 7.4 ± 1.2 | 7.5 ± 2.5 | 8.7 ± 1.8 | 8.8 ± 2.2 |
| Mn | 3.9 ± 0.9 | 2.8 ± 1.1 | 2.4 ± 0.8 | 2.9 ± 0.8 |
| Cu | 7.0 ± 3.4 | 4.6 ± 1.2 | 4.4 ± 1.4 | 5.3 ± 1.5 |
| Se | 2.0 ± 0.2 | 1.6 ± 0.4 | 1.6 ± 0.5 | 2.0 ± 0.8 |
| As | 0.043 ± 0.004 | 0.042 ± 0.017 | 0.043 ± 0.015 | 0.057 ± 0.023 |
| Cd | 0.006 ± 0.004 | 0.005 ± 0.003 | 0.004 ± 0.002 | 0.007 ± 0.004 |
| Pb | 0.076 ± 0.051 | 0.055 ± 0.035 | 0.076 ± 0.057 | 0.064 ± 0.027 |

The multi-elemental composition of the milk was used for possible discrimination among the four Slovenian geographical regions. The results of the statistical evaluation by LDA are presented in Figure 1. Partial separation between the groups was obtained: the Alpine and Pannonian groups were well separated, while the Dinaric and Mediterranean groups overlapped. The latter two groups lie close to each other, so their separation was insufficient; their discrimination tendency is nevertheless promising. The most important factors for geographical origin were Ca, S, P, K and Cl. In a cross-validation test, 66.7 % of the samples were classified correctly, with the highest classification rates for the Alpine and Dinaric samples (78.6 % and 71.4 %, respectively). It is expected that a more precise and efficient separation of the four geographical regions could be obtained by including additional parameters, such as the stable isotope values of the milk samples.
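The LDA cross-validation reported above can be reproduced in outline with standard tools. The sketch below uses scikit-learn with placeholder data, since the per-sample element contents are not tabulated in full in the paper; the feature names follow the elements found most discriminating (Ca, S, P, K, Cl):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Placeholder data: 40 samples x 5 features (Ca, S, P, K, Cl).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = rng.integers(0, 4, size=40)  # 0..3: Alpine, Dinaric, Mediterranean, Pannonian

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)  # cross-validated prediction ability
print(f"Mean classification rate: {scores.mean():.1%}")
```

With real element contents in X, the mean score plays the role of the 66.7 % overall prediction ability quoted in the text.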
Table 3. Comparison between the k0-INAA and EDXRF methods for selected macro- and micro-elements in milk samples (mg/kg dry weight). Samples exceeding the 95 % confidence interval of the comparison are marked in bold.

| Sample no. | Br (k0-INAA) | Ca (k0-INAA) | K (k0-INAA) | Rb (k0-INAA) | Zn (k0-INAA) | Br (EDXRF) | Ca (EDXRF) | K (EDXRF) | Rb (EDXRF) | Zn (EDXRF) |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 14.0 | 7970 | 10969 | 12.1 | 28.8 | 13.9 | 8710 | 11200 | 13.2 | 30.4 |
| 2 | 11.4 | 8210 | 10595 | 22.1 | 18.9 | 11.7 | 9530 | 10600 | 23.3 | 21.5 |
| 3 | 9.04 | 7520 | 8706 | 6.85 | 23.3 | 8.87 | 7430 | 8210 | 7.98 | 25.2 |
| 4 | 7.71 | 8680 | 11800 | 12.6 | 31.7 | 7.85 | 9210 | 11600 | 14.0 | 33.3 |
| 5 | 7.77 | 8200 | 11110 | 10.3 | 29.2 | 8.38 | 8720 | 11600 | 8.85 | 30.9 |
| 6 | 8.32 | 8350 | 11300 | 12.9 | 28.5 | 6.97 | 8760 | 11200 | 14.2 | 29.9 |
| 7 | 8.10 | 7670 | 10390 | 11.9 | 27.3 | 8.38 | 8740 | 10600 | 12.5 | 28.7 |
| 8 | 7.59 | 8610 | 10880 | 11.5 | 26.7 | 8.27 | 10000 | 12100 | 12.9 | 29.8 |
| 9 | 8.34 | 8590 | 11140 | 16.6 | 26.6 | 7.74 | 9000 | 11300 | 19.3 | 29.6 |
| 10 | 11.4 | 8120 | 10980 | 9.33 | 29.2 | 11.4 | 8810 | 11300 | 8.3 | 30.6 |
| 11 | 8.22 | 8550 | 11332 | 14.0 | 29.0 | 9.74 | 8950 | 11900 | 15.0 | 31.8 |
| 12 | 14.6 | 8410 | 11170 | 12.5 | 28.6 | 13.2 | 9680 | 12600 | 11.9 | 29.6 |
| 13 | 9.12 | 8420 | 11230 | 20.3 | 27.7 | 7.93 | 9660 | 12400 | 20.3 | 29.8 |
| 14 | 16.0 | 6040 | 8595 | 11.1 | 23.0 | 15.9 | 7150 | 8610 | 11.4 | 23.0 |
| 15 | 10.9 | 7020 | 9898 | **22.3** | **23.6** | 14.3 | 6150 | 8070 | 28.1 | 30.9 |
| 16 | 14.1 | 9470 | 12251 | 14.1 | 29.8 | 14.0 | 9930 | 12600 | 13.3 | 30.1 |
| 17 | 16.6 | 9000 | 12635 | 22.8 | 31.2 | 17.4 | 9630 | 12900 | 23.6 | 29.6 |
| 18 | 9.44 | 6070 | 8113 | **9.72** | **20.7** | 12.5 | 6360 | 8320 | 15.4 | 26.0 |
| 19 | 13.7 | 7340 | 10140 | 15.7 | 25.6 | 13.8 | 8700 | 10700 | 16.5 | 26.1 |
| 20 | 27.5 | 7660 | 10332 | 14.8 | 23.2 | 28.2 | 8510 | 10900 | 14.9 | 26.1 |

Table 4. Results for the metallic contaminants in milk powder in the FAPAS interlaboratory schemes of 2012, 2013 and 2014. Values in µg/kg.

| Element | 2012 (FAPAS 07172) assigned | 2012 found | 2013 (FAPAS 07190) assigned | 2013 found | 2014 assigned | 2014 found |
|---|---|---|---|---|---|---|
| As | 56.4 ± 12.4 | 45.2 ± 6.1 | 49.1 ± 10.8 | 53.5 ± 6.0 | 58.1 ± 12.8 | 53.8 ± 1.6 |
| Cd | 18.59 ± 4.09 | 19.9 ± 5.8 | 16.2 ± 3.56 | 15.8 ± 1.0 | 12.2 ± 2.69 | 12.5 ± 0.8 |
| Pb | 66.17 ± 14.6 | 51.8 ± 3.3 | 50.8 ± 11.2 | 47.8 ± 3.4 | 47.5 ± 10.5 | 49.3 ± 3.0 |

Figure 1. Discrimination between geographical regions using the significant elemental parameters. Function 1 represents 72.43 % of the variability, function 2 represents 19.91 %. Labels: A = Alpine, D = Dinaric, M = Mediterranean, P = Pannonian.

4. Conclusions
There is a need for reliable data on element concentrations in milk, both to provide information about nutritional uptake and to reveal excess amounts of trace and toxic elements. In addition, the multi-elemental composition provides important information on the authenticity and geographical origin of food, including milk and dairy products. This paper provides some instructive comparisons between three different techniques (EDXRF, k0-INAA and ICP-MS) for determining the multi-elemental composition of milk samples. Quality assurance proved entirely satisfactory for all the measurements involved, and an intercomparison exercise between EDXRF and k0-INAA showed satisfactory agreement. The simple, fast and inexpensive EDXRF method, in combination with ICP-MS, which was found to be the most appropriate technique for the analysis of the elemental content of Mn, Fe, Cu and Se and of toxic elements such as As, Cd and Pb, was further used for the multi-elemental analysis of Slovenian raw cow milk samples. The multi-elemental content of the milk samples was then combined in a linear discriminant model to discriminate milk according to geographical origin; only partial separation was possible using the elemental content, with an overall prediction ability of 66.7 %.
It is expected that if the elemental data were used in conjunction with other characteristic chemical indices, such as isotope analysis, a more holistic and accurate picture of the separation between geographical regions could be created. These data form the basis of a database of authentic milk samples in Slovenia, which could be incorporated into a traceability system.

Acknowledgement
The research represents part of the ERA Chair ISO-FOOD project and was partially supported by the IAEA project under contract no. 17897, entitled "The use of stable isotopes and elemental composition for determination of authenticity and geographical origin of milk and dairy products", as part of CRP D5.20.38 "Accessible technologies for the verification of origin of dairy products as an example control system to enhance global trade and food safety".

References
[1] N. Khan, I. S. Jeong, I. M. Hwang, J. S. Kim, S. H. Choi, E. Y. Nho, J. Y. Choi, K. S. Park, K. S. Kim, Analysis of minor and trace elements in milk and yogurts by inductively coupled plasma-mass spectrometry (ICP-MS), Food Chem. 147 (2014) pp. 220–224.
[2] B. H. Schwendel, T. J. Wester, P. C. H. Morel, M. H. Tavendale, C. Deadman, N. M. Shadbolt, D. E. Otter, Invited review: Organic and conventionally produced milk – an evaluation of factors influencing milk composition, J. Dairy Sci. 98 (2015) pp. 721–746.
[3] C. Sola-Larrañaga, I. Navarro-Blasco, Chemometric analysis of minerals and trace elements in raw cow milk from the community of Navarra, Spain, Food Chem. 112 (2009) pp. 189–196.
[4] S. Birghila, S. Dobrinas, G. Stanciu, A. Soceanu, Determination of major and minor elements in milk through ICP-AES, Environ. Eng. Manag. J. 7 (2008) pp. 805–808.
[5] N. Bilandžić, M. Đokić, M. Sedak, B. Solomun, I. Varenina, Z. Knežević, M. Benić, Trace element levels in raw milk from
northern and southern regions of Croatia, Food Chem. 127 (2011) pp. 63–66.
[6] R. Pilarczyk, J. Wójcik, P. Czerniak, P. Sablik, B. Pilarczyk, A. Tomza-Marciniak, Concentrations of toxic heavy metals and trace elements in raw milk of Simmental and Holstein-Friesian cows from an organic farm, Environ. Monit. Assess. 185 (2013) pp. 8383–8392.
[7] V. F. Taylor, H. P. Longerich, J. D. Greenough, Multielement analysis of Canadian wines by inductively coupled plasma mass spectrometry (ICP-MS) and multivariate statistics, J. Agr. Food Chem. 51 (2003) pp. 856–860.
[8] P. P. Coetzee, F. E. Steffens, R. J. Eiselen, O. P. Augustyn, L. Balcaen, F. Vanhaecke, Multi-element analysis of South African wines by ICP-MS and their classification according to geographical origin, J. Agr. Food Chem. 53 (2005) pp. 5060–5066.
[9] K. Ariyama, Y. Aoyama, A. Mochizuki, Y. Homura, M. Kadokura, A. Yasui, Determination of the geographic origin of onions between three main production areas in Japan and other countries by mineral composition, J. Agr. Food Chem. 55 (2007) pp. 347–354.
[10] G. A. Chope, L. A. Terry, Use of canonical variate analysis to differentiate onion cultivars by mineral content as measured by ICP-AES, Food Chem. 115 (2009) pp. 1108–1113.
[11] E. Furia, A. Naccarato, G. Sindona, G. Stabile, A. Tagarelli, Multielement fingerprinting as a tool in origin authentication of PGI food products: Tropea red onion, J. Agr. Food Chem. 59 (2011) pp. 8450–8457.
[12] A. Moreda-Piñeiro, A. Fisher, S. J. Hill, The classification of tea according to region of origin using pattern recognition techniques and trace metal data, J. Food Compos. Anal. 16 (2003) pp. 195–211.
[13] G. Lo Feudo, A. Naccarato, G. Sindona, A. Tagarelli, Investigating the origin of tomatoes and triple concentrated tomato pastes through multielement determination by inductively coupled plasma mass spectrometry and statistical analysis, J. Agr. Food Chem. 58 (2010) pp. 3801–3807.
[14] L. Pillonel, R. Badertscher, P. Froidevaux, G. Haberhauer, S. Hölzl, P. Horn, A. Jakob, E. Pfammatter, U. Piantini, A. Rossmann, R. Tabacchi, J. O. Bosset, Stable isotope ratios, trace and radioactive elements in Emmental cheese of different origin, Food Sci. Technol. 26 (2003) pp. 615–623.
[15] M. T. Osorio, A. Koidis, P. Papademas, Major and trace elements in milk and halloumi cheese as markers for authentication of goat feeding regimes and geographical origin, Int. J. Dairy Technol. 67 (2015) pp. 1–10.
[16] D. Sacco, M. A. Brescia, A. Sgaramella, G. Casiello, A. Buccolieri, N. Ogrinc, A. Sacco, Discrimination between southern Italy and foreign milk samples using spectroscopic and analytical data, Food Chem. 114 (2009) pp. 1559–1563.
[17] J. Nobrega, Y. Gelinas, A. Krushevska, R. M. Barnes, Direct determination of major and trace elements in milk by inductively coupled plasma atomic emission and mass spectrometry, J. Anal. Atom. Spectrom. 12 (1997) pp. 1243–1246.
[18] S. Kelly, K. Heaton, J. Hoogewerff, Tracing the geographical origin of food: the application of multi-element and multi-isotope analysis, Trends Food Sci. Tech. 16 (2005) pp. 555–567.
[19] U. Kropf, M. Korošec, J. Bertoncelj, N. Ogrinc, M. Nečemer, P. Kump, T. Golob, Determination of the geographical origin of Slovenian black locust, lime and chestnut honey, Food Chem. 121 (2010) pp. 839–846.
[20] S. A. Drivelos, C. A. Georgiou, Multi-element and multi-isotope-ratio analysis to determine the geographical origin of foods in the European Union, TrAC Trends Anal. Chem. 40 (2012) pp. 38–51.
[21] R. Jaćimović, P. Makreski, V. Stibilj, T. Stafilov, Determination of major and trace elements in iron reference materials using k0-NAA, J.
Radioanal. Nucl. Chem. 278 (2008) pp. 795–799.
[22] M. A. Menezes, R. Jaćimović, k0-INAA quality assessment by analysis of soil reference material GBW07401 using the comparator and neutron flux monitor approaches, Appl. Radiat. Isotopes 69 (2011) pp. 1057–1063.
[23] R. Jaćimović, Comparison of relative INAA and k0-INAA using soil and sediment reference materials, J. Radioanal. Nucl. Chem. 300 (2014) pp. 663–672.
[24] M. Nečemer, P. Kump, J. Ščančar, R. Jaćimović, J. Simčič, P. Pelicon, M. Budnar, Z. Jeran, P. Pongrac, M. Regvar, K. Vogel-Mikuš, Application of X-ray fluorescence analytical techniques in phytoremediation and plant biology studies, Spectrochim. Acta B 63 (2008) pp. 1240–1244.
[25] M. Nečemer, P. Kump, K. Vogel-Mikuš, "Use of X-ray fluorescence-based analytical techniques in phytoremediation", in: Handbook of Phytoremediation (Environmental Science, Engineering and Technology), I. A. Golubev (ed.), New York, 2011, ISBN 978-1-61728-753-4, pp. 331–358.
[26] P. Kump, Some considerations on the definition of the limit of detection in X-ray fluorescence spectrometry, Spectrochim. Acta B 52 (1997) pp. 405–408.
[27] E. Barany, I. A. Bergdahl, A. Schütz, S. Skerfving, A. Oskarsson, Inductively coupled plasma mass spectrometry for direct multi-element analysis of diluted human blood and serum, J. Anal. Atom. Spectrom. 12 (1997) pp. 1005–1009.
[28] A. R. Byrne, Low level simultaneous determination of As and Sb in standard reference materials using radiochemical neutron activation analysis with isotopic 77As and 125Sb tracers, Fresenius Z. Anal. Chem. 326 (1987) pp. 733–735.
[29] A. R. Byrne, A. Vakselj, Rapid neutron activation analysis of arsenic in a wide range of samples by solvent extraction of the iodide, Croat. Chem. Acta 46 (1974) pp. 225–235.
[30] Y. J. Tu, X. Y. Han, Z. R. Xu, Y. Z. Wang, W. F. Li, Effect of cadmium in feed on organs and meat colour of growing pigs, Vet. Res. Commun. 31 (2007) pp. 621–630.

ACTA IMEKO
ISSN: 2221-870X
June 2015, Volume 4, Number 2, 18–22

Geometric part of uncertainties in the calculation constant of the primary four electrode conductivity cell

Alexander Mikhal 1, Zygmunt Warsza 2
1 Institute of Electrodynamics, National Academy of Science of Ukraine, av. Peremogy 56, 03680 Kiev, Ukraine
2 Industrial Research Institute of Automation and Measurements PIAP, al. Jerozolimskie 202, 02-486 Warszawa, Poland

Section: research paper
Keywords: conductivity; primary cell; geometric errors; standard deviation
Citation: Alexander Mikhal, Zygmunt Warsza, Geometric part of uncertainties in the calculation constant of the primary four electrode conductivity cell, Acta IMEKO, vol. 4, no. 2, article 4, June 2015, identifier: IMEKO-ACTA-04 (2015)-02-04
Editor: Paolo Carbone, University of Perugia, Italy
Received April 30, 2014; in final form July 13, 2014; published June 2015
Copyright: © 2015 IMEKO. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by the Presidium of the National Academy of Sciences of Ukraine.
Corresponding author: Alexander Mikhal, e-mail: a_mikhal@ukr.net

Abstract
The paper presents the construction of a primary four-electrode conductivity cell with calculated constant for the Ukrainian primary standard of electrolytic conductivity (EC). The equations for calculating the cell constant and the error budget for calculating the uncertainty are presented. The components of the budget are: errors due to the non-uniformity of the force lines of the electric field; errors due to the accuracy of the measurement standards and measuring instruments used to determine the length and diameter of the tube; and errors due to the manufacturing techniques of the tubes and their assembly. The article considers in detail the errors due to the non-ideal profile of the central part of the tube. Two methods to reduce the standard deviation are given: the method of linear interpolation, to compensate for the concave form that occurs along the axis of the tube, and the method of equivalent triangles, to compensate for deviations from a circle that occur across the axis of the tube.

1. Introduction
In recent years, the leading economically developed countries have established national standards of electrolytic conductivity (EC), based on the "absolute (direct)" method of reproduction of the physical quantity [1]. World practice knows many ways to implement this method, but its principle is almost always the same [2]-[4]: the "absolute (direct)" method is based on measuring the resistance of a liquid column and calculating the EC from the known length and cross-sectional area of the column. The basic components of the EC standard of Ukraine are a four-electrode cell with a calculated constant, a special conductivity AC bridge, a thermostat for temperature control at 25 °C and a precision digital temperature meter. The EC is determined according to the expression

κ25 = K·G·(1 + α·δt25)   (1)

where G is the conductance of the liquid column obtained as the measurement result of the conductivity bridge; K is the cell constant obtained as a calculation result based on the actual geometric dimensions of the cell; α is the temperature coefficient of conductivity of the potassium chloride solution; and δt25 is the temperature deviation from 25 °C obtained as a measurement result with the digital thermometer.
The uncertainty (here referred to as standard uncertainty of type B only) of the national standard is influenced by many factors. The most significant errors are those made in calculating the constant of the primary cell, and among them the components determined by the geometric parameters of the liquid column have the greatest influence. Let us consider these error components using the example of the four-electrode conductivity cell used in the EC standard of Ukraine [4]-[6].
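A minimal numerical sketch of Eq. (1) as reconstructed above; the placement of the temperature-correction factor follows that reading, and the parameter values are illustrative only, not taken from the paper:

```python
def ec_at_25c(g, k_cell, alpha, dt25):
    """Eq. (1) as reconstructed: kappa25 = K * G * (1 + alpha * dt25).
    g      -- measured conductance of the liquid column, S
    k_cell -- calculated cell constant, 1/m
    alpha  -- temperature coefficient of the KCl solution, 1/K
    dt25   -- measured temperature deviation from 25 degC, K
    """
    return k_cell * g * (1.0 + alpha * dt25)

# Illustrative values only (not from the paper):
print(ec_at_25c(g=5.0e-3, k_cell=100.0, alpha=0.02, dt25=-0.05))  # ~0.4995 S/m
```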
The basis of the conductivity cell is the sensor element, a sketch of which is shown in Figure 1. The sensor element is a tube with internal diameter d; the inside of the tube is filled with an electrolyte solution, typically a solution of potassium chloride. The tube fixes the geometry of the liquid conductor and consists of three parts: the central part 1 has length l, and the two side portions 2 have equal lengths l. The ends of the central portion of tube 1 are coated with circular potential electrodes 4, whose width corresponds to the tube wall thickness. Two discs 3 are fixed at the edges of the tube; the inner surface of the discs is coated with a metallic film 5, which acts as the current electrodes. Discs 3 have central holes 6 of diameter d, which serve for filling with the liquid. The inner disc surface has the form of a cone with angle α; this configuration facilitates the removal of air bubbles when the cell is filled with liquid. The tube and the discs are made of quartz glass, which has good insulating properties, temporal stability and a minimal coefficient of thermal expansion. Platinum is used as the electrode metal, as it has a minimal polarization effect for most electrolytes. The cell is connected at its four points A, B, C and D to the bridge, which measures the conductance of the liquid column.
The cell constant is determined by calculating the ratio of the length of tube 1 to its cross-sectional area. However, this definition is true for an idealized object of measurement with a uniform distribution of the current flow lines. Distortion of these lines arises from: the presence of the holes for solution filling; the form of the current electrodes and the presence of the potential electrodes; and the non-ideal profile of the inner quartz tube 1 (Figure 1). The calculation of the constant therefore shows errors.

2. Budget of uncertainty of the cell
The error budget for the constant calculation of the cell shown in Figure 1 can be written as [7]:

u_K = f [δ_st, δ_cal, δ_geom(δ_tec, δ_pe)]   (2)

where δ_st is the error due to the accuracy of the measurement standards and measuring instruments that determine the length and diameter of the tube; δ_cal is the error due to the deviation of the calculation model for the cell constant in real conditions from the idealized model; and δ_geom is the error in the assessment of the geometrical dimensions. We do not focus here on the method of processing the errors (the function f) used in calculating the uncertainty; this is regulated by international guidelines [8]. Each error in Eq. (2) has in turn a number of components. Minimizing the error δ_st is limited by the level of metrological assurance for measurements of tube length and diameter; it is defined by the metrological characteristics of the standards and instruments for length measurement. The error δ_cal has two origins and accordingly two components: an error due to the alternating-current measurement, and an error due to the discontinuity of the electric field in the cell because of its finite dimensions and design features. The geometric error δ_geom also has two components: δ_tec, the error due to the manufacturing technology of the tube sections and their assembly; and δ_pe, the error due to the presence of the potential electrodes. The latter component depends on the finite thickness of the potential electrodes and on the changing position of a singular point of the potential electrodes upon assembly; it is related to the calculation of the electric field inside the cell and will be considered in other papers.
In this paper we examine the error component δ_tec. This error is due to the deviation of the actual profile of the inner surface of tube 1 (Figure 1) from the ideal profile, which is a rectangle in longitudinal section and a circle in cross-section. It should be mentioned that the cost of producing a tube from a monolithic quartz crystal is extremely high; as a rule, tubes are manufactured from work pieces (preforms) which undergo precision machining.
If precision machining of the inner surface is too deep, the mechanical strength of the tube is significantly reduced: tubes less than 1 mm in thickness will crack (fracture) under elastic forces (adhesive polymerization, temperature differences). Grinding of the inner profile of the work piece should therefore be of minimal depth. On the other hand, the inner surface of the work piece can have wedge-like cracks parallel to the axis of the work piece; these cracks are due to the manufacturing technique of the work piece and depend on the quality of the nozzles through which the work piece is pulled. Because deep machining of the tubes is not possible, deviations from a circle can be observed in cross-section along the entire profile. A second cause of a non-ideal profile may be precession of the grinding tool. During processing, quality control of the tube is practically impossible, and after final grinding the tube profile may differ from the ideal rectangle.

3. Error due to manufacturing
To determine the actual profile of the tube, its diameter and length were measured according to the following algorithm. We performed p measurements of the tube length l in different directions, distributed uniformly around the circumference. The tube is divided, nominally uniformly along its length, into m sections. To define the diameter, n measurements are made in the cross-section of each part of the tube in different directions. As a result, we obtain n×m measurements of the tube diameter and p values of the tube length. The constant can be determined from the measurement results by averaging the values of the diameter d_av and the tube length l_av:

K = 4·l_av / (π·d_av²)   (3)

Figure 1. Construction of the primary conductivity cell.

With modern technologies for processing quartz glass it is much easier to manufacture a tube with a stable length than one with a stable internal diameter. From the experimental data we observe distortions of the internal profile of two types. The first is a deviation of the tube profile from a rectangle in longitudinal section, due to precession of the grinding tool. The second is a deviation of the tube profile from a circle in cross-section, due to the presence of wedge-like cracks on the inner surface of the work pieces. The geometric dimensions of the actually manufactured tube can be measured with much higher accuracy than the error of the profile: parameter measurements are performed with a device with an LSB of 0.1 μm, whose random components are negligible, while the actual tube profile deviates from the ideal rectangle by more than 10 LSB of this device. Therefore, the following expression can be taken as the metrological model for the calculation of the constant:

K = K₀·(1 + δ_K),  σ_K/K = √[(2·σ_d/d_av)² + (σ_l/l_av)²]   (4)

where K₀ is the true value of the constant, determined by the actual profile of the inner surface of the tube; δ_K is the systematic relative error of the constant calculation; and σ_K, σ_d and σ_l are the standard deviations of the mean of the cell constant, diameter and length of the tube, respectively. The manufacturing error δ_tec therefore has two components: a systematic one, δ_K, and a random one, σ_K. Hereinafter, by such a deviation we understand the standard deviation of the mean (SDM) of the diameter.
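A small sketch of Eqs. (3) and (4) as reconstructed above; the factor of two on the diameter term follows from the d² dependence in Eq. (3), and the tube dimensions below are hypothetical:

```python
import math

def cell_constant(l_av, d_av):
    """Idealized constant of Eq. (3): K = 4*l / (pi * d^2)."""
    return 4.0 * l_av / (math.pi * d_av**2)

def rel_sdm_constant(sd_d, d_av, sd_l, l_av):
    """Relative SDM of K per Eq. (4) as read here: the diameter enters
    Eq. (3) squared, so its relative contribution is doubled."""
    return math.sqrt((2.0 * sd_d / d_av)**2 + (sd_l / l_av)**2)

# Hypothetical tube: l = 100 mm, d = 10.16 mm, sdm(d) = 0.4 um, sdm(l) = 1 um.
print(cell_constant(100.0, 10.16))                    # in 1/mm
print(rel_sdm_constant(0.0004, 10.16, 0.001, 100.0))  # dimensionless
```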
The average diameter and the SDM of the diameter in each i-th section of the tube are determined from:

d_av = (1/m)·Σᵢ d_i,av = (1/(m·n))·Σᵢ Σⱼ d_ij   (5a)

σ_id = √[ Σⱼ (d_ij − d_i,av)² / (n·(n−1)) ]   (5b)

where d_i,av is the mean diameter in the i-th section of the tube and d_ij is the diameter in the j-th direction in the i-th section of the tube. The SDM value over the m sections (cuts) can be expressed as:

σ_d = √[ Σᵢ σ_id² / (m·(m−1)) ]   (5c)

3.1. Error in the longitudinal section
The measured mean diameters of the sections of one of the tubes, for n = 8 and m = 10, are shown in Figure 2. These data indicate that the average diameters in sections 1 and 6 differ by almost 20 μm. In general, the profile of the internal section of the tube can be expressed by an arbitrary function d(x). The measured average diameters along the length of the tube (Figure 2) show that this dependence has a clearly deterministic character. The discrete nature of the data allows us to use linear interpolation for the function d(x):

K = ∫₀ˡ 4·dx / (π·d(x)²) = Σᵢ ∫₀^{xᵢ} 4·dx / (π·(a·x + b)²)   (6)

where a and b are the linear interpolation coefficients and xᵢ is the length of the region between the i-th and (i+1)-th sections. The polynomial coefficients are expressed as:

a = (d_{i+1} − d_i) / xᵢ   (6a)
b = d_i   (6b)

After simple transformations, the expression for calculating the constant with the proposed correction takes the form:

K = (4/π)·Σᵢ xᵢ / (d_i,av · d_{i+1,av})   (7)

We could use Eqs. (3) and (5) instead, but then we get an increase in the standard deviation. Figure 3 shows, in one plot, the SDM of the diameter measurement results with and without the deterministic component removed by linear interpolation. As can be seen, the SDM of the diameter measurements, once the deterministic component is taken into account, is close to a single value and has a very small scatter of results compared with the case where the deterministic component is ignored and the average diameter is calculated according to Eq. (3). As a result of this correction, the variation of the SDM along the cross-section is reduced by 10 to 15 times. The SDM of the diameter with correction, calculated according to Eq. (5c), is less than 0.0004 mm; the same parameter without correction is 3 times larger. When calculating the random component of the error (Eq. 4), we obtain an error reduction by a factor of two. It should be noted that this correction method shifts the average value of the diameter: the constants calculated by Eqs. (3) and (7) for tube 1 in Figure 1 differ by 0.027 %. This is the amount by which the systematic relative errors δ_K in Eq. (4) differ when the cell constant is calculated with and without the correction.

Figure 2. Profile along the axis of the tube (mean diameter d_i,av versus section number m).
Figure 3. Influence of linear interpolation on the level of the SDM (σ_id).
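Eq. (7) reduces to Eq. (3) when the bore is perfectly cylindrical, which is a useful sanity check on the reconstruction above. A sketch with a hypothetical, slightly conical profile:

```python
import numpy as np

def k_linear_interp(x_regions, d_sections):
    """Cell constant with the linear-interpolation correction of Eq. (7):
    K = (4/pi) * sum_i x_i / (d_i * d_{i+1}), where x_i is the length of
    the region between sections i and i+1 and d_i the mean diameter
    measured in section i."""
    x = np.asarray(x_regions, float)
    d = np.asarray(d_sections, float)
    return 4.0 / np.pi * np.sum(x / (d[:-1] * d[1:]))

# Hypothetical 100 mm tube, 10 sections, bore drifting by 20 um (in mm).
d_sections = np.linspace(10.155, 10.175, 10)
x_regions = np.full(9, 100.0 / 9)
print(k_linear_interp(x_regions, d_sections))  # close to 4*100/(pi*10.165**2)
```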
3.2. Error in the cross-section
Let us consider another tube, for which the values of the mean radius in each of the sections are grouped along a virtually horizontal line; however, in each individual section the surface profile differs from a circle. An example of the deviation from the mean d_i,av in one of the sections, m = 3, is shown in Figure 4 (the circle represents 0.5·d_3,av = 4.569 mm). In all ten sections, m = 1 to 10, we observe triangular run-outs along the lines 3–11 and 6–14 (Figure 4). This character of the profile distortions allows a method of equivalent triangles to be used. The method involves assessing the effective area S_ief of the tube section and subsequently correcting the diameter. The algorithm is to replace the diameter values in directions 3 and 6 with the value of the mean diameter d_av, and then to use the standard formula for calculating the basic mean diameter d_bias and the basic area of the tube section S_bias:

S_bias = π·d_bias² / 4   (8)

Next, we calculate the correction value in each section separately, represented as the areas of triangles:

ΔSᵢ = cᵢ·hᵢ / 2   (9)

where cᵢ is the base of the assumed triangle in direction 3–11 or 6–14 (Figure 4) and hᵢ is the height of the assumed triangle in the same direction. We take the influence of the deterministic component into account by forming the effective area of each section:

S_ief = S_bias − Σ_{i=3,6} ΔSᵢ = S_bias − Σ_{i=3,6} cᵢ·hᵢ / 2   (10)

According to Eq. (11) we calculate the corrected diameter d_icor, which is put into Eqs. (2, 3) instead of d_i,av:

d_icor = √(4·S_ief / π)   (11)

The difference between the SDM values for the measured diameter d_i,av before correction and for the mean diameter d_icor after correction is shown in Figure 5: the SDM with correction of the deterministic component is 2.5 to 3 times smaller than without correction. The use of the algorithm of effective areas shifts the mean diameter value: the constants calculated by Eqs. (5) and (11) differ by 0.015 %. Just as in the previous case, this value represents the difference of the systematic errors (Eqs. 3, 4) of the cell constant calculation.

4. Conclusions
The above methods were used to calculate the corrections to the cell constant. For the primary EC standard of Ukraine, several cell designs were made; the primary cells are shown in Figure 6. The correctness, sufficiency and adequacy of all the selected models for correcting the cell constant is confirmed by the international comparisons P22, P47 and K36, in which the primary standard of Ukraine (laboratory UkrCSM) took part. Fourteen laboratories of the leading NMIs of the world participated in the international comparison CCQM-K36: USA (NIST), Germany (PTB), Israel (INPL), Slovakia (SMU), Denmark (DFM), and others. Samples of potassium chloride solutions with nominal electrolytic conductivities of 0.5 S/m and 5 mS/m were sent to all participants. Each laboratory, using Equation (1) and its own metrological facilities, establishes the exact EC value κ_lab and the uncertainty u(κ_lab) of the conductivity samples. The rules of a key comparison specify that a key comparison reference value must be derived, against which the participants' results are compared, and that a degree of equivalence of each laboratory must be inferred. In [6] the model of the reference value κ_ref was presented.

Figure 4. Real profile (blue) across the axis of the tube for section m = 3, and the circle 0.5·d_i,av = 4.569 mm (red).
Figure 5. SDM (σ_id) without correction (d_i,av) and with correction (d_icor).
Figure 6. The cells for the primary standard.
Figure 7. The results of the international comparison CCQM-K36.a [6].
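Referring back to Eqs. (8)–(11), a sketch of the equivalent-triangles correction; the bases and heights of the run-out triangles below are hypothetical:

```python
import math

def corrected_diameter(d_mean, triangles):
    """Equivalent-triangles correction, Eqs. (8)-(11): subtract the area
    of each wedge-like run-out (base c, height h) from the basic circular
    area, then convert the effective area back to a diameter."""
    s_bias = math.pi * d_mean**2 / 4.0                       # Eq. (8)
    s_ief = s_bias - sum(c * h / 2.0 for c, h in triangles)  # Eqs. (9)-(10)
    return math.sqrt(4.0 * s_ief / math.pi)                  # Eq. (11)

# Hypothetical run-outs along directions 3-11 and 6-14 (mm); the mean
# diameter matches the section with 0.5*d = 4.569 mm quoted in the text.
print(corrected_diameter(9.138, [(0.8, 0.02), (0.6, 0.015)]))
```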
The degree of equivalence is given as:

d = κ_lab − κ_ref, in S/m (mS/m)   (12)

The results for the degree of equivalence, with uncertainties, in the international comparison CCQM-K36 are shown in Figure 7. The proposed methods for correcting the cell constant (the method of linear interpolation and the method of effective areas) allowed us to solve two problems [7]. First, the value κ_lab obtained by UkrCSM practically coincides with the value κ_ref [6]. Second, by minimizing the random component of the error (Eqs. 4, 5c), a minimum value of the uncertainty u(κ_lab) was obtained.

Acknowledgement
The authors express their gratitude to Prof. M. N. Surdu, as supervisor of the research [4], [5], and to V. G. Gavrilkin, as scientific keeper of the primary standards [5], [6] and head of the Department of Metrological Provision of Physical and Chemical Quantitative Measurement (lab. UkrCSM, Kiev).

References
[1] Wu Y. C., A DC method for the absolute determination of conductivities of the primary standard KCl solution from 0 °C to 50 °C, J. Res. Natl. Inst. Stand. Technol. 99 (1994) no. 3, pp. 241–246.
[2] Máriássy M., Pratt K. W., Spitzer P., Major applications of electrochemical techniques at national metrology institutes, Metrologia 46 (2009) pp. 199–213.
[3] Shreiner R. H., Pratt K. W., Standard reference materials: primary standards and standard reference materials for electrolytic conductivity, NIST Special Publication 260-142, 2004 ed.
[4] Brinkmann F. et al., General paper: Primary methods for the measurement of electrolytic conductivity, Accred. Qual. Assur. 8 (2003) pp. 346–353, doi 10.1007/s00769-003-0645-5.
[5] Gavrilkin V. G. et al., State primary standard unit of electrical conductivity of liquids, Ukrainian Metrology Journal (2006) no. 3, pp. 47–51 (in Ukrainian).
[6] Jensen H. D., Final report of key comparison CCQM-K36, 15 August 2006, available online at http://kcdb.bipm.org/appendixb/appbresults/ccqm-k36/ccqm-k36_final_report.pdf
[7] Mikhal A. A., Warsza Z. L., Influence of geometric uncertainties on the accuracy of the calculated constant of the primary conductivity cell, 11th International Symposium on Measurement and Quality Control, IMEKO, Cracow-Kielce, Poland, 11–13 September 2013, p. 8.
[8] Guide to the Expression of Uncertainty in Measurement, ISBN 92-67-10188-9, 1st ed., International Organization for Standardization, Geneva, Switzerland, 1993.

ACTA IMEKO
December 2014, Volume 3, Number 4, 17–25
www.imeko.org

Thermal modeling and characterization for designing reliable power converters for LHC power supplies

Massimo Lazzaroni 1,2, Mauro Citterio 2, Stefano Latorre 2, Agostino Lanza 3, Paolo Cova 4,3, Nicola Delmonte 4,3, Francesco Giuliani 4
1 Dipartimento di Fisica, Università degli Studi di Milano, via Celoria 16, 20133 Milano, Italy
2 INFN Milano, via G. Celoria 16, 20133 Milano, Italy
3 INFN Pavia, via A. Bassi 6, 27100 Pavia, Italy
4 Dipartimento di Ingegneria dell'Informazione, University of Parma, viale G.P. Usberti 181/A, 43124 Parma, Italy

Section: research paper
Keywords: measurement; reliability; RAMS; hostile environment; LHC; ATLAS
Citation: Massimo Lazzaroni, Mauro Citterio, Stefano Latorre, Agostino Lanza, Paolo Cova, Nicola Delmonte, Francesco Giuliani, Thermal modeling and characterization for designing reliable power converters for LHC power supplies, Acta IMEKO, vol. 3, no.
4, article 5, December 2014, identifier: IMEKO-ACTA-03 (2014)-04-05
Editor: Paolo Carbone, University of Perugia
Received October 11th, 2013; in final form October 11th, 2013; published December 2014
Copyright: © 2014 IMEKO. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This work was supported by the APOLLO collaboration of INFN, Italy.
Corresponding author: Massimo Lazzaroni, e-mail: massimo.lazzaroni@unimi.it

Abstract
Power supplies for LHC experiments (ATLAS) require DC-DC power converters able to work in very hostile environments. The APOLLO collaboration, funded by the Italian Istituto Nazionale di Fisica Nucleare (INFN), aims to study dedicated topologies and to design, build and test demonstrators, developing the technology needed for the industrialization phase. Besides the presence of radiation and magnetic fields, thermal specifications are particularly stringent in the working environment. In order to achieve the required reliability and availability during the experimental activity, these power electronic circuits must be cooled by specifically designed water heat sinks, and an accurate thermal design is mandatory to guarantee safe and reliable operation. Moreover, an optimized thermal design allows a maintenance strategy in compliance with the requirements of the experiments. In this paper, thermal characterization is used to tune a coupled thermo-fluid-dynamic 3D numerical model of both the water heat sink and the whole system. Based on this model, an optimized water heat sink was designed and fabricated. Thermal characterization of the power converter demonstrator in different operating conditions shows good agreement with the simulation results.

1. Introduction
The Large Hadron Collider (LHC) is nowadays the world's largest and highest-energy operating particle accelerator, built by the European Organization for Nuclear Research (CERN). Founded in 1954, the CERN laboratory sits astride the Franco-Swiss border near Geneva [1]. The LHC can be used to verify the predictions of different theories of particle physics and high-energy physics, with particular attention to proving the existence of the Higgs boson, but also the large family of new particles predicted by supersymmetric theories. The collider extends the frontiers of particle physics thanks to its high energy and luminosity [2]. It is important to underline that high interaction rates, radiation doses, particle multiplicities and energies are present inside the collider. All this pushes the characteristics of the electronic instrumentation and measurement equipment to new standards that must be considered during the design phase. This is particularly true for the ATLAS detector (A Toroidal LHC ApparatuS) and for every device involved in the experiments [1], [2]. In fact, the electronic converters employed in the power supplies for the LHC ATLAS experiments operate in a hostile environment, with the simultaneous presence of radiation and a high static magnetic field, which imposes severe design constraints. Moreover,
the power supply has to be quasi-adiabatic, owing to the proximity of very sensitive detection circuitry, and this requirement implies water cooling and very stringent thermal constraints [3], [4]. Thermal management represents the key issue in the design of these converters, and accurate thermal modelling is mandatory to maximize the RAMS performance (Reliability, Availability, Maintainability and Safety) [5], [6]. In the present work, accurate thermal modelling of the water heat sink is addressed, in order to design an optimized water path for a specific converter which may be adopted in the next ATLAS power supplies.
The Phase-2 upgrade of the LHC is planned for the 2022 long shutdown. The detectors will be upgraded or substituted, and new ones will be installed to improve the performance and sustain the higher rates and backgrounds at the new design luminosity of 5·10³⁴ cm⁻²s⁻¹. This increase in radiation background will cause the accumulation of a total ionizing dose (TID) of up to 10 kGy in silicon, and fluences of up to 2·10¹³ protons/cm² and 8·10¹³ neutrons/cm². Both the new scenario and the increased power demand of the detectors lead to a re-design of the power distribution system. A description of the developed 1.5 kW DC-DC converter can be found in [7], and a detailed thermal model of it, with identification of the main heating components, is reported in [8]. The proposed power distribution system (Figure 1) and the implemented main converter (MC) (Figure 2) are briefly described in Section 2, while Section 3 gives an introduction to reliability theory; for further details see [7].
For the development of an ad-hoc water heat sink, a fluid-dynamic model of the water path has to be coupled with thermal models of both the liquid and solid domains of the heat sink. This is not an obvious task, since some parameters are known only with large uncertainty and the models have to be tuned experimentally. These aspects are discussed in Section 4, where the procedure followed for setting and tuning such models, on the basis of the thermal characterization of a known heat-sink prototype, is described. Moreover, the thermal behaviour of the whole converter mounted on the heat sink has to be simulated; owing to the complexity of the converter structure, which contains many heating components, an established simplifying procedure was followed to obtain an effective reduced thermal model of the converter [9]. On the basis of the numerical thermo-fluid-dynamic model, the specific water heat sink was designed and manufactured, as described in Section 5. Section 6 shows the good agreement between the thermal characterization of the whole system and the simulation results. A discussion of the simulation results and of the results obtained through an extensive measurement activity is given in Section 7. Finally, conclusions are drawn in Section 8.

2. The power distribution and the main converter
An isolated DC-DC MC has to supply an intermediate medium-voltage bus able to distribute the voltage to the electronics installed on the detectors. At the board level, point-of-load (POL) converters should be implemented on the front-end boards (FEBs) for precise voltage adaptation and regulation, as depicted in Figure 1 [10]. The MC is designed following a modular approach, in order to improve the overall reliability of the system and provide the requested redundancy, as described in Section 3. The transient-resonant topology
figure 1. the proposed power supply distribution for the atlas lar calorimeter [10].
figure 2. the implemented main converter.
figure 3. the adopted switch-in-line topology.
the transient-resonant topology adopted is based upon a switch-in-line converter (silc), whose basic schematic is drawn in figure 3. the typical features of this topology can be summarized as: possible multiple outputs, low switch voltage stress, soft-switching operation, and first-order dynamics; it also decreases the sensitivity of the power devices to ionizing radiation and limits both the overall power losses and the generated electromagnetic interference (emi), in compliance with lhc requirements [7]. as far as galvanic isolation is concerned, it is guaranteed by a dedicated, ad hoc designed planar transformer. the new planar transformer has overall reduced dimensions and minimizes the iron losses. the use of both a non-commercial magnetic material for the transformer core and auxiliary windings to face the external magnetic field has also been investigated. in particular, cores in kool mµ have been used [11]. in fact, kool mµ powder cores, made with a ferrous alloy powder, have low losses at elevated temperatures. magnetic kool mµ cores are made from an alloy powder of 85 % iron, 9 % silicon, and 6 % aluminium in order to obtain low losses at elevated frequencies. the typical advantages of kool mµ cores are: high saturation (1.05 t), lower core loss than powdered iron, moderate cost, low magnetostriction, very high curie temperature, stable performance with temperature, and a variety of available shapes. a very interesting feature of this material is that it is suitable for use in switching regulators, where the most important and critical parameter of the switching regulator inductor is the ability to provide inductance, or permeability, also under dc bias. figure 4 depicts the proposed planar transformer.
figure 4. the proposed planar transformer [7]. dimensions in mm.

3. an introduction on reliability
all the rams requirements are relevant to this case. reliability is defined as the ability of an item to perform a required function under given conditions for a given time interval; quantitatively, assuming that the item is capable of carrying out its required function at the beginning of the interval, reliability is the probability of failure-free performance over the specified timeframe, under specified conditions [5]. moreover, availability is the ability to perform a required function under given conditions at a given time instant or over a given time interval, assuming that the required external resources are provided [5]. maintainability is the ability of an item, under given conditions of use, to be retained in, or restored to, a state in which it can perform a required function, when maintenance is performed under given conditions and using stated procedures and resources [5]. finally, it is important to underline that also in this kind of application, safety (qualitatively defined as the absence of catastrophic consequences on the users and the environment in case of malfunction) is mandatory. it is very important, at this point, to highlight that for these devices maintenance must be correctly scheduled in order to be in compliance with the experimental activity: some types of maintenance can, in fact, be planned only during the shutdown periods. to this aim, the maximization of the parameters connected to reliability, availability and safety must be taken into account.
for example, some kinds of redundancy are mandatory, as described in the following. on-board diagnostic tools can, obviously, be very useful in these kinds of applications. in fact, only appropriate diagnostics allow addressing the problem of maintenance between two different policies. in the first policy, called corrective maintenance (cm), the maintenance is performed after a system failure. in the second one, called preventive maintenance (pm), maintenance is performed before a system failure [12], [13]. in order to follow this second policy, diagnostic tools are necessary, as aforementioned. not least, the thermal behaviour of the system has to be kept under control, in order to prevent component degradation. accurate thermal modelling may be very useful not only for thermal sizing, but also to identify the most critical components and find the most appropriate placement of thermal sensors for diagnostics [12]-[17]. as far as redundancy is concerned, the applied approach is based on the functional configuration k out of n. redundancy is particularly useful in experiments where very high dependability features are mandatory, in particular if high reliability, availability and safety of the equipment are requested. parallelism of items (devices or systems) in the reliability block diagram (rbd) does not automatically and necessarily mean parallelism in the hardware block diagram. in the used configuration the redundant elements are subject to a lower load (load sharing) until one of the elements fails, a functional configuration denoted as warm redundancy (wr). in fact, in the final configuration three 1.5 kw main converter modules work together in order to deliver a power of 3 kw. the load is shared and each module delivers 1 kw during safe operation. if a module fails, the remaining two modules share the full load (in fact, each module is able to operate up to 1.5 kw). this way of operating is known as the k out of n functional configuration, which in the present case results in a 2oo3 configuration. in this particular type of redundancy, operation is ensured if at least k (2) of the n (3) elements are functioning normally. figure 5 depicts the designed situation. many examples of such fault-tolerant systems can be found: an airplane with a multi-engine propulsion system, a cockpit with a multi-display system, etc.
figure 5. the functional configuration 2oo3 when: modules without failures (a), and with a module in fault state (b).
in order to evaluate the reliability of this configuration the binomial distribution is used. furthermore, we fix that the generic element of the system can assume only two conditions: correct functioning and failure. indicating with $R(t)$ the reliability of the generic element and with $R_s(t)$ the reliability of the system, the reliability of the system can be evaluated as [5]:

$$R_s(t) = \sum_{i=k}^{n} \binom{n}{i} R(t)^i \left(1 - R(t)\right)^{n-i} \quad (1)$$

where

$$\binom{n}{i} = \frac{n!}{i!\,(n-i)!} \,. \quad (2)$$

assuming a constant failure rate $\lambda$,

$$R_s(t) = \sum_{i=k}^{n} \binom{n}{i} e^{-i \lambda t} \left(1 - e^{-\lambda t}\right)^{n-i} \quad (3)$$

the mean time to failure of the system can be immediately calculated as

$$\mathrm{MTTF}_s = \int_0^{\infty} \sum_{i=k}^{n} \binom{n}{i} e^{-i \lambda t} \left(1 - e^{-\lambda t}\right)^{n-i} \mathrm{d}t \,. \quad (4)$$

from the previous equations, it is interesting to recall that for $k = 1$ this configuration coincides with the parallel configuration, while for $k = n$ it coincides with the series configuration.
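as an illustration of equations (3) and (4), a minimal numeric sketch for the 2oo3 case discussed above could look as follows (python; the failure rate value is an arbitrary placeholder, not a figure from the converter design):

from math import comb, exp

def reliability(t, lam, k=2, n=3):
    # k-out-of-n reliability of identical elements with constant
    # failure rate lam, equation (3)
    return sum(comb(n, i) * exp(-i * lam * t) * (1 - exp(-lam * t)) ** (n - i)
               for i in range(k, n + 1))

def mttf(lam, k=2, n=3, t_max=200.0, steps=20000):
    # equation (4), evaluated by trapezoidal integration of the
    # reliability curve
    dt = t_max / steps
    values = [reliability(j * dt, lam, k, n) for j in range(steps + 1)]
    return dt * (sum(values) - 0.5 * (values[0] + values[-1]))

lam = 0.1  # placeholder failure rate (1/time unit)
print(reliability(5.0, lam))         # 2oo3 system reliability at t = 5
print(mttf(lam), 5.0 / (6.0 * lam))  # numeric integral vs. closed form

for identical elements with constant failure rate, the integral in (4) reduces to the known closed form $\mathrm{MTTF}_s = \sum_{i=k}^{n} 1/(i\lambda)$, i.e. $5/(6\lambda)$ for the 2oo3 case, which the numeric integration reproduces.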
4. validation of the thermo-fluid-dynamic fe model
a known water heat sink, made of milled aluminium, was used as a reference for tuning the thermo-fluid-dynamic 3d numerical model. the thermal performance of this cooling device was characterized at various liquid flow rates and compared with simulation results in the same operating conditions.
4.1. experimental set-up
in the application, the converter must be thermally insulated from the surrounding electronic detection circuitry, operating in quasi-adiabatic conditions; hence almost all the heat dissipated by the electronics must be collected by the heat sink and extracted by the water. for this reason a test bench was built, in which all the surfaces of the heat sink, except the back side, used for measurements, were thermally insulated by a teflon/polystyrene box, as illustrated in figure 6 (a). in addition, figure 6 (b) shows a schematic view of the measurement setup.
figure 6. 3d model of the experimental set-up for numerical model tuning (a) and schematic of the test bench (b).
three power resistors provide heat input to the heat sink. power was regulated by a couple of tti cpx400a power supplies. measured parameters include the inlet and outlet cooling water temperatures and the heat sink surface thermal map, obtained by two dedicated thermocouples in the connection ducts and an infrared camera (flir a325), respectively. in addition, three thermocouples are inserted in the thermal insulation box to monitor the resistor temperatures and prevent overheating. all the thermocouples were k-type. tests were carried out by changing the inlet water flow rate and the electric power in the resistors. for each value of flow rate and heating power, data were collected after steady-state conditions were achieved. the steady-state conditions were assumed to be reached when the change of the maximum temperature reading was less than 0.2 °c within 25 min.
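as a side note, this steady-state criterion lends itself to an automated check; the following minimal sketch assumes a hypothetical log of (time, maximum temperature) readings, with only the 0.2 °c and 25 min thresholds taken from the text:

WINDOW_S = 25 * 60   # 25 min trailing window, as in the criterion above
THRESHOLD_C = 0.2    # allowed change of the maximum temperature reading

def is_steady(samples, window_s=WINDOW_S, threshold_c=THRESHOLD_C):
    # samples: time-ordered list of (time_s, max_temperature_c) readings
    if not samples:
        return False
    t_end = samples[-1][0]
    if samples[0][0] > t_end - window_s:
        return False  # the log does not yet span a full window
    window = [temp for t, temp in samples if t >= t_end - window_s]
    return max(window) - min(window) < threshold_c

# hypothetical log: one reading per minute, settling towards ~29.9 °c
log = [(60 * i, 29.9 - 2.0 * 0.95 ** i) for i in range(80)]
print(is_steady(log))  # True once the trailing window is flat enough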
4.2. numerical model tuning
figure 7 (a) shows the geometry of the reference model and the water velocity field inside it. we used and compared two commercial software packages: comsol multiphysics and ansys-fluent. the detailed flow field and heat transfer inside the heat sink were investigated by the computational fluid dynamics (cfd) method, which combines the governing equations for the fluid flow with the heat convection in the fluid and the heat transfer in the solid [18]. the relatively small inlet flow rate of the studied case (around 1 l/min) suggests laminar water flow. this preliminary hypothesis was made using the reynolds number and was confirmed by the simulation results. figure 7 (b) shows some of the parameters set in the model (due to the symmetry, only half of the structure was modelled). the uniform heat flux on the resistors' contact surface is equivalent to their total dissipated power (560 w). the lateral surfaces were set as adiabatic, while on the top surface natural horizontal air convection has been considered.
figure 7. geometry of the reference water heat sink (a) and fe model, with indication of boundary conditions (b).
figure 8 shows an example of simulation results (comsol) for a dissipated power of 560 w and a delivery of 1.3 l/min. maximum temperature was around 30 °c on the heat sink and 60 °c on the resistors.
figure 8. (a) steady-state simulated thermal map (comsol) for the reference heat sink (half structure) at the operating conditions of figure 7 (b); (b) ir temperature map; (c) measured and simulated (ansys-fluent) heat sink surface temperature along the black horizontal lines of figure 8 (b) (temperature [°c] versus position [mm]; series: ir measurement, ansys-fluent simulation).
the temperatures measured by the two thermocouples at the coolant inlet and outlet, and the temperature across the top surface estimated by infrared thermography, were used for model validation. the heat convection coefficient h at the air-exposed boundary was set as the fitting parameter. this parameter can be estimated by a rule of thumb, but it is very hard to evaluate its actual value because it depends on many other parameters and/or operating conditions; so, to obtain good results from the simulations, we tested different values to find the one that gives the best fit. here, h was set to around 7 w/(m^2 k), a typical value for natural air convection. by comparing the simulated thermal map of figure 8 (a) with figure 8 (b), which reports the corresponding ir measurement, a good agreement appears. this can be better appreciated in figure 8 (c), which shows the comparison between the temperatures along the symmetry line on the heat sink top surface. the matching is also good for the water temperatures at the inlet (both 18.8 °c) and outlet (measurement: 24.8 °c, ansys simulation: 25.1 °c).
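the tuning of h described above amounts to a one-parameter best fit; the sketch below shows the procedure as a simple sweep, where simulate() is a hypothetical stand-in for a cfd run (comsol or ansys-fluent in this work) and the measured profile is a placeholder, so only the procedure, not the numbers, reflects the paper:

import numpy as np

positions = np.linspace(0.0, 250.0, 26)  # mm, along the surface scan line
# placeholder stand-in for the measured ir temperature profile
t_measured = 24.0 + 3.0 * np.exp(-((positions - 125.0) / 80.0) ** 2)

def simulate(h_w_m2k):
    # hypothetical surrogate model: a higher air-side convection
    # coefficient slightly cools the simulated surface profile
    return 24.0 + (3.21 - 0.03 * h_w_m2k) * np.exp(-((positions - 125.0) / 80.0) ** 2)

candidates = np.arange(3.0, 12.1, 0.5)  # candidate h values, w/(m^2 k)
rms = [np.sqrt(np.mean((simulate(h) - t_measured) ** 2)) for h in candidates]
print(candidates[int(np.argmin(rms))])  # with these placeholders: 7.0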
5. optimized heat sink design
the developed model was exploited to design a specific water heat sink able to comply with the following constraints: inlet and outlet on the same short side; thickness 15 mm; water path diameter 5 mm; flow rate 0.63 l/min; maximum pressure drop 350 mbar; inlet water temperature t_inlet = 18 °c; maximum outlet water temperature t_outlet = 25 °c. the heat generation in the converter is not uniform but, as a first approximation, it can be divided into four main regions where heating takes place: the planar transformer region, the primary and secondary regions, and the auxiliary power supply region [6]. in order to easily evaluate the performance of different water paths, we simulated the heat sink alone, modelling the inward heat flux from the power electronics as four different uniform heat fluxes, as shown in figure 11 for a delivered power p_out = 1.5 kw. with these boundary conditions, different topologies were examined in order to enlarge as much as possible the heat-exchanging surface area between solid and fluid in the region where the input heat is higher [19]. the chosen design is illustrated in figure 10 (a). finally, figure 10 (b) shows the thermal map simulated with this structure at the surface in contact with the power converter delivering 1.5 kw.
figure 9. the 2-c heat sink.
figure 10. water velocity field in the designed heat sink (a) and thermal map simulated at the surface in contact with the power converter operating at p_out = 1.5 kw (b).
figure 11. boundary conditions on the heat sink surface in contact with the converter, in the case of p_out = 1.5 kw.

6. full converter modeling and characterization
the power converter was modelled by the simplified blocks method [8], [9], and the thermal behaviour of the assembly made by the converter mounted on the designed heat sink was simulated. the heat sink was fabricated by milling an aluminium plate, with an aluminium cover plate brazed onto it. the converter was mounted on the heat sink ensuring optimal thermal contact by a thin layer of thermal paste, and uniform pressure was guaranteed by screws. the whole assembly was tested at different operating conditions by a test bench able to fix and measure all the operational parameters. in particular:
a. differential inlet/outlet water temperature, by means of a thermometer based on k-type thermocouples (0.1 °c resolution) at the input and at the output of the heat sink. the minimum measured differential temperature was 1.7 °c and the maximum was 2.7 °c; the mean value during the experimental activity was about 2.2 °c. both the water temperatures measured during the tests and the differential inlet/outlet water temperature are in compliance with the constraints reported above: t_inlet = 18 °c; maximum t_outlet = 25 °c.
b. differential inlet/outlet water pressure, by means of two pressure sensors of type pse560-02 from smc installed near the input and the output of the heat sink. the pressure is measured using an interface of type pse200-ma4c from smc (±0.5 % f.s.). the differential inlet/outlet water pressure during the experimental activity was stable at 210 mbar. this value is in compliance with the aforementioned specification; in fact, a maximum pressure drop of 350 mbar is allowed.
c. water flux, by a digital water flux-meter of type pf3w704-f03-btn-m from smc. the nominal range of the adopted sensor is 0.5–4 l/min (±3 % f.s.). during the experimental activity the reading of the water flux was stable at a mean value of 0.6 l/min. also in this case, the measured value complies with the constraint concerning a water flow rate of 0.63 l/min, as aforementioned.
d. temperature map, by a flir a325 infrared thermocamera. all the reflecting surfaces were painted black, in order to obtain an almost homogeneous emission coefficient.
e. temperature of the most heating components, by means of k-type thermocouples and a pc-driven switching unit.
figure 12 shows, as an example, the comparison between simulation and measurement in the case of the converter delivering p_out = 1.2 kw. figure 13 depicts the experimental setup. the positions of the thermocouples are shown in figure 14, whereas table 1 describes the function of every thermocouple.
figure 12. thermal map of the power converter mounted on the designed heat sink at operating conditions: p_out = 1.2 kw, t_inlet = 18 °c, t_amb = 21 °c. (a) simulation; (b) ir measurement.
figure 13. the experimental setup.
figure 14. thermocouples position inside the mc.
table 1. thermocouples position.
thermocouple   position
1              toroidal inductor
2              mosfet drain flange at primary
3              planar transformer core (inner point)
4              planar transformer windings (inner point)
5              isotop diode flange for heat sink at secondary
6              ambient (not shown in figure 14)
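a minimal sketch of the acceptance checks applied to these measured values, against the constraints of section 5 (the 10 % tolerance on the nominal flow rate is an arbitrary illustrative choice, not a specification from the paper):

# measured operating values from the list above vs. design constraints
limits = {
    "outlet water temperature / °c": ("max", 25.0),
    "pressure drop / mbar":          ("max", 350.0),
    "water flow rate / (l/min)":     ("nominal", 0.63),
}
measured = {
    "outlet water temperature / °c": 18.0 + 2.2,  # inlet 18 °c + mean delta-t
    "pressure drop / mbar":          210.0,
    "water flow rate / (l/min)":     0.6,
}

for name, (kind, limit) in limits.items():
    value = measured[name]
    # 10 % tolerance on the nominal flow rate: arbitrary, for illustration
    ok = value <= limit if kind == "max" else abs(value - limit) / limit < 0.1
    print(f"{name}: {value} ({'ok' if ok else 'out of spec'})")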
7. discussion
complex systems operating inside the lhc must comply with very stringent rams requirements. one of the key issues is the thermal behaviour of the electronic converters in the power supplies. accurate thermal modelling, accounting for conduction, air convection and fluid-dynamic heat transport, leads to knowledge of the most critical components and conditions in the system, allowing the design of an optimized water heat sink. moreover, we asserted the importance of performing preventive diagnostics. this is true even at the design stage, at which time the identification of the most appropriate positions for temperature sensors able to detect component degradation at an early stage is mandatory. in this way, with the support of the developed thermal model, it is possible to identify and localize thermal and/or thermomechanical degradation modes, to take proper corrective actions, and even to substitute power modules which show component degradation when maintenance interruptions are scheduled. it is important to highlight that predictive diagnostics can also bring out errors or weaknesses of the design. for example, the planar transformer proved to be the weakest point only after the thermographic analysis and simulation of the complete main converter. in fact, in the design phase with electronic simulation tools (such as spice) it is not always possible to fully capture the aspects of thermal dissipation. finally, it is important to highlight that predictive diagnostics can therefore lead to design reviews with significant savings in the prototyping phase.

8. conclusions
after a brief description of an isolated dc-dc mc able to supply an intermediate "medium" voltage on a distribution bus for atlas (lar), a coupled thermo-fluid-dynamic 3d model of a water heat sink for power electronic converters has been developed and tuned by thermal characterization of a known prototype. the model was used for designing a suitable solution for a specific complex dc-dc converter for application in future lhc experiments, with very stringent thermal constraints. the heat sink was fabricated, and thermal measurements performed on the converter mounted on it show good agreement with thermo-fluid-dynamic simulations of the whole assembly. the proposed cold plate can be utilized with the mc in order to obtain the wanted delivered power. considerations about the dependability of the proposed mc have been presented. in particular, the reliability of the adopted solution has been discussed. finally, both diagnostic aspects and maintenance strategies in compliance with the experimental activity have been discussed.

acknowledgement
the authors would like to thank the other members of the apollo collaboration, funded by the italian istituto nazionale di fisica nucleare (infn), for their fruitful support [20].

references
[1] http://home.web.cern.ch.
[2] g. aad et al., the atlas experiment at the cern lhc, journ. of instrumentation 3 (2008) s08003.
[3] j. bohm, j. stastny, v. vacek, cooling performance test of the sct lv&hv power supply rack, atl-indet-pub-2006-004, nov. 2005.
[4] p. tenti, g. spiazzi, s. buso, m. riva, p. maranesi, f. belloni, p. cova, r. menozzi, n. delmonte, m. bernardoni, f. iannuzzo, g. busatto, a. porzio, f. velardi, a. lanza, m. citterio, c. meroni, power supply distribution system for calorimeters at the lhc beyond the nominal luminosity, journ. of instrumentation 6 (2011) p06006.
[5] m. lazzaroni, l. cristaldi, l. peretto, p. rinaldi, m. catelani, reliability engineering: basic concepts and applications in ict, springer-verlag, berlin heidelberg, 2011, isbn 978-3-642-20982-6, e-isbn 978-3-642-20983-3.
[6] iso 9000:2005, quality management systems - fundamentals and vocabulary.
[7] m. alderighi, m. citterio, m. riva, s. latorre, a. costabeber, a. paccagnella, f. sichirollo, g. spiazzi, m. stellini, p. tenti, p. cova, n. delmonte, a. lanza, m. bernardoni, r. menozzi, s. baccaro, f. iannuzzo, a. sanseverino, g. busatto, v. de luca, f. velardi, power converters for future lhc experiments, journ. of instrumentation 7 (2012) c06012.
[8] p. cova, n. delmonte, thermal modeling and design of power converters with tight thermal constraints, microelectronics reliability 52 (2012) pp. 2391-2396.
[9] p. cova, n. delmonte, r. menozzi, thermal characterization and modeling of power hybrid converters, microelectronics reliability 46 (2006) pp. 1760-1765.
[10] s. baccaro, g. busatto, m. citterio, p. cova, n. delmonte, f. iannuzzo, a. lanza, m. riva, a. sanseverino, g. spiazzi, reliability oriented design of power supplies for high energy physics applications, microelectronics reliability 52 (2012) pp. 2465-2470.
[11] http://mag-inc.com/products/powder-cores/kool-mu/learnmore-kool-mu.
[12] l. cristaldi, m. faifer, m. rossi, s. toscani, m. lazzaroni, "condition based maintenance through electrical signature analysis: a case study", i2mtc international instrumentation and measurement technology conference, may 3-6, 2010, austin, texas, usa, pp. 1169-1174.
[13] l. cristaldi, m. faifer, m. lazzaroni, s. toscani, an inverter-fed induction motor diagnostic tool based on time-domain current analysis, ieee trans. on instrumentation and measurement 58 (2009) pp. 1454-1461.
[14] l. cristaldi, m. lazzaroni, a. monti, f. ponci, f.e. zocchi, "a genetic algorithm for fault identification in electrical drives: a comparison with neuro-fuzzy computation", imtc instrumentation and measurement technology conference, may 18-20, 2004, como, italy, pp. 1454-1459.
[15] l. cristaldi, m. lazzaroni, a. monti, f. ponci, a neuro-fuzzy application for ac motor drives monitoring system, ieee trans. on instrumentation and measurement 53 (2004) pp. 1020-1027.
[16] a. azzini, l. cristaldi, m. lazzaroni, a. monti, f. ponci, a.g.b. tettamanzi, "incipient fault diagnosis in electrical drives by tuned neural networks", imtc instrumentation and measurement technology conference, april 24-27, 2006, sorrento, italy, pp. 1284-1289.
[17] l. cristaldi, m. faifer, m. lazzaroni, s. toscani, "a vi based tool for inverter fed induction motor diagnostic", imtc instrumentation and measurement technology conference, may 12-15, 2008, victoria, canada, pp. 1560-1565.
[18] j.h. ferziger, m. perić, computational methods for fluid dynamics, springer, 2002, isbn-10: 3540420746, isbn-13: 978-3540420743.
[19] p. cova, n. delmonte, f. giuliani, m. citterio, s. latorre, m. lazzaroni, a. lanza, thermal optimization of water heat sink for power converters with tight thermal constraints, microelectronics reliability 53 (2013) pp. 1760-1765.
[20] http://www2.pv.infn.it/~servel/apollo/index.html.
acta imeko
issn: 2221-870x
december 2016, volume 5, number 4, 88-99

comparison of the performance of a microwave-based and an nmr-based biomaterial moisture measurement instrument
petri österberg 1,2, martti heinonen 3, maija ojanen-saloranta 3, anssi mäkynen 1
1 university of oulu / cemis-oulu, kehrämöntie 7, fi-87400 kajaani, finland
2 kajaani university of applied sciences, kuntokatu 5, fi-87101 kajaani, finland
3 vtt technical research centre of finland ltd, centre for metrology mikes, tekniikantie 1, 02150 espoo, finland

section: research paper
keywords: moisture measurement; bioenergy; microwave; nmr; loss-on-drying
citation: petri österberg, martti heinonen, maija ojanen-saloranta, anssi mäkynen, comparison of the performance of a microwave-based and an nmr-based biomaterial moisture measurement instrument, acta imeko, vol. 5, no. 4, article 14, december 2016, identifier: imeko-acta-05 (2016)-04-14
section editor: paul regtien, measurement science consultancy, the netherlands
received may 31, 2016; in final form november 15, 2016; published december 2016
copyright: © 2016 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was funded by a research grant from the european metrology research programme (emrp-reg). the emrp is jointly funded by the emrp participating countries within euramet and the european union
corresponding author: petri österberg, e-mail: petri.osterberg@measurepolis.fi

abstract
this article compares the performance of an nmr-based and a microwave-based moisture measurement instrument designed for biomaterials. the conventional moisture measurement method, loss-on-drying, serves as a reference measurement for both instruments. six different biomaterials at three moisture content levels were measured with the microwave instrument and five biomaterials were measured with the nmr instrument. after instrument calibrations, the difference and variation of the measurement results for parallel samples and the repeatability of the measurement results obtained by the nmr and microwave instruments were estimated. reasonable agreement between the measurement methods was achieved.

1. introduction
the loss-on-drying (lod) method is conventional and is still the only standardized method of measuring the moisture content of a biomaterial load. to determine the moisture content of the load, a sample of at least 300 g is taken from the biomaterial load of a truck or trailer according to an appropriate sampling standard [1]. the sample is milled to reduce the maximum particle size to meet the requirements of a sample preparation standard [2]. finally, the moisture content of the biomaterial sample is determined according to the standardized lod method [3], that is, by weighing the sample and then drying it for several hours in an oven at a temperature of 105 ± 2 °c and performing a final weighing after the moisture of the sample has been fully removed (i.e. after the mass change of the sample is below the detection limit given in the standard). however, the drying time should not exceed 24 hours. after the final weighing, the moisture content of the sample is calculated as the ratio of the mass change to the mass of the original sample.
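written out, the lod moisture content defined above is the wet-basis moisture

$$ w = \frac{m_\mathrm{wet} - m_\mathrm{dry}}{m_\mathrm{wet}} \times 100\,\% $$

where $m_\mathrm{wet}$ is the mass of the original sample and $m_\mathrm{dry}$ the mass after oven drying.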
the lod procedure is too slow for efficient process control, but in several countries, if not all, it is still the only standardized moisture measurement method for determining the moisture content of biomaterial delivery lots and is therefore mostly used in invoicing. in a typical case at a power plant, the lod-based moisture content estimate for the biomaterial load is available about two days after delivery. to overcome the problem of the slowness of the lod method, novel rapid moisture measurement methods have challenged the conventional oven drying method. in this research, we studied two of these new methods, namely the nuclear magnetic resonance (nmr)-based moisture measurement solution developed by valmet ltd, called the mr moisture analyzer, and the microwave-based moisture measurement solution developed by senfit ltd, called the bma desktop. the calibration of the nmr instrument is very simple: it is done with a water container, first empty and then filled, whereas the microwave instrument calibration is more complicated: it must be calibrated separately for each biomaterial section with the reference lod method. however, the microwave instrument is faster than the nmr instrument: a single measurement takes only ten seconds, whereas an nmr measurement takes two minutes. still, both are very rapid in comparison with the conventional method: the lod-based moisture content estimate takes at least 16 hours. however, new research results are urgently needed to demonstrate the performance of novel instruments designed for delivering moisture data for biomaterial invoicing. this article is constructed using the research and data published and presented at two scientific conferences [4], [5]. the article describes a comparative study of two novel alternative instruments designed to mostly or fully replace the use of the conventional oven drying method. the benefits and drawbacks of the nmr and microwave instruments and their suitability for power plants are compared and discussed. during our research, swedish and canadian researchers published an article [6] in which they described the testing of the metso (now renamed valmet) mr moisture instrument with forest-based biomaterial samples with different moisture contents in both countries. in comparison with our measurements, the measurement difference from oven drying was smaller in fridh and eliasson's results in sweden but larger in volpé's results in canada, and the repeatability of the measurements was also worse in the research results obtained in both countries. in the article [6], the researchers used four to five moisture classes per material, whereas in this article three moisture classes but more sample materials and more samples per material and per moisture class were measured. one of the swedish authors of the previous publication, fridh, also published a working report on the nmr instrument for skogforsk at the same time [7]. in comparison with the reference oven drying, the difference and standard deviation given by fridh's results were slightly smaller than in this article. in addition to these, some publications on moisture measurement instrument prototypes using nmr technology are available [8], [9].
only a few publications about commercial biomaterial moisture measurement instruments based on microwave technology could be found during the research. two theses written in finnish described the performance of corresponding senfit bma desktop microwave instruments [10], [11], but they studied the performance with selected biomaterials in a restricted moisture range, namely within the range of moisture contents at delivery. the research described in this article is part of a european project, metefnet [12], which concentrates on the creation of unambiguous si (système international d'unités, i.e. the international system of units) traceability chains for measurements of moisture in solid materials. section 2 describes the solid biofuel sample materials, sampling, and sample preparation procedures used during the research. in section 3 we present the research instruments. instrument calibrations are described in section 4. section 5 describes how the moisture measurement process with all three measurement methods – the nmr instrument, the microwave instrument, and the reference lod method – is performed. in sections 6 and 7, measurement results are presented for the nmr instrument and the microwave instrument, respectively. the discussion in section 8 compares the performance of the measurement instruments. finally, we conclude the article and discuss future plans.

2. biomaterial samples and sample preparation
the biomaterial samples for the research were collected from two locations: from a nearby sawmill and from lorries transporting solid biofuel loads to a local power plant. the six different biofuel sections presented in figure 1 were chosen:
1. sawdust from a sawmill
2. bark waste from a sawmill
3. chipped pruning residues
4. chipped small-sized trees (mixed with sawdust)
5. crushed and chipped stumps
6. milled peat
however, milled peat samples were not measured with the nmr instrument but only with the microwave instrument. this was due to the possible ferromagnetic components in peat and is discussed in more detail at the end of section 3, 'research instruments'. two measurement sessions were arranged, but the microwave instrument was available only for the first session. the nmr instrument was available for both sessions. for the first measurement session in october 2014, bark waste and pruning residue samples were stored for several months in a pile before measurement, so that the original greenish colour of the bark samples changed fully to grey/brown and the green needles in the pruning residue samples dried and dropped away. for the second measurement session in may 2015, bark waste samples and pruning residue samples were fresh and collected only some hours after debarking of the logs and delimbing of the trees. for both measurement sessions, sawdust samples were fresh and stumps were stored for several months in a roadside storage area before processing and measurement. chipped small-sized trees were not delivered to the power plant during the second measurement session, and this sample material could be measured only during the first measurement session. about 50 l of each selected biofuel section was collected. each biofuel section was taken from a single location of the same truckload or storage pile to ensure the similarity of parallel samples for the research. this was the opposite of the normal sampling procedure, in which a biofuel sample is collected from several locations of the load or pile to ensure that the sample represents the whole load or pile.
all biofuel sections were measured at three different moisture levels. thus the 50 l section samples were divided into three parts. the first part was measured at the moisture content present at delivery. the second part was spread in a shallow empty pool for drying at room temperature for three to five days, depending on the initial moisture and the drying speed of each biomaterial. the third part of the biofuel sample was placed in a water tub, and additional water was added at the beginning to increase the moisture content of the sample. the amount of water added was determined according to the moisture content at delivery of each biofuel section. the sample was stored for two days in the water tub with a lid. after the two-day moistening period, the free water was removed from the moisturized biomaterial. altogether, there were 15 different biofuel sample sets for the nmr instrument research and 18 sets for the microwave instrument research, that is, five biofuel sections at three moisture-content levels for the nmr instrument and six biofuel sections at three moisture-content levels for the microwave instrument. samples were milled to meet the particle size requirements of the sampling standard for the oven drying method [2] (the particle size must be smaller than 31 mm × 31 mm × 31 mm for oven drying). before a measurement session, each biomaterial sample of ca. 15 kg at three moisture levels was mixed separately in a cement mixer to obtain homogeneous sample material. the homogenized sample material was divided into ~1 kg subsamples in plastic bags. the plastic bags were closed so that as little air as possible remained in them. before the moisture content measurement, the biomaterial subsamples were stored for one night at room temperature to obtain a constant temperature and moisture content of the biomaterial samples inside the plastic bags. the number of samples was from 6 to 20 pieces for each biomaterial and moisture level per instrument, depending on the biomaterial and moisture content (6 to 13 samples for the microwave instrument and 8 to 20 samples for the nmr instrument). altogether, the moisture contents of 398 biomaterial samples were measured in this study with the nmr moisture measurement instrument and 191 samples with the microwave one. corresponding lod reference measurements were also made for the samples (589 lod measurements). additionally, some of the samples were measured several times to test the repeatability of the nmr and microwave instruments. the volume of the biomass samples for the nmr instrument was always 0.8 l, but the sample weight varied between 161.4 and 492.1 g, whereas the sample weight for the microwave instrument was always 400 ± 1 g, but the sample volume varied according to the material and moisture content.

3. research instruments
four ovens were used as the reference moisture measurement instruments to dry out the biomaterial samples during the measurement session. all the ovens were made by termaks [13], and they included two ts4115 models and two newer models, ts8056 and ts8136. a single oven can simultaneously dry 6 to 18 biofuel samples of about 400 g, depending on the oven model and the measures of the sample containers. precisa ep 2220m and sartorius cp4202s scales were used for weighing the lod samples. two valmet mr moisture analyzers [14] were used as the nmr-based instrument in this study (see the left part of figure 2).
in the first measurement session, an earlier version of the same instrument was used; it was called the metso mr moisture analyzer prior to the company's demerger. the moisture content measurement range of the instrument is 0–90 %. the instrument measures biomaterial samples in the 0.8 l measurement container seen in the image on the right side of figure 2. the container is inserted in the measurement chamber, and the measurement of the moisture content of a sample takes two minutes. the measurement chamber is located inside a magnet, and thus samples having ferromagnetic components may cause problems during the measurement. for example, peat samples may contain ferromagnetic components, and thus the instrument's manufacturer does not recommend their use as measurement samples. a senfit bma desktop [15] was used as the microwave-based instrument in this study (see the left part of figure 3). the moisture content measurement range of the instrument is 0–70 %. the instrument was tuned for a 400 ± 5 g biomaterial sample placed in the plate-shaped measurement bowl seen on the right side of figure 3.
figure 1. solid biofuel samples from the upper left corner: sawdust, non-fresh bark waste, non-fresh pruning residue, chipped small-sized trees, crushed and chipped stumps, and milled peat.
the bowl is inserted in the measurement chamber, and the moisture measurement of the sample takes about ten seconds. the particle size of the sample should be no more than 31 mm × 31 mm × 31 mm; thus all the samples were milled with a senfit sample mill [16] to obtain the appropriate particle size.

4. instrument calibrations
before the start of the measurement session, the drying ovens were tested and calibrated against a calibrated thermometer to ensure that they met the requirements of the standard [3]. the temperature must stay between 103 and 107 °c during the drying time, that is, at least 16 hours. the maximum drying time is 24 hours according to the standard. the calibration of the nmr moisture measurement instrument is very simple. one 0.8 l sample container was filled with water, closed with a lid, and stored for a day in the measurement room so that the water temperature equalled the temperature of the samples stored in the measurement room. another sample container was left empty. every morning before the actual measurements, these two containers were measured to calibrate the nmr instrument. the calibration water in the container was not changed during the measurement session. the calibration procedure took about five minutes. careful calibration of the microwave moisture measurement instrument is essential to achieve reliable measurement results. the calibration of the microwave instrument is done with calibration samples whose moisture content has been determined using the lod method. the closer the properties of the calibration material are to the properties of the sample material, the better the measurement results that are achieved. the calibration of the microwave instrument should be supplier-specific for every biomaterial section and should be done prior to the biomaterial moisture measurements. although the measurement range of the microwave instrument is a moisture content of 0 to 70 %, the instrument must be calibrated for the moisture content ranges of 0–15 % and 15–70 % separately for all material sections. however, solid biofuels drier than 15 % are very rare at combustion plants.
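a minimal sketch of such a material-specific calibration is given below (python; the six (raw reading, lod reference) pairs are hypothetical placeholders, and the straight-line fit is assumed here only for illustration, since the instrument's internal calibration model is not described in this article):

import numpy as np

# hypothetical calibration pairs for one biomaterial section: raw
# instrument reading vs. lod reference moisture (two samples at
# three moisture levels, as in the procedure described above)
raw = np.array([21.0, 22.4, 43.5, 44.1, 60.2, 61.0])  # raw reading /%
lod = np.array([19.8, 21.5, 44.0, 44.9, 61.5, 62.3])  # lod reference /%

coeffs = np.polyfit(raw, lod, deg=1)  # least-squares calibration line
calibrate = np.poly1d(coeffs)

print(calibrate(50.0))                     # calibrated moisture for a new reading
print((lod - calibrate(raw)).std(ddof=1))  # spread of the calibration points /%mc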
the measurement range in this research was planned to be 15–70 %, and the microwave instrument was calibrated for this range. the calibration was carried out by using calibration samples taken from the same truckload and location as the measurement samples. two calibration samples were picked out from each sample material set and moisture level after milling (to meet the particle size requirements of the standard) and mixing the biomaterial. thus the calibration curve was derived from six calibration points for each biomaterial (two samples at three moisture levels). we also tested a calibration case that is weaker, but still possible in real life at power plants. according to the instructions given by the instrument manufacturer's personnel, two samples of each material were dried or moistened to four moisture levels, so that there were eight calibration points for each biomaterial section to create a calibration curve for the microwave instrument. unlike in the previous calibration, the calibration samples were collected one to three weeks before carrying out the measurement procedure on the research samples. thus in this case the calibration samples were taken at least from different loads, but likely from a different forest stand or roadside storage area too. for five out of the six materials, the driest calibration samples had moisture contents slightly below 15 % (the final moisture content was between 9 and 12 %), and thus the calibration curve was slightly distorted at the drier end. additionally, two of the six biomaterial suppliers (the suppliers of chipped small-sized trees and milled peat) changed between the collection of the calibration samples and the measurement session. these factors probably decreased the measurement accuracy considerably, in spite of the fact that the measurement samples were visually the same as the calibration samples.

5. moisture measurements
during the research, two moisture measurement sessions were arranged. the first measurement session was performed in autumn 2014 and the second in spring 2015. the nmr instrument was available for both sessions and the microwave instrument was available for the first measurement session. additionally, all the samples were measured with the lod reference method. the nmr instrument was changed for the second measurement session, but it was similar to the instrument used in the first measurement session. before the measurements, the biomaterial samples of ca. 1 kg were stored overnight in plastic bags at room temperature to obtain a constant temperature and moisture content inside the plastic bag.
figure 2. valmet mr moisture analyzer. left: nmr moisture measurement instrument; right: measurement container of the instrument with a sawdust sample.
the measurement procedure for a single sample started by shaking the plastic bag to mix the sample; the bag was then emptied onto a clean table. the sample was divided into two parts. the first part was a sample of 400 ± 1 g, which was weighed and placed in the measurement bowl of the microwave instrument seen on the right side of figure 3. the microwave moisture measurement takes about ten seconds, and the instrument saves the measurement data. the instrument measures the sample temperature using an ir sensor and tunes the microwave moisture measurement result based on the temperature of the sample. the second part of the sample was placed in the 0.8 l nmr instrument sample container seen on the right-hand side of figure 2.
the nmr instrument weighs the sample, measures the moisture content of the sample in two minutes, shows the result on the screen, and saves the measurement data. after the microwave or nmr moisture measurement, the biomaterial sample was placed in a metal container, weighed, and inserted in the oven for the reference moisture measurement with the lod method. the drying time was chosen as 23–24 hours, ensuring at least 16 hours of heating at 105 °c to meet the requirements of the standard [3]. due to the long drying time and the large number of samples, several ovens were used to achieve efficient operation. after the drying period, the sample was weighed again. the obtained mass loss represents the moisture vaporized from the sample. finally, the moisture content estimate for the sample was calculated by dividing the mass loss by the initial sample mass. the repeatability of the microwave and nmr instruments' results was tested by repeating the measurement of the same sample five times. the moisture contents of 18 (with the microwave instrument) and 15 (with the nmr instrument) different biomaterial samples (six and five biomaterial sections, respectively, at three moisture levels) were measured five times each. the sample was removed and placed back in the measurement chamber of the instrument between separate measurements. the time gap between consecutive nmr measurements was at least five minutes to avoid warming of the samples inside the nmr instrument. the time gap between consecutive microwave moisture measurements was shorter, because sample warming is negligible in the microwave instrument. the operating power of the microwave moisture measurement instrument is about 10 mw and thus very small in comparison with, for example, a microwave oven. the sample container of the microwave instrument was turned by about 30° between consecutive measurements. the position of the sample container inside the nmr instrument was random. altogether, 923 separate measurements were made during this research. the measurements include the nmr and microwave instrument performance measurements, the repeatability measurements, and the corresponding lod reference measurements.

6. measurement results of the nmr instrument
6.1. common nmr instrument results for all biomaterial samples
as stated in the introduction, most of the measurement results and the data published in section 6 were included in the conference proceedings [5]. the comparative analysis of the two instruments based on the previously published data is the additional information provided in this article. the representation of the research results has also been improved. for the five chosen biomaterials, the average difference between all the moisture measurement results determined with the nmr instrument and the oven drying method is 1.0 ± 3.8 %mc (percentage points of moisture content) when the uncertainty is given at the 95 % confidence level (k = 2). thus the nmr instrument overestimated the moisture content of the samples by about 1 %mc in comparison with oven drying. the moisture range for the five different forest-based biomaterials varied from 14.3 to 68.6 %. on average, the standard deviation of the moisture measurements for parallel samples was 0.9 %mc for the nmr measurement instrument and 0.4 %mc for the oven drying method, when considering all biomaterials at three moisture levels. thus, the variation of the measurement results for parallel samples of a single material is smaller with the lod method.
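the reported figures of the form mean ± 2 × standard deviation can be reproduced from the paired readings as in the following sketch (python; the paired values are placeholders, not the study data):

import numpy as np

nmr = np.array([45.2, 52.1, 30.4, 61.8, 27.9])  # instrument readings /%mc
lod = np.array([44.0, 51.5, 29.1, 60.2, 27.5])  # lod reference readings /%mc

diff = nmr - lod
mean_diff = diff.mean()
u95 = 2 * diff.std(ddof=1)  # expanded uncertainty at the ~95 % level (k = 2)
print(f"{mean_diff:+.1f} ± {u95:.1f} %mc")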
the repeatability tests for the nmr instrument showed that the standard deviation for the measurement repeated five times on single samples taken from all five sample materials at three moisture levels was 0.5 %mc on average. the repeated measurement results seemed to deviate randomly. the material-specific and moisture-level-specific repeatability tests are described in section 6.3. obviously, the five repeated measurements on a single sample cannot be done with the oven drying method, because the sample changes (dries) during the first measurement.
figure 3. left: senfit bma desktop, the microwave moisture measurement instrument; right: plate-shaped measurement bowl of the instrument with a crushed and milled stump sample divided into three parts.
6.2. biomaterial-specific measurement results for the nmr instrument at three moisture levels
this section summarizes the biomaterial-specific and moisture-level-specific nmr moisture measurement results for all the biomaterials compared with the reference lod measurements. figure 4 shows the performance of the nmr measurement instrument for sawdust, bark waste, and pruning residue samples in comparison with the lod method by showing all the measurements with red crosses (the first nmr instrument and measurement session) and blue circles (the second nmr instrument and measurement session). the x-axis represents the difference in moisture measurements between the nmr instrument and the reference lod method, while the y-axis represents the moisture content of the sample measured with the reference lod method. sawdust is typically a quite homogeneous material, and its uncertainty was the second smallest for the nmr instrument after the measurements of crushed and milled stump samples (the nmr – lod differences of the stump samples can be seen on the right-hand side of figure 5). here, the mean difference in moisture content between the oven drying method and the nmr instrument was 1.6 ± 2.1 %mc for sawdust when a 95 % confidence level was applied (k = 2). in this case, the measurement results obtained with the second instrument during the spring 2015 measurement session were very good and were closer to the reference lod results than the measurement results achieved in the first measurement session with the first instrument. the middle image of figure 4 shows the performance results of the nmr instrument with bark waste samples. for the first measurement session, chipped bark samples were collected from the unloading and feeding station of the combustion process of a power plant. this means that the bark material was stored in a pile for some weeks before the measurement, so that the greenish colour of the sample material vanished and the solid biofuel became acceptable for combustion at the power plant. for the second measurement session, bark samples were fresh and greenish and were collected some hours after debarking of the logs. despite that, the results of both measurement sessions are quite similar, as seen in the middle image of figure 4. for the chipped bark samples, the deviation of the nmr instrument results from the reference lod method results was 3.0 ± 2.4 %mc on average when a confidence level of 95 % was applied. thus bark waste was the biofuel whose moisture content was most overestimated by the nmr instrument. the right-hand image in figure 4 shows the performance results of the nmr instrument with pruning residue samples.
for the pruning residue samples, the mean deviation between all the nmr moisture measurement readings and the reference lod method readings was –0.9 ± 4.1 %mc when a 95 % confidence level was applied. thus pruning residue was the only biofuel whose moisture content was underestimated by the nmr instrument. however, when observing the right-hand image of figure 4, it seems that the nmr instrument underestimates the moisture content mostly with dry samples. pruning residue is a very inhomogeneous material, and that may be the reason why the standard deviation and the uncertainty, respectively, were quite large and nearly doubled in comparison with the other four measured forest-based biomasses. in particular, the deviation in the moisture content of the moistened pruning residue samples measured in the first measurement session was large on the right side of figure 4. the largest single difference between the nmr instrument and the reference lod method measurement results is no less than ~6 %mc in the right-hand image of figure 4.
figure 4. the differences of all measurement pairs between the nmr measurement and the reference lod method are presented for sawdust (left-hand image), bark waste (middle image), and pruning residue (right-hand image). the x-axis represents the nmr – lod difference in moisture content while the y-axis represents the moisture content of the sample according to the reference measurement. if the circle or cross falls on the dashed line, no deviation between the nmr and reference instrument measurements exists. the corresponding images of the remaining biomaterials are presented in figure 5.
typically, the reason for such outliers was incorrect weighing (either manual weighing during the lod method or automatic nmr weighing), but the reason for this outlier could not be found, and therefore it is included in the analysis. for the first measurement session, the pruning residue sample material was collected from a truckload at the unloading station of the power plant, and before that the samples were stored for several months in a pile in a forest roadside storage area to get rid of the needles, in order to increase the fuel energy content and decrease the harmful corrosive components of the solid biofuel. most needles dried and dropped away during the storage period and transportation. for the second measurement session, the sample material was collected from the forest stand during logging, and thus fresh and green pruning residue sample material with needles was used in this measurement session. however, a considerable difference in the measurement results was not observed. in both cases the moisture content was underestimated with dry samples and overestimated with wet samples in comparison with the lod measurement, and this explains the higher standard deviation. the freshness of the pruning residue samples of the first and the second measurement sessions can also be noticed in the right-hand image in figure 4: the moisture content at delivery is about 27 % in the first measurement session (i.e. the middle one of the three clusters of measurement results marked with red crosses), while the moisture content at delivery in the second measurement session is about 50 % (i.e. the middle one of the three clusters of measurement results marked with blue circles).
the left-hand image in figure 5 shows the deviation between the nmr instrument and the reference lod method measurement pairs for chipped small-sized trees. the sample material of chipped small-sized trees was available only for the first measurement session, and the deviation of the nmr measurement results in comparison with the reference lod method was 0.7 ± 2.6 %mc on average when a 95 % confidence level was applied. during the collection of the chipped small-sized tree material samples, we noticed that the chosen biomaterial was not pure but was mixed with sawdust, which may affect the results in comparison with pure material. the measurement results here, indeed, were quite close to the sawdust results, although the material seemed totally different when observing the left-hand images in the upper and lower corners of figure 1. the crushed and chipped stump material contained more soil than the other four solid biofuel sample materials, and thus weaker nmr measurement results were expected. however, both the difference in comparison with the reference lod method and the 2 × sigma based measurement error were the smallest among the five tested biomaterials, being 0.1 ± 1.8 %mc. the right-hand image in figure 5 shows that the measurement differences from the reference lod method results and their variation are small, and the results are concentrated quite close to the zero line.
figure 5. the differences of all measurement pairs between the nmr measurement and the reference lod method are presented for chipped small-sized trees in the left-hand image and for crushed and chipped stumps in the right-hand image. the x-axis represents the nmr – lod difference in moisture content and the y-axis represents the moisture content of the sample according to the reference measurement. if the circle or cross falls on the dashed line, no deviation between the nmr and the reference instrument measurements exists.
biomaterials are typically very inhomogeneous, and the variation of the moisture measurement readings between parallel samples was notable even for sawdust. the standard deviation of the moisture readings of parallel samples obtained by the oven drying method varied from 0.0 %mc to the poorest case of 1.4 %mc, depending on the sample material and moisture level. the mean of these standard deviations was 0.4 %mc. correspondingly, the standard deviation of the moisture readings of parallel samples obtained by the nmr moisture measurement instrument varied from 0.4 %mc to the poorest case of 2.1 %mc, depending on the sample material and the moisture level. the mean of the standard deviations of the nmr measurements was 0.9 %mc. a clear dependence between the moisture content of the solid biomaterials and the standard deviation of the parallel measurements could not be observed, even though the dependence between the moisture content of the biomaterial and the nmr instrument repeatability for the very same sample is obvious, as described later in section 6.3. four of the 388 measurement pairs were omitted from the analyses due to incorrect sample weighing by the nmr instrument. the scale of the nmr instrument showed the masses of those four samples to be tens of grams different from the real masses. in addition to the weighing by the nmr instrument, the sample must be weighed manually before oven drying, and the result must be quite close to the nmr weighing result.
two of the 388 measurement pairs were omitted due to incorrect handling of samples during the lod measurements. the milled peat sample material was also available during the measurements, but it was measured only with the microwave instrument. the nmr instrument manufacturer does not recommend the use of peat samples because the measurement relies on a homogeneous magnetic field: some peat materials from certain geological areas may have ferromagnetic components, which may distort the magnetic field inside the nmr instrument and further distort the moisture measurement readings.

6.3. repeatability measurements of the nmr-based moisture measurement instrument

table 1 summarizes the material-specific and moisture-level-specific repeatability tests of the nmr instrument for single samples. one sample of each sample class was chosen, the nmr moisture measurement was repeated five times on the sample, and the average moisture and standard deviation of the five measurements were calculated. table 1 shows that the standard deviation of the repeated measurement results varied from 0.2 to 1.1 %mc, depending on the biomaterial. the average of all standard deviations of single samples for the different materials and moisture levels was 0.5 %mc, and the higher standard deviation values were obtained with drier samples. in section 6.1, we showed that the standard deviation with parallel samples (not exactly the same sample) was 0.9 %mc on average; thus roughly half of the variation of the nmr instrument results can be explained by the variation of the biomaterial and the other half by the instrument properties. the repeatability test values from table 1 are plotted on a graph (see figure 6, right). the graph clearly shows that the standard deviation of consecutive measurements of a single sample increases when the sample is drier; thus repeatability is better with moister samples. however, this dependence vanishes when variation due to the material properties is added, as mentioned at the end of section 6.2. the repeatability test results for the small-sized tree samples in the spring 2015 measurement session are missing from table 1, because this solid biofuel material was not available during the second measurement session.

table 1. the repeatability test results of the nmr measurement instrument for the five solid biomaterials at three moisture levels.

                                     autumn 2014                spring 2015
material           moisture class    moisture      std          moisture      std
                                     on avg. /%    /%mc         on avg. /%    /%mc
sawdust            dried             26.2          0.7          14.7          1.1
                   normal            54.7          0.2          54.9          0.4
                   moistened         64.4          0.2          68.4          0.3
bark waste         dried             26.4          0.7          21.3          0.9
                   normal            50.9          0.2          50.4          0.7
                   moistened         64.7          0.2          59.6          0.2
pruning residues   dried             15.9          1.1          18.8          1.0
                   normal            27.1          0.9          51.8          0.2
                   moistened         58.1          0.4          64.0          0.2
small-sized tree   dried             17.2          0.7          –             –
                   normal            49.9          0.7          –             –
                   moistened         64.2          0.2          –             –
crushed stump      dried             15.9          0.5          11.1          0.4
                   normal            35.4          0.5          27.7          0.5
                   moistened         50.3          0.3          58.7          0.9

figure 6. the graph presents the values from table 1 graphically, and the dependence between repeatability and moisture content can be clearly seen.
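the repeatability figures of table 1 reduce five repeated readings of a single sample to an average moisture and a standard deviation; a minimal matlab sketch of that reduction follows, using dummy data rather than the paper's values.

% single-sample repeatability: one row per sample, five repeated readings (%mc).
reps = [26.5 25.8 26.1 27.0 25.6;
        54.6 54.8 54.9 54.5 54.7];        % illustrative values only

mc_avg = mean(reps, 2);                   % average moisture per sample, %mc
mc_std = std(reps, 0, 2);                 % repeatability (standard deviation), %mc
disp(table(mc_avg, mc_std));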
7. measurement results of the microwave instrument

7.1. common microwave instrument results for all biomaterial samples

similarly to the nmr research data and measurement results, most of the measurement results and data obtained with the microwave instrument were included in another conference proceedings [4]. the representation of the research results of the microwave instrument has also been improved here, and the measurement results are discussed from the viewpoint of the differences between the instruments. the measurements made with the microwave-based moisture measurement instrument were similar to those made with the nmr moisture measurement instrument in section 6. however, fewer samples were measured, because the microwave instrument was available only for the first measurement session. contrary to the nmr measurements, the peat samples were measured with the microwave instrument.

for the six chosen biomaterials, the average difference between all the moisture measurement results determined with the microwave instrument and the oven drying method is 0.0 ± 1.8 %mc when the uncertainty is given at the 95 % confidence level (k = 2). this can be considered a very good result, since the moisture range for the five different forest-based biomaterials varied from 14.3 to 68.6 %. however, one should keep in mind that the calibration was carried out with the reference lod method, so the microwave instrument was tuned according to the oven-drying measurement results. also, perfectly representative calibration samples, taken from the same truckloads or piles as the measurement samples, were used. on average, the standard deviation of each biomaterial in the three moisture ranges was 0.7 %mc for the microwave measurement instrument and 0.4 %mc for the oven-drying method. thus, the variation of the measurement results for parallel samples of a single material is larger than that of the lod method.

to demonstrate the significance of the good representativeness of the actual measurement samples provided by the calibration samples, the microwave instrument was also used with a calibration carried out using calibration samples collected one to three weeks earlier than the measurement samples, from another biomaterial load supplied from a different location. in addition, the driest calibration samples had moisture contents slightly below 15 %, and the suppliers of two biomaterials (milled peat and chipped small-sized trees) changed between the collection of the calibration and measurement samples. despite the supplier change, the peat samples and chipped small-sized tree samples seemed visually similar to the calibration samples. such imperfect collection of calibration samples is quite possible at power plants. with this incomplete microwave instrument calibration, the difference in the moisture measurement results between the oven-drying method and the microwave instrument was 0.1 ± 5.2 %mc on average. this result indicates that without optimal representativeness of the calibration samples, the variation in the moisture readings may be large, while the mean value may stay close to the correct value.

the repeatability tests for the microwave instrument showed that the standard deviation for the measurement repeated five times on single samples taken from all six sample materials at three moisture levels was 0.3 %mc on average. the material-specific and moisture-level-specific repeatability tests are described in section 7.3.
obviously, the five repeated measurements on a single sample cannot be done with the oven-drying method, because the sample changes (dries) during the first measurement.

7.2. biomaterial-specific measurement results for the microwave instrument at three moisture levels

this section summarizes the biomaterial-specific and moisture-level-specific microwave moisture measurement results for all the biomaterials. the left-hand image in figure 7 shows the performance of the microwave measurement instrument for the sawdust samples. sawdust is typically quite a homogeneous material, but surprisingly the largest dispersion of the microwave and lod differences among the six chosen biomaterials was found for the sawdust samples: the mean difference in moisture content between the oven-drying method and the microwave instrument was –0.1 ± 2.7 %mc for sawdust when a 95 % confidence level was applied (k = 2). when considering not only the different biomaterials but also the moisture levels, the difference and standard deviation were largest for dry sawdust samples among all materials and all moisture levels, being –1.5 ± 2.6 %mc; thus the microwave instrument underestimated the moisture content of dry samples, as can be seen in the left-hand image in figure 7.

the measurement results for chipped bark waste and chipped pruning residues are quite similar, although the materials are quite different among forest-based biomasses. the results are presented in the middle and right-hand images in figure 7. the differences between the microwave moisture measurements and the lod measurements are quite small in terms of both the mean value and the standard deviation, being 0.2 ± 1.4 %mc for the bark waste samples and 0.4 ± 1.4 %mc for the pruning residue samples, when a 95 % confidence level was applied (k = 2).

figure 7. the differences of all measurement pairs between the microwave measurement and the reference lod method are presented for sawdust (left-hand image), bark waste (middle image), and pruning residue (right-hand image). the x-axis represents the microwave – lod difference in moisture content and the y-axis represents the moisture content of the sample according to the reference measurement. if the circle or cross falls on the dashed line, no deviation between the microwave and reference instrument measurements exists. the corresponding images of the remaining biomaterials are presented in figure 8.

for chipped small-sized trees, the measurement results are presented in the left-hand image of figure 8. the difference between the microwave and lod moisture measurement methods was 0.3 ± 1.9 %mc with a 95 % confidence level. the standard deviation was the second poorest after the pure sawdust samples, but it can be recalled from section 2 that the chipped small-sized tree material was mixed with sawdust, and this may have affected the results. the best measurement results for the microwave instrument were achieved with the chipped and crushed stump samples. this can also be seen in the middle image of figure 8, in which all the crosses indicating a measurement difference between the microwave instrument and the lod method are located close to the dashed zero line. the difference between the microwave instrument measurements and the reference lod method was –0.2 ± 1.1 %mc when a 95 % confidence level was applied (k = 2); owing to the small variation, this mild underestimation can be easily observed in the middle image of figure 8.
the last tested biomaterial was milled peat; these samples were measured only with the microwave instrument and the reference lod method, not with the nmr instrument. the difference between the moisture measurement results achieved with the microwave instrument and the reference lod method was 0.1 ± 1.5 %mc with a 95 % confidence level. as seen in the right-hand image of figure 8, the variation of the measurement results for milled peat is quite high for wet samples, but the measurement results for the dried peat samples and the peat samples at the as-delivered moisture content are very close to the zero line and thus match the reference lod method results well.

figure 8. the differences of all measurement pairs between the microwave measurement and the reference lod method are presented for chipped small-sized trees (left-hand image), crushed and chipped stumps (middle image), and milled peat (right-hand image). the x-axis represents the microwave – lod difference in moisture content and the y-axis represents the moisture content of the sample according to the reference measurement. if the circle or cross falls on the dashed line, no deviation between the microwave and reference instrument measurements exists.

7.3. repeatability measurements of the microwave-based moisture measurement instrument

table 2 summarizes the material-specific and moisture-level-specific repeatability tests of the microwave instrument for single samples. one sample of each sample class was chosen, the microwave moisture measurement was repeated five times on the sample, and the average moisture and standard deviation of the five measurements were calculated. table 2 shows that the standard deviation of the repeated measurement results varied between 0.1 and 1.1 %mc, depending on the biomaterial. the average of all standard deviations of single samples for the different materials and moisture levels was 0.3 %mc. the highest standard deviation values were obtained with the chipped small-sized tree samples and the milled peat samples. in section 7.1, we showed that the standard deviation with parallel samples (not exactly the same sample) was 0.7 %mc on average; thus more than half of the variation of the microwave instrument results can be explained by the variation of the biomaterial and the rest by the instrument properties. the repeatability test values from table 2 are plotted on a graph (see figure 9, right). a clear dependence between the standard deviation of consecutive measurements of a single sample and the moisture content of the sample cannot be established, as was possible for the nmr instrument, but a dependence between the material type and the standard deviation seems evident.

table 2. the repeatability test results of the microwave measurement instrument for the six solid biomaterials at three moisture levels. the repeatability test results of the measurement session in autumn 2014 are presented.

material           moisture class    moisture on avg. /%    std /%mc
sawdust            dried             23.4                   0.1
                   normal            55.3                   0.1
                   moistened         66.4                   0.2
bark waste         dried             23.7                   0.2
                   normal            52.2                   0.1
                   moistened         69.4                   0.1
pruning residues   dried             17.5                   0.1
                   normal            32.8                   0.1
                   moistened         71.2                   0.1
small-sized tree   dried             16.5                   0.9
                   normal            53.6                   0.7
                   moistened         68.7                   0.5
crushed stump      dried             15.6                   0.1
                   normal            36.2                   0.1
                   moistened         54.8                   0.1
milled peat        dried             25.3                   1.1
                   normal            50.5                   0.1
                   moistened         61.3                   0.5

figure 9. the graph on the right presents the values from the table on the left graphically; a clear dependence between repeatability and moisture content cannot be observed.
8. discussion

when the two measurement instruments – the senfit bma desktop microwave instrument and the valmet mr moisture nmr instrument – were compared, the senfit bma achieved slightly better results in terms of both precision and accuracy (0.0 ± 1.8 vs. 1.0 ± 3.8 %mc for all samples on average). however, the good measurement results of the microwave instrument depend strongly on the calibration procedure. in particular, similarity between the calibration material and the sample material is important for the microwave instrument: if the calibration and sample material properties deviate, the measurement accuracy worsens rapidly and the uncertainty may easily triple.

the calibration process of the nmr instrument is much quicker and simpler than that of the microwave instrument. the nmr instrument is calibrated only with an empty and a water-filled sample container, and the same calibration works for all sample materials; thus, another moisture measurement method (e.g. the lod method) is not needed for the calibration. the nmr instrument calibration takes about five minutes daily. in comparison, the microwave instrument must be calibrated against another moisture measurement method (typically the lod method) separately for each biomaterial and each biomaterial supplier, and the calibration process takes at least one day. an appropriate calibration line must always be loaded from memory when the biomaterial type or supplier changes. thus another moisture measurement method (most commonly the lod) cannot be dispensed with when using the senfit bma desktop, because it is still needed for the calibration of the microwave instrument. the precision of the microwave instrument is always quite good, because the calibration is carried out according to the reference lod method measurements: the microwave instrument is tuned towards the lod reference measurement results during the calibration process.

the measurement range of the nmr instrument is larger than that of the microwave instrument (0–90 vs. 0–70 %). both instruments have their own restrictions: the samples for the microwave instrument must be milled to meet the particle size requirements, similarly to the lod method, whereas the maximum particle size of the nmr instrument samples is not restricted. the samples for the nmr instrument must not include ferromagnetic components, which may occur with, for example, some peat samples. also, the nmr instrument sample must include at least 20 g of water for a reliable measurement result; this typically corresponds to a moisture content of 5–10 %, depending on the mass of the 0.8 l biomaterial sample.

the moisture measurement results showed that the total variation between the moisture content values of parallel samples was slightly larger for the nmr instrument than for the microwave instrument (the standard deviation was on average 0.9 %mc vs. 0.7 %mc). additionally, the repeatability tests on the very same sample showed that, on average, the uncertainty caused by the instrument itself was also slightly larger for the nmr instrument than for the microwave instrument (the standard deviation was on average 0.5 %mc vs. 0.3 %mc). the uncertainty caused by the nmr instrument increases when the samples are drier, but this does not occur with the microwave instrument measurements.
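the paper does not disclose the internal calibration model of the microwave instrument, so the sketch below is only a hedged illustration of the material- and supplier-specific calibration workflow described above: a first-order calibration line is assumed, fitted against lod reference values and stored per material.

% hypothetical material-specific calibration of a raw sensor output against lod.
raw_uw = [0.21; 0.35; 0.52; 0.66; 0.78];  % raw microwave readings (illustrative units)
mc_ref = [15.2; 27.4; 41.8; 53.0; 64.5];  % lod reference moisture, %mc (illustrative)

p = polyfit(raw_uw, mc_ref, 1);           % calibration line for this material/supplier
mc_new = polyval(p, 0.60);                % apply the stored line to a new reading
fprintf('calibrated moisture: %.1f %%MC\n', mc_new);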
when observing the average standard deviation of the measurements of parallel samples and the average standard deviation of the repeated measurements of the very same sample, as given above, it can be noticed that 0.4 %mc of the total variation is explained by the variation of the material properties for both instruments. this was also the average standard deviation of the moisture content of parallel samples measured with the lod method. based on this result, it can be concluded that the total variation of the lod method is almost fully explained by material variation; thus the variation caused by the lod method itself is close to zero, although a repeatability test cannot be carried out for the lod method by drying the same sample several times. the variation of the properties between different biomaterials affected the performance of both moisture measurement instruments as well as the lod method, which was expected.

no measurement accuracy interval is given for the standardized lod method in [3], owing to the high variability of the properties of different solid biofuels. however, for solid biofuels having a particle size smaller than 1 mm, a deviation smaller than 0.2 % is accepted for parallel samples [17]. the particle size of all solid biofuels in this research, excluding milled peat, was larger than 1 mm. the authors held discussions with the key personnel of an enterprise that measures the daily quality of supplied biomasses for a few finnish power plants with the conventional lod method. according to them, a standard deviation of 0.5 % for moisture measurement results (i.e. a ±1.0 %-point uncertainty with a 95 % confidence level) is considered a good result for parallel samples measured with the reference lod method. they also tested the novel instruments discussed in this article. in industrial use, the commonly accepted deviation between the moisture readings of an excellent moisture measurement instrument and the reference lod method is ±2.0 %-points with a 95 % confidence level.

the uncertainty interval of the microwave instrument, 0.0 ± 1.8 %, is only just within the commonly accepted limits for an excellent instrument on average, and the performance varies according to the biomass material: the microwave instrument satisfies the limits for five materials out of six. the standard deviation of the measurements was outside the range only for the sawdust samples. the offset of the microwave instrument measurements was very small because this instrument must be calibrated against the reference lod method. the uncertainty interval of the nmr instrument, 1.0 ± 3.8 %, was not inside the acceptable uncertainty limits for an excellent instrument on average. among the five materials researched, the measurement uncertainty of the crushed and chipped stump material was inside these limits even with the offset. for most materials, the offset was larger in comparison with the microwave instrument. the standard deviation of the measurement results of the nmr instrument was close to the acceptable limits for three of the sample materials but not for pruning residue. if the offset is tuned, a good enough measurement uncertainty can be achieved with the nmr instrument.
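the variance bookkeeping and the industrial acceptance limit discussed in this section fit in a few lines; the sketch below follows the paper's own linear subtraction of standard deviations, and the comment notes the more conventional quadrature alternative.

% decomposition of the observed variation, as argued in the text.
sd_total_nmr = 0.9;  sd_instr_nmr = 0.5;       % parallel samples vs. same sample, %mc
sd_total_uw  = 0.7;  sd_instr_uw  = 0.3;       % microwave instrument, %mc

sd_material_nmr = sd_total_nmr - sd_instr_nmr; % 0.4 %mc (linear subtraction, as in the text)
sd_material_uw  = sd_total_uw  - sd_instr_uw;  % 0.4 %mc
% a quadrature decomposition, sqrt(total^2 - instr^2), would be the usual alternative.

% industrial acceptance check: +/- 2.0 %-points at a 95 % confidence level.
U95 = [1.8, 3.8];                              % microwave, nmr (k = 2)
accepted = U95 <= 2.0;                         % microwave passes, nmr does not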
9. conclusions and future work

in conclusion, both the nmr-technology-based valmet mr and the microwave-technology-based senfit bma desktop moisture measurement instruments may be reasonable alternatives for replacing (nmr) or reducing the number of (microwave) lod-based measurements when rapid measurements are needed. however, it seems that neither of them can beat the lod method in terms of accuracy and precision; nevertheless, good enough results for biomaterial invoicing can be achieved with the microwave instrument, and the nmr instrument achieves results that are close to the acceptable limits as well.

in future work, we will test the performance of our reference lod method. a few parallel samples have already been measured with an enhanced lod method [18]. the enhanced oven drying system is equipped with a cold trap to collect water and other vocs (volatile organic compounds) from the vaporized gases during drying. tentatively, a small but non-significant effect of vocs on the moisture measurement results has been observed, but this will be reported in more detail in future work.

acknowledgements

the research project is funded by a research grant from the european metrology research programme (emrp-reg). the emrp is jointly funded by the emrp participating countries within euramet and the european union. in addition, the authors would like to thank vtt mikes kajaani for providing its infrastructure for sample preparation and measurements; senfit ltd, valmet ltd, and the university of oulu for loaning the measurement instruments; the combustion plant kainuun voima ltd for supplying the sample material for the research; and finally prometec ltd for loaning its sample mill and sample mixer as well as for providing information and assistance with the practical work on biomaterial measurements at the power plant.

references

[1] en-ts 14778:2011, solid biofuels. sampling. 2011.
[2] en 14780:2011, solid biofuels. sample preparation. 2011.
[3] cen-ts 14774-2, solid biofuels – determination of moisture content – oven dry method – part 2: total moisture – simplified method. 2004.
[4] österberg p., heinonen m., ojanen-saloranta m., mäkynen a., "the comparison of a microwave based bioenergy moisture measurement instrument against the loss-on-drying method", xxi imeko world congress "measurement in research and industry", 30 august – 4 september 2015, prague, czech republic, 6 p.
[5] österberg p., heinonen m., ojanen-saloranta m., mäkynen a., "comparison of an nmr-based bioenergy moisture measurement instrument against the loss-on-drying method", isema 2016, 11th international conference on electromagnetic wave interaction with water and moist substances, 23–27 may 2016, florence, italy, 11 p.
[6] fridh l., volpé s., eliasson l., "an accurate and fast method for moisture content determination", int. j. for. eng. 25 (2014), no. 3, pp. 222-208. issn 1494-2119.
[7] fridh l., "evaluation of metso mr moisture analyser", work report no. 839-2014, skogforsk, 2014.
[8] järvinen t., "rapid and accurate biofuel moisture content gauging using magnetic resonance measurement technology", vtt technology, espoo, 2013.
[9] rytkönen t., kiinteiden biopolttoaineiden lämpöarvon ja kosteuden määrittäminen nmr-menetelmällä [the determination of calorific value and moisture content of solid biofuels using nmr technology], master's thesis, university of eastern finland, 2012, 60 p. (in finnish).
[10] toivakainen j., voimalaitoksen leijukattilassa käytettävän metsäpolttoaineen laadunmääritys [determining the quality of forest biofuel used in a power plant's fluidized bed boiler], diploma thesis, tampere university of technology, june 2014 (in finnish).
[11] hautakorpi j., bma-kosteusanalysaattorin hyödynnettävyys biopolttoainejakeiden ja hakkeiden kosteusmäärityksissä [usability of the bma moisture analyzer in measuring the moisture content of biofuel materials and wood chips], bachelor's thesis, tampere university of applied sciences, june 2013 (in finnish).
[12] metefnet, webpage of the emrp project. online [accessed 17 february 2016] http://metef.net/
[13] termaks ltd. online [accessed 22 february 2016] www.termaks.com
[14] valmet ltd, product webpage. online [accessed 22 february 2016] www.valmet.com/products/automation/analyzers-and-measurements/analyzers/mr-moisture/
[15] senfit ltd, product webpage. online [accessed 22 february 2016] http://www.senfit.com/products/biomass-sample-mill.html
[16] senfit ltd, product webpage. online [accessed 22 february 2016] www.senfit.com/products/biomass-moisture/bma.html
[17] cen-ts 14774-3, solid biofuels – determination of moisture content – oven dry method – part 3: moisture in general analysis sample. 2004.
[18] ojanen m., sairanen h., riski k., kajastie h., heinonen m., "moisture measurement setup for wood based materials", ncsli measure j. meas. sci. 9 (2014), no. 4, pp. 56–60.

evaluating chemometric strategies and machine learning approaches for a miniaturized near-infrared spectrometer in plastic waste classification

acta imeko issn: 2221-870x june 2023, volume 12, number 2, 1-7

claudio marchesi1, monika rani1, stefania federici1, matteo lancini2, laura e. depero1

1 department of mechanical and industrial engineering, university of brescia & udr instm of brescia, via branze 38, 25123 brescia, italy
2 department of medical and surgical specialties, radiological sciences, and public health, university of brescia, viale europa 11, 25123 brescia, italy

section: research paper

keywords: plastic waste sorting; near-infrared spectroscopy (nirs); circular economy; chemometrics; machine learning

citation: claudio marchesi, monika rani, stefania federici, matteo lancini, laura e. depero, evaluating chemometric strategies and machine learning approaches for a miniaturized near-infrared spectrometer in plastic waste classification, acta imeko, vol. 12, no. 2, article 40, june 2023, identifier: imeko-acta-12 (2023)-02-40
section editor: leonardo iannucci, politecnico di torino, italy

received march 31, 2023; in final form june 19, 2023; published june 2023

copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

funding: the research was funded by the project pon “r&i” 2014-2020: sirimap—sistemi di rilevamento dell’inquinamento marino da plastiche e successivo recupero-riciclo (no. ars01_01183, cup d86c18000520008) and partially based upon work from cost action ca20101 plastics monitoring detection remediation recovery priority, supported by cost (european cooperation in science and technology), www.cost.eu.

corresponding authors: stefania federici, e-mail: stefania.federici@unibs.it; matteo lancini, e-mail: matteo.lancini@unibs.it

abstract

optimizing the sorting of plastic waste plays a crucial role in improving the recycling process. in this contribution, we report on a comparative study of multiple machine learning and chemometric approaches to categorize a data set derived from the analysis of plastic waste performed with a handheld spectrometer working in the near-infrared (nir) spectral range. conducting a cost-effective nir study requires identifying appropriate techniques to improve commodity identification and categorization. chemometric techniques, such as principal component analysis (pca) and partial least squares discriminant analysis (pls-da), and machine learning techniques, such as support-vector machines (svm), fine tree, bagged tree, and ensemble learning, were compared. various pre-treatments were tested on the collected nir spectra. in particular, standard normal variate (snv) and savitzky-golay derivatives as signal pre-processing tools were compared with feature selection techniques such as a multiple gaussian curve fit based on radial basis functions (rbf). furthermore, results were combined into a single predictor by using a likelihood-based aggregation formula. the predictive performances of the tested models were compared in terms of classification parameters such as the non-error rate (ner) and sensitivity (sn) with the analysis of the confusion matrices, giving a broad overview and a rational means for the selection of the approach in the analysis of nir data for plastic waste sorting.

1. introduction

from the perspective of whole-system economic sustainability, the enormous volume of urban plastic waste and the constant increase in human plastic consumption require a high level of waste valorisation. global plastic production reached 367 million tons in 2021, with europe accounting for 16 % of the total [1]. of the plastic produced, only 9 % was recycled, 12 % was incinerated, and 79 % ended up in landfills or natural compartments [2]. the recycling of polymer waste has significant environmental advantages owing to the replacement of primary manufacturing, and waste sorting optimization plays a critical role in the development of the recycling process [3], [4]. recycling is a technique for the end-of-life waste management of plastic products [5]. basically, two types of recycling processes can be distinguished: mechanical and chemical [3], [6]. in both, sorting is the most critical stage, regardless of how effective the recycling program is [3], [4]. the use of automated sorting equipment makes the process more efficient [7]. usually, these devices rely on vibrational spectroscopic techniques [8]-[11] and camera
systems for the polymer identification of clear and coloured products [5], [12]. other techniques are based on ultraviolet (uv) spectroscopy [13], [14], x-rays [15], and hyperspectral imaging [16]-[18]. over the years, this strategy has increased the purity of the output plastic, achieving a high percentage of recyclates in the production of secondary materials. however, these systems reach their limits with mixed plastics, which require additional sorting elsewhere and can affect the quality of the recyclate if not appropriately allocated. a positive cost-benefit analysis is only possible if the separated polymer fractions have a high purity grade and satisfy the market demand for high-quality recyclates. therefore, post-consumer recycling consists of many essential steps: collection, sorting, cleaning, size reduction and separation, and/or compatibilization to reduce polymer contamination [5]. in this scenario, the prospect of combining a well-established polymer identification technology with a small, portable, low-cost, real-time spectrometer for local and intermittent semi-automatic sorting is highly desirable, accompanied by robust data analysis [19], [20]. in recent years, the chemometric analysis of non-destructive spectroscopic data has been widely investigated as an automated method for improving plastic sorting systems [21]-[24]. this improvement has been driven by the need to reduce the environmental impact [25]. recently, machine learning has attracted considerable attention in plastic waste recognition using spectroscopic techniques [26]-[32].

in this study, we compared machine learning and chemometric techniques for classifying plastic waste data acquired with a portable near-infrared (nir) spectrometer (see figure 1 for the scheme of the work). comparisons were made between the chemometric approaches, principal component analysis (pca) and partial least squares discriminant analysis (pls-da), and the machine learning techniques, support-vector machines (svm), fine tree, bagged tree, and ensemble learning. a comparison was also made in terms of pre-processing: traditional techniques, such as standard normal variate (snv) and savitzky-golay derivatives, were examined in contrast to feature reduction techniques, such as a multiple gaussian curve fit based on radial basis functions (rbf). the predictive performances of the tested models were compared in terms of classification parameters, such as the non-error rate (ner) and sensitivity (sn), with the analysis of confusion matrices, providing a comprehensive overview and a rational means of selecting the approach for the analysis of nir data for plastic waste sorting.

2. materials and methods

2.1. sample collection

the first batch of plastic samples was collected in the selection division of the montello spa recovery and recycling plant (bergamo, italy), which accepts post-consumer plastic in the form of municipal waste for recycling [20]. subsequently, the dataset was expanded to include new samples from municipal waste collected before ending up in landfills.
a total of 325 samples from a variety of polymer classes were used in this study. specifically, the products studied were 75 samples of poly(ethylene terephthalate) (pet), 100 samples of polyethylene (pe), 75 samples of polypropylene (pp), and 75 samples of poly(styrene) (ps). the assortment included bottles, containers, and packaging of various sizes, shapes, and colours.

2.2. nir analysis

the plastic samples were analysed using the micronir on-site spectrometer (viavi solutions inc., ca, united states) in reflectance mode without pre-treatment of the samples. the instrument is a palm-sized, portable spectrometer weighing approximately 250 g and measuring less than 200 mm in length and 50 mm in diameter. it is equipped with a linear variable filter (lvf) coupled to a linear detector array, and operates in the wavelength range 950-1650 nm. the control settings for spectral data acquisition were a 10 ms integration time and 50 scans, resulting in a short measurement time of 0.25 seconds. a point-and-shoot technique was used to perform 5 replicates for each sample to reduce the effects caused by sample non-uniformity. a total of 1625 spectra were acquired, and acquisition was performed using the micronir pro v3.0 software (viavi solutions inc., ca, united states).

figure 1. scheme of the work.

2.3. spectral pre-processing and chemometrics

the pre-processing of nir spectral data has become an essential aspect of chemometric modelling. the goal is to eliminate physical events from the spectra to improve the subsequent multivariate regression, classification model, or exploratory analysis [33]. in this study, the spectra were collected in a single matrix of 1625 × 125 (samples × wavelengths), and pre-processing was applied using the savitzky-golay second-derivative method with seven data points and a second-order polynomial, followed by the standard normal variate (snv). the second derivative was applied to correct the drift effect [34], [35] in the nir spectra, while snv corrects the baseline shift [36]. snv was calculated as follows [36]:

x_corr = (x_org − a_0) / a_1 ,   (1)

where x_corr is the corrected spectrum, x_org is the raw spectrum collected by the instrument, a_0 is the mean of the spectrum to be corrected, and a_1 is its standard deviation. in addition, normalization was performed by mean centering.

different chemometric methods were used for the correct evaluation of the data of all analysed samples. pca was initially applied as an exploratory analysis to investigate the data structure and was performed on the 1625 nir spectra from all polymer classes. then, pls-da was applied as a supervised pattern recognition tool to separate the different commodities. prior to using pls-da, the data were split into a training set and a test set using a matlab proprietary function. the process was repeated 500 times, generating a different training and test set each time (75 % of the samples belonged to the training set and 25 % to the test set). all chemometric analyses were performed with matlab 2021b (the mathworks, inc., natick, ma, usa) using the pls-toolbox (eigenvector research, inc., manson, washington, usa).
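as a self-contained illustration of the pre-processing chain of this section (savitzky-golay second derivative with a 7-point window and a second-order polynomial, snv as in equation (1), and mean centering), the matlab sketch below re-implements the steps directly instead of calling the pls-toolbox used by the authors; sgolay requires the signal processing toolbox, and the input matrix is a placeholder.

% pre-processing sketch: rows of X are spectra (n x 125).
X = rand(10, 125);                           % placeholder spectra for illustration

[~, g] = sgolay(2, 7);                       % 2nd-order fit, 7-point frame
d2 = 2 * g(:, 3)';                           % fir coefficients of the 2nd derivative
Xd = zeros(size(X));
for i = 1:size(X, 1)
    Xd(i, :) = conv(X(i, :), d2, 'same');    % derivative spectrum, row by row
end

Xsnv = (Xd - mean(Xd, 2)) ./ std(Xd, 0, 2);  % snv, equation (1), per spectrum
Xmc  = Xsnv - mean(Xsnv, 1);                 % mean centering over the samples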
2.4. machine learning and pre-processing

various machine learning algorithms were applied for classification purposes: svm, fine tree, ensemble learning, and bagged tree. in addition, a likelihood-based aggregation procedure (here called combo) was used to integrate the data into a single predictor, and the same procedure was applied with a monte carlo method (mcm) to perturb the raw data, in order to improve the generalization performance. the chosen hyperparameters are the following: for the fine tree, gini's diversity index (gdi) was used as the split criterion with a maximum of 100 splits; the svm was performed with a linear kernel function with a kernel scale equal to 3; lastly, ensemble learning was performed with the bagged tree method with 30 learning cycles. to test the reliability of the system, 200 random extractions were performed for splitting the training and testing sets. again, 75 % of the samples were used for training and the rest for testing.

the machine learning methods were applied to three different datasets: the raw data collected as specified in section 2.2, data reduced using the gaussian rbf curve fit [37], and a dataset obtained by combining the raw and pre-processed data. each curve of the dataset was fitted using a combination of 12 gaussian functions and an interpolation with a second-degree function, thus reducing the dataset dimension to 12 rbf centres and 12 sigma values. the procedure is as follows:

1. the second-order derivative is computed and fed to a peak detection algorithm to obtain the initial guesses of the rbf centres (here the matlab function "findpeaks" was used, limited to a maximum of 12 peaks and excluding the first and last 20 samples of the spectrum).
2. a linear regression with a second-degree equation is used to remove the offset and second-order trends.
3. the rbf centres are used as the initial guess for an optimization procedure based on a sequential quadratic programming constrained minimization function [38]. the cost function ε used is reported in (2), where f_i is the frequency of the i-th sample, y_i is its raw value, and μ_j, σ_j, and a_j are the centre, sigma, and amplitude of the j-th rbf function, respectively:

ε = Σ_i Σ_j ε_{i,j},  with  ε_{i,j} = 0 if σ_j < 0, and ε_{i,j} = a_j · exp( −(f_i − μ_j)² / (2 σ_j²) ) if σ_j ≥ 0.   (2)

4. the centres and sigmas found are collected as the features of the new dataset.

the condition posed in (2) on positive sigma values dynamically reduces the number of rbf functions actually used, while the interpolation removes trends that could hide peaks. a third dataset combining the two previous datasets (raw and rbf gaussian fit) was also created by simply joining the two tables. all calculations were performed using matlab and the statistics toolbox release 2021b (the mathworks, inc., natick, ma, usa). automation of the procedure was implemented using matlab functions created in-house. figure 2 reports the data analysis approach starting from raw data, both for chemometric and machine learning modelling.

figure 2. data analysis approach.
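a sketch of the rbf feature reduction follows. it mirrors the four steps above (peak detection for the initial centres, second-degree detrending, sqp refinement, centres and sigmas as features) but minimizes ordinary least-squares residuals against the spectrum, a common variant, rather than reproducing the exact cost of equation (2); findpeaks (signal processing toolbox) and fmincon (optimization toolbox) are assumed available, and the input spectrum is a placeholder.

% rbf feature extraction sketch for one spectrum y over axis f.
y = rand(1, 125);  f = 1:125;                    % placeholder spectrum and axis

y  = y - polyval(polyfit(f, y, 2), f);           % remove offset and 2nd-order trend
d2 = gradient(gradient(y));                      % numerical second derivative
[~, locs] = findpeaks(-d2(21:end-20), 'NPeaks', 12, 'SortStr', 'descend');
mu0 = f(locs + 20);                              % initial rbf centres
k   = numel(mu0);
sg0 = 10 * ones(1, k);  A0 = y(locs + 20);       % initial sigmas and amplitudes

model = @(p) sum(p(2*k+1:3*k)' .* ...
        exp(-(f - p(1:k)').^2 ./ (2 * p(k+1:2*k)'.^2)), 1);
cost  = @(p) sum((y - model(p)).^2);             % least-squares cost (variant of eq. (2))
p0    = [mu0, sg0, A0];
lb    = [-inf(1, k), 0.5 * ones(1, k), -inf(1, k)]; % keep sigmas positive (cf. eq. (2))
opts  = optimoptions('fmincon', 'Algorithm', 'sqp', 'Display', 'off');
p     = fmincon(cost, p0, [], [], [], [], lb, [], [], opts);

features = p(1:2*k);                             % centres and sigmas as new features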
3. results and discussion

3.1. nir spectra

the main advantage of nir spectroscopy is that it is a fast-response analytical technique capable of collecting spectra without prior processing and predicting physical and chemical properties from a single spectrum [39]. the absorption bands in the nir region are caused by overtones and/or combination bands of primarily carbon-hydrogen and oxygen-hydrogen vibrations. correct band assignment is difficult, since a band may be caused by various combinations of fundamental vibrations; in addition, overtone vibrations overlap strongly [40]. representative nir reflectance spectra of the four polymers (pe, pet, pp, and ps) are shown in figure 3.

the main absorbance band for pet was found at 1660 nm, which is related to the 1st overtone of the c-h stretch [41], with two other peaks at about 1130 nm and 1415 nm. for pe, the peak around 1211 nm is related to the 2nd overtone of the methylene c-h group, while the peak at about 1217 nm is related to the c-h stretch [42]. the peaks at 1391 nm and 1168 nm correspond, respectively, to a c-h combination band and the 2nd overtone of the ch2 symmetric stretch. regarding pp, the 2nd overtone of the asymmetric methyl c-h stretch is around 1193 nm, while the asymmetric methylene c-h stretch occurs at about 1211 nm [43]. the two peaks at 1391 nm and 1397 nm are related to methyl and methylene (c-h) combinations. lastly, for ps, the peak at 1205 nm corresponds to the 2nd overtone of the aromatic c-h stretch; the stretching vibrational mode of c-h occurs around 1639 nm, and the 1st overtone of the aromatic c-h stretch overlaps with a c-h combination band at about 1391 nm [42]. to allow comparison between the raw spectra and the same spectra after applying the savitzky-golay 2nd derivative and snv, figure 4 shows the representative spectra of the four commodities after pre-processing.

figure 3. representative near-infrared (nir) spectra of the four classes of polymers.

figure 4. nir spectra of the four classes of polymers after the typical pre-processing for chemometric analysis: savitzky-golay 2nd derivative and standard normal variate.

3.2. principal component analysis

the pca calculation was performed after the pre-processing described above on the entire spectral range. pca is a useful chemometric method for data structure analysis; its goal is to condense the information stored in many variables into a smaller number of variables, called principal components [44]. figure 5 shows the score plot of the first two components (73.88 % of the total explained variability), in which a clear separation between the polymer classes can be seen. along pc1, pet is distinguished from the other commodities: pet samples show very negative score values, while the other samples show positive score values. along pc2, ps is clearly separated from the other plastics. a clear separation between pp and pe can be noticed in the score plot of pc1 vs pc3 in figure 6, where pc3 accounts for 15.83 % of the total information and explains the difference of pp from the other classes of polymers.

figure 5. results of pca performed on the spectral data of the different commodities. the score plot of pc1 vs pc2 is presented.

figure 6. results of pca performed on the spectral data of the different commodities. the score plot of pc1 vs pc3 is presented.
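for readers without the pls-toolbox, the exploratory pca of this section can be approximated with the built-in pca function of the statistics and machine learning toolbox; in the sketch below, the spectra and class labels are placeholders.

% pca score plot sketch on pre-processed, mean-centred spectra.
Xmc    = randn(40, 125);                      % placeholder pre-processed spectra
labels = repelem({'pet', 'pe', 'pp', 'ps'}, 10)';

[~, score, ~, ~, explained] = pca(Xmc, 'Centered', false);
fprintf('pc1-pc3 explain %.1f %% of the variance\n', sum(explained(1:3)));

gscatter(score(:, 1), score(:, 2), labels);   % score plot, pc1 vs pc2
xlabel('pc1'); ylabel('pc2');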
3.3. partial least squares discriminant analysis

following the exploratory pca analysis, a supervised classification technique was used to distinguish the different plastic groups. in pls-da, a classification objective is added to the pls regression technique. the response variable is categorical and reflects the class to which the statistical units belong. pls-da returns the prediction as a vector with values between 0 and 1 and a length equal to the number of classes in the predictor variables [45], [46]. each time pls-da was performed, parameters such as the ner and sensitivity were calculated in fitting, in cross-validation (cv), and for the test set. the cross-validation procedure was based on a venetian blinds approach with 5 groups. cv was also used to determine the optimal number of latent variables (lvs) for each pls-da model. figure 7 shows the sensitivities for each class, calculated for the training set, in cv, and for the test set. the values are close to 1, indicating a very high classification performance. moreover, the results are very balanced between the training set, cv, and test set; therefore, overfitting is avoided, and the model can be considered reliable and stable. table 1 shows the ner, defined as the mean class sensitivity [47], calculated for the training set, in cross-validation, and for the test set. overall, 99 % of the samples were correctly classified in each of the 500 iterations.

figure 7. pls-da model. class sensitivities (sn) calculated for the training set, in cross-validation, and for the test set.

table 1. pls-da model. non-error rate calculated for the training set, in cv, and for the test set.

            ner
training    0.99
cv          0.99
test        0.99

3.4. machine learning

due to the complexity and the large number of results, for the machine learning analysis the classification parameters are presented only for the test set. figure 8 shows the ner of the classes for each computed model and for each treatment of the data. it is noticeable that the models run on raw data have the worst performances: the ner ranges from 0.74 (fine tree) to 0.90 (svm), indicating a high variability in the results, and for raw data only svm can be considered a satisfactory model for pattern recognition. lower variability in the results is observed for the pre-treated data and for the mixture of pre-treated and raw data, where the ner ranges from 0.96 to 0.99 and from 0.96 to 0.98, respectively. thus, there is no difference in the results between the pre-processed data and the combination of raw and pre-treated data. these results confirm that the feature reduction based on gaussian curves with rbf gives high performance for pattern recognition in machine learning analysis.

figure 8. machine learning. comparison of the non-error rate (ner) calculated from the confusion matrices for each model. results are presented for raw data, pre-treated data, and the combination of raw and pre-treated data.

in general, the model performance is comparable between the machine learning and multivariate analysis methods. after the random extraction of training and test data, repeated 500 and 200 times for chemometrics and machine learning, respectively, the ner calculated for the test set is above 0.95 for both approaches. however, chemometrics reduces the computational time compared to the computationally intensive machine learning algorithms.
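a compact sketch of this comparison with the hyperparameters stated in section 2.4 is given below. the multiclass wrapper (fitcecoc around a linear svm template) is an assumption, since the paper does not specify its multiclass strategy; the data are placeholders, and the ner is computed as the mean class sensitivity from the confusion matrix, as in [47].

% train the three stated classifiers and score one of them with the ner.
X = randn(200, 24);                           % placeholder features (e.g. rbf centres/sigmas)
y = repelem({'pet', 'pe', 'pp', 'ps'}, 50)';  % placeholder class labels

cv  = cvpartition(y, 'HoldOut', 0.25);        % 75 % training / 25 % test split
Xtr = X(training(cv), :);  ytr = y(training(cv));
Xte = X(test(cv), :);      yte = y(test(cv));

tree = fitctree(Xtr, ytr, 'SplitCriterion', 'gdi', 'MaxNumSplits', 100);
svm  = fitcecoc(Xtr, ytr, 'Learners', ...
       templateSVM('KernelFunction', 'linear', 'KernelScale', 3));
bag  = fitcensemble(Xtr, ytr, 'Method', 'Bag', 'NumLearningCycles', 30);

C   = confusionmat(yte, predict(svm, Xte));   % confusion matrix on the test set
ner = mean(diag(C) ./ sum(C, 2));             % non-error rate = mean class sensitivity
fprintf('svm ner on the test set: %.2f\n', ner);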
4. conclusion

this paper presented a side-by-side comparison between conventional chemometric methods and machine learning algorithms for the classification of a dataset obtained from the study of plastic waste with a portable near-infrared (nir) spectrometer. multivariate methods such as principal component analysis (pca) and partial least squares discriminant analysis (pls-da) were investigated, as well as machine learning methods such as support-vector machines (svm), fine tree, bagged tree, and ensemble learning. the results were also compared in terms of data processing: the signal pre-processing tools snv and savitzky-golay derivatives were compared with feature reduction approaches such as a multiple gaussian curve fit based on radial basis functions (rbf). in addition, the machine learning algorithms were run on raw data, pre-processed data, and the combination of the two.

the results from pls-da showed very high performance for pattern recognition; in fact, the ner for the training set, in cv, and for the test set are all equal to 0.99. in contrast, for machine learning, the ner for raw data ranges from 0.74 for fine tree to 0.90 for svm, indicating high variability in the results. the results for the pre-processed data show lower variability, with ner values ranging from 0.96 to 0.99, which also holds for the combination of raw and pre-processed data. this confirms that the rbf-based variable reduction is the most crucial point for improving classification performance. contrary to some results found in the literature, where the pre-treatment of data had a negative effect on accuracy using chemometrics [48], the pre-treatment of data generally improves the detection accuracy of machine learning techniques. we can conclude that the multivariate and machine learning approaches produce comparable results in terms of model performance: the ner estimated for the test set is above 0.95 for both chemometrics and machine learning after randomly extracting the training and test data 500 and 200 times, respectively. on the other hand, chemometrics is characterised by a lower computation time compared to the machine learning algorithms, and it can therefore be considered more advantageous.

references

[1] plastics europe, plastics europe website. online [accessed 25 june 2023] http://www.plasticseurope.org
[2] r. geyer, j. r. jambeck, k. l. law, production, use, and fate of all plastics ever made, science advances vol. 3 (2017) no. 7. doi: 10.1126/sciadv.1700782
[3] s. m. al-salem, p. lettieri, j. baeyens, recycling and recovery routes of plastic solid waste (psw): a review, waste management vol. 29 (2009) no. 10, pp. 2625–2643. doi: 10.1016/j.wasman.2009.06.004
[4] r. siddique, j. khatib, i. kaur, use of recycled plastic in concrete: a review, waste management vol. 28 (2008) no. 10, pp. 1835-1852. doi: 10.1016/j.wasman.2007.09.011
[5] j. hopewell, r. dvorak, e. kosior, plastics recycling: challenges and opportunities, philosophical transactions of the royal society b: biological sciences vol. 364 (2009) no. 1526, pp. 2115-2126. doi: 10.1098/rstb.2008.0311
[6] k. ragaert, l. delva, k. van geem, mechanical and chemical recycling of solid plastic waste, waste management vol. 69 (2017), pp. 24-58. doi: 10.1016/j.wasman.2017.07.044
[7] s. p. gundupalli, s. hait, a. a. thakur, a review on automated sorting of source-separated municipal solid waste for recycling, waste management vol. 60 (2017), pp. 56-74. doi: 10.1016/j.wasman.2016.09.015
[8] v. allen, j. h. kalivas, r. g. rodriguez, post-consumer plastic identification using raman spectroscopy, applied spectroscopy vol. 53 (1999) no. 6, pp. 672–681. doi: 10.1366/0003702991947324
[9] v. ludwig, z. da costa ludwig, m. m. rodrigues, v. anjos, c. batesttin costa, d. r. sant’anna das dores, v. r. da silva, f. soares, analysis by raman and infrared spectroscopy combined with theoretical studies on the identification of plasticizer in pvc films, vibrational spectroscopy vol. 98 (2018), pp. 134-138. doi: 10.1016/j.vibspec.2018.08.004
[10] o. rozenstein, e. puckrin, j. adamowski, development of a new approach based on midwave infrared spectroscopy for postconsumer black plastic waste sorting in the recycling industry, waste management vol. 68 (2017), pp. 38–44. doi: 10.1016/j.wasman.2017.07.023
[11] a. vázquez-guardado, m. money, n. mckinney, d. chanda, multi-spectral infrared spectroscopy for robust plastic identification, applied optics vol. 54 (2015) no. 24, pp. 7396-7405. doi: 10.1364/ao.54.007396
[12] y. tachwali, y. al-assaf, a. r. al-ali, automatic multistage classification system for plastic bottles recycling, resources, conservation and recycling vol. 52 (2007) no. 2, pp. 266–285. doi: 10.1016/j.resconrec.2007.03.008
[13] e. maris, a. aoussat, e. naffrechoux, d. froelich, polymer tracer detection systems with uv fluorescence spectrometry to improve product recyclability, minerals engineering vol. 29 (2012), pp. 77–88. doi: 10.1016/j.mineng.2011.09.016
[14] s. m. safavi, h. masoumi, s. mirian, m. tabrizchi, sorting of polypropylene resins by color in msw using visible reflectance spectroscopy, waste management vol. 30 (2010) no. 11, pp. 2216–2222. doi: 10.1016/j.wasman.2010.06.023
[15] s. brunner, p. fomin, c. kargel, automated sorting of polymer flakes: fluorescence labeling and development of a measurement system prototype, waste management vol. 38 (2015), pp. 49–60. doi: 10.1016/j.wasman.2014.12.006
[16] y. zheng, j. bai, j. xu, x. li, y. zhang, a discrimination model in waste plastics sorting using nir hyperspectral imaging system, waste management vol. 72 (2018), pp. 87–98. doi: 10.1016/j.wasman.2017.10.015
[17] m. vidal, a. gowen, j. m. amigo, nir hyperspectral imaging for plastics classification, nir news vol. 23 (2012) no. 1, pp. 13–15. doi: 10.1255/nirn.1285
[18] m. moroni, a. mei, a. leonardi, e. lupo, f. la marca, pet and pvc separation with hyperspectral imagery, sensors vol. 15 (2015) no. 1, pp. 2205–2227. doi: 10.3390/s150102205
[19] i. vollmer, m. j. f. jenks, m. c. p. roelands, r. j. white, t. van harmelen, g. p. van der laan, f. meirer, j. t. f. keurenjes, b. m. weckhuysen, beyond mechanical recycling: giving new life to plastic waste, angewandte chemie international edition vol. 59 (2020) no. 36, pp. 15402–15423. doi: 10.1002/anie.201915651
[20] m. rani, c. marchesi, s. federici, g. rovelli, i. alessandri, i. vassalini, s. ducoli, l. borgese, a. zacco, f. bilo, e. bontempi, l. e. depero, miniaturized near-infrared (micronir) spectrometer in plastic waste sorting, materials vol. 12 (2019) no. 17. doi: 10.3390/ma12172740
[21] r. junjuri, m. k. gundawar, a low-cost libs detection system combined with chemometrics for rapid identification of plastic waste, waste management vol. 117 (2021), pp. 48–57. doi: 10.1016/j.wasman.2020.07.046
[22] v. c. costa, f. w. b. aquino, c. m. paranhos, e. r. pereira-filho, identification and classification of polymer e-waste using laser-induced breakdown spectroscopy (libs) and chemometric tools, polymer testing vol. 59 (2017), pp. 390–395. doi: 10.1016/j.polymertesting.2017.02.017
[23] g. bonifazi, l. fiore, r. gasbarrone, p. hennebert, s. serranti, detection of brominated plastics from e-waste by short-wave infrared spectroscopy, recycling vol. 6 (2021) no. 3. doi: 10.3390/recycling6030054
[24] e. r. k. neo, z. yeo, j. s. c. low, v. goodship, k. debattista, a review on chemometric techniques with infrared, raman and laser-induced breakdown spectroscopy for sorting plastic waste in the recycling industry, resources, conservation and recycling vol. 180 (2022) 106217. doi: 10.1016/j.resconrec.2022.106217
[25] c. araujo-andrade, e. bugnicourt, l. philippet, l. rodriguez-turienzo, d. nettleton, l. hoffmann, m. schlummer, review on the photonic techniques suitable for automatic monitoring of the composition of multi-materials wastes in view of their posterior recycling, waste management & research vol. 39 (2021) no. 5, pp. 631–651. doi: 10.1177/0734242x21997908
[26] v. da silva, h. murphy, j. m. amigo, c. stedmon, j. strand, classification and quantification of microplastics (<100 μm) using a focal plane array-fourier transform infrared imaging system and machine learning, analytical chemistry vol. 92 (2020) no. 20, pp. 13724–13733. doi: 10.1021/acs.analchem.0c01324
[27] s. zhu, h. chen, m. wang, x. guo, y. lei, g. jin, plastic solid waste identification system based on near infrared spectroscopy in combination with support vector machine, advanced industrial and engineering polymer research vol. 2 (2019) no. 2, pp. 77–81. doi: 10.1016/j.aiepr.2019.04.001
[28] a. p. m. michel, a. e. morrison, v. l. preston, c. t. marx, b. c. colson, h. k. white, rapid identification of marine plastic debris via spectroscopic techniques and machine learning classifiers, environmental science & technology vol. 54 (2020) no. 17, pp. 10630–10637. doi: 10.1021/acs.est.0c02099
[29] y. yang, w. zhang, z. wang, y. li, differentiation of plastics by combining raman spectroscopy and machine learning, journal of applied spectroscopy vol. 89 (2022) no. 4, pp. 790–798. doi: 10.1007/s10812-022-01426-1
[30] b. carrera, v. l. piñol, j. b. mata, k. kim, a machine learning based classification models for plastic recycling using different wavelength range spectrums, journal of cleaner production vol. 374 (2022) 133883. doi: 10.1016/j.jclepro.2022.133883
[31] d. covarrubias-martínez, h. lobato-morales, j. m. ramírez-cortés, g. a. álvarez-botero, classification of plastic materials using machine-learning algorithms and microwave resonant sensor, journal of electromagnetic waves and applications vol. 36 (2022) no. 12, pp. 1760–1775. doi: 10.1080/09205071.2022.2043192
bar-ziv, accurate characterization of mixed plastic waste using machine learning and fast infrared spectroscopy, acs sustainable chemistry & engineering vol. 9 (2021) no. 42, pp. 14143-14151. doi: 10.1021/acssuschemeng.1c04281 [33] a, rinnan, f. van den berg, s. b. engelsen, review of the most common pre-processing techniques for near-infrared spectra, trac trends in analytical chemistry vol. 28 (2009) no. 10, pp. 1201–1222. doi: 10.1016/j.trac.2009.07.007 [34] p. oliveri, c. malegori, r. simonetti, m. casale, the impact of signal pre-processing on the final interpretation of analytical outcomes – a tutorial, analytica chimica acta vol. 1058 (2019), pp. 9–17. doi: 10.1016/j.aca.2018.10.055 [35] v. m. taavitsainen, denoising and signal-to-noise ratio enhancement: derivatives. in comprehensive chemometrics, eds. brown, s. d., tauler, r. & walczak, b., pp. 57–66, elsevier, 2009, isbn 978-0-444-52701-1. doi: 10.1016/b978-044452701-1.00101-0 [36] r. j. barnes, m. s. dhanoa, s. j. lister, standard normal variate transformation and de-trending of near-infrared diffuse reflectance spectra, applied spectroscopy vol. 43 (1989) no. 5, pp. 772–777. doi: 10.1366/000370289420220 [37] m. m. li, the development of a nonlinear curve fitter using rbf neural networks with hybrid neurons. lecture notes in computer science, part of 13th international symposium on neural networks, isnn 2016, st. petersburg, russia, july 6-8, 2016, proceedings vol. 9719, 2016, pp. 434–443. doi: 10.1007/978-3-319-40663-3_50 [38] j. frédéric bonnans, j. charles gilbert, c. lemaréchal, c. a. sagastizábal, numerical optimization: theoretical and practical aspects, springer berlin heidelberg, 2006, isbn 978-3-54035447-5. doi: 10.1007/978-3-540-35447-5 [39] m. blanco, m. i. villarroya, nir spectroscopy: a rapid-response analytical tool, trac trends in analytical chemistry vol. 21 (2002) no. 4, pp. 240–250. doi: 10.1016/s0165-9936(02)00404-1 [40] j. workman jr., l. weyer, practical guide and spectral atlas for interpretive near-infrared spectroscopy, crc press, boca raton, 2006, isbn 9781439875254. doi: 10.1201/b11894 [41] e. w. crandall, a. n. jagtap, the near‐infrared spectra of polymers. journal of applied polymer science vol. 21 (1977), pp. 449–454. doi: 10.1002/app.1977.070210211 [42] j. workman jr., the handbook of organic compounds, nir, ir, r, and uv-vis spectra featuring polymers and surfactants, academic press, 2001, isbn 13: 9780127635613. doi: 10.1016/b978-0-12-763560-6.x5000-4 [43] s. zhu, z. song, s. shi, m. wang, m. g. jin, fusion of nearinfrared and raman spectroscopy for in-line measurement of component content of molten polymer blends, sensors vol. 19 (2019) no.16, 3643. doi: 10.3390/s19163463 [44] d. ballabio, a matlab toolbox for principal component analysis and unsupervised exploration of data structure, chemometrics and intelligent laboratory systems vol. 149 (2015), pp. 1–9. doi: 10.1016/j.chemolab.2015.10.003 [45] d. ballabio, v. consonni, classification tools in chemistry. part 1: linear models. pls-da, analytical methods vol. 5 (2013), pp. 3790–3798. doi: 10.1039/c3ay40582f [46] r. g. brereton, g.r. lloyd, partial least squares discriminant analysis: taking the magic away, journal of chemometrics vol. 28 (2014), pp. 213–225. doi: 10.1002/cem.2609 [47] d. ballabio, f. grisoni, r. todeschini, multivariate comparison of classification performance measures, chemometrics and intelligent laboratory systems vol. 174 (2018), pp. 33–44. doi: 10.1016/j.chemolab.2017.12.004 [48] p. mishra, d. n. rutledge, j. m. roger, k. 
acta imeko issn: 2221-870x june 2015, volume 4, number 2, 62-67
defect detection in stainless steel tubes with amr and gmr sensors using remote field eddy current inspection
dario j. l. pasadas, a. lopes ribeiro, helena g. ramos, tiago j. rocha
instituto de telecomunicações and instituto superior técnico/universidade técnica de lisboa, av. rovisco pais 1, 1049-001 lisbon, portugal
section: research paper
keywords: eddy currents; remote field testing; tube inspection; amr sensor; gmr sensor
citation: dario j. l. pasadas, a. lopes ribeiro, helena g. ramos, tiago j. rocha, defect detection in stainless steel tubes with amr and gmr sensors using remote field eddy current inspection, acta imeko, vol. 4, no. 2, article 11, june 2015, identifier: imeko-acta-04 (2015)-02-11
editor: paolo carbone, university of perugia, italy
received november 26, 2014; in final form march 9, 2015; published june 2015
copyright: © 2015 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: instituto de telecomunicações project evaltubes and projects uid/eea/50008/2013, sfrh/bd/81856/2011 and sfrh/bd/81857/2011 of the portuguese science and technology foundation (fct)
corresponding author: dario j. l. pasadas, e-mail: dpasadas@lx.it.pt
abstract: the purpose of this paper is to compare the performance of giant magneto-resistor (gmr) and anisotropic magneto-resistor (amr) sensors for remote field eddy current testing in stainless steel tubes. two remote field eddy current probes were built to compare detection and characterization capabilities on standard defects such as longitudinal and transverse defects. both probes include a coil to produce a sinusoidal magnetic field that penetrates the tube wall. each probe includes a detector with a gmr or an amr sensor, where each sensor has four magneto-resistive elements configured in a wheatstone bridge. each sensor needs to be biased differently to operate in the high-sensitivity linear mode. the measurement system used to detect defects is described in this paper. for the choice of the detector's optimal position, numerical simulations and experimental measurements were performed. to compare these sensors in defect detection using remote field eddy current testing, experimental measurements were performed under the same conditions. the results are presented and discussed in this paper.
1. introduction
remote field eddy current (rfec) testing is currently applied in the non-destructive evaluation (nde) of metallic tubes [1]-[3]. this testing method is a special case of eddy current testing (ect), currently used to evaluate the metal thickness [4], [5] and to detect defects in the material [5], [6]. rfec testing is an electromagnetic technique that allows the inspection of discontinuities in the metallic materials under test. remote field probes have equal sensitivity to internal and external tube defects, and the phase shift is directly proportional to the wall metal loss. detection, localization and characterization [7], [8] of defects are the three basic activities in the ect field. the achievement of these three inspection actions continues to be studied by several researchers in order to improve the systems currently used in industrial maintenance.
the development of high-performance computers, simulation tools, advanced signal processing and new electromagnetic sensors is attracting many scientists to the nde field. the remote field eddy current technique requires one coil excited with a time-varying current to produce a magnetic field that penetrates the tube wall under test. the excitation can be sinusoidal [9] or pulsed [10]; in this article the excitation current is sinusoidal with constant amplitude. the magnetic field produced by the excitation coil induces currents in the tube wall, called eddy currents. the field diffusion along the tube wall is the basis of the through-wall indirect technique [11]. when a material with a defect is analysed, the eddy current flow changes around the defect and so does the magnetic field it produces. this magnetic field perturbation is measured with a magnetic detector in order to evaluate the defect features. nowadays, small pick-up coils are the most widely used magnetic detectors. recently, other magnetic detectors with improved characteristics were introduced, such as hall effect sensors [12], [13] and magneto-resistive sensors like anisotropic magneto-resistors (amr) [14], [15] or giant magneto-resistors (gmr) [16], [17]. table 1 summarizes the specifications of the magnetic sensors mentioned above. in this paper, special attention has been given to the amr and gmr sensors because of their advantages over coil-based magnetic sensors. the directional characteristics, high sensitivity with linear response and large bandwidth (dc to 1 mhz) provided by these sensors make them excellent candidates for defect detection applications in steel tubes.
2. characteristics of the tube sample under test
the inspected material is a tube sample of austenitic stainless steel (aisi 304) with an internal diameter of 26 mm and an external diameter of 28 mm. the magnetic permeability of this material is equal to that of free space, $\mu = 4\pi \times 10^{-7}\,\mathrm{H/m}$, and the electric conductivity is equal to $1.4\,\mathrm{MS/m}$. this type of steel is non-ferromagnetic and is used in several industries due to its good resistance to corrosion. some typical standard defects, namely longitudinal and transverse cracks, were machined into the tube sample in order to test the defect detection with the amr and gmr sensors.
figure 1 presents the two types of defects analysed in this paper.
3. experimental setup
the experimental setup used in this work is illustrated in figure 2. a single-axis positioning system is used to move the probe along the inner tube wall in steps of 0.5 mm. this positioning system is controlled by the personal computer through an rs232 interface. a data acquisition board (pxi-6251) included in a pxi system from national instruments (ni) measures the output voltage (from the gmr or amr sensor), amplified by an instrumentation amplifier (ina118) with 40 db of gain, and the voltage across the current sampling resistor rs equal to 0.22 ω. this resistor rs is used to monitor the amplitude and phase of the excitation current. the data acquisition board has 16-bit resolution and a maximum sampling rate of 1.25 ms/s per channel. the excitation coil current was sinusoidal and generated by a fluke 5700a calibrator controlled through the gpib interface. the pxi system was controlled by a personal computer running a matlab program. at each step of the probe movement, the sensor sinusoidal signal was acquired during 100 periods of the excitation current at the maximum sampling frequency of the board. for each measurement point, a three-parameter sine-fitting algorithm was used to extract the parameters of the sinusoidal sensor output and of the excitation current (a sketch of such a fit is given at the end of this section). the amplitude of the sensor signal and the phase difference between the sensor signal and the excitation current are the parameters estimated in this work. the excitation frequency was 5 khz. it takes 20 ms to acquire the signal at each point of the scan, but it is necessary to wait 0.5 s at each point to stabilize the probe.
3.1. amr sensor
the amr sensor used in this work is the hmc 1021z from honeywell. this sensor has a single sensing axis with high sensitivity (1 mv/v/gauss) and a field range of 6 gauss. the sensing axis is oriented along the longitudinal x-axis of the tube. the hmc 1021z sensor is configured as a wheatstone bridge with four magneto-resistive elements. the power supply applied to the bridge was 12 v. a set/reset drive circuit providing pulses of electrical current was used to force the sensor to operate in the high-sensitivity, linear mode. this current pulse was applied before each data measurement.
3.2. gmr sensor
the gmr sensor is the aa002-02 from non volatile electronics. this sensor includes four magneto-resistive elements with gmr technology configured in a wheatstone bridge, where two gmr elements are shielded and work as passive resistors, while the other two are gmr sensing elements whose resistance changes linearly with the variation of the magnetic field. the bridge was powered by a 12 v supply. due to the output characteristic of the gmr sensor, a small permanent magnet was placed close to the sensor to bias it in the high-sensitivity, linear mode.
table 1. magnetic sensor characteristics.
type of sensor | sensitive to | magnetic field range | frequency range
coil | dΦ/dt | 1 nT to more than 10 T | 3 kHz to more than 5 MHz
hall | B | 1 mT to 10 T | dc to 10 kHz
amr | H | 10 µT to 1 T | dc to 1 MHz
gmr | H | 10 µT to 10 T | dc to 1 MHz
figure 1. representation of the standard defects tested in this work.
figure 2. representation of the experimental setup.
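the paper specifies a three-parameter sine fit for each measurement point but does not give its implementation. below is a minimal numpy sketch of the standard least-squares fit at the known excitation frequency (ieee 1057 style); the sampling parameters mirror those quoted above, while the signal amplitudes and noise level are invented for illustration.

```python
import numpy as np

def sine_fit_3param(t, y, f):
    """three-parameter sine fit at known frequency f: solve
    y ~ A*cos(2*pi*f*t) + B*sin(2*pi*f*t) + C in the least-squares sense
    and return amplitude, phase and offset."""
    w = 2 * np.pi * f
    M = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    (A, B, C), *_ = np.linalg.lstsq(M, y, rcond=None)
    amplitude = np.hypot(A, B)
    phase = np.arctan2(-B, A)      # y ~ amplitude*cos(w*t + phase) + C
    return amplitude, phase, C

# hypothetical signals: fs = 1.25 MS/s, f_exc = 5 kHz, 100 excitation periods
fs, f_exc = 1.25e6, 5e3
t = np.arange(int(100 * fs / f_exc)) / fs
u_sensor = 0.05 * np.cos(2*np.pi*f_exc*t - 0.4) + 1e-3*np.random.randn(t.size)
i_exc = 0.15 * np.cos(2*np.pi*f_exc*t)
a_u, p_u, _ = sine_fit_3param(t, u_sensor, f_exc)
a_i, p_i, _ = sine_fit_3param(t, i_exc, f_exc)
phase_diff = p_u - p_i             # the quantity plotted in the phase figures
```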
4. choice of the optimal position for the detector
the design of the probe for remote field eddy current inspection is very important for detection optimization. first of all, it is necessary to determine the optimum distance between the magnetic sensor and the excitation coil. for this purpose, a commercial finite element program was used to study the remote field eddy current phenomenon along the inner wall of the aisi 304 stainless steel tube. the simulation was made applying an excitation current of 150 ma at 5 khz to the coil. the coil was placed in a fixed position inside the tube, as depicted in figure 3. the magnetic field intensity along the inner tube wall was obtained and is presented in figure 4. the operating frequency was chosen considering the tube thickness and its relation to the standard depth of penetration (see the sketch at the end of this section): at this frequency the magnetic field diffuses easily to the outside of the stainless steel tube with its wall thickness of 2 mm. when the rfec technique is applied, three operating zones exist. observing figure 4 with the signal along the inner wall, the three operating zones are visible; they are called the direct zone (0 mm < x < 0.3 mm), the transition zone (0.3 mm < x < 5 mm) and the remote zone (x > 5 mm). as the direct zone is close to the excitation coil, the magnetic field produced by the eddy currents is dominated by the strong influence of the field produced by the excitation coil, which does not allow the detection of the defect. the remote zone is the region where the magnetic field is dominated by the field produced by the eddy currents and the field intensity decreases exponentially. when a defect is present in the tube wall, the perturbation of the magnetic field produced by the deviation of the eddy currents around the defect can be sensed by a magnetic sensor placed in that zone. the transition zone visible in figure 4 corresponds to the region where the magnetic field produced by the eddy currents begins to overlap the excitation magnetic field and creates a minimum (x = 1 mm) in the total magnetic field amplitude due to their opposing phases. experimental results were obtained by fixing the excitation coil inside the tube and moving the amr and gmr sensors along the inner wall, in order to visualize the rfec phenomenon in the aisi 304 stainless steel tube experimentally, to validate the model, and to choose the optimal position of the sensor. figure 5 depicts the magnetic field intensity along the inner tube wall obtained experimentally with the amr and gmr sensors. the direct and transition zones are not visible because the size of the sensor package does not allow measuring the magnetic field close to the excitation coil. however, it is possible to observe the final part of the transition zone, which corresponds to the maximum amplitude in figure 5. the information given in figure 5 shows that the distance between the excitation coil and the detector must be greater than 3 mm to ensure that the measured field is inside the remote zone. the difference between the amr and gmr sensor outputs is attributed to the different distances between the detectors and the tube wall: the gmr paths are closer to the tube wall due to the sensor's smaller size.
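the standard depth of penetration mentioned above is not evaluated numerically in the text; the following sketch applies the textbook formula $\delta = 1/\sqrt{\pi f \mu \sigma}$ with the material constants of section 2. the result, about 6 mm at 5 khz, is well above the 2 mm wall thickness and is consistent with the statement that the field diffuses easily through the wall.

```python
import numpy as np

MU0 = 4e-7 * np.pi   # permeability of the tube, H/m (aisi 304 is non-ferromagnetic)
SIGMA = 1.4e6        # electric conductivity of the tube, S/m (1.4 MS/m)

def skin_depth(f, mu=MU0, sigma=SIGMA):
    """standard depth of penetration delta = 1/sqrt(pi*f*mu*sigma)."""
    return 1.0 / np.sqrt(np.pi * f * mu * sigma)

delta = skin_depth(5e3)           # ~6 mm at the 5 kHz excitation frequency
print(f"{delta*1e3:.1f} mm")      # compare with the 2 mm wall thickness
```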
5. experimental results in defect detection
two probes were mounted and tested for defect detection in the stainless steel tubes. in one probe the magnetic sensor is used to detect longitudinal defects in the tube wall, and in the other one transverse defects. figure 6 shows photographs of both probes. to ensure that the detectors are in the remote zone, the distance between the excitation coil and the detectors was chosen to be 15 mm.
figure 3. illustration of the experimental geometry including the sensor's path to obtain simulation results.
figure 4. simulated magnetic field obtained with the finite element model.
figure 5. magnetic field line along the tube wall obtained with the experimental setup.
figure 6. photographs of the two mounted probes: (a) amr probe with the hmc 1021z sensor; (b) gmr probe with the aa002-02 sensor from nve.
for testing purposes, a longitudinal defect and a transverse defect, both 10 mm in length, were scanned using the experimental setup with each probe. the first experimental test was made by moving the amr probe along the inner tube wall in the sample that has a longitudinal defect. a sinusoidal current with an amplitude equal to 150 ma was applied to the excitation coil. the second test was scanning the same defect with the gmr probe. the output amplitudes of both sensors are depicted in figure 7. the phase difference between the excitation current and each detector signal was also measured and is depicted in figure 8. it should be noted that the sensing axes of both sensors were directed along the longitudinal direction, close to the inner tube wall. observing figures 7 and 8, an amplitude perturbation and a phase perturbation are visible when the probes passed close to the longitudinal defect. the perturbation occurred between the positions x = 45 mm and x = 55 mm and corresponds to the defect position when the detector passed the defect. the defect zone is denoted by two orange lines. observing the output amplitudes of the sensors depicted in figure 7, a perturbation is present, but it is not possible to conclude anything about the geometry of the defect. also, the amplitude signals contain noise due to the lift-off effect (distance between sensor and tube wall). observing the phase perturbation depicted in figure 8, the defect zone is clearly identified by a negative phase peak, matching approximately the real length of the longitudinal defect. the unexpected perturbation represented in figure 7 and figure 8 between x = 20 mm and x = 40 mm is caused by the passage of the excitation coil under the defect, changing the eddy current amplitudes. the same experimental test was made for both probes applying a sinusoidal current with an amplitude of 50 ma to the excitation coil. figures 9 and 10 show the amplitude perturbation and phase perturbation when the probes passed close to the defect. in figure 9 the attenuation of the amplitude perturbation in both output signals is visible. however, figure 10 shows that the phase perturbation continues to be present between x = 45 mm and x = 55 mm, with approximately the same variation shown in figure 8 for a different excitation amplitude. this shows that the phase difference between the excitation current and the detector signal can be useful to detect and characterize defects, yielding approximately the same information for different excitation current amplitudes. the same comparison was made analysing the transverse defect. the results for the phase perturbation applying a sinusoidal current amplitude of 150 ma and 50 ma are depicted in figure 11 and figure 12, respectively. the defect position is marked with an orange line.
figures 11 and 12 show that the defect position is clearly identified by a negative peak for each probe. the unexpected perturbation present when the longitudinal defect was scanned does not appear when the excitation coil crosses the defect position. this is due to the circumferential nature of the excitation current, the eddy current direction being parallel to the defect and to the sensing axis of the detectors. it is also visible that the phase perturbation obtained between the excitation current and each sensor signal is maintained when a sinusoidal current amplitude of 150 ma or 50 ma is applied to the excitation coil.
figure 7. output amplitudes (in tesla units) obtained with the amr and gmr sensors for detection of a longitudinal defect, applying a sinusoidal current with amplitude 150 ma to the excitation coil.
figure 8. phase difference between the excitation current and each sensor signal for defect detection of a longitudinal defect, applying a sinusoidal current with amplitude 150 ma to the excitation coil.
figure 9. output amplitudes (in tesla units) obtained with the amr and gmr sensors for detection of a longitudinal defect, applying a sinusoidal current with amplitude 50 ma to the excitation coil.
figure 10. phase difference between the excitation current and each sensor signal for defect detection of a longitudinal defect, applying a sinusoidal current with amplitude 50 ma to the excitation coil.
6. conclusions
the probes presented in this paper proved that amr and gmr sensors used as detectors are good alternatives to conventional remote field eddy current probes that use coils as detectors. the main difference between these magneto-resistive sensors and coils is the possibility to use low-frequency tests that allow an easy magnetic field penetration into thick-wall tubes. when coils are used, it is usually necessary to increase the frequency of operation to obtain good signal-to-noise ratios. experimental results proved that these sensors are able to detect defects in stainless steel tubes using remote field eddy current testing. the directional characteristics and high sensitivity with linear response over a large bandwidth (dc to 1 mhz) provided by these sensors can be useful to detect subsurface defects at lower frequencies, where coil detectors cannot be used. the results clearly show that the gmr sensor is more sensitive to the magnetic field than the amr sensor. the results also show that the phase perturbation contains clearer information about the defect presence than the amplitude perturbation. furthermore, the phase perturbation remained unchanged for both 150 ma and 50 ma of excitation current, while the amplitude perturbation decreased with decreasing excitation current. this means that when the excitation current amplitude decreases, the amplitude output signal of both sensors becomes difficult to measure; however, the phase still contains useful information caused by the defect presence. note that the chosen distance between the excitation coil and the detector is important to ensure that the measured field perturbation of the eddy currents is strong enough to detect defects with minimal signal attenuation.
as future work we consider testing these sensors on stainless steel tubes with larger thicknesses and using lower frequencies in order to increase the penetration depth of the eddy currents, and applying this method to other industrial applications. another important conclusion is related to the accuracy of the measurement of the geometrical sizes of the defects. taking the phase measurements into consideration, it is possible to conclude that we always obtained uncertainties under 5 mm for longitudinal defects. this quantity refers to the positions of the defect edges. for transverse circular defects the uncertainty is much lower, especially if the defect is perfectly circularly oriented.
figure 11. phase difference between the excitation current and each sensor signal for defect detection of a transverse defect, applying a sinusoidal current with amplitude 150 ma to the excitation coil.
figure 12. phase difference between the excitation current and each sensor signal for defect detection of a transverse defect, applying a sinusoidal current with amplitude 50 ma to the excitation coil.
acknowledgement
this work was developed under the instituto de telecomunicações project evaltubes and supported by the projects uid/eea/50008/2013, sfrh/bd/81856/2011 and sfrh/bd/81857/2011 of the portuguese science and technology foundation (fct). this support is gratefully acknowledged.
references
[1] y-j kim, s-s lee, "eddy current probes of inclined coils for increased detectability of circumferential cracks in tubing", ndt & e international, july, 2012, vol. 49, pp. 77-82.
[2] d. h. hur, m. s. choi, d. h. lee, s. j. kim, j. h. han, "a case study on detection and sizing of defects in steam generator tubes using eddy current testing", nuclear engineering and design, january, 2010, vol. 240, no. 1, pp. 204-208.
[3] j. bruce nestleroth, richard j. davis, "application of eddy currents induced by permanent magnets for pipeline inspection", ndt & e international, january, 2007, vol. 40, no. 1, pp. 77-84.
[4] helena g. ramos, tiago rocha, jakub král, dario pasadas, artur l. ribeiro, "an svm approach with electromagnetic methods to assess metal plate thickness", measurement, august, 2014, vol. 54, pp. 201-206.
[5] tianlu chen, gui yun tian, ali sophian, pei wen que, "feature extraction and selection for defect classification of pulsed eddy current ndt", ndt & e international, september, 2008, vol. 41, no. 6, pp. 467-476.
[6] d. pasadas, t. rocha, h. g. ramos, a. l. ribeiro, "evaluation of portable ect instruments with positioning capability", measurement, january, 2012, vol. 45, no. 1, pp. 393-404.
[7] d. kim, l. udpa, s. udpa, "remote field eddy current testing for detection of stress corrosion cracks in gas transmission pipelines", materials letters, june, 2004, vol. 58, no. 15, pp. 2101-2104.
[8] dario pasadas, tiago rocha, artur l. ribeiro, helena g. ramos, "ect characterization of a linear defect from multiple angle measurements", 19th imeko tc4 symposium and 17th iwadc workshop on advances in instrumentation and sensors interoperability, barcelona, spain, july 18-19, 2013, pp. 218-221.
[9] a. l. ribeiro, h. g. ramos, j. gonçalves, "thickness measurement of a nonmagnetic metallic plate using harmonic eddy current excitation and a gmr sensor", proc. imeko tc4 symp., natal, brazil, september, 2011, pp. 1-6.
[10] b. yang, x. li, "pulsed remote field technique used for nondestructive inspection of ferromagnetic tube", ndt & e international, january, 2013, vol. 53, pp. 47-52.
[11] d. l. atherton, s. sullivan, "the remote-field through-wall electromagnetic inspection technique for pressure tubes", materials evaluation, december, 1986, vol. 44, no. 13, pp. 1544-1550.
[12] y. he, f. luo, m. pan, f. weng, x. hu, j. gao, b. lui, "pulse eddy current technique for defect detection in aircraft riveted structures", ndt & e international, march, 2010, vol. 43, no. 2, pp. 176-181.
[13] k. kosmas, c. sargentis, d. tsamakis, e. hristoforou, "non-destructive evaluation of magnetic metallic materials using hall sensors", sensors and actuators a: physical, 2005, vol. 161, no. 1-2, pp. 359-362.
[14] d. j. pasadas, a. lopes ribeiro, t. j. rocha, h. g. ramos, "remote field eddy current ndt in tubes using anisotropic magneto-resistors", proc. 13th imeko tc10 workshop on technical diagnostics: advanced measurement tools in technical diagnostics for systems reliability and safety, warsaw, poland, june 26-27, 2014, pp. 60-64.
[15] a. jander, c. smith, r. shneider, "multi-frequency ect with amr sensor", ndt & e international, september, 2011, vol. 44, no. 5, pp. 438-441.
[16] l. kufrin, o. postolache, a. l. ribeiro, h. g. ramos, "image analysis for crack detection", proc. ieee international instrumentation and technology conf., austin, united states, may, 2010, vol. 1, pp. 1096-1100.
[17] j. h. espina-hernández, e. ramírez-pacheco, f. caleyo, j. a. pérez-benitez, j. m. hallen, "rapid estimation of artificial near-side crack dimensions in aluminium using a gmr-based eddy current sensor", ndt & e international, october, 2012, vol. 51, pp. 94-100.
acta imeko july 2012, volume 1, number 1, 4 www.imeko.org
editorial: the revival of acta imeko
paul p.l. regtien
measurement science consultancy, julia culpstraat 66, 7558jb hengelo, the netherlands
keywords: journal; imeko
citation: paul p.l. regtien, editorial, acta imeko, vol. 1, no. 1, article 4, july 2012, identifier: imeko-acta-01(2012)-01-04
editor: paul regtien, measurement science consultancy, the netherlands
copyright: © 2012 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: paul p.l. regtien, e-mail: paul@regtien.net
the first issue of the official publication of imeko was published in 1958, the year that imeko was founded. the title of this issue was simply "acta imeko 1958", with the subtitle "proceedings of the international measurements conference" printed in 5 languages on the second page (figure 1). the publication was issued by the hungarian sci. soc. for measurement & automation. that issue contains the papers presented at the first imeko world congress, which has since been organized every three years. for many years, the contributions to imeko world congresses were published in acta imeko. later, the proceedings of the world congresses were issued by the organizers as proceedings, and the name acta imeko disappeared from these printed versions. in the last decades, the printed proceedings were gradually replaced by proceedings on cd-roms, and more recently on memory sticks. the proceedings of the tc events followed the same development.
selected papers from world congresses and tc events were published in a newly established imeko journal, "measurement", whose first issue came out in 1983. this journal gradually evolved into an international journal published by elsevier, with scientific articles not only from imeko events but from scientists all over the world working in the realm of measurement and instrumentation. the restricted number of issues per year prevents publication of the many qualified contributions to imeko workshops, symposia and congresses. this was a major reason for the general council of imeko to decide in 2009 to publish an e-journal. its main goal is the enhancement of the academic activities of imeko and a wider dissemination of scientific output from tc events. starting a new journal is not just a matter of pushing a button. i am much indebted to the persons who assisted with the preparatory work. special thanks go to dirk röske, who is supervising the electronic platform underpinning this journal, pedro ramos, who acted as the first guest editor and in this capacity had to break fresh ground, and francisco alegria, who designed the journal's template and acted as the layout editor for all articles in this issue. without their help it would not have been possible to create this first electronic version of acta imeko as a modern successor of the original printed versions. hopefully this first special issue will soon be followed by many others presenting selected papers from future imeko events to the world-wide community of measurement scientists and engineers.
figure 1. image of the second page of the first issue of acta imeko, 1958.
acta imeko issn: 2221-870x september 2015, volume 4, number 3, 47-52
broadband corbino spectroscopy and stripline resonators to study the microwave properties of superconductors
marc scheffler 1, m. maximilian felger 1, markus thiemann 1, daniel hafner 1, katrin schlegel 1, martin dressel 1, konstantin s. ilin 2, michael siegel 2, silvia seiro 3, christoph geibel 3, frank steglich 3
1 1. physikalisches institut, universität stuttgart, pfaffenwaldring 57, 70569 stuttgart, germany
2 institut für mikro- und nanoelektronische systeme (ims), kit, hertzstraße 16, 76187 karlsruhe, germany
3 max planck institute for chemical physics of solids, nöthnitzer straße 40, 01187 dresden, germany
section: research paper
keywords: superconductivity; microwaves; optical response; microwave spectroscopy
citation: marc scheffler, m. maximilian felger, markus thiemann, daniel hafner, katrin schlegel, martin dressel, konstantin s. ilin, michael siegel, silvia seiro, christoph geibel, frank steglich, broadband corbino spectroscopy and stripline resonators to study the microwave properties of superconductors, acta imeko, vol. 4, no. 3, article 8, september 2015, identifier: imeko-acta-04 (2015)-03-08
editor: paolo carbone, university of perugia, italy
received february 11, 2015; in final form april 9, 2015; published september 2015
copyright: © 2015 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was supported by deutsche forschungsgemeinschaft (dfg)
corresponding author: marc scheffler, e-mail: scheffl@pi1.physik-uni-stuttgart.de
abstract: superconducting materials are of great interest both for the fundamental understanding of electrons in solids as well as for a range of different applications. studying superconductors with microwaves offers direct experimental access to the electrodynamic response of these materials, which in turn can reveal fundamental material properties such as the superconducting penetration depth. here we describe two different techniques to study superconductors at microwave frequencies: the broadband corbino approach can cover frequencies from the mhz range up to 50 ghz continuously but is limited to thin-film samples, whereas the stripline resonators are sensitive enough to study low-loss single crystals but reveal data only at a set of roughly equidistant resonant frequencies. we document the applicability of these two techniques with data taken on an ultrathin tan film and a single crystal of the heavy-fermion superconductor cecu2si2, respectively.
1. introduction
superconductivity belongs to the most intensively studied phenomena of solid state physics. this research is driven both by the interest to gain fundamental understanding of the electronic properties of solids and by the aim to establish new applications of superconductors or to improve existing ones. both fields require experimental techniques to measure the characteristic material properties of a superconductor. while certain "static" approaches like measuring the dc resistivity, the dc magnetization, or the specific heat are routine in the material development and characterization of superconductors, electrodynamic experiments are much less common. in this contribution, we first motivate why microwave studies are particularly helpful in superconductivity research and then we present two complementary techniques, the broadband corbino approach and multi-frequency stripline resonators, that we have implemented successfully to study different types of superconductors using microwaves.
1.1. optical response of superconductors
though microwave experiments are technically performed in a very different manner compared to conventional optics, the actual outcome is rather similar: one probes the electrodynamics of the sample under study. therefore, we briefly review the optical response of superconductors with a special focus on microwave frequencies [1], [2]. the crucial energy scale of the optical response of a superconductor is the superconducting energy gap $\Delta$, which in the case of a weak-coupling bcs superconductor is related to the critical temperature $T_c$ via $2\Delta = 3.53\,k_B T_c$. with typical $T_c$ of the order of a few k, the energy gap is found in the range of µev to mev, i.e. up to thz frequencies (a numerical example is sketched below). this means that at microwave frequencies one operates well within the superconducting energy gap and therefore probes the response of the superconducting ground state of the material. at these frequencies, two types of charge carriers contribute to the electrodynamics: firstly, there are the cooper pairs that constitute the superconducting condensate, the fundamental manifestation of superconductivity. secondly, there are thermally excited quasiparticles that are lossy (i.e. ohmic with respect to conduction) like conventional metallic electrons, but that do not show up in dc resistance measurements because they are short-circuited by the loss-less cooper pairs.
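as a worked example of the energy scales just quoted, the sketch below evaluates the weak-coupling relation $2\Delta = 3.53\,k_B T_c$ and converts it into a photon frequency; the chosen tc of 8.9 k is that of the tan film discussed in section 3, and the constants are exact si values.

```python
KB = 1.380649e-23     # boltzmann constant, J/K
H = 6.62607015e-34    # planck constant, J s
E = 1.602176634e-19   # elementary charge, C

def gap_frequency(Tc):
    """weak-coupling bcs gap 2*delta = 3.53*kB*Tc, as photon frequency and energy."""
    e_gap = 3.53 * KB * Tc          # gap energy in joules
    return e_gap / H, e_gap / E * 1e3   # (frequency in Hz, energy in meV)

f_gap, e_mev = gap_frequency(8.9)   # tc of the tan film of section 3
print(f"2*delta ~ {e_mev:.1f} mev ~ {f_gap/1e12:.2f} thz")   # ~2.7 meV, ~0.65 THz
```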
microwave measurements probe both kinds of electrons: the quasiparticles determine the real part $\sigma_1$ of the complex conductivity $\sigma = \sigma_1 + i\sigma_2$, whereas the cooper pairs dominate the imaginary part $\sigma_2$. in particular, $\sigma_2$ exhibits a $1/\omega$ frequency dependence ($\omega = 2\pi f$), which can be explained via kramers-kronig transformation of the $\delta(\omega)$-peak of the cooper pairs in $\sigma_1$. the magnitude of the $1/\omega$ dependence directly depends on the cooper pair density, which in turn is related to the superconducting penetration depth.
1.2. general aspects of optical and microwave measurements
optical measurements are contact-free and usually only demand a sample significantly bigger than the wavelength, i.e. typically of mm size and with at least one flat surface. compared to other spectroscopic techniques, these requirements can often be met rather easily. however, in the case of conducting samples, the crucial question is whether the sensitivity of a particular optical technique is sufficient to resolve the features of interest. in particular for superconductors, this is a stringent restriction, as the optical reflectivity of a bulk superconductor is very close to unity and is therefore extremely difficult to study even with state-of-the-art optical spectrometers of the relevant spectral range, i.e. typically at thz and/or infrared frequencies. at microwave frequencies, the situation is in principle even more difficult, as the lower frequency of the employed radiation directly means a reflectivity yet closer to unity. however, because of the rather long wavelength of microwave radiation, typically of order cm, the employed experimental techniques do not follow optical routines, but are intrinsically different ones where the sample as well as the guides for the microwave radiation are smaller than the wavelength. i.e. although the fundamental physics of the microwave response of superconductors follows the theory of "optics of superconductors", the experiments are rather different from a conventional optics approach.
2. microwave measurements on superconductors
a very well established and very versatile technique to study the microwave properties of conducting materials is cavity resonators [3]. here, a three-dimensional conducting cavity has dimensions of order of the wavelength, and the sample under study acts as a small perturbation. the major disadvantages of this technique are that it is usually operated only at a single frequency (i.e. one does not gain any spectral information) and that it is difficult to obtain quantitative information due to the in general poorly defined geometry (whereas relative information, e.g. as a function of temperature, is obtained easily). therefore, different new experimental techniques have been devised. these can be roughly divided into two classes: the first group comprises truly broadband techniques that cover the full frequency dependence. here, two approaches have been applied to the study of superconductors, namely the corbino approach [4], [5] and the bolometric approach [6]. the main difference in their capabilities is that the corbino approach reveals phase information (giving amplitude and phase of the response) but has rather poor sensitivity, whereas the bolometric approach is very sensitive but does not reveal phase information. the second group is based on resonators that can be operated at several frequencies.
while our approach below is based on one-dimensional transmission line structures, namely superconducting stripline resonators [7], [8], a three-dimensional resonator with a dielectric resonator in its core has recently also been established successfully [9].
3. broadband corbino spectroscopy
in corbino spectroscopy, the flat sample terminates the open end of a coaxial cable (schematically shown in figure 1). in this way, the sample reflects the microwave signal traveling in the coaxial cable, and the reflection coefficient $S_{11}$, which can conveniently be measured with a network analyzer, then directly yields the impedance $Z_L$ of the sample via the following relation (with $Z_0 = 50\,\Omega$ the characteristic impedance of the coaxial cable):
$Z_L = Z_0 \frac{1 + S_{11}}{1 - S_{11}}$ (1)
figure 1. scheme of a corbino spectrometer. the network analyzer (nwa) measures the reflection coefficient of the microwave signal that travels in the coaxial cable and is reflected by the sample.
while this approach is commonly used at room temperature, in particular in the study of dielectrics [10], it is much more difficult for the case of superconductors. this is due to the elaborate calibration procedure that has to be applied to correct for the strongly temperature-dependent transmission properties of the coaxial cable [4], [5]. different groups have come up with different schemes for how this challenge can be met most successfully. it turns out that the best approach can differ drastically depending on the particular goal. in contrast to groups that concentrate on detailed studies of a single sample (or a selected few) [11], we have focused on a procedure that allows a high throughput combined with rather low temperatures. presently, we operate a corbino spectrometer with one cooldown per day, a base temperature of 1.1 k (in a 4he cryostat), a frequency range of 45 mhz to 50 ghz, and two samples measured during each cooldown. with this rate, we can easily study extended sets of samples. current examples are nbn and tan thin films as a function of deposition parameters. ultrathin films of these materials with thicknesses of about 5 nm or even smaller are interesting both for applications in superconducting single-photon detectors [12], [13] as well as for the understanding of the so-called superconductor-to-insulator transition (sit) [14], [15]. the typical critical temperatures of these materials are of order 5 k to 10 k, and as a result the energy gap is located in the thz frequency range, much higher than accessible with corbino spectroscopy.
however, for such ultrathin films, thz transmission measurements are easily possible and can directly determine the energy gap, whereas these studies usually have difficulties resolving quasiparticle dynamics at the low-frequency limit of their spectral range [1], [16]. therefore, the combination of corbino spectroscopy at ghz frequencies and thz transmission measurements is an ideal match to study the complete charge dynamics in superconductors, if the materials are available as very thin films. if a conductive thin sample is studied in corbino spectroscopy and if the microwave fields and current density are uniform through the sample thickness (which usually is the case for conductive thin films of thickness up to several tens of nm), the complex conductivity of the sample can directly be calculated from the sample impedance $Z_L$ as follows:
$\sigma = \frac{\ln(a_2/a_1)}{2\pi d\, Z_L}$ (2)
here $d$ is the thickness of the superconducting thin film and $a_1$ and $a_2$ are the inner and outer diameters of the corbino disk, respectively [4], [5]. this corbino disk is created by deposition of concentric conductive contacts (e.g. made of gold) onto the superconducting film, and it defines the geometry of the sample impedance $Z_L$ that is probed. the thin-film samples are usually deposited onto dielectric substrates, and typically these substrates do not affect corbino microwave measurements much in the frequency range below 20 ghz [17], [18]. in figure 2 we show typical microwave spectra, both real and imaginary parts of the conductivity, for a tan film of thickness 5 nm that was sputtered onto a sapphire substrate. for temperatures above the critical temperature tc = 8.9 k, the real part $\sigma_1$ is frequency independent and the imaginary part $\sigma_2$ basically vanishes, as expected for a conventional metal at ghz frequencies. below tc, $\sigma_1$ rises strongly upon cooling. our frequency range is well below the energy gap, and therefore $\sigma_1$ is not yet suppressed compared to the normal-state value, even at the lowest temperature of 1.2 k [19]. the detailed frequency dependence reflects the density of states in the superconducting state and can phenomenologically be fitted with the drude formula over a substantial frequency range [19]. also $\sigma_2$ increases strongly upon cooling below tc, and its characteristic 1/f frequency dependence gives direct access to the cooper pair density and the penetration depth [1], [19].
figure 2. microwave conductivity spectra of a tan thin film, measured with a corbino spectrometer.
corbino spectroscopy at 4he temperatures is thus well suited to study thin films of superconductors with tc of 1.5 k and higher. considering that for many spectroscopic studies one desires the photon energy in a similar range as the thermal energy, we have also implemented a corbino setup at 3he temperatures [20]. here, the requirements for the calibration of the microwave coaxial cable are in principle even more stringent, but since in our laboratory we have a 4he corbino spectrometer well established, we limit our 3he spectrometer to the temperature range 0.45 k to 2.0 k, and we can use the overlap in temperature with the 4he setup for additional calibration purposes. all our studies mentioned so far, as well as corbino experiments on superconductors by other groups [11], [19]-[24], are limited to thin-film samples (thickness of a few tens of nm or less). the reason is the limited sensitivity of corbino spectroscopy [5]. but this is no restriction if the superconductor of interest can be prepared as thin films with the same material properties as the more commonly studied bulk samples, or if the particular properties of interest only become manifest in thin films, like the sit. for highly disordered superconductors close to the sit, the conductivity in the superconducting state at frequencies below $2\Delta$ has recently gained substantial attention because deviations from the canonical predictions of bcs theory might occur [25]-[28].
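equations (1) and (2) translate directly into a few lines of code. the sketch below chains them and, as an additional step, extracts a penetration depth from the imaginary part via the relation $\sigma_2 = 1/(\mu_0 \omega \lambda^2)$ implied by the $1/\omega$ dependence of section 1.1; all numerical values are invented for illustration, and the sign of $\sigma_2$ depends on the assumed $e^{-i\omega t}$ time convention.

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability, H/m

def corbino_conductivity(s11, d, a1, a2, z0=50.0):
    """complex film conductivity from a calibrated reflection coefficient,
    combining eq. (1), z_l = z0*(1+s11)/(1-s11), and eq. (2),
    sigma = ln(a2/a1)/(2*pi*d*z_l)."""
    z_l = z0 * (1 + s11) / (1 - s11)
    return np.log(a2 / a1) / (2 * np.pi * d * z_l)

def penetration_depth(sigma2, f):
    """lambda from the imaginary part, assuming sigma2 = 1/(mu0*omega*lambda^2)
    with the convention sigma = sigma1 + i*sigma2 of section 1.1."""
    return 1.0 / np.sqrt(MU0 * 2 * np.pi * f * sigma2)

# purely illustrative numbers: 5 nm film, 1 mm / 2.3 mm contact diameters, 5 GHz
sigma = corbino_conductivity(s11=-0.9995 - 0.0002j, d=5e-9, a1=1e-3, a2=2.3e-3)
lam = penetration_depth(sigma.imag, f=5e9)   # of order a few hundred nm here
```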
4. superconducting stripline resonators
many unconventional superconductors, which are the topic of present research, are not available as thin-film samples and as such are not accessible with microwave corbino spectroscopy. many of these are ternary, quaternary, or even more complex intermetallics, and the growth of high-quality thin films is extremely demanding [29]-[31]. furthermore, the tc of unconventional superconductors is often suppressed if the mean free path of the conduction electrons is reduced, and then the small thickness of thin films can be an intrinsic limitation to superconductivity in these materials. to still be able to study these materials with microwaves as a function of frequency, we have developed a resonant technique for bulk samples that can be operated at numerous, roughly equidistant frequencies. the general idea is that in a transmission line structure based on a stripline (also called triplate), shown in cross section in figure 3, one of the ground planes, which surround the signal-carrying center strip, can be formed by the conducting bulk material of interest. to be sensitive to low-loss samples, the stripline is created as a one-dimensional transmission line resonator, which has a fundamental frequency typically between 1 ghz and 3 ghz (depending on sample size) and which can easily be excited at numerous harmonics, i.e. one obtains a set of data at roughly equidistant frequencies, spanning a range up to approximately 15 ghz at present. particular care has to be taken in the choice of the center conductor material: here, the density of the microwave currents is an order of magnitude larger than in the ground planes, and therefore the overall losses of the stripline first of all depend on the center conductor. if we want to study a superconductor as ground plane, this means that the center conductor has to be fabricated from a thin film of an even "better" superconductor, i.e. one with even lower microwave losses at low temperatures. in general, such very low losses can be achieved if the resonator is operated at temperatures well below tc, i.e. materials with a high tc are desirable for the center conductor. however, the materials should be conventional s-wave superconductors, as superconductors with nodes have much higher losses at low temperatures due to the larger amount of thermally excited quasiparticles (the two low-temperature behaviours are sketched below). this restriction excludes the high-tc cuprate superconductors, which are d-wave, from this application. therefore, we fabricate our stripline resonators either from nb or pb thin films [8], [32]. while we originally implemented this technique to study unconventional metals, such as heavy fermions with their peculiar microwave properties [33]-[39], it turned out that this technique is also well capable of studying bulk superconductors if their tc is lower than that of the stripline material [8]. after test measurements on the conventional elemental superconductors ta and sn [8], we now apply it to the unconventional heavy-fermion superconductor cecu2si2 [40]. while cecu2si2 was long believed to be a d-wave superconductor [41], recent studies instead suggest that it is a two-gap s-wave superconductor [42]. this question is of particular interest as it is related to the possible pairing mechanism for superconductivity, which is thought to be of magnetic nature in cecu2si2 [43].
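whether a gap has nodes is usually decided by the low-temperature behaviour of the penetration depth, as discussed in the next paragraph. as a reference point, the sketch below encodes the two textbook limiting forms, exponentially activated for a full s-wave gap and power-law for a nodal order parameter; this is generic bcs phenomenology, not an analysis performed in this paper.

```python
import numpy as np

def dlambda_swave(T, Tc, lam0, r=1.76):
    """low-temperature (T << Tc) s-wave bcs behaviour: exponentially activated
    change of the penetration depth,
    dlambda/lam0 ~ sqrt(pi*delta0/(2*kB*T)) * exp(-delta0/(kB*T)),
    with delta0 = r*kB*Tc and r = 1.76 for weak coupling."""
    x = r * Tc / np.asarray(T)
    return lam0 * np.sqrt(np.pi * x / 2.0) * np.exp(-x)

def dlambda_nodal(T, Tc, lam0, n=1):
    """order parameter with nodes: power law dlambda ~ (T/Tc)**n,
    n = 1 for clean line nodes (d-wave), n = 2 in the dirty limit."""
    return lam0 * (np.asarray(T) / Tc) ** n
```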
one possibility to address whether a superconductor has an order parameter with nodes or not is to measure the temperature dependence of its penetration depth. this can be done with microwaves. so far, no microwave studies on cecu2si2 have been reported. one difficulty here is the rather low tc of around 0.6 k. while this temperature can be reached with 3he cryostats, a detailed study of the penetration depth requires a base temperature of order tc/10 or lower, i.e. one has to employ a 3he/4he dilution refrigerator. here, another advantage of our stripline probe comes into play: its physical dimensions are much smaller than the fundamental wavelength, as this one-dimensional resonator can be shaped as a meander. as a result, the typical mounting of resonator and sample amounts to just a few cm3, a volume that can easily be incorporated in a commercial dilution refrigerator. furthermore, the microwave signals are guided from the room-temperature electronics to the cryogenic probe and back via coaxial cables, which is much more generic than the waveguides that are often used for three-dimensional cavity resonators. therefore, the stripline resonator is a widely applicable, very sensitive technique to study the microwave response of superconductors. in contrast to many other resonator approaches, here it is rather simple to obtain data at several different frequencies, i.e. spectral information. while for addressing the penetration depth, which in this range is basically frequency independent, a single resonator frequency might be sufficient, there are other questions of interest, such as the quasiparticle dynamics, where data as a function of frequency are crucially required. these frequency dependences are usually broad, and therefore the rather restricted spectral resolution of order 1 ghz is sufficient, whereas the truly broadband corbino approach has a much better frequency resolution that can also reveal narrow spectral structures. as a demonstration of mk measurements using a stripline resonator, in figure 4 we show data obtained on a cecu2si2 single crystal for two resonator modes, at frequencies 4.36 ghz and 6.52 ghz, measured during the same temperature sweep. when cooled below tc, the surface resistance rs decreases strongly due to the reduction of microwave losses. at the same time, the resonator frequency f0 increases because of the decreasing penetration depth (a sketch of how these two observables map onto sample properties follows at the end of this section). above tc, in the metallic heavy-fermion state, rs is also temperature dependent, even at these low temperatures below 1 k. this indicates that the electronic conduction is governed by intrinsic scattering mechanisms (in contrast to temperature-independent impurity scattering), and it shows promise for future studies of the microwave response of heavy fermions in the normal state of cecu2si2, an interesting field of study in its own right [33]-[38]. comparing the two resonator frequencies reveals that the superconducting penetration depth well below tc does not depend on frequency, whereas rs is noticeably enhanced for the higher frequency, both below and above tc, as expected. our stripline approach can thus reveal the frequency-dependent microwave response of a superconductor. in the present case of cecu2si2, where a 3he/4he dilution refrigerator was employed, measurements down to 30 mk were performed. thus a wide range of unconventional low-tc superconductors can be studied with this technique.
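figure 4 reports the raw resonator observables f0 and rs. converting them into sample properties requires geometric factors of the specific stripline, which are not quoted in this paper; the sketch below therefore only illustrates the generic perturbation-type relations commonly used for such resonators, with gamma and g as assumed calibration constants obtained, e.g., from simulation or reference measurements.

```python
def surface_resistance(q, gamma, q_background=float("inf")):
    """r_s from the measured quality factor of one resonator mode:
    r_s = gamma*(1/q - 1/q_background), where gamma (in ohms) is a
    geometric factor of the stripline and q_background collects all
    losses not caused by the sample."""
    return gamma * (1.0 / q - 1.0 / q_background)

def delta_lambda(f0, f0_ref, g):
    """change of the penetration depth from the resonance-frequency shift,
    d_lambda = -g*(f0 - f0_ref)/f0_ref; the length scale g encodes how
    strongly the mode frequency reacts to the field penetration."""
    return -g * (f0 - f0_ref) / f0_ref
```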
recently, another microwave approach, namely a multimode dielectric resonator [9], was used to study the heavy-fermion superconductor cecoin5 [44]. these two techniques have rather complementary advantages: our stripline approach offers a wide and flexible range of numerous ghz frequencies, it can be used to resolve anisotropy in the microwave response, and due to its compact design it can conveniently be installed into a wide range of dilution refrigerators (thus achieving even lower temperatures than those demonstrated in [44]), whereas the multimode dielectric resonator can be operated in high magnetic fields, which is not possible for our stripline approach based on the superconductors pb or nb.
figure 3. schematic cross section of a stripline as used in resonator studies. the signal-carrying center strip (thickness t = 1 µm, width w = 45 µm) is sandwiched between two ground planes at distance h = 127 µm (thickness of the dielectric spacers, made of sapphire, with dielectric constant εr). center strip and lower ground plane are made of a low-loss superconductor (pb or nb), whereas the bulk superconducting sample of interest acts as upper ground plane of the stripline. the microwave fields enter the superconductors within the superconducting penetration depth λ (indicated as shaded regions).
figure 4. temperature dependence of the stripline resonator frequency f0 (normalized to the frequency at the lowest temperature) and sample surface resistance rs measured on a cecu2si2 single crystal sample. both quantities are shown for two different resonator modes.
5. conclusions
we have discussed why microwave measurements on superconductors are powerful tools to understand their electrodynamics and to obtain detailed information about fundamental characteristics of superconductivity, such as the temperature-dependent penetration depth. in recent years, several groups have developed new techniques for cryogenic microwave spectroscopy. while the corbino approach is particularly popular and offers the possibility to cover a very large frequency range completely, it is limited to the study of very thin films. as an alternative, we have established a rather different technique, namely stripline resonators where the bulk sample of interest is one ground plane of the stripline. with these two approaches available, we can cover a very large range of superconducting materials and address their microwave properties both for fundamental research as well as for materials optimization for applications.
acknowledgement
we thank g. untereiner for assistance in probe and sample preparation and u. pracht and l. benfatto for helpful discussions.
references
[1] u.s. pracht, e. heintze, c. clauss, d. hafner, r. bek, d. werner, s. gelhorn, m. scheffler, m. dressel, d. sherman, b. gorshunov, k.s. il'in, d. henrich, m. siegel, electrodynamics of the superconducting state in ultra-thin films at thz frequencies, ieee trans. thz sci. technol. 3 (2013) pp. 269-280.
[2] m. dressel, electrodynamics of metallic superconductors, adv. condens. matter phys. 2013 (2013) article 104379.
[3] o. klein, s. donovan, m. dressel, g. grüner, microwave cavity perturbation technique: part i: principles, int. j. infrared millim. waves 14 (1993) pp. 2423-2457; s. donovan, o. klein, m. dressel, k. holczer, g. grüner, microwave cavity perturbation technique: part ii: experimental scheme, int. j.
[3] o. klein, s. donovan, m. dressel, g. grüner, microwave cavity perturbation technique: part i: principles, int. j. infrared millim. waves 14 (1993) pp. 2423-2457; s. donovan, o. klein, m. dressel, k. holczer, g. grüner, microwave cavity perturbation technique: part ii: experimental scheme, int. j. infrared millim. waves 14 (1993) pp. 2459-2487; m. dressel, o. klein, s. donovan, g. grüner, microwave cavity perturbation technique: part iii: applications, int. j. infrared millim. waves 14 (1993) pp. 2489-2517.
[4] j.c. booth, d.h. wu, s.m. anlage, a broadband method for the measurement of the surface impedance of thin films at microwave frequencies, rev. sci. instrum. 65 (1994) pp. 2082-2090.
[5] m. scheffler, m. dressel, broadband microwave spectroscopy in corbino geometry for temperatures down to 1.7 k, rev. sci. instrum. 76 (2005) article 074702.
[6] p.j. turner, d.m. broun, s. kamal, m.e. hayden, j.s. bobowski, r. harris, d.c. morgan, j.s. preston, d.a. bonn, w.n. hardy, bolometric technique for high-resolution broadband microwave spectroscopy of ultra-low-loss samples, rev. sci. instrum. 75 (2004) pp. 124-135.
[7] m.s. diiorio, a.c. anderson, b.y. tsaur, rf surface resistance of y-ba-cu-o thin films, phys. rev. b 38 (1988) pp. 7019-7022.
[8] d. hafner, m. dressel, m. scheffler, surface-resistance measurements using superconducting stripline resonators, rev. sci. instrum. 85 (2014) article 014702.
[9] w.a. huttema, b. morgan, p.j. turner, w.n. hardy, x. zhou, d.a. bonn, r. liang, d.m. broun, apparatus for high-resolution microwave spectroscopy in strong magnetic fields, rev. sci. instrum. 77 (2006) article 023901.
[10] j. krupka, frequency domain complex permittivity measurements at microwave frequencies, meas. sci. technol. 17 (2006) pp. r55-r70.
[11] w. liu, m. kim, g. sambandamurthy, n.p. armitage, dynamical study of phase fluctuations and their critical slowing down in amorphous superconducting films, phys. rev. b 84 (2011) article 024511.
[12] a. engel, a. aeschbacher, k. inderbitzin, a. schilling, k. il'in, m. hofherr, m. siegel, a. semenov, h.-w. hübers, tantalum nitride superconducting single-photon detectors with low cut-off energy, appl. phys. lett. 100 (2012) article 062601.
[13] d. henrich, s. dörner, m. hofherr, k. il'in, a. semenov, e. heintze, m. scheffler, m. dressel, m. siegel, broadening of hotspot response spectrum of superconducting nbn nanowire single-photon detector with reduced nitrogen content, j. appl. phys. 112 (2012) article 074511.
[14] v.f. gantmakher, v.t. dolgopolov, superconductor-insulator quantum phase transition, phys. usp. 53 (2010) pp. 1-49.
[15] j. yong, t.r. lemberger, l. benfatto, k. ilin, m. siegel, robustness of the berezinskii-kosterlitz-thouless transition in ultrathin nbn films near the superconductor-insulator transition, phys. rev. b 87 (2013) article 184505.
[16] u.s. pracht, m. scheffler, m. dressel, d.f. kalok, c. strunk, t.i. baturina, direct observation of the superconducting gap in a thin film of titanium nitride using terahertz spectroscopy, phys. rev. b 86 (2012) article 184503.
[17] h. kitano, t. ohashi, a. maeda, broadband method for precise microwave spectroscopy of superconducting thin films near the critical temperature, rev. sci. instrum. 79 (2008) article 074701.
[18] m.m. felger, m. dressel, m. scheffler, microwave resonances in dielectric samples probed in corbino geometry: simulation and experiment, rev. sci. instrum. 84 (2013) article 114703.
[19] k. steinberg, m. scheffler, m. dressel, quasiparticle response of superconducting aluminum to electromagnetic radiation, phys. rev. b 77 (2008) article 214517.
[20] k. steinberg, m. scheffler, m. dressel, broadband microwave spectroscopy in corbino geometry at 3he temperatures, rev. sci. instrum. 83 (2012) article 024704.
[21] d.h. wu, j.c. booth, s.m. anlage, frequency and field variation of vortex dynamics in yba2cu3o7−δ, phys. rev. lett. 75 (1995) pp. 525-528.
[22] h. kitano, t. ohashi, a. maeda, i. tsukada, critical microwave-conductivity fluctuations across the phase diagram of superconducting la2−xsrxcuo4 thin films, phys. rev. b 73 (2006) article 092504.
[23] e. silva, n. pompeo, s. sarti, wideband microwave measurements in nb/pd84ni16/nb structures and comparison with thin nb films, supercond. sci. technol. 24 (2011) article 024018.
[24] m. mondal, a. kamlapure, s.c. ganguli, j. jesudasan, v. bagwe, l. benfatto, p. raychaudhuri, enhancement of the finite-frequency superfluid response in the pseudogap regime of strongly disordered superconducting films, sci. rep. 3 (2013) article 1357.
[25] s. gazit, d. podolsky, a. auerbach, d.p. arovas, dynamics and conductivity near quantum criticality, phys. rev. b 88 (2013) article 235108.
[26] m. swanson, y.l. loh, m. randeria, n. trivedi, dynamical conductivity across the disorder-tuned superconductor-insulator transition, phys. rev. x 4 (2014) article 021007.
[27] t. cea, d. bucheli, g. seibold, l. benfatto, j. lorenzana, c. castellani, optical excitation of phase modes in strongly disordered superconductors, phys. rev. b 89 (2014) article 174506.
[28] d. sherman, u.s. pracht, b. gorshunov, s. poran, j. jesudasan, m. chand, p. raychaudhuri, m. swanson, n. trivedi, a. auerbach, m. scheffler, a. frydman, m. dressel, the higgs mode in disordered superconductors close to a quantum phase transition, nature phys. 11 (2015) pp. 188-192.
[29] m. jourdan, a. zakharov, m. foerster, h. adrian, evidence for multiband superconductivity in the heavy fermion compound uni2al3, phys. rev. lett. 91, no. 9 (2004) article 097001.
[30] y. mizukami, h. shishido, t. shibauchi, m. shimozawa, s. yasumoto, d. watanabe, m. yamashita, h. ikeda, t. terashima, h. kontani, y. matsuda, extremely strong-coupling superconductivity in artificial two-dimensional kondo lattices, nat. phys. 7 (2011) pp. 849-853.
[31] m. scheffler, t. weig, m. dressel, h. shishido, y. mizukami, t. terashima, t. shibauchi, y. matsuda, terahertz conductivity of the heavy-fermion state in cecoin5, j. phys. soc. jpn. 82 (2013) article 043712.
[32] m. thiemann, d. bothner, d. koelle, r. kleiner, m. dressel, m. scheffler, niobium stripline resonators for microwave studies on superconductors, j. phys.: conf. ser. 568 (2014) article 022043.
[33] l. degiorgi, the electrodynamic response of heavy-electron compounds, rev. mod. phys. 71 (1999) pp. 687-734.
[34] m. scheffler, m. dressel, m. jourdan, h. adrian, extremely slow drude relaxation of correlated electrons, nature 438 (2005) pp. 1135-1137.
[35] m. scheffler, m. dressel, m. jourdan, h. adrian, dynamics of heavy fermions: drude response in upd2al3 and uni2al3, physica b 378-380 (2006) pp. 993-994.
[36] m. scheffler, m. dressel, m. jourdan, microwave conductivity of heavy fermions in upd2al3, eur. phys. j. b 74 (2010) pp. 331-338.
[37] d.n. basov, r.d. averitt, d.v.d. marel, m. dressel, k. haule, electrodynamics of correlated electron materials, rev. mod. phys. 83 (2011) pp. 471-541.
[38] m. scheffler, k. schlegel, c. clauss, d. hafner, c. fella, m. dressel, m. jourdan, j. sichelschmidt, c. krellner, c. geibel, f. steglich, microwave spectroscopy on heavy-fermion systems: probing the dynamics of charges and magnetic moments, phys. status solidi b 250 (2013) pp. 439-449.
[39] d. hafner, m. dressel, o. stockert, k. grube, h.v. löhneysen, m. scheffler, anomalous microwave surface resistance of cecu6, jps conf. proc. 3 (2014) article 012016.
[40] f. steglich, j. aarts, c.d. bredl, w. lieke, d. meschede, w. franz, h. schäfer, superconductivity in the presence of strong pauli paramagnetism: cecu2si2, phys. rev. lett. 43 (1979) pp. 1892-1896.
[41] f. steglich, j. arndt, o. stockert, s. friedemann, m. brando, c. klingner, c. krellner, c. geibel, s. wirth, s. kirchner, q. si, magnetism, f-electron localization and superconductivity in 122-type heavy-fermion metals, j. phys. condens. matter 29 (2012) article 294201.
[42] s. kittaka, y. aoki, y. shimura, t. sakakibara, s. seiro, c. geibel, f. steglich, h. ikeda, k. machida, multiband superconductivity with unexpected deficiency of nodal quasiparticles in cecu2si2, phys. rev. lett. 112 (2014) article 067002.
[43] o. stockert, j. arndt, e. faulhaber, c. geibel, h.s. jeevan, s. kirchner, m. loewenhaupt, k. schmalzl, w. schmidt, q. si, f. steglich, magnetically driven superconductivity in cecu2si2, nat. phys. 7 (2011) pp. 119-124.
[44] c.j.s. truncik, w.a. huttema, p.j. turner, s. özcan, n.c. murphy, p.r. carrière, e. thewalt, k.j. morse, a.j. koenig, j.l. sarrao, d.m. broun, nodal quasiparticle dynamics in the heavy fermion superconductor cecoin5 revealed by precision microwave spectroscopy, nat. commun. 4 (2013) article 2477.

acta imeko, february 2015, volume 4, number 1, 97 – 104, www.imeko.org

set up and characterization of a measuring system for pq measurements in a mv site with pv generation

gabriella crotti, domenico giordano
istituto nazionale di ricerca metrologica, strada delle cacce, 91, 10135 torino, italy

section: research paper
keywords: non-conventional transformers; pq measurements; mv substation; pv plants
citation: gabriella crotti, domenico giordano, set up and characterization of a measuring system for pq measurements in a mv site with pv generation, acta imeko, vol. 4, no. 1, article 15, february 2015, identifier: imeko-acta-04 (2015)-01-15
editor: paolo carbone, university of perugia
received january 15th, 2014; in final form november 21st, 2014; published february 2015
copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: the research leading to the results here described is part of the european metrology research program (emrp), which is jointly funded by the emrp participating countries within euramet and the european union
corresponding author: domenico giordano, e-mail: d.giordano@inrim.it

abstract
the features of a power quality measurement system which includes both voltage and current transducers and a self-developed measuring instrument are described. the system is intended for measurements in substations at medium voltage level. a rogowski coil and a voltage divider are used as transducers, whereas the measuring instrument is based on a reconfigurable i/o-fpga system with embedded software. attention is focused on the procedures adopted for the characterization of the voltage and current sensors, which are carried out taking into account the expected on-site measurement conditions. an example of use of the system for the measurement of some pq parameters in a private substation, which connects an industrial load and two photo-voltaic generation plants to the public mv network, is presented, together with the associated voltage and current uncertainty budget. by making use of the results obtained in the characterization phase, the current and voltage measurement uncertainty can be reduced.

1. introduction

the measurement of the quality of the electrical supply is today gaining more and more attention. this is due to the need for an accurate evaluation of the power quality for the assessment of compliance with the limits indicated for public distribution networks, the identification and location of disturbances, and the verification of the impact of newly connected distributed sources [1]-[5]. as to power quality (pq) measurement systems, attention is generally focused on the performance of the measuring instruments. test procedures, expected performances and uncertainties are extensively prescribed only for the measuring instruments in the relevant standards [6], [7].
when dealing with pq measurements in medium and low voltage grids, a reduction of the current and voltage levels is needed. to this end, the instrument transformers already available in the measurement site are generally used. however, their use for the measurement of pq events cannot be considered completely satisfactory, and their contribution to the overall uncertainty can be significant [8]. as an alternative, non-conventional transformers are more and more considered, because of their better performance regarding linearity and frequency response and their low-level power output [9]-[13]. prescriptions for the measurement uncertainties associated with transducers used in pq measurements are given in [15]. according to these, the percentage error has to be within 1% of the measured value for the first and second harmonics and within 5% for the higher order ones. with reference to the required performances, the features of different types of current and voltage sensors are analysed in [8]. to estimate and reduce the uncertainty of the pq parameters, the characteristics of the adopted transducers and their performance in relation to the expected measurement conditions, including the sensor arrangement in the circuit and the surrounding environment, should be known. to this end, an extended and focused transducer characterisation is needed. the paper deals with a system for pq measurements at medium voltage (mv) up to 24 kv rms, which makes use of a voltage divider and three rogowski coils (rcs) and is completed by a measuring instrument based on a national instruments compact reconfigurable i/o programmable automation controller platform. the system is intended for indoor use in a mv site with photo-voltaic (pv) generation. the use of a clamp-on, flexible coil for the current measurement enables its positioning in harsh and narrow sites, where measurement conditions quite far from the laboratory ones are met, without the need for power circuit disconnection and/or outage. however, the same features, namely the flexibility and the presence of a coil turn gap, make the sensor more sensitive to perturbing effects, such as those due to the position of the primary or other current-carrying conductors, and less accurate with respect to rigid, closed coils, with an expected measurement uncertainty of the order of one percent or higher [17].
as to the voltage transducer, a solid-insulated voltage divider is selected, to exploit its linearity and frequency performance. for this kind of transducer, the effects due to the capacitive coupling of the divider with close metallic elements at ground or floating potential, as well as the environmental conditions, have to be considered and controlled. attention is therefore focused on the characterisation procedure of the sensors, which, if carefully performed under conditions that reproduce the on-site ones, allows a decrease of the on-site measurement uncertainty and enables use of the sensors in compliance with the given accuracy limits [8], [15]. the application of the system to pq measurements in a private mv/lv substation that supplies an industrial load and is connected both to the public supply network and to photo-voltaic (pv) generation plants is eventually shown and discussed.

2. the measurement system

the scheme of the pq measurement system is shown in figure 1. the system includes a voltage divider and three commercial flexible rogowski coils (rcs) with the associated integrator as transducers, a ni compact reconfigurable i/o based measuring instrument, and the connection to a gps receiver for time synchronisation [16]. in the following, a description is given of the measuring system components and of the relevant characterisation tests carried out to estimate the uncertainty contributions to be considered in the on-site use of the current and voltage sensors.

2.1. the current transducer

the on-site operating conditions are generally quite different from those in the laboratory when calibrating the system components. they can significantly affect the performance of the sensor, depending on its construction characteristics. in particular, for clamp-on rcs, the path of the primary and return conductors, the presence of external magnetic field sources and the non-orthogonality of the coil with respect to the primary conductor can lead to a considerable variation of the coil mutual inductance, whose amount depends on the sensor characteristics and construction solutions [18]. taking into account the expected measurement location (a mv/lv substation, with measurements on the mv side) and the range and type of currents to be measured (from hundreds of milliamperes to several tens of amperes), commercial flexible, clamp-on, shielded rogowski coils, equipped with a three-channel integration system, have been selected. the rated characteristics of the rcs and the associated integrator are summarised in table 1. to investigate the sensitivity of the coil to influencing and perturbing factors and to predict their effect on the expected measurement accuracy, an extensive characterisation of the rc with the associated integrator is carried out in the laboratory, by reproducing configurations similar to those expected on-site. the characterisation of the current sensor includes: i) calibration at power frequency and determination of the rc transformation ratio; ii) linearity over the expected measurement range; iii) frequency response determination; iv) evaluation of return conductor proximity effects and of the presence of perturbing sources; v) estimation of the temperature effect. a scheme of the supply and measuring circuit for the calibration at power frequency and the linearity tests is shown in figure 2a. the current is generated by a clarke-hess 8100 transconductance amplifier (0.2 ma to 100 a, dc to 100 khz) supplied by a stable waveform generator (fluke 5500).
the current value is obtained by measuring the voltage across a calibrated non-inductive shunt (tinsley 0.05 Ω standard resistor).

figure 1. pq measuring system scheme.

table 1 – rc coil and integrator rated characteristics.
type: rogowski coil with integrator
rc length: 400 mm
rc section: 44 mm2
integrator selectable output ranges: 20 a/v, 100 a/v, ..., 10 ka/v (overrange 1000%)
integrator temperature coefficient: < 0.01 %/°c
rc + integrator temperature coefficient: 0.04 %/°c
bandwidth of the rc + integrator: 1 hz to 100 khz
rc + integrator rated accuracy: 1%

the reference and rc signals have been acquired by two synchronised agilent 3458a multimeters, exploiting their digitising features, with a sampling frequency varying from 1 khz to 1 mhz depending on the supply frequency. the measurements are carried out by centring the coil around the primary conductor with a rigid, but easily moveable, non-conductive support, to limit the uncertainty contribution due to coil positioning. both the shape of the primary conductor and its diameter are chosen close to those of the mv cables (figure 2b). linearity is investigated at 50 hz by generating currents in the range 0.1 a to 100 a, with the return conductor 30 mm distant from the coil loop and opposite to the coil gap. the characterisation is carried out for the 20 a/1 v range, exploiting the overrange capabilities of the selected coils (table 1). the relative deviations of the measured rc ratio from the rated one are found to be within 6×10-4 over the investigated current range. a systematic increase of +1×10-3 is measured when the acquisition and measurement system is included in the measurement chain. a relative decrease of about 2×10-4 occurs for currents lower than 1 a, due to the lower values of the digitiser input voltage. regarding the frequency investigation, the impedance magnitude and phase of the coil, without the integrator, have first been measured by an agilent 4294a impedance analyser. figure 3 compares the measured behaviour with that computed by considering a simple rcl coil model, whose parameter values are indicated in the graph. the resonance frequency is around 300 khz. the frequency behaviour of the rc with integrator and connection cable is then measured in the same circuit used for the linearity evaluation. the current is measured in this case by a guildline 7320 ac shunt, whose ac/dc difference is within 200 ppm at 100 khz. exploiting the use of stable and repetitive signals, the evaluation of the rc error is carried out with the agilent digitisers configured in dcv mode [14] from a few hertz up to hundreds of hertz, and in time-equivalent sampling mode (also named subsampling mode) [14] for higher frequencies. since the first experimentation of the system is planned on a site where a wide harmonic spectrum can be expected (pv generation, with inverter switching frequency around 20 khz), measurements are carried out up to some tens of kilohertz. figure 4 shows the rc transformation ratio, normalised to the rated one, and the phase error measured over the range 20 hz to 40 khz, with the return conductor 35 mm distant. the limit values given in [15] for the 1st to the 50th harmonics for the ratio and phase errors are shown for comparison by the red continuous lines. the dashed lines are limit values extrapolated to the lower and higher harmonic ranges, where no indication is presently given.

figure 4. measured rc plus integrator normalised ratio and phase errors vs frequency.
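a minimal sketch of such a limit check follows (our illustration, not the authors' software): the ratio limits reproduce the 1%/5% prescription of [15] quoted in the introduction, while the phase limits and all numerical readings are illustrative assumptions.

import numpy as np

F0 = 50.0  # power frequency in hz

def limits(freq_hz):
    # return (ratio_limit, phase_limit_rad) for a given frequency:
    # 1% up to the 2nd harmonic, 5% above, as prescribed in [15];
    # the phase limits here are placeholders for illustration only
    h = freq_hz / F0
    return (0.01, 0.01) if h <= 2 else (0.05, 0.05)

def check(freqs, ratio_norm, phase_err_rad):
    # flag frequencies where the normalised-ratio deviation or the
    # phase error exceeds the corresponding limit
    result = []
    for f, r, p in zip(freqs, ratio_norm, phase_err_rad):
        r_lim, p_lim = limits(f)
        result.append((f, abs(r - 1.0) <= r_lim and abs(p) <= p_lim))
    return result

freqs = np.array([50.0, 2.5e3, 40e3])
ratio = np.array([1.0006, 1.004, 0.97])   # made-up measured values
phase = np.array([2e-4, 5e-3, 6e-2])
print(check(freqs, ratio, phase))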
measurements are then repeated by varying the minimum distance d of the return conductor from the coil loop from 30 mm to 200 mm. the rc ratios normalised to the rated value are shown in figure 5 for two distances d of the return conductor (conf. i: 30 mm and conf. ii: 200 mm), with the coil gap external to the current loop as shown in figure 2b. the behaviour measured with the gap internal, when the distance d is 200 mm (conf. iii), is also shown. a quite constant deviation of about 3.2×10-3 is found between conf. i and conf. iii. the influence of external magnetic field sources is then investigated by measuring the coil stray signal induced by a 400 a current flowing in a conductor external to the coil loop, placed at increasing distances d. the induced stray signal is 2 mv for d = 100 mm and reduces to 0.5 mv when d is 600 mm. finally, the response of the coil in the worst positioning condition with respect to the primary conductor, that is with the coil directly suspended on it as shown in figure 2c, is also determined. the relative deviation of the rc ratio from the value measured with the centred coil increases up to 2% when the gap is close to the conductor.

figure 2. a) linearity, proximity and frequency test circuit; b) rc coil experimental arrangement with the coil centred on the primary conductor; c) rc coil experimental arrangement with the coil hung from the primary conductor.
figure 3. measured magnitude and phase of the rc impedance.
figure 5. normalised rc plus integrator ratio for different distances d of the return conductor and coil gap position.

2.2. the voltage transducer

the voltage sensor is a resistive divider with rated scale factor sf 10000, 1 MΩ rated load, rated primary voltage um = 24 kv, withstand voltage 50 kv, and rated accuracy class 0.5. the divider is polyurethane resin insulated and is equipped with a 10 m coaxial cable. dielectric tests (power frequency withstand voltage, impulse voltage test, partial discharge test) are first carried out and successfully passed by the divider. the characterisation tests performed include: i) sf and phase error determination at 50 hz up to 24 kv/√3; ii) proximity effect analysis at 50 hz; iii) frequency response; iv) temperature dependence. the divider sf is measured at 50 hz by comparison with an inrim standard measurement voltage transformer (uncertainties of 50 ppm and 50 µrad for its ratio and phase errors, respectively). from 5 kv to 12 kv, a relative decrease of 1×10-3 of the sf value and an increase in the phase error up to 1 mrad are detected, with respect to the errors measured at 5 kv. the characterisation at mv is repeated by inserting the divider inside a grounded metallic box with a side of 80 cm (figure 6). the stray capacitive couplings between the high voltage elements and the surrounding metallic surfaces do not significantly modify the divider sf, but lead to an increase of the phase error of 3 mrad. a systematic correction for the phase angle can then be applied in case of use of the divider inside the grounded box.

figure 6. voltage calibration with the divider inside a metallic grounded box.

the frequency response is investigated at low voltage (600 v) from dc up to 30 khz. the divider ratio error complies with the standard requirements up to 1.4 khz. at increasing frequencies the divider response amplifies the harmonic amplitudes. this effect is exploited to better detect the low-amplitude high-frequency components. the measured values of the harmonic amplitudes and the thd are then evaluated by introducing the frequency corrections determined in the calibration phase.
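a minimal sketch of this correction step follows (an assumed interface of our own, not the authors' code): each measured harmonic is divided by the divider's complex frequency response determined during calibration, and the thd is recomputed from the corrected amplitudes; all numerical values are made up.

import numpy as np

def correct_harmonics(meas, response):
    # meas: dict harmonic_order -> measured complex amplitude (divider output);
    # response: dict harmonic_order -> complex response normalised to 1 at 50 hz;
    # returns corrected amplitudes referred to the divider input
    return {h: v / response[h] for h, v in meas.items()}

def thd(harmonics):
    # total harmonic distortion from a dict of complex amplitudes
    fund = abs(harmonics[1])
    return np.sqrt(sum(abs(v) ** 2 for h, v in harmonics.items() if h > 1)) / fund

meas = {1: 230.0 + 0j, 5: 2.2 + 0.1j, 7: 1.4 - 0.2j}     # made-up measured spectrum
resp = {1: 1.0 + 0j, 5: 1.05 + 0.02j, 7: 1.09 + 0.03j}   # calibration data (assumed)
corr = correct_harmonics(meas, resp)
print(f"thd raw: {thd(meas):.4f}, corrected: {thd(corr):.4f}")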
as an alternative, the correction of the divider frequency behaviour according to the method described in [19] can allow an optimisation of its response. the variation of the sf in the range -5 °c to 40 °c is measured at low voltage (600 v). figure 7 shows the time evolution of the sf variation from the 23 °c value, with the divider inside a climatic chamber, when the chamber temperature is set to 40 °c and to -5 °c. the sf deviation reached at thermal balance is +1.7 ‰ and -1.2 ‰, respectively. the mean temperature coefficient is thus 100 ppm/°c. several hours are needed to reach a steady sf value, due to the large mass of the insulated transducer.

figure 7. time evolution of the divider sf relative variation from 23 °c.

2.3. the measuring instrument

the transducer voltage and current signals are digitised, and the pq parameters are estimated and stored, by a ni compact reconfigurable i/o (ni compactrio) programmable automation controller platform, which includes a crio embedded real-time controller with a 4-slot reconfigurable chassis, equipped with a user-programmable fpga. two 10 v a/d cards (16 bit, 1 msample/s/ch and 24 bit, 50 ksample/s/ch), a gps synchronisation module and two sd cards for pq parameter recording complete the assembly. the ni compactrio platform and the acquisition and elaboration strategy are shown in figure 8. the quantities of interest (frequency, rms values, harmonics and thds) are evaluated in real time according to [6] and [7]. in particular, the algorithm for the frequency estimation, over a time interval of 10 s [6], and the current and voltage rms values are implemented on the fpga, while the thd and harmonic content estimation algorithms are implemented in a real-time operating system. this approach gives a real-time estimation of the power quality parameters. different time aggregations, starting from 3 s, can be selected by the user as a function of the measurement approach. presently, four files are stored on the sd cards: one contains the time stamp and the corresponding frequency value; the second stores the time stamp, the voltage and current thd, the rms values of one voltage and three currents and the corresponding first harmonics. the last two files contain the time stamp and the harmonic content for one voltage and one current phase, respectively.

figure 8. ni compactrio and acquisition architecture.

2.4. uncertainty budget

the uncertainty budget for rms current values from 3 a to 40 a is shown in table 2. it is evaluated taking into account the calibration uncertainty of the assigned rc coil ratio and of the associated measurement instrument (acquisition and evaluation system), its linearity, the effect of the other mv conductors and that of the plant lv conductors. two estimates are presented: in the first case a more conservative evaluation is given (i), whilst the second one (ii) considers correction factors and uncertainties relevant to the specific measurement conditions, evaluated taking into account the results obtained in the characterisation described above. the resulting uncertainty (95% coverage level) reduces from 2% to 4×10-3.

table 2 – uncertainty budget (rms current from 3 a to 40 a); relative standard uncertainties in units of 10-3, for the conservative evaluation (i) and the corrected evaluation (ii).
rc and acquisition system ratio: (i) 0.25; (ii) 0.25
linearity: (i) 1.5; (ii) 1.0
return conductor, gap position variation and orthogonality of the sensor: (i) 6.5 (30 mm < d < 100 mm); (ii) 1.2 (d > 600 mm)
ambient temperature: (i) 2.3, ta = (23 ± 10) °c; (ii) 1.15, ta = (23 ± 5) °c
stability and repeatability: (i) 2; (ii) 0.6
expanded rel. uncertainty (95% coverage probability): (i) 20; (ii) 4.0
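the expanded uncertainty in the last row of table 2 can be reproduced by combining the individual contributions in quadrature and expanding with a coverage factor k = 2 (95% coverage). the following minimal sketch (our code, using the conf. (ii) values from table 2) illustrates the computation:

import math

contributions_ii = {   # relative standard uncertainties, units of 1e-3
    "rc and acquisition system ratio": 0.25,
    "linearity": 1.0,
    "return conductor / gap position (d > 600 mm)": 1.2,
    "ambient temperature (23 +/- 5) degc": 1.15,
    "stability and repeatability": 0.6,
}

u_c = math.sqrt(sum(u ** 2 for u in contributions_ii.values()))  # combined standard
U = 2 * u_c                                                      # expanded, k = 2
print(f"combined: {u_c:.2f}e-3, expanded (95%): {U:.1f}e-3")     # ~4e-3, consistent with table 2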
as to the rms voltage, taking into account the contributions respectively due to the mv divider linearity from 5 kv to 12 kv and to the temperature variation in the range (23 ± 10) °c, the relative uncertainty of the measuring system (95% coverage probability) is estimated to be within 2×10-3.

3. on-site measurement

the developed system has been deployed in a power quality measurement campaign carried out in a private mv/lv, 15 kv/400 v, 70 a substation. the substation is connected to the public medium voltage distribution network and supplies a factory equipped with two pv generation plants, rated 200 kw and 800 kw respectively. the private network can absorb up to 800 kw from the distribution network operator (dno) and supply up to 1 mva. voltage and current are measured on the mv line connecting the 800 kva step-up transformer to the dno mv network, by making use of the measurement system equipped with the characterised voltage and current transducers and the measuring instrument previously described. in the following, the measurement arrangement is described, with specific reference to the solutions adopted for the connection of the voltage and current transducers.

3.1. the measurement arrangement

the connection of the sensors at mv level is a crucial point. as regards the voltage sensor, the connection point is made inside the metal-enclosed, air-insulated switchgear unit on the mv line connecting the 800 kw step-up transformer to the dno network. taking into account the dimensions of the cubicle and the electrical clearances to be respected between the introduced mv and lv components and conductors and the surrounding energised, earthed or floating elements, the divider is located outside the unit. to avoid any casual contact with live conductors, the divider is placed inside the same grounded metallic box used in the calibration phase (figure 6), and the mv connection is made by means of a mv cable with suitable terminations. firm connections to the mv busbars inside the unit are established by means of bolted connections. the divider lo connector is earthed. in order to avoid dangerous overvoltages at the measuring instrument input, a surge arrester is connected between the lv divider output terminal and earth. regarding the current sensors, since their positioning on the bare power conductors inside the switchgear unit is not possible, the rcs are centred on the mv insulated cable between the mv transformer of the 800 kw pv plant and the switchgear unit by a suitable insulating support.
attention is paid to locating the coil opening gap opposite to, i.e. as far as possible from, the other mv conductors. the systematic effects of the other phase conductors placed in close proximity were quantified in the laboratory during the coil characterisation phase, as described in the previous section. the measurement of the stray signals due to the currents possibly flowing in the grounded shields of the mv cables was carried out with the pv plant disconnected. the influence of the stray currents was estimated to be within 5 parts per thousand, and within a few percent for the lower current values. this component is then taken into account in the uncertainty budget for the on-site current measurement. since the produced current is related to the solar conditions, and in particular to the irradiance, a passive pyranometer is installed close to the solar panels [20]. the pyranometer output voltage is digitised by the measuring instrument simultaneously with the acquired current signal.

4. results and discussion

voltage and current measurements are performed under different weather conditions and during different seasons. the current and voltage signals are digitised with a 50 khz sampling frequency, and the frequency, rms, thd and harmonic parameters are evaluated. a preliminary analysis, carried out on current waveforms acquired with higher sampling frequencies (up to 500 khz), does not detect any harmonics of the fundamental 18 khz switching component. a quite significant variation in the maximum rms currents is found depending on the weather conditions, as shown in figure 9.

figure 9. current measurement on two consecutive days in late autumn (panel a: cloudy weather).

to give evidence of the waveform distortion, which strongly depends on the generated current amplitude and thus on the weather conditions, figure 10 shows one period of the injected currents, normalised to their peak values, recorded at 11 a.m. on a sunny day, when the related thd is about 2% and the rms value is about 16 a, and at 5 p.m., when the thd reaches 8%-10% while the rms value falls to 2 a (see figure 9).

figure 10. current waveforms normalised to the peak values, detected at 11 a.m., when the current thd is about 2% (a), and at 5 p.m., when the thd is around 8% (b).

the measured rms current and the irradiance show quite the same behaviour (figure 11a). as expected, the current thd significantly increases for values lower than some amperes. when the rms current decreases from 8 a (measured irradiance 0.3 kw/m2) to 3 a (measured irradiance 0.2 kw/m2), the corresponding current thd increases from 2.4% to 8.5%. however, the corresponding measured voltage behaviour remains quite unchanged (thd = 0.6%-0.7%). this can be explained by the low value of the ratio of the pv plant power to the short-circuit power at the point of common coupling. the simultaneously measured voltage is shown in figure 11b. the rms voltage values slightly increase during the evening/night hours. the harmonic analysis detects the presence of two significant components (the 5th and the 7th). the analysis of the currents recorded during a one-week period shows, as expected, a marked dependence on the weather conditions (figure 12).

figure 11. current, voltage (rms and thd) and global irradiance measured on two consecutive days in late spring.
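before continuing with the voltage results, the following minimal sketch (our illustration, not the deployed fpga/real-time code) shows an rms and thd evaluation of the kind described in section 2.3, using an fft over an integer number of 50 hz cycles in the spirit of the iec 61000-4-7 approach [7]; the synthetic waveform and all amplitudes are made up.

import numpy as np

FS = 50_000.0  # sampling frequency used on site, hz

def rms_and_thd(x, f0=50.0, n_harm=50):
    # x: samples covering an integer number of fundamental cycles
    n = len(x)
    spec = np.fft.rfft(x) / n
    amps = 2.0 * np.abs(spec)                # single-sided amplitudes
    df = FS / n
    idx = lambda h: int(round(h * f0 / df))  # fft bin of the h-th harmonic
    fund = amps[idx(1)]
    harm = np.array([amps[idx(h)] for h in range(2, n_harm + 1)])
    rms = np.sqrt(np.mean(x ** 2))
    thd = np.sqrt(np.sum(harm ** 2)) / fund
    return rms, thd

# synthetic current: 16 a rms fundamental plus small 5th and 7th harmonics
t = np.arange(int(FS * 0.2)) / FS            # 10 cycles of 50 hz
i = 16*np.sqrt(2)*np.sin(2*np.pi*50*t) + 0.3*np.sin(2*np.pi*250*t) \
    + 0.2*np.sin(2*np.pi*350*t)
print(rms_and_thd(i))                        # rms close to 16 a, thd below 2%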
as regards the voltage, the measured behaviour again indicates an increase of the voltage during the night. the maximum variations of the rms voltage are found to be within 3%. the slight voltage increase (about 1%) during the night is more evident if the mean rms voltages measured on five working days during the day-time hours (7-19) are compared with those registered during the night (19-7). on the contrary, this voltage increase is far less evident, or not detected at all, during the weekends. an example of the performance of the measuring system as regards the voltage harmonic analysis is shown in figure 13, where the daily behaviour of the three most significant harmonics, normalised to the fundamental one, is shown. harmonic voltage variations down to some hundreds of ppm of the fundamental are well evidenced by the system. the uncertainty associated with the harmonic measurement is estimated to be within 0.25% of the power frequency component.

figure 12. current and irradiance measurements over one week.
figure 13. daily behaviour of the 5th, 7th and 11th voltage harmonics detected on a working day (amplitudes normalised to the 50 hz component).

5. conclusions

pq measurement accuracy in mv and hv plants can significantly depend on the performance of the current and voltage transducers used, in relation to the actual measurement conditions. the measurement uncertainty can be estimated and reduced by carrying out a careful characterisation of the transducers. characterisation procedures under conditions as close as possible to the on-site ones have been tested on a current and voltage measurement system which includes a clamp-on, flexible rc current sensor, whose expected on-site accuracy is of the order of some percent, and an accuracy class 0.5 voltage divider. by taking into account the results obtained in the characterisation phase, the estimated uncertainties of the current and voltage measurements carried out on-site by the developed system are reduced to 6 and 2 parts per thousand, respectively.

acknowledgement

the research leading to the results here described is part of the european metrology research program (emrp), which is jointly funded by the emrp participating countries within euramet and the european union. moreover, the authors acknowledge the ftm (fabbrica trasformatori di misura) factory for its valuable contribution to the development of the voltage divider, and cos.pel, which made the mv station of the mondovì (italy) plant available for the pq measurements.

references

[1] en 50160, voltage characteristics of electricity supplied by public distribution networks, 2011-05.
[2] p. kadurek, j.f.g. cobben, w.l. kling, "smart mv/lv transformer for future grids", speedam 2010, international symposium on power electronics, electrical drives, automation and motion.
[3] n.r. browne, t.j. browne, s. elphick, "monitoring intelligent distribution power systems – a power quality plan", harmonics and quality of power (ichqp), 2010 14th international conference on, pp. 1-7.
[4] s. elphick, v. smith, v. gosbell, r. barr, "australian long term power quality survey project update".
[5] j. mindykowski, "fundamentals of electrical power quality assessment", xvii imeko world congress, 22-27 june 2003, dubrovnik, croatia, pp. 552-557.
[6] iec 61000, "emc part 4-30: power quality measurement methods", 2009.
[7] iec 61000, "emc part 4-7: general guide on harmonics and interharmonics measurements and instrumentation, for power supply systems and equipment connected thereto", 2009.
[8] iec/tr 61869-103, "instrument transformers – the use of instrument transformers for power quality measurements", 2012.
[9] a. cataliotti, d. di cara, e. emanuel, s. nuccio, "current transformers effects on the measurement of harmonic active power in lv and mv networks", ieee trans. on power delivery, vol. 26, no. 1, pp. 360-368, 2011.
[10] d. gallo, c. landi, m. luiso, e. fiorucci, g. bucci, f. ciancetta, "realization and characterization of an electronic instrument transducer for mv networks with fiber optic insulation", wseas transactions on power systems, issue 1, volume 8, january 2013.
[11] n. locci, c. muscas, s. sulis, "experimental comparison of mv voltage transducers for power quality applications", i2mtc 2009, international instrumentation and measurement technology conference, singapore, 5-7 may 2009.
[12] m. faifer, s. toscani, r. ottoboni, "electronic combined transformer for power-quality measurements in high-voltage systems", ieee trans. on instrum. and meas., vol. 60, no. 6, june 2011.
[13] s. kucuksari, g.g. karady, "experimental comparison of conventional and optical current transformers", ieee trans. on power delivery, vol. 25, no. 4, october 2010, pp. 2455-2463.
[14] agilent technologies 3458a multimeter user's guide, http://cp.literature.agilent.com/litweb/pdf/03458-90014.pdf
[15] iec 60044-8, instrument transformers – part 8: electronic current transformers, 2004.
[16] a. delle femine, d. gallo, c. landi, m. luiso, "power-quality monitoring instrument with fpga transducer compensation", ieee transactions on instrumentation and measurement, vol. 58, no. 9, pp. 3149-3158, sept. 2009.
[17] j.d. ramboz, "machinable rogowski coils, design and calibration", ieee trans. on instrum. and meas., vol. 45, no. 6, april 1996.
[18] m. chiampi, g. crotti, a. morando, "evaluation of flexible rogowski coil performances in power frequency applications", ieee trans. on instrum. and meas., vol. 60, no. 3, pp. 854-862, 2011.
[19] g. crotti, d. gallo, d. giordano, c. landi, m. luiso, "a real-time compensation method for mv voltage transducer for power quality analysis", proc. of 19th imeko tc4 symposium and 17th iwadc workshop, advances in instrumentation and sensors interoperability, july 18-19, 2013, barcelona, spain.
[20] m. patsalides, d. evagorou, g. makrides, z. achillides, g.e. georghiou, a. stavrou, v. efthimiou, b.w. schmitt, j.h. werner, "the effect of solar irradiance on the power quality behaviour of grid connected photovoltaic systems".

acta imeko, may 2014, volume 3, number 1, 38 – 40, www.imeko.org

structured instrument design

p.h. sydenham
school of electronic engineering, measurement and instrumentation systems, south australian institute of technology, adelaide, south australia

section: research paper
keywords: measurement theory; value; error; uncertainty
citation: p.h. sydenham, structured instrument design, acta imeko, vol. 3, no. 1, article 9, may 2014, identifier: imeko-acta-03 (2014)-01-09
editor: luca mari, università carlo cattaneo
received may 1st, 2014; in final form may 1st, 2014; published may 2014
copyright: © 2014 imeko.
this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

this is a reissue of a paper which appeared in acta imeko 1988, proceedings of the 11th triennial world congress of the international measurement confederation (imeko), "instrumentation for the 21st century", 16-21.10.1988, houston, pp. 271–275.

abstract
the overall system design of measuring instrumentation is currently carried out by an intuitive process. using a "required plus available" knowledge approach, implemented in computers, the user's domain can be interfaced to measurement engineering expertise, providing an effective cae tool. progress in this new approach, which promises to dramatically reduce design time, place system generation in the hands of application domain users, and bring about more unity in measurement scholarship, is reviewed.

1. need for more fundamental approach

the need for measuring instruments arises in application domains. an instrument forms the link to observed processes for gathering knowledge about those processes. application domain persons are those who know best about the knowledge sought, but often do not have sufficient background to efficiently design instrument systems. experienced instrument designers, in contrast, have the design expertise but need to become familiar with the application domain knowledge to provide worthwhile support. this mismatch of knowledge requirements has resulted in haphazard improvement in instrument design and application. there has been too much reinvention and too little systematic building upon the work of others. the overall approach to the design of instruments, and related electronics, has never been streamlined since its sophistication began around the 1920s. ref. [1] provides an historical overview of the developments. study of past general catalogue-style texts on measuring technology shows that whilst the form of implementation is changing, the basic fundamentals are relatively static. the need for improved methodology and practice is increasing as measuring system performance, and the cost and time to produce them, come under closer scrutiny. it is now becoming clear that this inherited, uncoordinated methodology can be improved by integrating the core knowledge of what has been learned over this past half century. available computing technology and software techniques, and the mature state of recorded knowledge about measurement systems theory and practice, set the scene for revising the approach to the development and application of measuring systems. this paper provides an overview of the work of the measurement and instrumentation systems centre, misc, in this regard, placing that contribution in perspective with other related work.

2. sensory interfaces to the physical world

there has been much identified and well supported activity for fundamental research into the design and application of the human-to-computer interface, hci. in contrast, it has been most difficult to gain respectability for counterpart work on the physical world to computer interface, pwci – the interface that provides people and, hence, also computer systems with knowledge about the world in which we exist. the pwci is a sensory interface [2] established with a properly designed and installed measurement system. a measuring instrument is as much an information machine as is a computer. this concept has been cursorily developed by finkelstein, another "fundamental issues" contributor [3, 4], and was a prime realisation of the historical study [1], being reflected in its title. measurement systems are the "cinderella" of the information technology movement. the research programme of misc concentrates on the development of universally applicable sensory interfacing concepts.
using a “required plus available” knowledge approach, implemented in computers, the user’s domain can be interfaced to measurement engineering expertise providing an effective cae tool. progress in this new approach, that promises to dramatically reduce design time, place system generation in the hands of application domain users, and bring about more unity in measurement scholarship, is reviewed. acta imeko | www.imeko.org may 2014 | volume 3 | number 1 | 39 concepts. it is strengthened by working in close co-operation with the loughborough university of technology, computer human interface centre, lutchi and the cad of instruments group of the city university, london. research activities into the pwci and the hci must be integrated to ensure that humans, computers and the physical world each are interfaced with respect to the common entity all need, that is, knowledge. this concept has been expanded in [5]. 3. key stages in instrument design design or selection of a suitable instrument system has identifiable serial steps. each step should progressively better interface the necessary application domain knowledge to the instrument design expertise base as neither knows enough of the other at the various points of that process. the instrument design task is, fortunately, not as diverse as its manifestation and varied forms have made it appear. the process of working from a “knowledge needed” statement through to installed hardware is not over complex once the necessary information and procedural path are clarified and adopted in a systematic, constrained and pragmatic manner. the design of measurement systems should begin firmly embedded in terms of the knowledge needed from the sensory interface to be created by the designer who is most likely to be a user. a strategy to assist applications persons develop rigorous operational and technical design specifications has been developed [6] and it is being implemented as software under the name, measurement interface design system, minds. this step helps set up the vital metrological requirement in terms of measurands, their key metrology parameter values, influence variables and how the several measurands’ information is to be combined to form the “many to fewer” mapping process needed in the sensory interface. a means to convert the output of minds into an engineering specification document, or to directly write a specification where it is already known what measurement system measurand is needed, has been developed as software called specriter [7]. this stage calls for more knowledge about non-metrological aspects of a real instrument examples being, packaging, environment, testing, power supplies, cooling, and mass. having developed a sound specification it can be used to either purchase a ready-made item, have it designed and manufactured or use it as basic knowledge for the next step of the misc designer system which provides core sensor design shells. the purchase of proprietary items is not being studied in the misc programme because of the existence of an extensive project that already exists at the warren springs laboratory, uk. their thesac centre is developing an on-line, dial-up, knowledge based, service whereby users can rapidly learn which proprietary sensors come close to their need. although that programme is targeted at the process industry the over 70 measurands catered for are applicable to other applications. 
a more adequate understanding of the taxonomy of information systems, formed as instruments and electronic systems, is now becoming essential. this has become a key necessity in a misc venture [8] investigating automatic electronic system design software that can be used by application persons without electronic training. before that process can be automated, a classification of electronic building blocks formed in terms of a knowledge and information role was needed. at present, designers use heuristic knowledge, in an art-form manner, to decide which block to use and where. the names and concepts used go back to the 1930s, well before information technology was recognised as a major entity. at the sensor design shell level, a key taxonomy consideration now is which shells to develop and whether there exists a generic building block approach that can be used in a hierarchical manner. to build a shell for every known measurand would appear to be the wrong way to proceed. the misc project currently has length, pressure and temperature shells in development, with these shells being used to explore how they can cover the measurement of other variables. the shells provide extensive engineering documentation to support manufacture. having the appropriate sensor systems and knowledge about how their signals should be combined is still not the end of the application domain user's task. installation, commissioning and operation also require assistance. the misc programme framework also has projects in place to develop these aspects by incorporating heuristic and formal knowledge into software and hardware. these will tell the user how to install the sensors, automatically test and set them up (with some help from the user), and then automatically run the system with regular operational health checks. the overall programme aims to take sensory system design into a new era, providing do-it-yourself capability [9]. counterpart work by others touches on some facets of the commissioning and use step. distributed sensory systems are currently a more widespread research interest, the work of brignell, university of southampton, being an example. smart sensors are studied by several of the silicon sensor development teams. misc has the work of haskard to call upon [10]. at this early stage it is already clear that the caeinst software system now needs extension to cater for systems of sensors. work in 1988 covers setting up interfaces between caeinst-designed sensors and manufacturing and scientific computers, and a data logger arrangement. a review of the project has recently been prepared that gives more detail of the individual projects [11]. this now several-year-long experience permits some general conclusions to be given about cad of instruments.

4. using computers to assist design

knowledge-based systems allied to traditional formal knowledge are essential to develop application-user-driven designer systems. to provide cad tools based only on formal description modelling is to throw away the powerful rules of thumb that are used so effectively in practice to get fast, adequate results. the place for rigorous modelling is in assisting the in-depth development needed to squeeze more out of a given design form, such as in a major product line situation, or where the mathematical approach is superior to the heuristic way, for example to calculate the thickness of a pressure gauge diaphragm.
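as an illustration of how such design rules of thumb might be encoded as data alongside formal models, the following hypothetical sketch shows a tiny rule base driving device selection; none of the rules, names or thresholds come from the minds, specriter or caeinst software — they only illustrate the idea that a modest, ordered rule base can capture heuristic designer knowledge.

# hypothetical sketch: design rules of thumb encoded as data, each mapping a
# requirement pattern to a recommended sensing approach; ordered rules encode
# the priority a heuristic designer would apply
RULES = [
    (lambda r: r["measurand"] == "temperature" and r["t_max_c"] > 600,
     "thermocouple in protective thermowell"),
    (lambda r: r["measurand"] == "temperature",
     "platinum resistance thermometer"),
    (lambda r: r["measurand"] == "pressure" and r["accuracy_pct"] < 0.1,
     "resonant pressure sensor"),
    (lambda r: r["measurand"] == "pressure",
     "diaphragm gauge with strain bridge"),
]

def recommend(requirement):
    # return the first recommendation whose rule matches the requirement
    for predicate, advice in RULES:
        if predicate(requirement):
            return advice
    return "no rule matched: refer to a human designer"

print(recommend({"measurand": "temperature", "t_max_c": 900}))
print(recommend({"measurand": "pressure", "accuracy_pct": 0.05}))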
the exemplary formal transducer modelling work of the city university, london, has been supported by manufacturers wishing to improve their products. misc has incorporated a acta imeko | www.imeko.org may 2014 | volume 3 | number 1 | 40 simple version of their mediem modelling software into its temperature sensor shell [12] to calculate thermal parameters and response times of the thermowell protected sensor. much of sensory system design needs are not for enhanced modelling elegance but are in the more subjective aspects such as, selection of appropriate device, management of the design process, awareness of standards, terminology and other semantic fields for such aspects as costing and quality. although the best computational methodology for implementing the above concept is not yet clear, the benefits of using the software based design approach are already emerging. once developed the method becomes very low cost to use and to maintain. the necessary knowledge is not so fast moving as might be thought. basic considerations will remain the same for many years. the speed at which a user can be in possession of a good sensory system design, including full manufacturing detail, will reduce from months to days. the limiting factor for a “oneoff” task now appears to be the time to manufacture the system, not to develop its design. work on computer integrated manufacture will improve that aspect. it is also becoming clear that the knowledge bases needed for this field are reasonably small – a few hundred words and a hundred or so rules will often be sufficient to design a major aspect of a need. the computing task should not blow out to one concerning such breadth as parsing of english language in its totality. 5. effort needed, problems of being pioneers as already alluded to above, gaining acceptance that fundamental development in measurement science and instrument engineering is both possible and worthwhile is not easy. as this is a radical change in both method and attitude it needs resourcing support of centralised governmental kind. it is too early to expect commercial support and its nature, being of advantage to many persons in many places yet not in large identified groups, makes support from users unlikely until it is ready to use. at what stage this concept will be taken up by others is not clear. misc has difficulty in citing the contribution of others to the overall systems approach because there appears to be little other work to report. to assist software dissemination, and hopefully gain constructive criticism and faster development, misc software is being made available to others interested in making and a contribution to the method. momentum will gradually develop and using this opportunity it is hoped more people will work together to move to the new methods. that will be hard decision for many as those who enter this field at this early stage will face lack of appreciation by others who cannot see that the current ways can be bettered. 6. horizons opened up once instrument cae in use to complete this review it is worth considering where the approach leads to once established. given that a series of, knowledge based, questions and answers can yield the full design of a sensory interface the idea can be extended to ask how many of those questions can the sensory interface resolve for itself given a sufficiently large and appropriate knowledge base and sensory system of its own. 
this concept may sound like a science-fiction scenario, but worthwhile applications already appear possible. a misc project is developing a listening sensory interface that will work out the heuristic rules of a sound, enabling recognition of indicated sounds that it should, in future, discern. these rules, currently derived manually (but in future possibly by the automatic electronic system designer), have been used to build a simple analogue/digital circuit that provides the sensory mapping needed by a rule-based, rather than formal processing, method. thus the concept of using information machines to generate the full manufacturable design of other information machines now appears to be viable. this theme is soon to be explored in a more general manner using a sensory head having hearing and vision sensors. there the aim is to reach the highest possible programming level, that is, spoken conversation between the operator and the inspection head as the latter sits in its workplace. self-repairing and adaptive sensory systems will also become closer, because sensory systems will be able to decide if their own sensor and electronics design needs have changed and switch parts as needed without operator help. such visionary concepts are not new, but sensory interface development based on heuristic and formal knowledge embedded in computers provides the closest yet chance of building economically viable systems.

references

[1] p.h. sydenham, measuring instruments: tools of knowledge and control, peregrinus, stevenage, 1979.
[2] p.h. sydenham, computer aided engineering of measuring instrument systems, cae jnl., 4 (1987), pp. 117-123.
[3] l. finkelstein, instrument science series: introductory article, j. phys. e: sci. instrum., 10 (1977), pp. 566-572.
[4] l. finkelstein, instrumentation, control and information technology, j. phys. e: sci. instrum., 19 (1986), pp. 246-247.
[5] p.h. sydenham, computer aided engineering of sensing interfaces, abstract, proc. conf. eurosensors, cambridge, sep. 1987.
[6] p.h. sydenham, structured understanding of the measurement process, measurement, 3 (1985), pp. 115-120 and pp. 161-168.
[7] s. cook, automatic generation of measuring instrument specifications, measurement, 6 (1988), pp. 155-160.
[8] l.c. jain, p.h. sydenham, caenic – user-characterised electronic systems development software, cae jnl., 5 (1988), pp. 200-205.
[9] p.h. sydenham, a. skinner, r.w. beijer, do it yourself measurement and control cae package, jnl. meas. contr., 21 (1988), pp. 69-75.
[10] m.r. haskard, general purpose intelligent sensors, microelectronics jnl., 17 (1986), pp. 9-14.
[11] p.h. sydenham, cae of instrument and electronic systems, proc. conf. "control '88", christchurch, new zealand, may 1988.
[12] p.h. sydenham, n.a. morgan, m. anfiteatro, thermocouple temperature sensors cad tool, cae jnl., 5 (1988), pp. 206-210.

acta imeko, issn: 2221-870x, december 2017, volume 6, number 4, 113-120

a root cause analysis and a risk evaluation of pv balance of systems failures

loredana cristaldi, mohamed khalil, payam soulatiantork
deib, politecnico di milano, milano, italy

section: research paper
keywords: reliability assessment; root cause analysis of failures; markov process; photovoltaic systems; balance of system; fmeca
4, article 18, december 2017, identifier: imeko-acta-06 (2017)-04-18. section editor: lorenzo ciani, university of florence, italy. received september 28, 2016; in final form october 19, 2017; published december 2017. copyright: © 2017 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: mohamed khalil, mohamedmahmoud.khalil@polimi.it

abstract: the photovoltaic (pv) system is divided mainly into two subsystems: the pv modules and the balance of system (bos). this work shows two approaches to a reliability analysis at the subsystem level of a bos: failure mode effects criticality analysis (fmeca) and a markov process. fmeca concerns the root causes of failures and introduces prioritization numbers to highlight the critical components of a bos; a markov process, meanwhile, is a reliability methodology that aims to predict the probabilities of success and failure of a bos. in this way, the markov process is a supportive tool to help decision-makers judge the criticality of the failures associated with the operation of pv systems. the results show that the pv inverter contributes significantly to the failures of a bos; accordingly, further investigations are conducted on the pv inverter to prioritize the maintenance activities by determining the risk priority numbers of its component failures through a quantitative ca. the novelty of the proposed methodologies stems from analyzing the root causes of failure of the bos components and estimating the probability of failure of these components, in order to improve the early development of a bos, enhance maintenance management, and satisfy the reliability demanded by electric utilities.

1. introduction

a balance of system (bos) comprises all the non-module components of a photovoltaic (pv) power plant. failures of bos components are the major reason behind the presence of non-producing modules in a pv field. a ten-year survey [1] carried out by sandia national laboratories on 35 pv systems showed that failures of bos components such as switches, fuses, dc contactors and surge arrestors were responsible for 54 % of the non-producing modules that were found, around 10,000 non-working modules. the layout of a pv system varies according to the architectural design: it can be a single-inverter system, where all the strings are connected to a central inverter; a string-inverter system, where each string has its own inverter; or a multi-inverter system, where the pv field is divided into groups of strings, each group connected to an inverter. accordingly, the design of the bos varies with the layout of the pv system. the most optimized bos, whose components are the basis for any design, is presented in figure 1; the failure of any of its components contributes significantly to the failures of the pv system. it is worth mentioning that protection equipment is excluded, since the utility switchgear is sufficient for the protection of an optimum bos: the utility switchgear is full of protection relays and tripping equipment that immediately trip the pv plant in case of any power fault. figure 1. bos components. in the literature, most studies focus on the reliability evaluation of pv modules, and only very limited publications consider the reliability of the bos.
among these publications, a qualitative reliability analysis is presented in [2] using fault tree analysis, and other efforts in [3] investigated the reliability of both pv modules and the bos using petri nets in order to estimate the lifetime, reliability, and availability. in general, the first step towards enhancing a system's reliability is to detect the root causes of its failures. in this respect, failure mode effect criticality analysis (fmeca), a well-known methodology, is used to analyze the failure causes of systems. it focuses mainly on identifying the possible failure causes, and it is also one of several methods used for risk assessment and management, since it helps select the most appropriate maintenance strategies to enhance system performance. a recent study [4] applied fmeca to a pv system designed by brookhaven national laboratory, and the results show that the inverter and the grounding system of the pv system have the highest risk priority numbers (rpn). it provides a strong investigation of fmeca on all pv system components; however, [4] considered one specific design, did not list recommended actions to limit the failure causes, and listed the potential failure modes without highlighting the contribution of each system component to the failure of the whole pv system. in this work, on the other hand, the fmeca methodology is applied in greater detail to the bos only, whose components are more optimized: the failure causes of each component are studied in detail, and recommended actions are listed. in addition, a prioritization number is assigned to each failure, based on iec 60812, to improve the maintenance activities. moreover, a further reliability investigation is carried out by means of a markov process, which serves as a supportive tool alongside fmeca in case of any confusion in judging the priority number for maintenance management; more details on the confusion in judging rpn are available in [5]. both techniques provide results that can be used during the design phase, to reduce major field problems and improve the reliability of the system, and during the operation phase, to improve maintenance management and reduce the probability of failure occurrence. this paper is organized as follows: section 2 provides a general overview of the failure causes of the bos components; section 3 presents the fmeca results; section 4 describes the markov process conducted on the bos; section 5 focuses on the criticality analysis (ca) of the pv inverter; finally, section 6 draws the conclusions.

2. balance of system failure causes

mapping the failure causes is the first step of a reliability analysis aimed at determining the underlying failures and enhancing failure prediction methods. figure 2 shows a schematic diagram of an optimized bos, consisting of the components that need to be installed on a pv string. in this section, the failure causes of each bos component are described in detail.

2.1. ac and dc cables

ac and dc cables are the veins of pv systems, and their failure results in partial or complete shutdown of the pv plant. in addition, cable problems at the module level can lead to a severe mismatch for the other modules cabled in the same parallel block [6] and result in a drop of the output power. loose cables due to poor workmanship (excess torque and pressure during installation), undersized cables, overvoltage and overcurrent, and insufficient protection are the main causes of pv cabling failures.
2.2. bypass diodes

bypass diodes are usually supplied inside the module junction box, or manufactured inside the pv module itself for sophisticated module types only [7]. a study carried out in [8] on 1272 modules showed that 47 % of the modules had defective bypass diodes and that 3 % of the defective bypass diodes caused burn marks on the modules. generally, the main function of a bypass diode is to allow the current to pass around shaded or cracked cells, thereby reducing the power losses within the module itself; hot spots are thus avoided and a long lifetime of the system is guaranteed [9]. bypass diodes have a junction temperature reaching up to 150-200 °c and exhibit significant self-heating [10]; however, the main reason for their failure is the thermal stress experienced during operation, because they are not exposed to sufficient air flow for cooling.

2.3. string fuses

depending on the required capacity of a pv system, several strings may be connected in parallel for higher currents and more power. only pv systems that have at least three strings require a fuse on each string: pv systems with fewer than three strings will not generate sufficient fault current and do not present a safety hazard [11]. in general, a fuse can be considered a conductor with a relatively low melting temperature surrounded by a dielectric insulator. string fuses have different failure modes, which can be summarized as false operation, design factors, cracks of the dielectric packaging, and a shift in the fuse resistance (the resistance increases during normal operation, or becomes relatively low during tripping) [12], [13]. moreover, fatigue contributes to the aforementioned failure modes: string fuses are subject to wear-out, since switching on and off heats up and cools down the fuses, so fuse fatigue develops over time.

2.4. ac and dc isolation switches

ieee std c37.100-1992 [14] defines the isolating switch as a mechanical switching device used for changing the connections in a circuit or for isolating a circuit or equipment from the power source. in pv systems, the installation of a dc switch on each string is necessary for string maintenance purposes, in order to avoid shutting down the inverter and consequently disconnecting all the strings. on the ac side, the cable connecting the inverter to the grid is usually dimensioned to carry currents higher than the maximum current that the inverter can deliver, so protection against overload is not necessary and a circuit breaker at the utility switchgear is sufficient to protect against faults from the grid; however, an ac switch is still necessary and should be installed for inverter maintenance purposes [15]. the most common failure mode of isolating switches is a failure of the mechanical mechanism: the switch fails to open or close, and the contacts carbonize, which results in a local temperature rise and a reduction of contact quality. figure 2. bos of a pv string.

2.5. inverter

in a grid-connected pv plant, the inverter is an expensive and complex key component. a typical three-phase pv inverter includes igbt power modules, cooling fans, control software and dc-link capacitors implemented on printed circuit boards (pcbs), in addition to ac and dc contactors.
an igbt power module fails as a result of thermal runaway [16], fatigue of the ceramic-substrate-to-base-plate solder [17], partial discharge [18], and short-circuit failure of the freewheeling diode (fwd) [19]. ac and dc contactors fail to open or close because of design defects, mechanical locks, failure of the tripping coil, and arcs and overheating that degrade the electric contacts. solder fractures and cracks are the main failure causes of pcbs and result in overheating and a gradual increase in the resistance of the solder joints [20]. the control software fails in case of improper design, absence of a health-monitoring facility, and incapability to adapt to changes in the electrical and environmental parameters.

2.6. surge arrestors

surge arrestors are designed to isolate the pv circuit from the ground during normal voltage operation and to connect it to ground when the line voltage exceeds the threshold value. in pv systems, they are installed to provide complete protection against lightning and induced overvoltages. on the dc side, a surge protection device is always placed on the supply side of the inverter's isolating device, in order to provide complete protection even when the isolating device is open. in service, surge arrestors are exposed to frequent lightning, which causes excessive overheating and leads to degradation of their characteristics; moreover, in case of sealing defects, moisture ingress can find its way inside the surge arrestor and contribute to dielectric degradation.

3. failure mode effect criticality analysis

fmeca consists of two separate parts, the failure mode and effects analysis (fmea) and the criticality analysis (ca). fmea includes a list of possible equipment failure modes, the reasons for these failures, the local and final effects (i.e. the impact of each failure on the system element and on the whole system, respectively), and the alternative recommended corrective actions to avoid each failure [21]. the criticality analysis, on the other hand, plans and focuses the maintenance activities according to a set of priorities, giving the failures with the highest risk the highest priority.

3.1. fmea on bos

the analysis starts by gathering information on the functions and failures of the bos components. the impact of each failure cause is first investigated on the component itself and on the pv modules and strings, and stated as the local effect; afterwards, the impact of each component failure on the whole pv system is stated as the final effect. finally, the most appropriate recommendations are given to reduce the failures of each bos component. the results of the fmea on the bos are listed in table 2.

3.2. ca on bos

criticality quantifies how much attention needs to be paid to a given component failure or event; it is evaluated either by qualitative means, based on experience and field background, or by quantitative means, if previous failure data are available. currently, field data are not available for the bos; therefore, a qualitative ca is the most appropriate approach. it is managed by assigning each failure mode a risk priority number (rpn), defined by rpn = o × s × d, where s represents a scale for the failure severity and the risks behind the failure occurrence, o denotes the probability of occurrence of the failure mode, and d denotes detection, i.e. the possibility of recognizing the failure before the system or the customers are affected. for the purpose of rating the components' failures, the iec evaluation criteria shown in table 1 are selected.
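the rpn arithmetic is a plain product of the three rankings; the short python sketch below recomputes the table 2 values given further on (an illustration only; the inverter is rated in figure 3 rather than in the recovered rows of table 2):

    # risk priority numbers from the (s, o, d) rankings of table 2
    bos_rankings = {
        "ac cables": (3, 2, 2),
        "dc cables": (3, 3, 2),
        "bypass diodes": (5, 6, 2),
        "string fuses": (3, 2, 3),
        "dc isolating switches": (4, 5, 2),
        "ac isolating switches": (4, 3, 2),
        "surge arrestors": (2, 2, 3),
    }

    rpn = {name: s * o * d for name, (s, o, d) in bos_rankings.items()}
    for name, value in sorted(rpn.items(), key=lambda kv: -kv[1]):
        print(name, value)   # bypass diodes lead the table 2 rows with rpn = 60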
in the ca evaluation, the occurrence is rated in accordance with the failure rates of the bos components stated in table 3; the severity is based on the expected interruption of power and possible damage to the pv modules; and the detection considers the fault-detection tools and equipment available in the field. the evaluated rpn values are presented in figure 3, which shows that the inverter has the highest rpn because of the complexity of its components. the bypass diode follows the inverter, since its rate of occurrence is quite high compared with the other components, and its failure results in burn marks on the pv modules and a reduction of power; in the field, failure detection of the bypass diode is done by infrared cameras and signal-transmitter devices. both the inverter and the bypass diode are key elements for safe system operation; therefore, it is recommended to carry out the aforementioned detection procedures on the bypass diodes along with the routine maintenance of the pv inverter. on the other hand, the surge arrestor has the lowest rpn, since it only very rarely fails in short-circuit mode and it does not open the circuit in case of open-circuit failure.

table 1. iec 60812 evaluation criteria for occurrence (o), severity (s) and detection (d): ranking 1, o failure is unlikely, s no discernible effect, d almost certain; ranking 2, o low (relatively few failures), s very minor, d very high; ranking 3, o low, s minor, d high; ranking 4, o moderate (occasional failures), s very low, d moderately high; ranking 5, o moderate, s low, d moderate; ranking 6, o moderate, s moderate, d low; ranking 7, o high (repeated failures), s high, d very low; ranking 8, o high, s very high, d remote; ranking 9, o very high (failure is almost unavoidable), s hazardous with warning, d very remote.

table 2. fmeca on bos (each entry gives the outage modes and causes; the local effect; the final effect; the compensating provisions against failure; and s, o, d, rpn). ac cables: thermal expansion and contraction, loose cables, undersized cables, overvoltage and overcurrent; slow output power degradation and increased power losses; shutdown of one or more pv strings, arcs and fire risk; minimizing electrical cable wiring, proper design, sufficient protection, use of cable ducts, routine visual inspection; s 3, o 2, d 2, rpn 12. dc cables: same modes, effects and provisions; s 3, o 3, d 2, rpn 18. bypass diodes: thermal stress, insufficient cooling, overvoltages and high currents, insufficient rating; hot spots and burn marks on the pv module; open-circuited diode, no change in output power, short-circuited diode, significant drop of power; proper design, installing surge arrestors; s 5, o 6, d 2, rpn 60. string fuses: false operation, improper design, cracks of the dielectric packaging, shift in fuse resistance, thermal wear-out; in closed-circuit mode, slow output degradation and increased power losses, in open-circuit mode, isolation of one or more strings; significant reduction of output power; proper design, installing surge arrestors, regular visual inspection; s 3, o 2, d 3, rpn 18. dc isolating switches: mechanical mechanism failure, improper design, carbonized contacts; increase in contact resistance and power losses; partial or complete shutdown of the pv system; enhanced periodic maintenance and proper inspection of the operating mechanism; s 4, o 5, d 2, rpn 40. ac isolating switches: as for the dc switches; s 4, o 3, d 2, rpn 24. surge arrestors: excessive overheating, sealing defects and environmental contamination; characteristics degradation, increased leakage current, dielectric integrity fails to discharge overvoltages; partial-discharge arcing, induced overvoltages and lightning strikes on pv equipment; regular testing (leakage current and megger), visual inspection to avoid dust accumulation and sealing defects, maintaining and ensuring proper grounding systems; s 2, o 2, d 3, rpn 12.

table 3. component adopted failure rates: bypass diode 0.027 f/year [26]; dc switch 0.0018 f/year [27]; ac wire 0.00011 f/year [28]; dc wire 0.00042 f/year [28]; ac switch 0.0003 f/year [27]; string fuse 0.00017 f/year [26]; photovoltaic inverter 0.125 f/year [25]. figure 3. rpn of bos components.

4. markov analysis on bos

a markov process is a sequence of random variables in which the future variable is determined by the present variable, independently of the way in which the present state arose from its predecessors. the analysis looks at a sequence of events and analyzes the tendency of one event to be followed by another [22]; this tendency is evaluated as the probability of transition from one state to another, until the system reaches its final state. thus, a markov process is defined by a process $\{p(t), t \ge 0\}$ with state space $X = \{0, 1, 2, 3, \ldots, r\}$ and stationary transition probabilities

$P_{ij} = \Pr\{p(t) = j \mid p(0) = i\} \quad \text{for all } i, j \in X \,,$ (1)

where $p(t)$ is a random variable that denotes the state and belongs to the state space $X$. the rate of change from one state to another is estimated, from the transient-analysis point of view, through the kolmogorov forward equations
$\frac{\mathrm{d}P(t)}{\mathrm{d}t} = P(t)\,\mathbf{A}$ (2)

and

$\sum_{j=0}^{n} P_j(t) = 1 \,,$ (3)

where $\mathrm{d}P(t)/\mathrm{d}t$ is a vector that represents the rate of change of the state probabilities $P(t)$ at time $t$, and $\mathbf{A}$ is the matrix of transition probabilities between the states. as the number of possible states is finite, (3) is necessary because the probabilities of all states at any time $t$ must sum to 1, and the system can be in one and only one of these states. in the case of zero repair rates, i.e. a poisson birth-death process, (2) can be rewritten as

$\frac{\mathrm{d}P_i(t)}{\mathrm{d}t} = -\sum_{j \neq i} \lambda_{i,j} P_i(t) + \sum_{j \neq i} \lambda_{j,i} P_j(t) \,.$ (4)

although a markov process is, from the theoretical viewpoint, flexible and versatile, special precautions are necessary to deal with the difficulties of practical applications. the main problem is that the number of system states and possible transitions increases rapidly with the number of events in the system [23]; therefore, assumptions become a necessity. the usual assumptions considered by current standards and references, i.e. iec-61165 [23], iec 61508 [24], and [22], can be summarized as follows: i) failure and repair rates are constant; ii) failure and repair events are independent; iii) the transition from one state to another occurs within a very small time interval; iv) only one event occurs at a time. in modelling systems without repairs, iec-61165 [23] considers three possible states for the system: the up, degraded and absorbing states. the up state represents a system free of any failure. the degraded state refers to a system whose performance meets the warranty limits although its operation is associated with failures. absorbing states are the final states of the system when it fails: in a markov process, a state is absorbing if, once reached, the system remains there forever. the term bos is very general, since it covers all the non-pv-module components, and the bos depends as well on the design of the pv system, whether it is a central-inverter, a string-inverter or a multi-inverter pv system.
therefore, the reliability analysis is carried out on the bos components of the pv string shown in figure 2. it is assumed that the surge arrestor never fails in short-circuit mode and does not open the circuit in case of failure; therefore, it is excluded from the markov process analysis. the major problem in any reliability study concerning pv systems is the lack of failure information for pv components; therefore, the failure rates of the bos components are gathered from the literature [26]-[28] and listed in table 3. it is worth mentioning that the inverter failure rate is calculated by considering one failure in 8 years [25], giving a failure rate of 0.125 failure/year. according to the string configuration shown in figure 2, once any component fails the whole pv string fails; therefore, each component is assumed to have two states, up and down. all the possible failure scenarios of the string bos components are listed in table 4, and the corresponding state transition diagram, with transition rates $\lambda_1$ to $\lambda_7$, is illustrated in figure 4. consequently, the state equation of a string bos follows from (4) as

$\frac{\mathrm{d}P_0(t)}{\mathrm{d}t} = -(\lambda_1+\lambda_2+\lambda_3+\lambda_4+\lambda_5+\lambda_6+\lambda_7)\,P_0(t) \,.$ (5)

by definition, reliability is the probability that an item performs its required function without failure, under given conditions and for a stated period of time. the string bos reliability is therefore equal to the probability of state 0, $P_0(t)$; $R(t)$ is presented in figure 5. based on figure 5, the mttf of a string bos is around 6 years, which is close to the mttf of the inverter. in order to highlight the impact of the inverter on the reliability of the string bos, figure 6 compares the string bos reliability with and without the inverter; the mttf of the string bos without the inverter is around 33 years.

table 4. system states: state 0, all components work; state 1, bypass diode fails; state 2, dc switch fails; state 3, ac wire fails; state 4, dc wire fails; state 5, ac switch fails; state 6, string fuse fails; state 7, photovoltaic inverter fails.

figure 4. state transition diagram of string bos failures. figure 5. reliability of string bos. figure 6. comparison of string bos reliability with and without the inverter.
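because (5) has the closed-form solution $P_0(t) = \mathrm{e}^{-\sum_k \lambda_k t}$, the mttf figures above are easy to check numerically; a minimal sketch using the table 3 rates (an illustration only, not the authors' code):

    import math

    # failure rates from table 3, in failures/year
    rates = {
        "bypass diode": 0.027, "dc switch": 0.0018, "ac wire": 0.00011,
        "dc wire": 0.00042, "ac switch": 0.0003, "string fuse": 0.00017,
        "pv inverter": 0.125,
    }
    lam = sum(rates.values())          # total exit rate from state 0

    # eq. (5): dP0/dt = -lam * P0, so R(t) = P0(t) = exp(-lam * t),
    # and for an exponential model the mttf is 1/lam
    r_6yr = math.exp(-lam * 6.0)
    mttf = 1.0 / lam
    mttf_no_inverter = 1.0 / (lam - rates["pv inverter"])

    print(r_6yr)              # ~0.40 after six years
    print(mttf)               # ~6.5 years, cf. figure 5
    print(mttf_no_inverter)   # ~33.6 years, cf. figure 6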
5. criticality analysis of pv inverter

our past work in [29] considered the pv inverter (pvi) to be just one component among the other bos components. however, the results of the ca on the bos, presented in figure 3, show that the pvi contributes significantly to the failures of the bos. meanwhile, the main concern of current studies is largely limited to reliability estimations of the pvi and reliability improvements of current pvi designs [30]-[34]. therefore, performing a ca on the pvi components is crucial to implement the best preventive measures, based on the prioritization of the failure modes of the pvi components, in order to limit and avoid the outage modes of the pvi and to reduce its risk number in the ca of the bos. the availability of failure data plays a critical role in determining whether the ca will be qualitative, based on past experience in the field, or quantitative, based on previous failure information. both approaches, however, require an identification of the system components as a first step before carrying out the ca. a typical three-phase pvi includes igbt power modules, cooling fans, control software and dc-link capacitors implemented on printed circuit boards (pcbs), in addition to ac and dc contactors.

a quantitative ca approach is followed for the pvi, based on a survey carried out by the sunedison company, which operates more than 600 pv systems on four continents, with 1500 in-service inverters from 16 vendors and more than 2.2 million pv modules from 35 manufacturers [35]. the surveyed period runs from january 2010 to march 2012 and covers 350 systems. the most significant failure modes of the pvi in the sunedison survey are shown in table 5. inverter tickets are issued whenever a pvi failure is reported; accordingly, the percentage of tickets and the percentage of kwh lost reflect the occurrence (o) and severity (s) of the ca, respectively. regarding occurrence, failure modes with a percentage of tickets greater than 30 % are considered very high, almost unavoidable failures; ticket percentages between 10 % and 20 % and between 20 % and 30 % are assigned to occasional and highly repeated failures, respectively; and a failure mode with a ticket percentage below 10 % represents a failure with low occurrence. the severity of a failure mode is evaluated considering the performance and safety issues resulting from its occurrence: severity is considered low or minor if the percentage of lost kwh is below 10 %, and it is assigned moderate, high and very high rankings when the percentage of lost kwh reaches 10 %-20 %, 20 %-30 % and more than 30 %, respectively. the detection (d) is rated according to the probability of identifying the failure before the system is affected; this can be done through different field indicators, for instance the alarm system, visual inspection, and comparison of the measured parameters of monitoring systems against reference values. the ca is conducted based on the criteria of table 1 and the survey data of table 5, and the rpn values are evaluated in table 6. in the detection evaluation, it is worth mentioning that the igbt and the pcb are assigned very low and low detection possibilities, respectively, according to the iec criteria, since both are hard to detect in the field and several studies are currently under way to enhance their detection methods. on the other hand, the control software has the highest detection possibility, because changes in the generated electricity can be easily observed with strong power and energy metering; moreover, the protection scheme is tested frequently on site during the scheduled maintenance activities.

table 5. frequency of tickets and associated energy loss for each pvi failure mode (percentage of tickets / percentage of kwh lost): control software 28 % / 15 %; pcb board 13 % / 22 %; ac contactors 12 % / 13 %; dc contactor 4 % / 1 %; fans 6 % / 5 %; igbt modules 6 % / 5 %; capacitors 3 % / 7 %.

table 6. ca of the pvi (s, o, d, rpn): igbt power module 3, 3, 7, 63; dc-link capacitor 3, 2, 5, 30; ac/dc contactors 6, 5, 5, 150; cooling fans 4, 3, 4, 48; pcb 7, 4, 6, 168; control software 6, 7, 3, 126.
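the percentage-to-ranking mapping described above can be written down compactly; in the sketch below the handling of the band edges is an assumption, since the text does not state on which side they fall:

    def occurrence_band(ticket_pct):
        # iec-style occurrence rankings implied by the ticket-percentage
        # bins in the text, read against the occurrence bands of table 1
        if ticket_pct >= 30: return (9, 9)   # very high: almost unavoidable
        if ticket_pct >= 20: return (7, 8)   # high: repeated failures
        if ticket_pct >= 10: return (4, 6)   # moderate: occasional failures
        return (2, 3)                        # low: relatively few failures

    # the table 6 assignments fall inside these bands, e.g. control
    # software (28 % of tickets) -> band (7, 8), assigned o = 7;
    # pcb (13 %) -> band (4, 6), assigned o = 4
    for name, pct in [("control software", 28), ("pcb", 13), ("igbt modules", 6)]:
        print(name, occurrence_band(pct))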
the results of the ca in table 6 show that the pcb has the highest rpn of all the components: although it has a low rate of occurrence, its severity is the highest, because the highest percentage of lost kwh in the sunedison survey is assigned to its failures; in addition, it has a high detection ranking, since the possibilities of detecting pcb failures in the field are very limited. meanwhile, the ac/dc contactors record the second highest rpn because of the high severity value assigned to them in the sunedison survey: improper operation of the contactor endangers human life and contributes to fault propagation in the grid, and in the field ac/dc contactors generally have only a moderate detection facility, available during routine protection tests. the control software has the third highest rpn, although its failures can be easily detected in the field. the igbt power module has a moderate rpn: it has the highest detection ranking, since it is generally quite difficult to detect its failure in the field, but both its recorded failure occurrence and its percentage of lost kwh in the sunedison survey are moderate compared with the other failures; consequently, the igbt is fourth in priority. both the cooling fans and the dc-link capacitors have the lowest rpn: in the sunedison survey their failures have an insignificant impact on power interruption, they have a low frequency of occurrence, and both have high possibilities of being detected in the field. for instance, the ipc-9591 standard [36] specifies six symptoms of cooling-fan failure in the field: reduction of the shaft rotational speed, increase in current consumption, loud noise, incorrect or erratic operation of the electronic interface, visible cracking of the structure, and visible leakage of lubricants. on the other side, the symptoms of dc-link capacitor failure can be detected through rf noise during operation and dissipation-factor tests during maintenance, besides esr monitoring during operation.

6. conclusions

the root failure causes of the pv string bos have been studied in detail through an fmea approach, and a qualitative ca was conducted in order to prioritize these failure causes, enhance bos maintenance activities and support decision-making. the ca shows that the pv inverter has a high rpn compared with the other failure causes, and this result was supported by a markov process. in the markov analysis, the mttf of the string bos is significantly low, around six years, due to the high failure rate of the inverter; the estimated mttf of the string bos excluding the inverter is around 33 years, which can be an acceptable value compared with the lifetime of a pv module. further investigations were conducted on the pvi to prioritize the maintenance activities by determining the rpn of its component failures through a quantitative ca.

references
[1] a. l. rosenthal, m. g. thomas, s. j. durand, "a ten year review of performance of photovoltaic systems," proc. 23rd ieee photovoltaic specialists conference, 1993, pp. 1289-1291.
[2] g. zini, c. mangeant, j. merten, "reliability of large-scale grid-connected photovoltaic systems," renewable energy, vol. 36, issue 9, september 2011, pp. 2334-2340.
[3] a. charki, d. bigaud, "availability estimation of a photovoltaic system," proc. reliability and maintainability symposium (rams), 28-31 jan. 2013, pp. 1-5.
[4] a. colli, "failure mode and effect analysis for photovoltaic systems," renewable and sustainable energy reviews, vol. 50, october 2015, pp. 804-809.
[5] m. catelani, l. ciani, l. cristaldi, m. faifer, m. lazzaroni, m. khalil, "toward a new definition of fmeca approach," proc. ieee instrumentation and measurement technology conference (i2mtc), 11-14 may 2015, pp. 981-986.
[6] g. cipriani, v. di dio, d. la manna, r. miceli, g. r. galluzzo, "technical and economical comparison between different topologies of pv plant under mismatch effect," proc. ninth international conference on ecological vehicles and renewable energies (ever), 25-27 march 2014, pp. 1-6.
[7] an3432 application note, how to choose a bypass diode for a silicon panel junction box, september 2011. available at http://www.st.com
[8] n. g. dhere, n. shiradkar, e. schneller, v. gade, "the reliability of bypass diodes in pv modules," proc. spie 8825, reliability of photovoltaic cells, modules, components, and systems vi, 88250i, september 24, 2013, doi: 10.1117/12.2026782.
[9] m. c. alonso-garcía, j. m. ruíz, "analysis and modelling the reverse characteristic of photovoltaic cells," solar energy materials and solar cells, vol. 90, issues 7-8, may 2006, pp. 1105-1120.
[10] s. silvestre et al., "study of bypass diodes configuration on pv modules," applied energy, vol. 86, issue 9, 2009, pp. 1632-1640.
[11] f. jackson, planning and installing photovoltaic systems: a guide for installers, architects, and engineers, deutsche gesellschaft für sonnenenergie, 2nd edition, 2008.
[12] s. cheng, k. tom, m. pecht, "failure precursors for polymer resettable fuses," ieee transactions on device and materials reliability, vol. 10, no. 3, sept. 2010, pp. 374-380.
[13] j. zhao, s. cao, y. dong, h. zhang, j. gong, q. xie, "analysis of typical failure modes and causes for space thick film fuse," proc. 2013 international conference on quality, reliability, risk, maintenance, and safety engineering (qr2mse), 15-18 july 2013, pp. 1051-1055.
[14] ieee std c37.100-1992, ieee standard definitions for power switchgear, 1992.
[15] abb, technical application papers no. 10: photovoltaic plants, vol. 10, 2011.
[16] k. sheng, s. j. finney, b. w. williams, "thermal stability of igbt high-frequency operation," ieee transactions on industrial electronics, vol. 47, no. 1, feb. 2000, pp. 9-16.
[17] j.-m. thebaud, e. woirgard, c. zardini, k.-h. sommer, "thermal fatigue resistance evaluation of solder joints in igbt power modules for traction applications," proc. 2000 ieee 31st annual power electronics specialists conference (pesc 00), vol. 3, 2000, pp. 1285-1290.
[18] m. t. do, j. l. auge, o. lesaint, "partial discharges in silicone gel in the temperature range 20-150 °c," proc. 2006 ieee conference on electrical insulation and dielectric phenomena, 15-18 oct. 2006, pp. 590-593.
[19] r. wu, f. blaabjerg, h. wang, m. liserre, "overview of catastrophic failures of freewheeling diodes in power electronic circuits," microelectronics reliability, vol. 53, issues 9-11, september-november 2013, pp. 1169-1828.
[20] y. bin, l. yudong, l. daojun, "key failure modes of solder joints on hasl pcbs and root cause analysis," proc. 2013 14th international conference on electronic packaging technology (icept), 11-14 aug. 2013, pp. 742-745.
[21] m. khalil, l. cristaldi, m. faifer, "fmeca analysis for the assessing of maintenance activity for power transformers," proc. maintenance performance measurement and management (mpmm 2014), coimbra, portugal, 4-5 sept. 2014, pp. 21-26.
[22] m. rausand, a. høyland, system reliability theory: models, statistical methods, and applications, 2nd edition, wiley, 2004, isbn 978-0-471-47133-2.
[23] iec 61165, application of markov techniques, 2nd ed., 2007.
[24] iec 61508, functional safety of electrical/electronic/programmable electronic safety-related systems, 2nd ed., 2010.
[25] a review of pv inverter technology cost and performance projections, navigant consulting, burlington, ma, nrel subcontract rep. nrel/sr-620-38771, jan. 2006.
[26] reliability prediction of electronic equipment, mil-hdbk-217f, notice 1, notice 2.
[27] g. zini, c. mangeant, j. merten, "reliability of large-scale grid-connected photovoltaic systems," renewable energy, vol. 36, issue 9, 2011, pp. 2334-2340.
[28] a. charki, d. bigaud, "availability estimation of a photovoltaic system," proc. reliability and maintainability symposium (rams), 28-31 jan. 2013, pp. 1-5.
[29] l. cristaldi, m. khalil, p. soulatiantork, "reliability assessment of pv bos," proc. imeko tc10 workshop on technical diagnostics: advanced measurement tools in technical diagnostics for systems' reliability and safety, milan, italy, june 2016.
[30] s. harb, r. s. balog, "reliability of candidate photovoltaic module-integrated-inverter (pv-mii) topologies: a usage model approach," ieee transactions on power electronics, vol. 28, no. 6, june 2013, pp. 3019-3027.
[31] a. pregelj, m. begovic, a. rohatgi, "impact of inverter configuration on pv system reliability and energy production," proc. ieee photovoltaic specialists conference, 2002, pp. 1388-1391.
[32] j. m. fife, m. scharf, s. g. hummel, r. w. morris, "field reliability analysis methods for photovoltaic inverters," proc. ieee photovoltaic specialists conference (pvsc), 2010, pp. 002767-002772.
[33] l. battistelli, e. chiodo, d. lauria, "bayes assessment of photovoltaic inverter system reliability and availability," proc. international symposium on power electronics, electrical drives, automation and motion (speedam), june 2010.
[34] z. j. ma, s. thomas, "reliability and maintainability in photovoltaic inverter design," proc. reliability and maintainability symposium (rams), lake buena vista, fl, 2011, pp. 1-5.
[35] a. golnas, "pv system reliability: an operator's perspective," ieee journal of photovoltaics, vol. 3, issue 1, 2013, pp. 416-421.
[36] h. oh, t. shibutani, m. pecht, "precursor monitoring approach for reliability assessment of cooling fans," journal of intelligent manufacturing, vol. 23, 2012, pp. 173-179.
acta imeko issn: 2221-870x december 2016, volume 5, number 4, 2-3

introduction to the special section of the 14th imeko tc10 workshop on technical diagnostics

lorenzo ciani, marcantonio catelani, department of information engineering, university of florence, italy

section: editorial. citation: lorenzo ciani, marcantonio catelani, introduction to the special section of the 14th imeko tc10 workshop on technical diagnostics, acta imeko, vol. 5, no. 4, article 1, december 2016, identifier: imeko-acta-05 (2016)-04-01. section editor: paul regtien, measurement science consultancy, the netherlands. received december 22, 2016; in final form december 22, 2016; published december 2016. copyright: © 2016 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. corresponding author: lorenzo ciani, email: lorenzo.ciani@unifi.it

dear reader, in the first part of this issue acta imeko publishes ten papers that were originally presented at the 14th imeko tc10 workshop on technical diagnostics in milan (italy) and that are presented here in their extended versions. technical diagnostics has been taking on an increasingly important role over the years. behind this lies the fact that in high-tech industry, and in many fields of application, it is mandatory to fulfil the requirements related to diagnostics, reliability, maintainability and logistic support, as well as risk and safety assessment.
the capability to monitor and diagnose a component, a system, a piece of equipment or an industrial plant (in general terms, an item) with the aim of verifying its functions represents the starting point for more complex rams (reliability, availability, maintainability and safety) evaluations and assessments. the 14th imeko tc10 workshop was held at the politecnico di milano, italy, on june 27-28, 2016. the preparation of the technical program was particularly challenging, since 95 abstracts were received from all over the world, and the final program hosted 39 oral and 42 poster papers scheduled over two days. the first paper, by giulio d'emilia, david di gasbarro and emanuela natale, describes a methodology for continuously checking the settings of a low-cost vision system for the automatic geometrical measurement of welding embedded on components of complicated shape. the method aims to check that optimal measuring conditions hold by using a machine-learning approach for the vision system: based on such a methodology, single images can be used to check the settings, therefore allowing continuous, on-line monitoring of the capabilities of the optical measuring system. the second paper, by christian schlegel, holger kahmann and rolf kumme, proposes a solution for the traceable torque calibration of nacelles by using torque transfer transducers calibrated in special standard calibration machines traced to the si; in this context, a calibration of such a transducer up to 1.1 mn·m was realized. the third paper is the result of an international collaboration among researchers from the national research university of information technologies, mechanics and optics, saint petersburg (russia), ilmenau university of technology, ilmenau (germany) and the society for production engineering and development (germany). this research showed the successful use of a computer-vision approach for the roughness assessment of a metal surface with the help of different texture features extracted from 2d images; such a method of non-contact quality control makes it possible to detect parts with certain defects in a fast and reliable way and can increase the yield of good products. the fourth paper in this issue is another international collaboration, between politecnico di milano (italy), national research council cnr-ifn, milan (italy), bauman moscow state technical university (bmstu) centre for photonics and infrared technology, moscow (russia) and swansea university, swansea (wales, uk). the paper deals with the development and characterization of a novel fibre-pumped single-mode yb,er:glass microchip laser; single-frequency output power up to 8 mw was observed, with a laser slope efficiency of ∼10 % and a laser threshold as low as 30 mw. the fifth contribution, by philippe chiquet et al., investigates the effect of short pulsed program/erase signals on the functioning of flash memory transistors; a novel experimental setup used to replace standard electric signals with short pulses is described, and measurement results showing the benefits of programming and erasing non-volatile memories with short pulses are presented. the sixth paper, by alberto lavatelli and emanuele zappa, investigates and discusses the main limitations of vision-based modal analysis in comparison with the classic transducer-based approach, with particular focus on isolating the main sources of uncertainty in dynamic contexts.
the experiments demonstrated that the frequency-response measurement bias is also present in real situations, with good statistical significance. the seventh paper, by zsolt jános viharos et al., introduces a methodology to define production trend classes, together with results that provide trend prognosis in a given manufacturing situation. the proposed solution can be applied to realize production control inside the tolerance limits, proactively preventing the production process from going outside the given upper and lower tolerance limits; the approach was developed and validated using real data. the eighth paper, by micaela caserza magro et al., presents a roadmap towards the condition-based maintenance of a fleet of railway vehicles; the paper associates each maintenance policy with its benefits and its requirements in terms of technological infrastructure and operating costs. bombardier transportation italy, which supports this research, started on this roadmap a few years ago, moving from a reactive maintenance policy to a proactive one. the ninth contribution, by tommaso addabbo et al., takes into consideration the most commonly suggested standards used in designing radar-based railway level-crossing surveillance systems and introduces new general concepts that demystify the use of such standards in actual applications; the paper illustrates the roadmap to be followed, in general, when designing such monitoring systems, to minimize the risk due to object misdetection occurring on barrier closure when exploiting radar technology. finally, in the tenth paper of this issue, authored by mohamed khalil, christian laurano, giacomo leone and michele zanoni, an analysis of the outages that occurred on the italian overhead transmission lines (ohtls) from 2008 to 2015 is carried out. a new, simple and effective reliability index, namely the severity factor, is introduced with the aim of driving the prioritization of the failure causes and the enhancement of the maintenance activities; thanks to the introduction of the severity factor, the proposed methodology is a useful and effective tool for identifying the criticalities of the transmission network and enhancing the related maintenance activities. let me take this occasion to remind everyone of the next, 15th imeko tc10 workshop on technical diagnostics, "technical diagnostics in the cyber-physical era", which will be held in budapest, hungary, on june 5-6, 2017; i am pleased to invite you to attend the conference. the workshop is a forum for advancing knowledge and exchanging ideas on methods, principles, instruments and tools, standards and industrial applications of technical diagnostics, as well as their diffusion across the scientific community. participants have an excellent opportunity to meet top specialists from industry and academia from all over the world and to enhance their international co-operation. the program will feature industry-leading keynote speakers and selected presentations.
all details about the imeko tc10 workshop are available on the conference website: http://www.imekotc10-2017.sztaki.hu/ lorenzo ciani, guest editor; marcantonio catelani, imeko tc10 chair.

acta imeko issn: 2221-870x july 2017, volume 6, number 2, 4-7

realization of an si traceable small force of 10 to 100 micronewton using an electrostatic measuring system

jile jiang, gang hu, zhimin zhang, force and torque laboratory, division of mechanics and acoustics, national institute of metrology, china

section: research paper. keywords: small force; electrostatic; capacitance gradient. citation: jile jiang, gang hu, zhimin zhang, realization of an si traceable small force of 10 to 100 micro-newton using an electrostatic measuring system, acta imeko, vol. 6, no. 2, article 2, july 2017, identifier: imeko-acta-06 (2017)-02-02. section editor: min-seok kim, research institute of standards and science, korea. received march 18, 2016; in final form february 4, 2017; published april 2017. copyright: © 2017 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. funding: this work was supported by the national key technology research and development program of the ministry of science and technology of china, no. 2011bak15b06. corresponding author: jile jiang, e-mail: jiangjl@nim.ac.cn

abstract: a small force of (10-100) μn traceable to the international system of units (si) has been realized using an electrostatic measuring system at the national institute of metrology, china. the key component of the measuring system is a pair of coaxial cylindrical electrodes. the inner electrode is suspended with the support of a self-balanced flexure hinge, while the outer electrode is attached to a piezoelectric moving stage. the stiffness of the self-balanced flexure hinge was designed to be both sufficiently stable and sensitive to the small force applied to the inner electrode. two sets of cameras were used to capture the shape of the electrodes and to obtain a better coaxial arrangement of the inner and outer electrodes. with the help of a capacitance bridge and a piezoelectric moving stage, the relative standard uncertainty of the capacitance gradient does not exceed 0.04 %. associated with a laser interferometer and a dc voltage power source, the feedback system that controls the position of the inner electrode is responsible for the generation of a force of 10-100 μn. the standard uncertainty associated with a force of 100 μn does not exceed 0.1 %.

1. introduction

the invention of atomic force microscopy (afm) has had a great impact on the development of nanotechnology [1]. this technology has the capability of characterizing material properties such as surface morphology and interaction potentials [2], [3]. the reliable measurement of the force based on an afm tip depends on the accuracy of its spring constant, whose determination is essential for the absolute measurement of forces below 1 mn. many methods have been proposed to estimate the spring constant of the afm tip, but these methods are not si traceable and lack accuracy [4]-[9]. high-accuracy measurements of si traceable small forces in the range 1-100 μn have recently been realized by several nmis [10]-[15]. most of the standard devices are based on the electrostatic force, and a small force-measuring system based on it has recently been established. most parts of the measuring system have been completed, and the entire project will be continued under further arrangements at nim, china.

2. principles

2.1. methodology

a standard measuring system for small forces in the range 10-100 μn has been realized using an electrostatic force. the structure of the entire measuring system is shown schematically in figure 1. the measuring system was placed in a vacuum chamber, and the chamber was placed on an optical table supported by six legs. the level of the chamber was adjusted with an air bubble level. the optical table features a mass block to increase the inertia of the entire system. the inner diameter of the vacuum chamber is 700 mm. because vibration from the pump may cause instability of the inner electrode, the measurements were performed under normal air pressure.
the key component of the measuring system is the electrostatic force generator, a pair of coaxial cylindrical electrodes, shown in figure 1. the inner electrode is mounted on the self-balanced hinge, while the outer electrode is mounted on the moving stage. the self-balanced hinge is made of copper and its spring constant is 11.679 n/m. the inner electrode can be moved along the z axis, nominally parallel to the direction of gravity. the degree of overlap between the two cylinders is measured by a heterodyne laser interferometer. the surface of the bottom inside the inner electrode is smooth enough to reflect the laser beam, and a mirror with an al and mgf2 coating was mounted on the outer electrode to reflect the reference laser beam. in order to obtain a proper reflection of the laser beam from the two cylinders and a proper coaxial alignment, four main steps were followed. the outer electrode was first adjusted with the air bubble level to ensure that its axis was aligned with the direction of gravity. the laser beam was then tuned to obtain a proper reflection from the outer electrode. the inner electrode was adjusted with the help of an x-y plane moving stage and x-y rotation stages to obtain the desired alignment with the outer electrode. the alignment is checked by observing the shape of the two electrodes: two cameras were used to capture the shape of the electrodes in the x and y directions. the centre distance of the electrodes should be less than 1 μm, and the angle between the axes of the electrodes should be less than 1 mrad. although a self-balanced hinge is introduced into the system, an unexpected movement of the inner electrode can still be observed, as shown in figure 2. this movement was unpredictable: it measured less than 5 μm in 8 hours when the operators were in the room and less than 1 μm in 80 hours without operators. the creep may have been initiated by temperature variation, because the thermal expansion coefficients of the materials comprising the supporting stages of the inner and outer electrodes are different.
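as a plausibility check on the geometry (an illustration only: the paper does not quote the electrode radii), the ideal long-coaxial-cylinder capacitance c = 2πε0 z / ln(b/a) over the overlap length z gives dc/dz = 2πε0 / ln(b/a), which can be inverted for the radius ratio implied by the measured gradient of about 1.019 pf/mm reported in section 2.2:

    import math

    EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
    dc_dz = 1.019e-9            # measured gradient, 1.019 pF/mm in F/m

    # dC/dz = 2*pi*eps0 / ln(b/a) for ideal long coaxial cylinders,
    # neglecting end effects; solve for the outer/inner radius ratio
    ratio = math.exp(2.0 * math.pi * EPS0 / dc_dz)
    print(ratio)                # ~1.056, i.e. a narrow annular gap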
the comparison was achieved by controlling the voltage on the electrodes to maintain a fixed position of the two cylinders. a proportional-integral-derivative (pid) method was applied to the system using a computer program. once the weight was loaded on the inner electrode, the displacement was measured by the interferometer. then, after the weight was unloaded, an electrostatic force was generated to move the inner electrode back to its original position.

2.2. capacitance gradient measurement

the force generated by the system is determined by two factors: the voltage between the two electrodes and the capacitance gradient along the z direction, described as

f_e = \frac{1}{2} \frac{dc}{dz} u^2 . (1)

it is very important to obtain dc/dz with high accuracy. when measuring the capacitance gradient, the inner electrode is considered to be static. the outer electrode was driven vertically by a piezoelectric stage. the total nominal displacement between the electrodes was 100 μm. in each test, dc/dz was measured when the displacement between the two electrodes was −50, 0, and +50 μm. the outer electrode was held in each position for 30 s. the capacitance was measured by a sensitive bridge (andeen-hagerling model ah2700a, cleveland, oh, usa), while the displacement was measured by the interferometer. because the reference mirror is fixed to the outer electrode, the displacement from the interferometer equals the displacement between the inner and outer electrode, and the creep effect can be disregarded here. as shown in figure 4, dc/dz is calculated from a linear fit. typical measurement results are shown in figure 4. as can be seen in the expanded part in figure 4a, the nonlinearity of the capacitance versus displacement within 2 nm may be the cause of the difference and of the higher standard deviation.

figure 1. schematic of the electrostatic measuring system (cantilever holder assembly, outer electrode, adjustment assembly).
figure 2. creep of the inner electrode (displacement vs time: less than 5 μm in 8 hours at 21.98 °c with operators; less than 1 μm in 80 hours at 20.52±0.13 °c without operators).
figure 3. process of comparison between deadweight and force (the voltage steps between u_l and u_h as the weight is loaded and unloaded, while the relative position is held at the null position).

3. comparison to deadweight

the comparisons to deadweights of 1 mg, 2 mg, 5 mg, and 10 mg were carried out after the capacitance gradient was measured. the comparison procedure is shown in figure 3. the voltage was held for approximately 200 s each time the deadweight was loaded and unloaded. data acquisition ran continuously, but only after 100 s can the value of the voltage be considered stable. nominally, the room temperature was 21.0±0.5 °c. the typical response of the voltage and displacement of the inner electrode is shown in figure 5, in which the deadweight artefact was 10 mg. the comparison between deadweights of 1 mg, 2 mg, 5 mg, and 10 mg is shown in table 1. the relative standard deviation of the electrostatic force f_e became smaller when a larger deadweight was loaded.
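as a concrete illustration of equation (1) and of the least-squares slope used in figure 4, the following minimal python/numpy sketch (our illustration, not the authors' acquisition software; the sample numbers are invented so as to land near the ~1.019 pf/mm gradient reported in figure 4) fits dc/dz from a set of (z, c) readings and converts a control voltage into the generated force:

```python
import numpy as np

def capacitance_gradient(z_um, c_pf):
    """least-squares slope dc/dz from displacement (um) and capacitance (pf) readings,
    i.e. sum((z_i - z_mean)(c_i - c_mean)) / sum((z_i - z_mean)^2)."""
    z = np.asarray(z_um, dtype=float)
    c = np.asarray(c_pf, dtype=float)
    dz = z - z.mean()
    return np.sum(dz * (c - c.mean())) / np.sum(dz ** 2)  # pf/um

def electrostatic_force(dcdz_pf_per_mm, u_volt):
    """f_e = 0.5 * (dc/dz) * u^2, converted to newtons (1 pf/mm = 1e-9 f/m)."""
    dcdz_si = dcdz_pf_per_mm * 1e-9  # f/m
    return 0.5 * dcdz_si * u_volt ** 2  # n

# illustrative numbers only, chosen to give dc/dz close to ~1.019 pf/mm
z = [0.0, 12.5, 25.0, 37.5, 50.0]                 # um
c = [14.000, 14.0127, 14.0255, 14.0382, 14.0510]  # pf
dcdz = capacitance_gradient(z, c) * 1000.0        # pf/um -> pf/mm
print(f"dc/dz = {dcdz:.4f} pf/mm")
print(f"f_e at 443 v = {electrostatic_force(dcdz, 443.0) * 1e6:.2f} uN")  # ~100 uN
```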
4. uncertainty

considering (1) and assuming all the factors are uncorrelated, the uncertainty of the measured force is

u^2(f) = \left( \frac{\partial f}{\partial (dc/dz)} \right)^2 u^2(dc/dz) + \left( \frac{\partial f}{\partial u} \right)^2 u^2(u) , (2)

where the capacitance gradient is obtained by the following method:

\frac{dc}{dz} = \frac{\sum_{i=1}^{n} (z_i - \bar{z})(c_i - \bar{c})}{\sum_{i=1}^{n} (z_i - \bar{z})^2} . (3)

c_i is the capacitance value corresponding to the relative displacement z_i of the inner electrode, and the average values of c_i and z_i are \bar{c} and \bar{z}, respectively. it can be verified that

\frac{\partial (dc/dz)}{\partial z_i} = \frac{(c_i - \bar{c}) - 2 (z_i - \bar{z}) \, dc/dz}{\sum_{i=1}^{n} (z_i - \bar{z})^2} (4)

and

\frac{\partial (dc/dz)}{\partial c_i} = \frac{z_i - \bar{z}}{\sum_{i=1}^{n} (z_i - \bar{z})^2} . (5)

the uncertainty contributed by the least-squares linear fit used to determine the capacitance gradient can therefore be omitted. the relative standard uncertainty of the force generated by the system is expressed as

u_r^2(f) = u_r^2(r) + u_r^2(c) + u_r^2(z) + u_r^2(d) + 4 u_r^2(u) . (6)

u_r(r) is caused mainly by the repeatability of dc/dz, associated with vibrations from the air flow and unexpected vibration from the base. the actual displacements of the inner and outer electrodes were measured by the laser interferometer, so u_r(z) should be considered. u_r(u) is caused by voltage noise from the dc power source; this is related to the actual force generated. noise in the measurement of the capacitance, u_r(c), and the displacement error of the piezoelectric moving stage, u_r(d), are also included. according to the calibration results of the ah2700a capacitance bridge, the dc power source, the laser interferometer, and the piezoelectric stage, the relative standard uncertainties u_r(c), u_r(u), u_r(z), and u_r(d) are 0.5×10−6, 2.5×10−5, 1×10−8, and 1×10−4, respectively. according to the results shown in figure 4, u(r) = 1.8×10−4 pf/mm, and the relative standard uncertainty of the force generated by the system is 2.69×10−4. according to the results of the comparison shown in table 1, the relative difference between the electrostatic force and the gravity of the deadweight is larger than 2.69×10−4.

figure 4. measurement of the capacitance gradient. a: capacitance values at 0 μm and ±50 μm; drifts of the capacitance and displacement are 0.0013 pf and 2.5 nm, respectively. b: variation of dc/dz vs time (mean = 1.019077 pf/mm, σ = 3.0×10−4 pf/mm, σ/√n = 6.7×10−5 pf/mm). c: capacitance gradient of 35 sets of six points at 0, 10, 20, 30, 40, and 50 μm (mean = 1.018901 pf/mm, σ = 1.5×10−3 pf/mm, σ/√n = 1.8×10−4 pf/mm), 21.75±0.12 °c. d: capacitance gradient of 10 sets of nine points at 0, 12.5, 25, 37.5, 50, 62.5, 75, 87.5, and 100 μm (mean = 1.019012 pf/mm, σ = 3×10−4 pf/mm, σ/√n = 6.2×10−5 pf/mm), 21.54±0.11 °c.
figure 5. variations of the voltage and displacement of the inner electrode when a deadweight of 10 mg was applied.
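the quadrature combination of equation (6) is easy to reproduce; this is a small sketch under our own assumptions (in particular, u_r(r) is taken here as u(r) divided by the mean gradient of about 1.019 pf/mm, which yields a somewhat smaller value than the published 2.69×10−4, so the exact normalization used by the authors may differ):

```python
import math

def relative_force_uncertainty(ur_r, ur_c, ur_z, ur_d, ur_u):
    """equation (6): u_r^2(f) = u_r^2(r) + u_r^2(c) + u_r^2(z) + u_r^2(d) + 4*u_r^2(u);
    the factor 4 on the voltage term follows from f being proportional to u^2."""
    return math.sqrt(ur_r**2 + ur_c**2 + ur_z**2 + ur_d**2 + 4.0 * ur_u**2)

# quoted calibration values; ur_r normalization is our assumption
ur_r = 1.8e-4 / 1.019
ur_f = relative_force_uncertainty(ur_r, 0.5e-6, 1e-8, 1e-4, 2.5e-5)
print(f"u_r(f) ~ {ur_f:.2e}")  # ~2e-4, the same order as the published 2.69e-4
```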
in general, the following aspects should be considered: 1) the test mass is placed close to the inner electrode. the shapes of the test masses are different, which will affect the actual capacitance measurement, since the mass is lifted by a stainless-steel wire and attached directly to the self-balanced hinge. 2) the fork used to lift the test mass is made of stainless steel and is isolated from the ground. considering that the outer electrode is grounded and the inner electrode is held at a high voltage, the test mass can become electrically charged and attracted by the inner electrode; this could be studied by reversing the polarity of the electrodes. the primary improvement of the system should therefore be to electrically isolate the inner and outer electrodes from the other parts, especially the test mass.

5. conclusions

forces of (10–100) μn traceable to the international system of units (si) were realized by a measuring system based on the electrostatic force. primary analysis shows a relative standard uncertainty of 2.69×10−4. the deviation between the deadweight and the electrostatic force was less than 0.6 %.

acknowledgements

this study was supported by “research on measurement standards and reference materials based on micro-nano technology” (no. 2011bak15b00) and by “research on key technologies in establishment and traceability of microgram mass standards and micro-nano force standards” (no. 2011bak15b06), both subprojects of the national key technology research and development program of the ministry of science and technology of china.

references
[1] binnig g, quate c f, gerber ch, “atomic force microscope”, phys. rev. lett., 56 (1986), pp. 930-933.
[2] baibich m n, broto j m, fert a, “giant magnetoresistance of (001)fe/(001)cr magnetic superlattices”, phys. rev. lett., 61 (1988), pp. 2472-2475.
[3] wu s, fu x, hu x d, “manipulation and behavior modeling of one-dimensional nanomaterials on a structured surface”, appl. surf. sci., 256 (2010), pp. 4738-4744.
[4] clifford c a, seah m p, “the determination of atomic force microscope cantilever spring constants via dimensional methods for nanomechanical analysis”, nanotechnology, 16 (2005), pp. 1666-1680.
[5] neumeister j m, ducker w a, “lateral, normal and longitudinal spring constants of atomic force microscopy cantilevers”, rev. sci. instrum., 65 (1994), pp. 2527-2531.
[6] cleveland j p, manne s, bocek d, et al., “a nondestructive method for determining the spring constant of cantilevers for scanning force microscopy”, rev. sci. instrum., 64 (1993), pp. 403-405.
[7] burnham n a, chen x, hodges c s, et al., “comparison of calibration methods for atomic-force microscopy cantilevers”, nanotechnology, 14 (2003), pp. 1-6.
[8] sader j e, chon j w m, mulvaney p, “calibration of rectangular atomic force microscope cantilevers”, rev. sci. instrum., 70 (1999), pp. 3967-3969.
[9] ohler b, “cantilever spring constant calibration using laser doppler vibrometry”, rev. sci. instrum., 78 (2007), 063701.
[10] pratt j r, newell d b, kramar j a, “a flexure balance with adjustable restoring torque for nanonewton force measurement”, joint international conference imeko tc3/tc5/tc20, celle, germany: vdi berichte, 2002, 1685, pp. 77-82.
[11] pratt j r, smith d t, kramar j a, “microforce and instrumented indentation research at the national institute of standards and technology”, gaithersburg, md: nist, 2003.
[12] christopher w j, richard k l, “review of low force transfer artefact technologies”, report eng 5, middlesex: npl, 2008.
[13] richard k l, simon o,
“design of a bi-directional electrostatic actuator for realising nanonewton to micronewton forces”, report depc-em 001, middlesex: npl, 2004.
[14] chen s j, pan s s, “a force measurement system based on an electrostatic sensing and actuating technique for calibrating force in a micronewton range with a resolution of nanonewton scale”, meas. sci. technol., 22 (2011), 045104.
[15] vladimir n, “facility and methods for the measurement of micro and nano forces in the range below 10−5 n with a resolution of 10−12 n (development concept)”, meas. sci. technol., 18 (2007), pp. 360-366.

table 1. comparison between deadweights of 1, 2, 5, and 10 mg.

gravity of deadweight g (µn, expanded uncertainty, k=2) | electrostatic force f_e (µn) | relative standard deviation of f_e, σ | (f_e − g)/g (%)
9.806±0.002  |  9.8008 | 0.0112  | −0.055
19.609±0.002 | 19.595  | 0.0143  | −0.071
49.009±0.002 | 48.746  | 0.0025  | −0.54
98.021±0.002 | 97.455  | 0.00074 | −0.58

acta imeko issn: 2221-870x december 2016, volume 5, number 4, 19-23

the application of texture features to quality control of metal surfaces

konstantin trambitckii1,2, katharina anding2, galina polte2, daniel garten3, victor musalimov1, petr kuritcyn2
1 national research university of information technologies, mechanics and optics, av. kronverksky 49, 197101 saint-petersburg, russia
2 ilmenau university of technology, gustav-kirchhoff-platz 2, 98693 ilmenau, germany
3 society for production engineering and development (gfe e.v.), naeherstiller strasse 10, 98639 schmalkalden, germany

section: research paper
keywords: image processing; texture features; surface quality
citation: konstantin trambitckii, katharina anding, galina polte, daniel garten, victor musalimov, petr kuritcyn, the application of texture features to quality control of metal surfaces, acta imeko, vol. 5, no. 4, article 4, december 2016, identifier: imeko-acta-05 (2016)-04-04
section editor: lorenzo ciani, university of florence, italy
received september 22, 2016; in final form november 7, 2016; published december 2016
copyright: © 2016 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: the research project, which forms the basis of this paper, was supported by the thuringian ministry of economy, employment and technology (tmwat) with means from the european social fund (esf). the author konstantin trambitckii was supported by the michail lomonosov grant
corresponding author: konstantin trambitckii, e-mail: konstantin.trambitckii@tu-ilmenau.de

abstract
quality assessment is an important step in production processes of metal parts. this step is required in order to check whether surface quality meets the requirements. progress in the field of computing technologies and computer vision gives the possibility of visual surface quality control with industrial cameras and image processing methods. authors of different papers proposed various texture feature algorithms which are suitable for different fields of image processing. in this research 27 texture features were calculated for surface images taken in different lighting conditions. correlation coefficients between these 2d texture features and 11 roughness 3d parameters were calculated. a strong correlation between 2d features and 3d parameters occurred for images captured under ring light conditions.

1. introduction

surface quality control is an important step in the production process of metal parts. it is necessary in order to check whether the surface quality meets the requirements. quality control makes it possible to detect parts with certain defects and can help to increase the yield and the reliability of good products. surface roughness assessment is one of the main ways to control the quality of a surface. there are two main groups of roughness assessment methods: contact and non-contact measurements.
non-contact measurements have the advantage over contact measurements that the measured surface remains undamaged at the end of the process. nowadays, optical industrial cameras are widely used in the non-contact quality assessment of metal surfaces, and the measurement speed of this group of surface assessment methods is higher than that of contact methods. if the surface of a measured part has a curved shape and the camera lens has a narrow depth of field (because of the short focal length), some regions of the image obtained by the camera can be out of focus. these regions carry less information about the surface, because fewer details can be extracted from them. trambitckii et al. used different texture features to segment and remove such regions [1]. once the out-of-focus regions have been removed, further surface quality analysis can be performed.

authors of different papers have proposed various methods of optical surface quality control, and image texture analysis has already been used in the field of quality assessment of metal surfaces. laws [2] and haralick [3] descriptors are used for the detection of various defects of metal surfaces. laws developed an approach based on the calculation of texture energy parameters: convolution masks are used to filter the source image, then the texture energy descriptors are computed by applying non-linear window operations to the filtered images, and finally all the descriptors are combined to achieve rotation invariance of the resulting feature. to evaluate haralick's descriptors, the grey level co-occurrence matrix (glcm) is first calculated for an image. the values of the glcm are calculated considering the spatial location of neighbourhood pixels; glcms can be created for different directions and different distances, taking the locations of neighbourhood pixels into account. after that step, haralick's descriptors (energy, contrast, correlation, etc.) are calculated for each glcm. alegre et al. used both groups of descriptors as input vectors for k-nn classifiers to divide stainless steel surfaces into two classes based on their roughness quality [4]. alves et al. used haralick's descriptors to define the roughness of the surface and used a multilayer perceptron (artificial neural network) to classify these surfaces into three classes [5]. another way to calculate descriptors for classification is the wavelet transform: alegre et al. [6] proposed to apply the haar transform to decompose the original surface images and, as the next step, calculated haralick's features for these decompositions. this method showed a reliable classification of surfaces based on their finish quality.
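to make the glcm construction described above concrete, here is a hedged python/numpy sketch (our own minimal version, not the implementation used in [3]-[6]) that builds a normalized co-occurrence matrix for one direction and distance and evaluates two of haralick's descriptors from it:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """grey level co-occurrence matrix for one direction/distance, after
    quantizing an 8-bit image down to 'levels' grey levels."""
    q = (img.astype(int) * levels) // 256
    h, w = q.shape
    m = np.zeros((levels, levels), dtype=float)
    for i in range(max(0, -dy), h - max(0, dy)):
        for j in range(max(0, -dx), w - max(0, dx)):
            m[q[i, j], q[i + dy, j + dx]] += 1
    return m / m.sum()  # normalize to joint probabilities

def haralick_energy_contrast(p):
    """two of haralick's descriptors computed from a normalized glcm p."""
    k = np.arange(p.shape[0])
    dif = k[:, None] - k[None, :]
    return (p ** 2).sum(), (p * dif ** 2).sum()  # energy, contrast

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (64, 64)).astype(np.uint8)
print(haralick_energy_contrast(glcm(img)))
```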
quality assessment can be based on the fourier transform as well. tsai et al. used the two-dimensional fourier transform to classify cast surfaces with different roughness into several groups [7]; naïve bayes and neural network classifiers were implemented to solve this task. the drawback of applying the fourier transform is the computational complexity of the method.

another group of texture features is represented by focus features. these features are successfully used in the autofocus systems of cameras, microscopes, etc., and the authors of several papers have developed and suggested different focus features. the focus features are essential for autofocus algorithms, giving numerical values to a whole image or to different regions of it. a focus feature gives the maximum response for the sharpest image from a set of images: the higher the contrast of a specific region, the higher the value of the focus feature will be. it is possible to use these features on in-focus surface images as descriptors of surface roughness, since different surface qualities influence the visual appearance of the surface on the image; such surfaces have different contrast and intensity variations, which can be described with the help of the focus texture features. nayar [8] developed a feature based on the laplacian operator, called the sum-modified laplacian (sml); the sml can be used as a descriptor for the sharpness of specific regions. pech et al. [9] suggested using the variance of the laplacian operator convolved with the input image. thelen et al. [10] created a modification of nayar's sml feature that additionally takes into account the intensity information from diagonal pixels; due to the increased number of points used for the calculation, this improved feature showed more reliable results on noisy images.

in this paper the focus is on the quality control of metal surfaces using various texture features. for this goal, the correlation between different texture features and roughness parameters was calculated, which makes it possible to find the most correlated pairs, suitable for non-contact visual surface quality assessment. the algorithms were tested on surfaces produced by milling and drilling operations. chapter 2 gives detailed information about the data acquisition system for both 2d and 3d surface images. chapters 3 and 4 contain descriptions of some of the texture features and roughness parameters used in this research. the correlation calculation algorithm is explained in chapter 5. in chapter 6 the main results of this work are presented. chapter 7 contains brief conclusions, which summarize the performed research.

2. data acquisition

in this research, metal parts with cone-shaped surfaces were used. following the main concept of this experiment, the processed test surfaces of the metal parts were marked, as shown in figure 1. this makes it possible to find the same regions in both the 2d images and the 3d surface data. in our research the size of the region of interest is around 1×1 mm². the 3d roughness information of the metal surfaces was obtained with the alicona 3d infinite focus g4 measurement system, using a lens with a magnification of 20×. the lateral resolution (along the x and y axes) of the measurement system with the 20× lens is 2.93 µm; the vertical resolution (along the z axis) is around 100 nm. the 2d images were obtained with an optical camera built into the same system.
in the optical measurement of metal surfaces, light plays an important role because of the complex reflectance characteristics of such surfaces. in this research, different light sources were used: the light through the lens system of the alicona scanner, as well as a ring light in different modes. the main advantage of the ring light is the rotation invariance of the shadows on the surface images relative to the lighting source. a sample image of the surface with the ring light source used in this research is shown in figure 2. the observed workpiece area is a countersink; the cutting speed of the tool used to produce this drill hole varied from 175 to 185 m/min.

figure 1. marked regions in both the 2d image (top) and the 3d surface (bottom).

3. 2d features extraction

27 different texture feature maps were calculated for each image of the metal surfaces. the most correlated features are described in the present chapter. the algorithms were written in matlab.

3.1. thresholded gradient

the thresholded absolute gradient is calculated using the following equation [11]:

f_{grad} = \sum_{i=1}^{m} \sum_{j=1}^{n} |i(i+1,j) - i(i,j)| , (1)

while |i(i+1,j) - i(i,j)| > \nu, where m and n are the numbers of horizontal and vertical pixels of the image, i(i,j) is the grey level intensity of pixel (i,j), and \nu is the gradient threshold.

3.2. absolute central moment

shirvaikar proposed to calculate the absolute central moment of an image. that texture feature showed robust behaviour on the test images, and it has peak values when the region of interest has high contrast. the absolute central moment f_{acm} is defined by [12]:

f_{acm} = \sum_{k=0}^{l-1} |k - \mu| \cdot h(k) , (2)

where l is the number of grey levels in the image, \mu is the average grey level of the image, and h(k) is the value of the histogram h for the k-th grey level.

3.3. spatial frequency

the frequencies for rows and columns are defined by [13]

rfreq = \sqrt{ \frac{1}{m \cdot n} \sum_{i=1}^{m} \sum_{j=1}^{n} [i(i+1,j) - i(i,j)]^2 } (3)

and

cfreq = \sqrt{ \frac{1}{m \cdot n} \sum_{i=1}^{m} \sum_{j=1}^{n} [i(i,j+1) - i(i,j)]^2 } . (4)

thus, the spatial frequency is defined as

f_{sfrq} = \sqrt{ (rfreq)^2 + (cfreq)^2 } . (5)

3.4. laplacian based features

as described in the introduction, nayar suggested to use the sml feature, based on the laplacian operator. the modified laplacian feature is calculated using the following equation [8]:

f_{ml} = \sum_{i=1}^{m} \sum_{j=1}^{n} |2 i(i,j) - i(i-st,j) - i(i+st,j)| + |2 i(i,j) - i(i,j-st) - i(i,j+st)| , (6)

where st is the distance between the pixels used to compute the feature. thelen et al. improved this feature by taking into account the intensity information from diagonal pixels [10]. it is defined by

f_{xml} = \sum_{i=1}^{m} \sum_{j=1}^{n} \big( |2 i(i,j) - i(i-st,j) - i(i+st,j)| + |2 i(i,j) - i(i,j-st) - i(i,j+st)| + \frac{1}{\sqrt{2}} |2 i(i,j) - i(i-st,j-st) - i(i+st,j+st)| + \frac{1}{\sqrt{2}} |2 i(i,j) - i(i-st,j+st) - i(i+st,j-st)| \big) . (7)

the improved feature showed more robust results for noisy images.

3.5. brenner's gradient

brenner et al. proposed to calculate the sum of squares of the second differences which are larger than a threshold value \nu [14]:

f_{bren} = \sum_{i=1}^{m} \sum_{j=1}^{n} [i(i+2,j) - i(i,j)]^2 , (8)

while [i(i+2,j) - i(i,j)]^2 > \nu.

4. 3d roughness parameters

the surface quality can be estimated using roughness parameters established in international standards [15].
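the original algorithms were written in matlab; purely for illustration, the following python/numpy sketch implements three of the features defined above, the thresholded absolute gradient (1), the absolute central moment (2) and the spatial frequency (3)-(5), with variable names of our own choosing:

```python
import numpy as np

def thresholded_gradient(img, v):
    """eq. (1): sum of absolute horizontal first differences larger than threshold v."""
    g = np.abs(np.diff(img.astype(float), axis=0))  # |i(i+1,j) - i(i,j)|
    return g[g > v].sum()

def absolute_central_moment(img, levels=256):
    """eq. (2): sum_k |k - mu| * h(k), with h taken here as the normalized histogram."""
    h, _ = np.histogram(img, bins=levels, range=(0, levels))
    h = h / h.sum()
    k = np.arange(levels)
    return np.sum(np.abs(k - img.mean()) * h)

def spatial_frequency(img):
    """eqs. (3)-(5): rms of row and column first differences, combined in quadrature
    (border rows/columns without a neighbour are simply skipped)."""
    f = img.astype(float)
    rfreq = np.sqrt(np.mean(np.diff(f, axis=0) ** 2))
    cfreq = np.sqrt(np.mean(np.diff(f, axis=1) ** 2))
    return np.hypot(rfreq, cfreq)

# toy 8-bit patch standing in for a 35x35 surface subregion
rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(35, 35)).astype(np.uint8)
print(thresholded_gradient(patch, v=10), absolute_central_moment(patch), spatial_frequency(patch))
```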
in this research the following iso roughness parameters were used: sa (arithmetical mean deviation of the assessed surface), sq (root mean square deviation of the surface), ssk (skewness of the surface), sku (kurtosis of the surface), sv (maximum valley height of the surface), sp (maximum peak height of the surface), sz (maximum height of the surface, i.e. the difference between the highest peak and the deepest valley), s10z (10 point maximum height), sdq (root mean square surface slope) and sdr (developed interfacial area ratio). along with the iso parameters listed above, another roughness parameter from another source was used: ssc (mean summit curvature) [16]. all these parameters were calculated for the marked areas of the 3d surface, which correspond to the same areas of the 2d images.

5. correlation evaluation

for this research, 6 sample surfaces were taken and marked. these marks are visible both on the images obtained with the 2d industrial camera and on the surfaces obtained with the alicona system, as shown in figure 1. this gives the opportunity to calculate different texture features and roughness parameters for the corresponding regions of the surface. as the first step, regions of interest were set for every obtained 2d image and 3d surface; these regions of interest correspond to the marked areas of the surfaces. the marked areas were then divided into subregions. the subregions for the 2d images and the 3d surfaces have different sizes because of their different original dimensions: the marked areas of the 2d images were divided into subregions of 35×35 pixels, and the marked areas of the 3d surfaces into subregions of 140×140 points. these region sizes result in equal sizes for the corresponding arrays, which is an important condition for the subsequent estimation of the correlation. in the next step, the texture features were calculated for each subregion of the 2d images; for every image, every single texture feature results in an array whose size corresponds to the number of subregions. the roughness parameters for the 3d surfaces were calculated in the same way.

after the texture feature and roughness parameter arrays were calculated, correlation coefficients between every pair of parameters were estimated. having two data arrays of equal size makes it possible to compute the correlation coefficient, which shows the statistical relationship between the two sets of values. if the coefficient is equal to 0, the two values are linearly independent; the closer the coefficient is to −1 or 1, the stronger the correlation between the two values, i.e. the stronger their linear dependency. for the correlation coefficients, an interpretation based on the brosius criteria [17] was used; these criteria are listed in table 1 and give an easier indication of whether values have weak or strong correlations.

figure 2. sample 2d image of a metal surface.

table 1. correlation coefficients interpretation.

absolute value of coefficient | interpretation
0                             | no correlation
0 < r < 0.2                   | very weak
0.2 < r < 0.4                 | weak
0.4 < r < 0.6                 | medium
0.6 < r < 0.8                 | strong
0.8 < r < 1                   | very strong
1                             | perfect
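the subregion-wise correlation procedure maps directly onto a few lines of code; the sketch below (a python/numpy illustration under our own assumptions, not the authors' matlab code) splits corresponding 2d and 3d regions into 35×35-pixel and 140×140-point subregions, evaluates one feature and one roughness parameter per subregion, and returns the pearson coefficient between the two arrays. `feature_fn` and `roughness_fn` are placeholders for any of the 27 features and 11 parameters:

```python
import numpy as np

def split_into_subregions(arr, size):
    """tile a 2d array into non-overlapping size x size blocks (edges discarded)."""
    h, w = arr.shape
    return [arr[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

def feature_vs_roughness(image, surface, feature_fn, roughness_fn):
    """pearson correlation between a 2d texture feature (35x35-pixel subregions)
    and a 3d roughness parameter (140x140-point subregions) over matching areas."""
    f = np.array([feature_fn(p) for p in split_into_subregions(image, 35)])
    r = np.array([roughness_fn(p) for p in split_into_subregions(surface, 140)])
    n = min(len(f), len(r))  # the arrays must pair up one-to-one
    return np.corrcoef(f[:n], r[:n])[0, 1]

# example with stand-in functions: image variance as feature, rms height as sq
rng = np.random.default_rng(1)
image = rng.integers(0, 256, (350, 350)).astype(float)
surface = rng.normal(0.0, 1.0, (1400, 1400))
print(feature_vs_roughness(image, surface, np.var, lambda z: np.sqrt(np.mean(z ** 2))))
```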
6. results

as the first step, the set of texture features was tested against two sets of roughness parameters, calculated for the raw surfaces and for the surfaces with the curvature shape removed. the correlation for the second set was very weak (correlation coefficients below 0.39), so in the subsequent research only the raw surface data were used. images under ring light conditions gave interesting results: the texture features calculated for these images showed strong (up to 0.85) correlation with the roughness parameters. the correlation for the ring light environment is stronger than for the light-through-lens conditions, see the box plot in figure 3.

the correlation coefficients between the 27 texture features and the 11 roughness parameters were calculated, resulting in an array of 297 pair-wise correlation coefficients. absolute values were taken for all the coefficients, as some of them are negative. all pairs were then sorted from the highest to the lowest value of the correlation coefficient. this information was used to draw a plot showing the distribution of the correlation coefficients for all 297 pairs, see figure 4: the x-axis represents the absolute correlation coefficient value, and the y-axis is the index of the pair, sorted by correlation coefficient value in descending order. the performed studies showed that, for our conditions, the most correlated roughness parameters are sa, sq, sv, sp, s10z, ssc, sdq and sdr, and the most correlated texture features are the absolute central moment, the thresholded gradient and the spatial frequency. an example of the most correlated pair is shown in figure 5: the average value of the correlation coefficient between the absolute central moment and the average roughness (sa) is equal to 0.85. a linear regression (the red line in figure 5) was performed to get the relationship between these two values.

figure 3. box plot for ring light and light-through-lens conditions.
figure 4. distribution of correlation coefficients.

7. conclusions

this research showed the successful results of using a computer vision approach for the roughness assessment of metal surfaces with the help of different texture features extracted from 2d images. such a method of non-contact quality control makes it possible to detect parts with certain defects in a fast and reliable way and can increase the yield of good products. overall, 27 texture features were calculated for surface images taken in different lighting environments, and correlation coefficients between these texture features and 11 roughness parameters were calculated. strong correlations between features and parameters showed up for images under ring light conditions. future work will analyse surface images taken with other industrial cameras and different lens systems. the results showed that surface defects, which are reflected in a deviation of the roughness parameters, can be detected with fast-operating, low-cost 2d cameras using 2d texture analysis instead of time-consuming and expensive 3d measurement devices.

acknowledgement

the research project, which forms the basis of this paper, was supported by the thuringian ministry of economy, employment and technology (tmwat) with means from the european social fund (esf). the author konstantin trambitckii was supported by the michail lomonosov grant. the responsibility for the content of this paper lies with the authors.

references
[1] k. trambitckii, k. anding, g. polte, d. garten, “elimination of out-of-focus regions for surface analysis in 2-d colour images”, proc. of ilmenau scientific colloquium, 2014, vol. 58.
[2] k.
laws, “texture energy measures”, image understanding workshop, darpa, 1979.
[3] r. m. haralick, k. shanmugam, i. dinstein, “textural features for image classification”, ieee trans. on systems, man and cybernetics, vol. smc-3, no. 6, 1973, pp. 610-621.
[4] e. alegre, j. barreiro, m. castejón, s. suarez, “computer vision and classification techniques on the surface finish control in machining processes”, lecture notes in computer science, vol. 5112, pp. 1101-1110.
[5] m. l. alves, e. clua, f. r. leta, “evaluation of surface roughness standards applying haralick parameters and artificial neural networks”, ieee trans. on systems, signals and image processing, april 2012, pp. 452-455.
[6] p. morala-argueello, j. barreiro, e. alegre, s. suarez, v. gonzalez-castro, “qualitative surface roughness evaluation using haralick features and wavelet transform”, annals of daaam & proceedings, january 2009, pp. 1241-1242.
[7] d. m. tsai, c. f. tseng, “surface roughness classification for castings”, pattern recognition, vol. 32, no. 3, march 1999, pp. 389-405.
[8] s. k. nayar, “shape from focus”, technical report, 1989.
[9] j. pech, g. cristobal, j. chamorro, j. fernandez, “diatom autofocusing in brightfield microscopy: a comparative study”, 2000.
[10] a. thelen, s. frey, s. hirsch, p. hering, “interpolation improvements in shape-from-focus for holographic reconstructions with regard to focus operators, neighborhood-size, and height value”, 2009.
[11] a. santos, c. ortiz de solórzano, j. j. vaquero, j. m. peña, n. malpica, f. del pozo, “evaluation of autofocus functions in molecular cytogenetic analysis”, journal of microscopy, december 1997, vol. 188, no. 3, pp. 264-272.
[12] m. v. shirvaikar, “an optimal measure for camera focus and exposure”, proc. of the thirty-sixth southeastern symposium on system theory, 2004, pp. 472-475.
[13] a. m. eskicioglu, p. s. fisher, “image quality measures and their performance”, ieee transactions on communications, 1995, vol. 43, no. 12.
[14] j. f. brenner, b. s. dew, j. b. horton, j. b. king, p. w. neirath, w. d. sellers, “an automated microscope for cytologic research”, j. histochem. cytochem., vol. 24, 1976, pp. 100-111.
[15] iso 25178: geometric product specifications (gps) – surface texture: areal.
[16] k. j. stout, p. j. sullivan, w. p. dong, e. mainsah, n. luo, t. mathia, h. zahouani, “the development of methods for the characterization of roughness in three dimensions”, publication no. eur 15178 en of the commission of the european communities, luxembourg, 1994.
[17] f. brosius, “spss 8”, international thomson publishing, 1998, p. 503.

figure 5. a graph showing the relationship between the average surface roughness (sa) and the absolute central moment (f_acm).

acta imeko july 2012, volume 1, number 1, 2 www.imeko.org

a message from the president of imeko

dae-im kang
kriss - korea research institute of standards and science, 267 gajeong-ro, yuseong-gu, daejeon 305-340, republic of korea

keywords: imeko; acta; president; message; global metrology community
citation: dae-im kang, a message from the president of imeko, acta imeko, vol. 1, no. 1, article 2, july 2012, identifier: imeko-acta-01(2012)-01-02
editor: paul regtien, measurement science consultancy, the netherlands
copyright: © 2012 imeko.
this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: dae-im kang, e-mail: dikang@kriss.re.kr

abstract
here is a message from the president of imeko, to colleagues and readers of acta imeko from all over the world, celebrating the publication of the first issue of the journal.

colleagues and readers from all over the world, it is a pleasure and a privilege for me to extend my heartfelt congratulations on the first publication of acta. since its establishment in 1958, imeko has made substantial contributions to the advancement of measurement technology, which is indispensable for developing industry, science and technology and for enhancing the quality of human life. as the president of imeko, i have great confidence in and respect for the confederation for such proactive contributions, which i am sure will continue in close partnership with all of its member organizations.

we live in a world of ever-increasing interdependency and thus of shared global challenges – the challenges of sustainable economic growth, of improving health care for all our people, of food safety and energy security, of clean accessible water, of environmental conservation, etc. new technologies and discoveries will be essential to addressing the challenges our interconnected world is facing. the science of measurement and its applications serve as one of the most important tools that can deliver workable solutions to the global agenda. therefore, this is a particularly vital moment for academia, researchers, and engineers to work together more closely than ever before by sharing ideas and vision on metrology. i would like to take this opportunity to encourage all of the members of imeko and outstanding experts in metrology to take a leading role in enabling the enduring advancement of measurement science and technology, ranging from fundamental to cutting-edge areas. in all these efforts, i am confident that acta will be of great assistance as a fresh new arena for the timely sharing of insightful research achievements in metrology, which is essential to the growth of imeko and of the global metrology community as well.

on behalf of imeko, i would like to thank prof. paul p. l. regtien, vice president for publications, and all my colleagues at imeko who have done their best to make the birth of acta possible. last but not least, i hope that acta will also be instrumental in keeping the global metrology community working in great harmony in the days to come. once again, i would like to express my sincere congratulations and best wishes for the everlasting success of acta.

acta imeko february 2015, volume 4, number 1, 121 – 127 www.imeko.org

machine vision systems for on line quality monitoring in industrial applications

giuseppe di leo, consolatina liguori, alfredo paolillo, antonio pietrosanto
department of industrial engineering, university of salerno, via giovanni paolo ii, 132 fisciano (sa), italy

section: research paper
keywords: machine vision, non-contact measurement, image processing, gloss, color image, stereo vision
citation: g. di leo, c. liguori, a. paolillo, a. pietrosanto, machine vision systems for on line quality monitoring in industrial applications, acta imeko, vol.
4, no. 1, article 18, february 2015, identifier: imeko-acta-04 (2015)-01-18
editor: paolo carbone, university of perugia
received january 15th, 2014; in final form november 17th, 2014; published february 2015
copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: (none reported)
corresponding author: giuseppe di leo, e-mail: gdileo@unisa.it

abstract
this paper is concerned with the topic of non-contact measurements for industrial applications. it presents the case of vision-based systems realized by the authors expressly for the measurement of geometric and/or chromatic parameters of rubber profiles for the automotive industry.

1. introduction

thanks to new technologies that today allow automatic, non-intrusive solutions to be implemented at convenient cost, on-line inspection systems are becoming widespread even in areas traditionally driven by off-line quality control requirements. the dimensions and external finish of products can today be monitored on-line at costs that can be recovered after a few months of reduced production waste. a number of inspection techniques have been designed for on-line implementation, and the choice among them must be made on the basis of the nature of the product. non-contact measurements are strongly recommended in many applications, e.g. to avoid load effects when deformable objects are to be inspected. machine vision methods have provided efficient and reliable techniques for the design of image-based non-contact measurement systems [1]. in some industrial fields, such as the production of either electronic components and boards or mechanical parts, automated inspection using machine vision has already seen a relevant number of applications. this is not the case for the production of rubber profiles, due to the complexity of the production process and to the flexibility and dark color of the product. in this field the authors realized an image-based measurement system for inspection tasks in a plant producing rubber profiles for the automotive industry [2][4]. a stereo vision system was designed to perform the on-line contour extraction, the 3d reconstruction of the transversal section of the rubber profiles, and the measurement of the main dimensional parameters. it is based on the registration of each observed profile onto a reference profile, and a suitable software tool lets users specify the dimensional lengths to be measured; in this way, users have been able to keep the software up to date for new models of profiles and new measurements [2]. nevertheless, the measurement system still has some limits: i) the robustness of the contour extraction algorithm drops when a metal reinforcement is included in the profile section; ii) the software does not provide statistical information on the product quality; iii) no inspection is made of the external surface of the profile. in this paper, which is an extended version of [3], the authors, after a brief recall of the overall structure of the extrusion line, describe the solutions they propose to
overcome the aforementioned limits of the contactless measurement system. (as stated in the abstract: after a brief description of the extrusion process, a stereo vision system for the on-line measurement of the dimensional characteristics of the profile transversal section is first described in all its main components and features; then two modules for the inspection of the finish of profile surfaces are presented.)

2. the extrusion line

at the site of battipaglia, italy, cooper standard automotive s.p.a. manufactures rubber profiles for car window and door sealings. the production cycle is divided into several basic steps (see figure 1). the extrusion essentially compresses the pasty material at high pressure through a matrix which reproduces the interior and exterior shape of the piece. some profiles, the so-called armed profiles, have a metal insert with a stiffening function. the extruded rubber flows through ovens for the vulcanization, which causes a change in the molecular structure of the polymer, resulting in an increase of the elasticity and tensile strength and in an improvement of other physical and chemical properties. subsequently, the product passes to the phase of electrostatic flocking, which gives the rubber a velvety appearance, obtained by first sprinkling an adhesive solution over the surface and then uniformly distributing fibers over it by means of an electrostatic field. after a cooling step, those products for which it is required undergo a brushing step; this phase has a predominantly aesthetic purpose, giving a matt appearance to part of the outer surface. the step of cutting to a given length ends the in-line processing, followed by the collection of the products in cases for transport.

3. the measurement system

the measurement station for the verification of the geometrical dimensions of the profile has been placed at a side of the second conveyor belt used for the collection of the finished product (figure 2): it consists of two cameras, an illuminator, a photocell and a data acquisition system. the cameras are suitably positioned to frame the cross section of the profile at different angles of view. the on-line operation of the measurement station can be viewed as a sequence of modules, detailed in the following and shown in figure 3. the main module is the on-line procedure, which is executed each time a new profile enters the visual field of the two cameras. both (left and right) images feed the 2d procedure, which outputs the edge points of the two contours. then, starting from the contours of the two 2d images, a 3d reconstruction [5]-[6] of the section of the profile is run. in order to determine the position in space of the profile points from the pair of images taken by the two cameras, it is necessary to know the intrinsic and extrinsic parameters of the cameras, obtained by means of a calibration procedure [7]. the measurements are carried out on the three-dimensional reconstruction by a third module of the developed application, called the profile editor.

3.1. two-dimensional image processing

since the measurements are made on the 3-d reconstruction of the 2-d contours, the localization of the cross-section contour plays a key role in the measurement procedure.
the developed image processing algorithms apply a sequence of known image processing techniques: a histogram enhancement routine normalizes the grey level distribution in the image, a region growing stage locates all the pixels belonging to the profile section (the foreground region), and a contour tracking algorithm locates the contour as the border of the foreground region. for the profiles with metal reinforcement, the tuning of parameters such as the integration time, lens aperture and amplifier gain of the camera becomes critical, as they have a direct influence on the brightness of the captured image. if a value of these parameters is found such that the rubber part of the front section is represented with a sufficiently high grey level, the metal sheet, which has a much higher reflectance, saturates (i.e. reaches the maximum grey level) and generates glares, smearing or blooming, broadening the appearance of the lit part and affecting the measurement. in order to obtain high contrast and to avoid these undesirable phenomena, the dynamic range of the image on which the measurements have to be made must be expanded. two techniques were considered: high dynamic range (hdr) imaging [8] and exposure fusion (ef) [9], which, starting from a sequence of images of the same scene taken with different exposures, reconstruct an image with a higher dynamic range.

the hdr technique aims to reconstruct the radiance map of the scene: from the sequence of images at different exposures, the characteristic function (crf, camera response function), which relates the radiance of the scene to the brightness of the individual pixels, is first reconstructed; then, by inverting this function, the radiance map of the scene can be obtained. the hdr processing, however, imposes a significant computational load that is not compatible with the requirements of on-line control. furthermore, the images obtained with this technique tend to show undesirable halos around bright areas of the image, so the presence of reflections on the metal part would cause a broadening of the contours. the ef technique addresses some of the limitations of hdr: the final image is composed by taking the individual pixels from the images of the sequence on the basis of a score, according to specific figures of merit associated with the individual pixels of each image. in this way, two goals are achieved: i) the execution of the algorithm is much faster, because the radiance map does not need to be reconstructed; ii) the effect of halos that was observed in hdr-processed images is eliminated, thus allowing a more accurate contour extraction. figures 4a) and b) show the images obtained with the two techniques mentioned above and the related measurements. in both cases the rubber part is measured correctly, while the measurements of the metal part have a maximum error of 21 % compared to the reference for the image obtained by means of hdr, and 6 % in the case of images obtained via ef, thus demonstrating the effectiveness of the ef technique in preserving the outlines of the brighter areas.

figure 1. stages of the rubber manufacturing (from plate compound through extrusion, vulcanization and surface finish, flocking and painting, to cutting of the final product).
figure 2. the hardware architecture of the measurement station (left and right cameras with illuminator, camera support and photoelectric cell, beside the two conveyor belts).
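to illustrate the scoring idea behind ef, here is a deliberately simplified python/numpy sketch (per-pixel weighted averaging with a well-exposedness score only; the full method of [9] also uses contrast and saturation measures and multiresolution pyramid blending):

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    """gaussian score that favours mid-range pixel values (img scaled to [0, 1])."""
    return np.exp(-((img - 0.5) ** 2) / (2.0 * sigma ** 2))

def fuse_exposures(images):
    """naive exposure fusion: normalized per-pixel weighted average of the exposures."""
    stack = np.stack([im.astype(float) / 255.0 for im in images])  # (k, h, w)
    weights = well_exposedness(stack)
    weights /= weights.sum(axis=0) + 1e-12  # normalize across the k exposures
    fused = (weights * stack).sum(axis=0)
    return (fused * 255.0).astype(np.uint8)

# three synthetic exposures of the same grey-level scene (under-, mid-, over-exposed)
rng = np.random.default_rng(2)
scene = rng.integers(0, 256, (480, 640)).astype(np.uint8)
exposures = [np.clip(scene * g, 0, 255).astype(np.uint8) for g in (0.4, 1.0, 2.0)]
print(fuse_exposures(exposures).shape)
```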
3.2. three-dimensional reconstruction

the two 2-d contours determined in the previous step are processed in order to obtain a 3-d reconstruction of the contours of the leading cross-section. the procedure can be divided into two phases: i) the search for matching stereo pairs, namely the pairs of image points in the left and right images generated by the same real point, and ii) the calculation of the 3-d coordinates of the profile contour points as a function of the stereo pairs. the search for stereo pairs is performed by exploiting the epipolar constraint [6]: given a point on one image (e.g. the left one), the corresponding point on the other (e.g. the right) image lies on a line, the so-called epipolar line, whose localization can be determined from the calibration parameters. the point on the right image which corresponds to a given point on the left image can therefore be searched for among the intersections between its epipolar line and the profile contours. once the calibration parameters of the two cameras and the pixel coordinates of two corresponding image points are known, a triangulation algorithm [5] allows the three absolute coordinates of each point of the contour to be measured. in order to simplify the subsequent extraction of measurements from the 3d contour, a fitting plane is determined as the least-squares plane approximating the points of the cross section of the profile, and the measurements are then performed on the 2-d projection of the contour onto this plane.

figure 3. the software architecture (on-line: acquisition of the left and right images, 2-d processing, stereo matching, 3-d transversal section reconstruction with the 2d-3d profile fitting plane, measurement extraction and pass/fail quality control; off-line: calibration and the profile editor, both linked to the profile and measurement database).
figure 4. a) hdr image with overlaid measurement indicators; b) enfused image with overlaid measurement indicators.
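a common way to implement the triangulation step is linear (dlt) triangulation from the two 3×4 camera projection matrices; the sketch below is a generic python/numpy version given purely as an illustration (the authors' implementation in [5] is not reproduced here), with the projection matrices assumed to come from the calibration of section 3.4:

```python
import numpy as np

def triangulate(p_left, p_right, xl, xr):
    """linear (dlt) triangulation: given the 3x4 projection matrices of the two
    calibrated cameras and one matched pixel pair (xl, xr), solve a*x = 0 for the
    homogeneous 3-d point x via svd and de-homogenize."""
    a = np.vstack([
        xl[0] * p_left[2] - p_left[0],
        xl[1] * p_left[2] - p_left[1],
        xr[0] * p_right[2] - p_right[0],
        xr[1] * p_right[2] - p_right[1],
    ])
    _, _, vt = np.linalg.svd(a)
    x = vt[-1]
    return x[:3] / x[3]  # 3-d point in the world frame (e.g. mm)

# toy example: two cameras looking along z, the right one shifted by 100 mm in x
p_l = np.hstack([np.eye(3), np.zeros((3, 1))])
p_r = np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])
point = np.array([50.0, 20.0, 500.0, 1.0])
xl = p_l @ point; xl = xl[:2] / xl[2]
xr = p_r @ point; xr = xr[:2] / xr[2]
print(triangulate(p_l, p_r, xl, xr))  # recovers ~[50, 20, 500]
```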
3.3. measurements

a measurement procedure valid for all profiles and products has been implemented and set up. the procedure can also be extended to future production, since the list and the description of the measurements can easily be tailored for a new model of profile. the dimensions to be measured need to be specified only once by the user in the coordinate system of the reference profile contour. the measurement procedure then consists of two main steps: i) each new observed 3d contour of the profile is superimposed ("registered") onto the reference contour of the profile; ii) the measurements specified on the reference are made on the observed contour. the reference profile library management is entrusted to another application module, called the profile editor. this module allows the user to select and view the reference profile of a section and to specify the measurements to be made on the profile. the measurements are defined by primitives, each of which requires the user to define two points on the reference contour; these couples of points must be placed in the coordinate system of the reference contour. the different types of measurement primitives that can be applied to profiles are: the "gauge", the distance between the two intersections of a specified segment (p, q) with the observed profile; and the so-called "tip-to-tip", the distance between the two points of the observed profile that best correspond, in terms of proximity and curvature, to two specified points on the reference profile (p, q). it is also possible to project the measured segment onto a straight line ("axis") determined by two points belonging to the contour (r and s of figure 5a) and specified using the profile editor. for each selected measurement the nominal value can be stored in the database and compared on line with the measured one. in figure 5b) a screen output of the software is shown.

3.4. calibration

in order to determine the spatial position of the profile contour points, the three-dimensional reconstruction algorithm requires that the intrinsic and extrinsic parameters of the cameras are known. the goal of the calibration procedure is the determination of these parameters. it must be repeated whenever one of the configuration parameters of the system (position or angle of the cameras, distance, focus) is expected to have changed. the authors' proposal is based on the approach of zhang [7], which requires the two cameras to observe a planar target, whose geometry in 3-d space is known with very good precision, shown at a few different unknown poses. a sandblasted metal plate containing a pattern of 6×5 square holes (120 square corners) is adopted as the target, due to its good accuracy (~10 μm). the knowledge of the corner positions in several pairs of images, at least five, and of the target geometry in mm allows specific calculations based on the maximum likelihood criterion in order to determine the intrinsic and extrinsic parameters of the cameras.

3.5. quality checks, report and metrological characterization

each measurement result is compared with the specification limits in order to assess its compliance with the design tolerances, and the software reports the outcome of the dimensional test with a "pass/fail" indication. the presentation of the results includes, for each piece, the superposition of the observed and reference profiles, the table of the last results, the time chart of the results and some statistics. report pages can be recalled to view the chart of the quantities of interest; the user's queries can include the line, the section, the date and time of the beginning and end of the observation, and the quantity of interest. summary reports of different types can be generated: i) basic, ii) advanced, iii) failure chart, iv) production line, v) summary. the metrological characterization was carried out on the system in use at the cooper standard automotive plant in battipaglia, italy. tests have been performed to assess: (i) the influence of the relative position of the cameras, (ii) the systematic errors of the system, and (iii) the repeatability of the measurement results. choosing the optimal position of the cameras, and after performing a correction of the systematic errors, the standard uncertainty [10] of all types of measurements was within 0.2 mm. the processing time of the proposed measurement system, at the typical extrusion speed of the line (3 pieces/min), allows the full 100 % of the produced pieces to be monitored.

figure 5. a) measurement primitives; b) example of use of the profile editor.
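referring back to the calibration of section 3.4, a zhang-style calibration of one camera can be sketched with opencv's generic routine; this is only an illustration under stated assumptions (a uniform 12×10 grid of corner coordinates standing in for the 120 corners of the 6×5-hole plate, with an assumed 10 mm pitch), not the authors' software:

```python
import numpy as np
import cv2

# assumed planar-target geometry: a 12x10 grid of corner points (z = 0), 10 mm pitch;
# this stands in for the 120 square corners of the real 6x5-hole plate
square_mm = 10.0
grid = np.array([[x * square_mm, y * square_mm, 0.0]
                 for y in range(10) for x in range(12)], dtype=np.float32)

def calibrate(corner_lists, image_size):
    """zhang-style calibration: corner_lists holds one (120, 1, 2) float32 array of
    detected corner pixels per pose (at least five poses, as stated in the text);
    returns the rms reprojection error, intrinsic matrix and distortion coefficients."""
    object_points = [grid] * len(corner_lists)
    rms, k, dist, _rvecs, _tvecs = cv2.calibrateCamera(
        object_points, corner_lists, image_size, None, None)
    return rms, k, dist
```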
acta imeko | www.imeko.org  february 2015 | volume 4 | number 1 | 125  at the typical extrusion speed of the line (3 pieces / min), allows to monitor the full 100% of the produced pieces. 4. a system for painting quality control  the coating process consists in depositing a transparent silicone paint on the exterior surface of the profile. the coating has both an aesthetic and a functional purpose: to confer a shiny effect and to make a waterproof seal. the coating system in use at the plant is produced by kremlin, consists of four air spray guns, and is managed by a plc, which allows to set for each gun both the pressure of the paint and of the air. the system for the regulation of the pressure is based on the use of electronic controllers, which allow to obtain a great precision in regulating the value of air flow and paint. this results in an excellent finish quality with a very high degree of pulverization, and in a very low thickness of the painting (from 10μm to 30�m). nevertheless, temporary or permanent gun failures may result in parts not painted that must be detected and discarded. since the paint is transparent, in order to allow visual verification by the operator, it is mixed with an additive (uvitex ob by ciba) which emits in the visible when illuminated with ultraviolet (uv) radiation. in the previous system, a uv lamp illuminated the profile, a cctv system grabbed images of that profile and showed them on a monitor, where a human operator had to periodically check whether the profile image had the required brightness. the vision system designed by the authors replaces the human operator in the control of paint. it has been designed to estimate the amount of paint deposited on the surface of the profile in order to detect possible faults on the air spray guns. the vision system, schematically shown in fig 6a), is positioned immediately after the spray booth and uses two camera-illuminator pairs in order to monitor the whole external surface that must be painted. the two 640 × 480 color cameras are featured with ring type illuminators having a wavelength of 365 nm. the system acquires at regular time intervals two images of the profile surface and then processes them to measure the amount of paint and compare it with a predefined threshold. the image processing is based on the measurement of the amount of blue color present in each acquired image, since it is proportional to the amount of additive and then of paint. in particular, the algorithm identifies a sub-image centred onto the profile contour, and calculates an index m proportional to the average brightness of the blue component extracted from the acquired sub-image. the profile contour is located by searching the first and last edge along vertical lines with a differential operator. the control panel (fig. 6b) contains the processing unit, the monitor for display, the stack lights, acoustic signalling devices and the power supply plugs. if the measured value is out of tolerance, an alarm is generated. a chart is also shown on the operator panel with the history of the last 300 measurements. in this way, the operator can periodically monitor the measurements and anticipate unwanted trends. 4.1. assessment of fault detection capability  tests have been carried out in order to verify the functionality of the system and the compliance to production needs. faults on each gun and multiple faults are simulated obstructing the guns and the indices measured on the images of the two cameras are recorded. 
4.1. assessment of fault detection capability

tests have been carried out in order to verify the functionality of the system and its compliance with production needs. faults on each gun and multiple faults are simulated by obstructing the guns, and the indices measured on the images of the two cameras are recorded. for each kind of fault, the fault state is maintained for about 30 seconds and the variability range of the indices is recorded. this whole procedure is repeated 3 times. denoting by p1, p2, p3 and p4 the status of the 4 guns and by m1 and m2 the two indices measured on the images from the two cameras, table 1 reports the minimum and maximum values observed during the tests for the main fault cases (single or multiple) and for unfaulty conditions.

table 1. test results for faulty and unfaulty cases.

p1      p2      p3      p4      m1      m2
ok      ok      ok      ok      80-90   65-80
fault   ok      ok      ok      40-65   50-75
ok      fault   ok      ok      40-60   50-80
ok      ok      fault   ok      80-90   20-30
ok      ok      ok      fault   75-90   25-40
fault   fault   ok      ok      30-45   50-80
ok      ok      fault   fault   70-85   15-25
fault   fault   fault   fault   10-20   10-20

as can be seen, the sensitivity to a spray gun fault is different for the two parameters (m1, m2), due to the positioning of the four air spray guns and the two cameras, as shown in figure 6a. moreover, different absolute values happen to be measured in the unfaulty condition, since different types of profiles have different shapes and therefore respond differently to the illumination. analyzing the results, the unfaulty condition appears clearly separated from the single faults, which are the most difficult to detect. the diagnostic capability is acceptable since it is possible to identify, at least, on which side of the profile one or more guns are not working correctly.

figure 6. a) painting quality measurement system; b) software user interface.

other tests were carried out in order to verify the system sensitivity to a partial obstruction of a paint spray gun. in this case the system detects the fault only when the obstruction is relevant. finally, tests with a complete obstruction of a single gun for a short time period were performed. the promptness of the detection is related to the measurement rate; the time required to process a couple of images is about 100 ms. since the maximum line speed is 20 m/min, and the field of view is about 10 cm, each measurement covers a length of about 3 cm. the system is therefore able to detect the lack of paint over lengths greater than about 6 cm. these characteristics fully respond to the specific requests of the factory quality control system.
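the decision logic is not detailed in the paper; a small sketch of one plausible rule is shown below, checking a measured pair (m1, m2) against the ranges of table 1. the overlapping single-gun ranges reproduce the reported ambiguity, which still identifies the faulty side.

```python
# illustrative sketch: classifying an index pair (m1, m2) against the observed
# ranges of table 1. the decision rule itself is an assumption.
RANGES = {
    "no fault":       ((80, 90), (65, 80)),
    "p1 fault":       ((40, 65), (50, 75)),
    "p2 fault":       ((40, 60), (50, 80)),
    "p3 fault":       ((80, 90), (20, 30)),
    "p4 fault":       ((75, 90), (25, 40)),
    "p1+p2 fault":    ((30, 45), (50, 80)),
    "p3+p4 fault":    ((70, 85), (15, 25)),
    "all guns fault": ((10, 20), (10, 20)),
}

def classify(m1, m2):
    hits = [name for name, ((a1, b1), (a2, b2)) in RANGES.items()
            if a1 <= m1 <= b1 and a2 <= m2 <= b2]
    return hits or ["unclassified"]

print(classify(85, 70))   # -> ['no fault']
print(classify(50, 60))   # -> ['p1 fault', 'p2 fault']: side identified,
                          #    individual gun ambiguous, as stated in the text
```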
5. on-line gloss meter

one of the final stages of the production line is the brushing, where a rotating brush can be moved vertically in order to abrade the surface of the profile with different intensities and thus give a different matt appearance to the surface. this stage, too, has a predominantly aesthetic purpose. in order to verify the obtained matt appearance, the degree of gloss (expressed in gloss units, gu) was traditionally measured off-line on some samples using a commercial gloss-meter. these measurements are then compared with the gloss acceptance range [1.5, 3.5] gu. the gloss measurement exploits the phenomenon of specular reflection of a surface illuminated by a light beam: the gloss-meter projects a non-polarized white light on the sample surface at a predetermined angle of incidence, and it measures the intensity of the reflected light with a sensor located in a specular position with respect to the light source. according to the standard [11], the angles of incidence for the gloss measurement are either 20°, 60° or 85°, depending on the gloss value. the off-line measurements show several drawbacks: they require a relevant time, they cannot be carried out on all the pieces, and specialized operators are required. the research activity has therefore been aimed at achieving an on-line non-contact gloss-meter. the approach suggested by the standard [11] cannot be precisely followed: the incidence and reflection angles of the light rays cannot be accurately controlled on line, since the rubber profile can have small movements and its surface is generally not flat. the proposed technique uses a white led illuminator ring (advanced illuminator rl127) as the light source, and a digital smart camera (ni 1772, 640×480 pixels, 8 bit, 110 fps) as the sensing device [12]. the measurement station is positioned immediately after the brushing booth (figure 7a). the smart camera is positioned on top of the profile and acquires an image of a portion of the profile. the processing steps applied to the acquired image are the following: i) the position of the profile within the image is determined by edge detection; ii) a sub-image of 30 × 20 pixels is located at the centre of the detected profile; iii) the average brightness of the sub-image is evaluated. the measured value can be related to the gloss value through a linear relationship that is experimentally estimated with the calibration procedure described in section 5.1. the measured gloss is compared with a tolerance value defined by the product specifications; if the measured value is out of tolerance, an alarm is generated. the operator panel (figure 7b) shows the acquired image, the overlay of the sub-image rectangle, the measured gloss value and a chart with the history of the last 300 measurements.

figure 7. a) gloss measurement system; b) software user interface.

figure 8. the results of calibration: g = 0.113·l − 7.988, r² = 0.978.

5.1. calibration procedure

a calibration procedure is required in order to estimate the relationship between the average brightness of the sub-image and the gloss value. as the reference instrument, an erichsen pico-glossmaster 500 is used, since it is the instrument currently in use at the quality laboratory of the plant for off-line measurements. it has a measurement range of [0, 100] gu and an accuracy equal to 0.2 gu. the following steps of the calibration procedure are repeated for n different brush positions: 1) the brush is set in a fixed position; 2) the average brightness of the sub-image is measured for 30 seconds, and the mean (l) and standard deviation (σl) of the averages are evaluated; 3) while the profile is being measured, the starting and final points are marked on its surface; 4) the marked profile length is measured off-line with the reference instrument, taking 30 uniformly spaced measurements, and the mean (g) and standard deviation (σg) of these measurements are evaluated; 5) a linear fit is applied to the pairs (gi, li) in order to estimate the calibration curve. figure 8 shows the calibration points, the obtained curve, its analytical formulation and the coefficient of determination. being r² equal to 0.978, the linear approximation can be considered acceptable.
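a minimal sketch (not the authors' code) of the linear calibration of figure 8 and of the uncertainty estimates discussed next is given below; the (l, g) pairs are hypothetical points consistent with g = 0.113·l − 7.988.

```python
# illustrative sketch: the calibration fit of figure 8 plus the sigma_y and
# u_g estimates of section 5.1. the data points are hypothetical.
import numpy as np

l = np.array([78.0, 83.0, 87.0, 92.0, 97.0, 102.0])   # mean brightness (hypothetical)
g = np.array([0.8, 1.4, 1.8, 2.4, 3.0, 3.5])          # reference gloss, gu (hypothetical)

s, b = np.polyfit(l, g, 1)             # slope and intercept of the fit
g_hat = s * l + b
r2 = 1 - np.sum((g - g_hat) ** 2) / np.sum((g - g.mean()) ** 2)

# residual standard deviation of the linear fit (sigma_y in the text)
sigma_y = np.sqrt(np.sum((g - g_hat) ** 2) / (len(g) - 1))

# combined uncertainty, with sigma_l the brightness repeatability (~4)
sigma_l = 4.0
u_g = np.sqrt(sigma_y ** 2 + (s * sigma_l) ** 2)
print(f"g = {s:.3f} l + {b:.3f}, r2 = {r2:.3f}, u_g = {u_g:.2f} gu")
```

with a slope of about 0.113 and σl ≈ 4, the brightness-repeatability term s·σl dominates, which is why u_g (≈ 0.5 gu) is noticeably larger than σy (≈ 0.2 gu).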
the standard deviation σy of the output of the linear fit model can be estimated with the following relationship:

$\sigma_y = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} \left( g_i - \hat{g}_i \right)^2}$

where $\hat{g}_i$ is the value obtained with the linear approximation at the abscissa li. the standard deviation σl is practically the same for almost all the calibration points and nearly equal to 4; the standard deviation σy is equal to 0.2 gu. the uncertainty [10] of the final gloss measurements can be estimated with the following relationship:

$u_g = \sqrt{\sigma_y^2 + \left( s \cdot \sigma_l \right)^2}$

where s = 0.113 is the slope of the fitting line; ug resulted to be equal to 0.5 gu. as far as the measurement rate is concerned, since the system is able to process about 4 images per second and the maximum speed of the line is equal to 20 m/min, the system is able to perform a measurement every 8 cm. the obtained metrological characteristics meet the requirements of the quality control system.

6. conclusions

a vision-based system for the on-line measurement and control of dimensional and esthetical parameters of rubber profiles has been designed and characterized by the authors. the measurement system has proved to be efficient and to fully meet the specifications of the industrial quality production control system. moreover, this on-line non-contact solution has brought a drastic reduction of production waste, and hence a decrease of costs. the systems presented in this paper are going to be connected to the same fieldbus plant network. in this way a supervising application, which is under development, will allow the status of the entire production line to be monitored, the results to be saved into a database, report printing to be managed, and the feedback actions to be coordinated.

references

[1] te-hsiu sun, chun-chieh tseng, min-sheng chen, "electric contacts inspection using machine vision", image and vision computing, vol. 28, no. 6, june 2010, pp. 890-901.
[2] r. anchini, g. di leo, c. liguori, a. paolillo, "metrological characterization of a vision-based measurement system for the online inspection of automotive rubber profile", ieee transactions on instrumentation and measurement, vol. 58, no. 1, jan. 2009, pp. 4-13.
[3] g. di leo, c. liguori, a. paolillo, a. pietrosanto, "contactless measurements for on line quality monitoring in rubber extrusion processes", 19th imeko tc4 symposium and 17th iwadc workshop, advances in instrumentation and sensors interoperability, july 18-19, 2013, barcelona, spain, pp. 197-202.
[4] g. di leo, c. liguori, a. paolillo, "covariance propagation for the uncertainty estimation in stereo vision", ieee transactions on instrumentation and measurement, vol. 60, no. 5, doi: 10.1109/tim.2011.2113070, accepted for publication.
[5] r. i. hartley, p. sturm, "triangulation", computer vision and image understanding, vol. 68, no. 2, nov. 1997, pp. 146-157.
[6] r. hartley, a. zisserman, "multiple view geometry in computer vision", 2nd ed., cambridge university press.
[7] z. zhang, "a flexible new technique for camera calibration", ieee transactions on pattern analysis and machine intelligence, vol. 22, no. 11, 2000, pp. 1330-1334.
[8] e. reinhard, g. ward, p. debevec, w. heidrich et al., "high dynamic range imaging: acquisition, display and image-based lighting", 2nd ed., morgan kaufmann, 2010.
[9] t. mertens, j. kautz, f. van reeth, "exposure fusion", pacific graphics 2007.
[10] jcgm 100:2008, "evaluation of measurement data – guide to the expression of uncertainty in measurement".
[11] iso 2813:1994, "paints and varnishes – determination of specular gloss of non-metallic paint films at 20 degrees, 60 degrees and 85 degrees".
[12] t. aida, s. hayasaki, t.
sakai, "high speeding of the glossmeter for curved surfaces using ccd line sensor", conference on precision electromagnetic measurements, 7-10 june 1988, pp. 359-360.

acta imeko, issn: 2221-870x, july 2017, volume 6, number 2, 65-69

reduction of the effect of floor vibrations in a checkweigher using an electromagnetic force balance system

yuji yamakawa 1, takanori yamazaki 2
1 the university of tokyo, hongo 7-3-1, bunkyo-ku, tokyo, japan
2 tokyo denki university, hatoyama, saitama, japan

abstract: the objective of this paper is to propose a simple dynamic model of the system and a reduction method for the effect of floor vibration. the dynamics of the system with electromagnetic force compensation is approximated by a mass-spring-damper system, and an equation of motion is also derived. then, a comparison of a simulation result with a realistic response is carried out. finally, the effects of floor vibration are explored with the model, and the effectiveness of the proposed reduction method is confirmed.

section: research paper

keywords: mass measurement; electromagnetic force balance system; dynamic behavior; floor vibration

citation: yuji yamakawa, takanori yamazaki, reduction of the effect of floor vibrations in a checkweigher using an electromagnetic force balance system, acta imeko, vol. 6, no. 2, article 11, july 2017, identifier: imeko-acta-06 (2017)-02-11

section editor: min-seok kim, research institute of standards and science, korea

received june 30, 2016; in final form february 2, 2017; published july 2017

copyright: © 2017 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited

corresponding author: yuji yamakawa, e-mail: yuji_yamakawa@ipc.i.u-tokyo.ac.jp

1. introduction

recently, robots have been introduced in various settings such as fa (factory automation), inspection, the food industry and the logistics industry. by applying robots to such cases, the automation of tasks has been improved effectively. in these settings, high-speed and accurate mass measurement is also strongly required for accurately sorting products and improving working time; for example, continuous mass measurement of 300 products per minute is required. in general, two types of mass measurement systems exist for checkweighers: the load cell type [1]-[4] and the electromagnetic force compensation (emfc) type. in this research, we adopt the emfc type in order to achieve high-speed and accurate mass measurement. previously, we proposed a simple dynamic model of the checkweigher with emfc, and we confirmed the validity of the proposed model for step responses [5]. in this paper, we explore the effects of floor vibrations, which are considered as a dynamic input to the system with emfc [6], [7]. floor vibration is one of the disturbances to the system and may be caused by human walking, surrounding machine motion, etc. the effect of floor vibration is an important problem to be solved in order to apply the system in an actual environment and to achieve high-accuracy mass measurements. our goal in this paper is to duplicate the effect of floor vibration with the proposed dynamic model and to propose a simple method for reducing that effect. when an emfc system is used in a realistic situation, it is extremely important to consider the effect of the floor vibration induced in the system by the motions of surrounding machines and by human walking: the emfc system is a feedback type, and is more susceptible to floor vibrations than the feedforward type (the load cell type).
therefore, we propose a model that can estimate the effect of the floor vibration and, based on the model, a simple reduction method for reducing the error of the mass measurement due to floor vibration. finally, we verify the effectiveness of the proposed method through experiments in several situations.

2. electromagnetic force balance system

figure 1 shows an overall scheme of the checkweigher. in this figure only the measuring conveyor is shown; the feed conveyor is located to the left of the measuring conveyor and the sorter is located to the right. the products are moved by the feed conveyor, the product mass is measured by the measuring conveyor and the product is removed by the sorter outside of the measurable range. the mass measurement system consists of a weighing platform, the roberval mechanism, the lever linked to the roberval mechanism, a counter weight, the electromagnetic force actuator to control the lever displacement and a displacement sensor to measure the lever displacement. by applying the roberval mechanism to the measurement mechanism, the mass of the product can be measured regardless of the location of the product on the weighing platform. the mass of the product is estimated from the current of the electromagnetic force actuator that controls the lever displacement. the procedure of mass measuring is as follows: 1) when the product with mass m is put on the measurement system, this causes a displacement of the roberval mechanism; 2) the displacement of the roberval mechanism is magnified by the lever; 3) the magnified displacement (the lever displacement) is measured by the displacement sensor; 4) the current is controlled so that the displacement of the lever is maintained at zero; 5) the mass is estimated based on the current. since the control in this procedure is performed based on the lever displacement, an error of the mass measurement may be generated due to the displacement caused by the floor vibration.

figure 1. overall mass measurement system.

3. derivation of the dynamic model

figures 2 and 3 show the basic mechanism and structure, and the physical model of the measurement system, respectively. the equation of motion about mass M can be written as follows:

$(M + m + m_l l^2)\,\ddot{x} + c\,\dot{x} + k\,x = m g - l F$ , (1)

where M is the mass of the roberval mechanism, m is the mass of the product to be measured, c is a damping coefficient, k is a spring constant, g is the acceleration of gravity, l (= 20 m/m) is the lever ratio, and x is the displacement of mass M. in addition, ml is the mass of the lever, xl is the displacement of mass ml, and F (= Bli, where B is the magnetic flux density, l the length of the coil, and i the current) is the electromagnetic force input to control the position xl of mass ml. from (1), the natural frequency f can be calculated as

$f = \frac{1}{2\pi} \sqrt{\frac{k}{M + m + m_l l^2}}$ . (2)

from step responses in an open-loop system, the natural frequency appeared to be 5 hz. thus, the spring constant k can be obtained based on (2) as follows:

$k = (2\pi f)^2 \left( M + m + m_l l^2 \right) = 89.2 \times 10^3$ . (3)

in addition, the damping coefficient c can be adjusted so as to match the convergence rate in the responses of the lever. from (1), the roberval displacement x in the steady state in the open-loop system (F = 0) can be described by:

$x = \frac{m g}{k}$ . (4)

further, the lever displacement xl in the steady state, which is the roberval displacement magnified by the lever, can be obtained by the following equation:

$x_l = l\,x$ . (5)

simulations of the mass measurement system can be executed using (1)-(5). figure 4 shows a block diagram of the mass measurement system. the effectiveness of the dynamic model (1)-(5) can be verified by comparing the output of the lever displacement sensor (xl).

figure 2. basic mechanism and structure.

figure 3. physical model.
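as a reading aid, a minimal simulation sketch (not the authors' code) of the open-loop model of equations (1)-(5) is shown below; the damping ratio and product mass are hypothetical, and the equivalent inertia is backed out of the 5 hz natural frequency and treated as a lumped total for simplicity.

```python
# illustrative sketch: open-loop step response of equations (1)-(5), f = 0.
# parameter values other than k and l are hypothetical assumptions.
import numpy as np
from scipy.integrate import solve_ivp

L = 20.0                                 # lever ratio
K = 89.2e3                               # spring constant from equation (3)
MEQ = K / (2 * np.pi * 5.0) ** 2         # m + M + ml*l^2 backed out of f = 5 hz
C = 2 * 0.1 * np.sqrt(K * MEQ)           # damping at 10 % ratio (hypothetical)
G = 9.81

def rhs(t, y, m_prod, f_coil):
    x, v = y
    acc = (m_prod * G - L * f_coil - C * v - K * x) / MEQ
    return [v, acc]

m = 0.05                                 # 50 g product (hypothetical)
sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0], args=(m, 0.0), max_step=1e-3)
x_lever = L * sol.y[0]                   # equation (5): xl = l*x
print("steady-state xl ≈", L * m * G / K, "m")   # equation (4) scaled by l
```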
4. model validation

here, the validity of the proposed dynamic model (1)-(5) is confirmed based on step responses in the open-loop and closed-loop systems [5].

4.1. open-loop system

the validity of the proposed model was first examined by comparing the simulation and experimental results in the open-loop system. figure 5(a) depicts the experimental and simulation results for m = 0.01, 0.02, 0.05 and 0.1 kg. the red, blue, green and black lines depict the results for m = 0.01, 0.02, 0.05 and 0.1 kg, respectively; the dotted and solid lines indicate the experimental and simulation results, respectively. the start-up operation is the time when the product of mass m is put on the measurement system. at time 0.2 s, the product of mass m is removed; at the same time, the lever displacement xl is measured by the displacement sensor. in figure 5(a), the vertical axis is the output of the displacement sensor for the lever displacement. it can be seen from this figure that the responses of the simulations were in excellent agreement with the experimental results. thus, the validity of the proposed model was confirmed in the open-loop system.

4.2. closed-loop system

this section explains the experimental and simulation results with emfc. the electromagnetic force was controlled by a proportional-integral-derivative (pid) control algorithm. taking the actual circuit of the d action into account, the ideal d action could not be implemented; thus, we implemented an approximated d action instead. namely, the transfer function of the pid controller c(s) is described by

$C(s) = k_p + \frac{k_i}{s} + \frac{k_{dn}\, s}{k_{dd}\, s + 1}$ , (6)

where kp is the proportional gain, ki is the integral gain, and kdn and kdd are the numerator and denominator coefficients of the differential gain, respectively. the control voltage uv(s) (the laplace transform of uv(t)) can be adjusted as follows:

$U_v(s) = C(s)\, E(s)$ , (7)

where E(s) is the laplace transform of e(t), and e(t) (= xlr − xl) is the error between the reference xlr (= 0) and the output of the lever displacement sensor xl. the pid control is performed by an fpga (field programmable gate array) every 0.1 ms.
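a minimal discrete-time sketch (not the fpga code) of the controller of equations (6)-(7) follows; the 0.1 ms sample time is taken from the text, while the gain values are hypothetical.

```python
# illustrative sketch: discrete pid with the approximated (filtered) d action
# of equation (6). gains are hypothetical; dt = 0.1 ms follows the text.
class ApproxPID:
    def __init__(self, kp, ki, kdn, kdd, dt):
        self.kp, self.ki, self.kdn, self.kdd, self.dt = kp, ki, kdn, kdd, dt
        self.integ = 0.0
        self.d_state = 0.0     # state of the first-order d filter
        self.prev_e = 0.0

    def update(self, e):
        self.integ += self.ki * e * self.dt
        # backward-euler discretization of kdn*s / (kdd*s + 1)
        self.d_state = (self.kdd * self.d_state
                        + self.kdn * (e - self.prev_e)) / (self.kdd + self.dt)
        self.prev_e = e
        return self.kp * e + self.integ + self.d_state

pid = ApproxPID(kp=500.0, ki=2000.0, kdn=1.0, kdd=1e-3, dt=1e-4)
u_v = pid.update(0.0 - 1e-4)   # e = xlr - xl, with xlr = 0
```

the first-order filter in the d term is what makes the d action realizable on the actual circuit, as noted in the text.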
figure 5(b) depicts the experimental and simulation results for m = 0.01, 0.02, 0.05 and 0.1 kg. as in figure 5(a), the red, blue, green and black lines depict the results for m = 0.01, 0.02, 0.05 and 0.1 kg, respectively, and the dotted and solid lines indicate the experimental and simulation results, respectively. the simulation and experimental conditions are the same as for the open-loop system. responses such as the convergence time, the rise time and the settling time were almost the same for the simulation and experimental results. thus, the validity of the proposed dynamic model with emfc was confirmed. however, the peak value of the simulation result for m = 0.1 kg differed from that of the experimental result. we assume that the reasons for this modelling error are the frictional force at small displacements and the magnification mechanism of the lever.

figure 4. block diagram of the measurement system including the reduction method for floor vibration.

figure 5. comparisons between experimental results and simulation results.

5. floor vibration and reduction method

in this section we explore the effects of floor vibrations. floor vibration is one of the system disturbances to be considered. first we compare experimental results with simulation results for floor vibration and confirm the validity of the model regarding the floor vibration; then we propose a new method for decreasing the effect of the floor vibration to achieve a high-accuracy mass measurement. figure 4 shows the block diagram of the overall mass measurement system including the vibration input. the upper and lower block diagrams show the reduction method for floor vibration and the proposed dynamic model of the system, respectively. in the experiments, the floor vibrations act on the system by means of a vibration exciter. in the simulations, the floor vibration inputs are applied to the system as an environmental disturbance, as shown in figure 4. the displacement of the floor vibration in the simulation can be given by

$x_b(t) = a_b \sin(\omega_b t)$ , (8)

where xb(t) is the displacement of the floor vibration, ab [mm] is the amplitude of the floor vibration, ωb [rad/s] is the angular frequency of the floor vibration, and t is time. therefore, the acceleration ab(t) of the floor vibration can be described as follows:

$a_b(t) = -a_b\, \omega_b^2 \sin(\omega_b t)$ . (9)

the floor vibration input to the system can be represented by the product of the acceleration ab(t) and the mass mb of the relevant part of the roberval mechanism. the input of the floor vibration is added to the right side of (1) as mb ab; thus (1) can be rewritten as

$(M + m + m_l l^2)\,\ddot{x} + c\,\dot{x} + k\,x = m g - l F + m_b a_b$ . (10)

the following five conditions of the floor vibrations are chosen for the experiments and simulations: (a) ab = 1 mm, fb = 10 hz; (b) ab = 1 mm, fb = 15 hz; (c) ab = 0.75 mm, fb = 15 hz; (d) ab = 1.25 mm, fb = 5 hz; (e) ab = 1.25 mm, fb = 10 hz.

5.1. effect of floor vibration

first we compare experimental results with simulation results regarding the effects of the floor vibration, in order to confirm the validity of the proposed model for several floor vibrations. the figures on the left-hand side of figure 6 show the experimental results (red lines) and the simulation results (blue lines) for the floor vibrations. from these results, we found that almost the same responses are obtained in the simulations and the experiments; thus we confirmed the validity of the proposed dynamic model for floor vibrations.
however, differences between the simulation and experimental results occurred in the conditions shown in figures 6(b) and 6(c). the reason is that the mass value in the experiment becomes smaller due to the mechanical limits of the lever. as a result, an error of the mass measurement due to the floor vibration will be generated, as shown in figure 6. thus we propose a reduction method for the effect of the floor vibration in the next section.

5.2. method for decreasing floor vibration effects

since the same responses for floor vibrations can be obtained using the proposed model, we can compensate the effect of floor vibration on the mass measurement by a simple subtraction. namely, we compensate the error using the following equation:

$\hat{m} = m_{esti} - m_{sim}$ , (11)

where m̂, mesti and msim denote the compensated mass value, the mass value obtained from the system and the mass value obtained by the simulation with the proposed model, respectively. since a high-speed mass measurement is required, it is desirable that the processing time is short and that the response after the processing does not become slower; the proposed method is therefore considered appropriate for our objective. the figures on the right-hand side of figure 6 show the compensated mass values for floor vibrations under the same conditions as before. it can be seen from these results that the errors of the mass measurements caused by the floor vibration can be effectively decreased to less than half of the original errors. an actual compensation method for mass measurement errors due to the floor vibrations can be implemented as follows: the acceleration of the floor vibration is measured using an acceleration sensor; the measured acceleration is used as the input of the floor vibration in the proposed model; then the reduction method for the floor vibration in the mass measurement is performed. here we discussed the effectiveness of the proposed method through simulations with the experimental data. in the future, we will implement the proposed method in the actual mass measurement system in order to verify its validity.
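the compensation of equation (11) can be sketched as below (not the authors' code); a pure sine of condition (a) stands in for the accelerometer signal, and mb and the mapping from vibration force to apparent mass error are hypothetical simplifications. because the same model generates both terms here, the cancellation is exact, whereas in the real system it is only partial.

```python
# illustrative sketch: equation (11). msim is the model-predicted mass error
# driven by the (measured) floor acceleration; all values are hypothetical.
import numpy as np

DT = 1e-4                   # 0.1 ms control step from the text
AB, FB = 1e-3, 10.0         # condition (a): ab = 1 mm, fb = 10 hz
WB = 2 * np.pi * FB
MB = 5.0                    # vibrating roberval mass, kg (hypothetical)
G = 9.81

t = np.arange(0.0, 1.0, DT)
a_b = -AB * WB**2 * np.sin(WB * t)        # equation (9)

# simplification: the vibration force mb*a_b is read as an apparent mass
# error mb*a_b/g on a gravity-referenced readout
m_error_sim = MB * a_b / G                # model-predicted error, kg

m_esti = 0.05 + m_error_sim               # "measured" 50 g product + error
m_hat = m_esti - m_error_sim              # equation (11)
print("max error before/after:",
      np.abs(m_esti - 0.05).max(), np.abs(m_hat - 0.05).max())
```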
6. conclusions

in this paper, we proposed a dynamic model of the mass measurement system with emfc. we examined the validity of the proposed model for the open-loop and closed-loop systems; comparing the experimental results with the simulation results, the same responses were obtained. furthermore, we discussed the effect of the floor vibrations on the mass measurement system, and we proposed a simple reduction method for decreasing the effect of the floor vibration. finally, we evaluated the proposed method by experiments in various situations. consequently, the mass measurement error due to the floor vibration can be reduced effectively by at least 50 % with the proposed method. in the future, we plan to perform experiments for lower frequencies of the floor vibration. moreover, we will implement the proposed method in the realistic system and explore its effectiveness in an actual environment.

references

[1] t. ono, "basic point of dynamic mass measurement", proc. sice, pp. 43-44, 1999.
[2] t. ono, "dynamic weighing of mass", instrumentation and automation 12, no. 2, pp. 35, 1984.
[3] w. g. lee et al., "development of speed and accuracy for mass measurements in checkweighers and conveyor belt scales", proc. ismfm, pp. 23-28, 1994.
[4] k. kameoka et al., "signal processing for checkweigher", proc. apmf, pp. 122-128, 1996.
[5] y. yamakawa, t. yamazaki, "mathematical model of checkweigher with electromagnetic force balance system", acta imeko, vol. 3, no. 2, pp. 9-13, 2014.
[6] y. yamakawa, t. yamazaki, "modeling and control for checkweigher on floor vibration", xxi imeko world congress, tc3-o-079, 2015.
[7] y. yamakawa, t. yamazaki, "modeling of floor vibration effect for high-speed mass measurement system", 2015 asia-pacific symposium on measurement of mass, force and torque (apmf 2015), pp. 161-166, 2015.

figure 6. comparisons between experimental results and simulation results of the effects of floor vibrations.

acta imeko, issn: 2221-870x, april 2017, volume 6, number 1, 27-32

dependence of ultrasonic contact impedance hardness on young's modulus of elasticity of creep-resistant steels

michal junek, jiří janovec, petr ducháček
ctu in prague, faculty of mechanical engineering, department of materials engineering, praha 2, karlovo náměstí 13, czech republic

abstract: the paper deals with hardness measurements by mobile uci hardness testers as a means of determining the residual operation life of power unit components. it aims to answer questions regarding the level of dependence of the uci hardness on young's modulus of creep-resistant steels, and to determine the conditions of a uci hardness tester calibration. the experimental part describes comparative measurements of hardness values obtained using stationary hardness testers and uci hardness testers.

section: research paper

keywords: ultrasound hardness tester; young's modulus of elasticity; calibration; creep-resistant steel; residual operation life

citation: michal junek, jiří janovec, petr ducháček, dependence of ultrasonic contact impedance hardness on young's modulus of elasticity of creep-resistant steels, acta imeko, vol. 6, no. 1, article 5, april 2017, identifier: imeko-acta-06 (2017)-01-05

section editor: konrad jędrzejewski, warsaw university of technology, poland

received march 2, 2016; in final form march 17, 2016; published april 2017

copyright: © 2017 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited

funding: this work was supported by the ministry of education, youth and sport of the czech republic within the project no. lo1207 of the program npu1

corresponding author: m. junek, e-mail: michal.junek@fs.cvut.cz

1. introduction

at present there is a trend towards hardness measurement by portable hardness testers, allowing "non-destructive" hardness measurement of large construction parts without any need to separate specimens for stationary hardness testers. four physically distinct methods are used: the uci ultrasonic method, the dynamic rebound method, the optical tiv method (through indenter viewing) and the electrical resistance method (handy esatest). in the following text, the focus is on the uci method only. one of the many possibilities of using portable uci hardness testers is the hardness measurement of assemblies (steam boilers, steam pipe-lines) in power units, which operate in creep conditions (a temperature of approx. 550 °c and a pressure of approx. 15 n.mm-2). long-term operation (2.5×10^5 hours) in these conditions leads to degradation of the material, accompanied by a decrease of the hardness values. the uci hardness measurement method should thus serve to detect threshold hardness values that indicate poor mechanical properties of the material, the risk of failure and the necessity of a more thorough examination of the state of the material (microstructure evaluation, small punch sample preparation).

2. the uci method of hardness measurement
an uci probe consists of a vickers diamond tip attached to the end of a metal rod, which is excited by piezoelectric transducers to perform a longitudinal oscillation with a frequency of 70 khz (figure 1). t is the piezoelectric transducer, r the piezoelectric receiver, o the oscillating rod, v the indenter (for example a vickers diamond) and m is the test material.

figure 1. uci probe schematics [1].

the uci method does not require the diagonals of the test indentation to be measured, which is necessary for determining the vickers hardness. in this method, the shift of the ultrasonic frequency of the oscillating indenter is electronically related to the area of the indentation, thus resulting in the final hardness value. the deeper the diamond indenter penetrates, the larger the indentation area, the larger the frequency shift of the diamond tip and the lower the resulting hardness; see (1), (2) and figure 2 [2]:

$\Delta f = f\!\left( E_{eff} \cdot A \right)$ , (1)

$HV = \frac{F}{A}$ , (2)

where Δf is the frequency shift, Eeff the effective modulus of elasticity (containing the elastic constants of both the indenter and the test piece), A is the area of the indentation, HV the vickers hardness value and F is the test load. equation (1) implies that the frequency shift depends on the effective modulus of elasticity as well. therefore, young's modulus of elasticity in tension must be considered when using the uci method in practice, and the equipment must be calibrated when determining the hardness of different materials with different hardness values. but the question is: to what extent does the uci hardness depend on the modulus of elasticity? [2]

figure 2. frequency shift of an ultrasonic contact impedance (uci) probe as a function of hardness [3].

2.1. calibration

the astm a1038 – 05 standard [1] states that a uci hardness tester has usually been calibrated on non-alloyed and low-alloyed steel, that is, on certified hardness reference blocks with a young's modulus of elasticity equal to 210 000 n.mm-2. because unalloyed or low-alloy steels have a similar young's modulus of elasticity, accurate results are obtained with the standard calibration. in many cases, the difference in young's modulus of medium-alloy and high-alloy steels is so insignificant that the error created falls within the allowable tolerances of the part. but the question is: what is considered a similar young's modulus of elasticity? hardness reference blocks are needed for calibration to other materials with a different young's modulus of elasticity. this paper should answer the question of the calibration of uci hardness testers used for hardness measurement of components operating in power units.

2.2. comparison of non-destructive methods of hardness measurement

several methods for non-destructive hardness measurement are used in practice. table 1 shows various methods for the non-destructive measurement of hardness using portable hardness testers, together with a comparison of the areas of application of each portable hardness tester. from this comparison, the uci method seems to be the best suited for hardness measurements of large construction parts in power units (steam pipe-lines, steam boilers and welds of these parts).

table 1. recommended applications for hardness measurement by portable hardness testers [2].

applications                          dynamic rebound   uci method   tiv method   handy esatest
solid (big) parts                     *                 *            o            o
coarse-grained materials              *                 x            x            x
steel and aluminium cast alloys       *                 o            o            o
haz with welds                        x                 *            *            *
tubes: wall thickness > 20 mm         *                 *            *            *
tubes: wall thickness < 20 mm         x                 *            *            *
inhomogeneous surfaces                o                 x            x            x
sheet metal, coils                    x                 o            *            *
thin layers                           x                 o            *            *
hard to get at positions              x                 *            x            *
coarse surfaces                       *                 x            x            o
finally machined surfaces             o                 *            *            *
electrically conductive materials     *                 *            *            x
dusty environments                    *                 o            o            x

explanatory notes: x – not recommended; o – sometimes suitable (if the conditions with adverse impacts on the measurement results are eliminated); * – especially well-suited.
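as equation (1) and the calibration discussion imply, a tester effectively maps the measured frequency shift to hv through an empirically calibrated curve; a small sketch of such a mapping is shown below, with hypothetical calibration pairs.

```python
# illustrative sketch: converting a measured frequency shift to hv through an
# empirical calibration curve. the (delta_f, hv) pairs are hypothetical.
import numpy as np

# calibration on reference blocks of known hardness (e = 210 000 n/mm^2):
# larger frequency shifts mean larger indentations and therefore lower hv
delta_f_cal = np.array([400.0, 550.0, 700.0, 900.0, 1200.0])   # hz
hv_cal = np.array([650.0, 450.0, 320.0, 220.0, 140.0])

def uci_hardness(delta_f):
    return float(np.interp(delta_f, delta_f_cal, hv_cal))

print(uci_hardness(625.0))   # between the 550 hz and 700 hz calibration points
```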
3. degradation mechanisms of high-pressure (hp) steam pipe-lines

the best resistance against operation in conditions of creep damage is shown by crmov low-alloy steels in the normalized or heat-treated state (13crmo4-5, 14mov6-3, 10crmo9-10). in the case of higher operating parameters, it is the 9-12 % cr martensitic steel x10crmovnb9-1 (p91) and the x10crwmovnb9-2 (p92) steel. the main factor affecting the lifetime of hp steam pipe-lines is the combination of their material properties and operating conditions. the main degradation mechanisms therefore include material degradation in the form of structural changes caused by the coagulation of carbide particles, and the nucleation and formation of cavities due to creep damage. this degradation is accompanied by a reduction of hardness [4]. the structural changes are temperature- and time-dependent processes which can lead to a decrease in both short-term characteristics (yield strength, ultimate strength, hardness, and fracture toughness) and long-term characteristics (creep strength, creep deformation, plasticity). in the case of low-alloy creep-resistant steels, an area of predominant hardening and an area of predominant softening exist. hardening of the material (up to 1000 hours of operation) in crmov steels is caused by the additional precipitation of vanadium carbides, thereby increasing the number of new dispersed particles, reducing their average size, increasing their volume fraction and decreasing their interparticle distance. on the other hand, during material softening, coarsening of the dispersed particles via diffusion processes occurs, resulting in an increase of their average size, a decrease of their volume fraction and an increase of the interparticle distance [4]. the stages of degradation of a creep-resistant, low-alloy steel with a chemical composition of 0.5 % cr, 0.5 % mo, 0.25 % v (14mov6-3) subjected to creep exposure are shown in figure 3. stage 0 corresponds to the initial state with a ferritic-bainitic structure at the beginning of the creep exposure (figure 3, stage 0). the first stage of structural changes is characterized by a moderate decomposition of bainite, accompanied by the coagulation of m3c carbides in these areas and the further precipitation of m23c6 carbides along the ferritic grain boundaries. at the same time, very fine mc carbides precipitate within the ferrite grains (figure 3, stage 1).
the next stage is characterized by a significant decomposition of bainite and the coagulation of m3c carbides into relatively large carbide particles at the grain boundaries. the m23c6 carbides precipitate on the boundaries of the ferritic grains and form chains; simultaneously, fine mc carbides are observed within the ferritic grains (figure 3, stage 2). the final structural changes result in a ferritic matrix containing mc and m6c carbides inside the ferrite grains and large m23c6 carbides precipitated along the grain boundaries (figure 3, stage 3). depending on the operating conditions, the material may also contain other types of carbides, e.g. m7c3 carbides. after such a degradation of the material microstructure and further creep exposure, creep cavities are formed [5].

3.1. surface decarburization

surface decarburization takes place most frequently on the outer surface of steel components and is accompanied by a rapid reduction of the carbon content at the surface due to diffusion caused by high temperatures. a decarburized layer has a lower hardness than the material below the layer, due to the reduced carbon content. the carbon content in the decarburized layer increases with the distance from the outer surface, see figure 4 and figure 5; therefore, the hardness values measured after the removal of the decarburized layer are higher. since the hardness is measured on the outer surface of the components, it is imperative to remove this layer in order to achieve relevant results. the decarburized layer usually has a thickness of up to 1.0 mm.

4. experiments

the experimental part was focused on determining the dependence of the uci hardness on young's modulus of elasticity in tension via comparative measurements of the hardness values obtained by the classic hv10 vickers method and the values obtained using the uci hardness tester. the deviations of the mean values measured by the uci hardness tester and by the stationary (laboratory) hardness tester were evaluated.

4.1. used measurement equipment

comparative measurements for determining the dependence of the uci hardness on young's modulus of elasticity were performed in accordance with the čsn en iso 6507-1 standard, using a calibrated stationary hardness tester with the vickers method at a load of hv10. as representatives of portable hardness testers, the uci hardness testers krautkrämer mic20 and krautkrämer mic10 with the uci probe mic 2010 at a load of 98 n were selected.

figure 3. stages of microstructure degradation after creep exposure [5].

figure 4. surface decarburization of hp steam pipe-lines of fossil fuel power plants.

figure 5. dependence of rockwell hardness on the carbon content [6].

4.2. experimental materials

materials with different young's moduli e were selected for the comparative measurements. the values of young's modulus e were verified using data from the literature; the mean values without standard deviations are shown in table 2. the experimental determination of the young's modulus values was not carried out due to lack of time. the experimental samples were removed from steam pipe-lines or boiler tubes in different states of degradation. in the case of the steels x20crmov12-1 and x5crnicunb17-4-4, it was possible to get the samples from rotor blades. table 3 shows a description of the individual samples.

table 2. selected materials and their young's modulus values.

materials                                      e [n.mm-2]
x10crmovnb9-1 (p91), x10crwmovnb9-2 (p92)      218 000
14mov6-3, t23, t24                             210 000
x20crmov12-1, x5crnicunb17-4-4                 200 000
steel super 304h                               193 000

table 3. selected samples of materials and their state of operating (laboratory) degradation.

material            sample   state of degradation
x10crmovnb9-1       p6       lab. ageing at 600 °c/10 000 hours
                    p21      initial state after heat treatment
x10crwmovnb9-2      bt3      lab. ageing at 650 °c/20 000 hours
                    t33      lab. ageing at 650 °c/5 033 hours
14mov6-3            k1       degraded at 525 °c/240 000 hours
                    epr      degraded at 560 °c/261 800 hours
                    epc      degraded at 540 °c/240 066 hours
t23                 v23      initial state after heat treatment
                    d23      lab. ageing at 650 °c/5 033 hours
t24                 v24      initial state after heat treatment
                    d24      lab. ageing at 650 °c/5 033 hours
x20crmov12-1        l1       unknown
x5crnicunb17-4-4    l2       unknown
steel super 304h    2030     dissolving annealing 1150 °c/2 min
4.3. preparation of the measurement site

the hardness of the materials described above was always measured on the outer surface (approximately 0.5 to 1.0 mm was ground off in order to remove the decarburized layer) and across the tube wall thickness. in the case of the steels with a young's modulus of 200 000 n.mm-2, the hardness was measured only in the cross-section of the blade lock. the surface was prepared using a metallographic grinder with sandpaper of grit 400, thereby achieving a surface roughness ra of 0.07 – 0.12 µm (measured by a surface roughness tester hommelwerke lv-5e). the selected number of indentations on every surface was 10.

4.4. measurement process

on the prepared surfaces, with an approximate area of 10 mm2, the hardness was first measured according to čsn en iso 6507-1 using a stationary hardness tester (vickers method) with a load of hv10. subsequently, the uci hardness tester was used for hardness measurement in the close vicinity (approximately 5 mm) of these indentations. during the uci tester measurements the following was observed: the probe mic 2010 was first used for measuring the initial-state hardness without a fixture providing perpendicular positioning, and the hardness values measured in this way showed significant deviations and unrealistic values. therefore, it was decided to use the probe with a fixture providing perpendicular positioning to solve the problem. it follows that when measuring the hardness, it is necessary to keep the probe perpendicular to the surface, which can be achieved by installing a fixture or by having a properly trained and experienced person perform the measurements.

4.5. measurement results

all hardness measurement results are summarized in table 4 to table 11. the tables contain the average hardness values from 10 measurements for each sample, measured on the surface and through the wall thickness by the laboratory hardness tester and by the uci hardness testers. the tables further include the standard deviations (std) of the measured values and the deviation of the average value of the uci hardness from the average value measured by the stationary hardness tester (lab); the signs of the deviations follow from the tabulated averages.

a) steels with e = 218 000 n.mm-2

in the case of the steels with the same young's modulus of 218 000 n.mm-2, the measurements show that the deviations of both uci hardness testers (krautkrämer mic20 and mic10) vary with the same trend. the average value of the deviations is approximately -21 hv (see tables 4 and 5).

table 4. steel x10crmovnb9-1; samples p21 and p6.

x10crmovnb9-1; sample p21 (ø 324 x 28 mm); initial state after heat treatment

            measured on surface            measured through the wall thickness
            lab     mic 20     mic 10      lab     mic 20     mic 10
            hv10    uci hv10   uci hv10    hv10    uci hv10   uci hv10
avg.        201     187        186         215     199        195
std         ± 5     ± 6        ± 9         ± 5     ± 11       ± 10
deviation           -14        -15                 -16        -20

x10crmovnb9-1; sample p6 (ø 270 x 25 mm); laboratory ageing at 600 °c/10 000 hours

avg.        228     208        206         238     218        217
std         ± 5     ± 5        ± 5         ± 4     ± 6        ± 6
deviation           -20        -22                 -20        -21

table 5. steel x10crwmovnb9-2; samples bt3 and t33.

x10crwmovnb9-2; sample bt3 (ø 350 x 39 mm); laboratory ageing at 650 °c/20 000 hours

avg.        226     211        207         232     205        211
std         ± 2     ± 9        ± 9         ± 3     ± 5        ± 7
deviation           -15        -19                 -27        -21

x10crwmovnb9-2; sample t33 (ø 528 x 94 mm); laboratory ageing at 650 °c/5 033 hours

avg.        223     204        198         230     205        216
std         ± 3     ± 12       ± 6         ± 3     ± 7        ± 6
deviation           -19        -25                 -25        -14
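for clarity, the following small sketch shows how the avg/std/deviation rows of tables 4-11 are obtained from the 10 indentations per surface; the individual readings are hypothetical (chosen to be consistent with sample p21 of table 4).

```python
# illustrative sketch: the statistics reported in tables 4-11.
# the 10 individual readings below are hypothetical.
import numpy as np

lab = np.array([196, 204, 199, 207, 201, 195, 203, 200, 206, 199])   # hv10
uci = np.array([184, 190, 182, 193, 188, 181, 189, 186, 191, 186])   # uci hv10

avg_lab, avg_uci = lab.mean(), uci.mean()
std_lab, std_uci = lab.std(ddof=1), uci.std(ddof=1)
deviation = avg_uci - avg_lab          # uci minus stationary (lab) average
print(f"lab {avg_lab:.0f} ± {std_lab:.0f}, uci {avg_uci:.0f} ± {std_uci:.0f}, "
      f"deviation {deviation:.0f} hv")
```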
b) steels with e = 210 000 n.mm-2

in the case of the 14mov6-3 steel and the t23 and t24 steels with the same young's modulus of 210 000 n.mm-2, it was observed that the deviations of the two groups of steels, for both uci hardness testers (krautkrämer mic20 and mic10), vary with opposite trends. the average value of the deviations is approximately -13 hv for the 14mov6-3 steel and +13 hv for the t23 and t24 steels (see tables 6 to 8).

table 6. steel 14mov6-3; samples k1, epr and epc.

14mov6-3; sample k1 (ø 273 x 26 mm); degraded at 525 °c/240 000 hours

            measured on surface            measured through the wall thickness
            lab     mic 20     mic 10      lab     mic 20     mic 10
            hv10    uci hv10   uci hv10    hv10    uci hv10   uci hv10
avg.        201     191        187         214     202        202
std         ± 3     ± 7        ± 7         ± 5     ± 9        ± 14
deviation           -10        -14                 -12        -12

14mov6-3; sample epr (ø 245 x 36 mm); degraded at 560 °c/261 800 hours

avg.        151     137        133         150     138        133
std         ± 1     ± 11       ± 7         ± 2     ± 8        ± 8
deviation           -14        -18                 -12        -17

14mov6-3; sample epc (ø 324 x 48 mm); degraded at 540 °c/240 066 hours

avg.        128     121        119         133     117        119
std         ± 4     ± 2        ± 7         ± 3     ± 5        ± 5
deviation           -7         -9                  -16        -14

table 7. steel t23; samples v23 and d23.

t23; sample v23 (ø 38 x 5.6 mm); initial state after heat treatment

avg.        205     222        214         201     221        210
std         ± 4     ± 11       ± 6         ± 1     ± 6        ± 6
deviation           +17        +9                  +20        +9

t23; sample d23 (ø 38 x 5.6 mm); laboratory ageing at 650 °c/5 033 hours

avg.        157     169        167         166     176        169
std         ± 2     ± 8        ± 8         ± 3     ± 9        ± 9
deviation           +12        +10                 +10        +3

table 8. steel t24; samples v24 and d24.

t24; sample v24 (ø 38 x 5.6 mm); initial state after heat treatment

avg.        229     250        239         228     243        231
std         ± 2     ± 12       ± 7         ± 5     ± 6        ± 11
deviation           +21        +10                 +15        +3

t24; sample d24 (ø 38 x 5.6 mm); laboratory ageing at 650 °c/5 033 hours

avg.        175     186        192         176     199        190
std         ± 2     ± 12       ± 8         ± 4     ± 11       ± 11
deviation           +11        +17                 +23        +14

c) steels with e = 200 000 n.mm-2

the deviation values measured for the steels with a young's modulus of 200 000 n.mm-2 are similar for both uci hardness testers (krautkrämer mic20 and mic10). the average value of the deviations is approximately -3 hv (see tables 9 and 10).

table 9. steel x20crmov12-1; sample l1.

x20crmov12-1; sample l1 (rotor blades); unknown state; measured through the cross-section

avg.        352     348        347         361     357        355
std         ± 5     ± 11       ± 13        ± 5     ± 7        ± 7
deviation           -4         -5                  -4         -6

table 10. steel x5crnicunb17-4-4; sample l2.

x5crnicunb17-4-4; sample l2 (rotor blades); unknown state; measured through the cross-section

avg.        503     502        500         508     509        504
std         ± 7     ± 8        ± 7         ± 3     ± 10       ± 6
deviation           -1         -3                  +1         -4

d) steels with e = 193 000 n.mm-2

in the case of the austenitic steel super 304h with a young's modulus of 193 000 n.mm-2, it was observed that the measured values show large measurement errors; the standard deviation of the values measured by the uci hardness tester is up to 59 hv. the measurement of the hardness of austenitic steels is generally a problem due to their strengthening. the dynamic rebound method (brinell method) was not carried out because the specimens, on which the measurement was to be performed, did not reach the required minimum weight of 5 kg and the minimum wall thickness of 20 mm. the average value of the deviations is very different (see table 11).

table 11. steel super 304h; sample 2030.

super 304h; sample 2030 (ø 38 x 6.3 mm); dissolving annealing 1150 °c/2 min

            measured on surface            measured through the wall thickness
            lab     mic 20                 lab     mic 20
            hv10    uci hv10               hv10    uci hv10
avg.        181     366                    176     227
std         ± 2     ± 59                   ± 4     ± 32
deviation           185                            49
5. conclusion

the original idea in practice that hardness measurement by mobile uci hardness testers is independent of the value of young's modulus has proven not to be entirely correct. the experimental results show the following (see table 12): the larger the young's modulus, the larger the negative deviation of the measured hardness. the explanation of this dependence is evident from figure 2, which shows that the greater the frequency shift, the smaller the measured hardness. therefore, we can say that a steel with a higher modulus of elasticity has a greater frequency shift, and consequently a lower hardness was measured for the steels with a higher young's modulus. the measured results showed the existing dependence of the uci hardness on young's modulus. although the dependence is very small, it is necessary to consider it and to perform the proposed calibration of uci hardness testers using suitable calibration plates. the measured results are summarized in table 12. from the results it is possible to state that, with increasing young's modulus of elasticity e, the value of the negative deviation of the hardness values measured by the uci hardness tester increases in comparison with the values measured using a stationary (laboratory) tester. the dependence can be considered linear. however, this does not apply in the case of the t23 and t24 steels with a young's modulus of 210 000 n.mm-2, where the deviation of the uci hardness varies in the opposite direction. the general validity of these conclusions has thus not been confirmed in one case: the t23 and t24 steels showed a deviation trend opposite to the deviation measured in the 14mov6-3 steel; the average value of the deviations is approximately -13 hv for the 14mov6-3 steel and +13 hv for the t23 and t24 steels. finding the reasons for this difference in the deviation values is not yet completed.
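as a quick check of the linearity claim (not part of the paper), a small fit over the average deviations of table 12 below, excluding the opposite-trend t23/t24 row, can be made:

```python
# quick check of the claimed linear trend, using the average deviations of
# table 12 and excluding the opposite-trend t23/t24 row.
import numpy as np

e = np.array([218_000, 210_000, 200_000])   # young's modulus, n/mm^2
dev = np.array([-21.0, -13.0, -3.0])        # average deviation, hv

slope, intercept = np.polyfit(e, dev, 1)
print(f"deviation ≈ {slope:.5f}*e {intercept:+.1f} hv")
# -> roughly -1 hv per 1000 n/mm^2 increase of e over this range
```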
a possible cause could be the different wall thickness, because the boiler tubes made from these materials have a wall thickness of 5.6 mm, while the steam pipe-lines made from the other materials have wall thicknesses from 20 up to 80 mm. the observation of the influence of different wall thicknesses (steam and boiler tubes), the influence of the microstructure and the verification of the real young's modulus will be subjects of further research. considering the above measured results, we propose the following groups of calibration plates: 1) calibration plates for 9 % cr martensitic steels – made from p91 (p92) steel: e = 218 000 n.mm-2; 2) calibration plates for low-alloy crmov steels – made from 14mov6-3 steel: e = 210 000 n.mm-2; 3) calibration plates for 2 % cr steels – made from t23 (t24) steel: e = 210 000 n.mm-2; 4) calibration plates for steels for rotor blades – made from x20crmov12-1 steel: e = 200 000 n.mm-2. it will be necessary to determine the hardness of these calibration plates at the lower and upper limits of the hv hardness values measured in practice under operating conditions. the rebound method was not carried out because the specimens, on which the measurement was to be performed, did not reach the required minimum weight of 5 kg; measured values would thus be misleading. these results are used in the normative technical documentation for the requirements for hardness measurements by portable hardness testers in czech classic power plants.

table 12. a summary of the results of the average deviations.

materials                                      e [n.mm-2]   average deviation hv10
x10crmovnb9-1 (p91), x10crwmovnb9-2 (p92)      218 000      -21
14mov6-3                                       210 000      -13
t23, t24                                       210 000      +13
x20crmov12-1, x5crnicunb17-4-4                 200 000      -3

acknowledgement

this work was supported by the ministry of education, youth and sport of the czech republic within the project no. lo1207 of the programme npu1.

references

[1] astm a1038 – 05, "standard practice for portable hardness testing by the ultrasonic contact impedance method", west conshohocken: astm international, 2005.
[2] s. frank, "mobile hardness testing application guide for hardness testers", general electric company [online], 2005 [cit. 2015-02-13]. available from: https://www.gemeasurement.com/sites/gemc.dev/files/hardness_testing_application_guide_english_0.pdf.
[3] k. herrmann, "hardness testing: principles and applications", materials park, ohio: asm international, 2011, isbn 978-1-61503-832-9.
[4] m. junek, "posouzení životnosti vt parovodů v podmínkách creepového poškození" [assessment of the service life of hp steam pipe-lines under creep damage conditions], diploma thesis, praha, 2014, ctu in prague, faculty of mechanical engineering, department of material engineering, p. 87.
[5] j. e. oakey (ed.), "power plant life management and performance improvement", woodhead publishing, philadelphia, pa, 2011, 684 p., isbn 978-184-5697-266.
[6] slide share [online], 2013 [cit. 2015-08-29]. available from: http://image.slidesharecdn.com/f4-131118222248-phpapp02/95/hardening-14-638.jpg?cb=1384813955.
acta imeko, issn: 2221-870x, february 2015, volume 4, number 1, 5 – 10, www.imeko.org

a novel intelligent instrument for the detection and monitoring of thrust reverse noise at airports

c. asensio 1, m. ruiz 1, m. recuero 1, g. moschioni 2, m. tarabini 2
1 universidad politécnica de madrid (i2a2), 28031 madrid, etsi topografía – ctra. de valencia km 7, spain
2 politecnico di milano (dipartimento di meccanica), 23900 lecco, via d'oggiono 18, italy

abstract: many airports all over the world have established restrictions for the use of thrust reverse for slowing down aircraft after landings, especially during the night period, as a way of reducing the noise impact and the number of complaints in the vicinity of airports. this is the case of madrid airport, where the universidad politécnica de madrid, in collaboration with aena, and the politecnico di milano have been researching and developing intelligent instruments to improve the detection and classification of thrust reverse noise among the other noise sources present in the airport. based on a traditional approach, the thrust reverse noise detection tool detects two consecutive sound events and applies pattern recognition techniques for the classification of each of them (such as landing and thrust reverse). a second improvement refers to the use of a microphone array linked to a noise-monitoring unit, which enables tracking the direction of arrival of the sound, thus improving the classification rates. by using the latter, it is also possible to track the aircraft location along the runway, which enables sound pressure measurements to be transformed into sound power level estimations. although the novel instrument can still be optimized and customized, the results have shown quite good classification rates (over 90 %).

section: research paper

keywords: airport noise; thrust reverse; aircraft; noise regulations; detection; classification

citation: c. asensio, m. ruiz, m. recuero, g. moschioni, m. tarabini, a novel intelligent instrument for the detection and monitoring of thrust reverse noise at airports, acta imeko, vol. 4, no. 1, article 3, february 2015, identifier: imeko-acta-04 (2015)-01-03

editor: paolo carbone, university of perugia

received october 24, 2013; in final form april 21, 2014; published february 2015

copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited

funding: this work was supported by the i2a2 research group of the universidad politécnica de madrid

corresponding author: césar asensio, e-mail: casensio@i2a2.upm.es

1. introduction

noise is a major reason for concern regarding environmental protection around airports. among all the noise sources, the activation of the thrust reverser to slow down the aircraft after landing is well known by airport authorities as a major cause of acoustic impact (and also emissions), annoyance and complaints in the vicinity of airports. although the weight of this noise source on the overall long-term average noise assessment can be moderate (for instance in terms of day-evening-night and night-time noise indicators for strategic noise mapping purposes), especially in those airports where runways are used both for landings and take-offs, the reverse thrust can be quite disturbing, as a rapid change of engine power from idle to reverse occurs, causing a sudden burst of noise [1-4], as shown in figure 1. therefore, many airports have established restrictions for the use of thrust reversal after landing, especially during the night period, as a way of reducing the noise impact of airport operations on the community in circumstances where it is critical (for instance paris-orly, london-heathrow, o'hare-chicago [5, 6]). the traditional approach for identifying thrust reverse noise is based on the detection of two consecutive sound events, by applying thresholds to the sound level measurements acquired by the airport's noise monitoring units. each event is detected if the noise level is over the threshold for longer than a pre-established time period. but this technique has proved not to work properly in many cases, as there are many factors affecting the strength, separation and duration of the two sound events: the aircraft model and the type of thrust reverser, the weather conditions, the company procedures, the aircraft's final destination, the pilot's behaviour… all make it difficult to perform the detection task using only a traditional threshold-based approach (figure 1). the poor performance of the traditional methods, in practice, disables the sanctioning procedures, reducing the efficacy of regulations in fighting noise impact and originating misalignment between environmental and time-efficiency policies. extended and updated information regarding this specific noise source will help to keep and customize efficient thrust reverse restrictions where necessary, while removing them in airports where they are not useful (thrust reverse has many benefits regarding safety, time efficiency and maintenance cost).

figure 1. examples of audio signals of a landing with thrust reverse activation (highlighted).
based on a traditional  approach,  the  thrust  reverse  noise  detection  tool  detects  two  consecutive  sound  events,  and  applies  pattern  recognition  techniques  for  the  classification of each of them (such as landing and thrust reverse). a second improvement refers to the use of a microphone array linked to a noise‐ monitoring unit, which enables tracking the direction of arrival of the sound, thus  improving the classification rates. by taking the  latter,  it  is also  possible  to  track  the  aircraft  location  along  the  runway,  which  enables  sound  pressure  measurements  to  be  transformed  into  sound  power  level  estimations. although the novel instrument can still be optimized and customized, the results have shown quite good classification rates (over 90%).  acta imeko | www.imeko.org  february 2015 | volume 4 | number 1 | 6  noise indicators for strategic noise mapping purposes), especially in those airports where runways are used either for landings and take-offs, the reverse thrust can be quite disturbing, as a rapid change of engine power from idle to reverse occurs causing a sudden burst of noise [1-4], as shown in figure 1. therefore, many airports have established restrictions for the use of thrust reversal after landing, especially during the night period, as a way of reducing the noise impact of airport operations on the community in circumstances where it is critical (for instance paris-orly, london-heathrow, o’hare-chicago [5,6]). the traditional approach for identifying thrust reverse noise is based on the detection of two consecutive sound events, by applying thresholds to the sound level measurements acquired by the airport’s noise monitoring units. each event is detected if the noise level is over the threshold for longer than a pre-established time period. but this technique has proved not to work properly in many cases, as there are many factors affecting the strength, separation and duration of both sound events: the aircraft model and the type of thrust reverser, the weather conditions, the company procedures, the aircraft’s final destination, the pilot’s behaviour… make it difficult to perform the detection task by just using a traditional threshold-based approach (figure 1). the poor performance of the traditional methods, in practice, disables the sanctioning procedures, reducing the efficacy of regulations in fighting noise impact, and originating misalignment between environmental and time efficiency policies. extended and updated information regarding this specific noise source will help to keep and customize efficient thrust reverse restrictions where necessary, while removing them in airports where they are not useful (thrust reverse has many benefits regarding safety, time efficiency and maintenance cost). 2. methodology  the methodology described in this paper is also based on a threshold detection of two consecutive sound events (ev1, for landing noise, and ev2, for thrust reverse noise). but, unlike the traditional approach, which uses the overall sound level, in this case, the detection is improved by the application of filters, acoustic modelling and signal processing practices. furthermore, two cardioid microphones (referred to as left and right, l-r microphones) acquire the sound signal, making the system more robust against sounds arriving from a direction where landings are not expected. a specific filter is applied in each of the detectors, so that the sound level is analysed for customized bandwidths. 
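as a concrete illustration of the threshold detection principle used throughout this paper, the sketch below flags an event whenever the (filtered) level stays above a threshold for longer than a minimum duration. it is a minimal python sketch assuming a sampled lp time history; the function name and all parameter values are illustrative and not taken from the paper.

```python
import numpy as np

def detect_events(lp, fs, threshold_db, min_duration_s):
    """threshold detector: flag an event when the (filtered) sound
    pressure level stays above threshold_db for longer than
    min_duration_s. returns a list of (start_s, end_s) tuples."""
    above = np.asarray(lp) > threshold_db
    min_len = int(min_duration_s * fs)
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i                       # level crossed upwards
        elif not flag and start is not None:
            if i - start >= min_len:        # long enough to count
                events.append((start / fs, i / fs))
            start = None
    if start is not None and len(above) - start >= min_len:
        events.append((start / fs, len(above) / fs))
    return events
```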
furthermore, a transformation is applied to improve the detection of the second event (ev2), so that sound power instead of sound pressure can be analysed. if both detectors are successful, ev1 and ev2 will be classified using statistical pattern recognition techniques. in order to complete the identification of a thrust reverse sound, both sound events must be detected and properly classified. figure 2 shows the main system block diagram and figure 3 shows a scheme of the field locations.

figure 2. thrust reverse noise detection system block diagram.
figure 3. field measurements scheme.

2.1. the ev1 detector
the first stage in the trend (thrust reverse noise detector) system is the detection of a landing sound event (ev1). a microphone acquires the sound signal, and the running sound pressure level (lp) is calculated and used for the detection of this first sound event, applying time and level thresholds. as high-frequency sounds are quite unusual in environmental noise sources, a high-frequency band pass filter (5.0 to 5.2 khz) is applied before the lp measurement, to improve the detection rates by reducing the false-positive rates (lp, filtered). figure 4 shows an example of the performance of the ev1 detector.

figure 4. example of ev1 detection.

2.2. the ev2 detector
when a landing (ev1) is detected, the system starts searching for ev2 (figure 5). the ev2 detector consists of two microphones (mic1 and mic2) forming an array that is used for tracking the aircraft's position along the runway, as follows. the delay between the signals captured at each microphone changes depending on the relative position of the noise source (the aircraft) and the array. the delay is calculated using a cross-correlation method in the frequency domain [7]. this time delay of arrival allows estimating the direction of arrival of the sound [8], which is used to estimate the distance r between the aircraft and the sensors, see equation (1), where d is the distance from the array to the runway, c is the speed of sound, x12 is the distance between the two microphones in the array, and δ is the delay obtained in the measurement. figure 6 shows a simplified scheme of the parameters involved in the source location tracking process.

$r = \dfrac{d}{\cos\left(\arcsin\dfrac{c \, \delta}{x_{12}}\right)}$ (1)

figure 5. ev2 detector block diagram.

unlike the sound pressure level (spl), the sound power level is not affected by the distance from the source to the receiver; therefore, when the thrust reverser is activated, the emitted sound power increases suddenly, making it easier to detect. taking advantage of this phenomenon, and using an approach similar to the one presented by the authors in [9], the estimation of the sound power level has been carried out using a simplified inverse sound propagation model based on iso 9613 [10], shown in equation (2):

$L_w(t) = L_p(t) + 20\log_{10} r(t) + \alpha\,\dfrac{r(t)}{1000} + a$ (2)

where lw(t) is the sound power level (db), lp(t) the sound pressure level (db), r(t) the distance from the source to the receiver (m), α a coefficient describing the atmospheric attenuation of sound with distance (db/km), and a a constant that accounts for all other factors. using this transformation, every thrust reverse sound event is enhanced, making its dynamic range higher (see figure 7), thereby improving the performance of a threshold detector.
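the sketch below ties these pieces together: a frequency-domain cross-correlation (plain gcc here, as a stand-in for the method of [7]) estimates the delay δ, equation (1) as reconstructed above converts it into a source distance, and equation (2) turns spl into an estimated sound power level. a minimal python sketch; the numeric constants (microphone spacing, distance to the runway, attenuation and the constant a) are hypothetical.

```python
import numpy as np

C = 343.0      # speed of sound in m/s (assumed value)
X12 = 0.5      # microphone spacing in m (hypothetical)
D = 150.0      # perpendicular distance from array to runway in m (hypothetical)
ALPHA = 3.0    # atmospheric attenuation in dB/km (hypothetical)
A = 11.0       # constant lumping all remaining factors (hypothetical)

def tdoa(sig1, sig2, fs):
    """estimate the delay of sig1 relative to sig2 (in seconds) with
    a frequency-domain cross-correlation computed via the fft."""
    n = len(sig1) + len(sig2)
    s1 = np.fft.rfft(sig1, n=n)
    s2 = np.fft.rfft(sig2, n=n)
    cc = np.fft.irfft(s1 * np.conj(s2), n=n)
    # rearrange so lags run from -(len(sig2)-1) to +(len(sig1)-1)
    cc = np.concatenate((cc[-(len(sig2) - 1):], cc[:len(sig1)]))
    lag = np.argmax(np.abs(cc)) - (len(sig2) - 1)
    return lag / fs

def source_distance(delay):
    """distance r from the aircraft to the array, eq. (1)."""
    theta = np.arcsin(np.clip(C * delay / X12, -1.0, 1.0))
    return D / np.cos(theta)

def sound_power_level(lp, r):
    """inverse propagation model, eq. (2): spl -> estimated Lw."""
    return lp + 20.0 * np.log10(r) + ALPHA * r / 1000.0 + A
```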
2.3. ev1 detection improvement
during landings, the aircraft arrives from the south (in this test case) and, as the distance decreases (figure 3), the sound pressure level increases, triggering ev1 at time t0. at that time, the aircraft is to the left of the trend system, and during ev1 it will change its position to the right. therefore, during ev1 the array will detect a positive delay at the beginning that becomes negative at t1, which is the time when the aircraft is in front of the array axis (see figure 8). if a negative zero crossing is not perceived within ev1, the event will be rejected (this is a constraint in this measurement setup). if t1 is detected, the system starts searching for ev2. this is a way to reduce the ev1 false-positive detection rates, as any sound event is rejected if its source was not strictly moving from left to right during the detection.

figure 6. aircraft location tracking.
figure 7. ev2 enhancement for detection.
figure 8. time delay during a landing.
figure 9. performance of the ev1 detector for early thrust reverse activation.

2.4. ev2 detection improvement
the ev2 detection must be done once the landing sound event is finished, which should coincide with the end of ev1. however, by just using a threshold-based ev1 detector, it is not possible to tune the ev1 threshold in a very accurate way for all of the cases (figure 9). in view of those cases having "early" thrust reverse activation, the ev1 ending time cannot be used as a reference, and there must be extra constraints to start searching for ev2. a little time after t1 (or at t1), the estimated sound power level must start decreasing (as the aircraft is moving away and slowing down). this is the time we have called t2. only after t2 can thrust reverse be detected; this prevents false positives caused by aircraft directivity. after t2, if thrust reverse is activated, lw(t) will increase suddenly at a high rate, determining t3. t2 and t3 are calculated by analysing the slope trends in the sound power level time history (by establishing thresholds for these constraints, see figure 10). this is a way to prevent false positives caused by distant sound events in the airport (run-ups, taxiing…), as thrust reverse usually produces strong level increments.
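the t2/t3 logic lends itself to a compact sketch: threshold the slope of the lw(t) time history to find the start of the decay (t2) and, after it, the sudden rise attributed to thrust reverse (t3). a python sketch under assumed slope thresholds in db/s; the paper only states that thresholds on the slope trends are used, so the values and names here are illustrative.

```python
import numpy as np

def detect_t2_t3(lw, fs, decay_slope=-2.0, burst_slope=10.0):
    """find t2 (start of Lw decay after the fly-by) and t3 (sudden
    Lw rise attributed to thrust reverse) by thresholding the slope
    of the sound power level time history. slope thresholds in dB/s
    are hypothetical tuning values."""
    t = np.arange(len(lw)) / fs
    slope = np.gradient(lw, t)                 # dB per second
    decaying = np.where(slope < decay_slope)[0]
    if decaying.size == 0:
        return None, None                      # no decay: reject event
    t2 = decaying[0]
    rising = np.where(slope[t2:] > burst_slope)[0]
    t3 = t2 + rising[0] if rising.size else None
    return t[t2], (t[t3] if t3 is not None else None)
```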
2.5. events classification
after detecting two consecutive sound events (ev1 and ev2), the system classifies them independently, in order to reduce the false-positive identification rates. this process is carried out through the application of statistical pattern recognition techniques. as the classification stage is performed after an autonomous prior detection of the two sound events, it was decided to use two independent classification stages for ev1 and ev2. as the characteristics of the target sounds are completely different (landings for the ev1 detector, and thrust reverse for the ev2 detector), using specific classification algorithms could improve the performance of each of the classifiers. a feature extraction process is carried out for both events, describing the dynamic and frequency characteristics of the sounds, and also the position of the aircraft during each of the events. mel frequency cepstral coefficients (mfcc) have shown good performance in sound recognition applications [11-15], so the first 20 coefficients were selected. two new features were selected to describe the evolution of the time delay between the microphones during ev1 and ev2, which is highly correlated with the location and movement of the aircraft during both events. the training and testing of the system were developed with matlab-prtools. the recognition process starts with a principal components analysis (pca), used to decorrelate the data. afterwards, different algorithms were tested for each of the classifiers independently, and the two showing the best performance were selected: a k-nearest neighbour classifier [16] was used for thrust reverse events. this is a non-parametric classifier: it calculates the distance between the object to be classified and those in the training set, and assigns the class that is most repeated among the k nearest neighbours (k = 5). a parzen classifier [17] was used for landing events. this is also a non-parametric classifier: during the training, it makes a non-parametric estimation of the probability density function of each class, and these estimates are used for the classification of new objects. the identification of thrust reverse activation is positive if the first event is classified as landing and the second one is classified as thrust reverse noise (see figure 11).

figure 10. ev2 enhancement for detection.
figure 11. thrust reverse noise identification.
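to make the classification stage concrete, the sketch below reproduces its structure in python with scikit-learn as a stand-in for matlab-prtools: pca decorrelation followed by a k-nn classifier (k = 5) for ev2, and a parzen classifier emulated with one kernel density estimate per class for ev1. the feature matrices, bandwidth and sizes are hypothetical placeholders, not the paper's data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier, KernelDensity
from sklearn.pipeline import make_pipeline

# hypothetical features: 20 mfccs + 2 delay-evolution features per event
rng = np.random.default_rng(0)
X_ev2 = rng.normal(size=(200, 22))       # thrust-reverse candidates
y_ev2 = rng.integers(0, 2, size=200)     # 1 = thrust reverse

# k-nn (k = 5) after pca decorrelation, as for ev2
knn = make_pipeline(PCA(n_components=10),
                    KNeighborsClassifier(n_neighbors=5))
knn.fit(X_ev2, y_ev2)

class ParzenClassifier:
    """parzen classifier emulated with one kernel density estimate
    per class; a new object gets the class of highest density."""
    def __init__(self, bandwidth=1.0):
        self.bandwidth = bandwidth
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.kdes_ = [KernelDensity(bandwidth=self.bandwidth).fit(X[y == c])
                      for c in self.classes_]
        return self
    def predict(self, X):
        scores = np.column_stack([k.score_samples(X) for k in self.kdes_])
        return self.classes_[np.argmax(scores, axis=1)]
```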
3. results
some tests were carried out at madrid-barajas airport. the recordings were manually edited and labelled, creating a sound events database consisting of 315 landings with thrust reverse activation and 83 without it. tables 1 and 2 show the performance of the detectors.

table 1. ev1 detection rates prior to classification.
                                  ev1 detected   ev1 not detected   error (%)
landings                          398            1                  0.0
take-offs                         0              252                0.0

table 2. ev2 detection rates prior to classification.
                                  ev2 detected   ev2 not detected   error (%)
landings with thrust reverse      315            14                 4.0
landings without thrust reverse   20             63                 24.0

table 3. overall identification results.
event type                        detection error rate (%)   classification error rate (%)   overall error rate (%)
landing with thrust reverse       4.0                        4.9                              8.5
landing without thrust reverse    24.0                       40.0                             9.6

the ev1 detector has shown great performance: only landings are detected as ev1s, and almost every landing is detected. on the other hand, the ev2 detector performance is lower, as sources other than thrust reverse are incorrectly detected as ev2. an adjustment with a higher threshold would remove many of these false positives, which are generated by aircraft taxiing or run-ups at a long distance, but, on the other hand, many thrust reverse events could also be missed. as no prior information was available regarding the number of these low-level thrust reverse events, a conservative scenario was considered. however, after the classification stage, the operating point of the detectors can be optimized, and the overall identification rates will be improved. the tests showed an error rate lower than 10 % (table 3), which can still be optimized, on the one hand, with a proper customization of the sensors and the measurement setup, and on the other, with the use of the prior probability of thrust reverse occurrence during the training stage of the classifier.

4. conclusions
the variability in the duration, strength and separation of the landing and thrust reverse sound events cannot be properly handled by just using thresholds on the overall running sound pressure level measurements. this is the reason why traditional threshold detectors are not effective for identifying thrust reverse noise at airports. the novelty in this paper derives from: a) the improvement in the detection through the application of signal processing to track the aircraft position and transform sound pressure into sound power levels; b) the application of pattern recognition techniques for the classification of the previously detected sound events in the process of identifying thrust reverser activation. obviously, aircraft will have generated most of the noise events inside an airport, making it difficult to recognize each of them, as the different sound classes are not neatly separated. tracking the location of the aircraft along the runway optimizes the performance of both the detection and the classification stages, reducing the error rates below 10 %. the sound power level estimation has shown great performance for the detection of thrust reverse sound events, but it requires extra precautions, as not only the sound level is important but also the cross-correlation between the microphones (for instance, the protection of the microphones against environmental conditions must be improved). it should be noted that the location of the trend system must be carefully selected. the location must be as far as possible from the braking area in order to enhance the separation between the sound events, but it must be close enough to capture the thrust reverse sound. the array parameters and the distance from the array to the runway must be chosen so that the operating angle of the instrument covers the typical landing and braking areas. the described methodology can be implemented easily and reliably in an instrument, by applying common hardware and software resources to provide real-time results regarding the activation of thrust reverse. these results, linked to those provided by noise monitoring units, can be used, for instance, to support sanctioning procedures at an airport, or to analyse the effect of, or the need for, restrictions regarding thrust reverser use. the methodology has proved valid and effective for the measurements carried out at madrid-barajas airport [18, 19].

acknowledgements
the authors are very grateful to sea milano and aena for their support of this project. the authors are grateful to the spanish "ministerio de educación, cultura y deportes" for the financial support provided to facilitate the corresponding author's research stay at the politecnico di milano in italy.

references
[1] b.m. dunkin, a.a. atchley, k.k. hodgdon, directivity and spectral characteristics of aircraft noise during landing operations, journal of the acoustical society of america, 121, 2007, p. 3112.
[2] b.m. dunkin, directivity and spectral source noise characterization of commercial aircraft during landing with thrust reverser engagement, 2008.
[3] k.k. hodgdon, a.a. atchley, r.j. bernhard, low frequency noise study, partner-coe-2007-001, 2007.
[4] r.m. gutiérrez, a.a. atchley, k.k. hodgdon, characterization of aircraft noise during thrust reverser engagement, journal of the acoustical society of america, 118, 2005, p. 1852.
[5] boeing, airports with noise and emissions restrictions, 2012.
[6] c.c. rice, c.m. walton, restricting the use of reverse thrust as an emissions reduction strategy, report no. swutc/03/167231-1 2, 2001.
[7] d. hertz, time delay estimation by combining efficient algorithms and generalized cross-correlation methods, ieee transactions on acoustics, speech and signal processing, 34, 1986, pp. 1-7.
[8] j.m. valin, f. michaud, j. rouat, d. létourneau, robust sound source localization using a microphone array on a mobile robot, proceedings of the international conference on intelligent robots and systems, 2003.
[9] c. asensio, i. pavón, m. ruiz, r. pagan, m. recuero, estimation of directivity and sound power levels emitted by aircrafts during taxiing, for outdoor noise prediction purpose, appl. acoust. 68, 2007, pp. 1263-1279.
[10] iso, attenuation of sound during propagation outdoors. part 2: general method of calculation, iso 9613-2:1996, 1996.
[11] c. asensio, m. ruiz, m. recuero, real-time aircraft noise likeness detector, appl. acoust. 71, 2010, pp. 539-545.
[12] a. rabaoui, z. lachiri, n. ellouze, automatic environmental noise recognition, proceedings of the 2004 ieee international conference on industrial technology (icit '04), vol. 3, 2004, pp. 1670-1675.
[13] a. rabaoui, h. kadri, n. ellouze, new approaches based on one-class svms for impulsive sounds recognition tasks, proceedings of the 2008 ieee workshop on machine learning for signal processing (mlsp 2008), 2008, pp. 285-290.
[14] m. cowling, r. sitte, comparison of techniques for environmental sound recognition, pattern recognition letters, 24, 2003, pp. 2895-2907.
[15] m. tabacchi, c. asensio, i. pavón, m. recuero, j. mir, m.c. artal, a statistical pattern recognition approach for the classification of cooking stages. the boiling water case, appl. acoust. 74, 2013, pp. 1022-1032.
[16] t.m. cover, p.e. hart, nearest neighbor pattern classification, ieee transactions on information theory, 13, 1967.
[17] a.k. jain, m.d. ramaswami, classifier design with parzen windows, pattern recognition and artificial intelligence, 1988, pp. 221-228.
[18] c. asensio, g. moschioni, m. ruiz, m. tarabini, m. recuero, implementation of a thrust reverse noise detection system for airports, transportation research part d: transport and environment, 19, 2013, pp. 42-47.
[19] c. asensio, aportaciones a los sistemas de discriminación de fuentes sonoras en la medida de ruido en aeropuertos, 2012.

acta imeko
issn: 2221-870x
september 2016, volume 5, number 2, 2-3

introduction to the special section of the 1st international metroarcheo conference 2015

sabrina grassini 1, alfonso santoriello 2
1 department of applied science and technology, politecnico di torino, italy
2 dipartimento di scienze del patrimonio culturale, università di salerno, italy

section: editorial
citation: sabrina grassini, alfonso santoriello, introduction to the special section of the 1st international metroarcheo conference 2015, acta imeko, vol. 5, no. 2, article 1, september 2016, identifier: imeko-acta-05 (2016)-02-01
section editors: sabrina grassini, politecnico di torino, italy; alfonso santoriello, università di salerno, italy
received august 23, 2016; in final form august 23, 2016; published september 2016
copyright: © 2016 imeko.
this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: sabrina grassini, email: sabrina.grassini@polito.it; alfonso santoriello, email: asantori@unisa.it

dear reader,
this special issue of acta imeko contains a selection of papers presented at the 1st international conference on metrology for archaeology – metroarcheo 2015. this conference, which took place in benevento, italy, on october 22-23, 2015, was a real and concrete opportunity for bringing together science and measurement with the humanistic world, specifically archaeology, with the final aim of encouraging discussion and the exchange of knowledge and expertise between worlds still considered, often wrongly, distant from and different to one another. archaeology, and more generally the cultural heritage field, has undergone important and profound changes in recent decades. for over two centuries, archaeological studies were mainly focused on collecting information and knowledge about the artistic and monumental production of the ancient world. today it appears clear that the horizons have greatly expanded and now encompass a renewed complex of interactions between the humanistic world and the so-called "exact sciences". new integrated systems have been developed that are able to provide answers to historiographical questions and to face real problems in the research, protection, monitoring and exploitation of cultural heritage and archaeological sites, providing useful contributions also in terms of urban and regional development planning. the cooperation among experts of different disciplines often results in a multidisciplinary approach in which the individual disciplines and their instruments work together, but without a unique methodological plan, with the final result of multiplying the problems, increasing the number of documentary series and juxtaposing the obtained data. vice versa, this cooperation should result in a transdisciplinary vision, i.e. a true comparison among disciplines, and in the creation of new data that act as a bridge among those same disciplines. transdisciplinarity is not just a question of terminology: it presupposes an open rationality which overcomes excessive formalism. from the methodological and conceptual point of view, this transdisciplinary approach really contributes to increasing the interest in archaeology and cultural heritage, by extending their objectives and enriching their technical and instrumental approaches. knowing how to read a heritage context taking advantage of the know-how of different disciplines allows one to access knowledge and methods, to propose a global and multifaceted view for the study of anthropogenic phenomena, and to develop tailored strategies for the conservation, safeguarding and management of tangible and intangible cultural heritage and landscapes. multiplicity and flexibility in the developed approaches also contribute significantly to explaining the traces of mankind and their relation with the environment. moreover, it should be pointed out that metrology, the science of measurement, includes all aspects, both theoretical and practical, related to measurements, whatever their uncertainty may be and in whatever field of science or technology they occur.
consequently, the valorisation, characterisation and preservation of cultural heritage are deeply related to metrological issues concerning the collection, interpretation and validation of data gathered with the different analytical, physical-chemical and mechanical techniques, digital technologies, new ict tools, etc. metroarcheo 2015 was particularly satisfactory in the methodological and conceptual respects described above. this special issue collects ten papers which give a complete view of the different possibilities offered by such integrated analytical and measurement approaches and which well represent the worldwide attendance at the conference. the first two papers move from dating to the "quantification" of time in absolute terms (desachy) and to the different methods and statistical analysis systems (maspero et al.). the third and fourth papers highlight examples of the contributions of archaeometry: they show how, through the analysis of stable isotopes and their comparison with archaeological records and other documentary series, it is possible to obtain consistent results in determining diets and health conditions over large territories and at sites of particular relevance in central and southern italy (bondioli et al.) and in magna graecia (ricci et al.). another paper describes the importance of geoarchaeological surveys and geophysical data managed by gis, which gives impetus to studies of environmental contexts and the biosphere, overcoming the limits of traditional archaeology, which generally takes into consideration only the anthropogenic context (cozzolino et al.). the paper of ullo et al. shows how remote sensing, in the frame of the sentinel-2 mission (2015), allows the transformations that have occurred over time, particularly in vegetation, caused both by natural phenomena and by anthropogenic causes, to be followed. eventually, the last four contributions deal with several issues and fundamental aspects of protection, such as preventive archaeology and the safeguarding, monitoring and enhancement of cultural and archaeological heritage in urban areas or at specific sites. one of the discussed cases is devoted to the radical changes of the city of naples (italy) due to the construction of the underground line, which has brought to light the extraordinary stratigraphy of the old city. in this context, the use and optimization of 3d surveying technology, as well as the necessity of improving its accuracy and usability, are fundamental for collecting reliable data (fregonese et al.). 3d technologies and photogrammetric solutions are discussed in the paper of limongiello et al. for the documentation and survey of narrow spaces in the "villa di giulia felice" in pompeii (italy), for the relief and representation of the monuments, with the final aim of monitoring their long-term conservation state. another example of the potential of three-dimensional techniques for collecting information and reading important details on monuments is provided by the research conducted on the important harkhuf tomb in egypt, inscribed with hieroglyphic texts. the paper of angelini et al. discusses the methodological approach, the technique and the results obtained thanks to the development of a mathematical model, which provides important information for the safeguarding of this important witness of mankind's culture.
eventually, the last paper deals with the study of urban archaeological sites and geophysical prospecting in urban areas, discussing the case studies of the palatine hill and the st. john lateran basilica in rome (piro et al.) and highlighting how these non-invasive approaches are particularly suitable in the presence of archaeological structures built on natural soils. our hope is that these contributions are able to highlight the importance of the synergy among the different heritage actors and disciplines, which provides important tools and measurement approaches for the monitoring, preservation, management and safeguarding of cultural heritage assets. last but not least, it is our sincere hope that this special issue gives the reader new points of interest in metrology and measurement science applied to cultural heritage, taking also into account their fundamental role in our society. we hope you have an interesting read!

sabrina and alfonso

acta imeko
issn: 2221-870x
june 2015, volume 4, number 2, 85-89

two application examples of rfid sensor systems – identification and diagnosis of concrete components and monitoring of dangerous goods transports

matthias bartholmai, markus stoppel, sergej petrov, stefan hohendorf and thomas goedecke
federal institute for materials research and testing, unter den eichen 87, 12205 berlin, germany

section: research paper
keywords: rfid sensor systems; diagnosis of concrete components; monitoring of dangerous goods
citation: matthias bartholmai, markus stoppel, sergej petrov, stefan hohendorf and thomas goedecke, two application examples of rfid sensor systems – identification and diagnosis of concrete components and monitoring of dangerous goods transports, acta imeko, vol. 4, no. 2, article 15, june 2015, identifier: imeko-acta-04 (2015)-02-15
editor: paolo carbone, university of perugia, italy
received october 13, 2014; in final form november 24, 2014; published june 2015
copyright: © 2015 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: section 2 of this paper is part of the study (fe no. 29.0322/2013/bast) conducted on behalf of the federal ministry of transport and digital infrastructure, represented by the federal highway research institute (bast)
corresponding author: matthias bartholmai, e-mail: matthias.bartholmai@bam.de

abstract
the combination of rfid tags and energy-efficient sensors offers promising potential for identification, diagnosis and monitoring applications − particularly when it comes to objects which require continuous observation and which are difficult to access with conventional tools. this paper presents two examples as an outlook for rfid sensor systems in embedded structures and in mobile applications.

1. introduction
the combination of radio-frequency identification (rfid) tags with different types of sensors offers excellent potential for applications with regard to identification, diagnosis and monitoring. this is demonstrated by means of two examples of actual developments carried out by the federal institute for materials research and testing (bam). the identification and diagnosis of concrete components is a major task in the maintenance of critical infrastructure, for instance concrete bridges with heavy traffic volume; a feasibility study investigates the application of rfid sensor systems for this task. the second example reviews the transportation of dangerous goods, where the use of modern technologies offers promising possibilities to reduce accidents and to avoid nonconformity with transportation regulations.
project results demonstrate an innovative technical solution for the monitoring of dangerous goods transports with rfid sensor systems. a literature and web search was carried out to determine the state of the art and the state of research in the field of embedded rfid-based sensors. the vast majority of rfid manufacturers and suppliers develop and distribute components that are used in logistics or in trade. because of the huge demand, rfid electronics are very inexpensive. rfid tags combined with passive (powered by the induced electromagnetic field) or active (battery-powered) sensors are a niche product that only a few manufacturers have in their portfolio. nevertheless, in several areas of application a growing demand for rfid sensor systems is developing [1], [2], [3], [4].

2. identification and diagnosis of concrete components
infrastructure is subject to continuous ageing. this has given the life cycle management of infrastructure an important role, and an increasing demand for reliable inspection and monitoring tools is noticeable. the prediction of the service life of a new structure at the design stage, or the diagnosis and evaluation of the residual service life of existing structures, is a key aspect of concrete structure management. life cycle analysis and risk evaluation methods can be beneficially used to assess existing structures: actions on structures, inspection-oriented design and construction, characteristics of components and structures, life cycle costs, risk analysis and the environmental performance of a structure over its lifespan are important factors which have to be considered when the remaining service life of an infrastructure building is in question. if the possibility of efficient inspections during construction, operation and maintenance has already been considered during the design phase, one prerequisite for a reliable assessment is given. assessment of the safety of engineering works must be conducted by examining all aspects of their behaviour and all possibilities for failure which can be manifested. analysing potential critical situations of structures in europe is performed by identifying so-called limit states [5]. a limit state is defined as a condition beyond which a structure is no longer able to satisfy the provisions for which it was designed; a distinction has to be made, however, between ultimate and serviceability limit states. primarily, the reactive maintenance approach has been implemented in europe to manage the road infrastructure network with respect to deterioration. this approach may be valid for well-managed structures without deficits that are exposed to unchanging loads. however, it is unsatisfactory for structures containing structural deficits and subjected to increasing loads, modifications or widening measures for which they were not designed. therefore, a paradigm change from reactive to proactive infrastructure management is required, and non-destructive inspection methods play a key role here.
a reliable prognosis of the condition and behaviour of a structure is an important basis for effective service life management. in order to determine the most economical moment for repair measures in the lifetime of a structure, knowledge about the deterioration process at exposed regions, as well as detailed knowledge about the current condition of the whole structure, is essential [6]. embedded sensors meet the demand for gathering more information from the inside of a structure. wired sensors are meanwhile well established but may have disadvantages in the case of subsequent installation in existing structures; wireless sensors which can be embedded in concrete provide an attractive alternative solution. this study summarizes the activities of a research project which dealt with the suitability of rfid sensor tags for concrete bridge structures. figure 1 displays the concept of the overall system for the identification and diagnosis of concrete bridge elements and the interaction between the main system components. rfid, the wireless non-contact use of radio-frequency electromagnetic fields to transfer data (figure 2) for automatically identifying and tracking tags attached to objects, is well developed and known in commercial use. upcoming developments in sensor-based tags for monitoring purposes in logistics and transport can be modified and used in civil engineering. this technology is by all means promising for use within a monitoring system enabling condition assessment of concrete bridge structures.

figure 1. concept of the overall system for identification and diagnosis of concrete bridge elements and interaction between the main system components.
figure 2. scheme of the interaction between reader coil and embedded sensor in concrete.

the aim of the presented project, which was commissioned by the german federal highway research institute (bast), was to investigate modular rfid sensor systems with respect to their basic suitability for use in concrete (bridge) structures. the selected rfid tags have a sensor interface to be connected to sensors with low power consumption; the measured values of the embedded sensors can then be read wirelessly. after a literature and product search, the project focused on basic investigations, such as:
• selection of suitable sensors for humidity, temperature and corrosion activity [7], [8], which are the most relevant parameters for the long-term structural integrity of concrete structures;
• application of the sensors to reinforced concrete structures in order to perform an accurate and reliable diagnosis;
• determination of the maximum distance (cover) at which an rfid sensor tag can still be discovered and delivers correct data, to gain knowledge about the possibilities and limits of structural integration;
• comparison between high-frequency (hf) and ultra-high-frequency (uhf) radio communication for performance evaluation using structure-integrated rfid tags;
• investigation in laboratory and real-world applications to approach the application-relevant scenarios step by step.
for this purpose, prototypes of rfid sensor tags were developed, assembled, encapsulated (figure 3) and embedded in concrete specimens with a size of about 30×20×5 cm3 (figure 4). measurements in climate chambers were carried out and showed useful results.
figure 5 shows the results of humidity measurements in a climate chamber, comparing the encapsulated and concrete-embedded rfid sensor tag (red) with a non-embedded tag (black). relative humidity was applied between 10 % and 90 % in steps of 20 %. while the signal sequence of the non-embedded tag immediately follows the applied humidity settings, the sequence of the embedded tag is delayed, caused by the transport processes through the concrete. the delay increases with increasing humidity, indicating slower transport characteristics, presumably due to approaching saturation. nevertheless, the measurements were performed successfully with the embedded specimen, with coherent results over the full measurement range. range measurements were carried out to determine the maximum distance for rfid data communication through concrete. the results show that an effective range of up to 170 mm is reachable with the developed uhf rfid system in the undisturbed case. in addition, the developed rfid sensors were successfully embedded in a reinforced concrete slab with the properties of a typical bridge superstructure. the laboratory experiments have confirmed the basic functional efficiency and reliability of the developed sensor system; further investigations will be carried out on bridge components under real application conditions. the demand for embedded sensors in civil engineering is increasing, and for further developments, other measuring principles, as a supplement to or in combination with the developed system, are being investigated.

figure 3. encapsulated rfid sensor tag with reinforcement.
figure 4. preparation of a concrete specimen with embedded rfid sensor tag.
figure 5. humidity measurements, comparison between embedded and non-embedded rfid sensor tags.
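the delayed response of the embedded tag can be pictured with a simple first-order lag model: the concrete acts roughly like a low-pass filter on the applied humidity steps. the python sketch below is purely illustrative; the paper reports the delay only qualitatively, so the time constant, the dwell time of the steps and all names are assumptions.

```python
import numpy as np

def embedded_response(rh_applied, t, tau):
    """illustrative first-order lag model of the humidity seen by a
    concrete-embedded sensor for a step profile applied in the
    climate chamber. tau (hours) is a hypothetical diffusion time
    constant, not a value from the paper."""
    rh = np.empty_like(rh_applied, dtype=float)
    rh[0] = rh_applied[0]
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        rh[i] = rh[i - 1] + (rh_applied[i] - rh[i - 1]) * (1 - np.exp(-dt / tau))
    return rh

# 10 % to 90 % rh in steps of 20 %, each held for 24 h (assumed)
t = np.arange(0, 5 * 24, 0.5)                         # hours
steps = np.repeat([10, 30, 50, 70, 90], len(t) // 5)  # applied profile
delayed = embedded_response(steps.astype(float), t, tau=6.0)
```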
if a possible gas release from the transported substance cannot be detected because of lacking the proper sensor, the o2-sensor can indicate a leakage through decreasing oxygen values. for numerous dangerous goods a maximal transport temperature is defined to prevent any chemical reaction. temperatures can be measured and compared periodically to substance specific values. if that value or a tolerance is exceeded an alarm or countermeasure can be activated. the tilt sensor can be triggered on heavy vibrations or tilting of the containment. in case of a dangerous good accident the available information about the type, amount, and condition of the dangerous goods can be used to accurately inform the relief forces. unavailable or inaccurate information represents a significant problem. this often leads to a delay of the rescue operation, because relief forces must be aware of the involved substances and their condition to effectively protect themselves against them. within the scope of the project, an rfid tag was developed, that allows connecting with different types of sensors. this rfid tag combines the advantages of semi-active (only sensors are battery supplied) and active tags (sensors and radio communication are battery supplied). on one side this tag is compatible to the iso 18000 respectively epc-gen2 standards, on the other side this tag has also the ability to communicate via the widely adopted wireless lan standard wi-fi. because the tag is woken up the same way as battery-less passive tags and for that reason does not need to power-up a receivermodule, battery-lifetimes of more than half a year are possible just as with semi active tags. after the tag is woken up, the wlan module is activated and allows very fast data transmission, that otherwise would only be achievable with active tags. this greater transmission speed makes the tag suitable as storage device for much larger amounts of data, than the ones that are normally possible with rfid tags. the possibility to store great amounts of data in combination with a long battery lifetime makes this tag ideal for use as a data logger. logging intervals can be configured individually for every sensor. the tag has also an open interface which allows an easy integration of different kinds of sensors. sensor tags, data communication, and software are combined to an interactive solution, which can tackle various scenarios during dangerous goods transports. the underlying information is provided by a data base with expert knowledge, in this case the bam dangerous goods database "gefahrgut" [10]. figure 7 displays the interaction between the main system components during transport. figure 7. concept of the overall system for monitoring of dangerous goods transports and interaction between the main system components.    figure 6. prototype of the sensor enabled rfid tag.  acta imeko | www.imeko.org  june 2015 | volume 4 | number 2 | 89  the focal point of the vehicle equipment is the on-board unit (obu), which consists of a ruggedized industry pc that is specially designed for use in a truck. the main functions of the obu include acquisition of position data via gps, routing, generation of transport documents, data communication via the mobile phone network, monitoring of the load with sensors and surveillance cameras as well as wlan connectivity. it is either possible to read the sensors of the semi-active transponders or sensors that are permanently installed in the loading area. 
the obu constantly monitors the measurements to ensure that they are in the allowable range; if that is not the case, an alarm is automatically triggered. current status messages are transmitted to the centralized database, which also stores the cargo manifest. in case of need, the obu supplies the relief forces with all required information via wlan; but if the obu is destroyed during an accident, all information is still available through the centralized database.

4. conclusions
several applications require identification, diagnosis or monitoring, or a combination of them. often, general conditions like mobility or embedding favour wireless solutions. the presented examples demonstrate that radio-based communication and small-sized, low-energy sensor technologies offer the basis for innovative solutions in this regard. subsequent tasks like sensor data fusion and automated analysis and evaluation processes offer additional benefits.

acknowledgement
the contribution of further and former colleagues of bam, namely werner daum, alexander pettelkau, mahin farahbakhsh, david damm, and others, is gratefully acknowledged. section 2 of this paper is part of the study (fe no. 29.0322/2013/bast) conducted on behalf of the federal ministry of transport and digital infrastructure, represented by the federal highway research institute (bast). the responsibility for the content lies solely with the authors.

references
[1] i. zalbide, e. d'entremont, a. jimenez, h. solar, a. beriain, r. berenguer, "battery-free wireless sensors for industrial applications based on uhf rfid technology", proceedings of ieee sensors, 2014, pp. 1499-1502.
[2] g. schirripa spagnolo, l. cozzella, m. caciotta, r. colasanti, g. ferrari, "analogue fingerprinting for painting authentication", proceedings of the 20th imeko tc-4 international symposium on measurement of electrical quantities, 2014, pp. 297-402.
[3] m. merenda, i. farris, c. felini, l. militano, s. c. spinella, f. g. della corte, a. iera, "performance assessment of an enhanced rfid sensor tag for long-run sensing applications", proceedings of ieee sensors, 2014, pp. 738-741.
[4] s. kim, t. le, m. tentzeris, a. harrabi, a. collado, a. georgiadis, "an rfid-enabled inkjet-printed soil moisture sensor on paper for "smart" agricultural applications", proceedings of ieee sensors, 2014, pp. 1507-1510.
[5] d. diamantidis, "probabilistic assessment of existing structures", rilem publications s.a.r.l., 2001.
[6] j. h. kurz, h. rieder, m. stoppel, a. taffe, "control and data acquisition of automated multi-sensor systems in civil engineering", proceedings of non-destructive testing in civil engineering, ndt-ce, paris, laboratoire central des ponts et chaussées (lcpc), 2009.
[7] c. dauberschmidt, c. sodeikat, "innovative zustandserfassung durch korrosionsmonitoring und ultraschallmessungen", tiefbau, vol. 4, 2008 (in german).
[8] s. lee, "development of corrosion sensors for monitoring steel-corroding agents in reinforced concrete structures", materials and corrosion, 2003.
[9] t. goedecke, a. pettelkau, s. hohendorf, d. damm, m. bartholmai, m. farahbakhsh, "securing of dangerous goods transports by rfid tags with sensor functionality and integrated database "gefahrgut" information (sigrid)", proceedings of the 17th iapri world conference on packaging, 2010, pp. 639-642.
[10] bam dangerous goods database "gefahrgut", http://www.dgg.bam.de
acta imeko
issn: 2221-870x
june 2015, volume 4, number 2, 23-31

uncertainty restrictions of transfer click-torque wrenches

andreas brüge
physikalisch-technische bundesanstalt (ptb), bundesallee 100, 38116 braunschweig, germany

section: research paper
keywords: torque; click-torque wrench; iso 6789; transfer
citation: andreas brüge, uncertainty restrictions of transfer click-torque wrenches, acta imeko, vol. 4, no. 2, article 5, june 2015, identifier: imeko-acta-04 (2015)-02-05
editor: paolo carbone, university of perugia, italy
received april 21, 2015; in final form june 8, 2015; published june 2015
copyright: © 2015 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: andreas brüge, e-mail: andreas.bruege@ptb.de

abstract
for different types of click-torque wrenches, the measurement uncertainty contributions of typical calibration conditions were investigated. procedures which differ from those given in iso 6789 are suggested for transfer measurements at calibration facilities for click-torque wrenches in order to improve their reproducibility. examinations were carried out for influences like lever length, rise time, temperature and humidity.

1. introduction
click-torque wrenches (ctws) are setting torque tools of type ii according to iso 6789 [1], i.e. wrenches with a release mechanism that limits the transferable torque; in particular, this definition excludes screwdrivers and wrenches with flexion bars. they are to be calibrated according to iso 6789, which requires conformity limits of ±4 % and ±6 %, respectively, for the relative deviation of the releasing value. nevertheless, ctws intended as transfer standards for the traceability of the calibration facilities concerned should comply with much stricter requirements: they should correspond to the best measurement capabilities (bmcs) of laboratories accredited for ctw calibration by the german accreditation body (dakks). these bmcs cover the range from 0.2 % to 1 % (figure 1).

figure 1. distribution of listed bmcs in the dakks for click-torque wrench calibration according to iso 6789.

a key comparison of the deutscher kalibrierdienst (dkd), based on procedures according to iso 6789 and using ctws, demonstrated that the combination of ctws with these procedures is insufficient for transfer purposes. therefore, nowadays only indicating transfer torque wrenches can deliver the traceability of the static torque calibration in the facilities concerned according to the bmc values of the calibration laboratories. but specific dynamic requirements of the facilities destined for ctw calibration cannot be verified with indicating torque wrenches. therefore, the use of transfer ctws is necessary, particularly during assessments, to complement the traceability of laboratories working in the field of iso 6789. in order to submit proposals for the selection of suitable types of ctws and for measurement procedures which help to overcome known restrictions of the available types, this work surveys important sources of measurement uncertainty of different types of ctws.

2. characterisation of ctws
2.1. calibration facility
the measurements for this paper were performed at the 2 kn·m calibration facility of the physikalisch-technische bundesanstalt (ptb) in braunschweig. this facility generates torque loads continuously using an electric motor and a gearbox. the detection of the load was conducted with raute reference torque transducers of type tt1 and an hbm dmcplus transient recorder. fitted with a dv55 carrier frequency amplifier module, this recorder is able to detect releasing signals (figure 2) under the following conditions:
carrier frequency: 4.8 khz
resolution: (24 … 15) bit
sampling rate: (150 … 9600) hz
altogether, the expanded relative contribution of the reference measuring chain amounts to approximately 8·10-4.

2.2. measurements
calibrations of ctws according to iso 6789 are mainly meant to determine the deviation of the actual release torque a from the value which is set at the ctw. for this purpose, repeated releases are to be performed five times each at 20 %, 60 % and 100 % of the nominal torque; hence, inherent properties of the ctw, like running-in and repeatability, are part of the result. in contrast, the intention of the measurements in this work is to determine the ctws' coefficients of disturbing quantities and other contributions to their measurement uncertainty. furthermore, procedures are examined in order to minimize the impact of disturbing quantities and to maximize the reproducibility during a transfer measurement in a calibration laboratory; these procedures are discussed later in this paper. in the following, brief descriptions are given of the measurement procedures used for determining considerable influences on the calibration of ctws.

2.3. release torque a
the calibration facility applies a constantly increasing load until the ctw releases and the measured torque falls sharply.
the peak value a of the recorded signal of the reference measuring chain is calculated, taking into account the relative noise of the release curve by detecting the span ba (figure 3), which is a specific value of the ctws between 2·10⁻⁵ and 2·10⁻⁴, and the noise of the reference measuring chain's zero signal (typically 3.5·10⁻⁵). together with the relative uncertainties of the signal recording and of the sensitivity of the reference transducer, plus the influences of reference transducer creep and zero-signal drift, the combined relative uncertainty of the determination of the release torque a amounts to about 4·10⁻⁴.

2.4. relative repeatability srpt

as a mechanical mechanism, a ctw is affected by the history of load, temperature and humidity. every ctw in this investigation was stored under laboratory conditions for at least one day before use. furthermore, preliminary series of releases were executed. these preliminary series of up to 120 releases serve several purposes: first, the ctw's lubrication after long idleness should be recovered and, second, the maximum release rate should be achieved. if releases succeed each other too fast, thermal effects in the release mechanism provoke a drift of the release torque value. furthermore, the relative standard deviation of the values of a provides the relative repeatability srpt of the mean value $\bar{a}$ of the release torque (figure 4):

$s_{rpt} = \frac{1}{\bar{a}} \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} \left(a_i - \bar{a}\right)^{2}}$ (1)

obtaining the mean value of the release torque is a task specific to transfer measurements. measurements with a ctw according to iso 6789 are never given as an average, because every single release in the calibration has to fulfil the demands of the standard.

figure 2. torque signal of a releasing click-torque wrench over time.
figure 3. determination of the release torque a from the torque signal of a releasing click-torque wrench over time, considering the contribution ba of the signal noise to the measurement uncertainty.
figure 4. devolution of the release torque a of a ctw during consecutive releases. the running-in reaches to value no. 14 (shaded area), and the relative standard deviation of the rest of the data delivers a relative repeatability srpt of 7.6·10⁻⁴.

due to friction and resolution effects within the release mechanism, the relative short-term repeatability of consecutive releases can exceed 10⁻³ even without thermal effects. series of at least 30 repeated release events were performed and averaged for the determination of each value of $\bar{a}$ in order to overcome this instability of the ctws. this is important, as a repeatability that is high relative to the other coefficients has to be taken into account in their uncertainty estimations.

2.5. relative reproducibility srpr

the relative reproducibility srpr is defined as the relative difference between two independent measurements of $\bar{a}$ with one ctw. manufacturers of ctws recommend readjusting them to a minimum value after each use to avoid mechanical drifting during long-term storage due to the spring tension inside the release mechanism. nevertheless, keeping up the tension of the ctw over the days of the investigation turned out to be advantageous: the drift that manufacturers warn against could not be detected during extensive tests of the specimens.
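before turning to the procedural consequences, the two computations just described can be sketched compactly: picking the release torque a as the peak of a recorded curve, and evaluating srpt according to (1) after discarding the running-in values (cf. figure 4). the data, the running-in cut-off and the noise level below are assumed for illustration only:

```python
import numpy as np

def release_torque(curve):
    """release torque a: peak value of one recorded release curve."""
    return float(np.max(curve))

def relative_repeatability(a_values, running_in=14):
    """relative repeatability s_rpt of the mean release torque per
    equation (1); the first `running_in` releases are discarded."""
    a = np.asarray(a_values, dtype=float)[running_in:]
    return a.mean(), a.std(ddof=1) / a.mean()

# 30 simulated releases around 210 n*m with a decaying running-in drift
rng = np.random.default_rng(1)
a_series = 210.0 * (1.0 + 1e-3 * np.exp(-np.arange(30) / 5.0)
                    + 7.6e-4 * rng.standard_normal(30))
a_mean, s_rpt = relative_repeatability(a_series)
print(f"mean a = {a_mean:.2f} n*m, s_rpt = {s_rpt:.1e}")
```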
since no drift was observed, avoiding the readjustment of the ctws between measurements yields the benefit of a smaller value of srpr by eliminating the resolution uncertainty (figure 5). the relative resolution of the ctw, which ranges from 2·10⁻⁴ to 3·10⁻³ for the specimens used in this work, is one of the main uncertainty contributions for calibrations according to iso 6789. it originates from the uncertainty of the adjusting scale due to the width of lines and due to mechanical clearance in the adjusting mechanism, even if these are latching. as usual, calibrations at the beginning and at the end of a transfer can deliver a reliable value of srpr. in this work, these calibrations were simulated by repeatedly remounting the ctw while the adjustment was kept.

2.6. relative coefficient of lever length cl

the distance between the pivot of the square drive and the support at the handhold of a ctw affects the value of a, presumably through a cross-force effect in the pivot. furthermore, deviations of a correlated with the lever length could be caused by an eccentric female square drive in the calibration facility [2]. this has been avoided by careful alignment of the facility axis with an eccentricity of less than 0.05 mm. it is thus possible to detect just such an eccentricity in the calibration facility under test during a transfer calibration using the ctw, namely by obtaining increased values of cl there. to find the value of cl, two additional series of 30 releases each are performed at a lever length 10 mm shorter and 10 mm longer than the normal length. the relative coefficient of lever length can then be found with

$c_l = \frac{s - \bar{s}}{\bar{s} \, \Delta l}$ (2)

where $\bar{s}$ is the average of the peak signals of the calibration facility obtained in the preliminary series mentioned above, and $s$ is the averaged peak signal with the longer or the shorter lever, respectively, whose length change is given by $\Delta l$. though the determination of cl usually has to be performed at the minimum level of torque adjustment, for the purposes of transfer measurements it should preferably be done at nominal torque adjustment. in this way, no readjustment of the ctw takes place and the uncertainty contribution of the adjustment reproducibility can be avoided.

2.7. relative coefficient of temperature ct

the peak value of a ctw can be influenced by its temperature by way of altering the lubrication viscosity, by way of dimensional changes in the release mechanism or, if a strain gauge is integrated, by the temperature dependence of the gauge sensitivity [3]. one possibility to estimate ct is to track the relaxation of a ctw heated to 40 °c down to the normal laboratory temperature of 21 °c by repeated series of 30 release events each. the ctw can be heated inside a climate cabinet overnight and then has to be moved quickly into the calibration facility, which is assisted by the quick and easy mounting of the ctw with a square drive. at the beginning of the experiment, the repetition rate of the series should be high. because of the downtime of measurements during the transport of the ctw from the climate cabinet into the calibration facility, the initial status of the ctw is only available by extrapolation. the relaxed state, in contrast, is measured simply by waiting some hours until the stability of occasional measurements equals the repeatability of the ctw obtained in the preliminary measurement. this method is equivalent to the method which was proposed for simplified measurements of the relative humidity coefficient [4].
with this set-up, ct can be obtained by

$c_t = \frac{s - \bar{s}}{\bar{s} \left(t_{cab} - t_{lab}\right)}$ (3)

where $\bar{s}$ and $s$ are defined analogously to (2), but in the case of a temperature step $t_{cab} - t_{lab}$ between a climate cabinet and the laboratory. in contrast to ctw calibration according to iso 6789, transfer measurements can be confined to very narrow temperature limits, because these measurements are only reasonable in laboratories with a low uncertainty budget and thereby with efficient temperature control. thus, the contribution of ct in a transfer measurement usually remains small.

2.8. relative coefficient of humidity cf

like temperature, humidity can affect the lubrication of the mechanism or the sensitivity of a gauge within a ctw [5]. in a similar way as described above, the humidity of a ctw can be changed in a climate cabinet and its relaxation to the laboratory level can be observed. then cf is given by

$c_f = \frac{s - \bar{s}}{\bar{s} \left(f_{cab} - f_{lab}\right)}$ (4)

with $\bar{s}$ and $s$ defined analogously to (2), but in the case of a humidity step $f_{cab} - f_{lab}$ between a climate cabinet and the laboratory. strict humidity control is challenging and expensive even for ambitious laboratories. therefore, the humidity deviation between ptb and a laboratory under test is usually greater than 5 %rh, often greater than 10 %rh. consequently, the impact of cf on the uncertainty budget could potentially be higher than that of the other coefficients discussed in this paper.

figure 5. amount of the ctws' relative resolution, given mainly by the adjusting uncertainty.

2.9. relative coefficient of rise time ctb

the rise time tb of a ctw release event is defined as the time between 80 % load and the release load, which defines the 100 % load. the dedicated relative coefficient ctb can be obtained by

$c_{tb} = \frac{s - \bar{s}}{\bar{s} \left(t_{b,1} - t_{b,2}\right)}$ (5)

with $\bar{s}$ and $s$ defined analogously to (2), but in the case of a step in the rise time, $t_{b,1} - t_{b,2}$. the measurement of the rise time needs correction if the ctw under test exhibits a high reading error. then the release torque is far away from the expected nominal value, and the starting point of the rise time has to be shifted to 80 % of the actual release torque. in most cases, shifting to an earlier time is necessary, which is possible if the loading curve was recorded completely, as was done in this work. if a calibration facility delivers only the peak value of the curve, correction of the rise time value is impossible.

2.10. relative coefficient of square drive influence cv

the quality of the square drive can influence the result of a ctw calibration through reactive forces and moments, which are possible if angularity, planarity and dimensional accuracy are inadequate. the usual procedure to determine cv is to perform five series of 10 measurements, with the orientation of the square drive altered by 90° between them. during this investigation, only a short version of this procedure was performed, with 30 release events each at the 0° and 90° positions of the square drive, because such measurements are very time-consuming. a coefficient could then be obtained using

$c_v = \frac{s_{90°} - s_{0°}}{\bar{s}}$ (6)

with $\bar{s}$ and $s$ defined analogously to (2), but in the case of the alteration of the square drive position from 0° to 90°. of course, transfer measurements should be performed with a fixed position of the square drive in both calibration facilities involved, but the determination of cv does not thereby become dispensable.
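equations (2) to (7) share one difference-quotient pattern, so a single helper covers all the coefficients; the numbers below are purely illustrative:

```python
def relative_coefficient(s_bar, s, step):
    """common pattern of equations (2)-(7): s_bar is the mean peak
    signal of the preliminary series, s the mean peak signal after
    altering the influence quantity by `step` (lever length in mm,
    temperature in k, humidity in %rh, rise time in s, ...)."""
    return (s - s_bar) / (s_bar * step)

# hypothetical examples
c_l = relative_coefficient(210.00, 210.08, 10.0)         # per mm
c_t = relative_coefficient(210.00, 209.92, 40.0 - 21.0)  # per k
print(f"c_l = {c_l:.1e} /mm, c_t = {c_t:.1e} /k")
```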
assuming the female square drive of the ptb facility to be well aligned, an increased value of cv in the facility under test would indicate some of the mechanical problems described above at the female square drive of the latter. in this sense, cv is primarily a property of the calibration facility. therefore, the contribution of cv to the measurement uncertainty of the ctw should be taken into account if cv is increased in the laboratory under test.

2.11. relative coefficient of the ratchet influence crch

if the ctw comes with a ratchet, eccentricity of its rotating part is an important source of deviation, as in the case of the eccentricity of the machine's axis mentioned in section 2.6 about cl. because the ratchet is not a part of the calibration facility under test, in a transfer measurement this influence should be eliminated by using a fixed or labelled position of the ratchet in both facilities. since ratchets often come with high angular resolution and are usually not fixable, an alteration of the ratchet position could take place unnoticed during any handling. thus, the effect of a ratchet position alteration was determined by

$c_{rch} = \frac{s - \bar{s}}{\bar{s} \, \Delta n_{cog}}$ (7)

with $\bar{s}$ and $s$ defined analogously to (2), but in the case of an alteration of the ratchet position by ±1 cog ($\Delta n_{cog}$). because of the sinusoidal character of the eccentricity, this measurement was undertaken at ratchet positions of 0° and of 90°, and the maximum value of crch was used.

2.12. uncertainty of coefficients

the uncertainties of the coefficient determinations described in this section are dominated by that of s, which is given mainly by the repeatability srpt of the ctw. therefore, the contribution of the coefficients to the uncertainty is at least in the range of srpt multiplied by a sensitivity coefficient given by the amount of alteration of the influence quantity in question (see section 4.4). this underlines the importance of a small repeatability for the value of a ctw in transfer measurements. the use of at least 30 release events for the determination of each value of $\bar{s}$ reduces the influence of the release instability by more than a factor of 5.

3. specimens

in order to obtain an overview of possible properties of ctws, a number of different types were investigated in the manner described in the section above (table 1).

table 1. ctws used in this investigation.
no. | type | nom. torque in n·m | release type
1 | m210 | 210 | mechanical
2 | s.305 da | 350 | mechanical
3 | no 18 | 700 | buckling
4 | 714/10 | 100 | electromechanical
5 | 5122 ct | 180 | mechanical
6 | 1800 ql | 180 | mechanical
7 | 721nf/80 | 800 | mechanical
8 | typ d | 760 | mechanical

most of them are of the common mechanical release type. these wrenches are equipped with a release mechanism consisting of an unstable crank, which can be pre-stressed by a spring and thereby adjusted for a certain release torque. when the torque load exceeds the adjusted value, the unstable crank turns over and the torque load at the square drive falls abruptly. while this mechanism is integrated into the body of the mechanical ctw, a buckling ctw is constructed to fold entirely at the half-way point of the lever. this design is outdated and nowadays merely used for screwing work in workshops. one exponent of this type is included in the survey to have a look at the lower end of the state of the art. an electromechanical ctw includes a strain gauge to measure the torque load.
when this measured value exceeds a digitally preset limit, an electromagnetic force unlocks the square drive for release. this design is quite new, but it raises expectations of solving some of the problems known for mechanical ctws.

4. results

4.1. sampling rate

to obtain a ctw with a well-defined and repeatable release value, the act of releasing has to be as short as possible. the signal in time therefore tends to be discontinuous, which implies high requirements on the sampling rate of the amplifier that detects the release event. comparisons of release measurements with different sampling rates exhibit relative deviations of the peak value of up to 1.3·10⁻³ (figure 6). using a signal of a release event measured with a sampling rate of 9.6 khz, a simulation of signal integrations with time periods corresponding to lower sampling rates was calculated. this simulation curve shows a qualitative shape similar to that of the measurements. according to it, the curve should change from a steep slope at lower sampling rates to an approximately constant part for rates higher than 2 khz. it is therefore advisable for a comparison of two calibration facilities either to use a sampling rate of more than 2 khz or to agree on a certain value of the sampling rate if only lower rates are available. a sketch of such a simulation is given after this section.

4.2. rise time

in daily workshop use, a ctw is loaded with torque by hand and as quickly as the worker is able to. thus, the typical rise time of the torque is shorter than one second. to accommodate this fact, iso 6789 requires values of tb to be within 0.5 s and 4 s. the higher value is far beyond the practical value for manual loading and is to be seen as a concession to the deficient speed of calibration machines. the mechanical parts of ctws are subject to inertia effects and to friction, both of which depend on the velocity of the moving parts. in this way, the rise time can affect the release torque value of ctws. electromechanical ctws additionally have to face the runtime error of the release bolt and the influences of the sensing element resolution and of the internal sampling rate. seen in this light, a rise time of 0.5 s is unreasonably short for transfer measurements. the limits of iso 6789 are thus too broad for practical application and yet not adequate for the requirements of transfer measurements. the situation becomes worse if the calibration facility under test lacks precise rise time gauging. then the transfer measurement not only has to test the release torque measurement capability of the calibration machine under test; the procedure should also provide information about the rise time of the calibration machine in question. measurements with different rise times show a relative deviation of the release torque of some 10⁻³ per second at rise times shorter than 4 s, in agreement with the considerations above (figure 7). in the range between 4 s and 5 s, the dependence of the release torque on the rise time is minimal. thus, for the purpose of transfer, measurements should be performed slowly, with a rise time of 5 s. as manufacturers are obliged to design their calibration machines according to iso 6789, the machines are often unable to run so slowly. in this case, the two comparing laboratories should agree upon a rise time, which poses the problem of measuring the rise time exactly.

4.3. selection of a transfer ctw

in order to rate the suitability of the ctws for transfer purposes, a table of the coefficients of the ctws in the survey is given (table 2).
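the signal-integration simulation described in section 4.1 can be reproduced in outline by block-averaging a finely sampled release record over windows that correspond to lower sampling rates and comparing the resulting peak values. the release pulse below is synthetic, so only the qualitative behaviour is meaningful:

```python
import numpy as np

fs = 9600.0                               # reference sampling rate, hz
t = np.arange(0.0, 1.0, 1.0 / fs)
pulse = np.minimum(t / 0.8, 1.0) * 210.0  # ramp up to the release value
pulse[t > 0.8] = 0.0                      # abrupt release

def peak_at_rate(signal, fs_ref, fs_low):
    """peak value after integrating (averaging) over time windows that
    mimic the integration period of a lower sampling rate."""
    n = max(1, int(round(fs_ref / fs_low)))
    blocks = signal[: len(signal) // n * n].reshape(-1, n).mean(axis=1)
    return blocks.max()

for fs_low in (9600, 4800, 2400, 1200, 600, 300, 150):
    dev = peak_at_rate(pulse, fs, fs_low) / 210.0 - 1.0
    print(f"{fs_low:5.0f} hz: relative peak deviation {dev:+.1e}")
```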
figure 6. relative deviation of the peak value, depending on the sampling rate of the amplifier. the dashed curve is the result of a simulation which performs signal integrations with corresponding time periods on a measured release signal obtained at a sampling rate of 9.6 khz.

table 2. absolute values of relative coefficients in 10⁻⁶ of the ctws listed in table 1. some data were not measured due to time constraints; the corresponding coefficients were treated as equalling zero. the deviation of specimen no. 3 is due to friction-induced drift and was treated as a systematic contribution.
rel. coefficient in 10⁻⁶ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
cl in mm⁻¹ | 2548 | 92 | no data | 326 | 394 | 636 | 330 | 37
ct in k⁻¹ | 1041 | 351 | no data | 507 | 903 | 510 | 1008 | 181
cf in (%rh)⁻¹ | 1154 | 560 | no data | 20 | 268 | 77 | no data | 229
ctb in s⁻¹ | 8153 | 6029 | no data | 2293 | 5749 | 1259 | 4537 | 3084
cv | no data | 4071 | no data | 220 | 7453 | 586 | 151 | 8041
crch (one cog) | 6787 | 1249 | no data | 305 | 5783 | 2316 | 2635 | no ratchet
srpr | 2748 | 217 | no data | 143 | 3419 | 674 | 989 | 5184
srpt | 580 | 234 | 27842 | 96 | 139 | 524 | 646 | 334

this survey is neither representative of the ctws available on the market, nor can conclusions be drawn about the properties of a ctw type; that would require tests on a higher number of ctws of a certain type. the survey is meant to give examples of which parameters are important. better than a competition of coefficients, an analysis of their uncertainty impacts gives evidence of the suitability of a ctw for transfer calibrations (table 3, figure 8). the contributions in table 3 indicate the priority for compensating the uncertainty sources of a ctw, or for its improvement, in order to qualify it for transfer measurements. the most influential contribution to the measurement uncertainty to be considered in the selection of a transfer ctw is the repeatability srpt. further parameters not included in table 2 are the object of a qualitative appraisal. the shape of a release curve should feature a well-defined peak value with a monotonic rise and a fast break-off, kickback-free for at least one second. the specific running-in of the release value from the initial increase to stable values has to be observed in preliminary tests and is to be considered in the calibrations by omitting the corresponding data points. moreover, it has to be ensured by tests that keeping the ctw in tension during a transfer calibration does not increase the repeatability uncertainty due to mechanical drift, as mentioned in section 2.5.

4.4. uncertainty budgets

the combined relative uncertainty of the reference torque measurement with a ctw, wref, consists of the contributions of the reproducibility wrpr (9), of the release torque detection wa (10) and of the influence of the coefficients wx (11):

$w_{ref} = \sqrt{w_{rpr}^{2} + w_{a}^{2} + \sum_{x} w_{x}^{2}}$ (8)

$w_{rpr} = \sqrt{\frac{s_{rpr}^{2}}{12} + \left(2\,s_{rpt}\right)^{2}}$ (9)

$w_{a} = \sqrt{w_{nn}^{2} + s_{nn}^{2} + \left(2\,s_{zero}\right)^{2} + \frac{b_{a}^{2}}{12}}$ (10)

the contribution of the reproducibility is given by the measured amount of srpr and twice the uncertainty of this measurement, which is dominated by the repeatability srpt. the contribution wa consists, in addition to the span ba, of the relative uncertainty of the national standard facility wnn, the relative long-term stability of the facility snn, and twice the relative stability of the zero signal of the standard facility.
table 3. relative standard measurement uncertainties w in % of the ctws listed in table 1 due to the examined quantities introduced in the text, for a transfer measurement between a national standard machine and a calibration laboratory. the contributions of the specific coefficients listed in table 2 are calculated using the calibration conditions listed in table 4, both for a standard machine and for a calibration laboratory. the resulting combined expanded relative uncertainty wref is given both under the assumptions of table 4 and neglecting the influence of a ratchet. the contributions, except for wrch, are graded by amount (in the original, by colour: the highest contributions printed in red, the second highest in orange).
uncertainty in % | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
wa | 0.042 | 0.041 | 0.044 | 0.041 | 0.044 | 0.043 | 0.042 | 0.041
wrpr | 0.161 | 0.048 | 0.557 | 0.020 | 0.142 | 0.108 | 0.135 | 0.222
wl | 0.026 | 0.001 | 0.007 | 0.003 | 0.004 | 0.007 | 0.004 | 0.001
wt | 0.063 | 0.021 | 0.041 | 0.030 | 0.054 | 0.031 | 0.061 | 0.012
wf | 0.180 | 0.087 | 0.053 | 0.004 | 0.042 | 0.016 | 0.012 | 0.036
wtb | 0.122 | 0.089 | 0.100 | 0.034 | 0.085 | 0.026 | 0.071 | 0.047
wrch | 0.041 | 0.119 | 0.197 | 0.009 | 0.215 | 0.041 | 0.046 | no ratchet
wref, k=2 | 0.57 | 0.37 | 1.29 | 0.13 | 0.57 | 0.26 | 0.35 | 0.47
wref, k=2 (no ratchet) | 0.56 | 0.28 | 1.12 | 0.13 | 0.37 | 0.25 | 0.34 | 0.47

figure 8. relative standard uncertainties of the investigated contributions calculated according to (9), (10) and (11).
figure 7. peak value of a ctw depending on the torque load rise time.

the coefficients cx (x stands for one of the indices l, t, f, tb and rch introduced in section 2), the stability contributions srpr and srpt, the uncertainty of the calibration conditions ux (table 4), and the distribution functions were processed into relative standard uncertainty contributions wx under the condition that the sampling rate is chosen adequately and that cv is equal in both laboratories. because a complete analysis according to the guide to the expression of uncertainty in measurement [6] would be disproportionately extensive, a simplified approximation was used:

$w_{x} = \sqrt{u_{x,nn}^{2} + u_{x,lab}^{2}} \; \sqrt{\frac{c_{x}^{2}}{12} + \left(\frac{2\,s_{rpt}}{\Delta x}\right)^{2}}$ (11)

while the uncertainty of the coefficient's detection, given mainly by the repeatability srpt, cannot be neglected, the contribution of a specific coefficient cx has to be extended by the Δx-th fraction of srpt. here, Δx is the step of the calibration condition quantity, in units of this quantity, performed during the detection of the coefficient. the contribution of the coefficient cx is derived from a span, thus a rectangular distribution is to be taken into account. in contrast, the repeatability srpt originates from an averaging, thus a normal distribution is to be assumed. because cx is achieved by a difference, the extension due to the repeatability is to be taken into account twice. to obtain the actual contribution of the coefficients, the expanded specific coefficients have to be multiplied by the uncertainty of the corresponding calibration conditions ux, which are given in table 4 for the reference calibration with the national standard machine (nn) and for a typical accredited calibration laboratory (lab). thus, the relative standard uncertainty wx includes the impact of the ctw properties at both calibration facilities involved.
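read together, (8)-(11) turn the coefficients of table 2 and the conditions of table 4 into a budget. the sketch below applies this reading to ctw no. 2; the detection steps for humidity and rise time are assumed, as they are not tabulated, and the result should land close to the 0.28 % given in table 3 for this wrench without a ratchet:

```python
import numpy as np

def w_x(c_x, s_rpt, step, u_nn, u_lab):
    """contribution of one coefficient per the reading of (11) above:
    rectangular span for c_x, twice the repeatability per detection
    step, scaled by the condition uncertainties of both facilities."""
    expanded = np.hypot(c_x / np.sqrt(12), 2 * s_rpt / step)
    return expanded * np.hypot(u_nn, u_lab)

s_rpt, s_rpr = 234e-6, 217e-6                      # ctw no. 2, table 2
w = {
    "l":  w_x(92e-6,   s_rpt, 10.0, 0.25, 0.25),   # lever step 10 mm
    "t":  w_x(351e-6,  s_rpt, 19.0, 0.5,  2.0),    # 40 -> 21 degc step
    "f":  w_x(560e-6,  s_rpt, 10.0, 2.0,  5.0),    # assumed 10 %rh step
    "tb": w_x(6029e-6, s_rpt, 3.5,  0.1,  0.5),    # assumed 3.5 s step
}
w["rpr"] = np.hypot(s_rpr / np.sqrt(12), 2 * s_rpt)   # eq. (9)
w["a"] = 0.041e-2                                     # eq. (10), table 3
w_ref = 2 * np.sqrt(sum(v**2 for v in w.values()))    # eq. (8), k = 2
print({k: round(v * 100, 3) for k, v in w.items()},
      f"wref(k=2) = {w_ref * 100:.2f} %")
```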
the uncertainty contribution of the stability, wrpr, is calculated, equivalently to the second part of (11), from the reproducibility srpr with a rectangular distribution and twice the repeatability srpt with a normal distribution. as an overall result, the expanded relative combined uncertainty wref (k=2) is given for each ctw in figure 9. this uncertainty comprises the contributions due to the ctw properties in combination both with the calibration conditions in the national standard machine (tn-nn) and with those in a calibration machine of a laboratory under test (tn-lab), as well as a contribution from the national standard machine itself (nn). if a transfer calibration is to confirm the bmc of a calibration machine under test, the en value (12) must not exceed 1:

$e_n = \frac{\left|\bar{a}_{ref} - \bar{a}_{lab}\right|}{\sqrt{u_{ref}^{2} + u_{lab}^{2}}} \le 1$ (12)

the en value compares the difference between two measurements of the release torque $\bar{a}$ with the quadratic sum of the uncertainties referring to these calibrations. in this work, the transfer ctw is to be understood as a part of the reference calibration; thus the corresponding expanded uncertainty uref corresponds to the relative expanded uncertainty wref given in figure 9:

$w_{ref} = \sqrt{w_{nn}^{2} + w_{tn\text{-}nn}^{2} + w_{tn\text{-}lab}^{2}}$ (13)

the examination of uref is important, since (12) implies that, even at equality of the two comparison measurements ($\bar{a}_{ref} = \bar{a}_{lab}$), the smallest bmc the comparison could verify is $u_{lab} = u_{ref}$. therefore, a ctw for transfer application without restriction in accredited laboratories according to figure 1 should feature a value of wref smaller than 0.2 %. only specimen no. 4 meets this requirement and can be used in transfer measurements for all laboratories. the majority of the bmc values in figure 1 are greater than or equal to 0.5 %; transfer to these laboratories should also be possible with ctw no. 6, with a sufficient uncertainty buffer. the best results were achieved with ctw no. 4, which is of the electromechanical type. apparently, the higher engineering effort is reflected in a very low dependency of the release torque on the calibration conditions. in table 3, the contributions are graded by amount. wrch is excluded from this grading, because this contribution can be avoided by using no ratchet but rather a fixed square drive. for ctw no. 4, the contribution of wa, which is mainly the uncertainty of the standard calibration facility, is the highest, and the contribution of wtb is the second highest. a further improvement of this ctw should start with an enhancement of the internal sampling rate, which should reduce the influence of the rise time. for the other ctws, an improvement in this respect is only to be expected from a re-design of the releasing parts – quite a large effort, which does not meet the actual goals of the manufacturers. still, many of the ctws could be improved fundamentally in this field. most of the ctws would also benefit from an improvement of the relative repeatability srpt. this is indicated in table 2 by the amount of the repeatability of a ctw in comparison to the other influences.

figure 9. combined expanded relative uncertainty wref for the investigated specimens in comparison to the smallest bmc of the dakks-accredited laboratories (yellow bar).
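criterion (12) itself is a one-liner; the comparison values below are hypothetical:

```python
import numpy as np

def en_value(a_ref, a_lab, u_ref, u_lab):
    """en criterion of equation (12): difference of the two mean
    release torques over the quadratic sum of the absolute expanded
    uncertainties; en <= 1 confirms the laboratory's bmc."""
    return abs(a_ref - a_lab) / np.hypot(u_ref, u_lab)

# hypothetical: 0.05 % difference, wref = 0.13 %, bmc under test 0.2 %
a_ref, a_lab = 210.00, 209.90
en = en_value(a_ref, a_lab, u_ref=0.0013 * a_ref, u_lab=0.002 * a_lab)
print(f"en = {en:.2f} ->", "bmc confirmed" if en <= 1 else "not confirmed")
```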
table 4. uncertainty of the calibration conditions ux, expressed as a half-span value, used for the calculation of an uncertainty budget in a transfer measurement (table 3). given are measured values for the ptb standard calibration machine (nn) and assumed values for typical accredited calibration laboratories (lab).
condition | index x | ux (nn) | ux (lab)
lever length | l | 0.25 mm | 0.25 mm
temperature | t | 0.5 k | 2 k
humidity | f | 2 %rh | 5 %rh
rise time | tb | 0.1 s | 0.5 s
ratchet position | rch | 0 cog | 1 cog

srpt should be in the range of the smallest contribution in this compilation. many of the tested ctws cannot fulfil this requirement. ctw no. 3 exhibited friction-induced drift during the preliminary measurements. this drift is to be understood as a systematic contribution and therefore has to be taken into account by its absolute value. thus, this contribution on its own exceeds the requirements of transfer application, and hence no further measurements were performed with this ctw. coefficients which were not measured are marked in table 2 with "no data". these are counted as zero in order to obtain a lower-bound estimate of wref. nevertheless, the uncertainty contributions wx of these conditions are greater than zero in this analysis because of the contribution of srpt in (11).

4.5. procedure for transfer measurements

the measurements of the survey show that the procedure of iso 6789 (figure 10) is not adequate for transfer measurements with ctws. the readjustment uncertainty of the ctws can be avoided if the adjustment of the ctw is not changed during the measurements. this adjustment should be made at the nominal value of the ctw in order to use the best possible signal-to-noise ratio. extensive tests with ctws stored for several days under the tension of this adjustment did not show a drift of the release torque. furthermore, the ctw with the best overall uncertainty is of the electromechanical type and hence is free of mechanical tension due to adjustment. the transfer measurement should consist of series of at least 30 release events to reduce the repeatability uncertainty (figure 11). the load rise time should be fixed at a certain value, preferably 5 s. furthermore, the laboratories should agree on a fixed value of the sampling rate, at best more than 2 khz. the position of the square drive and of the ratchet should be the same in both laboratories. additional series after alteration of the square drive position by 90° should be performed for at least two positions. the length of the lever, the temperature and the humidity should be controlled and documented.

5. conclusions

the capability of ctws for use as transfer transducers depends not only on their technical specifications. restricted specifications can be compensated if adapted procedures are employed, which may differ considerably from those according to iso 6789. the objective of these procedures should be an optimisation of the reproducibility of the ctws' response. a suggested procedure designates the measurements required in iso 6789 at 20 % and 60 % of the nominal value, but calls for at least 30 repeated measurements instead of 5. parameters like rise time, positions of square drive and ratchet, sampling rate, temperature, humidity and lever length are to be agreed upon and reproduced in the laboratories within narrow limits. the survey delivers the following results: 1. the use of transfer ctws for dakks assessments is possible under special conditions and with adequate types of ctws for the laboratories accredited by the dakks.
therefore, the development both of specific measurement procedures and of selection criteria for ctws, which was carried out to complement the traceability of the laboratories working in the field of iso 6789, should be supplemented by further improvement of the ctws, especially of their repeatability and rise time sensitivity. 2. the detected coefficients of the specimens used vary over a wide range. the selection of a ctw for transfer application therefore requires an analysis of the combined measurement uncertainty budget in the set-up of the intended transfer measurement. 3. a new design of ctw with an electromechanical release mechanism yields the best results among the tested specimens. here, further improvements could be possible with higher internal sampling rates.

acknowledgment

thanks go to sebastian kaletka, who carefully performed hundreds of measurements for this investigation.

figure 10. loading schedule according to iso 6789.
figure 11. proposed loading schedule for transfer measurement with ctw.

references
[1] iso 6789:2003-10, "assembly tools for screws and nuts – hand torque tools – requirements and test methods for design conformance testing, quality conformance testing and recalibration procedure", international organization for standardization, geneva, switzerland.
[2] a. brüge, "influence of eccentric mounting on the calibration of torque measuring devices", proc. of the 15th imeko tc3 conference, october 7-11, 1996, madrid, spain, pp. 255-259.
[3] t. sanponpute, n. arksonnarong, "temperature and humidity dependence on stability of torque measuring devices", imeko 22nd tc3, 15th tc5 and 3rd tc22 international conferences, february 3-5, 2014, cape town, republic of south africa. url: http://www.imeko.org/publications/tc3-2014/imeko-tc3-2014-006.pdf
[4] a. brüge, "influence of humidity on torque transducers – estimation methods for calibration laboratories", xx imeko world congress, metrology for green growth, 9-14 september 2012, busan, republic of korea. url: http://www.imeko.org/publications/wc-2012/imeko-wc-2012-tc3-o30.pdf
[5] d. röske, d. mauersberger, "on the stability of measuring devices for torque key comparisons", imeko xviii world congress and iv brazilian congress of metrology, "metrology for a sustainable development", september 17-22, 2006, rio de janeiro, brazil, on cd-rom, file name: 00181.pdf
[6] bipm, "evaluation of measurement data – guide to the expression of uncertainty in measurement", jcgm 100:2008. url: http://www.bipm.org/utils/common/documents/jcgm/jcgm_100_2008_e.pdf

acta imeko
issn: 2221-870x
june 2015, volume 4, number 2, 45-51

investigations for the model-based dynamic calibration of force transducers by using shock excitation

michael kobusch 1, sascha eichstädt 2, leonard klaus 1, thomas bruns 1
1 physikalisch-technische bundesanstalt (ptb), bundesallee 100, 38116 braunschweig, germany
2 physikalisch-technische bundesanstalt (ptb), abbestr. 2-12, 10587 berlin, germany

section: research paper
keywords: dynamic calibration; shock force; dynamic modelling; parameter identification
citation: michael kobusch, sascha eichstädt, leonard klaus, thomas bruns, investigations for the model-based dynamic calibration of force transducers by using shock excitation, acta imeko, vol. 4, no. 2, article 8, june 2015, identifier: imeko-acta-04 (2015)-02-08
editor: paolo carbone, university of perugia, italy
received september 29, 2014; in final form february 13, 2015; published june 2015
copyright: © 2015 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
funding: this work is part of the joint research project ind09 "traceable dynamic measurement of mechanical quantities" of the european metrology research programme (emrp). the emrp is jointly funded by the emrp participating countries within euramet and the european union.
corresponding author: michael kobusch, e-mail: michael.kobusch@ptb.de

abstract: within the scope of the joint research project emrp ind09 "traceable dynamic measurements of mechanical quantities", numerous measurements were performed at ptb's 20 kn primary shock force calibration device to investigate and validate the approach of a model-based dynamic calibration of force transducers by using shock excitations. the tests included several strain gauge force transducers of greatly differing structural design, size, weight and mechanical coupling. by looking at a few examples, some investigated physical models of the measurement set-up and a developed data analysis procedure for parameter identification based on measured shock data are presented and discussed. the models reproduce the dynamic response including the observed modal oscillations of various origins that limit the usable measurement bandwidth. moreover, these modal oscillations may play an important role in the parameter identification process, which is further discussed. this paper is an extended version of the original contribution to the imeko 2014 conference in cape town, south africa.

1. introduction

dynamic force measurements are widely used in many industrial areas, and the increasing demands on measurement accuracy have set new metrological challenges. but still, traceability for dynamic measurements is solely based on static calibrations, and documentary standards or commonly accepted guidelines for dynamic measurements do not exist. for this reason, the establishment of traceable measurements under dynamic conditions is a highly relevant topic. its importance is emphasised by a current european metrology research programme (emrp) joint research project dedicated to the traceable dynamic measurement of mechanical quantities [1]. in this context, the general approach of a model-based calibration methodology will be followed, in which the dynamic behaviour of the force transducer in a given mechanical calibration set-up is described by an appropriate model consisting of a series arrangement of spring-mass-damper elements. the characteristic dynamic model parameters of the force transducer – i.e. values describing its distribution of mass, stiffness and damping – have to be determined. by fitting modelled and measured shock force data, the parameters of interest may be identified from the dynamic measurements. considering the fact that the measurement data may show modal oscillations of the mechanical set-up from various sources, which were found in previous experimental studies supported by finite element simulations [2], the development and selection of adequate methods and procedures to analyse the shock force data is of great importance. the general purpose of a model-based calibration is the determination of the characteristic parameters which define the force transducer's dynamic measurement behaviour in a given dynamic application. the parametric model of the transducer could in principle be employed as part of a larger parametric model of the particular measurement application, for instance of a shock calibration device or a uni-axial testing machine. the approach of the model-based calibration will furthermore allow the calculation of realistic measurement uncertainties in dynamic force measurements, which are basically not known at the moment. the remainder of this paper is outlined as follows. section 2 presents experimental results from shock force calibration tests performed with different force transducers. section 3 then discusses the mathematical modelling and the parameter identification procedure. finally, the last two sections give an outlook on future research and conclude the presented work.

2. experimental tests

within the scope of the above-mentioned emrp project, several strain gauge force transducers of greatly differing size, weight, mechanical design and adaptation were investigated at ptb's 20 kn primary shock force calibration device.

2.1. shock calibration device

figure 1 shows the calibration device and illustrates its working principle. two cube-shaped mass bodies (mb1, mb2), each of 10 kg, and the force transducer under test are brought to collision. the traceability of the shock force is realised by the determination of mass (from weighing) and acceleration (by means of laser vibrometers). further information about this device is given in [3]. a recent modification of its measurement geometry now allows on-axis vibrometer measurements similar to those obtained with the larger 250 kn calibration device [4]. as this geometry is not susceptible to parasitic rotational vibrations possibly excited by the impact [5], the quality of the time-dependent acceleration signals has been improved.

2.2. force transducers

the tested force transducers are equipped with a threaded bolt with a spherical end face and can measure compression as well as tension forces. this choice also allows calibrations with sinusoidal forces for comparing the different dynamic results. figure 2 shows two of the investigated force transducers and their adaptation to the mass body of the calibration device. the small hbm u9b / 1 kn has a mass of about 63 g and uses a measuring body with a flexing diaphragm applied with strain gauges that ends in the upper threaded bolt. this diaphragm, which is considered to be the measuring spring, divides the mechanical structure of the force transducer into two parts. the effective mass of the upper part (head mass, see section 3.1) is less than 3 g. the manufacturer specifies a fundamental resonance of 24 khz. in contrast, the large interface 1610 / 2.2 kn is a shear beam strain gauge transducer of more than 1.5 kg (including the connector) and a diameter of 105 mm. this transducer hardly fits into the squared air bearing of 108 mm clearance and thus clearly marks the maximum size for the 20 kn shock force calibration device.
former tests with an even larger force transducer of similar design (interface 1032 / 225 kn), performed at the 250 kn shock calibration device, showed that the shock response can be predominantly influenced by a comparably low coupling resonance [2], [6], which denotes the vibration of the transducer against its fixation on the reacting mass body. for the small hbm u9b with its thin threaded bolt connecting to the base adapter, similar behaviour might also be expected.

2.3. shock force measurements

typical examples of measured shock force signals are shown in figure 3. the small hbm u9b / 1 kn measured a very smooth pulse without post-impact signal ringing. in contrast, the much heavier interface 1610 / 2.2 kn responded with a shorter pulse with superposed oscillations and prominent signal ringing. the plots demonstrate the possible variety of responses in the time domain: different types of transducers with their specific mechanical adaptations can respond quite differently. the tests used a hard metallic shock contact in order to achieve short pulses capable of exciting the transducer's mechanical resonances, i.e. no pulse-shaping material was applied between the impact surfaces. under these conditions, the impacting mass body of 10 kg generated shock pulses with a duration of about one millisecond.

figure 1. the 20 kn shock force calibration device at ptb: schematic diagram of the working principle (top), photographs (bottom) of the device with visualised beams of the laser interferometers; the inset shows the beam deflection at the spring-driven acceleration mechanism.
figure 2. mounted strain gauge force transducers: hbm u9b / 1 kn (left), interface 1610 / 2.2 kn (right).
figure 3. measured shock force pulses: force transducer hbm u9b / 1 kn (left), interface 1610 / 2.2 kn (right), all signals low-pass filtered at 20 khz.

supplementary measurement data that might be beneficial for the subsequent parameter identification can be obtained by applying an additional load button (see figure 5). such a modification of the mechanical impact configuration alters the transducer's dynamic response, as the increased mass at the force introduction bolt lowers the transducer's fundamental resonance frequency, which basically is a function of the transducer's head mass and its elastic coupling (see section 3.1). to achieve well-defined, reproducible testing conditions, the threaded connections of all mechanical couplings were fastened with a defined torque in each case. typical shock measurements obtained with the two transducers (hbm u9b / 1 kn, interface 1610 / 2.2 kn) are presented as examples in the following paragraphs. the plots show the three acquired measurement signals, which are the accelerations of both mass bodies, derived by differentiation of the vibrometer signals, as well as the transducer's force output signal. the first example in figure 4, obtained with the hbm u9b / 1 kn, demonstrates that the acceleration signals can experience strong shock-excited noise, which probably originates from modal oscillations of the cube-shaped mass bodies. these high-frequency signal components might have to be filtered correctly for the subsequent parameter identification process. even using a low-pass filter with 20 khz cut-off frequency, which is already below the expected fundamental resonance of this transducer, the noise is still very strong.
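the derivation of acceleration from the vibrometer displacement signals, described more fully in section 3.3, combines numerical differentiation with low-pass filtering. a sketch of that step on a synthetic displacement record, with an assumed sampling rate and the 20 khz butterworth cut-off quoted for the plots, might look as follows:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1.0e6                                  # assumed sampling rate, hz
t = np.arange(0.0, 5e-3, 1.0 / fs)
x = 1e-6 * np.sin(2 * np.pi * 3.7e3 * t)    # synthetic displacement, m

acc = np.gradient(np.gradient(x, t), t)     # finite-difference d2x/dt2

# 4th-order butterworth low-pass at 20 khz for noise attenuation
b, a = butter(4, 20e3 / (fs / 2), btype="low")
acc_filt = filtfilt(b, a, acc)
print(f"peak acceleration: {np.max(np.abs(acc_filt)):.0f} m/s^2")
```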
the acceleration of the impacting mass body exhibits an almost undamped vibration with a superposed beat signal. similar behaviour is shown by the signal of the reacting mass body with the mounted transducer, but the damping is stronger. in general, the repeatability of the observed high-frequency oscillation pattern is excellent, as demonstrated by three repetitive measurements (figure 4, insets): the signal responses to excitations of similar shock intensity are nearly indistinguishable in the time domain. the effect of an additional load button was investigated with the hbm u9b / 1 kn in order to shift the transducer's fundamental resonance frequency below the above-mentioned noise components of the experimental set-up. figure 5 shows the mounted force transducer with two different load buttons fixed at its threaded rod, and figure 6 illustrates the measured spectral content of the post-impact signal ringing for the respective configurations. without an additional head mass, the transducer exhibits an oscillation below 30 khz, which probably is its fundamental resonance. this peak apparently shifts towards lower frequencies with the increasing mass of a load button. in addition, a second strong resonance that depends only slightly on the load appears at about 9 khz. this resonance could be identified as a second axial vibration mode due to the mass and elasticity of the transducer housing [7], possibly further influenced by a non-rigid mechanical coupling at the base.

figure 4. shock-excited ringing of the acceleration signals: force transducer hbm u9b / 1 kn with load button of 7.0 g; the insets show 3 repetitive measurements (span 2 ms), all signals low-pass filtered at 20 khz.
figure 5. hbm u9b / 1 kn with two load buttons with spherical end faces, increased head mass of 3.1 g (left) and 7.0 g (right).
figure 6. spectral analysis of the signal ringing for the hbm u9b / 1 kn with different load buttons.
figure 7. shock signals in the time domain and frequency domain (signal ringing only) obtained with the force transducer interface 1610 / 2.2 kn, time signals low-pass filtered at 20 khz.

the last example, in figure 7, was obtained with the interface 1610 / 2.2 kn. both force and acceleration signals show a strong vibration at 3.7 khz, and the spectral analysis reveals a weaker component at 6.7 khz. with similar amplitudes of the dominant component in both signals, this behaviour actually differs from the previously mentioned tests with the larger transducer model, which may indicate different sources. given the quite different shock responses obtained with the transducers presented as examples, the identification of the transducer's dynamic parameters from shock measurements may require various analysis tools to provide satisfying results. to elaborate these mathematical methods and analysis procedures for the parameter determination from the experimental data, hundreds of shock force measurements were performed. the large number of tests will further provide an assessment of the reliability of the proposed parameter identification methods.
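the spectral content of the post-impact ringing (figures 6 and 7) can be inspected with a plain dft of the signal segment after the main pulse. the two-mode ringing below is synthetic, with frequencies chosen to mimic the interface 1610 example:

```python
import numpy as np
from scipy.signal import find_peaks

fs = 200e3                                   # assumed sampling rate, hz
t = np.arange(0.0, 20e-3, 1.0 / fs)
ring = (np.exp(-t / 5e-3) * np.sin(2 * np.pi * 3.7e3 * t)          # 3.7 khz
        + 0.3 * np.exp(-t / 3e-3) * np.sin(2 * np.pi * 6.7e3 * t))  # 6.7 khz

spec = np.abs(np.fft.rfft(ring * np.hanning(len(ring))))  # windowed dft
freq = np.fft.rfftfreq(len(ring), 1.0 / fs)

idx, _ = find_peaks(spec, height=0.1 * spec.max())
print(freq[idx])                             # expected near 3700 and 6700 hz
```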
the first trials of the parameter identification (see section 3.3) gave consistent results for the force transducer interface 1610 / 2.2 kn, which responded with strong signal ringing, cf. figure 3. on the other hand, they showed that the smooth shock pulses measured with the hbm u9b / 1 kn apparently contain insufficient information for identifying the parameters that describe the transducer's dynamic behaviour. tests with an additional head mass yielded data better suited for the parameter identification process, but it seems that this transducer is more complex, and its model description has to be refined. for this reason, the following sections will focus on the first transducer (interface 1610 / 2.2 kn); a dedicated paper presenting further research will cover the second one (hbm u9b / 1 kn) in the near future.

3. modelling and procedures for parameter identification

3.1. model of the force transducer

apart from the various influences of a coupled mechanical environment that have to be taken into account to correctly understand a dynamic measurement, the force transducer itself exhibits a dynamic behaviour which is mainly affected by inertia forces due to the motion of its internal mass structure. to describe this behaviour, the force transducer is mathematically modelled by a spring-mass-damper system (figure 8) consisting of two concentrated masses (base mass $m_b$, head mass $m_h$) which are connected by a linear spring element (stiffness $k$, damping $b$). the body motions are described by linear displacement coordinates, and the forces acting at the two sides are the input force and the reaction force. the output signal of the force transducer is assumed to be proportional to the elongation of the measuring spring. such a mechanical system responds with a characteristic resonance when subjected to a dynamic excitation. considering a transducer rigidly mounted at its base and neglecting damping, the fundamental resonance frequency given in equation (1) is a function of $k$ and $m_h$:

$f_0 = \frac{1}{2\pi}\sqrt{\frac{k}{m_h}}$ (1)

this parameter is often used to judge the dynamic suitability of a force transducer. it is mentioned in the german directive vdi/vde 2638 [8], and values are specified in some force transducer data sheets. however, to understand the dynamic measurement behaviour in an application, this single parameter is not sufficient; rather, the four model parameters $m_b$, $m_h$, $k$ and $b$ have to be known.

3.2. model of the calibration device

by expanding the basic model of the force transducer, the mechanical system of the calibration device with a mounted force transducer is modelled by a one-dimensional multi-body system which consists of a linear series arrangement of lumped masses coupled by visco-elastic springs. models of different degrees of freedom using 3, 4 and 5 model masses were investigated (figure 9) in order to account for a possible non-rigid coupling at the transducer base, as previous shock tests have shown that this mechanical coupling may significantly influence the dynamic measurement behaviour. the three models basically differ in their description of the mechanical adaptation of the transducer to the reacting mass body. the 3-mass model assumes a rigid fixation, whereas the models with 4 and 5 masses describe an elastic coupling. for the 4-mass model, the adapter mass is split and allocated to the neighbouring masses depending on the specific mechanical set-up. in each case, the coupling of an optional load button is assumed to be rigid.

figure 8. basic model of a force transducer.
figure 9. models of the shock force calibration device: 3-mass model (top), 4-mass model (middle), 5-mass model (bottom).
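equation (1) can be evaluated directly. with a measuring-spring stiffness of the order later identified for the interface 1610 (table 1) and a purely hypothetical head mass of 0.35 kg, the result lands near the observed 3.7 khz resonance:

```python
import numpy as np

def fundamental_resonance(k, m_h):
    """equation (1): fundamental resonance of the basic transducer
    model with stiffness k (n/m) and head mass m_h (kg)."""
    return np.sqrt(k / m_h) / (2.0 * np.pi)

# k of the order of table 1; the head mass is an assumed value
print(f"{fundamental_resonance(187e6, 0.35):.0f} hz")   # ~ 3.7 khz
```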
the assumption of a rigid load button coupling has to be revised if the resulting coupling resonance between the load mass and the transducer head becomes important. this is the case for a considerably large mass and/or a small coupling stiffness, which will result in a low resonance frequency that cannot be neglected any further. dynamic force calibrations with sine excitations [9] typically apply large load masses, and the corresponding model descriptions consider an elastic coupling, i.e. a spring element is introduced between the load mass and the transducer head. the model description of the shock force calibration device may have to be modified accordingly if necessary. the dynamic behaviour of the model components is expressed by a system of linear ordinary differential equations (ode) derived from the equilibrium of forces at each mass element. the ode system has the form

$M\ddot{x} + B\dot{x} + Kx = F$ (2)

where $x$, $\dot{x}$ and $\ddot{x}$ are the motion vectors (displacement, velocity, acceleration), $M$, $B$ and $K$ denote the matrices for mass, damping and stiffness, and $F$ is the load vector. the mass matrix is diagonal, while the damping and stiffness matrices are tri-diagonal with non-zero elements on the main diagonal and the upper and lower sub-diagonals. as an example, the components of the 5-mass model are given in the following equations:

$M = \mathrm{diag}\left(m_1, m_2, m_3, m_4\right)$ (3)

$B = \begin{pmatrix} b_1 & -b_1 & 0 & 0 \\ -b_1 & b_1 + b_2 & -b_2 & 0 \\ 0 & -b_2 & b_2 + b_3 & -b_3 \\ 0 & 0 & -b_3 & b_3 \end{pmatrix}$ (4)

$K = \begin{pmatrix} k_1 & -k_1 & 0 & 0 \\ -k_1 & k_1 + k_2 & -k_2 & 0 \\ 0 & -k_2 & k_2 + k_3 & -k_3 \\ 0 & 0 & -k_3 & k_3 \end{pmatrix}$ (5)

$x = \left(x_1, x_2, x_3, x_4\right)^{T}$ (6)

$F = \left(F_1(t), 0, 0, 0\right)^{T}$ (7)

the ode system of the 5-mass model consists of only four equations, as the shock contact, i.e. the coupling between the impacting mass and the force transducer (model mass $m_1$), is not expressed by another elastic coupling element but rather by the measured shock force $F_1(t)$. differing from the basic model of the force transducer (figure 8), the models of the shock calibration device substitute the reaction force with the corresponding inertia force calculated by the ode system, where the motion of the reacting mass body mb2 denotes the input quantity. some of the model parameters, such as the mass values of the two mass bodies and the adaptation parts, can be measured before the actual calibration. the remaining parameters, in particular those of the force transducer, have to be inferred from the measured dynamic calibration data or from cad data.

3.3. parameter identification

the measurement data gained from the calibration experiment are the displacements of both mass bodies and the corresponding force transducer output signal. using numerical differentiation by means of finite differences, together with low-pass filtering for noise attenuation, the acceleration data of the mass bodies are calculated. the measured acceleration of the impacting mass body is then used to calculate the system input force $F_1(t)$. in order to account for the system change during the measurement interval due to the transient shock contact, we employed a time window to suppress post-impact signal components. for the fit, the transducer output signal and the acceleration of the reacting mass body at the time instants $t_0, \ldots, t_n$ are considered:

$y_i = \left(F(t_i),\ \ddot{x}_4(t_i)\right), \quad i = 0 \ldots n$ (8)

assuming normally distributed measurement noise, we carry out a maximum-likelihood estimation of all parameters by means of non-linear least squares in the time domain. to this end, a numerical integration method for the differential equations (2) is employed together with the nelder-mead non-linear simplex method for optimisation [10, 11]. the benefit of this optimisation method is that it does not require the calculation of derivatives.
the optimisation merit function is

$V(\theta) = \left\| y - G\!\left(\theta, x_0\right) \right\|^{2}$ (9)

where $\left\|\cdot\right\|$ is the euclidean norm, $\theta$ the vector of the sought stiffness and damping parameters, $y$ the measured data and $x_0$ the ode initial values. in this approach, the mass values of all model components are assumed to be known. the design function $G(\cdot)$ denotes the numerical integration of the ode (2) corresponding to the considered system model and the calculation of the resulting system output data. for the 5-mass model, $G(\cdot)$ is calculated as

$G\!\left(\theta, x_0\right) = \left(k_1\left(x_1 - x_2\right),\ \ddot{x}_4\right)$ (10)

with the two ode trajectories denoting the force signal as a function of the elongation of the measuring spring, and the acceleration of the reacting mass body mb2 calculated from (2). as the ode integration is part of the evaluation of the function (10), the calculation of derivatives is a cumbersome analytical and numerical task: very high precision in the ode integration would be required to obtain sufficiently precise finite differences for a derivative-based numerical optimisation. for this reason, the derivative-free nelder-mead simplex method was applied for the parameter estimation. results of the parameter identification for the transducer interface 1610 / 2.2 kn are presented in the following. the attenuation of noise in the calculation of the acceleration from the displacement data was carried out by a 4th-order butterworth low-pass filter with 12 khz cut-off frequency. using the model parameters estimated by each model, figure 10 displays the calculated trajectories of the two elements (force, acceleration) of the function $G(\cdot)$ and compares them with the corresponding measurement data. for more clarity, the fit residuals are additionally plotted. regarding the 4-mass model, the adapter mass was fully allocated to the reacting mass body, as the adapter is more rigidly coupled to the reacting mass than to the transducer, comparing the different mounting conditions (contact area, thread size, mounting torque). it is seen that the 3-mass model yields considerably large deviations, in particular in the acceleration. on the other hand, the 4-mass and 5-mass models both show much smaller deviations, which demonstrates that the coupling of the transducer to the reacting mass body cannot be assumed to be completely rigid for the considered mechanical set-up. as both models achieve a nearly identical fit quality, the chosen 4-mass model suffices to describe the system's behaviour. according to figure 7, there are just two spectral components in the frequency range of interest, which corresponds to such a model. this behaviour is further confirmed by a short-time dft analysis of the measured ringing in the acceleration after the main pulse (figure 11). table 1 summarises the identified model parameters for the particular data set (as in figures 10 and 11) using the models with 3, 4 and 5 masses. for neglected damping, the corresponding resonance frequencies (see table 2) are calculated from the eigenvalues of the characteristic system matrix of (2) as

$f = \frac{1}{2\pi}\sqrt{\mathrm{eig}\!\left(M^{-1}K\right)}$ (11)

it is worth noting that the stiffness of the coupling is ten times greater than that of the transducer's measuring spring. all three models reproduce the lowest resonance at about 3.7 khz well.

figure 10. comparison of modelled and measured shock response signals for the interface 1610 / 2.2 kn: force signal (top) and corresponding fit residuals, acceleration of the reacting mass body and corresponding fit residuals (bottom).
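to make the fitting procedure concrete, the sketch below applies the same ingredients, ode integration of (2), the merit function (9) and the derivative-free nelder-mead search, to a reduced two-mass stand-in for the device model. masses, pulse shape and starting values are all assumptions for illustration; the paper's actual fit uses the 4-mass model:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

m1, m2 = 0.35, 10.0                        # assumed head mass and mb2, kg
t = np.linspace(0.0, 4e-3, 400)
f_in = 2000.0 * np.exp(-((t - 1e-3) / 3e-4) ** 2)   # ~1 ms shock pulse, n

def simulate(k1, b1):
    """integrate m*x'' + b*x' + k*x = f, cf. (2), and return the model
    output of (10): spring-elongation force and acceleration of m2."""
    def rhs(ti, y):
        x1, x2, v1, v2 = y
        f_el = k1 * (x1 - x2) + b1 * (v1 - v2)
        return [v1, v2, (np.interp(ti, t, f_in) - f_el) / m1, f_el / m2]
    sol = solve_ivp(rhs, (t[0], t[-1]), [0, 0, 0, 0], t_eval=t, rtol=1e-8)
    x1, x2, v1, v2 = sol.y
    return np.concatenate([k1 * (x1 - x2),
                           (k1 * (x1 - x2) + b1 * (v1 - v2)) / m2])

y_meas = simulate(187e6, 154.0)            # synthetic "measurement"

def merit(th):                             # merit function, cf. (9);
    # parameters are rescaled so the simplex sees comparable magnitudes
    return np.sum((y_meas - simulate(th[0] * 1e8, th[1] * 100.0)) ** 2)

res = minimize(merit, x0=(1.5, 1.0), method="Nelder-Mead")
k1_fit, b1_fit = res.x[0] * 1e8, res.x[1] * 100.0
print(f"k1 = {k1_fit:.3g} n/m, b1 = {b1_fit:.3g} kg/s")

# undamped resonance from the eigenvalues of M^-1 K, cf. (11)
K = np.array([[k1_fit, -k1_fit], [-k1_fit, k1_fit]])
w2 = np.linalg.eigvals(np.linalg.inv(np.diag([m1, m2])) @ K)
print(f"f = {np.sqrt(w2.real.max()) / (2 * np.pi) / 1e3:.2f} khz")
```

on this noise-free toy problem the simplex should recover the assumed stiffness and damping approximately; rescaling the two unknowns to comparable magnitudes is what keeps the derivative-free search well conditioned.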
figure 11. short-time dft of the ringing in the measured acceleration $a_2$, shock calibration of the interface 1610 / 2.2 kn; the diagram displays the colour-coded acceleration amplitude over time.

table 2. calculated resonance frequencies of the different models.

resonant frequency | unit | 3-mass model | 4-mass model | 5-mass model
f1 | khz | 3.687 | 3.687 | 3.687
f2 | khz | - | 6.695 | 6.694
f3 | khz | - | - | 35.62

table 1. identified model parameters of the interface 1610 / 2.2 kn using different models, analysis of one shock pulse.

parameter | unit | 3-mass model | 4-mass model | 5-mass model
k1 | 10^6 n/m | 166 | 187 | 186
d1 | kg/s | 109 | 154 | 154
k2 | 10^6 n/m | - | 1743 | 2867
d2 | kg/s | - | 539 | 154
k3 | 10^6 n/m | - | - | 4680
d3 | kg/s | - | - | 1.12

figure 12. identified model parameters of the interface 1610 / 2.2 kn obtained with the 4-mass model, analysis of 27 shock pulses.

the additional resonance of the 5-mass model falls well outside the considered frequency range, which further confirms that the modelling of the coupling stiffness at both sides of the adapter (cf. figure 9) does not show any benefit. using the 4-mass model, a first analysis of 27 repetitive shock measurements of about 2 kn peak value gives promising results. the identified parameter values of the force transducer interface 1610 / 2.2 kn (spring stiffness $k_1$, damping $d_1$) and of its coupling ($k_2$, $d_2$) are plotted in figure 12. the relative standard deviations are quite small, in particular those of the stiffness parameters. the larger variance of the damping parameters is probably due to their comparably small influence on the dynamic behaviour of the shock force calibration device, and to the fact that the parameter identification process based on the available measured data is therefore less sensitive.

4. outlook

at the moment it is far too early to specify associated uncertainties for the identified parameters, due to the numerous factors that may have an influence on the parameter identification process, e.g. the fitting methods, data length, weighting, windowing and filtering. in addition, the remaining deviations observed in the time domain still need some explanation. in any case, the investigated models cannot reproduce the additional spectral peaks at higher frequencies (cf. figure 7). in the end, all these topics have to be covered in future research. the parameter identification presented as an example was obtained with shock pulses of considerably strong post-impact signal ringing. some first trials on the identification of smooth pulses with almost no ringing (cf. figure 3, left) indicate that the excited modal oscillations presumably carry the crucial information needed to unambiguously identify the model parameters of a multi-body system having more than one degree of freedom. these findings, as well as future investigations on this topic, will be presented in a dedicated publication. the analysis of various measurements with an identical force transducer under modified measurement conditions (e.g. pulse intensities, mechanical adaptations, various mounting conditions) will eventually allow a verification of the suitability of the mechanical model and of the data analysis methods applied. in particular, it will demonstrate the influence of various disturbances, e.g. from high-frequency modal oscillations [2] not explained by the chosen model, on the estimation of the model parameters of interest.
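equation (11) is straightforward to evaluate numerically. the sketch below reproduces the order of magnitude of the tabulated resonances for the 4-mass model: the two stiffness values come from table 1, while the three modelled mass values are assumptions made only for the illustration.

```python
import numpy as np

# stiffness values of the 4-mass model from table 1 (in N/m);
# the three modelled mass values are hypothetical
k1, k2 = 187e6, 1743e6
m1, m2, m3 = 0.3, 0.8, 10.0                     # assumed masses in kg

M = np.diag([m1, m2, m3])
K = np.array([[ k1, -k1,       0],
              [-k1,  k1 + k2, -k2],
              [  0, -k2,       k2]])

# eq. (11): resonance frequencies from the eigenvalues of M^-1 K
w2 = np.linalg.eigvals(np.linalg.solve(M, K))   # squared angular frequencies
f = np.sqrt(np.clip(np.sort(w2.real), 0, None)) / (2 * np.pi)
print(f / 1e3)   # in kHz; the first entry (~0) is the rigid-body mode
```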
the result of these investigations will then be incorporated into an extended system model if necessary.

5. conclusions

this paper presented new research on the model-based dynamic calibration of force transducers by using shock excitations. by analysing experimental shock measurements obtained with force transducers of different structural design, size, weight and mechanical coupling, the suitability of the mathematical models and of the applied methods for the estimation of the transducer's model parameters was described and discussed. a first parameter identification applied to shock data obtained with a force transducer that responds with strong shock-excited ringing gave satisfying results. future research will investigate the numerous influences on the parameter identification process, as well as the application of the proposed methods to other types of force transducers, e.g. those that respond without strong signal ringing. it will also focus on the comparison with results from sinusoidal excitation experiments and, finally, on the evaluation of the measurement uncertainties for parameter identification.

references

[1] c. bartoli et al., "dynamic calibration of force, torque and pressure sensors", proc. of imeko tc3, tc5 and tc22 int. conf., cape town, south africa, 2014.
[2] m. kobusch, l. klaus, t. bruns, "model-based analysis of the dynamic behaviour of a 250 kn shock force calibration device", xx imeko world congress, busan, republic of korea, 2012.
[3] m. kobusch, a. link, a. buss, t. bruns, "comparison of shock and sine force calibration methods", proc. of imeko tc3 & tc16 & tc22 int. conf., merida, mexico, 2007.
[4] m. kobusch, t. bruns, l. klaus, m. müller, "the 250 kn primary shock force calibration device at ptb", measurement, vol. 46 (2012), no. 5, pp. 1757-1761.
[5] m. kobusch, t. bruns, "uncertainty contributions of the impact force machine at ptb", proc. of the xviii imeko world congress, rio de janeiro, brazil, 2006.
[6] m. kobusch, "influence of mounting torque on the stiffness and damping parameters of the dynamic model of a 250 kn shock force calibration device", 7th workshop on analysis of dynamic measurements, paris, france, 2012.
[7] m. kobusch, s. eichstädt, l. klaus, t. bruns, "analysis of shock force measurements for the model-based dynamic calibration", 8th workshop on analysis of dynamic measurements, turin, italy, 2014.
[8] vdi-richtlinie vdi/vde/dkd 2638, kenngrößen für kraftaufnehmer - begriffe und definitionen, beuth, 2006.
[9] c. schlegel et al., "traceable periodic force measurement", metrologia, vol. 49 (2012), pp. 224-235.
[10] j. nocedal, s. j. wright, numerical optimization, springer series in operations research, 1999.
[11] s. eichstädt, c. elster, "reliable uncertainty evaluation for ode parameter estimation - a comparison", j. phys. 490, 1, 2014.

acta imeko, july 2012, volume 1, number 1, 77-84, www.imeko.org

design of an efficient mobile measurement system for urban pollution monitoring

andrea bernieri, domenico capriglione, luigi ferrigno, marco laracca
diei, università di cassino e del lazio meridionale, via g. di biasio 43, 03043, cassino (fr), italy

keywords: pollution monitoring; mobile sensor networks; wireless sensor network; distributed measurement systems.
citation: andrea bernieri, domenico capriglione, luigi ferrigno, marco laracca, design of an efficient mobile measurement system for urban pollution monitoring, acta imeko, vol. 1, no.
1, article 15, july 2012, identifier: imeko-acta-01(2012)-01-15
editor: pedro ramos, instituto de telecomunicações and instituto superior técnico/universidade técnica de lisboa, portugal
received january 16th, 2012; in final form may 28th, 2012; published july 2012
copyright: © 2012 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work was supported by the emilia romagna region (italy), call prriitt 2008, measure 3.1, action a, and 3dinformatica ltd.
corresponding author: luigi ferrigno, e-mail: ferrigno@unicas.it

abstract: in recent years, pollution monitoring in urban areas has become one of the most critical issues for local public authorities, which wish (or must) verify that pollution levels do not exceed limits considered unsafe or regulated by local laws. generally, pollution monitoring is performed by using measurement stations located at a few points of the region of interest, since these stations are generally characterized by high costs, weights, and dimensions. the pollution levels over the remaining area are then predicted by means of suitable interpolation models. due to the great variety of urban scenarios, it becomes very difficult to obtain reliable pollution levels in areas in which the measurements have not been taken directly but only predicted. consequently, pollution monitoring can suffer from a lack of the reliable information indispensable for the actuation of proper environmental management policies. in this framework, this paper proposes a mobile measurement system for the real-time monitoring of environmental pollution over urban areas. the proposed approach is based on the use of a set of vehicles, typically employed for public transportation inside the urban area, equipped with the proposed mobile measurement system, which measures, stores, and transmits the acquired data to a remote supervisor unit somewhere along the path followed by the vehicles. particular attention has been paid to the definition of the metrological characteristics of the measurement devices, with the aims of complying with the accuracy requirements of the applicable european directives and of selecting a suitable trade-off between accuracy and cost. the experimental measurement campaign performed in a suitable urban scenario has confirmed the goodness of the proposed system.

1. introduction

the timely and reliable knowledge of environmental pollution in urban areas has become, in recent years, one of the most critical issues for local public authorities, not least for its non-negligible impact on the political and social choices to be made. in particular, many local authorities wish (or must) offer continuous services for the determination of pollution levels, especially when these exceed limits that are considered unsafe or that are regulated by local laws [1]. in this scenario, the availability of timely, reliable data, well spread over the territory, on which to base suitable management policies for the urban community makes it possible to develop several actions for safeguarding the health of the population, for optimizing resources, and for controlling industrial and anthropogenic emissions. the typical set-up adopted to retrieve information on air quality is based on fixed measurement units, positioned at suitable points of the area under test. these measurement units are characterized by good metrological performance, somewhat counterbalanced by high costs, weights, dimensions, long measurement times, and power consumption requirements. values of the pollutants in different and neighbouring areas are then obtained by applying suitable predictive mathematical models. the position of these units must be appropriately designed to enable a reliable extension of the collected information throughout the range of interest. however, in cases where the hilly terrain or the urban structure does not allow an extension of the information to wide areas of territory, additional monitoring stations should be employed. typically, this circumstance leads to a meaningful increase of the monitoring cost, which sometimes becomes prohibitive for the interested authority. if there is a need for accurate measurements in areas where fixed stations are not installed, some innovative solutions present in the literature suggest the use of mobile (i.e. portable) measurement systems that, indeed, re-propose the use of fixed
units made transportable through their installation on trucks or cars [2]-[4]. thus, measurement campaigns can be implemented in wider parts of the territory, but the simultaneity of the recorded data is closely related to the total number of systems used for surveying. in this context, it thus becomes of interest to have mobile (i.e. moving) analysis systems that measure the environmental pollution in wider parts of a territory according to defined paths of the vehicles on which they are installed. in this way, the cost of urban pollution monitoring can be substantially reduced, avoiding the use of many more expensive fixed or transportable monitoring units. by adopting a suitable data management technique, it is then possible to perform the integration of the information in "clusters" of data. this approach makes it possible, with a small number of devices, to achieve a full-coverage, accurate, and timely provision of pollution levels in wide areas. to these aims, some measurement architectures have been presented in the literature [5]-[7]. however, they do not fully consider the issue of the reliability of the collected data from the standpoint of measurement uncertainty and of the maintenance of the metrological characteristics of the units during the whole operative cycle; this raises concerns about the reliability of such solutions. thanks to previous experience in the field of wireless measurement systems and sensing devices [8], wireless sensor networks [9] and uncertainty estimation [10], in this paper the authors propose an effective mobile measurement system for the real-time monitoring of environmental pollution over urban areas [11]. key features of the proposed system, conceived to be located on public urban ground transportation vehicles, are its significantly low cost, the high portability of the measurement units, the autonomous power supply, and a measurement strategy able to minimize the measurement uncertainty. to these aims, suitable interpolation techniques aggregating data acquired over the spatial and time domains can be applied.
in the following, starting from the requirements of directives and laws operating in the european countries, the metrological specifications for the sensing devices are defined. then, the architecture of the proposed measurement system is presented, together with some details about the measurement unit, the considered sensors, and the software architecture. finally, to validate the proposed system, a measurement campaign has been performed in a suitable urban area.

2. european regulation for air pollutants

this section summarizes the legal regulations and limit values for the air pollution parameters which have been considered to drive the selection of the transducers and the design of the measurement system. in europe, directive 1999/30/ec of 22 april 1999, relating to ambient air quality limit values, regulates the presence of air pollutants [12]. the considered quantities are sulphur dioxide (so2), nitrogen dioxide (no2), nitrogen oxides (no), particulate matter (pm) and lead. in 2000/69/ec the ambient air quality limit values for benzene (c6h6) and carbon monoxide (co) are given [13]. these directives are then to be acknowledged by each state through suitable decrees; as an example, in italy the italian ministerial decree number 60 of 02/04/2002 has acknowledged these directives. typically, standards and laws express the limits of the pollutants to which the human body can be exposed in milligrams per cubic metre (mg/m3) or micrograms per cubic metre (µg/m3), and impose limitations on the period of exposure. table 1 summarizes these limits for the main air pollutants, and reports them also in parts per million (ppm) or parts per billion (ppb), quantities often used in the transducer specifications. the limits reported in the table generally refer to a mean value in a considered exposure period; on the contrary, alert thresholds are usually given in terms of the maximum number of events in which the specific pollutant may exceed a corresponding limit in a defined time period. in the table, for the unit conversion, a reference pressure equal to 101325 pa and a reference temperature equal to 298.16 k were considered.

3. the measurement system

the general architecture of the proposed system is sketched in figure 1. it is based on two main classes of devices:
- mobile measurement unit (mmu): designed to be hosted on board vehicles, such as urban service buses or service vehicles of the local authority;
- central supervisor system (css): designed to be located in a fixed position and devoted to remotely accessing the measurement data.
the mmus, whose architecture and components are described in detail in section 4, perform the acquisition, the collection, and the transmission of the measured data (environmental quantities and geo-localization information) to the css.

table 1. pollutant limits according to the european directives.

pollutant | limit [mg/m3] | limit [ppm] | exposure period
co | 10 | 8.7 | daily
no2 | 0.2 | 0.11 | hourly
no2 | 0.03 | 0.016 | annual
no2 | 0.4 | 0.21 | alert threshold
so2 | 0.35 | 0.13 | hourly
so2 | 0.125 | 0.048 | daily
so2 | 0.02 | 0.008 | annual
so2 | 0.5 | 0.19 | alert threshold
o3 | 0.18 | 0.084 | hourly
c6h6 | 0.005 | 0.144 | daily

figure 1. general architecture of the measurement system.
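the mg/m3-to-ppm conversion used in table 1 follows from the ideal-gas molar volume at the stated reference conditions (101325 pa, 298.16 k); a minimal sketch, in which the molar masses are the only inputs not given in the text:

```python
R = 8.314          # J/(mol K), universal gas constant
T = 298.16         # K, reference temperature stated in the text
p = 101325.0       # Pa, reference pressure stated in the text
Vm = R * T / p     # molar volume in m^3/mol (~0.02446 m^3/mol)

MOLAR_MASS = {"co": 28.01, "no2": 46.01, "so2": 64.07, "o3": 48.00}  # g/mol

def mg_per_m3_to_ppm(c_mg_m3, species):
    """Convert a mass concentration (mg/m^3) to a volume mixing ratio (ppm)."""
    return c_mg_m3 * Vm * 1000.0 / MOLAR_MASS[species]

print(round(mg_per_m3_to_ppm(10, "co"), 2))   # ~8.7 ppm, matching table 1
```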
the css is a workstation equipped with the necessary devices and software for retrieving, storing, and viewing the measurement data collected by the mmus, as well as for managing the data elaboration. in particular, using appropriate software modules suitably developed, the css ensures:
- the management of the involved mmus by means of a graphical i/o user interface for enabling/disabling, controlling, configuring and verifying the mmu operation;
- the representation of each mmu position and its measurement results on a digital cartography;
- the storage of the collected data in time series for their statistical analysis over time;
- the dynamic integration of the data collected from different mmus, with reference to mathematical models of pollution diffusion in the monitored area, and data fusion techniques to correlate data relating to the same location and detected by different mmus;
- the availability of the processed data on the internet, for free use or for controlled use by the enabled operators.

4. the mobile measurement unit

the core of the proposed system is the mobile measurement unit (mmu). it is designed and realized according to a modular architecture, in which various sensors can be combined as required by the specific needs of the measurement purposes. the sensor assembly is also designed to be easily removed from the rest of the unit, in order to perform the checks and the periodic maintenance necessary to ensure the proper traceability and accuracy of the measurements. with reference to figure 2, the mmu is composed of four main sub-modules: (i) the probe head, (ii) the sensing, conditioning and acquiring devices, (iii) the processing and memory unit, and (iv) the data communication modules. figure 3 shows a picture of the realized mmu.

4.1. the probe head

a schematic of this device is reported in figure 4. it is composed of a package in which the sensing elements are suitably placed, and of a controlled air aspiration system that is able to guarantee the desired air flux whatever the vehicle velocity is. in particular, the sampling air port is placed in the opposite direction with respect to the vehicle direction, and the air flow is fixed by the aspiration system. inside the probe head, some fins able to impose a suitable air velocity for each considered sensing element are also placed. this choice allows the reliability of the mobile measurements to be improved.

4.2. the sensing, conditioning, and acquiring devices

the sensing and conditioning circuits are developed to measure the following quantities: air temperature, air humidity, and the concentrations of carbon monoxide (co), nitrogen dioxide (no2), sulphur dioxide (so2), ozone (o3), and benzene (c6h6). in a future version of the measurement unit, also fine particles (pmx) and other quantities of chemical interest will be considered.

figure 2. architecture of the proposed mobile measurement unit.
figure 3. the realized mobile measurement unit (mmu).
the sensors are selected mainly on the basis of range, accuracy, and measurement rate, according to the requirements imposed by the european directives and the limits shown in table 1. in particular, for the pollutant quantities, kanomax aeroqual high spec sm50 sensors are used [14]. table 2 shows the main metrological characteristics of the selected sensors. all sensors have an internal resistance used to bring the conversion head to a temperature that avoids any influence from external environmental factors, such as atmospheric temperature and humidity, at the time of the readings. moreover, they provide a current output, typically loaded with a resistance ranging from 100 to 200 ω, so that the resulting voltage falls within the 0-5 v input range of the analog-to-digital converter. a suitable on-board kanomax aeroqual adc with a 0-5 v input range and 12 bit resolution is adopted; in addition, this converter also has an output interface based on the rs232/rs485 standards.

4.3. the processing and memory unit

it is composed of a microprocessor and data storage hardware. the advantech ark 1388 industrial pc was used for the purpose; it is characterized by small dimensions and consumption and by high robustness and good i/o capabilities [15]. the microprocessor manages the local unit and controls all the measurement and data acquisition tasks. the data are geo-referenced by means of a high-resolution gps, embedded on the industrial pc, together with a corrsys-datron l-400 odometer and a landmark gladiator lmrk10 inertial platform, which allow retrieving the geographic position even in areas not covered by the gps signal. the system also provides the local data storage necessary to save the measured data between two successive data transmissions to the css.

4.4. the data communication modules

the mmu has been equipped with suitable wireless mobile communication modules. in particular, a wifi tx/rx system and a umts/gprs/gsm modem, both embedded on the industrial pc, are used. in this way, the measured data can be transmitted to the css in two alternative ways: a) by means of short-range wifi connections installed at predefined locations (e.g. the bus poles), in which case the connection between the bus pole and the css is performed by a wired lan connection; b) by means of a long-range umts/gprs/gsm wireless connection. solution a) is a reliable and free-of-charge connection that guarantees high data rates but, due to the short range of the communication link, it is usable only at the predetermined locations (bus poles). even if the measurement data are collected in real time by the mmus, they are retrievable by the css only off-line, i.e. when the vehicle is close to the bus pole. consequently, a time delay occurs in the management of the pollution data and of the measurement system, even if it is usually acceptable for pollution monitoring purposes. solution b) performs an on-line communication over long-range distances (if there is adequate cellular network coverage), but it is generally characterized by a low data rate and a not-free-of-charge connection. it is therefore used only if the data must be transmitted on-line (e.g. alarms) or if solution a) is temporarily out-of-order.
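the selection policy between the two links can be summarized in a few lines; the following python sketch is only an illustration of the policy just described (the function and its arguments are invented names, and the real units implement this logic in labview firmware):

```python
def choose_link(at_bus_pole: bool, wifi_ok: bool, is_alarm: bool) -> str:
    """Pick the transmission path for measurement data, following
    section 4.4: wifi at the bus poles is preferred (free of charge,
    high data rate); the cellular link serves alarms and wifi outages."""
    if is_alarm:
        return "umts/gprs/gsm"                       # on-line notification required
    if at_bus_pole:
        return "wifi" if wifi_ok else "umts/gprs/gsm"  # cellular as backup at the pole
    return "store-locally"                           # keep data until the next pole

# hypothetical usage
print(choose_link(at_bus_pole=True, wifi_ok=False, is_alarm=False))
```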
5. the software architecture

the measurement and control software has been developed in the labview environment. the software is based on a client-server architecture and is constituted by different modules running on different devices. in particular, two main software units can be identified: i) the program running on the mmu (which operates as the "measurement server"), and ii) the programs running on the external remote devices (hand-held units and bus pole units, which act as clients). figure 5 shows the general architecture of the developed software, also considering the wifi connection of the mmu at the bus pole. as for the measurement server, all functions (data acquisition from sensors and external devices, storing and transmission) have been implemented in the firmware of the embedded pc on the mmu. once the device has been powered on, it goes into a standby mode waiting for configuration (by means of a control software running on the hand-held devices) and/or for the download of the stored data (by means of a download software running on the bus poles). after the configuration phase, the whole operation is completely automatic. the firmware acquires the data from the transducers and from the devices devoted to the geo-localization (gps, odometer, and inertial platform). these data are stored in the internal memory unit (as suitable files). in addition, the firmware manages the connection and the communication with the client devices. during the acquisition phase, a suitable software module allows the on-line upload of the stored data to the client devices. as for the clients, two different software applications have been developed. the first one is the control software (usually running on the hand-held units) that allows the connection to the mmu and, depending on the mmu's status, sends the configuration and/or start command (if the mmu is in standby mode) and, when required, the stop command (if the mmu is in acquisition mode). the second one is the download software (running on a suitable unit placed on the bus poles) that continuously scans the wifi network to connect with the mmu, and then downloads all the data previously stored in the mmu internal memory; it manages the data download in both the mmu standby and acquisition modes.

figure 4. sketch of the realized probe head.

6. strategies for pollution data evaluation

to provide a satisfying characterization of the environmental pollution, it is possible to adopt several approaches for the data sampling and interpolation over a regional scale [16], [17]. generally, a regional distribution of the pollutants can be achieved by means of either predictive models (which are characterized by a good spatial coverage and poor accuracy) or careful measurements suitably post-processed with interpolation methodologies. in the literature, we can find a number of interpolation methods developed for different applications [17], [18]. starting from punctual data sets, their aim is to obtain spatial pollutant concentration fields. these methods can be classified into two main categories, namely deterministic and geo-statistical. the main difference between these methods is that the former are based only on punctual measurements, whereas the latter exploit the statistical spatial structure of the data, providing also an estimate of the variance of the interpolated values.
more specifically, the methods belonging to these categories are:
(a) deterministic: classic interpolation methods such as voronoi diagrams, thiessen polygons, delaunay triangles, inverse distance weighting (idw), radial basis functions (rbf), bier;
(b) geo-statistical: interpolation methods based on the knowledge of the statistical structure of the aleatory field of the points of interest, such as optimum interpolation, kriging (in its various existing versions), and the geo-statistical multivariate model.
in the following, a deterministic method, namely the voronoi diagram, and a geo-statistical method, namely ordinary kriging, are used to present the results achieved by means of the proposed measurement system.

7. experimental results

in order to evaluate the performance of the proposed measurement system, a number of measurement sessions were carried out. for the sake of brevity, only the results related to co, no2, and o3 will be reported in the following. the measurement sessions had the following aims:
(i) validating the mmu performance by means of a comparison between the results obtained from the mmu and those obtained from calibrated traditional measurement instrumentation positioned at the same location; to this aim, a dasibi 2108 chemiluminescent nitrogen oxide analyzer, a dasibi 2003 ozone analyzer, and a ml9830b carbon monoxide analyzer were used as reference instruments;
(ii) evaluating the distribution of pollutants in an extended area, by means of a great number of measurements acquired along predefined paths performed by a service vehicle on which the mmu was installed; the measurements were then elaborated by means of suitable procedures, running on the css, which perform the elaboration of the voronoi diagrams and the kriging variograms.

figure 5. the architecture of the developed software.

table 2. main metrological characteristics of the selected sensors (ldl: lower detection limit).

measured quantity | calibrated range [ppm] | maximum exposure [ppm] | ldl [ppm] | accuracy | resolution [ppm] | response time [s] | operational temperature [k] | operational rh [%]
co | 0 to 100 | 200 | 0.5 | ±5 ppm | 0.1 | <150 | 273 to 313 | 5 to 95
no2 | 0 to 0.2 | 0.5 | 0.001 | ±0.01 ppm (0 to 0.1 ppm); ±10 % (0.1 to 0.2 ppm) | 0.001 | <180 | 273 to 313 | 30 to 70
so2 | 0 to 10 | 20 | 0.2 | ±0.5 ppm | 0.01 | <60 | 253 to 313 | 5 to 95
o3 | 0 to 0.5 | 1 | 0.001 | ±0.008 ppm (0 to 0.1 ppm); ±10 % (0.1 to 0.5 ppm) | 0.001 | <60 | 268 to 313 | 5 to 95
c6h6 | 10 to 1000 | n/a | 10 | 10 ppm | 0.1 | <20 | 268 to 313 | 5 to 95
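among the deterministic methods listed in section 6, inverse distance weighting is the simplest to illustrate; the sketch below is a generic textbook implementation, not the processing used by the css (the paper presents voronoi and kriging results instead), and the sample coordinates, values and power parameter are invented for the example.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_query, power=2.0):
    """Inverse distance weighting: estimate pollutant levels at query
    points as distance-weighted means of the punctual measurements."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)            # avoid division by zero at sample points
    w = 1.0 / d ** power
    return (w @ z_obs) / w.sum(axis=1)

# hypothetical co measurements (x, y in km; z in ppm) along a bus path
xy = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.1], [1.5, 1.0]])
z = np.array([0.8, 1.2, 0.9, 1.4])
grid = np.array([[0.5, 0.5], [1.8, 0.6]])
print(idw(xy, z, grid))
```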
to simplify this preliminary evaluation phase, the on-line data transmission between the mmu and the css was not performed, and the data collected during the considered paths were locally stored on the mmu and then downloaded to the css at the end of each path. the measurements were collected on suitable paths in the urban area of the city of bologna, italy, using a commercial vehicle. for the considered pollutants, figure 6 shows the results obtained in the (i) tests, performed during a period of about 60 hours. the results show a good agreement between the reference values and those obtained by means of the mmu, with a mean deviation always less than 10 %. these results are in agreement with the metrological performance of the employed sensors. figure 7 and figure 8 show the considered pollutant distributions in the urban area using voronoi diagrams and kriging variograms, respectively. in both figures, the circles indicate the punctual measurement data acquired by means of the mmu and geo-referenced using the google map of the city. the results shown are computed as the mean of seven different paths, performed in a period of one week of stable weather conditions (absence of wind and rain). also the single-day and single-path results are available, but they do not show substantial differences. in figures 7 and 8, the red triangles indicate the positions of two traditional fixed measurement units. it is evident that the proposed mobile measurement system is capable of highlighting some urban areas in which the pollutant concentration is very high. using the traditional approach, in these areas the pollution information may be neglected or obtained by means of prediction models, which cannot assure the same accuracy as the punctual measurements.

8. conclusions

the paper proposes an efficient architecture for mobile systems involved in urban pollution monitoring. it is constituted by two main classes of devices: a mobile measurement unit (mmu) and remote nodes, which in turn can be used both to control the mmu (and then the measurement settings) and to download the data collected by the mmu. the mobile measurement unit is designed to be installed on public vehicles and is able to perform pollution monitoring during the usual vehicle service. the proposed measurement system is conceived to provide the collected data to a suitable central supervision system, which controls many measurement units and uses the acquired data to perform a spatial pollution evaluation in the area of interest (for example by means of suitable data fusion and/or interpolation techniques). the modular architecture of the proposed measurement system assures good feasibility and efficiency. in addition, the described solution is very attractive because it allows exploiting the spatial and time capillarity of a typical public transportation system to achieve in an easy way a high amount of data (from the spatial and time points of view), useful also for improving the prediction models. with reference to the employed data elaboration techniques (voronoi diagrams and ordinary kriging variograms), the achieved results show a good agreement, thanks to the high number of punctual measurements carried out. however, further experimental activity has to be addressed in order to identify the best interpolation strategy related to the number and distribution of the measured data and to the accuracy to be achieved.
moreover, the use of many mobile measurement systems installed on different vehicles, which perform the same or different urban paths, can improve the whole measurement uncertainty by using suitable data fusion techniques. this is the goal of future research developments, when many mobile measurement systems will be available.

figure 6. mmu validation test results: co [ppm], no2 [ppb] and o3 [ppb] measured by the mmu and by the reference instruments from 13/12/10 to 16/12/10.
figure 7. voronoi diagrams of the considered pollutants (the red triangles indicate the position of traditional fixed measurement systems).
figure 8. kriging variograms of the considered pollutants (the red triangles indicate the position of traditional fixed measurement systems).

references

[1] *** "health aspects of air pollution with particulate matter, ozone and nitrogen dioxide", report on a who working group, bonn, germany, 2003.
[2] k. d. zoysa, c. keppitiyagama, "busnet - a sensor network built over a public transport system", proceedings of the 4th european conference on wireless sensor networks, 2007.
[3] d. t. n. r. group, "dtn reference implementation v2.3.0", 2006. [online]. http://www.dtnrg.org/docs/code/dtn2
[4] f. gil-castineira, f. gonzalez-castano, r. duro, f. lopez-pena, "urban pollution monitoring through opportunistic mobile sensor networks based on public transport", proceedings of the ieee international conference on computational intelligence for measurement systems and applications, 2008.
[5] c. weber, j. hirsch, g. perron, j. kleinpeter, t. ranchin, a. ung, l. wald, "urban morphology, remote sensing and pollutants distribution: an application to the city of strasbourg, france", international union of air pollution prevention and environmental protection associations (iuappa) symposium and korean society for atmospheric environment (kosae) symposium, 12th world clean air & environment congress, greening the new millennium, 26-31 august 2001, seoul, korea.
[6] g. varela, a. paz-lopez, r. j. duro, f. lopez-pena, f. j. gonzalez-castano, "an integrated system for urban pollution monitoring through a public transportation based opportunistic mobile sensor network", ieee international workshop on intelligent data acquisition and advanced computing systems: technology and applications, 21-23 september 2009, rende (cosenza), italy.
[7] v. cavalho, j. g. lopes, h. g. ramos, f. correa alegria, "citywide mobile air quality measurement system", proceedings of the ieee sensors 2009 conference, christchurch (nzl), 25-28 oct. 2009, pp. 546-551.
[8] g. betta, d. capriglione, l. ferrigno, g. miele, "influence of wifi computer interfaces on measurement apparatuses", ieee transactions on instrumentation and measurement, vol. 59, issue 12, dec. 2010, pp. 3244-3252.
[9] l. ferrigno, v. paciello, a. pietrosanto, "low-cost visual sensor node for bluetooth-based measurement networks", ieee transactions on instrumentation and measurement, vol. 55, no. 2, april 2006, pp. 521-527.
[10] g. betta, d. capriglione, l. ferrigno, g. miele, "experimental investigation of the electromagnetic interference of zigbee transmitters on measurement instruments", ieee transactions on instrumentation and measurement, vol. 57, issue 10, oct. 2008, pp. 2118-2127.
[11] a. bernieri, d. capriglione, l. ferrigno, m. laracca, "a mobile measurement system for urban pollution monitoring", proceedings of the xiii tc-4 imeko symposium and ix semetro, natal, brazil, september 2011.
[12] directive 1999/30/ec of 22 april 1999 relating to limit values for sulphur dioxide, nitrogen dioxide and oxides of nitrogen, particulate matter and lead in ambient air.
[13] directive 2000/69/ec of the european parliament and of the council of 16 november 2000 relating to limit values for benzene and carbon monoxide in ambient air.
[14] [online] http://www.kanomaxusa.com/iaq/sensor/gas_sensor_head.html
[15] [online] http://www.advantech.com
[16] b. denby, v. garcia, f. holland, c. hogrefe, "integration of air quality modeling and monitoring for enhanced health exposure assessment", special edition of em magazine "monitoring and modeling needs in the 21st century".
[17] b. denby, j. horálek, s. e. walker, k. eben, j. fiala, "interpolation and assimilation methods for european scale air quality assessment and mapping, part i: review and recommendations", etc/acc technical paper 2005/7.
[18] j. horálek, p. kurfürst, b. denby, p. de smet, f. de leeuw, m. brabec, j. fiala, "interpolation and assimilation methods for european scale air quality assessment and mapping, part ii: development and testing new methodologies", etc/acc technical paper 2005/7.

acta imeko, december 2013, volume 2, number 2, 2, www.imeko.org

editorial

paul regtien
measurement science consultancy (msc), julia culpstraat 66, 7558 jb hengelo (ov), the netherlands

section: editorial
citation: paul regtien, editorial, acta imeko, vol. 2, no. 2, article 2, december 2013, identifier: imeko-acta-02 (2013)-02-02
editor: paolo carbone, university of perugia
copyright: © 2013 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: paul regtien, e-mail: paul@regtien.net

this issue of acta imeko comprises the second series of articles based on papers presented at the xx imeko world congress, busan, republic of korea, 9-12 september 2012. this time acta imeko includes the extended and updated versions of papers from tc2 (photonics), tc11 (metrological infrastructures), tc18 (measurement of human functions), tc21 (mathematical tools for measurements) and tc22 (vibration measurement), illustrating the wide range of facets encompassing the field of metrology. the issue starts with five research papers on mathematical tools (tc21), followed by four papers about vibration measurements (tc22), two papers on photonics (tc2) and one article on measuring human functions (tc18). an essential aspect of metrology is the international cooperation with respect to standardization and calibration.
acta imeko has introduced a section on technical notes, offering a position for contributions on metrological infrastructures and other important aspects of metrology, next to the section with regular research papers. the two technical notes in this issue are about the development of metrological infrastructures (tc11). we would like to thank all those who have contributed to this issue: authors, reviewers, layout editors and journal managers. special thanks go to the section editors bernhard zagar (tc2), mladen borsic (tc11), yasuharu koike (tc18), alistair forbes and franco pavese (tc21) and gustavo ripper (tc22). we hope you enjoy this issue.

acta imeko, issn: 2221-870x, april 2016, volume 5, number 1, 10-14

analysis of cadmium and lead in honey: direct-determination by graphite furnace atomic absorption spectrometry

andrea colabucci, anna chiara turco, maria ciprotti, marco di gregorio, angela sorbo, laura ciaralli
european union reference laboratory for chemical elements in food of animal origin (eurl-cefao), department of food safety and veterinary public health, italian national institute of health, v.le regina elena, 299 - 00161 rome, italy

section: research paper
keywords: method validation; chemical elements; honey; atomic absorption
citation: andrea colabucci, anna chiara turco, maria ciprotti, marco di gregorio, angela sorbo, laura ciaralli, analysis of cadmium and lead in honey: direct-determination by graphite furnace atomic absorption spectrometry, acta imeko, vol. 5, no. 1, article 4, april 2016, identifier: imeko-acta-05 (2016)-01-04
section editor: claudia zoani, italian national agency for new technologies, energy and sustainable economic development, rome, italy
received july 31, 2015; in final form december 22, 2015; published april 2016
copyright: © 2016 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: this work has been carried out with the financial support of the sanco 2005/food safety/003-residues program of the european commission. the contents of this manuscript are the sole responsibility of the authors and can in no way be taken to reflect the views of the european commission.
corresponding author: andrea colabucci, e-mail: andrea.colabucci@iss.it

abstract: one of the emerging issues regarding the analysis of inorganic contaminants in food is the determination of cadmium (cd) and lead (pb) in honey. in fact, this kind of analysis is foreseen in most of the national residue control plans (nrcps) that the european union member states (eu mss) have to perform according to directive 96/23. with the aim of providing technical support to the eu national reference laboratories (nrls), an easy-to-use method for the direct determination (dd) of cd and pb in honey by graphite furnace atomic absorption spectrometry (gf-aas), after dissolution in an aqueous mixture of 2.5 % hno3 (v/v) and 12.5 % h2o2 (v/v), was in-house validated and distributed to the nrls network. analyses were carried out on a batch prepared for the feasibility study of the eurl-cefao 19th pt on honey. the validation levels (20 and 100 µg/kg for cd and pb, respectively) were chosen similar to those that would be proposed for the pt. the method proved its efficacy in terms of analytical performance; it is low-cost and time-saving compared to a microwave sample preparation. in addition, it also decreases the environmental impact, as the amounts of acid and reagents are considerably reduced.

1. introduction

according to the tasks listed in article 32 of reg. 882/2004 [1], the eurl-cefao is responsible for the annual evaluation, for group b3c (chemical elements), of the national residue control plans that the european union member states (eu mss) have to perform in accordance with the requirements of directive 96/23/ec [2]. in general terms, these plans foresee the analyses of different food products in order to monitor the presence of unauthorised substances, residues of veterinary medicinal products and chemical contaminants that may represent a risk for public health. as for chemical elements, the eu mss usually choose to investigate those for which maximum levels (mls) are set in the relevant eu legislation (cr 1881/2006) [3], but other element/matrix combinations can be selected according to each ms control strategy. following this approach, the number of eu mss including the analysis of chemical elements in honey increased in 2012.
in particular, according to the relevant eurl-cefao evaluation, cadmium and lead in honey were considered of interest by 26 eu mss, although at that time no ml for chemical elements in this matrix was set in the eu legislation. however, values above which samples are rejected as non-compliant, namely action levels (als), were set for cd and pb in 14 eu countries; the other eleven mss that analyzed honey did not set, or did not give any indication of, the levels considered. the als were spread all over the european union, ranging from mere presence (a few µg/kg) up to hundreds of µg/kg. according to the efsa report 2012 [4], the presence of lead represented the most important cause for sample rejection. furthermore, as the differences in the rules adopted by the member states may hinder the functioning of the common market, the possibility of setting a harmonised maximum level for lead in honey was discussed at eu level, and an ml was set in cr 1005/2015 [5], amending cr 1881/2006 [3], to be adopted as of 1 january 2016. as a consequence of this picture, the availability of analytical methods suitable to quantify chemical elements in honey becomes a key point for both the eu national reference laboratories and the laboratories dealing with official controls. the lack of certified reference materials (crms) [6] as well as the scarcity of specific proficiency tests [7] make it difficult to assess the trueness and the performance of these methods. according to its tasks [1], the eurl-cefao organized in 2013 an interlaboratory comparison for its network on the determination of the cadmium and lead mass fractions in honey, making special efforts to prepare an adequate material for this proficiency test [8]. in fact, from a physical point of view, honey is a high-viscosity liquid foodstuff containing a range of important nutritional complementary elements, including a complex mixture of carbohydrates [9]. the difficulties in obtaining a homogeneous and stable material have so far prevented the effective production of a reference material certified for chemical elements [10], [11].
therefore, this proficiency test was particularly valuable because of the production of a material suitable to check the nrls' capability in dealing with this matrix. as the basal content of cadmium and lead was found to be negligible in the honey used as start material, it was spiked with these chemical elements so as to obtain mass fractions of ~ 0.02 mg/kg and ~ 0.10 mg/kg for cd and pb, respectively. these values were selected on the basis of a rough estimate of the als mean value, but were also considered adequate to make the exercise as profitable as possible. furthermore, the eurl-cefao developed, validated and distributed among the nrls "ad-hoc" in-house methods to quantify cd and pb in honey, so as to fulfil another key task, namely providing the national reference laboratories with details of analytical methods. the validation was carried out with the two analytical techniques used in the network (icp-ms and gf-aas) around the values of mass fraction foreseen for the pt. this activity was of particular interest due to the intrinsic difficulty of the analysis of a highly sugary matrix such as honey. in particular, for gf-aas two different sample preparation approaches were investigated: the first one was based on microwave sample digestion (mw gf-aas), while the second one, consisting in a direct determination (dd gf-aas), was chosen on the basis of information gathered from a survey of the literature [9], [12]. the method validation was planned and conducted according to the eurl-cefao internal procedure based on the eurachem guide [13]. this paper describes the full "in-house" validation of the direct determination of pb and cd in honey by gf-aas and its efficacy and capabilities, especially considering its low-cost and time-saving peculiarities.

2. materials and instrumentations

all the steps of the material preparation were performed in the eurl-cefao facilities. the different types of honey were purchased at retail stores. in particular, ~ 15 kg of the wildflower honey were used for the screening, the production of the preliminary batch and the final pt items. moreover, other types of honey (chestnut, flowers, honeymoon, eucalyptus and melon) having different fluidity were bought to test the method and for screening purposes. all chemical reagents were at least of suprapure grade: hno3 67-69 % (v/v) (romil, cambridge, uk) and h2o2 30 % (v/v) (romil) to dissolve or digest the samples. the spiking solution and the calibrants were prepared from cd and pb elemental stock solutions of high purity grade at 1000 µg/ml in 2 % hno3 (volume fraction) (high-purity standards, charleston, usa, directly traceable to srm 3100 series spectroscopic standard solutions) and diluted in high-purity deionised water (resistivity 18 mω cm; zeneer up 900 water purification system, human corporation, seoul, korea) up to the final concentrations. tfm polytetrafluoroethylene vessels and microwave ovens (ethos 1 and ethos-900, milestone, sorisole, italy) were used for the microwave-assisted acid digestions. the mass fractions of the elements were determined by graphite furnace atomic absorption spectrometry with zeeman-effect background correction (z-gf-aas aanalyst 800, perkinelmer, waltham, usa) using thga graphite tubes and electrodeless discharge lamps (cd and pb, perkinelmer).
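the spiking described above reduces to a simple mass balance; the sketch below computes the stock-solution volumes for a hypothetical batch size (the 10 kg figure is an assumption made only for the example, while the stock concentration and the target mass fractions come from the text, and the basal content is taken as negligible as stated).

```python
STOCK = 1000.0                  # µg/ml, cd and pb elemental stock solutions

def spike_volume_ml(batch_kg, target_mg_per_kg):
    """Volume of stock solution needed to raise a batch with negligible
    basal content to the target mass fraction."""
    analyte_ug = target_mg_per_kg * 1000.0 * batch_kg   # total analyte in µg
    return analyte_ug / STOCK

batch = 10.0                                            # kg, assumed batch size
print(spike_volume_ml(batch, 0.02))   # cd: 0.2 ml of stock
print(spike_volume_ml(batch, 0.10))   # pb: 1.0 ml of stock
```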
matrix modifiers were prepared starting from magnesium nitrate (mg(no3)2·6h2o, suprapur, merck, darmstadt, germany), ammonium dihydrogen phosphate (nh4h2po4, suprapur, merck) and palladium nitrate (pd(no3)2, perkinelmer, 1 % solution in 15 % hno3). an inductively coupled plasma mass spectrometer (icp-ms, elan drc ii, perkinelmer, waltham, usa) with a cyclonic spray chamber and a meinhard quartz concentric nebulizer (meinhard, golden, co, usa) was employed for screening and comparison purposes. minitab 15 statistical software was used for statistical purposes. the following equipment was used to prepare the preliminary batch and the final samples for the 19th eurl-cefao pt: an ols200 shaking water bath (grant instruments, cambridge, uk) to warm up the jars in order to enhance the fluidity of the honeys and to pasteurize the final samples; polypropylene beakers for food use (5 l) to collect the sample before homogenization; two technical balances (mettler-toledo, greifensee, switzerland, 4200s and 16001l) and an analytical balance (mettler-toledo at 261 dr); a t 25 ultra-turrax high-speed homogenizer (ika-werke gmbh & co. kg, germany) equipped with a stainless steel anchor stirrer rod for liquids characterized by medium/high viscosity; a granite machine (fbm l, bras international spa, italy) for larger quantities of highly viscous liquids, equipped with a dispenser.

3. method validation

3.1. general scheme

a preliminary study on the possible interferences on the determination of cd and pb was carried out, especially evaluating the occurrence of a matrix effect. the linearity of the calibration curve used for quantification purposes, the limit of detection (lod) and of quantification (loq), the repeatability [14], the intermediate precision and the trueness were the parameters assessed during the validation process. in particular, the repeatability, lod and loq were calculated according to what is stated in cr 333/2007 [15], even if this is mandatory only for element/matrix combinations for which mls are set in cr 1881/2006 [3] and following amendments.

3.2. instrumental set-up and preliminary studies

ashing and atomization curves were performed to allow the eurl to set the best temperatures in the furnace programmes: 700 °c/1550 °c for cd and 600 °c/1700 °c for pb. the instrumental conditions are reported in table 1. these investigations, as well as the analyses foreseen for the validation, were performed using spare parts (e.g. graphite tubes) at about their half-life, to better reproduce the typical working conditions of the laboratory. moreover, it is to be underlined that, even if a carbonaceous deposit may remain in the graphite tube, its working life is not sensibly reduced. the high matrix effect observed for both elements led to analyzing the samples using the matrix-matched calibration approach. this choice was also due to the necessity to overcome the problems arising from the different density of the aqueous standard solutions and the sample, which resulted in an underestimation of both element mass fractions evidenced in the screening experiments (ca. 20 %). moreover, the matrix modifier concentrations were more diluted compared to those used when analyzing microwave-digested samples, and were prepared in triton x 0.05 % to enhance the fluidity of the whole injected solutions.
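the ~20 % underestimation observed with aqueous standards can be quantified by comparing the slopes of an external and a matrix-matched calibration; a minimal sketch, in which all the absorbance readings are invented placeholders:

```python
import numpy as np

# invented calibration data: absorbance vs concentration (µg/l)
conc = np.array([0.25, 0.5, 1.0, 1.5, 2.0])
abs_aqueous = np.array([0.021, 0.042, 0.083, 0.125, 0.166])   # external standards
abs_matrix  = np.array([0.017, 0.034, 0.067, 0.100, 0.133])   # spiked-honey standards

slope_aq = np.polyfit(conc, abs_aqueous, 1)[0]
slope_mx = np.polyfit(conc, abs_matrix, 1)[0]

# a sample quantified against aqueous standards is biased by the slope ratio
bias = 1.0 - slope_mx / slope_aq
print(f"signal suppression: {bias:.0%}")   # ~20 % with these invented numbers
```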
the calibration curves were constructed using 5 points (0 and base excluded) to cover all the concentrations of interest (0.25-2 and 2-20 µg/kg for cd and pb, respectively), diluting one part of the aqueous standard solutions in one part of basal honey (1:1). the curves were found to be linear (minimum correlation coefficient 0.9995) in the investigated range.

3.3. limits of detection and limits of quantification

lods and loqs were assessed according to the indications set in cr 333/2007 [15]. twenty-one samples from five different types of honey were analyzed. in particular, four samples each of the honeymoon, chestnut, eucalyptus and flowers honeys, and five of the wildflowers honey, were spiked with a small amount of the analytes (0.1 µg/kg and 2 µg/kg for cd and pb, respectively) so as to have a quantifiable analytical signal. the samples were then analysed for their cd and pb content, and the means and the standard deviations of each group were determined. the sample means were statistically compared by using the anova test, and no significant differences were found considering an alpha of 0.05 (p-value = 0.173 and 0.582 for cd and pb, respectively), so the pooled standard deviation was used to calculate the lod and loq for both cd and pb (figure 1). in particular, the pooled standard deviation was multiplied by 3 to derive the lods (0.8 µg/kg and 13 µg/kg for cd and pb, respectively) and by 10 to derive the loqs (2.7 µg/kg and 42 µg/kg for cd and pb, respectively). the values obtained for these parameters fulfilled the internal requirements established by the eurl.

3.4. assessment of repeatability and intermediate precision

the method performance under repeatability conditions [14] was evaluated by analyzing a set of 10 samples (from the preliminary batch) versus two different calibration curves on the same day. as no suitable reference material (rm) with certified values of the analytes of interest was available, the mean value of the results (n = 20) was compared with those obtained analysing the material both with gf-aas and icp-ms after microwave-assisted sample digestion (mw).

table 1. instrumental conditions for cd and pb.

parameter | cd | pb
ashing t (°c) | 700 | 600
atomization t (°c) | 1550 | 1700
volume injected (µl) | 20 | 20
matrix modifier | nh4h2po4 0.06 g/l in triton x 0.01 % | nh4h2po4 0.06 g/l + mg(no3)2 0.05 g/l in triton x 0.01 %
calibration points (µg/l) | 0.25, 0.5, 1.0, 1.5, 2.0 | 2.5, 5, 10, 15, 20
calibration curve | method of addition calibrate | method of addition calibrate
integration mode | peak area | peak area
number of replicates | 2 | 2
minimum sample dilution | 1:1 | 1:1
wavelength (nm) | 228.8 | 283.3
background correction | zeeman effect | zeeman effect
lamp | electrodeless discharge lamp | electrodeless discharge lamp
lamp current (ma) | 230 | 440

figure 1. test one-way anova for pooled standard deviations.
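the lod/loq computation of section 3.3 can be reproduced in a few lines; the honey-group readings below are invented placeholders, while the anova check and the x3/x10 factors follow the procedure as described.

```python
import numpy as np
from scipy.stats import f_oneway

# invented spiked-sample readings (µg/kg) for the five honey types
groups = [np.array([0.62, 0.75, 0.58, 0.70]),          # honeymoon
          np.array([0.66, 0.59, 0.73, 0.64]),          # chestnut
          np.array([0.71, 0.55, 0.68, 0.61]),          # eucalyptus
          np.array([0.60, 0.69, 0.57, 0.74]),          # flowers
          np.array([0.63, 0.72, 0.58, 0.67, 0.70])]    # wildflowers

print(f_oneway(*groups).pvalue)        # no significant difference expected

# pooled standard deviation over the groups
dof = sum(len(g) - 1 for g in groups)
ss = sum((len(g) - 1) * g.std(ddof=1) ** 2 for g in groups)
s_pooled = np.sqrt(ss / dof)

print("lod:", 3 * s_pooled, "µg/kg")   # 3 x pooled sd
print("loq:", 10 * s_pooled, "µg/kg")  # 10 x pooled sd
```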
3.4. assessment of repeatability and intermediate precision

the method performance under repeatability conditions [14] was evaluated analyzing a set of 10 samples (from the preliminary batch) versus two different calibration curves in the same day. as no suitable reference material (rm) with certified values of the analytes of interest was available, the mean values of the results (n=20) were compared with those obtained analysing the material both with gf-aas and icp-ms after microwave assisted sample digestion (mw). the use of the "surrogate" recovery (spiking of the sample with a known amount of the analytes before the digestion) was discarded, as it was considered of lesser significance. in fact, the "surrogate" recovery is effective when samples are quantified versus an external calibration (e.g. to verify if a matrix effect occurs). in this method, samples were quantified versus a calibration curve prepared on spiked honey, so that any interference found in the calibration step would probably similarly affect the quantification of the samples. the mean values of mass fraction calculated from the 20 results showed a good agreement among the different procedures and analytical techniques applied for both cd and pb, as summarised in table 2. this accordance, obtained in the different sample preparation/analytical technique combinations, as well as the agreement with the foreseen mass fractions, given by the sum of the incurred and the spiked mass fractions, were considered good evidence to judge the method as "fit for the purpose". a further proof that can also be considered is the good recoveries obtained (96 % for cd and 102 % for pb) on the material used for the eurl-cefao 17th pt on infant formula, analyzed by icp-ms during the repeatability assessment. this matrix was chosen considering its similarity to honey in nutritional components, as reported in the nist triangle for food equivalence. the values of the relative standard deviations were compared with the horratr value [16]. the criterion of being < 1, prescribed in the eurl-cefao procedure, was met. moreover, it was found that lower standard deviations were observed using dd gf-aas, compared to those obtained when analysing the samples after microwave assisted acid digestion; this effect was probably due to the minimum handling during sample preparation for analysis. the intermediate precision of the method [14] was evaluated as described: six independent aliquots of the material used to assess the repeatability underwent the afore-mentioned sample treatment in two different periods of time (april 2014 and july 2014) and were then analysed for the content of cd and pb. the results coming from each set of analysis (n=12) were combined with the first six results obtained during the repeatability assessment to calculate the standard deviation used to derive the parameters obtained in intermediate precision. as prescribed in the internal procedure, they were collected in a period of time longer than 5 months. in particular, the intermediate precision conditions were simulated by changing the operator in the second run, while another atomic absorption spectrometer (same model and brand) was employed by the first user in the third analytical run.

table 2: results obtained under repeatability conditions using gf-aas dd, gf-aas mw and icp-ms mw.
cd | gf-aas dd | gf-aas mw | icp-ms mw
meanr (µg/kg) | 20.9 | 21.1 | 21.5
sr (µg/kg) | 0.3 | 0.5 | 0.6
rsdr (%) | 1.4 | 2.4 | 2.6
r (µg/kg) | 0.8 | 1.4 | 1.7
pb | gf-aas dd | gf-aas mw | icp-ms mw
meanr (µg/kg) | 115 | 112 | 113
sr (µg/kg) | 2 | 4 | 3
rsdr (%) | 1.7 | 3.6 | 2.7
r (µg/kg) | 6 | 11 | 8
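the horratr check mentioned above can be written down compactly. the sketch below assumes the common horwitz formulation, prsd(%) = 2·c^(-0.1505) with c the mass fraction expressed as a dimensionless ratio, and the usual repeatability variant prsd_r ≈ 0.66·prsd; both are standard conventions rather than expressions taken from this paper, and the worked value uses the dd gf-aas cd figures from table 2 above.

```python
# horrat under repeatability: observed rsd_r divided by the horwitz-predicted rsd_r.
def horwitz_prsd(mass_fraction):
    """predicted reproducibility rsd in %, horwitz: 2 * c**(-0.1505)."""
    return 2.0 * mass_fraction ** (-0.1505)

def horrat_r(rsd_r_obs, mass_fraction):
    """repeatability horrat, using the common prsd_r ~= 0.66 * prsd convention."""
    return rsd_r_obs / (0.66 * horwitz_prsd(mass_fraction))

c_cd = 20.9e-9  # 20.9 ug/kg expressed as a dimensionless mass ratio
print(f"horrat_r (cd, dd gf-aas) = {horrat_r(1.4, c_cd):.2f}")  # rsd_r = 1.4 %
```

a result well below 1, as obtained here, is consistent with the statement above that the eurl-cefao acceptance criterion was met.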
3.5. uncertainty estimate

the estimate of bias, that is the systematic measurement error, is a crucial parameter to assess the uncertainty of a measurement method. the analytical result given by a laboratory needs a "true" value to be compared with, and this can be done using a certified reference material. as previously reported, no crm for chemical elements in honey was available on the market. for this reason accuracy was evaluated versus the assigned value of the eurl-cefao 19th pt on honey, obtained by consensus from the participants (national reference laboratories of the european union) according to iso 13528 [17]. this approach was chosen bearing in mind that the low value of uncertainty of the material, which does not take into account all the parameters considered when dealing with a crm or a rm [18], [19], could lead to an underestimation of the method uncertainty. for this reason the contributions of the non-significant bias and instrumental drift were included in the uncertainty budget. it is also to be underlined that the consensus values were, anyway, obtained from the results of outstanding laboratories (eu nrls). moreover, the assigned values (0.0199 mg/kg and 0.102 mg/kg for cd and pb, respectively) were in full agreement with the grand mean of the results obtained by the eurl-cefao (0.0201 mg/kg and 0.106 mg/kg for cd and pb, respectively) when assessing the sufficient homogeneity of the material [8] according to the international harmonized protocol [20]. according to the internal procedure based on the eurachem guide for quantifying uncertainty [21], two sets of 4 samples each were analysed together with the sets for the intermediate precision assessment, and a third set in september 2014, to cover a time interval longer than 5 months. the mean of the values obtained for cd and pb for the twelve samples was considered. a test for the significance of the recovery was performed using student's t-test. the t_calc from (1), computed using the average recovery ($\overline{rec}$) and its standard uncertainty ($u_{\overline{rec}}$), was compared with the 2-tailed critical value for n-1 (i.e. 11) degrees of freedom at 95 % confidence (t_crit = 2.20):

$t_{calc} = \frac{|1 - \overline{rec}|}{u_{\overline{rec}}}$ . (1)

the t_calc both for cd (1.71) and for pb (0.97) were lower than t_crit = 2.20. even if the comparison between t_calc and t_crit showed that the bias was not significant, the contribution of the bias was however included in the uncertainty budget [22]. the combined uncertainty was estimated taking into account the figures obtained in intermediate precision, bias and instrumental drift according to the following equation:

$\frac{u_c(c_x)}{c_x} = \sqrt{\left(\frac{s_R}{\sqrt{m}\,c_R}\right)^2 + \left(\frac{u_{bias}}{c_{bias}}\right)^2 + \left(\frac{drift_{max}}{\sqrt{m}}\right)^2}$ , (2)

where $s_R$ and $c_R$ are the standard deviation and the mean value of the measurements in intermediate precision conditions, and m is the number of replicates analyzed in routine analysis (three in the eurl-cefao procedure). the drift (continuous change in an indication, related neither to a change in the quantity being measured nor to a change of any recognized influence quantity of the instrument [14]) was also included. the uncertainty of the bias was estimated taking into account the figures obtained in intermediate precision conditions (mean value, bias, standard deviation):

$\frac{u_{bias}}{c_{bias}} = \sqrt{\left(\frac{bias}{m_{RM}}\right)^2 + \left(\frac{s_{RM}}{m_{RM}\sqrt{m}}\right)^2 + \left(\frac{u_{c_{RM}}}{c_{RM}}\right)^2}$ . (3)

the maximum uncertainties associated, using 2 as a coverage factor [15], were 10 % and 12 % for cd and pb, respectively, complying with the maximum allowable value set in cr 333/2007 [15]. the figures of merit are reported in table 3.

table 3: figures of merit of the method.
parameter | cd | pb
validation level (µg/kg) | 20 | 100
sr (µg/kg) | 0.3 | 2
rsdr (%) | 1.4 | 1.7
r (µg/kg) | 0.8 | 6
sR (µg/kg) | 0.2 | 4
rsdR (%) | 1.0 | 3.5
R (µg/kg) | 0.6 | 11
lod (µg/kg) | 0.8 | 13
loq (µg/kg) | 2.7 | 42
recovery (%) | 103.5 | 102.9
u (µg/kg) | 1 | 6
u (%) | 5 | 6
U (µg/kg) | 2 | 12
U (%) | 10 | 12
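equations (1)-(3) amount to a short uncertainty-budget routine. the sketch below reproduces the recovery t-test and the combination of intermediate precision, bias and drift; it uses the cd figures quoted in the text where available, while u_rec, the relative bias uncertainty and the maximum drift are invented placeholders (flagged in the comments).

```python
# recovery significance test, eq. (1), and combined relative uncertainty, eqs. (2)-(3).
import math

# recovery test: t_calc = |1 - rec| / u_rec, compared with t_crit (95 %, 11 dof)
rec, u_rec = 1.035, 0.0205  # mean recovery (cd); u_rec is a placeholder giving t_calc ~ 1.71
t_calc = abs(1.0 - rec) / u_rec
print(f"t_calc = {t_calc:.2f} vs t_crit = 2.20 -> bias significant: {t_calc > 2.20}")

# combined uncertainty, eq. (2): intermediate precision + bias + drift terms
s_R, c_R = 0.2, 20.9   # intermediate precision sd and mean, ug/kg (cd, table 3)
m = 3                  # routine replicates (eurl-cefao procedure)
u_bias_rel = 0.035     # placeholder relative bias uncertainty from eq. (3)
drift_max = 0.02       # placeholder maximum relative drift

u_c_rel = math.sqrt((s_R / (math.sqrt(m) * c_R)) ** 2
                    + u_bias_rel ** 2
                    + (drift_max / math.sqrt(m)) ** 2)
print(f"relative expanded uncertainty (k=2) = {2 * 100 * u_c_rel:.1f} %")
```

with the real bias and drift figures the same three-term combination yields the 10 % and 12 % expanded uncertainties quoted above for cd and pb.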
4. conclusions

method validation was carried out on an in-house prepared honey sample with ad-hoc mass fractions of cd and pb. the long-standing experience as producer of pt test items permitted the eurl-cefao to obtain a suitable material, whose homogeneity and stability for the time necessary for the validation were verified according to internal procedures based on iso/iec 13528. validation levels were chosen taking into account a rough mean of the action levels set by the eu mss in their 2012 national residue control plans for these element/matrix combinations. this choice has proved to be of particular relevance considering the recent introduction of a maximum level for lead (cr 2015/1005 amending cr 1881/2006) in this matrix at a value consistent with the mass fraction used in the eurl method validation. the preliminary results produced applying the direct determination method were in good agreement with those obtained by icp-ms and gf-aas after mw sample digestion. the minimum sample handling also allowed the dd gf-aas method to provide high quality data both under repeatability and intermediate precision conditions (table 3). moreover, notwithstanding the lack of a proper certified reference material, it was possible to verify the capability of this method, in terms of accuracy, versus leftover test item material employed for the eurl-cefao 19th pt on honey. as reported in table 3, the good recovery for both elements (103.5 % and 102.9 % for cd and pb, respectively) versus the consensus values produced by outstanding laboratories (nrls of the eu mss) is a further proof of the appropriateness of the method performance. this outcome demonstrates the suitability of the method for the direct determination of cd and pb in honey by graphite furnace atomic absorption spectrometry in terms of capability and precision. its main strong point is the minimum sample handling, resulting in a very low cost, time saving and more "eco-friendly" process.

acknowledgement

the authors thank guendalina fornari luswergh for editorial assistance on the manuscript.

references

[1] regulation (ec) no 882/2004 of the european parliament and of the council of 29 april 2004 on official controls performed to ensure the verification of compliance with feed and food law, animal health and animal welfare rules. oj l 165: 1-141.
[2] council directive 96/23/ec of 29 april 1996 on measures to monitor certain substances and residues thereof in live animals and animal products and repealing directives 85/358/eec and 86/469/eec and decisions 89/187/eec and 91/664/eec. oj l 125: 10-32.
[3] commission regulation (ec) no 1881/2006 of 19 december 2006 setting maximum levels for certain contaminants in foodstuffs. oj l 364: 5-24.
[4] european food safety authority, 2014. report for 2012 on the results from the monitoring of veterinary medicinal product residues and other substances in live animals and animal products. efsa supporting publication 2014:en-540. 65 pp.
[5] commission regulation (eu) no 2015/1005 of 25 june 2015 amending regulation (ec) no 1881/2006 as regards maximum levels of lead in certain foodstuffs. oj l 161: 9-13.
[6] http://www.comar.bam.de/en
[7] https://www.eptis.bam.de/en/index.htm
[8] l. ciaralli, a. c. turco, m. ciprotti, a. colabucci, m. di gregorio, a. sorbo, honey as a material for proficiency testing, accred. qual. assur., doi 10.1007/s00769-015-1119-2.
[9] m. d. ioannidou, g. a. zachariadis, a. n. anthemidis, j. a. stratis, direct determination of toxic trace metals in honey and sugars using inductively coupled plasma atomic emission spectrometry, talanta (2005); 65(1): 92-97.
[10] s. caroli, g. forte, m. alessandrelli, r. cresti, m. spagnoli, s. d'ilio, j. pauwels, g. n. kramer, a pilot study for the production of a certified reference material for trace elements in honey, microchemical journal (2000); 67(1-3): 227-233.
[11] g. forte, s. d'ilio, s. caroli, honey as a candidate reference material for trace elements, journal of aoac international (2001); 84(6): 1972-1975.
[12] p. viñas, i. lópez-garcía, m. lanzón, m. hernández-córdoba,
direct determination of lead, cadmium, zinc, and copper in honey by electrothermal atomic absorption spectrometry using hydrogen peroxide as a matrix modifier, j. agric. food chem. (1997); 45(10): 3952-3956.
[13] eurachem guide: the fitness for purpose of analytical methods. a laboratory guide to method validation and related topics, 1st edition, 1998.
[14] international vocabulary of metrology - basic and general concepts and associated terms (vim), 3rd edition, 2012.
[15] commission regulation (ec) no 333/2007 of 28 march 2007 laying down the methods of sampling and analysis for the official control of the levels of lead, cadmium, mercury, inorganic tin, 3-mcpd and benzo(a)pyrene in foodstuffs. oj l 88: 29-38.
[16] w. horwitz, r. albert, the horwitz ratio (horrat): a useful index of method performance with respect to precision, j. aoac int. (2006); 89(4): 1095-1109.
[17] iso/iec 13528 (2005) statistical methods for use in proficiency testing by interlaboratory comparisons. international organization for standardization, geneva, switzerland.
[18] iso guide 34 (2009) general requirements for the competence of reference material producers. international organization for standardization, geneva, switzerland.
[19] iso guide 35 (2006) reference materials - general and statistical principles for certification. international organization for standardization, geneva, switzerland.
[20] m. thompson, s. l. r. ellison, r. wood, international harmonized protocol for the proficiency testing of analytical chemistry laboratories, pure appl. chem. (2006); 78(1): 145-196.
[21] eurachem/citac guide cg 4, quantifying uncertainty in analytical measurement, third edition, 2012.
[22] b. magnusson, s. l. r. ellison, treatment of uncorrected measurement bias in uncertainty estimation for chemical measurements, analytical and bioanalytical chemistry (2008); 390(1): 201-213.

acta imeko may 2014, volume 3, number 1, 2-3 www.imeko.org

introductory notes for the acta imeko special issue on "historical papers on fundamentals of measurement"

l. mari

school of industrial engineering, università carlo cattaneo - liuc, c.so matteotti, 22, 21053 castellanza (va), italy

section: editorial

keywords: measurement theory, uncertainty, history

citation: l. mari, introductory notes for the acta imeko special issue on "historical papers on fundamentals of measurement", acta imeko, vol. 3, no. 1, article 2, may 2014, identifier: imeko-acta-03 (2014)-01-02

editor: paolo carbone, university of perugia

received may 1st, 2014; in final form may 1st, 2014; published may 2014

copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited

corresponding author: l.
mari, e-mail: lmari@liuc.it

the term "acta imeko" was originally used to designate the proceedings of the imeko world congresses, and explicitly the first twelve congresses from budapest, hungary, 1958, to beijing, china, 1991, held in a period of about thirty crucial years for our scientific, technological, social, and political history. this special issue of the (new) acta imeko journal offers the reader a small selection of papers on fundamental aspects of measurement that were originally published in those proceedings. it is an opportunity, first of all, to pay a tribute to the many distinguished colleagues who contributed excellently to the development of measurement science in the context of imeko, but also to celebrate imeko in its history and unique role in the promotion of international sharing of knowledge and collaboration among scientists. despite the common feeling that we live in an epoch of extraordinary, and extraordinarily rapid, changes, the papers that have been reissued here mainly maintain their scientific significance, for the outstanding visions and analyses that they propose, while their most temporally situated positions are often of great interest as lively witnesses of what the state and the perspectives of measurement science were a few decades ago. a chronological study on the (original) acta imeko might be conducted, for example, on the changes induced on measurement by the progressive diffusion of digital devices and numerical methods, and the corresponding formation of the concept of measuring systems as information machines. unmodified significance of the basic proposals and historical interest of the foreseen perspectives: these have been the guiding principles in the selection of the papers for the present special issue, all of them sharing an interest in contributing to the development of a well-structured, systemic interpretation of the fundamentals of measurement (the selection has been limited by the unfortunate fact that not all volumes of the proceedings were available in the selection stage). in the present social and economic situation, which more and more emphasizes the role and the importance of measurement processes, identifying the pivotal elements of a culture of measurement becomes a critical task for metrology, the "science of measurement and its application" (as the international vocabulary of metrology defines it), possibly formalized in the development of a body of knowledge on measurement science. the diachronic perspective offered by these papers might give some contributions in highlighting continuities and discontinuities in fundamental concept formation in the last fifty years of measurement-related scientific and technical history. the reader might be interested to know that the current default choice of english as the only language for the published papers is relatively recent in the context of the imeko community: in the first fifteen years of acta imeko, multiple options were available as for the language of papers (and the proceedings of the first two congresses, budapest, hungary, 1958 and 1961, include texts in five languages: english, french, german, hungarian, and russian!). on the other hand, only english written papers have been selected for this special issue.
the reissued papers are identical in contents to the originals (having thus accepted lack of homogeneity in the layout, abstract and keywords, sometimes absent), and they have only been edited to make them better readable within web pages and to fix the few material errors that have been found. the order in which they are presented is chronological. imeko dedicates this special issue to the 2014 world metrology day, which celebrates the signature of the metre convention on may 20th 1875 by representatives of seventeen nations. the role of measurement in the development of industrial automation, by s. s. carlisle [acta imeko 1967, proceedings of the 4th international measurement congress, 1967, warsaw, vol. 1, pp. 37-50], is a far-sighted paper that sets a frame on the role of measurement and control as critical tools to support the automation of manufacturing processes. in this perspective the author discusses three main requirements for measurement, i.e., to identify where automation can be most profitably used; to investigate individual process behaviors and hence to formulate process control strategies; and finally to perform quality control of products. also with the support of several practical examples, with interesting mentions of the growing importance of digital computers and pattern recognition techniques, the author proposes a systemic vision, in which measurement is convincingly intended as a communication enabler among the different subjects involved in industrial innovation processes: control engineers, process and product engineers, managers, and end users. fundamental concepts of measurement, by l. finkelstein [acta imeko 1973, proceedings of the 6th congress of the international measurement confederation, "measurement and instrumentation", 17-23.6.1973, dresden, vol. 1, pp. 11-27], proposes a clear, synthetic presentation of some basics of measurement science. while in the last forty years many things have changed in the technological aspects of measurement (so that, in particular, what the author calls "associative measurement" is much more widespread than one might suppose from the short mention that is devoted to it here), the conceptual framework that follows is an excellent entry point for the reader interested in the epistemological and logical foundations of measurement. the measurement chain and validation of experimental measurements, by r. j. moffat [acta imeko 1973, proceedings of the 6th congress of the international measurement confederation, "measurement and instrumentation", 17-23.6.1973, dresden, vol. 1, pp. 45-53], witnesses the sophisticated discussion that, well before the publication of the guide to the expression of uncertainty in measurement (gum), was active in the measurement science community around the subject of error and uncertainty, and its consequences on the structure of the measuring process and the way it is performed. some of the ideas presented here, and particularly the hypothesis that what the gum calls the standard uncertainty admits multiple methods of computation depending on the aim of the process, are still delicate, open issues today. problems in theory of measurement today, by l. gonella [acta imeko 1979, proceedings of the 8th imeko congress of the international measurement confederation, "measurement for progress in science and technology", 21-27.5.1979, moscow, vol. 1, pp.
103-110], challenges, in an informed and convincing way, some basic presuppositions of the "classical" theory of measurement, particularly highlighting the biases induced by an often uncritical adoption of the geometric paradigm, and consequently of real numbers, in measurement. theoretical, physical and metrological problems of further development of measurement techniques and instrumentation in science and technology, by d. hofmann and yu. v. tarbeyev [acta imeko 1979, proceedings of the 8th imeko congress of the international measurement confederation, "measurement for progress in science and technology", 21-27.5.1979, moscow, vol. 3, pp. 607-626], lists a number of issues of measurement science and technology, thus interestingly witnessing their state in the second half of the 20th century. measurement and meaningfulness, by j. van brakel [acta imeko 1985, proceedings of the 10th world congress of the international measurement confederation, "new measurement technology to serve mankind", 22-26.4.1985, prague, pp. 319-333], is a learned paper on a key concept in measurement, i.e., the meaningfulness of statements reporting measurement results, whose connections, among the others, to scale invariance, conditions for derived measurement, dimensional analysis, and definition of measurement standards are clearly presented and discussed also by means of several examples. structured instrument design, by p. h. sydenham [acta imeko 1988, proceedings of the 11th triennial world congress of the international measurement confederation (imeko), "instrumentation for the 21st century", 16-21.10.1988, houston, pp. 271-275], proposes an overview on the delicate subject of the possibility of automatic design and configuration of measuring systems ("cad of instruments", or "instrument cae"), in a well-conceived scenario of measuring systems intended as an important class of information machines. although in those early days of computer-aided processes the position that the creative activity of development can be fully automatized was not unusual, and was sometimes made into a research plan (as in the case of the efforts towards universal software code generators), the author is well aware of the importance not only of formal algorithms, but also of heuristics, "rules of thumb", and even subjective judgement, a critical source of complexity for the purported target. impact of modern instrumentation on the system of basic concepts in metrology, by j. m. jaworski, j. bek, and a. j. fiok [acta imeko 1988, proceedings of the 11th triennial world congress of the international measurement confederation (imeko), "instrumentation for the 21st century", 16-21.10.1988, houston, pp. 125-134], offers several analytical arguments on the practical importance of rethinking the basics of measurement science as a concept system, in response to the critical increase of complexity of measurement problems in both spatial and temporal dimensions, i.e., relating to multivariate, dynamic quantities. the authors significantly emphasize the role of models and of pragmatics, including the costs of designing and performing the process. intelligent instrumentation: a quality challenge, by h. j. kohoutek [acta imeko 1988, proceedings of the 11th triennial world congress of the international measurement confederation (imeko), "instrumentation for the 21st century", 16-21.10.1988, houston, pp.
337-345], presents a systematic analysis of the consequences of the adoption of artificial intelligence techniques in measuring systems and the related open issues. the author emphasizes that the growing importance of software components offers new opportunities for the development of flexible, "intelligent" instrumentation, but at the same time poses new problems in the metrological assurance of their behavior. our hope is that, together with the pride and the curiosity that this small sample can produce on imeko and its history, the reader can find here new reasons of interest for measurement science and its fundamental role for our society.

acta imeko july 2012, volume 1, number 1, 49-58 www.imeko.org

a two-stage, guarded inductive voltage divider with small ratio errors for coaxial bridge applications

gregory a. kyriazis 1, j. angel moreno 2, jürgen melcher 3

1 instituto nacional de metrologia, qualidade e tecnologia, av. nossa senhora das graças, 50, 25250-020, duque de caxias, rj, brazil
2 centro nacional de metrología, km 4.5 carretera a los cués, municipio del marqués, cp 76900, querétaro, mexico
3 physikalisch-technische bundesanstalt, bundesallee 100, 38116 braunschweig, germany

abstract: the traceability chain to derive the capacitance unit from the quantum hall resistance comprises some coaxial bridges. these bridges employ a main two-stage inductive voltage divider to provide the voltage ratio needed. one such divider has been recently constructed and calibrated at inmetro. the design techniques responsible for the small ratio errors of the device and the calibration method employed are both detailed. the new divider was installed in inmetro's two terminal-pair coaxial capacitance bridge with significant improvements in the bridge resolution and accuracy.

keywords: capacitance; coaxial bridges; inductive voltage dividers; traceability chain; calibration; metrology

citation: gregory a. kyriazis, j. angel moreno, jürgen melcher, a two-stage, guarded inductive voltage divider with small ratio errors for coaxial bridge applications, acta imeko, vol. 1, no. 1, article 11, july 2012, identifier: imeko-acta-01(2012)-01-11

editor: pedro ramos, instituto de telecomunicações and instituto superior técnico/universidade técnica de lisboa, portugal

received january 9th, 2012; in final form may 17th, 2012; published july 2012

copyright: © 2012 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited

funding: this work was supported by instituto nacional de metrologia, qualidade e tecnologia (inmetro), brazil

corresponding author: gregory a. kyriazis, e-mail: gakyriazis@inmetro.gov.br

1. introduction

the traceability chain to derive the capacitance unit from the quantized hall resistance comprises a few coaxial bridges: a 1:1 ratio bridge, a 10:1 ratio bridge, a quadrature bridge and an inductive voltage divider (ivd) calibration bridge. in fact only three bridges are necessary, as a 1:1 ratio bridge is easily changed to a 10:1 ratio bridge by a cable rearrangement. all these bridges can use the same isolation transformer and main ivd [1]-[3]. the main ivd to be used in coaxial ratio bridges [4] should be constructed to provide an overall bridge uncertainty at 1:1 and 10:1 ratios (at 1 khz and 1.592 khz) of a few parts in 10^8. this requires the use of special guarding and two-stage techniques typically adopted in the 10-100 khz range [5]. a similar ivd had been constructed previously during the development of inmetro's two terminal-pair coaxial capacitance bridge [6][7]. this bridge has been in operation since 2005. the old ivd design details and calibration results had been reported in [8]. the new ivd discussed here was completely constructed at inmetro and presents much smaller ratio errors than the previous one.
this work benefitted from the technical expertise of centro nacional de metrología (cenam) and physikalisch-technische bundesanstalt (ptb). the coaxial capacitance bridge construction details are reviewed in section 2. the corresponding measurement model and the cable effects are respectively analyzed in sections 3 and 4. section 5 presents the construction details of the new ivd and the design changes that we believe were responsible for the results obtained. the ivd calibration is discussed in section 6. the old ivd was replaced by the new one in the coaxial capacitance bridge and the overall bridge uncertainty was reevaluated as reported in section 7. the conclusions are drawn in section 8.

2. two terminal-pair coaxial capacitance bridge

the bridge design is originally from ptb and is similar to that adopted by cenam [9]. both inmetro and cenam participated successfully with similar bridges in an international comparison on capacitance (sim.em-k4, sim.em-s4 and sim.em-s3 [10]) among several national metrology institutes from the inter-american metrology system (sim). the capacitance bridge uses a two terminal-pair coaxial design [4]. one way of looking at a coaxial bridge is to see it as two superposed networks (figure 1). the first of these consists of straightforward meshes of components and the interconnecting wires between them. the second network comprises the shields of the components and the outer, coaxial shield of the connecting cables. the configurations of the two
to achieve equal return currents in the whole bridge network, every independent mesh of the bridge network must be provided with one current equalizer in a way that every bridge component is connected to the central ground point of the bridge with just one path without current equalizer. this central point is connected to earth ground. a simplified scheme of a two terminal-pair coaxial bridge for comparing two standard capacitors of the same nominal value (cn and cx) is shown in figure 1. cx is the capacitor under calibration and cn is the reference standard. the current equalizers are also shown in the figure. the main ivd operates with a voltage u across the taps. its actual ratio deviates from the nominal 1:1 ratio by a small complex amount. further, the two capacitances are not ideal and do not have precisely the same value. the complex voltage ud detected at the node between the two capacitances is therefore not null. the real part of this voltage is then compensated by injecting an adjustable in-phase current. this is done by applying an adjustable voltage u/ to capacitor c. likewise, the imaginary part is compensated by injecting an adjustable quadrature current. this is done by applying an adjustable voltage u/ to conductance g. when the detector voltage is nullified in this way, we obtain the in-phase balance equation    x n 1 2 1 2 c c c c             , (1) where c  is the parasitic capacitance (not shown in the figure) that shunts the conductance g. the dividing factors 1 and 1 are obtained by balancing the bridge with the capacitors cn and cx positioned as shown in figure 1, and 2 and 2 are obtained by rearranging the cables so that cn and cx are interchanged. the complex ratio error of the main ivd is cancelled by this technique. the constructive details of the bridge built at inmetro are discussed in the sequel. the detailed scheme of the capacitance bridge is shown in figure 2. the current equalizers are shown as black rectangles figure 2. detailed scheme of the coaxial capacitance bridge (original design from ptb).  figure  1.  simplified  scheme  of  the  two  terminal‐pair  coaxial  capacitance bridge.  acta imeko | www.imeko.org  july 2012 | volume 1 | number 1 | 51  (for clarity, we preferred not to draw the coaxial cable shields). the two terminal-pair coaxial ratio bridge operates at 1 khz and 1.592 khz and compares decadic capacitors in the range from 10 pf to 1 nf at 1:1, 10:1 and 1:10 ratios. the scheme of the 1:1 ratio bridge is shown in figure 2. a 1:1 ratio bridge is easily changed to a 10:1 ratio bridge by a simple cable rearrangement (the corresponding scheme of the 10:1 ratio bridge is shown in [6]). the bridge comprises an isolation transformer, a two-stage main ivd, a voltage predivider, a t-network box, a switch box, a short-circuit box, and a wagner current transformer. the following commercial equipment complement the bridge: an ultra-pure sinusoidal oscillator, a wideband power amplifier, a low-noise preamplifier, a lock-in amplifier, a 1 pf fused-silica standard capacitor and two coaxial, double, six-decade ivds. a photo of the capacitance bridge is shown in figure 3. the isolation transformer is used to isolate the bridge from the mains. the two-stage main ivd is constructed to provide an uncertainty in 1:1 and 10:1 ratios of a few parts in 108. both the isolation transformer and the main ivd have a flexible design that allows their use in other bridges of the aforementioned traceability chain. 
the specific design and construction details of these devices were published in [13][8]. the power amplifier is cascaded to the sinusoidal oscillator to apply the voltage required to the isolation transformer primary taps, so that up to u = 200 v is available across the +5 and –5 secondary taps (worst case when two 10 pf capacitors are compared for maintenance of the unit). the voltage u depends on the bridge ratio and the standard capacitor value (see table 2 in the next section). table 1 lists the values of voltage and frequency which are recommended for the whole chain from 10 pf to 1 nf. the voltages applied to the standards decrease by a factor of 10 for each 10:1 ratio step in the chain avoiding corrections due to voltage dependence of the standards (that of the ivd ratio error is negligible). the transition in frequency from 1 khz to 1.592 khz is necessary for the traceability chain to derive the capacitance unit from the quantum hall resistance. it is made with the standard which has the lowest frequency dependence, namely, the 1 nf capacitor. the voltage choice ultimately means that the current flowing through the standards being compared is always kept at approximately 6.3 a at 1 khz (or 10 a at 1.592 khz). the main balance voltage is derived from the voltage across the +1 and 0 taps of the isolation transformer secondary winding and reduced by the 10:1 voltage predivider. the resulting voltages at the 0.1 taps of the voltage predivider are applied to variable impedances (1 pf fused silica capacitor and a t-network box cascaded to coaxial, double, six-decade ivds), thus generating the in-phase and quadrature main balance currents, respectively. the t-network box is built with metalfilm resistors. the short-circuit box is a multiport connector in which the inners are brought together at a point, as are the outers. a suitable construction is illustrated in [4] where the inners are soldered symmetrically to a disc. the short-circuit box receives the currents flowing through the standards being compared and the in-phase and quadrature main balance currents. it has also figure 3. inmetro´s two terminal‐pair coaxial capacitance bridge.  table 1. voltage applied to standard capacitors.   standard  (pf)  frequency  (hz)  voltage  (v)  10  1000  100  100  1000  10  1000  1000 and 1592  1  acta imeko | www.imeko.org  july 2012 | volume 1 | number 1 | 52  an output port for voltage monitoring. the main balance is adjusted by monitoring the voltage at the short-circuit box while changing the impedances connected to the 0.1 taps of the voltage predivider. the low-noise preamplifier is cascaded to the lock-in amplifier to monitor the balance voltage. with the preamplifier gain set to 103, we have been able to balance the bridge so that the lock-in amplifier readings are within 20 v. the isolation transformer has a separate secondary winding for the wagner balance [13]. this balance is done first (the battery-operated switch box is used to select the balances to be monitored). we have verified that this balance is crucial to the accuracy of the main balance. the voltage across the 1 w taps of this separate winding is applied to coaxial, double, sixdecade ivds with selectable fixed impedances and the generated current is injected in the isolation transformer zero tap connection to ground through an 100:1 injection transformer (installed inside the isolation transformer). this changes the potential between the zero tap and ground. 
the current in tap terminal 5 of the main ivd is changed accordingly. the wagner balance is adjusted by monitoring the current in tap terminal 5 with the 1:100 wagner current transformer while changing the settings of the wagner balance ivds. we have been able to balance the wagner network so that the lock-in amplifier readings are within 5 v. 3. measurement model  the simplified scheme of the capacitance bridge is shown in figure 4. the main ivd operates with a voltage u across the 0 and 10 taps for the 1:1 ratio or across the 0 and 11 taps for the 10:1 (or 1:10) ratio (see figure 2). the voltage value depends on the bridge ratio and the standard capacitor value (table 2). the nominal ratio of the main ivd is d = 1/2 for the 1:1 ratio bridge and d = 1/11 for the 10:1 (and 1:10) ratio bridge. the value of 1/ ratio depends on the bridge ratio and the voltage predivider ratios (table 3). for instance, for a 1:1 ratio bridge, the isolation transformer ratio at its +1 secondary tap is 1/10. the voltage predivider ratio at its +0.1 tap (typically used) is 1/10. the value of 1/ ratio in this case is therefore 1/100. the resulting dividing factors of the bridge main balance are  = (a  0.5)/0.5 and  = (b  0.5)/0.5, where a and b are the settings of each coaxial six-decade main balance ivd. cn (and gn) is the known capacitance (and conductance) of the standard capacitor, and cx (and gx) is the unknown capacitance (and conductance) of the capacitor under calibration. c is the 1 pf fused-silica standard capacitor. g and c  are the conductance and parasitic capacitance, respectively, of the t-network box (with connecting cables), measured with a commercial capacitance bridge. the ratio error is here expressed as a fraction of unit. in this case, the ivd ratio is expressed as a sum of the nominal ratio d and the complex ratio error  = k + jk (k and k are respectively the in-phase and quadrature components), that is 0.1 0 1.1 0u u d d k jk        . (2) the main balance is obtained when n x b m 0   i i i i . (3) the balance equation for the 1:1 ratio is given by (1), which is reproduced here for the reader convenience,    x n 1 2 1 2 c c c c             . (1) the dividing factors 1 and 1 are obtained by balancing the bridge with the capacitors positioned as shown in figure 4, and 2 and 2 are obtained by rearranging the cables so that the capacitors are in the reversed position. as mentioned before the complex ratio error of the main ivd is cancelled by this technique. however, it is interesting for quality control purposes to compute the in-phase and quadrature components of the ratio error of the main ivd for the 1:1 ratio, that is       n x n 1 2 1 2 1 2 1 4 4 c c k c c g k c                       (4) where  is the angular frequency. the corresponding balance equation for the 10:1 ratio is  x n 1 1 gtc c t        , (5a) table 2. total applied voltage u.   standard  (pf)  bridge  ratio  voltage  (v)  10  1:1  200  10  10:1  110  100  1:1  20  100  10:1  11  1000  1:1  2  table 3. value of .   predivider ratio  bridge  ratio   10:1  1:1  100  10:1  10:1  110  1:1  1:1  10  1:1  10:1  11  figure 4. simplified scheme of the capacitance bridge.  
here it is neither possible nor necessary to use the aforementioned cabling reversal technique. one needs, however, to know accurately the in-phase ratio error of the main ivd for the nominal ratio d = 1/11. thus, the main ivd needs to be calibrated at this ratio (see section 6). the physical arrangement of the bridge is the same for both the 10:1 and 1:10 bridge ratios. the only change is the reversed position of cn and cx. the balance equation for the 1:10 ratio is, to first order,

$c_x = c_n\,\frac{t}{1 - t}$ , (6)

with t as in (5b).

4. cable correction

the values of cx obtained in (1), (5) and (6) include the cable parasitic contributions. we must correct the results for the cable errors. a simplified scheme of the capacitance bridge with the connecting cables modelled as delta networks is shown in figure 5. cable losses can be neglected for capacitance bridges, as high quality cables are used. l1 and c2 are, respectively, the inductance and half the capacitance of the cable labelled '(7)' in figure 2. l3 and c6 are, respectively, the inductance and half the capacitance of the cable labelled '(8)'. the contribution of c1 (and c5) is not taken into account since these capacitances are in parallel with the source. the contribution of c4 (and c8) is negligible when the bridge is balanced. l2 and c3 are, respectively, the inductance and one-fourth the capacitance of the cable labelled '(9)' in figure 2. l4 and c7 are, respectively, the inductance and one-fourth the capacitance of the cable labelled '(10)'. chg and clg are the parasitic capacitances to ground of the high and low inputs of the standard capacitor cn, respectively. c′hg and c′lg are the capacitances to ground of the high and low inputs of the capacitor under calibration cx, respectively. the cable relative error for the standard capacitor is, approximately,

$\varepsilon_{rn} = \frac{c_{tn} - c_n}{c_n} \approx \omega^2\left[l_1\,(c_{2hg} + c_n) + l_2\,(c_{3lg} + c_n)\right]$ , (7)

where ctn is the total capacitance of the standard capacitor (capacitor plus cables), c2hg = c2 + chg and c3lg = c3 + clg. the cable relative error for the capacitor under calibration is, approximately,

$\varepsilon_{rx} = \frac{c_{tx} - c_x}{c_x} \approx \omega^2\left[l_3\,(c'_{6hg} + c_x) + l_4\,(c'_{7lg} + c_x)\right]$ , (8)

where ctx is the total capacitance of the capacitor under calibration (capacitor plus cables), c′6hg = c6 + c′hg and c′7lg = c7 + c′lg. for the 1:1 ratio, the measurement result should be corrected for the total relative error, that is, εr = εrn - εrx. for the 10:1 ratio, we regard both the standard capacitor and its associated cable as a whole standard. the value of the standard capacitor is therefore inserted in the spreadsheet as cn(1 + εrn), and the measurement result is corrected for the cable error of the capacitor under calibration, that is, cx(1 - εrx).

figure 5. simplified scheme of the capacitance bridge with cable modeling.
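the cable corrections (7) and (8) are simple enough to evaluate numerically. the sketch below plugs round figures typical of a metre-scale coaxial lead (all invented placeholders, not values from the paper) into the reconstructed form of (7), to show the order of magnitude of the correction at 1.592 khz.

```python
# relative cable error, eq. (7): eps_rn ~ w^2 * (l1*(c2hg + cn) + l2*(c3lg + cn))
import math

w = 2 * math.pi * 1592.0
l1, l2 = 0.5e-6, 0.5e-6  # cable series inductances, h (illustrative)
c2hg = 50e-12            # c2 + chg, f (illustrative)
c3lg = 50e-12            # c3 + clg, f (illustrative)
c_n = 10e-12             # 10 pf standard

eps_rn = w ** 2 * (l1 * (c2hg + c_n) + l2 * (c3lg + c_n))
print(f"eps_rn = {eps_rn:.2e}")  # a few parts in 1e9 for these values
```

even though the correction is only parts in 10^9 here, it is not negligible against the parts-in-10^8 target of the bridge, which is why both (7) and (8) are applied to every result.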
5. ivd design and construction

a schematic diagram showing the arrangement of cores, windings and guarding of the main ivd is shown in figure 6.

figure 6. schematic diagram showing the arrangement of cores, windings, and guarding (the grounding conductors are not shown for clarity).

the ivd operates with up to 200 v (at 1 khz) across the 0 and m10 taps. therefore the first stage core comprises two plastic-encased supermalloy toroidal cores with 76.2 mm inner dia. x 101.6 mm outer dia. x 25.4 mm height and 0.0254 mm tape thickness (magnetics 01500441f) placed one on top of the other. (note: the previous design described in [8] used aluminium-encased cores). a uniform one-layer 220-turn (with 0.57 mm dia. magnet wire) magnetizing winding covers the entire first stage core. this winding has three main taps for external connection whose leads are coloured for easy identification, namely 0 (red), m10 (blue) and m11 (black), which are equivalent to ratios 0, 1.0 and 1.1, respectively. the guard source for the divider winding is obtained by tapping the magnetizing winding at appropriate points (yellow). the guard source taps are equally spaced around the core (arranged in a star configuration, see figure 7). the guard voltages are 0.05 v, 0.15 v, 0.25 v, ..., 1.05 v (assuming 1 v input). the magnetizing winding is covered with kapton tape (an insulating tape with good electrical, mechanical and thermal properties) and enclosed in a soldered toroidal shield made of 0.1 mm copper foil (figure 7). care should be taken in the shield overlaps to avoid a shorted turn. the shield is set at mid-tap potential, i.e. connected to the magnetizing winding tap corresponding to the 0.55 ratio (see figure 6). the cross-sectional area of the second stage core was chosen so that it has a magnetic permeance of about 1/3 of that of the first stage core. therefore two plastic-encased supermalloy toroidal cores with dimensions 76.2 mm inner dia. x 95.3 mm outer dia. x 9.5 mm height and 0.0254 mm tape thickness (magnetics 01500671f) were chosen. one core is placed on top of the shielded magnetizing winding and the other below it (figure 8). this symmetric arrangement was chosen to minimize leakage fields. (note: both the first and the second stage cores had been enclosed together within the toroidal shield in the design described in [8]). the ivd ratio winding consists of a 20-turn rope of 11 coaxial cables wound around the core assembly, whose ends are connected in series to create a 220-turn winding (figure 9). (note: here lies another difference from the previous design: a rope arrangement of the cables had not been employed in [8]). the tapped terminals 1 ... 10 are brought out from these interconnections (see figure 6). the centre conductor is tapped at ten points, which are equivalent to ratios 0.1, 0.2, 0.3 ... 1. the outer shield of the cable is cut (and insulated) at each tap. in this type of construction each section has the same resistance and is equally well coupled to all other sections, thus balancing the mutual and leakage impedances. before arranging the cables in a rope, a coloured wire is soldered as accurately as possible to the middle of the outer conductor of each coaxial cable (figure 10). the whole assembly is then insulated with glass-fiber tape and the separate wires are connected to the corresponding magnetizing winding taps (figure 11). coloured wires are used to ensure that each wire is being connected to the correct guard tap. we confirmed that mistakes here are common and always result in large ivd ratio errors. this guarding method is an attempt to equalize the admittances between each half of each guard and nearby conductors.
figure 7. shielded magnetizing winding with guard source taps.
figure 8. shielded magnetizing winding with second stage cores (a second stage core is located below the assembly and cannot be seen).
figure 9. ratio winding (rope arrangement).
figure 10. a wire soldered to the middle of the outer conductor.

the coaxial cable is gore gsc 6591 (conductor size: 19 x 0.127 mm, awg 24 (19/36), 0.24 mm²; conductor material: cuag, 78.5 mΩ/m; dielectric diameter: 0.84 mm; dielectric material: ptfe; screen details: braided screen from cuag awg 38 (1); jacket material: 0.15 mm ptfe; nominal diameter: 1.6 mm). the whole assembly is then insulated with kapton tape, isolated from mechanical vibrations (with 25 mm extruded polystyrene slabs fixed by two opposing insulating boards kept firm with four aluminium rods) and enclosed in a metal box with holes on its top panel for later penetration of the coaxial output sockets (figure 12). the method of bringing out the taps to the coaxial connectors requires consideration if the highest possible accuracy is to be attained. the coaxial output sockets (bpo connectors) are fixed to a rigid insulating board placed above the ivd assembly (figure 13). they are insulated from the metal box (figure 14). stout conductors (1.3 mm dia. magnet wire) are taken from the socket outers for the ivd ratio winding, routing them close to the short tap connections to a point well within the volume of the box where they are joined together (see figure 12). this point is grounded to the metal box through an output socket. the electrical resistance between each socket outer and the metal box was measured to be less than 0.015 Ω. the same arrangement for bringing out the taps to the coaxial connectors is adopted for the ivd magnetizing winding leads. the joint point of the stout conductors is also grounded to the metal box through another output socket. the resistance between each socket outer and the metal box was measured to be less than 0.009 Ω. the bpo connectors for the magnetizing winding are the three ones located on the right side of figure 13. see [4] for more details on this grounding arrangement. the metal box is made of 1.5 mm chromium-coated carbon steel and the box inner surface is covered with 0.79 mm mumetal sheets. the box panel has the following outputs: (a) divider taps: 0 (twofold), 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 (twofold), and 11 (twofold); (b) magnetizing winding taps: 0, m10 and m11; and (c) two independent ground connections (both the magnetizing winding and the ratio winding stout conductor joint points were always grounded to the metal box in the ivd calibration and capacitance bridge measurements, see figure 2 for grounding connections). the final assembly of the main ivd is shown in figure 14. the short-circuiting plugs of the two independent ground connections are not shown in the figure.

figure 11. separate wires connected to the magnetizing winding taps.
figure 12. the whole assembly is isolated from mechanical vibrations.
figure 13. coaxial output sockets (bpo connectors).
figure 14. main ivd final assembly.

6. ivd calibration

as remarked in section 2, one needs to know accurately the in-phase ratio error of the main ivd for the nominal ratio d = 1/11. the ratio error of the new main ivd was therefore calibrated at inmetro. the old ivd was replaced by the new one in the coaxial capacitance bridge and the bridge was used to
compare 10 pf and 100 pf fused-silica standard capacitors which had been previously calibrated by the bureau international des poids et mesures (bipm) with a relative uncertainty of 4 parts in 10^8. with this method one can only measure the in-phase component of the complex ratio error. the calibration was performed at 110 v (across the 0 and m11 taps) and at both 1 khz and 1.592 khz. the dividing factors α and β in (5) can be expressed as

$\alpha = \bar{\alpha} + r_\alpha\,\delta_\alpha, \qquad \beta = \bar{\beta} + r_\beta\,\delta_\beta$ , (9)

where $\bar{\alpha}$ and $\bar{\beta}$ are the values, $r_\alpha$ and $r_\beta$ are the resolutions, and $\delta_\alpha$ and $\delta_\beta$ are the random deviations of the α and β readings, respectively. neglecting the contribution from k″ in (5), and rearranging terms, the ivd ratio can be expressed as

$t = \frac{a\,c_n}{c_n + c_x}$ , (10)

where

$a = 1 + \frac{\alpha}{\gamma}\frac{c}{c_n}$ . (11)

the values of cn and cx in (10) include their drifts with time and the cable parasitic contributions, that is

$c_n = \bar{c}_n\,(1 + \varepsilon_{rn}) + d_n, \qquad c_x = \bar{c}_x\,(1 + \varepsilon_{rx}) + d_x$ , (12)

where $\bar{c}_n$ and $\bar{c}_x$ are the capacitance values reported in the calibration certificates, εrn and εrx are the relative corrections of the cables connected to each capacitor according to (7) and (8), and dn and dx are the capacitor drifts in the period between the date the capacitor calibration certificates were issued by bipm and the date of the ivd calibration. we regard both the standard capacitor and its associated cable as a whole standard. the value of cn is therefore inserted in the spreadsheet as cn(1 + εrn) and the value of cx is inserted in the spreadsheet as cx(1 - εrx). since the main uncertainty contributions to the cable error are associated with the parasitic inductances and capacitances of the capacitor and cable, we may neglect the uncertainty contribution associated with the capacitor values and write

$c_n = \bar{c}_n + \delta c_n + d_n, \qquad c_x = \bar{c}_x + \delta c_x + d_x$ , (13)

where δcn and δcx are the absolute corrections of the cable errors (in f), that is

$\delta c_n = c_n\,\omega^2\left[l_1\,(c_{2hg} + c_n) + l_2\,(c_{3lg} + c_n)\right]$ , (14a)

$\delta c_x = c_x\,\omega^2\left[l_3\,(c'_{6hg} + c_x) + l_4\,(c'_{7lg} + c_x)\right]$ . (14b)

here cn = 10 pf and cx = 100 pf, and it is assumed that their contributions to the uncertainty associated with δcn and δcx are negligible. the in-phase component of the main ivd ratio error is estimated from

$k' = t - d$ . (15)

by taking into account that the variables $\bar{c}_n$ and $\bar{c}_x$ are strongly correlated, as the capacitors were calibrated at bipm with the same system, that is

$u(\bar{c}_n, \bar{c}_x) \neq 0$ , (16)

the squared standard uncertainty associated with that estimate is then [14]

$u^2(k') = u^2(t) = \frac{a^2 c_x^2\,u^2(c_n)}{(c_n + c_x)^4} + \frac{a^2 c_n^2\,u^2(c_x)}{(c_n + c_x)^4} + \frac{c_n^2\,u^2(a)}{(c_n + c_x)^2} - \frac{2\,a^2 c_n c_x\,u(c_n, c_x)}{(c_n + c_x)^4}$ , (17)

where

$u^2(c_n) = u^2(\bar{c}_n) + u^2(\delta c_n) + u^2(d_n)$ ,
$u^2(c_x) = u^2(\bar{c}_x) + u^2(\delta c_x) + u^2(d_x)$ ,
$u^2(\delta c_n) = c_n^2\,\omega^4\left[(c_{2hg} + c_n)^2 u^2(l_1) + l_1^2\,u^2(c_{2hg}) + (c_{3lg} + c_n)^2 u^2(l_2) + l_2^2\,u^2(c_{3lg})\right]$ ,
$u^2(\delta c_x) = c_x^2\,\omega^4\left[(c'_{6hg} + c_x)^2 u^2(l_3) + l_3^2\,u^2(c'_{6hg}) + (c'_{7lg} + c_x)^2 u^2(l_4) + l_4^2\,u^2(c'_{7lg})\right]$ ,
$u^2(a) = \left(\frac{c}{\gamma c_n}\right)^2 u^2(\alpha) + \left(\frac{\alpha}{\gamma c_n}\right)^2 u^2(c) + \left(\frac{\alpha c}{\gamma^2 c_n}\right)^2 u^2(\gamma) + \left(\frac{\alpha c}{\gamma c_n^2}\right)^2 u^2(c_n)$ ,
$u^2(\alpha) = u^2(\bar{\alpha}) + r_\alpha^2\,u^2(\delta_\alpha)$ ,
$u(c_n, c_x) = u(\bar{c}_n)\,u(\bar{c}_x)$ , (18)

since a correlation coefficient $r(\bar{c}_n, \bar{c}_x) = 1$ is assumed here.
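equations (10), (11), (15) and (17) amount to a short propagation-of-uncertainty routine. the sketch below mirrors the expressions as reconstructed above, with invented capacitor values, dial reading and input uncertainties; the correlated term for $\bar{c}_n$ and $\bar{c}_x$ enters with ρ = 1, as in (16) and (18).

```python
# in-phase 10:1 ratio error, eq. (15), and its uncertainty, eq. (17) (illustrative).
import math

c_n, c_x = 10.000002, 99.999987         # pf, corrected capacitor values (invented)
alpha, gamma, c_inj = 0.02, 100.0, 1.0  # dial reading, predivider ratio, pf
d = 1.0 / 11.0

a = 1 + alpha * c_inj / (gamma * c_n)   # eq. (11)
t = a * c_n / (c_n + c_x)               # eq. (10)
k_prime = t - d                         # eq. (15)

u_cn, u_cx, u_a = 4e-6, 8e-5, 2e-9      # pf, pf, dimensionless (invented)
s = c_n + c_x
u2 = (a**2 * c_x**2 * u_cn**2 / s**4
      + a**2 * c_n**2 * u_cx**2 / s**4
      + c_n**2 * u_a**2 / s**2
      - 2 * a**2 * c_n * c_x * (u_cn * u_cx) / s**4)  # rho = 1, eqs. (16), (18)
print(f"k' = {k_prime:.2e}, u(k') = {math.sqrt(u2):.1e}")
```

the negative covariance term shows why calibrating both standards with the same bipm system helps: the correlated part of their uncertainties largely cancels in the ratio.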
the above uncertainty contributions were evaluated according to [14]. the in-phase 10:1 ratio errors k′ and their expanded uncertainties u(k′), evaluated respectively with (15) and (17), are listed in table 4. table 5 lists the in-phase and quadrature 1:1 ratio errors evaluated from (4). the old ivd, whose construction was reported in [8], was calibrated again using the same method described here. the reader should compare the results listed in tables 6 and 7 with those for the new ivd. the ratio errors of the new ivd are indeed much smaller than those of the old one.

table 4. in-phase 10:1 ratio errors (new ivd).
frequency (hz) | k′ | u(k′)
1000 | 3×10⁻⁹ | 23×10⁻⁹
1592 | 12×10⁻⁹ | 23×10⁻⁹

table 5. in-phase and quadrature 1:1 ratio errors (new ivd).
frequency (hz) | k′ | k″
1592 | 23×10⁻⁹ | 9×10⁻⁹

table 6. in-phase 10:1 ratio errors (old ivd).
frequency (hz) | k′ | u(k′)
1000 | 124×10⁻⁹ | 23×10⁻⁹
1592 | 164×10⁻⁹ | 23×10⁻⁹

table 7. in-phase and quadrature 1:1 ratio errors (old ivd).
frequency (hz) | k′ | k″
1592 | 280×10⁻⁹ | 23×10⁻⁹
table 8 shows the uncertainty budget for the calibration at 1:1 ratio and at 1.592 khz of a stable 10 pf fused-silica standard capacitor (cx) from a similar capacitor (cn) traceable to bipm. the combined relative standard uncertainty associated with the difference between the capacitance values of the standards being compared (cx − cn) is less than one part in 10⁸. the uncertainty contribution due to the capacitance bridge is therefore negligible compared to other contributions, such as the relative uncertainty reported in the bipm certificate, the reference standard drift, and the uncertainty associated with the recommended value of rk-90 (von klitzing constant). the major uncertainty contributions are now those associated with the predivider ratio and with the stability of the readings.

table 8. uncertainty budget at 1:1 ratio (10 pf – 1.592 khz).
quantity             standard uncertainty   sensitivity coefficient   eval. type
cn (1)               4.0 × 10⁻⁷ pf          1                         b
–                    3.16 × 10⁻⁶            1.00 × 10⁻² pf            a
–                    7.48 × 10⁻⁷            8.00 × 10⁻⁶ pf            a
c                    4 × 10⁻⁸ pf            6.56 × 10⁻⁵               b
c                    0.0008 pf              8.00 × 10⁻⁷               b
–                    0.1                    6.63 × 10⁻⁷ pf            b
εr (2)               1 × 10⁻⁸ pf            1                         b
cx − cn (3)          7 × 10⁻⁸ pf            1                         comb.
error (4)            7 × 10⁻⁹ pf            1                         b
cx (5)               7 × 10⁻⁸ pf            1                         comb.
rk-90 (6)            1.00 × 10⁻⁶ pf         1                         b
biannual drift (7)   1.00 × 10⁻⁶ pf         1                         a
cx (8)               1.5 × 10⁻⁶ pf                                    comb.

(1) relative combined standard uncertainty reported in the bipm calibration certificate for cn (a 10 pf capacitor). (2) uncertainty contribution associated with the correction for the cable errors. (3) combined standard uncertainty associated with the difference between the capacitances of the standards being compared (see text). (4) systematic error that is detected when comparing several standards for consistency in the results (see text). (5) combined standard uncertainty associated with cx without taking into account the uncertainty contributions associated with rk-90 and the reference standard biannual drift; cx is a 10 pf capacitor. (6) standard uncertainty associated with the recommended value of rk-90. (7) biannual drift evaluated by fitting a straight line to the data reported in the bipm calibration certificates of the last six years. (8) combined standard uncertainty associated with cx taking into account all known uncertainty contributions.

table 9 shows the uncertainty budget for the calibration at 10:1 ratio and at 1.592 khz of a stable 100 pf fused-silica standard capacitor (cx) from a similar 10 pf capacitor (cn) traceable to bipm. the combined relative standard uncertainty associated with the calibration result (without the uncertainty contributions associated with rk-90 and with the reference standard drift) is 1.5 × 10⁻⁷. the major contribution to the overall bridge uncertainty at the 10:1 ratio is that associated with the in-phase component of the ivd ratio error.

table 9. uncertainty budget at 10:1 ratio (100 pf – 1.592 khz).
quantity             standard uncertainty   sensitivity coefficient   eval. type
cn (1)               4.0 × 10⁻⁷ pf          10                        b
–                    1.5 × 10⁻⁶             1.00 × 10⁻¹ pf            a
k′                   1.2 × 10⁻⁸             1.21 × 10³ pf             b
εr (2)               1 × 10⁻⁸ pf            1                         b
cx (3)               1.51 × 10⁻⁵ pf         1                         comb.
rk-90 (4)            1.00 × 10⁻⁵ pf         1                         b
biannual drift (5)   1.00 × 10⁻⁵ pf         1                         a
cx (6)               2.1 × 10⁻⁵ pf                                    comb.

(1) relative combined standard uncertainty reported in the bipm calibration certificate for cn (a 10 pf capacitor). (2) uncertainty contribution associated with the correction for the cable errors. (3) combined standard uncertainty associated with cx without taking into account the uncertainty contributions associated with rk-90 and the reference standard biannual drift; cx is a 100 pf capacitor. (4) standard uncertainty associated with the recommended value of rk-90. (5) biannual drift evaluated by fitting a straight line to the data reported in the bipm calibration certificates of the last six years. (6) combined standard uncertainty associated with cx taking into account all known uncertainty contributions.
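the root-sum-square combination used in tables 8 and 9 can be cross-checked with a few lines of code; the sketch below reproduces the final cx entry of table 8 from its listed contributions:

import math

# (standard uncertainty in pf, sensitivity coefficient), from table 8
contributions = {
    "cx - cn":        (7e-8, 1),
    "rk-90":          (1.00e-6, 1),
    "biannual drift": (1.00e-6, 1),
}
u_c = math.sqrt(sum((u * c) ** 2 for u, c in contributions.values()))
print(f"combined standard uncertainty: {u_c:.2e} pF")  # ~1.5e-6 pf, as in table 8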
as mentioned in the introduction, the main ivd to be used in coaxial ratio bridges should be constructed so as to provide an overall bridge uncertainty at the 1:1 and 10:1 ratios of a few parts in 10⁸. in order to achieve this for the 10:1 ratio, it is necessary to reduce the uncertainty associated with the in-phase component of the ivd ratio error by one order of magnitude. this demands the construction of a special system for calibrating the ivd 10:1 ratio with an uncertainty of a few parts in 10⁹. note: quantities not listed in tables 8 and 9 were found to have negligible uncertainty contributions.

8. conclusions

the constructional details of the new main two-stage ivd for coaxial bridge applications, recently built at inmetro, were presented along with the design changes implemented. the design changes are: (a) plastic-encased cores are used as first-stage cores, (b) only the magnetizing winding and the first-stage cores are copper shielded, and (c) a rope arrangement is employed for the coaxial cables in the ratio winding. the method used to calibrate the ivd ratio error was also described in detail. the in-phase 10:1 ratio error was determined from known values of two stable decadic standard capacitors. it was confirmed that the ratio errors of the new ivd are much smaller than those reported previously for another ivd. upon installing the new ivd in inmetro's coaxial capacitance bridge, the major contributions to the overall uncertainty of the 1:1 ratio bridge are those associated with the predivider ratio and with the stability of the readings. the major contribution at the 10:1 ratio is that associated with the in-phase component of the ivd ratio error. a special system for calibrating the ivd 10:1 ratio with an uncertainty of a few parts in 10⁹ is required if one needs to achieve an overall uncertainty of a few parts in 10⁸ with the 10:1 ratio bridge.

acknowledgement

g. k. thanks l. m. ogino and e. afonso for providing the resources for this project. g. k. also wishes to thank all inmetro/sengi personnel and the late a. etchebehere for the construction of the ivd metal box. he also thanks r. p. miloski for soldering the ivd coaxial cables and r. t. b. vasconcellos for previous collaboration in the construction and initial operation of the capacitance bridge.

references

[1] j. melcher, j. schurr, "modular system for the calibration of capacitance standards based on the quantum hall effect", proc. of v semetro, 2002, apr. 9-12, rio de janeiro, brazil.
[2] j. melcher, j. schurr, k. pierz et al., "the european acqhe project: modular system for the calibration of capacitance standards based on the quantum hall effect", ieee trans. instrum. meas., apr. 2003, 52, no. 2, pp. 563-568.
[3] physikalisch-technische bundesanstalt (ptb), j. schurr, j. melcher, "modular system for the calibration of capacitance standards based on the quantum hall effect", workshop and project documentation, january 2002, braunschweig, germany. available on cd-rom.
[4] b. p. kibble, g. h. rayner, coaxial ac bridges, adam hilger ltd., 1984.
[5] d. homan, t. zapf, "two stage, guarded inductive voltage divider for use at 100 khz", isa trans., 1970, 9, no. 3, pp. 201-209.
[6] g. a. kyriazis, r. t. b. vasconcellos, l. m. ogino, j. melcher, j. a. moreno, "design and construction of a two terminal-pair coaxial capacitance bridge", proc. of vi semetro, 2005, sept. 21-23, rio de janeiro, brazil, pp. 57-62.
[7] g. a. kyriazis, r. t. b. vasconcellos, l. m. ogino, j. melcher, j. a. moreno, "a two terminal-pair coaxial capacitance bridge constructed at inmetro", cpem digest, 2006, july 9-14, turin, italy, pp. 522-523.
[8] g. a. kyriazis, j. melcher, j. a. moreno, "a two-stage, guarded inductive voltage divider for use in coaxial bridges employed in the derivation of the capacitance unit from quantized hall resistance", cpem digest, 2004, june 27 - july 2, london, uk, pp. 131-132.
[9] j. a. moreno, r. hanke, "maintenance and development of the capacitance unit at cenam", cpem digest, 2000, may 14-19, sydney, australia, pp. 218-219.
[10] a. koffman, n. f. zhang, y. wang, s. shields, b. wood, k. kochav, j. a. moreno, h. sanchez, b. i. castro, m. cazabat, l. m. ogino, g. a. kyriazis, r. t. b. vasconcellos, d. slomovitz, d. izquierdo, c. faverio, "sim.em-k4 capacitance comparison summary", cpem digest, 2012, july 1-6, washington, dc, usa, pp. 398-399.
[11] g. a. kyriazis, r. t. b. vasconcellos, "unequalized currents in two terminal-pair coaxial capacitance bridges", proc. xviii imeko world congress, 2006, sept. 17-22, rio de janeiro, brazil.
[12] j. schurr, j. melcher, "unequalized currents in coaxial ac bridges", ieee trans. instrum. meas., jun. 2004, 53, no. 3, pp. 807-811.
[13] g. a. kyriazis, l. m. ogino, e. afonso, j. a. neves, "an isolation transformer for coaxial bridges used in the derivation of the capacitance unit from the ac quantized hall resistance", mapan jmsi, jul.-sept. 2003, 18, no. 3, pp. 137-144.
[14] bipm, iec, ifcc, ilac, iso, iupac, iupap and oiml, evaluation of measurement data - guide to the expression of uncertainty in measurement, gum 1995 with minor corrections, joint committee for guides in metrology, jcgm 100, 2008, http://www.bipm.org/utils/common/documents/jcgm/jcgm_100_2008_e.pdf (accessed 29.10.11).

acta imeko, december 2014, volume 3, number 4, 26-31, www.imeko.org

how geometric misalignments can affect the accuracy of measurements by a novel configuration of self-tracking ldv

giuseppe dinardo, laura fabbiano, gaetano vacca
dept. of mechanics, mathematics and management (dmmm), politecnico di bari, bari, italy

section: research paper
keywords: self-tracking ldv, accuracy analysis, vibration, rotating object, geometric misalignment
citation: g. dinardo, l. fabbiano, g. vacca, how geometric misalignments can affect the accuracy of measurements by a novel configuration of self-tracking ldv, acta imeko, vol. 3, no. 4, article 6, december 2014, identifier: imeko-acta-03 (2014)-04-06
editor: paolo carbone, university of perugia
received october 10th, 2013; in final form march 24th, 2014; published december 2014
copyright: © 2014 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: (none reported)
corresponding author: laura fabbiano

abstract: the aim of the paper is to evaluate the reliability of a novel configuration of a laser doppler vibrometer (ldv) for the measurement of vibrations affecting rotating components of machines. an analysis of the errors due to the possible static misalignments of the mirrors used in an experimental apparatus is carried out. this instrument was set up for the estimation of the out-of-plane vibrations of moving (rotating) objects, in order to better characterize the self-tracking technique employed with a 1d single-point ldv and its measurements. the accuracy of the measurements is mostly linked to the interaction between environment and instrumentation.

1. introduction

an accurate evaluation of the dynamic stresses due to vibrations affecting the components of rotating machines is crucial for both design and diagnostics. the accurate evaluation and determination of these dynamic stresses is indeed of considerable importance in the design, testing and running of turbomachinery, as the measurement of these stresses becomes critical in the case of moving, and especially rotating, components.
the reason for this is the difficulty of providing a suitable set-up capable of acquiring data from sensors placed in the same environment and of transferring them to the processing unit. in order to obtain more precise measurements, some innovative nonintrusive techniques (based on the use of appropriately installed noncontact sensors) have been developed. nowadays, in industry, optical or electromagnetic sensors are used to characterize the vibrational state of the rotating parts of machines, ensuring nonintrusive measurements [1]. one of the most suitable techniques is based on ldv technology: a laser beam, characterized by spatial and temporal coherence, impacts on a vibrating surface; the reflected light wave impinging on the laser receiver then undergoes a frequency shift (doppler effect), allowing the evaluation of the vibrational velocity of the object or structure under examination. the inherent difficulty of the ldv technique in the measurement of moving and vibrating parts, such as turbine blades, lies in the apparent impossibility of tracking the object from which the data required to assess the nature and extent of the vibrations have to be extracted. this is why both industrial and academic researchers focus their efforts on developing new tracking systems based on laser technology. there is a vast literature on this subject suggesting different techniques for tracking moving objects by adding optics to the laser equipment [2], [3], [4]. more specifically, there are several solutions involving the adoption of a scanning laser doppler vibrometer [5], [6], [7]. nevertheless, the analysis of the errors in the measurement of vibrations by means of these configurations remains very
complex and little explored. one of those techniques has been proposed by the authors in [8], using a 1d single-point ldv apparatus. additional equipment consisting of optical mirrors installed in an appropriate way is required in order to track a given point lying on the rotating structure to be analysed. other authors have analysed the geometric errors due to misalignments in the optical core system only of laser tracking configurations [9], [10], [11]. this paper deals with the study of some errors caused by the employment of additional external optics, with the aim of better evaluating the relative uncertainty of the measurement. a qualitative analysis of the uncertainties of such a configuration has therefore been performed. these considerations are related to the static misalignments which may exist between the laser beam, the rotor axis moving the test beam, and the axis of the equipment supporting the mirrors.

in this context, other possible misalignments, the so-called dynamic ones due to dynamic effects induced by the movement of the object itself, are not taken into consideration.

2. experimental apparatus

the experimental configuration of reference for the present analysis is the one presented in [8], succinctly reported here in its essential components in figures 1 and 3. figure 1 shows the mutual position between the ldv (pdv 100 polytec) and the fold mirrors structure. the fold mirrors' support is free to rotate around its axis so that their angular position can vary. figure 2 shows the configuration of the experimental apparatus. in figure 3 the laser beam paths are simulated for easy understanding. the fold mirrors and the vertex mirror (rod mirror), placed on the static support and on the rotor respectively, are all 45° mirrors (figure 4).

figure 1. mutual position between ldv and fold mirrors structure.
figure 2. experimental apparatus.
figure 3. laser beam paths.
figure 4. technical image of the right angle mirror (fold mirror) [12] and of the 45° rod mirror (vertex mirror) embedded in the rotor [13].

3. misalignment errors analysis

in principle, the ldv technique (in the self-tracking configuration) makes the measurement of vibrations possible at every angular velocity. nevertheless, the system can be particularly sensitive to possible static misalignments among the laser beam, the vertex mirror axis, the axis of the fold mirrors support and the rotor axis. it is immediately evident how those static misalignments cause a displacement of the measurement point on the rotating blade. as the purpose of the arranged set-up is to measure the out-of-plane vibration of a pre-set blade point while that point is rotating, a perfect alignment between each pair of the setup components is required. if the alignment is not accomplished in all four angular positions of the blade, the measurement point (where the laser spot hits) changes, and thus the measurements relative to the four angular positions are not representative of the vibration of the same blade point under analysis. the measurement errors are mainly due to the fact that those misalignments provoke laser beam reflections that are not orthogonal to the incident surface. thus, the vibration sensed is not the desired out-of-plane one, and this circumstance causes additional spectral components superimposed on the expected measurement spectrum, i.e. harmonic components that are not representative of the original vibration velocity signal. these complications are then exacerbated by dynamic effects due to the revolution, to dynamic imbalances and so on. although the absence of contact between the blade and the sensor is an advantage (there are no persistent and systematic loading effects which would influence the dynamic behaviour of the rotating structure), another source of uncertainty has to be mentioned: the vibrometer also senses the vibration component due to the rigid body motion of the rotating structure (linked to the structural imperfections of the governor), which is superimposed on the effective and real vibration motion of the rotating object due to the fluid-dynamic phenomena affecting it. other sources of uncertainty are certainly due to imperfections and misalignments of the optical components included in the core of the ldv head and to the manipulation of the output analogue signal.
4. ldv errors analysis

the description of the frequency shift due to the doppler effect, which provides the measure sensed by the laser beam incident onto diffusive and reflective surfaces, has been discussed in detail in [14] and is synthetically proposed again here for clarity.

4.1. diffusive surface

several tests on diffusive surfaces have been performed in order to evaluate the amount of the errors affecting the measurements of the vibration velocity when misalignments between the laser beam and the surface under analysis are forced. these imposed misalignments have the main purpose of simulating misalignments which could occur in practice due to improper installation of the components of the experimental set-up. by theoretical considerations, these misalignments influence the frequency shift affecting the laser beam (once it is reflected by the incident surface) by a specific and fixed amount. thus, these static misalignments (due to imprecise installation of the components of the experimental arrangement) can be treated as systematic errors. the frequency shift of the laser beam reflected by a diffusive surface is expressed by the well-known equation

$\Delta f \cong \frac{2\,v(t)}{\lambda}\,,$ (1)

where λ is the wavelength of the laser beam and v(t) is the (time-dependent) component of the vibration velocity vector along the direction of the incident laser beam. from equation (1) it is clear that misalignments or improper installation of the components of the experimental arrangement influence the measured vibration velocity of the surface under analysis. an analysis and characterization of the errors in measuring the blade vibration velocity due to the cited effects has already been carried out in [14]. by stressing the rotating blade with a simple periodic vibration in different directions (the angles between the laser beam and the vibration velocity vector of the diffusive surface being forced), it has been shown how the induced vibrations affect its motion. the measured velocity is the component of the vibration velocity vector along the laser beam direction. when the vibration velocity impressed onto the blade is directed along the direction of the incident laser beam, the vibration velocity is directly given by equation (1). in the cases where the angle between the velocity vector and the laser beam is less than 90°, a vibrational velocity is measured with the same frequency as in the first case but with an amplitude reduced in proportion to the increasing angle. if that angle is exactly 90°, no component of the velocity vector along the laser beam direction is detected, except for noise: the output indeed returns a non-null, though very small, signal because of the noise floor, speckle noise and other causes, mainly given by interfering phenomena. from this analysis on diffusive surfaces, it has been possible to infer the following deductions, illustrated numerically in the sketch after this list:
• only vibrations with a velocity component along the laser beam direction give a significant output signal;
• vibrations orthogonal to the laser beam produce an output which is characterized only by noise, speckle noise and other interfering effects that are difficult to characterize.
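the fall-off of the sensed velocity with the misalignment angle can be sketched with equation (1); in the fragment below the wavelength (a typical he-ne value) and the peak velocity are illustrative assumptions, not data from the paper:

import numpy as np

lam = 633e-9     # laser wavelength in m (typical he-ne value, assumed)
v_peak = 1e-3    # peak vibration velocity in m/s (illustrative)

for angle_deg in (0, 30, 60, 89, 90):
    # component of the vibration velocity along the beam direction
    v_along_beam = v_peak * np.cos(np.radians(angle_deg))
    delta_f = 2 * v_along_beam / lam          # doppler shift, eq. (1)
    print(f"{angle_deg:3d} deg -> {delta_f:10.1f} Hz")

at 90° the computed shift is zero: only noise would remain in the output, in agreement with the second deduction above.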
4.2. reflecting surface

since the experimental set-up includes optical components (the stationary fold mirror system and the rotating vertex mirror), an analysis of how reflecting surfaces influence the frequency shift (due to the doppler effect) experienced by the laser beam is required. the fold mirrors do not induce a frequency shift, since they are stationary. on the contrary, a moving mirror induces a frequency shift expressed by the formula [14]

$\Delta f = \frac{\vec{v} \cdot (\hat{s}_{\mathrm{i}} - \hat{s}_{\mathrm{r}})}{\lambda}\,,$ (2)

where $\vec{v}$ is the velocity vector of the laser spot point on the mirror surface and $\hat{s}_{\mathrm{i}}$ and $\hat{s}_{\mathrm{r}}$ are the directions of the incident and reflected beams, respectively (figure 5).

figure 5. general configuration for incident and reflected laser beams.

in the arrangement of the self-tracking ldv, the only component giving a frequency shift is the vertex mirror, whose reflecting surface is inclined at 45°. it is installed at the centre of the hub supporting the blades. all the points on the reflective surface have the same angular velocity (equal to the angular velocity of the rotor). in addition, each point on the reflective surface has a velocity vector which is tangential to the circular trajectory it describes. the point on the reflecting surface illuminated by the light spot has a velocity vector which is constant in time (provided the angular velocity of the rotor is constant and the laser beam is stationary; this is the case here, since the fold mirror reflecting the incident laser beam is stationary). since the reflecting surface is inclined, its normal vector is not stationary but rotates at the same angular velocity as the rotor, describing a cone or a truncated cone depending on the particular point chosen on the reflecting surface. these considerations have to be taken into account in order to quantify the frequency shift induced by the vertex mirror. the plane containing the laser beam incident onto the vertex mirror and the reflected one rotates at the same angular velocity as the rotor and is parallel to the rotation axis. the projection of the velocity of the point of the reflective surface illuminated by the laser spot onto that plane varies with a sinusoidal law, as the following equation states:

$v_{\mathrm{p}} = |\vec{v}| \cos\theta\,,$ (3)

where $\vec{v}$ is the velocity vector of the point illuminated by the laser spot, $v_{\mathrm{p}}$ its projection onto the plane of the incident and reflected beams, and θ the angle between the velocity vector $\vec{v}$ and that plane. since the incident laser beam impinges on a non-stationary point, it is affected by a frequency shift. in order to avoid the frequency shift affecting the laser beam incident on the vertex mirror, a perfect alignment between the laser beam, the vertex mirror and the rotor axis is required. in this way the point illuminated by the laser spot coincides with the centre of rotation of the entire system and thus remains stationary. no frequency shift is then generated if the alignment is guaranteed.

5. further considerations on static misalignments

in this section, further considerations on the static misalignments between the laser beam, the rotor and the fold mirrors support are developed. these static misalignments are due to improper installation of the equipment of the described setup, leading to the evaluation of the vibrations of multiple points on the rotating test beam, since they cause a shift of the laser beam reflected by the fold mirrors (and impinging on the plane in which the rotating test beam lies). two types of static misalignments are described in this paper: so-called parallel shift misalignments and angular misalignments. the first occur whenever the axes of the considered components are shifted while remaining perfectly parallel, with no angular misalignments between them. the second contemplate the existence of an angular misalignment between the axes.
figure 6 represents the ideal case where a perfect alignment between the laser beam, the vertex mirror axis and the fold mirror structure is achieved. figure 7 also shows the geometric distances between the most significant points in the setup.

figure 6. perfect alignment between laser beam, vertex mirror axis and fold mirror structure.
figure 7. parallel shift misalignment.
figure 8. parallel shift between vertex mirror axis and fold mirrors support.

it can be noted that a parallel shift between the laser beam and the vertex mirror axis results in a displacement of the point detected (by the laser) on the test blade by an amount equal to the parallel misalignment between the laser beam and the vertex mirror axis. if we assume that the fold mirror structure is a cylindrical mirror with its internal reflective surface inclined by 45°, the laser beam reflected by this ideal mirror and impinging on the test blade plane would be a cylinder whose axis is displaced with respect to the rotor axis by the amount of the parallel shift. the maximum misalignment that can be tolerated while still tracking the effective impinging point of the laser beam onto the rotating blade is half the vertex mirror size; a larger error would cause the laser beam to miss the vertex mirror completely. analogous considerations apply if we consider a parallel shift between the fold mirrors support and the vertex mirror axis, or a parallel shift between the former and the laser beam itself, as shown in figure 8. the overall parallel displacement between the measuring point and the one obtained in the case of perfect alignment is given by the algebraic sum of the partial shifts occurring between the pairs laser beam / vertex mirror axis and vertex mirror axis / fold mirror support axis. another source of geometric uncertainty needs to be evaluated. as explained above, if the fold mirror were the internal surface (inclined by 45°) of a cone trunk, the reflected beam successively impinging on the blade plane would be a cylinder with its axis parallel to, and shifted from, the rotor axis. actually, the fold mirror complex is formed by four mirrors with plane reflective surfaces. the resulting reflected beam is thus discontinuous (occurring whenever the rotating laser beam reflected by the vertex mirror impinges on each fold mirror) and tracks segments instead of arcs (because of the planarity of the fold mirror surfaces). if the length of each fold mirror tended to zero, these segments would tend to portions of arcs. this approximation would allow us to state that at each of the four angular positions (where the fold mirrors are installed) the laser point on the test blade would track a fixed point, but (due to the parallel misalignments among laser beam, vertex mirror axis and fold mirror support axis) the point tracked at each angular position would change by an amount equal to the overall displacement between the aforementioned axes. angular misalignments are now discussed. figure 10 portrays this type of misalignment. in this case, the actual measuring point is displaced by an amount proportional to tan(φ), with φ the angular misalignment between the rotor axis and the laser beam, and proportional to the distance between the impinging point of the laser beam on the fold mirror and the rotor plane.
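a minimal geometric sketch of the combined effect under the model described above (parallel shifts add algebraically, and an angular misalignment φ contributes a term proportional to tan(φ) times the lever arm); all numerical values are illustrative assumptions:

import math

shift_laser_vertex = 0.2e-3   # m, parallel shift laser beam vs vertex mirror axis
shift_vertex_fold = -0.1e-3   # m, parallel shift vertex mirror vs fold mirror axis
phi = math.radians(0.1)       # angular misalignment laser beam vs rotor axis
lever_arm = 0.15              # m, distance fold-mirror impact point to rotor plane

parallel_error = shift_laser_vertex + shift_vertex_fold   # algebraic sum of shifts
angular_error = lever_arm * math.tan(phi)                 # tan(phi) contribution
print(f"total displacement of the measuring point: "
      f"{parallel_error + angular_error:.2e} m")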
also in this case, if the fold mirror were the internal surface (inclined by 45°) of a cone trunk, the reflected beam successively impinging on the blade plane would be a cylinder with its axis inclined at ±φ with respect to the vertex mirror axis. since the fold mirror complex is formed by four angularly displaced mirrors with plane reflective surfaces, the reflected beams consist of tracked segments on the blade plane, tracking more than one single point. this would be another source of error, negligible only in the case of infinitesimal fold mirror surface length. figure 9 shows an angular misalignment between the fold mirror system axis and that of the rotor. also in this case the same considerations as above can be made: the overall shift of the measuring point (with respect to the ideal one lying on the same point of the blade during its rotation) is given by the algebraic sum of the simpler shifts due to the stepwise angular misalignments between the several parts of the laser tracking system.

figure 9. angular misalignment between fold mirrors structure axis and that of the vertex mirror.
figure 10. angular misalignment between laser beam and rotor axis.

the analysed errors due to static misalignments imply an uncertainty in the effective tracked point on the rotating test beam. one would expect that at each reflection performed by each fold mirror the laser beam tracks the same point (although a relative uncertainty in which point is going to be tracked still persists). actually, due to a parallel shift (occurring in the case of parallel misalignments) and to an angular shift (occurring in the case of angular misalignments) of the portions of the hypothetical cylindrical surface (taking place at the reflection locations at each fold mirror, if the length of the reflective surface of the fold mirrors tended to zero), the point tracked by the laser beam on the rotating test beam is not the same.

6. conclusions

the considerations made in the paper hold for static misalignments only. if the proper alignment among laser beam, rotor axis, vertex mirror axis and fold mirrors support axis could be guaranteed, the laser beam generated by the ldv head would experience a frequency shift due only to the out-of-plane vibrations of the blade under analysis, caused by fluid-dynamic and rigid-body motion phenomena. as it is unlikely that such imperfections can be avoided, it is necessary to estimate at least the maximum error they involve. then, given the systematic nature of such errors, which cannot be evaluated exactly, their estimate could be incorporated into the global accuracy of the equipment by means of an evaluation of their maximum values. the rotation of the blades and other dynamic phenomena, such as dynamic unbalances of the rotor, might introduce dynamic misalignments which affect the overall frequency shift experienced by the laser beam. these so-called dynamic phenomena introduce random effects which are not predictable and are highly difficult to quantify. another phenomenon which could theoretically influence the measurements by the self-tracking ldv arrangement and its efficacy is speckle noise. this phenomenon occurs when a relative motion is present between the laser beam spot and the surface under test.
the phenomenon, due to micro reflections and refractions locally distributed on the surface because of its roughness, essentially results in a noise floor which is superimposed on the vibration velocity signal and, as such, can be appropriately filtered. compared with other similarly sophisticated techniques, such as interferometry, the ldv technique indeed minimizes this kind of error.

references

[1] i. ye. zablotskiy, a. yu. korostelev, l. b. sviblov, "contactless measuring of vibrations in the rotor blades of turbines", technical translation fdt-ht-23, 1974, pp. 673-74.
[2] w. k. kulczyk, q. v. davis, "laser doppler instrument for measurement of vibration of moving turbine blade", proceedings of the institution of electrical engineers, vol. 120, no. 9, 1973.
[3] m. zielinski, g. ziller, "noncontact vibration measurements on compressor rotor blades", meas. sci. technol., 11, 2000, pp. 847-856.
[4] r. a. lomenzo, a. j. barker, a. l. wicks, "laser vibrometry system for rotating bladed disks", in proc. of the 18th imac, 1999, pp. 277-282.
[5] h. dietzhausen, k. bendel, n. scelles, "tracking scanning laser doppler vibrometers: extending laser vibrometry to arbitrarily moving objects", in proc. of imac xxi, 2003.
[6] p. castellini, n. paone, "development of the tracking laser vibrometer: performance and uncertainty analysis", review of scientific instruments, vol. 71, 2000, pp. 4639-4647.
[7] m. tirabassi, s. j. rothberg, "scanning ldv using wedge prisms", optics and lasers in engineering, 47 (3-4), 2009, pp. 454-460.
[8] g. dinardo, l. fabbiano, g. vacca, "influences of geometric configuration on the analysis of uncertainties affecting a novel configuration of self-tracking ldv", in proc. of 12th imeko tc10, florence, italy, 2013, pp. 195-201.
[9] p. l. teoh, b. shirinzadeh, c. w. foong, g. alici, "the measurement uncertainties in the laser interferometry-based sensing and tracking technique", measurement, vol. 32, no. 2, 2002, pp. 135-150.
[10] k. umetsu, r. furutani, s. osawa, t. takatsuji, t. kurosawa, "geometric calibration of a coordinate measuring machine using a laser tracking system", measurement science and technology, 16(12), 2005, pp. 2466-2472.
[11] s. ibaraki, t. hata, a. matsubara, "a new formulation of laser step-diagonal measurement - two-dimensional case", precision engineering, vol. 33, no. 1, 2009, pp. 56-64.
[12] http://www.edmundoptics.com/optics/opticalmirrors/flat-mirrors/right-angle-mirrors/1930.
[13] http://www.edmundoptics.com/optics/prisms/specialtyprisms/45-degree-rod-lenses-mirrors/1413.
[14] r. a. lomenzo, "static misalignment effects in a self-tracking laser vibrometry system for rotating bladed disks", dissertation, blacksburg, usa, 1998.

editorial to selected papers from the 2022 imeko tc11&tc24 joint conference

acta imeko, issn: 2221-870x, june 2023, volume 12, number 2, 1-2

marija cundeva-blajer1, leonardo iannucci2
1 ss. cyril and methodius university in skopje (ukim), faculty of electrical engineering and information technologies (feeit), st. rugjer boskovik no. 18, 1000 skopje, republic of north macedonia
2 politecnico di torino, department of applied science and technology, corso duca degli abruzzi 24, 10129 torino, italy

section: editorial
citation: marija cundeva-blajer, leonardo iannucci, editorial to selected papers from the 2022 imeko tc11&tc24 joint conference, acta imeko, vol. 12,
no. 2, article 4, june 2023, identifier: imeko-acta-12 (2023)-02-04
received june 26, 2023; in final form june 30, 2023; published june 2023
copyright: this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
corresponding author: leonardo iannucci, e-mail: leonardo.iannucci@polito.it

dear readers,
this special issue collects the extended versions of some of the contributions presented at the 2022 imeko tc11&tc24 joint conference, held in dubrovnik (croatia) from the 17th to the 19th of october 2022. this international conference gathered experts from both industry and academia, covering different topics from the fields of 'measurement in testing, inspection and certification' (imeko tc11) and 'chemical measurements' (imeko tc24). considering the wide interdisciplinarity of the two technical committees, many topics and metrological issues were addressed by the conference participants. in the following, the published papers are presented individually.

in the technical note 'statements of conformity provided by laboratories', a. čop [1] examines the approach presented in ilac g8 and other pertinent literature about decision rules for statements of conformity to specifications or standards. this issue is of paramount importance for laboratories, which need to correctly take into account measurement uncertainty in order to properly reduce the risk of incorrect decisions.

the paper 'evaluating chemometric strategies and machine learning approaches for a miniaturized near-infrared spectrometer in plastic waste classification' by c. marchesi et al. [2] reports on a comparative study of different computational tools used to categorize a data set derived from near-infrared measurements. the results showed a very good performance of the multivariate methods (such as principal component analysis), which outperformed the investigated machine learning tools. these findings are of great interest for applications related to plastic waste sorting and polymer recycling.

the contribution 'metrology infrastructure for radon metrology at the environmental level' by a. röttger et al. [3] presents results on novel 226ra standard sources with a continuously controlled 222rn emanation rate, on radon chambers aimed at creating a reference radon atmosphere, and on a reference field for radon flux monitoring. the new infrastructure is capable of filling this gap in traceability, and the achieved results make possible new calibration services far beyond the state of the art.

the contribution 'decision-making on establishment of recalibration intervals of testing, inspection or certification measurement equipment by data science' by m. cundeva-blajer [4] is related to issues of decision-making in conformity assessment, especially in testing, inspection and certification (tic). the paper surveys options for the deployment of data science in tic decision-making processes, based on the complementary use of empirical measurements and data science, through a case study on determining the instrument calibration frequency; it presents a model to establish the recalibration interval with reduced risk in the final decision on the next re-calibration moment.
during the second large hadron collider long shutdown, the liquid argon calorimeter of the atlas experiment at cern was upgraded with new trigger readout electronics which provide digital information with higher granularity to the atlas trigger system, as presented in 'power distribution board: quality control, quality assurance and testing' by a. carbone et al. [5]. in particular, the new lar trigger digitizer boards (ltdb) process and digitize the 'super cells' (groups of readout calorimeter cells) and send the processed data to the back-end electronics. the power distribution board is a mezzanine board that provides the power distribution to the ltdb.

the paper 'developments of interlaboratory comparisons on pressure measurements in the philippines' by m. n. i. salazar [6] presents the results from interlaboratory comparisons in pressure measurement in the philippines. the initiative derived from an increasing demand for proving the competency of local calibration laboratories, and it was successfully implemented, performing two interlaboratory comparisons 6 years apart.

an important case study is then reported in the article 'the experience gained from implementing an iso 56000-based innovation management system' by t. gueorguiev [7]. the author presents the adoption of the iso 56000 series of standards at the university of ruse, in bulgaria. as a result, significant improvement in the innovation management system was achieved, and the main initiatives are outlined in the paper.

we hope you will enjoy your reading.

marija cundeva-blajer, leonardo iannucci
guest editors for the special issue

references
[1] a. čop, statements of conformity provided by laboratories, acta imeko, vol. 12 (2023) no. 2, pp. 1-3. doi: 10.21014/acta_imeko.v12i2.1468
[2] claudio marchesi, monika rani, stefania federici, matteo lancini, laura e. depero, evaluating chemometric strategies and machine learning approaches for a miniaturized near-infrared spectrometer in plastic waste classification, acta imeko, vol. 12 (2023) no. 2, pp. 1-7. doi: 10.21014/acta_imeko.v12i2.1531
[3] annette röttger, stefan röttger, daniel rábago, luis quindós, katarzyna woloszczuk, maciej norenberg, ileana radulescu, aurelian luca, metrology infrastructure for radon metrology at the environmental level, acta imeko, vol. 12 (2023) no. 2, pp. 1-7. doi: 10.21014/acta_imeko.v12i2.1440
[4] m. cundeva-blajer, decision-making on establishment of recalibration intervals of testing, inspection or certification measurement equipment by data science, acta imeko, vol. 12 (2023) no. 2, pp. 1-8. doi: 10.21014/acta_imeko.v12i2.1465
[5] antonio carbone, francesco tartarelli, massimo lazzaroni, stefano latorre, power distribution board: quality control, quality assurance and testing, acta imeko, vol. 12 (2023) no. 2, pp. 1-4. doi: 10.21014/acta_imeko.v12i2.1486
[6] mary ness i. salazar, developments of interlaboratory comparisons on pressure measurements in the philippines, acta imeko, vol. 12 (2023) no. 2, pp. 1-6. doi: 10.21014/acta_imeko.v12i2.1448
[7] tzvetelin gueorguiev, the experience gained from implementing an iso 56000-based innovation management system, acta imeko, vol. 12 (2023) no. 2, pp. 1-6.
doi: 10.21014/acta_imeko.v12i2.1461

acta imeko, issn: 2221-870x, december 2016, volume 5, number 4, 29-36

enhancement of flash memory endurance using short pulsed program/erase signals

philippe chiquet1, jérémy postel-pellerin1, célia tuninetti2, sarra souiki-figuigui1, pascal masson2
1 aix-marseille universite, im2np-cnrs umr 7334, rue frederic joliot-curie, 13451 marseille, france
2 nice sophia-antipolis university, epoc, 06410 biot, france

section: research paper
keywords: non-volatile memory; flash; reliability; advanced electrical characterization
citation: philippe chiquet, jérémy postel-pellerin, célia tuninetti, sarra souiki-figuigui, pascal masson, enhancement of flash memory endurance using short pulsed program/erase signals, acta imeko, vol. 5, no. 4, article 6, december 2016, identifier: imeko-acta-05 (2016)-04-06
section editor: lorenzo ciani, university of florence, italy
received september 26, 2016; in final form november 6, 2016; published december 2016
copyright: © 2016 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: philippe chiquet, e-mail: philippe.chiquet@im2np.fr

abstract: the present paper proposes to investigate the effect of short pulsed program/erase signals on the functioning of flash memory transistors. usually, the electrical operations related to these devices involve the application of single long pulses to various terminals of the transistor to induce the tunnelling effects allowing the variation of the floating gate charge. according to the literature, the oxide degradation occurring after a number of electrical operations, which leads to a loss of performance and reliability, can be reduced by replacing dc stress with ac stress or by reducing the time the mos-based devices spend under polarization. after a brief presentation of the functioning of the flash memory transistors tested in this work, the experimental setup used to replace standard electric signals with short pulses is described. electrical results showing the benefits of programming and erasing non-volatile memories with short pulses are then presented.

1. introduction

over the last few years, flash memory cells, which can nowadays be found in common products such as usb flash drives and smart cards, have become the dominant non-volatile memory devices in the semiconductor industry. flash memory is built around a floating gate transistor, i.e. a mos transistor to which an extra tri-layer stack oxide "oxide/nitride/oxide" (ono) and a top electrode (control gate) have been added, as shown in figure 1. information is stored in the device as an electric charge located in the floating gate which, thanks to the various oxide layers surrounding it, retains this charge when the power supply is removed, thus ensuring the non-volatile behaviour of the flash transistor [1]. according to the value qfg of this charge, two distinct logical states '0' and '1' can be differentiated, and the threshold voltage vt of the transistor can be calculated according to equation (1):

$v_{\mathrm{t}} = v_{\mathrm{t0}} - \frac{q_{\mathrm{fg}}}{c_{\mathrm{pp}}}\,,$ (1)

where vt0 is the natural threshold voltage of the cell and cpp the control gate / ono / floating gate capacitance [2]. the transition from one state to the other is obtained when electric signals are applied to some of the transistor's terminals, inducing the injection (or removal) of electrons into (or from) the floating gate through the tunnel oxide. in the remainder of this paper, the logical states corresponding to the highest and lowest threshold voltage values of the floating gate transistor will be called the "programmed" and "erased" states, respectively.

figure 1. schematic (left) and electric (right) representation of a flash floating-gate transistor.
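as a quick numerical illustration of equation (1), the sketch below computes the threshold voltage for a few floating-gate charge values; vt0, cpp and the charges are illustrative assumptions, not device data:

vt0 = 4.0     # natural threshold voltage in v (illustrative)
cpp = 1e-15   # control gate / ono / floating gate capacitance in f (illustrative)

for qfg in (-5e-15, 0.0, 5e-15):      # floating-gate charge in c
    vt = vt0 - qfg / cpp              # eq. (1)
    print(f"qfg = {qfg:+.0e} C  ->  vt = {vt:.1f} V")

a negative charge (stored electrons) raises vt, which corresponds to the "programmed" state described above.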
programming is usually achieved through channel hot electron (che) injection in flash memories, while erasing is generally obtained through fowler-nordheim (fn) injection [3]. due to this functioning, the tunnel oxide layer, which can be degraded by repeated electrical operations, is critical to the reliability of the memory devices, and various solutions can be considered in order to improve their overall reliability [4]. the present work will focus on the use of short-pulsed signals, which consists of replacing long standard signals by a series of pulses with short plateau widths, as previous studies show that mos-based devices are less impacted by ac stress than by dc stress [5]-[11], or by shorter pulses of higher amplitude [12].

2. generation of short-pulsed signals

2.1. experimental setup

in order to investigate the endurance of the tested flash memory cells under short pulses, a complete setup has been developed with the help of a keysight semiconductor device parameter analyzer b1500a [13] equipped with two wgfmus (waveform generator fast measurement unit b1530a), two spgus (semiconductor pulse generator unit) and four smus (source monitor unit), as described in figure 2. the smus are used to measure the ids(vgs) characteristics necessary for the reading (threshold voltage extraction at a given drain current value) of the cell state, while the spgu enables the definition of pulses for the successive programming/erasing operations. an agilent 16440a selector switches between the smu and spgu during the endurance test. the additional wgfmu in the setup, connected via an rsu (remote-sense and switch unit), enables the definition of arbitrary waveforms and the dynamic measurement of the drain current, which can be useful when evaluating the consumption of the programming operation during the endurance test [14], [15].

2.2. definition of the applied signals

standard (std) flash che signals consist of simultaneous control gate and drain pulses of amplitude vpp equal to +9 v (during 5 µs) and +4.2 v (during 1 µs) respectively, while standard fn erasing is obtained by applying a single 500 µs pulse of amplitude vpp equal to −18 v on the control gate. the first approach here, which will be developed in section 3, consists of replacing standard programming and/or erasing signals by trains of short pulses. these new signals have been chosen so that the "programmed" and "erased" threshold voltages measured for a fresh flash device are close to the threshold voltage values obtained when standard signals are used. it has been experimentally observed that such a condition is met when the total plateau time spent on the short pulses is equal to the duration of the standard signal, as illustrated in the sketch below.
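this sizing rule is simple enough to state in code; the sketch below uses the pulse counts quoted in the text to compute the number of 50 ns pulses whose summed plateau time equals a given standard signal duration:

def n_pulses(standard_duration_s, plateau_s=50e-9):
    """number of short pulses whose summed plateau time equals the standard signal."""
    return round(standard_duration_s / plateau_s)

print(n_pulses(1e-6))     # 20 pulses replace the 1 us standard drain pulse (che)
print(n_pulses(500e-6))   # 10000 pulses replace the 500 us fn erase pulse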
in order to obtain comparable threshold values using series of short pulses, the standard drain signal in che programming and the standard control gate signal in fn erasing have respectively been transformed into a series of 20 pulses of 50 ns plateau time and a series of 10,000 pulses of 50 ns plateau time. in each case, the signals created with the experimental setup described above have been checked with the help of an oscilloscope, as seen in figures 3 to 5. the second approach, which will be the topic of section 4, is much simpler on an experimental level and consists of replacing standard signals with single pulses of higher amplitude and shorter plateau time.

figure 2. experimental setup used to perform endurance tests with current consumption measurements.
figure 3. oscilloscope observation of the control gate (yellow) and drain (blue) signals for a short-pulsed che programming of the flash memory cells.
figure 4. close-up observation of the signals observed in figure 3.
figure 5. oscilloscope observation of the control gate signal of a flash floating gate transistor during a fn erase operation.

3. electrical results: pulse trains

3.1. qualitative results and total window closure

measurements carried out on numerous devices yielded electrical results highlighting the effect of pulsed program/erase operations on flash memory endurance. the endurance of memory cells is mainly characterized by the evolution of their programming window, defined as the difference between the threshold voltages of the "programmed" and "erased" states, over numerous program/erase electrical operations. in order to quantify the gradual programming window closure linked to device degradation, a criterion had to be chosen. in the present work, the total window closure δvt after k program/erase cycles, defined by equations (2) and (3), has been retained:

$\delta v_{\mathrm{t}} = \Delta v_{\mathrm{t}}^{\mathrm{prog}} - \Delta v_{\mathrm{t}}^{\mathrm{erase}}\,,$ (2)

$\Delta v_{\mathrm{t}}^{\mathrm{prog,erase}}(k) = v_{\mathrm{t}}^{\mathrm{prog,erase}}(k) - v_{\mathrm{t}}^{\mathrm{prog,erase}}(1)\,.$ (3)

according to its definition, δvt is calculated as the sum of two negative values and represents the combined shifts of the "programmed" and "erased" threshold voltages. this total window closure is plotted in figure 6 as a function of the number of program/erase cycles up to 10⁶ cycles. it can be seen that both the short (20×50 ns)/std and std/short (10000×50 ns) cyclings provide a decrease in programming window closure of about 0.5 v compared to the std/std cycling. in order to understand the effect of the pulses' parameters on this observed endurance improvement, measurements involving different plateau amplitudes and duty cycles have been carried out.
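a minimal sketch of how equations (2) and (3) turn endurance data into the window-closure criterion; the vt arrays below are illustrative stand-ins for measured values:

import numpy as np

cycles = np.array([1, 1e2, 1e4, 1e6])
vt_prog = np.array([7.0, 6.9, 6.6, 6.2])    # programmed vt vs cycles (illustrative)
vt_erase = np.array([2.9, 3.0, 3.3, 3.7])   # erased vt vs cycles (illustrative)

dvt_prog = vt_prog - vt_prog[0]     # eq. (3), negative as the programmed vt drops
dvt_erase = vt_erase - vt_erase[0]  # eq. (3), positive as the erased vt rises
dvt = dvt_prog - dvt_erase          # eq. (2): the sum of two negative shifts
print(dvt)                          # total window closure after k cycles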
3.2. effect of pulse amplitude

electrical results presented in figure 7 compare the programming window closure of the std/short (10000×50 ns) and std/std cyclings for vpp values of −17 v, −18 v and −19 v. while a difference in programming window closure between the two cycling modes can be observed for the first two vpp values after 10⁴ cycles, erasing at vpp = −19 v with short pulses instead of the standard signal makes no difference. this could be explained by the fact that, at a given duty cycle, the benefits of replacing dc stress with ac stress become noticeable only if the plateau time of the pulses is reduced as much as possible as the stressing field is increased [11]. other studies have also shown that such results cannot be achieved at ambient temperature when the plateau times of the applied pulses become long enough [5].

figure 6. programming window closure up to 10⁶ program/erase (std/std, short/std and std/short) cycles. each measurement point has been averaged over ten flash memory cells.
figure 7. programming window closure up to 10⁶ program/erase (std/std and std/short) cycles for various erasing vpp values. each measurement point has been averaged over ten flash memory cells.

3.3. effect of duty cycle

in order to achieve lower device degradation, allowing oxide relaxation during electrical stress is a common solution [5], [16]. this has been achieved in this study by increasing the delay between the short pulses constituting the signals applied to the control gate and the drain of the transistor. the duty cycle, which is defined as the ratio of the plateau duration tpl to the period value t, as shown in figure 8, is thus decreased. electrical results obtained after short/short cycling (vpp = −18 v) for duty cycles equal to 0.1 and 0.01 have been compared to experimental data obtained for std/std cycling in figure 9. it can be observed that decreasing the duty cycle reduces the programming window closure. using the experimental data from figure 9, δvt can be plotted as a function of the duty cycle, as shown in figure 10. the total window closure is shown to have a logarithmic behaviour with respect to the duty cycle.

figure 8. schematic representation of a period of the electrical signals applied to the control gate (vgc) or drain (vd) of the memory transistor. the duty cycle is calculated as the ratio of the plateau duration tpl to the period value t.
figure 9. programming window closure up to 10⁶ program/erase cycles obtained for various duty cycle values. each measurement point has been averaged over ten flash memory cells.
figure 10. total window closure as a function of duty cycle obtained from the data of figure 9 at 10⁵ and 10⁶ cycles.

3.4. measurement of device consumption

the energy consumption of non-volatile memory devices has attracted interest recently, thanks to the experimental possibilities offered by the parameter analyzers introduced in the last few years [14], [15]. it can be defined according to the following equation:

$e_{\mathrm{c}} = \int v_{\mathrm{ds}}\, i_{\mathrm{ds}}\, \mathrm{d}t\,,$ (4)

where vds and ids are the drain-source voltage and the drain current of the transistor, respectively. results of time-resolved measurements of the drain current performed during a programming operation before and after electrical cycling (std/std and short/std), which can be observed in figures 11 and 12, show that device consumption typically increases with device degradation. figure 13 allows a comparison between the respective consumptions of a fresh flash device during std and short programming operations. taking (4) into consideration, it is clear that device consumption is higher in the latter case, as the integral of the red curve (short) is clearly higher than that of the black one (std). this is easily explained: the pulses constituting the short programming signal are characterized by rise and fall times of the same order of magnitude as the plateau times. these rise and fall times, which do not exist during a std programming signal, make up an important part of the short signal and thus explain this consumption increase.

figure 11. time-resolved measurement of the drain current ids during a standard programming signal before (circles) and after (squares) std/std program/erase cycling (10⁶ cycles).
figure 12. time-resolved measurement of the drain current ids during a short (20×50 ns) programming signal before (circles) and after (squares) short/std program/erase cycling (10⁶ cycles).
figure 13. current consumption measurements during std (black) and short (red) programming operations performed on a fresh flash device.
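equation (4) is evaluated in practice by numerically integrating the time-resolved record; the sketch below does this with synthetic waveforms standing in for the real traces of figures 11 to 13:

import numpy as np

t = np.linspace(0, 1e-6, 1001)        # 1 us programming pulse (synthetic time base)
vds = np.full_like(t, 4.2)            # drain bias during che programming, in v
ids = 100e-6 * np.exp(-t / 3e-7)      # decaying drain current in a (synthetic)

energy = np.trapz(vds * ids, t)       # eq. (4), trapezoidal integration
print(f"consumed energy: {energy:.2e} J")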
total window closure as a function of duty cycle obtained from data of figure 9 at 105 and 106 cycles. figure 11. time-resolved measurement of the drain current ids during a standard programming signal before (circles) and after (squares) std/std program/erase cycling (106 cycles). figure 12. time-resolved measurement of the drain current ids during a short (20x50 ns) programming signal before (circles) and after (squares) short/std program/erase cycling (106 cycles). figure 13. current consumption measurements during std (black) and short (red) programming operations performed on a fresh flash device. figure 9. programming window closure up to 106 program/erase cycles obtained for various duty cycle values. each measurement point has been averaged over ten flash memory cells. acta imeko | www.imeko.org december 2016 | volume 5 | number 4 | 33 two logical states cannot be distinguished anymore. in that case, the threshold voltage value obtained when reading such a cell would not allow determining if it is “programmed” or “erased”. for each cycling mode, the cells were considered failed when the programming window went below 2 v. generally speaking, failure of electronic systems or components over time has often been described using a weibull distribution [17], [18]. assuming such a distribution in the present case, the ratio of failed cells after k program/erase cycles f(k) can be written according to equation (5): β τ )( 01)( kk ekf − − −= , (5) where k0, τ and β are three fitting parameters of the distribution. weibull plots of experimental data obtained from the three cell populations are presented in figure 14. it can be seen that the failure of the tested flash memory cells can be well described by a weibull distribution. the fitting of experimental data of the three cell populations is realized using the parameter values reported in table 1. as expected, cycling flash cells with signals that are less degrading induces a shift of the failed cell distribution towards higher numbers of cycles. the failure rate [19]-[21] λ(k) can be expressed as a function of the number of cells still functional after k cycles n(k) according to equation (6): dk kdn kn k )( )( 1 )( −=λ . (6) the “bathtub” shape of the failure rate, calculated from experimental data of the three cell populations and shown in figure 15, is typical for such statistical distributions [18], [22]. 4. electrical results: single short erase pulse while the last section investigates the consequences of changing standard program/erase signals into series of short pulses, the idea is here to find the electrical characteristics of a short single pulse that can be used to replace a given standard signal during device operation. the study presented in this section has been limited to erasing pulses. 4.1. choice of the erase pulse characteristics in order to achieve better electrical performance by speeding up the erasing process while targeting a similar vt erase value, the new pulse will be characterized by a shorter plateau time and a higher amplitude (i.e. tpl < 500 µs and |vpp| > 18 v). under these conditions, the necessary plateau time value for a given pulse amplitude can be determined by studying the erasing kinetics of the tested flash devices, according to the method described in figure 16. successive short pulses are applied to the control gate of a programmed cell in order to obtain a progressive shift of its threshold voltage for a given vpp value. the evolution of the threshold voltage can be monitored figure 15. 
4. electrical results: single short erase pulse

while the previous section investigated the consequences of replacing standard program/erase signals by series of short pulses, the aim here is to find the electrical characteristics of a single short pulse that can replace a given standard signal during device operation. the study presented in this section has been limited to erasing pulses.

4.1. choice of the erase pulse characteristics

in order to achieve better electrical performance by speeding up the erasing process while targeting a similar vt erase value, the new pulse is characterized by a shorter plateau time and a higher amplitude (i.e. tpl < 500 µs and |vpp| > 18 v). under these conditions, the plateau time needed for a given pulse amplitude can be determined by studying the erasing kinetics of the tested flash devices, according to the method described in figure 16. successive short pulses are applied to the control gate of a programmed cell in order to obtain a progressive shift of its threshold voltage for a given vpp value. the evolution of the threshold voltage is monitored through reading operations performed between these individual pulses.

figure 15. calculated failure rate of flash memory cells subjected to std/std, short/short (dc = 0.1), and std/short (vpp = −17 v) cyclings.

figure 16. experimental method to investigate erasing kinetics of the tested flash cells.

table 1. best weibull parameters allowing the fitting of experimental data for memory cells subjected to std/std, short/short (dc = 0.1), and std/short (vpp = −17 v) cyclings.

cycling condition         k0        τ         β
std/std                   6.11×10^4 9.29×10^4 1.02
short/short, dc = 0.1     1.56×10^5 2.86×10^5 1.10
std/short, vpp = −17 v    5.78×10^4 1.58×10^5 1.01

figure 14. weibull plot obtained for flash memory cells subjected to std/std, short/short (dc = 0.1), and std/short (vpp = −17 v) cyclings. the lines correspond to the fitting of the experimental data using the weibull parameters of table 1.

electrical results for vpp values ranging from −18 v to −23 v and from −24 v to −31 v are displayed in figures 17 and 18, respectively, where the threshold voltage evolution is plotted as a function of the erasing time, defined in this experiment as the total plateau time accumulated over the successive pulses depicted in figure 16. the erasing time needed to reach the targeted vt erase value (equal to 2.9 v, as measured after an erasing operation using a standard signal) can be extracted for each vpp value as the intersection between the corresponding experimental curve and the dotted line in both figures. these results show that for sufficiently high amplitudes (typically |vpp| > 26 v), the erasing pulse's plateau time could be shorter than 50 ns while still reaching a vt erase value of 2.9 v. assuming rise/fall times of 100 ns and tpl = 50 ns, the standard erasing operation (lasting around 500 µs) could be sped up by a factor of 2000, while retaining the typically low fowler-nordheim current consumption. the use of shorter, higher-amplitude erasing pulses could thus be considered for applications where higher voltages are tolerated and device speed is of primordial importance.

4.2. total window closure

in order to test the endurance of flash cells subjected to these new erasing pulses, electrical program/erase cyclings (with standard programming operations) have been performed for vpp values ranging from −18 v to −28 v. the results presented in figure 19 show that the total window closure of the flash cells is decreased when compared to the std/std case (vpp = −18 v). it can however be noticed that the flash devices' endurance is optimal for an intermediate vpp value, which seems to be around −20 v. device degradation, monitored through programming window closure, should become more important as the amplitude of the erasing pulse increases (see figure 7 for example), but that phenomenon is compensated by the resulting decrease of the plateau time. to further explain the existence of such an optimal amplitude value, tcad simulations could be considered in order to monitor the temporal evolution of the tunnel oxide electric field in each case, and in particular to estimate the impact of the erasing pulse rise time, which should limit the oxide electric field at higher vpp values.
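the erasing-time extraction of section 4.1 and the speed-up estimate above can be sketched numerically; the kinetics data below are made up for illustration and do not reproduce the measured curves of figures 17 and 18.

```python
import numpy as np

V_T_TARGET = 2.9  # v, "erased" threshold reached with a standard signal

def erasing_time_for_target(t_erase, v_t, target=V_T_TARGET):
    """interpolate the vt(t) erasing kinetics at the target threshold,
    i.e. the intersection with the dotted line of figures 17/18.
    v_t is assumed to decrease monotonically with erasing time."""
    return float(np.interp(-target, -np.asarray(v_t), np.asarray(t_erase)))

# made-up kinetics for one vpp value: vt drops with cumulative plateau time
t = np.array([10e-9, 20e-9, 50e-9, 100e-9, 200e-9])   # s
vt = np.array([5.8, 4.9, 3.4, 2.4, 1.5])              # v

t_needed = erasing_time_for_target(t, vt)
rise_fall = 2 * 100e-9                                # assumed rise + fall times
speedup = 500e-6 / (t_needed + rise_fall)             # vs. ~500 us standard erase
print(f"erasing time ~ {t_needed * 1e9:.0f} ns, speed-up ~ {speedup:.0f}x")
```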
endurance tests have also been performed for vpp values up to −40 v, but due to technical limitations of the parameter analyzer used for this study (the minimum value that can be set for tpl being 10 ns), the approach presented at the beginning of this section had to be modified. for erasing pulses of amplitude equal to −30 v, −35 v and −40 v, the plateau time had to be set to 10 ns, and as a consequence the obtained threshold voltages (reported in table 2) are well below the standard value (i.e. 2.9 v). the electrical results presented in figure 20 show that the window closure is generally smaller than the one obtained with standard signals, except for vpp = −40 v. even in the latter case, where the window closure is close to the standard one, it should not be as critical for device operation, since the initial programming window is much larger to begin with.

figure 17. erasing kinetics for −23 v < vpp < −18 v. the "erased" threshold voltage measured after a standard erasing operation is added for comparison (dotted line).

figure 18. erasing kinetics for −31 v < vpp < −24 v. the "erased" threshold voltage measured after a standard erasing operation is added for comparison (dotted line).

figure 19. programming window closure up to 10^6 program/erase cycles for various erasing vpp values (from −18 v to −28 v). each measurement point has been averaged over ten flash memory cells.

figure 20. programming window closure up to 10^6 program/erase cycles for various erasing vpp values (from −30 v to −40 v, with standard −18 v results for comparison). each measurement point has been averaged over ten flash memory cells.

table 2. "erased" vt values obtained for tpl = 10 ns.

amplitude    threshold voltage
−30 v        2 v
−35 v        −3 v
−40 v        −6.8 v

5. conclusions

the impact of the use of short pulses on the endurance of flash memories during the che programming and fn erasing operations has been investigated in this paper. the proposed experimental setup helped produce two main types of signals (trains of short pulses, or single short pulses of high amplitude) that could potentially be used for specific industrial applications. replacing standard program/erase signals by trains of short pulses with plateau times as low as 50 ns allowed the observation of a decrease in programming window closure during program/erase cycling. the impact on this closure of parameters such as the amplitude or duty cycle of the signals applied to the terminals of the memory transistors has been investigated. as the results presented in section 3 show, the endurance improvement obtained by replacing standard control gate and drain signals by series of short pulses comes at a cost, as the overall signal durations and the device consumption have been shown to increase. this trade-off could prove useful in specific application fields (e.g. the automotive and avionic industries) where the reliability of the memory devices is the most important factor. moreover, automotive and avionic applications typically require memory devices to operate within a much larger temperature range (typically −40 °c to 150 °c). as other authors have shown, decreasing the duty cycle value yields better electrical results at higher temperatures [5], making the present results potentially even more interesting for such applications. replacing standard (erasing) signals by single shorter pulses of higher amplitude appears to be another way to produce a noticeable enhancement of the endurance of flash memory devices, as the programming window closure has also been shown to decrease in this case. moreover, the current consumption should remain quite low, and the erasing speed can potentially be greatly improved.
on the downside, these signals should only be used for applications in which higher voltages can be tolerated. as a perspective, many other types of measurements could be considered to complete the study presented here. firstly, the two approaches showcased in sections 3 and 4 of this paper could probably be combined to maximize the endurance enhancement of flash devices (i.e. trains of short pulses characterized by |vpp| > 18 v). moreover, the measurements presented in this work could be reproduced using different plateau times, duty cycles and rise/fall times, in order to understand the effect of these parameters on device degradation and to optimize them. finally, complementary measurements could be performed on test mos capacitors to investigate the effect of short-pulsed electrical stress in terms of interface trap density [16] and stress-induced leakage current (silc), which are physical manifestations of oxide degradation that can lead to program/erase window closure in non-volatile memory devices.

acknowledgement

the authors wish to thank jean-luc ogier (stmicroelectronics, rousset) for his technical support.

references

[1] w. brown, j. brewer, "nonvolatile semiconductor memory technology: a comprehensive guide to understanding and using nvsm devices", ieee press, 1998.
[2] p. pavan, r. bez, p. olivo, e. zanoni, "flash memory cells - an overview", proc. of the ieee, vol. 85, no. 8, 1997, pp. 1248-1271.
[3] r. laffont, r. bouchakour, o. pizzuto, j.-m. mirabel, "a 0.18 um flash source side erasing improvement", proc. of nvmts, 2004, pp. 105-109.
[4] j. postel-pellerin, g. micolau, p. chiquet, j. melkonian, g. just, d. boyer, c. ginoux, "setting up of a floating gate test bench in a low noise environment to measure very low tunnelling currents", acta imeko, vol. 4, no. 3, 2015, pp. 36-41.
[5] b. rebuffat, j.-l. ogier, p. masson, m. mantelli, r. laffont, "relaxation effect on cycling on nor flash memories", proc. of ieee international conference on electron devices and solid-state circuits (edssc), 2015, pp. 613-616.
[6] a. cester, a. paccagnella, g. ghidini, "stress induced leakage current under pulsed voltage stress", solid-state electronics, vol. 46, 2002, pp. 399-405.
[7] m. nafria, j. sune, d. yelamos, x. aymerich, "high field dynamic stress of thin sio2 films", microelectron. reliab., vol. 35, no. 3, 1995, pp. 539-553.
[8] d. caputo, r. feruglio, f. irrera, b. ricco, "effect of pulsed stress on leakage current in mos capacitors for non-volatile memory applications", proc. of ieee european solid-state device research conference (essderc), 2002, pp. 567-570.
[9] f. irrera, b. ricco, "pulsed tunnel programming of nonvolatile memories", ieee trans. on electron devices, vol. 50, no. 12, december 2003, pp. 2474-2480.
[10] a. chimenton, f. irrera, p. olivo, "improving performance and reliability of nor-flash arrays by using pulsed operation", microelectronics reliability, vol. 46, 2006, pp. 1478-1481.
[11] f. irrera, t. fristachi, d. caputo, b. ricco, "optimising flash memory tunnel programming", microelectronics engineering, vol. 72, 2004, pp. 405-410.
[12] p. canet, r. bouchakour, j. razafindramora, f. lalande, j.-m. mirabel, "very fast eeprom erasing study", proc. of ieee european solid-state circuit conference (esscirc), 2002, pp. 683-686.
[13] keysight technologies, keysight b1500a data sheet, february 2016.
[14] v. della marca, j. postel-pellerin, g. just, p. canet, j.-l. ogier, "impact of endurance degradation on the programming efficiency and the energy consumption of nor flash memories", microelectron. reliab., vol. 54, no. 9-10, 2014, pp. 2262-2265.
[15] v. della marca, t. wakrim, j. postel-pellerin, "advanced experimental setup for reliability and current consumption measurements of flash non-volatile memories", proc. of the 20th imeko tc4 international symposium, 2014, pp. 1036-1040.
[16] b. rebuffat, p. masson, j.-l. ogier, m. mantelli, r. laffont, "effect of ac stress on oxide tddb and trapped charge in interface states", proc. of international symposium on integrated circuits (isic), 2014, pp. 416-419.
[17] u.s. department of defense, military handbook - reliability prediction of electronic equipment (mil-hdbk-217f), 1991.
[18] u. gurel, m. cakmakci, "impact of reliability on warranty: a study of application in a large size company of electronics industry", measurement, vol. 46, 2014, pp. 1297-1310.
[19] m. catelani, l. ciani, s. rossin, m. venzi, "failure rates sensitivity analysis using monte carlo simulation", proc. of the 13th imeko tc10 workshop on technical diagnostics, 2014, pp. 195-200.
[20] l. ciani, m. catelani, "a fault tolerant architecture to avoid the effects of single event upset (seu) in avionics applications", measurement, vol. 54, 2014, pp. 256-263.
[21] m. venzi, s. rossin, c. michelassi, c. accillaro, m. catelani, l. ciani, "improved fbd and rbd generation for system reliability assessment", proc. of the 12th imeko tc10 workshop on technical diagnostics, 2013, pp. 266-270.
[22] c. calaiselvan, l. rao, "accelerated life testing of nano ceramic capacitors and capacitor test boards using non-parametric method", measurement, vol. 88, 2016, pp. 58-65.

acta imeko
issn: 2221-870x
december 2016, volume 5, number 4, 80

introduction to selected papers from the xxi imeko world congress 2015

paolo carbone
university of perugia, italy

section: editorial
citation: paolo carbone, editorial to selected papers from the xxi imeko world congress 2015, acta imeko, vol. 5, no. 4, article 12, december 2016, identifier: imeko-acta-05 (2016)-04-12
section editor: paul regtien, the netherlands
received december 26, 2016; in final form december 26, 2016; published december 2016
copyright: © 2016 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: paolo carbone, email: paolo.carbone@unipg.it

dear reader,

the last papers in this issue were presented in their original versions at the xxi imeko world congress 2015 and were then updated and extended for publication in this journal after the usual review process. the first paper, by maik rosenberger et al., addresses the subject of measuring geometric parameters using ratiometric cameras. the procedure proposed by the authors follows a standard published by the european machine vision association, and experimental results are presented to discuss the applicability of this standard. the final paper is authored by petri österberg et al.
and addresses the topic of measurements of moisture in biomaterials using microwaves and nmr. it describes in full detail the procedure followed in both cases and presents experimental results that also include data from the standard loss-on-drying method, which apparently remains the most accurate and precise among the three approaches. have a fruitful reading of the fourth issue of acta imeko in 2016!

acta imeko
issn: 2221-870x
september 2015, volume 4, number 3, 36-41

setting up of a floating-gate test bench in a low noise environment to measure very low tunneling currents

jeremy postel-pellerin 1, gilles micolau 2, philippe chiquet 1, jeanne melkonian 1, guillaume just 1,3, daniel boyer 4, cyril ginoux 2
1 aix-marseille university, im2np-cnrs, umr 7334, imt technopole de chateau-gombert, 13451 marseille, france
2 avignon university, emmah-inra, umr 1114, 33 rue louis pasteur, 84000 avignon, france
3 stmicroelectronics, zi de rousset bp 2, 13106 rousset cedex, france
4 laboratoire sous-terrain bas bruit, lsbb-cnrs, ums 3538, la grande combe, 84400 rustrel, france

section: research paper
keywords: flash memory; reliability; floating-gate technique; leakage current; low noise
citation: jeremy postel-pellerin, gilles micolau, philippe chiquet, jeanne melkonian, guillaume just, daniel boyer, cyril ginoux, setting up of a floating-gate test bench in a low noise environment to measure very low tunneling currents, acta imeko, vol. 4, no. 3, article 6, september 2015, identifier: imeko-acta-04 (2015)-03-06
editor: paolo carbone, university of perugia, italy
received february 14, 2015; in final form april 17, 2015; published september 2015
copyright: © 2015 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
corresponding author: jeremy postel-pellerin, e-mail: jeremy.postel-pellerin@im2np.fr

abstract
we propose and develop a complete solution to evaluate very low leakage currents in non-volatile memories, based on the floating-gate technique. we intend to use very basic tools (power supply, multimeter, ...) but with a very good current resolution. the aim of this work is to show the feasibility of such measurements and the ability to reach current levels lower than the ones obtained by any direct measurement, even from high-performance devices. the key point is that the experiment is carried out in a very particular low-noise environment (an underground laboratory), allowing the electrical contacts on the device under test to be kept as long as possible. we have demonstrated the feasibility of this approach and obtained a very promising 10^-17 a current level in less than two weeks.

1. introduction

the floating-gate technique (fgt) has been developed for many years [1], [2] in order to evaluate very low leakage currents through dielectrics. these currents are not accessible through direct electric characterization, even with high-performance analyzers [3], [4]. the principle of the technique lies in evaluating the slow temporal evolution of the electric charge of a mos capacitance by an indirect measurement. the floating-gate device is a micro-electronic integrated test device: it is embedded in the scribe lines of a wafer and has to be probed under needles. in the context of non-volatile memory (nvm) reliability studies [5]-[9], the time range of one experiment is typically between several weeks and several months. to maintain the electrical contacts assured by the needles over this long time, the technique requires protecting the probed device from external perturbations (principally mechanical vibrations and thermal variations). in a classical laboratory or industrial environment, it is difficult to set up such a dedicated prober. for instance, this technique has been benched in the im2np characterization room [10] in the context of nvm flash-type reliability studies.
on working days, the electrical contact of the needle can be held, in the best cases, over several hours, and during winter holidays (empty laboratory) over a maximum of ten days. the development of this technique in a classical environment has been proposed in previous studies [11]. it requires a self-sustained prober, with a compressed-air network, in a thermalized artificial environment. this solution is technically and economically heavy (hardware price, space consumption, thermal control) and, if not impossible, is very difficult to implement. thus, the main idea of our experimental platform lies in placing it in an underground room, naturally insulated from external perturbations. in the south of france, near apt, the very low noise underground laboratory, called lsbb ("laboratoire sous-terrain à bas bruit") [12], offers such facilities. it consists of a set of horizontal galleries dug in the bedrock of the big mountain bordering the south albion plateau. from 1973 to 1998, it served as the firing control post #1 of the french nuclear deterrent force. rooms and galleries are shielded from electromagnetic waves by concrete and massive slabs of steel. the depth varies from ground level down to 525 meters underneath the top of big mountain. the first peculiarity of this underground laboratory (compared to others) lies in its location in a natural national park, far from major communication routes (no highway, no railroad). the second appreciable peculiarity is its size: there are many rooms of different sizes, each of them offering classical facilities (electrical and informatic networks) for a scientific environment. the lsbb has been used by the national institute for the sciences of the universe (insu) as a hydrogeological, geophysical (network of seismographs) and astronomical (muon detectors) observatory for a few years. it also allows testing micro-electronic devices in a non-radiative environment. for our purpose, this laboratory is a very good environment, since it naturally prevents external perturbations (electromagnetic, mechanical, thermal variations) without any additional devices. it is important to notice that this laboratory is easy to access. moreover, the occupation fee for a single room is cheaper than a self-sustained marble table. because the fgt only requires very basic electrical measurements (applying biases and acquiring currents), the technical hardware is standard and the probe station is a light one (35 kg). the entire experimental platform occupies around 2 square meters. this paper contains three major parts of development and a conclusion.
first, background concerning nvm and fgt measurement is presented; next, the details of the experimental setup are given; finally, the first results are shown and the limitations of the measurements are discussed.

2. backgrounds

2.1. non-volatile memories

the most widespread solution to make semiconductor memories non-volatile, that is to say able to keep information without any power supply, is to use mos transistors whose threshold voltage is shifted by a charge stored in an isolated gate above the channel [5]. the most common technology consists in adding a second gate between the gate and the channel of a classical mos transistor. flash memories are based on this principle. this second gate, made of conductor or semiconductor materials, can isolate charges to make the transistor threshold voltage variable. most of the time, charges are injected through a dielectric, in general a thin silicon dioxide (sio2) layer, placed between the transistor channel and this second gate, as presented in figure 1 (left). lastly, the two gates of this "transistor" are also separated by a dielectric, most commonly a tri-layer "oxide/nitride/oxide" (ono) stack. thus, the nvm elementary cell can be seen as a classical mos transistor in series with a capacitor cpp (figure 1, right). the common electrode is called the "floating gate" (fg) because its electrical potential cannot be driven by an external contact. it can store a charge, while the top electrode of the capacitor becomes the "control gate" (cg) of the cell. the barrier transparency of the tunnel oxide, which can be electrically modeled by a current source i, allows the injection of charges into the floating gate, shifting the mos transistor threshold voltage vt according to (1) and thus defining two different logical states:

$$ v_t = v_{t0} - \frac{q_{fg}}{c_{pp}} \,, \qquad (1) $$

where vt0 is the natural threshold voltage of the cell, qfg the charge amount in the floating gate and cpp the capacitance between the control gate and the floating gate. a total quantity of elementary charges (electrons or holes) qfg is injected into the fg using an adequate set of biases, depending on the technology [5]-[9]. these charges must be stored for a long time in the fg, but they can escape from it through a damaged dielectric. according to (1), if charges are lost, the stored information will also be lost, since the two logical states cannot be distinguished anymore. a better understanding of the physical phenomena responsible for the charge leakage, generally related to tunneling currents, is crucial to improve the overall quality of memory cells. this leakage current is not directly measurable in the nvm device. the quality of this dielectric is key for high-reliability devices; that is why it is important to develop powerful methods allowing precise electrical characterization of this dielectric. the main difficulty is to reach the very low current levels responsible for such leakage, because they are not accessible through direct measurements, even with high-performance analyzers [3], [4]. thus, indirect measurement techniques have to be used. the floating-gate technique is one of the most widely used methods.

2.2. floating-gate technique (fgt)

the fgt is based on the use of a mos capacitor and a mos transistor in parallel [1], [2], as presented in figure 2. it consists of a high-voltage transistor whose gate is common with the top electrode of a large tunnel capacitor, denoted ctun. this common gate plays the role of the floating gate in a memory cell but is directly accessible to apply biases.
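returning to equation (1), a minimal numeric sketch of the threshold voltage shift produced by a stored charge; the capacitance and charge values are illustrative assumptions, not parameters of the tested devices.

```python
def threshold_voltage(v_t0, q_fg, c_pp):
    """equation (1): v_t = v_t0 - q_fg / c_pp."""
    return v_t0 - q_fg / c_pp

v_t0 = 2.0     # v, natural threshold voltage (assumed)
c_pp = 1e-15   # f, control-gate/floating-gate capacitance (assumed)
q_fg = -3e-15  # c, stored electrons make q_fg negative

print(f"programmed v_t = {threshold_voltage(v_t0, q_fg, c_pp):.1f} v")  # 5.0 v
```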
the tunnel capacitor represents the injection zone of the memory cell: indeed, it has exactly the same process conditions as the injection region (oxide thickness, doping conditions, ...). nevertheless, its area is much larger than that of the memory cell. since the area ratio between the capacitor and the transistor is very high (around 1200:1) and the hv oxide is twice as thick as the tunnel oxide, the observed leakage is almost exclusively due to the tunnel capacitor. the fgt is based on the voltage measurement of the initially charged gate of the mos capacitor, which is then disconnected from the external circuit during the experiment.

figure 1. schematic of a flash-type nvm (left) and its electrical scheme (right).

figure 2. floating-gate test structure.

the measurement of this slowly decreasing gate voltage is performed indirectly through the measurement of the drain current of the transistor sharing its gate with the capacitor. this transistor "converts" the charge of the capacitor, and thus the gate voltage, into a measurable drain current. the temporal variation of this drain current is directly linked to the variation of the gate voltage, and thus to the gate charge, which is the same as the capacitor charge. figure 3 depicts the full methodology to obtain the leakage current ileak as a function of the floating-gate voltage vfg. the drain current acquisition over time can be directly linked to a floating-gate voltage variation thanks to a preliminary ids(vfg) measurement, illustrated in figure 4. the floating-gate voltage is the image of the floating-gate charge qfg, which is extracted from the ctun(vfg) characteristics, presented in figure 5, using (2):

$$ q_{fg} = \int_{0}^{v_{fg}} c_{tun}(v) \, \mathrm{d}v \,. \qquad (2) $$

the leakage current ileak is then the variation of this floating-gate charge qfg over time (3):

$$ i_{leak} = \frac{\mathrm{d}q_{fg}}{\mathrm{d}t} \,. \qquad (3) $$

the acquisition of the drift of the drain current over very long time ranges is performed while keeping the electrical contacts on the transistor (except on the floating gate) during the whole experiment. as already detailed in the introduction, the main difficulty is to keep these electrical contacts for months: the longer the measurement, the lower the extracted current. the improvement presented here consists in developing a simple but very sensitive test bench; the low-noise environment ensures robustness against external perturbations.

3. low noise environment

the experimental platform has been installed at the end of the lsbb gallery, where the perturbations are supposed to be the smallest.

3.1. proposed test bench

the test bench is composed of a classical probe station with 5 manipulators and needles. these manipulators are directly screwed onto the probe station instead of being fixed by vacuum, because the air pump would create undesired vibrations. the needles are connected using standard coaxial cables to the power supply and the multimeter. an agilent e3631a triple-output dc power supply has been chosen because of its very good stability over time (0.1 % + 5 mv after 12 months) [13]. its cost is relatively limited and its use is widespread, making the development of the remote-controlling program easier. concerning the drain current acquisition, the measured level is around 200 μa and a good resolution is needed to measure very low variations of this drain current. the tektronix dmm4050 digital multimeter, with 6.5 digits and 100 pa resolution, has been chosen [14].
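a minimal sketch of the extraction chain of figure 3, with made-up preliminary characteristics: the measured ids(t) is mapped back to vfg(t) through the inverse of the preliminary ids(vfg) curve, qfg follows from (2), and ileak from (3). the interpolated curves below stand in for the real measured characteristics of figures 4 and 5.

```python
import numpy as np

# preliminary characteristics (made-up, monotonic): ids(vfg) and ctun(vfg)
vfg_grid = np.linspace(0.0, 7.0, 71)
ids_grid = 1e-6 * vfg_grid ** 2             # a, transistor transfer curve
ctun_grid = np.full_like(vfg_grid, 2e-12)   # f, ~constant tunnel capacitance

def vfg_from_ids(ids):
    """invert the preliminary ids(vfg) curve (step: ids -> vfg)."""
    return np.interp(ids, ids_grid, vfg_grid)

def qfg_from_vfg(vfg):
    """equation (2): integrate ctun(v) from 0 to vfg (trapezoidal rule)."""
    v = np.linspace(0.0, vfg, 200)
    c = np.interp(v, vfg_grid, ctun_grid)
    return float(np.sum((c[1:] + c[:-1]) * np.diff(v) / 2.0))

# made-up acquisition: drain current slowly drifting over ten days
t = np.linspace(0.0, 10 * 86400.0, 500)       # s
ids_t = 49e-6 * np.exp(-t / (40 * 86400.0))   # a

vfg_t = vfg_from_ids(ids_t)
qfg_t = np.array([qfg_from_vfg(v) for v in vfg_t])
ileak_t = np.gradient(qfg_t, t)               # equation (3)
print(f"extracted ileak at day 5: {ileak_t[len(t) // 2]:.2e} a")
```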
its cost is also relatively limited and it is easily programmable with any classical remote-controlling software. the probe station is placed on a wood-and-iron workbench with an anti-vibration mat to absorb potential external perturbations. the power supply and the multimeter are placed on a separate support, also to reduce parasitic vibrations. the global test bench is shown in figure 6. to avoid as far as possible any movement around the test bench, a fully remote-controlled experiment is required.

figure 3. the five key points of the floating-gate technique methodology to extract the leakage current.

figure 4. preliminary ids(vfg) measurement on the transistor from the floating-gate test structure.

figure 5. preliminary ctun(vfg) measurement on the capacitor from the floating-gate test structure.

figure 6. picture of the proposed test bench, embedded in the low-noise environment.

3.2. remote controlling the experiment

labview from national instruments has been chosen as the remote-controlling software; it enables the use of the gpib interface available on the devices to control them. a first program has been developed to perform the initial ids(vfg) characteristics presented in figure 4. the floating gate of the transistor (and the capacitor) is then charged at the initial bias. the floating-gate needle is then lifted to let the capacitor slowly discharge through the dielectric. the drain current acquisition (three current measurements, repeated every minute), controlled by a second program, is performed; a python sketch of an equivalent acquisition loop is given further below. data are stored in a text file and saved regularly to be exported to an external computer located outside the low-noise environment.

4. preliminary measurement results

4.1. raw measurements

the first raw temporal acquisition of the drain current is shown in figure 7. noise currents appear randomly on different days. the relative rms value is around 5 % of the mean drain current value (around 225 μa). it seems that an external electrical perturbation is responsible for this. at this time we are not able to clearly explain the origin of this noise, but the leads to explore are linked to the hardware used in the experimental setup (pc, usb/gpib adaptor, ...). it is probably an electrical phenomenon because, during this first acquisition, there was a significant earthquake in barcelonnette (a small town in the "alpes de haute-provence") located 50 km from the lsbb. this earthquake occurred on 07/04/2014 at 21h17 (local time), with a 5.1 magnitude on the richter scale [15]. all the seismographs in the lsbb detected the earthquake, including the one nearest to our platform (located a few meters away), but the measurements were not perturbed, as can be seen in the zoom of the measurements presented in figure 8. that is to say, the mechanical contacts between the needle points and the electrical pads of the tested device seem to be robust against high-frequency perturbations. the high-frequency noise observed in the measurements is visible because, when noise occurs, the three consecutive measurements taken within the same minute are different. that is to say, the current noise is at a higher frequency than the time resolution of the dmm4050 multimeter. to further explore the origin of this noise on the drain current acquisition, the spectrum of this noise current could be studied.
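as mentioned in section 3.2, here is a minimal sketch of an equivalent acquisition loop using python and pyvisa instead of labview. the gpib addresses and the scpi command strings are assumptions to be checked against the e3631a and dmm4050 programming manuals, not verified settings from the actual bench.

```python
import time
import pyvisa

rm = pyvisa.ResourceManager()
psu = rm.open_resource("GPIB0::5::INSTR")   # agilent e3631a (address assumed)
dmm = rm.open_resource("GPIB0::22::INSTR")  # tektronix dmm4050 (address assumed)

psu.write("APPL P6V, 0.1, 0.1")  # drain-source bias vds = 0.1 v (command assumed)
psu.write("OUTP ON")
dmm.write("CONF:CURR:DC")        # dc current measurement (command assumed)

with open("drain_current.txt", "a") as log:
    # two weeks of one-minute acquisition cycles, three readings each
    for _ in range(14 * 24 * 60):
        readings = [float(dmm.query("READ?")) for _ in range(3)]
        log.write(f"{time.time()} " + " ".join(f"{r:.6e}" for r in readings) + "\n")
        log.flush()
        time.sleep(60)
```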
nevertheless, this noise current has been removed by increasing the number of successive acquisitions while drastically reducing the number of measurements over time (six successive acquisitions every hour instead of the previous three successive acquisitions every minute) and by using a higher integration time (2 seconds instead of some tens of milliseconds previously). the improvement can be observed in figure 9.

4.2. estimation of the global sensitivity of the measurement protocol

figure 10 presents the measured drain currents (ids) on a discharged capacitor, for around 400 measurements, each separated by 20 minutes. three drain polarizations (vds = 0.1 v, vds = 0.3 v and vds = 0.5 v) have been investigated to detect a possible sensitivity to the drain polarization. the statistical means and standard deviations of the drain current extracted from figure 10 are reported in table 1. these results are coherent with the ones obtained when the dmm4050 multimeter is not connected to any device (empty measurements) and with the ones given by the manufacturer [14]. the lowest significant variation of the current can be estimated at around 1 na, while the lowest significant measurable current can be evaluated at around 8 na. for the subsequent extraction of the leakage current, those values are not critical. however, it will be important to take them into account at the end of the retention time, for very low values of ids: indeed, the derivation process of the charge is impacted by the precision of the measurements.

figure 7. raw drain current acquisition. each vertical line denotes a change of day (midnight), from 03/04/2014 to 16/04/2014. the initial gate voltage of the capacitor is 7 v here and the drain-source bias is vds = 0.1 v.

figure 8. raw drain current acquisition during the earthquake day 07/04/2014 (same legend as figure 7).

figure 9. improved raw current measurements, using the new acquisition protocol.

figure 10. statistical estimation of the global current sensitivity.

4.3. low leakage current extraction

from the non-noisy measurements presented in figure 9 and the preliminary characteristics, the total electrical charge in the floating gate is extracted according to the fgt methodology (figure 3). the numerical derivative (or a local fitting) gives the leakage current, assuming that the numerical variations taken into account are greater than the standard deviations given in table 1; this is clearly the case for the current levels involved in this section. it is then possible to plot this leakage current versus the floating-gate voltage, as presented in figure 11. we first notice in figure 11 the continuity between the two methods (direct measurements and fgt extraction) at high current levels, around 6.5 v. for low levels, the best result obtained with a classical agilent hp4156 (with a long integration time and force/sense cables) is around 10^-15 a. however, using the fgt, two additional decades have been reached in less than two weeks, with a level around 10^-17 a. to our knowledge, keeping electrical contacts over such a long time in a classical laboratory has required very heavy equipment, with extracted current levels above 10^-17 a after a maximum of 45 days [11]. moreover, the resolution obtained for the leakage current can be statistically estimated. a first evaluation is done thanks to the three successive acquisitions performed during the experiment.
considering these three measurements as three independent acquisitions, the difference between the maximum and minimum values of these three independent extractions of the leakage current is plotted in figure 12. we notice that the resolution of the extraction improves drastically during the experiment: for relatively high extracted current levels the resolution is around 10^-14 a, but when extracting lower current levels the resolution decreases down to around 10^-19 a. a resolution that improves with time is one of the main known advantages of the floating-gate technique. this is a promising preliminary result, awaiting better results from further measurements, which can be performed over months in the proposed specific environment.

5. conclusion and perspectives

following the very classical protocol of the floating-gate technique, the feasibility of an approach with a very simple test bench, embedded in a very low-noise environment, has been proven. in less than two weeks, a leakage current lower than the minimum directly measurable with any classical tool has been extracted. the first improvement made to the proposed experimental platform has been to find the causes of the electrical noise and to reduce it by modifying the acquisition methodology. the natural perspective of this work is the continuation of the drain current acquisition over a long time range. it also requires a reliable and reproducible process to extract the leakage current from the charge-versus-time evolution. this experimental work should also be completed by a theoretical and numerical approach that would allow modeling the loss of charges through the oxide. the developed fgt can also be reproduced on oxides previously damaged under conditions close to those used in nvm products, to study the reliability of such memory cells.

table 1. statistical mean and standard deviation (std) of the measurements presented in figure 10.

vds (v)     0.1    0.3    0.5
mean (na)   6.54   7.34   7.62
std (na)    1.1    0.97   0.95

figure 11. comparison between direct measurement and floating-gate technique extraction.

figure 12. evaluation of the floating-gate technique resolution using three independent acquisitions during a single experiment.

references

[1] b. fishbein, d. krakauer, b. doyle, "measurement of very low tunneling current density in sio2 using the floating-gate technique", ieee electron device letters, vol. 12, no. 12, 1991, pp. 713-715.
[2] f. h. gaensslen, j. m. aitken, "sensitive technique for measuring small mos gate currents", ieee electron device letters, vol. 1, no. 11, 1980, pp. 231-233.
[3] agilent 4156c precision semiconductor parameter analyzer user's guide, vol. 2, 6th edition, march 2008.
[4] agilent b1500a semiconductor device analyzer data sheet, october 2013.
[5] w. brown, j. brewer, "nonvolatile semiconductor memory technology: a comprehensive guide to understanding and using nvsm devices", ieee press, 1998.
[6] p. canet, r. bouchakour, n. harabech, p. boivin, j.-m. mirabel, "eeprom programming study - time and degradation aspects", proc. of iscas, 2001, pp. 846-849.
[7] r. laffont, r. bouchakour, o. pizzuto, j.-m. mirabel, "a 0.18 um flash source side erasing improvement", proc. of nvmts, 2004, pp. 105-109.
[8] p. canet, r. bouchakour, n. harabech, p. boivin, j.-m. mirabel, c. plossu, "study of signal programming to improve eeprom cells reliability", proc. of mwscas, 2000, pp. 1144-1147.
[9] p. canet, r. bouchakour, j. razafindramora, f. lalande, j.-m. mirabel, "very fast eeprom erasing study", proc. of esscirc, 2002, pp. 683-686.
[10] p. chiquet, "etude et modélisation des courants tunnel: application aux mémoires non-volatiles", phd thesis, aix-marseille université (france), 2012.
[11] s. burignat, "mécanismes de transport, courants de fuite ultra-faibles et rétention dans les mémoires non volatiles à grille flottante", phd thesis, insa lyon (france), 2004.
[12] http://www.lsbb.eu/index.php/en/
[13] agilent e3631a triple output dc power supply service guide, 12th edition, agilent technologies, april 21, 2014.
[14] tektronix dmm4050 digital multimeter user's manual, 077-0361-00, october 05, 2009.
[15] http://www.lsbb.eu/index.php/fr/ct-menu-item-243/ct-menu-item-244/ct-menu-item-248

acta imeko
issn: 2221-870x
december 2017, volume 6, number 4, 95-99

assessment of mass comparator sensitivity

christian müller-schöll
mettler-toledo gmbh, im langacher 44, 8606 greifensee, switzerland

section: research paper
keywords: mass calibration; comparator sensitivity; uncertainty; mass comparator; oiml r111
citation: christian müller-schöll, assessment of mass comparator sensitivity, acta imeko, vol. 6, no. 4, article 15, december 2017, identifier: imeko-acta-06 (2017)-04-15
editor: paolo carbone, university of perugia, italy
received january 29, 2016; in final form november 18, 2017; published december 2017
copyright: © 2017 imeko. this is an open-access article distributed under the terms of the creative commons attribution 3.0 license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
funding: mettler-toledo gmbh, greifensee, switzerland
corresponding author: christian müller-schöll, e-mail: christian.mueller-schoell@mt.com

abstract
for the calibration of weight pieces, the evaluation of comparator sensitivity and its associated uncertainty component is essential. yet, there is not much documented guidance on how to assess these values. this paper proposes a procedure for assessing, evaluating and optimizing mass comparator sensitivity.

1. introduction

when calibrating weight pieces, the difference between the unknown weight and the reference weight is calculated from the indications of the mass comparator display (or the data interface signal). this difference is then used to calculate the conventional mass of the unknown weight. for common industrial and laboratory applications, weight pieces are calibrated in conventional mass [1], and electronic balances and mass comparators also indicate in this conventional unit. the term sensitivity is defined in [2] as the "quotient of the change in an indication of a measuring system and the corresponding change in a value of a quantity being measured" [2, 4.12]. according to this definition, the general equation for the sensitivity of a mass comparator is

$$ s = \frac{\Delta i}{\Delta m_c} \,, \qquad (1) $$

with the change in indication $\Delta i$ and the change in conventional mass $\Delta m_c$ of a weight (and not the reciprocal value, as is sometimes used, e.g. in [3] and [7]). the general assumption is that, for electronic balances and mass comparators, the difference $\Delta i$ of the indications is equal to the difference in conventional mass $\Delta m_c$; this corresponds to a sensitivity equal to 1. however, there are sources suggesting that this is not always the case. in earlier days, it was apparently usual that comparators with an optical scale did not necessarily indicate correct mass differences. therefore, [3, sop4] and [1, c.4.1.2], for example, describe weighing cycles for mass calibrations including a sensitivity weight (s), e.g. of the form "a – b – b+(s) – a+(s)".
with these cycles, the sensitivity is determined in every single weighing cycle to convert the scale divisions into a mass difference. in section 2 we review the existing literature on this subject. sections 3 and 4 deal with uncertainty questions. in sections 5 through 7 we derive a procedure for assessing comparator sensitivity based on optimized uncertainty. section 8 presents a verification through experimental data. the conclusions finally summarize the findings.

2. review of the literature

2.1. testing procedures for sensitivity

according to some sources in the literature, it is not necessary to determine sensitivity in every weighing cycle for modern electronic comparators:

• "a sensitivity weight is not required if the electronic mass comparator that is used has been tested (with supporting data available) to determine that the balance has sufficient accuracy…" [3, gmp 14]. the uncertainty of this assumption shall be included as an uncorrected systematic error in the uncertainty budget, and the acceptable limit is "2 %" [3, sop8].
in some commercially available calibration software (e.g. scalesnet, mclink), a numerical value of sensitivity is determined in a separate test for each comparator. but for its uncertainty, the uncertainty value given in [4] is frequently used without alteration. where the mass difference between the calibrated weight and the reference weight becomes large, this might have significant influence on the final combined calibration uncertainty of the weight piece under test. one result of this paper is to answer the question whether this worst case estimation of a sensitivity uncertainty of 5 𝐸 − 04 is justified. 2.2. test weights for sensitivity testing there seems to be common understanding that sensitivity of mass comparators is tested with a “small” weight: reference [3, sop 2] mentions a “small weight”, [3, sop 34] mentions a maximum of 0.5 % of balance capacity, while [3, gmp 10] mentions a maximum of 1 % of balance capacity. it remains unclear if "balance capacity" means the full load value or the electrical weighing range in this context. no standard method for the selection of the sensitivity test weight appears to be available. 3. assessment of sensitivity the vim [2] defines adjustment of a measuring system as “set of operations carried out on a measuring system so that it provides prescribed indications corresponding to given values of a quantity to be measured”. today’s mass comparators with electromagnetic force compensation are equipped with provisions for self-adjustment. these consist of one (or more) internal weight piece(s) and an algorithm which can be time and/or temperature controlled or manually triggered. furthermore, external mass calibration software allows for an additional “adjustment” of the reading by applying a sensitivity factor in the processing of the value that was read from the comparator output. in this light, we consider the calibration software a part of the “measuring system” together with the comparator (see figure 1), and the application of a sensitivity value is considered an adjustment of this system. the sensitivity factor applied by the software is gained from the following test procedure: the comparator is loaded with a pre-load (between zero and nominal load) which brings the comparator into a typical working range and working state. then a test is carried out using an “abba” cycle which starts with the mentioned pre-load (“a”), then a calibrated sensitivity test weight with conventional mass ∆𝑚𝑐,𝑆 is added (“b”), this step is repeated (“b”) and finally the test weight is removed (“a”). from the calibration value of the test weight and the calculated, buoyancy-corrected difference of comparator indication during the test ∆𝐼𝑆, the sensitivity value 𝑆 is calculated according to (1). (subscripts upper case "𝑆" are used in this paper to indicate the context of a sensitivity test). once this sensitivity value is determined, it must be used for any further processing of a reading of that comparator. we assume here that the self-adjustment procedure of the comparator is run about once a day, so that any climate-induced changes (air density, temperature) and their effect on the comparator sensitivity are negligible, this means that the sensitivity of the comparator stays “the same” over time (but will not necessarily be 1 exactly). 4. uncertainty of sensitivity 4.1. 
4. uncertainty of sensitivity

4.1. general considerations

given the procedure above and (1) for the calculation of the sensitivity, we find the following sources of uncertainty when determining S from a sensitivity test:

• readability of the comparator, u_res
• repeatability of the comparator, u_repeat
• calibration uncertainty of the sensitivity test weight, u_weight

the uncertainty in S can thus easily be derived as

$$ u_S^2 = \left( \frac{1}{\Delta m_{c,S}} \, u_{res} \right)^2 + \left( \frac{1}{\Delta m_{c,S}} \, u_{repeat} \right)^2 + \left( \frac{\Delta i_S}{\Delta m_{c,S}^2} \, u_{weight} \right)^2 \,, \qquad (2) $$

with the components $u_{res} = \frac{d}{2} \cdot \frac{1}{\sqrt{3}}$ for an abba cycle (derived from the calculation of an abba difference), where d is the comparator indication readability; $u_{repeat} = \frac{s}{\sqrt{n}}$, with the repeatability standard deviation s of the comparator and the number of cycles n of the sensitivity test; and $u_{weight} = \frac{u_{cal}}{k}$, with the coverage factor k taken from the calibration certificate of the sensitivity test weight. for simplification of the uncertainty calculation (only), we set ΔI_S = Δm_c, so that (2) becomes, for a sensitivity test,

$$ u_S^2 = \left( \frac{1}{\Delta m_{c,S}} \cdot \frac{d}{2\sqrt{3}} \right)^2 + \left( \frac{1}{\Delta m_{c,S}} \cdot \frac{s}{\sqrt{n_S}} \right)^2 + \left( \frac{1}{\Delta m_{c,S}} \cdot \frac{u_{cal}}{2} \right)^2 \,. \qquad (3) $$

please note that, since S is a relative number, its uncertainty u_S is also a relative number, while its uncertainty contributors are in mass units.

4.2. uncertainty of conventional mass - connection to oiml r111

oiml r111 [1] requires an uncertainty contribution u_s (note the lower-case "s" in the subscript) to be estimated that, as a component of the balance uncertainty u_ba, accounts for the uncertainty of the sensitivity in the calculation of conventional mass. in our case, where a factor S is used to calculate the conventional mass difference according to (1), this uncertainty component of conventional mass is (with sensitivities S close to 1)

$$ u_s^2 = \left( \frac{\Delta i}{S^2} \, u_S \right)^2 \approx \left( \Delta i \times u_S \right)^2 \,. \qquad (4) $$

note that u_s (with lower-case subscript s) is taken from [1], while the last term (with upper-case S) is our concept.

5. initial choice of a sensitivity test weight

as mentioned above, there is only little literature available on mass comparator sensitivity. the publication of r. davis [6] appears to have been written in the light of comparators with optical scales and a sensitivity assessment in every mass calibration cycle, and therefore provides only a rough direction for today's questions. lee shih mean's paper [7] focuses on a very special problem (the calibration of stainless steel against pt-ir standards, evaluating true mass) and thus has its own reasons for the choice of the weight size. all sources, however, agree on the general idea that the weight should be "small", and on the magnitudes of the weighing differences that will be obtained. the reason is probably that this procedure tries to approximate an ideal differential sensitivity $\partial i / \partial m_c$ by the finite ratio $\Delta i_S / \Delta m_{c,S}$, thus avoiding any influence of non-linearities in the characteristic curve of the comparator. as a first practical assumption, we chose calibrated test weights with a nominal value of about 100 times the readability d of the comparator, but not smaller than the smallest oiml weight, which is 1 mg. the weights are made of stainless steel to avoid any complications arising from buoyancy effects. as test objects we chose the manual comparators in the mass calibration laboratory of mettler toledo, accredited as scs 0032 (table 1).
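a minimal sketch of equation (3), evaluating the relative sensitivity uncertainty u_S from the readability d, the repeatability s, the number of cycles n_S and the test weight's calibration uncertainty (k = 2). the numbers are illustrative, loosely based on the xp2004s column of table 1; the test weight uncertainty is an assumption for the example.

```python
import math

def sensitivity_uncertainty(d, s, n_s, u_cal_k2, delta_m_cs):
    """relative standard uncertainty u_S of the sensitivity, equation (3).
    all inputs in the same mass unit (here mg)."""
    u_res = d / (2.0 * math.sqrt(3.0))  # readability term, abba cycle
    u_repeat = s / math.sqrt(n_s)       # repeatability term
    u_weight = u_cal_k2 / 2.0           # test weight calibration (k = 2)
    return math.sqrt((u_res / delta_m_cs) ** 2
                     + (u_repeat / delta_m_cs) ** 2
                     + (u_weight / delta_m_cs) ** 2)

u = sensitivity_uncertainty(d=0.1, s=0.1, n_s=1, u_cal_k2=0.001, delta_m_cs=10.0)
print(f"u_S = {u:.1e}")  # order of 1e-02, within the range reported in figure 2
```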
we further assume, for simplicity, that the sensitivity test weights were calibrated with an uncertainty (k = 2) of one third of the mpe of class e1 according to [1]. thirdly, we used datasheet repeatabilities for our calculations and n_S = 1 repetition for the sensitivity test.

table 1. mass comparators, datasheet repeatabilities and readabilities, and test weight nominal values of the "initial choice".

comparator    xp64003l  xp10003s  xp5003s  xp2004s  ax1005  ax106   xp6u
d (mg)        5         1         1        0.1      0.01    0.001   0.0001
s (mg)        8         1         0.8      0.1      0.02    0.003   0.00035
m_c,S (mg)    500       100       100      10       1       1       1

for each mass comparator used in our laboratory, we calculated the uncertainty of the sensitivity as given in (3). this revealed some unexpected results. figure 2 shows the sensitivity uncertainties (k = 1) for each comparator type listed in table 1.

figure 2. sensitivity uncertainties based on the initial choice of the sensitivity test weight (k = 1).

we note the following important findings:

• although we applied the same basic idea for the choice of the sensitivity test weight, the differences in the sensitivity uncertainties between the comparators span about 3 orders of magnitude, ranging from 6 e-04 to 2 e-02.

• the maximum uncertainty value observed (2 e-02) was significantly higher than the simplified value of 5 e-04 from the literature, although our procedure uses a correction.

we conclude that a general assumption of an uncertainty of comparator sensitivity of 5 e-04 (as can be found in the literature) is not justified.

6. variation and optimization of parameters

6.1. optimization to reach an uncertainty of 5 e-04

further investigation of the uncertainty budget of the sensitivity revealed that in most cases (and especially in the cases of the high values identified above) the dominant contributor to the uncertainty budget was the influence of repeatability (the second component in (3)). the equation suggests that this contributor could be reduced by increasing the number of weighing cycles used in the adjustment; however, this has little effect, since it is not practical to use more than about 5 abba cycles. (for this publication, we will continue to use datasheet repeatabilities. however, it is obvious that using individually determined repeatabilities, which are usually smaller, will have a significant, improving impact on the uncertainty.) re-visiting equation (3), we find that the nominal value of the chosen sensitivity test weight influences all three uncertainty contributors. this opens the door to optimizing the sensitivity uncertainty by adjusting the number of cycles and the nominal values of the test weights. increasing the nominal weight value will lead to smaller sensitivity uncertainties. a massive increase would, however, violate the principle of "small" sensitivity test weights (as explained above in section 2.2), so we prefer to keep the nominal values small. additionally, we only use nominal values that are specified in [1]. with these restrictions, we iteratively increased the nominal values of the sensitivity test weights with the aim of reducing the uncertainties of all sensitivities to approximately 5 e-04 or less (the literature value). in order to keep the procedures easy to understand for all laboratory personnel, we fixed the number of weighing cycles for the sensitivity test at n_S = 3 abba cycles for all types of comparators. the new values are shown in figure 3.
please note the difference in y-axis scale compared to figure 2. these results were achieved using the sensitivity test weights for the comparator models shown in table 2. the nominal values of these weights are below the maximum value of 0.5 % of the balance capacity stipulated in [3], so this choice does not violate the concept of a "small" weight.

figure 3. sensitivity uncertainties with the optimized procedure (k = 1).

table 2. mass comparators and test weight nominal values for optimized sensitivity.

comparator   xp64003l  xp10003s  xp5003s  xp2004s  ax1005  ax106  xp6u
m_c (mg)     10000     2000      1000     200      50      5      1

6.2. further optimization to reach smaller uncertainties

by increasing the number of cycles and the nominal values of the weight pieces, it is possible to reach values for u_S of, e.g., 1 e-04. however, in an attempt to reach 1 e-05 (still using datasheet repeatabilities), the necessary test weights approach the "0.5 % of capacity" limit (see above) and are thus no longer considered "small".

6.3. variation of sensitivity test weight accuracy

except for microbalance comparators, the calibration uncertainty of the sensitivity test weight u_cal has little influence on the uncertainty in S. using weights calibrated to e2 quality instead of e1 weights is therefore possible without major disadvantage.

7. general cookbook procedure for assessing optimized comparator sensitivity and its uncertainty

the following procedure for assessing sensitivity uncertainty can be derived from the considerations above (a minimal sketch of step 2 follows this list):

1. set a maximum acceptable value for the sensitivity uncertainty (e.g. 5 e-04 or 1 e-05).
2. set a number of abba cycles for the sensitivity test, use (3) and iteratively increase the nominal weight value until the above condition is fulfilled for the comparator concerned.
3. execute the sensitivity test and apply the value found for S to all future readings.
4. use the value of u_s for the uncertainty estimation of mass calibrations according to [1].
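a minimal sketch of step 2 of the cookbook: starting from a small oiml nominal value, the nominal weight is increased until the u_S of equation (3) meets the target. the oiml nominal series, the balance capacity and the test weight uncertainty passed in directly (rather than derived from class mpe tables) are assumptions for the example.

```python
import math

# oiml r111 nominal value series in mg (1-2-5 decade steps)
NOMINALS_MG = [v * 10 ** e for e in range(0, 8) for v in (1, 2, 5)]

def u_sensitivity(d, s, n_s, u_cal_k2, m_nom):
    """relative sensitivity uncertainty per equation (3)."""
    terms = (d / (2 * math.sqrt(3)), s / math.sqrt(n_s), u_cal_k2 / 2)
    return math.sqrt(sum((t / m_nom) ** 2 for t in terms))

def pick_test_weight(d, s, n_s, u_cal_k2, target, capacity_mg):
    """cookbook step 2: smallest oiml nominal meeting the target,
    while staying below 0.5 % of the balance capacity ("small" weight)."""
    for m in NOMINALS_MG:
        if m > 0.005 * capacity_mg:
            break
        if u_sensitivity(d, s, n_s, u_cal_k2, m) <= target:
            return m
    return None  # no "small" weight reaches the target

# illustrative: xp2004s-like parameters, target 5e-04, capacity assumed 2.3 kg
m = pick_test_weight(d=0.1, s=0.1, n_s=3, u_cal_k2=0.001,
                     target=5e-4, capacity_mg=2.3e6)
print(f"chosen nominal: {m} mg")  # 200 mg, matching table 2
```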
8. Experimental verification

In order to verify the theoretical ideas outlined above, we have carried out several tests. One series of tests was done on an XP2004S comparator balance, with different weight sizes and numbers of test cycles. The results are shown in Figure 4: the diagram shows three series of sensitivity values and their calculated uncertainties. The first series was done with a 100 mg sensitivity test weight and one cycle, the second with a 500 mg weight and three cycles, and the third with a 1 g test weight and three cycles. The picture confirms the theoretical considerations:

• As expected, the variability of the values obtained with higher uncertainty is bigger: the larger the test weight and the number of cycles, the smaller the variability.

• Most data points are mutually consistent: an En test (En = |S_i - S_j| / sqrt(U(S_i)^2 + U(S_j)^2), with consistency indicated by En <= 1) shows that most data points are consistent with every other one; in particular, the points obtained with more than one cycle (i.e. n_S > 1) show excellent agreement with each other.

Figure 4. Three series of sensitivity test points together with calculated uncertainties (k = 2). Nominal values and number of cycles for each series: 100 mg (n = 1), 500 mg (n = 3), 1 g (n = 3).

9. Conclusions

The sensitivity of mass comparators and its associated uncertainty are both of considerable significance in the field of weight piece calibration. The literature provides little guidance either on procedures for assessing sensitivity and its uncertainty or on the selection of suitable weights for sensitivity testing. We have presented a procedure for assessing, and the mathematics for evaluating, sensitivity and its associated uncertainty. By means of iterative application, a weight for the sensitivity test can be selected such that relative sensitivity uncertainties u_S of e.g. 1 e-04 are achieved, with the prerequisite that a correction for sensitivity is applied.

References

[1] OIML R 111-1:2004, Weights of classes E1, E2, F1, F2, M1, M1-2, M2, M2-3 and M3 – Part 1: Metrological and technical requirements, www.oiml.org.
[2] JCGM 200:2012, International vocabulary of metrology – Basic and general concepts and associated terms (VIM), 3rd edition, www.bipm.org.
[3] NIST IR 6969, www.nist.gov.
[4] M. Kochsiek, M. Glaeser, Massebestimmung, VCH, Weinheim, 1997.
[5] M. Kochsiek, M. Glaeser (eds.), Comprehensive Mass Metrology, Wiley-VCH, Berlin, 2000.
[6] R. Davis, "Note on the choice of a sensitivity weight in precision weighing", Journal of Research of the National Bureau of Standards, vol. 92, no. 3, pp. 239-242, May-June 1987.
[7] S. M. Lee, R. Davis, L. K. Lim, "Calibration of a 1 kg stainless steel standard with respect to a 1 kg Pt-Ir prototype: a survey of corrections and their uncertainties", Asia-Pacific Symposium on Mass, Force and Torque (APMF 2007), 24-25 Oct. 2007.

Certain commercial equipment, instruments, or materials are identified in this paper in order to adequately describe the experimental procedure. Such identification does not imply recommendation or endorsement by the author, nor does it imply that the materials or equipment identified are the only or best available for the purpose.
Acta IMEKO, May 2014, Volume 3, Number 1, 23-31, www.imeko.org

Theoretical, physical and metrological problems of further development of measurement techniques and instrumentation in science and technology

D. Hofmann, Y. V. Tarbeyev
Friedrich Schiller University, 32 Ernst-Thälmann-Ring, 69 Jena, GDR

This is a reissue of a paper which appeared in Acta IMEKO 1979, Proceedings of the 8th IMEKO Congress of the International Measurement Confederation, "Measurement for progress in science and technology", 21-27.5.1979, Moscow, vol. 3, pp. 607-626.

The common interest of both metrologists and representatives of science and technology in the constant improvement of measurements, as well as general trends in the development of research in the fields of metrology, measurement technology and instrumentation at the present-day stage, are shown. Problems of general metrology and of improving systems of units and standards ("natural" standards in particular) are considered in detail.

Section: research paper
Keywords: metrology, measurement technology, measurement theory, standards, fundamental physical constants, uniformity of measurements
Citation: D. Hofmann, Y. V. Tarbeyev, Theoretical, physical and metrological problems of further development of measurement techniques and instrumentation in science and technology, Acta IMEKO, vol. 3, no. 1, article 7, May 2014, identifier: IMEKO-ACTA-03 (2014)-01-07
Editor: Luca Mari, Università Carlo Cattaneo
Received May 1st, 2014; in final form May 1st, 2014; published May 2014
Copyright: © 2014 IMEKO. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

1. Introduction

The conversion of science into a major productive force is inevitably accompanied by a growing interest in increasing measurement accuracy. Many objective studies of reality reduce to measurements, and at the same time the results of scientific research serve as a basis for the further development of this reality. Experimenters in science and practical engineers are interested in higher measurement accuracy, in the technical improvement of measuring instrumentation and in higher reliability of measurements. These aspects make up the object of metrology.
The task of meeting the urgent requirements of science and technology is the motive force for the progress of metrology and for the improvement of systems of units and standards, which in the long run should provide an optimum metrological assurance. This capacious concept includes not only the establishment of standards per se but also the development of the most efficient methods for disseminating the values of units, metrological supervision of the correctness of measurements, and the development of up-to-date measuring transducers and fast automatic measuring systems that require minimum metrological supervision [1].

Metrology is a science whose scope of knowledge is extremely polytechnical. It must embrace a wide range of electromagnetic, optical, mechanical, physical-chemical, nuclear and many other phenomena and incorporate an immense number of measuring problems covering measurement transformations, the estimation of measurement results and uncertainties, and the realization of the units of physical quantities and their dissemination. It is therefore not difficult to foresee that establishing metrology as an independent science enjoying full rights, with its own large domain of specially arranged knowledge, theorems and methods of investigation, is a complicated task. The practical requirements and the continuous growth of the role of measurements urgently demand that the progress of metrology be accelerated. The aim of the present paper is to show the directions and ways of the further development of metrology; as examples, the reader will become acquainted with some actual problems of modern metrology and measurement theory.

2. Common trends in the development of research and application in the field of metrology and measurement technology

In the 20th century, natural, technical and social sciences have become a decisive productive force. As a result of this sharp increase in the practical role of the sciences, more and more technical subjects go over from an empirically descriptive representation of observed facts to generalizing theoretical considerations and concepts.
In doing so, the peculiar features of modern progress in scientific research are:
– a sharp increase in the use of the systems approach to all problems,
– an increase in the scope and intensity of fundamental research within the total volume of research work.

The above statements are not only valid for metrology but are especially characteristic of the present stage of its development, which is marked by substantial qualitative and quantitative changes. The most prominent quantitative changes are:

1. An increase in the number of measurements and in the number of fields in which measurements are employed. Practically, there are no technical fields where measurements are not made; measurement expenses reach 50 to 60 % of total production expenses in the most advanced industries (Figure 1) [2].

Figure 1. Acquisition of measurement information in production.

2. Measurements provide indispensable technical key information for evaluating and controlling specialized cooperative production. They are carried out on a large scale and provide the decision criteria for the acceptance or rejection of mass-produced work.

3. The natural, technical and social sciences operate instrumentation of industrial dimensions, demanding universal knowledge as well as responsible conduct from the scientists (in the fields of nucleonics, microelectronics, space research, radiology, electron-scan microscopy, psychological behaviour research). Planned experiments are intended to prove the correctness of theoretical considerations.

4. Measurements have two main functions:
– increased observability of relevant technical, physical and chemical states or processes, far exceeding the natural limits set by the sense organs of the measuring persons;
– the objectivation of observations made by measuring persons, achieved by comparing measured quantities with known, settled standards.

Qualitatively new requirements concerning measurement theory are additionally caused by the following facts:

5. The accuracy of measurements required both in scientific experiments and in industry approaches more and more the level of standards measurements.

6. Due to the high sensitivity of measuring instruments, environmental conditions and external influencing factors produce ever-increasing effects on the measurement results.

7. Sophisticated measuring systems, whose calibration by traditional methods presents big problems, are becoming widespread.

8. The greater sophistication of measuring instrumentation, in respect of both operating principles and design features, makes still higher demands of its operators.

Thus, the qualitative and quantitative changes taking place today in measurements, measurement technology and instrumentation not only require the proper investigation of individual metrological problems but also inevitably result in an urgent need for the development of a general theory of metrology and, first of all, a general (unified) theory of measurements.

The development of a general measurement theory is complicated by the following state of affairs:

1. Measurement science has a long history and therefore a mighty tradition. In our days this tradition is still aimed at further differentiation.

2. Measuring has two different definitions:
– Measuring in the narrow sense, that is, the experimental comparison of a measured quantity with a known comparison quantity of the same kind (quality) which has been defined as the unit of measurement. The measurement of physical quantities and the use of ratio scales are typical procedures (Figure 2) [3].
– Measuring in the wide sense, that is, the coordination of numbers with states or procedures following some rule. Scaling technical and non-technical matters without special knowledge, using low-value scales, is a typical procedure (Figure 2) [3].

With measuring in the narrow sense, the subjective influence of the measuring person can be progressively reduced; this does not apply to measuring in the wide sense.
Inaccuracies and disadvantages are observed in measuring if it is not considered
– that each quantity is always a quantity of a certain quality,
– that the coordination of numbers with objects is also possible without knowing their quality,
– that formal operations with numbers are possible for any data whatsoever (measured or non-measured values), and
– that mathematical models cannot express by themselves whether they represent correct facts or arbitrary phenomena [3, p. 37].

3. Historically, we are only beginning to provide generalized knowledge in the fields of measurement engineering and measurement theory. The example of mass measurement proves this: as early as 2500 BC the equal-armed balance was graphically represented in a pyramid at Giza in ancient Egypt; about 300 BC Aristotle, Euclid and Archimedes provided the theory for this balance; electromechanical balances, strain-gauge balances and radiometric balances are developments of the 20th century [4, 5].

4. The special language of measurement engineering comprises a great number of terms, many of which have not yet been well defined; synonyms and polysemes are frequently used [5]. Classifications and teaching conceptions in measurement engineering take the following points of view:
– measured quantities (length, force, temperature measurement engineering),
– measuring devices (strain-gauge, oscillograph and digital measurement engineering),
– measurement principles (isotope, ultrasonic, infrared measurement engineering),
– field of measurement (production, process, laboratory measurement engineering).

3. Problems of the generalized measurement theory

Topical tasks of measurement theory are
1. to increase the uniformity of measurements,
2. to assure correct measurements,
3. to obtain valid models of measurements and their aims.

Working with models has methodical advantages. In contrast to the originals (measured signals, measuring systems and measurement processes), models are simpler, cheaper and more easily described, and they can be transformed in time and space, varied, limited and optimized. As a rule, modelling is associated with a certain simplification of the original. For example, a certain body is characterized by the following properties: length, surface roughness, volume, mass, density, temperature, colour, etc.; usually, only selected parameters of the object are considered exactly. Physical (homologous) models merely transform the scale of the original or simplify it; the measured quantities of original and model coincide. Mathematical (analogous) models describe similar (analogous) processes in different fields of knowledge; the measured quantities of original and model do not coincide. Models consist of a certain number of elements and relations (links) between them. A great advantage is that, in gaining knowledge and building up a theory, originals of different nature can produce similar images that allow a uniform treatment and interpretation. In any case, knowledge gained using a model in the image domain should be confirmed in practice for the actual object; otherwise its value is open to argument.

Measurement theory deals mainly with well-defined or poorly defined models of behaviour [6] expressed as algorithms. These models describe:
– measured signals and measuring systems in terms of mathematical algorithms,
– measurement processes in terms of heuristic algorithms.

Figure 2. Classification of scales and their properties, after [3].
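Figure 2 itself is not reproduced in this excerpt. Assuming its classification follows the familiar nominal/ordinal/interval/ratio hierarchy that the text alludes to (ratio scales for measuring in the narrow sense, "low-value" scales for measuring in the wide sense), a small sketch can make the distinction concrete; the names and examples below are illustrative, not taken from [3].

```python
# Hypothetical sketch of a scale classification in the spirit of figure 2.
# Measuring in the narrow sense corresponds to the ratio scale; the lower
# ("low-value") scales support only weaker empirical operations.
from dataclasses import dataclass

@dataclass(frozen=True)
class Scale:
    name: str
    permissible: tuple  # empirically meaningful relations/operations
    example: str

SCALES = (
    Scale("nominal",  ("equality",),                                  "material codes"),
    Scale("ordinal",  ("equality", "order"),                          "Mohs hardness"),
    Scale("interval", ("equality", "order", "differences"),           "temperature in deg C"),
    Scale("ratio",    ("equality", "order", "differences", "ratios"), "mass in kg"),
)

for s in SCALES:
    sense = "narrow sense" if "ratios" in s.permissible else "wide sense"
    print(f"{s.name:8s} ({sense:12s}): {', '.join(s.permissible):40s} e.g. {s.example}")
```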
Specific features of mathematical algorithms are determinism, finiteness, universality and a guarantee of solution [7]. Heuristic algorithms are not bound by such strict requirements; they guarantee only a solution probability 0